joblib-spark uses the nodes that are available when the cluster starts, but it never signals Databricks to wake up another node, even when the node it is using is saturated on all cores.
I found some comments from 2020 saying that auto-scaling with Spark is generally problematic compared to auto-scaling in Azure itself. So maybe auto-scaling is not supposed to work here.
Should it work?
In line 112, it resets n_jobs from the value returned by the Spark context object in line 128, which reports the number of currently active cores in the cluster.
It never asks for more workers at all.
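For context, the capping behavior described above has roughly this shape. This is a hedged sketch modeled on joblibspark's `SparkDistributedBackend`, not a verbatim copy; exact names and line numbers may differ between versions:

```python
# Hedged sketch of joblibspark's effective_n_jobs capping, paraphrased from
# the backend discussed above; details may vary across releases.
import warnings

class SparkDistributedBackend:
    def _get_max_num_concurrent_tasks(self):
        # Reports the task slots on currently live executors only --
        # i.e. active cores, not the cluster's configured maximum size.
        return self._spark.sparkContext._jsc.sc().maxNumConcurrentTasks()

    def effective_n_jobs(self, n_jobs):
        max_num_concurrent_tasks = self._get_max_num_concurrent_tasks()
        if n_jobs is None:
            n_jobs = 1
        elif n_jobs == -1:
            # n_jobs=-1 means "all available workers", but "available" is
            # whatever is running right now, so no scale-up demand is created.
            n_jobs = max_num_concurrent_tasks
        if n_jobs > max_num_concurrent_tasks:
            warnings.warn(
                f"User-specified n_jobs ({n_jobs}) exceeds the max number of "
                f"concurrently runnable tasks ({max_num_concurrent_tasks}) "
                "in the current cluster."
            )
        return n_jobs
```

Because the job count is derived from the slots that already exist, the backend never generates the pending-task backlog that would make the cluster grow.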
So Databricks never receives any signal or demand to autoscale. In short, joblibspark is not designed to autoscale Databricks clusters at all.
But we should push for this feature: most Spark workloads are moving to the cloud, and autoscaling is an integral part of cost savings and efficiency.
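To illustrate the contrast, Databricks autoscaling (like Spark dynamic allocation) reacts to a backlog of pending tasks. A plain Spark job that deliberately creates more partitions than the live slots produces that backlog; this is a hedged illustration of the mechanism, not joblibspark code, and the 4x oversubscription factor is arbitrary:

```python
# Hedged illustration: oversubscribing tasks beyond the current slots is
# what generates a scale-up signal -- exactly what joblibspark never does.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

current_slots = sc.defaultParallelism   # parallelism across live workers only
requested_tasks = current_slots * 4     # deliberately exceed current capacity

# More partitions than slots => pending tasks queue up => the cluster
# manager sees unmet demand and can add workers.
result = (
    sc.parallelize(range(requested_tasks), requested_tasks)
      .map(lambda i: i * i)
      .collect()
)
```

Supporting autoscaling in joblibspark would presumably mean letting the user-requested n_jobs drive the number of submitted tasks rather than clamping it to the slots that happen to be live.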