I have used the provided syntax:

from sklearn.cluster import Birch
from joblib import parallel_backend
# (the joblibspark 'spark' backend is assumed to have been registered already)

clusterer = Birch()
with parallel_backend('spark', n_jobs=100):
    clusterer.fit(df.toPandas())
The Spark UI does not register it as a job and no executors get deployed. However, the example provided in the docs does get registered as a Spark job.
Error - "Unable to allocate 920. GiB for an array with shape (123506239506,) and data type float64"
This looks like it is running out of memory inside the estimator itself (a single ~920 GiB array allocation), so it should not be an issue with joblib-spark.
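If that is the case, one way to sidestep the giant allocation is to stream rows to the driver in batches and use Birch.partial_fit rather than collecting the whole table with df.toPandas() at once. This is only a rough sketch, not something tested on this dataset; the batch size, n_clusters, and the assumption that all columns are numeric are placeholders:

```python
import pandas as pd
from sklearn.cluster import Birch

clusterer = Birch(n_clusters=50)      # assumed target number of clusters
batch, batch_size = [], 100_000       # assumed batch size

# Iterate over the Spark DataFrame row by row on the driver and fit in chunks.
for row in df.toLocalIterator():
    batch.append(row.asDict())
    if len(batch) == batch_size:
        clusterer.partial_fit(pd.DataFrame(batch).values)
        batch = []

if batch:                             # flush the final partial batch
    clusterer.partial_fit(pd.DataFrame(batch).values)
```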