From bde3065175e98e1172a0ff0a8a4acfc9dd277012 Mon Sep 17 00:00:00 2001
From: Marc Becker <33069354+be-marc@users.noreply.github.com>
Date: Thu, 5 Sep 2024 13:22:48 +0200
Subject: [PATCH] add RhpcBLASctl to faq (#170)

---
 mlr-org/faq.qmd | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mlr-org/faq.qmd b/mlr-org/faq.qmd
index c0ab1eb1..f79dfb07 100644
--- a/mlr-org/faq.qmd
+++ b/mlr-org/faq.qmd
@@ -42,6 +42,9 @@ When resampling or tuning a fast-fitting learner, it helps to chunk multiple res
 The option [`mlr3.exec_chunk_bins`](https://mlr3.mlr-org.com/reference/mlr3-package.html#package-options) determines the number of chunks to split the resampling iterations into.
 For example, when running a benchmark with 100 resampling iterations, `options("mlr3.exec_chunk_bins" = 4)` creates 4 computational jobs with 25 resampling iterations each.
 This reduces the parallelization overhead and speeds up the execution.
+The internal parallelization of the BLAS library can interfere with parallelization via the `future` package, since both compete for the same cores and over-utilize them.
+Install [`RhpcBLASctl`](https://cran.r-project.org/web/packages/RhpcBLASctl/index.html) so that mlr3 can turn off the parallelization of BLAS.
+Due to licensing issues, `RhpcBLASctl` can only be included as an optional dependency.

 ## Why is the parallelization of tuning slow? {#tuning-slow}
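
The behavior the patch describes can be sketched as follows: when `RhpcBLASctl` is installed, BLAS-level (and OpenMP) threading can be capped so it does not compete with `future`-based workers. This is an illustrative sketch of the idea, not the exact code mlr3 runs internally; `blas_set_num_threads()` and `omp_set_num_threads()` are the documented `RhpcBLASctl` functions.

```r
# Sketch: cap BLAS/OpenMP threading before starting future-based
# parallelization, so each worker does not spawn extra BLAS threads.
# Guarded with requireNamespace() because RhpcBLASctl is optional.
if (requireNamespace("RhpcBLASctl", quietly = TRUE)) {
  RhpcBLASctl::blas_set_num_threads(1)  # disable BLAS-level threading
  RhpcBLASctl::omp_set_num_threads(1)   # also cap OpenMP threads
}
```

With BLAS limited to one thread per process, the cores freed up can be used by the `future` workers themselves (e.g. `future::plan("multisession")`), avoiding over-subscription.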