Add survival benchmark tuning spaces #47
base: main
Conversation
We have to add them to `Suggests` and use …
Maybe you can figure out what to do exactly. @bblodfon You have the same problem with …
I have to change the …
I don't really understand that. The example gives you all integer values between 1 and 10.
Ah, you want to do a grid search with …
You can also set constants as long as they overwrite the learner's default. I will adapt the printer for this.
Can you merge main? This could also be the cause.
Yes, I was planning to try it out today and see if CRAN likes that or not. I will report it here. Note that I also had tried to have my own …
You add the tuning space to the learner and then build the pipeline with it. Or does the distrcompose pipeop also have hyperparameters?
Yes, the … does. So, pretty much every composition pipeline has hyperparameters. Lukas was using this many times in his benchmark script, for example, see here.
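For reference, a minimal sketch of what such a composition pipeline could look like, assuming `mlr3proba`'s `distrcompositor` graph; the learner and the `estimator`/`form` arguments are illustrative, not taken from this PR:

```r
library(mlr3)
library(mlr3pipelines)
library(mlr3proba)

# Wrap a survival learner so its crank/lp predictions are composed into a
# distr prediction. The composition pipeop brings its own hyperparameters
# (the baseline estimator and the form used for composition).
graph_learner = as_learner(ppl(
  "distrcompositor",
  learner   = lrn("surv.coxph"),
  estimator = "kaplan",  # baseline distribution estimator
  form      = "ph"       # proportional hazards form
))

# The combined param_set exposes both the learner's and the compositor's
# hyperparameters, so a tuning space attaches to the wrapped learner.
graph_learner$param_set
```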
Sounds like adding … should be enough, as the r-universe repo also pulls in learner dependencies (CoxBoost, for example) 🤔
Done, but it doesn't seem to have helped yet?
Also yes, but the main thing was that I didn't quite understand from the docs whether …
I have not; I wanted to start really basic here with the bare minimum for our search spaces and then see how far we can go.
No, if the token is set on a `p_int`, the drawn values remain integer values. I was curious if you can do the grid search. It seems that it works like this:

```r
library(paradox)

param_set = ps(
  x_int = p_int(lower = 1, upper = 10)
)
param_set$set_values(x_int = to_tune(p_fct(c(1, 2, 3, 4, 5))))
search_space = param_set$search_space()
design = generate_design_random(search_space, n = 1)
x = design$transpose()
x
# [[1]]
# [[1]]$x_int
# [1] 3

class(x[[1]]$x_int)
# [1] "numeric"
```

I think this was a new feature Martin introduced with the …
Pulled upstream changes and tweaked … a) It no longer assumes that … I assume this is maybe something you'd prefer to do otherwise and not as a drive-by change in this PR, but well; I still haven't managed to get all tuning spaces to print properly. I also added …
Yes, the … So, what more is needed from the initial points 1-5?
I realized I somehow had 4 points and labelled them 1-5.
I added an example with the pipeline for XGB-AFT to also predict the … Also:
I started adding tuning spaces but there are some issues I don't know how to address yet:

1. New dependencies (`mlr3extralearners`, `CoxBoost`, `survivalmodels`, and `mlr3proba` for survival in general).
2. Spaces with an `extra_trafo` or `trafo` that I didn't know how to correctly translate using `tune_token`.
3. Another example is `p_int` spaces, where I didn't quite understand whether I can safely use e.g. `tune_token(1L, 10L)` to get a discrete search space.
4. … (`rd_info(lts("surv.penalized.sbb"))`) errors in various ways, e.g. …, which I assume to be related to 2. because this does not happen for "simple" tuning spaces that only consist of params I can easily specify with `tune_token`.

EDIT: Oh, and setting `surv.ranger` to `num.trees = 1000` etc. could be important. Now I'm not sure whether this is considered out of scope for the package, or if not, how I can add fixed parameters to the tuning spaces.
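A minimal sketch of what fixing a parameter alongside tuned ones could look like on the learner itself, assuming `mlr3extralearners` provides `lrn("surv.ranger")`; the tuned parameter and its range are illustrative, not a proposal for the actual tuning space:

```r
library(mlr3)
library(paradox)            # provides to_tune()
library(mlr3proba)          # survival task type
library(mlr3extralearners)  # provides lrn("surv.ranger")

learner = lrn("surv.ranger")
learner$param_set$set_values(
  num.trees  = 1000,            # fixed constant, overrides the default
  mtry.ratio = to_tune(0.1, 1)  # tuned hyperparameter
)

# Constants and tune tokens coexist in the values list; only the
# token-valued parameters end up in the search space.
learner$param_set$values
```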