delete unnecessary output #3
Conversation
Sometimes the following message is written to stdout:

Since it is a library, I don't think writing a message, especially to stdout, is a good idea.
What do you think about removing those lines?
P.S. Thanks for a very useful library!

I agree that, in principle, a library should be quiet. However, numerical computations can sometimes break assumptions, and those cases warrant a message. Here are my thoughts so far.
First of all, do you mean that you compute an approximation of the gradient, or that you don't have a gradient function at all? (L-BFGS-B is a gradient-based method and requires a gradient to work.) Also, I should be clear that L-BFGS-B will work for nonconvex problems, but it will only find local minima in those cases. Managing the global optimization is up to you (random restarts, etc.). Updating the L-BFGS-B Fortran code to consistently support the …
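For illustration, here is a minimal sketch in Go of the random-restart idea mentioned above. The names `LocalMinimizer` and `randomRestarts` are hypothetical, not this library's API, and the toy minimizer in `main` merely stands in for a real call into L-BFGS-B:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// LocalMinimizer is any local, gradient-based routine (such as L-BFGS-B)
// that returns a local minimum and its objective value for a start point.
// The name and signature are hypothetical, for illustration only.
type LocalMinimizer func(x0 []float64) ([]float64, float64)

// randomRestarts runs the local optimizer from several random starting
// points drawn uniformly from [lo, hi]^dim and keeps the best result.
func randomRestarts(minimize LocalMinimizer, dim, restarts int, lo, hi float64) ([]float64, float64) {
	bestF := math.Inf(1)
	var bestX []float64
	for i := 0; i < restarts; i++ {
		x0 := make([]float64, dim)
		for j := range x0 {
			x0[j] = lo + rand.Float64()*(hi-lo)
		}
		if x, fx := minimize(x0); fx < bestF {
			bestX, bestF = x, fx
		}
	}
	return bestX, bestF
}

func main() {
	// Toy stand-in for the real optimizer: plain gradient descent on
	// f(x) = (x^2 - 1)^2, which has local minima at x = -1 and x = +1.
	toy := func(x0 []float64) ([]float64, float64) {
		x := x0[0]
		for i := 0; i < 200; i++ {
			x -= 0.05 * 4 * x * (x*x - 1) // exact gradient of the toy objective
		}
		return []float64{x}, (x*x - 1) * (x*x - 1)
	}
	x, fx := randomRestarts(toy, 1, 10, -2, 2)
	fmt.Println(x, fx) // close to [1] or [-1] with objective near 0
}
```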
What I mean is that I do not have an analytical form for the gradient, but I am able to compute the likelihood at any point, which allows me to compute the gradient numerically. Thanks, I can create a patch if needed; it is trivial. It seems that in all other places write is wrapped with …
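A minimal sketch of computing such a gradient from function values alone, by central differences; `numGrad` and the step-size choice are illustrative assumptions, not part of this library:

```go
package main

import (
	"fmt"
	"math"
)

// numGrad approximates the gradient of f at x by central differences,
// at the cost of two extra function evaluations per coordinate.
// This is an illustrative helper, not part of this library.
func numGrad(f func([]float64) float64, x []float64) []float64 {
	g := make([]float64, len(x))
	for i := range x {
		// Step scaled to the coordinate's magnitude; the cube root of
		// machine epsilon is a common choice for central differences.
		h := math.Cbrt(2.2e-16) * math.Max(math.Abs(x[i]), 1)
		xi := x[i]
		x[i] = xi + h
		fp := f(x)
		x[i] = xi - h
		fm := f(x)
		x[i] = xi // restore the original point
		g[i] = (fp - fm) / (2 * h)
	}
	return g
}

func main() {
	f := func(x []float64) float64 { return x[0]*x[0] + 3*x[1] }
	fmt.Println(numGrad(f, []float64{1, 2})) // approximately [2 3]
}
```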
Sorry for the delay on this and my lack of attention. I haven't had much time to work on this recently. I will take a look at your new pull request (#6) and at the new methods for passing pointers with cgo (#4) sometime later this week. Let's keep this discussion open until this issue is fixed, as this is where all the information is. Also, since you can compute the function value (likelihood) at any point but not the gradient, it seems like one of the non-gradient-based optimization algorithms (those that use finite differences and/or interpolation) would be worth looking into. Such an algorithm could be more efficient, as it maintains its own approximation of the gradient based on the function values. (I assume you're already aware of these algorithms, but I figured I should mention it in case you're not.)
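To make the suggestion concrete, here is a minimal sketch of one such function-value-only method, a simple compass (pattern) search; real derivative-free algorithms, like those in nlopt, are considerably more sophisticated:

```go
package main

import "fmt"

// compassSearch is a bare-bones derivative-free pattern search: it probes
// each coordinate direction with the current step size, keeps any move
// that improves the objective, and halves the step when nothing does.
func compassSearch(f func([]float64) float64, x []float64, step, tol float64) ([]float64, float64) {
	fx := f(x)
	for step > tol {
		improved := false
		for i := range x {
			for _, d := range []float64{step, -step} {
				xi := x[i]
				x[i] = xi + d
				if fy := f(x); fy < fx {
					fx = fy
					improved = true
					break // keep the improving move
				}
				x[i] = xi // revert the probe
			}
		}
		if !improved {
			step /= 2
		}
	}
	return x, fx
}

func main() {
	// Convex quadratic with minimum at (1, -2); no gradients needed.
	f := func(x []float64) float64 {
		return (x[0]-1)*(x[0]-1) + (x[1]+2)*(x[1]+2)
	}
	x, fx := compassSearch(f, []float64{0, 0}, 1.0, 1e-8)
	fmt.Println(x, fx) // close to [1 -2] with objective near 0
}
```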
Thanks, I'm looking forward to it. Regarding the optimization algorithm: I have already tried a couple of gradient-free methods (e.g. those implemented in nlopt). So far they perform much worse than L-BFGS-B. But if you have any particular suggestions, e.g. your favorite gradient-free method, it would be interesting to hear them.
If you have already tried those algorithms in nlopt, then I don't have anything additional to suggest.
Fixed in commit ab8187a.