Unable to set "par_numthreads" #166
Knitro is currently having an issue: when a model has a generic nonlinear objective and/or nonlinear constraints, we cannot use multiple threads inside Knitro (apart from in the sparse linear algebra). The reason is that we get a segfault when a Julia function is called from a C thread. This is a known issue in Julia (see e.g. #93). However, I agree that this happens in an implicit manner. We should notify the user, at least with a warning.
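As a rough sketch of what such a warning could look like (purely illustrative, not part of KNITRO.jl; the function name and its arguments are made up for the example):

```julia
# Illustrative sketch only, not KNITRO.jl code: warn when multi-threading is
# requested for a model whose nonlinear callbacks run Julia code, since Knitro
# would invoke those callbacks from C threads, which currently segfaults.
function warn_if_unsafe_threading(numthreads::Integer, has_julia_callbacks::Bool)
    if numthreads > 1 && has_julia_callbacks
        @warn "par_numthreads > 1 is ignored: the model has nonlinear Julia " *
              "callbacks, and calling Julia from Knitro's C threads is not supported."
    end
    return nothing
end

warn_if_unsafe_threading(4, true)  # prints the warning
```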
I am having a tough time finding a proper Knitro link. If I may use this opportunity, I would also like to ask an unrelated additional question.
Your feedback is interesting. I would say:
I agree I should spend more time investigating this issue with Julia. Last time I checked (~3 months ago) the problem was not resolved yet. Once the following issue is resolved (JuliaLang/julia#17573), I think we will have more hope of getting decent parallel support in Julia. Concerning your other question:
I have sent you a working example by email; it is not a minimal example but a more or less full one.
I am also concerned with derivative evaluation performance (Jacobian and Hessian) for KNITRO in Julia (for NLConstraints). Could you provide some feedback on whether there is an actual difference in this regard between the Julia link and the AMPL link? Even a factor of 2 is large, let alone 5. Do you have some side-by-side comparisons? I am working on a scientific paper comparing different ways to solve bi-level models, some of which are non-convex quadratic while others are general nonlinear. I can easily justify running everything on a single core, but slow evaluations are a different problem that could unjustifiably and significantly worsen particular methods.
Most of the difference between AMPL and Julia comes from the different AD backends used in these two modeling tools. JuMP's AD is quite good, but not on par with AMPL's (which benefits from almost thirty years of development). Notably, AMPL is very efficient at computing the Hessian matrix with AD. JuMP's AD uses a fork of ForwardDiff, with additional coloring abilities to compute the Hessian in reverse mode; see e.g. https://github.com/mlubin/ReverseDiffSparse.jl (which was included into JuMP in 2018). More efficient approaches exist (such as the edge-pushing algorithm to compute the Hessian), but none of them have found their way into JuMP. In my experience (mostly on OPF benchmarks) the evaluation of callbacks in JuMP is 5x to 7x slower than in AMPL. You could use AmplNLWriter.jl with Knitro or Ipopt to see what the difference is on your nonlinear model.

Note that Knitro has a dedicated API for quadratic and linear expressions, and thus does not need AD to evaluate these terms (the Hessian and gradient of quadratic expressions are computed once and for all in the C code). If you are dealing with QCQP problems, the performance should be on par with AMPL, as we are not calling any Julia code during the resolution (and we could use parallel multistart).
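A minimal sketch of such a side-by-side run, assuming AmplNLWriter.jl is installed and that a `knitroampl` executable (the AMPL driver shipped with Knitro) is on the PATH; the Rosenbrock model is just a placeholder for your actual problem:

```julia
using JuMP, KNITRO, AmplNLWriter

# Build the same small nonlinear model twice: once through the native
# KNITRO.jl interface (callbacks evaluated by JuMP's AD), and once through
# the AMPL driver (callbacks evaluated by AMPL's AD).
function rosenbrock(model)
    @variable(model, x[1:2] >= 0.1)
    @NLobjective(model, Min, (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2)
    return model
end

native = rosenbrock(Model(KNITRO.Optimizer))
@time optimize!(native)

# "knitroampl" is assumed to be on the PATH; adjust to the full path otherwise.
ampl = rosenbrock(Model(() -> AmplNLWriter.Optimizer("knitroampl")))
@time optimize!(ampl)
```

Total solve time is only a coarse proxy for callback cost, since the two runs may not follow exactly the same iterations, but it gives a first idea of the AD overhead.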
Have you had time to check the bound problem from the example?
I am using the command:
pol = Model(optimizer_with_attributes(KNITRO.Optimizer, "par_numthreads" => 4))
I don't know why, but the solver states that it is running with:
I have tried everything, but on this model it simply refuses to set the option.
The model has general nonlinear constraints (no integer variables).
On some other models, which are QCQP, the option is successfully set.
Is this a known issue? Can it be fixed? There is no error reported from the solver or Julia; the setting is simply ignored.
I am running Windows 10 x64, Julia 1.4.2 x64, KNITRO 12.2.2.
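For reference, a self-contained sketch of this setup (assuming a local Knitro license; the tiny nonlinear model is only a stand-in for the actual one). `set_optimizer_attribute` is an equivalent way to pass the option after the model has been constructed:

```julia
using JuMP, KNITRO

# Pass the option at construction time, as in the report above.
pol = Model(optimizer_with_attributes(KNITRO.Optimizer, "par_numthreads" => 4))

# Equivalent: set (or override) the option after the model is constructed.
set_optimizer_attribute(pol, "par_numthreads", 4)

# Small general nonlinear model so the snippet runs end to end.
@variable(pol, x[1:2] >= 0.1)
@NLconstraint(pol, x[1] * exp(x[2]) <= 10)
@NLobjective(pol, Min, (1 - x[1])^2 + (x[2] - x[1]^2)^2)

optimize!(pol)
```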