
Unable to set "par_numthreads" #166

Closed · KSepetanc opened this issue Aug 4, 2020 · 7 comments · Fixed by #240

Comments


KSepetanc commented Aug 4, 2020

I am using the command:
pol=Model(optimizer_with_attributes(KNITRO.Optimizer,"par_numthreads"=>4))

I don't know why, but the solver states that it is running with:

datacheck:               0
hessian_no_f:            1
par_numthreads:          1

I have tried everything, but on this model the option simply refuses to be set.
The model has general nonlinear constraints (no integer variables).
On some other models, which are QCQP, the option is set successfully.

Is this a known issue? Can it be fixed? No error is reported from either the solver or Julia; the setting is simply ignored.
I am running Windows 10 x64, Julia 1.4.2 x64, KNITRO 12.2.2.
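
For illustration, a stripped-down model of the same kind (the nonlinear constraint below is made up, not my actual model) would be set up like this:

    using JuMP, KNITRO

    # Illustrative sketch only: a generic nonlinear constraint, with
    # par_numthreads requested when the optimizer is attached.
    pol = Model(optimizer_with_attributes(KNITRO.Optimizer, "par_numthreads" => 4))
    @variable(pol, x >= 0)
    @variable(pol, y >= 0)
    @NLconstraint(pol, x * exp(y) <= 10)   # makes the model general nonlinear
    @objective(pol, Min, x + y)
    optimize!(pol)                         # the log still reports par_numthreads: 1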


frapac commented Aug 4, 2020

Knitro currently has a limitation here: when the model has a generic nonlinear objective and/or nonlinear constraints, we cannot use multiple threads inside Knitro (apart from the sparse linear algebra). The reason is that we get a segfault when a Julia function is called from a C thread. This is a known issue in Julia (see e.g. #93).
Since Knitro 12.2, the number of threads is determined automatically inside Knitro (to optimize internal performance). So if we define a nonlinear model via the Julia interface, we could get a segfault even if the user did not specify a number of threads greater than 1. To avoid this, we force the number of threads to 1 whenever the model is nonlinear (if the model is linear/quadratic/conic, we can use more threads, since no Julia code is called):
https://github.com/jump-dev/KNITRO.jl/blob/master/src/kn_solve.jl#L13L16
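
For illustration only, the guard amounts to something like this (a paraphrased sketch, not the actual kn_solve.jl source; the helper name and setter usage here are assumptions):

    # Paraphrased sketch of the guard (not the actual source). If Julia
    # evaluation callbacks are registered, pin Knitro to a single thread
    # before solving, to avoid calling Julia from foreign C threads.
    function guarded_solve(kc, has_julia_callbacks::Bool)
        if has_julia_callbacks
            KNITRO.KN_set_param(kc, "par_numthreads", 1)
        end
        return KNITRO.KN_solve(kc)
    end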

However, I agree that this is done in an implicit manner. We should notify the user, at least with a warning.

@KSepetanc

I am having a tough time finding a proper Knitro link.
I have been using GAMS/Knitro until now, but I found out that GAMS presents all quadratic constraints to the solver as general nonlinear ones. Also, it still uses the old Knitro API. I don't know how it handles multithreading, but it does offer some features.
Now I find that the Julia link has multithreading problems, which are not handled nicely even in AMPL.
Do you have any suggestions for a good Knitro link?
What is the prospect of solving the multithreading issue in Julia? I noticed some new features on this topic in the Julia 1.5 release notes.
I agree that at least a warning should be issued.

If I may use this opportunity to ask an unrelated additional question:
I have a model which has a non-convex quadratic constraint of the type v = (x-y)^2.
If I set the lower bound of v to 0 (which follows from the constraint), the solver gets extremely close to the solution but fails to converge, taking essentially infinitesimally small iteration steps (the solver does not stop via its optimality estimate). This happens regardless of the provided variable start point (I have even tried providing the optimum itself as the start). Even honorbnds=0 did not help. However, once I remove the lower bound on v (or set it to e.g. -0.01), it converges in 11 iterations, taking only 0.2 s. I am heavily confused by this behavior. I thought that as much as possible should be modeled with bounds and less with constraints. What should be the rule of thumb here?


frapac commented Aug 5, 2020

Your feedback is interesting. I would say:

  • Knitro-Julia handles quadratic constraints well, as it uses the new API. The main issue is the problem we face with multithreading when Julia callbacks are called from Knitro.
  • Knitro-AMPL is maybe the most mature interface. There, you can call callbacks (which are ultimately implemented in C, using ASL) in a multithreaded environment. However, you cannot call them concurrently (par_concurrent_eval set to 0).
  • So in the end, the only way to use multithreading fully in Knitro is to define the optimization model directly in C/C++ using Knitro's C API.

I agree I should spend more time investigating this issue with Julia. Last time I checked (~3 months ago), the problem was not resolved yet. Once the following issue is resolved (JuliaLang/julia#17573), I think we will have more hope of getting decent parallel support in Julia.

Concerning your other question:

  • I guess you are using algorithm=1 here (interior point with direct linear algebra). Maybe other algorithms would lead to different results. You could also check whether the problem is caused by the presolve by setting presolve=0 (see the sketch after this list). If you have the detailed logs (both for v >= 0 and v unbounded), I could investigate this issue in more detail. A minimal working example would also help to figure out what is happening exactly.
  • In any case, you are right that it is generally better to specify bounds for the optimization variables.
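
For reference, the kind of sweep I have in mind would be something like this (option names as in the Knitro documentation; build_model() is a placeholder for your own model):

    using JuMP, KNITRO

    # Sweep over algorithms and presolve settings; build_model() is a
    # hypothetical helper returning the JuMP model under study.
    for alg in 0:5, presolve in (0, 1)
        m = build_model()
        set_optimizer(m, optimizer_with_attributes(
            KNITRO.Optimizer,
            "algorithm" => alg,
            "presolve"  => presolve,
            "outlev"    => 4,   # detailed log for comparison
        ))
        optimize!(m)
        @show alg, presolve, termination_status(m)
    end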

@KSepetanc

I have sent you a working example by email, albeit not a minimal example but a more or less full one.
It uses algorithm=1 (as set by default). This algorithm is a must, as I have found that only that one consistently computes correct constraint marginal values, which is the purpose of that solve (presolve).

@KSepetanc

I am also concerned about derivative evaluation performance (Jacobian and Hessian) for KNITRO in Julia (for NLconstraints).
The JuMP manual states that it is about 5 times slower than in AMPL. I noticed that these evaluations take most of the time in an NL solve, i.e. they are the bottleneck even in GAMS (which should have C-based evaluations).

Could you provide some feedback on whether there is an actual difference in this regard between the Julia link and the AMPL link? Even a factor of 2 is large, let alone 5. Do you have any side-by-side comparisons? I am working on a scientific paper comparing different ways to solve bi-level models, of which some are non-convex quadratic and some general nonlinear. I can easily justify running everything on a single core, but slow evaluations are a different problem that could unjustifiably and significantly worsen particular methods.


frapac commented Aug 14, 2020

Most of the difference between AMPL and Julia comes from the different AD backends used in these two modeling tools. JuMP's AD is quite good, but not on par with AMPL's (which benefits from almost thirty years of development). Notably, AMPL is very efficient at computing the Hessian matrix with AD.

JuMP's AD uses a fork of ForwardDiff, with additional coloring capabilities to compute the Hessian in reverse mode. See e.g. https://github.com/mlubin/ReverseDiffSparse.jl (which was merged into JuMP in 2018). More efficient approaches exist (such as the edge-pushing algorithm for computing the Hessian), but none of them have found their way into JuMP.

In my experience (mostly on OPF benchmarks), the evaluation of callbacks in JuMP is 5x to 7x slower than in AMPL. You could use AmplNLWriter.jl with Knitro or Ipopt to see what the difference is on your nonlinear model.
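
For example, something like the following routes a JuMP model through the ASL evaluator (a sketch using Ipopt via Ipopt_jll; to compare with Knitro instead, point AmplNLWriter at your knitroampl executable):

    using JuMP, AmplNLWriter, Ipopt_jll

    # Evaluate the nonlinear model through AMPL's ASL callbacks
    # instead of JuMP's AD, to compare callback evaluation times.
    model = Model(() -> AmplNLWriter.Optimizer(Ipopt_jll.amplexe))
    @variable(model, x >= 0.1)
    @NLobjective(model, Min, x * log(x))
    optimize!(model)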

Note that Knitro has a dedicated API for quadratic and linear expressions and thus does not need AD to evaluate these terms (the Hessian and gradient of quadratic expressions are computed once and for all in the C code). If you are dealing with QCQP problems, the performance should be on par with AMPL, as no Julia code is called during the solve (and you can use parallel multistart).
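
In JuMP terms, the distinction looks like this (a sketch: quadratic expressions entered via @constraint/@objective are passed to Knitro structurally, whereas @NL* expressions go through Julia callbacks):

    using JuMP, KNITRO

    m = Model(KNITRO.Optimizer)
    @variable(m, x[1:2])

    # Quadratic structure: handled by Knitro's dedicated quadratic API,
    # so no Julia callback is evaluated during the solve.
    @constraint(m, x[1]^2 + x[1] * x[2] <= 1.0)
    @objective(m, Min, x[1]^2 + x[2]^2)

    # By contrast, a general nonlinear constraint such as the one below
    # would be evaluated through Julia callbacks (JuMP's AD) at each iteration:
    # @NLconstraint(m, exp(x[1]) + x[2]^4 <= 2.0)

    optimize!(m)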

@KSepetanc

Have you had time to check the bound problem from the example?
I haven't found any reason why providing a bound should pose a problem.
