issue with pmap: document? #146
This issue seems related to #12, but here is an example. The NLopt object is part of a type, to be reused multiple times. All is well, unless we want to distribute the work across processors. To replicate this:

using Distributed  # julia started with worker processes, e.g. julia -p 2
@everywhere using NLopt
@everywhere struct Foo
    data::Vector{Float64}
    guess::Vector{Float64}
    opt::NLopt.Opt
end
@everywhere function Foo(data, frel, fabs, maxeval)
    guess = zeros(length(data))  # starting values (similar(data) would leave them uninitialized)
    opt = NLopt.Opt(:LD_SLSQP, 1)  # one-dimensional problem
    NLopt.ftol_rel!(opt, frel)
    NLopt.ftol_abs!(opt, fabs)
    NLopt.maxeval!(opt, maxeval)
    return Foo(data, guess, opt)
end
@everywhere function optimizeone!(obj::Foo, i)
    function f(x, g)  # minimize (x[1] - data[i])^2
        d = x[1] - obj.data[i]
        objective = d * d
        if length(g) > 0
            g[1] = 2d
        end
        return objective
    end
    opt = obj.opt
    NLopt.min_objective!(opt, f)
    optf, optx, ret = NLopt.optimize(opt, [obj.guess[i]])
    obj.guess[i] = optx[1]
    return obj.guess[i]
end
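For example, a quick serial call on the master process (illustrative, assuming the definitions above):

foo = Foo([4.0, 1.0, -5.0], .01, .01, 5)
optimizeone!(foo, 1)  # should return ≈ 4.0, the minimizer of (x - data[1])^2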
So all is well here. The problem appears when we use multiple processors:

foo = Foo([4.0, 1.0, -5.0], .01, .01, 5) # or @everywhere, same error
bar = Distributed.pmap(1:3) do i
    optimizeone!(foo, i)
end
I get an error: pmap fails to serialize foo. As said in #12, I can serialize the optimization parameters and create the Opt object on the worker. But still, it would be a nice feature to have, and if not, it would be nice to have it documented.
The error is not related to NLopt. Here is the relevant section of the Julia manual: https://docs.julialang.org/en/v1/manual/parallel-computing/#code-availability-1 To avoid issues with serializing the Opt object, construct the Foo inside the function that each worker runs:

@everywhere function optimizeone!(i)
    foo = Foo([4.0, 1.0, -5.0], .01, .01, 5)
    # ...
    return foo.guess
end
pmap(optimizeone!, 1:3)
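Spelled out, a complete version of that sketch could look like this (illustrative; optimizeone_local is a hypothetical name, and it reuses the Foo and optimizeone!(obj::Foo, i) definitions from the example above):

@everywhere function optimizeone_local(i)  # hypothetical helper
    foo = Foo([4.0, 1.0, -5.0], .01, .01, 5)  # the Opt is created on the worker itself
    return optimizeone!(foo, i)  # nothing containing a C pointer is ever serialized
end
bar = pmap(optimizeone_local, 1:3)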
Oh yes, duh!! I am sorry, I read the error too fast. (It's an attempt to make a small example; in real life I'm running into this problem with things inside a module.) So when I start julia with …

You need to follow the second half of my previous comment.

Oh yes, that's what I already did in my package. Documentation about the problem would have been helpful (and would have saved me the time it took to diagnose the issue).
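For a package, the same pattern applies; a minimal sketch, assuming a hypothetical module MyPkg that wraps the definitions above:

# src/MyPkg.jl (hypothetical)
module MyPkg
using NLopt
export Foo, optimizeone!
# ... Foo, its constructor, and optimizeone! exactly as in the example above,
# without the @everywhere (the module is loaded on each worker instead) ...
end

# driver script
using Distributed
addprocs(2)
@everywhere using MyPkg  # code availability: load the module on every worker
bar = pmap(i -> optimizeone!(Foo([4.0, 1.0, -5.0], .01, .01, 5), i), 1:3)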
You can make a PR to improve the documentation by clicking on the pencil icon at the top of the README on the homepage.
Yes! Will do when I get the time.
Closing as a duplicate of #186.