
Arbitrary Precision Optimization #208

Open
ClementeSmarra opened this issue Jan 6, 2023 · 2 comments
ClementeSmarra commented Jan 6, 2023

Hi,
I have recently started to use this project and I am finding it very useful.
I am new to Julia, so my question might be a bit naive.
I am trying to perform a function minimization with BigFloat numbers. However, it seems that I cannot get it to work.
Is there a way to do that? As an example, I tried to do it with the Rosenbrock function.

```julia
function rosenbrock2d(x)
    return (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
end

res = bboptimize(rosenbrock2d; SearchRange = [(-BigFloat(5//3), BigFloat(5//3)), (-BigFloat(5//3), BigFloat(5//3))])
```

This works, but the result is a Float64.
If I do

```julia
function rosenbrock2d(x)
    return BigFloat((1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2)
end

res = bboptimize(rosenbrock2d; SearchRange = [(-BigFloat(5//3), BigFloat(5//3)), (-BigFloat(5//3), BigFloat(5//3))])
```

instead, I get the following error:

```
ArgumentError: The supplied fitness function does NOT return the expected fitness type Float64 when called with a potential solution (when called with [0.6116625671601585, -1.6325667883503008] it returned 402.83444589341110031455173157155513763427734375 of type BigFloat) so we cannot optimize it!

Stacktrace:
 [1] setup_problem(func::Function, parameters::ParamsDictChain)
   @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/I3lfp/src/bboptimize.jl:40
 [2] bbsetup(functionOrProblem::Function, parameters::Dict{Symbol, Any}; kwargs::Base.Pairs{Symbol, Vector{Tuple{BigFloat, BigFloat}}, Tuple{Symbol}, NamedTuple{(:SearchRange,), Tuple{Vector{Tuple{BigFloat, BigFloat}}}}})
   @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/I3lfp/src/bboptimize.jl:111
 [3] bboptimize(functionOrProblem::Function, parameters::Dict{Symbol, Any}; kwargs::Base.Pairs{Symbol, Vector{Tuple{BigFloat, BigFloat}}, Tuple{Symbol}, NamedTuple{(:SearchRange,), Tuple{Vector{Tuple{BigFloat, BigFloat}}}}})
   @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/I3lfp/src/bboptimize.jl:92
 [4] top-level scope
   @ In[62]:1
 [5] eval
   @ ./boot.jl:368 [inlined]
 [6] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
   @ Base ./loading.jl:1428
```

How can I solve this? Thanks a lot!

robertfeldt (Owner) commented Jan 14, 2023

Yes, sorry; even though this should in principle be possible, some parts of BBO currently assume directly that candidate solutions are vectors of Float64. These parts prevent BBO from working with BigFloat.

An update to allow any type for the candidates is being worked on, but it is not yet clear whether it will be part of BBO or a separate/new Julia lib. This is because the change is quite a big one design-wise and will affect many aspects of the lib. I still hope to be able to include it here, but I'm not 100% sure yet.

OTOH, are you sure you need arbitrary precision and cannot just "map" into such a space in your fitness function?

Something like:

```julia
const ResolutionDivisor = 1e3

function mytranslatingfitness(x::Vector{Float64})
    xbigf = BigFloat.(x) ./ ResolutionDivisor
    return myfitness(xbigf)
end
```

and then optimise on mytranslatingfitness instead of your original myfitness. Even if this doesn't take you all the way, it might still give you better results than mapping your original search space (which needs BigFloat precision) directly into Float64? Not sure, though; it might depend on your specific application.
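(Editor's note, not from the thread: a toy illustration of why promoting to BigFloat *before* dividing by the ResolutionDivisor matters. The division in BigFloat keeps low-order bits that a Float64 division would round away. No BlackBoxOptim is needed to see this.)

```julia
# Sketch: order of promotion vs. division (toy values, not part of BBO).
const ResolutionDivisor = 1e3
x = 0.1  # a Float64 candidate value, as the optimizer would supply

big_then_divide = BigFloat(x) / ResolutionDivisor   # rounds at BigFloat precision
divide_then_big = BigFloat(x / ResolutionDivisor)   # Float64 division rounds first

# The two generally differ in the low-order bits, because the Float64
# path already rounded the quotient to 53 bits before conversion.
big_then_divide == divide_then_big
```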

robertfeldt commented:
If your original fitness function really returns a BigFloat then you should also convert back. More complete example:

```julia
julia> function myfitness(x::Vector{BigFloat})
           x[1] + 2*x[2]^2
       end

julia> typeof(myfitness(BigFloat.([1.0, 2.0])))
BigFloat

julia> using BlackBoxOptim

julia> bboptimize(myfitness; SearchRange = [(-BigFloat(5//3),BigFloat(5//3)),(-BigFloat(5//3),BigFloat(5//3))])
ERROR: MethodError: no method matching myfitness(::Vector{Float64})
...

julia> const ResolutionDivisor = 1e3
1000.0

julia> function mytranslatingfitness(x::Vector{Float64})
           xbigf = BigFloat.(x)./ResolutionDivisor
           return Float64(myfitness(xbigf))
       end

julia> bboptimize(mytranslatingfitness; SearchRange = [(-BigFloat(5//3),BigFloat(5//3)),(-BigFloat(5//3),BigFloat(5//3))])
Starting optimization with optimizer DiffEvoOpt{FitPopulation{Float64}, RadiusLimitedSelector, BlackBoxOptim.AdaptiveDiffEvoRandBin{3}, RandomBound{ContinuousRectSearchSpace}}
0.00 secs, 0 evals, 0 steps

Optimization stopped after 10001 steps and 0.05 seconds
...
Best candidate found: [-1.66667, 4.63405e-9]

Fitness: -0.001666667
```
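(Editor's note, not from the thread: since bboptimize searched the *scaled* Float64 space in this example, the returned candidate has to be mapped back through the same ResolutionDivisor to read off the solution in the original BigFloat space. A minimal sketch, reusing the divisor convention and the candidate printed above.)

```julia
# Map the Float64 candidate from the scaled space back to the
# original BigFloat space by applying the same divisor.
const ResolutionDivisor = 1e3

best  = [-1.66667, 4.63405e-9]                  # Float64 candidate from bboptimize
x_big = BigFloat.(best) ./ ResolutionDivisor    # solution in the original space
```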
