
Learner1D: return inf loss when the bounds aren't done #271

Merged: 5 commits into master on Sep 24, 2021

Conversation

basnijholt
Member

@basnijholt basnijholt commented Apr 20, 2020

Description

@bernardvanheck found an issue that occurs when using multiple cores on the following function:
[figure: blue curve shows a homogeneous plot of the function; orange points are those of the finished learner]
Here the middle region is exponentially flat; the blue curve is a homogeneous plot, and the orange points are those of the finished learner, which reported a very small loss (<0.01).

I think a learner should report an infinite loss as long as the bounds aren't included.

With that in place, the issue described above becomes much less likely to occur.

The following code snippet results in a learner with a small loss after only 3 points.

import adaptive
from adaptive.learner.learner1D import curvature_loss_function, Learner1D
from adaptive.runner import replay_log

learner_fail = Learner1D(
    lambda x: x, bounds=(-400e-6, 400e-6), loss_per_interval=curvature_loss_function(),
)

# The first returning points are in the middle flat region.
runner_log = [
    ("ask", 4),
    ("tell", 0.00013333333333333334, 1.9998184085251902),
    ("ask", 1),
    ("tell", -0.00013333333333333334, 1.9998184085251902),
    ("ask", 1),
    ("tell", 0.0, 1.9998184085251902),
]
replay_log(learner_fail, runner_log)

# The runner stops immediately: the loss is already below 0.01,
# even though the bounds (-400e-6, 400e-6) have not been sampled yet.
runner = adaptive.BlockingRunner(
    learner_fail, goal=lambda learner: learner.loss() < 0.01, log=True
)
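The proposed fix can be sketched as a guard in the loss computation. The snippet below is a simplified illustration of the idea, not adaptive's actual implementation; the function name and signature are hypothetical.

```python
import math


def loss_with_bounds_guard(xs, bounds, interval_losses):
    """Return an infinite loss until both bounds have been evaluated.

    This prevents a runner goal such as ``learner.loss() < 0.01`` from
    being satisfied while the learner has only seen interior points.
    """
    a, b = bounds
    if a not in xs or b not in xs:
        # At least one bound is still missing: report infinite loss.
        return math.inf
    # Both bounds are known: the loss is the worst interval loss.
    return max(interval_losses, default=math.inf)
```

With only the three interior points from the log above, the guard keeps the loss infinite, so the goal `loss() < 0.01` cannot be met prematurely:

```python
print(loss_with_bounds_guard({0.0}, (-400e-6, 400e-6), [0.004]))  # inf
```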

Checklist

  • Fixed style issues using pre-commit run --all (first install using pip install pre-commit)
  • pytest passed

Type of change

Check relevant option(s).

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

@basnijholt basnijholt force-pushed the learner1d-no-bounds-included branch from 4301874 to 69730f1 on April 20, 2020 10:10
@basnijholt basnijholt force-pushed the learner1d-no-bounds-included branch from ad3faac to e07416d on September 24, 2021 08:33
@basnijholt basnijholt disabled auto-merge September 24, 2021 11:19
@basnijholt basnijholt merged commit 527ae60 into master Sep 24, 2021
@basnijholt basnijholt deleted the learner1d-no-bounds-included branch September 24, 2021 11:19