
Progress bar for Hyperopt #494

Merged: 1 commit into main on Jan 9, 2025
Conversation

jduerholt
Contributor

So far, the hyperopt runner in BoFire only showed a progress bar when not using the FractionalFactorialStrategy; unfortunately, that strategy is the default for single-task GPs. This PR adds the progress bar in that case as well.
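The change can be illustrated with a hedged sketch. The function and argument names below (`run_hyperopt`, `evaluate`) are hypothetical stand-ins, not BoFire's actual API; the point is simply that the candidate-evaluation loop now reports progress regardless of which strategy proposed the candidates:

```python
import sys

def run_hyperopt(candidates, evaluate, show_progress=True):
    """Evaluate each hyperparameter candidate, printing a simple progress bar.

    Hypothetical stand-in for BoFire's hyperopt loop: progress is reported
    for every strategy, not only for some of them.
    """
    results = []
    total = len(candidates)
    for i, cand in enumerate(candidates, start=1):
        results.append((cand, evaluate(cand)))
        if show_progress:
            done = int(20 * i / total)
            sys.stderr.write(f"\r[{'#' * done}{'.' * (20 - done)}] {i}/{total}")
    if show_progress:
        sys.stderr.write("\n")
    return results

# Toy usage: pick the learning rate with the lowest (mock) validation loss.
best = min(run_hyperopt([0.1, 0.5, 1.0], lambda lr: (lr - 0.4) ** 2),
           key=lambda t: t[1])
```

In the real implementation a library such as `tqdm` would typically provide the bar; the hand-rolled version above just keeps the sketch dependency-free.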

@Jimbo994 Jimbo994 self-assigned this Jan 9, 2025
Collaborator

@Jimbo994 Jimbo994 left a comment

Looks good. Nice to have the pbar when running hyperopt!

@Jimbo994 Jimbo994 merged commit 9c5acca into main Jan 9, 2025
7 of 9 checks passed
dlinzner-bcs pushed a commit that referenced this pull request Jan 20, 2025
dlinzner-bcs added a commit that referenced this pull request Jan 28, 2025
* add draft of restructured doe class

* refactoring doe

* add formulaic to be installed always

* add formulaic to be installed always

* add formulaic to be installed always

* add formulaic to be installed always

* check style

* check style

* check style

* remove enums

* remove enums

* remove enums

* fix branch and bound

* move delta into criterion

* move delta into criterion

* move delta into criterion

* move delta into criterion

* move default criterion

* move default criterion

* move default criterion

* move default criterion

* refactor formulas and number of experiments

* pyright

* fix test

* fix test

* fix test

* fix tutorial

* fix tutorial

* fix tutorial

* fix test

* fix test

* fix getting started

* Aaron's review

* rmv unneeded tests

* formulaic version fixed bc of breaking changes

* add explanatory text to doe basic examples

* typo in basic_examples.ipynb

* format basic doe example

* consolidate space_filling with doe

* Add categoricals for `FractionalFactorialStrategy` (#480)

* integrate factorial in fractional factorial

* fix tests

* merge main

* Multiplicative additive sobo objectives (#481)

* added MultiplicativeAdditive data model

* added actual multiplicative model (callable is missing)

* added torch functions

* added test for objective

* added test for sobo strategy: multiplicative_additive

* changed additive/multiplicative calculations:
- Removed scaling by x**(1/(....)) to avoid numerical errors, if x<0
- included weight transformation for multiplicative objectives from (0, 1] to [1, inf) scale to avoid numerical errors at weights != 1.0
- added tests for weights != 1.0
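The weight transformation described above can be sketched as follows. The exact transform used in the PR is not shown here; `1/w` is just one natural map from (0, 1] to [1, inf) (1 maps to 1, 0.5 maps to 2), and the objective combiner is a hypothetical illustration, not BoFire's implementation:

```python
import math

def transform_weight(w: float) -> float:
    """Map a weight from (0, 1] to [1, inf).

    Illustrative choice: 1/w. Exponents >= 1 avoid the numerical issues
    that fractional exponents can cause; the PR's actual transform may differ.
    """
    if not 0.0 < w <= 1.0:
        raise ValueError("weights must lie in (0, 1]")
    return 1.0 / w

def multiplicative_objective(values, weights):
    """Combine objective values multiplicatively, raising each value to its
    transformed weight so that weights != 1.0 stay numerically stable."""
    return math.prod(v ** transform_weight(w) for v, w in zip(values, weights))
```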

* added notebook for comparison of merging objectives

* after hooks

* added .idea/ folder (PyCharm) to gitignore

* after hooks

* Apply pre-commit fixes

* Delete .idea directory

* corrected tests for multiplicative_additive_botorch_objective

* after pre-commit

* lint specifications

* corrected weightings calc in test for multiplicative objective

* after hooks

* changed docstrings to google docstring format

* easy fixes, spelling errors

* forgot linting

* easy fixes, spelling errors

* removed denominator additive from multiplicative_additive_sobo strategy

* after hooks

* fixed typing

* tensor initialization of objectives

* after hooks

* avoiding torch size error

* avoid linting error

* after hooks

* reverting test-renaming

* revert isinstance list comprehension to tuple.... solution

* testing copilot suggestions for linting errors

* reverting wrong copilot suggestions

* added test for _callables_and_weights

* after hooks

* added test for SOBO strategy data model

* added test for SOBO strategy data model

* added new sobo strategy to a mysterious list

* after hooks

* still trying to get rid of the linting error, expecting tuple(types)

* WIP

* WIP

* WIP

* WIP

* WIP

* minor corrections

* add pbar support for hyperopt (#494)

* Make the BoFire Data Models OpenAI compatible (#495)

* tuples to lists

* fix tests

* fix linting issues

* Group split kfold (#484)

* add group kfold option in cross_validate of any trainable surrogate

* changed to GroupShuffleSplit, added test case

* improve docstring & add some inline comments in test

* refactor cross_validate & add tests

* improve tests, remove unnecessary case while checking group split col

* add push

* formatting

---------

Co-authored-by: Jim Boelrijk Valcon <[email protected]>
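The group-split idea behind this PR (sklearn's `GroupShuffleSplit`) can be sketched in pure Python; the helper below is a hypothetical stand-in for BoFire's `cross_validate` option, showing only the core invariant that rows sharing a group label never appear on both sides of the split:

```python
import random

def group_shuffle_split(groups, test_size=0.25, seed=0):
    """Minimal sketch of a group-aware train/test split: whole groups are
    assigned to either train or test, so rows with the same group label
    never leak across the split. Not BoFire's actual API."""
    unique = sorted(set(groups))
    rng = random.Random(seed)
    rng.shuffle(unique)
    n_test = max(1, int(round(test_size * len(unique))))
    test_groups = set(unique[:n_test])
    train_idx = [i for i, g in enumerate(groups) if g not in test_groups]
    test_idx = [i for i, g in enumerate(groups) if g in test_groups]
    return train_idx, test_idx

# Toy usage: six rows from three experimental batches "a", "b", "c".
tr, te = group_shuffle_split(["a", "a", "b", "b", "c", "c"], test_size=0.34)
```

`GroupShuffleSplit` itself additionally supports repeated randomized splits (`n_splits`); the sketch keeps only the single-split case.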

* fix strict candidate enforcement (#492)

* Drop support for Python 3.9 (#493)

* update tests and pyproject.toml

* update lint workflow

* update test

* bump pyright

* different pyright version

* change linting

* Update pyproject.toml (#501)

BoTorch is slowed down massively by scipy 1.15: pytorch/botorch#2668. We should fix it.
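A pin of the kind discussed would look roughly like this in `pyproject.toml`; the bounds below are illustrative, not the repository's actual constraints:

```toml
[project]
dependencies = [
    # Avoid the BoTorch slowdown reported in pytorch/botorch#2668.
    "scipy<1.15",
]
```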

* kernels working on a given set of features (#476)

* kernels working on a given set of features

* pre-commit

* test map singletaskgp with additive kernel

* test active_dims of mapped kernels

* add features_to_idx_mapper to outlier detection tutorial

* correctly handling categorical mol features

* validating mol features transforms

* verifying proper type

* custom hamming kernel enabling single task gp on categorical features

* removed unnecessary parameter from data model

* testing equivalence of mixed gp and single gp with custom kernel

* (temporary) running on all py versions

* (temporary) debug github actions by printing

* more printing

* Revert "testing equivalence of mixed gp and single gp with custom kernel"

This reverts commit 4a2a547.

* Revert "removed unnecessary parameter from data model"

This reverts commit 6ad1dfd.

* Revert "custom hamming kernel enabling single task gp on categorical features"

This reverts commit 17d8350.

* Revert "Revert "custom hamming kernel enabling single task gp on categorical features""

This reverts commit 2e29852.

* Revert "Revert "testing equivalence of mixed gp and single gp with custom kernel""

This reverts commit 1cd2776.

* removed test debug and restored to latest implemented features

* pinning compatible version of formulaic

* pinning compatible version of formulaic

* removed old code

* lint

* removed scratch file

* removed old code again

* silencing pyright false positive

* compatibility with py39

* pin compatible version of formulaic

* restored old code

* pinning sklearn

* pinning sklearn

* pinning scikit everywhere

* not testing for prediction quality

* matching lengthscale constraints in hamming kernel

* removed equivalence test

* testing hamming kernel

* added test for mol features in single task gp

* categorical onehot kernel uses the right lengthscale for multiple features

* removed redundant check

* more descriptive name for base kernel

* updated docstring

* improved tests and comments

---------

Co-authored-by: Robert Lee <[email protected]>
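The feature-to-index mapping this PR mentions (`features_to_idx_mapper`, feeding a kernel's `active_dims`) can be illustrated with a hedged pure-Python sketch; the helper below is hypothetical and only shows how named input features might be resolved to the column indices a kernel is restricted to:

```python
def features_to_idx_mapper(all_features):
    """Hypothetical helper: given the ordered list of model inputs, return
    a function mapping a subset of feature names to their column indices
    (the active_dims a kernel would operate on)."""
    positions = {name: i for i, name in enumerate(all_features)}

    def mapper(selected):
        missing = [f for f in selected if f not in positions]
        if missing:
            raise KeyError(f"unknown features: {missing}")
        return [positions[f] for f in selected]

    return mapper

# Toy usage: restrict a kernel to two of three model inputs.
mapper = features_to_idx_mapper(["temperature", "pressure", "catalyst"])
active_dims = mapper(["temperature", "catalyst"])  # -> [0, 2]
```

In GPyTorch, a list like this would be passed as the kernel's `active_dims`, so an additive kernel can assign different kernels to different feature subsets.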

* Fix mapper tests (#502)

* fix kernel mapper tests

* bump botorch dependency

* rmv unused import in strategies.api

* rmv unused import in space filling

* rmv unused import in space filling

* fix data models tests

* fix data models tests

* fix data models tests

* fix data models tests

* fix data models tests

* no more bnb in test

* add fixtures for criteria

* add fixtures for criteria

---------

Co-authored-by: LinzneDD_basf <[email protected]>
Co-authored-by: Dominik Linzner <[email protected]>
Co-authored-by: linznedd <[email protected]>
Co-authored-by: Robert Lee <[email protected]>
Co-authored-by: Lukas Hebing <[email protected]>
Co-authored-by: Julian Keupp <[email protected]>
Co-authored-by: Jim Boelrijk Valcon <[email protected]>
Co-authored-by: Emilio Dorigatti <[email protected]>