Small changes to documentation #548

Merged: 7 commits, Nov 12, 2024
14 changes: 1 addition & 13 deletions docs/source/algorithms.md
@@ -392,7 +392,7 @@ install optimagic.
.. warning::
In our benchmark using a quadratic objective function, the trust_constr
algorithm did not find the optimum very precisely (less than 4 decimal places).
-   If you require high precision, you should refine an optimum found with Powell
+   If you require high precision, you should refine an optimum found with trust_constr
with another local optimizer.

.. note::
@@ -907,12 +907,6 @@ We implement a few algorithms from scratch. They are currently considered experi
and therefore may require fewer iterations to arrive at a local optimum than
Nelder-Mead.

-The criterion function :func:`func` should return a dictionary with the following
-fields:
-
-1. ``"value"``: The sum of squared (potentially weighted) errors.
-2. ``"root_contributions"``: An array containing the root (weighted) contributions.
-
Scaling the problem is necessary such that bounds correspond to the unit hypercube
:math:`[0, 1]^n`. For unconstrained problems, scale each parameter such that unit
changes in parameters result in similar order-of-magnitude changes in the criterion
@@ -1015,12 +1009,6 @@ need to have [petsc4py](https://pypi.org/project/petsc4py/) installed.
and therefore may require fewer iterations to arrive at a local optimum than
Nelder-Mead.

-The criterion function :func:`func` should return a dictionary with the following
-fields:
-
-1. ``"value"``: The sum of squared (potentially weighted) errors.
-2. ``"root_contributions"``: An array containing the root (weighted) contributions.
-
Scaling the problem is necessary such that bounds correspond to the unit hypercube
:math:`[0, 1]^n`. For unconstrained problems, scale each parameter such that unit
changes in parameters result in similar order-of-magnitude changes in the criterion
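The unit-hypercube scaling described above can be sketched as a simple affine map of each bounded parameter into :math:`[0, 1]` (function names here are illustrative, not optimagic API):

```python
import numpy as np


def scale_to_unit_hypercube(params, lower, upper):
    """Map parameters with finite bounds onto the unit hypercube [0, 1]^n."""
    params = np.asarray(params, dtype=float)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return (params - lower) / (upper - lower)


def unscale_from_unit_hypercube(unit_params, lower, upper):
    """Invert the map back to the original parameter space."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + np.asarray(unit_params, dtype=float) * (upper - lower)


# Example: x = [0.5, 0.25] in the unit hypercube.
x = scale_to_unit_hypercube([5.0, -1.0], lower=[0.0, -2.0], upper=[10.0, 2.0])
```

For unconstrained parameters, where no bounds exist to define this map, one instead rescales so that unit changes in each parameter produce similar order-of-magnitude changes in the criterion.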
2 changes: 1 addition & 1 deletion docs/source/how_to/how_to_algorithm_selection.ipynb
@@ -52,7 +52,7 @@
" E[\"Can you exploit<br/>a least-squares<br/>structure?\"] -- yes --> F[\"differentiable?\"]\n",
" E[\"Can you exploit<br/>a least-squares<br/>structure?\"] -- no --> G[\"differentiable?\"]\n",
"\n",
" F[\"differentiable?\"] -- yes --> H[\"scipy_ls_lm<br/>scipy_ls_trf<br/>scipy_ls_dogleg\"]\n",
" F[\"differentiable?\"] -- yes --> H[\"scipy_ls_lm<br/>scipy_ls_trf<br/>scipy_ls_dogbox\"]\n",
" F[\"differentiable?\"] -- no --> I[\"nag_dflos<br/>pounders<br/>tao_pounders\"]\n",
"\n",
" G[\"differentiable?\"] -- yes --> J[\"scipy_lbfgsb<br/>nlopt_lbfgsb<br/>fides\"]\n",
2 changes: 1 addition & 1 deletion src/optimagic/optimizers/_pounders/gqtpar.py
@@ -55,7 +55,7 @@ def gqtpar(model, x_candidate, *, k_easy=0.1, k_hard=0.2, maxiter=200):
- ``linear_terms``, a np.ndarray of shape (n,) and
- ``square_terms``, a np.ndarray of shape (n,n).
x_candidate (np.ndarray): Initial guess for the solution of the subproblem.
-        k_easy (float): topping criterion for the "easy" case.
+        k_easy (float): Stopping criterion for the "easy" case.
k_hard (float): Stopping criterion for the "hard" case.
maxiter (int): Maximum number of iterations to perform. If reached,
terminate.
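For context on what `gqtpar` solves: it handles the trust-region subproblem of minimizing the quadratic model :math:`g^\top p + \tfrac{1}{2} p^\top H p` subject to :math:`\lVert p \rVert \le 1`, built from the `linear_terms` and `square_terms` of the model dictionary above. A simplified sketch of the "easy" case only (H positive definite with the Newton step interior to the trust region); this is an illustration, not the actual implementation:

```python
import numpy as np


def solve_easy_case(linear_terms, square_terms):
    """Newton step for a convex model, valid when it lies inside the unit ball.

    The full algorithm also handles boundary and "hard case" solutions,
    which is where the k_easy and k_hard stopping criteria come in.
    """
    p = -np.linalg.solve(square_terms, linear_terms)
    if np.linalg.norm(p) > 1.0:
        raise ValueError("Newton step leaves the trust region; easy case only.")
    return p


g = np.array([0.5, -0.25])            # linear_terms, shape (n,)
H = np.array([[2.0, 0.0], [0.0, 1.0]])  # square_terms, shape (n, n)
p = solve_easy_case(g, H)             # interior solution -H^{-1} g
```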
2 changes: 1 addition & 1 deletion src/optimagic/optimizers/_pounders/pounders_auxiliary.py
@@ -240,7 +240,7 @@ def solve_subproblem(
gtol_rel_conjugate_gradient (float): Convergence tolerance for the relative
gradient norm in the conjugate gradient step of the trust-region
subproblem ("bntr").
-        k_easy (float): topping criterion for the "easy" case in the trust-region
+        k_easy (float): Stopping criterion for the "easy" case in the trust-region
subproblem ("gqtpar").
k_hard (float): Stopping criterion for the "hard" case in the trust-region
subproblem ("gqtpar").
2 changes: 1 addition & 1 deletion src/optimagic/optimizers/pounders.py
@@ -262,7 +262,7 @@ def internal_solve_pounders(
gtol_rel_conjugate_gradient_sub (float): Convergence tolerance for the
relative gradient norm in the conjugate gradient step of the trust-region
subproblem if "cg" is used as ``conjugate_gradient_method_sub`` ("bntr").
-        k_easy_sub (float): topping criterion for the "easy" case in the trust-region
+        k_easy_sub (float): Stopping criterion for the "easy" case in the trust-region
subproblem ("gqtpar").
k_hard_sub (float): Stopping criterion for the "hard" case in the trust-region
subproblem ("gqtpar").