Multi-fidelity BO tutorial doesn't work? #2523
I am working from the multi-fidelity BO tutorial, which compares the multi-fidelity knowledge gradient (KG) against traditional expected improvement (EI) run only at the highest fidelity. At the end, I plot the optimization curves as a function of cumulative cost (for multi-fidelity BO, tracking the best value so far over highest-fidelity evaluations only), but single-fidelity EI appears to outperform multi-fidelity KG. Is this not the right way to track this?

```python
import matplotlib.pyplot as plt

fixed_cost = 5.0

# Single-fidelity EI: best objective so far vs. cumulative cost
cost_ei = []
best_y_ei = []
cumulative_cost = 0
for xi, yi in zip(train_x_ei, train_obj_ei):
    yi = yi[0].item()
    costi = xi[-1].item()  # the last input dimension is the fidelity
    cumulative_cost += costi + fixed_cost
    if costi == 1:  # only record highest-fidelity evaluations
        cost_ei.append(cumulative_cost)
        if not best_y_ei or yi > best_y_ei[-1]:
            best_y_ei.append(yi)
        else:
            best_y_ei.append(best_y_ei[-1])

# Multi-fidelity KG: same bookkeeping
cost_mf = []
best_y_mf = []
cumulative_cost = 0
for xi, yi in zip(train_x, train_obj):
    yi = yi[0].item()
    costi = xi[-1].item()
    cumulative_cost += costi + fixed_cost
    if costi == 1:
        cost_mf.append(cumulative_cost)
        if not best_y_mf or yi > best_y_mf[-1]:
            best_y_mf.append(yi)
        else:
            best_y_mf.append(best_y_mf[-1])

plt.figure()
plt.plot(cost_ei, best_y_ei, marker="o", label="Single-fidelity, EI")
plt.plot(cost_mf, best_y_mf, marker="o", label="Multi-fidelity, KG")
plt.xlabel("Cost")
plt.ylabel("Objective")
plt.legend()
```

Edit: I just looked at the output of an earlier cell in the notebook, where it makes a final recommendation from the multi-fidelity KG model. The recommended point's objective value is better than anything MFKG found during optimization, and better than what EI was able to find. So somehow the GP "knows" about the best objective, but never actually evaluates it? Am I interpreting this correctly?
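For reference, the recommendation cell in the tutorial does roughly the following (a sketch from memory, not the notebook verbatim; the dimensions, optimizer settings, and `bounds`/`problem` all come from the notebook and may differ). The key point is that it optimizes the posterior mean with the fidelity feature pinned to the target fidelity, so the recommended point need not be one that was ever evaluated:

```python
from botorch.acquisition import PosteriorMean
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction
from botorch.optim import optimize_acqf

def get_recommendation(model):
    # Posterior mean with the fidelity column fixed to 1.0 (target fidelity)
    rec_acqf = FixedFeatureAcquisitionFunction(
        acq_function=PosteriorMean(model),
        d=7,          # total input dims: 6 design dims + 1 fidelity dim
        columns=[6],  # index of the fidelity feature
        values=[1],   # pin it to the highest fidelity
    )
    # Optimize over the design dimensions only
    final_rec, _ = optimize_acqf(
        acq_function=rec_acqf,
        bounds=bounds[:, :-1],
        q=1,
        num_restarts=10,
        raw_samples=512,
    )
    # Re-attach the fixed fidelity column to get a full-dimensional point
    return rec_acqf._construct_X_full(final_rec)
```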
Yes, this is correct. MFKG avoids evaluating the high fidelities due to the high cost, so your analysis of how well it does needs to be based on the points it would recommend at each step, not on the points it evaluates as part of the optimization. Take a look at the ["Make a final recommendation"](https://botorch.org/tutorials/discrete_multi_fidelity_bo#Make-a-final-recommendation) section of the tutorial. You'll want to call `get_recommendation()` in each step based on the model…

This is often referred to as considering "inference regret" rather than "simple regret": the former is based on what the model believes is best, while the latter is based only on the points that have been evaluated so far.
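To make that concrete, here is a minimal sketch of what the tracking loop could look like, assuming you keep a snapshot of the fitted model after each iteration and reuse `get_recommendation` and `problem` from the notebook (`new_xs_per_step` and `models_per_step` are hypothetical containers you would populate inside your BO loop):

```python
# Inference-regret-style curve: after each iteration, evaluate the model's
# current recommendation at full fidelity, rather than tracking only the
# points MFKG chose to evaluate during optimization.
rec_cost, rec_best = [], []
cumulative_cost = 0.0
for new_x, model in zip(new_xs_per_step, models_per_step):
    # pay for what was actually evaluated this step (fidelity value + fixed cost)
    cumulative_cost += (new_x[..., -1] + fixed_cost).sum().item()
    # the model's current best guess at the full-fidelity optimum
    rec_x = get_recommendation(model)
    rec_cost.append(cumulative_cost)
    rec_best.append(problem(rec_x).item())
```

Plotting `rec_cost` vs. `rec_best` alongside the EI curve should give a fairer comparison, since most of the points MFKG actually evaluates are cheap low-fidelity probes.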