NOTE TO STUDENTS: if this homework seems too complex, then reflect a little more on the prior optimization examples. This code generalizes that prior work to the point where we can quickly write many models and many optimizers. So it actually simplifies the optimization process... at the cost of some extra architecture.
From the following, show the output of running sa, mws on Schaffer, Osyczka2, Kursawe.
Rewrite your SA and MWS code such that you can run the following loop:
for model in [Schaffer, Osyczka2, Kursawe]:
    for optimizer in [sa, mws]:
        optimizer(model())
This is the generic experiment loop that allows for rapid extension to handle more models and more optimizers.
The above loop requires the Kursawe model.
The above code assumes that mws, sa are functions that accept one argument: a description of the model they are processing.
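For example, a minimal sketch of that calling convention might look like the following. This is just the shell, not the full annealer; the any/ok/eval methods it leans on are the ones listed below.

def sa(model):
    # Sketch only: the real sa keeps the cooling schedule etc. from the earlier homework.
    s = model.any()          # one random candidate, built from the model's decision ranges
    while not model.ok(s):   # resample until the model's constraints are satisfied
        s = model.any()
    e = model.eval(s)        # objective scores for that candidate
    # ... the usual annealing loop goes here, mutating s and re-scoring via model.eval ...
    return s, e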
For the above loop to work, each model (e.g. Schaffer) has to be a class that produces an instance via model() (see the sketch after this list). That model defines:
- the number of decisions;
- the number of objectives;
- the name of each decision/objective;
- the min/max range of each decision/objective;
- an any function that generates a candidate (each decision picked at random between its min and max);
- an ok function that checks whether a particular candidate is valid (for Schaffer and Kursawe this always returns True, while for Osyczka2 it checks the model's constraints);
- an eval function that computes the objective scores for a candidate.
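As one possible sketch of that structure (the attribute and method names below are illustrative, not mandated), a tiny base class plus a Schaffer subclass might look like this:

import random

class Model:
    # Subclasses fill in the decision/objective metadata.
    decisions  = []   # list of (name, lo, hi) triples
    objectives = []   # list of objective names

    def any(self):
        # One random candidate: each decision drawn between its min and max.
        return [random.uniform(lo, hi) for _, lo, hi in self.decisions]

    def ok(self, can):
        # Default: unconstrained, so every candidate is valid (Schaffer, Kursawe).
        return True

    def eval(self, can):
        raise NotImplementedError

class Schaffer(Model):
    # Assumed decision range of -1e5..1e5; use whatever range your earlier homework used.
    decisions  = [("x", -1e5, 1e5)]
    objectives = ["f1", "f2"]

    def eval(self, can):
        x = can[0]
        return [x ** 2, (x - 2) ** 2]

With something like this in place, optimizer(model()) just builds a fresh instance per run and hands it to sa or mws. Adding Kursawe or Osyczka2 then means writing one more subclass (Osyczka2 would also override ok to enforce its constraints), and adding a new optimizer means writing one more function that accepts a model instance.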