Export the simulation as a video #885
Comments
A temporary solution is to record the browser screen using e.g. https://chrome.google.com/webstore/detail/screen-recorder/hniebljpgcogalllopnjokppmgbhaden. But given that we use Solara now, it should be easier to capture the Matplotlib space state as pictures that can be turned into an animated GIF.
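A minimal sketch of the pictures-to-GIF idea using Pillow. The frames here are dummy solid-color images; in a real model each frame would come from rendering the Matplotlib space figure per step (e.g. via `savefig` to a buffer). The function name `frames_to_gif` is made up for illustration.

```python
from PIL import Image


def frames_to_gif(frames, path, ms_per_frame=400):
    """Stitch a list of PIL images into an animated GIF."""
    frames[0].save(
        path,
        save_all=True,
        append_images=frames[1:],
        duration=ms_per_frame,
        loop=0,
    )


# Dummy frames standing in for per-step renders of the space state:
frames = [Image.new("RGB", (50, 50), (i * 60, 100, 150)) for i in range(4)]
frames_to_gif(frames, "space.gif")
```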
The second option (to capture the matplotlib space state as images) sounds good to me. Any hints on where to start with this? I'm happy to work on this feature if it might be useful to others.
A quick hack to test the idea is to add a line after mesa/mesa/experimental/jupyter_viz.py Line 262 in 9c9a02e that saves the space figure as an image each step, and then turn the images into a GIF with the imagemagick CLI.
You can also have a look at https://matplotlib.org/stable/api/animation_api.html. The examples at the bottom are especially helpful for creating an animation. No need to use Solara here; you can also just do your plots normally and call model.step() inside the animation function.
That animation API uses
This works well 🎉 I think it might be the easiest integration with the existing infrastructure:

```python
space_ax.set_axis_off()
space_fig.savefig(f"space_{model.schedule.steps}.png")
```

However, I personally prefer to have a separate function to generate a GIF for a given model. I guess Matplotlib's animation API is the best option then.
Here is the implementation:

```python
import matplotlib.animation as animation
from matplotlib.figure import Figure

from mesa.experimental.jupyter_viz import JupyterContainer


def plot_n_steps(viz_container: JupyterContainer, n_steps: int = 10):
    model = viz_container.model_class(
        **viz_container.model_params_input, **viz_container.model_params_fixed
    )
    space_fig = Figure(figsize=(10, 10))
    space_ax = space_fig.subplots()
    space_ax.set_axis_off()
    # set limits to grid size
    space_ax.set_xlim(0, model.grid.width)
    space_ax.set_ylim(0, model.grid.height)
    # set equal aspect ratio
    space_ax.set_aspect("equal", adjustable="box")
    scatter = space_ax.scatter(**viz_container.portray(model.grid))

    def update_grid(_scatter, data):
        _scatter.set_offsets(list(zip(data["x"], data["y"])))
        if "c" in data:
            _scatter.set_color(data["c"])
        if "s" in data:
            _scatter.set_sizes(data["s"])
        return _scatter

    def animate(_):
        if model.running:
            model.step()
        return update_grid(scatter, viz_container.portray(model.grid))

    ani = animation.FuncAnimation(
        space_fig, animate, repeat=True, frames=n_steps, interval=400
    )
    # To save the animation using Pillow as a gif
    writer = animation.PillowWriter(fps=15, metadata=dict(artist="Me"), bitrate=1800)
    ani.save("scatter.gif", writer=writer)
```

It's actually pretty fast: ~10 seconds for 1000 steps and 5 agents. Please let me know what you think @rht @Corvince!
We could incorporate your code, but there is a possibility that we are migrating the plotting to Altair instead (see #1806). This is still up for discussion. |
Just let me know if this or something like it might be useful, I am actively using this to produce gifs to check the model when running on the compute cluster. |
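For running on a compute cluster as mentioned above, one detail worth noting: Matplotlib needs a non-interactive backend when there is no display. A small sketch (the filename is arbitrary):

```python
# Select a non-interactive backend BEFORE importing pyplot, so figure
# creation works headless (e.g. on a compute cluster without a display).
import matplotlib

matplotlib.use("Agg")  # render to buffers/files only, no GUI window
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig("check.png")  # writes the figure to disk instead of showing it
```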
With #2430, we added generic matplotlib functions for drawing spaces. These are not confined to the Solara frontend but can also be used to e.g. make movies using matplotlib.
I might want to take a shot sometime at documenting this and maybe write a utility function for it.
Given that creating a movie is more of a matplotlib thing than a Mesa thing, I would be hesitant to add such a function to Mesa itself. There is an excellent Stack Overflow answer on how to make an MP4 with Matplotlib, and several other resources are just one Google search away. It's trivial to combine that with the new draw_x functions.
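A hedged sketch of the MP4 route with plain matplotlib. The frame-drawing callback here just moves a dummy scatter point; in a real model it would call `model.step()` and redraw the space with the new draw functions. It falls back to a GIF via `PillowWriter` when ffmpeg is not installed.

```python
import matplotlib

matplotlib.use("Agg")  # headless backend, safe without a display
import matplotlib.animation as animation
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
scat = ax.scatter([0], [0])
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)


def animate(frame):
    # In a Mesa model you would call model.step() here and redraw the space.
    scat.set_offsets([[frame, frame]])
    return (scat,)


ani = animation.FuncAnimation(fig, animate, frames=10, interval=100)

if animation.writers.is_available("ffmpeg"):
    ani.save("model.mp4", writer=animation.FFMpegWriter(fps=10))
else:
    # Fallback when ffmpeg is not on the system
    ani.save("model.gif", writer=animation.PillowWriter(fps=10))
```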
Worth adding it to the viz tutorial maybe? It's a good use case to show ABM results and model behavior.
Yes, or a quick snippet in the currently outdated how-to guide.
What's the problem this feature will solve?
A readily obtainable video output of our simulation would make collaboration easier between people working on a model, as well as for presentations, publications, etc. A normal screen recording will not do well, as the time required for each step may fluctuate during a simulation. NetLogo offers an option to record the simulation region or even the whole interface (with the sliders, graphs, etc.).
Describe the solution you'd like
Just a thought: it would be better if we could obtain exactly what the browser visualization shows as a video. Maybe we can screen-capture the active elements of the screen during every step and store them temporarily, and at the end of the simulation stitch them together (say, 25 frames per second) and write the result to a predefined location/filename.
EDIT: I came across a few Python packages that can 'program' SVGs. We could assign some shape to every agent and then place them at the appropriate coordinates to get a vector image for each step. Maybe also display the parameters of the model, etc.
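The SVG idea can be sketched with just the standard library, without any SVG package: one circle element per agent per step. The agent positions here are made up, and `agents_to_svg` is a hypothetical helper name.

```python
import xml.etree.ElementTree as ET


def agents_to_svg(positions, width=100, height=100, r=3):
    """Return an SVG document string with one circle per (x, y) agent position."""
    svg = ET.Element(
        "svg",
        xmlns="http://www.w3.org/2000/svg",
        width=str(width),
        height=str(height),
    )
    for x, y in positions:
        # One shape per agent, placed at its grid coordinates
        ET.SubElement(
            svg, "circle", cx=str(x), cy=str(y), r=str(r), fill="steelblue"
        )
    return ET.tostring(svg, encoding="unicode")


# One frame for three hypothetical agents:
frame = agents_to_svg([(10, 20), (40, 60), (80, 30)])
```

Per-step frames produced this way could then be rasterized and stitched into a video, as discussed above.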