Code for SIGGRAPH 2024 paper "Text-Guided Synthesis of Crowd Animation".
- Install the packages in `requirements.txt`. The implementation is based on PyTorch.
- Find the packages `pyDeclutter` and `RVO2_Python` in `Libs`. For each library, run `python setup.py build` to build it and `python setup.py install` to install it.
- Visit [Diffusers](https://github.com/huggingface/diffusers) to install Diffusers, a modular library that contains most of the state-of-the-art pre-trained diffusion models (a quick sanity check for the installation is sketched after this list).
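To verify that Diffusers is installed correctly, you can instantiate one of its standard schedulers and run a forward-noising step. This is a minimal sanity-check sketch, not part of this repository's pipeline:

```python
# Minimal sanity check for the Diffusers installation (not part of this repo's code).
import torch
from diffusers import DDPMScheduler

# Instantiate a standard DDPM noise scheduler with 1000 diffusion steps.
scheduler = DDPMScheduler(num_train_timesteps=1000)

# Add noise to a dummy batch to confirm the forward diffusion process works.
clean = torch.randn(2, 3, 32, 32)                      # dummy "clean" samples
noise = torch.randn_like(clean)                        # Gaussian noise
timesteps = torch.randint(0, 1000, (2,))               # random diffusion steps
noisy = scheduler.add_noise(clean, noise, timesteps)   # forward-noised samples
print(noisy.shape)  # torch.Size([2, 3, 32, 32])
```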
You can download the pre-generated and post-processed dataset from this link: download `Dataset.zip` and unzip it into the `Language_Crowd_Animation` folder.

Or you can generate it yourself:
- Run `Dataset_Generation.py` to generate the initial dataset. The initial dataset contains unoptimized velocity fields, which tend to push the agents into concentrated clusters (see the sketch after this list for how a velocity field drives the agents).
- Run `Dataset_Postprocess.py` to post-process the initial dataset.
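To illustrate why unoptimized velocity fields are a problem: agents move by sampling the field at their positions, so a field whose vectors all point toward the same region piles agents up there. The sketch below is a hypothetical illustration; the grid resolution, field layout, and function names are assumptions, not this repository's actual data format:

```python
import numpy as np

def advect_agents(positions, field, dt=0.1):
    """Move agents one Euler step along a velocity field.

    positions: (N, 2) agent positions in [0, W) x [0, H).
    field: (H, W, 2) grid of 2D velocities (assumed layout, not
           necessarily the format used by this repository).
    """
    h, w, _ = field.shape
    # Nearest-neighbor sampling of the field at each agent's grid cell.
    ix = np.clip(positions[:, 0].astype(int), 0, w - 1)
    iy = np.clip(positions[:, 1].astype(int), 0, h - 1)
    velocities = field[iy, ix]          # (N, 2) sampled velocities
    return positions + dt * velocities  # Euler integration step

# A field whose vectors all point at the grid center concentrates agents,
# which is the behavior the post-processing step is meant to correct.
H = W = 32
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
to_center = np.stack([W / 2 - xs, H / 2 - ys], axis=-1).astype(float)
agents = np.random.rand(100, 2) * [W, H]
for _ in range(50):
    agents = advect_agents(agents, to_center, dt=0.05)
print(agents.std(axis=0))  # spread shrinks as agents cluster at the center
```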
You can use the pre-trained diffusion models directly from this link: download `Models_Server_ForTest.zip` and unzip it into the `Language_Crowd_Animation` folder.

Or you can train the models yourself:
- Run `Trainer_SgDistrDiffusion_Full_V1_Server.py` to train the start-and-goal diffusion model.
- Run `Trainer_FieldDiffusion_Full_V2_Server.py` to train the velocity-field diffusion model (a generic training-step sketch follows this list).
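For reference, a standard DDPM-style noise-prediction training step with Diffusers looks like the sketch below. This is a generic pattern, not the actual code in the trainer scripts; the model size, channel counts, and data shapes are placeholder assumptions:

```python
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler, UNet2DModel

# Placeholder model over 2-channel "velocity field" images (assumed shape,
# not necessarily what the trainer scripts use).
model = UNet2DModel(sample_size=32, in_channels=2, out_channels=2)
scheduler = DDPMScheduler(num_train_timesteps=1000)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

fields = torch.randn(4, 2, 32, 32)  # dummy batch standing in for training data

# One standard noise-prediction training step.
noise = torch.randn_like(fields)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (fields.shape[0],))
noisy = scheduler.add_noise(fields, noise, timesteps)  # forward diffusion
pred = model(noisy, timesteps).sample                  # predict the added noise
loss = F.mse_loss(pred, noise)                         # simple DDPM objective
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(loss.item())
```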
Run `Quantitative_Exps.py` to evaluate the model using the testing data from the dataset.
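As one illustration of quantitative evaluation, a simple choice is the mean squared error between generated and ground-truth velocity fields over the test split. The snippet below is a hypothetical sketch; `Quantitative_Exps.py` may compute different metrics:

```python
import torch

def field_mse(generated, reference):
    """MSE between generated and ground-truth velocity fields.

    Both tensors: (N, 2, H, W). A hypothetical metric for illustration;
    the actual evaluation script may measure different quantities.
    """
    return torch.mean((generated - reference) ** 2).item()

# Dummy usage with random stand-ins for the test split.
gen = torch.randn(8, 2, 32, 32)
ref = torch.randn(8, 2, 32, 32)
print(f"velocity-field MSE: {field_mse(gen, ref):.4f}")
```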
Please consider citing our paper if our code is useful for your research:
@inproceedings{ji2024text,
title={Text-Guided Synthesis of Crowd Animation},
author={Ji, Xuebo and Pan, Zherong and Gao, Xifeng and Pan, Jia},
booktitle={ACM SIGGRAPH 2024 Conference Papers},
pages={1--11},
year={2024}
}
Feel free to email [email protected] or [email protected] if you have any questions.