
Log folder not created correctly when running run_open_LLM_with_vllm.py #25

Open
aligoldenhat opened this issue Oct 21, 2024 · 3 comments

Comments

@aligoldenhat

aligoldenhat commented Oct 21, 2024

I am running the following command to fine-tune a model using run_open_LLM_with_vllm.py:

```shell
!python run_open_LLM_with_vllm.py --llm_model llama \
                       --llm_path /content/llama \
                       --dataset hangzhou \
                       --traffic_file anon_4_4_hangzhou_real.json \
                       --proj_name TSCS
```

I want to save the logs in the {llm_model}_logs folder (I use these logs for fine-tuning). However, instead of creating that folder, the script creates a fails folder and stores a .json file there. The expected behavior is that a folder named {llm_model}_logs (e.g., llama_logs) is created to store the fine-tuning logs.

Could you help me fix this issue so that the fine-tuning logs are saved correctly in the {llm_model}_logs folder? If any additional configuration is required, or if this is a problem with the script itself, please let me know.
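For context, a minimal sketch of the behavior I expect: the script would derive a log directory from the `--llm_model` argument and create it before writing any logs. The variable and file names below are hypothetical illustrations, not the repo's actual code.

```python
import os

# Hypothetical sketch: derive the log directory from the --llm_model argument
# ("llama" in my command above) and create it before any logs are written.
llm_model = "llama"  # would come from argparse in the real script
log_dir = f"{llm_model}_logs"
os.makedirs(log_dir, exist_ok=True)  # creates llama_logs/ if missing

# Hypothetical log file name; the point is that it lands inside {llm_model}_logs.
log_path = os.path.join(log_dir, "finetune_log.json")
```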

@SQLai2099
Collaborator

Thank you for your suggestion. I have updated the code.

@aligoldenhat
Author

Thanks for the recent update. I noticed that while log_dir is now created if it does not exist, failure messages are still being saved in the fails folder, and nothing is stored in log_dir as expected.

The expected behavior is to save the logs for fine-tuning in {llm_model}_logs (similar to gpt_logs in run_chatgpt.py), but that’s not happening. I reviewed the LLM_Inference_VLLM class in llm_aft_trainer.py and found that LOG_DIR isn’t being used for logging there.

Is run_open_LLM_with_vllm.py meant to generate logs for fine-tuning, like run_chatgpt.py?
If not, could you suggest how to modify the code to save logs for fine-tuning in {llm_model}_logs?

Thanks again for your help!

@SQLai2099
Collaborator

Thank you for the feedback. With run_open_LLM_with_vllm.py, the LLM responses are saved in self.dic_path["PATH_TO_WORK_DIRECTORY"] (as stated in line 1066). You can either change that path to another directory or look for the log file in self.dic_path["PATH_TO_WORK_DIRECTORY"].
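Based on the maintainer's comment, one way to redirect the output into {llm_model}_logs would be to override that dictionary entry before the trainer runs. The dictionary key follows the comment above; the surrounding setup (how dic_path is built) is an assumption, so treat this as a sketch rather than the repo's actual code.

```python
import os

# Hypothetical sketch: point PATH_TO_WORK_DIRECTORY at {llm_model}_logs so the
# LLM responses land there instead of the default work directory. The key name
# comes from the maintainer's comment; everything else is illustrative.
llm_model = "llama"
dic_path = {"PATH_TO_WORK_DIRECTORY": f"{llm_model}_logs"}
os.makedirs(dic_path["PATH_TO_WORK_DIRECTORY"], exist_ok=True)
```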
