Loading Models that require execution of third party code (trust_remote_code=True) #354
Hi @nearmax-p, could you install vLLM from source? Then this error should disappear. Sorry for the inconvenience. We will update our PyPI package very soon.
I see, thank you very much, this worked! One more issue I came across is that MPT-30B doesn't seem to load on 2 A100 GPUs. I used the following command:
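The command itself was not captured in this thread; a sketch of what loading MPT-30B across two A100s with vLLM's Python API typically looks like (the model name, prompt, and sampling settings below are assumptions, not the original command):

```python
# Hypothetical reconstruction, not the original command:
# load MPT-30B sharded over two GPUs with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mosaicml/mpt-30b",
    tensor_parallel_size=2,   # shard across the 2 A100s
    trust_remote_code=True,   # MPT ships custom modeling/config code
)

outputs = llm.generate(["Hello, my name is"],
                       SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```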
And got the following response:
But the model never loads properly and cannot be called (I waited for 20+ minutes, and the model had already been downloaded from the Hugging Face Hub on my device). Have you encountered this before?
@nearmax-p thanks for reporting it. Could you share how much CPU memory you have? It seems this bug occurs when CPU memory is insufficient. We haven't succeeded in reproducing the bug, so your information would be very helpful.
@WoosukKwon Sure! I am using an a2-highgpu-2g instance from GCP, so I have 170 GB of CPU RAM. This actually seems like a lot to me.
@nearmax-p Then it's very weird. We've tested the model on exactly the same setup. Which type of disk are you using? And if possible, could you re-install vLLM and try again?
@WoosukKwon Interesting. I am using a 500 GB balanced persistent disk, but I doubt that this makes a difference. I will try to reinstall and let you know what happens. Thanks for the quick responses, really appreciate it!
@nearmax-p Thanks! That would be very helpful.
Following up on the discussion: I ran into the same problem trying to load xgen-7b-8k-inst (I am not sure it is supported, but since it is based on LLaMA I think it should be). I have installed vLLM from source, as suggested, but when I run:
I get:
Where should I set trust_remote_code=True? Any feedback would be very welcome :)
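Assuming a vLLM build that already includes the option (e.g., a source install after it was merged), the flag is passed when constructing the engine; the model name below is taken from the comment above, the rest is a sketch:

```python
# Sketch: pass trust_remote_code when constructing the LLM.
from vllm import LLM

llm = LLM(
    model="Salesforce/xgen-7b-8k-inst",  # model mentioned above
    trust_remote_code=True,              # allow executing the repo's custom code
)
```

The API servers accept the equivalent --trust-remote-code command-line flag (again, assuming a version that includes it).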
@WoosukKwon I tested my code after reinstalling vLLM (0.1.2); unfortunately, nothing has changed. Maybe I should have mentioned that I am working from an NVIDIA PyTorch Docker image. However, all other models run just fine.
@WoosukKwon Now checking it outside of the container, will get back to you.
@nearmax-p If you are using Docker, could you try increasing the shared memory size (e.g., to 64 GB)? docker run --gpus all -it --rm --shm-size=64g nvcr.io/nvidia/pytorch:22.12-py3
@WoosukKwon Alright, it doesn't seem to be related to RAM but to distributed serving. Outside of the container, I am facing the same problem, even with mpt-7b, when I use tensor_parallel_size=2. With tensor_parallel_size=1, it works. I've used the default packages that were installed with vLLM; I've only uninstalled pydantic, but I'd assume that doesn't cause any issues.
@WoosukKwon Narrowed it down a bit. It is actually only a problem when using the AsyncLLMEngine.
This script causes the issue. When writing an analogous script with the normal (non-async) LLMEngine, the issue didn't come up.
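The script itself did not make it into this thread; a rough equivalent using AsyncLLMEngine with tensor_parallel_size=2 (the model, prompt, and request handling are assumptions) would be:

```python
# Rough sketch of an AsyncLLMEngine setup with tensor parallelism,
# approximating the configuration described above.
import asyncio

from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine
from vllm.sampling_params import SamplingParams

engine = AsyncLLMEngine.from_engine_args(
    AsyncEngineArgs(
        model="mosaicml/mpt-7b",
        tensor_parallel_size=2,
        trust_remote_code=True,
    )
)

async def main() -> None:
    final_output = None
    # generate() yields incremental RequestOutputs; keep the last one.
    async for request_output in engine.generate("Hello, my name is",
                                                SamplingParams(max_tokens=32),
                                                request_id="0"):
        final_output = request_output
    print(final_output.outputs[0].text)

asyncio.run(main())
```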
Hi @nearmax-p, we faced a similar issue. As a quick fix, setting
Closing this issue as stale, as there has been no discussion in the past 3 months. If you are still experiencing the issue you describe, feel free to re-open this issue.
I am trying to load MPT using the AsyncLLMEngine:
But I am getting this error:
ValueError: Loading mosaicml/mpt-7b-chat-local requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.
Is there any workaround for this, or would it be possible to add a trust_remote_code option to EngineArgs?
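For reference, later vLLM releases do expose this on the engine arguments; a sketch under that assumption (using the public mosaicml/mpt-7b-chat repo rather than the local copy named in the error):

```python
# Sketch: EngineArgs exposes trust_remote_code in later vLLM releases,
# so MPT's custom config/model code can be executed when the engine is built.
from vllm.engine.arg_utils import EngineArgs
from vllm.engine.llm_engine import LLMEngine

engine_args = EngineArgs(
    model="mosaicml/mpt-7b-chat",  # public HF repo; the error above used a local copy
    trust_remote_code=True,
)
engine = LLMEngine.from_engine_args(engine_args)
```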