Loading Models that require execution of third party code (trust_remote_code=True) #354

Closed
nearmax-p opened this issue Jul 4, 2023 · 15 comments
Labels
bug Something isn't working

Comments

@nearmax-p

I am trying to load MPT using the AsyncLLMEngine:


engine_args = AsyncEngineArgs("mosaicml/mpt-7b-chat", engine_use_ray=True)
engine = AsyncLLMEngine.from_engine_args(engine_args)

But I am getting this error:
ValueError: Loading mosaicml/mpt-7b-chat-local requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.

Is there a workaround for this, or would it be possible to add a trust_remote_code option to EngineArgs?
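
(For reference: on a vLLM build whose engine arguments expose a trust_remote_code flag, which is an assumption for the PyPI release discussed here, the call could look like the following sketch.)

```python
# Sketch only: assumes a vLLM build where AsyncEngineArgs accepts trust_remote_code.
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine

engine_args = AsyncEngineArgs(
    model="mosaicml/mpt-7b-chat",
    trust_remote_code=True,  # opt in to executing the model repo's custom code
    engine_use_ray=True,
)
engine = AsyncLLMEngine.from_engine_args(engine_args)
```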

@WoosukKwon
Collaborator

WoosukKwon commented Jul 4, 2023

Hi @nearmax-p, could you install vLLM from source? Then this error should disappear. Sorry for the inconvenience; we will update our PyPI package very soon.

@nearmax-p
Author

I see, thank you very much, this worked! One more issue I came across is that MPT-30B doesn't seem to load on 2 A100 GPUs.

I used the following command:

engine_args = AsyncEngineArgs("mosaicml/mpt-30b-chat", engine_use_ray=True, tensor_parallel_size=2)
engine = AsyncLLMEngine.from_engine_args(engine_args)

And got the following response:
```
llm_engine.py:60] Initializing an LLM engine with config: model='mosaicml/mpt-30b-chat', tokenizer='mosaicml/mpt-30b-chat', tokenizer_mode=auto, dtype=torch.bfloat16, use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=2, seed=0)
```

But the model never loads properly and cannot be called (I waited for 20+ minutes, and the model had already been downloaded from the Hugging Face Hub to my device). Have you encountered this before?

@WoosukKwon
Collaborator

@nearmax-p Thanks for reporting it. Could you share how much CPU memory you have? It seems this bug occurs when there is not enough CPU memory. We haven't succeeded in reproducing the bug, so your information would be very helpful.

@nearmax-p
Author

@WoosukKwon Sure! I am using an a2-highgpu-2g instance from GCP, so I have 170 GB of CPU RAM. That actually seems like a lot to me.

@WoosukKwon
Collaborator

@nearmax-p Then it's very weird. We've tested the model on exactly the same setup. Which type of disk are you using? And, if possible, could you re-install vLLM and try again?

@nearmax-p
Author

@WoosukKwon Interesting. I am using a 500GB balanced persistent disk, but I doubt that this makes a difference. I will try to reinstall and let you know what happens. Thanks for the quick responses, really appreciate it!

@WoosukKwon
Collaborator

@nearmax-p Thanks! That would be very helpful.

@jth3galv

jth3galv commented Jul 7, 2023

Following up on the discussion: I ran into the same problem trying to load xgen-7b-8k-inst (I am not sure it is supported, but since it is based on LLaMA, I think it should be).

I have installed vllm from source, as suggested, but when I run:

llm = LLM(model="xgen-7b-8k-inst")

I get:

  File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 669, in from_pretrained
    raise ValueError(
ValueError: Loading /home/ec2-user/data/xgen-7b-8k-inst requires you to execute the tokenizer file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.

Where should I set trust_remote_code=True?

Any feedback would be very welcome :)
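
(For reference: a minimal sketch of where the flag can go, assuming a vLLM build in which the LLM constructor accepts trust_remote_code and forwards it to the Hugging Face config and tokenizer loaders; verify against your installed version.)

```python
# Sketch only: assumes the LLM constructor accepts trust_remote_code.
from vllm import LLM, SamplingParams

llm = LLM(model="xgen-7b-8k-inst", trust_remote_code=True)  # allow the repo's custom code
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```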

@nearmax-p
Author

@WoosukKwon I tested my code after reinstalling vllm (0.1.2); unfortunately, nothing has changed. Maybe I should have mentioned that I am working from an NVIDIA PyTorch Docker image. However, all other models run just fine.

@nearmax-p
Author

@WoosukKwon now checking it outside of the container, will get back to you

@WoosukKwon
Collaborator

@nearmax-p If you are using Docker, could you try increasing the shared memory size (e.g., to 64 GB)?

docker run --gpus all -it --rm --shm-size=64g nvcr.io/nvidia/pytorch:22.12-py3

@nearmax-p
Author

@WoosukKwon Alright, it doesn't seem to be related to RAM, but to distributed serving. Outside the container, I am facing the same problem, even with mpt-7b, when I use tensor_parallel_size=2. With tensor_parallel_size=1, it works.

I used the default packages that were installed along with vllm; I only uninstalled pydantic, but I'd assume that doesn't cause any issues.

@nearmax-p
Author

@WoosukKwon Narrowed it down a bit. It is actually only a problem when using the AsyncLLMEngine.

from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine
from vllm.sampling_params import SamplingParams
from vllm.utils import random_uuid
import asyncio

# engine_use_ray=True wraps the engine in a Ray actor; this is the configuration that hangs here.
engine_args = AsyncEngineArgs(model="openlm-research/open_llama_7b", engine_use_ray=True)
engine = AsyncLLMEngine.from_engine_args(engine_args)

sampling_params = SamplingParams(max_tokens=200, top_p=0.8)
request_id = random_uuid()
results_generator = engine.generate("Hello, my name is Max and I am the founder of", sampling_params, request_id)

# Stream partial generations as the engine produces them.
async def stream_results():
    async for request_output in results_generator:
        text_outputs = [output.text for output in request_output.outputs]
        yield text_outputs


async def get_result():
    async for s in stream_results():
        print(s)

asyncio.run(get_result())

This script causes the issue. When writing an analogous script with the normal (non-async) LLMEngine, the issue didn't come up.
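
(For reference, a rough sketch of what such a non-async variant could look like, assuming the add_request()/step() interface of the LLMEngine from that era; this is a reconstruction, not the exact script that was used.)

```python
# Sketch only: non-async equivalent using LLMEngine; the interface may differ in newer releases.
from vllm.engine.arg_utils import EngineArgs
from vllm.engine.llm_engine import LLMEngine
from vllm.sampling_params import SamplingParams
from vllm.utils import random_uuid

engine_args = EngineArgs(model="openlm-research/open_llama_7b")
engine = LLMEngine.from_engine_args(engine_args)

sampling_params = SamplingParams(max_tokens=200, top_p=0.8)
engine.add_request(random_uuid(), "Hello, my name is Max and I am the founder of", sampling_params)

# Drive the engine synchronously until the request finishes.
while engine.has_unfinished_requests():
    for request_output in engine.step():
        if request_output.finished:
            print([output.text for output in request_output.outputs])
```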

@justusmattern27

Hi @nearmax-p, we faced a similar issue. As a quick fix, setting engine_use_ray to False worked for us; see the sketch below.
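
(A minimal sketch of that workaround applied to the repro script above; the engine then runs in-process rather than as a Ray actor.)

```python
# Sketch of the quick fix mentioned above: disable engine_use_ray so the
# async engine runs in the current process instead of inside a Ray actor.
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine

engine_args = AsyncEngineArgs(
    model="openlm-research/open_llama_7b",
    engine_use_ray=False,  # quick fix for the hang reported in this thread
)
engine = AsyncLLMEngine.from_engine_args(engine_args)
```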

@zhuohan123 added the bug (Something isn't working) label on Jul 18, 2023
@hmellor
Collaborator

hmellor commented Mar 8, 2024

Closing this issue as stale as there has been no discussion in the past 3 months.

If you are still experiencing the issue you describe, feel free to re-open this issue.

@hmellor closed this as completed on Mar 8, 2024
yukavio pushed a commit to yukavio/vllm that referenced this issue Jul 3, 2024
This PR writes the commit id to the untracked vllm/commit_id.py instead
of modifying version.py, mainly to prevent somebody from accidentally
committing their commit IDs, and to keep setup.py from polluting
developers' git status.
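
(A hypothetical sketch of the mechanism that commit message describes: at build time, write the current commit hash into an untracked vllm/commit_id.py rather than editing the tracked version.py. The function name and details are illustrative, not the actual setup.py code.)

```python
# Hypothetical illustration, not the actual setup.py implementation.
import subprocess
from pathlib import Path

def write_commit_id(package_dir: str = "vllm") -> None:
    """Record the current git commit in an untracked module."""
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    # vllm/commit_id.py is untracked, so this never dirties version.py or git status.
    Path(package_dir, "commit_id.py").write_text(f'__commit__ = "{commit}"\n')
```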
