Use info instead of warning
DarkLight1337 committed Oct 17, 2024
1 parent 20d7ab6 commit 9b7746d
Showing 2 changed files with 12 additions and 16 deletions.
18 changes: 9 additions & 9 deletions docs/source/models/supported_models.rst
@@ -294,6 +294,10 @@ Text Embedding
-
- ✅︎

.. important::
    Some model architectures support both generation and embedding tasks.
    In this case, you have to pass :code:`--task embed` to run the model in embedding mode.

Reward Modeling
---------------

@@ -424,7 +428,7 @@ Text Generation
- :code:`google/paligemma-3b-pt-224`, :code:`google/paligemma-3b-mix-224`, etc.
-
- ✅︎
* - :code:`Phi3VForCausalLM` (see note)
* - :code:`Phi3VForCausalLM`
- Phi-3-Vision, Phi-3.5-Vision
- T + I\ :sup:`E+`
- :code:`microsoft/Phi-3-vision-128k-instruct`, :code:`microsoft/Phi-3.5-vision-instruct` etc.
@@ -462,10 +466,6 @@ Text Generation
For :code:`openbmb/MiniCPM-V-2`, the official repo doesn't work yet, so we need to use a fork (:code:`HwwwH/MiniCPM-V-2`) for now.
For more details, please see: https://github.com/vllm-project/vllm/pull/4087#issuecomment-2250397630

.. note::
The :code:`Phi3VForCausalLM` architecture supports both generation and embedding tasks.
For text generation, please pass (:code:`--task generate`) to run the model in generation mode.

Multimodal Embedding
--------------------

@@ -479,16 +479,16 @@ Multimodal Embedding
- Example HF Models
- :ref:`LoRA <lora>`
- :ref:`PP <distributed_serving>`
* - :code:`Phi3VForCausalLM` (see note)
* - :code:`Phi3VForCausalLM`
- Phi-3-Vision-based
- T + I
- :code:`TIGER-Lab/VLM2Vec-Full`
- 🚧
- ✅︎

.. note::
The :code:`Phi3VForCausalLM` architecture supports both generation and embedding tasks.
For text generation, please pass (:code:`--task embed`) to run the model in embedding mode.
.. important::
    Some model architectures support both generation and embedding tasks.
    In this case, you have to pass :code:`--task embed` to run the model in embedding mode.
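
For illustration, a minimal offline sketch of running the dual-capability Phi3VForCausalLM checkpoint listed above in embedding mode; it assumes the Python LLM constructor accepts the same task value as the --task CLI option, which may vary between vLLM versions:

    from vllm import LLM

    # task="embed" is assumed to mirror the --task embed flag described above;
    # trust_remote_code is typically needed for Phi-3-Vision-based checkpoints.
    llm = LLM(model="TIGER-Lab/VLM2Vec-Full", task="embed", trust_remote_code=True)

    # Text-only request kept minimal; image inputs would be supplied via
    # multi_modal_data in the same encode() call.
    outputs = llm.encode(["A photo of a cat sitting on a windowsill"])
    print(len(outputs[0].outputs.embedding))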

----

10 changes: 3 additions & 7 deletions vllm/config.py
@@ -1,6 +1,5 @@
import enum
import json
import warnings
from dataclasses import dataclass, field, fields
from typing import (TYPE_CHECKING, Any, ClassVar, Dict, Final, List, Literal,
Mapping, Optional, Set, Tuple, Type, Union)
@@ -274,12 +273,9 @@ def _resolve_task(
task = next(iter(supported_tasks))

if len(supported_tasks) > 1:
msg = (
f"This model supports multiple tasks: {supported_tasks}. "
f"Defaulting to '{task}'. As this behavior may change in "
"the future, please specify one explicitly via `--task`.")

warnings.warn(msg, stacklevel=2)
logger.info(
"This model supports multiple tasks: %s. "
"Defaulting to '%s'.", supported_tasks, task)
else:
if task_option not in supported_tasks:
msg = (
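
For context, a self-contained sketch of the pattern the hunk above switches to, a module-level logger with lazy %-formatting in place of warnings.warn; the function name and arguments are illustrative rather than the actual vllm.config internals:

    import logging
    from typing import Set

    logger = logging.getLogger(__name__)

    def pick_default_task(supported_tasks: Set[str]) -> str:
        # Illustrative stand-in for the real resolution logic: pick one task
        # deterministically.
        task = sorted(supported_tasks)[0]
        if len(supported_tasks) > 1:
            # %-style arguments defer string formatting until the log record
            # is actually emitted, unlike an eagerly built f-string.
            logger.info(
                "This model supports multiple tasks: %s. Defaulting to '%s'.",
                supported_tasks, task)
        return task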
