[Bug]: Ultravox audio doesn't work with auto tool choice #14209

Open

erkintelnyx opened this issue Mar 4, 2025 · 1 comment
Labels: bug (Something isn't working)

erkintelnyx commented Mar 4, 2025

Your current environment

The output of `python collect_env.py`
Collecting environment information...
INFO 03-04 12:10:58 [__init__.py:207] Automatically detected platform rocm.
PyTorch version: 2.7.0a0+git3a58512
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42133-1b9c17779

OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 18.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.3.1 24491 1e0fda770a2079fbd71e4b70974d74f62fd3af10)
CMake version: version 3.31.4
Libc version: glibc-2.35

Python version: 3.12.9 (main, Feb  5 2025, 08:49:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI100 (gfx908:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42133
MIOpen runtime version: 3.3.0
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        48 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               16
On-line CPU(s) list:                  0-15
Vendor ID:                            AuthenticAMD
Model name:                           AMD EPYC 7713 64-Core Processor
CPU family:                           25
Model:                                1
Thread(s) per core:                   1
Core(s) per socket:                   16
Socket(s):                            1
Stepping:                             1
BogoMIPS:                             3999.99
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean flushbyasid pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid fsrm arch_capabilities
Virtualization:                       AMD-V
Hypervisor vendor:                    KVM
Virtualization type:                  full
L1d cache:                            1 MiB (16 instances)
L1i cache:                            1 MiB (16 instances)
L2 cache:                             8 MiB (16 instances)
L3 cache:                             256 MiB (16 instances)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-15
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Mitigation; safe RET
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pyzmq==26.2.1
[pip3] torch==2.7.0a0+git3a58512
[pip3] torchvision==0.19.1a0+6194369
[pip3] transformers==4.49.0
[pip3] triton==3.2.0+gite5be006a
[conda] Could not collect
ROCM Version: 6.3.42133-1b9c17779
Neuron SDK Version: N/A
vLLM Version: N/A
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

🐛 Describe the bug

When I run Ultravox v0.5 via:

$ VLLM_USE_V1=1 vllm serve fixie-ai/ultravox-v0_5-llama-3_3-70b --tensor-parallel-size 8 --download-dir /app/data/models --trust-remote-code --enable-auto-tool-choice --chat-template-content-format openai --chat-template /app/vllm/examples/tool_chat_template_llama3.1_json.jinja --tool-call-parser llama3_json --enable-chunked-prefill --max-model-len 9000

INFO 03-03 18:56:52 [__init__.py:207] Automatically detected platform rocm.
INFO 03-03 18:57:09 [api_server.py:912] vLLM API server version 0.7.4.dev181+gf35f8e22.d20250303
INFO 03-03 18:57:09 [api_server.py:913] args: Namespace(subparser='serve', model_tag='fixie-ai/ultravox-v0_5-llama-3_3-70b', config='', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template='/app/vllm/examples/tool_chat_template_llama3.1_json.jinja', chat_template_content_format='openai', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=True, tool_call_parser='llama3_json', tool_parser_plugin='', model='fixie-ai/ultravox-v0_5-llama-3_3-70b', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=True, allowed_local_media_path=None, download_dir='/app/data/models', load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=9000, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=8, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=True, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', 
generation_config=None, override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, dispatch_function=<function ServeSubcommand.cmd at 0x7f2598eb9e40>)
WARNING 03-03 18:57:09 [arg_utils.py:1434] Setting max_num_batched_tokens to 2048 for OPENAI_API_SERVER usage context.
INFO 03-03 18:57:34 [config.py:576] This model supports multiple tasks: {'classify', 'reward', 'score', 'embed', 'generate'}. Defaulting to 'generate'.
INFO 03-03 18:57:34 [config.py:1486] Defaulting to use mp for distributed inference
INFO 03-03 18:57:34 [config.py:1519] Disabled the custom all-reduce kernel because it is not supported on AMD GPUs.
INFO 03-03 18:57:34 [config.py:1661] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 03-03 18:57:40 [__init__.py:207] Automatically detected platform rocm.
INFO 03-03 18:57:57 [core.py:50] Initializing a V1 LLM engine (v0.7.4.dev181+gf35f8e22.d20250303) with config: model='fixie-ai/ultravox-v0_5-llama-3_3-70b', speculative_config=None, tokenizer='fixie-ai/ultravox-v0_5-llama-3_3-70b', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=9000, download_dir='/app/data/models', load_format=LoadFormat.AUTO, tensor_parallel_size=8, pipeline_parallel_size=1, disable_custom_all_reduce=True, quantization=None, enforce_eager=False, kv_cache_dtype=auto,  device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=fixie-ai/ultravox-v0_5-llama-3_3-70b, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
WARNING 03-03 18:57:57 [multiproc_worker_utils.py:309] Reducing Torch parallelism from 16 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 03-03 18:57:57 [custom_cache_manager.py:19] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager
INFO 03-03 18:57:57 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3, 4, 5, 6, 7], buffer_handle=(8, 10485760, 10, 'psm_c2d15247'), local_subscribe_addr='ipc:///tmp/42d28b49-4bba-4d95-a4fd-1d3640a6b3a0', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 03-03 18:58:02 [__init__.py:207] Automatically detected platform rocm.

both plain chat completion and tool calling work (a minimal tool-call request is sketched below).
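
For reference, a request along these lines succeeds and returns tool calls; the get_weather tool schema here is just an illustrative placeholder, not something from my actual setup:

from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")

tools = [
    {
        "type": "function",
        "function": {
            # Illustrative tool schema, not from the actual setup
            "name": "get_weather",
            "description": "Get the current weather for a given city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

chat_completion = client.chat.completions.create(
    model="fixie-ai/ultravox-v0_5-llama-3_3-70b",
    messages=[{"role": "user", "content": "What is the weather like in Berlin?"}],
    tools=tools,
    tool_choice="auto",
    stream=False,
)

# With auto tool choice enabled, the model may return tool calls here
print(chat_completion.choices[0].message.tool_calls)
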
But audio doesn't work in this case (pasting the base64 inline as well, for ease of running):

from openai import OpenAI


openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

text = "Tell me a fun fact!"

# audio_base64 = generate_audio(text)

# mp3 base64 for "Tell me a fun fact!"
audio_base64 = "SUQzBAAAAAAAI1RTU0UAAAAPAAADTGF2ZjU4Ljc2LjEwMAAAAAAAAAAAAAAA//NwwAAAAAAAAAAAAEluZm8AAAAPAAAAOQAAF/oACxAQFBQZGR0hISYmKiouMzM3Nzs7QERESEhNTVFWVlpaXl5jZ2dra3BwdHh4fX2BgYWKio6Ok5OXm5ugoKSkqK2tsbG1tbq+vsLCx8fL0NDU1NjY3eHh5eXq6u7y8vf3+/v/AAAAAExhdmM1OC4xMwAAAAAAAAAAAAAAACQD+wAAAAAAABf6idnQqgAAAAAAAAAAAAAAAAD/80DEAAAAA0gAAAAACTLadh9zN7PSKQYYPQvTSa6aTXTSGAy+ZHUpIrDoJhGZAjfoNGXZgLCwLGTQezbtBn//0mubd4sJjJg1mnfpMgBASHGZtwDD9CIPvSJ3lL9Q4bks+7t/J853W//zQsRbD/kJjAowTND8mhNfU7W/7fnDn32+X+t+cP6KFYNA8FMBAPkYQMQ3zLQ9vT79vQ9EMkSR7CcEDeRHzdKcnVp+6lMyZW3ZHX592YhCbreKNDqqalr7dcZd60H/ZtJNEpElWutM9//zQMR3C2AOPjYwxABvBLpiPnq9F3SefCNCIiO76FnSkI0PnKfJ9fL0V5LCtlLYkeL52CMRoo+LpMy09TpkYYMwQtKnAOA5ZW2Oyv4jhZvfd05w74pE0hyLDp3vWVml1Jyh5zrVrSmZ//NCxKQd2yIsCnmG3UCkCZYAoCRswMKHwK8ePFFG0GzMmNCL6LQqp8zS0u8pN7Dk4OauK3NwhQqQ1v1GlQQAQItxiTwK/UeYw5raw5GksBYjpz0YkOFEBAPzA7EMoAgRihNkK8A0Fwbk//NAxIgW0SJECU8YABtyOeYD8bB4yH01mSRhgos3NQszSxOsnucPMHVP9d8vMT1XTGiII4xmQX7SuKTSri24ta+anvMJPPUeeknxUdXWnUdaf//8f8YoJjyB09IYHlmGQY/x/////6z/80LEhyQL0kyrmEABfH////+HkGGTqs53O37fb7fZ4/HiMBCPiH2u6fFIet2Lrkl8aPPwpAqYXM4dTNU0CRIY5CRMS6XJoFXLUCQTWqbrzxfPF9B02RPKRW8YMqJQ3My+PB116TNrTYz/80DEUiUjIv5fj2gCSXNTeX1Gg57drqf03dZfN1uaDnJdzc4aGYw//1//2JcLwUSXGAH8eZJl9M3MDwwB0l///dTLX9dDlxBwg/xOFv9RqpY1E25JIkYk/nODk4zwztA+ztbIU60J+v/zQsQYGnmS6l3GYAJNO6czb3nX2GIcY6jh2eIR4ZnkBoeHJYSMHf7VLePoectmx9RQEtVpKHUSHGoqZTpmzUNsUBpYXSkuKP/1A1lQlslSvDuIn/6wmDQNFRgmLHg1LhAD3+YXGCmnWf/zQMQKFflixiDLDnziW+czhnQk9XStEID5HtQq2BAmWQTMcCYS77OmhIPFbl0JTRfK86W0g2J2B/medo1WVl1+hp6YyHAyID5uR6VjBEsMB4W3FDYxR0ln+5ZFwATANB3KtlKhA6Y0//NCxA0XcX7CFMvElMzGtGduSiJU4xP9IELuAxd6f4EosPlDAdj0HBOnT7FLIILQYzK+nes3taDIRqWVSsqhyys5f/6GcKM6ojhBYSh9rzTjomZln8jxLyTrd3//soAAACToAoAy7ruT//NAxAsWQZbCPsJLBGM3ZanL/q0Y6BkcXwvVZUI3M0sZdpZQhC3lN/FbmRggtZCJVi9S+qkRHPZFI/GA4gyLMuhkI/VF2/KaU51KwgMIAQ0UCoaIs//43dpJa/AiC7GycMBzLcufVsD/80LEDRRQ+r2+09ZYQRB4ZDUV08BhB7FKVzm8L6eD97FxlqFeIYqLZpcdsv8C0oakuYhipoTK9qpe/jx5kkktyjKvlUOIBQnLK/+GRhNv//pqqACLkskmwAG8q++zojIUkjF53d+jliP/80DEFxRBNspew8xU69lMnzLnreMyjxUe3mCwUEBCCEHEguMc8HtHe2Qpj04PtT1H8Nsb/Br9+lFQ8TlAkHAOEw178COHk//u61dZB0tESEBQxnjGo+RAa3+qaZii17f6lhYWM/9lmv/zQsQhEoG6uADKFPBVXWtm6iR81DWpAAdjb0ZRABeEzIcMQMBSL0Pr51/tZ9LFhFvs+LFWEKAUo7AX9toSvUGWvqRhT73GYL4NXmKcW+Y391nl3LPBar3W4dK0H40XIH/1MLX/9//brP/zQMQzE1oy/lZQESbokvC/G+tX7/ce7oNCmXf/8WUii7Qc7tRK4nXgCp7/wBqUQClk2mBHvNpdswRkiv99vJ+QafXqrD8Q2b/CAAFdNxbDE8RKe9AwMEU/cMAggJvzQ++4YGgOJwiE//NCxEAUYUbmXgvGOKln2gUOm/9Y+TJwytP////+hdxiFV3ip/XgDOyul0JxbyxMcJxOCoVrQ/CE+3m1y2GKbsVavO/R852i0/XmglfRuApnBNyq0yWHuyWWNhNwVGiRBE7GzoqIwr/5//NAxEoUMOrufmCYgFIoChJcAnUntEuegEJbLjEYUHka7cSFCVSQKgM4RKydVSyVGq1yR8zc6H+mgsMiRKDdD2oCC0si5PPlZP8pQEHFjBDbU0mRri6yEuIsVA4bcwWesgc5RdOQ2lv/80LEVBSBArQCeYZMtbrbJJHAefA0iRCq0syxsslJaWXInKm/aukCkYqqpAUyD78AjwrO4jWMJMBo+ovWLGyqDwQQxnIPefT2fWhZapHTAdw9Fn7dH9qr2lbLNJFXWKqEIBiB4othgAX/80DEXhN5BvpeMYYyixeRWU0jmQeuYGw41SaFw47OSPZMz19bZZ1GzzBHYt0uk0SdS4mgmhRnQcupnxeLlEKacQOSo22M//RX+Lddi02ciPMae/tkaAc7zNqQUdopd8vm7TcUh11hU//zQsRrE6l6ulIYRYDOeUzALQLocRpBgQDiNMfyLKYF4FcLCOJVpnId+Nz9SxUCbSYkJrSpo22t8QEy4n4Ql9Tv///+RQGXYmi4G6XWUQMEHDEQRlzw0yDYKNgUDIAl3IVmKyZjgUYAJv/zQMR4FPCmykjGcEgRZdEHOmNohGul2dNcGplypisRdUKAgkKVqxN3LZgpcHRBGh2E4eqnRH/V/0Nr56j5278sPYUsRxQ7LLc5kQhcYyDRYeADv/6TaeAVKmKamVUZ2QZxv8TdpYT4//NCxH8cCZq+RN5QrMkZLUvqvJ4kwjDPeZrDjBawI+1eeUb+U9nL4qk2BRRsQCcO6/IsBbIKTQhuhf/9ZSlh26P7lM6FFLq3//+/3rVFjYsCNce8W//rEUxlV8oDlc3X6ZaWQe+/UkAn//NAxGoXmlLeNniNVmTtHtgwEGu1ec8Mpd/cMkpe8WwYYtVcd6JrbCE7YcAmZOYITm0ToDRsre/p6nfhr6H6p4Y/XswtKet+rI/9/8tV+2k8utUoQhYB3GmvjoAx09AMhKac+AXKDR3/80LEZhQRnuGWecUKjtK+sgfyHsq8tX0
JleqZNs/0iq7GAl4ZhzNhaF4YKh7OMqncs+p+bUtYDLFEdw75rJcpu4/wL/0t5MUEkbgBAZFTKnmFlnt8KXMsqxkwZMokxyWuJADpe/HJms3/80DEcRPxUqzU0kbF8QyoEQyzAaNGQ0Vo2YJSz0C4I3o/jrdn83zdZ4y80wmNgpCFQz9T4hpf/mdFv///8LdaBMAjAAhkAGNNi/yKh8nLq4QywwCHSsIGAHZD2twZeGEaNuZZDBN9nP/zQsR8FOlioZzTzm5QkQSd74neuSEijoTgZxCcs6fP//+me7/Tc9BDf+8DPcEL5c0X6cnVCkAyXbDy0oN4BqR6t5/HEnZrxZt7KVDlB0q1BlW0r/vmaMiLhj4ZFkWexFT3JDSkhH3gSP/zQMSEE8l+qb7SRnRGbEcjUiPnSQnaSkd2k0xP1MI33coRfckJoxJPu9RqcyMVivFFBQQOzlDaSrbcW0eyVUcgQLo6zVAwFOSdATrwUpZXUcJWYRpjECUrMV4kyMeO+wozIULkeGHa//NCxI8lM2qwXnjSvGZNkYrRzZa5dJI25Lrhl8+8EGiYQ9ptVzS84GJdgrHf1AQlpNuXaXorGvVhMUmv9Y/ZOzjfhj+UCC0em0w4YdDX6CFCSgAZHH1Iocwzpy43ar7p82yuemOq6WGz//NAxFYkg2rmXkjY/quwZpTJ4yedjPLsGEDikeVWwwwp28ZrBBPQVrtVtaci5suwPWeXaucq5HxSucjQWzFiEpLWCuXY14hFn68rWXWy2CQKknJpY/Ia2OKJ5eZP3670iTwiiBjmVur/80LEHxRBurpTRhgAio+/NPcifIU7Pz7dv9Ij6kV1macaU8sMZHUEnT2fDXRha6gNsIDooJzEXYz++sb6fQ7e5plFguPbp2WNDDsQs9uukYlYpM2a5Ll6ARekvNIxTbSsPxll7qPyrY7/80DEKh5RwrgBmcAA8/WNJGJY5FiJwy61Nd5jTQJ78WPaYtx7X9u8xzxudyjFnOxnnyGZlrFDLK1L/8x/9Xv/8P3zeeGeH8x5Q1KfFyotFg/9DNQAwVilbf//QlXdkGAakbHxBq8FC//zQsQLFTDW5ZfPQAIu1VCJ2ysLCskpZpK8/0wos5QxhhnEMLoOQwQOxrvX6LOeLjGwyLtChAIzQeOdTHbfUGmBm90IprbyUs8QgaY9n/9eBRBQXTSUqYWSgakDCHwNBafHMJ4Lumg8Af/zQMQSFBEe4fYLBh6nn8hqtgaMB2OqIS1VOIgokRe7CBIgQEBra2qolK9LItxIOHTDg85QlWp6GJ2MEjEj81u09np///+sQJYh+tVYkG5HB/hrG+ssceYfr8RJg11D0dHmNVOhkSlV//NCxBwTmcrZVnrEXk/T9FRlo5W3TYb7HEllfp2ZaxYkzjVLdrqm3y9egdlOeSrjaLK5TTV6f///AYoeYsrAyruYFKS6UZJkWDcm0++UDm/ZbLRSLYbegK6YrMm4i7nMz1JeK/BXxXPI//NAxCkTeSbpfgvGDkUmDwcf6UgqOF1qNJNmnGAyZWY6iz0fyTyL137f/68mDQnijDGBCAXyODykdCFnoHyw4nif+wYnBDsdWbOtFmxEeTcnsXTYn2XzZZFYhkV2XvETGxLLCWHCLJT/80LENhRZptF2SEdG4eX6VjPQ2UXImxVCtBQqMCj3U1////bdLKetKhQvdkHDyDChposJ3WVc0fNLqdB8IKlFH4XWMCNKMuYmw22JXz1mkc3DgJRg6NpJoOgAQEsgUARBxP8ifKbq1h3/80DEQBMRHtC2QMUKMioNo//6cjANZjeHagWbhe3aOIkGjQKTJJQkeK7aVOObU5sQbJFHNfsFfs5s8abNy2EpsYmrpgTf0v/4f8/qqXq22a5WOSQ4/nnTCQ2IgMt3t//9m0eJwEFj4v/zQsROFHHyyXZZhjrPIiu6ArCcYOynwGPY7RMXG0sVh1C3Bi43/XwWXUGLCEszUrAIm9mZj/+NUCkYUgJxIITD1JgoYFnzuwNCEBZL9OIgoDIiKgrkuVIO1f+R5Z/wVbUEJ2GljTSyLP/zQMRYE9keob9PGABIUMue4ruFPmzSPyadobs8PYnmpkyJfLg9DFCkph4EUvvZbI0BgyYQA5jmiloLOF+gNYLMChhyA3FqZFJGbm6rMsIoHML4Ugl6Q8mdVSPdv0CgZmKc3MGJR90F//NCxGMhWjJcCZhoAi0NWhZN1p/rnyoJpY4AHxYAY9DCtWQdppFy6hgIf6G/9z1KUtsAkoYGQZCkbi0YzvZZrsRe3if8DDMqFe27xqHhWu0MiPcUxlPLB1nbm797vvY6WFwIMCcU8qYq//NAxDkiijagy494AFkK9t/GV0YFoMMk6fYlBIkkKUuXtNxIlbUm21uaHrhUPGtRtfe6dYZLw3kfWoze4Niuy7pjV87pjW/66xT////vKK9jA58cCe/0f9GD6nUHx1VDr+IQCwSiABD/80LECRYKdqG9iFAAgwAAfVOlixifI0iZ7kpZfmsQ/isF8LQ+eqeIoWiEBUaK6od+KoLIrCKH3//nGyEiGYCv9v/isDcUC6BaHolCuC8IYjOX////KGs9B8dhpaoCBwRiQSCQSAQRBAT/80DEDBcLytJfhSgBAA8aL8OCgfxRgUd3x5187iI1fJn1/f85S//59FZP+jc53urnHrIf/+r8hCB8XkWiqpjZhcXFP//FydToRn+QXOLgQRfi6EX///////UPh4RV/+t+/+2tbcETHP/zQsQKFqJfFl/FKAIGQtsrqzNiSjFdVapDMjFZ33Q1GamTLMRHoVF2/uyCsWKUlPdOymuj90O1SK63Sv/2XIzlcqOxiDYKcgkMWETxoVMzJZzU24SAr8rBmDUXQeWJpnhoaYntZDo5pv/zQMQLFwJS3x4wxPTpqAOVXfOuV42O0lG4QKcpl0HTZiKF5LfLZjRGePKbRnMp5NR1DhSl0d3T26L/KZHoZ06f/TTzOpWNOMZbsGZL71AUAvET1WppDQdPMaSLERa3V2TWuTVuYSgM//NCxAoT2dLmXhmGBulgDAuqCOMAQN2MjEJRpmKl15W5Par9kNVz6s9oev+f9QCc10BU0bvqtgpvOHsuDAUcgpXe/zpIqeIoib/cdnv0eZiJ8kpRGgS4nAGsJLOciik6gkjIBCDZWRPP//NAxBYT6lqE/jFE7ISYicKzGzO6ZWQz8tDgogLrOxnMcr2/mrlQxUlEo9//ys+YxxjOUuv/+nR1KX5jiTPWMJYafiH9aV04bi4B0UoQGSayLgPHU2JWSiW0Rsuw0VFjolQ4gecbkKH/80LEIRPhjmA8YkZQpD/ypSlxiYwbGGzz1QdmzuRZbqpFhxk7cyuCa2LuhqGILkWQTIbjAGDnOoov2NUZoPCa6bE6O5VKpxjRVShAhaQ4MsjQUk1RkrZf5GRxSaZL1a5xeufIeDhxlaD/80DELRLxslASeYZsUt7Miany9bjTzpqf4LYgs5Bxg2IIMtlVVJ/+n9P/rQtdMkQuBwuOywAAxIcOgEsJgIxr3i7FXpJA0wqNc8ULlx
GBQmwsvPAZDTd6zKQ2JpF9Dy9aIsfrYirs6f/zQMQ8ESAmWDRLBADGenWeqyf3fqUajISoQDY4NGmlWkOU4FOg2AQEbJpQ5IGJT1wQNB0OYSijSIEJAIKBwQtQEhKOBoKiKCYieLBMwhVvp/+z+rpt/7kAkViko3GBIWPqUHa2KDu3//NCxFIQcEJMEkpGCO5L2Z5vVT1fmVP6W0+6tW//+no/2HW2a/IqCtAkSh0SA6BWBoOhIyp+NGD2EUJOoQJH1hAenPcwy1z2sZQlSLHMaHSqRvRXq9KKEr1PAXRV7KLSrTxVCZgGNuNJ//NAxGwKWA5RnhjGALTUstCamkUqQVTZ0I9q08iP/+eX/4Gv/xzXbPm+35X2r/+u5NUIARoBCcqppV2OgImKpM3qSqTAQowoCKP/qxV2Y6uoCJIMBNq3f//2Y9V1UoBATBQICMFTuDX/80LEnQ+ABjQMEMYARwaWHSrtQdW5QNLBVwlBVwiBUNKPdQd4KwVd/IlQVwVDRZUIlkZWWUEDBAgoYKCBhA4SWWVHL/9WoIGCllljoX////ZZLLPyVlDAwQMIDBggYISA4H/1CwuKitL/80DEuwoIBkpeCMQADIVFRUUFm4SFhYWF/+KioroCoqKCwsLC3rFVTEFNRTMuMTAwVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVf/zQsTtFhF2BUIwRiBVVVVVVVVVVVVVVVVVVVVVVVVVVUxBTUUzLjEwMFVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVf/zQMTwFOF5ICwYBuBVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV//NCxKMAAANIAAAAAFVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV"


client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model = "fixie-ai/ultravox-v0_5-llama-3_3-70b"

chat_completion = client.chat.completions.create(
    messages=[
        {"role": "user", "content": [
            {
                "type": "audio_url",
                "audio_url": {
                    # Any format supported by librosa is supported
                    "url": f"data:audio/mp3;base64,{audio_base64}"
                },
            },
        ]},
    ],
    model=model,
    stream=False,
)

print(chat_completion)

On the vLLM side I get:

ERROR 03-03 19:06:24 [serving_chat.py:664] Error in chat completion stream generator.
ERROR 03-03 19:06:24 [serving_chat.py:664] Traceback (most recent call last):
ERROR 03-03 19:06:24 [serving_chat.py:664]   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/serving_chat.py", line 362, in chat_completion_stream_generator
ERROR 03-03 19:06:24 [serving_chat.py:664]     async for res in result_generator:
ERROR 03-03 19:06:24 [serving_chat.py:664]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 208, in _generate
ERROR 03-03 19:06:24 [serving_chat.py:664]     q = await self.add_request(
ERROR 03-03 19:06:24 [serving_chat.py:664]         ^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-03 19:06:24 [serving_chat.py:664]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 153, in add_request
ERROR 03-03 19:06:24 [serving_chat.py:664]     request = self.processor.process_inputs(request_id, prompt, params,
ERROR 03-03 19:06:24 [serving_chat.py:664]               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-03 19:06:24 [serving_chat.py:664]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/processor.py", line 129, in process_inputs
ERROR 03-03 19:06:24 [serving_chat.py:664]     preprocessed_inputs = self.input_preprocessor.preprocess(
ERROR 03-03 19:06:24 [serving_chat.py:664]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-03 19:06:24 [serving_chat.py:664]   File "/usr/local/lib/python3.12/dist-packages/vllm/inputs/preprocess.py", line 766, in preprocess
ERROR 03-03 19:06:24 [serving_chat.py:664]     return self._process_decoder_only_prompt(
ERROR 03-03 19:06:24 [serving_chat.py:664]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-03 19:06:24 [serving_chat.py:664]   File "/usr/local/lib/python3.12/dist-packages/vllm/inputs/preprocess.py", line 715, in _process_decoder_only_prompt
ERROR 03-03 19:06:24 [serving_chat.py:664]     prompt_comps = self._prompt_to_llm_inputs(
ERROR 03-03 19:06:24 [serving_chat.py:664]                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-03 19:06:24 [serving_chat.py:664]   File "/usr/local/lib/python3.12/dist-packages/vllm/inputs/preprocess.py", line 347, in _prompt_to_llm_inputs
ERROR 03-03 19:06:24 [serving_chat.py:664]     return self._process_multimodal(
ERROR 03-03 19:06:24 [serving_chat.py:664]            ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-03 19:06:24 [serving_chat.py:664]   File "/usr/local/lib/python3.12/dist-packages/vllm/inputs/preprocess.py", line 277, in _process_multimodal
ERROR 03-03 19:06:24 [serving_chat.py:664]     return mm_processor.apply(prompt, mm_data, mm_processor_kwargs)
ERROR 03-03 19:06:24 [serving_chat.py:664]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-03 19:06:24 [serving_chat.py:664]   File "/usr/local/lib/python3.12/dist-packages/vllm/multimodal/processing.py", line 1513, in apply
ERROR 03-03 19:06:24 [serving_chat.py:664]     self._validate_mm_placeholders(mm_placeholders, mm_item_counts)
ERROR 03-03 19:06:24 [serving_chat.py:664]   File "/usr/local/lib/python3.12/dist-packages/vllm/multimodal/processing.py", line 1423, in _validate_mm_placeholders
ERROR 03-03 19:06:24 [serving_chat.py:664]     raise RuntimeError(
ERROR 03-03 19:06:24 [serving_chat.py:664] RuntimeError: Expected there to be 1 prompt updates corresponding to 1 audio items, but instead found 0 prompt updates! Either the prompt text has missing/incorrect tokens for multi-modal inputs, or there is a problem with your implementation of merged multi-modal processor for this model (usually arising from an inconsistency between `_call_hf_processor` and `_get_prompt_updates`).

The result is the same without specifying a chat template.

This works when auto tool choice is not enabled (see the command below).
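
That is, roughly the same command with the auto tool choice flags dropped (here I also drop --tool-call-parser and the chat template, since they are only needed for tool calling):

$ VLLM_USE_V1=1 vllm serve fixie-ai/ultravox-v0_5-llama-3_3-70b --tensor-parallel-size 8 --download-dir /app/data/models --trust-remote-code --chat-template-content-format openai --enable-chunked-prefill --max-model-len 9000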

I found this thread, which mentions that it should work.

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
erkintelnyx added the bug label on Mar 4, 2025
DarkLight1337 (Member) commented:

cc @farzadab
