
[lmi][vllm][trtllm] add support for generation parameters from generation_config.json #2685

Open

siddvenk wants to merge 1 commit into master from the vllm-generation-config branch
Conversation

siddvenk (Contributor) commented Jan 24, 2025

Description

This will resolve #2672.

Note: this feature is available in vllm as of 0.6.6, but we currently use 0.6.3.post1; see this vllm commit: vllm-project/vllm@5aef498.

Once we upgrade to 0.6.6, we should be able to rely on vllm for this directly.
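For context, the underlying technique is to read the model's generation_config.json and treat its values as defaults that request-level parameters override. A minimal sketch of that idea (the helper names and key list here are illustrative assumptions, not the PR's actual code):

```python
import json
from pathlib import Path

# Illustrative subset of generation_config.json keys that map onto
# sampling parameters; the real handlers may support a different set.
GENERATION_KEYS = ("temperature", "top_p", "top_k", "repetition_penalty")

def load_generation_defaults(model_dir: str) -> dict:
    """Return sampling defaults from generation_config.json, if present."""
    path = Path(model_dir) / "generation_config.json"
    if not path.is_file():
        return {}
    config = json.loads(path.read_text())
    return {k: config[k] for k in GENERATION_KEYS if k in config}

def merge_params(request_params: dict, defaults: dict) -> dict:
    """Defaults only fill gaps; anything set on the request wins."""
    merged = dict(defaults)
    merged.update(request_params)
    return merged
```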

To enable this behavior, users must set OPTION_GENERATION_CONFIG=auto or option.generation_config=auto; making it opt-in preserves backwards compatibility.
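For example, either of the following equivalent settings enables it (the option names are taken directly from the description above; serving.properties is the usual place for option.* keys in LMI):

```
# As an environment variable
OPTION_GENERATION_CONFIG=auto

# Or in serving.properties
option.generation_config=auto
```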

Type of change

  • New feature (non-breaking change which adds functionality)

Checklist:

  • Please add the link of Integration Tests Executor run with related tests.
  • Have you manually built the docker image and verified the change?
  • Have you run related tests? Check how to set up the test environment here; one example would be pytest tests.py -k "TestCorrectnessLmiDist" -m "lmi_dist"
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

Feature/Issue validation/testing

Please describe the unit or integration tests you ran to verify your changes, summarize the relevant results, and provide instructions so they can be reproduced.
Please also list any relevant details of your test configuration.

Tested lmi, vllm, and tensorrt-llm locally using llama-3.1-8b-instruct, which contains a generation_config.json. I added some logging to validate that the parameters were set in the sampling params.
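A minimal sketch of that kind of validation logging, assuming vllm's SamplingParams and a hypothetical merged dict of defaults plus request parameters:

```python
import logging

from vllm import SamplingParams

logger = logging.getLogger(__name__)

# Hypothetical result of merging generation_config.json defaults with
# the request-level parameters.
merged = {"temperature": 0.6, "top_p": 0.9}

sampling_params = SamplingParams(**merged)
# Temporary debug logging to confirm the defaults from
# generation_config.json actually landed in the sampling params.
logger.info("Resolved sampling params: %s", sampling_params)
```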

@siddvenk force-pushed the vllm-generation-config branch from 40fe0eb to 798500c on January 24, 2025 at 21:41
Development

Successfully merging this pull request may close the following issue:

Default values for inference from generation_config.json are not being applied (#2672)