Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.10/dist-packages/mlc_llm/build.py", line 47, in <module>
    main()
  File "/usr/local/lib/python3.10/dist-packages/mlc_llm/build.py", line 41, in main
    parsed_args = core._parse_args(parsed_args)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.10/dist-packages/mlc_llm/core.py", line 444, in _parse_args
    parsed = _setup_model_path(parsed)
  File "/usr/local/lib/python3.10/dist-packages/mlc_llm/core.py", line 494, in _setup_model_path
    validate_config(args.model_path)
  File "/usr/local/lib/python3.10/dist-packages/mlc_llm/core.py", line 538, in validate_config
    config["model_type"] in utils.supported_model_types
AssertionError: Model type qwen2 not supported.
The NVILA repo supports it in HF Transformers and AWQ TinyChat. I have an action item to profile it and add TinyChat support to the OpenAI server. When it is more ready, I will retag dustynv/awq as dustynv/vila, since those are circular dependencies now.
Question
I used the VLM of Jetson Platform Services, where I saw your mlc-llm patch. I tried to use Efficient-Large-Model/NVILA-8B and got the traceback shown above. I think this may be caused by the MLC patches. How can I use NVILA?