
AccessViolationException in LlamaWeights.LoadFromFile() when the model file doesn't exist #441

Closed
godefroi opened this issue Jan 16, 2024 · 2 comments


@godefroi

I'm getting an AccessViolationException in LlamaWeights.LoadFromFile(), thrown from LLama.Native.NativeApi.llama_model_meta_count(LLama.Native.SafeLlamaModelHandle), when the model file does not exist:

[LLamaSharp Native] [Info] NativeLibraryConfig Description:
- Path:
- PreferCuda: True
- PreferredAvxLevel: AVX2
- AllowFallback: True
- SkipCheck: False
- Logging: True
- SearchDirectories and Priorities: { ./bin/Debug/net8.0/, ./ }
[LLamaSharp Native] [Info] Detected OS Platform: WINDOWS
[LLamaSharp Native] [Info] Detected cuda major version 12.
[LLamaSharp Native] [Info] ./bin/Debug/net8.0/runtimes/win-x64/native/cuda12/libllama.dll is selected and loaded successfully.
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA RTX A2000 8GB Laptop GPU, compute capability 8.6
Fatal error. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Repeat 2 times:
--------------------------------
   at LLama.Native.NativeApi.llama_model_meta_count(LLama.Native.SafeLlamaModelHandle)
--------------------------------
   at LLama.Native.SafeLlamaModelHandle.get_MetadataCount()
   at LLama.Native.SafeLlamaModelHandle.ReadMetadata()
   at LLama.LLamaWeights..ctor(LLama.Native.SafeLlamaModelHandle)
   at LLama.LLamaWeights.LoadFromFile(LLama.Abstractions.IModelParams)
   at Program+<Main>d__0.MoveNext()
   at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]](System.__Canon ByRef)
   at Program.Main(System.String[])
   at Program.<Main>(System.String[])
@martindevans
Member

Could you try this out with the master branch and see if you get more useful error messages?

#395 reported a similar issue, and #437 added extra checks (that the file exists and is readable) before the path is passed to llama.cpp.
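Until a release containing those checks is available, the crash can be avoided by validating the path on the managed side before calling LoadFromFile(). The sketch below is illustrative, not the library's own fix: LLamaWeights.LoadFromFile() and the ModelParams constructor follow the LLamaSharp API as used in this thread, while the path and surrounding program structure are hypothetical.

```csharp
using System;
using System.IO;
using LLama;
using LLama.Common;

// Hypothetical model path -- substitute your own .gguf file.
var modelPath = @"models/model.gguf";

// Guard 1: the file must exist. Passing a missing path straight to
// the native loader is what triggers the AccessViolationException
// reported above.
if (!File.Exists(modelPath))
{
    Console.Error.WriteLine($"Model file not found: {modelPath}");
    return;
}

// Guard 2: the file must be readable. Opening and immediately
// disposing a read handle surfaces permission problems as a
// managed IOException instead of a native crash.
try
{
    using (File.OpenRead(modelPath)) { }
}
catch (IOException e)
{
    Console.Error.WriteLine($"Model file is not readable: {e.Message}");
    return;
}

// Only now hand the path to native code.
var parameters = new ModelParams(modelPath);
using var weights = LLamaWeights.LoadFromFile(parameters);
```

This mirrors the approach described for #437: fail with a clear managed exception or error message before llama.cpp ever sees an invalid path.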

@martindevans
Member

0.10.0 has just been released; it includes the extra checks that were merged in #427.
