This repository has been archived by the owner on Aug 20, 2024. It is now read-only.
I am using an Onnx model directly for chat completion:
var builder = Kernel.CreateBuilder();
builder.AddOnnxRuntimeGenAIChatCompletion("phi3", @"C:\git\Phi-3-mini-4k-instruct-onnx\cpu_and_mobile\cpu-int4-rtn-block-32")
.AddLocalTextEmbeddingGeneration();
But a call to a SemanticTextMemory object's .SaveInformationAsync(...) method gives the following error:
System.MissingMethodException
HResult=0x80131513
Message=Method not found: 'System.ValueTuple`3<System.ReadOnlyMemory`1<Int64>,System.ReadOnlyMemory`1<Int64>,System.ReadOnlyMemory`1<Int64>> FastBertTokenizer.BertTokenizer.Encode(System.String, Int32, System.Nullable`1<Int32>)'.
Source=SmartComponents.LocalEmbeddings
StackTrace:
at SmartComponents.LocalEmbeddings.LocalEmbedder.Embed[TEmbedding](String inputText, Nullable`1 outputBuffer, Int32 maximumTokens)
at SmartComponents.LocalEmbeddings.LocalEmbedder.Embed(String inputText, Int32 maximumTokens)
at SmartComponents.LocalEmbeddings.SemanticKernel.LocalTextEmbeddingGenerationService.GenerateEmbeddingsAsync(IList`1 data, Kernel kernel, CancellationToken cancellationToken)
at Microsoft.SemanticKernel.Embeddings.EmbeddingGenerationExtensions.<GenerateEmbeddingAsync>d__0`2.MoveNext()
at Microsoft.SemanticKernel.Memory.SemanticTextMemory.<SaveInformationAsync>d__3.MoveNext()
at LocalChat.Helpers.MemoryHelper.<PopulateInterestingFacts>d__0.MoveNext() in C:\git\ai-agent-sk\LocalChat\LocalChat\Helpers\MemoryHelper.cs:line 19
For what it's worth, I downloaded the bge-micro-v2 model directly from Hugging Face and was able to get this to work by using the .AddBertOnnxTextEmbeddingGeneration(...) extension method.
var modelPath = @"C:\git\Phi-3-mini-4k-instruct-onnx\cpu_and_mobile\cpu-int4-rtn-block-32";
var textModelPath = @"C:\git\bge-micro-v2\onnx\model.onnx";
var vocabPath = @"C:\git\bge-micro-v2\vocab.txt";
var builder = Kernel.CreateBuilder();
builder.AddOnnxRuntimeGenAIChatCompletion("phi3", modelPath)
    .AddBertOnnxTextEmbeddingGeneration(textModelPath, vocabPath);
    //.AddLocalTextEmbeddingGeneration();
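For context, here is a minimal end-to-end sketch of how the working Bert ONNX embedding setup might be wired into a SemanticTextMemory so that SaveInformationAsync no longer goes through SmartComponents.LocalEmbeddings. This is my sketch, not code from the thread; SemanticTextMemory, VolatileMemoryStore, and ITextEmbeddingGenerationService are part of Semantic Kernel's experimental memory surface (VolatileMemoryStore ships in the Microsoft.SemanticKernel.Plugins.Memory package) and may change between versions:

```csharp
// Experimental SK memory APIs may require suppressing SKEXP diagnostics,
// e.g. #pragma warning disable SKEXP0001, SKEXP0050
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Embeddings;
using Microsoft.SemanticKernel.Memory;

var modelPath = @"C:\git\Phi-3-mini-4k-instruct-onnx\cpu_and_mobile\cpu-int4-rtn-block-32";
var textModelPath = @"C:\git\bge-micro-v2\onnx\model.onnx";
var vocabPath = @"C:\git\bge-micro-v2\vocab.txt";

var builder = Kernel.CreateBuilder();
builder.AddOnnxRuntimeGenAIChatCompletion("phi3", modelPath)
    .AddBertOnnxTextEmbeddingGeneration(textModelPath, vocabPath);
var kernel = builder.Build();

// Pull the embedding service registered above and back the memory
// with a simple in-memory store.
var embeddingService = kernel.GetRequiredService<ITextEmbeddingGenerationService>();
var memory = new SemanticTextMemory(new VolatileMemoryStore(), embeddingService);

// The call that previously threw MissingMethodException.
await memory.SaveInformationAsync(
    collection: "facts",
    text: "Phi-3-mini is a small language model that runs locally via ONNX.",
    id: "fact-1");
```

The key point of the design is that the embedding generation now flows through the Bert ONNX connector rather than SmartComponents.LocalEmbeddings, sidestepping the FastBertTokenizer version mismatch in the stack trace above.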
Hit the "..." button next to "Train" and select "Clone repository" for instructions on how to pull the large files with Git.
Wherever you clone the files to, use the paths for "model.onnx" and "vocab.txt" and pass those in.
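As a concrete sketch of that cloning step (my addition, not from the thread; the Hugging Face repo id `TaylorAI/bge-micro-v2` is an assumption based on the model name, and the target path matches the snippets above):

```shell
# The .onnx weights are stored as Git LFS large files, so LFS must be set up first.
git lfs install

# Clone the embedding model repo; adjust the target directory to taste.
git clone https://huggingface.co/TaylorAI/bge-micro-v2 C:/git/bge-micro-v2
```

After cloning, `onnx/model.onnx` and `vocab.txt` inside the target directory are the two paths to pass to AddBertOnnxTextEmbeddingGeneration.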
The only other thing was to add the right libraries to get that AddBertOnnxTextEmbeddingGeneration extension method. I think it's in the Microsoft.ML.OnnxRuntime NuGet package, but I'm not sure. Here are all the packages I referenced to get this working:
Give it a try and let me know if you get it working. Good luck!
Generally speaking, I followed this blog post to test out Semantic Kernel with local RAG, but I wanted to avoid using a local HTTP server, which is why I used OnnxRuntimeGenAIChatCompletion for Phi-3: https://techcommunity.microsoft.com/t5/educator-developer-blog/building-intelligent-applications-with-local-rag-in-net-and-phi/ba-p/4175721.