
Server.cpp: Documentation of JSON return value of /completion endpoint #3632

Merged
merged 2 commits into ggml-org:master on Oct 17, 2023
Conversation

coezbek
Contributor

@coezbek coezbek commented Oct 15, 2023

Improved the documentation of the /completion endpoint of server.cpp to clarify some ambiguous wording (e.g. "initial prompt"). Also documented the result JSON for both streaming and non-streaming modes.
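To illustrate the two response shapes this PR documents, here is a minimal client-side sketch. The payloads below are hypothetical examples, not output captured from a real server: the `content` and `stop` fields follow the documented /completion response, but the full field set varies by llama.cpp version, and in streaming mode the server emits server-sent-event lines prefixed with `data: `.

```python
import json

# Non-streaming: the server returns a single JSON object with the
# whole generation in "content" (example payload, hand-written).
non_streaming = '{"content": "Hello, world!", "stop": true}'
result = json.loads(non_streaming)
full_text = result["content"]

# Streaming: each "data:" line carries a JSON fragment; the client
# concatenates the "content" pieces until "stop" is true
# (example event lines, hand-written).
stream = [
    'data: {"content": "Hello", "stop": false}',
    'data: {"content": ", world!", "stop": true}',
]
pieces = []
for line in stream:
    chunk = json.loads(line[len("data: "):])
    pieces.append(chunk["content"])
    if chunk["stop"]:
        break
streamed_text = "".join(pieces)

print(full_text)
print(streamed_text)
```

Either way, the client ends up with the same text; streaming only changes how it is delivered.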

@coezbek coezbek changed the title Added documentation of JSON return value of /completion endpoint Server.cpp: Documentation of JSON return value of /completion endpoint Oct 15, 2023
@ggerganov ggerganov merged commit 3ad1e3f into ggml-org:master Oct 17, 2023
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 19, 2023
* 'master' of github.com:ggerganov/llama.cpp:
  fix embeddings when using CUDA (ggml-org#3657)
  llama : avoid fprintf in favor of LLAMA_LOG (ggml-org#3538)
  readme : update hot-topics & models, detail windows release in usage (ggml-org#3615)
  CLBlast: Fix temporary buffer size for f16 conversion (wsize)
  train-text-from-scratch : fix assert failure in ggml-alloc (ggml-org#3618)
  editorconfig : remove trailing spaces
  server : documentation of JSON return value of /completion endpoint (ggml-org#3632)
  save-load-state : fix example + add ci test (ggml-org#3655)
  readme : add Aquila2 links (ggml-org#3610)
  tokenizer : special token handling (ggml-org#3538)
  k-quants : fix quantization ranges (ggml-org#3646)
  llava : fix tokenization to not add bos between image embeddings and user prompt (ggml-org#3645)
  MPT : support GQA for replit-code-v1.5 (ggml-org#3627)
  Honor -ngl option for Cuda offloading in llava (ggml-org#3621)