
Fix/minor doc typo fixes #28

Merged
merged 3 commits into main on Feb 14, 2024

Conversation

@yisz (Contributor) commented Feb 14, 2024

  1. fixed links in docs
  2. fixed typos in LLM metric docstrings
  3. updated style consistency metric prompt slightly
  4. fixed judicator threshold to match normalized score

Ellipsis 🚀 This PR description was created by Ellipsis for commit 48507ff.

Summary:

This PR corrects various typos in docstrings and documentation, and updates the judicator threshold in the ensembling classifier.

Key points:

  • Fixed docstring typos in calculate methods of LLMBasedAnswerCorrectness, LLMBasedAnswerRelevance, and LLMBasedStyleConsistency in /continuous_eval/metrics/generation_LLM_based_metrics.py.
  • Fixed docstring typos in calculate methods of LLMBasedContextPrecision and LLMBasedContextCoverage in /continuous_eval/metrics/retrieval_LLM_based_metrics.py.
  • Fixed broken links and updated prompt in /docs/src/content/docs/index.mdx.
  • Updated judicator threshold in /docs/src/content/docs/metrics/ensembling/ensembling_classifier.md and /examples/ensemble_metric_with_judicator.py.
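The judicator threshold change can be illustrated with a minimal sketch (hypothetical names, not the exact code from `/examples/ensemble_metric_with_judicator.py`): since the ensembling classifier's score is normalized to the [0, 1] range, the judicator's cutoff is 0.5 rather than a raw-score value.

```python
def judicator(normalized_score: float) -> bool:
    """Classify a prediction from its normalized score.

    The score is assumed to lie in [0, 1], so 0.5 is the midpoint
    cutoff for a binary decision; a raw-score threshold would no
    longer be meaningful after normalization.
    """
    return normalized_score >= 0.5


# Scores at or above 0.5 pass; scores below it fail.
decisions = [judicator(s) for s in (0.2, 0.5, 0.9)]
```

This mirrors the reasoning in the review comments below: with a normalized score, 0.5 is the natural threshold for a binary classification task.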

Generated with ❤️ by ellipsis.dev

@yisz linked an issue Feb 14, 2024 that may be closed by this pull request
@ellipsis-dev (bot) left a comment


Looks good to me! Reviewed entire PR up to commit 48507ff

Reviewed 123 lines of code across 5 files in 2 minutes and 12 seconds.

See details
  • Skipped files: 0 (please contact us to request support for these files)
  • Confidence threshold: 50%
  • Drafted 5 additional comments.
  • Workflow ID: wflow_q36s0YJiIyZmWFLb

These comments were drafted by Ellipsis, but were filtered out of the final review. They're included here so you can see our internal thought process and help you configure your ellipsis.yaml.

Drafted 5 comments under confidence threshold

Filtered comment at continuous_eval/metrics/generation_LLM_based_metrics.py:92

Confidence changes required: 0%

Commentary: The PR author has updated the docstrings in the calculate methods of the LLMBasedFaithfulness, LLMBasedAnswerCorrectness, LLMBasedAnswerRelevance, and LLMBasedStyleConsistency classes in the generation_LLM_based_metrics.py file. The changes seem to be correct and reflect the purpose of each method accurately.

The updated docstrings accurately reflect the purpose of each method. No changes are required.

Filtered comment at continuous_eval/metrics/retrieval_LLM_based_metrics.py:25

Confidence changes required: 0%

Commentary: The PR author has updated the docstrings in the calculate methods of the LLMBasedContextPrecision and LLMBasedContextCoverage classes in the retrieval_LLM_based_metrics.py file. The changes seem to be correct and reflect the purpose of each method accurately.

The updated docstrings accurately reflect the purpose of each method. No changes are required.

Filtered comment at docs/src/content/docs/index.mdx:30

Confidence changes required: 0%

Commentary: The PR author has fixed the links in the index.mdx file. The changes seem to be correct and the links should now point to the correct locations.

The updated links correctly point to the intended locations. No changes are required.

Filtered comment at docs/src/content/docs/metrics/ensembling/ensembling_classifier.md:156

Confidence changes required: 0%

Commentary: The PR author has updated the threshold in the judicator function in the ensembling_classifier.md file. The change seems to be correct as the normalized score ranges from 0 to 1, and a threshold of 0.5 would be a suitable choice for a binary classification task.

The updated threshold in the judicator function is appropriate for a binary classification task. No changes are required.

Filtered comment at examples/ensemble_metric_with_judicator.py:48

Confidence changes required: 0%

Commentary: The PR author has updated the threshold in the judicator function in the ensemble_metric_with_judicator.py file. The change seems to be correct as the normalized score ranges from 0 to 1, and a threshold of 0.5 would be a suitable choice for a binary classification task.

The updated threshold in the judicator function is appropriate for a binary classification task. No changes are required.


Something look wrong? You can customize Ellipsis by editing the ellipsis.yaml for this repository.

Generated with ❤️ by ellipsis.dev

@yisz merged commit 48bca6d into main Feb 14, 2024
2 checks passed
@pantonante deleted the fix/minor-doc-typo-fixes branch February 19, 2024 01:20
Successfully merging this pull request may close these issues:

  • Fix docs link on home page
  • Fix typos in generation LLM metric docstrings