Releases: impredicative/podgenai
0.11.0
- Update `generate_subtopic` prompt to better avoid math notation and symbols. This invalidates the disk cache.
- Update requirements: openai
0.10.1
- Update `generate_subtopic` prompt to add an instruction to avoid generating duplicative content that belongs in other segments. This invalidates the disk cache.
- Update prompts to add an instruction to not hallucinate information. This invalidates the disk cache.
- Improve subtopic text validation to check for code blocks and markdown headers more effectively.
- Update requirements: openai, semantic-text-splitter
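The improved subtopic text validation could be sketched as follows. This is an illustrative, hypothetical check (the function name and exact rules are assumptions, not podgenai's actual code): it rejects text containing fenced code blocks or markdown headers.

```python
import re

def looks_like_prose(text: str) -> bool:
    """Hypothetical validator for generated subtopic text: rejects it if
    it contains a fenced code block or a markdown header. The name and
    rules are illustrative assumptions, not podgenai's actual API."""
    # Reject fenced code blocks (a line starting with triple backticks).
    if re.search(r"^```", text, flags=re.MULTILINE):
        return False
    # Reject markdown ATX headers such as "# Title" or "### Section".
    if re.search(r"^#{1,6}\s", text, flags=re.MULTILINE):
        return False
    return True
```

Checking at line starts with `re.MULTILINE` avoids false positives on backticks or `#` characters appearing mid-sentence.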
0.9.0
- Improve prompt which lists subtopics to make the output more factual. This invalidates the disk cache.
- Update third-party packages: openai, semantic-text-splitter.
- Fix avoidance of some code blocks in generated text. This can conditionally invalidate the disk cache.
- Avoid markdown section blocks in generated text. This can conditionally invalidate the disk cache.
0.8.0
- Update text generation model to `gpt-4o-2024-11-20` from `gpt-4o-2024-08-06`. This invalidates the disk cache.
- Update the `list_subtopics` prompt for intricacies of `gpt-4o-2024-11-20`. This invalidates the disk cache.
- Update dependencies: openai, semantic-text-splitter
0.7.0
- Update LLM prompts. The `list_subtopics` prompt should now reject a little less often and uses more consistent terminology. The `generate_subtopic` prompt also now uses more consistent terminology. These updates however invalidate the disk cache.
- Update requirements: openai, semantic-text-splitter
0.6.2
- Handle missing `completion.usage.prompt_tokens_details` as returned by OpenAI.
0.6.1
- Change the text model to the newer `gpt-4o-2024-08-06`. Previously the older `gpt-4-0125-preview` was used. This change is made because the newer model has more recent information that the older model lacks. Moreover, hallucinations have thus far not been observed while testing the newer model. The newer model is also cheaper to use. As a side effect, the generated content is now about 25% shorter and more to the point.
- Update the work path of each section's generated internal transcript file to also include the model name in the filename. This however invalidates all prior disk caches, which can manually be deleted from the `work` directory.
- Update the voice generation prompt to exhibit a bias toward diversified voices, failing which some voices were almost never selected.
0.5.7
- Add option `--max-sections` to limit the maximum number of generated sections.
0.5.6
- Update the LLM prompt and its respective validation logic to prevent the generation of code blocks in a section's text. If a code block is still found, the generation is retried. This invalidates the prior cache of section texts.
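The retry-on-code-block behavior described above could look roughly like this. The function and parameter names here are illustrative assumptions, not podgenai's actual API; `generate` stands in for any callable that returns candidate section text.

```python
import re

def generate_section_text(generate, max_attempts=3):
    """Hypothetical sketch of retrying generation until the candidate
    text contains no fenced code block. `generate` is any zero-argument
    callable returning a text candidate; names are assumptions."""
    for _ in range(max_attempts):
        text = generate()
        # Accept the candidate only if no line starts a fenced code block.
        if not re.search(r"^```", text, flags=re.MULTILINE):
            return text
    raise ValueError("Section text still contained a code block after retries.")
```

A bounded attempt count keeps a persistently misbehaving prompt from looping forever.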
0.5.4
- Update prompt, code, and exception to print a rejection reason when a topic is rejected. This invalidates prior caches.