Releases: impredicative/podgenai

0.11.0

18 Dec 23:15
  • Update generate_subtopic prompt to better avoid math notation and symbols. This invalidates the disk cache.
  • Update requirements: openai

0.10.1

16 Dec 19:29
  • Update generate_subtopic prompt to add an instruction to avoid generating duplicative content that belongs in other segments. This invalidates the disk cache.
  • Update prompts to add an instruction not to hallucinate information. This invalidates the disk cache.
  • Improve subtopic text validation to check for code blocks and markdown headers more effectively (see the sketch after this list).
  • Update requirements: openai, semantic-text-splitter
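
The improved validation can be pictured as a small predicate over the generated text. The following is a minimal sketch, assuming hypothetical regexes and a hypothetical is_valid_subtopic_text helper rather than podgenai's actual code:

```python
import re

# Hypothetical helpers, not podgenai's actual code: flag generated text that
# contains fenced code blocks or markdown headers so the caller can retry.
_CODE_FENCE = re.compile(r"^\s*[`~]{3,}", re.MULTILINE)     # line opening a backtick/tilde fence
_MD_HEADER = re.compile(r"^\s{0,3}#{1,6}\s", re.MULTILINE)  # line starting a markdown header

def is_valid_subtopic_text(text: str) -> bool:
    """Return True if the text contains no code blocks or markdown headers."""
    return not (_CODE_FENCE.search(text) or _MD_HEADER.search(text))
```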

0.9.0

01 Dec 01:31
  • Improve the prompt that lists subtopics to make the output more factual. This invalidates the disk cache.
  • Update third-party packages: openai, semantic-text-splitter.
  • Fix avoidance of some code blocks in generated text. This can conditionally invalidate the disk cache.
  • Avoid markdown section blocks in generated text. This can conditionally invalidate the disk cache.

0.8.0

24 Nov 04:11
  • Update text generation model to gpt-4o-2024-11-20 from gpt-4o-2024-08-06. This invalidates the disk cache.
  • Update the list_subtopics prompt for intricacies of gpt-4o-2024-11-20. This invalidates the disk cache.
  • Update dependencies: openai, semantic-text-splitter

0.7.0

19 Oct 13:08
  • Update LLM prompts. The list_subtopics prompt should now reject a little less often and uses more consistent terminology. The generate_subtopic prompt also now uses more consistent terminology. These updates invalidate the disk cache.
  • Update requirements: openai, semantic-text-splitter

0.6.2

02 Oct 18:15
  • Handle missing completion.usage.prompt_tokens_details as returned by openai.
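
A minimal sketch of such defensive handling, assuming a hypothetical cached_prompt_tokens helper; the attribute names mirror the openai Python SDK's completion usage object, and the fallback to 0 is an assumption:

```python
# Hypothetical helper, not podgenai's actual code.
def cached_prompt_tokens(completion) -> int:
    """Return the cached prompt token count, or 0 if the API omitted the detail."""
    usage = getattr(completion, "usage", None)
    details = getattr(usage, "prompt_tokens_details", None)
    cached = getattr(details, "cached_tokens", None)
    return cached if cached is not None else 0
```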

0.6.1

02 Oct 16:12
  • Change the text model from the older gpt-4-0125-preview to the newer gpt-4o-2024-08-06. The newer model has more recent knowledge that the older model lacks, has not so far been observed to hallucinate during testing, and is cheaper to use. As a side effect, the generated content is now about 25% shorter and more to the point.
  • Update the work path of each section's generated internal transcript file to also include the model name in the filename (a sketch follows this list). This invalidates all prior disk caches, which can be deleted manually from the work directory.
  • Update the voice generation prompt to bias toward diversified voices; previously, some voices were almost never selected.
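
For the transcript-path change above, a minimal sketch of a work path that embeds the model name, assuming a hypothetical section_transcript_path helper and filename layout rather than podgenai's actual scheme:

```python
from pathlib import Path

# Hypothetical layout, not podgenai's actual scheme: embedding the model name
# in the filename means switching models naturally produces a fresh cache file.
def section_transcript_path(work_dir: Path, topic: str, part_num: int, model: str) -> Path:
    """Return the path of a section's cached transcript for the given model."""
    return work_dir / topic / f"{part_num:02d} [{model}].txt"
```

Because the model name is part of the filename, files cached under the previous model simply stop being matched, which is why they can only be cleaned up by deleting them from the work directory.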

0.5.7

23 Sep 01:12
  • Add option --max-sections to limit the maximum number of generated sections.
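
A minimal sketch of how such a cap might be applied, assuming a hypothetical limit_sections helper and that subtopics are held in a list:

```python
# Hypothetical helper, not podgenai's actual code.
def limit_sections(subtopics: list[str], max_sections: int | None) -> list[str]:
    """Truncate the subtopic list to at most max_sections entries, if a limit is set."""
    return subtopics if max_sections is None else subtopics[:max_sections]
```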

0.5.6

21 Sep 00:10
  • Update the LLM prompt and its validation logic to prevent the generation of code blocks in a section's text. If a code block is still found, the generation is retried. This invalidates the prior cache of section texts.
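
The retry behavior can be pictured as a small loop. The following is a minimal sketch, assuming hypothetical generate and is_valid callables and an attempt limit of 3 (podgenai's actual limit is not stated here):

```python
from typing import Callable

# Hypothetical retry loop, not podgenai's actual code; the attempt limit is assumed.
MAX_ATTEMPTS = 3

def generate_validated_text(generate: Callable[[], str], is_valid: Callable[[str], bool]) -> str:
    """Call the generator until its output passes validation or attempts run out."""
    for _ in range(MAX_ATTEMPTS):
        text = generate()
        if is_valid(text):
            return text
    raise RuntimeError(f"Generated text failed validation after {MAX_ATTEMPTS} attempts.")
```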

0.5.4

20 Sep 03:27
  • Update prompt, code, and exception to print a rejection reason when a topic is rejected. This invalidates prior caches.
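
A minimal sketch of surfacing the rejection reason through an exception, assuming a hypothetical TopicRejectedError class and message format:

```python
# Hypothetical exception class, not podgenai's actual code.
class TopicRejectedError(Exception):
    """Raised when the LLM rejects a requested topic, carrying its stated reason."""

    def __init__(self, topic: str, reason: str) -> None:
        super().__init__(f"Topic {topic!r} was rejected: {reason}")
        self.topic = topic
        self.reason = reason
```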