There are many ways to speed up switching between models/backends/directives -- gptel's defaults try to be simple and discoverable, not fast. Ranked from easy to involved, you could try:

Keyboard macros

Most sequences of keys can be automated easily. Assuming your leader-key setup, you can bind a macro to jump to the directive editor:

```emacs-lisp
(keymap-global-set "<f6>" "SPC u SPC a r h h d")
```

You can set up keyboard macros for other things too, like switching to gpt-4:

```emacs-lisp
(keymap-global-set "SPC a 4" "SPC u SPC a r - m c h a <return> 4 <return>")
```

Call gptel functions directly

The transient menu is available as the command gptel-menu; you can bind it, or the individual gptel commands, to keys of your choosing.

Helper functions
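As a sketch of calling gptel functions directly: instead of driving the menu through a leader-key macro, you could bind the transient menu or individual commands to keys. The key choices below are illustrative (not gptel defaults), and keymap-global-set requires Emacs 29+ (use global-set-key on older Emacsen):

```emacs-lisp
;; Illustrative keybindings, not gptel defaults.
(keymap-global-set "C-c g" #'gptel-menu)  ; open gptel's transient menu
(keymap-global-set "C-c s" #'gptel-send)  ; send buffer/region up to point
```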
How is what you're doing (to switch between gpt-4 and gpt-3.5) tedious? It's just one keybinding? To be safe, you should be setting the backend along with the model.
If you know these combinations ahead of time, you can do something like this:

```emacs-lisp
(defvar my/gptel-presets
  ;; PRESET NAME           MODEL            BACKEND NAME  DIRECTIVE
  '(("gpt4+coding"         "gpt-4"          "ChatGPT"   "My coding directive for gpt-4")
    ("mixtral+summarize"   "Mixtral..."     "anyscale"  "My directive for summarizing with Mixtral")
    ("gpt-3.5+general"     "gpt-3.5-turbo"  "ChatGPT"   "My directive for gpt-3.5"))
  "List of preset combinations for gptel.")

(defun my/gptel-switch-preset (preset)
  "Switch gptel's model, backend and directive to PRESET."
  (interactive (list (completing-read
                      "Select preset: "
                      (mapcar #'car my/gptel-presets)
                      nil t)))
  (let ((combination (assoc preset my/gptel-presets)))
    (setq gptel-model (nth 1 combination)
          gptel-backend (cdr (assoc (nth 2 combination) gptel--known-backends))
          gptel--system-message (nth 3 combination))
    (message "Selected gptel preset: %s" preset)))
```

NOTE: This can break in the future as it accesses internal gptel variables (gptel--system-message, gptel--known-backends).

Custom gptel queries
From @LazerJesus (#177)
with this gptel becomes even more important to my daily work. i love it!
i got a couple of questions, not sure if here is the best spot, so let me know if i should start a new issue about any of these.
managing multiple backends
at the moment i toggle models by keybinding custom expressions. like this
but with multiple backends that is becoming tedious. is there a better way?
whats the fastest way to provide one-off directives
i often highlight text regions and want to write just a few words as directive.
using the gpt-system buffer (reached via h h from the gptel minibuffer) is about 8 key hits for me: open minibuffer, h h, closing minibuffer, clearing text, C-c C-c.
is there a faster way?
managing directives / directives per model?
different models have different strengths and weaknesses, therefore i use them for different reasons.
for example, i use gpt4 for coding and mixtral for summarizing large bodies of text.
that requires different directives. is it possible to have directives per model?
how would i do that?