What happens after we send data through the Respond Webhook at the end of a workflow? #34
Replies: 1 comment 1 reply
Hey there,

Good question. The message and the data both get sent to the LLM, and the LLM decides what to do with them. A stricter prompt might get the LLM to read only the message, but I'm not sure that's what you want. For example, if I ask "is jellyfin running on my truenas?", it runs truenas_get_status and uses the data to answer: "Yes, Jellyfin is running on your TrueNAS. It's one of the 19 apps currently active." That's the answer I want, not the canned message.

On a related topic: can you check Settings > LLM Settings > Temperature? What temperature do you have set, and what model are you running? I'm curious because I was just reading about Ministral-3, and for best instruction following it suggests a temperature of 0.15. I had mine at 1.5! The default is 0.7, but 0.15 is the recommendation for this model and will make the responses more deterministic. That should improve tool calling and also reduce hallucinations, since the model will be less "creative" in its answers. I only have 0.1 increments in the settings, so I'll have to change that. Try 0.1 or 0.2, though.

I'm also in the middle of fine-tuning a version of ministral-3:8b trained specifically on calling CAAL tools. The goal is to improve tool-calling performance even further.

Cheers
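To picture the flow described above, here's a minimal sketch (purely illustrative — the model name, message roles, and Ollama-style payload shape are my assumptions, not CAAL's actual implementation) of how the webhook's message and data might both be appended to the chat before the model generates the spoken reply, with temperature controlling how closely it sticks to the canned message:

```python
import json

def build_chat_request(user_question, webhook_response, temperature=0.15):
    """Compose a hypothetical Ollama-style /api/chat payload. Both the tool's
    message AND its raw data land in the context as a tool result, so the
    model is free to rephrase rather than read the message verbatim."""
    return {
        "model": "ministral-3:8b",  # assumed model name
        "messages": [
            {"role": "user", "content": user_question},
            {"role": "tool", "content": json.dumps(webhook_response)},
        ],
        # Lower temperature -> more deterministic, closer to the tool's text.
        "options": {"temperature": temperature},
    }

request = build_chat_request(
    "is jellyfin running on my truenas?",
    {"message": "Jellyfin is running.", "data": {"active_apps": 19}},
)
print(request["options"]["temperature"])  # prints 0.15
```

The point of the sketch: because the tool result is just more context for the model, the final spoken answer is a generation, not an echo — which matches the behavior in the question below.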
Hello,
Got another question for you: what happens when we send data back to CAAL using the last webhook in a workflow?
I've noticed that when I run my version of the weather_get_forecast n8n workflow, CAAL does not say exactly what was sent as the message. I use the openweather node to get the current conditions, then send them to an AI agent that creates a short paragraph meant to be read out loud by CAAL. When the workflow runs, CAAL announces the weather forecast, but what it says does not match the created message word for word. Does CAAL run the data from the webhook through the LLM before speaking? Thanks.
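For context, a minimal sketch of what the final webhook body in a workflow like this might look like (the field names "message" and "data" are my assumption, not CAAL's documented contract). Since the reply is run through the LLM, the spoken text may paraphrase `message` rather than repeat it verbatim:

```python
# Hypothetical body returned by the workflow's final respond-webhook node;
# field names are assumptions for illustration only.
respond_body = {
    # Paragraph written by the AI agent node, intended to be spoken.
    "message": "It's currently 18 degrees and clear, with a light breeze.",
    # Raw conditions from the openweather node, also visible to the LLM.
    "data": {"temp_c": 18, "conditions": "clear", "wind_kph": 9},
}
print(sorted(respond_body))  # prints ['data', 'message']
```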