Hi!
First off, great work on askalex! It's super helpful.
I was wondering if you could add support for running the model against a local server instead of relying on OpenAI's API. This would be awesome for anyone who prefers running open-source LLMs locally to save on API costs.
It would be cool if we could choose between OpenAI's API and a local model in the settings.
e.g. (with the OpenAI Python client v1+, pointed at Ollama's default OpenAI-compatible endpoint):

from openai import OpenAI
client = OpenAI(base_url="http://localhost:11434/v1", api_key="...")  # local servers like Ollama ignore the key, but the client requires one
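A minimal sketch of how the setting could work (the `ASKALEX_BACKEND` variable and helper name are hypothetical, just to illustrate the idea; it assumes a local OpenAI-compatible server such as Ollama on its default port):

```python
import os

def backend_settings() -> dict:
    """Return kwargs for the OpenAI client based on a hypothetical setting.

    ASKALEX_BACKEND=local -> point the client at a local OpenAI-compatible
    server (e.g. Ollama's default endpoint); otherwise use OpenAI's API.
    """
    if os.getenv("ASKALEX_BACKEND") == "local":
        # Local servers like Ollama ignore the key, but the client requires one.
        return {"base_url": "http://localhost:11434/v1", "api_key": "unused"}
    # Empty kwargs = defaults: api.openai.com, key read from OPENAI_API_KEY.
    return {}

# usage: client = OpenAI(**backend_settings())
```

That way the rest of the code keeps using the same client object regardless of backend.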
Maybe you could also include some instructions in the README on how to set up a local model?
Thanks for considering this!