Run Ollama and LMStudio models on Chorus
Chorus supports local models running via Ollama or LMStudio.
To use Ollama, first download and install it, then pull a model and make sure the Ollama server is running.
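A typical setup looks like the following; `llama3.2` is just an example model name, and the default port shown is Ollama's usual one, so adjust both if your setup differs:

```shell
# Pull a model (llama3.2 is an example; any model from the Ollama library works)
ollama pull llama3.2

# Start the Ollama server (listens on http://localhost:11434 by default)
ollama serve
```

Chorus detects models from the running Ollama server, so keep it running while you use the app.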
Next, open the model picker (⌘ + J) and scroll to the LOCAL
section. You may need to click the refresh icon.
To use LMStudio instead, download it and load your first model.
Next, go to the Developer tab and start the LMStudio server. Make sure that CORS is enabled.
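Once the server is started, you can confirm it is reachable by querying its OpenAI-compatible endpoint. The address below assumes LMStudio's usual default port; the Developer tab shows the actual address for your setup:

```shell
# List the models exposed by the LMStudio server
# (localhost:1234 is the usual default; check the Developer tab)
curl http://localhost:1234/v1/models
```

If this returns a JSON list of models, the server is up and Chorus should be able to see it.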
Next, open the model picker (⌘ + J) and scroll to the LOCAL
section. You may need to click the refresh icon.
To change the LMStudio base URL, go to Settings -> API KEYS, then open the LMStudio settings at the bottom of the tab.
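The base URL you enter there should point at the server's OpenAI-compatible root. Assuming LMStudio's usual default port, that looks like:

```
http://localhost:1234/v1
```

If you changed the port in the Developer tab, use that port here instead.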