Local Providers
Lis Novel can connect to local LLM runtimes so your manuscript and prompts stay on your machine unless you choose a cloud provider.
Today, the standard LLM Providers dialog supports:
- LM Studio
- Ollama
Open the Providers dialog
Open LLM Providers from the app status menu, then choose Add to create a new connection.
Each provider connection can:
- be enabled or disabled independently
- be tested from the dialog
- fetch its currently available models
- set a provider default model
- enable or disable specific fetched models for pickers in chat and prompt workflows
- show provider-reported model capabilities such as tools, vision, and reasoning state
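The per-connection options above can be pictured as a small configuration record. A minimal sketch, assuming a hypothetical schema (field names here are illustrative, not Lis Novel's actual internal format):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProviderConnection:
    # Illustrative shape only; Lis Novel's real schema may differ.
    name: str                          # e.g. "Ollama (desktop)"
    enabled: bool = True               # each connection toggles independently
    default_model: Optional[str] = None
    # model name -> whether it appears in chat/prompt pickers
    models_enabled: dict = field(default_factory=dict)
    # model name -> provider-reported capabilities, e.g. ["tools", "vision"]
    capabilities: dict = field(default_factory=dict)

conn = ProviderConnection(name="Ollama (desktop)")
conn.models_enabled["llama3"] = True
conn.default_model = "llama3"
```

The point of the sketch is that enablement, the default model, and per-model picker visibility are all stored per connection, so two connections to different servers never share state.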
Connect Ollama
Use Ollama when you already have an Ollama server running locally or on your network.
Default connection values:
- Host: http://localhost
- Port: 11434
Lis Novel combines those fields into the Ollama server URL and uses that URL when testing the connection and loading models.
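The host/port combination can be sketched as a simple string join. The endpoint path is Ollama's documented model-listing route; the helper itself is illustrative, not Lis Novel's actual code:

```python
def ollama_base_url(host: str, port: int) -> str:
    # Combine the Host and Port fields into the server URL,
    # tolerating a trailing slash on the host.
    return f"{host.rstrip('/')}:{port}"

base = ollama_base_url("http://localhost", 11434)
models_url = f"{base}/api/tags"   # Ollama's model-listing endpoint
print(models_url)                 # http://localhost:11434/api/tags
```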
During Test, Lis Novel also calls Ollama's /api/show for each fetched model so it can detect context length and provider-reported capabilities such as tools, vision, and thinking support.
Typical setup flow:
- Start Ollama and make sure the server is reachable.
- Pull at least one chat model in Ollama.
- In Lis Novel, add an Ollama provider connection.
- Confirm the host and port.
- Click Test to fetch available models.
- Optionally choose a Default Model for that connection.
If your Ollama server is on another machine, use that machine's reachable host instead of localhost.
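A quick way to verify step 1 (the server is reachable) before clicking Test is a plain TCP check against the configured host and port. This is a generic sketch, not something Lis Novel exposes:

```python
import socket

def ollama_reachable(host: str = "localhost",
                     port: int = 11434,
                     timeout: float = 1.0) -> bool:
    # Attempt a TCP connection to the Ollama port; True means something
    # is listening there (it does not prove it is actually Ollama).
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

status = ollama_reachable("127.0.0.1", 11434, timeout=0.5)
```

For a remote server, pass that machine's hostname or IP instead of 127.0.0.1, mirroring the host-field advice above.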
Connect LM Studio
Use LM Studio when you are running the LM Studio local server.
Default connection values:
- Host: ws://localhost
- Port: 1234
After starting the LM Studio server, add an LM Studio connection and click Test to fetch downloaded models.
When available, Lis Novel also reads LM Studio's /api/v1/models metadata after the main model fetch, so the provider dialog can show each model's reasoning state, such as Reasoning: on/off or Reasoning: on.
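To illustrate what reading that metadata involves, here is a sketch of parsing a models response for per-model reasoning state. The JSON shape and field names below are hypothetical stand-ins, not LM Studio's exact schema:

```python
import json

# Hypothetical /api/v1/models payload; the "reasoning" field and its
# layout are illustrative assumptions, not LM Studio's documented format.
sample = json.loads("""
{
  "models": [
    {"id": "qwen3-8b",    "reasoning": {"state": "on/off", "default": "off"}},
    {"id": "gemma-3-12b", "reasoning": null}
  ]
}
""")

def reasoning_badge(model: dict):
    # Return a dialog-style badge string, or None when the model
    # reports no reasoning metadata.
    info = model.get("reasoning")
    return f"Reasoning: {info['state']}" if info else None

badges = {m["id"]: reasoning_badge(m) for m in sample["models"]}
```

The takeaway is that the reasoning state is per model and may be absent, so the dialog only shows a badge when the metadata is actually present.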
Model capabilities and traits
When a provider reports model capabilities, Lis Novel uses that information to decide whether a model supports features such as:
- tools
- vision
- reasoning
For LM Studio models, the provider dialog can also show structured reasoning metadata such as allowed options and the provider default. This is informational for now. It does not automatically change Lis Novel's existing reasoning-effort controls unless the resolved model trait also reports reasoning-effort support.
For Ollama models, Lis Novel reads the model's /api/show capabilities array and maps values like tools, vision, and thinking into the same capability badges used elsewhere in the provider dialog. Ollama's thinking capability is currently treated as boolean reasoning support rather than a structured reasoning-options object.
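The mapping described above can be sketched as a small function. The capabilities array follows Ollama's /api/show response; the mapping function itself is illustrative, not Lis Novel's actual code:

```python
def map_capabilities(show_response: dict) -> dict:
    # Map an Ollama /api/show "capabilities" array onto badge-style
    # boolean flags. "thinking" is folded into a plain reasoning flag,
    # not a structured reasoning-options object.
    caps = set(show_response.get("capabilities", []))
    return {
        "tools": "tools" in caps,
        "vision": "vision" in caps,
        "reasoning": "thinking" in caps,
    }

flags = map_capabilities({"capabilities": ["completion", "tools", "thinking"]})
```

A model reporting only "completion" would end up with all three badges off, which is why overriding traits (below) matters when a provider under-reports.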
If a provider or model reports those details imperfectly, open Model Traits from the provider dialog and override them for that specific provider/model pair.
See Model Traits for more detail.