Routing Codex to a local model keeps prompts on the workstation while still allowing fast switches between hosted and local backends for different coding tasks.
The Codex CLI uses --oss to enter the local routing path for a run, --local-provider to pick Ollama or LM Studio, and -m to select the exact local model identifier. To make --oss default to one backend, set oss_provider in ~/.codex/config.toml; saved profiles can instead pin the concrete provider ID, such as ollama or lmstudio, together with the model.
Routing changes only which backend Codex calls; the provider still needs to be running and the model still needs to be available locally before the prompt is sent. Leaving the provider API bound to addresses beyond localhost can expose prompts to other hosts, and running Codex outside a trusted repository can still trigger the trust check before execution starts.
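A quick reachability check before routing avoids a failed run; the sketch below assumes the default local ports (11434 for Ollama, 1234 for LM Studio's local server), which may differ on a customized setup.
$ curl -s http://localhost:11434/api/tags
$ curl -s http://localhost:1234/v1/models
Each command should return a JSON listing of the models the backend exposes; an error or empty reply means that provider is not ready to receive an --oss run.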
Related: How to use local models with Codex
Related: How to fix Codex trusted directory error
Steps to route Codex to a local model:
- List the local models exposed by the chosen backend before selecting a route.
$ ollama list
NAME          ID              SIZE     MODIFIED
gpt-oss:20b   17052f91a42e    13 GB    4 months ago
For LM Studio, use the model identifier shown by the local server and substitute lmstudio in the routing steps below.
Related: [DRAFT] How to list models in Ollama
Related: [DRAFT] How to download a model in LM Studio
- Route a single Codex run to Ollama with explicit flags.
$ codex exec --oss --local-provider ollama -m gpt-oss:20b "Reply with exactly OK"
OpenAI Codex v0.121.0 (research preview)
--------
model: gpt-oss:20b
provider: ollama
--------
codex
OK
Direct flags override any saved config for that run only.
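The same flags route a one-off run to LM Studio instead; the model identifier below is a placeholder, so substitute the exact ID the local server reports.
$ codex exec --oss --local-provider lmstudio -m <model-id> "Reply with exactly OK"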
- Set a default backend in ~/.codex/config.toml when --oss should route to Ollama automatically.
oss_provider = "ollama"
oss_provider picks the backend for --oss, but it does not choose the local model. Keep passing -m unless a saved profile also pins the model.
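To default --oss to LM Studio instead, the same key should take the other provider ID:
oss_provider = "lmstudio"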
- Re-run Codex with --oss to confirm the saved default backend is used.
$ codex exec --oss -m gpt-oss:20b "Reply with exactly OK"
OpenAI Codex v0.121.0 (research preview)
--------
model: gpt-oss:20b
provider: ollama
--------
codex
OK
- Create a named profile in ~/.codex/config.toml for repeatable local runs that pin both the provider and the model.
[profiles.local_ollama]
model_provider = "ollama"
model = "gpt-oss:20b"
Saved profiles use the concrete provider ID such as ollama or lmstudio rather than the --oss alias.
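An LM Studio profile follows the same shape; the model value below is a placeholder for whatever identifier the local server reports.
[profiles.local_lmstudio]
model_provider = "lmstudio"
model = "<model-id>"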
- Run Codex with the named profile and confirm the provider line points at the local backend.
$ codex exec -p local_ollama "Reply with exactly OK"
OpenAI Codex v0.121.0 (research preview)
--------
model: gpt-oss:20b
provider: ollama
--------
codex
OK
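Because direct flags override saved config for a single run, passing -m alongside -p should let a profile run borrow a different model for one invocation; the alternate identifier below is hypothetical.
$ codex exec -p local_ollama -m <other-model> "Reply with exactly OK"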
