Using local models with Codex keeps prompts and code on a local model server instead of sending them to a hosted provider. This is useful when the task includes sensitive source code, internal identifiers, or data that should stay on the local machine.
The Codex CLI can route local runs through a supported provider such as Ollama or LM Studio. You still choose the model with -m, but the model name must exactly match the identifier exposed by the local provider.
A local run only works when the provider is reachable and supports POST /v1/responses on its local API. Keep that API bound to localhost unless network access is intentional, and use the provider-specific setup guides if Codex cannot connect or the model name is rejected.
Related: How to use Ollama models with Codex
Related: How to use LM Studio models with Codex
Related: How to troubleshoot local models in Codex
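The reachability requirement above can be checked quickly from the shell before involving Codex at all. A minimal probe sketch, assuming Ollama's default bind address of http://localhost:11434 (substitute your provider's actual host and port, for example LM Studio's default of 1234):

```shell
#!/bin/sh
# Probe the local provider's API endpoint before invoking Codex.
# http://localhost:11434 is Ollama's default bind address -- an assumption;
# override with CODEX_LOCAL_BASE_URL or edit the default for your setup.
base_url="${CODEX_LOCAL_BASE_URL:-http://localhost:11434}"
if curl -fsS --max-time 2 "$base_url" >/dev/null 2>&1; then
    echo "provider reachable at $base_url"
else
    echo "provider NOT reachable at $base_url"
fi
```

If the probe fails, fix the provider's network binding first; no Codex flag will help while the server is unreachable.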
Steps to use local models with Codex:
- Check the local provider for the exact model ID you will pass to Codex.
$ ollama list
NAME         ID              SIZE   MODIFIED
gpt-oss:20b  17052f91a42e    13 GB  4 months ago
For LM Studio, note the model ID shown by the local server or app, for example openai/gpt-oss-20b.
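Both servers also expose an OpenAI-compatible model listing over HTTP, which is a provider-neutral way to read the exact IDs. A sketch assuming LM Studio's default port 1234 (Ollama typically listens on 11434; adjust the URL for your setup):

```shell
#!/bin/sh
# List model IDs from a local OpenAI-compatible server.
# Port 1234 is LM Studio's default -- an assumption; pass another base
# URL as the first argument if your provider listens elsewhere.
base_url="${1:-http://localhost:1234}"
if models="$(curl -fsS --max-time 2 "$base_url/v1/models")"; then
    echo "$models"
else
    echo "could not list models from $base_url"
fi
```

The id fields in the JSON response are the strings to pass to -m.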
- Run Codex against the local provider with the matching model ID.
$ codex exec --oss --local-provider ollama -m gpt-oss:20b "Reply with exactly: OK"
OK
Replace ollama and gpt-oss:20b with the provider and model ID you actually use.
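That substitution can be wrapped in a small helper so scripts only ever change two arguments. This is a sketch: run_local is a hypothetical name, and the commented example reuses the provider and model values shown earlier in this guide.

```shell
#!/bin/sh
# Hypothetical helper: run a one-shot prompt against a chosen local
# provider. Provider id and model id must match what your installation
# actually exposes -- the values below are only this guide's examples.
run_local() {
    provider="$1"
    model="$2"
    shift 2
    codex exec --oss --local-provider "$provider" -m "$model" "$@"
}

# Example invocation:
# run_local ollama gpt-oss:20b "Reply with exactly: OK"
```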
- Save the final local reply to a file when you need reusable output for scripts or review.
$ codex exec --oss --local-provider ollama -m gpt-oss:20b --output-last-message /tmp/codex-local.txt "Reply with exactly: OK"
OK
--output-last-message overwrites the destination file when it already exists.
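Because the file is silently overwritten, pointing each run at a fresh temp file is one way to keep earlier replies for comparison. A sketch; the /tmp/codex-local.XXXXXX pattern is arbitrary:

```shell
#!/bin/sh
# Write each run's reply to a new temp file so prior results survive.
out="$(mktemp /tmp/codex-local.XXXXXX)"
if command -v codex >/dev/null 2>&1; then
    codex exec --oss --local-provider ollama -m gpt-oss:20b \
        --output-last-message "$out" "Reply with exactly: OK"
fi
echo "reply saved to $out"
```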
- Confirm that the saved result contains the expected local-model response.
$ cat /tmp/codex-local.txt
OK
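In a script, that comparison can be automated rather than eyeballed. A sketch using grep -qx for an exact whole-line match against the expected reply:

```shell
#!/bin/sh
# Verify the saved reply matches the expected text exactly.
expected="OK"
file="/tmp/codex-local.txt"
if [ -f "$file" ] && grep -qx "$expected" "$file"; then
    echo "local model replied as expected"
else
    echo "missing or unexpected reply in $file"
fi
```

grep -qx suppresses output and matches the whole line, so trailing junk or a partial reply counts as a failure.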
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
