Using local models with Codex keeps prompts and code on a local model server instead of sending them to a hosted provider. This is useful when the task includes sensitive source code, internal identifiers, or data that should stay on the local machine.
The Codex CLI can route local runs through a supported provider such as Ollama or LM Studio. You still choose the model with -m, but the model name must exactly match the identifier exposed by the local provider.
A local run only works when the provider is reachable and supports POST /v1/responses on its local API. Keep that API bound to localhost unless network access is intentional, and use the provider-specific setup guides if Codex cannot connect or the model name is rejected.
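To confirm the provider is reachable before starting a run, you can send a minimal request to the same endpoint Codex uses. As a sketch, assuming Ollama's default port of 11434 (substitute your provider's port), a JSON reply means the endpoint is up; an empty reply means Codex will not be able to connect either.

$ curl -s http://localhost:11434/v1/responses -H "Content-Type: application/json" -d '{"model": "gpt-oss:20b", "input": "Reply with exactly: OK"}'  # 11434 assumes Ollama's default port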
Related: How to use Ollama models with Codex
Related: How to use LM Studio models with Codex
Related: How to troubleshoot local models in Codex
$ ollama list
NAME           ID              SIZE     MODIFIED
gpt-oss:20b    17052f91a42e    13 GB    4 months ago
For LM Studio, note the model ID shown by the local server or app, for example openai/gpt-oss-20b.
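If the LM Studio server is already running, you can also list its model IDs over its OpenAI-compatible API; the port below assumes LM Studio's default of 1234.

$ curl -s http://localhost:1234/v1/models  # port 1234 assumes LM Studio's default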
$ codex exec --oss --local-provider ollama -m gpt-oss:20b "Reply with exactly: OK" OK
Replace ollama and gpt-oss:20b with the provider and model ID you actually use.
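For example, an equivalent LM Studio run using the model ID noted above might look like the following sketch; the provider identifier lmstudio is an assumption, so check the LM Studio setup guide for the exact value.

$ codex exec --oss --local-provider lmstudio -m openai/gpt-oss-20b "Reply with exactly: OK"  # "lmstudio" is an assumed provider identifier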
$ codex exec --oss --local-provider ollama -m gpt-oss:20b --output-last-message /tmp/codex-local.txt "Reply with exactly: OK"
OK
--output-last-message overwrites the destination file when it already exists.
$ cat /tmp/codex-local.txt
OK
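If overwriting matters, one sketch is to write each run's last message to a fresh file created with mktemp; the /tmp/codex-local.XXXXXX path pattern is illustrative.

$ OUT_FILE="$(mktemp /tmp/codex-local.XXXXXX)"  # creates a unique file so prior output is never clobbered
$ codex exec --oss --local-provider ollama -m gpt-oss:20b --output-last-message "$OUT_FILE" "Reply with exactly: OK"
OK
$ cat "$OUT_FILE"
OK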