Using Ollama models with Codex keeps prompt traffic on the local machine while still letting Codex inspect files, edit code, and run commands in the current workspace.
The Codex CLI routes local-model runs through the --oss flag, and --local-provider ollama pins the request to Ollama's built-in OpenAI-compatible API at http://localhost:11434/v1. The value passed with -m must match the exact model ID that Ollama exposes.
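Before involving Codex at all, a quick way to confirm that endpoint is reachable is to query Ollama's OpenAI-compatible model listing directly with curl; this assumes the default port 11434, and the JSON shown here is abbreviated:

$ curl -s http://localhost:11434/v1/models
{"object":"list","data":[{"id":"gpt-oss:20b","object":"model", ...}]}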
The Ollama server must already be running and the target model must already be available before the prompt run starts. Running the command from inside a Git repository, or pointing Codex at one with -C, satisfies the trusted-directory check so the prompt reaches Ollama instead of stopping at workspace validation.
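If the curl check or the commands below fail with a connection error, the server is likely not running. On installs where the desktop app or a system service does not start it automatically, it can be run in the foreground:

$ ollama serve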
Related: How to use local models with Codex
Related: How to troubleshoot local models in Codex
$ ollama list
NAME           ID              SIZE    MODIFIED
gpt-oss:20b    17052f91a42e    13 GB   4 months ago
Use the value in the NAME column as the -m model argument, and pull the model first if it does not already appear in this list.
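If the model is missing from the listing, pulling it by the same ID makes it available locally; expect a download on the order of the 13 GB size shown above for gpt-oss:20b:

$ ollama pull gpt-oss:20b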
$ codex exec --oss \
    --local-provider ollama -m gpt-oss:20b \
    -C ~/repo "Reply with exactly: OK"
OK
--local-provider ollama avoids accidental routing to another local backend, and -C prevents a trusted-directory failure when the current shell is outside the target repo.
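An equivalent run without -C simply changes into the repository first; the trusted-directory check is satisfied the same way (this assumes ~/repo is the same Git workspace as above):

$ cd ~/repo
$ codex exec --oss --local-provider ollama -m gpt-oss:20b "Reply with exactly: OK"
OK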
$ codex exec --oss \
    --local-provider ollama -m gpt-oss:20b \
    -C ~/repo -o /tmp/ok.txt \
    "Reply with exactly: OK"
OK
-o writes only the final assistant message and overwrites the destination file when it already exists.
$ cat /tmp/ok.txt
OK
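Because -o leaves only the final message in the file, the result drops cleanly into scripts. A minimal sketch of a smoke test built on the run above; the pass and fail messages here are illustrative, not Codex output:

$ test "$(cat /tmp/ok.txt)" = "OK" && echo "model responded" || echo "unexpected reply"
model responded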