Using Ollama models with Codex keeps prompt traffic on the local machine while still letting Codex inspect files, edit code, and run commands in the current workspace.
The Codex CLI routes local-model runs through the --oss flag, and --local-provider ollama pins the request to Ollama's built-in OpenAI-compatible API at http://localhost:11434/v1. The value passed with -m must match the exact model ID that Ollama exposes.
The Ollama server must already be running and the target model must already be pulled before the prompt run starts. Running the command from a Git repository, or pointing Codex at one with -C, satisfies the trusted-directory check, so the prompt reaches Ollama instead of stopping at workspace validation.
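A quick preflight can confirm that the endpoint answers before the first run. This is a minimal sketch that assumes Ollama's default address and its /v1/models listing route:

```shell
# Preflight sketch: check that Ollama's OpenAI-compatible endpoint responds.
# Assumes the default address http://localhost:11434.
if curl -fsS http://localhost:11434/v1/models >/dev/null 2>&1; then
  status="reachable"
else
  status="unreachable"
fi
echo "Ollama endpoint is $status on localhost:11434"
```

When the endpoint is unreachable, start the server with ollama serve and re-run the check before invoking Codex.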
Related: How to use local models with Codex
Related: How to troubleshoot local models in Codex
Steps to use Ollama models with Codex:
- Check Ollama for the exact model ID that Codex should request.
$ ollama list
NAAME omitted? no
Use the value in the NAME column as the -m model argument, and pull the model first with ollama pull if it does not already appear in this list.
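The lookup-then-pull step can be scripted. This sketch assumes the model ID shown above and is guarded so it is a no-op when the ollama binary is not on PATH:

```shell
# Sketch: pull the model only when it is missing from the local list.
# The model ID below matches the NAME column from ollama list.
model="gpt-oss:20b"
if ! command -v ollama >/dev/null 2>&1; then
  echo "ollama not installed; skipping"
elif ollama list | awk 'NR>1 {print $1}' | grep -qx "$model"; then
  echo "$model already available"
else
  ollama pull "$model"
fi
```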
- Run Codex against the Ollama provider and point it at the repository directory that Codex should inspect.
$ codex exec --oss \
    --local-provider ollama -m gpt-oss:20b \
    -C ~/repo "Reply with exactly: OK"
OK
--local-provider ollama avoids accidental routing to another local backend, and -C prevents a trusted-directory failure when the current shell is outside the target repo.
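For repeated runs, the shared flags can be wrapped once in a shell function. The name ask_local and the hard-coded model are assumptions for illustration, not Codex features:

```shell
# Hypothetical convenience wrapper around the invocation above.
ask_local() {
  repo="$1"
  shift
  codex exec --oss \
    --local-provider ollama -m gpt-oss:20b \
    -C "$repo" "$@"
}

# Usage (assumes ~/repo is a Git checkout):
# ask_local ~/repo "Reply with exactly: OK"
```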
- Save the final Ollama reply to a file when the result needs reuse outside the terminal.
$ codex exec --oss \
    --local-provider ollama -m gpt-oss:20b \
    -C ~/repo -o /tmp/ok.txt \
    "Reply with exactly: OK"
OK
-o writes only the final assistant message and overwrites the destination file when it already exists.
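When earlier replies must survive, each run can write to its own file instead of reusing one path. The timestamped naming scheme under /tmp is an arbitrary choice for this sketch; the codex call is shown commented out:

```shell
# Sketch: avoid overwriting earlier replies by giving each run its own file.
out="/tmp/codex-reply-$(date +%Y%m%d-%H%M%S).txt"
echo "next reply would be saved to: $out"
# codex exec --oss --local-provider ollama -m gpt-oss:20b \
#   -C ~/repo -o "$out" "Reply with exactly: OK"
```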
- Confirm that the saved file contains the expected local-model response.
$ cat /tmp/ok.txt
OK
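In a script, the check above can fail loudly instead of relying on eyeballing the output. This guard is a self-contained sketch: the printf stands in for the file that the codex -o step would have written:

```shell
# Guard sketch: exit non-zero when the saved reply differs from the
# expected string. The printf simulates the codex -o output so the
# sketch runs on its own.
printf 'OK\n' > /tmp/ok.txt

if [ "$(cat /tmp/ok.txt)" = "OK" ]; then
  echo "local model reply verified"
else
  echo "unexpected reply in /tmp/ok.txt" >&2
  exit 1
fi
```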
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
