Local model support in Codex keeps prompts on the local machine and enables fast iteration without relying on hosted APIs. A stopped provider or a mismatched model name, however, can block runs with confusing connection or model errors.
Local providers such as Ollama and LM Studio run as separate servers, and Codex forwards each prompt to the selected provider when --oss is enabled. The active provider comes from the configured default or from --local-provider (ollama or lmstudio), and the model identifier is still chosen with -m.
Most failures fall into a few patterns: provider not running (connection refused/timeouts), model not installed or named differently (model not found), and backend runtime issues that only appear in provider logs (GPU/CPU memory pressure, driver faults, corrupted model files, or crashes). Trusted-directory restrictions can also prevent local execution until the working directory is marked as trusted.
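Before involving Codex at all, a quick reachability probe can separate "provider not running" from the other failure patterns. The sketch below checks the stock local ports (11434 for Ollama, 1234 for LM Studio); both are defaults and may differ if your setup was reconfigured:

```shell
# Probe each provider's default local port; a refused connection means
# the server is not running or listens on a non-default port.
# 11434 (Ollama) and 1234 (LM Studio) are the stock defaults.
for probe in "ollama|http://localhost:11434/api/tags" \
             "lmstudio|http://localhost:1234/v1/models"; do
  name=${probe%%|*}
  url=${probe#*|}
  if curl -sf --max-time 2 "$url" >/dev/null; then
    echo "$name: reachable"
  else
    echo "$name: not reachable"
  fi
done
```

If a provider reports "not reachable", start it (or fix its port) before debugging model names.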
Related: How to use local models with Codex
Steps to troubleshoot local models in Codex:
- Run a quick request against the Ollama provider using a model name available in Ollama.
$ codex exec --oss --local-provider ollama -m llama3.2 "Return OK."
OK
llama3.2 is an example. Replace it with the exact model identifier shown by the provider.
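To confirm the exact identifier before pointing Codex at it, list what Ollama has installed; the NAME column is what -m expects:

```shell
# List installed Ollama models; pull the model first if it is missing.
# The guard keeps the snippet safe on machines without the ollama CLI.
if command -v ollama >/dev/null 2>&1; then
  ollama list || echo "ollama server not responding"
  # ollama pull llama3.2   # fetch the model if it is not listed
else
  echo "ollama CLI not found in PATH"
fi
```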
Related: How to download models with Ollama
Related: How to fix Codex trusted directory errors
- Run a quick request against the LM Studio provider using the model identifier exposed by the local server.
$ codex exec --oss --local-provider lmstudio -m "llama-3.2-3b-instruct" "Return OK."
OK
Keep the model name exactly as shown in LM Studio, including hyphens and case.
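LM Studio's local server speaks an OpenAI-compatible API, so its GET /v1/models endpoint returns the exact identifiers to pass with -m. Port 1234 is the default and is an assumption here:

```shell
# Ask the LM Studio server which model identifiers it exposes.
curl -sf http://localhost:1234/v1/models \
  || echo "LM Studio server not reachable on localhost:1234"
```

Copy the identifier verbatim from this response rather than retyping it.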
Related: How to enable the local server in LM Studio
Related: How to fix Codex trusted directory errors
- Repeat the successful provider test using the intended local model name to confirm the target model is reachable.
$ codex exec --oss --local-provider "<provider>" -m "<model-id>" "Return OK."
OK
Use ollama or lmstudio for <provider>.
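As a sanity check before retrying Codex, you can verify the intended model id actually appears in the provider's listing. This sketch uses Ollama's /api/tags endpoint on its default port (the LM Studio equivalent is /v1/models); MODEL is a placeholder for your model id:

```shell
# Check that the intended model id is listed by the provider.
# /api/tags is Ollama's model-listing endpoint; port 11434 is the default.
MODEL="llama3.2"   # replace with your model id
if curl -sf --max-time 2 http://localhost:11434/api/tags \
     | grep -q "\"name\":\"$MODEL"; then
  echo "$MODEL is available in Ollama"
else
  echo "$MODEL not listed (or server unreachable)"
fi
```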
- Run the same prompt on a hosted model to confirm Codex itself is working.
$ codex exec -m gpt-5.2-codex "Return OK."
OK
If hosted runs succeed while local runs fail, the issue is likely in the local provider or model configuration.
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
