Using LM Studio models with Codex keeps prompts on the local machine while still allowing Codex to inspect files, edit code, and run commands in the current workspace.
The Codex CLI's --oss flag targets local providers, and --local-provider lmstudio routes requests to LM Studio's OpenAI-compatible local API. The model value passed with -m must match a model identifier exposed by the LM Studio server.
LM Studio must keep its local server running while Codex is active, and the target model must already be downloaded or loaded before the prompt run starts. Smaller local models can run out of context or tool-calling headroom during longer coding sessions. Keep the server bound to localhost unless remote access is intentional.
Related: How to use local models with Codex
Related: How to troubleshoot local models in Codex
$ lms server start --port 1234
If lms is not on the shell path, start the server from LM Studio → Local Server instead.
$ curl http://localhost:1234/v1/models
{"data":[{"id":"openai/gpt-oss-20b","object":"model"}]}
The exact id value can differ. Use the model identifier returned by LM Studio in the next command.
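A wrapper script can pick the identifier out of the /v1/models response instead of hard-coding it. This is a minimal Python sketch that assumes the response shape shown above; in a live setup the payload would be fetched from http://localhost:1234/v1/models rather than embedded as a string.

```python
import json

# Sample /v1/models payload, shaped like the curl output above.
# The id on a given machine may differ.
SAMPLE = '{"data":[{"id":"openai/gpt-oss-20b","object":"model"}]}'

def first_model_id(payload: str) -> str:
    """Return the first model identifier from a /v1/models response."""
    models = json.loads(payload)["data"]
    if not models:
        raise RuntimeError("LM Studio server reports no loaded models")
    return models[0]["id"]

print(first_model_id(SAMPLE))  # openai/gpt-oss-20b
```

The returned string is what goes after -m in the codex command below.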
$ codex exec --oss --local-provider lmstudio -m openai/gpt-oss-20b "Return OK."
OK
--local-provider lmstudio keeps the run on the LM Studio backend even when another local provider is configured elsewhere.
$ codex exec --oss --local-provider lmstudio -m openai/gpt-oss-20b --output-last-message /tmp/codex-lmstudio.txt "Return OK."
OK
--output-last-message writes only the final assistant reply, which is useful for scripts and comparisons.
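A comparison script can read that file and check the reply directly. This is a hedged Python sketch: it writes the file itself to stand in for a codex run, since in a real script the codex invocation above would produce /tmp/codex-lmstudio.txt.

```python
import pathlib
import tempfile

# Stand-in for the file written by --output-last-message; the real path is
# whatever was passed on the codex command line.
out = pathlib.Path(tempfile.gettempdir()) / "codex-lmstudio.txt"
out.write_text("OK\n")  # simulated final assistant reply

reply = out.read_text().strip()
assert reply == "OK", f"unexpected final reply: {reply!r}"
print("final reply matched:", reply)
```

Because the file holds only the final assistant message, the comparison does not have to strip tool-call logs or intermediate turns.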
$ cat /tmp/codex-lmstudio.txt
OK