Using LM Studio models with Codex keeps prompts on the local machine while still allowing Codex to inspect files, edit code, and run commands in the current workspace.

The Codex CLI's --oss flag selects a local provider, and --local-provider lmstudio routes requests to LM Studio's local OpenAI-compatible API. The model value passed with -m must match a model identifier exposed by the LM Studio server.

LM Studio must keep its local server running while Codex is active, and the target model must already be downloaded or loaded before the prompt run starts. Smaller local models can run out of context or tool-calling headroom during longer coding sessions. Keep the server bound to localhost unless remote access is intentional.
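As a quick pre-flight check, the lms CLI can confirm the model is present before any prompt runs; the subcommands below assume a recent LM Studio CLI (lms ls lists downloaded models, lms ps lists models currently loaded into memory), so check lms --help if they differ.

    $ lms ls
    $ lms ps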

Steps to use LM Studio models with Codex:

  1. Start the LM Studio local server on port 1234.
    $ lms server start --port 1234

    If lms is not on the shell path, start the server from LM Studio's Local Server instead.
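
    To verify the server came up, the lms CLI can report its state; the server status subcommand is assumed here and may vary by LM Studio version.
    $ lms server status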

  2. Confirm the local server exposes the available model list before running Codex.
    $ curl http://localhost:1234/v1/models
    {"data":[{"id":"openai/gpt-oss-20b","object":"model"}]}

    The exact id value can differ. Use the model identifier returned by LM Studio in the next command.
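
    When the id needs to be captured in a script, the same endpoint can be filtered; this assumes jq is installed.
    $ curl -s http://localhost:1234/v1/models | jq -r '.data[].id'
    openai/gpt-oss-20b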

  3. Run Codex against the LM Studio provider with an exact LM Studio model identifier.
    $ codex exec --oss --local-provider lmstudio -m openai/gpt-oss-20b "Return OK."
    OK

    --local-provider lmstudio keeps the run on the LM Studio backend even when another local provider is configured elsewhere.
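
    For a persistent setup, Codex can also read provider settings from its config file instead of repeated flags. A minimal sketch, assuming ~/.codex/config.toml and the model_providers table; key names should be checked against the installed Codex version.
    $ cat ~/.codex/config.toml
    model = "openai/gpt-oss-20b"
    model_provider = "lmstudio"
    [model_providers.lmstudio]
    name = "LM Studio"
    base_url = "http://localhost:1234/v1"
    wire_api = "chat"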

  4. Save the final assistant message to a file when the result needs reuse outside the terminal.
    $ codex exec --oss --local-provider lmstudio -m openai/gpt-oss-20b --output-last-message /tmp/codex-lmstudio.txt "Return OK."
    OK

    --output-last-message writes only the final assistant reply, which is useful for scripts and comparisons; a scripted check is sketched after these steps.

  5. Confirm the saved reply matches the expected output.
    $ cat /tmp/codex-lmstudio.txt
    OK
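
As noted in step 4, the saved reply can drive scripted checks. A minimal sketch, reusing the file written above and assuming the expected reply is exactly OK:

    $ grep -qx "OK" /tmp/codex-lmstudio.txt && echo "reply matches"
    reply matches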