Model selection in Codex controls cost, latency, and reasoning depth, so choosing a model deliberately keeps results predictable for each task.

The Codex CLI defaults to gpt-5.2-codex on macOS and Linux, and gpt-5 on Windows. Passing -m or --model overrides the model for a single invocation, which makes it practical to compare outputs across models without changing global settings.

Model IDs are gated by account access and may differ between environments, so a request for an unavailable model fails at runtime. Pinning an explicit model in scripts reduces surprises when defaults change, and keeping a known-good fallback model name avoids blocked workflows.
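The pin-plus-fallback pattern can be sketched in a few lines of shell. This is a minimal sketch, not the CLI's own mechanism: `run_with_model` is a stand-in function that simulates the real call (which would be `codex exec -m "$1" …`) so the fallback logic is runnable even without the CLI, and the model IDs are examples.

```shell
#!/bin/sh
# Sketch: pin a primary model, fall back to a second one when the
# first is not enabled for the account. run_with_model is a stand-in
# for the real `codex exec -m "$1" ...` invocation.
run_with_model() {
  case "$1" in
    gpt-5.2-codex) return 1 ;;  # simulate: primary model not enabled
    *) echo "OK" ;;
  esac
}

PRIMARY=gpt-5.2-codex   # pinned model (example ID)
FALLBACK=gpt-5          # known-good fallback (example ID)

# The || chain only invokes the fallback if the pinned model fails.
run_with_model "$PRIMARY" || run_with_model "$FALLBACK"
```

With the real CLI, the same `||` chain works because codex exits nonzero when the requested model is unavailable.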

Steps to run Codex with a specific model:

  1. Set a shell variable for the model ID you want to test.
    $ MODEL=gpt-5.2-codex
  2. Run exec with an explicit model override and save the output.
    $ codex exec -m "$MODEL" --output-last-message /tmp/codex-$MODEL.txt -C /home/user/projects/example-repo "Return OK."
    OK.

    Use the exact model ID (for example gpt-5.2-codex) enabled for the active login.

  3. Repeat the run with a different model to compare output, latency, and reasoning depth.
    $ MODEL=gpt-5
    $ codex exec -m "$MODEL" --output-last-message /tmp/codex-$MODEL.txt -C /home/user/projects/example-repo "Return OK."
    OK

    Keep the prompt identical for an apples-to-apples comparison.

  4. Compare the saved responses from both models.
    $ diff -u /tmp/codex-gpt-5.2-codex.txt /tmp/codex-gpt-5.txt
    --- /tmp/codex-gpt-5.2-codex.txt
    +++ /tmp/codex-gpt-5.txt
    @@
    -OK.
    \ No newline at end of file
    +OK
    \ No newline at end of file

    No output means the files match.
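The steps above can be scripted to cover more than two models. A minimal sketch using the example model IDs from the steps; when the codex binary is absent, the loop writes placeholder files (an assumption made so the comparison logic stays runnable):

```shell
#!/bin/sh
# Sketch: run one prompt against several models, save each reply,
# then diff every file against the first. Substitute model IDs
# enabled for your login.
PROMPT="Return OK."
FILES=""
for MODEL in gpt-5.2-codex gpt-5; do
  OUT=/tmp/codex-$MODEL.txt
  if command -v codex >/dev/null 2>&1; then
    codex exec -m "$MODEL" --output-last-message "$OUT" "$PROMPT"
  else
    printf 'OK\n' >"$OUT"  # placeholder so the comparison below runs
  fi
  FILES="$FILES $OUT"
done

# Compare every saved response against the baseline (first file).
set -- $FILES
BASE=$1
shift
for F in "$@"; do
  diff -u "$BASE" "$F" || echo "differs: $F"
done
```

Keeping the prompt fixed inside the loop preserves the apples-to-apples comparison from step 3.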