
GPT-5 Codex

GPT-5 Codex is OpenAI's latest model, offering advanced reasoning and code generation capabilities.

Overview

GPT-5 Codex provides:

  • Advanced reasoning - Complex problem solving
  • Large context - Understands large codebases
  • Code expertise - Trained on a vast code corpus
  • Tool use - Function calling capabilities (see the sketch below)
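
As an illustration of the tool-use point, here is a minimal function-calling sketch using the OpenAI Node SDK. The tool name and schema are hypothetical, and the gpt-5 model id matches the configuration shown later on this page:

import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await client.chat.completions.create({
  model: "gpt-5",
  messages: [{ role: "user", content: "What files changed in src/auth?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "list_changed_files", // hypothetical helper exposed to the model
        description: "List files changed in a directory",
        parameters: {
          type: "object",
          properties: { path: { type: "string" } },
          required: ["path"],
        },
      },
    },
  ],
});

// Instead of plain text, the model may answer with a tool call to execute.
const toolCalls = response.choices[0].message.tool_calls;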

When to Use Codex

Ideal For

  • ✅ Complex algorithms
  • ✅ Architecture decisions
  • ✅ Code optimization
  • ✅ Advanced debugging
  • ✅ Large refactoring

Less Ideal For

  • ⚠️ Budget-conscious projects (expensive)
  • ⚠️ Simple tasks (overkill)
  • ⚠️ Real-time applications (slower)

Setup

API Key

  1. Go to platform.openai.com
  2. Create an API key
  3. Add billing (required for GPT-5)
Then export the key in your shell:

export OPENAI_API_KEY="your-openai-api-key"
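
To confirm the key is picked up, you can list the models available to your account with the OpenAI Node SDK (a minimal sketch; it assumes the openai npm package is installed):

import OpenAI from "openai";

const client = new OpenAI(); // uses OPENAI_API_KEY from the environment

// Succeeds only if the key is valid; the list should include your target model.
const models = await client.models.list();
console.log(models.data.map((m) => m.id));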

Configuration

{
  "agents": {
    "codex": {
      "enabled": true,
      "model": "gpt-5",
      "autonomy": "workspace-write"
    }
  }
}

Model Options

| Model | Context | Speed | Cost |
| --- | --- | --- | --- |
| gpt-5 | 128K | Medium | $$$$ |
| gpt-4-turbo | 128K | Fast | $$$ |
| gpt-4o | 128K | Faster | $$ |

Usage

From UI

  1. Open task
  2. Click "Run Agent"
  3. Select "GPT-5 Codex"
  4. Start

From CLI

friday-dev run --task 123 --agent codex

Capabilities

Code Generation

Codex excels at generating complex code:

## Task
Implement a binary search tree with:
- Insert, delete, search operations
- In-order, pre-order, post-order traversal
- Self-balancing (AVL or Red-Black)
- TypeScript with full type safety
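
For orientation, the type-safe skeleton such a task starts from might look like the following (a minimal sketch without the self-balancing logic, which is the part the agent would implement):

// Minimal BST skeleton; AVL/Red-Black rotations are omitted.
class BSTNode<T> {
  left: BSTNode<T> | null = null;
  right: BSTNode<T> | null = null;
  constructor(public value: T) {}
}

// Insert uses a caller-supplied comparator so T stays fully generic.
function insert<T>(
  root: BSTNode<T> | null,
  value: T,
  cmp: (a: T, b: T) => number
): BSTNode<T> {
  if (root === null) return new BSTNode(value);
  if (cmp(value, root.value) < 0) root.left = insert(root.left, value, cmp);
  else root.right = insert(root.right, value, cmp);
  return root;
}

// In-order traversal visits values in ascending order.
function inOrder<T>(node: BSTNode<T> | null, visit: (v: T) => void): void {
  if (!node) return;
  inOrder(node.left, visit);
  visit(node.value);
  inOrder(node.right, visit);
}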

Algorithm Optimization

## Task
Optimize the current O(n²) search to O(log n)

Current implementation is in src/search/linear.ts
Suggest data structure changes if needed
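
The usual shape of that change is to keep the data sorted (or indexed) so each lookup becomes a binary search. A sketch of the target, not the actual src/search/linear.ts code:

// O(log n) lookup over a sorted array; returns the index or -1.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}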

Architecture Design

## Task
Design the event-driven architecture for:
- Real-time notifications
- Message queuing
- At-least-once delivery guarantee
- Horizontal scaling support
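
To make the deliverables concrete, here are illustrative TypeScript contracts for such a system; all names are hypothetical:

// Hypothetical contracts for an event-driven, at-least-once message queue.
interface Message<T> {
  id: string;       // unique id, used by consumers to deduplicate redeliveries
  payload: T;
  attempts: number; // delivery attempts so far
}

interface MessageQueue<T> {
  publish(topic: string, payload: T): Promise<void>;
  // The broker redelivers until ack() is called, which yields
  // at-least-once semantics; handlers therefore must be idempotent.
  subscribe(
    topic: string,
    handler: (msg: Message<T>, ack: () => Promise<void>) => Promise<void>
  ): void;
}

At-least-once delivery implies duplicates under failure, so consumers should deduplicate on Message.id; horizontal scaling then comes from running multiple subscribers per topic.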

Example Output

Complex Algorithm

Task: "Implement LRU cache with O(1) operations"

/**
 * LRU Cache implementation using HashMap + Doubly Linked List
 *
 * Time Complexity:
 * - get: O(1)
 * - put: O(1)
 *
 * Space Complexity: O(capacity)
 */
class LRUCache<K, V> {
  private capacity: number;
  private cache: Map<K, DoublyLinkedListNode<K, V>>;
  private head: DoublyLinkedListNode<K, V>;
  private tail: DoublyLinkedListNode<K, V>;

  constructor(capacity: number) {
    this.capacity = capacity;
    this.cache = new Map();

    // Sentinel nodes
    this.head = new DoublyLinkedListNode(null as K, null as V);
    this.tail = new DoublyLinkedListNode(null as K, null as V);
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  get(key: K): V | undefined {
    const node = this.cache.get(key);
    if (!node) return undefined;

    // Move to front (most recently used)
    this.removeNode(node);
    this.addToFront(node);

    return node.value;
  }

  put(key: K, value: V): void {
    if (this.cache.has(key)) {
      // Update existing
      const node = this.cache.get(key)!;
      node.value = value;
      this.removeNode(node);
      this.addToFront(node);
    } else {
      // Add new
      const node = new DoublyLinkedListNode(key, value);

      if (this.cache.size >= this.capacity) {
        // Evict least recently used
        const lru = this.tail.prev!;
        this.removeNode(lru);
        this.cache.delete(lru.key);
      }

      this.addToFront(node);
      this.cache.set(key, node);
    }
  }

  private addToFront(node: DoublyLinkedListNode<K, V>): void {
    node.next = this.head.next;
    node.prev = this.head;
    this.head.next!.prev = node;
    this.head.next = node;
  }

  private removeNode(node: DoublyLinkedListNode<K, V>): void {
    node.prev!.next = node.next;
    node.next!.prev = node.prev;
  }
}

class DoublyLinkedListNode<K, V> {
  key: K;
  value: V;
  prev: DoublyLinkedListNode<K, V> | null = null;
  next: DoublyLinkedListNode<K, V> | null = null;

  constructor(key: K, value: V) {
    this.key = key;
    this.value = value;
  }
}
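
A short usage example of the generated class:

const cache = new LRUCache<string, number>(2);
cache.put("a", 1);
cache.put("b", 2);
cache.get("a");    // 1, and "a" becomes most recently used
cache.put("c", 3); // capacity exceeded: evicts "b", the least recently used
cache.get("b");    // undefined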

Best Practices

Leverage Context Window

Codex can handle large contexts:

## Task
Refactor the entire authentication module

## Context
Include all files:
- src/auth/*.ts (8 files)
- src/middleware/auth.ts
- src/types/auth.ts
- tests/auth/*.test.ts

Maintain backward compatibility with existing API

Ask for Explanations

## Task
Optimize database queries in UserService

## Requirements
- Explain current performance issues
- Show before/after query plans
- Document the optimization strategy

Pricing

GPT-5 is OpenAI's premium model and carries the highest per-token cost of the options above ($$$$); check OpenAI's pricing page for current rates.

Cost Optimization

  1. Use GPT-4o for simpler tasks (see the routing sketch after this list)
  2. Reserve GPT-5 for complex problems
  3. Be specific to reduce token usage
  4. Use shorter context when possible
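
One way to apply the first two points is a small router that selects the model per task. The complexity classification here is a hypothetical placeholder for whatever heuristic your project uses:

// Hypothetical per-task routing: cheap model by default, gpt-5 on demand.
type TaskComplexity = "simple" | "complex";

function pickModel(complexity: TaskComplexity): "gpt-5" | "gpt-4o" {
  return complexity === "complex" ? "gpt-5" : "gpt-4o";
}

pickModel("simple");  // "gpt-4o", e.g. for a one-line rename
pickModel("complex"); // "gpt-5", e.g. for a cross-module refactor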

Comparison

| Feature | Codex | Claude | Gemini |
| --- | --- | --- | --- |
| Reasoning | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Speed | ⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Cost | $$$$ | | Free/$ |
| Context | 128K | 200K | 1M |

Troubleshooting

API Errors

Error: insufficient_quota

Solutions:

  1. Add billing to OpenAI account
  2. Check spending limits
  3. Use a different model
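
If you want to detect this condition in code, the OpenAI Node SDK surfaces it as an APIError; a hedged sketch:

import OpenAI from "openai";

const client = new OpenAI();

try {
  await client.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "ping" }],
  });
} catch (err) {
  // Quota failures arrive as an APIError with code "insufficient_quota".
  if (err instanceof OpenAI.APIError && err.code === "insufficient_quota") {
    console.error("Add billing or raise limits, or fall back to a cheaper model.");
  } else {
    throw err;
  }
}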

Slow Responses

GPT-5 prioritizes quality over latency, so responses can take longer. To speed things up:

  1. Use GPT-4o for faster responses
  2. Reduce context size
  3. Break into smaller tasks

Quality Issues

If output isn't as expected:

  1. Provide more context
  2. Add examples
  3. Be explicit about requirements

Next Steps