gpt-oss-120b

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases.

Context: 131K
Max output: 131K
Capabilities: tool calling, structured output, reasoning, JSON mode, streaming, fine-tuning, batch
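Capabilities such as tool calling, JSON mode, and streaming are typically exercised through an OpenAI-compatible chat completions request body. A minimal sketch of such a payload follows; the field names follow the OpenAI convention and are an assumption, not something this page confirms:

```python
import json

# Hypothetical request body for gpt-oss-120b via an OpenAI-compatible
# chat completions API. Field names ("response_format", "stream") follow
# the OpenAI convention and are assumptions, not confirmed by this page.
payload = {
    "model": "baseten/gpt-oss-120b",
    "messages": [
        {"role": "user", "content": "List three prime numbers as JSON."}
    ],
    "response_format": {"type": "json_object"},  # JSON mode
    "stream": True,                              # streaming
}

# Serialize to the JSON body that would be POSTed to the endpoint.
body = json.dumps(payload)
print(body)
```

Structured output and tool calling would add `tools` / schema fields to the same payload shape; consult the provider's API reference for the exact contract.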
Details

Provider: Baseten
Creator: OpenAI
Family: gpt-oss
Status: active
Input modalities: text
Output modalities: text
Source: official
Last updated: 2026-03-21
gpt-oss family

Model          Context  Max output
gpt-oss-120b   131K     131K
API

GET /v1/models/baseten/gpt-oss-120b
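A quick way to construct the metadata URL for this endpoint; the base URL below is a hypothetical placeholder, not the provider's actual API host:

```python
import urllib.parse

# Hypothetical API host; substitute your provider's actual base URL.
BASE_URL = "https://api.example.com"

def model_url(model_id: str) -> str:
    # Build the GET /v1/models/{model_id} URL shown above.
    return urllib.parse.urljoin(BASE_URL, f"/v1/models/{model_id}")

print(model_url("baseten/gpt-oss-120b"))
# -> https://api.example.com/v1/models/baseten/gpt-oss-120b
```

An actual request would add an Authorization header with the provider's API key and issue a GET against this URL.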


Changes · 1 entry

gpt-oss-120b · create · 8b78603 · Mar 21, 2026, 05:16 AM