@thnk2wn thnk2wn commented Jan 16, 2026

Add configurable verbosity for OpenAI Responses API

Fixes #2775

Problem

Models like gpt-5.2-codex and other newer OpenAI models only support medium reasoning and verbosity levels, but the codebase used low by default. This caused 400 Bad Request errors:

Failed to stream openai-responses chat: openai 400 Bad Request:
Unsupported value: 'low' is not supported with the 'gpt-5.2-codex' model.
Supported values are: 'medium'.

Solution

This PR implements a scalable, user-configurable approach instead of hardcoding model-specific constraints:

  1. Changed default verbosity from "low" to "medium" - more widely supported across OpenAI models
  2. Added ai:verbosity config option - allows users to configure verbosity per model in waveai.json
  3. Changed rate limit fallback from low to medium thinking level for better compatibility
  4. Removed hardcoded model checks - solution is scalable for future models

Changes

Backend Changes

  • pkg/aiusechat/openai/openai-convertmessage.go - Use configurable verbosity with safe defaults
  • pkg/aiusechat/uctypes/uctypes.go - Add Verbosity field to AIOptsType
  • pkg/aiusechat/usechat.go - Pass verbosity from config to options
  • pkg/wconfig/settingsconfig.go - Add Verbosity to AIModeConfigType
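
The "configurable verbosity with safe defaults" wiring might look roughly like this (a minimal sketch; `resolveVerbosity` and the exact field layout are illustrative, not the actual waveterm code — only `OpenAIDefaultVerbosity`, the `Verbosity` field, and the "medium" default come from the PR):

```go
package main

import "fmt"

// OpenAIDefaultVerbosity mirrors the new default described in the PR.
const OpenAIDefaultVerbosity = "medium"

// AIOptsType sketches the options struct; only the new field is shown.
type AIOptsType struct {
	Verbosity string // "low", "medium", or "high"; empty means unset
}

// resolveVerbosity falls back to the safe default when the user configured
// nothing, instead of hardcoding per-model constraints.
func resolveVerbosity(opts *AIOptsType) string {
	if opts != nil && opts.Verbosity != "" {
		return opts.Verbosity
	}
	return OpenAIDefaultVerbosity
}

func main() {
	fmt.Println(resolveVerbosity(nil))                            // unset: falls back to default
	fmt.Println(resolveVerbosity(&AIOptsType{Verbosity: "high"})) // explicit config wins
}
```

Because the default lives in one place and user config always wins, adding support for a future model needs no code change — only a `waveai.json` entry.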

Schema Changes

  • schema/waveai.json - Add ai:verbosity with enum values (low/medium/high)
  • frontend/types/gotypes.d.ts - Auto-generated TypeScript types
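
The schema addition might look along these lines (the property name and enum come from the PR; the surrounding schema structure and description text are assumptions):

```json
{
  "ai:verbosity": {
    "type": "string",
    "enum": ["low", "medium", "high"],
    "description": "Response verbosity for the OpenAI Responses API (defaults to medium)"
  }
}
```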

Configuration Example

Users can now configure both thinking level and verbosity per model:

{
  "openai-gpt52-codex": {
    "display:name": "GPT-5.2 Codex",
    "ai:provider": "openai",
    "ai:model": "gpt-5.2-codex",
    "ai:thinkinglevel": "medium",
    "ai:verbosity": "medium"
  }
}


CLAassistant commented Jan 16, 2026

CLA assistant check
All committers have signed the CLA.


coderabbitai bot commented Jan 16, 2026

Walkthrough

This pull request adds an optional AI verbosity configuration field across the codebase. It introduces "ai:verbosity" (enum "low", "medium", "high") to TypeScript, Go structs, and the JSON schema; adds Verbosity to AIOptsType; sets a default OpenAIDefaultVerbosity = "medium" and wires verbosity into OpenAI request construction; and adjusts premium rate-limit thinking level handling to use medium instead of low.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)
  • Title check ✅ Passed: The title accurately summarizes the main change, adding configurable verbosity for the OpenAI Responses API, which is the primary objective of this PR.
  • Description check ✅ Passed: The description is detailed and directly related to the changeset, explaining the problem and solution and providing configuration examples that match the code changes.
  • Linked Issues check ✅ Passed: The PR addresses all coding requirements from issue #2775: changed default verbosity to 'medium', added a configurable ai:verbosity option, updated the rate-limit fallback, and removed hardcoded model checks.
  • Out of Scope Changes check ✅ Passed: All changes are scoped to implementing configurable verbosity for the OpenAI Responses API; no unrelated modifications are present.


- Add ai:verbosity config option to AIModeConfigType schema
- Support low/medium/high verbosity levels (defaults to medium)
- Use medium verbosity by default for better model compatibility
- Change rate limit fallback from low to medium thinking level
- Remove hardcoded model-specific constraints in favor of user config
- Document that verbosity is OpenAI Responses API specific

Fixes issue where models like gpt-5.2-codex only support medium
verbosity/reasoning levels, causing 400 Bad Request errors with
unsupported 'low' values.

Users can now configure both ai:thinkinglevel and ai:verbosity per
model in their waveai.json config files.
@thnk2wn thnk2wn force-pushed the fix/openai-responses-model-constraints branch 2 times, most recently from c4287bc to b87d4c0 on January 17, 2026 at 00:54
01NeuralNinja added a commit to 01NeuralNinja/waveterm that referenced this pull request Jan 17, 2026
…onses API

This PR adds support for configurable verbosity levels (low/medium/high)
for the OpenAI Responses API, which enables better support for APIs like
Codex that require different verbosity settings.

Changes:
- Add ai:verbosity configuration option
- Update OpenAI request building to use configured verbosity
- Default verbosity changed from 'low' to 'medium' for better compatibility
- Add verbosity to schema and type definitions
@sawka sawka added the maintainer-interest Indicates maintainer interest after review; merge is likely but not guaranteed. label Jan 22, 2026

Development

Successfully merging this pull request may close these issues.

[Bug]: Failed to stream openai-responses chat: openai 400 Bad Request: Unsupported value: 'low' is not supported
