Add configurable verbosity for OpenAI Responses API #2776
base: main
Conversation
CodeRabbit walkthrough: This pull request adds an optional AI verbosity configuration field across the codebase.
- Add `ai:verbosity` config option to `AIModeConfigType` schema
- Support low/medium/high verbosity levels (defaults to medium)
- Use medium verbosity by default for better model compatibility
- Change rate limit fallback from low to medium thinking level
- Remove hardcoded model-specific constraints in favor of user config
- Document that verbosity is OpenAI Responses API specific

Fixes issue where models like gpt-5.2-codex only support medium verbosity/reasoning levels, causing 400 Bad Request errors with unsupported 'low' values. Users can now configure both `ai:thinkinglevel` and `ai:verbosity` per model in their `waveai.json` config files.
Force-pushed c4287bc to b87d4c0
Add configurable verbosity for OpenAI Responses API

This PR adds support for configurable verbosity levels (low/medium/high) for the OpenAI Responses API, which enables better support for APIs like Codex that require different verbosity settings.

Changes:
- Add `ai:verbosity` configuration option
- Update OpenAI request building to use configured verbosity
- Default verbosity changed from 'low' to 'medium' for better compatibility
- Add verbosity to schema and type definitions
Add configurable verbosity for OpenAI Responses API
Fixes #2775
Problem
Models like `gpt-5.2-codex` and other newer OpenAI models only support `medium` reasoning and verbosity levels, but the codebase was using `low` by default. This caused 400 Bad Request errors.

Solution
This PR implements a scalable, user-configurable approach instead of hardcoding model-specific constraints:
- Changed the default verbosity from `"low"` to `"medium"` - more widely supported across OpenAI models
- Added an `ai:verbosity` config option - allows users to configure verbosity per model in `waveai.json`
- Changed the rate limit fallback from `low` to `medium` thinking level for better compatibility

Changes
Backend Changes
- `pkg/aiusechat/openai/openai-convertmessage.go` - Use configurable verbosity with safe defaults
- `pkg/aiusechat/uctypes/uctypes.go` - Add `Verbosity` field to `AIOptsType`
- `pkg/aiusechat/usechat.go` - Pass verbosity from config to options
- `pkg/wconfig/settingsconfig.go` - Add `Verbosity` to `AIModeConfigType`

Schema Changes
- `schema/waveai.json` - Add `ai:verbosity` with enum values (low/medium/high)
- `frontend/types/gotypes.d.ts` - Auto-generated TypeScript types

Configuration Example
Users can now configure both thinking level and verbosity per model:
```json
{
  "openai-gpt52-codex": {
    "display:name": "GPT-5.2 Codex",
    "ai:provider": "openai",
    "ai:model": "gpt-5.2-codex",
    "ai:thinkinglevel": "medium",
    "ai:verbosity": "medium"
  }
}
```
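The "configurable verbosity with safe defaults" behavior described above can be sketched in Go. This is an illustrative sketch, not the actual Wave Terminal code: the helper `resolveVerbosity` and the simplified `AIOptsType` struct are hypothetical names, showing only the pattern of validating a user-supplied value and falling back to `"medium"` (the most widely supported level) when the config key is unset or invalid.

```go
package main

import "fmt"

// Verbosity levels accepted by the OpenAI Responses API.
const (
	VerbosityLow    = "low"
	VerbosityMedium = "medium"
	VerbosityHigh   = "high"
)

// AIOptsType is a simplified stand-in for the options struct the PR
// extends; only the field relevant to this sketch is shown.
type AIOptsType struct {
	Verbosity string // value of the ai:verbosity config key; may be empty
}

// resolveVerbosity returns the user-configured verbosity when it is a
// recognized level, and falls back to "medium" otherwise. This avoids
// sending unsupported values like "low" to models that reject them
// with a 400 Bad Request.
func resolveVerbosity(opts AIOptsType) string {
	switch opts.Verbosity {
	case VerbosityLow, VerbosityMedium, VerbosityHigh:
		return opts.Verbosity
	default:
		return VerbosityMedium
	}
}

func main() {
	fmt.Println(resolveVerbosity(AIOptsType{}))                  // unset -> medium
	fmt.Println(resolveVerbosity(AIOptsType{Verbosity: "high"})) // configured -> high
}
```

The key design point is that the default lives in one resolution helper rather than being hardcoded per model, so supporting a new model's constraints is a config change (`ai:verbosity` in `waveai.json`) rather than a code change.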