
feat: add OpenAI Responses API provider#255

Open
ObesityChow wants to merge 4 commits into theJayTea:main from ObesityChow:feat/openai-responses-provider

Conversation

ObesityChow commented Mar 23, 2026

What

This PR now includes three related updates:

  1. Windows/Linux: add a new OpenAI Responses API provider
  2. macOS: fix Privacy & Security deep links across macOS versions
  3. macOS OpenAI provider: improve base URL handling and add an optional Force Streaming mode

Included

1) Windows/Linux — OpenAI Responses API provider

  • new selectable provider: OpenAI Responses API
  • uses POST /v1/responses
  • parses output from response.output[0].content[0].text
  • supports stateful follow-up via previous_response_id
  • configurable API key, base URL, and model
  • model dropdown includes gpt-4o-mini, gpt-4o, o3-mini, o1, plus custom

Files:

  • Windows_and_Linux/aiprovider.py
  • Windows_and_Linux/WritingToolApp.py
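The request/parse flow of the new provider can be sketched roughly as follows. This is an illustrative outline, not the PR's actual code: the function names (`build_responses_payload`, `parse_responses_output`) are hypothetical, and only the endpoint shape, the `input` field, the `previous_response_id` chaining, and the `output[0].content[0].text` parse path come from the PR description.

```python
def build_responses_payload(model, text, previous_response_id=None):
    """Build a POST /v1/responses body.

    Follow-up turns are chained statefully via previous_response_id
    instead of replaying the full message history.
    """
    payload = {"model": model, "input": text}
    if previous_response_id is not None:
        payload["previous_response_id"] = previous_response_id
    return payload


def parse_responses_output(response_json):
    """Extract the text from response.output[0].content[0].text."""
    return response_json["output"][0]["content"][0]["text"]
```

A follow-up question would then pass the cached id of the previous response (the PR caches it as `_last_response_id`) into `build_responses_payload`, so the server reconstructs the conversation state.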

2) macOS — Privacy & Security settings fallback

  • replaces the old single URL scheme approach with fallback handling
  • tries the x-apple.systempreferences: scheme first (the x-apple.systemsettings: scheme stopped working on newer macOS)
  • falls back to x-apple.systemsettings: for older macOS versions when needed

Files:

  • macOS/WritingTools/Views/Onboarding/OnboardingView.swift
  • macOS/WritingTools/Views/OnboardingPermissionsStep.swift

3) macOS — OpenAI base URL + Force Streaming

  • fixes default/custom OpenAI base URL behavior so API requests consistently target the API path
  • adds a placeholder hint for the Base URL field
  • adds a Force Streaming option for OpenAI-compatible providers
  • when enabled, uses streaming internally and accumulates chunks into one final response
  • useful for third-party proxies that require stream=true

Files:

  • macOS/WritingTools/Models/Providers/OpenAIProvider.swift
  • macOS/WritingTools/Views/Settings/Providers/OpenAISettingsView.swift
  • macOS/WritingTools/App/AppSettings.swift
  • macOS/WritingTools/App/AppState.swift
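The base URL fix amounts to normalizing user-entered URLs so requests always hit the API version path. The actual change is in Swift (`OpenAIProvider.swift`); the sketch below is a hedged Python illustration of the same logic, with hypothetical function names.

```python
def normalize_base_url(base_url):
    """Append /v1 when a custom base URL lacks the API version path,
    so a bare host like https://newapi.example.com does not end up
    sending requests to /chat/completions on the web UI."""
    url = base_url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url


def chat_completions_url(base_url):
    """Full endpoint URL for a chat-completions request."""
    return normalize_base_url(base_url) + "/chat/completions"
```

Under this scheme a default URL such as `https://api.openai.com/v1` passes through unchanged, while a bare custom host gains the `/v1` segment before `/chat/completions` is appended.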

Notes

  • existing provider behavior stays unchanged unless the new options are selected
  • API key storage for the new Windows/Linux provider follows the existing obfuscation pattern
  • this branch currently contains both the new Responses API provider and the follow-up macOS/OpenAI fixes listed above

Hermanito Ed and others added 4 commits March 23, 2026 19:59
Add OpenAIResponsesProvider as a selectable provider option alongside
the existing Gemini, OpenAI Compatible, and Ollama providers.

Key differences from the existing OpenAICompatibleProvider:
- Uses /v1/responses endpoint (not /v1/chat/completions)
- Input field is 'input' (list of message dicts or string)
- Output parsed from response.output[0].content[0].text
- Stateful multi-turn via previous_response_id (no full history replay)

Changes:
- Windows_and_Linux/aiprovider.py: add OpenAIResponsesProvider class
  - API key, base URL, model dropdown (gpt-4o-mini / gpt-4o / o3-mini / o1 + custom)
  - Obfuscated key storage matching existing pattern
  - Stateful follow-up caching via _last_response_id
- Windows_and_Linux/WritingToolApp.py:
  - Import and register OpenAIResponsesProvider
  - Add provider-specific branch in process_followup_question for
    stateful Responses API multi-turn
…macOS versions

The x-apple.systemsettings: scheme stopped working on macOS 26+.
Consolidate all Privacy pane opening logic into a single helper that
tries x-apple.systempreferences: first and falls back to
x-apple.systemsettings: for older versions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Custom base URLs without /v1 (e.g. https://newapi.example.com) would
result in requests to /chat/completions instead of /v1/chat/completions,
hitting the web UI instead of the API and causing "Failed to parse API
response" errors.

Also add placeholder hint in the Base URL text field.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Some third-party API proxies require stream=true and reject non-streaming
requests. Add a toggle in OpenAI settings that, when enabled, uses SSE
streaming internally and accumulates chunks into a single response.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
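The Force Streaming accumulation described in this commit can be sketched as follows. The real implementation is Swift SSE handling inside `OpenAIProvider.swift`; this Python outline only illustrates the idea of joining streamed `delta` chunks into one final response, and the function name is hypothetical.

```python
import json


def accumulate_sse_chunks(sse_lines):
    """Accumulate 'data: {...}' lines from a stream=true
    chat-completions response into a single response string."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip comments, blank keep-alive lines, etc.
        data = line[len("data: "):]
        if data.strip() == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(data)["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)
```

With this shape, proxies that reject non-streaming requests still work: the client always sends `stream=true`, and the caller receives one accumulated string as if the request had been non-streaming.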