fix(openai-agents): Patch models functions following library refactor #5449
base: master
Conversation
Semver Impact of This PR
🟢 Patch (bug fixes)

📋 Changelog Preview
This is how your changes will appear in the changelog.

Bug Fixes 🐛
Openai Agents

Other
Documentation 📚
Internal Changes 🔧

🤖 This preview updates automatically when you update the PR.
Codecov Results 📊
❌ 25 failed | Total: 25 | Pass Rate: 0% | Execution Time: 1m 54s
❌ Patch coverage is 0.00%. Project has 17827 uncovered lines.
Files with missing lines (184)

Generated by Codecov Action
ericapisani left a comment:
Small things, but otherwise LGTM 🚀
```python
@wraps(original_fetch_response)
async def wrapped_fetch_response(*args: "Any", **kwargs: "Any") -> "Any":
    response = await original_fetch_response(*args, **kwargs)
    if hasattr(response, "model") and response.model:
```
Nitpick: an alternative approach that would make this conditional more concise (by dropping the and clause) is the following:
```diff
-if hasattr(response, "model") and response.model:
+if getattr(response, "model", None):
```
This would also make things consistent with what you have on line 119 within the wrapped_get_response function.
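As a standalone illustration (not code from this PR), getattr with a None default folds the existence check and the truthiness check into a single expression:

```python
class Response:
    pass

r = Response()
assert not getattr(r, "model", None)  # attribute missing -> None, which is falsy
r.model = ""
assert not getattr(r, "model", None)  # attribute present but falsy
r.model = "gpt-4o"
assert getattr(r, "model", None)      # attribute present and truthy
```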
We need to rethink this response model business because we probably shouldn't be setting response models on agent spans either.
This was put in before I knew as much about agents, but there are libraries that let you vary the request model during agent execution as well (in particular Claude Code wrapped by the Claude Agent SDK).
Created an issue to track this: #5458 (the plan is to clean it up as part of that issue).
```python
# Uses explicit try/finally instead of context manager to ensure cleanup
# even if the consumer abandons the stream (GeneratorExit).
```
This comment looks like it needs to be updated or removed as there's no try/finally below and the code uses a context manager 😅
You're completely right, and it doesn't look like we catch GeneratorExit anywhere, for that matter 😅.
I'll keep this commit atomic, since we're only moving the contents of wrapped_get_model() to _get_model() as-is for now, and follow up with a PR after this first train is merged 🚅
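For reference, the pattern the stale comment was describing looks roughly like the sketch below; this is not the PR's code, and the instrumented_stream/span names are hypothetical:

```python
# Sketch only: why a stream wrapper might prefer try/finally over a
# context manager. Names here are illustrative, not from this PR.
async def instrumented_stream(original_stream, span):
    try:
        async for event in original_stream:
            yield event
    finally:
        # If the consumer closes or abandons the generator early,
        # GeneratorExit is raised at the yield; finally still runs,
        # so the span is always closed.
        span.finish()
```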
```python
if hasattr(streaming_response, "model")
and streaming_response.model
```
Also a nitpick, but similar to the comment I left above.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
```python
) -> "agents.Model":
    return _get_model(turn_preparation.get_model, agent, run_config)


agents.run_internal.run_loop.get_model = new_wrapped_get_model
```
Model patch applied to wrong module (Medium Severity)
In the v0.8+ branch, the wrapper is built from turn_preparation.get_model but assigned to agents.run_internal.run_loop.get_model. This leaves turn_preparation.get_model unpatched, so the new model path can bypass _get_model instrumentation in patches/models.py.
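If the finding is accurate, the fix would presumably be to assign the wrapper back to the attribute it wraps; a one-line sketch under that assumption:

```python
# Assumed fix (not confirmed by the PR): patch the function on the module
# it now lives in, so new-path callers hit the instrumented wrapper.
agents.run_internal.turn_preparation.get_model = new_wrapped_get_model
```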


Description
The AgentRunner._get_model() class method has been moved to the function agents.run_internal.turn_preparation.get_model(). Patch the new function by re-using the existing wrapper logic.
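A minimal sketch of that approach, assuming the wrapper delegates to the shared _get_model helper from patches/models.py shown in the diff above; the body here is illustrative rather than the PR's exact code:

```python
from functools import wraps

import agents.run_internal.turn_preparation as turn_preparation

# Capture the unpatched function so the wrapper can delegate to it
# (avoids self-reference once the module attribute is patched below).
_original_get_model = turn_preparation.get_model

@wraps(_original_get_model)
def new_wrapped_get_model(agent: "Any", run_config: "Any") -> "agents.Model":
    # _get_model is the shared helper from patches/models.py; it wraps the
    # returned model's fetch/stream methods (see wrapped_fetch_response above).
    return _get_model(_original_get_model, agent, run_config)

# Patch the new module-level function in place.
turn_preparation.get_model = new_wrapped_get_model
```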
Issues

Reminders
- tox -e linters passes
- PR title uses a conventional prefix (feat:, fix:, ref:, meta:)