Add azureopenai #333
Conversation
Summary of Changes

Hello @kazuma-424, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed.

This pull request expands the LLM provider ecosystem of TradingAgents by adding support for Azure OpenAI. It integrates Azure OpenAI at multiple layers, from a dedicated client and configuration management to CLI interactions and documentation, so users can run Azure's large language models within the application. The changes make Azure OpenAI a first-class provider without affecting existing LLM integrations.
Code Review
This pull request effectively integrates Azure OpenAI as a new LLM provider. The changes span the CLI, configuration, and client factory, providing a complete user flow. My review focuses on two main points: improving the configuration logic in the new Azure client for better robustness and maintainability, and addressing code duplication and a potentially confusing configuration key name in the graph logic. Overall, this is a great addition with a few areas for refinement.
```python
azure_endpoint = (
    self.kwargs.get("azure_endpoint")
    or self.base_url
    or os.environ.get("AZURE_OPENAI_ENDPOINT")
)
api_version = self.kwargs.get("api_version") or os.environ.get(
    "AZURE_OPENAI_API_VERSION",
    "2024-10-21",
)
api_key = self.kwargs.get("api_key") or os.environ.get("AZURE_OPENAI_API_KEY")

llm_kwargs = {
    "azure_deployment": self.model,
    "model": self.model,
    "api_version": api_version,
}

if azure_endpoint:
    llm_kwargs["azure_endpoint"] = azure_endpoint
if api_key:
    llm_kwargs["api_key"] = api_key

for key in ("timeout", "max_retries", "reasoning_effort", "callbacks"):
    if key in self.kwargs:
        llm_kwargs[key] = self.kwargs[key]
```
This method manually re-implements configuration resolution from environment variables, which is already handled robustly by the AzureChatOpenAI class. This custom logic can be brittle, especially with empty environment variables, and makes the code harder to maintain.
It's better to simplify this by only passing parameters from kwargs and base_url, and letting langchain's client handle the fallback to environment variables. This aligns with the implementation of other clients in this project (e.g., OpenAIClient) and reduces the chance of configuration-related bugs.
```python
llm_kwargs = {
    "azure_deployment": self.model,
    "model": self.model,
}
# Prioritize endpoint from kwargs, then base_url. Let AzureChatOpenAI handle env vars.
azure_endpoint = self.kwargs.get("azure_endpoint") or self.base_url
if azure_endpoint:
    llm_kwargs["azure_endpoint"] = azure_endpoint
# Pass through other known parameters from kwargs.
# AzureChatOpenAI will fall back to environment variables if these are not provided.
for key in (
    "api_version",
    "api_key",
    "timeout",
    "max_retries",
    "reasoning_effort",
    "callbacks",
):
    if key in self.kwargs and self.kwargs[key] is not None:
        llm_kwargs[key] = self.kwargs[key]
```
fix (ae25b16)
tradingagents/graph/trading_graph.py (Outdated)
```python
reasoning_effort = self.config.get("openai_reasoning_effort")
if reasoning_effort:
    kwargs["reasoning_effort"] = reasoning_effort
```
The logic to handle reasoning_effort is duplicated from the openai provider block. This harms maintainability. Consider refactoring to handle shared parameters like this outside the provider-specific if/elif blocks.
Additionally, reusing the openai_reasoning_effort configuration key for the Azure provider can be confusing. A more generic name, such as llm_reasoning_effort, would improve clarity.
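One possible shape for that refactor, as a rough sketch: the surrounding trading_graph.py code is not visible on this page, so the function name and provider branches below are assumptions, and reasoning_effort support in ChatOpenAI assumes a recent langchain_openai release.

```python
from langchain_openai import AzureChatOpenAI, ChatOpenAI

def _create_graph_llm(provider: str, model: str, config: dict):
    # Hypothetical refactor: resolve parameters shared by several providers
    # once, before the provider-specific branches. "llm_reasoning_effort"
    # is the generic key proposed above; the PR currently uses
    # "openai_reasoning_effort".
    kwargs = {}
    reasoning_effort = config.get("llm_reasoning_effort")
    if reasoning_effort:
        kwargs["reasoning_effort"] = reasoning_effort

    if provider == "openai":
        return ChatOpenAI(model=model, **kwargs)
    if provider == "azure":
        # AzureChatOpenAI targets a deployment rather than a raw model name.
        return AzureChatOpenAI(azure_deployment=model, model=model, **kwargs)
    raise ValueError(f"Unsupported provider: {provider}")
```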
fix (ae25b16)
Summary
This PR adds Azure OpenAI support to TradingAgents across the LLM client layer, runtime config, and CLI flow.
What Changed
- tradingagents/llm_clients/azure_openai_client.py: new client built on AzureChatOpenAI from langchain_openai; reads AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, and AZURE_OPENAI_API_VERSION; supports reasoning_effort, callbacks, and timeout/retries.
- tradingagents/llm_clients/factory.py: azure provider routing to AzureOpenAIClient (a sketch of this dispatch follows the list).
- tradingagents/graph/trading_graph.py: passes Azure settings (azure_endpoint, api_version) and reused openai_reasoning_effort.
- tradingagents/default_config.py: adds azure_endpoint and azure_api_version.
- cli/utils.py: adds Azure to provider selection.
- cli/main.py: updated for the new provider.
- tradingagents/llm_clients/validators.py: updated for the new provider.
- .env.example: updated with Azure env vars.
- README.md: provider list now includes Azure.
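As a rough sketch of the factory.py dispatch mentioned above: only the azure-to-AzureOpenAIClient mapping is stated in this PR, so the other module paths, the OpenAIClient import, and the factory signature below are assumptions for illustration.

```python
# Hypothetical sketch of provider routing in factory.py; the real factory
# may be structured differently.
from tradingagents.llm_clients.azure_openai_client import AzureOpenAIClient
from tradingagents.llm_clients.openai_client import OpenAIClient

_PROVIDERS = {
    "openai": OpenAIClient,
    "azure": AzureOpenAIClient,
}

def create_client(provider: str, model: str, **kwargs):
    # Look up the client class for the requested provider and construct it.
    try:
        client_cls = _PROVIDERS[provider.lower()]
    except KeyError:
        raise ValueError(f"Unsupported LLM provider: {provider}") from None
    return client_cls(model=model, **kwargs)
```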
Why

Users could not run TradingAgents with Azure OpenAI despite existing multi-provider support. This PR enables Azure as a first-class provider without changing existing OpenAI/Google/Anthropic/xAI/OpenRouter/Ollama behavior.
Backward Compatibility

Existing behavior is unchanged unless llm_provider = "azure" is set.

Notes
AZURE_OPENAI_ENDPOINT alone is not sufficient; a deployment target is still required.
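For illustration, a minimal usage sketch of opting into Azure programmatically. The key names are taken from the changelog above; the DEFAULT_CONFIG import path is an assumption about this repo's layout.

```python
# Hypothetical config sketch; exact keys in default_config.py may differ.
from tradingagents.default_config import DEFAULT_CONFIG

config = dict(DEFAULT_CONFIG)
config["llm_provider"] = "azure"
# The endpoint alone is not enough: the model you select must also exist
# as a deployment in your Azure OpenAI resource.
config["azure_endpoint"] = "https://<your-resource>.openai.azure.com"
config["azure_api_version"] = "2024-10-21"
```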
Validation

python -m compileall cli tradingagents passed.
#334