Conversation


@kazuma-424 kazuma-424 commented Feb 4, 2026

Summary

This PR adds Azure OpenAI support to TradingAgents across the LLM client layer, runtime config, and CLI flow.

What Changed

  • Added a new Azure client:
    • tradingagents/llm_clients/azure_openai_client.py
    • Uses AzureChatOpenAI from langchain_openai
    • Supports:
      • AZURE_OPENAI_API_KEY
      • AZURE_OPENAI_ENDPOINT
      • AZURE_OPENAI_API_VERSION
      • optional reasoning_effort, callbacks, timeout/retries
  • Extended provider factory:
    • tradingagents/llm_clients/factory.py
    • Added azure provider routing to AzureOpenAIClient
  • Updated provider kwargs wiring:
    • tradingagents/graph/trading_graph.py
    • Added Azure-specific kwargs (azure_endpoint, api_version) and reused openai_reasoning_effort
  • Updated defaults/config:
    • tradingagents/default_config.py
    • Added azure_endpoint, azure_api_version
  • Updated CLI:
    • cli/utils.py
    • Added Azure to provider selection
    • Added Azure prompts for endpoint, API version, and deployment names
  • Updated run config mapping:
    • cli/main.py
    • Persists Azure selections into runtime config
  • Updated model validation behavior:
    • tradingagents/llm_clients/validators.py
    • Azure accepts arbitrary deployment names
  • Updated docs/env template:
    • .env.example with Azure env vars
    • README.md provider list now includes Azure
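
Taken together, end-to-end usage looks roughly like the sketch below. This is a hedged illustration: the exact create_llm_client signature lives in factory.py, and the key, endpoint, and deployment values are placeholders.

    import os
    from tradingagents.llm_clients.factory import create_llm_client

    # Environment variables introduced by this PR (placeholder values).
    os.environ["AZURE_OPENAI_API_KEY"] = "<your-key>"
    os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<resource>.openai.azure.com"
    os.environ["AZURE_OPENAI_API_VERSION"] = "2024-10-21"

    # For Azure, "model" is interpreted as a deployment name, not a model ID.
    client = create_llm_client(provider="azure", model="my-gpt-4o-deployment")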

Why

Users could not run TradingAgents with Azure OpenAI despite existing multi-provider support.
This PR enables Azure as a first-class provider without changing existing OpenAI/Google/Anthropic/xAI/OpenRouter/Ollama behavior.

Backward Compatibility

  • No breaking changes expected for existing providers.
  • Existing config keys remain valid.
  • Azure support is opt-in via llm_provider = "azure".
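
For example, opting in programmatically looks roughly like this (a sketch: the key names match default_config.py, and the endpoint value is a placeholder):

    from tradingagents.default_config import DEFAULT_CONFIG

    config = dict(DEFAULT_CONFIG)  # existing keys remain valid
    config["llm_provider"] = "azure"  # opt in to Azure
    config["azure_endpoint"] = "https://<resource>.openai.azure.com"
    config["azure_api_version"] = "2024-10-21"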

Notes

  • For Azure, the selected model name is treated as a deployment name in practice.
  • AZURE_OPENAI_ENDPOINT alone is not sufficient; a deployment target is still required.
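
Concretely, the environment entries added to .env.example take this shape (placeholder values; the deployment name is supplied separately, e.g. via the CLI prompts):

    AZURE_OPENAI_API_KEY=<your-key>
    AZURE_OPENAI_ENDPOINT=https://<resource>.openai.azure.com
    AZURE_OPENAI_API_VERSION=2024-10-21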

Validation

  • Static check: python -m compileall cli tradingagents passed.
  • A live Azure API call was not executed in this environment, as it depends on external credentials and network access.

Related Issue

#334

@gemini-code-assist

Summary of Changes

Hello @kazuma-424, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request expands the LLM provider ecosystem of TradingAgents by adding comprehensive support for Azure OpenAI. It integrates Azure OpenAI at multiple layers, from a dedicated client and configuration management to CLI interactions and documentation, so users can run Azure-hosted large language models within the application. The changes ensure that Azure OpenAI functions as a first-class provider without impacting existing LLM integrations.

Highlights

  • Azure OpenAI Client Integration: A new AzureOpenAIClient has been added, leveraging langchain_openai.AzureChatOpenAI to support Azure OpenAI services. This client handles API key, endpoint, and API version configuration from environment variables or runtime settings.
  • CLI and Configuration Updates: The command-line interface (CLI) now includes 'Azure' as a selectable LLM provider. It prompts users for Azure-specific details like endpoint, API version, and deployment names for both quick and deep thinking models, and persists these settings in the runtime configuration.
  • Dynamic Provider Keyword Arguments: The trading_graph module has been updated to dynamically pass Azure-specific keyword arguments (azure_endpoint, api_version) to the LLM client based on the selected provider, ensuring proper initialization of Azure OpenAI models (see the sketch after this list).
  • Flexible Model Validation for Azure: Model validation logic has been enhanced to allow arbitrary deployment names for Azure OpenAI, aligning with how Azure handles model deployments rather than fixed model names.
  • Documentation and Environment Variable Support: The README.md and .env.example files have been updated to reflect the new Azure OpenAI provider, including necessary environment variables for API key, endpoint, and API version.
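
To make the kwargs routing concrete, here is a hypothetical reconstruction of the azure branch (the function name and shape are illustrative; the real code lives in trading_graph.py's _get_provider_kwargs):

    def _azure_provider_kwargs(config: dict) -> dict:
        """Illustrative sketch of the azure branch described above."""
        kwargs = {
            "azure_endpoint": config.get("azure_endpoint"),
            "api_version": config.get("azure_api_version"),
        }
        reasoning_effort = config.get("openai_reasoning_effort")
        if reasoning_effort:
            kwargs["reasoning_effort"] = reasoning_effort
        return kwargs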


Changelog
  • .env.example
    • Added AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, and AZURE_OPENAI_API_VERSION environment variables.
  • README.md
    • Updated the list of supported LLM providers to include 'Azure OpenAI'.
    • Modified the llm_provider configuration example to include 'azure'.
  • cli/main.py
    • Renamed 'Step 5: OpenAI backend' to 'Step 5: LLM Provider' for broader applicability.
    • Introduced azure_endpoint and azure_api_version variables for Azure-specific configuration.
    • Added a conditional block to prompt for Azure OpenAI endpoint, API version, and deployment names when 'azure' is selected.
    • Ensured azure_endpoint and azure_api_version are included in the runtime configuration mapping.
  • cli/utils.py
    • Extended select_shallow_thinking_agent and select_deep_thinking_agent to include Azure-specific deployment options.
    • Added 'Azure' to the BASE_URLS list in select_llm_provider with a default endpoint.
    • Generalized the exit message for unselected LLM providers.
    • Implemented new utility functions: ask_azure_endpoint, ask_azure_api_version, and ask_azure_deployment_name for interactive Azure configuration.
  • tradingagents/default_config.py
    • Added azure_endpoint and azure_api_version to the DEFAULT_CONFIG dictionary with default values.
  • tradingagents/graph/trading_graph.py
    • Modified _get_provider_kwargs to retrieve and pass azure_endpoint, api_version, and reasoning_effort when the provider is 'azure'.
  • tradingagents/llm_clients/azure_openai_client.py
    • Added new file azure_openai_client.py to implement AzureOpenAIClient for Azure OpenAI integration.
    • The client uses langchain_openai.AzureChatOpenAI and supports configuration via kwargs, base_url, or environment variables.
    • Maps the generic model parameter to azure_deployment for Azure's specific terminology.
    • Includes logic to pass timeout, max_retries, reasoning_effort, and callbacks to the underlying AzureChatOpenAI instance.
    • Implements validate_model to allow any model name for Azure.
  • tradingagents/llm_clients/factory.py
    • Imported AzureOpenAIClient.
    • Updated the create_llm_client function to instantiate AzureOpenAIClient when the provider is 'azure'.
  • tradingagents/llm_clients/validators.py
    • Updated validate_model to consider 'azure' as a provider that accepts any model/deployment name, similar to 'ollama' and 'openrouter'.
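
As a sketch of that validation rule (illustrative only: KNOWN_MODELS and the exact function signature are stand-ins, not the real validators.py API):

    KNOWN_MODELS = {"openai": {"gpt-4o", "gpt-4o-mini"}}  # stand-in table
    PASSTHROUGH_PROVIDERS = {"azure", "ollama", "openrouter"}

    def validate_model(provider: str, model: str) -> bool:
        # Azure deployment names (like Ollama/OpenRouter model names) are
        # user-defined, so any non-empty value is accepted.
        if provider in PASSTHROUGH_PROVIDERS:
            return bool(model)
        return model in KNOWN_MODELS.get(provider, set())
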
Activity
  • No human activity (comments, reviews) has been recorded on this pull request yet.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively integrates Azure OpenAI as a new LLM provider. The changes span the CLI, configuration, and client factory, providing a complete user flow. My review focuses on two main points: improving the configuration logic in the new Azure client for better robustness and maintainability, and addressing code duplication and a potentially confusing configuration key name in the graph logic. Overall, this is a great addition with a few areas for refinement.

Comment on lines 23 to 48
        azure_endpoint = (
            self.kwargs.get("azure_endpoint")
            or self.base_url
            or os.environ.get("AZURE_OPENAI_ENDPOINT")
        )
        api_version = self.kwargs.get("api_version") or os.environ.get(
            "AZURE_OPENAI_API_VERSION",
            "2024-10-21",
        )
        api_key = self.kwargs.get("api_key") or os.environ.get("AZURE_OPENAI_API_KEY")

        llm_kwargs = {
            "azure_deployment": self.model,
            "model": self.model,
            "api_version": api_version,
        }

        if azure_endpoint:
            llm_kwargs["azure_endpoint"] = azure_endpoint
        if api_key:
            llm_kwargs["api_key"] = api_key

        for key in ("timeout", "max_retries", "reasoning_effort", "callbacks"):
            if key in self.kwargs:
                llm_kwargs[key] = self.kwargs[key]


Severity: high

This method manually re-implements configuration resolution from environment variables, which is already handled robustly by the AzureChatOpenAI class. This custom logic can be brittle, especially with empty environment variables, and makes the code harder to maintain.

It's better to simplify this by only passing parameters from kwargs and base_url, and letting langchain's client handle the fallback to environment variables. This aligns with the implementation of other clients in this project (e.g., OpenAIClient) and reduces the chance of configuration-related bugs.

        llm_kwargs = {
            "azure_deployment": self.model,
            "model": self.model,
        }

        # Prioritize endpoint from kwargs, then base_url. Let AzureChatOpenAI handle env vars.
        azure_endpoint = self.kwargs.get("azure_endpoint") or self.base_url
        if azure_endpoint:
            llm_kwargs["azure_endpoint"] = azure_endpoint

        # Pass through other known parameters from kwargs.
        # AzureChatOpenAI will fall back to environment variables if these are not provided.
        for key in (
            "api_version",
            "api_key",
            "timeout",
            "max_retries",
            "reasoning_effort",
            "callbacks",
        ):
            if key in self.kwargs and self.kwargs[key] is not None:
                llm_kwargs[key] = self.kwargs[key]

Author

fix ae25b16

Comment on lines 156 to 158
        reasoning_effort = self.config.get("openai_reasoning_effort")
        if reasoning_effort:
            kwargs["reasoning_effort"] = reasoning_effort


Severity: medium

The logic to handle reasoning_effort is duplicated from the openai provider block. This harms maintainability. Consider refactoring to handle shared parameters like this outside the provider-specific if/elif blocks.

Additionally, reusing the openai_reasoning_effort configuration key for the Azure provider can be confusing. A more generic name, such as llm_reasoning_effort, would improve clarity.
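
One possible shape for that refactor (illustrative; _apply_shared_kwargs is a hypothetical helper, not code from this PR):

    def _apply_shared_kwargs(provider: str, config: dict, kwargs: dict) -> dict:
        """Resolve parameters shared across providers after the if/elif blocks."""
        reasoning_effort = config.get("openai_reasoning_effort")
        if provider in ("openai", "azure") and reasoning_effort:
            kwargs["reasoning_effort"] = reasoning_effort
        return kwargs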

Author

fix ae25b16
