
feat: APIs for stack jobs #383

Draft

aledefra wants to merge 1 commit into develop from stack-apis

Conversation

@aledefra
Contributor

aledefra commented Apr 2, 2026

No description provided.

Contributor

Copilot AI left a comment


Pull request overview

This PR adds support for “stack job” metadata propagation into deeploy_specs.job_config and introduces a new Deeploy manager API endpoint to create multiple pipelines in a single request.

Changes:

  • Added /create_pipelines_batch endpoint that processes a list of create-pipeline payloads and returns per-item results plus an aggregate status.
  • Extended _ensure_deeploy_specs_job_config to merge stack_job_config fields into deeploy_specs.job_config, and wired it into pipeline create/update flows.
  • Added new Deeploy constants/keys/status (requests, results, item_index, stack-related keys, and partial) plus unit tests covering the new behavior.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| extensions/business/deeploy/test_deeploy.py | Adds unit tests for the new batch endpoint and stack job_config normalization. |
| extensions/business/deeploy/deeploy_mixin.py | Extends deeploy specs normalization to merge stack fields into job_config. |
| extensions/business/deeploy/deeploy_manager_api.py | Adds the new create_pipelines_batch endpoint and passes stack_job_config through request processing. |
| extensions/business/deeploy/deeploy_const.py | Introduces new request/response keys, stack metadata keys, partial status, and the batch request template. |


item_result = self._process_pipeline_request(
  request=request_item,
  is_create=True,
  async_mode=False,

Copilot AI Apr 2, 2026


create_pipelines_batch calls _process_pipeline_request(..., async_mode=False) for each item, which makes this endpoint fully synchronous and potentially long-running per request. Given _get_pipeline_responses() uses a tight loop without sleeping, a larger batch can peg CPU and hold the request open until timeouts. Consider making the batch endpoint async (return per-item pending ids / statuses) or adding bounded polling/backoff so batch calls don’t busy-wait.

Suggested change:
- async_mode=False,
+ async_mode=True,
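If the endpoint stays synchronous, the busy-wait concern could be addressed with a small bounded-backoff helper along these lines. Everything here is an assumption for illustration: the `poll` callable, the delay values, and the timeout are hypothetical, and this is not the actual `_get_pipeline_responses()` implementation.

```python
import time

# Sketch of bounded polling with exponential backoff, replacing a tight loop.
# poll() is a hypothetical callable returning a result, or None while pending.
def wait_with_backoff(poll, timeout_s=30.0, initial_delay_s=0.05, max_delay_s=1.0):
  """Poll until a result arrives or timeout_s elapses, sleeping between
  attempts so the loop does not peg a CPU core."""
  deadline = time.monotonic() + timeout_s
  delay = initial_delay_s
  while time.monotonic() < deadline:
    result = poll()
    if result is not None:
      return result
    time.sleep(delay)
    delay = min(delay * 2, max_delay_s)  # back off up to max_delay_s
  return None  # timed out; the caller can report a pending/timeout status
```

Using `time.monotonic()` rather than `time.time()` keeps the deadline immune to wall-clock adjustments.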

Comment on lines +1054 to +1057
requests = request.get(DEEPLOY_KEYS.REQUESTS, [])
if not isinstance(requests, list) or len(requests) == 0:
raise ValueError(f"{DEEPLOY_ERRORS.REQUEST3}: '{DEEPLOY_KEYS.REQUESTS}' must be a non-empty list.")


Copilot AI Apr 2, 2026


The batch endpoint currently accepts an unbounded requests list. A large payload can lead to excessive work (and long blocking time since each item is processed sequentially), which is a reliability/DoS risk. Consider enforcing a reasonable maximum batch size (configurable) and returning a validation error when exceeded.
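A minimal version of the suggested cap could look like this. The limit value, the function name, and the plain-string error messages are assumptions; the real code would presumably use the `DEEPLOY_ERRORS`/`DEEPLOY_KEYS` constants and a configurable limit.

```python
# Hypothetical batch-size guard; MAX_BATCH_SIZE and the error text are
# illustrative assumptions, not existing deeploy_const.py values.
MAX_BATCH_SIZE = 50

def validate_batch_requests(requests):
  """Reject non-list, empty, or oversized batch payloads up front."""
  if not isinstance(requests, list) or len(requests) == 0:
    raise ValueError("'requests' must be a non-empty list.")
  if len(requests) > MAX_BATCH_SIZE:
    raise ValueError(f"'requests' exceeds the maximum batch size of {MAX_BATCH_SIZE}.")
  return requests
```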

Comment on lines +1054 to +1066
requests = request.get(DEEPLOY_KEYS.REQUESTS, [])
if not isinstance(requests, list) or len(requests) == 0:
  raise ValueError(f"{DEEPLOY_ERRORS.REQUEST3}: '{DEEPLOY_KEYS.REQUESTS}' must be a non-empty list.")

results = []
statuses = []

for idx, request_item in enumerate(requests):
  if not isinstance(request_item, dict):
    item_result = {
      DEEPLOY_KEYS.STATUS: DEEPLOY_STATUS.FAIL,
      DEEPLOY_KEYS.ERROR: f"{DEEPLOY_ERRORS.REQUEST3}: request[{idx}] must be an object.",
    }

Copilot AI Apr 2, 2026


New validation branches in create_pipelines_batch (non-list/empty requests, and non-dict items producing per-item FAIL results) aren’t covered by tests. Adding unit tests for these cases would help lock in the API contract and prevent regressions.
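The missing tests could be sketched as below. Note that `create_pipelines_batch` here is a minimal stand-in mirroring only the validation branches quoted above, and the plain-string key names (`"requests"`, `"status"`, `"fail"`) are assumptions; the real tests in `test_deeploy.py` would go through the manager API and the `DEEPLOY_KEYS`/`DEEPLOY_STATUS` constants.

```python
import unittest

# Stand-in for the endpoint, reproducing just the validation branches under test.
def create_pipelines_batch(request):
  requests = request.get("requests", [])
  if not isinstance(requests, list) or len(requests) == 0:
    raise ValueError("'requests' must be a non-empty list.")
  results = []
  for idx, item in enumerate(requests):
    if not isinstance(item, dict):
      results.append({"status": "fail", "error": f"request[{idx}] must be an object."})
    else:
      results.append({"status": "success"})
  return {"results": results}

class TestCreatePipelinesBatchValidation(unittest.TestCase):
  def test_rejects_non_list_requests(self):
    with self.assertRaises(ValueError):
      create_pipelines_batch({"requests": "not-a-list"})

  def test_rejects_empty_requests(self):
    with self.assertRaises(ValueError):
      create_pipelines_batch({"requests": []})

  def test_non_dict_item_yields_per_item_fail(self):
    response = create_pipelines_batch({"requests": ["oops", {}]})
    self.assertEqual(response["results"][0]["status"], "fail")
    self.assertEqual(response["results"][1]["status"], "success")
```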

request: dict = DEEPLOY_CREATE_BATCH_REQUEST
):
"""
Create multiple pipelines in one API call.
Contributor


I don't think we would want the stack jobs spread across multiple pipelines.
The main idea was having multiple CARs in the same pipeline and paying for it as the per-CAR prices summed up (not as a native job, as we're doing it now).
For such apps it's really important to keep the CARs in the same pipeline, so they can share the same semaphoring mechanism and ENV variable injection.

results = []
statuses = []

for idx, request_item in enumerate(requests):
Contributor


IMO, this is a no-go.
I think we should create the pipeline similarly to what we're doing for native apps right now, just by allowing ONLY CAR/WAR plugins.



3 participants