Conversation
Pull request overview
This PR adds support for “stack job” metadata propagation into `deeploy_specs.job_config` and introduces a new Deeploy manager API endpoint to create multiple pipelines in a single request.
Changes:
- Added a `/create_pipelines_batch` endpoint that processes a list of create-pipeline payloads and returns per-item results plus an aggregate status.
- Extended `_ensure_deeploy_specs_job_config` to merge `stack_job_config` fields into `deeploy_specs.job_config`, and wired it into the pipeline create/update flows.
- Added new Deeploy constants/keys/status (`requests`, `results`, `item_index`, stack-related keys, and `partial`) plus unit tests covering the new behavior.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| `extensions/business/deeploy/test_deeploy.py` | Adds unit tests for the new batch endpoint and stack job_config normalization. |
| `extensions/business/deeploy/deeploy_mixin.py` | Extends deeploy specs normalization to merge stack fields into job_config. |
| `extensions/business/deeploy/deeploy_manager_api.py` | Adds the new create_pipelines_batch endpoint and passes stack_job_config through request processing. |
| `extensions/business/deeploy/deeploy_const.py` | Introduces new request/response keys, stack metadata keys, partial status, and the batch request template. |
```python
item_result = self._process_pipeline_request(
  request=request_item,
  is_create=True,
  async_mode=False,
```
`create_pipelines_batch` calls `_process_pipeline_request(..., async_mode=False)` for each item, which makes this endpoint fully synchronous and potentially long-running per request. Given `_get_pipeline_responses()` uses a tight loop without sleeping, a larger batch can peg CPU and hold the request open until timeouts. Consider making the batch endpoint async (return per-item pending ids / statuses) or adding bounded polling/backoff so batch calls don’t busy-wait.
Suggested change:
```diff
-  async_mode=False,
+  async_mode=True,
```
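If the endpoint stays synchronous, the busy-wait could at least be bounded. A minimal sketch of deadline-based polling with exponential backoff (`poll_with_backoff` and the `poll_fn` callable are hypothetical names, not part of the PR):

```python
import time

def poll_with_backoff(poll_fn, timeout=30.0, initial_sleep=0.05, max_sleep=1.0):
  """Call poll_fn() until it returns a non-None result or the deadline passes,
  sleeping with exponential backoff so a large batch doesn't peg the CPU."""
  deadline = time.monotonic() + timeout
  sleep_s = initial_sleep
  while time.monotonic() < deadline:
    result = poll_fn()
    if result is not None:
      return result
    time.sleep(sleep_s)
    sleep_s = min(sleep_s * 2, max_sleep)  # cap the backoff interval
  return None  # timed out; the caller maps this to a pending/timeout status
```

The actual `_get_pipeline_responses()` would supply the per-item collection logic as `poll_fn`; the point is only that each retry sleeps instead of spinning.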
```python
requests = request.get(DEEPLOY_KEYS.REQUESTS, [])
if not isinstance(requests, list) or len(requests) == 0:
  raise ValueError(f"{DEEPLOY_ERRORS.REQUEST3}: '{DEEPLOY_KEYS.REQUESTS}' must be a non-empty list.")
```
The batch endpoint currently accepts an unbounded `requests` list. A large payload can lead to excessive work (and long blocking time, since each item is processed sequentially), which is a reliability/DoS risk. Consider enforcing a reasonable maximum batch size (configurable) and returning a validation error when it is exceeded.
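A sketch of what that cap could look like; the limit value, the helper name, and the plain error strings are assumptions (the real code would use `DEEPLOY_ERRORS`/config constants):

```python
# Hypothetical default; in practice this would come from configuration.
MAX_BATCH_SIZE = 50

def validate_batch_requests(requests, max_size=MAX_BATCH_SIZE):
  """Reject missing, empty, or oversized batch payloads before doing any work."""
  if not isinstance(requests, list) or len(requests) == 0:
    raise ValueError("'requests' must be a non-empty list.")
  if len(requests) > max_size:
    raise ValueError(f"'requests' exceeds the maximum batch size of {max_size}.")
  return requests
```

Failing fast here keeps a single oversized request from tying up the handler for the whole sequential processing loop.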
```python
requests = request.get(DEEPLOY_KEYS.REQUESTS, [])
if not isinstance(requests, list) or len(requests) == 0:
  raise ValueError(f"{DEEPLOY_ERRORS.REQUEST3}: '{DEEPLOY_KEYS.REQUESTS}' must be a non-empty list.")

results = []
statuses = []

for idx, request_item in enumerate(requests):
  if not isinstance(request_item, dict):
    item_result = {
      DEEPLOY_KEYS.STATUS: DEEPLOY_STATUS.FAIL,
      DEEPLOY_KEYS.ERROR: f"{DEEPLOY_ERRORS.REQUEST3}: request[{idx}] must be an object.",
    }
```
New validation branches in `create_pipelines_batch` (non-list/empty `requests`, and non-dict items producing per-item FAIL results) aren’t covered by tests. Adding unit tests for these cases would help lock in the API contract and prevent regressions.
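A sketch of such tests, written against a minimal stand-in for the quoted validation logic (the constant values and the `process_batch` helper are assumptions; real tests would exercise the actual endpoint through `test_deeploy.py`):

```python
import unittest

# Stand-ins for the real constants in deeploy_const.py.
class DEEPLOY_KEYS:
  STATUS = "status"
  ERROR = "error"

class DEEPLOY_STATUS:
  FAIL = "fail"

def process_batch(requests):
  """Minimal stand-in mirroring the validation branches quoted above."""
  if not isinstance(requests, list) or len(requests) == 0:
    raise ValueError("'requests' must be a non-empty list.")
  results = []
  for idx, item in enumerate(requests):
    if not isinstance(item, dict):
      results.append({
        DEEPLOY_KEYS.STATUS: DEEPLOY_STATUS.FAIL,
        DEEPLOY_KEYS.ERROR: f"request[{idx}] must be an object.",
      })
    else:
      results.append({DEEPLOY_KEYS.STATUS: "ok"})
  return results

class TestBatchValidation(unittest.TestCase):
  def test_rejects_non_list(self):
    with self.assertRaises(ValueError):
      process_batch("not-a-list")

  def test_rejects_empty_list(self):
    with self.assertRaises(ValueError):
      process_batch([])

  def test_non_dict_item_yields_per_item_fail(self):
    results = process_batch([{"app": "demo"}, 42])
    self.assertEqual(results[1][DEEPLOY_KEYS.STATUS], DEEPLOY_STATUS.FAIL)
    self.assertIn("request[1]", results[1][DEEPLOY_KEYS.ERROR])
```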
```python
  request: dict = DEEPLOY_CREATE_BATCH_REQUEST
):
  """
  Create multiple pipelines in one API call.
```
I don't think we would like to have the stack jobs in multiple pipelines.
The main idea was having multiple CARs in the same pipeline and paying for it as the per-CAR price summed up (not as a native job, as we're doing it now).
For such apps it's really important to keep the CARs in the same pipeline, so they can share the same semaphoring mechanism and ENV variable injection.
```python
results = []
statuses = []

for idx, request_item in enumerate(requests):
```
IMO, this is a no-go.
I think we should create the pipeline similarly to what we're doing for native apps right now, just by allowing ONLY CAR/WAR plugins.