
Adding SGP-related extensible configuration to the model engine helm chart#762

Open
arniechops wants to merge 7 commits into main from arnavchopra/extensibility-hooks

Conversation

Collaborator

@arniechops arniechops commented Feb 20, 2026

Pull Request Summary

What is this PR changing? Why is this change being made? Any caveats you'd like to highlight? Link any relevant documents, links, or screenshots here if applicable.

Test Plan and Usage Guide

How did you validate that your PR works correctly? How do you run or demo the code? Provide enough detail so a reviewer can reasonably reproduce the testing procedure. Paste example command line invocations if applicable.

Greptile Summary

This PR makes the model-engine Helm chart more extensible for non-AWS deployments (on-prem, GCP, airgapped environments) by introducing configurable values across 22 files. The chart version is bumped from 0.1.10 to 0.2.0.

  • Extensibility hooks: Adds extraEnvVars, extraVolumes, extraVolumeMounts, and extraPodEnvFrom for wrapper charts to inject environment-specific configuration without forking templates
  • Configurable utility images: Replaces hardcoded public.ecr.aws and curlimages/curl references with utilityImages.* values, enabling airgapped registries
  • Service account token control: Adds automountServiceAccountToken across all deployments/jobs with a projected token volume fallback (modelEngine.tokenVolume/modelEngine.tokenVolumeMount) for when automount is disabled
  • Redis auth from Secrets: Supports redis.authSecretName/redis.authSecretKey as an alternative to plaintext redis.auth
  • GCP support: Adds GCP Memorystore broker detection in the celery autoscaler, plus a separate azure.inference_client_id for inference service accounts
  • Configurable inference frameworks: Inference framework image tags are now values-driven instead of hardcoded to "latest"
  • Helm job improvements: Adds {{ .Release.Revision }} to job names and before-hook-creation delete policy to prevent stale job conflicts during upgrades
  • New database_init_job.yaml: Runs schema initialization at hook-weight -2 (before migration at -1)
  • Boolean YAML handling: service_config_map.yaml now uses kindIs "bool" to avoid incorrectly quoting boolean config values
  • GPU tolerations: Adds configurable gpuTolerations for non-AWS GPU node label schemes
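A wrapper chart could exercise these hooks with an override file along the following lines. The top-level key names come from this PR's summary; the nested shapes and every concrete value below are illustrative assumptions, not chart defaults:

```yaml
# Hypothetical wrapper-chart values override for an airgapped cluster (illustrative).
automountServiceAccountToken: false

utilityImages:
  busybox:
    repository: registry.internal.example.com/library/busybox
    tag: "1.36"
  curl:
    repository: registry.internal.example.com/curlimages/curl
    tag: "8.5.0"

redis:
  authSecretName: redis-auth   # read the Redis password from a K8s Secret
  authSecretKey: password      # instead of plaintext redis.auth

extraEnvVars:
  - name: HTTPS_PROXY
    value: http://proxy.internal.example.com:3128

extraVolumes:
  - name: internal-ca
    configMap:
      name: internal-ca-bundle
extraVolumeMounts:
  - name: internal-ca
    mountPath: /etc/ssl/certs/internal-ca.crt
    subPath: ca.crt
    readOnly: true

gpuTolerations:
  - key: gpu.example.com/dedicated   # non-AWS GPU node label scheme
    operator: Equal
    value: "true"
    effect: NoSchedule
```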

Confidence Score: 4/5

  • This PR is safe to merge — changes are backward-compatible with sensible defaults that preserve existing behavior.
  • All new values have safe defaults that match prior hardcoded behavior (e.g., automountServiceAccountToken: true, utility images pointing to existing ECR/Docker Hub repos, inferenceFramework.*: "latest"). The changes are well-structured and consistently applied across templates. Minor inconsistency: two job templates (populate_fine_tuning_repository, restart_keda_operator) are missing the explicit pod-level automountServiceAccountToken, though the SA-level setting covers it.
  • charts/model-engine/templates/_helpers.tpl (core helper functions with new token volume logic), charts/model-engine/templates/celery_autoscaler_stateful_set.yaml (new GCP broker detection logic)

Important Files Changed

| Filename | Overview |
| --- | --- |
| `charts/model-engine/templates/_helpers.tpl` | Adds fullnameOverride, the extraEnvVars/extraVolumes/extraVolumeMounts extensibility hooks, Redis secret support, an on-prem AWS_PROFILE comment, and projected token volume helpers for when automountServiceAccountToken is disabled. Well-structured helper functions. |
| `charts/model-engine/templates/celery_autoscaler_stateful_set.yaml` | Adds GCP Memorystore broker detection and automountServiceAccountToken support with a projected token volume; restructures volumeMounts/volumes to always render the key even when only the token volume is needed. |
| `charts/model-engine/templates/database_init_job.yaml` | New file: database init job with hook weight -2 (runs before migration at -1), a proper hook-delete-policy, revision-based naming, and automountServiceAccountToken support. |
| `charts/model-engine/templates/populate_fine_tuning_repository_job.yaml` | Adds revision-based naming and a before-hook-creation delete policy. Missing automountServiceAccountToken compared to other jobs. |
| `charts/model-engine/templates/restart_keda_operator.yaml` | Adds revision-based naming and a before-hook-creation delete policy. Missing automountServiceAccountToken compared to other jobs. |
| `charts/model-engine/templates/service_account.yaml` | Adds the automountServiceAccountToken field to the ServiceAccount resource. |
| `charts/model-engine/templates/service_config_map.yaml` | Adds boolean type handling with kindIs "bool" to avoid quoting boolean values in YAML config, applied consistently to the launch and infra config sections of both ConfigMaps. |
| `charts/model-engine/templates/service_template_config_map.yaml` | Configurable utility images (busybox, curl), GPU tolerations, extraPodEnvFrom for inference pods, revision-based job naming, and gpuTolerations across all template types. |
| `charts/model-engine/values.yaml` | Adds new defaults for inferenceFramework, the extensibility hooks (extraEnvVars/Volumes/VolumeMounts/PodEnvFrom), automountServiceAccountToken, utilityImages, and gpuTolerations. Missing trailing newline. |
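The `kindIs "bool"` handling called out for `service_config_map.yaml` typically looks something like the following sketch. This is illustrative, not the chart's exact code; the `.Values.config` key and loop structure are assumptions:

```yaml
{{- /* Illustrative Helm template loop: emit booleans unquoted, everything
       else quoted. Without the kindIs check, `quote` would turn the YAML
       boolean true into the string "true", which downstream config readers
       would then treat as a non-empty (and thus truthy-looking) string. */}}
{{- range $key, $value := .Values.config }}
{{- if kindIs "bool" $value }}
{{ $key }}: {{ $value }}
{{- else }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
```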

Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[values.yaml] -->|automountServiceAccountToken| B{automount enabled?}
    B -->|true| C[Default K8s token mount]
    B -->|false| D[Projected token volume]
    D --> E[serviceAccountToken\n86400s expiry]
    D --> F[kube-root-ca.crt\nConfigMap]
    D --> G[namespace\ndownwardAPI]
    E --> H[Mount at\n/var/run/secrets/kubernetes.io/serviceaccount]
    F --> H
    G --> H

    A -->|extraEnvVars| I[Gateway / Builder / Cacher]
    A -->|extraVolumes| I
    A -->|extraVolumeMounts| I
    A -->|extraPodEnvFrom| J[Inference Pod Templates]
    A -->|utilityImages.*| K[Balloon / Busybox / Curl Pods]
    A -->|gpuTolerations| L[GPU Balloon + Service Templates]
    A -->|inferenceFramework.*| M[Inference Framework ConfigMap]
    A -->|redis.authSecretName| N[Redis Auth via K8s Secret]
```
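The projected-volume branch of the flowchart corresponds to a Kubernetes volume roughly like the one below. The 86400s expiry, `kube-root-ca.crt` source, downwardAPI namespace entry, and mount path come from the flowchart; the volume name `token-volume` is mentioned in the review comments, but the exact output of the `modelEngine.tokenVolume` helpers may differ:

```yaml
# Sketch of the projected token volume rendered when automountServiceAccountToken is false.
volumes:
  - name: token-volume
    projected:
      sources:
        - serviceAccountToken:
            path: token
            expirationSeconds: 86400
        - configMap:
            name: kube-root-ca.crt
            items:
              - key: ca.crt
                path: ca.crt
        - downwardAPI:
            items:
              - path: namespace
                fieldRef:
                  fieldPath: metadata.namespace
volumeMounts:
  - name: token-volume
    mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    readOnly: true
```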

Last reviewed commit: f8ab5a8


@greptile-apps greptile-apps bot left a comment


22 files reviewed, 5 comments



greptile-apps bot commented Feb 20, 2026

Additional Comments (1)

charts/model-engine/templates/proportional_a100_autoscaler_deployment.yaml
Hardcoded image breaks airgapped deployments

This PR parameterizes ubuntu, busybox, and curl images via utilityImages for airgapped/on-prem environments, but the cluster-proportional-autoscaler image from registry.k8s.io remains hardcoded here and in the a10/t4 autoscaler deployments. In an airgapped environment, registry.k8s.io would be unreachable just like the other registries that were parameterized.

Consider adding a utilityImages.clusterProportionalAutoscaler entry (or similar) to values.yaml and referencing it here:

```yaml
        - image: {{ .Values.utilityImages.clusterProportionalAutoscaler.repository }}:{{ .Values.utilityImages.clusterProportionalAutoscaler.tag }}
```
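A matching `values.yaml` default could preserve today's behavior while making the registry overridable. The key name follows the reviewer's suggestion; the repository path and tag below are placeholders, not necessarily what the chart currently pins:

```yaml
utilityImages:
  clusterProportionalAutoscaler:
    repository: registry.k8s.io/cpa/cluster-proportional-autoscaler
    tag: v1.8.9   # placeholder; pin to the version currently hardcoded in the templates
```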

Comment on lines 24 to 26 of `service_account.yaml`:

```yaml
{{- with $.Values.automountServiceAccountToken }}
automountServiceAccountToken: {{ . }}
{{- end }}
```


`with` skips block when value is `false`

Helm's `with` treats `false` as falsy, so when `automountServiceAccountToken` is set to `false` (the exact scenario where you want to disable automount on the ServiceAccount), this block is skipped entirely and `automountServiceAccountToken` is never rendered into the YAML. It appears only when the value is `true` — the opposite of the intended behavior.

In contrast, the pod-level specs in other templates correctly use `{{ .Values.automountServiceAccountToken }}` directly, without a `with` gate.

Suggested change:

```suggestion
automountServiceAccountToken: {{ $.Values.automountServiceAccountToken }}
```

Comment on `charts/model-engine/templates/database_init_job.yaml`, line 10:

```yaml
{{- include "modelEngine.labels" . | nindent 4 }}
annotations:
  "helm.sh/hook": pre-install,pre-upgrade
  "helm.sh/hook-weight": "-1"
```


Hook weight collision with migration job

Both database_init_job.yaml and database_migration_job.yaml use "helm.sh/hook-weight": "-1" on pre-install,pre-upgrade. This means Helm does not guarantee execution order between them. Since init_database.py creates schemas (create schema if not exists) and tables (Base.metadata.create_all), and the migration job applies Alembic migrations that depend on those schemas existing, the migration could run first and fail.

Consider setting this job's hook weight to "-2" (or lower) so it is guaranteed to run before the migration job at weight "-1".

Suggested change:

```suggestion
"helm.sh/hook-weight": "-2"
```


greptile-apps bot commented Feb 25, 2026

Additional Comments (2)

charts/model-engine/templates/populate_fine_tuning_repository_job.yaml
Missing automountServiceAccountToken setting

All other pod specs in this PR (gateway, builder, cacher, celery autoscaler, database migration, database init, spellbook init, and the proportional autoscalers) explicitly set automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}. This job and restart_keda_operator.yaml are the only ones missing it.

When automountServiceAccountToken is false, the modelEngine.volumes helper will inject the projected token-volume, but without disabling the default automount on the pod spec, Kubernetes will also mount its own default token — resulting in a conflict at /var/run/secrets/kubernetes.io/serviceaccount.

Suggested change (at line 44):

```suggestion
      automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}
      serviceAccountName: {{ include "modelEngine.fullname" . }}
```

charts/model-engine/templates/restart_keda_operator.yaml
Missing automountServiceAccountToken setting

Same inconsistency as populate_fine_tuning_repository_job.yaml — this job is missing automountServiceAccountToken: {{ .Values.automountServiceAccountToken }} which every other pod spec in this PR has. This will cause issues when automountServiceAccountToken is set to false.

Suggested change (at line 43):

```suggestion
      automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}
      serviceAccountName: {{ include "modelEngine.fullname" . }}
```
