Adding SGP-related extensible configuration to the model engine helm chart #762
arniechops wants to merge 7 commits into main from
Conversation
Additional Comments (1)
Prompt To Fix With AI
This is a comment left during a code review.
Path: charts/model-engine/templates/proportional_a100_autoscaler_deployment.yaml
Line: 30
Comment:
**Hardcoded image breaks airgapped deployments**
This PR parameterizes ubuntu, busybox, and curl images via `utilityImages` for airgapped/on-prem environments, but the `cluster-proportional-autoscaler` image from `registry.k8s.io` remains hardcoded here and in the a10/t4 autoscaler deployments. In an airgapped environment, `registry.k8s.io` would be unreachable just like the other registries that were parameterized.
Consider adding a `utilityImages.clusterProportionalAutoscaler` entry (or similar) to `values.yaml` and referencing it here:
```suggestion
- image: {{ .Values.utilityImages.clusterProportionalAutoscaler.repository }}:{{ .Values.utilityImages.clusterProportionalAutoscaler.tag }}
```
How can I resolve this? If you propose a fix, please make it concise.
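A sketch of what such a values entry could look like. The key name follows the reviewer's suggestion; the repository path and tag are illustrative, and an internal mirror would replace `registry.k8s.io` in a truly airgapped setup:

```yaml
# values.yaml (proposed -- not an existing chart value)
utilityImages:
  clusterProportionalAutoscaler:
    # mirror this repository internally for airgapped installs
    repository: registry.k8s.io/cpa/cluster-proportional-autoscaler
    tag: v1.8.9  # illustrative tag
```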
```yaml
{{- with $.Values.automountServiceAccountToken }}
automountServiceAccountToken: {{ . }}
{{- end }}
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: charts/model-engine/templates/service_account.yaml
Line: 24-26
Comment:
**`with` skips block when value is `false`**
Helm's `with` treats `false` as falsy, so when `automountServiceAccountToken` is set to `false` (the exact scenario where you want to disable automount on the ServiceAccount), this block will be skipped entirely and `automountServiceAccountToken` won't be rendered into the YAML. Only when the value is `true` will it appear — which is the opposite of the critical behavior.
In contrast, the pod-level specs in other templates correctly use `{{ .Values.automountServiceAccountToken }}` directly without a `with` gate.
```suggestion
automountServiceAccountToken: {{ $.Values.automountServiceAccountToken }}
```
How can I resolve this? If you propose a fix, please make it concise.
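The asymmetry is easy to see from what each form renders (a sketch, not output from this chart):

```yaml
# values.yaml
# automountServiceAccountToken: false

# Gated form -- `with` treats false as empty, so nothing is rendered
# and Kubernetes falls back to its default of automounting the token:
#   {{- with $.Values.automountServiceAccountToken }}
#   automountServiceAccountToken: {{ . }}
#   {{- end }}

# Direct form -- the value is always rendered, including false:
#   automountServiceAccountToken: {{ $.Values.automountServiceAccountToken }}
# which produces:
automountServiceAccountToken: false
```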
```yaml
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: charts/model-engine/templates/database_init_job.yaml
Line: 10
Comment:
**Hook weight collision with migration job**
Both `database_init_job.yaml` and `database_migration_job.yaml` use `"helm.sh/hook-weight": "-1"` on `pre-install,pre-upgrade`. This means Helm does not guarantee execution order between them. Since `init_database.py` creates schemas (`create schema if not exists`) and tables (`Base.metadata.create_all`), and the migration job applies Alembic migrations that depend on those schemas existing, the migration could run first and fail.
Consider setting this job's hook weight to `"-2"` (or lower) so it is guaranteed to run before the migration job at weight `"-1"`.
```suggestion
"helm.sh/hook-weight": "-2"
```
How can I resolve this? If you propose a fix, please make it concise.
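With the suggested change, the two jobs' hook annotations would differ only in weight; Helm runs hooks in ascending weight order, so `-2` executes before `-1` (sketch, showing only the relevant annotations):

```yaml
# database_init_job.yaml -- schema init, runs first
annotations:
  "helm.sh/hook": pre-install,pre-upgrade
  "helm.sh/hook-weight": "-2"

# database_migration_job.yaml -- Alembic migrations, runs second
annotations:
  "helm.sh/hook": pre-install,pre-upgrade
  "helm.sh/hook-weight": "-1"
```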
Additional Comments (2)
Path: charts/model-engine/templates/populate_fine_tuning_repository_job.yaml
Line: 44
Comment:
**Missing `automountServiceAccountToken` setting**
All other pod specs in this PR (gateway, builder, cacher, celery autoscaler, database migration, database init, spellbook init, and the proportional autoscalers) explicitly set `automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}`. This job and `restart_keda_operator.yaml` are the only ones missing it.
When `automountServiceAccountToken` is `false`, the `modelEngine.volumes` helper will inject the projected `token-volume`, but without disabling the default automount on the pod spec, Kubernetes will also mount its own default token — resulting in a conflict at `/var/run/secrets/kubernetes.io/serviceaccount`.
```suggestion
automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}
serviceAccountName: {{ include "modelEngine.fullname" . }}
```
How can I resolve this? If you propose a fix, please make it concise.
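In context, the fix drops into the job's pod template alongside the existing fields. This is a sketch: only the two added lines come from the suggestion, and the surrounding structure and container name are illustrative:

```yaml
spec:
  template:
    spec:
      # the two lines from the suggestion
      automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}
      serviceAccountName: {{ include "modelEngine.fullname" . }}
      containers:
        - name: populate-fine-tuning-repository  # hypothetical name
          # ...
```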
Prompt To Fix With AI
This is a comment left during a code review.
Path: charts/model-engine/templates/restart_keda_operator.yaml
Line: 43
Comment:
**Missing `automountServiceAccountToken` setting**
Same inconsistency as `populate_fine_tuning_repository_job.yaml` — this job is missing `automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}` which every other pod spec in this PR has. This will cause issues when `automountServiceAccountToken` is set to `false`.
```suggestion
automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}
serviceAccountName: {{ include "modelEngine.fullname" . }}
```
How can I resolve this? If you propose a fix, please make it concise.
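For reference, a projected `token-volume` of the kind the `modelEngine.volumes` helper injects typically looks like the standard projection below. This sketch is assembled from the PR summary's flowchart (token with 86400s expiry, `kube-root-ca.crt` ConfigMap, namespace via downwardAPI); the chart's actual helper may differ in detail:

```yaml
# Sketch of a projected service-account token volume
volumes:
  - name: token-volume
    projected:
      sources:
        - serviceAccountToken:
            path: token
            expirationSeconds: 86400
        - configMap:
            name: kube-root-ca.crt
            items:
              - key: ca.crt
                path: ca.crt
        - downwardAPI:
            items:
              - path: namespace
                fieldRef:
                  fieldPath: metadata.namespace
volumeMounts:
  - name: token-volume
    mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    readOnly: true
```

Mounting this at the default path is exactly why the pod-level automount must also be disabled: otherwise Kubernetes mounts its own token at the same location.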
Pull Request Summary
What is this PR changing? Why is this change being made? Any caveats you'd like to highlight? Link any relevant documents, links, or screenshots here if applicable.
Test Plan and Usage Guide
How did you validate that your PR works correctly? How do you run or demo the code? Provide enough detail so a reviewer can reasonably reproduce the testing procedure. Paste example command line invocations if applicable.
Greptile Summary
This PR makes the model-engine Helm chart more extensible for non-AWS deployments (on-prem, GCP, airgapped environments) by introducing configurable values across 22 files. The chart version is bumped from 0.1.10 to 0.2.0.
- `extraEnvVars`, `extraVolumes`, `extraVolumeMounts`, and `extraPodEnvFrom` for wrapper charts to inject environment-specific configuration without forking templates
- Replaces hardcoded `public.ecr.aws` and `curlimages/curl` references with `utilityImages.*` values, enabling airgapped registries
- `automountServiceAccountToken` across all deployments/jobs, with a projected token volume fallback (`modelEngine.tokenVolume` / `modelEngine.tokenVolumeMount`) for when automount is disabled
- `redis.authSecretName` / `redis.authSecretKey` as an alternative to plaintext `redis.auth`
- `azure.inference_client_id` for inference service accounts
- Appends `{{ .Release.Revision }}` to job names and a `before-hook-creation` delete policy to prevent stale job conflicts during upgrades
- `database_init_job.yaml`: runs schema initialization at hook-weight `-2` (before migration at `-1`)
- `service_config_map.yaml` now uses `kindIs "bool"` to avoid incorrectly quoting boolean config values
- `gpuTolerations` for non-AWS GPU node label schemes

Confidence Score: 4/5
The score reflects safe, backwards-compatible defaults (`automountServiceAccountToken: true`, utility images pointing to existing ECR/Docker Hub repos, `inferenceFramework.*: "latest"`). The changes are well-structured and consistently applied across templates. Minor inconsistency: two job templates (`populate_fine_tuning_repository`, `restart_keda_operator`) are missing the explicit pod-level `automountServiceAccountToken`, though the SA-level setting covers it.

Files needing extra attention: `charts/model-engine/templates/_helpers.tpl` (core helper functions with new token volume logic), `charts/model-engine/templates/celery_autoscaler_stateful_set.yaml` (new GCP broker detection logic)

Important Files Changed
Uses `kindIs "bool"` to avoid quoting boolean values in YAML config. Applied consistently to both launch and infra config sections across both ConfigMaps.

Flowchart
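The `kindIs "bool"` pattern generally looks like the sketch below; the `.Values.config` map name is illustrative, not the chart's actual structure:

```yaml
# Emit booleans unquoted so YAML parses them as true/false;
# quote everything else to keep string values safe
{{- range $key, $value := .Values.config }}
{{- if kindIs "bool" $value }}
{{ $key }}: {{ $value }}
{{- else }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
```

Without the type check, a bare `{{ $value | quote }}` would render `enabled: "true"`, which consumers reading the config as YAML would see as a string rather than a boolean.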
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[values.yaml] -->|automountServiceAccountToken| B{automount enabled?}
    B -->|true| C[Default K8s token mount]
    B -->|false| D[Projected token volume]
    D --> E[serviceAccountToken\n86400s expiry]
    D --> F[kube-root-ca.crt\nConfigMap]
    D --> G[namespace\ndownwardAPI]
    E --> H[Mount at\n/var/run/secrets/kubernetes.io/serviceaccount]
    F --> H
    G --> H
    A -->|extraEnvVars| I[Gateway / Builder / Cacher]
    A -->|extraVolumes| I
    A -->|extraVolumeMounts| I
    A -->|extraPodEnvFrom| J[Inference Pod Templates]
    A -->|utilityImages.*| K[Balloon / Busybox / Curl Pods]
    A -->|gpuTolerations| L[GPU Balloon + Service Templates]
    A -->|inferenceFramework.*| M[Inference Framework ConfigMap]
    A -->|redis.authSecretName| N[Redis Auth via K8s Secret]
```

Last reviewed commit: f8ab5a8