From 9d08e51ae3b036a0897b8fd79c2614ee104e20aa Mon Sep 17 00:00:00 2001 From: Casey Davenport Date: Mon, 16 Mar 2026 14:06:37 -0700 Subject: [PATCH 1/6] Add guide for migrating from API server to native v3 CRDs --- calico/operations/crd-migration.mdx | 145 ++++++++++++++++++++++++++++ sidebars-calico.js | 1 + 2 files changed, 146 insertions(+) create mode 100644 calico/operations/crd-migration.mdx diff --git a/calico/operations/crd-migration.mdx b/calico/operations/crd-migration.mdx new file mode 100644 index 0000000000..12343169cf --- /dev/null +++ b/calico/operations/crd-migration.mdx @@ -0,0 +1,145 @@ +--- +description: Migrate Calico resources from the aggregated API server (v1 CRDs) to native v3 CRDs to remove the API server component. +--- + +# Migrate from API server to native CRDs + +## Big picture + +Automatically migrate $[prodname] resources from the aggregated API server's `crd.projectcalico.org/v1` backing storage to native `projectcalico.org/v3` CRDs, allowing you to remove the API server component. + +## Value + +Newer $[prodname] installations use native `projectcalico.org/v3` CRDs directly, without the aggregated API server. This is simpler to operate, removes a component, and enables Kubernetes-native features like CEL validation rules. The `DatastoreMigration` controller provides an automated, in-place migration path for existing clusters that are still running the API server. + +## Concepts + +### How it works + +The migration controller copies all $[prodname] resources from the v1 CRDs (used as backing storage by the API server) to native v3 CRDs. During the migration window, the datastore is briefly locked (`DatastoreReady=false`) so components pause and retain their cached dataplane state — existing workload connectivity is preserved throughout. 
+ +The migration proceeds through these phases: + +| Phase | Description | +|-------|-------------| +| `Pending` | CR created, prerequisites are being validated | +| `Migrating` | Datastore locked, resources being copied from v1 to v3 CRDs | +| `WaitingForConflictResolution` | Conflicts found — user action needed (see [resolving conflicts](#resolve-conflicts)) | +| `Converged` | All resources migrated, datastore unlocked, waiting for components to switch to v3 | +| `Complete` | All components running against v3 CRDs | + +### What gets migrated + +All $[prodname] resource types are migrated: network policies, IP pools, BGP configuration, Felix configuration, IPAM blocks, and more. IPAM resources are migrated last to minimize the window where new IP allocations are blocked. + +The controller handles policy name migration (removing the legacy `default.` prefix) automatically during the copy. + +### What happens during the migration window + +- Components (Felix, Typha, kube-controllers) pause and retain cached dataplane state +- **Existing workload connectivity is preserved** — no packet loss expected +- New pod scheduling and policy changes are blocked until migration completes +- IPAM allocations are blocked during the final phase of the migration + +The locked window is typically short (seconds to a few minutes depending on cluster size), but you should plan for a maintenance window where no policy changes or new pod deployments are needed. + +## Before you begin + +- $[prodname] v3.32+ (or the release that includes the migration controller) +- Cluster is currently running in API server mode (the aggregated API server is deployed) +- **If using GitOps (ArgoCD, Flux):** pause sync before starting the migration. These tools may interfere with the API group switchover. You'll update your manifests to use `projectcalico.org/v3` after migration completes. + +## How to + +### Migrate to native CRDs + +1. 
**Install v3 CRDs.** + + Apply the v3 CRD manifests from the $[prodname] release. While the aggregated APIService is active, Kubernetes ignores these CRDs, so this is safe to do ahead of time. + + ```bash + kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/vX.Y.Z/manifests/v3_projectcalico_org.yaml + ``` + + Replace `vX.Y.Z` with your $[prodname] version. + +2. **Install the DatastoreMigration CRD.** + + ```bash + kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/vX.Y.Z/manifests/migration.projectcalico.org_datastoremigrations.yaml + ``` + +3. **Create the DatastoreMigration CR.** + + ```bash + kubectl apply -f - < Date: Tue, 17 Mar 2026 11:34:05 -0700 Subject: [PATCH 2/6] Document OwnerReference limitation for non-Calico objects --- calico/operations/crd-migration.mdx | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/calico/operations/crd-migration.mdx b/calico/operations/crd-migration.mdx index 12343169cf..eed00ae611 100644 --- a/calico/operations/crd-migration.mdx +++ b/calico/operations/crd-migration.mdx @@ -143,3 +143,7 @@ The finalizer handles rollback: - Components resume normal operation as if the migration never happened The v1 data is never modified during migration, so it remains authoritative after an abort. + +### Known limitations + +**OwnerReferences from non-Calico resources.** The migration remaps OwnerReference UIDs on Calico resources, but does not scan non-Calico resources (ConfigMaps, Secrets, custom resources from other projects) for OwnerReferences pointing to Calico objects. If you have non-Calico resources with OwnerReferences to Calico resources, those references will become stale after migration because the Calico resource UIDs change. You'll need to update those references manually after migration completes. This is expected to be rare. 
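The phase progression documented in the guide (`Pending`, `Migrating`, `Converged`, `Complete`, with `WaitingForConflictResolution` as a possible detour) can be watched with a small polling loop. This is a sketch under assumptions, not part of the patch: in a real cluster the phase would come from `kubectl get datastoremigration v1-to-v3 -o jsonpath='{.status.phase}'`, but here the phase source is passed in as a command so the loop can be exercised without a cluster.

```shell
#!/usr/bin/env sh
# Sketch only: poll a DatastoreMigration's status.phase until it reaches
# a terminal state, using the phase names from the guide's table.
# In a real cluster the phase source would be something like:
#   kubectl get datastoremigration v1-to-v3 -o jsonpath='{.status.phase}'
wait_for_migration() {
  get_phase=$1                      # command that prints the current phase
  while :; do
    phase=$("$get_phase")
    echo "phase: $phase"
    case $phase in
      Complete)
        return 0 ;;                 # all components running against v3 CRDs
      WaitingForConflictResolution)
        echo "conflicts found; resolve before continuing"
        return 1 ;;
    esac
    sleep "${POLL_SECONDS:-5}"      # Pending/Migrating/Converged: keep waiting
  done
}

# Demonstration with a stubbed phase source (no cluster needed):
stub_phase() { echo "Complete"; }
wait_for_migration stub_phase
```

The stub makes the loop's behavior visible locally; swapping `stub_phase` for a real `kubectl` query is the only change needed on a live cluster.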
From 8c84a0c3e68a1fcdcf5576723720d19cc727569c Mon Sep 17 00:00:00 2001 From: Casey Davenport Date: Tue, 31 Mar 2026 16:05:01 -0700 Subject: [PATCH 3/6] Update calico/operations/crd-migration.mdx Co-authored-by: MichalFupso --- calico/operations/crd-migration.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/calico/operations/crd-migration.mdx b/calico/operations/crd-migration.mdx index eed00ae611..8096256226 100644 --- a/calico/operations/crd-migration.mdx +++ b/calico/operations/crd-migration.mdx @@ -78,7 +78,7 @@ The locked window is typically short (seconds to a few minutes depending on clus metadata: name: v1-to-v3 spec: - kind: V1ToV3 + type: APIServerToCRDs EOF ``` From 43066311968fa511cc1622a32ce3a5c8566e3125 Mon Sep 17 00:00:00 2001 From: Casey Davenport Date: Wed, 1 Apr 2026 11:18:30 -0700 Subject: [PATCH 4/6] Address PR feedback on CRD migration guide Fix review comments from ctauchen and MichalFupso: add tech preview admonition, use $[manifestsUrl] variable for manifest URLs, fix "dataplane" to "data plane", backtick APIService, update spec field to type: APIServerToCRDs, and add "finalizer" to Vale accept list. --- .../vocabularies/CalicoTerminology/accept.txt | 1 + calico/operations/crd-migration.mdx | 18 +++++++++++------- 2 files changed, 12 insertions(+), 7 deletions(-) diff --git a/.github/styles/config/vocabularies/CalicoTerminology/accept.txt b/.github/styles/config/vocabularies/CalicoTerminology/accept.txt index cea62847ab..6527be6153 100644 --- a/.github/styles/config/vocabularies/CalicoTerminology/accept.txt +++ b/.github/styles/config/vocabularies/CalicoTerminology/accept.txt @@ -27,6 +27,7 @@ adjacencies [eE]xfiltrat(e|ed|ing|ion) [fF]ailover [fF]ailsafe +[fF]inalizer[s]? [fF]irewalled [gG]lobal[Nn]etwork[Ss]et[s]? [gG]oroutine[s]? 
diff --git a/calico/operations/crd-migration.mdx b/calico/operations/crd-migration.mdx index 8096256226..5948f641ca 100644 --- a/calico/operations/crd-migration.mdx +++ b/calico/operations/crd-migration.mdx @@ -4,6 +4,12 @@ description: Migrate Calico resources from the aggregated API server (v1 CRDs) t # Migrate from API server to native CRDs +:::note + +This feature is tech preview. Tech preview features may be subject to significant changes before they become GA. + +::: + ## Big picture Automatically migrate $[prodname] resources from the aggregated API server's `crd.projectcalico.org/v1` backing storage to native `projectcalico.org/v3` CRDs, allowing you to remove the API server component. @@ -16,7 +22,7 @@ Newer $[prodname] installations use native `projectcalico.org/v3` CRDs directly, ### How it works -The migration controller copies all $[prodname] resources from the v1 CRDs (used as backing storage by the API server) to native v3 CRDs. During the migration window, the datastore is briefly locked (`DatastoreReady=false`) so components pause and retain their cached dataplane state — existing workload connectivity is preserved throughout. +The migration controller copies all $[prodname] resources from the v1 CRDs (used as backing storage by the API server) to native v3 CRDs. During the migration window, the datastore is briefly locked (`DatastoreReady=false`) so components pause and retain their cached data plane state — existing workload connectivity is preserved throughout. 
The migration proceeds through these phases: @@ -36,7 +42,7 @@ The controller handles policy name migration (removing the legacy `default.` pre ### What happens during the migration window -- Components (Felix, Typha, kube-controllers) pause and retain cached dataplane state +- Components (Felix, Typha, kube-controllers) pause and retain cached data plane state - **Existing workload connectivity is preserved** — no packet loss expected - New pod scheduling and policy changes are blocked until migration completes - IPAM allocations are blocked during the final phase of the migration @@ -55,18 +61,16 @@ The locked window is typically short (seconds to a few minutes depending on clus 1. **Install v3 CRDs.** - Apply the v3 CRD manifests from the $[prodname] release. While the aggregated APIService is active, Kubernetes ignores these CRDs, so this is safe to do ahead of time. + Apply the v3 CRD manifests from the $[prodname] release. While the aggregated `APIService` is active, Kubernetes ignores these CRDs, so this is safe to do ahead of time. ```bash - kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/vX.Y.Z/manifests/v3_projectcalico_org.yaml + kubectl apply -f $[manifestsUrl]/manifests/v3_projectcalico_org.yaml ``` - Replace `vX.Y.Z` with your $[prodname] version. - 2. **Install the DatastoreMigration CRD.** ```bash - kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/vX.Y.Z/manifests/migration.projectcalico.org_datastoremigrations.yaml + kubectl apply -f $[manifestsUrl]/manifests/migration.projectcalico.org_datastoremigrations.yaml ``` 3. 
**Create the DatastoreMigration CR.** From 73819b5fd43756aa97e8935f06a15c7b11c99c14 Mon Sep 17 00:00:00 2001 From: Casey Davenport Date: Wed, 1 Apr 2026 11:20:35 -0700 Subject: [PATCH 5/6] Add CRD migration guide to Calico Enterprise docs --- .../operations/crd-migration.mdx | 153 ++++++++++++++++++ sidebars-calico-enterprise.js | 1 + 2 files changed, 154 insertions(+) create mode 100644 calico-enterprise/operations/crd-migration.mdx diff --git a/calico-enterprise/operations/crd-migration.mdx b/calico-enterprise/operations/crd-migration.mdx new file mode 100644 index 0000000000..5948f641ca --- /dev/null +++ b/calico-enterprise/operations/crd-migration.mdx @@ -0,0 +1,153 @@ +--- +description: Migrate Calico resources from the aggregated API server (v1 CRDs) to native v3 CRDs to remove the API server component. +--- + +# Migrate from API server to native CRDs + +:::note + +This feature is tech preview. Tech preview features may be subject to significant changes before they become GA. + +::: + +## Big picture + +Automatically migrate $[prodname] resources from the aggregated API server's `crd.projectcalico.org/v1` backing storage to native `projectcalico.org/v3` CRDs, allowing you to remove the API server component. + +## Value + +Newer $[prodname] installations use native `projectcalico.org/v3` CRDs directly, without the aggregated API server. This is simpler to operate, removes a component, and enables Kubernetes-native features like CEL validation rules. The `DatastoreMigration` controller provides an automated, in-place migration path for existing clusters that are still running the API server. + +## Concepts + +### How it works + +The migration controller copies all $[prodname] resources from the v1 CRDs (used as backing storage by the API server) to native v3 CRDs. 
During the migration window, the datastore is briefly locked (`DatastoreReady=false`) so components pause and retain their cached data plane state — existing workload connectivity is preserved throughout. + +The migration proceeds through these phases: + +| Phase | Description | +|-------|-------------| +| `Pending` | CR created, prerequisites are being validated | +| `Migrating` | Datastore locked, resources being copied from v1 to v3 CRDs | +| `WaitingForConflictResolution` | Conflicts found — user action needed (see [resolving conflicts](#resolve-conflicts)) | +| `Converged` | All resources migrated, datastore unlocked, waiting for components to switch to v3 | +| `Complete` | All components running against v3 CRDs | + +### What gets migrated + +All $[prodname] resource types are migrated: network policies, IP pools, BGP configuration, Felix configuration, IPAM blocks, and more. IPAM resources are migrated last to minimize the window where new IP allocations are blocked. + +The controller handles policy name migration (removing the legacy `default.` prefix) automatically during the copy. + +### What happens during the migration window + +- Components (Felix, Typha, kube-controllers) pause and retain cached data plane state +- **Existing workload connectivity is preserved** — no packet loss expected +- New pod scheduling and policy changes are blocked until migration completes +- IPAM allocations are blocked during the final phase of the migration + +The locked window is typically short (seconds to a few minutes depending on cluster size), but you should plan for a maintenance window where no policy changes or new pod deployments are needed. + +## Before you begin + +- $[prodname] v3.32+ (or the release that includes the migration controller) +- Cluster is currently running in API server mode (the aggregated API server is deployed) +- **If using GitOps (ArgoCD, Flux):** pause sync before starting the migration. 
These tools may interfere with the API group switchover. You'll update your manifests to use `projectcalico.org/v3` after migration completes. + +## How to + +### Migrate to native CRDs + +1. **Install v3 CRDs.** + + Apply the v3 CRD manifests from the $[prodname] release. While the aggregated `APIService` is active, Kubernetes ignores these CRDs, so this is safe to do ahead of time. + + ```bash + kubectl apply -f $[manifestsUrl]/manifests/v3_projectcalico_org.yaml + ``` + +2. **Install the DatastoreMigration CRD.** + + ```bash + kubectl apply -f $[manifestsUrl]/manifests/migration.projectcalico.org_datastoremigrations.yaml + ``` + +3. **Create the DatastoreMigration CR.** + + ```bash + kubectl apply -f - < Date: Wed, 1 Apr 2026 14:13:11 -0700 Subject: [PATCH 6/6] Use plain "API service" instead of backticked APIService --- calico-enterprise/operations/crd-migration.mdx | 2 +- calico/operations/crd-migration.mdx | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/calico-enterprise/operations/crd-migration.mdx b/calico-enterprise/operations/crd-migration.mdx index 5948f641ca..710ca471d0 100644 --- a/calico-enterprise/operations/crd-migration.mdx +++ b/calico-enterprise/operations/crd-migration.mdx @@ -61,7 +61,7 @@ The locked window is typically short (seconds to a few minutes depending on clus 1. **Install v3 CRDs.** - Apply the v3 CRD manifests from the $[prodname] release. While the aggregated `APIService` is active, Kubernetes ignores these CRDs, so this is safe to do ahead of time. + Apply the v3 CRD manifests from the $[prodname] release. While the aggregated API service is active, Kubernetes ignores these CRDs, so this is safe to do ahead of time. 
```bash kubectl apply -f $[manifestsUrl]/manifests/v3_projectcalico_org.yaml diff --git a/calico/operations/crd-migration.mdx b/calico/operations/crd-migration.mdx index 5948f641ca..710ca471d0 100644 --- a/calico/operations/crd-migration.mdx +++ b/calico/operations/crd-migration.mdx @@ -61,7 +61,7 @@ The locked window is typically short (seconds to a few minutes depending on clus 1. **Install v3 CRDs.** - Apply the v3 CRD manifests from the $[prodname] release. While the aggregated `APIService` is active, Kubernetes ignores these CRDs, so this is safe to do ahead of time. + Apply the v3 CRD manifests from the $[prodname] release. While the aggregated API service is active, Kubernetes ignores these CRDs, so this is safe to do ahead of time. ```bash kubectl apply -f $[manifestsUrl]/manifests/v3_projectcalico_org.yaml
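The known limitation documented in patch 2 (OwnerReferences on non-Calico objects going stale because Calico resource UIDs change during migration) can be checked with a small helper. A minimal sketch, not part of the patch: the helper name `check_owner_ref` and the UIDs below are illustrative only; in a real cluster the recorded UID comes from the non-Calico object's `metadata.ownerReferences`, and the live UID would be read from the migrated v3 object with `kubectl get ... -o jsonpath='{.metadata.uid}'`.

```shell
#!/usr/bin/env sh
# Sketch only: the migration changes the UIDs of Calico resources, so an
# ownerReference recorded on a non-Calico object before migration no
# longer matches the live UID and must be updated by hand.
# Given both UIDs, report whether the reference is stale.
check_owner_ref() {
  recorded_uid=$1   # uid stored in the object's ownerReference
  live_uid=$2       # uid of the migrated v3 Calico resource
  if [ "$recorded_uid" = "$live_uid" ]; then
    echo "ok"
  else
    echo "stale: update ownerReference uid to $live_uid"
  fi
}

# Example with made-up UIDs (a pre-migration reference is always stale):
check_owner_ref "11111111-aaaa-4bbb-8ccc-000000000001" \
                "22222222-dddd-4eee-8fff-000000000002"
```

Running this per affected object after the migration reaches `Complete` identifies which references need a manual patch.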