This guide covers importing existing TrueNAS volumes into tns-csi management and adopting them into Kubernetes clusters.
Adoption is the process of taking an existing TrueNAS dataset/ZVOL and making it available as a Kubernetes PersistentVolume managed by tns-csi. This is useful for:
- **Migration from democratic-csi** - Move volumes to tns-csi without data loss
- **Disaster recovery** - Restore volumes to a new cluster after failure
- **Cluster recreation** - Re-attach volumes after rebuilding a cluster
- **Manual volume import** - Bring manually-created TrueNAS volumes into Kubernetes
**READ THIS BEFORE PROCEEDING**

- **Always back up critical data before any migration** - Use `pg_dump`, application-level backups, or ZFS snapshots
- **Scale down workloads first** - Never migrate volumes while pods are using them
- **Set the Retain policy** - Prevent accidental deletion during migration
- **Test with non-critical volumes first** - Verify the process works in your environment
- **StatefulSet volumes require exact PVC names** - Plan carefully for stateful workloads
- **Suspend GitOps reconciliation** - If using Flux/ArgoCD, suspend kustomizations before making manual changes
Do NOT attempt PVC adoption for database operators like CloudNativePG, Zalando PostgreSQL Operator, or similar.
These operators manage their own PVC lifecycle and expect specific naming conventions (e.g., `postgres-1`, `postgres-2`, `postgres-3` for CNPG). Attempting to adopt PVCs with different names will cause:
- Cluster stuck in "unrecoverable" state
- Operators continuously trying to recreate pods with wrong volumes
- Data corruption risks
Instead, use dump/restore:

```bash
# 1. Create a recovery pod with the old volume mounted
kubectl run pg-recovery --image=postgres:16 --restart=Never \
  --overrides='{"spec":{"containers":[{"name":"pg-recovery","image":"postgres:16",
    "command":["sleep","infinity"],
    "volumeMounts":[{"name":"data","mountPath":"/var/lib/postgresql/data"}]}],
    "volumes":[{"name":"data","persistentVolumeClaim":{"claimName":"old-pvc-name"}}]}}'

# 2. Start postgres and dump the data (run as the postgres user;
#    pg_ctl refuses to run as root)
kubectl exec -it pg-recovery -- bash
gosu postgres pg_ctl start -D /var/lib/postgresql/data/pgdata
gosu postgres pg_dumpall -U postgres > /tmp/backup.sql

# 3. Copy the backup out
kubectl cp pg-recovery:/tmp/backup.sql ./backup.sql

# 4. Restore to the new cluster
kubectl exec -i postgres-1 -n db -- psql -U postgres < backup.sql
```

PVs and PVCs have finalizers (`kubernetes.io/pv-protection`, `kubernetes.io/pvc-protection`) that prevent deletion while in use. If a PV/PVC is stuck in `Terminating`:
```bash
# Check finalizers
kubectl get pv <pv-name> -o jsonpath='{.metadata.finalizers}'

# Remove finalizers (only after confirming no pod is using the volume!)
kubectl patch pv <pv-name> -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl patch pvc <pvc-name> -n <namespace> -p '{"metadata":{"finalizers":null}}' --type=merge
```

Warning: Only remove finalizers after confirming:
- No pods are mounting the volume
- Data is backed up or you're certain the PV data is safe
The full adoption process involves both TrueNAS-side and Kubernetes-side steps:

```
┌──────────────────────────────────────────────────────┐
│ KUBERNETES SIDE                                      │
│ 1. Scale down workload (pods stop using volume)      │
│ 2. Set PV reclaim policy to Retain                   │
│ 3. Delete old PVC (PV becomes Released, data safe)   │
│ 4. Delete old PV (optional cleanup)                  │
└──────────────────────────────────────────────────────┘
                           │
                           ▼
┌──────────────────────────────────────────────────────┐
│ TRUENAS SIDE (tns-csi)                               │
│ 5. Import dataset into tns-csi (sets ZFS properties) │
│ 6. Generate PV/PVC manifests                         │
└──────────────────────────────────────────────────────┘
                           │
                           ▼
┌──────────────────────────────────────────────────────┐
│ KUBERNETES SIDE                                      │
│ 7. Apply new PV/PVC manifests                        │
│ 8. Scale up workload                                 │
└──────────────────────────────────────────────────────┘
```
This is the most common adoption scenario. Follow these steps carefully.

Prerequisites:

- kubectl access to the cluster
- The kubectl tns-csi plugin installed (see the installation guide)
- TrueNAS credentials configured (the plugin auto-discovers them from the installed driver)
Example: migrating a volume used by a qbittorrent StatefulSet:

```
PVC:      config-qbittorrent-0 (namespace: media)
PV:       pvc-2cf78549-3392-457e-9119-6a7be7da6707
Dataset:  storage/iscsi/v/pvc-2cf78549-3392-457e-9119-6a7be7da6707
Protocol: iSCSI (democratic-csi)
```
CRITICAL: Stop all pods using the volume before proceeding.
```bash
# For a StatefulSet
kubectl scale statefulset qbittorrent -n media --replicas=0

# For a Deployment
kubectl scale deployment myapp -n media --replicas=0

# Verify pods are terminated
kubectl get pods -n media -l app=qbittorrent
```

Wait until all pods are terminated before continuing.
Set the reclaim policy to Retain so the PV won't be deleted when the PVC is removed:
```bash
# Check the current reclaim policy
kubectl get pv pvc-2cf78549-3392-457e-9119-6a7be7da6707 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'

# Set it to Retain if it is not already
kubectl patch pv pvc-2cf78549-3392-457e-9119-6a7be7da6707 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

Next, delete the old PVC. This releases the PV but keeps the data safe (because of the Retain policy):

```bash
kubectl delete pvc config-qbittorrent-0 -n media
```

The PV will now show status `Released`:

```bash
kubectl get pv pvc-2cf78549-3392-457e-9119-6a7be7da6707
# STATUS: Released
```

Now use the tns-csi plugin to mark the dataset as managed by tns-csi:
```bash
# Dry run first to see what will happen
kubectl tns-csi import storage/iscsi/v/pvc-2cf78549-3392-457e-9119-6a7be7da6707 \
  --protocol iscsi \
  --dry-run

# If everything looks good, run it for real
kubectl tns-csi import storage/iscsi/v/pvc-2cf78549-3392-457e-9119-6a7be7da6707 \
  --protocol iscsi
```

This sets ZFS properties on the dataset:

- `tns-csi:managed_by` = `"tns-csi"`
- `tns-csi:protocol` = `"iscsi"`
- `tns-csi:adoptable` = `"true"`
- And other metadata properties
Generate Kubernetes manifests for the tns-csi managed volume:
```bash
kubectl tns-csi adopt storage/iscsi/v/pvc-2cf78549-3392-457e-9119-6a7be7da6707 \
  --pvc-name config-qbittorrent-0 \
  --namespace media \
  --storage-class tns-iscsi \
  -o yaml > qbittorrent-volume.yaml
```

Important for StatefulSets: the `--pvc-name` must match the expected PVC name pattern: `<volumeClaimTemplate-name>-<statefulset-name>-<ordinal>`
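The naming rule can be sanity-checked before adopting; a minimal helper (the function name is illustrative, not part of the plugin):

```python
def expected_pvc_name(template: str, statefulset: str, ordinal: int) -> str:
    """Build the PVC name a StatefulSet expects for a replica ordinal:
    <volumeClaimTemplate-name>-<statefulset-name>-<ordinal>."""
    return f"{template}-{statefulset}-{ordinal}"

# The qbittorrent example from this guide: volumeClaimTemplate "config",
# StatefulSet "qbittorrent", replica 0
print(expected_pvc_name("config", "qbittorrent", 0))  # config-qbittorrent-0
```

Pass the result as `--pvc-name` so the adopted volume binds when the StatefulSet scales back up.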
Review the generated manifests:
```bash
cat qbittorrent-volume.yaml
```

The old democratic-csi PV is no longer needed:

```bash
kubectl delete pv pvc-2cf78549-3392-457e-9119-6a7be7da6707
```

Apply the new manifests:

```bash
kubectl apply -f qbittorrent-volume.yaml
```

Verify the PVC is bound:

```bash
kubectl get pvc config-qbittorrent-0 -n media
# STATUS: Bound
```

Finally, scale the workload back up:

```bash
kubectl scale statefulset qbittorrent -n media --replicas=1

# Verify the pod is running and can access its data
kubectl get pods -n media -l app=qbittorrent
kubectl logs -n media qbittorrent-0
```

For NFS volumes:

```bash
kubectl tns-csi import storage/nfs/pvc-xxx --protocol nfs

# If the NFS share doesn't exist, create it:
kubectl tns-csi import storage/nfs/pvc-xxx --protocol nfs --create-share
```

For NVMe-oF volumes:

```bash
kubectl tns-csi import storage/nvmeof/v/pvc-xxx --protocol nvmeof
```

Note: NVMe-oF requires the NVMe-oF port to be configured in TrueNAS.

For iSCSI volumes:

```bash
kubectl tns-csi import storage/iscsi/v/pvc-xxx --protocol iscsi
```

Note: iSCSI requires the iSCSI portal to be configured in TrueNAS.
Older versions of tns-csi (pre-0.8) used base64-encoded JSON `volumeHandle` values instead of plain volume IDs. These volumes work correctly but won't appear in `kubectl tns-csi list`.
```bash
# Check volumeHandle length (old format is ~316 chars, new is ~40 chars)
kubectl get pv -o json | jq -r '
  .items[] |
  select(.spec.csi.driver == "tns.csi.io") |
  "\(.metadata.name): \(.spec.csi.volumeHandle | length) chars"'
```

The `volumeHandle` field is immutable, so you must recreate the PV/PVC:
- Scale down workload
- Set Retain policy on PV
- Delete PVC
- Delete old PV
- Create new PV with plain volumeHandle
- Create new PVC
- Import the dataset (to set ZFS properties so it shows in `kubectl tns-csi list`)
- Scale up workload
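Whether a given handle is legacy can be checked programmatically; a sketch assuming, as described above, that legacy handles are base64-encoded JSON with a `name` field (the function is illustrative, not part of the plugin):

```python
import base64
import binascii
import json

def legacy_handle_name(handle: str):
    """Return the plain volume name if `handle` is a legacy base64-encoded
    JSON volumeHandle, or None if it is already a plain volume ID."""
    try:
        decoded = json.loads(base64.b64decode(handle, validate=True))
    except (binascii.Error, ValueError):
        return None
    return decoded.get("name") if isinstance(decoded, dict) else None

# New-style plain IDs contain '-', which is not a valid base64 character:
print(legacy_handle_name("pvc-2cf78549-3392-457e-9119-6a7be7da6707"))  # None
```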
```bash
# Example: convert a volumeHandle from base64 to plain
# Old PV had: volumeHandle: eyJuYW1lIjoicHZjLTEyMzQ1...
# New PV uses: volumeHandle: pvc-12345-xxxx-xxxx-xxxx

# 1. Get the plain name from the base64 handle
kubectl get pv <pv-name> -o jsonpath='{.spec.csi.volumeHandle}' | base64 -d | jq -r '.name'

# 2. Recreate the PV with that plain name as the volumeHandle

# 3. Import the dataset to set ZFS properties:
kubectl tns-csi import <dataset-path> --protocol nfs
```

When a Kubernetes cluster is lost but the TrueNAS data survives, use this process to recover volumes.
Find volumes that were managed by tns-csi:

```bash
kubectl tns-csi list
```

Or find all orphaned volumes (volumes with no matching PVC):

```bash
kubectl tns-csi list-orphaned
```

For each volume to recover:

```bash
kubectl tns-csi adopt <dataset-path> \
  --pvc-name <desired-pvc-name> \
  --namespace <namespace> \
  -o yaml | kubectl apply -f -
```

Deploy your applications. If you use GitOps with the same PVC names, the volumes will be bound automatically.
For GitOps workflows, configure StorageClasses to automatically adopt existing volumes when PVCs with matching names are created.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: truenas-nfs-gitops
provisioner: tns.csi.io
parameters:
  protocol: nfs
  pool: tank
  parentDataset: csi
  server: truenas.local
  markAdoptable: "true"   # New volumes can be adopted later
  adoptExisting: "true"   # Auto-adopt volumes with matching names
reclaimPolicy: Retain     # Keep volumes on PVC deletion
allowVolumeExpansion: true
```

How it works:

- When a PVC is created, the driver searches for an existing volume by name
- If found and adoptable, the existing volume is used instead of creating new
- Missing TrueNAS resources (NFS shares, iSCSI targets) are recreated automatically
- The volume is returned as if newly provisioned, but with existing data
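With such a StorageClass in place, an ordinary PVC is enough to pick up an existing volume. A sketch, assuming the driver matches on the PVC name as described above (names and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-qbittorrent-0     # Matches an existing, adoptable volume
  namespace: media
spec:
  storageClassName: truenas-nfs-gitops
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi              # Illustrative size
```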
See FEATURES.md for detailed adoption behavior.
To import volumes that were never managed by any CSI driver (manually created):
```bash
kubectl tns-csi list-unmanaged --pool storage
```

This shows all datasets/ZVOLs not managed by tns-csi, including:
- Manually created datasets
- Democratic-csi volumes
- Other CSI driver volumes
```bash
# Import with NFS protocol (creates a share if needed)
kubectl tns-csi import storage/mydata/volume1 --protocol nfs --create-share

# Generate manifests
kubectl tns-csi adopt storage/mydata/volume1 \
  --pvc-name my-volume \
  --namespace default \
  -o yaml > my-volume.yaml

# Apply
kubectl apply -f my-volume.yaml
```

If the PVC does not bind, check that:

- The PV exists and is in the `Available` state
- The PVC's `volumeName` matches the PV name
- The StorageClass matches between PV and PVC
- Access modes match between PV and PVC
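The checklist above can be run over `kubectl get pv/pvc -o json` output; a sketch (the field paths follow the Kubernetes API, the helper name is illustrative):

```python
def binding_mismatches(pv: dict, pvc: dict) -> list:
    """Compare a PV and a PVC (parsed from `kubectl get ... -o json`)
    against the binding requirements above; return a list of problems."""
    problems = []
    phase = pv.get("status", {}).get("phase")
    if phase not in ("Available", "Bound"):
        problems.append(f"PV phase is {phase}, not Available")
    if pvc["spec"].get("volumeName") not in (None, pv["metadata"]["name"]):
        problems.append("PVC volumeName does not match the PV name")
    if pv["spec"].get("storageClassName") != pvc["spec"].get("storageClassName"):
        problems.append("StorageClass differs between PV and PVC")
    if not set(pvc["spec"].get("accessModes", [])) <= set(pv["spec"].get("accessModes", [])):
        problems.append("PVC requests access modes the PV does not offer")
    return problems
```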
```bash
kubectl describe pv <pv-name>
kubectl describe pvc <pvc-name> -n <namespace>
```

If the dataset already has tns-csi properties, either:

- Use `kubectl tns-csi adopt` directly (skip the import), or
- Remove the existing properties on TrueNAS:

```bash
zfs inherit -r tns-csi:managed_by <dataset>
```
StatefulSets expect PVCs with specific names: `<volumeClaimTemplate-name>-<statefulset-name>-<ordinal>`

For a StatefulSet named `postgres` with volumeClaimTemplate `data`:

- Replica 0: `data-postgres-0`
- Replica 1: `data-postgres-1`

Ensure `--pvc-name` matches exactly when adopting.
If the NFS share was deleted but the dataset exists:
```bash
kubectl tns-csi import <dataset> --protocol nfs --create-share
```

If you are using GitOps, you may see errors like:

```
PVC <name> spec is immutable after creation
```
This happens when your GitOps manifests have a different `storageClassName` than the live PVC.
Solution:
- Suspend the relevant kustomization/application
- Update your Git manifests to match the new storage class
- Include both PV and PVC in your manifests (static provisioning)
- Commit and push
- Resume reconciliation
```bash
# Flux example
flux suspend kustomization <name>
# ... make changes, commit, push ...
flux resume kustomization <name>
```

If operators (vm-operator, coroot-operator, etc.) keep recreating pods during migration:
```bash
# Scale down the operator first
kubectl scale deploy <operator-name> -n <namespace> --replicas=0

# Do your migration
# ...

# Scale the operator back up
kubectl scale deploy <operator-name> -n <namespace> --replicas=1
```

After migration, verify the volume:

- Verify the mount succeeded:

  ```bash
  kubectl exec -it <pod> -- df -h
  kubectl exec -it <pod> -- ls -la /path/to/mount
  ```

- Check that the volume attributes match:

  ```bash
  kubectl get pv <pv-name> -o yaml
  ```

- Verify the NFS share path / iSCSI IQN / NVMe NQN is correct
| Command | Description |
|---|---|
| `kubectl tns-csi list` | List all tns-csi managed volumes |
| `kubectl tns-csi list-orphaned` | Find volumes without matching PVCs |
| `kubectl tns-csi list-unmanaged --pool <pool>` | List volumes not managed by tns-csi |
| `kubectl tns-csi import <dataset> --protocol <proto>` | Import dataset into tns-csi management |
| `kubectl tns-csi adopt <dataset>` | Generate PV/PVC manifests |
| `kubectl tns-csi describe <volume>` | Show detailed volume info |
| `kubectl tns-csi mark-adoptable <volume>` | Mark volume as adoptable |
See KUBECTL-PLUGIN.md for complete CLI documentation.
- FEATURES.md - Full feature documentation including automatic adoption
- KUBECTL-PLUGIN.md - Complete kubectl plugin reference
- DEPLOYMENT.md - Installation and configuration guide