The Ultimate Guide to Homelab NAS Migration: Moving K3s Services to Centralized Storage
Every technology enthusiast eventually hits the “homelabber’s wall.” You start by running applications on a single Raspberry Pi or a mini-PC. Things run smoothly until a local SD card fails or a solid-state drive suddenly dies. In an instant, you lose your application data, your configuration files, and hours of hard work. Localized, node-locked disks quickly become a major bottleneck for data integrity and high availability.
The solution to this hardware bottleneck is a complete homelab NAS migration. This process involves moving your stateful application data from local hardware directly onto a centralized Network Attached Storage device. By making this transition, you separate your computing power from your data storage.
Setting up a NAS for homelab environments provides enterprise-grade features for your home network. You gain access to redundant array of independent disks (RAID) setups that protect against drive failure. You also get easy snapshot features for instant backups. Most importantly, centralized storage gives you the ability to move application pods between different computing nodes without ever losing access to your data.
The goal of this guide is to provide a complete, step-by-step roadmap for your infrastructure. We will cover how to move crucial services like Gitea, JupyterHub, and various databases into a K3s environment backed entirely by robust, centralized storage.
Why a NAS is the Backbone of a Modern Homelab
Running applications with local storage is risky. In a standard K3s cluster setup, the default storage method is often the “local-path” provisioner. This method ties your application data to a specific hardware node.
If that specific node loses power, requires a reboot, or suffers a hardware failure, the data becomes entirely inaccessible. Your applications will crash, and your services will go offline. Local storage prevents your cluster from healing itself.
Centralizing your data changes how your entire network operates. A dedicated storage server provides massive benefits for a modern setup:
- Data Integrity: Modern storage systems use advanced file systems like ZFS or BTRFS. These file systems actively check your data for “bitrot,” which is the slow degradation of data over time. If they find corrupted data, they use RAID redundancy to repair it automatically.
- Scalability: When you run out of space on a single mini-PC, upgrading the internal drive is frustrating. With a centralized system, adding more drives to the storage array is incredibly simple. It is much easier than upgrading every single K3s node in your network.
- Node Portability: Centralized storage is the ultimate prerequisite for building a high availability cluster. If a computing node dies, K3s can simply spin up your application on a surviving node. The new node will connect to the network storage and pick up exactly where the old node left off.
To achieve this portability, you must transition to using homelab persistent volumes. In a Kubernetes environment, a persistent volume is a cluster resource that represents actual storage capacity.
Instead of pointing to a folder on a local hard drive, these homelab persistent volumes point across your local network. They connect directly to an NFS share or an iSCSI target on your storage server. When a pod requests storage, it claims a piece of this networked volume.
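As a concrete illustration, here is a minimal pair of manifests for a statically provisioned NFS-backed volume and a claim that binds to it. The server address, export path, names, and size are placeholders to replace with your own values:

```yaml
# Hypothetical example: a PersistentVolume backed by an NFS export,
# plus a claim that binds to it. Server IP and path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitea-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany            # NFS allows many pods, on many nodes, at once
  nfs:
    server: 192.168.10.50      # your NAS address
    path: /mnt/tank/k3s/gitea  # your dedicated dataset
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # empty string: bind to a pre-created PV, skip dynamic provisioning
  resources:
    requests:
      storage: 10Gi
```

Later in this guide we set up dynamic provisioning, which makes hand-writing PersistentVolume objects like this unnecessary, but the static form shows exactly what the cluster stores: an address, a path, and an access mode.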
Preparing Your Infrastructure
Before you can migrate any data, you must choose the right hardware and software for your storage server. Building a NAS for homelab usage requires careful planning. You have several excellent operating system choices depending on your technical comfort level.
TrueNAS is an incredible option if you want the ultimate power of the ZFS file system. It provides advanced data protection and caching features. Synology offers pre-built devices that are famous for their user-friendly interfaces and extreme ease of use. If you prefer to build your own system, running a standard Linux distribution with OpenMediaVault is a lightweight, effective choice.
Once your operating system is running, you must choose how your K3s cluster will communicate with the storage server. The two main protocols are NFS and iSCSI.
NFS (Network File System)
NFS is a file-level protocol. It is very easy to set up and manage. The biggest advantage of NFS is that it supports ReadWriteMany access. This means multiple application pods, potentially on completely different computing nodes, can read and write to the exact same data simultaneously. This protocol is ideal for shared folders, Gitea repositories, or JupyterHub workspaces.
iSCSI (Internet Small Computer Systems Interface)
iSCSI is a block-level protocol. Instead of sharing files, it shares raw blocks of storage space over the network. It offers much higher performance and lower latency, which is perfect for heavy database workloads. However, iSCSI generally limits a storage volume to a single node at a time, known as ReadWriteOnce access.
After choosing your protocol, you must configure your storage system specifically for K3s. Follow these NAS configuration steps:
- Create a Dedicated Dataset: Do not mix your container data with your personal family photos. Create a specific dataset or shared folder just for K3s data.
- Configure Permissions: Network permissions can be tricky. You must set up a “Mapall User” to map incoming traffic to a specific user identity. You can map traffic to root, or you can map it to a specific user ID and group ID that matches your container users. This step prevents frustrating “Permission Denied” errors when pods try to save files.
- Network Separation: We highly recommend creating a dedicated storage Virtual Local Area Network (VLAN). This separates your heavy storage traffic from your general internet traffic. A dedicated VLAN ensures that someone watching a 4K movie on your network does not slow down your database queries.
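On a plain Linux or OpenMediaVault build, these permission choices end up in `/etc/exports` (TrueNAS and Synology expose the same options through their web interfaces). A sketch, with a placeholder dataset path and storage-VLAN subnet:

```text
# /etc/exports — example entry for a dedicated K3s dataset.
# no_root_squash lets containers running as root keep root ownership;
# alternatively, use all_squash with anonuid/anongid to map every
# incoming user to one specific user and group ID.
/mnt/tank/k3s  192.168.10.0/24(rw,sync,no_subtree_check,no_root_squash)
```

Restricting the export to the storage VLAN's subnet, as shown, keeps the share invisible to the rest of your network.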
When you correctly configure your network and permissions, your NAS for homelab operations becomes a high-speed, secure vault for your k3s NAS storage needs.
Configuring k3s NAS Storage
With your hardware running and your datasets created, you must teach your K3s cluster how to communicate with the network storage. Out of the box, K3s does not know how to connect to external storage arrays.
To fix this, K3s requires a Container Storage Interface driver. A Container Storage Interface (CSI) driver is a small piece of software that acts as a translator. It allows your Kubernetes cluster to “talk” to the storage server and automatically request space when needed.
The most highly recommended driver for file-based storage is the NFS Subdir External Provisioner. This tool is incredibly efficient. It automatically creates new subdirectories on your network server for every single Persistent Volume Claim your cluster creates. You do not have to manually create folders on your storage server every time you launch a new application.
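The provisioner is typically installed with Helm. Here is a hedged sketch of a values file matching this article's setup; the server address and export path are assumptions you must adapt:

```yaml
# values.yaml for the nfs-subdir-external-provisioner Helm chart (sketch).
# Replace server and path with your own NAS details.
nfs:
  server: 192.168.10.50       # NAS address on the storage VLAN
  path: /mnt/tank/k3s         # the dedicated K3s dataset
storageClass:
  name: nfs-client            # matches the StorageClass used in this guide
  archiveOnDelete: "false"    # delete data when the claim is removed
```

With these values, every new Persistent Volume Claim gets its own subdirectory under `/mnt/tank/k3s`, named after the namespace and claim.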
To make this work, you must define a StorageClass in K3s. A StorageClass is a YAML configuration file. This file tells K3s to use the new CSI driver as the default provider for all k3s NAS storage requests.
Here is an example of what this YAML configuration looks like:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
```
In this file, the provisioner line tells K3s which driver to use. The archiveOnDelete parameter tells the system what to do when an application is deleted. Setting it to “false” means the data is deleted when the claim is removed. Setting it to “true” would save an archived copy of the data.
Once you apply this file to your cluster, you can set it as your default storage option. Any new application you install will automatically reach out across the network, create a folder, and store its data safely on your centralized server.
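Marking the class as the cluster default is done with a standard Kubernetes annotation on the StorageClass itself:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # PVCs without an explicit class land here
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
```

Note that K3s ships with its local-path class marked as the default, so you may also need to set that class's `is-default-class` annotation to `"false"` to avoid having two defaults.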
Centralizing your data also brings massive administrative benefits. Once storage is centralized in one location, monitoring its health, capacity, and performance becomes much easier. For more information on monitoring your overall network health, read our guide on Using VictoriaMetrics and Grafana with OCI and your Homelab.
By completing this configuration, your k3s NAS storage is fully integrated and ready to accept data from your local hardware.
The Migration Strategy (Step-by-Step)
Moving your live data requires a very careful, methodical approach. A successful homelab NAS migration requires you to plan the move without losing any configurations or corrupting active databases.
Follow this step-by-step strategy to safely transition your local data to your new homelab persistent volumes.
Step 1: Resource Inventory
Before you move a single file, you must list all existing data claims. You need to know exactly what is running on your local disks. Open your command line and run kubectl get pvc -A. This command will list every active storage claim across all namespaces in your cluster.
Take note of the sizes, the access modes, and the application names. You will need to recreate these exactly on the network server. Understanding your cluster’s current state is vital. Always refer to official documentation for advanced resource management before making major changes.
Citation: Advanced Options and Configuration for K3s Clusters
Step 2: Quiescing Services
You cannot safely copy a file while an application is actively writing to it. Doing so guarantees data corruption. You must “quiesce” your services, which means pausing them completely.
In Kubernetes, you do this by “scaling to zero.” Use the command kubectl scale deployment <name> --replicas=0. This tells the cluster to shut down all running pods for that specific application. The data on the local disk is now completely static and safe to copy.
Step 3: The Data Move
With the applications turned off, you must physically move the files from the old local directory to the new network directory. There are a few ways to accomplish this.
- Method A: rsync. The rsync tool is a built-in utility on most Linux systems. It is designed for copying files while perfectly preserving ownership and permissions. A command like rsync -avzP /source/directory/ /destination/directory/ will copy the files, compress data in transit, and show you a progress bar.
- Method B: pv-migrate. If you want a more automated approach, you can use pv-migrate, a specialized Kubernetes command-line tool that automates the migration of data between two storage volumes. When you run it, it spins up temporary “transfer” pods in your cluster, copies the data over the network safely, and then deletes the temporary pods.
- Handling the K3s Data Directory. Sometimes you need to move the actual K3s system data, not just application data. For migrations involving the K3s data directory itself, such as /var/lib/rancher/k3s/storage, you must be extremely careful. You can use the standard mv command to move the data. Afterward, you must create a symbolic link pointing from the old local path to the new network mount point. This tricks the operating system into thinking the data is still local, maintaining path consistency for the cluster.
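The mv-plus-symlink trick can be sketched as follows. This demo uses throwaway paths under /tmp so it can be run harmlessly anywhere; in production you would substitute your real K3s data directory and NAS mount point:

```shell
# Demo of the mv + symlink approach using placeholder paths.
OLD=/tmp/demo-local-storage      # stands in for the old local path
NEW=/tmp/demo-nas-mount/storage  # stands in for the mounted NAS path

mkdir -p "$OLD" "$(dirname "$NEW")"
echo "pvc-data" > "$OLD/sample.txt"  # pretend this is existing PVC data

mv "$OLD" "$NEW"                 # move the data onto the (mounted) NAS path
ln -s "$NEW" "$OLD"              # old path now transparently points at the NAS

cat "$OLD/sample.txt"            # reads through the symlink: prints "pvc-data"
```

Because the old path still resolves, nothing that has the original directory hard-coded needs to change.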
Citation: How to Move K3s Data to a New Location
Step 4: Updating Manifests
Now that the data exists on the new network storage, you must tell your applications where to find it. You need to update your deployment files.
Open your application’s Helm charts or YAML files. Locate the storageClassName field within the Persistent Volume Claim section. Change the name from your old local provisioner to your new network class (like the nfs-client example we created earlier).
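For example, a claim that previously used the local-path provisioner would change to something like this (names and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany              # now possible, since NFS supports it
  storageClassName: nfs-client   # was: local-path
  resources:
    requests:
      storage: 5Gi
```

Be aware that storageClassName is immutable on an existing claim, so in practice you delete the old PVC and recreate it with the new class; the data itself is already safe on the NAS at this point.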
Once you update and apply these manifests, you can scale your deployments back up. Your applications will turn on, connect to the new homelab persistent volumes, and resume normal operations. For a deeper dive into the specific command line tools required for this process, see our complete tutorial on Migrating Kubernetes Persistent Volumes to a NAS.
Handling Specific Stateful Services
Different applications handle data in very different ways. A generic migration approach will not work for every single service. You must tailor your k3s NAS storage strategy to the specific needs of your applications.
Here is how to handle the migration of common stateful services in a home environment.
Gitea
Gitea is a lightweight code hosting solution. When migrating Gitea, you are dealing with two separate types of data: raw Git repository files and a configuration database.
When you move the gitea-data volume to the network storage, file permissions are your biggest hurdle. Gitea containers run under very specific user IDs for security reasons. You must ensure the new k3s NAS storage dataset supports the correct User ID. Usually, the Gitea container runs as UID 1000. Your network storage permissions must map perfectly to UID 1000, or Gitea will not be able to read your code repositories.
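On the cluster side, you can pin the pod to that identity explicitly. A sketch of the relevant securityContext fields; UID 1000 is the common Gitea container default, not a guarantee for your particular image:

```yaml
# Fragment of a Gitea Deployment pod spec (sketch).
spec:
  securityContext:
    runAsUser: 1000    # must match the owner of the files on the NFS share
    runAsGroup: 1000
    fsGroup: 1000      # Kubernetes chowns mounted volumes to this group, where the volume type supports it
```

Keep in mind that fsGroup ownership changes are not applied on plain NFS mounts, which is why the matching server-side user mapping from the earlier NAS configuration steps matters so much.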
JupyterHub
JupyterHub is an interactive computing environment used heavily for data science. The most critical data in this service is the user “home” directories, where people write and compile code.
Compiling code requires reading and writing thousands of tiny files very quickly. This demands high Input/Output Operations Per Second (IOPS). Standard spinning hard drives struggle with high IOPS. If you migrate JupyterHub to a network server, you must ensure the storage array is using solid-state drive (SSD) caching. Adding a fast SSD cache allows the network storage to handle the heavy demands of code compilation without lagging.
Databases (PostgreSQL and MariaDB)
Databases are the most sensitive applications in your entire cluster. You should never simply drag and drop database files across a network while the database is running. Moving raw block data over file-sharing protocols can lead to instant corruption.
Furthermore, if your migration involves moving the core Kubernetes datastore, extra steps are required. If you are changing the etcd datastore endpoint entirely, you must use specific K3s commands to update the cluster configuration safely without breaking the cluster’s brain.
Citation: How to Successfully Migrate Your k3s Data Store Endpoint
The best practice for migrating any database is to perform a logical data dump. Use tools like pg_dump for PostgreSQL to export a clean SQL file of your entire database. Once you have this clean backup file, you can stop the pod and move the volume. If the physical file move fails or corrupts, you can simply spin up a fresh database on the new storage and restore your data perfectly from the logical dump file.
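One way to keep such logical dumps on hand is a small CronJob that writes pg_dump output onto the new network storage. Everything here, the service name, database, credentials secret, claim name, and schedule, is a hypothetical sketch to adapt:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-logical-dump
spec:
  schedule: "0 3 * * *"               # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:16
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgres-credentials  # hypothetical secret
                      key: password
              command: ["/bin/sh", "-c"]
              args:
                - pg_dump -h postgres -U appuser appdb > /backups/appdb-$(date +%F).sql
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: db-backups             # an nfs-client backed claim
```

With a dated dump landing on the NAS every night, a failed physical migration costs you at most a day of database history.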
Verification and Performance Tuning
After completing your data moves and updating your application files, you cannot simply walk away. You must verify that the migration was successful and that your homelab persistent volumes are performing correctly under load.
Verification Steps
First, check the status of your storage claims. Run the command kubectl get pvc -A again. Look at the “STATUS” column. Every claim you migrated should say “Bound.” If a claim says “Pending,” it means the cluster cannot connect to the network storage, and you must check your YAML configurations.
Next, you need to check the application logs. Even if a volume says it is bound, there might be underlying file permission issues. Run kubectl logs <pod-name> for your migrated applications. Scan the output carefully. You are looking specifically for “Read-only file system” errors or “Permission Denied” warnings. If you see these, you must adjust the user mapping on your storage server.
Performance Testing
You need to know if your new network storage is fast enough to handle your daily workloads. You can test the IOPS and bandwidth of your new homelab persistent volumes using a tool called fio (Flexible I/O Tester).
You can deploy a temporary, lightweight Alpine Linux pod that is connected to your new network storage. Once inside that pod, run fio commands to simulate heavy database reads or massive file transfers. This will give you exact speed metrics. If the speeds are too low, you may need to upgrade your network switches or add SSD caching to your storage array.
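A throwaway benchmark pod for this might look like the following sketch; the claim name and fio parameters are placeholders to tune for your workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fio-bench
spec:
  restartPolicy: Never
  containers:
    - name: fio
      image: alpine:3.20
      command: ["/bin/sh", "-c"]
      args:
        - apk add --no-cache fio &&
          fio --name=randrw --directory=/data --rw=randrw
              --bs=4k --size=512m --runtime=60 --time_based
      volumeMounts:
        - name: testvol
          mountPath: /data
  volumes:
    - name: testvol
      persistentVolumeClaim:
        claimName: fio-test        # a claim on your new nfs-client class
```

The 4k random read/write pattern approximates database behavior; check the resulting IOPS figure in the pod's logs with kubectl logs fio-bench, then delete the pod and its claim.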
Ongoing Maintenance
Centralizing your storage changes how you maintain your cluster. You must ensure that future software updates do not break your storage connections. Always refer to the official K3s manual upgrade documentation to ensure that future cluster updates do not accidentally overwrite or break your NAS mount configurations.
Finally, remember that a storage array is not a backup system. RAID protects against hardware failure, but it does not protect against accidental file deletions or ransomware. You must implement the “3-2-1” backup rule. You should have three copies of your data, on two different types of media, with one copy stored offsite.
Even with a robust storage server, you should use native storage snapshot tools or Kubernetes backup solutions like Velero. These tools will package your application data and send it to a secure offsite location, ensuring your hard work is always protected.
Conclusion
Completing a full homelab NAS migration is a true rite of passage for any serious technology hobbyist. Moving away from fragile, node-locked local disks fundamentally changes how your environment operates. It transforms a fragile group of single-board computers into a resilient, professional-grade private cloud.
Centralizing your local storage on a dedicated NAS for homelab use is just the beginning. It is the crucial first step toward exploring advanced enterprise topics like multi-node failover, automated load balancing, and comprehensive disaster recovery. Once your data is safely decoupled from your computing hardware, your cluster can finally scale without limits.
To continue improving your local infrastructure and explore more advanced networking topics, visit our Home page for more comprehensive guides and tutorials.
