We will set up an NFS server and use it as remote storage for our cluster, so that we can create as many persistent volumes as we need in our local infrastructure!
We assume that we have four Ubuntu 20.04 LTS machines, that Kubernetes is already installed, and that the 4n6nk8s-nfs host is on the same network as the cluster:
| Role | Hostname | IP address |
|---|---|---|
| Master | 4n6nk8s-master | 192.168.1.18/24 |
| Worker | 4n6nk8s-worker1 | 192.168.1.19/24 |
| Worker | 4n6nk8s-worker2 | 192.168.1.20/24 |
| NFS Server | 4n6nk8s-nfs | 192.168.1.80/24 |
# What is an NFS (Network File System) Server:
Network File System (NFS) is a networking protocol for distributed file sharing. A file system defines the way data, in the form of files, is stored on and retrieved from storage devices such as hard disk drives, solid-state drives, and tape drives; NFS extends this idea across the network, defining how files are stored and retrieved from storage devices on remote machines.
This distributed file system protocol allows a user on a client computer to access files over a network the same way they would access files on local storage.
# Setting up the NFS server
We need to install the `nfs-kernel-server` package on the NFS server. Installing it pulls in additional packages such as `nfs-common` and `rpcbind`.
```
4n6nk8s@4n6nk8s-nfs:~$ sudo apt install nfs-kernel-server
```
Now let’s create an NFS export directory:
```
4n6nk8s@4n6nk8s-nfs:~$ sudo mkdir /mnt/nfs-data
```
Now let’s give read, write, and execute permissions to everyone on the directory and all of its contents:
```
4n6nk8s@4n6nk8s-nfs:~$ sudo chmod 777 /mnt/nfs-data
```
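As a side note, mode 777 grants read, write, and execute to the owner, the group, and everyone else. A quick local illustration, using a throwaway directory rather than the NFS export itself:

```shell
# Throwaway directory just to illustrate the permission bits.
mkdir -p /tmp/nfs-data-demo
chmod 777 /tmp/nfs-data-demo
# %a prints the octal mode: 777 = rwx for owner, group, and others.
stat -c '%a' /tmp/nfs-data-demo
```

Keep in mind that 777 is wide open; on a server exposed to others you may prefer to `chown` the export to `nobody:nogroup` and use a tighter mode.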
Now let’s add a new line to the `/etc/exports` configuration file. The `/etc/exports` file lists all directories that an NFS server exports to its clients; each line in the file specifies a single exported directory.
```
4n6nk8s@4n6nk8s-nfs:~$ sudo vim /etc/exports
```
You can provide access to a single client, multiple clients, or specify an entire subnet. In this guide, we have allowed an entire subnet to have access to the NFS share.
```
/mnt/nfs-data 192.168.1.0/24(rw,sync,no_subtree_check)
```
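For comparison, a single client or an explicit list of clients would look like this in `/etc/exports` (the IPs below are illustrative, taken from our worker nodes):

```
# A single client:
/mnt/nfs-data 192.168.1.19(rw,sync,no_subtree_check)
# Multiple clients, one clause per host:
/mnt/nfs-data 192.168.1.19(rw,sync,no_subtree_check) 192.168.1.20(rw,sync,no_subtree_check)
```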
After granting access to the subnet, let’s export the NFS share directory and restart the NFS service so the new export takes effect:
```
4n6nk8s@4n6nk8s-nfs:~$ sudo exportfs -a
4n6nk8s@4n6nk8s-nfs:~$ sudo systemctl restart nfs-kernel-server
```
Let’s allow NFS access through the firewall for our subnet:
```
4n6nk8s@4n6nk8s-nfs:~$ sudo ufw allow from 192.168.1.0/24 to any port nfs
```
# Install the NFS Client on the Kubernetes Nodes
We must install the `nfs-common` package to access the NFS share, so let’s install it by running the following command on each node:
```
4n6nk8s@4n6nk8s-worker1:~$ sudo apt install nfs-common
```
This command mounts the NFS share on one node, purely as a sanity check. The mount is not a mandatory step; it is only for testing, so you can skip ahead to the next section.
```
4n6nk8s@4n6nk8s-worker1:~$ sudo mount 4n6nk8s-nfs:/mnt/nfs-data /mnt
```
Let’s create a file for testing (any file name will do):
```
4n6nk8s@4n6nk8s-worker1:~$ cd /mnt
4n6nk8s@4n6nk8s-worker1:/mnt$ touch test-file   # the file name here is just an example
```
Check the `/mnt/nfs-data` directory on the NFS server; the test file created from the worker should be there:
```
4n6nk8s@4n6nk8s-nfs:~$ cd /mnt/nfs-data
4n6nk8s@4n6nk8s-nfs:/mnt/nfs-data$ ls
```
# Kubernetes with NFS remote Storage demo
After setting up the NFS server and installing the NFS client on the Kubernetes nodes, it’s time to practice with Persistent Volumes and Persistent Volume Claims backed by NFS storage.
# Create a Persistent Volume with NFS
Example of a Persistent Volume manifest using `nfs`, saved as nfs-pv.yaml (the volume name, capacity, and access mode below are example values):
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi              # example capacity
  accessModes:
    - ReadWriteMany           # NFS allows mounting from several nodes at once
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.80      # the 4n6nk8s-nfs host
    path: /mnt/nfs-data       # the exported directory
```
Make sure to put the correct IP address of the NFS server and the correct NFS share path!
Create the persistent volume using kubectl:
```
4n6nk8s@4n6nk8s-master:~$ kubectl apply -f nfs-pv.yaml
```
List the Persistent Volumes to confirm the creation:
```
4n6nk8s@4n6nk8s-master:~$ kubectl get pv
```
# Create a Persistent Volume Claim with NFS
Example of a Persistent Volume Claim manifest using `nfs`, saved as nfs-pvc.yaml (the claim name and requested size are example values):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany       # must match the Persistent Volume
  storageClassName: ""    # empty string: bind to a statically provisioned PV
  resources:
    requests:
      storage: 5Gi        # must not exceed the PV capacity
```
Create the persistent volume claim using kubectl:
```
4n6nk8s@4n6nk8s-master:~$ kubectl apply -f nfs-pvc.yaml
```
List the Persistent Volume Claims to confirm the creation:
```
4n6nk8s@4n6nk8s-master:~$ kubectl get pvc
```
# Create Nginx Deployment
We use the `volumeMounts` and `volumes` attributes in this manifest to consume the persistent volume claim we created. Saved as nginx-deployment.yaml (the labels and replica count are example values):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                 # example replica count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: nfs-volume
              mountPath: /usr/share/nginx/html   # nginx web root
      volumes:
        - name: nfs-volume
          persistentVolumeClaim:
            claimName: nfs-pvc                   # the PVC created above
```
Deploy the `nginx-deployment.yaml` using `kubectl apply -f`:
```
4n6nk8s@4n6nk8s-master:~$ kubectl apply -f nginx-deployment.yaml
```
Make sure that the deployment was created without any problems!
```
4n6nk8s@4n6nk8s-master:~$ kubectl get pods
```
# Sanity Check (Testing the NFS volumes):
Let’s get a shell on one of the running containers, go to the mount point, then create a file!
```
4n6nk8s@4n6nk8s-master:~$ kubectl exec --stdin --tty nginx-deployment-7976956b49-fgbb4 -- /bin/bash
```
Now the shell is open. Let’s move to `/usr/share/nginx/html`:
```
root@nginx-deployment-7976956b49-fgbb4:/# cd /usr/share/nginx/html/
```
We find the file created earlier during the client test!
```
root@nginx-deployment-7976956b49-fgbb4:/usr/share/nginx/html# ls
```
Create a file named “hi from the other side!”
```
root@nginx-deployment-7976956b49-fgbb4:/usr/share/nginx/html# touch "hi from the other side!"
```
Let’s open another shell on another running container:
```
4n6nk8s@4n6nk8s-master:~$ kubectl exec --stdin --tty nginx-deployment-7976956b49-kg5tx -- /bin/bash
```
Bingo! We find the same content at the same share point!
```
root@nginx-deployment-7976956b49-kg5tx:/usr/share/nginx/html# ls
```
Now let’s delete the deployment and recreate it, to check that the data in the persistent volume survives:
```
4n6nk8s@4n6nk8s-master:~$ kubectl delete -f nginx-deployment.yaml
4n6nk8s@4n6nk8s-master:~$ kubectl apply -f nginx-deployment.yaml
```
Check that the new deployment’s pods are running:
```
4n6nk8s@4n6nk8s-master:~$ kubectl get pods
```
Open another shell on a running container from the new deployment to check the content of the persistent volume:
```
4n6nk8s@4n6nk8s-master:~$ kubectl exec --stdin --tty nginx-deployment-7976956b49-7d5vw -- /bin/bash
```
Display the content of the mount point `/usr/share/nginx/html/`:
```
root@nginx-deployment-7976956b49-7d5vw:/usr/share/nginx/html# ls
```
Bingo! The content is still in the persistent volume without any problem!