Setting Up NFS Shared Storage for Kubernetes in Practice
Author: 九制橘皮茶
This article walks through setting up an NFS server and configuring its clients (installing the tools, preparing the shared directory, and starting the services), then covers deploying NFS dynamic provisioning in Kubernetes: creating the namespace, ServiceAccount, RBAC permissions, and StorageClass, and finally verifying the storage.
1. Setting Up NFS
| NFS server | NFS clients |
|---|---|
| 192.168.48.19 | 192.168.48.0/24 |
1.1 Deploying the NFS Server
NFS (Network File System) is an application built on TCP/IP that lets different machines, running different operating systems, share files with one another over the network.
NFS relies on the RPC service when transferring files or other data.
RPC (Remote Procedure Call) is a mechanism that allows a client to invoke programs running on another system.
An NFS server is essentially a file server: it lets your machine (the client) mount a directory shared by a remote NFS server into its own filesystem over the network.
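If you want to see this RPC dependency for yourself, rpcinfo lists the programs registered with rpcbind; once the NFS server from the next sections is running, you should see nfs and mountd entries (the exact list and version numbers vary by system):
rpcinfo -p 192.168.48.19    # run from any machine that can reach the server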
1.1.1 Installing nfs-utils and rpcbind
yum -y install nfs-utils rpcbind
1.1.2 Creating the Shared Directory
mkdir -p /data/k8s_data
chmod 777 /data/k8s_data
1.1.3 Editing the Exports File
cat > /etc/exports <<'EOF'
/data/k8s_data 192.168.48.0/24(rw,sync,no_root_squash,no_subtree_check)
EOF
Here rw grants read-write access, sync commits writes to disk before replying, no_root_squash keeps root's privileges for root on the client (Kubernetes pods frequently run as root), and no_subtree_check disables subtree checking.
1.1.4 Starting the NFS Server
systemctl start rpcbind        # start rpc
systemctl start nfs-server     # start nfs
exportfs -arv                  # re-export everything in /etc/exports
systemctl enable rpcbind       # enable at boot
systemctl enable nfs-server    # enable at boot
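To double-check that the export took effect with the options you intended, exportfs -v prints the currently exported directories together with their effective flags:
exportfs -v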
1.2 Deploying the NFS Client
yum -y install nfs-utils rpcbind
systemctl start rpcbind        # start rpc
systemctl start nfs-server     # start nfs
systemctl enable rpcbind       # enable at boot
systemctl enable nfs-server    # enable at boot
1.3 Checking That NFS Works
showmount -e 192.168.48.19
Expected output:
[root@master1 k8s-nfs]# showmount -e 192.168.48.19
Export list for 192.168.48.19:
/data/k8s_data 192.168.48.0/24
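Beyond showmount, a quick end-to-end check is to mount the export manually from a client and write a file; the mount point /mnt/nfs-test below is just an illustrative choice:
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.48.19:/data/k8s_data /mnt/nfs-test
touch /mnt/nfs-test/hello && ls -l /mnt/nfs-test
umount /mnt/nfs-test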
2. Deploying NFS Dynamic Provisioning in Kubernetes
2.1 Creating the Namespace
kubectl create namespace nfs-storageclass
2.2 Creating the ServiceAccount and RBAC Permissions
vim nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-storageclass
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-storageclass
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-storageclass
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-storageclass
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-storageclass
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
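This article applies all of the manifests at once in section 3, but if you prefer to check each piece as you go, you can confirm the RBAC objects exist after applying this file:
kubectl apply -f nfs-rbac.yaml
kubectl get sa,role,rolebinding -n nfs-storageclass
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get clusterrolebinding run-nfs-client-provisioner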
2.3 Deploying the NFS Provisioner
Pull the image first (on each node that may run the provisioner pod, or push it to a registry your cluster can pull from):
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
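The commands above assume the nodes still run Docker; on clusters that use containerd directly, a rough equivalent (shown here only as a sketch, run on every node that might schedule the provisioner) is:
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2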
vim nfs-deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: nfs-storageclass
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              # value: <YOUR NFS SERVER HOSTNAME>
              value: 192.168.48.19
            - name: NFS_PATH
              # value: /var/nfs
              value: /data/k8s_data
      volumes:
        - name: nfs-client-root
          nfs:
            # server: <YOUR NFS SERVER HOSTNAME>
            server: 192.168.48.19
            # share nfs path
            path: /data/k8s_data
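After everything is applied (section 3), it is worth confirming that the provisioner pod actually came up; if it sticks in ContainerCreating, the node usually failed to mount the NFS share (check that nfs-utils is installed on every node):
kubectl get pods -n nfs-storageclass
kubectl logs deploy/nfs-client-provisioner -n nfs-storageclass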
2.4 Creating the StorageClass
vim nfs-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  namespace: nfs-storageclass
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  pathPattern: ${.PVC.namespace}/${.PVC.name}
  onDelete: delete
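With pathPattern set this way, every volume the provisioner creates becomes a <namespace>/<pvc-name> subdirectory under /data/k8s_data (without pathPattern the default directory name is ${namespace}-${pvcName}-${pvName}), and onDelete: delete removes that directory when the PVC is deleted. For a dynamically provisioned claim you would therefore see something like the following on the NFS server (illustrative; note that the PVC in the next section binds to a static PV instead, so it does not create a directory here):
ls -R /data/k8s_data/        # e.g. default/<pvc-name>/ for each dynamically provisioned claim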
2.5 Verifying the NFS Storage
2.5.1 Creating a PVC
vim nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
2.5.2 Creating a PV
vim nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  namespace: kube-system
spec:
  capacity:
    storage: 30Gi                        # storage capacity
  accessModes:
    - ReadWriteMany                      # read-write from multiple nodes
  persistentVolumeReclaimPolicy: Retain  # keep the PV data after the PVC is deleted
  storageClassName: nfs-client           # storage class name (can be customized)
  nfs:
    server: 192.168.48.19                # NFS server IP
    path: /data/k8s_data                 # NFS share path
Because this static PV also uses storageClassName nfs-client and satisfies the claim above, the PVC binds to it directly instead of having the provisioner create a new volume; this is exactly what the verification output below shows.
Apply all the YAML files:
kubectl apply -f ./
3. Verification
[root@master1 k8s-nfs]# kubectl get all -n nfs-storageclass
NAME                                         READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-c8b7f495d-b2zpk   1/1     Running   0          64m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   1/1     1            1           82m

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-client-provisioner-c8b7f495d   1         1         1       82m

[root@master1 k8s-nfs]# kubectl get sc
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  83m

[root@master1 k8s-nfs]# kubectl get pvc
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
nfs    Bound    nfs-pv   30Gi       RWX            nfs-client     <unset>                 83m

[root@master1 k8s-nfs]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
nfs-pv   30Gi       RWX            Retain           Bound    default/nfs   nfs-client     <unset>                          84m
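As a final check that pods can really read and write the share, you can run a throwaway pod that mounts the PVC and writes a file. This is only a sketch: the pod name nfs-test-pod and the busybox image are my own illustrative choices, and the file lands directly under /data/k8s_data because the claim is bound to the static PV backed by that path:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
spec:
  restartPolicy: Never
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello-from-k8s > /mnt/test.txt"]
      volumeMounts:
        - name: data
          mountPath: /mnt
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs
EOF
# then, on the NFS server:
cat /data/k8s_data/test.txt
# clean up the test pod afterwards:
kubectl delete pod nfs-test-pod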
Summary
The above reflects my own experience; I hope it serves as a useful reference, and thank you for supporting 脚本之家.
