How to extend the sda partition in Linux
Author: Lyndon1107
This article walks through extending the sda partition on Linux. It is offered as a practical reference; if anything is wrong or incomplete, corrections are welcome.
Docker keeps its images, containers, and other files on the system disk by default, and the 20 GB system disk allocated to the virtual machine is too small, so this article looks at how to expand it.
The Linux system partition is /dev/sda3; we create a new partition, sda4, to add 20 GB of space.
Basic logical volume management (LVM) concepts
PV (Physical Volume)
The physical volume is the lowest layer of LVM. It can be a partition on a physical disk, an entire physical disk, or a RAID device.
VG (Volume Group)
A volume group is built on top of physical volumes. It must contain at least one physical volume, and further physical volumes can be added dynamically after it is created. An LVM setup may have just one volume group or several.
LV (Logical Volume)
A logical volume is built on top of a volume group: unallocated space in the volume group is used to create new logical volumes, which can be grown or shrunk dynamically after creation. The logical volumes on a system may all belong to the same volume group or be spread across different ones. The short sketch after these definitions shows how the three layers map onto commands.
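The three layers correspond directly to the pvcreate, vgcreate, and lvcreate commands. A minimal sketch of the flow used in the rest of this article; the device and volume names here are placeholders, not the ones created below:

# Minimal LVM flow (placeholder names: /dev/sdXN, myvg, mylv)
pvcreate /dev/sdXN               # turn a partition into a physical volume
vgcreate myvg /dev/sdXN          # build a volume group on top of the PV
lvcreate -L 10G -n mylv myvg     # carve a logical volume out of the VG's free space
mkfs -t xfs /dev/myvg/mylv       # put a filesystem on the LV, then mount it as usual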
Allocate a 20 GB disk in the virtual machine
(omitted)
Add the new partition sda4
[root@node1 ~]# fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:
   p   primary (3 primary, 0 extended, 1 free)
   e   extended
Select (default e): p
Selected partition 4
First sector (20971520-62914559, default 20971520):
Using default value 20971520
Last sector, +sectors or +size{K,M,G} (20971520-62914559, default 62914559):
Using default value 62914559
Partition 4 of type Linux and of size 20 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
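As the warning above notes, the kernel is still using the old partition table. Instead of rebooting, you can ask it to re-read the table (the article also runs partprobe a little later); for example:

[root@node1 ~]# partprobe /dev/sda    # re-read the partition table so sda4 appears without a reboot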
Confirm that sda4 has been added
[root@node1 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   30G  0 disk
├─sda1   8:1    0  300M  0 part /boot
├─sda2   8:2    0    1G  0 part
├─sda3   8:3    0  8.7G  0 part /
└─sda4   8:4    0   20G  0 part
sr0     11:0    1 1024M  0 rom
Create the physical volume
[root@node1 ~]# pvcreate /dev/sda4
  Physical volume "/dev/sda4" successfully created.

View the physical volume:
[root@node1 ~]# pvs
  PV         VG  Fmt  Attr PSize  PFree
  /dev/sda4      lvm2 ---  20.00g 20.00g
[root@node1 ~]# pvdisplay
  "/dev/sda4" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sda4
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               dnbfor-Mftv-L8o5-gdbP-sVdj-H6HY-tHW4J8
[root@node1 ~]# partprobe
Create the volume group
[root@node1 ~]# vgcreate vg0 /dev/sda4
  Volume group "vg0" successfully created

# View volume groups
[root@node1 ~]# vgscan
  Reading volume groups from cache.
  Found volume group "vg0" using metadata type lvm2

# For reference only (sda6 does not exist on this host):
[root@node1 ~]# vgextend vg0 /dev/sda6    # add another physical volume to the volume group
[root@node1 ~]# pvmove /dev/sda6          # move the data off that PV first
[root@node1 ~]# vgreduce vg0 /dev/sda6    # then remove the physical volume from vg0
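Before creating a logical volume it is worth checking how much free space the volume group actually has; for example (output will vary):

[root@node1 ~]# vgs vg0                          # VG size and free space at a glance
[root@node1 ~]# vgdisplay vg0 | grep -i free     # free PE / size in more detail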
Create the logical volume (note: without a unit suffix, the -L size is interpreted in MiB)
[root@node1 ~]# lvcreate -L 10G -n lv1 vg0
  Logical volume "lv1" created.
[root@node1 /]# lvscan
  ACTIVE            '/dev/vg0/lv1' [10.00 GiB] inherit
[root@node1 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg0/lv1
  LV Name                lv1
  VG Name                vg0
  LV UUID                o9Qy20-I08o-GtAK-v0y9-DuY5-QlYl-J63IWc
  LV Write Access        read/write
  LV Creation host, time node1, 2021-03-28 21:40:00 +0800
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
Create a filesystem (format the logical volume)
[root@node1 ~]# mkfs -t xfs /dev/vg0/lv1
meta-data=/dev/vg0/lv1           isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
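This article uses XFS. If the logical volume were formatted as ext4 instead, the later grow step would use resize2fs rather than xfs_growfs. A hypothetical alternative, not what is done here:

# Hypothetical alternative: ext4 instead of XFS
[root@node1 ~]# mkfs -t ext4 /dev/vg0/lv1
# ...and after lvextend, grow the ext4 filesystem with:
[root@node1 ~]# resize2fs /dev/vg0/lv1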
Mount the logical volume
[root@node1 ~]# mount /dev/vg0/lv1 /home
[root@node1 ~]# df -h      # before extension: /home is 10G
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda3            8.7G  3.3G  5.5G  38% /
/dev/sda1            297M  118M  180M  40% /boot
/dev/mapper/vg0-lv1   10G   33M   10G   1% /home
tmpfs                1.9G     0  1.9G   0% /sys/fs/cgroup
--------------------------------------------------------------------------------
Extend the logical volume:
[root@node1 ~]# lvextend -L +1G /dev/mapper/vg0-lv1
[root@node1 ~]# resize2fs -f /dev/mapper/vg0-lv1    # only for ext2/3/4; if this has no effect, the filesystem is XFS, so use xfs_growfs below
--------------------------------------------------------------------------------
[root@node1 ~]# xfs_growfs /dev/mapper/vg0-lv1
meta-data=/dev/mapper/vg0-lv1    isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2621440 to 4507648
--------------------------------------------------------------------------------
[root@node1 ~]# df -h      # after extension: /home is 18G
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda3            8.7G  3.3G  5.5G  38% /
devtmpfs             1.9G     0  1.9G   0% /dev
tmpfs                1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1            297M  118M  180M  40% /boot
/dev/mapper/vg0-lv1   18G   33M   18G   1% /home
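Two follow-ups worth noting. The mount above does not persist across reboots, so an /etc/fstab entry is usually added; and if you want the logical volume to absorb all remaining free space in the volume group instead of a fixed +1G, lvextend accepts a percentage. A sketch; the fstab mount options are an assumption:

# Make the /home mount persistent (append to /etc/fstab; options are an example)
/dev/vg0/lv1  /home  xfs  defaults  0 0

# Or grow the LV by all remaining free space in vg0, then grow the XFS filesystem
[root@node1 ~]# lvextend -l +100%FREE /dev/vg0/lv1
[root@node1 ~]# xfs_growfs /home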
Done.
Change Docker's default directory (the key step)
Move Docker's default directory to /home/docker
Migrate Docker's image, container, and other data directories
[root@node1 ~]# systemctl stop docker
[root@node1 ~]# mkdir /home/docker
[root@node1 ~]# cp -R /var/lib/docker/* /home/docker/
[root@node1 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://17tjx23n.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "graph": "/home/docker"
}
# The "graph" entry above is the newly added data directory
[root@node1 ~]# systemctl daemon-reload && systemctl start docker
[root@node1 ~]# ln -s /usr/libexec/docker/docker-runc-current docker-runc

Delete the original files:
[root@node1 ~]# rm -rf /var/lib/docker
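Note that the "graph" key is deprecated in newer Docker Engine releases in favor of "data-root". On a recent Docker the equivalent configuration, keeping the same mirror and cgroup settings, would look like this:

[root@node1 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://17tjx23n.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "data-root": "/home/docker"
}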
With the old files removed, /var/lib/docker no longer exists, and the images and containers now show up under the new directory, /home/docker:
[root@node1 ~]# df -Th
Filesystem           Type      Size  Used Avail Use% Mounted on
/dev/sda3            xfs       8.7G  2.7G  6.1G  31% /
devtmpfs             devtmpfs  1.4G     0  1.4G   0% /dev
tmpfs                tmpfs     1.4G     0  1.4G   0% /dev/shm
tmpfs                tmpfs     1.4G  9.3M  1.4G   1% /run
tmpfs                tmpfs     1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/sda1            xfs       297M  118M  180M  40% /boot
/dev/mapper/vg0-lv1  xfs        20G  831M   20G   5% /home
tmpfs                tmpfs     1.4G   12K  1.4G   1% /var/lib/kubelet/pods/9bb01287-42bd-4d81-9e24-4fd1b2953df1/volumes/kubernetes.io~secret/default-token-2l58c
tmpfs                tmpfs     1.4G   12K  1.4G   1% /var/lib/kubelet/pods/dcbaf00d-64f2-4c41-8d94-78e919a84db3/volumes/kubernetes.io~secret/flannel-token-k794f
tmpfs                tmpfs     1.4G   12K  1.4G   1% /var/lib/kubelet/pods/9e616d2b-6328-4edf-bf03-8666fb094871/volumes/kubernetes.io~secret/kube-proxy-token-78wp7
tmpfs                tmpfs     284M     0  284M   0% /run/user/0
overlay              overlay    20G  831M   20G   5% /home/docker/overlay2/136ee7e8387889f29b5d8b348d6efbad849d8c961fa65eb3d0777a002daac01b/merged
overlay              overlay    20G  831M   20G   5% /home/docker/overlay2/0ae46f788b0344c898b2b05c00ad644e5abc2d5629048b76393bf53e93bc5173/merged
overlay              overlay    20G  831M   20G   5% /home/docker/overlay2/c9fdd6a853ebb74d6aa5d6a7116b57ba4152c9ae6fcd50c1e5c75198961a49ba/merged
shm                  tmpfs      64M     0   64M   0% /home/docker/containers/3f778588c1026b13831ca20b83ebb2049660ed2d275ad869cace012b1322bea8/mounts/shm
shm                  tmpfs      64M     0   64M   0% /home/docker/containers/334ed06000ed551253d46f7fb93fe96ff3fbd7a1beddc9f6dbbf83228dce6b1b/mounts/shm
shm                  tmpfs      64M     0   64M   0% /home/docker/containers/b8644ce2df31395ca9d985cf1c27ed000ee0aff164d0b9bad33738d261e47094/mounts/shm
overlay              overlay    20G  831M   20G   5% /home/docker/overlay2/a4723f525ef18097008ca12f6dad08c40a93a240c4fbf3c278333a70ca14634f/merged
overlay              overlay    20G  831M   20G   5% /home/docker/overlay2/4fa16c90c3201c2766dd99b2aa0153f9d7827e57869e1d07b307ab24afa0e7c3/merged
overlay              overlay    20G  831M   20G   5% /home/docker/overlay2/746c68cd3386c7f246439bc7cfb1a1f13171b83b8596d6e907c7147bcf88d159/merged
Summary
The above is based on personal experience. I hope it serves as a useful reference, and I hope you will continue to support 脚本之家.