Quickly Deploying Ceph Distributed Storage on CentOS 7 with ceph-deploy
Overview
As OpenStack has become the standard software stack for open-source cloud computing, Ceph has become its preferred backend storage. Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.
Official Ceph documentation: http://docs.ceph.org.cn/
Ceph Chinese open-source community: http://ceph.org.cn/
Ceph is an open-source distributed file system. Because it also supports block storage and object storage, it is a natural fit as the storage backend for cloud computing frameworks such as OpenStack or CloudStack. It can also be deployed on its own, for example as a cluster providing object storage, SAN storage, or NAS storage.
Ceph provides three types of storage:
1. Object storage: radosgw, with an S3-compatible interface; files are uploaded and downloaded through a REST API.
2. File system: a POSIX interface; the Ceph cluster can be mounted locally as if it were a shared file system.
3. Block storage: RBD, usable through either kernel rbd or librbd, with support for snapshots and clones. It behaves like a disk attached to the local machine and is used the same way; in OpenStack, for example, Ceph block devices can serve as the backend storage.
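As a rough illustration of the block-storage path, here is a minimal sketch (hypothetical pool and image names; it assumes a healthy cluster, admin credentials on the node, and the rbd kernel module — older kernels may require disabling newer image features before mapping):
ceph osd pool create rbd 64          # create a pool for RBD images (64 PGs)
rbd pool init rbd                    # initialize the pool for RBD use
rbd create disk01 --size 4096        # create a 4 GiB image in the rbd pool
sudo rbd map disk01                  # map it as a local block device, e.g. /dev/rbd0
sudo mkfs.xfs /dev/rbd0 && sudo mount /dev/rbd0 /mnt    # then use it like an ordinary disk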
What advantages does Ceph have over other distributed storage systems?
1. Unified storage
Although Ceph's underlying layer is a distributed file system, object and block interfaces have been developed on top of it, so among open-source storage software it can cover all three use cases in one system; whether it will stay on top forever is anyone's guess.
2. High scalability
Easy to scale out and huge in capacity: it can manage thousands of servers and exabyte-level storage.
3. Strong reliability
It supports multiple strongly consistent replicas as well as erasure coding (EC); replicas can be placed across hosts, racks, rooms, and data centers, so data is safe and reliable. Storage nodes are self-managing and self-healing, there is no single point of failure, and fault tolerance is high.
4. High performance
Because data has multiple replicas, reads and writes can be highly parallelized; in theory, the more nodes, the higher the aggregate IOPS and throughput of the cluster. In addition, Ceph clients read and write data by talking directly to the storage devices (OSDs).
Ceph components:
•Ceph OSDs: a Ceph OSD daemon stores data and handles replication, recovery, backfilling, and rebalancing; it also supplies monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons. When the storage cluster is configured with 2 replicas, at least 2 OSD daemons are needed for the cluster to reach the active+clean state (Ceph defaults to 3 replicas, and the replica count is adjustable).
•Monitors: a Ceph Monitor maintains the maps describing the cluster state, including the monitor map, the OSD map, the placement group (PG) map, and the CRUSH map. Ceph keeps a history of every state change of the Monitors, OSDs, and PGs (each version is called an epoch).
•MDSs: a Ceph Metadata Server (MDS) stores metadata for the Ceph file system (Ceph block devices and Ceph object storage do not use an MDS). Metadata servers let POSIX file system users run basic commands such as ls and find without putting load on the Ceph storage cluster.
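Once a cluster is up, the state of each of these components can be checked with the stock ceph CLI; for example:
ceph mon stat    # monitor quorum
ceph osd stat    # number of OSDs and their up/in state
ceph mds stat    # metadata server state (only meaningful once CephFS is in use)
ceph pg stat     # placement group summary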
Ceph Cluster Deployment
Environment
IP | Hostname | Role
---|---|---
192.168.200.116 | ceph-admin (ceph-deploy) | mds1, mon1 (the monitor could also be placed on a separate machine)
192.168.200.117 | ceph-node1 | osd1
192.168.200.118 | ceph-node2 | osd2
192.168.200.119 | ceph-node3 | osd3
Set the hostname on each node
[root@ceph-admin ~]# hostnamectl set-hostname ceph-admin
[root@ceph-node1 ~]# hostnamectl set-hostname ceph-node1
[root@ceph-node2 ~]# hostnamectl set-hostname ceph-node2
[root@ceph-node3 ~]# hostnamectl set-hostname ceph-node3
Add the hostname mappings to /etc/hosts on each node
[root@ceph-admin ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.116 ceph-admin
192.168.200.117 ceph-node1
192.168.200.118 ceph-node2
192.168.200.119 ceph-node3
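The same /etc/hosts entries are needed on every node; one way to push the file out from ceph-admin (a sketch, assuming root SSH between the nodes is still password-based at this point):
[root@ceph-admin ~]# for node in ceph-node1 ceph-node2 ceph-node3; do scp /etc/hosts root@${node}:/etc/hosts; done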
Disable the firewall and SELinux
[root@ceph-admin ~]# systemctl stop firewalld
[root@ceph-admin ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@ceph-admin ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@ceph-admin ~]# setenforce 0
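Disabling firewalld keeps this lab setup simple. If the firewall has to stay on, opening the Ceph ports instead should also work (monitors listen on 6789/tcp, the other daemons on 6800-7300/tcp) — a sketch, not used in this walkthrough:
[root@ceph-admin ~]# firewall-cmd --permanent --add-port=6789/tcp
[root@ceph-admin ~]# firewall-cmd --permanent --add-port=6800-7300/tcp
[root@ceph-admin ~]# firewall-cmd --reload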
Configure time synchronization on each node
[root@ceph-admin ~]# yum install ntp ntpdate ntp-doc -y
[root@ceph-admin ~]# systemctl restart ntpd
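It is also worth enabling ntpd at boot and verifying that the clocks actually sync, since Ceph monitors are sensitive to clock skew; a quick check:
[root@ceph-admin ~]# systemctl enable ntpd
[root@ceph-admin ~]# ntpq -p    # the selected time source should be marked with '*'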
Prepare the yum repositories on each node
[root@ceph-admin ~]# yum clean all
[root@ceph-admin ~]# mkdir /mnt/bak && mv /etc/yum.repos.d/* /mnt/bak/
#Download the Aliyun base and epel repos
[root@ceph-admin ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@ceph-admin ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
#Add the Ceph repo
[root@ceph-admin ~]# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
gpgcheck=0
priority=1
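The same three repo files are needed on ceph-node1/2/3. Rather than repeating the steps by hand, they can be copied over from ceph-admin (a sketch, assuming root SSH access to the other nodes):
[root@ceph-admin ~]# for node in ceph-node1 ceph-node2 ceph-node3; do
>   scp /etc/yum.repos.d/{CentOS-Base.repo,epel.repo,ceph.repo} root@${node}:/etc/yum.repos.d/
>   ssh root@${node} "yum clean all && yum makecache"
> done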
Create the cephuser user and grant it sudo privileges on each node
[root@ceph-admin ~]# useradd -d /home/cephuser -m cephuser
[root@ceph-admin ~]# echo "cephuser"|passwd --stdin cephuser
Changing password for user cephuser.
passwd: all authentication tokens updated successfully.
[root@ceph-admin ~]# echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
cephuser ALL = (root) NOPASSWD:ALL
[root@ceph-admin ~]# chmod 0440 /etc/sudoers.d/cephuser
[root@ceph-admin ~]# sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
Set up passwordless SSH login
[root@ceph-admin ~]# su - cephuser
[cephuser@ceph-admin ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephuser/.ssh/id_rsa):
Created directory '/home/cephuser/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephuser/.ssh/id_rsa.
Your public key has been saved in /home/cephuser/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:5tK8jylNdHtZwQzun6tkXM+6kI4wcFGXeV/OaQ1l1+E cephuser@ceph-admin
The key's randomart image is:
+---[RSA 2048]----+
| . o* o*|
| . oo B.+|
| . .. E+|
| .... .o=|
| ..S. ..oo |
| B. ..o+ + |
| .o* .* o o|
| ...* = . o |
| .+.o o.+. |
+----[SHA256]-----+
[cephuser@ceph-admin ~]$ cd .ssh/
[cephuser@ceph-admin .ssh]$ ls
id_rsa id_rsa.pub
[cephuser@ceph-admin .ssh]$ cp id_rsa.pub authorized_keys
[cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node1:/home/cephuser/
[cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node2:/home/cephuser/
[cephuser@ceph-admin .ssh]$ scp -r /home/cephuser/.ssh ceph-node3:/home/cephuser/
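Note that copying the whole .ssh directory also puts the private key on every node. An alternative that only installs the public key is ssh-copy-id (each run prompts once for the cephuser password on the target node):
[cephuser@ceph-admin .ssh]$ for node in ceph-node1 ceph-node2 ceph-node3; do ssh-copy-id cephuser@${node}; done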
Prepare the disks
(on the three OSD nodes: ceph-node1, ceph-node2 and ceph-node3)
#Check the disk
[cephuser@ceph-admin ~]$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
#Format the disk
[cephuser@ceph-admin ~]$ sudo mkfs.xfs /dev/sdb -f
#Check the filesystem type (should report xfs)
[cephuser@ceph-admin ~]$ sudo blkid -o value -s TYPE /dev/sdb
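The prompts above were captured on ceph-admin, but the target of these commands is /dev/sdb on each of the three OSD nodes. With passwordless SSH already in place, the checks can be driven from ceph-admin in one loop (a sketch, assuming /dev/sdb is the spare disk on every node):
[cephuser@ceph-admin ~]$ for node in ceph-node1 ceph-node2 ceph-node3; do ssh ${node} "sudo mkfs.xfs -f /dev/sdb && sudo blkid -o value -s TYPE /dev/sdb"; done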
Deployment
(performed on the ceph-admin node with ceph-deploy)
[root@ceph-admin cephuser]# su - cephuser
#Install ceph-deploy
[cephuser@ceph-admin ~]$ sudo yum update -y && sudo yum install ceph-deploy -y
# Create the cluster directory
[cephuser@ceph-admin ~]$ mkdir cluster
[cephuser@ceph-admin ~]$ cd cluster/
#Create the cluster (list the monitor node hostnames here; in this setup the monitor node and the admin node are the same machine, ceph-admin)
[cephuser@ceph-admin cluster]$ ceph-deploy new ceph-admin
......
[ceph_deploy.new][DEBUG ] Resolving host ceph-admin
[ceph_deploy.new][DEBUG ] Monitor ceph-admin at 192.168.200.116
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-admin']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.200.116']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
#If this step fails with an error, install python-pip first
[cephuser@ceph-admin cluster]$ sudo yum -y install python-pip
#Edit the ceph.conf file (note: mon_host must be inside the public network subnet!)
[cephuser@ceph-admin cluster]$ vim ceph.conf
...... #append the following two lines
public network = 192.168.200.116/24
osd pool default size = 3
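After the edit, ceph.conf should look roughly like this (the fsid is generated by ceph-deploy new and will differ per cluster):
[global]
fsid = 2001873c-3805-4938-bc9c-67a1f414bf68
mon_initial_members = ceph-admin
mon_host = 192.168.200.116
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.200.116/24
osd pool default size = 3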
Install Ceph
(this step takes a while; be patient....)
[cephuser@ceph-admin cluster]$ ceph-deploy install ceph-admin ceph-node1 ceph-node2 ceph-node3 --adjust-repos
...
[ceph-node3][DEBUG ]
[ceph-node3][DEBUG ] Complete!
[ceph-node3][INFO ] Running command: sudo ceph --version
[ceph-node3][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
Initialize the monitor node and gather all keys
[cephuser@ceph-admin cluster]$ ceph-deploy mon create-initial
[cephuser@ceph-admin cluster]$ ceph-deploy gatherkeys ceph-admin
Add OSDs to the cluster
List all usable disks on the OSD nodes
[cephuser@ceph-admin cluster]$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f2a6979c830>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : ['ceph-node1', 'ceph-node2', 'ceph-node3']
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f2a699e2d70>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: sudo fdisk -l
[ceph-node2][DEBUG ] connection detected need for sudo
[ceph-node2][DEBUG ] connected to host: ceph-node2
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph-node2][DEBUG ] find the location of an executable
[ceph-node2][INFO ] Running command: sudo fdisk -l
[ceph-node3][DEBUG ] connection detected need for sudo
[ceph-node3][DEBUG ] connected to host: ceph-node3
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph-node3][DEBUG ] find the location of an executable
[ceph-node3][INFO ] Running command: sudo fdisk -l
Use the zap option to wipe the partitions on each OSD node's data disk (with ceph-deploy 2.x the host and device are given separately, one host per invocation)
[cephuser@ceph-admin cluster]$ ceph-deploy disk zap ceph-node1 /dev/sdb
[cephuser@ceph-admin cluster]$ ceph-deploy disk zap ceph-node2 /dev/sdb
[cephuser@ceph-admin cluster]$ ceph-deploy disk zap ceph-node3 /dev/sdb
Prepare and activate the OSDs
[cephuser@ceph-admin cluster]$ ceph-deploy osd create --data /dev/sdb ceph-node1
[cephuser@ceph-admin cluster]$ ceph-deploy osd create --data /dev/sdb ceph-node2
[cephuser@ceph-admin cluster]$ ceph-deploy osd create --data /dev/sdb ceph-node3
#The following error may appear:
[ceph-node3][WARNIN] [--dmcrypt] [--no-systemd]
[ceph-node3][WARNIN] ceph-volume lvm create: error: Unable to proceed with non-existing device: /dev/vdb
[ceph-node3][ERROR ] RuntimeError: command returned non-zero exit status: 2
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
In this case, however, the error did not affect the Ceph deployment; lsblk on the three OSD nodes shows that /dev/sdb has been successfully turned into a Ceph LVM volume on each of them:
[root@ceph-node1 ~]# lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 40G 0 disk
sda1 8:1 0 1G 0 part /boot
sda2 8:2 0 39G 0 part
centos-root 253:0 0 35G 0 lvm /
centos-swap 253:1 0 4G 0 lvm [SWAP]
sdb 8:16 0 50G 0 disk
ceph--9534c57e--dc24--46fa--bbc6--aef465c0cb79-osd--block--60b6029b--c400--4c28--917e--db0292122c9e 253:2 0 50G 0 lvm
sr0 11:0 1 1024M 0 rom
[root@ceph-node2 ~]# lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 40G 0 disk
sda1 8:1 0 1G 0 part /boot
sda2 8:2 0 39G 0 part
centos-root 253:0 0 35G 0 lvm /
centos-swap 253:1 0 4G 0 lvm [SWAP]
sdb 8:16 0 50G 0 disk
ceph--34e11c7e--6b2e--4404--b784--c314c627a18a-osd--block--9c5d5378--4f61--476a--87ab--24a4d78ade68 253:2 0 50G 0 lvm
sr0 11:0 1 1024M 0 rom
[root@ceph-node3 ~]# lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 40G 0 disk
sda1 8:1 0 1G 0 part /boot
sda2 8:2 0 39G 0 part
centos-root 253:0 0 35G 0 lvm /
centos-swap 253:1 0 4G 0 lvm [SWAP]
sdb 8:16 0 50G 0 disk
ceph--9571ca2c--a8bc--4688--b08e--0f85147ec7e3-osd--block--4bb09715--4904--41ee--a4ea--9ef0216703d3 253:2 0 50G 0 lvm
sr0
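If an OSD really does fail to create, the usual causes are a wrong device name or leftover metadata on the disk; checking the device and zapping it on that node before retrying is the standard fix (a sketch; adjust the device name to whatever lsblk reports):
[root@ceph-node3 ~]# lsblk    # confirm whether the spare disk is really sdb or vdb
[root@ceph-node3 ~]# ceph-volume lvm zap /dev/sdb --destroy    # wipe LVM/partition metadata from earlier attempts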
List the OSDs
[cephuser@ceph-admin cluster]$ ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
Use ceph-deploy to copy the configuration file and admin key to the admin node and the Ceph nodes, so that you no longer need to specify the monitor address and ceph.client.admin.keyring every time you run a Ceph command
[cephuser@ceph-admin cluster]$ ceph-deploy admin ceph-admin ceph-node1 ceph-node2 ceph-node3
#Relax the keyring permissions
[cephuser@ceph-admin cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
Check the Ceph health
[cephuser@ceph-admin cluster]$ sudo ceph health
HEALTH_WARN no active mgr
#Fix: deploy mgr daemons (none have been created yet)
[cephuser@ceph-admin cluster]$ ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3
[cephuser@ceph-admin cluster]$ sudo ceph health
HEALTH_OK
[cephuser@ceph-admin cluster]$ sudo ceph -s
cluster:
id: 2001873c-3805-4938-bc9c-67a1f414bf68
health: HEALTH_OK
services:
mon: 1 daemons, quorum ceph-admin
mgr: 192.168.200.117(active), standbys: 192.168.200.118, 192.168.200.119
osd: 3 osds: 3 up, 3 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 147 GiB / 150 GiB avail
pgs:
Reference for this fix: http://blog.itpub.net/25854343/viewspace-2642445/
Check the OSD status
[cephuser@ceph-admin cluster]$ ceph osd stat
3 osds: 3 up, 3 in; epoch: e13
View the OSD tree
[cephuser@ceph-admin cluster]$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.14639 root default
-3 0.04880 host ceph-node1
0 ssd 0.04880 osd.0 up 1.00000 1.00000
-5 0.04880 host ceph-node2
1 ssd 0.04880 osd.1 up 1.00000 1.00000
-7 0.04880 host ceph-node3
2 ssd 0.04880 osd.2 up 1.00000 1.00000
Check the service on the monitor node
[cephuser@ceph-admin cluster]$ sudo systemctl status ceph-mon@ceph-admin
● ceph-mon@ceph-admin.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-06-01 21:54:50 CST; 38min ago
Main PID: 86353 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph-admin.service
└─86353 /usr/bin/ceph-mon -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph
Jun 01 21:54:50 ceph-admin systemd[1]: Started Ceph cluster monitor daemon.
Jun 01 22:08:46 ceph-admin ceph-mon[86353]: 2020-06-01 22:08:46.051 7fe14e59a700 -1 log_channel(cluster) log [ERR] : Health check failed: no active mgr (MGR_DOWN)
[cephuser@ceph-admin cluster]$ ps -ef|grep ceph|grep 'cluster'
ceph 86353 1 0 21:54 ? 00:00:02 /usr/bin/ceph-mon -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph
cephuser 87412 87251 0 22:33 pts/1 00:00:00 grep --color=auto cluster
Checking the OSD service on ceph-node1, ceph-node2 and ceph-node3 in the same way shows that the OSD daemons are already running
Create the file system
Check the metadata server (MDS) status; by default there is none
[cephuser@ceph-admin cluster]$ ceph mds stat
Create the metadata server
ceph-admin will act as the MDS node
Note: without an MDS, clients will not be able to mount the Ceph cluster!
[cephuser@ceph-admin ~]$ pwd
/home/cephuser
[cephuser@ceph-admin ~]$ cd cluster/
[cephuser@ceph-admin cluster]$ ceph-deploy mds create ceph-admin
#Check the MDS status again; the daemon is now up
[cephuser@ceph-admin cluster]$ ceph mds stat
, 1 up:standby
[cephuser@ceph-admin cluster]$ sudo systemctl status ceph-mds@ceph-admin
● ceph-mds@ceph-admin.service - Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-06-01 22:36:47 CST; 44s ago
Main PID: 87508 (ceph-mds)
CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph-admin.service
└─87508 /usr/bin/ceph-mds -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph
Jun 01 22:36:47 ceph-admin systemd[1]: Started Ceph metadata server daemon.
6月 01 22:36:47 ceph-admin ceph-mds[87508]: starting mds.ceph-admin at -
[cephuser@ceph-admin cluster]$ ps -ef|grep cluster|grep ceph-mds
ceph 87508 1 0 22:36 ? 00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id ceph-admin --setuser ceph --setgroup ceph
Create pools
A pool is the logical partition in which Ceph stores data; it acts as a namespace
[cephuser@ceph-admin cluster]$ ceph osd lspools #list the existing pools first
0 rbd
[cephuser@ceph-admin cluster]$ ceph osd pool create cephfs_data 10 #the trailing number is the PG count
pool 'cephfs_data' created
[cephuser@ceph-admin cluster]$ ceph osd pool create cephfs_metadata 10 #create the metadata pool for the file system
pool 'cephfs_metadata' created
[cephuser@ceph-admin cluster]$ ceph fs new myceph cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
#check the pools again
[cephuser@ceph-admin cluster]$ ceph osd lspools
1 cephfs_data
2 cephfs_metadata
#check the MDS status
[cephuser@ceph-admin cluster]$ ceph mds stat
myceph-1/1/1 up {0=ceph-admin=up:active}
Check the Ceph cluster status
[cephuser@ceph-admin cluster]$ sudo ceph -s
cluster:
id: 2001873c-3805-4938-bc9c-67a1f414bf68
health: HEALTH_WARN
too few PGs per OSD (20 < min 30)
services:
mon: 1 daemons, quorum ceph-admin
mgr: 192.168.200.117(active), standbys: 192.168.200.118, 192.168.200.119
mds: myceph-1/1/1 up {0=ceph-admin=up:active} #this line is new
osd: 3 osds: 3 up, 3 in
data:
pools: 2 pools, 20 pgs
objects: 22 objects, 2.2 KiB
usage: 3.0 GiB used, 147 GiB / 150 GiB avail
pgs: 20 active+clean
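The "too few PGs per OSD" warning is a direct consequence of the PG counts chosen above: 2 pools x 10 PGs x 3 replicas spread over 3 OSDs gives only 20 PGs per OSD, below the default minimum of 30. A common rule of thumb is (number of OSDs x 100) / replica count, rounded to a power of two. One way to clear the warning is to raise the data pool's PG count (note that pg_num can only ever be increased, not decreased):
[cephuser@ceph-admin cluster]$ ceph osd pool set cephfs_data pg_num 32
[cephuser@ceph-admin cluster]$ ceph osd pool set cephfs_data pgp_num 32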
Check the cluster's listening port
[cephuser@ceph-admin cluster]$ sudo lsof -i:6789
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ceph-mon 28190 ceph 10u IPv4 70217 0t0 TCP ceph-admin:smc-https (LISTEN)
ceph-mon 28190 ceph 19u IPv4 70537 0t0 TCP ceph-admin:smc-https->ceph-node1:41308 (ESTABLISHED)
ceph-mon 28190 ceph 20u IPv4 70560 0t0 TCP ceph-admin:smc-https->ceph-node2:48516 (ESTABLISHED)
ceph-mon 28190 ceph 21u IPv4 70583 0t0 TCP ceph-admin:smc-https->ceph-node3:44948 (ESTABLISHED)
ceph-mon 28190 ceph 22u IPv4 72643 0t0 TCP ceph-admin:smc-https->ceph-admin:51474 (ESTABLISHED)
ceph-mds 29093 ceph 8u IPv4 72642 0t0 TCP ceph-admin:51474->ceph-admin:smc-https (ESTABLISHED)
Mount the Ceph storage on a client
Using the FUSE client
## Install ceph-fuse
[root@k8s-op-nfs ~]# vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=2
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=2
[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=2
[root@k8s-op-nfs ~]# yum makecache
[root@k8s-op-nfs ~]# yum install -y ceph-fuse
Create the mount directory
[root@k8s-op-nfs ~]# mkdir /cephfs
Copy the configuration file
Copy ceph.conf from the admin node to the client node
192.168.200.116 is the admin node
[root@k8s-op-nfs ~]# rsync -e "ssh -p22" -avpgolr root@192.168.200.116:/etc/ceph/ceph.conf /etc/ceph/
The authenticity of host '192.168.200.116 (192.168.200.116)' can't be established.
ECDSA key fingerprint is SHA256:sOJWnTR116Hl90agvsW2ZtqV/Sr8ALNwOqZxrg+/vGo.
ECDSA key fingerprint is MD5:2c:83:3a:5d:7c:28:80:7d:99:9e:b8:03:e1:18:9e:66.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.200.116' (ECDSA) to the list of known hosts.
root@192.168.200.116's password:
receiving incremental file list
created directory /etc/ceph
ceph.conf
sent 43 bytes received 359 bytes 73.09 bytes/sec
total size is 264 speedup is 0.66
[root@k8s-op-nfs ~]#
Copy the keyring
[root@k8s-op-nfs ~]# rsync -e "ssh -p22" -avpgolr root@192.168.200.116:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
root@192.168.200.116's password:
receiving incremental file list
ceph.client.admin.keyring
sent 43 bytes received 261 bytes 86.86 bytes/sec
total size is 151 speedup is 0.50
Check the Ceph auth entries
[root@centos6-02 ~]# ceph auth list
installed auth entries:
mds.ceph-admin
key: AQAZZxdbH6uAOBAABttpSmPt6BXNtTJwZDpSJg==
caps: [mds] allow
caps: [mon] allow profile mds
caps: [osd] allow rwx
osd.0
key: AQCuWBdbV3TlBBAA4xsAE4QsFQ6vAp+7pIFEHA==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQC6WBdbakBaMxAAsUllVWdttlLzEI5VNd/41w==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.2
key: AQDJWBdbz6zNNhAATwzL2FqPKNY1IvQDmzyOSg==
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQCNWBdbf1QxAhAAkryP+OFy6wGnKR8lfYDkUA==
caps: [mds] allow *
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQCNWBdbnjLILhAAT1hKtLEzkCrhDuTLjdCJig==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
key: AQCOWBdbmxEANBAAiTMJeyEuSverXAyOrwodMQ==
caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
key: AQCNWBdbiO1bERAARLZaYdY58KLMi4oyKmug4Q==
caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
key: AQCNWBdboBLXIBAAVTsD2TPJhVSRY2E9G7eLzQ==
caps: [mon] allow profile bootstrap-rgw
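The client below mounts with the all-powerful client.admin key. For anything beyond a quick test, it is safer to create a dedicated key limited to CephFS; a sketch with a hypothetical client name:
[cephuser@ceph-admin cluster]$ ceph auth get-or-create client.cephfs mon 'allow r' mds 'allow rw' osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_metadata'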
Mount the Ceph cluster storage under /cephfs on the client
[root@k8s-op-nfs ~]# ceph-fuse -m 192.168.200.116:6789 /cephfs
ceph-fuse[99026]: starting ceph client
2020-06-01 15:23:52.671520 7fb9696f1240 -1 init, newargv = 0x55e56a671200 newargc=9
ceph-fuse[99026]: starting fuse
[root@k8s-op-nfs ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 35G 7.4G 28G 21% /
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 412M 3.5G 11% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 146M 869M 15% /boot
tmpfs 781M 0 781M 0% /run/user/0
ceph-fuse 47G 0 47G 0% /cephfs
Unmount the Ceph storage
[root@k8s-op-nfs ~]# umount /cephfs
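Besides ceph-fuse, the kernel CephFS client can be used on a reasonably recent kernel; a sketch that reuses the admin key (the secret file simply contains the bare key extracted from the keyring):
[root@k8s-op-nfs ~]# awk '/key =/ {print $3}' /etc/ceph/ceph.client.admin.keyring > /etc/ceph/admin.secret
[root@k8s-op-nfs ~]# mount -t ceph 192.168.200.116:6789:/ /cephfs -o name=admin,secretfile=/etc/ceph/admin.secret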
A final note:
When more than half of the OSD nodes are down, the Ceph storage mounted on remote clients stops working, i.e. I/O is suspended. In this example there are 3 OSD nodes: with one OSD node down (for instance after a host crash),
the client-mounted Ceph storage still works normally; but once 2 OSD nodes are down, the mounted storage becomes unusable (reads and writes under the mount point simply hang).
When the OSD nodes recover, the storage becomes usable again. After an OSD node is rebooted, its osd daemons come back up automatically.
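This behaviour follows from the pool's size/min_size settings: with osd pool default size = 3, the default min_size is 2, so a placement group keeps serving I/O with one of its three OSDs down but blocks once only one copy is left. The values can be inspected per pool:
[cephuser@ceph-admin cluster]$ ceph osd pool get cephfs_data size
[cephuser@ceph-admin cluster]$ ceph osd pool get cephfs_data min_size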