Ceph 005: Authorization Supplement and RBD Block Mapping

My Ceph version

[root@serverc ~]# ceph -v
ceph version 16.2.0-117.el8cp (0e34bb74700060ebfaa22d99b7d2cdc037b28a57) pacific (stable)

Authentication and authorization

# osd access restricted to one pool: this user can only see the specified pool
# x is the execute capability (needed for admin commands such as ceph auth)
[root@serverc ~]# ceph auth get-or-create client.user2 mon 'allow rw' osd 'allow rw pool=pool2' > /etc/ceph/ceph.client.user2.keyring

# client.boss only has rw, so it is not allowed to create users yet
[root@serverc ~]# ceph auth get-or-create client.user2 --id boss
Error EACCES: access denied
[root@serverc ~]# ceph auth get client.boss
[client.boss]
    key = AQBOcfdinDbjNBAAKADdWC1teSs1k+IngZFtLA==
    caps mon = "allow rw"
    caps osd = "allow rw"
exported keyring for client.boss

# After adding the x capability, client.boss can create users
[root@serverc ~]# ceph auth caps client.boss mon 'allow rwx' osd 'allow rwx'
updated caps for client.boss
[root@serverc ~]# ceph auth get-or-create client.user2 --id boss
[client.user2]
    key = AQCpb/ditbuXExAAI0DbTNL5dJta4DwXsd4pIw==
[root@serverc ~]# ceph auth get-or-create client.user3 --id boss
[client.user3]
    key = AQAgcvdifGyoHxAAyiO5TzBFb7n6ajvE18STRg==

The x capability lets a user create other users (it leans toward an administrator role).

Specifying capabilities


ceph auth ls --keyring abc --name client.boss
Use these options when the keyring is not in the standard /etc/ceph location.

# Fine-grained grant: restricted to one pool and one namespace. If the namespace is not
# specified the grant covers all namespaces; if the pool is not specified it covers all pools.
ceph auth get-or-create client.user1 mon 'allow rw' osd 'allow rw pool=pool1 namespace=sys' > /etc/ceph/ceph.client.user1.keyring

Make sure the client user has read permission on both the keyring file and the configuration file.
When modifying capabilities with ceph auth caps, write out the complete set of caps every time; you cannot add them one piece at a time.
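A minimal sketch of both points, reusing the client.user2 and pool2 names from the example above (the extra pool3 grant is purely hypothetical):

# Make the config and keyring readable by the account running the client tools
chmod 644 /etc/ceph/ceph.conf /etc/ceph/ceph.client.user2.keyring

# ceph auth caps replaces the existing caps entirely: to add access to pool3 you must
# restate the pool2 grant as well, otherwise it would be lost
ceph auth caps client.user2 mon 'allow rw' osd 'allow rw pool=pool2, allow rw pool=pool3'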

Deleting a user
ceph auth del client.user3

--id       defaults to admin
--name     defaults to client.admin
--keyring  defaults to /etc/ceph/ceph.client.admin.keyring
--conf     defaults to /etc/ceph/ceph.conf
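So, as a sketch, these two invocations should behave the same way; the second just spells out every default (--name is the long form, while --id gives only the part after "client."):

ceph health
ceph --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring --conf /etc/ceph/ceph.conf health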

Profile authorization
When OSDs access each other, they use the osd profile capabilities.


A profile is essentially a predefined internal set of capabilities, but it can also be granted to a regular user such as user1, giving access to the corresponding RBD, OSD, MDS and similar functionality.
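For example, a sketch that switches user1 over to the rbd profile (the pool name pool1 follows the earlier example; mgr is included as in the client.rbd user created later):

ceph auth caps client.user1 mon 'profile rbd' mgr 'profile rbd' osd 'profile rbd pool=pool1'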

[root@serverc ceph]# ceph auth get-or-create-key client.user1
AQBifPZijsT7IhAAJa5qCKaMzQX29ni2yJu//Q==

Getting just the key


ceph auth get client.xxx
Shows the capability information as well as the key.

Exporting a user

ceph auth get client.breeze -o /etc/ceph/ceph.client.breeze.keyring 

Importing a user

ceph auth  import -i /etc/ceph/ceph.client.breeze.keyring 

Ceph keyring management

When a client accesses the Ceph cluster it uses a local keyring file. By default it searches for a keyring at the following paths and names, in order:
/etc/ceph/$cluster.$name.keyring
/etc/ceph/$cluster.keyring
/etc/ceph/keyring
/etc/ceph/keyring.bin
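As a concrete illustration, with the default cluster name ceph and the client.rbd user created later in this post, the first entry expands to:

# $cluster.$name.keyring
/etc/ceph/ceph.client.rbd.keyring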

Managing RBD block devices


All three of Ceph's storage access methods are built on top of storage pools.

[root@serverc ceph]# ceph osd pool create rbdpool
pool 'rbdpool' created

[root@serverc ceph]# ceph osd pool application enable rbdpool rbd
enabled application 'rbd' on pool 'rbdpool'

# The following has the same effect (or run both):
# rbd pool init rbdpool

Creating a block device from the pool

A raw device created from a pool is called an image (it is the thing that eventually gets mapped and mounted).

Create a dedicated user to manage RBD

# This user's keyring can be handed to the application side
[root@serverc ceph]# ceph auth get-or-create client.rbd mon 'profile rbd' mgr 'profile rbd' osd 'profile rbd' > /etc/ceph/ceph.client.rbd.keyring

[Classroom environment]
clienta is the admin node: part of the cluster, but with no OSDs; it acts as a cluster client.

Application clients are ordinary servers that consume the storage.
Can a virtual machine keep its data in RBD? (Yes; VM disks are a typical RBD workload.)


alias rbd='rbd --id rbd'
The alias saves typing --id rbd on every command.

Creating an image

[root@serverc ceph]# rbd -p rbdpool create --size 1G image1 --id rbd 

Listing the images in the pool

[root@serverc ceph]# rbd ls -p rbdpool --id rbd
image1

Viewing image details

[root@serverc ceph]# rbd info rbdpool/image1 --id rbd
rbd image 'image1':
    size 1 GiB in 256 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: fae567c39ea1
    block_name_prefix: rbd_data.fae567c39ea1
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Sat Aug 13 07:59:16 2022
    access_timestamp: Sat Aug 13 07:59:16 2022
    modify_timestamp: Sat Aug 13 07:59:16 2022

256 objects: the 1 GiB image is split into 4 MiB objects (order 22), and 1024 MiB / 4 MiB = 256.

Listing the underlying objects

[root@serverc ceph]# rados -p rbdpool ls
rbd_object_map.fae567c39ea1
rbd_directory
rbd_info
rbd_header.fae567c39ea1
rbd_id.image1

These are metadata objects that describe the image and its data objects.

Objects are allocated only when data is actually written (thin provisioning), so creating the image does not immediately consume 1 GiB.
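A quick way to check this is rbd du, which compares the provisioned size with the space actually consumed by written objects (a sketch; the same command is shown later for the resized image):

rbd du rbdpool/image1 --id rbd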

Mapping the image onto a server

To map the image, a client needs:
the rbd command (from the ceph-common package)
a user with the appropriate capabilities
the Ceph configuration file
A sketch of preparing such a client is shown below.
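A minimal sketch of preparing a client node, assuming a RHEL-based host; the hostname serverc and the client.rbd keyring come from the examples above:

# On the client node
dnf install -y ceph-common

# Copy the cluster configuration and the keyring from a node that already has them
scp serverc:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
scp serverc:/etc/ceph/ceph.client.rbd.keyring /etc/ceph/ceph.client.rbd.keyring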

Mapping

[root@serverc ceph]# rbd map rbdpool/image1 --id rbd
[root@serverc ceph]# rbd showmapped
id  pool     namespace  image   snap  device
0   rbdpool             image1  -     /dev/rbd0
1   rbdpool             image1  -     /dev/rbd1

[Mapped it one time too many by mistake; unmap one of the RBD mappings]

[root@serverc ceph]# rbd unmap /dev/rbd1

[root@serverc ceph]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.

[root@serverc ceph]# mkdir /mnt/rbd0
[root@serverc ceph]# mount /dev/rbd0 /mnt/rbd0/
[root@serverc ceph]# df -h
/dev/rbd0      1014M   40M  975M   4% /mnt/rbd0

[root@serverc ceph]# rados -p rbdpool ls
rbd_data.fae567c39ea1.0000000000000020
rbd_object_map.fae567c39ea1
rbd_data.fae567c39ea1.0000000000000040
rbd_data.fae567c39ea1.00000000000000c0
rbd_directory
rbd_info
rbd_data.fae567c39ea1.0000000000000080
rbd_data.fae567c39ea1.00000000000000a0
rbd_data.fae567c39ea1.0000000000000060
rbd_data.fae567c39ea1.0000000000000000
rbd_header.fae567c39ea1
rbd_data.fae567c39ea1.00000000000000e0
rbd_data.fae567c39ea1.00000000000000ff
rbd_id.image1

[root@serverc rbd0]# dd if=/dev/zero of=file1 bs=1M count=20
[root@serverc rbd0]# sync
[root@serverc rbd0]# rados -p rbdpool ls
rbd_data.fae567c39ea1.0000000000000020
rbd_object_map.fae567c39ea1
rbd_data.fae567c39ea1.0000000000000040
rbd_data.fae567c39ea1.00000000000000c0
rbd_directory
rbd_data.fae567c39ea1.0000000000000003
rbd_data.fae567c39ea1.0000000000000001
rbd_info
rbd_data.fae567c39ea1.0000000000000080
rbd_data.fae567c39ea1.00000000000000a0
rbd_data.fae567c39ea1.0000000000000060
rbd_data.fae567c39ea1.0000000000000000
rbd_header.fae567c39ea1
rbd_data.fae567c39ea1.00000000000000e0
rbd_data.fae567c39ea1.0000000000000004
rbd_data.fae567c39ea1.0000000000000002
rbd_data.fae567c39ea1.00000000000000ff
rbd_id.image1
rbd_data.fae567c39ea1.0000000000000005
[root@serverc rbd0]#

Not much can be read from this directly; the new rbd_data.* objects (…0001 through …0005) correspond to the 20 MiB just written, while the three-way replication happens at the OSD level and is not visible in this listing.

Growing the image

[root@serverc rbd0]# rbd resize --size 2G rbdpool/image1 --id rbd
Resizing image: 100% complete...done.

[root@serverc rbd0]# rbd du rbdpool/image1
NAME    PROVISIONED  USED
image1        2 GiB  56 MiB

[root@serverc rbd0]# xfs_growfs /mnt/rbd0/
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 524288

[root@serverc rbd0]# df -h | tail -n 2
tmpfs           576M     0  576M   0% /run/user/0
/dev/rbd0       2.0G   68M  2.0G   4% /mnt/rbd0

1. Create the RBD pool

ceph osd pool create rbdpool

2. Initialize the RBD pool

rbd pool init rbdpool 

3. Create the RBD user

# Allows all RBD-related operations
ceph auth get-or-create client.rbd mon 'profile rbd' mgr 'profile rbd' osd 'profile rbd' > /etc/ceph/ceph.client.rbd.keyring

4. Create an RBD image

alias rbd='rbd --id rbd'
rbd create --size 1G rbdpool/image1

5. Map the image

rbd map rbdpool/image1
rbd showmapped

6. Put a filesystem on the RBD device

mkfs.xfs /dev/rbd0 

7. Mount it

mount /dev/rbd0 /mnt/rbd0

8. Persistent mount via /etc/fstab

/dev/rbd0   /mnt/rbd0   xfs   defaults,_netdev   0 0
_netdev marks this as a network device, so mounting is deferred until the required services are up; without _netdev the boot can genuinely hang.
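A quick way to test the fstab entry without rebooting (a sketch):

umount /mnt/rbd0
mount -a
df -h /mnt/rbd0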

9. Map the image automatically at boot

[root@serverc ~]# vim /etc/ceph/rbdmap
[root@serverc ~]# cat /etc/ceph/rbdmap
# RbdDevice		Parameters
#poolname/imagename	id=client,keyring=/etc/ceph/ceph.client.keyring
rbdpool/image1	id=rbd,keyring=/etc/ceph/ceph.client.rbd.keyring

[root@serverc ~]# systemctl enable rbdmap.service
Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /usr/lib/systemd/system/rbdmap.service.

The rbdmap service is shipped as part of the ceph-common package.
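You can confirm which package owns the unit file (a sketch; assumes an RPM-based system):

rpm -qf /usr/lib/systemd/system/rbdmap.service
# expected to report the ceph-common package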

10. Grow the image and the filesystem

rbd resize --size 2G rbdpool/image1
xfs_growfs /mnt/rbd0/

11. Remove the image

# Comment out the fstab entry first
[root@serverc ~]# umount /mnt/rbd0
[root@serverc ~]# rbd unmap rbdpool/image1
[root@serverc ~]# rbd showmapped
[root@serverc ~]# rbd rm rbdpool/image1
Removing image: 100% complete...done.

12. Trash feature (for when you are not sure whether an image should really be deleted)

[root@serverc ~]# rbd create --size 1G rbdpool/image2
[root@serverc ~]# rbd trash move rbdpool/image2
[root@serverc ~]# rbd -p rbdpool ls
[root@serverc ~]# rbd trash ls -p rbdpool
fb5f7c2dd404 image2
[root@serverc ~]# rbd trash restore fb5f7c2dd404 -p rbdpool
[root@serverc ~]# rbd -p rbdpool ls
image2

rbd trash purge
Purges everything in the trash of the specified pool.
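For the pool used here that would be (a sketch):

rbd trash purge -p rbdpool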

