
Ceph Common Commands #18

Open
zjs1224522500 opened this issue Sep 15, 2020 · 1 comment
Labels
Command (Command line)

zjs1224522500 commented Sep 15, 2020

Deploy Ceph

Ceph Cluster

  • ceph -s // show ceph status
  • ceph osd tree // show osd tree of ceph
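The status check can be scripted into a simple health gate. A minimal sketch, using a canned `ceph -s` health line in place of a live cluster (the real line comes from the standard `health: HEALTH_OK` summary):

```shell
#!/bin/sh
# Stand-in for: status_line=$(ceph -s | grep 'health:')
status_line="health: HEALTH_OK"

# Map the health summary to a one-word verdict.
case "$status_line" in
  *HEALTH_OK*)   verdict="cluster healthy" ;;
  *HEALTH_WARN*) verdict="cluster degraded" ;;
  *)             verdict="cluster in error state" ;;
esac
echo "$verdict"
```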

RADOS

  • rados lspools // list pools
  • rados mkpool {pool_name} // create a rados pool; a crush rule and auid can optionally be assigned
  • rados -p {pool_name} ls // list objects in rados pool
  • rados df // show per-pool and total usage
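The per-pool usage from rados df is easy to aggregate with awk. A sketch over canned output (the real command prints more columns; the three-column sample below is an assumption about which fields you want to sum):

```shell
#!/bin/sh
# Canned stand-in for: rados df
sample='POOL_NAME USED OBJECTS
rbd 1024 10
cephfs_data 2048 20'

# Skip the header row and sum the USED column.
total=$(echo "$sample" | awk 'NR > 1 { sum += $2 } END { print sum }')
echo "total bytes used: $total"
```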

RBD

  • rbd info -p {pool_name} {image_name} // Check info of given image
  • rbd ls -p {pool_name} // list images of given pool
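The rbd ls output can feed an existence check before scripting against an image. A sketch with a canned image list (the image names are made up):

```shell
#!/bin/sh
# Stand-in for: images=$(rbd ls -p {pool_name})
images='vm-disk-1
vm-disk-2'
target=vm-disk-2

# grep -x matches whole lines, so partial names do not false-positive.
if echo "$images" | grep -qx "$target"; then
  found="image $target exists"
else
  found="image $target missing"
fi
echo "$found"
```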

OSD

Disk Part

NVMe SSD (the mklabel/mkpart lines below are entered at the interactive parted prompt)

  • parted /dev/nvme1n1
  • mklabel
  • gpt
  • mkpart osd-service-3-data 0G 30G
  • mkpart osd-service-3-wal 30G 60G
  • mkpart osd-device-3-db 60G 90G
  • mkpart osd-device-3-block 90G 150G
  • mkfs.xfs /dev/nvme1n1p1
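Instead of typing the mkpart lines at the interactive prompt, the same layout can be generated from a table and passed to parted's scripted mode (`parted -s`). A dry-run sketch that only prints the commands; the device and partition names are the ones from the notes above:

```shell
#!/bin/sh
dev=/dev/nvme1n1

# Emit one scripted parted call per row; pipe the result to sh to run it.
cmds=$(while read -r name start end; do
  echo "parted -s $dev mkpart $name ${start}G ${end}G"
done <<'EOF'
osd-service-3-data 0 30
osd-service-3-wal 30 60
osd-device-3-db 60 90
osd-device-3-block 90 150
EOF
)
echo "$cmds"
```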

Remove OSD

  • ceph osd out {osd-num} // out osd
  • ceph osd crush remove {name}
  • ceph auth del osd.{osd-num}
  • ceph osd rm {osd-num}
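The four removal steps must run in this order, so wrapping them in a loop makes it easy to dry-run first. A sketch that only echoes the commands (osd id 3 is an example; drop the echo prefix on a live cluster):

```shell
#!/bin/sh
osd=3   # example osd id

# Print the removal plan in order instead of executing it.
plan=$(for cmd in \
  "ceph osd out $osd" \
  "ceph osd crush remove osd.$osd" \
  "ceph auth del osd.$osd" \
  "ceph osd rm $osd"
do
  echo "would run: $cmd"
done)
echo "$plan"
```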

Create OSD

  • ceph osd create
  • ceph-osd -i 1 --mkfs --mkkey
  • ceph auth add osd.1 osd 'allow *' mon 'allow profile osd' -i /zm3/osd.1/keyring
  • ceph osd crush add osd.1 1.0 host=sw211
  • ceph-osd -i 1
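The creation sequence is identical for every osd except for the id, crush weight, host, and data directory, so it is worth parameterizing. A dry-run sketch using the values from the notes above (/zm3/osd.1 and host=sw211 are this cluster's examples, not defaults):

```shell
#!/bin/sh
id=1; weight=1.0; host=sw211; data=/zm3/osd.$id

# Print the creation steps with the parameters substituted in.
steps=$(cat <<EOF
ceph-osd -i $id --mkfs --mkkey
ceph auth add osd.$id osd 'allow *' mon 'allow profile osd' -i $data/keyring
ceph osd crush add osd.$id $weight host=$host
ceph-osd -i $id
EOF
)
echo "$steps"
```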
@zjs1224522500 zjs1224522500 added the Command Command line label Sep 15, 2020

zjs1224522500 commented Oct 14, 2020

Ceph FS

  • ceph osd pool create cephfs_data 64 // create the cephfs data pool with 64 PGs
  • ceph osd pool create cephfs_metadata 64 // create the cephfs metadata pool with 64 PGs
  • ceph fs new cephfs cephfs_metadata cephfs_data // create the cephfs filesystem
  • ceph fs ls // list the created ceph filesystems
  • ceph mds stat // check the mds status
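The 64-PG figure is not arbitrary: a common rule of thumb (general Ceph guidance, not from these notes) targets roughly 100 PGs per OSD divided by the replica count, rounded down to a power of two. A sketch of that calculation:

```shell
#!/bin/sh
osds=2      # example: a small 2-OSD test cluster
replicas=3  # default replicated pool size

target=$(( osds * 100 / replicas ))   # ~66 for this example
# Round down to the nearest power of two.
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "suggested pg_num: $pg"
```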

MDS

  • apt-get install ceph-mds // install ceph-mds
  • mkdir /var/lib/ceph/mds
  • mkdir /var/lib/ceph/mds/ceph-mdsa
  • ceph auth get-or-create mds.mdsa mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' > /var/lib/ceph/mds/ceph-mdsa/keyring
  • systemctl start ceph-mds@mdsa

CEPH-FUSE Mount

  • ceph-fuse --id admin -m 192.168.31.214:6789 /zm3/mnt/
  • ceph-fuse --id admin -m 192.168.31.214:6789 /zm3/test/

Umount

  • fusermount -zu /zm3/test/
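Unmounting a path that is not mounted makes fusermount complain, so a guard helps in scripts. A sketch using a canned mount table in place of /proc/mounts:

```shell
#!/bin/sh
# Stand-in for: mounts=$(cat /proc/mounts)
mounts='ceph-fuse /zm3/test fuse.ceph-fuse rw,nosuid,nodev 0 0'
mnt=/zm3/test

# Only unmount if the mountpoint actually appears in the table.
if echo "$mounts" | grep -q " $mnt "; then
  action="fusermount -zu $mnt/"
else
  action="$mnt not mounted, skipping"
fi
echo "$action"
```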

Filebench Test

  • ./configure --build=alpha-unknown-linux-gnu // build on sw_64
  • make && make install // use && so install only runs after a successful build
  • Edit the workload (.f) file to adjust the test parameters.
  • ./filebench-1.5-alpha3/filebench -f /usr/local/share/filebench/workloads/createfiles.f
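For reference, a createfiles-style workload file looks roughly like the sketch below. This is modeled on the stock workloads shipped with filebench; the path, file counts, and sizes are examples, and flowop attribute names may differ across filebench versions.

```
set $dir=/zm3/mnt
set $nfiles=1000
set $filesize=16k

define fileset name=testset,path=$dir,size=$filesize,entries=$nfiles,prealloc=0

define process name=createfiles,instances=1
{
  thread name=createthread,memsize=4m,instances=1
  {
    flowop createfile name=crfile,filesetname=testset,fd=1
    flowop writewholefile name=wrfile,filesetname=testset,fd=1,iosize=16k
    flowop closefile name=clfile,fd=1
  }
}

run 60
```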

Config mds HA

  • ceph fs get ${fs_name} // ceph fs get cephfs
  • ceph fs set ${fs_name} max_mds 2 // ceph fs set cephfs max_mds 2

Create new MDS

  • mkdir /var/lib/ceph/mds/ceph-mds.b // new mds
  • ceph auth get-or-create mds.mds.b mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' > /var/lib/ceph/mds/ceph-mds.b/keyring
  • ceph-mds -i mds.b
  • ceph fs status // check the ranks after adding the new mds

Bind MDS to dir

  • setfattr -n ceph.dir.pin -v {rank} {dir}
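Pinning is per-directory, so distributing several top-level directories across ranks is a short loop. A dry-run sketch (the directory names and ranks are examples):

```shell
#!/bin/sh
# Pin example project directories to alternating mds ranks; print the
# commands instead of running them.
pins=$(for entry in "/mnt/cephfs/projA 0" "/mnt/cephfs/projB 1"; do
  dir=${entry% *}    # part before the space
  rank=${entry#* }   # part after the space
  echo "setfattr -n ceph.dir.pin -v $rank $dir"
done)
echo "$pins"
```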

Remove mds

  • ceph fs set ${fs_name} max_mds 1
  • ceph mds deactivate {rank} // deprecated in newer releases, where lowering max_mds alone stops the extra rank
