Initializing a minimal cluster with cephadm bootstrap

> The cephadm bootstrap process creates a small Ceph cluster on a single node, consisting of one Ceph monitor and one Ceph mgr, plus monitoring components such as prometheus and node-exporter.

```shell
## At bootstrap time, specify the mon IP, the cluster network, and the initial dashboard username and password
# cephadm bootstrap --mon-ip 192.168.59.241 --cluster-network 10.168.59.0/24 --initial-dashboard-user admin --initial-dashboard-password demo2023
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 2e1228b0-0781-11ee-aa8a-000c2921faf1
Verifying IP 192.168.59.241 port 3300 ...
Verifying IP 192.168.59.241 port 6789 ...
Mon IP `192.168.59.241` is in CIDR network `192.168.59.0/24`
Mon IP `192.168.59.241` is in CIDR network `192.168.59.0/24`
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.59.0/24
Setting cluster_network to 10.168.59.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

             URL: https://ceph01:8443/
            User: admin
        Password: p5tuqo17we

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/2e1228b0-0781-11ee-aa8a-000c2921faf1/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /usr/sbin/cephadm shell --fsid 2e1228b0-0781-11ee-aa8a-000c2921faf1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /usr/sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

## The initial dashboard username and password can also be set at bootstrap time: --initial-dashboard-user admin --initial-dashboard-password demo2023
# ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf  ceph.pub  rbdmap
```

- ceph.client.admin.keyring is the keyring carrying the Ceph administrator credentials
- ceph.conf is the minimal configuration file
- ceph.pub is the cluster's SSH public key; once copied to the other nodes, cephadm can log in to them without a password (a sketch of this follows below).
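To make the passwordless login work in practice, the public key is distributed with ssh-copy-id and the new nodes are then registered with the orchestrator. A minimal sketch, assuming two additional nodes named ceph02/ceph03 with the IPs shown (the hostnames and addresses are placeholders, not taken from the output above):

```shell
## Copy the cluster's SSH public key to each new node so cephadm can reach it
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph03
## Register the nodes with the orchestrator; cephadm will start deploying
## daemons on them according to the service placements
# ceph orch host add ceph02 192.168.59.242
# ceph orch host add ceph03 192.168.59.243
```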
> With five or more Ceph nodes, five of them are used as mons by default, as the `count:5` placement in `ceph orch ls` shows:

```shell
# ceph orch ls
NAME           PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager   ?:9093,9094      1/1  7m ago     46m  count:1
crash                           1/1  7m ago     46m  *
grafana        ?:3000           1/1  7m ago     46m  count:1
mgr                             1/2  7m ago     46m  count:2
mon                             1/5  7m ago     46m  count:5
node-exporter  ?:9100           1/1  7m ago     46m  *
prometheus     ?:9095           1/1  7m ago     46m  count:1
```
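If five mons are more than the hardware provides, the default placement can be overridden with `ceph orch apply`. A sketch under that assumption (the hostnames are illustrative):

```shell
## Lower the mon count from the default 5 to 3
# ceph orch apply mon 3
## Or pin the mons to specific hosts instead of a bare count
# ceph orch apply mon --placement="ceph01,ceph02,ceph03"
```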
> After bootstrap the cluster is still in HEALTH_WARN: there are no OSDs yet, and there is only one MON and one MGR, so the next step is to add more Ceph nodes.

```shell
# ceph -s
  cluster:
    id:     67ccccf2-07f6-11ee-a1c2-000c2921faf1
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph01 (age 9m)
    mgr: ceph01.sdqukl(active, since 7m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```
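Once more hosts are in the cluster, clearing the `OSD count 0` warning means creating OSDs on their free disks. A sketch using the orchestrator's standard commands (the host and device names are illustrative, not from this cluster):

```shell
## List the devices the orchestrator considers usable
# ceph orch device ls
## Consume every available, unused device on all managed hosts as an OSD
# ceph orch apply osd --all-available-devices
## Or create a single OSD from one specific device
# ceph orch daemon add osd ceph02:/dev/sdb
```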