
Basic usage and commands

Below are some basic commands for working with Ceph:

  • Check the cluster status:
    ~# ceph -s
      cluster:
        id:     d5fde8a5-ad7e-4a96-8c23-f232c65a2828
        health: HEALTH_WARN
                mons are allowing insecure global_id reclaim
                1 pool(s) do not have an application enabled

      services:
        mon: 3 daemons, quorum vm1,vm2,vm3 (age 31h)
        mgr: vm1(active, since 5d), standbys: vm2
        osd: 5 osds: 5 up (since 5d), 5 in (since 6d)
        rgw: 1 daemon active (vm3.rgw0)

      task status:

      data:
        pools:   7 pools, 169 pgs
        objects: 25.35k objects, 98 GiB
        usage:   299 GiB used, 701 GiB / 1000 GiB avail
        pgs:     169 active+clean
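    Both warnings above have documented remediations. As a sketch (assuming the
    pool missing an application is acs-primary-storage and that it stores RBD
    images), they could be inspected and cleared with:
    ~# ceph health detail
    ~# ceph config set mon auth_allow_insecure_global_id_reclaim false
    ~# ceph osd pool application enable acs-primary-storage rbd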
  • Check cluster usage:
    ~# ceph df
    --- RAW STORAGE ---
    CLASS  SIZE      AVAIL    USED     RAW USED  %RAW USED
    hdd    1000 GiB  701 GiB  294 GiB  299 GiB   29.90
    TOTAL  1000 GiB  701 GiB  294 GiB  299 GiB   29.90

    --- POOLS ---
    POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
    device_health_metrics   1    1  0 B            0  0 B          0    202 GiB
    .rgw.root               2   32  1.3 KiB        4  768 KiB      0    202 GiB
    default.rgw.log         3   32  3.4 KiB      207  6 MiB        0    202 GiB
    default.rgw.control     4   32  0 B            8  0 B          0    202 GiB
    default.rgw.meta        5    8  393 B          2  384 KiB      0    202 GiB
    metadata-ec             6   32  0 B            0  0 B          0    202 GiB
    acs-primary-storage     7   32  98 GiB    25.13k  294 GiB  32.63    202 GiB
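    To break the same numbers down per OSD instead of per pool (useful for
    spotting unbalanced OSDs), ceph osd df prints utilization, variance and PG
    count for each OSD:
    ~# ceph osd df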
  • List the cluster's OSDs:
    ~# ceph osd tree
    ID   CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
    -1          0.97649  root default
    -3          0.19530      host dp1
     2    hdd   0.19530          osd.2      up   1.00000  1.00000
    -9          0.19530      host dp2
     0    hdd   0.19530          osd.0      up   1.00000  1.00000
    -7          0.19530      host dp3
     1    hdd   0.19530          osd.1      up   1.00000  1.00000
    -5          0.19530      host dp4
     3    hdd   0.19530          osd.3      up   1.00000  1.00000
    -11         0.19530      host dp5
     4    hdd   0.19530          osd.4      up   1.00000  1.00000
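    A common follow-up when servicing a disk is to take an OSD out of data
    placement and later return it; both commands trigger rebalancing (osd.3
    here is just an example ID taken from the tree above):
    ~# ceph osd out 3
    ~# ceph osd in 3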
  • Check the details of an OSD:
    ~# ceph osd find 2
    {
        "osd": 2,
        "addrs": {
            "addrvec": [
                {
                    "type": "v2",
                    "addr": "192.168.122.230:6800",
                    "nonce": 63981
                },
                {
                    "type": "v1",
                    "addr": "192.168.122.230:6801",
                    "nonce": 63981
                }
            ]
        },
        "osd_fsid": "9886b73e-6833-4654-a288-0d45eb2b9e1a",
        "host": "dp1",
        "crush_location": {
            "host": "dp1",
            "root": "default"
        }
    }
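    For richer information about the same OSD (backing device, Ceph version,
    kernel, memory and so on), ceph osd metadata dumps its full metadata as
    JSON:
    ~# ceph osd metadata 2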
  • List the pools:
    ~# ceph osd pool ls
    device_health_metrics
    .rgw.root
    default.rgw.log
    default.rgw.control
    default.rgw.meta
    metadata-ec
    acs-primary-storage
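    The detailed variant of the same command also prints each pool's replica
    size, PG count, CRUSH rule and enabled application:
    ~# ceph osd pool ls detail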
  • Check the state of the pools:
    ~# ceph osd pool stats
    pool device_health_metrics id 1
      nothing is going on

    pool .rgw.root id 2
      nothing is going on

    pool default.rgw.log id 3
      nothing is going on

    pool default.rgw.control id 4
      nothing is going on

    pool default.rgw.meta id 5
      nothing is going on

    pool metadata-ec id 6
      nothing is going on

    pool acs-primary-storage id 7
      nothing is going on
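    On a busy cluster, "nothing is going on" is replaced by client and recovery
    I/O rates. A single pool can also be queried by name:
    ~# ceph osd pool stats acs-primary-storage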
  • Run a benchmark on the OSDs:
    ~# ceph tell osd.* bench
    osd.0: {
        "bytes_written": 1073741824,
        "blocksize": 4194304,
        "elapsed_sec": 1.5983376140000001,
        "bytes_per_sec": 671786620.42048395,
        "iops": 160.16641150009249
    }
    osd.1: {
        "bytes_written": 1073741824,
        "blocksize": 4194304,
        "elapsed_sec": 1.4726374440000001,
        "bytes_per_sec": 729128427.62132013,
        "iops": 173.8377636960316
    }
    osd.2: {
        "bytes_written": 1073741824,
        "blocksize": 4194304,
        "elapsed_sec": 1.6086810540000001,
        "bytes_per_sec": 667467190.79592013,
        "iops": 159.13657922647479
    }
    osd.3: {
        "bytes_written": 1073741824,
        "blocksize": 4194304,
        "elapsed_sec": 3.2434518730000002,
        "bytes_per_sec": 331049100.16958344,
        "iops": 78.928256075282917
    }
    osd.4: {
        "bytes_written": 1073741824,
        "blocksize": 4194304,
        "elapsed_sec": 3.9115396859999998,
        "bytes_per_sec": 274506181.75832057,
        "iops": 65.447373809414046
    }
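    The defaults visible above are 1 GiB written in 4 MiB blocks. Both can be
    overridden positionally (total bytes, then block size); for example, to
    write 100 MiB in 4 KiB blocks to a single OSD and exercise small-block
    IOPS:
    ~# ceph tell osd.0 bench 104857600 4096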
  • List the cluster configuration:
    ~# ceph config dump
    WHO  MASK  LEVEL     OPTION                                VALUE                         RO
    mgr        advanced  mgr/dashboard/ALERTMANAGER_API_HOST   https://192.168.122.240:9093  *
    mgr        advanced  mgr/dashboard/GRAFANA_API_PASSWORD                                  *
    mgr        advanced  mgr/dashboard/GRAFANA_API_SSL_VERIFY  false                         *
    mgr        advanced  mgr/dashboard/GRAFANA_API_URL         https://192.168.122.240:3000  *
    mgr        advanced  mgr/dashboard/GRAFANA_API_USERNAME                                  *
    mgr        advanced  mgr/dashboard/PROMETHEUS_API_HOST     https://192.168.122.240:9092  *
    mgr        advanced  mgr/dashboard/RGW_API_ACCESS_KEY                                    *
    mgr        advanced  mgr/dashboard/RGW_API_HOST            192.168.122.242               *
    mgr        advanced  mgr/dashboard/RGW_API_PORT            8080                          *
    mgr        advanced  mgr/dashboard/RGW_API_SCHEME          http                          *
    mgr        advanced  mgr/dashboard/RGW_API_SECRET_KEY                                    *
    mgr        advanced  mgr/dashboard/RGW_API_USER_ID         ceph-dashboard                *
    mgr        advanced  mgr/dashboard/server_port             8443                          *
    mgr        advanced  mgr/dashboard/ssl                     false                         *
    mgr        advanced  mgr/dashboard/ssl_server_port         8443                          *
    mgr        advanced  mgr/dashboard/vm1/server_addr         192.168.122.240               *
    mgr        advanced  mgr/selftest/rwoption1                1234                          *
    mgr        advanced  mgr/selftest/rwoption2                10                            *
    mgr        advanced  mgr/selftest/rwoption3                1.500000                      *
    mgr        advanced  mgr/selftest/rwoption4                foo                           *
    mgr        advanced  mgr/selftest/rwoption5                false                         *
    mgr        advanced  mgr/selftest/testkey                                                *
    mgr        advanced  mgr/selftest/vm1/testkey                                            *
    mgr        advanced  mgr/telemetry/contact                                               *
    mgr        advanced  mgr/telemetry/interval                60                            *
    mgr        advanced  mgr/telemetry/leaderboard             true                          *
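    Individual options can be read and changed at runtime with ceph config
    get/set; for example, using the dashboard port shown above:
    ~# ceph config get mgr mgr/dashboard/server_port
    ~# ceph config set mgr mgr/dashboard/server_port 8443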
  • Locate the log files of the Ceph components:
    • Monitor logs:
      Stored in the files
      /var/log/ceph/ceph-mon.{VM NAME}.log
    • Manager logs:
      Stored in the files
      /var/log/ceph/ceph-mgr.{VM NAME}.log
    • OSD logs:
      Stored in the files
      /var/log/ceph/ceph-osd.{OSD ID}.log
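    For example, to follow the monitor log on vm1 from the cluster above, or to
    read an OSD's journal on systemd-managed installs (unit names vary by
    deployment method, so treat the unit below as an assumption):
    ~# tail -f /var/log/ceph/ceph-mon.vm1.log
    ~# journalctl -u ceph-osd@2 -f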