I am trying to trace Ceph using LTTng-UST, but I cannot see any Ceph-specific tracepoints after following the recommended steps. I compiled Ceph with LTTng support and preloaded liblttng-ust-fork.so, yet
lttng list -u
only shows generic tracepoints such as lttng_ust_lib:* and lttng_ust_tracelog:*. I expected to see tracepoints for Ceph's operations, such as pg:queue_op or osd:do_osd_op_post.
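A quick way to check whether a running daemon has any LTTng provider loaded at all (a sketch; it assumes ceph-osd is running and takes the first PID that pidof returns):
# Look for LTTng libraries mapped into the running OSD process
sudo grep -i lttng /proc/$(pidof ceph-osd | awk '{print $1}')/maps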
I followed these steps to build Ceph with LTTng enabled:
# Update package list and install dependencies
sudo apt update
sudo apt install dpkg-dev devscripts -y
sudo apt install lttng-tools lttng-modules-dkms liblttng-ust-dev libbabeltrace-dev
# Retrieve the Ceph source code (requires deb-src entries in sources.list)
apt source ceph
# Install build dependencies
sudo apt build-dep ceph
# Navigate to the Ceph source directory
cd ceph-<version>
# Edit the build rules to enable LTTng
nano debian/rules
# Add the following lines:
extraopts += -DWITH_LTTNG=ON
extraopts += -DWITH_OSD_INSTRUMENT_FUNCTIONS=ON
# Build the Ceph packages with the required options
DEB_BUILD_OPTIONS=nocheck DEB_CMAKE_EXTRA_ARGS="-DWITH_LTTNG=ON" dpkg-buildpackage -uc -us
# Install the built packages
sudo dpkg -i ../ceph*.deb
# Resolve any dependency issues
sudo apt install -f
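To confirm that the rebuilt packages were actually compiled with LTTng support, one check (assuming the binary installs to /usr/bin; the grep pattern for the provider libraries is a guess at their naming):
# The binary should reference liblttng-ust if -DWITH_LTTNG=ON took effect
ldd /usr/bin/ceph-osd | grep -i lttng
# Ceph may also ship the tracepoint providers as separate *_tp libraries
dpkg -L ceph-osd | grep -i tp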
And here is the script I followed to deploy a Ceph cluster manually:
#!/bin/bash
# Variables
INTERFACE="ens34"
HOSTNAME=$(hostname)
MON_IP="192.168.1.10"
MON_IP_MASK="255.255.255.0"
PUBLIC_NETWORK="192.168.1.0/24"
FSID=$(uuidgen)
MANAGER_NAME="mgr1"
DISK1="sda"
DISK2="sdb"
# Configure the IP address
sudo bash -c "cat >> /etc/network/interfaces" <<EOF
auto ${INTERFACE}
iface ${INTERFACE} inet static
address ${MON_IP}
netmask ${MON_IP_MASK}
EOF
sudo systemctl restart networking
# ---------------------------------------------------------------------------------------------------
# Create the Ceph configuration file
sudo bash -c "cat > /etc/ceph/ceph.conf" <<EOF
[global]
fsid = ${FSID}
mon_initial_members = ${HOSTNAME}
mon_host = ${MON_IP}
public_network = ${PUBLIC_NETWORK}
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
osd_pool_default_min_size = 2
EOF
# Generate keyrings
sudo ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
# Import keys into ceph.mon.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo chown ceph:ceph /tmp/ceph.mon.keyring
# Generate the monmap
sudo monmaptool --create --add ${HOSTNAME} ${MON_IP} --fsid ${FSID} /tmp/monmap
# Create data directory for the monitor
sudo mkdir -p /var/lib/ceph/mon/ceph-${HOSTNAME}
sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-${HOSTNAME}
# Initialize the monitor
sudo -u ceph ceph-mon --mkfs -i ${HOSTNAME} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
# Wait before starting the monitor
sleep 4
# Start the monitor
sudo systemctl start ceph-mon@${HOSTNAME}
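# Sanity check: the monitor should respond and report a quorum
# (assumes the admin keyring written above is picked up from /etc/ceph)
sudo ceph -s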
# ---------------------------------------------------------------------------------------------------
# Deploy a manager
echo "Deploying manager: ${MANAGER_NAME}"
# 1. Create an authentication key for the manager
sudo ceph auth get-or-create mgr.${MANAGER_NAME} mon 'allow profile mgr' osd 'allow *' mds 'allow *'
# 2. Create data directory for the manager
sudo mkdir -p /var/lib/ceph/mgr/ceph-${MANAGER_NAME}
# 3. Save the keyring in the directory
sudo bash -c "ceph auth get mgr.${MANAGER_NAME} > /var/lib/ceph/mgr/ceph-${MANAGER_NAME}/keyring"
# 4. Set correct permissions
sudo chown -R ceph:ceph /var/lib/ceph/mgr/ceph-${MANAGER_NAME}
# 5. Start the manager daemon
sudo -u ceph ceph-mgr -i ${MANAGER_NAME}
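# Sanity check: after a few seconds the manager should show as active
sudo ceph -s | grep mgr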
# ---------------------------------------------------------------------------------------------------
# Add two OSDs
echo "Creating OSDs on /dev/${DISK1} and /dev/${DISK2}"
# Prepare and activate the OSDs
sudo ceph-volume lvm create --data /dev/${DISK1}
sudo ceph-volume lvm create --data /dev/${DISK2}
# ---------------------------------------------------------------------------------------------------
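After the script finishes, both OSDs should be reported as up; a quick check I would run (not part of the original script):
sudo ceph osd tree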
Finally, the steps performed to see the tracepoints:
# 1. Initial command to list UST events
lttng list -u
# Obtained result:
UST events:
-------------
NONE
# 2. Preload liblttng-ust-fork.so before starting the Ceph daemons
sudo systemctl edit ceph-mon@debian
# Content added to the file:
[Service]
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/liblttng-ust-fork.so"
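# Verify the drop-in took effect: the Environment property should list LD_PRELOAD
systemctl show ceph-mon@debian -p Environment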
# 3. Restart Ceph daemons
sudo systemctl restart ceph-mon@debian
sudo systemctl restart ceph-osd@0 ceph-osd@1
sudo systemctl restart ceph-mgr@mgr1
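# Confirm the preload reached a restarted daemon (a spot check; pidof may
# return several PIDs, this takes the first one)
sudo cat /proc/$(pidof ceph-mon | awk '{print $1}')/environ | tr '\0' '\n' | grep LD_PRELOAD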
# 4. Verify events after restart
lttng list -u
# Obtained result:
UST events:
-------------
lttng_ust_lib:*
lttng_ust_tracelog:*
lttng_ust_statedump:*
# Expected result:
UST events:
-------------
PID: 100859 - Name: /path/to/ceph-osd
pg:queue_op (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
osd:do_osd_op_post (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
osd:do_osd_op_pre_unknown (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
osd:do_osd_op_pre_copy_from (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
osd:do_osd_op_pre_copy_get (loglevel: TRACE_DEBUG_LINE (13)) (type: tracepoint)
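For reference, once the Ceph tracepoints do appear, the tracing session I intend to run would look roughly like this (the session name and event patterns are only examples):
# Create a session, enable the Ceph UST events, and record a trace
lttng create ceph-trace
lttng enable-event -u 'osd:*'
lttng enable-event -u 'pg:*'
lttng start
# ... exercise the cluster (e.g. rados bench) ...
lttng stop
lttng view
lttng destroy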