ctdb - CTDB management utility
ctdb [OPTION...] {COMMAND} [COMMAND-ARGS]
ctdb is a utility to view and manage a CTDB cluster.
The following terms are used when referring to nodes in a cluster:
PNN
Physical Node Number. The physical node number is an integer that describes the node in the cluster. The first node has physical node number 0.
PNN-LIST
This is either a single PNN, a comma-separated list of PNNs or "all".
Commands that reference a database use the following terms:
DB
This is either a database name, such as locking.tdb or a database ID such as "0x42fe72c5".
DB-LIST
A space separated list of at least one DB.
-n PNN-LIST
The nodes specified by PNN-LIST should be queried for the requested information. Default is to query the daemon running on the local host.
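For example, to ping only nodes 0 and 2 (hypothetical PNNs):
# ctdb ping -n 0,2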
-Y
Produce output in machine readable form for easier parsing by scripts. Not all commands support this option.
-t TIMEOUT
Indicates that ctdb should wait up to TIMEOUT seconds for a response to most commands sent to the CTDB daemon. The default is 10 seconds.
-T TIMELIMIT
Indicates that TIMELIMIT is the maximum run time (in seconds) for the ctdb command. When TIMELIMIT is exceeded the ctdb command will terminate with an error. The default is 120 seconds.
-? --help
Print some help text to the screen.
--usage
Print usage information to the screen.
-d --debug=DEBUGLEVEL
Change the debug level for the command. Default is ERR (0).
--socket=FILENAME
Specify that FILENAME is the name of the Unix domain socket to use when connecting to the local CTDB daemon. The default is /tmp/ctdb.socket.
These are commands used to monitor and administer a CTDB cluster.
This command displays the PNN of the current node.
This command displays the PNN of the current node without contacting the CTDB daemon. It parses the nodes file directly, so can produce unexpected output if the nodes file has been edited but has not been reloaded.
This command shows the current status of all CTDB nodes based on information from the queried node.
Note: If the queried node is INACTIVE then the status might not be current.
Node status
This includes the number of physical nodes and the status of each node. See ctdb(7) for information about node states.
Generation
The generation id is a number that indicates the current generation of a cluster instance. Each time a cluster goes through a reconfiguration or a recovery its generation id will be changed.
This number does not have any particular meaning other than to keep track of when a cluster has gone through a recovery. It is a random number that represents the current instance of a ctdb cluster and its databases. The CTDB daemon uses this number internally to tell when commands that operate on the cluster and the databases were issued in a different generation of the cluster, ensuring that commands that operate on the databases do not survive across a cluster database recovery. After a recovery, all old outstanding commands automatically become invalid.
Sometimes this number will be shown as "INVALID". This only means that the ctdbd daemon has started but it has not yet merged with the cluster through a recovery. All nodes start with generation "INVALID" and are not assigned a real generation id until they have successfully been merged with a cluster through a recovery.
Virtual Node Number (VNN) map
Consists of the number of virtual nodes and mapping from virtual node numbers to physical node numbers. Virtual nodes host CTDB databases. Only nodes that are participating in the VNN map can become lmaster or dmaster for database records.
Recovery mode
This is the current recovery mode of the cluster. There are two possible modes:
NORMAL - The cluster is fully operational.
RECOVERY - The cluster databases have all been frozen, pausing all services while the cluster awaits a recovery process to complete. A recovery process should finish within seconds. If a cluster is stuck in the RECOVERY state this would indicate a cluster malfunction which needs to be investigated.
Once the recovery master detects an inconsistency, for example a node becoming disconnected or connected, the recovery daemon will trigger a cluster recovery process, where all databases are remerged across the cluster. When this process starts, the recovery master will first "freeze" all databases to prevent applications such as samba from accessing them, and it will also mark the recovery mode as RECOVERY.
When the CTDB daemon starts up, it will start in RECOVERY mode. Once the node has been merged into a cluster and all databases have been recovered, the node will change into NORMAL mode and the databases will be "thawed", allowing samba to access them again.
Recovery master
This is the cluster node that is currently designated as the recovery master. This node is responsible for monitoring the consistency of the cluster and for performing the actual recovery process when required.
Only one node at a time can be the designated recovery master. Which node is designated the recovery master is decided by an election process in the recovery daemons running on each node.
Example
# ctdb status
Number of nodes:4
pnn:0 192.168.2.200       OK (THIS NODE)
pnn:1 192.168.2.201       OK
pnn:2 192.168.2.202       OK
pnn:3 192.168.2.203       OK
Generation:1362079228
Size:4
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
hash:3 lmaster:3
Recovery mode:NORMAL (0)
Recovery master:0
This command is similar to the status command. It displays the "node status" subset of output. The main differences are:
The exit code is the bitwise-OR of the flags for each specified node, while ctdb status exits with 0 if it was able to retrieve status for all nodes.
ctdb status provides status information for all nodes. ctdb nodestatus defaults to providing status for only the current node. If PNN-LIST is provided then status is given for the indicated node(s).
By default, ctdb nodestatus gathers status from the local node. However, if invoked with "-n all" (or similar) then status is gathered from the given node(s). In particular ctdb nodestatus all and ctdb nodestatus -n all will produce different output. It is possible to provide 2 different nodespecs (with and without "-n") but the output is usually confusing!
A common invocation in scripts is ctdb nodestatus all to check whether all nodes in a cluster are healthy.
Example
# ctdb nodestatus
pnn:0 10.0.0.30        OK (THIS NODE)

# ctdb nodestatus all
Number of nodes:2
pnn:0 10.0.0.30        OK (THIS NODE)
pnn:1 10.0.0.31        OK
This command shows the pnn of the node which is currently the recmaster.
Note: If the queried node is INACTIVE then the status might not be current.
This command shows the uptime for the ctdb daemon, when the last recovery or IP failover completed, and how long it took. If the "duration" is shown as a negative number, a recovery/failover is currently in progress and it started that many seconds ago.
Example
# ctdb uptime
Current time of node          :                Thu Oct 29 10:38:54 2009
Ctdbd start time              : (000 16:54:28) Wed Oct 28 17:44:26 2009
Time of last recovery/failover: (000 16:53:31) Wed Oct 28 17:45:23 2009
Duration of last recovery/failover: 2.248552 seconds
This command lists the IP addresses of all the nodes in the cluster.
Example
# ctdb listnodes
192.168.2.200
192.168.2.201
192.168.2.202
192.168.2.203
Show the current NAT gateway master and the status of all nodes in the current NAT gateway group. See the NAT GATEWAY section in ctdb(7) for more details.
Example
# ctdb natgwlist
0 192.168.2.200
Number of nodes:4
pnn:0 192.168.2.200       OK (THIS NODE)
pnn:1 192.168.2.201       OK
pnn:2 192.168.2.202       OK
pnn:3 192.168.2.203       OK
This command will "ping" specified CTDB nodes in the cluster to verify that they are running.
Example
# ctdb ping -n all
response from 0 time=0.000054 sec  (3 clients)
response from 1 time=0.000144 sec  (2 clients)
response from 2 time=0.000105 sec  (2 clients)
response from 3 time=0.000114 sec  (2 clients)
This command will display the list of network interfaces, which could host public addresses, along with their status.
Example
# ctdb ifaces
Interfaces on node 0
name:eth5 link:up references:2
name:eth4 link:down references:0
name:eth3 link:up references:1
name:eth2 link:up references:1

# ctdb ifaces -Y
:Name:LinkStatus:References:
:eth5:1:2
:eth4:0:0
:eth3:1:1
:eth2:1:1
This command will display the list of public addresses that are provided by the cluster and which physical node is currently serving each IP. By default this command will ONLY show those public addresses that are known to the node itself. To see the full list of all public IPs across the cluster you must use "ctdb ip -n all".
Example
# ctdb ip
Public IPs on node 0
172.31.91.82 node[1] active[] available[eth2,eth3] configured[eth2,eth3]
172.31.91.83 node[0] active[eth3] available[eth2,eth3] configured[eth2,eth3]
172.31.91.84 node[1] active[] available[eth2,eth3] configured[eth2,eth3]
172.31.91.85 node[0] active[eth2] available[eth2,eth3] configured[eth2,eth3]
172.31.92.82 node[1] active[] available[eth5] configured[eth4,eth5]
172.31.92.83 node[0] active[eth5] available[eth5] configured[eth4,eth5]
172.31.92.84 node[1] active[] available[eth5] configured[eth4,eth5]
172.31.92.85 node[0] active[eth5] available[eth5] configured[eth4,eth5]

# ctdb ip -Y
:Public IP:Node:ActiveInterface:AvailableInterfaces:ConfiguredInterfaces:
:172.31.91.82:1::eth2,eth3:eth2,eth3:
:172.31.91.83:0:eth3:eth2,eth3:eth2,eth3:
:172.31.91.84:1::eth2,eth3:eth2,eth3:
:172.31.91.85:0:eth2:eth2,eth3:eth2,eth3:
:172.31.92.82:1::eth5:eth4,eth5:
:172.31.92.83:0:eth5:eth5:eth4,eth5:
:172.31.92.84:1::eth5:eth4,eth5:
:172.31.92.85:0:eth5:eth5:eth4,eth5:
This command will display details about the specified public addresses.
Example
# ctdb ipinfo 172.31.92.85
Public IP[172.31.92.85] info on node 0
IP:172.31.92.85
CurrentNode:0
NumInterfaces:2
Interface[1]: Name:eth4 Link:down References:0
Interface[2]: Name:eth5 Link:up References:2 (active)
This command displays which scripts were run in the previous monitoring cycle and the result of each script. If a script failed with an error, causing the node to become unhealthy, the output from that script is also shown.
Example
# ctdb scriptstatus
7 scripts were executed last monitoring cycle
00.ctdb              Status:OK    Duration:0.056 Tue Mar 24 18:56:57 2009
10.interface         Status:OK    Duration:0.077 Tue Mar 24 18:56:57 2009
11.natgw             Status:OK    Duration:0.039 Tue Mar 24 18:56:57 2009
20.multipathd        Status:OK    Duration:0.038 Tue Mar 24 18:56:57 2009
31.clamd             Status:DISABLED
40.vsftpd            Status:OK    Duration:0.045 Tue Mar 24 18:56:57 2009
41.httpd             Status:OK    Duration:0.039 Tue Mar 24 18:56:57 2009
50.samba             Status:ERROR    Duration:0.082 Tue Mar 24 18:56:57 2009
  OUTPUT:ERROR: Samba tcp port 445 is not responding
This command is used to disable an eventscript.
This will take effect the next time the eventscripts are being executed so it can take a short while until this is reflected in 'scriptstatus'.
This command is used to enable an eventscript.
This will take effect the next time the eventscripts are being executed so it can take a short while until this is reflected in 'scriptstatus'.
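For illustration, disabling and later re-enabling an event script (the script name 31.clamd is taken from the scriptstatus example above; substitute the script you want to control):
# ctdb disablescript 31.clamd
# ctdb enablescript 31.clamd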
List all tuneable variables, except the values of the obsolete tunables like VacuumMinInterval. The obsolete tunables can be retrieved only explicitly with the "ctdb getvar" command.
Example
# ctdb listvars
MaxRedirectCount        = 3
SeqnumInterval          = 1000
ControlTimeout          = 60
TraverseTimeout         = 20
KeepaliveInterval       = 5
KeepaliveLimit          = 5
RecoverTimeout          = 20
RecoverInterval         = 1
ElectionTimeout         = 3
TakeoverTimeout         = 9
MonitorInterval         = 15
TickleUpdateInterval    = 20
EventScriptTimeout      = 30
EventScriptTimeoutCount = 1
RecoveryGracePeriod     = 120
RecoveryBanPeriod       = 300
DatabaseHashSize        = 100001
DatabaseMaxDead         = 5
RerecoveryTimeout       = 10
EnableBans              = 1
DeterministicIPs        = 0
LCP2PublicIPs           = 1
ReclockPingPeriod       = 60
NoIPFailback            = 0
DisableIPFailover       = 0
VerboseMemoryNames      = 0
RecdPingTimeout         = 60
RecdFailCount           = 10
LogLatencyMs            = 0
RecLockLatencyMs        = 1000
RecoveryDropAllIPs      = 120
VerifyRecoveryLock      = 1
VacuumInterval          = 10
VacuumMaxRunTime        = 30
RepackLimit             = 10000
VacuumLimit             = 5000
VacuumFastPathCount     = 60
MaxQueueDropMsg         = 1000000
UseStatusEvents         = 0
AllowUnhealthyDBRead    = 0
StatHistoryInterval     = 1
DeferredAttachTO        = 120
AllowClientDBAttach     = 1
RecoverPDBBySeqNum      = 0
Get the runtime value of a tuneable variable.
Example
# ctdb getvar MaxRedirectCount
MaxRedirectCount = 3
Set the runtime value of a tuneable variable.
Example: ctdb setvar MaxRedirectCount 5
This command shows which node is currently the LVSMASTER. The LVSMASTER is the node in the cluster which drives the LVS system and which receives all incoming traffic from clients.
LVS is the mode where the entire CTDB/Samba cluster uses a single ip address for the entire cluster. In this mode all clients connect to one specific node which will then multiplex/loadbalance the clients evenly onto the other nodes in the cluster. This is an alternative to using public ip addresses. See the manpage for ctdbd for more information about LVS.
This command shows which nodes in the cluster are currently active in the LVS configuration, i.e. which nodes the single ip address is currently being loadbalanced across.
LVS will by default only loadbalance across those nodes that are both LVS capable and HEALTHY. The exception is when all nodes are UNHEALTHY, in which case LVS will loadbalance across all UNHEALTHY nodes as well. LVS will never use nodes that are DISCONNECTED, STOPPED, BANNED or DISABLED.
Example output:
2:10.0.0.13
3:10.0.0.14
This command shows the capabilities of the current node. See the CAPABILITIES section in ctdb(7) for more details.
Example output:
RECMASTER: YES
LMASTER: YES
LVS: NO
NATGW: YES
Collect statistics from the CTDB daemon about how many calls it has served. Information about various fields in statistics can be found in ctdb-statistics(7).
Example
# ctdb statistics
CTDB version 1
 num_clients                        3
 frozen                             0
 recovering                         0
 client_packets_sent           360489
 client_packets_recv           360466
 node_packets_sent             480931
 node_packets_recv             240120
 keepalive_packets_sent             4
 keepalive_packets_recv             3
 node
     req_call                       2
     reply_call                     2
     req_dmaster                    0
     reply_dmaster                  0
     reply_error                    0
     req_message                   42
     req_control               120408
     reply_control             360439
 client
     req_call                       2
     req_message                   24
     req_control               360440
 timeouts
     call                           0
     control                        0
     traverse                       0
 total_calls                        2
 pending_calls                      0
 lockwait_calls                     0
 pending_lockwait_calls             0
 memory_used                     5040
 max_hop_count                      0
 max_call_latency                   4.948321 sec
 max_lockwait_latency               0.000000 sec
This command is used to clear all statistics counters in a node.
Example: ctdb statisticsreset
Display statistics about the database DB. Information about various fields in dbstatistics can be found in ctdb-statistics(7).
Example
# ctdb dbstatistics locking.tdb
DB Statistics: locking.tdb
 ro_delegations                     0
 ro_revokes                         0
 locks
     total                      14356
     failed                         0
     current                        0
     pending                        0
 hop_count_buckets: 28087 2 1 0 0 0 0 0 0 0 0 0 0 0 0 0
 lock_buckets: 0 14188 38 76 32 19 3 0 0 0 0 0 0 0 0 0
 locks_latency      MIN/AVG/MAX     0.001066/0.012686/4.202292 sec out of 14356
 Num Hot Keys:     1
     Count:8 Key:ff5bd7cb3ee3822edc1f0000000000000000000000000000
This command is used to show the filename of the reclock file that is used.
Example output:
Reclock file:/gpfs/.ctdb/shared
This command is used to modify, or clear, the file that is used as the reclock file at runtime. When this command is used, the reclock file checks are disabled. To re-enable the checks the administrator needs to activate the "VerifyRecoveryLock" tunable using "ctdb setvar".
If run with no parameter this will remove the reclock file completely. If run with a parameter the parameter specifies the new filename to use for the recovery lock.
This command only affects the runtime settings of a ctdb node and will be lost when ctdb is restarted. For persistent changes to the reclock file setting you must edit /etc/sysconfig/ctdb.
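A hedged illustration (the path is hypothetical; it should live on shared cluster storage). The first command points the recovery lock at a new file, the second clears it:
# ctdb setreclock /clusterfs/.ctdb/reclock
# ctdb setreclock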
Get the current debug level for the node. The debug level controls what information is written to the log file.
The debug levels are mapped to the corresponding syslog levels. When a debug level is set, only those messages at that level and higher levels will be printed.
The list of debug levels from highest to lowest are :
EMERG ALERT CRIT ERR WARNING NOTICE INFO DEBUG
Set the debug level of a node. This controls what information will be logged.
The debuglevel is one of EMERG ALERT CRIT ERR WARNING NOTICE INFO DEBUG
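For example, to raise the debug level of the local node to INFO and later lower it back to ERR:
# ctdb setdebug INFO
# ctdb setdebug ERR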
This command will return the process id of the ctdb daemon.
This command is used to administratively disable a node in the cluster. A disabled node will still participate in the cluster and host clustered TDB records but its public ip address has been taken over by a different node and it no longer hosts any services.
Re-enable a node that has been administratively disabled.
This command is used to administratively STOP a node in the cluster. A STOPPED node is connected to the cluster but will not host any public IP addresses, nor does it participate in the VNNMAP. The difference between a DISABLED node and a STOPPED node is that a STOPPED node does not host any part of the databases, which means that a recovery is required to stop/continue nodes.
Re-start a node that has been administratively stopped.
This command is used to add a new public ip to a node during runtime. This allows public addresses to be added to a cluster without having to restart the ctdb daemons.
Note that this only updates the runtime instance of ctdb. Any changes will be lost next time ctdb is restarted and the public addresses file is re-read. If you want this change to be permanent you must also update the public addresses file manually.
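A hedged example, using a hypothetical address and interface (the address is given as IPADDR/mask):
# ctdb addip 192.168.2.254/24 eth0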
This command is used to remove a public ip from a node during runtime. If this public ip is currently hosted by the node it is being removed from, the ip will first be failed over to another node, if possible, before it is removed.
Note that this only updates the runtime instance of ctdb. Any changes will be lost next time ctdb is restarted and the public addresses file is re-read. If you want this change to be permanent you must also update the public addresses file manually.
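A matching hedged example, removing the hypothetical address added above:
# ctdb delip 192.168.2.254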
This command can be used to manually fail a public ip address to a specific node.
In order to manually override the "automatic" distribution of public ip addresses that ctdb normally provides, this command only works when you have changed the following tunables for the daemon (see the example after the list):
DeterministicIPs = 0
NoIPFailback = 1
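With those tunables in place, a hedged example moving a (hypothetical) public address to node 2:
# ctdb moveip 192.168.2.254 2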
This command will shutdown a specific CTDB daemon.
This command is used to enable/disable the LMASTER capability for a node at runtime. This capability determines whether or not a node can be used as an LMASTER for records in the database. A node that does not have the LMASTER capability will not show up in the vnnmap.
Nodes will by default have this capability, but it can be stripped off nodes by the setting in the sysconfig file or by using this command.
Once this setting has been enabled/disabled, you need to perform a recovery for it to take effect.
See also "ctdb getcapabilities"
This command is used to enable/disable the RECMASTER capability for a node at runtime. This capability determines whether or not a node can be used as the recovery master for the cluster. A node that does not have the RECMASTER capability cannot win a recmaster election. A node that is already the recmaster for the cluster when the capability is stripped from the node will remain the recmaster until the next cluster election.
Nodes will by default have this capability, but it can be stripped off nodes by the setting in the sysconfig file or by using this command.
See also "ctdb getcapabilities"
This command is used when adding new nodes, or removing existing nodes from an existing cluster.
Procedure to add a node:
1, To expand an existing cluster, first ensure with 'ctdb status' that all nodes are up and running and that they are all healthy. Do not try to expand a cluster unless it is completely healthy!
2, On all nodes, edit /etc/ctdb/nodes and add the new node as the last entry to the file. The new node MUST be added to the end of this file!
3, Verify that all the nodes have identical /etc/ctdb/nodes files after you edited them and added the new node!
4, Run 'ctdb reloadnodes' to force all nodes to reload the nodesfile.
5, Use 'ctdb status' on all nodes and verify that they now show the additional node.
6, Install and configure the new node and bring it online.
Procedure to remove a node:
1, To remove a node from an existing cluster, first ensure with 'ctdb status' that all nodes, except the node to be deleted, are up and running and that they are all healthy. Do not try to remove a node from a cluster unless the cluster is completely healthy!
2, Shutdown and poweroff the node to be removed.
3, On all other nodes, edit the /etc/ctdb/nodes file and comment out the node to be removed. Do not delete the line for that node, just comment it out by adding a '#' at the beginning of the line.
4, Run 'ctdb reloadnodes' to force all nodes to reload the nodesfile.
5, Use 'ctdb status' on all nodes and verify that the deleted node no longer shows up in the list.
This command reloads the public addresses configuration file on the specified nodes. When it completes addresses will be reconfigured and reassigned across the cluster as necessary.
This command lists all clustered TDB databases that the CTDB daemon has attached to. Some databases are flagged as PERSISTENT, this means that the database stores data persistently and the data will remain across reboots. One example of such a database is secrets.tdb where information about how the cluster was joined to the domain is stored.
If a PERSISTENT database is not in a healthy state the database is flagged as UNHEALTHY. If there is at least one completely healthy node running in the cluster, it is possible that the content will be restored automatically by a recovery run. Otherwise an administrator needs to analyze the problem.
See also "ctdb getdbstatus", "ctdb backupdb", "ctdb restoredb", "ctdb dumpbackup", "ctdb wipedb", "ctdb setvar AllowUnhealthyDBRead 1" and (if samba or tdb-utils are installed) "tdbtool check".
Most databases are not persistent and only store the state information that the currently running samba daemons need. These databases are always wiped when ctdb/samba starts and when a node is rebooted.
Example
# ctdb getdbmap
Number of databases:10
dbid:0x435d3410 name:notify.tdb path:/var/ctdb/notify.tdb.0
dbid:0x42fe72c5 name:locking.tdb path:/var/ctdb/locking.tdb.0
dbid:0x1421fb78 name:brlock.tdb path:/var/ctdb/brlock.tdb.0
dbid:0x17055d90 name:connections.tdb path:/var/ctdb/connections.tdb.0
dbid:0xc0bdde6a name:sessionid.tdb path:/var/ctdb/sessionid.tdb.0
dbid:0x122224da name:test.tdb path:/var/ctdb/test.tdb.0
dbid:0x2672a57f name:idmap2.tdb path:/var/ctdb/persistent/idmap2.tdb.0 PERSISTENT
dbid:0xb775fff6 name:secrets.tdb path:/var/ctdb/persistent/secrets.tdb.0 PERSISTENT
dbid:0xe98e08b6 name:group_mapping.tdb path:/var/ctdb/persistent/group_mapping.tdb.0 PERSISTENT
dbid:0x7bbbd26c name:passdb.tdb path:/var/ctdb/persistent/passdb.tdb.0 PERSISTENT

# ctdb getdbmap  # example for unhealthy database
Number of databases:1
dbid:0xb775fff6 name:secrets.tdb path:/var/ctdb/persistent/secrets.tdb.0 PERSISTENT UNHEALTHY

# ctdb -Y getdbmap
:ID:Name:Path:Persistent:Unhealthy:
:0x7bbbd26c:passdb.tdb:/var/ctdb/persistent/passdb.tdb.0:1:0:
Copy the contents of database DB to FILE. FILE can later be read back using restoredb. This is mainly useful for backing up persistent databases such as secrets.tdb and similar.
This command restores a persistent database that was previously backed up using backupdb. By default the data will be restored back into the same database as it was created from. By specifying dbname you can restore the data into a different database.
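A hedged example pairing the two commands, using a hypothetical backup file path:
# ctdb backupdb secrets.tdb /tmp/secrets.tdb.backup
# ctdb restoredb /tmp/secrets.tdb.backup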
In addition to the normal logging to a log file, CTDB also keeps an in-memory ringbuffer containing the most recent log entries for all log levels (except DEBUG).
This is useful since it allows for keeping continuous logs to a file at a reasonable non-verbose level, while shortly after an incident has occurred, a much more detailed log can be pulled from memory. This can allow you to avoid having to reproduce an issue because the on-disk logs are of insufficient detail.
This command extracts from memory all messages at or below the specified log level and prints them to the screen. If the level is not specified, it defaults to NOTICE.
By default, logs are extracted from the main CTDB daemon. If the recoverd option is given then logs are extracted from the recovery daemon.
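For example, to pull all messages at or below level INFO from the main daemon's ringbuffer, and then the same from the recovery daemon:
# ctdb getlog INFO
# ctdb getlog INFO recoverd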
This command clears the in-memory logging ringbuffer.
By default, logs are cleared in the main CTDB daemon. If the recoverd option is given then logs are cleared in the recovery daemon.
This command will enable the read-only record support for a database. This is an experimental feature to improve performance for contended records primarily in locking.tdb and brlock.tdb. When enabling this feature you must set it on all nodes in the cluster.
This command will enable the sticky record support for the specified database. This is an experimental feature to improve performance for contended records primarily in locking.tdb and brlock.tdb. When enabling this feature you must set it on all nodes in the cluster.
Internal commands are used by CTDB's scripts and are not required for managing a CTDB cluster. Their parameters and behaviour are subject to change.
Show TCP connections that are registered with CTDB to be "tickled" if there is a failover.
Send out a gratuitous ARP for the specified IP address through the specified interface. This command is mainly used by the ctdb eventscripts.
Read a list of TCP connections, one per line, from standard input and terminate each connection. A connection is specified as:
SRC-IPADDR:SRC-PORT DST-IPADDR:DST-PORT
Each connection is terminated by issuing a TCP RST to the SRC-IPADDR:SRC-PORT endpoint.
A single connection can be specified on the command-line rather than on standard input.
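A hedged example using hypothetical endpoints, first via standard input and then via the command line:
# echo "10.0.0.50:49152 10.0.0.30:445" | ctdb killtcp
# ctdb killtcp 10.0.0.50:49152 10.0.0.30:445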
Delete KEY from DB.
Print the value associated with KEY in DB.
Store KEY in DB with contents of FILE as the associated value.
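A hedged example tying the three commands together (the database, key and file names are hypothetical):
# ctdb pstore idmap2.tdb mykey /tmp/value.dat
# ctdb pfetch idmap2.tdb mykey
# ctdb pdelete idmap2.tdb mykey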
Read a list of key-value pairs, one per line from FILE, and store them in DB using a single transaction. An empty value is equivalent to deleting the given key.
The key and value should be separated by spaces or tabs. Each key/value should be a printable string enclosed in double-quotes.
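A hedged sketch of a two-line input file (names and contents are hypothetical; the empty value deletes "key2"), followed by the invocation:
"key1" "value one"
"key2" ""
# ctdb ptrans idmap2.tdb /tmp/pairs.txt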
Print the runstate of the specified node. Runstates are used to serialise important state transitions in CTDB, particularly during startup.
If one or more optional runstate arguments are specified then the node must be in one of these runstates for the command to succeed.
Example
# ctdb runstate RUNNING
Set the internal state of network interface IFACE. This is typically used in the 10.interface script in the "monitor" event.
Example: ctdb setifacelink eth0 up
Enable or disable the NAT gateway master capability on a node.
Send a TCP tickle to the source host for the specified TCP connection. A TCP tickle is a TCP ACK packet with an invalid sequence and acknowledgement number; when it is received by the source host, the host responds with an immediate correct ACK to the other end.
TCP tickles are useful to "tickle" clients after an IP failover has occurred, since they make a client immediately recognize that its TCP connection has been disrupted and needs to be reestablished. This greatly speeds up the time it takes for a client to detect and reestablish its connection after an IP failover in the ctdb cluster.
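A hedged example for a hypothetical client connection to a server on port 445, assuming the connection is given as SRC-IPADDR:SRC-PORT DST-IPADDR:DST-PORT (as for killtcp above):
# ctdb tickle 10.0.0.50:49152 10.0.0.30:445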
Display the CTDB version.
These commands are primarily used for CTDB development and testing and should not be used for normal administration.
--print-emptyrecords
This enables printing of empty records when dumping databases with the catdb, cattdb and dumpdbbackup commands. Records with an empty data segment are considered deleted by ctdb and are cleaned up by the vacuuming mechanism, so this switch can come in handy for debugging the vacuuming behaviour.
--print-datasize
This lets database dumps (catdb, cattdb, dumpdbbackup) print the size of the record data instead of dumping the data contents.
--print-lmaster
This lets catdb print the lmaster for each record.
--print-hash
This lets database dumps (catdb, cattdb, dumpdbbackup) print the hash for each record.
--print-recordflags
This lets catdb and dumpdbbackup print the record flags for each record. Note that cattdb always prints the flags.
This command checks if a specific process exists on the CTDB host. This is mainly used by Samba to check if remote instances of samba are still running or not.
This command displays more details about a database.
Example
# ctdb getdbstatus test.tdb.0
dbid: 0x122224da
name: test.tdb
path: /var/ctdb/test.tdb.0
PERSISTENT: no
HEALTH: OK

# ctdb getdbstatus registry.tdb  # with a corrupted TDB
dbid: 0xf2a58948
name: registry.tdb
path: /var/ctdb/persistent/registry.tdb.0
PERSISTENT: yes
HEALTH: NO-HEALTHY-NODES - ERROR - Backup of corrupted TDB in '/var/ctdb/persistent/registry.tdb.0.corrupted.20091208091949.0Z'
Print a dump of the clustered TDB database DB.
Print a dump of the contents of the local TDB database DB.
Print a dump of the contents from database backup FILE, similar to catdb.
Remove all contents of database DB.
This command will trigger the recovery daemon to do a cluster recovery.
This command will force the recovery master to perform a full ip reallocation process and redistribute all ip addresses. This is useful to "reset" the allocations back to their default state if they have been changed using the "moveip" command. While a "recover" will also perform this reallocation, a recovery is much more heavyweight since it will also rebuild all the databases.
This command returns the monitoring mode of a node. The monitoring mode is either ACTIVE or DISABLED. Normally a node will continuously monitor that all other expected nodes are in fact connected and that they respond to commands.
ACTIVE - This is the normal mode. The node is actively monitoring all other nodes, both that the transport is connected and also that the node responds to commands. If a node becomes unavailable, it will be marked as DISCONNECTED and a recovery is initiated to restore the cluster.
DISABLED - This node is not monitoring whether other nodes are available. In this mode a node failure will not be detected and no recovery will be performed. This mode is useful when, for debugging purposes, one wants to attach GDB to a ctdb process but wants to prevent the rest of the cluster from marking this node as DISCONNECTED and performing a recovery.
This command can be used to explicitly disable/enable monitoring mode on a node. The main purpose is if one wants to attach GDB to a running ctdb daemon but wants to prevent the other nodes from marking it as DISCONNECTED and issuing a recovery. To do this, set monitoring mode to 0 on all nodes before attaching with GDB. Remember to set monitoring mode back to 1 afterwards.
Create a new CTDB database called DBNAME and attach to it on all nodes.
Detach specified non-persistent database(s) from the cluster. This command will disconnect specified database(s) on all nodes in the cluster. This command should only be used when none of the specified database(s) are in use.
All nodes should be active and the tunable AllowClientDBAttach should be set to 0 on all nodes before detaching databases.
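A hedged example that attaches a scratch database, disables client attachments as required above, and then detaches it (the database name is hypothetical):
# ctdb attach scratch.tdb
# ctdb -n all setvar AllowClientDBAttach 0
# ctdb detach scratch.tdb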
This is a debugging command. This command will make the ctdb daemon write a full memory allocation map to standard output.
This is a debugging command. This command will dump the talloc memory allocation tree for the recovery daemon to standard output.
Thaw a previously frozen node.
This is a debugging command. This command can be used to manually invoke and run the eventscripts with arbitrary arguments.
Administratively ban a node for BANTIME seconds. The node will be unbanned after BANTIME seconds have elapsed.
A banned node does not participate in the cluster. It does not host any records for the clustered TDB and does not host any public IP addresses.
Nodes are automatically banned if they misbehave. For example, a node may be banned if it causes too many cluster recoveries.
To administratively exclude a node from a cluster use the stop command.
This command is used to unban a node that has either been administratively banned using the ban command or has been automatically banned.
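For example, to administratively ban the local node for 300 seconds and then lift the ban early:
# ctdb ban 300
# ctdb unban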
This command marks the given nodes as rebalance targets in the LCP2 IP allocation algorithm. The reloadips command will do this as necessary so this command should not be needed.
This command checks whether a set of srvid message ports are registered on the node or not. The command takes a list of values to check.
Example
# ctdb check_srvids 1 2 3 14765
Server id 0:1 does not exist
Server id 0:2 does not exist
Server id 0:3 does not exist
Server id 0:14765 exists
This documentation was written by Ronnie Sahlberg, Amitay Isaacs, Martin Schwenke
Copyright © 2007 Andrew Tridgell, Ronnie Sahlberg
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, see \m[blue]http://www.gnu.org/licenses\m[].