Daemon that manages mounting, unmounting, recovery and posix locks
gfs_controld [OPTION]...
GFS lives in the kernel, and the cluster infrastructure (cluster membership and group management) lives in user space. GFS in the kernel needs to adjust to and recover from certain cluster events, and it is the job of gfs_controld to receive these events and reconfigure gfs as needed. gfs_controld controls and configures gfs through sysfs files that are considered gfs-internal interfaces, not a general API/ABI.
Mounting, unmounting and node failure are the main cluster events that gfs_controld controls. It also manages the assignment of journals to different nodes. The mount.gfs and umount.gfs programs communicate with gfs_controld to join/leave the mount group and receive the necessary options for the kernel mount.
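For illustration (the device and mount point below are placeholders), a gfs mount started through mount(8) invokes mount.gfs, which contacts gfs_controld before the kernel mount proceeds:

mount -t gfs /dev/vg0/lv0 /mnt/gfs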
GFS also sends all posix lock operations to gfs_controld for processing. gfs_controld manages cluster-wide posix locks for gfs and passes results back to gfs in the kernel.
Optional cluster.conf settings are placed in the <gfs_controld> section.
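As a sketch of where the section sits (the surrounding element names and values here are illustrative), the settings appear inside the top-level <cluster> element of cluster.conf:

<cluster name="example" config_version="1">
  <gfs_controld plock_rate_limit="100"/>
</cluster>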
Heavy use of plocks can result in high network load. The rate at which plocks are processed is limited by the plock_rate_limit setting, which caps plock throughput to keep the network load from becoming excessive. The value is the maximum number of plock operations a single node will process each second. To get maximum posix locking performance, disable rate limiting by setting the value to 0. The default is 100.
<gfs_controld plock_rate_limit="100"/>
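For example, to disable rate limiting entirely:

<gfs_controld plock_rate_limit="0"/>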
To optimize performance for repeated locking of the same locks by processes on a single node, plock_ownership can be set to 1. The default is 0. When this is enabled, gfs_controld cannot interoperate with older versions that did not support the option.
<gfs_controld plock_ownership="1"/>
Three options tune the behavior of the plock_ownership optimization. All three relate to the caching of lock ownership state; specifically, they define how aggressively cached ownership state is dropped. Caching more ownership state can result in better performance, at the expense of more memory usage.
drop_resources_time is the frequency of drop attempts in milliseconds. Default 10000 (10 sec).
drop_resources_count is the maximum number of items to drop from the cache each time. Default 10.
drop_resources_age is the time in milliseconds a cached item should be unused before being considered for dropping. Default 10000 (10 sec).
<gfs_controld drop_resources_time="10000" drop_resources_count="10" drop_resources_age="10000"/>
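For example (the values below are illustrative only), to drop cached ownership state less aggressively, keeping entries around longer, one might lengthen the interval and age and reduce the per-pass count:

<gfs_controld drop_resources_time="30000" drop_resources_count="5" drop_resources_age="30000"/>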
-D
Run the daemon in the foreground and print debug statements to stdout.
-P
Enable posix lock debugging messages.
-w
Disable the "withdraw" feature.
-p
Disable posix lock handling.
-l <num>
Limit the rate at which posix lock messages are sent to <num> messages per second. 0 disables the limit, giving maximum posix lock performance. Default 100.
-o <num>
Enable (1) or disable (0) plock ownership optimization. Default 0. All nodes must run with the same value.
-t <ms>
Ownership cache tuning, drop resources time (milliseconds). Default 10000.
-c <num>
Ownership cache tuning, drop resources count. Default 10.
-a <ms>
Ownership cache tuning, drop resources age (milliseconds). Default 10000.
-h
Print out a help message describing available options, then exit.
-V
Print the version information and exit.
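As an illustrative invocation (not a recommendation), the following runs the daemon in the foreground with plock debugging enabled and rate limiting disabled:

gfs_controld -D -P -l 0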
The gfs_controld daemon keeps a circular buffer of debug messages that can be dumped with the 'group_tool dump gfs' command.
The state of all gfs posix locks can also be dumped from gfs_controld with the 'group_tool dump plocks <fsname>' command.
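For example, where <fsname> is the name of the mounted gfs filesystem:

group_tool dump gfs
group_tool dump plocks <fsname>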