Parse logs, handle events, and make your unstructured text structured.
grok [-d] -f configfile
Grok is software that allows you to easily parse logs and other files. With grok, you can turn unstructured log and event data into structured data.
The grok program is a great tool for parsing log data and program output. You can match any number of complex patterns against any number of inputs (processes and files) and trigger custom reactions.
Daemonize after parsing the config file. Implemented with daemon(3). The default is to stay in the foreground.
Specify a grok config file to use.
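For example, a typical invocation reads a config file and stays in the foreground, while adding -d sends grok to the background (the config path here is only an illustration):

  grok -f /etc/grok.conf
  grok -d -f /etc/grok.conf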
You can call the config file anything you want. A full example config follows below, with documentation on options and defaults.
  # --- Begin sample grok config
  # This is a comment. :)
  #
  # enable or disable debugging. Debug is set false by default.
  # the 'debug' setting is valid at every level.
  # debug values are copied down-scope unless overridden.
  debug: true

  # you can define multiple program blocks in a config file.
  # a program is just a collection of inputs (files, execs) and
  # matches (patterns and reactions).
  program {
    debug: false

    # file with no block. settings block is optional
    file "/var/log/messages"

    # file with a block
    file "/var/log/secure" {
      # follow means to follow a file like 'tail -F' but starts
      # reading at the beginning of the file. A file is followed
      # through truncation, log rotation, and append.
      follow: true
    }

    # execute a command, settings block is optional
    exec "netstat -rn"

    # exec with a block
    exec "ping -c 1 www.google.com" {
      # automatically rerun the exec if it exits, as soon as it exits.
      # default is false
      restart-on-exit: false

      # minimum amount of time from one start to the next start, if we
      # are restarting. Default is no minimum
      minimum-restart-interval: 5

      # run every N seconds, but only if the process has exited.
      # default is not to rerun at all.
      run-interval: 60

      # default is to read process output only from stdout.
      # set this to true to also read from stderr.
      read-stderr: false
    }

    # You can have multiple match {} blocks in your config.
    # They are applied, in order, against every line of input that
    # comes from your exec and file instances in this program block.
    match {
      # match a pattern. This can be any regexp and can include %{foo}
      # grok patterns
      pattern: "some pattern to match"

      # You can have multiple patterns here, any are valid for matching.
      pattern: "another pattern to match"

      # the default reaction is "%{@LINE}" which is the full line
      # matched. the reaction can be a special value of 'none' which
      # means no reaction occurs, or it can be any string. The
      # reaction is emitted to the shell if it is not none.
      reaction: "%{@LINE}"

      # the default shell is 'stdout' which means reactions are
      # printed directly to standard output. Setting the shell to a
      # command string will run that command and pipe reaction data to
      # it.
      #shell: stdout
      shell: "/bin/sh"

      # flush after every write to the shell.
      # The default is not to flush.
      flush: true

      # break-if-match means do not attempt any further matches on
      # this line. the default is false.
      break-if-match: true
    }
  }
  # -- End config
Pattern files contain lists of names and patterns for loading into grok.
Patterns are newline-delimited and have this syntax:

  patternname expression
Any whitespace between the patternname and the expression is ignored.
This is the name of your pattern, which, once loaded, can be referenced in other patterns as %{patternname}.
The expression is used verbatim as a regular expression. You do not need to worry about escaping anything.
  DIGITS \d+
  HELLOWORLD \bhello world\b
The expression engine underneath grok is PCRE. Any syntax that is valid in PCRE is valid in grok.
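As a sketch, once the DIGITS and HELLOWORLD patterns above are available to grok, a match block inside a program block can reference them by name instead of repeating the regular expressions:

  match {
    # %{HELLOWORLD} and %{DIGITS} expand to the expressions defined
    # in the pattern file example above.
    pattern: "%{HELLOWORLD} %{DIGITS}"
    reaction: "%{@LINE}"
  }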
Reactions can reference named patterns from the match. You can also access a few other special values, including:
The line matched.
The substring matched.
The starting position of the match from the beginning of the string.
The ending position of the match.
The length of the match.
The full set of captured patterns, encoded as a JSON dictionary with the structure { pattern: [ array of captures ] }. An array is used because the same named pattern can appear multiple times in a match.
Similar to the above, but includes the start and end position for every named pattern. That structure is:

  { "grok": [
      { "@LINE": { "start": ..., "end": ..., "value": ... } },
      { "@MATCH": { "start": ..., "end": ..., "value": ... } },
      { "patternname": { "start": startpos, "end": endpos, "value": "string" } },
      { "patternname2": { "start": startpos, "end": endpos, "value": "string" } },
      ...
  ] }
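As a concrete sketch, a reaction can combine a named pattern from the match with the special %{@MATCH} value. This assumes a DIGITS pattern like the one defined in the pattern file example above:

  match {
    # Referencing %{DIGITS} in the pattern makes its capture
    # available to the reaction by name.
    pattern: "error code %{DIGITS}"
    # %{DIGITS} is the captured number; %{@MATCH} is the full
    # matched substring.
    reaction: "saw error %{DIGITS} (matched text: %{@MATCH})"
  }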
Reaction filters allow you to mutate the captured data. The following filters are available:
An example of using a filter in a reaction:

  reaction: "echo Matched: %{@MATCH|shellescape}"
Escapes all characters necessary to make the string safe as an unquoted shell argument.
Escapes the characters necessary to make the string safe within double quotes in a shell.
Makes the string safe to embed in a JSON string (escaped according to json.org recommendations).
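For example, combining the shellescape filter with a shell reaction keeps the matched line intact as a single argument to the command being run. This is a minimal sketch; the logger invocation is only an illustration:

  match {
    pattern: "denied"
    # shellescape protects the matched line so it reaches logger
    # as one safely escaped argument.
    reaction: "logger -t grok %{@LINE|shellescape}"
    shell: "/bin/sh"
  }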
pcre(3), pcresyntax(3)
Sample grok configs are available in the grok samples/ directory.
Project site: <http://semicomplete.googlecode.com/wiki/Grok>
Google Code: <http://semicomplete.googlecode.com/>
Issue/Bug Tracker: <http://code.google.com/p/semicomplete/issues/list>
Please send questions to [email protected]. File bugs and feature requests at the following URL:
Issue/Bug Tracker: <http://code.google.com/p/semicomplete/issues/list>
grok was originally written in Perl, then rewritten in C++ with Xpressive (for regular expressions), and then rewritten in C with PCRE.
grok was written by Jordan Sissel.