Debugging Logstash pipelines helps pinpoint where events are being dropped, mis-parsed, or losing fields before they reach downstream outputs like Elasticsearch, files, or message queues.
A Logstash pipeline is built by concatenating the configuration files for its input, filter, and output sections, compiling them, and running the resulting pipeline inside the Logstash JVM. Adding a temporary stdout output with the rubydebug codec prints the complete event structure at the end of the pipeline, making it easier to confirm what the pipeline actually produced.
Commands assume a packaged Logstash installation on Linux managed by systemd, with the main pipeline reading /etc/logstash/conf.d/*.conf. When pipelines are defined in /etc/logstash/pipelines.yml with custom path.config locations, place the debug output in the matching pipeline directory instead. Printing full events can expose sensitive data and can generate large amounts of logs, so debugging output should be enabled only during troubleshooting.
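For reference, a multi-pipeline pipelines.yml might look like the following (the pipeline IDs and the second path are hypothetical); the debug output file then belongs in whichever directory the target pipeline's path.config points at:

```yaml
# /etc/logstash/pipelines.yml
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
- pipeline.id: beats-ingest            # hypothetical second pipeline
  path.config: "/etc/logstash/beats.d/*.conf"
```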
# /etc/logstash/conf.d/90-debug.conf
output {
  stdout {
    codec => rubydebug {
      metadata => true
    }
  }
}
A higher-numbered filename keeps the debug output separate and ensures it is loaded after earlier configuration files.
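Logstash concatenates the files matched by /etc/logstash/conf.d/*.conf in lexical filename order, which is why a 90- prefix lands the debug output after typical lower-numbered pipeline files. A quick sketch of that ordering (these filenames are hypothetical):

```shell
# `sort` mirrors the lexical order in which Logstash concatenates the
# matched files, so 90-debug.conf is appended last.
printf '%s\n' 30-filter.conf 90-debug.conf 10-input.conf 50-output.conf | sort
# 10-input.conf
# 30-filter.conf
# 50-output.conf
# 90-debug.conf
```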
Leaving stdout debugging enabled can expose sensitive fields and can rapidly fill log storage on high-throughput pipelines.
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit
Configuration OK
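For one-off experiments, the same rubydebug output can be exercised without touching conf.d at all by passing an inline configuration with -e; this sketch reads events from stdin and blocks until interrupted:

```
$ /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
```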
$ sudo systemctl restart logstash
$ sudo systemctl status logstash --no-pager
● logstash.service - logstash
Loaded: loaded (/usr/lib/systemd/system/logstash.service; enabled; preset: enabled)
Active: active (running) since Wed 2026-01-07 12:41:59 UTC; 8s ago
##### snipped #####
$ sudo journalctl --unit=logstash --follow --no-pager
Jan 07 12:33:55 host logstash[29505]: {
Jan 07 12:33:55 host logstash[29505]: "@timestamp" => 2026-01-07T12:31:58.905229546Z,
Jan 07 12:33:55 host logstash[29505]: "message" => "2026-01-07T12:31:32.591662+00:00 host logstash[28594]: \"@metadata\" => {",
Jan 07 12:33:55 host logstash[29505]: "event" => {
Jan 07 12:33:55 host logstash[29505]: "original" => "2026-01-07T12:31:32.591662+00:00 host logstash[28594]: \"@metadata\" => {"
Jan 07 12:33:55 host logstash[29505]: }
Jan 07 12:33:55 host logstash[29505]: }
Tags like _grokparsefailure and missing fields in the printed event usually point to the filter stage where parsing failed. On some installations, internal logs are written to /var/log/logstash/logstash-plain.log while stdout output is captured by the service manager.
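On busy pipelines, printing every event is often too noisy. A conditional around the stdout output restricts debugging to the failing events only; this sketch assumes grok is the filter in use, so it keys off the _grokparsefailure tag:

```
output {
  # Only print events that grok failed to parse.
  if "_grokparsefailure" in [tags] {
    stdout {
      codec => rubydebug {
        metadata => true
      }
    }
  }
}
```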
$ sudo rm --force /etc/logstash/conf.d/90-debug.conf
Keeping the debug output file in place continues duplicating events to logs and can increase disk usage over time.
$ sudo systemctl restart logstash