Conditionals in Logstash keep ingestion predictable by routing events to the right destination, tagging records for search and alerting, and dropping low-value noise before it consumes storage and CPU.
Conditional expressions are evaluated per event inside the filter and output sections. Fields are referenced using [field] or [field][subfield] and tested with operators such as ==, in, and =~ to branch with if, else if, and else blocks.
Comparisons are case-sensitive unless normalized, missing fields evaluate as false/nil, and expensive regex patterns can slow busy pipelines. The drop filter permanently discards events, so matching logic should be validated against representative data, and configuration should be syntax-tested before restarting the Logstash service.
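As a minimal sketch of the syntax, assuming a hypothetical [app][env] field and message text, the ==, in, and =~ operators combine with if, else if, and else like this:

filter {
  # branch on a hypothetical [app][env] field
  if [app][env] == "production" {
    mutate { add_tag => ["prod"] }
  } else if [app][env] in ["staging", "qa"] {
    mutate { add_tag => ["non_prod"] }
  } else if [message] =~ "DEPRECATED" {
    # regex match against the raw message text
    mutate { add_tag => ["deprecated_api"] }
  } else {
    mutate { add_tag => ["unclassified"] }
  }
}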
Steps to use Logstash conditionals:
- Create a pipeline configuration file at /etc/logstash/conf.d/65-conditional.conf.
The example decodes one JSON object per log line (codec => json) and expects [log][level] for branching.
Events matched by the drop block are discarded and cannot be recovered from Logstash.
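For reference, hypothetical input lines such as the following would exercise the error, missing-level, drop, and invalid-JSON branches of the configuration below:

{"message": "db write failed", "log": {"level": "ERROR"}}
{"message": "worker started"}
{"message": "cache probe", "log": {"level": "debug"}}
not valid json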
input {
  file {
    id => "app_log"
    path => "/var/lib/logstash/examples/conditional.json"
    start_position => "beginning"
    sincedb_path => "/var/lib/logstash/sincedb-conditional"
    codec => json
  }
}

filter {
  if [log][file][path] == "/var/lib/logstash/examples/conditional.json" {
    if [log][level] {
      mutate {
        id => "normalize_level"
        lowercase => ["[log][level]"]
      }
    }
    if "_jsonparsefailure" in [tags] {
      mutate {
        id => "tag_invalid_json"
        add_tag => ["invalid_json"]
      }
    } else if ![log][level] {
      mutate {
        id => "tag_missing_level"
        add_tag => ["missing_level"]
      }
    } else if [log][level] == "error" {
      mutate {
        id => "tag_error"
        add_tag => ["error"]
      }
    } else if [log][level] in ["debug", "trace"] {
      drop {
        id => "drop_debug_trace"
      }
    }
  }
}

output {
  if [log][file][path] == "/var/lib/logstash/examples/conditional.json" {
    if "_jsonparsefailure" in [tags] {
      elasticsearch {
        id => "es_invalid_json"
        hosts => ["http://elasticsearch.example.net:9200"]
        index => "app-invalid-json-%{+YYYY.MM.dd}"
      }
    } else if "error" in [tags] {
      elasticsearch {
        id => "es_error"
        hosts => ["http://elasticsearch.example.net:9200"]
        index => "app-error-%{+YYYY.MM.dd}"
      }
    } else {
      elasticsearch {
        id => "es_default"
        hosts => ["http://elasticsearch.example.net:9200"]
        index => "app-conditional-%{+YYYY.MM.dd}"
      }
    }
  }
}

- Test the pipeline configuration for syntax errors.
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit
Configuration OK
- Restart the Logstash service to apply the conditional logic.
$ sudo systemctl restart logstash
- Check the Logstash service status for active state and recent errors.
$ sudo systemctl status logstash --no-pager
● logstash.service - logstash
     Loaded: loaded (/usr/lib/systemd/system/logstash.service; enabled; preset: enabled)
     Active: active (running) since Wed 2026-01-07 22:07:58 UTC; 3s ago
   Main PID: 35154 (java)
      Tasks: 31 (limit: 28486)
     Memory: 421.4M (peak: 421.4M)
        CPU: 13.967s
##### snipped #####
- Check pipeline plugin metrics to confirm conditional branches are matching expected events.
$ curl -s http://localhost:9600/_node/stats/pipelines?pretty
{
  "status" : "green",
  "pipelines" : {
    "main" : {
      "events" : {
        "in" : 223,
        "filtered" : 223,
        "out" : 222,
        "duration_in_millis" : 2096
      },
      "plugins" : {
        "filters" : [
          { "id" : "tag_error", "events" : { "in" : 1, "out" : 1 } },
          { "id" : "tag_missing_level", "events" : { "in" : 1, "out" : 1 } },
          { "id" : "tag_invalid_json", "events" : { "in" : 1, "out" : 1 } },
          { "id" : "drop_debug_trace", "events" : { "in" : 1, "out" : 0 } }
        ],
        "outputs" : [
          { "id" : "es_default", "events" : { "in" : 1, "out" : 1 } },
          { "id" : "es_error", "events" : { "in" : 1, "out" : 1 } },
          { "id" : "es_invalid_json", "events" : { "in" : 1, "out" : 1 } }
        ]
      }
    }
  }
}
A non-zero events.in with events.out of 0 on drop_debug_trace indicates events are being dropped by the conditional branch.
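To pull out just one branch's counters instead of the full payload, the stats endpoint can be filtered with jq (assuming jq is installed; the plugin id matches the id set in the pipeline configuration):

$ curl -s http://localhost:9600/_node/stats/pipelines | jq '.pipelines.main.plugins.filters[] | select(.id == "drop_debug_trace").events'

The command prints only the events object for the drop_debug_trace filter, which makes it easier to watch the in and out counters while test data is replayed.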
