Logstash conditionals keep mixed event streams predictable by tagging important records, routing them to the right destination, and discarding low-value noise before it wastes queue space, CPU time, or index storage.
Each event is evaluated against if, else if, and else expressions in the filter and output blocks. Conditions can test nested fields such as [log][level], combine logic with and or or, and use operators such as in, not in, and =~ to branch on lists, tags, and regex matches.
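As an illustrative sketch of how these operators combine (the field values and tags here are hypothetical, not taken from the pipeline below):

```logstash
filter {
  # Equality tests on nested fields, combined with boolean logic.
  if [log][level] == "error" and [deployment] == "production" {
    mutate { add_tag => ["prod_error"] }
  }
  # Membership test against a list of values.
  else if [log][level] not in ["info", "warn", "error"] {
    mutate { add_tag => ["unknown_level"] }
  }
  # Regex match (=~) against field contents; !~ negates the match.
  else if [message] =~ /timeout/ {
    mutate { add_tag => ["timeout"] }
  }
}
```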
A few caveats apply. An expression error stops processing for the affected event; a bare if [field] truthiness check cannot distinguish a missing field from one set to false or null; and ECS-aware inputs may expose the same data under different field names than older pipelines. On package installs, the monitoring API usually listens on port 9600 but can bind to the first free port in the 9600-9700 range, so metric checks may need a different port if 9600 is already in use.
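A minimal sketch of working around the truthiness ambiguity; the [enabled] field and tag names are hypothetical examples:

```logstash
filter {
  if [enabled] {
    mutate { add_tag => ["enabled_true"] }    # field exists and is truthy
  } else if [enabled] == false {
    mutate { add_tag => ["enabled_false"] }   # field exists, explicitly false
  } else {
    # Missing and null both land here; a ruby filter can tell them apart.
    ruby {
      code => "event.tag('enabled_missing') unless event.include?('enabled')"
    }
  }
}
```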
Related: How to use the Logstash mutate filter
Related: How to parse logs with grok in Logstash
Steps to use Logstash conditionals:
- Create a dedicated pipeline file for the conditional branches.
input {
  file {
    id => "app_log"
    path => "/var/lib/logstash/examples/conditional.json"
    start_position => "beginning"
    sincedb_path => "/var/lib/logstash/sincedb-conditional"
    codec => json
  }
}

filter {
  if [log][level] {
    mutate {
      id => "normalize_level"
      lowercase => ["[log][level]"]
    }
  }

  if "_jsonparsefailure" in [tags] {
    mutate {
      id => "route_invalid_json"
      add_tag => ["invalid_json"]
      add_field => { "[@metadata][target_index]" => "app-invalid-json" }
    }
  } else if ![log][level] {
    mutate {
      id => "route_missing_level"
      add_tag => ["missing_level"]
      add_field => { "[@metadata][target_index]" => "app-missing-level" }
    }
  } else if [log][level] == "error" and [deployment] == "production" {
    mutate {
      id => "route_production_error"
      add_tag => ["error"]
      add_field => { "[@metadata][target_index]" => "app-error" }
    }
  } else if [log][level] in ["debug", "trace"] or [message] =~ /^healthcheck/ {
    drop { id => "drop_low_value_events" }
  } else {
    mutate {
      id => "route_default"
      add_field => { "[@metadata][target_index]" => "app-conditional" }
    }
  }
}

output {
  if [@metadata][target_index] {
    elasticsearch {
      id => "es_conditional"
      hosts => ["http://elasticsearch.example.net:9200"]
      index => "%{[@metadata][target_index]}-%{{yyyy.MM.dd}}"
    }
  }
}
Using [@metadata][target_index] avoids repeating the same output branches and keeps routing state out of the indexed document.
The drop filter permanently discards matched events, so keep the condition narrow and test it with representative data first.
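One cautious pattern, sketched here as a suggestion rather than taken from the pipeline above, is to tag drop candidates first and only enable the drop once the indexed sample confirms the condition matches nothing valuable (the would_drop tag is an arbitrary name):

```logstash
filter {
  if [log][level] in ["debug", "trace"] {
    # Tag first and inspect the indexed events; swap in drop {} once
    # the condition provably matches only low-value records.
    mutate { add_tag => ["would_drop"] }
    # drop { }
  }
}
```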
- Seed the watched file with representative events that should match every branch.
$ sudo install --directory --owner logstash --group logstash /var/lib/logstash/examples
$ sudo tee /var/lib/logstash/examples/conditional.json >/dev/null <<'EOF'
{"log":{"level":"ERROR"},"deployment":"production","message":"database connection failed"}
{"log":{"level":"INFO"},"deployment":"production","message":"worker started"}
{"log":{"level":"DEBUG"},"deployment":"production","message":"healthcheck ok"}
{"deployment":"production","message":"missing log level"}
not-json
EOF
$ sudo chown logstash:logstash /var/lib/logstash/examples/conditional.json
The sample file produces one error event, one default-routed event, one dropped event, one missing-level event, and one _jsonparsefailure event.
- Test the pipeline file for syntax errors before reloading the service.
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit -f /etc/logstash/conf.d/65-conditional.conf
Using bundled JDK: /usr/share/logstash/jdk
[2026-04-07T09:12:41,118][INFO ][logstash.codecs.json ] ECS compatibility is enabled but `target` option was not specified.
##### snipped #####
Configuration OK
[2026-04-07T09:12:41,244][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
Current Logstash releases warn about the missing JSON codec target when ECS mode is enabled. The short example keeps fields at the event root for readability, but production pipelines should set a target when parsed keys might collide with ECS field names.
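A sketch of setting the codec target; the [app] namespace is an arbitrary choice, and conditionals must then reference fields under it (for example [app][log][level]):

```logstash
input {
  file {
    path  => "/var/lib/logstash/examples/conditional.json"
    codec => json {
      # Parsed keys land under [app] instead of the event root,
      # so they cannot collide with ECS field names.
      target => "[app]"
    }
  }
}
```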
- Restart the Logstash service to load the conditional logic.
$ sudo systemctl restart logstash
A restart briefly pauses ingestion while the JVM and pipelines initialize.
- Check the Logstash service status for an active pipeline and recent startup health.
$ sudo systemctl status logstash --no-pager
● logstash.service - logstash
     Loaded: loaded (/usr/lib/systemd/system/logstash.service; enabled; preset: enabled)
     Active: active (running) since Tue 2026-04-07 09:13:04 UTC; 5s ago
   Main PID: 35154 (java)
      Tasks: 31 (limit: 28486)
     Memory: 428.9M (peak: 429.1M)
        CPU: 14.218s
##### snipped #####
- Query the monitoring API and confirm the expected filter and output plugin IDs are receiving events.
$ curl --silent --show-error http://localhost:9600/_node/stats/pipelines?pretty=true
{
  "pipelines" : {
    "main" : {
      "plugins" : {
        "filters" : [
          { "id" : "route_invalid_json", "events" : { "in" : 1, "out" : 1 } },
          { "id" : "route_missing_level", "events" : { "in" : 1, "out" : 1 } },
          { "id" : "route_production_error", "events" : { "in" : 1, "out" : 1 } },
          { "id" : "route_default", "events" : { "in" : 1, "out" : 1 } },
          { "id" : "drop_low_value_events", "events" : { "in" : 1, "out" : 0 } }
        ],
        "outputs" : [
          { "id" : "es_conditional", "events" : { "in" : 4, "out" : 4 } }
        ]
      }
    }
  }
}
If localhost:9600 does not respond, query the API on the first free port in the 9600-9700 range or check the service journal for the bound port.
- Confirm the routed events landed in the expected Elasticsearch indices.
$ curl --silent --show-error "http://elasticsearch.example.net:9200/_cat/indices/app-error-*,app-conditional-*,app-missing-level-*,app-invalid-json-*?v"
health status index                        uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   app-conditional-2026.04.07   vjR6wE6CTv-r0Lk6w2_0pg   1   0          1            0      8.7kb          8.7kb
green  open   app-error-2026.04.07         BKXG-W9mTjW2SxJTB12M5A   1   0          1            0      8.8kb          8.8kb
green  open   app-invalid-json-2026.04.07  4MYwF4s7SRm7L0FA8jJeqQ   1   0          1            0      8.5kb          8.5kb
green  open   app-missing-level-2026.04.07 ELB5r6lIRjG7qVd4s_4xZA   1   0          1            0      8.6kb          8.6kb
The dropped debug event does not create another index because it never reaches the output block.
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
