Using the dissect filter in Logstash turns fixed-format log lines into named fields before they reach Elasticsearch. Structured fields make searches, dashboards, and alert rules more reliable than querying the raw message field, especially when the same delimiter pattern repeats across every event.
The dissect filter tokenizes text with literal delimiters and {field} placeholders instead of regular expressions. Current Elastic documentation still positions it as the faster choice for reliably repeated text, and bracket notation such as [service][name] can write directly into nested fields while the last placeholder captures the remaining text after the final delimiter.
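Conceptually, dissect splits the mapping into literal delimiters and %{field} placeholders, then walks the line matching each delimiter verbatim. A minimal Python sketch of that tokenization (the dissect function here is illustrative, not part of Logstash):

```python
import re

def dissect(mapping, line):
    """Naive sketch of dissect-style tokenization: split the mapping into
    literal delimiters and %{field} placeholders, then walk the line
    matching delimiters verbatim (no regular expressions on the data)."""
    tokens = re.split(r"(%\{[^}]*\})", mapping)  # keep placeholders in the split
    fields, pos, pending = {}, 0, None
    for tok in tokens:
        if tok.startswith("%{"):
            pending = tok[2:-1]          # field name awaiting its value
        elif tok:                        # literal delimiter text
            end = line.find(tok, pos)
            if end == -1:
                return None              # delimiter missing -> dissection fails
            if pending:
                fields[pending] = line[pos:end]
                pending = None
            pos = end + len(tok)
    if pending:                          # last placeholder captures the rest
        fields[pending] = line[pos:]
    return fields

print(dissect("%{ts} %{level} %{service} %{msg}",
              "2026-04-07T08:17:29Z INFO checkout-api completed order=18237"))
```

The sketch mirrors two behaviors described above: every delimiter must appear in order or parsing fails outright, and the final placeholder keeps whatever text remains after the last delimiter.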
Dissection succeeds only when every expected segment is present in the right order. Unexpected padding, missing tokens, or optional sections can add the default _dissectfailure tag, while all captured values remain strings until another filter converts them. Package-based installs load pipeline fragments from /etc/logstash/conf.d, so keep the filter inside a condition that only matches the events that actually share the target format.
Related: How to parse logs with grok in Logstash
Related: How to use the Logstash date filter
Steps to use the Logstash dissect filter:
- Add a dedicated pipeline fragment under /etc/logstash/conf.d/50-dissect.conf.
input {
  file {
    path => "/var/lib/logstash/examples/dissect.log"
    start_position => "beginning"
    sincedb_path => "/var/lib/logstash/sincedb-dissect"
    tags => ["dissect_demo"]
  }
}

filter {
  if "dissect_demo" in [tags] {
    dissect {
      id => "dissect_app_log"
      mapping => {
        "message" => "%{ts} %{[log][level]} %{[service][name]} %{msg}"
      }
      tag_on_failure => ["_dissectfailure"]
    }
  }
}

output {
  if "dissect_demo" in [tags] {
    elasticsearch {
      hosts => ["http://elasticsearch.example.net:9200"]
      index => "app-dissect-%{+YYYY.MM.dd}"
    }
  }
}

The mapping above expects log lines shaped like 2026-04-07T08:17:29Z INFO checkout-api completed order=18237. The final %{msg} placeholder captures the remainder of the line after the third delimiter, and bracket notation writes nested fields directly.
If a fixed-width log uses padding spaces for visual alignment, add the -> suffix inside the placeholder to the left of the padding, such as %{[service][name]->}.
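As a sketch of the padding modifier, assuming a hypothetical fixed-width log where the level column is padded to a constant width:

```
filter {
  dissect {
    mapping => {
      # matches lines like "INFO      checkout-api completed order=18237",
      # where -> lets the space delimiter after the level repeat
      "message" => "%{[log][level]->} %{[service][name]} %{msg}"
    }
  }
}
```

Without the -> suffix, each extra padding space would be captured into the next field and the mapping would no longer line up.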
- Test the pipeline configuration before applying it to the running service.
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit
Using bundled JDK: /usr/share/logstash/jdk
Configuration OK
The packaged settings directory keeps the test aligned with /etc/logstash/logstash.yml and /etc/logstash/pipelines.yml, while the temporary --path.data path avoids the live service data directory.
- Restart the Logstash service to load the updated pipeline.
$ sudo systemctl restart logstash
Restarting Logstash briefly pauses ingestion while the JVM reloads the pipeline.
- Query the Logstash monitoring API and confirm the dissect plugin ID is receiving events.
$ curl --silent --show-error "http://localhost:9600/_node/stats/pipelines/main?pretty=true&filter_path=pipelines.main.plugins.filters.id,pipelines.main.plugins.filters.name,pipelines.main.plugins.filters.events"
{
  "pipelines" : {
    "main" : {
      "plugins" : {
        "filters" : [
          {
            "id" : "dissect_app_log",
            "name" : "dissect",
            "events" : {
              "in" : 1,
              "out" : 1
            }
          }
        ]
      }
    }
  }
}

If the monitoring API is bound to another host or protected with TLS or basic authentication, adjust the URL and credentials to match the local logstash.yml settings.
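The in and out counters should track each other for a healthy filter. A small sketch that flags any filter whose counters diverge, given a stats dictionary shaped like the response above (the sample data and dropped_events helper are illustrative):

```python
def dropped_events(stats):
    """Return {filter_id: in - out} for filters whose event counters diverge."""
    filters = stats["pipelines"]["main"]["plugins"]["filters"]
    return {f["id"]: f["events"]["in"] - f["events"]["out"]
            for f in filters
            if f["events"]["in"] != f["events"]["out"]}

# Sample mirroring the monitoring API response shown above
sample = {"pipelines": {"main": {"plugins": {"filters": [
    {"id": "dissect_app_log", "name": "dissect",
     "events": {"in": 1, "out": 1}}]}}}}

print(dropped_events(sample))  # empty dict: every event made it through
```

A persistent gap between in and out usually points at events being dropped or stuck rather than a dissect mismatch, which instead shows up as the _dissectfailure tag checked in the last step.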
- Fetch a recent indexed document and confirm the parsed fields exist.
$ curl -s -G "http://elasticsearch.example.net:9200/app-dissect-*/_search" \
  --data-urlencode "size=1" \
  --data-urlencode "sort=@timestamp:desc" \
  --data-urlencode "filter_path=hits.hits._index,hits.hits._source.ts,hits.hits._source.log,hits.hits._source.service,hits.hits._source.msg" \
  --data-urlencode "pretty"
{
  "hits" : {
    "hits" : [
      {
        "_index" : "app-dissect-2026.04.07",
        "_source" : {
          "ts" : "2026-04-07T08:17:29Z",
          "log" : {
            "level" : "INFO"
          },
          "service" : {
            "name" : "checkout-api"
          },
          "msg" : "completed order=18237"
        }
      }
    ]
  }
}

The ts field remains a string in this example. Apply How to use the Logstash date filter if that parsed timestamp should replace @timestamp.
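If the parsed timestamp should drive @timestamp, a minimal sketch of the date filter, assuming the ts field produced by the mapping above holds ISO 8601 values:

```
filter {
  date {
    # parse the dissected ts string; on success the result
    # replaces @timestamp by default
    match => ["ts", "ISO8601"]
  }
}
```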
- Check that parse failures are not accumulating in the target index.
$ curl -s -G "http://elasticsearch.example.net:9200/app-dissect-*/_search" \
  --data-urlencode "q=tags:_dissectfailure" \
  --data-urlencode "size=0" \
  --data-urlencode "filter_path=hits.total" \
  --data-urlencode "pretty"
{
  "hits" : {
    "total" : {
      "value" : 0,
      "relation" : "eq"
    }
  }
}

A non-zero count usually means the delimiter pattern no longer matches the incoming line shape. When the format varies by source or contains optional sections, split the flow with conditionals or fall back to How to parse logs with grok in Logstash.
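Splitting the flow with conditionals can be sketched like this, assuming a hypothetical legacy_format tag applied at the input for sources whose lines vary in shape:

```
filter {
  if "legacy_format" in [tags] {
    # variable-shape lines: grok's regex patterns tolerate optional sections
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{GREEDYDATA:msg}" }
    }
  } else if "dissect_demo" in [tags] {
    # fixed-shape lines: keep the faster dissect path
    dissect {
      mapping => { "message" => "%{ts} %{[log][level]} %{[service][name]} %{msg}" }
    }
  }
}
```

Routing by tag keeps dissect on the events it can always parse while the irregular sources take the slower but more forgiving grok path.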
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
