Multiline parsing keeps stack traces, wrapped exceptions, and continued log records together as one event so searches, dashboards, and alerts reflect the real failure instead of dozens of disconnected fragments.
Current Filebeat releases handle this inside each input definition. The recommended path is a filestream input with a parsers list that includes multiline, so Filebeat can detect where an event starts, buffer continuation lines, and publish one combined message through its processors and outputs.
Boundary patterns must match the real first line of each new event, and max_lines and timeout should stay bounded so one bad pattern cannot hold data in memory indefinitely. When the destination is Logstash, multiline assembly belongs in Filebeat rather than in a Logstash multiline codec, and note that very small new test files can stay unread until they grow past the current filestream fingerprint threshold.
$ sudo cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak
$ sudoedit /etc/filebeat/filebeat.yml
If the host uses filebeat.config.inputs.enabled: true with external snippets such as /etc/filebeat/inputs.d/*.yml, edit the matching input file there instead of creating a second filebeat.inputs block in the main config. When the main config already contains filebeat.inputs, add the parser under the existing input item instead of creating a duplicate top-level key.
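As a sketch (the file name and contents are assumptions, not taken from a real host), an external snippet holds a bare list of inputs rather than a filebeat.inputs key:

```yaml
# Hypothetical /etc/filebeat/inputs.d/app.yml: external input files start
# directly with the input list; there is no filebeat.inputs key here.
- type: filestream
  id: app-logs
  enabled: true
  paths:
    - /var/log/app.log
```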
filebeat.inputs:
  - type: filestream
    id: app-logs
    enabled: true
    paths:
      - /var/log/app.log
    parsers:
      - multiline:
          type: pattern
          pattern: '^\['
          negate: true
          match: after
          max_lines: 500
          timeout: 5s
This example treats each line that starts with [ as the beginning of a new event and appends all following non-matching lines to it, which fits many bracket-prefixed timestamped logs.
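As an illustration with made-up timestamps and class names, the first two lines below start with [ and open new events, while the trailing trace lines attach to the second event:

```
[2026-04-02 12:24:51] INFO request handled
[2026-04-02 12:24:53] ERROR request failed
java.lang.NullPointerException
    at com.example.App.handle(App.java:42)
```

The result is two published events; the second contains the exception and its frame joined by embedded newlines.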
The parsers syntax shown here is for filestream only. Deprecated log inputs still use legacy multiline.* keys and should be migrated instead of mixing the two syntaxes in one input.
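For recognition during migration, a deprecated log input expressing the same boundary looks roughly like this (shown only so the legacy keys can be spotted and converted):

```yaml
# Legacy multiline.* syntax of the deprecated log input; convert such
# blocks to a filestream input with a parsers list instead of mixing styles.
- type: log
  paths:
    - /var/log/app.log
  multiline.type: pattern
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
```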
For Java traces that continue on indented at lines or Caused by: lines, a tighter pattern that matches the continuation lines themselves, such as '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:' with negate set to false, is usually safer than a broad timestamp or bracket match.
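In the parsers list, that trace-oriented pattern is used with negate: false, since it matches the continuation lines rather than the event headers:

```yaml
parsers:
  - multiline:
      type: pattern
      # Matches indented "at ..."/"... N more" frames and "Caused by:" lines.
      pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'
      negate: false
      match: after
```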
Use flush_pattern only when events have an explicit end marker, such as Start new event and End event lines.
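A sketch of that case, with the marker strings assumed to match the application's actual output:

```yaml
parsers:
  - multiline:
      type: pattern
      pattern: 'Start new event'
      negate: true
      match: after
      # flush_pattern closes the event as soon as the end marker appears,
      # instead of waiting for the next start line or the timeout.
      flush_pattern: 'End event'
```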
$ sudo filebeat test config -c /etc/filebeat/filebeat.yml
Config OK
Related: How to test a Filebeat configuration
$ sudo filebeat export config -c /etc/filebeat/filebeat.yml | sed -n '1,40p'
filebeat:
  inputs:
    - enabled: true
      id: app-logs
      parsers:
        - multiline:
            match: after
            max_lines: 500
            negate: true
            pattern: '^\['
            timeout: 5s
      paths:
        - /var/log/app.log
      type: filestream
##### snipped #####
This helps catch indentation mistakes and confirms the parser is attached to the right input when multiple input files are loaded.
$ sudo systemctl restart filebeat
$ sudo systemctl status filebeat --no-pager --lines=20
● filebeat.service - Filebeat sends log files to Logstash or Elasticsearch.
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; preset: enabled)
Active: active (running) since Wed 2026-04-02 12:24:53 UTC; 5s ago
CGroup: /system.slice/filebeat.service
└─4821 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat
##### snipped #####
In Elasticsearch or Kibana Discover, the combined event should appear as one document whose message field contains embedded newline characters.
If a small new test file does not appear immediately, append the sample to an existing active log or use a file larger than 1024 bytes so the default filestream fingerprint-based identity can detect it.
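A minimal sketch of writing such a sample; the throwaway /tmp path stands in for the real log, where sudo tee -a would be needed instead:

```shell
# Write a bracket-prefixed header line plus one continuation line,
# each newline-terminated; substitute the real log path in practice.
sample='/tmp/app-sample.log'
printf '[2026-04-02 12:24:53] ERROR request failed\n' > "$sample"
printf '    at com.example.App.handle(App.java:42)\n' >> "$sample"
wc -l < "$sample"
```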
If unrelated log entries are merged, the start pattern is too narrow; if stack frames split into separate events, the start pattern is too broad or misses one of the real header formats.
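An approximate sanity check for the start pattern (grep's ERE dialect is close to, but not identical to, Filebeat's Go regexp) counts how many sample lines would be treated as event starts:

```shell
# Only the bracket-prefixed header should count as a start line here.
printf '%s\n' \
  '[2026-04-02 12:24:53] ERROR request failed' \
  '    at com.example.App.handle(App.java:42)' \
  | grep -c '^\['
```

The expected count is 1: only the header line matches, so the indented frame would be appended to it.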