Multiline parsing keeps stack traces, wrapped exceptions, and continued log records together as one event so searches, dashboards, and alerts reflect the real failure instead of dozens of disconnected fragments.
Current Filebeat releases handle this inside each input definition. The recommended path is a filestream input with a parsers list that includes multiline, so Filebeat can detect where an event starts, buffer continuation lines, and publish one combined message through its processors and outputs.
Boundary patterns must match the real first line of a new event, and max_lines plus timeout should stay bounded so one bad pattern does not hold data in memory indefinitely. When the destination is Logstash, multiline assembly belongs in Filebeat rather than in a Logstash multiline codec. Be aware that very small new test files can also stay unread until they grow past the current filestream fingerprint threshold.
Steps to configure Filebeat multiline parsing:
- Create a backup copy of the current Filebeat configuration.
$ sudo cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak
- Open the active Filebeat input configuration for editing.
$ sudoedit /etc/filebeat/filebeat.yml
If the host uses filebeat.config.inputs.enabled: true with external snippets such as /etc/filebeat/inputs.d/*.yml, edit the matching input file there instead of creating a second filebeat.inputs block in the main config. When the main config already contains filebeat.inputs, add the parser under the existing input item instead of creating a duplicate top-level key.
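As a sketch, assuming an external snippet directory such as /etc/filebeat/inputs.d/, each file there contains a bare list of input definitions rather than a filebeat.inputs key (the file name app.yml is illustrative):

  - type: filestream
    id: app-logs
    enabled: true
    paths:
      - /var/log/app.log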
- Add the multiline parser to the target filestream input.
filebeat.inputs:
  - type: filestream
    id: app-logs
    enabled: true
    paths:
      - /var/log/app.log
    parsers:
      - multiline:
          type: pattern
          pattern: '^\['
          negate: true
          match: after
          max_lines: 500
          timeout: 5s

This example treats each line that starts with [ as the beginning of a new event and appends all following non-matching lines to it, which fits many bracket-prefixed timestamped logs.
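For instance, a hypothetical fragment like the following would ship as two events, with the whole trace attached to the second:

  [2026-04-02 12:24:01] INFO  request completed
  [2026-04-02 12:24:53] ERROR request failed
  java.lang.RuntimeException: upstream timeout
      at com.example.Client.call(Client.java:42)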
The parsers syntax shown here is for filestream only. Deprecated log inputs still use legacy multiline.* keys and should be migrated instead of mixing the two syntaxes in one input.
- Choose a pattern that reliably marks event boundaries, normally the actual first line of each event rather than a generic continuation line.
For Java traces that continue on indented at lines or Caused by: lines, matching the continuation lines directly with a pattern such as '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:' combined with negate: false and match: after is usually safer than a broad timestamp or bracket match.
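A minimal sketch of that continuation-matching variant, assuming it replaces the parsers block from the earlier example:

    parsers:
      - multiline:
          type: pattern
          pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'
          negate: false
          match: after
          max_lines: 500
          timeout: 5s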
Use flush_pattern only when events have an explicit end marker, such as Start new event and End event lines.
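A sketch for such explicitly delimited logs, assuming literal Start new event and End event marker lines, might look like:

    parsers:
      - multiline:
          type: pattern
          pattern: 'Start new event'
          negate: true
          match: after
          flush_pattern: 'End event'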
- Test the Filebeat configuration for syntax or permission errors before any restart.
$ sudo filebeat test config -c /etc/filebeat/filebeat.yml
Config OK
Related: How to test a Filebeat configuration
- Export the resolved configuration and confirm the multiline parser is attached to the intended input.
$ sudo filebeat export config -c /etc/filebeat/filebeat.yml | sed -n '1,40p'
filebeat:
  inputs:
  - enabled: true
    id: app-logs
    parsers:
    - multiline:
        match: after
        max_lines: 500
        negate: true
        pattern: ^\[
        timeout: 5s
    paths:
    - /var/log/app.log
    type: filestream
##### snipped #####

This helps catch indentation mistakes and confirms the parser is attached to the right input when multiple input files are loaded.
- Restart the Filebeat service to apply the multiline parser.
$ sudo systemctl restart filebeat
- Check that the Filebeat service returned to the active state.
$ sudo systemctl status filebeat --no-pager --lines=20
● filebeat.service - Filebeat sends log files to Logstash or Elasticsearch.
     Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; preset: enabled)
     Active: active (running) since Wed 2026-04-02 12:24:53 UTC; 5s ago
     CGroup: /system.slice/filebeat.service
             └─4821 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat
##### snipped #####
- Confirm that a representative stack trace or wrapped log entry now arrives as one event instead of one event per line.
In Elasticsearch or Kibana Discover, the combined event should appear as one document whose message field contains embedded newline characters.
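One way to spot-check, assuming events land in the default filebeat-* indices on a local unauthenticated Elasticsearch, is to search for a term from the trace and confirm the single hit's message field spans several lines:

$ curl -s 'http://localhost:9200/filebeat-*/_search?q=message:RuntimeException&size=1&pretty'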
If a small new test file does not appear immediately, append the sample to an existing active log or use a file larger than 1024 bytes so the default filestream fingerprint-based identity can detect it.
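If lowering the threshold is preferable to padding test files, a sketch, assuming a recent filestream input using the default fingerprint file identity, is to shrink the scanner fingerprint length (64 bytes is the documented minimum):

  - type: filestream
    id: app-logs
    paths:
      - /var/log/app.log
    # documented minimum; files smaller than this stay invisible to the scanner
    prospector.scanner.fingerprint.length: 64

Note that changing the fingerprint length alters file identities, so files that were already read may be ingested again.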
If unrelated log entries are merged, the start pattern is too narrow; if stack frames split into separate events, the start pattern is too broad or misses one of the real header formats.
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
