Enabling the Logstash dead letter queue keeps non-retriable failures available for inspection instead of losing them inside a busy ingest pipeline. That makes it possible to isolate bad documents, mapping conflicts, or broken conditionals before the same failure pattern keeps dropping more events.
When the dead letter queue is enabled, Logstash writes eligible failures to per-pipeline files on disk. Current Elastic documentation still limits DLQ writes to documents rejected by the Elasticsearch output with HTTP status 400 or 404 and to events that fail during conditional evaluation. Reprocessing is separate from enablement, so reading or cleaning those events later still requires a pipeline that uses the dead_letter_queue input plugin.
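As a sketch of that later reprocessing step, a separate pipeline built on the dead_letter_queue input plugin can read the captured events back out. The queue path, pipeline_id, Elasticsearch host, and target index below are illustrative assumptions, not values the main setup requires:

```
input {
  dead_letter_queue {
    # Directory configured via path.dead_letter_queue (default shown).
    path => "/var/lib/logstash/dead_letter_queue"
    # Read the queue written by the pipeline named "main".
    pipeline_id => "main"
    # Remember the read position across restarts of this pipeline.
    commit_offsets => true
  }
}
output {
  # Illustrative target; route repaired events wherever they belong.
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "dlq-reprocessed"
  }
}
```

Events read this way still carry the original document plus metadata describing why they were rejected, which is what makes the later inspection possible.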
Current releases keep the feature disabled by default, store the queue on the local filesystem under path.data unless path.dead_letter_queue overrides the location, and enforce the size cap from dead_letter_queue.max_bytes with drop_newer as the default storage policy. On Debian and RPM packages, path.data defaults to /var/lib/logstash and logstash.yml lives under /etc/logstash. Changes to logstash.yml require a service restart, and configuration tests are safest under the logstash service account because current releases block superuser runs unless allow_superuser is changed.
dead_letter_queue.enable: true
dead_letter_queue.max_bytes: 1024mb
#dead_letter_queue.storage_policy: drop_newer
#dead_letter_queue.retain.age: 7d
#path.dead_letter_queue: /var/lib/logstash/dead_letter_queue
Each pipeline gets its own DLQ directory under path.dead_letter_queue. On package-based installs that keep the default path.data value, the usual location becomes /var/lib/logstash/dead_letter_queue/<pipeline-id>.
If a custom path.dead_letter_queue location is used, create it with ownership that lets the logstash service account read and write the directory before restarting the service.
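A minimal sketch of that preparation step, assuming a hypothetical custom location /var/lib/logstash-dlq and the standard logstash service account:

```shell
# Example custom DLQ location; substitute your own path before
# pointing path.dead_letter_queue at it.
DLQ_PATH="${DLQ_PATH:-/var/lib/logstash-dlq}"

# Create the directory and keep it readable only by owner and group.
mkdir -p "$DLQ_PATH"
chmod 0750 "$DLQ_PATH"

# Hand ownership to the service account where it exists on this host.
if id -u logstash >/dev/null 2>&1; then
    chown logstash:logstash "$DLQ_PATH"
else
    echo "logstash user not present; set ownership manually" >&2
fi
```

Running this before the restart avoids the common failure mode where Logstash starts, cannot write segment files, and logs permission errors instead of queueing events.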
Current Elastic documentation still requires a local filesystem for DLQ integrity and performance. NFS is not supported, and two Logstash instances must not share the same dead letter queue path.
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit
Using bundled JDK: /usr/share/logstash/jdk
##### snipped #####
Configuration OK
[2026-04-07T08:03:34,546][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
The temporary --path.data directory must be writable by the logstash user and keeps the validation away from the live service data directory in /var/lib/logstash.
Current Logstash releases reject superuser runs by default, so the same validation fails as root unless allow_superuser was intentionally changed.
$ sudo systemctl restart logstash.service
Restarting the service pauses every active pipeline while inputs reopen, filters recompile, and outputs reconnect.
Changes to logstash.yml are not applied by automatic pipeline reload, so this restart is still required even when config.reload.automatic is already enabled for pipeline files.
$ curl -s 'http://localhost:9600/_node/pipelines?pretty'
{
##### snipped #####
"pipelines" : {
"main" : {
"workers" : 8,
"batch_size" : 125,
"batch_delay" : 50,
"dead_letter_queue_enabled" : true
}
}
}
Replace main in later checks when /etc/logstash/pipelines.yml uses a different pipeline.id. Current releases also expose dead_letter_queue_path in this response, which is useful when a custom path.dead_letter_queue is in use.
If the monitoring API is not available on localhost:9600, check api.enabled, api.http.host, and api.http.port in /etc/logstash/logstash.yml, then adjust the request to match the active host, port, TLS, or authentication settings.
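For scripted checks, the flag can be extracted from a saved copy of the pipelines response with standard tools. The heredoc below is a trimmed stand-in for the real API response so the extraction can be shown end to end; in practice the file would come from the curl call above:

```shell
# In practice, capture the live response first:
#   curl -s 'http://localhost:9600/_node/pipelines' > /tmp/pipelines.json
# Trimmed stand-in used here for illustration:
cat > /tmp/pipelines.json <<'EOF'
{"pipelines":{"main":{"workers":8,"dead_letter_queue_enabled":true}}}
EOF

# Prints the match only when the flag is present and true; the pattern
# tolerates the optional spaces that ?pretty inserts around the colon.
grep -Eo '"dead_letter_queue_enabled" ?: ?true' /tmp/pipelines.json
```

A non-zero exit status from the grep is then a convenient signal for alerting or deployment gates.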
$ curl -s 'http://localhost:9600/_node/stats/pipelines/main?pretty'
{
##### snipped #####
"dead_letter_queue" : {
"expired_events" : 0,
"storage_policy" : "drop_newer",
"dropped_events" : 0,
"last_error" : "no errors",
"queue_size_in_bytes" : 1,
"max_queue_size_in_bytes" : 1073741824
}
##### snipped #####
}
The dead_letter_queue object is the runtime proof that the queue is active for that pipeline. The size fields show the current on-disk usage and the enforced ceiling, while last_error and dropped_events reveal whether the queue has started rejecting entries.
queue_size_in_bytes can be non-zero even before a real failed event is written because Logstash prepares DLQ segment files when the feature is active. If dropped_events increases or last_error changes, inspect the rejected events and either fix the pipeline or raise dead_letter_queue.max_bytes within the available disk budget.
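That monitoring advice can be folded into a small scheduled check. The heredoc below stands in for the stats response so the parsing is reproducible; a real check would replace it with the curl call shown in the comment:

```shell
# In practice, fetch the live stats first:
#   curl -s 'http://localhost:9600/_node/stats/pipelines/main' > /tmp/dlq-stats.json
# Stand-in response used here for illustration:
cat > /tmp/dlq-stats.json <<'EOF'
{"dead_letter_queue":{"dropped_events":0,"queue_size_in_bytes":1,"max_queue_size_in_bytes":1073741824}}
EOF

# Pull the dropped_events counter out of the JSON with grep alone,
# tolerating the optional spaces that ?pretty inserts.
dropped=$(grep -Eo '"dropped_events" ?: ?[0-9]+' /tmp/dlq-stats.json | grep -Eo '[0-9]+$')

if [ "$dropped" -gt 0 ]; then
    echo "DLQ is rejecting entries: dropped_events=$dropped" >&2
else
    echo "DLQ healthy: dropped_events=$dropped"
fi
```

Tracking the counter over time, rather than its absolute value, distinguishes an old saturation incident from one that is still in progress.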