Checking Logstash pipeline metrics reveals whether processing is keeping pace with ingest, whether events are stalling in the queue, or whether one plugin is consuming too much worker time. That makes the monitoring API one of the fastest ways to confirm whether a slow pipeline is starved, saturated, or blocked by downstream outputs.
The built-in Logstash monitoring API returns pipeline-scoped JSON for event counters, flow rates, queue state, reload status, and plugin-level worker costs. The _node/stats/pipelines/<pipeline-id> endpoint is the main source for day-to-day checks because it exposes the metrics that show whether a pipeline is draining normally or building pressure.
Examples assume a current package-based Logstash installation on Linux. Current releases enable the API by default, usually keep it on the local interface, and use a port in the 9600-9700 range, with many single-instance hosts landing on 9600. When the API is secured with TLS and basic authentication, switch the examples to https:// and add the required credentials.
$ curl -s 'http://localhost:9600/?pretty'
{
  "host" : "logstash-01",
  "version" : "9.3.2",
  "http_address" : "127.0.0.1:9600",
  "status" : "green",
  "pipeline" : {
    "workers" : 8,
    "batch_size" : 125,
    "batch_delay" : 50
  }
}
If the request fails, check api.enabled, api.http.host, and api.http.port in /etc/logstash/logstash.yml. Current settings default api.enabled to true and use the 9600-9700 port range, so another local instance can bind a port other than 9600.
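The relevant settings look like this when written out explicitly; the values below are the current defaults, so absent lines in logstash.yml mean the same behavior:

```yaml
# /etc/logstash/logstash.yml -- API settings shown with their defaults
api.enabled: true
api.http.host: 127.0.0.1
api.http.port: 9600-9700
```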
Keep ?pretty only for human-readable checks; omit it in scripts or automation.
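In automation, parse the compact response directly instead of relying on ?pretty. A minimal Python sketch, shown against an inline sample body (in practice, substitute the real response fetched from http://localhost:9600/):

```python
import json

# Sample root-endpoint body; in practice substitute the real response
# from http://localhost:9600/ (no ?pretty needed for machine parsing).
sample = '{"host":"logstash-01","version":"9.3.2","status":"green"}'

def api_status(body: str) -> str:
    """Return the top-level status field from the root endpoint."""
    return json.loads(body)["status"]

print(api_status(sample))  # green
```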
$ curl -s 'http://localhost:9600/_node/pipelines?pretty'
{
  "pipelines" : {
    "main" : {
      "workers" : 8,
      "batch_size" : 125,
      "batch_delay" : 50,
      "dead_letter_queue_enabled" : false
    }
  }
}
Replace main in later commands with the pipeline ID returned here.
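To drive later commands from the actual pipeline list rather than hard-coding main, the IDs can be pulled out of the _node/pipelines response. A sketch in Python against an illustrative inline sample (the beats pipeline here is hypothetical; in practice pass in the body returned by the endpoint above):

```python
import json

# Trimmed, illustrative _node/pipelines body; in practice substitute the
# response from http://localhost:9600/_node/pipelines.
sample = '{"pipelines": {"main": {"workers": 8}, "beats": {"workers": 4}}}'

def pipeline_ids(body: str) -> list:
    """Return the pipeline IDs present in a _node/pipelines response."""
    return sorted(json.loads(body)["pipelines"])

print(pipeline_ids(sample))  # ['beats', 'main']
```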
$ curl -s 'http://localhost:9600/_node/stats/pipelines/main?filter_path=pipelines.main.events,pipelines.main.flow,pipelines.main.queue&pretty'
{
##### snipped #####
  "pipelines" : {
    "main" : {
      "events" : {
        "in" : 22,
        "filtered" : 22,
        "out" : 22,
        "duration_in_millis" : 170,
        "queue_push_duration_in_millis" : 131
      },
      "flow" : {
        "input_throughput" : {
          "current" : 1.011,
          "lifetime" : 0.9729
        },
        "filter_throughput" : {
          "current" : 1.011,
          "lifetime" : 0.973
        },
        "output_throughput" : {
          "current" : 1.011,
          "lifetime" : 0.973
        },
        "queue_backpressure" : {
          "current" : 0.004988,
          "lifetime" : 0.005794
        },
        "worker_concurrency" : {
          "current" : 0.005393,
          "lifetime" : 0.007519
        },
        "worker_utilization" : {
          "current" : 0.06741,
          "lifetime" : 0.09399
        }
##### snipped #####
      },
      "queue" : {
        "type" : "persisted",
        "events_count" : 0,
        "queue_size_in_bytes" : 4609,
        "max_queue_size_in_bytes" : 134217728
##### snipped #####
      }
    }
  }
}
events.in, filtered, and out should continue advancing together on a healthy steady-state pipeline. A widening gap between in and out, or a fast-rising queue_push_duration_in_millis, points to downstream delay or queue contention.
When worker_utilization approaches 100, the pipeline is saturated and all workers are staying busy. When queue_backpressure stays above zero, inputs are spending measurable time blocked by the queue. A pipeline with very low worker_utilization and flat event counters is usually starved by low input volume rather than overloaded.
On persistent queues, positive queue_persisted_growth_events or queue_persisted_growth_bytes means the queue is growing faster than it is draining. Negative values mean the queue is shrinking.
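The checks above can be folded into a single summary for scripting. A minimal Python sketch, shown against a trimmed inline sample whose numbers are illustrative (a pipeline under pressure); in practice pass in the stats body fetched with the curl command above:

```python
import json

# Trimmed, illustrative stats body; in practice substitute the response
# from /_node/stats/pipelines/<pipeline-id>.
sample = '''{
  "pipelines": {"main": {
    "events": {"in": 1200, "out": 900},
    "flow": {
      "queue_backpressure": {"current": 0.21},
      "worker_utilization": {"current": 97.0}
    },
    "queue": {"events_count": 300}
  }}
}'''

def pressure_summary(body: str, pipeline: str) -> dict:
    """Condense the stats that signal saturation or queue pressure."""
    p = json.loads(body)["pipelines"][pipeline]
    return {
        "in_out_gap": p["events"]["in"] - p["events"]["out"],
        "queue_backpressure": p["flow"]["queue_backpressure"]["current"],
        "worker_utilization": p["flow"]["worker_utilization"]["current"],
        "queued_events": p["queue"]["events_count"],
    }

print(pressure_summary(sample, "main"))
```

A large in_out_gap together with high worker_utilization points at saturation; the same gap with low utilization points downstream.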
$ curl -s 'http://localhost:9600/_node/stats/pipelines/main?filter_path=pipelines.main.plugins.filters.name,pipelines.main.plugins.filters.flow.worker_utilization,pipelines.main.plugins.filters.flow.worker_millis_per_event,pipelines.main.plugins.outputs.name,pipelines.main.plugins.outputs.flow.worker_utilization,pipelines.main.plugins.outputs.flow.worker_millis_per_event&pretty'
{
##### snipped #####
  "pipelines" : {
    "main" : {
      "plugins" : {
        "filters" : [ {
          "name" : "mutate",
          "flow" : {
            "worker_utilization" : {
              "current" : 0.01185
            },
            "worker_millis_per_event" : {
              "current" : 0.9167
            }
          }
        } ],
        "outputs" : [ {
          "name" : "file",
          "flow" : {
            "worker_utilization" : {
              "current" : 0.02908
            },
            "worker_millis_per_event" : {
              "current" : 2.25
            }
          }
        } ]
      }
    }
  }
}
Input plugins expose throughput rather than worker_utilization. High worker_utilization or worker_millis_per_event on one filter or output usually identifies the specific plugin consuming pipeline capacity.
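Ranking plugins by per-event cost makes the hot spot obvious at a glance. A sketch in Python against a trimmed inline sample of the response above (in practice pass in the real body returned by the filter_path query):

```python
import json

# Trimmed sample of the plugin-level stats shown above; in practice
# substitute the body returned by the filter_path query.
sample = '''{
  "pipelines": {"main": {"plugins": {
    "filters": [{"name": "mutate",
                 "flow": {"worker_millis_per_event": {"current": 0.9167}}}],
    "outputs": [{"name": "file",
                 "flow": {"worker_millis_per_event": {"current": 2.25}}}]
  }}}
}'''

def rank_plugins(body: str, pipeline: str) -> list:
    """Rank filters and outputs by per-event worker cost, slowest first."""
    plugins = json.loads(body)["pipelines"][pipeline]["plugins"]
    costs = [
        (p["name"], p["flow"]["worker_millis_per_event"]["current"])
        for stage in ("filters", "outputs")
        for p in plugins.get(stage, [])
    ]
    return sorted(costs, key=lambda c: c[1], reverse=True)

print(rank_plugins(sample, "main"))  # [('file', 2.25), ('mutate', 0.9167)]
```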
Related: How to optimize Logstash pipeline performance
Related: How to debug Logstash pipelines
$ sudo ss -lntp | grep -E ':(96[0-9]{2}|9700)([[:space:]]|$)'
LISTEN 0 4096 127.0.0.1:9600 0.0.0.0:* users:(("java",pid=22164,fd=135))
The monitoring API exposes pipeline names, queue state, reload details, and plugin metrics. Binding it to 0.0.0.0 or a routable address without TLS and authentication exposes operational data to the network.
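A hedged sketch of locking the API down in logstash.yml; the keystore path and credentials below are placeholders, and the `${...}` references assume the values live in the Logstash keystore or environment rather than in the file itself:

```yaml
# /etc/logstash/logstash.yml -- illustrative values, adjust paths and credentials
api.http.host: 127.0.0.1                       # keep on loopback unless remote access is required
api.ssl.enabled: true
api.ssl.keystore.path: /etc/logstash/api.p12   # placeholder path
api.ssl.keystore.password: "${API_KEYSTORE_PASS}"
api.auth.type: basic
api.auth.basic.username: logstash_api_user     # placeholder
api.auth.basic.password: "${API_PASSWORD}"
```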