Logstash pipelines control how events flow from inputs through filters to outputs, shaping what arrives in downstream systems such as Elasticsearch. Keeping the pipeline configuration readable and validated prevents dropped events, broken parsing, and runaway index patterns.
A packaged Logstash install typically runs a default pipeline (often named main) that reads pipeline configuration from /etc/logstash/conf.d. Files in that directory are loaded in lexical order and combined into a single effective pipeline configuration, so filename prefixes like 10-...conf help keep processing stages predictable.
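The lexical load order can be illustrated with a quick shell sketch, using hypothetical stage files in a temporary directory; a glob expands in the same lexical order that Logstash uses when concatenating files from /etc/logstash/conf.d:

```shell
# Hypothetical stage files named with numeric prefixes; listing the
# directory shows the lexical order in which they would be merged.
dir=$(mktemp -d)
touch "$dir/10-input.conf" "$dir/20-filter.conf" "$dir/30-output.conf"
ls "$dir"
rm -r "$dir"
```

Because `10-` sorts before `20-` and `30-`, the input, filter, and output stages land in the effective pipeline in the intended order regardless of file creation time.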
A syntax error or invalid plugin setting can stop Logstash from starting, halting ingestion until the configuration is fixed. Configuration changes usually require a service restart unless automatic config reload is enabled in /etc/logstash/logstash.yml, and a restart temporarily pauses inputs, so testing before applying changes reduces disruption risk.
Steps to configure Logstash pipelines:
- Create a pipeline configuration file at /etc/logstash/conf.d/10-main.conf.
Multiple .conf files under /etc/logstash/conf.d are merged into one pipeline; numeric filename prefixes keep the merge order deterministic.
Use HTTPS and authentication for production Elasticsearch clusters, and avoid embedding credentials in world-readable files.
input {
  beats {
    port => 5044
  }
}

filter {
  mutate {
    add_field => { "ingest_source" => "beats" }
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch.example.net:9200"]
    index => "beats-%{+YYYY.MM.dd}"
  }
}

- Test the pipeline configuration for errors.
$ sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
##### snipped #####
Configuration OK
A non-zero exit indicates a parsing or plugin error, and the log output identifies the failing file and line.
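Since validation exits non-zero on failure, the restart can be gated on it so that a bad configuration never takes down the running pipeline. A minimal sketch using the same paths as above:

```shell
# Validate the configuration first; restart Logstash only if it passes.
# On failure, the currently loaded (known-good) pipeline keeps running.
if sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit; then
  sudo systemctl restart logstash
else
  echo "configuration invalid, restart skipped" >&2
fi
```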
- Restart the Logstash service to load the new pipeline.
$ sudo systemctl restart logstash
Restarting Logstash temporarily stops ingestion; upstream shippers may buffer or back off, but unbuffered sources can drop events during the restart window.
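The restart window can be avoided for most configuration changes by enabling automatic config reload. A minimal sketch of the relevant settings in /etc/logstash/logstash.yml (the interval value below is the documented default):

```yaml
# Re-read pipeline configuration periodically and apply changes
# without a full service restart.
config.reload.automatic: true
config.reload.interval: 3s
```

With reload enabled, Logstash applies valid changes in place and logs an error while keeping the old pipeline running if the new configuration fails to load.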
- Check the service status for an active state after the restart.
$ sudo systemctl status logstash
● logstash.service - logstash
     Loaded: loaded (/usr/lib/systemd/system/logstash.service; enabled; preset: enabled)
     Active: active (running) since Wed 2026-01-07 04:23:00 UTC; 26min ago
   Main PID: 12684 (java)
      Tasks: 101 (limit: 28486)
     Memory: 1.1G (peak: 1.1G)
        CPU: 1min 6.928s
##### snipped #####

Pipeline errors after a restart are commonly visible in /var/log/logstash/logstash-plain.log.
- Confirm the pipeline is running.
$ curl -s http://localhost:9600/_node/pipelines?pretty
{
  "host" : "host",
  "version" : "8.19.9",
  "http_address" : "127.0.0.1:9600",
  "id" : "3723b694-8264-4225-a32b-a201e0fcb5dc",
  "name" : "0.0.0.0",
  "ephemeral_id" : "89fbf22c-3cce-44b0-a124-7c12c3089764",
  "snapshot" : false,
  "status" : "green",
  "pipeline" : {
    "workers" : 10,
    "batch_size" : 125,
    "batch_delay" : 50
  },
  "pipelines" : {
    "main" : {
      "ephemeral_id" : "13ee2b8b-fd1d-4627-9c8c-ddbd9655676b",
      "hash" : "7a92b37cac4156b31a188f4e79b1281d2da9b8ad8d062d15cde81c75ca90596f",
      "workers" : 10,
      "batch_size" : 125,
      "batch_delay" : 50,
      "config_reload_automatic" : false,
      "config_reload_interval" : 3000000000,
      "dead_letter_queue_enabled" : false
    }
  }
}

The monitoring API listens on port 9600 by default; http.host and http.port are configured in /etc/logstash/logstash.yml.
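A sketch of the corresponding API settings in /etc/logstash/logstash.yml; recent releases namespace them under api.* (older releases used http.host and http.port), and the values below are the defaults:

```yaml
# Bind the monitoring API to loopback only; the API is unauthenticated
# by default, so exposing it beyond localhost should be paired with
# network-level access controls.
api.http.host: 127.0.0.1
api.http.port: 9600
```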
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
