Receiving syslog directly in Logstash centralizes device, appliance, and server events so they can be parsed, enriched, and searched together instead of staying scattered across individual hosts. A dedicated syslog listener is useful when routers, firewalls, switches, and Unix systems already know how to forward syslog and do not need another agent installed first.
The syslog input plugin accepts RFC3164-style messages over the network, parses priority, timestamp, host, and program fields, and turns each line into a Logstash event that filters and outputs can handle. Current plugin docs state that this input opens both TCP and UDP listeners on the configured port, and ecs_compatibility controls whether parsed fields stay in legacy locations or land under ECS paths such as [log][syslog][priority].
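As a concrete illustration, here is a minimal RFC3164 line with the pieces the input extracts from it annotated; the ECS field paths shown assume ecs_compatibility is enabled (a sketch for orientation, not plugin output):

```shell
# Sample RFC3164 message; annotations show where each parsed piece lands.
#  <13>             PRI = facility*8 + severity  -> [log][syslog][priority]
#  Apr  8 09:18:01  timestamp                    -> @timestamp
#  host             source hostname              -> [host][hostname]
#  logstash-test    program/tag                  -> [process][name]
echo '<13>Apr  8 09:18:01 host logstash-test: syslog input test'
```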
Package installs usually load /etc/logstash/conf.d/*.conf into the active pipeline, so a new syslog input can interact with existing filters and outputs unless you separate pipelines or guard with conditionals. Traditional syslog port 514 is privileged, syslog traffic is often plaintext, and current Logstash releases block superuser runs by default, so using an unprivileged port such as 5514, testing the full pipeline as the logstash user, and firewalling the listener keeps the rollout simpler and safer.
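One way to keep the new listener from interacting with unrelated outputs in the same pipeline is a conditional guard. This sketch assumes the input is labeled with type => "syslog" (an illustrative label you set yourself, not a plugin default):

```
input {
  syslog {
    port => 5514
    type => "syslog"
  }
}
output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => ["http://elasticsearch.example.net:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
}
```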
Related: How to configure a Logstash TCP input
Related: How to configure a Logstash UDP input
$ sudo ss -lnpt | grep -F ':5514'
$ sudo ss -lnpu | grep -F ':5514'
No output usually means port 5514 is available. The syslog input plugin opens both transports on the same port, so check both socket tables before reusing a number.
Ports below 1024, including the default syslog port 514, are privileged and can prevent the service from starting when Logstash runs as the packaged non-root account.
input {
  syslog {
    id => "syslog-main-5514"
    host => "0.0.0.0"
    port => 5514
    ecs_compatibility => "v8"
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch.example.net:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
Set ecs_compatibility explicitly when the rest of the pipeline or the target index templates expect ECS field paths such as [log][syslog][priority].
The built-in parser expects RFC3164 lines by default. If a sender uses a non-standard format, add grok_pattern before restarting, and make sure the custom pattern still captures a timestamp field.
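For example, a sender that omits the hostname could be handled with a custom pattern along these lines; this is an illustrative sketch, not a drop-in for every device, and note that it still captures the required timestamp field:

```
input {
  syslog {
    port => 5514
    grok_pattern => "<%{POSINT:priority}>%{SYSLOGTIMESTAMP:timestamp} %{GREEDYDATA:message}"
  }
}
```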
Binding to 0.0.0.0 accepts syslog from every reachable interface. Restrict access with a firewall or bind to a specific local address when only certain senders should reach the listener.
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-syslog-configtest --config.test_and_exit
Using bundled JDK: /usr/share/logstash/jdk
[2026-04-08T09:12:54,481][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"9.2.3", "jruby.version"=>"jruby 9.4.13.0 (3.1.4) 2025-06-10 9938a3461f OpenJDK 64-Bit Server VM 21.0.9+10-LTS on 21.0.9+10-LTS +indy +jit"}
Configuration OK
[2026-04-08T09:12:55,436][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
The --path.data directory must be writable by the user running the test, and a throwaway path under /tmp keeps the syntax check away from the service data directory.
--config.test_and_exit validates syntax and settings assembly, but it does not prove that any custom grok pattern, remote output, or credentialed connection will work at runtime.
$ sudo systemctl restart logstash
$ sudo systemctl is-active logstash
active
If the command returns failed or stays in activating too long, inspect journalctl --unit=logstash --no-pager --lines=80 before sending test traffic.
$ sudo ss -lnpt | grep -F ':5514'
LISTEN 0 4096 0.0.0.0:5514 0.0.0.0:* users:(("java",pid=21904,fd=112))
$ sudo ss -lnpu | grep -F ':5514'
UNCONN 0 0 0.0.0.0:5514 0.0.0.0:* users:(("java",pid=21904,fd=111))
If you need a listener that accepts only one transport, use the dedicated tcp or udp input instead of the syslog input plugin.
The same listener port accepts both TCP and UDP, so pick the transport that the sending device supports. Use TCP when the sender can retry and ordered delivery matters more than datagram simplicity.
$ logger --rfc3164 --server 127.0.0.1 --port 5514 --udp --tag logstash-test "syslog input test"
Replace 127.0.0.1 with the real Logstash address when testing from another host, or switch to --tcp when the sender uses TCP instead of UDP.
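When the logger utility is unavailable, a raw RFC3164 line can be assembled by hand and pushed through bash's /dev/udp pseudo-device. This is a sketch; the send line is commented out so the snippet is safe to run without a listener:

```shell
# Build an RFC3164 line manually: <PRI>TIMESTAMP HOST TAG: MESSAGE
pri=13                                # facility 1 (user) * 8 + severity 5 (notice)
ts=$(date '+%b %e %H:%M:%S')          # RFC3164-style timestamp
line="<${pri}>${ts} $(uname -n) logstash-test: syslog input test"
echo "$line"                          # inspect the line before sending
# printf '%s' "$line" > /dev/udp/127.0.0.1/5514   # uncomment to send over UDP
```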
$ curl --silent --show-error --request GET --header 'Content-Type: application/json' --data '{"size":1,"sort":[{"@timestamp":{"order":"desc"}}],"query":{"match_phrase":{"message":"syslog input test"}},"_source":["@timestamp","host.hostname","process.name","message","log.syslog.priority"]}' 'http://elasticsearch.example.net:9200/syslog-*/_search?pretty'
{
  "hits" : {
    "hits" : [
      {
        "_index" : "syslog-2026.04.08",
        "_source" : {
          "@timestamp" : "2026-04-08T09:18:01.000Z",
          "host" : {
            "hostname" : "host"
          },
          "log" : {
            "syslog" : {
              "priority" : 13
            }
          },
          "message" : "syslog input test",
          "process" : {
            "name" : "logstash-test"
          }
        }
      }
    ]
  }
}
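The priority value of 13 in the result decodes with simple arithmetic: facility = PRI / 8 and severity = PRI % 8, so 13 maps to facility 1 (user-level) and severity 5 (notice):

```shell
# Decode an RFC3164 PRI value into its facility and severity components.
pri=13
echo "facility=$((pri / 8)) severity=$((pri % 8))"
# → facility=1 severity=5
```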
Secured Elasticsearch deployments may require authentication and TLS, so add options such as --user and --cacert when HTTPS or security is enabled.
If you leave ecs_compatibility disabled, expect legacy field names such as priority instead of [log][syslog][priority] in the indexed event.
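If mixed pipelines end up producing both shapes, a small filter can normalize legacy events toward the ECS path; this is an illustrative sketch, not plugin behavior:

```
filter {
  if [priority] and ![log][syslog][priority] {
    mutate {
      rename => { "priority" => "[log][syslog][priority]" }
    }
  }
}
```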