Capturing syslog messages in Logstash centralizes logs from routers, firewalls, Linux servers, and applications into a single searchable stream. Consistent ingestion makes it easier to correlate events, build dashboards, and alert on suspicious activity without logging into each device individually.

The syslog input plugin receives syslog frames over the network, parses common RFC3164-style fields (timestamp, hostname, program, severity), and turns each message into an event flowing through the pipeline. Events can then be enriched with filters and forwarded to an output such as Elasticsearch using an index pattern that rolls daily.
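
As a brief sketch of that enrichment (the program name and tag below are illustrative only and not required by the steps that follow), a filter block can tag messages from a particular program before they reach the output:

    filter {
      if [process][name] == "sshd" {
        mutate {
          add_tag => ["auth"]
        }
      }
    }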

Syslog is commonly sent over UDP to port 514; that port is privileged on Linux and the traffic is plaintext, so binding directly to 514 may require elevated privileges and the messages should stay on a trusted network. A high port such as 5514 avoids the privileged bind, and firewalling the listener prevents log spoofing and unwanted ingestion. On default package installs, the configuration files in /etc/logstash/conf.d are typically combined into a single pipeline, so outputs defined in one file receive events from inputs defined in another unless the configurations are split into dedicated pipelines or guarded with conditionals, as sketched below.
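
One way to keep the syslog configuration isolated is a dedicated entry in /etc/logstash/pipelines.yml; the sketch below assumes a second, unrelated configuration file named 10-beats.conf purely as an example:

    - pipeline.id: syslog
      path.config: "/etc/logstash/conf.d/20-syslog.conf"
    - pipeline.id: beats
      path.config: "/etc/logstash/conf.d/10-beats.conf"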

Steps to configure a syslog input in Logstash:

  1. Create a pipeline configuration file at /etc/logstash/conf.d/20-syslog.conf.
    input {
      syslog {
        host => "0.0.0.0"
        port => 5514
      }
    }
    
    output {
      elasticsearch {
        hosts => ["http://elasticsearch.example.net:9200"]
        index => "syslog-%{+YYYY.MM.dd}"
      }
    }

    Listening on 0.0.0.0 accepts syslog from any reachable host, and the traffic is typically unencrypted, so restrict the listener to trusted networks and firewall the port to prevent spoofed or unintended log ingestion.
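
    As one hedged example (assuming ufw is in use and that 10.0.20.0/24 stands in for the trusted log-source network), the listener can be limited to that range, with a matching tcp rule if any senders use TCP:
    $ sudo ufw allow from 10.0.20.0/24 to any port 5514 proto udp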

  2. Test the pipeline configuration for syntax errors.
    $ sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
    ##### snipped #####
    Configuration OK
  3. Restart the Logstash service to load the syslog pipeline.
    $ sudo systemctl restart logstash
  4. Confirm the Logstash service is running after the restart.
    $ sudo systemctl status --no-pager --lines=20 logstash
    ● logstash.service - logstash
         Loaded: loaded (/usr/lib/systemd/system/logstash.service; enabled; preset: enabled)
         Active: active (running) since Wed 2026-01-07 05:04:11 UTC; 6s ago
    ##### snipped #####
  5. Configure syslog senders to target the Logstash host on port 5514.

    Set the remote syslog destination to the Logstash server address and port, then choose UDP or TCP based on device support and reliability requirements.
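
    As a sketch for a Linux sender running rsyslog (an assumption; other devices use their own syntax, and both logstash.example.net and the drop-in file name are placeholders), a forwarding rule sends all messages over UDP with a single @, or over TCP with @@:
    $ echo '*.* @logstash.example.net:5514' | sudo tee /etc/rsyslog.d/90-logstash.conf
    $ sudo systemctl restart rsyslog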

  6. Verify Logstash is listening on the syslog port.
    $ sudo ss -lntup | grep ':5514'
    udp   UNCONN 0      0                 0.0.0.0:5514       0.0.0.0:*    users:(("java",pid=19080,fd=93))
  7. Send a test syslog message to the listener.
    $ logger --rfc3164 --server 127.0.0.1 --port 5514 --udp --tag logstash-test "syslog input test"

    Use the actual Logstash hostname or IP instead of 127.0.0.1 when testing from another host.

  8. Search Elasticsearch for the test message in the syslog index.
    $ curl --silent --show-error --request GET --header 'Content-Type: application/json' --data '{"size":1,"sort":[{"@timestamp":{"order":"desc"}}],"query":{"match_phrase":{"message":"syslog input test"}}}' 'http://elasticsearch.example.net:9200/syslog-*/_search?pretty'
    {
      "took" : 68,
      "timed_out" : false,
      "_shards" : {
        "total" : 1,
        "successful" : 1,
        "skipped" : 0,
        "failed" : 0
      },
      "hits" : {
        "total" : {
          "value" : 2,
          "relation" : "eq"
        },
        "max_score" : null,
        "hits" : [
          {
            "_index" : "syslog-2026.01.07",
            "_id" : "KNXYlpsBMfcBipKWFKaX",
            "_score" : null,
            "_source" : {
              "@timestamp" : "2026-01-07T05:05:01.000Z",
              "event" : {
                "original" : "<13>Jan  7 05:05:01 host logstash-test: syslog input test"
              },
              "log" : {
                "syslog" : {
                  "priority" : 13,
                  "severity" : {
                    "name" : "Notice",
                    "code" : 5
                  },
                  "facility" : {
                    "name" : "user-level",
                    "code" : 1
                  }
                }
              },
              "@version" : "1",
              "service" : {
                "type" : "system"
              },
              "process" : {
                "name" : "logstash-test"
              },
              "ingest_source" : "beats",
              "message" : "syslog input test",
              "host" : {
                "ip" : "127.0.0.1",
                "hostname" : "host"
              }
            },
            "sort" : [
              1767762301000
            ]
          }
        ]
      }
    }

    Secured Elasticsearch deployments may require authentication and TLS, so add credentials (for example --user) and CA options (for example --cacert) to the curl request when HTTPS and security are enabled.
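
    A hedged variant of the same query for a secured deployment (the elastic username and the CA certificate path are placeholders; curl prompts for the password when --user is given without one):
    $ curl --silent --show-error --user elastic --cacert /etc/elasticsearch/certs/http_ca.crt --request GET --header 'Content-Type: application/json' --data '{"size":1,"sort":[{"@timestamp":{"order":"desc"}}],"query":{"match_phrase":{"message":"syslog input test"}}}' 'https://elasticsearch.example.net:9200/syslog-*/_search?pretty'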