A Logstash tcp input gives applications, custom scripts, and network log senders a direct way to push events into a pipeline without writing local files first. It is useful when the source can open a socket and send lines over the network, but does not need a heavier shipper or a message broker in front of Logstash.
The tcp input runs in server mode by default, listens on the configured host and port, and treats each event as one line of text unless another codec is set. Current Elastic plugin docs still list host defaulting to 0.0.0.0 and the common input codec defaulting to line, so JSON senders should deliver one complete JSON object per line and terminate each event with a newline.
Keep the first validation run on 127.0.0.1 when possible. Two version-specific pitfalls apply here: current Logstash 9.x packages reject config-test runs as root unless allow_superuser is enabled, and obsolete TCP SSL keys such as ssl_enable, ssl_cert, and ssl_verify are no longer accepted by current plugin releases. If remote systems must connect, replace the localhost bind with the required service address and pair it with firewall rules plus ssl_enabled and the current ssl_* settings before exposing the listener.
input {
  tcp {
    id => "tcp_json_5514"
    host => "127.0.0.1"
    port => 5514
    codec => json
  }
}

output {
  stdout {
    id => "stdout_tcp_debug"
    codec => rubydebug
  }
}
The temporary stdout output keeps the first validation focused on whether the tcp listener is decoding events at all, instead of mixing input work with a production output. The explicit input and output id values also make the pipeline stats API easier to read.
Current Logstash 9.3.2 validation logs show that a tcp input configured with codec => json automatically switches to json_lines, so each sender message should be one newline-terminated JSON document.
Binding to 127.0.0.1 keeps the first test local. If remote systems must connect, change the bind address, restrict access at the firewall, and use current TLS settings such as ssl_enabled, ssl_certificate, and ssl_key. Obsolete keys such as ssl_enable, ssl_cert, and ssl_verify cause the current plugin to fail to start.
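As a sketch of the remote-facing variant, the block below binds to all interfaces and enables TLS with the current ssl_* keys; the port, certificate path, and key path are placeholders for your environment, not values from the validation above:

```
input {
  tcp {
    id => "tcp_json_tls_5514"
    host => "0.0.0.0"
    port => 5514
    codec => json
    ssl_enabled => true
    ssl_certificate => "/etc/logstash/certs/logstash-tcp.crt"
    ssl_key => "/etc/logstash/certs/logstash-tcp.key"
  }
}
```

Pair a listener like this with firewall rules that restrict which senders can reach the port, as noted earlier.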
$ sudo -u logstash /usr/share/logstash/bin/logstash \
    --path.settings /etc/logstash \
    --path.data /tmp/logstash-tcp-input-configtest \
    --config.test_and_exit \
    -f /etc/logstash/conf.d/25-tcp-input.conf
Using bundled JDK: /usr/share/logstash/jdk
[2026-04-08T00:21:36,241][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because command line options are specified
[2026-04-08T00:21:37,019][INFO ][logstash.javapipeline    ][main] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
Configuration OK
[2026-04-08T00:21:37,031][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
Fix any reported errors before restarting, as invalid pipeline syntax prevents Logstash from starting cleanly. The temporary --path.data directory must be writable by the logstash user, and the warning about /etc/logstash/pipelines.yml being ignored is expected because this command validates only the file passed with -f.
Current Logstash 9.x packages default allow_superuser to false, so running the same check as root fails unless that setting was changed deliberately.
$ sudo systemctl restart logstash.service
Restarting Logstash briefly pauses every active pipeline in the service while inputs reopen and outputs reconnect.
$ sudo systemctl status logstash.service --no-pager --lines=20
● logstash.service - logstash
Loaded: loaded (/usr/lib/systemd/system/logstash.service; enabled; preset: enabled)
Active: active (running) since Tue 2026-04-08 00:22:05 UTC; 12s ago
Main PID: 22164 (java)
Tasks: 95 (limit: 28486)
Memory: 963.1M
##### snipped #####
$ sudo ss -lntp | grep -F ':5514'
LISTEN 0 4096 127.0.0.1:5514 0.0.0.0:* users:(("java",pid=22164,fd=196))
If you changed host to a service address or 0.0.0.0 for remote senders, ss should show that bind target instead of 127.0.0.1.
$ printf '{"message":"tcp input validation","service":"billing","level":"info"}\n' | nc -w 2 127.0.0.1 5514
The tcp input is line-oriented, so one JSON object per line is the safest sender format for codec => json on current releases.
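For senders that are applications rather than shell one-liners, the same newline-terminated framing can be sketched in a few lines of Python; the host, port, and event fields below mirror the nc test and are assumptions about your listener, not a required client library:

```python
import json
import socket


def send_json_event(host: str, port: int, event: dict) -> None:
    """Serialize one event as a single JSON line and send it over plain TCP."""
    # json.dumps with default settings emits a single line; the trailing
    # newline terminates the event for line-oriented codecs like json_lines.
    payload = json.dumps(event).encode("utf-8") + b"\n"
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(payload)
```

Calling `send_json_event("127.0.0.1", 5514, {"message": "tcp input validation", "service": "billing", "level": "info"})` reproduces the printf-and-nc test above.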
Current Logstash 9.x packages default pipeline ECS compatibility to v8. If sender keys can collide with ECS fields such as host, event, url, or service, assign a JSON codec target instead of decoding directly at the event root.
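A sketch of that approach, assuming [sender] as the target field name (any field not already used by the pipeline works):

```
input {
  tcp {
    id => "tcp_json_5514"
    host => "127.0.0.1"
    port => 5514
    codec => json { target => "[sender]" }
  }
}
```

With a target set, the decoded keys land under [sender] instead of the event root, so a sender field named host no longer collides with the ECS host object.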
$ curl -s http://localhost:9600/_node/stats/pipelines/main?pretty
{
"pipelines" : {
"main" : {
"events" : {
"filtered" : 1,
"in" : 1,
"out" : 1
},
"plugins" : {
"inputs" : [ {
"id" : "tcp_json_5514",
"name" : "tcp",
"events" : {
"out" : 1
}
} ]
}
}
}
}
The explicit input id keeps the stats readable when the same pipeline has multiple listeners. If the pipeline ID is not main on your host, replace it in the API path.
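If you check these counters repeatedly, a short script can pull the per-plugin numbers out of the stats payload instead of reading the raw JSON by eye. The sketch below assumes the response structure shown above and the default API port; the function and variable names are illustrative, not part of the Logstash API:

```python
import json
from urllib.request import urlopen


def tcp_input_event_count(stats: dict, pipeline: str, plugin_id: str) -> int:
    """Return the 'out' counter for one input plugin from a pipeline stats payload."""
    inputs = stats["pipelines"][pipeline]["plugins"]["inputs"]
    for plugin in inputs:
        if plugin["id"] == plugin_id:
            return plugin["events"]["out"]
    raise KeyError(f"no input with id {plugin_id!r} in pipeline {pipeline!r}")


def fetch_pipeline_stats(url: str = "http://localhost:9600/_node/stats/pipelines/main") -> dict:
    """Fetch pipeline stats from the Logstash monitoring API."""
    with urlopen(url, timeout=5) as resp:
        return json.load(resp)
```

For example, `tcp_input_event_count(fetch_pipeline_stats(), "main", "tcp_json_5514")` should rise by one for each event the listener decodes.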
$ sudo journalctl --unit=logstash --since "5 minutes ago" --no-pager --lines=40
Apr 08 00:22:05 logstash-01 logstash[22164]: [2026-04-08T00:22:05,368][INFO ][logstash.inputs.tcp ][main][tcp_json_5514] Starting tcp input listener {:address=>"127.0.0.1:5514", :ssl_enabled=>false}
Apr 08 00:22:27 logstash-01 logstash[22164]: {
Apr 08 00:22:27 logstash-01 logstash[22164]: "message" => "tcp input validation",
Apr 08 00:22:27 logstash-01 logstash[22164]: "service" => "billing",
Apr 08 00:22:27 logstash-01 logstash[22164]: "level" => "info"
Apr 08 00:22:27 logstash-01 logstash[22164]: }
With the temporary stdout output shown earlier, the journal is usually the quickest place to confirm the decoded event on package installs managed by systemd.
If the input counters rise but the journal does not show the event, stop the service and rerun the same pipeline in the foreground so the decoded event prints directly to the terminal:

$ sudo -u logstash /usr/share/logstash/bin/logstash \
    --path.settings /etc/logstash \
    --path.data /tmp/logstash-tcp-input-foreground \
    -f /etc/logstash/conf.d/25-tcp-input.conf