Accurate event timestamps keep searches, alerts, and dashboards ordered correctly even when log lines arrive late or out of sequence. In Logstash, leaving @timestamp at its default ingestion time can hide delays and distort incident timelines.
The date filter parses a timestamp from a field and writes it to @timestamp (or another field via target). The match setting accepts one or more formats such as ISO8601, UNIX, UNIX_MS, and custom tokens, and timezone can apply an IANA time zone when the source timestamp has no offset.
The source field must be extracted before the date filter runs, and a wrong timezone can shift events by hours, mis-ordering charts and misrouting time-based indices. Default package installs run Logstash under systemd and load pipeline fragments from /etc/logstash/conf.d, so paths and systemctl commands may differ on custom deployments. When parsing fails, the event is tagged (_dateparsefailure by default, or the tags set via tag_on_failure) and @timestamp keeps its ingestion-time value, so failures should be monitored during rollout.
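How far a mis-set timezone can move an event is easy to demonstrate outside Logstash. The sketch below is illustrative Python, not part of the pipeline; the timestamp and zone names are arbitrary examples:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A source timestamp with no UTC offset, as in many application logs
raw = "2026-01-07 22:09:30"
parsed = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")

# The same wall-clock time interpreted under two different zones
as_utc = parsed.replace(tzinfo=ZoneInfo("UTC"))
as_est = parsed.replace(tzinfo=ZoneInfo("America/New_York"))

shift_hours = (as_est - as_utc).total_seconds() / 3600
print(shift_hours)  # 5.0 hours apart in absolute time

# The mis-zoned event lands on the next UTC day, i.e. the wrong daily index
print(as_utc.date())                              # 2026-01-07
print(as_est.astimezone(ZoneInfo("UTC")).date())  # 2026-01-08
```

A five-hour offset like this is exactly the kind of error that silently splits one incident across two daily indices.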
Steps to use the Logstash date filter:
- Create a pipeline configuration file at /etc/logstash/conf.d/55-date.conf.
The date filter parses the field named in match; enabling codec => json on the input expands JSON log lines into fields.
Setting the wrong timezone shifts @timestamp and can place events into the wrong date-based Elasticsearch index.
```
input {
  file {
    path           => "/var/lib/logstash/examples/date.json"
    start_position => "beginning"
    sincedb_path   => "/var/lib/logstash/sincedb-date"
    codec          => json
  }
}

filter {
  if [log][file][path] == "/var/lib/logstash/examples/date.json" {
    date {
      id             => "date_log_time"
      match          => ["log_time", "ISO8601", "YYYY-MM-dd HH:mm:ss,SSS", "UNIX_MS"]
      target         => "@timestamp"
      timezone       => "UTC"
      tag_on_failure => ["_dateparsefailure_log_time"]
    }
  }
}

output {
  if [log][file][path] == "/var/lib/logstash/examples/date.json" {
    elasticsearch {
      hosts => ["http://elasticsearch.example.net:9200"]
      index => "app-date-%{+YYYY.MM.dd}"
    }
  }
}
```
- Test the pipeline configuration.
```
$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
```
Configuration testing validates pipeline syntax but does not confirm external connectivity.
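A quick way to exercise the pipeline after it passes validation is to append a matching JSON line to the input file. This is an illustrative Python sketch; the message field is hypothetical, and only the log_time format must match the filter's match patterns:

```python
import json
from datetime import datetime, timezone

# Build one event as a single JSON line, as expected by codec => json
event = {
    "message": "user login succeeded",  # hypothetical payload field
    "log_time": datetime.now(timezone.utc)
        .isoformat(timespec="milliseconds")
        .replace("+00:00", "Z"),        # ISO8601, e.g. 2026-01-07T22:09:30.789Z
}
line = json.dumps(event)
print(line)
# Appending this line to /var/lib/logstash/examples/date.json makes the
# file input pick it up on the next scan.
```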
- Restart the Logstash service to apply the filter.
```
$ sudo systemctl restart logstash
```
- Confirm the Logstash service is running after the restart.
```
$ sudo systemctl status logstash --no-pager
● logstash.service - logstash
     Loaded: loaded (/usr/lib/systemd/system/logstash.service; enabled; preset: enabled)
     Active: active (running) since Wed 2026-01-07 22:07:58 UTC; 3s ago
   Main PID: 35154 (java)
      Tasks: 31 (limit: 28486)
     Memory: 421.4M (peak: 421.4M)
        CPU: 13.967s
##### snipped #####
```
- Query Elasticsearch for a recent event to verify log_time was parsed into @timestamp.
```
$ curl -s -G "http://elasticsearch.example.net:9200/app-date-*/_search" \
    --data-urlencode "size=1" \
    --data-urlencode "sort=@timestamp:desc" \
    --data-urlencode "filter_path=hits.hits._index,hits.hits._source.log_time,hits.hits._source.@timestamp" \
    --data-urlencode "pretty"
{
  "hits" : {
    "hits" : [
      {
        "_index" : "app-date-2026.01.07",
        "_source" : {
          "log_time" : "2026-01-07T22:09:30.789Z",
          "@timestamp" : "2026-01-07T22:09:30.789Z"
        }
      }
    ]
  }
}
```
- Search indexed events for _dateparsefailure_log_time tags.
```
$ curl -s -G "http://elasticsearch.example.net:9200/app-date-*/_search" \
    --data-urlencode "q=tags:_dateparsefailure_log_time" \
    --data-urlencode "size=0" \
    --data-urlencode "filter_path=hits.total" \
    --data-urlencode "pretty"
{
  "hits" : {
    "total" : {
      "value" : 0,
      "relation" : "eq"
    }
  }
}
```
A non-zero count indicates a match pattern or timezone mismatch.
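The filter's fallback-and-tag behavior can be mirrored in plain Python to sanity-check candidate patterns before deploying them. This is a hedged sketch: strptime syntax stands in for the Joda-style tokens Logstash actually uses.

```python
from datetime import datetime

# strptime equivalents roughly mirroring the pipeline's match list
FORMATS = [
    "%Y-%m-%dT%H:%M:%S.%f%z",   # ISO8601 with offset, e.g. ...789Z
    "%Y-%m-%d %H:%M:%S,%f",     # YYYY-MM-dd HH:mm:ss,SSS
]

def parse_log_time(value):
    """Try each format in order; tag the event when none matches."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(value, fmt), []
        except ValueError:
            continue
    return None, ["_dateparsefailure_log_time"]

print(parse_log_time("2026-01-07 22:09:30,789"))  # parsed, no failure tag
print(parse_log_time("not-a-timestamp"))          # (None, ['_dateparsefailure_log_time'])
```

Running real log samples through such a checker before a rollout catches most match-list gaps without touching the pipeline.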
- Review pipeline metrics for event throughput, including date filter activity.
```
$ curl -s http://localhost:9600/_node/stats/pipelines?pretty
{
  "pipelines" : {
    "main" : {
      "plugins" : {
        "filters" : [
          {
            "id" : "date_log_time",
            "name" : "date",
            "events" : {
              "in" : 1,
              "out" : 1,
              "duration_in_millis" : 5
            }
          }
        ]
      }
    }
  }
}
```
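In a monitoring script, the stats payload can be reduced to a one-line health check. A minimal Python sketch, assuming a response shaped like the output above (the HTTP fetch from port 9600 is left out; the JSON is embedded for illustration):

```python
import json

# Abridged response body as returned by GET /_node/stats/pipelines
stats = json.loads("""
{"pipelines": {"main": {"plugins": {"filters": [
  {"id": "date_log_time", "name": "date",
   "events": {"in": 1, "out": 1, "duration_in_millis": 5}}
]}}}}
""")

for f in stats["pipelines"]["main"]["plugins"]["filters"]:
    if f["id"] == "date_log_time":
        ev = f["events"]
        # in == out means every event passed through the date filter
        print(f"in={ev['in']} out={ev['out']} ms={ev['duration_in_millis']}")
```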
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
