Normalizing event fields in Logstash prevents mapping surprises, broken visualizations, and inconsistent searches when logs arrive with drifting keys or types across environments.

The mutate filter runs in the pipeline's filter stage and performs common field operations such as adding fields, renaming keys, converting types, and removing unwanted data, which makes it a practical fit for aligning events with Elastic Common Schema (ECS) naming.
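
For orientation, here is a minimal sketch of those four operations in one filter block; the field names (srv, duration, tmp_debug) are illustrative and are not part of the configuration built below:

    filter {
      mutate {
        add_field    => { "[service][environment]" => "staging" }  # add a constant field
        rename       => { "srv" => "[service][name]" }             # rename a key to its ECS name
        convert      => { "duration" => "float" }                  # change the value's type
        remove_field => [ "tmp_debug" ]                            # drop unwanted data
      }
    }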

Field paths use Logstash field-reference syntax, so a nested target such as host.name must be written as [host][name]. Plan type conversion early: convert only changes the value inside the event, so if an existing Elasticsearch index has already mapped the field as text, a new index (or a reindex) is still needed before the numeric type takes effect.
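
On the first point, only bracket syntax produces the nested object (a sketch; source_host is the same scratch field used in the configuration below):

    mutate {
      # Produces the nested ECS field host.name:
      rename => { "source_host" => "[host][name]" }
      # A dotted target such as "host.name" would instead create a single
      # top-level key containing a literal dot, not a nested object.
    }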

Steps to use the Logstash mutate filter:

  1. Create a pipeline configuration file at /etc/logstash/conf.d/60-mutate.conf.
    input {
      file {
        path => "/var/lib/logstash/examples/mutate.log"
        start_position => "beginning"
        sincedb_path => "/var/lib/logstash/sincedb-mutate"
      }
    }
    
    filter {
      if [log][file][path] == "/var/lib/logstash/examples/mutate.log" {
        mutate {
          id => "mutate_add_fields"
          add_field => {
            "[service][environment]" => "production"
            "source_host" => "app01"
            "bytes" => "1234"
          }
        }
    
        mutate {
          id => "mutate_normalize_fields"
          rename => { "source_host" => "[host][name]" }
          convert => { "bytes" => "integer" }
        }
      }
    }
    
    output {
      if [log][file][path] == "/var/lib/logstash/examples/mutate.log" {
        elasticsearch {
          hosts => ["http://elasticsearch.example.net:9200"]
          index => "app-mutate-%{+YYYY.MM.dd}"
        }
      }
    }

    Use [parent][child] field references for nested fields. The convert option only acts on fields that already exist in the event, so extract values (for example, bytes) from the raw message first; a sketch of that pattern follows.
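
    For instance, extracting bytes from a raw line such as "app01 1234" before converting it might look like this (the dissect pattern is illustrative; adjust it to the actual log format):
    filter {
      dissect {
        # Split "app01 1234" into source_host and bytes (both strings at this point).
        mapping => { "message" => "%{source_host} %{bytes}" }
      }
      mutate {
        # Now that bytes exists as a field, convert can change its type.
        convert => { "bytes" => "integer" }
      }
    }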

  2. Test the pipeline configuration.
    $ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-configtest --config.test_and_exit
    Configuration OK
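    To exercise the filter logic itself, pipe one sample line through a throwaway pipeline (a sketch: with -e and no input block, Logstash reads from stdin, and the rubydebug codec prints the resulting event):
    $ echo 'sample line' | /usr/share/logstash/bin/logstash --path.data /tmp/logstash-stdin-test -e '
        filter {
          mutate { add_field => { "bytes" => "1234" } }
          mutate { convert => { "bytes" => "integer" } }
        }
        output { stdout { codec => rubydebug } }'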
  3. Restart the Logstash service to apply the filter.
    $ sudo systemctl restart logstash
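    If the service does not come back cleanly, the Logstash journal usually names the offending plugin and configuration line:
    $ sudo journalctl -u logstash --since "5 minutes ago"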
  4. Check pipeline metrics to confirm events are flowing through the mutate filter.
    $ curl --silent --show-error "http://localhost:9600/_node/stats/pipelines?pretty"
    {
      "pipelines" : {
        "main" : {
          "plugins" : {
            "filters" : [ {
              "id" : "mutate_normalize_fields",
              "events" : {
                "in" : 1,
                "out" : 1
              }
            }, {
              "id" : "mutate_add_fields",
              "events" : {
                "in" : 1,
                "out" : 1
              }
            } ]
          }
        }
      }
    ##### snipped #####
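    To isolate the two mutate counters from the full payload, a jq filter works (this assumes jq is installed; the IDs match the id settings in the configuration above):
    $ curl --silent http://localhost:9600/_node/stats/pipelines \
      | jq '.pipelines.main.plugins.filters[] | select(.id | startswith("mutate_"))'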
  5. Retrieve a sample event from Elasticsearch to confirm the updated fields are present.
    $ curl -s -G "http://elasticsearch.example.net:9200/app-mutate-*/_search" \
      --data-urlencode "size=1" \
      --data-urlencode "sort=@timestamp:desc" \
      --data-urlencode "filter_path=hits.hits._index,hits.hits._source.service,hits.hits._source.host,hits.hits._source.bytes" \
      --data-urlencode "pretty"
    {
      "hits" : {
        "hits" : [ {
          "_index" : "app-mutate-2026.01.07",
          "_source" : {
            "host" : {
              "name" : "app01"
            },
            "service" : {
              "environment" : "production"
            },
            "bytes" : 1234
          }
        } ]
      }
    ##### snipped #####
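    As a final check, confirm that bytes was mapped numerically rather than as text (with dynamic mapping on a fresh index, it typically comes back as long):
    $ curl -s "http://elasticsearch.example.net:9200/app-mutate-*/_mapping/field/bytes?pretty"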