Archiving Logstash events to Amazon S3 keeps logs available for audits, incident response, and cost-effective retention without holding everything in hot storage.

The logstash-output-s3 plugin buffers events into temporary files on local disk and uploads them to S3 as objects under a configurable bucket and prefix, rotating files according to settings such as size_file and time_file.

S3 uploads require working AWS credentials with permission to write objects to the target bucket. The plugin's local spool directory must also remain writable and have enough free space to absorb bursts and upload failures.
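
A minimal identity policy sketch for the writing role or user, assuming the archive bucket is named logs-archive as in the configuration below; the exact statements may vary with your setup, but s3:PutObject on the bucket's objects is the core requirement and s3:ListBucket is commonly granted alongside it:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "LogstashS3Archive",
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:ListBucket"],
          "Resource": [
            "arn:aws:s3:::logs-archive",
            "arn:aws:s3:::logs-archive/*"
          ]
        }
      ]
    }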

Steps to configure an S3 output in Logstash:

  1. Install the logstash-output-s3 plugin.
    $ sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-s3
    Using bundled JDK: /usr/share/logstash/jdk
    Validating logstash-output-s3
    ERROR: Installation aborted, plugin 'logstash-output-s3' is already provided by 'logstash-integration-aws'

    Plugin installation modifies the Logstash runtime and takes effect after a restart, so schedule the change alongside a controlled service restart.

    Logstash 8.19 bundles S3 output support through the logstash-integration-aws plugin, so the error above is expected and no separate installation is needed.
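
    To confirm the S3 output is available before touching the pipeline, list the installed plugins; the grep pattern here is only an example filter.
    $ sudo /usr/share/logstash/bin/logstash-plugin list --verbose | grep -E 'aws|s3'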

  2. Create a writable spool directory for S3 uploads.
    $ sudo install -d -o logstash -g logstash -m 0750 /var/lib/logstash/s3

    Failed uploads accumulate files in the spool directory and can fill the filesystem if S3 is unreachable or credentials are wrong.
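
    One simple spot check for a backlog is to watch the spool directory's size and file count; how to alert on it (monitoring agent, cron job) is left to your environment.
    $ sudo du -sh /var/lib/logstash/s3 && sudo find /var/lib/logstash/s3 -type f | wc -l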

  3. Create a pipeline configuration file at /etc/logstash/conf.d/90-s3.conf.
    input {
      file {
        path => "/var/log/app/app.log"
        start_position => "beginning"
        sincedb_path => "/var/lib/logstash/sincedb-app"
      }
    }

    output {
      s3 {
        id => "s3_archive"
        bucket => "logs-archive"
        region => "us-east-1"
        # date-based object prefix derived from the event @timestamp
        prefix => "logstash/%{+YYYY/MM/dd}"
        temporary_directory => "/var/lib/logstash/s3"
        # endpoint and force_path_style are only needed for S3-compatible storage;
        # omit both when writing directly to Amazon S3
        endpoint => "http://s3.example.net:9000"
        additional_settings => {"force_path_style" => true}
        # rotate the local file when either limit is reached: size_file in bytes (10 MiB) or time_file in minutes
        rotation_strategy => "size_and_time"
        size_file => 10485760
        time_file => 15
        codec => "json_lines"
      }
    }

    AWS credentials are typically provided via instance roles, container task roles, or environment variables read by the Logstash service, keeping access keys out of pipeline files.
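
    When static keys are unavoidable, one option is a systemd drop-in that exports them to the service environment, where they are picked up through the standard AWS credential chain; the values below are placeholders, not working keys.
    $ sudo systemctl edit logstash
    # add to the drop-in file opened by systemctl edit:
    [Service]
    Environment="AWS_ACCESS_KEY_ID=AKIA..."
    Environment="AWS_SECRET_ACCESS_KEY=..."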

  4. Test the pipeline configuration.
    $ sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
    Using bundled JDK: /usr/share/logstash/jdk
    Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
    [2026-01-08T08:36:48,523][WARN ][logstash.runner          ] NOTICE: Running Logstash as a superuser is strongly discouraged as it poses a security risk. Set 'allow_superuser' to false for better security.
    ##### snipped #####
    Configuration OK
    [2026-01-08T08:36:49,027][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
  5. Restart the Logstash service to apply the S3 output.
    $ sudo systemctl restart logstash
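    Watching the service log during startup catches S3 registration problems such as bad credentials or an unreachable endpoint; this assumes the default plain-text log file under the /var/log/logstash directory shown in the test output above.
    $ sudo tail -f /var/log/logstash/logstash-plain.log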
  6. Verify the s3 output appears in the node pipeline statistics.
    $ curl -s http://localhost:9600/_node/stats/pipelines?pretty
    {
      "pipelines" : {
        "main" : {
          "events" : {
            "in" : 2,
            "duration_in_millis" : 64,
            "filtered" : 2,
            "out" : 2,
            "queue_push_duration_in_millis" : 0
          },
          "plugins" : {
            "outputs" : [ {
              "id" : "s3_archive",
              "events" : {
                "in" : 2,
                "duration_in_millis" : 63,
                "out" : 2
              }
            } ]
          }
        }
      }
    ##### snipped #####
    }

    If no s3 output entry appears, the usual causes are a pipeline compile error, a missing plugin, or a pipeline running under an ID other than main.
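
    Beyond the pipeline statistics, objects should appear under the configured prefix once a rotation completes, so new files may take up to time_file minutes to show up. With the AWS CLI installed, one possible spot check using the bucket and endpoint from the example configuration is:
    $ aws s3 ls s3://logs-archive/logstash/ --recursive --endpoint-url http://s3.example.net:9000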