How to configure Logstash output to Amazon S3

Sending selected Logstash events to Amazon S3 creates a lower-cost retention tier for audit trails, replay data, and long-lived logs that do not need to remain in hot search storage forever.

The s3 output batches events into temporary local files, rotates those files by size or time, and uploads them to a path built from the bucket name and an optional interpolated prefix. With crash recovery enabled, unfinished temporary files can be recovered and uploaded after an abnormal stop instead of being discarded.

Elastic currently documents this output as supported only for AWS S3, not for generic S3-compatible endpoints. Current Logstash packages ship the plugin through logstash-integration-aws, so installation is rarely an issue; the main operational concerns are credentials, bucket permissions, and a writable temporary_directory with enough free space to absorb retries or destination outages.

Steps to configure Logstash output to Amazon S3:

  1. Confirm the current Logstash build already includes the s3 output.
    $ sudo /usr/share/logstash/bin/logstash-plugin list --verbose | grep -E 'logstash-integration-aws|logstash-output-s3' | sed 's/^[^a-z]*//'
    Using bundled JDK: /usr/share/logstash/jdk
    logstash-integration-aws (7.2.1)
    logstash-output-s3

    Current Logstash 9.x installs bundle the s3 output through logstash-integration-aws, so a separate standalone plugin install is normally unnecessary.
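
    If the listing comes back empty, the integration plugin can be installed manually. The command below assumes internet access from the host and the standard package layout.
    $ sudo /usr/share/logstash/bin/logstash-plugin install logstash-integration-aws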

  2. Create a dedicated writable temporary directory for S3 uploads.
    $ sudo install -d -o logstash -g logstash -m 0750 /var/lib/logstash/s3-output

    Temporary files grow here while uploads are pending or retrying, so keep the directory on monitored storage with enough free space for burst traffic and S3 outages.

    When multiple s3 outputs use restore, give each output its own temporary_directory so recovered files do not collide.
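
    As a sketch, two restore-enabled outputs could be separated like this; the ids, bucket, and directory names are illustrative, and each directory needs to be created as in the install command above.
    output {
      s3 {
        id => "s3_audit"
        bucket => "logs-archive"
        region => "us-east-1"
        restore => true
        temporary_directory => "/var/lib/logstash/s3-output-audit"
      }
      s3 {
        id => "s3_replay"
        bucket => "logs-archive"
        region => "us-east-1"
        restore => true
        temporary_directory => "/var/lib/logstash/s3-output-replay"
      }
    }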

  3. Create or update the S3 output block in /etc/logstash/conf.d/40-s3-output.conf.
    output {
      s3 {
        id => "s3_archive"
        bucket => "logs-archive"
        region => "us-east-1"
        prefix => "logstash/%{+YYYY/MM/dd}/"
        temporary_directory => "/var/lib/logstash/s3-output"
        restore => true
        rotation_strategy => "size_and_time"
        size_file => 10485760
        time_file => 15
        codec => json_lines
      }
    }

    The plugin follows the AWS SDK credential chain. Prefer an IAM instance profile, task role, or other role-based credential source when available. If static keys are unavoidable, reference them through environment or keystore-backed variables such as ${AWS_ACCESS_KEY_ID} and ${AWS_SECRET_ACCESS_KEY} instead of writing literal secrets into the pipeline file. Related: How to add a secret to a Logstash keystore
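
    In config form, the keystore-backed fallback is a fragment like the one below; access_key_id and secret_access_key are the plugin options involved, and the variable names assume matching keystore or environment entries.
    s3 {
      # other settings as in the block above
      access_key_id => "${AWS_ACCESS_KEY_ID}"
      secret_access_key => "${AWS_SECRET_ACCESS_KEY}"
    }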

    Elastic documents this output as AWS S3 only. Although the plugin still exposes options such as endpoint and additional_settings, the supported path described here is an AWS S3 bucket rather than a generic S3-compatible service.

    Keep the prefix coarse-grained, such as a date path. Highly unique interpolated prefixes can leave too many uploads open at once and hurt stability. If the IAM policy allows writes only under a sub-prefix and not at the bucket root, add validate_credentials_on_root_bucket => false.
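
    As a sketch of such a scoped policy, the statement below grants object writes only under the sub-prefix used in this example; the bucket name and prefix are the illustrative values from the block above.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:PutObject"],
          "Resource": "arn:aws:s3:::logs-archive/logstash/*"
        }
      ]
    }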

  4. Test the pipeline configuration with the packaged settings directory and a temporary data path.
    $ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.data /tmp/logstash-s3-configtest --config.test_and_exit -f /etc/logstash/conf.d/40-s3-output.conf
    Using bundled JDK: /usr/share/logstash/jdk
    Configuration OK
    [2026-04-07T23:57:34,162][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

    Current Logstash 9.x defaults allow_superuser to false, so run the validation as the logstash service account unless that setting was intentionally changed.

    This check validates syntax and plugin settings only. It does not prove that AWS credentials, bucket policy, DNS resolution, or PutObject permission are correct.
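
    One way to exercise the same bucket permission outside Logstash, assuming the AWS CLI runs under the same role and the illustrative bucket from step 3, is a one-off upload from stdin.
    $ echo connectivity-test | aws s3 cp - s3://logs-archive/logstash/connectivity-test.txt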

  5. Restart the Logstash service so the updated output is loaded.
    $ sudo systemctl restart logstash.service

    Restarting Logstash briefly pauses every active pipeline while inputs, filters, queues, and outputs are rebuilt.

    If /etc/logstash/logstash.yml already enables config.reload.automatic, a pipeline file change can be picked up without a full service restart. Plugin changes, logstash.yml changes, and service-unit changes still require a restart.
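
    Both reload settings are plain entries in /etc/logstash/logstash.yml; 3s is the documented default interval.
    config.reload.automatic: true
    config.reload.interval: 3s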

  6. Verify the output plugin is loaded in the running pipeline statistics.
    $ curl -s 'http://127.0.0.1:9600/_node/stats/pipelines/main?pretty' | grep -n -A6 '"id" : "s3_archive"'
    22:          "id" : "s3_archive",
    23-          "events" : {
    24-            "in" : 12,
    25-            "duration_in_millis" : 91,
    26-            "out" : 12
    27-          }

    Replace main with the actual pipeline id when /etc/logstash/pipelines.yml uses another name.

    If the id is missing, the pipeline may not have loaded, the output id may differ from s3_archive, or the monitoring API may still be disabled or bound to another port. Related: How to check Logstash pipeline metrics
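
    When jq is available on the host, the stats API can also list every loaded pipeline id, which confirms the right name for the URL above; in this walkthrough it would print main.
    $ curl -s 'http://127.0.0.1:9600/_node/stats/pipelines' | jq -r '.pipelines | keys[]'
    main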

  7. After a fresh event passes through the pipeline, list the S3 prefix to confirm new objects are being uploaded.
    $ aws s3 ls s3://logs-archive/logstash/ --recursive | tail -n 1
    2026-04-07 14:25:33        248 logstash/2026/04/07/ls.s3.312bc026-2f5d-49bc-ae9f-5940cf4ad9a6.2026-04-07T14.25.part0.txt

    The object key pattern shows the configured prefix followed by the plugin-generated file name. If the AWS CLI is not present on the Logstash host, confirm the same prefix in the S3 console or another S3 client instead.

    An empty listing usually means the pipeline has not processed a new event yet or uploads are failing. Send a fresh event through the pipeline, then inspect /var/log/logstash/logstash-plain.log and the local temporary_directory for retrying spool files.
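
    To separate the two cases, look for aging spool files in the temporary directory; files that outlive the 15-minute rotation window from step 3 and never disappear usually indicate failed or retrying uploads.
    $ sudo find /var/lib/logstash/s3-output -type f -mmin +30 -ls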