Bulk indexing pushes many documents to Elasticsearch in fewer HTTP requests, which significantly improves ingestion speed for log backfills, event streams, and one-time imports.

The Bulk API accepts NDJSON where each operation is described on a metadata line (such as index) followed by a source document line. Actions can target different indices in the same request, and the request must be sent as Content-Type: application/x-ndjson so the server parses it as newline-delimited JSON.

Bulk requests can overwhelm a cluster if they are too large or too frequent, and an HTTP 200 response can still contain per-item failures. Security-enabled clusters often require HTTPS plus authentication, and forced refreshes can reduce indexing throughput during sustained ingestion.
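To keep individual requests bounded, large imports are usually split into fixed-size batches before submission. A minimal Python sketch (the function name, batch size, and document shape are illustrative, not part of any Elasticsearch client API):

```python
import json

def bulk_batches(docs, index, batch_size=500):
    """Yield NDJSON bulk bodies, each covering at most batch_size documents."""
    for start in range(0, len(docs), batch_size):
        lines = []
        for doc in docs[start:start + batch_size]:
            lines.append(json.dumps({"index": {"_index": index}}))
            lines.append(json.dumps(doc))
        # The Bulk API requires the body to end with a newline.
        yield "\n".join(lines) + "\n"

docs = [{"n": i} for i in range(1200)]
bodies = list(bulk_batches(docs, "logs-2026.01", batch_size=500))
# 1200 docs at 500 per batch -> 3 bodies (500, 500, 200 documents)
```

Each yielded body can be POSTed to _bulk separately, which keeps request sizes predictable and limits the blast radius of a single failed request.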

Steps to bulk index documents into Elasticsearch:

  1. Create a bulk request file in NDJSON format.
    $ cat > bulk.ndjson <<'BULK'
    { "index": { "_index": "logs-2026.01" } }
    { "timestamp": "2026-01-21T10:13:02Z", "level": "INFO", "message": "service started" }
    { "index": { "_index": "logs-2026.01" } }
    { "timestamp": "2026-01-21T10:14:45Z", "level": "ERROR", "message": "connection timeout" }
    BULK

    Each JSON object must be a single line, and the file must end with a trailing newline for the bulk parser to accept the final action.
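Both requirements can be checked before submitting anything to the cluster. A small validator sketch in Python (validate_ndjson is a hypothetical helper, not an Elasticsearch API):

```python
import json

def validate_ndjson(text):
    """Check that a bulk body is one JSON object per line and newline-terminated."""
    if not text.endswith("\n"):
        raise ValueError("bulk body must end with a trailing newline")
    for lineno, line in enumerate(text.splitlines(), 1):
        try:
            json.loads(line)
        except json.JSONDecodeError as exc:
            raise ValueError(f"line {lineno} is not valid JSON: {exc}")
    return True
```

Running this over bulk.ndjson before step 2 catches pretty-printed (multi-line) objects and a missing final newline, the two most common reasons the bulk parser rejects a file.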

  2. Submit the bulk request to Elasticsearch.
    $ curl -sS -H "Content-Type: application/x-ndjson" -X POST "http://localhost:9200/_bulk?filter_path=took,errors,items.*.status,items.*.error&pretty" --data-binary @bulk.ndjson
    {
      "errors" : false,
      "took" : 181,
      "items" : [
        {
          "index" : {
            "status" : 201
          }
        },
        {
          "index" : {
            "status" : 201
          }
        }
      ]
    }

    Use --data-binary so the file is sent verbatim; -d @file strips newlines, which breaks NDJSON. Treat "errors": true or any non-2xx item status as a failed ingest.
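That per-item check is easy to automate once the response is parsed as JSON. A sketch of a failure collector (bulk_failures is an illustrative helper; the item structure matches the filtered response shown above):

```python
def bulk_failures(response):
    """Collect per-item failures from a parsed bulk response body."""
    failures = []
    for item in response.get("items", []):
        # Each item is keyed by its action type, e.g. "index" or "create".
        action, result = next(iter(item.items()))
        status = result.get("status", 0)
        if not 200 <= status < 300:
            failures.append({"action": action,
                             "status": status,
                             "error": result.get("error")})
    return failures

resp = {
    "errors": True,
    "items": [
        {"index": {"status": 201}},
        {"index": {"status": 429,
                   "error": {"type": "es_rejected_execution_exception"}}},
    ],
}
failed = bulk_failures(resp)
# -> one failed item with status 429
```

Scanning every item rather than trusting the top-level errors flag alone also surfaces which specific documents were rejected, which matters when deciding what to retry.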

  3. Refresh the index for immediate search visibility.
    $ curl -sS -X POST "http://localhost:9200/logs-2026.01/_refresh"
    {"_shards":{"total":2,"successful":1,"failed":0}}

    Frequent refreshes reduce indexing throughput and can increase disk I/O during large imports.

  4. Check the document count for the target index.
    $ curl -sS "http://localhost:9200/logs-2026.01/_count?pretty"
    {
      "count" : 2,
      "_shards" : {
        "total" : 1,
        "successful" : 1,
        "skipped" : 0,
        "failed" : 0
      }
    }

  5. Run a simple search to confirm documents are queryable with expected fields.
    $ curl -sS -H "Content-Type: application/json" -X POST "http://localhost:9200/logs-2026.01/_search?filter_path=hits.total,hits.hits._source&pretty" -d '
    {
      "size": 2,
      "sort": [
        { "timestamp": "asc" }
      ]
    }'
    {
      "hits" : {
        "total" : {
          "value" : 2,
          "relation" : "eq"
        },
        "hits" : [
          {
            "_source" : {
              "timestamp" : "2026-01-21T10:13:02Z",
              "level" : "INFO",
              "message" : "service started"
            }
          },
          {
            "_source" : {
              "timestamp" : "2026-01-21T10:14:45Z",
              "level" : "ERROR",
              "message" : "connection timeout"
            }
          }
        ]
      }
    }
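
    The field check in step 5 can also be scripted against the parsed search response. A sketch in Python (check_hits is a hypothetical helper; the required field names match the documents indexed in step 1):

```python
def check_hits(search_response, required_fields=("timestamp", "level", "message")):
    """Return the hit count and, per hit, any expected fields that are absent."""
    hits = search_response["hits"]["hits"]
    missing = []
    for hit in hits:
        source = hit["_source"]
        absent = [f for f in required_fields if f not in source]
        if absent:
            missing.append(absent)
    return len(hits), missing

resp = {
    "hits": {
        "total": {"value": 2, "relation": "eq"},
        "hits": [
            {"_source": {"timestamp": "2026-01-21T10:13:02Z",
                         "level": "INFO", "message": "service started"}},
            {"_source": {"timestamp": "2026-01-21T10:14:45Z",
                         "level": "ERROR", "message": "connection timeout"}},
        ],
    }
}
count, missing = check_hits(resp)
# -> (2, []) for the response shown in step 5
```

An empty missing list together with the expected count gives a quick pass/fail signal for the whole ingest, suitable for a post-import CI check.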