Production configuration turns Elasticsearch from a development node into a service that can hold cluster state safely, survive restarts cleanly, and keep indexing plus search latency predictable under sustained load.
Self-managed package installs read static node settings from /etc/elasticsearch/elasticsearch.yml, optional JVM overrides from /etc/elasticsearch/jvm.options.d, and service limits from the systemd unit. Current self-managed releases also secure the HTTP endpoint by default when automatic security setup runs successfully, so production API checks should go through the authenticated HTTPS endpoint.
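A quick way to confirm this layout on a package install (paths are the package defaults):

```shell
$ sudo ls /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/jvm.options.d
$ systemctl cat elasticsearch | grep -i limit
```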
Moving a node to production mode is what makes bootstrap checks decisive instead of advisory. As soon as network.host or other non-loopback networking is configured, stale discovery settings, weak kernel limits, swap-backed memory, or an incomplete TLS plan can stop the service from starting. Current Elastic guidance also recommends leaving JVM heap auto-sizing in place for most production nodes unless capacity testing shows a clear reason to override it.
path:
  data: /srv/elasticsearch/data
  logs: /srv/elasticsearch/logs
Debian and RPM packages already store data under /var/lib/elasticsearch and logs under /var/log/elasticsearch. Set custom paths only for dedicated volumes or alternate layouts, and keep ownership on the elasticsearch service account.
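If the custom layout above is used, the volumes can be prepared along these lines (the 750 mode is a local choice, not an Elasticsearch requirement):

```shell
$ sudo mkdir -p /srv/elasticsearch/data /srv/elasticsearch/logs
$ sudo chown -R elasticsearch:elasticsearch /srv/elasticsearch
$ sudo chmod 750 /srv/elasticsearch/data /srv/elasticsearch/logs
```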
# /etc/elasticsearch/jvm.options.d/heap.options
-Xms4g
-Xmx4g
Current Elastic guidance recommends the default auto-sized heap for most production nodes. If overriding it, keep Xms and Xmx equal, keep total heap at no more than 50% of available RAM, and stay below the compressed ordinary object pointer threshold, which is about 26GB on most systems.
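The sizing rule can be sketched as a small helper; suggest_heap_gb is hypothetical and only mirrors the 50%-of-RAM and 26 GB caps described above:

```shell
# Sketch: pick an explicit heap size from total RAM in GB.
# suggest_heap_gb is a hypothetical helper, not an Elasticsearch tool.
suggest_heap_gb() {
  local ram_gb="$1"
  local half=$(( ram_gb / 2 ))          # no more than 50% of available RAM
  if [ "$half" -gt 26 ]; then half=26; fi   # stay under the compressed-oops threshold
  echo "$half"
}
```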
# /etc/sysctl.d/99-elasticsearch.conf
vm.max_map_count = 1048576
$ sudo sysctl --system
* Applying /etc/sysctl.d/99-elasticsearch.conf ...
vm.max_map_count = 1048576
##### snipped #####
The bootstrap check minimum is 262144, while current Elastic guidance recommends 1048576 for production hosts.
$ sudo swapoff --all
Remove or comment the Elasticsearch host's swap entries in /etc/fstab so swap does not return on reboot. If swap cannot be fully disabled, lower swappiness and use bootstrap.memory_lock as part of the mitigation.
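The fstab edit can be sketched as a sed filter; comment_swap_entries is a hypothetical helper, shown here so the pattern can be tested on sample text before running it with sed -i against a backed-up /etc/fstab:

```shell
# Sketch: prefix every uncommented swap entry with "# ".
# Matches lines whose whitespace-separated fields include the type "swap".
comment_swap_entries() {
  sed -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/'
}
```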
$ sudo systemctl show elasticsearch --property=LimitNOFILE,LimitNPROC,LimitMEMLOCK,LimitFSIZE,LimitAS
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
LimitNOFILE=65535
LimitNPROC=4096
LimitFSIZE=infinity
LimitAS=infinity
Current Debian and RPM units already default LimitNOFILE to at least 65535. The override is mainly for stricter local policy or when bootstrap.memory_lock: true must be honored.
$ sudo systemctl daemon-reload
bootstrap.memory_lock: true
Do not leave bootstrap.memory_lock enabled unless the service actually receives unlimited memlock, or the node will fail bootstrap checks in production mode.
cluster.name: logging-prod
node.name: es-prod-01
node.roles: [ master, data, ingest ]
Do not reuse the same cluster.name across development, staging, and production. Once node.roles is set, it fully replaces the default role set, so every required role must be listed explicitly.
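For example, because the default role set is replaced entirely, a dedicated master-eligible node would list only that one role:

```yaml
# Dedicated master-eligible node: omitting data and ingest is intentional,
# since node.roles fully replaces the defaults rather than extending them.
node.roles: [ master ]
```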
network.host: 192.0.2.11
Setting network.host moves the node into production mode and turns bootstrap warnings into startup-stopping exceptions. Keep the transport port internal to trusted cluster nodes only.
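One way to keep port 9300 internal, assuming 192.0.2.0/24 is the trusted cluster network and ufw is the local firewall (both are assumptions, not Elasticsearch requirements):

```shell
$ sudo ufw allow proto tcp from 192.0.2.0/24 to any port 9300
$ sudo ufw deny 9300/tcp
```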
discovery.seed_hosts:
  - es-prod-01.example.net:9300
  - es-prod-02.example.net:9300
  - es-prod-03.example.net:9300
Every address in discovery.seed_hosts must resolve to a reachable master-eligible node. Use DNS names or IP addresses that stay stable across restarts.
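A quick pre-flight loop over the seed hosts above:

```shell
$ for h in es-prod-01.example.net es-prod-02.example.net es-prod-03.example.net; do
>   getent hosts "$h" > /dev/null || echo "UNRESOLVED: $h"
> done
```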
cluster.initial_master_nodes:
  - es-prod-01
  - es-prod-02
  - es-prod-03
The entries must match the final node.name values exactly, including FQDN versus short hostnames.
Remove cluster.initial_master_nodes from every node after the first successful cluster formation. Never keep it on restarting nodes, nodes joining an existing cluster, or a full-cluster restart.
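After restarting without the setting, confirm a master is still elected; the credentials and CA path follow the other examples in this section:

```shell
$ curl --silent --cacert /etc/elasticsearch/certs/http_ca.crt \
    --user elastic:password 'https://localhost:9200/_cat/master?v'
```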
Current self-managed package installs usually generate /etc/elasticsearch/certs/http_ca.crt and secure the HTTP API automatically on first start. Replace the auto-generated certificates with long-lived trusted certificates before exposing the API beyond the management network.
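Before replacing the auto-generated CA, its subject and lifetime can be inspected:

```shell
$ sudo openssl x509 -in /etc/elasticsearch/certs/http_ca.crt -noout -subject -enddate
```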
path.repo: ["/srv/elasticsearch/snapshots"]
path.repo whitelists the filesystem locations that shared file system snapshot repositories may use. Filesystem copies of the data directory are not a supported backup method; use snapshots only.
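With path.repo in place, a shared file system repository can be registered through the snapshot API; the repository name local_fs is an arbitrary example:

```shell
$ curl --silent --cacert /etc/elasticsearch/certs/http_ca.crt --user elastic:password \
    --request PUT 'https://localhost:9200/_snapshot/local_fs' \
    --header 'Content-Type: application/json' \
    --data '{"type":"fs","settings":{"location":"/srv/elasticsearch/snapshots"}}'
```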
$ sudo systemctl restart elasticsearch
$ systemctl is-active elasticsearch
active
Use sudo journalctl --unit=elasticsearch.service --no-pager --lines=80 when the unit does not return active.
$ sysctl vm.max_map_count
vm.max_map_count = 1048576
Anything below the required threshold will fail bootstrap checks as soon as the node runs in production mode.
$ curl --silent --show-error --cacert /etc/elasticsearch/certs/http_ca.crt --user elastic:password 'https://localhost:9200/_nodes/_local?filter_path=nodes.*.process.mlockall&pretty'
{
"nodes" : {
"Seu8a6NcQjm1SD-3o49mfg" : {
"process" : {
"mlockall" : true
}
}
}
}
Skip this check when bootstrap.memory_lock is intentionally disabled because swap is fully removed from the host.
$ curl --silent --show-error --cacert /etc/elasticsearch/certs/http_ca.crt --user elastic:password 'https://localhost:9200/_cluster/health?pretty'
{
"cluster_name" : "logging-prod",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 128,
"active_shards" : 256,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"unassigned_primary_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
Recent package installs default to https with authentication when automatic security setup succeeds. Use the local CA file or a trusted replacement certificate, and keep investigating until the cluster reaches the expected node count and shard state.
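Rather than polling manually, the health API can block until the expected state is reached; the node count of 3 matches the cluster shown above:

```shell
$ curl --silent --cacert /etc/elasticsearch/certs/http_ca.crt --user elastic:password \
    'https://localhost:9200/_cluster/health?wait_for_status=green&wait_for_nodes=3&timeout=60s&pretty'
```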