Loading the built-in Filebeat dashboards into Kibana makes incoming log data usable immediately for troubleshooting, trend checks, and module-specific analysis without building saved searches or visualizations by hand.
Filebeat ships dashboards, visualizations, searches, and the filebeat-* data view as saved objects. Running filebeat setup --dashboards imports those assets through the Kibana API, while Filebeat also checks the connected Elasticsearch version as part of the setup run.
The Kibana endpoint must be reachable from the host running Filebeat, and the setup account needs permission to import saved objects. Current Filebeat releases can also auto-load dashboards with setup.dashboards.enabled: true, but that repeats the import on startup and stops Filebeat if Kibana is unavailable, so the explicit one-time setup command is usually safer for existing environments.
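For reference, the auto-load behavior mentioned above is switched on with a single setting; a minimal filebeat.yml fragment, with an illustrative endpoint:

```yaml
# /etc/filebeat/filebeat.yml
# Re-imports the dashboards on every Filebeat start and stops Filebeat
# if Kibana is unreachable, so prefer the one-time setup command instead.
setup.dashboards.enabled: true
setup.kibana:
  host: "https://kibana.example.net:5601"
```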
Steps to load Filebeat dashboards into Kibana:
- Configure the Kibana endpoint that Filebeat should use for dashboard loading in /etc/filebeat/filebeat.yml.
setup.kibana:
  host: "https://kibana.example.net:5601"
  username: "filebeat_setup"
  password: "${KIBANA_PASSWORD}"
  space.id: "observability"
  ssl.certificate_authorities: ["/etc/filebeat/certs/kibana-ca.crt"]

setup.kibana.username and setup.kibana.password are optional when output.elasticsearch already uses an account that can import saved objects. Add setup.kibana.path when Kibana is published behind a reverse-proxy base path, and use the Filebeat keystore or environment expansion instead of leaving plain passwords in /etc/filebeat/filebeat.yml.
- Validate the Filebeat configuration before importing dashboards.
$ sudo filebeat test config -c /etc/filebeat/filebeat.yml
Config OK
Related: How to test a Filebeat configuration
- Query the Kibana status API to confirm the target endpoint is reachable and available.
$ curl --silent --show-error --user filebeat_setup \
    --cacert /etc/filebeat/certs/kibana-ca.crt \
    "https://kibana.example.net:5601/api/status" | jq '.status.overall'
{
  "level": "available",
  "summary": "All services and plugins are available"
}

Use the same host, base path, and CA chain that Filebeat will use. On unsecured lab systems, omit --user and --cacert.
Related: How to check Kibana status
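The same availability check can be scripted when the import is part of an automated rollout. A minimal Python sketch, assuming a status response body shaped like the one above (the sample payload is illustrative; a live check would fetch it from the same Kibana endpoint Filebeat uses):

```python
import json

# Illustrative /api/status response body, matching the curl output above.
sample_body = """
{
  "status": {
    "overall": {
      "level": "available",
      "summary": "All services and plugins are available"
    }
  }
}
"""

def kibana_is_available(body: str) -> bool:
    """Return True when the Kibana status API reports overall availability."""
    overall = json.loads(body)["status"]["overall"]
    return overall.get("level") == "available"

print(kibana_is_available(sample_body))  # prints True for the sample payload
```

Gating the setup run on this check avoids a failed import when Kibana is still starting up.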
- Run the one-time dashboard import from the Filebeat host.
$ sudo filebeat setup --dashboards -e -c /etc/filebeat/filebeat.yml
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Filebeat overwrites matching dashboards, visualizations, searches, and the filebeat-* data view when the imported saved object IDs already exist.
Current releases also log a "Kibana dashboards successfully loaded." message to standard error when -e is enabled.
If output.logstash is normally enabled, temporarily disable it and point the setup run at Elasticsearch because Filebeat still checks Elasticsearch version information while loading dashboards:
$ sudo filebeat setup --dashboards -e \
    -E output.logstash.enabled=false \
    -E output.elasticsearch.hosts=['https://es.example.net:9200'] \
    -E output.elasticsearch.username=filebeat_setup \
    -E output.elasticsearch.password=${ES_SETUP_PASSWORD} \
    -E setup.kibana.host=https://kibana.example.net:5601

- Confirm Filebeat dashboards and the filebeat-* data view exist in Kibana.
$ curl --silent --show-error --user filebeat_setup \
    --cacert /etc/filebeat/certs/kibana-ca.crt -H 'kbn-xsrf: true' \
    "https://kibana.example.net:5601/api/saved_objects/_find?type=dashboard&search_fields=title&search=filebeat&per_page=1" | jq '.total'
76

$ curl --silent --show-error --user filebeat_setup \
    --cacert /etc/filebeat/certs/kibana-ca.crt -H 'kbn-xsrf: true' \
    "https://kibana.example.net:5601/api/saved_objects/_find?type=index-pattern&search_fields=title&search=filebeat&per_page=1" | jq -r '.saved_objects[0].attributes.title'
filebeat-*
Prefix the API path with /s/<space_id> when assets were loaded into a non-default Kibana space. The Saved Objects API still uses index-pattern for data views, and the imported dashboards should also appear under Analytics -> Dashboards after searching for filebeat.
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
