Exporting Kibana saved objects creates a portable .ndjson snapshot of dashboards, data views, maps, queries, and other workspace content. That export is useful for backups, promotion between environments, and repeatable recovery after accidental deletion or space cleanup.
Kibana stores this content as saved objects and exposes the POST /api/saved_objects/_export endpoint to stream it as NDJSON. Exports can target explicit object ids or broader type-based selections, and includeReferencesDeep controls whether related child objects are added to the file.
Access to the UI or API requires a role with the Saved Objects Management Kibana privilege. The examples below use a local HTTP endpoint for clarity; add the correct authentication headers or --user plus --cacert when Kibana security or TLS is enabled, insert any configured server.basePath before /api/saved_objects, and add /s/<space_id> for a non-default space. The savedObjects.maxImportExportSize setting caps how many objects a single export can contain. The exported file holds Kibana metadata, not the underlying Elasticsearch source documents, and exported saved objects are not backward compatible with older Kibana versions.
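As a hedged sketch of those adjustments combined, the request below targets a TLS-enabled Kibana with a server.basePath of /kibana and a space named staging; the username, CA certificate path, base path, space id, and output filename are all placeholders to adapt to the actual deployment.

```shell
$ curl --silent --show-error --fail \
    --user elastic \
    --cacert /etc/kibana/certs/ca.crt \
    --header 'kbn-xsrf: true' \
    --header 'Content-Type: application/json' \
    --request POST 'https://localhost:5601/kibana/s/staging/api/saved_objects/_export' \
    --data '{ "type": [ "dashboard" ], "excludeExportDetails": true }' \
    --output staging-dashboards.ndjson
```

Note the path order: the base path comes first, then the space prefix, then the API route.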
Related: How to import Kibana saved objects
Related: How to create a Kibana space
Steps to export Kibana saved objects:
- Export saved objects by type when the goal is to capture one class of assets in a single file.
$ curl --silent --show-error --fail \
    --header 'kbn-xsrf: true' \
    --header 'Content-Type: application/json' \
    --request POST 'http://localhost:5601/api/saved_objects/_export' \
    --data '{ "type": [ "index-pattern" ], "search": "Application logs", "excludeExportDetails": true }' \
    --output index-patterns.ndjson
The search filter uses the Elasticsearch simple query string syntax. Replace index-pattern with another saved object type such as dashboard, visualization, map, or lens, or use * to export every type.
The Kibana UI calls these objects data views, but the saved object type exported in NDJSON remains index-pattern.
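When one export file mixes several types, listing the distinct type values is a quick way to confirm what actually landed in it. The sketch below fabricates a two-line stand-in export purely for illustration; point the jq command at a real export file in practice.

```shell
# Build a tiny stand-in export file (two saved objects of different types).
printf '%s\n' \
  '{"id":"a1","type":"index-pattern","attributes":{"title":"logs-*"}}' \
  '{"id":"b2","type":"dashboard","attributes":{"title":"App overview"}}' \
  > sample-export.ndjson

# List each distinct saved object type present in the file.
jq -r '.type' sample-export.ndjson | sort -u
```

For the sample file this prints dashboard and index-pattern, one type per line.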
- Export a specific saved object by its type and id when a narrow backup or promotion file is needed.
$ curl --silent --show-error --fail \
    --header 'kbn-xsrf: true' \
    --header 'Content-Type: application/json' \
    --request POST 'http://localhost:5601/api/saved_objects/_export' \
    --data '{ "objects": [ { "type": "index-pattern", "id": "ce1bc425-f6f4-4bd9-9e0a-d755609086c6" } ], "excludeExportDetails": true, "includeReferencesDeep": true }' \
    --output application-logs.ndjson
Use includeReferencesDeep when the selected object depends on other saved objects. Exported dashboards, for example, can pull in their associated data views and other referenced objects.
The objects list cannot be combined with the type option in the same request body.
- Confirm the export file exists and has a non-zero size.
$ ls -lh application-logs.ndjson
-rw-r--r-- 1 user staff 500B Apr 2 21:47 application-logs.ndjson
Protect exported NDJSON files because they can expose object names, data-view patterns, queries, URLs, and other operational context.
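One straightforward mitigation is tightening file permissions immediately after export. The sketch below reuses the filename from the earlier example; the touch only exists to make the sketch self-contained, and the stat form shown is the GNU coreutils one (macOS stat uses different flags).

```shell
# Stand-in for the real export file; skip this line when the export exists.
touch application-logs.ndjson

# Restrict the export so only the owning user can read or write it.
chmod 600 application-logs.ndjson

# Show the resulting octal mode (GNU stat).
stat -c '%a' application-logs.ndjson
```

The final command prints 600, confirming group and other access has been removed.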
- Count exported records in the NDJSON file.
$ grep -c '^' application-logs.ndjson
1
Kibana export streams do not have to end with a trailing newline, so grep -c '^' is more reliable here than wc -l for single-object exports.
If excludeExportDetails is left at its default false value, Kibana appends one extra export-details record at the end of the stream.
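To see what that trailing record looks like, the sketch below fabricates a two-line export, one saved object followed by a details line modeled on Kibana's export-details fields (exportedCount, missingRefCount, missingReferences), and isolates it with jq; the exact set of fields can vary by Kibana version.

```shell
# Stand-in export: one saved object plus a trailing export-details record.
printf '%s\n' \
  '{"id":"a1","type":"index-pattern","attributes":{"title":"logs-*"}}' \
  '{"exportedCount":1,"missingRefCount":0,"missingReferences":[]}' \
  > details-sample.ndjson

# The details record is the line carrying an exportedCount field.
jq -c 'select(.exportedCount != null)' details-sample.ndjson
```

A non-zero missingRefCount in a real export is worth investigating before importing the file elsewhere, since it means referenced objects were not included.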
- Inspect the exported object metadata to confirm the expected id, type, and title are present.
$ jq -R 'fromjson? | select(.) | {id, type, title: .attributes.title, name: .attributes.name}' application-logs.ndjson
{
  "id": "ce1bc425-f6f4-4bd9-9e0a-d755609086c6",
  "type": "index-pattern",
  "title": "logs-app-*-default",
  "name": "Application logs"
}
For multi-object files, the same jq -R 'fromjson?' pattern prints one decoded JSON object per NDJSON line.
- Open Stack Management → Saved Objects and confirm the exported objects match the items selected for backup or migration.
The toolbar export in the UI includes related child objects by default, so exported dashboards normally bring their associated data views with them.
Mohd Shakir Zakaria is a cloud architect with deep roots in software development and open-source advocacy. Certified in AWS, Red Hat, VMware, ITIL, and Linux, he specializes in designing and managing robust cloud and on-premises infrastructures.
