Kibana alerts turn Elasticsearch search results into scheduled checks that surface failures, spikes, or missing events before they disappear into a dashboard time range. Moving the condition into a rule keeps detection running even when nobody has Discover or a dashboard open.
In current Kibana, this type of alert is created from Stack Management under Alerts and Insights. The Elasticsearch query rule type runs a KQL, Lucene, DSL, or ES|QL search against a data view or index, compares the result to a threshold, and then creates alerts that can stay visible in Kibana or trigger connector actions.
Reliable alerting depends on giving the rule the same time field and filters used during manual troubleshooting. Self-managed deployments also need alerting prerequisites such as xpack.encryptedSavedObjects.encryptionKey in kibana.yml, and connector availability can be license-dependent.
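As a sketch, the relevant kibana.yml setting for a self-managed deployment might look like the following; the key value is a placeholder and must be replaced with your own randomly generated string of at least 32 characters:

```yaml
# kibana.yml — alerting prerequisite for a self-managed deployment.
# The encryption key protects stored rule and connector secrets.
# Placeholder value: generate your own 32+ character random string.
xpack.encryptedSavedObjects.encryptionKey: "replace-with-a-32-plus-character-random-string"
```

Kibana refuses to run rules with connector actions reliably without this key, so set it before creating rules rather than after.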
$ curl --silent --show-error --cacert /etc/elasticsearch/certs/http_ca.crt --user elastic:password -H "Content-Type: application/json" -X POST "https://localhost:9200/logs-*/_search?pretty&filter_path=hits.total" -d '{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "now-15m" } } },
        { "match": { "message": "timeout" } }
      ]
    }
  }
}'
{
  "hits" : {
    "total" : {
      "value" : 2,
      "relation" : "eq"
    }
  }
}
Use the same time field and core filter that the rule will use so the manual query and the scheduled rule measure the same condition.
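For example, the manual DSL check above maps onto rule settings like these (shown schematically; exact labels in the Elasticsearch query rule editor may vary by Kibana version):

```
Query:        message : "timeout"     (KQL, over a logs-* data view)
Time field:   @timestamp
Time window:  last 15 minutes
Threshold:    count is above 0
```

If the manual query returns 2 hits, a rule with these settings should report the same count on its next scheduled run over the same window.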
Save the rule without actions when the detection itself needs testing before messages are sent anywhere.
A current self-managed Basic license can create the rule itself but disables the Webhook connector type; choose a connector available at that license level or keep the first pass in Kibana only.
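The same first pass can also be scripted against the Kibana alerting API (POST /api/alerting/rule with a kbn-xsrf header). The body below is a hedged sketch of an Elasticsearch query rule saved with no actions; the rule name and index pattern are placeholders, and the exact params fields can vary by Kibana version:

```json
{
  "name": "timeout-spike",
  "consumer": "stackAlerts",
  "rule_type_id": ".es-query",
  "schedule": { "interval": "1m" },
  "actions": [],
  "params": {
    "searchType": "esQuery",
    "index": ["logs-*"],
    "timeField": "@timestamp",
    "esQuery": "{\"query\":{\"match\":{\"message\":\"timeout\"}}}",
    "threshold": [0],
    "thresholdComparator": ">",
    "timeWindowSize": 15,
    "timeWindowUnit": "m",
    "size": 100
  }
}
```

An empty actions array keeps the detection visible in Kibana without sending messages anywhere, matching the test-first approach above.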
Pick one query style for the rule. KQL with a data view is usually the fastest option when the fields are already visible in Discover, while DSL or ES|QL is better when the condition needs a raw query body or tabular aggregation logic.
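As an illustration of the ES|QL style, a rule query that tabulates timeouts per service might look like this sketch (the index pattern and field names are assumptions, not required names):

```esql
FROM logs-*
  | WHERE message LIKE "*timeout*"
  | STATS timeouts = COUNT(*) BY service.name
  | SORT timeouts DESC
```

Each row of the result table can become its own alert, which is what makes ES|QL the better fit for per-service or per-host conditions.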
A specific query such as message : "timeout" or log.level : "error" and service.name : "checkout" is easier to tune than a broad match-all rule with a high threshold.
Current Kibana shows the number of matching documents for KQL, Lucene, and DSL rules, and a result table for ES|QL rules.
A 0 matches result usually means the time field, time window, or query scope is wrong rather than the rule engine being broken.
Keep the check interval shorter than the time window so scheduled runs overlap instead of leaving detection gaps.
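The interval-versus-window relationship is simple arithmetic: each run looks back over the full time window, so an uncovered gap only appears when the check interval is longer than the window. A minimal sketch (the function and values are illustrative, not part of Kibana):

```python
def detection_gap_seconds(check_interval_s: int, time_window_s: int) -> int:
    """Seconds of each scheduling cycle that no run's look-back window covers."""
    return max(0, check_interval_s - time_window_s)

# 1-minute interval with a 5-minute window: runs overlap, no gap.
print(detection_gap_seconds(60, 300))   # 0
# 10-minute interval with a 5-minute window: 5 minutes go unseen each cycle.
print(detection_gap_seconds(600, 300))  # 300
```

Overlap means some documents are evaluated by more than one run, which is exactly why the exclude-previous-matches option below exists.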
For KQL, Lucene, or DSL rules without grouping, leave Exclude matches from previous run enabled unless the same documents should be allowed to retrigger on every run.
On status changes reduces repeat notifications for the same long-running condition, while summary actions are better when a single message should cover new, ongoing, and recovered alerts together.
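In the alerting API, this choice corresponds to per-action frequency settings. A hedged sketch of the shape (the connector id and action group name are placeholders; confirm both in your Kibana version):

```json
{
  "actions": [
    {
      "id": "my-connector-id",
      "group": "query matched",
      "params": { "message": "{{context.message}}" },
      "frequency": {
        "summary": false,
        "notify_when": "onActionGroupChange",
        "throttle": null
      }
    }
  ]
}
```

Setting "summary": true with an active-alert notify_when instead produces one combined message covering new, ongoing, and recovered alerts.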
Connector actions can expose document context, hostnames, or internal URLs, so keep the connector destination and message template scoped to what the recipient actually needs.
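A minimal message template illustrates the scoping idea: include the count and conditions, omit raw documents unless the recipient needs them. The context variables shown are typical for the Elasticsearch query rule, but confirm them against the variable list in the rule editor:

```
Rule {{rule.name}} matched {{context.value}} documents in the last window.
Conditions: {{context.conditions}}
Details: {{context.link}}
```

Leaving out per-document fields such as hostnames keeps the notification useful without leaking internals to a broad audience.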
When several rules exist in the same space, the Alerts view under Stack Management → Alerts and Insights helps filter alert instances by rule name, status, or type.
Failed or Warning responses usually point back to the rule query, missing privileges, connector limits, or alerting prerequisites in kibana.yml.