Elasticsearch Plugin
Purpose
- This plugin allows you to send alarms, metrics, and monitor metadata directly to an Elasticsearch instance
- Alarms, metrics and monitor metadata can be indexed in separate Elasticsearch indices
- Built-in retry mechanism on Elasticsearch overload (HTTP 429 / rejected execution)
- Supports bulk indexing, buffering to protect Elasticsearch under load
- Automatic backpressure management: reduces flush rate and batch size when Elasticsearch is overloaded
- Non-blocking plugin updates: configuration changes via the UI will not hang even if Elasticsearch is unreachable
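The retry behaviour described above can be sketched as a simple exponential backoff loop. This is an illustrative sketch only; the function and parameter names are assumptions, not the plugin's actual internals.

```python
import time

def send_with_retry(send_fn, payload, max_retries=5, base_delay=0.5):
    """Retry `send_fn` with exponential backoff while Elasticsearch
    reports overload (HTTP 429 / rejected execution)."""
    delay = base_delay
    for attempt in range(max_retries):
        status = send_fn(payload)
        if status != 429:          # anything but 429 ends the retry loop
            return status
        time.sleep(delay)          # back off before retrying
        delay *= 2                 # double the wait each attempt
    return 429                     # still overloaded after all retries
```

Doubling the delay between attempts gives an overloaded cluster time to drain its write queues instead of being hammered at a fixed rate.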
Configuration
- From the plugin menu of Redpeaks, select Elasticsearch in the plugin drop-down and press Add.
- The plugin has the following parameters:
| Parameter | Description | Mandatory |
|---|---|---|
| Active | Enables or disables the Elasticsearch plugin | Yes |
| Configuration | Choose between Standard Configuration or Cloud Configuration | Yes |
| Hostname | The IP address or hostname of the Elasticsearch instance (Standard Config) | Yes |
| Port | The port used to connect to Elasticsearch (Default is 9200) | Yes |
| Cloud ID | The Cloud ID used for connecting to a cloud-based Elasticsearch instance (Cloud Config) | Yes |
| API Key | The API key for authentication (Cloud Config) | Yes |
| Name | A unique name for the plugin instance | Yes |
| Alarm Index | The Elasticsearch index where alarms will be stored | Yes (if Send alarms checked) |
| Metric Index | The Elasticsearch index where metrics will be stored | Yes (if Send metrics checked) |
| Metadata Index | The Elasticsearch index where metadata will be stored | Yes (if Send metadata checked) |
| Username | The username for Elasticsearch authentication | No |
| Password | The password for Elasticsearch authentication | No |
| Properties | A semicolon-separated list of additional Elasticsearch properties | No |
| Max queue size | Maximum number of documents kept in memory before dropping. Minimum: 1000. Very large values (>250,000) may cause high RAM usage | Yes |
| Max items per flush | Maximum number of documents per bulk flush (batch size). Must be greater than 0. Values above 100,000 may cause long flush times | Yes |
| Threads | Number of worker threads used to flush batches (1 to 30) | Yes |
| Socket timeout (ms) | HTTP socket timeout for Elasticsearch requests. Minimum: 5,000 ms (5s). Maximum: 300,000 ms (5min) | Yes |
| Send alarms | Enables sending alarms | No |
| Send metrics | Enables sending metrics | No |
| Send metadata | Enables sending metadata | No |
| Split Metadata | Sends metadata as multiple documents (one per array element) | No |
| Use Datastream | Uses datastream templates + datastream indexing | No |
| Create Templates | Automatically creates templates (for datastream mode) | No |
| Use Compression | Enables HTTP compression for Elasticsearch requests | No |
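How Max queue size and Max items per flush interact can be illustrated with a bounded buffer: documents offered beyond the queue limit are dropped (and counted, matching the dropped counters on the statistics page), and each flush takes at most one batch of the configured size. This is a sketch of the documented behaviour, not the plugin's code.

```python
from collections import deque

class BoundedBuffer:
    """Illustrative model of the plugin's in-memory document queue."""

    def __init__(self, max_queue_size=1000, max_items_per_flush=500):
        self.queue = deque()
        self.max_queue_size = max_queue_size
        self.max_items_per_flush = max_items_per_flush
        self.dropped = 0            # surfaces as "dropped" statistics

    def offer(self, doc):
        if len(self.queue) >= self.max_queue_size:
            self.dropped += 1       # queue full: the document is lost
            return False
        self.queue.append(doc)
        return True

    def next_batch(self):
        """Take up to max_items_per_flush documents for one bulk flush."""
        batch = []
        while self.queue and len(batch) < self.max_items_per_flush:
            batch.append(self.queue.popleft())
        return batch
```

Sizing both values together matters: a large queue with small batches survives bursts but flushes slowly, while very large batches can produce the long flush times the table warns about.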
Indices
You can configure different indices (or prefixes) for alarms, metrics, and metadata.
- When Create Templates is enabled, the plugin automatically creates index templates
- When Use Datastream is enabled, data is indexed into Elasticsearch datastreams
- If both Create Templates and Use Datastream are disabled, the plugin can attempt to create a plain index as a fallback when the target index does not exist
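The difference between plain-index and datastream indexing shows up in the bulk payload: Elasticsearch's `_bulk` API takes newline-delimited JSON, and datastreams only accept `create` actions, while plain indices also accept `index`. The sketch below builds such a payload; the index name is a placeholder for whatever you configured.

```python
import json

def build_bulk_body(index, docs, use_datastream=False):
    """Build an NDJSON body for Elasticsearch's _bulk API.

    Datastreams reject "index" actions, so "create" is used instead."""
    action = "create" if use_datastream else "index"
    lines = []
    for doc in docs:
        lines.append(json.dumps({action: {"_index": index}}))  # action line
        lines.append(json.dumps(doc))                          # source line
    return "\n".join(lines) + "\n"   # _bulk bodies must end with a newline
```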
Example
Standard Configuration
Note: Ensure that the Elasticsearch instance is reachable and properly configured to accept data from Redpeaks. The configured user or API key must have permission to write documents, and to manage templates if template creation is enabled.
Troubleshooting
If you encounter issues:
- Verify the hostname and port (Standard Configuration) or the Cloud ID and API key (Cloud Configuration)
- Check Elasticsearch logs for any errors related to authentication or index operations
- Ensure that the indices specified in the configuration exist in Elasticsearch and have the appropriate permissions set for the configured user
- If Elasticsearch is overloaded (HTTP 429), consider reducing the batch size or increasing cluster capacity
- If the plugin shows “Could not acquire lock” errors in the logs, it means a flush was stuck for more than 90 seconds — check Elasticsearch connectivity and performance
- Check the plugin statistics page for dropped metrics/alarms/metadata counters — non-zero values indicate the queue was full and data was lost
products/promonitor/latest/userguide/configuration/plugins/elasticsearch.txt · Last modified: by jtbeduchaud

