API Docs - v1.0.0
Sink
prometheus (Sink)
The sink publishes events processed by WSO2 SP as Prometheus metrics and exposes them to the Prometheus server at the provided url. The created metrics can be published to Prometheus through the 'server' or 'pushGateway' publishing mode, depending on the preference of the user. The server mode exposes the metrics through an http server at the provided url, and the pushGateway mode pushes the metrics to a pushGateway that must be running at the provided url.
The metric types that are supported by Prometheus sink are counter, gauge, histogram and summary. The values and labels of the Prometheus metrics can be updated through the events.
Syntax
@sink(type="prometheus", job="<STRING>", publish.mode="<STRING>", push.url="<STRING>", server.url="<STRING>", metric.type="<STRING>", metric.help="<STRING>", metric.name="<STRING>", buckets="<STRING>", quantiles="<STRING>", quantile.error="<DOUBLE>", value.attribute="<STRING>", push.operation="<STRING>", grouping.key="<STRING>", @map(...)))
QUERY PARAMETERS
Name | Description | Default Value | Possible Data Types | Optional | Dynamic |
---|---|---|---|---|---|
job | This parameter specifies the job name of the metric. The name must be the same job name as defined in the prometheus configuration file. | siddhiJob | STRING | Yes | No |
publish.mode | This parameter specifies the mode of exposing metrics to the Prometheus server. The possible publishing modes are 'server' and 'pushgateway'. | server | STRING | Yes | No |
push.url | This parameter specifies the target url of the Prometheus pushGateway where the pushGateway must be listening. This url should be previously defined in prometheus configuration file as a target. | http://localhost:9091 | STRING | Yes | No |
server.url | This parameter specifies the url where the http server is initiated to expose metrics for 'server' publish mode. This url must be previously defined in prometheus configuration file as a target. | http://localhost:9080 | STRING | Yes | No |
metric.type | The type of Prometheus metric that has to be created at the sink. The supported metric types are 'counter', 'gauge', 'histogram' and 'summary'. | | STRING | No | No |
metric.help | A brief description of the metric and its purpose. | | STRING | Yes | No |
metric.name | This parameter specifies the user-preferred name for the metric. The metric name must match the regex format, i.e., [a-zA-Z_:][a-zA-Z0-9_:]*. | Stream name | STRING | Yes | No |
buckets | The bucket values preferred by the user for histogram metrics. The bucket values must be in 'string' format with each bucket value separated by a comma. The expected format of the parameter is as follows: "2,4,6,8" (see the sketch after this table). | null | STRING | Yes | No |
quantiles | The user-preferred quantile values for summary metrics. The quantile values must be in 'string' format with each quantile value separated by a comma. The expected format of the parameter is as follows: "0.5,0.75,0.95" | null | STRING | Yes | No |
quantile.error | The error tolerance value for calculating quantiles in summary metrics. This must be a positive value less than 1. | 0.001 | DOUBLE | Yes | No |
value.attribute | The name of the attribute in the stream definition that specifies the metric value. The defined value attribute must be included among the stream attributes. For the counter and gauge metric types, the value published through events in this attribute increases the metric value; for the histogram and summary metric types, the value is observed. | value | STRING | Yes | No |
push.operation | This parameter defines the mode for pushing metrics to pushGateway. The available push operations are 'push' and 'pushadd'. The operations differ according to the existing metrics in pushGateway, where the 'push' operation replaces the existing metrics and the 'pushadd' operation only updates the newly created metrics. | pushadd | STRING | Yes | No |
grouping.key | This parameter specifies the grouping key of the created metrics as key-value pairs. The grouping key is used only in pushGateway mode in order to distinguish the metrics from already existing metrics. The expected format of the grouping key is as follows: "'key1:value1','key2:value2'" | | STRING | Yes | No |
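The histogram- and summary-specific parameters can be combined with the common sink parameters as in the following sketch. The stream names, metric names and parameter values below are hypothetical and only illustrate the expected formats of 'buckets', 'quantiles' and 'quantile.error'.
@sink(type='prometheus', job='latencyJob', server.url='http://localhost:9080', publish.mode='server', metric.type='histogram', metric.name='request_latency_histogram', metric.help='Request latency distribution', buckets='2,4,6,8', @map(type='keyvalue'))
define stream RequestLatencyStream (endpoint string, value double);
@sink(type='prometheus', job='latencyJob', server.url='http://localhost:9080', publish.mode='server', metric.type='summary', metric.name='request_latency_summary', metric.help='Request latency summary', quantiles='0.5,0.75,0.95', quantile.error='0.01', @map(type='keyvalue'))
define stream LatencySummaryStream (endpoint string, value double);
In both definitions, the 'value' attribute carries the observed value, as described for the 'value.attribute' parameter above.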
System Parameters
Name | Description | Default Value | Possible Parameters |
---|---|---|---|
jobName | This is the property that specifies the default job name for the metric. The name must be the same job name as defined in the prometheus configuration file. | siddhiJob | Any string |
publishMode | The default publish mode for the Prometheus sink for exposing metrics to Prometheus server. The mode can be either 'server' or 'pushgateway'. | server | server or pushgateway |
serverURL | This property configures the url where the http server will be initiated to expose metrics. This url must be previously defined in prometheus configuration file as a target to be identified by Prometheus. By default, the http server will be initiated at 'http://localhost:9080' | http://localhost:9080 | Any valid URL |
pushURL | This property configures the target url of Prometheus pushGateway where the pushGateway must be listening. This url should be previously defined in prometheus configuration file as a target to be identified by Prometheus. | http://localhost:9091 | Any valid URL |
groupingKey | This property configures the grouping key of created metrics in key-value pairs. Grouping key is used only in pushGateway mode in order to distinguish the metrics from already existing metrics under same job. The expected format of the grouping key is as follows: "'key1:value1','key2:value2'" . | null | Any key value pairs in the supported format |
Examples
EXAMPLE 1
@sink(type='prometheus',job='fooOrderCount', server.url ='http://localhost:9080', publish.mode='server', metric.type='counter', metric.help= 'Number of foo orders', @map(type='keyvalue'))
define stream FooCountStream (Name string, quantity int, value int);
In the above example, the Prometheus sink creates a counter metric with the stream name as the metric name and the defined attributes as labels. The metric is exposed through an http server at the target url.
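Events can be sent to FooCountStream with an ordinary Siddhi query. The following is a minimal sketch in which the input stream 'SweetOrderStream' is hypothetical; each event forwarded into FooCountStream increases the counter by the amount in its 'value' attribute, as described for the 'value.attribute' parameter.
define stream SweetOrderStream (Name string, quantity int, value int);
from SweetOrderStream
select Name, quantity, value
insert into FooCountStream;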
EXAMPLE 2
@sink(type='prometheus',job='inventoryLevel', push.url='http://localhost:9080', publish.mode='pushgateway', metric.type='gauge', metric.help= 'Current level of inventory', @map(type='keyvalue'))
define stream InventoryLevelStream (Name string, value int);
In the above example, the Prometheus sink creates a gauge metric with the stream name as the metric name and the defined attributes as labels. The metric is pushed to the Prometheus pushGateway at the target url.
Source
prometheus (Source)
The source consumes Prometheus metrics that are exported at the specified url and converts them into Siddhi events by making http requests to the url. According to the source configuration, it analyses the metrics from the text response and sends them as Siddhi events through key-value mapping. The user can retrieve metrics of the types counter, gauge, histogram and summary. Since the source retrieves the metrics from a text response of the target, it is advised to use 'string' as the attribute type for the attributes that correspond to Prometheus metric labels. Further, the Prometheus metric value is passed through the event as 'value'. Therefore, it is advisable to have an attribute with the name 'value' in the stream.
The supported types for the 'value' attribute are INT, LONG, FLOAT and DOUBLE.
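For instance, a stream that consumes a hypothetical gauge metric named 'memory_usage_gauge' with a single label 'host' could be defined as in the following sketch; the label attribute is declared as 'string' and the metric value as 'double', in line with the guidance above.
@source(type='prometheus', target.url='http://localhost:9080/metrics', metric.type='gauge', metric.name='memory_usage_gauge', @map(type='keyvalue'))
define stream MemoryUsageStream (metric_name string, metric_type string, help string, subtype string, host string, value double);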
Syntax
@source(type="prometheus", target.url="<STRING>", scrape.interval="<INT>", scrape.timeout="<INT>", scheme="<STRING>", metric.name="<STRING>", metric.type="<STRING>", username="<STRING>", password="<STRING>", client.truststore.file="<STRING>", client.truststore.password="<STRING>", headers="<STRING>", job="<STRING>", instance="<STRING>", grouping.key="<STRING>", @map(...)))
QUERY PARAMETERS
Name | Description | Default Value | Possible Data Types | Optional | Dynamic |
---|---|---|---|---|---|
target.url | This property specifies the target url where the Prometheus metrics are exported in text format. | | STRING | No | No |
scrape.interval | This property specifies the time interval in seconds within which the source should make an HTTP request to the provided target url. | 60 | INT | Yes | No |
scrape.timeout | This property is the time duration in seconds for a scrape request to get timed-out if the server at the url does not respond. | 10 | INT | Yes | No |
scheme | This property specifies the scheme of the target URL. The supported schemes are 'HTTP' and 'HTTPS'. | HTTP | STRING | Yes | No |
metric.name | This property specifies the name of the metrics that are to be fetched. The metric name must match the regex format, i.e., [a-zA-Z_:][a-zA-Z0-9_:]*. | Stream name | STRING | Yes | No |
metric.type | This property specifies the type of the Prometheus metric that is required to be fetched. The supported metric types are 'counter', 'gauge', 'histogram' and 'summary'. | | STRING | No | No |
username | This property specifies the username that has to be added in the authorization header of the HTTP request, if basic authentication is enabled at the target. Both the username and the password must be specified to enable basic authentication; if either of them is not given, an error is logged in the console (see the sketch after this table). | | STRING | Yes | No |
password | This property specifies the password that has to be added in the authorization header of the HTTP request, if basic authentication is enabled at the target. Both the username and the password must be specified to enable basic authentication; if either of them is not given, an error is logged in the console. | | STRING | Yes | No |
client.truststore.file | The file path to the location of the truststore that the client uses to send requests through the 'https' protocol. | | STRING | Yes | No |
client.truststore.password | The password for the client truststore used to send https requests. A custom password can be specified if required. | | STRING | Yes | No |
headers | The headers that should be included as HTTP request headers in the request. The format of the supported input is as follows: "'header1:value1','header2:value2'" | | STRING | Yes | No |
job | This property defines the job name of the exported Prometheus metrics that have to be fetched. | | STRING | Yes | No |
instance | This property defines the instance of the exported Prometheus metrics that have to be fetched. | | STRING | Yes | No |
grouping.key | This parameter specifies the grouping key of the required metrics as key-value pairs. The grouping key is used if the metrics are exported by the Prometheus pushGateway, in order to distinguish the metrics from already existing metrics. The expected format of the grouping key is as follows: "'key1:value1','key2:value2'" | | STRING | Yes | No |
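The authentication- and scraping-related parameters can be combined as in the following sketch. The target url, credentials, header and metric name are hypothetical and only illustrate the expected formats.
@source(type='prometheus', target.url='http://localhost:9080/metrics', scrape.interval='30', username='admin', password='admin', headers="'Accept:text/plain'", metric.type='counter', metric.name='http_requests_total', @map(type='keyvalue'))
define stream RequestCountStream (metric_name string, metric_type string, help string, subtype string, value double);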
System Parameters
Name | Description | Default Value | Possible Parameters |
---|---|---|---|
scrapeInterval | The default time interval in seconds for the Prometheus source to make HTTP requests to the target URL. | 60 | Any integer value |
scrapeTimeout | The default time duration (in seconds) for an HTTP request to time out if the server at the URL does not respond. | 10 | Any integer value |
scheme | The scheme of the target for Prometheus source to make HTTP requests. The supported schemes are HTTP and HTTPS. | HTTP | HTTP or HTTPS |
username | The username that has to be added in the authorization header of the HTTP request, if basic authentication is enabled at the target. Both the username and the password must be specified to enable basic authentication; if either of them is not given, an error is logged in the console. | | Any string |
password | The password that has to be added in the authorization header of the HTTP request, if basic authentication is enabled at the target. Both the username and the password must be specified to enable basic authentication; if either of them is not given, an error is logged in the console. | | Any string |
trustStoreFile | The default file path to the truststore that the client uses when sending requests through the 'HTTPS' protocol. | ${carbon.home}/resources/security/client-truststore.jks | Any valid path for the truststore file |
trustStorePassword | The default password for the client truststore used to send HTTPS requests. | wso2carbon | Any string |
headers | The headers that should be included as HTTP request headers in the scrape request. The format of the supported input is as follows: "'header1:value1','header2:value2'" | | Any valid http headers |
job | The default job name of the exported Prometheus metrics that have to be fetched. | | Any valid job name |
instance | The default instance of the exported Prometheus metrics that have to be fetched. | | Any valid instance name |
groupingKey | The default grouping key of the required Prometheus metrics as key-value pairs. The grouping key is used if the metrics are exported by the Prometheus pushGateway, in order to distinguish the metrics from already existing metrics. The expected format of the grouping key is as follows: "'key1:value1','key2:value2'" | | Any valid grouping key pairs |
Examples
EXAMPLE 1
@source(type= 'prometheus', target.url= 'http://localhost:9080/metrics', metric.type= 'counter', metric.name= 'sweet_production_counter', @map(type= 'keyvalue'))
define stream FooStream1(metric_name string, metric_type string, help string, subtype string, name string, quantity string, value double);
In this example, the Prometheus source makes an http request to the 'target.url' and analyses the response. From the analysed response, the source retrieves the Prometheus counter metrics with the name 'sweet_production_counter' and converts the filtered metrics into Siddhi events using the key-value mapper.
The generated maps have keys and values as follows:
metric_name -> sweet_production_counter
metric_type -> counter
help -> <help_string_of_metric>
subtype -> null
name -> <value_of_label_name>
quantity -> <value_of_label_quantity>
value -> <value_of_metric>
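The events mapped from these key-value pairs can then be used in ordinary Siddhi queries. A minimal sketch, in which the output stream 'HighProductionStream' and the threshold are hypothetical, filters counter samples whose value exceeds the threshold:
from FooStream1[value > 1000]
select name, quantity, value
insert into HighProductionStream;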
EXAMPLE 2
@source(type= 'prometheus', target.url= 'http://localhost:9080/metrics', metric.type= 'summary', metric.name= 'sweet_production_summary', @map(type= 'keyvalue'))
define stream FooStream2(metric_name string, metric_type string, help string, subtype string, name string, quantity string, quantile string, value double);
In this example, the Prometheus source makes an http request to the 'target.url' and analyses the response. From the analysed response, the source retrieves the Prometheus summary metrics with the name 'sweet_production_summary' and converts the filtered metrics into Siddhi events using the key-value mapper.
The generated maps have keys and values as follows:
metric_name -> sweet_production_summary
metric_type -> summary
help -> <help_string_of_metric>
subtype -> <'sum'/'count'/'null'>
name -> <value_of_label_name>
quantity -> <value_of_label_quantity>
quantile -> <value of the quantile>
value -> <value_of_metric>
EXAMPLE 3
@source(type= 'prometheus', target.url= 'http://localhost:9080/metrics', metric.type= 'histogram', metric.name= 'sweet_production_histogram', @map(type= 'keyvalue'))
define stream FooStream3(metric_name string, metric_type string, help string, subtype string, name string, quantity string, le string, value double);
In this example, the Prometheus source makes an http request to the 'target.url' and analyses the response. From the analysed response, the source retrieves the Prometheus histogram metrics with the name 'sweet_production_histogram' and converts the filtered metrics into Siddhi events using the key-value mapper.
The generated maps have keys and values as follows:
metric_name -> sweet_production_histogram
metric_type -> histogram
help -> <help_string_of_metric>
subtype -> <'sum'/'count'/'bucket'>
name -> <value_of_label_name>
quantity -> <value_of_label_quantity>
le -> <value of the bucket>
value -> <value_of_metric>
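Since the histogram samples are distinguished through the 'subtype' key, the bucket samples can be separated with a simple filter. A minimal sketch, in which the output stream 'BucketObservationStream' is hypothetical:
from FooStream3[subtype == 'bucket']
select name, le, value
insert into BucketObservationStream;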