Micrometer provides a legacy bridge to Spring Boot 1.5. To install the required dependency:
In Gradle:
compile 'io.micrometer:micrometer-spring-legacy:latest.release'
Or in Maven:
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-spring-legacy</artifactId>
  <version>${micrometer.version}</version>
</dependency>
This dependency should be added alongside any registry implementations you want to use, e.g. micrometer-registry-prometheus.
Spring Boot auto-configures a composite meter registry and adds a registry to the composite for each of the supported implementations that it finds on the classpath. Having a dependency on micrometer-registry-{system} in your runtime classpath is enough for Spring Boot to configure the registry. Spring Boot will also add any autoconfigured registries to the global static composite registry on the Metrics class unless you explicitly tell it not to by setting:
# true by default
management.metrics.use-global-registry=false
Leaving configuration of the global registry on enables you to collect metrics from libraries that use the static global registry to wire their metrics without doing anything further.
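For example, a library class that records a counter through the static Metrics API is picked up without any further wiring (a minimal sketch; OrderLibrary and the meter name are hypothetical):

import io.micrometer.core.instrument.Metrics;

public class OrderLibrary {

    public void placeOrder() {
        // ... business logic ...

        // Registered against the static global composite; because Spring Boot adds the
        // auto-configured registries to that composite, this counter reaches your backend.
        Metrics.counter("orders.placed", "channel", "web").increment();
    }
}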
You can control whether a registry implementation is autoconfigured via a property, even if it is otherwise present on the classpath:
# true by default
management.metrics.export.{system}.enabled=false
Optionally register any number of `MeterRegistryCustomizer`s to further configure the registry (such as applying common tags) before any meters are registered with the registry.
@SpringBootApplication
public class MyApplication {

    @Bean
    MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        return registry -> registry.config().commonTags("region", "us-east-1");
    }
}
You can apply customizations to particular registry implementations by being more specific about the generic type:
@SpringBootApplication
public class MyApplication {

    @Bean
    MeterRegistryCustomizer<GraphiteMeterRegistry> graphiteMetricsNamingConvention() {
        return registry -> registry.config().namingConvention(MY_CUSTOM_CONVENTION);
    }
}
Spring auto-configures the most commonly used binders.
The JvmMemoryMetrics binder will be automatically configured to record memory and buffer pool utilization. In the presence of Logback, the LogbackMetrics binder will also be configured to record the number of events logged to Logback at each level. Lastly, UptimeMetrics reports a gauge for uptime and a fixed gauge representing the application's absolute start time.
To register other binders with the registry, add them as `@Bean`s to your application context.
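For example, a binder shipped with micrometer-core that is not covered by your auto-configuration can be exposed as a bean (a minimal sketch; the configuration class name is hypothetical, JvmThreadMetrics is a standard micrometer-core binder):

import io.micrometer.core.instrument.binder.jvm.JvmThreadMetrics;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsBinderConfiguration {

    // Any MeterBinder bean is bound to the auto-configured MeterRegistry on startup.
    @Bean
    JvmThreadMetrics threadMetrics() {
        return new JvmThreadMetrics();
    }
}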
The Atlas registry pushes metrics to an Atlas server periodically. Below is a list of the most common configuration properties you will want to change and their default values (from any property source, e.g. application.yml):
# The location of your Atlas server
management.metrics.export.atlas.uri=http://localhost:7101/api/v1/publish
# You will probably want to disable Atlas publishing in a local development profile.
management.metrics.export.atlas.enabled=true # default is true
# The interval at which metrics are sent to Atlas. See Duration.parse for the expected format.
# The default is 1 minute.
management.metrics.export.atlas.step=PT1M
For a full list of configuration properties that can influence Atlas publishing, see com.netflix.spectator.atlas.AtlasConfig.
If Spring Boot Actuator is on the classpath, an actuator endpoint will be wired to /prometheus by default that presents a Prometheus scrape with the appropriate format.
To add Actuator if it isn't already present on your classpath, in Gradle:
compile 'org.springframework.boot:spring-boot-starter-actuator'
Or in Maven:
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
Here is an example scrape_config to add to prometheus.yml:
scrape_configs:
  - job_name: 'spring'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['HOST:PORT']
If you’d like the endpoint to use a different path, add the property:
endpoints.prometheus.path=micrometheus
NOTE: The endpoint is secured by default, so either include authentication in your Prometheus scrape configuration or unsecure the Prometheus endpoint with a property like management.security.enabled=false.
The Datadog registry pushes metrics to datadoghq periodically. Below is a list of the most common configuration properties you will want to change and their default values (from any property source, e.g. application.yml):
management.metrics.export.datadog.api-key=YOURKEY
# Needed to send meter-level metadata like descriptions and base units to Datadog, but not strictly required.
management.metrics.export.datadog.application-key=YOURKEY
# You will probably want to disable Datadog publishing in a local development profile.
management.metrics.export.datadog.enabled=true # default is true
# The interval at which metrics are sent to Datadog. See Duration.parse for the expected format.
# The default is 10 seconds, which matches the rate at which the Datadog Agent publishes.
management.metrics.export.datadog.step=PT10S
For a full list of configuration properties that can influence Datadog publishing, see [io.micrometer.datadog.DatadogConfig](https://github.com/micrometer-metrics/micrometer/blob/master/implementations/micrometer-registry-datadog/src/main/java/io/micrometer/datadog/DatadogConfig.java).
The StatsD registry pushes metrics over UDP to a StatsD agent eagerly. Below is a list of the most common configuration properties you will want to change:
management.metrics.export.statsd.enabled=true # default is true
management.metrics.export.statsd.flavor=etsy # or datadog, telegraf
management.metrics.export.statsd.host=localhost
management.metrics.export.statsd.port=8125
Micrometer contains built-in instrumentation for timings of requests made to Spring MVC server endpoints.
Spring Boot auto-configures interceptors to record metrics on your endpoints. By default, timings are recorded for every endpoint in your app. You don’t need to do anything to your controller to instrument the endpoint:
@RestController
public class MyController {

    @GetMapping("/api/people")
    public List<Person> listPeople() { ... }
}
The Timer is registered with a name of http.server.requests by default. This can be changed by setting:
# default is `http.server.requests`
management.metrics.web.server.requests-metric-name=i.want.to.be.different
The Timer contains a set of dimensions for every request, governed by the primary bean WebMvcTagsProvider registered in your application context. If you don't provide such a bean, a default implementation is selected which adds the following dimensions:
method, the HTTP method (for example, GET or PUT)
status, the numeric HTTP status code (for example, 200, 201, 500)
uri, the URI template prior to variable substitution (for example, /api/person/{id})
exception, the simple name of the exception class thrown (only if an exception is thrown)
outcome, the request's outcome based on the status code of the response: 1xx is INFORMATIONAL, 2xx is SUCCESS, 3xx is REDIRECTION, 4xx is CLIENT_ERROR, and 5xx is SERVER_ERROR
3.1.1. Adding percentiles, histograms, and SLA boundaries
The preferred way to add percentiles, percentile histograms, and SLA boundaries is to apply the general purpose property-based meter filter mechanism to this timer:
management.metrics.distribution:
  percentiles[http.server.requests]: 0.95, 0.99
  percentiles-histogram[http.server.requests]: true (1)
  sla[http.server.requests]: 10ms, 100ms
(1) If percentile approximations based on histograms are supported by your monitoring system, prefer this instead of the percentiles option.
3.1.2. Only timing endpoints you mark with @Timed
Timings are recorded for every endpoint by default. You can turn this off by setting:
# true by default
management.metrics.web.server.auto-time-requests=false
If you turn off autoTimeRequests, or if you'd like to customize the timer for a particular endpoint, add @io.micrometer.core.annotation.Timed to:
@RestController
@Timed (1)
public class MyController {

    @GetMapping("/api/people")
    @Timed(extraTags = { "region", "us-east-1" }) (2)
    @Timed(value = "all.people", longTask = true) (3)
    public List<Person> listPeople() { ... }
}
(1) A controller class to enable timings on every request handler in the controller.
(2) A method to enable for an individual endpoint. This is not necessary if you have it on the class, but can be used to further customize the timer for this particular endpoint.
(3) A method with longTask = true to enable a long task timer for the method. Long task timers require a separate metric name, and can be stacked with a short task timer.
3.2. Client-side HTTP Instrumentation
The instrumentation of any RestTemplate created using the auto-configured RestTemplateBuilder is enabled. It is also possible to apply MetricsRestTemplateCustomizer manually. A timer is recorded for each invocation that includes tags for URI (before parameter substitution), host, and status. The name of this timer is http.client.requests and can be changed by setting:
# default is http.client.requests
management.metrics.web.client.requests-metric-name=i.want.to.be.different.again
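For example, a client built from the auto-configured RestTemplateBuilder is timed automatically (a minimal sketch; PersonClient, Person, and the root URI are hypothetical):

import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class PersonClient {

    private final RestTemplate restTemplate;

    // Building the template from the auto-configured builder applies the metrics
    // customizer; a template created with `new RestTemplate()` is not instrumented.
    public PersonClient(RestTemplateBuilder builder) {
        this.restTemplate = builder.rootUri("http://localhost:8080").build();
    }

    public Person findPerson(long id) {
        // Each call records a sample under http.client.requests, tagged with uri=/api/person/{id}
        return restTemplate.getForObject("/api/person/{id}", Person.class, id);
    }
}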
The Timer contains a set of dimensions for every request, governed by the primary bean RestTemplateExchangeTagsProvider registered in your application context. If you don't provide such a bean, a default implementation is selected which adds the following dimensions:
method, the HTTP method (for example, GET or PUT)
status, the numeric HTTP status code (for example, 200, 201, 500)
uri, the URI template prior to variable substitution (for example, /api/person/{id})
clientName, the host portion of the URI
outcome, the request's outcome based on the status code of the response: 1xx is INFORMATIONAL, 2xx is SUCCESS, 3xx is REDIRECTION, 4xx is CLIENT_ERROR, and 5xx is SERVER_ERROR
You can use the convenience static functions in RestTemplateExchangeTags to construct your own RestTemplateExchangeTagsProvider.
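A sketch of such a provider is shown below. The getTags signature and the static helpers are assumed to match the web client metrics support on your classpath (micrometer-spring or Spring Boot 2's actuator), so the imports for RestTemplateExchangeTags and RestTemplateExchangeTagsProvider are omitted; the region tag and class name are hypothetical:

import java.util.Arrays;

import io.micrometer.core.instrument.Tag;
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpResponse;

public class RegionAwareTagsProvider implements RestTemplateExchangeTagsProvider {

    @Override
    public Iterable<Tag> getTags(String urlTemplate, HttpRequest request, ClientHttpResponse response) {
        return Arrays.asList(
                RestTemplateExchangeTags.method(request),
                RestTemplateExchangeTags.uri(urlTemplate),
                RestTemplateExchangeTags.status(response),
                RestTemplateExchangeTags.clientName(request),
                // extra, hypothetical dimension added to every client-side timer
                Tag.of("region", "us-east-1"));
    }
}

Registering such a provider as a bean replaces the default implementation described above.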
Enabling metrics in your Spring Boot application plus enabling AOP configures AOP advice that times @Scheduled methods. For a method to be timed, it must be marked as @Timed with a name, for example: @Timed("my.metric.name").
Depending on the duration of the scheduled task, you may want to choose to time the method with a LongTaskTimer, a Timer, or both (generally it won't be both). The following code snippet shows an example of measuring a scheduled task with both long task and regular timings:
@Timed("beep")
@Timed(value = "long.beep", longTask = true)
@Scheduled(fixedRate = 1000)
void longBeep() {
    // calculate the meaning of life, then beep...
    System.out.println("beep");
}
Auto-configuration enables the instrumentation of all available Caches on startup, with metrics prefixed with cache. Cache instrumentation is standardized for a basic set of metrics. Additional, cache-specific metrics are also available.
The following cache libraries are supported:
Caffeine
EhCache 2
Hazelcast
Any compliant JCache (JSR-107) implementation
Metrics are tagged by the name of the cache and by the name of the CacheManager that is derived from the bean name.
Data sources can be instrumented with the registry. This requires the DataSourcePoolMetadataProvider automatically configured by Spring Boot, so it only works in a Spring Boot context where these providers are configured. Data source instrumentation is auto-configured by Spring Boot, so there is nothing for you to do.
Data source instrumentation results in gauges representing the currently active, maximum allowed, and minimum allowed connections in the pool. Each of these gauges has a name prefixed by the data source name (for example, data.source).
The original data source instance is unchanged by instrumentation.
To register custom metrics, inject MeterRegistry into your component:
class Dictionary {
    private final List<String> words = new CopyOnWriteArrayList<>();

    // Constructor injection of the Spring-managed MeterRegistry.
    public Dictionary(MeterRegistry registry) {
        registry.gaugeCollectionSize("dictionary.size", Tags.empty(), words);
    }

    ...
}
If you find that you repeatedly instrument a suite of metrics across components or applications, you may encapsulate this suite in a MeterBinder implementation. By default, metrics from all MeterBinder beans will be automatically bound to the Spring-managed MeterRegistry.
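A minimal sketch of such a binder (QueueMetrics and the metric names are hypothetical; expose it as a @Bean and it is bound automatically):

import java.util.Queue;

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;

public class QueueMetrics implements MeterBinder {

    private final String name;
    private final Queue<?> queue;

    public QueueMetrics(String name, Queue<?> queue) {
        this.name = name;
        this.queue = queue;
    }

    @Override
    public void bindTo(MeterRegistry registry) {
        // Gauges the live size of the queue each time the backend polls it.
        Gauge.builder(name + ".size", queue, Queue::size)
                .description("Number of elements waiting in the queue")
                .register(registry);
    }
}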
8. Customizing individual metrics
If you need to apply customizations to specific Meter instances you can use the io.micrometer.core.instrument.config.MeterFilter interface. By default, all MeterFilter beans will be automatically applied to Spring's managed MeterRegistry.
For example, if you want to rename the mytag.region tag to mytag.area for all meter IDs beginning with com.example, you can do the following:
@Bean
public MeterFilter renameRegionTagMeterFilter() {
    return MeterFilter.renameTag("com.example", "mytag.region", "mytag.area");
}
In addition to MeterFilter beans, it's also possible to apply a limited set of customizations on a per-meter basis using properties. Per-meter customizations apply to all meter IDs that start with the given name. For example, the following disables any meters that have an ID starting with example.remote:
management.metrics.enable.example.remote=false
The following properties allow per-meter customization:
Property | Description
---|---
management.metrics.enable | Whether to deny meters from emitting any metrics.
management.metrics.distribution.percentiles-histogram | Whether to publish a histogram suitable for computing aggregable (across dimensions) percentile approximations.
management.metrics.distribution.percentiles | Publish percentile values computed in your application.
management.metrics.distribution.sla | Publish a cumulative histogram with buckets defined by your SLAs.
In Spring Boot 1.5.x, you must escape the metric name prefix following any of these properties, like this:
management.metrics.distribution.percentiles-histogram[http.server.requests]=true
The escaping looks the same in both .properties and .yml formats. This escaping is not required in Spring Boot 2.
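For example, the same bracketed key expressed in application.yml (a sketch):

management:
  metrics:
    distribution:
      percentiles-histogram[http.server.requests]: true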
For more details on the concepts behind percentiles-histogram, percentiles, and sla, refer to the "Histograms and percentiles" section of the Concepts documentation.
Spring Boot Actuator provides its own metrics support that does not interact with Micrometer at all, so the two metrics collections happen independently.
If you want to disable the metrics support from Spring Boot Actuator, add the following properties:
spring.autoconfigure.exclude=\
org.springframework.boot.actuate.autoconfigure.MetricFilterAutoConfiguration,\
org.springframework.boot.actuate.autoconfigure.MetricRepositoryAutoConfiguration,\
org.springframework.boot.actuate.autoconfigure.MetricsDropwizardAutoConfiguration,\
org.springframework.boot.actuate.autoconfigure.MetricsChannelAutoConfiguration,\
org.springframework.boot.actuate.autoconfigure.MetricExportAutoConfiguration,\
org.springframework.boot.actuate.autoconfigure.PublicMetricsAutoConfiguration
endpoints.metrics.enabled=false
The following application properties are supported for Micrometer configuration:
# METRICS
management.metrics.binders.files.enabled=true # Whether to enable files metrics.
management.metrics.binders.integration.enabled=true # Whether to enable Spring Integration metrics.
management.metrics.binders.jvm.enabled=true # Whether to enable JVM metrics.
management.metrics.binders.logback.enabled=true # Whether to enable Logback metrics.
management.metrics.binders.processor.enabled=true # Whether to enable processor metrics.
management.metrics.binders.uptime.enabled=true # Whether to enable uptime metrics.
management.metrics.distribution.percentiles-histogram.*= # Whether meter IDs starting-with the specified name should publish percentile histograms.
management.metrics.distribution.percentiles.*= # Specific computed non-aggregable percentiles to ship to the backend for meter IDs starting-with the specified name.
management.metrics.distribution.sla.*= # Specific SLA boundaries for meter IDs starting-with the specified name. The longest match wins, the key `all` can also be used to configure all meters.
management.metrics.enable.*= # Whether meter IDs starting-with the specified name should be enabled. The longest match wins, the key `all` can also be used to configure all meters.
management.metrics.export.atlas.batch-size=10000 # Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made.
management.metrics.export.atlas.config-refresh-frequency=10s # Frequency for refreshing config settings from the LWC service.
management.metrics.export.atlas.config-time-to-live=150s # Time to live for subscriptions from the LWC service.
management.metrics.export.atlas.config-uri=http://localhost:7101/lwc/api/v1/expressions/local-dev # URI for the Atlas LWC endpoint to retrieve current subscriptions.
management.metrics.export.atlas.connect-timeout=1s # Connection timeout for requests to this backend.
management.metrics.export.atlas.enabled=true # Whether exporting of metrics to this backend is enabled.
management.metrics.export.atlas.eval-uri=http://localhost:7101/lwc/api/v1/evaluate # URI for the Atlas LWC endpoint to evaluate the data for a subscription.
management.metrics.export.atlas.lwc-enabled=false # Whether to enable streaming to Atlas LWC.
management.metrics.export.atlas.meter-time-to-live=15m # Time to live for meters that do not have any activity. After this period the meter will be considered expired and will not get reported.
management.metrics.export.atlas.num-threads=2 # Number of threads to use with the metrics publishing scheduler.
management.metrics.export.atlas.read-timeout=10s # Read timeout for requests to this backend.
management.metrics.export.atlas.step=1m # Step size (i.e. reporting frequency) to use.
management.metrics.export.atlas.uri=http://localhost:7101/api/v1/publish # URI of the Atlas server.
management.metrics.export.datadog.api-key= # Datadog API key.
management.metrics.export.datadog.application-key= # Datadog application key. Not strictly required, but improves the Datadog experience by sending meter descriptions, types, and base units to Datadog.
management.metrics.export.datadog.batch-size=10000 # Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made.
management.metrics.export.datadog.connect-timeout=1s # Connection timeout for requests to this backend.
management.metrics.export.datadog.descriptions=true # Whether to publish descriptions metadata to Datadog. Turn this off to minimize the amount of metadata sent.
management.metrics.export.datadog.enabled=true # Whether exporting of metrics to this backend is enabled.
management.metrics.export.datadog.host-tag=instance # Tag that will be mapped to "host" when shipping metrics to Datadog.
management.metrics.export.datadog.num-threads=2 # Number of threads to use with the metrics publishing scheduler.
management.metrics.export.datadog.read-timeout=10s # Read timeout for requests to this backend.
management.metrics.export.datadog.step=1m # Step size (i.e. reporting frequency) to use.
management.metrics.export.datadog.uri=https://app.datadoghq.com # URI to ship metrics to. If you need to publish metrics to an internal proxy en-route to Datadog, you can define the location of the proxy with this.
management.metrics.export.ganglia.addressing-mode=multicast # UDP addressing mode, either unicast or multicast.
management.metrics.export.ganglia.duration-units=milliseconds # Base time unit used to report durations.
management.metrics.export.ganglia.enabled=true # Whether exporting of metrics to Ganglia is enabled.
management.metrics.export.ganglia.host=localhost # Host of the Ganglia server to receive exported metrics.
management.metrics.export.ganglia.port=8649 # Port of the Ganglia server to receive exported metrics.
management.metrics.export.ganglia.protocol-version=3.1 # Ganglia protocol version. Must be either 3.1 or 3.0.
management.metrics.export.ganglia.rate-units=seconds # Base time unit used to report rates.
management.metrics.export.ganglia.step=1m # Step size (i.e. reporting frequency) to use.
management.metrics.export.ganglia.time-to-live=1 # Time to live for metrics on Ganglia. Set the multi-cast Time-To-Live to be one greater than the number of hops (routers) between the hosts.
management.metrics.export.graphite.duration-units=milliseconds # Base time unit used to report durations.
management.metrics.export.graphite.enabled=true # Whether exporting of metrics to Graphite is enabled.
management.metrics.export.graphite.host=localhost # Host of the Graphite server to receive exported metrics.
management.metrics.export.graphite.port=2004 # Port of the Graphite server to receive exported metrics.
management.metrics.export.graphite.protocol=pickled # Protocol to use while shipping data to Graphite.
management.metrics.export.graphite.rate-units=seconds # Base time unit used to report rates.
management.metrics.export.graphite.step=1m # Step size (i.e. reporting frequency) to use.
management.metrics.export.graphite.tags-as-prefix= # For the default naming convention, turn the specified tag keys into part of the metric prefix.
management.metrics.export.influx.auto-create-db=true # Whether to create the Influx database if it does not exist before attempting to publish metrics to it.
management.metrics.export.influx.batch-size=10000 # Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made.
management.metrics.export.influx.compressed=true # Whether to enable GZIP compression of metrics batches published to Influx.
management.metrics.export.influx.connect-timeout=1s # Connection timeout for requests to this backend.
management.metrics.export.influx.consistency=one # Write consistency for each point.
management.metrics.export.influx.db=mydb # Database to send metrics to.
management.metrics.export.influx.enabled=true # Whether exporting of metrics to this backend is enabled.
management.metrics.export.influx.num-threads=2 # Number of threads to use with the metrics publishing scheduler.
management.metrics.export.influx.password= # Login password of the Influx server.
management.metrics.export.influx.read-timeout=10s # Read timeout for requests to this backend.
management.metrics.export.influx.retention-policy= # Retention policy to use (Influx writes to the DEFAULT retention policy if one is not specified).
management.metrics.export.influx.step=1m # Step size (i.e. reporting frequency) to use.
management.metrics.export.influx.uri=http://localhost:8086 # URI of the Influx server.
management.metrics.export.influx.user-name= # Login user of the Influx server.
management.metrics.export.jmx.enabled=true # Whether exporting of metrics to JMX is enabled.
management.metrics.export.jmx.step=1m # Step size (i.e. reporting frequency) to use.
management.metrics.export.newrelic.account-id= # New Relic account ID.
management.metrics.export.newrelic.api-key= # New Relic API key.
management.metrics.export.newrelic.batch-size=10000 # Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made.
management.metrics.export.newrelic.connect-timeout=1s # Connection timeout for requests to this backend.
management.metrics.export.newrelic.enabled=true # Whether exporting of metrics to this backend is enabled.
management.metrics.export.newrelic.num-threads=2 # Number of threads to use with the metrics publishing scheduler.
management.metrics.export.newrelic.read-timeout=10s # Read timeout for requests to this backend.
management.metrics.export.newrelic.step=1m # Step size (i.e. reporting frequency) to use.
management.metrics.export.newrelic.uri=https://insights-collector.newrelic.com # URI to ship metrics to.
management.metrics.export.prometheus.descriptions=true # Whether to enable publishing descriptions as part of the scrape payload to Prometheus. Turn this off to minimize the amount of data sent on each scrape.
management.metrics.export.prometheus.enabled=true # Whether exporting of metrics to Prometheus is enabled.
management.metrics.export.prometheus.step=1m # Step size (i.e. reporting frequency) to use.
management.metrics.export.signalfx.access-token= # SignalFX access token.
management.metrics.export.signalfx.batch-size=10000 # Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made.
management.metrics.export.signalfx.connect-timeout=1s # Connection timeout for requests to this backend.
management.metrics.export.signalfx.enabled=true # Whether exporting of metrics to this backend is enabled.
management.metrics.export.signalfx.num-threads=2 # Number of threads to use with the metrics publishing scheduler.
management.metrics.export.signalfx.read-timeout=10s # Read timeout for requests to this backend.
management.metrics.export.signalfx.source= # Uniquely identifies the app instance that is publishing metrics to SignalFx. Defaults to the local host name.
management.metrics.export.signalfx.step=10s # Step size (i.e. reporting frequency) to use.
management.metrics.export.signalfx.uri=https://ingest.signalfx.com # URI to ship metrics to.
management.metrics.export.simple.enabled=true # Whether, in the absence of any other exporter, exporting of metrics to an in-memory backend is enabled.
management.metrics.export.simple.mode=cumulative # Counting mode.
management.metrics.export.simple.step=1m # Step size (i.e. reporting frequency) to use.
management.metrics.export.statsd.enabled=true # Whether exporting of metrics to StatsD is enabled.
management.metrics.export.statsd.flavor=datadog # StatsD line protocol to use.
management.metrics.export.statsd.host=localhost # Host of the StatsD server to receive exported metrics.
management.metrics.export.statsd.max-packet-length=1400 # Total length of a single payload should be kept within your network's MTU.
management.metrics.export.statsd.polling-frequency=10s # How often gauges will be polled. When a gauge is polled, its value is recalculated and if the value has changed (or publishUnchangedMeters is true), it is sent to the StatsD server.
management.metrics.export.statsd.port=8125 # Port of the StatsD server to receive exported metrics.
management.metrics.export.statsd.publish-unchanged-meters=true # Whether to send unchanged meters to the StatsD server.
management.metrics.export.wavefront.api-token= # API token used when publishing metrics directly to the Wavefront API host.
management.metrics.export.wavefront.batch-size=10000 # Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made.
management.metrics.export.wavefront.connect-timeout=1s # Connection timeout for requests to this backend.
management.metrics.export.wavefront.enabled=true # Whether exporting of metrics to this backend is enabled.
management.metrics.export.wavefront.global-prefix= # Global prefix to separate metrics originating from this app's white box instrumentation from those originating from other Wavefront integrations when viewed in the Wavefront UI.
management.metrics.export.wavefront.num-threads=2 # Number of threads to use with the metrics publishing scheduler.
management.metrics.export.wavefront.read-timeout=10s # Read timeout for requests to this backend.
management.metrics.export.wavefront.source= # Unique identifier for the app instance that is the source of metrics being published to Wavefront. Defaults to the local host name.
management.metrics.export.wavefront.step=10s # Step size (i.e. reporting frequency) to use.
management.metrics.export.wavefront.uri=https://longboard.wavefront.com # URI to ship metrics to.
management.metrics.use-global-registry=true # Whether auto-configured MeterRegistry implementations should be bound to the global static registry on Metrics.
management.metrics.tags.*= # Common tags that are applied to every meter.
management.metrics.web.client.max-uri-tags=100 # Maximum number of unique URI tag values allowed. After the max number of tag values is reached, metrics with additional tag values are denied by filter.
management.metrics.web.client.requests-metric-name=http.client.requests # Name of the metric for sent requests.
management.metrics.web.server.auto-time-requests=true # Whether requests handled by Spring MVC or WebFlux should be automatically timed.
management.metrics.web.server.requests-metric-name=http.server.requests # Name of the metric for received requests.