When writing data to InfluxDB, the following error suddenly appeared. I am using the latest release to date, 1.2:
java.lang.RuntimeException: {"error":"partial write: max-values-per-tag limit exceeded (100009/100000): measurement=\"t1\" tag=\"openId\" value=\"oGeG_0emqmIf5xakHAZq5_NEbcJ0\"dropped=4"}
at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:466)
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:267)
at com.ihangmei.datapro.consumer.influxdb.InfluxdbInstance.writeInfluxdb(InfluxdbInstance.java:97)
at com.ihangmei.datapro.consumer.kafka.WechatRunnable.run(WechatRunnable.java:50)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
java.lang.RuntimeException: {"error":"partial write: max-values-per-tag limit exceeded (100009/100000): measurement=\"t1\" tag=\"openId\" value=\"oGeG_0RngX2VL7c_wmzDNyacMQHI\"dropped=4"}
at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:466)
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:267)
at com.ihangmei.datapro.consumer.influxdb.InfluxdbInstance.writeInfluxdb(InfluxdbInstance.java:97)
at com.ihangmei.datapro.consumer.kafka.WechatRunnable.run(WechatRunnable.java:50)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
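For context, the write path that trips the limit looks roughly like the sketch below. It uses the influxdb-java client visible in the stack trace, but it is not the actual InfluxdbInstance.writeInfluxdb code: the measurement t1, tag key openId, and sample tag value come from the error message, while the URL, credentials, database name, and field name are placeholder assumptions.

import java.util.concurrent.TimeUnit;

import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Point;

public class InfluxdbWriteDemo {
    public static void main(String[] args) {
        // Placeholder connection settings (assumptions, not from the original code).
        InfluxDB influxDB = InfluxDBFactory.connect("http://localhost:8086", "root", "root");

        // openId is a per-user identifier, so using it as a tag creates one new
        // tag value per user. Once the tag key accumulates 100,000 distinct
        // values, any write that introduces a new value fails with the
        // "partial write: max-values-per-tag limit exceeded" error above.
        Point point = Point.measurement("t1")
                .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
                .tag("openId", "oGeG_0emqmIf5xakHAZq5_NEbcJ0")
                .addField("value", 1)
                .build();
        influxDB.write("mydb", "autogen", point); // assumed database and retention policy
    }
}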
The official documentation describes the relevant setting, max-values-per-tag, as follows:
The maximum number of tag values allowed per tag key. The default setting is 100000. Change the setting to 0 to allow an unlimited number of tag values per tag key. If a tag value causes the number of tag values of a tag key to exceed max-values-per-tag, InfluxDB will not write the point, and it returns a partial write error. Any existing tag keys with tag values that exceed max-values-per-tag will continue to accept writes, but writes that create a new tag value will fail.
Environment variable: INFLUXDB_DATA_MAX_VALUES_PER_TAG
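If the cardinality is expected and you accept the memory cost discussed below, the limit can be raised or disabled. A sketch of the change in influxdb.conf for 1.2 (the section and key names follow the documentation linked at the end; restart InfluxDB after editing):

[data]
  # Default is 100000; 0 disables the per-tag-key value limit entirely.
  max-values-per-tag = 0

The same setting can be applied through the environment variable quoted above, e.g. INFLUXDB_DATA_MAX_VALUES_PER_TAG=0.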
In other words, by default each tag key may have at most 100,000 tag values; once a write would push past that limit, the error above is returned. Another article explains the reason behind this limit:
InfluxDB added validation of the number of distinct values per tag, with a default cap of 100,000. If a write would push a tag past the limit, the caller receives an error. This guards against the series cardinality of a measurement growing too large: if we ingest a large volume of data with very high series cardinality, InfluxDB can run out of memory when that data is later deleted.
The underlying reason is that InfluxDB maintains an in-memory index of every series in the system. As the number of unique series grows, so does RAM usage. Excessively high series cardinality can cause the operating system to kill the InfluxDB process with an out-of-memory (OOM) error.
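Given that risk, the usual alternative to raising the limit is a schema change: store the high-cardinality identifier as a field instead of a tag. Fields are not indexed, so each new openId no longer creates a new series, at the cost of losing efficient filtering on openId. A minimal sketch, reusing the same hypothetical client setup as above:

// openId as a field rather than a tag: it no longer counts toward
// max-values-per-tag or series cardinality, but WHERE clauses on it
// will scan data rather than use the index.
Point point = Point.measurement("t1")
        .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
        .addField("openId", "oGeG_0emqmIf5xakHAZq5_NEbcJ0")
        .addField("value", 1)
        .build();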
Reference: https://docs.influxdata.com/influxdb/v1.2/administration/config#max-values-per-tag-100000