When I use PyFlink Hive SQL to read data and insert it into Elasticsearch, the following exception is thrown. Environment: Flink 1.11.2, flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar, Hive 3.1.2.
2020-12-17 21:10:24,398 WARN org.apache.flink.runtime.taskmanager.Task [] - Source: HiveTableSource(driver_id, driver_base_lc_p1, driver_90d_lc_p1, driver_30d_lc_p1, driver_14d_lc_p1, driver_180d_lc_p1, vehicle_base_lc_p1, driver_active_zone, is_incremental, dt) TablePath: algorithm.jiawei_oas_driver_features_for_incremental_hive2kafka, PartitionPruned: false, PartitionNums: null, ProjectedFields: [0, 8, 9] -> Calc(select=[driver_id, is_incremental, dt, () AS bdi_feature_create_time]) -> Sink: Sink(table=[default_catalog.default_database.0_demo4_903157246_tmp], fields=[driver_id, is_incremental, dt, bdi_feature_create_time]) (1/1) (98f4259c3d00fac9fc3482a4cdc8df3c) switched from RUNNING to FAILED.
java.lang.ArrayIndexOutOfBoundsException: 1024
at org.apache.orc.impl.TreeReaderFactory$TreeReader.nextVector(TreeReaderFactory.java:269) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.orc.impl.TreeReaderFactory$LongTreeReader.nextVector(TreeReaderFactory.java:612) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.orc.impl.ConvertTreeReaderFactory$AnyIntegerTreeReader.nextVector(ConvertTreeReaderFactory.java:445) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.orc.impl.ConvertTreeReaderFactory$StringGroupFromAnyIntegerTreeReader.nextVector(ConvertTreeReaderFactory.java:1477) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.orc.impl.TreeReaderFactory$StructTreeReader.nextBatch(TreeReaderFactory.java:2012) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1300) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.orc.shim.OrcShimV210.nextBatch(OrcShimV210.java:35) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.orc.shim.OrcShimV210.nextBatch(OrcShimV210.java:29) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.orc.OrcSplitReader.ensureBatch(OrcSplitReader.java:134) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.orc.OrcSplitReader.reachedEnd(OrcSplitReader.java:101) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.connectors.hive.read.HiveVectorizedOrcSplitReader.reachedEnd(HiveVectorizedOrcSplitReader.java:99) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.connectors.hive.read.HiveTableInputFormat.reachedEnd(HiveTableInputFormat.java:261) ~[flink-sql-connector-hive-3.1.2_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:90) ~[flink-dist_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) ~[flink-dist_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63) ~[flink-dist_2.11-1.11.2.jar:1.11.2]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:213) ~[flink-dist_2.11-1.11.2.jar:1.11.2]
2020-12-17 21:10:24,402 INFO org.apache.flink.runtime.taskmanager.Task [] - Freeing task resources for Source: HiveTableSource(driver_id, driver_base_lc_p1, driver_90d_lc_p1, driver_30d_lc_p1, driver_14d_lc_p1, driver_180d_lc_p1, vehicle_base_lc_p1, driver_active_zone, is_incremental, dt) TablePath: algorithm.jiawei_oas_driver_features_for_incremental_hive2kafka, PartitionPruned: false, PartitionNums: null, ProjectedFields: [0, 8, 9] -> Calc(select=[driver_id, is_incremental, dt, () AS bdi_feature_create_time]) -> Sink: Sink(table=[default_catalog.default_database.0_demo4_903157246_tmp], fields=[driver_id, is_incremental, dt, bdi_feature_create_time]) (1/1) (98f4259c3d00fac9fc3482a4cdc8df3c).
2020-12-17 21:10:24,406 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor
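For context, the job is roughly of the following shape. This is only a minimal sketch: the catalog name, hive conf dir, and Elasticsearch connector options are placeholders and not taken from the original job; the table and column names come from the log above.

```python
from pyflink.table import EnvironmentSettings, StreamTableEnvironment
from pyflink.table.catalog import HiveCatalog

settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
t_env = StreamTableEnvironment.create(environment_settings=settings)

# Register the Hive catalog so the ORC-backed Hive table can be queried with SQL.
catalog = HiveCatalog("myhive", "algorithm", "/opt/hive/conf")  # hive_conf_dir is a placeholder
t_env.register_catalog("myhive", catalog)
t_env.use_catalog("myhive")

# Elasticsearch sink in the default catalog (connector options are illustrative only).
t_env.execute_sql("""
    CREATE TABLE default_catalog.default_database.es_sink (
        driver_id STRING,
        is_incremental STRING,
        dt STRING,
        bdi_feature_create_time TIMESTAMP(3)
    ) WITH (
        'connector' = 'elasticsearch-7',
        'hosts' = 'http://localhost:9200',
        'index' = 'driver_features'
    )
""")

# The ArrayIndexOutOfBoundsException is thrown while this INSERT scans the Hive table.
t_env.execute_sql("""
    INSERT INTO default_catalog.default_database.es_sink
    SELECT driver_id, is_incremental, dt, CURRENT_TIMESTAMP
    FROM jiawei_oas_driver_features_for_incremental_hive2kafka
""")
```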
How can I solve this problem?
The cause is a bug in the ORC format related to the array batch size:
https://issues.apache.org/jira/browse/ORC-598
https://issues.apache.org/jira/browse/ORC-672
We have created an issue to fix this ORC format bug in Flink: https://issues.apache.org/jira/browse/FLINK-20667
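Until that fix is released, a workaround that is often suggested is to disable Flink's native vectorized ORC reader for Hive sources, so the splits are read through the Hadoop mapred record reader instead. This is a hedged sketch: `table.exec.hive.fallback-mapred-reader` is the option name documented for the Flink 1.11/1.12 Hive connector, but verify it against your exact version. Upgrading to a release that contains the FLINK-20667 fix is the other route.

```python
# Fall back to the (slower) mapred ORC record reader instead of the vectorized reader,
# which avoids the code path that overflows the 1024-entry column vectors.
t_env.get_config().get_configuration().set_string(
    "table.exec.hive.fallback-mapred-reader", "true")
```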
I am reading a Hive table using Spark SQL and assigning it to a Scala val. Is there a way to work around this error? I need to insert the records into the same table. Hi, I tried doing as suggested but am still getting the same error.
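The error itself is not quoted here, but if it is the usual "cannot insert into a table that is also being read from" failure, a common workaround is to materialize the derived rows first (to a staging table or a checkpoint) and only then append them back to the source table. A minimal PySpark sketch under that assumption, with placeholder table names:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

src = spark.table("db.my_table")              # placeholder source/target table
derived = src.filter("dt = '2020-12-17'")     # whatever transformation is needed

# Materialize to a staging table so Spark no longer reads and writes the same table at once.
derived.write.mode("overwrite").saveAsTable("db.my_table_staging")
spark.table("db.my_table_staging").write.insertInto("db.my_table")
```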
I created a table in Hive (beeline) with the command below: I also have a small custom file containing sample records, like: Any ideas?
I am trying to run select * from db.abc in Hive; this Hive table was loaded using Spark. It does not work and shows the error: Error: java.io.IOException: java.lang.IllegalArgumentException: bucketId out of range: -1 (state=,code=0). Do I need to add any properties in spark-submit or the shell? Or is there another way to read this Hive table using Spark?
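"bucketId out of range: -1" usually comes from Hive's ACID reader hitting data files that do not follow the transactional/bucketed layout, which can happen when Spark writes directly into a Hive 3 managed table (transactional by default). Treat that diagnosis as an assumption for this post; if it applies, a commonly suggested mitigation is to keep the Spark-written table external (and therefore non-transactional), e.g.:

```python
# Hedged sketch: schema, location and table names are placeholders.
# External tables are not transactional, so both Hive and Spark read plain ORC files.
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS db.abc (
        id BIGINT,
        name STRING
    )
    STORED AS ORC
    LOCATION '/warehouse/external/db/abc'
""")
spark.table("db.abc_source").write.insertInto("db.abc")   # hypothetical source table
```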
I have a strange error: I am trying to write data to Hive. It works fine in spark-shell, but when I use spark-submit it throws a "database/table not found in default" error. Below is the code I am trying to run via spark-submit; I am using a custom build of Spark 2.0.0. 16/05/20 09:05:18 INFO SparkSqlParser: Parsing command: spark_schema.measures_2016
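When the same code works in spark-shell but fails under spark-submit with "database/table not found", the usual cause is that the submitted job never enabled Hive catalog support (or could not see hive-site.xml) and fell back to the in-memory catalog. That is an assumption here, but it is the first thing to check; a minimal sketch:

```python
from pyspark.sql import SparkSession

# Build the session with Hive support so spark-submit resolves tables against the Hive
# metastore, as spark-shell does when hive-site.xml is on the driver classpath.
spark = (SparkSession.builder
         .appName("write-to-hive")
         .enableHiveSupport()
         .getOrCreate())

df = spark.table("spark_schema.measures_2016")   # table name taken from the log line above
```

Also make sure hive-site.xml is visible to the submitted job, for example by placing it in $SPARK_HOME/conf or shipping it with `--files hive-site.xml`.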
The states table is partitioned by country, so when I count the dataset above, the query scans all partitions. But if I read it like this - the partitions are pruned correctly. Can someone explain why the partition information is lost when you map the table to a case class?
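The short explanation: Catalyst can push a partition filter down only while the predicate is a relational (Column) expression it can analyze; once the rows go through a typed mapping (a case-class Dataset filtered with a lambda, or an RDD/Python function), the predicate is opaque to the optimizer and every partition is scanned. A PySpark sketch of the same effect, with a placeholder table name:

```python
from pyspark.sql.functions import col

states = spark.table("db.states")   # assumed to be partitioned by `country`

# Column-expression predicate: Catalyst sees it, and the scan prunes partitions.
pruned = states.filter(col("country") == "DE")
pruned.explain()   # the plan should show a partition filter on `country`

# Opaque predicate: the lambda runs only after the scan, so all partitions are read.
not_pruned = states.rdd.filter(lambda row: row["country"] == "DE")
```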
I set up an AWS EMR cluster with Spark 2.3.2, Hive 2.3.3, and HBase 1.4.7. How do I configure Spark to access the Hive tables? I followed the steps below, but the result is an error message: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/tez/dag/api/SessionNotRunning when using
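The NoClassDefFoundError for org/apache/tez/dag/api/SessionNotRunning typically means the hive-site.xml that Spark picked up still sets hive.execution.engine=tez, which Spark's bundled Hive client cannot start; a commonly suggested fix is to set hive.execution.engine=mr in the hive-site.xml that Spark reads, or to point Spark only at the metastore and let Spark itself execute the queries. Treat the property name and URI below as assumptions to adapt to your EMR master node:

```python
from pyspark.sql import SparkSession

# Hedged sketch: point Spark at the Hive metastore directly; the thrift URI is a placeholder.
spark = (SparkSession.builder
         .appName("read-hive-on-emr")
         .config("hive.metastore.uris", "thrift://emr-master:9083")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("SHOW DATABASES").show()
```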