I'm trying to convert roughly 1.5 GB of gzipped CSV to Parquet using AWS Glue. The script below is the auto-generated Glue job for the task. It seems to take an extremely long time (I've waited several hours with 10 DPUs and never seen it finish or produce any output data).
I'm wondering whether anyone has experience converting 1.5 GB of gzipped CSV to Parquet -- is there a better way to do this conversion?
I have terabytes of data to convert, so it's worrying that converting mere gigabytes already seems to take this long.
My Glue job log has thousands of entries like:
18/03/02 20:20:20 DEBUG Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 172.31.58.225
ApplicationMaster RPC port: 0
queue: default
start time: 1520020335454
final status: UNDEFINED
tracking URL: http://ip-172-31-51-199.ec2.internal:20888/proxy/application_1520020149832_0001/
user: root
The auto-generated AWS Glue job code:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "test_datalake_db", table_name = "events2_2017_test", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "test_datalake_db", table_name = "events2_2017_test", transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: [mapping = [("sys_vortex_id", "string", "sys_vortex_id", "string"), ("sys_app_id", "string", "sys_app_id", "string"), ("sys_pq_id", "string", "sys_pq_id", "string"), ("sys_ip_address", "string", "sys_ip_address", "string"), ("sys_submitted_at", "string", "sys_submitted_at", "string"), ("sys_received_at", "string", "sys_received_at", "string"), ("device_id_type", "string", "device_id_type", "string"), ("device_id", "string", "device_id", "string"), ("timezone", "string", "timezone", "string"), ("online", "string", "online", "string"), ("app_version", "string", "app_version", "string"), ("device_days", "string", "device_days", "string"), ("device_sessions", "string", "device_sessions", "string"), ("event_id", "string", "event_id", "string"), ("event_at", "string", "event_at", "string"), ("event_date", "string", "event_date", "string"), ("int1", "string", "int1", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("sys_vortex_id", "string", "sys_vortex_id", "string"), ("sys_app_id", "string", "sys_app_id", "string"), ("sys_pq_id", "string", "sys_pq_id", "string"), ("sys_ip_address", "string", "sys_ip_address", "string"), ("sys_submitted_at", "string", "sys_submitted_at", "string"), ("sys_received_at", "string", "sys_received_at", "string"), ("device_id_type", "string", "device_id_type", "string"), ("device_id", "string", "device_id", "string"), ("timezone", "string", "timezone", "string"), ("online", "string", "online", "string"), ("app_version", "string", "app_version", "string"), ("device_days", "string", "device_days", "string"), ("device_sessions", "string", "device_sessions", "string"), ("event_id", "string", "event_id", "string"), ("event_at", "string", "event_at", "string"), ("event_date", "string", "event_date", "string"), ("int1", "string", "int1", "string")], transformation_ctx = "applymapping1")
## @type: ResolveChoice
## @args: [choice = "make_struct", transformation_ctx = "resolvechoice2"]
## @return: resolvechoice2
## @inputs: [frame = applymapping1]
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")
## @type: DropNullFields
## @args: [transformation_ctx = "dropnullfields3"]
## @return: dropnullfields3
## @inputs: [frame = resolvechoice2]
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://devops-redshift*****/prd/parquet"}, format = "parquet", transformation_ctx = "datasink4"]
## @return: datasink4
## @inputs: [frame = dropnullfields3]
datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://devops-redshift*****/prd/parquet"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()
Yes, I recently found that Spark DataFrames (as opposed to Glue's DynamicFrames) are a much faster way to do this.
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

# boilerplate, generated code
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# some job-specific variables
compression_type = 'snappy'  # 'snappy', 'gzip', or 'none'
source_path = 's3://source-bucket/part1=x/part2=y/'
destination_path = 's3://destination-bucket/part1=x/part2=y/'

# CSV to Parquet conversion using plain Spark DataFrames
df = spark.read.option('delimiter', '|').option('header', 'true').csv(source_path)
df.write.mode('overwrite').format('parquet').option('compression', compression_type).save(destination_path)

job.commit()
I have Parquet files in HDFS and I want to convert those Parquet files to CSV format.
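Not part of the original question, but a minimal PySpark sketch of that conversion, assuming hypothetical HDFS paths hdfs:///data/parquet/ and hdfs:///data/csv_out/:
from pyspark.sql import SparkSession

# Hypothetical paths; adjust to your HDFS layout.
spark = SparkSession.builder.appName('parquet-to-csv').getOrCreate()

# Read the Parquet files and write them back out as CSV with a header row.
df = spark.read.parquet('hdfs:///data/parquet/')
df.write.mode('overwrite').option('header', 'true').csv('hdfs:///data/csv_out/')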
I'm using AWS S3, Glue, and Athena with the following setup: S3 -- my raw data is stored on S3 as CSV files. I use Glue for ETL and Athena to query the data. Since I'm using Athena, I want to convert the CSV files to Parquet, and I'm currently doing that with AWS Glue. This is the process I currently use: run a crawler to read the CSV files and populate the Data Catalog, then run a Glue job, which only lets me convert one table at a time. With many CSV files this process quickly becomes unmanageable.
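One possible way around the one-table-at-a-time limitation (a sketch, not from the original post) is to list the catalog tables from inside the job with boto3 and convert each one in a loop. It assumes the Glue boilerplate above (glueContext) plus a hypothetical database name csv_db and output prefix s3://my-bucket/parquet/:
import boto3

glue_client = boto3.client('glue')

# Hypothetical names; replace with your catalog database and output bucket.
database_name = 'csv_db'
output_prefix = 's3://my-bucket/parquet/'

# Convert every table the crawler registered in a single job run.
# (For very large catalogs, page through the get_tables results.)
tables = glue_client.get_tables(DatabaseName=database_name)['TableList']
for table in tables:
    table_name = table['Name']
    dyf = glueContext.create_dynamic_frame.from_catalog(database=database_name, table_name=table_name)
    dyf.toDF().write.mode('overwrite').parquet(output_prefix + table_name)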
Is it possible to have the Glue job reclassify the JSON table as Parquet without needing another crawler to crawl the Parquet files? Current setup: JSON files in a partitioned S3 bucket are crawled once a day. I have to believe there is a way to change the table classification without another crawler (but I've been burned by AWS before). Any help is greatly appreciated!
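Not something the original thread confirms, but newer Glue versions can create or update the Data Catalog table from the ETL job itself via getSink(..., enableUpdateCatalog=True), which avoids a second crawler. A hedged sketch, assuming glueContext, a DynamicFrame dyf read from the JSON table, and hypothetical database/table/path names:
# Write Parquet and register/update the target table in the Data Catalog directly.
sink = glueContext.getSink(
    connection_type='s3',
    path='s3://my-bucket/parquet/',              # hypothetical output path
    enableUpdateCatalog=True,
    updateBehavior='UPDATE_IN_DATABASE',
    partitionKeys=['dt'])                        # hypothetical partition column
sink.setFormat('glueparquet')
sink.setCatalogInfo(catalogDatabase='my_db', catalogTableName='events_parquet')  # hypothetical names
sink.writeFrame(dyf)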
I'm trying to convert a bunch of multi-part Avro files stored on HDFS (about 100 GB) to Parquet files (keeping all the data). Hive can read the Avro files as an external table using the following command, but when I try to create a Parquet table, it throws an error: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.UnsupportedOperationException: Unknown field...
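As an alternative route (not from the original post), the same conversion can be done in PySpark by reading the Avro files and writing Parquet; a sketch assuming Spark 2.4+ with the matching spark-avro package on the classpath and hypothetical HDFS paths:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('avro-to-parquet').getOrCreate()

# Requires the spark-avro package (e.g. via --packages) for the 'avro' format.
df = spark.read.format('avro').load('hdfs:///data/avro/')
df.write.mode('overwrite').parquet('hdfs:///data/parquet/')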
We have a use case where data is processed in Redshift, but I want to create backups of those tables in S3 so I can query them with Spectrum. To move the tables from Redshift to S3 I'm using a Glue ETL job. I've created a crawler for the AWS Redshift source. The Glue job converts the data to Parquet and stores it in S3, partitioned by date. Then another crawler crawls the S3 files to catalog the data again. How can I eliminate the second crawler and do this in the job itself?
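Not confirmed by the original thread, but if the Parquet table already exists in the Data Catalog, newer Glue versions can add the new date partitions straight from the job via enableUpdateCatalog, which removes the need for the second crawler. A hedged sketch assuming glueContext, a DynamicFrame dyf holding the Redshift extract, and hypothetical names:
# Write partitioned Parquet and register new partitions in the existing catalog table.
sink = glueContext.write_dynamic_frame_from_catalog(
    frame=dyf,
    database='backup_db',                        # hypothetical catalog database
    table_name='redshift_backup_parquet',        # hypothetical existing table
    transformation_ctx='write_sink',
    additional_options={
        'enableUpdateCatalog': True,             # update the catalog from the job
        'partitionKeys': ['date']})              # hypothetical partition column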
I have CSV files delivered to S3 every day, and they are incremental within the current month: file1 contains day 1's data, file2 contains data for days 1 and 2, and so on. Each day I want to run an ETL over that data and write it to a different S3 location so I can query it with Athena without duplicate rows. Essentially, I only want to query the latest state of the aggregated data (which is just the contents of the most recently delivered file). I don't think job bookmarks will work, since each incremental delivery contains data from the previous files and would therefore produce duplicates. I know...
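One way to approach this (a sketch, not from the original post) is to skip bookmarks entirely: find the most recently delivered object with boto3, read only that file, and overwrite the Athena location with it. It assumes the spark session from the Glue boilerplate plus hypothetical bucket and prefix names:
import boto3

s3 = boto3.client('s3')

# Hypothetical locations; adjust to your delivery and query buckets.
source_bucket = 'daily-deliveries'
source_prefix = 'incremental/'
destination_path = 's3://athena-data/latest/'

# Find the most recently delivered incremental file.
objects = s3.list_objects_v2(Bucket=source_bucket, Prefix=source_prefix)['Contents']
latest = max(objects, key=lambda o: o['LastModified'])

# Read only that file and overwrite the query location so Athena never sees duplicate rows.
df = spark.read.option('header', 'true').csv('s3://{}/{}'.format(source_bucket, latest['Key']))
df.write.mode('overwrite').parquet(destination_path)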