Here is the sbt code I use to import the libraries:
libraryDependencies ++= Seq(
  "javassist" % "javassist" % "3.12.1.GA",
  "com.typesafe" % "config" % "1.3.4",
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "com.datastax.spark" %% "spark-cassandra-connector" % "2.4.1",
  "com.twitter" % "jsr166e" % "1.1.0",
  "com.amazonaws" % "aws-java-sdk" % "1.11.592",
  "org.apache.hadoop" % "hadoop-aws" % "2.7.3",
  "org.apache.spark" %% "spark-catalyst" % sparkVersion
)
My Scala code is as follows:
val rdd = sparkSession.sparkContext.parallelize(
  Seq(
    ("first", Array(2.0, 1.0, 2.1, 5.4)),
    ("test", Array(1.5, 0.5, 0.9, 3.7)),
    ("choose", Array(8.0, 2.9, 9.1, 2.5))
  )
)
val dfWithoutSchema = sparkSession.createDataFrame(rdd)

sparkSession.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "XXXXXX")
sparkSession.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "XXXXXXX")
sparkSession.sparkContext.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")

dfWithoutSchema.write
  .mode("overwrite")
  .parquet("s3a://test-daily-extracts/sample2")
java.lang.NoClassDefFoundError: com/amazonaws/auth/AWSCredentialsProvider
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:113)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:88)
at org.apache.parquet.hadoop.ParquetOutputCommitter.<init>(ParquetOutputCommitter.java:43)
at org.apache.parquet.hadoop.ParquetOutputFormat.getOutputCommitter(ParquetOutputFormat.java:442)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupCommitter(HadoopMapReduceCommitProtocol.scala:100)
at org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol.setupCommitter(SQLHadoopMapReduceCommitProtocol.scala:40)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupTask(HadoopMapReduceCommitProtocol.scala:217)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:229)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.amazonaws.auth.AWSCredentialsProvider
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 30 more
I then switched aws-java-sdk to 1.7.4 (the version hadoop-aws 2.7.x is built against) and added the related Hadoop and jets3t artifacts:

libraryDependencies ++= Seq(
  "javassist" % "javassist" % "3.12.1.GA",
  "com.typesafe" % "config" % "1.3.4",
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "com.datastax.spark" %% "spark-cassandra-connector" % "2.4.1",
  "com.twitter" % "jsr166e" % "1.1.0",
  "com.amazonaws" % "aws-java-sdk" % "1.7.4",
  "net.java.dev.jets3t" % "jets3t" % "0.9.4",
  "org.apache.hadoop" % "hadoop-aws" % "2.7.3",
  "org.apache.hadoop" % "hadoop-client" % "2.7.3",
  "org.apache.hadoop" % "hadoop-hdfs" % "2.7.3",
  "org.apache.spark" %% "spark-catalyst" % sparkVersion
)
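Since hadoop-aws 2.7.x declares aws-java-sdk 1.7.4 as its own dependency, a pin can guard against another module evicting it with a different SDK release. A one-line sketch (an assumption on my part, not something this build already used):

// Sketch: force every module to agree on the SDK version hadoop-aws expects.
dependencyOverrides += "com.amazonaws" % "aws-java-sdk" % "1.7.4"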
To see what is actually on the classpath at runtime, I print every URL visible from the application classloader:

// Walks up the classloader chain, collecting the URLs of every URLClassLoader.
def urlsinclasspath(cl: ClassLoader): Array[java.net.URL] = cl match {
  case null => Array()
  case u: java.net.URLClassLoader => u.getURLs() ++ urlsinclasspath(cl.getParent)
  case _ => urlsinclasspath(cl.getParent)
}

urlsinclasspath(getClass.getClassLoader).foreach(println)
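Note that this helper only inspects the driver's classloader, while the failing task below runs on an executor. A small probe job can check whether the class resolves there as well; a sketch (the "found"/"missing" strings are illustrative):

// Sketch: attempt to load the class on an executor instead of the driver.
val status = sparkSession.sparkContext.parallelize(Seq(1)).map { _ =>
  try { Class.forName("com.amazonaws.auth.AWSCredentialsProvider"); "found" }
  catch { case _: ClassNotFoundException => "missing" }
}.collect().head
println("AWSCredentialsProvider on executor: " + status)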
I can now see that aws-java-sdk-1.7.4 is loaded at runtime, and that it contains the AWSCredentialsProvider class, but I still get the error below. My full stack trace follows:
19/07/17 17:02:25 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, XX.XX.XX.XX, executor 0): java.lang.NoClassDefFoundError: com/amazonaws/auth/AWSCredentialsProvider
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:113)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:88)
at org.apache.parquet.hadoop.ParquetOutputCommitter.<init>(ParquetOutputCommitter.java:43)
at org.apache.parquet.hadoop.ParquetOutputFormat.getOutputCommitter(ParquetOutputFormat.java:442)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupCommitter(HadoopMapReduceCommitProtocol.scala:100)
at org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol.setupCommitter(SQLHadoopMapReduceCommitProtocol.scala:40)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupTask(HadoopMapReduceCommitProtocol.scala:217)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:229)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.amazonaws.auth.AWSCredentialsProvider
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 30 more
This dependency contains the com/amazonaws/auth/AWSCredentialsProvider class you are missing:

libraryDependencies += "com.amazonaws" % "aws-java-sdk" % "1.11.592"

I suggest building an uber jar, i.e. using sbt to package all of your jars and their dependencies into a single jar, so that nothing is left out.
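A minimal sketch of that setup with the sbt-assembly plugin (the plugin version and merge strategy below are common defaults, not taken from this build):

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

// build.sbt: resolve duplicate files when the dependency jars are merged.
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _ => MergeStrategy.first
}

Running sbt assembly then produces one fat jar containing the AWS SDK classes, which you can pass to spark-submit so the executors see the same classes as the driver.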