Question:

PyCharm: Java gateway process exited before sending its port number

闾丘成礼
2023-03-14

I installed pyspark with:

pip install pyspark 

According to the examples on the web, it should then run with just:

python nameofthefile.py

But I get the following error:

Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
    at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2422)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2422)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2422)
    at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:79)
    at org.apache.spark.deploy.SparkSubmit.secMgr$lzycompute$1(SparkSubmit.scala:359)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$secMgr$1(SparkSubmit.scala:359)
    at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
    at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
    at scala.Option.map(Option.scala:146)
    at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:366)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:143)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2
    at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3319)
    at java.base/java.lang.String.substring(String.java:1874)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:52)
    ... 23 more
Traceback (most recent call last):
  File "C:/Users/.../PycharmProjects/PoC/Databricks.py", line 4, in <module>
    spark = SparkSession.builder.appName("Databricks").getOrCreate()
  File "C:\Users\...\Desktop\env\lib\site-packages\pyspark\sql\session.py", line 173, in getOrCreate
    sc = SparkContext.getOrCreate(sparkConf)
  File "C:\Users\...\Desktop\env\lib\site-packages\pyspark\context.py", line 349, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "C:\Users\...\Desktop\env\lib\site-packages\pyspark\context.py", line 115, in __init__
    SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
  File "C:\Users\...\Desktop\env\lib\site-packages\pyspark\context.py", line 298, in _ensure_initialized
    SparkContext._gateway = gateway or launch_gateway(conf)
  File "C:\Users\...\Desktop\env\lib\site-packages\pyspark\java_gateway.py", line 94, in launch_gateway
    raise Exception("Java gateway process exited before sending its port number")
Exception: Java gateway process exited before sending its port number
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
    at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2422)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2422)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2422)
    at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:79)
    at org.apache.spark.deploy.SparkSubmit.secMgr$lzycompute$1(SparkSubmit.scala:359)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$secMgr$1(SparkSubmit.scala:359)
    at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
    at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
    at scala.Option.map(Option.scala:146)
    at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:366)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:143)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2019-01-24 08:46:16 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).

Then, to solve the second problem (the missing winutils.exe reported above), just define the HADOOP_HOME and PATH environment variables in the Control Panel so that any Windows program can use them.
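If you would rather set them from the script itself, here is a minimal sketch; the C:\hadoop location is only an assumption about where winutils.exe was unpacked, and these lines must run before the SparkSession is created:

import os

# Assumed install location, i.e. C:\hadoop\bin\winutils.exe -- adjust as needed.
os.environ["HADOOP_HOME"] = "C:\\hadoop"
# Hadoop looks for %HADOOP_HOME%\bin\winutils.exe on Windows, so expose it on PATH too.
os.environ["PATH"] = os.environ["HADOOP_HOME"] + "\\bin;" + os.environ["PATH"]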

1 Answer

吉凯捷
2023-03-14

Short answer:

I ran into a similar problem and solved it by changing my JAVA_HOME environment variable. The "Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2" in your trace suggests Spark is launching a newer JVM: old Hadoop code takes substring(0, 3) of the Java version string, which fails on two-character versions such as "10". Pointing JAVA_HOME at a JDK 8 installation fixed it for me. You can manually add a new user environment variable JAVA_HOME with the path to your Java Development Kit (e.g. "C:/Progra~1/Java/jdk1.8.0_121", or "C:/Progra~2/Java/jdk1.8.0_121" if it is installed under "Program Files (x86)" on Windows; Progra~1 and Progra~2 are the 8.3 short names, which avoid spaces in the path).

You can also try something like this at the beginning of your Python code:

import os

# Use Progra~2 instead of Progra~1 if your JDK is under "Program Files (x86)".
os.environ["JAVA_HOME"] = "C:/Progra~1/Java/jdk1.8.0_121"
os.environ["SPARK_HOME"] = "/path/to/spark-2.3.1-bin-hadoop2.7"

Then I suggest using findspark (which you can install with pip install findspark): https://github.com/minrk/findspark

Then you can use it like this:

import findspark
findspark.init()  # locates SPARK_HOME and makes pyspark importable
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
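If the gateway starts correctly, a quick smoke test like the following should print a two-row table (the data and column names here are made up purely for illustration):

df = spark.createDataFrame([(1, "spark"), (2, "works")], ["id", "word"])
df.show()
spark.stop()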

In particular, on Windows, JAVA_HOME should look like:

C:\Progra~1\Java\jdk1.8.0_121
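To double-check which JVM will actually be launched, a simple sanity check (assuming JAVA_HOME is already set as above; nothing here is specific to Spark) is:

import os
import subprocess

# "java -version" prints to stderr; check=True raises if the binary is missing.
java = os.path.join(os.environ["JAVA_HOME"], "bin", "java")
subprocess.run([java, "-version"], check=True)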