Question:

Java gateway process exited before sending its port number (Spark)

龙德润
2023-03-14

I am trying to set up Spark with Anaconda on my Windows 10 machine, but when I try to run pyspark in a Jupyter Notebook I get an error. I am following the steps in a tutorial. I have downloaded Java 8 and installed Spark 3.0.0 with Hadoop 2.7.

I have set the paths for SPARK_HOME and JAVA_HOME, and added their "/bin" directories to the "Path" environment variable.

C:\Users\mikes>java -version
java version "1.8.0_251"
Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)
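
For reference, the same variables can also be set from inside the notebook before any Spark import. This is only a minimal sketch: both install paths below are assumptions based on the versions mentioned above and have to be adjusted to the actual machine.

import os

# Assumed install locations; adjust to the actual paths on this machine.
os.environ['JAVA_HOME'] = r'C:\Program Files\Java\jdk1.8.0_251'
os.environ['SPARK_HOME'] = r'c:\spark\spark-3.0.0-preview2-bin-hadoop2.7'

# Prepend the bin directories so the gateway launcher can find java.exe and spark-submit.
os.environ['PATH'] = os.pathsep.join([
    os.path.join(os.environ['JAVA_HOME'], 'bin'),
    os.path.join(os.environ['SPARK_HOME'], 'bin'),
    os.environ['PATH'],
])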

In the Anaconda PowerShell, pyspark works:

(base) PS C:\Users\mikes> pyspark
Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
20/06/05 07:14:56 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 3.0.0-preview2
      /_/

Using Python version 3.6.5 (default, Mar 29 2018 13:32:41)
SparkSession available as 'spark'.
>>>
>>> nums = sc.parallelize([1,2,3,4])
>>> nums.map(lambda x: x*x).collect()
[1, 4, 9, 16]
>>>           

The next step is to run pyspark in my Jupyter notebook. I have installed findspark, and this is my starting code:

import findspark
findspark.init(r'c:\spark\spark-3.0.0-preview2-bin-hadoop2.7')
# findspark.init() with no argument doesn't work here; the install path has to be passed explicitly.
findspark.find()
import pyspark
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession

conf = pyspark.SparkConf().setAppName('appName').setMaster('local')
sc = pyspark.SparkContext(conf=conf) #Here is the error
spark = SparkSession(sc)

The following error is shown:

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-6-c561ad39905c> in <module>()
      4 conf = pyspark.SparkConf().setAppName('appName').setMaster('local')
      5 sc = pyspark.SparkConf()
----> 6 sc = pyspark.SparkContext(conf=conf)
      7 spark = SparkSession(sc)

c:\spark\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\context.py in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
    125                 " is not allowed as it is a security risk.")
    126 
--> 127         SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
    128         try:
    129             self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,

c:\spark\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\context.py in _ensure_initialized(cls, instance, gateway, conf)
    317         with SparkContext._lock:
    318             if not SparkContext._gateway:
--> 319                 SparkContext._gateway = gateway or launch_gateway(conf)
    320                 SparkContext._jvm = SparkContext._gateway.jvm
    321 

c:\spark\spark-3.0.0-preview2-bin-hadoop2.7\python\pyspark\java_gateway.py in launch_gateway(conf, popen_kwargs)
    103 
    104             if not os.path.isfile(conn_info_file):
--> 105                 raise Exception("Java gateway process exited before sending its port number")
    106 
    107             with open(conn_info_file, "rb") as info:

Exception: Java gateway process exited before sending its port number

I have seen another question similar to this one, but maybe my case is different, because I have already tried those solutions, as follows:

- setting a different value for PYSPARK_SUBMIT_ARGS, although I am not sure whether I did it correctly:

os.environ['PYSPARK_SUBMIT_ARGS']= "--master spark://localhost:8888"
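
For comparison, when PYSPARK_SUBMIT_ARGS is set by hand it has to end with the token pyspark-shell, and local[*] is the usual master for a single-machine notebook; a minimal sketch of that form:

import os

# 'pyspark-shell' must be the last token, otherwise launch_gateway cannot build
# the spark-submit command and the gateway exits before sending its port number.
# 'local[*]' runs Spark locally; port 8888 is Jupyter's own port, not a Spark master.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--master local[*] pyspark-shell'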

The other solutions were:

- setting the paths for JAVA_HOME and SPARK_HOME (already done)
- installing Java 8 (not Java 10)
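
A small standard-library check (a sketch, nothing Spark-specific) can show whether the Jupyter kernel actually sees those variables, since a kernel started from Anaconda does not always inherit the shell environment:

import os
import shutil

# Print what this kernel actually sees.
for var in ('JAVA_HOME', 'SPARK_HOME'):
    print(var, '=', os.environ.get(var, '<not set in this kernel>'))

# launch_gateway needs a Java executable; None here usually means the gateway
# will exit before sending its port number.
print('java on PATH:', shutil.which('java'))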

I have already spent several hours on this, and even reinstalled Anaconda because I had deleted an environment.

1 answer

养焱
2023-03-14

After a week of searching for different ways to resolve the exception, I finally found another tutorial, and it solved my problem. The answer is that Anaconda itself was the issue; the environment variables and paths were exactly the same. I then installed Python and the notebook directly on Windows (without Anaconda), and the problem is now solved.
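
As a quick check that the gateway now starts, a minimal session can be created and a trivial job run. This is only a sketch, assuming the same Spark 3.0.0-preview2 install path as in the question:

import findspark
findspark.init(r'c:\spark\spark-3.0.0-preview2-bin-hadoop2.7')  # assumed install path from the question

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master('local[*]')
         .appName('gateway-check')
         .getOrCreate())

# If the Java gateway starts correctly, this prints 4 instead of raising the exception.
print(spark.sparkContext.parallelize([1, 2, 3, 4]).count())
spark.stop()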
