Question:

df.toPandas() - Failed to locate the winutils binary in the hadoop binary path

金阳华
2023-03-14

I am using PyCharm and PySpark to process a huge text file.

This is what I am trying to do:

import os

spark_home = os.environ.get('SPARK_HOME', None)
os.environ["SPARK_HOME"] = r"C:\spark-2.3.0-bin-hadoop2.7"  # raw string keeps the backslashes literal
import pyspark
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
conf = SparkConf()
sc = SparkContext(conf=conf)
spark = SparkSession.builder.config(conf=conf).getOrCreate()
import pandas as pd
ip = spark.read.format("csv").option("inferSchema", "true").option("header", "true").load(r"some other file.csv")
kw = pd.read_csv(r"some file.csv", encoding='ISO-8859-1', index_col=False, error_bad_lines=False)
# drop every row whose Content matches any keyword, case-insensitively
for i in range(len(kw)):
    rx = '(?i)' + kw.Keywords[i]
    ip = ip.where(~ip['Content'].rlike(rx))
op = ip.toPandas()
op.to_csv(r'something.csv', encoding='utf-8')

However, PyCharm throws me this error:

To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2018-06-08 11:31:52 WARN  Utils:66 - Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
Traceback (most recent call last):
  File "C:/Users/mainak.paul/PycharmProjects/Concept_Building_SIP/ThemeSparkUncoveredGames.py", line 17, in <module>
    op = ip.toPandas()
  File "C:\Python27\lib\site-packages\pyspark\sql\dataframe.py", line 1966, in toPandas
    pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns)
  File "C:\Python27\lib\site-packages\pyspark\sql\dataframe.py", line 466, in collect
    port = self._jdf.collectToPython()
  File "C:\Python27\lib\site-packages\py4j\java_gateway.py", line 1160, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "C:\Python27\lib\site-packages\pyspark\sql\utils.py", line 63, in deco
    return f(*a, **kw)
  File "C:\Python27\lib\site-packages\py4j\protocol.py", line 320, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o30.collectToPython.
: java.lang.IllegalArgumentException

I just don't understand why toPandas() isn't working. The Spark version is 2.3. Is there some change in this version that I'm not aware of? I ran this code with Spark 2.2 on another machine and it worked fine.

I even changed the export line to this:

op = ip.where(ip['Content'].rlike(rx)).toPandas()

I still get the same error. What am I doing wrong? Is there any other way to export a pyspark.sql.dataframe.DataFrame to .csv without sacrificing performance?

Edit: I also tried using:

ip.write.csv('file.csv')

Now I get the following error:

Traceback (most recent call last):
  File "somefile.csv", line 21, in <module>
    ip.write.csv('somefile.csv')
  File "C:\Python27\lib\site-packages\pyspark\sql\readwriter.py", line 883, in csv
    self._jwrite.csv(path)
  File "C:\Python27\lib\site-packages\py4j\java_gateway.py", line 1160, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "C:\Python27\lib\site-packages\pyspark\sql\utils.py", line 63, in deco
    return f(*a, **kw)
  File "C:\Python27\lib\site-packages\py4j\protocol.py", line 320, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o102.csv.

Adding the stack trace:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/06/11 16:53:14 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable C:\spark-2.3.0-bin-hadoop2.7\bin\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
    at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2430)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2430)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2430)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:295)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:488)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:236)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.base/java.lang.Thread.run(Thread.java:844)
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/C:/opt/spark/spark-2.2.0-bin-hadoop2.7/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
18/06/11 16:53:14 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Traceback (most recent call last):
  File "C:/Users/mainak.paul/PycharmProjects/Concept_Building_SIP/ThemeSparkUncoveredGames.py", line 22, in <module>
    op = ip.toPandas().collect()
  File "C:\Python27\lib\site-packages\pyspark\sql\dataframe.py", line 1937, in toPandas
    if self.sql_ctx.getConf("spark.sql.execution.pandas.respectSessionTimeZone").lower() \
  File "C:\Python27\lib\site-packages\pyspark\sql\context.py", line 142, in getConf
    return self.sparkSession.conf.get(key, defaultValue)
  File "C:\Python27\lib\site-packages\pyspark\sql\conf.py", line 46, in get
    return self._jconf.get(key)
  File "C:\Python27\lib\site-packages\py4j\java_gateway.py", line 1160, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "C:\Python27\lib\site-packages\pyspark\sql\utils.py", line 63, in deco
    return f(*a, **kw)
  File "C:\Python27\lib\site-packages\py4j\protocol.py", line 320, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o86.get.
: java.util.NoSuchElementException: spark.sql.execution.pandas.respectSessionTimeZone
    at org.apache.spark.sql.internal.SQLConf$$anonfun$getConfString$2.apply(SQLConf.scala:1089)
    at org.apache.spark.sql.internal.SQLConf$$anonfun$getConfString$2.apply(SQLConf.scala:1089)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.internal.SQLConf.getConfString(SQLConf.scala:1089)
    at org.apache.spark.sql.RuntimeConfig.get(RuntimeConfig.scala:74)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.base/java.lang.Thread.run(Thread.java:844)


Process finished with exit code 1

1 Answer

耿俊
2023-03-14

You need to change the code as follows:

import os

spark_home = os.environ.get('SPARK_HOME', None)
os.environ["SPARK_HOME"] = r"C:\spark-2.3.0-bin-hadoop2.7"  # raw string keeps the backslashes literal
import pyspark
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
conf = SparkConf()
sc = SparkContext(conf=conf)
spark = SparkSession.builder.config(conf=conf).getOrCreate()
import pandas as pd
ip = spark.read.format("csv").option("inferSchema", "true").option("header", "true").load(r"some other file.csv")
kw = pd.read_csv(r"some file.csv", encoding='ISO-8859-1', index_col=False, error_bad_lines=False)
for i in range(len(kw)):
    rx = '(?i)' + kw.Keywords[i]
    ip = ip.where(~ip['Content'].rlike(rx))
op = ip.toPandas()  # toPandas() already collects the result to the driver
op.to_csv(r'something.csv', encoding='utf-8')

toPandas() materializes the DataFrame by collecting it to the driver, so no separate collect() call is needed after it. It should not be used on large datasets, though: toPandas() moves all of the data into the driver process, which can crash if the result does not fit in driver memory.
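
For instance, a minimal sketch of such a size guard (the row threshold and the something_dir output path are illustrative assumptions, not values from your code):

# Only collect to the driver when the filtered result is plausibly small.
if ip.count() <= 1000000:  # assumed driver-memory budget; tune for your machine
    op = ip.toPandas()
    op.to_csv(r'something.csv', encoding='utf-8')
else:
    # Otherwise let the executors write the CSV without collecting it.
    ip.write.option("header", "true").csv(r'something_dir')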

As for the line ip.write.csv('file.csv'): I think it should be changed to ip.write.csv('file:///home/your-user-name/file.csv') to save the file on the local Linux filesystem,

or ip.write.option("header", "true").csv("file:///C:/out.csv") to save the file on the local Windows filesystem (if you are running Spark and Hadoop on Windows),

or ip.write.csv('hdfs:///user/your-user/file.csv') to save the file to HDFS.
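
In all of these cases Spark writes a directory of part files rather than a single CSV. If a single file is wanted, a small sketch (the file:///C:/out path is only an example):

# coalesce(1) funnels all rows through one task, so Spark emits a single
# part file inside the output directory; convenient, but slow for big data.
(ip.coalesce(1)
   .write
   .option("header", "true")
   .mode("overwrite")       # replace the output directory if it already exists
   .csv("file:///C:/out"))  # creates C:\out\ containing one part-*.csv file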

Please let me know whether this solution works for you.

Update

Follow this link and download the winutils.exe file: https://github.com/steveloughran/winutils/tree/master/hadoop-2.7.1/bin. Create a folder named hadoop on the C drive, and inside the hadoop folder create another folder named bin. Put the winutils.exe you downloaded into that directory. Then edit the system environment variables and add the variable HADOOP_HOME (pointing to C:\hadoop) to the list. Once that is done, you will no longer get the winutils/hadoop error from Spark.
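
If you would rather not edit the system settings, the same variable can be set from Python before the SparkContext is created; a minimal sketch, assuming the C:\hadoop\bin layout described above:

import os

# Assumed layout from the steps above: winutils.exe sits in C:\hadoop\bin.
os.environ["HADOOP_HOME"] = r"C:\hadoop"
# Hadoop looks up %HADOOP_HOME%\bin\winutils.exe, so HADOOP_HOME is what
# matters; putting the bin folder on PATH as well does no harm.
os.environ["PATH"] += os.pathsep + r"C:\hadoop\bin"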
