
Installing SparkR Locally on Windows and RStudio

况承福
2023-12-01

Reposted from http://blog.sina.com.cn/s/blog_614408630102vyom.html
Original article: http://www.r-bloggers.com/installing-and-starting-sparkr-locally-on-windows-os-and-rstudio/

Spark has become one of the most popular big data tools. This post walks through installing SparkR step by step so that you can run it locally in about five minutes.

Requirements:
Java 6+ (download: http://www.java.com/en/download/chrome.jsp)
R and RStudio
Rtools (download: https://github.com/stan-dev/rstan/wiki/Install-Rtools-for-Windows)

Step 1: Download Spark

In a browser, open http://spark.apache.org/ and click the green "Download Spark" button on the right.

You will see the download page. Follow steps 1 through 3 there to build the download link.

Under "2. Choose a package type", pick a pre-built type. Since we will run Spark locally on Windows, choose "Pre-built for Hadoop 2.6 and later".

Under "3. Choose a download type", choose "Direct Download".

Once these are set, a download link appears under "4. Download Spark".

Download the archive to your machine.

Step 2: Extract the archive

Extract it to "C:/Apache/spark-1.4.1".

Step 3: Run SparkR from the command line

Open a command prompt (Start menu, then type cmd in the search box) and change into the Spark directory:

cd C:\Apache\spark-1.4.1

Then run the command ".\bin\sparkR".

If it works, you will see some startup logs, and after roughly 15 seconds the message "Welcome to SparkR!" appears.

Set the environment variable:

Right-click "My Computer" and choose "Properties".

Select "Advanced system settings".

Click "Environment Variables", find Path under "System variables", and append "C:\ProgramData\Oracle\Java\javapath;".
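To confirm the Path change took effect, you can check that java resolves from a fresh command prompt; this is just a sanity check, and the version output will vary with the JDK installed on your machine:

```shell
:: Open a NEW cmd window (existing windows keep the old Path), then run:
java -version
```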

Step 4: Run SparkR in RStudio

# A complete example follows.

# Set the system environment variables
Sys.setenv(SPARK_HOME = "C:/Apache/spark-1.4.1")
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))

Note: copy the SparkR folder from the lib directory under spark-1.4.1 into R's library directory; otherwise the SparkR package cannot be loaded directly.

# Load the SparkR library
library(SparkR)

# Create a Spark context and a SQL context
sc <- sparkR.init(master = "local")
sqlContext <- sparkRSQL.init(sc)

# Create a SparkR DataFrame from the built-in faithful dataset
DF <- createDataFrame(sqlContext, faithful)
head(DF)

# Create a simple local data.frame
localDF <- data.frame(name = c("John", "Smith", "Sarah"), age = c(19, 23, 18))

# Convert the local data.frame to a SparkR DataFrame
df <- createDataFrame(sqlContext, localDF)

# Print its schema
printSchema(df)

root
 |-- name: string (nullable = true)
 |-- age: double (nullable = true)
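Beyond printSchema, the SparkR 1.4 DataFrame API offers select and filter for column projection and row filtering. A brief sketch using the df defined above (this assumes the SparkContext from earlier is still running, so it is not standalone):

```r
# Project a single column and filter rows (SparkR 1.4.x DataFrame API).
head(select(df, df$name))       # only the name column
head(filter(df, df$age > 18))   # rows where age exceeds 18
```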

# Create a DataFrame from a JSON file
path <- file.path(Sys.getenv("SPARK_HOME"), "examples/src/main/resources/people.json")
peopleDF <- jsonFile(sqlContext, path)
printSchema(peopleDF)

# Register this DataFrame as a table
registerTempTable(peopleDF, "people")

# SQL statements can be run using the sql method provided by sqlContext
teenagers <- sql(sqlContext, "SELECT name FROM people WHERE age >= 13 AND age <= 19")

# Call collect to get a local data.frame
teenagersLocalDF <- collect(teenagers)

# Print the teenagers in our dataset
print(teenagersLocalDF)

# Stop the SparkContext when done
sparkR.stop()

# Another example: word count
# Source: http://www.cnblogs.com/hseagle/p/3998853.html

sc <- sparkR.init(master = "local", "RwordCount")
lines <- textFile(sc, "README.md")

# Split each line into words
words <- flatMap(lines,
                 function(line) {
                   strsplit(line, " ")[[1]]
                 })

# Map each word to a (word, 1) pair, then sum the counts per word
wordCount <- lapply(words, function(word) { list(word, 1L) })
counts <- reduceByKey(wordCount, "+", 2L)

output <- collect(counts)
for (wordcount in output) {
  cat(wordcount[[1]], ": ", wordcount[[2]], "\n")
}
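To see what the distributed pipeline above computes, here is an equivalent word count in plain base R, with no Spark required; the sample input lines are made up for illustration, and this is useful for checking results on small data:

```r
# Plain-R word count: flatten lines into words and tally with table().
lines <- c("to be or not", "to be")      # sample input (hypothetical)
words <- unlist(strsplit(lines, " "))    # split each line, flatten to a vector
counts <- table(words)                   # count occurrences per word
for (w in names(counts)) {
  cat(w, ": ", counts[[w]], "\n", sep = "")
}
```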

References:

  1. Installation: http://blog.csdn.net/jediael_lu/article/details/45310321
  2. Installation: http://thinkerou.com/2015-05/How-to-Build-Spark-on-Windows/
  3. hseagle's blog: http://www.cnblogs.com/hseagle/p/3998853.html
  4. Tutorial: http://www.r-bloggers.com/a-first-look-at-spark/
  5. Tutorial: http://www.danielemaasit.com/getting-started-with-sparkr/
  6. Troubleshooting: http://stackoverflow.com/questions/10077689/r-cmd-on-windows-7-error-r-is-not-recognized-as-an-internal-or-external-comm
