export JAVA_HOME=/d/java/jdk1.7.0_75
Run the build command (an sbt environment is required):

bin/server_package.sh local

The server itself is launched through spark-submit; LOG_DIR is the directory the logs are written to.
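The local argument selects a deploy settings file, config/local.sh, and LOG_DIR is one of the variables defined there. A sketch of such a file, following the layout of the spark-jobserver config/local.sh.template (the concrete paths and versions below are assumptions):

# config/local.sh -- deploy settings read by bin/server_package.sh
APP_USER=spark                     # user the job server runs as (assumed)
APP_GROUP=spark
INSTALL_DIR=/opt/spark-jobserver   # where the packaged server is unpacked (assumed)
LOG_DIR=/var/log/spark-jobserver   # the log location mentioned above (assumed)
SPARK_VERSION=1.6.2                # assumed; match your cluster
SPARK_HOME=/opt/spark              # assumed
SPARK_CONF_DIR=$SPARK_HOME/conf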
1. Create a context:

curl -d "" 'ip:port/contexts/contextName?context-factory=spark.jobserver.context.SQLContextFactory'

- ip:port: the host and port the spark job server was started on
- contextName: the name of the context; later operations refer to it, and you can also search for the running application by this name on the JobTracker page
- context-factory: which factory initializes the context:
  - spark.jobserver.context.SQLContextFactory initializes a SQLContext
  - spark.jobserver.context.HiveContextFactory initializes a HiveContext
  - spark.jobserver.context.DefaultSparkContextFactory initializes a SparkContext
  - spark.jobserver.context.StreamingContextFactory initializes a StreamingContext
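For example, assuming the server listens on its default port 8090, a HiveContext with explicit resources can be created like this (num-cpu-cores and memory-per-node are the standard context-creation parameters; the context name matches the one that appears in the examples below):

curl -d "" 'localhost:8090/contexts/hive-context-test?context-factory=spark.jobserver.context.HiveContextFactory&num-cpu-cores=2&memory-per-node=512m'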
2. Upload the application jar:

curl --data-binary @/xx/xx/job-server-extras_2.10-0.7.0-SNAPSHOT.jar ip:port/jars/appName

- jars/appName: registers the jar under appName; later operations refer to it.
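To confirm the upload took effect, the registered jars can be listed; GET /jars returns each appName together with its upload time (response shape sketched from the spark-jobserver REST API):

curl ip:port/jars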
3. Submit a job:

curl -d "sql=\"show databases\"" 'ip:port/jobs?appName=xxx&classPath=spark.jobserver.SqlTestJob&context=contextName&sync=true'

- appName: the name the jar was registered under in step 2
- classPath: the job class to run. You can supply your own class; it only needs to implement the runJob method of the interface spark.jobserver.api.SparkJobBase, e.g. executing SQLContext.sql(sql).collect() (see the sketch after this list)
- context: the contextName to submit to, created in step 1
- sync: whether to run in synchronous mode. In sync mode a long-running job can exceed the default wait and return an error such as:

{"status": "ERROR", "result": {"message": "Ask timed out on [Actor[akka://JobServer/user/context-supervisor/hive-context-test#91063205]] after [10000 ms]", "errorClass": "akka.pattern.AskTimeoutException", "stack": ["akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:334)", ...]}

If that happens, append a timeout=xx parameter (in secs) to the request to allow a longer wait.
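A minimal sketch of such a job class, modeled on the bundled spark.jobserver.SqlTestJob and written against the classic SparkSqlJob trait from job-server-extras (the object name MySqlJob is hypothetical):

import com.typesafe.config.Config
import org.apache.spark.sql.SQLContext
import spark.jobserver.{SparkJobInvalid, SparkJobValid, SparkJobValidation, SparkSqlJob}

// Hypothetical job class. SparkSqlJob fixes the context type to SQLContext,
// so the server hands runJob the SQLContext created in step 1.
object MySqlJob extends SparkSqlJob {

  // Reject the request up front if the "sql" parameter is missing.
  def validate(sqlContext: SQLContext, config: Config): SparkJobValidation =
    if (config.hasPath("sql")) SparkJobValid else SparkJobInvalid("no sql parameter given")

  // Run the statement passed via -d "sql=\"...\"" and return the rows;
  // the return value is serialized into the HTTP response.
  def runJob(sqlContext: SQLContext, config: Config): Any =
    sqlContext.sql(config.getString("sql")).collect()
}

Build this into the jar uploaded in step 2 and point classPath at it, e.g. classPath=com.example.MySqlJob.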
To check a job's status and result:

curl -v 'ip:port/jobs/jobId'

{
  "duration": "24.463 secs",
  "classPath": "spark.jobserver.HiveTestJob",
  "startTime": "2016-11-17T11:01:09.249+08:00",
  "context": "hive-context-test",
  "result": ["[2,www]", ...],
  "status": "FINISHED",
  "jobId": "5bc87741-c289-4f13-8f5c-de044256fcc7"
}
Other useful management endpoints:

curl http://host:port/contexts                        # list all running contexts
curl -X DELETE http://host:port/contexts/name         # stop the named context and the jobs running in it
curl -X PUT http://host:port/contexts?reset=reboot    # stop and recreate all contexts
curl http://host:port/jobs                            # list recently submitted jobs
curl -X DELETE http://host:port/jobs/jobId            # kill the running job