Spark: understanding the spark-submit command

https://blog.csdn.net/weixin_38750084/article/details/106973247

spark-submit can submit a job to a Spark standalone cluster for execution, or to a Hadoop YARN cluster.

Configuration in code:

util:

import org.apache.spark.serializer.KryoSerializer
import org.apache.spark.sql.SparkSession

object SparkContextUtil {
  /**
   * Creates and configures a SparkSession instance.
   *
   * @param appName the application name
   * @param params  extra Spark configuration properties to apply
   * @return a configured SparkSession
   */
  def createSparkContext(appName: String, params: Map[String, String] = Map.empty): SparkSession = {
    // Entry point
    val spark: SparkSession = SparkSession.builder()
      .appName(appName)
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .master("local[*]")
      .config("spark.serializer", classOf[KryoSerializer].getName)
      .config("spark.debug.maxToStringFields", "100")
      .enableHiveSupport()
      .getOrCreate()
    // Apply any configuration passed in by the caller
    params.foreach { case (key, value) => spark.conf.set(key, value) }
    spark
  }
}

Usage:

import org.apache.log4j.{Level, Logger}
import org.slf4j.LoggerFactory

object BusinessDataCombineErpJobs {
  Logger.getLogger("org").setLevel(Level.WARN)
  val logger = LoggerFactory.getLogger(BusinessDataCombineErpJobs.getClass.getSimpleName)

  def main(args: Array[String]): Unit = {
    val spark = SparkContextUtil.createSparkContext(TestSparkSql.getClass.getSimpleName)
    // The underlying SparkContext, for creating RDDs and managing cluster resources
    val sc = spark.sparkContext
    println("--- data processing started ---")
    test(spark)
    println("--- data processing finished ---")
    spark.close()
  }
}
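Because createSparkContext takes a params map, per-job overrides can be passed without touching the utility. A minimal sketch of such a call (the job name and property values here are illustrative; note that spark.conf.set only affects runtime-mutable properties such as SQL settings, since the session already exists):

// Hypothetical call with per-job overrides of runtime-mutable SQL properties
val spark = SparkContextUtil.createSparkContext(
  "AdHocReport",
  Map(
    "spark.sql.shuffle.partitions"         -> "200",
    "spark.sql.autoBroadcastJoinThreshold" -> "10485760"
  )
)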

1. Example

The simplest example: with Spark deployed in standalone mode, submit a job to run against the local standalone master.

./bin/spark-submit \
  --master spark://localhost:7077 \
  examples/src/main/python/pi.py

If Hadoop is deployed and YARN has been started, a Spark job can be submitted to YARN as in the example below.

Note that Spark must be built with YARN support. The build command is:

build/mvn -Pyarn -Phadoop-2.x -Dhadoop.version=2.x.x -DskipTests clean package

Here 2.x is the Hadoop version number. Once the build finishes, the following command submits a job to the Hadoop YARN cluster.

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 1g \
  --executor-memory 1g \
  --executor-cores 1 \
  --queue thequeue \
  examples/target/scala-2.11/jars/spark-examples*.jar 10

Note: the trailing 10 is an argument passed to the application.
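For context, everything after the jar path on the spark-submit line arrives in the application's main method as the args array; SparkPi, for example, interprets its single argument as the number of partitions. A minimal sketch of the same pattern (the object name is hypothetical):

object SubmitArgsSketch {
  def main(args: Array[String]): Unit = {
    // Arguments after the jar path on the spark-submit line land here, in order
    val partitions = if (args.length > 0) args(0).toInt else 2
    println(s"running with $partitions partitions")
  }
}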

Production examples:

spark2-submit --class bi.tag.TSimilarTagsTable \
  --master yarn-client \
  --executor-memory 6G \
  --num-executors 5 \
  --executor-cores 2 \
  /var/lib/hadoop-hdfs/seijing/ble/tag/spark-sql/pf-spark-master/pi/target/pi-1.0.1-SNAPSHOT.jar

spark2-submit --class resume.mlib.RcoAID \
  --master yarn \
  --deploy-mode client \
  --num-executors 4 \
  --executor-memory 10G \
  --executor-cores 3 \
  --driver-memory 10g \
  --conf "spark.executor.extraJavaOptions='-Xss512m'" \
  --driver-java-options "-Xss512m" \
  /var/lib/hadoop-hdfs/als_ecommend/reserver-1.0-SNAPSHOT.jar $1 $2 >> /var/lib/hadoop-hdfs/als_ecommend/logs/log_spark_out_`date +\%Y\%m\%d`.log

Notes:

(1) $1 and $2 are arguments passed in by whatever invokes this script. For example, in /bin/bash /root/combine.sh aa bb, the values aa and bb are the arguments.

(2) The log files produced look like this:

-rw-r--r-- 1 root root   2375 Feb 27 15:25 log_spark_out_20200227.log
-rw-r--r-- 1 root root 712272 Feb 28 17:03 log_spark_out_20200228.log
-rw-r--r-- 1 root root   2375 Mar  9 15:36 log_spark_out_20200309.log
-rw-r--r-- 1 root root 712463 Mar 10 20:24 log_spark_out_20200310.log
-rw-r--r-- 1 root root  10578 Mar 12 18:51 log_spark_out_20200312.log
-rw-r--r-- 1 root root 468018 Mar 13 10:06 log_spark_out_20200313.log
-rw-r--r-- 1 root root 712602 Mar 19 18:26 log_spark_out_20200319.log

Only output from print and from DataFrame show() ends up in the log file. Output written through logger is visible on the console while the job runs, but is not captured in the log file (the >> redirection captures stdout only, while Spark's default log4j console appender writes to stderr).

2. spark-submit parameter reference

--master                 the master URL, i.e. where the job runs, e.g. spark://host:port, yarn, local
--deploy-mode            launch the driver locally (client) or on the cluster (cluster); default is client
--class                  the application's main class (Java/Scala applications only)
--name                   the application name
--jars                   comma-separated local jars to include on the driver and executor classpaths
--packages               Maven coordinates of jars to include on the driver and executor classpaths
--exclude-packages       packages to exclude, to avoid dependency conflicts
--repositories           additional remote repositories
--conf PROP=VALUE        set a Spark configuration property, e.g. --conf spark.executor.extraJavaOptions="-XX:MaxPermSize=256m"
--properties-file        file to load extra properties from; default is conf/spark-defaults.conf
--driver-memory          driver memory; default 1G
--driver-java-options    extra Java options for the driver
--driver-library-path    extra library path for the driver
--driver-class-path      extra classpath for the driver
--driver-cores           number of driver cores; default 1 (YARN or standalone)
--executor-memory        memory per executor; default 1G
--total-executor-cores   total cores across all executors (Mesos or standalone only)
--num-executors          number of executors to launch; default 2 (YARN only)
--executor-cores         cores per executor (YARN or standalone)
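One way to check from inside the application which of these submit-time settings actually took effect is to read them back through the session. A minimal sketch, reusing the spark session created earlier (the property name is just an example):

// Sketch: read submit-time settings back inside the application
val master         = spark.sparkContext.master
val executorMemory = spark.conf.getOption("spark.executor.memory").getOrElse("(default: 1g)")
println(s"master=$master, executor memory=$executorMemory")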

In yarn-client mode the job ran without problems (even though the code configured .master("local[*]")).

The script:

spark2-submit \
  --class bi.tag.BusinessDataCombineErpJobs \
  --master yarn-client \
  --executor-memory 3G \
  --num-executors 5 \
  --executor-cores 2 \
  /var/business_data/p-1.0.1-SNAPSHOT.jar > /var/business_data/business_data.log

With .master("local[*]") removed from the code, the job still runs successfully.

But with .master("local[*]") left in the code, I changed the script to use:

--master yarn \
--deploy-mode cluster \

and it failed.

spark2-submit \
  --class bi.tag.BusinessDataCombineErpJobs \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 1g \
  --executor-memory 3g \
  --executor-cores 2 \
  /var/business_data/p-1.0.1-SNAPSHOT.jar 10

Note: the 10 is a custom argument consumed by BusinessDataCombineErpJobs.

The error log:

Azkaban output:

28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO yarn.Client: Application report for application_1583730534669_117324 (state: FAILED)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO yarn.Client:
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - client token: N/A
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - diagnostics: Application application_1583730534669_117324 failed 2 times due to AM Container for appattempt_1583730534669_117324_000002 exited with exitCode: 13
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - For more detailed output, check application tracking page:http://pf-bigdata4:8088/proxy/application_1583730534669_117324/Then, click on links to logs of each attempt.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Diagnostics: Exception from container-launch.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Container id: container_e87_1583730534669_117324_02_000001
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Exit code: 13
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Stack trace: ExitCodeException exitCode=13:
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.util.Shell.runCommand(Shell.java:604)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.util.Shell.run(Shell.java:507)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:789)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.util.concurrent.FutureTask.run(FutureTask.java:266)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at java.lang.Thread.run(Thread.java:748)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO -
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO -
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Container exited with a non-zero exit code 13
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Failing this attempt. Failing the application.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - ApplicationMaster host: N/A
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - ApplicationMaster RPC port: -1
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - queue: default
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - start time: 1590649410241
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - final status: FAILED
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - tracking URL: http://pf-bigdata4:8088/cluster/app/application_1583730534669_117324
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - user: root
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Exception in thread "main" org.apache.spark.SparkException: Application application_1583730534669_117324 finished with failed status
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.yarn.Client.run(Client.scala:1153)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1568)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:892)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO util.ShutdownHookManager: Shutdown hook called
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-eb1e1b60-ef09-4a58-8e5f-dc988411999e
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - 20/05/28 15:04:20 INFO util.ShutdownHookManager: Deleting directory /huayong/data/tmp/spark-dba79ec3-1f27-4da0-8e8e-5a98c31c156f
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Process completed unsuccessfully in 55 seconds.
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine ERROR - Job run failed!
java.lang.RuntimeException: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1
    at azkaban.jobExecutor.ProcessJob.run(ProcessJob.java:305)
    at azkaban.execapp.JobRunner.runJob(JobRunner.java:787)
    at azkaban.execapp.JobRunner.doRun(JobRunner.java:602)
    at azkaban.execapp.JobRunner.run(JobRunner.java:563)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1
    at azkaban.jobExecutor.utils.process.AzkabanProcess.run(AzkabanProcess.java:125)
    at azkaban.jobExecutor.ProcessJob.run(ProcessJob.java:297)
    ... 8 more
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine ERROR - azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1 cause: azkaban.jobExecutor.utils.process.ProcessFailureException: Process exited with code 1
28-05-2020 15:04:20 CST bi_cal_business_data_table_combine INFO - Finishing job bi_cal_business_data_table_combine at 1590649460777 with status FAILED

 

Viewing the logs with yarn logs -applicationId application_1583730534669_117324 shows:

20/05/28 15:04:17 WARN lazy.LazyStruct: Extra bytes detected at the end of the row! Ignoring similar problems.
20/05/28 15:04:17 WARN lazy.LazyStruct: Extra bytes detected at the end of the row! Ignoring similar problems.
20/05/28 15:04:19 ERROR yarn.ApplicationMaster: Uncaught exception:
java.lang.IllegalStateException: User did not initialize spark context!
    at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:467)
    at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$runImpl(ApplicationMaster.scala:301)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply$mcV$sp(ApplicationMaster.scala:241)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:241)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:241)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:782)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
    at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:781)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:240)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:806)
    at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)

Removing the custom trailing argument 10 from the last line of the script made no difference; the same error appeared.

 

But after removing .master("local[*]") from the code, the job succeeds in both client and cluster modes.
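One way to keep local development convenient without breaking cluster mode is to hard-code nothing and fall back to local[*] only when spark-submit has not supplied a master. A sketch of that approach (not the utility above verbatim; relies on SparkConf loading the spark.* system properties that spark-submit sets):

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Fall back to local[*] only when no master was supplied, so the same jar
// runs unchanged in local, yarn-client, and yarn-cluster modes.
val conf = new SparkConf() // loads spark.* system properties set by spark-submit
val builder = SparkSession.builder().config(conf).appName("BusinessDataCombineErpJobs")
val spark =
  if (conf.contains("spark.master")) builder.getOrCreate()
  else builder.master("local[*]").getOrCreate()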

 

Summary:

1. With local[*] removed from the code, both modes succeed; with it left in, only client mode works. (In yarn-cluster mode the driver runs inside the YARN ApplicationMaster, which waits for the user code to register a SparkContext against the YARN master; a hard-coded local[*] creates a local context instead, so the AM fails with "User did not initialize spark context!" and exit code 13.)

2. In cluster mode the driver runs inside the cluster, on whichever node YARN happens to allocate, and uses that machine's resources; in client mode the driver runs on the machine that submitted the job and uses its resources.

3. Working scripts:

client:

spark2-submit \
  --class bi.tag.BusinessDataCombineErpJobs \
  --master yarn-client \
  --driver-memory 1g \
  --executor-memory 3g \
  --executor-cores 2 \
  /var/business_data/pi-1.0.1-SNAPSHOT-yarn-cluster.jar

cluster:

spark2-submit \
  --class bi.tag.BusinessDataCombineErpJobs \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 1g \
  --executor-memory 3g \
  --executor-cores 2 \
  /var/business_data/pi-1.0.1-SNAPSHOT-yarn-cluster.jar

 

A Spark Streaming submission example:

spark2-submit --master yarn-client \
  --conf spark.driver.memory=2g \
  --class com.tzb.sparkstreaming.prod.DataChangeStreaming \
  --executor-memory 8G \
  --num-executors 5 \
  --executor-cores 2 \
  /test/spark-test-jar-with-dependencies.jar >> /test/sparkstreaming_datachange.log
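A streaming entry point submitted this way follows the same shape as the batch jobs above. A minimal hypothetical skeleton (the object name, source, and batch interval are illustrative, not the real DataChangeStreaming code):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DataChangeStreamingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DataChangeStreaming")
    val ssc = new StreamingContext(conf, Seconds(10)) // 10-second micro-batches
    // Placeholder source and output operation; the real job's logic goes here
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.count().print()
    ssc.start()
    ssc.awaitTermination()
  }
}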

Reference:

https://www.cnblogs.com/weiweifeng/p/8073553.html

 


