scala - SPARK_EXECUTOR_INSTANCES not working in SPARK SHELL, YARN CLIENT MODE


I am new to Spark.

I am trying to run Spark on YARN in yarn-client mode.

Spark version = 1.0.2, Hadoop version = 2.2.0

The YARN cluster has 3 live nodes.

Properties set in spark-env.sh:

SPARK_EXECUTOR_MEMORY=1g

SPARK_EXECUTOR_INSTANCES=3

SPARK_EXECUTOR_CORES=1

SPARK_DRIVER_MEMORY=2g

Command used: /bin/spark-shell --master yarn-client

But after spark-shell comes up, it registers only 1 executor, with the default memory assigned to it.

I confirmed via the Spark web UI that there is only 1 executor, and it is on the master node (the YARN ResourceManager node) only.
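You can also sanity-check this from the shell itself. A minimal sketch, using the sc that spark-shell creates (each entry of getExecutorMemoryStatus is a block manager's host:port, so the count includes the driver as well as the executors):

// Typed at the spark-shell prompt; prints one line per registered block manager.
sc.getExecutorMemoryStatus.foreach { case (blockManager, (maxMem, remaining)) =>
  println(s"$blockManager  max=$maxMem  remaining=$remaining")
}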

INFO yarn.Client: Command for starting the Spark ApplicationMaster: List($JAVA_HOME/bin/java, -server, -Xmx2048m, -Djava.io.tmpdir=$PWD/tmp, -Dspark.tachyonStore.folderName=\"spark-fc6383cc-0904-4af9-8abd-3b66b3f0f461\", -Dspark.yarn.secondary.jars=\"\", -Dspark.home=\"/home/impadmin/spark-1.0.2-bin-hadoop2\", -Dspark.repl.class.uri=\"http://master_node:46823\", -Dspark.driver.host=\"master_node\", -Dspark.app.name=\"Spark shell\", -Dspark.jars=\"\", -Dspark.fileserver.uri=\"http://master_node:46267\", -Dspark.master=\"yarn-client\", -Dspark.driver.port=\"41209\", -Dspark.httpBroadcast.uri=\"http://master_node:36965\", -Dlog4j.configuration=log4j-spark-container.properties, org.apache.spark.deploy.yarn.ExecutorLauncher, --class, notused, --jar , null, --args 'master_node:41209' , --executor-memory, 1024, --executor-cores, 1, --num-executors , 3, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)

(Note that --num-executors, 3 does appear in the ApplicationMaster command, so the setting from spark-env.sh did reach the client; still, only one executor registers.)

...
14/09/10 22:21:24 INFO cluster.YarnClientSchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@master_node:53619/user/Executor#1075999905] with ID 1
14/09/10 22:21:24 INFO storage.BlockManagerInfo: Registering block manager master_node:40205 with 589.2 MB RAM
14/09/10 22:21:25 INFO cluster.YarnClientClusterScheduler: YarnClientClusterScheduler.postStartHook done
14/09/10 22:21:25 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.

And after running a Spark action with any amount of parallelization, it runs the tasks in series, on that one node only!
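For illustration, even a trivially parallel job like this hypothetical one shows the problem: with a single one-core executor, its tasks can only run one at a time.

// 12 partitions -> 12 tasks, but one executor with one core runs them serially.
val nums = sc.parallelize(1 to 1000000, 12)
println(nums.count())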

OK, I solved it this way. I have 4 data nodes on the cluster:

spark-shell --num-executors 4 --master yarn-client
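After relaunching with the explicit flag, the same shell check can confirm the executor count; as a rough heuristic, sc.defaultParallelism should also grow with the extra cores (a sketch, not an exact rule, since the value depends on the scheduler backend):

// Expect 4 executor block managers plus the driver entry.
println(sc.getExecutorMemoryStatus.size)
println(sc.defaultParallelism)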

