How to prevent Spark Executors from getting Lost when using YARN client mode?
I had a very similar problem: executors kept getting lost no matter how much memory we allocated to them. If you're running on YARN, the fix is to set `--conf spark.yarn.executor.memoryOverhead=600` (the value is in MiB); if your cluster uses Mesos, try `--conf spark.mesos.executor.memoryOverhead=600` instead. In Spark 2.3.1+ the option has been renamed to `--conf spark.executor.memoryOverhead=600`, and the old YARN-specific name is deprecated.
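For reference, here's a minimal sketch of setting the overhead programmatically when building the session; the app name, the 600 MiB value, and the trivial job are placeholders to adapt to your workload:

```scala
import org.apache.spark.sql.SparkSession

object OverheadExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("memory-overhead-example") // placeholder name
      // Off-heap headroom per executor, in MiB. spark.executor.memoryOverhead
      // applies to Spark 2.3+; on older versions use
      // spark.yarn.executor.memoryOverhead instead.
      .config("spark.executor.memoryOverhead", "600")
      .getOrCreate()

    // Trivial job just to exercise the executors.
    spark.range(1000000L).count()
    spark.stop()
  }
}
```

The same setting can be passed at launch time, e.g. `spark-submit --conf spark.executor.memoryOverhead=600 ...`, which is generally the safer place for it since executor memory settings must be known before the executors are requested.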