Our Solution Architect Rob Keevil, currently working at ING, is presenting at the Dublin Spark Summit. Together with Fokko Driesprong from GoDataDriven, he will give a data engineering talk on working with skewed data in Apache Spark.
Skewed data is the enemy when joining tables in Spark: it shuffles a large proportion of the rows onto a few overloaded nodes, bottlenecking Spark’s parallelism and causing out-of-memory errors. The go-to answer is the broadcast join: leave the large, skewed dataset in place and transmit the smaller table to every machine in the cluster for joining. But what happens when the second table is too large to broadcast and does not fit into memory? Or, even worse, when a single key is bigger than the total memory of an executor?

First, we will introduce the problem. Next, we will explain the current ways of fighting it, and why those solutions are limited. Finally, we will demonstrate a new technique, the iterative broadcast join, developed while processing ING Bank’s global transaction data. Implemented on top of the Spark SQL API, it allows multiple large and highly skewed datasets to be joined successfully while retaining a high level of parallelism, something that is not possible with existing Spark join types.
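The core idea behind an iterative broadcast join can be illustrated outside Spark with a minimal sketch (the function name and data are illustrative, not the actual ING implementation): instead of broadcasting the too-large build side in one piece, split it into chunks small enough to fit in memory, hash-join the probe side against each chunk in turn, and union the matches.

```python
from collections import defaultdict

def iterative_broadcast_join(large, medium, num_chunks):
    """Join two lists of (key, value) pairs by key.

    Rather than holding all of `medium` in memory at once (the single
    broadcast), process it in `num_chunks` slices: build a hash table
    for one slice, join `large` against it, then move to the next slice.
    This is the iterative-broadcast idea sketched without Spark.
    """
    results = []
    for i in range(num_chunks):
        # Build the in-memory "broadcast" table for this chunk only:
        # rows of `medium` whose key hashes into slice i.
        chunk = defaultdict(list)
        for key, value in medium:
            if hash(key) % num_chunks == i:
                chunk[key].append(value)
        # Hash join of the large (skewed) side against the chunk.
        for key, value in large:
            for other in chunk.get(key, []):
                results.append((key, value, other))
    return results

# Toy data: `large` is the skewed fact table, `medium` the build side.
large = [("a", 1), ("a", 2), ("b", 3), ("c", 4)]
medium = [("a", "x"), ("b", "y"), ("d", "z")]
joined = iterative_broadcast_join(large, medium, num_chunks=2)
```

In Spark the same loop would broadcast one slice of the DataFrame per iteration and union the partial join results, so peak executor memory is bounded by the slice size rather than the full table.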
Spark Summits are the world’s largest big data events focused entirely on Apache Spark—assembling the very best engineers, scientists, analysts, and executives from around the globe to share their knowledge and receive expert training on this open-source powerhouse. Since its pioneering summit in 2013, thousands have come to learn how Spark, big data, machine learning, data engineering, and data science are delivering new insights to businesses and institutions worldwide.