Spark repartition creates partitions larger than 128 MB

Let's say I have a file of 1.2 GB. Given a block size of 128 MB, reading it would create 10 partitions. Now, if I repartition it (or coalesce) to 4 partitions, each partition will definitely be larger than 128 MB: each one has to hold roughly 320 MB of data, but the block size is only 128 MB. I'm a bit confused here. How is this possible? How can we create a partition larger than the block size?
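The counting behind these numbers can be reproduced in plain Python (just a sketch of the arithmetic, not a Spark API call; the 128 MB block size, 1.2 GB file, and 4-partition target come from the question, and the comments state Spark's usual read behavior):

```python
import math

# The HDFS block size only determines how many partitions Spark creates
# *initially* when reading the file; after a repartition()/coalesce(),
# partitions live in executor memory and shuffle files, so a single
# partition is not bound to one HDFS block.
BLOCK_MB = 128
FILE_MB = 1.2 * 1024                     # 1.2 GB ≈ 1228.8 MB

initial = math.ceil(FILE_MB / BLOCK_MB)  # roughly one partition per block
print(initial)                           # 10

target = 4
# Estimate in the question: 10 blocks * 128 MB spread over 4 partitions
per_partition_mb = initial * BLOCK_MB / target
print(per_partition_mb)                  # 320.0
```

This is only the size estimate; in Spark itself the data would be read with something like `spark.read.text(...)` and then `df.repartition(4)` or `df.coalesce(4)`.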



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
