How does Spark SQL ALTER TABLE work under the hood?

Say I have a table stored as Parquet files. As far as I know, Parquet files are immutable (read/append-only). So when I add, remove, or change columns via ALTER TABLE, does Spark process the entire set of Parquet files and write new ones?

Is that prohibitively expensive? And does it require locking the entire table?
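For concreteness, the kind of statement I'm asking about looks like this (the table name `events` and its columns are just an example):

```sql
-- A table backed by Parquet files
CREATE TABLE events (id BIGINT, ts TIMESTAMP) USING parquet;

-- The schema change whose cost I'm asking about:
-- does this rewrite the underlying Parquet files?
ALTER TABLE events ADD COLUMNS (country STRING);
```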



Source: Stack Overflow, licensed under CC BY-SA 3.0.
