Spark (Scala): retain column order when writing DataFrame

The DataFrame to be written to the output path has the following structure: "field1, field2, field3". I can see this by applying the .show() method.

But when I write it to an HDFS path:

 dataFrame
      .repartition(1)
      .write
      .parquet(outputPath)

the output file has the fields in a different order:

{"field2": "", "field1": "", "field3": ""}

I suppose this is related to some peculiarity of parallel execution. Is there any way to retain the original column order when writing a DataFrame?
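One common workaround is to pin the column order with an explicit select() right before writing, so the schema written to Parquet matches what .show() displays. A minimal sketch, assuming a local SparkSession and illustrative data (the column names mirror the question; the output path is hypothetical):

```scala
import org.apache.spark.sql.SparkSession

object RetainColumnOrder {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RetainColumnOrder")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Illustrative stand-in for the DataFrame in the question
    val dataFrame = Seq(("a", "b", "c")).toDF("field1", "field2", "field3")

    // select() returns a DataFrame whose schema follows the listed order,
    // so the Parquet file is written with exactly these columns in sequence
    val ordered = dataFrame.select("field1", "field2", "field3")

    ordered
      .repartition(1)
      .write
      .mode("overwrite")
      .parquet("/tmp/outputPath")   // hypothetical output path

    spark.stop()
  }
}
```

Note that Parquet stores the schema alongside the data, so when reading the file back you can also enforce an order with the same select() call regardless of how the file was written.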



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
