How can we truncate and load documents into a Cosmos DB collection without dropping it in PySpark?

I have a monthly job in Databricks where I want to truncate all records for the previous month and then load the current month's data into Cosmos DB. I tried option("truncate", "true") with overwrite mode, but it seems the collection is being dropped and recreated, which causes the shared key and RU settings to be lost from the collection.

df.write.mode("overwrite") \
    .option("truncate", "true") \
    .format("com.mongodb.spark.sql.DefaultSource") \
    .option("database", "ccdb") \
    .option("collection", "testCollection1") \
    .save()

Is there any way to achieve this use case, where every month we load about 4 million records after truncating the previous month's data, without dropping the collection?
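One possible workaround, sketched below under some assumptions: instead of overwrite-with-truncate (which the MongoDB Spark connector implements by dropping the collection), delete the previous month's documents explicitly with pymongo's delete_many, then write the new data with append mode so the collection object and its keys/RU settings survive. The field name load_date and the connection_string variable are hypothetical; adjust to your schema.

```python
from datetime import date, timedelta

def previous_month_range(today: date):
    """Return (first day of previous month, first day of current month)."""
    first_of_current = today.replace(day=1)
    last_of_prev = first_of_current - timedelta(days=1)
    first_of_prev = last_of_prev.replace(day=1)
    return first_of_prev, first_of_current

# Hypothetical sketch of the delete + append pattern (requires a live
# Cosmos DB Mongo API endpoint, so it is shown commented out):
#
#   from pymongo import MongoClient
#   client = MongoClient(connection_string)      # Cosmos DB Mongo API URI
#   coll = client["ccdb"]["testCollection1"]
#   start, end = previous_month_range(date.today())
#   # Remove only last month's documents; the collection itself is untouched.
#   coll.delete_many({"load_date": {"$gte": str(start), "$lt": str(end)}})
#
#   # Append the current month's data instead of overwriting:
#   df.write.mode("append") \
#       .format("com.mongodb.spark.sql.DefaultSource") \
#       .option("database", "ccdb") \
#       .option("collection", "testCollection1") \
#       .save()

# The date helper itself is runnable:
start, end = previous_month_range(date(2024, 3, 15))
print(start, end)
```

Note that delete_many on 4 million documents consumes RUs per deleted document, so for large volumes it may be worth batching the deletes or filtering on an indexed field.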



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow