Best way to bulk load data from Azure Synapse serverless SQL pools into Azure Storage or Databricks Spark
I am trying to bulk load data from Azure Synapse serverless SQL pools into Azure Storage, or directly into Databricks Spark (via the JDBC driver). What is the best way to do this bulk load, assuming we only know the external table's name and not the location of the underlying files? Is there a metadata query that returns the file location as well?
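On the metadata part of the question: serverless SQL pools expose the standard SQL Server catalog views, so the path an external table points at can be read from `sys.external_tables` joined to `sys.external_data_sources`. A sketch (column names are from the documented catalog views; combining `data_source_root` and `table_path` into the full storage URI is left to the caller):

```sql
-- For each external table, return the data source root (e.g. the
-- abfss:// container URI) and the table's relative path under it.
SELECT
    t.name        AS external_table_name,
    ds.location   AS data_source_root,   -- root URI of the external data source
    t.location    AS table_path          -- path/pattern relative to the root
FROM sys.external_tables       AS t
JOIN sys.external_data_sources AS ds
    ON t.data_source_id = ds.data_source_id;
```

With the resolved path, Spark can usually read the underlying Parquet/CSV files from storage directly, which tends to be faster for bulk extraction than pulling rows through JDBC.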
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
