The spark context has stopped and the driver is restarting. Your notebook will be automatically reattached
I am calling multiple Databricks notebooks from one orchestrating notebook and executing them.
After the cluster starts, I get the error: "The spark context has stopped and the driver is restarting. Your notebook will be automatically reattached."
Each of my notebooks processes a very large amount of data.
What could be the reason for this, and how can I solve it?
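
For context, here is a minimal sketch of the orchestration pattern described above, assuming the child notebooks are run sequentially with `dbutils.notebook.run` (the notebook paths, timeout, and arguments below are hypothetical placeholders):

```python
# Orchestrating notebook: run each child notebook sequentially.
# `dbutils` is predefined inside Databricks notebooks; no import is needed.
# The paths and arguments below are hypothetical placeholders.
notebook_paths = [
    "/Workspace/jobs/ingest_data",
    "/Workspace/jobs/transform_data",
    "/Workspace/jobs/aggregate_results",
]

for path in notebook_paths:
    # dbutils.notebook.run(path, timeout_seconds, arguments) runs the
    # child notebook on the same cluster and returns its exit value
    # (the string passed to dbutils.notebook.exit in the child).
    result = dbutils.notebook.run(path, 3600, {"run_date": "2024-01-01"})
    print(f"{path} finished with: {result}")
```

Note that every notebook started this way runs on the same cluster and therefore shares its single driver, so memory pressure on the driver accumulates as each large job executes.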
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
