PySpark: drop column values if they're too large
I have a problem loading data into Redshift through AWS Glue: some column values are too large and my job throws an exception. Is it possible to ignore values that are large strings and load only those that are a normal size, for example < 1024 characters?
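A minimal sketch of one possible workaround, assuming the data is available as a Spark DataFrame (in a Glue job, typically obtained via `DynamicFrame.toDF()`): null out any string value longer than a threshold before writing, so oversized values never reach Redshift. The column names, sample rows, and the `MAX_LEN` threshold below are illustrative, not from the original question.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, length, when
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("drop-oversized-values").getOrCreate()

# Hypothetical input; in a Glue job this would come from a DynamicFrame
# converted with .toDF().
df = spark.createDataFrame(
    [("row1", "x" * 2000), ("row2", "short value")],
    ["id", "payload"],
)

MAX_LEN = 1024  # assumed VARCHAR limit of the target Redshift columns

# Replace any string value longer than MAX_LEN with null instead of
# letting the load fail. when() without otherwise() yields null for
# rows that don't match the condition.
for field in df.schema.fields:
    if isinstance(field.dataType, StringType):
        df = df.withColumn(
            field.name,
            when(length(col(field.name)) <= MAX_LEN, col(field.name)),
        )
```

If dropping entire rows is acceptable instead of nulling individual values, a filter such as `df.filter(length(col("payload")) <= MAX_LEN)` would work too; the cleaned DataFrame can then be converted back with `DynamicFrame.fromDF()` for the Redshift write.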
