PySpark: User class threw exception: org.apache.spark.sql.AnalysisException: Attribute name contains invalid character(s)
I'm trying to upload a CSV file, but I get the following error:
> User class threw exception: org.apache.spark.sql.AnalysisException:
> Attribute name
> "userid,date,availability,timeslot,theme,comment,endtime,am_pm"
> contains invalid character(s) among " ,;{}()\n\t=". Please use alias
> to rename it.;
Could it be that Spark sees all the columns as one name? And why doesn't it see the delimiter?
Solution 1:[1]
Apparently the CSV file's delimiter changed to , instead of ; after copy-pasting the file. By adapting the macro to save as CSV directly into the right folder, the delimiter problem was solved.
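The error itself hints at the mismatch: the whole header line appears as a single "attribute name" containing commas, which is what happens when Spark splits on the wrong delimiter. In PySpark you could set the delimiter explicitly, e.g. `spark.read.option("sep", ",").csv(path)`. As a minimal sketch of the symptom and a way to detect the real delimiter before handing the file to Spark (using only Python's standard `csv` module; the header string below is taken from the error message):

```python
import csv
import io

# Header line as it appears in the AnalysisException: comma-delimited.
header = "userid,date,availability,timeslot,theme,comment,endtime,am_pm"

# Reading it with the wrong delimiter (";") yields a single field,
# so the entire header becomes one column name -- exactly the
# "attribute name contains invalid character(s)" situation.
wrong = next(csv.reader(io.StringIO(header), delimiter=";"))
print(len(wrong))  # one field containing the whole header

# csv.Sniffer can guess the actual delimiter from a sample of the file.
dialect = csv.Sniffer().sniff(header, delimiters=",;")
right = next(csv.reader(io.StringIO(header), delimiter=dialect.delimiter))
print(dialect.delimiter, len(right))  # detected delimiter and field count
```

Once the delimiter is known, passing it to Spark (e.g. via the `sep` option of `spark.read`) makes the columns split correctly instead of collapsing into one.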
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Nathalii. |
