Loading a tab-delimited text file as a Hive table/DataFrame in Databricks

I am trying to load a tab-delimited text file in Databricks notebooks, but all the column values are getting pushed into a single column.

Here is the SQL code I am using:

CREATE TABLE IF NOT EXISTS database.table
USING text
OPTIONS (path 's3bucketpath.txt', header 'true')

I also tried `USING csv`.

The same thing happens if I read the file into a Spark DataFrame.

I am expecting to see the columns separated out, with their headers. Has anyone come across this issue and figured out a solution?



Solution 1:[1]

Have you tried adding a `sep` option to specify that you're using tab-separated values?

CREATE TABLE IF NOT EXISTS database.table
USING csv
OPTIONS (path 's3bucketpath.txt', header 'true', sep '\t')
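
For the DataFrame route mentioned in the question, the same options can be passed to the DataFrame reader. A minimal PySpark sketch, reusing the placeholder path from the question (not a real S3 location):

# `spark` is the SparkSession that Databricks notebooks provide automatically.
df = (
    spark.read
    .option("header", "true")  # first line holds the column names
    .option("sep", "\t")       # tab is the field delimiter
    .csv("s3bucketpath.txt")
)
df.show(5)

With `sep` set to `\t`, the CSV reader splits each line on tabs instead of commas, so the columns come out separated with their headers rather than collapsed into one value.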

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

[1] Solution 1: Alex Ott, Stack Overflow