Fractional part is removed after loading data from Teradata to Spark
We're trying to load data from Teradata; the code we're using is:

```scala
sparkSession.read
  .format("jdbc")
  .options(Map(
    "url"         -> s"jdbc:teradata://hostname, user=$username, password=$password",
    "MAYBENULL"   -> "ON",
    "SIP_SUPPORT" -> "ON",
    "driver"      -> "com.teradata.jdbc.TeraDriver",
    "dbtable"     -> table_name
  ))
  .load()
```
However, some values lose their fractional part after loading. To be more precise, the column in Teradata is of type [Number][1], but after loading, the data type in Spark is DecimalType(38,0); the scale of 0 means no digits are kept after the decimal point.
Data in Teradata looks like this:

| id | column1 | column2 |
|---|---|---|
| 1 | 50.23 | 100.23 |
| 2 | 25.8 | 20.669 |
| 3 | 30.2 | 19.23 |
The Spark DataFrame looks like this:

| id | column1 | column2 |
|---|---|---|
| 1 | 50 | 100 |
| 2 | 26 | 21 |
| 3 | 30 | 19 |
The metadata of the table in Teradata is:

```sql
CREATE SET TABLE table_name (id BIGINT, column1 NUMBER, column2 NUMBER) PRIMARY INDEX (id);
```
The Spark version is 2.3.0 and Teradata is 16.20.32.59.
So here come the questions: why does this automatic conversion happen, and how can I keep the data's fractional part in Spark just as it is in Teradata?

[1]: https://docs.teradata.com/r/Teradata-Database-SQL-Data-Types-and-Literals/June-2017/Numeric-Data-Types/FLOAT/REAL/DOUBLE-PRECISION-Data-Types
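One workaround we are considering is Spark's JDBC `customSchema` read option (available since Spark 2.2), which overrides the types Spark infers from the JDBC metadata. The scale of 10 below is an assumption for illustration, not something taken from the table definition; a sketch:

```scala
// Sketch: override the inferred DECIMAL(38,0) with an explicit precision/scale.
// `sparkSession`, `username`, `password`, and `table_name` come from our setup;
// the chosen scale (10) is an assumption — pick one that covers your data.
val df = sparkSession.read
  .format("jdbc")
  .options(Map(
    "url"    -> s"jdbc:teradata://hostname, user=$username, password=$password",
    "driver" -> "com.teradata.jdbc.TeraDriver",
    "dbtable" -> table_name,
    "customSchema" -> "id BIGINT, column1 DECIMAL(38,10), column2 DECIMAL(38,10)"
  ))
  .load()
```

But we would still like to understand why the inference picks scale 0 in the first place.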
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow