Re-evaluate data types in Pandas columns
Sorry if this question is a duplicate!
I have a DataFrame like:
    0   1   2   3   4
0
1  33  40  75  73  45
2  46  59  40  53  17
3  43  63   5  38  83
4  97  43  14  39  82
The cells of the first row are all empty strings ("").
Apparently, the dtypes are all object:
df.dtypes
0 object
1 object
2 object
3 object
4 object
dtype: object
I generate a new DataFrame from the first one with df2 = df.iloc[1:, :].
df2
    0   1   2   3   4
1  33  40  75  73  45
2  46  59  40  53  17
3  43  63   5  38  83
4  97  43  14  39  82
The dtypes of this new df2 are still object.
How can I re-evaluate the dtypes of the new DataFrame?
Clarification: suppose I have a DataFrame in which each column holds homogeneous data (int, float, or datetime) except for a few rows that contain strings. If I delete these rows, how do I make pandas re-evaluate the data types of each column? Should I simply save the DataFrame and then read it back in?
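For reference, here is a minimal sketch that reproduces the setup described above; the values are assumptions for illustration, not the original data:
import pandas as pd

# Hypothetical reproduction: the columns hold Python ints, but the first row
# holds empty strings, so pandas stores every column with dtype object.
df = pd.DataFrame([[""] * 5,
                   [33, 40, 75, 73, 45],
                   [46, 59, 40, 53, 17],
                   [43, 63, 5, 38, 83],
                   [97, 43, 14, 39, 82]])
print(df.dtypes)      # all object
df2 = df.iloc[1:, :]  # drop the empty-string row
print(df2.dtypes)     # still object: slicing does not re-infer dtypes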
Solution 1:[1]
To re-evaluate the column types after you have modified the DataFrame, use df.infer_objects(), which performs a soft conversion of object columns; df.infer_objects().dtypes shows the resulting types.
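As a rough sketch of how this looks with the data above (assuming the remaining cells hold plain Python ints boxed in object columns):
# infer_objects() returns a new DataFrame in which object columns that now
# contain values of a single underlying type (here: ints) get a proper dtype.
df2 = df.iloc[1:, :].infer_objects()
print(df2.dtypes)  # int64 for every column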
Solution 2:[2]
Do you mean changing the data type? If so, try:
df.iloc[0] = df.iloc[0].apply(lambda x: int(x))
You can replace int with float as well, I believe.
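If the goal is to convert entire columns rather than a single row, the same apply idea can be used column-wise; this is only a sketch and assumes every remaining value can be parsed by int():
# Map int() over every cell of the sliced frame; int() also accepts numeric
# strings, but it raises ValueError on empty strings, so slice those out first.
df2 = df.iloc[1:, :].apply(lambda col: col.apply(int))
print(df2.dtypes)  # int64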
Solution 3:[3]
If I understood you correctly, you could simply cast the DataFrame to int.
i.e.:
df2.astype('int')
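A brief usage note, under the same assumptions as above: astype returns a new DataFrame, so assign the result back; it raises an error if any leftover cell (for example an empty string) cannot be converted.
# Cast every column to int64; raises ValueError if a cell such as "" remains.
df2 = df2.astype('int')
print(df2.dtypes)  # int64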
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | JCL |
| Solution 2 | Moo10000 |
| Solution 3 | Gorlomi |