Creating nested dataclass objects in Python
I have a dataclass object that has nested dataclass objects in it. However, when I create the main object, the nested objects remain plain dictionaries:
```python
@dataclass
class One:
    f_one: int
    f_two: str

@dataclass
class Two:
    f_three: str
    f_four: One
```

```python
>>> Two(**{'f_three': 'three', 'f_four': {'f_one': 1, 'f_two': 'two'}})
Two(f_three='three', f_four={'f_one': 1, 'f_two': 'two'})

>>> obj = {'f_three': 'three', 'f_four': One(**{'f_one': 1, 'f_two': 'two'})}
>>> Two(**obj)
Two(f_three='three', f_four=One(f_one=1, f_two='two'))
```
As you can see, the nested `One` only ends up as a dataclass when it is constructed by hand before being passed in.
Ideally I'd like to construct my object to get something like this:

```python
Two(f_three='three', f_four=One(f_one=1, f_two='two'))
```
Is there any way to achieve that other than manually converting the nested dictionaries to the corresponding dataclass objects whenever I access the object's attributes?
Thanks in advance.
Solution 1:[1]
You can use `__post_init__` for this:
```python
from dataclasses import dataclass

@dataclass
class One:
    f_one: int
    f_two: str

@dataclass
class Two:
    f_three: str
    f_four: One

    def __post_init__(self):
        self.f_four = One(**self.f_four)

data = {'f_three': 'three', 'f_four': {'f_one': 1, 'f_two': 'two'}}
print(Two(**data))
# Two(f_three='three', f_four=One(f_one=1, f_two='two'))
```
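One caveat worth noting: this `__post_init__` assumes `f_four` always arrives as a dict, so passing an already-constructed `One` would raise a `TypeError` on the `**` unpacking. A small guarded variant (a sketch, not part of the original answer) accepts either form:

```python
from dataclasses import dataclass

@dataclass
class One:
    f_one: int
    f_two: str

@dataclass
class Two:
    f_three: str
    f_four: One

    def __post_init__(self):
        # Only convert when a plain dict was passed; leave a One instance as-is.
        if isinstance(self.f_four, dict):
            self.f_four = One(**self.f_four)

print(Two(f_three='three', f_four={'f_one': 1, 'f_two': 'two'}))
print(Two(f_three='three', f_four=One(f_one=1, f_two='two')))
# Both print: Two(f_three='three', f_four=One(f_one=1, f_two='two'))
```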
Solution 2:[2]
You can try the dacite module. This package simplifies the creation of data classes from dictionaries, and it also supports nested structures.
Example:
```python
from dataclasses import dataclass
from dacite import from_dict

@dataclass
class A:
    x: str
    y: int

@dataclass
class B:
    a: A

data = {
    'a': {
        'x': 'test',
        'y': 1,
    }
}

result = from_dict(data_class=B, data=data)
assert result == B(a=A(x='test', y=1))
```
To install dacite, simply use pip:
```
$ pip install dacite
```
Solution 3:[3]
Instead of writing a new decorator, I came up with a function that modifies all fields of type dataclass after the actual dataclass is initialized.
```python
import dataclasses

def dicts_to_dataclasses(instance):
    """Convert all fields of type `dataclass` into an instance of the
    specified dataclass if the current value is of type dict."""
    cls = type(instance)
    for f in dataclasses.fields(cls):
        if not dataclasses.is_dataclass(f.type):
            continue
        value = getattr(instance, f.name)
        if not isinstance(value, dict):
            continue
        new_value = f.type(**value)
        setattr(instance, f.name, new_value)
```
The function could be called manually or in __post_init__. This way the @dataclass decorator can be used in all its glory.
The example from above with a call in `__post_init__`:
```python
@dataclass
class One:
    f_one: int
    f_two: str

@dataclass
class Two:
    f_three: str
    f_four: One

    def __post_init__(self):
        dicts_to_dataclasses(self)

data = {'f_three': 'three', 'f_four': {'f_one': 1, 'f_two': 'two'}}
two = Two(**data)
# Two(f_three='three', f_four=One(f_one=1, f_two='two'))
```
Solution 4:[4]
I have created an augmentation of the solution by @jsbueno that also accepts typing in the form `List[YourClass]`.
```python
from dataclasses import dataclass, is_dataclass
from typing import List

def nested_dataclass(*args, **kwargs):
    def wrapper(cls):
        cls = dataclass(cls, **kwargs)
        original_init = cls.__init__

        def __init__(self, *args, **kwargs):
            for name, value in kwargs.items():
                field_type = cls.__annotations__.get(name, None)
                if isinstance(value, list):
                    if field_type.__origin__ == list or field_type.__origin__ == List:
                        sub_type = field_type.__args__[0]
                        if is_dataclass(sub_type):
                            items = []
                            for child in value:
                                if isinstance(child, dict):
                                    items.append(sub_type(**child))
                            kwargs[name] = items
                if is_dataclass(field_type) and isinstance(value, dict):
                    new_obj = field_type(**value)
                    kwargs[name] = new_obj
            original_init(self, *args, **kwargs)

        cls.__init__ = __init__
        return cls
    return wrapper(args[0]) if args else wrapper
```
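A quick usage sketch of the decorator with a list field (the decorator is reproduced with a slightly more defensive `__origin__` check so the example runs standalone; the `Team`/`members` names are hypothetical, not from the original answer):

```python
from dataclasses import dataclass, is_dataclass
from typing import List

def nested_dataclass(*args, **kwargs):
    # Decorator from the solution above, reproduced for a self-contained example.
    def wrapper(cls):
        cls = dataclass(cls, **kwargs)
        original_init = cls.__init__

        def __init__(self, *args, **kwargs):
            for name, value in kwargs.items():
                field_type = cls.__annotations__.get(name, None)
                # Convert list elements when the annotation is List[SomeDataclass].
                if isinstance(value, list) and getattr(field_type, '__origin__', None) is list:
                    sub_type = field_type.__args__[0]
                    if is_dataclass(sub_type):
                        kwargs[name] = [sub_type(**c) if isinstance(c, dict) else c
                                        for c in value]
                # Convert a plain dict when the annotation is a dataclass.
                if is_dataclass(field_type) and isinstance(value, dict):
                    kwargs[name] = field_type(**value)
            original_init(self, *args, **kwargs)

        cls.__init__ = __init__
        return cls
    return wrapper(args[0]) if args else wrapper

@dataclass
class One:
    f_one: int
    f_two: str

@nested_dataclass
class Team:
    name: str
    members: List[One]

team = Team(**{'name': 'a', 'members': [{'f_one': 1, 'f_two': 'two'}]})
print(team.members[0])  # One(f_one=1, f_two='two')
```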
Solution 5:[5]
If you are okay with pairing this functionality with the non-stdlib library attrs (a superset of the functionality the stdlib dataclasses module provides), then the cattrs library provides a structure function which handles the conversion of native data types to dataclasses and uses type annotations automatically.
Solution 6:[6]
The really important question here is not nesting but value validation / casting: do you need the values validated?
If value validation is needed, stay with well-tested deserialization libs like:

- pydantic (faster, but its reserved attributes such as `schema` interfere with attribute names coming from data; you have to rename and alias class properties often enough for it to be annoying)
- schematics (slower than pydantic, but a much more mature typecasting stack)

They have amazing validation and re-casting support and are used very widely (meaning they should generally work well and not mess up your data). However, they are not dataclass based, though Pydantic wraps dataclass functionality and lets you switch from pure dataclasses to Pydantic-supported dataclasses with a change of import statement.

These libs (mentioned in this thread) work with dataclasses natively, but their validation / typecasting is not hardened yet:

- dacite
- validated_dc
If validation is not super important and you just need recursive nesting, simple hand-rolled code like https://gist.github.com/dvdotsenko/07deeafb27847851631bfe4b4ddd9059 is enough to deal with `Optional` and `List[...]` / `Dict[...]` nested models.
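In that spirit, a minimal hand-rolled recursive converter might look like the sketch below (an illustration, not the gist's code; it assumes field annotations are real classes rather than strings, i.e. no `from __future__ import annotations`, and handles only plain nested dataclasses):

```python
from dataclasses import dataclass, fields, is_dataclass

def from_nested_dict(cls, data):
    """Recursively build `cls` from a dict, descending into dataclass fields."""
    kwargs = {}
    for f in fields(cls):
        value = data[f.name]
        if is_dataclass(f.type) and isinstance(value, dict):
            value = from_nested_dict(f.type, value)
        kwargs[f.name] = value
    return cls(**kwargs)

@dataclass
class One:
    f_one: int
    f_two: str

@dataclass
class Two:
    f_three: str
    f_four: One

two = from_nested_dict(Two, {'f_three': 'three',
                             'f_four': {'f_one': 1, 'f_two': 'two'}})
print(two)  # Two(f_three='three', f_four=One(f_one=1, f_two='two'))
```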
Solution 7:[7]
```python
from dataclasses import dataclass, asdict
from validated_dc import ValidatedDC

@dataclass
class Foo(ValidatedDC):
    one: int
    two: str

@dataclass
class Bar(ValidatedDC):
    three: str
    foo: Foo

data = {'three': 'three', 'foo': {'one': 1, 'two': 'two'}}
bar = Bar(**data)
assert bar == Bar(three='three', foo=Foo(one=1, two='two'))

data = {'three': 'three', 'foo': Foo(**{'one': 1, 'two': 'two'})}
bar = Bar(**data)
assert bar == Bar(three='three', foo=Foo(one=1, two='two'))

# Use asdict() to work with the dictionary:
bar_dict = asdict(bar)
assert bar_dict == {'three': 'three', 'foo': {'one': 1, 'two': 'two'}}

foo_dict = asdict(bar.foo)
assert foo_dict == {'one': 1, 'two': 'two'}
```
ValidatedDC: https://github.com/EvgeniyBurdin/validated_dc
Solution 8:[8]
dataclass-wizard is a modern option that can alternatively work for you. It supports complex types such as date and time, generics from the typing module, and a nested dataclass structure.
Other "nice to have" features such as implicit key casing transforms - i.e. camelCase and TitleCase, which are quite common in API responses - are likewise supported out of box.
The "new style" annotations introduced in PEPs 585 and 604 can be ported back to Python 3.7 via a __future__ import as shown below.
```python
from __future__ import annotations
from dataclasses import dataclass
from dataclass_wizard import fromdict, asdict, DumpMeta

@dataclass
class Two:
    f_three: str | None
    f_four: list[One]

@dataclass
class One:
    f_one: int
    f_two: str

data = {'f_three': 'three',
        'f_four': [{'f_one': 1, 'f_two': 'two'},
                   {'f_one': '2', 'f_two': 'something else'}]}

two = fromdict(Two, data)
print(two)

# setup key transform for serialization (default is camelCase)
DumpMeta(key_transform='SNAKE').bind_to(Two)

my_dict = asdict(two)
print(my_dict)
```
Output:

```
Two(f_three='three', f_four=[One(f_one=1, f_two='two'), One(f_one=2, f_two='something else')])
{'f_three': 'three', 'f_four': [{'f_one': 1, 'f_two': 'two'}, {'f_one': 2, 'f_two': 'something else'}]}
```
You can install Dataclass Wizard via pip:

```
$ pip install dataclass-wizard
```
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | |
| Solution 2 | |
| Solution 3 | Yourstruly |
| Solution 4 | Daan Luttik |
| Solution 5 | Alex Waygood |
| Solution 6 | ddotsenko |
| Solution 7 | Evgeniy_Burdin |
| Solution 8 | |
