How to store bw2data databases as datapackages to reuse elsewhere

How can I store a Brightway database as a datapackage that I can use afterwards on a different machine? I want to build some basic databases using the usual bw2io functionality, such as importing from Excel, then export these as datapackages that can be used for calculations on a different machine, where some of the flows will be dynamically updated using interfaces.

I can create a couple of databases and even a simple impact assessment method, e.g.:

import bw2data as bd

bio_db = bd.Database("mini_biosphere")
bio_db.register()

# biosphere flows
co2 = bio_db.new_activity(
    code='CO2',
    name='carbon dioxide',
    categories=('air',),
    type='emission',
    unit='kg',
)
co2.save()

ch4 = bio_db.new_activity(
    code='CH4',
    name='methane',
    categories=('air',),
    type='emission',
    unit='kg',
)
ch4.save()

bio_db_dpgk = bio_db.datapackage()

# technosphere
a_key = ("testdb", "a")
b_key = ("testdb", "b")

act_a_def = {
    'name': 'a',
    'unit': 'kilogram',
    'exchanges': [
        {"input": co2.key, "type": "biosphere", "amount": 10},
        {"input": a_key, "output": a_key, "type": "production", "amount": 1},
        {"input": b_key, "output": a_key, "type": "substitution", "amount": 1},
    ],
}

act_b_def = {
    'name': 'b',
    'unit': 'kilogram',
    'exchanges': [
        # the production exchange of b must have b as its output
        {"input": b_key, "output": b_key, "type": "production", "amount": 1},
        {"input": ch4.key, "type": "biosphere", "amount": 1},
    ],
}
    
db = bd.Database("testdb")
db.write(
    {
        a_key: act_a_def,
        b_key: act_b_def,
    }
)

# impact assessment method
ipcc = bd.Method(('IPCC',))
ipcc.write([
    (co2.key, 1),
    (ch4.key, 30),
])
ipcc_datapackage = ipcc.datapackage()

But I don't know how to save these and then load them again elsewhere to do some basic calculations.



Solution 1:[1]

Well, it seems that the zip file already exists (it is what gets loaded when the datapackage() method is called). The solution I found is to use db.filepath_processed() to get the path of the processed file and simply copy it elsewhere with shutil. Following on from the example above, I calculate the carbon footprint from the copied datapackages:

import shutil
from pathlib import Path
from fs.zipfs import ZipFS

import bw2data as bd
import bw2calc as bc
import bw_processing

# store them in the current working directory, for example

shutil.copyfile(src=bio_db.filepath_processed(),
                dst=Path.cwd()/bio_db.filename_processed())

shutil.copyfile(src=ipcc.filepath_processed(),
                dst=Path.cwd()/ipcc.filename_processed())

shutil.copyfile(src=db.filepath_processed(),
                dst=Path.cwd()/db.filename_processed())

# then load them and try them elsewhere

ipcc_dp = bw_processing.load_datapackage(ZipFS('ipcc.676f7b9b8e5bb30500c01e2385e59174.zip'))
biosphere_dp = bw_processing.load_datapackage(ZipFS('mini_biosphere.939a0794.zip'))
test_dp = bw_processing.load_datapackage(ZipFS('testdb.2a571274.zip'))
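Note that the processed filenames embed a hash, so hard-coding them as above is brittle. A small helper (purely illustrative, not part of the Brightway API) can locate a copied zip by its database-name prefix instead:

```python
from pathlib import Path


def find_processed_zip(directory, prefix):
    """Return the path of the first .zip in `directory` whose name starts with `prefix`."""
    matches = sorted(Path(directory).glob(f"{prefix}.*.zip"))
    if not matches:
        raise FileNotFoundError(f"no processed file for {prefix!r} in {directory}")
    return matches[0]


# e.g.: test_dp = bw_processing.load_datapackage(ZipFS(str(find_processed_zip(".", "testdb"))))
```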


act_a = bd.get_activity(a_key)  # look up the activity written earlier

lca = bc.LCA(demand={act_a.id: 1}, data_objs=[ipcc_dp, biosphere_dp, test_dp])

lca.lci()
lca.lcia()
lca.score
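As for the flows that should be dynamically updated: in bw_processing, a dynamic resource is simply an object whose __next__ method returns a fresh numpy array of amounts each time the matrices are rebuilt. A minimal sketch of such an interface (the class name and value range are made up for illustration; how it would be attached via add_dynamic_vector is shown only as a comment):

```python
import numpy as np


class RandomCO2Interface:
    """Illustrative interface: yields a new CO2 amount on every iteration."""

    def __next__(self):
        # one value per row of the indices_array this resource is paired with
        return np.random.uniform(9.0, 11.0, size=1)


# Hypothetically, this could be attached to a fresh datapackage like:
#   dp = bw_processing.create_datapackage()
#   dp.add_dynamic_vector(matrix="biosphere_matrix",
#                         interface=RandomCO2Interface(),
#                         indices_array=...)
```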

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Nabla