I have 20,000 ~1000-row dataframes, each of which has a name, in a 170GB pickle file at the moment. I'd like to write these to a file so I can load them individually later.
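One direction I'm considering is a single HDF5 file with each dataframe stored under its name, so that any one of them can be read back without loading the rest. A minimal sketch of that idea (assuming pandas with PyTables installed; the `frames` dict, file path, and names here are placeholders for the real data loaded from the pickle):

```python
import numpy as np
import pandas as pd

# Placeholder for the real {name: dataframe} mapping from the 170GB pickle.
frames = {
    f"frame_{i}": pd.DataFrame(np.random.randn(1000, 4), columns=list("abcd"))
    for i in range(3)
}

# Write every dataframe into one HDF5 file, keyed by its name.
with pd.HDFStore("frames.h5", mode="w") as store:
    for name, df in frames.items():
        store.put(name, df)

# Later, load a single dataframe by name without reading the others.
one = pd.read_hdf("frames.h5", "frame_1")
```

Is this a reasonable approach at this scale, or is there a better format for lots of small, individually addressable dataframes?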