I have 20,000 ~1000-row dataframes, each of which has a name, currently stored in a single 170GB pickle file. I'd like to write these out to a file so I can load them individually by name later.
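For illustration, here is a minimal sketch of one common approach: storing each named dataframe under its own key in a single HDF5 file via `pandas.HDFStore`, which allows reading any one frame back without touching the others. This assumes the pickle holds a dict mapping name to DataFrame, that PyTables is installed, and that the whole pickle can be loaded once to do the conversion; the file names and key used below are placeholders.

```python
import pandas as pd

# Assumption: the pickle contains a dict of {name: DataFrame}. Adjust the
# iteration if it is a list or some other container. Note that this one-time
# conversion still loads the full 170GB pickle into memory.
frames = pd.read_pickle("all_frames.pkl")

# Write each dataframe under its own key in one HDF5 file. Keys behave like
# paths inside the file, so each frame is independently addressable by name.
with pd.HDFStore("frames.h5", mode="w", complevel=9, complib="blosc") as store:
    for name, df in frames.items():
        store.put(name, df, format="fixed")

# Later, load a single frame by name; only that frame is read from disk.
one_frame = pd.read_hdf("frames.h5", key="some_name")
```

An alternative under the same assumptions would be one Parquet file per dataframe in a directory keyed by name, which avoids a single huge file at the cost of 20,000 small files.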