Merge/concatenate over 100 NetCDF files into one file: memory allocation problem

I want to merge/concatenate over 100 NetCDF files into one file, but I am running into a memory allocation problem:

MemoryError: Unable to allocate 2.80 GiB for an array with shape (29, 3601, 7199) and data type float32

I have searched related topics here. Some suggested using xarray.open_mfdataset(f) with chunks={"time": 1}, like:

    ds = xarray.merge([xarray.open_mfdataset(f, chunks={"time": 1}) for f in glob.glob(path + '/*.nc4')])
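For reference, the single-call form of the same idea, which I understand keeps everything lazy until the final write, would look roughly like this (a sketch only: it assumes the files share a "time" dimension to concatenate along, and merged.nc4 is a placeholder output name):

    import glob
    import xarray as xr

    # Open all files lazily in one call instead of merging many separately opened datasets.
    files = sorted(glob.glob(path + '/*.nc4'))
    ds = xr.open_mfdataset(files, combine='nested', concat_dim='time',
                           chunks={"time": 1}, parallel=True)

    # Write the result to disk without loading the full array into memory at once.
    ds.to_netcdf(path + '/merged.nc4')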

But then I get this kind of warning:

"PerformanceWarning: Slicing is producing a large chunk. To accept the large. chunk and silence this warning, set the option with dask.config.set(**{'array.slicing.split_large_chunks': False}):"

Using the script below with a Dask client also didn't work:

    from dask.distributed import Client, progress

    client = Client(processes=False, threads_per_worker=2,
                    n_workers=4, memory_limit='8GB')
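To show how I intended the client to fit in, this is roughly the wiring (a sketch: the client is started before the lazy open, the dashboard is only there to watch per-worker memory, and merged.nc4 is a placeholder output name):

    import glob
    import xarray as xr
    from dask.distributed import Client

    # Start the local cluster first so open_mfdataset(..., parallel=True)
    # and the final write run on its workers.
    client = Client(processes=False, threads_per_worker=2,
                    n_workers=4, memory_limit='8GB')
    print(client.dashboard_link)   # dashboard for watching per-worker memory

    files = sorted(glob.glob(path + '/*.nc4'))
    ds = xr.open_mfdataset(files, combine='nested', concat_dim='time',
                           chunks={"time": 1}, parallel=True)
    ds.to_netcdf(path + '/merged.nc4')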

