Cache miss penalty: how much data can be loaded from RAM at once?

I'm working with big maps (golang) and dicts (python) and wondering how much data a single RAM read/write can transfer at once.

So for example, when I'm iterating through a 10GB dict, does each cache miss result in fetching just a DWORD (or however wide the typical RAM access is) from RAM, or will x86/the kernel load a batch of adjacent data into L3 (or queue a set of load requests to the memory controller)?

In either case, wouldn't processing big data structures be bottlenecked by RAM access latency (plus all the refresh cycles the memory controller has to deal with)?
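For intuition, here is a back-of-the-envelope sketch (assuming a 64-byte cache line, which is typical on x86-64, and 8-byte values): a miss fills a whole line rather than a single word, so sequential iteration amortizes one miss over several elements, while a 10 GiB structure spans a large number of line fills in the worst case.

```go
package main

import "fmt"

func main() {
	const lineBytes = 64       // cache-line size, typical on x86-64 (assumption)
	const elemBytes = 8        // one int64 / pointer-sized value
	const dataBytes = 10 << 30 // a 10 GiB structure

	// Each miss pulls in a full line, so sequential access
	// gets several useful elements per miss.
	elemsPerLine := lineBytes / elemBytes

	// Worst case: every line of the structure must be fetched once.
	lineFills := dataBytes / lineBytes

	fmt.Println(elemsPerLine) // elements delivered per line fill
	fmt.Println(lineFills)    // line fills to walk the whole structure
}
```

The numbers are only illustrative; real hash maps scatter entries across lines, so random probing tends toward one useful element per miss, which is exactly the latency-bound behavior the question describes.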



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
