Comparing memory usage for two different clusters
We have two separate sets of Kafka brokers, both on Amazon MSK (managed Kafka). Looking at their MemoryFree metric (offered by AWS CloudWatch), we observe two very different graphs:
As we can see, Cluster 1's free memory fluctuates a lot, while Cluster 2's free memory remains fairly constant. This is a bit strange to us because Cluster 2 handles a higher volume of data. What configurations or factors could cause such behavior in Cluster 1? I know it is a bit of an open-ended question, but any advice would be appreciated. If you need the values for a specific metric or configuration, I would be happy to provide them.
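For reference, the MemoryFree datapoints in question can be pulled programmatically rather than read off the CloudWatch console. Below is a minimal sketch, assuming boto3 and the AWS/Kafka CloudWatch namespace for MSK; the cluster name, broker ID, and region are placeholders, not values from the original question:

```python
# Hypothetical sketch: fetch per-broker MemoryFree datapoints for one day.
# Cluster name, broker ID, and region are placeholders.
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Kafka",
    MetricName="MemoryFree",
    Dimensions=[
        {"Name": "Cluster Name", "Value": "cluster-1"},  # placeholder
        {"Name": "Broker ID", "Value": "1"},             # placeholder
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=300,            # 5-minute datapoints
    Statistics=["Average"],
)

# Print the datapoints in chronological order.
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```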
Solution 1:
From the graphs, it seems that the Cluster 2 brokers have more than 200 GB of memory, compared to the 20-30 GB that the Cluster 1 brokers have. Is that correct?
I'd say more than 200 GB of memory per broker is a lot, and that would explain why Cluster 2's memory shows almost no fluctuations; it looks over-provisioned. There are actually some small fluctuations, but they are far less noticeable: a swing of roughly 10 GB is less than 5% of the total memory in Cluster 2, compared to 30-50% in Cluster 1.
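A quick back-of-the-envelope check of that percentage argument, using illustrative broker sizes of roughly 200 GB and 25 GB (assumed from the graphs, not stated exactly in the question):

```python
# The same ~10 GB swing is a much larger fraction of a small broker's
# memory than of a large one, which is why only Cluster 1's graph looks noisy.
def fluctuation_pct(swing_gb: float, total_gb: float) -> float:
    return 100 * swing_gb / total_gb

print(fluctuation_pct(10, 200))  # 5.0  -> barely visible on Cluster 2's graph
print(fluctuation_pct(10, 25))   # 40.0 -> large visible swings on Cluster 1's graph
```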
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Gerard Garcia |


