Architecture decision for an on-demand data querying app
I'm trying to decide on what technology to use for a new in-house analytics application. We currently work with AWS, and we have a batch job set up which kicks out files of ~40MB-80MB to an S3 bucket. This data needs to be served up in a way that can be quickly filtered on demand.
My current plan is to create a server-side rendered app. This app will grab the data in some way, filter it, and return the results to the client as a dashboard (charts, stats, etc.).
My main question here is: how best to store the data for quick retrieval on the server, for this kind of medium file size, without loading the whole file into RAM? My initial thought is to load the data from S3 into a DynamoDB table, but querying 40-80MB of data repeatedly as the user changes the filters might be too intensive?
Thanks in advance,
Sam
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
