MongoDB - combining multiple collections, but each individual query might not need all fields
Here is the scenario: We have 2 tables (issues, anomalies) in BigQuery, which we plan to combine into a single document in MongoDB, since the 2 collections (issues, anomalies) are both data about a particular site.
[
{
"site": "abc",
"issues": {
--- issues data --
},
"anomalies": {
-- anomalies data --
}
}
]
There are some queries which require the 'issues' data, while others require the 'anomalies' data. In the future, we might need to show 'issues' & 'anomalies' data together, which is why I'm planning to combine the two in a single document.
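To make the plan concrete, here is a minimal sketch of how rows from the two BigQuery tables could be grouped by site into the combined document shape above. The function name, the `data` field, and the sample values are all illustrative assumptions, not the actual schema:

```python
def combine_site_data(issues_rows, anomalies_rows):
    """Group per-site rows from the two source tables into one
    combined document per site (illustrative field names)."""
    docs = {}
    for row in issues_rows:
        docs.setdefault(row["site"], {"site": row["site"]})["issues"] = row["data"]
    for row in anomalies_rows:
        docs.setdefault(row["site"], {"site": row["site"]})["anomalies"] = row["data"]
    return list(docs.values())

combined = combine_site_data(
    [{"site": "abc", "data": {"open": 3}}],
    [{"site": "abc", "data": {"count": 7}}],
)
```

Each resulting document would then be inserted into a single MongoDB collection keyed by `site`.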
Questions on the approach above, with respect to performance/volume of data read:
When we read the combined document, is there a way to read only specific fields (so the volume of data read is not huge)? Or does this mean that when we read the document, the entire document is loaded into memory?
Pls let me know.
tia!
UPDATE: Going over the MongoDB docs, we can use projections to pull only the required fields from MongoDB documents. In this case, only the projected fields are transferred over the network. However, the MongoDB server will still have to load each matching document and select the specific fields from it.
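What an inclusion projection does can be sketched in plain Python. This is a simplified client-side model for illustration only (real projections are applied server-side and also support nested paths, exclusion, and operators); the collection name `sites` and the field values are assumptions:

```python
def apply_projection(doc, projection):
    """Simplified model of a MongoDB inclusion projection:
    keep only the listed top-level fields, plus _id by default."""
    included = {field for field, flag in projection.items() if flag == 1}
    return {k: v for k, v in doc.items() if k in included or k == "_id"}

doc = {"_id": 1, "site": "abc", "issues": {"open": 3}, "anomalies": {"count": 7}}

# The equivalent server-side query with pymongo would look like:
#   db.sites.find_one({"site": "abc"}, {"site": 1, "issues": 1})
projected = apply_projection(doc, {"site": 1, "issues": 1})
```

Here the 'anomalies' sub-document is dropped before the result is returned, which is why the network transfer shrinks even though the server still reads the whole document.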
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow