Cache problem when adding data sources to the apollo config option in Keystone 5
When executing the same request a second time, the result returned is always the cached one, even if the first request mutated the entity.
Here is the dataSources.js file. Each imported module exports an instance of its class (i.e. `legacyUserApi = new LegacyUserApi()` and `legacyMailApi = new LegacyMailApi()`):

```javascript
import legacyUserApi from './LegacyUserApi';
import legacyMailApi from './LegacyMailApi';

export default () => ({
  legacyUserApi,
  legacyMailApi
});
```
and in the keystone.js file I import it:

```javascript
import dataSources from './dataSources';

const apps = [
  new GraphQLApp({
    apiPath: API_PATH,
    apollo: {
      dataSources,
      introspection: isDev
    }
  })
];
```
Solution 1:[1]
Keystone js uses Apollo Server under the hood, so I looked at the Apollo Server documentation on data sources:

> Apollo Server calls this function for every incoming operation. Also as shown, the function should create a new instance of each data source for each operation.

So the dataSources.js file should import the classes, not the instances, so that Apollo Server creates a new instance of each data source for every request:
```javascript
import LegacyUserApi from './LegacyUserApi';
import LegacyMailApi from './LegacyMailApi';

export default () => ({
  legacyUserApi: new LegacyUserApi(),
  legacyMailApi: new LegacyMailApi()
});
```
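The reason this fixes the stale results can be sketched outside Apollo entirely. The snippet below is an illustration, not Keystone's real classes: `LegacyUserApi` here is a stand-in with a hypothetical per-request cache.

```javascript
// Stand-in data source class (illustrative, not Keystone's real API).
class LegacyUserApi {
  constructor() {
    // Per-operation state (e.g. a memoization cache) starts empty each time.
    this.requestCache = new Map();
  }
}

// Apollo Server calls this factory once per incoming operation,
// so every operation receives fresh instances.
const dataSources = () => ({
  legacyUserApi: new LegacyUserApi()
});

// Two calls simulate two operations: each gets a distinct instance, so state
// accumulated during the first request cannot leak into the second.
const first = dataSources().legacyUserApi;
const second = dataSources().legacyUserApi;
console.log(first === second); // false
```

When the module exports pre-built instances instead, the factory returns the same objects every time, and whatever those instances cached during the first request is served again on the second.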
Solution 2:[2]
There are many ways caching can be layered on top of Apollo Server, but your question doesn't indicate which one you're using. Ultimately, most of them set a Cache-Control header on the HTTP response, so let's assume that's the case here.
This header is interpreted by both the browser and any shared caches that sit between it and the server, such as a CDN or proxy. Once a response is cached, the server generally has no mechanism to invalidate the stored copy. This can be mitigated by setting the right directives on the response that will initially be cached, for example by including the must-revalidate directive or setting a short max-age.
Alternatively, setting the private directive will only allow the response to be cached in the browser. Subsequent HTTP requests could then specify max-age=0, no-cache to force the data to be fetched again (for example, after a mutation was made). This may solve your problem for individual users but, obviously, you lose the benefits of a shared cache.
It's difficult to give specific advice without knowing more about your caching configuration, infrastructure, and the requests being made. Maybe you can update your question with a reproduction using curl? That would let you isolate the server's behaviour from the browser's.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Daniel Bellmas |
| Solution 2 | Molomby |
