Training an agent using historical data in TF-Agents
I am using a contextual bandits algorithm in TF-Agents. Is there a way to train the agent using historical data (context, action, reward) stored in a table, instead of using the replay buffer?
The environment provides the context and reward, so I can make the environment serve those from the table. But the action is provided by the agent, and I am not sure how to override the action the agent chooses (for a specific context) with the action recorded in the historical table.
I am using a custom environment and a prebuilt agent (LinearThompsonSampling, a Bandit agent). I am not sure whether I can use the built-in LinearThompsonSampling agent and, at the same time, supply the actions from the historical data during training. I couldn't find any examples of this in the tf_agents documentation.
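For context on what such offline training amounts to: within TF-Agents, the usual route is to pack the logged rows into `Trajectory` objects and pass them to `agent.train` directly, sidestepping the environment loop entirely. The sketch below is not TF-Agents code; it is a minimal NumPy illustration of the same idea for the model underlying Linear Thompson Sampling, where each arm's Bayesian linear-regression statistics are updated straight from logged (context, action, reward) rows, so the logged action (not an agent-chosen one) determines which arm is updated. All function and variable names here are illustrative, not part of any library.

```python
import numpy as np

def offline_linear_ts_update(rows, num_arms, dim, lam=1.0):
    """Update per-arm Bayesian linear-regression statistics from logged data.

    rows: iterable of (context: array of shape (dim,), action: int, reward: float)
    Returns per-arm precision matrices A and reward vectors b, from which a
    Thompson Sampling policy can later sample weights.
    """
    A = [lam * np.eye(dim) for _ in range(num_arms)]  # prior precision per arm
    b = [np.zeros(dim) for _ in range(num_arms)]
    for context, action, reward in rows:
        x = np.asarray(context, dtype=float)
        # The LOGGED action picks which arm's statistics to update:
        A[action] += np.outer(x, x)   # accumulate X^T X for that arm
        b[action] += reward * x       # accumulate X^T r for that arm
    return A, b

def sample_and_act(A, b, context, rng):
    """Thompson step: sample weights from each arm's posterior, pick argmax."""
    scores = []
    for A_a, b_a in zip(A, b):
        cov = np.linalg.inv(A_a)      # posterior covariance
        mean = cov @ b_a              # posterior mean
        w = rng.multivariate_normal(mean, cov)
        scores.append(w @ context)
    return int(np.argmax(scores))

# Example: "train" from a small historical table, then act on a new context.
rng = np.random.default_rng(0)
table = [
    (np.array([1.0, 0.0]), 0, 1.0),
    (np.array([0.0, 1.0]), 1, 1.0),
    (np.array([1.0, 0.0]), 1, 0.0),
]
A, b = offline_linear_ts_update(table, num_arms=2, dim=2)
action = sample_and_act(A, b, np.array([1.0, 0.0]), rng)
```

The key point the sketch makes concrete: nothing in the update step requires the agent to have selected the action, which is why replaying a historical table (rather than a replay buffer fed by the agent's own policy) is possible in principle.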
Sources
This question follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
