How to ensure data integrity when some requests exceed the retry limit?
Sometimes a few requests exceed RETRY_TIMES and are dropped. When this happens, is there any concept like a dead-letter queue in Scrapy that I can use to ensure data integrity for the target-site crawler, i.e. keep track of the failed requests so nothing is silently lost?
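The original post does not include code, but a minimal sketch of one common workaround is shown below: attach an errback to every Request and append requests that still fail after all retries to a local JSON-lines file that acts as a makeshift dead-letter queue. The spider name, the target URL, and the `dead_letter_requests.jsonl` file name are illustrative assumptions; whether the errback fires after retry exhaustion depends on the failure type (network exceptions propagate there directly, HTTP error statuses arrive wrapped in HttpError by the spider middleware).

```python
# Hypothetical sketch, not from the original question: approximating a
# dead-letter queue in Scrapy by dumping requests that still fail after
# RETRY_TIMES retries to a JSON-lines file for later re-crawling.
import json

import scrapy
from scrapy.spidermiddlewares.httperror import HttpError


class DeadLetterSpider(scrapy.Spider):
    name = "deadletter_example"          # illustrative name
    start_urls = ["https://example.com/"]  # placeholder target

    custom_settings = {
        "RETRY_TIMES": 3,  # retries before a request is considered dead
    }

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, callback=self.parse, errback=self.on_error)

    def parse(self, response):
        # normal item extraction goes here
        yield {"url": response.url, "status": response.status}

    def on_error(self, failure):
        # Called when a request ultimately fails, e.g. repeated timeouts or
        # connection errors after all retries; HTTP error statuses show up
        # here wrapped in HttpError.
        request = failure.request
        if failure.check(HttpError):
            request = failure.value.response.request
        entry = {"url": request.url, "reason": repr(failure.value)}
        # Append the dead letter to a local file; a real project might push
        # it to Redis, a database, or a message queue instead.
        with open("dead_letter_requests.jsonl", "a", encoding="utf-8") as fh:
            fh.write(json.dumps(entry) + "\n")
        self.logger.warning("Dead-lettered %s: %s", entry["url"], entry["reason"])
```

The dead-letter file can then be replayed by a second crawl (or the same spider on its next run) to close the gap left by the failed requests.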
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
