How do I bypass a 403 error for unloaded divs on a dynamically scrolling page using Selenium in Python?
I am trying to scrape an e-commerce site whose divs load only as you scroll. The page loads and I get the first 60 divs, but the divs after that never load, and Chrome DevTools shows "Failed to load resource: the server responded with a status of 403 ()" for each failed request.
The site renders as blank white space between the first 60 divs that do load and the footer.
Is there anything I can pass to the driver.get request so that the page keeps loading?
I tried sleep and explicit wait functions, but the divs simply won't load after the initial get request.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("detach", True)  # keep the browser window open after the script exits
driver = webdriver.Chrome(options=options)
driver.get('https://www.ecomercesiteiamtryingtoscrape.com')
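For reference, this is the shape of the scroll-and-wait loop I tried. The helper name `scroll_until_stable` and its callable-based structure are my own sketch, not a library API; the two `execute_script` snippets in the docstring show how I wired it to the driver above.

```python
import time

def scroll_until_stable(get_height, scroll_to_bottom, pause=2.0, max_rounds=10):
    """Scroll repeatedly until the reported page height stops changing.

    get_height / scroll_to_bottom are callables so the loop can be attached
    to Selenium, e.g.:
        get_height       = lambda: driver.execute_script("return document.body.scrollHeight")
        scroll_to_bottom = lambda: driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
    """
    last_height = get_height()
    for _ in range(max_rounds):
        scroll_to_bottom()
        time.sleep(pause)  # wait for lazy-loaded divs to (hopefully) arrive
        new_height = get_height()
        if new_height == last_height:
            break  # height stable: nothing new loaded
        last_height = new_height
    return last_height
```

Even with this loop (and longer pauses), the page height never grows past the first 60 divs, so the 403 seems to happen on the lazy-load requests themselves rather than in my scrolling logic.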
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow