Stale Element Reference Exception when web scraping using Python Selenium

I am trying to scrape a website and have written a working script. The problem is that after the script has been running for some time, I get a StaleElementReferenceException telling me the referenced element (the href) can no longer be found.

Here I am extracting the links of all products on each page of the website and saving them in a list, which I later use to extract the data from each link.

for a in tqdm(range(1, pages+1)):
    time.sleep(3)
    link = driver.find_elements_by_xpath('//div[@class="col-xs-4 animation"]/a')
    for b in link:
        x = b.get_attribute("href")
        print(x)
        LINKS.append(x)
    time.sleep(3)
    # next page
    try:
        WebDriverWait(driver, delay).until(ec.presence_of_element_located((By.XPATH, '//ul[@class="pagination-sm pagination"]')))
        next_page = driver.find_element_by_xpath('.//li[@class="prev"]')
        driver.execute_script("arguments[0].click()", next_page)
    except NoSuchElementException:
        pass

Any idea how to fix this? The error occurs randomly: sometimes the links are found and sometimes they are not, which confuses me. It only happens after the scraper has been running for a long time.
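One common workaround (not from the original post) is to treat staleness as a retryable condition: the exception usually means the DOM re-rendered between `find_elements_by_xpath` and `get_attribute`, so re-locating the elements and reading the hrefs again fixes it. A minimal sketch, assuming the same `find_elements_by_xpath` API as in the question's Selenium version; `collect_hrefs` and the `retries`/`settle` parameters are hypothetical names:

```python
try:
    from selenium.common.exceptions import StaleElementReferenceException
except ImportError:  # stand-in so the sketch runs without selenium installed
    class StaleElementReferenceException(Exception):
        pass

import time


def collect_hrefs(driver, xpath, retries=3, settle=1):
    """Collect href attributes, re-locating the elements if they go stale.

    If the page re-renders underneath us, get_attribute raises
    StaleElementReferenceException; we wait briefly and re-find.
    """
    for _ in range(retries):
        try:
            return [el.get_attribute("href")
                    for el in driver.find_elements_by_xpath(xpath)]
        except StaleElementReferenceException:
            time.sleep(settle)  # let the page settle, then re-locate
    return []
```

Inside the page loop this would replace the inner `for b in link:` block, e.g. `LINKS.extend(collect_hrefs(driver, '//div[@class="col-xs-4 animation"]/a'))`.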



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow