Fatal Python error: Cannot recover from stack overflow error while parsing using Selenium
The task is to parse all the books on a site (walk through every category and open each product page); there are about 100 thousand books. But after the script has been running for a while, it dies with:
Fatal Python error: Cannot recover from stack overflow.
I suspect (judging by similar questions found on the Internet) that it is running out of memory, but how to work around it is not yet clear to me. This is what my code looks like:
```python
import requests
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup as bs
from mongodb import connect_mongo_bd
import time

db = connect_mongo_bd()
collections = db.comparison_new

print('Start!')

headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.84 Safari/537.36'}

service = Service('/home/Test/Desktop/work/python_parser/chromedriver')
options = webdriver.ChromeOptions()
options.headless = True
options.binary_location = '/data/opt/apps/cn.google.chrome/files/google-chrome'
browser = webdriver.Chrome(service=service, options=options)


def pagination_cycle(url):
    print(url)
    try:
        browser.get(url)
        wait = WebDriverWait(browser, 10)
        wait.until(
            EC.presence_of_element_located((By.CLASS_NAME, 'product'))
        )
        first_soup = bs(browser.page_source, 'html.parser')
        products = first_soup.select('#resProd > div.product')
        products_list = []
        for product in products:
            time.sleep(1)
            product_link_tag = product.select_one('div.rtd > div.title-mine > a')
            if not product_link_tag:
                continue
            else:
                product_link = 'https://test.de' + product_link_tag['href']
                print(product_link)
                second_request = requests.get(product_link, headers=headers)
                if second_request.status_code == 200:
                    second_soup = bs(second_request.content, 'html.parser')
                    product_name_tag = product.select_one('div.rtd > div.title-mine > a')
                    if product_name_tag:
                        product_name = product_name_tag.text
                    else:
                        continue
                    product_price_tag = second_soup.select_one('#product_shop > div.product_list_style > div.item-info > div > span.price2')
                    if product_price_tag:
                        product_price = float(product_price_tag.text.replace(' €', ''))
                    else:
                        continue
                    product_isbn_tag = second_soup.select_one('#product_shop > div.product_list_style > div.item-info').find(
                        text='ISBN'
                    )
                    if product_isbn_tag:
                        product_isbn = product_isbn_tag.find_parent().find_next_sibling().text.replace('-', '')
                    else:
                        continue
                    collections.update_one(
                        {
                            'isbn': product_isbn
                        },
                        {
                            '$set': {
                                'name': product_name,
                                'test_price': product_price
                            },
                            '$inc': {
                                'cnt_updated': 1
                            }
                        },
                        upsert=True
                    )
        next_link = first_soup.select_one(
            '#resPage > div.pager > div > div.pages > ol > li.current'
        ).find_next_sibling().find('a')
        if next_link:
            pagination_cycle('https://test.de/knigi/' + next_link['href'])
    except Exception as ex:
        print("Error: " + ex.__class__.__name__)
        time.sleep(10)
        pagination_cycle(url)
    return True


result = pagination_cycle('https://test.de/knigi/')
print(result)
browser.quit()
db.close()
```
Everything runs fine for a while, but then I keep hitting this error.

Please tell me what is causing it and how to solve this problem?
Solution 1:[1]
This error message...
Fatal Python error: Cannot recover from stack overflow
...implies that the recursion depth of your program logic exceeded the maximum depth of the Python interpreter stack.
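For reference, a runaway recursion in pure Python normally raises a catchable RecursionError well before the C stack overflows; the fatal, unrecoverable variant seen here typically occurs when the overflow happens while an exception is already being handled, which is exactly the situation the retry-inside-except recursion in the code above can create. A minimal demonstration of the ordinary, catchable case:

```python
import sys

def recurse(n):
    # Each call adds a Python stack frame; the interpreter aborts the
    # descent once sys.getrecursionlimit() frames are on the stack.
    return recurse(n + 1)

try:
    recurse(0)
except RecursionError as ex:
    print(type(ex).__name__)  # → RecursionError
```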
Deep Dive
You are getting this error because your program starts executing with the line:
result = pagination_cycle('https://test.de/knigi/')
Then, at the end of def pagination_cycle(url):, you call the same function recursively:
pagination_cycle('https://test.de/knigi/' + next_link['href'])
Moreover, the except handler also retries by calling pagination_cycle(url), so every page visited and every error handled adds another stack frame that is never released. Once the recursion depth exceeds the maximum depth of the Python interpreter stack, this error is raised.
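The robust fix is to turn the recursive pagination into a loop, so the stack depth stays constant no matter how many pages the site has. A minimal sketch under assumed names (process_page is a hypothetical stand-in for the Selenium/BeautifulSoup scraping of one page; it returns the next page's URL, or None on the last page):

```python
def process_page(url, pages):
    # Hypothetical stand-in for the real scraping logic: "parse" one
    # page and return the next page's URL, or None when there is none.
    print('processing', url)
    return pages.get(url)

def pagination_cycle(start_url, pages):
    url = start_url
    count = 0
    while url is not None:          # loop instead of recursion
        url = process_page(url, pages)
        count += 1
    return count                    # pages visited; stack depth never grew

# Toy site: three pages chained together.
site = {'/knigi/': '/knigi/?page=2', '/knigi/?page=2': '/knigi/?page=3'}
print(pagination_cycle('/knigi/', site))  # → 3
```

The same shape works for the error-handling retry: wrap the body of the loop in try/except and simply `continue` with the same URL after a sleep, instead of calling the function again.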
tl;dr
sys.getrecursionlimit() returns the current value of the recursion limit, i.e. the maximum depth of the Python interpreter stack. This limit prevents infinite recursion from causing an overflow of the C stack and crashing Python. It can be changed with sys.setrecursionlimit(), but raising it only delays the crash; the real fix is to make the pagination iterative.
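If you do want to inspect or raise the limit, it is a two-liner — but with roughly 100 thousand products the recursion will still eventually overflow the C stack, so treat this as a stopgap rather than a fix:

```python
import sys

print(sys.getrecursionlimit())  # CPython's default limit is usually 1000

# Raising the limit buys more frames, but the unbounded recursion remains,
# and a very high limit can still crash the C stack with a fatal error.
sys.setrecursionlimit(5000)
print(sys.getrecursionlimit())  # → 5000
```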
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | undetected Selenium |
