How to continuously crawl a webpage for articles using Selenium in Python
I'm trying to crawl bloomberg.com and find links for all English news articles. The problem with the code below is that it finds a lot of articles from the first page, but then it goes into a loop where it returns almost nothing, only producing output once in a while.
```python
from collections import deque

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options

visited = set()
to_crawl = deque()
to_crawl.append("https://www.bloomberg.com")

def crawl_link(input_url):
    options = Options()
    options.add_argument('--headless')
    browser = webdriver.Firefox(options=options)
    browser.get(input_url)
    elems = browser.find_elements(by=By.XPATH, value="//a[@href]")
    for elem in elems:
        # retrieve all href links and save it to url_element variable
        url_element = elem.get_attribute("href")
        if url_element not in visited:
            to_crawl.append(url_element)
            visited.add(url_element)
        # save news articles
        if 'www.bloomberg.com/news/articles' in url_element:
            print(str(url_element))
            with open("result.txt", "a") as outf:
                outf.write(str(url_element) + "\n")
    browser.close()

while len(to_crawl):
    url_to_crawl = to_crawl.pop()
    crawl_link(url_to_crawl)
```
I've tried using a queue and then a stack, but the behavior is the same. I can't seem to accomplish what I'm looking for.
How do you crawl websites like this to collect news article URLs?
Solution 1:[1]
The approach you are using should work fine; however, after running it myself I noticed a few things that were causing it to hang or throw errors.
I made some adjustments and included in-line comments to explain my reasons for adding them.
```python
from collections import deque

from selenium.common.exceptions import StaleElementReferenceException
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options

base = "https://www.bloomberg.com"
article = base + "/news/articles"

visited = set()

# this is so there aren't multiple entries
# for the same article in the `result.txt` file
articles = set()

to_crawl = deque()
to_crawl.append(base)

def crawl_link(input_url):
    options = Options()
    options.add_argument('--headless')
    browser = webdriver.Firefox(options=options)
    print(input_url)
    browser.get(input_url)
    elems = browser.find_elements(by=By.XPATH, value="//a[@href]")
    # this was the issue: before, this line came right after
    # `to_crawl.append()`, which was prematurely adding links
    # to the visited set, so those links were skipped over without
    # being crawled
    visited.add(input_url)
    for elem in elems:
        # skip elements that go stale before their href can be read
        try:
            url_element = elem.get_attribute("href")
        except StaleElementReferenceException as err:
            print(err)
            continue
        # checks that links aren't crawled more than once
        # and that all the links are in the proper domain
        if base in url_element and all(url_element not in i for i in [visited, to_crawl]):
            to_crawl.append(url_element)
        # this checks if the link matches the correct url pattern
        # and ensures no article links are entered multiple times
        if article in url_element and url_element not in articles:
            articles.add(url_element)
            print(str(url_element))
            with open("result.txt", "a") as outf:
                outf.write(str(url_element) + "\n")
    browser.quit()  # guarantees the browser closes completely

while len(to_crawl):
    # popleft makes the deque a FIFO instead of a LIFO;
    # a queue would achieve the same thing (see the short demo below)
    url_to_crawl = to_crawl.popleft()
    crawl_link(url_to_crawl)
```
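To make the FIFO/LIFO point from the comment above concrete, here is a tiny standalone demo (illustration only, not part of the crawler) of how the same `deque` gives queue or stack behavior depending on which end you pop from:

```python
from collections import deque

d = deque(["a", "b", "c"])

# popleft() removes from the front: FIFO (queue) order -> prints "a"
print(d.popleft())

# pop() removes from the back: LIFO (stack) order -> prints "c"
print(d.pop())
```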
After running the crawler for 90 seconds, this was the output in result.txt: https://gist.github.com/alexpdev/b7545970c4e3002b1372e26651301a23
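A possible further refinement, not part of the original answer: the script above starts and quits a fresh headless Firefox for every URL it visits, which adds a lot of startup overhead. A rough sketch of reusing one shared driver, assuming `crawl_link` were changed to accept the browser as a parameter instead of creating its own, could replace the `while` loop at the bottom:

```python
# rough sketch only -- assumes the same imports and globals as the script above,
# and that crawl_link(browser, input_url) uses the passed-in browser instead of
# creating and quitting its own Firefox instance
options = Options()
options.add_argument('--headless')
browser = webdriver.Firefox(options=options)

try:
    while len(to_crawl):
        url_to_crawl = to_crawl.popleft()
        try:
            crawl_link(browser, url_to_crawl)
        except Exception as err:
            # keep crawling even if a single page fails to load
            print(err)
finally:
    browser.quit()  # close the single shared browser at the end of the crawl
```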
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Stack Overflow |
