AttributeError: 'NoneType' object has no attribute 'find' when scraping an array of URLs

I have the following code:

from bs4 import BeautifulSoup
import requests

root = 'https://br.investing.com'
website = f'{root}/news/latest-news'

result = requests.get(website, headers={"User-Agent": "Mozilla/5.0"})
content = result.text
soup = BeautifulSoup(content, 'lxml')

box = soup.find('section', id='leftColumn')
links = [link['href'] for link in box.find_all('a', href=True)]

for link in links:
  result = requests.get(f'{root}/{link}', headers={"User-Agent": "Mozilla/5.0"})
  content = result.text
  soup = BeautifulSoup(content, 'lxml')

  box = soup.find('section', id='leftColumn')
  title = box.find('h1').get_text()

  with open('headlines.txt', 'w') as file:
    file.write(title)

With this code I intend to scrape the URLs of news articles from the site, visit each of those URLs, grab its headline, and write the headlines to a text file. Instead, I end up with only one headline in the file, and I get AttributeError: 'NoneType' object has no attribute 'find'. What can be done about this?
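
For reference, here is a minimal sketch of one more defensive variant of the loop. It assumes that article pages which follow the site's standard layout keep the section with id='leftColumn' and an h1 headline; the urljoin call, the None checks, and opening the file once outside the loop are additions made for illustration, not part of the original code:

from urllib.parse import urljoin

from bs4 import BeautifulSoup
import requests

root = 'https://br.investing.com'
website = f'{root}/news/latest-news'
headers = {"User-Agent": "Mozilla/5.0"}

result = requests.get(website, headers=headers)
soup = BeautifulSoup(result.text, 'lxml')

box = soup.find('section', id='leftColumn')
links = [link['href'] for link in box.find_all('a', href=True)]

# Open the file once, outside the loop, so earlier headlines are not overwritten.
with open('headlines.txt', 'w') as file:
    for link in links:
        # urljoin handles both relative hrefs ('/news/...') and absolute ones.
        url = urljoin(root, link)
        result = requests.get(url, headers=headers)
        soup = BeautifulSoup(result.text, 'lxml')

        article = soup.find('section', id='leftColumn')
        if article is None:          # not every linked page uses this layout
            continue
        title_tag = article.find('h1')
        if title_tag is None:        # skip pages without a headline
            continue
        file.write(title_tag.get_text(strip=True) + '\n')

The idea of the sketch is that links pointing to external or differently structured pages are simply skipped instead of raising the AttributeError, and the single open('headlines.txt', 'w') keeps all headlines instead of overwriting the file on every iteration.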


