How can I loop through many URLs from the same website in order to web-scrape data?

I tried the code below, but it only works for the first item in bond_ISIN: it gets the bond price for that one ISIN. I want the code to insert each ISIN from bond_ISIN into the link and print the bond price associated with each one. Thanks for the help.

import urllib.request
from bs4 import BeautifulSoup

bond_ISIN=[
    "us369604bq57",
    "us26441CBG96",
    "us31428XBV73",
]

link='https://markets.businessinsider.com/bonds/dl-flr_prefsecs_1621-und_d-bond-{}'

for ISIN in bond_ISIN: 
    r=urllib.request.urlopen('https://markets.businessinsider.com/bonds/dl-flr_prefsecs_1621-und_d-bond-us369604bq57').read()
    soup=BeautifulSoup(r,"lxml")
    type(soup)
    currentvalue = soup.find("span", attrs={"class": "price-section__current-value"})
    print(currentvalue.get_text())


Solution 1:[1]

Your soup.find( ... ) call is fine, but nothing in your loop body depends on ISIN: the URL passed to urlopen is hard-coded, so every iteration fetches and parses the same page.

Construct the soup just once, rather than doing it within a for loop.

Then either

  1. loop over soup.find_all( ... ) results, or
  2. do a soup.find( ... ) on each ISIN bond of interest.
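
A minimal sketch of option 2, using the names from the question (bond_ISIN, link, and the span class); bond_url and fetch_price are hypothetical helpers added for illustration. The key change is that the URL is built from each ISIN, so each bond's page is fetched and parsed exactly once:

```python
bond_ISIN = [
    "us369604bq57",
    "us26441CBG96",
    "us31428XBV73",
]

link = "https://markets.businessinsider.com/bonds/dl-flr_prefsecs_1621-und_d-bond-{}"

def bond_url(isin):
    # Make the request depend on the ISIN: fill the template per bond,
    # instead of hard-coding one bond's URL inside the loop.
    return link.format(isin)

def fetch_price(isin):
    # One fetch and one parse per bond page
    # (requires the beautifulsoup4 and lxml packages).
    import urllib.request
    from bs4 import BeautifulSoup

    html = urllib.request.urlopen(bond_url(isin)).read()
    soup = BeautifulSoup(html, "lxml")
    tag = soup.find("span", attrs={"class": "price-section__current-value"})
    return tag.get_text(strip=True) if tag else None
```

Then the loop becomes `for isin in bond_ISIN: print(isin, fetch_price(isin))`. Note that the page layout and class name may change over time, so guard against find returning None, as above.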

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: J_H