Python: download/scrape SSRN papers from a list of URLs

I have a bunch of links that are exactly the same except for the ID at the end. All I want to do is loop through each link and download the paper as a PDF using the "Download This Paper" button. In an ideal world, the filename would be the title of the paper, but if that isn't possible I can rename them later. Getting them all downloaded is more important. I have about 200 links, but I'll provide 5 here as an example.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3860262
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2521007
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3146924
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2488552
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3330134

Is what I want to do possible? I have some familiarity with looping through URLs to scrape tables, but I have never tried to do anything with a download button.

I don't have example code because I don't know where to start here, but I'm imagining something like:

for url in urls:
    (go to each link)
    (download as pdf via the "download this paper" button)
    (save file as title of paper)

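Based on my table-scraping experience, here is a rough sketch of what I think it could look like with requests and BeautifulSoup. The citation_title meta tag, the Delivery.cfm href pattern for the download button, and the need for a browser-like User-Agent are guesses about SSRN's pages that I haven't verified, so this is a starting point rather than something I know works:

import re
import time
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "Mozilla/5.0"}  # guess: SSRN may reject the default requests user agent

urls = [
    "https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3860262",
    "https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2521007",
    # ... the rest of the ~200 links
]

session = requests.Session()
session.headers.update(HEADERS)

for url in urls:
    page = session.get(url)
    page.raise_for_status()
    soup = BeautifulSoup(page.text, "html.parser")

    # Paper title for the filename; fall back to the abstract id if the
    # citation_title meta tag isn't there (an assumption about SSRN's markup)
    title_tag = soup.find("meta", attrs={"name": "citation_title"})
    title = title_tag["content"] if title_tag else url.split("=")[-1]
    filename = re.sub(r'[\\/*?:"<>|]', "", title).strip() + ".pdf"

    # Assumption: the "Download This Paper" button is a link whose href
    # contains Delivery.cfm; if SSRN's markup differs this needs adjusting
    link = soup.find("a", href=re.compile("Delivery.cfm", re.I))
    if link is None:
        print(f"No download link found for {url}")
        continue

    pdf_url = urljoin(url, link["href"])
    pdf = session.get(pdf_url, headers={"Referer": url})
    pdf.raise_for_status()

    with open(filename, "wb") as f:
        f.write(pdf.content)
    print(f"Saved {filename}")

    time.sleep(1)  # pause between downloads so ~200 rapid requests don't get blocked

Would something along these lines work, or does the download button require a real browser (e.g. Selenium) to click it?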
