How to login then scrape multiple pages using Scrapy

My task is to visit a starting page in order to get some cookies, then log in, and finally navigate to one or more pages. My code is below:

import re

import scrapy
from scrapy import FormRequest, Request


class ExampleSpider(scrapy.Spider):
    name = "test"

    HEADERS = {
        'user-agent': 'My User Agent',
        'accept': '*/*',
        'accept-encoding': 'gzip, deflate, br',
    }

    AUTH_REQUEST_HEADERS = {
        'accept': '*/*',
        'x-li-lang': 'en-US',
        'accept-encoding': 'gzip, deflate, br',
        'token': 'default token',
    }

    def start_requests(self):
        return [
            Request(
                "https://www.example.com",
                headers=self.HEADERS,
            ),
        ]

    def parse(self, response):
        cookies = response.headers.to_unicode_dict()['Set-Cookie']
        session_pattern = re.compile('JSESSIONID="(.+?)"')
        JSESSIONID = session_pattern.findall(cookies)[0]
        return FormRequest(
            "https://www.example.com/authenticate",
            formdata={
                "session_key": '[email protected]',
                "session_password": 'password',
                "JSESSIONID": JSESSIONID,
            },
            headers=self.AUTH_REQUEST_HEADERS,
            callback=self.fetch,
        )

    def fetch(self, response):
        yield Request("https://www.example.com/api/user1", callback=self.processor)

    def processor(self,response):
        ...

The first request and the authentication need specific headers, which is why I used return. However, the request made in fetch returns a 403 error. The login step in parse succeeds, so I'm not sure why the last step isn't authenticated. How can I make sure the requests under fetch also use the authenticated session? I'm still new to Scrapy, so I'd also appreciate any tips to improve my code.
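To rule out the extraction step, here is a minimal standalone check of the JSESSIONID regex used in parse (stdlib only; the Set-Cookie value below is a made-up example, not a real header):

```python
import re
from http.cookies import SimpleCookie

# Made-up example of a Set-Cookie header value like the one parsed in parse().
set_cookie = 'JSESSIONID="ajax:1234567890"; Path=/; Secure'

# The regex approach used in the spider: capture the quoted cookie value.
session_pattern = re.compile('JSESSIONID="(.+?)"')
jsessionid = session_pattern.findall(set_cookie)[0]
print(jsessionid)  # ajax:1234567890

# Alternative: let the stdlib parse the cookie string instead of a regex;
# SimpleCookie also strips the surrounding quotes from the value.
cookie = SimpleCookie()
cookie.load(set_cookie)
print(cookie['JSESSIONID'].value)  # ajax:1234567890
```

So the value itself comes out fine either way; the question is only why the session isn't carried through to the request in fetch.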



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow