Category "python-requests"

Max retries exceeded with URL in requests

I'm trying to get the content of App Store > Business: import requests from lxml import html page = requests.get("https://itunes.apple.com/in/genre/ios-bus
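A sketch of one common mitigation for "Max retries exceeded", assuming the root cause is transient connection resets: mount an HTTPAdapter with a urllib3 Retry policy and send a browser-like User-Agent. The genre URL is truncated in the excerpt above, so a placeholder is used.

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry transient failures with exponential backoff instead of failing fast.
retries = Retry(total=5, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

url = "https://itunes.apple.com/in/genre/..."  # placeholder; the real URL is cut off above
page = session.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
print(page.status_code)
```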

PermissionError: [WinError 32] when importing the requests module

When I run the command import requests, I get this: Traceback (most recent call last): File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\import

scraping yell with python requests gives 403 error

I have this code from requests.sessions import Session url = "https://www.yell.com/s/launderettes-birmingham.html" s = Session() headers = { 'user-agent':"
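A 403 from a site like this usually means the request is being filtered on its headers (or by bot protection that headers alone will not defeat). A minimal sketch that sends fuller, browser-like headers through the same Session:

```python
from requests.sessions import Session

url = "https://www.yell.com/s/launderettes-birmingham.html"
s = Session()
headers = {
    # Browser-like headers; no guarantee against dedicated bot protection.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/120.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-GB,en;q=0.9",
    "Referer": "https://www.yell.com/",
}
resp = s.get(url, headers=headers, timeout=10)
print(resp.status_code)
```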

Scraping network traffic data

I'm well aware of scraping webpages with requests, BS, and a few other tools, but I can't seem to find a way to create a program that scrapes stuff found in the
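One hedged approach: the "network traffic" a page generates is visible in the browser DevTools Network tab as XHR/fetch calls, and those calls can often be replayed directly with requests. The endpoint and parameters below are placeholders for whatever the Network tab shows.

```python
import requests

# Replay an XHR/fetch call found in DevTools; URL and params are placeholders.
api_url = "https://example.com/api/data"
resp = requests.get(
    api_url,
    params={"page": 1},
    headers={"User-Agent": "Mozilla/5.0"},
    timeout=10,
)
print(resp.json())
```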

How to post two (several) arguments via requests.post() in Python?

I need to post two arguments: an .xls file and one more constant (e.g. 1000). This is how I tried to do it, but it failed: requests.post('http://13.59.5.143:8082/b
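A minimal sketch of sending a file and a plain form field in one multipart POST: the file goes in files= and the scalar value in data=. The field names and file name are assumptions, and the endpoint is truncated in the excerpt.

```python
import requests

with open("report.xls", "rb") as f:            # hypothetical file name
    resp = requests.post(
        "http://13.59.5.143:8082/...",         # endpoint truncated above
        files={"file": ("report.xls", f)},     # field name "file" is an assumption
        data={"limit": 1000},                  # field name "limit" is an assumption
    )
print(resp.status_code)
```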

Requests format for uploading multiple images in FastAPI

Example Here's my code trying to upload a list of images: import requests import glob import cv2 path = glob.glob("test_folder/*", recursive=True) # a list of
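requests can send several files under the same field name by passing a list of tuples, which is what a FastAPI endpoint declared as files: List[UploadFile] expects. The field name "files" and the upload URL are assumptions.

```python
import glob
import requests

paths = glob.glob("test_folder/*", recursive=True)

# One tuple per image, all under the same form field name.
files = [("files", (p, open(p, "rb"), "image/jpeg")) for p in paths]
resp = requests.post("http://localhost:8000/upload", files=files)  # hypothetical URL
print(resp.status_code, resp.text)
```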

Scraping data from oddsportal.com with Python

I have been trying to extract the data for each cell on a number from this AJAX website; the details for each cell only pop up when the mouse pointer is on the cell. I
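Since requests alone cannot trigger the mouse-over popups, one hedged alternative is driving a real browser with Selenium and hovering each cell; the selectors below are placeholders and would need to be taken from the page's actual markup.

```python
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.oddsportal.com/")                 # placeholder page
cell = driver.find_element(By.CSS_SELECTOR, "td.odds")    # placeholder selector
ActionChains(driver).move_to_element(cell).perform()      # trigger the hover popup
popup = driver.find_element(By.CSS_SELECTOR, ".tooltip")  # placeholder selector
print(popup.text)
driver.quit()
```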

Custom JSONEncoder for requests.post

I'm writing a wrapper for a REST API and using the requests module. The .json() method of the Response object passes **kwargs to the json.loads() function, so I can easily use cu
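requests' json= parameter does not expose a way to pass a custom encoder, so the usual workaround is to serialize the payload yourself and send it as data= with an explicit Content-Type. A sketch with a hypothetical Decimal-aware encoder:

```python
import json
from decimal import Decimal
import requests

class MyEncoder(json.JSONEncoder):
    # Hypothetical encoder: make Decimal serializable.
    def default(self, o):
        if isinstance(o, Decimal):
            return str(o)
        return super().default(o)

payload = {"price": Decimal("9.99")}
resp = requests.post(
    "https://httpbin.org/post",                  # test endpoint
    data=json.dumps(payload, cls=MyEncoder),     # serialize with the custom encoder
    headers={"Content-Type": "application/json"},
)
print(resp.status_code)
```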

No module named 'requests' in Jupyter with Python3, but "Requirement already satisfied" for Python3

I'm on macOS Catalina, running my environment from a venv. I'm trying to import requests within a Python 3 Jupyter notebook, but I'm getting the following e
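"Requirement already satisfied" for one interpreter and ModuleNotFoundError in the notebook usually means the Jupyter kernel runs a different Python than the shell's pip. One common fix, run from a notebook cell, is to install into the kernel's own interpreter:

```python
import sys
# Install into the exact interpreter the notebook kernel is running on.
!{sys.executable} -m pip install requests
```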

Scraping First post from phpbb3 forum by Python

I have a link like this: http://www.arabcomics.net/phpbb3/viewtopic.php?f=98&t=71718 The link has LINKS in the first post of a phpbb3 forum. How do I get the LINKS in the fir
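A minimal sketch with requests and lxml, assuming the board uses phpBB3's default markup where each post body sits in a div with class "content" (the first match being the first post):

```python
import requests
from lxml import html

url = "http://www.arabcomics.net/phpbb3/viewtopic.php?f=98&t=71718"
page = requests.get(url, timeout=10)
tree = html.fromstring(page.content)

# The class name is an assumption about the board's theme.
first_post = tree.xpath('//div[@class="content"]')[0]
links = first_post.xpath('.//a/@href')
print(links)
```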

Robot Framework Requests could not find cert path

I am running the keyword below to create a client cert session, and it creates it perfectly fine: Create Client Cert Session alias=${alias} url=${url
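In plain requests terms, the keyword's client cert option maps to the cert=(certificate, key) pair, and "could not find cert path" is most often a relative path resolved from a different working directory than expected, so absolute paths are safer. A hedged Python equivalent with placeholder paths:

```python
import requests

session = requests.Session()
# Use absolute paths to the client certificate and key (placeholders below).
session.cert = ("/abs/path/client.crt", "/abs/path/client.key")
resp = session.get("https://example.com/", timeout=10)
print(resp.status_code)
```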

Problem using IMF data API for a large number of countries

I am trying to download national account data from the International Financial Statistics API of the International Monetary Fund. I don't have any trou
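With very long country lists the request URL tends to become too long (or the server times out), so one hedged workaround is to query the codes in smaller batches and merge the results. The endpoint pattern and series code below are illustrative and should be checked against the IMF documentation.

```python
import requests

countries = ["US", "GB", "DE", "FR", "JP", "CN"]   # illustrative subset
base = "http://dataservices.imf.org/REST/SDMX_JSON.svc/CompactData/IFS"  # verify against the docs
indicator = "NGDP_R_SA_XDC"                         # hypothetical series code

def chunks(seq, n):
    for i in range(0, len(seq), n):
        yield seq[i:i + n]

results = {}
for batch in chunks(countries, 3):                  # query a few countries at a time
    url = f"{base}/Q.{'+'.join(batch)}.{indicator}"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    results[tuple(batch)] = resp.json()
print(len(results))
```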

How to convert curl -F command into python code with requests?

I am developing with APIs, but I have run into a problem I have never seen before. curl -F "media=@IMAGE_NAME" 'xxxx url' How do I convert this into Python code wit
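curl's -F flag sends a multipart/form-data field, so the closest requests equivalent is the files= argument. "media" is the field name from the curl command; the URL below is a placeholder for the 'xxxx url' endpoint.

```python
import requests

url = "https://example.com/upload"   # placeholder for the 'xxxx url' endpoint
with open("IMAGE_NAME", "rb") as f:  # the file referenced by media=@IMAGE_NAME
    resp = requests.post(url, files={"media": f})
print(resp.status_code)
```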

Need download voice message from Telegram on Python

I started developing a pet project related to a Telegram bot. One of the points was the question: how do I download a voice message from the bot? Task: Need to dow
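A sketch of the usual two-step Bot API flow: call getFile to resolve the file_id from the voice update into a file_path, then download the bytes from the file endpoint. The token and file_id are placeholders.

```python
import requests

TOKEN = "123456:ABC..."    # placeholder bot token
file_id = "VOICE_FILE_ID"  # file_id taken from the incoming voice message update

# Step 1: resolve the file_id into a server-side file path.
info = requests.get(
    f"https://api.telegram.org/bot{TOKEN}/getFile",
    params={"file_id": file_id},
    timeout=10,
).json()
file_path = info["result"]["file_path"]

# Step 2: download the actual audio (voice notes are OGG/OPUS).
audio = requests.get(f"https://api.telegram.org/file/bot{TOKEN}/{file_path}", timeout=30)
with open("voice.oga", "wb") as f:
    f.write(audio.content)
```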

gunicorn shows connection error [Errno 111]

Hi, I'm building a Django app with Docker, gunicorn, and nginx, and having serious issues calling a POST request from the Django view. My login function - with sch
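[Errno 111] is "connection refused", and from inside a container it usually means the URL points at localhost, i.e. the container itself. One hedged fix is to address the target by its docker-compose service name; the service name, port, path, and payload below are all assumptions.

```python
import requests

resp = requests.post(
    "http://web:8000/api/login/",                      # hypothetical service name and path
    json={"username": "alice", "password": "secret"},  # hypothetical payload
    timeout=10,
)
print(resp.status_code)
```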

Python Requests: Check if Login was successful

I've looked all over the place for the solution I'm looking for but just can't find it. Basically, I'm developing a tool which takes a list of URLs from a text
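One hedged heuristic for a login check: POST the credentials with a Session, follow redirects, and then test either the status code or a marker that only appears when authenticated. The URL, form fields, and marker text are assumptions.

```python
import requests

with requests.Session() as s:
    resp = s.post(
        "https://example.com/login",                       # hypothetical login URL
        data={"username": "alice", "password": "secret"},  # hypothetical form fields
        allow_redirects=True,
        timeout=10,
    )
    logged_in = resp.ok and "Logout" in resp.text          # marker text is an assumption
    print("login ok" if logged_in else "login failed")
```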

python-requests making a GET instead of POST request

I have a daily cron which handles some of the recurring events at my app, and from time to time I notice one weird error that pops up in logs. The cron, among o
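A frequent cause of a POST silently turning into a GET is a redirect (for example http to https, or a missing trailing slash): the client replays the redirected request as GET. Inspecting response.history shows whether that happened; the URL below is a placeholder.

```python
import requests

resp = requests.post("http://example.com/api/task", data={"id": 1}, timeout=10)

# Each entry in history is a redirect hop that happened before the final request.
for hop in resp.history:
    print(hop.status_code, hop.url)
print(resp.request.method, resp.url)  # the method that actually reached the final URL
```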

How to deal with 401 (unauthorised) in python requests

What I want to do is GET from a site and if that request returns a 401, then redo my authentication wiggle (which may be out of date) and try again. But I don't
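A minimal sketch of the retry-on-401 pattern, assuming the re-authentication step can be wrapped in a helper (the token refresh below is a placeholder for whatever the real "wiggle" is):

```python
import requests

def fetch_new_token():
    # Placeholder for the real re-authentication step.
    return "fresh-token"

def get_with_reauth(session, url):
    # Try once; on 401, refresh credentials and retry a single time.
    resp = session.get(url, timeout=10)
    if resp.status_code == 401:
        session.headers["Authorization"] = "Bearer " + fetch_new_token()
        resp = session.get(url, timeout=10)
    resp.raise_for_status()
    return resp

with requests.Session() as s:
    print(get_with_reauth(s, "https://httpbin.org/get").status_code)
```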

Python requests speed up using keep-alive

In the HTTP protocol you can send many requests over one socket using keep-alive and then receive the responses from the server at once, which will significantly spe
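With requests the way to get keep-alive is a Session, which pools connections via urllib3 and reuses the open socket for subsequent requests to the same host; note that requests still waits for each response rather than pipelining, so the saving is the skipped handshakes, not true parallelism. A minimal sketch:

```python
import requests

urls = ["https://httpbin.org/get"] * 5

with requests.Session() as s:   # one pooled, keep-alive connection per host
    for url in urls:
        r = s.get(url, timeout=10)
        print(r.status_code)
```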

Scrape information off a complicated table

I need to scrape data off the seasons stats table of this website: https://fantasy.espn.com/basketball/league/standings?leagueId=1878319 I need to scrape data o
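The standings page is rendered from a JSON API, and replaying that call is usually easier than parsing the table itself. The endpoint pattern and view parameter below are assumptions from memory; the reliable way to get them is to copy the XHR request shown in the browser's DevTools for that league page.

```python
import requests

league_id = 1878319
url = (
    "https://fantasy.espn.com/apis/v3/games/fba/seasons/2020/"
    f"segments/0/leagues/{league_id}"
)  # endpoint pattern is an assumption; confirm in DevTools
resp = requests.get(url, params={"view": "mStandings"}, timeout=10)
resp.raise_for_status()
print(list(resp.json().keys()))
```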