Python web scraping with HTTP/S proxies shows my own IP

I really appreciate you reading this, any answer helps :)

When I use my proxies with Selenium (they support HTTP and HTTPS) to connect to an HTTPS page, it loads successfully with the proxy's IP. But when I use a session from the Python requests library with the same proxies configured as HTTP proxies, the website I'm trying to reach shows my own IP, not the proxy's; it probably redirects me to the HTTPS version of the page.

So what I did then was try to use the proxies as HTTPS proxies, which gives me this error:

requests.exceptions.ProxyError: HTTPSConnectionPool(host='website here', port=443): Max retries exceeded with url: / (Caused by ProxyError('Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP.'))

Which just comes down to the fact that I can't use the proxies as HTTPS proxies. Nevertheless, I'm able to use them as HTTPS proxies when I use them in my browser, just not with the requests library. I've also tried disabling SSL certificate verification; the website still shows that my own IP made the request, not the proxy's.
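For context on the redirect behavior described above: requests picks a proxy by matching the target URL's scheme against the keys of the proxies dict, so a dict with only an 'http' entry sends any https:// request (including one reached through a redirect) directly from your own IP. A minimal stdlib sketch of that lookup (the pick_proxy helper is hypothetical, just mimicking the scheme-keyed mapping):

```python
from urllib.parse import urlparse

def pick_proxy(url, proxies):
    """Mimic how requests chooses a proxy: by the URL's scheme."""
    return proxies.get(urlparse(url).scheme)

proxies = {'http': 'http://192.3.233.133:3128'}  # example address from the question

# The plain-http request would be routed through the proxy...
print(pick_proxy('http://instagc.com/', proxies))   # http://192.3.233.133:3128
# ...but after a redirect to https://, no key matches, so the
# connection is made directly and the site sees your real IP.
print(pick_proxy('https://instagc.com/', proxies))  # None
```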

I've tried 3 different proxy providers, so that's not the problem: they all promise HTTP/S proxies, and I can use the proxies in my browser to access HTTPS websites correctly.

import requests

newsession = requests.Session()
newsession.proxies = {'http': "http://192.3.233.133:3128"}
print(newsession.get('http://instagc.com', verify=False).text)

The proxy above is just an example; you can't use it.
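If the scheme matching is the culprit, the usual fix is to map both schemes to the proxy while keeping the http:// scheme in the proxy URL itself: requests then tunnels HTTPS traffic through the HTTP proxy with a CONNECT request, so no https:// proxy URL is needed. A sketch under that assumption (the proxy address is the placeholder from the question; the IP-echo URL is one common choice, not from the original post):

```python
import requests

PROXY = "http://192.3.233.133:3128"  # placeholder from the question; substitute a live proxy

session = requests.Session()
# Both schemes point at the same proxy. The value keeps the http://
# scheme because requests tunnels https:// requests through an HTTP
# proxy via CONNECT rather than requiring an https:// proxy URL.
session.proxies = {"http": PROXY, "https": PROXY}

# With a working proxy this should print the proxy's IP, not yours:
# print(session.get("https://api.ipify.org", timeout=10).text)
```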

I'd really appreciate any help. Thank you so much for reading this, have a good day!



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
