How do I make concurrent requests with Python Eventlet?
The client uses eventlet with the following code:

```python
import eventlet
import urllib.request

urls = [
    "http://localhost:5000/",
    "http://localhost:5000/",
    "http://localhost:5000/",
]

def fetch(url: str) -> bytes:
    return urllib.request.urlopen(url).read()

pool = eventlet.GreenPool(1000)
for body in pool.imap(fetch, urls):
    print("got body", len(body), body)
```
The server side is built with FastAPI. So that we can tell whether the client's requests really run concurrently, the handler adds a 3-second delay:
```python
from fastapi import FastAPI
import uvicorn
import time

app = FastAPI()

@app.get('/')
def root():
    time.sleep(3)
    return {"message": "Hello World"}

if __name__ == '__main__':
    uvicorn.run("api:app", host="0.0.0.0", port=5000)
```
Start the server first, then run the client under Linux's `time` command to see how long it takes:
```
got body 25 b'{"message": "Hello World"}'
got body 25 b'{"message": "Hello World"}'
got body 25 b'{"message": "Hello World"}'
python -u "013.py"  0.25s user 0.01s system 2% cpu 9.276 total
```
As you can see, it took about 9 seconds instead of 3, so `eventlet` is not running the requests concurrently. Am I using it wrong? What is the correct way to do this?
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow