Don't understand OpenSSL API error handling
I'm implementing a little HTTP client using OpenSSL, and I'm trying to handle "connection timed out" errors gracefully. By gracefully, I mean I want to print a nice, human-readable message that says "Connection Timed Out." Unfortunately, the error handling in OpenSSL isn't making sense to me. Here's what happens:
I create a nonblocking socket and deliberately connect to a port that I know won't respond, in order to test the error handling of "connection timed out." I make the socket into a nonblocking SSL channel.
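For context, a minimal sketch of that setup might look like the following. This isn't my exact code: the address, port, and lack of error checking are just illustrative, and the SSL_CTX is assumed to be created elsewhere.

#include <fcntl.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <openssl/ssl.h>

/* Simplified setup: a nonblocking TCP socket, connect() started toward an
   address that never answers, then wrapped in an SSL object. Error checks
   omitted for brevity. */
static SSL *start_connect(SSL_CTX *ctx)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);   /* nonblocking socket */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4433);                               /* placeholder port */
    inet_pton(AF_INET, "10.255.255.1", &addr.sin_addr);        /* placeholder unresponsive address */
    connect(fd, (struct sockaddr *)&addr, sizeof(addr));       /* returns -1, errno == EINPROGRESS */

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);                                       /* the nonblocking SSL channel */
    return ssl;
}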
Then I call SSL_connect. It returns -1 as expected. I call SSL_get_error to get more information about the error. It returns SSL_ERROR_WANT_WRITE, as expected: it's waiting for the connection to time out, and that takes a while. So far so good.
I keep calling SSL_connect until, finally, SSL_get_error returns SSL_ERROR_SYSCALL. Again, this is what I expect: the underlying connect system call should eventually fail. So far so good.
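In outline, the retry loop looks something like this (again a simplified sketch rather than my exact code; a real client would poll() or select() on the socket between retries instead of spinning):

/* Retry SSL_connect while SSL_get_error reports WANT_READ/WANT_WRITE,
   and stop once it reports SSL_ERROR_SYSCALL. */
static int do_handshake(SSL *ssl)
{
    for (;;) {
        int ret = SSL_connect(ssl);
        if (ret == 1)
            return 0;                       /* handshake completed */

        int err = SSL_get_error(ssl, ret);
        if (err == SSL_ERROR_WANT_READ || err == SSL_ERROR_WANT_WRITE)
            continue;                       /* not ready yet; poll() here in real code */

        if (err == SSL_ERROR_SYSCALL)
            return -1;                      /* underlying syscall failed: the timeout case */

        return -1;                          /* some other fatal error */
    }
}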
Finally, and this is the part that isn't working for me, I try to get the actual error message. Here is the code I'm using:
const char *file, *data;
int line, flags;
unsigned long code =
    ERR_get_error_line_data(&file, &line, &data, &flags);
To my surprise, this returns zero, meaning there are no more errors in the error queue. I wasn't expecting that. What I was expecting was an error code with the property ERR_GET_REASON(code) == ETIMEDOUT, so that I could pass ETIMEDOUT to strerror and get the actual error message. It seems weird to me that there's nothing at all in the error queue, and I don't understand why.
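For completeness, this is the kind of reporting code I was hoping to end up with (it doesn't work as written, because the queue turns out to be empty at this point):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <openssl/err.h>

/* What I expected to be able to do: pull the error off the queue, see that
   its reason is ETIMEDOUT, and print a friendly message. In reality
   ERR_get_error_line_data() returns 0 here. */
static void report_error(void)
{
    const char *file, *data;
    int line, flags;
    unsigned long code = ERR_get_error_line_data(&file, &line, &data, &flags);
    if (code != 0 && ERR_GET_REASON(code) == ETIMEDOUT)
        printf("Connection Timed Out: %s\n", strerror(ETIMEDOUT));
}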
