Apparently it can EAGAIN on non-blocking connections... I don't think
LibreSSL's TLS library does this, but something to keep in mind if it
doesn't work for somebody.
tls_read() and tls_write() may return TLS_WANT_POLLIN or TLS_WANT_POLLOUT
if data isn't ready to be read or written yet. We have to account for this
by converting it to EAGAIN, which is how a typical read() or write()
function should behave.
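A minimal sketch of that conversion, assuming a read()-style wrapper
around libtls (the wrapper name here is illustrative, not the actual
Telodendria API):

    #include <errno.h>
    #include <sys/types.h>

    #include <tls.h>

    /* Illustrative wrapper: make tls_read() behave like read() on a
     * non-blocking socket. */
    static ssize_t
    TlsReadWrapper(struct tls *ctx, void *buf, size_t nBytes)
    {
        ssize_t ret = tls_read(ctx, buf, nBytes);

        if (ret == TLS_WANT_POLLIN || ret == TLS_WANT_POLLOUT)
        {
            /* Data isn't ready yet; report EAGAIN the way a plain
             * read() would. */
            errno = EAGAIN;
            return -1;
        }

        return ret;
    }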
Also installed a SIGPIPE handler; we do not want to be terminated by
SIGPIPE, and it's safe to ignore this signal because the broken-pipe
condition should already be handled thoroughly in the code.
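For reference, a sketch of the simplest way to do this, assuming the
standard signal(3) approach (the actual commit may wire it up
differently):

    #include <signal.h>

    static void
    IgnoreSigpipe(void)
    {
        /* Ignore SIGPIPE so a peer closing its socket doesn't
         * terminate the server; write() fails with EPIPE instead,
         * which the code already handles. */
        signal(SIGPIPE, SIG_IGN);
    }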
This is useful for having a TLS and a non-TLS port, like Synapse does.
I verified that the multiple-server setup does in fact work as intended,
although the TLS server part is broken; I must be doing something
incorrectly with LibreSSL in setting up the server.
This way, we can still set the debug level in the configuration and not
see the log absolutely flooded with memory allocations and whatnot.
This is helpful because I want debug messages to show up in development
but not in production, and having all the memory logging makes it
almost impossible to pick anything else out of the log. I want the
feature available, just not on by default, because it's only useful in
limited circumstances.
The standard use case for this is going to be running a TLS and a non-TLS
HTTP server. I can't see a need for *more* than two, but it is theoretically
possible.
We shouldn't have to change anything with the database; it should
suffice to simply spin up more HTTP servers, and they should interact
with each other the same way a single HTTP server with multiple
threads does.
This is the easiest and cleanest way to get logging into some of the
fundamental APIs, such as the database and TLS APIs. We don't want to
have to pass logging functions to those, but they can safely use the
global logging configuration.
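A minimal sketch of the idea, assuming an opaque LogConfig type; the
function names are illustrative, not necessarily the real Telodendria
API:

    typedef struct LogConfig LogConfig;

    static LogConfig *globalLog;

    /* Called once at startup with the configured logger. */
    void
    LogConfigGlobalSet(LogConfig * config)
    {
        globalLog = config;
    }

    /* Low-level modules (database, TLS) call this instead of having
     * a LogConfig threaded through every function signature. */
    LogConfig *
    LogConfigGlobal(void)
    {
        return globalLog;
    }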
Not only does this make us more POSIX-compliant, it actually makes
things a lot easier, because TLS implementations will need to be able
to access the trusted certificates file, which most likely will not
live in the data directory.
Both do buffered reads and writes, but IoCopy() uses IoRead() and
IoWrite() directly, whereas StreamCopy() relies on StreamGetc() and
StreamPutc(), which manipulate the stream buffers.
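Roughly, the IoCopy() side works like this (the Io signatures below are
assumed read()/write()-style declarations; short-write handling is
omitted for brevity):

    #include <sys/types.h>

    typedef struct Io Io;

    extern ssize_t IoRead(Io * io, void *buf, size_t nBytes);
    extern ssize_t IoWrite(Io * io, void *buf, size_t nBytes);

    /* Sketch: shuttle whole buffers between two Io objects without
     * touching any Stream-level buffering. */
    ssize_t
    IoCopySketch(Io * out, Io * in)
    {
        char buf[4096];
        ssize_t nRead;
        ssize_t total = 0;

        while ((nRead = IoRead(in, buf, sizeof(buf))) > 0)
        {
            if (IoWrite(out, buf, nRead) < 0)
            {
                return -1;
            }

            total += nRead;
        }

        return nRead < 0 ? -1 : total;
    }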
If we haven't read any bytes yet, then we try a few times, a few ms
apart, to see if we get anything. If not, treat it as an EOF.
Otherwise, read bytes until we get an EOF or EAGAIN. EAGAIN after we
have already read some bytes is treated as an EOF immediately.
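A sketch of that logic on a non-blocking stdio stream; the retry count
and delay are illustrative, not the actual values:

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Read up to max bytes, applying the retry-then-EOF rule
     * described above. Not the real implementation. */
    size_t
    ReadWithRetry(FILE * in, char *buf, size_t max)
    {
        size_t total = 0;
        int attempts = 0;

        while (total < max)
        {
            int c;

            errno = 0;
            c = fgetc(in);

            if (c == EOF)
            {
                if (ferror(in) && errno == EAGAIN &&
                    total == 0 && attempts < 5)
                {
                    /* Nothing read yet; wait a few ms, try again. */
                    attempts++;
                    clearerr(in);
                    usleep(5 * 1000);
                    continue;
                }

                /* Real EOF, a hard error, or EAGAIN after we have
                 * already read bytes: stop here. */
                break;
            }

            buf[total++] = (char) c;
        }

        return total;
    }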
These functions previously operated on the assumption that fgetc()
would block; however, it will not block on HttpServer streams because
those are non-blocking. They now check error conditions properly
instead of failing prematurely.
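Concretely, the error-condition check boils down to something like this
helper (a sketch, not the actual function):

    #include <errno.h>
    #include <stdio.h>

    /* Sketch: distinguish "no data yet" from a real EOF or error on
     * a non-blocking stream. Returns 1 if the caller should retry
     * later, 0 on a genuine EOF or I/O error. */
    static int
    StreamWouldBlock(FILE * stream)
    {
        if (feof(stream))
        {
            return 0;               /* Real end of stream. */
        }

        if (ferror(stream) && errno == EAGAIN)
        {
            clearerr(stream);       /* No data yet; not a failure. */
            return 1;
        }

        return 0;                   /* Genuine I/O error. */
    }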
You might be asking why I would write a simple curl replacement
when curl does the job just fine. Well, the most immediate reason is
to test the HttpClient API, but since Telodendria's goal is to not
be dependent on any third-party code if at all possible, it makes
sense to have a simple HTTP client to use not only for testing
Telodendria, but also for configuring it. When we move the
configuration to the database, we'll ship a script that uses this
tool to allow admins to easily submit API requests.
Do not be concerned that HttpClient does not support TLS yet. TLS
support is necessary for federation to work, so it is coming
eventually.
The spec says that a username can be either just the localpart, or a
localpart and a server. This commit now ensures that the login endpoint
actually handles usernames properly by calling the appropriate parsing
functions.
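A rough sketch of the localpart/server split, per the spec's
@localpart:server format; this is illustrative only, not the actual
parsing function:

    #include <string.h>

    /* Illustrative parser: accepts "alice", "@alice", and
     * "@alice:example.org". Returns 0 on success, -1 if a part does
     * not fit in the caller's buffers. */
    int
    ParseUserId(const char *input, char *localpart, size_t lpSize,
                char *server, size_t srvSize)
    {
        const char *colon;
        size_t lpLen;

        if (*input == '@')
        {
            input++;               /* Strip the sigil if present. */
        }

        colon = strchr(input, ':');
        lpLen = colon ? (size_t) (colon - input) : strlen(input);

        if (lpLen >= lpSize)
        {
            return -1;
        }

        memcpy(localpart, input, lpLen);
        localpart[lpLen] = '\0';

        if (colon)
        {
            if (strlen(colon + 1) >= srvSize)
            {
                return -1;
            }
            strcpy(server, colon + 1);
        }
        else
        {
            server[0] = '\0';      /* No server part given. */
        }

        return 0;
    }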
This commit should fix user-interactive authentication for dummy flows,
but I still have to implement a few more flows, including password and
refresh token flows. I also have to fix the cleanup logic: when do we
purge UIA sessions?
This implementation just keeps the refresh token and only updates the
access token. The spec says that this is allowed. There's really no
reason to do this, other than the fact that it's easier.