Adding in curl and openssl repos

2025-08-14 12:09:30 -04:00
parent af2117b574
commit 0ace93e303
21174 changed files with 3607720 additions and 2 deletions


@@ -0,0 +1,43 @@
#
# To run the demos when linked with a shared library (default) ensure that
# libcrypto and libssl are on the library path. For example to run the
# ddd-01-conn-blocking-tls demo:
#
# LD_LIBRARY_PATH=../../.. ./ddd-01-conn-blocking-tls
#
# Building ddd-06-mem-uv-tls and ddd-06-mem-uv-quic requires the libuv
# library and its header files. On Ubuntu, these are provided by the
# package "libuv1-dev".
TESTS_BASE = ddd-01-conn-blocking \
             ddd-02-conn-nonblocking \
             ddd-02-conn-nonblocking-threads \
             ddd-03-fd-blocking \
             ddd-04-fd-nonblocking \
             ddd-05-mem-nonblocking \
             ddd-06-mem-uv

TESTS = $(foreach x,$(TESTS_BASE),$(x)-tls $(x)-quic)

CFLAGS  = -I../../../include -g -Wall -Wsign-compare
LDFLAGS = -L../../..
LDLIBS  = -lcrypto -lssl

CC_CMD  = $(CC) $(CFLAGS) $(LDFLAGS) -o "$@" "$<" $(LDLIBS)

all: $(TESTS)

clean:
	rm -f $(TESTS) *.o

ddd-%-tls: ddd-%.c
	$(CC_CMD)

ddd-%-quic: ddd-%.c
	$(CC_CMD) -DUSE_QUIC

ddd-%-uv-tls: ddd-%-uv.c
	$(CC_CMD) -luv

ddd-%-uv-quic: ddd-%-uv.c
	$(CC_CMD) -luv -DUSE_QUIC


@@ -0,0 +1,120 @@
Demo-Driven Design
==================
The OpenSSL project from time to time must evolve its public API surface in
order to support new functionality and deprecate old functionality. When this
occurs, the changes to OpenSSL's public API must be planned, discussed and
agreed. One significant dimension to consider for any proposed API change is
how a broad spectrum of real-world OpenSSL applications uses the APIs which
exist today, as this determines the ways in which those applications will be
affected by the change, the extent of that impact, and the extent of the work
which codebases using OpenSSL will need to undertake to remain current with
best practices for OpenSSL API usage.
As such, it is useful for the OpenSSL project to have a good understanding of
the usage patterns common in codebases which use OpenSSL, so that it can
anticipate the impact of any evolution of its API on those codebases. This
directory seeks to maintain a set of **API usage demos** which demonstrate a
full spectrum of ways in which real-world applications use the OpenSSL APIs.
This allows the project to discuss any proposed API changes in terms of the
changes that would need to be made to each demo. Since the demos are
representative of a broad spectrum of real-world OpenSSL-based applications,
this ensures that API evolution is made both with reference to real-world API
usage patterns and with reference to the impact on existing applications.
These demos are therefore maintained in the OpenSSL repository because they are
useful both for current and for any future proposed API changes. The set of demos may
be expanded over time, and the demos in this directory at any one time constitute
a present body of understanding of API usage patterns, which can be used to plan
API changes.
For further background information on the premise of this approach, see [API
long-term evolution](https://github.com/openssl/openssl/issues/17939).
Scope
-----
The current emphasis is on client demos. Server support for QUIC is deferred to
subsequent OpenSSL releases, and therefore is (currently) out of scope for this
design exercise.
The demos also deliberately focus on aspects of libssl usage which are likely to
be relevant to QUIC and require changes; for example, how varied applications
have libssl perform network I/O, and how varied applications create sockets and
connections for use with libssl. The libssl API as a whole has a much larger
scope and includes numerous functions and features; the intention is
not to demonstrate all of these, because most of them will not be touched by
QUIC. For example, while many users of OpenSSL may make use of APIs for client
certificates or other TLS functionality, the use of QUIC is unlikely to have
implications for these APIs and demos demonstrating such functionality are
therefore out of scope.
[A report is available](REPORT.md) on the results of the DDD process following
the completion of the development of the QUIC MVP (minimum viable product).
Background
----------
These demos were developed after analysing the following open source
applications to identify libssl API usage patterns. The commonly occurring
patterns were then used to derive categories into which to classify the
applications:
| | Blk? | FD |
|------------------|------|----|
| mutt | S | AOSF |
| vsftpd | S | AOSF |
| exim | S | AOSFx |
| wget | S | AOSF |
| Fossil | S | BIOc |
| librabbitmq | A | BIOx |
| ngircd | A | AOSF |
| stunnel | A | AOSFx |
| Postfix | A | AOSF |
| socat | A | AOSF |
| HAProxy | A | BIOx |
| Dovecot | A | BIOm |
| Apache httpd | A | BIOx |
| UnrealIRCd | A | AOSF |
| wpa_supplicant | A | BIOm |
| icecast | A | AOSF |
| nginx | A | AOSF |
| curl | A | AOSF |
| Asterisk | A | AOSF |
| Asterisk (DTLS) | A | BIOm/x |
| pgbouncer | A | AOSF, BIOc |
* Blk: Whether the application uses blocking or non-blocking I/O.
  * S: Blocking (Synchronous)
  * A: Nonblocking (Asynchronous)
* FD: Whether the application creates and owns its own FD (a minimal sketch of
  the two most common patterns follows this list).
  * AOSF: Application owns, calls SSL_set_fd.
  * AOSFx: Application owns, calls SSL_set_[rw]fd, different FDs for read/write.
  * BIOs: Application creates a socket/FD BIO and calls SSL_set_bio.
    The application creates the connection itself.
  * BIOx: Application creates a BIO with a custom BIO method and calls SSL_set_bio.
  * BIOm: Application creates a memory BIO and does its own pumping to/from the
    actual socket, treating libssl as a pure state machine which does no I/O itself.
  * BIOc: Application uses BIO_s_connect-based methods such as BIO_new_ssl_connect
    and leaves connection establishment to OpenSSL.
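To make these categories concrete, the following minimal sketch contrasts the
two most common patterns, AOSF and BIOc. It is illustrative only (it omits
error handling and certificate configuration) and is not taken from any of the
applications listed above.

```c
#include <openssl/ssl.h>
#include <openssl/bio.h>

/* AOSF: the application owns the socket FD and hands it to libssl. */
static SSL *aosf_attach(SSL_CTX *ctx, int fd)
{
    SSL *ssl = SSL_new(ctx);

    if (ssl == NULL)
        return NULL;
    SSL_set_connect_state(ssl);
    SSL_set_fd(ssl, fd);    /* libssl performs network I/O on the app's FD */
    return ssl;
}

/* BIOc: connection establishment is left entirely to OpenSSL. */
static BIO *bioc_connect(SSL_CTX *ctx, const char *host_port)
{
    BIO *bio = BIO_new_ssl_connect(ctx);

    if (bio == NULL)
        return NULL;
    BIO_set_conn_hostname(bio, host_port);    /* e.g. "openssl.org:443" */
    return bio;
}
```
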
Demos
-----
The demos found in this directory are:
| | Type | Description |
|-----------------|-------|-------------|
| [ddd-01-conn-blocking](ddd-01-conn-blocking.c) | S-BIOc | A `BIO_s_connect`-based blocking example demonstrating exemplary OpenSSL API usage |
| [ddd-02-conn-nonblocking](ddd-02-conn-nonblocking.c) | A-BIOc | A `BIO_s_connect`-based nonblocking example demonstrating exemplary OpenSSL API usage, with use of a buffering BIO |
| [ddd-03-fd-blocking](ddd-03-fd-blocking.c) | S-AOSF | An `SSL_set_fd`-based blocking example demonstrating real-world OpenSSL API usage (corresponding to S-AOSF applications above) |
| [ddd-04-fd-nonblocking](ddd-04-fd-nonblocking.c) | A-AOSF | An `SSL_set_fd`-based non-blocking example demonstrating real-world OpenSSL API usage (corresponding to A-AOSF applications above) |
| [ddd-05-mem-nonblocking](ddd-05-mem-nonblocking.c) | A-BIOm | A non-blocking example based on use of a memory buffer to feed OpenSSL encrypted data (corresponding to A-BIOm applications above) |
| [ddd-06-mem-uv](ddd-06-mem-uv.c) | A-BIOm | A non-blocking example based on use of a memory buffer to feed OpenSSL encrypted data; uses libuv, a real-world async I/O library |
On Ubuntu, libuv can be obtained by installing the package "libuv1-dev".
Availability of a default certificate store is assumed. `SSL_CERT_DIR` may be
set when running the demos if necessary.


@@ -0,0 +1,340 @@
Report on the Conclusions of the QUIC DDD Process
=================================================
The [QUIC Demo-Driven Design process](README.md) was undertaken to meet the OMC
requirement to develop a QUIC API requiring only minimal changes for existing
applications to adapt their code to use QUIC. The demo-driven design
process developed a set of representative demos modelling a variety of common
OpenSSL usage patterns based on analysis of a broad spectrum of open source
software projects using OpenSSL.
As part of this process, a set of proposed diffs was produced. These proposed
diffs were the expected changes which would be needed to the baseline demos to
support QUIC, based on theoretical analysis of the minimum requirements to be
able to support QUIC. This analysis concluded that the changes needed to
applications could be kept very small in many circumstances, with only small
diffs against the baseline demos.
Following the development of the QUIC MVP, these demos have been revisited and
the correspondence of our actual final API and usage patterns with the planned
diffs has been reviewed.
This document discusses the planned changes and the actual changes for each demo
and draws conclusions on the level of disparity.
Since tracking a set of diffs separately is unwieldy, both the planned and
unplanned changes have been folded into the original baseline demo files guarded
with `#ifdef USE_QUIC`. These files are therefore informative to application
writers, as they provide a clear view of what is different when using QUIC.
(The originally planned changes, and the final changes, are added in
separate, clearly-labelled commits; to view the originally planned changes only,
view the commit history for a given demo file.)
ddd-01-conn-blocking
--------------------
This demo exists to demonstrate the simplest possible usage of OpenSSL, whether
with TLS or QUIC.
### Originally planned changes
The originally planned change to enable applications for QUIC amounted to just a
single line:
```diff
+ ctx = SSL_CTX_new(QUIC_client_method());
- ctx = SSL_CTX_new(TLS_client_method());
```
### Actual changes
The following additional changes needed to be made:
- `QUIC_client_method` was renamed to `OSSL_QUIC_client_method` for namespacing
reasons.
- A call to `SSL_set_alpn_protos` to configure ALPN was added. This is necessary
because QUIC mandates the use of ALPN, and this was not noted during the
DDD process.
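Taken together, the final ddd-01 changes can be sketched as follows. This
mirrors the demo code; the "dummy" ALPN protocol name is simply the placeholder
used by the demos, not something a real application should advertise.

```c
#include <openssl/ssl.h>

static SSL_CTX *make_quic_ctx(void)
{
    /* Was: SSL_CTX_new(TLS_client_method()) */
    return SSL_CTX_new(OSSL_QUIC_client_method());
}

static int set_demo_alpn(SSL *ssl)
{
    static const unsigned char alpn[] = {5, 'd', 'u', 'm', 'm', 'y'};

    /* Note: SSL_set_alpn_protos() returns 0 on success and 1 on failure. */
    return SSL_set_alpn_protos(ssl, alpn, sizeof(alpn)) == 0;
}
```
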
ddd-02-conn-nonblocking
-----------------------
This demo exists to demonstrate simple non-blocking usage. As with
ddd-01-conn-blocking, the name resolution process is managed by `BIO_s_connect`.
It also arbitrarily adds a `BIO_f_buffer` pushed onto the BIO stack
as this is a common application usage pattern.
### Originally planned changes
The originally planned changes to enable applications for QUIC amounted to:
- Change of method (as for ddd-01-conn-blocking);
- Use of a `BIO_f_dgram_buffer` BIO method instead of a `BIO_f_buffer`;
- Use of a `BIO_get_poll_fd` function to get the FD to poll rather than
`BIO_get_fd`;
- A change to how the `POLLIN`/`POLLOUT`/`POLLERR` flags to pass to poll(2)
need to be determined.
- Additional functions in application code to determine event handling
timeouts related to QUIC (`get_conn_pump_timeout`) and to pump
the QUIC event loop (`pump`).
- Timeout computation code which involves merging and comparing different
timeouts and calling `pump` as needed, based on deadlines reported
by libssl.
Note that some of these changes are unnecessary when using the thread assisted
mode (see the variant ddd-02-conn-nonblocking-threads below).
### Actual changes
The following additional changes needed to be made:
- Change of method name (as for ddd-01-conn-blocking);
- Use of ALPN (as for ddd-01-conn-blocking);
- The strategy for how to expose pollable OS resource handles
to applications to determine I/O readiness has changed substantially since the
original DDD process. As such, applications now use `BIO_get_rpoll_descriptor`
and `BIO_get_wpoll_descriptor` to determine I/O readiness, rather than the
originally hypothesised `SSL_get_poll_fd`.
- The strategy for how to determine when to poll for `POLLIN`, when to
poll for `POLLOUT`, etc. has changed since the original DDD process.
This information is now exposed via `SSL_net_read_desired` and
`SSL_net_write_desired`.
- The API to expose the event handling deadline for the QUIC engine
has evolved since the original DDD process. The new API
`SSL_get_event_timeout` is used, rather than the originally hypothesised
`BIO_get_timeout`/`SSL_get_timeout`.
- The API to perform QUIC event processing has been renamed to be
more descriptive. It is now called `SSL_handle_events` rather than
the originally hypothesised `BIO_pump`/`SSL_pump`.
The following change was foreseen to be necessary, but turned out not to be:
- The need to change code which pushes a `BIO_f_buffer()` after an SSL BIO was
  foreseen, as use of buffering on the network side is unworkable with QUIC.
  This turned out not to be necessary, since the BIO_push() call can simply be
  rejected. The pushed buffer is still freed when the SSL BIO is freed; since
  it is never used, it remains desirable for applications to remove this code.
(A sketch of the resulting event-handling pattern follows.)
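Pulling the items above together, here is a minimal sketch of a single
event-loop iteration under the final API, assuming a connected QUIC SSL object
and a pollable FD already obtained via the poll descriptor APIs; error handling
is omitted.

```c
#include <sys/poll.h>
#include <sys/time.h>
#include <openssl/ssl.h>

static void service_once(SSL *ssl, int fd)
{
    struct pollfd pfd = {0};
    struct timeval tv;
    int is_infinite, timeout_ms = -1;

    /* Ask libssl which directions it currently wants to poll for. */
    pfd.fd     = fd;
    pfd.events = (SSL_net_read_desired(ssl) ? POLLIN : 0)
               | (SSL_net_write_desired(ssl) ? POLLOUT : 0);

    /* Bound the wait by libssl's own event deadline, if it has one. */
    if (SSL_get_event_timeout(ssl, &tv, &is_infinite) && !is_infinite)
        timeout_ms = tv.tv_sec * 1000 + tv.tv_usec / 1000;

    poll(&pfd, 1, timeout_ms);

    /* Let libssl advance the QUIC state machine (timers, retransmissions). */
    SSL_handle_events(ssl);
}
```
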
ddd-02-conn-nonblocking-threads
-------------------------------
This is a variant of the ddd-02-conn-nonblocking demo. The base is the same, but
the changes made are different. The use of thread-assisted mode, in which an
internal assist thread is used to perform QUIC event handling, enables an
application to make fewer changes than are needed in the ddd-02-conn-nonblocking
demo.
### Originally planned changes
The originally planned changes to enable applications for QUIC amounted to:
- Change of method, this time using method `QUIC_client_thread_method` rather
than `QUIC_client_method`;
- Use of a `BIO_get_poll_fd` function to get the FD to poll rather than
`BIO_get_fd`;
- A change to how the `POLLIN`/`POLLOUT`/`POLLERR` flags to pass to poll(2)
need to be determined.
Note that this is a substantially smaller list of changes than for
ddd-02-conn-nonblocking.
### Actual changes
The following additional changes needed to be made:
- Change of method name (`QUIC_client_thread_method` was renamed to
`OSSL_QUIC_client_thread_method` for namespacing reasons);
- Use of ALPN (as for ddd-01-conn-blocking);
- Use of `BIO_get_rpoll_descriptor` rather than `BIO_get_poll_fd` (as for
ddd-02-conn-nonblocking).
- Use of `SSL_net_read_desired` and `SSL_net_write_desired` (as for
ddd-02-conn-nonblocking).
ddd-03-fd-blocking
------------------
This demo is similar to ddd-01-conn-blocking but uses a file descriptor passed
directly by the application rather than BIO_s_connect.
### Originally planned changes
- Change of method (as for ddd-01-conn-blocking);
- The arguments to the `socket(2)` call are changed from `(AF_INET, SOCK_STREAM,
IPPROTO_TCP)` to `(AF_INET, SOCK_DGRAM, IPPROTO_UDP)`.
### Actual changes
The following additional changes needed to be made:
- Change of method name (as for ddd-01-conn-blocking);
- Use of ALPN (as for ddd-01-conn-blocking).
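For reference, the transport-level change planned for this demo (and retained
in the final code) can be sketched as follows, mirroring the `#ifdef USE_QUIC`
guard used in the demo source; QUIC runs over UDP, so only the socket type and
protocol change, and connect(2) is still used afterwards to fix the peer
address.

```c
#include <sys/socket.h>
#include <netinet/in.h>

static int create_transport_socket(void)
{
#ifdef USE_QUIC
    return socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
#else
    return socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
#endif
}
```
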
ddd-04-fd-nonblocking
---------------------
This demo is similar to ddd-02-conn-nonblocking but uses a file descriptor
passed directly by the application rather than BIO_s_connect.
### Originally planned changes
- Change of method (as for ddd-01-conn-blocking);
- The arguments to the `socket(2)` call are changed from `(AF_INET, SOCK_STREAM,
IPPROTO_TCP)` to `(AF_INET, SOCK_DGRAM, IPPROTO_UDP)`;
- A change to how the `POLLIN`/`POLLOUT`/`POLLERR` flags to pass to poll(2)
need to be determined.
- Additional functions in application code to determine event handling
timeouts related to QUIC (`get_conn_pump_timeout`) and to pump
the QUIC event loop (`pump`).
- Timeout computation code which involves merging and comparing different
timeouts and calling `pump` as needed, based on deadlines reported
by libssl.
### Actual changes
The following additional changes needed to be made:
- Change of method name (as for ddd-01-conn-blocking);
- Use of ALPN (as for ddd-01-conn-blocking);
- `SSL_get_timeout` replaced with `SSL_get_event_timeout` (as for
ddd-02-conn-nonblocking);
- `SSL_pump` renamed to `SSL_handle_events` (as for ddd-02-conn-nonblocking);
- The strategy for how to determine when to poll for `POLLIN`, when to
poll for `POLLOUT`, etc. has changed since the original DDD process.
This information is now exposed via `SSL_net_read_desired` and
`SSL_net_write_desired` (as for ddd-02-conn-nonblocking).
ddd-05-mem-nonblocking
----------------------
This demo is more elaborate. It uses memory buffers created and managed by the
application as an intermediary between libssl and the network, which is a
common usage pattern. Managing this pattern with QUIC is more involved, since
datagram semantics on the network channel need to be maintained.
### Originally planned changes
- Change of method (as for ddd-01-conn-blocking);
- Call to `BIO_new_bio_pair` is changed to `BIO_new_dgram_pair`, which
provides a bidirectional memory buffer BIO with datagram semantics.
- A change to how the `POLLIN`/`POLLOUT`/`POLLERR` flags to pass to poll(2)
need to be determined.
- Potential changes to buffer sizes used by applications to buffer
datagrams, if those buffers are smaller than 1472 bytes.
- The arguments to the `socket(2)` call are changed from `(AF_INET, SOCK_STREAM,
IPPROTO_TCP)` to `(AF_INET, SOCK_DGRAM, IPPROTO_UDP)`;
### Actual changes
The following additional changes needed to be made:
- Change of method name (as for ddd-01-conn-blocking);
- Use of ALPN (as for ddd-01-conn-blocking);
- The API to construct a `BIO_s_dgram_pair` ended up being named
`BIO_new_bio_dgram_pair` rather than `BIO_new_dgram_pair`;
- Use of `SSL_net_read_desired` and `SSL_net_write_desired` (as for
ddd-02-conn-nonblocking).
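For illustration, a minimal sketch of wiring a QUIC SSL object to an
application-managed datagram BIO pair using the final API name; buffer sizes of
0 request the defaults, error handling is omitted, and this is a sketch rather
than the demo's actual code.

```c
#include <openssl/bio.h>
#include <openssl/ssl.h>

static int attach_dgram_pair(SSL *ssl, BIO **app_side)
{
    BIO *ssl_side = NULL;

    if (!BIO_new_bio_dgram_pair(&ssl_side, 0, app_side, 0))
        return 0;

    /*
     * libssl reads/writes datagrams via ssl_side; the application pumps
     * *app_side to and from its own UDP socket.
     */
    SSL_set_bio(ssl, ssl_side, ssl_side);
    return 1;
}
```
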
ddd-06-mem-uv
-------------
This demo is the most elaborate of the set. It uses a real-world asynchronous
I/O reactor, namely libuv (the engine used by Node.js). In doing so it seeks to
demonstrate and prove the viability of our API design with a real-world
asynchronous I/O system. It operates wholly in non-blocking mode and uses memory
buffers on either side of the QUIC stack to feed data to and from the
application and the network.
### Originally planned changes
- Change of method (as for ddd-01-conn-blocking);
- Various changes to use of libuv needed to switch to using UDP;
- Additional use of libuv to configure a timer event;
- Call to `BIO_new_bio_pair` is changed to `BIO_new_dgram_pair`
(as for ddd-05-mem-nonblocking);
- Some reordering of code required by the design of libuv.
### Actual changes
The following additional changes needed to be made:
- Change of method name (as for ddd-01-conn-blocking);
- Use of ALPN (as for ddd-01-conn-blocking);
- `BIO_new_dgram_pair` renamed to `BIO_new_bio_dgram_pair` (as for
ddd-05-mem-nonblocking);
- `SSL_get_timeout` replaced with `SSL_get_event_timeout` (as for
ddd-02-conn-nonblocking);
- `SSL_pump` renamed to `SSL_handle_events` (as for ddd-02-conn-nonblocking);
- Fixes to use of libuv based on a corrected understanding
of its operation, and changes that necessarily ensue.
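Of the planned changes, the libuv timer wiring has no TLS-side analogue, so a
minimal sketch of it may be useful; this assumes libuv 1.x and an illustrative
APP_CONN layout, not the demo's actual structure.

```c
#include <sys/time.h>
#include <uv.h>
#include <openssl/ssl.h>

typedef struct {
    SSL        *ssl;
    uv_timer_t  timer;    /* timer.data points back at this struct */
} APP_CONN;

static void on_timer(uv_timer_t *timer);

/* Rearm the libuv timer from libssl's event deadline, if any. */
static void rearm_timer(APP_CONN *conn)
{
    struct timeval tv;
    int is_infinite;

    if (SSL_get_event_timeout(conn->ssl, &tv, &is_infinite) && !is_infinite) {
        uint64_t ms = tv.tv_sec * 1000 + tv.tv_usec / 1000;

        uv_timer_start(&conn->timer, on_timer, ms, 0);
    }
}

static void on_timer(uv_timer_t *timer)
{
    APP_CONN *conn = timer->data;

    SSL_handle_events(conn->ssl);    /* advance the QUIC event loop */
    rearm_timer(conn);
}

static void conn_timer_init(APP_CONN *conn, uv_loop_t *loop)
{
    uv_timer_init(loop, &conn->timer);
    conn->timer.data = conn;
    rearm_timer(conn);
}
```
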
Conclusions
-----------
The DDD process has successfully met its objective of delivering a QUIC API
which can be used with only minimal API changes. The additional changes, on top
of those originally planned, which were required to successfully execute the
demos using QUIC were highly limited in scope and mostly minor. The sum total of
the changes required for each demo (both planned and additional), as denoted in
each DDD demo file under `#ifdef USE_QUIC` guards, is both minimal and limited
in scope.
“Minimal” and “limited” are distinct criteria. If inexorable technical
requirements dictate, an enormous set of changes to an application could be
considered “minimal”. The changes required to representative applications, as
demonstrated by the DDD demos, are not merely minimal but also limited.
While the extent of the necessary changes varies with the sophistication of
each demo and the kind of application usage pattern it represents, some demos
demonstrate exceptionally small changesets; for example, ddd-01-conn-blocking
and ddd-02-conn-nonblocking-threads, with ddd-01-conn-blocking being enabled by
a single-line change, assuming ALPN is already configured.
This report concludes the DDD process for the single-stream QUIC client API
design, which sought to validate our API design and its ease of use for
existing applications seeking to adopt QUIC.


@@ -0,0 +1,80 @@
Windows-related issues
======================
Supporting Windows introduces some complications due to some "fun" peculiarities
of Windows's socket API.
In general, Windows does not provide a poll(2) call. WSAPoll(2) was introduced
in Vista and was supposed to provide this functionality, but it had a bug which
Microsoft refused to fix, making it rather pointless. However, Microsoft has now
fixed this bug in a build of Windows 10, so WSAPoll(2) is a viable method, but
only on fairly new versions of Windows.
Traditionally, polling has been done on Windows using select(). However, this
call works a little differently than on POSIX platforms. Whereas on POSIX
platforms select() accepts a bitmask of FDs, on Windows select() accepts a
structure which embeds a fixed-length array of socket handles. This is necessary
because sockets are NT kernel handles on Windows and thus are not allocated
contiguously like FDs. As such, Windows select() is actually very similar to
POSIX poll(), making select() a viable option for polling on Windows.
Neither select() nor poll() are, of course, high performance polling options.
Windows does not provide anything like epoll or kqueue. For high performance
network I/O, you are expected to use a Windows API called I/O Completion Ports
(IOCP).
Supporting these is a pain for applications designed around polling. The reason
is that IOCPs are a higher-level interface; it is easy to build an IOCP-like
interface on top of polling, but it is not really possible to build a
polling-like interface on top of IOCPs.
For this reason it is common for asynchronous I/O libraries to contain two
separate implementations of their APIs internally, or at least of a substantial
chunk of their code (e.g. libuv, nanomsg). It turns out to be easier to write
both a poll-based and an IOCP-based implementation of an I/O reactor than to try
to overcome the impedance discontinuities.
The difference between polling and IOCPs is that polling reports *readiness*
whereas IOCPs report *completion of an operation*. For example, in the IOCP
model, you make a read or write on a socket and an event is posted to the IOCP
when the read or write is complete. This is a fundamentally different model and
actually more similar to a high-level asynchronous I/O library such as libuv or
so on.
Evaluation of the existing demos and their applicability to Windows IOCP:
- ddd-01-conn-blocking: Blocking example, use of IOCP is not applicable.
- ddd-02-conn-nonblocking: Socket is managed by OpenSSL, and IOCP is not
supported.
- ddd-03-fd-blocking: Blocking example, use of IOCP is not applicable.
- ddd-04-fd-nonblocking: libssl is passed an FD with BIO_set_fd.
BIO_s_sock doesn't appear to support overlapped (that is, IOCP-based) I/O
as this requires use of special WSASend() and WSARecv() functions, rather
than standard send()/recv().
Since libssl already doesn't support IOCP for use of BIO_s_sock,
we might say here that any existing application using BIO_s_sock
obviously isn't trying to use IOCP, and therefore we don't need to
worry about the adaptability of this example to IOCP.
- ddd-05-mem-nonblocking: Since the application is in full control of passing
data from the memory BIO to the network, or vice versa, the application
can use IOCP if it wishes.
This is demonstrated in the following demo:
- ddd-06-mem-uv: This demo uses a memory BIO and libuv. Since libuv supports
IOCP, it proves that a memory BIO can be used to support IOCP-based usage.
Further, a cursory examination of code on GitHub seems to suggest that when
people do use IOCP with libssl, they do it using memory BIOs passed to libssl.
So ddd-05 and ddd-06 essentially demonstrate this use case, especially ddd-06 as
it uses IOCP internally on Windows.
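For illustration, a minimal sketch of that memory-BIO pumping pattern; net_bio
is the application's side of a BIO pair, and send_dgram() is a hypothetical
stand-in for the application's own socket layer (for example an IOCP completion
handler).

```c
#include <stddef.h>
#include <openssl/bio.h>

extern void send_dgram(const unsigned char *buf, size_t len);  /* hypothetical */

static void pump_outgoing(BIO *net_bio)
{
    unsigned char buf[1472];
    int n;

    /* Drain ciphertext produced by libssl and hand it to the socket layer. */
    while ((n = BIO_read(net_bio, buf, sizeof(buf))) > 0)
        send_dgram(buf, (size_t)n);
}

static void pump_incoming(BIO *net_bio, const unsigned char *dgram, int len)
{
    /* Feed ciphertext received from the network into libssl. */
    BIO_write(net_bio, dgram, len);
}
```
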
My conclusion here is that since libssl does not support IOCP in the first
place, we don't need to be particularly worried about this. But in the worst
case there are always workable solutions, as in demos 5 and 6.


@@ -0,0 +1,187 @@
#include <openssl/ssl.h>
/*
* Demo 1: Client — Managed Connection — Blocking
* ==============================================
*
* This is an example of (part of) an application which uses libssl in a simple,
* synchronous, blocking fashion. The functions show all interactions with
* libssl the application makes, and would hypothetically be linked into a
* larger application.
*/
/*
* The application is initializing and wants an SSL_CTX which it will use for
* some number of outgoing connections, which it creates in subsequent calls to
* new_conn. The application may also call this function multiple times to
* create multiple SSL_CTX.
*/
SSL_CTX *create_ssl_ctx(void)
{
SSL_CTX *ctx;
#ifdef USE_QUIC
ctx = SSL_CTX_new(OSSL_QUIC_client_method());
#else
ctx = SSL_CTX_new(TLS_client_method());
#endif
if (ctx == NULL)
return NULL;
/* Enable trust chain verification. */
SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
/* Load default root CA store. */
if (SSL_CTX_set_default_verify_paths(ctx) == 0) {
SSL_CTX_free(ctx);
return NULL;
}
return ctx;
}
/*
* The application wants to create a new outgoing connection using a given
* SSL_CTX.
*
* hostname is a string like "openssl.org:443" or "[::1]:443".
*/
BIO *new_conn(SSL_CTX *ctx, const char *hostname)
{
BIO *out;
SSL *ssl = NULL;
const char *bare_hostname;
#ifdef USE_QUIC
static const unsigned char alpn[] = {5, 'd', 'u', 'm', 'm', 'y'};
#endif
out = BIO_new_ssl_connect(ctx);
if (out == NULL)
return NULL;
if (BIO_get_ssl(out, &ssl) == 0) {
BIO_free_all(out);
return NULL;
}
if (BIO_set_conn_hostname(out, hostname) == 0) {
BIO_free_all(out);
return NULL;
}
/* Returns the parsed hostname extracted from the hostname:port string. */
bare_hostname = BIO_get_conn_hostname(out);
if (bare_hostname == NULL) {
BIO_free_all(out);
return NULL;
}
/* Tell the SSL object the hostname to check certificates against. */
if (SSL_set1_host(ssl, bare_hostname) <= 0) {
BIO_free_all(out);
return NULL;
}
#ifdef USE_QUIC
/* Configure ALPN, which is required for QUIC. */
if (SSL_set_alpn_protos(ssl, alpn, sizeof(alpn))) {
/* Note: SSL_set_alpn_protos returns 1 for failure. */
BIO_free_all(out);
return NULL;
}
#endif
return out;
}
/*
* The application wants to send some block of data to the peer.
* This is a blocking call.
*/
int tx(BIO *bio, const void *buf, int buf_len)
{
return BIO_write(bio, buf, buf_len);
}
/*
* The application wants to receive some block of data from
* the peer. This is a blocking call.
*/
int rx(BIO *bio, void *buf, int buf_len)
{
return BIO_read(bio, buf, buf_len);
}
/*
* The application wants to close the connection and free bookkeeping
* structures.
*/
void teardown(BIO *bio)
{
BIO_free_all(bio);
}
/*
* The application is shutting down and wants to free a previously
* created SSL_CTX.
*/
void teardown_ctx(SSL_CTX *ctx)
{
SSL_CTX_free(ctx);
}
/*
* ============================================================================
* Example driver for the above code. This is just to demonstrate that the code
* works and is not intended to be representative of a real application.
*/
int main(int argc, char **argv)
{
static char msg[384], host_port[300];
SSL_CTX *ctx = NULL;
BIO *b = NULL;
char buf[2048];
int l, mlen, res = 1;
if (argc < 3) {
fprintf(stderr, "usage: %s host port\n", argv[0]);
goto fail;
}
snprintf(host_port, sizeof(host_port), "%s:%s", argv[1], argv[2]);
mlen = snprintf(msg, sizeof(msg),
"GET / HTTP/1.0\r\nHost: %s\r\n\r\n", argv[1]);
ctx = create_ssl_ctx();
if (ctx == NULL) {
fprintf(stderr, "could not create context\n");
goto fail;
}
b = new_conn(ctx, host_port);
if (b == NULL) {
fprintf(stderr, "could not create connection\n");
goto fail;
}
l = tx(b, msg, mlen);
if (l < mlen) {
fprintf(stderr, "tx error\n");
goto fail;
}
for (;;) {
l = rx(b, buf, sizeof(buf));
if (l <= 0)
break;
fwrite(buf, 1, l, stdout);
}
res = 0;
fail:
if (b != NULL)
teardown(b);
if (ctx != NULL)
teardown_ctx(ctx);
return res;
}


@@ -0,0 +1,334 @@
#include <sys/poll.h>
#include <openssl/ssl.h>
/*
* Demo 2: Client — Managed Connection — Nonblocking
* ==============================================================
*
* This is an example of (part of) an application which uses libssl in an
* asynchronous, nonblocking fashion. The functions show all interactions with
* libssl the application makes, and would hypothetically be linked into a
* larger application.
*
* In this example, libssl still makes syscalls directly using an fd, which is
* configured in nonblocking mode. As such, the application can still be
* abstracted from the details of what that fd is (is it a TCP socket? is it a
* UDP socket?); this code passes the application an fd and the application
* simply calls back into this code when poll()/etc. indicates it is ready.
*/
typedef struct app_conn_st {
SSL *ssl;
BIO *ssl_bio;
int rx_need_tx, tx_need_rx;
} APP_CONN;
/*
* The application is initializing and wants an SSL_CTX which it will use for
* some number of outgoing connections, which it creates in subsequent calls to
* new_conn. The application may also call this function multiple times to
* create multiple SSL_CTX.
*/
SSL_CTX *create_ssl_ctx(void)
{
SSL_CTX *ctx;
#ifdef USE_QUIC
ctx = SSL_CTX_new(OSSL_QUIC_client_thread_method());
#else
ctx = SSL_CTX_new(TLS_client_method());
#endif
if (ctx == NULL)
return NULL;
/* Enable trust chain verification. */
SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
/* Load default root CA store. */
if (SSL_CTX_set_default_verify_paths(ctx) == 0) {
SSL_CTX_free(ctx);
return NULL;
}
return ctx;
}
/*
* The application wants to create a new outgoing connection using a given
* SSL_CTX.
*
* hostname is a string like "openssl.org:443" or "[::1]:443".
*/
APP_CONN *new_conn(SSL_CTX *ctx, const char *hostname)
{
APP_CONN *conn;
BIO *out, *buf;
SSL *ssl = NULL;
const char *bare_hostname;
#ifdef USE_QUIC
static const unsigned char alpn[] = {5, 'd', 'u', 'm', 'm', 'y'};
#endif
conn = calloc(1, sizeof(APP_CONN));
if (conn == NULL)
return NULL;
out = BIO_new_ssl_connect(ctx);
if (out == NULL) {
free(conn);
return NULL;
}
if (BIO_get_ssl(out, &ssl) == 0) {
BIO_free_all(out);
free(conn);
return NULL;
}
buf = BIO_new(BIO_f_buffer());
if (buf == NULL) {
BIO_free_all(out);
free(conn);
return NULL;
}
BIO_push(out, buf);
if (BIO_set_conn_hostname(out, hostname) == 0) {
BIO_free_all(out);
free(conn);
return NULL;
}
/* Returns the parsed hostname extracted from the hostname:port string. */
bare_hostname = BIO_get_conn_hostname(out);
if (bare_hostname == NULL) {
BIO_free_all(out);
free(conn);
return NULL;
}
/* Tell the SSL object the hostname to check certificates against. */
if (SSL_set1_host(ssl, bare_hostname) <= 0) {
BIO_free_all(out);
free(conn);
return NULL;
}
#ifdef USE_QUIC
/* Configure ALPN, which is required for QUIC. */
if (SSL_set_alpn_protos(ssl, alpn, sizeof(alpn))) {
/* Note: SSL_set_alpn_protos returns 1 for failure. */
BIO_free_all(out);
free(conn);
return NULL;
}
#endif
/* Make the BIO nonblocking. */
BIO_set_nbio(out, 1);
    conn->ssl = ssl;
    conn->ssl_bio = out;
return conn;
}
/*
* Non-blocking transmission.
*
* Returns -1 on error. Returns -2 if the function would block (corresponds to
* EWOULDBLOCK).
*/
int tx(APP_CONN *conn, const void *buf, int buf_len)
{
int l;
conn->tx_need_rx = 0;
l = BIO_write(conn->ssl_bio, buf, buf_len);
if (l <= 0) {
if (BIO_should_retry(conn->ssl_bio)) {
conn->tx_need_rx = BIO_should_read(conn->ssl_bio);
return -2;
} else {
return -1;
}
}
return l;
}
/*
* Non-blocking reception.
*
* Returns -1 on error. Returns -2 if the function would block (corresponds to
* EWOULDBLOCK).
*/
int rx(APP_CONN *conn, void *buf, int buf_len)
{
int l;
conn->rx_need_tx = 0;
l = BIO_read(conn->ssl_bio, buf, buf_len);
if (l <= 0) {
if (BIO_should_retry(conn->ssl_bio)) {
conn->rx_need_tx = BIO_should_write(conn->ssl_bio);
return -2;
} else {
return -1;
}
}
return l;
}
/*
* The application wants to know a fd it can poll on to determine when the
* SSL state machine needs to be pumped.
*/
int get_conn_fd(APP_CONN *conn)
{
#ifdef USE_QUIC
BIO_POLL_DESCRIPTOR d;
if (!BIO_get_rpoll_descriptor(conn->ssl_bio, &d))
return -1;
return d.value.fd;
#else
return BIO_get_fd(conn->ssl_bio, NULL);
#endif
}
/*
* These functions returns zero or more of:
*
* POLLIN: The SSL state machine is interested in socket readability events.
*
* POLLOUT: The SSL state machine is interested in socket writeability events.
*
* POLLERR: The SSL state machine is interested in socket error events.
*
* get_conn_pending_tx returns events which may cause SSL_write to make
* progress and get_conn_pending_rx returns events which may cause SSL_read
* to make progress.
*/
int get_conn_pending_tx(APP_CONN *conn)
{
#ifdef USE_QUIC
return (SSL_net_read_desired(conn->ssl) ? POLLIN : 0)
| (SSL_net_write_desired(conn->ssl) ? POLLOUT : 0)
| POLLERR;
#else
return (conn->tx_need_rx ? POLLIN : 0) | POLLOUT | POLLERR;
#endif
}
int get_conn_pending_rx(APP_CONN *conn)
{
#ifdef USE_QUIC
return get_conn_pending_tx(conn);
#else
return (conn->rx_need_tx ? POLLOUT : 0) | POLLIN | POLLERR;
#endif
}
/*
* The application wants to close the connection and free bookkeeping
* structures.
*/
void teardown(APP_CONN *conn)
{
BIO_free_all(conn->ssl_bio);
free(conn);
}
/*
* The application is shutting down and wants to free a previously
* created SSL_CTX.
*/
void teardown_ctx(SSL_CTX *ctx)
{
SSL_CTX_free(ctx);
}
/*
* ============================================================================
* Example driver for the above code. This is just to demonstrate that the code
* works and is not intended to be representative of a real application.
*/
int main(int argc, char **argv)
{
static char tx_msg[384], host_port[300];
const char *tx_p = tx_msg;
char rx_buf[2048];
int res = 1, l, tx_len;
int timeout = 2000 /* ms */;
APP_CONN *conn = NULL;
SSL_CTX *ctx = NULL;
if (argc < 3) {
fprintf(stderr, "usage: %s host port\n", argv[0]);
goto fail;
}
snprintf(host_port, sizeof(host_port), "%s:%s", argv[1], argv[2]);
tx_len = snprintf(tx_msg, sizeof(tx_msg),
"GET / HTTP/1.0\r\nHost: %s\r\n\r\n", argv[1]);
ctx = create_ssl_ctx();
if (ctx == NULL) {
fprintf(stderr, "cannot create SSL context\n");
goto fail;
}
conn = new_conn(ctx, host_port);
if (conn == NULL) {
fprintf(stderr, "cannot establish connection\n");
goto fail;
}
/* TX */
while (tx_len != 0) {
l = tx(conn, tx_p, tx_len);
if (l > 0) {
tx_p += l;
tx_len -= l;
} else if (l == -1) {
fprintf(stderr, "tx error\n");
} else if (l == -2) {
struct pollfd pfd = {0};
pfd.fd = get_conn_fd(conn);
pfd.events = get_conn_pending_tx(conn);
if (poll(&pfd, 1, timeout) == 0) {
fprintf(stderr, "tx timeout\n");
goto fail;
}
}
}
/* RX */
for (;;) {
l = rx(conn, rx_buf, sizeof(rx_buf));
if (l > 0) {
fwrite(rx_buf, 1, l, stdout);
} else if (l == -1) {
break;
} else if (l == -2) {
struct pollfd pfd = {0};
pfd.fd = get_conn_fd(conn);
pfd.events = get_conn_pending_rx(conn);
if (poll(&pfd, 1, timeout) == 0) {
fprintf(stderr, "rx timeout\n");
goto fail;
}
}
}
res = 0;
fail:
if (conn != NULL)
teardown(conn);
if (ctx != NULL)
teardown_ctx(ctx);
return res;
}


@@ -0,0 +1,449 @@
#include <sys/poll.h>
#include <openssl/ssl.h>
/*
* Demo 2: Client — Managed Connection — Nonblocking
* ==============================================================
*
* This is an example of (part of) an application which uses libssl in an
* asynchronous, nonblocking fashion. The functions show all interactions with
* libssl the application makes, and would hypothetically be linked into a
* larger application.
*
* In this example, libssl still makes syscalls directly using an fd, which is
* configured in nonblocking mode. As such, the application can still be
* abstracted from the details of what that fd is (is it a TCP socket? is it a
* UDP socket?); this code passes the application an fd and the application
* simply calls back into this code when poll()/etc. indicates it is ready.
*/
typedef struct app_conn_st {
SSL *ssl;
BIO *ssl_bio;
int rx_need_tx, tx_need_rx;
} APP_CONN;
/*
* The application is initializing and wants an SSL_CTX which it will use for
* some number of outgoing connections, which it creates in subsequent calls to
* new_conn. The application may also call this function multiple times to
* create multiple SSL_CTX.
*/
SSL_CTX *create_ssl_ctx(void)
{
SSL_CTX *ctx;
#ifdef USE_QUIC
ctx = SSL_CTX_new(OSSL_QUIC_client_method());
#else
ctx = SSL_CTX_new(TLS_client_method());
#endif
if (ctx == NULL)
return NULL;
/* Enable trust chain verification. */
SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
/* Load default root CA store. */
if (SSL_CTX_set_default_verify_paths(ctx) == 0) {
SSL_CTX_free(ctx);
return NULL;
}
return ctx;
}
/*
* The application wants to create a new outgoing connection using a given
* SSL_CTX.
*
* hostname is a string like "openssl.org:443" or "[::1]:443".
*/
APP_CONN *new_conn(SSL_CTX *ctx, const char *hostname)
{
APP_CONN *conn;
BIO *out, *buf;
SSL *ssl = NULL;
const char *bare_hostname;
#ifdef USE_QUIC
static const unsigned char alpn[] = {5, 'd', 'u', 'm', 'm', 'y'};
#endif
conn = calloc(1, sizeof(APP_CONN));
if (conn == NULL)
return NULL;
out = BIO_new_ssl_connect(ctx);
if (out == NULL) {
free(conn);
return NULL;
}
if (BIO_get_ssl(out, &ssl) == 0) {
BIO_free_all(out);
free(conn);
return NULL;
}
/*
* NOTE: QUIC cannot operate with a buffering BIO between the QUIC SSL
 * object and the network. In this case, the call to BIO_push() is not
* supported by the QUIC SSL object and will be ignored, thus this code
* works without removing this line. However, the buffering BIO is not
* actually used as a result and should be removed when adapting code to use
* QUIC.
*
* Setting a buffer as the underlying BIO on the QUIC SSL object using
* SSL_set_bio() will not work, though BIO_s_dgram_pair is available for
* buffering the input and output to the QUIC SSL object on the network side
* if desired.
*/
buf = BIO_new(BIO_f_buffer());
if (buf == NULL) {
BIO_free_all(out);
free(conn);
return NULL;
}
BIO_push(out, buf);
if (BIO_set_conn_hostname(out, hostname) == 0) {
BIO_free_all(out);
free(conn);
return NULL;
}
/* Returns the parsed hostname extracted from the hostname:port string. */
bare_hostname = BIO_get_conn_hostname(out);
if (bare_hostname == NULL) {
BIO_free_all(out);
free(conn);
return NULL;
}
/* Tell the SSL object the hostname to check certificates against. */
if (SSL_set1_host(ssl, bare_hostname) <= 0) {
BIO_free_all(out);
free(conn);
return NULL;
}
#ifdef USE_QUIC
/* Configure ALPN, which is required for QUIC. */
if (SSL_set_alpn_protos(ssl, alpn, sizeof(alpn))) {
/* Note: SSL_set_alpn_protos returns 1 for failure. */
        BIO_free_all(out);
        free(conn);
        return NULL;
}
#endif
/* Make the BIO nonblocking. */
BIO_set_nbio(out, 1);
    conn->ssl = ssl;
    conn->ssl_bio = out;
return conn;
}
/*
* Non-blocking transmission.
*
* Returns -1 on error. Returns -2 if the function would block (corresponds to
* EWOULDBLOCK).
*/
int tx(APP_CONN *conn, const void *buf, int buf_len)
{
int l;
conn->tx_need_rx = 0;
l = BIO_write(conn->ssl_bio, buf, buf_len);
if (l <= 0) {
if (BIO_should_retry(conn->ssl_bio)) {
conn->tx_need_rx = BIO_should_read(conn->ssl_bio);
return -2;
} else {
return -1;
}
}
return l;
}
/*
* Non-blocking reception.
*
* Returns -1 on error. Returns -2 if the function would block (corresponds to
* EWOULDBLOCK).
*/
int rx(APP_CONN *conn, void *buf, int buf_len)
{
int l;
conn->rx_need_tx = 0;
l = BIO_read(conn->ssl_bio, buf, buf_len);
if (l <= 0) {
if (BIO_should_retry(conn->ssl_bio)) {
conn->rx_need_tx = BIO_should_write(conn->ssl_bio);
return -2;
} else {
return -1;
}
}
return l;
}
/*
* The application wants to know a fd it can poll on to determine when the
* SSL state machine needs to be pumped.
*/
int get_conn_fd(APP_CONN *conn)
{
#ifdef USE_QUIC
BIO_POLL_DESCRIPTOR d;
if (!BIO_get_rpoll_descriptor(conn->ssl_bio, &d))
return -1;
return d.value.fd;
#else
return BIO_get_fd(conn->ssl_bio, NULL);
#endif
}
/*
* These functions returns zero or more of:
*
* POLLIN: The SSL state machine is interested in socket readability events.
*
* POLLOUT: The SSL state machine is interested in socket writeability events.
*
* POLLERR: The SSL state machine is interested in socket error events.
*
* get_conn_pending_tx returns events which may cause SSL_write to make
* progress and get_conn_pending_rx returns events which may cause SSL_read
* to make progress.
*/
int get_conn_pending_tx(APP_CONN *conn)
{
#ifdef USE_QUIC
return (SSL_net_read_desired(conn->ssl) ? POLLIN : 0)
| (SSL_net_write_desired(conn->ssl) ? POLLOUT : 0)
| POLLERR;
#else
return (conn->tx_need_rx ? POLLIN : 0) | POLLOUT | POLLERR;
#endif
}
int get_conn_pending_rx(APP_CONN *conn)
{
#ifdef USE_QUIC
return get_conn_pending_tx(conn);
#else
return (conn->rx_need_tx ? POLLOUT : 0) | POLLIN | POLLERR;
#endif
}
#ifdef USE_QUIC
/*
* Returns the number of milliseconds after which some call to libssl must be
 * made. Any call (BIO_read/BIO_write/SSL_handle_events) will do. Returns -1 if
* there is no need for such a call. This may change after the next call
* to libssl.
*/
static inline int timeval_to_ms(const struct timeval *t);
int get_conn_pump_timeout(APP_CONN *conn)
{
struct timeval tv;
int is_infinite;
if (!SSL_get_event_timeout(conn->ssl, &tv, &is_infinite))
return -1;
return is_infinite ? -1 : timeval_to_ms(&tv);
}
/*
* Called to advance internals of libssl state machines without having to
* perform an application-level read/write.
*/
void pump(APP_CONN *conn)
{
SSL_handle_events(conn->ssl);
}
#endif
/*
* The application wants to close the connection and free bookkeeping
* structures.
*/
void teardown(APP_CONN *conn)
{
BIO_free_all(conn->ssl_bio);
free(conn);
}
/*
* The application is shutting down and wants to free a previously
* created SSL_CTX.
*/
void teardown_ctx(SSL_CTX *ctx)
{
SSL_CTX_free(ctx);
}
/*
* ============================================================================
* Example driver for the above code. This is just to demonstrate that the code
* works and is not intended to be representative of a real application.
*/
#include <sys/time.h>
static inline void ms_to_timeval(struct timeval *t, int ms)
{
t->tv_sec = ms < 0 ? -1 : ms/1000;
t->tv_usec = ms < 0 ? 0 : (ms%1000)*1000;
}
static inline int timeval_to_ms(const struct timeval *t)
{
return t->tv_sec*1000 + t->tv_usec/1000;
}
int main(int argc, char **argv)
{
static char tx_msg[384], host_port[300];
const char *tx_p = tx_msg;
char rx_buf[2048];
int res = 1, l, tx_len;
#ifdef USE_QUIC
struct timeval timeout;
#else
int timeout = 2000 /* ms */;
#endif
APP_CONN *conn = NULL;
SSL_CTX *ctx = NULL;
#ifdef USE_QUIC
ms_to_timeval(&timeout, 2000);
#endif
if (argc < 3) {
fprintf(stderr, "usage: %s host port\n", argv[0]);
goto fail;
}
snprintf(host_port, sizeof(host_port), "%s:%s", argv[1], argv[2]);
tx_len = snprintf(tx_msg, sizeof(tx_msg),
"GET / HTTP/1.0\r\nHost: %s\r\n\r\n", argv[1]);
ctx = create_ssl_ctx();
if (ctx == NULL) {
fprintf(stderr, "cannot create SSL context\n");
goto fail;
}
conn = new_conn(ctx, host_port);
if (conn == NULL) {
fprintf(stderr, "cannot establish connection\n");
goto fail;
}
/* TX */
while (tx_len != 0) {
l = tx(conn, tx_p, tx_len);
if (l > 0) {
tx_p += l;
tx_len -= l;
} else if (l == -1) {
fprintf(stderr, "tx error\n");
} else if (l == -2) {
#ifdef USE_QUIC
struct timeval start, now, deadline, t;
#endif
struct pollfd pfd = {0};
#ifdef USE_QUIC
ms_to_timeval(&t, get_conn_pump_timeout(conn));
if (t.tv_sec < 0 || timercmp(&t, &timeout, >))
t = timeout;
gettimeofday(&start, NULL);
timeradd(&start, &timeout, &deadline);
#endif
pfd.fd = get_conn_fd(conn);
pfd.events = get_conn_pending_tx(conn);
#ifdef USE_QUIC
if (poll(&pfd, 1, timeval_to_ms(&t)) == 0)
#else
if (poll(&pfd, 1, timeout) == 0)
#endif
{
#ifdef USE_QUIC
pump(conn);
gettimeofday(&now, NULL);
if (timercmp(&now, &deadline, >=))
#endif
{
fprintf(stderr, "tx timeout\n");
goto fail;
}
}
}
}
/* RX */
for (;;) {
l = rx(conn, rx_buf, sizeof(rx_buf));
if (l > 0) {
fwrite(rx_buf, 1, l, stdout);
} else if (l == -1) {
break;
} else if (l == -2) {
#ifdef USE_QUIC
struct timeval start, now, deadline, t;
#endif
struct pollfd pfd = {0};
#ifdef USE_QUIC
ms_to_timeval(&t, get_conn_pump_timeout(conn));
if (t.tv_sec < 0 || timercmp(&t, &timeout, >))
t = timeout;
gettimeofday(&start, NULL);
timeradd(&start, &timeout, &deadline);
#endif
pfd.fd = get_conn_fd(conn);
pfd.events = get_conn_pending_rx(conn);
#ifdef USE_QUIC
if (poll(&pfd, 1, timeval_to_ms(&t)) == 0)
#else
if (poll(&pfd, 1, timeout) == 0)
#endif
{
#ifdef USE_QUIC
pump(conn);
gettimeofday(&now, NULL);
if (timercmp(&now, &deadline, >=))
#endif
{
fprintf(stderr, "rx timeout\n");
goto fail;
}
}
}
}
res = 0;
fail:
if (conn != NULL)
teardown(conn);
if (ctx != NULL)
teardown_ctx(ctx);
return res;
}

View File

@@ -0,0 +1,217 @@
#include <openssl/ssl.h>
/*
* Demo 3: Client — Client Creates FD — Blocking
* =============================================
*
* This is an example of (part of) an application which uses libssl in a simple,
* synchronous, blocking fashion. The client is responsible for creating the
* socket and passing it to libssl. The functions show all interactions with
* libssl the application makes, and would hypothetically be linked into a
* larger application.
*/
/*
* The application is initializing and wants an SSL_CTX which it will use for
* some number of outgoing connections, which it creates in subsequent calls to
* new_conn. The application may also call this function multiple times to
* create multiple SSL_CTX.
*/
SSL_CTX *create_ssl_ctx(void)
{
SSL_CTX *ctx;
#ifdef USE_QUIC
ctx = SSL_CTX_new(OSSL_QUIC_client_method());
#else
ctx = SSL_CTX_new(TLS_client_method());
#endif
if (ctx == NULL)
return NULL;
/* Enable trust chain verification. */
SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
/* Load default root CA store. */
if (SSL_CTX_set_default_verify_paths(ctx) == 0) {
SSL_CTX_free(ctx);
return NULL;
}
return ctx;
}
/*
* The application wants to create a new outgoing connection using a given
* SSL_CTX.
*
* hostname is a string like "openssl.org" used for certificate validation.
*/
SSL *new_conn(SSL_CTX *ctx, int fd, const char *bare_hostname)
{
SSL *ssl;
#ifdef USE_QUIC
static const unsigned char alpn[] = {5, 'd', 'u', 'm', 'm', 'y'};
#endif
ssl = SSL_new(ctx);
if (ssl == NULL)
return NULL;
SSL_set_connect_state(ssl); /* cannot fail */
if (SSL_set_fd(ssl, fd) <= 0) {
SSL_free(ssl);
return NULL;
}
if (SSL_set1_host(ssl, bare_hostname) <= 0) {
SSL_free(ssl);
return NULL;
}
if (SSL_set_tlsext_host_name(ssl, bare_hostname) <= 0) {
SSL_free(ssl);
return NULL;
}
#ifdef USE_QUIC
/* Configure ALPN, which is required for QUIC. */
if (SSL_set_alpn_protos(ssl, alpn, sizeof(alpn))) {
/* Note: SSL_set_alpn_protos returns 1 for failure. */
SSL_free(ssl);
return NULL;
}
#endif
return ssl;
}
/*
* The application wants to send some block of data to the peer.
* This is a blocking call.
*/
int tx(SSL *ssl, const void *buf, int buf_len)
{
return SSL_write(ssl, buf, buf_len);
}
/*
* The application wants to receive some block of data from
* the peer. This is a blocking call.
*/
int rx(SSL *ssl, void *buf, int buf_len)
{
return SSL_read(ssl, buf, buf_len);
}
/*
* The application wants to close the connection and free bookkeeping
* structures.
*/
void teardown(SSL *ssl)
{
SSL_free(ssl);
}
/*
* The application is shutting down and wants to free a previously
* created SSL_CTX.
*/
void teardown_ctx(SSL_CTX *ctx)
{
SSL_CTX_free(ctx);
}
/*
* ============================================================================
* Example driver for the above code. This is just to demonstrate that the code
* works and is not intended to be representative of a real application.
*/
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/signal.h>
#include <netdb.h>
#include <unistd.h>
int main(int argc, char **argv)
{
int rc, fd = -1, l, mlen, res = 1;
static char msg[300];
struct addrinfo hints = {0}, *result = NULL;
SSL *ssl = NULL;
SSL_CTX *ctx = NULL;
char buf[2048];
if (argc < 3) {
fprintf(stderr, "usage: %s host port\n", argv[0]);
goto fail;
}
mlen = snprintf(msg, sizeof(msg),
"GET / HTTP/1.0\r\nHost: %s\r\n\r\n", argv[1]);
ctx = create_ssl_ctx();
if (ctx == NULL) {
fprintf(stderr, "cannot create context\n");
goto fail;
}
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE;
rc = getaddrinfo(argv[1], argv[2], &hints, &result);
    if (rc != 0) {
fprintf(stderr, "cannot resolve\n");
goto fail;
}
signal(SIGPIPE, SIG_IGN);
#ifdef USE_QUIC
fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
#else
fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
#endif
if (fd < 0) {
fprintf(stderr, "cannot create socket\n");
goto fail;
}
rc = connect(fd, result->ai_addr, result->ai_addrlen);
if (rc < 0) {
fprintf(stderr, "cannot connect\n");
goto fail;
}
ssl = new_conn(ctx, fd, argv[1]);
if (ssl == NULL) {
fprintf(stderr, "cannot create connection\n");
goto fail;
}
l = tx(ssl, msg, mlen);
if (l < mlen) {
fprintf(stderr, "tx error\n");
goto fail;
}
for (;;) {
l = rx(ssl, buf, sizeof(buf));
if (l <= 0)
break;
fwrite(buf, 1, l, stdout);
}
res = 0;
fail:
if (ssl != NULL)
teardown(ssl);
if (ctx != NULL)
teardown_ctx(ctx);
if (fd >= 0)
close(fd);
if (result != NULL)
freeaddrinfo(result);
return res;
}


@@ -0,0 +1,465 @@
#include <sys/poll.h>
#include <openssl/ssl.h>
/*
* Demo 4: Client — Client Creates FD — Nonblocking
* ================================================
*
* This is an example of (part of) an application which uses libssl in an
* asynchronous, nonblocking fashion. The client is responsible for creating the
* socket and passing it to libssl. The functions show all interactions with
* libssl the application makes, and would hypothetically be linked into a
* larger application.
*/
typedef struct app_conn_st {
SSL *ssl;
int fd;
int rx_need_tx, tx_need_rx;
} APP_CONN;
/*
* The application is initializing and wants an SSL_CTX which it will use for
* some number of outgoing connections, which it creates in subsequent calls to
* new_conn. The application may also call this function multiple times to
* create multiple SSL_CTX.
*/
SSL_CTX *create_ssl_ctx(void)
{
SSL_CTX *ctx;
#ifdef USE_QUIC
ctx = SSL_CTX_new(OSSL_QUIC_client_method());
#else
ctx = SSL_CTX_new(TLS_client_method());
#endif
if (ctx == NULL)
return NULL;
/* Enable trust chain verification. */
SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
/* Load default root CA store. */
if (SSL_CTX_set_default_verify_paths(ctx) == 0) {
SSL_CTX_free(ctx);
return NULL;
}
return ctx;
}
/*
* The application wants to create a new outgoing connection using a given
* SSL_CTX.
*
* hostname is a string like "openssl.org" used for certificate validation.
*/
APP_CONN *new_conn(SSL_CTX *ctx, int fd, const char *bare_hostname)
{
APP_CONN *conn;
SSL *ssl;
#ifdef USE_QUIC
static const unsigned char alpn[] = {5, 'd', 'u', 'm', 'm', 'y'};
#endif
conn = calloc(1, sizeof(APP_CONN));
if (conn == NULL)
return NULL;
ssl = conn->ssl = SSL_new(ctx);
if (ssl == NULL) {
free(conn);
return NULL;
}
SSL_set_connect_state(ssl); /* cannot fail */
if (SSL_set_fd(ssl, fd) <= 0) {
SSL_free(ssl);
free(conn);
return NULL;
}
if (SSL_set1_host(ssl, bare_hostname) <= 0) {
SSL_free(ssl);
free(conn);
return NULL;
}
if (SSL_set_tlsext_host_name(ssl, bare_hostname) <= 0) {
SSL_free(ssl);
free(conn);
return NULL;
}
#ifdef USE_QUIC
/* Configure ALPN, which is required for QUIC. */
if (SSL_set_alpn_protos(ssl, alpn, sizeof(alpn))) {
/* Note: SSL_set_alpn_protos returns 1 for failure. */
SSL_free(ssl);
free(conn);
return NULL;
}
#endif
conn->fd = fd;
return conn;
}
/*
* Non-blocking transmission.
*
* Returns -1 on error. Returns -2 if the function would block (corresponds to
* EWOULDBLOCK).
*/
int tx(APP_CONN *conn, const void *buf, int buf_len)
{
int rc, l;
conn->tx_need_rx = 0;
l = SSL_write(conn->ssl, buf, buf_len);
if (l <= 0) {
rc = SSL_get_error(conn->ssl, l);
switch (rc) {
case SSL_ERROR_WANT_READ:
            conn->tx_need_rx = 1;
            /* fall through */
case SSL_ERROR_WANT_CONNECT:
case SSL_ERROR_WANT_WRITE:
return -2;
default:
return -1;
}
}
return l;
}
/*
* Non-blocking reception.
*
* Returns -1 on error. Returns -2 if the function would block (corresponds to
* EWOULDBLOCK).
*/
int rx(APP_CONN *conn, void *buf, int buf_len)
{
int rc, l;
conn->rx_need_tx = 0;
l = SSL_read(conn->ssl, buf, buf_len);
if (l <= 0) {
rc = SSL_get_error(conn->ssl, l);
switch (rc) {
case SSL_ERROR_WANT_WRITE:
            conn->rx_need_tx = 1;
            /* fall through */
case SSL_ERROR_WANT_READ:
return -2;
default:
return -1;
}
}
return l;
}
/*
* The application wants to know a fd it can poll on to determine when the
* SSL state machine needs to be pumped.
*
* If the fd returned has:
*
* POLLIN: SSL_read *may* return data;
* if application does not want to read yet, it should call pump().
*
* POLLOUT: SSL_write *may* accept data
*
* POLLERR: An application should call pump() if it is not likely to call
* SSL_read or SSL_write soon.
*
*/
int get_conn_fd(APP_CONN *conn)
{
return conn->fd;
}
/*
* These functions returns zero or more of:
*
* POLLIN: The SSL state machine is interested in socket readability events.
*
* POLLOUT: The SSL state machine is interested in socket writeability events.
*
* POLLERR: The SSL state machine is interested in socket error events.
*
* get_conn_pending_tx returns events which may cause SSL_write to make
* progress and get_conn_pending_rx returns events which may cause SSL_read
* to make progress.
*/
int get_conn_pending_tx(APP_CONN *conn)
{
#ifdef USE_QUIC
return (SSL_net_read_desired(conn->ssl) ? POLLIN : 0)
| (SSL_net_write_desired(conn->ssl) ? POLLOUT : 0)
| POLLERR;
#else
return (conn->tx_need_rx ? POLLIN : 0) | POLLOUT | POLLERR;
#endif
}
int get_conn_pending_rx(APP_CONN *conn)
{
return get_conn_pending_tx(conn);
}
#ifdef USE_QUIC
/*
* Returns the number of milliseconds after which some call to libssl must be
 * made. Any call (SSL_read/SSL_write/SSL_handle_events) will do. Returns -1 if there is
* no need for such a call. This may change after the next call
* to libssl.
*/
static inline int timeval_to_ms(const struct timeval *t);
int get_conn_pump_timeout(APP_CONN *conn)
{
struct timeval tv;
int is_infinite;
if (!SSL_get_event_timeout(conn->ssl, &tv, &is_infinite))
return -1;
return is_infinite ? -1 : timeval_to_ms(&tv);
}
/*
* Called to advance internals of libssl state machines without having to
* perform an application-level read/write.
*/
void pump(APP_CONN *conn)
{
SSL_handle_events(conn->ssl);
}
#endif
/*
* The application wants to close the connection and free bookkeeping
* structures.
*/
void teardown(APP_CONN *conn)
{
SSL_shutdown(conn->ssl);
SSL_free(conn->ssl);
free(conn);
}
/*
* The application is shutting down and wants to free a previously
* created SSL_CTX.
*/
void teardown_ctx(SSL_CTX *ctx)
{
SSL_CTX_free(ctx);
}
/*
* ============================================================================
* Example driver for the above code. This is just to demonstrate that the code
* works and is not intended to be representative of a real application.
*/
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/signal.h>
#ifdef USE_QUIC
# include <sys/time.h>
#endif
#include <netdb.h>
#include <unistd.h>
#include <fcntl.h>
#ifdef USE_QUIC
static inline void ms_to_timeval(struct timeval *t, int ms)
{
t->tv_sec = ms < 0 ? -1 : ms/1000;
t->tv_usec = ms < 0 ? 0 : (ms%1000)*1000;
}
static inline int timeval_to_ms(const struct timeval *t)
{
return t->tv_sec*1000 + t->tv_usec/1000;
}
#endif
int main(int argc, char **argv)
{
int rc, fd = -1, res = 1;
static char tx_msg[300];
const char *tx_p = tx_msg;
char rx_buf[2048];
int l, tx_len;
#ifdef USE_QUIC
struct timeval timeout;
#else
int timeout = 2000 /* ms */;
#endif
APP_CONN *conn = NULL;
struct addrinfo hints = {0}, *result = NULL;
SSL_CTX *ctx = NULL;
#ifdef USE_QUIC
ms_to_timeval(&timeout, 2000);
#endif
if (argc < 3) {
fprintf(stderr, "usage: %s host port\n", argv[0]);
goto fail;
}
tx_len = snprintf(tx_msg, sizeof(tx_msg),
"GET / HTTP/1.0\r\nHost: %s\r\n\r\n", argv[1]);
ctx = create_ssl_ctx();
if (ctx == NULL) {
fprintf(stderr, "cannot create SSL context\n");
goto fail;
}
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE;
rc = getaddrinfo(argv[1], argv[2], &hints, &result);
    if (rc != 0) {
fprintf(stderr, "cannot resolve\n");
goto fail;
}
signal(SIGPIPE, SIG_IGN);
#ifdef USE_QUIC
fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
#else
fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
#endif
if (fd < 0) {
fprintf(stderr, "cannot create socket\n");
goto fail;
}
rc = connect(fd, result->ai_addr, result->ai_addrlen);
if (rc < 0) {
fprintf(stderr, "cannot connect\n");
goto fail;
}
rc = fcntl(fd, F_SETFL, O_NONBLOCK);
if (rc < 0) {
fprintf(stderr, "cannot make socket nonblocking\n");
goto fail;
}
conn = new_conn(ctx, fd, argv[1]);
if (conn == NULL) {
fprintf(stderr, "cannot establish connection\n");
goto fail;
}
/* TX */
while (tx_len != 0) {
l = tx(conn, tx_p, tx_len);
if (l > 0) {
tx_p += l;
tx_len -= l;
} else if (l == -1) {
fprintf(stderr, "tx error\n");
goto fail;
} else if (l == -2) {
#ifdef USE_QUIC
struct timeval start, now, deadline, t;
#endif
struct pollfd pfd = {0};
#ifdef USE_QUIC
ms_to_timeval(&t, get_conn_pump_timeout(conn));
if (t.tv_sec < 0 || timercmp(&t, &timeout, >))
t = timeout;
gettimeofday(&start, NULL);
timeradd(&start, &timeout, &deadline);
#endif
pfd.fd = get_conn_fd(conn);
pfd.events = get_conn_pending_tx(conn);
#ifdef USE_QUIC
if (poll(&pfd, 1, timeval_to_ms(&t)) == 0)
#else
if (poll(&pfd, 1, timeout) == 0)
#endif
{
#ifdef USE_QUIC
pump(conn);
gettimeofday(&now, NULL);
if (timercmp(&now, &deadline, >=))
#endif
{
fprintf(stderr, "tx timeout\n");
goto fail;
}
}
}
}
/* RX */
for (;;) {
l = rx(conn, rx_buf, sizeof(rx_buf));
if (l > 0) {
fwrite(rx_buf, 1, l, stdout);
} else if (l == -1) {
break;
} else if (l == -2) {
#ifdef USE_QUIC
struct timeval start, now, deadline, t;
#endif
struct pollfd pfd = {0};
#ifdef USE_QUIC
ms_to_timeval(&t, get_conn_pump_timeout(conn));
if (t.tv_sec < 0 || timercmp(&t, &timeout, >))
t = timeout;
gettimeofday(&start, NULL);
timeradd(&start, &timeout, &deadline);
#endif
pfd.fd = get_conn_fd(conn);
pfd.events = get_conn_pending_rx(conn);
#ifdef USE_QUIC
if (poll(&pfd, 1, timeval_to_ms(&t)) == 0)
#else
if (poll(&pfd, 1, timeout) == 0)
#endif
{
#ifdef USE_QUIC
pump(conn);
gettimeofday(&now, NULL);
if (timercmp(&now, &deadline, >=))
#endif
{
fprintf(stderr, "rx timeout\n");
goto fail;
}
}
}
}
res = 0;
fail:
if (conn != NULL)
teardown(conn);
if (ctx != NULL)
teardown_ctx(ctx);
if (result != NULL)
freeaddrinfo(result);
return res;
}

View File

@@ -0,0 +1,459 @@
#include <sys/poll.h>
#include <openssl/ssl.h>
/*
* Demo 5: Client — Client Uses Memory BIO — Nonblocking
* =====================================================
*
* This is an example of (part of) an application which uses libssl in an
* asynchronous, nonblocking fashion. The application passes memory BIOs to
* OpenSSL, meaning that it controls both when data is read/written from an SSL
* object on the decrypted side but also when encrypted data from the network is
* shunted to/from OpenSSL. In this way OpenSSL is used as a pure state machine
* which does not make its own network I/O calls. OpenSSL never sees or creates
* any file descriptor for a network socket. The functions below show all
* interactions with libssl the application makes, and would hypothetically be
* linked into a larger application.
*/
typedef struct app_conn_st {
SSL *ssl;
BIO *ssl_bio, *net_bio;
int rx_need_tx, tx_need_rx;
} APP_CONN;
/*
* The application is initializing and wants an SSL_CTX which it will use for
* some number of outgoing connections, which it creates in subsequent calls to
* new_conn. The application may also call this function multiple times to
* create multiple SSL_CTX.
*/
SSL_CTX *create_ssl_ctx(void)
{
SSL_CTX *ctx;
#ifdef USE_QUIC
ctx = SSL_CTX_new(OSSL_QUIC_client_method());
#else
ctx = SSL_CTX_new(TLS_client_method());
#endif
if (ctx == NULL)
return NULL;
/* Enable trust chain verification. */
SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
/* Load default root CA store. */
if (SSL_CTX_set_default_verify_paths(ctx) == 0) {
SSL_CTX_free(ctx);
return NULL;
}
return ctx;
}
/*
* The application wants to create a new outgoing connection using a given
* SSL_CTX.
*
* hostname is a string like "openssl.org" used for certificate validation.
*/
APP_CONN *new_conn(SSL_CTX *ctx, const char *bare_hostname)
{
BIO *ssl_bio, *internal_bio, *net_bio;
APP_CONN *conn;
SSL *ssl;
#ifdef USE_QUIC
static const unsigned char alpn[] = {5, 'd', 'u', 'm', 'm', 'y'};
#endif
conn = calloc(1, sizeof(APP_CONN));
if (conn == NULL)
return NULL;
ssl = conn->ssl = SSL_new(ctx);
if (ssl == NULL) {
free(conn);
return NULL;
}
SSL_set_connect_state(ssl); /* cannot fail */
#ifdef USE_QUIC
if (BIO_new_bio_dgram_pair(&internal_bio, 0, &net_bio, 0) <= 0) {
#else
if (BIO_new_bio_pair(&internal_bio, 0, &net_bio, 0) <= 0) {
#endif
SSL_free(ssl);
free(conn);
return NULL;
}
SSL_set_bio(ssl, internal_bio, internal_bio);
if (SSL_set1_host(ssl, bare_hostname) <= 0) {
SSL_free(ssl);
free(conn);
return NULL;
}
if (SSL_set_tlsext_host_name(ssl, bare_hostname) <= 0) {
SSL_free(ssl);
free(conn);
return NULL;
}
ssl_bio = BIO_new(BIO_f_ssl());
if (ssl_bio == NULL) {
SSL_free(ssl);
free(conn);
return NULL;
}
if (BIO_set_ssl(ssl_bio, ssl, BIO_CLOSE) <= 0) {
SSL_free(ssl);
BIO_free(ssl_bio);
free(conn);
return NULL;
}
#ifdef USE_QUIC
/* Configure ALPN, which is required for QUIC. */
if (SSL_set_alpn_protos(ssl, alpn, sizeof(alpn))) {
/* Note: SSL_set_alpn_protos returns 1 for failure. */
SSL_free(ssl);
BIO_free(ssl_bio);
free(conn);
return NULL;
}
#endif
conn->ssl_bio = ssl_bio;
conn->net_bio = net_bio;
return conn;
}
/*
* Non-blocking transmission.
*
* Returns -1 on error. Returns -2 if the function would block (corresponds to
* EWOULDBLOCK).
*/
int tx(APP_CONN *conn, const void *buf, int buf_len)
{
int rc, l;
l = BIO_write(conn->ssl_bio, buf, buf_len);
if (l <= 0) {
rc = SSL_get_error(conn->ssl, l);
switch (rc) {
case SSL_ERROR_WANT_READ:
conn->tx_need_rx = 1;
/* fall through */
case SSL_ERROR_WANT_CONNECT:
case SSL_ERROR_WANT_WRITE:
return -2;
default:
return -1;
}
} else {
conn->tx_need_rx = 0;
}
return l;
}
/*
* Non-blocking reception.
*
* Returns -1 on error. Returns -2 if the function would block (corresponds to
* EWOULDBLOCK).
*/
int rx(APP_CONN *conn, void *buf, int buf_len)
{
int rc, l;
l = BIO_read(conn->ssl_bio, buf, buf_len);
if (l <= 0) {
rc = SSL_get_error(conn->ssl, l);
switch (rc) {
case SSL_ERROR_WANT_WRITE:
conn->rx_need_tx = 1;
/* fall through */
case SSL_ERROR_WANT_READ:
return -2;
default:
return -1;
}
} else {
conn->rx_need_tx = 0;
}
return l;
}
/*
* Called to get data which has been enqueued for transmission to the network
* by OpenSSL. For QUIC, this always outputs a single datagram.
*
* IMPORTANT (QUIC): If buf_len is inadequate to hold the datagram, it is truncated
* (similar to read(2)). A buffer size of at least 1472 must be used by default
* to guarantee this does not occur.
*/
int read_net_tx(APP_CONN *conn, void *buf, int buf_len)
{
return BIO_read(conn->net_bio, buf, buf_len);
}
/*
* Called to feed data which has been received from the network to OpenSSL.
*
* QUIC: buf must contain the entirety of a single datagram. It will be consumed
* entirely (return value == buf_len) or not at all.
*/
int write_net_rx(APP_CONN *conn, const void *buf, int buf_len)
{
return BIO_write(conn->net_bio, buf, buf_len);
}
/*
* Determine how much data can be written to the network RX BIO.
*/
size_t net_rx_space(APP_CONN *conn)
{
return BIO_ctrl_get_write_guarantee(conn->net_bio);
}
/*
* Determine how much data is currently queued for transmission in the network
* TX BIO.
*/
size_t net_tx_avail(APP_CONN *conn)
{
return BIO_ctrl_pending(conn->net_bio);
}
/*
* These functions return zero or more of:
*
* POLLIN: The SSL state machine is interested in socket readability events.
*
* POLLOUT: The SSL state machine is interested in socket writeability events.
*
* POLLERR: The SSL state machine is interested in socket error events.
*
* get_conn_pending_tx returns events which may cause SSL_write to make
* progress and get_conn_pending_rx returns events which may cause SSL_read
* to make progress.
*/
int get_conn_pending_tx(APP_CONN *conn)
{
#ifdef USE_QUIC
return (SSL_net_read_desired(conn->ssl) ? POLLIN : 0)
| (SSL_net_write_desired(conn->ssl) ? POLLOUT : 0)
| POLLERR;
#else
return (conn->tx_need_rx ? POLLIN : 0) | POLLOUT | POLLERR;
#endif
}
int get_conn_pending_rx(APP_CONN *conn)
{
#ifdef USE_QUIC
return get_conn_pending_tx(conn);
#else
return (conn->rx_need_tx ? POLLOUT : 0) | POLLIN | POLLERR;
#endif
}
/*
* The application wants to close the connection and free bookkeeping
* structures.
*/
void teardown(APP_CONN *conn)
{
BIO_free_all(conn->ssl_bio);
BIO_free_all(conn->net_bio);
free(conn);
}
/*
* The application is shutting down and wants to free a previously
* created SSL_CTX.
*/
void teardown_ctx(SSL_CTX *ctx)
{
SSL_CTX_free(ctx);
}
/*
* ============================================================================
* Example driver for the above code. This is just to demonstrate that the code
* works and is not intended to be representative of a real application.
*/
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/signal.h>
#include <netdb.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
static int pump(APP_CONN *conn, int fd, int events, int timeout)
{
int l, l2;
char buf[2048]; /* QUIC: would need to be changed if < 1472 */
size_t wspace;
struct pollfd pfd = {0};
pfd.fd = fd;
pfd.events = (events & (POLLIN | POLLERR));
if (net_rx_space(conn) == 0)
pfd.events &= ~POLLIN;
if (net_tx_avail(conn) > 0)
pfd.events |= POLLOUT;
if ((pfd.events & (POLLIN|POLLOUT)) == 0)
return 1;
if (poll(&pfd, 1, timeout) == 0)
return -1;
if (pfd.revents & POLLIN) {
while ((wspace = net_rx_space(conn)) > 0) {
l = read(fd, buf, wspace > sizeof(buf) ? sizeof(buf) : wspace);
if (l <= 0) {
switch (errno) {
case EAGAIN:
goto stop;
default:
if (l == 0) /* EOF */
goto stop;
fprintf(stderr, "error on read: %d\n", errno);
return -1;
}
break;
}
l2 = write_net_rx(conn, buf, l);
if (l2 < l)
fprintf(stderr, "short write %d %d\n", l2, l);
} stop:;
}
if (pfd.revents & POLLOUT) {
for (;;) {
l = read_net_tx(conn, buf, sizeof(buf));
if (l <= 0)
break;
l2 = write(fd, buf, l);
if (l2 < l)
fprintf(stderr, "short read %d %d\n", l2, l);
}
}
return 1;
}
int main(int argc, char **argv)
{
int rc, fd = -1, res = 1;
static char tx_msg[300];
const char *tx_p = tx_msg;
char rx_buf[2048];
int l, tx_len;
int timeout = 2000 /* ms */;
APP_CONN *conn = NULL;
struct addrinfo hints = {0}, *result = NULL;
SSL_CTX *ctx = NULL;
if (argc < 3) {
fprintf(stderr, "usage: %s host port\n", argv[0]);
goto fail;
}
tx_len = snprintf(tx_msg, sizeof(tx_msg),
"GET / HTTP/1.0\r\nHost: %s\r\n\r\n",
argv[1]);
ctx = create_ssl_ctx();
if (ctx == NULL) {
fprintf(stderr, "cannot create SSL context\n");
goto fail;
}
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE;
rc = getaddrinfo(argv[1], argv[2], &hints, &result);
if (rc < 0) {
fprintf(stderr, "cannot resolve\n");
goto fail;
}
signal(SIGPIPE, SIG_IGN);
#ifdef USE_QUIC
fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
#else
fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
#endif
if (fd < 0) {
fprintf(stderr, "cannot create socket\n");
goto fail;
}
rc = connect(fd, result->ai_addr, result->ai_addrlen);
if (rc < 0) {
fprintf(stderr, "cannot connect\n");
goto fail;
}
rc = fcntl(fd, F_SETFL, O_NONBLOCK);
if (rc < 0) {
fprintf(stderr, "cannot make socket nonblocking\n");
goto fail;
}
conn = new_conn(ctx, argv[1]);
if (conn == NULL) {
fprintf(stderr, "cannot establish connection\n");
goto fail;
}
/* TX */
while (tx_len != 0) {
l = tx(conn, tx_p, tx_len);
if (l > 0) {
tx_p += l;
tx_len -= l;
} else if (l == -1) {
fprintf(stderr, "tx error\n");
goto fail;
} else if (l == -2) {
if (pump(conn, fd, get_conn_pending_tx(conn), timeout) != 1) {
fprintf(stderr, "pump error\n");
goto fail;
}
}
}
/* RX */
for (;;) {
l = rx(conn, rx_buf, sizeof(rx_buf));
if (l > 0) {
fwrite(rx_buf, 1, l, stdout);
} else if (l == -1) {
break;
} else if (l == -2) {
if (pump(conn, fd, get_conn_pending_rx(conn), timeout) != 1) {
fprintf(stderr, "pump error\n");
goto fail;
}
}
}
res = 0;
fail:
if (conn != NULL)
teardown(conn);
if (ctx != NULL)
teardown_ctx(ctx);
if (result != NULL)
freeaddrinfo(result);
return res;
}

View File

@@ -0,0 +1,758 @@
#include <sys/poll.h>
#include <openssl/ssl.h>
#include <uv.h>
#include <assert.h>
#ifdef USE_QUIC
# include <sys/time.h>
#endif
typedef struct app_conn_st APP_CONN;
typedef struct upper_write_op_st UPPER_WRITE_OP;
typedef struct lower_write_op_st LOWER_WRITE_OP;
typedef void (app_connect_cb)(APP_CONN *conn, int status, void *arg);
typedef void (app_write_cb)(APP_CONN *conn, int status, void *arg);
typedef void (app_read_cb)(APP_CONN *conn, void *buf, size_t buf_len, void *arg);
#ifdef USE_QUIC
static void set_timer(APP_CONN *conn);
#else
static void tcp_connect_done(uv_connect_t *tcp_connect, int status);
#endif
static void net_connect_fail_close_done(uv_handle_t *handle);
static int handshake_ssl(APP_CONN *conn);
static void flush_write_buf(APP_CONN *conn);
static void set_rx(APP_CONN *conn);
static int try_write(APP_CONN *conn, UPPER_WRITE_OP *op);
static void handle_pending_writes(APP_CONN *conn);
static int write_deferred(APP_CONN *conn, const void *buf, size_t buf_len, app_write_cb *cb, void *arg);
static void teardown_continued(uv_handle_t *handle);
static int setup_ssl(APP_CONN *conn, const char *hostname);
#ifdef USE_QUIC
static inline int timeval_to_ms(const struct timeval *t)
{
return t->tv_sec*1000 + t->tv_usec/1000;
}
#endif
/*
* Structure to track an application-level write request. Only created
* if SSL_write does not accept the data immediately, typically because
* it is in WANT_READ.
*/
struct upper_write_op_st {
struct upper_write_op_st *prev, *next;
const uint8_t *buf;
size_t buf_len, written;
APP_CONN *conn;
app_write_cb *cb;
void *cb_arg;
};
/*
* Structure to track a network-level write request.
*/
struct lower_write_op_st {
#ifdef USE_QUIC
uv_udp_send_t w;
#else
uv_write_t w;
#endif
uv_buf_t b;
uint8_t *buf;
APP_CONN *conn;
};
/*
* Application connection object.
*/
struct app_conn_st {
SSL_CTX *ctx;
SSL *ssl;
BIO *net_bio;
#ifdef USE_QUIC
uv_udp_t udp;
uv_timer_t timer;
#else
uv_stream_t *stream;
uv_tcp_t tcp;
uv_connect_t tcp_connect;
#endif
app_connect_cb *app_connect_cb; /* called once handshake is done */
void *app_connect_arg;
app_read_cb *app_read_cb; /* application's on-RX callback */
void *app_read_arg;
const char *hostname;
char init_handshake, done_handshake, closed;
char *teardown_done;
UPPER_WRITE_OP *pending_upper_write_head, *pending_upper_write_tail;
};
/*
* The application is initializing and wants an SSL_CTX which it will use for
* some number of outgoing connections, which it creates in subsequent calls to
* new_conn. The application may also call this function multiple times to
* create multiple SSL_CTX.
*/
SSL_CTX *create_ssl_ctx(void)
{
SSL_CTX *ctx;
#ifdef USE_QUIC
ctx = SSL_CTX_new(OSSL_QUIC_client_method());
#else
ctx = SSL_CTX_new(TLS_client_method());
#endif
if (ctx == NULL)
return NULL;
/* Enable trust chain verification. */
SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
/* Load default root CA store. */
if (SSL_CTX_set_default_verify_paths(ctx) == 0) {
SSL_CTX_free(ctx);
return NULL;
}
return ctx;
}
/*
* The application wants to create a new outgoing connection using a given
* SSL_CTX. An outgoing TCP connection is started and the callback is called
* asynchronously when the TLS handshake is complete.
*
* hostname is a string like "openssl.org" used for certificate validation.
*/
APP_CONN *new_conn(SSL_CTX *ctx, const char *hostname,
struct sockaddr *sa, socklen_t sa_len,
app_connect_cb *cb, void *arg)
{
int rc;
APP_CONN *conn = NULL;
conn = calloc(1, sizeof(APP_CONN));
if (!conn)
return NULL;
#ifdef USE_QUIC
uv_udp_init(uv_default_loop(), &conn->udp);
conn->udp.data = conn;
uv_timer_init(uv_default_loop(), &conn->timer);
conn->timer.data = conn;
#else
uv_tcp_init(uv_default_loop(), &conn->tcp);
conn->tcp.data = conn;
conn->stream = (uv_stream_t *)&conn->tcp;
#endif
conn->app_connect_cb = cb;
conn->app_connect_arg = arg;
#ifdef USE_QUIC
rc = uv_udp_connect(&conn->udp, sa);
#else
conn->tcp_connect.data = conn;
rc = uv_tcp_connect(&conn->tcp_connect, &conn->tcp, sa, tcp_connect_done);
#endif
if (rc < 0) {
#ifdef USE_QUIC
uv_close((uv_handle_t *)&conn->udp, net_connect_fail_close_done);
#else
uv_close((uv_handle_t *)&conn->tcp, net_connect_fail_close_done);
#endif
return NULL;
}
conn->ctx = ctx;
conn->hostname = hostname;
#ifdef USE_QUIC
rc = setup_ssl(conn, hostname);
if (rc < 0) {
uv_close((uv_handle_t *)&conn->udp, net_connect_fail_close_done);
return NULL;
}
#endif
return conn;
}
/*
* The application wants to start reading from the SSL stream.
* The callback is called whenever data is available.
*/
int app_read_start(APP_CONN *conn, app_read_cb *cb, void *arg)
{
conn->app_read_cb = cb;
conn->app_read_arg = arg;
set_rx(conn);
return 0;
}
/*
* The application wants to write. The callback is called once the
* write is complete. The callback should free the buffer.
*/
int app_write(APP_CONN *conn, const void *buf, size_t buf_len, app_write_cb *cb, void *arg)
{
write_deferred(conn, buf, buf_len, cb, arg);
handle_pending_writes(conn);
return buf_len;
}
/*
* The application wants to close the connection and free bookkeeping
* structures.
*/
void teardown(APP_CONN *conn)
{
char teardown_done = 0;
if (conn == NULL)
return;
BIO_free_all(conn->net_bio);
SSL_free(conn->ssl);
#ifndef USE_QUIC
uv_cancel((uv_req_t *)&conn->tcp_connect);
#endif
conn->teardown_done = &teardown_done;
#ifdef USE_QUIC
uv_close((uv_handle_t *)&conn->udp, teardown_continued);
uv_close((uv_handle_t *)&conn->timer, teardown_continued);
#else
uv_close((uv_handle_t *)conn->stream, teardown_continued);
#endif
/* Just wait synchronously until teardown completes. */
#ifdef USE_QUIC
while (teardown_done < 2)
#else
while (!teardown_done)
#endif
uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
/*
* The application is shutting down and wants to free a previously
* created SSL_CTX.
*/
void teardown_ctx(SSL_CTX *ctx)
{
SSL_CTX_free(ctx);
}
/*
* ============================================================================
* Internal implementation functions.
*/
static void enqueue_upper_write_op(APP_CONN *conn, UPPER_WRITE_OP *op)
{
op->prev = conn->pending_upper_write_tail;
if (op->prev)
op->prev->next = op;
conn->pending_upper_write_tail = op;
if (conn->pending_upper_write_head == NULL)
conn->pending_upper_write_head = op;
}
static void dequeue_upper_write_op(APP_CONN *conn)
{
if (conn->pending_upper_write_head == NULL)
return;
if (conn->pending_upper_write_head->next == NULL) {
conn->pending_upper_write_head = NULL;
conn->pending_upper_write_tail = NULL;
} else {
conn->pending_upper_write_head = conn->pending_upper_write_head->next;
conn->pending_upper_write_head->prev = NULL;
}
}
static void net_read_alloc(uv_handle_t *handle,
size_t suggested_size, uv_buf_t *buf)
{
#ifdef USE_QUIC
if (suggested_size < 1472)
suggested_size = 1472;
#endif
buf->base = malloc(suggested_size);
buf->len = suggested_size;
}
static void on_rx_push(APP_CONN *conn)
{
int srd, rc;
int buf_len = 4096;
do {
if (!conn->app_read_cb)
return;
void *buf = malloc(buf_len);
if (!buf)
return;
srd = SSL_read(conn->ssl, buf, buf_len);
flush_write_buf(conn);
if (srd <= 0) {
rc = SSL_get_error(conn->ssl, srd);
if (rc == SSL_ERROR_WANT_READ) {
free(buf);
return;
}
}
conn->app_read_cb(conn, buf, srd, conn->app_read_arg);
} while (srd == buf_len);
}
static void net_error(APP_CONN *conn)
{
conn->closed = 1;
set_rx(conn);
if (conn->app_read_cb)
conn->app_read_cb(conn, NULL, 0, conn->app_read_arg);
}
static void handle_pending_writes(APP_CONN *conn)
{
int rc;
if (conn->pending_upper_write_head == NULL)
return;
do {
UPPER_WRITE_OP *op = conn->pending_upper_write_head;
rc = try_write(conn, op);
if (rc <= 0)
break;
dequeue_upper_write_op(conn);
free(op);
} while (conn->pending_upper_write_head != NULL);
set_rx(conn);
}
#ifdef USE_QUIC
static void net_read_done(uv_udp_t *stream, ssize_t nr, const uv_buf_t *buf,
const struct sockaddr *addr, unsigned int flags)
#else
static void net_read_done(uv_stream_t *stream, ssize_t nr, const uv_buf_t *buf)
#endif
{
int rc;
APP_CONN *conn = (APP_CONN *)stream->data;
if (nr < 0) {
free(buf->base);
net_error(conn);
return;
}
if (nr > 0) {
int wr = BIO_write(conn->net_bio, buf->base, nr);
assert(wr == nr);
}
free(buf->base);
if (!conn->done_handshake) {
rc = handshake_ssl(conn);
if (rc < 0) {
fprintf(stderr, "handshake error: %d\n", rc);
return;
}
if (!conn->done_handshake)
return;
}
handle_pending_writes(conn);
on_rx_push(conn);
}
static void set_rx(APP_CONN *conn)
{
#ifdef USE_QUIC
if (!conn->closed)
uv_udp_recv_start(&conn->udp, net_read_alloc, net_read_done);
else
uv_udp_recv_stop(&conn->udp);
#else
if (!conn->closed && (conn->app_read_cb || (!conn->done_handshake && conn->init_handshake) || conn->pending_upper_write_head != NULL))
uv_read_start(conn->stream, net_read_alloc, net_read_done);
else
uv_read_stop(conn->stream);
#endif
}
#ifdef USE_QUIC
static void net_write_done(uv_udp_send_t *req, int status)
#else
static void net_write_done(uv_write_t *req, int status)
#endif
{
LOWER_WRITE_OP *op = (LOWER_WRITE_OP *)req->data;
APP_CONN *conn = op->conn;
if (status < 0) {
fprintf(stderr, "UV write failed %d\n", status);
return;
}
free(op->buf);
free(op);
flush_write_buf(conn);
}
static void flush_write_buf(APP_CONN *conn)
{
int rc, rd;
LOWER_WRITE_OP *op;
uint8_t *buf;
buf = malloc(4096);
if (!buf)
return;
rd = BIO_read(conn->net_bio, buf, 4096);
if (rd <= 0) {
free(buf);
return;
}
op = calloc(1, sizeof(LOWER_WRITE_OP));
if (!op) {
free(buf);
return;
}
op->buf = buf;
op->conn = conn;
op->w.data = op;
op->b.base = (char *)buf;
op->b.len = rd;
#ifdef USE_QUIC
rc = uv_udp_send(&op->w, &conn->udp, &op->b, 1, NULL, net_write_done);
#else
rc = uv_write(&op->w, conn->stream, &op->b, 1, net_write_done);
#endif
if (rc < 0) {
free(buf);
free(op);
fprintf(stderr, "UV write failed\n");
return;
}
}
static void handshake_done_ssl(APP_CONN *conn)
{
#ifdef USE_QUIC
set_timer(conn);
#endif
conn->app_connect_cb(conn, 0, conn->app_connect_arg);
}
static int handshake_ssl(APP_CONN *conn)
{
int rc, rcx;
conn->init_handshake = 1;
rc = SSL_do_handshake(conn->ssl);
if (rc > 0) {
conn->done_handshake = 1;
handshake_done_ssl(conn);
set_rx(conn);
return 0;
}
flush_write_buf(conn);
rcx = SSL_get_error(conn->ssl, rc);
if (rcx == SSL_ERROR_WANT_READ) {
set_rx(conn);
return 0;
}
fprintf(stderr, "Handshake error: %d\n", rcx);
return -rcx;
}
static int setup_ssl(APP_CONN *conn, const char *hostname)
{
BIO *internal_bio = NULL, *net_bio = NULL;
SSL *ssl = NULL;
#ifdef USE_QUIC
static const unsigned char alpn[] = {5, 'd', 'u', 'm', 'm', 'y'};
#endif
ssl = SSL_new(conn->ctx);
if (!ssl)
return -1;
SSL_set_connect_state(ssl);
#ifdef USE_QUIC
if (BIO_new_bio_dgram_pair(&internal_bio, 0, &net_bio, 0) <= 0) {
SSL_free(ssl);
return -1;
}
#else
if (BIO_new_bio_pair(&internal_bio, 0, &net_bio, 0) <= 0) {
SSL_free(ssl);
return -1;
}
#endif
SSL_set_bio(ssl, internal_bio, internal_bio);
if (SSL_set1_host(ssl, hostname) <= 0) {
SSL_free(ssl);
return -1;
}
if (SSL_set_tlsext_host_name(ssl, hostname) <= 0) {
SSL_free(ssl);
return -1;
}
#ifdef USE_QUIC
/* Configure ALPN, which is required for QUIC. */
if (SSL_set_alpn_protos(ssl, alpn, sizeof(alpn))) {
/* Note: SSL_set_alpn_protos returns 1 for failure. */
SSL_free(ssl);
return -1;
}
#endif
conn->net_bio = net_bio;
conn->ssl = ssl;
return handshake_ssl(conn);
}
#ifndef USE_QUIC
static void tcp_connect_done(uv_connect_t *tcp_connect, int status)
{
int rc;
APP_CONN *conn = (APP_CONN *)tcp_connect->data;
if (status < 0) {
uv_stop(uv_default_loop());
return;
}
rc = setup_ssl(conn, conn->hostname);
if (rc < 0) {
fprintf(stderr, "cannot init SSL\n");
uv_stop(uv_default_loop());
return;
}
}
#endif
static void net_connect_fail_close_done(uv_handle_t *handle)
{
APP_CONN *conn = (APP_CONN *)handle->data;
free(conn);
}
#ifdef USE_QUIC
static void timer_done(uv_timer_t *timer)
{
APP_CONN *conn = (APP_CONN *)timer->data;
SSL_handle_events(conn->ssl);
handle_pending_writes(conn);
flush_write_buf(conn);
set_rx(conn);
set_timer(conn); /* repeat timer */
}
static void set_timer(APP_CONN *conn)
{
struct timeval tv;
int ms, is_infinite;
if (!SSL_get_event_timeout(conn->ssl, &tv, &is_infinite))
return;
ms = is_infinite ? -1 : timeval_to_ms(&tv);
if (ms > 0)
uv_timer_start(&conn->timer, timer_done, ms, 0);
}
#endif
static int try_write(APP_CONN *conn, UPPER_WRITE_OP *op)
{
int rc, rcx;
size_t written = op->written;
while (written < op->buf_len) {
rc = SSL_write(conn->ssl, op->buf + written, op->buf_len - written);
if (rc <= 0) {
rcx = SSL_get_error(conn->ssl, rc);
if (rcx == SSL_ERROR_WANT_READ) {
op->written = written;
return 0;
} else {
if (op->cb != NULL)
op->cb(conn, -rcx, op->cb_arg);
return 1; /* op should be freed */
}
}
written += rc;
}
if (op->cb != NULL)
op->cb(conn, 0, op->cb_arg);
flush_write_buf(conn);
return 1; /* op should be freed */
}
static int write_deferred(APP_CONN *conn, const void *buf, size_t buf_len, app_write_cb *cb, void *arg)
{
UPPER_WRITE_OP *op = calloc(1, sizeof(UPPER_WRITE_OP));
if (!op)
return -1;
op->buf = buf;
op->buf_len = buf_len;
op->conn = conn;
op->cb = cb;
op->cb_arg = arg;
enqueue_upper_write_op(conn, op);
set_rx(conn);
flush_write_buf(conn);
return buf_len;
}
static void teardown_continued(uv_handle_t *handle)
{
APP_CONN *conn = (APP_CONN *)handle->data;
UPPER_WRITE_OP *op, *next_op;
char *teardown_done = conn->teardown_done;
#ifdef USE_QUIC
if (++*teardown_done < 2)
return;
#endif
for (op=conn->pending_upper_write_head; op; op=next_op) {
next_op = op->next;
free(op);
}
free(conn);
#ifndef USE_QUIC
*teardown_done = 1;
#endif
}
/*
* ============================================================================
* Example driver for the above code. This is just to demonstrate that the code
* works and is not intended to be representative of a real application.
*/
static void post_read(APP_CONN *conn, void *buf, size_t buf_len, void *arg)
{
if (!buf_len) {
free(buf);
uv_stop(uv_default_loop());
return;
}
fwrite(buf, 1, buf_len, stdout);
free(buf);
}
static void post_write_get(APP_CONN *conn, int status, void *arg)
{
if (status < 0) {
fprintf(stderr, "write failed: %d\n", status);
return;
}
app_read_start(conn, post_read, NULL);
}
char tx_msg[300];
int mlen;
static void post_connect(APP_CONN *conn, int status, void *arg)
{
int wr;
if (status < 0) {
fprintf(stderr, "failed to connect: %d\n", status);
uv_stop(uv_default_loop());
return;
}
wr = app_write(conn, tx_msg, mlen, post_write_get, NULL);
if (wr < mlen) {
fprintf(stderr, "error writing request");
return;
}
}
int main(int argc, char **argv)
{
int rc = 1;
SSL_CTX *ctx = NULL;
APP_CONN *conn = NULL;
struct addrinfo hints = {0}, *result = NULL;
if (argc < 3) {
fprintf(stderr, "usage: %s host port\n", argv[0]);
goto fail;
}
mlen = snprintf(tx_msg, sizeof(tx_msg),
"GET / HTTP/1.0\r\nHost: %s\r\n\r\n", argv[1]);
ctx = create_ssl_ctx();
if (!ctx)
goto fail;
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE;
rc = getaddrinfo(argv[1], argv[2], &hints, &result);
if (rc < 0) {
fprintf(stderr, "cannot resolve\n");
goto fail;
}
conn = new_conn(ctx, argv[1], result->ai_addr, result->ai_addrlen, post_connect, NULL);
if (!conn)
goto fail;
uv_run(uv_default_loop(), UV_RUN_DEFAULT);
rc = 0;
fail:
teardown(conn);
freeaddrinfo(result);
uv_loop_close(uv_default_loop());
teardown_ctx(ctx);
return rc;
}

View File

@@ -0,0 +1,345 @@
Proposal for OSSL_PARAM futures
===============================
Format:
```perl
{-
use OpenSSL::paramnames qw(produce_param_handlers);
-}
/*
* Machine generated parameter handling
* generated by util/perl/OpenSSL/paramnames.pm
*/
{-
produce_param_handlers(
'name' => 'kdf_scrypt',
'functions' => 'both', # getter or setter being the other options
'prologue' => "KDF_SCRYPT *ctx = vctx;",
"static" => "yes", # "yes" to generate static functions (default) or
# "no" to not
'params' => (
'KDF_PARAM_PASSWORD' => (
'type' => 'octet string',
'access' => 'writeonly',
'setaction' => qq(
if (!scrypt_set_membuf(&ctx->pass, &ctx->pass_len, p))
return 0;
),
),
'KDF_PARAM_SALT' => (
'type' => 'octet string',
'access' => 'readwrite',
'setaction' => qq(
if (!scrypt_set_membuf(&ctx->salt, &ctx->salt_len, p))
return 0;
),
'getaction' => qq(
p->return_size = ctx->salt_len;
if (p->data_size >= ctx->salt_len)
memcpy(p->data, ctx->salt, ctx->salt_len);
),
),
'KDF_PARAM_SCRYPT_N' => (
'type' => 'integer',
'ctype' => 'uint64_t',
'access' => 'readwrite',
'field' => "ctx->N",
'sanitycheck' => "value > 1 && is_power_of_two(value)"
),
'KDF_PARAM_SCRYPT_R' => (
'type' => 'integer',
'ctype' => 'uint64_t',
'access' => 'readwrite',
'field' => "ctx->r",
'sanitycheck' => "value >= 1",
),
'KDF_PARAM_SCRYPT_P' => (
'type' => 'integer',
'ctype' => 'uint64_t',
'access' => 'readwrite',
'field' => "ctx->p",
'sanitycheck' => "value >= 1",
),
'KDF_PARAM_SCRYPT_MAXMEM' => (
'type' => 'integer',
'ctype' => 'uint64_t',
'access' => 'readwrite',
'field' => "ctx->maxmem_bytes",
'sanitycheck' => "value >= 1",
),
'KDF_PARAM_PROPERTIES' => (
'type' => 'utf8_string',
'access' => 'readwrite',
'setaction' => qq(
if (!set_property_query(ctx, p->data) || !set_digest(ctx))
return 0;
),
),
'KDF_PARAM_SIZE' => (
'type' => 'integer',
'ctype' => 'size_t',
'access' => 'readonly',
'field' => "SIZE_MAX",
),
),
);
-}
/* End of generated code */
```
The top level attributes are:
- "name" is the name the functions will derive from, e.g. "kdf_scrypt", to which
_[gs]et[_ctx]_params will be appended
- "functions" is the functions to generate. By default both setters and
getters but either can be omitted.
- "prologue" defines some introductory code emitted in the generated functions.
Function arguments are: `void *vctx, OSSL_PARAM params[]` and this
can be used to specialise the void pointer or declare locals.
- "epilogue" defines some post decode code emitted in the generated function
- "params" defines the parameters both gettable and settable
Within the "params" the fields specify each parameter by label.
Each parameter is then specialised with attributes:
- "type" is the OSSL_PARAM type
- "ctype" is the underlying C type (e.g. for an integer parameter size_t
could be the C type)
- "access" is readwrite, readonly or writeonly. This determines if the
parameter is a settable, gettable or both
- "field" is an accessor to the field itself
- "sanitycheck" is a validation check for the parameter. If present, code
will be generated `if (!(sanitycheck)) return 0;`
The local variable `var` will contain the C value if specified.
- "setaction" is C code to execute when the parameter is being set. It will
define an OSSL_PARAM pointer p to set.
- "code" set to "no" skips code generation for this parameter, it defaults
to "yes" which generates handlers. This is useful when a parameter
is duplicated with differenting types (e.g. utf8 string and integer).
- "published" set to "yes" includes the parameter in the gettable/settable
lists. Set to "no" and it isn't included (but will still be processed).
It defaults to "yes".
- Flags include:
- nostatic: do not make the function static
- nocode: do not generate code for this parameter
- This allows, e.g., two different types for a parameter (int & string)
- unpublished: do not generate this parameter in the gettable/settable list
- "engine" is the only one like this
- readonly: create a getter but not a setter
- writeonly: create a setter but not a getter
The idea is that the gettable and get functions will be simultaneously
generated along with fast decoder to look up parameter names quickly.
The getter and setter functions will be pre-populated with some local variables:
```c
OSSL_PARAM *p; /* The matching parameter */
type val; /* The value of the parameter after a get/set call */
/* (for C types) */
```
A worked example for scrypt would generate something along the lines of:
```c
enum kdf_scrypt_ctx_param_e {
kdf_scrypt_ctx_param_INVALID,
kdf_scrypt_ctx_param_OSSL_KDF_PARAM_PASSWORD,
kdf_scrypt_ctx_param_OSSL_KDF_PARAM_PROPERTIES,
kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SALT,
kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_MAXMEM,
kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_N,
kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_P,
kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_R,
kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SIZE
};
static enum kdf_scrypt_ctx_param_e kdf_scrypt_ctx_lookup(const OSSL_PARAM *p) {
/* magic decoder */
return kdf_scrypt_ctx_param_INVALID;
}
static int kdf_scrypt_set_ctx_params(void *vctx, const OSSL_PARAM params[])
{
const OSSL_PARAM *p;
KDF_SCRYPT *ctx = vctx;
if (params == NULL)
return 1;
for (p = params; p->key != NULL; p++) {
switch (kdf_scrypt_ctx_lookup(p)) {
default:
break;
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_PASSWORD:
if (!scrypt_set_membuf(&ctx->pass, &ctx->pass_len, p))
return 0;
break;
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SALT:
if (!scrypt_set_membuf(&ctx->salt, &ctx->salt_len, p))
return 0;
break;
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_N: {
uint64_t value;
if (!OSSL_PARAM_get_uint64(p, &value))
return 0;
if (!(value > 1 && is_power_of_two(value)))
return 0;
ctx->N = value;
break;
}
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_R: {
uint64_t value;
if (!OSSL_PARAM_get_uint64(p, &value))
return 0;
if (!(value >= 1))
return 0;
ctx->r = value;
break;
}
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_P: {
uint64_t value;
if (!OSSL_PARAM_get_uint64(p, &value))
return 0;
if (!(value >= 1))
return 0;
ctx->p = value;
break;
}
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_MAXMEM: {
uint64_t value;
if (!OSSL_PARAM_get_uint64(p, &value))
return 0;
if (!(value >= 1))
return 0;
ctx->maxmem_bytes = value;
break;
}
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_PROPERTIES:
if (p->data_type != OSSL_PARAM_UTF8_STRING)
return 0;
if (!set_property_query(ctx, p->data) || !set_digest(ctx))
return 0;
break;
}
}
return 1;
}
static const OSSL_PARAM *kdf_scrypt_settable_ctx_params(ossl_unused void *ctx,
ossl_unused void *p_ctx)
{
static const OSSL_PARAM known_settable_ctx_params[] = {
OSSL_PARAM_octet_string(OSSL_KDF_PARAM_PASSWORD, NULL, 0),
OSSL_PARAM_octet_string(OSSL_KDF_PARAM_SALT, NULL, 0),
OSSL_PARAM_uint64(OSSL_KDF_PARAM_SCRYPT_N, NULL),
OSSL_PARAM_uint32(OSSL_KDF_PARAM_SCRYPT_R, NULL),
OSSL_PARAM_uint32(OSSL_KDF_PARAM_SCRYPT_P, NULL),
OSSL_PARAM_uint64(OSSL_KDF_PARAM_SCRYPT_MAXMEM, NULL),
OSSL_PARAM_utf8_string(OSSL_KDF_PARAM_PROPERTIES, NULL, 0),
OSSL_PARAM_END
};
return known_settable_ctx_params;
}
static int kdf_scrypt_get_ctx_params(void *vctx, OSSL_PARAM params[])
{
const OSSL_PARAM *p;
KDF_SCRYPT *ctx = vctx;
if (params == NULL)
return 1;
for (p = params; p->key != NULL; p++) {
switch (kdf_scrypt_ctx_lookup(p)) {
default:
break;
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_PASSWORD:
if (!scrypt_set_membuf(&ctx->pass, &ctx->pass_len, p))
return 0;
break;
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SALT:
p->return_size = ctx->salt_len;
if (p->data_size >= ctx->salt_len)
memcpy(p->data, ctx->salt, ctx->salt_len);
break;
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_N: {
if (!OSSL_PARAM_set_uint64(p, ctx->N))
return 0;
break;
}
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_R: {
if (!OSSL_PARAM_set_uint64(p, ctx->r))
return 0;
break;
}
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_P: {
if (!OSSL_PARAM_set_uint64(p, ctx->p))
return 0;
break;
}
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SCRYPT_MAXMEM: {
if (!OSSL_PARAM_set_uint64(p, ctx->maxmem_bytes))
return 0;
break;
}
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_PROPERTIES:
if (p->data_type != OSSL_PARAM_UTF8_STRING) {
if (!set_property_query(ctx, p->data) || !set_digest(ctx))
return 0;
}
break;
case kdf_scrypt_ctx_param_OSSL_KDF_PARAM_SIZE:
if (!OSSL_PARAM_set_size_t(p, SIZE_MAX))
return 0;
break;
}
}
return 1;
}
static const OSSL_PARAM *kdf_scrypt_gettable_ctx_params(ossl_unused void *ctx,
ossl_unused void *p_ctx)
{
static const OSSL_PARAM known_gettable_ctx_params[] = {
OSSL_PARAM_size_t(OSSL_KDF_PARAM_SIZE, NULL),
OSSL_PARAM_END
};
return known_gettable_ctx_params;
}
```

View File

@@ -0,0 +1,135 @@
Fetching composite algorithms and using them - adding the bits still missing
============================================================================
Quick background
----------------
We currently support - at least in the public libcrypto API - explicitly
fetching composite algorithms (such as AES-128-CBC or HMAC-SHA256), and
using them in most cases. In some cases (symmetric ciphers), our providers
also provide them.
However, there is one class of algorithms where the support for *using*
explicitly fetched algorithms is lacking: asymmetric algorithms.
For a longer background and explanation, see
[Background / tl;dr](#background-tldr) at the end of this design.
Public API - Add variants of `EVP_PKEY_CTX` initializers
--------------------------------------------------------
As far as this design is concerned, these API sets are affected:
- SIGNATURE
- ASYM_CIPHER
- KEYEXCH
The proposal is to add these initializer functions:
``` C
int EVP_PKEY_sign_init_ex2(EVP_PKEY_CTX *pctx,
EVP_SIGNATURE *algo, const OSSL_PARAM params[]);
int EVP_PKEY_verify_init_ex2(EVP_PKEY_CTX *pctx,
EVP_SIGNATURE *algo, const OSSL_PARAM params[]);
int EVP_PKEY_verify_recover_init_ex2(EVP_PKEY_CTX *pctx,
EVP_SIGNATURE *algo, const OSSL_PARAM params[]);
int EVP_PKEY_encrypt_init_ex2(EVP_PKEY_CTX *ctx, EVP_ASYM_CIPHER *asymciph,
const OSSL_PARAM params[]);
int EVP_PKEY_decrypt_init_ex2(EVP_PKEY_CTX *ctx, EVP_ASYM_CIPHER *asymciph,
const OSSL_PARAM params[]);
int EVP_PKEY_derive_init_ex2(EVP_PKEY_CTX *ctx, EVP_KEYEXCH *exchange,
const OSSL_PARAM params[]);
```
Detailed proposal for these APIs will be or are prepared in other design
documents:
- [Functions for explicitly fetched signature algorithms]
- [Functions for explicitly fetched asym-cipher algorithms] (not yet designed)
- [Functions for explicitly fetched keyexch algorithms] (not yet designed)
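To make the intended usage concrete, here is a hedged sketch of how an application could combine an explicit fetch with the proposed `EVP_PKEY_sign_init_ex2()`; the helper name is hypothetical, and what `tbs` must contain (a digest for a plain "RSA" fetch, a full message for a composite algorithm) follows the rules discussed in this document:
``` C
#include <openssl/evp.h>
#include <openssl/crypto.h>

/* Illustrative helper: sign with an explicitly fetched signature algorithm. */
static int sign_with_fetched_algo(EVP_PKEY *pkey, const char *algname,
                                  const unsigned char *tbs, size_t tbs_len,
                                  unsigned char **sig, size_t *sig_len)
{
    int ok = 0;
    EVP_SIGNATURE *alg = EVP_SIGNATURE_fetch(NULL, algname, NULL);
    EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new_from_pkey(NULL, pkey, NULL);

    if (alg == NULL || pctx == NULL)
        goto end;
    /* Proposed initializer taking the explicitly fetched EVP_SIGNATURE. */
    if (EVP_PKEY_sign_init_ex2(pctx, alg, NULL) <= 0)
        goto end;
    /* First call sizes the signature, second call produces it. */
    if (EVP_PKEY_sign(pctx, NULL, sig_len, tbs, tbs_len) <= 0
            || (*sig = OPENSSL_malloc(*sig_len)) == NULL
            || EVP_PKEY_sign(pctx, *sig, sig_len, tbs, tbs_len) <= 0)
        goto end;
    ok = 1;
 end:
    EVP_PKEY_CTX_free(pctx);
    EVP_SIGNATURE_free(alg);
    return ok;
}
```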
-----
-----
Background / tl;dr
------------------
### What is a composite algorithm?
A composite algorithm is an algorithm that's composed of more than one other
algorithm. In OpenSSL parlance with a focus on signatures, they have been
known as "sigalgs", but this is really broader than just signature algorithms.
Examples are:
- AES-128-CBC
- hmacWithSHA256
- sha256WithRSAEncryption
### The connection with AlgorithmIdentifiers
AlgorithmIdentifier is an ASN.1 structure that defines an algorithm as an
OID, along with parameters that should be passed to that algorithm.
It is expected that an application should be able to take that OID and
fetch it directly, after conversion to string form (either a name if the
application or libcrypto happens to know it, or the OID itself in canonical
numerical form). To enable this, explicit fetching is necessary.
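As a rough illustration of that flow, the hypothetical helper below converts the OID from an AlgorithmIdentifier to a textual name with `OBJ_obj2txt()` and passes it straight to an explicit fetch (shown here for a signature algorithm):
``` C
#include <openssl/objects.h>
#include <openssl/x509.h>
#include <openssl/evp.h>

/* Hypothetical helper: fetch the algorithm named by an AlgorithmIdentifier. */
static EVP_SIGNATURE *fetch_from_algor(OSSL_LIB_CTX *libctx,
                                       const X509_ALGOR *alg)
{
    char name[128];
    const ASN1_OBJECT *oid = NULL;

    X509_ALGOR_get0(&oid, NULL, NULL, alg);
    /* Yields a known name if libcrypto has one, else the numeric OID form. */
    if (oid == NULL || OBJ_obj2txt(name, sizeof(name), oid, 0) <= 0)
        return NULL;
    return EVP_SIGNATURE_fetch(libctx, name, NULL);
}
```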
### What we have today
As a matter of fact, we already have built-in support for fetching
composite algorithms, although our providers do not fully participate in
that support, and *most of the time*, we also have public APIs to use the
fetched result, commonly known as support for explicit fetching.
The idea is that providers can declare the different compositions of a base
algorithm in the `OSSL_ALGORITHM` array, each pointing to different
`OSSL_DISPATCH` tables, which would in turn refer to pretty much the same
functions, apart from the constructor function.
For example, we already do this with symmetric ciphers.
Another example, which we could implement in our providers today, would be
compositions of HMAC:
``` C
static const OSSL_ALGORITHM deflt_macs[] = {
/* ... */
{ "HMAC-SHA1:hmacWithSHA1:1.2.840.113549.2.7",
"provider=default", ossl_hmac_sha1_functions },
{ "HMAC-SHA224:hmacWithSHA224:1.2.840.113549.2.8",
"provider=default", ossl_hmac_sha224_functions },
{ "HMAC-SHA256:hmacWithSHA256:1.2.840.113549.2.9",
"provider=default", ossl_hmac_sha256_functions },
{ "HMAC-SHA384:hmacWithSHA384:1.2.840.113549.2.10",
"provider=default", ossl_hmac_sha384_functions },
{ "HMAC-SHA512:hmacWithSHA512:1.2.840.113549.2.11",
"provider=default", ossl_hmac_sha512_functions },
/* ... */
```
### What we don't have today
There are some classes of algorithms for which we have no support for using
the result of explicit fetching. So for example, while it's possible for a
provider to declare composite algorithms through the `OSSL_ALGORITHM` array,
there's currently no way for an application to use them.
This all revolves around asymmetric algorithms, where we currently only
support implicit fetching.
This is hurtful in multiple ways:
- It fails the provider authors in terms of being able to consistently
declare all algorithms through `OSSL_ALGORITHM` arrays.
- It fails the applications in terms of being able to fetch algorithms and
use the result.
- It fails discoverability, for example through the `openssl list`
command.
<!-- links -->
[Functions for explicitly fetched signature algorithms]:
functions-for-explicitly-fetched-signature-algorithms.md

View File

@@ -0,0 +1,337 @@
OpenSSL FIPS Indicators
=======================
The following document refers to behaviour required by the OpenSSL FIPS provider;
the changes should not affect the default provider.
References
----------
- [1] FIPS 140-3 Standards: <https://csrc.nist.gov/projects/cryptographic-module-validation-program/fips-140-3-standards>
- [2] Approved Security Functions: <https://csrc.nist.gov/projects/cryptographic-module-validation-program/sp-800-140-series-supplemental-information/sp800-140c>
- [3] Approved SSP generation and Establishment methods: <https://csrc.nist.gov/projects/cryptographic-module-validation-program/sp-800-140-series-supplemental-information/sp800-140d>
- [4] Key transitions: <https://csrc.nist.gov/pubs/sp/800/131/a/r2/final>
- [5] FIPS 140-3 Implementation Guidance: <https://csrc.nist.gov/csrc/media/Projects/cryptographic-module-validation-program/documents/fips 140-3/FIPS 140-3 IG.pdf>
Requirements
------------
The following information was extracted from the FIPS 140-3 IG [5] “2.4.C Approved Security Service Indicator”
- A module must have an approved mode of operation that requires at least one service to use an approved security function (defined by [2] and [3]).
- A FIPS 140-3 compliant module requires a built-in service indicator capable of indicating the use of approved security services
- If a module only supports approved services in an approved manner an implicit indicator can be used (e.g. successful completion of a service is an indicator).
- An approved algorithm is not considered to be an approved implementation if it does not have a CAVP certificate or does not include its required self-tests. (i.e. My interpretation of this is that if the CAVP certificate lists an algorithm with only a subset of key sizes, digests, and/or ciphers compared to the implementation, the differences ARE NOT APPROVED. In many places we have no restrictions on the digest or cipher selected).
- Documentation is required to demonstrate how to use indicators for each approved cryptographic algorithm.
- Testing is required to execute all services and verify that the indicator provides an unambiguous indication of whether the service utilizes an approved cryptographic algorithm, security function or process in an approved manner or not.
- The Security Policy may require updates related to indicators. AWS/google have added a table in their security policy called Non-Approved Algorithms not allowed in the approved mode of operation. An example is RSA with a keysize of < 2048 bits (which has been enforced by [4]).
Since any new FIPS restrictions added could possibly break existing applications
the following additional OpenSSL requirements are also needed:
- The FIPS restrictions should be able to be disabled using Configuration file options (This results in unapproved mode and requires an indicator).
- A mechanism for logging the details of any unapproved mode operations that have been triggered (e.g. DSA Signing)
- The FIPS restrictions should be able to be enabled/disabled per algorithm context.
- If the per algorithm context value is not set, then the Configuration file option is used.
Solution
--------
In OpenSSL most of the existing code in the FIPS provider is using
implicit indicators i.e. An error occurs if existing FIPS rules are violated.
The following rules will apply to any code that currently is not FIPS approved,
but needs to be.
- The fipsinstall application will have a configurable item added for each algorithm that requires a change. These options will be passed to the FIPS provider in a manner similar to existing code.
- A user defined callback similar to OSSL_SELF_TEST will be added. This callback will be triggered whenever an approved mode test fails.
It may be set up by the user using
```c
typedef int (OSSL_INDICATOR_CALLBACK)(const char *type,
const char *desc,
const OSSL_PARAM params[]);
void OSSL_INDICATOR_set_callback(OSSL_LIB_CTX *libctx,
OSSL_INDICATOR_CALLBACK *cb);
```
The callback can be changed at any time.
- A getter is also supplied (which is also used internally)
```c
void OSSL_INDICATOR_get_callback(OSSL_LIB_CTX *libctx,
OSSL_INDICATOR_CALLBACK **cb);
```
An application's indicator OSSL_INDICATOR_CALLBACK can be used to log that an
indicator was triggered. The callback should normally return nonzero.
Returning 0 causes the algorithm to fail, in the same way that a strict check
would fail.
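As a sketch of the intended application-side usage of the proposed callback API (the header name and the callback body are illustrative assumptions):
```c
#include <stdio.h>
#include <openssl/params.h>
#include <openssl/indicator.h>   /* assumed location of the proposed API */

/* Log unapproved operations; returning nonzero lets the operation continue. */
static int app_indicator_cb(const char *type, const char *desc,
                            const OSSL_PARAM params[])
{
    fprintf(stderr, "FIPS indicator: %s (%s) ran unapproved\n", type, desc);
    return 1; /* return 0 here to make the operation fail instead */
}

static void install_indicator_cb(OSSL_LIB_CTX *libctx)
{
    OSSL_INDICATOR_set_callback(libctx, app_indicator_cb);
}
```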
- To control an algorithm context's checks via code requires a setter for each individual check e.g OSSL_PKEY_PARAM_FIPS_KEY_CHECK.
```c
p = OSSL_PARAM_locate_const(params, OSSL_PKEY_PARAM_FIPS_KEY_CHECK);
if (p != NULL
&& !OSSL_PARAM_get_int(p, &ctx->key_check))
return 0;
```
The setter is initially -1 (unknown) and can be set to 0 or 1 via a set_ctx call.
If the setter is needed it must be set BEFORE the FIPS related check is done.
If the FIPS related approved mode check fails and either the ctx setter is zero
OR the related FIPS configuration option is zero then the callback is triggered.
If neither the setter nor the config option is zero then the algorithm should fail.
If the callback is triggered a flag is set in the algorithm ctx that indicates
that this algorithm is unapproved. Once the context is unapproved it will
remain in this state.
- To access the indicator via code requires a getter
```c
p = OSSL_PARAM_locate(params, OSSL_ALG_PARAM_FIPS_APPROVED_INDICATOR);
if (p != NULL && !OSSL_PARAM_set_int(p, ctx->approved))
return 0;
```
This initially has a value of 1, and may be set to 0 if the algorithm is
unapproved. The getter allows you to access the indicator value after the
operation has completed (e.g. Final or Derive related functions)
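For example, after a KDF derive an application could read the indicator back roughly as follows; this is a sketch that assumes the KDF exposes the parameter named above through `EVP_KDF_CTX_get_params()`:
```c
#include <openssl/kdf.h>
#include <openssl/params.h>
#include <openssl/core_names.h>

/* Returns 1 if the operation on kctx stayed FIPS approved, 0 otherwise. */
static int derive_was_approved(EVP_KDF_CTX *kctx)
{
    int approved = 0;
    OSSL_PARAM params[2];

    params[0] = OSSL_PARAM_construct_int(OSSL_ALG_PARAM_FIPS_APPROVED_INDICATOR,
                                         &approved);
    params[1] = OSSL_PARAM_construct_end();
    if (!EVP_KDF_CTX_get_params(kctx, params))
        return 0;
    return approved;
}
```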
- Example Algorithm Check
```c
void alg_init(ALG_CTX *ctx)
{
ctx->strict_checks = -1;
ctx->approved = 1;
}
int alg_set_params(ALG_CTX *ctx, const OSSL_PARAM params[])
{
const OSSL_PARAM *p;
p = OSSL_PARAM_locate_const(params, OSSL_ALG_PARAM_STRICT_CHECKS);
if (p != NULL && !OSSL_PARAM_get_int(p, &ctx->strict_checks))
return 0;
return 1;
}
int alg_check_approved(ALG_CTX *ctx)
{
int approved;
approved = some_fips_test_passes(ctx->libctx); // Check FIPS restriction for alg
if (!approved) {
ctx->approved = 0;
if (ctx->strict_checks == 0
|| fips_config_get(ctx->libctx, op) == 0) {
if (!indicator_cb(ctx->libctx, "ALG NAME", "ALG DESC"))
return 0;
}
}
return 1;
}
```
- Existing security check changes
OpenSSL already uses FIPS configuration options to perform security_checks, but
the existing code needs to change to work with indicators.
e.g. existing code
```c
if (ossl_securitycheck_enabled(ctx)) {
pass = do_some_alg_test(ctx);
if (!pass)
return 0; /* Produce an error */
}
```
In updated code for indicators the test always runs, i.e.
```c
pass = do_some_alg_test(ctx);
// Do code similar to alg_check_approved() above
// which will conditionally decide whether to return an error
// or trigger the indicator callback.
```
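Spelled out a little further, the updated pattern might look like the sketch below, reusing the illustrative names (including the unspecified `op` argument) from the example above:
```c
static int alg_do_check(ALG_CTX *ctx)
{
    int pass = do_some_alg_test(ctx);   /* the test now always runs */

    if (!pass) {
        ctx->approved = 0;              /* record that this ctx is unapproved */
        if (ctx->strict_checks == 0
                || fips_config_get(ctx->libctx, op) == 0) {
            /* tolerated: report it through the indicator callback */
            if (!indicator_cb(ctx->libctx, "ALG NAME", "ALG DESC"))
                return 0;
        } else {
            return 0;                   /* strict: fail as the old code did */
        }
    }
    return 1;
}
```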
Issues with per algorithm ctx setters
------------------------------------------------
Normally a user would set params (such as OSSL_PKEY_PARAM_FIPS_KEY_CHECK) using
set_ctx_params(), but some algorithms currently do checks in their init operation.
These init functions normally pass an OSSL_PARAM[] argument, but this still
requires the user to set the ctx params in their init.
e.g.
```c
int strict = 0;
params[0] = OSSL_PARAM_construct_int(OSSL_PKEY_PARAM_FIPS_KEY_CHECK, &strict);
params[1] = OSSL_PARAM_construct_end();
EVP_DigestSignInit_ex(ctx, &pctx, name, libctx, NULL, pkey, params);
// using EVP_PKEY_CTX_set_params() here would be too late
```
Delaying the check to after the init would be possible, but it would be a change
in existing behaviour. For example the keysize checks are done in the init since
this is when the key is setup.
Notes
-----
There was discussion related to also having a global config setting that could
turn off FIPS mode. This will not be added at this stage.
MACROS
------
A generic object is used internally to embed the variables required for
indicators into each algorithm ctx. The object contains the ctx settables and
the approved flag.
```c
typedef struct ossl_fips_ind_st {
unsigned char approved;
signed char settable[OSSL_FIPS_IND_SETTABLE_MAX];
} OSSL_FIPS_IND;
```
Since there are a lot of algorithms where indicators are needed it was decided
to use MACROS to simplify the process.
The following macros are only defined for the FIPS provider.
- OSSL_FIPS_IND_DECLARE
OSSL_FIPS_IND should be placed into the algorithm object's struct that is
returned by a new()
- OSSL_FIPS_IND_COPY(dst, src)
This is used to copy the OSSL_FIPS_IND when calling a dup(). If the dup() uses
*dst = *src then it is not required.
- OSSL_FIPS_IND_INIT(ctx)
Initializes the OSSL_FIPS_IND object. It should be called in the new().
- OSSL_FIPS_IND_ON_UNAPPROVED(ctx, id, libctx, algname, opname, configopt_fn)
This triggers the callback, id is the settable index and must also be used
by OSSL_FIPS_IND_SET_CTX_PARAM(), algname and opname are strings that are passed
to the indicator callback, configopt_fn is the FIPS configuration option.
Where this is triggered depends on the algorithm. In most cases this can be done
in the set_ctx().
- OSSL_FIPS_IND_SETTABLE_CTX_PARAM(name)
This must be put into the algorithm's settable ctx_params table.
The name is the settable 'key' name such as OSSL_PKEY_PARAM_FIPS_KEY_CHECK.
There should be a matching name used by OSSL_FIPS_IND_SET_CTX_PARAM().
There may be multiple of these.
- OSSL_FIPS_IND_SET_CTX_PARAM(ctx, id, params, name)
This should be put at the top of the algorithm's set_ctx_params().
There may be multiple of these. The name should match an
OSSL_FIPS_IND_SETTABLE_CTX_PARAM() entry.
The id should match an OSSL_FIPS_IND_ON_UNAPPROVED() entry.
- OSSL_FIPS_IND_GETTABLE_CTX_PARAM()
This must be placed in the algorithm's gettable_ctx_params table
- OSSL_FIPS_IND_GET_CTX_PARAM(ctx, params)
This must be placed in the algorithm's get_ctx_params().
Some existing algorithms will require set_ctx_params()/settable_ctx_params()
and/or get_ctx_params()/gettable_ctx_params() to be added if required.
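A minimal sketch of how these macros might be wired into a hypothetical algorithm context follows; the settable index 0, the helper names and the exact expansion and return conventions of the macros are assumptions based only on the descriptions above:
```c
/* In the FIPS provider sources; includes and real macro definitions omitted. */
typedef struct alg_ctx_st {
    OSSL_LIB_CTX *libctx;
    OSSL_FIPS_IND_DECLARE               /* embeds the OSSL_FIPS_IND object */
    /* ... algorithm specific fields ... */
} ALG_CTX;

static void *alg_newctx(void *provctx)
{
    ALG_CTX *ctx = OPENSSL_zalloc(sizeof(*ctx));

    if (ctx == NULL)
        return NULL;
    OSSL_FIPS_IND_INIT(ctx)             /* settables start at -1, approved at 1 */
    return ctx;
}

static int alg_set_ctx_params(void *vctx, const OSSL_PARAM params[])
{
    ALG_CTX *ctx = vctx;

    /* index 0 is shared with the OSSL_FIPS_IND_ON_UNAPPROVED() call below */
    if (!OSSL_FIPS_IND_SET_CTX_PARAM(ctx, 0, params,
                                     OSSL_PKEY_PARAM_FIPS_KEY_CHECK))
        return 0;
    /* ... other parameters ... */
    return 1;
}

static int alg_check_key_size(ALG_CTX *ctx)
{
    if (!key_size_is_approved(ctx)) {   /* illustrative FIPS restriction */
        if (!OSSL_FIPS_IND_ON_UNAPPROVED(ctx, 0, ctx->libctx, "ALG NAME",
                                         "Key size", fips_config_key_check))
            return 0;   /* the callback (or strict settings) vetoed it */
    }
    return 1;
}
```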
New Changes Required
--------------------
The following changes are required for FIPS 140-3 and will require indicators.
On a case-by-case basis we must decide what to do when unapproved mode is
detected.
The mechanism using FIPS configuration options and the indicator callback should
be used for most of these unapproved cases (rather than always returning an error).
### key size >= 112 bits
There are a few places where we do not enforce key size that need to be addressed.
- HMAC Which applies to all algorithms that use HMAC also (e.g. HKDF, SSKDF, KBKDF)
- CMAC
- KMAC
### Algorithm Transitions
- DES_EDE3_ECB. Disallowed for encryption, allowed for legacy decryption
- DSA. Keygen and Signing are no longer approved, verify is still allowed.
- ECDSA B & K curves are deprecated, but still approved according to (IG C.K Resolution 4).\
If we choose not to remove them, then we need to check that OSSL_PKEY_PARAM_USE_COFACTOR_ECDH is set for key agreement if the cofactor is not 1.
- ED25519/ED448 is now approved.
- X25519/X448 is not approved currently. keygen and keyexchange would also need an indicator if we allow it?
- RSA encryption (for key agreement/key transport) using PKCSV15 is no longer allowed. (Note that this breaks TLS 1.2 using RSA for KeyAgreement.)
Padding mode updates required. Check RSA KEM also.
- RSA signing using PKCS1 is still allowed (i.e. signature uses shaXXXWithRSAEncryption)
- RSA signing using X931 is no longer allowed (still allowed for verification). Check if PSS saltlen needs an indicator (note FIPS 186-4 Section 5.5 bullet (e)). Padding mode updates required in rsa_check_padding(). Check if SHA1 is allowed?
- RSA (From SP800-131Ar2) RSA >= 2048 is approved for keygen, signatures and key transport. Verification allows 1024 also. Note also that according to the (IG section C.F) that fips 186-2 verification is also allowed (So this may need either testing OR an indicator, it also mentions the modulus size must be a multiple of 256). Check that rsa_keygen_pairwise_test() and RSA self tests are all compliant with the above RSA restrictions.
- TLS1_PRF If we are only trying to support TLS1.2 here then we should remove the tls1.0/1.1 code from the FIPS MODULE.
- ECDSA Verify using prehashed message is not allowed.
### Digest Checks
Any algorithms that use a digest need to make sure that the CAVP certificate lists all supported FIPS digests otherwise an indicator is required.
This applies to the following algorithms:
- TLS_1_3_KDF (Only SHA256 and SHA384 Are allowed due to RFC 8446 Appendix B.4)
- TLS1_PRF (Only SHA256,SHA384,SHA512 are allowed)
- X963KDF (SHA1 is not allowed)
- X942KDF
- PBKDF2
- HKDF
- KBKDF
- SSKDF
- SSHKDF
- HMAC
- KMAC
- Any signature algorithms such as RSA, DSA, ECDSA.
The FIPS 140-3 IG Section C.B & C.C have notes related to Vendor affirmation.
Note many of these (such as KDFs) will not support SHAKE.
See <https://gitlab.com/redhat/centos-stream/rpms/openssl/-/blob/c9s/0078-KDF-Add-FIPS-indicators.patch?ref_type=heads>
ECDSA and RSA-PSS Signatures allow use of SHAKE.
KECCAK-KMAC-128 and KECCAK-KMAC-256 should not be allowed for anything other than KMAC.
Do we need to check which algorithms allow SHA1 also?
Test that Deterministic ECDSA does not allow SHAKE (IG C.K Additional Comments 6)
### Cipher Checks
- CMAC
- KBKDF CMAC
- GMAC
We should only allow AES. We currently just check the mode.
### Configurable options
- PBKDF2 'lower_bound_checks' needs to be part of the indicator check
Other Changes
-------------
- AES-GCM Security Policy must list AES GCM IV generation scenarios
- TEST_RAND is not approved.
- SSKDF The security policy needs to be specific about what it supports i.e. hash, kmac 128/256, hmac-hash. There are also currently no limitations on the digest for hash and hmac
- KBKDF Security policy should list KMAC-128, KMAC-256 otherwise it should be removed.
- KMAC may need a lower bound check on the output size (SP800-185 Section 8.4.2)
- HMAC (FIPS 140-3 IG Section C.D has notes about the output length when using a Truncated HMAC)

View File

@@ -0,0 +1,205 @@
Functions for explicitly fetched PKEY algorithms
================================================
Quick background
----------------
There are several proposed designs that end up revolving around the same
basic need, explicitly fetched signature algorithms. The following method
type is affected by this document:
- `EVP_SIGNATURE`
Public API - Add variants of `EVP_PKEY_CTX` functionality
---------------------------------------------------------
Through OTC discussions, it's been determined that the most suitable APIs to
touch are the `EVP_PKEY_` functions.
Specifically, `EVP_PKEY_sign()`, `EVP_PKEY_verify()`, `EVP_PKEY_verify_recover()`
and related functions.
They can be extended to accept an explicitly fetched algorithm of the right
type, and to be able to incrementally process indefinite length data streams
when the fetched algorithm permits it (for example, RSA-SHA256).
It must be made clear that the added functionality cannot be used to compose
an algorithm from different parts. For example, it's not possible to specify
a `EVP_SIGNATURE` "RSA" and combine it with a parameter that specifies the
hash "SHA256" to get the "RSA-SHA256" functionality. For an `EVP_SIGNATURE`
"RSA", the input is still expected to be a digest, or some other input that's
limited to the modulus size of the RSA pkey.
### Making things less confusing with distinct function names
Until now, `EVP_PKEY_sign()` and friends were only expected to act on the
pre-computed digest of a message (under the condition that proper flags
and signature md are specified using functions like
`EVP_PKEY_CTX_set_rsa_padding()` and `EVP_PKEY_CTX_set_signature_md()`),
or to act as "primitive" [^1] functions (under the condition that proper
flags are specified, like `RSA_NO_PADDING` for RSA signatures).
This design proposes an extension to also allow full (not pre-hashed)
messages to be passed, in a streaming style through an *update* and a
*final* function.
Discussions have revealed that it is potentially confusing to conflate the
current functionality with streaming style functionality into the same name,
so this design separates those out with specific init / update / final
functions for that purpose. For oneshot functionality, `EVP_PKEY_sign()`
and `EVP_PKEY_verify()` remain supported.
[^1]: the term "primitive" is borrowed from [PKCS#1](https://www.rfc-editor.org/rfc/rfc8017#section-5)
### Making it possible to verify with an early signature
Some more recent verification algorithms need to obtain the signature
before processing the data.
This is particularly important for streaming modes of operation.
This design proposes a mechanism to accommodate these algorithms
and modes of operation.
New public API - API Reference
------------------------------
### For limited input size / oneshot signing with `EVP_SIGNATURE`
``` C
int EVP_PKEY_sign_init_ex2(EVP_PKEY_CTX *pctx,
                           EVP_SIGNATURE *algo,
                           const OSSL_PARAM params[]);
```
### For signing a stream with `EVP_SIGNATURE`
``` C
int EVP_PKEY_sign_message_init(EVP_PKEY_CTX *pctx,
                               EVP_SIGNATURE *algo,
                               const OSSL_PARAM params[]);
int EVP_PKEY_sign_message_update(EVP_PKEY_CTX *ctx,
                                 const unsigned char *in,
                                 size_t inlen);
int EVP_PKEY_sign_message_final(EVP_PKEY_CTX *ctx,
                                unsigned char *sig,
                                size_t *siglen);

#define EVP_PKEY_sign_message(ctx,sig,siglen,tbs,tbslen) \
    EVP_PKEY_sign(ctx,sig,siglen,tbs,tbslen)
```
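To illustrate how the proposed streaming signing flow is intended to be used,
here is a minimal sketch. Note that the `EVP_PKEY_sign_message_*` functions
are the proposal above (not an existing API), and that the algorithm name
"RSA-SHA256" and the use of `EVP_PKEY_get_size()` to size the signature
buffer are illustrative assumptions.

``` C
#include <openssl/evp.h>
#include <openssl/crypto.h>

/* Sketch only: relies on the proposed streaming API above. */
static int sign_stream(OSSL_LIB_CTX *libctx, EVP_PKEY *pkey,
                       const unsigned char *msg, size_t msglen,
                       unsigned char **sig_out, size_t *siglen_out)
{
    int ret = 0;
    EVP_SIGNATURE *alg = EVP_SIGNATURE_fetch(libctx, "RSA-SHA256", NULL);
    EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_from_pkey(libctx, pkey, NULL);
    unsigned char *sig = NULL;
    size_t siglen = EVP_PKEY_get_size(pkey); /* assumed upper bound */

    if (alg == NULL || ctx == NULL
        || (sig = OPENSSL_malloc(siglen)) == NULL
        || EVP_PKEY_sign_message_init(ctx, alg, NULL) <= 0
        /* The message may be fed in arbitrarily sized chunks. */
        || EVP_PKEY_sign_message_update(ctx, msg, msglen) <= 0
        || EVP_PKEY_sign_message_final(ctx, sig, &siglen) <= 0)
        goto err;
    *sig_out = sig;
    *siglen_out = siglen;
    sig = NULL;
    ret = 1;
 err:
    OPENSSL_free(sig);
    EVP_PKEY_CTX_free(ctx);
    EVP_SIGNATURE_free(alg);
    return ret;
}
```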
### For limited input size / oneshot verification with `EVP_SIGNATURE`
``` C
int EVP_PKEY_verify_init_ex2(EVP_PKEY_CTX *pctx,
                             EVP_SIGNATURE *algo,
                             const OSSL_PARAM params[]);
```
### For verifying a stream with `EVP_SIGNATURE`
``` C
/* Initializers */
int EVP_PKEY_verify_message_init(EVP_PKEY_CTX *pctx,
                                 EVP_SIGNATURE *algo,
                                 const OSSL_PARAM params[]);

/* Signature setter */
int EVP_PKEY_CTX_set_signature(EVP_PKEY_CTX *pctx,
                               unsigned char *sig, size_t siglen,
                               size_t sigsize);

/* Update and final */
int EVP_PKEY_verify_message_update(EVP_PKEY_CTX *ctx,
                                   const unsigned char *in,
                                   size_t inlen);
int EVP_PKEY_verify_message_final(EVP_PKEY_CTX *ctx);

#define EVP_PKEY_verify_message(ctx,sig,siglen,tbs,tbslen) \
    EVP_PKEY_verify(ctx,sig,siglen,tbs,tbslen)
```
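A corresponding sketch for streaming verification, showing the signature being
supplied before the message data via the proposed `EVP_PKEY_CTX_set_signature()`
(again, these functions are the proposal above, and "RSA-SHA256" is an assumed
algorithm name):

``` C
#include <openssl/evp.h>

/* Sketch only: relies on the proposed streaming API above. */
static int verify_stream(OSSL_LIB_CTX *libctx, EVP_PKEY *pkey,
                         const unsigned char *msg, size_t msglen,
                         unsigned char *sig, size_t siglen)
{
    int ret = 0;
    EVP_SIGNATURE *alg = EVP_SIGNATURE_fetch(libctx, "RSA-SHA256", NULL);
    EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_from_pkey(libctx, pkey, NULL);

    if (alg == NULL || ctx == NULL
        || EVP_PKEY_verify_message_init(ctx, alg, NULL) <= 0
        /* Provide the signature up front; sigsize == siglen here. */
        || EVP_PKEY_CTX_set_signature(ctx, sig, siglen, siglen) <= 0
        || EVP_PKEY_verify_message_update(ctx, msg, msglen) <= 0
        || EVP_PKEY_verify_message_final(ctx) <= 0)
        goto err;
    ret = 1;
 err:
    EVP_PKEY_CTX_free(ctx);
    EVP_SIGNATURE_free(alg);
    return ret;
}
```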
### For verify_recover with `EVP_SIGNATURE`
Preliminary feedback suggests that a streaming interface is uninteresting for
verify_recover, so we only specify a new init function.
``` C
/* Initializers */
int EVP_PKEY_verify_recover_init_ex2(EVP_PKEY_CTX *pctx,
                                     EVP_SIGNATURE *algo,
                                     const OSSL_PARAM params[]);
```
Requirements on the providers
-----------------------------
Because it's not immediately obvious from a composite algorithm name what
key type ("RSA", "EC", ...) it requires / supports, at least in code, allowing
the use of an explicitly fetched implementation of a composite algorithm
requires that providers cooperate by declaring what key type is required /
supported by each algorithm.
For non-composite operation algorithms (like "RSA"), this is not necessary,
see the fallback strategies below.
This is to be implemented through an added provider function that would work
like keymgmt's `query_operation_name` function, but would return a NULL
terminated array of key type names instead:
``` C
# define OSSL_FUNC_SIGNATURE_QUERY_KEY_TYPE 26
OSSL_CORE_MAKE_FUNC(const char **, signature_query_key_type, (void))
```
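For illustration, a provider offering a composite "RSA-SHA256" signature might
service this query roughly as follows; apart from the proposed
`OSSL_FUNC_SIGNATURE_QUERY_KEY_TYPE` dispatch identifier, every name in this
sketch is hypothetical.

``` C
#include <openssl/core.h>
#include <openssl/core_dispatch.h>

/*
 * Hypothetical provider-side sketch; only the dispatch identifier comes from
 * the proposal above, everything else is illustrative.
 */
static const char *rsa_sha256_key_types[] = { "RSA", NULL };

static const char **rsa_sha256_query_key_type(void)
{
    return rsa_sha256_key_types;
}

static const OSSL_DISPATCH rsa_sha256_signature_functions[] = {
    /* ... the usual signature entry points would also appear here ... */
    { OSSL_FUNC_SIGNATURE_QUERY_KEY_TYPE,
      (void (*)(void))rsa_sha256_query_key_type },
    { 0, NULL }
};
```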
Furthermore, the distinction of intent, i.e. whether the input is expected
to be a pre-hashed digest or the original message, must be passed on to the
provider. Because we already distinguish that with function names in the
public API, we use the same mapping in the provider interface.
The already existing `signature_sign` and `signature_verify` remain as they
are, and can be combined with message init calls.
``` C
# define OSSL_FUNC_SIGNATURE_SIGN_MESSAGE_INIT 27
# define OSSL_FUNC_SIGNATURE_SIGN_MESSAGE_UPDATE 28
# define OSSL_FUNC_SIGNATURE_SIGN_MESSAGE_FINAL 29
OSSL_CORE_MAKE_FUNC(int, signature_sign_message_init,
                    (void *ctx, void *provkey, const OSSL_PARAM params[]))
OSSL_CORE_MAKE_FUNC(int, signature_sign_message_update,
                    (void *ctx, const unsigned char *in, size_t inlen))
OSSL_CORE_MAKE_FUNC(int, signature_sign_message_final,
                    (void *ctx, unsigned char *sig, size_t *siglen, size_t sigsize))
# define OSSL_FUNC_SIGNATURE_VERIFY_MESSAGE_INIT 30
# define OSSL_FUNC_SIGNATURE_VERIFY_MESSAGE_UPDATE 31
# define OSSL_FUNC_SIGNATURE_VERIFY_MESSAGE_FINAL 32
OSSL_CORE_MAKE_FUNC(int, signature_verify_message_init,
                    (void *ctx, void *provkey, const OSSL_PARAM params[]))
OSSL_CORE_MAKE_FUNC(int, signature_verify_message_update,
                    (void *ctx, const unsigned char *in, size_t inlen))
/*
 * signature_verify_message_final requires that the signature to be verified
 * against is specified via an OSSL_PARAM.
 */
OSSL_CORE_MAKE_FUNC(int, signature_verify_message_final, (void *ctx))
```
Fallback strategies
-------------------
Because existing providers haven't been updated to respond to the key type
query, some fallback strategies will be needed for the init calls that take
an explicitly fetched `EVP_SIGNATURE` argument (they can at least be used
for pre-hashed digest operations). To find out whether the `EVP_PKEY` key type
can be used with the explicitly fetched algorithm, the following fallback
strategies may be used.
- Check if the fetched operation name matches the key type (keymgmt name)
of the `EVP_PKEY` that's involved in the operation. For example, this
is useful when someone fetched the `EVP_SIGNATURE` "RSA". This requires
very little modification, as this is already done with the initializer
functions that fetch the algorithm implicitly.
- Check if the fetched algorithm name matches the name returned by the
keymgmt's `query_operation_name` function. For example, this is useful
when someone fetched the `EVP_SIGNATURE` "ECDSA", for which the key type
to use is "EC". This requires very little modification, as this is
already done with the initializer functions that fetch the algorithm
implicitly.
If none of these strategies work out, the operation initialization should
fail.
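As a rough illustration of the first fallback strategy, an application-level
approximation might look like the following; `EVP_PKEY_is_a()` and
`EVP_SIGNATURE_get0_name()` are existing APIs, but the check as a whole is a
simplification of what libcrypto would do internally.

``` C
#include <openssl/evp.h>

/*
 * Simplified sketch of fallback strategy 1: accept the fetched algorithm if
 * its name matches the key type, e.g. an "RSA" EVP_SIGNATURE with an RSA key.
 * Strategy 2 (e.g. "ECDSA" with an "EC" key) needs the keymgmt's
 * query_operation_name result and is performed inside libcrypto.
 */
static int signature_matches_key_type(const EVP_SIGNATURE *sig,
                                      const EVP_PKEY *pkey)
{
    return EVP_PKEY_is_a(pkey, EVP_SIGNATURE_get0_name(sig));
}
```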

View File

@@ -0,0 +1,149 @@
Handling Some MAX Defines in Future
===================================
Problem Definition
------------------
The public headers contain multiple `#define` macros that limit sizes or
numbers of various kinds. In some cases they are uncontroversial so they
do not require any changes or workarounds for these limits. Such values
are not discussed further in this document. This document discusses only
some particularly problematic values and proposes ways to change or
overcome these particular limits.
Individual Values
-----------------
### HMAC_MAX_MD_CBLOCK
**Current value:** 200
This is a deprecated define that is no longer used anywhere.
#### Proposed solution:
It should simply be removed in 4.0.
### EVP_MAX_MD_SIZE
**Current value:** 64
It is unlikely we will see hashes longer than 512 bits any time soon.
XOF functions do not count here, and the XOF output length is not, and should
not be, limited by this value.
It is widely used throughout the codebase and by 3rd party applications.
#### API calls depending on this:
HMAC() - no way to specify the length of the output buffer
X509_pubkey_digest() - no way to specify the length of the output buffer
EVP_Q_digest() - no way to specify the length of the output buffer
EVP_Digest() - no way to specify the length of the output buffer
EVP_DigestFinal_ex() - this is actually documented to allow larger output
if set explicitly by some application call that sets the output size
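For reference, the pattern that makes this macro hard to shrink or remove is
the fixed-size output buffer, as in this minimal sketch using the existing
`EVP_Digest()` call:

```C
#include <openssl/evp.h>

/* Typical caller pattern: a fixed buffer sized by EVP_MAX_MD_SIZE. */
static int digest_len(const unsigned char *data, size_t len)
{
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int mdlen = 0;

    /*
     * EVP_Digest() has no parameter for the size of 'md'; callers simply
     * assume EVP_MAX_MD_SIZE is enough, which breaks down for XOF output.
     */
    if (!EVP_Digest(data, len, md, &mdlen, EVP_sha256(), NULL))
        return -1;
    return (int)mdlen;
}
```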
#### Proposed solution:
Keep the value as is, do not deprecate. Review the codebase to check that it
isn't used in places where an XOF might be used with an arbitrary output length.
Perhaps introduce API calls replacing the calls above that would have
an input parameter indicating the size of the output buffer.
### EVP_MAX_KEY_LENGTH
**Current value:** 64
This is used throughout the code and depended on in a subtle way. It can
be assumed that 3rd party applications use this value to allocate fixed
buffers for keys. It is unlikely that symmetric ciphers with keys longer
than 512 bits will be used any time soon.
#### API calls depending on this:
EVP_KDF_CTX_get_kdf_size() returns EVP_MAX_KEY_LENGTH for KRB5KDF until
the cipher is set.
EVP_CIPHER_CTX_rand_key() - no way to specify the length of the output
buffer.
#### Proposed solution:
Keep the value as is, do not deprecate. Possibly review the codebase
to not depend on this value but there are many such cases. Avoid adding
further APIs depending on this value.
### EVP_MAX_IV_LENGTH
**Current value:** 16
This value is the most problematic one: if ciphers with a block size longer
than 128 bits appear, it could be useful to have IVs longer than just
16 bytes. There are many cases throughout the code
where fixed size arrays of EVP_MAX_IV_LENGTH are used.
#### API calls depending on this:
SSL_CTX_set_tlsext_ticket_key_evp_cb() explicitly uses EVP_MAX_IV_LENGTH
in the callback function signature.
SSL_CTX_set_tlsext_ticket_key_cb() is a deprecated version of the same
and has the same problem.
#### Proposed solution:
Deprecate the above API call and add a replacement which explicitly
passes the length of the _iv_ parameter.
Review and modify the codebase to not depend on and use EVP_MAX_IV_LENGTH.
Deprecate the EVP_MAX_IV_LENGTH macro. Avoid adding further APIs depending
on this value.
### EVP_MAX_BLOCK_LENGTH
**Current value:** 32
This is used in a few places in the code. It is possible that this is
used by 3rd party applications to allocate some fixed buffers for single
or multiple blocks. It is unlikely that symmetric ciphers with block sizes
longer than 256 bits will be used any time soon.
#### API calls depending on this:
None
#### Proposed solution:
Keep the value as is, do not deprecate. Possibly review the codebase
to not depend on this value but there are many such cases. Avoid adding
APIs depending on this value.
### EVP_MAX_AEAD_TAG_LENGTH
**Current value:** 16
This macro is used in a single place in hpke to allocate a fixed buffer.
The EVP_EncryptInit(3) manual page mentions the tag size being at most
16 bytes for EVP_CIPHER_CTX_ctrl(EVP_CTRL_AEAD_SET_TAG). The value is
problematic because for HMAC/KMAC based AEAD ciphers the tag length can be
larger than the block size. Even if we had block ciphers with a 256-bit
block size, the maximum tag length value of 16 would be limiting.
#### API calls depending on this:
None (except the documentation in
EVP_CIPHER_CTX_ctrl(EVP_CTRL_AEAD_SET/GET_TAG))
#### Proposed solution:
Review and modify the codebase to not depend on and use
EVP_MAX_AEAD_TAG_LENGTH.
Deprecate the EVP_MAX_AEAD_TAG_LENGTH macro. Avoid adding APIs depending
on this value.

View File

@@ -0,0 +1,77 @@
OSSL_PROVIDER_load_ex - activating providers with run-time configuration
========================================================================
Currently any provider run-time activation requires the presence of the
initialization parameters in the OpenSSL configuration file. Otherwise the
provider will be activated with some default settings that may or may not
work for a particular application. For real-world systems it may require
providing a specially designed OpenSSL configuration file and passing it somehow
(e.g. via environment), which has obvious drawbacks.
We need a way to initialize providers at the application level, according to
per-application parameters. This is necessary, for example, for the PKCS#11
provider (where different applications may use different devices with different
drivers) and will be useful for some other providers. In the case of Red Hat it
is also usable for the FIPS provider.
OpenSSL 3.2 introduces the API
```C
OSSL_PROVIDER *OSSL_PROVIDER_load_ex(OSSL_LIB_CTX *libctx, const char *name,
OSSL_PARAM params[]);
```
intended to configure the provider at load time.
It accepts only parameters of type `OSSL_PARAM_UTF8_STRING` because any
provider can be initialized from the config file, where the values are
represented as strings, and the provider's init function has to deal with that.
Explicitly configured parameters can differ from the parameters named in the
configuration file. Here are the current design decisions and some possible
future steps.
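For illustration, run-time activation with parameters might look like the
following sketch; the provider name "pkcs11" and the parameter key
"pkcs11-module-path" are purely illustrative assumptions.

```C
#include <openssl/provider.h>
#include <openssl/params.h>

/* Illustrative only: provider name and parameter key are assumptions. */
static OSSL_PROVIDER *load_pkcs11(OSSL_LIB_CTX *libctx, const char *module)
{
    OSSL_PARAM params[2];

    /* Only OSSL_PARAM_UTF8_STRING values are accepted, as noted above. */
    params[0] = OSSL_PARAM_construct_utf8_string("pkcs11-module-path",
                                                 (char *)module, 0);
    params[1] = OSSL_PARAM_construct_end();

    return OSSL_PROVIDER_load_ex(libctx, "pkcs11", params);
}
```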
Real-world cases
----------------
Many applications use PKCS#11 API with specific drivers. OpenSSL PKCS#11
provider <https://github.com/latchset/pkcs11-provider> also provides a set of
tweaks usable in particular situations. So there are several scenarios for which
the new API can be used:
1. Configure a provider in the config file, activate on demand
2. Load/activate a provider run-time with parameters
Current design
--------------
When the provider is already loaded and activated in the current library context,
the `OSSL_PROVIDER_load_ex` call simply returns the active provider and the
extra parameters are ignored.
In all other cases, the extra parameters provided by the `OSSL_PROVIDER_load_ex`
call are applied and the values from the config file are ignored.
Separate instances of the provider can be loaded in the separate library
contexts.
Several instances of the same provider can be loaded in the same context using
different section names, module names (e.g. via symlinks) and provider names.
But unless the provider supports some configuration options, the algorithms in
this case will have the same `provider` property and the result of fetching is
not deterministic. We strongly discourage this trick.
Changing the loaded provider configuration at runtime is not supported. If
this is necessary, the provider needs to be unloaded using `OSSL_PROVIDER_unload`
and reloaded using `OSSL_PROVIDER_load` or `OSSL_PROVIDER_load_ex`.
Possible future steps
---------------------
1. We should provide some API function for accessing the configuration parameters
of a particular provider. With it, the application would be able to combine
default values with the app-specific ones in a more or less intelligent way.
2. We probably should remove the `INFOPAIR` structure and use the `OSSL_PARAM`
one instead.

View File

@@ -0,0 +1,175 @@
Handling AlgorithmIdentifier and its parameters with provider operations
========================================================================
Quick background
----------------
We currently only support passing the AlgorithmIdentifier (`X509_ALGOR`)
parameter field to symmetric cipher provider implementations. We currently
only support getting the full AlgorithmIdentifier (`X509_ALGOR`) from signature
provider implementations.
We do support passing them to legacy implementations of other types of
operation algorithms as well, but it's done in a way that can't be supported
with providers, because it involves sharing specific structures between
libcrypto and the backend implementation.
For a longer background and explanation, see
[Background / tl;dr](#background-tldr) at the end of this design.
Establish OSSL_PARAM keys that any algorithms may become aware of
-----------------------------------------------------------------
We already have known parameter keys:
- "algor_id_param", also known as the macro `OSSL_CIPHER_PARAM_ALGORITHM_ID_PARAMS`.
This is currently only specified for `EVP_CIPHER`, in support of
`EVP_CIPHER_param_to_asn1()` and `EVP_CIPHER_asn1_to_param()`
- "algorithm-id", also known as the macro `OSSL_SIGNATURE_PARAM_ALGORITHM_ID`.
This design proposes:
1. Adding a parameter key "algorithm-id-params", to replace "algor_id_param",
and deprecate the latter.
2. Making both "algorithm-id" and "algorithm-id-params" generically available,
rather than only tied to `EVP_SIGNATURE` ("algorithm-id") or `EVP_CIPHER`
("algor_id_param").
This way, these parameters can be used in the exact same manner with other
operations, with the value of the AlgorithmIdentifier as well as its
parameters as octet strings, to be used and interpreted by applications and
provider implementations alike in whatever way they see fit.
Applications can choose to add these in an `OSSL_PARAM` array, to be passed
with the multitude of initialization functions that take such an array, or
using specific operation `OSSL_PARAM` setters and getters (such as
`EVP_PKEY_CTX_set_params`), or using other available convenience functions
(see below).
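For example, passing DER-encoded AlgorithmIdentifier parameters through the
generic `OSSL_PARAM` route might look like this sketch; "algorithm-id-params"
is the key proposed above, and the surrounding function is illustrative.

``` C
#include <openssl/evp.h>
#include <openssl/params.h>

/*
 * Illustrative sketch: 'der' / 'der_len' hold the DER encoding of the
 * AlgorithmIdentifier.parameters field, obtained elsewhere.
 */
static int pass_algor_params(EVP_PKEY_CTX *pctx,
                             unsigned char *der, size_t der_len)
{
    OSSL_PARAM params[2];

    params[0] = OSSL_PARAM_construct_octet_string("algorithm-id-params",
                                                  der, der_len);
    params[1] = OSSL_PARAM_construct_end();

    return EVP_PKEY_CTX_set_params(pctx, params);
}
```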
These parameters will have to be documented in the following files:
- `doc/man7/provider-asym_cipher.pod`
- `doc/man7/provider-cipher.pod`
- `doc/man7/provider-digest.pod`
- `doc/man7/provider-kdf.pod`
- `doc/man7/provider-kem.pod`
- `doc/man7/provider-keyexch.pod`
- `doc/man7/provider-mac.pod`
- `doc/man7/provider-signature.pod`
That should cover all algorithms that are, or should be possible to fetch by
AlgorithmIdentifier.algorithm, and for which there's potentially a relevant
AlgorithmIdentifier.parameters field.
We may arguably want to consider `doc/man7/provider-keymgmt.pod` too, but
an AlgorithmIdentifier that's attached directly to a key is usually part of
a PrivateKeyInfo or SubjectPublicKeyInfo structure, and those are handled by
encoders and decoders as they see fit, and there's no tangible reason why
that would have to change.
Public convenience API
----------------------
For convenience, the following set of functions would be added to pass the
AlgorithmIdentifier parameter data to diverse operations, or to retrieve
such parameter data from them.
``` C
/*
 * These two would essentially be aliases for EVP_CIPHER_param_to_asn1()
 * and EVP_CIPHER_asn1_to_param().
 */
EVP_CIPHER_CTX_set_algor_params(EVP_CIPHER_CTX *ctx, const X509_ALGOR *alg);
EVP_CIPHER_CTX_get_algor_params(EVP_CIPHER_CTX *ctx, X509_ALGOR *alg);
EVP_CIPHER_CTX_get_algor(EVP_CIPHER_CTX *ctx, X509_ALGOR **alg);

EVP_MD_CTX_set_algor_params(EVP_MD_CTX *ctx, const X509_ALGOR *alg);
EVP_MD_CTX_get_algor_params(EVP_MD_CTX *ctx, X509_ALGOR *alg);
EVP_MD_CTX_get_algor(EVP_MD_CTX *ctx, X509_ALGOR **alg);

EVP_MAC_CTX_set_algor_params(EVP_MAC_CTX *ctx, const X509_ALGOR *alg);
EVP_MAC_CTX_get_algor_params(EVP_MAC_CTX *ctx, X509_ALGOR *alg);
EVP_MAC_CTX_get_algor(EVP_MAC_CTX *ctx, X509_ALGOR **alg);

EVP_KDF_CTX_set_algor_params(EVP_KDF_CTX *ctx, const X509_ALGOR *alg);
EVP_KDF_CTX_get_algor_params(EVP_KDF_CTX *ctx, X509_ALGOR *alg);
EVP_KDF_CTX_get_algor(EVP_KDF_CTX *ctx, X509_ALGOR **alg);

EVP_PKEY_CTX_set_algor_params(EVP_PKEY_CTX *ctx, const X509_ALGOR *alg);
EVP_PKEY_CTX_get_algor_params(EVP_PKEY_CTX *ctx, X509_ALGOR *alg);
EVP_PKEY_CTX_get_algor(EVP_PKEY_CTX *ctx, X509_ALGOR **alg);
```
Note that all might not need to be added immediately, depending on if they
are considered useful or not. For future proofing, however, they should
probably all be added.
Requirements on the providers
-----------------------------
Providers that implement ciphers or any operation that uses asymmetric keys
will have to implement support for passing AlgorithmIdentifier parameter
data, and will have to process that data in whatever manner that's necessary
to meet the standards for that operation.
Fallback strategies
-------------------
There are no possible fallback strategies, which is fine, considering that
current provider functionality doesn't support passing AlgorithmIdentifier
parameter data at all (except for `EVP_CIPHER`), and therefore does not work
at all when such parameter data needs to be passed.
-----
-----
Background / tl;dr
------------------
### AlgorithmIdentifier parameter and how it's used
OpenSSL has historically done a few tricks to not have to pass
AlgorithmIdentifier parameter data to the backend implementations of
cryptographic operations:
- In some cases, they were passed as part of the lower level key structure
(for example, the `RSA` structure can also carry RSA-PSS parameters).
- In the `EVP_CIPHER` case, there is functionality to pass the parameter
data specifically.
- For asymmetric key operations, PKCS#7 and CMS support was added as
`EVP_PKEY` ctrls.
With providers, some of that support was retained, but not others. Most
crucially, the `EVP_PKEY` ctrls for PKCS#7 and CMS were not retained,
because the way they were implemented violated the principle that provider
implementations *MUST NOT* share complex OpenSSL specific structures with
libcrypto.
### Usage examples
Quite a lot of the available examples today revolve around CMS, with a
number of RFCs that specify what parameters should be passed with certain
operations / algorithms. This list is not exhaustive; the reader is
encouraged to research further usages.
- [DSA](https://www.rfc-editor.org/rfc/rfc3370#section-3.1) signatures
typically have the domain parameters *p*, *q* and *g*.
- [RC2 key wrap](https://www.rfc-editor.org/rfc/rfc3370#section-4.3.2)
- [PBKDF2](https://www.rfc-editor.org/rfc/rfc3370#section-4.4.1)
- [3DES-CBC](https://www.rfc-editor.org/rfc/rfc3370#section-5.1)
- [RC2-CBC](https://www.rfc-editor.org/rfc/rfc3370#section-5.2)
- [GOST 28147-89](https://www.rfc-editor.org/rfc/rfc4490.html#section-5.1)
- [RSA-OAEP](https://www.rfc-editor.org/rfc/rfc8017#appendix-A.2.1)
- [RSA-PSS](https://www.rfc-editor.org/rfc/rfc8017#appendix-A.2.3)
- [XOR-MD5](https://www.rfc-editor.org/rfc/rfc6210.html) is experimental,
but it does demonstrate the possibility of a parametrized hash algorithm.
Some of it can be claimed to already have support in OpenSSL. However, this
is with old libcrypto code that has special knowledge of the algorithms that
are involved.

View File

@@ -0,0 +1,54 @@
Congestion control API design
=============================
We use an abstract interface for the QUIC congestion controller to facilitate
use of pluggable QUIC congestion controllers in the future. The interface is
based on interfaces suggested by RFC 9002 and MSQUIC's congestion control APIs.
`OSSL_CC_METHOD` provides a vtable of function pointers to congestion controller
methods. `OSSL_CC_DATA` is an opaque type representing a congestion controller
instance.
For details on the API, see the comments in `include/internal/quic_cc.h`.
Congestion controllers are not thread safe; the caller is responsible for
synchronisation.
Congestion controllers may vary their state with respect to time. This is
facilitated via the `get_wakeup_deadline` method and the `now` argument to the
`new` method, which provides access to a clock. While no current congestion
controller makes use of this facility, it can be used by future congestion
controllers to implement packet pacing.
Congestion controllers may expose arbitrary configuration parameters via the
`set_input_params` method. Equally, congestion controllers may expose diagnostic
outputs via the `bind_diagnostics` and `unbind_diagnostics` methods. The
configuration parameters and diagnostics supported may be specific to the
congestion controller method, although there are some well known ones intended
to be common to all congestion controllers.
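As a purely illustrative sketch of the shape such a vtable takes (the member
names, types and signatures here are approximations and not a copy of the real
header; `include/internal/quic_cc.h` remains authoritative):

```c
#include <stdint.h>

/* Illustrative approximation only; see include/internal/quic_cc.h. */
typedef struct ossl_cc_data_st OSSL_CC_DATA;   /* opaque instance */
typedef uint64_t OSSL_CC_TIME;                 /* stand-in for the clock type */

typedef struct ossl_cc_method_st {
    /* Lifecycle; 'now' injects the clock dependency described above. */
    OSSL_CC_DATA *(*new)(OSSL_CC_TIME (*now)(void *arg), void *now_arg);
    void          (*free)(OSSL_CC_DATA *cc);

    /* Configuration parameters and diagnostic outputs. */
    int (*set_input_params)(OSSL_CC_DATA *cc, const void *params);
    int (*bind_diagnostics)(OSSL_CC_DATA *cc, void *params);
    int (*unbind_diagnostics)(OSSL_CC_DATA *cc, void *params);

    /* Time-varying state, e.g. to support packet pacing in the future. */
    OSSL_CC_TIME (*get_wakeup_deadline)(OSSL_CC_DATA *cc);

    /* Event callbacks driven by the TX path and the ACK manager. */
    uint64_t (*get_tx_allowance)(OSSL_CC_DATA *cc);
    void     (*on_data_sent)(OSSL_CC_DATA *cc, uint64_t num_bytes);
    void     (*on_data_acked)(OSSL_CC_DATA *cc, uint64_t num_bytes);
    void     (*on_data_lost)(OSSL_CC_DATA *cc, uint64_t num_bytes);
} OSSL_CC_METHOD;
```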
Currently, the only dependency injected to a congestion controller is access to
a clock. In the future it is likely that access at least to the statistics
manager will be provided. Excessive futureproofing of the congestion controller
interface has been avoided as this is currently an internal API for which no API
stability guarantees are required; for example, no currently implemented
congestion control algorithm requires access to the statistics manager, but such
access can readily be added later as needed.
QUIC congestion control state is per-path, per-connection. Currently we support
only a single path per connection, so there is one congestion control instance
per connection. This may change in future.
While the congestion control API is roughly based around the arrangement of
functions as described by the congestion control pseudocode in RFC 9002, there
are some deliberate changes in order to obtain cleaner separation between the
loss detection and congestion control functions. Where a literal adoption of the RFC
9002 pseudocode would require a congestion controller to access the ACK
manager's internal state directly, the interface between the two has been
changed to avoid this. This involves some small amounts of functionality which
RFC 9002 considers part of the congestion controller being part of the ACK
manager in our implementation. See the comments in `include/internal/quic_cc.h`
and `ssl/quic/quic_ackm.c` for more information.
The congestion control API may be revised to allow pluggable congestion
controllers via a provider-based interface in the future.

View File

@@ -0,0 +1,287 @@
QUIC Connection ID Cache
========================
The connection ID cache is responsible for managing connection IDs, both local
and remote.
Remote Connection IDs
---------------------
For remote connection IDs, we need to be able to:
* add new IDs per connection;
* pick a non-retired ID from those associated with a connection; and
* select a connection ID by sequence number and retire that and all older IDs.
The cache will be implemented as a double ended queue as part of the
QUIC_CONNECTION object. The queue will be sorted by sequence number
and must maintain a count of the number of connection IDs present.
There is no requirement to maintain a global mapping since remote IDs
are only used when sending packets, not receiving them.
In MVP, a many-to-one mapping of connection IDs to a Connection object
is required; refer to the third paragraph of [5.1].
When picking a non-retired connection ID for MVP, the youngest available will
be chosen.
Local Connection IDs
--------------------
For local connection IDs, we need to be able to:
* generate a new connection ID and associate it with a connection;
* query if a connection ID is present;
* for a server, map a connection ID to a QUIC_CONNECTION;
* drop all connection IDs associated with a QUIC_CONNECTION;
* select a connection ID by sequence number and retire that and all older IDs
and
* select a connection ID by sequence number and drop that and all older IDs.
All connection IDs issued by our stack must be the same length because
short form packets include the connection ID but no length byte. Random
connection IDs of this length will be allocated. Note that no additional
information will be contained in the connection ID.
There will be one global set of local connection IDs, `QUIC_ROUTE_TABLE`,
which is shared by all connections over all SSL_CTX objects. This is
used to dispatch incoming datagrams to their correct destination and
will be implemented as a dictionary.
### Notes
* For MVP, it would be sufficient to only use a zero length connection ID.
* For MVP, a connection ID to QUIC_CONNECTION mapping need not be implemented.
* Post MVP, funnelling all received packets through a single socket is
likely to be a bottleneck.
An alternative would be receiving from <host address, source port> pairs.
* For MVP, the local connection ID cache need only have one element.
I.e. there is no requirement to implement any form of lookup.
Routes
------
A pair of connection IDs identifies a route between the two ends of the
communication. A route contains the connection IDs of both ends of the
connection and the common sequence number. We need to be able to:
* select a connection ID by sequence number and retire that and all older IDs
and
* select a connection ID by sequence number and drop that and all older IDs.
It is likely that operations on local and remote connection IDs can be
subsumed by the route functionality.
ID Retirement
-------------
Connection IDs are retired by either a [NEW_CONNECTION_ID] or
a [RETIRE_CONNECTION_ID] frame and this is acknowledged by a
RETIRE_CONNECTION_ID or a NEW_CONNECTION_ID frame respectively.
When a retirement frame is received, we can immediately _remove_ the
IDs covered from our cache and then send back an acknowledgement of
the retirement.
If we want to retire connection IDs, we send a retirement frame and mark the
IDs covered in our cache as _retired_. This means that we cannot send
using any of these IDs but can still receive using them. Once our peer
acknowledges the retirement, we can _remove_ the IDs.
It is possible to receive out of order packets **after** receiving a
retirement notification. It's unclear what to do with these; however,
dropping them seems reasonable. The alternative would be to maintain
the route in a _deletable_ state until all packets in flight at the time
of retirement have been acked.
API
---
QUIC connection IDs are defined in #18949 but some extra functions
are available:
```c
/* QUIC connection ID representation. */
#define QUIC_MAX_CONN_ID_LEN 20
typedef struct quic_conn_id_st {
    unsigned char id_len;
    unsigned char id[QUIC_MAX_CONN_ID_LEN];
#if 0
    /* likely required later, although this might not be the ideal location */
    unsigned char reset_token[16]; /* stateless reset token is per conn ID */
#endif
} QUIC_CONN_ID;

static ossl_unused ossl_inline int ossl_quic_conn_id_eq(const QUIC_CONN_ID *a,
                                                        const QUIC_CONN_ID *b);

/* New functions */
int ossl_quic_conn_id_set(QUIC_CONN_ID *cid, unsigned char *id,
                          unsigned int id_len);
int ossl_quic_conn_id_generate(QUIC_CONN_ID *cid);
```
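A small sketch of how the proposed helpers above might be used when learning a
peer's connection ID and generating our own (error handling trimmed; these
functions are part of the proposal, not an existing API):

```c
/* Sketch only; relies on the proposed helpers declared above. */
static int adopt_ids(QUIC_CONN_ID *local, QUIC_CONN_ID *remote,
                     unsigned char *peer_id, unsigned int peer_id_len)
{
    /* Random local ID of our stack's fixed length. */
    if (!ossl_quic_conn_id_generate(local))
        return 0;
    /* Record the ID the peer asked us to use for it. */
    if (!ossl_quic_conn_id_set(remote, peer_id, peer_id_len))
        return 0;
    return 1;
}
```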
### Remote Connection ID APIs
```c
typedef struct quic_remote_conn_id_cache_st QUIC_REMOTE_CONN_ID_CACHE;
/*
 * Allocate and free a remote connection ID cache
 */
QUIC_REMOTE_CONN_ID_CACHE *ossl_quic_remote_conn_id_cache_new(
    size_t id_limit /* [active_connection_id_limit] */
);
void ossl_quic_remote_conn_id_cache_free(QUIC_REMOTE_CONN_ID_CACHE *cache);

/*
 * Add a remote connection ID to the cache
 */
int ossl_quic_remote_conn_id_cache_add(QUIC_REMOTE_CONN_ID_CACHE *cache,
                                       const QUIC_CONNECTION *conn,
                                       const unsigned char *conn_id,
                                       size_t conn_id_len,
                                       uint64_t seq_no);

/*
 * Query a remote connection for a connection ID.
 * Each connection can have multiple connection IDs associated with different
 * routes. This function returns one of these in a non-specified manner.
 */
int ossl_quic_remote_conn_id_cache_get0_conn_id(
    const QUIC_REMOTE_CONN_ID_CACHE *cache,
    const QUIC_CONNECTION *conn, QUIC_CONN_ID **cid);

/*
 * Retire remote connection IDs up to and including the one determined by the
 * sequence number.
 */
int ossl_quic_remote_conn_id_cache_retire(
    QUIC_REMOTE_CONN_ID_CACHE *cache, uint64_t seq_no);

/*
 * Remove remote connection IDs up to and including the one determined by the
 * sequence number.
 */
int ossl_quic_remote_conn_id_cache_remove(
    QUIC_REMOTE_CONN_ID_CACHE *cache, uint64_t seq_no);
```
### Local Connection ID APIs
```c
typedef struct quic_local_conn_id_cache_st QUIC_LOCAL_CONN_ID_CACHE;
/*
 * Allocate and free a local connection ID cache
 */
QUIC_LOCAL_CONN_ID_CACHE *ossl_quic_local_conn_id_cache_new(void);
void ossl_quic_local_conn_id_cache_free(QUIC_LOCAL_CONN_ID_CACHE *cache);

/*
 * Generate a new random local connection ID and associate it with a connection.
 * For MVP this could just be a zero length ID.
 */
int ossl_quic_local_conn_id_cache_new_conn_id(QUIC_LOCAL_CONN_ID_CACHE *cache,
                                              QUIC_CONNECTION *conn,
                                              QUIC_CONN_ID **cid);

/*
 * Remove a local connection and all associated cached IDs
 */
int ossl_quic_local_conn_id_cache_remove_conn(QUIC_LOCAL_CONN_ID_CACHE *cache,
                                              const QUIC_CONNECTION *conn);

/*
 * Lookup a local connection by ID.
 * Returns the connection or NULL if absent.
 */
QUIC_CONNECTION *ossl_quic_local_conn_id_cache_get0_conn(
    const QUIC_LOCAL_CONN_ID_CACHE *cache,
    const unsigned char *conn_id, size_t conn_id_len);

/*
 * Retire local connection IDs up to and including the one specified by the
 * sequence number.
 */
int ossl_quic_local_conn_id_cache_retire(
    QUIC_LOCAL_CONN_ID_CACHE *cache, uint64_t from_seq_no);

/*
 * Remove local connection IDs up to and including the one specified by the
 * sequence number.
 */
int ossl_quic_local_conn_id_cache_remove(
    QUIC_LOCAL_CONN_ID_CACHE *cache, uint64_t from_seq_no);
```
### Routes
Additional status and source information is also included.
```c
typedef struct quic_route_st QUIC_ROUTE;
typedef struct quic_route_table QUIC_ROUTE_TABLE;
struct quic_route_st {
    QUIC_CONNECTION *conn;
    QUIC_CONN_ID local;
    QUIC_CONN_ID remote;
    uint64_t seq_no;          /* Sequence number for both ends */
    unsigned int retired : 1; /* Connection ID has been retired */
#if 0
    /* Later will require */
    BIO_ADDR remote_address;  /* remote source address */
#endif
};
QUIC_ROUTE_TABLE *ossl_quic_route_table_new(void);
void ossl_quic_route_table_free(QUIC_ROUTE_TABLE *routes);
```
### Add route to route table
```c
int ossl_route_table_add_route(QUIC_ROUTE_TABLE *cache,
                               QUIC_ROUTE *route);
```
### Route query
```c
/*
 * Query a route table entry by either local or remote ID
 */
QUIC_ROUTE *ossl_route_table_get0_route_from_local(
    const QUIC_ROUTE_TABLE *cache,
    const unsigned char *conn_id, size_t conn_id_len);
QUIC_ROUTE *ossl_route_table_get0_route_from_remote(
    const QUIC_ROUTE_TABLE *cache,
    const unsigned char *conn_id, size_t conn_id_len);
```
### Route retirement
```c
/*
 * Retire by sequence number up to and including the one specified.
 */
int ossl_quic_route_table_retire(QUIC_ROUTE_TABLE *routes,
                                 QUIC_CONNECTION *conn,
                                 uint64_t seq_no);

/*
 * Delete by sequence number up to and including the one specified.
 */
int ossl_quic_route_table_remove(QUIC_ROUTE_TABLE *routes,
                                 QUIC_CONNECTION *conn,
                                 uint64_t seq_no);
```
[5.1]: https://datatracker.ietf.org/doc/html/rfc9000#section-5.1
[active_connection_id_limit]: https://datatracker.ietf.org/doc/html/rfc9000#section-18.2
[NEW_CONNECTION_ID]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.15
[RETIRE_CONNECTION_ID]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.16
[retired]: https://datatracker.ietf.org/doc/html/rfc9000#section-5.1.2

View File

@@ -0,0 +1,637 @@
QUIC Connection State Machine
=============================
FSM Model
---------
QUIC client-side connection state can be broken down into five coarse phases of
a QUIC connection:
- The Idle substate (which is simply the state before we have started trying to
establish a connection);
- The Active state, which comprises two substates:
- The Establishing state, which comprises many different substates;
- The Open state;
- The Terminating state, which comprises several substates;
- The Terminated state, which is the terminal state.
There is monotonic progression through these phases.
These names have been deliberately chosen to use different terminology to common
QUIC terms such as 'handshake' to avoid confusion, as they are not the same
concepts. For example, the Establishing state uses Initial, Handshake and 1-RTT
packets.
This discussion is (currently) given from the client side perspective only.
State machine considerations only relevant to servers are not mentioned.
0-RTT is also not currently modelled in this analysis.
The synthesis of this FSM is not suggested by the QUIC RFCs but has been
discerned from the requirements imposed. This does not mean that the
implementation of this FSM as literally presented below is an optimal or
advisable implementation strategy, and a cursory examination of existing QUIC
implementations suggests that such an approach is not common. Moreover, excess
attention should not be given to the Open state, as 1-RTT application
communication can occur even while still in the Establishing state (for example, when
the handshake has been completed but not yet confirmed).
However, the state machine described herein is helpful as an aid to
understanding and broadly captures the logic which our implementation will
embody. The design of the actual implementation is discussed further below.
The above states and their substates are defined as follows:
- The Establishing state involves the use of Initial and Handshake
packets. It is terminated when the handshake is confirmed.
Handshake confirmation is not the same as handshake completion.
Handshake confirmation occurs on the client when it receives
a `HANDSHAKE_DONE` frame (which occurs in a 1-RTT packet, thus
1-RTT packets are also invoked in the Establishing state).
On the server, handshake confirmation occurs as soon as
the handshake is considered completed (see RFC 9001 s. 4.1).
The Establishing state is subdivided into the following substates:
- Proactive Version Negotiation (optional): The client sends
a Version Negotiation packet with a reserved version number
to forcibly elicit a list of the server's supported versions.
This is not expected to be commonly used, as it adds a round trip.
If it is used, the time spent in this state is based on waiting for
the server to respond, and potentially retransmitting after a
timeout.
- Pre-Initial: The client has completed proactive version negotiation
(if it performed it), but has not yet sent any encrypted packet. This
substate is included for exposition; no time will generally be spent in it
and there is immediate transmission of the first encrypted packet and
transition to Initial Exchange A.
- Initial Exchange A: The client has sent at least one Initial
packet to the server attempting to initiate a connection.
The client is waiting for a server response, which might
be:
- a Version Negotiation packet (leading to the Reactive Version
Negotiation state);
- a Retry packet (leading to Initial Exchange B); or
- an Initial packet (leading to the Initial Exchange Confirmed state).
- Reactive Version Negotiation: The server has rejected the client's
proposed version. If proactive version negotiation was used, this
can be considered an error. Otherwise, we return to the Pre-Initial
state and proceed as though proactive version negotiation was
performed using the information in the version negotiation packet.
- Initial Exchange B: The client has been asked to perform a Retry.
It sends at least one Initial packet to the server attempting to
initiate a connection. Every Initial packet contains the quoted Retry
Token. Any data sent in `CRYPTO` frames in Initial Exchange A must be
retransmitted, but PNs MUST NOT be reset. Note that this is still
considered part of the same connection, and QUIC Transport Parameters are
later used to cryptographically bind the established connection state to
the original DCIDs used as part of the Retry process. A server is not
allowed to respond to a Retry-triggered Initial exchange with another
Retry, and if it does we ignore it, which is the major distinction of this
state from Initial Exchange A.
The client is waiting for a server response, which might be:
- a Version Negotiation packet (invalid, ignored);
- a Retry packet (invalid, ignored);
- an Initial packet (leading to the Initial Exchange Continued
state);
- Initial Exchange Continued: The client has sent at least one
Initial packet to the server and received at least one valid Initial packet
from the server. There is no longer any possibility of a Retry (any such
packet is ignored) and communications may continue via Initial packets for
an arbitrarily long period until the handshake layer indicates the
Handshake EL is ready.
The client is waiting for server packets, until one of those packets
causes the handshake layer (whether it is TLS 1.3 or some other
hypothetical handshake layer) to emit keys for the Handshake EL.
This will generally occur due to incoming Initial packets containing
crypto stream segments (in the form of `CRYPTO` frames) which deliver
handshake layer protocol messages to the handshake layer in use.
- Handshake: The Handshake EL is now available to the client.
Either client or server may send the first Handshake packet.
The client is waiting to receive a Handshake packet from the server.
- Handshake Continued: The client has received and successfully
decrypted at least one Handshake packet. The client now discards
the Initial EL. Communications via the handshake EL may continue for
an arbitrary period of time.
The client is waiting to receive more Handshake packets from the
server to advance the handshake layer and cause it to transition
to the Handshake Completed state.
- Handshake Completed: The handshake layer has indicated that it
considers the handshake completed. For TLS 1.3, this means both
parties have sent and received (and verified) TLS 1.3 Finished
messages. The handshake layer must emit keys for the 1-RTT EL
at this time.
Though the handshake is not yet confirmed, the client can begin
sending 1-RTT packets.
The QUIC Transport Parameters sent by the peer are now authenticated.
(Though the peer's QUIC Transport Parameters may have been received
earlier in the handshake process, they are only considered
authenticated at this point.)
The client transitions to Handshake Confirmed once either
- it receives a `HANDSHAKE_DONE` frame in a 1-RTT packet, or
- it receives acknowledgement of any 1-RTT packet it sent.
Though this discussion only covers the client state machine, it is worth
noting that on the server, the handshake is considered confirmed as soon as
it is considered completed.
- Handshake Confirmed: The client has received confirmation from
the server that the handshake is confirmed.
The principal effect of moving to this state is that the Handshake
EL is discarded. Key Update is also now permitted for the first
time.
The Establishing state is now done and there is immediate transition
to the Open state.
- The Open state is the steady state of the connection. It is a single state.
Application stream data is exchanged freely. Only 1-RTT packets are used. The
Initial, Handshake (and 0-RTT) ELs have been discarded, transport parameters
have been exchanged, and the handshake has been confirmed.
The client transitions to
- the Terminating — Closing state if the local application initiates an
immediate close (a `CONNECTION_CLOSE` frame is sent);
- the Terminating — Draining state if the remote peer initiates
an immediate close (i.e., a `CONNECTION_CLOSE` frame is received);
- the Terminated state if the idle timeout expires; a `CONNECTION_CLOSE`
frame is NOT sent;
- the Terminated state if the peer triggers a stateless reset; a
`CONNECTION_CLOSE` frame is NOT sent.
- The Terminating state is used when closing the connection.
This may occur due to an application request or a transport-level
protocol error.
Key updates may not be initiated in the Terminating state.
This state is divided into two substates:
- The Closing state, used for a locally initiated immediate close. In
this state, a packet containing a `CONNECTION_CLOSE` frame is
transmitted again in response to any packets received. This ensures
that a `CONNECTION_CLOSE` frame is received by the peer even if the
initially transmitted `CONNECTION_CLOSE` frame was lost. Note that
these `CONNECTION_CLOSE` frames are not governed by QUIC's normal loss
detection mechanisms; this is a bespoke mechanism unique to this
state, which exists solely to ensure delivery of the `CONNECTION_CLOSE`
frame.
The endpoint progresses to the Terminated state after a timeout
interval, which should not be less than three times the PTO interval.
It is also possible for the endpoint to transition to the Draining
state instead, if it receives a `CONNECTION_CLOSE` frame prior
to the timeout expiring. This indicates that the peer is also
closing.
- The Draining state, used for a peer initiated immediate close.
The local endpoint may not send any packets of any kind in this
state. It may optionally send one `CONNECTION_CLOSE` frame immediately
prior to entering this state.
The endpoint progresses to the Terminated state after a timeout
interval, which should not be less than three times the PTO interval.
- The Terminated state is the terminal state of a connection.
Regardless of how a connection ends (local or peer-initiated immediate close,
idle timeout, stateless reset), a connection always ultimately ends up in this
state. There is no longer any requirement to send or receive any packet. No
timer events related to the connection will ever need to fire again. This is a
totally quiescent state. The state associated with the connection may now be
safely freed.
We express this state machine in more concrete form in the form of a table,
which makes the available transitions clear:
† Except where superseded by a more specific transition
ε means “where no other transition is applicable”.
Where an action is specified in the Transition/Action column but no new state,
no state change occurs.
<table>
<tr><th>State</th><th>Action On Entry/Exit</th><th>Event</th><th>Transition/Action</th></tr>
<tr>
<td rowspan="2"><tt>IDLE</tt></td>
<td rowspan="2"></td>
<td><tt>APP:CONNECT</tt></td>
<td><tt>ACTIVE.ESTABLISHING.PROACTIVE_VER_NEG</tt> (if used), else
<tt>ACTIVE.ESTABLISHING.PRE_INITIAL</tt></td>
</tr>
<tr>
<td><tt>APP:CLOSE</tt></td>
<td><tt>TERMINATED</tt></td>
</tr>
<tr>
<td rowspan="5"><tt>ACTIVE</tt></td>
<td rowspan="5"></td>
<td><tt>IDLE_TIMEOUT</tt></td>
<td><tt>TERMINATED</tt></td>
</tr>
<tr>
<td><tt>PROBE_TIMEOUT</tt>→ †</td>
<td><tt>SendProbeIfAnySentPktsUnacked()</tt></td>
</tr>
<tr>
<td><tt>APP:CLOSE</tt>→ †</td>
<td><tt>TERMINATING.CLOSING</tt></td>
</tr>
<tr>
<td><tt>RX:ANY[CONNECTION_CLOSE]</tt></td>
<td><tt>TERMINATING.DRAINING</tt></td>
</tr>
<tr>
<td><tt>RX:STATELESS_RESET</tt></td>
<td><tt>TERMINATED</tt></td>
</tr>
<tr>
<td rowspan="3"><tt>ACTIVE.ESTABLISHING.PROACTIVE_VER_NEG</tt></td>
<td rowspan="3"><tt>enter:SendReqVerNeg</tt></td>
<td><tt>RX:VER_NEG</tt></td>
<td><tt>ACTIVE.ESTABLISHING.PRE_INITIAL</tt></td>
</tr>
<tr>
<td><tt>PROBE_TIMEOUT</tt></td>
<td><tt>ACTIVE.ESTABLISHING.PROACTIVE_VER_NEG</tt> (retransmit)</td>
</tr>
<tr>
<td><tt>APP:CLOSE</tt></td>
<td><tt>TERMINATED</tt></td>
</tr>
<tr>
<td rowspan="1"><tt>ACTIVE.ESTABLISHING.PRE_INITIAL</tt></td>
<td rowspan="1"></td>
<td>—ε→</td>
<td><tt>ACTIVE.ESTABLISHING.INITIAL_EXCHANGE_A</tt></td>
</tr>
<tr>
<td rowspan="4"><tt>ACTIVE.ESTABLISHING.INITIAL_EXCHANGE_A</tt></td>
<td rowspan="4"><tt>enter:SendPackets()</tt> (First Initial)</td>
<td><tt>RX:RETRY</tt></td>
<td><tt>ACTIVE.ESTABLISHING.INITIAL_EXCHANGE_B</tt></td>
</tr>
<tr>
<td><tt>RX:INITIAL</tt></td>
<td><tt>ACTIVE.ESTABLISHING.INITIAL_EXCHANGE_CONTINUED</tt></td>
</tr>
<tr>
<td><tt>RX:VER_NEG</tt></td>
<td><tt>ACTIVE.ESTABLISHING.REACTIVE_VER_NEG</tt></td>
</tr>
<tr>
<td><tt>CAN_SEND</tt></td>
<td><tt>SendPackets()</tt></td>
</tr>
<tr>
<td rowspan="1"><tt>ACTIVE.ESTABLISHING.REACTIVE_VER_NEG</tt></td>
<td rowspan="1"></td>
<td>—ε→</td>
<td><tt>ACTIVE.ESTABLISHING.PRE_INITIAL</tt></td>
</tr>
<tr>
<td rowspan="3"><tt>ACTIVE.ESTABLISHING.INITIAL_EXCHANGE_B</tt></td>
<td rowspan="3"><tt>enter:SendPackets()</tt><br/>
(First Initial, with token)<br/>
(*All further Initial packets contain the token)<br/>(*PN is not reset)</td>
<td><tt>RX:INITIAL</tt></td>
<td><tt>ACTIVE.ESTABLISHING.INITIAL_EXCHANGE_CONTINUED</tt></td>
</tr>
<tr>
<td><tt>PROBE_TIMEOUT</tt></td>
<td>TODO: Tail loss probe for initial packets?</td>
</tr>
<tr>
<td><tt>CAN_SEND</tt></td>
<td><tt>SendPackets()</tt></td>
</tr>
<tr>
<td rowspan="2"><tt>ACTIVE.ESTABLISHING.INITIAL_EXCHANGE_CONTINUED</tt></td>
<td rowspan="2"><tt>enter:SendPackets()</tt></td>
<td><tt>RX:INITIAL</tt></td>
<td>(packet processed, no change)</td>
</tr>
<tr>
<td><tt>TLS:HAVE_EL(HANDSHAKE)</tt></td>
<td><tt>ACTIVE.ESTABLISHING.HANDSHAKE</tt></td>
</tr>
<tr>
<td rowspan="3"><tt>ACTIVE.ESTABLISHING.HANDSHAKE</tt></td>
<td rowspan="3"><tt>enter:ProvisionEL(Handshake)</tt><br/>
<tt>enter:SendPackets()</tt> (First Handshake packet, if pending)</td>
<td><tt>RX:HANDSHAKE</tt></td>
<td><tt>ACTIVE.ESTABLISHING.HANDSHAKE_CONTINUED</tt></td>
</tr>
<tr>
<td><tt>RX:INITIAL</tt></td>
<td>(packet processed if EL is not dropped)</td>
</tr>
<tr>
<td><tt>CAN_SEND</tt></td>
<td><tt>SendPackets()</tt></td>
</tr>
<tr>
<td rowspan="3"><tt>ACTIVE.ESTABLISHING.HANDSHAKE_CONTINUED</tt></td>
<td rowspan="3"><tt>enter:DropEL(Initial)</tt><br/><tt>enter:SendPackets()</tt></td>
<td><tt>RX:HANDSHAKE</tt></td>
<td>(packet processed, no change)</td>
</tr>
<tr>
<td><tt>TLS:HANDSHAKE_COMPLETE</tt></td>
<td><tt>ACTIVE.ESTABLISHING.HANDSHAKE_COMPLETED</tt></td>
</tr>
<tr>
<td><tt>CAN_SEND</tt></td>
<td><tt>SendPackets()</tt></td>
</tr>
<tr>
<td rowspan="3"><tt>ACTIVE.ESTABLISHING.HANDSHAKE_COMPLETED</tt></td>
<td rowspan="3"><tt>enter:ProvisionEL(1RTT)</tt><br/><tt>enter:HandshakeComplete()</tt><br/><tt>enter[server]:Send(HANDSHAKE_DONE)</tt><br/><tt>enter:SendPackets()</tt></td>
<td><tt>RX:1RTT[HANDSHAKE_DONE]</tt></td>
<td><tt>ACTIVE.ESTABLISHING.HANDSHAKE_CONFIRMED</tt></td>
</tr>
<tr>
<td><tt>RX:1RTT</tt></td>
<td>(packet processed, no change)</td>
</tr>
<tr>
<td><tt>CAN_SEND</tt></td>
<td><tt>SendPackets()</tt></td>
</tr>
<tr>
<td rowspan="1"><tt>ACTIVE.ESTABLISHING.HANDSHAKE_CONFIRMED</tt></td>
<td rowspan="1"><tt>enter:DiscardEL(Handshake)</tt><br/><tt>enter:Permit1RTTKeyUpdate()</tt></td>
<td>—ε→</td>
<td><tt>ACTIVE.OPEN</tt></td>
</tr>
<tr>
<td rowspan="2"><tt>ACTIVE.OPEN</tt></td>
<td rowspan="2"></td>
<td><tt>RX:1RTT</tt></td>
<td>(packet processed, no change)</td>
</tr>
<tr>
<td><tt>CAN_SEND</tt></td>
<td><tt>SendPackets()</tt></td>
</tr>
<tr>
<td rowspan="2"><tt>TERMINATING</tt></td>
<td rowspan="2"></td>
<td><tt>TERMINATING_TIMEOUT</tt></td>
<td><tt>TERMINATED</tt></td>
</tr>
<tr>
<td><tt>RX:STATELESS_RESET</tt></td>
<td><tt>TERMINATED</tt></td>
</tr>
<tr>
<td rowspan="3"><tt>TERMINATING.CLOSING</tt></td>
<td rowspan="3"><tt>enter:QueueConnectionCloseFrame()</tt><br/><tt>enter:SendPackets()</tt></td>
<td><tt>RX:ANY[CONNECTION_CLOSE]</tt></td>
<td><tt>TERMINATING.DRAINING</tt></td>
</tr>
<tr>
<td><tt>RX:ANY</tt></td>
<td><tt>QueueConnectionCloseFrame()</tt><br/><tt>SendPackets()</tt></td>
</tr>
<tr>
<td><tt>CAN_SEND</tt></td>
<td><tt>SendPackets()</tt></td>
</tr>
<tr>
<td rowspan="1"><tt>TERMINATING.DRAINING</tt></td>
<td rowspan="1"></td>
<td></td>
<td></td>
</tr>
<tr>
<td rowspan="1"><tt>TERMINATED</tt></td>
<td rowspan="1"></td>
<td>[terminal state]</td>
<td></td>
</tr>
</table>
Notes on various events:
- `CAN_SEND` is raised when transmission of packets has been unblocked after previously
having been blocked. There are broadly two reasons why transmission of packets
may not have been possible:
- Due to OS buffers or network-side write BIOs being full;
- Due to limits imposed by the chosen congestion controller.
`CAN_SEND` is expected to be raised due to a timeout prescribed by the
congestion controller or in response to poll(2) or similar notifications, as
abstracted by the BIO system and how the application has chosen to notify
libssl of network I/O readiness.
It is generally implied that processing of a packet as mentioned above
may cause new packets to be queued and sent, so this is not listed
explicitly in the Transition column except for the `CAN_SEND` event.
- `PROBE_TIMEOUT` is raised after the PTO interval and stimulates generation
of a tail loss probe.
- `IDLE_TIMEOUT` is raised after the connection idle timeout expires.
Note that the loss detector only makes a determination of loss due to an
incoming ACK frame; if a peer becomes totally unresponsive, this is the only
mechanism available to terminate the connection (other than the local
application choosing to close it).
- `RX:STATELESS_RESET` indicates receipt of a stateless reset, but note
that it is not guaranteed that we are able to recognise a stateless reset
that we receive, thus this event may not always be raised.
- `RX:ANY[CONNECTION_CLOSE]` denotes a `CONNECTION_CLOSE` frame received
in any non-discarded EL.
- Any circumstance where `RX:RETRY` or `RX:VER_NEG` are not explicitly
listed means that these packets are not allowed and will be ignored.
- Protocol errors, etc. can be handled identically to `APP:CLOSE` events
as indicated in the above table if locally initiated. Protocol errors
signalled by the peer are handled as `RX:ANY[CONNECTION_CLOSE]` events.
Notes on various actions:
- `SendPackets()` sends packets if we have anything pending for transmission,
and only to the extent we are able to with regards to congestion control and
available BIO buffer space, etc.
Non-FSM Model
-------------
Common QUIC implementations appear to prefer modelling connection state as a set
of flags rather than as a FSM. It can be observed above that there is a fair
degree of commonality between many states. This has been modelled above using
hierarchical states with default handlers for common events. [The state machine
can be viewed as a diagram here (large
image).](./images/connection-state-machine.png)
We transpose the above table to sort by events rather than states, to discern
the following list of events:
- `APP:CONNECT`: Supported in `IDLE` state only.
- `RX:VER_NEG`: Handled in `ESTABLISHING.PROACTIVE_VER_NEG` and
`ESTABLISHING.INITIAL_EXCHANGE_A` only, otherwise ignored.
- `RX:RETRY`: Handled in `ESTABLISHING.INITIAL_EXCHANGE_A` only.
- `PROBE_TIMEOUT`: Applicable to `OPEN` and all (non-ε) `ESTABLISHING`
substates. Handled via `SendProbeIfAnySentPktsUnacked()` except in the
`ESTABLISHING.PROACTIVE_VER_NEG` state, which reenters that state to trigger
retransmission of a Version Negotiation packet.
- `IDLE_TIMEOUT`: Applicable to `OPEN` and all (non-ε) `ESTABLISHING` substates.
Action: immediate transition to `TERMINATED` (no `CONNECTION_CLOSE` frame
is sent).
- `TERMINATING_TIMEOUT`: Timeout used by the `TERMINATING` state only.
- `CAN_SEND`: Applicable to `OPEN` and all (non-ε) `ESTABLISHING`
substates, as well as `TERMINATING.CLOSING`.
Action: `SendPackets()`.
- `RX:STATELESS_RESET`: Applicable to all `ESTABLISHING` and `OPEN` states and
the `TERMINATING.CLOSING` substate.
Always causes a direct transition to `TERMINATED`.
- `APP:CLOSE`: Supported in `IDLE`, `ESTABLISHING` and `OPEN` states.
(Reasonably a no-op in `TERMINATING` or `TERMINATED`.)
- `RX:ANY[CONNECTION_CLOSE]`: Supported in all `ESTABLISHING` and `OPEN` states,
as well as in `TERMINATING.CLOSING`. Transition to `TERMINATING.DRAINING`.
- `RX:INITIAL`, `RX:HANDSHAKE`, `RX:1RTT`: Our willingness to process these is
modelled on whether we have an EL provisioned or discarded, etc.; thus
this does not require modelling as additional state.
Once we successfully decrypt a Handshake packet, we stop processing Initial
packets and discard the Initial EL, as required by RFC.
- `TLS:HAVE_EL(HANDSHAKE)`: Emitted by the handshake layer when Handshake EL
keys are available.
- `TLS:HANDSHAKE_COMPLETE`: Emitted by the handshake layer when the handshake
is complete. Implies connection has been authenticated. Also implies 1-RTT EL
keys are available. Whether the handshake is complete, and also whether it is
confirmed, is reasonably implemented as a flag.
From here we can discern state dependence of different events:
- `APP:CONNECT`: Need to know if application has invoked this event yet,
as if so it is invalid.
State: Boolean: Connection initiated?
- `RX:VER_NEG`: Only valid if we have not yet received any successfully
processed encrypted packet from the server.
- `RX:RETRY`: Only valid if we have sent an Initial packet to the server,
have not yet received any successfully processed encrypted packet
from the server, and have not previously been asked to do a Retry as
part of this connection (and the Retry Integrity Token validates).
Action: Note that we are now acting on a retry and start again.
Do not reset packet numbers. The original CIDs used for the first
connection attempt must be noted for later authentication in
the QUIC Transport Parameters.
State: Boolean: Retry requested?
State: CID: Original SCID, DCID.
- `PROBE_TIMEOUT`: If we have sent at least one encrypted packet yet,
we can handle this via a standard probe-sending mechanism. Otherwise, we are
still in Proactive Version Negotiation and should retransmit the Version
Negotiation packet we sent.
State: Boolean: Doing proactive version negotiation?
- `IDLE_TIMEOUT`: Only applicable in `ACTIVE` states.
We are `ACTIVE` if a connection has been initiated (see `APP:CONNECT`) and
we are not in `TERMINATING` or `TERMINATED`.
- `TERMINATING_TIMEOUT`: Timer used in `TERMINATING` state only.
- `CAN_SEND`: Stimulates transmission of packets.
- `RX:STATELESS_RESET`: Always handled unless we are in `TERMINATED`.
- `APP:CLOSE`: Usually causes a transition to `TERMINATING.CLOSING`.
- `RX:INITIAL`, `RX:HANDSHAKE`, `RX:1RTT`: Willingness to process
these is implicit in whether we currently have the applicable EL
provisioned.
- `TLS:HAVE_EL(HANDSHAKE)`: Handled by the handshake layer
and forwarded to the record layer to provision keys.
- `TLS:HANDSHAKE_COMPLETE`: Should be noted as a flag and notification
provided to various components.
We choose to model the CSM's state as follows:
- The `IDLE`, `ACTIVE`, `TERMINATING.CLOSING`, `TERMINATING.DRAINING` and
`TERMINATED` states are modelled explicitly as a state variable. However,
the substates of `ACTIVE` are not explicitly modelled.
- The following flags are modelled:
- Retry Requested? (+ Original SCID, DCID if so)
- Have Sent Any Packet?
- Are we currently doing proactive version negotiation?
- Have Successfully Received Any Encrypted Packet?
- Handshake Completed?
- Handshake Confirmed?
- The following timers are modelled:
- PTO Timeout
- Terminating Timeout
- Idle Timeout
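A purely illustrative rendering of this model as a C structure (the field
names, the clock representation and the reuse of `QUIC_CONN_ID` from the
connection ID design are assumptions, not the actual implementation):

```c
#include <stdint.h>

/* Illustrative only; not the actual implementation. */
typedef enum {
    CSM_STATE_IDLE,
    CSM_STATE_ACTIVE,                /* substates of ACTIVE are not modelled */
    CSM_STATE_TERMINATING_CLOSING,
    CSM_STATE_TERMINATING_DRAINING,
    CSM_STATE_TERMINATED
} CSM_STATE;

typedef struct csm_st {
    CSM_STATE state;

    /* Flags */
    unsigned int retry_requested         : 1;
    unsigned int have_sent_any_pkt       : 1;
    unsigned int doing_proactive_ver_neg : 1;
    unsigned int have_rx_any_enc_pkt     : 1;
    unsigned int handshake_completed     : 1;
    unsigned int handshake_confirmed     : 1;

    /* Original SCID/DCID, noted when a Retry was requested. */
    QUIC_CONN_ID retry_orig_scid, retry_orig_dcid;

    /* Timers, expressed as absolute deadlines (clock type unspecified). */
    uint64_t pto_deadline;
    uint64_t terminating_deadline;
    uint64_t idle_deadline;
} CSM;
```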
Implementation Plan
-------------------
- Phase 1: “Steady state only” model which jumps to the `ACTIVE.OPEN`
state with a hardcoded key.
Test plan: Currently uncertain, to be determined.
- Phase 2: “Dummy handshake” model which uses a one-byte protocol
as the handshake layer as a stand-in for TLS 1.3, e.g. a 0x01 byte “represents”
a ClientHello, a 0x02 byte “represents” a ServerHello. Keys are fixed.
Test plan: If feasible, an existing QUIC implementation will be modified to
use this protocol and E2E testing will be performed against it. (This
can probably be done quickly but an alternate plan may be required if
the effort needed turns out to be excessive.)
- Phase 3: Final model with TLS 1.3 handshake layer fully plumbed in.
Test plan: Testing against real world implementations.

View File

@@ -0,0 +1,143 @@
QUIC: Debugging and Tracing
===========================
When debugging the QUIC stack it is extremely useful to have protocol traces
available. There are two approaches you can use to obtain this data:
- qlog
- Packet capture
Neither of these approaches is strictly superior to the other and both have pros
and cons:
- In general, qlog is aimed at storing only information relevant to the
QUIC protocol itself without storing bulk data. This includes both transmitted
and received packets but also information about the internal state of a QUIC
implementation which is not directly observable from the network.
- By comparison, packet capture stores all packets in their entirety.
Packet captures are thus larger, but they also provide more complete
information in general and do not have information removed. On the other hand,
because they work from a network viewpoint, they cannot provide direct
information on the internal state of a QUIC implementation. For example,
packet capture cannot directly tell you when an implementation deems a packet
lost.
Both of these approaches have good GUI visualisation tools available for viewing
the logged data.
To summarise:
- qlog:
- Pro: Smaller files
- Con: May leave out data assumed to be irrelevant
- Pro: Information on internal states and decisions made by a QUIC
implementation
- Pro: No need to obtain a keylog
- PCAP:
- Pro: Complete capture
- Con: No direct information on internal states of a QUIC implementation
- Con: Need to obtain a keylog
Using qlog
----------
To enable qlog you must:
- build using the `enable-unstable-qlog` build-time configuration option;
- set the environment variable `QLOGDIR` to a directory where qlog log files
are to be written;
- set the environment variable `OSSL_QFILTER` to a filter specifying the events
you want to be written (set `OSSL_QFILTER='*'` for all events).
Any process using the libssl QUIC implementation will then automatically write
qlog files in the JSON-SEQ format to the specified directory. The files have the
naming convention recommended by the specification: `{ODCID}_{ROLE}.sqlog`,
where `{ODCID}` is the initial (original) DCID of a connection and `{ROLE}` is
`client` or `server`.
The log files can be loaded into [qvis](https://qvis.quictools.info/). The [qvis
website](https://qvis.quictools.info/) also has some sample qlog files which you
can load at the click of a button, which enables you to see what kind of
information qvis can offer you.
Note that since the qlog specification is not finalised and still evolving,
the format of the output may change, as may the method of configuring this
logging support.
Currently this implementation tracks qvis's qlog support, as that is the
main target use case at this time.
Note that since qlog emphasises logging only data which is relevant to a QUIC
protocol implementation, for the purposes of reducing the volume of logging
data, application data is generally not logged. (However, this is not a
guarantee and must not be relied upon from a privacy perspective.)
[See here for more details on the design of the qlog facility.](qlog.md)
Using PCAP
----------
To use PCAP you can use any standard packet capture tool, such as Wireshark or
tcpdump (e.g. `tcpdump -U -i "$IFACE" -w "$FILE" 'udp port 1234'`).
**Using Wireshark.** Once you have obtained a packet capture as a standard
`pcap` or `pcapng` file, you can load it into Wireshark, which has excellent
QUIC protocol decoding support.
**Activating the decoder.** If you are using QUIC on a port not known to be
commonly used for QUIC, you may need to tell Wireshark to try and decode a flow
as QUIC. To do this, right click on the Protocol column and select “Decode
As...”. Click on “(none)” under the Current column and select QUIC.
**Keylogs.** Since QUIC is an encrypted protocol, Wireshark cannot provide much
information without access to the encryption keys used for the connection
(though it is able to decrypt Initial packets).
In order to provide this information you need to provide Wireshark with a keylog
file. This is a log file containing encryption keys for the connection which is
written directly by a QUIC implementation for debugging purposes. The purpose of
such a file is to enable a TLS or QUIC session to be decrypted for development
purposes in a lab environment. It should go without saying that the export of a
keylog file should never be used in a production environment.
For the OpenSSL QUIC implementation, OpenSSL must be instructed to save a keylog
file using the SSL_CTX_set_keylog_callback(3) API call. If the application you
are using does not provide a way to enable this functionality, it will need to
be recompiled, as OpenSSL does not provide a way to enable keylogging directly.
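As an illustration, an application might install a callback along the following
lines. This is a sketch only: the use of an `SSLKEYLOGFILE` environment variable
is an application-side convention assumed here, not something OpenSSL provides
itself.
```c
#include <stdio.h>
#include <stdlib.h>
#include <openssl/ssl.h>

/* Append each keylog line produced by libssl to the file named by the
 * (application-chosen) SSLKEYLOGFILE environment variable. */
static void keylog_cb(const SSL *ssl, const char *line)
{
    const char *path = getenv("SSLKEYLOGFILE");
    FILE *f;

    if (path == NULL)
        return;

    f = fopen(path, "a");
    if (f == NULL)
        return;

    fprintf(f, "%s\n", line);
    fclose(f);
}

/* Call once on the SSL_CTX used to create QUIC connections. */
static void enable_keylog(SSL_CTX *ctx)
{
    SSL_CTX_set_keylog_callback(ctx, keylog_cb);
}
```
The resulting file can then be supplied to Wireshark as described below.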
If you are using OpenSSL QUIC to talk to another QUIC implementation, you also
may be able to obtain a keylog from that other implementation. (It does not
matter from which side of the connection you obtain the keylog.)
Once you have a keylog file you can configure Wireshark to use it.
There are two ways to do this:
- **Manual configuration.** Select Edit →
Preferences and navigate to Protocols → TLS. Enter the path to the keylog file
  under “(Pre)-Master-Secret log filename”. Key information can be appended to
  this log continuously if desired. Press OK and Wireshark should
now be able to decrypt any TLS or QUIC session described by the log file.
- **Embedding.** Alternatively, you can embed a keylog file into a `.pcapng`
file directly, so that Wireshark can decrypt the packets automatically when
the packet capture file is opened. This avoids the need to have a centralised
key log file and ensures that the key log for a specific packet capture is
kept together with the captured packets. It is also highly useful if you want
to distribute a packet capture file publicly, for example for educational
purposes.
To embed a keylog, you can use the `editcap` command provided by Wireshark
after taking a packet capture (note that `tls` should be specified below
regardless of whether TLS or QUIC is being used):
```bash
$ editcap --inject-secrets tls,$PATH_TO_KEYLOG_FILE \
"$INPUT_FILENAME" "$OUTPUT_FILENAME"
```
This tool accepts `.pcap` or `.pcapng` input and will generate a `.pcapng`
output file.

View File

@@ -0,0 +1,47 @@
Packet Demuxer
==============
This is a QUIC specific module that parses headers of incoming packets and
decides what to do next.
Demuxer requirements for MVP
----------------------------
These are the requirements that were identified for MVP (a rough sketch of the
implied client-side handling follows the list):
- handling of multiple QUIC packets coalesced into a single UDP datagram
  must be supported
- the client must discard any packets that do not match an existing
  connection ID
- the client must discard any packets with a version different from the one
  initially selected
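The following is a rough, non-normative sketch of the client-side decision these
requirements imply; the `PKT_HDR` type and `client_demux_accept` function are
hypothetical and purely illustrative.
```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical parsed packet header; illustrative only. */
typedef struct pkt_hdr_st {
    uint32_t       version;    /* 0 for short-header (1-RTT) packets */
    const uint8_t *dcid;
    size_t         dcid_len;
} PKT_HDR;

/*
 * Decide whether one QUIC packet parsed out of a UDP datagram should be
 * routed to the existing connection or discarded. Coalesced packets are
 * handled by calling this once per packet parsed from the datagram.
 */
static bool client_demux_accept(const PKT_HDR *hdr,
                                const uint8_t *conn_cid, size_t conn_cid_len,
                                uint32_t chosen_version)
{
    /* Discard packets which do not match the existing connection ID.
     * (A fuller demuxer would consult a table of all known local CIDs.) */
    if (hdr->dcid_len != conn_cid_len
            || memcmp(hdr->dcid, conn_cid, conn_cid_len) != 0)
        return false;

    /* Discard long-header packets whose version differs from the one
     * initially selected for this connection. */
    if (hdr->version != 0 && hdr->version != chosen_version)
        return false;

    return true;
}
```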
Optional demuxer requirements
-----------------------------
These are optional features of the client side demuxer, not required for MVP
but otherwise desirable:
- optionally trigger sending stateless reset packets if a received packet
on the client is well-formed but does not belong to a known connection
Demuxer requirements for server
-------------------------------
Further requirements after MVP for server support:
- on the server side, packets can potentially create a new connection
- server-side packet handling for packets with unsupported versions:
  - trigger sending version negotiation packets if the server receives a packet
    with an unsupported version which is large enough to initiate a new
    connection; limit the number of such packets sent to the same destination
- discard smaller packets with unsupported versions
- packet handling on server for well-formed packets with supported versions
but with unknown connection IDs:
- if the packet is a well-formed Initial packet, trigger the creation of a
new connection
  - if the packet is a well-formed 0RTT packet, mark the packet to be
    buffered for a short period of time (as the Initial packet might arrive late)
- this is optional - enabled only if 0RTT support is enabled by the
application
- discard any other packet with unknown connection IDs
- optionally trigger sending stateless reset packets as above for client

View File

@@ -0,0 +1,487 @@
Datagram BIO API revisions for sendmmsg/recvmmsg
================================================
We need to evolve the API surface of BIO which is relevant to BIO_dgram (and the
eventual BIO_dgram_mem) to support APIs which allow multiple datagrams to be
sent or received simultaneously, such as sendmmsg(2)/recvmmsg(2).
The adopted design
------------------
### Design decisions
The adopted design makes the following design decisions:
- We use a sendmmsg/recvmmsg-like API. The alternative API was not considered
for adoption because it is an explicit goal that the adopted API be suitable
for concurrent use on the same BIO.
- We define our own structures rather than using the OS's `struct mmsghdr`.
The motivations for this are:
- It ensures portability between OSes and allows the API to be used
on OSes which do not support `sendmmsg` or `sendmsg`.
- It allows us to use structures in keeping with OpenSSL's existing
abstraction layers (e.g. `BIO_ADDR` rather than `struct sockaddr`).
- We do not have to expose functionality which we cannot guarantee
we can support on all platforms (for example, arbitrary control messages).
- It avoids the need to include OS headers in our own public headers,
which would pollute the environment of applications which include
our headers, potentially undesirably.
- For OSes which do not support `sendmmsg`, we emulate it using repeated
calls to `sendmsg`. For OSes which do not support `sendmsg`, we emulate it
using `sendto` to the extent feasible. This avoids the need for code consuming
these new APIs to define a fallback code path.
- We do not define any flags at this time, as the flags previously considered
for adoption cannot be supported on all platforms (Win32 does not have
`MSG_DONTWAIT`).
- We ensure the extensibility of our `BIO_MSG` structure in a way that preserves
ABI compatibility using a `stride` argument which callers must set to
`sizeof(BIO_MSG)`. Implementations can examine the stride field to determine
whether a given field is part of a `BIO_MSG`. This allows us to add optional
fields to `BIO_MSG` at a later time without breaking ABI. All new fields must
be added to the end of the structure.
- The BIO methods are designed to support stateless operation in which they
are simply calls to the equivalent system calls, where supported, without
changing BIO state. In particular, this means that things like retry flags are
not set or cleared by `BIO_sendmmsg` or `BIO_recvmmsg`.
The motivation for this is that these functions are intended to support
concurrent use on the same BIO. If they read or modify BIO state, they would
need to be synchronised with a lock, undermining performance on what (for
`BIO_dgram`) would otherwise be a straight system call.
- We do not support iovecs. The motivations for this are:
- Not all platforms can support iovecs (e.g. Windows).
- The only way we could emulate iovecs on platforms which don't support
them is by copying the data to be sent into a staging buffer. This would
defeat all of the advantages of iovecs and prevent us from meeting our
zero/single-copy requirements. Moreover, it would lead to extremely
surprising performance variations for consumers of the API.
- We do not believe iovecs are needed to meet our performance requirements
for QUIC. The reason for this is that aside from a minimal packet header,
all data in QUIC is encrypted, so all data sent via QUIC must pass through
an encrypt step anyway, meaning that all data sent will already be copied
and there is not going to be any issue depositing the ciphertext in a
staging buffer together with the frame header.
- Even if we did support iovecs, we would have to impose a limit
on the number of iovecs supported, because we translate from our own
structures (as discussed above) and also intend these functions to be
      stateless and not require locking. Therefore the OS-native iovec structures
would need to be allocated on the stack.
- Sometimes, an application may wish to learn the local interface address
associated with a receive operation or specify the local interface address to
be used for a send operation. We support this, but require this functionality
to be explicitly enabled before use.
The reason for this is that enabling this functionality generally requires
that the socket be reconfigured using `setsockopt` on most platforms. Doing
this on-demand would require state in the BIO to determine whether this
functionality is currently switched on, which would require otherwise
unnecessary locking, undermining performance in concurrent usage of this API
on a given BIO. By requiring this functionality to be enabled explicitly
before use, this allows this initialization to be done up front without
  performance cost. It also helps users of the API to understand that this
  functionality is not always available, and to detect in advance whether it is
  available.
### Design
The currently proposed design is as follows:
```c
typedef struct bio_msg_st {
void *data;
size_t data_len;
BIO_ADDR *peer, *local;
uint64_t flags;
} BIO_MSG;
#define BIO_UNPACK_ERRNO(e) /*...*/
#define BIO_IS_ERRNO(e) /*...*/
ossl_ssize_t BIO_sendmmsg(BIO *b, BIO_MSG *msg, size_t stride,
size_t num_msg, uint64_t flags);
ossl_ssize_t BIO_recvmmsg(BIO *b, BIO_MSG *msg, size_t stride,
size_t num_msg, uint64_t flags);
```
The API is used as follows:
- `msg` points to an array of `num_msg` `BIO_MSG` structures.
- Both functions have identical prototypes, and return the number of messages
processed in the array. If no messages were sent due to an error, `-1` is
returned. If an OS-level socket error occurs, a negative value `v` is
returned. The caller should determine that `v` is an OS-level socket error by
calling `BIO_IS_ERRNO(v)` and may obtain the OS-level socket error code by
calling `BIO_UNPACK_ERRNO(v)`.
- `stride` must be set to `sizeof(BIO_MSG)`.
- `data` points to the buffer of data to be sent or to be filled with received
data. `data_len` is the size of the buffer in bytes on call. If the
given message in the array is processed (i.e., if the return value
exceeds the index of that message in the array), `data_len` is updated
to the actual amount of data sent or received at return time.
- `flags` in the `BIO_MSG` structure provides per-message flags to
the `BIO_sendmmsg` or `BIO_recvmmsg` call. If the given message in the array
is processed, `flags` is written with zero or more result flags at return
time. The `flags` argument to the call itself provides for global flags
affecting all messages in the array. Currently, no per-message or global flags
are defined and all of these fields are set to zero on call and on return.
- `peer` and `local` are optional pointers to `BIO_ADDR` structures into
which the remote and local addresses are to be filled. If either of these
are NULL, the given addressing information is not requested. Local address
support may not be available in all circumstances, in which case processing of
the message fails. (This means that the function returns the number of
messages processed, or -1 if the message in question is the first message.)
Support for `local` must be explicitly enabled before use, otherwise
attempts to use it fail.
Local address support is enabled as follows:
```c
int BIO_dgram_set_local_addr_enable(BIO *b, int enable);
int BIO_dgram_get_local_addr_enable(BIO *b);
int BIO_dgram_get_local_addr_cap(BIO *b);
```
`BIO_dgram_get_local_addr_cap()` returns 1 if local address support is
available. It is then enabled using `BIO_dgram_set_local_addr_enable()`, which
fails if support is not available.
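To illustrate the intended calling convention, the following sketch sends two
datagrams to the same peer in a single call, using the prototypes given above
(these describe the design in this document and may differ from any finally
adopted API); error handling is abbreviated.
```c
#include <string.h>
#include <openssl/bio.h>

/* Illustrative only: send two datagrams to the same peer in one call. */
static int send_two(BIO *b, BIO_ADDR *peer,
                    void *buf1, size_t len1,
                    void *buf2, size_t len2)
{
    BIO_MSG msg[2];
    ossl_ssize_t r;

    memset(msg, 0, sizeof(msg));

    msg[0].data     = buf1;
    msg[0].data_len = len1;
    msg[0].peer     = peer;
    msg[0].local    = NULL;   /* local address support not requested */
    msg[0].flags    = 0;      /* no per-message flags currently defined */

    msg[1].data     = buf2;
    msg[1].data_len = len2;
    msg[1].peer     = peer;
    msg[1].local    = NULL;
    msg[1].flags    = 0;

    r = BIO_sendmmsg(b, msg, sizeof(BIO_MSG), 2, 0);
    if (r < 0) {
        if (BIO_IS_ERRNO(r)) {
            /* OS-level socket error; BIO_UNPACK_ERRNO(r) yields the errno. */
        }
        return 0;
    }

    /* r is the number of messages actually sent. */
    return (int)r;
}
```
Note that `stride` is simply `sizeof(BIO_MSG)`, which is what allows new optional
fields to be appended to `BIO_MSG` later without breaking ABI.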
Options which were considered
-----------------------------
Options for the API surface which were considered included:
### sendmmsg/recvmmsg-like API
This design was chosen to form the basis of the adopted design, which is
described above.
```c
int BIO_readm(BIO *b, BIO_mmsghdr *msgvec,
unsigned len, int flags, struct timespec *timeout);
int BIO_writem(BIO *b, BIO_mmsghdr *msgvec,
unsigned len, int flags, struct timespec *timeout);
```
We can either define `BIO_mmsghdr` as a typedef of `struct mmsghdr` or redefine
an equivalent structure. The former has the advantage that we can just pass the
structures through to the syscall without copying them.
Note that in `BIO_mem_dgram` we will have to process and therefore understand
the contents of `struct mmsghdr` ourselves. Therefore, initially we define a
subset of `struct mmsghdr` as being supported, specifically no control messages;
`msg_name` and `msg_iov` only.
The flags argument is defined by us. Initially we can support something like
`MSG_DONTWAIT` (say, `BIO_DONTWAIT`).
#### Implementation Questions
If we go with this, there are some issues that arise:
- Are `BIO_mmsghdr`, `BIO_msghdr` and `BIO_iovec` simple typedefs
for OS-provided structures, or our own independent structure
definitions?
- If we use OS-provided structures:
- We would need to include the OS headers which provide these
structures in our public API headers.
- If we choose to support these functions when OS support is not available
      (see discussion below), we would need to define our own structures in this
case (a “polyfill” approach).
- If we use our own structures:
- We would need to translate these structures during every call.
But we would need to have storage inside the BIO_dgram for *m* `struct
msghdr`, *m\*v* iovecs, etc. Since we want to support multithreaded use
these allocations probably will need to be on the stack, and therefore
must be limited.
Limiting *m* isn't a problem, because `sendmmsg` returns the number
of messages sent, so the existing semantics we are trying to match
lets us just send or receive fewer messages than we were asked to.
However, it does seem like we will need to limit *v*, the number of iovecs
per message. So what limit should we give to *v*, the number of iovecs? We
will need a fixed stack allocation of OS iovec structures and we can
allocate from this stack allocation as we iterate through the `BIO_msghdr`
we have been given. So in practice we could just only send messages
until we reach our iovec limit, and then return.
For example, suppose we allocate 64 iovecs internally:
```c
struct iovec vecs[64];
```
If the first message passed to a call to `BIO_writem` has 64 iovecs
attached to it, no further messages can be sent and `BIO_writem`
returns 1.
If three messages are sent, with 32, 32, and 1 iovecs respectively,
the first two messages are sent and `BIO_writem` returns 2.
So the only important thing we would need to document in this API
is the limit of iovecs on a single message; in other words, the
number of iovecs which must not be exceeded if a forward progress
guarantee is to be made. e.g. if we allocate 64 iovecs internally,
`BIO_writem` with a single message with 65 iovecs will never work
and this becomes part of the API contract.
Obviously these quantities of iovecs are unrealistically large.
iovecs are small, so we can afford to set the limit high enough
that it shouldn't cause any problems in practice. We can increase
the limit later without a breaking API change, but we cannot decrease
it later. So we might want to start with something small, like 8.
- We also need to decide what to do for OSes which don't support at least
`sendmsg`/`recvmsg`.
- Don't provide these functions and require all users of these functions to
have an alternate code path which doesn't rely on them?
- Not providing these functions on OSes that don't support
at least sendmsg/recvmsg is a simple solution but adds
complexity to code using BIO_dgram. (Though it does communicate
to code more realistic performance expectations since it
knows when these functions are actually available.)
- Provide these functions and emulate the functionality:
- However there is a question here as to how we implement
the iovec arguments on platforms without `sendmsg`/`recvmsg`. (We cannot
use `writev`/`readv` because we need peer address information.) Logically
implementing these would then have to be done by copying buffers around
internally before calling `sendto`/`recvfrom`, defeating the point of
iovecs and providing a performance profile which is surprising to code
using BIO_dgram.
- Another option could be a variable limit on the number of iovecs,
which can be queried from BIO_dgram. This would be a constant set
when libcrypto is compiled. It would be 1 for platforms not supporting
`sendmsg`/`recvmsg`. This again adds burdens on the code using
BIO_dgram, but it seems the only way to avoid the surprising performance
pitfall of buffer copying to emulate iovec support. There is a fair risk
of code being written which accidentally works on one platform but not
another, because the author didn't realise the iovec limit is 1 on some
platforms. Possibly we could have an “iovec limit” variable in the
BIO_dgram which is 1 by default, which can be increased by a call to a
function BIO_set_iovec_limit, but not beyond the fixed size discussed
above. It would return failure if not possible and this would give client
code a clear way to determine if its expectations are met.
### Alternate API
Could we use a simplified API? For example, could we have an API that returns
one datagram where BIO_dgram uses `readmmsg` internally and queues the returned
datagrams, thereby still avoiding extra syscalls but offering a simple API?
The problem here is we want to support “single-copy” (where the data is only
copied as it is decrypted). Thus BIO_dgram needs to know the final resting place
of encrypted data at the time it makes the `readmmsg` call.
One option would be to allow the user to set a callback on BIO_dgram it can use
to request a new buffer, then have an API which returns the buffer:
```c
int BIO_dgram_set_read_callback(BIO *b,
void *(*cb)(size_t len, void *arg),
void *arg);
int BIO_dgram_set_read_free_callback(BIO *b,
void (*cb)(void *buf,
size_t buf_len,
void *arg),
void *arg);
int BIO_read_dequeue(BIO *b, void **buf, size_t *buf_len);
```
The BIO_dgram calls the specified callback when it needs to generate internal
iovecs for its `readmmsg` call, and the received datagrams can then be popped by
the application and freed as it likes. (The read free callback above is only
used in rare circumstances, such as when calls to `BIO_read` and
`BIO_read_dequeue` are alternated, or when the BIO_dgram is destroyed prior to
all read buffers being dequeued; see below.) For convenience we could have an
extra call to allow a buffer to be pushed back into the BIO_dgram's internal
queue of unused read buffers, which avoids the need for the application to do
its own management of such recycled buffers:
```c
int BIO_dgram_push_read_buffer(BIO *b, void *buf, size_t buf_len);
```
On the write side, the application provides buffers and can get a callback when
they are freed. BIO_write_queue just queues for transmission, and the `sendmmsg`
call is made when calling `BIO_flush`. (TBD: whether it is reasonable to
overload the semantics of BIO_flush in this way.)
```c
int BIO_dgram_set_write_done_callback(BIO *b,
void (*cb)(const void *buf,
size_t buf_len,
int status,
void *arg),
void *arg);
int BIO_write_queue(BIO *b, const void *buf, size_t buf_len);
int BIO_flush(BIO *b);
```
The status argument to the write done callback will be 1 on success, some
negative value on failure, and some special negative value if the BIO_dgram is
being freed before the write could be completed.
For send/receive addresses, we import the `BIO_(set|get)_dgram_(origin|dest)`
APIs proposed in the sendmsg/recvmsg PR (#5257). `BIO_get_dgram_(origin|dest)`
should be called immediately after `BIO_read_dequeue` and
`BIO_set_dgram_(origin|dest)` should be called immediately before
`BIO_write_queue`.
This approach allows `BIO_dgram` to support myriad options via composition of
successive function calls in a “builder” style rather than via a single function
call with an excessive number of arguments or pointers to unwieldy ever-growing
argument structures, requiring constant revision of the central read/write
functions of the BIO API.
Note that since `BIO_set_dgram_(origin|dest)` sets data on outgoing packets and
`BIO_get_dgram_(origin|dest)` gets data on incoming packets, it doesn't follow
that these are accessing the same data (they are not setters and getters of
variables called "dgram origin" and "dgram destination", even though their
names make them look like setters and getters of the same variables). We probably want
to separate these as there is no need for a getter for outgoing packet
destination, for example, and by separating these we allow the possibility of
multithreaded use (one thread reads, one thread writes) in the future. Possibly
we should choose less confusing names for these functions. Maybe
`BIO_set_outgoing_dgram_(origin|dest)` and
`BIO_get_incoming_dgram_(origin|dest)`.
Pros of this approach:
- Application can generate one datagram at a time and still get the advantages
of sendmmsg/recvmmsg (fewer syscalls, etc.)
We probably want this for our own QUIC implementation built on top of this
anyway. Otherwise we will need another piece to do basically the same thing
and agglomerate multiple datagrams into a single BIO call. Unless we only
want use `sendmmsg` constructively in trivial cases (e.g. where we send two
datagrams from the same function immediately after one another... doesn't
seem like a common use case.)
- Flexible support for single-copy (zero-copy).
Cons of this approach:
- Very different way of doing reads/writes might be strange to existing
applications. *But* the primary consumer of this new API will be our own
QUIC implementation so probably not a big deal. We can always support
`BIO_read`/`BIO_write` as a less efficient fallback for existing third party
users of BIO_dgram.
#### Compatibility interop
Suppose the following sequence happens:
1. BIO_read (legacy call path)
2. BIO_read_dequeue (`recvmmsg` based call path with callback-allocated buffer)
3. BIO_read (legacy call path)
For (1) we have two options:
a. Use `recvmmsg` and add the received datagrams to an RX queue just as for the
`BIO_read_dequeue` path. We use an OpenSSL-provided default allocator
(`OPENSSL_malloc`) and flag these datagrams as needing to be freed by OpenSSL,
not the application.
When the application calls `BIO_read`, a copy is performed and the internal
buffer is freed.
b. Use `recvfrom` directly. This means we have a `recvmmsg` path and a
`recvfrom` path depending on what API is being used.
The disadvantage of (a) is that it yields an extra copy relative to what we have now,
whereas with (b) the buffer passed to `BIO_read` gets passed through to the
syscall and we do not have to copy anything.
Since we will probably need to support platforms without
`sendmmsg`/`recvmmsg` support anyway, (b) seems like the better option.
For (2) the new API is used. Since the previous call to BIO_read is essentially
“stateless” (it's just a simple call to `recvfrom`, and doesn't require mutation
of any internal BIO state other than maybe the last datagram source/destination
address fields), BIO_dgram can go ahead and start using the `recvmmsg` code
path. Since the RX queue will obviously be empty at this point, it is
initialised and filled using `recvmmsg`, then one datagram is popped from it.
For (3) we have a legacy `BIO_read` but we have several datagrams still in the
RX queue. In this case we do have to copy - we have no choice. However this only
happens in circumstances where a user of BIO_dgram alternates between old and
new APIs, which should be very unusual.
Subsequently for (3) we have to free the buffer using the free callback. This is
an unusual case where BIO_dgram is responsible for freeing read buffers and not
the application (the only other case being premature destruction, see below).
But since this seems a very strange API usage pattern, we may just want to fail
in this case.
Probably not worth supporting this. So we can have the following rule:
- After the first call to `BIO_read_dequeue` is made on a BIO_dgram, all
subsequent calls to ordinary `BIO_read` will fail.
Of course, all of the above applies analogously to the TX side.
#### BIO_dgram_pair
We will also implement from scratch a BIO_dgram_pair. This will be provided as a
BIO pair which provides identical semantics to the BIO_dgram above, both for the
legacy and zero-copy code paths.
#### Thread safety
It is a functional assumption of the above design that we would never want to
have more than one thread doing TX on the same BIO and never have more than one
thread doing RX on the same BIO.
If we did ever want to do this, multiple BIOs on the same FD is one possibility
(for the BIO_dgram case at least). But I don't believe there is any general
intention to support multithreaded use of a single BIO at this time (unless I am
mistaken), so this seems like it isn't an issue.
If we wanted to support multithreaded use of the same FD using the same BIO, we
would need to revisit the set-call-then-execute-call API approach above
(`BIO_(set|get)_dgram_(origin|dest)`) as this would pose a problem. But I mainly
mention this only for completeness. Our recent learnt lessons on cache
contention suggest that this probably wouldn't be a good idea anyway.
#### Other questions
BIO_dgram will call the allocation function to get buffers for `recvmmsg` to
fill. We might want to have a way to specify how many buffers it should offer to
`recvmmsg`, and thus how many buffers it allocates in advance.
#### Premature destruction
If BIO_dgram is freed before all datagrams are read, the read buffer free
callback is used to free any unreturned read buffers.

View File

@@ -0,0 +1,101 @@
Error handling in QUIC code
===========================
Current situation with TLS
--------------------------
The errors are put on the error stack (strictly speaking a queue, but “error
stack” is the term used throughout the code base) during the libssl API calls. In most
(if not all) cases they should appear there only if the API call returns an
error return value. The `SSL_get_error()` call depends on the stack being
clean before the API call to be properly able to determine if the API
call caused a library or system (I/O) error.
The error stacks are thread-local. Libssl API calls from separate threads
push errors to these separate error stacks. It is unusual to invoke libssl
APIs with the same SSL object from different threads, but even if it happens,
it is not a problem as applications are supposed to check for errors
immediately after the API call on the same thread. There is no such thing as
a thread-assisted mode of operation for TLS.
Constraints
-----------
We need to keep using the existing ERR API as doing otherwise would
complicate the existing applications and break our API compatibility promise.
Even the ERR_STATE structure is public, although deprecated, and thus its
structure and semantics cannot be changed.
The error stack access is not under a lock (because it is thread-local).
This complicates _moving errors between threads_.
Error stack entries contain allocated data; copying entries between threads
implies duplicating or losing that data.
Assumptions
-----------
This document assumes the actual error state of the QUIC connection (or stream
for stream level errors) is handled separately from the auxiliary error reason
entries on the error stack.
We can assume the internal assistance thread is well-behaved with regard
to the error stack.
We assume there are two types of errors that can be raised in the QUIC
library calls and in the subordinate libcrypto (and provider) calls. The first
type is an intermittent error that does not really affect the state of the
QUIC connection - for example EAGAIN returned on a syscall, or unavailability
of some algorithm where there are other algorithms to try. The second type
is a permanent error that affects the error state of the QUIC connection.
Operations on QUIC streams (SSL_write(), SSL_read()) can also trigger errors.
Depending on their effect, such errors are either permanent, if they cause the
QUIC connection to enter an error state, or, if they affect only the stream,
they are simply left on the error stack of the thread that called SSL_write()
or SSL_read() on the stream.
Design
------
Return value of SSL_get_error() on QUIC connections or streams does not
depend on the error stack contents.
Intermittent errors are handled within the library and cleared from the
error stack before returning to the user.
Permanent errors happening within the assist thread, within SSL_tick()
processing, or when calling SSL_read()/SSL_write() on a stream need to be
replicated for SSL_read()/SSL_write() calls on other streams.
Implementation
--------------
There is an error stack in QUIC_CHANNEL which serves as temporary storage
for errors happening in the internal assistance thread. When a permanent error
is detected the error stack entries are moved to this error stack in
QUIC_CHANNEL.
When returning to an application from a SSL_read()/SSL_write() call with
a permanent connection error, entries from the QUIC_CHANNEL error stack
are copied to the thread local error stack. They are always kept on
the QUIC_CHANNEL error stack as well for possible further calls from
an application. An additional error reason
SSL_R_QUIC_CONNECTION_TERMINATED is added to the stack.
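From an application's perspective, the result is that a failed call on any
stream can be diagnosed from the thread-local error stack in the usual way. A
minimal sketch using only the standard SSL/ERR APIs:
```c
#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

/* Sketch: diagnose a failed SSL_read() on a QUIC stream. On a permanent
 * connection error the error stack also carries the entries copied from
 * the QUIC_CHANNEL, plus SSL_R_QUIC_CONNECTION_TERMINATED. */
static void report_read_failure(SSL *stream, int ret)
{
    unsigned long e;
    char buf[256];

    if (SSL_get_error(stream, ret) != SSL_ERROR_SSL)
        return; /* e.g. SSL_ERROR_WANT_READ: not a permanent error */

    while ((e = ERR_get_error()) != 0) {
        ERR_error_string_n(e, buf, sizeof(buf));
        fprintf(stderr, "quic error: %s\n", buf);
    }
}
```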
SSL_tick() return value
-----------------------
The return value of SSL_tick() does not depend on whether there is
a permanent error on the connection. The only case when SSL_tick() may
return an error is when there was some fatal error during processing,
such as a memory allocation error, after which no further SSL_tick() calls
make any sense.
Multi-stream-multi-thread mode
------------------------------
There is nothing particular that needs to be handled specially for
multi-stream-multi-thread mode as the error stack entries are always
copied from the QUIC_CHANNEL after the failure. So if multiple threads
are calling SSL_read()/SSL_write() simultaneously they all get
the same error stack entries to report to the user.

View File

@@ -0,0 +1,311 @@
Glossary of QUIC Terms
======================
**ACKM:** ACK Manager. Responsible for tracking packets in flight and generating
notifications when they are lost, or when they are successfully delivered.
**Active Stream:** A stream which has data or control frames ready for
transmission. Active stream status is managed by the QSM.
**AEC:** Application error code. An error code provided by a local or remote
application to be signalled as an error code value by QUIC. See QUIC RFCs
(`STOP_SENDING`, `RESET_STREAM`, `CONNECTION_CLOSE`).
**APL:** API Personality Layer. The QUIC API Personality Layer lives in
`quic_impl.c` and implements the libssl API personality (`SSL_read`, etc.) in
terms of an underlying `QUIC_CHANNEL` object.
**Bidi:** Abbreviation of bidirectional, referring to a QUIC bidirectional
stream.
**CC:** Congestion controller. Estimates network channel capacity and imposes
limits on transmissions accordingly.
**CFQ:** Control Frame Queue. Considered part of the FIFM, this implements
the CFQ strategy for frame in flight management. For details, see FIFM
design document.
**Channel:** See `QUIC_CHANNEL`.
**CID:** Connection ID.
**CMPL:** The maximum number of bytes left to serialize another QUIC packet into
the same datagram as one or more previous packets. This is just the MDPL minus
the total size of all previous packets already serialized into the same
datagram.
**CMPPL:** The maximum number of payload bytes we can put in the payload of
another QUIC packet which is to be coalesced with one or more previous QUIC
packets and placed into the same datagram. Essentially, this is the room we have
left for another packet payload.
**CSM:** Connection state machine. Refers to some aspects of a QUIC channel. Not
implemented as an explicit state machine.
**DCID:** Destination Connection ID. Found in most QUIC packet headers.
**DEMUX:** The demuxer routes incoming packets to the correct connection QRX by
DCID.
**DGRAM:** (UDP) datagram.
**DISPATCH:** Refers to the QUIC-specific dispatch code in `ssl_lib.c`. This
dispatches calls to libssl public APIs to the APL.
**EL:** Encryption level. See RFC 9000.
**Engine:** See `QUIC_ENGINE`.
**FC:** Flow control. Comprises TXFC and RXFC.
**FIFD:** Frame-in-flight dispatcher. Ties together the CFQ and TXPIM to handle
frame tracking and retransmission on loss.
**FIFM:** Frame-in-flight manager. Tracks frames in flight until their
containing packets are deemed delivered or lost, so that frames can be
retransmitted if necessary. Comprises the CFQ, TXPIM and FIFD.
**GCR:** Generic Control Frame Retransmission. A strategy for regenerating lost
frames. Stores raw frame in a queue so that it can be retransmitted if lost. See
FIFM design document for details.
**Key epoch:** Non-negative number designating a generation of QUIC keys used to
encrypt or decrypt packets, starting at 0. This increases by 1 when a QUIC key
update occurs.
**Key Phase:** Key phase bit in QUIC packet headers. See RFC 9000.
**Keyslot**: A set of cryptographic state used to encrypt or decrypt QUIC
packets by a QRX or QTX. Due to the QUIC key update mechanism, multiple keyslots
may be maintained at a given time. See `quic_record_rx.h` for details.
**KP:** See Key Phase.
**KS:** See Keyslot.
**KU:** Key update. See also TXKU, RXKU.
**LCID:** Local CID. Refers to a CID which will be recognised as identifying a
connection if found in the DCID field of an incoming packet. See also RCID.
**LCIDM:** Local CID Manager. Tracks LCIDs which have been advertised to a peer.
See also RCIDM.
**Locally-initiated:** Refers to a QUIC stream which was initiated by the local
application rather than the remote peer.
**MDPL:** Maximum Datagram Payload Length. The maximum number of UDP payload
bytes we can put in a UDP packet. This is derived from the applicable PMTU. This
is also the maximum size of a single QUIC packet if we place only one packet in
a datagram. The MDPL may vary based on both local source IP and destination IP
due to different path MTUs.
**MinDPL:** In some cases we must ensure a datagram has a minimum size of a
certain number of bytes. This does not need to be accomplished with a single
packet, but we may need to add PADDING frames to the final packet added to a
datagram in this case.
**MinPL:** The minimum serialized packet length we are using while serializing a
given packet. May often be 0. Used to meet MinDPL requirements, and thus equal
to MinDPL minus the length of any packets we have already encoded into the
datagram.
**MinPPL:** The minimum number of bytes which must be placed into a packet
payload in order to meet the MinPL minimum size when the packet is encoded.
**MPL:** Maximum Packet Length. The maximum size of a fully encrypted and
serialized QUIC packet in bytes in some given context. Typically equal to the
MDPL and never greater than it.
**MPPL:** The maximum number of plaintext bytes we can put in the payload of a
QUIC packet. This is related to the MDPL by the size of the encoded header and
the size of any AEAD authentication tag which will be attached to the
ciphertext.
**MSMT:** Multi-stream multi-thread. Refers to a type of multi-stream QUIC usage
in which API calls can be made on different threads.
**MSST:** Multi-stream single-thread. Refers to a type of multi-stream QUIC
usage in which API calls must not be made concurrently.
**NCID:** New Connection ID. Refers to a QUIC `NEW_CONNECTION_ID` frame.
**Numbered CID:** Refers to a Connection ID which has a sequence number assigned
to it. All CIDs other than Initial ODCIDs and Retry ODCIDs have a sequence
number assigned. See also Unnumbered CID.
**ODCID:** Original Destination CID. This is the DCID found in the first Initial
packet sent by a client, and is used to generate the secrets for encrypting
Initial packets. It is only used temporarily.
**PN:** Packet number. Most QUIC packet types have a packet number (PN); see RFC
9000.
**Port:** See `QUIC_PORT`.
**PTO:** Probe timeout. See RFC 9000.
**QC:** See `QUIC_CONNECTION`.
**QCSO:** QUIC Connection SSL Object. This is an SSL object created using
`SSL_new` using a QUIC method.
**QCTX**: QUIC Context. This is a utility object defined within the QUIC APL
which helps to unwrap an SSL object pointer (a QCSO or QSSO) into the relevant
structure pointers such as `QUIC_CONNECTION` or `QUIC_XSO`.
**QRL:** QUIC record layer. Refers collectively to the QRX and QTX.
**QRX:** QUIC Record Layer RX. Receives incoming datagrams and decrypts the
packets contained in them. Manages decrypted packets in a queue pending
processing by upper layers.
**QS:** See `QUIC_STREAM`.
**QSM:** QUIC Streams Mapper. Manages internal `QUIC_STREAM` objects and maps
IDs to those objects. Allows iteration of active streams.
**QSO:** QUIC SSL Object. May be a QCSO or a QSSO.
**QSSO:** QUIC Stream SSL Object. This is an SSL object which is subsidiary to a
given QCSO, obtained using (for example) `SSL_new_stream` or
`SSL_accept_stream`.
**QTLS**, **QUIC_TLS**: Implements the QUIC handshake layer using TLS 1.3,
wrapping libssl TLS code to implement the QUIC-specific aspects of QUIC TLS.
**QTX:** QUIC Record Layer TX. Encrypts and sends packets in datagrams.
**QUIC_CHANNEL:** Internal object in the QUIC core implementation corresponding
to a QUIC connection. Ties together other components and provides connection
handling and state machine implementation. Belongs to a `QUIC_PORT` representing
a UDP socket/BIO, which in turn belongs to a `QUIC_ENGINE`. Owns some number of
`QUIC_STREAM` instances. The `QUIC_CHANNEL` code is fused tightly with the RXDP.
**QUIC_CONNECTION:** QUIC connection. This is the object representing a QUIC
connection in the APL. It internally corresponds to a `QUIC_CHANNEL` object in
the QUIC core implementation.
**QUIC_ENGINE:** Internal object in the QUIC core implementation constituting
the top-level object of a QUIC event and I/O processing domain. Owns zero or
more `QUIC_PORT` instances, each of which owns zero or more `QUIC_CHANNEL`
objects representing QUIC connections.
**QUIC_PORT:** Internal object in the QUIC core implementation corresponding to
a listening port/network BIO. Has zero or more child `QUIC_CHANNEL` objects
associated with it and belongs to a `QUIC_ENGINE`.
**QUIC_STREAM**: Internal object tracking a QUIC stream. Unlike an XSO this is
not part of the APL. An XSO wraps a QUIC_STREAM once that stream is exposed as
an API object. As such, a `QUIC_CONNECTION` is to a `QUIC_CHANNEL` what a
`QUIC_XSO` is to a `QUIC_STREAM`.
**RCID:** Remote CID. Refers to a CID which has been provided to us by a peer
and which we can place in the DCID field of an outgoing packet. See also LCID,
Unnumbered CID and Numbered CID.
**RCIDM:** Remote CID Manager. Tracks RCIDs which have been provided to us by a
peer. See also LCIDM.
**REGEN:** A strategy for regenerating lost frames. This strategy regenerates
the frame from canonical data sources without having to store a copy of the
frame which was transmitted. See FIFM design document for details.
**Remotely-initiated:** Refers to a QUIC stream which was initiated by the
remote peer, rather than by the local application.
**RIO:** Reactive I/O subsystem. Refers to the generic, non-QUIC specific parts
of the asynchronous I/O handling code which the OpenSSL QUIC stack is built on.
**RSTREAM:** Receive stream. Internal receive buffer management object used to
store data which has been RX'd but not yet read by the application.
**RTT:** Round trip time. Time for a datagram to reach a given peer and a reply
to reach the local machine, assuming the peer responds immediately.
**RXDP:** RX depacketiser. Handles frames in packets which have been decrypted
by a QRX.
**RXE:** RX entry. Structure containing decrypted received packets awaiting
processing. Stored in a queue known as the RXL. These structures belong to a
QRX.
**RXFC:** RX flow control. This determines how much a peer may send to us and
provides indication of when flow control frames increasing a peer's flow control
budget should be generated. Exists in both connection-level and stream-level
instances.
**RXKU:** RX key update. The detected condition whereby a received packet
has a flipped Key Phase bit, meaning the peer has initiated a key update.
Causes a solicited TXKU. See also TXKU.
**RXL:** RXE list. See RXE.
**RCMPPL:** The number of bytes left in a packet whose payload we are currently
forming. This is the CMPPL minus any bytes we have already put into the payload.
**SCID:** Source Connection ID. Found in some QUIC packet headers.
**SRT:** Stateless reset token.
**SRTM:** Stateless reset token manager. Object which tracks SRTs we have
received.
**SSTREAM:** Send stream. Internal send buffer management object used to store
data which has been passed to libssl for sending but which has not yet been
transmitted, or not yet been acknowledged.
**STATM:** Statistics manager. Measures estimated connection RTT.
**TA:** Thread assisted mode.
**TPARAM:** Transport parameter. See RFC 9000.
**TSERVER:** Test server. Internal test server object built around a channel.
**TXE:** TX entry. Structure containing encrypted data pending transmission.
Owned by the QTX.
**TXFC:** TX flow control. This determines how much can be transmitted to the
peer. Exists in both connection-level and stream-level instances.
**TXKU:** TX key update. This refers to when a QTX signals a key update for the
TX direction by flipping the Key Phase bit in an outgoing packet. A TXKU can be
either spontaneous (locally initiated) or solicited (in response to receiving
an RXKU). See also RXKU.
**TXL:** TXE list. See TXE.
**TXP:** TX packetiser. This is responsible for generating yet-unencrypted
packets and passing them to a QTX for encryption and transmission. It must
decide how to spend the space available in a datagram.
**TXPIM:** Transmitted Packet Information Manager. Stores information about
transmitted packets and the frames contained within them. This information
is needed to facilitate retransmission of frames if the packets they are in
are lost. Note that the ACKM records only receipt or loss of entire packets,
whereas TXPIM tracks information about individual frames in those packets.
**TX/RX v. Send/Receive:** The terms *TX* and *RX* are used for *network-level*
communication, whereas *send* and *receive* are used for application-level
communication. An application *sends* on a stream (causing data to be appended
to a *send stream buffer*, and that data is eventually TX'd by the TXP and QTX.)
**Uni:** Abbreviation of unidirectional, referring to a QUIC unidirectional
stream.
**Unnumbered CID:** Refers to a CID which does not have a sequence number
associated with it and therefore cannot be referred to by a `NEW_CONNECTION_ID`
or `RETIRE_CONNECTION_ID` frame's sequence number fields. The only unnumbered
CIDs are Initial ODCIDs and Retry ODCIDs. These CIDs are exceptionally retired
automatically during handshake confirmation. See also Numbered CID.
**URXE:** Unprocessed RX entry. Structure containing yet-undecrypted received
datagrams pending processing. Stored in a queue known as the URXL.
Ownership of URXEs is shared between DEMUX and QRX.
**URXL:** URXE list. See URXE.
**XSO:** External Stream Object. This is the API object representing a QUIC
stream in the APL. Internally, it is the `QUIC_XSO` structure, externally it is
a `SSL *` (and is a QSSO).

Binary file not shown (24 KiB).

View File

@@ -0,0 +1,59 @@
@startuml
[*] --> IDLE
ESTABLISHING : PROBE_TIMEOUT: SendProbeIfAnySentPktsUnacked() [default]
state ACTIVE {
state ESTABLISHING {
PROACTIVE_VER_NEG :
PRE_INITIAL :
INITIAL_EXCHANGE_A :
REACTIVE_VER_NEG :
INITIAL_EXCHANGE_B :
INITIAL_EXCHANGE_CONTINUED :
HANDSHAKE :
HANDSHAKE_CONTINUED :
HANDSHAKE_COMPLETED :
HANDSHAKE_CONFIRMED :
[*] --> PROACTIVE_VER_NEG : use proactive VN?
[*] --> PRE_INITIAL : else
PROACTIVE_VER_NEG --> PRE_INITIAL : RX:VER_NEG
PROACTIVE_VER_NEG --> PROACTIVE_VER_NEG : PROBE_TIMEOUT
PRE_INITIAL --> INITIAL_EXCHANGE_A : ε
INITIAL_EXCHANGE_A --> INITIAL_EXCHANGE_B : RX:RETRY
INITIAL_EXCHANGE_A --> INITIAL_EXCHANGE_CONTINUED : RX:INITIAL
INITIAL_EXCHANGE_A --> REACTIVE_VER_NEG : RX:VER_NEG
REACTIVE_VER_NEG --> PRE_INITIAL : ε
INITIAL_EXCHANGE_B --> INITIAL_EXCHANGE_CONTINUED : RX:INITIAL
INITIAL_EXCHANGE_CONTINUED --> HANDSHAKE : TLS:HAVE_EL(HANDSHAKE)
HANDSHAKE --> HANDSHAKE_CONTINUED : RX:HANDSHAKE
HANDSHAKE_CONTINUED --> HANDSHAKE_COMPLETED : TLS:HANDSHAKE_COMPLETE
HANDSHAKE_COMPLETED --> HANDSHAKE_CONFIRMED : RX:1RTT[HANDSHAKE_DONE]
}
OPEN :
[*] --> ESTABLISHING
}
state TERMINATING {
CLOSING :
DRAINING :
CLOSING --> DRAINING : RX:ANY[CONNECTION_CLOSE]
}
HANDSHAKE_CONFIRMED --> OPEN : ε
IDLE --> ACTIVE : APP:CONNECT
IDLE --> TERMINATED : APP:CLOSE
TERMINATING --> TERMINATED : TERMINATING_TIMEOUT, RX:STATELESS_RESET
ACTIVE --> CLOSING : APP:CLOSE
ACTIVE --> DRAINING : RX:ANY[CONNECTION_CLOSE]
ACTIVE --> TERMINATED : IDLE_TIMEOUT, RX:STATELESS_RESET
@enduml

Binary file not shown (137 KiB).
Binary file not shown (11 KiB).
Binary file not shown (16 KiB).
Binary file not shown (12 KiB).
Binary file not shown (11 KiB).

View File

@@ -0,0 +1,641 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.2" width="297mm" height="210mm" viewBox="0 0 29700 21000" preserveAspectRatio="xMidYMid" fill-rule="evenodd" stroke-width="28.222" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg" xmlns:ooo="http://xml.openoffice.org/svg/export" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:presentation="http://sun.com/xmlns/staroffice/presentation" xmlns:smil="http://www.w3.org/2001/SMIL20/" xmlns:anim="urn:oasis:names:tc:opendocument:xmlns:animation:1.0" xml:space="preserve">
<defs class="ClipPathGroup">
<clipPath id="presentation_clip_path" clipPathUnits="userSpaceOnUse">
<rect x="0" y="0" width="29700" height="21000"/>
</clipPath>
<clipPath id="presentation_clip_path_shrink" clipPathUnits="userSpaceOnUse">
<rect x="29" y="21" width="29641" height="20958"/>
</clipPath>
</defs>
<defs>
<font id="EmbeddedFont_1" horiz-adv-x="2048">
<font-face font-family="DejaVu Sans embedded" units-per-em="2048" font-weight="normal" font-style="normal" ascent="1879" descent="476"/>
<missing-glyph horiz-adv-x="2048" d="M 0,0 L 2047,0 2047,2047 0,2047 0,0 Z"/>
<glyph unicode="z" horiz-adv-x="927" d="M 113,1120 L 987,1120 987,952 295,147 987,147 987,0 88,0 88,168 780,973 113,973 113,1120 Z"/>
<glyph unicode="y" horiz-adv-x="1112" d="M 659,-104 C 607,-237 556,-324 507,-365 458,-406 392,-426 309,-426 L 162,-426 162,-272 270,-272 C 321,-272 360,-260 388,-236 416,-212 447,-155 481,-66 L 514,18 61,1120 256,1120 606,244 956,1120 1151,1120 659,-104 Z"/>
<glyph unicode="x" horiz-adv-x="1112" d="M 1124,1120 L 719,575 1145,0 928,0 602,440 276,0 59,0 494,586 96,1120 313,1120 610,721 907,1120 1124,1120 Z"/>
<glyph unicode="w" horiz-adv-x="1510" d="M 86,1120 L 270,1120 500,246 729,1120 946,1120 1176,246 1405,1120 1589,1120 1296,0 1079,0 838,918 596,0 379,0 86,1120 Z"/>
<glyph unicode="v" horiz-adv-x="1112" d="M 61,1120 L 256,1120 606,180 956,1120 1151,1120 731,0 481,0 61,1120 Z"/>
<glyph unicode="u" horiz-adv-x="953" d="M 174,442 L 174,1120 358,1120 358,449 C 358,343 379,264 420,211 461,158 523,131 606,131 705,131 784,163 842,226 899,289 928,376 928,485 L 928,1120 1112,1120 1112,0 928,0 928,172 C 883,104 832,54 773,21 714,-12 645,-29 567,-29 438,-29 341,11 274,91 207,171 174,288 174,442 Z "/>
<glyph unicode="t" horiz-adv-x="715" d="M 375,1438 L 375,1120 754,1120 754,977 375,977 375,369 C 375,278 388,219 413,193 438,167 488,154 565,154 L 754,154 754,0 565,0 C 423,0 325,27 271,80 217,133 190,229 190,369 L 190,977 55,977 55,1120 190,1120 190,1438 375,1438 Z"/>
<glyph unicode="s" horiz-adv-x="874" d="M 907,1087 L 907,913 C 855,940 801,960 745,973 689,986 631,993 571,993 480,993 411,979 366,951 320,923 297,881 297,825 297,782 313,749 346,725 379,700 444,677 543,655 L 606,641 C 737,613 830,574 885,523 940,472 967,400 967,309 967,205 926,123 844,62 761,1 648,-29 504,-29 444,-29 382,-23 317,-12 252,0 183,18 111,41 L 111,231 C 179,196 246,169 312,152 378,134 443,125 508,125 595,125 661,140 708,170 755,199 778,241 778,295 778,345 761,383 728,410 694,437 620,462 506,487 L 442,502 C 328,526 246,563 195,613 144,662 119,730 119,817 119,922 156,1004 231,1061 306,1118 412,1147 549,1147 617,1147 681,1142 741,1132 801,1122 856,1107 907,1087 Z"/>
<glyph unicode="r" horiz-adv-x="663" d="M 842,948 C 821,960 799,969 775,975 750,980 723,983 694,983 590,983 510,949 455,882 399,814 371,717 371,590 L 371,0 186,0 186,1120 371,1120 371,946 C 410,1014 460,1065 522,1098 584,1131 659,1147 748,1147 761,1147 775,1146 790,1145 805,1143 822,1140 841,1137 L 842,948 Z"/>
<glyph unicode="p" horiz-adv-x="1007" d="M 371,168 L 371,-426 186,-426 186,1120 371,1120 371,950 C 410,1017 459,1066 518,1099 577,1131 647,1147 729,1147 865,1147 976,1093 1061,985 1146,877 1188,735 1188,559 1188,383 1146,241 1061,133 976,25 865,-29 729,-29 647,-29 577,-13 518,20 459,52 410,101 371,168 Z M 997,559 C 997,694 969,801 914,878 858,955 781,993 684,993 587,993 510,955 455,878 399,801 371,694 371,559 371,424 399,318 455,241 510,164 587,125 684,125 781,125 858,164 914,241 969,318 997,424 997,559 Z"/>
<glyph unicode="o" horiz-adv-x="1033" d="M 627,991 C 528,991 450,953 393,876 336,799 307,693 307,559 307,425 336,320 393,243 450,166 528,127 627,127 725,127 803,166 860,243 917,320 946,426 946,559 946,692 917,797 860,875 803,952 725,991 627,991 Z M 627,1147 C 787,1147 913,1095 1004,991 1095,887 1141,743 1141,559 1141,376 1095,232 1004,128 913,23 787,-29 627,-29 466,-29 341,23 250,128 159,232 113,376 113,559 113,743 159,887 250,991 341,1095 466,1147 627,1147 Z"/>
<glyph unicode="n" horiz-adv-x="954" d="M 1124,676 L 1124,0 940,0 940,670 C 940,776 919,855 878,908 837,961 775,987 692,987 593,987 514,955 457,892 400,829 371,742 371,633 L 371,0 186,0 186,1120 371,1120 371,946 C 415,1013 467,1064 527,1097 586,1130 655,1147 733,1147 862,1147 959,1107 1025,1028 1091,948 1124,831 1124,676 Z"/>
<glyph unicode="m" horiz-adv-x="1642" d="M 1065,905 C 1111,988 1166,1049 1230,1088 1294,1127 1369,1147 1456,1147 1573,1147 1663,1106 1726,1025 1789,943 1821,827 1821,676 L 1821,0 1636,0 1636,670 C 1636,777 1617,857 1579,909 1541,961 1483,987 1405,987 1310,987 1234,955 1179,892 1124,829 1096,742 1096,633 L 1096,0 911,0 911,670 C 911,778 892,858 854,910 816,961 757,987 678,987 584,987 509,955 454,892 399,828 371,742 371,633 L 371,0 186,0 186,1120 371,1120 371,946 C 413,1015 463,1065 522,1098 581,1131 650,1147 731,1147 812,1147 882,1126 939,1085 996,1044 1038,984 1065,905 Z"/>
<glyph unicode="l" horiz-adv-x="213" d="M 193,1556 L 377,1556 377,0 193,0 193,1556 Z"/>
<glyph unicode="k" horiz-adv-x="1007" d="M 186,1556 L 371,1556 371,637 920,1120 1155,1120 561,596 1180,0 940,0 371,547 371,0 186,0 186,1556 Z"/>
<glyph unicode="i" horiz-adv-x="213" d="M 193,1120 L 377,1120 377,0 193,0 193,1120 Z M 193,1556 L 377,1556 377,1323 193,1323 193,1556 Z"/>
<glyph unicode="h" horiz-adv-x="954" d="M 1124,676 L 1124,0 940,0 940,670 C 940,776 919,855 878,908 837,961 775,987 692,987 593,987 514,955 457,892 400,829 371,742 371,633 L 371,0 186,0 186,1556 371,1556 371,946 C 415,1013 467,1064 527,1097 586,1130 655,1147 733,1147 862,1147 959,1107 1025,1028 1091,948 1124,831 1124,676 Z"/>
<glyph unicode="g" horiz-adv-x="1006" d="M 930,573 C 930,706 903,810 848,883 793,956 715,993 616,993 517,993 441,956 386,883 331,810 303,706 303,573 303,440 331,337 386,264 441,191 517,154 616,154 715,154 793,191 848,264 903,337 930,440 930,573 Z M 1114,139 C 1114,-52 1072,-193 987,-287 902,-379 773,-426 598,-426 533,-426 472,-421 415,-412 358,-402 302,-387 248,-367 L 248,-188 C 302,-217 355,-239 408,-253 461,-267 514,-274 569,-274 690,-274 780,-242 840,-180 900,-116 930,-21 930,106 L 930,197 C 892,131 843,82 784,49 725,16 654,0 571,0 434,0 323,52 239,157 155,262 113,400 113,573 113,746 155,885 239,990 323,1095 434,1147 571,1147 654,1147 725,1131 784,1098 843,1065 892,1016 930,950 L 930,1120 1114,1120 1114,139 Z"/>
<glyph unicode="f" horiz-adv-x="742" d="M 760,1556 L 760,1403 584,1403 C 518,1403 472,1390 447,1363 421,1336 408,1288 408,1219 L 408,1120 711,1120 711,977 408,977 408,0 223,0 223,977 47,977 47,1120 223,1120 223,1198 C 223,1323 252,1414 310,1471 368,1528 460,1556 586,1556 L 760,1556 Z"/>
<glyph unicode="e" horiz-adv-x="1059" d="M 1151,606 L 1151,516 305,516 C 313,389 351,293 420,227 488,160 583,127 705,127 776,127 844,136 911,153 977,170 1043,196 1108,231 L 1108,57 C 1042,29 974,8 905,-7 836,-22 765,-29 694,-29 515,-29 374,23 270,127 165,231 113,372 113,549 113,732 163,878 262,986 361,1093 494,1147 662,1147 813,1147 932,1099 1020,1002 1107,905 1151,773 1151,606 Z M 967,659 C 966,760 938,841 883,901 828,961 755,991 664,991 561,991 479,962 418,904 356,846 320,764 311,659 L 967,659 Z"/>
<glyph unicode="d" horiz-adv-x="1006" d="M 930,950 L 930,1556 1114,1556 1114,0 930,0 930,168 C 891,101 843,52 784,20 725,-13 654,-29 571,-29 436,-29 326,25 241,133 156,241 113,383 113,559 113,735 156,877 241,985 326,1093 436,1147 571,1147 654,1147 725,1131 784,1099 843,1066 891,1017 930,950 Z M 303,559 C 303,424 331,318 387,241 442,164 519,125 616,125 713,125 790,164 846,241 902,318 930,424 930,559 930,694 902,801 846,878 790,955 713,993 616,993 519,993 442,955 387,878 331,801 303,694 303,559 Z"/>
<glyph unicode="c" horiz-adv-x="900" d="M 999,1077 L 999,905 C 947,934 895,955 843,970 790,984 737,991 684,991 565,991 472,953 406,878 340,802 307,696 307,559 307,422 340,316 406,241 472,165 565,127 684,127 737,127 790,134 843,149 895,163 947,184 999,213 L 999,43 C 948,19 895,1 840,-11 785,-23 726,-29 664,-29 495,-29 361,24 262,130 163,236 113,379 113,559 113,742 163,885 264,990 364,1095 501,1147 676,1147 733,1147 788,1141 842,1130 896,1118 948,1100 999,1077 Z"/>
<glyph unicode="a" horiz-adv-x="980" d="M 702,563 C 553,563 450,546 393,512 336,478 307,420 307,338 307,273 329,221 372,183 415,144 473,125 547,125 649,125 731,161 793,234 854,306 885,402 885,522 L 885,563 702,563 Z M 1069,639 L 1069,0 885,0 885,170 C 843,102 791,52 728,20 665,-13 589,-29 498,-29 383,-29 292,3 225,68 157,132 123,218 123,326 123,452 165,547 250,611 334,675 460,707 627,707 L 885,707 885,725 C 885,810 857,875 802,922 746,968 668,991 567,991 503,991 441,983 380,968 319,953 261,930 205,899 L 205,1069 C 272,1095 338,1115 401,1128 464,1141 526,1147 586,1147 748,1147 869,1105 949,1021 1029,937 1069,810 1069,639 Z"/>
<glyph unicode="X" horiz-adv-x="1297" d="M 129,1493 L 346,1493 717,938 1090,1493 1307,1493 827,776 1339,0 1122,0 702,635 279,0 61,0 594,797 129,1493 Z"/>
<glyph unicode="W" horiz-adv-x="1906" d="M 68,1493 L 272,1493 586,231 899,1493 1126,1493 1440,231 1753,1493 1958,1493 1583,0 1329,0 1014,1296 696,0 442,0 68,1493 Z"/>
<glyph unicode="U" horiz-adv-x="1165" d="M 178,1493 L 381,1493 381,586 C 381,426 410,311 468,241 526,170 620,135 750,135 879,135 973,170 1031,241 1089,311 1118,426 1118,586 L 1118,1493 1321,1493 1321,561 C 1321,366 1273,219 1177,120 1080,21 938,-29 750,-29 561,-29 419,21 323,120 226,219 178,366 178,561 L 178,1493 Z"/>
<glyph unicode="T" horiz-adv-x="1297" d="M -6,1493 L 1257,1493 1257,1323 727,1323 727,0 524,0 524,1323 -6,1323 -6,1493 Z"/>
<glyph unicode="S" horiz-adv-x="1060" d="M 1096,1444 L 1096,1247 C 1019,1284 947,1311 879,1329 811,1347 745,1356 682,1356 572,1356 487,1335 428,1292 368,1249 338,1189 338,1110 338,1044 358,994 398,961 437,927 512,900 623,879 L 745,854 C 896,825 1007,775 1079,703 1150,630 1186,533 1186,412 1186,267 1138,158 1041,83 944,8 801,-29 614,-29 543,-29 468,-21 389,-5 309,11 226,35 141,66 L 141,274 C 223,228 303,193 382,170 461,147 538,135 614,135 729,135 818,158 881,203 944,248 975,313 975,397 975,470 953,528 908,569 863,610 789,641 686,662 L 563,686 C 412,716 303,763 236,827 169,891 135,980 135,1094 135,1226 182,1330 275,1406 368,1482 496,1520 659,1520 729,1520 800,1514 873,1501 946,1488 1020,1469 1096,1444 Z"/>
<glyph unicode="R" horiz-adv-x="1192" d="M 909,700 C 952,685 995,654 1036,606 1077,558 1118,492 1159,408 L 1364,0 1147,0 956,383 C 907,483 859,549 813,582 766,615 703,631 623,631 L 403,631 403,0 201,0 201,1493 657,1493 C 828,1493 955,1457 1039,1386 1123,1315 1165,1207 1165,1063 1165,969 1143,891 1100,829 1056,767 992,724 909,700 Z M 403,1327 L 403,797 657,797 C 754,797 828,820 878,865 927,910 952,976 952,1063 952,1150 927,1216 878,1261 828,1305 754,1327 657,1327 L 403,1327 Z"/>
<glyph unicode="Q" horiz-adv-x="1403" d="M 807,1356 C 660,1356 544,1301 458,1192 371,1083 328,934 328,745 328,557 371,408 458,299 544,190 660,135 807,135 954,135 1070,190 1156,299 1241,408 1284,557 1284,745 1284,934 1241,1083 1156,1192 1070,1301 954,1356 807,1356 Z M 1090,27 L 1356,-264 1112,-264 891,-25 C 869,-26 852,-27 841,-28 829,-29 818,-29 807,-29 597,-29 429,41 304,182 178,322 115,510 115,745 115,981 178,1169 304,1310 429,1450 597,1520 807,1520 1016,1520 1184,1450 1309,1310 1434,1169 1497,981 1497,745 1497,572 1462,423 1393,300 1323,177 1222,86 1090,27 Z"/>
<glyph unicode="P" horiz-adv-x="980" d="M 403,1327 L 403,766 657,766 C 751,766 824,790 875,839 926,888 952,957 952,1047 952,1136 926,1205 875,1254 824,1303 751,1327 657,1327 L 403,1327 Z M 201,1493 L 657,1493 C 824,1493 951,1455 1037,1380 1122,1304 1165,1193 1165,1047 1165,900 1122,788 1037,713 951,638 824,600 657,600 L 403,600 403,0 201,0 201,1493 Z"/>
<glyph unicode="O" horiz-adv-x="1403" d="M 807,1356 C 660,1356 544,1301 458,1192 371,1083 328,934 328,745 328,557 371,408 458,299 544,190 660,135 807,135 954,135 1070,190 1156,299 1241,408 1284,557 1284,745 1284,934 1241,1083 1156,1192 1070,1301 954,1356 807,1356 Z M 807,1520 C 1016,1520 1184,1450 1309,1310 1434,1169 1497,981 1497,745 1497,510 1434,322 1309,182 1184,41 1016,-29 807,-29 597,-29 429,41 304,181 178,321 115,509 115,745 115,981 178,1169 304,1310 429,1450 597,1520 807,1520 Z"/>
<glyph unicode="N" horiz-adv-x="1165" d="M 201,1493 L 473,1493 1135,244 1135,1493 1331,1493 1331,0 1059,0 397,1249 397,0 201,0 201,1493 Z"/>
<glyph unicode="M" horiz-adv-x="1377" d="M 201,1493 L 502,1493 883,477 1266,1493 1567,1493 1567,0 1370,0 1370,1311 985,287 782,287 397,1311 397,0 201,0 201,1493 Z"/>
<glyph unicode="L" horiz-adv-x="954" d="M 201,1493 L 403,1493 403,170 1130,170 1130,0 201,0 201,1493 Z"/>
<glyph unicode="K" horiz-adv-x="1218" d="M 201,1493 L 403,1493 403,862 1073,1493 1333,1493 592,797 1386,0 1120,0 403,719 403,0 201,0 201,1493 Z"/>
<glyph unicode="I" horiz-adv-x="239" d="M 201,1493 L 403,1493 403,0 201,0 201,1493 Z"/>
<glyph unicode="H" horiz-adv-x="1165" d="M 201,1493 L 403,1493 403,881 1137,881 1137,1493 1339,1493 1339,0 1137,0 1137,711 403,711 403,0 201,0 201,1493 Z"/>
<glyph unicode="F" horiz-adv-x="874" d="M 201,1493 L 1059,1493 1059,1323 403,1323 403,883 995,883 995,713 403,713 403,0 201,0 201,1493 Z"/>
<glyph unicode="E" horiz-adv-x="980" d="M 201,1493 L 1145,1493 1145,1323 403,1323 403,881 1114,881 1114,711 403,711 403,170 1163,170 1163,0 201,0 201,1493 Z"/>
<glyph unicode="D" horiz-adv-x="1271" d="M 403,1327 L 403,166 647,166 C 853,166 1004,213 1100,306 1195,399 1243,547 1243,748 1243,948 1195,1095 1100,1188 1004,1281 853,1327 647,1327 L 403,1327 Z M 201,1493 L 616,1493 C 905,1493 1118,1433 1253,1313 1388,1192 1456,1004 1456,748 1456,491 1388,302 1252,181 1116,60 904,0 616,0 L 201,0 201,1493 Z"/>
<glyph unicode="C" horiz-adv-x="1218" d="M 1319,1378 L 1319,1165 C 1251,1228 1179,1276 1102,1307 1025,1338 943,1354 856,1354 685,1354 555,1302 464,1198 373,1093 328,942 328,745 328,548 373,398 464,294 555,189 685,137 856,137 943,137 1025,153 1102,184 1179,215 1251,263 1319,326 L 1319,115 C 1248,67 1174,31 1095,7 1016,-17 932,-29 844,-29 618,-29 440,40 310,179 180,317 115,506 115,745 115,985 180,1174 310,1313 440,1451 618,1520 844,1520 933,1520 1018,1508 1097,1485 1176,1461 1250,1425 1319,1378 Z"/>
<glyph unicode="B" horiz-adv-x="1086" d="M 403,713 L 403,166 727,166 C 836,166 916,189 969,234 1021,279 1047,347 1047,440 1047,533 1021,602 969,647 916,691 836,713 727,713 L 403,713 Z M 403,1327 L 403,877 702,877 C 801,877 874,896 923,933 971,970 995,1026 995,1102 995,1177 971,1234 923,1271 874,1308 801,1327 702,1327 L 403,1327 Z M 201,1493 L 717,1493 C 871,1493 990,1461 1073,1397 1156,1333 1198,1242 1198,1124 1198,1033 1177,960 1134,906 1091,852 1029,818 946,805 1045,784 1123,739 1178,672 1233,604 1260,519 1260,418 1260,285 1215,182 1124,109 1033,36 904,0 737,0 L 201,0 201,1493 Z"/>
<glyph unicode="A" horiz-adv-x="1403" d="M 700,1294 L 426,551 975,551 700,1294 Z M 586,1493 L 815,1493 1384,0 1174,0 1038,383 365,383 229,0 16,0 586,1493 Z"/>
<glyph unicode="-" horiz-adv-x="583" d="M 100,643 L 639,643 639,479 100,479 100,643 Z"/>
<glyph unicode=" " horiz-adv-x="635"/>
</font>
</defs>
<defs class="TextShapeIndex">
<g ooo:slide="id1" ooo:id-list="id3 id4 id5 id6 id7 id8 id9 id10 id11 id12 id13 id14 id15 id16 id17 id18 id19 id20 id21 id22 id23 id24 id25 id26 id27 id28 id29 id30 id31 id32 id33 id34 id35 id36 id37 id38 id39 id40 id41 id42 id43 id44 id45 id46 id47 id48 id49 id50 id51 id52 id53 id54 id55 id56 id57 id58 id59 id60 id61 id62 id63 id64 id65 id66 id67 id68 id69 id70"/>
</defs>
<defs class="EmbeddedBulletChars">
<g id="bullet-char-template-57356" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 580,1141 L 1163,571 580,0 -4,571 580,1141 Z"/>
</g>
<g id="bullet-char-template-57354" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 8,1128 L 1137,1128 1137,0 8,0 8,1128 Z"/>
</g>
<g id="bullet-char-template-10146" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 174,0 L 602,739 174,1481 1456,739 174,0 Z M 1358,739 L 309,1346 659,739 1358,739 Z"/>
</g>
<g id="bullet-char-template-10132" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 2015,739 L 1276,0 717,0 1260,543 174,543 174,936 1260,936 717,1481 1274,1481 2015,739 Z"/>
</g>
<g id="bullet-char-template-10007" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 0,-2 C -7,14 -16,27 -25,37 L 356,567 C 262,823 215,952 215,954 215,979 228,992 255,992 264,992 276,990 289,987 310,991 331,999 354,1012 L 381,999 492,748 772,1049 836,1024 860,1049 C 881,1039 901,1025 922,1006 886,937 835,863 770,784 769,783 710,716 594,584 L 774,223 C 774,196 753,168 711,139 L 727,119 C 717,90 699,76 672,76 641,76 570,178 457,381 L 164,-76 C 142,-110 111,-127 72,-127 30,-127 9,-110 8,-76 1,-67 -2,-52 -2,-32 -2,-23 -1,-13 0,-2 Z"/>
</g>
<g id="bullet-char-template-10004" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 285,-33 C 182,-33 111,30 74,156 52,228 41,333 41,471 41,549 55,616 82,672 116,743 169,778 240,778 293,778 328,747 346,684 L 369,508 C 377,444 397,411 428,410 L 1163,1116 C 1174,1127 1196,1133 1229,1133 1271,1133 1292,1118 1292,1087 L 1292,965 C 1292,929 1282,901 1262,881 L 442,47 C 390,-6 338,-33 285,-33 Z"/>
</g>
<g id="bullet-char-template-9679" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 813,0 C 632,0 489,54 383,161 276,268 223,411 223,592 223,773 276,916 383,1023 489,1130 632,1184 813,1184 992,1184 1136,1130 1245,1023 1353,916 1407,772 1407,592 1407,412 1353,268 1245,161 1136,54 992,0 813,0 Z"/>
</g>
<g id="bullet-char-template-8226" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 346,457 C 273,457 209,483 155,535 101,586 74,649 74,723 74,796 101,859 155,911 209,963 273,989 346,989 419,989 480,963 531,910 582,859 608,796 608,723 608,648 583,586 532,535 482,483 420,457 346,457 Z"/>
</g>
<g id="bullet-char-template-8211" transform="scale(0.00048828125,-0.00048828125)">
<path d="M -4,459 L 1135,459 1135,606 -4,606 -4,459 Z"/>
</g>
<g id="bullet-char-template-61548" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 173,740 C 173,903 231,1043 346,1159 462,1274 601,1332 765,1332 928,1332 1067,1274 1183,1159 1299,1043 1357,903 1357,740 1357,577 1299,437 1183,322 1067,206 928,148 765,148 601,148 462,206 346,322 231,437 173,577 173,740 Z"/>
</g>
</defs>
<g>
<g id="id2" class="Master_Slide">
<g id="bg-id2" class="Background"/>
<g id="bo-id2" class="BackgroundObjects"/>
</g>
</g>
<g class="SlideGroup">
<g>
<g id="container-id1">
<g id="id1" class="Slide" clip-path="url(#presentation_clip_path)">
<g class="Page">
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id3">
<rect class="BoundingBox" stroke="none" fill="none" x="9572" y="8837" width="14606" height="1371"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9721,9963 C 10604,9279 14345,8945 23891,8937"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 9572,10207 L 9816,10006 9647,9900 9572,10207 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 24177,8937 L 23877,8837 23877,9037 24177,8937 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id4">
<rect class="BoundingBox" stroke="none" fill="none" x="1317" y="16556" width="26989" height="3176"/>
<g>
<defs>
<linearGradient id="gradient1" x1="11592" y1="11544" x2="18030" y2="24743" gradientUnits="userSpaceOnUse">
<stop offset="0" style="stop-color:rgb(238,238,238)"/>
<stop offset="1" style="stop-color:rgb(97,97,97)"/>
</linearGradient>
</defs>
<path style="fill:url(#gradient1)" d="M 14811,19731 L 1317,19731 1317,16556 28305,16556 28305,19731 14811,19731 Z"/>
</g>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="2028" y="18327"><tspan fill="rgb(0,0,0)" stroke="none">Kernel</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id5">
<rect class="BoundingBox" stroke="none" fill="none" x="1634" y="1316" width="26673" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 14970,2905 L 1635,2905 1635,1317 28305,1317 28305,2905 14970,2905 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 14970,2905 L 1635,2905 1635,1317 28305,1317 28305,2905 14970,2905 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="14185" y="2294"><tspan fill="rgb(0,0,0)" stroke="none">SSL API</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id6">
<rect class="BoundingBox" stroke="none" fill="none" x="1633" y="4491" width="4130" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 3698,6080 L 1634,6080 1634,4492 5761,4492 5761,6080 3698,6080 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 3698,6080 L 1634,6080 1634,4492 5761,4492 5761,6080 3698,6080 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="2934" y="5184"><tspan fill="rgb(0,0,0)" stroke="none">Stream</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="2277" y="5754"><tspan fill="rgb(0,0,0)" stroke="none">Send Buffers </tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id7">
<rect class="BoundingBox" stroke="none" fill="none" x="24176" y="4491" width="4131" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 26241,6080 L 24177,6080 24177,4492 28305,4492 28305,6080 26241,6080 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 26241,6080 L 24177,6080 24177,4492 28305,4492 28305,6080 26241,6080 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="25477" y="5184"><tspan fill="rgb(0,0,0)" stroke="none">Stream</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="24889" y="5754"><tspan fill="rgb(0,0,0)" stroke="none">Read Buffers</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id8">
<rect class="BoundingBox" stroke="none" fill="none" x="11793" y="3856" width="4448" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 14017,5445 L 11794,5445 11794,3857 16239,3857 16239,5445 14017,5445 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 14017,5445 L 11794,5445 11794,3857 16239,3857 16239,5445 14017,5445 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="12821" y="4549"><tspan fill="rgb(0,0,0)" stroke="none">Connection</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="12505" y="5119"><tspan fill="rgb(0,0,0)" stroke="none">State Machine</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id9">
<rect class="BoundingBox" stroke="none" fill="none" x="15603" y="7031" width="4766" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 17986,8620 L 15604,8620 15604,7032 20367,7032 20367,8620 17986,8620 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 17986,8620 L 15604,8620 15604,7032 20367,7032 20367,8620 17986,8620 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="16364" y="7724"><tspan fill="rgb(0,0,0)" stroke="none">TLS Handshake</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="16597" y="8294"><tspan fill="rgb(0,0,0)" stroke="none">Record Layer</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id10">
<rect class="BoundingBox" stroke="none" fill="none" x="1633" y="7666" width="4130" height="2543"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 3698,10207 L 1634,10207 1634,7667 5761,7667 5761,10207 3698,10207 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 3698,10207 L 1634,10207 1634,7667 5761,7667 5761,10207 3698,10207 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="2275" y="9120"><tspan fill="rgb(0,0,0)" stroke="none">TX Packetizer</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id11">
<rect class="BoundingBox" stroke="none" fill="none" x="24176" y="7666" width="4131" height="2543"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 26241,10207 L 24177,10207 24177,7667 28305,7667 28305,10207 26241,10207 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 26241,10207 L 24177,10207 24177,7667 28305,7667 28305,10207 26241,10207 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="24493" y="9120"><tspan fill="rgb(0,0,0)" stroke="none">RX Depacketizer</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id12">
<rect class="BoundingBox" stroke="none" fill="none" x="1633" y="11158" width="4130" height="1590"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 3698,12746 L 1634,12746 1634,11159 5761,11159 5761,12746 3698,12746 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 3698,12746 L 1634,12746 1634,11159 5761,11159 5761,12746 3698,12746 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="2540" y="11851"><tspan fill="rgb(0,0,0)" stroke="none">QUIC Write</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="2309" y="12421"><tspan fill="rgb(0,0,0)" stroke="none">Record Layer</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id13">
<rect class="BoundingBox" stroke="none" fill="none" x="24176" y="11158" width="4131" height="1590"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 26241,12746 L 24177,12746 24177,11159 28305,11159 28305,12746 26241,12746 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 26241,12746 L 24177,12746 24177,11159 28305,11159 28305,12746 26241,12746 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="25111" y="11851"><tspan fill="rgb(0,0,0)" stroke="none">QUIC Read</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="24853" y="12421"><tspan fill="rgb(0,0,0)" stroke="none">Record Layer</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id14">
<rect class="BoundingBox" stroke="none" fill="none" x="20048" y="4491" width="3178" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 21637,6080 L 20049,6080 20049,4492 23224,4492 23224,6080 21637,6080 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 21637,6080 L 20049,6080 20049,4492 23224,4492 23224,6080 21637,6080 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="20441" y="5184"><tspan fill="rgb(0,0,0)" stroke="none">Connection</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="20686" y="5754"><tspan fill="rgb(0,0,0)" stroke="none">ID Cache</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id15">
<rect class="BoundingBox" stroke="none" fill="none" x="8301" y="14016" width="2861" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 9731,15605 L 8302,15605 8302,14017 11160,14017 11160,15605 9731,15605 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 9731,15605 L 8302,15605 8302,14017 11160,14017 11160,15605 9731,15605 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="8673" y="14709"><tspan fill="rgb(0,0,0)" stroke="none">Datagram</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="9357" y="15279"><tspan fill="rgb(0,0,0)" stroke="none">BIO</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id16">
<rect class="BoundingBox" stroke="none" fill="none" x="11794" y="14016" width="2861" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 13224,15605 L 11795,15605 11795,14017 14653,14017 14653,15605 13224,15605 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 13224,15605 L 11795,15605 11795,14017 14653,14017 14653,15605 13224,15605 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="12166" y="14709"><tspan fill="rgb(0,0,0)" stroke="none">Datagram</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="12850" y="15279"><tspan fill="rgb(0,0,0)" stroke="none">BIO</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id17">
<rect class="BoundingBox" stroke="none" fill="none" x="15286" y="14016" width="2861" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 16716,15605 L 15287,15605 15287,14017 18145,14017 18145,15605 16716,15605 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 16716,15605 L 15287,15605 15287,14017 18145,14017 18145,15605 16716,15605 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="15658" y="14709"><tspan fill="rgb(0,0,0)" stroke="none">Datagram</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="16342" y="15279"><tspan fill="rgb(0,0,0)" stroke="none">BIO</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id18">
<rect class="BoundingBox" stroke="none" fill="none" x="8301" y="16872" width="2861" height="956"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 9731,17826 L 8302,17826 8302,16873 11160,16873 11160,17826 9731,17826 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 9731,17826 L 8302,17826 8302,16873 11160,16873 11160,17826 9731,17826 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="9287" y="17533"><tspan fill="rgb(0,0,0)" stroke="none">UDP</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id19">
<rect class="BoundingBox" stroke="none" fill="none" x="4228" y="18532" width="20958" height="955"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 14707,19485 L 4229,19485 4229,18533 25184,18533 25184,19485 14707,19485 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 14707,19485 L 4229,19485 4229,18533 25184,18533 25184,19485 14707,19485 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="12563" y="19192"><tspan fill="rgb(0,0,0)" stroke="none">Hardware Interfaces</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id20">
<rect class="BoundingBox" stroke="none" fill="none" x="11794" y="16873" width="2861" height="956"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 13224,17827 L 11795,17827 11795,16874 14653,16874 14653,17827 13224,17827 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 13224,17827 L 11795,17827 11795,16874 14653,16874 14653,17827 13224,17827 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="12780" y="17534"><tspan fill="rgb(0,0,0)" stroke="none">UDP</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id21">
<rect class="BoundingBox" stroke="none" fill="none" x="15286" y="16873" width="2861" height="956"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 16716,17827 L 15287,17827 15287,16874 18145,16874 18145,17827 16716,17827 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 16716,17827 L 15287,17827 15287,16874 18145,16874 18145,17827 16716,17827 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="16272" y="17534"><tspan fill="rgb(0,0,0)" stroke="none">UDP</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id22">
<rect class="BoundingBox" stroke="none" fill="none" x="9731" y="17827" width="1748" height="707"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9895,18045 C 10330,18254 10897,18103 11324,18320"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 9731,17827 L 9839,18124 9996,18000 9731,17827 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 11478,18533 L 11375,18234 11216,18355 11478,18533 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id23">
<rect class="BoundingBox" stroke="none" fill="none" x="11478" y="17828" width="1747" height="706"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13060,18045 C 12625,18254 12059,18103 11632,18320"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 13224,17828 L 12959,18001 13116,18125 13224,17828 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 11478,18533 L 11740,18355 11581,18234 11478,18533 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id24">
<rect class="BoundingBox" stroke="none" fill="none" x="16716" y="17828" width="1745" height="706"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16880,18045 C 17315,18254 17879,18103 18306,18320"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 16716,17828 L 16824,18125 16981,18001 16716,17828 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 18460,18533 L 18358,18234 18199,18355 18460,18533 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id25">
<rect class="BoundingBox" stroke="none" fill="none" x="18779" y="14016" width="2861" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 20209,15605 L 18780,15605 18780,14017 21638,14017 21638,15605 20209,15605 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 20209,15605 L 18780,15605 18780,14017 21638,14017 21638,15605 20209,15605 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="19151" y="14709"><tspan fill="rgb(0,0,0)" stroke="none">Datagram</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="19835" y="15279"><tspan fill="rgb(0,0,0)" stroke="none">BIO</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id26">
<rect class="BoundingBox" stroke="none" fill="none" x="18778" y="16888" width="2861" height="956"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 20208,17842 L 18779,17842 18779,16889 21637,16889 21637,17842 20208,17842 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 20208,17842 L 18779,17842 18779,16889 21637,16889 21637,17842 20208,17842 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="19764" y="17549"><tspan fill="rgb(0,0,0)" stroke="none">UDP</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id27">
<rect class="BoundingBox" stroke="none" fill="none" x="18460" y="17843" width="1749" height="691"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 20040,18057 C 19602,18260 19048,18114 18618,18323"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 20208,17843 L 19940,18011 20094,18138 20208,17843 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 18460,18533 L 18725,18360 18568,18236 18460,18533 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id28">
<rect class="BoundingBox" stroke="none" fill="none" x="5761" y="11952" width="3971" height="2066"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5762,11953 L 9477,13885"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 9731,14017 L 9511,13790 9419,13967 9731,14017 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id29">
<rect class="BoundingBox" stroke="none" fill="none" x="5761" y="11952" width="7464" height="2083"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5762,11953 L 12948,13941"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 13224,14017 L 12962,13841 12908,14033 13224,14017 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id30">
<rect class="BoundingBox" stroke="none" fill="none" x="5761" y="11952" width="10956" height="2109"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5762,11953 L 16434,13964"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 16716,14017 L 16440,13863 16403,14060 16716,14017 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id31">
<rect class="BoundingBox" stroke="none" fill="none" x="5761" y="11952" width="14449" height="2123"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5762,11953 L 19925,13976"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 20209,14017 L 19926,13876 19898,14074 20209,14017 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id32">
<rect class="BoundingBox" stroke="none" fill="none" x="19096" y="11158" width="4131" height="1590"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 21161,12746 L 19097,12746 19097,11159 23225,11159 23225,12746 21161,12746 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 21161,12746 L 19097,12746 19097,11159 23225,11159 23225,12746 21161,12746 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="19608" y="11851"><tspan fill="rgb(0,0,0)" stroke="none">Path And Conn</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="19671" y="12421"><tspan fill="rgb(0,0,0)" stroke="none">Demultiplexer</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id33">
<rect class="BoundingBox" stroke="none" fill="none" x="9730" y="12680" width="11432" height="1339"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9731,14017 L 20876,12779"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 21161,12747 L 20852,12681 20874,12880 21161,12747 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id34">
<rect class="BoundingBox" stroke="none" fill="none" x="13223" y="12695" width="7939" height="1324"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13224,14017 L 20878,12792"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 21161,12747 L 20849,12696 20881,12893 21161,12747 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id35">
<rect class="BoundingBox" stroke="none" fill="none" x="16715" y="12733" width="4447" height="1286"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16716,14017 L 20885,12826"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 21161,12747 L 20845,12733 20900,12926 21161,12747 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id36">
<rect class="BoundingBox" stroke="none" fill="none" x="20208" y="12747" width="954" height="1272"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 20209,14017 L 20989,12976"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 21161,12747 L 20901,12927 21061,13047 21161,12747 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id37">
<rect class="BoundingBox" stroke="none" fill="none" x="9631" y="15605" width="201" height="1270"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9731,15892 L 9731,16587"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 9731,15605 L 9631,15905 9831,15905 9731,15605 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 9731,16874 L 9831,16574 9631,16574 9731,16874 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id38">
<rect class="BoundingBox" stroke="none" fill="none" x="13124" y="15605" width="201" height="1271"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13224,15892 L 13224,16588"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 13224,15605 L 13124,15905 13324,15905 13224,15605 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 13224,16875 L 13324,16575 13124,16575 13224,16875 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id39">
<rect class="BoundingBox" stroke="none" fill="none" x="16616" y="15605" width="201" height="1271"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16716,15892 L 16716,16588"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 16716,15605 L 16616,15905 16816,15905 16716,15605 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 16716,16875 L 16816,16575 16616,16575 16716,16875 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id40">
<rect class="BoundingBox" stroke="none" fill="none" x="20108" y="15605" width="202" height="1286"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 20209,15892 L 20208,16603"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 20209,15605 L 20109,15905 20309,15905 20209,15605 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 20208,16890 L 20308,16590 20108,16590 20208,16890 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id41">
<rect class="BoundingBox" stroke="none" fill="none" x="21062" y="6079" width="577" height="5082"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 21637,6080 C 21637,9890 21189,7495 21162,10838"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 21161,11160 L 21262,10860 21062,10860 21161,11160 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id42">
<rect class="BoundingBox" stroke="none" fill="none" x="23224" y="11853" width="954" height="201"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 23225,11953 L 23890,11953"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 24177,11953 L 23877,11853 23877,12053 24177,11953 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id43">
<rect class="BoundingBox" stroke="none" fill="none" x="1633" y="13381" width="4130" height="2543"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 3698,15922 L 1634,15922 1634,13382 5761,13382 5761,15922 3698,15922 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 3698,15922 L 1634,15922 1634,13382 5761,13382 5761,15922 3698,15922 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="2508" y="13980"><tspan fill="rgb(0,0,0)" stroke="none">Congestion</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="2656" y="14550"><tspan fill="rgb(0,0,0)" stroke="none">Controller</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id44">
<rect class="BoundingBox" stroke="none" fill="none" x="2268" y="14968" width="2860" height="638"/>
<path fill="rgb(38,166,154)" stroke="none" d="M 3698,15604 L 2269,15604 2269,14969 5126,14969 5126,15604 3698,15604 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 3698,15604 L 2269,15604 2269,14969 5126,14969 5126,15604 3698,15604 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="2635" y="15470"><tspan fill="rgb(0,0,0)" stroke="none">New Reno</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id45">
<rect class="BoundingBox" stroke="none" fill="none" x="3598" y="10206" width="201" height="955"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 3698,10207 L 3698,10873"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 3698,11160 L 3798,10860 3598,10860 3698,11160 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id46">
<rect class="BoundingBox" stroke="none" fill="none" x="3698" y="2904" width="1433" height="1589"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5129,2905 L 3890,4279"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 3698,4492 L 3973,4336 3825,4202 3698,4492 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id47">
<rect class="BoundingBox" stroke="none" fill="none" x="24811" y="2905" width="1432" height="1589"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 26241,4492 L 25003,3118"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 24811,2905 L 24938,3195 25086,3061 24811,2905 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id48">
<rect class="BoundingBox" stroke="none" fill="none" x="14017" y="2905" width="954" height="953"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14767,3108 L 14220,3654"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 14970,2905 L 14687,3046 14828,3188 14970,2905 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 14017,3857 L 14300,3716 14159,3574 14017,3857 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id49">
<rect class="BoundingBox" stroke="none" fill="none" x="8935" y="7031" width="4766" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 11318,8620 L 8936,8620 8936,7032 13699,7032 13699,8620 11318,8620 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 11318,8620 L 8936,8620 8936,7032 13699,7032 13699,8620 11318,8620 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="10234" y="7724"><tspan fill="rgb(0,0,0)" stroke="none">Timer And</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="9948" y="8294"><tspan fill="rgb(0,0,0)" stroke="none">Event Queue</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id50">
<rect class="BoundingBox" stroke="none" fill="none" x="13700" y="7745" width="1840" height="2463"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13988,7844 C 14960,7969 15371,8523 15439,9918"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 13700,7826 L 13993,7945 14006,7745 13700,7826 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 15446,10207 L 15539,9905 15339,9910 15446,10207 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id51">
<rect class="BoundingBox" stroke="none" fill="none" x="7031" y="10206" width="5083" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 9572,11795 L 7032,11795 7032,10207 12112,10207 12112,11795 9572,11795 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 9572,11795 L 7032,11795 7032,10207 12112,10207 12112,11795 9572,11795 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="7500" y="10899"><tspan fill="rgb(0,0,0)" stroke="none">Flow Controller And</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="7597" y="11469"><tspan fill="rgb(0,0,0)" stroke="none">Statistics Collector</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id52">
<rect class="BoundingBox" stroke="none" fill="none" x="3598" y="6079" width="201" height="1589"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 3698,6080 L 3698,7380"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 3698,7667 L 3798,7367 3598,7367 3698,7667 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id53">
<rect class="BoundingBox" stroke="none" fill="none" x="5761" y="9232" width="1272" height="1795"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6039,9322 C 6586,9645 6204,10624 6763,10938"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 5761,9254 L 6026,9426 6077,9233 5761,9254 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 7032,11001 L 6765,10831 6717,11025 7032,11001 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id54">
<rect class="BoundingBox" stroke="none" fill="none" x="13063" y="10206" width="4766" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 15446,11795 L 13064,11795 13064,10207 17827,10207 17827,11795 15446,11795 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 15446,11795 L 13064,11795 13064,10207 17827,10207 17827,11795 15446,11795 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="13530" y="10899"><tspan fill="rgb(0,0,0)" stroke="none">ACK Handling And</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="13992" y="11469"><tspan fill="rgb(0,0,0)" stroke="none">Loss Detector</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id55">
<rect class="BoundingBox" stroke="none" fill="none" x="16556" y="9253" width="7623" height="955"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 24177,9254 C 19096,9254 17257,9494 16684,9976"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 16556,10207 L 16792,9997 16619,9897 16556,10207 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id56">
<rect class="BoundingBox" stroke="none" fill="none" x="16239" y="4650" width="3812" height="687"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16240,4651 L 19767,5239"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 20050,5286 L 19771,5138 19738,5335 20050,5286 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id57">
<rect class="BoundingBox" stroke="none" fill="none" x="26141" y="10207" width="201" height="955"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 26241,11160 L 26241,10494"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 26241,10207 L 26141,10507 26341,10507 26241,10207 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id58">
<rect class="BoundingBox" stroke="none" fill="none" x="26141" y="6080" width="201" height="1589"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 26241,7667 L 26241,6367"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 26241,6080 L 26141,6380 26341,6380 26241,6080 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id59">
<rect class="BoundingBox" stroke="none" fill="none" x="5761" y="8936" width="8576" height="1272"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5762,8937 C 11478,8937 13694,9275 14239,9951"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 14336,10207 L 14319,9891 14133,9965 14336,10207 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id60">
<rect class="BoundingBox" stroke="none" fill="none" x="4803" y="6412" width="11757" height="1256"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16558,7032 C 16558,6281 5782,5991 4870,7461"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 4809,7667 L 4995,7411 4804,7351 4809,7667 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id61">
<rect class="BoundingBox" stroke="none" fill="none" x="19414" y="6372" width="5401" height="1297"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 24813,7667 C 24813,5963 20437,6238 19564,6822"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 19414,7032 L 19673,6850 19512,6731 19414,7032 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id62">
<rect class="BoundingBox" stroke="none" fill="none" x="16239" y="5041" width="1837" height="1992"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16527,5140 C 17500,5240 17891,5666 17975,6745"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 16239,5126 L 16533,5241 16544,5042 16239,5126 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 17986,7032 L 18074,6728 17874,6736 17986,7032 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id63">
<rect class="BoundingBox" stroke="none" fill="none" x="11287" y="5445" width="2759" height="1588"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13952,5727 C 13542,6507 11769,5966 11376,6763"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 14017,5445 L 13850,5713 14044,5760 14017,5445 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 11318,7032 L 11482,6762 11287,6717 11318,7032 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id64">
<rect class="BoundingBox" stroke="none" fill="none" x="5126" y="11794" width="10322" height="1589"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15446,11795 C 15446,12986 6253,12276 5220,13200"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 5126,13382 L 5358,13167 5182,13071 5126,13382 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id65">
<rect class="BoundingBox" stroke="none" fill="none" x="17091" y="2905" width="203" height="4128"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 17191,3192 L 17193,6745"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 17191,2905 L 17091,3205 17291,3205 17191,2905 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 17193,7032 L 17293,6732 17093,6732 17193,7032 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id66">
<rect class="BoundingBox" stroke="none" fill="none" x="5761" y="9572" width="699" height="5081"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6000,9723 C 6613,10568 6613,13726 5985,14521"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 5761,9572 L 5957,9820 6067,9652 5761,9572 Z"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 5762,14652 L 6070,14580 5965,14410 5762,14652 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id67">
<rect class="BoundingBox" stroke="none" fill="none" x="6713" y="4491" width="4130" height="1591"/>
<path fill="rgb(128,203,196)" stroke="none" d="M 8778,6080 L 6714,6080 6714,4492 10841,4492 10841,6080 8778,6080 Z"/>
<path fill="none" stroke="rgb(66,66,66)" d="M 8778,6080 L 6714,6080 6714,4492 10841,4492 10841,6080 8778,6080 Z"/>
<text class="SVGTextShape"><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="7184" y="5184"><tspan fill="rgb(0,0,0)" stroke="none">Frame-in-Flight</tspan></tspan></tspan><tspan class="TextParagraph" font-family="DejaVu Sans, sans-serif" font-size="423px" font-weight="400"><tspan class="TextPosition" x="7387" y="5754"><tspan fill="rgb(0,0,0)" stroke="none">Management</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id68">
<rect class="BoundingBox" stroke="none" fill="none" x="5762" y="5186" width="955" height="201"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6715,5286 L 6049,5286"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 5762,5286 L 6062,5386 6062,5186 5762,5286 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id69">
<rect class="BoundingBox" stroke="none" fill="none" x="5761" y="6080" width="3110" height="2859"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5762,8937 C 7773,8937 8671,8081 8770,6368"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 8778,6080 L 8669,6377 8869,6383 8778,6080 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.ConnectorShape">
<g id="id70">
<rect class="BoundingBox" stroke="none" fill="none" x="8777" y="6079" width="6763" height="4129"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8778,6080 C 8778,7032 15212,5041 15440,9935"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 15446,10207 L 15539,9905 15339,9909 15446,10207 Z"/>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</svg>


View File

@@ -0,0 +1,92 @@
JSON Encoder
============
Approach
--------
The JSON encoder exists to support the qlog implementation. There is no
intention to implement a decoder at this time. The encoder is intended to be
driven by immediate calls, without the use of an intermediate syntax tree
representation, and is expected to be zero-allocation in most cases. This
enables highly efficient serialization when called from QUIC code, without
requiring dynamic memory allocation.
An example usage is as follows:
```c
int generate_json(BIO *b)
{
    int ret = 1;
    JSON_ENC z;

    /* Bind the encoder to the output BIO b. */
    if (!ossl_json_init(&z, b, 0))
        return 0;

    /* Emit a single top-level JSON object with three keys. */
    ossl_json_object_begin(&z);
    {
        ossl_json_key(&z, "key");
        ossl_json_str(&z, "value");

        ossl_json_key(&z, "key2");
        ossl_json_u64(&z, 42);

        ossl_json_key(&z, "key3");
        ossl_json_array_begin(&z);
        {
            ossl_json_null(&z);
            ossl_json_f64(&z, 42.0);
            ossl_json_str(&z, "string");
        }
        ossl_json_array_end(&z);
    }
    ossl_json_object_end(&z);

    /* Error checking is deferred; one check at the end is sufficient. */
    if (ossl_json_get_error_flag(&z))
        ret = 0;

    ossl_json_cleanup(&z);
    return ret;
}
```
Because of the zero-allocation, immediate-output design, most API calls
translate directly into output generated at the time of the call; only a small
amount of internal state is tracked. The API guarantees that it will never
generate invalid JSON, with two exceptions:
- it is the caller's responsibility to avoid generating duplicate keys;
- it is the caller's responsibility to provide valid UTF-8 strings.
Since the JSON encoder is for internal use only, its structure is defined in
headers and can be incorporated into other objects without a heap allocation.
The JSON encoder maintains an internal write buffer and a small state tracking
stack (1 bit per level of depth in a JSON hierarchy).
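
As an illustration of how cheap that per-level state can be, the following
sketch (not the actual implementation; all names are hypothetical) stores one
bit per nesting level, which is enough to record, for example, whether the
container at each depth is an object or an array:

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical sketch of a one-bit-per-level stack. With 32 bytes of storage
 * this tracks up to 256 levels of nesting without any heap allocation.
 */
typedef struct bit_stack_st {
    uint8_t bits[32];
    size_t depth;
} BIT_STACK;

static int bit_stack_push(BIT_STACK *s, int bit)
{
    if (s->depth >= sizeof(s->bits) * 8)
        return 0;                                 /* nesting too deep */

    if (bit)
        s->bits[s->depth / 8] |= (uint8_t)(1u << (s->depth % 8));
    else
        s->bits[s->depth / 8] &= (uint8_t)~(1u << (s->depth % 8));

    ++s->depth;
    return 1;
}

/* Returns the bit for the current (innermost) level; depth must be > 0. */
static int bit_stack_top(const BIT_STACK *s)
{
    size_t i = s->depth - 1;

    return (s->bits[i / 8] >> (i % 8)) & 1;
}

static void bit_stack_pop(BIT_STACK *s)
{
    if (s->depth > 0)
        --s->depth;
}
```

Keeping this state in a small fixed-size array is consistent with the
zero-allocation goal described above.
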
JSON-SEQ
--------
The encoder supports JSON-SEQ (RFC 7464), as this is an optimal format for
outputting qlog for our purposes.
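
For reference, RFC 7464 frames each record with a leading RS byte (0x1E) and a
trailing line feed; a minimal framing sketch (illustrative only, not the
encoder's API) looks like this:

```c
#include <stdio.h>

/* JSON-SEQ (RFC 7464) framing: RS byte, then the JSON text, then LF. */
static void write_json_seq_record(FILE *f, const char *json_text)
{
    fputc(0x1E, f);        /* RS (record separator) starts the record */
    fputs(json_text, f);   /* the JSON text itself */
    fputc('\n', f);        /* LF terminates the record */
}
```

Because each record is self-delimiting, a new record can simply be appended to
the output without touching what was written before.
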
Number Handling
---------------
It is an unfortunate reality that many JSON implementations are not able to
handle integers outside the range `[-2**53 + 1, 2**53 - 1]`. This motivated the
I-JSON specification (RFC 7493), which recommends that values outside this
range be encoded as strings.
An optional I-JSON mode is offered; when it is enabled, integers outside this
range are automatically serialized as strings instead.
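
A minimal sketch of that rule is shown below; the helper and macro names are
hypothetical, and the bounds are the I-JSON safe-integer limits quoted above:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* I-JSON (RFC 7493) safe integer bounds: [-2**53 + 1, 2**53 - 1]. */
#define IJSON_INT_MAX ((int64_t)9007199254740991)
#define IJSON_INT_MIN (-IJSON_INT_MAX)

/* Emit v as a plain JSON number if it is I-JSON safe, otherwise as a string. */
static void emit_i64_ijson(FILE *f, int64_t v)
{
    if (v < IJSON_INT_MIN || v > IJSON_INT_MAX)
        fprintf(f, "\"%" PRId64 "\"", v);   /* out of range: quote it */
    else
        fprintf(f, "%" PRId64, v);          /* in range: plain number */
}
```
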
Error Handling
--------------
Error handling is deferred to improve ergonomics. If any call to a JSON encoder
fails, all future calls also fail and the caller is expected to ascertain that
the encoding process failed by calling `ossl_json_get_error_flag`.
API
---
The API is documented in `include/internal/json_enc.h`.

View File

@@ -0,0 +1,136 @@
qlog Support
============
qlog support is formed of two components:
- A qlog API and implementation.
- A JSON encoder API and implementation, which is used by the qlog
implementation.
The API for the JSON encoder is detailed in [a separate document](json-encoder.md).
qlog support will involve instrumenting various functions with qlog logging
code. An example call site will look something like this:
```c
{
    QLOG_EVENT_BEGIN(qlog_instance, quic, parameters_set)
        QLOG_STR("owner", "local")
        QLOG_BOOL("resumption_allowed", 1)
        QLOG_STR("tls_cipher", "AES_128_GCM")
        QLOG_BEGIN("subgroup")
            QLOG_U64("u64_value", 123)
            QLOG_BIN("binary_value", buf, buf_len)
        QLOG_END()
    QLOG_EVENT_END()
}
```
Output Format
-------------
The output format is always the JSON-SEQ qlog variant. This has the advantage
that each event simply involves concatenating another record to an output log
file and does not require nesting of syntactic constructs between events.
Output is written to a directory containing multiple qlog files.
Basic Usage
-----------
Basic usage is in the form of:

- a `QLOG_EVENT_BEGIN` macro which takes a QLOG instance, category name and
  event name. This (category name, event name) tuple is known as the event
  type.
- zero or more macros which log fields inside a qlog event.
- a `QLOG_EVENT_END` macro.

Usage is synchronised across threads on a per-event basis automatically.
API Definition
--------------
API details can be found in `internal/qlog.h`.
Configuration
-------------
qlog must currently be enabled at build time using `enable-unstable-qlog`. If
not enabled, `OPENSSL_NO_QLOG` is defined.
When built with qlog support, qlog can be turned on by setting the environment
variable `QLOGDIR` to the desired output directory. A filter can be defined
using the `OSSL_QFILTER` environment variable. When
enabled, each connection causes a file `{ODCID}_{ROLE}.sqlog` to be created in
the specified directory, where `{ODCID}` is the original initial DCID used for
the connection and `{ROLE}` is `client` or `server`.
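
As a purely hypothetical invocation (directory, filter and program name
invented for illustration), enabling qlog for a client might look like:

```text
QLOGDIR=/tmp/qlog OSSL_QFILTER="quic:packet_sent quic:version_information" ./my-quic-client
```
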
Filters
-------
Each event type can be turned on and off individually.
The filtering is configured using a string with the following syntax, expressed
in ABNF:
```abnf
filter = *filter-term
filter-term = add-sub-term
add-sub-term = ["-" / "+"] specifier
specifier = global-specifier / qualified-specifier
global-specifier = wildcard
qualified-specifier = component-specifier ":" component-specifier
component-specifier = name / wildcard
wildcard = "*"
name = 1*(ALPHA / DIGIT / "_" / "-")
```
Here is a (somewhat nonsensical) example filter:
```text
+* -quic:version_information -* quic:packet_sent
```
The syntax works as follows:
- A filter term is preceded by `-` (disable an event type) or `+` (enable an
event type). If this symbol is omitted, `+` is assumed.
- `+*` (or `*`) enables all event types.
- `-*` disables all event types.
- `+quic:*` (or `quic:*`) enables all event types in the `quic` category.
- `-quic:version_information` disables a specific event type.
- Partial wildcard matches are not supported at this time.
Each term is applied in sequence, so later items in the filter override earlier
items. In the example above, all event types are enabled, then the
`quic:version_information` event is disabled, then all event types are
disabled, and finally the `quic:packet_sent` event is re-enabled.
Some examples of more normal filters include:
- `*` (or `+*`): enable all event types
- `quic:version_information quic:packet_sent`: enable some event types explicitly
- `* -quic:version_information`: enable all event types except certain ones
See also
--------
See the manpage openssl-qlog(7) for additional usage guidance.

View File

@@ -0,0 +1,521 @@
QUIC ACK Manager
================
![(Overview block diagram.)](images/ackm.png "QUIC ACK Manager Block Diagram")
The QUIC ACK manager is responsible for, on the TX side:
- Handling received ACK frames
- Generating notifications that a packet we sent was delivered successfully
- Generating notifications that a packet we sent was lost
- Generating requests for probe transmission
- Providing information on the largest unacked packet number so that packet
numbers in packet headers can be encoded and decoded correctly
On the RX side, it is responsible for:
- Generating ACK frames for later transmission in response to packets we
received
- Providing information on whether a given RX packet number is potentially
duplicate and should not be processed
In order to allow it to perform these tasks, the ACK manager must:
- be notified of all transmitted packets
- be notified of all received datagrams
- be notified of all received packets
- be notified of all received ACK frames
- be notified when a packet number space is discarded
- be notified when its loss detection deadline arrives
The ACK manager consumes:
- an arbitrary function which returns the current time;
- an RTT statistics tracker;
- a congestion controller.
The ACK manager provides the following outputs:
- It indicates the current deadline by which the loss detection
event should be invoked.
- It indicates when probes should be generated.
- It indicates what ACK frames should be generated.
- It indicates the current deadline by which new ACK frames
will be generated, if any.
- It indicates the largest unacknowledged packet number
for a given packet number space.
- It calls a callback for each transmitted packet it is notified
of, specifying whether the packet was successfully acknowledged by the peer,
lost or discarded.
- It may communicate with a congestion controller, causing the
congestion controller to update its state.
- It may communicate with an RTT statistics tracker, causing it
to update its state.
In this document, “the caller” refers to the system which makes use of the ACK
manager.
Utility Definitions
-------------------
There are three QUIC packet number spaces: Initial, Handshake and Application
Data.
```c
/* QUIC packet number spaces. */
#define QUIC_PN_SPACE_INITIAL 0
#define QUIC_PN_SPACE_HANDSHAKE 1
#define QUIC_PN_SPACE_APP 2
#define QUIC_PN_SPACE_NUM 3
```
Packet numbers are 62-bit values represented herein by `QUIC_PN`.
`QUIC_PN_INFINITE` evaluates to an invalid QUIC packet number value.
```c
/* QUIC packet number representation. */
typedef uint64_t QUIC_PN;
#define QUIC_PN_INFINITE UINT64_MAX
```
Instantiation
-------------
The QUIC ACK manager is instantiated as follows:
```c
typedef struct ossl_ackm_st OSSL_ACKM;
OSSL_ACKM *ossl_ackm_new(OSSL_TIME (*now)(void *arg),
void *now_arg,
QUIC_STATM *statm,
OSSL_CC_METHOD *cc_method,
OSSL_CC_DATA *cc_data);
void ossl_ackm_free(OSSL_ACKM *ackm);
```
The function pointer `now` is invoked by the ACK manager to obtain the current
time; `now_arg` is passed to it as the argument. The congestion controller
method and instance passed via `cc_method` and `cc_data` are used by the ACK
manager instance. `statm` points to a
[statistics manager instance](quic-statm.md), which tracks RTT statistics.
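As an illustration, a caller might wire up an ACK manager as in the following
sketch. The `struct app_clock` wrapper and `make_ackm` helper are hypothetical,
and includes of the relevant internal headers are omitted.

```c
/* Hypothetical caller-side clock wrapper, used purely for illustration. */
struct app_clock {
    OSSL_TIME (*now_fn)(void);
};

static OSSL_TIME get_time(void *arg)
{
    struct app_clock *clk = arg;

    return clk->now_fn();
}

static OSSL_ACKM *make_ackm(struct app_clock *clk, QUIC_STATM *statm,
                            OSSL_CC_METHOD *cc_method, OSSL_CC_DATA *cc_data)
{
    /*
     * now/now_arg provide the time source; the statistics manager and
     * congestion controller are consumed as described above.
     */
    return ossl_ackm_new(get_time, clk, statm, cc_method, cc_data);
}
```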
Events
------
The ACK manager state is evolved in response to events provided to the ACK
manager by the caller.
### On TX Packet
This must be called when a packet is transmitted. It does not provide the
payload of the packet, but provides metadata about the packet which is relevant
to the loss detection and acknowledgement process.
The caller is responsible for allocating the structure, and the structure must
remain allocated until one of the callbacks is called or the ACK manager is
freed. It is expected that this structure will usually be freed (or returned to
a pool) in the implementation of one of the callbacks passed by the caller.
Exactly one of the callbacks in the structure will be called over the lifetime
of an `OSSL_ACKM_TX_PKT`, and it will be called only once.
Returns 1 on success.
```c
typedef struct ossl_ackm_tx_pkt_st {
/* The packet number of the transmitted packet. */
QUIC_PN pkt_num;
/* The number of bytes in the packet which was sent. */
size_t num_bytes;
/* The time at which the packet was sent. */
OSSL_TIME time;
/*
* If the packet being described by this structure contains an ACK frame,
* this must be set to the largest PN ACK'd by that frame.
*
     * Otherwise, it should be set to QUIC_PN_INFINITE.
*
* This is necessary to bound the number of PNs we have to keep track of on
* the RX side (RFC 9000 s. 13.2.4). It allows older PN tracking information
* on the RX side to be discarded.
*/
QUIC_PN largest_acked;
/*
* One of the QUIC_PN_SPACE_* values. This qualifies the pkt_num field
* into a packet number space.
*/
unsigned int pkt_space :2;
/* 1 if the packet is in flight. */
unsigned int is_inflight :1;
/* 1 if the packet has one or more ACK-eliciting frames. */
unsigned int is_ack_eliciting :1;
/* 1 if the packet is a PTO probe. */
unsigned int is_pto_probe :1;
/* 1 if the packet is an MTU probe. */
unsigned int is_mtu_probe :1;
/* Callback called if frames in this packet are lost. arg is cb_arg. */
void (*on_lost)(void *arg);
/* Callback called if frames in this packet are acked. arg is cb_arg. */
void (*on_acked)(void *arg);
/*
* Callback called if frames in this packet are neither acked nor lost. arg
* is cb_arg.
*/
void (*on_discarded)(void *arg);
void *cb_arg;
/* (Internal use fields are appended here and must be zero-initialized.) */
} OSSL_ACKM_TX_PKT;
int ossl_ackm_on_tx_packet(OSSL_ACKM *ackm, const OSSL_ACKM_TX_PKT *pkt);
```
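A sketch of how a caller might notify the ACK manager of a transmitted packet
follows. The allocation strategy (heap allocation freed in the callbacks) and
the helper names are illustrative; a real caller might instead return the
structure to a pool.

```c
static void pkt_acked(void *arg)     { OPENSSL_free(arg); }
static void pkt_lost(void *arg)      { /* requeue frames, then */ OPENSSL_free(arg); }
static void pkt_discarded(void *arg) { OPENSSL_free(arg); }

static int notify_tx(OSSL_ACKM *ackm, QUIC_PN pn, size_t len, OSSL_TIME now)
{
    /* Zero-allocation also zero-initialises the internal use fields. */
    OSSL_ACKM_TX_PKT *pkt = OPENSSL_zalloc(sizeof(*pkt));

    if (pkt == NULL)
        return 0;

    pkt->pkt_num          = pn;
    pkt->num_bytes        = len;
    pkt->time             = now;
    pkt->largest_acked    = QUIC_PN_INFINITE; /* no ACK frame in this packet */
    pkt->pkt_space        = QUIC_PN_SPACE_APP;
    pkt->is_inflight      = 1;
    pkt->is_ack_eliciting = 1;
    pkt->on_acked         = pkt_acked;
    pkt->on_lost          = pkt_lost;
    pkt->on_discarded     = pkt_discarded;
    pkt->cb_arg           = pkt;

    return ossl_ackm_on_tx_packet(ackm, pkt);
}
```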
### On RX Datagram
This must be called whenever a datagram is received. A datagram may contain
multiple packets, and this function should be called before the calls to
`ossl_ackm_on_rx_packet`.
The primary use of this function is to inform the ACK manager of new credit to
the anti-amplification budget. Packet and ACK-frame related logic are handled
separately in the subsequent calls to `ossl_ackm_on_rx_packet` and
`ossl_ackm_on_rx_ack_frame`, respectively.
Returns 1 on success.
```c
int ossl_ackm_on_rx_datagram(OSSL_ACKM *ackm, size_t num_bytes);
```
### On RX Packet
This must be called whenever a packet is received. It should be called after
`ossl_ackm_on_rx_datagram` has been called for the datagram containing the packet.
Returns 1 on success.
```c
#define OSSL_ACKM_ECN_NONE 0
#define OSSL_ACKM_ECN_ECT1 1
#define OSSL_ACKM_ECN_ECT0 2
#define OSSL_ACKM_ECN_ECNCE 3
typedef struct ossl_ackm_rx_pkt_st {
/* The packet number of the received packet. */
QUIC_PN pkt_num;
/* The time at which the packet was received. */
OSSL_TIME time;
/*
* One of the QUIC_PN_SPACE_* values. This qualifies the pkt_num field
* into a packet number space.
*/
unsigned int pkt_space :2;
/* 1 if the packet has one or more ACK-eliciting frames. */
unsigned int is_ack_eliciting :1;
/*
* One of the OSSL_ACKM_ECN_* values. This is the ECN labelling applied
* to the received packet. If unknown, use OSSL_ACKM_ECN_NONE.
*/
unsigned int ecn :2;
} OSSL_ACKM_RX_PKT;
int ossl_ackm_on_rx_packet(OSSL_ACKM *ackm, const OSSL_ACKM_RX_PKT *pkt);
```
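A combined sketch of datagram and packet notification follows. The duplicate
check uses `ossl_ackm_is_rx_pn_processable`, described further below, and
packet decoding itself is out of scope here.

```c
static int notify_rx(OSSL_ACKM *ackm, size_t dgram_len,
                     const OSSL_ACKM_RX_PKT *pkts, size_t num_pkts)
{
    size_t i;

    /* Credit the anti-amplification budget for the whole datagram first. */
    if (!ossl_ackm_on_rx_datagram(ackm, dgram_len))
        return 0;

    for (i = 0; i < num_pkts; ++i) {
        /* Skip duplicate or written-off packet numbers (see below). */
        if (!ossl_ackm_is_rx_pn_processable(ackm, pkts[i].pkt_num,
                                            pkts[i].pkt_space))
            continue;

        /* ... process the packet's frames here ... */

        if (!ossl_ackm_on_rx_packet(ackm, &pkts[i]))
            return 0;
    }

    return 1;
}
```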
### On RX ACK Frame
This must be called whenever an ACK frame is received. It should be called
after any call to `ossl_ackm_on_rx_packet`.
The ranges of packet numbers being acknowledged are passed as an argument.
`pkt_space` is one of the `QUIC_PN_SPACE_*` values, specifying the packet number
space of the containing packet. `rx_time` is the time the frame was
received.
This function causes `on_acked` callbacks to be invoked on applicable packets.
Returns 1 on success.
```c
typedef struct ossl_ackm_ack_range_st {
/*
* Represents an inclusive range of packet numbers [start, end].
* start must be <= end.
*/
QUIC_PN start, end;
} OSSL_ACKM_ACK_RANGE;
typedef struct ossl_ackm_ack {
/*
* A sequence of packet number ranges [[start, end]...].
*
* The ranges must be sorted in descending order, for example:
* [ 95, 100]
* [ 90, 92]
* etc.
*
* As such, ack_ranges[0].end is always the highest packet number
* being acknowledged and ack_ranges[num_ack_ranges-1].start is
* always the lowest packet number being acknowledged.
*
* num_ack_ranges must be greater than zero, as an ACK frame must
* acknowledge at least one packet number.
*/
const OSSL_ACKM_ACK_RANGE *ack_ranges;
size_t num_ack_ranges;
OSSL_TIME delay_time;
uint64_t ect0, ect1, ecnce;
/* 1 if the ect0, ect1 and ecnce fields are valid */
char ecn_present;
} OSSL_ACKM_ACK;
int ossl_ackm_on_rx_ack_frame(OSSL_ACKM *ackm, const OSSL_ACKM_ACK *ack,
int pkt_space, OSSL_TIME rx_time);
```
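For illustration, a decoded ACK frame might be passed to the ACK manager as in
the following sketch; the two ranges are invented, and a real decoder would
fill them from the wire-format frame.

```c
static int notify_ack(OSSL_ACKM *ackm, OSSL_TIME rx_time)
{
    /* Descending order: the range containing the highest PN comes first. */
    static const OSSL_ACKM_ACK_RANGE ranges[] = {
        { 95, 100 },
        { 90,  92 },
    };
    OSSL_ACKM_ACK ack = {0};   /* delay_time and ECN left at zero for this sketch */

    ack.ack_ranges     = ranges;
    ack.num_ack_ranges = sizeof(ranges) / sizeof(ranges[0]);
    ack.ecn_present    = 0;

    return ossl_ackm_on_rx_ack_frame(ackm, &ack, QUIC_PN_SPACE_APP, rx_time);
}
```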
### On Packet Space Discarded
This must be called whenever a packet number space is discarded. ACK-tracking
information for the number space is thrown away. Any previously provided
`OSSL_ACKM_TX_PKT` structures have their `on_discarded` callback invoked,
providing an opportunity for them to be freed.
Returns 1 on success.
```c
int ossl_ackm_on_pkt_space_discarded(OSSL_ACKM *ackm, int pkt_space);
```
### On Handshake Confirmed
This should be called by the caller when the QUIC handshake is confirmed. The
Probe Timeout (PTO) algorithm behaves differently depending on whether the QUIC
handshake is confirmed yet.
Returns 1 on success.
```c
int ossl_ackm_on_handshake_confirmed(OSSL_ACKM *ackm);
```
### On Timeout
This must be called whenever the loss detection deadline expires.
```c
int ossl_ackm_on_timeout(OSSL_ACKM *ackm);
```
Queries
-------
These functions allow information about the status of the ACK manager to be
obtained.
### Get Loss Detection Deadline
This returns a deadline after which `ossl_ackm_on_timeout` should be called.
If it is `OSSL_TIME_INFINITY`, no timeout is currently active.
The value returned by this function may change after any call to any of the
event functions above is made.
```c
OSSL_TIME ossl_ackm_get_loss_detection_deadline(OSSL_ACKM *ackm);
```
### Get ACK Frame
This returns a pointer to a `OSSL_ACKM_ACK` structure representing the
information which should be packed into an ACK frame and transmitted.
This generates an ACK frame regardless of whether the ACK manager thinks one
should currently be sent. To determine if the ACK manager thinks an ACK frame
should be sent, use `ossl_ackm_is_ack_desired`, discussed below.
If no new ACK frame is currently needed, NULL is returned. After this function
has been called, calling it again immediately will return NULL.
The structure pointed to by the returned pointer, and the referenced ACK range
structures, are guaranteed to remain valid until the next call to any
`OSSL_ACKM` function. After such a call is made, all fields become undefined.
This function is used to provide ACK frames for acknowledging packets which have
been received and notified to the ACK manager via `ossl_ackm_on_rx_packet`.
Calling this function clears the flag returned by `ossl_ackm_is_ack_desired` and
the deadline returned by `ossl_ackm_get_ack_deadline`.
```c
const OSSL_ACKM_ACK *ossl_ackm_get_ack_frame(OSSL_ACKM *ackm, int pkt_space);
```
### Is ACK Desired
This returns 1 if the ACK manager thinks an ACK frame ought to be generated and
sent at this time. `ossl_ackm_get_ack_frame` will always provide an ACK frame
whether or not this returns 1, so it is suggested that you call this function
first to determine whether you need to generate an ACK frame.
The return value of this function can change based on calls to
`ossl_ackm_on_rx_packet` and based on the passage of time (see
`ossl_ackm_get_ack_deadline`).
```c
int ossl_ackm_is_ack_desired(OSSL_ACKM *ackm, int pkt_space);
```
### Get ACK Deadline
The ACK manager may defer generation of ACK frames to optimize performance. For
example, after a packet requiring acknowledgement is received, it may decide to
wait until a few more packets are received before generating an ACK frame, so
that a single ACK frame can acknowledge all of them. However, if further
packets do not arrive, an ACK frame must be generated anyway within a certain
amount of time.
This function returns the deadline at which the return value of
`ossl_ackm_is_ack_desired` will change to 1, or `OSSL_TIME_INFINITY`, which
means that no deadline is currently applicable. If the deadline has already
passed, it may either return that deadline or `OSSL_TIME_ZERO`.
```c
OSSL_TIME ossl_ackm_get_ack_deadline(OSSL_ACKM *ackm, int pkt_space);
```
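The queries above can be combined into the caller's per-tick logic. The
following sketch assumes the internal `ossl_time_compare` helper is available
and uses a hypothetical `quic_send_ack_frame` function belonging to the caller.

```c
int quic_send_ack_frame(const OSSL_ACKM_ACK *ack);  /* hypothetical */

static int tick(OSSL_ACKM *ackm, OSSL_TIME now, int pkt_space)
{
    const OSSL_ACKM_ACK *ack;

    /* Fire the loss detection timer if its deadline has passed. */
    if (ossl_time_compare(ossl_ackm_get_loss_detection_deadline(ackm),
                          now) <= 0)
        if (!ossl_ackm_on_timeout(ackm))
            return 0;

    /* Emit an ACK frame only when the ACK manager asks for one. */
    if (ossl_ackm_is_ack_desired(ackm, pkt_space)) {
        ack = ossl_ackm_get_ack_frame(ackm, pkt_space);
        if (ack != NULL && !quic_send_ack_frame(ack))
            return 0;
    }

    return 1;
}
```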
### Is RX PN Processable
Returns 1 if the given RX packet number is “processable”. A processable PN is
one that is not either
- duplicate, meaning that we have already been passed such a PN in a call
to `ossl_ackm_on_rx_packet`; or
- written off, meaning that the PN is so old that we have stopped tracking
state for it (meaning we cannot tell whether it is a duplicate and cannot
process it safely).
This should be called for a packet before attempting to process its contents.
Failure to do so may result in processing a duplicated packet in violation
of the RFC.
The return value of this function transitions from 1 to 0 for a given PN once
that PN is passed to `ossl_ackm_on_rx_packet`; therefore, this function must be
called before `ossl_ackm_on_rx_packet`.
```c
int ossl_ackm_is_rx_pn_processable(OSSL_ACKM *ackm, QUIC_PN pn, int pkt_space);
```
### Get Probe Packet
This determines if the ACK manager is requesting any probe packets to be
transmitted.
The caller calls `ossl_ackm_get_probe_request`; on success, the structure
pointed to by `info` is filled and the function returns 1.
The fields of `OSSL_ACKM_PROBE_INFO` record the number of probe requests
of each type which are outstanding. In short:
- `handshake` designates the number of ACK-eliciting Handshake
packets being requested. This is equivalent to
`SendOneAckElicitingHandshakePacket()` in RFC 9002.
- `padded_initial` designates the number of ACK-eliciting
padded Initial packets being requested. This is equivalent to
`SendOneAckElicitingPaddedInitialPacket()` in RFC 9002.
- `pto` designates the number of ACK-eliciting outstanding probe events
corresponding to each packet number space. This is equivalent to
`SendOneOrTwoAckElicitingPackets(pn_space)` in RFC 9002.
Once the caller has processed these requests, it must clear them by calling
`ossl_ackm_get_probe_request` with `clear` set to 1. When `clear` is non-zero,
the current values are returned and then zeroed, so that an immediately
subsequent call to `ossl_ackm_get_probe_request` will return zero values for
all fields.
```c
typedef struct ossl_ackm_probe_info_st {
uint32_t handshake;
uint32_t padded_initial;
uint32_t pto[QUIC_PN_SPACE_NUM];
} OSSL_ACKM_PROBE_INFO;
int ossl_ackm_get_probe_request(OSSL_ACKM *ackm, int clear,
OSSL_ACKM_PROBE_INFO *info);
```
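A sketch of servicing probe requests follows; the `send_probe_*` functions are
hypothetical caller-side helpers which actually build and transmit the probe
packets.

```c
/* Hypothetical caller functions which transmit the requested probes. */
void send_probe_handshake(void);
void send_probe_padded_initial(void);
void send_probe_pto(int pkt_space);

static int service_probes(OSSL_ACKM *ackm)
{
    OSSL_ACKM_PROBE_INFO info;
    uint32_t i;
    int pkt_space;

    /* Retrieve the outstanding requests and clear them in the same call. */
    if (!ossl_ackm_get_probe_request(ackm, /* clear */ 1, &info))
        return 0;

    for (i = 0; i < info.handshake; ++i)
        send_probe_handshake();

    for (i = 0; i < info.padded_initial; ++i)
        send_probe_padded_initial();

    for (pkt_space = 0; pkt_space < QUIC_PN_SPACE_NUM; ++pkt_space)
        for (i = 0; i < info.pto[pkt_space]; ++i)
            send_probe_pto(pkt_space);

    return 1;
}
```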
### Get Largest Unacked Packet Number
This gets the largest unacknowledged packet number in the given packet number
space. The packet number is written to `*pn`. Returns 1 on success.
This is needed so that packet encoders can determine with what length to encode
the abridged packet number in the packet header.
```c
int ossl_ackm_get_largest_unacked(OSSL_ACKM *ackm, int pkt_space, QUIC_PN *pn);
```
Callback Functionality
----------------------
The ACK manager supports optional callback functionality when its deadlines
are updated. By default, the callback functionality is not enabled. To use
the callback functionality, call either or both of the following functions
with a non-NULL function pointer:
```c
void ossl_ackm_set_loss_detection_deadline_callback(OSSL_ACKM *ackm,
void (*fn)(OSSL_TIME deadline,
void *arg),
void *arg);
void ossl_ackm_set_ack_deadline_callback(OSSL_ACKM *ackm,
void (*fn)(OSSL_TIME deadline,
int pkt_space,
void *arg),
void *arg);
```
Callbacks can subsequently be disabled by calling these functions with a NULL
function pointer. The callbacks are not invoked at the time that they are set,
so it is recommended to register them immediately after the call to
`ossl_ackm_new`.
The loss detection deadline callback is called whenever the value returned
by `ossl_ackm_get_loss_detection_deadline` changes.
The ACK deadline callback is called whenever the value returned by
`ossl_ackm_get_ack_deadline` changes for a given packet space.
The `deadline` argument reflects the value which will be newly returned by the
corresponding function. If the configured callback calls either of these
functions, the returned value will reflect the new deadline.
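The following sketch registers both callbacks immediately after construction,
as recommended above; the callback bodies are stubs standing in for the
caller's own timer handling.

```c
static void on_loss_deadline(OSSL_TIME deadline, void *arg)
{
    /* Re-arm the caller's loss detection timer using `deadline`. */
    (void)deadline;
    (void)arg;
}

static void on_ack_deadline(OSSL_TIME deadline, int pkt_space, void *arg)
{
    /* Re-arm the caller's delayed-ACK timer for `pkt_space`. */
    (void)deadline;
    (void)pkt_space;
    (void)arg;
}

static OSSL_ACKM *make_ackm_with_callbacks(OSSL_TIME (*now)(void *),
                                           void *now_arg, QUIC_STATM *statm,
                                           OSSL_CC_METHOD *cc_method,
                                           OSSL_CC_DATA *cc_data)
{
    OSSL_ACKM *ackm = ossl_ackm_new(now, now_arg, statm, cc_method, cc_data);

    if (ackm == NULL)
        return NULL;

    ossl_ackm_set_loss_detection_deadline_callback(ackm, on_loss_deadline,
                                                   NULL);
    ossl_ackm_set_ack_deadline_callback(ackm, on_ack_deadline, NULL);
    return ackm;
}
```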

@@ -0,0 +1,990 @@
Behaviour of SSL functions on QUIC SSL objects
==============================================
This document is a companion to the [QUIC API Overview](./quic-api.md) which
lists all SSL functions and controls and notes their behaviour with QUIC SSL
objects.
The Category column is as follows:
- **Global**:
These API items do not relate to SSL objects. They may be stateless or may
relate only to global state.
This category is also used for APIs implemented only in terms of other public libssl APIs.
- **Object**:
Object management APIs. Some of these may require QUIC-specific implementation.
- **HL**: Handshake layer API.
These calls should generally be dispatched to the handshake layer, unless
they are not applicable to QUIC. Modifications inside the handshake layer
for the QUIC case may or may not be required.
- **CSSM**: Connection/Stream State Machine. API related to lifecycle of a
connection or stream. Needs QUIC-specific implementation.
- **ADP**: App Data Path. Application-side data path API. QUIC-specific
implementation.
- **NDP**: Net Data Path. Network-side data path control API. Also includes I/O
ticking and timeout handling.
- **RL**: Record layer related API. If these API items only relate to the TLS
record layer, they must be disabled for QUIC; if they are also relevant to the
QUIC record layer, they will require QUIC-specific implementation.
- **Async**: Relates to the async functionality.
- **0-RTT**: Relates to early data/0-RTT functionality.
- **Special**: Other calls which defy classification.
The Semantics column is as follows:
- **🟩U**: Unchanged. The semantics of the API are not changed for QUIC.
- **🟧C**: Changed. The semantics of the API are changed for QUIC.
- **🟦N**: New. The API is new for QUIC.
- **🟥TBD**: Yet to be determined if semantic changes will be required.
The Applicability column is as follows:
- **🟦U**: Unrelated. Not applicable to QUIC — fully unrelated (e.g. functions for
other SSL methods).
- **🟥FC**: Not applicable to QUIC (or not currently supported) — fail closed.
- **🟧NO**: Not applicable to QUIC (or not currently supported) — no-op.
- **🟩A**: Applicable.
The Implementation Requirements column is as follows:
- **🟩NC**: No changes are expected to be needed (where marked **\***, dispatch
to handshake layer).
**Note**: Where this value is used with an applicability of **FC** or **NO**,
this means that the desired behaviour is already an emergent consequence of the
existing code.
- **🟨C**: Modifications are expected to be needed (where marked **\***,
dispatch to handshake layer with changes inside the handshake layer).
- **🟧QSI**: QUIC specific implementation.
- **🟥QSA**: QUIC specific API.
The Status column is as follows:
- **🔴Pending Triage**: Have not determined the classification of this API item yet.
- **🟠Design TBD**: It has not yet been determined how this API item will work for
QUIC.
- **🟡TODO**: It has been determined how this API item should work for QUIC but it
has not yet been implemented.
- **🟢Done**: No further work is anticipated to be needed for this API item.
Notes:
- †1: Must restrict which ciphers can be used with QUIC; otherwise, no changes.
- †2: ALPN usage must be mandated; otherwise, no changes.
- †3: NPN usage should be forced off as it should never be used with QUIC;
otherwise, no changes.
- †4: Controls needing changes are listed separately.
- †5: TLS compression and renegotiation must not be used with QUIC, but these
features are already forbidden in
TLS 1.3, which is a requirement for QUIC, thus no changes should be needed.
- †6: Callback specified is called for handshake layer messages (TLSv1.3).
- †7: Tickets are issued using `NEW_TOKEN` frames in QUIC and this will
require handshake layer changes. However these APIs as such do not require
changes.
- †8: Use of post-handshake authentication is prohibited by QUIC.
- †9: QUIC always uses AES-128-GCM initially. We need to determine when and
what ciphers we report as being in use.
- †10: Not supporting async for now.
- †11: Since these functions only configure cipher suite lists used for TLSv1.2,
which is never used for QUIC, they do not require changes, and we can allow
applications to configure these lists freely, as they will be ignored.
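As an illustration of note †2, a QUIC client must configure ALPN before the
handshake. The sketch below uses the public `SSL_set_alpn_protos` API; the
protocol name `h3` is illustrative only.

```c
#include <openssl/ssl.h>

/* Configure ALPN on a QUIC SSL object; returns 1 on success, 0 on failure. */
static int configure_quic_alpn(SSL *ssl)
{
    /* Wire format: one length byte followed by the protocol name. */
    static const unsigned char alpn[] = { 2, 'h', '3' };

    /* Note: SSL_set_alpn_protos() returns 0 on success. */
    return SSL_set_alpn_protos(ssl, alpn, sizeof(alpn)) == 0;
}
```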
| API Item | Cat. | Sema. | Appl. | Impl. Req. | Status |
|----------------------------------------------|---------|-------|-------|------------|--------------|
| **⇒ Global Information and Functions** | | | | | |
| `OSSL_default_cipher_list` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `OSSL_default_ciphersuites` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `ERR_load_SSL_strings` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `OPENSSL_init_ssl` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `OPENSSL_cipher_name` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSL_alert_desc_string` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSL_alert_desc_string_long` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSL_alert_type_string` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSL_alert_type_string_long` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSL_extension_supported` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSL_add_ssl_module` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSL_test_functions` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSL_select_next_proto` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| **⇒ Methods** | | | | | |
| `SSLv3_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSLv3_client_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSLv3_server_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLS_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLS_client_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLS_server_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLSv1_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLSv1_client_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLSv1_server_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLSv1_1_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLSv1_1_client_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLSv1_1_server_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLSv1_2_client_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLSv1_2_server_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `TLSv1_2_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `DTLS_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `DTLS_client_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `DTLS_server_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `DTLSv1_client_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `DTLSv1_server_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `DTLSv1_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `DTLSv1_2_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `DTLSv1_2_client_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `DTLSv1_2_server_method` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `OSSL_QUIC_client_method` | Global | 🟩U | 🟦U | 🟥QSA | 🟢Done |
| `OSSL_QUIC_client_thread_method` | Global | 🟩U | 🟦U | 🟥QSA | 🟢Done |
| `OSSL_QUIC_server_method` | Global | 🟩U | 🟦U | 🟥QSA | 🟠Design TBD |
| **⇒ Instantiation** | | | | | |
| `BIO_f_ssl` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `BIO_new_ssl` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_CTX_new` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_CTX_new_ex` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_CTX_up_ref` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_CTX_free` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_new` | Object | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_dup` | Object | 🟩U | 🟩A | 🟥FC | 🟢Done |
| `SSL_up_ref` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_free` | Object | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_is_dtls` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_CTX_get_ex_data` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_CTX_set_ex_data` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_get_ex_data` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_set_ex_data` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_get_SSL_CTX` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_set_SSL_CTX` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| **⇒ Method Manipulation** | | | | | |
| `SSL_CTX_get_ssl_method` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_get_ssl_method` | Object | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_set_ssl_method` | Object | 🟩U | 🟥FC | 🟧QSI | 🟢Done |
| **⇒ SRTP** | | | | | |
| `SSL_get_selected_srtp_profile` | HL | 🟩U | 🟧NO | 🟨C\* | 🟢Done |
| `SSL_get_srtp_profiles` | HL | 🟩U | 🟧NO | 🟨C\* | 🟢Done |
| `SSL_CTX_set_tlsext_use_srtp` | HL | 🟩U | 🟥FC | 🟨C\* | 🟢Done |
| `SSL_set_tlsext_use_srtp` | HL | 🟩U | 🟥FC | 🟩NC\* | 🟢Done |
| **⇒ Ciphersuite Configuration** | | | | | |
| `SSL_CTX_set_cipher_list` | HL | 🟩U | 🟩A | 🟩NC\* †11 | 🟢Done |
| `SSL_CTX_set_ciphersuites` | HL | 🟩U | 🟩A | 🟨C\* †1 | 🟢Done |
| `SSL_CTX_get_ciphers` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_ciphersuites` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get1_supported_ciphers` | HL | 🟩U | 🟩A | 🟨C\* †1 | 🟢Done |
| `SSL_bytes_to_cipher_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_ciphers` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_cipher_list` | HL | 🟩U | 🟩A | 🟩NC\* †11 | 🟢Done |
| `SSL_set_cipher_list` | HL | 🟩U | 🟩A | 🟩NC\* †11 | 🟢Done |
| **⇒ Negotiated Ciphersuite Queries** | | | | | |
| `SSL_get_current_cipher` | HL | 🟩U | 🟩A | 🟩NC\* †9 | 🟢Done |
| `SSL_get_pending_cipher` | HL | 🟩U | 🟩A | 🟩NC\* †9 | 🟢Done |
| `SSL_get_shared_ciphers` | HL | 🟩U | 🟩A | 🟩NC\* †9 | 🟢Done |
| `SSL_get_client_ciphers` | HL | 🟩U | 🟩A | 🟩NC\* †9 | 🟢Done |
| `SSL_get_current_compression`                | HL      | 🟩U   | 🟩A   | 🟩NC\*     | 🟢Done       |
| `SSL_get_current_expansion` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_shared_sigalgs` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_sigalgs` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_peer_signature_nid` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_peer_signature_type_nid` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_signature_nid` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_signature_type_nid` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ ALPN** | †2 | | | | |
| `SSL_SESSION_set1_alpn_selected` | HL | 🟩U | 🟩A | 🟨C\* †2 | 🟢Done |
| `SSL_SESSION_get0_alpn_selected` | HL | 🟩U | 🟩A | 🟨C\* †2 | 🟢Done |
| `SSL_CTX_set_alpn_select_cb` | HL | 🟩U | 🟩A | 🟨C\* †2 | 🟢Done |
| `SSL_set_alpn_protos` | HL | 🟩U | 🟩A | 🟨C\* †2 | 🟢Done |
| `SSL_get0_alpn_selected` | HL | 🟩U | 🟩A | 🟨C\* †2 | 🟢Done |
| `SSL_CTX_set_alpn_protos` | HL | 🟩U | 🟩A | 🟨C\* †2 | 🟢Done |
| **⇒ NPN** | †3 | | | | |
| `SSL_CTX_set_next_proto_select_cb` | HL | 🟩U | 🟥FC | 🟨C\* †3 | 🟢Done |
| `SSL_CTX_set_next_protos_advertised_cb` | HL | 🟩U | 🟥FC | 🟨C\* †3 | 🟢Done |
| `SSL_get0_next_proto_negotiated` | HL | 🟩U | 🟥FC | 🟩NC\* †3 | 🟢Done |
| **⇒ Narrow Waist Interface** | †4 | | | | |
| `SSL_CTX_ctrl` | Object | 🟩U | 🟩A | 🟩NC\* †4 | 🟢Done |
| `SSL_ctrl` | Object | 🟩U | 🟩A | 🟩NC\* †4 | 🟢Done |
| `SSL_CTX_callback_ctrl` | Object | 🟩U | 🟩A | 🟩NC\* †4 | 🟢Done |
| `SSL_callback_ctrl` | Object | 🟩U | 🟩A | 🟩NC\* †4 | 🟢Done |
| **⇒ Miscellaneous Accessors** | | | | | |
| `SSL_get_server_random` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_client_random` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_finished` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_peer_finished` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Ciphersuite Information** | | | | | |
| `SSL_CIPHER_description` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_find` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_get_auth_nid` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_get_bits` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_get_cipher_nid` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_get_digest_nid` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_get_handshake_digest` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_get_id` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_get_kx_nid` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_get_name` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_get_protocol_id` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_get_version` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_is_aead` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CIPHER_standard_name` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_group_to_name` | Global | 🟩U | 🟦U | 🟩NC\* | 🟢Done |
| **⇒ Version Queries** | | | | | |
| `SSL_get_version` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_version` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_client_version` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Certificate Chain Management** | | | | | |
| `SSL_get_certificate` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_use_certificate` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_certificate_chain_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_use_certificate_chain_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_use_certificate_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_load_verify_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_load_verify_dir` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_load_verify_store` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_load_verify_locations` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_use_cert_and_key`                       | HL      | 🟩U   | 🟩A   | 🟩NC\*     | 🟢Done       |
| `SSL_use_certificate_ASN1` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_use_PrivateKey` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_use_PrivateKey_ASN1` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_use_PrivateKey_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_use_RSAPrivateKey` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_use_RSAPrivateKey_ASN1` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_use_RSAPrivateKey_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_default_verify_dir` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_default_verify_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_default_verify_paths` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_default_verify_store` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_cert_and_key` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_certificate` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_certificate_ASN1` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_certificate_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_PrivateKey` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_PrivateKey_ASN1` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_PrivateKey_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_RSAPrivateKey` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_RSAPrivateKey_ASN1` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_RSAPrivateKey_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_check_chain` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_check_private_key` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_check_private_key` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_add_client_CA` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_add1_to_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_add_dir_cert_subjects_to_stack` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_add_file_cert_subjects_to_stack` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_add_store_cert_subjects_to_stack` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_load_client_CA_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_load_client_CA_file_ex` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_dup_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set0_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get0_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_client_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_add_client_CA` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get0_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get0_certificate` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get0_privatekey` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_cert_store` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set1_cert_store` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_client_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_add1_to_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set0_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_client_cert_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_default_passwd_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_default_passwd_cb_userdata` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_client_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_privatekey` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Certificate Compression** | | | | | |
| `SSL_CTX_set1_cert_comp_preference` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set1_cert_comp_preference` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_compress_certs` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_compress_certs` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set1_compressed_cert` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set1_compressed_cert` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get1_compressed_cert` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get1_compressed_cert` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Certificate Verification** | | | | | |
| `SSL_set1_host` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_add1_host` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_hostflags` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_verify` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_verify` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_verify_depth` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_verify_result` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_verify_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_verify_depth` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_verify_mode` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_verify_result` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get0_peer_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get0_peer_certificate` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get0_verified_chain` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get1_peer_certificate` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_peer_cert_chain` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_peer_certificate` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_certs_clear` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get0_param` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get0_param` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_verify_mode` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_verify_depth` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_verify_depth` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get0_peername` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set1_param` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set1_param` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_purpose` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_purpose` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_trust` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_trust` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ PSK** | | | | | |
| `SSL_use_psk_identity_hint` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_psk_identity_hint` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_psk_client_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_psk_find_session_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_psk_server_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_psk_use_session_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_psk_identity` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_psk_identity_hint` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ SRP** | | | | | |
| `SSL_SRP_CTX_init` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_SRP_CTX_init` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_SRP_CTX_free` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SRP_CTX_free` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_srp_client_pwd_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_srp_password` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_srp_g` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_srp_cb_arg` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_srp_N` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_srp_username_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_srp_username` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_srp_server_param` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_srp_userinfo` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_srp_server_param_with_username` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_srp_strength` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_srp_verify_param_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_srp_server_param_pw` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_srp_username` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SRP_Calc_A_param` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ DANE** | | | | | |
| `SSL_CTX_dane_enable` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get0_dane_tlsa` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_dane_set_flags` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_dane_set_flags` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_dane_clear_flags` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_dane_clear_flags` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get0_dane` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_dane_enable` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get0_dane_authority` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_dane_mtype_set` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_dane_tlsa_add` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Certificate Transparency** | | | | | |
| `SSL_CTX_enable_ct` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_ct_is_enabled` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_ctlog_list_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_default_ctlog_list_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_ct_validation_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set0_ctlog_store` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get0_ctlog_store` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_enable_ct` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_ct_is_enabled` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get0_peer_scts` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_ct_validation_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Compression** | | | | | |
| `SSL_COMP_add_compression_method` | HL | 🟩U | 🟩A | 🟩NC\* †5 | 🟢Done |
| `SSL_COMP_get0_name` | HL | 🟩U | 🟩A | 🟩NC\* †5 | 🟢Done |
| `SSL_COMP_get_compression_methods` | HL | 🟩U | 🟩A | 🟩NC\* †5 | 🟢Done |
| `SSL_COMP_get_id` | HL | 🟩U | 🟩A | 🟩NC\* †5 | 🟢Done |
| `SSL_COMP_get_name` | HL | 🟩U | 🟩A | 🟩NC\* †5 | 🟢Done |
| `SSL_COMP_set0_compression_methods` | HL | 🟩U | 🟩A | 🟩NC\* †5 | 🟢Done |
| **⇒ Exporters** | | | | | |
| `SSL_export_keying_material` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_export_keying_material_early` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Renegotiation** | | | | | |
| `SSL_renegotiate` | HL | 🟩U | 🟥FC | 🟩NC\* †5 | 🟢Done |
| `SSL_renegotiate_abbreviated` | HL | 🟩U | 🟥FC | 🟩NC\* †5 | 🟢Done |
| `SSL_renegotiate_pending` | HL | 🟩U | 🟧NO | 🟩NC\* †5 | 🟢Done |
| **⇒ Options** | | | | | |
| `SSL_CTX_clear_options` | HL | 🟩U | 🟩A | 🟨C\* | 🟢Done |
| `SSL_CTX_set_options` | HL | 🟩U | 🟩A | 🟨C\* | 🟢Done |
| `SSL_CTX_get_options` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_clear_options` | HL | 🟩U | 🟩A | 🟨C\* | 🟢Done |
| `SSL_set_options` | HL | 🟩U | 🟩A | 🟨C\* | 🟢Done |
| `SSL_get_options` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Configuration** | | | | | |
| `SSL_CONF_CTX_new` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CONF_CTX_free` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CONF_CTX_set_ssl` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CONF_CTX_set_ssl_ctx` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CONF_CTX_set1_prefix` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CONF_CTX_set_flags` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CONF_CTX_clear_flags` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CONF_CTX_finish` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CONF_cmd` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CONF_cmd_argv` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CONF_cmd_value_type` | Global | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_config` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_config` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Callbacks** | | | | | |
| `SSL_CTX_set_cert_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_cert_store` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_cert_verify_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_client_CA_list` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_client_cert_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_client_cert_engine` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_client_hello_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_cookie_generate_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_cookie_verify_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_default_passwd_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_default_passwd_cb_userdata` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_default_read_buffer_len` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_info_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_info_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_info_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_info_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_msg_callback` | HL | 🟩U | 🟩A | 🟩NC\* †6 | 🟢Done |
| `SSL_set_cert_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_default_passwd_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_default_passwd_cb_userdata` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_default_passwd_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_default_passwd_cb_userdata` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_keylog_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_keylog_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_psk_client_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_psk_find_session_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_psk_server_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_psk_use_session_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_verify_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_not_resumable_session_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_not_resumable_session_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_session_secret_cb` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| **⇒ Session Management** | | | | | |
| `d2i_SSL_SESSION` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `i2d_SSL_SESSION` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `PEM_read_bio_SSL_SESSION` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `PEM_read_SSL_SESSION` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `PEM_write_bio_SSL_SESSION` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `PEM_write_SSL_SESSION` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_new` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_up_ref` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_dup` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_free` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_print` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_print_fp` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_print_keylog` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get0_cipher` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_set_cipher` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get0_hostname` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_set1_hostname` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get0_id_context` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_set1_id_context` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get0_peer` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get0_ticket` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get0_ticket_appdata` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_set1_ticket_appdata` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_has_ticket` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get_protocol_version` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_set_protocol_version` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get_compress_id` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get_id` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_set1_id` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get_time` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_set_time` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get_timeout` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_set_timeout` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get_ex_data` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_set_ex_data` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get_master_key` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_is_resumable` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get_max_early_data` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get_max_fragment_length` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_SESSION_get_ticket_lifetime_hint` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_add_session` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_remove_session` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get1_session` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_session` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_session` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_sess_get_get_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_sess_set_get_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_sess_get_new_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_sess_set_new_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_sess_get_remove_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_sess_set_remove_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_session_id_context` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_session_id_context` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_generate_session_id` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_generate_session_id` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_has_matching_session_id` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_flush_sessions` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_session_reused` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_timeout` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_timeout` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_default_timeout` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_sessions` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Session Ticket Management** | | | | | |
| `SSL_get_num_tickets` | HL | 🟩U | 🟩A | 🟩NC\* †7 | 🟢Done |
| `SSL_set_num_tickets` | HL | 🟩U | 🟩A | 🟩NC\* †7 | 🟢Done |
| `SSL_CTX_get_num_tickets` | HL | 🟩U | 🟩A | 🟩NC\* †7 | 🟢Done |
| `SSL_CTX_set_num_tickets` | HL | 🟩U | 🟩A | 🟩NC\* †7 | 🟢Done |
| `SSL_new_session_ticket` | HL | 🟩U | 🟩A | 🟩NC\* †7 | 🟢Done |
| `SSL_set_session_ticket_ext` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_session_ticket_ext_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_tlsext_ticket_key_evp_cb` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Security Levels** | | | | | |
| `SSL_CTX_get_security_level` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_security_level` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_security_level` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_security_level` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get_security_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_security_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_security_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_security_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_get0_security_ex_data` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set0_security_ex_data` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get0_security_ex_data` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set0_security_ex_data` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Custom Extensions** | | | | | |
| `SSL_CTX_add_custom_ext` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_add_client_custom_ext` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_add_server_custom_ext` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_has_client_custom_ext` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Early ClientHello Processing** | | | | | |
| `SSL_client_hello_get_extension_order` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_client_hello_get0_ciphers` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_client_hello_get0_compression_methods` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_client_hello_get0_ext` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_client_hello_get0_legacy_version` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_client_hello_get0_random` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_client_hello_get0_session_id` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_client_hello_get1_extensions_present` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_client_hello_isv2` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ SNI** | | | | | |
| `SSL_get_servername` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_servername_type` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Server Info** | | | | | |
| `SSL_CTX_use_serverinfo` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_serverinfo_ex` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_use_serverinfo_file` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Post-Handshake Authentication** | | | | | |
| `SSL_verify_client_post_handshake` | HL | 🟩U | 🟥FC | 🟨C* †8 | 🟢Done |
| `SSL_CTX_set_post_handshake_auth` | HL | 🟩U | 🟥FC | 🟨C* †8 | 🟢Done |
| `SSL_set_post_handshake_auth` | HL | 🟩U | 🟥FC | 🟨C* †8 | 🟢Done |
| **⇒ DH Parameters** | | | | | |
| `SSL_CTX_set_dh_auto` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_dh_auto` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set0_tmp_dh_pkey` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set0_tmp_dh_pkey` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_tmp_dh_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_tmp_dh_callback` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_CTX_set_tmp_dh` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_tmp_dh` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ State Queries** | | | | | |
| `SSL_in_init` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_in_before` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_is_init_finished` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_get_state` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_rstate_string` | HL | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_rstate_string_long` | HL | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_state_string` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_state_string_long` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Data Path and CSSM** | | | | | |
| `SSL_set_connect_state` | CSSM | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_set_accept_state` | CSSM | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_is_server` | CSSM | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_peek` | ADP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_peek_ex` | ADP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_read` | ADP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_read_ex` | ADP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_write` | ADP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_write_ex` | ADP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_sendfile` | ADP | 🟩U | 🟥FC | 🟩NC\* | 🟢Done |
| `SSL_pending` | ADP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_has_pending` | ADP | 🟧C | 🟩A | 🟧QSI | 🟢Done |
| `SSL_accept` | CSSM | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_connect` | CSSM | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_do_handshake` | CSSM | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_set0_wbio` | NDP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_set0_rbio` | NDP | 🟧C | 🟩A | 🟧QSI | 🟢Done |
| `SSL_set_bio` | NDP | 🟧C | 🟩A | 🟧QSI | 🟢Done |
| `SSL_get_wbio` | NDP | 🟧C | 🟩A | 🟧QSI | 🟢Done |
| `SSL_get_rbio` | NDP | 🟧C | 🟩A | 🟧QSI | 🟢Done |
| `SSL_get_error` | NDP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_get_rfd` | NDP | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_get_wfd` | NDP | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_get_fd` | NDP | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_set_rfd` | NDP | 🟧C | 🟩A | 🟧QSI | 🟢Done |
| `SSL_set_wfd` | NDP | 🟧C | 🟩A | 🟧QSI | 🟢Done |
| `SSL_set_fd` | NDP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_key_update` | RL | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_get_key_update_type` | RL | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_clear` (connection) | CSSM | 🟩U | 🟥FC | 🟧QSI | 🟢Done |
| `SSL_clear` (stream) | CSSM | 🟩U | 🟥FC | 🟧QSI | 🟢Done |
| `SSL_shutdown` | CSSM | 🟧C | 🟩A | 🟧QSI | 🟢Done |
| `SSL_want` | ADP | 🟧C | 🟩A | 🟧QSI | 🟢Done |
| `BIO_new_ssl_connect` | Global | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `BIO_new_buffer_ssl_connect` | Global | 🟩U | 🟦U | 🟧QSI | 🟢Done |
| `SSL_get_shutdown` | CSSM | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_set_shutdown` | CSSM | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| **⇒ New APIs** | | | | | |
| `SSL_is_tls` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_is_quic` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_handle_events` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_event_timeout` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_blocking_mode` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_set_blocking_mode` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_rpoll_descriptor` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_wpoll_descriptor` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_net_read_desired` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_net_write_desired` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_set1_initial_peer_addr` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_shutdown_ex` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_stream_conclude` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_stream_reset` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_stream_read_state` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_stream_write_state` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_stream_read_error_code` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_stream_write_error_code` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_conn_close_info` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_inject_net_dgram` | NDP | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| **⇒ New APIs for Multi-Stream** | | | | | |
| `SSL_get0_connection` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_is_connection` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_stream_id` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_stream_type` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_is_stream_local` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_new_stream` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_accept_stream` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_get_accept_stream_queue_len` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_set_default_stream_mode` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| `SSL_set_incoming_stream_policy` | CSSM | 🟦N | 🟩A | 🟥QSA | 🟢Done |
| **⇒ Currently Not Supported** | | | | | |
| `SSL_copy_session_id` | Special | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `BIO_ssl_copy_session_id` | Special | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_CTX_set_quiet_shutdown` | CSSM | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSL_CTX_get_quiet_shutdown` | CSSM | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSL_set_quiet_shutdown` | CSSM | 🟩U | 🟥FC | 🟨C | 🟢Done |
| `SSL_get_quiet_shutdown` | CSSM | 🟩U | 🟧NO | 🟨C | 🟢Done |
| `SSL_CTX_set_ssl_version` | HL | 🟩U | 🟥FC | 🟨C | 🟢Done |
| **⇒ Async** | | | | | |
| `SSL_CTX_set_async_callback` | Async | 🟩U | 🟧NO | 🟩NC* †10 | 🟢Done |
| `SSL_set_async_callback` | Async | 🟩U | 🟧NO | 🟩NC* †10 | 🟢Done |
| `SSL_CTX_set_async_callback_arg` | Async | 🟩U | 🟧NO | 🟩NC* †10 | 🟢Done |
| `SSL_set_async_callback_arg` | Async | 🟩U | 🟧NO | 🟩NC* †10 | 🟢Done |
| `SSL_waiting_for_async` | Async | 🟩U | 🟧NO | 🟩NC* †10 | 🟢Done |
| `SSL_get_async_status` | Async | 🟩U | 🟧NO | 🟩NC* †10 | 🟢Done |
| `SSL_get_all_async_fds` | Async | 🟩U | 🟧NO | 🟩NC* †10 | 🟢Done |
| `SSL_get_changed_async_fds` | Async | 🟩U | 🟧NO | 🟩NC* †10 | 🟢Done |
| **⇒ Readahead** | | | | | |
| `SSL_CTX_get_default_read_ahead` | RL | 🟩U | 🟧NO | 🟩NC* | 🟢Done |
| `SSL_CTX_get_read_ahead` | RL | 🟩U | 🟧NO | 🟩NC* | 🟢Done |
| `SSL_CTX_set_read_ahead` | RL | 🟩U | 🟧NO | 🟨C* | 🟢Done |
| `SSL_get_read_ahead` | RL | 🟩U | 🟧NO | 🟨C* | 🟢Done |
| `SSL_set_read_ahead` | RL | 🟩U | 🟧NO | 🟨C* | 🟢Done |
| `SSL_CTX_set_default_read_buffer_len` | RL | 🟩U | 🟧NO | 🟩NC* | 🟢Done |
| `SSL_set_default_read_buffer_len` | RL | 🟩U | 🟧NO | 🟨C* | 🟢Done |
| **⇒ Record Padding and Fragmentation** | | | | | |
| `SSL_CTX_set_record_padding_callback` | RL | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_set_record_padding_callback` | RL | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_CTX_get_record_padding_callback_arg` | RL | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_CTX_set_record_padding_callback_arg` | RL | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_get_record_padding_callback_arg` | RL | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_set_record_padding_callback_arg` | RL | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_CTX_set_block_padding` | RL | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_set_block_padding` | RL | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_CTX_set_tlsext_max_fragment_length` | RL | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_set_tlsext_max_fragment_length` | RL | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| **⇒ Stateless/HelloRetryRequest** | | | | | |
| `SSL_stateless` | RL | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_CTX_set_stateless_cookie_generate_cb` | RL | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_CTX_set_stateless_cookie_verify_cb` | RL | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| **⇒ Early Data/0-RTT** | | | | | |
| `SSL_CTX_set_allow_early_data_cb` | 0-RTT | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_set_allow_early_data_cb` | 0-RTT | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_CTX_get_recv_max_early_data` | 0-RTT | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_CTX_set_recv_max_early_data` | 0-RTT | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_get_recv_max_early_data` | 0-RTT | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_set_recv_max_early_data` | 0-RTT | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_CTX_get_max_early_data` | 0-RTT | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_CTX_set_max_early_data` | 0-RTT | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_get_max_early_data` | 0-RTT | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_set_max_early_data` | 0-RTT | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_read_early_data` | 0-RTT | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_write_early_data` | 0-RTT | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_get_early_data_status` | 0-RTT | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| **⇒ Miscellaneous** | | | | | |
| `DTLSv1_listen` | RL | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `DTLS_set_timer_cb` | NDP | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `DTLS_get_data_mtu` | NDP | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `SSL_get_ex_data_X509_STORE_CTX_idx` | Global | 🟩U | 🟦U | 🟩NC | 🟢Done |
| `BIO_ssl_shutdown` | Global | 🟩U | 🟩A | 🟩NC | 🟢Done |
| `SSL_alloc_buffers` | HL | 🟩U | 🟩A | 🟨C\* | 🟢Done |
| `SSL_free_buffers` | HL | 🟩U | 🟩A | 🟨C\* | 🟢Done |
| `SSL_trace` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| `SSL_set_debug` | HL | 🟩U | 🟩A | 🟩NC\* | 🟢Done |
| **⇒ Controls** | | | | | |
| `SSL_CTRL_MODE` | Special | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_CTRL_CLEAR_MODE` | Special | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_CTRL_CLEAR_NUM_RENEGOTIATIONS` | HL | 🟩U | 🟧NO | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_NUM_RENEGOTIATIONS` | HL | 🟩U | 🟧NO | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_TOTAL_RENEGOTIATIONS` | HL | 🟩U | 🟧NO | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_RI_SUPPORT` | HL | 🟩U | 🟧NO | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_READ_AHEAD` | HL | 🟩U | 🟧NO | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_READ_AHEAD` | HL | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_CTRL_SET_MAX_PIPELINES` | RL | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_CTRL_SET_MAX_SEND_FRAGMENT` | RL | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_CTRL_SET_SPLIT_SEND_FRAGMENT` | RL | 🟩U | 🟥FC | 🟨C* | 🟢Done |
| `SSL_CTRL_SET_MTU` | RL | 🟩U | 🟥FC | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_MAX_PROTO_VERSION` | HL | 🟩U | 🟩A | 🟨C* | 🟢Done |
| `SSL_CTRL_SET_MIN_PROTO_VERSION` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_MAX_PROTO_VERSION` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_MIN_PROTO_VERSION` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_BUILD_CERT_CHAIN` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_CERT_FLAGS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_CHAIN` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_CHAIN_CERT` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_CLEAR_CERT_FLAGS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_CLEAR_EXTRA_CHAIN_CERTS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_EXTRA_CHAIN_CERT` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_CHAIN_CERTS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_CHAIN_CERT_STORE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_CLIENT_CERT_REQUEST` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_CLIENT_CERT_TYPES` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_EC_POINT_FORMATS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_EXTMS_SUPPORT` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_EXTRA_CHAIN_CERTS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_FLAGS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_GROUPS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_IANA_GROUPS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_MAX_CERT_LIST` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_NEGOTIATED_GROUP` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_PEER_SIGNATURE_NID` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_PEER_TMP_KEY` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_RAW_CIPHERLIST` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_SESS_CACHE_MODE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_SESS_CACHE_SIZE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_SHARED_GROUP` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_SIGNATURE_NID` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB_ARG` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_TLSEXT_STATUS_REQ_EXTS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_TLSEXT_STATUS_REQ_IDS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_TLSEXT_STATUS_REQ_OCSP_RESP` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_TLSEXT_STATUS_REQ_TYPE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_TLSEXT_TICKET_KEYS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_TMP_KEY` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_GET_VERIFY_CERT_STORE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SELECT_CURRENT_CERT` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_ACCEPT` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_ACCEPT_GOOD` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_ACCEPT_RENEGOTIATE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_CACHE_FULL` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_CB_HIT` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_CONNECT` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_CONNECT_GOOD` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_CONNECT_RENEGOTIATE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_HIT` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_MISSES` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_NUMBER` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SESS_TIMEOUTS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_CHAIN_CERT_STORE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_CLIENT_CERT_TYPES` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_CLIENT_SIGALGS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_CLIENT_SIGALGS_LIST` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_CURRENT_CERT` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_DH_AUTO` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_GROUPS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_GROUPS_LIST` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_MAX_CERT_LIST` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_MSG_CALLBACK` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_MSG_CALLBACK_ARG` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_NOT_RESUMABLE_SESS_CB` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_RETRY_VERIFY` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_SESS_CACHE_MODE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_SESS_CACHE_SIZE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_SIGALGS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_SIGALGS_LIST` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_SRP_ARG` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_SRP_GIVE_CLIENT_PWD_CB` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_SRP_VERIFY_PARAM_CB` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_DEBUG_ARG` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_DEBUG_CB` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_HOSTNAME` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_SERVERNAME_ARG` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_SERVERNAME_CB` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLS_EXT_SRP_PASSWORD` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLS_EXT_SRP_STRENGTH` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLS_EXT_SRP_USERNAME` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLS_EXT_SRP_USERNAME_CB` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_STATUS_REQ_CB` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_STATUS_REQ_CB_ARG` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_STATUS_REQ_EXTS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_STATUS_REQ_IDS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_STATUS_REQ_OCSP_RESP` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_STATUS_REQ_TYPE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TLSEXT_TICKET_KEYS` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TMP_DH` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TMP_DH_CB` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_TMP_ECDH` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| `SSL_CTRL_SET_VERIFY_CERT_STORE` | HL | 🟩U | 🟩A | 🟩NC* | 🟢Done |
| **⇒ SSL Modes** | | | | | |
| `SSL_MODE_ENABLE_PARTIAL_WRITE` | ADP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER` | ADP | 🟩U | 🟩A | 🟧QSI | 🟢Done |
| `SSL_MODE_RELEASE_BUFFERS` | ADP | 🟩U | 🟧NO | 🟩NC | 🟢Done |
| `SSL_MODE_ASYNC` | ADP | 🟩U | 🟧NO | 🟩NC | 🟢Done |
| `SSL_MODE_AUTO_RETRY` | ADP | 🟩U | 🟧NO | 🟩NC | 🟢Done |
| `SSL_MODE_SEND_FALLBACK_SCSV` | HL | 🟩U | 🟩U | 🟩NC | 🟢Done |
Q&A For TLS-Related Calls
-------------------------
### What should `SSL_get_current_cipher`, `SSL_get_pending_cipher`, etc. do?
QUIC always uses AES-128-GCM for Initial packets. At that point the handshake
layer has not yet negotiated a ciphersuite, so it has no "current" cipher. We
could return AES-128-GCM here, but it seems reasonable to return NULL instead,
as Initial packet protection exists mainly to guard against accidental
modification rather than to provide "real" encryption. From the perspective of
the handshake layer, encryption is not active yet. An application using QUIC can
always interpret NULL as meaning AES-128-GCM is in use if needed, since this is
implied by QUIC itself.
A. We return NULL here, because it allows applications to detect if a
ciphersuite has been negotiated and NULL can be used to infer that Initial
encryption is still being used. This also minimises the changes needed to the
implementation.
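For illustration, a minimal sketch (assuming a hypothetical `report_cipher`
helper) of how an application might apply this policy:

```c
#include <stdio.h>
#include <openssl/ssl.h>

/* Hypothetical helper: report the ciphersuite for a QUIC connection, treating
 * NULL as "Initial encryption still in use (AES-128-GCM implied by QUIC)". */
static void report_cipher(SSL *ssl)
{
    const SSL_CIPHER *cipher = SSL_get_current_cipher(ssl);

    if (cipher == NULL)
        printf("no ciphersuite negotiated yet (Initial encryption)\n");
    else
        printf("negotiated ciphersuite: %s\n", SSL_CIPHER_get_name(cipher));
}
```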
### What should `SSL_CTX_set_cipher_list` do?
Since this function configures the cipher list for TLSv1.2 and below only, there
is no need to restrict it as TLSv1.3 is required for QUIC. For the sake of
application compatibility, applications can still configure the TLSv1.2 cipher
list; it will always be ignored. This function can still be used to set the
SECLEVEL; no changes are needed to facilitate this.
### What SSL options should be supported?
Options we explicitly want to support (a configuration sketch follows after these lists):
- `SSL_OP_CIPHER_SERVER_PREFERENCE`
- `SSL_OP_DISABLE_TLSEXT_CA_NAMES`
- `SSL_OP_NO_TX_CERTIFICATE_COMPRESSION`
- `SSL_OP_NO_RX_CERTIFICATE_COMPRESSION`
- `SSL_OP_PRIORITIZE_CHACHA`
- `SSL_OP_NO_TICKET`
- `SSL_OP_CLEANSE_PLAINTEXT`
Options we do not yet support but could support in the future, currently no-ops:
- `SSL_OP_NO_QUERY_MTU`
- `SSL_OP_NO_ANTI_REPLAY`
The following options must be explicitly forbidden:
- `SSL_OP_NO_TLSv1_3` — TLSv1.3 is required for QUIC
- `SSL_OP_ENABLE_MIDDLEBOX_COMPAT` — forbidden by QUIC RFCs
- `SSL_OP_ENABLE_KTLS` — not currently supported for QUIC
- `SSL_OP_SAFARI_ECDHE_ECDSA_BUG`
- `SSL_OP_TLSEXT_PADDING`
- `SSL_OP_TLS_ROLLBACK_BUG`
- `SSL_OP_IGNORE_UNEXPECTED_EOF`
- `SSL_OP_ALLOW_NO_DHE_KEX`
The following options are ignored for TLSv1.3 or are otherwise not applicable,
and may therefore be set but have no effect. We take this approach on the
grounds that it is harmless and applications may want to confirm that options
have been set correctly for protocols unrelated to QUIC:
- `SSL_OP_CRYPTOPRO_TLSEXT_BUG`
- `SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS`
- `SSL_OP_ALLOW_CLIENT_RENEGOTIATION`
- `SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION`
- `SSL_OP_CISCO_ANYCONNECT`
- `SSL_OP_COOKIE_EXCHANGE`
- `SSL_OP_LEGACY_SERVER_CONNECT`
- `SSL_OP_NO_COMPRESSION`
- `SSL_OP_NO_ENCRYPT_THEN_MAC`
- `SSL_OP_NO_EXTENDED_MASTER_SECRET`
- `SSL_OP_NO_RENEGOTIATION`
- `SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION`
- `SSL_OP_NO_SSLv3`
- `SSL_OP_NO_TLSv1`
- `SSL_OP_NO_TLSv1_1`
- `SSL_OP_NO_TLSv1_2`
- `SSL_OP_NO_DTLSv1`
- `SSL_OP_NO_DTLSv1_2`
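As an illustration of the intended application-visible behaviour, a sketch of
setting a few of the supported options on a QUIC `SSL_CTX`; the helper name and
the particular option selection are assumptions, not a recommended
configuration:

```c
#include <stdint.h>
#include <openssl/ssl.h>

/* Hypothetical helper: set some of the supported options on a QUIC SSL_CTX.
 * Options from the "ignored" lists above may also be set harmlessly. */
static int configure_quic_options(SSL_CTX *ctx)
{
    uint64_t opts = SSL_OP_CLEANSE_PLAINTEXT
                    | SSL_OP_NO_TICKET
                    | SSL_OP_PRIORITIZE_CHACHA;

    SSL_CTX_set_options(ctx, opts);

    /* Confirm the options were recorded. */
    return (SSL_CTX_get_options(ctx) & opts) == opts;
}
```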
### What should `SSL_rstate_string` and `SSL_state_string` do?
`SSL_state_string` is highly handshake-layer specific, so it makes sense to
simply forward it to the handshake layer.
`SSL_rstate_string` is record-layer specific. A cursory evaluation of usage via
GitHub code search did not identify much usage of this function other than for
debug output; i.e., there seems to be little usage which depends on the output
for the purposes of control flow. Since there is no direct correspondence to the
QUIC record layer, we conservatively define the output of this function as
"unknown".
TODO: forbid NPN
TODO: enforce TLSv1.3
TODO: forbid PHA - DONE
TODO: forbid middlebox compat mode in a deeper way?
TODO: new_session_ticket doesn't need modifying as such, but ticket machinery
will
### What should `SSL_pending` and `SSL_has_pending` do?
`SSL_pending` traditionally yields the number of bytes buffered inside an SSL
object which are available for immediate reading. For QUIC, we can simply make
this report the current size of the receive stream buffer.
`SSL_has_pending` returns a boolean value indicating whether there is processed
or unprocessed incoming data pending. There is no direct correspondence to
QUIC, so there are various implementation options:
- `SSL_pending() > 0`
- `SSL_pending() > 0 || pending URXEs or RXEs exist`
The latter can probably be viewed as more of a direct correspondence to the
design intent of the API, so we go with this.
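Illustrative application-side usage consistent with the above (the helper name
and buffer size are arbitrary); it drains buffered stream data with
`SSL_pending` and then consults `SSL_has_pending`:

```c
#include <openssl/ssl.h>

/* Hypothetical helper: drain whatever stream data is already buffered, then
 * check whether further (possibly unprocessed) data is pending. */
static void drain_buffered_data(SSL *ssl)
{
    unsigned char buf[512];
    int n;

    while (SSL_pending(ssl) > 0) {
        n = SSL_read(ssl, buf, sizeof(buf));
        if (n <= 0)
            break;
        /* ... consume n bytes ... */
    }

    if (SSL_has_pending(ssl)) {
        /* Unprocessed incoming data still exists; poll/tick again later. */
    }
}
```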
### What should `SSL_alloc_buffers` and `SSL_free_buffers` do?
These do not really correspond to our internal architecture for QUIC. Since
internal buffers are always available, `SSL_alloc_buffers` can simply always
return 1. `SSL_free_buffers` can always return 0, as though the buffers are in
use, which they generally will be.
### What should `SSL_key_update` and `SSL_get_key_update_type` do?
`SSL_key_update` can trigger a TX record layer key update, which will cause the
peer to respond with a key update in turn. The update occurs asynchronously
at next transmission, not immediately.
`SSL_get_key_update_type` returns an enumerated value which is only relevant to
the TLSv1.3 protocol; for QUIC, it will always return `SSL_KEY_UPDATE_NONE`.
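Illustrative usage under the behaviour described above; the helper name is
arbitrary:

```c
#include <openssl/ssl.h>

/* Hypothetical helper: request a TX key update on a QUIC connection. The
 * update is applied at the next transmission; for QUIC,
 * SSL_get_key_update_type() is expected to always report SSL_KEY_UPDATE_NONE. */
static int request_key_update(SSL *ssl)
{
    if (!SSL_key_update(ssl, SSL_KEY_UPDATE_REQUESTED))
        return 0;

    return SSL_get_key_update_type(ssl) == SSL_KEY_UPDATE_NONE;
}
```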
### What should `SSL_MODE_AUTO_RETRY` do?
The absence of `SSL_MODE_AUTO_RETRY` causes `SSL_read`/`SSL_write` on a normal
TLS connection to potentially return due to internal handshake message
processing. This does not really make sense for our QUIC implementation,
therefore we always act as though `SSL_MODE_AUTO_RETRY` is on, and this mode is
ignored.
### What should `SSL_MODE_SEND_FALLBACK_SCSV` do?
This is not relevant to QUIC because this functionality relates to protocol
version downgrade attack protection and QUIC only supports TLSv1.3. Thus,
it is ignored.
### What should `SSL_CTX_set_ssl_version` do?
This is a deprecated function, so it needn't be supported for QUIC. Fail closed.
### What should `SSL_set_ssl_method` do?
We do not currently support this for QUIC.
### What should `SSL_set_shutdown` do?
This is not supported and is a no-op for QUIC.
### What should `SSL_dup` and `SSL_clear` do?
These may be tricky to support. Currently they are blocked.
@@ -0,0 +1,70 @@
QUIC Route Requirements
=======================
* Two connection IDs -- one local, one remote
MVP
---
MVP does most of one side of the CID management. The major outstanding items
for a complete implementation are:
* possibly increase the number of CIDs we permit (from 2)
* use more than just the latest CID for packet transmission
* round-robin non-retired CIDs
Non-zero Length Connection ID
-----------------------------
MVP does not issue multiple CIDs; instead it uses a single zero-length CID.
To support non-zero-length CIDs, more work is required:
* creation of new CIDs (coded but not used)
* responding to new CIDs from our peer by issuing new CIDs of our own to match
* managing the number of CIDs presented to our peer
* limiting the number of CIDs issued & retired
* retirement of CIDs that are no longer being used
* ensuring only one retire connection ID frame is in flight
Connection Migration
--------------------
* Supporting migration goes well beyond CID management. The additions required
to the CID code should be undertaken when/if connection migration is
supported, i.e. do this later in a just-in-time manner.
Retiring Connection ID
----------------------
When a remote asks to retire a connection ID (RETIRE_CONNECTION_ID) we have to:
* Send retirement acks for all retired CIDs
* Immediately delete all CIDs and routes associated with these CIDs
* Retransmits use a different route, so they are good.
* Out-of-order delivery will initiate retransmits
* Should respond with a NEW_CONNECTION_ID frame if we are low on CIDs
* Not sure if it is mandatory to send a retirement.
When a remote creates a new connection ID:
* May respond with a new connection ID frame (it's a good idea)
* It reads like the NEW_CONNECTION_ID frame can't be used to retire routes.
However, see above. Suggest we accept either.
When we want to retire one (or more) connection IDs we have to:
* Flag the route(s) as retired
* Send a retirement frame (RETIRE_CONNECTION_ID)
* Delete the connection(s) once they are retired by our peer (either
NEW_CONNECTION_ID or RETIRE_CONNECTION_ID can do this)
State
-----
* routes we've retired, until they are acked as being retired
(uint64_t max CID); a minimal sketch of this bookkeeping appears after this list
* routes our peer has retired don't need tracking; we can remove them immediately
* retired routes where we have outstanding data to send will have that data
sent before the retirement acks are sent. If these fragments need to be
retransmitted, they'll be retransmitted using a new CID on a new route.
This means there is no requirement to wait for data to be flushed before
sending the retirement ack.
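A minimal sketch of the bookkeeping described above, using hypothetical
structure and field names (none of these identifiers are implied to exist in
the implementation):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical bookkeeping for CIDs we have retired but whose retirement has
 * not yet been acknowledged by our peer. */
struct retired_cid {
    uint64_t seq_num;            /* sequence number of the retired CID */
    struct retired_cid *next;
};

struct cid_state {
    struct retired_cid *retired;     /* awaiting retirement acknowledgement */
    uint64_t highest_retired_seq;    /* highest CID sequence number retired */
};

/* Called when the packet carrying our RETIRE_CONNECTION_ID frame is acked;
 * the entry can then be forgotten entirely (freeing elided in this sketch). */
static void on_retirement_acked(struct cid_state *cs, uint64_t seq_num)
{
    struct retired_cid **p = &cs->retired;

    while (*p != NULL) {
        if ((*p)->seq_num == seq_num) {
            *p = (*p)->next;
            return;
        }
        p = &(*p)->next;
    }
}
```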
@@ -0,0 +1,553 @@
QUIC Fault Injector
===================
The OpenSSL QUIC implementation receives QUIC packets from the network layer and
processes them accordingly. It will need to behave appropriately in the event of
a misbehaving peer, i.e. one which is sending protocol elements (e.g. datagrams,
packets, frames, etc) that are not in accordance with the specifications or
OpenSSL's expectations.
The QUIC Fault Injector is a component within the OpenSSL test framework that
can be used to simulate misbehaving peers and confirm that OpenSSL QUIC
implementation behaves in the expected manner in the event of such misbehaviour.
Typically an individual test will inject one particular misbehaviour (i.e. a
fault) into an otherwise normal QUIC connection. Therefore the fault injector
will have to be capable of creating fully normal QUIC protocol elements, but
also offer the flexibility for a test to modify those normal protocol elements
as required for the specific test circumstances. The OpenSSL QUIC implementation
in libssl does not offer the capability to send faults since it is designed to
be RFC compliant.
The QUIC Fault Injector will be external to libssl (it will be in the test
framework) but it will reuse the standards compliant QUIC implementation in
libssl and will make use of 3 integration points to inject faults. 2 of these
integration points will use new callbacks added to libssl. The final integration
point does not require any changes to libssl to work.
QUIC Integration Points
-----------------------
### TLS Handshake
Fault Injector based tests may need to inject faults directly into the TLS
handshake data (i.e. the contents of CRYPTO frames). However such faults may
need to be done in handshake messages that would normally be encrypted.
Additionally the contents of handshake messages are hashed and each peer
confirms that the other peer has the same calculated hash value as part of the
"Finished" message exchange - so any modifications would be rejected and the
handshake would fail.
An example test might be to confirm that an OpenSSL QUIC client behaves
correctly in the case that the server provides incorrectly formatted transport
parameters. These transport parameters are sent from the server in the
EncryptedExtensions message. That message is encrypted and so cannot be
modified by a "man-in-the-middle".
To support this integration point, two new callbacks will be introduced to
libssl that enable modification of handshake data prior to it being encrypted
and hashed. These callbacks will be internal only (i.e. not part of the public
API) and so only usable by the Fault Injector.
The new libssl callbacks will be as follows:
```` C
typedef int (*ossl_statem_mutate_handshake_cb)(const unsigned char *msgin,
size_t inlen,
unsigned char **msgout,
size_t *outlen,
void *arg);
typedef void (*ossl_statem_finish_mutate_handshake_cb)(void *arg);
int ossl_statem_set_mutator(SSL *s,
ossl_statem_mutate_handshake_cb mutate_handshake_cb,
ossl_statem_finish_mutate_handshake_cb finish_mutate_handshake_cb,
void *mutatearg);
````
The two callbacks are set via a single internal function call
`ossl_statem_set_mutator`. The mutator callback `mutate_handshake_cb` will be
called after each handshake message has been constructed and is ready to send, but
before it has been passed through the handshake hashing code. It will be passed
a pointer to the constructed handshake message in `msgin` along with its
associated length in `inlen`. The mutator will construct a replacement handshake
message (typically by copying the input message and modifying it) and store it
in a newly allocated buffer. A pointer to the new buffer will be passed back
in `*msgout` and its length will be stored in `*outlen`. Optionally the mutator
can choose to not mutate by simply creating a new buffer with a copy of the data
in it. A return value of 1 indicates that the callback completed successfully. A
return value of 0 indicates a fatal error.
Once libssl has finished using the mutated buffer it will call the
`finish_mutate_handshake_cb` callback which can then release the buffer and
perform any other cleanup as required.
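As an illustration, a sketch of a mutator pair matching the callback types
above; the copy-and-stash allocation scheme shown is only one possible approach
a test might take:

```` C
#include <openssl/crypto.h>

static unsigned char *last_mutated; /* buffer currently lent to libssl */

/* Copy the handshake message so a test can tamper with it before it is
 * hashed and encrypted. Returning 1 indicates success. */
static int example_mutate_handshake_cb(const unsigned char *msgin, size_t inlen,
                                       unsigned char **msgout, size_t *outlen,
                                       void *arg)
{
    unsigned char *copy = OPENSSL_memdup(msgin, inlen);

    if (copy == NULL)
        return 0;

    /* ... test-specific modifications to the copy would go here ... */

    last_mutated = copy;
    *msgout = copy;
    *outlen = inlen;
    return 1;
}

/* Called once libssl has finished with the mutated buffer. */
static void example_finish_mutate_handshake_cb(void *arg)
{
    OPENSSL_free(last_mutated);
    last_mutated = NULL;
}
````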
### QUIC Pre-Encryption Packets
QUIC Packets are the primary mechanism for exchanging protocol data within QUIC.
Multiple packets may be held within a single datagram, and each packet may
itself contain multiple frames. A packet gets protected via an AEAD encryption
algorithm prior to it being sent. Fault Injector based tests may need to inject
faults into these packets prior to them being encrypted.
An example test might insert an unrecognised frame type into a QUIC packet to
confirm that an OpenSSL QUIC client handles it appropriately (e.g. by raising a
protocol error).
The above functionality will be supported by the following two new callbacks
which will provide the ability to mutate packets before they are encrypted and
sent. As for the TLS callbacks these will be internal only and not part of the
public API.
```` C
typedef int (*ossl_mutate_packet_cb)(const QUIC_PKT_HDR *hdrin,
const OSSL_QTX_IOVEC *iovecin, size_t numin,
QUIC_PKT_HDR **hdrout,
const OSSL_QTX_IOVEC **iovecout,
size_t *numout,
void *arg);
typedef void (*ossl_finish_mutate_cb)(void *arg);
void ossl_qtx_set_mutator(OSSL_QTX *qtx, ossl_mutate_packet_cb mutatecb,
ossl_finish_mutate_cb finishmutatecb, void *mutatearg);
````
A single new function call will set both callbacks. The `mutatecb` callback will
be invoked after each packet has been constructed but before protection has
been applied to it. The header for the packet will be pointed to by `hdrin` and
the payload will be in an iovec array pointed to by `iovecin` and containing
`numin` iovecs. The `mutatecb` callback is expected to allocate a new header
structure and return it in `*hdrout` and a new set of iovecs to be stored in
`*iovecout`. The number of iovecs need not be the same as the input. The number
of iovecs in the output array is stored in `*numout`. Optionally the callback
can choose not to mutate by simply creating a new header and iovecs containing
a copy of the original data. A return value of 1 indicates that the callback completed
successfully. A return value of 0 indicates a fatal error.
Once the OpenSSL QUIC implementation has finished using the mutated buffers the
`finishmutatecb` callback is called. This is expected to free any resources and
buffers that were allocated as part of the `mutatecb` call.
### QUIC Datagrams
Encrypted QUIC packets are sent in datagrams. There may be more than one QUIC
packet in a single datagram. Fault Injector based tests may need to inject
faults directly into these datagrams.
An example test might modify an encrypted packet to confirm that the AEAD
decryption process rejects it.
In order to provide this functionality the QUIC Fault Injector will insert
itself as a man-in-the-middle between the client and server. A BIO_s_dgram_pair()
will be used with one of the pair being used on the client end and the other
being associated with the Fault Injector. Similarly a second BIO_s_dgram_pair()
will be created with one used on the server and other used with the Fault
Injector.
With this setup the Fault Injector will act as a proxy and simply pass
datagrams sent from the client on to the server, and vice versa. Where a test
requires a modification to be made, that will occur prior to the datagram being
sent on.
This will all be implemented using public BIO APIs without requiring any
additional internal libssl callbacks.
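A simplified sketch of the proxy step, assuming a memory-based datagram BIO
pair created with `BIO_new_bio_dgram_pair()`; the helper name, the fixed buffer
size and the omission of peer address handling are simplifications:

```` C
#include <openssl/bio.h>

/* Forward up to one pending datagram from |from| to |to|, giving a test the
 * chance to modify it first. Returns 1 if a datagram was forwarded. */
static int forward_datagram(BIO *from, BIO *to)
{
    static unsigned char buf[65535];
    BIO_MSG msg = {0};
    size_t processed = 0;

    msg.data = buf;
    msg.data_len = sizeof(buf);

    if (!BIO_recvmmsg(from, &msg, sizeof(msg), 1, 0, &processed)
            || processed == 0)
        return 0;

    /* A fault-injecting test would inspect or modify msg.data here. */

    return BIO_sendmmsg(to, &msg, sizeof(msg), 1, 0, &processed)
           && processed == 1;
}
````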
Fault Injector API
------------------
The Fault Injector will utilise the callbacks described above in order to supply
a more test friendly API to test authors.
This API will primarily take the form of a set of event listener callbacks. A
test will be able to "listen" for a specific event occurring and be informed about
it when it does. Examples of events might include:
- An EncryptedExtensions handshake message being sent
- An ACK frame being sent
- A Datagram being sent
Each listener will be provided with additional data about the specific event.
For example a listener that is listening for an EncryptedExtensions message will
be provided with the parsed contents of that message in an easy to use
structure. Additional helper functions will be provided to make changes to the
message (such as to resize it).
Initially listeners will only be able to listen for events on the server side.
This is because, in MVP, it will be the client side that is under test - so the
faults need to be injected into protocol elements sent from the server. Post
MVP this will be extended in order to be able to test the server. It may be that
we need to do this during MVP in order to be able to observe protocol elements
sent from the client without modifying them (i.e. in order to confirm that the
client is behaving as we expect). This will be added if required as we develop
the tests.
It is expected that the Fault Injector API will expand over time as new
listeners and helper functions are added to support specific test scenarios. The
initial API will provide a basic set of listeners and helper functions in order
to provide the basis for future work.
The following outlines an illustrative set of functions that will initially be
provided. A number of `TODO(QUIC TESTING)` comments are inserted to explain how
we might expand the API over time:
```` C
/* Type to represent the Fault Injector */
typedef struct ossl_quic_fault OSSL_QUIC_FAULT;
/*
* Structure representing a parsed EncryptedExtension message. Listeners can
* make changes to the contents of structure objects as required and the fault
* injector will reconstruct the message to be sent on
*/
typedef struct ossl_qf_encrypted_extensions {
/* EncryptedExtension messages just have an extensions block */
unsigned char *extensions;
size_t extensionslen;
} OSSL_QF_ENCRYPTED_EXTENSIONS;
/*
* Given an SSL_CTX for the client and filenames for the server certificate and
* keyfile, create a server and client instances as well as a fault injector
* instance. |block| indicates whether we are using blocking mode or not.
*/
int qtest_create_quic_objects(OSSL_LIB_CTX *libctx, SSL_CTX *clientctx,
SSL_CTX *serverctx, char *certfile, char *keyfile,
int block, QUIC_TSERVER **qtserv, SSL **cssl,
OSSL_QUIC_FAULT **fault, BIO **tracebio);
/*
* Free up a Fault Injector instance
*/
void ossl_quic_fault_free(OSSL_QUIC_FAULT *fault);
/*
* Run the TLS handshake to create a QUIC connection between the client and
* server.
*/
int qtest_create_quic_connection(QUIC_TSERVER *qtserv, SSL *clientssl);
/*
* Same as qtest_create_quic_connection but will stop (successfully) if the
* clientssl indicates SSL_ERROR_WANT_XXX as specified by |wanterr|
*/
int qtest_create_quic_connection_ex(QUIC_TSERVER *qtserv, SSL *clientssl,
int wanterr);
/*
* Confirm that the server has received the given transport error code.
*/
int qtest_check_server_transport_err(QUIC_TSERVER *qtserv, uint64_t code);
/*
* Confirm the server has received a protocol error. Equivalent to calling
* qtest_check_server_transport_err with a code of QUIC_ERR_PROTOCOL_VIOLATION
*/
int qtest_check_server_protocol_err(QUIC_TSERVER *qtserv);
/*
* Enable tests to listen for pre-encryption QUIC packets being sent
*/
typedef int (*ossl_quic_fault_on_packet_plain_cb)(OSSL_QUIC_FAULT *fault,
QUIC_PKT_HDR *hdr,
unsigned char *buf,
size_t len,
void *cbarg);
int ossl_quic_fault_set_packet_plain_listener(OSSL_QUIC_FAULT *fault,
ossl_quic_fault_on_packet_plain_cb pplaincb,
void *pplaincbarg);
/*
* Helper function to be called from a packet_plain_listener callback if it
* wants to resize the packet (either to add new data to it, or to truncate it).
* The buf provided to packet_plain_listener is over allocated, so this just
* changes the logical size and never changes the actual address of the buf.
* This will fail if a large resize is attempted that exceeds the over
* allocation.
*/
int ossl_quic_fault_resize_plain_packet(OSSL_QUIC_FAULT *fault, size_t newlen);
/*
* Prepend frame data into a packet. To be called from a packet_plain_listener
* callback
*/
int ossl_quic_fault_prepend_frame(OSSL_QUIC_FAULT *fault, unsigned char *frame,
size_t frame_len);
/*
* The general handshake message listener is sent the entire handshake message
* data block, including the handshake header itself
*/
typedef int (*ossl_quic_fault_on_handshake_cb)(OSSL_QUIC_FAULT *fault,
unsigned char *msg,
size_t msglen,
void *handshakecbarg);
int ossl_quic_fault_set_handshake_listener(OSSL_QUIC_FAULT *fault,
ossl_quic_fault_on_handshake_cb handshakecb,
void *handshakecbarg);
/*
* Helper function to be called from a handshake_listener callback if it wants
* to resize the handshake message (either to add new data to it, or to truncate
* it). newlen must include the length of the handshake message header. The
* handshake message buffer is over allocated, so this just changes the logical
* size and never changes the actual address of the buf.
* This will fail if a large resize is attempted that exceeds the over
* allocation.
*/
int ossl_quic_fault_resize_handshake(OSSL_QUIC_FAULT *fault, size_t newlen);
/*
* TODO(QUIC TESTING): Add listeners for specific types of frame here. E.g.
* we might expect to see an "ACK" frame listener which will be passed
* pre-parsed ack data that can be modified as required.
*/
/*
* Handshake message specific listeners. Unlike the general handshake message
* listener these messages are pre-parsed and supplied with message specific
* data and exclude the handshake header.
*/
typedef int (*ossl_quic_fault_on_enc_ext_cb)(OSSL_QUIC_FAULT *fault,
OSSL_QF_ENCRYPTED_EXTENSIONS *ee,
size_t eelen,
void *encextcbarg);
int ossl_quic_fault_set_hand_enc_ext_listener(OSSL_QUIC_FAULT *fault,
ossl_quic_fault_on_enc_ext_cb encextcb,
void *encextcbarg);
/* TODO(QUIC TESTING): Add listeners for other types of handshake message here */
/*
* Helper function to be called from message specific listener callbacks. newlen
* is the new length of the specific message excluding the handshake message
* header. The buffers provided to the message specific listeners are over
* allocated, so this just changes the logical size and never changes the actual
* address of the buffer. This will fail if a large resize is attempted that
* exceeds the over allocation.
*/
int ossl_quic_fault_resize_message(OSSL_QUIC_FAULT *fault, size_t newlen);
/*
* Helper function to delete an extension from an extension block. |exttype| is
* the type of the extension to be deleted. |ext| points to the extension block.
* On entry |*extlen| contains the length of the extension block. It is updated
* with the new length on exit.
*/
int ossl_quic_fault_delete_extension(OSSL_QUIC_FAULT *fault,
unsigned int exttype, unsigned char *ext,
size_t *extlen);
/*
* TODO(QUIC TESTING): Add additional helper functions for querying extensions
* here (e.g. finding or adding them). We could also provide a "listener" API
* for listening for specific extension types.
*/
/*
* Enable tests to listen for post-encryption QUIC packets being sent
*/
typedef int (*ossl_quic_fault_on_packet_cipher_cb)(OSSL_QUIC_FAULT *fault,
/* The parsed packet header */
QUIC_PKT_HDR *hdr,
/* The packet payload data */
unsigned char *buf,
/* Length of the payload */
size_t len,
void *cbarg);
int ossl_quic_fault_set_packet_cipher_listener(OSSL_QUIC_FAULT *fault,
ossl_quic_fault_on_packet_cipher_cb pciphercb,
void *picphercbarg);
/*
* Enable tests to listen for datagrams being sent
*/
typedef int (*ossl_quic_fault_on_datagram_cb)(OSSL_QUIC_FAULT *fault,
BIO_MSG *m,
size_t stride,
void *cbarg);
int ossl_quic_fault_set_datagram_listener(OSSL_QUIC_FAULT *fault,
ossl_quic_fault_on_datagram_cb datagramcb,
void *datagramcbarg);
/*
* To be called from a datagram_listener callback. The datagram buffer is over
* allocated, so this just changes the logical size and never changes the actual
* address of the buffer. This will fail if a large resize is attempted that
* exceeds the over allocation.
*/
int ossl_quic_fault_resize_datagram(OSSL_QUIC_FAULT *fault, size_t newlen);
````
Example Tests
-------------
This section provides some example tests to illustrate how the Fault Injector
might be used to create tests.
### Unknown Frame Test
An example test showing a server sending a frame of an unknown type to the
client:
```` C
/*
* Test that adding an unknown frame type is handled correctly
*/
static int add_unknown_frame_cb(OSSL_QUIC_FAULT *fault, QUIC_PKT_HDR *hdr,
unsigned char *buf, size_t len, void *cbarg)
{
static size_t done = 0;
/*
* There are no "reserved" frame types which are definitely safe for us
* to use for testing purposes - but we just use the highest possible
* value (8 byte length integer) and no payload bytes
*/
unsigned char unknown_frame[] = {
0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
};
/* We only ever add the unknown frame to one packet */
if (done++)
return 1;
return ossl_quic_fault_prepend_frame(fault, unknown_frame,
sizeof(unknown_frame));
}
static int test_unknown_frame(void)
{
int testresult = 0, ret;
SSL_CTX *cctx = SSL_CTX_new(OSSL_QUIC_client_method());
QUIC_TSERVER *qtserv = NULL;
SSL *cssl = NULL;
char *msg = "Hello World!";
size_t msglen = strlen(msg);
unsigned char buf[80];
size_t byteswritten;
OSSL_QUIC_FAULT *fault = NULL;
if (!TEST_ptr(cctx))
goto err;
if (!TEST_true(qtest_create_quic_objects(NULL, cctx, NULL, cert, privkey, 0,
&qtserv, &cssl, &fault, NULL)))
goto err;
if (!TEST_true(qtest_create_quic_connection(qtserv, cssl)))
goto err;
/*
* Write a message from the server to the client and add an unknown frame
* type
*/
if (!TEST_true(ossl_quic_fault_set_packet_plain_listener(fault,
add_unknown_frame_cb,
NULL)))
goto err;
if (!TEST_true(ossl_quic_tserver_write(qtserv, (unsigned char *)msg, msglen,
&byteswritten)))
goto err;
if (!TEST_size_t_eq(msglen, byteswritten))
goto err;
ossl_quic_tserver_tick(qtserv);
if (!TEST_true(SSL_tick(cssl)))
goto err;
if (!TEST_int_le(ret = SSL_read(cssl, buf, sizeof(buf)), 0))
goto err;
if (!TEST_int_eq(SSL_get_error(cssl, ret), SSL_ERROR_SSL))
goto err;
if (!TEST_int_eq(ERR_GET_REASON(ERR_peek_error()),
SSL_R_UNKNOWN_FRAME_TYPE_RECEIVED))
goto err;
if (!TEST_true(qtest_check_server_protocol_err(qtserv)))
goto err;
testresult = 1;
err:
ossl_quic_fault_free(fault);
SSL_free(cssl);
ossl_quic_tserver_free(qtserv);
SSL_CTX_free(cctx);
return testresult;
}
````
### No Transport Parameters test
An example test showing the case where a server does not supply any transport
parameters in the TLS handshake:
```` C
/*
* Test that a server that fails to provide transport params cannot be
* connected to.
*/
static int drop_transport_params_cb(OSSL_QUIC_FAULT *fault,
OSSL_QF_ENCRYPTED_EXTENSIONS *ee,
size_t eelen, void *encextcbarg)
{
if (!ossl_quic_fault_delete_extension(fault,
TLSEXT_TYPE_quic_transport_parameters,
ee->extensions, &ee->extensionslen))
return 0;
return 1;
}
static int test_no_transport_params(void)
{
int testresult = 0;
SSL_CTX *cctx = SSL_CTX_new(OSSL_QUIC_client_method());
QUIC_TSERVER *qtserv = NULL;
SSL *cssl = NULL;
OSSL_QUIC_FAULT *fault = NULL;
if (!TEST_ptr(cctx))
goto err;
if (!TEST_true(qtest_create_quic_objects(NULL, cctx, NULL, cert, privkey, 0,
&qtserv, &cssl, &fault, NULL)))
goto err;
if (!TEST_true(ossl_quic_fault_set_hand_enc_ext_listener(fault,
drop_transport_params_cb,
NULL)))
goto err;
/*
* We expect the connection to fail because the server failed to provide
* transport parameters
*/
if (!TEST_false(qtest_create_quic_connection(qtserv, cssl)))
goto err;
if (!TEST_true(qtest_check_server_protocol_err(qtserv)))
goto err;
testresult = 1;
err:
ossl_quic_fault_free(fault);
SSL_free(cssl);
ossl_quic_tserver_free(qtserv);
SSL_CTX_free(cctx);
return testresult;
}
````
@@ -0,0 +1,272 @@
Flow Control
============
Introduction to QUIC Flow Control
---------------------------------
QUIC flow control acts at both connection and stream levels. At any time,
transmission of stream data could be prevented by connection-level flow control,
by stream-level flow control, or both. Flow control uses a credit-based model in
which the relevant flow control limit is expressed as the maximum number of
bytes allowed to be sent on a stream, or across all streams, since the beginning
of the stream or connection. This limit may be periodically bumped.
It is important to note that both connection and stream-level flow control
relate only to the transmission of QUIC stream data. QUIC flow control at stream
level counts the total number of logical bytes sent on a given stream. Note that
this does not count retransmissions; thus, if a byte is sent, lost, and sent
again, this still only counts as one byte for the purposes of flow control. Note
that the total number of logical bytes sent on a given stream is equivalent to
the current “length” of the stream. In essence, the relevant quantity is
`max(offset + len)` for all STREAM frames `(offset, len)` we have ever sent for
the stream.
(It is essential that this be determined correctly, as deadlock may occur if we
believe we have exhausted our flow control credit whereas the peer believes we
have not, as the peer may wait indefinitely for us to send more data before
advancing us more flow control credit.)
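A small sketch of this bookkeeping, with hypothetical names; the
flow-control-relevant stream length is maintained as the running maximum of
`offset + len` over all STREAM frames sent, so retransmissions never consume
additional credit:

```c
#include <stdint.h>

/* Hypothetical per-stream TX bookkeeping. */
struct stream_tx_fc {
    uint64_t len;   /* max(offset + len) over all STREAM frames ever sent */
};

/* Record a STREAM frame about to be sent; returns the number of newly
 * consumed controlled bytes (0 for a pure retransmission). */
static uint64_t stream_tx_fc_on_send(struct stream_tx_fc *fc,
                                     uint64_t offset, uint64_t len)
{
    uint64_t end = offset + len;
    uint64_t consumed = 0;

    if (end > fc->len) {
        consumed = end - fc->len;   /* only new logical bytes count */
        fc->len = end;
    }

    return consumed;
}
```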
QUIC flow control at connection level is based on the sum of all the logical
bytes transmitted across all streams since the start of the connection.
Connection-level flow control is controlled by the `MAX_DATA` frame;
stream-level flow control is controlled by the `MAX_STREAM_DATA` frame.
The `DATA_BLOCKED` and `STREAM_DATA_BLOCKED` frames defined by RFC 9000 are less
important than they first appear, as peers are not allowed to rely on them. (For
example, a peer is not allowed to wait until we send `DATA_BLOCKED` to increase
our connection-level credit, and a conformant QUIC implementation can choose to
never generate either of these frame types.) These frames rather serve two
purposes: to enhance flow control performance, and as a debugging aid.
However, their implementation is not critical.
Note that it follows from the above that the CRYPTO-frame stream is not subject
to flow control.
Note that flow control and congestion control are completely separate
mechanisms. In a given circumstance, either or both mechanisms may restrict our
ability to transmit application data.
Consider the following diagram:
```
    RWM          SWM   SWM'                CWM           CWM'
     |            |     |                   |             |
     |            |<------- credit -------->|             |
     |      <---------- threshold --------->|             |
                  |<------------ window size ------------>|
```
We introduce the following terminology:
- **Controlled bytes** refers to any byte which counts for purposes of flow
control. A controlled byte is any byte of application data in a STREAM frame
payload, the first time it is sent (retransmissions do not count).
- (RX side only) **Retirement**, which refers to where we dequeue one or more
controlled bytes from a QUIC stream and hand them to the application, meaning
we are no longer responsible for them.
Retirement is an important factor in our RX flow control design, as we want
peers to transmit not just at the rate that our QUIC implementation can
process incoming data, but also at a rate the application can handle.
- (RX side only) The **Retired Watermark** (RWM), the total number of retired
controlled bytes since the beginning of the connection or stream.
- The **Spent Watermark** (SWM), which is the number of controlled bytes we have
sent (for the TX side) or received (for the RX side). This represents the
amount of flow control budget which has been spent. It is a monotonic value
and never decreases. On the RX side, such bytes have not necessarily been
retired yet.
- The **Credit Watermark** (CWM), which is the number of bytes which have
been authorized for transmission so far. This count is a cumulative count
since the start of the connection or stream and thus is also monotonic.
- The available **credit**, which is always simply the difference between
the SWM and the CWM.
- (RX side only) The **threshold**, which is how close we let the RWM
get to the CWM before we choose to extend the peer more credit by bumping the
CWM. The threshold is relative to (i.e., subtracted from) the CWM.
- (RX side only) The **window size**, which is the amount by which we or a peer
choose to bump the CWM each time the threshold is reached or exceeded. The new
CWM is calculated as the SWM plus the window size (note that it is added to the
SWM, not the old CWM).
Note that:
- If the available credit is zero, the TX side is blocked due to a lack of
credit.
- If any circumstance occurs which would cause the SWM to exceed the CWM,
a flow control protocol violation has occurred and the connection
should be terminated.
Connection-Level Flow Control - TX Side
---------------------------------------
TX side flow control is exceptionally simple. It can be modelled as the
following state machine:
```
---> event: On TX (numBytes)
---> event: On TX Window Updated (numBytes)
<--- event: On TX Blocked
Get TX Window() -> numBytes
```
The On TX event is passed to the state machine whenever we send a packet.
`numBytes` is the total number of controlled bytes we sent in the packet (i.e.,
the number of bytes of STREAM frame payload which are not retransmissions). This
value is added to the TX-side SWM value. Note that this may be zero, though
there is no need to pass the event in this case.
The On TX Window Updated event is passed to the state machine whenever we have
our CWM increased. In other words, it is passed whenever we receive a `MAX_DATA`
frame, with the integer value contained in that frame (or when we receive the
`initial_max_data` transport parameter).
The On TX Window Updated event expresses the CWM (that is, the cumulative
number of controlled bytes we are allowed to send since the start of the
connection), thus it is monotonic and may never regress. If an On TX Window
Update event is passed to the state machine with a value lower than that passed
in any previous such event, it indicates a peer protocol error or a local
programming error.
The Get TX Window function returns our credit value (that is, it returns the
number of controlled bytes we are allowed to send). This value is reduced by the
On TX event and increased by the On TX Window Updated event. In fact, it is
simply the difference between the last On TX Window Updated value and the sum of
the `numBytes` arguments of all On TX events so far; it is that simple.
The On TX Blocked event is emitted at the time of any edge transition where the
value which would be returned by the Get TX Window function changes from
non-zero to zero. This always occurs during processing of an On TX event. (This
event is intended to assist in deciding when to generate `DATA_BLOCKED`
frames.)
We must not exceed the flow control limits, else the peer may terminate the
connection with an error.
An initial connection-level credit is communicated by the peer in the
`initial_max_data` transport parameter. All other credits occur as a result of a
`MAX_DATA` frame.
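A compact sketch of this state machine using hypothetical names; it follows the
definitions above (credit = CWM - SWM), and reports the blocked edge via the
return value of the On TX handler rather than a separate event:

```c
#include <stdint.h>

/* Hypothetical TX-side flow controller (connection or stream level). */
struct tx_fc {
    uint64_t swm;   /* controlled bytes sent so far (Spent Watermark) */
    uint64_t cwm;   /* cumulative credit granted so far (Credit Watermark) */
};

/* Get TX Window: the available credit. */
static uint64_t tx_fc_get_window(const struct tx_fc *fc)
{
    return fc->cwm - fc->swm;
}

/* On TX Window Updated: the CWM is cumulative and must never regress. */
static int tx_fc_on_window_updated(struct tx_fc *fc, uint64_t new_cwm)
{
    if (new_cwm < fc->cwm)
        return 0;   /* peer protocol error or local programming error */

    fc->cwm = new_cwm;
    return 1;
}

/* On TX: returns 1 on the edge transition to a zero window (On TX Blocked). */
static int tx_fc_on_tx(struct tx_fc *fc, uint64_t num_bytes)
{
    int had_credit = tx_fc_get_window(fc) > 0;

    fc->swm += num_bytes;   /* caller must never exceed the available credit */
    return had_credit && tx_fc_get_window(fc) == 0;
}
```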
Stream-Level Flow Control - TX Side
-----------------------------------
Stream-level flow control works exactly the same as connection-level flow
control for the TX side.
The On TX Window Updated event occurs in response to the `MAX_STREAM_DATA`
frame, or based on the relevant transport parameter
(`initial_max_stream_data_bidi_local`, `initial_max_stream_data_bidi_remote`,
`initial_max_stream_data_uni`).
The On TX Blocked event can be used to decide when to generate
`STREAM_DATA_BLOCKED` frames.
Note that the number of controlled bytes we can send in a stream is limited by
both connection and stream-level flow control; thus the number of controlled
bytes we can send is the lesser value of the values returned by the Get TX
Window function on the connection-level and stream-level state machines,
respectively.
Connection-Level Flow Control - RX Side
---------------------------------------
```
---> event: On RX Controlled Bytes (numBytes) [internal event]
---> event: On Retire Controlled Bytes (numBytes)
<--- event: Increase Window (numBytes)
<--- event: Flow Control Error
```
RX side connection-level flow control provides an indication of when to generate
`MAX_DATA` frames to bump the peer's connection-level transmission credit. It is
somewhat more involved than the TX side.
The state machine receives On RX Controlled Bytes events from stream-level flow
controllers. Callers do not pass the event themselves. The event is generated by
a stream-level flow controller whenever we receive any controlled bytes.
`numBytes` is the number of controlled bytes we received. (This event is
generated by stream-level flow control as retransmitted stream data must be
counted only once, and the stream-level flow control is therefore in the best
position to determine how many controlled bytes (i.e., new, non-retransmitted
stream payload bytes) have been received).
If we receive more controlled bytes than we authorized, the state machine emits
the Flow Control Error event. The connection should be terminated with a
protocol error in this case.
The state machine emits the Increase Window event when it thinks that the peer
should be advanced more flow control credit (i.e., when the CWM should be
bumped). `numBytes` is the new CWM value, and is monotonic with regard to all
previous Increase Window events emitted by the state machine.
The state machine is passed the On Retire Controlled bytes event when one or
more controlled bytes are dequeued from any stream and passed to the
application.
The state machine uses the cadence of the On Retire Controlled Bytes events it
receives to determine when to increase the flow control window. Thus, the On
Retire Controlled Bytes event should be sent to the state machine when
processing of the received controlled bytes has been *completed* (i.e., passed
to the application).
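A sketch of the RX side along the same lines, again with hypothetical names;
the Increase Window decision uses the threshold and window size terms defined
earlier, with the new CWM computed as the SWM plus the window size:

```c
#include <stdint.h>

/* Hypothetical RX-side connection-level flow controller. */
struct rx_fc {
    uint64_t swm;        /* controlled bytes received (Spent Watermark) */
    uint64_t rwm;        /* controlled bytes retired (Retired Watermark) */
    uint64_t cwm;        /* credit granted to the peer (Credit Watermark) */
    uint64_t threshold;  /* bump once the RWM gets this close to the CWM */
    uint64_t window;     /* window size; assumed >= threshold so the CWM
                            never regresses */
};

/* On RX Controlled Bytes: returns 0 on a flow control violation. */
static int rx_fc_on_rx(struct rx_fc *fc, uint64_t num_bytes)
{
    fc->swm += num_bytes;
    return fc->swm <= fc->cwm;   /* otherwise: Flow Control Error */
}

/* On Retire Controlled Bytes: if a bump is due, returns the new CWM to
 * advertise in a MAX_DATA frame, otherwise returns 0. */
static uint64_t rx_fc_on_retire(struct rx_fc *fc, uint64_t num_bytes)
{
    fc->rwm += num_bytes;

    if (fc->rwm + fc->threshold >= fc->cwm) {
        fc->cwm = fc->swm + fc->window;   /* new CWM = SWM + window size */
        return fc->cwm;                   /* Increase Window event */
    }

    return 0;
}
```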
Stream-Level Flow Control - RX Side
-----------------------------------
RX-side stream-level flow control works similarly to RX-side connection-level
flow control. There are a few differences:
- There is no On RX Controlled Bytes event.
- The On Retire Controlled Bytes event may optionally pass the same event
to a connection-level flow controller (an implementation decision), as these
events should always occur at the same time.
- An additional event is added, which replaces the On RX Controlled Bytes event:
```
---> event: On RX Stream Frame (offsetPlusLength, isFin)
```
This event should be passed to the state machine when a STREAM frame is
received. The `offsetPlusLength` argument is the sum of the offset field of
the STREAM frame and the length of the frame's payload in bytes. The isFin
argument should specify whether the STREAM frame had the FIN flag set.
This event is used to generate the internal On RX Controlled Bytes event to
the connection-level flow controller. It is also used by stream-level flow
control to determine if flow control limits are violated by the peer.
The state machine handles `offsetPlusLength` monotonically and ignores the
event if a previous such event already had an equal or greater value. The
reason this event is used instead of a `On RX (numBytes)` style event is that
this API can be monotonic and thus easier to use (the caller does not need to
remember if they have already counted a specific controlled byte in a STREAM
frame, which may after all duplicate some of the controlled bytes in a
previous STREAM frame).
RX Window Sizing
----------------
For RX flow control we must determine our window size. This is the value we add
to the current SWM to determine the new CWM each time the RWM reaches the
threshold. The window size should be adapted dynamically according to network
conditions.
Many implementations choose to have a mechanism for increasing the window size
but not decreasing it, a simple approach which we adopt here.
The common algorithm is a so-called auto-tuning approach in which the rate of
window consumption (i.e., the rate at which RWM approaches CWM after CWM is
bumped) is measured and compared to the measured connection RTT. If the time it
takes to consume one window size is less than a fixed multiple of the RTT (i.e.,
the window is being consumed quickly), the window size is doubled, up to an
implementation-chosen maximum window size.
Auto-tuning occurs in 'epochs'. At the end of each auto-tuning epoch, a decision
is made on whether to double the window size, and a new auto-tuning epoch is
started.
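A sketch of one auto-tuning epoch under this scheme, with hypothetical names;
the timestamp source, RTT estimate and multiplier `k` are assumed to be
supplied by the caller:

```c
#include <stdint.h>

/* Hypothetical auto-tuning state for the RX window size. */
struct rx_window_tuner {
    uint64_t window;        /* current window size in bytes */
    uint64_t max_window;    /* implementation-chosen upper bound */
    uint64_t epoch_start;   /* timestamp (e.g. in microseconds) of epoch start */
};

/* Called when one window's worth of credit has been consumed, i.e. at the end
 * of an auto-tuning epoch. now and rtt are in the same time units; k is the
 * fixed RTT multiple. */
static void rx_window_on_epoch_end(struct rx_window_tuner *t,
                                   uint64_t now, uint64_t rtt, uint64_t k)
{
    uint64_t elapsed = now - t->epoch_start;

    /* Window consumed quickly relative to the RTT: double it, up to the cap. */
    if (elapsed < k * rtt && t->window * 2 <= t->max_window)
        t->window *= 2;

    t->epoch_start = now;   /* begin the next epoch */
}
```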
For more information on auto-tuning, see [Flow control in
QUIC](https://docs.google.com/document/d/1F2YfdDXKpy20WVKJueEf4abn_LVZHhMUMS5gX6Pgjl4/edit#heading=h.hcm2y5x4qmqt)
and [QUIC Flow
Control](https://docs.google.com/document/d/1SExkMmGiz8VYzV3s9E35JQlJ73vhzCekKkDi85F1qCE/edit#).
@@ -0,0 +1,510 @@
QUIC Frame-in-Flight Management
===============================
The QUIC frame-in-flight manager is responsible for tracking frames which have
been sent and which need to be regenerated if the packets they were placed into
are designated as lost by the ACK manager. The ACK manager works on the level of
packets, whereas the QUIC frame-in-flight manager (FIFM) works on the level of
frames.
The FIFM comprises the following three components:
- the Control Frame Queue (CFQ);
- the Transmitted Packet Information Manager (TXPIM); and
- the Frame-in-Flight Dispatcher (FIFD).
![](images/quic-fifm-overview.png "QUIC FIFM Overview")
These are introduced in turn below, but first we discuss the various QUIC frame
types to establish the need for each component.
Analysis of QUIC Frame Retransmission Requirements
--------------------------------------------------
### Frame Types
Standard QUIC uses the following frame types:
```plain
HANDSHAKE_DONE GCR / REGEN
MAX_DATA REGEN
DATA_BLOCKED REGEN
MAX_STREAMS REGEN
STREAMS_BLOCKED REGEN
NEW_CONNECTION_ID GCR
RETIRE_CONNECTION_ID GCR
PATH_CHALLENGE -
PATH_RESPONSE -
ACK - (non-ACK-eliciting)
CONNECTION_CLOSE special (non-ACK-eliciting)
NEW_TOKEN GCR
CRYPTO GCR or special
RESET_STREAM REGEN
STOP_SENDING REGEN
MAX_STREAM_DATA REGEN
STREAM_DATA_BLOCKED REGEN
STREAM special
PING -
PADDING - (non-ACK-eliciting)
```
The different frame types require different ways of handling retransmission in
the event of loss:
- **GCR** (Generic Control Frame Retransmission): The raw bytes of
the encoded frame can simply be sent again. This retransmission system does
not need to understand the specific frame type. A simple queue can be used,
with each queue entry being an octet string representing an encoded frame.
This queue can also be used for initial transmission of **GCR** frames, not
just retransmissions.
- **REGEN** (Regenerate): These frames can be marked for dynamic regeneration
when a packet containing them is lost. This has the advantage of using
up-to-date data at the time of transmission, so is preferred over `GCR` when
possible.
- Special — `STREAM`, `CRYPTO`: `STREAM` frames are handled as a special case
by the QUIC Send Stream Manager. `CRYPTO` frame retransmission can also be
handled using a QUIC Send Stream Manager. (`CRYPTO` frames could also be
handled via GCR, though suboptimally. We choose to use proper send stream
management, just as for application data streams.)
- Some frame types do not need to be retransmitted even if lost (`PING`,
`PADDING`, `PATH_CHALLENGE`, `PATH_RESPONSE`).
- Special — `CONNECTION_CLOSE`: This frame is a special case and is not
retransmitted per se.
### Requirements
The following requirements are identified:
- Need for a generic control queue which can store encoded control frames.
This control queue will handle both initial transmission and retransmission of
most control frames which do not have special requirements.
- The ability to determine, when the ACK Manager determines that a packet has
been acknowledged, lost or discarded:
- What stream IDs were sent in a packet, and the logical ranges of application
data bytes for each (which may not be one contiguous range).
This is needed so that the QUIC Send Stream Manager for a given stream
can be informed of lost or acked ranges in the stream.
- The logical ranges of the CRYPTO stream which were sent in the packet
(which may not be one contiguous range), for similar reasons.
- Which stream IDs had a FIN bit set in the packet.
This is needed so that the QUIC Send Stream Manager can be informed for a
given stream whether a FIN was lost or acked.
- What control frames using the **GCR** strategy were sent in the packet
so that they can be requeued (if lost) or released (if acked or discarded).
- For each type of frame using the **REGEN** strategy, a flag as to whether
that frame type was contained in the packet (so that the flag can be set
again if the packet was lost).
The Control Frame Queue (CFQ)
-----------------------------
![](images/quic-fifm-cfq.png "QUIC CFQ Overview")
The CFQ (`QUIC_CFQ`) stores encoded frames which can be blindly retransmitted in
the event that they are lost. It facilitates the GCR retransmission strategy.
One logical CFQ instance will be needed per PN space per connection. As an
optimisation, these three CFQ instances per connection are all modelled by a
single `QUIC_CFQ` instance.
Each frame in the CFQ is a simple opaque byte buffer, which has the following
metadata associated with it:
- An integral priority value, used to maintain priority ordering.
- The frame type, which is provided by the caller along with the buffer.
This can be determined from the encoded frame buffer, but this saves the
CFQ's users from needing to decode it. The CFQ itself does not use this
value.
- A state, which is either `NEW` or `TX`. Frames added to the CFQ have
the `NEW` state initially. When the frame is transmitted, it is transitioned
to the `TX` state. If the packet it was sent in is subsequently lost,
it is transitioned back to the `NEW` state.
Frames in the `NEW` state participate in a priority queue (the NEW queue)
according to their priority and the CFQ's NEW queue can be iterated in priority
order by callers.
When a packet containing a CFQ item is acknowledged, the CFQ is informed and the
CFQ item is released. A free callback provided when the buffer was added to the
CFQ is called, providing an opportunity to free or reuse the buffer. Buffers
provided to the CFQ as part of a CFQ item must remain allocated for the duration
of their membership of the CFQ. The CFQ maintains memory allocation of CFQ items
themselves internally.
### API
```c
/*
* QUIC Control Frame Queue Item
* =============================
*
* The CFQ item structure has a public and a private part. This structure
* documents the public part.
*/
typedef struct quic_cfq_item_st QUIC_CFQ_ITEM;
struct quic_cfq_item_st {
/*
* These fields are not used by the CFQ, but are a convenience to assist the
* TXPIM in keeping a list of GCR control frames which were sent in a
* packet. They may be used for any purpose.
*/
QUIC_CFQ_ITEM *pkt_prev, *pkt_next;
/* All other fields are private; use ossl_quic_cfq_item_* accessors. */
};
#define QUIC_CFQ_STATE_NEW 0
#define QUIC_CFQ_STATE_TX 1
/* Returns the frame type of a CFQ item. */
uint64_t ossl_quic_cfq_item_get_frame_type(QUIC_CFQ_ITEM *item);
/* Returns a pointer to the encoded buffer of a CFQ item. */
const unsigned char *ossl_quic_cfq_item_get_encoded(QUIC_CFQ_ITEM *item);
/* Returns the length of the encoded buffer in bytes. */
size_t ossl_quic_cfq_item_get_encoded_len(QUIC_CFQ_ITEM *item);
/* Returns the CFQ item state, a QUIC_CFQ_STATE_* value. */
int ossl_quic_cfg_item_get_state(QUIC_CFQ_ITEM *item);
/* Returns the PN space for the CFQ item. */
int ossl_quic_cfg_item_get_pn_space(QUIC_CFQ_ITEM *item);
/*
* QUIC Control Frame Queue
* ========================
*/
typedef struct quic_cfq_st QUIC_CFQ;
QUIC_CFQ *ossl_quic_cfq_new(void);
void ossl_quic_cfq_free(QUIC_CFQ *cfq);
/*
* Input Side
* ----------
*/
/*
* Enqueue a frame to the CFQ. encoded points to the opaque encoded frame.
*
* free_cb is called by the CFQ when the buffer is no longer needed;
* free_cb_arg is an opaque value passed to free_cb.
*
* priority determines the relative ordering of control frames in a packet.
* Higher numerical values for priority mean that a frame should come earlier in
* a packet. pn_space is a QUIC_PN_SPACE_* value.
*
* On success, returns a QUIC_CFQ_ITEM pointer which acts as a handle to
* the queued frame. On failure, returns NULL.
*
* The frame is initially in the TX state, so there is no need to call
* ossl_quic_cfq_mark_tx() immediately after calling this function.
*
* The frame type is duplicated as the frame_type argument here, even though it
* is also encoded into the buffer. This allows the caller to determine the
* frame type if desired without having to decode the frame.
*/
typedef void (cfq_free_cb)(unsigned char *buf, size_t buf_len, void *arg);
QUIC_CFQ_ITEM *ossl_quic_cfq_add_frame(QUIC_CFQ *cfq,
uint32_t priority,
uint32_t pn_space,
uint64_t frame_type,
const unsigned char *encoded,
size_t encoded_len,
cfq_free_cb *free_cb,
void *free_cb_arg);
/*
* Effects an immediate transition of the given CFQ item to the TX state.
*/
void ossl_quic_cfq_mark_tx(QUIC_CFQ *cfq, QUIC_CFQ_ITEM *item);
/*
* Effects an immediate transition of the given CFQ item to the NEW state,
* allowing the frame to be retransmitted. If priority is not UINT32_MAX,
* the priority is changed to the given value.
*/
void ossl_quic_cfq_mark_lost(QUIC_CFQ *cfq, QUIC_CFQ_ITEM *item,
uint32_t priority);
/*
* Releases a CFQ item. The item may be in either state (NEW or TX) prior to the
* call. The QUIC_CFQ_ITEM pointer must not be used following this call.
*/
void ossl_quic_cfq_release(QUIC_CFQ *cfq, QUIC_CFQ_ITEM *item);
/*
* Output Side
* -----------
*/
/*
* Gets the highest priority CFQ item in the given PN space awaiting
* transmission. If there are none, returns NULL.
*/
QUIC_CFQ_ITEM *ossl_quic_cfq_get_priority_head(QUIC_CFQ *cfq, uint32_t pn_space);
/*
* Given a CFQ item, gets the next CFQ item awaiting transmission in priority
* order in the given PN space. In other words, given the return value of
* ossl_quic_cfq_get_priority_head(), returns the next-lower priority item.
* Returns NULL if the given item is the last item in priority order.
*/
QUIC_CFQ_ITEM *ossl_quic_cfq_item_get_priority_next(QUIC_CFQ_ITEM *item,
uint32_t pn_space);
```
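As a concrete illustration, the following is a minimal usage sketch based only on the declarations above. The header location, the `demo_`-prefixed helpers and the priority value are assumptions for illustration rather than part of the design.
```c
#include <stdint.h>
#include <openssl/crypto.h>
#include "internal/quic_cfq.h"   /* assumed header location */

/* Matches the cfq_free_cb signature; called when the CFQ is done with buf. */
static void demo_free_cb(unsigned char *buf, size_t buf_len, void *arg)
{
    (void)buf_len;
    (void)arg;
    OPENSSL_free(buf);
}

static int demo_enqueue_and_walk(QUIC_CFQ *cfq, unsigned char *encoded,
                                 size_t encoded_len, uint64_t frame_type,
                                 uint32_t pn_space)
{
    QUIC_CFQ_ITEM *item, *i;

    /* Hand the encoded frame to the CFQ; free_cb will release the buffer. */
    item = ossl_quic_cfq_add_frame(cfq, /*priority=*/10, pn_space, frame_type,
                                   encoded, encoded_len, demo_free_cb, NULL);
    if (item == NULL)
        return 0;

    /* Walk frames awaiting transmission in descending priority order. */
    for (i = ossl_quic_cfq_get_priority_head(cfq, pn_space);
         i != NULL;
         i = ossl_quic_cfq_item_get_priority_next(i, pn_space)) {
        const unsigned char *buf     = ossl_quic_cfq_item_get_encoded(i);
        size_t               buf_len = ossl_quic_cfq_item_get_encoded_len(i);

        /* ... copy buf[0..buf_len) into the packet under construction ... */
        (void)buf;
        (void)buf_len;
    }

    /* If the containing packet is later deemed lost, requeue the frame. */
    ossl_quic_cfq_mark_lost(cfq, item, UINT32_MAX);
    return 1;
}
```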
The Transmitted Packet Information Manager (TXPIM)
--------------------------------------------------
![](images/quic-fifm-txpim.png "QUIC TXPIM Overview")
The Transmitted Packet Information Manager (`QUIC_TXPIM`) is responsible for
allocating and keeping bookkeeping structures for packets which have been
transmitted, but not yet acknowledged, deemed lost or discarded. It is a
self-contained memory pool handing out `QUIC_TXPIM_PKT` structures. Each
`QUIC_TXPIM_PKT` is a self-contained data structure intended for consumption by
the FIFM.
The `QUIC_TXPIM_PKT` structure can be used for:
- Keeping track of all GCR control frames which were transmitted
in each packet, via a linked list of `QUIC_CFQ_ITEM`s.
- Keeping track of all REGEN-strategy control frame types, via a flag
for each frame type indicating whether the packet contained
such a frame.
- Keeping track of all stream IDs sent in a given packet, and
what ranges of the logical stream were sent, and whether
a FIN was sent.
- Keeping track of what logical ranges of the CRYPTO stream were sent.
In order to avoid unnecessary allocations, the FIFM also incorporates the ACK
Manager's `QUIC_ACKM_TX_PKT` structure into its per-packet bookkeeping
structure. The intention is for the `QUIC_TXPIM_PKT` to be the principal
allocation made per transmitted packet. The TX packetiser will obtain
a `QUIC_TXPIM_PKT` structure from the TXPIM, fill in the structure including
the ACK Manager data, and submit it via the FIFD which we introduce below.
The TXPIM does not do anything with the `QUIC_TXPIM_PKT` structure itself other
than managing its allocation and manipulation. Constructive use of the data kept
in the TXPIM is made by the FIFD.
### API
```c
/*
* QUIC Transmitted Packet Information Manager
* ===========================================
*/
typedef struct quic_txpim_st QUIC_TXPIM;
typedef struct quic_txpim_pkt_st {
/* ACKM-specific data. Caller should fill this. */
QUIC_ACKM_TX_PKT ackm_pkt;
/* Linked list of CFQ items in this packet. */
QUIC_CFQ_ITEM *retx_head;
/* Reserved for FIFD use. */
QUIC_FIFD *fifd;
/* Regenerate-strategy frames. */
unsigned int had_handshake_done : 1;
unsigned int had_max_data_frame : 1;
unsigned int had_max_streams_bidi_frame : 1;
unsigned int had_max_streams_uni_frame : 1;
unsigned int had_ack_frame : 1;
/* Private data follows. */
} QUIC_TXPIM_PKT;
/* Represents a range of bytes in an application or CRYPTO stream. */
typedef struct quic_txpim_chunk_st {
/* The stream ID, or UINT64_MAX for the CRYPTO stream. */
uint64_t stream_id;
/*
* The inclusive range of bytes in the stream. Exceptionally, if end <
* start, designates a frame of zero length (used for FIN-only frames).
*/
uint64_t start, end;
/*
* Whether a FIN was sent for this stream in the packet. Not valid for
* CRYPTO stream.
*/
unsigned int has_fin : 1;
} QUIC_TXPIM_CHUNK;
QUIC_TXPIM *ossl_quic_txpim_new(void);
void ossl_quic_txpim_free(QUIC_TXPIM *txpim);
/*
* Allocates a new QUIC_TXPIM_PKT structure from the pool. Returns NULL on
* failure. The returned structure is cleared of all data and is in a fresh
* initial state.
*/
QUIC_TXPIM_PKT *ossl_quic_txpim_pkt_alloc(QUIC_TXPIM *txpim);
/*
* Releases the TXPIM packet, returning it to the pool.
*/
void ossl_quic_txpim_pkt_release(QUIC_TXPIM *txpim, QUIC_TXPIM_PKT *fpkt);
/* Clears the chunk list of the packet, removing all entries. */
void ossl_quic_txpim_pkt_clear_chunks(QUIC_TXPIM_PKT *fpkt);
/* Appends a chunk to the packet. The structure is copied. */
int ossl_quic_txpim_pkt_append_chunk(QUIC_TXPIM_PKT *fpkt,
const QUIC_TXPIM_CHUNK *chunk);
/* Adds a CFQ item to the packet by prepending it to the retx_head list. */
void ossl_quic_txpim_pkt_add_cfq_item(QUIC_TXPIM_PKT *fpkt,
QUIC_CFQ_ITEM *item);
/*
* Returns a pointer to an array of stream chunk information structures for the
* given packet. The caller must call ossl_quic_txpim_pkt_get_num_chunks() to
* determine the length of this array.
*
* The chunks are sorted by (stream_id, start) in ascending order.
*/
const QUIC_TXPIM_CHUNK *ossl_quic_txpim_pkt_get_chunks(QUIC_TXPIM_PKT *fpkt);
/*
* Returns the number of entries in the array returned by
* ossl_quic_txpim_pkt_get_chunks().
*/
size_t ossl_quic_txpim_pkt_get_num_chunks(QUIC_TXPIM_PKT *fpkt);
/*
* Returns the number of QUIC_TXPIM_PKTs allocated by the given TXPIM that have
* yet to be returned to the TXPIM.
*/
size_t ossl_quic_txpim_get_in_use(QUIC_TXPIM *txpim);
```
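To make the chunk encoding rules concrete, here is a small hypothetical sketch recording two chunks against a packet obtained from `ossl_quic_txpim_pkt_alloc()`; the stream IDs and offsets are arbitrary illustration values.
```c
static int demo_record_chunks(QUIC_TXPIM_PKT *fpkt)
{
    QUIC_TXPIM_CHUNK chunk = {0};

    /* Bytes [0, 999] of stream 4 were sent in this packet, no FIN. */
    chunk.stream_id = 4;
    chunk.start     = 0;
    chunk.end       = 999;
    chunk.has_fin   = 0;
    if (!ossl_quic_txpim_pkt_append_chunk(fpkt, &chunk))
        return 0;

    /* A FIN-only frame on stream 8: zero length is encoded as end < start. */
    chunk.stream_id = 8;
    chunk.start     = 100;
    chunk.end       = 99;
    chunk.has_fin   = 1;
    return ossl_quic_txpim_pkt_append_chunk(fpkt, &chunk);
}
```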
The Frame-in-Flight Dispatcher (FIFD)
-------------------------------------
Finally, the CFQ, TXPIM and some interfaces to the ACKM are tied together via
the FIFD (`QUIC_FIFD`). The FIFD is completely stateless and provides reasonable
implementations for the on-loss, on-acked and on-discarded callbacks issued by
the ACK Manager.
The FIFD is used by obtaining a packet structure from the TXPIM, filling it in,
and then calling `ossl_quic_fifd_pkt_commit()`. The FIFD submits the packet to
the ACK Manager as a transmitted packet and provides its own callback
implementations to the ACK Manager for the packet. Note that the
`QUIC_TXPIM_PKT` is returned to the free pool once any of these callbacks occur;
once a packet's fate is known (acked, lost or discarded), use is immediately
made of the information in the `QUIC_TXPIM_PKT` and the `QUIC_TXPIM_PKT` is
immediately released. CFQ items may be freed (on ACK or discard) or transitioned
back to the NEW state (on loss).
The FIFD consumes various dependencies so that it can inform the appropriate
subsystems in the event of a packet being acked, lost or discarded. In
particular:
- It references a CFQ used to manage CFQ items;
- It references an ACK manager which it informs of transmitted packets;
- It references a TXPIM which manages each `QUIC_TXPIM_PKT`;
- It is provided with a callback to obtain a QUIC Send Stream based on a stream
ID. Thus the caller of the FIFD may implement whatever strategy it likes
to map stream IDs to QUIC Send Stream instances.
- It is provided with a callback which is called when it thinks a frame
should be regenerated using the REGEN strategy. Some of these are specific
to a given stream, in which case a stream ID is specified.
All of the state is in the dependencies referenced by the FIFD. The FIFD itself
simply glues all of these parts together.
### API
```c
typedef struct quic_fifd_st {
/* (internals) */
} QUIC_FIFD;
int ossl_quic_fifd_init(QUIC_FIFD *fifd,
QUIC_CFQ *cfq,
QUIC_ACKM *ackm,
QUIC_TXPIM *txpim,
/* stream_id is UINT64_MAX for the crypto stream */
OSSL_QSS *(*get_qss_by_id)(uint64_t stream_id,
void *arg),
void *get_qss_by_id_arg,
/* stream_id is UINT64_MAX if not applicable */
void (*regen_frame)(uint64_t frame_type,
uint64_t stream_id,
void *arg),
void *regen_frame_arg);
void ossl_quic_fifd_cleanup(QUIC_FIFD *fifd); /* (no-op) */
int ossl_quic_fifd_pkt_commit(QUIC_FIFD *fifd, QUIC_TXPIM_PKT *pkt);
```
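The following sketch shows how a caller might wire the FIFD to its dependencies. `DEMO_TX`, `demo_lookup_sstream()` and `demo_set_regen_flag()` are hypothetical stand-ins for the TX packetiser's own state and logic.
```c
/* Called by the FIFD to resolve a stream ID to a QUIC Send Stream. */
static OSSL_QSS *demo_get_qss_by_id(uint64_t stream_id, void *arg)
{
    DEMO_TX *tx = arg;                     /* hypothetical packetiser state */

    if (stream_id == UINT64_MAX)           /* the CRYPTO stream */
        return tx->crypto_sstream;

    return demo_lookup_sstream(tx, stream_id);
}

/* Called by the FIFD when a REGEN-strategy frame should be regenerated. */
static void demo_regen_frame(uint64_t frame_type, uint64_t stream_id, void *arg)
{
    DEMO_TX *tx = arg;

    demo_set_regen_flag(tx, frame_type, stream_id);
}

/* Wiring the FIFD to its dependencies. */
static int demo_fifd_setup(QUIC_FIFD *fifd, QUIC_CFQ *cfq, QUIC_ACKM *ackm,
                           QUIC_TXPIM *txpim, DEMO_TX *tx)
{
    return ossl_quic_fifd_init(fifd, cfq, ackm, txpim,
                               demo_get_qss_by_id, tx,
                               demo_regen_frame, tx);
}
```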
Typical Intended TX Packetiser Usage
------------------------------------
- TX Packetiser maintains flags for each REGEN-strategy frame type.
It sets the corresponding flag when the regenerate callback is issued by the
FIFD and clears it when transmitting a packet containing such a frame.
- TX Packetiser obtains a `QUIC_TXPIM_PKT` structure by calling
`ossl_quic_txpim_pkt_alloc()`.
- TX Packetiser fills in the ACKM part of the `QUIC_TXPIM_PKT`
(`QUIC_ACKM_TX_PKT`), except for the callback fields, which are handled by the
FIFD.
- TX Packetiser queries the ACK Manager to determine if an ACK frame
is desired, and if so adds it to the packet.
- TX Packetiser queries the CFQ to determine what control frames it places
in a packet. It does this before adding STREAM or CRYPTO frames (i.e.,
all CFQ frames are considered of higher priority). For each such frame
it places in a packet, it:
- calls `ossl_quic_txpim_pkt_add_cfq_item()` on the TXPIM to log the CFQ item
as having been transmitted in the given packet, so that the CFQ item can be
released or requeued depending on the ultimate fate of the packet.
- For each STREAM or CRYPTO frame included in a packet, the TX Packetiser:
- informs the QUIC Send Stream instance for that stream that a range of bytes
has been transmitted;
- also informs the QUIC Send Stream instance if FIN was set on a STREAM frame.
- calls `ossl_quic_txpim_pkt_append_chunk()` to log a logical range of
the given application or crypto stream as having been sent, so that it can
be subsequently marked as acknowledged or lost depending on the ultimate
fate of the packet.
- TX Packetiser calls `ossl_quic_fifd_pkt_commit()`. The FIFD takes care
of submitting the packet to the ACK Manager and provides its own callback
implementation. It also takes care of informing the CFQ that any CFQ items
which were added via `ossl_quic_txpim_pkt_add_cfq_item()` have been
transmitted.
In the event of packet loss, ACK or discard, the appropriate QUIC Send Stream,
CFQ and regenerate callback calls are made. Regardless of the outcome, the
TXPIM is released.
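A compressed sketch of this flow follows. The ACK Manager data is elided, and the error handling shown is an assumption; how a failed commit should be cleaned up is not specified in this document.
```c
static int demo_commit_packet(QUIC_TXPIM *txpim, QUIC_FIFD *fifd,
                              QUIC_CFQ_ITEM **cfq_items, size_t num_cfq_items,
                              const QUIC_TXPIM_CHUNK *chunks, size_t num_chunks)
{
    QUIC_TXPIM_PKT *fpkt = ossl_quic_txpim_pkt_alloc(txpim);
    size_t i;

    if (fpkt == NULL)
        return 0;

    /* fpkt->ackm_pkt would be filled in here (fields not shown in this doc). */

    /* Log each CFQ-managed control frame placed in the packet. */
    for (i = 0; i < num_cfq_items; ++i)
        ossl_quic_txpim_pkt_add_cfq_item(fpkt, cfq_items[i]);

    /* Log each STREAM/CRYPTO range placed in the packet. */
    for (i = 0; i < num_chunks; ++i)
        if (!ossl_quic_txpim_pkt_append_chunk(fpkt, &chunks[i]))
            goto err;

    /* Submit to the ACK Manager via the FIFD; callbacks are wired up for us. */
    if (!ossl_quic_fifd_pkt_commit(fifd, fpkt))
        goto err;

    return 1;

 err:
    /* Assumed cleanup; actual ownership rules on failure are not documented. */
    ossl_quic_txpim_pkt_release(txpim, fpkt);
    return 0;
}
```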

@@ -0,0 +1,536 @@
QUIC I/O Architecture
=====================
This document discusses possible implementation options for the I/O architecture
internal to the libssl QUIC implementation, discusses the underlying design
constraints driving this decision and introduces the resulting I/O architecture.
It also identifies potential hazards to existing applications, and identifies
how those hazards are mitigated.
Objectives
----------
The [requirements for QUIC](./quic-requirements.md) which have formed the basis
for implementation include the following requirements:
- The application must have the ability to be in control of the event loop
without requiring callbacks to process the various events. An application must
also have the ability to operate in “blocking” mode.
- High performance applications (primarily server based) using existing libssl
APIs; using custom network interaction BIOs in order to get the best
performance at a network level as well as OS interactions (IO handling, thread
handling, using fibres). Would prefer to use the existing APIs - they don't
want to throw away what they've got. Where QUIC necessitates a change they
would be willing to make minor changes.
As such, there are several objectives for the I/O architecture of the QUIC
implementation:
- We want to support both blocking and non-blocking semantics
for application use of the libssl APIs.
- In the case of non-blocking applications, it must be possible
for an application to do its own polling and make its own event
loop.
- We want to support custom BIOs on the network side and to the extent
feasible, minimise the level of adaptation needed for any custom BIOs already
in use on the network side. More generally, the integrity of the BIO
abstraction layer should be preserved.
QUIC-Related Requirements
-------------------------
Note that implementation of QUIC will require that the underlying network BIO
passed to the QUIC implementation be configured to support datagram semantics
instead of bytestream semantics as has been the case with traditional TLS
over TCP. This will require applications using custom BIOs on the network side
to make substantial changes to the implementation of those custom BIOs to model
datagram semantics. These changes are not minor, but there is no way around this
requirement.
It should also be noted that implementation of QUIC requires handling of timer
events as well as the circumstances where a network socket becomes readable or
writable. In many cases we need to handle these events simultaneously (e.g. wait
until a socket becomes readable, or writable, or a timeout expires, whichever
comes first).
Note that the discussion in this document primarily concerns usage of blocking
vs. non-blocking I/O in the interface between the QUIC implementation and an
underlying BIO provided to the QUIC implementation to provide it access to the
network. This is independent of and orthogonal to the application interface to
libssl, which will support both blocking and non-blocking I/O.
Blocking vs. Non-Blocking Modes in Underlying Network BIOs
----------------------------------------------------------
The above constraints make it effectively a requirement that non-blocking I/O be
used for the calls to the underlying network BIOs. To illustrate this point, we
first consider how QUIC might be implemented using blocking network I/O
internally.
To function correctly and provide blocking semantics at the application level,
our QUIC implementation must be able to block such that it can respond to any of
the following events for the underlying network read and write BIOs immediately:
- The underlying network write BIO becomes writeable;
- The underlying network read BIO becomes readable;
- A timeout expires.
### Blocking sockets and select(3)
Firstly, consider how this might be accomplished using the Berkeley sockets API.
Blocking on all three wakeup conditions listed above would require use of an API
such as select(3) or poll(3), regardless of whether the network socket is
configured in blocking mode or not.
While in principle APIs such as select(3) can be used with a socket in blocking
mode, this is not an advisable usage mode. If a socket is in blocking mode,
calls to send(3) or recv(3) may block for some arbitrary period of time, meaning
that our QUIC implementation cannot handle incoming data (if we are blocked on
send), send outgoing data (if we are blocked on receive), or handle timeout
events.
Though it can be argued that a select(3) call indicating readability or
writeability should guarantee that a subsequent send(3) or recv(3) call will not
block, there are several reasons why this is an extremely undesirable solution:
- It is quite likely that there are buggy OSes out there which perform spurious
wakeups from select(3).
- The fact that a socket is writeable does not necessarily mean that a datagram
of the size we wish to send is writeable, so a send(3) call could block
anyway.
- This usage pattern precludes multithreaded use barring some locking scheme
due to the possibility of other threads racing between the call to select(3)
and the subsequent I/O call. This undermines our intentions to support
multi-threaded network I/O on the backend.
Moreover, our QUIC implementation will not drive the Berkeley sockets API
directly but uses the BIO abstraction to access the network, so these issues are
then compounded by the limitations of our existing BIO interfaces. We do not
have a BIO interface which provides for select(3)-like functionality or which
can implement the required semantics above.
Moreover, even if we used select(3) directly, select(3) only gives us a
guarantee (under a non-buggy OS) that a single syscall will not block. However,
we have no guarantee in the API contract for BIO_read(3) or BIO_write(3) that a
given BIO implementation maps such a call to only a single system call (or to
any system call at all), so this does not work either. Therefore,
trying to implement QUIC on top of blocking I/O in this way would require
violating the BIO abstraction layer, and would not work with custom BIOs (even
if the poll descriptor concept discussed below were adopted).
### Blocking sockets and threads
Another conceptual possibility is that blocking calls could be kept ongoing in
parallel threads. Under this model, there would be three threads:
- a thread which exists solely to execute blocking calls to the `BIO_write` of
an underlying network BIO,
- a thread which exists solely to execute blocking calls to the `BIO_read` of an
underlying network BIO,
- a thread which exists solely to wait for and dispatch timeout events.
This could potentially be reduced to two threads if it is assumed that
`BIO_write` calls do not take an excessive amount of time.
The premise here is that the front-end I/O API (`SSL_read`, `SSL_write`, etc.)
would coordinate and synchronise with these background worker threads via
threading primitives such as conditional variables, etc.
This has a large number of disadvantages:
- There is a hard requirement for threading functionality in order to be
able to support blocking semantics at the application level. Applications
which require blocking semantics would only be able to function in thread
assisted mode. In environments where threading support is not available or
desired, our APIs would only be usable in a non-blocking fashion.
- Several threads are spawned which the application is not in control of.
This undermines our general approach of providing the application with control
over OpenSSL's use of resources, such as allowing the application to do its
own polling or provide its own allocators.
At a minimum for a client, there must be two threads per connection. This
means if an application opens many outgoing connections, there will need
to be `2n` extra threads spawned.
- By blocking in `BIO_write` calls, this precludes correct implementation of
QUIC. Unlike any analogue in TLS, QUIC packets are time sensitive and intended
to be transmitted as soon as they are generated. QUIC packets contain fields
such as the ACK Delay value, which is intended to describe the time between a
packet being received and a return packet being generated. Correct calculation
of this field is necessary for correct calculation of the connection RTT. It is
therefore important to only generate packets when they are ready to be sent,
otherwise suboptimal performance will result. This is a usage model which
aligns optimally to non-blocking I/O and which cannot be accommodated
by blocking I/O.
- Since existing custom BIOs will not be expecting concurrent `BIO_read` and
`BIO_write` calls, they will need to be adapted to support this, which is
likely to require substantial rework of those custom BIOs (trivial locking of
calls obviously does not work since both of these calls must be able to block
on network I/O simultaneously).
Moreover, this does not appear to be a realistically implementable approach:
- The question is posed of how to handle connection teardown, which does not
seem to be solvable. If parallel threads are blocked in blocking `BIO_read`
and `BIO_write` calls on some underlying network BIO, there needs to be some
way to force these calls to return once `SSL_free` is called and we need to
tear down the connection. However, the BIO interface does not provide
any way to do this. *At best* we might assume the BIO is a `BIO_s_dgram`
(but cannot assume this in the general case), but even then we can only
accomplish teardown by violating the BIO abstraction and closing the
underlying socket.
This is the only portable way to ensure that a recv(3) call to the same socket
returns. This obviously is a highly application-visible change (and is likely
to be far more disruptive than configuring the socket into non-blocking mode).
Moreover, it is not workable anyway because it only works for a socket-based
BIO and violates the BIO abstraction. For BIOs in general, there does not
appear to be any viable solution to the teardown issue.
Even if this approach were successfully implemented, applications will still
need to change to using network BIOs with datagram semantics. For applications
using custom BIOs, this is likely to require substantial rework of those BIOs.
There is no possible way around this. Thus, even if this solution were adopted
(notwithstanding the issues which preclude this noted above) for the purposes of
accommodating applications using custom network BIOs in a blocking mode, these
applications would still have to completely rework their implementation of those
BIOs. In any case, it is expected to be comparatively rare that sophisticated
applications implementing their own custom BIOs will do so in a blocking mode.
### Use of non-blocking I/O
By comparison, use of non-blocking I/O and select(3) or similar APIs on the
network side makes satisfying our requirements for QUIC easy, and also allows
our internal approach to I/O to be flexibly adapted in the future as
requirements may evolve.
This is also the approach used by all other known QUIC implementations; it is
highly unlikely that any QUIC implementations exist which use blocking network
I/O, as (as mentioned above) it would lead to suboptimal performance due to the
ACK delay issue.
Note that this is orthogonal to whether we provide blocking I/O semantics to the
application. We can use non-blocking I/O internally while using this to provide
either blocking or non-blocking semantics to the application, based on what the
application requests.
This approach in general requires that a network socket be configured in
non-blocking mode. Though some OSes support a `MSG_DONTWAIT` flag which allows a
single I/O operation to be made non-blocking, not all OSes support this (e.g.
Windows), thus this cannot be relied on. As such, we need to configure any
socket FD we use into non-blocking mode.
Of the approaches outlined in this document, the use of non-blocking I/O has the
fewest disadvantages and is the only approach which appears to actually be
implementable in practice. Moreover, most of the disadvantages can be readily
mitigated:
- We rely on having a select(3) or poll(3) like function available from the
OS.
However:
- Firstly, we already rely on select(3) in our code, at least in
non-`no-sock` builds, so this does not appear to raise any portability
issues;
- Secondly, we have the option of providing a custom poller interface which
allows an application to provide its own implementation of a
select(3)-like function. In fact, this has the potential to be quite
powerful and would allow the application to implement its own pollable
BIOs, and therefore perform blocking I/O on top of any custom BIO.
For example, while historically none of our own memory-based BIOs have
supported blocking semantics, a sophisticated application could if it
wished choose to implement a custom blocking memory BIO and implement a
custom poller which synchronises using a custom poll descriptor based
around condition variables rather than sockets. Thus this scheme is
highly flexible.
(It is worth noting also that the implementation of blocking semantics at
the application level also does not rely on any privileged access to the
internals of the QUIC implementation and an application could if it wished
build blocking semantics out of a non-blocking QUIC instance; this is not
particularly difficult, though providing custom pollers here would mean
there should be no need for an application to do so.)
- Configuring a socket into non-blocking mode might confuse an application.
However:
- Applications will already have to make changes to any network-side BIOs,
for example switching from a `BIO_s_socket` to a `BIO_s_dgram`, or from a
BIO pair to a `BIO_s_dgram_pair`. Custom BIOs will need to be
substantially reworked to switch from bytestream semantics to datagram
semantics. Such applications will already need substantial changes, and
this is unavoidable.
Of course, application impacts and migration guidance can (and will) all
be documented.
- In order for an application to be confused by us putting a socket into
non-blocking mode, it would need to be trying to use the socket in some
way. But it is not possible for an application to pass a socket to our
QUIC implementation, and also try to use the socket directly, and have
QUIC still work. Using QUIC necessarily requires that an application not
also be trying to make use of the same socket.
- There are some circumstances where an application might want to multiplex
other protocols onto the same UDP socket, for example with protocols like
RTP/RTCP or STUN; this can be facilitated using the QUIC fixed bit.
However, these use cases cannot be supported without explicit assistance
from a QUIC implementation and this use case cannot be facilitated by
simply sharing a network socket, as incoming datagrams will not be routed
correctly. (We may offer some functionality in future to allow this to be
coordinated but this is not for MVP.) Thus this also is not a concern.
Moreover, it is extremely unlikely that any such applications are using
sockets in blocking mode anyway.
- The poll descriptor interface adds complexity to the BIO interface.
Advantages:
- An application retains full control of its event loop in non-blocking mode.
When using libssl in application-level blocking mode, via a custom poller
interface, the application would actually be able to exercise more control
over I/O than it actually is at present when using libssl in blocking mode.
- Feasible to implement and already working in tests.
Minimises further development needed to ship.
- Does not rely on creating threads and can support blocking I/O at the
application level without relying on thread assisted mode.
- Does not require an application-provided network-side custom BIO to be
reworked to support concurrent calls to it.
- The poll descriptor interface will allow applications to implement custom
modes of polling in the future (e.g. an application could even build
blocking application-level I/O on top of a custom memory-based BIO
using condition variables, if it wished). This is actually more flexible
than the current TLS stack, which cannot be used in blocking mode when used
with a memory-based BIO.
- Allows performance-optimal implementation of QUIC RFC requirements.
- Ensures our internal I/O architecture remains flexible for future evolution
without breaking compatibility in the future.
Use of Internal Non-Blocking I/O
--------------------------------
Based on the above evaluation, implementation has been undertaken using
non-blocking I/O internally. Applications can use blocking or non-blocking I/O
at the libssl API level. Network-level BIOs must operate in a non-blocking mode
or be configurable by QUIC to this end.
![Block Diagram](images/quic-io-arch-1.png "Block Diagram")
### Support of arbitrary BIOs
We need to support not just socket FDs but arbitrary BIOs as the basis for the
use of QUIC. The use of QUIC with e.g. `BIO_s_dgram_pair`, a bidirectional
memory buffer with datagram semantics, is to be supported as part of MVP. This
must be reconciled with the desire to support application-managed event loops.
Broadly, the intention so far has been to enable the use of QUIC with an
application event loop in application-level non-blocking mode by exposing an
appropriate OS-level synchronisation primitive to the application. On \*NIX
platforms, this essentially means we provide the application with:
- An FD which should be polled for readability, writability, or both; and
- A deadline (if any is currently applicable).
Once either of these conditions is met, the QUIC state machine can be
(potentially) advanced meaningfully, and the application is expected to reenter
the QUIC state machine by calling `SSL_tick()` (or `SSL_read()` or
`SSL_write()`).
This model is readily supported when the read and write BIOs we are provided
with are socket BIOs:
- The read-pollable FD is the FD of the read BIO.
- The write-pollable FD is the FD of the write BIO.
However, things become more complex when we are dealing with memory-based BIOs
such as `BIO_s_dgram_pair` which do not naturally correspond to any OS primitive
which can be used for synchronisation, or when we are dealing with an
application-provided custom BIO.
### Pollable and Non-Pollable BIOs
In order to accommodate these various cases, we draw a distinction between
pollable and non-pollable BIOs.
- A pollable BIO is a BIO which can provide some kind of OS-level
synchronisation primitive, which can be used to determine when
the BIO might be able to do useful work once more.
- A non-pollable BIO has no naturally associated OS-level synchronisation
primitive, but its state only changes in response to calls made to it (or to
a related BIO, such as the other end of a pair).
#### Supporting Pollable BIOs
“OS-level synchronisation primitive” is deliberately vague. Most modern OSes use
unified handle spaces (UNIX, Windows) though it is likely there are more obscure
APIs on these platforms which have other handle spaces. However, this
unification is not necessarily significant.
For example, Windows sockets are kernel handles and thus like any other object
they can be used with the generic Win32 `WaitForSingleObject()` API, but not in
a useful manner; the generic readiness mechanism for Windows handles is not
plumbed in for socket handles, and so sockets are simply never considered ready
for the purposes of this API, which will never return. Instead, the
WinSock-specific `select()` call must be used. On the other hand, other kinds of
synchronisation primitive like a Win32 Event must use `WaitForSingleObject()`.
Thus while in theory most modern operating systems have unified handle spaces in
practice there are substantial usage differences between different handle types.
As such, an API to expose a synchronisation primitive should be of a tagged
union design supporting possible variation.
A BIO object will provide methods to retrieve a pollable OS-level
synchronisation primitive which can be used to determine when the QUIC state
machine can (potentially) do more work. This maintains the integrity of the BIO
abstraction layer. Equivalent SSL object API calls which forward to the
equivalent calls of the underlying network BIO will also be provided.
The core mechanic is as follows:
```c
#define BIO_POLL_DESCRIPTOR_TYPE_NONE 0
#define BIO_POLL_DESCRIPTOR_TYPE_SOCK_FD 1
#define BIO_POLL_DESCRIPTOR_CUSTOM_START 8192
#define BIO_POLL_DESCRIPTOR_NUM_CUSTOM 4
typedef struct bio_poll_descriptor_st {
int type;
union {
int fd;
union {
void *ptr;
uint64_t u64;
} custom[BIO_POLL_DESCRIPTOR_NUM_CUSTOM];
} value;
} BIO_POLL_DESCRIPTOR;
int BIO_get_rpoll_descriptor(BIO *b, BIO_POLL_DESCRIPTOR *desc);
int BIO_get_wpoll_descriptor(BIO *b, BIO_POLL_DESCRIPTOR *desc);
int SSL_get_rpoll_descriptor(SSL *ssl, BIO_POLL_DESCRIPTOR *desc);
int SSL_get_wpoll_descriptor(SSL *ssl, BIO_POLL_DESCRIPTOR *desc);
```
Currently only a single descriptor type is defined, which is a FD on \*NIX and a
Winsock socket handle on Windows. These use the same type to minimise code
changes needed on different platforms in the common case of an OS network
socket. (Use of an `int` here is strictly incorrect for Windows; however, this
style of usage is prevalent in the OpenSSL codebase, so for consistency we
continue the pattern here.)
Poll descriptor types at or above `BIO_POLL_DESCRIPTOR_CUSTOM_START` are
reserved for application-defined use. The `value.custom` field of the
`BIO_POLL_DESCRIPTOR` structure is provided for applications to store values of
their choice in. An application is free to define the semantics.
libssl will not know how to poll custom poll descriptors itself, thus these are
only useful when the application will provide a custom poller function, which
performs polling on behalf of libssl and which implements support for those
custom poll descriptors.
For `BIO_s_ssl`, the `BIO_get_[rw]poll_descriptor` functions are equivalent to
the `SSL_get_[rw]poll_descriptor` functions. The `SSL_get_[rw]poll_descriptor`
functions are equivalent to calling `BIO_get_[rw]poll_descriptor` on the
underlying BIOs provided to the SSL object. For a socket BIO, this will likely
just yield the socket's FD. For memory-based BIOs, see below.
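As an illustration, a non-blocking application on a POSIX platform might block in its own event loop roughly as follows. This is a sketch only: header locations are assumed, deadline handling is omitted, and a real application would typically only poll for writability when it actually has data pending to send.
```c
#include <sys/select.h>
#include <openssl/ssl.h>

static int demo_wait_for_quic(SSL *ssl)
{
    BIO_POLL_DESCRIPTOR rd, wd;
    fd_set rfds, wfds;
    int maxfd;

    if (!SSL_get_rpoll_descriptor(ssl, &rd)
        || !SSL_get_wpoll_descriptor(ssl, &wd)
        || rd.type != BIO_POLL_DESCRIPTOR_TYPE_SOCK_FD
        || wd.type != BIO_POLL_DESCRIPTOR_TYPE_SOCK_FD)
        return 0;   /* non-pollable or custom descriptor; not handled here */

    FD_ZERO(&rfds);
    FD_ZERO(&wfds);
    FD_SET(rd.value.fd, &rfds);
    FD_SET(wd.value.fd, &wfds);
    maxfd = rd.value.fd > wd.value.fd ? rd.value.fd : wd.value.fd;

    /* A real loop would also bound this wait by the QUIC timeout deadline. */
    if (select(maxfd + 1, &rfds, &wfds, NULL, NULL) < 0)
        return 0;

    /* Something may have happened; reenter the QUIC state machine. */
    SSL_tick(ssl);
    return 1;
}
```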
#### Supporting Non-Pollable BIOs
Where we are provided with a non-pollable BIO, we cannot provide the application
with any primitive used for synchronisation and it is assumed that the
application will handle its own network I/O, for example via a
`BIO_s_dgram_pair`.
When libssl calls `BIO_get_[rw]poll_descriptor` on the underlying BIO, the call
fails, indicating that a non-pollable BIO is being used. Thus, if an application
calls `SSL_get_[rw]poll_descriptor`, that call also fails.
There are various circumstances which need to be handled:
- The QUIC implementation wants to write data to the network but
is currently unable to (e.g. `BIO_s_dgram_pair` is full).
This is not hard as our internal TX record layer allows arbitrary buffering.
The only limit comes when QUIC flow control (which only applies to
application stream data) applies a limit; then calls to e.g. `SSL_write`
must fail with `SSL_ERROR_WANT_WRITE` (see the sketch after this list).
- The QUIC implementation wants to read data from the network
but is currently unable to (e.g. `BIO_s_dgram_pair` is empty).
Here calls like `SSL_read` need to fail with `SSL_ERROR_WANT_READ`; we
thereby support libssl's classic nonblocking I/O interface.
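For example, a non-blocking application using a `BIO_s_dgram_pair` might drive `SSL_write` along the following lines; `demo_pump_network()` is a hypothetical helper which moves datagrams between the memory BIO pair and the application's own socket handling.
```c
static int demo_write_all(SSL *ssl, const unsigned char *buf, size_t len)
{
    size_t written = 0;

    while (written < len) {
        int ret = SSL_write(ssl, buf + written, (int)(len - written));

        if (ret > 0) {
            written += (size_t)ret;
            continue;
        }

        switch (SSL_get_error(ssl, ret)) {
        case SSL_ERROR_WANT_READ:
        case SSL_ERROR_WANT_WRITE:
            /* Move datagrams to/from the network ourselves, then retry. */
            if (!demo_pump_network(ssl))
                return 0;
            break;
        default:
            return 0;
        }
    }

    return 1;
}
```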
It is worth noting that theoretically a memory-based BIO could be implemented
which is pollable, for example using condition variables. An application could
implement a custom BIO, custom poll descriptor and custom poller to facilitate
this.
### Configuration of Blocking vs. Non-Blocking Mode
Traditionally an SSL object has operated either in blocking mode or non-blocking
mode without requiring explicit configuration; if a socket returns EWOULDBLOCK
or similar, it is handled appropriately, and if a socket call blocks, there is
no issue. Since the QUIC implementation is building on non-blocking I/O, this
implicit configuration of non-blocking mode is not feasible.
Note that Windows does not have an API for determining whether a socket is in
blocking mode, so it is not possible to use the initial state of an underlying
socket to determine if the application wants to use non-blocking I/O or not.
Moreover this would undermine the BIO abstraction.
As such, an explicit call is introduced to configure an SSL (QUIC) object into
non-blocking mode:
```c
int SSL_set_blocking_mode(SSL *s, int blocking);
int SSL_get_blocking_mode(SSL *s);
```
Applications desiring non-blocking operation will need to call this API to
configure a new QUIC connection accordingly. Blocking mode is chosen as the
default for parity with traditional Berkeley sockets APIs and to make things
simpler for blocking applications, which are likely to be seeking a simpler
solution. However, blocking mode cannot be supported with a non-pollable BIO,
and thus blocking mode defaults to off when used with such a BIO.
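For instance, an application driving its own event loop might opt in as follows. This is a sketch; the 1-on-success return convention is assumed here rather than specified above.
```c
static int demo_configure_nonblocking(SSL *ssl)
{
    /* Request non-blocking semantics at the SSL API level. */
    if (!SSL_set_blocking_mode(ssl, 0))
        return 0;

    /* Confirm the mode now in effect (0 = non-blocking). */
    return SSL_get_blocking_mode(ssl) == 0;
}
```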
A method is also needed for the QUIC implementation to inform an underlying BIO
that it must not block. The SSL object will call this function when it is
provided with an underlying BIO. For a socket BIO this can set the socket as
non-blocking; for a memory-based BIO it is a no-op; for `BIO_s_ssl` it is
equivalent to a call to `SSL_set_blocking_mode()`.
### Internal Polling
When blocking mode is configured, the QUIC implementation will call
`BIO_get_[rw]poll_descriptor` on the underlying BIOs and use a suitable OS
function (e.g. `select()`) or, if configured, custom poller function, to block.
This will be implemented by an internal function which can accept up to two poll
descriptors (one for the read BIO, one for the write BIO), which might be
identical.
Blocking mode cannot be used with a non-pollable underlying BIO. If
`BIO_get_[rw]poll_descriptor` is not implemented for either of the underlying
read and write BIOs, blocking mode cannot be enabled and blocking mode defaults
to off.

@@ -0,0 +1,141 @@
QUIC Design Overview
====================
The QUIC implementation in OpenSSL is roughly described by the following
picture.
![alt_text](images/quic-overview.svg "QUIC Implementation Building Blocks")
SSL API
-------
The application facing public API of the OpenSSL library.
Stream Send and Read Buffers
----------------------------
Buffers for stream data to be sent or received from the peer over the
QUIC protocol. These are necessary to support existing semantics of the
SSL_read and SSL_write functions.
They will be bypassed with a single-copy API for read and write (_not
for MVP_).
Frame in Flight Manager
-----------------------
The frame in flight manager manages the queueing of frames which may need to be
retransmitted if the packets in which they were transmitted were lost. It is
[discussed in more detail here.](./quic-fifm.md)
Connection State Machine
------------------------
A state machine handling the state for a QUIC connection.
Connection ID Cache
-------------------
A table matching Connection IDs with Connection objects represented
via SSL objects.
_In MVP there is a many-to-1 matching of Connection IDs to Connection
objects. Refer to the third paragraph in [5.1]._
[5.1]: https://datatracker.ietf.org/doc/html/rfc9000#section-5.1
Timer And Event Queue
---------------------
Queue of events that need to be handled asynchronously or at a later
time.
TLS Handshake Record Layer
--------------------------
A module that uses the Record Layer API to implement the inner TLS-1.3
protocol handshake. It produces and parses the QUIC CRYPTO frames.
TX Packetizer
-------------
This module creates frames from the application data obtained from
the application. It also receives CRYPTO frames from the TLS Handshake
Record Layer and ACK frames from the ACK Handling And Loss Detector
subsystem.
RX Frame Handler
----------------
Decrypted packets are split into frames here and the frames are forwarded
either as data or as events to the subsequent modules based on the frame
type. The Flow Controller and the Statistics Collector are consulted for
decisions and to record statistics about the received stream data.
Flow Controller
---------------
This module is consulted by the TX Packetizer and RX Frame Handler for flow
control decisions at both the stream and connection levels.
Statistics Collector
--------------------
This module maintains statistics about a connection, most notably the estimated
round trip time to the remote peer.
QUIC Write Record Layer
-----------------------
Encryption of packets according to the given encryption level and with
the appropriate negotiated algorithm happens here.
Resulting packets are sent through the Datagram BIO interface to the
network.
QUIC Read Record Layer
----------------------
Decryption of packets according to the given encryption level and with
the appropriate negotiated algorithm happens here.
Packets are received from the network through the Datagram BIO interface.
Congestion Controller
---------------------
This is a pluggable API that provides calls to record data relevant
for congestion control decisions and to query for decision on whether
more data is allowed to be sent or not.
The module is called by the TX Packetizer and the ACK Handling And
Loss Detector modules.
ACK Handling And Loss Detector
------------------------------
A module that tracks packets sent to the peer and the ACK frames received for
them. It detects lost packets (when an ACK is not received in time). It informs
the TX Packetizer that it can drop frames waiting to be ACKed once an ACK is received.
It also schedules retransmits of frames from packets that are considered
to be lost.
The module also handles the receiving side - it schedules when ACK frames should
be sent for the received packets.
Path And Conn Demultiplexer
---------------------------
On the server side this module is shared between multiple SSL connection objects,
which makes it a special kind of module. It dispatches the received packets
to the appropriate SSL Connection by consulting the Connection ID Cache.
_For the client side and MVP, this module just checks that the received packet has
the appropriate Connection ID and optionally schedules sending stateless
reset for packets with other Connection IDs._
Datagram BIO
------------
Implementation of BIO layer that supports `BIO_sendmmsg` and `BIO_recvmmsg`
calls.

@@ -0,0 +1,202 @@
QUIC Requirements
=================
There have been various sources of requirements for the OpenSSL QUIC
implementation. The following sections summarise the requirements obtained from
each of these sources.
Original OMC Requirements
-------------------------
The OMC have specified an initial set of requirements for QUIC as well as other
requirements for the coming releases. The remainder of this section summarises
the OMC requirements that were originally
[posted](https://mta.openssl.org/pipermail/openssl-project/2021-October/002764.html)
and that were specific to QUIC:
* The focus for the next releases is QUIC, with the objective of providing a
fully functional QUIC implementation over a series of releases (2-3).
* The current libssl record layer includes support for TLS, DTLS and KTLS. QUIC
will introduce another variant and there may be more over time. The OMC requires
a pluggable record layer interface to be implemented to enable this to be less
intrusive, more maintainable, and to harmonize the existing record layer
interactions between TLS, DTLS, KTLS and the planned QUIC protocols. The pluggable
record layer interface will be internal only for MVP and be public in a future
release.
* The application must have the ability to be in control of the event loop without
requiring callbacks to process the various events. An application must also have
the ability to operate in “blocking” mode.
* The QUIC implementation must include at least one congestion control algorithm.
The fully functional release will provide the ability to plug in more
implementations (via a provider).
* The minimum viable product (MVP) for the next release is a pluggable record
layer interface and a single stream QUIC client in the form of s_client that
does not require significant API changes. In the MVP, interoperability should be
prioritized over strict standards compliance.
* The MVP will not contain a library API for an HTTP/3 implementation (it is a
non-goal of the initial release). Our expectation is that other libraries will
be able to use OpenSSL to build an HTTP/3 client on top of OpenSSL for the
initial release.
* Once we have a fully functional QUIC implementation (in a subsequent release),
it should be possible for external libraries to be able to use the pluggable
record layer interface and it should offer a stable ABI (via a provider).
* The next major release number is intended to be reserved for the fully
functional QUIC release (this does not imply we expect API breakage to occur as
part of this activity - we can change major release numbers even if APIs remain
compatible).
* PR#8797 will not be merged and compatibility with the APIs proposed in that PR
is a non-goal.
* We do not plan to place protocol versions themselves in separate providers at
this stage.
* For the MVP a single interop target (i.e. the server implementation list):
1. [Cloudflare](https://cloudflare-quic.com/)
* Testing against other implementations is not a release requirement for the MVP.
### Non-QUIC OpenSSL Requirements
In addition to the QUIC requirements, the OMC also required that:
* The objective is to have shorter release timeframes, with releases occurring
every six months.
* The platform policy, covering the primary and secondary platforms, should be
followed. (Note that this includes testing of primary and secondary platforms
on project CI)
OMC Blog post requirements
--------------------------
The OMC additionally published a
[blog post](https://www.openssl.org/blog/blog/2021/11/25/openssl-update/) which
also contained some requirements regarding QUIC. Statements from that blog post
have been extracted, paraphrased and summarised here as requirements:
* The objective is to have APIs that allow applications to support any of our
existing (or future) security protocols and to select between them with minimal
effort.
* In TLS/DTLS each connection represents a single stream and each connection is
treated separately by our APIs. In the context of QUIC, APIs to be able to
handle a collection of streams will be necessary for many applications. With the
objective of being able to select which security protocol is used, APIs that
encompass that capability for all protocols will need to be added.
* The majority of existing applications operate using a single connection (i.e.
effectively they are single stream in nature) and this fundamental context of
usage needs to remain simple.
* We need to enable the majority of our existing users' applications to be able
to work in a QUIC environment while expanding our APIs to enable future
application usage to support the full capabilities of QUIC.
* We will end up with interfaces that allow other QUIC implementations
(outside of OpenSSL) to be able to use the TLS stack within OpenSSL; however,
that is not the initial focus of the work.
* A long term supported core API for external QUIC library implementation usage
in a future OpenSSL release will be provided.
* Make it easy for our users to communicate securely, flexibly and performantly
using whatever security protocol is most appropriate for the task at hand.
* We will provide unified, consistent APIs to our users that work for all types
of applications ranging from simple single stream clients up to optimised high
performance servers.
Additional OTC analysis
-----------------------
An OTC document provided the following analysis.
There are different types of application that we need to cater for:
* Simple clients that just do basic SSL_read/SSL_write or BIO_read/BIO_write
interactions. We want to be able to enable them to transfer to using single
stream QUIC easily. (MVP)
* Simple servers that just do basic SSL_read/SSL_write or BIO_read/BIO_write
interactions. We want to be able to enable them to transfer to using single
stream QUIC easily. More likely to want to do multi-stream.
* High performance applications (primarily server based) using existing libssl
APIs; using custom network interaction BIOs in order to get the best performance
at a network level as well as OS interactions (IO handling, thread handling,
using fibres). Would prefer to use the existing APIs - they dont want to throw
away what theyve got. Where QUIC necessitates a change they would be willing to
make minor changes.
* New applications. Would be willing to use new APIs to achieve their goals.
Other requirements
------------------
The following section summarises requirements obtained from other sources and
discussions.
* The differences between QUIC, TLS, DTLS etc. should be minimised at an API
level - the structure of the application should be the same. At runtime
applications should be able to pick whatever protocol they want to use.
* It shouldn't be harder to do single stream just because multi-stream as a
concept exists.
* It shouldn't be harder to do TLS just because you have the ability to do DTLS
or QUIC.
* Application authors will need good documentation, demos, examples etc.
* QUIC performance should be comparable (in some future release - not MVP) with
other major implementations and measured by a) handshakes per second
b) application data throughput (bytes per second) for a single stream/connection
* The internal architecture should allow for the fact that we may want to
support "single copy" APIs in the future:
A single copy API would make it possible for application data being sent or
received via QUIC to only be copied from one buffer to another once. The
"single" copy allowed is to allow for the implicit copy in an encrypt or decrypt
operation.
Single copy for sending data occurs when the application supplies a buffer of
data to be sent. No copies of that data are made until it is encrypted. Once
encrypted no further copies of the encrypted data are made until it is provided
to the kernel for sending via a system call.
Single copy for receiving data occurs when a library supplied buffer is filled
by the kernel via a system call from the socket. No further copies of that data
are made until it is decrypted. It is decrypted directly into a buffer made
available to (or supplied by) the application with no further internal copies
made.
MVP Requirements (3.2)
----------------------
This section summarises those requirements from the above that are specific to
the MVP.
* a pluggable record layer (not public for MVP)
* a single stream QUIC client in the form of s_client that does not require
significant API changes.
* interoperability should be prioritized over strict standards compliance.
* Single interop target for testing (Cloudflare)
* Testing against other implementations is not a release requirement for the MVP.
* Support simple clients that just do basic SSL_read/SSL_write or BIO_read/BIO_write
interactions. We want to be able to enable them to transfer to using single
stream QUIC easily. (MVP)

@@ -0,0 +1,73 @@
QUIC Statistics Manager
=======================
The statistics manager keeps track of RTT statistics for use by the QUIC
implementation.
It provides the following interface:
Instantiation
-------------
The QUIC statistics manager is instantiated as follows:
```c
typedef struct ossl_statm_st {
...
} OSSL_STATM;
int ossl_statm_init(OSSL_STATM *statm);
void ossl_statm_destroy(OSSL_STATM *statm);
```
The structure is defined in headers, so it may be initialised without needing
its own memory allocation. However, other code should not examine the fields of
`OSSL_STATM` directly.
Get RTT Info
------------
The current RTT info is retrieved using the function `ossl_statm_get_rtt_info`,
which fills an `OSSL_RTT_INFO` structure:
```c
typedef struct ossl_rtt_info_st {
/* As defined in RFC 9002. */
OSSL_TIME smoothed_rtt, latest_rtt, rtt_variance, min_rtt,
max_ack_delay;
} OSSL_RTT_INFO;
void ossl_statm_get_rtt_info(OSSL_STATM *statm, OSSL_RTT_INFO *rtt_info);
```
Update RTT
----------
New RTT samples are provided using the `ossl_statm_update_rtt` function:
- `ack_delay`. This is the ACK Delay value; see RFC 9000.
- `override_latest_rtt` provides a new latest RTT sample. If it is
`OSSL_TIME_ZERO`, the existing Latest RTT value is used when updating the
RTT.
The maximum ACK delay configured using `ossl_statm_set_max_ack_delay` is not
enforced automatically on the `ack_delay` argument as the circumstances where
this should be enforced are context sensitive. It is the caller's responsibility
to retrieve the value and enforce the maximum ACK delay if appropriate.
```c
void ossl_statm_update_rtt(OSSL_STATM *statm,
OSSL_TIME ack_delay,
OSSL_TIME override_latest_rtt);
```
Set Max. Ack Delay
------------------
Sets the maximum ACK delay field reported by `OSSL_RTT_INFO`.
```c
void ossl_statm_set_max_ack_delay(OSSL_STATM *statm, OSSL_TIME max_ack_delay);
```
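Putting the above together, a minimal lifecycle sketch might look as follows; `ossl_ms2time()` is assumed here purely as a way of constructing `OSSL_TIME` values and is not defined in this document.
```c
static void demo_statm(void)
{
    OSSL_STATM statm;
    OSSL_RTT_INFO info;

    if (!ossl_statm_init(&statm))
        return;

    ossl_statm_set_max_ack_delay(&statm, ossl_ms2time(25));

    /* Feed an RTT sample derived from a newly acknowledged packet. */
    ossl_statm_update_rtt(&statm,
                          /* ack_delay           */ ossl_ms2time(5),
                          /* override_latest_rtt */ ossl_ms2time(40));

    ossl_statm_get_rtt_info(&statm, &info);
    /* info.smoothed_rtt, info.min_rtt, etc. now reflect the sample. */

    ossl_statm_destroy(&statm);
}
```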

@@ -0,0 +1,104 @@
QUIC Thread Assisted Mode Synchronisation Requirements
======================================================
In thread assisted mode, we create a background thread to ensure that periodic
QUIC processing is handled in a timely fashion regardless of whether an
application is frequently calling (or blocked in) SSL API I/O functions.
Part of the QUIC state comprises the TLS handshake layer. However, synchronising
access to this is extremely difficult.
At first glance, one could synchronise handshake layer public APIs by locking a
per-connection mutex for the duration of any public API call which we forward to
the handshake layer. Since we forward a very large number of APIs to the
handshake layer, this would require a very large number of code changes to add
the locking to every single public HL-related API call.
However, on second glance, this does not even solve the problem, as
applications' existing usage of the HL APIs assumes exclusive access, and thus
consistency over multiple API calls. For example:
x = SSL_get_foo(s);
/* application mutates x */
SSL_set_foo(s, x);
With per-API-call locking, the lock would only be held for the separate get and
set calls, but the combination of the two would not be safe if the assist thread
can process some event which causes mutation of `foo`.
As such, there are really only three possible solutions:
- **1. Application-controlled explicit locking.**
We would offer something like `SSL_lock()` and `SSL_unlock()`.
An application performing a single HL API call, or a sequence of related HL
calls, would be required to take the lock. As a special exemption, an
application is not required to take the lock prior to connection
(specifically, prior to the instantiation of a QUIC channel and consequent
assist thread creation).
The key disadvantage here is that it requires more API changes on the
application side, although since most HL API calls made by an application
probably happen prior to initiating a connection, things may not be that bad.
It would also only be required for applications which want to use thread
assisted mode.
Pro: Most “robust” solution in terms of HL evolution.
Con: API changes.
- **2. Handshake layer always belongs to the application thread.**
In this model, the handshake layer “belongs” to the application thread
and the assist thread is never allowed to touch it:
- `SSL_tick()` (or another I/O function) called by the application fully
services the connection.
- The assist thread performs a reduced tick operation which does everything
except servicing the crypto stream, or any other events we may define in
future which would be processed by the handshake layer.
- This is rather hacky but should work adequately. When using TLS 1.3
as the handshake layer, the only thing we actually need to worry about
servicing after handshake completion is the New Session Ticket message,
which doesn't need to be acknowledged and isn't “urgent”. The other
post-handshake messages used by TLS 1.3 aren't relevant to QUIC TLS:
- Post-handshake authentication is not allowed;
- Key update uses a separate, QUIC-specific method;
- TLS alerts are signalled via `CONNECTION_CLOSE` frames rather than the TLS
1.3 Alert message; thus if a peer's HL does raise an alert after
handshake completion (which would in itself be highly unusual), we simply
receive a `CONNECTION_CLOSE` frame and process it normally.
Thus so long as we don't expect our own TLS implementation to spontaneously
generate alerts or New Session Ticket messages after handshake completion,
this should work.
Pro: No API changes.
Con: Somewhat hacky solution.
- **3. Handshake layer belongs to the assist thread after connection begins.**
In this model, the application may make handshake layer calls freely prior to
connecting, but after that, ownership of the HL is transferred to the assist
thread and may not be touched further. We would need to block all API calls
which would forward to the HL after connection commences (specifically, after
the QUIC channel is instantiated).
Con: Many applications probably expect to be able to query the HL after
connection. We could selectively enable some important post-handshake HL calls
by specially implementing synchronised forwarders, but doing this in the
general case runs into the same issues as option 1 above. We could only enable
APIs we think have safe semantics here; e.g. implement only getters and not
setters, focus on APIs which return data which doesn't change after
connection. The work required is proportional to the number of APIs to be
enabled. Some APIs may not have ways to indicate failure; for such APIs which
we don't implement for thread assisted post-handshake QUIC, we would
essentially return incorrect data here.
Option 2 has been chosen as the basis for implementation.

@@ -0,0 +1,261 @@
QUIC-TLS Handshake Integration
==============================
QUIC reuses the TLS handshake for the establishment of keys. It does not use
the standard TLS record layer and instead assumes responsibility for the
confidentiality and integrity of QUIC packets itself. Only the TLS handshake is
used. Application data is entirely protected by QUIC.
QUIC_TLS Object
---------------
A QUIC-TLS handshake is managed by a QUIC_TLS object. This object provides
3 core functions to the rest of the QUIC implementation:
```c
QUIC_TLS *ossl_quic_tls_new(const QUIC_TLS_ARGS *args);
```
The `ossl_quic_tls_new` function instantiates a new `QUIC_TLS` object associated
with the QUIC Connection and initialises it with a set of callbacks and other
arguments provided in the `args` parameter. These callbacks are called at
various key points during the handshake lifecycle such as when new keys are
established, crypto frame data is ready to be sent or consumed, or when the
handshake is complete.
A key field of the `args` structure is the `SSL` object (`s`). This "inner"
`SSL` object is initialised with an `SSL_CONNECTION` to represent the TLS
handshake state. This is a different `SSL` object to the "user" visible `SSL`
object which contains a `QUIC_CONNECTION`, i.e. the user visible `SSL` object
contains a `QUIC_CONNECTION` which contains the inner `SSL` object which
contains an `SSL_CONNECTION`.
```c
void ossl_quic_tls_free(QUIC_TLS *qtls);
```
When the QUIC Connection no longer needs the handshake object it can be freed
via the `ossl_quic_tls_free` function.
```c
int ossl_quic_tls_tick(QUIC_TLS *qtls);
```
Finally the `ossl_quic_tls_tick` function is responsible for advancing the
state of the QUIC-TLS handshake. On each call to `ossl_quic_tls_tick` newly
received crypto frame data may be consumed, or new crypto frame data may be
queued for sending, or one or more of the various callbacks may be invoked.
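These three functions suggest a simple lifecycle. The following is a hedged
sketch only: it assumes `ossl_quic_tls_tick()` returns non-zero on success and
that completion is observed via a flag set by the `handshake_complete_cb` (the
`handshake_done` variable is hypothetical). In the real implementation ticking
is driven by the connection's event loop rather than a tight loop.

```c
static int run_quic_tls_handshake(const QUIC_TLS_ARGS *args, int *handshake_done)
{
    QUIC_TLS *qtls = ossl_quic_tls_new(args);
    int ok = 1;

    if (qtls == NULL)
        return 0;

    /* Each tick may consume or queue CRYPTO data and may invoke callbacks. */
    while (ok && !*handshake_done)
        ok = ossl_quic_tls_tick(qtls);

    /* Freed once the QUIC Connection no longer needs the handshake object. */
    ossl_quic_tls_free(qtls);
    return ok;
}
```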
QUIC_TLS_ARGS
-------------
A `QUIC_TLS_ARGS` object is passed to the `ossl_quic_tls_new` function by the
OpenSSL QUIC implementation to supply a set of callbacks and other essential
parameters. The `QUIC_TLS_ARGS` structure is as follows:
```c
typedef struct quic_tls_args_st {
/*
* The "inner" SSL object for the QUIC Connection. Contains an
* SSL_CONNECTION
*/
SSL *s;
/*
* Called to send data on the crypto stream. We use a callback rather than
* passing the crypto stream QUIC_SSTREAM directly because this lets the CSM
* dynamically select the correct outgoing crypto stream based on the
* current EL.
*/
int (*crypto_send_cb)(const unsigned char *buf, size_t buf_len,
size_t *consumed, void *arg);
void *crypto_send_cb_arg;
int (*crypto_recv_cb)(unsigned char *buf, size_t buf_len,
size_t *bytes_read, void *arg);
void *crypto_recv_cb_arg;
/* Called when a traffic secret is available for a given encryption level. */
int (*yield_secret_cb)(uint32_t enc_level, int direction /* 0=RX, 1=TX */,
uint32_t suite_id, EVP_MD *md,
const unsigned char *secret, size_t secret_len,
void *arg);
void *yield_secret_cb_arg;
/*
* Called when we receive transport parameters from the peer.
*
* Note: These parameters are not authenticated until the handshake is
* marked as completed.
*/
int (*got_transport_params_cb)(const unsigned char *params,
size_t params_len,
void *arg);
void *got_transport_params_cb_arg;
/*
* Called when the handshake has been completed as far as the handshake
* protocol is concerned, meaning that the connection has been
* authenticated.
*/
int (*handshake_complete_cb)(void *arg);
void *handshake_complete_cb_arg;
/*
* Called when something has gone wrong with the connection as far as the
* handshake layer is concerned, meaning that it should be immediately torn
* down. Note that this may happen at any time, including after a connection
* has been fully established.
*/
int (*alert_cb)(void *arg, unsigned char alert_code);
void *alert_cb_arg;
/*
* Transport parameters which client should send. Buffer lifetime must
* exceed the lifetime of the QUIC_TLS object.
*/
const unsigned char *transport_params;
size_t transport_params_len;
} QUIC_TLS_ARGS;
```
The `crypto_send_cb` and `crypto_recv_cb` callbacks will be called by the
QUIC-TLS handshake when there is new CRYPTO frame data to be sent, or when it
wants to consume queued CRYPTO frame data from the peer.
When the TLS handshake generates secrets they will be communicated to the
OpenSSL QUIC implementation via the `yield_secret_cb`, and when the handshake
has successfully completed this will be communicated via `handshake_complete_cb`.
In the event that an error occurs, a normal TLS handshake would send a TLS
alert record. QUIC handles this differently, so the QUIC_TLS object intercepts
attempts to send an alert and communicates this via the `alert_cb` callback.
QUIC requires the use of a TLS extension in order to send and receive "transport
parameters". These transport parameters are opaque to the `QUIC_TLS` object. It
does not need to use them directly but instead simply includes them in an
extension to be sent in the ClientHello and receives them back from the peer in
the EncryptedExtensions message. The data to be sent is provided in the
`transport_params` argument. When the peer's parameters are received the
`got_transport_params_cb` callback is invoked.
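As an illustration of how these callbacks fit together, the following sketch
shows minimal `crypto_send_cb` and `crypto_recv_cb` implementations built
around a hypothetical fixed-size buffer. The `crypto_buf` structure and the
buffer size are assumptions; a real implementation would route data to and
from the per-EL crypto stream buffers.

```c
#include <string.h>

struct crypto_buf {
    unsigned char data[4096];
    size_t len;
};

static int example_crypto_send(const unsigned char *buf, size_t buf_len,
                               size_t *consumed, void *arg)
{
    struct crypto_buf *out = arg;
    size_t space = sizeof(out->data) - out->len;
    size_t n = buf_len < space ? buf_len : space;

    memcpy(out->data + out->len, buf, n);
    out->len += n;
    *consumed = n;              /* may be less than buf_len */
    return 1;
}

static int example_crypto_recv(unsigned char *buf, size_t buf_len,
                               size_t *bytes_read, void *arg)
{
    struct crypto_buf *in = arg;
    size_t n = buf_len < in->len ? buf_len : in->len;

    memcpy(buf, in->data, n);
    memmove(in->data, in->data + n, in->len - n);
    in->len -= n;
    *bytes_read = n;            /* 0 means no data currently available */
    return 1;
}
```

These functions would be assigned to the `crypto_send_cb` and `crypto_recv_cb`
fields of `QUIC_TLS_ARGS`, with the buffer passed via the corresponding
`*_arg` fields.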
QUIC_TLS Implementation
-----------------------
The `QUIC_TLS` object utilises two main mechanisms for fulfilling its functions:
* It registers itself as a custom TLS record layer
* It supplies callbacks to register a custom TLS extension
### Custom TLS Record Layer
A TLS record layer is defined via an `OSSL_RECORD_METHOD` object. This object
consists of a set of function pointers which need to be implemented by any
record layer. Existing record layers include one for TLS, one for DTLS and one
for KTLS.
`QUIC_TLS` registers itself as a custom TLS record layer. A new internal
function is used to provide the custom record method data and associate it with
an `SSL_CONNECTION`:
```C
void ossl_ssl_set_custom_record_layer(SSL_CONNECTION *s,
const OSSL_RECORD_METHOD *meth,
void *rlarg);
```
The internal function `ssl_select_next_record_layer`, which the TLS
implementation uses to work out which record method should be used next, is
modified to first check whether a custom record method has been specified and,
if so, to always use that one.
The TLS record layer code is further modified to provide the following
capabilities which are needed in order to support QUIC.
The custom record layer will need a record layer specific argument (`rlarg`
above). This is passed as part of a modified `new_record_layer` call.
Existing TLS record layers use TLS keys and IVs that are calculated using a
KDF from a higher level secret. QUIC instead needs direct access to the higher
level secret, as well as the digest to be used in the KDF, so these values are
now also passed through as part of the `new_record_layer` call.
The most important function pointers in the `OSSL_RECORD_METHOD` for the
`QUIC_TLS` object are:
* `new_record_layer`
Invoked every time a new record layer object is created by the TLS
implementation. This occurs every time new keys are provisioned (once for the
"read" side and once for the "write" side). This function is responsible for
invoking the `yield_secret_cb` callback.
* `write_records`
Invoked every time the TLS implementation wants to send TLS handshake data. This
is responsible for calling the `crypto_send_cb` callback. It also includes
special processing in the event that the TLS implementation wants to send an
alert. This manifests itself as a call to `write_records` indicating a type of
`SSL3_RT_ALERT`. The `QUIC_TLS` implementation of `write_records` must parse the
alert data supplied by the TLS implementation (always a 2 byte record payload)
and pull out the alert description (a one byte integer) and invoke the
`alert_cb` callback. Note that while the TLS RFC strictly allows the 2 byte
alert record to be fragmented across two 1 byte records, this is never done in
practice by OpenSSL's TLS stack, so the `write_records` implementation can make
the optimising assumption that both bytes of an alert are always sent together.
A sketch of this alert interception is given at the end of this section.
* `quic_read_record`
Invoked when the TLS implementation wants to read more handshake data. This
results in a call to `crypto_recv_cb`.
This design does introduce an extra "copy" in the process when `crypto_recv_cb`
is invoked. CRYPTO frame data will be queued within internal QUIC "Stream
Receive Buffers" when it is received from the peer. However, the TLS
implementation expects to request data from the record layer, get a handle on
that data, and then inform the record layer when it has finished using it. The
current design of the Stream Receive Buffers does not allow for this model.
Therefore, when `crypto_recv_cb` is invoked, the data is copied into a buffer
managed by the QUIC_TLS object. This is inefficient, so it is expected that a
later phase of development will resolve this problem.
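The alert handling described for `write_records` above can be sketched as
follows. This is illustrative only: the `QUIC_TLS_RL` structure and its
`alert_cb`/`alert_cb_arg` fields are hypothetical stand-ins for however the
real implementation stores the callbacks from `QUIC_TLS_ARGS`, the internal
record method declarations are assumed to be in scope, and forwarding of
handshake data is omitted.

```c
#include <openssl/ssl3.h>

typedef struct quic_tls_rl_st {
    int (*alert_cb)(void *arg, unsigned char alert_code);
    void *alert_cb_arg;
    /* ... other record layer state ... */
} QUIC_TLS_RL;

static int quic_tls_write_records(OSSL_RECORD_LAYER *rl_,
                                  OSSL_RECORD_TEMPLATE *templates,
                                  size_t numtempl)
{
    QUIC_TLS_RL *rl = (QUIC_TLS_RL *)rl_;
    size_t i;

    for (i = 0; i < numtempl; i++) {
        if (templates[i].type == SSL3_RT_ALERT) {
            /* A TLS alert payload is 2 bytes: level then description. */
            if (templates[i].buflen != 2)
                return OSSL_RECORD_RETURN_FATAL;
            if (!rl->alert_cb(rl->alert_cb_arg, templates[i].buf[1]))
                return OSSL_RECORD_RETURN_FATAL;
            continue;
        }
        /* Handshake data would be forwarded via crypto_send_cb (not shown). */
    }
    return OSSL_RECORD_RETURN_SUCCESS;
}
```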
### Custom TLS extension
Libssl already has the ability for an application to supply a custom extension
via the `SSL_CTX_add_custom_ext()` API. There is no equivalent
`SSL_add_custom_ext()` and therefore an internal API is used to do this. This
mechanism is used for supporting QUIC transport parameters. An extension
type `TLSEXT_TYPE_quic_transport_parameters` with value 57 is used for this
purpose.
The custom extension API enables the caller to supply `add`, `free` and `parse`
callbacks. The `add` callback simply adds the `transport_params` data from
`QUIC_TLS_ARGS`. The `parse` callback invokes the `got_transport_params_cb`
callback when the transport parameters have been received from the peer.
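For illustration, the `add` and `parse` callbacks might look something like
the following. The callback shapes mirror the public `SSL_CTX_add_custom_ext()`
callbacks; the internal registration API may differ, and the `qtls_ext_ctx`
wrapper is a hypothetical way of making `QUIC_TLS_ARGS` available to the
callbacks.

```c
#include <openssl/ssl.h>

struct qtls_ext_ctx {
    const QUIC_TLS_ARGS *args;
};

static int qtls_add_transport_params(SSL *s, unsigned int ext_type,
                                     unsigned int context,
                                     const unsigned char **out, size_t *outlen,
                                     X509 *x, size_t chainidx, int *al,
                                     void *add_arg)
{
    struct qtls_ext_ctx *ctx = add_arg;

    /* Send the opaque transport parameters supplied via QUIC_TLS_ARGS. */
    *out = ctx->args->transport_params;
    *outlen = ctx->args->transport_params_len;
    return 1;
}

static int qtls_parse_transport_params(SSL *s, unsigned int ext_type,
                                       unsigned int context,
                                       const unsigned char *in, size_t inlen,
                                       X509 *x, size_t chainidx, int *al,
                                       void *parse_arg)
{
    struct qtls_ext_ctx *ctx = parse_arg;

    /* Hand the peer's (not yet authenticated) parameters to the QUIC layer. */
    return ctx->args->got_transport_params_cb(
        in, inlen, ctx->args->got_transport_params_cb_arg);
}
```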
### ALPN
QUIC requires the use of ALPN (Application-Layer Protocol Negotiation). This is
normally optional in OpenSSL but is mandatory for QUIC connections. Therefore
a QUIC client must call one of `SSL_CTX_set_alpn_protos` or
`SSL_set_alpn_protos` prior to initiating the handshake. If the ALPN data has
not been set then the `QUIC_TLS` object immediately fails.
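For example, a QUIC client might configure ALPN as follows. The "h3" protocol
name is purely illustrative; the key points are the length-prefixed wire
format and the inverted return convention of `SSL_set_alpn_protos()`.

```c
#include <openssl/ssl.h>

static int configure_alpn(SSL *ssl)
{
    /* One length-prefixed protocol name: "h3". */
    static const unsigned char alpn[] = { 2, 'h', '3' };

    /* Note: SSL_set_alpn_protos() returns 0 on success, non-zero on failure. */
    return SSL_set_alpn_protos(ssl, alpn, sizeof(alpn)) == 0;
}
```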
### Other Implementation Details
The `SSL_CONNECTION` used for the TLS handshake is held alongside the QUIC
related data in the `SSL` object. Public API functions that are only relevant to
TLS will modify this internal `SSL_CONNECTION` as appropriate. This enables the
end application to configure the TLS connection parameters as it sees fit (e.g.
setting ciphersuites, providing client certificates, etc). However there are
certain settings that may be optional in a normal TLS connection but are
mandatory for QUIC. Where possible these settings will be automatically
configured just before the handshake starts.
One of these settings is the minimum TLS protocol version. QUIC requires that
TLSv1.3 is used as a minimum. Therefore the `QUIC_TLS` object automatically
calls `SSL_set_min_proto_version()` and specifies `TLS1_3_VERSION` as the
minimum version.
Secondly, QUIC enforces that the TLS "middlebox" mode must not be used. For
normal TLS this is "on" by default. Therefore the `QUIC_TLS` object will
automatically clear the `SSL_OP_ENABLE_MIDDLEBOX_COMPAT` option if it is set.
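A hedged sketch of this automatic configuration, as it might be applied to the
inner `SSL` object just before the handshake starts (the function name is
illustrative):

```c
#include <openssl/ssl.h>

static int apply_mandatory_quic_tls_settings(SSL *ssl)
{
    /* QUIC requires TLSv1.3 as a minimum. */
    if (!SSL_set_min_proto_version(ssl, TLS1_3_VERSION))
        return 0;

    /* QUIC prohibits middlebox compatibility mode. */
    SSL_clear_options(ssl, SSL_OP_ENABLE_MIDDLEBOX_COMPAT);
    return 1;
}
```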


@@ -0,0 +1,604 @@
Design Problem: Abstract Record Layer
=====================================
This document covers the design of an abstract record layer for use in (D)TLS.
The QUIC record layer is handled separately.
A record within this document refers to a packet of data. It will typically
contain some header data and some payload data, and will often be
cryptographically protected. A record may or may not have a one-to-one
correspondence with network packets, depending on the implementation details of
an individual record layer.
The term record comes directly from the TLS and DTLS specifications.
Libssl supports a number of different types of record layer, and record layer
variants:
- Standard TLS record layer
- Standard DTLS record layer
- Kernel TLS record layer
Within the TLS record layer there are options to handle "multiblock" and
"pipelining" which are different approaches for supporting the reading or
writing of multiple records at the same time. All record layer variants also
have to be able to handle different protocol versions.
These different record layer implementations, variants and protocol versions
have each been added at different times and over many years. The result is that
each took slightly different approaches for achieving the goals that were
appropriate at the time and the integration points where they were added were
spread throughout the code.
The introduction of QUIC support will see the implementation of a new record
layer, i.e. the QUIC-TLS record layer. This refers to the "inner" TLS
implementation used by QUIC. Records here will be in the form of QUIC CRYPTO
frames.
Requirements
------------
The technical requirements
[document](https://github.com/openssl/openssl/blob/master/doc/designs/quic-design/quic-requirements.md)
lists these requirements that are relevant to the record layer:
* The current libssl record layer includes support for TLS, DTLS and KTLS. QUIC
will introduce another variant and there may be more over time. The OMC
requires a pluggable record layer interface to be implemented to enable this
to be less intrusive, more maintainable, and to harmonize the existing record
layer interactions between TLS, DTLS, KTLS and the planned QUIC protocols. The
pluggable record layer interface will be internal only for MVP and be public
in a future release.
* The minimum viable product (MVP) for the next release is a pluggable record
layer interface and a single stream QUIC client in the form of s_client that
does not require significant API changes. In the MVP, interoperability should
be prioritized over strict standards compliance.
* Once we have a fully functional QUIC implementation (in a subsequent release),
it should be possible for external libraries to be able to use the pluggable
record layer interface and it should offer a stable ABI (via a provider).
The MVP requirements are:
* a pluggable record layer (not public for MVP)
Candidate Solutions that were considered
----------------------------------------
This section outlines two different solution approaches that were considered
for the abstract record layer.
### Use a METHOD based approach
A METHOD based approach is simply a structure containing function pointers. It
is a common pattern in the OpenSSL codebase. Different strategies for
implementing a METHOD can be employed, but these differences are hidden from
the caller of the METHOD.
In this solution we would seek to implement a different METHOD for each of the
types of record layer that we support, i.e. there would be one for the standard
TLS record layer, one for the standard DTLS record layer, one for kernel TLS and
one for QUIC-TLS.
In the MVP the METHOD approach would be private. However, once it has
stabilised, it would be straightforward to supply public functions to enable
end user applications to construct their own METHODs.
This option is simpler to implement than the alternative of having a provider
based approach. However it could be used as a "stepping stone" for that, i.e.
the MVP could implement a METHOD based approach, and subsequent releases could
convert the METHODs into fully fetchable algorithms.
Pros:
* Simple approach that has been used historically in OpenSSL
* Could be used as the basis for the final public solution
* Could also be used as the basis for a fetchable solution in a subsequent
release
* If this option is later converted to a fetchable solution then much of the
effort involved in making the record layer fetchable can be deferred to a
later release
Cons:
* Not consistent with the provider based approach we used for extensibility in
3.0
* If this option is implemented and later converted to a fetchable solution then
some rework might be required
### Use a provider based approach
This approach is very similar to the alternative METHOD based approach. The
main difference is that the record layer implementations would be held in
providers and "fetched" in much the same way that cryptographic algorithms are
fetched in OpenSSL 3.0.
This approach is more consistent with the approach adopted for extensibility
in 3.0, where METHODs are being deprecated and providers are used extensively.
Complex objects (e.g. an `SSL` object) cannot be passed across the
libssl/provider boundary. This imposes some restrictions on the design of the
functions that can be implemented. Additionally implementing the infrastructure
for a new fetchable operation is more involved than a METHOD based approach.
Pros:
* Consistent with the extensibility solution used in 3.0
* If this option is implemented immediately in the MVP then it would avoid later
rework if adopted in a subsequent release
Cons:
* More complicated to implement than the simple METHOD based approach
* Cannot pass complex objects across the provider boundary
### Selected solution
The METHOD based approach has been selected for MVP, with the expectation that
subsequent releases will convert it to a full provider based solution accessible
to third party applications.
Solution Description: The METHOD based approach
-----------------------------------------------
This section focuses on the selected approach of using METHODs and further
elaborates on how the design works.
A proposed internal record method API is given in
[Appendix A](#appendix-a-the-internal-record-method-api).
An `OSSL_RECORD_METHOD` represents the implementation of a particular type of
record layer. It contains a set of function pointers to represent the various
actions that can be performed by a record layer.
An `OSSL_RECORD_LAYER` object represents a specific instantiation of a
particular `OSSL_RECORD_METHOD`. It contains the state used by that
`OSSL_RECORD_METHOD` for a specific connection (i.e. `SSL` object). Any `SSL`
object will have at least 2 `OSSL_RECORD_LAYER` objects associated with it - one
for reading and one for writing. In some cases there may be more than 2 - for
example in DTLS it may be necessary to retransmit records from a previous epoch.
There will be different `OSSL_RECORD_LAYER` objects for different protection
levels or epochs. It may be that different `OSSL_RECORD_METHOD`s are used for
different protection levels. For example a connection might start using the
standard TLS record layer during the handshake, and later transition to using
the kernel TLS record layer once the handshake is complete.
A new `OSSL_RECORD_LAYER` is created by calling the `new` function of the
associated `OSSL_RECORD_METHOD`, and freed by calling the `free` function. The
parameters to the `new` function also supply all of the cryptographic state
(e.g. keys, ivs, symmetric encryption algorithms, hash algorithm etc) used by
the record layer. The internal structure details of an `OSSL_RECORD_LAYER` are
entirely hidden to the rest of libssl and can be specific to the given
`OSSL_RECORD_METHOD`. In practice the standard internal TLS, DTLS and KTLS
`OSSL_RECORD_METHOD`s all use a common `OSSL_RECORD_LAYER` structure. However
the QUIC-TLS implementation is likely to use a different structure layout.
All of the header and payload data for a single record will be represented by an
`OSSL_RECORD_TEMPLATE` structure when writing. Libssl will construct a set of
templates for records to be written out and pass them to the "write" record
layer. In most cases only a single record is ever written out at one time,
however there are some cases (such as when using the "pipelining" or
"multibuffer" optimisations) that multiple records can be written in one go.
It is the record layer's responsibility to know whether it can support multiple
records in one go or not. It is libssl's responsibility to split the payload
data into `OSSL_RECORD_TEMPLATE` objects. Libssl will call the record layer's
`get_max_records()` function to determine how many records a given payload
should be split into. If that value is more than one, then libssl will construct
(up to) that number of `OSSL_RECORD_TEMPLATE`s and pass the whole set to the
record layer's `write_records()` function.
The implementation of the `write_records` function must construct the
appropriate number of records, apply protection to them as required and then
write them out to the underlying transport layer BIO. In the event that not
all the data can be transmitted at the current time (e.g. because the underlying
transport has indicated a retry), then the `write_records` function will return
a "retry" response. It is permissible for the data to be partially sent, but
this is still considered a "retry" until all of the data is sent.
On a success or retry response libssl may free its buffers immediately. The
`OSSL_RECORD_LAYER` object will have to buffer any untransmitted data until it
is eventually sent.
If a "retry" occurs, then libssl will subsequently call `retry_write_records`
and continue to do so until a success return value is received. Libssl will
never call `write_records` a second time until a previous call to
`write_records` or `retry_write_records` has indicated success.
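The write path described above can be summarised by the following hedged
sketch of the caller side, assuming the declarations from
[Appendix A](#appendix-a-the-internal-record-method-api) are in scope. The
16-template bound, the simple fragment split and the busy retry loop are
simplifications; any data not covered by the templates would be sent in
subsequent calls.

```c
static int send_payload(const OSSL_RECORD_METHOD *meth, OSSL_RECORD_LAYER *rl,
                        uint8_t type, unsigned int version,
                        const unsigned char *data, size_t len,
                        size_t maxfrag, size_t preffrag)
{
    OSSL_RECORD_TEMPLATE templ[16];
    size_t numtempl, i, offset = 0, frag;
    int ret;

    numtempl = meth->get_max_records(rl, type, len, maxfrag, &preffrag);
    if (numtempl == 0 || numtempl > 16)
        numtempl = 1;                   /* keep the sketch simple */

    for (i = 0; i < numtempl && offset < len; i++) {
        frag = (len - offset < preffrag) ? len - offset : preffrag;
        templ[i].type = type;
        templ[i].version = version;
        templ[i].buf = data + offset;
        templ[i].buflen = frag;
        offset += frag;
    }

    ret = meth->write_records(rl, templ, i);
    while (ret == OSSL_RECORD_RETURN_RETRY) {
        /* A real caller would wait for the transport to become writable. */
        ret = meth->retry_write_records(rl);
    }
    return ret == OSSL_RECORD_RETURN_SUCCESS;
}
```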
Libssl will read records by calling the `read_record` function. The
`OSSL_RECORD_LAYER` may read multiple records in one go and buffer them, but the
`read_record` function only ever returns one record at a time. The
`OSSL_RECORD_LAYER` object owns the buffers for the record that has been read
and supplies a pointer into that buffer back to libssl for the payload data, as
well as other information about the record such as its length and the type of
data contained in it. Each record has an associated opaque handle `rechandle`.
The record data must remain buffered by the `OSSL_RECORD_LAYER` until it has
been released via a call to `release_record()`.
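Correspondingly, a hedged sketch of consuming a single record on the read
side, again assuming the Appendix A declarations are in scope. The `consume()`
helper is a hypothetical placeholder for whatever processing libssl performs,
and the sequence number buffer size is an assumption relevant only to DTLS.

```c
void consume(uint8_t type, const unsigned char *data, size_t len); /* hypothetical */

static int read_one_record(const OSSL_RECORD_METHOD *meth, OSSL_RECORD_LAYER *rl)
{
    void *rechandle = NULL;
    int rversion, ret;
    uint8_t type;
    unsigned char *data;
    size_t datalen;
    uint16_t epoch;
    unsigned char seq_num[8];

    ret = meth->read_record(rl, &rechandle, &rversion, &type,
                            &data, &datalen, &epoch, seq_num);
    if (ret != OSSL_RECORD_RETURN_SUCCESS)
        return ret;

    consume(type, data, datalen);

    /* The buffer remains owned by the record layer until released. */
    return meth->release_record(rl, rechandle);
}
```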
A record layer implementation supplies various functions to enable libssl to
query the current state. In particular:
`unprocessed_read_pending()`: to query whether there is data buffered that has
already been read from the underlying BIO, but not yet processed.
`processed_read_pending()`: to query whether there is data buffered that has
been read from the underlying BIO and has been processed. The data is not
necessarily application data.
`app_data_pending()`: to query the amount of processed application data that is
buffered and available for immediate read.
`get_alert_code()`: to query the alert code that should be used in the event
that a previous attempt to read or write records failed.
`get_state()`: to obtain a printable string to describe the current state of the
record layer.
`get_compression()`: to obtain information about the compression method
currently being used by the record layer.
`get_max_record_overhead()`: to obtain the maximum amount of bytes the record
layer will add to the payload bytes before transmission. This does not include
any expansion that might occur during compression. Currently this is only
implemented for DTLS.
In addition, libssl will tell the record layer about various events that might
occur that are relevant to the record layer's operation:
`set1_bio()`: called if the underlying BIO being used by the record layer has
been changed.
`set_protocol_version()`: called during protocol version negotiation when a
specific protocol version has been selected.
`set_plain_alerts()`: to indicate that receiving unencrypted alerts is allowed
in the current context, even if normally we would expect to receive encrypted
data. This is only relevant for TLSv1.3.
`set_first_handshake()`: called at the beginning and end of the first handshake
for any given (D)TLS connection.
`set_max_pipelines()`: called to configure the maximum number of pipelines of
data that the record layer should process in one go. By default this is 1.
`set_in_init()`: called by libssl to tell the record layer whether we are
currently `in_init` or not. Defaults to "true".
`set_options()`: called by libssl in the event that the current set of options
to use has been updated.
`set_max_frag_len()`: called by libssl to set the maximum allowed fragment
length that is in force at the moment. This might be the result of user
configuration, or it may be negotiated during the handshake.
`increment_sequence_ctr()`: force the record layer to increment its sequence
counter. In most cases the record layer will entirely manage its own sequence
counters. However in the DTLSv1_listen() corner case, libssl needs to initialise
the record layer with an incremented sequence counter.
`alloc_buffers()`: called by libssl to request that the record layer allocate
its buffers. This is a hint only and the record layer is expected to manage its
own buffer allocation and freeing.
`free_buffers()`: called by libssl to request that the record layer free its
buffers. This is a hint only and the record layer is expected to manage its own
buffer allocation and freeing.
Appendix A: The internal record method API
------------------------------------------
The internal recordmethod.h header file for the record method API:
```` C
/*
* We use the term "record" here to refer to a packet of data. Records are
* typically protected via a cipher and MAC, or an AEAD cipher (although not
* always). This usage of the term record is consistent with the TLS concept.
* In QUIC the term "record" is not used but it is analogous to the QUIC term
* "packet". The interface in this file applies to all protocols that protect
* records/packets of data, i.e. (D)TLS and QUIC. The term record is used to
* refer to both contexts.
*/
/*
* An OSSL_RECORD_METHOD is a protocol specific method which provides the
* functions for reading and writing records for that protocol. Which
* OSSL_RECORD_METHOD to use for a given protocol is defined by the SSL_METHOD.
*/
typedef struct ossl_record_method_st OSSL_RECORD_METHOD;
/*
* An OSSL_RECORD_LAYER is just an externally defined opaque pointer created by
* the method
*/
typedef struct ossl_record_layer_st OSSL_RECORD_LAYER;
# define OSSL_RECORD_ROLE_CLIENT 0
# define OSSL_RECORD_ROLE_SERVER 1
# define OSSL_RECORD_DIRECTION_READ 0
# define OSSL_RECORD_DIRECTION_WRITE 1
/*
* Protection level. For <= TLSv1.2 only "NONE" and "APPLICATION" are used.
*/
# define OSSL_RECORD_PROTECTION_LEVEL_NONE 0
# define OSSL_RECORD_PROTECTION_LEVEL_EARLY 1
# define OSSL_RECORD_PROTECTION_LEVEL_HANDSHAKE 2
# define OSSL_RECORD_PROTECTION_LEVEL_APPLICATION 3
# define OSSL_RECORD_RETURN_SUCCESS 1
# define OSSL_RECORD_RETURN_RETRY 0
# define OSSL_RECORD_RETURN_NON_FATAL_ERR -1
# define OSSL_RECORD_RETURN_FATAL -2
# define OSSL_RECORD_RETURN_EOF -3
/*
* Template for creating a record. A record consists of the |type| of data it
* will contain (e.g. alert, handshake, application data, etc) along with a
* buffer of payload data in |buf| of length |buflen|.
*/
struct ossl_record_template_st {
int type;
unsigned int version;
const unsigned char *buf;
size_t buflen;
};
typedef struct ossl_record_template_st OSSL_RECORD_TEMPLATE;
/*
* Rather than a "method" approach, we could make this fetchable - Should we?
* There could be some complexity in finding suitable record layer implementations
* e.g. we need to find one that matches the negotiated protocol, cipher,
* extensions, etc. The selection_cb approach given above doesn't work so well
* if unknown third party providers with OSSL_RECORD_METHOD implementations are
* loaded.
*/
/*
* If this becomes public API then we will need functions to create and
* free an OSSL_RECORD_METHOD, as well as functions to get/set the various
* function pointers....unless we make it fetchable.
*/
struct ossl_record_method_st {
/*
* Create a new OSSL_RECORD_LAYER object for handling the protocol version
* set by |vers|. |role| is 0 for client and 1 for server. |direction|
* indicates either read or write. |level| is the protection level as
* described above. |settings| are mandatory settings that will cause the
* new() call to fail if they are not understood (for example to require
* Encrypt-Then-Mac support). |options| are optional settings that will not
* cause the new() call to fail if they are not understood (for example
* whether to use "read ahead" or not).
*
* The BIO in |transport| is the BIO for the underlying transport layer.
* Where the direction is "read", then this BIO will only ever be used for
* reading data. Where the direction is "write", then this BIO will only
     * ever be used for writing data.
*
* An SSL object will always have at least 2 OSSL_RECORD_LAYER objects in
* force at any one time (one for reading and one for writing). In some
* protocols more than 2 might be used (e.g. in DTLS for retransmitting
* messages from an earlier epoch).
*
* The created OSSL_RECORD_LAYER object is stored in *ret on success (or
* NULL otherwise). The return value will be one of
* OSSL_RECORD_RETURN_SUCCESS, OSSL_RECORD_RETURN_FATAL or
     * OSSL_RECORD_RETURN_NON_FATAL_ERR. A non-fatal return means that creation of
* the record layer has failed because it is unsuitable, but an alternative
* record layer can be tried instead.
*/
/*
* If we eventually make this fetchable then we will need to use something
* other than EVP_CIPHER. Also mactype would not be a NID, but a string. For
* now though, this works.
*/
int (*new_record_layer)(OSSL_LIB_CTX *libctx,
const char *propq, int vers,
int role, int direction,
int level,
uint16_t epoch,
unsigned char *key,
size_t keylen,
unsigned char *iv,
size_t ivlen,
unsigned char *mackey,
size_t mackeylen,
const EVP_CIPHER *ciph,
size_t taglen,
int mactype,
const EVP_MD *md,
COMP_METHOD *comp,
BIO *prev,
BIO *transport,
BIO *next,
BIO_ADDR *local,
BIO_ADDR *peer,
const OSSL_PARAM *settings,
const OSSL_PARAM *options,
const OSSL_DISPATCH *fns,
void *cbarg,
OSSL_RECORD_LAYER **ret);
int (*free)(OSSL_RECORD_LAYER *rl);
int (*reset)(OSSL_RECORD_LAYER *rl); /* Is this needed? */
/* Returns 1 if we have unprocessed data buffered or 0 otherwise */
int (*unprocessed_read_pending)(OSSL_RECORD_LAYER *rl);
/*
* Returns 1 if we have processed data buffered that can be read or 0 otherwise
* - not necessarily app data
*/
int (*processed_read_pending)(OSSL_RECORD_LAYER *rl);
/*
* The amount of processed app data that is internally buffered and
* available to read
*/
size_t (*app_data_pending)(OSSL_RECORD_LAYER *rl);
/*
* Find out the maximum number of records that the record layer is prepared
* to process in a single call to write_records. It is the caller's
* responsibility to ensure that no call to write_records exceeds this
* number of records. |type| is the type of the records that the caller
* wants to write, and |len| is the total amount of data that it wants
* to send. |maxfrag| is the maximum allowed fragment size based on user
* configuration, or TLS parameter negotiation. |*preffrag| contains on
* entry the default fragment size that will actually be used based on user
* configuration. This will always be less than or equal to |maxfrag|. On
* exit the record layer may update this to an alternative fragment size to
* be used. This must always be less than or equal to |maxfrag|.
*/
size_t (*get_max_records)(OSSL_RECORD_LAYER *rl, uint8_t type, size_t len,
size_t maxfrag, size_t *preffrag);
/*
* Write |numtempl| records from the array of record templates pointed to
* by |templates|. Each record should be no longer than the value returned
* by get_max_record_len(), and there should be no more records than the
* value returned by get_max_records().
* Where possible the caller will attempt to ensure that all records are the
* same length, except the last record. This may not always be possible so
* the record method implementation should not rely on this being the case.
* In the event of a retry the caller should call retry_write_records()
* to try again. No more calls to write_records() should be attempted until
* retry_write_records() returns success.
* Buffers allocated for the record templates can be freed immediately after
     * write_records() returns - even in the case of a retry.
* The record templates represent the plaintext payload. The encrypted
* output is written to the |transport| BIO.
* Returns:
* 1 on success
* 0 on retry
* -1 on failure
*/
int (*write_records)(OSSL_RECORD_LAYER *rl, OSSL_RECORD_TEMPLATE *templates,
size_t numtempl);
/*
* Retry a previous call to write_records. The caller should continue to
* call this until the function returns with success or failure. After
* each retry more of the data may have been incrementally sent.
* Returns:
* 1 on success
* 0 on retry
* -1 on failure
*/
int (*retry_write_records)(OSSL_RECORD_LAYER *rl);
/*
* Read a record and return the record layer version and record type in
* the |rversion| and |type| parameters. |*data| is set to point to a
* record layer buffer containing the record payload data and |*datalen|
* is filled in with the length of that data. The |epoch| and |seq_num|
* values are only used if DTLS has been negotiated. In that case they are
* filled in with the epoch and sequence number from the record.
* An opaque record layer handle for the record is returned in |*rechandle|
* which is used in a subsequent call to |release_record|. The buffer must
* remain available until release_record is called.
*
     * Internally the OSSL_RECORD_METHOD implementation may read/process
* multiple records in one go and buffer them.
*/
int (*read_record)(OSSL_RECORD_LAYER *rl, void **rechandle, int *rversion,
uint8_t *type, unsigned char **data, size_t *datalen,
uint16_t *epoch, unsigned char *seq_num);
/*
* Release a buffer associated with a record previously read with
* read_record. Records are guaranteed to be released in the order that they
* are read.
*/
int (*release_record)(OSSL_RECORD_LAYER *rl, void *rechandle);
/*
* In the event that a fatal error is returned from the functions above then
     * get_alert_code() can be called to obtain a more detailed identifier for
* the error. In (D)TLS this is the alert description code.
*/
int (*get_alert_code)(OSSL_RECORD_LAYER *rl);
/*
* Update the transport BIO from the one originally set in the
* new_record_layer call
*/
int (*set1_bio)(OSSL_RECORD_LAYER *rl, BIO *bio);
/* Called when protocol negotiation selects a protocol version to use */
int (*set_protocol_version)(OSSL_RECORD_LAYER *rl, int version);
/*
* Whether we are allowed to receive unencrypted alerts, even if we might
* otherwise expect encrypted records. Ignored by protocol versions where
* this isn't relevant
*/
void (*set_plain_alerts)(OSSL_RECORD_LAYER *rl, int allow);
/*
* Called immediately after creation of the record layer if we are in a
* first handshake. Also called at the end of the first handshake
*/
void (*set_first_handshake)(OSSL_RECORD_LAYER *rl, int first);
/*
* Set the maximum number of pipelines that the record layer should process.
* The default is 1.
*/
void (*set_max_pipelines)(OSSL_RECORD_LAYER *rl, size_t max_pipelines);
/*
* Called to tell the record layer whether we are currently "in init" or
* not. Default at creation of the record layer is "yes".
*/
void (*set_in_init)(OSSL_RECORD_LAYER *rl, int in_init);
/*
* Get a short or long human readable description of the record layer state
*/
void (*get_state)(OSSL_RECORD_LAYER *rl, const char **shortstr,
const char **longstr);
/*
* Set new options or modify ones that were originally specified in the
* new_record_layer call.
*/
int (*set_options)(OSSL_RECORD_LAYER *rl, const OSSL_PARAM *options);
const COMP_METHOD *(*get_compression)(OSSL_RECORD_LAYER *rl);
/*
* Set the maximum fragment length to be used for the record layer. This
* will override any previous value supplied for the "max_frag_len"
* setting during construction of the record layer.
*/
void (*set_max_frag_len)(OSSL_RECORD_LAYER *rl, size_t max_frag_len);
/*
* The maximum expansion in bytes that the record layer might add while
* writing a record
*/
size_t (*get_max_record_overhead)(OSSL_RECORD_LAYER *rl);
/*
* Increment the record sequence number
*/
int (*increment_sequence_ctr)(OSSL_RECORD_LAYER *rl);
/*
* Allocate read or write buffers. Does nothing if already allocated.
* Assumes default buffer length and 1 pipeline.
*/
int (*alloc_buffers)(OSSL_RECORD_LAYER *rl);
/*
* Free read or write buffers. Fails if there is pending read or write
* data. Buffers are automatically reallocated on next read/write.
*/
int (*free_buffers)(OSSL_RECORD_LAYER *rl);
};
````


@@ -0,0 +1,205 @@
RX depacketizer
===============
This component takes a QUIC packet and parses the frames contained therein,
to be forwarded to appropriate other components for further processing.
In the [overview], this is called the "RX Frame Handler". The name "RX
depacketizer" was chosen to reflect the kinship with the [TX packetizer].
Main structures
---------------
### Connection
Represented by a `QUIC_CONNECTION` object, defined in
[`include/internal/quic_ssl.h`](../../../include/internal/quic_ssl.h).
### Stream
Represented by a `QUIC_STREAM` object (yet to be defined).
### Packets
Represented by the `OSSL_QRX_PKT` structure, defined in
`include/internal/quic_record_rx.h` in [QUIC Demuxer and Record Layer (RX+TX)].
Interactions
------------
The RX depacketizer receives a packet from the QUIC Read Record Layer, and
then processes frames in two phases:
1. [Collect information for the ACK Manager](#collect-information-for-the-ack-manager)
2. [Pass frame data](#pass-frame-data)
### Other components
There are a number of other components that the RX depacketizer wants to
interact with:
- [ACK manager]
- Handshake manager, which is currently unspecified. It's assumed that
  this will wrap around what is called the "TLS Handshake Record Layer"
  in the [overview].
- Session manager, which is currently unspecified for QUIC, but may very
well be the existing `SSL_SESSION` functionality, extended to fit QUIC
purposes.
- Flow control, which is currently unspecified. In the [overview], it's
called the "Flow Controller And Statistics Collector"
- Connection manager, which is currently unspecified. In the [overview],
there's a "Connection State Machine" that the "RX Frame Handler" isn't
talking directly with, so it's possible that the Connection manager will
turn out to be the Handshake manager.
- Stream SSL objects, to pass the stream data to.
### Read and process a packet
Following how things are designed elsewhere, the depacketizer is assumed to
be called "from above" using the following function:
``` C
__owur int ossl_quic_depacketize(QUIC_CONNECTION *connection);
```
This function would create an `OSSL_QRX_PKT` and call the QUIC Read Record
Layer with a pointer to it, leaving it to the QUIC Read Record Layer to fill
in the data.
This uses the `ossl_qrx_read_pkt()` packet reading function from
[QUIC Demuxer and Record Layer (RX+TX)].
(the `OSSL_QRX_PKT` structure / sub-structure needs to be extended to take
an `OSSL_TIME`, possibly by reference, which should be filled in with the
packet reception time)
### Collect information for the [ACK manager]
This collects appropriate data into a `QUIC_ACKM_RX_PKT` structure:
- The packet number (`packet->packet_number`)
- The packet receive time (`received`)
- The packet space, which is always:
- `QUIC_PN_SPACE_INITIAL` when `packet->packet_type == pkt_initial`
- `QUIC_PN_SPACE_HANDSHAKE` when `packet->packet_type == pkt_handshake`
- `QUIC_PN_SPACE_APP` for all other packet types
- The ACK eliciting flag. This is calculated by looping through all
  frames and noting those that are ACK eliciting, as determined from
  [Table 1](#table-1) below. A sketch of how this information is assembled
  follows this list.
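A hedged sketch of assembling this information is shown below. The field names
on `QUIC_ACKM_RX_PKT` and the `ack_eliciting` parameter are illustrative
assumptions; only the values being collected are taken from the description
above.

```c
static void collect_ack_info(const OSSL_QRX_PKT *packet, OSSL_TIME received,
                             int ack_eliciting, QUIC_ACKM_RX_PKT *rx)
{
    rx->pkt_num = packet->packet_number;
    rx->time = received;

    /* Map the packet type onto its packet number space. */
    switch (packet->packet_type) {
    case pkt_initial:
        rx->pkt_space = QUIC_PN_SPACE_INITIAL;
        break;
    case pkt_handshake:
        rx->pkt_space = QUIC_PN_SPACE_HANDSHAKE;
        break;
    default:
        rx->pkt_space = QUIC_PN_SPACE_APP;
        break;
    }

    /* Determined by looping over the frames and consulting Table 1. */
    rx->is_ack_eliciting = ack_eliciting;
}
```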
### Passing frame data
This loops through all the frames, extracts data where there is any, and calls
the various other components as shown in the "Passed to" column in
[Table 1](#table-1) below.
#### Table 1
Taken from [RFC 9000 12.4 Frames and Frame Types]
| Type | Name | Passed to | ACK eliciting | I | H | 0 | 1 |
|------|-------------------------|--------------------------|---------------|----------|----------|----------|----------|
| 0x00 | [padding] | - | | &#10004; | &#10004; | &#10004; | &#10004; |
| 0x01 | [ping] | - | &#10004; | &#10004; | &#10004; | &#10004; | &#10004; |
| 0x02 | [ack 0x02] | [ACK manager] [^1] | | &#10004; | &#10004; | | &#10004; |
| 0x03 | [ack 0x03] | [ACK manager] [^1] | | &#10004; | &#10004; | | &#10004; |
| 0x04 | [reset_stream] | - [^2] | &#10004; | | | &#10004; | &#10004; |
| 0x05 | [stop_sending] | - [^3] | &#10004; | | | &#10004; | &#10004; |
| 0x06 | [crypto] | Handshake manager | &#10004; | &#10004; | &#10004; | | &#10004; |
| 0x07 | [new_token] | Session manager | &#10004; | | | | &#10004; |
| 0x08 | [stream 0x08] | Appropriate stream [^4] | &#10004; | | | &#10004; | &#10004; |
| 0x09 | [stream 0x09] | Appropriate stream [^4] | &#10004; | | | &#10004; | &#10004; |
| 0x0A | [stream 0x0A] | Appropriate stream [^4] | &#10004; | | | &#10004; | &#10004; |
| 0x0B | [stream 0x0B] | Appropriate stream [^4] | &#10004; | | | &#10004; | &#10004; |
| 0x0C | [stream 0x0C] | Appropriate stream [^4] | &#10004; | | | &#10004; | &#10004; |
| 0x0D | [stream 0x0D] | Appropriate stream [^4] | &#10004; | | | &#10004; | &#10004; |
| 0x0E | [stream 0x0E] | Appropriate stream [^4] | &#10004; | | | &#10004; | &#10004; |
| 0x0F | [stream 0x0F] | Appropriate stream [^4] | &#10004; | | | &#10004; | &#10004; |
| 0x10 | [max_data] | Flow control [^5] | &#10004; | | | &#10004; | &#10004; |
| 0x11 | [max_stream_data] | Flow control [^5] | &#10004; | | | &#10004; | &#10004; |
| 0x12 | [max_streams 0x12] | Connection manager? [^6] | &#10004; | | | &#10004; | &#10004; |
| 0x13 | [max_streams 0x13] | Connection manager? [^6] | &#10004; | | | &#10004; | &#10004; |
| 0x14 | [data_blocked] | Flow control [^5] | &#10004; | | | &#10004; | &#10004; |
| 0x15 | [stream_data_blocked] | Flow control [^5] | &#10004; | | | &#10004; | &#10004; |
| 0x16 | [streams_blocked 0x16] | Connection manager? [^6] | &#10004; | | | &#10004; | &#10004; |
| 0x17 | [streams_blocked 0x17] | Connection manager? [^6] | &#10004; | | | &#10004; | &#10004; |
| 0x18 | [new_connection_id] | Connection manager | &#10004; | | | &#10004; | &#10004; |
| 0x19 | [retire_connection_id] | Connection manager | &#10004; | | | &#10004; | &#10004; |
| 0x1A | [path_challenge] | Connection manager? [^7] | &#10004; | | | &#10004; | &#10004; |
| 0x1B | [path_response] | Connection manager? [^7] | &#10004; | | | | &#10004; |
| 0x1C | [connection_close 0x1C] | Connection manager | | &#10004; | &#10004; | &#10004; | &#10004; |
| 0x1D | [connection_close 0x1D] | Connection manager | | | | &#10004; | &#10004; |
| 0x1E | [handshake_done] | Handshake manager | &#10004; | | | | &#10004; |
| ???? | *[Extension Frames]* | - [^8] | &#10004; | | | | |
The I, H, 0, and 1 columns are validity in different packet types, with this meaning:
| Pkts | Description |
|:----:|----------------------------|
| I | Valid in Initial packets |
| H | Valid in Handshake packets |
| 0 | Valid in 0-RTT packets |
| 1 | Valid in 1-RTT packets |
Notes:
[^1]: This creates and populates an `QUIC_ACKM_ACK` structure, then calls
`QUIC_ACKM_on_rx_ack_frame()`, with the appropriate context
(`QUIC_ACKM`, the created `QUIC_ACKM_ACK`, `pkt_space` and `rx_time`)
[^2]: Immediately terminates the appropriate receiving stream `QUIC_STREAM`
object.
This includes discarding any buffered application data.
For a stream that's send-only, the error `STREAM_STATE_ERROR` is raised,
and the `QUIC_CONNECTION` object is terminated.
[^3]: Immediately terminates the appropriate sending stream `QUIC_STREAM`
object.
For a stream that's receive-only, the error `STREAM_STATE_ERROR` is
raised, and the `QUIC_CONNECTION` object is terminated.
[^4]: The frame payload (Stream Data) is passed as is to the `QUIC_STREAM`
object, along with available metadata (offset and length, as determined
to be available from the lower 3 bits of the frame type).
[^5]: The details of what flow control will need are yet to be determined
[^6]: I imagine that `max_streams` and `streams_blocked` concern a Connection
manager before anything else.
[^7]: I imagine that path challenge/response concerns a Connection manager
before anything else.
[^8]: We have no idea what extension frames there will be. However, we
must at least acknowledge their presence, so much is clear from the RFC.
[overview]: https://github.com/openssl/openssl/blob/master/doc/designs/quic-design/quic-overview.md
[TX packetizer]: https://github.com/openssl/openssl/pull/18570
[SSL object refactoring using SSL_CONNECTION object]: https://github.com/openssl/openssl/pull/18612
[QUIC Demuxer and Record Layer (RX+TX)]: https://github.com/openssl/openssl/pull/18949
[ACK manager]: https://github.com/openssl/openssl/pull/18564
[RFC 9000 12.4 Frames and Frame Types]: https://datatracker.ietf.org/doc/html/rfc9000#section-12.4
[padding]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.1
[ping]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.2
[ack 0x02]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.3
[ack 0x03]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.3
[reset_stream]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.4
[stop_sending]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.5
[crypto]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.6
[new_token]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.7
[stream 0x08]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.8
[stream 0x09]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.8
[stream 0x0A]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.8
[stream 0x0B]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.8
[stream 0x0C]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.8
[stream 0x0D]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.8
[stream 0x0E]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.8
[stream 0x0F]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.8
[max_data]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.9
[max_stream_data]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.10
[max_streams 0x12]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.11
[max_streams 0x13]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.11
[data_blocked]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.12
[stream_data_blocked]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.13
[streams_blocked 0x16]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.14
[streams_blocked 0x17]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.14
[new_connection_id]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.15
[retire_connection_id]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.16
[path_challenge]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.17
[path_response]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.18
[connection_close 0x1C]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.19
[connection_close 0x1D]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.19
[handshake_done]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.20
[Extension Frames]: https://datatracker.ietf.org/doc/html/rfc9000#section-19.21


@@ -0,0 +1,145 @@
Stream Receive Buffers
======================
This is a QUIC specific module that retains the received stream data
until the application reads it with SSL_read() or any future stream read
calls.
Receive Buffers requirements for MVP
------------------------------------
These are the requirements that were identified for MVP:
- As packets containing stream frames can be received in arbitrary order, the
  received data must be stored until all the data with earlier offsets has
  been received.
- As packets can be received before the application calls SSL_read() to read
  the data, the data must be stored.
- The application should be able to set a limit on how much data is stored.
  The flow controller should be used to prevent the peer from sending more
  data than that limit. Without the flow control limit a rogue peer could
  trigger a DoS via an unlimited flow of incoming stream data frames.
- After the data is passed via SSL_read() to the application, the stored
  data can be released and the flow control limit can be raised.
- As the peer can recreate stream data frames when resending them, the
  implementation must be able to properly handle frames whose data partially
  or fully overlaps previously received frames.
Optional Receive Buffers requirements
-------------------------------------
These are optional features of the stream receive buffers implementation.
They are not required for MVP but they are otherwise desirable:
- To support a single copy operation with a future stream read call
the received data should not be copied out of the decrypted packets to
store the data. The only information actually stored would be a list
of offset, length, and pointers to data, along with a pointer to the
decrypted QUIC packet that stores the actual frame.
Proposed new public API calls
-----------------------------
```C
int SSL_set_max_stored_stream_data(SSL *stream, size_t length);
```
This function adjusts the current data flow control limit on the `stream`
to allow storing `length` bytes of QUIC stream data before it is read by
the application.
OpenSSL handles sending MAX_STREAM_DATA frames appropriately when the
application reads the stored data.
```C
int SSL_set_max_unprocessed_packet_data(SSL *connection,
size_t length);
```
This sets the limit, `length` in bytes, on the unprocessed QUIC packet data
that is allowed to be allocated for the `connection`.
See the [Other considerations](#other-considerations) section below.
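A brief usage sketch of the two proposed calls follows. The limits chosen are
arbitrary illustrations, and the 1-on-success return convention is an
assumption since these APIs are only proposed here.

```c
#include <openssl/ssl.h>

static int configure_rx_buffering(SSL *conn, SSL *stream)
{
    /* Allow up to 1 MiB of stream data to be buffered before SSL_read(). */
    if (!SSL_set_max_stored_stream_data(stream, 1024 * 1024))
        return 0;

    /* Cap the unprocessed decrypted packet data held for the connection. */
    return SSL_set_max_unprocessed_packet_data(conn, 4 * 1024 * 1024);
}
```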
Interfaces to other QUIC implementation modules
-----------------------------------------------
### Front End I/O API
SSL_read() copies data out of the stored buffers if available and
eventually triggers release of stored unprocessed packet(s).
SSL_peek(), SSL_pending(), SSL_has_pending() peek into the stored
buffers for any information about the stored data.
### RX Depacketizer
The Receive Buffers module obtains the stream data via the ssl_queue_data()
callback.
The module uses ossl_qrx_pkt_wrap_up_ref() and ossl_qrx_pkt_wrap_release()
functions to keep and release decrypted packets with unprocessed data.
### Flow Control
The Receive Buffers module provides an appropriate value for the Flow
Control module to send MAX_DATA and MAX_STREAM_DATA frames. Details
TBD.
### QUIC Read Record Layer
The Receive Buffers module needs to know whether it should stop holding the
decrypted QUIC packets and start copying the stream data because the limit has
been reached. See the `SSL_set_max_unprocessed_packet_data()` function above
and the [Other considerations](#other-considerations) section below. Details
TBD.
Implementation details
----------------------
The QUIC_RSTREAM object holds the received stream data in the SFRAME_LIST
structure. This is a sorted list of partially (never fully) overlapping
data frames. Each list item holds a pointer to the received packet
wrapper for refcounting and proper release of the received packet
data once the stream data is read by the application.
Each SFRAME_LIST item has range.start and range.end values greater
than the range.start and range.end values of the previous item in the list.
This invariant is ensured on the insertion of overlapping stream frames.
Any redundant frames are released. Insertion at the end of the list
is optimised since, in the ideal situation where no packets are lost, we
always just append new frames.
See `include/internal/quic_stream.h` and `include/internal/quic_sf_list.h`
for internal API details.
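The ordering invariant described above can be illustrated with the following
self-contained sketch; the structures are simplified stand-ins for the
internal SFRAME_LIST types.

```c
#include <stdint.h>

struct sframe_range { uint64_t start, end; };

struct sframe_item {
    struct sframe_range range;
    struct sframe_item *next;
};

/* Returns 1 if the list satisfies the ordering invariant, 0 otherwise. */
static int sframe_list_invariant_ok(const struct sframe_item *head)
{
    const struct sframe_item *cur;

    for (cur = head; cur != NULL && cur->next != NULL; cur = cur->next)
        if (cur->next->range.start <= cur->range.start
                || cur->next->range.end <= cur->range.end)
            return 0;
    return 1;
}
```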
Other considerations
--------------------
The peer is allowed to recreate the stream data frames. As we aim for a
single-copy operation, a rogue peer could use this to circumvent the stored
data limits by sending duplicate frames with only slight changes in the
offset. For example: 1st frame - offset 0 length 1000, 2nd frame -
offset 1 length 1000, 3rd frame - offset 2 length 1000, and so on. We
would have to keep the packet data for all these frames, which would
effectively raise the stream data flow control limit quadratically.
This is not the only way a rogue peer could make us occupy much more
memory than what is allowed by the stream data flow control limit
in the single-copy scenario.
Although intuitively the MAX_DATA flow control limit might be used to
somehow limit the allocated packet buffer size, it is instead defined as the
sum of data allowed to be sent across all the streams in the connection.
The packet buffers will contain much more data than just the stream frames,
especially with a rogue peer, which means the MAX_DATA limit cannot be used
to limit the memory occupied by packet buffers.
To resolve this problem, we fall back to copying the data off the
decrypted packet buffer once we reach a limit on unprocessed decrypted
packets. We might also consider falling back to copying the data in case we
receive stream data frames that are partially overlapping, where one frame is
not a subrange of the other.
Because the MVP will support only a single bidirectional stream for receiving
data, the MAX_DATA flow control limit should be equal to the MAX_STREAM_DATA
limit for that stream.


@@ -0,0 +1,707 @@
TX Packetiser
=============
This module creates frames from the data obtained from the application. It
also receives CRYPTO frames from the TLS Handshake
Record Layer and ACK frames from the ACK Handling And Loss Detector
subsystem.
The packetiser also deals with the flow and congestion controllers.
Creation & Destruction
----------------------
```c
typedef struct quic_tx_packetiser_args_st {
/* Configuration Settings */
QUIC_CONN_ID cur_scid; /* Current Source Connection ID we use. */
QUIC_CONN_ID cur_dcid; /* Current Destination Connection ID we use. */
BIO_ADDR peer; /* Current destination L4 address we use. */
/* ACK delay exponent used when encoding. */
uint32_t ack_delay_exponent;
/* Injected Dependencies */
OSSL_QTX *qtx; /* QUIC Record Layer TX we are using */
QUIC_TXPIM *txpim; /* QUIC TX'd Packet Information Manager */
QUIC_CFQ *cfq; /* QUIC Control Frame Queue */
OSSL_ACKM *ackm; /* QUIC Acknowledgement Manager */
QUIC_STREAM_MAP *qsm; /* QUIC Streams Map */
QUIC_TXFC *conn_txfc; /* QUIC Connection-Level TX Flow Controller */
QUIC_RXFC *conn_rxfc; /* QUIC Connection-Level RX Flow Controller */
const OSSL_CC_METHOD *cc_method; /* QUIC Congestion Controller */
OSSL_CC_DATA *cc_data; /* QUIC Congestion Controller Instance */
OSSL_TIME (*now)(void *arg); /* Callback to get current time. */
void *now_arg;
/*
* Injected dependencies - crypto streams.
*
* Note: There is no crypto stream for the 0-RTT EL.
* crypto[QUIC_PN_SPACE_APP] is the 1-RTT crypto stream.
*/
QUIC_SSTREAM *crypto[QUIC_PN_SPACE_NUM];
} QUIC_TX_PACKETISER_ARGS;
typedef struct ossl_quic_tx_packetiser_st OSSL_QUIC_TX_PACKETISER;
OSSL_QUIC_TX_PACKETISER *ossl_quic_tx_packetiser_new(QUIC_TX_PACKETISER_ARGS *args);
void ossl_quic_tx_packetiser_free(OSSL_QUIC_TX_PACKETISER *tx);
```
Structures
----------
### Connection
Represented by a QUIC_CONNECTION object.
### Stream
Represented by a QUIC_STREAM object.
As per [RFC 9000 2.3 Stream Prioritization], streams should contain a priority
provided by the calling application. For MVP, this is not required to be
implemented because only one stream is supported. However, packets being
retransmitted should be preferentially sent as noted in
[RFC 9000 13.3 Retransmission of Information].
```c
void SSL_set_priority(SSL *stream, uint32_t priority);
uint32_t SSL_get_priority(SSL *stream);
```
For protocols where priority is not meaningful, the set function is a noop and
the get function returns a constant value.
Interactions
------------
The packetiser interacts with the following components, the APIs for which
can be found in their respective design documents and header files:
- SSTREAM: manages application stream data for transmission.
- QUIC_STREAM_MAP: Maps stream IDs to QUIC_STREAM objects and tracks which
streams are active (i.e., need servicing by the TX packetiser).
- Crypto streams for each EL other than 0-RTT (each is one SSTREAM).
- CFQ: queried for generic control frames
- QTX: record layer which completed packets are written to.
- TXPIM: logs information about transmitted packets, provides information to
FIFD.
- FIFD: notified of transmitted packets.
- ACKM: loss detector.
- Connection and stream-level TXFC and RXFC instances.
- Congestion controller (not needed for MVP).
### SSTREAM
Each application or crypto stream has a SSTREAM object for the sending part.
This manages the buffering of data written to the stream, frees that data when
the packet it was sent in was acknowledged, and can return the data for
retransmission on loss. It receives loss and acknowledgement notifications from
the FIFD without direct TX packetiser involvement.
### QUIC Stream Map
The TX packetiser queries the QUIC stream map for a list of active streams
(QUIC_STREAM), which are iterated on a rotating round robin basis. Each
QUIC_STREAM provides access to the various components, such as a QUIC_SSTREAM
instance (for streams with a send part). Streams are marked inactive when
they no longer have any need to generate frames at the present time.
### Crypto Streams
The crypto streams for each EL (other than 0-RTT, which does not have a crypto
stream) are represented by SSTREAM instances. The TX packetiser queries SSTREAM
instances provided to it as needed when generating packets.
### CFQ
Many control frames do not require special handling and are handled by the
generic CFQ mechanism. The TX packetiser queries the CFQ for any frames to be
sent and schedules them into a packet.
### QUIC Write Record Layer
Coalesced frames are passed to the QUIC record layer for encryption and sending.
To send accumulated frames as packets to the QUIC Write Record Layer:
```c
int ossl_qtx_write_pkt(OSSL_QTX *qtx, const OSSL_QTX_PKT *pkt);
```
The packetiser will attempt to maximise the number of bytes in a packet.
It will also attempt to create multiple packets to send simultaneously.
The packetiser should also implement a wait time to allow more data to
accumulate before exhausting its supply of data. The length of the wait
will depend on how much data is queued already and how much space remains in
the packet being filled. Once the wait is finished, the packets will be sent
by calling:
```c
void ossl_qtx_flush_net(OSSL_QTX *qtx);
```
The write record layer is responsible for coalescing multiple QUIC packets
into datagrams.
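A hedged sketch of this interaction: `next_completed_packet()` is a
hypothetical source of finished packets, and the non-zero-on-success return of
`ossl_qtx_write_pkt()` is an assumption.

```c
const OSSL_QTX_PKT *next_completed_packet(void); /* hypothetical */

static void flush_completed_packets(OSSL_QTX *qtx)
{
    const OSSL_QTX_PKT *pkt;

    /* Hand each completed packet to the write record layer for coalescing. */
    while ((pkt = next_completed_packet()) != NULL)
        if (!ossl_qtx_write_pkt(qtx, pkt))
            break;

    /* Push the coalesced datagrams out onto the network. */
    ossl_qtx_flush_net(qtx);
}
```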
### TXPIM, FIFD, ACK Handling and Loss Detector
ACK handling and loss detection are provided by the ACKM and FIFD. The FIFD uses
the per-packet information recorded by the TXPIM to track which frames are
contained within a packet which was lost or acknowledged, and generates
callbacks to the TX packetiser, SSTREAM instances and CFQ so that those frames
can be regenerated as needed.
1. When a packet is sent, the packetiser informs the FIFD, which also informs
the ACK Manager.
2. When a packet is ACKed, the FIFD notifies applicable SSTREAMs and the CFQ
as appropriate.
3. When a packet is lost, the FIFD notifies the TX packetiser of any frames
which were in the lost packet for which the Regenerate strategy is
applicable.
4. Currently, no notifications to the TX packetiser are needed when packets
are discarded (e.g. due to an EL being discarded).
### Flow Control
The packetiser interacts with connection and stream-level TXFC and RXFC
instances. It interacts with RXFC instances to know when to generate flow
control frames, and with TXFC instances to know how much stream data it is
allowed to send in a packet.
### Congestion Control
The packetiser is likely to interact with the congestion controller in the
future. Currently, congestion control is a no-op.
Packets
-------
Packet formats are defined in [RFC 9000 17.1 Packet Formats].
### Packet types
QUIC supports a number of different packet types. Coalescing of packets of
different encryption levels into one datagram, as per
[RFC 9000 12.2 Coalescing Packets], is done by the record layer. Unencrypted
packets are not handled by the TX Packetiser; callers may send them via direct
calls to the record layer.
#### Initial Packet
Refer to [RFC 9000 17.2.2 Initial Packet].
#### Handshake Packet
Refer to [RFC 9000 17.2.4 Handshake Packet].
#### App Data 0-RTT Packet
Refer to [RFC 9000 17.2.3 0-RTT].
#### App Data 1-RTT Packet
Refer to [RFC 9000 17.3.1 1-RTT].
Packetisation and Processing
----------------------------
### Definitions
- Maximum Datagram Payload Length (MDPL): The maximum number of UDP payload
bytes we can put in a UDP packet. This is derived from the applicable PMTU.
This is also the maximum size of a single QUIC packet if we place only one
packet in a datagram. The MDPL may vary based on both local source IP and
destination IP due to different path MTUs.
- Maximum Packet Length (MPL): The maximum size of a fully encrypted
and serialized QUIC packet in bytes in some given context. Typically
equal to the MDPL and never greater than it.
- Maximum Plaintext Payload Length (MPPL): The maximum number of plaintext
bytes we can put in the payload of a QUIC packet. This is related to
the MDPL by the size of the encoded header and the size of any AEAD
authentication tag which will be attached to the ciphertext.
- Coalescing MPL (CMPL): The maximum number of bytes left to serialize
another QUIC packet into the same datagram as one or more previous
packets. This is just the MDPL minus the total size of all previous
  packets already serialized into the same datagram (see the arithmetic
  sketch after these definitions).
- Coalescing MPPL (CMPPL): The maximum number of payload bytes we can put in
the payload of another QUIC packet which is to be coalesced with one or
more previous QUIC packets and placed into the same datagram. Essentially,
this is the room we have left for another packet payload.
- Remaining CMPPL (RCMPPL): The number of bytes left in a packet whose payload
we are currently forming. This is the CMPPL minus any bytes we have already
put into the payload.
- Minimum Datagram Length (MinDPL): In some cases we must ensure a datagram
has a minimum size of a certain number of bytes. This does not need to be
accomplished with a single packet, but we may need to add PADDING frames
to the final packet added to a datagram in this case.
- Minimum Packet Length (MinPL): The minimum serialized packet length we
are using while serializing a given packet. May often be 0. Used to meet
MinDPL requirements, and thus equal to MinDPL minus the length of any packets
we have already encoded into the datagram.
- Minimum Plaintext Payload Length (MinPPL): The minimum number of bytes
which must be placed into a packet payload in order to meet the MinPL
minimum size when the packet is encoded.
- Active Stream: A stream which has data or flow control frames ready for
transmission.
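As an illustration only (not OpenSSL API), the coalescing limits defined above
relate to one another by simple subtraction:
```c
#include <stddef.h>

/* CMPL: room left in the datagram for a further coalesced packet. */
static size_t cmpl(size_t mdpl, size_t bytes_already_in_datagram)
{
    return mdpl - bytes_already_in_datagram;
}

/* CMPPL: payload room once header and AEAD tag overhead are deducted. */
static size_t cmppl(size_t cmpl_, size_t encoded_hdr_len, size_t aead_tag_len)
{
    return cmpl_ - encoded_hdr_len - aead_tag_len;
}

/* RCMPPL: payload room remaining in the packet currently being formed. */
static size_t rcmppl(size_t cmppl_, size_t payload_bytes_so_far)
{
    return cmppl_ - payload_bytes_so_far;
}
```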
### Frames
Frames are taken from [RFC 9000 12.4 Frames and Frame Types].
| Type | Name | I | H | 0 | 1 | N | C | P | F |
|------|-----------------------|---------|---------|---------|---------|---------|---------|---------|---------|
| 0x00 | padding | &check; | &check; | &check; | &check; | &check; | | &check; | |
| 0x01 | ping | &check; | &check; | &check; | &check; | | | | |
| 0x02 | ack 0x02 | &check; | &check; | | &check; | &check; | &check; | | |
| 0x03 | ack 0x03 | &check; | &check; | | &check; | &check; | &check; | | |
| 0x04 | reset_stream | | | &check; | &check; | | | | |
| 0x05 | stop_sending | | | &check; | &check; | | | | |
| 0x06 | crypto | &check; | &check; | | &check; | | | | |
| 0x07 | new_token | | | | &check; | | | | |
| 0x08 | stream 0x08 | | | &check; | &check; | | | | &check; |
| 0x09 | stream 0x09 | | | &check; | &check; | | | | &check; |
| 0x0A | stream 0x0A | | | &check; | &check; | | | | &check; |
| 0x0B | stream 0x0B | | | &check; | &check; | | | | &check; |
| 0x0C | stream 0x0C | | | &check; | &check; | | | | &check; |
| 0x0D | stream 0x0D | | | &check; | &check; | | | | &check; |
| 0x0E | stream 0x0E | | | &check; | &check; | | | | &check; |
| 0x0F | stream 0x0F | | | &check; | &check; | | | | &check; |
| 0x10 | max_data | | | &check; | &check; | | | | |
| 0x11 | max_stream_data | | | &check; | &check; | | | | |
| 0x12 | max_streams 0x12 | | | &check; | &check; | | | | |
| 0x13 | max_streams 0x13 | | | &check; | &check; | | | | |
| 0x14 | data_blocked | | | &check; | &check; | | | | |
| 0x15 | stream_data_blocked | | | &check; | &check; | | | | |
| 0x16 | streams_blocked 0x16 | | | &check; | &check; | | | | |
| 0x17 | streams_blocked 0x17 | | | &check; | &check; | | | | |
| 0x18 | new_connection_id | | | &check; | &check; | | | &check; | |
| 0x19 | retire_connection_id | | | &check; | &check; | | | | |
| 0x1A | path_challenge | | | &check; | &check; | | | &check; | |
| 0x1B | path_response | | | | &check; | | | &check; | |
| 0x1C | connection_close 0x1C | &check; | &check; | &check; | &check; | &check; | | | |
| 0x1D | connection_close 0x1D | | | &check; | &check; | &check; | | | |
| 0x1E | handshake_done | | | | &check; | | | | |
The various fields are as defined in RFC 9000.
#### Pkts
_Pkts_ are defined as:
| Pkts | Description|
| :---: | --- |
| I | Valid in Initial packets|
| H | Valid in Handshake packets|
| 0 | Valid in 0-RTT packets|
| 1 | Valid in 1-RTT packets|
#### Spec
_Spec_ is defined as:
| Spec | Description |
| :---: | --- |
| N | Not ack-eliciting. |
| C | Does not count toward bytes in flight for congestion control purposes. |
| P | Can be used to probe new network paths during connection migration. |
| F | The contents of frames with this marking are flow controlled. |
For `C`, `N` and `P`, the entire packet must consist only of frames with the
given marking for the packet to qualify for it. For example, a packet with an
ACK frame and a _stream_ frame would qualify for neither the `C` nor the `N`
marking.
#### Notes
- Do we need the distinction between 0-RTT and 1-RTT when both are in
the Application Data number space?
- 0-RTT packets can morph into 1-RTT packets and this needs to be handled by
the packetiser.
### Frame Type Prioritisation
The frame types listed above are reordered below in the order of priority with
which we want to serialize them. We discuss the motivations for this priority
ordering below. Items without a line between them have the same priority.
```plain
HANDSHAKE_DONE GCR / REGEN
----------------------------
MAX_DATA REGEN
DATA_BLOCKED REGEN
MAX_STREAMS REGEN
STREAMS_BLOCKED REGEN
----------------------------
NEW_CONNECTION_ID GCR
RETIRE_CONNECTION_ID GCR
----------------------------
PATH_CHALLENGE -
PATH_RESPONSE -
----------------------------
ACK - (non-ACK-eliciting)
----------------------------
CONNECTION_CLOSE *** (non-ACK-eliciting)
----------------------------
NEW_TOKEN GCR
----------------------------
CRYPTO GCR/*q
============================ ] priority group, repeats per stream
RESET_STREAM GCR* ]
STOP_SENDING GCR* ]
---------------------------- ]
MAX_STREAM_DATA REGEN ]
STREAM_DATA_BLOCKED REGEN ]
---------------------------- ]
STREAM *q ]
============================ ]
----------------------------
PING -
----------------------------
PADDING - (non-ACK-eliciting)
```
(See [Frame in Flight Manager](quic-fifm.md) for information on the meaning of
the second column, which specifies the retransmission strategy for each frame
type.)
- `PADDING`: For obvious reasons, this frame type is the lowest priority. We only
add `PADDING` frames at the very end after serializing all other frames if we
have been asked to ensure a non-zero MinPL but have not yet met that minimum.
- `PING`: The `PING` frame is encoded as a single byte. It is used to make a packet
ACK-eliciting if it would not otherwise be ACK-eliciting. Therefore we only
need to send it if
a. we have been asked to ensure the packet is ACK-eliciting, and
b. we do not have any other ACK-eliciting frames in the packet.
Thus we wait until the end before adding the PING frame as we may end up
adding other ACK-eliciting frames and not need to add it. There is never
a need to add more than one PING frame. If we have been asked to ensure
  the packet is ACK-eliciting and we do not know for sure up front whether we
  will add any other ACK-eliciting frame, we must reserve one byte of our
  CMPPL to ensure we have room for this. We can cancel this reservation if we
  add an ACK-eliciting frame earlier (a minimal sketch of this bookkeeping
  follows this priority list). For example:
- We have been asked to ensure a packet is ACK-eliciting and the CMPPL is
1000 (we are coalescing with another packet).
- We allocate 999 bytes for non-PING frames.
- While adding non-PING frames, we add a STREAM frame, which is
ACK-eliciting, therefore the PING frame reservation is cancelled
and we increase our allocation for non-PING frames to 1000 bytes.
- `HANDSHAKE_DONE`: This is a single byte frame with no data which is used to
indicate handshake completion. It is only ever sent once. As such, it can be
implemented as a single flag, and there is no risk of it outcompeting other
frames. It is therefore trivially given the highest priority.
- `MAX_DATA`, `DATA_BLOCKED`: These manage connection-level flow control. They
consist of a single integer argument, and, as such, take up little space, but
are also critical to ensuring the timely expansion of the connection-level
flow control window. Thus there is a performance reason to include them in
  packets with high priority; due to their small size and the fact that there
will only ever be at most one per packet, there is no risk of them
outcompeting other frames.
- `MAX_STREAMS`, `STREAMS_BLOCKED`: Similar to the frames above for
  connection-level flow control, but control the rate at which new streams are
opened. The same arguments apply here, so they are prioritised equally.
- `STREAM`: This is the bread and butter of a QUIC packet, and contains
application-level stream data. As such these frames can usually be expected to
consume most of our packet's payload budget. We must generally assume that
- there are many streams, and
- several of those streams have much more data waiting to be sent than
can be sent in a single packet.
Therefore we must ensure some level of balance between multiple competing
streams. We refer to this as stream scheduling. There are many strategies that
can be used for this, and in the future we might even support
application-signalled prioritisation of specific streams. We discuss
stream scheduling further below.
Because these frames are expected to make up the bulk of most packets, we
consider them low priority, higher only than `PING` and `PADDING` frames.
Moreover, we give priority to control frames as unlike `STREAM` frames, they
are vital to the maintenance of the health of the connection itself. Once we
have serialized all other frame types, we can reserve the rest of the packet
for any `STREAM` frames. Since all `STREAM` frames are ACK-eliciting, if we
have any `STREAM` frame to send at all, it cancels any need for any `PING`
frame, and may be able to partially or wholly obviate our need for any
`PADDING` frames which we might otherwise have needed. Thus once we start
serializing STREAM frames, we are limited only by the remaining CMPPL.
- `MAX_STREAM_DATA`, `STREAM_DATA_BLOCKED`: Stream-level flow control. These
contain only a stream ID and integer value used for flow control, so they are
not large. Since they are critical to the management and health of a specific
stream, and because they are small and have no risk of stealing too many bytes
from the `STREAM` frames they follow, we always serialize these before any
corresponding `STREAM` frames for a given stream ID.
- `RESET_STREAM`, `STOP_SENDING`: These terminate a given stream ID and thus are
also associated with a stream. They are also small. As such, we consider these
higher priority than both `STREAM` frames and the stream-level flow control
frames.
- `NEW_CONNECTION_ID`, `RETIRE_CONNECTION_ID`: These are critical for connection
management and are not particularly large, therefore they are given a high
priority.
- `PATH_CHALLENGE`, `PATH_RESPONSE`: Used during connection migration, these
are small and are given a high priority.
- `CRYPTO`: These frames carry the crypto stream, a logical bidirectional
  bytestream used to transport TLS records for connection handshake and
  management purposes. As such, the crypto stream is viewed as
similar to application streams but of a higher priority. We are willing to let
`CRYPTO` frames outcompete all application stream-related frames if need be,
as `CRYPTO` frames are more important to the maintenance of the connection and
the handshake layer should not generate an excessive amount of data.
- `CONNECTION_CLOSE`, `NEW_TOKEN`: The `CONNECTION_CLOSE` frame can contain a
user-specified reason string. The `NEW_TOKEN` frame contains an opaque token
blob. Both can be arbitrarily large but for the fact that they must fit in a
single packet and are thus ultimately limited by the MPPL. However, these
frames are important to connection maintenance and thus are given a priority
just above that of `CRYPTO` frames. The `CONNECTION_CLOSE` frame has higher
priority than `NEW_TOKEN`.
- `ACK`: `ACK` frames are critical to avoid needless retransmissions by our peer.
They can also potentially become large if a large number of ACK ranges needs
to be transmitted. Thus `ACK` frames are given a fairly high priority;
specifically, their priority is higher than all frames which have the
potential to be large but below all frames which contain only limited data,
such as connection-level flow control. However, we reserve the right to adapt
the size of the ACK frames we transmit by chopping off some of the PN ranges
to limit the size of the ACK frame if its size would be otherwise excessive.
This ensures that the high priority of the ACK frame does not starve the
packet of room for stream data.
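As a concrete illustration of the `PING` reservation bookkeeping described in
the priority list above, the following sketch uses hypothetical names rather
than any real packetiser structure:
```c
#include <stddef.h>

/* Hypothetical bookkeeping for the one-byte PING reservation. */
struct ping_budget {
    size_t cmppl;              /* total payload budget for this packet */
    int    need_ack_eliciting; /* caller requires an ACK-eliciting packet */
    int    have_ack_eliciting; /* an ACK-eliciting frame was already added */
};

/* Bytes currently available for non-PING frames. */
static size_t nonping_budget(const struct ping_budget *b)
{
    if (b->need_ack_eliciting && !b->have_ack_eliciting)
        return b->cmppl - 1;   /* reserve one byte for a possible PING */

    return b->cmppl;           /* reservation cancelled or never needed */
}
```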
### Stream Scheduling
**Stream budgeting.** When it is time to add STREAM frames to a packet under
construction, we take our Remaining CMPPL and call this value the Streams
Budget. There are many ways we could make use of this Streams Budget.
For the purposes of stream budgeting, we consider all bytes of STREAM frames,
stream-level flow control frames, RESET_STREAM and STOP_SENDING frames to
“belong” to their respective streams, and the encoded sizes of these frames are
accounted to those streams for budgeting purposes. If the total number of bytes
of frames necessary to serialize all pending data from all active streams is
less than our Streams Budget, there is no need for any prioritisation.
Otherwise, there are a number of strategies we could employ. We can categorise
the possible strategies into two groups to begin with:
- **Intrapacket muxing (IRPM)**. When the data available to send across all
streams exceeds the Streams Budget for the packet, allocate an equal
portion of the packet to each stream.
- **Interpacket muxing (IXPM).** When the data available to send across all
streams exceeds the Streams Budget for the packet, try to fill the packet
using as few streams as possible, and multiplex by using different
streams in different packets.
Though obvious, IRPM does not appear to be a widely used strategy [1] [2],
probably due to a clear downside: if a packet is lost and it contains data for
multiple streams, all of those streams will be held up. This undermines a key
advantage of QUIC, namely the ability of streams to function independently of
one another for the purposes of head-of-line blocking. By contrast, with IXPM,
if a packet is lost, typically only a single stream is held up.
Suppose we choose IXPM. We must now choose a strategy for deciding when to
schedule streams on packets. [1] establishes that there are two basic
strategies found in use:
- A round robin (RR) strategy in which the frame scheduler switches to
the next active stream every n packets (where n ≥ 1).
- A sequential (SEQ) strategy in which a stream keeps being transmitted
until it is no longer active.
The SEQ strategy does not appear to be suitable for general-purpose
applications as it presumably starves other streams of bandwidth. It appears
that this strategy may be chosen in some implementations because it can offer
greater efficiency with HTTP/3, where there are performance benefits to
completing transmission of one stream before beginning the next. However, it
does not seem like a suitable choice for an application-agnostic QUIC
implementation. Thus the RR strategy is the better choice; it is also the more
popular choice among surveyed implementations.
The choice of `n` for the RR strategy is most trivially 1 but there are
suggestions [1] that a higher value of `n` may lead to greater performance due
to packet loss in typical networks occurring in small durations affecting small
numbers of consecutive packets. Thus, if `n` is greater than 1, fewer streams
will be affected by packet loss and held up on average. However, supporting
different values of `n` poses no particular implementation difficulty, so it is
not a major concern for discussion here. Such a parameter can easily be made
configurable.
Thus, we choose which active stream to use to fill in a packet on a revolving
round robin basis, moving to the next stream in the round robin every `n`
packets. If the available data in the active stream is not enough to fill a
packet, we also move on to the next stream, so IRPM can still occur in this
case.
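A minimal sketch of this selection policy, using illustrative types rather than
the real QUIC_STREAM_MAP interface, might look as follows:
```c
#include <stddef.h>

/* Illustrative stream record; not the real QUIC_STREAM structure. */
struct demo_stream {
    int active; /* has frames pending for transmission */
};

/*
 * Pick the next active stream, starting from *cursor and wrapping around.
 * The cursor advances to the next stream every n packets (RR period n).
 * Returns NULL if no stream is active.
 */
static struct demo_stream *rr_next(struct demo_stream *streams, size_t nstreams,
                                   size_t *cursor, size_t *pkts_on_stream,
                                   size_t n)
{
    size_t i, start = *cursor;

    if (*pkts_on_stream >= n) {
        start = (*cursor + 1) % nstreams;
        *pkts_on_stream = 0;
    }

    for (i = 0; i < nstreams; ++i) {
        size_t idx = (start + i) % nstreams;

        if (!streams[idx].active)
            continue;
        if (idx != *cursor)
            *pkts_on_stream = 0;  /* switched streams; restart the count */
        *cursor = idx;
        ++*pkts_on_stream;
        return &streams[idx];
    }
    return NULL;
}
```
A caller would also move on to the next stream when the selected stream cannot
fill the packet, as described above.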
When we fill a packet with a stream, we start with any applicable `RESET_STREAM`
or `STOP_SENDING` frames, followed by stream-level flow control frames if
needed, followed by `STREAM` frames.
(This means that `RESET_STREAM`, `STOP_SENDING`, `MAX_STREAM_DATA`,
`STREAM_DATA_BLOCKED` and `STREAM` frames are interleaved rather than occurring
in a fixed priority order; i.e., first there could be a `STOP_SENDING` frame
for one stream, then a `STREAM` frame for another, then another `STOP_SENDING`
frame for another stream, etc.)
[1] [Same Standards; Different Decisions: A Study of QUIC and HTTP/3
Implementation Diversity (Marx et al. 2020)](https://qlog.edm.uhasselt.be/epiq/files/QUICImplementationDiversity_Marx_final_11jun2020.pdf)
[2] [Resource Multiplexing and Prioritization in HTTP/2 over TCP versus HTTP/3
over QUIC (Marx et al. 2020)](https://h3.edm.uhasselt.be/files/ResourceMultiplexing_H2andH3_Marx2020.pdf)
### Packets with Special Requirements
Some packets have special requirements which the TX packetiser must meet:
- **Padded Initial Datagrams.**
A datagram must always be padded to at least 1200 bytes if it contains an
Initial packet. (If there are multiple packets in the datagram, the padding
does not necessarily need to be part of the Initial packet itself.) This
serves to confirm that the QUIC minimum MTU is met.
- **Token in Initial Packets.**
  Initial packets may need to contain a token. If used, the token is included
  in all further Initial packets sent by the client, not just the first
  Initial packet.
- **Anti-amplification Limit.** Sometimes a lower MDPL may be imposed due to
anti-amplification limits. (Only a concern for servers, so not relevant to
MVP.)
Note: It has been observed that a lot of implementations are not fastidious
about enforcing the amplification limit in terms of precise packet sizes.
Rather, they just use it to determine if they can send another packet, but not
to determine what size that packet must be. Implementations with 'precise'
anti-amplification implementations appear to be rare.
- **MTU Probes.** These packets have a precisely crafted size for the purposes
of probing a path MTU. Unlike ordinary packets, they are routinely expected to
be lost and this loss should not be taken as a signal for congestion control
purposes. (Not relevant for MVP.)
- **Path/Migration Probes.** These packets are sent to verify a new path
for the purposes of connection migration.
- **ACK Manager Probes.** Packets produced because the ACK manager has
requested a probe be sent. These MUST be made ACK-eliciting (using a PING
frame if necessary). However, these packets need not be reserved exclusively
for ACK Manager purposes; they SHOULD contain new data if available, and MAY
contain old data.
We handle the need for different kinds of packet via a notion of “archetypes”.
The TX packetiser is requested to generate a datagram via the following call:
```c
/* Generate normal packets containing most frame types. */
#define TX_PACKETISER_ARCHETYPE_NORMAL 0
/* Generate ACKs only. */
#define TX_PACKETISER_ARCHETYPE_ACK_ONLY 1
int ossl_quic_tx_packetiser_generate(OSSL_QUIC_TX_PACKETISER *txp,
uint32_t archetype);
```
More archetypes can be added in the future as required. The archetype limits
what frames can be placed into the packets of a datagram.
### Encryption Levels
A QUIC connection progresses through Initial, Handshake, 0-RTT and 1-RTT
encryption levels (ELs). The TX packetiser decides what EL to use to send a
packet; or rather, it would be more accurate to say that the TX packetiser
decides which ELs need a packet generated. Many resources are instantiated per
EL and can only be managed using a packet of that EL, so a datagram will
frequently need to contain multiple packets to manage the resources of
different ELs. We can thus view datagram construction as a process of
determining, for each EL, whether that EL needs to produce a packet, and
concatenating the resulting packets.
The following EL-specific resources exist:
- The crypto stream, a bidirectional byte stream abstraction provided
to the handshake layer. There is one crypto stream for each of the Initial,
Handshake and 1-RTT ELs. (`CRYPTO` frames are prohibited in 0-RTT packets,
which is to say the 0-RTT EL has no crypto stream of its own.)
- Packet number spaces and acknowledgements. The 0-RTT and 1-RTT ELs
share a PN space, but Initial and Handshake ELs both have their own
PN spaces. Thus, Initial packets can only be acknowledged using an `ACK`
frame sent in an Initial packet, etc.
Thus, a fully generalised datagram construction methodology looks like this:
- Let E be the set of ELs which are not discarded and for which `pending(el)` is
true, where `pending()` is a predicate function determining if the EL has data
to send.
- Determine if we are limited by anti-amplification restrictions.
(Not relevant for MVP since this is only needed on the server side.)
- For each EL in E, construct a packet bearing in mind the Remaining CMPPL
and append it to the datagram.
For the Initial EL, we attach a token if we have been given one.
  If Initial is in E, the total length of the resulting datagram must be at
  least 1200 bytes, but it is up to us which packets of which ELs in E we pad
  in order to achieve this.
- Send the datagram.
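A minimal sketch of this loop follows; `el_pending()`, `append_packet_for_el()`,
`pad_final_packet()` and `send_datagram()` are hypothetical placeholders for
packetiser internals, not real functions:
```c
#include <stddef.h>

/* Illustrative EL identifiers; not the real internal enumeration. */
enum demo_el { DEMO_EL_INITIAL, DEMO_EL_HANDSHAKE, DEMO_EL_0RTT, DEMO_EL_1RTT,
               DEMO_EL_COUNT };

static int construct_datagram(size_t mdpl)
{
    size_t dgram_len = 0;
    int el, have_initial = 0;

    for (el = 0; el < DEMO_EL_COUNT; ++el) {
        if (!el_pending(el))
            continue;                /* EL discarded or nothing to send */

        /* Append a packet for this EL, bounded by the remaining CMPL. */
        dgram_len += append_packet_for_el(el, mdpl - dgram_len);
        if (el == DEMO_EL_INITIAL)
            have_initial = 1;
    }

    /* A datagram containing an Initial packet must be at least 1200 bytes. */
    if (have_initial && dgram_len < 1200)
        dgram_len = pad_final_packet(dgram_len, 1200);

    return dgram_len > 0 ? send_datagram() : 0;
}
```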
### TX Key Update
The TX packetiser decides when to tell the QRL to initiate a TX-side key update.
It decides this using information provided by the QRL.
### Restricting packet sizes
Two factors impact the size of packets that can be sent:
* The maximum datagram payload length (MDPL)
* Congestion control
The MDPL limits the size of an entire datagram, whereas congestion control
limits how much data can be in flight at any given time, which may cause a lower
limit to be imposed on a given packet.
### Stateless Reset
Refer to [RFC 9000 10.3 Stateless Reset]. It's entirely reasonable for
the state machine to send this directly and immediately if required.
[RFC 9000 2.3 Stream Prioritization]: https://datatracker.ietf.org/doc/html/rfc9000#section-2.3
[RFC 9000 4.1 Data Flow Control]: https://datatracker.ietf.org/doc/html/rfc9000#section-4.1
[RFC 9000 10.3 Stateless Reset]: https://datatracker.ietf.org/doc/html/rfc9000#section-10.3
[RFC 9000 12.2 Coalescing Packets]: https://datatracker.ietf.org/doc/html/rfc9000#section-12.2
[RFC 9000 12.4 Frames and Frame Types]: https://datatracker.ietf.org/doc/html/rfc9000#section-12.4
[RFC 9000 13.3 Retransmission of Information]: https://datatracker.ietf.org/doc/html/rfc9000#section-13.3
[RFC 9000 17.1 Packet Formats]: https://datatracker.ietf.org/doc/html/rfc9000#section-17
[RFC 9000 17.2.1 Version Negotiation Packet]: https://datatracker.ietf.org/doc/html/rfc9000#section-17.2.1
[RFC 9000 17.2.2 Initial Packet]: https://datatracker.ietf.org/doc/html/rfc9000#section-17.2.2
[RFC 9000 17.2.3 0-RTT]: https://datatracker.ietf.org/doc/html/rfc9000#section-17.2.3
[RFC 9000 17.2.4 Handshake Packet]: https://datatracker.ietf.org/doc/html/rfc9000#section-17.2.4
[RFC 9000 17.2.5 Retry Packet]: https://datatracker.ietf.org/doc/html/rfc9000#section-17.2.5
[RFC 9000 17.3.1 1-RTT]: https://datatracker.ietf.org/doc/html/rfc9000#section-17.3.1
[RFC 9002]: https://datatracker.ietf.org/doc/html/rfc9002

View File

@@ -0,0 +1,103 @@
Thread Pool Support
===================
OpenSSL wishes to support the internal use of threads for purposes of
concurrency and parallelism in some circumstances. There are various reasons why
this is desirable:
- Some algorithms are designed to be run in parallel (Argon2);
- Some transports (e.g. QUIC, DTLS) may need to handle timer events
independently of application calls to OpenSSL.
To this end, OpenSSL can manage an internal thread pool. Tasks can be
scheduled on the internal thread pool.
There is currently a single model available to an application which wants to use
the thread pool functionality, known as the “default model”. More models
providing more flexible or advanced usage may be added in future releases.
A thread pool is managed on a per-`OSSL_LIB_CTX` basis.
Default Model
-------------
In the default model, OpenSSL creates and manages threads up to a maximum
number of threads authorized by the application.
The application enables thread pooling by calling the following function
during its initialisation:
```c
/*
* Set the maximum number of threads to be used by the thread pool.
*
* If the argument is 0, thread pooling is disabled. OpenSSL will not create any
* threads and existing threads in the thread pool will be torn down.
*
* Returns 1 on success and 0 on failure. Returns failure if OpenSSL-managed
* thread pooling is not supported (for example, if it is not supported on the
* current platform, or because OpenSSL is not built with the necessary
* support).
*/
int OSSL_set_max_threads(OSSL_LIB_CTX *ctx, uint64_t max_threads);
/*
* Get the maximum number of threads currently allowed to be used by the
* thread pool. If thread pooling is disabled or not available, returns 0.
*/
uint64_t OSSL_get_max_threads(OSSL_LIB_CTX *ctx);
```
The maximum thread count is a limit, not a target. Threads will not be spawned
unless (and until) there is demand.
As usual, `ctx` can be NULL to use the default library context.
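For example, an application that wants to allow OpenSSL to spawn up to four
pool threads on demand might do the following during initialisation (an
illustrative sketch using the declarations above):
```c
#include <openssl/types.h>

/* Enable the default thread pool model with a limit of four threads. */
static int enable_pool(OSSL_LIB_CTX *libctx)
{
    if (!OSSL_set_max_threads(libctx, 4))
        return 0;  /* thread pooling unavailable or not built in */

    return OSSL_get_max_threads(libctx) == 4;
}
```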
Capability Detection
--------------------
These functions allow the caller to determine whether OpenSSL was built with
thread support.
```c
/*
* Retrieves flags indicating what threading functionality OpenSSL can support
* based on how it was built and the platform on which it was running.
*/
/* Is thread pool functionality supported at all? */
#define OSSL_THREAD_SUPPORT_FLAG_THREAD_POOL (1U<<0)
/*
* Is the default model supported? If THREAD_POOL is supported but DEFAULT_SPAWN
* is not supported, another model must be used. Note that there is currently
* only one supported model (the default model), but there may be more in the
* future.
*/
#define OSSL_THREAD_SUPPORT_FLAG_DEFAULT_SPAWN (1U<<1)
/* Returns zero or more of OSSL_THREAD_SUPPORT_FLAG_*. */
uint32_t OSSL_get_thread_support_flags(void);
```
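For instance, an application might probe for default-model support before
attempting to enable the pool (illustrative only):
```c
#include <stdint.h>

/* Returns nonzero if the default (spawning) thread pool model is usable. */
static int default_pool_available(void)
{
    uint32_t caps = OSSL_get_thread_support_flags();

    return (caps & OSSL_THREAD_SUPPORT_FLAG_THREAD_POOL) != 0
           && (caps & OSSL_THREAD_SUPPORT_FLAG_DEFAULT_SPAWN) != 0;
}
```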
Build Options
-------------
A build option `thread-pool`/`no-thread-pool` will be introduced which allows
thread pool functionality to be compiled out. `no-thread-pool` implies
`no-default-thread-pool`.
A build option `default-thread-pool`/`no-default-thread-pool` will be introduced
which allows the default thread pool functionality to be compiled out. If this
functionality is compiled out, another thread pool model must be used. Since the
default model is the only currently supported model, disabling it renders
threading functionality unusable. As such, there is little reason to use this
option instead of `thread-pool`/`no-thread-pool`; however, it is nonetheless
provided for symmetry with any additional models introduced in the future.
Internals
---------
New internal components (e.g. condition variables) will need to be introduced
to support this functionality; however, there is no intention of making this
functionality public at this time.

View File

@@ -0,0 +1,268 @@
XOF Design
==========
XOF Definition
--------------
An extendable output function (XOF) is defined as a variable-length hash
function on a message in which the output can be extended to any desired length.
At a minimum, an XOF needs to support the following pseudo-code:
```text
xof = xof.new();
xof.absorb(bytes1);
xof.absorb(bytes2);
xof.finalize();
out1 = xof.squeeze(10);
out2 = xof.squeeze(1000);
```
### Rules
- absorb can be called multiple times
- finalize ends the absorb process (by adding padding bytes and doing a final
absorb). absorb must not be called once the finalize is done unless a reset
happens.
- finalize may be done as part of the first squeeze operation
- squeeze can be called multiple times.
OpenSSL XOF Requirements
------------------------
The current OpenSSL implementation of XOF only supports a single call to squeeze.
This assumption exists both in the high level call to EVP_DigestFinalXOF() and
in the lower level SHA3_squeeze() operation (of which there is a generic C
version, as well as assembler code for different platforms).
A decision has to be made as to whether a new API is required, as well as
consideration given to how the change may affect existing applications.
The changes introduced should have a minimal effect on other related functions
that share the same code (e.g. SHAKE and SHA3 share functionality).
Older providers that have not been updated to support this change should produce
an error if a newer core is used that supports multiple squeeze operations.
API Discussion of Squeeze
-------------------------
### Squeeze
Currently EVP_DigestFinalXOF() uses a flag to check that it is only invoked once.
It returns an error if called more than once. When initially written it also did
a reset, but that code was removed as it was deemed to be incorrect.
If we remove the flag check, then the core code will potentially call low level
squeeze code in an older provider that does not handle returning correct data
for multiple calls. To counter this, the provider needs a mechanism to indicate
that multiple calls are allowed. This could just be a new gettable flag (having
a separate provider function should not be necessary).
#### Proposal 1
Change EVP_DigestFinalXOF(ctx, out, outlen) to handle multiple calls.
Possibly have EVP_DigestSqueeze() just as an alias method?
Changing the code at this level should be a simple matter of removing the
flag check.
##### Pros
- New API is not required
##### Cons
- `Final` seems like a strange name for something called multiple times.
#### Proposal 2 (Proposed Solution)
Keep EVP_DigestFinalXOF() as a one shot function and create a new API to handle
the multi squeeze case, e.g.
```text
EVP_DigestSqueeze(ctx, out, outlen).
```
##### Pros
- Seems like a better name.
- The existing function does not change, so it is not affected by logic that
needs to run for the multi squeeze case.
- The behaviour of the existing API is the same.
- At least one other toolkit uses this approach.
##### Cons
- Adds an extra API.
- The interaction between the two APIs needs to be clearly documented.
- A call to EVP_DigestSqueeze() after EVP_DigestFinalXOF() would fail since
EVP_DigestFinalXOF() indicates no more output can be retrieved.
- A call to EVP_DigestFinalXOF() after the EVP_DigestSqueeze() would fail.
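To make the proposed usage concrete, here is a minimal sketch of the
multi-squeeze flow with SHAKE256, mirroring the pseudo-code at the top of this
document and assuming the proposed EVP_DigestSqueeze() API:
```c
#include <openssl/evp.h>

static int shake_multi_squeeze(OSSL_LIB_CTX *libctx, const unsigned char *msg,
                               size_t msglen, unsigned char out1[10],
                               unsigned char out2[1000])
{
    int ok = 0;
    EVP_MD *md = EVP_MD_fetch(libctx, "SHAKE256", NULL);
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();

    if (md == NULL || ctx == NULL)
        goto err;
    if (!EVP_DigestInit_ex2(ctx, md, NULL)
        || !EVP_DigestUpdate(ctx, msg, msglen)      /* absorb */
        || !EVP_DigestSqueeze(ctx, out1, 10)        /* first squeeze */
        || !EVP_DigestSqueeze(ctx, out2, 1000))     /* second squeeze */
        goto err;
    ok = 1;
 err:
    EVP_MD_CTX_free(ctx);
    EVP_MD_free(md);
    return ok;
}
```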
#### Proposal 3
Create a completely new type e.g. EVP_XOF_MD to implement XOF digests
##### Pros
- This would separate the XOF operations so that the interface consisted
  mainly of Init, Absorb and Squeeze APIs.
- DigestXOF could then be deprecated.
##### Cons
- XOF operations are required for post-quantum signatures, which currently use
an EVP_MD object. This would then complicate the Signature API also.
- Duplication of the EVP_MD code (although all legacy/engine code would be
removed).
Choosing a name for the API that allows multiple output calls
-------------------------------------------------------------
Currently OpenSSL only uses XOFs based on a sponge construction (which uses
the terms absorb and squeeze).
There will be other XOFs that do not use the sponge construction, such as
BLAKE2.
The proposed API name to use is EVP_DigestSqueeze.
The alternative name suggested was EVP_DigestExtract.
The terms extract and expand are used by HKDF, so that name would likely be
confusing.
API Discussion of other XOF APIs
---------------------------------
### Init
The digest can be initialized as normal using:
```text
md = EVP_MD_fetch(libctx, "SHAKE256", propq);
ctx = EVP_MD_CTX_new();
EVP_DigestInit_ex2(ctx, md, NULL);
```
### Absorb
Absorb can be done by multiple calls to:
```text
EVP_DigestUpdate(ctx, in, inlen);
```
#### Proposal:
Do we want to have an Alias function?
```text
EVP_DigestAbsorb(ctx, in, inlen);
```
(The consensus was that this is not required).
### Finalize
The finalize is just done as part of the squeeze operation.
### Reset
A reset can be done by calling:
```text
EVP_DigestInit_ex2(ctx, NULL, NULL);
```
### State Copy
The internal state of `ctx` can be copied into a new context by calling:
```text
EVP_MD_CTX_copy_ex(newctx, ctx);
```
Low Level squeeze changes
--------------------------
### Description
The existing one shot squeeze method is:
```text
SHA3_squeeze(uint64_t A[5][5], unsigned char *out, size_t outlen, size_t r)
```
It contains an opaque object for storing the state `A`, which is used to
produce output into `out`. After every `r` bits, the state `A` is updated
internally by calling KeccakF1600().
Unless a multiple of `r` is used as the `outlen`, the function has no way of
knowing where to start from if another call to SHA3_squeeze() is attempted.
The method also currently avoids doing a final call to KeccakF1600(), since it
was assumed that this is not required for a one shot operation.
### Solution 1
Modify the SHA3_squeeze code to accept an input/output parameter to track the
position within the state `A`.
See <https://github.com/openssl/openssl/pull/13470>
#### Pros
- The change to the C code is minimal; it just needs to pass this additional
  parameter.
- There are no additional memory copies of buffered results.
#### Cons
- The logic in the C reference implementation has many if clauses.
- This C code would also need to be written in assembler; the logic would
  differ between assembler routines due to the internal format of the state
  `A` being different.
- The general SHA3 case would be slower unless code was duplicated.
### Solution 2
Leave SHA3_squeeze() as it is and buffer calls to the SHA3_squeeze() function
inside the finalization. See <https://github.com/openssl/openssl/pull/7921>
#### Pros
- Change is mainly in C code.
#### Cons
- Because of the one shot nature of SHA3_squeeze(), it still needs to call
  the KeccakF1600() function directly.
- The assembler function for KeccakF1600() needs to be exposed. This function
  was not intended to be exposed, since the internal format of the state `A`
  can differ between platform architectures.
- When should this internal buffer state be cleared?
### Solution 3
Perform a one-shot squeeze on the original absorbed data and throw away the
first part of the output buffer.
#### Pros
- Very simple.
#### Cons
- Incredibly slow.
- More of a hack than a real solution.
### Solution 4 (Proposed Solution)
An alternative approach to Solution 2 is to modify SHA3_squeeze() slightly so
that a boolean can be passed in which ensures the call to KeccakF1600() is
handled correctly across multiple squeeze calls.
#### Pros
- C code is fairly simple to implement.
- The state data remains as an opaque blob.
- For larger values of `outlen`, SHA3_squeeze() may use the `out` buffer
  directly.
#### Cons
- Requires small assembler change to pass the boolean and handle the call to
KeccakF1600().
- Uses memcpy to store partial results for a single blob of squeezed data of
size 'r' bytes.