+LMDB Lookup support
+-------------------
+LMDB is an ultra-fast, ultra-compact, crash-proof key-value embedded data store.
+It is modeled loosely on the BerkeleyDB API. You should read about the feature
+set as well as operation modes at https://symas.com/products/lightning-memory-mapped-database/
+
+LMDB single key lookup support is provided by linking to the LMDB C library.
+The current implementation does not support writing to the LMDB database.
+
+Visit https://github.com/LMDB/lmdb to download the library or find it in your
+operating system's package repository.
+
+If building from source, this description assumes that headers will be in
+/usr/local/include, and that the libraries are in /usr/local/lib.
+
+1. In order to build exim with LMDB lookup support, add or uncomment
+
+EXPERIMENTAL_LMDB=yes
+
+to your Local/Makefile. (Re-)build/install exim. exim -d should show
+Experimental_LMDB in the line "Support for:". A fuller Local/Makefile
+fragment:
+
+EXPERIMENTAL_LMDB=yes
+LDFLAGS += -llmdb
+# CFLAGS += -I/usr/local/include
+# LDFLAGS += -L/usr/local/lib
+
+The first line enables the feature and includes the relevant code, and
+the second line links the LMDB library into the exim binary. The
+commented-out lines should be uncommented if you built LMDB from source
+and installed it in the default location. Adjust the paths if you
+installed it elsewhere, but you do not need to uncomment them if an rpm
+(or you) installed the library in the package-controlled locations
+(/usr/include and /usr/lib).
+
+2. Create your LMDB files. You can use the mdb_load utility, which is part
+of the LMDB distribution, or your favourite language bindings (see the
+sketch below).
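+
+A minimal sketch using mdb_load (the key/value pairs and the file path are
+illustrative; the -n option is assumed because the lookups below name a
+single database file rather than an environment directory):
+
+  printf 'example.com\naccept\nexample.org\naccept\n' | \
+    mdb_load -n -T /var/lib/baruwa/data/db/relaydomains.mdb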
+
+3. Add the single-key lookups to your exim.conf file; example lookups are
+shown below.
+
+${lookup{$sender_address_domain}lmdb{/var/lib/baruwa/data/db/relaydomains.mdb}{$value}}
+${lookup{$sender_address_domain}lmdb{/var/lib/baruwa/data/db/relaydomains.mdb}{$value}fail}
+${lookup{$sender_address_domain}lmdb{/var/lib/baruwa/data/db/relaydomains.mdb}}
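+
+Being a single-key lookup type, lmdb can also be used in list definitions; a
+sketch assuming the same database file (the domainlist name is illustrative):
+
+  domainlist relay_from_domains = lmdb;/var/lib/baruwa/data/db/relaydomains.mdb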
+
+
+Queuefile transport
+-------------------
+Queuefile is a pseudo transport which does not perform final delivery.
+It simply copies the exim spool files out of the spool directory into
+an external directory, retaining the exim spool format.
+
+The spool files can then be processed by external programs and requeued
+into the exim spool directories for final delivery.
+However, note carefully the warnings in the main documentation on
+spool file formats.
+
+The motivation/inspiration for the transport is to allow external
+processes to access email queued by exim and have access to all the
+information which would not be available if the messages were delivered
+to the process in the standard email formats.
+
+The MailScanner package is one of the programs that can take advantage
+of this transport to filter email.
+
+The transport can be used in the same way as the other existing transports,
+i.e. by configuring a router to route mail to a transport configured with
+the queuefile driver.
+
+The transport takes only one option:
+
+* directory - This specifies the directory into which messages are to be
+copied. It is expanded.
+
+The generic transport options (body_only, current_directory, disable_logging,
+debug_print, delivery_date_add, envelope_to_add, event_action, group,
+headers_add, headers_only, headers_remove, headers_rewrite, home_directory,
+initgroups, max_parallel, message_size_limit, rcpt_include_affixes,
+retry_use_local_part, return_path, return_path_add, shadow_condition,
+shadow_transport, transport_filter, transport_filter_timeout, user) are
+ignored.
+
+Sample configuration:
+
+(Router)
+
+scan:
+ driver = accept
+ transport = scan
+
+(Transport)
+
+scan:
+ driver = queuefile
+ directory = /var/spool/baruwa-scanner/input
+
+
+In order to build exim with Queuefile transport support, add or uncomment
+
+EXPERIMENTAL_QUEUEFILE=yes
+
+to your Local/Makefile. (Re-)build/install exim. exim -d should show
+Experimental_QUEUEFILE in the line "Support for:".
+
+
+ARC support
+-----------
+Specification: https://tools.ietf.org/html/draft-ietf-dmarc-arc-protocol-11
+Note that this is not an RFC yet, so it may change.
+
+ARC is intended to support the utility of SPF and DKIM in the presence of
+intermediaries in the transmission path - forwarders and mailinglists -
+by establishing a cryptographically-signed chain in headers.
+
+Normally one would only bother doing ARC-signing when functioning as
+an intermediary. One might do verification for local destinations.
+
+ARC uses the notion of an "ADministrative Management Domain" (ADMD).
+Described in RFC 5598 (section 2.3), this is essentially the set of
+mail-handling systems that the mail transits. A label should be chosen to
+identify the ADMD. Messages should be ARC-verified on entry to the ADMD,
+and ARC-signed on exit from it.
+
+
+Verification
+--
+An ACL condition is provided to perform the "verifier actions" detailed
+in section 6 of the above specification. It may be called from the DATA ACL
+and succeeds if the result matches any of a given list.
+It also records the highest ARC instance number (the chain size)
+and the verification result, for later use in creating a standard
+Authentication-Results: header.
+
+ verify = arc/<acceptable_list> none:fail:pass
+
+ add_header = :at_start:${authresults {<admd-identifier>}}
+
+ Note that it would be wise to strip incoming messages of A-R headers
+ that claim to be from our own <admd-identifier>.
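+
+ A sketch of a DATA-time ACL combining these (the ACL name and the ADMD
+ label "admd.example.org" are illustrative):
+
+   acl_check_data:
+     # Record the ARC verification result and add an A-R header
+     # naming our own ADMD.
+     warn    verify     = arc/none:fail:pass
+             add_header = :at_start:${authresults {admd.example.org}}
+
+     accept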
+
+There are four new variables:
+
+ $arc_state         One of pass, fail, none
+ $arc_state_reason  (if fail, why)
+ $arc_domains       colon-separated list of ARC chain domains, in chain order;
+                    problematic elements may appear as empty list elements
+ $arc_oldest_pass   lowest passing instance number of the chain
+
+Example:
+ logwrite = oldest-p-ams: <${reduce {$lh_ARC-Authentication-Results:} \
+ {} \
+ {${if = {$arc_oldest_pass} \
+ {${extract {i}{${extract {1}{;}{$item}}}}} \
+ {$item} {$value}}} \
+ }>
+
+Receive log lines for an ARC pass will be tagged "ARC".
+
+
+Signing
+--
+arc_sign = <admd-identifier> : <selector> : <privkey> [ : <options> ]
+
+This is an option on the smtp transport, which constructs an ARC set of
+headers and prepends it to the message. The textually-first
+Authentication-Results: header is used as a basis (you must have added one
+on entry to the ADMD).
+The option is expanded as a whole; if the result is unset, empty or a forced
+failure, no signing is done.
+If it is set, all of the first three elements must be non-empty.
+
+The fourth element is optional, and if present consists of a comma-separated list
+of options. The options implemented are
+
+  timestamps       Add a t= tag to the generated AMS and AS headers, with the
+                   current time.
+  expire[=<val>]   Add an x= tag to the generated AMS header, with an expiry
+                   time. If the value <val> is a plain number it is used
+                   unchanged. If it starts with a '+' then the following
+                   number is added to the current time, as an offset in
+                   seconds. If a value is not given it defaults to a one
+                   month offset.
+
+[As of writing, Gmail insists that a t= tag on the AS is mandatory.]
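+
+A sketch of a transport setting (the ADMD label, selector and key path are
+illustrative only; consult the DKIM/ARC signing documentation for the exact
+form the <privkey> element may take):
+
+  arc_sign = example.org : s1 : ${readfile{/etc/exim/arc.pem}} : timestamps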
+
+Caveats:
+ * There must be an Authentication-Results header, presumably added by an ACL
+ while receiving the message, for the same ADMD, for arc_sign to succeed.
+ This requires careful coordination between inbound and outbound logic.
+
+ Only one A-R header is taken into account. This is a limitation versus
+ the ARC spec (which says that all A-R headers from within the ADMD must
+ be used).
+
+ * If passing a message to another system, such as a mailing-list manager
+ (MLM), between receipt and sending, be wary of manipulations to headers made
+ by the MLM.
+ + For instance, Mailman with REMOVE_DKIM_HEADERS==3 might improve
+ deliverability in a pre-ARC world, but that option also renames the
+ Authentication-Results header, which breaks signing.
+
+ * Even if you use multiple DKIM keys for different domains, the ARC concept
+ should try to stick to one ADMD, so pick a primary domain and use that for
+ AR headers and outbound signing.
+
+Signing is not compatible with cutthrough delivery; any (before expansion)
+value set for the option will result in cutthrough delivery not being
+used via the transport in question.
+
+
+
+
+Early pipelining support
+------------------------
+Ref: https://datatracker.ietf.org/doc/draft-harris-early-pipe/
+
+If Exim is built with EXPERIMENTAL_PIPE_CONNECT, support is included for this
+feature. The server advertises the feature in its EHLO response, currently
+using the name "X_PIPE_CONNECT" (this will change at some time in the future).
+A client may cache this information, along with the rest of the EHLO response,
+and use it for later connections. On those later connections, ESMTP commands
+can be sent before the banner is received.
+
+Up to 1.5 roundtrip times can be taken out of cleartext connections, 2.5 on
+STARTTLS connections.
+
+In combination with the traditional PIPELINING feature the following example
+sequences are possible (among others):
+
+(client)                (server)
+
+EHLO,MAIL,RCPT,DATA ->
+                        <- banner,EHLO-resp,MAIL-ack,RCPT-ack,DATA-goahead
+message-data ->
+------
+
+EHLO,MAIL,RCPT,BDAT ->
+                        <- banner,EHLO-resp,MAIL-ack,RCPT-ack
+message-data ->
+------
+
+EHLO,STARTTLS ->
+                        <- banner,EHLO-resp,TLS-goahead
+TLS1.2-client-hello ->
+                        <- TLS-server-hello,cert,hello-done
+client-Kex,change-cipher,finished ->
+                        <- change-cipher,finished
+EHLO,MAIL,RCPT,DATA ->
+                        <- EHLO-resp,MAIL-ack,RCPT-ack,DATA-goahead
+
+------
+(tls-on-connect)
+TLS1.2-client-hello ->
+                        <- TLS-server-hello,cert,hello-done
+client-Kex,change-cipher,finished ->
+                        <- change-cipher,finished
+                        <- banner
+EHLO,MAIL,RCPT,DATA ->
+                        <- EHLO-resp,MAIL-ack,RCPT-ack,DATA-goahead
+
+Where the initial client packet is SMTP, it can combine with the TCP Fast Open
+feature and be sent in the TCP SYN.
+
+
+A main-section option "pipelining_connect_advertise_hosts" (default: *)
+and an smtp transport option "hosts_pipe_connect" (default: unset)
+control the feature.
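+
+A sketch of enabling the client side (the transport name is illustrative; the
+server-side main option already defaults to "*"):
+
+  remote_smtp:
+    driver = smtp
+    hosts_pipe_connect = *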
+
+If the "pipelining" log_selector is enabled, the "L" field in server <=
+log lines has a period appended if the feature was advertised but not used;
+or has an asterisk appended if the feature was used. In client => lines
+the "L" field has an asterisk appended if the feature was used.
+
+The "retry_data_expire" option controls cache invalidation.
+Entries are also rewritten (or cleared) if the advertised features
+change.
+
+
+NOTE: since the EHLO command must be constructed before the connection is
+made, it cannot depend on the interface IP address that will be used.
+Transport configurations should be checked for this. An example that avoids
+the problem:
+
+ helo_data = ${if def:sending_ip_address \
+ {${lookup dnsdb{>! ptr=$sending_ip_address} \
+ {${sg{$value} {^([^!]*).*\$} {\$1}}} fail}} \
+ {$primary_hostname}}
+
+
+
+
+TLS Session Resumption
+----------------------
+TLS Session Resumption for TLS 1.2 and TLS 1.3 connections can be used (defined
+in RFC 5077 for 1.2). The support for this can be included by building with
+EXPERIMENTAL_TLS_RESUME defined. This requires GnuTLS 3.6.3 or OpenSSL 1.1.1
+(or later).
+
+Session resumption (this is the "stateless" variant) involves the server sending
+a "session ticket" to the client on one connection, which can be stored by the
+client and used for a later session. The ticket contains sufficient state for
+the server to reconstruct the TLS session, avoiding some expensive crypto
+calculation and one full packet roundtrip time.
+
+Operational cost/benefit:
+ The extra data being transmitted costs a minor amount, and the client has
+ extra costs in storing and retrieving the data.
+
+ In the Exim/GnuTLS implementation the extra cost on an initial connection
+ which is TLS1.2 over a loopback path is about 6ms on 2017-laptop class hardware.
+ The saved cost on a subsequent connection is about 4ms; three or more
+ connections become a net win. On longer network paths, two or more
+ connections will have an average lower startup time thanks to the one
+ saved packet roundtrip. TLS1.3 will save the crypto cpu costs but not any
+ packet roundtrips.
+
+ Since a new hints DB is used, the hints DB maintenance should be updated
+ to additionally handle "tls".
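+
+ For example, a sketch assuming the default spool directory (adjust the path
+ and retention time to suit):
+
+   exim_tidydb -t 7d /var/spool/exim tls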
+
+Security aspects:
+ The session ticket is encrypted, but is obviously an additional security
+ vulnerability surface. An attacker able to decrypt it would have access
+ to all connections using the resumed session.
+ The session ticket encryption key is not committed to storage by the server
+ and is rotated regularly (OpenSSL: 1hr, and one previous key is used for
+ overlap; GnuTLS 6hr but does not specify any overlap).
+ Tickets have limited lifetime (2hr, and new ones issued after 1hr under
+ OpenSSL. GnuTLS 2hr, appears to not do overlap).
+
+ There is a question-mark over the security of the Diffie-Hellman parameters
+ used for session negotiation. TBD. q-value; cf. bug 1895
+
+Observability:
+ A new log_selector, "tls_resumption", appends an asterisk to the tls_cipher
+ "X=" element.
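+
+ For example, in the main configuration section:
+
+   log_selector = +tls_resumption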
+
+ Variables $tls_{in,out}_resumption have bits 0-4 indicating respectively
+ support built, client requested ticket, client offered session,
+ server issued ticket, resume used. A suitable decode list is provided
+ in the builtin macro _RESUME_DECODE for ${listextract {}{}}.
+
+Issues:
+ In a resumed session:
+ $tls_{in,out}_cipher will have values different to the original (under GnuTLS)
+ $tls_{in,out}_ocsp will be "not requested" or "no response", and
+ hosts_require_ocsp will fail