When a duplicated CRYPTO frame is received during the handshake, a server
may assume that a packet was lost and immediately retransmit its pending
CRYPTO data without waiting for PTO expiration. However, RFC 9002
indicates that this should be performed at most once per connection to
avoid excessive packet transmission.
The QUIC connection is flagged with QUIC_FL_CONN_HANDSHAKE_SPEED_UP to
mark that a fast retransmit has been performed. However, during the
refactoring of CRYPTO handling with the storage conversion from ncbuf to
ncbmbuf, the check on the flag was accidentally removed. The faulty patch
is the following one:
commit f50425c021
MINOR: quic: remove received CRYPTO temporary tree storage
This patch adds back the check on QUIC_FL_CONN_HANDSHAKE_SPEED_UP
before initiating a fast retransmit, ensuring it is only performed
once per connection.
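As a hedged sketch of the restored logic (the flag name is from this
patch; the surrounding names are assumptions for illustration):

    /* on receipt of a duplicated CRYPTO frame during handshake: fast
     * retransmit only if not already done once on this connection */
    if (!(qc->flags & QUIC_FL_CONN_HANDSHAKE_SPEED_UP)) {
        qc->flags |= QUIC_FL_CONN_HANDSHAKE_SPEED_UP;
        qc_fast_retransmit(qc); /* hypothetical helper: resend pending
                                 * CRYPTO data without waiting for PTO */
    }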
This must be backported up to 3.3.
In srv_dynamic_maxconn(), we decide that the max number of connections
is the server's maxconn if 1) the proxy's number of connections is over
fullconn, or 2) minconn was not set.
Check whether minconn is not set first, as it will be true most of the
time; and since the proxy's "beconn" variable lives in a busy cache
line, it can be costly to access, while minconn/maxconn are in a cache
line that should very rarely change.
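A hedged sketch of the reordering (the exact "minconn not set" test and
field names are assumptions based on the description above):

    static unsigned int srv_dynamic_maxconn_sketch(const struct server *srv,
                                                   const struct proxy *px)
    {
        /* cheap minconn test first: it is true most of the time and
         * short-circuits the access to the busy cache line holding
         * px->beconn */
        if (!srv->minconn || px->beconn >= px->fullconn)
            return srv->maxconn;
        return MAX(srv->minconn,
                   srv->maxconn * px->beconn / px->fullconn);
    }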
Oto Valek rightfully reported in issue #3262 that the proxy-protocol
doc makes no mention of the packed attribute on struct pp2_tlv_ssl,
which is mandatory since its fields are not type-aligned. Let's add it
to the definition and mention it explicitly to save implementers from
wasting their time trying to debug this.
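For reference, a sketch of the structure with the attribute made
explicit (field layout as published in the PROXY v2 specification):

    struct pp2_tlv_ssl {
        uint8_t  client;   /* PP2_CLIENT_* bit field */
        uint32_t verify;   /* zero if the client cert was verified */
        /* sub-TLVs follow */
    } __attribute__((packed));

Without the attribute, most compilers would insert 3 bytes of padding
between the two fields and the struct would no longer match the wire
format.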
It can be backported.
The documentation of @system-ca specifies that one can overwrite the
value provided by the SSL library using SSL_CERT_DIR.
However, it seems that X509_get_default_cert_dir() is not affected by
this environment variable; X509_get_default_cert_dir_env() needs to be
used to get the variable's name, and the value must be fetched manually.
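A minimal sketch of the manual lookup, using only the two OpenSSL
accessors mentioned above:

    /* honor the environment variable (SSL_CERT_DIR, or whatever the
     * library names it) first, then fall back to the built-in default */
    const char *dir = getenv(X509_get_default_cert_dir_env());
    if (!dir)
        dir = X509_get_default_cert_dir();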
This could be backported to all stable branches. Note that older
branches don't have the memprintf in ssl_sock.c.
SSL libraries built manually might lack the right
X509_get_default_cert_dir() value.
The common way to fix the problem is to build openssl with
./configure --openssldir=/etc/ssl/
In order to verify this setting, output it with haproxy -vv.
Given that we already have "set profiling task", it's easy to permit
enabling/disabling the lock and/or memory profiling at run time. However,
the change will only be applied the next time task profiling is switched
from off/auto to on.
The patch is very minor and is best viewed with git show -b because it
indents a whole block that moves into an "if" clause.
This can be backported to 3.3 along with the two previous patches.
In continuity with the previous patch, this one makes use of the new profiling
flags. For this, based on the global "profiling" setting, when switching
profiling on, we set or clear two flags on the thread context,
TH_FL_TASK_PROFILING_L and TH_FL_TASK_PROFILING_M to indicate whether
lock profiling and/or malloc profiling are desired when profiling is
enabled. These flags are checked along with TH_FL_TASK_PROFILING to
decide when to collect time around a lock or a malloc. And by default
we're back to the behavior of 3.2 in that neither lock nor malloc times
are collected anymore.
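A hedged sketch of the resulting check around a lock (the two flags come
from this patch; th_ctx, now_mono_time() and the accounting variable are
assumptions for illustration):

    uint64_t before = 0;

    /* measure only when task profiling AND lock profiling are on */
    if ((th_ctx->flags & (TH_FL_TASK_PROFILING | TH_FL_TASK_PROFILING_L)) ==
        (TH_FL_TASK_PROFILING | TH_FL_TASK_PROFILING_L))
        before = now_mono_time();

    take_lock(&lk); /* hypothetical lock call */

    if (before)
        lock_wait_time += now_mono_time() - before;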
This is sufficient to make the CPU usage spent in the VDSO drop
significantly, from 22% to 2.2%, on a highly loaded system.
This should be backported to 3.3 along with the previous patch.
Damien Claisse reported in issue #3257 a performance regression between
3.2 and 3.3 when task profiling is enabled, more precisely related to
the merge of the following patches:
98cc815e3e ("MINOR: activity: collect time spent with a lock held for each task")
503084643f ("MINOR: activity: collect time spent waiting on a lock for each task")
9d8c2a888b ("MINOR: activity: collect CPU time spent on memory allocations for each task")
The issue mostly comes from the first two patches: the local time is
taken when entering and leaving each lock, which costs a lot on a
contended system. The problem here is the lack of fine-grained settings
for lock and malloc profiling.
This patch introduces a better approach. The task profiler goes back to
its default behavior in on/auto modes, but the configuration now accepts
new extra options "lock", "no-lock", "memory" and "no-memory" to
precisely indicate which extra timers to watch for each task when
profiling turns on. This is achieved by setting two new flags,
HA_PROF_TASKS_LOCK and HA_PROF_TASKS_MEM, in the global "profiling"
variable.
For now this patch only parses the new values and assigns them to the
global variable from the config file. The doc was updated.
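A minimal sketch of how the parsing of these extra options could look,
assuming the two flag names above and <arg> holding the current option
word:

    if (strcmp(arg, "lock") == 0)
        profiling |= HA_PROF_TASKS_LOCK;
    else if (strcmp(arg, "no-lock") == 0)
        profiling &= ~HA_PROF_TASKS_LOCK;
    else if (strcmp(arg, "memory") == 0)
        profiling |= HA_PROF_TASKS_MEM;
    else if (strcmp(arg, "no-memory") == 0)
        profiling &= ~HA_PROF_TASKS_MEM;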
Commit 3674afe8a0 ("BUG/MEDIUM: threads: Atomically set TH_FL_SLEEPING
and clr FL_NOTIFIED") accidentally left a strange-looking line wrapping
making one think of an editing mistake, let's fix it and keep it on a
single line given that even indented wrapping is almost as large.
This can be backported with the fix above till 2.8 to keep the patch
context consistent between versions.
An issue was introduced in 3.0 with commit faa8c3e024 ("MEDIUM: lb-chash:
Deterministic node hashes based on server address"): the new server_key
field and lb_nodes entries initialization were not updated for servers
added at run time with "add server": server_key remains zero and the key
used in lb_node remains the one depending only on the server's ID.
This will cause trouble when adding new servers with consistent hashing,
because the hash-key will be ignored until the server's weight changes
and the key difference is detected, leading to its recalculation.
This is essentially caused by the poorly placed lb_nodes initialization
that is specific to lb-chash and had to be replicated in the code dealing
with server addition.
This commit solves the problem by adding a new ->server_init() function
in the lbprm proxy struct, which is called by the server addition code.
This also makes it possible to drop the complex check for LB algos that
was placed there for that purpose. For now only lb-chash provides such a
function, and it calls it as well during initial setup. This way newly
added servers always use the correct key.
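A hedged sketch of the hook from the server addition code (the callback
name is from this patch; the surrounding names are assumptions):

    /* let the LB algorithm initialize its per-server state (e.g. the
     * chash key) if it provides a hook; replaces the old per-algo checks */
    if (px->lbprm.server_init)
        px->lbprm.server_init(srv);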
While it should also theoretically have had an impact on servers added
with the "random" algorithm, it's unlikely that the difference between
proper server keys and those based on their ID could have had any visible
effect.
This patch should be backported as far as 3.0. The backport may be eased
by a preliminary backport of previous commit "CLEANUP: lb-chash: free
lb_nodes from chash's deinit(), not global", though this is not strictly
necessary if context is manually adjusted.
There's an ambiguity on the ownership of lb_nodes in chash: it's
allocated by chash but freed by the server code in srv_free_params()
from srv_drop() upon deinit. Let's move this free() call to a
chash-specific function which will own the responsibility for doing this
instead. Note that the .server_deinit() callback is properly called both
when a proxy is taken down and on server deletion.
For "add backend" implementation, postparsing code in
check_config_validity() from cfgparse.c has been extracted in a new
dedicated function named proxy_finalize() into proxy.c.
This has caused unexpected compilation issue as in the latter file
TLSEXT_TYPE_application_layer_protocol_negotiation macro may be
undefined, in particular when building without QUIC support. Thus, code
related to default ALPN on binds is discarded after the preprocessing
stage.
Fix this by including openssl-compat header file into proxy source file.
This should be sufficient to ensure SSL related defines are properly
included.
This should fix recent issues on SSL regtests.
No need to backport.
Emeric suggested that it's sometimes convenient to instantly know if a
client has advertised support for window scaling or timestamps, for
example. While the info is present in the TCP options output, it's hard
to extract since it respects the options order.
So here we're extending the 56-bit fingerprint with 8 extra bits that
indicate the presence of options 2..8, with the last bit covering any
option above 9. In practice this is sufficient since higher options are
not commonly used. Also, TCP option 5 is normally not sent on the SYN
(SACK; only SACK_perm is sent), and echo options 6 & 7 are no longer
used (replaced with timestamps). These fields might be repurposed in the
future if some more meaningful options are to be mapped (e.g. MPTCP,
TFO cookie, auth).
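A hedged sketch of how such bits could be folded into the fingerprint
(the bit layout is inferred from the description above; the function is
illustrative, not the actual code):

    static uint64_t tcp_fp_add_options(uint64_t fp, const uint8_t *kinds, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            if (kinds[i] >= 2 && kinds[i] <= 8)
                fp |= 1ULL << (56 + kinds[i] - 2); /* one bit per option 2..8 */
            else if (kinds[i] > 8)
                fp |= 1ULL << 63;                  /* any higher option */
        }
        return fp;
    }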
When a backend is created at runtime, the new proxy instance is inserted
at the end of proxies_list. This operation is buggy if this list is
empty: the code causes a null dereference which will lead to a crash.
This is reported by the following compilation warning:
CC src/proxy.o
src/proxy.c: In function 'cli_parse_add_backend':
src/proxy.c:4933:36: warning: null pointer dereference [-Wnull-dereference]
4933 | proxies_list->next = px;
| ~~~~~~~~~~~~~~~~~~~^~~~
This patch fixes this issue. Note that in reality it cannot occur at
the moment, as proxies_list cannot be empty (haproxy requires at least
one frontend to start, and the list always contains internal proxies).
No need to backport.
This patch fixes the following compilation error:
src/proxy.c:4954:12: error: format string is not a string literal
(potentially insecure) [-Werror,-Wformat-security]
4954 | ha_notice(msg);
| ^~~
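The usual fix for this class of warning is to pass the variable through
an explicit literal format string:

    ha_notice("%s", msg);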
No need to backport.
Add a new regtest to validate backend creation at runtime. A server is
then added, and requests are made to validate the newly created instance
before (with force-be-switch) and after publishing.
Implement proxy ID generation for dynamic backends. This is performed
through the already existing function proxy_get_next_id().
As an optimization, the lookup is performed starting from a global
variable <dynpx_next_id>. It is initialized to the greatest ID assigned
after parsing, and updated each time a backend instance is created. When
backend deletion is implemented, it could be lowered to the newly
available slot.
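A hedged sketch of the optimization (proxy_get_next_id()'s exact
signature is an assumption here):

    static unsigned int dynpx_next_id; /* greatest ID assigned at parsing */

    px->uuid = proxy_get_next_id(dynpx_next_id); /* assumed: first free
                                                  * ID from that point */
    dynpx_next_id = px->uuid + 1;                /* advance the cache */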
Implement the required operations for the "add backend" handler. This
requires a new proxy allocation, settings copied from the specified
default instance, and proxy config finalization. All handlers registered
via REGISTER_POST_PROXY_CHECK() are also called on the newly created
instance.
If no error was encountered, the newly created proxy is finally attached
to the proxies list.
This commit completes the "add backend" handler with some checks
performed on the specified default proxy instance. These are additional
checks, specific to dynamic backends, outside of the already existing
inheritance rules.
For now, a default proxy is considered incompatible if it is not in TCP
or HTTP mode. Also, a default proxy is rejected if it references HTTP
errors. This limitation may be lifted in the future, when HTTP errors
are partially reworked.
Add an optional "mode" argument to "add backend" CLI command. This
argument allows to specify if the backend is in TCP or HTTP mode.
By default, it is mandatory, unless the inherited default proxy already
explicitely specifies the mode. To differentiate if TCP mode is implicit
or explicit, a new proxy flag PR_FL_DEF_EXPLICIT_MODE is defined. It is
set for every defaults instances which explicitely defined their mode.
Define a basic CLI handler for "add backend".
For now, this handler only parses the name argument and returns an error
if a duplicate already exists. It runs under thread isolation, to
guarantee thread safety during the proxy creation.
This feature is considered in development; the CLI command requires
experimental-mode to be set.
Move the backend compatibility checks performed during "add server"
into a dedicated function be_supports_dynamic_srv(). This should
simplify the addition of future restrictions.
This function will be reused when implementing backend creation at
runtime.
Define a new utility function str_to_proxy_mode() which converts a
string into the corresponding proxy mode if possible. This new function
is used for the parsing of the "mode" proxy configuration keyword.
It will be reused for the dynamic backend implementation, in order to
parse a similar "mode" argument via a CLI handler.
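A minimal sketch of what such a helper could look like (the signature is
an assumption; PR_MODE_TCP/PR_MODE_HTTP are the existing mode values):

    /* returns 1 and sets <mode> on success, 0 if <str> is unknown */
    static int str_to_proxy_mode(const char *str, int *mode)
    {
        if (strcmp(str, "tcp") == 0)
            *mode = PR_MODE_TCP;
        else if (strcmp(str, "http") == 0)
            *mode = PR_MODE_HTTP;
        else
            return 0;
        return 1;
    }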
If a proxy references a defaults instance, some checks must be
performed to ensure that inheritance will be compatible. The refcount of
the defaults instance may also be incremented if some settings cannot be
copied. This operation is performed when parsing a new proxy or defaults
section which references a defaults, either implicitly or explicitly.
This patch extracts this code into a dedicated function named
proxy_ref_defaults(). This in turn may call defaults_px_ref()
(previously called proxy_ref_defaults()) to increment the refcount.
The objective of this patch is to be able to reuse the defaults
inheritance validation for dynamic backends created at runtime, outside
of the parsing code.
A lot of proxy initialization code is delayed to the post-parsing
stage, as it depends on the configuration being fully parsed. This is
performed via a loop on proxies_list.
Extract this code into a dedicated function proxy_finalize(). This will
be useful for dynamic backend creation.
Note that for the moment the code has been extracted as-is. With each
new feature, some init code was added there, so it has become a giant
loop with no real ordering. A future patch may provide some cleanup in
order to reorganize this.
Default proxies validation occurs during post-parsing. The objective is
to report any tcp/http-rules which could not behave as expected.
Previously, this was performed while looping over the standard proxies
list, whenever such a proxy referenced a default instance. This was
enough as only referenced named defaults were kept after parsing.
However, this is not the case anymore in the context of dynamic backend
creation at runtime.
As such, this patch now performs validation on every named defaults
outside of the standard proxies list loop. This should not cause any
behavior difference, as defaults are validated without using the proxies
which rely on them.
Along with this change, the PR_FL_READY proxy flag is removed. Its usage
was only really needed for defaults, to avoid validating the same
instance multiple times. With the validation of defaults in their own
loop, it is now redundant.
Fix an unhandled strdup() failure when initializing global.log_tag.
The bug was introduced with the UAF fix for the global progname pointer
in commit 351ae5dbe, so it must be backported as far as 3.1.
Initially, when init_early() was introduced, the progname string was a
local variable used for temporary storage of log_tag, and a failed
log_tag allocation was reported at that point. Now progname is global
and detached from log_tag, so this failure was no longer reported.
Must be backported to where the progname string became global, that is
v3.1-dev9-96-g49772c55e.
Defer checking the max threads per group number until we're done
parsing the configuration file, as it may be set after a "thread-groups"
directive. Otherwise the default value of 64 will be used, even if there
is a max-threads-per-group directive.
This should be backported to 3.3.
Give global.maxthrpertgroup its default value at global creation,
instead of later when we're trying to detect the thread count.
It is used when verifying the configuration file validity, and if it was
not set in the config file, in a few corner cases, the value of 0 would
be used, which would then reject perfectly fine configuration files.
This should be backported to 3.3.
Document the mworker V3 implementation introduced in HAProxy 3.1. It
explains the rationale behind moving configuration parsing out of the
master process to improve robustness.
Could be backported to 3.1.
Released version 3.4-dev4 with the following main changes :
- BUG/MEDIUM: hlua: fix invalid lua_pcall() usage in hlua_traceback()
- BUG/MINOR: hlua: consume error object if ignored after a failing lua_pcall()
- BUG/MINOR: promex: Detach promex from the server on error dump its metrics dump
- BUG/MEDIUM: mux-h1: Skip UNUSED htx block when formating the start line
- BUG/MINOR: proto_tcp: Properly report support for HAVE_TCP_MD5SIG feature
- BUG/MINOR: config: check capture pool creations for failures
- BUG/MINOR: stick-tables: abort startup on stk_ctr pool creation failure
- MEDIUM: pools: better check for size rounding overflow on registration
- DOC: reg-tests: update VTest upstream link in the starting guide
- BUG/MINOR: ssl: Properly manage alloc failures in SSL passphrase callback
- BUG/MINOR: ssl: Encrypted keys could not be loaded when given alongside certificate
- MINOR: ssl: display libssl errors on private key loading
- BUG/MAJOR: applet: Don't call I/O handler if the applet was shut
- MINOR: ssl: allow to disable certificate compression
- BUG/MINOR: ssl: fix error message of tune.ssl.certificate-compression
- DOC: config: mention some possible TLS versions restrictions for kTLS
- OPTIM: server: move queueslength in server struct
- OPTIM: proxy: separate queues fields from served
- OPTIM: server: get rid of the last use of _ha_barrier_full()
- DOC: config: mention that idle connection sharing is per thread-group
- MEDIUM: h1: strictly verify quoting in chunk extensions
- BUG/MINOR: config/ssl: fix spelling of "expose-experimental-directives"
- BUG/MEDIUM: ssl: fix msg callbacks on QUIC connections
- MEDIUM: ssl: remove connection from msg callback args
- MEDIUM: ssl: porting to X509_STORE_get1_objects() for OpenSSL 4.0
- REGTESTS: ssl: make reg-tests compatible with OpenSSL 4.0
- DOC: internals: cleanup few typos in master-worker documentation
- BUG/MEDIUM: applet: Fix test on shut flags for legacy applets
- MINOR: quic: Fix build with USE_QUIC_OPENSSL_COMPAT
- MEDIUM: tcpcheck: add post-80 option for mysql-check to support MySQL 8.x
- BUG/MEDIUM: threads: Atomically set TH_FL_SLEEPING and clr FL_NOTIFIED
- BUG/MINOR: cpu-topo: count cores not cpus to distinguish core types
- DOC: config: mention the limitation on server id range for consistent hash
- MEDIUM: backend: make "balance random" consider req rate when loads are equal
- BUG/MINOR: config: Fix setting of alt_proto
This patch fixes the bug presented in issue #3254
(https://github.com/haproxy/haproxy/issues/3254), which occurred on
FreeBSD when using a stream socket in a nameserver section. This bug
occurred due to an incorrect reset of alt_proto for a stream socket when
the default socket is created as a datagram socket. This patch fixes the
bug by doing a late assignment to alt_proto when a datagram socket is
requested, leaving only the modification of alt_proto done by MPTCP.
Additional documentation has also been added to clarify the use of the
alt_proto variable.
As reported by Damien Claisse and Cédric Paillet, the "random" LB
algorithm can become particularly unfair with large numbers of servers
having few connections. It's indeed fairly common to see many servers
with zero connections in a thousand-server farm, and in this case the
P2C algorithm, which consists in checking the servers' loads, doesn't
help at all and is basically similar to random(1). In this case, we only
rely on the distribution of server IDs in the random space to pick the
best server, but it's possible to observe huge discrepancies.
An attempt to model the problem clearly shows that with 1600 servers
with weight 10, for 1 million requests, the lowest loaded ones will
take 300 req while the most loaded ones will get 780, with most of
the values between 520 and 700.
In addition, only the 28 lower bits of server IDs are used for the key
calculation, which means that node keys are more deterministic. Setting
random keys in the lowest 28 bits only packs the values better, with the
min around 530 and the max around 710, and values mostly between 550
and 680.
This can only be compensated by increasing weights and draws, without
being a perfect fix either. At 4 draws, the min is around 560 and the
max around 670, with most values between 590 and 650.
This patch takes another approach to the problem: when servers are tied
regarding their loads, instead of arbitrarily taking the second one, we
now compare their current request rates, which are updated all the time
and smoothed over one second, and we pick the server with the lowest
request rate. Now with 2 draws, the curve is mostly flat, with the min
at 580 and the max at 628, and almost all values between 611 and 625.
And 4 draws exclusively gives values from 614 to 624.
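A hedged sketch of the tie-break between the two drawn servers (the
field and helper names, served/sess_per_sec/read_freq_ctr, are
assumptions for illustration):

    if (s1->served != s2->served)
        srv = (s1->served < s2->served) ? s1 : s2;
    else
        /* equal loads: prefer the lowest smoothed request rate */
        srv = (read_freq_ctr(&s1->sess_per_sec) <=
               read_freq_ctr(&s2->sess_per_sec)) ? s1 : s2;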
Other points will need to be addressed separately (bits of server ID,
maybe refine the hash algorithm), but these ones would affect how
caches are selected, and cannot be changed without an extra option.
For random however we can perform a change without impacting anyone.
This should be backported, probably only to 3.3 since it's where the
"random" algo became the default.
When using "hash-type consistent", we default to using the server's ID
as the insertion key. However, that key is scaled to avoid collisions
when inserting multiple slots for a server (16 per weight unit), and
that scaling loses the 4 topmost bits of the ID, so the only effective
range of IDs is 1..268435456, and anything above will provide the same
hashing keys again.
Let's mention this in the documentation, and also remind that it can
affect "balance random". This can be backported to all versions.
The per-cpu capacity of a cluster was taken into account since 3.2 with
commit 6c88e27cf4 ("MEDIUM: cpu-topo: change "performance" to consider
per-core capacity").
In cpu_policy_performance() and cpu_policy_efficiency(), we're trying
to figure out which cores have more capacity than others by comparing
their cluster's average capacity. However, contrary to what the comment
says, we're not averaging per core but per CPU, which makes a difference
for CPUs mixing SMT and non-SMT cores on the same SoC, such as Intel's
14th gen CPUs. Indeed, on a machine where cpufreq is not enabled, all
CPUs can be reported with a capacity of 1024, resulting in a big cluster
of 16*1024 and 4 small clusters of 4*1024 each, giving an average of
1024 per CPU and making it impossible to distinguish one from the other.
In this situation, both "cpu-policy performance" and "cpu-policy
efficiency" enable all cores.
But this is wrong: what needs to be taken into account in the division
is the number of *cores*, not *CPUs*, as that is what allows
distinguishing big from little clusters. This was not noticeable on the
ARM machines the commit above aimed at fixing because there the number
of CPUs equals the number of cores. And on an x86 machine with cpufreq
enabled, the frequencies continue to help spotting which ones are
big/little.
By using nb_cores instead of nb_cpus in the comparison and in the
avg_capa compare function, it properly works again on x86 without
affecting other machines with 1 CPU per core.
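A hedged sketch of the corrected average (the struct and field names are
assumptions for illustration):

    struct cpu_cluster_sketch {
        unsigned int tot_capa; /* summed capacity of all CPUs */
        unsigned int nb_cores; /* physical cores */
        unsigned int nb_cpus;  /* hardware threads */
    };

    static unsigned int cluster_avg_capa(const struct cpu_cluster_sketch *cl)
    {
        /* average per physical core, not per hardware thread, so SMT
         * clusters no longer look identical to little clusters */
        return cl->tot_capa / cl->nb_cores; /* was: / cl->nb_cpus */
    }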
This can be backported to 3.2.
When we're about to enter polling, atomically set TH_FL_SLEEPING and
remove TH_FL_NOTIFIED, instead of doing it in sequence. Otherwise,
another thread may see that both the TH_FL_SLEEPING and the
TH_FL_NOTIFIED bits are set, and not wake up the thread when it should
do so.
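A hedged sketch of the compound update using a CAS loop
(HA_ATOMIC_LOAD/HA_ATOMIC_CAS are haproxy's atomic helpers; the
surrounding names are assumptions):

    unsigned int old = HA_ATOMIC_LOAD(&th_ctx->flags), new;

    do {
        new = (old | TH_FL_SLEEPING) & ~TH_FL_NOTIFIED;
    } while (!HA_ATOMIC_CAS(&th_ctx->flags, &old, new));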
This prevents a bug where a thread keeps sleeping while it should be
handling a new connection, which can happen if there are very few
incoming connections. This is easy to reproduce using only two threads
and injecting with only one connection: the connection may then never be
handled.
This should be backported up to 2.8.
This patch adds a new 'post-80' option that sets the CLIENT_PLUGIN_AUTH
(0x00080000) capability flag and explicitly specifies
mysql_native_password as the authentication plugin in the handshake
response.
This patch also adds documentation for the post-80 option, which targets
MySQL 8.x and its new default auth plugin caching_sha2_password.
MySQL 8.0 changed the default authentication plugin from
mysql_native_password to caching_sha2_password.
The current mysql-check implementation only supports pre-41
and post-41 client auth protocols, which lack the CLIENT_PLUGIN_AUTH
capability flag. When HAProxy sends a post-41 authentication
packet to a MySQL 8.x server, the server responds with error 1251:
"Client does not support authentication protocol requested by server".
The new client capabilities for post-80 are:
- CLIENT_PROTOCOL_41 (0x00000200)
- CLIENT_SECURE_CONNECTION (0x00008000)
- CLIENT_PLUGIN_AUTH (0x00080000)
Usage example:
backend mysql_servers
option mysql-check user haproxy post-80
server db1 192.168.1.10:3306 check
The health check user must be created with mysql_native_password:
CREATE USER 'haproxy'@'%' IDENTIFIED WITH mysql_native_password BY '';
This addresses https://github.com/haproxy/haproxy/issues/2934.
Commit fa094d0b61 changed the msg callback args but forgot to fix
quic_tls_msg_callback() accordingly, so do that, and remove the unused
struct connection parameter.
A regression was introduced by commit 0ea601127 ("BUG/MAJOR: applet:
Don't call I/O handler if the applet was shut"): the test on shut flags
for legacy applets is inverted.
It should be harmless on 3.4 and 3.3 because all applets were converted,
but this fix is mandatory for 3.2 and older.
The patch must be backported as far as 3.0 with the commit above.
s/mecanism/mechanism
s/got ride/got rid
s/traditionnal/traditional
One typo is a confusion between master and worker that results in a
semantic mistake in the sentence:
"...the master will emit an "exit-on-failure" error and will kill every
workers with a SIGTERM and exits with the same error code than the
failed [-master-]{+worker+}..."
Should be backported as far as 3.1.
OpenSSL 4.0 changed the way it stores objects in X509_STORE structures
and no longer allows iterating over objects in insertion order. This
means the order of the objects is not the same before and after OpenSSL
4.0, and the reg-tests need to handle both cases.
OpenSSL 4.0 deprecates X509_STORE_get0_objects().
Every occurrence of X509_STORE_get0_objects() was first replaced by
X509_STORE_get1_objects().
This changes the refcount of the STACK_OF(X509_OBJECT) everywhere, which
then needs to be released with sk_X509_OBJECT_pop_free(objs,
X509_OBJECT_free) each time.
X509_STORE_get1_objects() is not available in AWS-LC, OpenSSL < 3.2,
LibreSSL and WolfSSL, so we still need to be compatible with get0.
To achieve this, 2 macros were added, X509_STORE_getX_objects() and
sk_X509_OBJECT_popX_free(); they use either the get0 or the get1 variant
depending on availability. In the get0 case, sk_X509_OBJECT_popX_free()
will just do nothing instead of trying to free.
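A hedged sketch of the two macros (the availability guard name below is
hypothetical):

    #ifdef HAVE_X509_STORE_GET1_OBJECTS /* hypothetical guard */
    #define X509_STORE_getX_objects(st)  X509_STORE_get1_objects(st)
    #define sk_X509_OBJECT_popX_free(o)  sk_X509_OBJECT_pop_free((o), X509_OBJECT_free)
    #else
    #define X509_STORE_getX_objects(st)  X509_STORE_get0_objects(st)
    #define sk_X509_OBJECT_popX_free(o)  do { } while (0) /* get0: nothing to free */
    #endif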
Don't backport this unless compatibility with OpenSSL 4.0 is really
needed, as it changes all the refcounts.
SSL msg callbacks are used for notifications about sent/received SSL
messages. Such callbacks are registered via
ssl_sock_register_msg_callback().
Prior to this patch, the connection was passed as the first argument of
these callbacks. However, most of them do not use it. Worse, this may
lead to confusion as the connection can be NULL in a QUIC context.
This patch cleans this up by removing the connection argument. As an
alternative, the connection can be retrieved in callbacks if needed
using ssl_sock_get_conn(), but the code must be ready to deal with
potential NULL instances. As an example, the heartbeat parsing callback
has been adjusted in this manner.
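A hedged sketch of that pattern inside a callback (ssl_sock_get_conn()'s
exact signature is an assumption based on the description above):

    struct connection *conn = ssl_sock_get_conn(ssl);

    if (!conn)
        return; /* QUIC: the connection may not exist yet */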
With the QUIC backend implementation, SSL code has been adjusted in
several places when accessing the connection instance. Indeed, with QUIC
usage, the SSL context is tied to the quic_conn, and code may be
executed before or after connection instantiation. For example, on the
frontend side, the connection is only created after QUIC handshake
completion.
The following patch tried to fix unsafe accesses to the connection. In
particular, msg callbacks were not called anymore if the connection was
NULL.
fab7da0fd0
BUG/MEDIUM: quic-be/ssl_sock: TLS callback called without connection
However, most msg callbacks do not need to use the connection instance.
The only occurrence where it is accessed is heartbeat message parsing,
which is the only crash case actually solved. The above fix is too
restrictive as it completely prevents execution of these callbacks when
the connection is unset. This breaks several features with QUIC, such as
SSL key logging or samples based on ClientHello capture.
The current patch reverts the above one, thus restoring the invocation
of msg callbacks for QUIC during the whole low-level connection
lifetime. This requires a small adjustment in the heartbeat parsing
callback to prevent access to a NULL connection.
The issue with ClientHello capture was mentioned in GitHub issue #2495.
This must be backported up to 3.3.
The help message for "ktls" mentions "expose-experimental-directive"
without the final 's', which is particularly annoying when copy-pasting
the directive from the error message directly into the config.
This should be backported to 3.3.
As reported by Ben Kallus in the following thread:
https://www.mail-archive.com/haproxy@formilux.org/msg46471.html
there exist some agents which mistakenly accept CRLF inside quoted
chunk extensions, making it possible to fool them by injecting one
extra chunk they won't see for example, or making them miss the end
of the body depending on how it's done. Haproxy, like most other
agents nowadays, doesn't care at all about chunk extensions and just
drops them, in agreement with the spec.
However, as discussed, since chunk extensions are basically never used
except for attacks, and since the cost of just matching quote pairs and
checking backslash-escape consistency remains relatively low, it can
make sense to add such a check and abort the message parsing when this
situation is encountered. Note that it has to be done in two places,
because there is a fast path and a slow path for chunk parsing.
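A hedged sketch of the validation idea (names are illustrative, not the
actual parser code):

    /* scan a chunk-extension range; honor backslash escapes inside
     * quotes, reject CR/LF inside quotes and unbalanced quotes */
    static int chunk_ext_is_valid(const char *p, const char *end)
    {
        int in_quote = 0;

        for (; p < end; p++) {
            if (in_quote && *p == '\\' && p + 1 < end) {
                p++; /* skip the escaped character */
                continue;
            }
            if (*p == '"')
                in_quote = !in_quote;
            else if (in_quote && (*p == '\r' || *p == '\n'))
                return 0;
        }
        return !in_quote; /* quotes must be balanced */
    }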
Also note that it *will* cause transfers using improperly formatted
chunk extensions to fail, but since these are really not used, and the
likelihood of them being used but improperly quoted is certainly much
lower than the risk of crossing a broken parser on the client's request
path or on the server's response path, we consider the risk acceptable.
The test is not subject to the configurable parser exceptions, and it's
very unlikely that this will ever be needed.
Since this is done in 3.4 which will be LTS, this patch will have to be
backported to 3.3 so that any unlikely trouble gets a chance to be
detected before users upgrade to 3.4.
Thanks to Ben for the discussion, and to Rajat Raghav for sparking it
in the first place even though the original report was mistaken.
Cc: Ben Kallus <benjamin.p.kallus.gr@dartmouth.edu>
Cc: Rajat Raghav <xclow3n@gmail.com>
Cc: Christopher Faulet <cfaulet@haproxy.com>