cfg_program_postparser() contains 2 parts:
- check the combination of MODE_MWORKER and the "program" section: if a
"program" section was parsed, MODE_MWORKER is mandatory;
- check the "command" keyword, which is mandatory for this section as
well.
After the master-worker refactoring, it is now more appropriate to do the
first part in read_cfg_in_discovery_mode(), where we already check the
combination of MODE_MWORKER and the -S option.
The second part needs to be done just below, in read_cfg_in_discovery_mode()
as well, because only the master process now parses the "program" section,
and programs are forked before the postparser functions run in step_init_2.
Otherwise, if the 'command' keyword is absent, mworker_ext_launch_all() will
emit a log message saying that the program was started, while nothing has
actually been launched.
This does not need to be backported, as it is related to the master-worker
refactoring.
A ClientHello may be split across several different CRYPTO frames, which
are then mixed in a single QUIC packet. This is notably used by clients
such as Chrome to render the first Initial packet opaque to middleboxes.
Each packet frame is handled sequentially. Out-of-order CRYPTO frames
are buffered in an ncbuf, until gaps are filled and data is transferred
to the SSL stack. If CRYPTO frames are heavily split into small
fragments, buffering may fail as ncbuf does not support small gaps. This
causes the whole packet to be rejected and left unacknowledged. It could
be solved if the client re-emitted its ClientHello after remixing its
CRYPTO frames.
This patch improves CRYPTO frame parsing. Each CRYPTO frame which cannot
be buffered due to the ncbuf limitation is now stored in a temporary
list. Packet parsing is completed until all frames have been handled. If
the temporary list is not empty, the stored frames are parsed again.
With the newly buffered CRYPTO frames, the ncbuf insert operation may
this time succeed if a frame now covers a whole gap. Reparsing loops
until either no progress can be made or it has already been done 3
times, to limit CPU usage.
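The retry strategy can be illustrated by the following standalone sketch
(types and function names are hypothetical; the real code works on haproxy's
quic_frame structures and an ncbuf):

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical illustration of the bounded reparsing loop described
     * above. try_buffer_crypto() stands in for the ncbuf insertion which
     * may fail on too-small gaps; it is not the real haproxy function.
     */
    struct crypto_frm { const unsigned char *data; size_t len; bool done; };

    static bool try_buffer_crypto(struct crypto_frm *f) { (void)f; return true; }

    static void reparse_crypto(struct crypto_frm *retry, size_t nb)
    {
        int pass = 0;
        bool progress = true;

        while (progress && pass++ < 3) {
            progress = false;
            for (size_t i = 0; i < nb; i++) {
                if (retry[i].done)
                    continue;
                /* a frame buffered on a previous pass may have filled the
                 * gap which made this one fail earlier
                 */
                if (try_buffer_crypto(&retry[i])) {
                    retry[i].done = true;
                    progress = true;
                }
            }
        }
    }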
This patch should fix github issue #2776.
This should be backported up to 2.6, after a period of observation. Note
that it relies on the following refactoring patches:
MINOR: quic: extend return value of CRYPTO parsing
MINOR: quic: use dynamically allocated frame on parsing
MINOR: quic: simplify qc_parse_pkt_frms() return path
qc_handle_crypto_frm() is the function used to handle a newly received
CRYPTO frame. Change its API to use a new dedicated return type. This
allows reporting whether the frame was properly handled, ignored because
it was already parsed previously, or rejected after a fatal error.
This commit does not bring any functional change. However, it allows
simplifying the qc_handle_crypto_frm() API by removing <fast_retrans> as an
output parameter. Also, this patch will be necessary to support multiple
iterations of packet parsing for CRYPTO frames.
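As a rough sketch, the dedicated return type could look like this (member
names are illustrative, not necessarily those used in the patch):

    /* Illustrative only: a dedicated status for CRYPTO frame handling. */
    enum qc_crypto_ret {
        QC_CRYPTO_RET_OK,       /* frame buffered or forwarded to the SSL stack */
        QC_CRYPTO_RET_IGNORED,  /* already parsed previously, nothing to do */
        QC_CRYPTO_RET_ERR,      /* fatal error, reject the packet */
    };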
qc_parse_pkt_frms() is the function responsible for parsing a received QUIC
packet. The payload is decoded and split into individual frames which are
then handled individually. Previously, the frame was allocated locally on
the stack. Change this to work on a dynamically allocated frame.
This commit does not bring any functional change. However, it will be
useful to extend packet parsing. In particular, it will be necessary to
save some frames during parsing in order to parse them again after the
others.
Change the qc_parse_pkt_frms() return path for the normal and error cases.
Most notably, it allows removing the local variable <ret>, as the return
value is now hardcoded at the normal and err labels. This also allows
defining a different trace for the error exit path.
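Schematically, the reworked return path follows this pattern (simplified,
self-contained sketch, not the actual function):

    #include <stdio.h>

    /* Illustrative sketch: the return value is hardcoded at each label
     * instead of going through a local <ret> variable, and the error path
     * can emit its own dedicated trace.
     */
    static int parse_frames(int fail)
    {
        if (fail)
            goto err;
        /* normal frame handling would go here */
        return 1;                /* success value hardcoded here */
     err:
        fprintf(stderr, "leaving on frame parsing error\n");
        return 0;                /* error value hardcoded here */
    }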
As reported by Pierre Maoui in GH #2477, it's not possible to render
control chars from variables or expressions verbatim in the payload part
of http-return statements. That's a problem because this part should not
need to be encoded at all (we could even imagine building favicons on
the fly, for example).
In fact, it is the LOG_OPT_HTTP option, when passed as a default option to
parse_logformat_string(), which tells the log encoder that the payload
should be HTTP-encoded using lf_chunk() instead of being printed using the
per-type encoder.
This option was set when parsing logformat expressions for the lf-string
expression under http-return statements, as well as logformat expressions
for the set-map action. While it is true that those actions may only be
used in an HTTP context, the LOG_OPT_HTTP logformat option is not relevant
there, because the payload is expected to be used without being encoded.
So let's simply get rid of this option when parsing logformat expressions
for the set-map action key/value and the lf-string from the http-request
return action, and add a note next to the LOG_OPT_HTTP option to indicate
that it is used to tell the log encoder that the payload should be
HTTP-encoded.
Thanks to Pierre for having reported the issue and Willy for the
analysis and patch proposal.
This was reported by Coverity. In the sample_conv_q_preferred() function, a
goto statement after a "while(1)" loop is unreachable. Instead of just
removing it, the same goto statement in the loop is replaced by a break. It
is safer this way, in case the loop changes in the future.
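The change follows this general pattern (generic, self-contained
illustration, not the actual haproxy code):

    /* Generic illustration of the Coverity fix: break out of the loop and
     * let a single exit point follow it, instead of keeping an unreachable
     * goto after a while(1) loop.
     */
    static int find_first_even(const int *v, int n)
    {
        int i = 0;

        while (1) {
            if (i >= n || v[i] % 2 == 0)
                break;          /* previously: goto out; */
            i++;
        }
        /* out: */
        return i < n ? v[i] : -1;
    }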
This patch should fix the issue #2683.
The receiver protocol is always set when a listener is created or cloned, at
least for now. And it is not checked in most places, except in the
listener_accept() function. So, let's remove the remaining useless checks.
That will avoid false Coverity reports in the future.
This patch should fix the issue #2631.
This bug arrived with this commit:
cdfceb10a MINOR: quic: refactor qc_prep_pkts() loop
which prevents haproxy from sending PING-only packets/datagrams (i.e. some
packets/datagrams with only a PING frame as ack-eliciting frame inside).
Such packets/datagrams are useful in rare cases during retransmissions
when one wants to probe the peer without exceeding the anti-amplification
limit.
Modify the condition passed to qc_build_pkt() to add padding to the current
datagram. One does not want to do that when probing the peer without ack-eliciting
frames passed in the <frms> parameter. Indeed qc_build_pkt() calls qc_do_build_pkt()
which supports this case: if <probe> is true (probing required), qc_do_build_pkt()
handles the case where some padding must be added to a PING-only packet/datagram.
This is the case when probing with an empty <frms> list of ack-eliciting
frames without exceeding the anti-amplification limit from qc_dgrams_retransmit().
Add some comments to qc_build_pkt() and qc_do_build_pkt() to clarify this
as this code is easy to break!
Thanks to @Tristan971 for having reported this issue in GH #2709.
Must be backported to 3.0.
Add a BUG_ON() to detect some malformed packets which are supposed to probe the
peer without being ack-eliciting: the peer would not acknowledge such packets.
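Schematically, the asserted invariant is of this form (variable names are
hypothetical; haproxy's BUG_ON() macro is used for such checks):

    /* a packet built to probe the peer must carry at least one
     * ack-eliciting frame, otherwise it will never be acknowledged
     */
    BUG_ON(probing && !pkt_is_ack_eliciting);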
These functions return a symbolic error code such as ECONNRESET to keep
logs compact while making them human-readable. It's a good alternative
to the numeric code in that it's more expressive, and a good alternative to the
full message since it's shorter and more precise (some codes even match
errno names).
The doc was updated so that the symbolic names appear in the table. It
could be useful to backport this feature to help with troubleshooting
some issues, though backporting the doc might possibly be more annoying
in case users have local patches already, so maybe the table update does
not need to be backported in this case.
For a long time the errno values returned by recv/send/splice() were not
translated to connection error codes. There are not that many eligible ones,
and having them would help a lot when debugging some complex issues where
logs disagree with network traces. Let's add them now.
While we get reports of connection setup errors in fc_err/bc_err, we
don't have the equivalent for the recv/send/splice syscalls. Let's
add provisions for new codes that cover the common errno values that
recv/send/splice can return, i.e. ECONNREFUSED, ENOMEM, EBADF, EFAULT,
EINVAL, ENOTCONN, ENOTSOCK, ENOBUFS, EPIPE. We also add a special case
for when the poller reported the error itself. It's worth noting that
EBADF/EFAULT/EINVAL will generally indicate serious bugs in the code
and should not be reported.
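As an illustration of the errno-to-symbol mapping (standalone sketch; the
real implementation goes through dedicated CO_ER_* connection error codes
rather than strings):

    #include <errno.h>

    /* Illustrative mapping from errno values returned by recv/send/splice
     * to the short symbolic names shown in logs.
     */
    static const char *errno_to_symbol(int err)
    {
        switch (err) {
        case ECONNREFUSED: return "ECONNREFUSED";
        case ECONNRESET:   return "ECONNRESET";
        case ENOMEM:       return "ENOMEM";
        case EBADF:        return "EBADF";
        case EFAULT:       return "EFAULT";
        case EINVAL:       return "EINVAL";
        case ENOTCONN:     return "ENOTCONN";
        case ENOTSOCK:     return "ENOTSOCK";
        case ENOBUFS:      return "ENOBUFS";
        case EPIPE:        return "EPIPE";
        default:           return "UNKNOWN";
        }
    }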
The only thing is that it's quite hard to forcefully (and reliably)
trigger these errors in automated tests as the timing is critical.
Using iptables to manually reset established connections in the
middle of large transfers at least makes it possible to see some ECONNRESET
and/or EPIPE, but the other ones are harder to trigger.
"debug dev close <fd>" currently closes that FD using fd_delete() after
checking that it's known from the fdtab. Sometimes we also want to just
perform a pure close() of FDs not in the fdtab (pollers, etc) in order
to provoke certain error cases. The optional "hard" argument to the
command will make it use a plain close() instead of fd_delete() and skip
the fd owner check. The main visible effect when closing a traffic socket
with it is that instead of dying from a double fd_delete() upon seeing that
fd.owner is already 0, it will die during the next fd_insert() upon seeing
that fd.owner was not 0.
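For example, assuming FD 3 belongs to a poller, one could force a plain
close() of it from the CLI with:

    debug dev close 3 hard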
It was the only one prefixed with "CO_ERR_", making it harder to batch
process and to look up. It was added in 2.5 by commit 61944f7a73 ("MINOR:
ssl: Set connection error code in case of SSL read or write fatal failure")
so it can be backported as far as 2.6 if needed to help integrate other
patches.
First there is a typo in the directive name, then it is not documented and,
finally, it is not used at all. The directive is only removed from the
keyword list. The parsing function is not updated.
This patch should fix the issue #2601.
This commit introduces the tune.renice.startup and tune.renice.runtime
global keywords, which allow changing the process priority with setpriority().
tune.renice.startup is parsed and applied in the worker or the standalone
process during configuration parsing. If this keyword is used alone, the
nice value is restored to the previous one after configuration parsing.
tune.renice.runtime is applied after configuration parsing, so in the
worker or a standalone process. Combined with tune.renice.startup it
allows to have a different nice value during configuration parsing and
during runtime.
The feature was discussed in github issue #1919.
Example:
global
tune.renice.startup 15
tune.renice.runtime 0
When the http-buffer-request option is set on a proxy, the processing is
paused to wait for the full request payload or a full buffer. So it is an
entity that blocks the processing, just like a rule or a filter that yields.
So now, it is reported as a waiting entity if an error or a timeout occurs.
To do so, a stream entity type is added for this option. There is no
pointer. And the "waiting_entity" sample fetch returns the option name.
When a rule or a filter yields because it is waiting for something in order
to continue its processing, this entity is saved in the stream. If an error
or a timeout occurs, info about this entity may be retrieved via the
"waiting_entity" sample fetch, for instance to dump it in the logs. This
info may be useful to find the root cause of some bugs because it is a way
to know that the processing was temporarily stopped. This may explain
timeouts, for instance.
The sample fetch is not documented yet.
It is very similar to the last evaluated rule. When a filter returns an
error that interrupts the processing, it is saved in the stream, in the
last_entity field, with the type 2. The pointer to the filter config is
saved. This pointer never changes during runtime and is part of the proxy's
structure. It is an element of the filter_configs list in the proxy
structure.
The "last_entity" sample fetch was updated accordingly. The filter
identifier is returned, if defined. Otherwise, the saved pointer is returned.
The last evaluated rule is now saved in a generic structure, named
last_entity, with a type to identify it. The idea is to be able to store
other kinds of entities that may interrupt a specific processing.
The type of the last evaluated rule is set to 1. It will be replaced later
by an enum to be more explicit. In addition, the pointer to the rule itself
is saved instead of its location.
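Schematically, the stored information can be pictured like this (illustrative
sketch; the real field and type names may differ):

    /* Illustrative sketch of a generic "last evaluated entity" record:
     * a small type discriminator plus an opaque pointer to the entity.
     */
    struct last_entity {
        int type;           /* 1 = rule for now, to be turned into an enum */
        const void *target; /* pointer to the rule itself, not its location */
    };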
The sample fetch "last_entity" was added to retrieve the information about
it. In this case, it is the rule localtion, the config file containing the
rule followed by the line where the rule is defined, separated by a
colon. This sample fetch is not documented yet.
When an action yields while it is not allowed, an internal error is
reported. This interrupts the processing. So info about the last evaluated
rule must be filled.
This patch may be backported if needed. If so, the commit ("MINOR: stream:
Save last evaluated rule on invalid yield") must be backported first.
This was already performed for tcp actions at content level, but not for
HTTP actions. It is always a bug, so it must be reported accordingly.
This patch may be backported to all stable versions.
There were several flaws in the way the different timeouts were applied to
H1 connections. First, the H1C task handling timeouts was not created if no
client/server timeout was specified. But there are other timeouts to
consider: the client-fin/server-fin timeouts, of course, but for frontend
connections, the http-keep-alive and http-request timeouts may also be
used. And finally, on soft-stop, the close-spread-time value must be
considered too.
So in the end, it is probably easier to always create a task to manage the
H1C timeouts, especially since the client/server timeouts are most often set.
Then, the expiration date of the H1C's task must only be updated if the
considered timeout is set. So tick_add_ifset() must be used instead of
tick_add(). Otherwise, if a timeout is undefined, the task may expire
immediately while it should in fact never expire.
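The difference can be sketched as follows (simplified re-implementation for
illustration only; in haproxy, TICK_ETERNITY is the "no timeout" value):

    /* Simplified illustration: tick_add() always computes an expiration
     * date, so an unset timeout (eternity) yields an immediate expiration,
     * while tick_add_ifset() keeps "never expires" when the timeout is
     * unset.
     */
    #define TICK_ETERNITY 0

    static unsigned int tick_add(unsigned int now, unsigned int timeout)
    {
        return now + timeout;
    }

    static unsigned int tick_add_ifset(unsigned int now, unsigned int timeout)
    {
        if (timeout == TICK_ETERNITY)
            return TICK_ETERNITY;
        return tick_add(now, timeout);
    }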
Finally, the idle expiration date must only be considered for idle
connections.
This patch should be backported to all stable versions, at least as far as
2.6. On 2.4, it will have to be slightly adapted for the idle_exp
part. On 2.2 and 2.0, the patch will have to be rewritten because
h1_refresh_timeout() is quite different.
As the master-worker fork now happens before step_init_2(), where pollers
are initialized and polling settings are then dumped to stdout in verbose
and debug modes, it turns out that the master and the worker dump the same
polling settings separately. This creates a long and messy output in these
modes.
Polling settings are the same for the master and the worker process for the
moment. Even if they diverge in the future, we are interested here in the
worker's settings. So, when started in master-worker mode, let's dump them
only in the worker context.
This doesn't need to be backported, as it is related to the latest
master-worker refactoring.
If haproxy was started with -W -dK*, after the master-worker refactoring we
dump registered keywords to stdout twice: in the master and in the worker
process. This information is redundant and the output no longer has the
right format. So, as keyword registration happens very early, before the
fork, let's dump keywords only in the worker context, if haproxy was
launched with -W.
This does not need to be backported, as it is related to the latest
master-worker refactoring.
If haproxy was started with -W -dL, after the master-worker refactoring we
dump libs to stdout twice: in the master and in the worker process. This
information is redundant. So let's show linked libraries only in the worker
context, if haproxy was also started with -W.
This does not need to be backported, as it is related to the latest
master-worker rework.
Don't do the master-worker fork if MODE_CHECK is detected on the command
line along with the master-worker mode. We should exit in MODE_CHECK after
the configuration parsing and validation. So, with the new master-worker
architecture, it's better to align this mode with the standalone one.
This patch does not need to be backported, as it is related to the latest
master-worker rework.
Don't call startup_logs_init() in order to allocate the startup logs ring
again, if the startup_logs pointer is NULL. The startup logs ring is
allocated explicitly in the step_init_1 routine, when the process starts,
and it's freed explicitly for the master process at the end of the
mworker_reexec scope. So, when we no longer have this pointer, let's just
save the log message in the message buffer.
Otherwise, in the case of the master process, we would allocate the startup
logs ring again here and we would lose its address after execvp.
No need to backport this fix as it's related to the latest master-worker
refactoring.
ring_free() calls free() on the ring struct pointer, but startup_logs keeps
this address. So let's reset startup_logs to NULL at the end. startup_logs
is checked in print_message().
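The fix boils down to the following pattern, using the names from this
message:

    ring_free(startup_logs);
    startup_logs = NULL;   /* print_message() checks this pointer */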
No need to backport this fix, as it's related to the latest master-worker
refactoring.
The MODE_STARTING flag should be unset for the master just before freeing
the startup logs ring, as it triggers the copy of process logs to this ring;
see the code of print_message().
Moreover, with this flag set, if the startup logs ring pointer is NULL, any
print_message() triggered just before the execvp in mworker_reexec() will
call startup_logs_init(). So the ring will be allocated again "discreetly",
and after execvp we will lose its address, as startup_logs_init() will be
called again in step_init_1().
No need to backport this fix as it's related to the latest master-worker
refactoring.
Since commit 089c13850f ("MEDIUM: ssl: ssl-load-extra-del-ext work
only with .crt"), the 'set ssl cert' CLI command does not correctly check
whether the transaction you are trying to update is the right one.
The consequence is that you could accidentally commit a transaction on
the wrong certificate.
The fix introduces the check again in case you are not using
ssl-load-extra-del-ext.
This must be backported in all stable versions.
When an abstract unix socket is bound by HAProxy (using the "abns@" prefix),
NUL bytes are appended at the end of its path until sun_path is filled
(for a total of 108 characters).
Here we add an alternative that passes only the non-NUL length of that path
to connect/bind calls, such that the effective path of the socket's name
is as humanly written. This may be useful to interconnect with existing
software that implements abstract sockets with this logic instead of the
default haproxy one.
This is achieved by implementing the "abnsz" socket prefix (instead of
"abns"), which stands for "zero-terminated ABNS". "abnsz" prefix may be
used anywhere "abns" is. Internally, haproxy uses the custom socket
family (AF_CUST_ABNS vs AF_CUST_ABNSZ) to differentiate default abns
sockets from zero-terminated ones.
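The practical difference is in the address length passed to bind()/connect().
A standalone sketch (not the haproxy code) of the two flavours:

    #include <stddef.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* "abns" passes the full sun_path (trailing NUL bytes included), while
     * "abnsz" stops at the end of the human-written name.
     */
    static socklen_t abns_len(const struct sockaddr_un *un)
    {
        (void)un;
        return sizeof(struct sockaddr_un);  /* whole 108-byte path */
    }

    static socklen_t abnsz_len(const struct sockaddr_un *un)
    {
        /* sun_path[0] is '\0', the name starts at sun_path[1] */
        return offsetof(struct sockaddr_un, sun_path)
               + 1 + strlen(un->sun_path + 1);
    }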
Documentation was updated and regtest was added.
Fixes GH issues #977 and #2479
Co-authored-by: Aurelien DARRAGON <adarragon@haproxy.com>
Thanks to the previous commit, we may now use dedicated addrcmp functions
for each UNIX address family. This allows simplifying the
sock_unix_addrcmp() function and avoiding useless checks that try to guess
the socket type.
In this patch we implement the sock_abns_addrcmp() and sock_abnsz_addrcmp()
functions, which are respectively used for the ABNS and ABNSZ custom
families. sock_unix_addrcmp() now only holds the regular UNIX socket
comparison logic.
As in the previous commit, let's push the logic a bit further in order to
properly restore the effective UNIX socket type when leveraging the
get_src() and get_dst() sock functions, since they rely on getpeername()
and getsockname() under the hood, both of which will actually lose the
effective family and return AF_UNIX for all our custom UNIX sockets.
To do this, add the sock_restore_unix_family() helper function, built from
the logic implemented in the previous commit, and call this function from
get_src() and get_dst() in the case of a UNIX socket, prior to returning.
When getting sockets from the older process in sock_get_old_sockets(), we
leverage getsockname() to fill the sockaddr struct from a known fd.
However, the kernel doesn't know about our custom UNIX families such
as CUST_ABNS and CUST_ABNSZ, which are both based on the AF_UNIX real
family. Since the haproxy socket API relies on the effective family (and
not the real family) to recognize the socket type instead of having to
guess it by analyzing the path content, let's restore it right after
getsockname(), since we have all the info needed to deduce the right family.
If the path starts with a NUL byte, we know that it is an abstract socket.
Then we simply check the <addrlen> value from getsockname() to know if the
address makes use of the whole path space (normal ABNS) or a partial path
space (zero ABNS, aka ABNSZ) terminated by 0.
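Schematically, the deduction looks like this (self-contained sketch; the
AF_CUST_* values below are placeholders for haproxy's internal constants):

    #include <sys/socket.h>
    #include <sys/un.h>

    /* Placeholder values standing in for haproxy's custom families. */
    enum { AF_CUST_ABNS = 1000, AF_CUST_ABNSZ = 1001 };

    /* Deduce the effective family from what getsockname() returned for
     * an AF_UNIX socket, <addrlen> being the returned address length.
     */
    static int deduce_unix_family(const struct sockaddr_un *un, socklen_t addrlen)
    {
        if (un->sun_path[0] != '\0')
            return AF_UNIX;               /* regular, path-based socket */
        if (addrlen >= sizeof(struct sockaddr_un))
            return AF_CUST_ABNS;          /* whole path space used */
        return AF_CUST_ABNSZ;             /* partial, zero-terminated path */
    }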
For now it's the same as abns. We'll need to modify sock_unix_addrcmp(),
and a few other ones, to support the effective path length when dealing
with the \0. Let's check with Tristan's patch for this (upcoming patch).
Co-authored-by: Aurelien DARRAGON <adarragon@haproxy.com>
Following the previous commit, in str2sa_range(), make use of the address'
family, which was just set, to check whether the socket is ABNS or not,
instead of relying on an extra boolean to save this info.
Now that we can easily distinguish a regular UNIX socket from ABNS sockets
by simply looking at the address family, stop looking at the first byte
of addr->sun_path to guess whether the socket is an ABNS one or not. Looking
at the family is straightforward and will allow differentiating between the
upcoming ABNSZ and ABNS (where looking at the first byte of the path won't
help anymore).
This is a prerequisite to adding the abnsz socket address family:
in this patch we make use of the protocol API rework started by 732913f
("MINOR: protocol: properly assign the sock_domain and sock_family") in
order to implement a dedicated address family for ABNS sockets (based on
the UNIX parent family).
Thanks to this, it will become trivial to implement a new ABNSZ (for abns
zero) family, which is essentially the same as ABNS but with a slight
difference when it comes to path handling (ABNS uses the whole sun_path
length, while ABNSZ's path is zero-terminated and evaluation stops at
the 0).
It was verified that this patch doesn't break reg-tests and behaves
properly (tests performed on the CLI with show sess and show fd).
Anywhere relevant, AF_CUST_ABNS is handled alongside AF_UNIX. If no
distinction needs to be made, real_family() is used to fetch the proper
real family type to handle it properly.
Both stream and dgram were converted, so no functional change should be
expected for this "internal" rework, except that proto will be displayed
as "abns_{stream,dgram}" instead of "unix_{stream,dgram}".
Before ("show sess" output):
0x64c35528aab0: proto=unix_stream src=unix:1 fe=GLOBAL be=<NONE> srv=<none> ts=00 epoch=0 age=0s calls=1 rate=0 cpu=0 lat=0 rq[f=848000h,i=0,an=00h,ax=] rp[f=80008000h,i=0,an=00h,ax=] scf=[8,0h,fd=21,rex=10s,wex=] scb=[8,1h,fd=-1,rex=,wex=] exp=10s rc=0 c_exp=
After:
0x619da7ad74c0: proto=abns_stream src=unix:1 fe=GLOBAL be=<NONE> srv=<none> ts=00 epoch=0 age=0s calls=1 rate=0 cpu=0 lat=0 rq[f=848000h,i=0,an=00h,ax=] rp[f=80008000h,i=0,an=00h,ax=] scf=[8,0h,fd=22,rex=10s,wex=] scb=[8,1h,fd=-1,rex=,wex=] exp=10s rc=0 c_exp=
Co-authored-by: Aurelien DARRAGON <adarragon@haproxy.com>
When using traces with -dt, the trace_parse_cmd() function does a
strtok which writes \0 into the argv string.
When using the mworker mode and reloading, argv was modified and the
traces would not work anymore because the first ':' was replaced by a '\0'.
This patch fixes the issue by allocating a temporary string so we don't
modify the source string directly. It also replaces strtok with its
reentrant version, strtok_r.
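A minimal standalone sketch of the approach (not the actual
trace_parse_cmd() code):

    #include <stdlib.h>
    #include <string.h>

    /* Parse a copy of the argument so that the original argv string (reused
     * across reloads in master-worker mode) is left untouched, and use the
     * reentrant strtok_r() instead of strtok().
     */
    static int parse_trace_arg(const char *arg)
    {
        char *copy = strdup(arg);
        char *save = NULL;
        char *tok;

        if (!copy)
            return -1;

        for (tok = strtok_r(copy, ":", &save); tok;
             tok = strtok_r(NULL, ":", &save)) {
            /* handle each token of the trace specification here */
        }
        free(copy);
        return 0;
    }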
Must be backported as far as 2.9.
That one is huge and unneeded since we now have the pointer to the
whole thread_info[] array, which does contain the freshest version
of this info and much more. Let's just get rid of it entirely.
Thanks to the previous commit, we can now use the strnlen2() function as a
portable equivalent of strnlen(), instead of re-implementing the logic in
the _lf_text_len() function.
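For reference, a portable strnlen() equivalent is essentially the following
(sketch; the actual strnlen2() implementation may differ):

    #include <stddef.h>

    /* Length of <str>, but never read past <maxlen> bytes even if no
     * terminating NUL was found.
     */
    static size_t my_strnlen(const char *str, size_t maxlen)
    {
        size_t len = 0;

        while (len < maxlen && str[len])
            len++;
        return len;
    }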
During re-execution the master always keeps the "reload" sockpair FDs and
the shared sockpair ipc_fd[0] open; the latter is used to transfer listener
sockets from the previously forked worker to the new one. So, these master
FDs are inherited in the newly forked worker and must be closed in its
context.
The inherited "reload" sockpair FDs and the shared sockpair FD (ipc_fd[0])
are closed separately, because the master doesn't recreate the "reload"
sockpair each time after its re-exec: it always keeps the same FDs for this
"reload" sockpair. So, in the worker context, it can be closed immediately
after the fork.
In contrast, the shared sockpair is created each time after a reload, when
the new worker is forked. So, if N previous workers still exist at this
moment, the new worker will inherit N ipc_fd[0] FDs from the master. So,
it's safer to close all these FDs after the get_listeners_fd() and
bind_listeners() calls. Otherwise, FDs closed too early in the worker
context would be immediately reused when binding the listeners, and we
could potentially have some bugs.