{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":128791889,"defaultBranch":"master","name":"haproxy","ownerLogin":"haproxy","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2018-04-09T15:17:42.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/38220289?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1726691591.0","currentOid":""},"activityList":{"items":[{"before":"1d403caf8aa59c9070f30ea16017261cab679fe2","after":"7caf073faa6962de15a18dadcaf200df95ce7889","ref":"refs/heads/master","pushedAt":"2024-09-29T07:59:32.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"haproxy-mirror","name":null,"path":"/haproxy-mirror","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/38239150?s=80&v=4"},"commit":{"message":"MINOR: tools: do not attempt to use backtrace() on linux without glibc\n\nThe function is provided by glibc. Nothing prevents us from using our\nown outside of glibc there (tested on aarch64 with musl). We still do\nnot enable it by default as we don't yet know if all archs work well,\nbut it's sufficient to pass USE_BACKTRACE=1 when building with musl to\nverify it's OK.","shortMessageHtmlLink":"MINOR: tools: do not attempt to use backtrace() on linux without glibc"}},{"before":"b8e3b0a18d59b4f52b4ecb7ae61cef0b8b2402a0","after":"1d403caf8aa59c9070f30ea16017261cab679fe2","ref":"refs/heads/master","pushedAt":"2024-09-27T17:08:07.000Z","pushType":"push","commitsCount":3,"pusher":{"login":"haproxy-mirror","name":null,"path":"/haproxy-mirror","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/38239150?s=80&v=4"},"commit":{"message":"MINOR: server: make srv_shutdown_sessions() call pendconn_redistribute()\n\nWhen shutting down server sessions, the queue was not considered, which\nis a problem if some element reached the queue at the moment the server\nwas going down, because there will be no more requests to kick them out\nof it. Let's always make sure we scan the queue to kick these streams\nout of it and that they can possibly find a more suitable server. This\nmay make a difference in the time it takes to shut down a server on the\nCLI when lots of servers are in the queue.\n\nIt might be interesting to backport this to 3.0 but probably not much\nfurther.","shortMessageHtmlLink":"MINOR: server: make srv_shutdown_sessions() call pendconn_redistribute()"}},{"before":"0c94b2efeccfb421c2480dd904225db1643bf290","after":"b8e3b0a18d59b4f52b4ecb7ae61cef0b8b2402a0","ref":"refs/heads/master","pushedAt":"2024-09-27T10:25:26.000Z","pushType":"push","commitsCount":3,"pusher":{"login":"haproxy-mirror","name":null,"path":"/haproxy-mirror","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/38239150?s=80&v=4"},"commit":{"message":"BUG/MEDIUM: stream: make stream_shutdown() async-safe\n\nThe solution found in commit b500e84e24 (\"BUG/MINOR: server: shut down\nstreams under thread isolation\") to deal with inter-thread stream\nshutdown doesn't work fine because there exists code paths involving\na server lock which can then deadlock on thread_isolate(). A better\nsolution then consists in deferring the shutdown to the stream itself\nand just wake it up for that.\n\nThe only thing is that TASK_WOKEN_OTHER is a bit too generic and we\nneed to pass at least 2 types of events (SF_ERR_DOWN and SF_ERR_KILLED),\nso we're now leveraging the new TASK_F_UEVT1 and _UEVT2 flags on the\ntask's state to convey these info. 
2024-09-26 15:03 UTC - pushed 12 commits to master (head 0c94b2efeccf)

  REGTESTS: add a test for proxy "log-steps"

  Now that the proxy "log-steps" keyword has been implemented and is usable
  since ("MEDIUM: log: consider log-steps proxy setting for existing log
  origins"), let's add some tests for it in reg-tests/log/log_profile.vtc.

2024-09-25 15:17 UTC - pushed 1 commit to master (head a889413f5ec5)

  BUG/MEDIUM: cli: Deadlock when setting frontend maxconn

  The proxy lock state isn't passed down to relax_listener through
  dequeue_proxy_listeners, which causes a deadlock in relax_listener when it
  tries to get that lock.

  Backporting: older versions didn't have relax_listener and directly called
  resume_listener in dequeue_proxy_listeners. lpx should just be passed
  directly to resume_listener then.

  The bug was introduced in commit 001328873c352e5e4b1df0dcc8facaf2fc1408aa.

  [cf: This patch should fix the issue #2726. It must be backported as far as
  2.4]

2024-09-25 07:29 UTC - pushed 2 commits to master (head 96edacc5465a)

  DEV: flags/applet: decode appctx flags

  Decode APPCTX flags via the appctx_show_flags() function.

2024-09-23 18:16 UTC - pushed 2 commits to master (head d622f9d5b6c3)

  MEDIUM: mailers: warn about deprecated legacy mailers

  As mentioned in the 2.8 announce on the mailing list [1] and on the wiki
  [2], use of legacy mailers is now deprecated and will not be supported
  anymore starting with version 3.3. Use of a Lua script (AKA Lua mailers) is
  now encouraged (and fully supported since 2.8) for this purpose, as it
  offers more flexibility (e.g. alerts can be customized) and is more
  future-proof.

  Configurations relying on legacy mailers will now raise a warning.

  Users willing to keep their existing mailers config in a working state
  should simply add the following line to their global section:

      # mailers.lua file as provided in the git repository
      # adjust path as needed
      lua-load examples/lua/mailers.lua

  [1]: https://www.mail-archive.com/haproxy@formilux.org/msg43600.html
  [2]: https://github.com/haproxy/wiki/wiki/Breaking-changes
2024-09-21 18:10 UTC - pushed 1 commit to master (head fdf38ed7fc3c)

  BUG/MINOR: proxy: also make the cli and resolvers use the global name

  As detected by ASAN on the CI, two places still using strdup() on the proxy
  names were left by commit b325453c3 ("MINOR: proxy: use the global file
  names for conf->file").

  No backport is needed.

2024-09-21 17:49 UTC - pushed 1 commit to master (head b500e84e24fd)

  BUG/MINOR: server: shut down streams under thread isolation

  Since the beginning of thread support, the shutdown of streams attached to a
  server was run under the server's lock, but that's not sufficient. It indeed
  turns out that shutting down streams (either from the CLI using "shutdown
  sessions server XXX" or due to "on-error shutdown-sessions") iterates over
  all the streams to shut them down, but stream_shutdown() has no way to
  protect its actions against concurrent actions from the stream itself on
  another thread, and streams offer no such provisions anyway.

  The impact is some rare but possible crashes when shutting down streams from
  the CLI in competition with high server traffic. The probability is low
  enough to mark it minor, though it was observed in the field.

  At least since 2.4 the streams are arranged in per-thread lists, so it
  likely would be possible using the event subsystem to delegate these events
  to dedicated per-thread tasks which would address the problem. But server
  streams don't get killed often enough to justify such extra complexity, so
  better just run the loop under thread isolation.

  It also shows that the internal API could probably be improved to support a
  lighter thread exclusion instead of full isolation: various places want to
  only exclude one thread and here it could work. But again there's no point
  doing this for now.

  This patch should be backported to all stable branches. It's important to
  carefully check that this srv_shutdown_streams() function is never called
  itself under isolation in older versions (though at first glance it looks
  OK).
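The isolation-based approach retained in this commit boils down to: park every other thread at a safe point, walk the server's streams, shut each one down, then release. A minimal sketch under that assumption follows; the list, the stub thread_isolate()/thread_release() and the stream_shutdown() body are simplified placeholders, not the real HAProxy implementations:

    #include <stdio.h>

    struct stream { int id; struct stream *next; };

    /* placeholders: the real primitives park/resume all other threads */
    static void thread_isolate(void) { /* wait for every other thread to park */ }
    static void thread_release(void) { /* let the other threads resume */ }

    static void stream_shutdown(struct stream *s) { printf("shut stream %d\n", s->id); }

    /* With all other threads parked, no stream can run its own code while we
     * iterate, so the per-stream shutdown needs no extra locking. */
    static void srv_shutdown_streams(struct stream *head)
    {
        thread_isolate();
        for (struct stream *s = head; s; s = s->next)
            stream_shutdown(s);
        thread_release();
    }

    int main(void)
    {
        struct stream s2 = { 2, NULL }, s1 = { 1, &s2 };
        srv_shutdown_streams(&s1);
        return 0;
    }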
2024-09-20 15:36 UTC - pushed 9 commits to master (head e77c73316a83)

  MEDIUM: cfgparse: warn about deprecated use of duplicate server names

  As discussed below, there are too many problems and limitations caused by
  still supporting duplicate server names. That's already particularly
  complicated and dissuasive to use since it requires these servers to have
  explicit IDs to be accepted. Let's now warn on any duplicate, even with
  explicit IDs, and remind users that this will become forbidden in 3.3.

  Link: https://www.mail-archive.com/haproxy@formilux.org/msg45185.html

2024-09-18 20:33 UTC - pushed 1 commit to master (head 30a0e93fe683)

  [RELEASE] Released version 3.1-dev8

  Released version 3.1-dev8 with the following main changes:
    - DOC: configuration: place the HAPROXY_HTTP_LOG_FMT example on the correct line
    - MINOR: mux-h1: Set EOI on SE during demux when both side are in DONE state
    - BUG/MEDIUM: mux-h1/mux-h2: Reject upgrades with payload on H2 side only
    - REGTESTS: h1/h2: Update script testing H1/H2 protocol upgrades
    - BUG/MEDIUM: clock: detect and cover jumps during execution
    - BUG/MINOR: pattern: prevent const sample from being tampered in pat_match_beg()
    - BUG/MEDIUM: pattern: prevent uninitialized reads in pat_match_{str,beg}
    - BUG/MEDIUM: pattern: prevent UAF on reused pattern expr
    - MEDIUM: ssl/cli: "dump ssl cert" allow to dump a certificate in PEM format
    - BUG/MAJOR: mux-h1: Wake SC to perform 0-copy forwarding in CLOSING state
    - BUG/MINOR: h1-htx: Don't flag response as bodyless when a tunnel is established
    - REGTESTS: fix random failures with wrong_ip_port_logging.vtc under load
    - BUG/MINOR: pattern: do not leave a leading comma on "set" error messages
    - REGTESTS: shorten a bit the delay for the h1/h2 upgrade test
    - MINOR: server: allow init-state for dynamic servers
    - DOC: server: document what to check for when adding new server keywords
    - MEDIUM: h1: Accept invalid T-E values with accept-invalid-http-response option
    - BUG/MINOR: polling: fix time reporting when using busy polling
    - BUG/MINOR: clock: make time jump corrections a bit more accurate
    - BUG/MINOR: clock: validate that now_offset still applies to the current date
    - BUG/MEDIUM: queue: implement a flag to check for the dequeuing
    - OPTIM: sample: don't check casts for samples of same type
    - OPTIM: vars: remove the unneeded lock in vars_prune_*
    - OPTIM: vars: inline vars_prune() to avoid many calls
    - MINOR: vars: remove the emptiness tests in callers before pruning
    - IMPORT: import cebtree (compact elastic binary trees)
    - OPTIM: vars: use a cebtree instead of a list for variable names
    - OPTIM: vars: use multiple name heads in the vars struct
    - BUG/MINOR: peers: local entries updates may not be advertised after resync
    - DOC: config: Explicitly list relaxing rules for accept-invalid-http-* options
    - MINOR: proxy: Rename accept-invalid-http-* options
    - DOC: configuration: Remove dangerous directives from the proxy matrix
    - BUG/MEDIUM: sc_strm/applet: Wake applet after a successfull synchronous send
    - BUG/MEDIUM: cache/stats: Wait to have the request before sending the response
    - BUG/MEDIUM: promex: Wait to have the request before sending the response
    - MINOR: clock: test all clock_gettime() return values
    - MEDIUM: clock: collect the monotonic time in clock_local_update_date()
    - MEDIUM: clock: opportunistically use CLOCK_MONOTONIC for the internal time
    - MEDIUM: clock: use the monotonic clock for idle time calculation
    - MEDIUM: clock: don't compute before_poll when using monotonic clock
    - BUG/MINOR: fix missing "log-format overrides previous 'option tcplog clf'..." detection
    - BUG/MINOR: fix missing "'option httpslog' overrides previous 'option tcplog clf'..." detection
    - BUG/MINOR: cfgparse-listen: fix option httpslog override warning message
    - BUG/MINOR: cfgparse: detect incorrect overlap of same backend names
    - MEDIUM: cfgparse: warn about proxies having the same names
    - DOC: management: add init-state to add server keywords
    - BUG/MINOR: mux-quic: report glitches to session
    - BUILD: cebtree: silence a bogus gcc warning on impossible code paths
    - MEDIUM: cfgparse: warn about colliding names between defaults and proxies
    - MEDIUM: cfgparse: detect collisions between defaults and log-forward
2024-09-18 16:09 UTC - pushed 2 commits to master (head 1a38684fbc0b)

  MEDIUM: cfgparse: detect collisions between defaults and log-forward

  Sadly, when log-forward sections were introduced, great care was taken to
  avoid collisions with regular proxies, but defaults were missed (they need
  to be explicitly checked for). So now we have to move them to a warning for
  3.1 instead of rejecting them.

2024-09-18 15:43 UTC - pushed 1 commit to master (head 8df44eea6dd1)

  BUILD: cebtree: silence a bogus gcc warning on impossible code paths

  gcc-12 and above report a wrong warning about a negative length being passed
  to memcmp() on an impossible code path when built at -O0. The pattern is the
  same at a few places, basically:

      int foo(int op, const void *a, const void *b, size_t size, size_t arg)
      {
          if (op == 1) // arg is a strict multiple of size
              return memcmp(a, b, arg - size);
          return 0;
      }
      ...
      int bar()
      {
          return foo(0, a, b, sizeof(something), 0);
      }

  It *might* be possible to invent dummy values for the "len" argument above
  in the real code, but that significantly complicates it and, as usual, can
  easily result in introducing undesired bugs.

  Here we take a different approach consisting in shutting off the
  -Wstringop-overread warning on gcc>=12 at -O0, since that's the only
  condition that triggers it. The issue was reported to and confirmed by the
  gcc team here: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114622

  No backport needed, but this should be upstreamed into cebtree after
  checking that all involved macros are available.
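Suppressing one warning only for the compiler versions and optimization level where it misfires is commonly done with diagnostic pragmas. The following is a sketch of that general technique, not necessarily the exact guard used in the commit; cmp_tail() is a hypothetical example function:

    #include <string.h>

    /* Silence gcc's -Wstringop-overread only where it is known to misfire:
     * gcc >= 12, and only at -O0 (__OPTIMIZE__ is undefined at -O0). */
    #if defined(__GNUC__) && !defined(__clang__) && (__GNUC__ >= 12) && !defined(__OPTIMIZE__)
    # pragma GCC diagnostic push
    # pragma GCC diagnostic ignored "-Wstringop-overread"
    #endif

    /* hypothetical helper: arg is always a strict multiple of size when this
     * path is taken, so arg - size can never go negative here */
    static int cmp_tail(const void *a, const void *b, size_t size, size_t arg)
    {
        return memcmp(a, b, arg - size);
    }

    #if defined(__GNUC__) && !defined(__clang__) && (__GNUC__ >= 12) && !defined(__OPTIMIZE__)
    # pragma GCC diagnostic pop
    #endif

    int main(void)
    {
        char x[] = "abcdabcd";
        return cmp_tail(x, x + 4, 4, 8);  /* compares "abcd" with "abcd" -> 0 */
    }

The push/pop pair keeps the warning enabled everywhere else in the translation unit.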
2024-09-18 14:15 UTC - pushed 1 commit to master (head fcd6d29acf10)

  BUG/MINOR: mux-quic: report glitches to session

  A glitch counter was implemented for QUIC/HTTP3. The counter is stored in
  the QCC MUX connection instance. However, this is never reported at the
  session level, which is necessary if the glitch counter is tracked via a
  stick-table.

  To fix this, use session_add_glitch_ctr() in various QUIC MUX functions
  which may increment the glitch counter.

  This should be backported up to 3.0.

2024-09-17 20:45 UTC - pushed 1 commit to master (head 2c783c25d650)

  DOC: management: add init-state to add server keywords

  Commit ce6a621ae allowed init-state to be used for dynamic servers but I
  forgot to update the management doc.

2024-09-17 17:55 UTC - pushed 2 commits to master (head 303a66573df6)

  MEDIUM: cfgparse: warn about proxies having the same names

  As discussed below, there are too many problems and uncaught bugs in the
  parser when trying to support proxies having similar names but different
  types. There's specific code to detect the presence of stick-tables in a
  pair of such proxies, for example. It's even possible that certain
  combinations of backend+listen that were not previously detected have some
  nasty side effects.

  According to the proposal in the discussion, this is now deprecated in 3.1
  (thus we emit a warning) and will become forbidden in 3.3.

  A backport might be useful, but reporting a diag_warning only, not a
  classical warning, so as not to break setups running in zero-warning mode.

  It was verified with a config involving all 9 combinations of (frontend,
  backend, listen) followed by one of the same three that all collisions are
  now properly blocked and that only back+front are kept and emit a warning.

  Link: https://www.mail-archive.com/haproxy@formilux.org/msg45185.html
2024-09-17 13:40 UTC - pushed 2 commits to master (head 17e52c922b57)

  BUG/MINOR: cfgparse-listen: fix option httpslog override warning message

  The "option httpslog" override warning message used to be reported as
  "option httplog", probably as a result of a copy-paste without adjusting the
  context. Let's fix that to prevent emitting confusing warning messages.

  The issue exists since 98b930d ("MINOR: ssl: Define a default https log
  format"), thus it should be backported up to 2.6.

2024-09-17 12:42 UTC - pushed 1 commit to master (head 607b9adc9bd9)

  BUG/MINOR: fix missing "log-format overrides previous 'option tcplog clf'..." detection

  In commit fd48b28315 ("MINOR: Implements new log format of option tcplog
  clf"), "option tcplog clf" detection was correctly added for "option tcplog"
  and "option httplog", but the "log-format" case was overlooked. Thus, this
  config would report an erroneous warning message:

      defaults
          option tcplog clf
          log-format "ok"

      [WARNING] (727893) : config : parsing [test.conf:3]: 'log-format' overrides previous 'log-format' in 'defaults' section.

  No backport needed unless fd48b28315 is.
2024-09-17 07:16 UTC - branch 20240908-clock-3 deleted (was at dc8535831bcf)

2024-09-17 07:14 UTC - pushed 5 commits to master (head 499e057644d6)

  MEDIUM: clock: don't compute before_poll when using monotonic clock

  There's no point keeping both clocks up to date; if the monotonic clock is
  ticking, let's just refrain from updating the wall clock one before polling
  since we won't use it. We still do it after polling however, as we need a
  wall clock time to communicate with the outside.

  This saves one gettimeofday() call per loop and two timeval comparisons.
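The split described above (monotonic clock before polling, wall clock only afterwards) can be sketched with plain POSIX calls. The poll_once() placeholder and the variable names are illustrative, not HAProxy's internals:

    #include <stdio.h>
    #include <sys/time.h>
    #include <time.h>
    #include <unistd.h>

    /* placeholder for the actual polling step (epoll_wait(), etc.) */
    static void poll_once(void) { usleep(10000); }

    int main(void)
    {
        struct timespec before, after;
        struct timeval wall;

        /* Before polling, the monotonic clock is enough to measure how long
         * we will spend sleeping, so no gettimeofday() is needed here. */
        clock_gettime(CLOCK_MONOTONIC, &before);

        poll_once();

        /* After polling, measure the idle time monotonically, then read the
         * wall clock once, since dates reported outside must be wall-clock
         * based. */
        clock_gettime(CLOCK_MONOTONIC, &after);
        gettimeofday(&wall, NULL);

        long idle_us = (after.tv_sec - before.tv_sec) * 1000000L
                     + (after.tv_nsec - before.tv_nsec) / 1000L;
        printf("slept ~%ld us, wall time %ld.%06ld\n",
               idle_us, (long)wall.tv_sec, (long)wall.tv_usec);
        return 0;
    }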
2024-09-17 07:02 UTC - pushed 6 commits to master (head bb2a2bc5f2f6)

  BUG/MEDIUM: promex: Wait to have the request before sending the response

  It is similar to the previous fix about the stats applet ("BUG/MEDIUM:
  cache/stats: Wait to have the request before sending the response").
  However, for promex, there is no crash and no obvious issue. But it depends
  on the filter. Indeed, the request is used by promex independently of
  whether it was considered as forwarded or not. So if it is modified by the
  filter, the modifications are just ignored.

  Same bug, same fix. We now wait for the request to be forwarded before
  processing it and producing the response.

2024-09-16 12:07 UTC - pushed 1 commit to master (head 1e0920f85542)

  BUG/MINOR: peers: local entries updates may not be advertised after resync

  Since commit 864ac3117 ("OPTIM: stick-tables: check the stksess without
  taking the read lock"), when entries for a local table are learned from
  another peer upon resync, and this is the only peer haproxy speaks to, local
  updates on such entries are not advertised to the peer anymore, until they
  eventually expire and can be recreated upon local updates.

  This is due to the fact that ts->seen is always set to 0 when creating a new
  entry, and also when touch_remote is performed on the entry.

  Indeed, while 864ac3117 attempts to avoid useless updates, it didn't
  consider entries learned from a remote peer. Such entries are exclusively
  learned in peer_treat_updatemsg(): once the entry is created (or updated)
  with new data, touch_remote is used to commit the change. However, unlike
  touch_local, entries committed using touch_remote will not be advertised to
  the peer from which the entry was just learned (otherwise we would enter a
  looping situation). Due to the above patch, once an entry is learned from
  the (unique) remote peer, 'seen' will be stuck to 0 so it will never be
  advertised for its whole lifetime.

  Instead, when entries are learned from a peer, we should consider that the
  peer that taught us the entry has seen it.

  To do this, let's set seen=1 in peer_treat_updatemsg() after calling
  touch_remote(). This way, if we happen to perform updates on this entry, it
  will be properly advertised to relevant peers. This patch should not affect
  the performance gain documented in 864ac3117 given that the test scenario
  didn't involve entries learned by remote peers, but solely locally created
  entries advertised to remote peers upon updates.

  This should be backported in 3.0 with 864ac3117.

2024-09-16 08:26 UTC - branch 20240915-vars-perf-4 deleted (was at 5d350d1e5032)

2024-09-16 07:22 UTC - pushed 6 commits to master (head 5d350d1e5032)

  OPTIM: vars: use multiple name heads in the vars struct

  Given that the original list-based version was using a list head as the root
  of the variables, while the tree is using a single pointer, it made sense to
  reuse that space to place multiple roots, indexed on the lower bits of the
  name hash. Two roots slightly increase the performance level, but the best
  gain is obtained with 4 roots. The performance is now always above that of
  the list, even with small counts, and with 100 vars, it's 21% higher than
  before, or 67% higher than with the list.

  We keep the same lock (it could have made sense to use one lock per head),
  because most of the variables in large configs are attached to a stream or a
  session, hence are not shared between threads. Thus there's no point in
  sharding the pointer.
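The multiple-roots idea amounts to picking one of a few heads from the low bits of the name hash, so lookups and insertions only walk the subset of variables that hashed to the same head. A generic sketch under that assumption follows; the FNV-1a hash and the linked-list node stand in for HAProxy's own hash and cebtree nodes:

    #include <stdint.h>
    #include <stdio.h>

    #define VAR_ROOTS 4   /* 4 roots gave the best gain in the measurements above */

    struct var_node {
        const char *name;
        struct var_node *next;   /* a real implementation would hang a tree here */
    };

    struct vars {
        struct var_node *root[VAR_ROOTS];   /* several heads instead of one */
    };

    /* illustrative name hash (FNV-1a); HAProxy uses its own hash function */
    static uint64_t name_hash(const char *s)
    {
        uint64_t h = 0xcbf29ce484222325ULL;
        while (*s)
            h = (h ^ (unsigned char)*s++) * 0x100000001b3ULL;
        return h;
    }

    /* the lower bits of the hash select which root to search or insert into */
    static struct var_node **var_head(struct vars *v, const char *name)
    {
        return &v->root[name_hash(name) & (VAR_ROOTS - 1)];
    }

    int main(void)
    {
        struct vars v = { { 0 } };
        struct var_node a = { "txn.my_var", NULL };

        struct var_node **head = var_head(&v, a.name);
        a.next = *head;          /* push onto the selected head */
        *head = &a;

        printf("stored %s under root %u\n", a.name,
               (unsigned)(name_hash(a.name) & (VAR_ROOTS - 1)));
        return 0;
    }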
2024-09-15 21:54 UTC - branch 20240915-vars-perf-4 force-pushed to 5d350d1e5032
  (head: "OPTIM: vars: use multiple name heads in the vars struct", same
  commit message as the master push above)

2024-09-15 21:53 UTC - branch 20240915-vars-perf-4 created at a984bd1575e1

  OPTIM: vars: inline vars_prune() to avoid many calls

  Many configs don't have variables and call it for no reason, and even
  configs with variables don't necessarily have some in all scopes. Inlining
  the function improves the performance by 8% on a variable-intensive config.

2024-09-15 21:53 UTC - branch 20240915-vars-perf-1 deleted (was at a31ef7cc7229)

2024-09-15 19:46 UTC - branch 20240915-vars-perf-1 created at a31ef7cc7229
  (head: "OPTIM: vars: use multiple name heads in the vars struct", an earlier
  revision of the commit later pushed to master, quoting a 20% gain instead of
  21%)
2024-09-15 19:45 UTC - pushed 1 commit to master (head 51ade2f1dbf1)

  OPTIM: sample: don't check casts for samples of same type

  Originally when converters were created, they were mostly for casting types.
  Nowadays we have many arithmetic converters to perform operations on
  integers, and a number of converters operating on strings. Both of these
  categories most often do not need any cast since the input and output types
  are the same, which is visible as the cast function is c_none. However,
  profiling shows that when heavily using arithmetic converters, it's possible
  to spend up to ~7% of the time in sample_process_cnv(), a good part of which
  is only in accessing the sample_casts[] array. Simply avoiding this lookup
  when input and output types are equal saves about 2% CPU on such setups
  doing intensive use of converters.

2024-09-13 06:37 UTC - pushed 1 commit to master (head b11495652e72)

  BUG/MEDIUM: queue: implement a flag to check for the dequeuing

  As unveiled in GH issue #2711, commit 5541d4995d ("BUG/MEDIUM: queue: deal
  with a rare TOCTOU in assign_server_and_queue()") does have some side
  effects in that it can occasionally cause an endless loop.

  As Christopher analysed it, the problem is that process_srv_queue(), which
  uses a trylock in order to leave only one thread in charge of the dequeuing
  process, can lose the lock race against pendconn_add(). If this happens on
  the last served request, then there's no more thread to deal with the
  dequeuing, and assign_server_and_queue() will loop forever on a condition
  that was initially expected to be extremely rare (and still is, except that
  now it can become sticky). Previously what was happening is that such queued
  requests would just time out and, since that was very rare, nobody would
  notice.

  The root of the problem really is that trylock. It was added so that only
  one thread dequeues at a time, but it doesn't offer only that guarantee
  since it also prevents a thread from dequeuing if another one is in the
  process of queuing. We need a different criterion.

  What we're doing now is to set a flag "dequeuing" in the server, which
  indicates that one thread is currently in the process of dequeuing requests.
  This one is atomically tested, and only if no thread is in this process does
  the thread grab the queue's lock and dequeue. This way it will be serialized
  with pendconn_add() and no request addition will be missed.

  It is not certain whether the original race covered by the fix above can
  still happen with this change, so better keep that fix for now.

  Thanks to @Yenya (Jan Kasprzak) for the precise and complete report allowing
  us to spot the problem.

  This patch should be backported wherever the patch above was backported.
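The "dequeuing" flag is essentially a single-owner claim on the dequeue work: a thread atomically claims it, dequeues, then releases it, so a concurrent pendconn_add() no longer prevents dequeuing the way a lost trylock did. A minimal C11 sketch of that claim pattern (the struct layout and loop body are simplified placeholders, not the actual HAProxy queue code):

    #include <stdatomic.h>
    #include <stdio.h>

    struct server {
        _Atomic int dequeuing;   /* 1 while one thread dequeues for this server */
        int queued;              /* pending requests (lock-protected in real code) */
    };

    /* Only one thread at a time performs the dequeue work, but a thread that
     * is merely adding to the queue does not block the claim, so no addition
     * can be missed the way it could with a lost trylock. */
    static void process_srv_queue(struct server *srv)
    {
        int expected = 0;
        if (!atomic_compare_exchange_strong(&srv->dequeuing, &expected, 1))
            return;              /* another thread already owns the dequeue work */

        /* ... here the real code grabs the queue lock and dequeues as many
         * pending requests as the server may accept ... */
        while (srv->queued > 0) {
            srv->queued--;
            puts("dequeued one pending request");
        }

        atomic_store(&srv->dequeuing, 0);   /* release the claim */
    }

    int main(void)
    {
        struct server srv = { .dequeuing = 0, .queued = 2 };
        process_srv_queue(&srv);
        return 0;
    }

A failed claim only means another thread is already doing the work, so the caller can simply return and rely on that thread to drain the queue.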