
Conversation

@karlsson

Add option.merge_type_list
Use Count(n) instead of Once in ActiveState
Add handling of tcp_error, ssl_error, tcp_passive and ssl_passive

@rawhat
Owner

rawhat commented Jan 21, 2026

In testing this locally, the active mode changes are pretty significantly worse for performance. A simple "hello, world" server saw the performance drop from ~300k RPS to ~250k RPS. This was repeatable.

Restoring the {active, once} behavior actually improved performance, up to ~310k RPS. So I think the other changes are definitely worth including here.
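
For readers less familiar with the two modes, here is a minimal Erlang sketch of the receive loops being compared (not glisten's actual code; handle/1 is a hypothetical placeholder). With {active, once}, the socket delivers a single message and then goes passive, so it must be re-armed after every message:

loop_once(Socket) ->
    receive
        {tcp, Socket, Data} ->
            handle(Data),
            %% re-arm the socket for exactly one more message
            ok = inet:setopts(Socket, [{active, once}]),
            loop_once(Socket);
        {tcp_closed, Socket} ->
            ok
    end.

With {active, N}, the socket stays active for N messages and only needs re-arming when the runtime reports {tcp_passive, Socket}:

loop_count(Socket, N) ->
    receive
        {tcp, Socket, Data} ->
            handle(Data),
            loop_count(Socket, N);
        {tcp_passive, Socket} ->
            %% the counter ran out; grant another N messages
            ok = inet:setopts(Socket, [{active, N}]),
            loop_count(Socket, N);
        {tcp_closed, Socket} ->
            ok
    end.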

Owner

@rawhat rawhat left a comment


just some random thoughts

@karlsson
Author

karlsson commented Jan 23, 2026

I will revert to {active, once}, but I cannot see any reason for setting it when receiving User(msg). My understanding is that the socket is only passivated when receiving on the socket itself, but maybe I am missing something?
I tried setting it only upon receiving Internal(Receive(msg)), and all tests pass, including for ewe.

Update: OK, I get it: it is for the case where the user provides a user selector with handlers that override the "base" selector.

@karlsson
Copy link
Author

In the handler:

case msg {
  ....
  Internal(Closed) | Internal(Close) ->

where does Internal(Close) originate?
I can see Closed coming from ssl_closed and tcp_closed.

@vshakitskiy
Contributor

In the handler:

case msg {
  ....
  Internal(Closed) | Internal(Close) ->

where does Internal(Close) originate? I can see Closed coming from ssl_closed and tcp_closed.

As far as I know, mist uses Internal(Close) to handle the timer timeout.
https://github.com/rawhat/mist/blob/master/src/mist/internal/http/handler.gleam#L109
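
In other words, Close is mist's own idle-timeout signal rather than something coming off the wire. A rough Erlang analogue of the pattern (a sketch only; mist's actual implementation is the Gleam code linked above, and await/1 is a hypothetical helper):

start_idle_timer(TimeoutMs) ->
    %% posts a close message to this handler process after TimeoutMs
    erlang:send_after(TimeoutMs, self(), close).

await(Socket) ->
    receive
        {tcp, Socket, Data} -> {data, Data};
        {tcp_closed, Socket} -> closed;
        %% idle timer fired: treat it the same as the peer closing
        close -> closed
    end.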

@karlsson
Author

Oh, the Close message is part of the "external" API. Thanks.
Maybe it could be made more explicit in a public function or as part of the configuration.

@karlsson karlsson marked this pull request as ready for review January 24, 2026 12:21
@karlsson karlsson marked this pull request as draft January 26, 2026 12:58
@karlsson
Author

I have made it possible to set the ActiveState for the TCP server to Active and Count(n) as well (besides the old default, Once). But I am having trouble making all the mist and ewe test cases pass with the new configurations. For mist, a "100-continue" test case fails since the handler loop returns NormalStop on the first chunk, and I get NormalStop on the ewe fragmented tests. All received messages look the same for both Once and Active, though, so I am rather confused.
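
For reference, the three variants presumably map onto the underlying inet options roughly like this (an assumed mapping sketched in Erlang, not the PR's actual code):

set_active(Socket, once)       -> inet:setopts(Socket, [{active, once}]);
set_active(Socket, active)     -> inet:setopts(Socket, [{active, true}]);
set_active(Socket, {count, N}) -> inet:setopts(Socket, [{active, N}]).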

@vshakitskiy
Contributor

I will try to play around with it tomorrow and let you know if I figure something out on the ewe side :)

@vshakitskiy
Contributor

Quoting my Discord message:

So, the reason my fragmented tests are failing is that I am using transport.receive_timeout to read from the socket. And, as I figured out, gen_tcp:recv and ssl:recv cannot be used on active sockets, since messages sent over the connection are automatically read from the socket for us and placed in the controlling process's mailbox, so I assume the receive functions return {error, einval}.
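
That matches the documented behavior: gen_tcp:recv is only valid on passive sockets. A small self-contained shell experiment (the listener here is just scaffolding for the demo):

{ok, L} = gen_tcp:listen(0, [binary]),
{ok, Port} = inet:port(L),
{ok, S} = gen_tcp:connect("localhost", Port, [binary, {active, true}]),
%% recv on an active socket fails immediately
{error, einval} = gen_tcp:recv(S, 0, 1000),
%% switching the socket back to passive makes recv legal again
ok = inet:setopts(S, [{active, false}]),
{error, timeout} = gen_tcp:recv(S, 0, 100).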

@karlsson
Author

karlsson commented Jan 29, 2026

Thanks for the investigation! I will check the mist case; it could be a similar cause there, since it concerns a "split" message. Nothing wrong really, just limited to ActiveState(Once).

Update: the mist case also involves reading data directly from the socket for split messages. Nothing wrong, just limited to the ActiveState(Once) configuration. See the sketch below for why Once stays compatible with direct reads.
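
A sketch of why that works (assuming binary mode; read_rest/2 is a hypothetical helper): after the single active message has been delivered, the socket has already reverted to passive, so gen_tcp:recv is legal until the next re-arm.

read_rest(Socket, First) ->
    %% First arrived as {tcp, Socket, First}; the socket is passive now,
    %% so the remainder of a split message can be read directly.
    case gen_tcp:recv(Socket, 0, 5000) of
        {ok, Rest} -> {ok, <<First/binary, Rest/binary>>};
        {error, Reason} -> {error, Reason}
    end.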

I will set the PR "Ready for review".

@karlsson karlsson marked this pull request as ready for review January 29, 2026 13:07
Contributor

@vshakitskiy vshakitskiy left a comment


Looks good to me, great job!
