[Relay] chosen home relay server is random #2950

Open
mohamed-essam opened this issue Nov 25, 2024 · 0 comments · May be fixed by #2952

Describe the problem

The chosen home relay server is random and not based on latency.

To Reproduce

Steps to reproduce the behavior:

  1. Deploy multiple relay servers (new relay) to different geographic regions.
  2. Run a client in the same region as one of the relay servers.
  3. Observe the chosen home relay server.

In my case specifically:

  1. Deploy relay servers to AWS regions:
    a. us-east-1
    b. eu-central-1
    c. ap-south-1
  2. Run a client in us-east-1 (see the latency-check sketch below)
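
For reference, a rough way to confirm which relay endpoint is actually closest from the client is to time a TCP connect to each one. This is only a sanity-check sketch; the hostnames below are placeholders, not real deployment addresses or NetBird configuration.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder relay endpoints, one per AWS region used in the repro above.
	relays := []string{
		"relay-us-east-1.example.com:443",
		"relay-eu-central-1.example.com:443",
		"relay-ap-south-1.example.com:443",
	}

	for _, addr := range relays {
		start := time.Now()
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("%-42s dial failed: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%-42s connect time: %v\n", addr, time.Since(start))
	}
}
```

From a us-east-1 client, the us-east-1 endpoint consistently shows the lowest connect time, yet the chosen home relay varies between runs.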

Expected behavior

Expected the client to choose the relay server closest in latency as its home relay.

Are you using NetBird Cloud?

Self-hosted

NetBird version

0.33.0

NetBird status -dA output:

N/A

Do you face any (non-mobile) client issues?

N/A

Screenshots

N/A

Additional context

I noticed that in relay/client/picker.go, processConnResults() takes the first successfully connected result it reads from the resultChan channel. This makes the choice depend on Go scheduling and channel ordering rather than on latency.

Possibly relevant: the tested client has only 1 core (so only one thread runs at a time, which makes the ordering depend heavily on the Go scheduler), while another client with 8 cores chose the nearest relay server consistently.
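
A minimal sketch of what a latency-aware pick could look like (not NetBird's actual code): instead of returning the first successful result read from resultChan, drain the results with a deadline and keep the lowest-latency one. Apart from the names processConnResults and resultChan mentioned above, everything here (connResult, pickLowestLatency, the latency field) is an assumption for illustration; the real result type in picker.go will differ.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// connResult is a hypothetical stand-in for whatever processConnResults()
// receives on resultChan; latency is assumed to be the measured time to
// establish the relay connection.
type connResult struct {
	serverURL string
	latency   time.Duration
	err       error
}

// pickLowestLatency drains up to n results (bounded by a deadline) and
// returns the successful result with the lowest latency, instead of the
// first successful result that happens to be read from the channel.
func pickLowestLatency(resultChan <-chan connResult, n int, deadline time.Duration) (connResult, error) {
	timer := time.NewTimer(deadline)
	defer timer.Stop()

	var best *connResult
loop:
	for i := 0; i < n; i++ {
		select {
		case res := <-resultChan:
			if res.err != nil {
				continue // ignore failed connection attempts
			}
			if best == nil || res.latency < best.latency {
				r := res
				best = &r
			}
		case <-timer.C:
			break loop // deadline hit: settle for the best result so far
		}
	}
	if best == nil {
		return connResult{}, errors.New("no relay connection succeeded")
	}
	return *best, nil
}

func main() {
	// Simulated results; in the real client these would come from
	// concurrent connection attempts to each relay server.
	resultChan := make(chan connResult, 3)
	resultChan <- connResult{serverURL: "relay-eu-central-1.example.com", latency: 95 * time.Millisecond}
	resultChan <- connResult{serverURL: "relay-us-east-1.example.com", latency: 4 * time.Millisecond}
	resultChan <- connResult{serverURL: "relay-ap-south-1.example.com", latency: 190 * time.Millisecond}

	best, err := pickLowestLatency(resultChan, 3, 2*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("home relay: %s (%v)\n", best.serverURL, best.latency)
}
```

With this approach the pick no longer depends on which goroutine's result happens to be read first, only on the measured latency, at the cost of waiting up to the deadline instead of returning on the first success.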
