Allow processing of NetworkVariable delivery as a whole at Tick update #3195

Open
babaq opened this issue Jan 9, 2025 · 3 comments

babaq commented Jan 9, 2025

Is your feature request related to a problem? Please describe.
The documentation states clearly that NetworkVariable message delivery to peers on the network is not guaranteed to be received as a whole. The same was true for the deprecated UNET, which we confirmed by setting a [SyncVar] Visible field to true on two objects in the same frame. On the server/host we can confirm the two objects always show up in the same frame by recording all rendered frames (Unity Recorder), but on the client the recorded frames show that they sometimes appear in the same frame and sometimes appear one after another in two successive frames.

This is a fatal flaw for our application, because we need to measure the time at which an object shows up on the display. If the NetworkVariable updates are not processed as a whole, the objects are rendered on different frames, and the timing has jitter that depends on the frame rate.

Our old solution was to "frame" the message delivery: before UNET started preparing the SyncVar delivery, we first sent a FrameStart message and immediately flushed the network buffer, then let UNET do its work, and afterwards sent a FrameEnd message to close the whole delivery.

On the client side, when a FrameStart is received, it spin-polls the network buffer until a FrameEnd is received or a timeout elapses. In this way, whenever we decide a critical frame is about to happen, we add framing around the messages, and the critical frame is rendered on the same frame across the network.
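
For reference, a purely illustrative sketch of that framing idea, expressed with Netcode's named custom messages rather than the old UNET API; the message names are hypothetical, and the client-side spin wait is omitted because blocking the main thread is exactly what we want to avoid:

using Unity.Collections;
using Unity.Netcode;

// Illustrative only: frame a critical set of state changes between a
// FrameStart and a FrameEnd marker, so the client knows when the whole
// group has arrived. Message names and placement are hypothetical.
public class FrameMarkers : NetworkBehaviour
{
    private const string FrameStart = "FrameStart";
    private const string FrameEnd = "FrameEnd";

    public override void OnNetworkSpawn()
    {
        if (!IsServer)
        {
            var messaging = NetworkManager.CustomMessagingManager;
            messaging.RegisterNamedMessageHandler(FrameStart, (sender, payload) =>
            {
                // Client: a framed group of updates is about to arrive.
            });
            messaging.RegisterNamedMessageHandler(FrameEnd, (sender, payload) =>
            {
                // Client: the framed group is complete; safe to render it now.
            });
        }
    }

    // Server: wrap one critical frame's worth of state changes in markers.
    public void SendCriticalFrame()
    {
        using var empty = new FastBufferWriter(1, Allocator.Temp);
        var messaging = NetworkManager.CustomMessagingManager;

        messaging.SendNamedMessageToAll(FrameStart, empty);
        // ... apply the SyncVar / NetworkVariable changes for this frame here ...
        messaging.SendNamedMessageToAll(FrameEnd, empty);
    }
}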

Describe the solution you'd like
Now we are migrating to Netcode and wondering whether this would also be a useful feature for others. From a user perspective, it would be nice to have something like this in Netcode:

// Enables/disables the extra step of adding framing messages to each tick's delivery.
// The default, false, is the current Netcode behavior; true adds framing for every tick.
bool NetworkManager.EnableSyncTickDelivery;

// Adds framing of the delivery only for the next tick update.
void NetworkManager.SyncNextTickDelivery();

// Timeout (ms) for spin-waiting for the FrameEnd.
int NetworkManager.SyncTickDeliveryTimeout;
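
A hypothetical usage sketch of the proposed API (nothing below exists in Netcode today; stimulusA/stimulusB stand for components holding NetworkVariable&lt;bool&gt; fields):

// Hypothetical usage only; none of these members exist in Netcode today.
var networkManager = NetworkManager.Singleton;

// Option A: frame every tick's delivery.
networkManager.EnableSyncTickDelivery = true;
networkManager.SyncTickDeliveryTimeout = 50; // stop spin-waiting after 50 ms

// Option B: frame only the next tick, right before a critical change.
networkManager.SyncNextTickDelivery();
stimulusA.Visible.Value = true; // both changes should then appear on the
stimulusB.Visible.Value = true; // same render frame on every client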

Describe alternatives you've considered
A straightforward migration of our old solution would be to use LateUpdate to add the framing messages and PreUpdate for the spin-polling logic, but that would block the main thread. The best place to add this functionality is the network thread, because the only requirement is to make sure all updates from the server are received and handed to the main thread as a whole. I am not sure whether this needs changes to the low-level Unity.Transport, or whether we could harness INetworkUpdateSystem to add this functionality to Netcode.
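
A minimal sketch of what hooking the Netcode update loop could look like, assuming the INetworkUpdateSystem interface together with its RegisterNetworkUpdate/UnregisterAllNetworkUpdates extension methods; the framing/waiting logic is only indicated by comments, since actually holding back message processing would still need transport-level support:

using Unity.Netcode;

// Illustrative only: a plain class hooked into two player-loop stages.
public class TickFramingHook : INetworkUpdateSystem
{
    public void Initialize()
    {
        // EarlyUpdate runs before Netcode processes inbound messages;
        // PostLateUpdate runs after gameplay code has finished the frame.
        this.RegisterNetworkUpdate(NetworkUpdateStage.EarlyUpdate);
        this.RegisterNetworkUpdate(NetworkUpdateStage.PostLateUpdate);
    }

    public void NetworkUpdate(NetworkUpdateStage updateStage)
    {
        switch (updateStage)
        {
            case NetworkUpdateStage.EarlyUpdate:
                // Client side: this is where "hold rendering until FrameEnd
                // has been processed" logic would have to live.
                break;
            case NetworkUpdateStage.PostLateUpdate:
                // Server side: this is where a FrameEnd marker could be queued
                // after all of this frame's state changes.
                break;
        }
    }

    public void Shutdown()
    {
        this.UnregisterAllNetworkUpdates();
    }
}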

Any suggestions are appreciated.

babaq added the stat:awaiting triage and type:feature labels on Jan 9, 2025
EmandM (Collaborator) commented Jan 9, 2025

The documentation has this section that explains the best way to get the functionality that you're describing. Is there a particular reason that RPCs do not work for your use-case?
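
For reference, a minimal sketch of the RPC route being suggested, assuming the state that must appear together can be carried as parameters of a single ClientRpc (the component and parameter names are hypothetical):

using Unity.Netcode;
using UnityEngine;

// Illustrative only: group the changes that must appear together into one
// ClientRpc so they arrive, and are applied, as a single message.
public class CriticalFrameSync : NetworkBehaviour
{
    [ClientRpc]
    private void ShowObjectsClientRpc(bool objectAVisible, bool objectBVisible)
    {
        // Both changes are applied in the same callback, i.e. on the same
        // render frame on every client. Applying them to actual renderers
        // is left out of this sketch.
        Debug.Log($"Apply together: A={objectAVisible}, B={objectBVisible}");
    }

    public void ShowBothOnServer()
    {
        if (IsServer)
        {
            ShowObjectsClientRpc(true, true);
        }
    }
}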

EmandM added the stat:awaiting response label and removed the stat:awaiting triage label on Jan 10, 2025
babaq (Author) commented Jan 10, 2025

@EmandM As the documentation describes, variables that control the whole app's behavior, need to update continuously, and must be synced not only to connected clients but also to late-joining clients are best modeled as NetworkVariables, not as one-off events like RPCs. Besides, we usually have 5-50 NetworkVariables (~20 on average) per NetworkBehaviour; it would be functionally and ergonomically awkward to implement them as RPCs. Since RPCs already implement the infrastructure to sync messages, enabling the same for NetworkVariables would give developers more control over how precisely they want game state to be synchronized. It may not be a big issue for casual games, but for research or industrial applications it's an essential feature.
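
To make the shape of those behaviours concrete, a stripped-down, hypothetical example of the kind of NetworkBehaviour described above (all field names are invented):

using Unity.Netcode;
using UnityEngine;

// Illustrative only: continuously synchronized state that late joiners also
// need, which is what NetworkVariables (rather than one-off RPCs) model well.
public class StimulusState : NetworkBehaviour
{
    public NetworkVariable<bool> Visible = new NetworkVariable<bool>();
    public NetworkVariable<float> Contrast = new NetworkVariable<float>();
    public NetworkVariable<Vector3> Position = new NetworkVariable<Vector3>();
    // ... in practice ~20 such variables per behaviour in our application.

    public override void OnNetworkSpawn()
    {
        Visible.OnValueChanged += (previous, current) =>
        {
            // Rendering reacts here; a late-joining client receives the
            // current value automatically when the object spawns.
        };
    }
}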

EmandM removed the stat:awaiting response label on Jan 13, 2025
NoelStephensUnity (Collaborator) commented Jan 14, 2025

Hi @babaq,

Hmm... I "think" I understand what you are trying to achieve here, and there might be some details I am missing... so I am going to summarize what you are trying to achieve in order to make sure I don't provide you with an invalid or non-pertinent solution.

If I understand correctly, you have some form of industrial/research application where a client visualizes changes to, say, an object (physical or simulated) that is represented by (at a minimum) one or more NetworkObject(s) with NetworkBehaviour components, each of which could have several NetworkVariables.

The issue you are encountering is that not all NetworkVariable update messages (NetworkVariableDeltaMessage) for a specific tick are received and processed on the same render frame, which leads to issues (jitter and/or visual anomalies)?

I also want to clear something up about NetworkVariables and when they are received and processed relative to one another.

  • Upon a NetworkVariable being changed, it is sent via NetworkVariableDeltaMessage on the next network tick.
    • If one or more NetworkVariables are changed/updated on network tick 301.3 - 301.7 (NGO does have partial tick values) then those will be sent on tick ~302.0.
      • Depending on whether these are server write permission or owner write permission NetworkVariables:
        • Server write permissions:
          • You then have the 1/2 RTT it takes to send the NetworkVariableDeltaMessage from the server to each client.
        • Owner write permissions:
          • You then have the 1/2 RTT it takes to send the NetworkVariableDeltaMessage from the owning client to the server (unless it is the host) and then the 1/2 RTT it takes to send to each non-owning client.
    • Messages are batched together, so if you have 20 NetworkVariables updated during that 301.3-301.7 tick period then they will be batched into one batched message that contains all NetworkVariableDeltaMessages on network tick 302 at the end of the frame.
      • You can add the time from when the values are changed until the end of the frame to the total time before the NetworkVariableDeltaMessage(s) are received and processed.
      • You can add a small amount of additional time for the message to be ingested and processed.
        • Depending upon the time each NetworkVariable.OnValueChanged method takes to process each individual NetworkVariable, you can "stagger-add" those times to the total time each subsequent NetworkVariable will be processed, but they will all be processed on the same render frame (see the logging sketch after this list).
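
A minimal way to gather that kind of timing evidence on a client, i.e. which frame and which tick each delta is applied on, could look like the following (the Visible variable is hypothetical):

using Unity.Netcode;
using UnityEngine;

// Illustrative only: log the frame and tick at which each delta is applied on
// a client, to verify that variables written on the same server tick are all
// processed on the same render frame.
public class DeltaTimingProbe : NetworkBehaviour
{
    public NetworkVariable<bool> Visible = new NetworkVariable<bool>();

    public override void OnNetworkSpawn()
    {
        Visible.OnValueChanged += (previous, current) =>
        {
            Debug.Log($"Visible -> {current} | frame {Time.frameCount} " +
                      $"| local tick {NetworkManager.LocalTime.Tick} " +
                      $"| server tick {NetworkManager.ServerTime.Tick}");
        };
    }
}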

So, there will always be (at a minimum with server write permission NetworkVariables):

  • The remaining network tick time (i.e. if set on tick 301.4 then ~0.6 of tick period will pass before the delta state is sent).
  • The frame time on the server until messages in the outbound queue are sent (upon the next network tick increment).
  • The 1/2 RTT to deliver the message.
  • The initial frame time at the beginning of the frame (EarlyUpdate) when NetworkVariableDeltaMessage(s) are processed.

However, as mentioned, messages are batched together at the end of the frame into a "single batch message", and they are grouped based on the delivery type used. NetworkVariables use NetworkDelivery.ReliableFragmentedSequenced by default, which means you could have 100 NetworkVariables on 5 different NetworkBehaviour components (20 each) that all update on network tick 310.0-310.9(ish), and they would all be grouped into a single ReliableFragmentedSequenced batched message (i.e. if it exceeds the MTU size then it is fragmented into MTU-sized messages). So, under this scenario, all 100 NetworkVariable updates that occurred during network tick 310 would be received as a single message and processed on the same frame (with the latency described above). Clients receiving a fragmented batched message won't start processing the wrapped messages until all fragments of the batched message have been received. So, from that perspective... as long as the NetworkVariables aren't changing at the edge between the end of one tick and the beginning of the next, you should see them all delivered and processed on the same frame (if they all change on the same tick).

The second question I have is whether you are synchronizing when you update NetworkVariables (i.e. registering for NetworkTickSystem.Tick, which triggers on each network tick, and checking at that point whether changes need to be made to any NetworkVariables). Asked differently: at what point do you make changes to NetworkVariables relative to the player loop and relative to the other NetworkVariables, and have you gathered any metrics (i.e. when you change a NetworkVariable, which tick was it changed on) as to when the staggered NetworkVariable updates are occurring?
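
To make that concrete, here is a minimal sketch of aligning writes to the tick, assuming NetworkTickSystem.Tick; the component, its variable, and the buffered request are hypothetical:

using Unity.Netcode;

// Illustrative only: buffer requested changes and apply every related
// NetworkVariable write in one place, when the network tick increments,
// so the changes always land on the same tick.
public class TickAlignedWriter : NetworkBehaviour
{
    public NetworkVariable<bool> Visible = new NetworkVariable<bool>();

    private bool? pendingVisible;

    public override void OnNetworkSpawn()
    {
        if (IsServer)
        {
            NetworkManager.NetworkTickSystem.Tick += OnNetworkTick;
        }
    }

    public override void OnNetworkDespawn()
    {
        if (IsServer)
        {
            NetworkManager.NetworkTickSystem.Tick -= OnNetworkTick;
        }
    }

    // Gameplay code requests a change at any time during the frame...
    public void RequestVisibility(bool visible) => pendingVisible = visible;

    // ...and it is applied only at the tick boundary, together with any
    // other variables written from this callback.
    private void OnNetworkTick()
    {
        if (pendingVisible.HasValue)
        {
            Visible.Value = pendingVisible.Value;
            pendingVisible = null;
        }
    }
}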

I have several possible approaches in mind, but I think the above should be a good starting point for figuring out the best solution for your project's needs (based on your response as to whether I am fully understanding the issue at hand, when you are changing NetworkVariables, and whether you are synchronizing those changes to occur at the beginning of a tick, or in some other fashion, so that you don't apply changes at the end of one tick and the beginning of the next).

NoelStephensUnity added the stat:awaiting response label on Jan 17, 2025
michalChrobot added the stat:Investigating label and removed the Investigating label on Jan 20, 2025