Netcode and physics corrections [Update #2]
Netcode in Rust with ECS.
A pioneering first-person real-time low-bandwidth netcode replication framework with Bevy Engine.
By Ramses

Introduction
Online multiplayer games require accurate and sophisticated netcode to support a medium to high number of physics-based entities and players in the game world without running into the infamous network bottleneck. The game has to be built so that physics is calculated authoritatively on the server, while clients replicate and locally simulate the same world as precisely as possible under a variety of latency conditions, with latency itself constantly changing. The latency of a connection is largely determined by the geographic distance between clients and the server. Any message sent over the network takes time to arrive: when the server sends positional data for an entity, every client receives it several milliseconds later. A framework that replicates physics simulations in real time across the net has to account for these delays.

Thankfully, there are useful resources on these issues. A very helpful YouTube playlist explaining the netcode model I implemented can be found here.

Netcode showcase videos

Media I: Mid-to-high latency test, with ping varying between roughly 20ms and 70ms. The server is hosted in a datacentre in a different country, and a VPN connection is used to intentionally worsen network conditions.

Media II: Low latency test.

The entire netcode framework is open source and can be found on GitHub.

Plugins
At the time of writing, Space Frontiers uses a fork of the renet plugin, which provides a UDP-based standardized netcode library for games and offers features such as DDoS protection when used with game server providers.
For physics there is also a native plugin called bevy_xpbd. The fact that this plugin is native and integrated with the game engine's ECS is great, because it allows for some powerful multi-world and multi-schedule processing.
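
As a rough illustration, this is how the two plugins might be wired into a Bevy App. Treat it as a sketch: exact plugin names and setup differ between releases of bevy_renet and bevy_xpbd, and transport/resource configuration is omitted here.

[code]
use bevy::prelude::*;
use bevy_renet::RenetServerPlugin;          // networking (renet)
use bevy_xpbd_3d::prelude::PhysicsPlugins;  // physics (bevy_xpbd)

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Renet handles the UDP connection layer; transport setup
        // and the RenetServer resource are omitted in this sketch.
        .add_plugins(RenetServerPlugin)
        // bevy_xpbd registers its physics systems directly in the
        // ECS schedule, which is what makes multi-world processing easy.
        .add_plugins(PhysicsPlugins::default())
        .run();
}
[/code]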

Tickrates and syncing
The default tickrate of Space Frontiers is 60hz, meaning the game loop is stepped at a fixed interval sixty times per second. Each processed tick has a unique integer ID as its tick stamp. When server and client send messages to one another, they include the stamp of the tick they are currently at. This way, for each message received, we can determine how much latency (in ticks) there is.
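
A minimal sketch of what this tick stamping could look like, assuming a hypothetical TickId resource and message header (names are illustrative, not the project's actual types):

[code]
use bevy::prelude::*;

/// Monotonically increasing tick counter, stepped at the fixed 60hz interval.
#[derive(Resource, Default)]
struct TickId(u64);

/// Every network message carries the sender's current tick stamp.
struct MessageHeader {
    tick: u64,
}

/// Runs once per fixed-timestep tick.
fn advance_tick(mut tick: ResMut<TickId>) {
    tick.0 += 1;
}

/// Latency in ticks for an incoming message: how far behind
/// the sender's stamp is compared to our own tick counter.
fn latency_in_ticks(local: &TickId, header: &MessageHeader) -> u64 {
    local.0.saturating_sub(header.tick)
}
[/code]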

To initialize a new client connection, the server sends the current tick ID it's at, and the new client adjusts its own tick ID to match it.

We also put the client a number of ticks ahead of the server. How many ticks an individual client runs ahead depends on the (averaged) connection latency expressed in ticks. This way, when a client sends a message for tick x, the server is at tick x - latency; when the server receives the client message, it can accurately insert and process the associated events in the server game loop with convenient timing.
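
As a sketch, the ahead offset could be derived from the averaged latency like this (the constant and the extra safety margin are assumptions of mine):

[code]
/// Tick duration at the default 60hz tickrate, in milliseconds.
const TICK_MS: f32 = 1000.0 / 60.0;

/// How many ticks a client should run ahead of the server,
/// based on its averaged one-way latency in milliseconds.
/// A small safety margin absorbs jitter.
fn ahead_offset(avg_latency_ms: f32, margin_ticks: u64) -> u64 {
    (avg_latency_ms / TICK_MS).ceil() as u64 + margin_ticks
}
[/code]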

The network is ever-changing, so latency can increase and decrease per connection over time. This means that the number of ticks the clients should be ahead of the server is subject to change. When the server detects a latency change, it sends sync adjustment requests to the client, and the client then either freezes or speeds up for a specified number of ticks to reach the new desired client tick.
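
A sketch of how a client might act on such a request; the request type and the choice to step zero or two ticks per frame are my own simplification, not necessarily how Space Frontiers implements it:

[code]
/// Hypothetical sync adjustment sent by the server.
struct SyncAdjustment {
    /// Positive: client must get further ahead (speed up).
    /// Negative: client is too far ahead (freeze).
    tick_delta: i64,
}

/// How many simulation ticks the client should run this frame.
fn ticks_this_frame(adjustment: &mut SyncAdjustment) -> u32 {
    if adjustment.tick_delta > 0 {
        adjustment.tick_delta -= 1;
        2 // run an extra tick to catch up
    } else if adjustment.tick_delta < 0 {
        adjustment.tick_delta += 1;
        0 // freeze for one tick to fall back
    } else {
        1 // in sync: normal single step
    }
}
[/code]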

Latency
Clients capture user input, such as movement keys and mouse input, and send it to the server; the server forwards it to the connected peers that can see the player that sent it. Since the client is put ahead of the server by the amount of latency, and a message from the server to the client also takes latency to arrive, we receive those peer inputs at double the latency: when the server forwards peer input at tick X, the client receives it at tick X + (latency in ticks * 2). The peer input message the client receives is therefore several ticks old. If we were to apply that data without taking this latency into account, we would be applying data at effectively random points in the game loop; the physics simulation would not be accurate and the results would not look smooth on the user's monitor.
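
In other words, the number of ticks to roll back for a peer input message follows directly from the tick stamps (reusing the hypothetical header idea from above):

[code]
/// The tick at which forwarded peer input was generated is in the
/// message header; the number of ticks we must roll back to apply
/// it correctly is simply the age of that stamp.
fn rollback_span(current_tick: u64, input_tick: u64) -> u64 {
    current_tick.saturating_sub(input_tick)
}
[/code]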

Client-side input & physics caching, rollback and prediction
Clients cache the physics data of each entity for every tick and store it in a Resource. Clients can spin up a new physics simulation, clone previously cached data and even apply a set of changes (i.e. position or input updates from the server). The client always receives authoritative data from the server a few ticks behind the tick it is currently at. When we receive new data from the server, we can go back in time to the tick contained in that message's header, step the simulation forward to the tick the client is currently at, and then apply the results to the main world. Space Frontiers performs rollback physics simulation and corrections in a secondary World with its own Schedule, and the results get sent back to the main World.
Since the core principles of ECS and Rust are data-driven, it is actually relatively easy to cache everything and to restore previous physics states. It is simply a matter of querying physics components from physics entities and storing them in a map keyed by the entity ID and the tick ID.
Essentially we can now predict the future and go back in the past to make physics simulation corrections.
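
A minimal sketch of such a cache, assuming hypothetical component and key types (the real project's naming and layout will differ):

[code]
use bevy::prelude::*;
use std::collections::HashMap;

/// Snapshot of the physics components we care about for one entity.
#[derive(Clone)]
struct PhysicsSnapshot {
    position: Vec3,
    rotation: Quat,
    linear_velocity: Vec3,
    angular_velocity: Vec3,
}

/// Cache of snapshots keyed by (tick ID, entity), stored as a Resource.
#[derive(Resource, Default)]
struct PhysicsCache {
    snapshots: HashMap<(u64, Entity), PhysicsSnapshot>,
}

impl PhysicsCache {
    /// Store this tick's snapshot for an entity.
    fn record(&mut self, tick: u64, entity: Entity, snapshot: PhysicsSnapshot) {
        self.snapshots.insert((tick, entity), snapshot);
    }

    /// Restore a previous state when a rollback begins.
    fn restore(&self, tick: u64, entity: Entity) -> Option<PhysicsSnapshot> {
        self.snapshots.get(&(tick, entity)).cloned()
    }
}
[/code]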

[Image: cache_copying.png]
Media III: Physics cache copying.

[Image: input_debug.png]
Media IV: Fun times debugging peer input.

Consistent processing
To ensure consistent processing, the client and server send messages in batches. Both the client and server send and receive one message batch per tick (per client).
This way the event loops stay consistent, and no unexpected results occur when multiple message batches have been received for a single tick: the application simply splits those batches up and processes them in the upcoming ticks from a queue.
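
A sketch of such a queue; the types are illustrative:

[code]
use std::collections::VecDeque;

/// A batch of messages that should be processed in a single tick.
struct MessageBatch {
    messages: Vec<Vec<u8>>,
}

/// Queue that guarantees at most one batch is processed per tick.
#[derive(Default)]
struct BatchQueue {
    pending: VecDeque<MessageBatch>,
}

impl BatchQueue {
    /// Surplus batches received for the same tick simply queue up here.
    fn enqueue(&mut self, batch: MessageBatch) {
        self.pending.push_back(batch);
    }

    /// Called once per tick: take exactly one batch, if any.
    fn next_for_tick(&mut self) -> Option<MessageBatch> {
        self.pending.pop_front()
    }
}
[/code]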

SubApps and multiple Worlds
The physics rollback and prediction happen inside a SubApp, which means the systems and data are separated from the main world that gets rendered. This way we have two ECS Worlds. In a single step of the main world we can step and process the SubApp for any number of ticks we desire. A correction is triggered by providing a start and end tick ID; the correction SubApp steps through that range of physics calculations and returns the results. All we do is pass the caches of physics entities and player input to the correction and let it run the physics steps as fast as possible. There are some sophisticated systems written to do this properly, because entity spawning and despawning has to be emulated too. We use the same worker threads as the main app, reducing thread overhead. Both worlds execute asynchronously, but as is standard in any Bevy application, the inner logic of each world is fully parallelized through systems. I also made a purist synchronous implementation of the physics correction SubApp, but it is experimental because it causes jitter. In theory it is possible to run correction physics steps completely synchronously with the main game loop for significant performance gains; the downside is that you would get one tick of additional latency.
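
A rough sketch of what triggering a correction over a tick range might look like; the request type and the manual schedule stepping are my own simplification of the approach described above:

[code]
use bevy::prelude::*;
use bevy::ecs::schedule::ScheduleLabel;

/// Label for the physics schedule that the correction world runs.
/// Assumes this schedule has been added to the correction world.
#[derive(ScheduleLabel, Clone, Debug, PartialEq, Eq, Hash)]
struct CorrectionStep;

/// Hypothetical request: re-simulate physics from start to end tick.
#[derive(Resource)]
struct CorrectionRequest {
    start_tick: u64,
    end_tick: u64,
}

/// Step the correction world once per tick in the requested range.
/// Cached snapshots and peer inputs for each tick are assumed to
/// already be loaded into `world` before this runs.
fn run_correction(world: &mut World, request: &CorrectionRequest) {
    for _tick in request.start_tick..=request.end_tick {
        world.run_schedule(CorrectionStep);
    }
}
[/code]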

Optimizations
The gridmap is split up into chunks, both in terms of data and collision, and there is one compound gridmap collider for each populated gridmap chunk (see the sketch below).
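
As a sketch, mapping a cell coordinate to the chunk that owns its collider could look like this (the chunk size and types are assumptions):

[code]
use bevy::math::IVec3;

/// Assumed chunk edge length in cells; the real value may differ.
const CHUNK_SIZE: i32 = 16;

/// Map a gridmap cell coordinate to the chunk that owns it,
/// so one compound collider can be built per populated chunk.
fn chunk_of_cell(cell: IVec3) -> IVec3 {
    IVec3::new(
        cell.x.div_euclid(CHUNK_SIZE),
        cell.y.div_euclid(CHUNK_SIZE),
        cell.z.div_euclid(CHUNK_SIZE),
    )
}
[/code]
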
The correction SubApp is lean and runs a skeleton Plugin selection. This is important because, with another player in the scene, the client can easily be tasked with rolling back and stepping several ticks' worth of physics per client tick. At the default 60hz tickrate this means clients are expected to roll back hundreds of physics steps per second to achieve smooth results, within a narrow execution budget of 12 milliseconds per main world tick.
There are still several big steps that will eventually be made to optimize physics rollback, such as finishing the synchronous execution of the rollback World and reducing the amount of cache data that is sent between Worlds.

Afterword
Up next is integrating more 3D assets, such as character models and animations, and hopefully expanding the gridmap with higher-fidelity assets and making the current default map larger.