coherence provides two types of online replication services: Rooms and Worlds. Read about the different use cases for each.
Rooms are best for session-based gameplay where the match between players takes place in a short-lived environment.
A good example is a first person shooter multiplayer match. The match takes place between two teams in a single game session, and players enter through a lobby and matchmaking. When the match is concluded, the multiplayer environment the match took place in is closed and players return to a lobby.
This is one example of how Rooms can be used, but it is by no means the only use case. The important distinction between Rooms and Worlds (see below) is that Rooms are relatively short-lived and are meant to be created and closed by the game client through the coherence SDK.
See Rooms API.
Worlds, as opposed to Rooms, are long-lived and permanent multiplayer environments provided by coherence. Using the Developer Portal, you can easily define and manage your World configurations.
See Manage Worlds.
A good example of a World is a permanent environment for a Massively Multiplayer Online (MMO) game. Regardless of the number of players connected, the environment is always available, and players can connect and disconnect at will.
Entities can be permanently saved in the World, so they persist even when there are no active connections and are still there when players do connect.
See Worlds API.
Your project does not have to choose one or the other. A project in coherence can contain both Worlds and Rooms.
A good example of this scenario is, again, our MMO. Although players connect to a permanent and persistent World, they may enter a dungeon instance with other players. These dungeon instances can be Rooms.
The primary difference in the configuration and usage of Rooms and Worlds is that Worlds are managed in the Developer Portal, whereas Rooms are created and managed through the SDK.
coherence is currently in private preview. Some stability and performance issues may still be present and are being ironed out. Additionally, more features are planned for the public release.
Custom UDP transport layer using bit streams with reliability
Smooth state replication
Server-side, client-side, distributed authority
Connected entity support
Fast authority transfer
Remote messaging (RPC)
Persistence
Multiple examples and showcases
Verified support for Windows, macOS, Linux, Android, iOS and WebGL
Support for rooms and worlds
Unity SDK with intuitive no-code layer
Per-field adjustable interpolation and extrapolation
Input queues
Easy deployment into the cloud
SDK source included, no 3rd-party libraries
Per-field compression and quantization
Per-field sampling frequency adjustable at runtime
Unlimited per-field levels of detail
Areas of interest
Accurate SimulationFrame tracking
Developer portal with server and service configurator
Multiple regions (US East, EU Central)
Player accounts
Key-value store
Matchmaking
Ability to deploy one replication server and one simulation server per environment
Prometheus and Grafana integration
Multi-room simulators
Input queue UX improvements
Network Profiler
Additional regions
Support for multiple simulators and replicators in a single project
Dashboard with usage statistics
Support for lean pure C# clients and simulators without Unity
Peer-to-peer (without replication server) with NAT punch-through
TCP fallback support
WebSockets support
MTU detection
Packet replay
Ability to deploy multiple simulation servers per environment
Player analytics
Developer portal graphs and analytics
Simulator authentication
Bare-metal and cloud support
JavaScript SDK
Unreal Engine SDK
Multiple replication servers per game world
Customer-specific serialization
User-space load-balancing (SDK framework)
Game world map with admin interface
Anti-cheat functionality
Advanced transaction logs (audit trail)
Schema versioning (hot updates)
Games are better when we play together.
coherence is a network engine, platform and a series of tools to help anyone create a multiplayer game. Our mission is to give any game developer, regardless of how technical they are, the power to make a connected game.
If you are an existing user looking to update, you can find the details in our change log.
The Network Playground is a collection of scenes showing you how to use various features of the coherence Unity SDK. It shows you how to synchronize transforms, physics, persistence, animations, AI navigation and send network commands.
You can follow our step-by-step guide to learn how to install coherence in Unity, set up your scene, prefabs, interactions, as well as deploy your project to be shared with your friends.
Join our community Discord for community chatter and support.
Join our official Developer Discord channel.
Contact us at devrel@coherence.io
A lean and performant server that keeps the state of the world and replicates it efficiently between various simulators and game clients. The Replicator usually runs in the coherence Cloud, but developers can start it locally from the command line or the Unity Editor.
A build of the game. To connect to coherence, it will use the coherence SDK.
A version of the game client without the graphics ("headless client") optimized and configured to perform server-side simulation of the game world. When we say something is simulated on the server, we mean it is simulated on one or several simulators.
A text file defining the structure of the world from the network's point of view. The schema is shared between the replicators, simulators and game clients. The world is generally divided into components and archetypes.
Code Generation
The process of generating code specific to the game engine that takes care of network synchronization and other network-specific code. This is done using a CLI tool called the Protocol Code Generator, which takes the schema file and generates code for various engines (e.g. C# for Unity).
The process of making sure the state of the world is eventually the same on the replicator, simulators and game clients, depending on their areas of interest.
coherence works by sharing game world data via a Replication Server in the cloud and passing it to the connected clients.
The clients and simulators can define areas of interest (LiveQueries), levels of detail, varying simulation and replication frequencies and other optimization techniques to control how much bandwidth and CPU power is used in different situations.
The game world can be run using multiple simulators that split up simulation functions or areas of the world accordingly.
The platform handles scaling, synchronization, persistence and load balancing automatically.
coherence is a network engine, platform and a series of tools to help anyone create a multiplayer game.
Fast network engine with cloud scaling, state replication, persistence and auto load balancing.
Easy to develop, iterate and operate connected games and experiences.
SDK allows developers to make multiplayer games using Windows, Linux or Mac, targeting desktop, console, mobile, VR or the web.
Game engine plugins and visual tools will help even non-coders create and quickly iterate on a connected game idea.
Scalable from small games to large virtual worlds running on hundreds of servers.
Game-service features like user account, key-value stores and matchmaking.
At the core of coherence lies a fast network engine based on bitstreams and a data-oriented architecture, with numerous optimization techniques like delta compression, quantization and levels of detail (LOD) to minimize bandwidth and maximize performance.
The network engine supports multiple authority models:
Client authority
Server authority
Server authority with client prediction
Authority handover (request, steal)
Distributed authority (multiple simulators with seamless transition)
Deterministic client prediction with rollback: coming in a future release
coherence supports persistence out of the box.
This means that the state of the world is preserved no matter if clients or simulators are connected to it or not. This way, you can create shared worlds where visitors have a lasting impact.
Fast authority transfer and remote commands allow different authority models, including client authority, server authority, distributed authority and combinations like server authority with client prediction.
Peer-to-peer support (without a replicator) is planned in a future release. Please see the roadmap for updates.
coherence only supports Unity at the moment. Unreal Engine support is planned. For more specific details and announcements, please check the Unreal Engine Support page. For custom engine integration, please contact our developer relations team.
First, open Project Settings. Under Package Manager, add a new Scoped Registry with the following fields:
Name: coherence
URL: https://registry.npmjs.org
Scope(s): io.coherence.sdk
Enable Preview / Pre-release Packages: checked
Show Dependencies: checked
Click Apply.
Now open the Package Manager. Click Packages and select My Registries. Under coherence, click Install.
If you want to install coherence manually, go to the folder of your project and open the file /Packages/manifest.json. Copy and paste the lines surrounded by comments that look like this: /* comment */.
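As a sketch, the scoped registry entry in /Packages/manifest.json would look something like the following, built from the fields listed above. The exact package version to add under "dependencies" is not shown on this page, so it is omitted here:

```json
{
  "scopedRegistries": [
    {
      "name": "coherence",
      "url": "https://registry.npmjs.org",
      "scopes": [ "io.coherence.sdk" ]
    }
  ]
}
```

You would also add the io.coherence.sdk package itself to the "dependencies" section of the same file, at the version you intend to install.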
The Unity docs have information about scoped package registries.
When you install the coherence Unity Package, all the services for the SDK are installed in the background as well.
You will then see this package in the Package Manager under "My Registries".
The coherence SDK has some dependencies on other Unity packages you can see in the image above. If you're already using these in your project, you might have to adjust their version number (Unity will tell you about this).
When you successfully install the coherence SDK, you'll get this quickstart window pop-up and you'll be good to go.
Now we can build the project and try out network replication locally.
This example will show you how to launch a local replication server and connect multiple instances.
Versions under 0.8 require you to bake code before building a player and running a Replication Server for the first time.
You can run a local replication server from the coherence menu by clicking:
coherence -> Server -> Run Local Worlds Server.
This will open a new terminal window with the replication server running and a world created in it.
Now it's time to make a standalone build and test network replication.
#protip: Go to Project Settings, Player and change the Fullscreen Mode to Windowed and enable Resizable Window. This will make it much easier to observe standalone builds side-by-side when testing networking.
Note that for this sample we are running a Worlds server, so make sure the Connect Dialog Selector on the Coherence Sample UI object in your scene is also set to Worlds.
Open the Build Settings window (File -> Build Settings). Click on Add Open Scenes to add the current scene to the build. Click Build and Run.
Select a folder (e.g. builds) and click OK.
When the build is done, start another instance of the executable (or run the project in the Game Window in Unity).
Click Connect on both clients. Now try focusing one and using the WASD keys. You will see the box move on the other side as well.
Congratulations, you've made your first coherence replicated experience. But this is only the beginning. Keep reading to take advantage of more advanced coherence features.
If you want to connect to the local replication server from another local device (such as another PC, Mac, Mobile or VR device), you can find your IPv4 address and use that as your server address in the Connect dialog. These devices need to be connected to the same network.
You can find your IPv4 address by going to your command line tool and typing ipconfig. Remember to include the port number, for example 192.168.1.185:32001.
Make sure your Firewall allows remote connections to connect to the replication server from other devices on your network.
It's quick and easy to set up a networked scene from scratch using the coherence SDK. This example will show you the basic steps to sync up some moving characters.
Add these components to your scene to prepare it for network synchronization.
coherence -> Scene Setup -> Create MonoBridge
This object takes care of connected GameObject lifetimes and allows us to develop using traditional MonoBehaviour scripts.
coherence -> Scene Setup -> Create LiveQuery
Creates a LiveQuery which queries the area around the local player to get required information from the replication server. You can surround your entire scene in one query or can attach it to an object such as the player or a camera.
coherence -> Scene Setup -> Add Sample UI
Creates a Canvas (and an Event System, if not already present in the scene) with a sample UI that helps you connect to a local or remote replication server. You can create your own connection dialog; this one is just a quick way to get started.
Out of the box, coherence will use C# Reflection to sync all the data at runtime. This is a great way to get started but it is very costly performance-wise.
coherence offers an automatic way of generating more performant synchronization code, called baking.
Click on coherence -> Schema and Baking -> Bake Schemas. This will go through all CoherenceSync components in the project and generate a schema file based on the selected variables, commands and other settings. It will also take into account any LODs added.
For every prefab with a CoherenceSync component, the baking process will generate a bespoke C# file in the coherence/baked folder in the project. Adding that file to the prefab will make that prefab use the generated code instead of C# reflection.
Once the Schema has been baked, you will be able to switch to baked mode in the CoherenceSync inspector.
The name of the baked script will be CoherenceSync[prefabName].
When you bind to your script's fields and bake, coherence generates specific code that accesses your code directly, without using reflection. This means that whenever you change your scripts, you might break compilation of the baked code.
For example, if you have a Health.cs script which exposes a public float health; field, and you mark health as a binding in the Bindings window, the baked script will access your component via its type name and your field via its field name. Your baked script might now reference your component:
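As a purely hypothetical illustration (the actual file produced by the Protocol Code Generator differs; the class and method names below are invented for this example), the baked script might contain something along these lines, with the component type and field referenced by name:

```csharp
// Hypothetical sketch of baked code — NOT the actual generator output.
// It only illustrates direct, reflection-free access to the binding.
public partial class CoherenceSyncPlayer
{
    private Health healthComponent; // resolved once at initialization

    private void SyncHealth()
    {
        // Direct access by type and field name: renaming Health or
        // health in your own code breaks compilation of this file.
        float value = healthComponent.health;
        // ... the value would then be written to the network layer
    }
}
```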
Baked scripts reside by default in Assets/coherence/baked, but you can check where exactly they're located in the settings window.
This means that if you decide to change your component name (Health) or any of your bound field names (health), Unity script recompilation will fail. In this example, we will be deprecating health and adding health2 in its place.
Our watchdog is able to detect when this happens and offers you a solution right away.
It will suggest that you bake in safe mode, and then diagnose the state of your prefabs. After a few seconds of script recompilation, you'll be presented with the diagnosis window.
You can enter safe mode manually via coherence -> Schema and Baking -> Bake Schemas (Safe Mode).
In this window, you can easily spot bindings in your prefabs that are no longer valid. In our example, health is no longer valid since we've moved it elsewhere (or deleted it).
Click on the hand pointing button to open the bindings window, and take a look at your script:
Now, we can manually rebind our data: unbind health and bind health2. Once we do, we can safely bake again.
Baking in safe mode creates scripts that will help avoid compilation errors, but prefabs that use them will not work at runtime. Remember to bake again normally when you're done fixing your prefabs.
In this section, we will learn how to prepare a prefab for network replication.
Add an asset and create a prefab of it. Make sure the prefab is in a Resources folder in your Unity project.
Here is an example:
GameObject -> 3D Object -> Cube
Create a Resources folder in your Project. Drag the Cube into the Resources folder to turn it into a prefab.
The CoherenceSync component will help you prepare an object for network synchronization at design time. It also exposes an API that allows us to manipulate the object at runtime.
CoherenceSync will query all public variables and methods on any of the attached components, for example Unity components such as Transform, Animator, etc. This will include any custom scripts such as PlayerInput, and even scripts that came with Asset Store packages you may have downloaded.
Select which variables you would like to sync across the network. Initially, this will probably be the Transform settings: position, rotation, scale.
Under Configure, click Select fields and methods.
In the Configuration dialog, select position, rotation and scale.
Close the Configuration dialog.
This simple input script will use WASD or the Arrow keys to move the prefab around the scene.
Click on Assets -> Create -> C# Script. Name it Move.cs and copy-paste the following content into the file.
Wait for Unity to compile the file, then add it onto the prefab.
We have added a Move script to the prefab. This means that if we just run the scene, we will be able to use the keyboard to move the object around.
But what happens on another client where this object is not authoritative, but rather replicated? We will want the position to be replicated over the network, without the keyboard input interfering with it.
Open the Events section in CoherenceSync. Add a new On Network Instantiation handler by clicking on the plus sign next to it.
Pull the Cube prefab into the Runtime Only / None (Object) field.
Now click the No Function dropdown and select Move -> bool enabled. Leave the Boolean field unchecked.
From the CoherenceSync component you can configure settings for Lifetime (Session-based or Persistent), Authority transfer (Request or Steal), Simulation model (Client Side, Server Side or Server Side with Client Input) and Adoption settings for when local persistent entities are orphaned.
On Before Networked Instantiation (before the GameObject is instantiated)
On Networked Instantiation (when the GameObject is instantiated)
On Networked Destruction (when the GameObject is destroyed)
On Authority Gained (when authority over the GameObject is transferred to the local client)
On Authority Lost (when authority over the GameObject is transferred to another client)
On After Authority Transfer Rejected (when the GameObject's authority transfer was requested and denied)
On Input Simulator Connected (when a client with a simulator is ready for Server Side with Client Input)
There are some constraints when setting up a prefab with CoherenceSync, hereafter referred to as a Sync Prefab.
A Sync Prefab has one, and only one, CoherenceSync component in its hierarchy.
The CoherenceSync component must be at the Sync Prefab root.
A Sync Prefab cannot contain instances of other Sync Prefabs.
A hierarchy in a scene can contain multiple Sync Prefabs. However, such a hierarchy cannot be saved as a Sync Prefab, as that would break rules 1-3.
Networked entities can be simulated either on a game client ("client authority") or a simulation server ("server authority").
Client authority is the easiest to set up initially, but it has some drawbacks:
Higher latency. Because both clients have a non-zero ping to the replication server, the minimum latency for data replication and commands is the combined ping (client 1 to the replication server plus the replication server to client 2).
Higher exposure to cheating. Because we trust game clients to simulate their own entities, there is a risk that one such client is tampered with and sends out unrealistic data.
In many cases, especially when not working on a competitive PvP game, these are not really issues, and client authority is a perfectly fine choice for the game developer.
Client authority does have a few advantages:
Easier to set up. No client vs. server logic separation in the code, no building and uploading of simulation servers, everything just works out of the box.
Cheaper. Depending on how optimized the simulator code is, running a simulator in the cloud will in most cases incur more costs than just running a replication server (which is comparatively very lean).
Having one or several simulators taking care of the important world simulation tasks (like AI, player character state, score, health, etc.) is always a good idea for competitive PvP games.
Running a simulator in the cloud next to the replicator (the ping between them being negligible) will also result in lower latency.
The player character can also be simulated on the server, with the client locally predicting its state based on inputs. You can read more about how to achieve that in the section on Server Side with Client Input.
Peer-to-peer support (without a replicator) is planned in a future release. Please see the roadmap for updates.
coherence allows you to upload and share builds of your game with your team, friends or adoring fans via an easy-access play link.
Right now we support desktop (PC, Mac, Linux) and also WebGL, where you can host and instantly play your multiplayer game and share it around the world.
Build your game to a local folder on your desktop as you would normally.
In the coherence menu in Unity, select Share -> Build Upload. In this window you can select which platform the build is for, and you also need to browse to the local folder containing the build.
Click "Upload" or "Begin Upload" and coherence will ask you to confirm that it's okay to compress and upload the build to the web dashboard.
Once the build has been uploaded (signified by the green tick), you can share it by enabling sharing and copying the public URL. Anyone with this link can access the build.
If you uploaded a WebGL build, you can play it instantly from that public link.
The state of network entities that are currently not simulated locally (either because they are being simulated on another game client or on a simulator) cannot be affected directly.
Network commands can help us affect the state indirectly, but for anything more involved, an authority transfer might be necessary.
In the design phase, CoherenceSync objects can be configured to handle authority transfer in different ways:
Request. Authority transfer may be requested, but it may be rejected by the receiving party.
Steal. Authority will always be given to the requesting party on a FCFS ("first come first serve") basis.
Not transferable. Authority cannot be transferred.
Note also that you need to set up Auto-adopt Orphan if you want orphans to be adopted automatically by the nearest player.
When using Request, an optional callback OnAuthorityRequestedByConnection can be set on the CoherenceSync behaviour. If the callback is set, its result will override the Approve Requests setting on the behaviour.
The request can be approved or rejected in the callback.
Requesting authority is very straightforward.
As the transfer is asynchronous, we have to subscribe to one or more Unity Events in CoherenceSync to learn the result.
The request will first go to the replication server and be passed onto the receiving simulator or game client, so it may take a few frames to get a response.
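A sketch of what this could look like in code. The method and event names used here (RequestAuthority, OnAuthorityGained, OnAuthorityTransferRejected) are assumptions derived from the event names listed on this page and may differ in your SDK version:

```csharp
using UnityEngine;

// Sketch: request authority over a networked entity and react to the
// asynchronous result. API names are assumptions, not verified calls.
public class AuthorityRequester : MonoBehaviour
{
    private CoherenceSync sync;

    private void Start()
    {
        sync = GetComponent<CoherenceSync>();
        // The result arrives via events, possibly a few frames later,
        // after a round trip through the replication server.
        sync.OnAuthorityGained.AddListener(
            () => Debug.Log("We now simulate this entity"));
        sync.OnAuthorityTransferRejected.AddListener(
            () => Debug.Log("Authority request denied"));
    }

    public void TryTakeAuthority()
    {
        sync.RequestAuthority();
    }
}
```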
These events are also exposed to the Unity inspector in the Events on authority transfer section of the CoherenceSync behaviour.
Now we can finally deploy our schema and replication server into the cloud.
In the Project Settings tab for coherence, click on Upload.
The status should change from "Unknown" to "In Sync".
Your project schema is now deployed with the correct version of the replication server already running in the cloud. You will be able to see this in your cloud dashboard status.
You can now build the project again and send the build to your friends for testing.
You will be able to play over the internet without worrying about firewalls and local network connections.
Make sure everybody selects the same region, and that this region is not local, before connecting.
Now that we have tested our project locally, it's time to upload it to the cloud and share it with our friends and colleagues. To be able to do that, we need to create a free account with coherence.
Create an account or log into an existing one.
Create an organization.
Under the organization, create a new project.
Copy the token to your clipboard.
Open Unity and navigate to the Project Settings. Open the coherence tab.
Paste the token into the Portal Token section.
Once you have pasted the portal token successfully, you need to fetch the runtime token as well.
For optimal runtime performance, we need to create a schema and perform code generation specific to our project. Learn more about this in the baking section.
It's important that your prefab is in a Resources folder so that Unity can load it at runtime. This is a Unity requirement; see the Unity documentation for more info.
You can find out more about CoherenceSync in its dedicated section.
There are also some Events that are triggered at different times.
Even if an entity is not currently being simulated locally, we can still affect its state by sending a network command.
Support for requests based on is coming soon.
If the status does not say In Sync, or if you encounter any other issues with the server interface, please refer to the troubleshooting section.
The Connect Dialog fetches all the regions available for your project. This depends on the project configuration (e.g. the regions that you have selected for your project in the portal).
We are working on a WebGL / WebAssembly option that will automatically upload the browser-playable build to your own personal webpage that you can share with your friends. For more information about our roadmap, please contact our developer relations team.
In your web browser, navigate to the coherence Developer Portal.
Open the project dashboard and find the runtime token.
You can fetch the runtime token by clicking on the down-arrow button on the right side of the input field.
It is often useful to know when a synced property has changed its value. This can easily be achieved using the OnValueSyncedAttribute. This attribute lets you define a method that will be called each time the value of a synced member (field or property) changes on the non-simulated version of an entity.
Let's start with a simple example:
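The original code sample is missing from this page, so here is a sketch reconstructed from the description that follows (a Health field whose changes drive an UpdateHealthLabel method). The attribute usage and callback signature are assumptions and may differ in your SDK version:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of OnValueSynced usage, reconstructed from this page's
// description. Attribute signature is an assumption.
public class Player : MonoBehaviour
{
    public Text healthLabel; // assumed UI label, assigned in the inspector

    [OnValueSynced(nameof(UpdateHealthLabel))]
    public float Health;

    // Called automatically on non-simulated instances whenever
    // Health is synced to a new value.
    public void UpdateHealthLabel(float oldValue, float newValue)
    {
        healthLabel.text = $"Health: {newValue}";
        Debug.Log($"Health changed by {newValue - oldValue}");
    }
}
```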
Whenever the value of the Health field gets updated (synced with its simulated version), UpdateHealthLabel will be called automatically, changing the health label text and printing a log with the health difference.
This comes in handy in projects that use authoritative simulators. The client code can easily react to changes in the Player entity state introduced by the simulator, updating the visual representation (which the simulator doesn't need).
The OnValueSyncedAttribute requires using baked scripts.
Remember that the callback method will be called only on a non-simulated instance of an entity. On a simulated (owned) instance, you have to call the selected method manually whenever the value of the given field/member changes. We recommend using properties with a backing field for this.
The OnValueSynced feature can be used only on members of user-defined types; that is, there's no way to be notified about a change in the value of a Unity type member, like transform.position. This might however change in the future, so stay tuned!
All persistent objects remain in the world for the entire lifetime of the replication server and, periodically, the replication server records the state of the world and saves it to physical storage. If the replication server is restarted, then the saved persistent objects are reloaded when the replication server resumes.
When we connect to a game world with a game client, the traditional approach is that all entities originating on our client are session-based. This means that when the client disconnects, they will disappear from the network world for all players.
A persistent object, however, will remain on the replication server even when the client or simulator that created or last simulated it, is gone.
This allows us to create a living world where player actions leave lasting effects.
In a virtual world, examples of persistent objects are:
A door anyone can open, close or lock
User-generated or user-configured objects left in the world to be found by others
Game progress objects (e.g. in PvE games)
Voice or video messages left by users
NPCs wandering around the world using AI logic
Player characters on "auto pilot" that continue affecting the world when the player is offline
And many, many more
A persistent object with no simulator is called an orphan. Orphans can be configured to be auto-adopted by clients or simulators on a FCFS basis.
coherence input queues are backed by a rolling buffer of inputs transmitted between the clients. This buffer can be used to build a fully deterministic simulation with client-side prediction, rollback, and input delay. This game networking model is often called GGPO (Good Game Peace Out).
Input delay allows for smooth, synchronized netplay with almost no negative effect on the user experience. Input is scheduled to be processed X frames in the future. Consider a fighting game scenario with two players. At frame 10, Player A presses a kick button that is scheduled to be executed at frame 13. This input is immediately sent to Player B. With a decent internet connection, there's a big chance that Player B will receive that input even before reaching frame 13. Thanks to this, the simulation is always in sync and can progress steadily.
Prediction is used to run the simulation forward even in the absence of inputs from other players. Consider the scenario from the previous paragraph: what if Player B doesn't receive the input on time? The answer is very simple: we just assume that the input state hasn't changed and progress with the simulation. As it turns out, this assumption is valid most of the time.
Rollback is used to correct the simulation when our predictions turn out wrong. The game keeps historical states for past frames. When an input is received for a past simulation frame, the system checks whether it matches the input prediction made at that frame. If it does, we don't have to do anything (the simulation is correct up to that point). If it doesn't match, however, we need to restore the simulation to the last known valid state (the last frame which was processed with non-predicted inputs). After restoring the state, we re-simulate all frames up to the current one, using the fresh inputs.
In a deterministic simulation, given the same set of inputs and a state, we are guaranteed to receive the same output. In other words, the simulation is always predictable. Deterministic simulation is a key part of the GGPO model, as well as of the lockstep model, because it lets us run exactly the same simulation on multiple clients without the need to synchronize big and complex states.
Implementing a deterministic simulation is a non-trivial task. Even the smallest divergence in simulation can lead to a completely different game outcome. This is usually called a desync. Here's a list of common determinism pitfalls that have to be avoided:
Using Update to run the simulation (every player might run at a different frame rate)
Using coroutines, asynchronous code, or system time in a way that affects the simulation (anything time-sensitive is almost guaranteed to be non-deterministic)
Using Unity physics (it is non-deterministic)
Using a random number generator without prior seed synchronization
Non-symmetrical processing (e.g. processing players by their spawn order which might be different for everyone)
Relying on floating-point determinism across different platforms, compiler versions, or processor types
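For instance, the random number pitfall is avoided by agreeing on a seed before the session starts. A sketch, where sharedSeed is a placeholder for a value exchanged by the clients (e.g. chosen by the host):

```csharp
// Every client constructs its generator from the same agreed-upon seed,
// so the generated sequence is identical everywhere - provided all
// clients also consume values in the same order, the same number of times.
var rng = new System.Random(sharedSeed);
int criticalRoll = rng.Next(0, 100);
```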
We'll create a simple, deterministic simulation using provided utility components.
This is the recommended way of using input queues since it greatly reduces the implementation complexity and should be sufficient for most projects. If you'd prefer to have full control over the input code, feel free to use the CoherenceInput and InputBuffer directly.
Our simulation will synchronize movement across multiple clients, using rollback and prediction to cover for latency.
Start by creating a Player component and a prefab for it. We'll use the client connection system to make our Player represent a session participant and automatically spawn the selected prefab for each player that connects to the server. The Player will also be responsible for handling inputs using the CoherenceInput component.
Create a prefab from cube, sphere, or capsule, so it will be visible on the scene. That way later it will be easier to verify visually if the simulation works.
When building an input-based simulation it is important to use the client connection system, which is not subject to the live query. Objects that might disappear or change based on client-to-client distance are likely to cause simulation divergence, leading to a desync.
Our Player code looks as follows:
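A minimal sketch of such a component (the Coherence.Toolkit namespace and the exact CoherenceInput axis methods are assumptions based on this guide):

```csharp
using UnityEngine;
using Coherence.Toolkit; // namespace assumed

public class Player : MonoBehaviour
{
    private CoherenceInput input;

    private void Awake() => input = GetComponent<CoherenceInput>();

    // Read this player's "Mov" axis - called by the central simulation code.
    public Vector2 GetMovement() => input.GetAxisState("Mov");

    // Publish the local player's input state - called by the simulation code.
    public void SetMovement(Vector2 movement) => input.SetAxisState("Mov", movement);
}
```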
The GetMovement and SetMovement methods will be called by our "central" simulation code. Now that we have our Player defined, let's prepare a prefab for it. Create a game object and attach the Player component to it. Using the CoherenceSync inspector, create a prefab. The inspector view for our prefab should look as follows:
A couple of things to note:
A Mov axis has been added to the CoherenceInput, which will let us sync the movement input state
Unlike the server-side input queues, our simulation uses client-to-client communication, meaning each client is responsible for its entity and for sending inputs to other clients. To ensure this behavior, set CoherenceSync > Simulation and Interpolation > Simulation Type to Client Side
In a deterministic simulation, it is our code that is responsible for producing deterministic output on all clients. This means that automatic transform position syncing is no longer desirable. To turn it off, check CoherenceSync > Manual Position Update
In order for inputs to be processed in a deterministic way, we need to use fixed simulation frames. Tick the CoherenceInput > Use Fixed Simulation Frames checkbox
Make sure to use the baked mode (CoherenceInput > Use Baked Script) - inputs do not work in reflection mode
Since our player is the base of the client connection, we must set it as the connection prefab in the CoherenceMonoBridge and enable the global query:
Before we move on to the simulation, we need to define our simulation state, which is a key part of the rollback system. The simulation state should contain all the information required to "rewind" the simulation in time. For example, in a fighting game that would be the position of all players, their health, and perhaps a combo gauge level. In a shooting game, it could be player positions, health, ammo, and map objective progression.
In the example we're building, player position is the only state. We need to store it for every player:
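A sketch of such a state, with one entry per connected player, kept in the same order on every client:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Everything required to rewind the simulation to a given frame.
public struct SimulationState
{
    public List<Vector3> Positions;
}
```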
The state above assumes the same number and order of players in the simulation. The order is guaranteed by the CoherenceInputSimulation; however, handling a variable number of clients is up to the developer.
Simulation code is where all the logic happens, including applying inputs and moving our Players:
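In outline, the class derives from the CoherenceInputSimulation base and fills in the callbacks described below. The generic parameter and exact method signatures here are assumptions, not verbatim SDK code:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch only - method signatures are assumed from the descriptions below.
public class Simulation : CoherenceInputSimulation<SimulationState>
{
    public float speed = 3f;

    protected override void SetInputs(Player localPlayer)
    {
        // Sample local input and push it into the CoherenceInput.
        var move = new Vector2(Input.GetAxisRaw("Horizontal"),
                               Input.GetAxisRaw("Vertical"));
        localPlayer.SetMovement(move);
    }

    protected override void Simulate(long frame, List<Player> players)
    {
        // Apply everyone's inputs deterministically, in a fixed player order.
        foreach (var player in players)
        {
            Vector2 move = player.GetMovement();
            player.transform.position +=
                new Vector3(move.x, 0f, move.y) * speed * Time.fixedDeltaTime;
        }
    }

    protected override void Rollback(long frame, SimulationState state)
    {
        // Restore every player's position from the snapshot.
    }

    protected override SimulationState CreateState()
    {
        // Snapshot current player positions so this frame can be rolled back to.
        return default; // fill with the players' positions in a real implementation
    }
}
```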
SetInputs is called by the system when it's time for our local Player to update its input state using the CoherenceInput
Simulate is called when it's time to simulate a given frame. It is also called during frame re-simulation after a misprediction - don't worry though, the complex part is handled by the CoherenceInputSimulation internals; all you need to do in this method is apply inputs from the CoherenceInput to run the simulation
Rollback is where we need to set the simulation state back to how it was at a given frame. The state is already provided in the state parameter; we just need to apply it
CreateState is where we create a snapshot of our simulation so it can be used later in case of a rollback
OnClientJoined and OnClientLeft are optional callbacks. We use them here to start and stop the simulation depending on the number of clients
The SimulationEnabled flag is set to false by default. That's because in a real-world scenario the simulation should start only after all clients have agreed for it to start, on a specific frame chosen, for example, by the host.
Starting the simulation on a different frame for each client is likely to cause a desync (as is joining in the middle of a session without prior simulation state synchronization). Simulation start synchronization is out of the scope of this guide, so in our simplified example we simply assume that clients don't start moving immediately after joining.
As a final step, attach the Simulation script to the MonoBridge object in the scene and link the MonoBridge back to the Simulation:
That's it! Once you build a client executable you can verify that the simulation works by connecting two clients to the replication server. Move one of the clients using arrow keys while observing the movement being synced on the other one.
Due to the FixedNetworkUpdate running at a different (usually lower) rate than Unity's Update loop, polling inputs using functions like Input.GetKeyDown is susceptible to input loss, i.e. keys that were pressed during the Update loop might not show up as pressed in the FixedNetworkUpdate.
To illustrate why this happens, consider the following scenario: given Update running five times for each FixedNetworkUpdate, if we polled inputs from the FixedNetworkUpdate there's a chance that an input was fully processed within the five Updates in-between FixedNetworkUpdates, i.e. a key was "down" on the first Update, "pressed" on the second, and "up" on a third one.
To prevent this issue from occurring you can use the FixedUpdateInput class:
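Usage mirrors the legacy Input class. A sketch, assuming FixedUpdateInput exposes GetKeyDown-style methods and that the instance is obtained from the SDK (the accessor is an assumption):

```csharp
// In code running on the fixed network update, replace:
//   if (Input.GetKeyDown(KeyCode.Space)) ...
// with the FixedUpdateInput equivalent, which won't miss presses that
// happened entirely between two fixed network frames:
if (fixedUpdateInput.GetKeyDown(KeyCode.Space))
    jump = true;
```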
The FixedUpdateInput works by sampling inputs at Update and prolonging their lifetime to the network FixedNetworkUpdate so they can be processed correctly there. For our last example, that would mean "down" & "pressed" registered in the first FixedNetworkUpdate after the initial five updates, followed by an "up" state in the subsequent FixedNetworkUpdate.
The FixedUpdateInput works only with the legacy input system (UnityEngine.Input).
There's a limit to how many frames the clients can predict, controlled by CoherenceInput.InputBufferSize. When clients try to predict too many frames into the future (more frames than the size of the buffer), the simulation will issue a pause. This pause affects only the local client. As soon as the client receives enough inputs to run another frame, the simulation will resume.
To get notified about the pause, use the OnPauseChange(bool isPaused) method from the CoherenceInputSimulation:
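A sketch of the override (the protected override shape is an assumption):

```csharp
using UnityEngine;

public class Simulation : CoherenceInputSimulation<SimulationState>
{
    public GameObject pauseOverlay;

    // Invoked when prediction hits the input buffer limit, and again on resume.
    protected override void OnPauseChange(bool isPaused)
    {
        // e.g. toggle a "waiting for connection" overlay.
        pauseOverlay.SetActive(isPaused);
    }
}
```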
This can be used, for example, to display a pause screen that informs the user about a bad internet connection.
To recover from the time gap created by the pause, the client will automatically speed up the simulation. The time scale change is gradual and, in the case of a small frame gap, can be unnoticeable. If manual control over the timescale is desired, set the CoherenceMonoBridge.controlTimeScale flag to false.
The CoherenceInputSimulation has a built-in debugging utility that collects various information about the input simulation on each frame. This data can prove extremely helpful in finding a simulation desync point.
The CoherenceInputDebugger can be used outside the CoherenceInputSimulation. It does, however, require the CoherenceInputManager, which can be retrieved through the CoherenceMonoBridge.InputManager property.
Since debugging might induce a non-negligible overhead, it is turned off by default. To turn it on, add a COHERENCE_INPUT_DEBUG scripting define:
From that point on, all the debugging information will be gathered. The debug data is dumped to a JSON file as soon as the client disconnects. The file can be found under the root directory of the executable (in the case of the Unity Editor, the project root directory) under the following name: inputDbg_<ClientId>.json, where <ClientId> is the CoherenceClientConnection.ClientId of the local client.
Data handling behaviour can be overridden by setting the CoherenceInputDebugger.OnDump delegate, where the string parameter is a JSON dump of the data.
The debugger is available as a property on the simulation base class: CoherenceInputSimulation.Debugger. Most of the debugging data is recorded automatically; however, the user is free to append arbitrary information to a frame's debug data, as long as it is JSON-serializable. This is done using the CoherenceInputDebugger.AddEvent method:
Since the simulation can span an indefinite number of frames, it might be wise to limit the number of debug frames kept by the debugging tool (unlimited by default). To do this, use the CoherenceInputDebugger.FramesToKeep property. For example, setting it to 1000 will instruct the debugger to keep only the latest 1000 frames' worth of debugging information in memory.
Since the debugging tool uses JSON as its serialization format, any data that is part of the debug dump must be JSON-serializable. An example of this is the simulation state. The simulation state from the quickstart example is not JSON-serializable by default, because Unity's Vector3 doesn't serialize well out of the box. To fix this we need to give the JSON serializer a hint:
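A sketch of the hinted state, using Newtonsoft.Json's JsonProperty attribute together with the custom UnityVector3Converter (the converter is assumed to exist alongside this struct):

```csharp
using System.Collections.Generic;
using Newtonsoft.Json;
using UnityEngine;

public struct SimulationState
{
    // Serialize each Vector3 through a custom converter instead of letting
    // the serializer walk Unity's self-referencing Vector3 properties.
    [JsonProperty(ItemConverterType = typeof(UnityVector3Converter))]
    public List<Vector3> Positions;
}
```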
With the JsonProperty attribute, we can control how a given field/property/class is serialized. In this case, we've instructed the JSON serializer to use the custom UnityVector3Converter for serializing the vectors.
You can write your own JSON converters using the example found here. For information on the Newtonsoft JSON library that we use for serialization check here.
To find a problem in the simulation, we can compare the debug dumps from multiple clients. The easiest way to find a divergence point is to search for a frame where the hash differs for one or more of the clients. From there one can inspect the inputs and simulation states from previous frames to find the source of the problem.
Here's the debug data dump example for one frame:
Explanation of the fields:
Frame - the frame of this debug data
AckFrame - the common acknowledged frame, i.e. the lowest frame for which inputs from all clients have been received and are known to be valid (not mispredicted)
ReceiveFrame - the common received frame, i.e. the lowest frame for which inputs from all clients have been received
AckedAt - the frame at which this frame was acknowledged, i.e. set as known to be valid (not mispredicted)
MispredictionFrame - a frame that is known to be mispredicted, or -1 if there's no misprediction
Hash - hash of the simulation state. Available only if the simulation state implements the IHashable interface
Initial state - the original simulation state at this frame, i.e. the one before rollback and resimulation
Initial inputs - the original inputs at this frame, i.e. the ones that were used for the first simulation of this frame
Updated state - the state of the simulation after rollback and resimulation. Available only in case of rollback and resimulation
Updated inputs - inputs after being corrected (post-misprediction). Available only in case of rollback and resimulation
Input buffer states - a dump of the input buffer states for each client. For details on the fields, see the InputBuffer code documentation
Events - all debug events registered in this frame
One of the things stored as part of the debugging information is the simulation state. In the case of a complex state, searching for a difference across multiple clients and hundreds of frames can quickly become tedious. To simplify the problem, we can use the hash calculation feature of the input debugger.
Every simulation state class/struct that implements the IHashable interface will have its hash automatically calculated and stored as part of the debugging information. An example IHashable implementation:
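A sketch of such an implementation - the member name and return type required by IHashable are assumptions:

```csharp
using System.Collections.Generic;
using UnityEngine;

public struct SimulationState : IHashable
{
    public List<Vector3> Positions;

    public uint GetHash()
    {
        // Order-dependent combination: a divergence in any player's
        // position changes the resulting hash.
        unchecked
        {
            uint hash = 17;
            foreach (var position in Positions)
                hash = hash * 31 + (uint)position.GetHashCode();
            return hash;
        }
    }
}
```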
There are two main variables which affect the behaviour of the InputBuffer:
Input buffer size - determines how far into the future the input system is allowed to predict. The bigger the size, the more frames can be predicted without running into a pause. Note that the further we predict, the more unexpected the rollback can be for the player. The InitialBufferSize value can be set directly in code; however, it must be done before the Awake of the baked component, which might require a script execution order configuration.
Input buffer delay - dictates how many frames must pass before an input is applied. In other words, it defines how "laggy" the input is. The higher the value, the less likely clients are to run into prediction (because a "future" input is sent to other clients), but the more unresponsive the game might feel. This value can be changed freely at runtime, even during a simulation (though this is not recommended due to inconsistent input feel).
The other two options are:
Disconnect on time reset - if set to true, the input system will automatically issue a disconnect on an attempt to resync time with the server. This happens when the client's connection was so unstable that, frame-wise, it drifted too far away from the server. To recover from that situation, the client performs an immediate "jump" to what it thinks is the actual server frame. There's no easy way to recover from such a "jump" in deterministic simulation code, so the advised action is to simply disconnect.
Use fixed simulation frames - if set to true, the input system will use the IClient.ClientFixedSimulationFrame frame for the simulation; otherwise the IClient.ClientSimulationFrame is used. Setting this to true is recommended for deterministic simulations.
The fixed network update rate is based on the Fixed Timestep configured through the Unity project settings:
To know the exact fixed frame number that is executing at any given moment, use the IClient.ClientFixedSimulationFrame or CoherenceInputSimulation.CurrentSimulationFrame property.
Commands are network messages sent from one entity to another entity in the networked world. Functionally equivalent to standard RPCs, commands bind to public methods accessible on the GameObject that CoherenceSync sits on.
In the design phase, you can expose public methods the same way you select fields for synchronization: through the Configure window on your CoherenceSync component.
Selected public methods will be exposed as network commands in the baking process.
The button on the right of the method lets you choose the routing mode. Commands with the Send to Authority Only mode can be sent only to the authority of the target entity, while ones with Send to All Instances can be broadcast to all clients that have a copy of the entity. The routing is enforced by the Replication Server as a security measure, so outdated or malicious clients can't break the game.
To send a command, we call the SendCommand method on the target CoherenceSync object. It takes a number of arguments:
The type argument (within the < and >) must be the type of the receiving MonoBehaviour. This ensures that the correct method gets called if the receiving GameObject has components that implement methods sharing the same name.
The first argument is the name of the method on the MonoBehaviour that we want to call. It is good practice to use the C# nameof expression when referring to the method name, since it prevents accidentally misspelling it, or forgetting to update the string if the method changes name.
The second argument is an enum that specifies the MessageTarget of the command. The possible values are:
MessageTarget.All – this will send the command to each client that has an instance of this entity.
MessageTarget.AuthorityOnly – this will send the command only to the client that has authority over the entity.
Mind that the target must be compatible with the routing mode set in the bindings, i.e. Send to authority allows only MessageTarget.AuthorityOnly, while Send to all instances allows both values.
The rest of the arguments (if any) vary depending on the command itself. We must supply as many parameters as are defined in the target method and the schema.
Here's an example of how to send a command:
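A sketch, where PlayerHealth and TakeDamage stand in for your own component and method - only SendCommand and MessageTarget come from the SDK:

```csharp
// Broadcast a damage command to every client that has this entity.
sync.SendCommand<PlayerHealth>(
    nameof(PlayerHealth.TakeDamage), // method exposed in the Configure window
    MessageTarget.All,               // must match the binding's routing mode
    25);                             // arguments matching TakeDamage(int amount)
```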
We don't have to do anything special to receive the command. The system will simply call the corresponding method on the target network entity.
If the target is a locally simulated entity, SendCommand will recognize that and not send a network command, but instead simply call the method directly.
Sometimes you want to inform a bunch of different entities about a change. For example, an explosion impact on a few players. To do so, we have to go through the instances we want to notify, and send commands to each of them.
In this example, a command will get sent to each network entity under the authority of this client. To make it only affect entities matching certain criteria, you need to filter which CoherenceSync objects you send the command to yourself.
Some of the supported primitive types can carry null values; this includes:
Byte[]
string
Entity references: CoherenceSync, Transform, and GameObject
In order to send one of these values as null (or default), we need to use special syntax to ensure the right method signature is resolved. Null-valued arguments need to be passed as a ValueTuple<Type, object> so that their type can be correctly resolved. In the example above, sending a null value for a string is written as (typeof(string), (string)null) and the null Byte[] argument is written as (typeof(Byte[]), (Byte[])null).
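Putting it together, a hypothetical command taking a string and a Byte[] could be sent with both arguments null like this (ChatComponent and OnData are illustrative names):

```csharp
sync.SendCommand<ChatComponent>(
    nameof(ChatComponent.OnData),
    MessageTarget.All,
    (typeof(string), (string)null),  // null string, type still resolvable
    (typeof(byte[]), (byte[])null)); // null Byte[], same idea
```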
Mis-ordered arguments, type mismatches, or unresolvable types will result in logged errors and the command not being sent.
When a null argument is deserialized on a client receiving the command, it is possible that the null value is converted into a non-null default value. For example, sending a null string in a command could result in clients receiving an empty string. As another example, a null Byte[] argument could be deserialized into an empty Byte[0] array. So, receiving code should be ready for either a null value or an equivalent default.
When a prefab is not using a baked script there are some restrictions for what types can be sent in a single command:
4 entity references
maximum of 511 bytes total of data in other arguments
a single Byte[] argument can be no longer than 509 bytes because of overhead
Some network primitive types send extra data when serialized (like Byte arrays and string types) and some are compressed using default compression settings (like int, float, Vector2, and Vector3) so gauging how many bits a command will use is difficult. If a single command is bigger than the supported packet size, it won't work even with baked code. For a good and performant game experience, always try to keep the total command argument sizes low.
If multiple commands are sent to a single entity or to multiple entities in the same frame or if there is significant network instability, coherence does not guarantee that commands will be received by their targets in the same order as they were sent.
coherence only replicates animation parameters, not state. Latency can create scenarios where different clients reproduce different animations. Take this into account when working with Animator Controllers that require precise timings.
Unity Animator's parameters are bindable out of the box, with the exception of triggers.
Triggers can be invoked over the network using commands. Here's an example where we inform networked clients that we have played a jump animation:
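A sketch of the idea - the component and trigger names are illustrative; SendCommand and MessageTarget work as described above:

```csharp
using UnityEngine;

public class JumpController : MonoBehaviour
{
    public Animator animator;
    private CoherenceSync sync;

    private void Awake() => sync = GetComponent<CoherenceSync>();

    // Exposed as a network command; each receiving client fires the
    // trigger on its local Animator.
    public void PlayJumpAnimator() => animator.SetTrigger("Jump");

    // Called locally when the player jumps.
    public void Jump() =>
        sync.SendCommand<JumpController>(nameof(PlayJumpAnimator),
            MessageTarget.All);
}
```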
Now, bind to PlayJumpAnimator.
The client connection system lets you uniquely identify users connected to the same session, find any user by their ID, spawn objects whenever a new user joins the session, and send messages between those users.
To achieve this, a special connection entity is automatically created by the replication server for each connected client, including simulators. These entities are subject to a different rule set than standard entities. Connection entities:
Can't be created or destroyed by the client - they are always replication server-driven
Are global - they are replicated across clients regardless of the in-simulation distance or LiveQuery radius
Client connections shine whenever there's a need to communicate something to all the connected players. Usage examples:
Global chat
Game state changes: game started, game ended, map changed
Server announcements
Server-wide leaderboard
Server-wide events
The global nature of client connections doesn't fit all game types - for example, it rarely makes sense to keep every client informed about the presence of all players on the server in an MMORPG (think World of Warcraft). Because of this, client connections are turned off by default.
To enable client connections, turn on the global query in the MonoBridge (the Global Query On toggle):
Disabling the global query on one client doesn't affect other clients, i.e. the connection entity of this client will still be visible to other clients that have the global query turned on.
Most of the client connection functionality is accessible through the CoherenceMonoBridge.ClientConnections object:
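For example, connections might be enumerated like this (the GetAll accessor is an assumption; the CoherenceClientConnection properties are the ones listed below):

```csharp
using UnityEngine;

// Log every connection currently known to this client.
foreach (CoherenceClientConnection connection in monoBridge.ClientConnections.GetAll())
{
    Debug.Log($"ClientID: {connection.ClientID}, " +
              $"type: {connection.Type}, " +
              $"mine: {connection.IsMyConnection}");
}
```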
Each connection is represented by a plain C# CoherenceClientConnection object. It contains all the important information about a connection - its ClientID, Type, whether it IsMyConnection, and a reference to the GameObject and CoherenceSync associated with it.
The CoherenceClientConnection.ClientID is guaranteed not to change during the connection's lifetime. However, if a client disconnects and then connects again to the same room/world, a new ClientID will be assigned (since a new connection was established).
Each client connection can have a GameObject with CoherenceSync automatically spawned and associated with it. These objects, like any other objects with CoherenceSync, can be used for syncing properties or sending messages, with a little twist - they are global and thus not limited by the live query radius. That makes them perfect candidates for operations like:
Syncing global information - name, stats, tags, etc.
Sending global messages - chat, server interaction
To enable connection objects:
This step is described in detail in the Prefab setup section. In short, a prefab with a CoherenceSync and a custom component (PlayerConnection in this example) must be created and placed in the Resources directory:
For the system to know which object to create for every new client connection, we have to link our prefab to the MonoBridge. Simply drag the prefab to the Client field in the MonoBridge inspector:
From now on, every new connection will be assigned an instance of this prefab, which can be accessed through the CoherenceClientConnection.GameObject property.
Note that there's a separate field for the Simulator Connection Prefab. It can be used to spawn a completely different object for the simulator connection, one that may contain simulator-specific commands and replicated properties. If the field is left empty, no object will be created for the simulator connection.
The prefab selection process can also be controlled from code, using the CoherenceMonoBridge.ClientConnections.ProvidePrefab callback:
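A sketch of the callback - its exact delegate signature and the ConnectionType enum name are assumptions:

```csharp
// Choose a different prefab for simulators than for regular clients.
monoBridge.ClientConnections.ProvidePrefab = (clientId, connectionType) =>
{
    return connectionType == ConnectionType.Simulator
        ? simulatorConnectionPrefab
        : playerConnectionPrefab;
};
```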
A prefab provided through the ProvidePrefab callback takes precedence over prefabs linked in the inspector.
Client messages are commands sent between the client connection objects. Implementing client messages is as simple as adding a new method to the component used by our connection prefab and binding it in the configuration:
Don't forget to bind the new command:
Client messages can be sent using the CoherenceClientConnection.SendClientMessage method:
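A sketch, assuming the method mirrors SendCommand's shape (PlayerConnection is this page's example component; OnChatMessage is an illustrative handler name):

```csharp
// Send a chat line to the client behind a given connection.
void SendChat(CoherenceClientConnection target, string text)
{
    target.SendClientMessage<PlayerConnection>(
        nameof(PlayerConnection.OnChatMessage), // handler bound on the connection prefab
        MessageTarget.AuthorityOnly,            // deliver only to the owner
        text);
}
```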
If the ClientID of the message recipient is known, we can use the CoherenceMonoBridge.ClientConnections directly to send a client message:
Input queues enable a simulator to take control of the simulation of another client's objects based on the client's inputs.
In situations where you want a centralized simulation of all inputs. Many game genres use input queues and centralized simulation to guarantee the fairness of actions or the stability of physics simulations.
In situations where clients have low processing power. If the clients don't have sufficient processing power to simulate the world it makes sense to use input queue and just display the replicated results on the clients.
In situations where determinism is important. RTS and fighting games will use input queues and rollback to process input events in a shared (not centralized), and deterministic, way so that all clients simulate the same conditions and produce the same results.
coherence currently only supports using input queues in a centralized way, where a single simulator is set up to process all inputs and replicate the results to all clients.
Setting up an object to simulate via input queues using CoherenceSync is done in three steps:
The Simulation Type of the CoherenceSync component is set to Simulation Server With Client Input. Setting the simulation type to this mode instructs the client to automatically give authority over this object to the simulator in charge of simulating all inputs on all objects.
Each simulated CoherenceSync component is able to define its own, unique set of inputs for simulating that object. An input can be one of:
Button. A button input is tracked with just a binary on/off state.
Button Range. A button range input is tracked with a float value from 0 to 1.
Axis. An axis input is tracked as two floats from -1 to 1 in both the X and Y axis.
String. A string value representing custom input state. (max length of 63 characters)
To declare the inputs used by the CoherenceSync component, the CoherenceInput component is added to the object. The input is named and the fields are defined.
In this example, the input block is named "Player Movement" and the inputs are WASD and "mouse" for the XY mouse position.
In order for the inputs to be simulated on CoherenceSync objects, they must be optimized through baking.
If the CoherenceInput fields or name is changed, then the CoherenceSync object must be re-baked to reflect the new fields/values.
When a simulator is running it will find objects that are set up using CoherenceInput components and will automatically assume authority and perform simulations. Both the client and simulator need to access the inputs of the CoherenceInput of the replicated object. The client uses the Set* methods and the simulator uses the Get* methods to access the state of the inputs of the object. In all of these methods, the name parameter is the same as the Name field in the CoherenceInput component.
Client-Side Set* Methods
public void SetButtonState(string name, bool value)
public void SetButtonRangeState(string name, float value)
public void SetAxisState(string name, Vector2 value)
public void SetStringState(string name, string value)
Simulator-Side Get* Methods
public bool GetButtonState(string name)
public float GetButtonRangeState(string name)
public Vector2 GetAxisState(string name)
public string GetStringState(string name)
For example, the mouse click position can be passed from the client to the simulator via the "mouse" field in the setup example.
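Using the Set*/Get* methods above, that exchange looks like this (sketch):

```csharp
// Client side: publish the clicked position into the "mouse" axis field.
input.SetAxisState("mouse", clickPosition);

// Simulator side: read the latest client value when simulating the object.
Vector2 mouse = input.GetAxisState("mouse");
```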
The simulator can access the state of the input to perform simulations on the object which are then reflected back to the client as any replicated object is.
Each object only accepts inputs from one specific client, called the object's Input Owner.
When a client spawns an object it automatically becomes the Input Owner for that object. This way, the object's creator will retain control over the object even after authority has been transferred to the simulator.
If an object is spawned directly by the simulator, you will need to assign the Input Owner manually. Use the SetInputOwner method on the CoherenceInput component to assign or re-assign a client that will take control of the object:
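A sketch of the reassignment (the ClientId parameter type is an assumption):

```csharp
// Give a specific client control over a simulator-spawned object.
CoherenceInput input = spawnedObject.GetComponent<CoherenceInput>();
input.SetInputOwner(clientId); // clientId obtained from the ClientConnection class
```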
The ClientId used to specify input owner can currently only be accessed from the ClientConnection class. For detailed information about setting up the ClientConnection prefab, see the Client connections page.
Use the OnInputOwnerAssigned event on the CoherenceSync component to be notified whenever an object changes input owner.
Generally, using server-side simulation with simulators makes the time from the client providing input to the object being updated with that input significantly longer than client-side simulation, because of the time required for the input to be sent to the simulator, processed, and the resulting updates returned across the network. This can cause visual lag. Using input queues allows the client to predict the simulation and provide a smooth playing experience.
If the client simulates the inputs on the object as well as applying them to the CoherenceInput component, and then blends the authoritative results from the simulator with the locally simulated results, a smoother and more responsive simulation is achieved.
Rollback is not currently available but is on the roadmap. Each client will be aware of the current global simulation frame, so inputs can be applied by each client at the same frame in time, making client-side prediction even more accurate.
The CoherenceSync editor interface allows us to define the Lifetime of a networked object. The following options are available:
Session Based. No persistence. The entity will disappear when the client or simulator disconnects.
Persistent. The entity will remain on the server until a simulating client deletes it.
Unique persistent objects need to be identified so that the system can know how to treat duplicate persistent objects.
Manually assigning a UUID means that each instance of this persistent object prefab is considered the same object regardless of where on the network it is instantiated. So, for example, if two clients instantiate the same prefab object with the same persistence UUID then only one is considered official and the other is replaced by the replication server.
The CoherenceUUID behaviour is used to uniquely identify a prefab.
It has several functions: you can generate a new ID for your object, and you can set Auto-generate UUID in scene to true, so that the object will receive a new ID each time.
Auto-generate UUID in scene does not work for persistent objects.
Deleting a persistent object is done the same as with any network object - by destroying its GameObject.
No matter how fast the internet becomes, conserving bandwidth will always be important. Some game clients might be on poor mobile networks with low upload and download speeds, or have high ping to the replication server and/or other clients, etc.
Additionally, sending more data than is required consumes more memory and unnecessarily burdens the CPU and potentially GPU, which could add to performance issues, and even to quicker battery drainage.
In order to optimize the data we are sending over the network, we can employ various techniques built into the core of coherence.
Delta-compression (automatic). When possible, only send differences in data, not the entire state every frame.
Compression and quantization (automatic and configurable). Various data types can be compressed to consume less bandwidth than they naturally would.
Simulation frequency (configurable). Most entities do not need to be simulated at 60+ frames per second.
Levels of detail (configurable). Entities need to consume less and less bandwidth the farther away they move from the observer.
Area of interest. Only replicate what we can see.
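The delta-compression idea can be sketched in a few lines. This is an illustration of the concept only; the actual wire format and diffing scheme used by the replication server are internal to coherence:

```python
def delta(previous, current):
    """Return only the fields that changed since the last replicated state."""
    return {key: value for key, value in current.items()
            if previous.get(key) != value}

# Full state last sent to the replication server vs. the new state:
prev = {"x": 10.0, "y": 0.0, "z": 5.0, "health": 100}
curr = {"x": 10.5, "y": 0.0, "z": 5.0, "health": 100}

update = delta(prev, curr)  # only "x" changed, so only "x" is sent
```

Sending the one changed field instead of the full state is what keeps per-frame updates small when most of an entity is static.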
The way you get information about the world is through LiveQueries. We set criteria for what part of the world we are interested in at each given moment. That way, the replicator won’t send information about everything that is going on in the game world everywhere, at all times.
Instead, we will just get information about what’s within a certain area, kind of like moving a torch around to look in a dark cave.
More complex area of interest types are coming in future versions of coherence.
A LiveQuery is a cube that defines the area of interest in a particular part of the world. It is defined by its position and its radius (half the side of the cube). There can be multiple LiveQueries in a single scene.
A classic approach is to put a LiveQuery on the camera and set the radius to correspond to the far clipping plane or visibility distance.
Moving the GameObject containing the LiveQuery will also notify the replication server that the query for that particular game client has moved.
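The containment rule can be sketched as follows. This is a conceptual illustration (the cube is axis-aligned and the radius is half its side); the actual filtering happens on the replication server:

```python
def inside_live_query(entity_pos, query_pos, radius):
    """True if the entity lies within the LiveQuery cube centered at query_pos.

    'radius' is half the side length of the cube, checked per axis.
    """
    return all(abs(e - q) <= radius for e, q in zip(entity_pos, query_pos))

inside_live_query((5, 0, 5), (0, 0, 0), 10)   # True: within the cube
inside_live_query((15, 0, 5), (0, 0, 0), 10)  # False: outside on the x axis
```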
In addition to the LiveQuery, coherence also supports filtering objects by tag. This is useful when you have some special objects that should always be visible regardless of world position.
To create a TagQuery, right click a GameObject in the scene and select coherence -> TagQuery from the context menu.
All networked GameObjects with matching tags will now be visible to the client. The coherence tag can be any string and can be configured separately from the Unity tag in the Advanced Settings section of the CoherenceSync component.
Tags and TagQueries can be updated at any time while the application is running, either from the Unity inspector or by setting CoherenceSync.tag and CoherenceTagQuery.tag in code.
Currently, only a single tag per GameObject and TagQuery is supported. To include objects with different tags, you can create multiple TagQuery objects for each tag.
In the future we plan to integrate TagQueries with LiveQueries allowing combined query restrictions, e.g., only show objects with tag "red" within a radius of 50.
If we don't do any special configuration, entity data is captured at the highest possible frequency and sent to the replication server. This often generates more data than is needed to efficiently replicate the entity's state across the network.
On a simulator, we can limit the framerate globally using Unity's built-in static variable Application.targetFrameRate.
coherence will automatically limit the target framerate of uploaded simulators to 30 frames per second. We plan to enable lifting this restriction in the future. Check back for updates in the next couple of releases.
Sample rate can also be configured individually for all fields with code.
In the future, you will be able to define per-field sample frequencies in the Optimization window.
This document explains how to set up an ever increasing counter that all clients have access to. This could be used to make sure that everyone can generate unique identifiers, with no chance of ever getting a duplicate.
By being persistent, the counter will also keep its value even if all clients log off, as long as the replication server is running.
First, create a script called Counter.cs and add the following code to it:
This script expects a command sent from a script called NumberRequester, which we will create below.
Next, add this script to a prefab with CoherenceSync on it, and select the counter field and the NextNumber method for syncing in the bindings window. To make the counter behave like we want, mark the prefab as "Persistent" and give it a unique persistence ID, e.g. "THE_COUNTER". Also change the adoption behaviour to "Auto Adopt":
Finally, make sure that a single instance of this prefab is placed in the scene.
Now, create a script called NumberRequester.cs. This will be an example MonoBehaviour that requests a unique number by sending the command GetNumber to the Counter prefab. As a single argument to this command, the NumberRequester will send an entity reference to itself. This makes it possible for the Counter to send back a response command (GotNumber) with the number that was generated. In this simple example we just log the number to the console.
To make this script work, add it to a prefab that has the CoherenceSync script and mark the GotNumber method for syncing in the bindings window.
A persistent object can be deleted only by the client or simulator that has authority over it. For indirect remote deletion, see the relevant section of this documentation.
Sometimes you want to synchronize entities that are connected to other entities. These relationships can be references between entities, but they can also involve direct parent-child relationship between game objects, or more nuanced use cases.
Here's a guide for what technique to use, depending on the situation:
If you have an entity that needs to keep a nullable reference to another entity, use a normal Entity reference. This includes any existing MonoBehaviour that has GameObject or Transform fields that you want to synchronize over the network.
If the entities are placed in a hierarchy, use the techniques for parent-child relationships.
Extrapolation or "dead reckoning" uses historical data to predict the future state of a component. The actual network data that arrives later can be used to interpolate or snap the predicted values to the correct ones.
We will be adding an example of extrapolation in a subsequent release.
Positions and LiveQueries in the world are compressed; the compression is defined by a maximum scale and a bit count. A larger scale at the same bit count means lower precision, and vice versa.
A default maximum world size of 2400 means that only values in the range [-2400, 2400] are supported on all spatial axes. If our game scenes are larger than 2400 x 2 (4800) units across, we can increase this value in the Settings window or in the schema, as illustrated below.
Don't forget to extend the LiveQuery scale to match the world position scale.
Depending on the settings in our project, data may not always arrive at a smooth 60 frames per second through the network. This is completely okay, but in order to make state changes (e.g. movement, rotation) appear smooth on the client, we use interpolation.
Interpolation is a type of estimation, a method of constructing new data points within the range of a discrete set of known data points.
The way interpolation works in coherence is that we wait for three data points and then start smoothing the subsequent values according to the interpolation parameters defined in the interpolation settings provided.
In the Configure window, each binding displays its interpolation settings next to it.
Built-in interpolation settings for position and rotation are provided out-of-the-box, but you are free to create your own and use them instead.
You can also create an interpolation settings asset: Assets > Create > coherence > Interpolation Settings
There, you have a few settings you can tweak:
Interpolation Type: the type of interpolation used. If set to None, the value will simply snap to the closest sample point without any blending.
Smooth Time: additional smoothing can be applied (using SmoothDamp) to clear out any jerky movement after regular interpolation has been performed.
Max Smoothing Speed: the maximum speed at which the value can change, unless teleporting.
Teleport Distance: if two consecutive samples are further apart than this, the value will "teleport" or snap to the new sample immediately without interpolating or smoothing in between.
Latency Control: Auto
With Latency Control set to Auto, latency will automatically adapt to the average sample delta time for all samples in the buffer. This allows each binding to operate with minimal latency without having to extrapolate data.
Factor: fudge factor applied to the average sample delta time. A factor of 1 means latency is exactly equal to time between samples, so the interpolated value should reach the last sample in the buffer at the exact time when a new sample is expected to arrive.
In general, a factor of 1.1 is recommended to keep from running out of samples (due to network fluctuations).
For spline interpolation, a factor of at least 2 is recommended, because the spline algorithm requires staying two samples behind to produce smooth curves.
Latency Control: Manual
With Latency Control set to Manual, latency is fixed so the interpolated value always stays a certain time behind the game time.
Latency: the number of seconds the interpolated value will trail behind the game time.
Overshooting
Max: how far into dead reckoning to venture when the time fraction exceeds 100%, as a percentage of the sample rate.
Retraction: how fast to pull back to 100% when overshooting the allowed dead reckoning maximum (in seconds).
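The teleport and auto-latency rules above can be sketched as plain functions. This is a conceptual illustration only, not the SDK's actual implementation; the function names are our own:

```python
def interpolation_latency(avg_sample_delta, factor):
    """Auto latency: trail the newest sample by roughly one sample interval."""
    return avg_sample_delta * factor

def step_value(current, sample, teleport_distance, blend):
    """Snap when samples are too far apart; otherwise blend toward the sample."""
    distance = abs(sample - current)
    if distance > teleport_distance:
        return sample  # teleport: no interpolation or smoothing in between
    return current + (sample - current) * blend

interpolation_latency(1 / 20, 1.1)  # 20 Hz samples -> ~0.055 s behind game time
step_value(0.0, 100.0, teleport_distance=10.0, blend=0.5)  # -> 100.0 (snap)
step_value(0.0, 4.0, teleport_distance=10.0, blend=0.5)    # -> 2.0 (blend)
```

With a factor of 1.1, the interpolated value intentionally stays slightly more than one sample interval behind, so a late packet does not empty the sample buffer.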
Interpolation works both in baked and reflection modes. You can change these settings at runtime via the Configure window (editor) or accessing the binding and changing the interpolation settings yourself:
Entity references let you set up references between entities and have those be synchronized, just like other value types (like integers, vectors, etc.)
To use Entity references, simply select any fields of type GameObject, Transform, or CoherenceSync for syncing in the Configuration window:
The synchronization works both when using reflection and in baked sync scripts.
Entity references can also be used as arguments in Commands.
It's important to know about the situations when an entity reference might become null, even though it seems like it should have a value:
A client might not have the referenced entity in its live query. A local reference can only be valid if there's an actual entity instance to reference. If this becomes a problem, consider switching to using the ConnectedEntity component which ensures that another entity becomes part of the query.
The owner of the entity reference might sync the reference to the Replication Server before syncing the referenced entity. This will lead to the Replication Server storing a null reference. If possible, try setting the entity references during gameplay when the referenced entities have already existed for a while.
In any case, it's important to use a defensive coding style when working with entity references. Make sure that your code can handle missing entities and nulls in a graceful way.
This feature requires baking.
coherence can support large game worlds with many objects. Since the amount of data that can be transmitted over the network is limited, it's very important to only send the most important things.
You already know a very efficient tool for enabling this – the LiveQuery. It ensures that a client is only sent data when an object in its vicinity has been updated.
Often though, there is a possibility for an even more nuanced and optimized approach. It is based on the fact that we might not need to send as much data for an entity that is far away, compared to a close one. A similar technique is often used in 3D-programming to show a simpler model when something is far away, and a more detailed when close-up.
This idea works really well for networking too. For example, when another player is close to you it's important to know exactly what animation it is playing, what it's carrying around, etc. When the same player is far off in the horizon, it might suffice to only know it's position and orientation, since nothing else will be discernible anyways.
To use this technique we must learn about something called archetypes.
An archetype is a component that can be added to any prefab with the CoherenceSync component. It contains a list of the various levels of detail (LODs) that this particular prefab can have.
There must always exist a LOD 0; this is the default level, and it always has all components enabled (though it can have per-field overrides, see below).
There can be any number of subsequent LODs (e.g. LOD 1, LOD 2, etc.), and each one must have a distance threshold higher than the previous one. The coherence SDK will use the highest-numbered LOD that is still within its distance threshold.
Example
An object has three LODs, like this:
LOD 0 (threshold 0)
LOD 1 (threshold 10)
LOD 2 (threshold 20)
If this object is 15 units away, it will use LOD 1.
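The selection rule above can be sketched as a small function (illustrative only; the SDK performs this selection internally):

```python
def select_lod(distance, thresholds):
    """Pick the highest-numbered LOD whose threshold the distance has reached."""
    lod = 0
    for index, threshold in enumerate(thresholds):
        if distance >= threshold:
            lod = index
    return lod

thresholds = [0, 10, 20]    # LOD 0, LOD 1, LOD 2 from the example above
select_lod(15, thresholds)  # -> 1: past the 10-unit threshold, not yet at 20
select_lod(25, thresholds)  # -> 2: beyond the last threshold
```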
Confusingly, the highest numbered LOD is usually called the lowest one, since it has the least detail.
On each LOD, there are two options for optimizing the data being transferred:
Components can be turned off, meaning you won't receive any updates from it.
Its fields can be configured to use fewer bits, usually leading to less fine-grained information. The idea is that this won't be noticeable at the distance of the LOD.
coherence allows us to define the scale of numeric fields and how many bits we want to allocate to them.
Here are some terms we will be using:
Scale. The minimum/maximum value of the field before it overflows. A scale of 2400 means the number can run from -2400 to 2400.
Bits. The number of bits used for the field. For vectors, this defines the number of bits used for each component (x, y and z), so a vector3 set to 24 bits will consume 3 * 24 = 72 bits.
Range. For integer values, we define a minimum and maximum possible value (e.g. Health can lie between 0 and 100).
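These definitions reduce to simple arithmetic. A sketch, with function names of our own choosing:

```python
import math

def bits_for_range(minimum, maximum):
    """Bits needed to address every integer value in [minimum, maximum]."""
    return math.ceil(math.log2(maximum - minimum + 1))

def vector3_bits(bits_per_component):
    """A quantized vector3 uses the configured bit count per axis (x, y, z)."""
    return 3 * bits_per_component

bits_for_range(0, 100)  # a Health field between 0 and 100 -> 7 bits
vector3_bits(24)        # a vector3 at 24 bits per component -> 72 bits
```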
More bits means more precision. Increasing the scale while leaving the bit count the same will lower the precision of the field.
The maximum number of bits used for any field/component is currently 32.
coherence allows us to define these values for specific components and fields. Furthermore, we can define levels of detail so that precision and therefore bandwidth consumption falls with the distance of the object to the point of observation.
Levels of detail are calculated from the distance between the entity and the center of the LiveQuery.
On each LOD you can configure the individual fields of any component to use less data. You can only decrease the fidelity, so a field can't use more data on a lower (more far away) LOD. The Archetype editor interface will help you to follow these rules.
In order to define levels of detail, we have to add a CoherenceArchetype component to a prefab with CoherenceSync field bindings defined.
Clicking on the Edit LOD button opens the Archetype Editor (and adds a CoherenceArchetype component if there wasn't one already).
We can override the base component settings even without defining further levels of detail.
Clicking on Add new Level Of Detail will add a new LOD. We can now define the distance at which the LOD starts. This is the minimum distance between the entity and the center of the LiveQuery at which the new level of detail becomes active (i.e. the replicator will start sending data as defined here at this distance).
You can also disable components at later LOD levels if they are not needed. In the example above, you can see that the entire Animator component is disabled beyond the distance of 100 units. At 100 units (a.k.a. meters), we usually do not see animation details, so it does not make sense to replicate this data.
The Data Cost Overview shows us that this takes the original 416 bits down to just 81 bits at LOD level 2.
The primitive types that coherence supports can be configured in different ways:
These three types can all be configured in the same way:
By setting the scale, which affects the maximum and minimum value that the data type can take on. For example, a scale of 100 means that a float ranges from -100 to 100.
By setting the precision, which defines the greatest deviation allowed for the data type. For example, a precision of 0.5 means that a float of value 10.0 can be transmitted as anything from 9.5 to 10.5 over the network.
When using this scale setting for vectors, it affects each axis of the vector separately. Imagine shrinking a bounding box rather than a sphere.
Based on the scale and the desired precision, a bit count will be calculated. The default precision and scale (which happens to be 2400) gives a bit count of 24. This means that for a Vector3 a total of 72 bits will be used, 24 x 3.
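Assuming uniform quantization over the configured interval (the exact scheme is internal to coherence), the relationship between scale, bit count, and worst-case precision can be sketched as:

```python
def precision_for_bits(scale, bits):
    """Worst-case deviation when [-scale, scale] is split into 2**bits steps."""
    step = (2 * scale) / (2 ** bits)
    return step / 2

precision_for_bits(2400, 24)  # ~0.000143: sub-millimetre error at 24 bits
precision_for_bits(4800, 24)  # twice the scale at the same bits: twice the error
```

This mirrors the note above: increasing the scale while keeping the bit count constant lowers the achievable precision.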
Integers can be configured to any span (that fits within a 32-bit int) by setting its minimum and maximum value.
For example, the member variable age in a game about ancient trolls might use a minimum of 100 and a maximum of 2000. Based on the size of the range (1900 in this case), a bit count will be calculated for you.
For integers it usually makes sense not to decrease the range on lower LODs, since doing so will overflow (and wrap around) any member on an entity that switches to a lower LOD. Instead, use this setting on LOD 0 to save data for the whole Archetype.
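The wrap-around hazard can be illustrated with a small sketch, reusing the hypothetical age field (the actual on-the-wire overflow behaviour is internal to coherence):

```python
def wrap(value, minimum, maximum):
    """What happens when a value overflows a quantized integer range: it wraps."""
    span = maximum - minimum + 1
    return minimum + (value - minimum) % span

wrap(2100, 100, 2000)  # an age of 2100 wraps around to 199
wrap(1500, 100, 2000)  # -> 1500: values inside the range pass through unchanged
```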
Right now quaternions (used for rotations) do not support field overrides, but this will be fixed in the near future.
All other types (strings, booleans, entity references) have no settings that can be overridden, so your only option for optimizing those are to turn them off completely at lower LODs.
If a LODed game object is parented to another synced object, the child will base its LOD level on the world position of its parent. This means that the (local) position of the LODed child does not have any effect on its LOD, until it is unparented.
Also – to save bandwidth, detection of LOD changes on the client only happens when the entity sends a component update. This means that a child object might appear to be using a nonsensical LOD until it changes in some way, for example by modifying its position.
When we bake, information from the CoherenceArchetype component gets written into our schema. Below, you can see the setup presented earlier reflected in the resulting schema file.
The most unintuitive thing about archetypes and LOD-ing is that it doesn't affect the sending of data. This means that a "fat" object with tons of fields will still tax the network and the replication server if it is constantly updated, even if it uses a very optimized Archetype.
Also, it's important to realize that the exact LOD used on an entity varies for each other client, depending on the position of their query (or the closest one, if several are used.)
Objects with the CoherenceSync component can be connected to other objects with CoherenceSync components to form a parent-child relationship. For example, an object can be linked to a hand, a hand to an arm, and the arm to a spine.
When an object has a parent in the network hierarchy, its transform (position and orientation) updates in local space, which means its transform is relative to the parent's transform.
A child object will only be visible in a LiveQuery if its parent is within the query's boundaries.
Creating an entity hierarchy is very simple. All you need to do is add a GameObject with a CoherenceSync component as a direct child of another GameObject with a CoherenceSync component. You can add and remove parent-child relationships at runtime (even from the editor).
Destruction or disconnection of the parent object will also destroy and remove all children of that object. Those objects' state needs to be handled on the client side so that they can be re-instantiated on the next connection.
Sometimes, it is not practical to add CoherenceSync objects to all the links in the chain. For example, if a weapon is parented to a hand controlled by an Animator, we do not need to synchronize the entire skeleton over the network. In that case, see CoherenceNode.
If the child object is using LODs, it will base its distance calculations on the world position of its parent. For more details, see the Level of detail documentation.
While the basic case of direct parent-child relationships between entities is handled automatically by coherence, more complex hierarchies (with multiple levels) need a little extra work.
An example of such a hierarchy would be a synced Player prefab with a hierarchical bone structure, where you want to place an item (e.g. a flashlight) in the hand:
Player > Shoulder > Arm > Hand
A prefab can only have a single CoherenceSync script on it (and only on its root node), so you can't add an additional one to the hand. Instead, you need to add the CoherenceNode component to another prefab so that it can be parented. Please note that this parenting relationship can only be set up in the scene or at runtime; you can't store it in the parent prefab, since that would break the rule of only one CoherenceSync per prefab.
To prepare the child prefab that you want to place in the hierarchy, add the CoherenceNode component to it (it also has to have a CoherenceSync). In the example above, that would be the flashlight you want your player to be able to pick up. You don't need to make any changes to the Player prefab, just make sure it has a CoherenceSync script in the root.
This setup allows you to place instances of the flashlight prefab anywhere in the hierarchy of the Player (you could even move it from one hand to the other, and it will work.) The one important constraint is that the hierarchies have to be identical on all clients.
To recap, for CoherenceNode to work you need two things:
One or more prefabs with CoherenceSync that have some kind of hierarchy of child transforms (the child transforms can't have CoherenceSyncs on them).
Another prefab with CoherenceSync and CoherenceNode. Instances of this prefab can now be parented to any transform of the prefabs from step 1.
A simulation server or simulator is a version of the game client without the graphics ("headless client") optimized and configured to perform server-side simulation of the game world. When we say something is simulated on the server, we mean it is simulated on one or several simulators.
Simulators can also be independent from the game code. A simulator could be a standalone application written in any language, including C#, Go, or C++. We will post more information about how to achieve this here in the future. For now, if you would like to create a simulator outside of Unity, please contact our developer relations team.
A simulator can have various uses, including:
Server-side simulation of game logic that cannot be tampered with
Offloading processing from game clients
Splitting up a large game world with many entities between them
Here are some examples of things a simulator could be taking care of:
Running all the important game logic
Running NPC AI
Simulating the player character (by receiving only inputs from the clients through input queues)
We can have as many simulators as we like. They will connect to the replication server like any other game client.
Our cloud services only support uploading one simulator to the cloud during our alpha. This will be extended in the near future. Enterprise customers can still run multiple simulators in their own cloud environment.
See Build and deploy to enable a simulator for your project.
When scripting simulators, we need mechanisms to tell simulators and regular clients apart. Ask Coherence.SimulatorUtility.IsSimulator.
There are two ways you can tell coherence if the game build should behave as a simulator:
The COHERENCE_SIMULATOR preprocessor define.
The --coherence-simulation-server command-line argument.
Connect and ConnectionType
The Connect method on Coherence.Network accepts a ConnectionType parameter.
Whenever the project compiles with the COHERENCE_SIMULATOR preprocessor define, coherence understands that the game will act as a simulator.
Launching the game with --coherence-simulation-server will let coherence know that the loaded instance must act as a simulator.
You can supply additional parameters to a simulator that define its area of responsibility, e.g. a sector/quadrant to simulate entities in and take authority over entities wandering into it.
You can also build a special simulator for AI, physics, etc.
You can define who simulates the object in the CoherenceSync inspector.
Automatic simulator adoption of CoherenceSync objects is work in progress and will be available in one of the future releases of coherence.
The sample UI provided includes auto-reconnect behaviour out of the box for room and world based simulators. The root GameObject has AutoReconnect components attached to it.
Multi-room simulators have their own per-scene reconnect logic. The AutoReconnect components should not be enabled when working with multi-room simulators.
If the simulator is invoked with the --coherence-play-region parameter, AutoReconnect will try to reconnect to a server located in that region.
A simulator build is a built Unity Player for the Linux 64-bit platform that you can upload to coherence straight from Unity Editor.
Make sure you have run through Build and run and Create an account.
On Unity's menu bar, navigate to coherence -> Simulator -> Build Wizard.
From within the Build Wizard you can build and upload simulators.
The Info tab provides information and requirements to build simulators properly.
The Build tab creates valid simulator builds from Build Configuration Assets.
There's a known issue in the Platforms package provided by Unity where builds will fail when the project is not in the target build platform. To prevent this from happening, please switch your active platform to match the one used in your build configuration before building.
You can create them via Assets -> Create -> coherence -> Simulator Build Configuration.
A newly created build configuration looks like this:
There are several settings you might want to change.
Specify the scenes you want to get in the build via the Scene List component.
Specify a Company Name and a Version from the General Settings component (optional).
Additionally, you can add our OptimizeForSize component (find it using Add Component). Specify which optimizations you want to use to reduce the final build size from the Optimize For Size component (optional).
This feature is experimental, please make sure you backup your project beforehand.
You can add an OptimizeForSize component to your build configuration via the Add Component in the build configuration inspector. It looks like this:
Select the desired optimizations depending on your needs.
Settings applied to built simulators will be reverted once the build process is completed, so these settings won't affect other builds you make.
Once you have created a valid simulator build, you can upload it to coherence.
If you built your simulator using the Build tab, you should have a valid path to your simulator build set already. If you haven't or want to use a different path, use the Browse button.
You'll see in the developer dashboard when your simulator is ready and running.
Target frame rate on simulator builds is forced at 30.
coherence allows us to use multiple simulators to split up a large game world with many entities between them. This is called spatial load balancing. Please refer to the section about simulators for more information.
Our cloud services only support uploading one simulator to the cloud during our alpha. This will be extended in the near future. Enterprise customers can still run multiple simulators in their own cloud environment.
When using the Simulator Upload Wizard in Unity, you can specify a "simulator slug". This is simply a unique identifier for the simulator. This value is automatically saved in RuntimeSettings when an upload completes, and room creation requests will use this value to identify which simulator should be started alongside your room.
The simulator slug can be any string value, but we recommend using something descriptive. If the same slug is used between two uploads, the later upload will overwrite the previous simulator.
A list of uploaded simulators and their corresponding slugs can be found in the Developer Portal:
Simulators per room can be enabled in the dashboard for the project. The simulator used is matched according to the simulator slug in the RuntimeSettings scriptable object file. This is set automatically when you upload a simulator.
For each new room, a simulator will be created with the command-line parameters described earlier. The simulator is shut down automatically when the room is closed.
World simulators are started and shut down with the world. They can be enabled and assigned in the Worlds section of the portal.
World simulation servers are started with the command-line parameters described earlier.
coherence only supports Unity at the moment. Unreal Engine support is planned; for more specific details and announcements, please check our website. For custom engine integration, please contact us.
The Network Playground is a starter project for you to dive into. It contains the latest SDK, all the required preview packages and a bunch of extra resources for you to learn how to make multiplayer games with coherence.
You will need Unity Version 2020.1.9f1 or later.
Each Scene in this Network Playground Project shows you something new:
Scene 1. Synchronizing Transforms
Scene 2. Physics
Scene 3. Persistence
Scene 4. Synchronizing Animations and Custom Variables
Scene 5. AI Navigation
Scene 6. Commands
Scene 7. Team based
Scene 8. Connected Entities
Before deploying a simulation server, testing and debugging locally can significantly improve development and iteration times. There are a few ways of accomplishing this.
Using the Unity editor as a simulator allows us to easily debug the simulator. This way we can see logs, examine the state of scenes and game objects and test fixes very rapidly.
To run the editor as a simulator, launch it from the command line with the proper parameters:
--coherence-simulation-server: specifies that the program should run as a coherence simulator.
--coherence-ip: tells the simulator which IP it should connect to; using 127.0.0.1 will connect the simulator to a local server, if one is running.
--coherence-port: specifies the port the simulator will use.
--coherence-world-id: specifies the world ID to connect to; used only when the project is set to worlds.
--coherence-room-id: specifies the room ID to connect to; used only when the project is set to rooms.
--coherence-unique-room-id: specifies the unique room ID to connect to; used only when the project is set to rooms.
For example:
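A sketch of such an invocation, combining the flags listed above (the Unity path, project path, port, and world ID are all placeholders; substitute your own values):

```shell
# Run the Unity editor headlessly as a local simulator for a worlds project.
/path/to/Unity -projectPath /path/to/MyProject -batchmode -nographics \
  --coherence-simulation-server \
  --coherence-ip 127.0.0.1 \
  --coherence-port 32001 \
  --coherence-world-id 1
```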
If you're not sure which values should be used, adding a COHERENCE_LOG_DEBUG define symbol will let you see detailed logs, among them logs that describe which IP, port, and so on the client is connecting to. This can be done in the Player settings: Project Settings -> Player -> Other Settings -> Script Compilation -> Scripting Define Symbols
Another option is making a simulator build and running it locally. This option more closely emulates what will happen when the simulator is running after being uploaded.
You can run a simulator executable build in the same way you run the editor.
This allows you to test a simulator build before it is uploaded or if you are having trouble debugging it.
When using a rooms-based setup, you first have to create a room in the local replication server (e.g. by using the connect dialog in the client).
The local replication server will print out the room ID and unique room ID that you can use when connecting the simulator.
| Optimization | What it does |
|---|---|
| Replace Textures And Sounds With Dummies | The project's textures and sound files are replaced with tiny, lightweight alternatives (dummies). The original assets are copied to <project>/Library/coherence/AssetsBackup and restored once the build process has finished. |
| Keep Original Assets Backup | The assets backup (found at <project>/Library/coherence/AssetsBackup) is kept after the build process completes, instead of being deleted. This takes up disk space depending on the size of the project, but it is a safety convenience. |
| Compress Meshes | Sets Mesh Compression on all your models to High. |
| Disable Static Batching | Static batching tries to combine meshes at compile time, potentially increasing build size. Depending on your project, static batching can affect build size drastically. Read more about static batching. |
Simulate multiple rooms at the same time, within one Unity instance
Multi-Room Simulators are Room Simulators which are able to simulate different game rooms at the same time. One game build to rule them all.
In order to achieve this, the game code must be careful about which room it affects. Game state should be kept per room, meaning game managers, singletons (static data), etc. need to account for this.
Each room is held in a different scene. So for every room created, the Multi-Room Simulator opens a connection to it, additively loading a scene and establishing a simulator connection (via the MonoBridge).
By using Multi-Room Simulators, the coherence Developer Portal is able to instruct your simulator which room to join and start simulating.
This communication happens via HTTP. An HTTP server is started by your game build when the MultiRoomSimulator
component is active. This component listens to HTTP requests made by the coherence Developer Portal.
For offline local development, you can use a MultiRoomSimulatorLocalForwarder component on your clients, which creates HTTP requests against your local simulator upon client connection, i.e. when joining a room.
For local development, enable the Local Development Mode flag in the project settings.
Once the MultiRoomSimulator receives a request to join a room, it spawns a CoherenceSceneLoader that is in charge of additively loading the specified scene.
By default, each scene has its own physics scene. coherence ticks the physics scene via the CoherenceScene component, which the target scene to be loaded should include.
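Conceptually, the additive loading step looks like the sketch below. This is an illustration only, using plain Unity APIs; CoherenceSceneLoader's actual implementation and API may differ.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: load a room scene additively with its own local physics scene,
// so each simulated room's physics can be ticked in isolation.
public class RoomSceneLoadingSketch : MonoBehaviour
{
    public void LoadRoomScene(string sceneName)
    {
        var parameters = new LoadSceneParameters(
            LoadSceneMode.Additive,      // keep other room scenes loaded
            LocalPhysicsMode.Physics3D); // give the scene its own physics scene

        SceneManager.LoadSceneAsync(sceneName, parameters);
    }
}
```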
The quickest way to get Multi-Room Simulators set up is by using the provided wizard.
It will guide you through the GameObjects and Components needed to make it happen.
Some steps are not strictly necessary. For example, you don't need a Sample UI for Multi-Room Simulators to work. However, if you do use the Sample UI, the wizard helps you make sure it is set up properly.
These are the pieces needed for Multi-Room Simulators to work:
Simulators
In the initialization scene (splash, init, menu, ...)
MultiRoomSimulator — listens to join room requests and delegates scene loading (by instantiating CoherenceSceneLoaders)
Clients
(Only for local development) In the scene where you connect to a room (where you have the Sample UI or your custom connection logic)
MultiRoomSimulatorLocalForwarder — requests the local MultiRoomSimulator to join rooms when the client connects.
Independently
In the scene where the networked game logic is (game, room, main, ...)
MonoBridge — handles the connection
LiveQuery — filters entities by distance
CoherenceScene — when the scene is loaded via CoherenceSceneLoader, it will try to connect using the data given by it. It attaches to the MonoBridge, creates a connection, and handles auto reconnection. If a scene loaded through CoherenceSceneLoader doesn't have a CoherenceScene on it, one will be created on the fly.
There are two components that can help you fork client and simulator logic, for example, by enabling or disabling the MultiRoomSimulator component depending on whether it's a simulator or a client build. These are optional but can come in handy.
SimulatorEventHandler — events on the build type (client/simulator).
ConnectionEventHandler — events on the connection established by the MonoBridge associated with that scene.
It's possible to visualize each individual room the Multi-Room Simulator is working on. By default, simulator connections to rooms are hidden, as shown in the image above. You can toggle the visibility per scene by clicking the eye icon. You can also change the default visibility of the loaded scene (defaults to hidden) on the CoherenceScene component:
Working with Multi-Room Simulators requires your logic to be constrained to the scene. Methods like FindObjectsOfType will return objects in all scenes — you could affect other game sessions!
This is also true for static members, e.g. singletons. When using Multi-Room Simulators, there need to be as many isolated instances of your managers as there are open simulated rooms.
For example, if you were to access your Game Manager through GameManager.instance, you'll now need a per-scene API like GameManager.GetInstance(scene).
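A minimal sketch of such a per-scene API, assuming your manager is a MonoBehaviour placed in each room scene (this class is not part of the coherence SDK):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical per-scene replacement for a static GameManager.instance.
public class GameManager : MonoBehaviour
{
    static readonly Dictionary<Scene, GameManager> instances =
        new Dictionary<Scene, GameManager>();

    void Awake() { instances[gameObject.scene] = this; }
    void OnDestroy() { instances.Remove(gameObject.scene); }

    // Callers pass the scene they operate in instead of using a global.
    public static GameManager GetInstance(Scene scene) { return instances[scene]; }
}
```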
There might be third-party or Unity-provided features that can't be accessed per scene, and affect the whole game.
Loading operations, garbage collection, frame-rate spikes: these will all affect performance of other sessions, since everything is running within the same game instance.
Multi-Room Simulators are still Room Simulators. You need to Enable simulators for rooms and enable multi-room simulators in the coherence Developer Portal, as shown here:
Welcome to the first scene of the coherence Network Playground. This scene will show you how easy it is to set up networking in your Unity project and sync GameObject transforms across the network.
In this example, each client has a player character; clicking on the map makes the entity move to that location. Each client only has control of its local entity.
In the Hierarchy of the Scene you can see three core Prefabs.
Core Scene Setup and Coherence Setup are present in all (Network Playground) Scenes and are described in detail on the Start Tutorial page.
Coherence Entity Character is the Prefab that changes per Scene with different functionality. It has a standard CharacterController and Rigidbody, as well as an Agent script which handles movement through the Input Manager in the Core Scene Setup prefab.
Coherence Entity Character (always change the prefab, not the instance) is located in the Resources folder. The UnityEngine.Transform and position are ticked to sync. All other settings (persistence and authority) use the defaults. This entity is session-based: no authority handover and no adoption will take place when a client leaves.
The On Network Instantiation event is used to change the color of the mesh and recalculate the Rigidbody collisions. That's it.
You can build this Scene via the Build Settings. Run the local Replication Server through the Window -> Coherence -> Settings window and see how it works. Try running more than just two clients and watch the entities replicate for each.
The Network Playground project is a series of scenes with different features of coherence for you to pick through and learn.
They will teach you the fundamentals that should set you on your way to creating a multiplayer game.
Each Scene in this Network Playground Project shows you something new:
Scene 1. Synchronizing Transforms
Scene 2. Physics
Scene 3. Persistence
Scene 4. Synchronizing Animations and Custom Variables
Scene 5. AI Navigation
Scene 6. Commands
Scene 7. Team based
Scene 8. Connected Entities
Each scene comes with a few helpful core components.
This Prefab stores all the generic things that make up these simple Scenes.
Main Camera
Global Volume
Directional Light
Environment (Ground, Walls etc)
Navigation Interface
Lets you move between the Scenes without having to enter and exit Play Mode; useful when testing the standalone build.
Input Manager
Input Manager that uses Raycasting to send a message to the GameObject (with a collider) that it hit.
This Prefab includes all of the things that make coherence work.
Interface
Canvas UI that handles the Connection/Disconnection dialog and which address to connect to.
Event System
Event system to interact with any UI in the scene.
coherence Live Query
Game Object/Component with a query radius that coherence uses to ask the server "What is happening in the query radius?" so it does not query unnecessarily big areas of large worlds. You can find more information here.
coherence Mono Bridge
GameObject/Component that transforms the Mono-based CoherenceSync component into archetypes so all data can be processed as ECS code.
We use this component on anything that needs to be networked, whether that is a playable character, an NPC (non-player character), an object (ball, weapon, banana, car, plant, etc.) or any Game/Input Managers that need to sync data between Clients/Simulators/Servers.
It scans the inspector for attached components and allows us to sync those component values (as long as they are public) and communicate them to other clients. It also allows us to set individual Prefabs' persistence, authority and network connect/disconnect settings. There's much more information on the CoherenceSync page.
This scene will show you how easy it is to set up Networking in your Unity project and sync GameObject transforms and Physics objects across the network whilst keeping them persistent. As long as the server is running, you can disconnect and reconnect and your world will persist.
In this example, a right-click will spawn local physics-based objects that all other players will see. These physics objects will interact with each other, and the physics for every object will be simulated locally on its authority. Clicking one of these objects will either give the local client authority over it or release that authority. You can disconnect and reconnect and the persistent entities will all remain.
The controls at the top right of the screen allow the spawning of a unique object that works very similarly to the other physics-based objects. This bigger cube is set to No Duplicates in its Uniqueness property, which means the server will only allow one instance of this object to exist at a time. It also has its Lifetime setting set to Session Based, which causes the object to be deleted when its owner disconnects. Just like with the physics-based objects, a player can claim authority over this object by clicking it. If that player then disconnects, the unique object is deleted for all clients.
In the Hierarchy of the Scene you can see three core Prefabs:
Core Scene Setup and Coherence Setup are present in all scenes and described in detail on the Start Tutorial page. Coherence Entity is not present in this scene.
The Input Manager in the Core Scene Setup prefab is set up to spawn a Sample Entity Physics Persistent where a click is performed.
Coherence Connection Events handles overall Scene connectivity. In this scene we use it to clean up objects the client has authority over when disconnecting.
The Physics Entity Spawner component is a simple script that instantiates a Coherence Entity Physics Persistent prefab with a coherenceSync component that replicates the transform and position. The component also changes the material based on whether it is locally simulated or synced over the network.
The Coherence Entity Physics variants have an Entity Lifetime Type set to Persistent. This means the entity will remain in the world as long as the Replication Server is running, even if all clients disconnect. They also have Authority Transfer Style set to Stealing, which means the entity can be "stolen" and simulated on a client requesting authority.
This is done via the Input Manager in the Core Scene Setup prefab. When the object is left-clicked, it sends the message "Adopt" to the GameObject on the specific Layer Mask "Physics Entities". The Coherence Handler component on Coherence Entity Physics objects handles the Adopt call and requests the authority change via the coherenceSync component.
Coherence Handler is a basic layer for handling Commands and Events, both sending and receiving. You can create your own, or reuse and extend this one for your project.
You can build this Scene via the Build Settings. Run the local Replication Server through the Window -> Coherence -> Settings window and see how it works. Try running more than just two clients and watch the entities replicate for each.
This scene will show you how easy it is to set up Networking in your Unity project and sync GameObject transforms, animations (via Animator parameters) and custom variables.
In this example, each client will have its own player character to move, the beloved Unity RobotKyle asset. Type your name into the connect dialog box and connect. You can move around with WASD. When another player connects, their name is carried across and their transform and animation state is replicated.
In the Hierarchy of the scene you can see the two core prefabs Core Scene Setup and Coherence Setup. Both are present in all scenes and described in detail on the Start Tutorial page.
Coherence Kyle is taken from the Unity asset "Robot Kyle", with added components Rigidbody, Character Controller and Animator with two animation states, Idle and Walk. The animation states are controlled by a speed parameter from the Agent script. The scene also contains a Name Badge script which gets the Connect Dialog GameObject from the Core Scene Setup and sets the color depending on whether it's local or networked.
Attached to Coherence Kyle is a coherenceSync component which replicates the parameters Transform (position and rotation), Animator (Speed [float]) and NameBadge (Name [string]). The authority and persistence settings are set to their default values, and the On Network Instantiation event is used to change the color of the networked entities.
You can build this Scene via the Build Settings. Run the local Replication Server through the Window -> Coherence -> Settings window and see how it works. Try running more than just two clients and watch the entities replicate for each.
You can read more about synchronizing animations in the Animations section.
This scene will show you how easy it is to set up Networking in your Unity project and sync non-player characters that move around the world via Unity's NavMesh system.
In this example, each client can spawn as many navigation agents as they wish, and these agents will move intermittently to different locations on the grid. All navigation agents are replicated across clients, with specific colors signifying whether they are local or networked.
In the Hierarchy of the Scene you can see three core Prefabs:
Coherence Connection Events handles overall Scene connectivity. Additionally, it removes all Entities with coherenceSync from the Scene to demo disconnection/reconnection via the Interface without refreshing the Scene.
Spawner is a simple script that instantiates a Coherence Entity Nav Agent prefab with a coherenceSync component that replicates the transform and position. The component also changes the material based on whether it is locally simulated or synced over the network.
Coherence Entity Nav Agent has a Nav Mesh Agent component controlled via the Navigation Agent script, which every few seconds sets a new destination on the grid. Nothing other than the Transform (position, rotation) parameter needs to be synced, as the Nav Mesh Agent settings only need to be simulated locally.
You can build this Scene via the Build Settings. Run the local Replication Server through the Window -> Coherence -> Settings window and see how it works. Try running more than just two clients and watch the entities replicate for each.
Core Scene Setup and Coherence Setup are present in all scenes and described in detail on the Start Tutorial page.
This scene demonstrates usage of connected entities.
This scene shares the moving entities that were in a few previous scenes but also has added functionality.
Right-clicking on a non-local entity causes the local entity to start moving towards it while also displaying an arrow pointed at the target. When the entity reaches this target it will parent itself under it as a child, setting the target as its connected entity.
From this point onward, until the entity disconnects by moving or otherwise, the entity will be smaller and parented to this target. When the target moves the local entity will move with it as one.
This type of connection can also cause issues. For example, if the client controlling the parent entity disconnects, destroying its entity, any client whose entity was a child of that destroyed entity will have its entity destroyed as well.
To learn more about connected entities see Connected Entities.
You can build this Scene via the Build Settings. Run the local Replication Server through the Window -> Coherence -> Settings window and see how it works. Try running more than just two clients and watch the entities replicate for each.
coherence provides an API for creating player game accounts that uniquely identify players across multiple devices. An account is required in order to use the rest of the online services, like the key-value store and matchmaking.
There are two types of accounts that are currently supported - guest accounts and user accounts.
Guest accounts provide an easy way to start using the coherence online services without providing any user interface for user names or passwords. Everything is controlled with the API and is completely transparent to the player.
The session data for the account is stored locally on the device, so it is important to know that uninstalling the game will also wipe out all the data, and the account will no longer be accessible even if the player installs the game again.
User accounts require explicit authorization by the player. Currently, only user name and password are supported as a means of authentication. The user interface for entering the credentials must be provided by the game. Check the API reference for how to use this feature.
In the future, there will be support for many more authentication mechanisms like Steam, Google Play Games, Sign in with Apple, etc.
Please refer to the Cloud API: Game Accounts.
This scene will show you how coherence can be used to make a basic team based game.
In this example each client has one character they can control with click to move input. When connecting the player will be prompted to select a team they wish to join.
This selection will be synced via a TeamIndex field in the Sample Team Agent component.
The Target Area object is a unique GameObject that is shared among clients and moves around the grid, specifying a particular part of the field every time it jumps. When arriving at its final position, this object checks which team has the most players inside the specified area and awards that team a point.
The team colors and score are managed by another unique object called Team Assigner. This object has a synced string variable called encodedScores which is used to sync the team scores between clients.
Because both the Team Assigner and Target Area are persistent, we can disconnect from the server and the game state will be preserved as long as the server is alive, even if no clients are connected at all.
Notice that the number of teams and their colors, set in the Team Assigner, are not synced. This means it is possible to create different clients with different colors without them affecting each other.
You can build this Scene via the Build Settings. Run the local Replication Server through the Window -> Coherence -> Settings window and see how it works. Try running more than just two clients and watch the entities replicate for each.
This scene will show you how to use coherence to sync GameObject transforms and Physics objects across the network.
In this example, each client will have its own player character to move. Left-clicking on the map will make the entity move to that location. Right-clicking will spawn local physics based objects that all player characters can interact with. Each client will only have control over its local entity.
In the Hierarchy of the Scene you can see three core Prefabs:
Core Scene Setup and Coherence Setup are present in all scenes and described in detail on the Start Tutorial page. Coherence Entity is the prefab that changes per Scene with different functionality. It has a standard CharacterController and Rigidbody, as well as an Agent script which handles movement through the Input Manager in the Core Scene Setup prefab.
Coherence Connection Events handles overall Scene connectivity. Additionally, it removes all Entities with coherenceSync from the Scene to demo disconnection/reconnection via the Interface without refreshing the Scene.
Coherence Entity Character (always change the prefab, not the instance) is located in the Resources folder. The UnityEngine.Transform and position are ticked to sync. All other settings (persistence and authority) use the defaults. This entity is session-based: no authority handover and no adoption will take place when a client leaves.
The On Network Instantiation event is used to change the color of the mesh.
The Physics Entity Spawner is a simple script that instantiates a Coherence Entity Physics Prefab with a coherenceSync component that replicates the transform and position. The component also changes the material based on whether it is locally simulated or synced over the network.
You can build this Scene via the Build Settings. Run the local Replication Server through the Window -> Coherence -> Settings window and see how it works. Try running more than just two clients and watch the entities replicate for each.
This scene will show you how easy it is to set up Networking in your Unity project and send Network Commands to other clients. Network Commands are like sending direct messages to objects instead of syncing variable values.
In this example each client has one character they can control with click to move input. They can right-click on another Entity to send a command and that Entity will instantiate an Exclamation mark above their head.
In the Hierarchy of the Scene you can see three core Prefabs:
Core Scene Setup and Coherence Setup are present in all scenes and described in detail on the Start Tutorial page. Coherence Entity is the prefab that changes per Scene with different functionality. It has a standard CharacterController and Rigidbody, as well as an Agent script which handles movement through the Input Manager in the Core Scene Setup prefab.
Coherence Entity can send commands to other entities through the Coherence Handler component. In the coherenceSync component we can open the bindings window and find a Methods tab used for command setup. There we can find a method called ReceiveCommand and, beside it, an icon describing who the command will be sent to (only the object's authority, or all clients).
In the Game view in Play Mode, Commands can be sent to other entities via right-click. An exclamation mark will pop up above the right-clicked entity for all clients.
If we were to set this command to Authority Only, then only the object's authority would receive this method call.
You can build this Scene via the Build Settings. Run the local Replication Server through the Window -> Coherence -> Settings window and see how it works. Try running more than just two clients and watch the entities replicate for each.
coherence provides an API and a database for storing key-value pairs from within game sessions.
The key-value store provides a simple way to store and retrieve data for the currently logged-in player. For example, you could store a player's score, email address, or any other data.
It is important to mention that this feature requires a game account.
The keys must be alphanumeric strings; underscores and dashes are also allowed. Currently, only strings are accepted as values. If you need to store numbers or complex types like arrays, you have to convert them to strings.
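For example, a score could be stored by converting it to a string first. The interface and method names below are hypothetical stand-ins for the real Cloud API; check the Cloud API: Key-value store reference for the actual calls:

```csharp
using System.Globalization;
using System.Threading.Tasks;

// Hypothetical interface standing in for the real coherence Cloud API;
// see Cloud API: Key-value store for the actual method names.
public interface IKeyValueStore
{
    Task SetKeyValue(string key, string value);
    Task<string> GetKeyValue(string key);
}

public static class ScoreStorage
{
    // Only strings are accepted as values, so the number is converted
    // to a string before storing and parsed back after retrieval.
    public static Task SaveScoreAsync(IKeyValueStore store, int score) =>
        store.SetKeyValue("high_score", score.ToString(CultureInfo.InvariantCulture));

    public static async Task<int> LoadScoreAsync(IKeyValueStore store)
    {
        string value = await store.GetKeyValue("high_score");
        return int.Parse(value, CultureInfo.InvariantCulture);
    }
}
```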
The total amount of stored data (keys + values) cannot exceed 256 KB per player (TBD).
There is no limit on how often the data can be stored or retrieved.
Please refer to the Cloud API: Key-value store.
coherence provides a powerful matchmaking API with various different setups (multiple teams, team sizes, etc.).
Before using the matchmaking service you have to configure it on the developer portal. Currently there are two things to setup: the timeout and the teams.
The timeout is in seconds. There can be any number of teams and any number of players per team.
For example, a chess game would need only one team with two players, while a soccer game would need two teams with eleven players per team.
Please refer to the Cloud API: Matchmaking.
The developer portal is an online dashboard where the cloud services behind your coherence-based game can be managed. It can be found at or from the Developer Portal link above.
The developer portal includes:
Organization and Project creation and management
Resource configuration and management
Enabling / disabling features
Cost analysis
Team management
Here are some examples of tasks to perform on the developer portal:
Create your organization and project for your game
Start/stop/restart your cloud-based replication server or simulator
Enable coherence features such as player authentication, key-value store, persistence, and build sharing
Invite teammates to your project
View your resource usage and billing forecasts
While a local replication server is available as part of the Unity SDK, in order to host multiplayer services like the replication server in the cloud, your team must have a project in the Developer Portal. When to begin using the cloud services depends on your project's needs.
Besides the core replicator and simulator, coherence offers additional services to enhance your game's experience and we are constantly working on more.
Currently available services are:
In the Project sidebar, you can find links to each service. Each service has an enabled checkbox which you can toggle to enable and disable those features:
Note: Disabling a service will immediately remove that functionality from your game. Please disable with caution.
From the Developer Portal, you can configure how rooms are created through the SDK in the coherence cloud.
The coherence SDK is a set of prefabs & scripts to help you create multiplayer games super fast.
It makes it easy for anyone to create a multiplayer game by having flexible, intuitive and powerful visual components.
Here are the main building blocks of the SDK.
CoherenceSync is a component that should be attached to every networked Game Object. It may be your player, an NPC or an inanimate object such as a ball, a projectile or a banana. Anything that needs to be synchronized over the network. You can select which of the attached components you would like to sync across the network as well as individual public properties.
The coherence Settings window is located in Project Settings -> coherence and lets you launch a local replication server, upload your server to the cloud via the access token, and bake your schemas for more optimized data transfer of your networked GameObjects.
LiveQuery, as the name suggests, queries a location set by the developer so that coherence can simulate anything within its radius. In our Starter Project, the LiveQuery position is static with a radius large enough to cover the entire playable level. If the world was very large and potentially set over multiple simulation servers, the LiveQuery could be attached to the playable character or camera.
The coherence MonoBridge passes information between the coherenceSync component and the networked ECS components.
The sample UI Prefab holds all of the UI and connection functionality to connect to the running project locally or via a server. You can completely rewrite this if you like; it's there to get you up and running quickly.
The built-in coherence scripts are configured to execute in a specific order, using the following DefaultExecutionOrder setup:
-1000 CoherenceMonoBridge
-900 CoherenceSync
-800 CoherenceInput
1000 CoherenceMonoBridgeSender
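If one of your own scripts needs to run relative to these, you can slot it into the same ordering with Unity's DefaultExecutionOrder attribute. The value -850 below is only an example, placing the script between CoherenceSync and CoherenceInput:

```csharp
using UnityEngine;

// Example: run after CoherenceSync (-900) but before CoherenceInput (-800),
// so this script sees synced values that were updated earlier in the frame.
[DefaultExecutionOrder(-850)]
public class SyncedValueReader : MonoBehaviour
{
    void Update()
    {
        // Read state that CoherenceSync has already applied this frame.
    }
}
```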
From the Developer Portal you can create, edit and configure your Worlds.
At least one schema must be uploaded to create a world. To create a schema, see .
To create a World, click the 'New World' button at the top right of the Worlds view in the Developer Portal, then:
Enter a unique name
(optional) choose an SDK version. The latest version is recommended, but this should match the SDK version installed for your project
Enter tags separated by commas
Choose which region the World should be started in
Choose the size of the replicator
(optional) Choose the schema this World should start with. Usually the latest schema uploaded is the preferred choice, and this is the default.
Please see the page, , in the Get Started section.
From the Developer Portal, you can configure what size you want your simulator instances to be. To attach a simulator to a Room, send the `simulator slug` uploaded through the SDK with the Rooms creation request. When using the to create rooms, the simulator uploaded through the SDK is automatically assigned in the creation request.
Refer to the Level of detail section for more information.
The coherence Settings window is located in Project Settings -> coherence and lets you launch a local replication server, upload your server to the cloud via the access token, and bake your Schemas for more optimized data transfer of Networked GameObjects.
Bake Schemas
When CoherenceSync variables/components are sent over the network, C# reflection is used to sync all the data at runtime. Whilst this is really useful for quick prototyping and getting things working, it can be slow and perform poorly. A way to combat this is to bake the CoherenceSync component into a Schema.
The Schema is a text file that defines which data types in your project are synced over the network. It is the source from which coherence SDK generates C# struct types (and helper functions) that are used by the rest of your game. The coherence Replication Server also reads the Schema file to know about those types and to communicate them with all of its clients efficiently.
The Schema must be baked in the coherence Settings window, before the check box to bake this prefab can be clicked.
When the CoherenceSync component is baked, it generates CoherenceSync<NameOfPrefab>.cs.
Bake Output Folder
Defines where to store the baked Schema files.
Portal
Upload your Schema files to your server.
Status - Current Status of your cloud server
Token - Cloud token
Local Replication Server
Run a local replication server.
Port - The port the local Replication Server will use
Frequency - The update frequency of the Replication Server
The MonoBridge is a system that makes sure every GameObject is linked to its networked representation. It essentially interfaces between the GameObject world and the coherence SDK code running "under the hood".
When you place a GameObject in your scene, the MonoBridge detects it and makes sure all the synchronization can be done via the CoherenceSync component.
At runtime, you can inspect which entities the MonoBridge is currently tracking.
A MonoBridge is associated with the scene it's instantiated on, and keeps track of entities that are part of that scene. This also allows for multiple connections at the same time coming from the game or within the Unity Editor.
When using a Global MonoBridge (Singleton), the MonoBridge is still associated with the scene it was originally instantiated in, even when the GameObject detaches from the scene and becomes part of DontDestroyOnLoad.
The way you get information about the world is through LiveQueries. We set criteria for what part of the world we are interested in at each given moment. That way, the replicator won’t send information about everything that is going on in the game world everywhere, at all times.
Instead, we will just get information about what’s within a certain area, kind of like moving a torch around to look in a dark cave.
More complex areas of interest types are coming in future versions of coherence.
A LiveQuery is a cube that defines the area of interest in a particular part of the world. It is defined by its position and its radius (half the side of the cube). There can be multiple LiveQueries in a single scene.
A classic approach is to put a LiveQuery on the camera and set the radius to correspond to the far clipping plane or visibility distance.
Moving the GameObject containing the LiveQuery will also notify the replication server that the query for that particular game client has moved.
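As a sketch, a LiveQuery could be kept in sync with the camera as suggested above. The component and field names used here (CoherenceLiveQuery, radius) are assumptions, so check them against your SDK version:

```csharp
using UnityEngine;
using Coherence.Toolkit; // assumed namespace for the LiveQuery component

[RequireComponent(typeof(Camera))]
public class CameraLiveQuery : MonoBehaviour
{
    private CoherenceLiveQuery query; // assumed component name

    void Start()
    {
        // Match the area of interest to what the camera can actually see.
        query = gameObject.AddComponent<CoherenceLiveQuery>();
        query.radius = GetComponent<Camera>().farClipPlane;
    }
}
```

Since the LiveQuery lives on the camera's GameObject, moving the camera automatically updates the query position on the Replication Server.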
The coherence Sample UI
is a prefab that you can add to your scene that handles interaction with coherence services. It is made up of a Unity UI Canvas and includes everything needed to handle connection to coherence.
The UI
component on the root of the prefab allows us to switch between using Rooms
or Worlds
; each of these methods has a dedicated dialog for connection.
The Auto Reconnect
components are used by Simulator builds. The relevant Auto Reconnect
component is also enabled when switching between Rooms and Worlds.
To learn more about simulators, see Simulators.
The Rooms Connect Dialog
has a few components that facilitate usage of Rooms.
At the top of the dialog is a dropdown for region selection. This dropdown is populated when regions are fetched and automatically selects the first one available.
Due to current limitations the local server is fetched only if it's started before you enter play mode. The Local Development Mode
checkbox must also be checked in the project settings (coherence > Settings
menu item).
This affects the local region for Rooms and the local worlds for Worlds.
Next, the dialog holds an input field for the player's name.
Beneath these elements is a tab group with two sections.
The first section holds a list of rooms fetched from the currently selected region. The user can select one of these rooms and connect to it using the join button. On the left of this tab is a button to refresh the rooms list, fetching them again. This refresh is also triggered whenever the selected region is changed.
The second section is used for room creation. Two input fields allow us to specify a room name and a maximum number of players.
The buttons at the bottom of this section then allow the creation of a room with the specified parameters. The Create and Join
button allows the user to automatically connect to the room that was created by the request right away.
The Worlds Connect Dialog
is much simpler. It holds a dropdown for world selection, an input field for the player's name, and a connect button.
The dropdown is populated when worlds are fetched and automatically selects the first one available.
The connect button tells the client to connect to the selected world.
Note: You can also build your own interface to connect players to the server using the PlayResolver
API. To learn more, see the PlayResolver, Rooms, or Worlds documentation, according to what your project needs.
CoherenceSync is a component that should be attached to every networked Game Object. It may be your player, an NPC or an inanimate object such as a ball, a projectile or a banana. Anything that needs to be synchronized over the network and turned into an Entity. You can select which of the attached components you would like to sync across the network as well as individual public properties.
All Networked Entities need to be placed in the Resources folder
Any scripts attached to the GameObject with CoherenceSync that have public variables will be shown here and can be synced across the network. Enable the script and the variables you want to sync; it's that easy. Entries with a lightning bolt next to them are public methods that can be invoked via commands.
Ownership Transfer
When you create a networked game object, you automatically become the owner of that game object. That means only you are allowed to update or destroy it. But sometimes it is necessary to pass ownership from one player to another. For example, you could snatch the football in a soccer game or throw a mind control spell in a strategy game. In this case, you will need to transfer ownership from one client to another.
Entity Lifetime
When a player disconnects, all the game objects created by that player are usually destroyed. If you want any game objects to stay in the game world after the owner disconnects, you need to set Entity lifetime type of that game object to Persistent.
Session Based - Will be removed when the client disconnects
Persistence - Entities with this option will persist as long as the server is running. For more details please visit Configuring persistence.
Uniqueness
Allow Duplicates - no restrictions on which objects can be instantiated over the network.
No Duplicates - ensure objects are not duplicated by marking them with a UUID. You can provide your own, leave the field blank (a GUID will be assigned at runtime), or use the CoherenceUUID helper component to generate GUIDs for you at editor time.
Entity Simulation Type
Client Side - Simulates everything on the local client and passes the information to the Replication Server to distribute that information to the other clients.
Other forms of simulation (Server; Server with Client Input) coming soon.
Authority Transfer Style
Not Transferable - The default value is Not Transferable because most often objects are not meant to be transferred.
Stealing - Allows the game object to be transferred to another client.
Request - This option is intended for conditional transfers, which are not yet supported.
Orphaned Entities
By making the game object persistent, you ensure that it remains in the game world even after its owner disconnects. But once the game object has lost its owner, it will remain frozen in place because no client is allowed to update or delete it. This is called an orphaned game object.
In order to make the orphaned game object interactive again, another client needs to take ownership of it. To do this, enable Auto-adopt orphan.
Once you have set the transfer style to stealing, any client can request ownership by calling the RequestAuthority()
method on the CoherenceSync component of that game object:
someGameObject.GetComponent<CoherenceSync>().RequestAuthority();
A request will be sent to the game object's current owner. The current owner will then accept the request and complete the transfer.
You are now the new owner of the game object. This means the isSimulated
flag has been set to true, indicating that you are now in full control of the game object. The previous owner is no longer allowed to update or destroy it.
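A small sketch of the stealing flow described above; only RequestAuthority and isSimulated are taken from the text, the rest is illustrative:

```csharp
using UnityEngine;
using Coherence.Toolkit; // assumed namespace for CoherenceSync

public class BallStealer : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        var sync = collision.gameObject.GetComponent<CoherenceSync>();
        if (sync != null && !sync.isSimulated)
        {
            // We are not the owner yet; ask the current owner for authority.
            sync.RequestAuthority();
        }
        // Once the transfer completes, isSimulated becomes true and this
        // client is in full control of the entity.
    }
}
```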
Helper scripts with a custom implementation of Authority transfer can be found here.
Events for handling user connection and disconnection. Manual Destroy
is useful for session-based objects that you want to keep "semi-persistent"; they are removed when all the clients disconnect.
When CoherenceSync variables/components are sent over the network, by default, reflection is used to sync all the data at runtime. While this is really useful for quick prototyping and getting things working, it can be slow and inefficient. A way to combat this is to bake the CoherenceSync component, generating a compatible schema and the code for it.
The schema is a file that defines which data types in your project are synced over the network. It is the source from which coherence SDK generates C# struct types (and helper functions) that are used by the rest of your game. The coherence Replication Server also reads the Schema file so that it knows about those types and communicates them with all of its clients efficiently.
The schema must be baked in the coherence Settings window before the check box to bake this prefab can be clicked.
When the CoherenceSync component is baked, it generates a new file in the baked folder called CoherenceSync<NameOfThePrefab>
. This component will be instantiated at runtime, and will take care of networked serialization and deserialization, instead of the built-in reflection-based one.
Refer to the commands section.
The token you get when creating a project on the developer portal.
You paste it in the Project Settings.
Once you have pasted the portal token successfully, you need to fetch the runtime key as well.
You can fetch the Runtime Key by clicking the down-arrow button on the right side of the input field.
List of the Cloud APIs
The coherence Cloud API allows us to access online services like game accounts, key-value store, matchmaking, and others. It also allows you to get the addresses (IP and port number) of the servers the players can connect to.
The Cloud API requires you to use tokens connected to your coherence project.
This is an advanced topic that aims to bring access to coherence's internals to the end user.
CustomBindingProviders are editor classes that tell coherence how a specific component should expose its bindings and how it generates baked scripts.
For example, we could create a custom binding provider for our Health component:
Place CustomBindingProviders inside an Editor folder.
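As a sketch of what such a provider might look like; the attribute and base-class names follow the text, while the body is illustrative:

```csharp
using Coherence.Editor.Toolkit; // assumed Editor namespace

// Tells coherence how the Health component exposes its bindings.
// Must live inside an Editor folder, as noted above.
[CustomBindingProvider(typeof(Health))]
public class HealthBindingProvider : CustomBindingProvider
{
    // Custom bindings would be declared here via CustomBinding.Descriptor,
    // which also lets you configure interpolation or mark a binding
    // as required.
}
```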
We can add additional (custom) bindings:
In order for these new custom bindings to work in reflected mode, we'll need to implement a runtime serializer that understands how to deal with them.
Check the CustomBinding.Descriptor
API for further properties, like interpolation or marking the binding as required.
For custom bindings to work in reflected mode, we need to implement how their values are serialized and deserialized at runtime:
CustomBindingRuntimeSerializers should be placed in a non-Editor folder.
Once we have our runtime serializer, we need to make our binding provider use it:
You can extend an already existing CustomBindingProvider. For example, coherence ships with a CustomBindingProvider for Transforms:
This way, you can define additional rules on how you want to treat your Transforms, for example.
Any number of CustomBindingProviders can be registered for the same component, but only one is used. Which one is resolved by a priority integer that you can specify in the CustomBindingProvider attribute; the class with the highest priority defined in the attribute will be the only provider taken into account.
The default priority is set at 0
, and coherence's internal CustomBindingProviders have a priority of -1
.
To understand how these APIs are used, check out TransformBindingProvider and AnimatorBindingProvider, both shipped with the coherence SDK (<package>/Coherence.Editor/Toolkit/CustomBindingProviders
).
Rooms functionality can be accessed through the PlayResolver
which includes all the methods needed to use rooms.
To manage rooms we must first decide which region we are working with.
FetchRegions
in PlayResolver.cs
allows us to fetch the regions available for our project. This task returns a list of regions (as strings) and a boolean that indicates if the operation was successful.
FetchLocalRegions
in PlayResolver.cs
returns the local region string for a locally running rooms server, or null if the operation is unsuccessful (for example, if the server isn't running).
Every other rooms API will require a region string that indicates the relevant region for the operation so these strings should not be changed before using them for other operations.
The RoomsConnectDialog
populates a dropdown with the region strings returned by both of these methods directly for easy selection.
These methods also call EnsurePlayConnection
which initializes the needed mechanisms in the PlayResolver
if necessary. EnsurePlayConnection
can also be called directly for initialization.
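As a sketch, fetching the regions might look like this inside an async method; the exact return types are assumptions based on the description above:

```csharp
// Fetch the regions available for the project. The task returns
// a list of region strings and a success flag.
var (regions, success) = await PlayResolver.FetchRegions();
if (success)
{
    foreach (var region in regions)
    {
        Debug.Log($"Available region: {region}");
    }
}
```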
After we have the available regions we can start managing rooms, for instance:
CreateRoom
in PlayResolver.cs
allows us to create a room in the region we send it.
We can also optionally specify:
a room name
the maximal number of clients allowed for the room
a list of tags for room filtering and other uses
a key-value collection for the room
This task returns the operations result and RoomData
for the created room assuming the operation was successful.
FetchRooms
in PlayResolver.cs
allows us to search for available rooms in a region. We can also optionally specify tags for filtering the rooms.
This task returns a list of RoomData
objects for the rooms available for our specifications.
JoinRoom
in PlayResolver.cs
connects the client that we pass to the method to the room we pass to the method. This RoomData
object can be either the one we get back from CreateRoom
or one of the ones we got from FetchRooms
.
The RoomsConnectDialog
demonstrates both of these cases in CreateRoom
when called with true for autoJoin and in JoinRoom
respectively.
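Putting the above together, creating and joining a room might be sketched like this; parameter names and exact signatures are assumptions:

```csharp
// Create a room in the first available region, then join it.
var region = regions[0]; // obtained from FetchRegions
var (ok, room) = await PlayResolver.CreateRoom(region, "my-room", maxClients: 8);
if (ok)
{
    // 'client' is the client instance for this connection.
    PlayResolver.JoinRoom(client, room);
}
```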
Worlds functionality can also be accessed through the PlayResolver
just like rooms. Worlds work a little differently, however, and are a bit simpler.
First we need to fetch the available worlds. Unlike rooms, worlds cannot be created by a client and need to be set up in the Developer Portal.
FetchWorlds
in PlayResolver.cs
allows us to fetch the available worlds for our project. This task returns a list of worlds in the form of WorldsData
objects and a boolean that indicates if the operation was successful.
This method also calls EnsurePlayConnection
, which initializes the needed mechanisms in the PlayResolver
if necessary. EnsurePlayConnection
can also be called directly for initialization.
FetchLocalWorld
in PlayResolver.cs
returns the local world for a locally running world server.
The WorldsConnectDialog
populates a dropdown with the worlds returned by both of these methods so we can select a world.
After we've selected a world we can connect to it using:
JoinWorld
in PlayResolver.cs
connects the client that we pass to the method to the world we pass to the method.
The isSimulator
optional parameter is used for Simulators and can be ignored for regular client connections (see the Simulators documentation).
The WorldsConnectDialog
is an example implementation for Worlds usage.
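A minimal sketch of the worlds flow described above; signatures are assumptions:

```csharp
// Fetch the worlds configured in the Developer Portal and join one.
var (worlds, success) = await PlayResolver.FetchWorlds();
if (success && worlds.Count > 0)
{
    // 'client' is the client instance for this connection.
    PlayResolver.JoinWorld(client, worlds[0]);
}
```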
When connected to a room or a world, the client can access the currently connected endpoint through the Coherence.IClient.LastEndpointData property of the CoherenceMonoBridge, e.g.
myBridge.Client.LastEndpointData
The key-value store provides a simple persistence layer for the players.
The player needs to be logged in to use the key-value store.
This class provides the methods to set, get and unset key-value pairs. This is executed within the context of the currently logged in player.
Size: there are no limits to the number of stored key/values as long as the total size is less than 256 kB.
Requests: Set/Get/Unset can be called an unlimited number of times, but execution may be throttled.
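A hedged sketch of how the key-value store might be used; the method and property names here are hypothetical, so check the Cloud API reference for the exact calls:

```csharp
// Store, read, and remove a value for the currently logged-in player.
await cloudService.KvStore.Set("highscore", "1200");
string highscore = await cloudService.KvStore.Get("highscore");
await cloudService.KvStore.Unset("highscore");
```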
The Replication Server replicates the state of the world to all connected Clients and Simulators.
To understand what is happening in the game world, and to be able to contribute your simulated values, you need to connect to a Replication Server. The Replication Server acts as a central place where data is received from and distributed to interested clients.
You can connect to a Replication Server in the cloud, but we recommend that you first start one locally on your computer. coherence is designed so you can easily develop everything locally first before deploying to the cloud.
Replication Servers replicate data defined in schema files. The schema's inspector provides all the tools needed to start a Replication Server.
Run the Replication Server by clicking the Run button, or copy the run command to the clipboard via the copy run-command button on its right.
A terminal/command line window will pop up, running your server locally.
The port the Replication Server will use. Default: 32001
.
The Replication Server frequency. Default: 60
.
You can also start the Replication Server from the coherence menu or by pressing Ctrl+Shift+Alt+N.
If you're unsure where schema files are located, you can easily search the project using Unity's project search window with t:Coherence.SchemaAsset
For Mac Users: You can open new instances of an application from the Terminal:
When the replication server is running, you connect to it using the Connect
method.
After trying to connect you might be interested in knowing whether the connection succeeded. The Connect call runs asynchronously and takes around 100 ms to finish, or longer if you connect to a remote server.
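A sketch of connecting and reacting to the result; the event names here are assumptions, so check the MonoBridge API for your SDK version:

```csharp
// Subscribe before connecting, since Connect finishes asynchronously.
monoBridge.onConnected.AddListener(bridge => Debug.Log("Connected"));
monoBridge.onConnectionError.AddListener(ex => Debug.LogError(ex));
monoBridge.Connect(endpointData);
```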
Check 'Run in Background' in the Unity settings under Project Settings -> Player so that clients continue to run even when they are not the active window.
The schema has two uses in your project:
As a basis for code generation, creating various structs and methods that can be used in your project to communicate with the replication server.
As a description for the Replication Server, telling it what the data in your project looks like so that it can receive, store, and send this data to its clients.
When using MonoBehaviours and CoherenceSync you often don't need to interact with the schema directly. Here's an example of a small schema:
To learn more about the types and definitions available in a schema, see the .
coherence uses the concept of ownership to determine who is responsible for simulating each entity in the game world. By default, each client that connects to the server owns and simulates the entities they create. There are a lot of situations where this setup is not adequate. For example:
The number of entities in the game world could be too large to be simulated by the players on their own, especially if there are few players and the world is very large.
The game might have an advanced AI that requires a lot of coordination, which makes it hard to split up the work between clients.
It's often desirable to have an authoritative object that ensures a single source of truth for certain data. State replication and "eventual correctness" don't give us these guarantees.
Perhaps the game should run a persistent simulation, even while no one is playing.
With coherence, all of these situations can be solved using dedicated simulation servers. They behave very much like normal game clients, except they run on their own with no player involved. Usually they also have special code that only they run (and not the clients). It is up to the game developer to create and run these programs somewhere in the cloud, based on the demands of their particular game.
If you have determined that you need one or more simulation servers for your game, there are multiple ways you can go about implementing these. You could create a separate Unity project and write the specific code for the simulation server there (while making sure you use the same schema as your original project).
An easier way is to use your existing Unity project and modify it in a way so that it can be started either as a normal game client, or as a simulation server. This will ensure that you maximize code sharing between clients and servers -- they both do simulation of entities in the same game world after all.
Please note that to build a simulation server, you have to build for the Linux platform.
To determine whether to start a build as client or simulation server, you can use command line arguments:
To pass the command line argument, start the application like this:
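The original launch example isn't shown here. As a sketch, the argument could be read on the C# side like this; the --simulator flag name is an assumption, and any argument your project settles on works the same way:

```csharp
using System;
using System.Linq;

public static class StartupMode
{
    // True when the build was launched with the flag, e.g.:
    //   ./Game.x86_64 --simulator
    public static bool IsSimulator =>
        Environment.GetCommandLineArgs().Contains("--simulator");
}
```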
The simulation server is started with the following parameters in the cloud
The SDK provides a static helper class to access all the above parameters in the C# code called SimulatorUtility.
Change your build target to Linux and tick Headless Mode.
Creating a player account is the first step towards using the coherence Cloud API. It is required in order to use the rest of the services.
To initialize the Cloud API you need to provide a that can be obtained from the .
The easiest way to get started is by using a guest account. The only thing that is needed is to call LoginAsGuest
. This will create a random username / password combination and will authenticate the player with the coherence Cloud.
Once logged in, the credentials are securely persisted so if the game is restarted the player will be able to log in automatically.
If the game is uninstalled then the account credentials will be lost and a new guest account will be created next time the game is installed.
Another alternative is to login with a username and a password. You have to provide the user interface.
This example initializes the Cloud API, checks for an existing session and, if no session was found or if it expired, logs in the player as guest.
To connect with multiple clients locally, publish a build for your platform (File > Build and Run
, details in ). Run the Replication Server and launch the build any number of times. You can also enter Play Mode in the Unity Editor.
To connect to cloud hosted servers, see and documentation.
When building stand-alone builds, Unity also has an option for . This is great for simulation servers since we're not interested in rendering any graphics on these anyway. By using headless mode we get a leaner executable that is easier to deploy in the cloud.
Refer to the .
coherence Network Playground (Unity Version 2020.1.9 or later)
Many of the primitive data types in coherence support configuration that makes it possible to optimize the data being sent over the network. These settings are made individually for each field of a component and are then used throughout your code base.
The field settings use the meta data syntax in the schema, which looks like this:
The meta data always goes at the end of the line and can be set on both definitions and the fields within a definition, like this:
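As an illustration, such a field-level definition might look like the sketch below; the exact meta-data keys and quoting should be checked against the schema reference:

```
component Health
  value Int [bits "8", priority "low"]
```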
In this example, a component named Health would be created, but instead of using the default 24 bits when sending its value, it would use just 8. Any updates to it would also be deprioritized compared to other components, so it could potentially be sent a bit late if bandwidth is scarce.
Component updates do not only contain the actual data of the update, but also information about what entity should be affected, etc. This means that the total saving of data won't be quite as large as you'd think when going from 24 to 8 bits. Still, it's a great improvement!
All components support a piece of meta data that affects how highly the Replication Server will prioritize sending out updates for that particular component.
This meta data is set on components, like this:
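As an illustrative sketch (verify the exact syntax against the schema reference), a component-level priority might look like:

```
component Health [priority "low"]
  value Int
```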
The available priority levels are:
"very-low"
"low"
"mid" (default)
"high"
"very-high"
Some of the primitive types support optimizing the number of bits used to encode them when sending them over the network. It's worthwhile to think through whether you can get away with less information than the default, to make room for more frequent updates.
All of these types support the same two settings:
bits
– how many bits the type should use to encode its floating point values
scale
– the maximum and minimum value of each of its scalars
Integers can be configured to only hold a certain range via:
range-min
– the lowest possible value that the integer can hold
range-max
– the largest possible value that the integer can hold
Using these settings you can emulate other numeric types like char
, short
, unsigned int
, etc.
Right now quaternions don't have any settings, but this will be remedied soon.
The other types don't have any settings that affect the number of bits they use. If they take up too much bandwidth, you'll have to send them less often, using priority, update frequency, or LOD-ing.
These are the primitive types supported in a coherence schema:
Uses a default range of -9999 to 9999.
Uses a default scale of +/- 2400, which is encoded using 24 bits.
Encoded using a single bit.
Encoded using two floats.
Encoded using three floats.
A string with up to 63 characters encoded using 6 bits for length.
An array of bytes with an upper limit of 511 bytes encoded using 9 bits for length.
Packet fragmentation is not supported yet in this version, so packets bigger than the internal MTU (~1200 bytes) may never be sent.
The Entity
type is used to keep references to other Entities. Technically the reference is stored as a local index that has to be resolved to an actual Entity before usage. Also, since a client might not know about the referenced Entity (due to it being outside of its live query) an Entity reference might be impossible to resolve in some circumstances. Your client code will have to take this into account and be programmed in a defensive way that handles missing Entities gracefully.
Several of the primitive types can be configured to take up less space when sent over the network, see field settings.
The most common definition in schemas is components, which correspond to replicated fields for baked MonoBehaviours.
The definition takes a name of the component, and on the following lines an indented list of member variables, each one followed by their primitive type (see above.) The indentation has to be exactly 2 spaces. Here's how it might look:
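For instance, a Portal component matching the description below could be sketched like this, using primitive type names from the list above:

```
component Portal
  locked Bool
  connectedTo Entity
  size Float
```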
After code generation, this will give access to a component with the name Portal
that has the members locked
, connectedTo
, and size
.
Optionally, each member/type pair can have additional meta data listed on the same line, using the following syntax:
This is how it might look in an actual example:
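As an illustrative sketch (the exact meta-data keys should be checked against the schema reference):

```
component Portal
  locked Bool [priority "high"]
  size Float [bits "16"]
```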
There are some components that are built into the Protocol Code Generator and that you will always have access to.
Archetypes are used to optimize the sending of data from the server to each client, lowering the precision or even turning off whole components based on the distance from the live query to a particular Entity. Read more about how to define them in the schema on the page Archetypes and LOD-ing.
Commands are defined very similarly to components, but they use the command
keyword instead.
Here's a simple example Command:
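A sketch of what such a definition might look like; the exact placement of the routing meta data is an assumption:

```
command Damage [routing "AuthorityOnly"]
  amount Float
```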
Routing defines to whom the command can be sent. Currently, two values are supported:
AuthorityOnly
- command will be received only by the owner of the target entity
All
- command will be received by every client that has a copy of this entity
When using reflection, there are limitations to what types are supported in commands. See the Supported types in commands section for more information.
This document explains how to use Archetypes and LOD-ing manually. If you're using coherence with MonoBehaviours, see this page instead.
Level of Detail (or LOD-ing, for short) is a technique to optimize the amount of data being sent from the replication server to each client. Often a client doesn't need to get as much information about an entity if it's far away. The way this is achieved when working with coherence is by using archetypes.
Archetypes let you group components together and create distinct "levels of detail". Each such level must have a distance threshold, and a list of components that should be present at that distance. Optionally it can also contain per-field overrides that make the primitive data types in the components take up less space (at the cost of less precision.)
To define an archetype, use the archetype
keyword in your schema, then list the LODs in ascending order. Notice that LOD 0 does not need a distance, since it always starts at 0. Here's an example of a simple archetype:
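A sketch of such a definition, matching the distances and components discussed below; the exact LOD and distance syntax is an assumption, so verify it against the schema reference:

```
archetype Enemy
  lod 0
    WorldPosition
    WorldOrientation
    Health
  lod 1 [distance "10"]
    WorldPosition
    WorldOrientation
  lod 2 [distance "200"]
    WorldPosition
```

Here, Health stands in for whatever extra components make up the "full entity" at close range.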
In this example, any Enemy entity that is 200 or more units away from the live query of a particular client will only get updates for the WorldPosition
. Any client with a distance of 10 – 200 will get WorldPosition
and WorldOrientation
, and anything closer than that will get the full entity.
Given one or more archetype definitions in your schema, you will have access to a few different data types and methods in your project (these will be generated when you run the Protocol Code Generator.)
ArchetypeComponent
– this component has a field index that keeps track of which one of the archetypes in your schema is being used. If you add the ArchetypeComponent
yourself you have to use the static constants in the Coherence.Generated.Archetype
to set the index. These all have the name "archetype name" + "Index", e.g. EnemyIndex
in the example above.
An individual "tag" component (with no fields) called "archetype name" + "Archetype", e.g. EnemyArchetype
in the example above. This component can be used to create optimized ForEach queries for a specific archetype.
LastObservedLod
– this component holds the current LOD for the entity. This can be used to detect when the entity changes LOD, if that's something you want to react to. Note that this component is not networked, since the exact LOD for an entity is unique for each client.
Static helper methods on the Archetype
class to instantiate the archetype in a usable state. These are named "Instantiate" + "archetype name", e.g. InstantiateEnemy
in the example above.
If a component isn't present at a certain LOD, no updates will be sent for that component. This is a great optimization, but sometimes a little too extreme. We might actually need the information, but be OK with a slightly less fine-grained version of it.
To achieve this, you can use the same field settings that are available when defining components, but use them as overrides on specific LOD's instead.
Here's an example of the syntax to use:
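One possible shape for such an override, using the Health component's value field; treat the exact override syntax as an assumption to verify against the schema reference:

```
archetype Enemy
  lod 0
    Health
  lod 1 [distance "50"]
    Health
      value [bits "8"]
```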
Notice that the settings are exactly the same as when defining components. To override a field, you must know its name (value
in this case.) Any field that is not overridden will use the settings of the LOD above, or the original component if at LOD 0.
Each component in an archetype can also override the default priority for that component. Just add the priority meta data after the component, like this:
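As a sketch (the syntax is an assumption to verify against the schema reference):

```
archetype Enemy
  lod 0
    WorldPosition [priority "high"]
```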
To read more about priority, see the page about Field Settings.