This page lists changes in coherence 1.0.0 that might affect existing projects when upgrading from a 0.10.x version.
Version 1.0.0 includes breaking changes to the baked scripts, which means your current ones will cause compilation errors. It is recommended to delete your current baked scripts under Assets/coherence/baked before performing a coherence Unity SDK update.
For automatic migrations to run smoothly, we recommend that no Prefab with the CoherenceSync component attached also has missing component scripts. Such Prefabs stop Unity from saving assets, which can interrupt the automatic data migrations.
There have been a number of updates to the old CoherenceMonoBridge, which have resulted in a rebranding of the component to CoherenceBridge.
If your custom scripts have references to the class CoherenceMonoBridge, you will have to rename all these references to CoherenceBridge.
The script asset GUID and the public API of the component remain unchanged, so asset references to CoherenceMonoBridge components should remain intact.
In coherence 1.0.0, we have revamped our asset management systems to make them more scalable and customizable, both in Editor and in runtime. Read more about the new implementations and the possibilities in the Asset Management article.
Upon upgrading the SDK, a new CoherenceSyncConfigRegistry asset will be automatically created in Assets/coherence/CoherenceSyncConfigRegistry.asset and populated with sub-assets for each of your Prefabs that have a CoherenceSync component attached.
If the automatic migration completed successfully, you can browse all your networked Prefabs in the new CoherenceSync Objects window found under the coherence menu item:
If something has gone wrong during the automatic migration, you can restart the process by deleting the current CoherenceSyncConfigRegistry asset, and selecting the Reimport coherence Assets option under the coherence menu item. This will automatically create a config entry for each of your CoherenceSync Prefabs:
In previous versions of coherence, Prefabs with the CoherenceSync component would be automatically synced over the network in the Start method, and they would stop being synced only when destroyed. This prevented us from supporting things like object pooling and reusing the same CoherenceSync instance to represent different network entities across its lifetime.
As a result, CoherenceSync instances are now automatically synced and unsynced with the network in the OnEnable and OnDisable methods of the MonoBehaviour. This means you can disable the GameObject instance in the hierarchy and it will stop being synced; you can also keep the local visual representation by disabling only the CoherenceSync component.
You can learn more about the CoherenceSync lifetime cycle in the Order Of Execution page.
This change may or may not affect your current project. If you wish to use the new Object Pooling feature (or make your own!), you might need to update your custom components' logic to accommodate the new paradigm.
The Play API was the public API we offered for connecting to your coherence Cloud project at runtime. It has been deprecated in favor of a new non-static API called CloudService, with a separate ReplicationServerRoomsService so you can talk to self-hosted Room Replication Servers with much more customization and without having to use the coherence Cloud.
The Play API includes the following classes:
PlayResolver
PlayClient
Play
WorldsResolverClient
RoomsResolverClient
If you were using PlayResolver static calls, you can now access the CloudService instance from CoherenceBridge instead, and use the CloudService.Worlds or CloudService.Rooms instances to fetch Worlds, or to create, delete and fetch available Rooms, respectively.
CloudService offers callback-based methods and an async variant for all of the available requests.
You can learn more about the Cloud Service API under the Unity Cloud Service section.
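As a rough illustration, accessing the service from a bridge might look like the sketch below. The FetchWorldsAsync call and the Coherence.Toolkit namespace are assumptions for illustration; check the Unity Cloud Service section for the exact API.

```csharp
using Coherence.Toolkit; // namespace assumed
using UnityEngine;

public class CloudServiceExample : MonoBehaviour
{
    [SerializeField] private CoherenceBridge bridge;

    private async void Start()
    {
        // CloudService is exposed on the CoherenceBridge instance
        // (this replaces the old static PlayResolver calls).
        var cloud = bridge.CloudService;

        // Hypothetical async call: list the available Worlds through CloudService.Worlds.
        var worlds = await cloud.Worlds.FetchWorldsAsync();
        Debug.Log($"Found {worlds.Count} worlds");
    }
}
```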
No Domain Reload is a Unity Editor option that allows users to enter Play mode very fast without having to recompile all your code. This wasn't properly supported in earlier versions of coherence, but now it is!
You can find this option under Edit > Project Settings > Editor > Enter Play Mode Settings.
Keep in mind that you might have custom implementations in your project that prevent you from using this option successfully. The most common are:
Static state and initializers: since your app domain isn't reloaded when entering Play mode, static state isn't reset either, which might cause issues (see the sketch after this list).
SerializeReference instances: similarly, SerializeReference instances won't be recreated when entering Play mode, so you will have to make sure that their state is properly reset on application quit.
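For the static state case, Unity's recommended pattern is to reset statics explicitly when Play mode starts. A minimal sketch, assuming a hypothetical Score class with static state:

```csharp
using UnityEngine;

public static class Score
{
    // Static state survives entering Play mode when Domain Reload is disabled.
    public static int Points;

    // Reset it explicitly so every Play mode session starts from a known state.
    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.SubsystemRegistration)]
    private static void ResetStatics()
    {
        Points = 0;
    }
}
```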
In previous releases of coherence, this event would be fired strictly for CoherenceSync prefab instances created by coherence, which means it would be fired for instances you didn't have authority over.
In 1.0, this event will be fired for every instance of CoherenceSync that starts being synchronized over the network.
If you wish to hook behaviour to non-authority instances, please use the OnStateRemote event instead, which is fired when a CoherenceSync instance is non-authoritative.
In 1.0, there have been two major changes to Client Connection instances.
Client Connection instances now live in the DontDestroyOnLoad scene. Since we now support Scene transitioning, and Client Connection instances are managed by coherence, they need to survive the destruction of the Scene they were in.
References to the Client Connection Prefab in the CoherenceBridge Component have been changed from hard referencing a Unity Prefab, to referencing a CoherenceSyncConfig asset. If you were using a Unity Prefab reference, it still exists serialized but is deprecated. If you wish to stop using the Unity Prefab reference, please delete the CoherenceBridge Component and add it again.
The Coherence.Entity namespace has been renamed: using Coherence.Entity; -> using Coherence.Entities;
It is recommended to switch to the Assets baking strategy before upgrading. If you upgrade while using the Source Generator, you might see an error popup guiding you to switch to Assets. After the upgrade and a clean Bake, you can switch back to the Source Generators baking strategy.
Notable API changes include:
Coherence.Network has been removed. The OnConnected and OnDisconnected events have moved to IClient, and their signatures have changed.
BridgeStore.GetBridge is now obsolete and is replaced with BridgeStore.TryGetBridge.
CoherenceSync.isSimulated is now obsolete and is replaced with CoherenceSync.HasStateAuthority.
CoherenceSync.RequestAuthority is now obsolete and is replaced with CoherenceSync.RequestAuthority(AuthorityType).
For more information, see Release Notes.
For detailed documentation of the updated CoherenceSync component, see CoherenceSync.
The CLI tools have been updated, especially the ones that handle Simulators. To learn more about this, see CLI utilities.
Before upgrading, back up your project to avoid any data loss.
APIs marked as deprecated in 1.0 and earlier are removed in 1.2. Make sure your project is not using them before upgrading.
| Removed | Replacement |
|---|---|
| Sample UI | coherence / Explore Samples / Connection Dialog(s) |
| CoherenceSyncConfigManager | |
| Source Generator baking strategy | Falls back to Assets baking strategy |
Some components, like the CoherenceBridgeSender, don't exist anymore. If your project was using any of these legacy components, you might see warnings in the Console window while in Play Mode. You'll want to find the affected GameObjects and remove the missing component references. This is especially important for Prefabs, since missing components affect the ability to save them.
There has been a major refactor on how we deal with the registry.
First off, we're deprecating the registry holding configs as sub-assets. This has been the default up to 1.1; in 1.2, configs are stored in the Assets/coherence/CoherenceSyncConfigs folder instead.
For 1.2, we have avoided automatically migrating existing subassets. You can trigger this operation manually from the registry inspector (Assets/coherence/CoherenceSyncRegistry). We don't automate this operation, since the extraction changes the asset GUIDs of the configs, which could lead to missing references. This is especially true if your project references the configs directly, instead of going through the registry. Keep this in mind when you decide to extract existing subassets.
It is important that you perform the aforementioned migration as soon as possible, since the SDK will deprecate reading from subassets in upcoming releases.
If your project was using any registry-related APIs, you might see compilation errors on upgrade, since the APIs have changed considerably. Check the classes mentioned above.
In this update, bundling the Replication Server has been revamped to work with Streaming Assets. On the Settings window, instead of different toggles per platform, there's now one unified toggle:
Tips and best practices to upgrade the SDK while avoiding risk of data loss
If you wish to get started and install the coherence Unity SDK, please refer to the installation instructions.
Open Unity Package Manager under Window > Package Manager and check the Packages - coherence section. It will state the currently installed version and if there are any recommended updates available:
When upgrading coherence, please do it one major or minor version at a time (considering semantic versioning, so MAJOR.MINOR.PATCH).
For example, if your project is on 0.7 and would like to upgrade to 1.0, upgrade from 0.7 to 0.8, then from 0.8 to 0.9, and so on. Always back up your project before upgrading, to avoid any risk of losing your data.
Versions prior to 0.9 are considered legacy. We don't provide a detailed upgrade guide for those versions.
Refer to the specific upgrade guides provided as subsections of this article.
Defines a network entity and what data to sync from the GameObject. Anything that needs to be synchronized over the network can use a CoherenceSync component. You can select data from your GameObject hierarchy that you'd like to sync across the network.
Queries an area of interest, so that you can read/write across the network on the desired location. In our Starter Project, the LiveQuery position is static with an extent large enough to cover the entire playable level. If the World was very large and potentially set over multiple Simulators, the LiveQuery could be attached to the playable character or camera.
Handles the connection between the coherence transport layer and the Unity scene.
Enables a Simulator to take control of the state authority of a Client's CoherenceSync, while retaining input authority.
This component is added automatically by CoherenceSync when using the Server Side with Client Input simulation type.
The Bridge establishes a connection between your scene and the coherence Replication Server. It makes sure all networked entities stay in sync.
When you place a GameObject in your scene, the Bridge detects it and makes sure all the synchronization can be done via the CoherenceSync component.
At runtime, you can inspect which Entities the Bridge is currently tracking.
A Bridge is associated with the scene it's instantiated on, and keeps track of Entities that are part of that scene. This also allows for multiple connections at the same time coming from the game or within the Unity Editor.
The CoherenceBridge offers several Unity Events in its Inspector where you can hook your custom game logic:
This event is invoked when the Replication Server state has been fully synchronized; it is fired after OnConnected.
For example, if you connect to an ongoing game that has five players connected, when this event is fired all the entities and information of all the other players will already be synchronized and available to be polled.
This event is invoked the moment you establish a connection with the Replication Server, but before any synchronization has happened.
Following the previous example, if you connect to an ongoing game that has five players connected, when this event is fired, you won't have any entities or information available about those five players.
This event is invoked when you disconnect from a Replication Server. In the parameters of the event you will be given a ConnectionCloseReason value that will explain why the disconnection happened.
This event is invoked when you attempt to connect to a Replication Server but the connection fails; you are given a ConnectionException with information about the error.
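A minimal sketch of hooking some of these events from code. The event names (onLiveQuerySynced, onDisconnected) and their delegate signatures are assumptions based on the descriptions above; you can also wire them up directly in the Inspector.

```csharp
using Coherence.Connection;
using Coherence.Toolkit;
using UnityEngine;

public class BridgeEventsExample : MonoBehaviour
{
    [SerializeField] private CoherenceBridge bridge;

    private void OnEnable()
    {
        // Fired once the initial Replication Server state is fully synced (after the connected event).
        bridge.onLiveQuerySynced.AddListener(OnLiveQuerySynced);

        // Fired when the connection to the Replication Server is closed.
        bridge.onDisconnected.AddListener(OnDisconnected);
    }

    private void OnLiveQuerySynced(CoherenceBridge sender)
        => Debug.Log("World state synced, other players are now available.");

    private void OnDisconnected(CoherenceBridge sender, ConnectionCloseReason reason)
        => Debug.Log($"Disconnected: {reason}");
}
```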
The Client Connections system allows you to keep track of how many users are connected and uniquely identify them, as well as easily send server-wide messages.
Currently, the maximum number of persistent Entities supported by the Replication Server is 32 000. This limit will be increased in the near future.
In addition to the LiveQuery, coherence also supports filtering objects by tag. This is useful when you have some special objects that should always be visible regardless of World position.
For a guide on how to use TagQuery, see the dedicated documentation.
The provided APIs have changed too; the exposed functionality has been documented.
When the package imports, if there are any compilation errors, Unity won't do a domain reload. This means the previous package is still in memory and executing, but the contents of the package (might) have changed drastically. This can yield numerous errors and warnings in the Console, which will fade away once the compilation (followed by a domain reload) is completed.
If, after compilation, you are still experiencing issues using the SDK (errors when hitting the Bake button, CoherenceSync throwing exceptions or warnings that weren't there before the upgrade, etc.), please reach out to us through our community support channels.
Refer to the troubleshooting section at any time to find solutions to issues encountered during or after upgrading.
You can use BridgeStore.TryGetBridge to get a CoherenceBridge associated with a scene:
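A minimal sketch, assuming the class is exposed as CoherenceBridgeStore and that TryGetBridge takes the scene plus an out parameter (the exact class name and signature may differ):

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class FindBridgeExample : MonoBehaviour
{
    private void Start()
    {
        // Assumed signature: resolve the bridge registered for this GameObject's scene.
        if (CoherenceBridgeStore.TryGetBridge(gameObject.scene, out CoherenceBridge bridge))
        {
            Debug.Log($"Found bridge: {bridge.name}");
        }
    }
}
```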
You can read more about the Client Connections system in its dedicated documentation.
If you have a coherence Cloud account, you can connect to Worlds or Rooms hosted in the coherence Cloud. Use the CloudService instance from CoherenceBridge to fetch existing Worlds, or to create or fetch existing Rooms. Once you have a valid World or Room, you can use the JoinWorld or JoinRoom methods to easily connect your Client.
You can read more about the Unity Cloud Service in its dedicated section.
The way you get information about the World is through LiveQueries. We set criteria for what part of the World we are interested in at each given moment. That way, the Replicator won’t send information about everything that is going on in the Game World everywhere, at all times.
Instead, we will just get information about what’s within a certain area, kind of like moving a torch to look around in a dark cave.
For a guide on how to use LiveQuery, see Areas of Interest.
More complex areas of interest types are coming in future versions of coherence.
CoherenceSync is a component that should be attached to every networked GameObject. It may be your player, an NPC or an inanimate object such as a ball, a projectile or a banana: anything that needs to be synchronized over the network and turned into an Entity.
Once a CoherenceSync is added to a Prefab, you can select which individual public properties you would like to sync across the network, expose methods as Network Commands, and configure other network-related properties.
To start syncing variables, open the Configure window, which you can access from the CoherenceSync's Inspector.
Any components attached to the GameObject with CoherenceSync that have public variables will be shown here and can be synced across the network.
To start syncing a property, just use the checkbox. Optionally, choose how it is interpolated on the right.
Network Commands are public methods that can be invoked remotely. In other networking frameworks they are often referred to as RPCs (Remote Procedure Calls).
You can mark a method as a Command from the Configure window, in the same way as syncing properties, by going to the second tab, labelled "Commands".
For more info, refer to the page about messaging with Commands.
When an entity is instantiated on the network, other Clients will see it but they won't have authority over it. It is therefore important to ensure that some components behave differently when an entity is non-authoritative.
To quickly achieve this, you can leverage Component Actions, which are located in the Components tab of the Configure window:
The sections above describe UI-based workflows to sync variables and commands. We also offer a code-based workflow, which leverages [Sync] and [Command] C# attributes directly from within code.
(You can notice in the screenshot above how the isBeingCarried property is synced in code, and displays the [Sync] tag in front of its name.)
The two workflows can be used together, even on the same Prefab!
You can also create your own, custom Component Actions.
When you create a networked GameObject, you automatically become the owner of that GameObject. That means only you are allowed to update its values, or destroy it. But sometimes it is necessary to pass ownership from one Client to another. For example, you could snatch the football in a soccer game or throw a mind control spell in a strategy game. In these cases, you will need to transfer ownership over these Entities from one Client to another.
When an authority transfer request is performed, an Entity can be set up to respond in different ways to account for different gameplay cases:
Not Transferable - Authority requests will always fail. This is a typical choice for player characters.
Steal - Authority requests always succeed.
Request - This option is intended for conditional transfers. The owner of an Entity can reply to an authority request by either accepting or denying it.
Approve Requests - The requests will succeed even if no event listener is present.
Note that for Request, a listener to the OnAuthorityRequested event needs to be provided in code. If not present, the optional parameter Approve Requests can be used as a fallback. This is only useful in corner cases where the listener is added and removed at runtime. In general, you can simply set the transfer style to Steal and all requests will automatically succeed.
Any Client or Simulator can request ownership by invoking the RequestAuthority() method on the CoherenceSync component of a Network Entity:
A request will be sent to the Entity's current owner. They will then accept or deny the request, and complete the transfer. If the transfer succeeds, the previous owner is no longer allowed to update or destroy the Entity.
When a Client disconnects, all the Network Entities created by that Client are usually destroyed. If you want any Entity to stay after the owner disconnects, you need to set Entity lifetime type of that Prefab to Persistent.
Session Based - the Entity will be removed on all other Clients, when the owner Client disconnects.
Persistence - Entities with this option will persist as long as the Replication Server is running. For more details, see Configuring persistence.
Orphaned Entities
By making the GameObject persistent, you ensure that it remains in the game world even after its owner disconnects. But once the GameObject has lost its owner, it will remain frozen in place because no Client is allowed to update or delete it. This is called an orphaned GameObject.
In order to make the orphaned GameObject interactive again, another Client needs to take ownership of it. To do this, one can use APIs (specifically, Adopt()) or – more conveniently – enable Auto-adopt orphan on the Prefab.
Allow Duplicates - multiple copies of this object can be instantiated over the network. This is typical for bullets, spell effects, RTS units, and similar repeated Entities.
No Duplicates - ensures objects are not duplicated by assigning them a Unique ID.
Manual Unique ID - You can set the Unique ID manually in the Prefab. Only one Prefab instance will be allowed at runtime; any other instance created with the same UUID will be destroyed.
Prefab Instance Unique ID - When creating a Prefab instance in the Scene at Editor time, a special Prefab Instance Unique ID is assigned. If the manual UUID is blank, the UUID used at runtime will be the Prefab Instance ID.
Manual ID vs. Prefab Instance ID
To understand the difference between these two IDs, consider the following use cases:
Manager: If your game has a Prefab of which there can only be 1 in-game instance at any time (such as a Game Manager), assign an ID manually on the Prefab asset.
Multiple interactable scene objects: If you have several instances of a given Prefab, but each instance must be unique (such as doors, elevators, pickups, traps, etc.), each instance created at Editor time will have an auto-generated Prefab Instance Unique ID. This ensures that when 2 players come online, they only bring one copy of any given door/trap/pickup, but each of them still replicates its state across the network to all Clients currently in the same scene.
Defines which type of network node (Client or Simulator) can have authority over this Entity.
Client Side - The Entity is by default owned by the Client that spawns it. It can be also owned by a Simulator.
Server Side - The Entity can't be owned by a normal Client, but only by a "server" (in coherence called Simulator).
Server Side with Client Input - This automatically adds a CoherenceInput component. Ownership is split: a Simulator holds State Authority, while a Client has Input Authority. See Server Authoritative setup for more info.
You can hook into the events fired by the CoherenceSync to conveniently structure gameplay in response to key moments of the component's lifecycle. Events are initially hidden, but you can reveal them using the button at the bottom of the Inspector called "Subscribe to...".
Once revealed, you can use them just like regular UnityEvents:
You can also subscribe to these events in code.
You might also want to check out the CoherenceSync instance lifecycle section at the bottom of the Order of execution article.
When CoherenceSync variables/components are sent over the network, by default, Reflection Mode is used to sync all the data at runtime. Whilst this is really useful for prototyping quickly and getting things working, it can be quite slow and unperformant. A way to combat this is to bake the CoherenceSync component, creating a compatible schema and then generating code for it.
The schema is a file that defines which data types in your project are synced over the network. It is the source from which coherence SDK generates C# struct types (and helper functions) that are used by the rest of your game. The coherence Replication Server also reads the schema file so that it knows about those types and communicates them with all of its Clients efficiently.
The schema must be baked in the coherence Settings window, before the check box to bake this Prefab can be clicked.
When the CoherenceSync component is baked, it generates a new file in the baked folder called CoherenceSync<AssetIdOfThePrefab>. This class will be instantiated at runtime, and will take care of networked serialization and deserialization, instead of the built-in reflection-based one.
You can find more information on the page about Baking.
The CoherenceSync component will help you prepare an object for network synchronization. It also exposes an API that allows us to manipulate the object during runtime.
CoherenceSync will query all public variables and methods on any of the attached components, for example Unity components such as Transform, Animator, etc. This will include any custom scripts, including third-party Asset Store packages that you may have downloaded.
Refer to the configuration documentation to learn how to configure your Prefab to network state changes.
Instead of hard referencing Prefabs in your scripts to instantiate them, you can reference a CoherenceSyncConfig and instantiate your local Prefab instances through our API. This will utilize the internal INetworkObjectProvider and INetworkObjectInstantiator interfaces to load and instantiate the Prefab in a given networked scene (a scene with a CoherenceBridge component in it).
You can also hard reference the Prefab in your script, and use our extensions to instantiate the Prefab easily using the internal INetworkObjectInstantiator interface implementation. The main difference is that the previous approach doesn't need a Prefab hard reference, and you won't have to change the code if the way that the Prefab is loaded into memory changes (for example, if you go from Resources to load it via Addressables).
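A rough sketch of the config-based approach; the GetInstance method name and its parameters are assumptions for illustration, so check the Asset Management documentation for the exact API:

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class SpawnFromConfig : MonoBehaviour
{
    // Reference the CoherenceSyncConfig asset instead of the Prefab itself.
    [SerializeField] private CoherenceSyncConfig playerConfig;

    public void Spawn(Vector3 position)
    {
        // Hypothetical call: the config resolves the loader and instantiator set up for this Prefab.
        CoherenceSync instance = playerConfig.GetInstance(position, Quaternion.identity);
        Debug.Log($"Spawned {instance.name}");
    }
}
```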
This page describes the order of various coherence events and scripts in relation to Unity's script lifecycle.
Additionally, take a look at your project's Script Execution Order settings by opening Edit > Project Settings and selecting the Script Execution Order category. See this for more details.
Depending on the reason for a disconnection, the onDisconnected event can be raised from different places in the code, including LateUpdate.
When a Prefab instance with CoherenceSync is created at runtime, it will be fully synchronized with the network in the OnEnable method of CoherenceSync. This means that you can expect your custom Components to have fully resolved synchronized values and authority state in your Awake method. It occurs in the following order:
Awake() is called: internal initialization.
OnEnable() is called: the instance synchronizes with a new or existing Network Entity.
OnBeforeNetworkedInstantiation event is invoked.
Initial component updates are applied (for entities you have no authority over).
OnNetworkedInstantiation event is invoked.
OnStateAuthority or OnStateRemote (for authority or non-authority instances respectively) event is invoked.
Awake() is called on your own components: at this point, if you get the CoherenceSync component, you can expect networked variables and authority state to be fully resolved.
On this page, we will learn how coherence handles loading CoherenceSync Prefabs into memory and instantiating them when a new remote entity appears on the network. You will also learn how to hook in your own asset loading and instantiation systems seamlessly.
Whenever you start synchronizing one of your prefabs, either by adding the CoherenceSync component manually or clicking the Sync with coherence toggle in the prefab inspector, coherence will automatically create a CoherenceSyncConfig object that will be added to the CoherenceSyncConfigRegistry asset found in the Assets/coherence folder.
This CoherenceSyncConfig object allows us to do the following:
Hard reference the prefab in the Editor: this means that whenever we have to do postprocessing on synced prefabs, we don't have to do a lookup or load them from Resources.
Serialize the method of loading and instantiating this prefab at runtime.
Soft reference the prefab at runtime with a GUID: this means we can access the loading and instantiating implementations without having to load the prefab itself into memory.
All your CoherenceSync prefabs have a related CoherenceSyncConfig object. You can inspect all your prefabs in the CoherenceSync Objects window, found under the coherence > CoherenceSync Objects menu item:
You can also manually inspect your CoherenceSyncConfig objects by selecting the CoherenceSyncConfigRegistry asset in Assets/coherence/CoherenceSyncConfigRegistry.asset:
You can also find the related CoherenceSyncConfig in the Inspector of the CoherenceSync component, and you can edit the Config directly from there:
This option allows you to specify how this prefab will be loaded into memory at runtime. We support three default implementations, or you can create your own. The three default implementations are Resources, Direct Reference and Addressables; these are managed automatically by coherence, so you won't have to worry much about them.
The Resources loader will be used if your prefab is inside a Resources folder. If you wish to use any other loading method, you will be prompted to move the prefab outside of the Resources folder.
This loader will be used if your prefab is outside of a Resources folder and is not marked as Addressable. In that case we need to hard reference your prefab in the CoherenceSyncConfig, which means it will always be loaded into memory from the moment you start your game.
Addressables
You can implement the INetworkObjectProvider interface to create your custom implementations that will be used by coherence when we need to load the prefab into memory.
Custom implementations can be Serializable and have your own custom serialized data.
Implementations of this interface will be automatically selectable via the Load via option in the CoherenceSyncObject asset.
This option allows you to specify how this prefab will be instantiated at runtime. We support three default implementations, or you can create your own: Default, Pooling, or DestroyCoherenceSync.
This instantiator will create a new instance of your prefab, and when the related network entity is destroyed, this prefab instance will also be destroyed.
This instantiator supports object pooling: instead of always creating and destroying instances, the pool instantiator will attempt to reuse existing instances. It has two options:
Max Size: maximum size of the pool for this prefab, instances that exceed the limit of the pool will be destroyed when returned.
Initial Size: coherence will create this amount of instances on app startup.
This instantiator will create a new instance of your prefab, but when the related network entity is destroyed, instead of completely destroying the object it will only destroy or disable the CoherenceSync component.
You can implement the INetworkObjectInstantiator interface to create your custom implementations that will be used by coherence when we need to instantiate a prefab in the scene.
Custom implementations can be Serializable and have your own custom serialized data.
Implementations of this interface will be automatically selectable via the Instantiate via option in the CoherenceSyncObject asset.
This option is only available if you have the Addressables package installed.
This loader will be used if your prefab is marked as an Addressable asset, and it will be soft referenced using the Addressables class.
coherence can sync the following types out of the box:
bool
int
uint
byte
char
short
ushort
float
string
Vector2
Vector3
Quaternion
GameObject
Transform
RectTransform
CoherenceSync
SerializeEntityID
byte[]
long
ulong
Int64
UInt64
Color
double
RectTransform is still in an experimental phase - use at your own discretion!
Commands are network messages sent from one CoherenceSync to another CoherenceSync. Functionally equivalent to RPCs, commands bind to public methods accessible on the GameObject hierarchy that CoherenceSync sits on.
In the design phase, you can expose public methods the same way you select fields for synchronization: through the Configure window on your CoherenceSync component.
By clicking on the method, you bind to it, defining a command. The grid icon on its right lets you configure the routing mode. Commands with the Send to Authority Only mode can be sent only to the authority of the target CoherenceSync, while ones with Send to All Instances can be broadcast to all Clients that see it. The routing is enforced by the Replication Server as a security measure, so that outdated or malicious clients don't break the game.
To send a command, we call the SendCommand method on the target CoherenceSync object. It takes a number of arguments:
The generic type parameter must be the type of the receiving Component. This ensures that the correct method gets called if the receiving GameObject has components that implement methods that share the same name.
Example: sync.SendCommand<Transform>(...)
If there are multiple commands bound to different components of the same type (for example, your CoherenceSync hierarchy has five Transforms, and you create a command for Transform.SetParent on all of them), the command is only sent to the first one found in the hierarchy which matches the type.
The first argument is the name of the method on the component that we want to call. It is good practice to use the C# nameof expression when referring to the method name, since it prevents accidentally misspelling it, or forgetting to update the string if the method changes name.
Alternatively, if you want to know which Client sent the command, you can add CoherenceSync sender as the first argument of the command, and the correct value will be automatically filled in by the SDK.
The second argument is an enum that specifies the MessageTarget of the command. The possible values are:
MessageTarget.All – sends the command to each Client that has an instance of this Entity.
MessageTarget.AuthorityOnly – sends the command only to the Client that has authority over the Entity.
MessageTarget.Other – sends the command to every Entity other than the one SendCommand is called on.
Mind that the target must be compatible with the routing mode set in the bindings, i.e. Send to authority will allow only MessageTarget.AuthorityOnly, while Send to all instances allows for both values.
Also, it is possible that the message never sends, as in the case of a command with MessageTarget.Other sent from the authority with a routing of Authority Only.
The rest of the arguments (if any) vary depending on the command itself. We must supply as many parameters as are defined in the target method and the schema.
Here's an example of how to send a command:
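A minimal sketch, assuming a hypothetical Bomb component with a bound Explode method (the argument order follows the description above; namespaces may differ slightly):

```csharp
using Coherence;
using Coherence.Toolkit;
using UnityEngine;

public class Bomb : MonoBehaviour
{
    [SerializeField] private CoherenceSync sync;

    // A public method bound as a command in the Configure window.
    public void Explode(int damage)
    {
        Debug.Log($"Boom! Damage: {damage}");
    }

    public void TriggerExplosion()
    {
        // Call Explode on every Client that has an instance of this entity.
        sync.SendCommand<Bomb>(nameof(Explode), MessageTarget.All, 10);
    }
}
```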
If you have the same command bound more than once in the same Prefab hierarchy, you can target a specific MonoBehaviour when sending a message via the SendCommand(Action action) method in CoherenceSync.
Additionally, if you want to target every bound MonoBehaviour, you can do so via the SendCommandToChildren method in CoherenceSync.
We don't have to do anything special to receive the command. The system will simply call the corresponding method on the target network entity.
If the target is a locally simulated entity, SendCommand will recognize that and not send a network command, but instead simply call the method directly.
While commands by default carry no information on who sent them in order to optimise traffic, you can create commands that include a ClientID as one of the parameters. Then, on the receiving end, compare that value with a list of connected Clients.
Another useful way to access ClientID is via CoherenceBridge, like this:
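A rough sketch; the ClientConnections.GetMine() call and the ClientId property are assumptions for illustration, so consult the Client Connections documentation for the exact members:

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class WhoAmI : MonoBehaviour
{
    [SerializeField] private CoherenceBridge bridge;

    public void LogMyClientId()
    {
        // Hypothetical accessors: fetch the local client connection and read its ID.
        var myConnection = bridge.ClientConnections.GetMine();
        Debug.Log($"My ClientID: {myConnection.ClientId}");
    }
}
```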
You can create your own implementation for these IDs or, more simply, use coherence's built-in Client Connections feature.
Sometimes you want to inform a bunch of different CoherenceSyncs about a change. For example, an explosion impact on a few players. To do so, we have to go through the instances we want to notify and send commands to each of them.
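A sketch of that pattern, using a simple scene-wide lookup for illustration (the Damageable component and its ApplyExplosion command are hypothetical; in a real project you would likely keep your own list of relevant CoherenceSyncs):

```csharp
using Coherence;
using Coherence.Toolkit;
using UnityEngine;

public class Damageable : MonoBehaviour
{
    // Hypothetical method bound as a command in the Configure window.
    public void ApplyExplosion(Vector3 center)
    {
        Debug.Log($"Explosion at {center}");
    }
}

public class ExplosionBroadcaster : MonoBehaviour
{
    public void NotifyExplosion(Vector3 center)
    {
        // Naive lookup for illustration: visit every CoherenceSync we have state authority over.
        foreach (var sync in FindObjectsOfType<CoherenceSync>())
        {
            if (!sync.HasStateAuthority)
                continue;

            sync.SendCommand<Damageable>(nameof(Damageable.ApplyExplosion), MessageTarget.All, center);
        }
    }
}
```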
In this example, a command will get sent to each CoherenceSync under the state authority of this Client. To make it affect only CoherenceSyncs matching certain criteria, you need to filter which CoherenceSyncs you send the command to yourself.
Some of the supported primitive types are nullable; this includes:
Byte[]
string
Entity references: CoherenceSync, Transform, and GameObject
Refer to the supported types page.
In order to send one of these values as a null (or default) we need to use special syntax to ensure the right method signature is resolved.
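A sketch of such a call; MyComponent and its TakeData method are hypothetical, and the tuple syntax follows the rules described below:

```csharp
using Coherence;
using Coherence.Toolkit;
using UnityEngine;

public class MyComponent : MonoBehaviour
{
    // Hypothetical command with nullable-typed parameters.
    public void TakeData(string label, byte[] payload) { }
}

public class NullArgsSender : MonoBehaviour
{
    [SerializeField] private CoherenceSync sync;

    public void SendEmptyData()
    {
        // Null arguments are wrapped in (Type, object) tuples so the signature can be resolved.
        sync.SendCommand<MyComponent>(
            nameof(MyComponent.TakeData),
            MessageTarget.All,
            (typeof(string), (string)null),
            (typeof(byte[]), (byte[])null));
    }
}
```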
Null-value arguments need to be passed as a ValueTuple<Type, object> so that their type can be correctly resolved. In the example above sending a null value for a string is written as:
(typeof(string), (string)null)
and the null Byte[] argument is written as:
(typeof(Byte[]), (Byte[])null)
Mis-ordered arguments, type mis-match, or unresolvable types will result in errors logged and the command not being sent.
When a null argument is deserialized on a client receiving the command, it is possible that the null value is converted into a non-null default value. For example, sending a null string in a command could result in clients receiving an empty string. As another example, a null Byte[] argument could be deserialized into an empty Byte[0] array. So, receiving code should be ready for either a null value or an equivalent default.
When a Prefab is not using a baked script there are some restrictions for what types can be sent in a single command:
4 entity references
maximum of 511 bytes total of data in other arguments
a single Byte[] argument can be no longer than 509 bytes because of overhead
Some network primitive types send extra data when serialized (like Byte arrays and string types) so gauging how many bits a command will use is difficult. If a single command is bigger than the supported packet size, it won't work even with baked code. For a good and performant game experience, always try to keep the total command argument sizes low.
When a Client receives a command targeted at AuthorityOnly but it has already transferred authority over that entity, the command is simply discarded.
Creating complex hierarchies of CoherenceSyncs at runtime
While the basic case of direct parent-child relationships between CoherenceSync entities is handled automatically by coherence, more complex hierarchies (with multiple levels) need a specific component.
An example of such a hierarchy would be a synced Player Prefab with a hierarchical bone structure, where you want to place an item (e.g. a flashlight) in the hand:
Player > Shoulder > Arm > Hand
To prepare the child Prefab that you want to parent at runtime, add the CoherenceNode component to it (in addition to its CoherenceSync). In the example above, that would be the flashlight you want your player to be able to pick up. No additional changes are required.
This setup allows you to place instances of the flashlight Prefab anywhere in the hierarchy of the Player (you could even move it from one hand to the other, and it would work).
You don't need to input any value in the fields of the CoherenceNode. They are used at runtime, by coherence, automatically.
To recap, for deep-nesting network entities to work, you need two things:
The parent: a Prefab with CoherenceSync that has some hierarchy of child transforms (these child transforms are not networked entities themselves).
The child: another connected Prefab with CoherenceSync and CoherenceNode.
One important constraint for using CoherenceNode is that the hierarchies have to be identical on all Clients.
Example: if on Client A an object is parented to Player > Shoulder > Arm > Hand, the hierarchy on Client B needs to be exactly: Player > Shoulder > Arm > Hand.
Removing or moving an intermediate child (such as Shoulder or Arm) would lead to undesirable results, and desynchronisation.
Position and rotation
Similarly to the above, intermediate children objects need to have the same position and rotation on all Clients. If not, that would lead to desync because the parented entity doesn't track the position of its parent object(s).
If you plan to move these intermediate children, then we suggest to sync the position and/or rotation of those objects as part of the containing Prefab.
Following the previous example, if an object is parented to Player > Shoulder > Arm > Hand, you might want to mark the position and rotation of Shoulder, Arm and Hand as synced, as part of the prefab Player.
This way if any of them moves, the movement will be replicated correctly on all clients, and the object parented to Hand will also look correct.
Keep in mind that there is no penalty for syncing positions of objects that never or rarely move, because the position is not synced every frame if it hasn't changed.
This section is only useful to you if you want to understand deeply how CoherenceNode works under the hood.
CoherenceNode works using two public fields which are automatically set to sync using the [Sync] attribute.
The path variable describes where in the parent's hierarchy the child object should be located. It is a string consisting of comma-separated indexes. Every one of these indexes designates a specific child index in the hierarchy. The child object which has the CoherenceNode component will be placed in the resulting place in the hierarchy.
The pathDirtyCounter variable is a helper variable used to keep track of the applied hierarchy changes. In case the object's position in the parent's hierarchy changes, this variable will be used to help settle and properly sync those changes.
For an example of a CoherenceSync parenting and unparenting at runtime in a deep hierarchy, check out the First Steps sample project, lesson 5.
CoherenceSync direct parent-child relationships at runtime
Objects with the CoherenceSync component can be connected at runtime to other objects with a CoherenceSync component to form a direct parent-child relationship.
For example, an item of cargo can be parented to a vehicle, so that they move together when the vehicle is in motion.
Keep in mind that on this page we deal with direct parenting of two CoherenceSync GameObjects. If it's not practical to parent a network entity directly to the root of another, see instead how to create complex hierarchies of CoherenceSyncs at runtime using CoherenceNode.
When an object has a parent in the network hierarchy, its transform (position and orientation) will update in local space, which means its transform is relative to the parent's transform.
A child object will only be visible in a LiveQuery if its parent is within the query's boundaries.
Parenting network entities directly doesn't require any extra work. Any parenting code (i.e. Unity's own transform.SetParent()) will work out of the box, without any need for additional action.
You can add and remove parent-child relationships at runtime – even from the Unity editor, by drag-and-drop.
If the child object is using LODs, it will base its distance calculations on the world position of its parent. For more info, see the documentation.
When the parent CoherenceSync is destroyed, by default its CoherenceSync children get destroyed together with it. This can be changed via the Preserve Children option on the parent, under Advanced Settings:
When Preserve Children is enabled, if the authority destroys or disables the parent entity, child entities get unparented instead of being destroyed together with the parent. Those children will now reside at the root of the Scene hierarchy.
How to parent CoherenceSync objects to each other
Out of the box, coherence offers several options to handle parenting of networked entities. While some workflows are automatic, others require a specific component to be added.
Generally there is a distinction if the parenting happens at runtime vs. edit time, and whether the two entities are direct parent-child, or have a complex hierarchy. See below for each case.
At runtime:
Direct parent-child relationships: when you create a parent-child relationship of CoherenceSync objects at runtime.
Complex hierarchies (CoherenceNode): when you create a complex parent-child relationship of CoherenceSync objects at runtime.
At edit time:
Nested connected Prefabs: the developer prepares several connected Prefabs and nests them one to another before entering Play Mode. This covers both Prefabs in the scene and in the assets.
Binding to variables and methods within the hierarchy
If a synced Prefab has a hierarchy, you can synchronize variables, methods and component actions for any of the child GameObjects within its hierarchy.
Note: on this page we cover child GameObjects or nested Prefabs that don't have their own CoherenceSync.
If a child object has a CoherenceSync of its own, it becomes an independent network entity. For that case, see the documentation on parenting CoherenceSync objects to each other.
When the Configure window is open it will show the variables, methods and component actions available for synchronization for your currently selected GameObject.
First, make sure to be editing the Prefab in Prefab Mode:
Once in Prefab Mode and with the Configure window open, shift the selection to any of the GameObjects that belong to the hierarchy.
The Configure window will be updated automatically, showing you everything that is available to be synchronized on the child GameObject:
That's it!
Syncing properties, methods and component actions on child GameObjects doesn't require any different flow than what you usually do for the root object. They all get collected and networked as part of one single network entity.
Make sure to not destroy child GameObjects that have synced properties, or you will receive a warning in the Console. To destroy a synced object, always remove the root.
(you can totally destroy children that don't have any synced property)
For an example of direct child CoherenceSync components parenting and unparenting at runtime, check out the First Steps sample project.
After your changes to the GameObject, don't forget to bake again to rebuild the netcode for the entity.
Preparing nested connected Prefabs at edit time
coherence supports all Prefab-related Unity workflows, and nesting is one of them. It can make a lot of sense to prepare multiple networked Prefabs, parent them to each other, and either place them in the scene, or save them as a complex Prefab, ready to be instantiated. This page covers these cases.
When preparing a networked Prefab that contains another networked Prefab, one extra component is needed to allow coherence to sync the whole hierarchy: PrefabSyncGroup.
For instance, let's suppose we have a vehicle in an RTS that can carry cargo, and it comes with cargo pre-loaded when it's instantiated:
In this example Spacetruck is a synced Prefab, with 4 instances of the synced Prefab Cargo nested within. To make this work, we add a PrefabSyncGroup to the root:
The component keeps track of child Prefabs that are also synced Prefabs. Now, whenever Spacetruck is instantiated, PrefabSyncGroup makes sure to take 4 instances of Cargo and link the Prefab instances to the correct network entities.
Please note that if the nested Prefabs are more than one level under the root object, you still need to add a CoherenceNode component to the child ones (in the example above, Cargo), to enable deep nesting at runtime.
So to recap:
The outermost Prefab needs CoherenceSync and PrefabSyncGroup.
The child Prefabs need CoherenceSync and, optionally, CoherenceNode.
When dealing with synced Prefabs that are hand-placed in the scene before connecting, such as level design elements like interactive doors, you need to ensure that they are seen as "unique". This is also covered in the Uniqueness page, but it's worth talking about it in the context of nested synced Prefabs.
When preparing such a Prefab, you need to set the Uniqueness property to No Duplicates. This ensures that, once multiple Clients connect and open the same scene, the synced Prefabs contained within are not spawned on the network multiple times.
Let's suppose we have a networked Prefab that represents a structure in an RTS (a LandingPad) that can be pre-placed in the scene. This structure also contains a networked vehicle Prefab (a Lander). This Prefab is synced as an independent network entity because at runtime it can detach, change ownership, be destroyed, etc.
To achieve this, all we need to do is ensure that both Prefabs are set to be unique. When we drag-and-drop the LandingPad Prefab into the scene, coherence automatically assigns a randomly-generated Prefab Instance Unique ID as an override. This number identifies these particular instances of these two Prefabs in the scene.
With this setting, we don't need to do anything else for these compound Prefabs to work.
Like for runtime-instantiated Prefabs, keep in mind that if the Lander is nested 2 or more levels deep in the hierarchy, it will also need a CoherenceNode component.
If you plan to also instantiate this Prefab at runtime, you can add a PrefabSyncGroup to the root as described in the previous section. This makes the Prefab work when instantiated at runtime, while the uniqueness takes care of copies in the scene.
To recap:
The outermost Prefab needs its Uniqueness set to No Duplicates. Optionally, you can add PrefabSyncGroup to enable runtime instantiation.
Any child Prefab also needs its Uniqueness set to No Duplicates. It also needs a CoherenceNode if it's parented deep in the hierarchy.
An important thing to keep in mind when working with compound Prefabs in the scene: when you add a new nested synced Prefab to an existing one that has already been placed in the scene a few times, the Prefab Instance Unique ID for these instances will initially be the same.
For this reason, once you play the game, you might see all children disappear (except one). That is normal: coherence thinks that all these network entities are the same, because they have the same uniqueness ID.
You need to ensure that these new children have an overridden and unique ID on each instance in the scene. To do so, click on the button next to the Prefab Instance Unique ID for each child that needs it:
coherence only replicates animation parameters, not state. Latency can create scenarios where different Clients reproduce different animations. Take this into account when working with Animator Controllers that require precise timings.
Unity Animator's parameters are bindable out of the box, with the exception of triggers.
While coherence doesn't officially support working with multiple AnimatorControllers, there's a way to work around it. As long as the parameters you want to network are shared among the AnimatorControllers you want to use, they will get networked. Parameters need to have the same type and name. Using the example above, any AnimatorController featuring a Boolean Walk parameter is compatible, and can be switched.
Triggers can be invoked over the network using commands. Here's an example where we inform networked Clients that we have played a jump animation:
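A minimal sketch of that pattern; the component, parameter name, and namespaces are illustrative:

```csharp
using Coherence;
using Coherence.Toolkit;
using UnityEngine;

public class JumpAnimation : MonoBehaviour
{
    [SerializeField] private CoherenceSync sync;
    [SerializeField] private Animator animator;

    // Called locally when the player jumps.
    public void Jump()
    {
        animator.SetTrigger("Jump");

        // Ask every other Client to fire the same trigger on their instance.
        sync.SendCommand<JumpAnimation>(nameof(PlayJumpAnimator), MessageTarget.Other);
    }

    // Bound as a command in the Configure window (see below).
    public void PlayJumpAnimator()
    {
        animator.SetTrigger("Jump");
    }
}
```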
Now, bind the PlayJumpAnimator method as a command.
Entity references let you set up references between Entities and have those be synchronized, just like other value types (like integers, vectors, etc.)
To use Entity references, simply select any fields of type GameObject, Transform, or CoherenceSync for syncing in the Configuration window:
The synchronization works both when using reflection and in baked sync scripts.
Entity references can also be used as arguments in Commands.
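For instance, a field like the one below can be ticked for syncing (or marked with the [Sync] attribute) and will resolve to the same networked entity on all Clients; the namespace of the attribute is assumed here:

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class TargetTracker : MonoBehaviour
{
    // An entity reference: syncing this field replicates which networked object is being tracked.
    [Sync] public CoherenceSync target;
}
```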
It's important to know about the situations when an Entity reference might become null, even though it seems like it should have a value:
A client might not have the referenced entity in its LiveQuery. A local reference can only be valid if there's an actual Entity instance to reference. If this becomes a problem, consider switching to using the CoherenceNode component or Parent-Child relationships of prefabs, which ensures that that Entity stays part of the query.
The owner of the Entity reference might sync the reference to the Replication Server before syncing the referenced Entity. This will lead to the Replication Server storing a null reference. If possible, try setting the Entity references during gameplay when the referenced Entities have already existed for a while.
Cyclic references are undefined behavior for now. Therefore, multiple entities created on the same Client that reference each other might never get synced properly. This also holds true for references that exist through intermediate entities (A has a reference to B, B has a reference to C, and C has a reference to A - cyclic).
In any case, it's important to use a defensive coding style when working with Entity references. Make sure that your code can handle missing Entities and nulls in a graceful way.
Even though coherence provides Component Actions out of the box for various components, you can implement your own Component Actions in order to give designers on the team full authoring power over network entities, directly from within the Configure window UI.
Creating a new one is simply done by extending the ComponentAction abstract class:
Your custom Component Action must implement the following methods:
OnAuthority: this method will be called when the object is spawned and you have authority over it.
OnRemote: this method will be called when a remote object is spawned and you do not have authority over it.
It will also require the ComponentAction class attribute, specifying the type of Component that you want the Action to work with, and the display name.
For example, here is the implementation of the Component Action that we use to disable Components on remote objects:
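A sketch approximating such an implementation; the base-class member used to access the targeted Component (component here) is an assumption, so check the ComponentAction base class for the exact API:

```csharp
using Coherence.Toolkit;
using UnityEngine;

// The attribute declares which Component type this Action targets and its display name
// in the Configure window.
[ComponentAction(typeof(Behaviour), "Disable")]
public class DisableComponentAction : ComponentAction
{
    // Called on instances we have authority over: leave the component untouched.
    public override void OnAuthority()
    {
    }

    // Called on remote (non-authoritative) instances: disable the component.
    public override void OnRemote()
    {
        // "component" is assumed to be the base-class reference to the targeted Component.
        ((Behaviour)component).enabled = false;
    }
}
```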
Notifying State Changes
It is often useful to know when a synchronized variable has changed its value. It can be easily achieved using the OnValueSyncedAttribute. This attribute lets you define a method that will be called each time a value of a synced member (field or property) changes in the non-simulated version of an entity.
Let's start with a simple example:
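A minimal sketch; the callback signature (old value, new value) is an assumption, and the Health field also has to be configured as a synced binding:

```csharp
using Coherence.Toolkit;
using UnityEngine;
using UnityEngine.UI;

public class Player : MonoBehaviour
{
    [SerializeField] private Text healthLabel;

    // Health must be synced (e.g. via the Configure window or [Sync]);
    // UpdateHealthLabel is then invoked automatically on non-simulated instances when it changes.
    [OnValueSynced(nameof(UpdateHealthLabel))]
    public float Health = 100f;

    public void UpdateHealthLabel(float oldValue, float newValue)
    {
        healthLabel.text = $"Health: {newValue}";
        Debug.Log($"Health changed by {newValue - oldValue}");
    }
}
```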
Whenever the value of the Health field gets updated (synced with its simulated version), the UpdateHealthLabel method will be called automatically, changing the health label text and printing a log with the health difference.
This comes in handy in projects that use authoritative Simulators. The Client code can easily react to changes in the Player entity state introduced by the Simulator, updating the visual representation (which the Simulator doesn't need).
The OnValueSyncedAttribute requires using baked mode.
Remember that the callback method will be called only for a non-simulated instance of an Entity. Use on a simulated (owned) instance requires calling the selected method manually whenever the value of a given field/member changes. We recommend using properties with a backing field for this.
The OnValueSynced feature can be used only on members of user-defined types, that is, there's no way to be notified about a change in the value of a Unity type member, like transform.position. This might however change in the future, so stay tuned!
Aside from configuring your CoherenceSync bindings from within the Configure window, it's possible to use the [Sync] and [Command] C# attributes directly on your scripts. Your prefabs will get updated to require such bindings.
Mark public fields and properties to be synchronized over the network.
It's possible to migrate the variable automatically, if you decide to change its definition:
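A small sketch of both basic usage and an automatic rename migration; the string parameter carrying the old field name is an assumption about the migration syntax:

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class HealthComponent : MonoBehaviour
{
    // Basic usage: this public field is synchronized over the network.
    [Sync] public float maxHealth = 100f;

    // Renamed field: passing the old name (assumed syntax) lets the existing
    // binding migrate automatically. The field used to be called "health".
    [Sync("health")] public float currentHealth = 100f;
}
```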
Mark public methods to be invoked over the network. The method return type must be void.
It's possible to migrate the command automatically, if you decide to change the method signature:
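And similarly for commands; again, passing the old name is an assumed migration syntax:

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class DamageReceiver : MonoBehaviour
{
    // Basic usage: this method can be bound and invoked as a network command.
    [Command]
    public void ApplyDamage(float amount)
    {
        Debug.Log($"Took {amount} damage");
    }

    // Changed signature: the old name (assumed syntax) lets the command binding migrate.
    [Command("ApplyDamage")]
    public void ApplyDamageFrom(float amount, CoherenceSync source)
    {
        Debug.Log($"Took {amount} damage from {source.name}");
    }
}
```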
Note that marking a command attribute only marks it as programmatically usable. It does not mean it will be automatically called over the network when executed.
You still need to follow the guidelines in the Messaging with Commands article to make it work.
Extending what can be synced from the Configure window
This is an advanced topic that aims to bring access to coherence's internals to the end user.
The Configure window lists all variables and methods that can be synced for the selected Prefab. Each selected element in the list is stored in the Prefab as a Binding with an associated Descriptor, which holds information about how to access that data.
By default, coherence uses reflection to gather public fields, properties and methods from each of the Prefab's components. You can specify exactly what to list in the Configure window for a given component by implementing a custom DescriptorProvider. This allows you to sync custom component data over the network.
Take this player inventory for example:
Since the inventory items are not immediately accessible as fields or properties, they are not listed in the Configure window. In order to expose the inventory items so they can be synced across the network, we need to implement a custom DescriptorProvider.
DescriptorProvider
The main job of the DescriptorProvider is to provide the list of Descriptors that you want to show up in the Configure window. You can instantiate new Descriptors using this constructor:
name: the identifying name for this Descriptor.
ownerType: the type of the MonoBehaviour that this Descriptor is for.
bindingType: the type of the ValueBinding class that will be instantiated and serialized in CoherenceSync when this Descriptor is selected in the Configure window.
required: if true, every network Prefab that uses a MonoBehaviour of ownerType will always have this Binding active.
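A hypothetical instantiation following the parameters above (the descriptor name and the Inventory/InventoryBinding types are taken from the inventory example in this section; treat it as a sketch, not the exact API call):

```csharp
var descriptor = new Descriptor(
    "Sword durability",       // name
    typeof(Inventory),        // ownerType
    typeof(InventoryBinding), // bindingType
    false);                   // required
```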
If you need to serialize additional data with your Descriptor, you can inherit from the Descriptor class or assign a Serializable object to Descriptor.CustomData.
Here is an example InventoryDescriptorProvider that returns a Descriptor for each of the inventory items:
To specify how to read and write data to the Inventory component, we also need a custom binding implementation.
Binding
A Descriptor must specify, through the bindingType, which type of ValueBinding it is going to instantiate when synced in a CoherenceSync. In our example, we need an InventoryBinding that specifies how to set and get the values from the Inventory. To sync the durability property of an inventory item, we should extend the IntBinding class, which provides functionality for syncing int values.
For the full list of supported binding types, see Supported types in Commands and Bindings.
We are now ready to sync the inventory items on the Prefabs.
Out of the box, coherence can use C# reflection to sync data at runtime. This is a great way to get started but is very costly performance-wise and has a number of limitations on what features can be used through this system.
For optimal runtime performance and a complete feature set, we need to create a schema and perform code generation specific to our project. coherence calls this mechanism Baking.
Learn more about schemas in the section.
Click on the coherence > Bake menu item.
This will go through all indexed CoherenceSync
GameObjects (Resources folders and Prefab Mapper) in the project and generate a schema file based on the selected variables, commands and other settings. It will also take into account any that have been added.
For every Prefab with a CoherenceSync
component attached, the baking process will generate a C# baked script specifically tuned for it.
When baking, the generated code will output to Assets/coherence/baked.
You can version the baked files or ignore them; it's your call. If you work on a larger game or team where you use continuous integration, chances are you are better off including the baked files in your VCS.
Since baked scripts access your code, changing networked variables or commands can lead to compilation errors.
When you configure your Prefab to network variables, and then bake, coherence generates baked scripts that access your code directly, without using reflection. This means that whenever you change your code, you might break compilation by accident.
For example, if you have a Health.cs script which exposes a public float health; field, and you toggle health in the Configure window and bake, the generated baked script will access your component via its type and your field via its field name.
If you decide to change your component name (Health) or any of your bound fields (health), Unity script recompilation can fail. In this example, we will be removing health and adding health2 in its place.
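For reference, the change described above might look like this (a minimal sketch of the user script; the baked script keeps referencing the old field until you re-bind and re-bake):

```csharp
using UnityEngine;

public class Health : MonoBehaviour
{
    // public float health;  // removed: the baked script still references this field
    public float health2;    // added in its place
}
```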
When baking via assets, the watchdog is able to catch compilation problems related to this and offer you a solution right away.
It will suggest that you delete the baked folder, and then diagnose the state of your Prefabs. After a few seconds of script recompilation, you will be presented with the Diagnosis window.
In this window, you can easily spot variables in your Prefabs that can't be resolved properly. In our example, health
is no longer valid since we've moved it elsewhere (or deleted it).
From here, you can access the Configure window, where you can spot the problem.
Now we can manually rebind our data: unbind health and bind health2. Once we do, we can safely bake again.
Remember to bake again after you fix your Prefabs.
Once the baked code has been generated, Prefabs will automatically make use of it. If you want to switch a particular Prefab to reflection code, you can do so in the Inspector of its CoherenceSync
, by unchecking the Baked checkbox:
Networked entities can be simulated either on a Game Client ("Client authority") or a Simulator ("Server authority"). Authority defines which Client or Simulator is allowed to make changes to the synced properties of an Entity, and in general defines who "runs the gameplay code" for that Entity. An Entity is any networked GameObject.
To learn more about authority, check out this short video:
Client authority is the easiest to set up initially, but it has some drawbacks:
Higher latency. Because both Clients have a non-zero ping to the Replication Server, the minimum latency for data replication and commands is the combined ping (Client 1 to Replication Server and Replication Server to Client 2).
Higher exposure to cheating. Because we trust Game Clients to simulate their own Entities, there is a risk that one such Client is tampered with and sends out unrealistic data.
In many cases, especially when not working on a competitive PvP game, these are not real issues, and Client authority is a perfectly fine choice for the game developer.
Client authority does have a few advantages:
Easier to set up. No Client vs. Server logic separation in the code, no building and uploading of Simulation Servers, everything just works out of the box.
Cheaper. Depending on how optimized the Simulator code is, running a Simulator in the cloud will in most cases incur more costs than just running a Replication Server (which is comparatively very lean).
Having one or several Simulators taking care of the important World simulation tasks (like AI, player character state, score, health, etc.) is always a good idea for competitive PvP games. In this scenario, the Simulator has authority over key game elements, like a "game manager", a score-keeping object, and so on.
Running a Simulator in the cloud next to the Replication Server (with the ping between them being negligible) will also result in lower latency.
Scenes or levels are a common feature of Unity games. They can be loaded from Unity scenes, custom level formats, or even be procedurally generated. In networked games, players should not be able to see entities that are in other scenes. To address this, coherence's scene feature gives you a simple way of controlling what scene you're acting in.
Each coherence scene is represented by an integer index. You can map this index to your scenes or levels in any way you find appropriate. Projects that don't use scenes will implicitly put all their entities into scene 0.
Since the connection to the Replication Server is held by the CoherenceBridge component, switching Scenes will destroy the current CoherenceBridge that holds the connection to the Replication Server.
In order to keep a CoherenceBridge with its connection alive between Scene changes, you will have to set it as Main Bridge in the Component inspector:
These are the options related to Scene transitions:
Main Bridge: This CoherenceBridge instance will be saved as DontDestroyOnLoad and its connection to the Replication Server will be kept alive between Scene changes. All other CoherenceBridge components that are instantiated from this point forward will update the target Scene of the Main Bridge, and destroy themselves afterwards.
Use Build Index as Scene Id: Every Scene needs a unique identifier over the network. This option will automate the creation of this ID by using the Scene Build Index (from the Build Settings window).
Scene Identifier: If the previous option is unchecked, then you will be able to manually set a Scene Identifier of your own (restricted to unsigned integers).
Using these options will automate Scene transitions.
The only requirement is having a single CoherenceBridge set as Main (the first one that your game will load). The rest of the Scenes you want to network should also have a CoherenceBridge component, but not set as main.
These options require no extra code on your part.
A client connection and all the entities it has authority over are always kept in the same coherence scene. Clients cannot have authority over entities in other scenes. This implies two things:
When a client changes scene, it will bring along any entities it has authority over.
If an entity changes ownership via authority transfer, it will be moved to the new owner's scene.
An entity that does not have an owner is an orphan. Orphaned entities will stay in the scene where their previous owner left them.
Note that Unity will destroy all game objects not marked as DontDestroyOnLoad whenever a new Unity scene is loaded (non-additively). If the client has authority over any of those entities at that point, coherence will replicate that destruction to all other clients. If that is undesirable and you need to leave entities behind, make sure that authority has been lost or transferred before loading the new Unity scene. You can of course also mark them as DontDestroyOnLoad, which will bring them along to the new scene.
Since this process involves a bit of logic that has to be executed over several frames, coherence provides a LoadScene helper method (coroutine) on CoherenceSceneManager. Here's an example of how to use it:
It is not possible to move entities to other scenes without the client connection also moving there. Additionally, you can't currently query for entities in other scenes.
Both of these limitations are planned to be addressed in future versions of coherence.
If your project isn't a good fit for the automatic scene transitioning support described above, it is possible to use a more manual approach. There are a few important things to take care of in such a setup:
If you ever load another Unity scene, the CoherenceBridge
that connects to the server needs to be kept alive, or else the client will be disconnected. A straightforward way of doing this is to call Unity's DontDestroyOnLoad
method on it. This creates two problems when replicating entities from other Clients:
The bridge instantiates remote entities into the scene where it is currently located. To override this behaviour, set the InstantiationScene
property on your CoherenceBridge
to the desired scene.
Any new CoherenceSync instances will look for the bridge in the same scene that they are located. If the bridge is moved to the DontDestroyOnLoad
scene, this lookup will fail. You can use the static CoherenceSync.BridgeResolve
event to solve this problem (see the code sample in the next section). Alternatively, if you have a reference to a Scene, you can register the appropriate bridge for entities in that scene with CoherenceBridgeStore.RegisterBridge
before it is loaded.
Additionally, coherence queries (e.g. CoherenceLiveQuery
) also look for their bridge in their own scene, so you might have to set its bridgeResolve
event too.
If you load levels via your own level format, or by loading Unity scenes additively, it is quite possible that you can skip some of the steps above.
The only thing strictly necessary for coherence scene support is to call CoherenceBridge.SceneManager.SetClientScene(uint sceneIndex); so that the Replication Server knows in which scene each Client is located.
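A minimal sketch of making that call when a level finishes loading (the bridge reference and the level-to-index mapping are up to your project):

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class SceneIndexReporter : MonoBehaviour
{
    public CoherenceBridge bridge;

    // Call this after your own level-loading logic has finished.
    public void OnLevelLoaded(uint sceneIndex)
    {
        // Lets the Replication Server know which scene this Client is in.
        bridge.SceneManager.SetClientScene(sceneIndex);
    }
}
```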
Here's a complete code sample of how to use all the above things together:
Baked scripts will be located in Assets/coherence/baked.
When an Entity is created, the creator is assigned authority over the Entity. Authority can then, at a later point, be transferred between Clients – or even between Clients and Simulators. Regardless, only one Client or Simulator can have authority over the Entity at any given time.
In addition to key gameplay objects, the player character can also be simulated on the Server, with the Client locally predicting its state based on inputs. You can read more about how to achieve that in the section about .
Peer-to-peer support (without a Replication Server) is planned in a future release. Please see the for updates.
Even if an entity is not currently being simulated locally (the client does not have authority), we can still affect its state, for example by sending a network command or by requesting or stealing authority over it.
If this approach to keeping the connection alive is not a good fit for your game, see in the second part of this document.
In the CoherenceBridge inspector you will find all the options related to handling Scene transitions. First thing to know is that must be enabled for this feature to work.
When we connect to a Game World with a Game Client, the traditional approach is that all Entities originating on our Client are session-based. This means that when the Client disconnects, they will disappear from the network World for all players.
A persistent object, however, will remain on the Replication Server even when the Client or Simulator that created or last simulated it, is gone.
This allows us to create a living world where player actions leave lasting effects.
In a virtual world, examples of persistent objects are:
A door anyone can open, close or lock
User-generated or user-configured objects left in the world to be found by others
Game progress objects (e.g. in PvE games)
Voice or video messages left by users
NPCs wandering around the world using AI logic
Player characters on "auto pilot" that continue affecting the world when the player is offline
And many, many more
A persistent object with no Simulator is called an orphan. Orphans can be configured to be auto-adopted by Clients or Simulators on an FCFS (first come, first served) basis.
The CoherenceSync editor interface allows us to define the Lifetime of a networked object. The following options are available:
Session Based. No persistence. The Entity will disappear when the Client or Simulator disconnects.
Persistent. The Entity will remain on the Server until a simulating Client deletes it.
For managing unique persistent objects, see Uniqueness.
A persistent object can be deleted only by the Client or Simulator that has authority over it. For indirect remote deletion, see the section about network commands.
Deleting a persistent object is done the same as with any network object - by destroying its GameObject.
All persistent objects remain in the World for the entire lifetime of the Replication Server and, periodically, the Replication Server records the state of the World and saves it to physical storage. If the Replication Server is restarted, then the saved persistent objects are reloaded when the Replication Server resumes.
Currently, the maximum number of persistent objects supported by the Replication Server is 32 000. This limit will be increased in the near future.
This document explains how to set up an ever increasing counter that all Clients have access to. This could be used to make sure that everyone can generate unique identifiers, with no chance of ever getting a duplicate.
By being persistent, the counter will also keep its value even if all Clients log off, as long as the Replication Server is running.
First, create a script called Counter.cs and add the following code to it:
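The original listing is not reproduced here; below is a hedged reconstruction based on the surrounding description (the command and reply names follow the text, and the SendCommand signature is an assumption that may differ in your SDK version):

```csharp
using Coherence;
using Coherence.Toolkit;
using UnityEngine;

public class Counter : MonoBehaviour
{
    // Selected for syncing in the bindings window.
    public int counter;

    // Received as a network command; the requester passes a reference to itself
    // so we can reply with the generated number.
    public void GetNumber(CoherenceSync requester)
    {
        requester.SendCommand<NumberRequester>("GotNumber",
            MessageTarget.AuthorityOnly, counter);
        counter++;
    }
}
```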
This script expects a command sent from a script called NumberRequester, which we will create below.
Next, add this script to a Prefab with CoherenceSync on it, and select the counter
and the method NextNumber
for syncing in the bindings window. To make the counter behave like we want, mark the Prefab as "Persistent" and give it a unique persistence ID, e.g. "THE_COUNTER". Also change the adoption behaviour to "Auto Adopt":
Finally, make sure that a single instance of this Prefab is placed in the scene.
Now, create a script called NumberRequester.cs
. This will be an example MonoBehaviour that requests a unique number by sending the command GetNumber
to the Counter Prefab. As a single argument to this command, the NumberRequester
will send an entity reference to itself. This makes it possible for the Counter to send back a response command (GotNumber
) with the number that was generated. In this simple example we just log the number to the console.
To make this script work, add it to a Prefab that has the CoherenceSync script and mark the GotNumber
for syncing in the bindings window.
Authority over an Entity is transferrable, so it is possible to move the authority between different Clients or even to a Simulator. This is useful for things such as balancing the simulation load, or for exchanging items. It is possible for an Entity to have no Client or Simulator as the authority - these Entities are considered orphaned and are not simulated.
In the design phase, CoherenceSync objects can be configured to handle authority transfer in different ways:
Request. Authority transfer may be requested, but it may be rejected by the current authority.
Steal. Authority will always be given to the requesting party on an FCFS ("first come, first served") basis.
Disabled. Authority cannot be transferred.
Note that you need to set up Auto-adopt Orphan if you want orphans to be adopted automatically when an Entity's authority disconnects, otherwise an orphaned Entity is not simulated. Auto-adopt is only allowed for persistent entities.
When using Request, an optional callback OnAuthorityRequested
can be set on the CoherenceSync behaviour. If the callback is set, then the results of the callback will override the Approve Requests setting in the behaviour.
The request can be approved or rejected in the callback.
Support for requests based on CoherenceClientConnection.ClientID
is coming soon.
Requesting authority is very straight-forward.
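A minimal sketch, assuming the request is made for full authority (the AuthorityType value and namespaces may differ in your SDK version):

```csharp
using Coherence;
using Coherence.Toolkit;
using UnityEngine;

public class TakeOver : MonoBehaviour
{
    public CoherenceSync sync;

    public void TryTakeAuthority()
    {
        // Returns false if the request could not be sent (see the list below).
        if (!sync.RequestAuthority(AuthorityType.Full))
        {
            Debug.LogWarning("Authority request was not sent.");
        }
    }
}
```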
RequestAuthority
returns false
if the request was not sent. This can be because of the following reasons:
The sync is not ready yet.
The entity is not allowed to be transferred because authorityTransferType is set to NonTransferable.
There is already a request underway.
The entity is orphaned, in which case you must call Adopt
instead to request authority.
The request itself might fail depending on the response of the current authority.
As the transfer is asynchronous, we have to subscribe to one or more Unity Events in CoherenceSync to learn the result.
Also because of their asynchronous nature, clients can receive commands for entities that they have already transferred. Such commands are dropped.
These events are also exposed in the Custom Events section of the CoherenceSync inspector.
CoherenceInput is a component that enables a Simulator to take control of the simulation of another Client's objects based on the Client's inputs.
In situations where you want a centralized simulation of all inputs. Many game genres use client inputs and centralized simulation to guarantee the fairness of actions or the stability of physics simulations.
In situations where Clients have low processing power. If the Clients don't have sufficient processing power to simulate the World it makes sense to send inputs and just display the replicated results on the Clients.
In situations where determinism is important. RTS and fighting games will use CoherenceInput and rollback to process input events in a shared (not centralized) and deterministic way so that all Clients simulate the same conditions and produce the same results.
coherence currently only supports using CoherenceInput in a centralized way where a single Simulator is setup to process all inputs and replicate the results to all Clients.
Setting up an object for server-side simulation using CoherenceInput and CoherenceSync is done in three steps:
The simulation type of the CoherenceSync component is set to Server Side With Client Input
Setting the simulation type to this mode instructs the Client to automatically transfer State Authority for this object to the Simulator that is in charge of simulating inputs on all objects.
Each simulated CoherenceSync component is able to define its own, unique set of inputs for simulating that object. An input can be one of:
Button. A button input is tracked with just a binary on/off state.
Button Range. A button range input is tracked with a float value from 0 to 1.
Axis. An axis input is tracked as two floats from -1 to 1 in both the X and Y axis.
String. A string value representing custom input state. (max length of 63 characters)
To declare the inputs used by the CoherenceSync component, the CoherenceInput component is added to the object. The input is named and the fields are defined.
In this example, the input block is named Player Movement and the inputs are WASD and mouse for the XY mouse position.
In order for the inputs to be simulated on CoherenceSync objects, they must be optimized through baking.
If the CoherenceInput fields or name is changed, then the CoherenceSync object must be re-baked to reflect the new fields/values.
When a Simulator is running it will find objects that are set up using CoherenceInput components and will automatically assume authority and perform simulations. Both the Client and Simulator need to access the inputs of the CoherenceInput of the replicated object. The Client uses the Set* methods and the Simulator uses the Get* methods to access the state of the inputs of the object. In all of these methods, the name parameter is the same as the Name field in the CoherenceInput component.
Check the CoherenceInput API for a complete list of the available methods.
For example, the mouse click position can be passed from the Client to the Simulator via the "mouse
" field in the setup example.
The Simulator can access the state of the input to perform simulations on the object which are then reflected back to the Client just as any replicated object is.
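A hedged sketch of that flow; the Set*/Get* method names per input type and the authority properties used here are assumptions that may differ slightly between SDK versions:

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class MouseInputRelay : MonoBehaviour
{
    public CoherenceSync sync;
    public CoherenceInput input;

    void Update()
    {
        if (sync.HasInputAuthority)
        {
            // Client side: write the mouse position (normalized to -1..1) into the "mouse" axis.
            var mouse = new Vector2(
                Input.mousePosition.x / Screen.width * 2f - 1f,
                Input.mousePosition.y / Screen.height * 2f - 1f);
            input.SetAxis("mouse", mouse);
        }
        else if (sync.HasStateAuthority)
        {
            // Simulator side: read the replicated input and drive the simulation with it.
            Vector2 mouse = input.GetAxis("mouse");
            // ... use 'mouse' in the simulation code ...
        }
    }
}
```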
Each object only accepts inputs from one specific Client, called the object's Input Authority.
When a Client spawns an object it automatically becomes the Input Authority for that object. The object's creator will retain control over the object even after state authority has been transferred to the Simulator.
If an object is spawned directly by the Simulator, you will need to assign the Input Authority manually. Use the TransferAuthority method on the CoherenceSync component to assign or re-assign a Client that will take control of the object:
The ClientId used to specify Input Authority can currently only be accessed from the ClientConnection class. For detailed information about setting up the ClientConnection Prefab, see the Client connections page.
Use the OnInputAuthority and OnInputRemote events on the CoherenceSync component to be notified whenever an object changes input authority.
Only the object's current State Authority is allowed to transfer Input Authority.
In order to get notified when the Simulator (or host) takes state authority of the input you can use the OnInputSimulatorConnected event from the CoherenceSync component.
The OnInputSimulatorConnected event can also be raised on the Simulator or host if they have both input and state authority over an entity. This allows the session host to use inputs just like any other client but might be undesirable if input entities are created on the host and then have their input authority transferred to the clients.
To solve this you can check the CoherenceSync.IsSimulatorOrHost flag in the callback:
The CoherenceLiveQuery component can be used to limit the visible portion of the Game World that a player is allowed to see. The Replication Server filters out networked objects that are outside the range of the LiveQuery so that players can't cheat by inspecting the incoming network traffic.
When a query component is placed on a Game Object that is set to Server Side With Client Input, the query visibility will be applied to the Game Object's Input Authority (i.e., the player) while the component remains in control of the State Authority (i.e., the Simulator). This prevents players from viewing other parts of the map by simply manipulating the radius or position of the query component.
See Area of interest for more information on how to use queries.
Using Server-side simulation takes a significantly longer period of time from the Client providing input until the game state is updated, compared to just using Client-side simulation. That's because of the time required for the input to be sent to the Simulator, processed, and then the updates to the object returned across the network. This round-trip time results in an input lag that can make controls feel awkward and slow to respond.
If you want to use a Server-authoritative setup without sacrificing input responsiveness, you need to use Client-side prediction. With Client-side prediction enabled, incoming network data is ignored for one or more bindings, allowing the Client to predict those values locally. Usually, position and rotation are predicted for the local player, but you can toggle Client-side prediction for any binding in the Configuration window.
By processing inputs both on the Client and on the Server, the Client can make a prediction of where the player is heading without having to wait for the authoritative Server response. This provides immediate input feedback and a more responsive playing experience.
Note that inputs should not be processed for Clients that neither have State Authority nor Input Authority. That's because we can only predict the local player; remote players and other networked objects are synced just as normal.
With Client-side prediction enabled, the predicted Client state will sometimes diverge from the Server state. This is called misprediction. When misprediction occurs, you will need to adjust the Client state to match the Server state in one way or another. This is called Server Reconciliation.
There are many possible approaches to Server Reconciliation and coherence doesn't favor one over another. The simplest method is to snap the Client state to the Server state once a misprediction is detected. Another method is to continuously blend from Client state to Server state.
Misprediction detection and reconciliation can be implemented in a binding's OnNetworkSampleReceived
event callback. This event is called every time new network data arrives, so we can test the incoming data to see if it matches with our local Client state.
The misprediction threshold is a measure of how far the prediction is allowed to drift from the Server state. Its value will depend on how fast your player is moving and how much divergence is acceptable in your particular game.
Remember that incoming sample data is delayed by the round-trip time to the Server, so it will trail the currently predicted state by at least a few frames, depending on network latency. The simulationFrame
parameter tells you the exact frame at which the sample was produced on the authoritative Server.
For better accuracy, incoming network samples should be compared to the predicted state at the corresponding simulation frame. This requires keeping a history buffer of predicted states in memory.
This feature is in the experimental phase.
A client-hosted session is an alternative way to use CoherenceInput in Server Side With Client Input mode that doesn't require a Simulator.
A Client that created a Room can join as a Host of this Room. Just like a Simulator, the Host will take over the State Authority of the CoherenceInput objects while leaving the Input Authority in the hands of the Client that created those objects.
The difference between a Host and a Simulator is that the Host is still a standard client connection, which means it counts towards the Room's client limit and will show up as a client connection in the connection list.
To connect as a Host all we have to do is call CoherenceBridge.ConnectAsHost:
No matter how fast the internet becomes, conserving bandwidth will always be important. Some Game Clients might be on poor quality mobile networks with low upload and download speeds, or have high ping to the Replication Server and/or other Clients, etc.
Additionally, sending more data than is required consumes more memory and unnecessarily burdens the CPU and potentially GPU, which could add to performance issues, and even to quicker battery drainage.
In order to optimize the data we are sending over the network, we can employ various techniques built into the core of coherence.
Delta-compression (automatic). When possible, only send differences in data, not the entire state every frame.
Compression and quantization (automatic and configurable). Various data types can be compressed to consume less bandwidth than they naturally would.
Simulation frequency (configurable). Most Entities do not need to be simulated at 60+ frames per second.
Levels of detail (configurable). Entities need to consume less and less bandwidth the farther away they move from the observer.
Area of interest. Only replicate what we can see.
An integration with the Unity Profiler provides basic statistics on networking events and bandwidth.
The module is only available in Unity 2021.2 and newer.
To view the module, open the Unity Profiler by selecting Window > Analysis > Profiler. Open the Profiler Modules dropdown menu in the top left, and select the coherence module.
To hide unneeded graph lines, select the colored square next to the item you do not wish to see.
Without a special configuration, Entity data is captured at the highest possible frequency and sent to the Replication Server. This often generates more data than is needed to efficiently replicate the Entity's state across the network.
On a Simulator, we can limit the framerate globally using Unity's built-in static variable targetFrameRate.
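For example, capping a Simulator build to 30 frames per second:

```csharp
// Anywhere in Simulator startup code: limit how often Unity updates,
// which also limits how often coherence samples and sends changes.
Application.targetFrameRate = 30;
```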
coherence will automatically limit the target framerate of uploaded Simulators to 30 frames per second. We plan to make it possible to lift this restriction in the future. Check back for updates in the next couple of releases.
Replication frequency can be configured for each binding individually in the Prefab Optimize window. The Sample Rate controls how many times per second values are sampled and synced over the network.
Since the default packet send frequency of the Replication Server is 20Hz, sample rates above that value won't have any benefits unless you increase the Replication Server send frequency, too. See here how to adjust the Replication Server send frequency.
High sample rates increase replication accuracy and reduce latency, but consume more bandwidth. The upper limit at which samples can be quantized is 60 Hz, so sample rates beyond that are generally not recommended. It is not possible to change the sampling frequency at runtime.
Values that don't change over time do not consume any bandwidth. Only bindings with updated values will be synced over the network.
The way you get information about the world is through LiveQueries. We set criteria for what part of the world we are interested in at each given moment. That way, the Replicator won’t send information about everything that is going on in the Game World everywhere, at all times.
Instead, we will just get information about what’s within a certain area, kind of like moving a torch to look around in a dark cave.
More complex area of interest types are coming in future versions of coherence.
A LiveQuery is a cube that defines the area of interest in a particular part of the World. It is defined by its position and its extent (half the side of the cube). There can be multiple LiveQueries in a single scene.
A classic approach is to put a LiveQuery on the camera and set the extent to correspond to the far clipping plane or visibility distance.
Moving the GameObject containing the LiveQuery will also notify the Replication Server that the query for that particular Game Client has moved.
In addition to the LiveQuery, coherence also supports filtering objects by tag. This is useful when you have some special objects that should always be visible regardless of World position.
To create a TagQuery, right click a GameObject in the scene and select coherence > TagQuery from the context menu.
All networked GameObjects with matching tags will now be visible to the Client. The coherence tag can be any string and can be configured separately from the Unity tag
in the Advanced Settings section of the CoherenceSync
component.
Tags and TagQueries can be updated at any time while the application is running, either from the Unity inspector or setting CoherenceSync.coherenceTag
and CoherenceTagQuery.coherenceTag
with code.
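For example, where sync and tagQuery are references to the CoherenceSync and CoherenceTagQuery components on the relevant objects:

```csharp
// Switch which objects this Client sees at runtime by changing tags.
sync.coherenceTag = "red";       // on the networked object
tagQuery.coherenceTag = "red";   // on the CoherenceTagQuery in the scene
```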
Currently, only a single tag per GameObject and TagQuery is supported. To include objects with different tags, you can create multiple TagQuery objects for each tag.
In the future, we plan to integrate TagQueries with LiveQueries allowing combined query restrictions, e.g., only show objects with tag "red" within an extent of 50.
Queries can also be used for cheat prevention, see Server authoritative setup for more information.
This feature requires baking.
coherence can support large game worlds with many objects. Since the amount of data that can be transmitted over the network is limited, it's very important to only send the most important things.
You already know a very efficient tool for enabling this – the LiveQuery. It ensures that a client is only sent data when an object in its vicinity has been updated.
Often though, there is the possibility of an even more nuanced and optimized approach. It is based on the fact that we might not need to send as much data for an entity that is far away, compared to a close one. A similar technique is often used in 3D programming to show a simpler model when something is far away, and a more detailed one when it is close up.
This idea works really well for networking too. For example, when another player is close to you it's important to know exactly what animation it is playing, what it's carrying around, etc. When the same player is far off on the horizon, it might suffice to know only its position and orientation, since nothing else will be discernible anyway.
To use this technique we must learn about something called archetypes.
Any Prefab with the CoherenceSync component can be optimized to use various levels of detail (LODs).
There must always exist a LOD 0; this is the default level and it always has all components enabled (it can have per-field overrides though, see below).
There can be any number of subsequent LODs (e.g. LOD 1, LOD 2, etc.) and each one must have a distance threshold higher than the previous one. The coherence SDK will try to use the LOD with the highest number that is still within its distance threshold.
Example
An object has three LODs, like this:
LOD 0 (threshold 0)
LOD 1 (threshold 10)
LOD 2 (threshold 20)
If this object is 15 units away, it will use LOD 1.
Confusingly, the highest numbered LOD is usually called the lowest one, since it has the least detail.
On each LOD, there are two options for optimizing data being transferred:
Components can be turned off, meaning you won't receive any updates from them.
Their fields can be configured to use fewer bits, usually leading to less fine-grained information. The idea is that this won't be noticeable at the distance of the LOD.
coherence allows us to define the range of numeric fields and how many bits we want to allocate to them.
Here are some terms we will be using:
Bits. The number of bits used for the field. When used for vectors, this defines the number of bits used for each component (x, y and z). A Vector3 set to 24 bits will consume 3 * 24 = 72 bits.
Range. For integer values and fixed-point floats, we define a minimum and maximum possible value (e.g. Health can lie between 0 and 100).
More bits mean more precision. Increasing the range while leaving the bit count the same will lower the precision of the field.
The maximum number of bits used for any field/component is currently 32.
coherence allows us to define these values for specific components and fields. Furthermore, we can define levels of detail so that precision and therefore bandwidth consumption falls with the distance of the object to the point of observation.
Levels of detail are calculated from the distance between the entity and the center of the LiveQuery.
On each LOD you can configure the individual fields of any component to use less data. You can only decrease the fidelity, so a field can't use more data on a lower (more far away) LOD. The Archetype editor interface will help you to follow these rules.
In order to define levels of detail, we have to click the Optimize button on a Prefab's CoherenceSync
component with defined field bindings.
That opens the Optimization window. We can override the base component settings even without defining further levels of detail.
Clicking on Add new Level Of Detail will add a new LOD. We can now define the distance at which the LOD starts. This is the minimum distance between the entity and the center of the LiveQuery at which the new level of detail becomes active (i.e. the Replicator will start sending data as defined here at this distance).
You can also disable components at later LOD levels if they are not needed. In the example above, you can see that in LOD2 the entire Transform and Animator components are disabled beyond the distance of 20 units. At 100 units (a.k.a. meters), we usually do not see animation details, so we can save a lot of bandwidth and processing power by not replicating this data.
The Data Cost Overview shows us that this takes the original 913 bits down to just 372 bits at LOD level 2.
The primitive types that coherence supports can be configured in different ways:
These three types can all be configured in the same way, using different compression types:
None
No compression will be used, a full 32-bit float will be transmitted every time.
Truncated
Allows for specifying the number of bits for compression. Fewer bits mean lower bandwidth usage, at the cost of precision loss. The minimum number of bits is 10. Using 22 bits will result in around half of the precision of a full float, while 16 will result in a quarter of the precision.
Fixed point
Allows for specifying the range of values used together with either number of bits or a desired precision.
Range affects the maximum and minimum value that the data type can take on. For example, a range of 100 to 200 means only values within that range can be sent - any value outside of this range will be clamped to the nearest correct value.
Precision defines the greatest deviation allowed for the data type. For example, a precision of 0.1 means that a float of value 10.0 can be transmitted as anything from 9.9 to 10.1 over the network. The minimum allowed precision is 0.1, while the maximum precision depends on the range. Changing precision automatically recalculates the number of bits required for given range.
Bits dictate how many bits to use when calculating the precision for a given range. When set manually, it will trigger a recalculation of the precision for the given range. Mind that the number of bits can be rounded down if the calculated precision uses less, e.g. for a range of [0, 1], setting the number of bits to 6 will result in a precision of 0.1 and a final bit count of 4, since 4 bits suffice to represent this range with the calculated precision.
When using these range settings for vectors, it affects each axis of the vector separately. Imagine shrinking its bounding box, rather than a sphere.
Integers can be configured to any span (that fits within a 32-bit integer) by setting its minimum and maximum value.
For example, the member variable age
in a game about ancient trolls might use a minimum of 100 and a maximum of 2000. Based on the size of the range (1900 in this case) a bit-count will be calculated for you.
For integers, it usually makes sense not to decrease the range on lower LODs, since it will overflow (and wrap around) any member on an entity that switches to a lower LOD. Instead, use this setting on LOD 0 to save data for the whole Archetype.
Quaternions and Colors can be configured using the number of bits per component. Quaternions require sending 3 components while Colors require 4 components.
All other types (strings, booleans, entity references) have no settings that can be overridden, so your only option for optimizing those are to turn them off completely at lower LODs.
If a LODed game object is parented to another synced object, the child will base its LOD level on the World position of its parent. This means that the (local) position of the LODed child does not have any effect on its LOD, until it is unparented.
Also – to save bandwidth, detection of LOD changes on the client only happens when the entity sends a component update. This means that a child object might appear to be using a nonsensical LOD until it changes in some way, for example by modifying its position.
When we bake, information from the CoherenceArchetype
component gets written into our schema. Below, you can see the setup presented earlier reflected in the resulting schema file.
If you want to know more about how LODs work inside the schema files, take a look at Archetypes.
The most unintuitive thing about archetypes and LOD-ing is that it doesn't affect the sending of data. This means that a "fat" object with tons of fields will still tax the network and the Replication Server if it is constantly updated, even if it uses a very optimized Archetype.
Also, it's important to realize that the exact LOD used on an entity varies for each other client, depending on the position of their query (or the closest one, if several are used.)
For an object to appear to move smoothly on the screen, it must be rendered at a high rate, usually 60 frames per second or more. However, depending on the settings in your project, and the conditions of your internet connection, data may not always arrive at a smooth 60 frames per second across the network. This is completely okay, but in order to make state changes appear smooth on the Client, we use interpolation.
Interpolation is a type of estimation, a method of constructing new data points within the range of a discrete set of known data points.
When you select a variable to replicate in the Configure window, it is automatically assigned a default interpolation setting. The default settings are usually good to get started, but you can modify or create your own interpolation settings that better fit your specific needs.
In the Configure window, each binding displays its interpolation settings next to it.
Built-in interpolation settings for position and rotation are provided out-of-the-box, but you are free to create your own and use them instead.
You can also create an interpolation settings asset: Assets > Create > coherence > Interpolation Settings
Linear interpolation blends values by moving along straight lines from sample to sample. This makes the networked object move in a zig-zag pattern, but this is usually not noticeable when sampled at a sufficient rate and with some additional smoothing applied (see section Other settings > Smoothing below).
Spline interpolation blends between samples using the Catmull-Rom spline method which gives a smoother movement than linear interpolation without any sharp corners, at the cost of increased latency (see: Latency below). Spline interpolation requires at least 4 samples to produce good results.
If interpolation type is set to None, the value will simply snap to the most recent sample without any blending. This is recommended for binding types that have no obvious blending methods, e.g., string, byte array and object references.
You could also implement your own interpolation type (see: Custom Interpolators below).
Interpolation will add some additional latency to synced bindings. That's because incoming network samples must first be put in a buffer that is then used to calculate the interpolated value.
The amount of latency depends on the binding's sample rate and interpolation type. The lower the sample rate, the higher the latency.
Linear Interpolation requires a headroom of one sample while Spline Interpolation requires two samples. If interpolation type is set to None, there is no additional latency added, and samples will be rendered as soon as they arrive over the network.
Example: A Prefab that uses Spline Interpolation for its position binding with a sample rate of 30 Hz and a network latency of 100 ms will appear to be 2 * 1/30 + 0.100 ≈ 0.167 s behind the local time.
Since a Prefab can define separate interpolation types and sample rates for its different bindings, it is possible that not all bindings share the same latency. If, for example, position and rotation are interpolated with different latency, the position and rotation of a vehicle might not match on the remote object.
There are a few settings you can tweak:
Smoothing
Smooth Time: additional smoothing can be applied (using SmoothDamp
) to clear out any jerky movement after regular interpolation has been performed.
Max Smoothing Speed: the maximum speed at which the value can change, unless teleporting.
Latency
Network Latency Factor: fudge factor applied to the network latency. A factor of 1 means adapting to network latency with no margin, so the incoming sample must arrive at its exact predicted time to prevent the buffer from becoming stale. In general, a factor of 1.1 is recommended to prevent network fluctuations from causing dead reckoning due to latency peaks.
Network Latency Cooldown: when network latency decreases, wait this amount of time (in seconds) before recalculating network latency. This prevents network fluctuations from causing dead reckoning due to latency valleys.
Additional Latency: increases latency by a fixed amount (in seconds) to add an additional margin for the sample buffer.
Overshooting
Max: how far into the dead reckoning to venture when the time fraction exceeds 100%, as a percentage of the sample rate.
Retraction: how fast to pull back to 100% when overshooting the allowed dead reckoning maximum (in seconds)
Teleport Distance: if two consecutive samples are further apart than this, the value will teleport or snap to the new sample immediately without interpolating or smoothing in between.
Stale Factor: defines when to insert a virtual sample in case of a longer time gap between the samples. High stale factor puts the virtual sample close to first sample leading to a smooth transition between two distant samples. This is suitable for parameters that do not change rapidly - the position of a big ship for example. Low stale factor places the virtual sample near the second sample resulting in initial lack of change in value during interpolation followed by a quick transition to the second sample. This is best suited for parameters that can change rapidly, e.g. position of a player.
Dead reckoning is a form of replicated computing so that everyone participating in a game winds up simulating all the entities (typically vehicles) in the game, albeit at a coarse level of fidelity.
The basic notion of dead reckoning is an agreement in advance on a set of algorithms that can be used by all player nodes to extrapolate the behavior of entities in the game, and an agreement on how far reality should be allowed to get from these extrapolation algorithms before a correction is issued.
Interpolation settings can be tweaked in Play mode where you can see the result on the screen immediately, but the changes you make will be reverted again once you exit Play mode. This is because - in Play mode - a copy of the interpolation settings is created.
Remember that interpolation only happens on remote objects, so you need to select a remote object to experiment with interpolation settings in Play mode.
Interpolation works both in Baked and Reflection modes. You can change these settings at runtime via the Configure window (editor) or by accessing the binding and changing the interpolation settings yourself:
The Linear and Spline interpolators that are provided by coherence are sufficient for most common use cases, but you can also implement your own interpolation algorithm by sub-classing Interpolator
.
You can choose to override one or more of the base methods depending on which type or types of values you want to support. The method signatures usually take two adjacent samples and a fractional value (from 0 to 1) to blend between them. There are also method signatures that provide four samples, which is useful for the Catmull-Rom spline interpolation.
Here's an example of a custom interpolator that makes the remote object appear at an offset distance from the object's actual position.
The NumberOfSamplesToStayBehind property controls the internal latency.
Catmull-Rom splines require four samples to blend between, so its NumberOfSamplesToStayBehind property must be set to 2.
By default, each binding is interpolated on every Update call. This can be changed using the Interpolate On property on the CoherenceSync under Advanced Settings. Possible values are:
Update / LateUpdate / FixedUpdate - bindings will be updated with interpolated values on every Update / LateUpdate / FixedUpdate call
Combination - you can combine any of the above, so that bindings are updated in more than one Unity callback
Nothing - bindings will completely stop receiving new values because interpolation is fully disabled
If you are using a Rigidbody for the movement of a GameObject, it is recommended to set Interpolate On to FixedUpdate. Also, to achieve completely smooth movement, Rigidbody interpolation should be enabled, and you should avoid setting the position of a GameObject directly using Transform.position or Rigidbody.position.
Extrapolation uses historical data to predict the future state of a binding. By predicting the state of other players before their network data actually arrives, network lag can be reduced or removed entirely. This will cause mispredictions that need to be corrected when the incoming network data does not match the predicted state.
Extrapolation is not yet supported by coherence.
The coherence Settings window is located in coherence > Settings.
Uniqueness is about naming entities and guaranteeing that a named entity can only exist once. This name is referred to as Unique ID.
To start using uniqueness, set CoherenceSync's Uniqueness setting to No Duplicates
:
The ID or name of the entity. Use any name that helps you recognize it over the network, for example: game manager, boss, spawner, chest 1, ...
When Manual Unique ID is left empty, the Prefab Instance Unique ID will be used instead. The latter is assigned automatically (for Prefab instances) to easily tell different instances in the scene apart. This is handy when your scene has a handful of instances of the same Prefab around.
In coherence, the concepts of Authority, Persistence and Uniqueness often go hand-in-hand. For example, uniqueness can be useful to keep track of persistent objects. Despite this, all three can also function on their own.
Replacement occurs when there's an attempt to create a named entity that already exists. For example, you have player
in your scene, and you instantiate another player
.
By default, a Replace strategy is applied. This strategy makes sure the actual GameObject is kept, but the underlying linked entity is updated. This way, Unity Object references are not lost.
However, there are scenarios where you might want to trigger an actual Destroy operation when this happens - for such cases, the Destroy strategy can be used.
coherence uses the concept of ownership to determine who is responsible for simulating each Entity. By default, each Client that connects to the Replication Server owns and simulates the Entities they create. There are a lot of situations where this setup is not adequate. For example:
The number of Entities could be too large to be simulated by the players on their own, especially if there are few players and the World is very large.
The game might have an advanced AI that requires a lot of coordination, which makes it hard to split up the work between Clients.
It is often desirable to have an authoritative object that ensures a single source of truth for certain data. State replication and "eventual correctness" doesn't give us these guarantees.
Perhaps the game should run a persistent simulation, even while no one is playing.
With coherence, all of these situations can be solved using dedicated Simulators. They behave very much like normal Clients, except they run on their own with no player involved. Usually, they also have special code that only they run (and not the clients). It is up to the game developer to create and run these programs somewhere in the cloud, based on the demands of their particular game.
Simulators can also be independent from the game code. A Simulator could be a standalone application written in any language, including C#, Go or C++, for instance. We will post more information about how to achieve this here in the future. For now, if you would like to create a Simulator outside of Unity, please contact our developer relations team.
To use Simulators, you need to enter your credit card details. You can do it by logging into our Dashboard, selecting the Billing tab, finding the Payment Methods section and clicking the Manage button.
If you're on the Free plan, you won't be charged anything - our payment provider will temporarily reserve a small amount to verify that the credit card is in working order.
Only Paid and Enterprise plans offer Simulators external network connectivity. When switching from Free plan to a Paid or Enterprise plan, it may take up to 10 minutes for the Simulators to have their external connectivity enabled.
If you have determined that you need one or more Simulators for your game, there are multiple ways you can go about implementing them. You could create a separate Unity project and write the specific code for the Simulator there (while making sure you use the same schema as your original project).
An easier way is to use your existing Unity project and modify it in a way so that it can be started either as a normal Client, or as a Simulator. This will ensure that you maximize code sharing between Clients and Servers - they both do simulation of Entities in the same Game World after all.
To force a build to start as a Simulator, you can use the following command line argument:
The Simulator is started with the following parameters in coherence Cloud:
Important: if you want to deploy Simulators on the coherence Cloud, they have to be built for Linux 64-bit.
The SDK provides a static helper class to access all the above parameters in the C# code called SimulatorUtility
.
To build Simulators, it's best to use the Linux Dedicated Server Build Target.
This is great for Simulators since we're not interested in rendering any graphics on these outside of local development. You will also get a leaner executable that is smaller and faster to be published in coherence Cloud.
When a room has only Simulators (no Clients) it shuts down automatically after a short period of time.
Refer to the Simulator: Build and deploy section.
The simulation frame represents an internal clock that every Client syncs with a Replication Server. This clock runs at a 60Hz frequency which means that the resolution of a single simulation frame is ~16ms.
There are three different simulation frame types used within coherence:
Accessible via CoherenceBridge.NetworkTime
.
Every Client tries to match the Client simulation frame with the Server simulation frame by continuously monitoring the distance between the two and adjusting NetworkTime.NetworkTimeScale based on the distance, ping, delta time and several other factors, starting from the first simulation frame captured when the Client first connects (NetworkTime.ConnectionSimulationFrame).
Unity's Time.timeScale
is automatically set to the value of NetworkTime.NetworkTimeScale
if the CoherenceBridge.controlTimeScale
is set to true
(default value).
In perfect conditions, all Clients connected to a single session should have exactly the same ClientSimulationFrame
value at any point in the real-world time.
The value of the ClientSimulationFrame
can jump by more than 1 between two engine frames if the frame rate is low enough.
The Client simulation frame is used to timestamp any outgoing Entity changes to achieve a consistent view of the World for all players. The receiving side uses it for interpolation of the synced values.
Local simulation frame that progresses in user-controlled fixed steps. Accessible via CoherenceBridge.ClientFixedSimulationFrame
.
By default, the fixed step value is set to the Time.fixedDeltaTime
.
Just like the basic Client simulation frame, it uses NetworkTime.NetworkTimeScale to correct the drift. The fixed simulation frame is used as a base for the fixed-step, network-driven simulation loop that is run via CoherenceBridge.OnFixedNetworkUpdate. This loop is used internally to power CoherenceInput and the GGPO code.
Unlike ClientSimulationFrame, the CoherenceBridge.OnFixedNetworkUpdate loop never skips frames - it is guaranteed to run for every single frame increment.
Supporting Unity physics in a network environment requires managing the state of rigid bodies on replicated Prefabs. Generally, if a Prefab using CoherenceSync has a Rigidbody or Rigidbody2D component, the replicated instances of the Prefab should have the body set to kinematic so that they do not simulate in the physics step on non-authoritative clients. There is a convenient configuration for this in the CoherenceSync configuration components tab.
For most purposes, this is all that is required to have physically simulated entities correctly replicated on Clients. However, only the transform of the rigid body is actually replicated. For additional physical state replication a more advanced setup is required.
The CoherenceSync component supports three modes for replication of Unity rigid bodies:
Direct - the default mode used for basic replication of the transform of the Unity GameObject with a rigid body component. When a rigid body is detected, the position and rotation of the GameObject are provided by and assigned to the rigid body's position and rotation directly and Unity updates the GameObject transform after the physics step.
Interpolated - similar to Direct mode, except the updates to the rigid body position and rotation are applied using MovePosition and MoveRotation, which allows the Unity physics system to calculate rigid body state such as linear and angular velocity on Clients with replicated Entities.
For best behavior, it is recommended that the interpolation timing use only FixedUpdate. See the article on Interpolation.
Manual - disables automatic update of position and rotation of CoherenceSync Prefabs with rigid bodies and enables the use of callbacks, allowing custom implementation of how position and rotation updates are applied. The callbacks are OnRigidbody2DPositionUpdate, OnRigidbody3DPositionUpdate, OnRigidbody2DRotationUpdate, and OnRigidbody3DRotationUpdate.
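As a rough illustration of Manual mode, the handler below could be wired to the position and rotation callbacks listed above; the exact callback signatures come from the SDK, so the Vector3/Quaternion parameters and the MovePosition/MoveRotation choice are assumptions for this sketch:
```csharp
using UnityEngine;

public class ManualRigidbodySync : MonoBehaviour
{
    [SerializeField] private Rigidbody body;

    // Intended to be hooked up to OnRigidbody3DPositionUpdate
    public void ApplyNetworkedPosition(Vector3 newPosition)
    {
        // Route the replicated position through the physics API instead of the Transform
        body.MovePosition(newPosition);
    }

    // Intended to be hooked up to OnRigidbody3DRotationUpdate
    public void ApplyNetworkedRotation(Quaternion newRotation)
    {
        body.MoveRotation(newRotation);
    }
}
```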
The Replication Server replicates the state of the world to all connected Clients and Simulators.
To understand what is happening in the Game World, and to be able to contribute your simulated values, you need to connect to a Replication Server. The Replication Server acts as a central place where data is received from and distributed to interested Clients.
You can connect to a Replication Server in the coherence Cloud, but we recommend that you first start one locally on your computer. coherence is designed so you can easily develop everything locally first, before deploying to the Cloud.
Replication Servers replicate data defined in schema files. The schema's inspector provides all the tools needed to start a Replication Server.
Run the Replication Server by clicking the Run button, or copy the run command to the clipboard by clicking the copy run-command icon located to the right of it.
A terminal/command line will pop up, running your Server locally.
Setting | Description | Default |
---|---|---|
Port | The port the Replication Server will use. | Rooms: 42001, Worlds: 32001 |
Web port | The web port used for WebGL connections. | Rooms: 42001, Worlds: 32002 |
Send frequency | The Replication Server send frequency. | 20 packets/s |
Receive frequency | The Replication Server receive frequency. | 60 packets/s |
You can also start the Replication Server from the coherence menu or by pressing Ctrl+Shift+Alt+N.
The Replication Server supports different packet frequencies for sending and receiving data.
The send frequency is the frequency that the Replication Server uses to send packets to a given Client. Each Client can be sent packets at different times, but the packet receive frequency for any Client will not exceed the Replication Server's send frequency.
The receive frequency is the maximum frequency at which the Replication Server expects to receive packets from any Client, before throttling. If a Client sends packets to the Replication Server at a higher than expected frequency, that Client will receive a command to slow down sending. If the Client doesn't respect the command to throttle packet sending then the Client is disconnected after a time. All extra packets received by the Replication Server, after a threshold based on the receive frequency, are dropped and not processed. This is to prevent malicious Clients from flooding the Replication Server. The Unity SDK handles throttling automatically.
It is possible for the Replication Server to temporarily request Clients to reduce their packet send rates if the processing load of the Replication Server is too high. This is automatic and send rates from the affected Clients are commanded to resume once the load is reduced.
Low and consistent send rates from the Replication Server allow for optimal bandwidth use and still support a smooth stream of updates to Clients. Try different rates during local replication tests to see what works well for your game.
For a locally hosted Replication Server, you can edit the send and receive frequencies using the CLI arguments --send-frequency and --recv-frequency, or by changing them in coherence Settings -> Local Replication Server -> Send Frequency / Recv Frequency.
On the dashboard, the packet frequencies for sending and receiving data can be adjusted per project too. It is part of the Advanced Config section of Worlds create/edit and Rooms pages of the dashboard.
Adjusting the send and receive frequencies on the dashboard is available for paid plans.
When the Replication Server is running, you connect to it using the Connect method.
After trying to connect you might be interested in knowing whether the connection succeeded. The Connect call will run asynchronously and take around 100 ms to finish, or longer if you connect to a remote Server.
The OnLiveQuerySynced event is triggered when the initial game state has been synced to the client. More specifically, it is fired when all entities found by the Client's first Live Query have finished replicating. This is the last step of the connection process and is usually a good place to start the game simulation.
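A minimal sketch of reacting to that event is shown below; it assumes the bridge exposes it as a UnityEvent named onLiveQuerySynced that passes the bridge as its argument, so check the exact member name and signature against your SDK version:
```csharp
using Coherence.Toolkit;
using UnityEngine;

public class StartGameWhenSynced : MonoBehaviour
{
    [SerializeField] private CoherenceBridge bridge;

    private void OnEnable()
    {
        // Fired once the entities found by the first Live Query have finished replicating.
        bridge.onLiveQuerySynced.AddListener(OnLiveQuerySynced);
    }

    private void OnDisable()
    {
        bridge.onLiveQuerySynced.RemoveListener(OnLiveQuerySynced);
    }

    private void OnLiveQuerySynced(CoherenceBridge sender)
    {
        Debug.Log("Initial game state synced - safe to start the game simulation.");
    }
}
```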
To connect to Cloud-hosted Servers, see Rooms API and Worlds API documentation.
Check Run in Background in the Unity settings under Project Settings > Player so that the Clients continue to run even when they're not the active window.
To connect with multiple Clients locally, publish a build for your platform (File > Build and Run, details in Unity docs). Run the Replication Server and launch the build any number of times. You can also enter Play Mode in the Unity Editor.
For Mac Users: You can open new instances of an application from the Terminal:
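On macOS, the open command with the -n flag launches an additional instance of an application; the app name below is a placeholder:
```
open -n ./MyGame.app
```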
By default, the number of players that can connect to a locally hosted Replication Server is limited to 100.
By definition, a locally hosted Replication Server is one that is not managed by coherence, for example if it has been started from a Unity editor or by a game client in the self-hosting scenario. Replication Servers running in the coherence Cloud have no player limit.
This restriction can be lifted by supplying the SDK with an unlock token. The token can be generated in the Settings section of your project dashboard at coherence.io.
Once you have the token, it needs to be added to the coherence RuntimeSettings (Assets/coherence/RuntimeSettings.asset):
The unlock token will now be automatically passed to all Replication Server instances started via the Unity Editor or the Coherence.Toolkit.ReplicationServer API.
If you plan to execute the Replication Server manually, the token can be supplied via the --token <token> command line argument.
Simulators per Room can be enabled in the dashboard for the project. The Simulator used is matched according to the Simulator slug in the RuntimeSettings scriptable object file. This is set automatically when you upload a Simulator.
For each new Room, a Simulator will be created with the command line parameters described in the section. The Simulator is shut down automatically when the Room is closed.
World Simulators are started and shut down with the World. They can be enabled and assigned in the Worlds section of the Developer Portal.
World simulation servers are started with the command line parameters described in the section.
When using the Simulators tab in the coherence Hub, you can specify a Simulator slug. This is simply a unique identifier for a Simulator. This value is automatically saved in RuntimeSettings when an upload is complete, and Room creation requests will use this value to identify which Simulator should be started alongside your room.
The Simulator slug can be any string value, but we recommend using something descriptive. If the same slug is used between two uploads, the later upload will overwrite the previous Simulator.
A list of uploaded Simulators and their corresponding slugs can be found in the Developer Portal:
Before deploying a Simulation Server, testing and debugging locally can significantly improve development and iteration times. There are a few ways of accomplishing this.
Using the Unity Editor as a Simulator allows us to easily debug the Simulator. This way we can see logs, examine the state of scenes and GameObjects and test fixes very rapidly.
To run the Editor as a Simulator, run the Editor from the command line with the proper parameters:
--coherence-simulation-server: used to specify that the program should run as a coherence Simulator.
--coherence-simulator-type: tells the Simulator what kind of connection to make with the Replication Server; can be Rooms or World.
--coherence-region: tells the Simulator which region the Replication Server is running in: EU, US or local.
--coherence-ip: tells the Simulator which IP it should connect to. Using 127.0.0.1 will connect the Simulator to a local server, if one is running.
--coherence-port: specifies the port the Simulator will use.
--coherence-world-id: specifies the World ID to connect to, used only when set to Worlds.
--coherence-room-id: specifies the Room ID to connect to, used only when set to Rooms.
--coherence-unique-room-id: specifies the unique Room ID to connect to, used only when set to Rooms.
For example:
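Assuming a Rooms setup against a local Replication Server, a full invocation might look like the line below; the Editor path, project path, and Room ID are placeholders you would substitute with your own values:
```
"C:\Program Files\Unity\Hub\Editor\<version>\Editor\Unity.exe" -projectPath "C:\Projects\MyGame" --coherence-simulation-server --coherence-simulator-type Rooms --coherence-region local --coherence-ip 127.0.0.1 --coherence-port 42001 --coherence-room-id 1
```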
If you're not sure which values should be used, adding a COHERENCE_LOG_DEBUG define symbol will let you see detailed logs. Among them are logs that describe which IP, port and such the Client is connecting to. This can be done in the Player settings: Project Settings > Player > Other Settings > Script Compilation > Scripting Define Symbols.
Another option is making a Simulator build and running it locally. This option emulates more closely what will happen when the Simulator is running after being uploaded.
You can run a Simulator executable build in the same way you run the Editor.
This allows you to test a Simulator build before it is uploaded or if you are having trouble debugging it.
You can also run an existing Simulator build from coherence Hub > Simulators > Run local simulator build.
Use the Fetch Last Endpoint button to autofill the required fields.
When using a Rooms-based setup, you first have to create a Room in the local Replication Server (e.g. by using the connect dialog in the Client).
The local Replication Server will print out the Room ID and unique Room ID that you can use when connecting the Simulator.
When scripting Simulators, we need mechanisms to tell them apart.
Ask Coherence.SimulatorUtility.IsSimulator.
There are two ways you can tell coherence if the game build should behave as a Simulator:
The COHERENCE_SIMULATOR preprocessor define.
The --coherence-simulation-server command-line argument.
Connect and ConnectionType
The Connect method on Coherence.Network accepts a ConnectionType parameter.
Whenever the project compiles with the COHERENCE_SIMULATOR preprocessor define, coherence understands that the game will act as a Simulator.
Launching the game with --coherence-simulation-server will let coherence know that the loaded instance must act as a Simulator.
You can supply additional parameters to a Simulator that define its area of responsibility, e.g. a sector/quadrant to simulate Entities in and take authority over Entities wandering into it.
You can also build a special Simulator for AI, physics, etc.
You can define who simulates the object in the CoherenceSync inspector.
coherence includes an auto-connect MonoBehaviour out of the box for Room- and World-based Simulators. The component is called AutoSimulatorConnection.
Multi-Room Simulators have their own per-scene reconnect logic. The AutoSimulatorConnection component should not be enabled when working with Multi-Room Simulators.
If the Simulator is invoked with the --coherence-play-region parameter, AutoSimulatorConnection will try to reconnect to the Server located in that region.
Simulate multiple Rooms at the same time, within one Unity instance
Multi-Room Simulators are Room Simulators which are able to simulate multiple game rooms at the same time - one sim to rule them all!
In order to achieve this, the game code should be defensive about which Room it is affecting. Game state should be kept per Room, meaning game managers, singletons (static data), etc. need to account for this.
Each Room is held in a different scene. So for every Room created, the Multi-Room Simulator should open a connection to it, hence additively loading a scene and establishing a Simulator connection (via a Bridge).
By using Multi-Room Simulators, the coherence Developer Portal is able to instruct your Simulator which room to join and start simulating.
This communication happens via HTTP. An HTTP server is started by your game build when the MultiRoomSimulator component is active. This component listens to HTTP requests made by the coherence Developer Portal.
For offline local development, you can use a MultiRoomSimulatorLocalForwarder component on your Clients, which will create HTTP requests against your local Simulator upon Client connection, like joining a Room.
For local development, enable the Local Development Mode flag in the .
Once the MultiRoomSimulator receives a request to join a Room, it spawns a CoherenceSceneLoader that will be in charge of additively loading the specified scene.
The quickest way to get Multi-Room Simulators set up is by using the provided wizard.
It will take you through the GameObjects and Components needed to make it happen.
Here's a quick overview video of the setup:
These are the pieces needed for Multi-Room Simulators to work:
Simulators
In the initialization scene (splash, init, menu, ...)
MultiRoomSimulator — listens to join room requests and delegates scene loading (by instantiating CoherenceSceneLoaders)
Clients
(Only for local development) In the scene where you connect to a Room (where you have the Sample UI or your custom connection logic)
MultiRoomSimulatorLocalForwarder — requests the local MultiRoomSimulator to join rooms when the Client connects.
Independently
In the scene where the networked game logic is (game, Room, main, ...)
Bridge — handles the connection
LiveQuery — filters Entities by distance
CoherenceScene — when the scene is loaded via CoherenceSceneLoader, it will try to connect using the data given by it. It attaches to the Bridge, creates a connection, and handles auto reconnection. If a scene loaded through CoherenceSceneLoader doesn't have a CoherenceScene on it, one will be created on the fly.
There are two components that can help you fork Client and Simulator logic, for example, by enabling or disabling the MultiRoomSimulator component depending on whether it's a Simulator or a Client build. These are optional but can come in handy.
SimulatorEventHandler — events on the build type (Client/Simulator).
ConnectionEventHandler — events on the connection established by the Bridge associated with that Scene.
It is possible to visualize each individual Room the Multi-Room Simulator is working on. By default, Simulator connections to Rooms are hidden, as shown in the image above. You can toggle the visibility per scene by clicking the Eye icon. You can also change the default visibility of the loaded scene (defaults to hidden) on the CoherenceScene component:
Working with Multi-Room Simulators needs your logic to be constrained to the scene. Methods like FindObjectsOfType will return objects in all scenes — you could affect other game sessions!
Check out Coherence.Toolkit.SceneUtils for alternative APIs to FindObjectsOfType that work per scene.
Also, Coherence.Toolkit.ActiveSceneScope can help make sure instantiation happens where you want it to be.
This is also true for static members, e.g. singletons. When using Multi-Room Simulators, there need to be as many isolated instances of your managers as there are open simulated rooms.
For example, if you were to access your Game Manager through GameManager.instance, now you'll need a per-scene API like GameManager.GetInstance(scene).
There may be third-party or Unity-provided features that can't be accessed per scene, and that affect the whole game.
Loading operations, garbage collections, frame-rate spikes... all these will affect performance on other sessions, since everything is running within the same game instance.
Keep in mind that all regular Unity arguments are supported; see the Unity documentation on command line arguments for the full list.
To learn more about Simulators, see the Simulators section.
To learn more about creating a Simulator build, see the Simulator: Build and deploy section.
When you add the component, it will parse the connection data passed via the command line parameters described above to connect to the given Replication Server automatically. This will also work for Simulators you upload to the coherence Cloud.
By default, scenes will have their own local physics scene. coherence ticks the physics scene via the CoherenceScene component, which the target scene to be loaded should include.
Multi-Room Simulators are still Room Simulators. You need to enable Simulators for Rooms and enable Multi-Room Simulators in the coherence Developer Portal, as shown here:
coherence allows us to use multiple Simulators to split up a large game world with many entities between them. This is called spatial load balancing.
You can see load balancing in action in this video:
While load balancing is supported for standalone projects, our cloud services currently only support associating one Simulator to a Room or World. This will be extended in the near future. Enterprise customers can still run multiple Simulators in their own cloud environment.
A Simulator build is a built Unity Player for the Linux 64-bit platform that you can upload to coherence straight from the Unity Editor.
Open Coherence Hub and select the Simulators tab.
From here you can build and upload Simulators.
Click the little info icon in the top right corner to learn more about Simulators and how to build them properly.
You can change your Simulator build options by editing the SimulatorBuildOptions object, or in the coherence Hub Simulators tab.
There are several settings you might want to change.
Specify the scenes you want to get in the build via the Scenes To Build field.
For a local build, you can choose to enable/disable the Headless Mode by ticking the checkbox. For a cloud build, Headless Mode is always enabled by default.
For more information about the options listed under Build Size Optimizations, see this section below.
Make sure you have completed the steps required in Create an account.
Make sure you meet the requirements:
You have to have the Linux modules (Linux Build Support (IL2CPP), Linux Build Support (Mono), and Linux Dedicated Server Build Support) installed in the Unity Editor. See Unity - Manual: Adding modules to the Unity Editor.
You have to be logged into the coherence Developer Portal, through the Unity Editor. See Login through Unity for more information.
Press the coherence Hub > Simulators > Build And Upload Headless Linux Client button.
When the build is finished, it will be uploaded to your currently selected organization and project in the Developer Portal.
You'll see in the developer dashboard when your Simulator is ready to be associated with a Room or World.
Target frame rate on Simulator builds is forced to 30.
This feature is experimental, please make sure you make a backup of your project beforehand.
You can set the values for the Build Size Optimizations in the drop-down list of the build configuration inspector. It looks like this:
Select the desired optimizations depending on your needs.
Once your Simulator is built and uploaded, you'll be prompted with the option to revert the settings to the ones you had applied before building. This is to avoid these settings from affecting other builds you make.
Optimization | What it does |
---|---|
Replace Textures And Sounds With Dummies | Project's textures and sound files are replaced with tiny and lightweight alternatives (dummies). Original assets are copied over to <project>/Library/coherence/AssetsBackup. They are restored once the build process has finished. |
Keep Original Assets Backup | The Assets Backup (found at <project>/Library/coherence/AssetsBackup) is kept after the build process is completed, instead of deleted. This will take extra disk space depending on the size of the project, but is a safety convenience. |
Compress Meshes | Sets Mesh Compression on all your models to High. |
Disable Static Batching | Static Batching tries to combine meshes at compile-time, potentially increasing build size. Depending on your project, static batching can affect build size drastically. Read more about static batching. |
coherence Input Queues are backed by a rolling buffer of inputs transmitted between the Clients. This buffer can be used to build a fully deterministic simulation with client-side prediction, rollback, and input delay. This game networking model is often called GGPO (Good Game Peace Out).
Input delay allows for smooth, synchronized netplay with almost no negative effect on the user experience. The way it works is that input is scheduled to be processed X frames in the future. Consider a fighting game scenario with two players. At frame 10, Player A presses a kick button that is scheduled to be executed at frame 13. This input is immediately sent to Player B. With a decent internet connection, there's a very good chance that Player B will receive that input before reaching frame 13 locally. Thanks to this, the simulation is always in sync and can progress steadily.
Prediction is used to run the simulation forward even in the absence of inputs from other players. Consider the scenario from the previous paragraph - what if Player B doesn't receive the input on time? The answer is very simple - we just assume that the input state hasn't changed and progress with the simulation. As it turns out this assumption is valid most of the time.
Rollback is used to correct the simulation when our predictions turn out wrong. The game keeps historical states of the game for past frames. When an input is received for a past simulation frame the system checks whether it matches the input prediction made at that frame. If it does we don't have to do anything (the simulation is correct up to that point). If it doesn't match, however, we need to restore the simulation state to the last known valid state (last frame which was processed with non-predicted inputs). After restoring the state we re-simulate all frames up to the current one, using the fresh inputs.
GGPO is not recommended for FPS-style games. The correct rollback networking solution for those is planned to be added in the future.
In a deterministic simulation, given the same set of inputs and a state, we are guaranteed to receive the same output. In other words, the simulation is always predictable. Deterministic simulation is a key part of the GGPO model, as well as the lockstep model, because it lets us run exactly the same simulation on multiple Clients without the need to synchronize big and complex states.
Implementing a deterministic simulation is a non-trivial task. Even the smallest divergence in simulation can lead to a completely different game outcome. This is usually called a desync. Here's a list of common determinism pitfalls that have to be avoided:
Using Update to run the simulation (every player might run at a different frame rate)
Using coroutines, asynchronous code, or system time in a way that affects the simulation (anything time-sensitive is almost guaranteed to be non-deterministic)
Using Unity physics (it is non-deterministic)
Using a random number generator without prior seed synchronization
Non-symmetrical processing (e.g. processing players by their spawn order, which might be different for everyone)
Relying on floating point numbers across different platforms, compilations, or processor types
We'll create a simple, deterministic simulation using provided utility components.
This is the recommended way of using Input Queues since it greatly reduces the implementation complexity and should be sufficient for most projects.
If you'd prefer to have full control over the input code, feel free to use the CoherenceInput and InputBuffer directly.
Our simulation will synchronize the movement of multiple Clients, using rollback and prediction to cover for latency.
Start by creating a Player component and a Prefab for it. We'll use the client connection system to make our Player represent a session participant and automatically spawn the selected Prefab for each player that connects to the Server. The Player will also be responsible for handling inputs using the CoherenceInput component.
Create the Prefab from a cube, sphere, or capsule so it is visible in the Scene. That way it will be easier to visually verify that the simulation works later.
When building an input-based simulation it is important to use the Client connection system, which is not subject to the LiveQuery. Objects that might disappear or change based on the client-to-client distance are likely to cause simulation divergence leading to a desync.
Our Player code looks as follows:
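The following is a minimal sketch of such a component; it assumes CoherenceInput exposes GetAxis2D/SetAxis2D accessors for the Mov axis configured in the next step, so double-check those method names against your SDK version:
```csharp
using Coherence.Toolkit;
using UnityEngine;

public class Player : MonoBehaviour
{
    private CoherenceInput input;

    private void Awake()
    {
        input = GetComponent<CoherenceInput>();
    }

    // Read this player's current movement input (called by the central simulation code)
    public Vector2 GetMovement()
    {
        return input.GetAxis2D("Mov");
    }

    // Store the local player's movement input so it gets sent to other Clients
    public void SetMovement(Vector2 movement)
    {
        input.SetAxis2D("Mov", movement);
    }
}
```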
The GetMovement and SetMovement methods will be called by our "central" simulation code. Now that we have our Player defined, let's prepare a Prefab for it. Create a GameObject, attach the Player component to it, and create a Prefab using the CoherenceSync inspector. The inspector view for our Prefab should look as follows:
A couple of things to note:
A Mov 2D axis has been added to the CoherenceInput, which will let us sync the movement input state.
Unlike in the Server authoritative setup, our simulation uses client-to-client communication, meaning each Client is responsible for its Entity and for sending inputs to other Clients. To ensure this behavior, set the CoherenceSync > Simulation and Interpolation > Simulation Type to Client Side.
In a deterministic simulation, it is our code that is responsible for producing deterministic output on all Clients. This means that the automatic transform position syncing is no longer desirable. To turn it off, toggle the Predicted button in the CoherenceSync Bindings window (see the chapter on client-side prediction in Server authoritative setup).
In order for inputs to be processed in a deterministic way, we need to use fixed simulation frames. Check the CoherenceInput > Use Fixed Simulation Frames checkbox.
Make sure to use the baked mode (CoherenceInput > Use Baked Script) - inputs do not work in reflection mode.
Since our player is the base of the Client connection, we must set it as the connection Prefab in the CoherenceBridge and Enable Client Connections:
Before we move on to the simulation, we need to define our simulation state which is a key part of the rollback system. The simulation state should contain all the information required to "rewind" the simulation in time. For example, in a fighting game that would be the position of all players, their health, and perhaps a combo gauge level. In a shooting game, this could be player positions, their health, ammo, and map objective progression.
In the example we're building, player position is the only state. We need to store it for every player:
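A minimal sketch of such a state could be as simple as the class below, holding one position per connected player; the fixed array size of two is just an assumption for a two-player example:
```csharp
using UnityEngine;

public class SimulationState
{
    // One entry per player, indexed in the player order provided by the simulation
    public Vector3[] Positions = new Vector3[2];
}
```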
The state above assumes the same number and order of players in the simulation. The order is guaranteed by the CoherenceInputSimulation; however, handling a variable number of Clients is up to the developer.
Simulation code is where all the logic should happen, including applying inputs and moving our Players:
We have identified a misprediction bug in coherence 1.1.3 and earlier versions that causes a failed rollback right after a new Client joins a session, throwing relevant error messages. However, if the Client manages to connect, this won't affect the rest of the simulation. The fix should ship in the next release.
SetInputs is called by the system when it's time for our local Player to update its input state using the CoherenceInput.
Simulate is called when it's time to simulate a given frame. It is also called during frame re-simulation after misprediction - don't worry though, the complex part is handled by the CoherenceInputSimulation internals - all you need to do in this method is apply inputs from the CoherenceInput to run the simulation.
Rollback is where we need to set the simulation state back to how it was at a given frame. The state is already provided in the state parameter; we just need to apply it.
CreateState is where we create a snapshot of our simulation so it can be used later in case of rollback.
OnClientJoined and OnClientLeft are optional callbacks. We use them here to start and stop the simulation depending on the number of Clients.
SimulationEnabled is set to false by default. That's because in a real-world scenario the simulation should start only after all Clients have agreed for it to start, on a specific frame chosen, for example, by the host.
Starting the simulation on a different frame for each Client is likely to cause a desync (as well as joining in the middle of the session, without prior simulation state synchronization). Simulation start synchronization is however out of the scope of this guide so in our simplified example we just assume that Clients don't start moving immediately after joining.
As a final step, attach the Simulation script to the Bridge object in the scene and link the Bridge back to the Simulation:
That's it! Once you build a client executable you can verify that the simulation works by connecting two Clients to the Replication Server. Move one of the Clients using arrow keys while observing the movement being synced on the other one.
Due to the FixedNetworkUpdate running at a different (usually lower) rate than Unity's Update loop, polling inputs using functions like Input.GetKeyDown is susceptible to input loss, i.e. keys that were pressed during the Update loop might not show up as pressed in the FixedNetworkUpdate.
To illustrate why this happens, consider the following scenario: given that Update runs five times for each FixedNetworkUpdate, if we polled inputs from the FixedNetworkUpdate there's a chance that an input was fully processed within the five Updates in-between FixedNetworkUpdates, i.e. a key was "down" on the first Update, "pressed" on the second, and "up" on a third one.
To prevent this issue from occurring you can use the FixedUpdateInput class:
The FixedUpdateInput works by sampling inputs at Update and prolonging their lifetime to the network FixedNetworkUpdate so they can be processed correctly there. For our last example that would mean "down" & "pressed" registered in the first FixedNetworkUpdate after the initial five updates, followed by an "up" state in the subsequent FixedNetworkUpdate.
The FixedUpdateInput works only with the legacy input system (UnityEngine.Input).
There's a limit to how many frames can be predicted by the Clients. This limit is controlled by CoherenceInput.InputBufferSize. When Clients try to predict too many frames into the future (more frames than the size of the buffer) the simulation will issue a pause. This pause affects only the local Client. As soon as the Client receives enough inputs to run another frame the simulation will resume.
To get notified about the pause use the OnPauseChange(bool isPaused) method from the CoherenceInputSimulation:
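A small sketch of using it inside the Simulation class from the quickstart is shown below; the protected override access and the pauseScreen object are assumptions made for illustration:
```csharp
// Inside the Simulation class (a CoherenceInputSimulation subclass) from the quickstart.
[SerializeField] private GameObject pauseScreen; // hypothetical UI object

protected override void OnPauseChange(bool isPaused)
{
    // Show a "waiting for inputs" screen while the local simulation is paused
    pauseScreen.SetActive(isPaused);
}
```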
This can be used for example to display a pause screen that informs the player about a bad internet connection.
To recover from the time gap created by the pause, the Client will automatically speed up the simulation. The time scale change is gradual and, in the case of a small frame gap, can be unnoticeable. If manual control over the timescale is desired, set the CoherenceBridge.controlTimeScale flag to "false".
The CoherenceInputSimulation has a built-in debugging utility that collects various information about the input simulation on each frame. This data can prove extremely helpful in finding a simulation desync point.
The CoherenceInputDebugger can be used outside the CoherenceInputSimulation. It does however require the CoherenceInputManager, which can be retrieved through the CoherenceBridge.InputManager property.
Since debugging might induce a non-negligible overhead, it is turned off by default. To turn it on, add a COHERENCE_INPUT_DEBUG scripting define:
From that point, all the debugging information will be gathered. The debug data is dumped to a JSON file as soon as the Client disconnects. The file can be located under the root directory of the executable (in the case of the Unity Editor, the project root directory) under the following name: inputDbg_<ClientId>.json, where <ClientId> is the CoherenceClientConnection.ClientId of the local client.
Data handling behavior can be overridden by setting the CoherenceInputDebugger.OnDump delegate, where the string parameter is a JSON dump of the data.
The debugger is available as a property in the simulation base class: CoherenceInputSimulation.Debugger. Most of the debugging data is recorded automatically; however, the user is free to append any arbitrary information to a frame's debug data, as long as it is JSON serializable. This is done by using the CoherenceInputDebugger.AddEvent method:
Since the simulation can span an indefinite number of frames, it might be wise to limit the number of debug frames kept by the debugging tool (it's unlimited by default). To do this, use the CoherenceInputDebugger.FramesToKeep property. For example, setting it to 1000 will instruct the debugger to keep only the latest 1000 frames worth of debugging information in memory.
Since the debugging tool uses JSON as its serialization format, any data that is part of the debug dump must be JSON-serializable. An example of this is the simulation state. The simulation state from the quickstart example is not JSON-serializable by default, due to Unity's Vector3, which doesn't serialize well out of the box. To fix this, we need to give the JSON serializer a hint:
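One way to do that is shown below; it assumes the state stores positions in a Vector3 array as in the quickstart, and uses Newtonsoft's ItemConverterType to apply the custom converter mentioned next to every array element:
```csharp
using Newtonsoft.Json;
using UnityEngine;

public class SimulationState
{
    // Serialize each Vector3 with the custom converter instead of the default reflection-based one
    [JsonProperty(ItemConverterType = typeof(UnityVector3Converter))]
    public Vector3[] Positions = new Vector3[2];
}
```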
With the JsonProperty attribute, we can control how a given field/property/class will be serialized. In this case, we've instructed the JSON serializer to use the custom UnityVector3Converter for serializing the vectors.
You can write your own JSON converters using the example found here. For information on the Newtonsoft JSON library that we use for serialization check here.
To find a problem in the simulation, we can compare the debug dumps from multiple clients. The easiest way to find a divergence point is to search for a frame where the hash differs for one or more of the clients. From there, one can inspect the inputs and simulation states from previous frames to find the source of the problem.
Here's the debug data dump example for one frame:
Explanation of the fields:
Frame - the frame of this debug data
AckFrame - the common acknowledged frame, i.e. the lowest frame for which inputs from all clients have been received and are known to be valid (not mispredicted)
ReceiveFrame - the common received frame, i.e. the lowest frame for which inputs from all clients have been received
AckedAt - the frame at which this frame has been acknowledged, i.e. set as known to be valid (not mispredicted)
MispredictionFrame - a frame that is known to be mispredicted, or -1 if there's no misprediction
Hash - hash of the simulation state. Available only if the simulation state implements the IHashable interface
Initial state - the original simulation state at this frame, i.e. the one before rollback and resimulation
Initial inputs - the original inputs at this frame, i.e. the ones that were used for the first simulation of this frame
Updated state - the state of the simulation after rollback and resimulation. Available only in case of rollback and resimulation
Updated inputs - inputs after being corrected (post misprediction). Available only in case of rollback and resimulation
Input buffer states - a dump of the input buffer states for each client. For details on the fields, see the InputBuffer code documentation
Events - all debug events registered in this frame
There are two main variables which affect the behaviour of the InputBuffer:
Initial buffer size - the size of the buffer determines how far into the future the input system is allowed to predict. The bigger the size, the more frames can be predicted without running into a pause. Note that the further we predict, the more unexpected the rollback can be for the player. The InitialBufferSize value can be set directly in code; however, it must be done before the Awake of the baked component, which might require a script execution order configuration.
Initial buffer delay - dictates how many frames must pass before applying an input. In other words, it defines how "laggy" the input is. The higher the value, the less likely Clients are going to run into prediction (because a "future" input is sent to other Clients), but the more unresponsive the game might feel. This value can be changed freely at runtime, even during a simulation (it is however not recommended due to inconsistent input feeling).
The other two options are:
Disconnect on time reset - if set to "true" the input system will automatically issue a disconnect on an attempt to resync time with the Server. This happens when the Client's connection was so unstable that frame-wise it drifted too far away from the Server. In order to recover from that situation, the Client performs an immediate "jump" to what it thinks is the actual server frame. There's no easy way to recover from such a "jump" in the deterministic simulation code, so the advised action is to simply disconnect.
Use fixed simulation frames - if set to "true" the input system will use the IClient.ClientFixedSimulationFrame frame for the simulation; otherwise, the IClient.ClientSimulationFrame is used. Setting this to "true" is recommended for a deterministic simulation.
The fixed network update rate is based on the Fixed Timestep configured through the Unity project settings:
To know the exact fixed frame number that is executing at any given moment, use the IClient.ClientFixedSimulationFrame or CoherenceInputSimulation.CurrentSimulationFrame property.
Communication between Clients
ClientConnections are CoherenceSyncs that the CoherenceBridge can handle for you and that let you uniquely identify connected users, find them by their ID, and easily send commands between those users.
When using ClientConnections, CoherenceBridge will spawn a CoherenceSync for each connection (Client or Simulator). Those CoherenceSyncs are subject to a different ruleset than standard CoherenceSyncs:
They can't be created or destroyed by the Client - they are always driven by the CoherenceBridge.
They are global - they are replicated across Clients regardless of the Live Query extent.
ClientConnections shine whenever there's a need to communicate something to all the connected players. Usage examples:
Global chat
Game state changes: game started, game ended, map changed
Server announcements
Server-wide leaderboard
Server-wide events
The global nature of ClientConnections doesn't fit all game types - for example, it rarely makes sense to keep every Client informed about the presence of all players on the server in an MMORPG. If this is your use case, don't set ClientConnections on your CoherenceBridge.
To enable ClientConnections, turn Global Query on in your CoherenceBridge (it should be on by default):
Disabling Global Query on one Client doesn't affect other Clients, i.e. the ClientConnection object of this Client will still be visible to other Clients that have Global Query turned on.
Most of the ClientConnection functionality is accessible through the CoherenceBridge.ClientConnections object:
Each connection is represented by a plain C# CoherenceClientConnection object. It contains all the important information about a connection - its ClientID, Type, whether it IsMyConnection, and a reference to the GameObject and CoherenceSync associated with it.
The CoherenceClientConnection.ClientID is guaranteed not to change during a connection's lifetime. However, if a Client disconnects and then connects again to the same Room/World, a new ClientID will be assigned (since a new connection was established).
Each ClientConnection can have a CoherenceSync automatically spawned and associated with it. Those objects, like any other objects with CoherenceSync, can be used for syncing properties or sending messages, with a little twist - they are global and thus not limited by the CoherenceLiveQuery extent. That makes them perfect candidates for operations like:
Syncing global information - name, stats, tags, etc.
Sending global messages - chat, server interaction
To enable connection objects:
This step is described in detail in the Prefab setup section. In short, it is enough to create a Prefab with a CoherenceSync and a custom component (PlayerConnection in this example):
CoherenceBridge
For the system to know which object to create for every new Client connection, we have to link our Prefab to the CoherenceBridge. Simply drag the Prefab to the Client field in the inspector:
From now on, every new connection will be assigned an instance of this Prefab, which can be accessed through the CoherenceClientConnection.GameObject property.
Note that there's a separate field for the Simulator Connection Prefab. It can be used to spawn a completely different object for the Simulator connection that may contain Simulator-specific commands and replicated properties. If the field is left empty, no object will be created for the Simulator connection.
The Prefab selection process can also be controlled from code using the CoherenceBridge.ClientConnections.ProvidePrefab callback:
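The snippet below is a hypothetical sketch of such a callback; the exact delegate signature and how it is assigned are defined by the SDK, and the two Prefab fields are illustrative names:
```csharp
// Assumes ProvidePrefab receives the client ID and connection type and returns the Prefab to spawn.
bridge.ClientConnections.ProvidePrefab = (clientId, connectionType) =>
    connectionType == ConnectionType.Simulator
        ? simulatorConnectionPrefab   // hypothetical field holding the Simulator connection Prefab
        : playerConnectionPrefab;     // hypothetical field holding the Client connection Prefab
```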
A Prefab provided through the ProvidePrefab callback takes precedence over Prefabs linked in the Inspector.
Client messages are a shortcut to send Network Commands using a CoherenceClientConnection object as the target instead of a CoherenceSync. The end recipient of the command will however still be the CoherenceSync associated with the ClientConnection Prefab, just like a regular Network Command.
Preparing to use Client messages requires the same approach as exposing a command on a script present on the Client Connection Prefab that we set up in the CoherenceBridge in the previous section:
Don't forget to bind the method to define a Command:
That same Command can now be sent using the CoherenceClientConnection.SendClientMessage method:
If the ClientID of the message recipient is known, we can use the CoherenceBridge.ClientConnections directly to send a Client message:
This is a collection of features that makes it easy for the players of your coherence-enabled game to host Replication Servers themselves, without using our cloud services. It involves three main parts:
A mechanism for bundling the coherence Replication Server with the game.
SDK methods that start and stop the Replication Server on the player's personal device.
A relay for communication between the Replication Server and some 3rd party networking service, such as Steam Networking.
Players running their own local Replication Server will still be bound by the legal terms of the coherence end user agreement. For questions regarding this, please reach out to us at the devrel@coherence.io email address.
If you decide to release your game with support for Client-hosting, it is important to first consider the tradeoffs of this approach:
Server costs will be paid by those who provide the networking service, i.e. Valve in the case of relying on Steam Networking.
Players will be running the Replication Server on their personal devices, so their specs and network conditions will have a big impact on performance and reliability for all players.
You will not have access to the full range of features included when you're using the coherence Cloud services.
It lets your players keep playing the game over the Internet, even if your company or coherence goes out of business.
To bundle the coherence Replication Server with the build of your game, go to coherence > Settings > Bundle stand-alone Replication Server and check what platforms should have this feature enabled.
Currently, coherence supports bundling on Windows, MacOS, and Linux. We are working on adding support for more platforms in the future.
If your game uses a custom build process where the automatic bundling doesn't work well, you can also use a manual approach.
Here's a code example:
The BundleWithStreamingAssets method will copy the Replication Server and a combined schema (which contains all active schemas) for the target platform into the streaming assets folder.
To start the bundled Replication Server from within your game, you can use the Launcher and ReplicationServer classes provided by the coherence Unity SDK. It will make sure that the correct parameters are passed to the Replication Server at startup, and it will also help you manage the child process.
Launching a child process is not supported in Unity IL2CPP builds. If your game is using IL2CPP, you must use another method (see the next section below).
Here's a simple code example of how to start and stop the Replication Server using the coherence API:
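The rough sketch below illustrates the idea; the config fields (Mode, ports, AutoShutdown) and the Create/Start/Stop calls are assumptions about the Launcher API mentioned above, so verify the exact members against your SDK version:
```csharp
using Coherence.Toolkit.ReplicationServer;

public class SelfHostedServer
{
    private IReplicationServer replicationServer; // assumed return type of Launcher.Create

    public void StartServer()
    {
        var config = new ReplicationServerConfig
        {
            Mode = Mode.Rooms,    // or Mode.World, depending on your setup
            APIPort = 42001,      // illustrative port values
            UDPPort = 42001,
            AutoShutdown = true,  // shut down when no Clients are connected (see note below)
        };

        replicationServer = Launcher.Create(config);
        replicationServer.Start();
    }

    public void StopServer()
    {
        // Always stop the child process when the hosting player ends the session
        replicationServer?.Stop();
    }
}
```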
It is very important to keep track of your child process (via the ReplicationServer class) and close it down properly, or else you will leave the Replication Server running on the user's machine. Note that it's only the person hosting a game that needs to start an instance of the Replication Server; players joining a game should connect normally.
In case your game crashes and your child processes are not cleaned up, it might be useful to set AutoShutdown = true so that the Replication Server (RS) will automatically shut down after AutoShutdownTimeout milliseconds (defaults to 10 seconds) when no Clients are connected to it.
Builds using IL2CPP (instead of Mono) do not support starting a child process with Process.Start(), which is used internally in the coherence Launcher class. If you are depending on IL2CPP and want to support Client-hosting, you must use one of the available workarounds instead:
There are third-party packages in the Unity asset store that let you launch subprocesses from IL2CPP builds.
You can create launcher scripts, similar to the ones coherence generates in your Unity project at ./Library/coherence, and ship them with your game. Depending on whether the game is using Rooms or Worlds, the script is called run-replication-server-rooms or run-replication-server-worlds. The player can then run the script to start the Replication Server in an easy fashion from outside the game.
For players to communicate with one another over the Internet, a networking service is required. The networking service provides features such as setting up games, establishing connections, and sending data.
By default, coherence provides all of these networking services out-of-the-box. In this scenario, players all communicate with one another via a Replication Server that is hosted in the coherence Cloud, so you don't have to worry about anything.
In a Client-hosted scenario however, the Replication Server runs on the hosting player's machine. Therefore, the connectivity between clients and host must be provided via an external networking service. In the context of Client-hosting, we call such networking service a relay, since it is used to relay traffic between the Clients and the Replication Server running on the host's machine. You can also think of a relay as tunneling traffic between clients and host.
Steam offers a free networking service for games available on its platform. In order to use Steam Networking you'll need a registered Steam application with a valid Steam App ID. Once you have a Steam App ID, you'll be able to pass messages between clients via Steam's servers.
To make things easy, coherence provides a complete Steam relay implementation that provides out-of-the-box networking over Steam. The Steam relay utilizes the Facepunch.Steamworks library to access the Steam API.
The sample code also demonstrates how to register a lobby with the Steam Matchmaking API to make it easy for players to find and join an ongoing session.
The Steam relay is available here: https://github.com/coherence/steam-integration-sample.
The host (Client A) starts a Replication Server on its local machine.
The host connects to the local Replication Server.
The host initializes a SteamRelay that listens for incoming Steam connections.
Another player (Client B) connects to the host via Steam using the SteamTransport.
The SteamRelay accepts the incoming connection, creating a SteamRelayConnection.
The SteamRelayConnection immediately starts passing data between the Steam servers and the Replication Server.
The relayed connection is now fully established. All data between Client B and the Replication Server is relayed through Steam.
For each new Client that connects, steps 4-7 are repeated.
Although the diagram above shows that traffic is routed via Steam servers, it is often the case that traffic can flow directly between player and host machines without actually making the extra hop via the Steam servers. This technique is commonly referred to as "hole punching" or "NAT Punch-through" and greatly reduces latency, however, it is not supported on all networks due to firewall restrictions.
Steam's networking service will first attempt a NAT punch-through and then automatically fall back to relayed communication if the punch-through failed.
To be able to test your game with the SteamRelay you'll need at least two Steam accounts - even for local development. Since only a single Steam account can be logged in to one machine at a time, you will need at least two machines or a sandbox solution to be able to connect. Trying to connect two instances of the game on the same machine will result in "invalid connection" or "failed to create lobby" errors.
Similar to the Steam relay above, you can create your own custom relay implementation and route traffic via any networking service. The relay implementation consists of three parts, each class implementing one of three interfaces.
ITransport (Client) - Outgoing connection. Passes messages between the client and the networking service.
IRelay (Host) - Listens for incoming connections and instantiates IRelayConnections.
IRelayConnection (Host) - Incoming connection. Passes messages between the Replication Server and the networking service.
Let's say we want to implement a custom relay that uses an API called FoobarNetworkingService. The code here outlines the main points to implement for routing network traffic.
First, we'll create a CustomTransport class to manage the outgoing connection from the client to the host. CustomTransport implements the ITransport interface that provides a few important methods. The Open and Close methods are used to connect and disconnect to/from the networking service. The Send and Receive methods are used to send and receive messages to/from the networking service.
The CustomTransport will be instantiated when the client attempts to connect to the host, usually as a result of calling CoherenceBridge.Connect. You can control how the transport is instantiated by implementing an ITransportFactory.
Finally, to configure the client to actually use the CustomTransport, just set the transport factory on CoherenceBridge.
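For example, assuming the factory class from the previous step is called CustomTransportFactory (an illustrative name), the client-side wiring could be a single call:
```csharp
// Route all outgoing connections from this Client through the custom transport.
bridge.SetTransportFactory(new CustomTransportFactory());
```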
This is everything needed on the client-side.
You can call SetTransportFactory(null) to disable the custom transport and connect as normal.
On the host-side, we need a CustomRelayConnection class to manage the incoming connection. This class implements IRelayConnection and is a mirror image of the CustomTransport. The OnConnectionOpened and OnConnectionClosed methods are called in response to CustomTransport.Open and CustomTransport.Close. The SendMessageToClient and ReceiveMessagesFromClient methods are responsible for sending and receiving messages over the networking services, similar to CustomTransport.Send and CustomTransport.Receive.
Now we just need a CustomRelay class that listens for incoming FoobarConnections and maps them to a corresponding CustomRelayConnection.
Finally, to configure the host to actually use the CustomRelay, simply set the relay on the CoherenceBridge:
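Under the same naming assumptions as above, the host-side wiring could look like this:
```csharp
// Host side: relay incoming networking-service connections to the locally hosted Replication Server.
bridge.SetRelay(new CustomRelay());
```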
You can call SetRelay(null)
to disable relaying.
These are all the necessary steps required to configure a custom relay.
For a complete relay code example, please review the Steam relay source code.
Unity supports code stripping as part of the engine. Code stripping means automatically removing unused or unreachable code during the Unity build process to try and significantly decrease your application's final size.
Setting Code Stripping above Minimal level is risky and might break your game by removing code that's needed for it to run.
If you're considering using code stripping above Low level, add the following link.xml file anywhere in your project (we recommend Assets/coherence/link.xml):
Read more on how link.xml works in Unity in their documentation.
In any case, there's no one-size-fits-all solution to be safe when it comes to code stripping. It highly depends on your project, and also your dependencies (third party libraries and assets used).
If you are experiencing issues with coherence while using code stripping, please reach out to us.
If you're using a VCS (which we highly recommend) like Git, Subversion, or Plastic SCM, here are a few tips.
You can ignore (as in, not version) Assets/coherence/baked and its .meta file in most cases, but it's completely safe to include them too. If you decide to ignore them, every other fresh copy of the project (including continuous integration setups) has to bake for those files to be generated.
If your build pipeline relies on asset checksums for verification, you should version the aforementioned files.
Make sure Assets/coherence/Gathered.schema is treated as a binary and not as a text file (where conflicts can be auto-resolved via content diffs). The Gathered.schema, although a text file, is meant to be thought of as a binary file.
Each VCS has different mechanisms to work around this. Read the documentation of your VCS of choice to learn how to set this up.
When using Git, you can add this line to your .gitattributes file (or create one in the root of your repo, if it doesn't exist):
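One common way to express this, marking the gathered schema as binary so Git never tries to diff or merge it, is:
```
Assets/coherence/Gathered.schema binary
```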
To learn more about how gitattributes works, refer to the gitattributes documentation.
Creating massive multiplayer worlds
Unity has a well-known limitation of offering high precision positioning only within a few kilometers from the center of the world. A common technique to get around this limitation is to move the whole world underneath the player. This is called floating origin. Here's how you can use floating origin with coherence.
Unity uses 32-bit floating point numbers to represent the world position of game objects in memory. While this format can represent very large numbers, its precision decreases as the numbers get larger, and at sufficiently large distances from the origin the precision of a 32-bit float drops to a full meter, which means that the position can only be represented in steps of one meter or more. So if your Game Object moves away from the origin by 1000 kilometers, it can only be positioned with the accuracy of 1000km and one meter, or at 1000km and two meters, but not in between. As a result, usable virtual worlds can be limited to a range of as little as 5km, depending on how precisely GameObjects like bullets need to be tracked.
Having a single floating world origin as used in single-player games is not sufficient for multiplayer games, since each player can be located in different parts of the virtual world. For that reason, in coherence, all positions on the Replication Server are stored in absolute coordinates, while each Client has its own floating origin position, to which all of their game object positions are relative.
To represent the absolute position of Game Objects on the Replication Server, we use the 64-bit floating-point format. This format allows for sub-1mm precision out to distances of 5 billion kilometers. To keep the implementation simple, floating origin position and any Game Object's absolute position is limited to the 32-bit float range, but because of the Floating Origin, it will have precision of a 64-bit float when networked with other Clients.
Here is a simple example of how the floating origin could be used. We will create a script that is attached to the player Prefab and is active only on the Client with authority.
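A minimal sketch of such a script, which shifts the floating origin to the player's position whenever the player strays too far from it. The distance threshold, the component name, the Coherence.Toolkit namespace, and the exact TranslateFloatingOrigin signature used here are assumptions, not confirmed API details:

```csharp
using UnityEngine;
using Coherence.Toolkit;

[RequireComponent(typeof(CoherenceSync))]
public class FloatingOriginMover : MonoBehaviour
{
    // Hypothetical threshold: shift the origin once the player is this far from it.
    public float maxDistanceFromOrigin = 1000f;

    private CoherenceSync sync;

    void Awake()
    {
        sync = GetComponent<CoherenceSync>();
    }

    void Update()
    {
        // Only the Client with authority over this player should move the floating origin.
        if (!sync.HasStateAuthority)
        {
            return;
        }

        if (transform.position.magnitude > maxDistanceFromOrigin)
        {
            // Shift the floating origin by the player's current offset from it,
            // effectively re-centering the world around the player.
            sync.CoherenceBridge.TranslateFloatingOrigin(transform.position);
        }
    }
}
```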
Calling CoherenceBridge.TranslateFloatingOrigin will shift all CoherenceSync objects by the translated vector, but you have to shift other, non-networked objects yourself. We will create another script which takes care of this.
When your floating origin changes, the CoherenceBridge.OnFloatingOriginShifted event is invoked. It contains arguments such as the last floating origin, the new one, and the delta between them. We use the delta to shift back all non-networked GameObjects ourselves. Since the floating origin is a Vector3 of doubles, we need to use the ToUnityVector3 method to convert it to a Vector3 of floats.
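A sketch of that second script, assuming the shift event passes an argument object exposing the delta as a double-based vector with a ToUnityVector3 helper (the argument type name and its members, as used below, are assumptions):

```csharp
using UnityEngine;
using Coherence.Toolkit;

public class NonNetworkedOriginShifter : MonoBehaviour
{
    // Reference to the bridge in the Scene; assumed to be assigned in the inspector.
    public CoherenceBridge bridge;

    // Non-networked objects that should be moved when the origin shifts.
    public Transform[] nonNetworkedObjects;

    void OnEnable()
    {
        bridge.OnFloatingOriginShifted += HandleFloatingOriginShifted;
    }

    void OnDisable()
    {
        bridge.OnFloatingOriginShifted -= HandleFloatingOriginShifted;
    }

    private void HandleFloatingOriginShifted(FloatingOriginShiftArgs args)
    {
        // The delta is a Vector3 of doubles; convert it to a float-based Vector3 first.
        Vector3 delta = args.Delta.ToUnityVector3();

        // Shift non-networked objects back so they keep their absolute position
        // (assumed here to mean subtracting the delta, mirroring what networked objects do).
        foreach (Transform t in nonNetworkedObjects)
        {
            t.position -= delta;
        }
    }
}
```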
To control what happens to your entities when you change your floating origin, you can use CoherenceSync's floatingOriginMode and floatingOriginParentedMode fields. Both are accessible from the inspector under Advanced Settings.
Available options for both fields are:
MoveWithFloatingOrigin - when you change your floating origin, the Entity is moved with it, so its relative position stays the same and its absolute position is shifted.
DontMoveWithFloatingOrigin - when you change your floating origin, the Entity is left behind, so its absolute position stays the same and its relative position is shifted.
Floating Origin Mode dictates what happens to the Entity when it is a root object in the Scene hierarchy, and Floating Origin Parented Mode dictates what happens to it when it's parented under another non-synced GameObject.
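If you prefer configuring these fields from code rather than the inspector, a sketch could look like this. The field names come from the text above, but the enum type name (and whether both fields share it) is an assumption:

```csharp
using UnityEngine;
using Coherence.Toolkit;

public class FloatingOriginModeSetup : MonoBehaviour
{
    void Awake()
    {
        var sync = GetComponent<CoherenceSync>();

        // Root objects follow the floating origin, keeping their relative position stable.
        sync.floatingOriginMode = FloatingOriginMode.MoveWithFloatingOrigin;

        // Objects parented under non-synced GameObjects keep their absolute position instead.
        sync.floatingOriginParentedMode = FloatingOriginMode.DontMoveWithFloatingOrigin;
    }
}
```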
Command-line interface tools explained
Found in <package-root>/.Runtime/<platform>/.
protocol-code-generator --help
protocol-code-generator --help generate
replication-server --help serve
To start the Server, you need to give it the location of the schema.
You can copy the CLI commands to start the Replication Server from the coherence Hub > Servers tab.
You can also define other parameters, like min-query-distance (the minimum distance the LiveQuery needs to move for the Replicator to recognize a change), the send and receive frequency, and the ip and port number.
A minimal set of parameters is presented in the example below:
replication-server serve --port 32001 --signalling-port 32002 --send-frequency 20 --recv-frequency 60 --web-support --env dev --schema "/Users/coherence/unity/Coherence.Toolkit/Toolkit.schema,/Users/coherence/MyProject/Library/coherence/Gathered.schema"
replication-server --help listen
How coherence behaves when a Client is not connected
Even though coherence is a networking solution, there might be instances when a Scene configured for online play is used offline, without connecting to a Replication Server. This can be useful for creating gameplay that works in an offline game mode (like a tutorial), or simply a game that can connect and disconnect seamlessly during uninterrupted gameplay.
The ability to create Prefabs and code that can be used both online and offline is a great tool that can streamline the development process, avoid duplicated code, and ultimately produce fewer bugs.
To ensure that the code you write doesn't break when offline, follow these recommendations:
Check if a CoherenceBridge is connected using CoherenceBridge.isConnected (see the sketch after this list).
CoherenceSync components also have a reference to the associated bridge, so if one is present in the Scene you can use sync.CoherenceBridge.isConnected for convenience.
When offline, CoherenceSync.EntityState is null. Use this to your advantage to identify the state of the connection.
Authority is assumed on offline entities (CoherenceSync.HasStateAuthority always returns true).
You can use Commands. They will be routed to local method calls.
When offline, some events on coherence components (e.g., CoherenceBridge.OnLiveQuerySynced) won't be fired, so review any game logic that depends on them.
and are not resolved when offline, so don't make assumptions about their network state.
When offline, if you have a Prefab in the Scene that is set to Simulate In: Server Side, it won't be automatically removed (since there's no connection, coherence can't infer whether it's a Simulator or a Client connection). You can use SimulatorUtility.IsSimulator in OnEnable() to deactivate it (an example is shown further below).
In case of a server-authoritative scenario using CoherenceInput, you might want to isolate state-changing code that would run in the Simulator into its own script. This way, it can be reused to directly affect the state of the entity when the game is offline.
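Putting the first few recommendations together, a minimal connectivity check could look like the sketch below (the bridge reference is assumed to be assigned in the inspector; member names come from the list above):

```csharp
using UnityEngine;
using Coherence.Toolkit;

public class ConnectionAwareBehaviour : MonoBehaviour
{
    public CoherenceBridge bridge;
    private CoherenceSync sync;

    void Awake()
    {
        sync = GetComponent<CoherenceSync>();
    }

    void Update()
    {
        // Works both online and offline: offline, EntityState is null and
        // HasStateAuthority always returns true.
        bool online = bridge != null && bridge.isConnected;
        bool hasEntityState = sync.EntityState != null;

        if (sync.HasStateAuthority)
        {
            // Safe to drive this entity's state locally, whether or not we're connected.
        }

        Debug.Log($"online: {online}, entity state present: {hasEntityState}");
    }
}
```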
See below for a more in-depth description of how the different components behave.
This section describes how the different components offered by coherence behave when the game is offline.
Because the CoherenceBridge never tries to connect, it won't fire any connection-related events.
Connected Prefabs that feature a CoherenceSync can be instantiated and destroyed as usual while offline.
Because it's always the local Client that creates instances of connected Prefabs, it automatically receives full authority over all of them. Consequently, the OnStateAuthority callback will be invoked when the object is instantiated, in OnEnable(), and checks on sync.HasStateAuthority will return true as early as Awake().
The Client has authority to manipulate its state, and can destroy the Prefab instance at will.
Persistence is tied to the World or Room the Client is connected to. If you need objects to persist between different offline sessions, you need to store their state some other way.
When offline, no check happens for uniqueness, meaning that unique entities can be instantiated multiple times. It is therefore up to the local gameplay code to make sure this doesn't happen in the first place.
Any Prefab for which the Simulate In property has been set to Server-Side will not be automatically deactivated on instantiation. To ensure such a Prefab doesn't appear in an offline session, make sure to deactivate it in its own code using SimulatorUtility.IsSimulator:
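A minimal sketch of that check (the namespace and the surrounding component are assumptions; only SimulatorUtility.IsSimulator comes from the text above):

```csharp
using UnityEngine;
using Coherence.Simulator;

public class DeactivateOutsideSimulator : MonoBehaviour
{
    void OnEnable()
    {
        // If this build is not running as a Simulator (e.g., an offline Client),
        // hide the server-side-only object.
        if (!SimulatorUtility.IsSimulator)
        {
            gameObject.SetActive(false);
        }
    }
}
```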
LiveQuery components will have no effect offline. However, keep in mind that they will try to find the CoherenceBridge, so if none is present they will throw an error. For this reason it's a good idea to keep them together, and only have a LiveQuery in the Scene if a CoherenceBridge is present.
Any CoherenceNode component will have no effect. There are no drawbacks to leaving it inside the Prefabs.
Given that it's inherently meant to be used in a Client-Server scenario, CoherenceInput has no meaning offline.
However, you can safely leave the component on your Prefabs, and architect your scripts so that rather than sending inputs to CoherenceInput unconditionally, they first check whether the Client is connected and, if it isn't, manipulate the entity's state directly instead. It can be a good idea to isolate the code used to manipulate the entity's state during prediction, and reuse it for offline behavior.
If the game is online, input is sent to the Simulator via CoherenceInput, while prediction is done and applied locally at the same time. On the Simulator, the same code used by the Client for prediction computes the final state. Once an update is received, reconciliation code kicks in and corrects any mismatches. In offline mode, the same prediction code drives the entity's state directly instead, and no input is forwarded to CoherenceInput.
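A rough sketch of that pattern, with the CoherenceInput call and the movement code factored out. SendMoveInput and ApplyMove are hypothetical helpers standing in for your own input-sending and state-changing code:

```csharp
using UnityEngine;
using Coherence.Toolkit;

public class PlayerMovement : MonoBehaviour
{
    private CoherenceSync sync;

    void Awake()
    {
        sync = GetComponent<CoherenceSync>();
    }

    void Update()
    {
        Vector2 move = new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical"));

        bool online = sync.CoherenceBridge != null && sync.CoherenceBridge.isConnected;
        if (online)
        {
            // Online: forward the input to CoherenceInput and apply local prediction.
            SendMoveInput(move);   // hypothetical: writes the input to the CoherenceInput component
            ApplyMove(move);       // local prediction, later reconciled against the Simulator
        }
        else
        {
            // Offline: the same state-changing code drives the entity directly.
            ApplyMove(move);
        }
    }

    private void SendMoveInput(Vector2 move) { /* write to CoherenceInput here */ }

    private void ApplyMove(Vector2 move)
    {
        transform.Translate(new Vector3(move.x, 0f, move.y) * Time.deltaTime);
    }
}
```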
If the Entity is parented under another CoherenceSync Object (even using ), its local position will never be changed, since it will always be relative to the parent.
If you are using Cinemachine for your cameras, you'll need to call OnTargetObjectWarped on your virtual cameras to notify them that the camera target has moved when you shift the floating origin.
You won't be able to query the list of Rooms or Worlds, and Room- or World-related data won't be available.
However, you will be able to access through the CoherenceBridge whatever is part of the setup at edit time. For instance, you will be able to inspect the list of CoherenceSyncConfig objects.
If a Component has been configured to be enabled or disabled as a result of authority changes, it will be enabled.
No change in behavior: Commands will invoke the corresponding method with a direct invocation, with no network delay incurred.
Prefabs marked as persistent will not persist after an offline game session. They will be destroyed when the Scene they belong to is unloaded, and will not be automatically recreated if the Scene is re-loaded.
Like persistence, uniqueness is also verified within the context of the Room or World the Client is connected to, and is generally used to ensure that a different Client can't bring an already existing entity into the simulation.
Additional command-line flags are available for setting the log level, the output format of the log, a log output file, and enabling or disabling panic on error.
The following CLI flags can be specified on Unity Builds. They are read by the SDK via the SimulatorUtility API.
Flag | Description | SimulatorUtility |
---|---|---|
--coherence-region <region> | eu, us, usw, ap or local. | Region |
--coherence-ip <ip> | Specific IP to point to. | Ip |
--coherence-port <port> | Specific port to point to. | Port |
--coherence-room-id <room-id> | Specific Room to point to. | RoomId |
--coherence-room-tags <base64-tags> | A base64-encoded string containing the Room tags (space-separated). Example: tag1 tag2 tag3 | RoomTags |
--coherence-room-kv-json <base64-json> | A base64-encoded string containing a JSON object literal with key-value pairs. Example: {"key1": "value1", "key2": "value2"} | RoomKV |
--coherence-world-id <world-id> | Specific World ID to point to. | WorldId |
--coherence-simulation-server | Connect and behave as a Simulator. | HasSimulatorCommandLineParameter |
--coherence-simulator | Same as --coherence-simulation-server. | HasSimulatorCommandLineParameter |
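A small sketch of reading these values from a build at runtime. The property names come from the table above; the namespace and exact property types are assumptions:

```csharp
using UnityEngine;
using Coherence.Simulator;

public class SimulatorStartup : MonoBehaviour
{
    void Start()
    {
        // Values populated from the --coherence-* command-line flags, if present.
        if (SimulatorUtility.HasSimulatorCommandLineParameter)
        {
            Debug.Log($"Starting as Simulator. Region: {SimulatorUtility.Region}, " +
                      $"endpoint: {SimulatorUtility.Ip}:{SimulatorUtility.Port}, " +
                      $"room: {SimulatorUtility.RoomId}, world: {SimulatorUtility.WorldId}");
        }
    }
}
```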