coherence is a network engine, platform and a series of tools to help anyone create a multiplayer game.
Fast network engine with cloud scaling, state replication, persistence and auto load balancing.
Easy to develop, iterate and operate connected games and experiences.
SDK allows developers to make multiplayer games using Windows, Linux or Mac, targeting desktop, console, mobile, VR or the web.
Game engine plugins and visual tools will help even non-coders create and quickly iterate on a connected game idea.
Scalable from small games to large virtual worlds running on hundreds of servers.
Game-service features like user account and key-value stores.
At the core of coherence lies a fast network engine based on bitstreams and a data-oriented architecture, with numerous optimization techniques like delta compression, quantization and network LOD-ing ("Level of Detail") to minimize bandwidth and maximize performance.
The network engine supports multiple authority models:
Client authority
Server authority
Server authority with client prediction
Authority handover (request, steal)
Distributed authority (multiple simulators with seamless transition)
Deterministic client prediction with rollback ("GGPO") - experimental
coherence supports persistence out of the box.
This means that the state of the world is preserved no matter if clients or simulators are connected to it or not. This way, you can create shared worlds where visitors have a lasting impact.
The coherence SDK only supports Unity at the moment. Unreal Engine support is planned. For more specific details and announcements, please check the Unreal Engine Support page. For custom engine integration, please contact our developer relations team.
Custom UDP transport layer using bit streams with reliability
WebRTC support for WebGL builds
Smooth state replication
Server-side, Client-side, distributed authority
Connected entity support
Fast authority transfer
Remote messaging (RPC)
Persistence
Verified support for Windows, macOS, Linux, Android, iOS and WebGL
Support for Rooms and Worlds
Unity SDK with an intuitive no-code layer
Per-field adjustable interpolation and extrapolation
Input queues
Easy deployment into the cloud
Multi-room Simulators
Multiple code generation strategies (Assets/Baking, automated with C# Source Generators)
Multiple object spawning strategies (Resources, Prefab Mapper, Addressables)
Per-field compression and quantization
Per-field sampling frequency adjustable at runtime
Unlimited per-field levels of detail
Areas of interest
Accurate SimulationFrame tracking
Developer portal with server and service configurator
Automatic server scaling
Multiple regions (US East, EU Central)
Player accounts
Key-value store
Matchmaking
Float64 support
Permissions and roles system for clients and simulators
World origin shifting
Input queue UX improvements
More logging and diagnostics tools
Built-in network condition simulation
Additional server regions
Support for multiple Simulators and Replicators in a single project
Dashboard with usage statistics
Paid plans with more CCU and credits
Network profiler
Support for lean pure C# clients and simulators without Unity
Peer-to-peer (without replication server) with NAT punch-through
MTU detection
Packet replay
Ability to deploy multiple Simulation Servers per environment
Player analytics
Developer portal graphs and analytics
Simulator authentication
Bare-metal and cloud support
Misprediction detection support
SDK library that can be used in C++ and other languages
More starter/sample projects and helper scripts
Ability to embed WebGL games in web portals
Global KV-Store
Complex data types/entities in the KV store
More GGPO support for specific game genres
Misprediction detection support
Unreal Engine SDK
JavaScript SDK
TCP fallback support
Advanced matchmaking
Multiple Replication Servers per game world
Customer-specific serialization
User-space load-balancing (SDK framework)
Game world map with admin interface
Advanced anti-cheat functionality
Advanced transaction logs (audit trail)
Schema versioning (hot updates)
Console-specific updates
Player analytics
Tips on how to handle common problems
Ubuntu 22.04 suffers from a GLIBC version mismatch, resulting in coherence baking failures. The current workaround is to downgrade to the previous Ubuntu version.
Revert all your Prefab Instance Overrides
Check that all your clients are using the same Schema ID.
On macOS, some versions of the Unity Hub don't install platform modules properly. This issue is fixed in Unity Hub version 3.3.0. If you want to use an earlier version, install one module at a time.
The SDK 0.9 release removed the dependency on the experimental Platforms package. As a result, your existing BuildConfiguration assets will have their defining script missing. To recreate the Simulator build configuration with the new pipeline, use the Simulators Module in the coherence Hub.
When networked Entities are added by loading a scene additively, mind that the CoherenceMonoBridge must be set to singleton mode. With a non-singleton MonoBridge, the Entity will search for a MonoBridge instance within its own scene, and if none is found one will be created in that scene. This applies to both CoherenceSync and LiveQuery objects.
For complete control over CoherenceMonoBridge resolution, you can use either the CoherenceSync.MonoBridgeResolve instance event or the MonoBridgeStore.MonoBridgeResolve global event. A CoherenceMonoBridge instance provided by the MonoBridgeResolve event takes precedence over the scene and singleton ones.
Sometimes the package cache gets stale, and after updating the coherence package you might find that Unity still uses the old package version. Try Reimport All, or close the project, delete the Library folder, and reopen the project. This cleans and updates the package cache and uses the version of coherence stated in manifest.json.
Currently, coherence doesn't support networking Prefabs with the same name. Doing so can yield duplicate definition errors. The best course of action is to rename your Prefabs so they have different names.
We will support networking Prefabs with the same name in the future.
When working with Prefab Variants, Unity leaks managed references (fields marked with [SerializeReference]). This can make your Prefab grow big and use more memory than necessary. Until Unity fixes this issue, we provide you with the ability to prune the leaked references. Pruning is only available for Unity 2021.2+.
Unity fails to build for WebGL because Python 2 is missing on the latest macOS and on Ubuntu 22.04. This is fixed in Unity 2020.3.40f1 and onwards.
Dots (.) are not supported as part of WebGL build names.
There's a known bug preventing the removal of session-based objects from the scene after the Client disconnects. We're working on it.
LOD levels are not properly updated when queries move or resize.
A lean and performant server that keeps the state of the world and replicates it efficiently between various Simulators and Game Clients. The Replicator usually runs in the coherence Cloud, but developers can start it locally from the command line or the Unity Editor.
A build of the game. To connect to coherence, it will use the coherence SDK.
A version of the Game Client without the graphics ("headless client") optimized and configured to perform server-side simulation of the game world. When we say something is simulated on the server, we mean it is simulated on one or several Simulators.
A text file defining the structure of the world from the network's point of view. The schema is shared between the Replicators, Simulators and Game Clients. The world is generally divided into components and archetypes.
Code generation
The process of generating code specific to the game engine that takes care of network synchronization and other network-specific code. This is done using a CLI tool called Protocol Code Generator that takes the schema file and generates code for various engines (e.g. C# for Unity).
The process of making sure the state of the world is eventually the same on the Replicator, Simulators and Game Clients, depending on their areas of interest.
coherence works by sharing game world data via a Replication Server in the cloud and passing it to the connected Clients.
The Clients and Simulators can define areas of interest (LiveQueries), levels of detail, varying simulation and replication frequencies and other optimization techniques to control how much bandwidth and CPU power is used in different situations.
The game world can be run using multiple Simulators that split up simulation functions or areas of the world accordingly.
The platform handles scaling, synchronization, persistence and load balancing automatically.
If you have issues building Clients or Simulators, refer to the troubleshooting tips above.
When upgrading older projects to 0.9, you may get the following error after trying to connect to a Room/World:
System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
If this happens, you should manually select your CoherenceSync Prefabs in the Inspector to trigger an OnGUI call.
If you have any problems after trying this, please let us know on our Discord so we can help you out.
Fast authority transfer and remote commands allow different authority models, including Client authority, Server authority, distributed authority, and combinations like Server authority with Client prediction.
Peer-to-peer support (without a Replicator) is planned for a future release. Please see the roadmap for updates.
Read about new features, important changes and fixes for version 0.10
Published 02.06.2023
This is a maintenance release.
Child entities of duplicate unique entities are no longer destroyed.
TCP fallback removed until it receives a rework in the next major version.
Published 27.04.2023
This is a maintenance release.
Permissions issue while executing baking or a Replication Server on macOS.
Published 26.04.2023
This is a feature release!
Added Floating Origin support. It's now possible to move the whole world underneath an entity and position game objects precisely even in very large virtual worlds. For more information, see the Floating Origin article.
Published 24.04.2023
This is a maintenance release.
Compilation on Unity 2022.2+ works now.
Reverted a warning on the CoherenceSync inspector where the object is not in the Prefab Mapper.
Published 11.04.2023
This is a maintenance release.
Added a warning about Prefabs not being added to the Prefab Mapper when baking from the CoherenceSync inspector.
Log settings are now stored in Library/coherence/logSettings.json.
Improved coherence Hub performance.
BakeUtil.Bake method return value.
LogSettings saved on fresh imports.
Multi-Room Simulators no longer attempt to serve more than one HTTP server.
Published 28.03.2023
The main benefits of this release involve the schema. Gathered.schema is now stored in the project and, as such, has become fully versionable. This and some other improvements below have dramatically improved SchemaID stability.
BakeUtil.OnBakeStarted and BakeUtil.OnBakeEnded events.
Multi-Room Simulators: reconnection timeout.
CoherenceNode: fixed order of execution. Now, updates are applied before parenting is resolved.
Gathered.schema moved to Assets/coherence/Gathered.schema (from Library/coherence/Gathered.schema).
Can no longer disable Toolkit/Reflection/Generic Schema on Settings. It's required now.
RuntimeSettings' SchemaID property now warns about outdated schemas.
Optimize window: added statement that it is an experimental feature for now.
Sample UI hanging when there's no connection.
Reduced allocations made by the SDK.
Auto-update RuntimeSettings after bake.
Entering Play Mode is now faster on big projects.
CoherenceInput: now works with Prefabs containing spaces in their name.
Faster synchronization of Transform's position.
Connection: packets filtered by roomID to prevent cross-room data contamination.
Users no longer receive a prompt to bake before entering Play Mode.
Bake Window button removed from the Optimize window UI.
macOS builds uploaded to coherence Cloud now keep the Application Bundle (.app) structure intact.
When a simulator fails to build, the scripting define symbol COHERENCE_SIMULATOR is properly removed.
Faster assembly reloads, since coherence no longer tries to load your project Prefabs (was done to inform about possible issues or misconfigurations).
WebGL won't crash when failing to create a WebSocket.
TCP fallback when UDP connections fail or are blocked. Useful in heavily firewalled/restricted environments.
CoherenceSync: ability to trigger authority transfers from within the inspector while in Play Mode.
CoherenceSync: when adding CoherenceSync to a Prefab Instance, it's possible to apply CoherenceSync to the original Prefab instead of creating a new one.
Sample UI: layout.
Sample UI: watermark shows current Room/World ID, region and schema ID.
Replication issues when commands or inputs are being sent right after instantiation.
CoherenceSync: moving to Assets/Resources won't fail if the folder doesn't exist yet, and it's possible to do it from within a Prefab Stage.
CoherenceSync: adding a CoherenceSync inside a Prefab Stage resolves 'Load via' properly now.
Sample UI: highly reduced GC allocations in watermark.
Sample UI: refreshing was performed multiple times. Now, it also takes less time to get the state updated.
New PlayResolver.RemoveRoom API.
Sample UI: detect available local Replication Server.
CoherencePlayerName component to ease networking the player name set on the Sample UI.
Bake automatically defaults to false.
Sample UI: improved layout and better state management.
CoherenceSync: tooltips have been rewritten and improved.
PlayResolver.FetchWorlds takes optional region and simulator slug as arguments.
Connection Dialog defaults to rooms.
Baking automatically on building a Unity Player no longer updates Prefabs.
Texture-loading related issues (Editor).
Packets received from incorrect endpoints are ignored.
Build uploading on macOS.
Hub: improvements to the Overview section.
Hub: Multi-Room Simulators Wizard no longer reopens.
coherence now waits for the AssetDatabase to be ready before performing changes to Prefabs.
CoherenceSync: No Duplicates resolves uniqueness properly now.
Uploading schemas from CI using COHERENCE_PORTAL_TOKEN.
CoherenceSync: with rigidbody, parented under an object without CoherenceSync, now correctly syncs its position.
The first thing you'll notice is that our Documentation has gotten a thorough overhaul to its structure. We hope you find the new layout of topics more usable and logical.
Play.IsLoggedIn to wait before interacting further with the Play API.
Authority-side sample interpolation that removes jitter for bindings that are sampled at high frequencies.
CoherenceSync: now fully configurable in Prefab variants. Prefab overrides can be visualized both in the CoherenceSync inspector and the Configure window.
PlayResolver: Option to create or join existing room with matching tags.
DescriptorProvider custom data object added for extended binding support.
Play.IsLoggedIn now waits before interacting further with the Play API.
Authority-side sample interpolation that removes jitter for bindings that are sampled at high frequencies.
CoherenceSync: a new 'Preserve Children' option to prevent connected entities (CoherenceSyncs in hierarchy) from being destroyed if the parent entity is destroyed.
CoherenceSync: [Sync] and [Command] attributes now allow you to specify the old name of the member to migrate it in place (example: [Sync("myOldVar")] public float myVar;).
CoherenceMonoBridge: onLiveQuerySynced exposed in the inspector (OnLiveQuerySynced deprecated).
Build sharing: improved build path validation.
Baking: now informs users about the current limitation of 32 schema components.
Source Generator: improved reports so that it's clear when there's a failure.
PlayResolver: Option to create or join existing room with matching tags.
DescriptorProvider custom data object for extended binding support.
CoherenceSync: polymorphic bindings using serialized references. Upgrading from older coherence versions requires migration (done automatically, can trigger manually via menu item 'coherence > Reimport coherence Assets').
CoherenceSync: configurable rigidbody component action. You can decide when isKinematic should be (un)set.
CoherenceSync: authority transfer UI label changed from 'No' to 'Disabled'.
CoherenceHub: last tab opened is now persistent when closing and opening the window again.
Networked Prefabs are now instantiated with their original rotation if the rotation is not synced.
Truncated floating point precision in the 'Optimize' window.
Losing latest changes when transferring authority.
Overshooting and overflowing of integers while interpolating.
WebGL Client automatically disconnecting after Replication Server shutdown.
Recycling of EntityIDs caused by a double delete of an ID when an Entity is destroyed because of a parent moving out of the query.
Negative latency calculations during CPU spikes.
ConnectionSettings.PingOnConnect causing a timeout when connecting.
Editor getting stuck in an infinite loop when trying to import packages that reference missing types.
CoherenceNode script is now executed before CoherenceSync to mitigate parenting validation warnings.
Entities that exceed the Room's max Entity count are destroyed and a warning is logged.
Cloning GameObject instances with baked bindings using Instantiate would result in binding errors due to missing GUIDs (globally unique identifiers).
Warnings spammed when exiting Play Mode in Unity Editor.
Baking: process running longer than expected.
Baking: gather bindings from inactive GameObjects in hierarchy.
Baking: fixed NullReferenceException by skipping missing scripts.
Hub: don't show synchronous fetch organization progress bar.
Hub: improved scene setup instructions.
Hub: improved Multi-Room Simulators setup.
More coherence components implement a custom editor now, showing the logo and a brief description.
Source Generator: allow on 2020.3.
Source Generator: first time import failing due to directories not yet existing.
CoherenceSync: binding the same method as a command multiple times in the same Prefab is now supported.
CoherenceSync: fixed issue with parenting and interpolation which would result in a few frames of the connected Entity being in the wrong place.
CoherenceSync: use the method SendCommandToChildren to send a command to every bound component in the hierarchy.
CoherenceSync: new overload for the SendCommand method that accepts an Action, you can use it to send a command to a specific Component instance, if a method is bound to more than one component in the hierarchy.
CoherenceSync: disabled prediction controls on reflection (non-baked) mode.
CoherenceSync: set default interpolation.
CoherenceSync: fixed error on cancelling interpolation asset creation.
CoherenceSync: opening the inspector should not load additional assets (noticeable speedups on bigger projects).
CoherenceSync: binding to generic members is no longer allowed.
CoherenceSync: collapsed 'Custom Events' section by default.
CoherenceSync: 'Steal' authority transfer type on persistent Entities is no longer forced.
CoherenceNode: executed before CoherenceSync to mitigate parenting validation warnings.
Sample UI: don't auto-generate EventSystem, but inform when it's missing.
Sample UI: fixed default room name.
Sample UI: fixed Refresh button throttling (too many requests).
PlayClient: fixed missing Client version header.
Analytics: Exception thrown due to threading error.
PlayResolver: Cached rooms not filtered by tags.
CoherenceMonoBridge: OnShutdown event.
coherence provides two types of online replication services: Rooms and Worlds. Read about the different use cases for each.
Rooms are best for session-based gameplay where the match between players takes place in a short-lived environment.
A good example is a first person shooter multiplayer match. The match takes place between two teams in a single game session, and players enter through a lobby and matchmaking. When the match is concluded, the multiplayer environment the match took place in is closed and players return to a lobby.
This is one example of how Rooms can be used, but it is by no means the only use case. The important distinction between Rooms and Worlds (see below) is that Rooms are relatively short-lived and are meant to be created and closed by the Game Client through the coherence SDK.
See Rooms API.
Current limits for Rooms are as follows:
Players: the default setting is 10 players, but you can specify your own value anywhere between 2 and 100 players. To support more than 100 players per Room, write to devrel@coherence.io.
Entities: 1000 by default, currently up to 65535.
Worlds, as opposed to Rooms, are long-lived and permanent multiplayer environments provided by coherence. Using the Developer Portal, you can easily define and manage your project's World configurations.
See Manage Worlds.
A good example of a World is a permanent environment for a Massively Multiplayer Online game (MMO). Regardless of the number of players connected, the environment is always available, and players can connect and disconnect at will.
Entities can be permanently saved in the World so that even if there are no active connections, they still persist when players do connect.
See Worlds API.
Your project does not have to choose one or the other. A project in coherence can contain both Worlds and Rooms.
A good example of this scenario is, again, our MMO. Although players connect to a permanent and persistent World, they may enter a dungeon instance with other players. These dungeon instances can be Rooms.
The primary difference in the configuration and usage of Rooms and Worlds is that Worlds are managed in the Developer Portal, whereas Rooms are created and managed through the SDK.
Games are better when we play together.
coherence is a network engine, platform and a series of tools to help anyone create a multiplayer game. Our mission is to give any game developer, regardless of how technical they are, the power to make a connected game.
When you use our search bar, you can now switch from Search to Lens. Simply tell Lens what you want, or ask it a question. It’ll use AI to scan coherence documentation and give you a simple, semantic answer — with clickable references if you want to dive deeper.
If you are an existing user and looking to update, check out the latest Release Notes. And maybe the SDK update guide as well!
First Steps is a collection of articles and scenes showing you how to use various features of the coherence Unity SDK. It shows you how to synchronize transforms, physics, persistence, animations, AI navigation and send network commands.
You can follow our step-by-step guide to learn how to install coherence in Unity, set up your scene, prefabs, interactions, as well as deploy your project to be shared with your friends.
Join our community Discord for community chatter and support.
Join our official Developer Discord channel.
Contact us at devrel@coherence.io
Using the same scene as in the previous lesson, let's see how to easily sync animation over the network.
WASD or Left stick: Move character
Hold Shift or Shoulder button left: Run
Spacebar or Joypad button down: Jump
Animation | Bindings
We haven't mentioned it before, but the player Prefab does a lot more than just syncing its position and rotation.
If you perform the actions mentioned in the controls, you will notice that animation is also replicated across Clients. This is done via synced Animator parameters (and Network Commands, but we cover these in the next lesson).
Very much like in the example about position and rotation, just sending these across the network allows us to synchronize the state of animation, making it look like network-instantiated Prefabs on other Clients (the other players) are performing actions.
Open the player Prefab located in the Characters/Player folder. Browse the Hierarchy until you find the sub-object called PlayerModel. You will notice it has an Animator component. Select this object and open the Animator window.
As you can imagine, animation is controlled by a few parameters of different types (int, bool, float).
Make sure to keep the GameObject with the Animator component selected, and open the coherence Configure window:
You will see that a group of animation parameters is being synced. It's that simple: just checking them will start sending the values across once the game starts.
Did you notice that you are able to configure bindings even if this particular GameObject doesn't have a CoherenceSync component on it? This is done via the one attached to the root of the player Prefab. These parameters are what we call deep bindings. Learn more in the Complex hierarchies lesson.
There is only one piece missing: animation triggers. They are not a variable holding a value that changes over time, but rather an action that happens instantaneously, so we will see how to sync them in the next lesson using Network Commands.
The First Steps project is a series of small sample scenes, each one demonstrating one or more features of coherence.
If you're a first-time user, we suggest going through the scenes in the established order. They will guide you through some key coherence and networking concepts.
Remember that playing the scenes on your own only shows part of the picture. To fully experience the networked aspects, you have to play them in the Editor plus one or more built instances, and even better - with other people.
The Unity project can be downloaded from its GitHub repo.
To quickly try a pre-built version of the game, head to this link and either play the WebGL build directly in the browser, or download one of the available desktop versions.
Share the link with friends and colleagues, and have them join you!
Once you open the project in the Unity Editor, you can build scenes via File > Build Settings, as per usual.
If you want to try all the scenes in one go, keep them all in the build and place SceneSelector as the first one in the list.
If you're working on an individual scene instead, bring that one to the top and deselect the others. The build will be faster.
To be able to connect, you also need to run a local Replication Server, which can be started via coherence > Local Server > Run Local Worlds Server.
You can try running multiple Clients rather than just two, and see how replication works for each of them. You can also have one Client just be the Unity Editor. This allows you to inspect GameObjects while the game runs.
Since you might be building frequently, we recommend making native builds (macOS or Windows) as they are created much faster than WebGL.
You can also upload a build to the cloud and share a link with friends. To do that, follow these steps to host builds on the coherence Cloud.
Keep in mind that the custom builds you create in the Unity Editor will not be able to play together with the builds mentioned in the first section of this page.
This scene demonstrates the simplest networking scenario possible with coherence. Characters sync their position and rotation, which immediately creates a feeling of presence. Someone else is connected!
WASD or Left stick: Move character
Hold Shift or Shoulder button left: Run
Spacebar or Joypad button down: Jump
CoherenceSync | Bindings | Component behaviours | Authority
Upon connecting, the PlayerHandler script (attached to the PlayerHandler GameObject) creates a new instance of the character Prefab, located in the Prefabs/Characters folder. When disconnecting, the same script destroys the instance it created.
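A minimal sketch of that pattern is shown below (the event wiring and member names are assumptions; the actual PlayerHandler in the project is more complete):

```csharp
using UnityEngine;

public class PlayerHandlerSketch : MonoBehaviour
{
    public GameObject playerPrefab;     // character Prefab with a CoherenceSync component
    private GameObject playerInstance;

    // Hook this up to the MonoBridge's "connected" event
    // (the exact event name depends on your SDK version).
    public void SpawnPlayer()
    {
        playerInstance = Instantiate(playerPrefab, Vector3.zero, Quaternion.identity);
    }

    // Hook this up to the corresponding "disconnected" event.
    public void DespawnPlayer()
    {
        if (playerInstance != null)
        {
            Destroy(playerInstance);
        }
    }
}
```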
coherence takes care of keeping all Game Clients in sync regarding network entities. When another Client connects, a new instance of your game character is instantiated in their scene, and a copy of their character is instantiated into yours. This is called network instantiation.
Now you can move and jump around, and you will see other characters move too.
You can see what is synced over the network by selecting the Prefab asset, and opening coherence's Configuration window (either by clicking on the Configure button on the CoherenceSync component, or by going to coherence > GameObject setup > Configure).
When this window opens on the first tab you will notice that, at the very top, Transform.position and Transform.rotation are checked.
Are you wondering why the position is checked by default? You'll find answers in the lesson regarding LiveQueries.
This is the data being transferred over the network. Each Client sends position and rotation of the character they have authority over to every other connected Client, every time there is a change to it that is significant enough. We call these bindings.
Each connected Client receives these values and applies them to the Transform component of their own instance of the remote player character.
To ensure that Clients don't modify properties of entities they don't have authority over, some components are either disabled or changed on non-authoritative instances. If you open the third tab of the Configuration window, you will see that 3 components are modified:
In particular:
The PlayerInput and KinematicMove scripts are disabled.
The Rigidbody component is made kinematic.
One important concept to get familiar with is the fact that every networked entity exists as a GameObject on every Client currently connected. However, only one of them has what we call authority over the network entity, and can control its synced variables.
For instance, if we play this scene with two Clients, each one will have 2 instances in their respective worlds:
This is something to keep in mind as you decide which components should keep running or be disabled on remote instances, so that the same code doesn't run unnecessarily on multiple Clients. That could create conflicts or put the two GameObjects in very different states, generating unwanted results.
In the Unity Editor, the name of a GameObject and the icon next to it informs you about its current authority state (see image).
There are two types of authority in coherence: State and Input. For the sake of simplicity, in this project we often refer just to a generic "authority", and what we mean is State authority. Go here for more info on authority.
If you want to see which entities are currently local and which ones are remote, we included a debug visualisation in this project. Hit the Tab key (or click the Joystick) to switch to a view that shows authority. You can keep playing the game while in this view, and see how things change.
Using the same scene as in the previous lesson, we now take a look at another way to make Clients communicate: Network Commands. Network Commands are like sending direct messages to objects, instead of syncing the value of a variable.
WASD or Left stick: Move character
Hold Shift or Shoulder button left: Run
Spacebar or Joypad button down: Jump
Q or D-pad up: Wave
Building on top of previous examples, let's now focus on two key player actions. Press Space to jump, or Q to wave. For both of these actions to play their animation, we need to send a command over the network to call Animator.SetTrigger() on the other Client.
Like before, select the player Prefab located in the Characters/Player folder, and browse the Hierarchy until you find the sub-object called PlayerModel.
Open the coherence Configure window on the Methods tab:
You can see how the method Animator.SetTrigger(string) has been marked as a Network Command. Once this is done, it is possible to invoke it over the network.
You can find the code doing so in the Hail class (located in /Scripts/PlayerActions/Hail.cs):
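That snippet isn't reproduced here, but based on the breakdown that follows it is roughly this one-liner (a hedged sketch; the sync field, a reference to this Prefab's CoherenceSync, is an assumption):

```csharp
// Ask every other Client to fire the "Hail" trigger on their Animator.
sync.SendCommand<Animator>("SetTrigger", MessageTarget.Other, "Hail");
```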
With this simple line of code, we're asking to:
Send a command to an object of class Animator.
Invoke a method called Animator.SetTrigger.
Do so only for network entities other than the one with authority (MessageTarget.Other).
Pass the string "Hail" as the first parameter (which is the name of the animation trigger parameter).
Because we don't invoke this on the one with authority, you will notice that just before invoking the Network Command, we also call SetTrigger locally in the usual way:
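Roughly like this (hedged; the animator and sync references are assumptions):

```csharp
// Play the animation locally on the authoritative Client...
animator.SetTrigger("Hail");
// ...then ask everyone else to do the same via the Network Command.
sync.SendCommand<Animator>("SetTrigger", MessageTarget.Other, "Hail");
```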
An alternative to avoid this would have been to pass MessageTarget.All to CoherenceSync.SendCommand(), but in this case it made more sense to avoid that additional network traffic and just execute locally.
Game characters and other networked entities are often made of very deep hierarchies of nested GameObjects, needing to sync specific properties along these chains. In addition, a common use case is to parent a networked object to the tip of a chain of GameObjects.
Let's see how to handle these cases.
A/D or Left/right joypad triggers: Rotate crane base
W/S or Left joystick up/down: Raise/lower crane head
Q/E or Left joystick left/right: Move crane head forward/back
P/Space/Enter or Joypad button left: Pickup and release crate
This scene features a robotic arm that can be controlled by one player at a time. In the scene, a small crate can be picked up and released.
The first player to connect takes control of the arm, and other players can request it via a UI button.
To demonstrate complex hierarchies we choose to sync the movement of a robot arm, made of several GameObjects. In addition to syncing several positions and rotations, we also sync animation variables and other script parameters, present on child objects.
To sync the whole arm we use a coherence feature called deep bindings, that is, bindings located not on the root object but deeper in the transform hierarchy.
Select the RobotArm Prefab asset located in /Prefabs, and open it for editing. You will immediately notice a host of little coherence icons to the right of several GameObjects in the Hierarchy window:
These icons tell us that these GameObjects have one or more bindings currently configured (a variable, a method, or a component action).
Now open the coherence Configuration window, and click through those objects to discover what's being synced:
In addition to position and rotation, we also choose to sync the animation parameter ClawsOpen, and enable Animator.SetTrigger() as a Network Command. Finally, we disable the Robot Arm script when losing authority (to disallow input).
This is the base of the robot arm, for which we only sync rotation:
We don't sync the rotation of every object in the chain, since the arm is equipped with an IK solver, which allows us to just sync the target (Two-Bone IK_target) and work out the rotation of the limb (robotarm_bottomarm and robotarm_toparm) on each Client:
By syncing all of these properties, we can have the robotic arm move in sync on all Clients, simply by translating the tip of the IK, and rotating the base of the crane. All of the bindings in this hierarchy are synced through the Coherence Sync component present on the Prefab's root object RobotArm.
As you can see, using deep bindings doesn't require any special setup: they are enabled in exactly the same way as a binding, Network Command or Component action on the root object.
The Path property displays the location in the hierarchy where this object will be inserted. It gets automatically updated by coherence every time the object is parented. Each number represents a child in the root object (and it's 0-based).
Once we have this component set up, parenting the object only requires calling Transform.SetParent() like any usual parenting operation, and setting its Rigidbody component to be kinematic.
When we do this, coherence takes care of propagating the parenting to other Clients, so that the crate becomes a child GameObject on every connected Client.
This code is in the RobotArmHand class, a component attached to the tip of our hierarchy chain: GrabPoint. In OnTriggerEnter we detect when the crate is in range, storing a reference to it in a variable of type Transform named grabbableObject.
This reference is set to sync:
When the player presses the key P (or the Left Gamepad face button), the referenced crate is parented to the GrabPoint GameObject.
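A simplified sketch of that flow, with hypothetical member names (the real RobotArmHand in the project is organized differently):

```csharp
using UnityEngine;

public class RobotArmHandSketch : MonoBehaviour
{
    // Synced over the network via the Configure window (see above).
    public Transform grabbableObject;

    private void OnTriggerEnter(Collider other)
    {
        // Remember the crate when it enters grab range ("Grabbable" tag is hypothetical).
        if (other.CompareTag("Grabbable"))
        {
            grabbableObject = other.transform;
        }
    }

    public void Grab()
    {
        if (grabbableObject == null) return;

        // Parent the crate to the GrabPoint and make it kinematic,
        // so it follows the arm instead of simulating physics.
        grabbableObject.SetParent(transform);
        grabbableObject.GetComponent<Rigidbody>().isKinematic = true;
    }

    public void Release()
    {
        if (grabbableObject == null) return;

        grabbableObject.GetComponent<Rigidbody>().isKinematic = false;
        grabbableObject.SetParent(null);
        grabbableObject = null;
    }
}
```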
Note that coherence natively supports syncing references to CoherenceSync and Transform components, and to GameObjects.
Even though the Robot Arm Hand script is disabled on non-authoritative Clients, referencing the grabbed crate in the grabbableObject variable and syncing it over the network means that, even if a Client disconnects, other Clients already have the correct reference to the crate network entity.
This allows us to gracefully handle a case where, for instance, a Client picks up the crate and disconnects. Because both the crate and the robot arm have Auto-adopt Orphan set to "on", authority is passed onto another Client and they immediately have all the data needed to keep handling the crate.
To move authority between Clients, we can use the UI in the bottom left corner. The button is connected to the Robot Arm Authority script on the ArmAuthoritySwapper GameObject. This script takes care of the authority transfer and what happens as a result, including setting the crate to be kinematic or not.
Is Kinematic is set as follows:

| | On the authority Client | On non-authoritative Clients |
| --- | --- | --- |
| Is being held | true | true |
| Has been released | false | true |
The code is in the RobotArmAuthority class. To detect whether it's currently being held, it's as simple as checking whether its Transform.parent is null:
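In pseudo-form, the decision boils down to something like this (hedged; HasStateAuthority and the field names are assumptions):

```csharp
// The crate counts as "held" while it is parented under the arm's GrabPoint.
bool isHeld = crate.parent != null;

// Held crates are kinematic everywhere; released crates simulate physics
// only on the Client that currently has authority over them.
crateRigidbody.isKinematic = isHeld || !sync.HasStateAuthority;
```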
Remember you can use Tab/click the Gamepad stick to use the authority visualization mode. Try requesting authority from another Client while in this mode.
Every now and then it makes sense to parent network entities to each other, for instance when creating vehicles or an elevator. In this sample scene we'll see what the implications of that are, and how coherence uses this to optimize network traffic.
WASD or Left stick: Move character
Hold Shift or Shoulder button left: Run
Spacebar or Joypad button down: Jump
Moving platforms | Parenting at runtime
This wintery setting contains 2 moving platforms running along splines. Players can jump on them and they will receive the platform's movement and rotation, while still being able to move relative to the platform itself.
This scene doesn't require anything special in terms of network setup to work.
Direct parenting of network entities in coherence happens exactly like usual, with a simple transform.SetParent(), or even just by dragging one GameObject onto another in the Unity Editor's Hierarchy window. The player's Move script recognizes the moving platforms when it lands on them, and simply parents itself to the platform.
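In a custom controller that logic might look roughly like this (hypothetical tag and callbacks; the project's Move script is organized differently):

```csharp
using UnityEngine;

public class PlatformRiderSketch : MonoBehaviour
{
    private void OnCollisionEnter(Collision collision)
    {
        // Landing on a moving platform: become its direct child so coherence
        // syncs our transform relative to the platform, not in world space.
        if (collision.gameObject.CompareTag("MovingPlatform"))
        {
            transform.SetParent(collision.transform);
        }
    }

    private void OnCollisionExit(Collision collision)
    {
        // Leaving the platform: go back to being a root-level entity.
        if (collision.gameObject.CompareTag("MovingPlatform"))
        {
            transform.SetParent(null);
        }
    }
}
```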
As for the platforms, they just move themselves as kinematic rigid bodies, following the path of their spline (see the FloatingPlatform script). Their position and rotation are synced on the network, and the first Client to connect assumes authority over them.
Once directly parented, coherence automatically switches to syncing the child's position and rotation in local space, rather than world space. This means that when child entities don't move within their parent, no data about them is sent across the network.
Imagine, for instance, a situation where 3 players are riding one of the platforms and not moving: only the coordinates of the platform are synced every frame.
You might have noticed we always mentioned "direct" parenting. One limitation of this simple setup is that the parented network entity has to be a first-level child of the parent one. This doesn't exclude that the parent can have other child GameObjects (and other networked entities!), but networked entities have to be a direct child.
A hierarchy could look like this:
Platform
Player
Character graphics
Bones
...
Platform's graphics
...
(Platform and Player are the roots of their respective Prefabs, each with a CoherenceSync component.)
You can even parent multiple network entities to each other. For example, a networked character holding a networked crate, riding a networked elevator, on a networked spaceship. In that case:
Spaceship
Elevator1
Elevator graphics
Elevator2
Player
Crate
Character graphics
Elevator graphics
Spaceship graphics
...
One final note: re-parenting has to happen at runtime. Currently, coherence doesn't support the authoring of Prefabs with more than one CoherenceSync nested inside each other, but this will come in a future version.
In this sample we look at how to network simple physics simulated directly on the Clients, and the implications of this setup.
If we were making a game that relied on precise physics at play between the players (like a sports match, for instance), we would probably go with a setup where the Clients connect to a Simulation Server that runs the physics and prevents cheating.
However, that makes running the game much more expensive for the developer, since a Simulation Server has to be always-on.
WASD or Left stick: Move character
Hold Shift or Shoulder button left: Run
Spacebar or Joypad button down: Jump
E or Joypad button left: Pick up / throw objects
Physics | Uniqueness
In this scene we have mostly static scenery, and a few crates that the players can pick up and throw around. Who runs the physics simulation here? You could say that everyone runs their part. Let's take a closer look into the setup.
Select one of the crates in the scene. You can see that they have normal Box Collider and Rigidbody components. Up until a player is connected, they are being simulated locally. In fact, if you press Play, they will fall down and settle.
The crates also have a CoherenceSync component. The first player to connect gets authority over them, and keeps running their simulation without interruption. That Client now syncs 2 values over the network: Transform.position and Transform.rotation.
On other Clients however (the ones that connect after the first), these crates will become remote. The configured component action makes their Rigidbody kinematic, so that now their movement is controlled by the remote authority (i.e. the first Client).
At this point, the first Client to connect is simulating all the crates. However, interacting with physical objects that are simulated by another Client is quite unpleasant due to the lag. To make it better, other Clients steal authority over crates whenever they either:
Touch/collide with a crate directly
Pick a crate up
In code, this authority switch is a trivial operation, done in a single line. You can find the code in the NetworkGrabbable class:
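That line is not reproduced here, but it amounts to something like the following hedged sketch (HasStateAuthority, RequestAuthority and AuthorityType are assumed member names for this SDK generation):

```csharp
// Only request authority if we don't already have it.
if (!sync.HasStateAuthority)
{
    sync.RequestAuthority(AuthorityType.Full);
}
```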
As you can see, it's good practice to ask first if the requesting script already has authority over an object, to avoid wasted work.
Once the request succeeds, the instance of the crate on the requesting Client becomes authoritative, and the Client starts simulating its physics. On the other Client (the previous owner), the object becomes remote (and its Rigidbody kinematic), and is now just receiving position and rotation over the network.
Careful! Since authority request is a network operation, you can't run follow-up code right away after having requested it. It's good practice to set a listener to the events that are available on the Coherence Sync component, like this:
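A hedged sketch of that pattern (the OnStateAuthority event, the RequestAuthority overload and the namespaces are assumptions based on this SDK generation; check the events exposed on your CoherenceSync):

```csharp
using UnityEngine;
using Coherence;          // assumed namespace for AuthorityType
using Coherence.Toolkit;  // assumed namespace for CoherenceSync

public class GrabAuthoritySketch : MonoBehaviour
{
    public CoherenceSync sync;          // the crate's CoherenceSync
    public Rigidbody crateRigidbody;    // the crate's Rigidbody

    public void TryGrab()
    {
        // Listen for the asynchronous reply before doing any follow-up work.
        sync.OnStateAuthority.AddListener(OnAuthorityGained);
        sync.RequestAuthority(AuthorityType.Full);
    }

    private void OnAuthorityGained()
    {
        sync.OnStateAuthority.RemoveListener(OnAuthorityGained);

        // We now own the crate: start simulating its physics locally.
        crateRigidbody.isKinematic = false;
    }
}
```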
This way, as soon as the reply comes back, we can perform the rest of the code.
So who is running the physics, after all? We can now say that it's everyone at the same time, as the roles change all the time.
As we mentioned in the intro - in a simple game where precise physics are non-crucial this might be enough, and it will definitely keep the costs of running the game down, since no Simulation Server has to run in order to make the game playable.
As before, pressing Tab (or clicking the Joystick) switches to an authority view. It's very interesting to see how crates switch sides when a player interacts with them.
For more on authority, take a look inside the NetworkGrabbable class. It has more code regarding authority events, all commented.
There is one important thing to note in this setup. Since the objects are already in the scene at the start, by default every Client that connects would try to sync those instances onto the network. This is very similar to what we have seen with character instantiation so far: each Client brings their own copy.
However, in this case this would effectively duplicate the crates once online. One extra copy for each connected player! We don't want that.
For this reason, the CoherenceSync is configured so that these crates are Persistent and have No Duplicates. This is generally the correct way of configuring networked Prefab instances that have been manually placed in the scene.
With these parameters in mind, the way the crates behave is as follows:
At the start, none of the entities exist on the Replication Server (yet).
The first Client connects. They sync the crates onto the network. Because the crates are unique, the Replication Server takes note of their Universally Unique IDs (UUIDs).
Another Client connects. They try to bring the same crates onto the network, but because they are set to be No Duplicates and coherence finds there is already a network entity with the same UUID, it destroys the instance in the scene and network-instantiates a new instance, which is now non-authoritative.
If the first Client disconnects, the crates are not destroyed because their Lifetime is set to Persistent. They briefly become orphaned (no one has authority on them) but immediately the authority is passed to the second Client due to the option Auto-adopt Orphan being on.
If everyone disconnects, the crates persist on the Replication Server as network entities that are orphaned. They keep whatever position/rotation they had, since nobody is simulating them anymore.
At this point, nobody is connected. The Replication Server is not doing any work.
When a new Client reconnects and tries to bring the crates online, a duplication is detected: the Prefab instance in the scene and the network instance on the Replication Server have the same UUID. Because Uniqueness is set to No Duplicates, the local instances are destroyed and re-instantiated as network copies. They are orphaned, but thanks to Auto-adopt Orphan, the Client immediately assumes authority over them.
They will also most probably see the crates snap to the last seen position/rotation that was stored on the Replication Server, which is synced just before they assume full control over the crates. At this point, they start simulating their physics locally.
We have seen a lot of examples with objects belonging to a Client, and when that Client disconnects, they disappear with them. We call these session-based entities.
But coherence also has a built-in system to make objects survive the disconnection of a Client, and be ready to be adopted by another Client or a Simulator. We call these objects persistent. Persistent objects stay on the Replication Server even if no Client is connected, creating the feeling that the game world is alive beyond an individual player session.
WASD or Left stick: Move character
Hold Shift or Shoulder button left: Run
P or Right shoulder button: Plant a flower (hold to preview placement)
Players can plant flowers in this little valley. Each flower has 3 phases: starts as a bud, blooms into a full flower, and then withers after some time.
Creating a flower generates a new, persistent network entity. Even if the Client disconnects, the flower will persist on the server. When they reconnect, they will see the flower at its correct stage of growth (this is a little trick, more on that below).
Planting too many flowers starts erasing older flowers. A button in the UI allows clearing all flowers (belonging to any player) at any time.
When using the plant action, any connected player instantiates a copy of the Flower Prefab (located in the /Prefabs/Nature folder).
By selecting the Prefab asset, we can see its CoherenceSync component is set up like this:
In particular, the Lifetime is set to Persistent. This means that when the Client who plants a flower disconnects, the network entity won't be automatically destroyed. With Auto-adopt Orphan set to on, the next player who sees the flower instantly adopts it and keeps simulating its growth.
Opening coherence's Configuration window, you will see that we sync position, rotation, and a variable called timePlanted:
Once a flower has spawned, all of its logic runs locally (no coherence involved). An internal timer calculates what phase it should be in by looking at the timePlanted property and doing the math, playing the appropriate animations and particles as a result.
To achieve this, the flowers of this scene store the Flower.timePlanted value on the Replication Server. A Replication Server with no connected Clients is dormant, and has a very low cost to run. So the flowers are not actually simulating, they are just waiting.
When a new Client comes online and this value is synced to them, they immediately fast-forward the phase of the flower to the correct value, and then they start simulating locally as normal.
This gives the players the perception that things are still running even when they are not connected.
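A hedged sketch of that fast-forward idea (the phase names, durations and the shared-clock helper are made up for illustration; the real Flower script will differ):

```csharp
using UnityEngine;

public class FlowerPhaseSketch : MonoBehaviour
{
    // Synced once at planting time and never changed afterwards.
    public double timePlanted;

    // Illustrative durations, in seconds.
    private const float BloomAfter = 30f;
    private const float WitherAfter = 120f;

    private void Update()
    {
        // Age of the flower, independent of which Client simulated it so far.
        double age = SharedClockNow() - timePlanted;

        if (age > WitherAfter)     SetPhase("Withered");
        else if (age > BloomAfter) SetPhase("Bloomed");
        else                       SetPhase("Bud");
    }

    private static double SharedClockNow()
    {
        // Placeholder: in a real project this must be a clock all Clients agree
        // on (e.g. a synced network time), so everyone computes the same age.
        return Time.realtimeSinceStartupAsDouble;
    }

    private void SetPhase(string phase)
    {
        // Play the matching animation and particles here.
    }
}
```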
This setup is not bulletproof, and could easily be cheated if a player comes online with a modified Client that changes the algorithm calculating the flowers' phase.
But for a game in which this calculation is not critical, especially if it doesn't affect other players' experience of the game, this can be a nice setup to cut some costs.
Every Client can, at any time, remove all flowers from the scene by clicking a button in the UI.
It's important to remember that you shouldn't call Destroy() on a network entity the Client doesn't have authority over. To achieve this, we first request authority on the remote flowers and listen for a reply. Once we obtain it, we destroy them.
Check the code at the end of the Flower script:
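Conceptually it is the same request-then-act pattern as in the physics lesson, roughly (hedged; event and method names are assumptions):

```csharp
// On flowers we don't own: request authority, then destroy once granted.
if (sync.HasStateAuthority)
{
    Destroy(gameObject);
}
else
{
    sync.OnStateAuthority.AddListener(() => Destroy(gameObject));
    sync.RequestAuthority(AuthorityType.Full);
}
```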
In this example we used a Network Command to trigger a transition in an animation state machine, but they can be used to call any instantaneous behavior that has to be replicated over the network. As an example, it is also used in the sample to change a number in a UI element across all Clients.
As mentioned in the Complex hierarchies lesson, parenting a network entity to a GameObject that belongs to a chain requires some setup. To be able to pick up the crate with the crane, we equip it with a CoherenceNode component.
Differently from other scenes, where the object is always non-kinematic on the Client simulating it, in this case we want the crate to stay kinematic when authority changes while it's being held.
For cases like these, coherence takes care of them automatically. More complex hierarchies require different handling, which we cover in the Complex hierarchies lesson.
For more information on persistence, there's a dedicated page about it.
When it gets instantiated, the flower writes the current time into the timePlanted variable. This variable never changes afterwards, and is used to reconstruct the phase the flower is in. Similarly, as the flower is not moving, position and rotation are only synced at the time of planting.
coherence supports the ability to have an instance of the game active in the cloud, running some logic all the time (a Simulator). However, this might be an expensive setup, and it's good advice to think things through differently to keep the cost of running your game lower.
As we discussed earlier, switching authority is an asynchronous network operation, so we need to wait for the reply from the player who currently has authority.
Quick exploration and recommendations for different game genres
This section introduces you to coherence features and terminology by using well-known genres and game types as examples. Each example will come with a list of considerations and how we propose to use coherence to achieve a similar result. As you well know, game creation is a complex process, so the list is far from exhaustive, but aims to highlight pitfalls, suggest solutions and generally just provide you with a starting point when trying to create a multiplayer game with coherence in the context of a game type you are working on.
This section is a beginner-friendly exploration into familiarizing with coherence's terminology and networking mindset, and by no means is representative of a production-ready architecture proposal.
Racing games involve multiple vehicles racing a number of laps. They can be realistic or arcade-y, but the end goal is always the same: crossing the finish line first.
In multiplayer racing games it is vital to have information about the positions of other Clients that is as precise as possible. The server and the players both need to know details like whether it's possible to overtake in a curve, whether players bumped into each other, and even more importantly, who crossed the finish line first. We need server authority, but each Client also needs reliable predictions about where the other players are.
Fighting games come in many shapes and forms. Usually they involve 2 or more players fighting each other. The players can kick, punch, block, grab and trigger intricate combos for extra damage. Often this type of game relies on quick reactions to the opponent's movement.
For a fighting game, we need a reliable game state regardless of user ping. It needs to be deterministic. For example, if two players press the kick button a few milliseconds apart, the Simulator needs to be able to figure out which player is the one doing the kicking, and which is the one getting kicked. Our solution is called input queues and the setup is described in great detail here. The key idea is that only the input is being processed by the Client, and the Simulator is responsible for deciding the outcome. The Simulator stores a queue of inputs, which is then used to decide on the correct order of actions.
If you want to synchronize more than just the root of the GameObject, e.g. if you want precise replication of a ragdoll on all Clients, you need to create bindings to more than just the root transform. We support deep bindings, which allow you to select any object in the hierarchy and synchronize whatever is needed.
First-Person Shooters (FPS) are games where multiple players join opposing teams and shoot each other. You will often win by either eliminating the opposite team, exploding a bomb, or running out the timer.
Good communication between players is often essential to winning. Serious players will have voice communication, but it's also good to have in-game comms to easily communicate tactics. In coherence there is the concept of Client Connections, which you can use to easily send messages between players. It can also be used to communicate game state changes, e.g. "The bomb has been planted".
When building your level, there may be certain objects that should be duplicated across Clients. You want to have a duplicate of each player on every Client, but for something like doors, which can be opened or closed, you only want one "shared" door across all Clients. For this to happen you need to understand Uniqueness and Lifetime. Using those concepts, you can make sure that a given object is persistent in the scene, and that only one exists.
Similar to doors, the bomb is also unique. The difference is that the bomb is spawned, and only one bomb can exist in the game at any time. Doors are also unique, but multiple instances of a door asset can exist, just not in the same place. To understand more, read about Setting up a global counter. This uses the same principle of having a Prefab that is uniquely identified.
Online competitive multiplayer games are tricky to get exactly right, and there is no "right" solution. It's a constant tradeoff between cheat protection, latency, client prediction, etc. You need to do further research to decide on the solution that best suits you. One thing you definitely need to learn about is client vs server authority.
For turn-based games, the requirements for networking can be quite different from other, more fast-paced games. You are only interested in changes to the game state, and don't really need more granularity than that. Let's take chess as an example.
Since there is no player character, you don't really need a CoherenceSync object to process input. You could use a Client Connection Prefab if you want an easy way to implement chat.
You might want a Simulator to process everything if you want cheat protection, but for a game like this it could also be viable to simply opt for client-side simulation and have one of the Clients hold authority over the game controller. That is the default setting when adding a CoherenceSync component. Each Client can then talk to the game controller using Commands.
The topics of this page are covered in the first minute of this video:
It's quick and easy to set up a networked scene from scratch using the coherence SDK. This example will show you the basic steps to sync up some moving characters.
Add these components to your scene to prepare it for network synchronization.
coherence > Scene Setup > Create MonoBridge
This object takes care of connected GameObject lifetimes and allows us to develop using traditional MonoBehaviour scripts.
coherence > Scene Setup > Create LiveQuery
Creates a LiveQuery which queries the area around the local player to get the required information from the Replication Server. You can surround your entire scene in one query or can attach it to an object such as the player or a camera.
coherence > Scene Setup > Add Sample UI
Creates a Canvas (and an Event System, if not already present in the scene) with a sample UI that helps you connect to a local or remote Replication Server. You can create your own connection dialog; this one is just a quick way to get started.
Using the coherence Hub window gives you an overview of everything related to networking in your project. The Overview tab will show you the current status and which actions you need to perform for everything to work.
coherence > coherence Hub
If we have identified any issues in your project, the Overview tab provides a one-click solution to solve them. In the example below, simply click the blue link next to the warning message and coherence will redirect you to the right place to learn how to fix the issue.
Now we can build the project and try out network replication locally.
This example will show you how to launch a local Replication Server and connect multiple instances.
You can run a local Replication Server from the coherence menu:
This will open a new terminal window with the Replication Server and a World created in it.
As with most features found in the menu, you can find local Replication Server functionality in the coherence Hub as well. Open the Servers tab and run a Room or a World Replication Server.
Now it's time to make a standalone build and test network replication.
Pro tip: go to Project Settings > Player and change the Fullscreen Mode to Windowed and enable Resizable Window. This makes it much easier to observe standalone builds side by side when testing networking.
Note that for this sample we are running a World on the server, so make sure that the Connect Dialog Selector on your coherence Sample UI object in the scene is set to Worlds as well.
Open the Build Settings window (File > Build Settings). Click on Add Open Scenes to add the current scene to the build. Click Build and Run.
Select a folder (e.g. Builds) and click OK.
When the build is done, start another instance of the executable (or run the project in the Game Window in Unity).
Click Connect on both clients. Now try focusing one and using WASD keys. You will see the box move on the other side as well.
Congratulations, you've made your first coherence replicated experience. But this is only the beginning. Keep reading to take advantage of more advanced coherence features.
If you want to connect to the local Replication Server from another local device (such as another PC, Mac, Mobile or VR device), you can find your IPv4 address and use that as your server address in the Connect dialog. These devices need to be connected to the same network.
You can find your IPv4 address by running ipconfig in your command line tool. Remember to include the port number, for example 192.168.1.185:32001.
Make sure your firewall allows remote connections to connect to the Replication Server from other devices on your network.
The coherence Sample UI is a Prefab that you can add to your scene. It handles interaction with coherence services, is made up of a Unity UI Canvas, and includes everything needed to handle the connection to coherence.
The UI component on the root of the Prefab allows us to switch between using Rooms or Worlds. Each of these methods has a dedicated dialog for connection.
The Auto Simulator Connection component is used by Simulator builds to connect to the relevant Replication Server.
See the Simulators chapter to learn more about them.
The Rooms Connect Dialog has a few components that facilitate the usage of Rooms.
At the top of the dialog we have an input field for the player's name.
Next is a dropdown for region selection. This dropdown is populated when regions are fetched. The default selection is the first available region.
Due to current limitations the local Server is fetched only if it's started before you enter Play Mode. The Local Development Mode checkbox must also be checked in the project settings (coherence > Settings).
This affects the local region for Rooms and the local worlds for Worlds.
Beneath these elements is a list of available Rooms in the selected region.
After selecting a Room from the list, the Join button can be used to join that Room.
The New Room tab will take us to the Room creation screen.
This screen contains controls for setting a Room's name and maximum player capacity. Pressing the Create button will create a Room with the specified parameters and immediately add it to the Room list in the previous tab. Create and Join will create the Room and join it immediately.
The Worlds Connect Dialog is much simpler. It simply holds a dropdown for region selection, an input field for the player's name, and a Connect button.
You can also build your own interface to connect players to the Server using the PlayResolver API. To learn more about the API, see either PlayResolver, Rooms or Worlds according to what your project needs.
Requirements
coherence currently supports Unity. For custom engine integration, please contact our developer relations team. For updates regarding Unreal Engine support, please check the Unreal Engine support page.
Unity 2020 LTS or 2021 LTS (2020.3.36f1 or later).
A Windows, Linux or macOS system.
First, open Unity's Project Settings.
Under Package Manager, add a new Scoped Registry with the following information:
Name: coherence
URL: https://registry.npmjs.org
Scope(s): io.coherence.sdk
Enable Preview / Pre-release Packages: yes
Now open the Window / Package Manager.
Select My Registries in the Packages dropdown.
Under coherence, click Install.
Refer to Unity's instructions on modifying your project manifest.
Edit <project-path>/Packages/manifest.json. Add an entry for the coherence SDK in the dependencies object, and for the scoped registry in the scopedRegistries array:
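If you're editing the manifest by hand, the result could look roughly like this (a minimal sketch; the version below is a placeholder, use the latest published version of the package):

```json
{
  "dependencies": {
    "io.coherence.sdk": "x.y.z"
  },
  "scopedRegistries": [
    {
      "name": "coherence",
      "url": "https://registry.npmjs.org",
      "scopes": [
        "io.coherence.sdk"
      ]
    }
  ]
}
```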
You will then see the package in the Package Manager under My Registries.
When the coherence SDK has been successfully installed, the Welcome window will appear.
In most MMOs you control a character, interact with other players, and group up with other players to clear dungeons. Here are a few networking considerations for anyone creating something similar.
In an MMO the world needs to be persistent. Any given user can join and leave the world at any time, and we want their changes to persist: which NPCs were killed, which treasures were looted, which items are available on the auction house, and so on. To achieve this, you need to run a persistent World on a Replication Server in the cloud, which will make sure the world state is saved even if no players are logged in.
Given that there can be a large number of players distributed over a very large area, each Client doesn't really need to know about players in distant areas. By not sending information about players far away from the local player, we can significantly limit the amount of data sent over the network. With coherence you can use a LiveQuery to set a bounding box in which we replicate data from networked entities - anything outside of it is ignored.
Even within a LiveQuery there might be further room to optimize. With a large number of networked entities, you might want to prioritize those that are closer. Our solution to this is called Level of Detail (LOD). Using it, you are able to control values such as compression, value range and sample rate in order to hit the optimization sweet spot.
To make sure that all users experience an identical game world at any given time, we need a Simulator to be responsible for taking decisions for the AI, triggering events, etc. coherence supports launching your game with any number of Simulators taking responsibility for the various parts of your game.
A common part of all MMOs are instanced areas where a smaller group of players enters together and completes a set of tasks. It does not make much sense to run these instances as part of the World Replication Server. Instead, a separate server can be spun up for each group that enters such an instance. In coherence we offer this through multi-room Simulators. This allows you to have a shared world, as well as any number of instances of a specific area for a subset of players.
In this section, we will learn how to prepare a Prefab for network replication.
Setting up basic syncing is explained in this video, from 1:03 and onwards:
Alternatively, you can follow the text guide below.
Adding CoherenceSync to your GameObject
For a Unity GameObject to be networked through coherence, it needs to have a CoherenceSync component attached. Currently, only Prefabs are supported. If your GameObject is not a Prefab, CoherenceSync can assist you.
First, create a new GameObject. In this example, we're going to create a Cube.
Next, let's add the CoherenceSync component to this Cube.
The CoherenceSync inspector now tells us that we need to make a Prefab out of this GameObject for it to work. We get to choose where to create it.
In this example, I'll be creating it in Assets/Resources by clicking Convert to Prefab in Resources.
One way to configure your Prefab, instead of adding CoherenceSync directly to it, is to create a Prefab variant and add the component there.
In our Cube example, instead of adding CoherenceSync to Cube, you can create a Cube (Networked) and add CoherenceSync to it:
This way, you can retain the original Prefab untouched.
Another way to use Prefab variants to our advantage is to have a base Prefab using CoherenceSync, and create Prefab variants off that one with customizations. For example, Enemy (base Prefab) and Enemy 1, Enemy 2, Enemy 3... (variant Prefabs, using different models, animations, materials, etc.). In this setup, all of the enemies will share the networking settings stored in CoherenceSync, so you don't have to manually update every one of them.
When the Prefab variant inherits the network settings from its parent Prefab, you can configure the variant with overrides in the Configuration window. When a synced variable, method or component action is present in the variant but not in the parent, it will be shown in bold with the blue override marker beside it, just like any other override in Unity.
The CoherenceSync component will help you prepare an object for network synchronization during design time. It also exposes an API that allows us to manipulate the object during runtime.
CoherenceSync will query all public variables and methods on any of the attached components, for example Unity components such as Transform, Animator, etc. This will include any custom scripts such as PlayerInput, and even scripts that came with Asset Store packages that you may have downloaded.
In order for coherence to know which Prefab to instantiate through the network, Prefabs must be located in a Resources folder or added to the Prefab Mapper. The Prefab Mapper is simply a ScriptableObject that holds Prefabs we want to sync.
When a CoherenceSync Prefab outside the Resources folder is not present in the Prefab Mapper, the Add to Prefab Mapper button appears.
The Prefab Mapper asset is located at Assets / coherence.
If you are using the Addressables Package (1.15.0 or newer), the Prefab Mapper also supports that. The Prefab Mapper has a list for direct references and a list for addressables referenced using AddressableAssets.AssetReference. The directly referenced assets will be loaded at game start, while the addressables will be asynchronously loaded when needed.
Remember, in order for coherence to load assets as addressables, you need to follow Unity's default workflow for using Addressables which entails building the Addressables Group Bundles whenever the Prefab changes.
When adding a new Prefab to the mapper, it will automatically make sure to add it to the correct list (addressable or not), but if the Prefab already exists in the mapper and you have changed how it is loaded, simply press the Validate Mapping Lists button.
If you have a number of valid Prefabs that have not yet been added to the Prefab Mapper you can press the Add all CoherenceSync prefabs button.
If you have empty entries in the Prefab Mapper you can remove them by pressing Remove empty entries. This is optional.
Select which variables you would like to sync across the network. Initially, this will probably be the Transform settings: position, rotation, scale.
Under Configure, click Variables.
In the Configuration dialog, select position, rotation and scale.
You can configure variables, methods and components on child objects in the CoherenceSync hierarchy. To do that, simply select the desired object in the Hierarchy window, and the Configuration window will show information for that specific object — similarly to how the inspector works.
Close the Configuration dialog.
This simple input script will use WASD or the Arrow keys to move the Prefab around the scene.
Click on Assets > Create > C# Script.
Name it Move.cs. Copy-paste the following content into the file.
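The original snippet is not reproduced here, so the following is a minimal sketch of what such a Move script could look like (the class name Move matches the file name; the speed value and the use of the default Horizontal/Vertical input axes are assumptions):

```csharp
using UnityEngine;

public class Move : MonoBehaviour
{
    public float speed = 5f;

    void Update()
    {
        // WASD / Arrow keys map to Unity's default Horizontal and Vertical axes.
        var input = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        transform.Translate(input * speed * Time.deltaTime, Space.World);
    }
}
```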
Wait for Unity to compile the file, then add it onto the Prefab.
We have added a Move script to the Prefab. This means that if we just run the scene, we will be able to use the keyboard to move the object around.
But what happens on another Client where this object is not authoritative, but replicated? We will want the position to be replicated over the network, but without the keyboard input interfering with it.
Under Configure, click Components.
Here you will see a list of Component Actions that you can apply to non-authoritative GameObjects that have been spawned by the network.
Selecting Disable for your Move script will make sure the Component is disabled for network instances of your Prefab.
By extending the ComponentAction abstract class, you can implement your own Component Actions.
Your custom Component Action must implement the following methods:
OnAuthority: called when the object is spawned and you have authority over it.
OnRemote: called when a remote object is spawned and you do not have authority over it.
It also requires the ComponentAction class attribute, specifying the type of Component that you want the Action to work with, and the display name.
For example, a Component Action that disables Components on remote objects could look like this:
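The sketch below follows the description above; the exact attribute constructor and the way the base class exposes the target Component (here assumed to be a Component property) may differ between SDK versions:

```csharp
using UnityEngine;
using Coherence.Toolkit; // namespace assumed; may differ between SDK versions

[ComponentAction(typeof(Behaviour), "Disable")]
public class DisableComponentAction : ComponentAction
{
    // We have authority: leave the Component enabled.
    public override void OnAuthority() { }

    // Remote (non-authoritative) instance: disable the Component so it
    // doesn't interfere with the replicated state.
    public override void OnRemote()
    {
        if (Component is Behaviour behaviour)
        {
            behaviour.enabled = false;
        }
    }
}
```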
From the CoherenceSync component you can configure settings for Lifetime (Session-based or Persistent), Authority transfer (Request or Steal), Simulation model (Client Side, Server Side or Server Side with Client Input) and Adoption settings for when local persistent entities are orphaned.
There are also some Events that are triggered at different times.
On Before Networked Instantiation (before the GameObject is instantiated)
On Networked Instantiation (when the GameObject is instantiated)
On Networked Destruction (when the GameObject is destroyed)
On Authority Gained (when authority over the GameObject is transferred to the local Client)
On Authority Lost (when authority over the GameObject is transferred to another Client)
On After Authority Transfer Rejected (when a requested authority transfer for the GameObject was denied)
On Input Simulator Connected (when a Client with a Simulator is ready for Server-side with Client Input)
On Input Owner Assigned (when the InputOwner of the GameObject has been changed)
If you prefer, you can let the coherence Hub guide you through your Prefab setup process. Simply select a Prefab, open the Synced Object tab in the coherence Hub and follow the instructions.
There are some constraints when setting up a Prefab with CoherenceSync, hereafter referred to as a Sync Prefab:
A Sync Prefab has one, and only one, CoherenceSync component in its hierarchy.
The CoherenceSync component must be at the Sync Prefab root.
A Sync Prefab cannot contain instances of other Sync Prefabs.
A hierarchy in a scene can contain multiple Sync Prefabs. However, such a hierarchy cannot be saved as a Sync Prefab, as that would break rules 1-3.
Breaking rules 1-3 will break the build at runtime.
Quicker iteration during development
When developing multiplayer experiences you will need to run multiple instances of your game in order to test properly. You also need to make sure these instances can be restarted quickly, so you can iterate quickly.
coherence does not have a built-in solution for multiclient testing, but there are several options available to you, each with their own benefits and drawbacks.
ParrelSync is an open-source project which allows you to have multiple Editors open that share Assets and Project Settings using symbolic links.
Benefits
Short iteration times.
Easily debuggable instances.
Drawbacks
Requires you to have multiple Editors open.
Caveats
All instances of the game must have identical schemas, which are NOT shared using ParrelSync. That means you need to bake on all open Editors. Setting Auto Bake on Enter Play Mode to true in coherence Settings will alleviate this issue.
EditorPrefs are not consistently shared between Editors.
First, install the ParrelSync package as described in the ParrelSync documentation.
Open ParrelSync -> Clones Manager. Create a new clone, and open it.
Continue development in the original Editor.
When you need to test, do the following for all open Editors: Bake, press play. Alternatively you can set Auto Bake on Enter Play Mode to true.
The easiest method is to simply create a new build each time you want to test anything. You can launch any number of instances of that build, and have an instance running in the Editor as well.
Benefits
Easy to distribute amongst team members.
Well-understood workflow.
Drawbacks
Long iteration time as you need to continuously make builds.
Harder to debug the executables.
Caveats
All instances of the game must have identical schemas, so remember to bake before building the executable.
Unity has an experimental package called Multiplayer Play Mode (MPPM) available for 2023.1. As this is currently experimental, we do not officially recommend it - but it does show some promise and should be mentioned. This package allows a single Editor to run several instances of a game.
Benefits
Short iteration times.
No issues with schema incompatibility.
Drawbacks
Experimental.
Now we can finally deploy our schema and Replication Server on coherence Cloud.
In this example we're working with Worlds. Make sure you have created a World before trying to deploy the Replication Server. To create a World, follow the steps described in the coherence Portal.
The topics on this page start from around 1:03 in the video below:
Alternatively, you have the steps detailed in the text below.
In the coherence Hub window, select the coherence Cloud tab, and click on Upload to coherence Cloud in the Schemas section.
The status in the Schemas section should now be In Sync.
Your project schema is now deployed with the correct version of the Replication Server already running in the cloud. You will be able to see this in your cloud dashboard status.
The Connect Dialog fetches all the regions available for your project. This depends on the project configuration (e.g., the regions that you have selected for your project in the Portal).
You can now build the project again and send the build to your friends for testing.
You will be able to play over the internet without worrying about firewalls and local network connections.
Before connecting, make sure everybody selects the same region, and that this region is not local.
coherence allows you to upload and share the builds of your games to your team, friends or adoring fans via an easy access play link.
Right now we support desktop (PC, Mac, Linux) and also WebGL, where you can host and instantly play your multiplayer game and share it around the world.
Build your game to a local folder on your desktop as you would normally.
In the coherence Hub window, select the coherence Cloud tab. You can upload your build from the Share Build section of the tab: select the platform (macOS, Linux, Windows or WebGL are supported) and click the Begin Upload button.
Once the build has been uploaded (signified by the green tick), you can share it by enabling and sharing the public URL. Anyone with this link can access the build.
If you uploaded a WebGL build, the public link allows for instant play.
Now that we have tested our project locally, it's time to upload it to the cloud and share it with our friends and colleagues. To be able to do that, we need to create a free account with coherence.
Create an account or log into an existing one.
Open Unity and open the coherence Hub window. Then open the coherence Cloud tab.
After pressing Login you will be taken to the login page. Simply log in as usual and return to Unity.
You are now logged into the Portal through Unity. Select the correct Organization and Project, and you are ready to start creating.
Defines a network entity and what data to sync from the GameObject. Anything that needs to be synchronized over the network can use a CoherenceSync component. You can select data from your GameObject hierarchy that you'd like to sync across the network.
Queries an area of interest, so that you can read/write across the network on the desired location. In our Starter Project, the LiveQuery position is static with an extent large enough to cover the entire playable level. If the World was very large and potentially set over multiple Simulators, the LiveQuery could be attached to the playable character or camera.
Handles the connection between the coherence transport layer and the Unity scene.
Enables a Simulator to take control of the state authority of a Client's CoherenceSync, while retaining input authority.
This component is added by CoherenceSync on .
Learn how to create and use Prefab variants in the Unity documentation.
You can find out more about CoherenceSync .
Helper scripts and samples can be found .
If the status does not say "In Sync", or if you encounter any other issues with the server interface, refer to the Troubleshooting section.
We are working on a WebGL / WebAssembly option that will automatically upload the browser-playable build to your own personal webpage that you can share with your friends. For more information about our roadmap, please contact our developer relations team.
In your web browser, navigate to .
The way you get information about the World is through LiveQueries. We set criteria for what part of the World we are interested in at any given moment. That way, the Replication Server won't send information about everything that is going on in the Game World everywhere, at all times.
Instead, we will just get information about what’s within a certain area, kind of like moving a torch to look around in a dark cave.
For a guide on how to use LiveQuery, see Areas of Interest.
More complex areas of interest types are coming in future versions of coherence.
In addition to the LiveQuery, coherence also supports filtering objects by tag. This is useful when you have some special objects that should always be visible regardless of World position.
For a guide on how to use TagQuery, see Areas of Interest.
The CoherenceSync component will help you prepare an object for network synchronization. It also exposes an API that allows us to manipulate the object during runtime.
CoherenceSync will query all public variables and methods on any of the attached components, for example Unity components such as Transform, Animator, etc. This will include any custom scripts, including third-party Asset Store packages that you may have downloaded.
Refer to the Prefab setup page to learn how to configure your Prefab to network state changes.
Sometimes you want to synchronize data outside of the current GameObject.
Out of the box, coherence offers you several options to synchronize data from your CoherenceSync objects' hierarchy:
Child GameObjects: when you need to network data directly from other GameObjects.
Child CoherenceSyncs: when you create a parent-child relationship of CoherenceSync objects at runtime.
Deep Child CoherenceSyncs: when you create a complex parent-child relationship of CoherenceSync objects at runtime.
bool
int
uint
byte
char
short
ushort
float
string
Vector2
Vector3
Quaternion
GameObject
Transform
RectTransform
CoherenceSync
SerializeEntityID
byte[]
long
ulong
Int64
UInt64
Color
double
RectTransform is still in an experimental phase - use at your own discretion!
The MonoBridge establishes a connection between your scene and the coherence Replication Server. It makes sure all networked entities stay in sync.
When you place a GameObject in your scene, the MonoBridge detects it and makes sure all the synchronization can be done via the CoherenceSync component.
At runtime, you can inspect which Entities the MonoBridge is currently tracking.
A MonoBridge is associated with the scene it's instantiated on, and keeps track of Entities that are part of that scene. This also allows for multiple connections at the same time coming from the game or within the Unity Editor.
When using a Global MonoBridge (Singleton), the MonoBridge is still associated with the scene it was originally instantiated in, even when the GameObject detaches from the scene and becomes part of DontDestroyOnLoad.
Currently, the maximum number of persistent Entities supported by the Replication Server is 32 000. This limit will be increased in the near future.
CoherenceSync is a component that should be attached to every networked GameObject. It may be your player, an NPC or an inanimate object such as a ball, a projectile or a banana - anything that needs to be synchronized over the network and turned into an Entity. You can select which of the attached components you would like to sync across the network, as well as individual public properties.
All networked Entities need to be placed in the Resources folder.
Any scripts attached to the GameObject with CoherenceSync that have public variables will be shown here and can be synced across the network. Enable the script and the variable to sync - it's that easy. Entries with a lightning bolt next to them are public methods that can be invoked via commands.
Ownership transfer
When you create a networked GameObject, you automatically become the owner of that GameObject. That means only you are allowed to update or destroy it. But sometimes it is necessary to pass ownership from one player to another. For example, you could snatch the football in a soccer game or throw a mind control spell in a strategy game. In this case, you will need to transfer ownership from one Client to another.
Entity lifetime
When a player disconnects, all the GameObjects created by that player are usually destroyed. If you want any GameObjects to stay in the Game World after the owner disconnects, you need to set Entity lifetime type of that GameObject to Persistent.
Session Based - the Entity will be removed when the Client disconnects. Session-based GameObjects remain in the scene of the Client that has authority over them until the scene reloads.
Persistence - Entities with this option will persist as long as the server is running. For more details, see Configuring persistence.
Keep in mind that Entity IDs are assigned locally. This means that the IDs for the same Entity can be different on different Clients.
Uniqueness
Allow Duplicates - no restrictions on which objects can be instantiated over the network.
No Duplicates - ensure objects are not duplicated by marking them with a UUID.
You can set the UUID manually
If you leave the field blank a GUID will be assigned at runtime (This is rarely what you want)
Use the CoherenceUUID component helper to generate GUIDs for you at editor time. CoherenceUUID is a design-time-only tool for assigning unique UUIDs to Prefab instances.
Uniqueness examples
Manager: if your game has a Prefab of which there can only be one in-game instance at any time (such as a Game Controller), assign a UUID manually on the Prefab asset.
Interactable objects: if you have several instances of a given Prefab, but each instance must be unique (such as doors, elevators, pickups, etc.), add the CoherenceUUID script and set it to auto-generate in the scene. This means that, for example, a door will only spawn once, but still replicate its state across the network.
Entity simulation type
Client Side - Simulates everything on the local Client and passes the information to the Replication Server to distribute that information to the other Clients.
Other forms of simulation (Server; Server with Client Input).
Authority transfer style
Not Transferable - The default value is Not Transferable because most often objects are not meant to be transferred.
Stealing - Allows the GameObject to be transferred to another Client.
Request - This option is intended for conditional transfers, which is not yet supported.
Orphaned entities
By making the GameObject persistent, you ensure that it remains in the game world even after its owner disconnects. But once the GameObject has lost its owner, it will remain frozen in place because no Client is allowed to update or delete it. This is called an orphaned GameObject.
In order to make the orphaned GameObject interactive again, another Client needs to take ownership of it. To do this, enable Auto-adopt orphan.
Once you have set the transfer style to Stealing, any Client can request ownership by calling the RequestAuthority() method on the CoherenceSync component of that GameObject:
someGameObject.GetComponent<CoherenceSync>().RequestAuthority();
A request will be sent to the GameObject's current owner. The current owner will then accept the request and complete the transfer.
You are now the new owner of the GameObject. This means the isSimulated flag has been set to true, indicating that you are now in full control of the GameObject. The previous owner is no longer allowed to update or destroy it.
Helper scripts with a custom implementation of authority transfer can be found here.
The state of the CoherenceSync.isSimulated flag is not guaranteed to have a proper value during the Awake() callback (right after an object is created). All scripts that use this flag should wait at least until the Start() callback.
You can set up Custom Events for handling user connection and disconnection. Manual Destroy is useful for session-based objects that you want to keep "semi-persistent": they are removed only when all Clients disconnect.
When CoherenceSync variables/components are sent over the network, Reflection Mode is used by default to sync all the data at runtime. While this is really useful for quick prototyping and getting things working, it can be quite slow and inefficient. A way to combat this is to bake the CoherenceSync component: creating a compatible schema and then generating code for it.
The schema is a file that defines which data types in your project are synced over the network. It is the source from which coherence SDK generates C# struct types (and helper functions) that are used by the rest of your game. The coherence Replication Server also reads the schema file so that it knows about those types and communicates them with all of its Clients efficiently.
The schema must be baked in the coherence Settings window, before the check box to bake this Prefab can be clicked.
When the CoherenceSync component is baked, it generates a new file in the baked folder called CoherenceSync<NameOfThePrefab>. This component will be instantiated at runtime and will take care of networked serialization and deserialization, instead of the built-in reflection-based one.
Refer to the Commands section.
This page describes the order of various coherence events and scripts in relation to Unity's main loop.
The following coherence components use a non-standard script execution order:
Component | Execution order |
---|---|
CoherenceMonoBridge | -1000 |
CoherenceSync | -900 |
CoherenceInput | -800 |
CoherenceLiveQuery | 900 |
CoherenceTagQuery | 900 |
CoherenceMonoBridgeSender | 1000 |
Execution order values can be found in the Coherence.Toolkit.ScriptExecutionOrder script.
Take a look at your project's Script Execution Order settings by opening Edit > Project Settings and selecting the Script Execution Order category. See this Unity manual article for more details.
Depending on the reason for a disconnection, the onDisconnected event can be raised from different places in the code, including LateUpdate.
CoherenceSync parent-child relationships on complex hierarchies
While the basic case of direct parent-child relationships between CoherenceSync entities is handled automatically by coherence, more complex hierarchies (with multiple levels) need a little extra work.
An example of such a hierarchy would be a synced Player Prefab with a hierarchical bone structure, where you want to place an item (e.g. a flashlight) in the hand:
Player > Shoulder > Arm > Hand
A Prefab can only have a single CoherenceSync script on it (and only on its root node), so you can't add an additional one to the hand. Instead, you need to add the CoherenceNode component to another Prefab so that it can be parented. Please note that this parenting relationship can only be set up in the scene or at runtime; you can't store it in the parent Prefab, since that would break the rule of only one CoherenceSync per Prefab.
To prepare the child Prefab that you want to place in the hierarchy, add the CoherenceNode component to it (it also has to have a CoherenceSync). In the example above, that would be the flashlight you want your player to be able to pick up. You don't need to make any changes to the Player Prefab, just make sure it has a CoherenceSync script in the root.
This setup allows you to place instances of the flashlight Prefab anywhere in the hierarchy of the Player (you could even move it from one hand to the other, and it will work).
The one important constraint is that the hierarchies have to be identical on all Clients.
To recap, for CoherenceNode to work you need two things:
One or more Prefabs with CoherenceSync that have some kind of hierarchy of child transforms (the child transforms can't have CoherenceSyncs on them).
Another Prefab with CoherenceSync and CoherenceNode. Instances of this Prefab can now be parented to any transform of the Prefabs with just CoherenceSync (in step 1).
CoherenceNode works using two public fields which are automatically set to sync using the [Sync] attribute.
The path variable describes where in the parent's hierarchy the child object should be located. It is a string consisting of comma-separated indexes. Each of these indexes designates a specific child index in the hierarchy. The child object which has the CoherenceNode component will be placed in the resulting place in the hierarchy.
The pathDirtyCounter variable is a helper variable used to keep track of the applied hierarchy changes. In case the object's position in the parent's hierarchy changes, this variable will be used to help settle and properly sync those changes.
Note: This is simply an example solution for a particular case which uses other tools coherence provides. Your project's needs might be different and require a different custom solution.
CoherenceSync direct parent-child relationships
Objects with the CoherenceSync component can be connected to other objects with CoherenceSync components to form a parent-child relationship. For example, an object can be linked to a hand, a hand to an arm, and the arm to a spine.
When an object has a parent in the network hierarchy, its transform (position and orientation) will update in local space, which means its transform is relative to the parent's transform.
A child object will only be visible in a LiveQuery if its parent is within the query's boundaries.
Creating an Entity hierarchy is very simple. All you need to do is add a GameObject with a CoherenceSync component as a direct child of another GameObject with a CoherenceSync component. You can add and remove parent-child relationships at runtime (even from the editor).
Destruction or disconnection of the parent object will also destroy and remove all children of this object. Those objects' state needs to be handled on the Client side so they can be re-instantiated on the next connection.
Sometimes, it is not practical to add CoherenceSync objects to all the links in the chain. For example, if a weapon is parented to a hand controlled by an Animator, we do not need to synchronize the entire skeleton over the network. In that case, see CoherenceNode.
If the child object is using LODs, it will base its distance calculations on the world position of its parent. For more details, see the Level of detail documentation.
When the parent CoherenceSync is destroyed, by default its CoherenceSync children get destroyed together with it. This can be changed via the Preserve Children option on the parent:
When the Preserve Children option is enabled, destroying the parent entity will result in children getting unparented instead of being destroyed together with the parent. Those children will now reside at the root of the scene hierarchy.
Commands are network messages sent from one CoherenceSync to another CoherenceSync. Functionally equivalent to RPCs, commands bind to public methods accessible on the GameObject hierarchy that CoherenceSync sits on.
In the design phase, you can expose public methods the same way you select fields for synchronization: through the Configure window on your CoherenceSync component.
By clicking on the method, you bind to it, defining a command. The grid icon on its right lets you configure the routing mode. Commands with the Send to Authority Only mode can be sent only to the authority of the target CoherenceSync, while ones with Send to All Instances can be broadcast to all Clients that see it. The routing is enforced by the Replication Server as a security measure, so that outdated or malicious Clients don't break the game.
To send a command, we call the SendCommand method on the target CoherenceSync object. It takes a number of arguments:
The generic type parameter must be the type of the receiving Component. This ensures that the correct method gets called if the receiving GameObject has components that implement methods that share the same name.
Example: sync.SendCommand<Transform>(...)
If there are multiple commands bound to different components of the same type (for example, your CoherenceSync hierarchy has five Transforms, and you create a command for Transform.SetParent on all of them), the command is only sent to the first one found in the hierarchy which matches the type.
The first argument is the name of the method on the component that we want to call. It is good practice to use the C# nameof expression when referring to the method name, since it prevents accidentally misspelling it, or forgetting to update the string if the method changes name.
Alternatively, if you want to know which Client sent the command, you can add CoherenceSync sender as the first argument of the command, and the correct value will be automatically filled in by the SDK.
The second argument is an enum that specifies the MessageTarget of the command. The possible values are:
MessageTarget.All - sends the command to each Client that has an instance of this Entity.
MessageTarget.AuthorityOnly - sends the command only to the Client that has authority over the Entity.
MessageTarget.Other - sends the command to every Entity other than the one SendCommand is called on.
Mind that the target must be compatible with the routing mode set in the bindings, i.e. Send to Authority Only allows only MessageTarget.AuthorityOnly, while Send to All Instances allows both values.
Also, it is possible that the message is never sent, as in the case of a command with MessageTarget.Other sent from the authority when the routing is Authority Only.
The rest of the arguments (if any) vary depending on the command itself. We must supply as many parameters as are defined in the target method and the schema.
Here's an example of how to send a command:
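A minimal sketch, assuming a Health component with an ApplyDamage method has been bound as a command on the target Prefab (the namespaces for CoherenceSync and MessageTarget are assumptions and may differ between SDK versions):

```csharp
using UnityEngine;
using Coherence;         // assumed location of MessageTarget
using Coherence.Toolkit; // assumed location of CoherenceSync

public class Health : MonoBehaviour
{
    public float health = 100f;

    // Bound as a command in the Configure window.
    public void ApplyDamage(float amount) => health -= amount;
}

public class DamageDealer : MonoBehaviour
{
    public CoherenceSync targetSync;

    public void DealDamage()
    {
        // Invoke ApplyDamage on every Client that has an instance of the target Entity.
        targetSync.SendCommand<Health>(nameof(Health.ApplyDamage), MessageTarget.All, 10f);
    }
}
```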
If you have the same command bound more than once in the same Prefab hierarchy, you can target a specific MonoBehaviour when sending a message via the SendCommand(Action action) method in CoherenceSync.
Additionally, if you want to target every bound MonoBehaviour, you can do so via the SendCommandToChildren method in CoherenceSync.
We don't have to do anything special to receive the command. The system will simply call the corresponding method on the target network entity.
If the target is a locally simulated Entity, SendCommand will recognize that and not send a network command, but instead simply call the method directly.
Sometimes you want to inform a bunch of different CoherenceSyncs about a change. For example, an explosion impact on a few players. To do so, we have to go through the instances we want to notify and send commands to each of them.
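One way to do this is sketched below, assuming a hypothetical ExplosionReceiver component bound as a command on the affected Prefabs; here we simply find every CoherenceSync in the scene and filter by authority using isSimulated (namespaces are assumptions):

```csharp
using UnityEngine;
using Coherence;         // assumed location of MessageTarget
using Coherence.Toolkit; // assumed location of CoherenceSync

public class ExplosionReceiver : MonoBehaviour
{
    // Bound as a command in the Configure window.
    public void OnExplosion(Vector3 origin)
    {
        Debug.Log($"Explosion at {origin}");
    }
}

public class ExplosionBroadcaster : MonoBehaviour
{
    public void Broadcast(Vector3 origin)
    {
        foreach (var sync in FindObjectsOfType<CoherenceSync>())
        {
            // Only send commands from Entities we have authority over.
            if (!sync.isSimulated)
                continue;

            sync.SendCommand<ExplosionReceiver>(
                nameof(ExplosionReceiver.OnExplosion), MessageTarget.All, origin);
        }
    }
}
```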
In this example, a command will get sent to each CoherenceSync under the state authority of this Client. To make it only affect CoherenceSyncs within certain criteria, you need to filter to which CoherenceSync you send the command to, on your own.
Some of the supported primitive types are nullable. This includes:
Byte[]
string
Entity references: CoherenceSync, Transform, and GameObject
Refer to the supported types page.
In order to send one of these values as a null (or default) we need to use special syntax to ensure the right method signature is resolved.
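For instance, assuming a hypothetical ChatReceiver component with a SetMessage(string, Byte[]) method bound as a command, sending both arguments as null could look like this (namespaces are assumptions):

```csharp
using System;
using UnityEngine;
using Coherence;         // assumed location of MessageTarget
using Coherence.Toolkit; // assumed location of CoherenceSync

public class ChatReceiver : MonoBehaviour
{
    // Bound as a command in the Configure window.
    public void SetMessage(string text, Byte[] payload) { /* ... */ }
}

public class ChatSender : MonoBehaviour
{
    public CoherenceSync sync;

    public void SendEmptyMessage()
    {
        // Null arguments are wrapped in (Type, value) tuples so the right
        // method signature can be resolved.
        sync.SendCommand<ChatReceiver>(
            nameof(ChatReceiver.SetMessage),
            MessageTarget.All,
            (typeof(string), (string)null),
            (typeof(Byte[]), (Byte[])null));
    }
}
```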
Null-value arguments need to be passed as a ValueTuple<Type, object> so that their type can be correctly resolved. In the example above sending a null value for a string is written as:
(typeof(string), (string)null)
and the null Byte[] argument is written as:
(typeof(Byte[]), (Byte[])null)
Mis-ordered arguments, type mis-match, or unresolvable types will result in errors logged and the command not being sent.
When a null argument is deserialized on a client receiving the command, it is possible that the null value is converted into a non-null default value. For example, sending a null string in a command could result in clients receiving an empty string. As another example, a null Byte[] argument could be deserialized into an empty Byte[0] array. So, receiving code should be ready for either a null value or an equivalent default.
When a Prefab is not using a baked script there are some restrictions for what types can be sent in a single command:
4 entity references
maximum of 511 bytes total of data in other arguments
a single Byte[] argument can be no longer than 509 bytes because of overhead
Some network primitive types send extra data when serialized (like Byte arrays and string types) so gauging how many bits a command will use is difficult. If a single command is bigger than the supported packet size, it won't work even with baked code. For a good and performant game experience, always try to keep the total command argument sizes low.
Binding to variables and methods within the hierarchy
When you have the Configure window open, it will show the variables, methods and component actions available for synchronization for your currently selected GameObject.
If the Prefab that you are configuring has a hierarchy, you can synchronize variables, methods and component actions for any of the child GameObjects within the hierarchy.
To do so, open the Prefab in Prefab Mode by clicking the Open Prefab option in the inspector. This will allow you to select any of the GameObjects that belong to the hierarchy, the Configure window will be updated automatically, showing you everything that is available to be synchronized.
To edit child GameObjects, make sure you click on them in the hierarchy. A Configuration window will pop up.
coherence only replicates animation parameters, not state. Latency can create scenarios where different Clients reproduce different animations. Take this into account when working with Animator Controllers that require precise timings.
Unity Animator's parameters are bindable out of the box, with the exception of triggers.
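A common workaround for triggers is to wrap the trigger in a public method and invoke that method over the network as a command. A minimal sketch (the component layout and the "Jump" trigger name are assumptions):

```csharp
using UnityEngine;

public class PlayJumpAnimator : MonoBehaviour
{
    public Animator animator;

    // Bind this method as a command so remote Clients can fire the trigger too.
    public void PlayJumpAnimation()
    {
        animator.SetTrigger("Jump");
    }
}
```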
Now, bind to the PlayJumpAnimator.
How to network the Player Name set in the Connection Dialog?
coherence ships with a Sample UI that can be used to kickstart your project.
One implementation that we often see pretty early on in prototypes is the ability to show a name to identify players within a game session. So we've created a component to help achieve that: CoherencePlayerName.
If you want to network the Player Name set on the Sample UI, add a CoherencePlayerName component to your CoherenceSync:
This component is only valid for the built-in Sample UI. You can, at any time, develop your own mechanism of storing and synchronizing a player name. This component is a convenience for early prototyping.
Aside from configuring your CoherenceSync bindings from within the Configure window, it's possible to use the [Sync] and [Command] C# attributes directly on your scripts. Your Prefabs will be updated to require such bindings.
Mark public fields and properties to be synchronized over the network.
It's possible to migrate the variable automatically, if you decide to change its definition:
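A minimal sketch (the old-definition string passed to [Sync] for migration reflects our reading of the attribute and may differ between SDK versions):

```csharp
using UnityEngine;
using Coherence.Toolkit; // namespace assumed; may differ between SDK versions

public class PlayerState : MonoBehaviour
{
    // Plain usage: synced over the network once bound and baked.
    [Sync]
    public int score;

    // Renamed member: pass the old definition so the existing binding is migrated.
    // (Previously: [Sync] public float health;)
    [Sync("health")]
    public float hitPoints;
}
```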
Mark public methods to be invoked over the network. The method return type must be void.
It's possible to migrate the command automatically, if you decide to change the method signature:
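A minimal sketch along the same lines (again, the migration argument is an assumption):

```csharp
using UnityEngine;
using Coherence.Toolkit; // namespace assumed; may differ between SDK versions

public class Weapon : MonoBehaviour
{
    // Plain usage: invokable over the network; must return void.
    [Command]
    public void Reload() { }

    // Changed signature: reference the old definition so the existing binding is
    // migrated. (Previously: [Command] public void Fire(Vector3 direction);)
    [Command("Fire")]
    public void Fire(Vector3 direction, float power)
    {
        Debug.Log($"Firing towards {direction} with power {power}");
    }
}
```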
Networked entities can be simulated either on a Game Client ("Client authority") or a Simulation Server ("Server authority"). Authority defines which Client or Simulation Server is allowed to make changes to an Entity. An Entity is any networked GameObject.
When an Entity is created, the creator is assigned authority over it. Authority can be transferred between Clients and Simulators, but only one Client or Simulator can have authority over the Entity at a time.
Client authority is the easiest to set up initially, but it has some drawbacks:
Higher latency. Because both Clients have a non-zero ping to the Replication Server, the minimum latency for data replication and commands is the combined ping (Client 1 to Replication Server and Replication Server to Client 2).
Higher exposure to cheating. Because we trust Game Clients to simulate their own Entities, there is a risk that one such Client is tampered with and sends out unrealistic data.
In many cases, especially when not working on a competitive PvP game, these are not real issues, and Client authority is a perfectly fine choice for the game developer.
Client authority does have a few advantages:
Easier to set up. No Client vs. Server logic separation in the code, no building and uploading of Simulation Servers, everything just works out of the box.
Cheaper. Depending on how optimized the Simulator code is, running a Simulator in the cloud will in most cases incur more costs than just running a Replication Server (which is comparatively very lean).
Having one or several Simulators taking care of the important World simulation tasks (like AI, player character state, score, health, etc.) is always a good idea for competitive PvP games.
Running a Simulator in the cloud next to the Replication Server (with the ping between them being negligible) will also result in lower latency.
The player character can also be simulated on the Server, with the Client locally predicting its state based on inputs. You can read more about how to achieve that in the Server Side with Client Input section.
Peer-to-peer support (without a Replication Server) is planned for a future release. Please see the roadmap for updates.
Out of the box, coherence can use C# reflection to sync data at runtime. This is a great way to get started but is very costly performance-wise and has a number of limitations on what features can be used through this system.
For optimal runtime performance and a complete feature set, we need to create a schema and perform code generation specific to our project.
Learn more about this in the following section.
coherence calls this mechanism baking.
Click on coherence / Bake.
For every Prefab with a CoherenceSync component attached, the baking process will generate a C# baked script specifically tuned for it.
coherence offers two mechanisms to generate baked scripts: through Assets or through a Source Generator. By default, coherence bakes using Assets, but you can change this setting at any time in the coherence Settings window.
When you bake using Assets, the generated code is output to Assets/coherence/baked. This is a simple solution, allowing for easy inspection and debugging of the generated files, but it comes with a few drawbacks:
You should version the baked files, which can clutter your VCS workflow.
Since baked scripts access your code directly, changing your code can lead to compilation errors.
When you bake using the Source Generator, the generated code is fed directly to the compilation pipeline, not generating the files in the Assets folder. In this process, coherence can analyze your code syntactically and semantically. This means it can detect cases where changes in your code can affect the baked files, anticipating compiler errors and avoiding them altogether. This mode comes with a few drawbacks too:
The file Assets/coherence/Footprint.cs is created/updated on every bake operation to trigger a recompile of your code. This file (and its .meta) can be ignored in your VCS.
A bake operation is performed on every recompile. For most projects this is not noticeable. But if your project is heavy or your computer is slow, this can take additional time on top of the normal recompilation time.
Since baked files are not versioned and have to be generated, the protocol code generator (an executable bundled with the SDK package) needs to keep its execute permissions and should have write permission on <project>/Library/coherence. This is usually not a problem, except in continuous integration scenarios, where there might be strict rules on files having the execute permission.
Harder to debug. Since the files are not in Assets anymore, you can't click on them and get proper code completion. The last generation made through the Source Generator is available in Library/coherence/LastBake.
As you can see, there are pros and cons to each mechanism, so we recommend you try both and check what works best for your workflow.
Source generators do not work on Unity version 2021.1. This is a known Unity issue that has no other fix than to upgrade (or downgrade) the version.
Once the baked scripts have been generated, you can make use of them by ticking the Baked checkbox in the CoherenceSync inspector. This is on by default.
When you configure your Prefab to network variables, and then bake, coherence generates baked scripts that access your code directly, without using reflection. This means that whenever you change your code, you might break compilation by accident.
For example, if you have a Health.cs script which exposes a public float health; field, and you toggle health in the Configure window and bake, the generated baked script will access your component via its type, and your field via its name.
Like so:
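Conceptually, the baked code contains something along these lines (a simplified illustration only, not the actual generated script):

```csharp
// A simplified illustration of what baked access looks like.
public static class BakedHealthSyncSketch
{
    public static void ApplyNetworkValue(Health target, float valueReceivedFromNetwork)
    {
        // The component type (Health) and the field (health) are referenced
        // directly, so no reflection is needed at runtime.
        target.health = valueReceivedFromNetwork;
    }
}
```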
When baking via assets, baked scripts will be located in Assets/coherence/baked.
If you decide you want to change your component name (Health) or any of your bound fields (health), Unity script recompilation can fail. In this example, we will be removing health and adding health2 in its place.
When baking via assets, the watchdog is able to catch compilation problems related with this, and offer you a solution right away.
You can delete the baked folder manually through the coherence Settings window.
It will suggest that you delete the baked folder, and then diagnose the state of your Prefabs. After a few seconds of script recompilation, you will be presented with the Diagnosis window.
In this window, you can easily spot variables in your Prefabs that can't be resolved properly. In our example, health is no longer valid since we've moved it elsewhere (or deleted it).
From here, you can access the Configure window, where you can spot the problem.
Now we can manually rebind our data: unbind health and bind health2. Once we do, we can safely bake again.
Remember to bake again after you fix your Prefabs.
Notifying State Changes
It is often useful to know when a synchronized variable has changed its value. This can be easily achieved using the OnValueSyncedAttribute. This attribute lets you define a method that will be called each time the value of a synced member (field or property) changes in the non-simulated version of an Entity.
Let's start with a simple example:
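A minimal sketch (the attribute argument and the callback signature shown here are assumptions based on the description above; the UI label is illustrative, and the Health field must also be configured for syncing):

```csharp
using UnityEngine;
using UnityEngine.UI;
using Coherence.Toolkit; // namespace assumed; may differ between SDK versions

public class PlayerHealth : MonoBehaviour
{
    public Text healthLabel;

    // Assumed usage: the attribute names the callback to invoke on value changes.
    [OnValueSynced(nameof(UpdateHealthLabel))]
    public float Health = 100f;

    // Called on non-simulated instances whenever a new value of Health is synced.
    public void UpdateHealthLabel(float oldValue, float newValue)
    {
        healthLabel.text = $"Health: {newValue}";
        Debug.Log($"Health changed by {newValue - oldValue}");
    }
}
```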
Whenever the value of the Health field gets updated (synced with its simulated version), the UpdateHealthLabel method will be called automatically, changing the health label text and printing a log with the health difference.
The OnValueSynced feature can be used only on members of user-defined types; that is, there's no way to be notified about a change in the value of a Unity type member, like transform.position. This might however change in the future, so stay tuned!
Value sync callbacks are currently only supported for value types. That means the following types are not supported: byte[], CoherenceSync, GameObject, Transform and RectTransform.
Entity references let you set up references between Entities and have those be synchronized, just like other value types (like integers, vectors, etc.)
To use Entity references, simply select any fields of type GameObject, Transform, or CoherenceSync for syncing in the Configuration window:
The synchronization works both when using reflection and in baked sync scripts.
It's important to know about the situations when an Entity reference might become null, even though it seems like it should have a value:
The owner of the Entity reference might sync the reference to the Replication Server before syncing the referenced Entity. This will lead to the Replication Server storing a null reference. If possible, try setting the Entity references during gameplay when the referenced Entities have already existed for a while.
In any case, it's important to use a defensive coding style when working with Entity references. Make sure that your code can handle missing Entities and nulls in a graceful way.
Authority over state changes to an Entity is transferable, so it is possible to move authority over the simulation of an Entity between Clients and Simulation Servers. This is useful for things such as balancing the simulation load or exchanging items. It is possible for an Entity to have no Client or Simulator as its authority - such Entities are considered orphaned and are not simulated.
In the design phase, CoherenceSync objects can be configured to handle authority transfer in different ways:
Request. Authority transfer may be requested, but it may be rejected by the current authority.
Steal. Authority will always be given to the requesting party on a FCFS ("first come first serve") basis.
Disabled. Authority cannot be transferred.
Note that you need to set up Auto-adopt Orphan if you want orphans to be adopted automatically when an Entity's authority disconnects, otherwise an orphaned Entity is not simulated. Auto-adopt is only allowed for persistent entities.
When using Request, an optional callback OnAuthorityRequested can be set on the CoherenceSync behaviour. If the callback is set, then the result of the callback will override the Approve Requests setting in the behaviour.
The request can be approved or rejected in the callback.
Requesting authority is very straight-forward.
RequestAuthority
returns false
if the request was not sent. This can be because of the following reasons:
The sync is not ready yet.
The entity is not allowed to be transferred because authorityTransferType
is set to NonTransferable
.
There is already a request underway.
The entity is orphaned, in which case you must call Adopt
instead to request authority.
The request itself might fail depending on the response of the current authority.
As the transfer is asynchronous, we have to subscribe to one or more Unity Events in CoherenceSync to learn the result.
The request will first go to the Replication Server and be passed onto the receiving Simulator or Game Client, so it may take a few frames to get a response.
These events are also exposed in the Custom Events section of the CoherenceSync inspector.
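As a rough sketch (the parameterless RequestAuthority overload is assumed here; newer SDK versions may take an authority-type argument):

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class AuthorityRequestExample : MonoBehaviour
{
    public CoherenceSync sync;

    public void TryTakeControl()
    {
        // Returns false if the request could not even be sent: the sync is
        // not ready, the transfer type is NonTransferable, a request is
        // already underway, or the entity is orphaned (call Adopt() instead).
        if (!sync.RequestAuthority())
        {
            Debug.LogWarning("Authority request was not sent.");
            return;
        }

        // The actual result arrives asynchronously - subscribe to the
        // authority-related Unity Events on CoherenceSync (also exposed in
        // the Custom Events section of the inspector) to react to it.
    }
}
```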
Triggers can be invoked over the network using network commands. Here's an example where we inform networked Clients that we have played a jump animation:
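A hedged sketch of what this could look like. PlayerAnimations and PlayJumpAnimation are hypothetical names, and the string-based SendCommand overload with a MessageTarget is assumed:

```csharp
using Coherence;         // namespaces assumed; may differ per SDK version
using Coherence.Toolkit;
using UnityEngine;

public class PlayerAnimations : MonoBehaviour
{
    public CoherenceSync sync;
    public Animator animator;

    public void Jump()
    {
        animator.SetTrigger("Jump"); // play locally on the authority

        // Tell the other connected Clients to play the same trigger.
        sync.SendCommand<PlayerAnimations>(
            nameof(PlayJumpAnimation),
            MessageTarget.Other);
    }

    // Must be selected as a synced command in the Configuration window.
    public void PlayJumpAnimation()
    {
        animator.SetTrigger("Jump");
    }
}
```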
It will automatically bind its variable playerName, so you're already set. At runtime, CoherencePlayerName.playerName will hold the name of the player that owns that entity.
Even if an entity is not currently being simulated locally (the client does not have authority), we can still affect its state by sending it a network command, or by requesting authority over it.
This will go through all indexed CoherenceSync
GameObjects (Resources folders and Prefab Mapper) in the project and generate a schema file based on the selected variables, commands and other settings. It will also take into account any LODs (Archetypes) that have been added.
This comes in handy in projects that use authoritative Simulators. The Client code can easily react to changes in the Player
entity state introduced by the Simulator, updating the visual representation (which the Simulator doesn't need).
The OnValueSyncedAttribute requires using baked scripts.
Remember that the callback method will be called only on a non-simulated instance of an Entity. Using it on a simulated (owned) instance requires calling the selected method manually whenever the value of the given field/member changes. We recommend using properties for this.
Entity references can also be used as arguments in network commands.
A client might not have the referenced entity in its LiveQuery. A local reference can only be valid if there's an actual Entity instance to reference. If this becomes a problem, consider a setup which ensures that the referenced Entity also becomes part of the query.
Support for requests based on is coming soon.
Component | Script execution order |
---|---|
CoherenceMonoBridge | -1000 |
CoherenceSync | -900 |
CoherenceInput | -800 |
CoherenceLiveQuery | 900 |
CoherenceTagQuery | 900 |
CoherenceMonoBridgeSender | 1000 |
When we connect to a Game World with a Game Client, the traditional approach is that all Entities originating on our Client are session-based. This means that when the Client disconnects, they will disappear from the network World for all players.
A persistent object, however, will remain on the Replication Server even when the Client or Simulator that created or last simulated it is gone.
This allows us to create a living world where player actions leave lasting effects.
In a virtual world, examples of persistent objects are:
A door anyone can open, close or lock
User-generated or user-configured objects left in the world to be found by others
Game progress objects (e.g. in PvE games)
Voice or video messages left by users
NPCs wandering around the world using AI logic
Player characters on "auto pilot" that continue affecting the world when the player is offline
And many, many more
A persistent object with no Simulator is called an orphan. Orphans can be configured to be auto-adopted by Clients or Simulators on a FCFS basis.
coherence uses the concept of ownership to determine who is responsible for simulating each Entity. By default, each Client that connects to the Replication Server owns and simulates the Entities they create. There are a lot of situations where this setup is not adequate. For example:
The number of Entities could be too large to be simulated by the players on their own, especially if there are few players and the World is very large.
The game might have an advanced AI that requires a lot of coordination, which makes it hard to split up the work between Clients.
It is often desirable to have an authoritative object that ensures a single source of truth for certain data. State replication and "eventual correctness" don't give us these guarantees.
Perhaps the game should run a persistent simulation, even while no one is playing.
With coherence, all of these situations can be solved using dedicated Simulators. They behave very much like normal Clients, except they run on their own with no player involved. Usually, they also have special code that only they run (and not the clients). It is up to the game developer to create and run these programs somewhere in the cloud, based on the demands of their particular game.
Simulators can also be independent from the game code. A Simulator could be a standalone application written in any language, including C#, Go or C++, for instance. We will post more information about how to achieve this here in the future. For now, if you would like to create a Simulator outside of Unity, please contact our developer relations team.
To use Simulators, you need to enter your credit card details. You can do it by logging into our Dashboard, selecting the Billing tab, finding the Payment Methods section and clicking the Manage button.
If you're on the Free plan, you won't be charged anything - our payment provider will temporarily reserve a small amount to verify that the credit card is in working order.
If you have determined that you need one or more Simulators for your game, there are multiple ways you can go about implementing these. You could create a separate Unity project and write the specific code for the Simulator there (while making sure you use the same schema as your original project).
An easier way is to use your existing Unity project and modify it in a way so that it can be started either as a normal Client, or as a Simulator. This will ensure that you maximize code sharing between Clients and Servers - they both do simulation of Entities in the same Game World after all.
To force a build to start as a Simulator, you can use the following command line argument:
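```
--coherence-simulation-server
```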
The Simulator is started with the following parameters in coherence Cloud:
Important: if you want to deploy Simulators on the coherence Cloud, they have to be built for Linux 64-bit.
The SDK provides a static helper class to access all the above parameters in the C# code called SimulatorUtility
.
To build Simulators, it's best to use headless mode (Unity 2020) or the Linux Dedicated Server Build Target.
This is great for Simulators since we're not interested in rendering any graphics on these outside of local development. You will also get a leaner executable that is smaller and faster to publish to the coherence Cloud.
When a room has only Simulators (no Clients) it shuts down automatically after a short period of time.
Refer to the Simulator: Build and deploy section.
Connecting Simulators to the Internet is a paid-only feature. If you are currently on a Free Tier Subscription and need your Simulators to connect to external services, please upgrade your Subscription.
Before deploying a Simulation Server, testing and debugging locally can significantly improve development and iteration times. There are a few ways of accomplishing this.
Using the Unity Editor as a Simulator allows us to easily debug the Simulator. This way we can see logs, examine the state of scenes and GameObjects and test fixes very rapidly.
To run the Editor as a Simulator, run the Editor from the command line with the proper parameters:
--coherence-simulation-server
: used to specify that the program should run as a coherence Simulator.
--coherence-simulator-type
: tells the Simulator what kind of connection to make with the Replication Server, can be Rooms or World.
--coherence-region
: tells the Simulator which region the Replication Server is running in: EU, US or local.
--coherence-ip
: tells the Simulator which IP it should connect to. Using 127.0.0.1 will connect the Simulator to a local server, if one is running.
--coherence-port
: specifies the port the Simulator will use.
--coherence-world-id
: specifies the World ID to connect to, used only when set to Worlds.
--coherence-room-id
: specifies the Room ID to connect to, used only when set to Rooms.
--coherence-unique-room-id
: specifies the unique Room ID to connect to, used only when set to Rooms.
--coherence-auth-token
: specifies your local dev authentication token, that you get from pressing the coherence Hub > Simulators > Run local simulator build > Fetch Last Endpoint button. This will change in the near future and will no longer be a required parameter.
For example:
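For instance, a Rooms-based invocation on Windows could look like the following (the Editor and project paths, port, Room ID and token are placeholders to replace with your own values):

```
"C:\Program Files\Unity\Hub\Editor\<version>\Editor\Unity.exe" -projectPath "C:\Projects\MyGame" --coherence-simulation-server --coherence-simulator-type Rooms --coherence-region local --coherence-ip 127.0.0.1 --coherence-port <port> --coherence-unique-room-id <unique-room-id> --coherence-auth-token <auth-token>
```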
Keep in mind that all regular Unity arguments are supported. You can see the full list here: Unity Editor command line arguments.
If you're not sure which values should be used, adding a COHERENCE_LOG_DEBUG
define symbol will let you see detailed logs. Among them are logs that describe which IP, port and such the Client is connecting to. This can be done in the Player settings: Project Settings > Player > Other Settings > Script Compilation > Scripting Define Symbols.
To learn more about Simulators, see Simulators.
Another option is making a Simulator build and running it locally. This option emulates more closely what will happen when the Simulator is running after being uploaded.
You can run a Simulator executable build in the same way you run the Editor.
This allows you to test a Simulator build before it is uploaded or if you are having trouble debugging it.
You can also run an existing Simulator build from coherence Hub > Simulators > Run local simulator build.
Use the Fetch Last Endpoint button to autofill the required fields.
When using a Rooms-based setup, you first have to create a Room in the local Replication Server (e.g. by using the connect dialog in the Client).
The local Replication Server will print out the Room ID and unique Room ID that you can use when connecting the Simulator.
To learn more about creating a Simulator build, see SIMULATORS: Build and Deploy.
This document explains how to set up an ever increasing counter that all Clients have access to. This could be used to make sure that everyone can generate unique identifiers, with no chance of ever getting a duplicate.
By being persistent, the counter will also keep its value even if all Clients log off, as long as the Replication Server is running.
First, create a script called Counter.cs and add the following code to it:
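A sketch of what Counter.cs could look like, assuming the string-based SendCommand overload on CoherenceSync; the command is called GetNumber here to match the requester described below, so adjust the names to whatever you actually select in the bindings window:

```csharp
using Coherence;
using Coherence.Toolkit;
using UnityEngine;

public class Counter : MonoBehaviour
{
    // Selected for syncing so the value is stored on the Replication Server.
    public int counter;

    // Network command invoked by a NumberRequester. The entity reference
    // argument lets us answer back with a response command.
    public void GetNumber(CoherenceSync requester)
    {
        counter++;
        requester.SendCommand<NumberRequester>(
            nameof(NumberRequester.GotNumber),
            MessageTarget.AuthorityOnly,
            counter);
    }
}
```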
This script expects a command sent from a script called NumberRequester
, which we will create below.
Next, add this script to a Prefab with CoherenceSync on it, and select the counter
and the method NextNumber
for syncing in the bindings window. To make the counter behave like we want, mark the Prefab as "Persistent" and give it a unique persistence ID, e.g. "THE_COUNTER". Also change the adoption behaviour to "Auto Adopt":
Finally, make sure that a single instance of this Prefab is placed in the scene.
Now, create a script called NumberRequester.cs
. This will be an example MonoBehaviour that requests a unique number by sending the command GetNumber
to the Counter Prefab. As a single argument to this command, the NumberRequester
will send an entity reference to itself. This makes it possible for the Counter to send back a response command (GotNumber
) with the number that was generated. In this simple example we just log the number to the console.
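Again as a hedged sketch: the space-bar trigger is purely illustrative, and the string-based SendCommand overload is assumed.

```csharp
using Coherence;
using Coherence.Toolkit;
using UnityEngine;

public class NumberRequester : MonoBehaviour
{
    public CoherenceSync counterSync; // the Counter instance in the scene
    private CoherenceSync sync;

    void Awake()
    {
        sync = GetComponent<CoherenceSync>();
    }

    void Update()
    {
        // Illustrative trigger: request a new unique number on space bar.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            counterSync.SendCommand<Counter>(
                nameof(Counter.GetNumber),
                MessageTarget.AuthorityOnly,
                sync); // entity reference to ourselves, so the Counter can reply
        }
    }

    // Response command sent back by the Counter; select it for syncing.
    public void GotNumber(int number)
    {
        Debug.Log($"Got unique number: {number}");
    }
}
```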
To make this script work, add it to a Prefab that has the CoherenceSync script and mark the GotNumber
for syncing in the bindings window.
When scripting Simulators, we need mechanisms to tell them apart.
Ask Coherence.SimulatorUtility.IsSimulator
.
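For example, a component could branch its setup on that flag:

```csharp
using UnityEngine;

public class SimulatorAwareSetup : MonoBehaviour
{
    void Awake()
    {
        if (Coherence.SimulatorUtility.IsSimulator)
        {
            // Simulator-only logic (AI, physics, world state, ...).
            Debug.Log("Running as a Simulator");
        }
        else
        {
            // Client-only setup (cameras, UI, input, ...).
        }
    }
}
```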
There are two ways you can tell coherence if the game build should behave as a Simulator:
COHERENCE_SIMULATOR
preprocessor define.
--coherence-simulation-server
command-line argument.
Connect
and ConnectionType
The Connect
method on Coherence.Network
accepts a ConnectionType
parameter.
Whenever the project compiles with the COHERENCE_SIMULATOR
preprocessor define, coherence understands that the game will act as a Simulator.
Launching the game with --coherence-simulation-server
will let coherence know that the loaded instance must act as a Simulator.
You can supply additional parameters to a Simulator that define its area of responsibility, e.g. a sector/quadrant to simulate Entities in and take authority over Entities wandering into it.
You can also build a special Simulator for AI, physics, etc.
You can define who simulates the object in the CoherenceSync inspector.
The sample UI provided includes auto-reconnect behaviour out of the box for Room- and World-based simulators. The root GameObject has AutoReconnect components attached to it.
Multi-Room Simulators have their own per-scene reconnect logic. The AutoReconnect components should not be enabled when working with Multi-Room Simulators.
If the Simulator is invoked with the --coherence-play-region
parameter, AutoReconnect will try to reconnect to the Server located in that region.
The CoherenceSync editor interface allows us to define the Lifetime of a networked object. The following options are available:
Session Based. No persistence. The Entity will disappear when the Client or Simulator disconnects.
Persistent. The Entity will remain on the Server until a simulating Client deletes it.
Unique persistent objects need to be identified so that the system can know how to treat duplicate persistent objects.
Manually assigning a UUID means that each instance of this persistent object Prefab is considered the same object regardless of where on the network it is instantiated. So, for example, if two Clients instantiate the same Prefab object with the same persistence UUID then only one is considered official and the other is replaced by the Replication Server.
The CoherenceUUID behaviour is used to uniquely identify a Prefab.
It has several functions: you can generate a new ID for your object, and you can set Auto-generate UUID in scene to true, so that the object receives a new ID each time.
Auto-generate UUID in scene does not work for persistent objects.
A persistent object can be deleted only by the Client or Simulator that has authority over it. For indirect remote deletion, see the section about network commands.
Deleting a persistent object is done the same as with any network object - by destroying its GameObject.
All persistent objects remain in the World for the entire lifetime of the Replication Server and, periodically, the Replication Server records the state of the World and saves it to physical storage. If the Replication Server is restarted, then the saved persistent objects are reloaded when the Replication Server resumes.
Currently, the maximum number of persistent objects supported by the Replication Server is 32 000. This limit will be increased in the near future.
This feature requires baking.
coherence can support large game worlds with many objects. Since the amount of data that can be transmitted over the network is limited, it's very important to only send the most important things.
You already know a very efficient tool for enabling this – the LiveQuery. It ensures that a client is only sent data when an object in its vicinity has been updated.
Often though, there is a possibility for an even more nuanced and optimized approach. It is based on the fact that we might not need to send as much data for an entity that is far away, compared to a close one. A similar technique is often used in 3D-programming to show a simpler model when something is far away, and a more detailed when close-up.
This idea works really well for networking too. For example, when another player is close to you it's important to know exactly what animation it is playing, what it's carrying around, etc. When the same player is far off in the horizon, it might suffice to only know it's position and orientation, since nothing else will be discernible anyways.
To use this technique we must learn about something called archetypes.
Any Prefab with the CoherenceSync component can be optimized to use various levels of detail (LODs).
There must always exist a LOD 0; this is the default level, and it always has all components enabled (it can have per-field overrides though, see below).
There can be any number of subsequent LODs (e.g. LOD 1, LOD 2, etc.) and each one must have a distance threshold higher than the previous one. The coherence SDK will try to use the LOD with the highest number that is still within the distance threshold.
Example
An object has three LODs, like this:
LOD 0 (threshold 0)
LOD 1 (threshold 10)
LOD 2 (threshold 20)
If this object is 15 units away, it will use LOD 1.
Confusingly, the highest numbered LOD is usually called the lowest one, since it has the least detail.
On each LOD, there are two options for optimizing data being transferred:
Components can be turned off, meaning you won't receive any updates from them.
Their fields can be configured to use fewer bits, usually leading to less fine-grained information. The idea is that this won't be noticeable at the distance of the LOD.
coherence allows us to define the range of numeric fields and how many bits we want to allocate to them.
Here are some terms we will be using:
Bits. The number of bits used for the field. When used for vectors, the number defines the bits used for each component (x, y and z). A vector3 set to 24 bits will consume 3 * 24 = 72 bits.
Range. For integer values and fixed-point floats, we define a minimum and maximum possible value (e.g. Health
can lie between 0
and 100
).
More bits mean more precision. Increasing the range while leaving the bit count the same will lower the precision of the field.
The maximum number of bits used for any field/component is currently 32.
coherence allows us to define these values for specific components and fields. Furthermore, we can define levels of detail so that precision and therefore bandwidth consumption falls with the distance of the object to the point of observation.
Levels of detail are calculated from the distance between the entity and the center of the LiveQuery.
On each LOD you can configure the individual fields of any component to use less data. You can only decrease the fidelity, so a field can't use more data on a lower (more far away) LOD. The Archetype editor interface will help you to follow these rules.
In order to define levels of detail, we have to click the Optimize button on a Prefab's CoherenceSync
component with defined field bindings.
That opens the Optimization window. We can override the base component settings even without defining further levels of detail.
Clicking on Add new Level Of Detail will add a new LOD. We can now define the distance at which the LOD starts. This is the minimum distance between the entity and the center of the LiveQuery at which the new level of detail becomes active (i.e. the Replicator will start sending data as defined here at this distance).
You can also disable components at later LOD levels if they are not needed. In the example above, you can see that in LOD2 the entire Transform and Animator components are disabled beyond the distance of 20 units. At 100 units (a.k.a. meters), we usually do not see animation details, so we can save a lot of bandwidth and processing power by not replicating this data.
The Data Cost Overview shows us that this takes the original 913 bits down to just 372 bits at LOD level 2.
The primitive types that coherence supports can be configured in different ways:
These three types can all be configured in the same way, using different compression types:
None
No compression will be used, a full 32-bit float will be transmitted every time.
Truncated
Allows for specifying the number of bits for compression. Fewer bits mean lower bandwidth usage, at the cost of precision loss. The minimum number of bits is 10. Using 22 bits will result in around half the precision of the full float, while 16 will result in a quarter of the precision.
Fixed point
Allows for specifying the range of values used together with either number of bits or a desired precision.
Range affects the maximum and minimum value that the data type can take on. For example, a range of 100 to 200 means only values within that range can be sent - any value outside of this range will be clamped to the nearest correct value.
Precision defines the greatest deviation allowed for the data type. For example, a precision of 0.1 means that a float of value 10.0 can be transmitted as anything from 9.9 to 10.1 over the network. The minimum allowed precision is 0.1, while the maximum precision depends on the range. Changing precision automatically recalculates the number of bits required for given range.
Bits dictate how many bits to use when calculating the precision for a given range. When set manually, it will trigger recalculation of the precision for the given range. Note that the number of bits can be rounded down if fewer bits suffice for the resulting precision: e.g. for a range of [0, 1], setting the number of bits to 6 will result in a precision of 0.1 and a final bit count of 4, since 4 bits suffice to represent this range with that precision.
When using these range settings for vectors, it affects each axis of the vector separately. Imagine shrinking its bounding box, rather than a sphere.
Integers can be configured to any span (that fits within a 32-bit integer) by setting its minimum and maximum value.
For example, the member variable age
in a game about ancient trolls might use a minimum of 100 and a maximum of 2000. Based on the size of the range (1900 in this case) a bit-count will be calculated for you.
For integers, it usually makes sense not to decrease the range on lower LODs, since doing so will overflow (and wrap around) any member on an entity that switches to a lower LOD. Instead, use this setting on LOD 0 to save data for the whole Archetype.
Quaternions and Colors can be configured using the number of bits per component. Quaternions require sending 3 components while Colors require 4 components.
All other types (strings, booleans, entity references) have no settings that can be overridden, so your only option for optimizing those are to turn them off completely at lower LODs.
If a LODed game object is parented to another synced object, the child will base its LOD level on the World position of its parent. This means that the (local) position of the LODed child does not have any effect on its LOD, until it is unparented.
Also – to save bandwidth, detection of LOD changes on the client only happens when the entity sends a component update. This means that a child object might appear to be using a nonsensical LOD until it changes in some way, for example by modifying its position.
When we bake, information from the CoherenceArchetype
component gets written into our schema. Below, you can see the setup presented earlier reflected in the resulting schema file.
If you want to know more about how LODs work inside the schema files, take a look at Archetypes.
The most unintuitive thing about archetypes and LOD-ing is that it doesn't affect the sending of data. This means that a "fat" object with tons of fields will still tax the network and the Replication Server if it is constantly updated, even if it uses a very optimized Archetype.
Also, it's important to realize that the exact LOD used on an entity varies for each other client, depending on the position of their query (or the closest one, if several are used.)
CoherenceInput is a component that enables a Simulator to take control of the simulation of another Client's objects based on the Client's inputs.
In situations where you want a centralized simulation of all inputs. Many game genres use Client inputs and centralized simulation to guarantee the fairness of actions or the stability of the physics simulations.
In situations where Clients have low processing power. If the Clients don't have sufficient processing power to simulate the World it makes sense to send inputs and just display the replicated results on the Clients.
In situations where determinism is important. RTS and fighting games will use CoherenceInput and rollback to process input events in a shared (not centralized) and deterministic way so that all Clients simulate the same conditions and produce the same results.
coherence currently only supports using CoherenceInput in a centralized way where a single Simulator is set up to process all inputs and replicate the results to all Clients.
Setting up an object for server-side simulation using CoherenceInput and CoherenceSync is done in three steps:
The simulation type of the CoherenceSync component is set to Server Side With Client Input
Setting the simulation type to this mode instructs the Client to automatically transfer State Authority for this object to the Simulator that is in charge of simulating inputs on all objects.
Each simulated CoherenceSync component is able to define its own, unique set of inputs for simulating that object. An input can be one of:
Button. A button input is tracked with just a binary on/off state.
Button Range. A button range input is tracked with a float value from 0 to 1.
Axis. An axis input is tracked as two floats from -1 to 1 in both the X and Y axis.
String. A string value representing custom input state. (max length of 63 characters)
To declare the inputs used by the CoherenceSync component, the CoherenceInput component is added to the object. The input is named and the fields are defined.
In this example, the input block is named Player Movement and the inputs are a 2D axis for movement and a jump button.
In order for the inputs to be simulated on CoherenceSync objects, they must be optimized through baking.
If the CoherenceInput fields or name is changed, then the CoherenceSync object must be re-baked to reflect the new fields/values.
When a Simulator is running it will find objects that are set up using CoherenceInput components and it will automatically assume authority and perform simulations. Both the Client and Simulator need to access the inputs of the CoherenceInput of the replicated object. The Client uses the Set* methods and the Simulator uses the Get* methods to access the state of the inputs of the object. In all of these methods, the name parameter is the same as the Name field in the CoherenceInput component.
Client-Side Set* Methods
public void SetButtonState(string name, bool value)
public void SetButtonRangeState(string name, float value)
public void SetAxisState(string name, Vector2 value)
public void SetStringState(string name, string value)
Simulator-Side Get* Methods
public bool GetButtonState(string name)
public float GetButtonRangeState(string name)
public Vector2 GetAxisState(string name)
public string GetStringState(string name)
For example, if the jump button is pressed, this can be passed from Client to Simulator via the "Jump
" input. Similarly, horizontal and vertical movement is passed via the "Move
" input.
The Simulator can access the state of the input to perform simulations on the object which are then reflected back to the Client, just like any replicated object.
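As a rough illustration of both sides (the input and axis names must match what is defined on the CoherenceInput component; gating on IsSimulator is just one simple way to separate the two roles):

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class PlayerMovement : MonoBehaviour
{
    public CoherenceInput input;
    public float speed = 3f;

    void Update()
    {
        if (Coherence.SimulatorUtility.IsSimulator)
        {
            // Simulator: read the replicated inputs and simulate the object.
            Vector2 move = input.GetAxisState("Move");
            bool jump = input.GetButtonState("Jump");

            transform.position += new Vector3(move.x, 0f, move.y) * speed * Time.deltaTime;
            if (jump)
            {
                // trigger the jump on the simulated object
            }
        }
        else
        {
            // Client with Input Authority: write the local input state.
            input.SetAxisState("Move", new Vector2(Input.GetAxis("Horizontal"),
                                                   Input.GetAxis("Vertical")));
            input.SetButtonState("Jump", Input.GetKey(KeyCode.Space));
        }
    }
}
```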
Each object only accepts inputs from one specific Client, called the object's Input Authority.
When a Client spawns an object it automatically becomes the Input Authority for that object. The object's creator will retain control over the object even after state authority has been transferred to the Simulator.
If an object is spawned directly by the Simulator, you will need to assign the Input Authority manually. Use the TransferAuthority method on the CoherenceSync component to assign or re-assign a Client that will take control of the object:
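A minimal sketch; the ClientID type name and the exact TransferAuthority signature are assumptions based on the description above, and obtaining the target ID from the ClientConnection class is not shown here:

```csharp
using Coherence;         // namespace of ClientID assumed
using Coherence.Toolkit;
using UnityEngine;

public class InputAuthorityAssigner : MonoBehaviour
{
    public CoherenceSync sync;

    // clientId would typically come from the ClientConnection of the player
    // that should control this object (see the Client connections page).
    public void GiveControlTo(ClientID clientId)
    {
        sync.TransferAuthority(clientId);
    }
}
```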
The ClientId used to specify Input Authority can currently only be accessed from the ClientConnection class. For detailed information about setting up the ClientConnection Prefab, see the Client connections page.
Use the OnInputAuthority and OnInputRemote events on the CoherenceSync component to be notified whenever an object changes input authority.
Only the object's current State Authority is allowed to transfer Input Authority.
In order to get notified when the Simulator (or host) takes state authority of the input you can use the OnInputSimulatorConnected event from the CoherenceSync component.
The OnInputSimulatorConnected event can also be raised on the Simulator or host if they have both input and state authority over an entity. This allows the session host to use inputs just like any other Client but might be undesirable if input entities are created on the host and then have their input authority transferred to the Clients.
To solve this you can check the CoherenceSync.IsSimulatorOrHost flag in the callback:
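For instance (a hedged sketch; wire the method to the OnInputSimulatorConnected event via the inspector or in code):

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class InputSimulatorConnectedHandler : MonoBehaviour
{
    public CoherenceSync sync;

    // Hooked up to the OnInputSimulatorConnected event on CoherenceSync.
    public void OnInputSimulatorConnected()
    {
        // Ignore the event unless we really are the Simulator or the host.
        if (!sync.IsSimulatorOrHost)
        {
            return;
        }

        // Safe to start processing inputs authoritatively here.
    }
}
```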
The CoherenceLiveQuery component can be used to limit the visible portion of the Game World that a player is allowed to see. The Replication Server filters out networked objects that are outside the range of the LiveQuery so that players can't cheat by inspecting the incoming network traffic.
When a query component is placed on a Game Object that is set to Server Side With Client Input, the visibility query will be applied to the Game Object's Input Authority (i.e., the player) while the component remains in control of the State Authority (i.e., the Simulator). This prevents players from viewing other parts of the map by simply manipulating the radius or position of the query component.
See Area of interest for more information on how to use queries.
Using Server-side simulation means a significantly longer delay between the Client providing input and the game state being updated, compared to pure Client-side simulation. That's because of the time required for the input to be sent to the Simulator, processed, and then for the resulting updates to the object to be returned across the network. This round-trip time results in an input lag that can make controls feel awkward and slow to respond.
If you want to use a Server-authoritative setup without sacrificing input responsiveness, you need to use Client-side prediction. With Client-side prediction enabled, incoming network data is ignored for one or more bindings, allowing the Client to predict those values locally. Usually, position and rotation are predicted for the local player, but you can toggle Client-side prediction for any binding in the Configuration window.
By processing inputs both on the Client and on the Server, the Client can make a prediction of where the player is heading without having to wait for the authoritative Server response. This provides immediate input feedback and a more responsive playing experience.
Note that inputs should not be processed for Clients that neither have State Authority nor Input Authority. That's because we can only predict the local player; remote players and other networked objects are synced just as normal.
With Client-side prediction enabled, the predicted Client state will sometimes diverge from the Server state. This is called misprediction. When misprediction occurs, you will need to adjust the Client state to match the Server state in one way or another. This is called Server Reconciliation.
There are many possible approaches to Server Reconciliation and coherence doesn't favor one over another. The simplest method is to snap the Client state to the Server state once a misprediction is detected. Another method is to continuously blend from Client state to Server state.
Misprediction detection and reconciliation can be implemented in a binding's OnNetworkSampleReceived
event callback. This event is called every time new network data arrives, so we can test the incoming data to see if it matches with our local Client state.
The misprediction threshold is a measure of how far the prediction is allowed to drift from the Server state. Its value will depend on how fast your player is moving and how much divergence is acceptable in your particular game.
Remember that incoming sample data is delayed by the round-trip time to the Server, so it will trail the currently predicted state by at least a few frames, depending on network latency. The simulationFrame
parameter tells you the exact frame at which the sample was produced on the authoritative Server.
For better accuracy, incoming network samples should be compared to the predicted state at the corresponding simulation frame. This requires keeping a history buffer of the predicted states in memory.
This feature is in the experimental phase.
A client-hosted session is an alternative way to use CoherenceInput in Server Side With Client Input mode that doesn't require a Simulator.
A Client that created a Room can join as a Host of this Room. Just like a Simulator, the Host will take over the State Authority of the CoherenceInput objects while leaving the Input Authority in the hands of the Client that created those objects.
The difference between a Host and a Simulator is that the Host is still a standard client connection, which means it counts towards the Room's client limit and will show up as a client connection in the connection list.
To connect as a Host all we have to do is call CoherenceMonoBridge.ConnectAsHost:
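A hedged sketch only; the exact parameters of ConnectAsHost (and how you obtain the Room's endpoint data after creating it) depend on the SDK version, so the EndpointData type used here is an assumption:

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class HostConnector : MonoBehaviour
{
    public CoherenceMonoBridge monoBridge;

    // roomEndpoint is whatever endpoint/room data you received when creating
    // the Room; the parameter type here is an assumption.
    public void StartHosting(Coherence.Connection.EndpointData roomEndpoint)
    {
        monoBridge.ConnectAsHost(roomEndpoint);
    }
}
```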
The way you get information about the world is through LiveQueries. We set criteria for what part of the world we are interested in at each given moment. That way, the Replicator won’t send information about everything that is going on in the Game World everywhere, at all times.
Instead, we will just get information about what’s within a certain area, kind of like moving a torch to look around in a dark cave.
More complex area of interest types are coming in future versions of coherence.
A LiveQuery is a cube that defines the area of interest in a particular part of the World. It is defined by its position and its extent (half the side of the cube). There can be multiple LiveQueries in a single scene.
A classic approach is to put a LiveQuery on the camera and set the extent to correspond to the far clipping plane or visibility distance.
Moving the GameObject containing the LiveQuery will also notify the Replication Server that the query for that particular Game Client has moved.
In addition to the LiveQuery, coherence also supports filtering objects by tag. This is useful when you have some special objects that should always be visible regardless of World position.
To create a TagQuery, right click a GameObject in the scene and select coherence > TagQuery from the context menu.
All networked GameObjects with matching tags will now be visible to the Client. The coherence tag can be any string and can be configured separately from the Unity tag
in the Advanced Settings section of the CoherenceSync
component.
Tags and TagQueries can be updated at any time while the application is running, either from the Unity inspector or setting CoherenceSync.coherenceTag
and CoherenceTagQuery.coherenceTag
with code.
Currently, only a single tag per GameObject and TagQuery is supported. To include objects with different tags, you can create multiple TagQuery objects for each tag.
In the future, we plan to integrate TagQueries with LiveQueries allowing combined query restrictions, e.g., only show objects with tag "red" within an extent of 50.
Queries can also be used for cheat prevention, see Server authoritative setup for more information.
No matter how fast the internet becomes, conserving bandwidth will always be important. Some Game Clients might be on poor quality mobile networks with low upload and download speeds, or have high ping to the Replication Server and/or other Clients, etc.
Additionally, sending more data than is required consumes more memory and unnecessarily burdens the CPU and potentially GPU, which could add to performance issues, and even to quicker battery drainage.
In order to optimize the data we are sending over the network, we can employ various techniques built into the core of coherence.
Delta-compression (automatic). When possible, only send differences in data, not the entire state every frame.
Compression and quantization (automatic and configurable). Various data types can be compressed to consume less bandwidth than they naturally would.
Simulation frequency (configurable). Most Entities do not need to be simulated at 60+ frames per second.
Levels of detail (configurable). Entities need to consume less and less bandwidth the farther away they move from the observer.
Area of interest. Only replicate what we can see.
Without a special configuration, Entity data is captured at the highest possible frequency and sent to the Replication Server. This often generates more data than is needed to efficiently replicate the Entity's state across the network.
On a Simulator, we can limit the framerate globally using Unity's built-in static variable targetFrameRate.
coherence will automatically limit the target framerate of uploaded Simulators to 30 frames per second. We plan to make it possible to lift this restriction in the future. Check back for updates in the next couple of releases.
Replication frequency can be configured for each binding individually in the Prefab Optimize window. The Sample Rate controls how many times per second values are sampled and synced over the network.
Since the default packet send frequency of the Replication Server is 20Hz, sample rates above that value won't have any benefits unless you increase the Replication Server send frequency, too. See here how to adjust the Replication Server send frequency.
High sample rates increase replication accuracy and reduce latency, but consume more bandwidth. The upper limit at which samples can be quantized is 60 Hz, so sample rates beyond that are generally not recommended. It is not possible to change sampling frequency at runtime.
Values that don't change over time do not consume any bandwidth. Only bindings with updated values will be synced over the network.
For an object to appear to move smoothly on the screen, it must be rendered at a high rate, usually 60 frames per second or more. However, depending on the settings in your project, and the conditions of your internet connection, data may not always arrive at a smooth 60 frames per second across the network. This is completely okay, but in order to make state changes appear smooth on the Client, we use interpolation.
Interpolation is a type of estimation, a method of constructing new data points within the range of a discrete set of known data points.
When you select a variable to replicate in the Configure window, it is automatically assigned a default interpolation setting. The default settings are usually good to get started, but you can modify or create your own interpolation settings that better fit your specific needs.
In the Configure window, each binding displays its interpolation settings next to it.
Built-in interpolation settings for position and rotation are provided out-of-the-box, but you are free to create your own and use them instead.
You can also create an interpolation settings asset: Assets > Create > coherence > Interpolation Settings
Linear interpolation blends values by moving along straight lines from sample to sample. This makes the networked object move in a zig-zag pattern, but this is usually not noticeable when sampled at a sufficient rate and with some additional smoothing applied (see section Other settings > Smoothing below).
Spline interpolation blends between samples using the Catmull-Rom spline method which gives a smoother movement than linear interpolation without any sharp corners, at the cost of increased latency (see: Latency below). Spline interpolation requires at least 4 samples to produce good results.
If interpolation type is set to None, the value will simply snap to the most recent sample without any blending. This is recommended for binding types that have no obvious blending methods, e.g., string, byte array and object references.
You could also implement your own interpolation type (see: Custom Interpolators below).
Interpolation will add some additional latency to synced bindings. That's because incoming network samples must first be put in a buffer that is then used to calculate the interpolated value.
The amount of latency depends on the binding's sample rate and interpolation type. The lower the sample rate, the higher the latency.
Linear Interpolation requires a headroom of one sample while Spline Interpolation requires two samples. If interpolation type is set to None, there is no additional latency added, and samples will be rendered as soon as they arrive over the network.
Example: A Prefab that uses Spline Interpolation for its position binding with a sample rate of 30 Hz and network latency of 100 ms will appear to be 2 * 1/30 + 0.100 ≈ 0.17 s behind the local time.
Since a Prefab can define separate interpolation types and sample rates for its different bindings, it is possible that not all bindings share the same latency. If, for example, position and rotation are interpolated with different latency, the position and rotation of a vehicle might not match on the remote object.
There are a few settings you can tweak:
Smoothing
Smooth Time: additional smoothing can be applied (using SmoothDamp
) to clear out any jerky movement after regular interpolation has been performed.
Max Smoothing Speed: the maximum speed at which the value can change, unless teleporting.
Latency
Network Latency Factor: fudge factor applied to the network latency. A factor of 1 means adapting to network latency with no margin, so the incoming sample must arrive at its exact predicted time to prevent the buffer from becoming stale. In general, a factor of 1.1 is recommended to prevent network fluctuations from causing dead reckoning due to latency peaks.
Network Latency Cooldown: when network latency decreases, wait this amount of time (in seconds) before recalculating network latency. This prevents network fluctuations from causing dead reckoning due to latency valleys.
Additional Latency: increases latency by a fixed amount (in seconds) to add an additional margin for the sample buffer.
Overshooting
Max: how far into the dead reckoning to venture when the time fraction exceeds 100%, as a percentage of the sample rate.
Retraction: how fast to pull back to 100% when overshooting the allowed dead reckoning maximum (in seconds)
Teleport Distance: if two consecutive samples are further apart than this, the value will teleport or snap to the new sample immediately without interpolating or smoothing in between.
Dead reckoning is a form of replicated computing so that everyone participating in a game winds up simulating all the entities (typically vehicles) in the game, albeit at a coarse level of fidelity.
The basic notion of dead reckoning is an agreement in advance on a set of algorithms that can be used by all player nodes to extrapolate the behavior of entities in the game, and an agreement on how far reality should be allowed to get from these extrapolation algorithms before a correction is issued.
Interpolation settings can be tweaked in Play mode where you can see the result on the screen immediately, but the changes you make will be reverted again once you exit Play mode. This is because - in Play mode - a copy of the interpolation settings is created.
Remember that interpolation only happens on remote objects, so you need to select a remote object to experiment with interpolation settings in Play mode.
Interpolation works both in Baked and Reflection modes. You can change these settings at runtime via the Configure window (editor) or by accessing the binding and changing the interpolation settings yourself:
The Linear and Spline interpolators that are provided by coherence are sufficient for most common use cases, but you can also implement your own interpolation algorithm by sub-classing Interpolator
.
You can choose to override one or more of the base methods depending on which type or types of values you want to support. The method signatures usually take two adjacent samples and a fractional value (from 0 to 1) to blend between them. There are also method signatures that provide four samples, which is useful for the Catmull-Rom spline interpolation.
Here's an example of a custom interpolator that makes the remote object appear at an offset distance from the object's actual position.
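Since the exact Interpolator base-class signatures vary between SDK versions, treat the following as a shape sketch rather than drop-in code; it only assumes an overridable blend method taking two Vector3 samples and a 0-1 fraction, as described above.

```csharp
using UnityEngine;
using Coherence.Interpolation; // namespace assumed

// Interpolates normally, then pushes the result away from the real position.
public class OffsetInterpolator : Interpolator
{
    public Vector3 offset = new Vector3(2f, 0f, 2f);

    // Signature assumed: two adjacent samples plus a 0-1 blend fraction.
    public override Vector3 Interpolate(Vector3 value0, Vector3 value1, float t)
    {
        return Vector3.Lerp(value0, value1, t) + offset;
    }
}
```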
The NumberOfSamplesToStayBehind property controls the internal latency.
Catmull-Rom splines require four samples to blend between, so its NumberOfSamplesToStayBehind property must be set to 2.
Extrapolation uses historical data to predict the future state of a binding. By predicting the state of other players before their network data actually arrives, network lag can be reduced or removed entirely. This will cause mispredictions that need to be corrected when the incoming network data does not match the predicted state.
Extrapolation is not yet supported by coherence.
The coherence Settings window is located in coherence / Settings.
The Replication Server replicates the state of the world to all connected Clients and Simulators.
To understand what is happening in the Game World, and to be able to contribute your simulated values, you need to connect to a Replication Server. The Replication Server acts as a central place where data is received from and distributed to interested Clients.
You can run a Replication Server in the coherence Cloud, but we recommend that you first start one locally on your computer. coherence is designed so you can easily develop everything locally first, before deploying to the Cloud.
Replication Servers replicate data defined in schema files. The schema's inspector provides all the tools needed to start a Replication Server.
Run the Replication Server by clicking the Run button or copy the run command to the clipboard via clicking the copy run-command icon located to the right of it.
A terminal/command line will pop up, running your Server locally.
The port the Replication Server will use. Rooms: 42001, Worlds: 32001.
The web port used for WebGL connections. Rooms: 42001, Worlds: 32002.
The Replication Server send frequency. Default: 20 packets/s.
The Replication Server receive frequency. Default: 60 packets/s.
You can also start the Replication Server from the coherence menu or by pressing Ctrl+Shift+Alt+N.
The Replication Server supports different packet frequencies for sending and receiving data.
The send frequency is the frequency that the Replication Server uses to send packets to a given Client. Each Client can be sent packets at different times, but the packet receive frequency for any Client will not exceed the Replication Server's send frequency.
The receive frequency is the maximum frequency at which the Replication Server expects to receive packets from any Client, before throttling. If a Client sends packets to the Replication Server at a higher than expected frequency, that Client will receive a command to slow down sending. If the Client doesn't respect the command to throttle packet sending then the Client is disconnected after a time. All extra packets received by the Replication Server, after a threshold based on the receive frequency, are dropped and not processed. This is to prevent malicious Clients from flooding the Replication Server. The Unity SDK handles throttling automatically.
It is possible for the Replication Server to temporarily request Clients to reduce their packet send rates if the processing load of the Replication Server is too high. This is automatic and send rates from the affected Clients are commanded to resume once the load is reduced.
Low and consistent send rates from the Replication Server allow for optimal bandwidth use and still support a smooth stream of updates to Clients. Try different rates during local replication tests to see what works well for your game.
For a locally hosted Replication Server, you can edit the send and receive frequencies by using the CLI arguments --send-frequency and --recv-frequency, or by changing them in coherence Settings -> Local Replication Server -> Send Frequency / Recv Frequency.
On the dashboard, the packet frequencies for sending and receiving data can be adjusted per project too. It is part of the Advanced Config section of Worlds create/edit and Rooms pages of the dashboard.
Adjusting the send and receive frequencies on the dashboard is available for paid plans.
When the Replication Server is running, you connect to it using the Connect
method.
After trying to connect you might be interested in knowing whether the connection succeeded. The Connect call will run asynchronously and take around 100 ms to finish, or longer if you connect to a remote Server.
The OnLiveQuerySynced event is triggered when the initial game state has been synced to the client. More specifically, it is fired when all entities found by the Client's first Live Query have finished replicating. This is the last step of the connection process and is usually a good place to start the game simulation.
Check Run in Background in the Unity settings under Project Settings > Player so that the Clients continue to run even when they're not the active window.
To connect with multiple Clients locally, publish a build for your platform (File > Build and Run, details in Unity docs). Run the Replication Server and launch the build any number of times. You can also enter Play Mode in the Unity Editor.
For Mac Users: You can open new instances of an application from the Terminal:
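For example, the open command can launch an additional instance of an already-running app (replace MyGame.app with the name of your build):

```
open -n ./MyGame.app
```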
To connect to Cloud-hosted Replication Servers, see Rooms API and Worlds API documentation. We also have several code examples available under the PlayResolver article.
Simulators per room can be enabled in the dashboard for the project. The Simulator used is matched according to the Simulator slug in the RuntimeSettings scriptable object file. This is set automatically when you upload a Simulator.
For each new Room, a Simulator will be created with the command line parameters described in the Simulators section. The Simulator is shutdown automatically when the Room is closed.
World Simulators are started and shut down with the World. They can be enabled and assigned in the Worlds section of the Developer Portal.
World simulation servers are started with the command line parameters described in the Simulators section.
A Simulator build is a built Unity Player for the Linux 64-bit platform that you can upload to coherence straight from the Unity Editor.
Open Coherence Hub and select the Simulators tab.
From here you can build and upload Simulators.
Click the little info icon in the top right corner to learn more about Simulators and how to build them properly.
You can change your Simulator build options by editing the SimulatorBuildOptions object, or in the coherence Hub Simulators tab.
There are several settings you might want to change.
Specify the scenes you want to get in the build via the Scenes To Build field.
For a local build, you can choose to enable/disable the Headless Mode by ticking the checkbox. For a cloud build, Headless Mode is always enabled by default.
Make sure you meet the requirements:
Press the coherence Hub > Simulators > Build And Upload Headless Linux Client button.
When the build is finished, it will be uploaded to your currently selected organization and project in the Developer Portal.
You'll see in the developer dashboard when your Simulator is ready to be associated with a Room or World.
Target frame rate on Simulator builds is forced at 30.
This feature is experimental, please make sure you make a backup of your project beforehand.
You can set the values for the Build Size Optimizations in the drop-down list of the build configuration inspector. It looks like this:
Select the desired optimizations depending on your needs.
Once your Simulator is built and uploaded, you'll be prompted with the option to revert the settings to the ones you had applied before building. This is to avoid these settings from affecting other builds you make.
coherence Input Queues are backed by a rolling buffer of inputs transmitted between the Clients. This buffer can be used to build a fully deterministic simulation with client-side prediction, rollback, and input delay. This game networking model is often called GGPO (Good Game Peace Out).
Input delay allows for smooth, synchronized netplay with almost no negative effect on the user experience. The way it works is that input is scheduled to be processed X frames in the future. Consider a fighting game scenario with two players. At frame 10, Player A presses a kick button that is scheduled to be executed at frame 13. This input is immediately sent to Player B. With a decent internet connection, there's a very good chance that Player B will receive that input even before their frame 13. Thanks to this, the simulation is always in sync and can progress steadily.
Prediction is used to run the simulation forward even in the absence of inputs from other players. Consider the scenario from the previous paragraph - what if Player B doesn't receive the input on time? The answer is very simple - we just assume that the input state hasn't changed and progress with the simulation. As it turns out this assumption is valid most of the time.
Rollback is used to correct the simulation when our predictions turn out wrong. The game keeps historical states of the game for past frames. When an input is received for a past simulation frame the system checks whether it matches the input prediction made at that frame. If it does we don't have to do anything (the simulation is correct up to that point). If it doesn't match, however, we need to restore the simulation state to the last known valid state (last frame which was processed with non-predicted inputs). After restoring the state we re-simulate all frames up to the current one, using the fresh inputs.
GGPO is not recommended for FPS-style games. The correct rollback networking solution for those is planned to be added in the future.
In a deterministic simulation, given the same set of inputs and a state we are guaranteed to receive the same output. In other words, the simulation is always predictable. Deterministic simulation is a key part of the GGPO model, as well as a lockstep model because it lets us run exactly the same simulation on multiple Clients without a need for synchronizing big and complex states.
Implementing a deterministic simulation is a non-trivial task. Even the smallest divergence in simulation can lead to a completely different game outcome. This is usually called a desync. Here's a list of common determinism pitfalls that have to be avoided:
Using Update
to run the simulation (every player might run at a different frame rate)
Using coroutines, asynchronous code, or system time in a way that affects the simulation (anything time-sensitive is almost guaranteed to be non-deterministic)
Using Unity physics (it is non-deterministic)
Using random numbers generator without prior seed synchronization
Non-symmetrical processing (e.g. processing players by their spawn order which might be different for everyone)
Relying on floating-point numbers behaving identically across different platforms, compilations or processor types
We'll create a simple, deterministic simulation using provided utility components.
This is the recommended way of using Input Queues since it greatly reduces the implementation complexity and should be sufficient for most projects. If you'd prefer to have full control over the input code feel free to use the CoherenceInput
and InputBuffer
directly.
Our simulation will synchronize the movement of multiple Clients, using the rollback and prediction in order to cover for the latency.
Start by creating a Player
component and a Prefab for it. We'll use the client connection system to make our Player
represent a session participant and automatically spawn the selected Prefab for each player that connects to the Server. The Player
will also be responsible for handling inputs using the CoherenceInput
component.
Create a Prefab from a cube, sphere, or capsule, so that it will be visible in the scene. That way it will be easier to verify visually whether the simulation works later.
Our Player
code looks as follows:
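A hedged sketch of the Player component; it only relies on the CoherenceInput axis methods and the "Mov" axis mentioned below, so the exact listing in your project may differ:

```csharp
using Coherence.Toolkit;
using UnityEngine;

public class Player : MonoBehaviour
{
    private CoherenceInput input;

    void Awake()
    {
        input = GetComponent<CoherenceInput>();
    }

    // Called by the central simulation code for the local player to push
    // the sampled movement into the "Mov" axis of CoherenceInput.
    public void SetMovement(Vector2 movement)
    {
        input.SetAxisState("Mov", movement);
    }

    // Called by the central simulation code for every player (local and
    // remote) when a frame is simulated or re-simulated.
    public Vector2 GetMovement()
    {
        return input.GetAxisState("Mov");
    }
}
```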
The GetMovement
and SetMovement
will be called by our "central" simulation code. Now that we have our Player
defined let's prepare a Prefab for it. Create a GameObject and attach the Player
component to it, using the CoherenceSync
inspector create a Prefab. The inspector view for our Prefab should look as follows:
A couple of things to note:
A Mov
axis has been added to the CoherenceInput which will let us sync the movement input state
In order for inputs to be processed in a deterministic way, we need to use fixed simulation frames. Tick the CoherenceInput > Use Fixed Simulation Frames checkbox.
Since our player is the base of the Client connection we must set it as the connection Prefab in the CoherenceMonoBridge
and enable the global query:
Before we move on to the simulation, we need to define our simulation state which is a key part of the rollback system. The simulation state should contain all the information required to "rewind" the simulation in time. For example, in a fighting game that would be the position of all players, their health, and perhaps a combo gauge level. In a shooting game, this could be player positions, their health, ammo, and map objective progression.
In the example we're building, player position is the only state. We need to store it for every player:
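A minimal sketch of such a state, assuming positions are kept per player in a stable order agreed on by all Clients, could look like this:

```csharp
// Minimal simulation state: one position per player, in a stable, agreed-upon order.
using UnityEngine;

public struct SimulationState
{
    public Vector3[] Positions;

    // Deep copy so stored snapshots are not affected by later simulation steps.
    public SimulationState DeepCopy()
    {
        return new SimulationState { Positions = (Vector3[])Positions.Clone() };
    }
}
```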
The state above assumes the same number and order of players in the simulation. The order is guaranteed by the CoherenceInputSimulation; however, handling a variable number of Clients is up to the developer.
Simulation code is where all the logic should happen, including applying inputs and moving our Players:
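The original class is not included here; the sketch below only illustrates its shape, under the assumption that CoherenceInputSimulation<TState> exposes the callbacks described below. The generic parameter, method signatures, and the players/localPlayer bookkeeping are all assumptions for illustration.

```csharp
// Sketch of the central simulation (assumed base-class API and signatures).
using System.Collections.Generic;
using UnityEngine;
using Coherence.Toolkit; // assumed namespace

public class Simulation : CoherenceInputSimulation<SimulationState> // assumed generic base
{
    public float Speed = 3f;

    private readonly List<Player> players = new List<Player>(); // joined players, in join order
    private Player localPlayer;                                  // the player we own

    protected override void SetInputs() // assumed signature
    {
        var move = new Vector2(Input.GetAxisRaw("Horizontal"), Input.GetAxisRaw("Vertical"));
        localPlayer.SetMovement(move);
    }

    protected override void Simulate(float deltaTime) // assumed signature
    {
        // Apply everyone's inputs; this also runs again after a misprediction.
        foreach (var player in players)
        {
            var move = player.GetMovement();
            player.transform.position += new Vector3(move.x, 0f, move.y) * Speed * deltaTime;
        }
    }

    protected override void Rollback(SimulationState state) // assumed signature
    {
        for (int i = 0; i < players.Count; i++)
            players[i].transform.position = state.Positions[i];
    }

    protected override SimulationState CreateState() // assumed signature
    {
        var positions = new Vector3[players.Count];
        for (int i = 0; i < players.Count; i++)
            positions[i] = players[i].transform.position;
        return new SimulationState { Positions = positions };
    }

    protected override void OnClientJoined(CoherenceClientConnection connection) // assumed signature
    {
        players.Add(connection.GameObject.GetComponent<Player>());
        if (connection.IsMyConnection)
            localPlayer = players[players.Count - 1];
        SimulationEnabled = players.Count == 2; // simplified start condition for this example
    }

    protected override void OnClientLeft(CoherenceClientConnection connection) // assumed signature
    {
        players.Remove(connection.GameObject.GetComponent<Player>());
        SimulationEnabled = false;
    }
}
```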
SetInputs is called by the system when it's time for our local Player to update its input state using the CoherenceInput
Simulate is called when it's time to simulate a given frame. It is also called during frame re-simulation after a misprediction - don't worry though, the complex part is handled by the CoherenceInputSimulation internals; all you need to do in this method is apply inputs from the CoherenceInput to run the simulation
Rollback is where we need to set the simulation state back to how it was at a given frame. The state is already provided in the state parameter; we just need to apply it
CreateState is where we create a snapshot of our simulation so it can be used later in case of a rollback
OnClientJoined and OnClientLeft are optional callbacks. We use them here to start and stop the simulation depending on the number of connected Clients
SimulationEnabled is set to false by default. That's because in a real-world scenario the simulation should start only after all Clients have agreed for it to start, on a specific frame chosen, for example, by the host.
Starting the simulation on a different frame for each Client is likely to cause a desync (as is joining in the middle of a session without prior simulation state synchronization). Simulation start synchronization is, however, out of the scope of this guide, so in our simplified example we simply assume that Clients don't start moving immediately after joining.
As a final step, attach the Simulation script to the MonoBridge object in the scene and link the MonoBridge back to the Simulation:
That's it! Once you build a client executable you can verify that the simulation works by connecting two Clients to the Replication Server. Move one of the Clients using arrow keys while observing the movement being synced on the other one.
Because the FixedNetworkUpdate runs at a different (usually lower) rate than Unity's Update loop, polling inputs using functions like Input.GetKeyDown is susceptible to input loss, i.e. keys that were pressed during the Update loop might not show up as pressed in the FixedNetworkUpdate.
To illustrate why this happens, consider the following scenario: given that Update runs five times for each FixedNetworkUpdate, if we polled inputs from the FixedNetworkUpdate there's a chance that an input was fully processed within the five Updates in-between FixedNetworkUpdates, i.e. a key was "down" on the first Update, "pressed" on the second, and "up" on a third one.
To prevent this issue from occurring, you can use the FixedUpdateInput class:
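The snippet itself is not reproduced here; a minimal sketch of its use, assuming FixedUpdateInput mirrors the familiar UnityEngine.Input polling methods (member names are an assumption):

```csharp
// Sketch: polling keys safely from the fixed network update (assumed API shape).
using UnityEngine;
using Coherence.Toolkit; // assumed namespace for FixedUpdateInput

public class JumpInput : MonoBehaviour
{
    [SerializeField] private FixedUpdateInput fixedInput; // lives next to CoherenceInput on the Prefab

    // Call this from code driven by the fixed network update, not from Update.
    public bool JumpPressed()
    {
        // Presses sampled during Update are prolonged until the next fixed network update,
        // so nothing is lost even when Update runs several times in-between.
        return fixedInput.GetKeyDown(KeyCode.Space); // method name assumed
    }
}
```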
The FixedUpdateInput works by sampling inputs at Update and prolonging their lifetime to the network FixedNetworkUpdate so they can be processed correctly there. For our last example, that would mean "down" & "pressed" being registered in the first FixedNetworkUpdate after the initial five updates, followed by an "up" state in the subsequent FixedNetworkUpdate.
The FixedUpdateInput works only with the legacy input system (UnityEngine.Input).
There's a limit to how many frames can be predicted by the Clients. This limit is controlled by CoherenceInput.InputBufferSize. When Clients try to predict too many frames into the future (more frames than the size of the buffer), the simulation will issue a pause. This pause affects only the local Client. As soon as the Client receives enough inputs to run another frame, the simulation will resume.
To get notified about the pause, use the OnPauseChange(bool isPaused) method from the CoherenceInputSimulation:
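For example, a fragment of the Simulation class could toggle a pause overlay. Only the OnPauseChange signature comes from the text above; the pauseScreen field is illustrative, and the partial keyword is just a way to present the fragment separately.

```csharp
// Fragment of the Simulation class: reacting to input-buffer pauses.
using UnityEngine;

public partial class Simulation
{
    [SerializeField] private GameObject pauseScreen; // illustrative UI object

    protected override void OnPauseChange(bool isPaused)
    {
        // Show a "connection is struggling" overlay while we wait for remote inputs.
        pauseScreen.SetActive(isPaused);
    }
}
```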
This can be used for example to display a pause screen that informs the player about a bad internet connection.
To recover from the time gap created by the pause, the Client will automatically speed up the simulation. The time scale change is gradual and, in the case of a small frame gap, can be unnoticeable. If manual control over the timescale is desired, set the CoherenceMonoBridge.controlTimeScale flag to false.
The CoherenceInputSimulation has a built-in debugging utility that collects various information about the input simulation on each frame. This data can prove extremely helpful in finding a simulation desync point.
The CoherenceInputDebugger can be used outside the CoherenceInputSimulation. It does, however, require the CoherenceInputManager, which can be retrieved through the CoherenceMonoBridge.InputManager property.
Since debugging might induce a non-negligible overhead, it is turned off by default. To turn it on, add a COHERENCE_INPUT_DEBUG scripting define:
From that point on, all the debugging information will be gathered. The debug data is dumped to a JSON file as soon as the Client disconnects. The file can be found in the root directory of the executable (in the case of the Unity Editor, the project root directory) under the name inputDbg_<ClientId>.json, where <ClientId> is the CoherenceClientConnection.ClientId of the local client.
Data handling behavior can be overridden by setting the CoherenceInputDebugger.OnDump delegate, where the string parameter is a JSON dump of the data.
The debugger is available as a property in the simulation base class: CoherenceInputSimulation.Debugger. Most of the debugging data is recorded automatically; however, you are free to append any arbitrary information to a frame's debug data, as long as it is JSON-serializable. This is done using the CoherenceInputDebugger.AddEvent method:
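For instance, from inside the simulation you might tag the current frame with a custom event. The payload is free-form as long as it serializes to JSON; the exact AddEvent parameter shape is an assumption.

```csharp
// Fragment: appending a custom event to the current frame's debug data.
Debugger.AddEvent("PlayerJumped", new { playerId = 2, height = 1.5f }); // parameter shape assumed
```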
Since the simulation can span an indefinite number of frames, it might be wise to limit the number of debug frames kept by the debugging tool (it's unlimited by default). To do this, use the CoherenceInputDebugger.FramesToKeep property. For example, setting it to 1000 will instruct the debugger to keep only the latest 1000 frames' worth of debugging information in memory.
Since the debugging tool uses JSON as its serialization format, any data that is part of the debug dump must be JSON-serializable. An example of this is the simulation state. The simulation state from the quickstart example is not JSON-serializable by default, because Unity's Vector3 doesn't serialize well out of the box. To fix this, we need to give the JSON serializer a hint:
With the JsonProperty attribute, we can control how a given field, property, or class will be serialized. In this case, we've instructed the JSON serializer to use the custom UnityVector3Converter for serializing the vectors.
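Revisiting the state struct from the quickstart, the hint could look like the sketch below. It assumes the debugger serializes with Newtonsoft.Json and that UnityVector3Converter is the converter mentioned above; the attribute usage shown is one way to wire it up.

```csharp
// Sketch: telling Newtonsoft.Json how to serialize the Vector3 array in the state.
using Newtonsoft.Json;
using UnityEngine;

public struct SimulationState
{
    // Each element of the array is serialized with the custom Vector3 converter.
    [JsonProperty(ItemConverterType = typeof(UnityVector3Converter))]
    public Vector3[] Positions;
}
```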
To find a problem in the simulation, we can compare the debug dumps from multiple clients. The easiest way to find a divergence point is to search for a frame where the hash differs for one or more of the clients. From there, one can inspect the inputs and simulation states from previous frames to find the source of the problem.
Here's the debug data dump example for one frame:
Explanation of the fields:
Frame - frame of this debug data
AckFrame - the common acknowledged frame, i.e. the lowest frame for which inputs from all clients have been received and are known to be valid (not mispredicted)
ReceiveFrame - the common received frame, i.e. the lowest frame for which inputs from all clients have been received
AckedAt - the frame at which this frame was acknowledged, i.e. marked as known to be valid (not mispredicted)
MispredictionFrame - a frame that is known to be mispredicted, or -1 if there's no misprediction
Hash - hash of the simulation state. Available only if the simulation state implements the IHashable interface
Initial state - the original simulation state at this frame, i.e. the one before rollback and resimulation
Initial inputs - the original inputs at this frame, i.e. the ones that were used for the first simulation of this frame
Updated state - the state of the simulation after rollback and resimulation. Available only in case of rollback and resimulation
Updated inputs - inputs after being corrected (post-misprediction). Available only in case of rollback and resimulation
Input buffer states - a dump of the input buffer states for each client. For details on the fields, see the InputBuffer code documentation
Events - all debug events registered in this frame
There are two main variables that affect the behaviour of the InputBuffer:
Initial buffer size - the size of the buffer determines how far into the future the input system is allowed to predict. The bigger the size, the more frames can be predicted without running into a pause. Note that the further we predict, the more unexpected a rollback can be for the player. The InitialBufferSize value can be set directly in code; however, it must be done before the Awake of the baked component, which might require a script execution order configuration.
Initial buffer delay - dictates how many frames must pass before an input is applied. In other words, it defines how "laggy" the input is. The higher the value, the less likely Clients are to run into prediction (because a "future" input is sent to other Clients), but the more unresponsive the game might feel. This value can be changed freely at runtime, even during a simulation (though that is not recommended, because it makes the input feel inconsistent).
The other two options are:
Disconnect on time reset - if set to true, the input system will automatically issue a disconnect on an attempt to resync time with the Server. This happens when the Client's connection was so unstable that, frame-wise, it drifted too far away from the Server. To recover from that situation, the Client performs an immediate "jump" to what it thinks is the actual server frame. There's no easy way to recover from such a "jump" in deterministic simulation code, so the advised action is to simply disconnect.
Use fixed simulation frames - if set to true, the input system will use the IClient.ClientFixedSimulationFrame for simulation; otherwise the IClient.ClientSimulationFrame is used. Setting this to true is recommended for a deterministic simulation.
The fixed network update rate is based on the Fixed Timestep configured through the Unity project settings:
To know the exact fixed frame number that is executing at any given moment, use the IClient.ClientFixedSimulationFrame or CoherenceInputSimulation.CurrentSimulationFrame property.
Creating massive multiplayer worlds
Unity has a well-known limitation of offering high precision positioning only within a few kilometers from the center of the world. A common technique to get around this limitation is to move the whole world underneath the player. This is called floating origin. Here's how you can use floating origin with coherence.
Unity uses 32-bit floating-point numbers to represent the world position of GameObjects in memory. While this format can represent very large values, its precision decreases as the numbers get larger. At a distance of roughly 16,777 kilometers (2^24 meters) from the origin, the precision of a 32-bit float drops to one meter, which means positions can only be represented in steps of one meter or more; a GameObject that far out can only sit at whole-meter increments, not in between, and the steps keep growing with distance. As a result, usable virtual worlds can be limited to a range of as little as 5 km, depending on how precisely GameObjects like bullets need to be tracked.
Having a single floating world origin, as used in single-player games, is not sufficient for multiplayer games, since each player can be located in a different part of the virtual world. For that reason, in coherence, all positions on the Replication Server are stored in absolute coordinates, while each Client has its own floating origin position, to which all of its GameObject positions are relative.
To represent the absolute position of GameObjects on the Replication Server, we use the 64-bit floating-point format. This format allows for sub-1mm precision out to distances of billions of kilometers. To keep the implementation simple, the floating origin position and any GameObject's absolute position are limited to the 32-bit float range, but thanks to the floating origin, positions have 64-bit precision when networked with other Clients.
Here is a simple example of how the floating origin could be used. We will create a script that is attached to the player Prefab and is active only on the Client with authority.
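The original script is not included here; a minimal sketch could look like this. CoherenceManager.TranslateFloatingOrigin is the call described next; its exact signature, and how you restrict the component to the authoritative Client, are assumptions.

```csharp
// Sketch: move the floating origin to the player once it wanders far from the origin.
using UnityEngine;
using Coherence.Toolkit; // assumed namespace

public class FloatingOriginUpdater : MonoBehaviour
{
    public float Threshold = 1000f; // shift once we are this many meters from the current origin

    private void LateUpdate()
    {
        Vector3 position = transform.position;
        if (position.magnitude < Threshold)
            return;

        // CoherenceSync objects are shifted automatically; everything else must be
        // handled separately (see the next script).
        CoherenceManager.TranslateFloatingOrigin(position); // signature assumed
    }
}
```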
Calling CoherenceManager.TranslateFloatingOrigin will shift all CoherenceSync objects by the translated vector, but you have to shift other, non-networked objects yourself. We will create another script which takes care of this.
When your floating origin changes, the CoherenceManager.OnFloatingOriginShifted event is invoked. It contains arguments such as the last floating origin, the new one, and the delta between them. We use the delta to shift back all non-networked game objects ourselves. Since the floating origin is a Vector3 of doubles, we need to use the ToUnityVector3 method to convert it to a Vector3 of floats.
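A sketch of that second script, assuming the event passes an arguments object exposing the delta as a double-precision vector with a ToUnityVector3() helper (event and argument member names beyond those mentioned above are assumptions):

```csharp
// Sketch: shifting non-networked objects back when the floating origin moves.
using UnityEngine;
using Coherence.Toolkit; // assumed namespace

public class NonNetworkedShifter : MonoBehaviour
{
    [SerializeField] private Transform[] nonNetworkedObjects; // scenery, effects, etc.

    private void OnEnable()  => CoherenceManager.OnFloatingOriginShifted += OnShifted; // assumed event
    private void OnDisable() => CoherenceManager.OnFloatingOriginShifted -= OnShifted;

    private void OnShifted(FloatingOriginShiftArgs args) // assumed argument type
    {
        // The delta is a Vector3 of doubles; convert it and move our objects back by it.
        Vector3 delta = args.Delta.ToUnityVector3();
        foreach (var t in nonNetworkedObjects)
            t.position -= delta;
    }
}
```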
To control what happens to your entities when you change your floating origin, you can use CoherenceSync's floatingOriginMode and floatingOriginParentedMode fields. Both are accessible from the inspector under Advanced Settings.
Available options for both fields are:
MoveWithFloatingOrigin - when you change your floating origin, the Entity is moved with it, so its relative position stays the same and its absolute position is shifted.
DontMoveWithFloatingOrigin - when you change your floating origin, the Entity is left behind, so its absolute position stays the same and its relative position is shifted.
Floating Origin Mode dictates what happens to the Entity when it is a root object in the scene hierarchy, and Floating Origin Parented Mode dictates what happens when it is parented under another non-synced GameObject.
When using the Simulators tab in the coherence Hub, you can specify a Simulator slug: a unique identifier for a Simulator. This value is automatically saved in RuntimeSettings when an upload is complete, and Room creation requests will use it to identify which Simulator should be started alongside your Room.
The Simulator slug can be any string value, but we recommend using something descriptive. If the same slug is used between two uploads, the later upload will overwrite the previous Simulator.
A list of uploaded Simulators and their corresponding slugs can be found in the Developer Portal:
coherence allows us to use multiple Simulators to split up a large game world with many entities between them. This is called spatial load balancing.
While load balancing is supported for standalone projects, our Cloud services currently only support associating one Simulator to a Room or World. This will be extended in the near future. Enterprise customers can still run multiple Simulators in their own cloud environment.
Simulate multiple Rooms at the same time, within one Unity instance
Multi-Room Simulators are Room Simulators which are able to simulate multiple game rooms at the same time - one sim to rule them all!
In order to achieve this, the game code should be defensive about which Room it is affecting. Game state should be kept per Room, meaning game managers, singletons (static data), etc. need to account for this.
Each Room is held in a different scene. So, for every Room created, the Multi-Room Simulator should open a connection to it, additively loading a scene and establishing a Simulator connection (via the MonoBridge).
By using Multi-Room Simulators, the coherence Developer Portal is able to instruct your Simulator which room to join and start simulating.
This communication happens via HTTP. An HTTP server is started by your game build when the MultiRoomSimulator component is active. This component listens to HTTP requests made by the coherence Developer Portal.
For offline local development, you can use a MultiRoomSimulatorLocalForwarder component on your Clients, which will create HTTP requests against your local Simulator upon Client connection, like joining a Room.
For local development, enable the Local Development Mode
flag in the .
Once the MultiRoomSimulator receives a request to join a Room, it spawns a CoherenceSceneLoader that will be in charge of additively loading the specified scene.
The quickest way to get Multi-Room Simulators set up is by using the provided wizard.
It will take you through the GameObjects and Components needed to make it happen.
Some steps are not strictly necessary. For example, you don't need a Sample UI for Multi-Room Simulators to happen. However, if you do use the Sample UI, we help you make sure you have it set up properly.
Here's a quick overview video of the setup:
These are the pieces needed for Multi-Room Simulators to work:
Simulators
In the initialization scene (splash, init, menu, ...)
MultiRoomSimulator — listens to join room requests and delegates scene loading (by instantiating CoherenceSceneLoaders)
Clients
(Only for local development) In the scene where you connect to a Room (where you have the Sample UI or your custom connection logic)
MultiRoomSimulatorLocalForwarder — requests the local MultiRoomSimulator to join rooms when the Client connects.
Independently
In the scene where the networked game logic is (game, Room, main, ...)
MonoBridge — handles the connection
LiveQuery — filters Entities by distance
CoherenceScene — when the scene is loaded via CoherenceSceneLoader, it will try to connect using the data given by it. It attaches to the MonoBridge, creates a connection, and handles auto reconnection. If a scene loaded through CoherenceSceneLoader doesn't have a CoherenceScene on it, one will be created on the fly.
There are two components that can help you fork Client and Simulator logic, for example, by enabling or disabling the MultiRoomSimulator component depending on whether it's a Simulator or a Client build. These are optional but can come in handy.
SimulatorEventHandler — events on the build type (Client/Simulator).
ConnectionEventHandler — events on the connection established by the MonoBridge associated with that Scene.
It is possible to visualize each individual Room the Multi-Room Simulator is working on. By default, Simulator connections to Rooms are hidden, as shown in the image above. You can toggle the visibility per scene by clicking the Eye icon. You can also change the default visibility of the loaded scene (defaults to hidden) on the CoherenceScene component:
Working with Multi-Room Simulators requires your logic to be constrained to the scene. Methods like FindObjectsOfType will return objects from all scenes — you could affect other game sessions!
Check out Coherence.Toolkit.SceneUtils for alternative APIs to FindObjectsOfType that work per scene. Also, Coherence.Toolkit.ActiveSceneScope can help make sure instantiation happens where you want it to.
This is also true for static members, e.g. singletons. When using Multi-Room Simulators, there need to be as many isolated instances of your managers as there are open simulated rooms.
For example, if you were to access your Game Manager through GameManager.instance, you'll now need a per-scene API like GameManager.GetInstance(scene).
There may be third-party or Unity-provided features that can't be accessed per scene, and that affect the whole game.
Loading operations, garbage collection, frame-rate spikes... all of these will affect performance in other sessions, since everything runs within the same game instance.
Communication between Clients
Client Connections are CoherenceSyncs that the CoherenceMonoBridge can handle for you. They let you uniquely identify connected users, find them by their ID, spawn a CoherenceSync whenever a new user joins the session, and send commands between those users.
When using Client Connections, CoherenceMonoBridge will spawn a CoherenceSync for each connection (Client or Simulator). Those CoherenceSyncs are subject to a different ruleset than standard CoherenceSyncs:
They can't be created or destroyed by the Client - they are always driven by CoherenceMonoBridge.
They are global - they are replicated across Clients regardless of the extent.
Client Connections shine whenever there's a need to communicate something to all the connected players. Usage examples:
Global chat
Game state changes: game started, game ended, map changed
Server announcements
Server-wide leaderboard
Server-wide events
The global nature of Client Connections doesn't fit all game types - for example, it rarely makes sense to keep every Client informed about the presence of all players on the server in an MMORPG. If this is your use case, don't set Client Connections on your CoherenceMonoBridge.
To enable Client Connections, turn Global Query on in your CoherenceMonoBridge (it is on by default):
Disabling Global Query on one Client doesn't affect other Clients, i.e. the ClientConnection Object of this Client will still be visible to other Clients that have the Global Query turned on.
Most of the Client Connection functionality is accessible through the CoherenceMonoBridge.ClientConnections object:
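For example, a sketch of listing the current connections (the GetAll() accessor is an assumption; the properties used are described below):

```csharp
// Sketch: inspecting connections via the ClientConnections object.
using UnityEngine;
using Coherence.Toolkit; // assumed namespace

public class ConnectionLogger : MonoBehaviour
{
    [SerializeField] private CoherenceMonoBridge bridge;

    public void LogConnections()
    {
        foreach (CoherenceClientConnection connection in bridge.ClientConnections.GetAll()) // accessor assumed
        {
            Debug.Log($"ClientID={connection.ClientID} Type={connection.Type} " +
                      $"IsMine={connection.IsMyConnection} GameObject={connection.GameObject.name}");
        }
    }
}
```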
Each connection is represented by a plain C# CoherenceClientConnection object. It contains all the important information about a connection - its ClientID, Type, whether it IsMyConnection, and a reference to the GameObject and CoherenceSync associated with it.
The CoherenceClientConnection.ClientID is guaranteed not to change during a connection's lifetime. However, if a Client disconnects and then connects again to the same Room/World, a new ClientID will be assigned (since a new connection was established).
Each Client Connection can have a CoherenceSync automatically spawned and associated with it. Those objects, like any other objects with CoherenceSync, can be used for syncing properties or sending messages, with a little twist - they are global and thus not limited by the LiveQuery extent. That makes them perfect candidates for operations like:
Syncing global information - name, stats, tags, etc.
Sending global messages - chat, server interaction
To enable connection objects:
For the system to know which object to create for every new Client connection, we have to link our Prefab to the CoherenceMonoBridge. Simply drag the prefab to the Client field in the inspector:
From now on, every new connection will be assigned an instance of this Prefab, which can be accessed through the CoherenceClientConnection.GameObject property.
Note that there's a separate field for the Simulator Connection Prefab. It can be used to spawn a completely different object for the Simulator connection that may contain Simulator-specific commands and replicated properties. If the field is left empty, no object will be created for the Simulator connection.
The Prefab selection process can also be controlled from code, using the CoherenceMonoBridge.ClientConnections.ProvidePrefab callback:
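A sketch of assigning that callback (the delegate's parameter shape and the ConnectionType enum are assumptions for illustration):

```csharp
// Sketch: picking the connection Prefab from code.
using UnityEngine;
using Coherence.Toolkit; // assumed namespace

public class ConnectionPrefabPicker : MonoBehaviour
{
    [SerializeField] private CoherenceMonoBridge bridge;
    [SerializeField] private CoherenceSync playerPrefab;
    [SerializeField] private CoherenceSync simulatorPrefab;

    private void Awake()
    {
        // Return a different Prefab depending on whether the new connection is a Simulator.
        bridge.ClientConnections.ProvidePrefab = (clientId, connectionType) => // parameter shape assumed
            connectionType == ConnectionType.Simulator ? simulatorPrefab : playerPrefab;
    }
}
```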
A Prefab provided through the ProvidePrefab callback takes precedence over Prefabs linked in the inspector.
Don't forget to bind to the new method to define a command:
Client Messages can be sent using the CoherenceClientConnection.SendClientMessage method:
If the ClientID of the message recipient is known, we can use the CoherenceMonoBridge.ClientConnections directly to send a client message:
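A sketch covering both cases ("OnChatMessage" is a hypothetical command bound on the connection Prefab; method signatures and the ClientID type are assumptions):

```csharp
// Sketch: sending a Client Message to a known connection, or by ClientID.
using UnityEngine;
using Coherence.Toolkit; // assumed namespace

public class ChatSender : MonoBehaviour
{
    [SerializeField] private CoherenceMonoBridge bridge;

    public void SendTo(CoherenceClientConnection recipient, string text)
    {
        recipient.SendClientMessage("OnChatMessage", text); // signature assumed
    }

    public void SendTo(uint recipientClientId, string text)
    {
        bridge.ClientConnections.SendMessage(recipientClientId, "OnChatMessage", text); // signature assumed
    }
}
```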
Choose your preferred Scripting Implementation from the drop-down list. It can be either Mono or IL2CPP.
For more information about the options listed under Build Size Optimizations, see .
Make sure you have completed the steps required in .
You have to have the Linux modules (Linux Build Support (IL2CPP), Linux Build Support (Mono), and Linux Dedicated Server Build Support) installed in the Unity Editor. See .
You have to be logged into the coherence Developer Portal, through the Unity Editor. See for more information.
Optimization | What it does |
---|---|
When building an input-based simulation, it is important to use the , which is not subject to the LiveQuery. Objects that might disappear or change based on the client-to-client distance are likely to cause simulation divergence, leading to a desync.
Unlike in the , our simulation uses client-to-client communication, meaning each Client is responsible for its own Entity and for sending inputs to other Clients. To ensure this behavior, set CoherenceSync > Simulation and Interpolation > Simulation Type to Client Side.
In a deterministic simulation, it is our code that is responsible for producing deterministic output on all Clients. This means that the automatic transform position syncing is no longer desirable. To turn it off, toggle the Predicted button in the CoherenceSync Bindings window (see the chapter on client-side prediction in ).
Make sure to use a baked script (CoherenceInput > Use Baked Script) - inputs do not work in reflection mode.
You can write your own JSON converters following this example. For information on the Newtonsoft JSON library that we use for serialization, see its documentation.
If the Entity is parented under another CoherenceSync Object (even using ), its local position will never be changed, since it will always be relative to the parent.
If you are using Cinemachine for your cameras, you'll need to notify them that the camera target has moved (e.g. via Cinemachine's OnTargetObjectWarped) when you shift the floating origin.
By default, scenes will have their own physics scene. coherence ticks the physics scene on the CoherenceScene component, which the target scene to be loaded should include.
Multi-Room Simulators are still . You need to enable Simulators for Rooms and enable Multi-Room Simulators in the coherence Developer Portal, as shown here:
This step is described in detail in the . In short, a Prefab with a CoherenceSync and a custom component (PlayerConnection in this example) must be created and placed in a Resources folder:
Client Messages are sent between the Client Connection objects. Implementing Client Messages is as simple as creating a command on the CoherenceSync used by the Client Connection Prefab in the CoherenceMonoBridge:
Argument | Description |
---|---|
--coherence-play-region | Region to connect to: eu, us (or local). |
--coherence-ip | Specific IP to point to. |
--coherence-port | Specific port to point to. |
--coherence-room-id | Specific Room to point to. |
--coherence-room-tags | Room tags (space-separated). |
--coherence-room-kv | Key-value pairs (space-separated). Example: key1 value1 key2 value2 |
--coherence-world-id | Specific World ID to point to. |
--coherence-simulation-server | Connect and behave as a Simulator. |
--coherence-simulator | Same as --coherence-simulation-server. |
Rooms functionality can be accessed through the PlayResolver, which includes all the methods needed to use Rooms.
To manage Rooms we must first decide which region we are working with.
To manage Rooms we must first decide which region we are working with.
FetchRegions in PlayResolver.cs allows us to fetch the regions available for our project. This task returns a list of regions (as strings) and a boolean that indicates whether the operation was successful.
FetchLocalRegions in PlayResolver.cs returns the local region string for a locally running Rooms Server, or null if the operation is unsuccessful (if the Server isn't running, for example).
Every other Rooms API will require a region string that indicates the relevant region for the operation so these strings should not be changed before using them for other operations.
The RoomsConnectDialog populates a dropdown with the region strings returned by both of these methods directly, for easy selection.
These methods also call EnsurePlayConnection, which initializes the needed mechanisms in the PlayResolver if necessary. EnsurePlayConnection can also be called directly for initialization.
After we have the available regions we can start managing Rooms, for instance:
CreateRoom in PlayResolver.cs allows us to create a Room in the region we pass to it.
The RoomCreationOptions parameter is used to optionally specify:
a Room name
the maximum number of Clients allowed in the Room
a list of tags for Room filtering and other uses
a key-value collection for the Room
This task returns the RoomData for the created Room, assuming the operation was successful.
FetchRooms in PlayResolver.cs allows us to search for available Rooms in a region. We can also optionally specify tags for filtering the Rooms. This task returns a list of RoomData objects for the Rooms matching our specifications.
JoinRoom in PlayResolver.cs connects the Client we pass to the method to the Room we pass to the method. The RoomData object can be either the one we get back from CreateRoom or one of the ones we got from FetchRooms. When joining a Room, the method can also optionally be told that the connecting Client is a Simulator.
The RoomsConnectDialog demonstrates both of these cases: in CreateRoom when called with autoJoin set to true, and in JoinRoom respectively.
RemoveRoom in PlayResolver.cs allows us to close a Room. The uniqueID can be either the one we get back from CreateRoom or one of the ones we got from FetchRooms, but the roomToken (the Room's secret) is only returned by CreateRoom.
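The whole fetch-create-join flow could look roughly like the sketch below. The async signatures, return shapes, option field names, and the way the IClient is obtained are assumptions for illustration, not the verbatim PlayResolver API.

```csharp
// Sketch of the fetch-create-join flow described above.
using System.Threading.Tasks;
using UnityEngine;

public class RoomsFlowExample : MonoBehaviour
{
    public async Task CreateAndJoin(Coherence.IClient client)
    {
        var (regions, ok) = await PlayResolver.FetchRegions(); // signature assumed
        if (!ok || regions.Count == 0)
            return;

        var options = new RoomCreationOptions // field names assumed; purpose described above
        {
            MaxClients = 10,
            Tags = new[] { "deathmatch" },
        };

        RoomData room = await PlayResolver.CreateRoom(regions[0], options); // signature assumed
        PlayResolver.JoinRoom(client, room);                                // signature assumed
    }
}
```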
The Developer Portal is an online dashboard where the cloud services behind your coherence-based game can be managed. It can be found at https://dev.coherence.io or from the Developer Portal link above.
The Developer Portal includes:
Organization and Project creation and management
Resource configuration and management
Enabling / disabling features
Cost analysis
Team management
Here are some examples of tasks to perform on the Developer Portal:
Create your organization and project for your game
Start/stop/restart your cloud-based Replication Server or Simulator
Enable coherence features such as player authentication, key-value store, persistence, and build sharing
Invite teammates to your project
View your resource usage and billing forecasts
While a local ReplicationServer is available as part of the Unity SDK, in order to host multiplayer services like the Replication Server in the cloud, your team must have a project in the Developer Portal. It is up to your project needs when to begin using the cloud services.
Please see the page Create a free account in the Get Started section.
Worlds functionality can also be accessed through the PlayResolver, just like Rooms. Worlds work a bit differently, however, and are a bit simpler.
First we need to fetch the available Worlds. Unlike Rooms, Worlds cannot be created by a Client and need to be set up in the Developer Portal.
FetchWorlds in PlayResolver.cs allows us to fetch the available Worlds for our project. This task returns a list of Worlds in the form of WorldsData objects and a boolean that indicates whether the operation was successful.
This method also calls EnsurePlayConnection, which initializes the needed mechanisms in the PlayResolver if necessary. EnsurePlayConnection can also be called directly for initialization.
FetchLocalWorld in PlayResolver.cs returns the local World for a locally running World Server.
The WorldsConnectDialog populates a dropdown with the Worlds returned by both of these methods, so we can select a World.
After we've selected a World we can connect to it using:
JoinWorld in PlayResolver.cs connects the Client we pass to the method to the World we pass to the method.
The isSimulator optional parameter is used for Simulators and can be ignored for regular Client connections (see Simulators).
The WorldsConnectDialog is an example implementation of Worlds usage.
When connected to a Room or a World, the Client can access the currently connected endpoint through the Coherence.IClient.LastEndpointData property of the CoherenceMonoBridge, e.g. myBridge.Client.LastEndpointData.
coherence provides an API and a database for storing key-value pairs from within game sessions.
The key-value store provides a simple way to store and retrieve data for the currently logged in player. For example, you could store the player's score, email address, or any other data.
This feature requires a game account.
The keys must be alphanumerical strings, underscore or dash. Currently, only strings are accepted as values. If you need to store numbers or complex types like arrays, you have to convert them to strings.
The total amount of stored data (keys + values) cannot exceed 256 KB per player (TBD).
There is no limit on how often the data can be stored or retrieved.
Please refer to the Cloud API: Key-value store.
coherence provides an API for creating player game accounts that uniquely identify players across multiple devices. An account is required in order to use the rest of the online services, like the key-value store and matchmaking.
There are two types of accounts that are currently supported - guest accounts and user accounts.
Guest accounts provide an easy way to start using the coherence online services without providing any user interface for user names or password. Everything is controlled with the API, and is completely transparent to the player.
The session data for the account is stored locally on the device so it is important to know that uninstalling the game will also wipe out all the data and the account will be no longer accessible even if the player installs the game again.
User accounts require explicit authorization by the player. Currently, only user name and password are supported as means for authentication. The user interface for entering the credentials must be provided by the game. Check the API how to use this feature.
In the future, there will be support for many more authentication mechanisms like Steam, Google Play Games, Sign in with Apple, etc.
Please refer to the Cloud API: Game Accounts.
Creating a player account is the first step towards using the coherence Cloud API. It is required in order to use the rest of the services.
To initialize the Cloud API you need to provide a runtime key that can be obtained from the Settings.
The easiest way to get started is with a guest account. The only thing needed is to call LoginAsGuest. This will create a random username/password combination and authenticate the player with the coherence Cloud.
Once logged in, the credentials are securely persisted so that if the game is restarted the player will be able to log in automatically.
If the game is uninstalled then the account credentials will be lost and a new guest account will be created next time the game is installed.
Another alternative is to login with a username and a password. You have to provide the user interface.
This example initializes the Cloud API, checks for an existing session and, if no session was found or if it expired, logs in the player as guest.
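The original snippet is not reproduced here; a sketch matching that description could look like the following. The entry-point type, constructor, and members other than LoginAsGuest are assumptions.

```csharp
// Sketch: initialize the Cloud API and log in as a guest if there is no valid session.
using UnityEngine;

public class CloudLoginExample : MonoBehaviour
{
    private async void Start()
    {
        var cloud = new Coherence.Cloud.CloudService(runtimeKey: "<your runtime key>"); // type/ctor assumed

        // Reuse a persisted session when possible, otherwise create/log in a guest account.
        if (!cloud.IsLoggedIn) // member assumed
        {
            await cloud.LoginAsGuest();
        }

        Debug.Log("Logged in to coherence Cloud");
    }
}
```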
The key-value store provides a simple persistence layer for the players.
The player needs to be logged in to use the key-value store.
This class provides the methods to set, get and unset key-value pairs. This is executed within the context of the currently logged in player.
Size: there are no limits on the number of stored keys/values as long as the total size is less than 256 kB.
Requests: Set/Get/Unset can be called an unlimited number of times, but execution may be throttled.
The coherence Cloud API allows us to access online services like game accounts, key-value store, matchmaking, and others. It also allows you to get the addresses (IP and port number) of the Servers the players can connect to.
The Cloud API requires you to use tokens connected to your coherence project.
Replace Textures And Sounds With Dummies | The project's textures and sound files are replaced with tiny and lightweight alternatives (dummies). The original assets are copied to <project>/Library/coherence/AssetsBackup and restored once the build process has finished. |
Keep Original Assets Backup | The Assets Backup (found at <project>/Library/coherence/AssetsBackup) is kept after the build process completes instead of being deleted. This takes extra disk space depending on the size of the project, but is a safety convenience. |
Compress Meshes | Sets Mesh Compression on all your models to High. |
Disable Static Batching | Disables Static Batching, which tries to combine meshes at compile time and can increase build size. |
Extending what can be synced from the Configure window
This is an advanced topic that aims to bring access to coherence's internals to the end user.
The Configure window lists all variables and methods that can be synced for the selected Prefab. Each selected element in the list is stored in the Prefab as a Binding with an associated Descriptor, which holds information about how to access that data.
By default, coherence uses reflection to gather public fields, properties, and methods from each of the Prefab's components. You can specify exactly what to list in the Configure window for a given component by implementing a custom DescriptorProvider. This allows you to sync custom component data over the network.
Take this player inventory for example:
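The original component is not shown here; a hypothetical inventory along these lines (plain Unity C#, no coherence API) illustrates the problem:

```csharp
// A hypothetical inventory component: the items live in a private list,
// so nothing here shows up in the Configure window by default.
using System;
using System.Collections.Generic;
using UnityEngine;

public class Inventory : MonoBehaviour
{
    [Serializable]
    public class InventoryItem
    {
        public string Name;
        public int Durability;
    }

    [SerializeField] private List<InventoryItem> items = new List<InventoryItem>();

    public int GetDurability(string itemName) =>
        items.Find(i => i.Name == itemName)?.Durability ?? 0;

    public void SetDurability(string itemName, int durability)
    {
        var item = items.Find(i => i.Name == itemName);
        if (item != null)
            item.Durability = durability;
    }
}
```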
Since the inventory items are not immediately accessible as fields or properties, they are not listed in the Configure window. In order to expose the inventory items so they can be synced across the network, we need to implement a custom DescriptorProvider.
DescriptorProvider
The main job of the DescriptorProvider is to provide the list of Descriptors that you want to show up in the Configure window. You can instantiate new Descriptors using this constructor:
name: identifying name for this Descriptor.
ownerType: type of the MonoBehaviour that this Descriptor is for.
bindingType: type of the ValueBinding class that will be instantiated and serialized in CoherenceSync when selecting this Descriptor in the Configure window.
required: if true, every network Prefab that uses a MonoBehaviour of ownerType will always have this Binding active.
If you need to serialize additional data with your Descriptor, you can inherit from the Descriptor class or assign a Serializable object to Descriptor.CustomData.
Here is an example InventoryDescriptorProvider that returns a Descriptor for each of the inventory items:
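The original provider is not included here; the sketch below only illustrates the idea. The base type and override name are assumptions, while the Descriptor constructor arguments follow the parameters described above.

```csharp
// Sketch of a custom provider returning one Descriptor per inventory item.
using System.Collections.Generic;
using Coherence.Toolkit; // assumed namespace

public class InventoryDescriptorProvider : DescriptorProvider // base type assumed
{
    private static readonly string[] ItemNames = { "Sword", "Shield", "Potion" }; // illustrative item set

    public override IEnumerable<Descriptor> Fetch() // member name assumed
    {
        foreach (var itemName in ItemNames)
        {
            yield return new Descriptor(
                name: itemName,
                ownerType: typeof(Inventory),
                bindingType: typeof(InventoryBinding),
                required: false);
        }
    }
}
```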
To specify how to read and write data to the Inventory component, we also need a custom binding implementation.
Binding
A Descriptor must specify, through bindingType, which type of ValueBinding it is going to instantiate when synced in a CoherenceSync. In our example, we need an InventoryBinding to specify how to set and get the values from the Inventory. To sync the durability property of an inventory item, we extend the IntBinding class, which provides functionality for syncing int values.
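A sketch of such a binding, where the member names are assumptions; the idea is simply that the Descriptor's name identifies which item's durability the binding reads and writes:

```csharp
// Sketch of the binding that reads/writes an item's durability.
using Coherence.Toolkit; // assumed namespace

public class InventoryBinding : IntBinding // base class named in the text
{
    private Inventory Inventory => (Inventory)UnityComponent; // owning-component accessor assumed

    public override int Value // overridable value accessor assumed
    {
        get => Inventory.GetDurability(Name);        // 'Name' assumed to come from the Descriptor
        set => Inventory.SetDurability(Name, value);
    }
}
```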
For the full list of supported binding types, see Supported types in Commands and Bindings.
We are now ready to sync the inventory items on the Prefabs.
From the Developer Portal you can create, edit and configure your Worlds
Click the New World button at the top right of the Worlds view in the Developer Portal.
To create a World:
Enter a unique name
(optional) choose an SDK version. The latest version is recommended, but this should match the SDK version installed for your project
Enter tags separated by commas
Choose which region the World should be started in
Choose the size of the Replicator
(optional) Choose the schema this World should start with. Usually, the latest schema uploaded is the preferred choice, and this is the default.
(optional) Adjust the packet frequencies for sending and receiving data. It can be adjusted per project and is part of the Advanced Config section. Note that the adjustment of frequencies is accessible only for paid plans.
After creating an Organization and Project, (see Create a free account), your Developer Portal home page will be the Dashboard.
Here you can:
find your Portal token
view the recent resources you've created: schemas, Simulators, and Rooms
Besides the core Replicator and Simulator, coherence offers additional services to enhance your game's experience and we are constantly working on more.
Currently available services are:
In the Project sidebar, you can find links to each service. Each service has an enabled checkbox which you can tick to enable and disable those features:
Disabling a service will immediately remove that functionality from your game. Please disable with caution.
From the Developer Portal, you can configure how Rooms are created through the SDK in the coherence cloud.
From the left sidebar, select Rooms. On this page you can:
choose the regions you want to allow your rooms to be created in
enable Simulators for your Rooms
view a list of recently created Rooms
view a list of recently uploaded Simulators
From the Developer Portal, you can configure what size you want your Simulator instances to be. To attach a Simulator to a Room, send the "Simulator slug" uploaded through the SDK with the Rooms creation request. When using the PlayResolver to create Rooms, the Simulator uploaded through the SDK is automatically assigned in the creation request.
The packet frequencies for sending and receiving data can be adjusted per project. It is part of the Advanced Config section and adjusting the frequencies is available only for paid plans.
The Recent Rooms table provides a quick view of the recently created Rooms, with the following information per column:
Command-line interface tools explained
Found in <package-root>/.Runtime/<platform>/
.
protocol-code-generator --help
Argument | Help |
---|---|
protocol-code-generator --help generate
replication-server --help serve
To start the Server, you need to give it the location of the schema.
You can copy the CLI commands to start the Replication Server from coherence Hub > Servers tab.
You can also define other parameters like min-query-distance (the minimum distance the LiveQuery needs to move for the Replicator to recognize a change), send and receive frequency, ip and port number.
A minimal set of parameters is presented in the example below:
replication-server serve --port 32001 --signalling-port 32002 --send-frequency 20 --recv-frequency 60 --web-support --env dev --schema "/Users/coherence/unity/Coherence.Toolkit/Toolkit.schema,/Users/coherence/MyProject/Library/coherence/Gathered.schema"
replication-server --help listen
persistence-client --help serve
Static Batching tries to combine meshes at compile-time, potentially increasing build size. Depending on your project, static batching can affect build size drastically. Read more about .
Column name | Description |
---|---|
ID | ID of the Room. |
KV | Key-value pairs associated with the Room. |
Tags | Tags attached to the Room. |
SDK | SDK version used by the Room. |
Sim | Sim slug for the Room. |
Schema | Schema ID used by the Room. |
IP | IP address of the Replication Server attached to the Room. |
Created | Time elapsed since the Room's creation. |
Room Status | Status of the Room (Open or Closed). |
Sim Status | Status of the Simulator attached to the Room (Starting, Started, Stopped). |
Messages | Messages from the Simulator orchestration. This field is usually empty but will contain a message if an issue happened during the Simulator life-cycle. The message provides the reason for the Simulator's current state (usually Stopped). Once a Simulator is stopped (for Room inactivity, for instance), the Messages column will say Idle-stopped. |
Argument | Description |
---|---|
--log-level=LOG-LEVEL-STRING | Log level. Values: Trace, Debug, Info, Warning, Error, Panic |
 | Output format of the log. Values: plain, json. |
--log-file=LOG-FILE-STRING | Log output file |
--panic-on-error | Enable/disable panic on error |
Getting updates about every entity in the whole scene is unfeasible for big-world games, like MMOs. For this, coherence has a flexible system for creating areas of interest, and getting updates only about the entities that each Client cares about, using a tool called Live Query.
WASD or Left stick: Move character
Hold Shift or Shoulder button left: Run
Spacebar or Joypad button down: Jump
This scene contains two cubes that represent areas of interest. Every connected Client can only see other players if they are standing in one of these cubes.
Select one of the two GameObjects named LiveQuery. You will see they have a Coherence Live Query component.
This component defines an area of interest, in this case a 10x10x10 cube (the Radius of 5 is half the cube's side). This tells the Replication Server that this Client is only interested in network entities that are physically present within this volume.
If a Client has to know about the whole World, it is enough to set the Live Query Radius to 0, which makes it capture all updates.
Now it's clear why Transform.position cannot be excluded from synchronization, as we saw in the first lesson: the Replication Server needs to know where network entities are in space at all times, to detect whether they fall within a Live Query or not.
In addition, Live Queries can be moved in space. They can be parented to the camera, to the player, or to other moving elements that denote an area of interest - depending on the type of game.
It is also possible, like in this scene, to have more than one Live Query. They act additively, requesting updates for entities that are within at least one of the volumes.
Notice that a Live Query is needed: a Client with no Live Query in the scene will receive no updates at all.
If you explored the previous scenes, you might have noticed that GameObjects with a Live Query component were already there; in this scene we simply gave them a visual representation for demo purposes.
Try moving in and out of volumes. You will notice that network-instantiation takes care of destroying the GameObject representing a remote entity that exits a Live Query, and reinstantiates it when it enters one again.
Also, notice that the player belonging to the local Client doesn't disappear. coherence will stop sending updates about this instance to other Clients, but the instance is not destroyed locally, as long as the Client retains authority on it.
If a GameObject can be in a state that needs to be computed, it might not appear correctly as it gets recreated. For instance, an animation state machine might not be in the correct animation state if it had previously reached that state via a trigger. You would have to ensure that the trigger is called again when the instance gets network-instantiated (via a Network Command).
The simulation frame represents an internal clock that every Client syncs with the Replication Server. This clock runs at a 60Hz frequency, which means that the resolution of a single simulation frame is ~16ms.
There are three different simulation frame types used within coherence:
The latest simulation frame received from the Replication Server. Accessible via CoherenceMonoBridge.NetworkTime.ServerSimulationFrame.
The local Client simulation frame that progresses with local time. Accessible via CoherenceMonoBridge.NetworkTime.ClientSimulationFrame.
Every Client tries to match the Client simulation frame with the Server simulation frame by continuously monitoring the distance between the two and adjusting NetworkTime.NetworkTimeScale based on the distance, ping, delta time, and several other factors, starting from the first simulation frame captured when the Client first connects (NetworkTime.ConnectionSimulationFrame).
Time.timeScale is automatically set to the value of NetworkTime.NetworkTimeScale if CoherenceMonoBridge.controlTimeScale is set to true (the default value).
In perfect conditions, all Clients connected to a single session should have exactly the same ClientSimulationFrame value at any point in real-world time.
The value of the ClientSimulationFrame can jump by more than 1 between two engine frames if the frame rate is low enough.
The Client simulation frame is used to timestamp any outgoing Entity changes to achieve a consistent view of the World for all players. The receiving side uses it for interpolation of the synced values.
The local simulation frame that progresses in user-controlled fixed steps. Accessible via CoherenceMonoBridge.ClientFixedSimulationFrame.
By default, the fixed step value is set to Time.fixedDeltaTime.
Just like the basic Client simulation frame, it uses NetworkTime.NetworkTimeScale to correct the drift. The fixed simulation frame is used as the base for the fixed-step, network-driven simulation loop that is run via CoherenceMonoBridge.OnFixedNetworkUpdate. This loop is used internally to power CoherenceInput and the GGPO code.
Unlike ClientSimulationFrame, the CoherenceMonoBridge.OnFixedNetworkUpdate loop never skips frames - it is guaranteed to run for every single frame increment.