RealityKit: How to create custom meshes at runtime?

RealityKit has a bunch of useful functionality like built-in multiuser synchronization over a network to support shared worlds, but I can’t seem to find much documentation regarding mesh / object creation at runtime. RealityKit has some basic mesh generation functions (box, sphere, etc.) but I’d like to create my own procedural meshes at runtime (vertices and indices), and likely regenerate them every frame immediate-mode rendering style.

Firstly, is there a way to do this, or is RealityKit too closed-in without a way to do much custom rendering? Secondly, would there be an alternative solution that might let me use some of RealityKit’s synchronization? For example, is that part really just another library I can use with ARKit 3? What is it called? I’d like to be able to synchronize arbitrary data between users’ devices as well, so the built-in system would be helpful as well.

I can’t really test this because I don’t have any devices that can support the beta software at the moment. I am trying to learn whether I’ll be able to do what I want for my program(s) if I do get the necessary hardware, but the documentation is sparse.



Solution 1:[1]

As far as I know, RealityKit can only use primitives or .usdz files as models. You can generate .usdz files using Model I/O on device, but that isn't feasible for your use case of regenerating geometry every frame.
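To illustrate the Model I/O route mentioned above, here is a minimal sketch (untested, names are illustrative): build a simple mesh and export it to a USD file, which RealityKit could then load as an entity. Note that which export extensions are supported varies by OS version, hence the `canExportFileExtension` check.

```swift
import Foundation
import ModelIO

// Build a simple procedural mesh with Model I/O (a box here, as a stand-in
// for your own generated geometry).
let allocator = MDLMeshBufferDataAllocator()
let mesh = MDLMesh(boxWithExtent: [0.1, 0.1, 0.1],
                   segments: [1, 1, 1],
                   inwardNormals: false,
                   geometryType: .triangles,
                   allocator: allocator)

let asset = MDLAsset()
asset.add(mesh)

// Export to a USD file RealityKit can load, e.g. via
// try Entity.load(contentsOf: url). Check support first, since available
// export formats depend on the OS version.
let url = FileManager.default.temporaryDirectory
    .appendingPathComponent("generated.usdc")
if MDLAsset.canExportFileExtension("usdc") {
    try? asset.export(to: url)
}
```

Even so, a disk round-trip per mesh rules this out for immediate-mode, per-frame regeneration.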

The synchronization however is built into ARKit although you have to do a little bit more work when you are not using RealityKit.

  1. Create a MultipeerConnectivity session between the devices (that's something you need to do for RealityKit as well)
  2. Configure your ARSession and set isCollaborationEnabled, which makes your session output CollaborationData in the session(_:didOutputCollaborationData:) delegate callback.
  3. Send this data using your MultipeerConnectivity session.
  4. When receiving data from other users, integrate it into your session using update(with:)
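Steps 2–4 above can be sketched roughly as follows. This is an outline, not a complete implementation: `mcSession` stands in for the MultipeerConnectivity session from step 1, and the receive path you'd wire up through your `MCSessionDelegate`.

```swift
import ARKit
import MultipeerConnectivity

class CollaborationHandler: NSObject, ARSessionDelegate {
    let arSession = ARSession()
    var mcSession: MCSession!  // set up in step 1 (placeholder here)

    func startTracking() {
        // Step 2: enable collaboration on the world-tracking configuration.
        let config = ARWorldTrackingConfiguration()
        config.isCollaborationEnabled = true
        arSession.delegate = self
        arSession.run(config)
    }

    // Step 3: ARKit hands you collaboration data; archive it and send it
    // to all connected peers.
    func session(_ session: ARSession,
                 didOutputCollaborationData data: ARSession.CollaborationData) {
        guard let encoded = try? NSKeyedArchiver.archivedData(
            withRootObject: data, requiringSecureCoding: true) else { return }
        try? mcSession.send(encoded,
                            toPeers: mcSession.connectedPeers,
                            with: .reliable)
    }

    // Step 4: call this from your MCSessionDelegate when data arrives.
    func receivedData(_ data: Data) {
        if let collaborationData = try? NSKeyedUnarchiver.unarchivedObject(
            ofClass: ARSession.CollaborationData.self, from: data) {
            arSession.update(with: collaborationData)
        }
    }
}
```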

To send arbitrary information between users you can either send it via MultipeerConnectivity independently from ARKit, or use custom ARAnchors, which is the preferred option when you're dealing with positional data, e.g. when a user has placed an object at a specific location.
Instead of adding objects directly (by using something like scene.rootNode.addChildNode() in SceneKit), you create a special ARAnchor subclass carrying all the information needed to add your model, and add that anchor to your session. Then you add the object in the renderer(_:didAdd:for:) delegate callback. This has two benefits: better tracking around your object (because you added an anchor at the position, indicating to ARKit that it should remember it), and no extra work for multiuser experiences, because ARKit calls renderer(_:didAdd:for:) both for manually added anchors and for automatically added ones, for example when it receives collaboration data.
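A sketch of this custom-anchor pattern, assuming a hypothetical `ModelAnchor` that carries a model name (your payload would differ). Subclassed anchors must support secure coding so ARKit can serialize them into collaboration data.

```swift
import ARKit
import SceneKit

// Hypothetical anchor subclass carrying the data needed to recreate a model.
class ModelAnchor: ARAnchor {
    let modelName: String

    init(modelName: String, transform: simd_float4x4) {
        self.modelName = modelName
        super.init(name: "model", transform: transform)
    }

    // Required by ARKit when it copies anchors internally.
    required init(anchor: ARAnchor) {
        self.modelName = (anchor as! ModelAnchor).modelName
        super.init(anchor: anchor)
    }

    // Secure coding lets the anchor travel inside collaboration data.
    override class var supportsSecureCoding: Bool { true }

    required init?(coder: NSCoder) {
        guard let name = coder.decodeObject(of: NSString.self,
                                            forKey: "modelName") as String?
        else { return nil }
        self.modelName = name
        super.init(coder: coder)
    }

    override func encode(with coder: NSCoder) {
        super.encode(with: coder)
        coder.encode(modelName as NSString, forKey: "modelName")
    }
}

// Placing an object: add the anchor instead of a node, e.g.
// session.add(anchor: ModelAnchor(modelName: "chair", transform: hitTransform))

// ARSCNViewDelegate: called for your own anchors *and* for anchors received
// from other users via collaboration data.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode,
              for anchor: ARAnchor) {
    guard let modelAnchor = anchor as? ModelAnchor else { return }
    // Load the model named modelAnchor.modelName and attach it to `node`.
}
```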

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 jlsiewert