Kinect v2 - Synchronize depth and color frames
I am currently looking for a stereoscopic camera for a project, and the Kinect v2 seems to be a good option. However, since it's quite an investment for me, I need to be sure it meets my requirements, the main one being good synchronization between the different sensors.
Apparently there is no hardware synchronization of the sensors, and I have found conflicting information about the software side:
- Some posts where people complain about lag between the two sensors, and many others asking how to synchronize them. Both kinds rely on odd workarounds, and no "official", common solution emerges from the answers.
- Some posts about a MultiSourceFrame class, which is part of the Kinect SDK 2.0. From what I understand, this class lets you retrieve the frames of all the sensors (or only the sensors you choose) at a given time. So, for a given instant t, you should be able to get the output of the different sensors and make sure these outputs are synchronized (see the sketch below).
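
To make it concrete, here is a minimal sketch (C#, Kinect SDK 2.0) of how I imagine using it. It is untested, since I don't own the sensor yet; the reader and frame-source names come from the SDK documentation, but the timestamp comparison at the end is just my own way of checking whether the frames really are synchronized.

```csharp
// Minimal sketch, written from the SDK docs only (untested on real hardware).
using System;
using Microsoft.Kinect;

class MultiSourceSketch
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        // Ask for color and depth through a single reader so both frames
        // are delivered together for the same capture instant.
        MultiSourceFrameReader reader = sensor.OpenMultiSourceFrameReader(
            FrameSourceTypes.Color | FrameSourceTypes.Depth);

        reader.MultiSourceFrameArrived += (s, e) =>
        {
            MultiSourceFrame multiFrame = e.FrameReference.AcquireFrame();
            if (multiFrame == null) return;

            using (ColorFrame color = multiFrame.ColorFrameReference.AcquireFrame())
            using (DepthFrame depth = multiFrame.DepthFrameReference.AcquireFrame())
            {
                if (color == null || depth == null) return;

                // If the two frames are truly synchronized, their RelativeTime
                // values should be (nearly) identical.
                TimeSpan gap = color.RelativeTime - depth.RelativeTime;
                Console.WriteLine($"color-depth offset: {gap.TotalMilliseconds} ms");
            }
        };

        Console.ReadLine(); // keep the app alive while frames arrive
    }
}
```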
So my question is: does this MultiSourceFrame class do exactly what I think it does? And if yes, why is it never proposed as a solution? The posts in the first category seem to be from 2013, i.e. before the release of the SDK 2.0. However, the MultiSourceFrame class is supposed to replace the AllFramesReady event of the previous versions of the SDK, and AllFramesReady wasn't suggested as a solution either.
Unfortunately the documentation doesn't provide much information about how it works, so I'm asking here in case someone has already used it. I'm sorry if my question seems stupid, but I would like to be sure before purchasing such a camera.
Thank you for your answers! And feel free to ask for more details if needed :)
Solution 1:[1]
I've only used the MS SDK, but I figure the same rules apply. The reason RelativeTime is the same for all the depth-related streams is that they are all created out of the IR frame, so they all depend on it. The color frame is not, as it comes from a different camera.

As for RelativeTime, it's basically a TimeSpan (in C# terms) that describes something akin to a delta time between frames on the Kinect's own runtime clock. It's probably created by the Kinect Service, which grabs the raw input from the sensor, sends the IR to the GPU for expansion into Depth (which is actually an averaging of several frames), Body and BodyIndex (and LongExposureInfrared), then gets the data back on the CPU and distributes it to all the registered listeners (a.k.a. the different Kinect v2 apps/instances).

I also read, in an MSDN forum, a reply by an MVP who said MS cautioned them against using RelativeTime for anything other than delta-time calculations. So I don't know if you can confidently use it for manual synchronization between separate streams (i.e. without using a MultiSourceFrameReader).
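
For reference, this is roughly what that "manual synchro" between separate streams would look like: two independent readers matched only by RelativeTime. It is only a sketch of the approach I'm advising caution about; the reader APIs are from the official SDK, but the 33 ms matching threshold is an arbitrary assumption on my part.

```csharp
// Sketch of manual matching by RelativeTime across two independent readers.
// The 33 ms tolerance (~one frame at 30 fps) is a guess, not an official value.
using System;
using Microsoft.Kinect;

class ManualSyncSketch
{
    static TimeSpan lastColorTime;

    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        ColorFrameReader colorReader = sensor.ColorFrameSource.OpenReader();
        DepthFrameReader depthReader = sensor.DepthFrameSource.OpenReader();

        colorReader.FrameArrived += (s, e) =>
        {
            using (ColorFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame != null) lastColorTime = frame.RelativeTime;
            }
        };

        depthReader.FrameArrived += (s, e) =>
        {
            using (DepthFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;

                // Treat the pair as "synchronized" if the timestamps are close.
                TimeSpan gap = frame.RelativeTime - lastColorTime;
                if (Math.Abs(gap.TotalMilliseconds) < 33)
                    Console.WriteLine("depth/color pair within one frame of each other");
            }
        };

        Console.ReadLine();
    }
}
```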
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | NPatch |
