Had any luck with RealSense?

  • #123964
    jill
    Participant

    Has anyone had luck with using an Intel RealSense with the AR Sandbox software packages?

    If so, which model did you use and what modifications did you make?

    #123973
    Oliver Kreylos
    Keymaster

    The current software supports RealSense cameras using the first-generation Intel RealSense SDK. Those cameras were not good enough to run an AR Sandbox in any reasonable way.

    The current RealSense camera line, primarily models D415 and D435, only works with the second-generation RealSense SDK, which is not supported yet. Someone would have to write a new driver.

    #123976
    jill
    Participant

    Thanks Oliver. Could you elaborate on “Those cameras were not good enough to run an AR Sandbox in any reasonable way”?

    #123977
    hinmanj
    Participant

    Hey Oliver, sorry if my questions are silly; I’m entering this conversation without having delved into the codebase. Does the sandbox code work by just taking in a stream of depth frames as input?

    If so, is it resolution dependent or could someone just write a Kinect For Azure wrapper and pipe in the 1024×1024 depth stream and things would just work?

    Thanks!

    #123980
    Oliver Kreylos
    Keymaster

    I might be getting the exact model numbers wrong, but the F200 (time-of-flight) had very low resolution, a short maximum sensing distance (less than 1 m), and a lot of noise. The R200 (structured light for laptops/tablets) had a small FoV, so it needed to be mounted very high up, which resulted in poor vertical resolution, poor sample coverage, and high noise.

    I currently have a D435 to play with, which is second-generation RealSense and doesn’t yet work with the AR Sandbox, and I’m not impressed by its 3D sensing quality so far, either.

    It’s kinda sad that the first-generation Kinect, which just had its ninth birthday, is still the best 3D camera for an AR Sandbox that I’ve tried so far.

    #123981
    hinmanj
    Participant

    Have you tried the Kinect for Azure yet? I think it’s far and away the best of the bunch now, and it’s even (allegedly) cross-platform so you don’t have to USB-sniff to make it work on Linux this time ( https://github.com/microsoft/Azure-Kinect-Sensor-SDK )

    I think it would be Jill’s best bet if it were supported. As an aside, was the original first-gen Kinect actually better than the Kinect v2 for the AR Sandbox?

    #123982
    Oliver Kreylos
    Keymaster

    “If so, is it resolution dependent or could someone just write a Kinect For Azure wrapper and pipe in the 1024×1024 depth stream and things would just work?”

    In principle, yes. Ideally this would be done inside my Kinect package, which has support for other camera types as well and a generic depth/color image streaming API. Cameras advertise their depth and color stream resolutions and intrinsic parameters, and downstream software is device-independent.
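
    As a rough illustration of what such a Kinect For Azure wrapper could look like: the sketch below uses the real Azure Kinect Sensor SDK C API (the k4a_* calls) to grab 1024×1024 depth frames, but the DepthCallback type and streamDepth function are made up for illustration and are not the Kinect package’s actual API.

        // Hypothetical Kinect-for-Azure depth source: open the device, start
        // the wide-FoV unbinned depth mode (1024x1024 at up to 15 fps), and
        // deliver each frame to a device-independent callback.
        #include <k4a/k4a.h>
        #include <cstdint>
        #include <functional>
        #include <stdexcept>

        using DepthCallback = std::function<void(const uint16_t* pixels, int width, int height)>;

        void streamDepth(const DepthCallback& callback) {
            k4a_device_t device = nullptr;
            if (k4a_device_open(K4A_DEVICE_DEFAULT, &device) != K4A_RESULT_SUCCEEDED)
                throw std::runtime_error("Cannot open Azure Kinect device");

            k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
            config.depth_mode = K4A_DEPTH_MODE_WFOV_UNBINNED; // 1024x1024 depth stream
            config.camera_fps = K4A_FRAMES_PER_SECOND_15;     // WFOV unbinned maxes out at 15 fps
            if (k4a_device_start_cameras(device, &config) != K4A_RESULT_SUCCEEDED) {
                k4a_device_close(device);
                throw std::runtime_error("Cannot start depth camera");
            }

            for (;;) {
                k4a_capture_t capture;
                if (k4a_device_get_capture(device, &capture, 1000) != K4A_WAIT_RESULT_SUCCEEDED)
                    continue; // timed out; try again
                if (k4a_image_t depth = k4a_capture_get_depth_image(capture)) {
                    // Depth pixels are 16-bit millimeter values
                    callback(reinterpret_cast<const uint16_t*>(k4a_image_get_buffer(depth)),
                             k4a_image_get_width_pixels(depth),
                             k4a_image_get_height_pixels(depth));
                    k4a_image_release(depth);
                }
                k4a_capture_release(capture);
            }
        }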

    It would also be possible to run the low-level camera driver on a separate host (using the original Windows drivers, for example) and stream the depth images over the network. There is already a remote camera protocol in the Kinect package, so it would be easiest to stream using that protocol; it also compresses the depth stream to reduce bandwidth.
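
    For the separate-host route, the basic data flow could look like the sketch below. This is not the Kinect package’s actual remote camera protocol; it just illustrates compressing and sending one depth frame over an already-connected TCP socket, and the header layout is made up.

        #include <arpa/inet.h>
        #include <sys/socket.h>
        #include <zlib.h>
        #include <cstdint>
        #include <stdexcept>
        #include <vector>

        // Send one depth frame: a small header (width, height, payload size,
        // all in network byte order) followed by zlib-compressed 16-bit pixels.
        void sendDepthFrame(int sock, const uint16_t* pixels, uint32_t width, uint32_t height) {
            uLong rawSize = uLong(width) * uLong(height) * sizeof(uint16_t);
            std::vector<Bytef> packed(compressBound(rawSize));
            uLongf packedSize = packed.size();
            if (compress(packed.data(), &packedSize,
                         reinterpret_cast<const Bytef*>(pixels), rawSize) != Z_OK)
                throw std::runtime_error("Depth frame compression failed");

            uint32_t header[3] = {htonl(width), htonl(height), htonl(uint32_t(packedSize))};
            if (send(sock, header, sizeof(header), 0) != ssize_t(sizeof(header)) ||
                send(sock, packed.data(), packedSize, 0) != ssize_t(packedSize))
                throw std::runtime_error("Network send failed");
        }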

    #123983
    Oliver Kreylos
    Keymaster

    “As an aside, was the original first-gen Kinect actually better than the Kinect v2 for the AR Sandbox?”

    The Kinect v2 has higher effective resolution for most applications because of its time-of-flight nature, but in the AR Sandbox, where filtering collects depth samples over time, that advantage doesn’t really materialize, so the Kinect v1’s higher nominal resolution was a plus. In addition, the Kinect v2 is quite unreliable compared to the Kinect v1, at least with the reverse-engineered Linux driver I have. There seem to be multiple slightly different hardware versions out there; my driver works pretty solidly on the two cameras I have (the ones on which I reverse-engineered and debugged it), but others have reported persistent issues.
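
    To illustrate what “filtering collects depth samples over time” means, a heavily simplified per-pixel filter might look like the sketch below. The real filter in the AR Sandbox is statistical and more involved, and the parameter values here are placeholders; the point is only that each output pixel is built from many raw samples, which is why per-frame noise and effective resolution matter less than stability.

        #include <cmath>
        #include <cstdint>
        #include <vector>

        // Keep one smoothed value per pixel; raw frames only nudge it, and the
        // output commits only once a pixel has moved past a hysteresis band.
        class DepthFilter {
        public:
            DepthFilter(int width, int height)
                : average(size_t(width) * height, 0.0f),
                  stable(size_t(width) * height, 0.0f) {}

            // Feed one raw depth frame; returns the filtered frame.
            const std::vector<float>& update(const uint16_t* raw) {
                for (size_t i = 0; i < average.size(); ++i) {
                    if (raw[i] == 0)
                        continue; // 0 marks an invalid sample; keep the old value
                    average[i] += alpha * (float(raw[i]) - average[i]); // running average
                    if (std::fabs(average[i] - stable[i]) > hysteresis)
                        stable[i] = average[i]; // commit only persistent changes
                }
                return stable;
            }

        private:
            float alpha = 0.1f;      // weight of each new sample (placeholder)
            float hysteresis = 5.0f; // movement required before a pixel updates (placeholder)
            std::vector<float> average, stable;
        };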

    Overall, and only for the AR Sandbox, I recommend Kinect v1 over Kinect v2.

    #123984
    jill
    Participant

    So to sum up the Kinect v1 vs. v2 discussion: even though the v2 uses time-of-flight technology, the AR Sandbox software wouldn’t take full advantage of its per-frame depth measurements, because the filtering accumulates samples across many frames anyway?

    I am going to attempt to read the shape of 3D printed sand toys within the sandbox. I’m wondering if I should make the effort to use the Azure.

    #123985
    jill
    Participant

    …but the Azure also uses ToF, so then would that be wasted effort?

    https://docs.microsoft.com/en-us/azure/kinect-dk/depth-camera

    #123986
    hinmanj
    Participant

    I believe the resolution and FOV on the Kinect for Azure (I’ll call it K4A from now on) are both much better than in the previous generations.

    And since the K4A allegedly has Linux support out of the box, I wouldn’t be as concerned about reliability as with the Kinect v1/v2, where Oliver had to reverse-engineer the drivers because Microsoft kept them Windows-only.
