Using SARndbox with libfreenect or IanniX


This topic contains 9 replies, has 4 voices, and was last updated by  mankoff 2 years, 11 months ago.

  • #100748

    ernusame
    Participant

    I’ve got my sandbox up and running nicely, but now I’m wondering: is there a way to get some of the data from the Kinect while it’s running?

    I’m interested in using the sandbox with the 3D sequencer IanniX (https://github.com/iannix/IanniX), which can take input from a Kinect via libfreenect (video demo here: http://vimeo.com/36355391), or from OpenSoundControl (OSC) messages in general.

    I think I’ll have to use two Kinects, but since reading the source is beyond my abilities, I’m just wondering how you might get all of these to play nicely together?

    Thanks

    #100750

    Oliver Kreylos
    Keymaster

    That would require code modification. The main SARndbox application receives depth image data in a central callback, from where it is dispatched to any clients who need depth data for processing. It would be straightforward to add another client that converts the raw depth data into whatever format necessary to control external applications, and then stream it out through OSC or a custom protocol.

    The relevant callback is called rawDepthFrameDispatcher and is found in Sandbox.cpp.
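
    For illustration only, such a client might look like the sketch below; the method name sendDepthFrame and the oscSendDepth helper are hypothetical stand-ins, not part of the SARndbox source:

    /* Hypothetical client method, called from rawDepthFrameDispatcher in
       Sandbox.cpp alongside the existing clients: */
    void Sandbox::sendDepthFrame(const Kinect::FrameBuffer& frameBuffer)
        {
        /* Access the raw depth pixels as 16-bit unsigned integers: */
        const RawDepth* depthImage=static_cast<const RawDepth*>(frameBuffer.getBuffer());

        /* Convert the pixels to whatever format the external application
           expects and stream them out (oscSendDepth is a hypothetical helper): */
        // oscSendDepth("/sandbox/depth",depthImage,640*480);
        }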

    #100751

    ernusame
    Participant

    Thanks Oliver!

    I’ll look into adding OSC output. That would make setting everything up much easier.

    #100784

    ernusame
    Participant

    So I’m looking through the code for rawDepthFrameDispatcher, and I’m passing the frame buffer on to a new callback.

    Now when I want to get the depth buffer (I’m just trying to get it to print out right now), do I call the FrameBuffer object’s getBuffer() method?

    Sorry to bother you; I’m just not very used to C++. Getting a running printout of the depth would be a huge step for me.

    Thanks

    #100785

    ernusame
    Participant

    OK, progress update: I’ve had some success getting values out by calling getBuffer() and casting to RawDepth, as you do in FrameFilter.cpp. I think I’m getting somewhere.

    Hopefully I’m on the right track and now I just need to interpret the data! I’ve not worked with a Kinect before, so I’m a little unsure of what I’m getting out. Is each pixel an 11-bit depth value?

    #100786

    Oliver Kreylos
    Keymaster

    To process an incoming depth image within the rawDepthFrameDispatcher method, or any method called by it, you access the frame contents via the getBuffer method:

    const RawDepth* depthImage=static_cast<const RawDepth*>(frameBuffer.getBuffer());

    RawDepth is a typedef for unsigned short, i.e., each pixel in the depth image is a 16-bit unsigned integer. You can get the size of the depth image from the frame buffer’s getSize() method, but for Kinect v1 the size is always 640×480.
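
    For example, to verify you are reading the buffer correctly, you could scan each incoming frame for its minimum and maximum raw values and print them (a sketch; it needs <iostream> and hard-codes the Kinect v1 frame size):

    const RawDepth* depthImage=static_cast<const RawDepth*>(frameBuffer.getBuffer());
    RawDepth min=0xffffU,max=0U;
    for(int i=0;i<640*480;++i)
        {
        if(min>depthImage[i])
            min=depthImage[i];
        if(max<depthImage[i])
            max=depthImage[i];
        }
    std::cout<<"Raw depth value range: "<<min<<" - "<<max<<std::endl;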

    The contents of the depth image are raw pattern displacement values, meaning they are not metric distances. The conversion formula from displacement values d to metric z values is slightly different for each Kinect camera; figuring out the parameters is part of intrinsic calibration. The precise formula to get z from d is:

    float z = A / (B - float(d))

    where A and B are intrinsic calibration parameters. Approximate values for A and B are 34000 and 1090, respectively, which will yield z values in centimeters.
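
    For example, with those approximate parameters, a raw value of d = 600 yields z = 34000 / (1090 – 600) ≈ 69.4 cm, and d = 800 yields z = 34000 / (1090 – 800) ≈ 117 cm; note that larger displacement values correspond to larger distances from the camera.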

    There is also pretty significant non-linear distortion in the depth image, due to lens imperfections in both the IR pattern projector and the IR camera. My Kinect software uses per-pixel depth correction formulas to account for those; they are applied to the raw displacement values before conversion to metric distances. See the FrameFilter class’ filterThreadMethod method to see how they are applied.
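
    Purely as a sketch of the idea (the scale and offset arrays here are hypothetical stand-ins for the actual per-pixel calibration data stored by the Kinect package), such a correction would be applied like this:

    /* Hypothetical per-pixel linear depth correction for pixel i,
       applied before the displacement-to-metric conversion: */
    float d=float(depthImage[i])*scale[i]+offset[i];
    float z=A/(B-d);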

    #100787

    ernusame
    Participant

    Just read your reply, thanks Oliver!

    #100791

    jpwalsh2
    Participant

    I hope you will update this thread if you have made progress on getting the elevation data from the system. We have a sandbox at my university (East Carolina U.), and I think if we had the data… it would be very useful for teaching more advanced concepts/methods.

    Please let us know your progress or problems.

    Thanks
    J.P.

    #100792

    ernusame
    Participant

    Yes, J.P., I’ve managed to get OSC data out of SARndbox successfully! Both raw and filtered.

    I’m now using it to drive SuperCollider and Max/MSP patches for sound.

    One problem is managing the large output; right now I’m sampling every other pixel and throttling it with the system clock. I’m sure there’s a better way, but I was getting array-bounds errors using the C++11 time libraries and a newer compiler, and I have no idea why.
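
    For reference, what I was attempting with the C++11 time library looks roughly like this (a sketch; it needs <chrono>, and the sampling stride and send rate are arbitrary choices):

    static std::chrono::steady_clock::time_point lastSent;
    std::chrono::steady_clock::time_point now=std::chrono::steady_clock::now();
    if(now-lastSent>=std::chrono::milliseconds(100)) // at most ~10 bundles per second
        {
        for(int y=0;y<480;y+=2) // sample every other row...
            for(int x=0;x<640;x+=2) // ...and every other column
                {
                /* Pack depthImage[y*640+x] into the outgoing OSC bundle */
                }
        lastSent=now;
        }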

    The other is that this is my first time trying to use C++ 🙂

    I’ll upload my ‘modded’ version to GitHub on Monday. Hopefully I’ll be able to tidy things up into classes etc. at some point.

    #102110

    mankoff
    Participant

    Hi ernusame,

    Can you post a link to your GitHub repository? I’d like access to the data stream to implement usage tracking.

    Thanks,

    -k.


