Mapping color and depth pixels

This topic contains 3 replies, has 2 voices, and was last updated by elisek 7 months ago.

    #103175

    elisek
    Participant

    Hello all,

    I’m trying to do some marker recognition with the Sandbox and have managed to get most of it working, but I am struggling with mapping the color and depth images together. I know there is an offset between the two and the images don’t quite line up, so ideally I’d like to have them both in the same coordinate system. I was wondering if anybody has tried something like this before, or would know roughly how to go about it.

    I have access to both the color projection matrix and the depth projection matrix, but unfortunately they give me very different results. Would I have to create an orthogonal transform for the color data as well?
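    To make the question concrete, the kind of mapping I’m after looks roughly like this – just a sketch with made-up names for the intrinsics and the depth-to-color transform, not the actual Kinect package API:

    // Rough sketch of the depth -> color mapping I'm after; all type and
    // variable names here are placeholders, not the Kinect package's own.
    #include <array>

    struct Vec3 { double x, y, z; };

    struct Intrinsics            // pinhole model: focal lengths and principal point
    {
        double fx, fy, cx, cy;
    };

    struct RigidTransform        // rotation (row-major 3x3) and translation
    {
        std::array<double, 9> R;
        Vec3 t;
    };

    // Un-project a depth pixel (u, v) with measured depth z into 3D camera space.
    Vec3 depthPixelToCameraSpace(const Intrinsics& kd, double u, double v, double z)
    {
        return { (u - kd.cx) / kd.fx * z,
                 (v - kd.cy) / kd.fy * z,
                 z };
    }

    // Move the point from the depth camera's frame into the color camera's frame.
    Vec3 transformToColorCamera(const RigidTransform& e, const Vec3& p)
    {
        return { e.R[0]*p.x + e.R[1]*p.y + e.R[2]*p.z + e.t.x,
                 e.R[3]*p.x + e.R[4]*p.y + e.R[5]*p.z + e.t.y,
                 e.R[6]*p.x + e.R[7]*p.y + e.R[8]*p.z + e.t.z };
    }

    // Project the 3D point into the color image.
    void projectToColorPixel(const Intrinsics& kc, const Vec3& p, double& u, double& v)
    {
        u = kc.fx * p.x / p.z + kc.cx;
        v = kc.fy * p.y / p.z + kc.cy;
    }

    If the two projection matrices alone aren’t enough, I’m guessing the middle step (the extrinsic transform between the two cameras) is the part I’m missing?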

    Thanks 🙂
    E

    #103246

    elisek
    Participant

    Update: I’ve tried out a few different things and looked at some of the code in the Kinect package, but still no success. The depth and color projection matrices from the intrinsic parameters are vastly different, and when I transform a point from color image space through either one of the transforms it doesn’t come anywhere close to matching the same point transformed from depth image space to camera space (i.e. the same point on the sandbox). I’m using the downloaded intrinsic calibration rather than a calculated one – could that be the cause of the mismatch? The way the color and depth camera projection matrices differ makes me think it’s something more than that, though.
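    For concreteness, the check I’d like to pass (building on the placeholder helpers sketched in my first post) is roughly this:

    // Map a marker corner found in the depth image into the color image and
    // measure how far it lands from where the corner actually appears there.
    #include <cmath>

    double reprojectionErrorPixels(const Intrinsics& depthK, const Intrinsics& colorK,
                                   const RigidTransform& depthToColor,
                                   double du, double dv, double z,   // corner in the depth image, plus its depth
                                   double cuTrue, double cvTrue)     // the same corner located in the color image
    {
        Vec3 p = depthPixelToCameraSpace(depthK, du, dv, z);  // depth image -> camera space
        Vec3 q = transformToColorCamera(depthToColor, p);     // depth camera frame -> color camera frame
        double cu, cv;
        projectToColorPixel(colorK, q, cu, cv);               // camera space -> color image
        return std::hypot(cu - cuTrue, cv - cvTrue);          // misalignment in pixels
    }

    Right now that error is large, which is why I suspect it’s more than just the downloaded-vs-calculated calibration.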

    #103285

    David
    Participant

    Hi elisek,
    What you are trying to do sounds incredibly interesting and useful. Let me just say that I am not anywhere near qualified to talk about this, but here are three things that came to mind:
    Did you account for the different resolutions of the depth and color streams? That would of course only be relevant if the coordinate system is pixel-based.
    Did you calibrate both in the same way? I.e. the vector orthogonal to the box floor – I think it’s called the camera space vector in the calibration procedure.
    Is the distortion from different terrain heights relevant here?

    I highly doubt that this will help, but I would love to hear how your project is coming along.

    In another one of your posts I read about your modifications to the hydrology model – did you have any success with that?
    I am part of a group of rescue-engineering students from the University of Applied Sciences TH Köln, and we are working on a similar idea: teaching about river flood risks, human factors and prevention. Perhaps you can share some information on your project; we are very interested in it.

    Greetings from Germany,
    David

    #103294

    elisek
    Participant

    Hi David,

    The resolution for both cameras is 640×480, so I don’t think that should be a big issue. When I run KinectViewer from the Kinect library the two images do align in the 3D view, but the method it uses (via the Projector.cpp class) is not compatible with the way I’m extracting points from the image (I have the corners of my element rather than using the whole image as a texture), so I haven’t been able to translate that code into what I need. I am no expert either, especially in graphics and image processing.
    I haven’t tried the box method yet. I’m not sure whether the base plane equation will work for the color image (it is all technically z=0), but I don’t think it hurts to try. Thanks 🙂
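    For what it’s worth, the way I picture using the base plane for a marker corner is something like this (again placeholder names; the real plane would come from the sandbox calibration, and the helpers are the ones sketched in my first post):

    // The calibrated base plane is a 3D plane in camera space, stored here as a
    // unit normal n and an offset d (points p on the plane satisfy n . p == d).
    struct Plane { Vec3 n; double d; };

    // Signed elevation of a camera-space point above the base plane.
    double elevationAbovePlane(const Plane& base, const Vec3& p)
    {
        return base.n.x * p.x + base.n.y * p.y + base.n.z * p.z - base.d;
    }

    // For a marker corner at depth pixel (du, dv) with depth z:
    //   Vec3 corner = depthPixelToCameraSpace(depthIntrinsics, du, dv, z);
    //   double h    = elevationAbovePlane(basePlane, corner);
    // i.e. the plane only becomes meaningful once the corner is in camera space,
    // not while it is still a flat (z = 0) pixel in the color image.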

    As for the hydrology, unfortunately I wasn’t successful in implementing roughness (I found a paper I can probably use, but my attempts to introduce the modification haven’t worked so far, and I think I would need more time to work on the maths/modelling side of it). I have been able to introduce subsurface storage and permeability (where water permeates from the storage to groundwater, i.e. out of the system), and I will be using changes to attenuation to inform roughness (my plan is to see whether I can relate attenuation values to Manning’s n based on some observations, or at least provide sample environments that mimic actual terrain). My project is an internship developing the sandbox to introduce some more NFM-type functionality and to find an interactive way for the user to change these settings, so I am hoping to post at least some of my outcomes towards the end of it for the benefit of the wider community.
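    In case it’s useful, the subsurface part conceptually does something like this per cell and time step – a very simplified sketch with made-up names, not the actual simulation code:

    #include <algorithm>

    struct Cell
    {
        double surfaceWater;     // water depth on the surface [m]
        double storedWater;      // water held in subsurface storage [m]
    };

    struct SoilParams
    {
        double infiltrationRate; // surface -> storage [m/s]
        double storageCapacity;  // maximum storedWater [m]
        double permeability;     // storage -> groundwater, i.e. out of the system [m/s]
    };

    void updateCell(Cell& c, const SoilParams& s, double dt)
    {
        // Water infiltrates from the surface into storage until capacity is reached.
        double infil = std::min({ s.infiltrationRate * dt,
                                  c.surfaceWater,
                                  s.storageCapacity - c.storedWater });
        c.surfaceWater -= infil;
        c.storedWater  += infil;

        // Stored water percolates out to groundwater, leaving the system.
        double drain = std::min(s.permeability * dt, c.storedWater);
        c.storedWater -= drain;
    }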

    Elisa
