Reply To: Mapping color and depth pixels


#103294
elisek
Participant

Hi David,

The resolutions of both cameras are 640×480, so I don't think that should be a big issue. When I run KinectViewer from the Kinect library, the two images align in the 3D view, but the method it uses (in the Projector.cpp class) isn't compatible with the way I extract points from the image (I only have the corners of my element rather than using the whole image as a texture), so I haven't been able to translate that code into what I need. I'm no expert either, especially in graphics and image processing.
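For reference, here is a rough sketch of what I mean by mapping a single corner point from the depth image into the color image, assuming a simple pinhole model for each camera and a rigid depth-to-color transform. The intrinsics, extrinsics and pixel values below are placeholders, not the Kinect package's Projector API or real calibration data:

```cpp
// Minimal sketch (placeholder values, not the Kinect package's calibration):
// lift a depth pixel into 3D, then project it into the color camera.
#include <array>
#include <cstdio>

struct Intrinsics { double fx, fy, cx, cy; };   // focal lengths and principal point in pixels

// Assumed depth-to-color rigid transform: row-major 3x3 rotation and translation in metres.
struct Extrinsics { std::array<double,9> R; std::array<double,3> t; };

// Un-project a depth pixel (u,v) with depth z (metres) into the depth camera's 3D frame.
static std::array<double,3> unprojectDepth(const Intrinsics& d, double u, double v, double z)
{
    return { (u - d.cx) / d.fx * z, (v - d.cy) / d.fy * z, z };
}

// Transform the 3D point into the color camera frame and project to color pixel coordinates.
static std::array<double,2> projectToColor(const Extrinsics& e, const Intrinsics& c,
                                           const std::array<double,3>& p)
{
    double x = e.R[0]*p[0] + e.R[1]*p[1] + e.R[2]*p[2] + e.t[0];
    double y = e.R[3]*p[0] + e.R[4]*p[1] + e.R[5]*p[2] + e.t[1];
    double z = e.R[6]*p[0] + e.R[7]*p[1] + e.R[8]*p[2] + e.t[2];
    return { c.fx * x / z + c.cx, c.fy * y / z + c.cy };
}

int main()
{
    Intrinsics depthCam { 580.0, 580.0, 320.0, 240.0 };   // placeholder 640x480 intrinsics
    Intrinsics colorCam { 525.0, 525.0, 320.0, 240.0 };
    Extrinsics depthToColor { {1,0,0, 0,1,0, 0,0,1}, {0.025, 0.0, 0.0} };  // ~2.5 cm baseline

    // One corner of the tracked element in the depth image, with its measured depth:
    std::array<double,3> p = unprojectDepth(depthCam, 210.0, 155.0, 0.95);
    std::array<double,2> uv = projectToColor(depthToColor, colorCam, p);
    std::printf("color pixel: (%.1f, %.1f)\n", uv[0], uv[1]);
    return 0;
}
```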
I haven't tried the box method yet. I'm not sure whether the base plane equation will work for the color image (since it is technically all at z = 0), but I don't think it hurts to try. Thanks 🙂
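To illustrate what I mean about the base plane: the plane equation gives an elevation for a 3D point in camera space, so a color pixel would have to be lifted into 3D (via its matching depth sample) before the equation can be evaluated. A tiny sketch with made-up plane coefficients, not values from an actual BoxLayout file:

```cpp
// Evaluate a base plane (unit normal n and offset, with n . p == offset on the plane)
// for a camera-space point. Coefficients below are placeholders.
#include <cstdio>

struct Plane { double nx, ny, nz, offset; };

// Signed elevation of a camera-space point above the base plane.
static double elevation(const Plane& base, double x, double y, double z)
{
    return base.nx*x + base.ny*y + base.nz*z - base.offset;
}

int main()
{
    Plane base { 0.0, 0.0, 1.0, -1.0 };        // example: plane z = -1 m in depth-camera space
    double h = elevation(base, 0.10, 0.05, -0.85);
    std::printf("height above base plane: %.2f m\n", h);
    return 0;
}
```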

As for the hydrology, unfortunately I wasn't successful in implementing roughness (I found a paper that I can probably use, but my attempts to introduce the modification haven't worked, and I think I would need more time on the maths/modelling side of it). I have been able to introduce subsurface storage and permeability (where water permeates from the storage to groundwater, i.e. out of the system), and I will be using changes to attenuation to stand in for roughness; my plan is to see whether I can relate attenuation values to Manning's n based on some observations, or at least provide sample environments that mimic actual terrain. My project is an internship developing the sandbox to introduce some more NFM-type functionality and to find an interactive way for the user to change these parameters, so I am hoping to post at least some of my outcomes towards the end of it for the benefit of the wider community.
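For anyone curious, here is a very rough, conceptual sketch of the per-cell storage/permeability behaviour I described. The actual simulation runs in the sandbox's GPU water shaders; the names and rates below are just illustrative:

```cpp
// Conceptual per-cell update: surface water infiltrates into a capped subsurface
// storage, and the storage permeates out of the system (to groundwater).
#include <algorithm>
#include <cstdio>

struct Cell {
    double surfaceWater;   // water depth on the terrain surface [m]
    double storage;        // water held in subsurface storage [m]
    double storageCap;     // maximum subsurface storage [m]
};

// One time step: infiltration limited by rate, available surface water and free storage;
// permeation limited by rate and stored water.
static void updateCell(Cell& c, double infilRate, double permRate, double dt)
{
    double infil = std::min({ infilRate * dt, c.surfaceWater, c.storageCap - c.storage });
    c.surfaceWater -= infil;
    c.storage      += infil;

    double perm = std::min(permRate * dt, c.storage);
    c.storage -= perm;                 // lost to groundwater, i.e. leaves the system
}

int main()
{
    Cell c { 0.02, 0.0, 0.01 };        // 2 cm of surface water, empty 1 cm storage
    for (int step = 0; step < 5; ++step)
        updateCell(c, 0.004, 0.001, 1.0);
    std::printf("surface %.4f m, storage %.4f m\n", c.surfaceWater, c.storage);
    return 0;
}
```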

Elisa
