Forum Replies Created

    in reply to: Terrian model? #102311
    matthias
    Participant

    Yeah, Frank, these are some good ideas!

    Still hoping this feature gets implemented eventually…

    in reply to: Terrian model? #102184
    matthias
    Participant

    Coming from a Landscape Architecture background, and currently using the AR Sandbox as part of my bachelor’s thesis, I’d also love to see that feature implemented. Being able to manipulate a real-world terrain would open up a whole new range of possibilities!

    in reply to: Warsaw University try on ARSandbox #102010
    matthias
    Participant

    See this thread:
    http://lakeviz.org/forums/topic/weird-pattern/

    I have the same issue, and never got an answer as to whether internal calibration would make a difference. But it seems you will have to live with these inconsistencies.

    in reply to: Weird pattern #101695
    matthias
    Participant

    Is there any way to lessen these effects? Will internal calibration make a difference?

    in reply to: Min system specs? (not models) #101211
    matthias
    Participant

    I’m running my sandbox on the following:

    Hardware:
    AMD Phenom(tm) II X4 965 Processor
    12 GB RAM
    GeForce GT 430
    Kinect for Xbox 360, Model 1414

    Software:
    Linux Mint 17.2 Rafaela (Linux version 3.16.0-38-generic)
    NVIDIA binary driver – version 340.76

    The water simulation is a bit choppy, but apart from that, it works just fine. So I assume you should be good to go with the mainboard and processor you have, and a new graphics card, depending on how well you want the water simulation to run…

    in reply to: Vrui Video Viewer makes IR visible? #101206
    matthias
    Participant

    Trying to answer my own question here, though I haven’t gotten very far.

    When I use guvcview to look through the Kinect’s eyes and switch to the color model “Y10B”, I get the same result as above.

    Other color models, such as GRBG, RGB3 or YU12, show a regular color picture. So I am still wondering whether those dots are actually the infrared sent out by the Kinect, or just some interpretation of data noise in the Y10B color mode.

    So I assume the above-mentioned VideoViewer uses the Y10B model. What that actually means and how it works, even Google can’t really tell me. The only information on it I found is this.
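
    If it helps: “Y10B” looks like V4L2’s 10-bit packed greyscale format (V4L2_PIX_FMT_Y10BPACK), where four pixels are packed into five bytes, most significant bit first; that would also fit the Kinect’s IR camera delivering 10-bit samples. Assuming that is what the viewer receives, here is a rough Python sketch of unpacking such a buffer (the function name and frame dimensions are placeholders of mine):

    import numpy as np

    def unpack_y10b(raw, width, height):
        """Unpack a Y10B buffer (10-bit packed greyscale) into uint16 pixels."""
        # np.unpackbits yields the bitstream MSB-first, matching Y10B packing.
        bits = np.unpackbits(np.frombuffer(raw, dtype=np.uint8))
        bits = bits[: width * height * 10].reshape(-1, 10)
        weights = 1 << np.arange(9, -1, -1)  # MSB first: 512, 256, ..., 1
        pixels = (bits * weights).sum(axis=1).astype(np.uint16)
        return pixels.reshape(height, width)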

    in reply to: Model Export #101202
    matthias
    Participant

    Thankfully, this thread was opened before I could even ask for the same thing.

    I have now also been using LWOWriter to export 3D data. As I found out, the arguments should actually read:

    $ ./bin/LWOWriter <video stream base name> <output file name.lwo> <first exported frame> <last exported frame>

    When entering two different numbers, i.e. more than one frame, the LWOWriter still only generates one single file, though. Would I have to specify more output file names, or does it average multiple frames into one file (which doesn’t seem to be the case), or can it simply not generate more than one LWO per run?
    No matter which frame numbers are given as arguments, it always reports that it is processing all the frames, i.e. it starts at 1 and runs all the way through to the end of the stream.

    The resulting mesh is very noisy, I guess because no averaging took place. I got a savvy friend interested in trying to run this code over the saved streams before sending them to the LWOWriter. Any thoughts on that?
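
    In case anyone wants to try the averaging idea before a proper tool exists: here is a rough sketch of per-pixel averaging over already-decoded depth frames (decoding the saved stream format is not covered here, and the function name is my own):

    import numpy as np

    def average_depth_frames(frames):
        """Average (H, W) depth frames per pixel, ignoring invalid 0 samples."""
        stack = np.stack(frames).astype(np.float64)
        valid = stack > 0                    # the Kinect reports 0 for "no reading"
        counts = valid.sum(axis=0)
        sums = np.where(valid, stack, 0.0).sum(axis=0)
        with np.errstate(invalid="ignore"):
            mean = sums / counts             # NaN wherever no frame had data
        return np.where(counts > 0, mean, 0.0)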

    Wsgrah, what do your meshes look like? Could you post a pic of some results? Here is an example of a single frame LWOWriter export, imported in Blender and exported as .stl for import in Meshlab:

    Look how bumpy and noisy the sandbox walls are. In reality, they are just flat boards 😉

    I am currently experimenting with how to achieve the best possible results in obtaining 3D data. Here’s what I am trying:
    – LWOWriter, single frame export > smoothing in Meshlab (see the sketch after this list)
    – LWOWriter, multiple frames export > merging/averaging in Meshlab
    – (possibly, if my friend succeeds) Smoothing saved stream > LWOWriter
    – external 3D scanning software (Skanect & ReconstructMe, running under Windows, requires moving the Kinect for best results)
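
    For the Meshlab smoothing step mentioned in the list, a minimal scripted sketch, assuming the pymeshlab package (2022.2 or newer, where the Laplacian Smooth filter is exposed as apply_coord_laplacian_smoothing) and placeholder file names:

    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh("single_frame_export.stl")
    ms.apply_coord_laplacian_smoothing(stepsmoothnum=3)  # 3 smoothing passes
    ms.save_current_mesh("single_frame_smoothed.stl")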

    Obviously, using 3D scanning software and moving the sensor will yield the best results. It kinda disrupts the workflow to switch OS and remove the Kinect from its mount, though. That’s why I am looking for a viable solution within Linux, and without dismounting the Kinect.

    What are the chances of having data export implemented in the Sandbox software itself, with watertightness and some averaging included?

    in reply to: vertical line in the display #101200
    matthias
    Participant

    I am having the same issue with those vertical lines in the sandbox. They do alter the topography. See screenshot below (ignore the colors, I was just experimenting, but the b/w makes the lines really visible):
    [Screenshot of the AR Sandbox application]

    However, as Oliver already pointed out, this seems to be an issue with the Kinect itself. Here is a screenshot of a 3D scan of the sandbox with some test objects in it, made with the software Skanect without moving the Kinect while scanning (when moving the sensor, the lines disappear):
    [Screenshot of the exported model from the 3D scan]

    So I guess there is not much that can be done, right?
