Forum Replies Created

Viewing 15 posts - 331 through 345 (of 522 total)
  • in reply to: Complete Installation Instructions #101764
    Oliver Kreylos
    Keymaster

    The Kinect software works fine over USB 3.0, but the problem is that some USB 3.0 host controllers are poorly supported in the Linux kernel itself. Basically, there are a lot of broken USB 3.0 controllers on the market that require “quirks” to make them work, and those are only implemented in secret custom drivers, typically only for Windows. Unfortunately, your laptop might have one of those USB chips.

    You can find out which controller chip you have via lspci | grep "USB 3.0". The ones known to be good are by NEC or Renesas, chip ID uPD720200 or similar.
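As a sketch, that check could be scripted; the vendor strings (NEC, Renesas) come from this post, but the matching below is illustrative only, not an exhaustive list of good or bad chips:

```python
# Hedged sketch of the check above: list USB 3.0 (xHCI) host controllers
# from lspci output and flag the vendors the post names as known-good.
# The match strings are illustrative, not an exhaustive list.
import subprocess

def is_known_good(lspci_line):
    """True if a controller line mentions a known-good vendor (NEC/Renesas)."""
    return any(vendor in lspci_line for vendor in ("NEC", "Renesas"))

def usb3_controllers():
    """Return the lspci lines that describe USB 3.0 host controllers."""
    out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
    return [line for line in out.splitlines()
            if "USB 3.0" in line or "xHCI" in line]
```

For example, a line mentioning the Renesas uPD720200 would be flagged as known-good.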

    in reply to: Complete Installation Instructions #101763
    Oliver Kreylos
    Keymaster

    That appears to be a configuration problem. SARndbox is looking for the projector calibration matrix in the wrong place. You can test this by copying ProjectorMatrix.dat from /home/matt/src/SARndbox-1.6/etc/SARndbox-1.6/ProjectorMatrix.dat (where it should be) to /home/matt/src/SARndbox-1.6/ProjectorMatrix.dat (where SARndbox seems to be looking for it).
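A minimal sketch of that diagnostic test, assuming the paths from this post (the helper name is mine):

```python
# Hedged sketch of the diagnostic test above: copy ProjectorMatrix.dat
# from its proper location to the location where SARndbox seems to be
# looking. The helper name is mine; the paths in the comment below are
# the ones from this post.
import shutil
from pathlib import Path

def copy_projector_matrix(good_path, looked_up_path):
    """Copy the calibration matrix and confirm it arrived."""
    shutil.copy(str(good_path), str(looked_up_path))
    return Path(looked_up_path).is_file()

# copy_projector_matrix(
#     "/home/matt/src/SARndbox-1.6/etc/SARndbox-1.6/ProjectorMatrix.dat",
#     "/home/matt/src/SARndbox-1.6/ProjectorMatrix.dat")
```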

    But it shouldn’t be looking there. Somehow your SARndbox source package seems to have gotten messed up. It’s probably best to back up your configuration files (BoxLayout.txt, ProjectorMatrix.dat), remove the entire SARndbox-1.6 directory, and download a fresh tarball. Then copy the config files back where they really belong, and you should be good.

    The Pi2 is going to be marginal for running an AR Sandbox even without water, but this problem has nothing to do with the Pi.

    in reply to: Complete Installation Instructions #101762
    Oliver Kreylos
    Keymaster

“for me sudo apt-get install libdpkg-perl was the key.”

    Thank you for figuring that out, and reporting back. I can add that package to the list in the Build-Ubuntu.sh script.

    in reply to: BenQ MX631S projection angle to sandbox #101761
    Oliver Kreylos
    Keymaster

    The 105% figure means that, if your projected image is 1m tall, its bottom edge will be 5cm (5% of 1m) above the imaginary center line of the projector.
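The arithmetic behind that figure is simple; here is a sketch (the function name and parameters are mine, for illustration only):

```python
# Hedged sketch of the offset arithmetic described above. For a projector
# whose spec sheet gives an offset of 105%, the bottom edge of the image
# sits (105 - 100)% of the image height above the lens centerline.
def bottom_edge_offset(image_height_m, offset_percent=105.0):
    """Height of the image's bottom edge above the centerline, in meters."""
    return image_height_m * (offset_percent - 100.0) / 100.0

print(bottom_edge_offset(1.0))  # 0.05, i.e. 5 cm for a 1 m tall image
```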

    The projector lens is designed such that the image will be in focus everywhere if projected onto a flat screen that is at a right angle to that imaginary center line. Concretely, if you put the projector completely level onto a table and project onto a vertical wall, the image will sit 5% above the horizontal plane going through the lens center, and the entire image will be in focus.

    The downside of above-centerline projection for AR Sandboxes is that the light at the front edge of the sandbox, where most users will stand, comes in at a fairly low angle, which reduces brightness, and can lead to shadowing when there are steep slopes facing away from the projector. A centerline projector directly above the sandbox’s center point would improve that.

    The ECHO, TERC, and LHS setups use a compromise where the projector is shifted towards the center of the box, and tilted out to move the projection image back to the center. This was necessary because the projector hood was designed too small.

    There are two problems when a projection surface is not orthogonal to the projector’s centerline: keystone distortion and focus. Keystone distortion is effectively taken care of by the AR Sandbox’s calibration procedure, but focus remains. With a tilted projector, only a single horizontal line is in perfect focus; lines above and below that become increasingly blurry. It’s not a huge effect, but it is noticeable. Enough that I don’t recommend building an AR Sandbox with a tilted projector.

    in reply to: Complete Installation Instructions #101733
    Oliver Kreylos
    Keymaster

    Which Linux distribution and which exact version are you using?

    in reply to: Version of Kinect #101730
    Oliver Kreylos
    Keymaster

    As good a place to start as any: The Kinect 2.0.

    in reply to: Version of Kinect #101728
    Oliver Kreylos
    Keymaster

The first-generation Kinect has a nominal depth map resolution of 640×480 pixels. Its effective resolution is much lower, as only a fraction of pixels carry depth measurements at any given moment, due to the way its structured-light scanning technology works. The second-generation Kinect has a depth map resolution of 512×424 pixels, but it carries a depth measurement on every pixel, thanks to its time-of-flight scanning technology.

In short, the AR Sandbox’s recommended display resolution of 1024×768 is higher than that of any available depth camera, and also higher than the water simulation grid sizes supported by current graphics cards. You can run the AR Sandbox at higher display resolution, but you won’t get more detail in the terrain model or water behavior. The only difference is narrower contour lines.

    in reply to: range of color in Rawinectviewer #101713
    Oliver Kreylos
    Keymaster

    You can change the depth range by passing a -depthRange <min depth> <max depth> parameter on RawKinectViewer’s command line. Depth values are not metric distances, however. You should try -depthRange 900 1090 first and see how that works out.

Alternatively, you can define a base plane interactively, and color everything by distance to that base plane. Flatten the sand surface, and select “Average Frames” from the main menu. Then assign a “Define Depth Planes” tool to two arbitrary buttons, say “1” and “2”. Press “1” a few times to select points from your sand surface, and press “2” once to define a plane approximating all those points. Afterwards, the edges of the box should show up clearly. You can then assign the 3D measurement tool and measure points as usual.
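Conceptually, defining a depth plane amounts to fitting a plane to the sampled points. Here is a hedged numpy sketch of that idea; it illustrates the concept only and is not the Kinect package’s actual implementation:

```python
# Hedged sketch of what "Define Depth Planes" conceptually does: fit a
# plane z = a*x + b*y + c to the sampled 3D points by linear least
# squares. Illustration only, not the Kinect package's code.
import numpy as np

def fit_plane(points):
    """points: iterable of (x, y, z); returns (a, b, c) for z = a*x + b*y + c."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, _, _, _ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

# Four points sampled from a flattened sand surface (here an exact plane):
a, b, c = fit_plane([(0, 0, 1), (1, 0, 3), (0, 1, 2), (1, 1, 4)])
print(round(a, 6), round(b, 6), round(c, 6))  # 2.0 1.0 1.0 (z = 2x + y + 1)
```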

    in reply to: Calibration "CD" always yellow except outer edge. HELP!! #101712
    Oliver Kreylos
    Keymaster

When you measured the corner points of the AR Sandbox, did you measure in counter-clockwise order, or in the correct order: lower left, lower right, upper left, upper right? If the former, you need to swap the last two lines in BoxLayout.txt.
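A minimal sketch of the fix, assuming BoxLayout.txt stores a base plane line followed by the four corner points (the labels below are placeholders, not the real file contents):

```python
# Hedged sketch of the fix: swap the last two lines of BoxLayout.txt to
# turn a counter-clockwise corner order into the expected lower-left,
# lower-right, upper-left, upper-right order. Labels are placeholders;
# the real file contains a base plane equation and corner coordinates.
def swap_last_two_lines(lines):
    """Return a copy of the line list with the last two entries swapped."""
    fixed = list(lines)
    fixed[-2], fixed[-1] = fixed[-1], fixed[-2]
    return fixed

layout = ["<base plane>", "<lower left>", "<lower right>",
          "<upper right>", "<upper left>"]  # counter-clockwise order
print(swap_last_two_lines(layout))
```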

    in reply to: Min system specs? (not models) #101699
    Oliver Kreylos
    Keymaster

    “CPU% on the other hand is not so pretty, it’s in the %175-195% for SARndbox process alone (over 100% because split across two CPU cores I presume).”

In your case the CPU appears to be the bottleneck. I used to run exclusively on AMD, and nothing could touch the Athlon in its day, but it doesn’t hold a candle to Intel’s newer Core CPUs. Your CPU simply can’t feed the graphics card fast enough to push the latter to 100% utilization (and yes, it’s more than 100% CPU because the AR Sandbox code is multi-threaded). Once your CPU clears the minimum-spec hurdle, the bottleneck is going to become, and then stay, the GPU.

    in reply to: Rain setup (options) #101693
    Oliver Kreylos
    Keymaster

    Yes, I realize what’s going on. Try running the SARndbox without the -fpv command line parameter. You’ll notice that the projected sand surface doesn’t line up with the real sand surface at all. That’s because without -fpv the sandbox doesn’t use the calibration matrix that you created.

    However, when running SARndbox with -fpv, while the calibration matrix is applied to the display, it is not applied to the software’s internal representation. Meaning, when you think you’re clicking on some part of the map, the software thinks you clicked on something entirely different. Fixed projector view, via -fpv, and mouse-based interaction were never meant to be used together.

    You can work around the problem by running the sandbox without -fpv, and using standard mouse-based navigation to manually line up the projected and real sand surfaces as closely as you can (you will not be able to get a perfect alignment). Then save the current viewing state via “Save View…” in Vrui’s system menu, and the next time you start SARndbox with -fpv, load the view you previously saved, either via the main menu, or the -loadView <view file name> command line option. After that, mouse interaction should be closely aligned.

    in reply to: Intrinsic Calibration Mean #101686
    Oliver Kreylos
    Keymaster

    Intrinsic calibration defines how a depth camera such as the Kinect converts from two-dimensional depth images to three-dimensional geometry. Without intrinsic calibration, the surfaces reconstructed by depth cameras would not be a 1:1 match to their real counterparts. While each Kinect is individually calibrated at the factory, and contains the resulting calibration data in its firmware, the calibration is not particularly good. Most importantly, a factory-calibrated Kinect will reconstruct a flat plane as a gently curved bowl. In the AR Sandbox, this would lead to elevation contour lines that are not completely flat when viewed from the side, or water that might appear to flow uphill.

    The custom intrinsic calibration procedure in the Kinect package is rather complex, but it can correct for almost all of these problems.

In technical terms, the result of intrinsic calibration is a 4×4 homogeneous projection matrix that transforms depth-valued depth image pixels (px, py, d, 1) into homogeneous 3D positions (wx, wy, wz, w). (These 4-vectors are homogeneous points; to convert them to regular affine points, the first three components are divided by the fourth, which is then dropped. So the affine counterparts of the two given vectors would be (px, py, d) and (wx/w, wy/w, wz/w), respectively.)
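A hedged numeric sketch of that convention: multiply the depth pixel by a 4×4 matrix and dehomogenize. The matrix below is made up for illustration; a real calibration matrix comes out of the Kinect package’s intrinsic calibration procedure.

```python
# Hedged sketch of the homogeneous-point convention described above:
# apply a 4x4 matrix to (px, py, d, 1) and dehomogenize by dividing the
# first three components by the fourth.
import numpy as np

def unproject(M, px, py, d):
    """Apply 4x4 matrix M to a depth pixel and return the affine 3D point."""
    wx, wy, wz, w = M @ np.array([px, py, d, 1.0])
    return np.array([wx / w, wy / w, wz / w])

M = np.diag([2.0, 2.0, 2.0, 1.0])   # hypothetical, NOT a real calibration
print(unproject(M, 1.0, 2.0, 3.0))  # [2. 4. 6.]
```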

The depth conversion formula is the part of the intrinsic 4×4 matrix that describes how a raw depth value d, as reported by the Kinect, is converted into a metric distance z in centimeters. Concretely, the conversion is z = A / (B - d), where A and B are device-dependent constants determined during calibration. A is usually around 32000, and B is around 1090.
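In code, the conversion is a one-liner; the constants below are the ballpark values from this post, and each Kinect has its own calibrated values:

```python
# Hedged sketch of the depth conversion z = A / (B - d), using the
# ballpark constants from the post; each Kinect has its own calibrated
# values for A and B.
def depth_to_cm(d, A=32000.0, B=1090.0):
    """Convert a raw Kinect depth value d to a metric distance in cm."""
    return A / (B - d)

print(depth_to_cm(690.0))  # 80.0 cm
```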

    in reply to: How to enable / use rain function? #101683
    Oliver Kreylos
    Keymaster

    Water simulation / rain is turned on by default, but you might have to configure the elevation range at which the software detects “rain clouds,” especially if you use a non-standard base plane elevation.

    To test the water simulation, create a water tool: When the sandbox software is running, press and hold the “1” key, move the mouse to highlight “Manage Water Locally” from the tool selection menu, and release “1” to select. Then press and release “2” to assign the secondary function. Afterwards, press and hold “1” to let it rain underneath the mouse cursor.

    (You can use any keys or buttons you want instead of “1” and “2”.)

    To make it rain from your hands, you need to move your hands into the rain elevation range and hold still. By default, the rain range starts a few centimeters above the highest terrain elevation configured in your color map. You can configure the rain elevation range directly via the -rer <min elevation> <max elevation> command line parameter, where <min elevation> and <max elevation> are elevations relative to the base plane configured in BoxLayout.txt, in centimeters.
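The range check itself can be sketched as follows; the default values below are made-up examples for illustration, not the software’s actual defaults:

```python
# Hedged sketch of the rain-range check: a hand triggers rain when its
# elevation above the BoxLayout.txt base plane (in cm) falls inside the
# range that -rer <min elevation> <max elevation> would configure. The
# defaults below are made-up examples, not the software's defaults.
def in_rain_range(elevation_cm, rer_min=40.0, rer_max=50.0):
    return rer_min <= elevation_cm <= rer_max

print(in_rain_range(45.0))  # True: hand held inside the rain range
print(in_rain_range(10.0))  # False: too close to the sand surface
```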

    in reply to: Version of Kinect #101682
    Oliver Kreylos
    Keymaster

    From the instructions page:

    The AR Sandbox software, or rather the underlying Kinect 3D Video Package as of version 2.8, supports all three models of the first-generation Kinect (Kinect-for-Xbox 1414 and 1473 and Kinect for Windows). All three are functionally identical, so get the cheapest model you can find. Note: The second-generation Kinect (Kinect for Xbox One or Kinect for Windows v2) is not yet supported by the AR Sandbox software.

    Oliver Kreylos
    Keymaster

    Yes, those components will work, but consider that you’re going to be paying almost USD 3,000 more, compared to a Core i7/GeForce combination, for no significant performance improvement.
