Other applications for AR sandbox

  • #100937
    Anonymous
    Inactive

    I stumbled across a video of the AR Sandbox that led me here. I work in wildland fire, and we use sand tables for fire training. There's a company that makes a really cool product, seen here in this video: https://youtu.be/aLmOxnPKUqk. The problem is that their product costs between $25k and $75k. When I found that your table could be built for less than $1k, I wondered whether you believe some of the basic features could be done with your setup: adding structures, vegetation overlays, etc.?

    #100941
    Oliver Kreylos
    Keymaster

    To my knowledge, the AR Sandbox has one major benefit over the SimTable: it has a 3D camera that captures the real sand surface in real time. SimTable — unless they upgraded it recently — does not, so it has to rely on the users to approximate the projected topography with the sand manually, without any feedback. If the approximation is off, subsequently projected color imagery will be distorted and misleading.

    I am working on a guided method to recreate existing terrain for SARndbox v2.0, which will include the ability to project aerial photography or other color imagery without distortion, but we don’t have specific plans to add other major functionality included in SimTable. I’m expecting that they’ll release a new version with 3D camera support and real-time feedback soonish, now that the path is cleared.
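
    To make that feedback idea concrete: the core of such a guided method is to compare each live depth frame against the target DEM and project the signed error back onto the sand. Below is a minimal sketch in Python/NumPy (the SARndbox itself is C++; the function and parameter names here are hypothetical, not its actual API):

        import numpy as np

        def sculpting_feedback(depth_frame, target_dem, tolerance=0.005):
            """Compare a live depth frame (meters, resampled onto the DEM's
            grid) against the target DEM and return an RGB overlay:
            red where sand must be removed, blue where it must be added,
            green where the surface is within tolerance."""
            error = depth_frame - target_dem  # signed height error per pixel
            overlay = np.zeros(error.shape + (3,), dtype=np.uint8)
            overlay[error > tolerance] = (255, 0, 0)            # too high: remove sand
            overlay[error < -tolerance] = (0, 0, 255)           # too low: add sand
            overlay[np.abs(error) <= tolerance] = (0, 255, 0)   # close enough
            return overlay

        # Example: guiding a flat 10 cm sand bed toward a ramp-shaped DEM
        live = np.full((480, 640), 0.10)
        target = np.tile(np.linspace(0.05, 0.15, 640), (480, 1))
        rgb = sculpting_feedback(live, target)  # project this image onto the sand

    Projected continuously, an overlay like this gives the sculptor immediate feedback until the whole surface reads green.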

    #101000
    Anonymous
    Inactive

    I just browsed this forum for the first time today and noticed this thread.

    Cousin Eddie, thank you for pointing out our video. I am the inventor and owner of Simtable.

    The hardware cost for the Simtable is a relatively small component of our price. Hardware, by the time you add up the projector, camera, computer, sand media, table, cabling, stands, mounts, etc., comes to about $3,500 for us. We have been working on a mobile version that brings this down to just a projector and a mobile phone; assuming one already has a phone, that could be as low as $100-$200 as projector prices fall.

    Most of our current cost as a small business is in paying developer and staff salaries to develop interactive GIS simulations (fire, flood, hazmat plumes, traffic, and forest ecology), to cover staff travel to meet with users, and to set up the tables and train users. Further, we provide real-time networked GIS data to feed the simulations, as well as bi-directional syncing between the Simtable and mobile clients and laptop browsers. We have not been funded by public grant money; at the end of the day, developers need to eat 🙂

    We recognize not everyone has the budget, so we have donated our software to schools and volunteer fire departments. We've also open-sourced our agent-based modeling framework, agentscript.org, for users to develop their own models.
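
    For readers curious what the core of an agent-based fire model looks like, here is a toy cell-based sketch in Python (AgentScript itself is JavaScript, and this is not its API; the single spread probability stands in for real fuel, wind, and slope inputs):

        import numpy as np

        UNBURNED, BURNING, BURNED = 0, 1, 2  # cell states

        def step(grid, p_spread=0.4, rng=None):
            """Advance the fire one tick: each burning cell may ignite its
            4-neighbors with probability p_spread, then burns out."""
            if rng is None:
                rng = np.random.default_rng()
            new = grid.copy()
            for r, c in np.argwhere(grid == BURNING):
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]
                            and grid[nr, nc] == UNBURNED
                            and rng.random() < p_spread):
                        new[nr, nc] = BURNING
                new[r, c] = BURNED
            return new

        # Ignite the center of a 50x50 fuel bed and run 30 ticks
        grid = np.zeros((50, 50), dtype=np.uint8)
        grid[25, 25] = BURNING
        for _ in range(30):
            grid = step(grid)
        print((grid == BURNED).sum(), "cells burned")

    A production model replaces the fixed probability with terrain, fuel, and weather layers, which is where the networked GIS data mentioned above comes in.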

    Oliver, we haven’t met but I’ve followed your project since the release of the Kinect and the hacking fervor that followed. I’ve been meaning to contact you for some time but haven’t. You can reach me at 505-577-5828 or stephen@simtable.com. We have a mutual friend in Jim Crutchfield who was in Santa Fe before coming to Davis.

    BTW, very nice implementation! I think you've nailed a nice interface with a very playful and engaging demo that requires no instruction to use. And nice work packaging up an open-source project for others to experiment with!

    I just wanted to correct a bit in your post: Simtable *does* do interactive 3D scanning and gives feedback for sculpting to match a given DEM. You can see examples of the scanning we were doing back in 2008 on an old archive page: http://redfish.com/simtable/

    I did experiment with the early ZCam before Microsoft purchased them prior to developing the Kinect. I decided against using depth cameras for ambient computing because we already have controllable light coming from the projector, and it seemed redundant to me to add laser light from depth cameras. We think rooms can be covered with multiple projectors (lamps) and cheap cameras to do depth scanning, auto-warping, and blending.

    While there's some advantage to having that light in the infrared so it's imperceptible to the user (as the Kinect does), we are implementing selective scanning, as well as leveraging the projected interface itself to determine depth from the offset observed in the camera, without distracting the user with structured-light patterns. This can be done at real-time refresh rates. As we replace industrial cameras like the PtGrey with mobile phones, we're finding that the phones' high resolutions yield extremely high-resolution depth scans; some consumer mobile phone cameras are now 18 megapixels or better. You can see some of our recent direction, including depth scanning and warp correction, here:
    http://bit.ly/AnySurfaceReview
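
    The offset-based depth measurement described here is classic projector-camera triangulation: a feature projected from a known projector pixel lands at a shifted position in the camera image, and for a rectified pair that shift (disparity) maps to depth as z = f*b/d. A small sketch in Python, with made-up calibration numbers rather than Simtable's:

        import numpy as np

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Triangulate depth for a rectified projector-camera pair:
            z = f * b / d, where d is the pixel offset between where a
            projected feature is seen and where it would fall on a
            reference plane."""
            d = np.asarray(disparity_px, dtype=float)
            z = focal_px * baseline_m / np.maximum(d, 1e-6)
            return np.where(d > 0, z, np.inf)  # non-positive offsets: no depth

        # Illustrative only: a high-resolution phone camera with a
        # ~4000 px focal length, mounted 0.25 m from the projector
        print(depth_from_disparity([500.0, 1000.0, 2000.0],
                                   focal_px=4000.0, baseline_m=0.25))
        # -> [2.  1.  0.5]  (meters)

    Higher camera resolution means finer disparity steps and therefore finer depth resolution, which is why the jump from industrial cameras to 18 MP phone sensors pays off.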

    I hope to meet up sometime. Again, awesome work!
