Technology FAQs

Technology Behind the AR Sandbox

What is the hardware?

The AR Sandbox uses a 3D camera, such as a Microsoft Kinect, mounted above the sandbox to scan the 3D shape of the sand surface in real time, and additionally to detect hand gestures above the surface to create “virtual rain clouds.” A computer collects data from the camera and creates a real-time dynamic topography map. A projector, also mounted above the sandbox, projects that topography map back onto the real sand, calibrated such that real and virtual features line up exactly.
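
In outline, the processing loop looks like the following sketch. All names here are hypothetical stand-ins for the real camera and rendering code, not the actual AR Sandbox implementation:

    #include <vector>

    struct DepthFrame {
        int width = 0, height = 0;
        std::vector<float> depths;  // per-pixel distance from the camera
    };

    // Hypothetical stand-ins; the real system uses the Kinect framework and OpenGL.
    DepthFrame captureDepthFrame() { return {}; }               // grab one 3D camera frame
    DepthFrame filterStableSurface(DepthFrame f) { return f; }  // keep only the non-moving sand surface
    void renderTopography(const DepthFrame&) {}                 // elevation colors + contour lines
    void presentToProjector() {}                                // projector displays the rendered map

    int main() {
        for (;;) {                                    // one iteration per camera frame
            DepthFrame raw  = captureDepthFrame();
            DepthFrame sand = filterStableSurface(raw);
            renderTopography(sand);    // calibrated so virtual features align with the real sand
            presentToProjector();
        }
    }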

What is the software?

The AR Sandbox software, available for download as free and open-source software under the GNU General Public License, is based on the Vrui VR development toolkit and the Kinect 3D video processing framework, both also developed at the University of California, Davis. The software runs on Linux operating systems, is written in C++, and uses the OpenGL 3D graphics library to draw a real-time topographic map with customizable elevation color mapping and contour lines.
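
As an illustration of what such a rendering step involves, here is a minimal sketch of mapping an elevation to a color and testing whether a pixel lies on a contour line. The color stops, contour interval, and line width are made-up values, not the software's defaults:

    #include <cmath>
    #include <cstdio>

    struct Color { float r, g, b; };

    // Hypothetical two-stop color ramp: low elevations blue, high elevations brown.
    // The real software lets users customize the elevation color map.
    Color elevationColor(float elev, float minElev, float maxElev) {
        float t = (elev - minElev) / (maxElev - minElev);  // normalize to [0, 1]
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        const Color low  = {0.1f, 0.3f, 0.8f};  // low: blue
        const Color high = {0.6f, 0.4f, 0.2f};  // high: brown
        return {low.r + t * (high.r - low.r),
                low.g + t * (high.g - low.g),
                low.b + t * (high.b - low.b)};
    }

    // A pixel is on a contour line if its elevation lies within half a line
    // width of a multiple of the contour interval (here 10 cm lines, 0.5 cm wide).
    bool onContourLine(float elev, float interval = 10.0f, float lineWidth = 0.5f) {
        float d = std::fabs(elev - interval * std::round(elev / interval));
        return d < lineWidth * 0.5f;
    }

    int main() {
        Color c = elevationColor(12.0f, 0.0f, 30.0f);
        std::printf("r=%.2f g=%.2f b=%.2f contour=%d\n",
                    c.r, c.g, c.b, (int)onContourLine(12.0f));
    }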

The AR Sandbox simulates the flow of virtual water over the real sand surface using a physically realistic, real-time water simulation implemented as a set of OpenGL Shading Language (GLSL) programs, which are run directly on the computer’s 3D graphics card. The water simulation is based on the Saint-Venant system of shallow water equations, which are a depth-integrated version of the Navier-Stokes equations describing fluid flow. The current state of the water simulation is displayed by overlaying it onto the currently drawn topographic map.
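
In two dimensions, one common conservative form of the Saint-Venant system is (with h the water depth, (u, v) the flow velocity, g the gravitational acceleration, and B the elevation of the sand surface):

    ∂h/∂t + ∂(hu)/∂x + ∂(hv)/∂y = 0
    ∂(hu)/∂t + ∂(hu² + ½gh²)/∂x + ∂(huv)/∂y = −gh ∂B/∂x
    ∂(hv)/∂t + ∂(huv)/∂x + ∂(hv² + ½gh²)/∂y = −gh ∂B/∂y

The first equation conserves water volume; the other two conserve momentum, with the right-hand sides accounting for the slope of the sand surface.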

What kinds of data are being used in the exhibit?

The AR Sandbox is primarily a closed-loop system that processes “live” data created by manipulating the real sand surface.

In addition, the AR Sandbox contains a module that guides users towards recreating a scale model of an existing landscape, by coloring the sand according to its elevation difference from a pre-loaded landscape model. By moving sand from blue areas (too much sand) to red areas (too little sand), users can create accurate scale models quickly.
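
A minimal sketch of such a coloring rule, where the tolerance and colors are illustrative values rather than the actual ones:

    #include <cstdio>

    struct Color { float r, g, b; };

    // Hypothetical guided-mode rule: compare the scanned sand elevation
    // against the target landscape model at the same spot.
    Color guidanceColor(float scannedElev, float targetElev, float tolerance = 1.0f) {
        float diff = scannedElev - targetElev;             // positive: too much sand here
        if (diff >  tolerance) return {0.2f, 0.4f, 0.9f};  // blue: remove sand
        if (diff < -tolerance) return {0.9f, 0.2f, 0.2f};  // red: add sand
        return {0.8f, 0.8f, 0.8f};                         // within tolerance: neutral
    }

    int main() {
        Color c = guidanceColor(15.0f, 10.0f);  // 5 units too high -> blue
        std::printf("r=%.1f g=%.1f b=%.1f\n", c.r, c.g, c.b);
    }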

This exhibit features scale models of Raplee Ridge, a formation along the San Juan River in southern Utah, and of Lake Tahoe and its surrounding mountains.

Can hands trigger different functions at different elevations? 

In the current software, holding out a hand with spread fingers creates a virtual rain cloud, and adds water to the simulation, independent of the hand’s elevation above the sand. This could be changed in future software versions.
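
Conceptually, the rain gesture only determines where water is added, as in this illustrative sketch; the names and the circular footprint are assumptions, not the actual code:

    #include <vector>

    struct WaterGrid {
        int w = 0, h = 0;
        std::vector<float> depth;  // water depth per cell, row-major
    };

    // Add rain to the cells under a detected hand at grid position (cx, cy).
    // Note that the hand's elevation above the sand plays no role here.
    void rainUnderHand(WaterGrid& g, int cx, int cy, int radius, float amount) {
        for (int y = cy - radius; y <= cy + radius; ++y)
            for (int x = cx - radius; x <= cx + radius; ++x)
                if (x >= 0 && x < g.w && y >= 0 && y < g.h &&
                    (x - cx) * (x - cx) + (y - cy) * (y - cy) <= radius * radius)
                    g.depth[y * g.w + x] += amount;
    }

    int main() {
        WaterGrid g{32, 32, std::vector<float>(32 * 32, 0.0f)};
        rainUnderHand(g, 16, 16, 4, 0.1f);  // one "rain cloud" tick over the grid center
    }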

How does the camera distinguish hands from sand?

The AR Sandbox software looks for a specific hand gesture (all fingers stretched and spread out) to create a virtual rain cloud. Apart from that, the AR Sandbox distinguishes body parts or other objects from sand based on their motion: anything that moves within a one-second window is considered a body part; anything that does not move for one second, and is below a pre-set maximum elevation, is considered sand.
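
A per-pixel sketch of this rule; names, thresholds, and timing details are illustrative, not the actual implementation:

    #include <cmath>

    struct PixelState {
        float lastElev = 0.0f;
        float stableSeconds = 0.0f;  // time since this pixel last changed
    };

    enum class PixelClass { BodyPart, Sand, Ignored };

    // Call once per camera frame; dt is the frame interval in seconds, and
    // noise is the elevation jitter tolerated before counting as motion.
    PixelClass classify(PixelState& s, float elev, float dt,
                        float maxSandElev, float noise = 0.3f) {
        bool moved = std::fabs(elev - s.lastElev) > noise;  // ignore sensor jitter
        s.lastElev = elev;
        s.stableSeconds = moved ? 0.0f : s.stableSeconds + dt;

        if (s.stableSeconds < 1.0f) return PixelClass::BodyPart;  // moved within the last second
        if (elev <= maxSandElev)    return PixelClass::Sand;      // stable and low enough
        return PixelClass::Ignored;                               // stable but above the sand limit
    }

    int main() {
        PixelState s;
        classify(s, 20.0f, 1.0f / 30.0f, 50.0f);  // first frame: counts as motion
    }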

What is planned in the next software release? 

The next release of the AR Sandbox software will contain the new features shown in this exhibit:

  • Gesture-based hand detection for virtual rain
  • Ability to change display and simulation properties (such as switching between water and lava) on the fly from external programs or scripts
  • Guided recreation of scale models of existing landscapes
  • Multi-modal AR and VR display using additional screens or headsets
  • Support for 3D cameras besides the first-generation Microsoft Kinect, such as the second-generation Microsoft Kinect and Intel RealSense cameras