March 3, 2015 at 10:58 pm #100802
I’m working on the next version of the AR Sandbox, as much as I can given that development is currently unfunded. Still, I figured it would be a good idea to create a thread where users can submit desired features. I have no idea how much I’ll be able to implement, but I’ll at least try to address the most commonly desired functionality.
If you describe a desired feature, please try to be as precise as possible.
March 4, 2015 at 12:54 am #100803
Hello! I’ve been following the project for some time now and I am still astonished about how far it has come.
The one suggestion I have concerns the sand. I noticed in the video that the boy was wiping it off his arms and hands, which seemed like it would make a mess. What if you replaced the sand with kinetic sand?
It wouldn’t stick to anyone’s hands and arms and wouldn’t make much of a mess. It would also be easier to mold with.
The drawbacks to switching would probably be cost and how well the Kinect reads the sand.
I thought this was worth sharing. Can’t wait to see the progress!
March 4, 2015 at 1:09 am #100804
We’re using Kinetic sand in the AR sandboxes at Lawrence Hall of Science and ECHO. I personally don’t like it very much because it requires a bit too much effort to work with (you can’t make a riverbed just by swiping your hand, for example), and it doesn’t take projection color very well because Kinetic sand, at least as of now, is naturally brown instead of white. The Kinect can read its surface just fine.
March 4, 2015 at 1:16 am #100805
Interesting! Well, I’m glad I know the advantages and disadvantages. If I have further questions about the project, should I leave them here or contact someone else?
March 4, 2015 at 6:44 pm #100809
For new questions, please start a new thread in the main forum.
March 11, 2015 at 5:02 am #100823
Great idea to start a requested feature list! Below are a few ideas people have suggested at LHS. I know some of these are out of your domain, but I thought I’d list them all here so they’d be in one place. I’ll add more as they come up.
March 11, 2015 at 5:48 pm #100830
- ‘Polluted’ water color option
- Visualizations of animals or living things in the landscape, e.g., fish in standing water.
- Words / written overlays to highlight features of interest.
- Thunder and/or water sounds when it rains
- Ability to easily turn water on and off. For instance, facilitators would like to turn the water off to build things without creating rain. Then make predictions with visitors about how the water will flow. Then turn the water on to test their predictions.
- Separate file / interface for editing parameters of interest (rain strength, etc.) [You already did this for LHS, but other users would probably find it useful.]
- An interface for facilitators and/or visitors that would allow them to easily change color settings, turn water on/off, etc.
- One educator was very interested in integrating the water cycle. She said: “I appreciate your comment about the rate of evaporation being orders of magnitude slower than the rate of flow, and other challenges — but, as a person with no programming experience myself, I want to know if it could develop to where students could manipulate not only clouds that rain, but the amount of sunlight (nearness to equator, time of year, general cloud cover overall) that would change the rate of evaporation. And then I wonder if the type of topography could interact with the behavior of the clouds, so that water evaporated from one part of the ‘landscape’ would cause a rain storm in another part of the landscape?”
- Way to visualize erosion / deposition – how water not only moves within but also shapes the landscape.
- A regular camera mounted above sandbox to capture images of sandbox creations and/or videos of sandbox learning.
- Non-rectangular blanking to black out any viz that may show up on sandbox frame edges
- Water ‘dead’ zone / graphics driver edge bug fix
The two most-asked-for items when I do deploys are:
- Erosion (I have no idea how to achieve this)
- Manipulating the sand without having to touch it (asked for a lot, but not possible as is)
This project still amazes the heck out of people.
March 16, 2015 at 4:01 am #100860
I might already be working on DEM overlays. There will be a mode where the sand is color-coded based on vertical distance to a pre-loaded DEM, so that the sandbox guides the user towards recreating that DEM. Once the sand surface is close enough, it becomes possible to project other information, such as aerial photography, onto the sand without deformation.
April 9, 2015 at 8:08 pm #100959
Hi Oliver, I really wish to thank you for making this available. I saw the box at Fall AGU 2014 and started building one over Christmas from an old computer and an old projector I had at home. After struggling quite a bit with the first instructions, the ones you published recently in March work like a charm, and after getting used to the Vrui handling, it took me about 30 minutes to compile and set up the software and to do the basic calibrations. Now my kids have something to play with 🙂
Also, I didn’t have a short-throw projector, but a regular one. I therefore extended the throw distance with a mirror, which is really easy (some pictures of setups on your site showed this), so in fact any projector would do.
In my experience, success mainly depends on the right graphics card and good drivers; the CPU is not very important. I use Ubuntu 14.04, an old Intel Core 2 Quad Q9550 CPU, and a GTX 770 graphics card. No need to buy an expensive i7 or such, but I wouldn’t go with a less powerful graphics adapter.
Now, the reason why I post here – there are two additional features I would suggest:
1. It would be great if one could let water flow in at one side or point of the box (e.g., from a hill at the side) and out at another (in a valley or the ocean). Being able to define currents in seas or lakes would also be great. The reason is that the water animation nicely shows, through curlier waves, where the topography is rough, so I imagine one could use the box to model obstacles and water flow along a coast, for example.
2. What about burying virtual objects, like bones, rocks, or other objects, which could be discovered? It would be good if the projection showed the cut through the object as it is uncovered. My background is geology, and with virtual objects one could create an interior geology, which would then be projected onto the contour maps, making the sandbox a tool to teach students how to read and work with geologic maps. The same could be used for virtual excavations in paleontology or archeology. The striking thing would be that one basically digs through an object, and I think it helps a lot in teaching if students learn to identify objects by looking at cut planes through them.
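The first idea above, letting water enter at one point and drain at another, could plausibly be prototyped as per-cell source and sink terms applied once per simulation tick. The AR Sandbox’s real water simulation runs on the GPU, so this is only a toy CPU sketch with made-up names, not actual AR Sandbox code:

```python
def apply_point_flows(water, sources, sinks, rate=0.1):
    """One simulation tick of point inflow/outflow terms.

    water   -- 2D list of water depths, one value per grid cell
    sources -- (row, col) cells where water enters each tick
    sinks   -- (row, col) cells where water drains each tick
    rate    -- depth added/removed per tick; depth is clamped at zero
    """
    for r, c in sources:
        water[r][c] += rate
    for r, c in sinks:
        water[r][c] = max(0.0, water[r][c] - rate)
    return water
```

In practice this would be a small extra pass in the water shader, run before the shallow-water update redistributes the added depth across the terrain.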
Cheers, Manu
April 9, 2015 at 9:09 pm #100960
AGU 2014!!! You saw my version. I hope you enjoyed it.
Keep on building.
April 9, 2015 at 10:43 pm #100961
Amazing – yes, that’s it! I told you there on Friday that I would build one 🙂 and it worked!
Cheers, Manu
April 9, 2015 at 10:51 pm #100962
That’s great, I am glad you followed through. It is a fun project, with many interesting aspects.
April 14, 2015 at 6:10 am #100971
A nice new feature for the AR Sandbox would be to include catchment boundaries (one or several) in the projection. The catchment outlet(s) should be defined by the user with a gesture.
As a next step, and based on these catchments, I could imagine presenting hydrographs for the catchment outlets. Maybe these hydrographs, in combination with the precipitation, could be presented on a separate screen. This would make for a more hydrological AR Sandbox, and based on it one could present catchment characteristics in general or show differences between catchments (small vs. large, flat vs. steep, round vs. stretched, …).
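The catchment delineation described above could be prototyped with standard D8 flow routing: each cell drains to its steepest-descent neighbor, and the catchment of a user-picked outlet is every cell whose flow path reaches it. A minimal sketch on a plain height grid (function names are mine, not part of the AR Sandbox):

```python
from collections import deque

def d8_downstream(elev):
    """For each cell, the neighbor it drains to (steepest descent),
    or None for pits and flat cells with no lower neighbor."""
    rows, cols = len(elev), len(elev[0])
    down = {}
    for r in range(rows):
        for c in range(cols):
            best, best_drop = None, 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        drop = elev[r][c] - elev[nr][nc]
                        if drop > best_drop:
                            best_drop, best = drop, (nr, nc)
            down[(r, c)] = best
    return down

def catchment(elev, outlet):
    """All cells whose D8 flow path reaches the outlet cell."""
    down = d8_downstream(elev)
    # Invert the drainage graph, then walk upstream from the outlet.
    up = {}
    for cell, d in down.items():
        if d is not None:
            up.setdefault(d, []).append(cell)
    seen = {outlet}
    queue = deque([outlet])
    while queue:
        for nb in up.get(queue.popleft(), []):
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return seen
```

A hydrograph could then be approximated by summing, per time step, the rain that fell on the catchment’s cells.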
Benjamin
April 16, 2015 at 4:10 pm #100978
First of all, I want to thank you, Oliver, for sharing all this stuff, and the whole forum for the replies to people’s doubts!
We managed to make a sandbox at Fab Lab Sevilla to experiment with the project. After some time we built the box itself, and we are now waiting for a short-throw projector to get a stable device (it is currently a work in progress).
As I work at an architectural institution, we were thinking of some features related to that field:
- Possibility to export the model in STL format or as a point-cloud file, so we can open it in a 3D editing program.
- Is it possible to manage the scale? (I think that one is already covered in another thread of the forum.)
- Simulation of landslides according to the rainfall in some areas.
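The STL export request is straightforward once the sandbox’s depth image is available as a rectified height grid: triangulate each grid cell into two facets and write ASCII STL. A minimal sketch (this is not an existing AR Sandbox function; it assumes you have already extracted the heights, e.g. from the Kinect depth frame):

```python
def heightmap_to_stl(heights, path, cell=1.0):
    """Write a height grid as an ASCII STL surface, two triangles per
    grid cell. Facet normals are written as zero vectors; most mesh
    tools recompute them on import."""
    rows, cols = len(heights), len(heights[0])

    def vert(r, c):
        # Grid index -> (x, y, z) with a uniform cell spacing.
        return (c * cell, r * cell, heights[r][c])

    with open(path, "w") as f:
        f.write("solid sandbox\n")
        for r in range(rows - 1):
            for c in range(cols - 1):
                a, b = vert(r, c), vert(r, c + 1)
                d, e = vert(r + 1, c), vert(r + 1, c + 1)
                for tri in ((a, b, d), (b, e, d)):
                    f.write("  facet normal 0 0 0\n    outer loop\n")
                    for x, y, z in tri:
                        f.write("      vertex %g %g %g\n" % (x, y, z))
                    f.write("    endloop\n  endfacet\n")
        f.write("endsolid sandbox\n")
```

A point cloud is even simpler: the same `vert` values written one per line as XYZ text would load into most 3D editors.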
Thanks!
May 1, 2015 at 8:43 pm #101014
Thanks so much for this demo. I have been doing STEM events with it and it has been a blast. Some feature requests:
1. Is there a way to control the motor on the Kinect sensor for fine-tuning? Kids do bump the sensor, and I have to readjust it; moving it manually makes large jumps.
2. A means of changing the width of the color bands while it is running. Some color bands are too wide and others too thin, so I have to keep changing the boxlayout file and restarting over and over. I understand the RGB colors, but not the other fields; documenting these would be helpful.
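On the motor question (item 1): the Kinect v1’s tilt motor can be driven in software (libfreenect, for example, exposes a tilt-angle call), though the AR Sandbox’s own Kinect driver may not. For fine-tuning, one could step the motor in small increments instead of one jump. A sketch of the planning logic only; the actual hardware call is an assumption shown in a comment:

```python
def tilt_steps(current_deg, target_deg, step=1.0):
    """Angles to send to the tilt motor, in small increments, so a
    bumped sensor can be re-aimed gently instead of with one big jump."""
    angles = []
    direction = 1.0 if target_deg >= current_deg else -1.0
    a = float(current_deg)
    while abs(target_deg - a) > step:
        a += direction * step
        angles.append(round(a, 3))
    angles.append(float(target_deg))
    return angles

# Each angle would then be sent to the hardware, e.g. with libfreenect's
# Python bindings (an assumption about your setup, not AR Sandbox code):
#   for a in tilt_steps(0.0, 7.0, step=0.5):
#       freenect.set_tilt_degs(dev, a)
#       time.sleep(0.2)  # let the motor settle between moves
```

Note that moving the sensor at all invalidates the projector/Kinect calibration, so small software nudges still require re-checking alignment afterwards.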
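On item 2: as far as I can tell, the height color map is a list of control points, each an elevation key plus an RGB color, with colors interpolated linearly between neighboring keys; the non-RGB field is the elevation at which that color applies exactly, so widening a band means spreading its elevation keys apart. A sketch of that lookup, assuming this file layout (verify against your installation’s color-map file before relying on it):

```python
def height_to_color(keys, h):
    """Piecewise-linear color lookup.

    keys -- sorted list of (elevation, (r, g, b)) control points
    h    -- sand-surface elevation to colorize
    Heights outside the key range clamp to the end colors.
    """
    if h <= keys[0][0]:
        return keys[0][1]
    if h >= keys[-1][0]:
        return keys[-1][1]
    for (h0, c0), (h1, c1) in zip(keys, keys[1:]):
        if h0 <= h <= h1:
            t = (h - h0) / (h1 - h0)  # position within the band, 0..1
            return tuple(round(a + t * (b - a)) for a, b in zip(c0, c1))
```

Under this reading, making the blue-to-green band narrower is just moving its two elevation keys closer together in the file, with no RGB changes needed.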