Inside "Map Dive"

We are super excited about our Map Dive installation in the Geo Sandbox at Google I/O 2013. Lots of people at the conference have been asking about the APIs and technology behind the experience, so our Technical Lead on the project, Ben Purdy, put together the following overview.

Displays
Map Dive features seven 1080p displays, each with a dedicated Ubuntu PC running Chrome 25 in full screen. Having a dedicated computer for each display allows for high frame rates at full resolution without compromising on features like anti-aliasing.

Game logic
All game logic runs in Chrome on a PC hidden behind the main displays; this machine handles all timing and game state as well as user input, and also hosts an administrative console. Game state and events are broadcast to the seven display nodes over WebSockets. All game objects are positioned using real-world coordinates, translated between 2D and lat/long coordinate spaces through the Google Maps API, as sketched below the API list.

API used:
JavaScript API v3
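
As a rough illustration of that coordinate translation, here is a minimal sketch using the Maps API v3 projection interface. The map instance and the zoom-based scaling scheme are assumptions for the example, not the project's actual code.

function latLngToWorld(map, latLng) {
  // fromLatLngToPoint() maps into a 256x256 world space at zoom 0;
  // scaling by 2^zoom gives pixel coordinates at the current zoom.
  var worldPoint = map.getProjection().fromLatLngToPoint(latLng);
  var scale = Math.pow(2, map.getZoom());
  return new google.maps.Point(worldPoint.x * scale, worldPoint.y * scale);
}

function worldToLatLng(map, point) {
  var scale = Math.pow(2, map.getZoom());
  return map.getProjection().fromPointToLatLng(
      new google.maps.Point(point.x / scale, point.y / scale));
}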

3D game content
The 3D portion of the game is rendered with WebGL using THREE.js. The WebGL layer is rendered with a transparent background, allowing the underlying HTML map layer to show through. Custom models are animated with morph targets generated in Cinema 4D for facial expressions, and textures are UV mapped so they can be shared across multiple models.


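For reference, a transparent WebGL layer like this can be set up in a few lines of THREE.js. This is a generic sketch against the current THREE.js API, not the installation's code; skydiverMesh and smileTargetIndex are hypothetical names.

// Render with a transparent background so the map layer shows through.
var renderer = new THREE.WebGLRenderer({ alpha: true, antialias: true });
renderer.setClearColor(0x000000, 0); // clear to fully transparent
renderer.domElement.style.position = 'absolute'; // layered above the map div

// Facial expressions blend between morph targets baked into the model;
// each influence weight ranges from 0 (base mesh) to 1 (full target).
skydiverMesh.morphTargetInfluences[smileTargetIndex] = 0.75;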

Map layer
The ground plane is a live HTML Google Map instance, translated via 3D CSS to align with the WebGL camera. The map plane is always positioned under the game camera, and the map center point is moved opposite the camera position to keep all game objects lined up with the terrain below, as sketched below the API list.

APIs used:
Overlays
Styled maps
JavaScript API v3
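
A minimal sketch of that alignment idea, assuming a hypothetical gameCamera object and reusing the worldToLatLng() helper from earlier; the installation's actual transform math is more involved than this.

function alignMapToCamera(map, mapDiv, gameCamera) {
  // Tilt the map plane with 3D CSS so it matches the WebGL camera's pitch.
  mapDiv.style.transform =
      'perspective(1000px) rotateX(' + gameCamera.pitchDeg + 'deg)';
  // Pan the map center opposite the camera's horizontal offset so game
  // objects stay lined up with the terrain below.
  map.setCenter(worldToLatLng(map, {
    x: -gameCamera.position.x,
    y: -gameCamera.position.z
  }));
}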

User input / motion tracking
We created a custom C++ app built with OpenNI and an ASUS Xtion 3D sensor to track the player's body pose. The angles of the torso and each arm are sent to the game logic. The motion tracker can follow a large number of simultaneous users, but only the user closest to the displays controls the game.
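
On the game-logic side, the pose data arrives as another message stream. A minimal sketch of what receiving it could look like, with a hypothetical message shape and a hypothetical player.setBodyAngles() handler:

var trackerSocket = new WebSocket('ws://localhost:8080');
trackerSocket.onmessage = function (event) {
  var msg = JSON.parse(event.data);
  if (msg.type === 'pose') {
    // Torso and arm angles steer the diver; degrees in this example.
    player.setBodyAngles(msg.torso, msg.leftArm, msg.rightArm);
  }
};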

Networking and synchronization
The system is synchronized via WebSockets. A server built on node.js routes JSON messages between the control and display nodes. These messages carry a global time value, which is used to drive complex procedural animation without having to send any extra data.
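
A relay like that can be sketched in a few lines of node.js. This example uses the third-party ws module, which is an assumption; the post doesn't name the library used.

var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080 });

wss.on('connection', function (socket) {
  socket.on('message', function (data) {
    // Forward each JSON message from the control node to every display node.
    wss.clients.forEach(function (client) {
      if (client !== socket && client.readyState === 1 /* OPEN */) {
        client.send(data);
      }
    });
  });
});

// Broadcasting a shared clock lets every display node evaluate the same
// procedural animation deterministically from a single time value.
setInterval(function () {
  var tick = JSON.stringify({ type: 'time', t: Date.now() });
  wss.clients.forEach(function (client) {
    if (client.readyState === 1) client.send(tick);
  });
}, 50);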

Dive editor
To build the dive courses, we created a custom level editor. The editor is built into the control node's administrative console, and all changes are reflected live in the game scene, which allows for quick iteration and play testing. The editor UI is a Google Map using draggable markers for the game objects, and geocoding is used to center the map view on addresses or monuments when creating site-specific dives. A sketch of both pieces follows.
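Both pieces map directly onto Maps API v3 calls. A minimal sketch, where onObjectMoved() is a hypothetical editor callback:

var geocoder = new google.maps.Geocoder();

// Center the editor map on an address or monument name.
function centerOnPlace(map, query) {
  geocoder.geocode({ address: query }, function (results, status) {
    if (status === google.maps.GeocoderStatus.OK) {
      map.setCenter(results[0].geometry.location);
    }
  });
}

// Represent a game object as a draggable marker.
function addGameObjectMarker(map, latLng) {
  var marker = new google.maps.Marker({
    map: map,
    position: latLng,
    draggable: true
  });
  google.maps.event.addListener(marker, 'dragend', function () {
    onObjectMoved(marker.getPosition()); // hypothetical editor callback
  });
  return marker;
}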


APIs used:
JavaScript API v3
- Geocoding
- Overlays
- Markers
- Polylines
- Polygons
- Circles/Rectangles

Web Services API
- Geocoding

Additional Maps Libraries
- Geometry
