9 Spatial Data to VR via R

9.1 Questions

  • How do I create an A-Frame scene with R?
  • What kinds of data can we represent in A-Frame?
  • How do I render spatial data in VR?

9.2 Overview

  • Teaching: 30min
  • Exercises: 5min

9.3 Motivation

Now that you’ve had a taste of the A-Frame framework and of marshalling spatial data to 3D in R, you’re in a better position to understand why defining VR scenes from R is beneficial. Before this capability existed, the workflow for VR scenes was roughly this:

Repeat until ‘done’:

  1. Build 3D models and JSON data in R for export to VR
  2. Define VR in HTML/JS
  3. Serve scene
  4. Discover limitation or bug

The process is naturally iterative, but the speed of iteration is frustratingly slow due to the context switching between the R environment and the web environment. It also encourages a nasty anti-pattern where data names and calculation results from R make their way into web land as magic constants, slowing the process even further when these need to change.

9.4 R to VR

The tools that exist in R allow you to mix R code with the VR abstraction provided by A-Frame. They do not provide a higher level abstraction. To use an R analogy:

grid is a low-level graphics framework that gives you the power to draw anything you can imagine using 2D geometric primitives. It is up to you to write functions that map your data to those primitives. ggplot2 is a popular visualisation package that does exactly this.

If you’re using VR tools in R you’re going to be working with low-level VR primitives. Do not expect ggplot2-level magic. A typical scene will be hundreds of lines of code, as opposed to, say, tens with ggplot2. The saving grace is that most of those lines will be declarations of simple primitive objects and interactions, which are not overly complex.

This is a natural situation, since the domain of VR visualisation is not yet well understood. Through working with VR you will begin to see the common tasks important to your domain, and if we’re lucky you might even write a package to help others do them.

There are currently two packages that allow you to create A-Frame scenes in R:

They have different capabilities and APIs, and are not interoperable.

9.4.1 r2vr Hello World

Here’s a familiar scene constructed with r2vr. We’ll build some simpler examples soon; this one is just to compare and contrast the syntax:

library(r2vr)

## Configure scene

js_libs <- list("https://unpkg.com/aframe-animation-component@^4.1.2/dist/aframe-animation-component.min.js",
                "https://unpkg.com/aframe-mirror-component/dist/aframe-mirror-component.min.js"
)

hadley_gif <- a_asset(.tag = "img",
                      src = "./figs/JT_R_code.gif",
                      id = "hadz")

box <- a_entity(.tag = "box",
                position = c(-1, 0.5, -3),
                rotation = c(0, 45, 0),
                src = hadley_gif,
                shadow = "",
                animation =
                  list(property = "rotation",
                       to = c(0, 360, 0),
                       dur = 2000,
                       loop = TRUE,
                       easing = "linear"))

sphere <- a_entity(.tag = "sphere",
                   position = c(0, 1.25, -5),
                   radius = 1.25,
                   color = "#EF2D5E",
                   shadow = "",
                   mirror = list(resolution = 64,
                                 interval = 150,
                                 distance = 5000,
                                 `repeat` = TRUE))

cylinder <- a_entity(.tag = "cylinder",
                     position = c(1, 0.75, -3),
                     radius = 0.5,
                     height = 1.5,
                     color = "#FFC65D",
                     shadow = "")

floor <- a_entity(.tag = "plane",
                  position = c(0, 0, -4),
                  rotation = c(-90, 0, 0),
                  width = 4,
                  height = 4,
                  color = "#7BC8A4",
                  shadow = "")

backboard <- a_entity(.tag = "plane",
                      position = c(0, 2, -6),
                      rotation = c(0, 0, 0),
                      width = 4,
                      height = 4,
                      color = "#7BC8A4",
                      shadow = "")

sky <- a_entity(.tag = "sky", color = "#ECECEC")

hello_world_scene <- a_scene(.template = "empty",
                             .children = list(box, sphere, cylinder,
                                              floor, backboard, sky),
                             .js_sources = js_libs)

## Serve a scene
hello_world_scene$serve()

## Stop serving a scene
hello_world_scene$stop()

This is the equivalent A-Frame scene: https://glitch.com/edit/#!/pricey-kitten

Things to note:

  • Components that were configured as HTML attributes are now function arguments.
  • r2vr has just one function for creating entities, a_entity(), which creates <a-entity> HTML. It can create the shorthand forms, e.g. <a-box>, using the .tag argument.
  • The convention with argument names is: anything that will appear literally in HTML is a plain argument; anything that is internal to r2vr has a . prefix.
  • Assets can be passed directly to entities; there is no need to make the #id reference yourself, although assets still need ids.
  • The spinning Hadley gif uses the animation component.

9.4.2 r2vr tips

r2vr is an A-Frame HTML code generator and server that uses 3 main R6 classes, created with a_entity(), a_asset(), and a_scene(). You can view the HTML associated with these objects by calling their render() method, e.g.:
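
## Using the `box` entity and the scene from the Hello World example above.
## render() returns the generated HTML as a character string, so you can
## inspect what r2vr will emit before serving anything.
box$render()

## Assets and whole scenes work the same way; the scene render is the full
## HTML document that serve() would host.
hello_world_scene$render()

The exact markup depends on your r2vr version, but this is a handy way to debug a scene that isn’t behaving as expected.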

9.5 Spatial data in VR

The type of work we will consider is making and plotting over 3D meshes. Recapping from the previous Act, the data types that are useful for this are:

  • Rasters
    • Digital Elevation Models give us mesh heights
    • Images can give us textures
    • Model output can shade meshes
  • Simple features collections
    • Giving us shapes for mesh boundaries

The R packages we will use to get these data into VR are:

  • raster
  • sf
  • tidyverse
  • quadmesh
  • r2vr.gis
  • r2vr

9.5.1 DEM raster to VR

For this example we will use a DEM dataset from Uluṟu-Kata Tjuṯa National Park.

9.5.1.1 Load Raster

library(raster)
library(quadmesh)

uluru_raster <- raster("./data/ELVIS_CLIP.tif")

plot(uluru_raster)
crs(uluru_raster)
## CRS arguments:
## +proj=lcc +lat_1=-30 +lat_2=-20 +lat_0=-25 +lon_0=135 +x_0=0 +y_0=0 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0 

Check: Does the raster have units=m?

  • When we port the mesh to VR we want it to have units of meters, since this is VR’s native unit. Consider re-projecting if need be (a sketch of this is shown below).
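
If crs() had shown a degree-based CRS (e.g. longlat) rather than metres, you could re-project before building the mesh. This is only a sketch: it assumes the Lambert conformal conic CRS printed above is a sensible metre-based target for your data.

## Only needed if crs() does not report units=m
uluru_raster <- projectRaster(uluru_raster,
                              crs = "+proj=lcc +lat_1=-30 +lat_2=-20 +lat_0=-25 +lon_0=135 +x_0=0 +y_0=0 +datum=WGS84 +units=m",
                              method = "bilinear") ## bilinear suits a continuous DEM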

9.5.1.2 Crop Raster

We’ll crop to a smaller section to make things run faster.

library(r2vr.gis)
library(sf)

### coords come from a google map: https://drive.google.com/open?id=1Ak26Hyx1R-f2QjPCTK0rQLye5xcHyE8n&usp=sharing

uluru_bbox <-
  st_bbox(c(xmin = 131.02084,
            xmax = 131.0535,
            ymin = -25.35461,
            ymax = -25.33568),
          crs = st_crs("+proj=longlat +ellps=WGS84"))
 
uluru_raster <- raster_crop_bbox(uluru_raster, uluru_bbox)

9.5.1.3 Make Triangular Mesh

We build a mesh using quadmesh and then cut each face in half so that it is a triangular mesh. A-Frame models can only have triangular faces.

uluru_mesh <- quadmesh(uluru_raster)
rgl::shade3d(uluru_mesh)
## looks good?

## quadmesh::triangulate_quads will make a mesh that VR thinks is inside out.
## This will be fixed in future.

uluru_trimesh_indices <- triangulate_quads(uluru_mesh$ib, clockwise = FALSE)

We now have the pieces of a triangular mesh.

  • uluru_mesh$vb - the mesh vertices, which are actual points in space.
  • uluru_trimesh_indices - indices into the vertices that describe triangles.

Because the quad mesh was expressed in this primitive form, the transformation could be made without creating any additional vertices; they are simply re-used by the triangles.
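
A quick, purely illustrative way to sanity-check these pieces (the exact dimensions depend on your raster):

dim(uluru_mesh$vb)            ## 4 x n_vertices: rows are x, y, z, w (homogeneous coordinates)
dim(uluru_trimesh_indices)    ## 3 x n_triangles: each column indexes the 3 vertices of one triangle
range(uluru_trimesh_indices)  ## all indices should fall between 1 and n_vertices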

9.5.1.4 Export to 3D model format

The 3D model format we will use is a JSON format supported by three.js, but not natively by A-Frame. r2vr will take care of loading the third-party JavaScript necessary to use models of this type.

When glTF support comes to R it will be preferred, but until then this is what we have.

Notice the use of t(). The VR tools expect a matrix with one vertex per row (columns of x, y, z coordinates), while rgl and quadmesh store one vertex per column.

library(readr)

uluru_json <- trimesh_to_threejson(vertices = t(uluru_mesh$vb[1:3, ]),
                                   face_vertices = t(uluru_trimesh_indices))

write_file(uluru_json, "uluru.json")

9.5.1.5 Render in A-Frame

library(r2vr)

uluru_asset <- a_asset(id = "uluru", src = "uluru.json")

uluru_model <- a_json_model(src = uluru_asset)

scene <- a_scene(.template = "basic_map",
                 .children = list(uluru_model))
scene$serve()
## Fire started at 127.0.0.1:8080

a_kill_all_scenes()

If you navigate to 127.0.0.1:8080 in your browser you should see the scene being served. You can try it on your phone as well, but you need to pass your computer’s public IP to scene$serve():

scene$serve(host = "<YOUR_IP>")

If things are working you should see a scene that is empty but for a grey grid. This grid is part of the "basic_map" template and is added automatically. It’s a visual reference, as each square is 1x1 VR meters. It also lets you know that things are ‘working’, although Uluru is not visible at the moment.

9.5.1.6 Find Uluru

We’ve imported a full-scale model of Uluru into VR, but we can’t see it just yet.

  1. Use the A-Frame inspector to find an appropriate scale, position, rotation, and color to view the model.

Notice the position of the mesh. What does it say about how the coordinates have been transformed?

9.5.1.7 Setting the position and scale

From the previous exercise we learned that the model was too large to view practically, was rotated with its z-axis pointing toward the camera, and had been centred.

To keep it spatially referenced, it’s a good idea to set up some constants relating to the transformation of the model into VR, for example:

  • scale_factor
    • the scale of the model
  • mesh_centre
    • the centre of the original mesh so that we can use it to transform the coordinates of other things we would like to plot in spatial context over the mesh.
  • height_correction
    • the correction to apply to the height so that the ‘ground’ is at a VR height of 0.
    • This means we need to decide on what the true ground height should be. Here we use a simple approach of taking the lowest point in the raster extent.
    • We create this as an xyz vector so it can easily be added to positions.

We define these and view the result:

## Model constants
scale_factor <- 0.01

mesh_centre <- colMeans(t(uluru_mesh$vb[1:3,]))

extent_coord_mat <-
  matrix(extent(uluru_raster),
         nrow = 2, ncol = 2, byrow = FALSE)

lowest_corner <- min(raster::extract(uluru_raster,
                                 extent_coord_mat))

height_correction <- c(0, mesh_centre[3] - lowest_corner, 0)

## Scene definition
uluru_asset <- a_asset(id = "uluru", src = "uluru.json")

uluru_model <- a_json_model(src = uluru_asset,
                            scale = c(1, 1, 1) * scale_factor,
                            position = (c(0, 0, -5) + height_correction) * scale_factor,
                            material = list(color = '#C88A77'),
                            rotation = c(-90, 0, 0))

scene <- a_scene(.template = "basic_map",
                 .children = list(uluru_model))
scene$serve()

a_kill_all_scenes()

It’s taking shape! But we can do better than plain brown.

9.5.1.8 Texturing using satellite imagery

We can use images to texture our mesh. This uses work we have done recently to make getting tiles from tile servers easier in R. The high level workflow is:

  1. Fetch and composite satellite tiles for the mesh bounding box.
  2. Rebuild the mesh with quadmesh’s texture arguments, and export the model again, supplying the texture coordinates along with a reference to the texture image.

9.5.1.8.1 Fetch texture image

## Fetch textures from ESRI maps
## from slippymath README:
library(purrr)
library(curl)
library(glue)
library(slippymath)

tile_grid <- bb_to_tg(uluru_bbox, max_tiles = 15)

esri_query_string <-
  paste0("https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{zoom}/{y}/{x}")

images <-
  pmap(tile_grid$tiles,
       function(x, y, zoom){
         outfile <- glue("{x}_{y}.jpg")
         curl_download(url = glue(esri_query_string),
                       destfile = outfile) 
         outfile 
       },
       zoom = tile_grid$zoom)

## Create a new textured quadmesh and export it to JSON
raster_out <- tg_composite(tile_grid, images)

raster_to_png(raster_out, "uluru.png")

uluru_mesh_tex <- quadmesh(uluru_raster, texture = raster_out, texture_filename = "uluru.png")

uluru_trimesh_indices <- triangulate_quads(uluru_mesh_tex$ib, clockwise = FALSE)

uluru_json <- trimesh_to_threejson(vertices = t(uluru_mesh_tex$vb[1:3, ]),
                                   face_vertices = t(uluru_trimesh_indices),
                                   vertex_uvs = t(uluru_mesh_tex$texcoords),
                                   texture_file = "uluru.png")

write_file(uluru_json, "uluru.json")

## Scene definition
uluru_asset <- a_asset(id = "uluru", src = "uluru.json", .parts = "uluru.png")

uluru_model <- a_json_model(src = uluru_asset,
                            scale = c(1, 1, 1) * scale_factor,
                            position = (c(0, 0, -5) + height_correction) * scale_factor,
                            rotation = c(-90, 0, 0),
                            mesh_smooth = TRUE)
scene <- a_scene(.template = "basic_map",
                 .children = list(uluru_model))
scene$serve()

a_kill_all_scenes()

9.5.1.9 Shading using data

In this section we look at shading the mesh with an arbitrary raster. This could represent the output of a spatial model.

We first generate a raster using a noise generator, then use it to shade the mesh.

library(ambient)
library(scico)

noise <-
  setExtent(raster(noise_simplex(c(500, 600),
                                 fractal = "none")),
            extent(uluru_raster))

colouring_raster_data <-
  raster::extract(noise, t(uluru_mesh_tex$vb[1:2, ]))

n_colours <- 256

palette_function <-
  purrr::partial(scico, palette = "tokyo")

vertex_colour_data <-
  vec_pal_colours(colouring_raster_data,
                  palette_function,
                  n_colours,
                  zero_index = TRUE)

face_colours <-
  vertex_to_face_colours(vertex_colour_data$indexes,
                         t(uluru_trimesh_indices))
                         
mesh_json <-
  trimesh_to_threejson(vertices = t(uluru_mesh$vb[1:3,]),
                       face_vertices = t(uluru_trimesh_indices),
                       colours = vertex_colour_data$colours,
                       face_vertex_colours = face_colours,
                       transparency = 0.4)


## Scene definition
uluru_asset_mod <- a_in_mem_asset(id = "uluru_mod", src = "uluru_mod.json", .data = mesh_json)

uluru_model2 <- a_json_model(src = uluru_asset_mod,
                            scale = c(1.001, 1.001, 1.001) * scale_factor,
                            position = ((c(0, 0, -5) + height_correction) * scale_factor) + c(0, 0.01, 0),
                            rotation = c(-90, 0, 0))

scene <- a_scene(.template = "basic_map",
                 .children = list(uluru_model, uluru_model2))
scene$serve()

a_kill_all_scenes()