Part 4 of WebVR: Creating Data Visualizations on Canvas

We met our goal of developing a WebVR Proof of Concept! Having finished the simulation in the earlier posts, we’re ready to get artistic.

The arrival of VR marks a significant shift for designers and developers.

The smartphone revolution began in 2007 with the release of the first iPhone. By 2012, “mobile-first” and “responsive” web design were standard practice. In 2019, Facebook and Oculus released a standalone mobile VR headset. Let’s get going!

The “mobile-first” internet was not a fad, and I believe the “VR-first” internet will be just as revolutionary. The previous three articles and demos showed that the technology is already feasible in the browser you’re using right now.

In case you’re joining the series in the middle: we’re building a celestial gravity simulation of spinning planets.

  • Part 1: Intro and Architecture
  • Part 2: Utilizing Web Workers for Multithreading
  • Part 3: Overcoming Performance Bottlenecks with WebAssembly and AssemblyScript

Now that the groundwork is laid, it’s time to get creative. Over the next two posts, we’ll explore canvas, WebVR, and the user experience.

  • Part 4: Canvas Data Visualization (this post)
  • Part 5: WebVR Data Visualization

Today, we’ll make our simulation come to life. Looking back, I became far more eager and engaged in finishing the project once I started on the visualizers, and the visualizations were what piqued other people’s curiosity.

This simulation’s goal was to investigate the technologies that will power WebVR - Virtual Reality in the browser - and the coming VR-first web. These same technologies have the potential to revolutionize browser-based computing.

Today, we’ll begin by developing a canvas visualization to complete our Proof of Concept.

[Image: Canvas visualization]

In the final installment, we’ll examine VR design and create a WebVR version of this project.

[Image: WebVR data visualization]

Starting Simple: Utilizing console.log()

Returning to the real world, let’s create some visualizations for our “n-body” simulation in the browser. I’ve used canvas before in web video applications, but never as an artist’s canvas. Let’s see what we can come up with.

You may recall that we delegated visualization to nBodyVisualizer.js in our project architecture.

[Diagram: Visualization is delegated to nBodyVisualizer.js]

nBodySimulator.js contains the simulation loop: start() repeatedly calls its step() method, and the bottom of step() calls this.visualize().

```javascript
// src/nBodySimulator.js

/**
 * This is the simulation loop.
 */
async step() {
  // Runs every 33ms (30fps). Skip the calculation if the worker isn't ready.
  if (this.ready()) {
    await this.calculateForces()
  } else {
    console.log(`Skipping calculation: ${this.workerReady} ${this.workerCalculating}`)
  }

  // Remove any "debris" that has traveled out of bounds.
  // This keeps the button from creating uninteresting work.
  this.trimDebris()

  // Now update forces. Reuse old forces if the worker is still busy calculating.
  this.applyForces()

  // Now visualize.
  this.visualize()
}
```
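
The start() method that drives this loop isn’t shown above. As a minimal sketch of the idea (the repository’s actual implementation may differ), it just fires step() on a fixed 30fps timer:

```javascript
// Hypothetical sketch of the loop driver; the repo's start() may differ.
start() {
  // step() is async; we simply trigger it on a fixed 33ms (30fps) cadence.
  setInterval(() => this.step(), 33)
}
```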

When we click the green button, the main thread adds ten random bodies to the system. We tweaked the button code in the first post, and you can view it in the repository. These bodies are useful for testing a Proof of Concept, but keep in mind that we are in the O(n²) performance danger zone.
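
The handler itself boils down to a loop of addBody() calls with randomized state. A minimal sketch, assuming the Body constructor shown later in this post; the element id and value ranges are illustrative, not the repo’s:

```javascript
// Hypothetical sketch of the green button; id and ranges are illustrative.
document.getElementById("addBodies").onclick = () => {
  for (let i = 0; i < 10; i++) {
    sim.addBody(new Body(
      `debris${i}`, "green",
      Math.random() * 10 - 5,     // x
      Math.random() * 10 - 5,     // y
      0,                          // z: keep the overhead view flat
      Math.random() * 1e3,        // mass
      Math.random() * 0.1 - 0.05, // vX
      Math.random() * 0.1 - 0.05, // vY
      0                           // vZ
    ))
  }
}
```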

Humans are hardwired to care about what they can see, so trimDebris() removes objects that have flown out of sight rather than letting them slow down the rest of the simulation. This is the distinction between perceived performance and actual performance.
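
trimDebris() can be as simple as filtering out anything that has strayed too far from the origin. A minimal sketch, assuming a hypothetical cutoff (the repo’s actual bound may differ):

```javascript
// Sketch only: drop bodies beyond an arbitrary out-of-bounds cutoff.
trimDebris() {
  const LIMIT = 500 // illustrative cutoff, in simulation units
  this.objBodies = this.objBodies.filter(body =>
    Math.abs(body.x) < LIMIT &&
    Math.abs(body.y) < LIMIT &&
    Math.abs(body.z) < LIMIT
  )
}
```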

Let’s move on to the final this.visualize() now that we’ve covered everything else.

```javascript
// src/nBodySimulator.js

/**
 * Loop through our visualizers and paint()
 */
visualize() {
  this.visualizations.forEach(vis => {
    vis.paint(this.objBodies)
  })
}

/**
 * Add a visualizer to our list
 */
addVisualization(vis) {
  this.visualizations.push(vis)
}
```

These two functions let us plug in a variety of visualizers. The canvas version uses two:

```javascript
// src/main.js

window.onload = function() {
  // Create a Simulation
  const sim = new nBodySimulator()

  // Add some visualizers
  sim.addVisualization(
    new nBodyVisPrettyPrint(document.getElementById("visPrettyPrint"))
  )
  sim.addVisualization(
    new nBodyVisCanvas(document.getElementById("visCanvas"))
  )
```

The first visualizer in the canvas version is a table of white numbers rendered as HTML. The second is the black canvas element underneath.

[Image: Canvas visualizers]

To accomplish this, I began with a simple base class in nBodyVisualizer.js:

```javascript
// src/nBodyVisualizer.js

/**
 * This is a toolkit of visualizers for our simulation.
 */

/**
 * Base class that console.log()s the simulation state.
 */
export class nBodyVisualizer {
  constructor(htmlElement) {
    this.htmlElement = htmlElement
    this.resize()
  }

  resize() {}

  paint(bodies) {
    console.log(JSON.stringify(bodies, null, 2))
  }
}
```

This class logs to the console (every 33ms!) and keeps a reference to an htmlElement, which subclasses will draw into; that reference makes them simple to declare in main.js.

This is the most basic solution.

However, while this console visualization is straightforward, it does not actually “work.” Neither the browser console nor the humans reading it are designed to handle log messages every 33ms. Let’s look for the next simplest thing that might work.
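
For what it’s worth, a console visualizer could be made tolerable with a simple throttle. This is a sketch of an alternative paint(), not what the project does:

```javascript
// Sketch only: a throttled variant that logs at most once per second.
paint(bodies) {
  const now = Date.now()
  if (this.lastLog && now - this.lastLog < 1000) return
  this.lastLog = now
  console.log(JSON.stringify(bodies, null, 2))
}
```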

Visualizing Data from Simulations

The next simplest thing was “pretty printing” text to an HTML element. The canvas implementation reuses this same pattern.

Note that we keep a reference to an htmlElement for the visualizer to draw on. Like everything else on the web, the design is mobile-first: on desktop, the data table of objects and their coordinates appears on the left of the page; on mobile, we omit it because it would only add visual clutter.

```javascript
/**
 * Pretty print simulation to an htmlElement's innerHTML
 */
export class nBodyVisPrettyPrint extends nBodyVisualizer {
  constructor(htmlElement) {
    super(htmlElement)
    this.isMobile = /iPhone|iPad|iPod|Android/i.test(navigator.userAgent)
  }

  resize() {}

  paint(bodies) {
    if (this.isMobile) return

    let text = ''
    function pretty(number) {
      return number.toPrecision(2).padStart(10)
    }

    bodies.forEach(body => {
      text += `<br>${body.name.padStart(12)} {  x:${pretty(body.x)}  y:${pretty(body.y)}  z:${pretty(body.z)}  mass:${pretty(body.mass)} }`
    })

    if (this.htmlElement) this.htmlElement.innerHTML = text
  }
}
```

This “data stream” visualizer serves two purposes:

  1. It allows you to “sanity check” the simulation’s inputs to the visualizer. This is a “debug” window.
  2. It’s interesting to look at, so let’s keep it for the desktop demo!

Let’s discuss graphics and canvas now that we’re pretty confident in our inputs.

Visualizing Simulations Using a 2D Canvas

A “game engine” is a “simulation engine” with explosions. Both are incredibly complex tools, because both concentrate on asset pipelines, streaming level loading, and a slew of other supremely dull tasks that should never be noticed.

With “mobile-first” design, the web has also introduced its own set of “things that should never be noticed.” If the browser is resized, our canvas’s CSS will resize the canvas element in the DOM, forcing our visualizer to adjust or face user wrath.

```css
#visCanvas {
  margin: 0;
  padding: 0;
  background-color: #1F1F1F;
  overflow: hidden;
  width: 100vw;
  height: 100vh;
}
```

This requirement drives the resize() method in the nBodyVisualizer base class and its canvas implementation.

```javascript
/**
 * Draw simulation state to canvas
 */
export class nBodyVisCanvas extends nBodyVisualizer {
  constructor(htmlElement) {
    super(htmlElement)

    // Listen for resize to scale our simulation
    window.onresize = this.resize.bind(this)
  }

  // If the window is resized, we need to resize our visualization
  resize() {
    if (!this.htmlElement) return
    this.sizeX = this.htmlElement.offsetWidth
    this.sizeY = this.htmlElement.offsetHeight
    this.htmlElement.width = this.sizeX
    this.htmlElement.height = this.sizeY
    this.vis = this.htmlElement.getContext('2d')
  }
```

As a result, our visualizer has three key properties:

  • this.vis - the 2D drawing context, used to draw primitive shapes
  • this.sizeX and this.sizeY - the drawing area’s dimensions
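
Together, these are enough to paint anything. For example, this hypothetical snippet (not part of the project) would draw a 5-pixel dot dead center:

```javascript
// Hypothetical usage of the three properties: a 5px dot at the center.
this.vis.beginPath()
this.vis.arc(this.sizeX / 2, this.sizeY / 2, 5, 0, 2 * Math.PI, false)
this.vis.fillStyle = "#fff"
this.vis.fill()
```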

Design Considerations for Canvas 2D Visualization

Our resize() approach inverts the canvas default. If we were visualizing a product or a data graph, we would want to:

  1. Draw on the canvas at a specific size and aspect ratio
  2. Then, during page layout, let the browser scale that drawing into the DOM element

The product or graph is the main focus of the experience in this more common use case.
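
In that more common pattern, the drawing buffer stays at a fixed size and CSS scales the finished image during layout. A minimal sketch (the element id is hypothetical):

```javascript
// Sketch of the common pattern (not this project's): draw at a fixed size,
// then let CSS stretch the finished drawing during page layout.
const canvas = document.getElementById("chart") // hypothetical element
canvas.width = 800          // fixed drawing-buffer size and aspect ratio...
canvas.height = 600
canvas.style.width = "100%" // ...while CSS scales the element itself
const ctx = canvas.getContext("2d")
```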

Instead, our visualization is a theatrical depiction of the vastness of space, dramatically illustrated by launching dozens of tiny worlds into the void for our amusement.

Our celestial bodies express that space by staying modest, with radii clamped between 1 and 10 pixels (at most 20 pixels across). To create a sense of “scientific” spaciousness, the resizing scales the space between the dots while also increasing perceived velocity.

We initialize bodies with a drawSize proportional to mass to create a sense of scale between objects with vastly different masses:

```javascript
// nBodySimulation.js

export class Body {
  constructor(name, color, x, y, z, mass, vX, vY, vZ) {
    // ...
    this.drawSize = Math.min( Math.max( Math.log10(mass), 1), 10)
  }
}
```
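
Working through the clamp with the bodies we’re about to add: the star’s mass of 1e9 gives log10 = 9, so it draws with a 9-pixel radius; the two jupiters at 1e4 get 4 pixels; and the zero-mass asteroids, for which Math.log10(0) is -Infinity, are clamped up to the 1-pixel minimum.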

Creating Custom Solar Systems

We now have all of the tools we need to create our visualization in main.js:

```javascript
  // Keep Z coords near zero for the best view in an overhead 2D canvas.
  // Making up stable universes is hard.
  //                    name           color     x    y    z    mass   vX     vY    vZ
  sim.addBody(new Body("star",         "yellow",  0,   0,   0,   1e9))
  sim.addBody(new Body("hot jupiter",  "red",    -1,  -1,   0,   1e4,   .24, -0.05,  0))
  sim.addBody(new Body("cold jupiter", "purple",  4,   4, -.1,   1e4,  -.07,  0.04,  0))

  // A couple far-out asteroids to pin the canvas visualization in place.
  sim.addBody(new Body("asteroid", "black", -15, -15, 0, 0))
  sim.addBody(new Body("asteroid", "black",  15,  15, 0, 0))

  // Start the simulation
  sim.start()
```

You might notice the two “asteroids” at the bottom. These zero-mass objects are a hack that “pins” the simulation’s smallest viewport to a 30x30 area centered on 0,0.
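
To see why this works, note that the bounds() function shown below computes the viewport from a bounding box over all bodies. With the asteroids parked at (-15, -15) and (15, 15), that box always spans at least 30 units on each axis, so scaleX can never exceed this.sizeX / 30, no matter how tightly the planets cluster.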

Our paint function is now ready. Because the cloud of bodies can “wobble” away from the origin (0,0,0), we must shift in addition to scale.

When the simulation feels natural, we’re finished. There is no “correct” way to accomplish this. To arrange the planets’ initial positions, I simply fiddled with the numbers until they held together long enough to be interesting.

```javascript
  // Paint on the canvas
  paint(bodies) {
    if (!this.htmlElement) return

    // We need to convert our 3D float universe to a 2D pixel visualization.
    // Calculate the shift and scale.
    const bounds = this.bounds(bodies)
    const shiftX = bounds.xMin
    const shiftY = bounds.yMin
    const twoPi = 2 * Math.PI

    let scaleX = this.sizeX / (bounds.xMax - bounds.xMin)
    let scaleY = this.sizeY / (bounds.yMax - bounds.yMin)
    if (isNaN(scaleX) || !isFinite(scaleX) || scaleX < 15) scaleX = 15
    if (isNaN(scaleY) || !isFinite(scaleY) || scaleY < 15) scaleY = 15

    // Begin draw
    this.vis.clearRect(0, 0, this.vis.canvas.width, this.vis.canvas.height)

    bodies.forEach((body, index) => {
      // Shift and scale into pixel space
      const drawX = (body.x - shiftX) * scaleX
      const drawY = (body.y - shiftY) * scaleY

      // Draw on canvas
      this.vis.beginPath()
      this.vis.arc(drawX, drawY, body.drawSize, 0, twoPi, false)
      this.vis.fillStyle = body.color || "#aaa"
      this.vis.fill()
    })
  }

  // Because we draw the 3D space in 2D from the top, we ignore z.
  bounds(bodies) {
    const ret = { xMin: 0, xMax: 0, yMin: 0, yMax: 0, zMin: 0, zMax: 0 }
    bodies.forEach(body => {
      if (ret.xMin > body.x) ret.xMin = body.x
      if (ret.xMax < body.x) ret.xMax = body.x
      if (ret.yMin > body.y) ret.yMin = body.y
      if (ret.yMax < body.y) ret.yMax = body.y
      if (ret.zMin > body.z) ret.zMin = body.z
      if (ret.zMax < body.z) ret.zMax = body.z
    })
    return ret
  }
}
```

The actual canvas drawing code is only five lines long, each beginning with this.vis. The rest of the code is the scene’s grip - the rigging that frames the shot.

Art Is Never Finished, Only Abandoned

If a client appears to be spending money on polish that will not benefit them, this is an excellent time to bring it up. Investing in art is a business decision.

The client for this project (me) decided to move on from the canvas implementation to WebVR, because I wanted a visually appealing WebVR demo. So let’s wrap this up and move on to the next one!

We could take this canvas project in a variety of directions with what we’ve learned. As you may recall from the second post, we are creating multiple copies of the body data in memory:

[Diagram: Copies of the body data in memory]

If performance is more important than design complexity, the canvas’s memory buffer can be passed directly into WebAssembly, eliminating a few memory copies and improving performance.
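
As a hedged sketch of the idea (not this project’s code), assume a Wasm module that exports its memory, plus hypothetical render() and getPixelsPtr() exports:

```javascript
// Hypothetical sketch: paint pixels straight from WebAssembly memory.
// `wasm` is an instantiated module; render() and getPixelsPtr() are
// assumed exports, not part of this project.
const ctx = document.getElementById("visCanvas").getContext("2d")
const { width, height } = ctx.canvas

wasm.render(width, height) // Wasm writes RGBA bytes into its own memory
const pixels = new Uint8ClampedArray(
  wasm.memory.buffer, wasm.getPixelsPtr(), width * height * 4
)
ctx.putImageData(new ImageData(pixels, width, height), 0, 0)
```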

Projects like WebAssembly and AssemblyScript themselves are still working through upstream compatibility issues, as the specifications for these remarkable new browser capabilities continue to evolve.

All of these projects, as well as all of the open-source code I used here, are laying the groundwork for the VR-first internet commons of the future. We appreciate your help!

In the final post, we’ll look at some key design differences between developing a VR scene and a flat web page. And because VR is non-trivial, we’ll use a WebVR framework to create our spinning world. I chose Mozilla’s A-Frame, which is also built on canvas.

It has been a long road to get to the start of WebVR. However, this series was not about the A-Frame hello world demo. I wrote this series because I was excited to demonstrate the browser technology that will underpin the internet’s VR-first worlds.

Licensed under CC BY-NC-SA 4.0