# A millennial spiral

If you saw my previous post (the post has now been removed), you will have seen that I took a small break from programming to play around with designing a new icon for the blog. As always, the design process wasn't so straightforward. In the spirit of a developer raised in an age full of technological distractions, I somehow spiraled completely off-topic and wound up examining methods for multivariate interpolation. My thinking was that I needed a more advanced gradient/blending tool than what photo editing programs currently provide, in order to make a pretty gradient for the icon background!

For those of you who just want to mess about with pretty gradients, you can play around with the demo.

# Building the demo

To reiterate in more detail: I wanted to create a tool which, given a number of colors positioned arbitrarily in 2D space, would be able to interpolate between them and produce interesting gradients/maps.

Over the past year or so, most of my free time has gone into learning Rust. Coming from a C++ background, many of its language features feel fun, fresh, and intuitive to use. But I'm not here to advocate the language itself, instead, I wanted to take a look at how I used Rust to build a tiny WebGL application. Although tangential, this experience turned out to be a worthwhile foray into the state of Rust and WebAssembly (WASM) at the end of 2018.

I spent over a month intermittently messing about with wasm-bindgen and trying to make sense of the SIGGRAPH 2010 course, Scattered Data Interpolation for Computer Graphics by Ken Anjyo et al. I managed to build a demo using Monaco to edit JSON input, whilst displaying the result in a WebGL quad. Here are just some of the abstract masterpieces I generated.

Some combinations create some pretty wacky results! I've placed the JSON snippet for the bottom-right example on GitLab for those that are interested (try toggling the visualize_fields option for even more wacky goodness). Admittedly, the more regular-looking results are slightly underwhelming - not unlike a bunch of radial gradients in Photoshop slapped on top of one another - but I learned a lot in the process, and that's what really matters after all!

Within the month of hacking: about a third of the time was spent trying to simply get started; another third was better invested in iterating and experimenting with interpolation algorithms; the last third was unfortunately spent wrangling with CSS.

## Burden of web development

Web development has always been a particularly impenetrable region of software for me. This is probably in part my own fault, as I haven't devoted enough of my resources to the craft. Yet through this experience, it became clear to me that the unstable landscape of trends and practices can be incredibly discouraging to newcomers. Learning anything within this ecosystem can be hugely unrewarding because, by the same time next year, your knowledge will most likely be out of date.

In the computer graphics scene, on the other hand, developers spend less time iterating on the infrastructure side, and more time on core algorithms and mathematics. Development and change naturally occur at a slower rate, and cognitive load is pushed into a different area where building robust, mathematical tooling is valued over chasing trends.

Some reasons why I chose to use wasm-bindgen:

• Minimal JavaScript required to build a front-end. More on this in a bit.
• Can be embedded on the blog as something that readers could play with.
• WebAssembly should give me the performance necessary for computer graphics.1
• Traditional native windowing/GUI is a pain; HTML and CSS provide expressive freedom.

With wasm-bindgen I was able to solve most of my problem in Rust, whilst only a thin layer required any web-specific knowledge. Unfortunately, the amount of configuration needed to get a nice webpack/node set-up isn't minimal. I sit in an awkward spot where I would like to understand as much of what I am building as possible, but also not have to care as much for the parts that I am not interested in. It would be a huge improvement for developers like me if tools like webpack/node would be more "batteries-included" and require less configuration to get up-and-running.

wasm-pack is one solution to my problem, but it creates a lot of magic around the "getting started" process. Instead of simplifying the set-up, it automates it, and those are two different approaches in my opinion. I understand this might not be so much an issue with the Rust + WASM ecosystem itself.

Aside from my complaints, wasm-bindgen itself was simple and easy to use. It boiled down to tagging whatever I wanted to expose on the JavaScript side with a single macro, #[wasm_bindgen].

```rust
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub struct ColorMapDisplay {}

#[wasm_bindgen]
impl ColorMapDisplay {
    #[wasm_bindgen(constructor)]
    pub fn new() -> ColorMapDisplay {
        ColorMapDisplay {}
    }

    /// Initialize WebGL stuff.
    pub fn init(&mut self) -> Result<(), JsValue> {
        // ...
        Ok(())
    }

    /// Update the color mapping according to the given JSON configuration.
    pub fn update(&mut self, json: &str) -> Result<(), JsValue> {
        // ...
        Ok(())
    }

    /// Draw the result of the interpolated colors using WebGL.
    pub fn draw(&self) -> Result<(), JsValue> {
        // ...
        Ok(())
    }

    // Etc.
}
```

• To make things easier for myself, I made a helper struct that would handle talking to JavaScript and make all the GL calls.
• By tagging the impl block, bindings are automagically generated for any public methods.
• Using the wasm-bindgen command-line tool, we spit out a bunch of JavaScript glue (and TypeScript definitions) that enable us to import the WASM module in JavaScript.
```js
const rust = import("./colormap");

rust.then(module => {
    // Create the display helper.
    const display = new module.ColorMapDisplay();

    // Etc.
}).catch(console.error);
```


An interesting question I had yet to test was how Drop is managed through the bindings. My assumption would be that it behaves as you would expect, in that when the struct goes out-of-scope in JavaScript land, it is dropped accordingly.

On another note, as image processing algorithms are often embarrassingly parallel - considering cases where the value of each pixel can be computed independently of the others - I was hoping to be able to jam Rayon into the demo at the end and get a "free" performance boost. Unfortunately, it seemed as though the crate was yet to be WASM-able, although I imagine support should be coming soon.2

## Scattered data interpolation

Let's take a look at the mathematics behind the interpolation algorithms to understand the output images better. Quick thumbs-up to the nalgebra developers for their great work on building a Rust equivalent of Eigen.

Plain and simple linear interpolation is an effective method for evenly blending between two points of data. There are plenty of other forms of interpolation, from smoothstep to cubic spline, which uses four data points. In the end, interpolation is used to compute unknown values within the range of a discrete set of known points of data. More often than not, this boils down to a weighted average of said data points.
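As a quick illustration (these helper functions are my own, not from the demo), linear interpolation and smoothstep can be written in a few lines of Rust:

```rust
/// Linearly interpolate between `a` and `b` by `t` in [0, 1].
fn lerp(a: f32, b: f32, t: f32) -> f32 {
    a + (b - a) * t
}

/// Hermite smoothstep: like lerp, but eases in and out at the endpoints.
fn smoothstep(a: f32, b: f32, t: f32) -> f32 {
    let t = t.clamp(0.0, 1.0);
    lerp(a, b, t * t * (3.0 - 2.0 * t))
}

fn main() {
    // At the midpoint both methods agree.
    println!("{} {}", lerp(0.0, 10.0, 0.5), smoothstep(0.0, 10.0, 0.5));
    // Near the endpoints, smoothstep lags behind the linear blend.
    println!("{} {}", lerp(0.0, 10.0, 0.25), smoothstep(0.0, 10.0, 0.25));
}
```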

What I hadn't realized, was that many methods existed for interpolation of $$N$$ number of data points. These methods have endless applications: from obvious tasks such as image reconstruction or building topographic maps from sparse data sets; to more creative industries, such as pose space deformation for better shape interpolation of skeleton-driven deformation.

Shepard's method, or inverse distance weighting (IDW), is a common method of multivariate interpolation. The value at an unknown point is computed as a weighted average of the known values, where each weight is inversely proportional to the distance to the corresponding known point, i.e., the weight increases as the distance decreases. This has been implemented in the demo; you can give it a try using the following option.

"algorithm": {
"Shepard": {
"power": 2.0,
"epsilon": 0.001
}
}
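A minimal sketch of Shepard's method in Rust, matching the power and epsilon parameters above. This is my own simplified formulation, not the demo's exact code, and the way epsilon guards the denominator is one common variant:

```rust
/// A known data point: a 2D position and an associated value
/// (e.g. one color channel).
struct Sample {
    pos: (f32, f32),
    value: f32,
}

/// Euclidean distance between two 2D points.
fn distance(a: (f32, f32), b: (f32, f32)) -> f32 {
    ((a.0 - b.0).powi(2) + (a.1 - b.1).powi(2)).sqrt()
}

/// Inverse distance weighting: a weighted average of the known values,
/// where each weight falls off as distance^power grows.
fn shepard(samples: &[Sample], query: (f32, f32), power: f32, epsilon: f32) -> f32 {
    let mut weighted_sum = 0.0;
    let mut weight_total = 0.0;
    for s in samples {
        let d = distance(s.pos, query);
        // Epsilon guards against dividing by zero when the query
        // lands exactly on a known data point.
        let w = 1.0 / (d.powf(power) + epsilon);
        weighted_sum += w * s.value;
        weight_total += w;
    }
    weighted_sum / weight_total
}
```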


One of the interesting properties of the algorithm is that as the power increases, the colors more closely approximate a Voronoi diagram.

As the power increases, a side effect of the algorithm also becomes more obvious: if the queried point sits exactly on top of one of the known data points, the weight calculation ends up dividing by zero. The naive way to work around this is to increase the size of the epsilon value used when guarding against zero-length distances.

Radial basis functions, as far as I understand them (which isn't very far; thankfully, the literature on RBFs enabled me to implement them without a full understanding), make up a method of interpolation where the interpolated "surface" or "result" is a combination of basis functions. Wikipedia's mind-blowing explanation helped me wrap my head around this:

> Every continuous function in the function space can be represented as a linear combination of basis functions, just as every vector in a vector space can be represented as a linear combination of basis vectors.

One of the key benefits of using RBFs is its ability to generate values outside of the range of known values.3 This produces much smoother functions than IDW. Another nice property of the method is that it can be evaluated for nearly anything which can define a distance function. In our case, we use the Euclidean distance between positions in $$\mathbb{R}^2$$.

The SIGGRAPH paper gives a decent description of the implementation, but for my own understanding I thought I would regurgitate it with a bit more specificity. We can essentially express the problem as a matrix multiplication in the form $$AX = B$$, where:

• $$A$$ is a square matrix filled with the results of evaluating an RBF kernel on the distance between every pair of known positions, i.e., its size grows quadratically with the number of defined data points.
• $$X$$ is a column vector of (unknown) weights.
• $$B$$ is a column vector of the known values.
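Written out, with $$\varphi$$ denoting the chosen RBF kernel and $$p_i$$ the known positions (my notation, not the paper's), the entries of $$A$$ and the full system look like:

$$A_{ij} = \varphi(\lVert p_i - p_j \rVert)$$

$$\begin{bmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & \ddots & \vdots \\ A_{n1} & \cdots & A_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}$$

Once the weights $$x_i$$ are known, the interpolant at an arbitrary point $$p$$ is $$f(p) = \sum_{i} x_i \, \varphi(\lVert p - p_i \rVert)$$.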

Solving the RBF becomes as simple as solving the unknown matrix $$X$$. We can imagine this as, "What are the weights $$X$$, so that the RBF will produce the known values $$B$$ when evaluating it at each of their respective known positions?"
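To make the pipeline concrete, here is a self-contained sketch of the fit-then-evaluate process. It is my own simplified version with a Gaussian kernel and a naive Gaussian elimination solver; the demo itself leans on nalgebra for the solving, and all names here are hypothetical:

```rust
/// Gaussian RBF kernel.
fn gaussian(r: f32, shape: f32) -> f32 {
    (-(shape * r).powi(2)).exp()
}

/// Euclidean distance between two 2D points.
fn dist(a: (f32, f32), b: (f32, f32)) -> f32 {
    ((a.0 - b.0).powi(2) + (a.1 - b.1).powi(2)).sqrt()
}

/// Solve A x = b by Gaussian elimination with partial pivoting.
/// (In practice you would use nalgebra's decompositions instead.)
fn solve(mut a: Vec<Vec<f32>>, mut b: Vec<f32>) -> Vec<f32> {
    let n = b.len();
    for i in 0..n {
        // Swap in the row with the largest leading entry.
        let pivot = (i..n)
            .max_by(|&r, &s| a[r][i].abs().partial_cmp(&a[s][i].abs()).unwrap())
            .unwrap();
        a.swap(i, pivot);
        b.swap(i, pivot);
        // Eliminate the entries below the pivot.
        for r in (i + 1)..n {
            let f = a[r][i] / a[i][i];
            for c in i..n {
                a[r][c] -= f * a[i][c];
            }
            b[r] -= f * b[i];
        }
    }
    // Back substitution.
    let mut x = vec![0.0; n];
    for i in (0..n).rev() {
        let sum: f32 = ((i + 1)..n).map(|c| a[i][c] * x[c]).sum();
        x[i] = (b[i] - sum) / a[i][i];
    }
    x
}

/// Fit RBF weights so the interpolant reproduces `values` at `positions`.
fn rbf_fit(positions: &[(f32, f32)], values: &[f32], shape: f32) -> Vec<f32> {
    // A is the kernel evaluated on every pair of known positions.
    let a: Vec<Vec<f32>> = positions
        .iter()
        .map(|&p| positions.iter().map(|&q| gaussian(dist(p, q), shape)).collect())
        .collect();
    solve(a, values.to_vec())
}

/// Evaluate the fitted interpolant at an arbitrary point.
fn rbf_eval(positions: &[(f32, f32)], weights: &[f32], shape: f32, p: (f32, f32)) -> f32 {
    positions
        .iter()
        .zip(weights)
        .map(|(&q, &w)| w * gaussian(dist(p, q), shape))
        .sum()
}
```

By construction, evaluating the interpolant at one of the known positions reproduces that position's known value.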

Each value in $$B$$ is expected to be a single scalar value. If values have more components, like our tuples of red, green, and blue, each component is evaluated independently. Thankfully, nalgebra handles this for us, solving the systems simultaneously. When we enable the visualize_fields option in the demo, we can see that each color component is computed independently of the others.

Here we have a comparison of evaluating two different RBF kernels. Gaussian on the left, and inverse multiquadric on the right.
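For reference, the two kernels compared above can be written as follows (the shape parameter and its placement are one common convention; the demo's exact parameterization may differ):

```rust
/// Gaussian kernel: falls off quickly, giving tight, local influence.
fn gaussian(r: f32, shape: f32) -> f32 {
    (-(shape * r).powi(2)).exp()
}

/// Inverse multiquadric kernel: decays more slowly, so distant
/// data points still contribute noticeably to the result.
fn inverse_multiquadric(r: f32, shape: f32) -> f32 {
    1.0 / (1.0 + (shape * r).powi(2)).sqrt()
}
```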

It was quite interesting to compare the two interpolation methods I implemented. Below we have IDW (left) and RBF (right). Each data point is very clearly visible with IDW, whereas RBF produces a much smoother gradient. With RBF, the distances between color components are not equal, so the boundaries between each data point become less well-defined, e.g., we can see the green from the bottom right point "bleeding" more into the red of the bottom left.

The differences between the two methods are made more obvious when we enable visualization of the field for each color component.

You can play around with the demo here! This is implemented almost entirely in WASM with the exception of the JSON editor which uses Monaco. The source is available on GitLab.

Building this tool has been an educational experience. Being able to bind Rust to WASM so easily is incredibly exciting and is sure to be a big part of the near future - a number of the 2019 wishlist blog posts point in that direction after all.

I have Dave Reeves to thank for the help in understanding the mathematics behind the interpolation methods. I personally still have a way to go before I'm comfortable reading an equation-filled paper. An interesting afterthought would be to play with interpolating colors in different color spaces. It turns out RGB doesn't necessarily blend very well and can produce some rather muddy colors (red-green in particular).

To get started with Rust and WASM yourself, I would suggest reading the Rust and WebAssembly book. For the "learn-by-example" inclined, take advantage of the plethora of examples available in the wasm-bindgen guide.

3. In fact, I've had to clamp interpolated values to the standard 8-bit color range in the demo, as there were cases where the function would produce values over 1.0, which would overflow when casting to u8 color values.