WebAssembly vs JavaScript: the LZ77 algorithm

WebAssembly (Wasm for short) is an assembly-like instruction format for a stack-based virtual machine. It is designed as a compact binary format that can be executed at near-native speed on any platform hosting the virtual machine; in particular, all modern browsers (Chrome, Edge, Firefox and Safari) support it.

Thus, we can execute (in our browser) code written in any language, provided it can be compiled to Wasm. One of the main advantages of doing so is performance, compared to a JavaScript implementation, which is interpreted instead – partly, at least: nowadays JS code actually goes through Just-In-Time (JIT) compilation, but parsing and compiling still add overhead that Wasm can avoid.

This becomes significant when performing computation-heavy tasks, such as graphics rendering or data processing. To showcase how Wasm fares against JS, we will implement in both languages one such task: the LZ77 compression algorithm, chosen as it is quite straightforward to explain and visualise.

Lempel-Ziv (1977) algorithm

LZ77 is a lossless data compression algorithm, often described as a “sliding window” method. It transforms the input into a sequence of tuples of the form (offset, length, trailing), according to the occurrence of repeated symbols. Starting from the beginning of the input, we look for the rightmost longest occurrence of the lookahead (the unprocessed section of the input) in the window (the processed part). We then record where the match occurred (the offset and length) and the trailing symbol (the first unmatched one); this moves the delimiter between window and lookahead, and the same step is repeated until the whole input has been encoded.

As an example, consider “ababcxabcd”. We use a vertical bar to mark, at each step, the boundary between window (left) and lookahead (right), and write the emitted tuple next to it. The algorithm goes as follows:

 | ababcxabcd (0, 0, a)

a | babcxabcd (0, 0, b)

ab | abcxabcd (2, 2, c)

ababc | xabcd (0, 0, x)

ababcx | abcd (4, 3, d)

where the offset in each tuple denotes how far to the left of the delimiter the match begins, and the length how many symbols are repeated (e.g. in the last step the match is “abc”, of length 3, located 4 symbols to the left of the delimiter).
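Since the encoding is lossless, the original input can be recovered by simply replaying the tuples. As a quick sanity check, here is a decoder sketch in Rust (the language we will use for the implementation below); the function name is my own:

```rust
/// Decode a sequence of (offset, length, trailing) tuples back into bytes.
fn lz77_decode(tokens: &[(usize, usize, u8)]) -> Vec<u8> {
    let mut out = Vec::new();
    for &(offset, length, trailing) in tokens {
        // The match starts `offset` symbols to the left of the current end.
        let start = out.len() - offset;
        for i in 0..length {
            // Copy byte by byte, so matches overlapping the delimiter work too.
            let byte = out[start + i];
            out.push(byte);
        }
        out.push(trailing);
    }
    out
}
```

Feeding it the five tuples from the example above reproduces “ababcxabcd” exactly.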

Rust to Wasm

Rust is a low-level, statically typed programming language focused on safety and performance. It is quite new but already making a name for itself, having been voted most-loved language on Stack Overflow four years in a row. And compiling to Wasm is almost trivial! The wasm-pack crate (“crate” is Rust’s term for a package) handles the compilation itself, together with generating all the boilerplate needed to import the Wasm binary into JS and execute it in a web page.

The Rust implementation of LZ77 is quite straightforward, taking roughly 50 lines in total.
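An encoder along these lines captures the idea (names and exact structure are my own sketch; the test app’s actual code may differ):

```rust
/// One LZ77 tuple: (offset, length, trailing symbol).
#[derive(Debug, PartialEq)]
pub struct Token {
    pub offset: usize,
    pub length: usize,
    pub trailing: u8,
}

/// Encode `input` as a sequence of LZ77 tuples, mirroring the steps above.
pub fn lz77_encode(input: &[u8]) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut cursor = 0; // the delimiter between window and lookahead

    while cursor < input.len() {
        let mut best_offset = 0;
        let mut best_length = 0;

        // Look for the rightmost longest match of the lookahead in the window.
        for start in 0..cursor {
            let mut length = 0;
            // Stop one symbol early so a trailing symbol always remains;
            // the match is allowed to run past the delimiter.
            while cursor + length < input.len() - 1
                && input[start + length] == input[cursor + length]
            {
                length += 1;
            }
            if length > 0 && length >= best_length {
                best_length = length;
                best_offset = cursor - start;
            }
        }

        tokens.push(Token {
            offset: best_offset,
            length: best_length,
            trailing: input[cursor + best_length],
        });
        cursor += best_length + 1;
    }

    tokens
}
```

Running it on “ababcxabcd” yields exactly the five tuples worked out by hand in the previous section.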


We then only need to tell wasm-pack which functions and objects to export in the binary format.
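This is done by annotating the items with wasm-bindgen attributes. A sketch, assuming an encoder function like the lz77_encode above and a naive byte-flattened output (the real test app may expose different names and a different serialisation):

```rust
use wasm_bindgen::prelude::*;

// Exposed to JS by wasm-pack/wasm-bindgen; the generated glue code
// converts the JS typed array into a Rust slice and back.
#[wasm_bindgen]
pub fn compress(input: &[u8]) -> Vec<u8> {
    // Flatten each (offset, length, trailing) tuple into three bytes.
    // Illustrative only: a real encoder would use a compact bit layout
    // and handle offsets/lengths larger than 255.
    lz77_encode(input)
        .iter()
        .flat_map(|t| [t.offset as u8, t.length as u8, t.trailing])
        .collect()
}
```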


Running wasm-pack build --target web generates the Wasm binary and the JS boilerplate in the pkg folder.

Run in the browser

All we need to do is import the init function from the JS file generated by wasm-pack, which handles loading the Wasm binary into the browser’s virtual machine. After that, all the functions declared in Wasm are accessible, and we can invoke them like any other JS method.

We do exactly this in the main script of the test app.
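A sketch of what that script can look like (the module path, export names and lz77Encode helper are illustrative; they depend on the crate name and the actual app code):

```javascript
// Illustrative: the module path and export names depend on the crate.
import init, { compress } from "./pkg/wasm_lz77.js";

async function main() {
  // Load and instantiate the Wasm binary in the browser's virtual machine.
  await init();

  const input = new TextEncoder().encode("ababcxabcd");

  // Time the Wasm implementation...
  let start = performance.now();
  const wasmOutput = compress(input);
  console.log(`Wasm: ${performance.now() - start} ms`);

  // ...and then the plain-JS implementation of the same algorithm.
  start = performance.now();
  const jsOutput = lz77Encode(input);
  console.log(`JS: ${performance.now() - start} ms`);
}

main();
```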


There we run first the Wasm and then the JS implementation of LZ77 against a given input, and record the execution time of each. The JS version of LZ77 is identical to the Rust one, so that the two are equivalent from the algorithmic point of view and any difference in speed comes down to static versus JIT compilation.
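A sketch of what that JS port can look like, mirroring the Rust structure line for line (again, names are illustrative):

```javascript
// Plain-JS port of the LZ77 encoder: same algorithm, same tuple output.
function lz77Encode(input) {
  const tokens = [];
  let cursor = 0; // the delimiter between window and lookahead

  while (cursor < input.length) {
    let bestOffset = 0;
    let bestLength = 0;

    // Rightmost longest match of the lookahead within the window.
    for (let start = 0; start < cursor; start++) {
      let length = 0;
      // Stop one symbol early so a trailing symbol always remains.
      while (
        cursor + length < input.length - 1 &&
        input[start + length] === input[cursor + length]
      ) {
        length++;
      }
      if (length > 0 && length >= bestLength) {
        bestLength = length;
        bestOffset = cursor - start;
      }
    }

    tokens.push([bestOffset, bestLength, input[cursor + bestLength]]);
    cursor += bestLength + 1;
  }

  return tokens;
}
```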

Navigating to the test app (hosted on GitHub Pages at https://angelorendina.github.io/wasm-lz77/) fetches a small file, runs the compression on its bytes using three different encodings in both Wasm and JS, and outputs the execution time of each. Since the code runs in the browser, performance will vary dramatically from client to client: on my machine, the JavaScript implementation takes between 20 and 200 times longer than the Wasm one – a significant difference!

Is Wasm really faster?

JS files are really just text documents. To understand and execute them, the browser needs to parse their syntax and compile them to the host machine’s native instruction set: this is done by the JIT compiler, which also applies one or more optimisation passes based on how often particular code blocks run.

Wasm promises to drastically reduce the impact of these steps because:

  1. Its binary format is already precompiled, and can be fed directly to the JIT for the optimisation step;
  2. No garbage collection happens on the Wasm virtual machine.

But there is one catch. Browsers interface with the JS main thread, and every interaction with the Wasm virtual machine must pass through it: this is handled by (JavaScript) “glue code”, which wasm-pack generated for us in the example above. In particular, JS objects must be serialised and deserialised when crossing the Wasm boundary, and this operation is slow and inefficient. Invoking Wasm methods too often (especially when passing complex parameters) can therefore outweigh the performance boost they provide and actually slow down the execution overall. For these reasons, Wasm really shines when performing expensive computations that seldom need to report back to the JS main thread.

Performance is not everything, though. Another use case of Wasm is porting applications written for different platforms in different languages to the browser. For instance, C++ can be compiled to Wasm, and a few desktop applications have been turned into web apps with this approach – notably AutoCAD and Figma (amongst others), but also games like Doom 3 (https://wasm.continuation-labs.com/d3demo/).