Sharing Memory Across Threads in JavaScript
Enter the races
Previously I wrote about sharing threads across tabs. Now we’ll talk about sharing memory across threads.
Sharing memory is harder to set up than simply calling the API. It turns out there are hardware vulnerabilities called Spectre and Meltdown, which rely on some strange hardware behavior around timing, threads, and shared memory. To mitigate them, shared memory in JavaScript contexts has security restrictions that your site must meet.
Making Shared Memory
The requirements are that your site must be in a secure context (i.e. localhost or HTTPS) and that it must be cross-origin isolated. Cross-origin isolated basically means that your page only pulls resources from locations allowed by CORS, that popups are decoupled from cross-origin windows, and that embedding is restricted as well. In practice, it means serving your page with the Cross-Origin-Opener-Policy: same-origin and Cross-Origin-Embedder-Policy: require-corp headers.
In my experience, this usually means I can't just use python3 -m http.server to run my shared memory code, since it doesn't send those headers. Instead, I need to create a test server that sets them properly.
To check whether you're cross-origin isolated, read the boolean value window.crossOriginIsolated. If it's true, you're good.
Once those security requirements are met, it’s time to actually share memory.
To share memory, we first create a region of linear memory1 to share. Then we send that memory off to the worker (either Worker or SharedWorker) for use. Once the memory is shared, we can use it to communicate data between threads, just like we would in threaded programming languages like Java and C++. To create shared memory, we simply create a SharedArrayBuffer.
const sab = new SharedArrayBuffer(1024); // 1KB
worker.postMessage(sab); // share it with the worker
Simple enough!
Using Array Buffers
So, now we need to create the workers that use the shared memory. Since multi-threading itself can be difficult, we'll start with a worker in an isolated, non-multi-threaded state, which we can then “upgrade” to a multi-threaded state. That way we can separate shared memory issues from logic or implementation issues, which helps a lot.
One of the first issues we'll run into with the linear memory approach is that array buffers (shared or non-shared) cannot be accessed directly. Instead, they need to be wrapped in a typed array that “views” into the buffer. I don't know why JavaScript did things this way, but that's the way the standard went.
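As a quick illustration of views, two typed arrays over the same buffer alias the same bytes:

```javascript
// Two different typed-array views over one ArrayBuffer share its bytes.
const buf = new ArrayBuffer(8)
const ints = new Int32Array(buf)   // two 32-bit slots
const bytes = new Uint8Array(buf)  // eight 8-bit slots, same memory
ints[0] = -1                       // sets all bits of the first four bytes
// bytes[0] through bytes[3] are now 255; bytes[4] through bytes[7] are still 0
```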
We’ll start with a basic worker that just increments an index some number of times. Here’s our code:
let memory = new Int32Array(new ArrayBuffer(1024))
let offset = 0
onmessage = (e) => {
for (let i = 0; i < 200; ++i) {
memory.set([memory.at(offset) + 1], offset)
}
postMessage({final: memory.at(offset)})
}Here we create an array buffer, and we get an integer array view over that buffer2. We then increment that index in the buffer 200 times, and then we return the final measurement at that buffer.
We can easily test our code by spinning up a worker and sending a message, like so:
const worker = new Worker('worker-01.js')
worker.onmessage = (e) => console.log(e.data)
worker.postMessage('run')
With that, we get back our {final: 200} just as expected. If we post another message, we’ll get {final: 400}, and so on.
Nothing too surprising. Now, let’s update our worker to use a shared memory buffer. We’ll need to update our message receiving. While we’re in there, we’ll also pass in the offset we’re writing to. Here’s the new worker code:
let memory = new Int32Array(new ArrayBuffer(1024))
let offset = 0
onmessage = (e) => {
if (e.data.type === 'init') {
memory = new Int32Array(e.data.memory) // use the memory buffer we're given
offset = e.data.offset
// don't respond
}
else if (e.data.type === 'run') {
for (let i = 0; i < 200; ++i) {
memory.set([memory.at(offset) + 1], offset)
}
postMessage({final: memory.at(offset)})
}
}
Now, setting up our worker is a little more complicated, but not much. This time we’ll create a shared array buffer, send it to our worker, and run it. We can then wait for a response and read from our memory to make sure the values line up. Here’s our runner code:
// our config
const memory = new SharedArrayBuffer(1024)
const arr = new Int32Array(memory)
const offset = 1
await run(memory, offset)
console.log("Memory data: ", arr.at(offset))
function run(memory, offset) {
const worker = new Worker('example-02.js')
const wait = new Promise((resolve) => {
worker.onmessage = (e) => {
resolve(e.data)
}
})
worker.postMessage({type: 'init', offset, memory})
worker.postMessage({type: 'run'})
return wait
}
We get the same response back from the worker, but this time we can also read the data directly. Once we read the data, we see that we indeed have 200 in our main thread’s memory!
I am aware that the top-level await requires the code to be inside an async function in most JavaScript/TypeScript environments. I’m omitting the wrapper async function for brevity.
Racing with Multiple Threads
Now that we have one thread writing to our data, let’s add more! We want all of them working towards the same goal (in this case, adding 200 to a piece of memory). It should be as simple as spawning more workers, and giving them each the same offset and memory, right? Let’s give it a try.
// our config
const memory = new SharedArrayBuffer(1024)
const arr = new Int32Array(memory)
const offset = 1
const numRunners = 4
const runners = []
for (let i = 0; i < numRunners; ++i) {
runners.push(run(memory, offset))
}
await Promise.all(runners)
console.log("Memory data: ", arr.at(offset))
// omitting run function, same as above example
Let’s run that and… we don’t get 800. At least, not always. In fact, we pretty much get different results in all of our test runs. For one of my runs, I got 688 total. When I looked at the messages from each of my runners for that run, I got 288, 288, 488, and 688. When I ran it again, I got 284, 303, 503, and 703. What’s going on?
Well, we introduced a race condition into our code. All of our threads are trying to read and write the same memory at the same time. These reads and writes have no sequencing guarantees, so they get interleaved arbitrarily, which produces the odd results. To resolve this, we must tell the code how to sequence memory access.
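The lost update can be made deterministic in a single thread by spelling out the read-then-write steps. This is an illustration of the interleaving, not actual threaded code:

```javascript
// Simulating the race: both "threads" read before either writes,
// so one increment is lost.
let counter = 0
const readA = counter    // thread A reads 0
const readB = counter    // thread B reads 0 before A has written
counter = readA + 1      // thread A writes 1
counter = readB + 1      // thread B also writes 1 - A's increment is lost
```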
Sequencing Shared Memory
The simplest method to sequence memory is with atomics. Atomics allow a single operation on memory (load, store, exchange, add, subtract) to happen as one indivisible, sequenced step. The catch is that only that one operation is sequenced. If we have two atomic operations, they are sequenced separately, meaning that interleaving (and thus data races) can still happen between them. This means the following code still has the same bug as before, even though it uses atomics:
let memory = new Int32Array(new SharedArrayBuffer(1024))
let offset = 0
onmessage = (e) => {
if (e.data.type === 'init') {
memory = new Int32Array(e.data.memory) // use the memory buffer we're given
offset = e.data.offset
// don't respond
}
else if (e.data.type === 'run') {
for (let i = 0; i < 200; ++i) {
Atomics.exchange(memory, offset, Atomics.load(memory, offset) + 1)
}
postMessage({final: memory.at(offset)})
}
}
Between the load and the exchange, the CPU can interleave instructions from other threads, which isn’t what we want. To fix this, the entire read-modify-write needs to happen in one atomic operation. This can be done as follows:
let memory = new Int32Array(new SharedArrayBuffer(1024))
let offset = 0
onmessage = (e) => {
if (e.data.type === 'init') {
memory = new Int32Array(e.data.memory) // use the memory buffer we're given
offset = e.data.offset
// don't respond
}
else if (e.data.type === 'run') {
for (let i = 0; i < 200; ++i) {
Atomics.add(memory, offset, 1)
}
postMessage({final: Atomics.load(memory, offset)})
}
}
Now if we run this updated threading code, we’ll get 800 as our final result. We can run it several times, and we’ll always end up with 800 in the end.
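Atomics work the same way on the main thread, which makes them easy to poke at in isolation. One detail worth knowing: Atomics.add returns the value that was in memory before the addition:

```javascript
// Single-threaded demonstration of atomic add-and-load semantics.
const sab = new SharedArrayBuffer(16)
const view = new Int32Array(sab)
const before = Atomics.add(view, 0, 1) // returns the old value: 0
const after = Atomics.load(view, 0)    // reads the new value: 1
```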
There’s a lot more to threading than what we’ve covered in this article. This at least should be enough to get some wheels turning.
In future posts, I’ll cover locking for more complicated synchronization, growing shared memory, and more. JavaScript doesn’t include locking primitives most developers are familiar with (outside of atomics), so we’ll need to build our own locking primitives, such as mutexes and semaphores.
Linear memory is basically an array of bytes: there’s a starting index (0) and an ending index (the maximum size of the memory). Most low-level systems and embedded code view memory this way, since it closely mirrors how the hardware operates. Memory allocators take this linear memory and subdivide it into “allocations” - pieces of memory designated for use. An allocation is essentially the allocator saying “this chunk of memory is already used.” When allocated memory is no longer needed, it is “freed” - not destroyed, simply marked as available for reuse.
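A toy “bump” allocator makes the subdivision concrete. This is a sketch, not a real allocator - it claims ranges left to right and can’t free individual allocations:

```javascript
// Toy bump allocator over linear memory: hand out ranges left to right.
const heap = new Uint8Array(1024)
let next = 0                          // everything before `next` is "used"
function alloc(size) {
  if (next + size > heap.length) throw new Error('out of memory')
  const offset = next
  next += size                        // claim the range [offset, offset + size)
  return offset
}
```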
The array interface provides at and set to access the memory, so that’s what we use. The set method takes an ArrayLike as its first parameter because it allows setting multiple values at once. We’re just setting one element, so we pass an array with a single element.

