Note: I’m working on some other posts at the moment but felt I’d left the dev
section of my blog alone for too long. So here’s a little interim post that tackles an issue I had lately.
Everyone loves fahncy web maps, stuff that looks cool and makes you feel like computers are neat. And there’s a popular type of layer that shows wind speeds as a gorgeously animated particle effect. Probably the most famous example for my fellow Aussies is bushfire.io. Damn, doesn’t that look cool as hell?
There are a few tutorials and libraries online so you can learn how to implement it; I’ve previously used that second link to do it in JavaScript via deck.gl, the neat-o modern web mapping library that takes advantage of your device’s GPU through the WebGL2 API. *finger to earpiece* I’m just getting word from our Chronomancy division that my information is out of date. While that was correct for deck.gl and luma.gl (the graphics library powering deck.gl) throughout version 8.x, the major bump to version 9.x from over a year ago is focused on the WebGPU API. Consequently there’s a heap of breaking changes for a lot of version 8.x plugins and layers, and definitely for any where you write your own shaders.
Unfortunately a project I was working on needed the upgrade to deck.gl and I got absolutely lost getting the particle layer up to spec. If you’re in a similar spot, this might help you implement a wind particle layer in deck.gl 9.1. It’ll also let you do the entire thing in TypeScript so your linter won’t keep cracking the shits at you.
Please note that this requires at least deck.gl 9.1 - there are issues in 9.0 where the TypeScript won’t compile using the method below.
Backend
I’m gonna gloss over this because you’d already have it sorted for any existing 8.x implementation. But you need ~something~ (image, binary file, big-ass array) that represents the wind speed for every point on the globe. For my implementation of a particle layer I use a PNG where the red channel represents horizontal wind speed and the green channel represents vertical wind speed. Each pixel thus represents a coordinate on the Earth’s surface. By taking the horizontal and vertical components you can work out the velocity (i.e. direction and speed), which is how we get the particle layer working.
Where do you get such an image? You can grab it from NOAA’s GFS (Global Forecast System). Grab the relevant forecast time and oh wow it’s probably some big-ass GRIB2 file? Don’t worry, you can use the gdal_translate utility to pull out what you want like so:
gdal_translate -b 11 -b 12 -b 12 -ot Byte -scale -128 127 0 255 gfs.t12z.pgrb2.1p00.f000 output.png
That will grab bands 11 and 12 (horizontal/vertical wind speed), squash each value into the 0-255 range (-ot Byte -scale -128 127 0 255), and give you a sweet PNG file. Make sure this is accessible by your web map (don’t forget CORS!).
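If it helps to picture what the layer will eventually do with that image, here’s a purely illustrative sketch (not part of the layer code) of turning a pixel back into a wind vector, assuming the -128..127 to 0-255 scaling used above:

function windAtPixel(image: ImageData, x: number, y: number) {
  // ImageData is RGBA, so 4 bytes per pixel.
  const i = (y * image.width + x) * 4;
  const u = image.data[i] - 128;     // red channel: horizontal component (roughly m/s)
  const v = image.data[i + 1] - 128; // green channel: vertical component (roughly m/s)
  return { u, v, speed: Math.hypot(u, v) };
}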
Caveats
This is not for safety. Heads up that GFS is a forecast - if you want the latest wind observations then you’ll need to find another source. Note that the GFS only goes down to about 0.25 of a degree of resolution, so I’d say it’s a good model for a pretty animation or a rough idea of wind speed, but I would not use this layer in a safety context.
This is an intro example. This version doesn’t have all the funky features you see in some of the other particle layers, nor does it let you use a globe model, but it’s designed as a starting point to work from. If you want something fahncier with colours or adjustable speeds then hell yeah brother, go for it. If you want me to add those features or toggles to the examples you can contact me via an existing channel to find out my consulting rates. I offer discounts to charities and organisations doing ethical work and will send you a photo of a gaping anus (possibly even mine) if you represent a defence company.
Before The Layer
You’ll need to make sure you import the image before instantiating the particle layer. Here’s a little function that’ll import the PNG image to a temporary canvas and get the raw pixel data for it:
async function loadImage(url: string): Promise<ImageData> {
  const img = new Image();
  // crossOrigin must be set before src, otherwise the canvas gets tainted
  // and getImageData() will throw.
  img.crossOrigin = "anonymous";
  img.src = url;
  try {
    await img.decode();
  } catch (e) {
    throw new Error(`Image ${url} can't be decoded.`);
  }
  // Draw the image onto a temporary canvas so we can read the raw pixels.
  const canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;
  const canvas2d = canvas.getContext("2d");
  if (!canvas2d) {
    throw new Error("Couldn't get a 2D canvas context.");
  }
  canvas2d.drawImage(img, 0, 0);
  return canvas2d.getImageData(0, 0, canvas.width, canvas.height);
}
If you have an alternate form of wind data, you’ll need to make changes to how this works to suit your needs.
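Usage is then just a matter of awaiting it before you build the layer. A quick sketch (the URL is made up, and the image prop name is an assumption - see the props sketch further down):

const windImage: ImageData = await loadImage("https://example.com/wind/output.png");

const layer = new ParticleLayer({
  id: "wind-particles",
  image: windImage, // assumed prop name
  // ...the rest of your props
});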
The ParticleLayer Class
If you have an existing deck.gl 8.x implementation or have used the examples I linked above then a lot of the code for the ParticleLayer class will be the same. What I’m going to do is run through the major changes rather than build the whole thing, however I will include links to a full example at the end of the post.
Typing
This is like any JavaScript to TypeScript conversion. Gotta add those types. But like every such conversion you suddenly realise how much you just used the flow of the code to assume what type would be there. Sure, strict typing prevents errors where you did that incorrectly, but it also means a lot more time making sure everything is initialised and hinted from the outset, plus checks to ensure that something is not undefined (even if it wouldn’t be in actual use)1.
For our module this will hit us in two places. Firstly, for my implementation I’m using a ShaderModule for the uniform declarations. Because our TypeScript and GLSL are both typed, we need to make sure we define not just the types used by our regular TS code but also define them on the GPU. So instead of just slamming a bunch of uniforms into our transform.run({uniforms}) like we did in the carefree days of yore (1-6 years ago), we need to have something like this:
// Shader Module
// (Type imports assumed for luma.gl v9; adjust to your setup.)
import type { Texture } from "@luma.gl/core";
import type { ShaderModule } from "@luma.gl/shadertools";

export type UniformProps = {
  numParticles: number;
  maxAge: number;
  speedFactor: number;
  time: number;
  seed: number;
  viewportBounds: number[];
  viewportZoomChangeFactor: number;
  imageUnscale: number[];
  bounds: number[];
  bitmapTexture: Texture;
};

const uniformBlock = `\
uniform bitmapUniforms {
  float numParticles;
  float maxAge;
  float speedFactor;
  float time;
  float seed;
  vec4 viewportBounds;
  float viewportZoomChangeFactor;
  vec2 imageUnscale;
  vec4 bounds;
} bitmap;
`;

export const bitmapUniforms = {
  name: "bitmap",
  vs: uniformBlock,
  fs: uniformBlock,
  uniformTypes: {
    numParticles: "f32",
    maxAge: "f32",
    speedFactor: "f32",
    time: "f32",
    seed: "f32",
    // @ts-ignore
    viewportBounds: "vec4<f32>",
    viewportZoomChangeFactor: "f32",
    imageUnscale: "vec2<f32>",
    bounds: "vec4<f32>",
  },
} as const satisfies ShaderModule<UniformProps>;
A bunch of boilerplate, but it ensures we’ve got the correct data (or at least the correct types - not on me if you assign the wrong values) when we’re ready to run the transform.
The other big part is making sure the state of our ParticleLayer class is properly set up. So right after the class declaration we’ll drop this in:
export default class ParticleLayer<
  D = any,
  ExtraPropsT = ParticleLayerProps<D>
> extends LineLayer<D, ExtraPropsT & ParticleLayerProps<D>> {
  state!: {
    model?: Model;
    initialized: boolean;
    numInstances: number;
    numAgedInstances: number;
    sourcePositions: Buffer;
    targetPositions: Buffer;
    sourcePositions64Low: Float32Array;
    targetPositions64Low: Float32Array;
    colors: Buffer;
    widths: Float32Array;
    transform: BufferTransform;
    previousViewportZoom: number;
    previousTime: number;
    texture: Texture;
    stepRequested: boolean;
  };
}
A bunch of the properties may already be familiar. You’ll also want to make sure you’ve typed your ParticleLayerProps as an extension of LineLayerProps and done due diligence with any default props. That’s an exercise left for the reader. Unless you just want to copy what I’ve got at the bottom, that’s valid too.
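If you want a rough starting point, here’s a sketch - the extra prop names beyond LineLayerProps are assumptions from my implementation, so rename to taste:

import type { LineLayerProps } from "@deck.gl/layers";

export type ParticleLayerProps<D = any> = LineLayerProps<D> & {
  image: ImageData | null;                  // wind texture from loadImage()
  imageUnscale: [number, number] | null;    // e.g. [-128, 127] for the PNG above
  bounds: [number, number, number, number]; // [west, south, east, north]
  numParticles: number;
  maxAge: number;
  speedFactor: number;
  color: [number, number, number, number];
  width: number;
};

const defaultProps = {
  image: { type: "object" as const, value: null },
  imageUnscale: { type: "array" as const, value: [-128, 127] },
  bounds: { type: "array" as const, value: [-180, -90, 180, 90] },
  numParticles: { type: "number" as const, min: 1, value: 5000 },
  maxAge: { type: "number" as const, min: 1, value: 30 },
  speedFactor: { type: "number" as const, min: 0, value: 1 },
  color: { type: "color" as const, value: [255, 255, 255, 255] },
  width: { type: "number" as const, value: 1 },
};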
Typing done. Next I’ll run through the major functions you’ll have used in the 8.x implementation, in each case focusing on what makes our variant different.
getShaders()
This is the same. The only thing I’ll need you to change is any mention of varying - the 9.x shaders use GLSL 3.00 ES in/out qualifiers instead. So when you’re injecting at the start of the vertex shader you’ll want to do:
out float drop;
And the accompanying fragment shader will have:
in float drop;
This reflects the direction of data flow for each shader: it’s an output from the vertex shader, which then becomes an input to the fragment shader.
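If you want the shape of the whole thing, here’s a rough sketch - the injection bodies are whatever your 8.x version already had, only the in/out change is new:

getShaders() {
  return {
    ...super.getShaders(),
    inject: {
      "vs:#decl": `
        out float drop; // was: varying float drop;
        // ...your existing vertex-shader declarations
      `,
      "fs:#decl": `
        in float drop;  // was: varying float drop;
        // ...your existing fragment-shader declarations
      `,
      // ...plus the same vs/fs main-body injections as your 8.x version.
    },
  };
}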
initialiseState()
Unfortunately because of the seppo cultural hegemony we can’t fix the obvious mistakes of “initialize” and “color”, thanks to so many of them being references to the class we’re extending from and integrations in existing modules, but that’s what we get when the lingua franca was designed by people who popularised horse dewormer as an antiviral.
What we can do, after removing some of the LineLayer attributes as in the 8.x version, is re-add the ones we will use and describe their buffer layout on the GPU:
initializeState() {
  const color = this.props.color;
  super.initializeState();
  this._setupTransformFeedback();

  // Drop the LineLayer attributes we manage ourselves...
  const attributeManager = this.getAttributeManager();
  attributeManager!.remove([
    "instanceSourcePositions",
    "instanceTargetPositions",
    "instanceColors",
    "instanceWidths",
  ]);

  // ...then re-add the ones we use, describing their GPU buffer layout.
  attributeManager!.addInstanced({
    instanceSourcePositions: {
      size: 3,
      type: "float32",
      noAlloc: true,
    },
    instanceTargetPositions: {
      size: 3,
      type: "float32",
      noAlloc: true,
    },
    instanceColors: {
      size: 4,
      type: "float32",
      noAlloc: true,
      defaultValue: [color[0], color[1], color[2], color[3]],
    },
  });
}
Easy.
_setupTransformFeedback()
Here’s where we prepare the BufferTransform, what we knew in 8.x as simply the Transform. This function should only be run when it first loads and when you change any props, layers, etc. A lot of it should be familiar from your existing implementation, but I will note that deck.gl and luma.gl 9.x use a slightly different syntax for creating Buffers, as they’re now created on (and attached to) the device. You can see the old/new versions below:
// Old
const { gl } = this.context;
const sourcePositions = new Buffer(
  gl,
  new Float32Array(numInstances * 3)
);

// New
const sourcePositions = this.context.device.createBuffer(
  new Float32Array(numInstances * 3)
);
Next up is the actual BufferTransform. There are a few syntactical changes here which align more closely with how it’s used, as well as some extra fields to make sure we have the correct layout and can tell the device how the buffers will be used:
const transform = new BufferTransform(this.context.device, {
  attributes: {
    sourcePosition: sourcePositions,
  },
  bufferLayout: [
    {
      name: "sourcePosition",
      format: "float32x3",
    },
  ],
  feedbackBuffers: {
    targetPosition: targetPositions,
  },
  vs: shader,
  varyings: ["targetPosition"],
  modules: [bitmapUniforms],
  vertexCount: numParticles,
});
Main differences:
- sourceBuffers -> attributes.
- Specifying the buffer layout for the attributes.
- Explicitly stating that the targetPosition in our feedback buffer is a varying.
- Adding our module! That’ll come up again later.
- elementCount -> vertexCount.
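To round the function out, you then stash everything in state much like the 8.x version did. A sketch - the field names match the state type declared earlier, and width comes from the hypothetical props sketch above:

// colors would be a Buffer built from this.props.color, created the same
// way as the position buffers above.
this.setState({
  initialized: true,
  numInstances,
  numAgedInstances,
  sourcePositions,
  targetPositions,
  sourcePositions64Low: new Float32Array([0, 0, 0]), // constant; high precision not needed here
  targetPositions64Low: new Float32Array([0, 0, 0]),
  colors,
  widths: new Float32Array([this.props.width]),      // one constant width for every particle
  transform,
});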
draw()
We’ll continue running the standard model attached to ParticleLayer/LineLayer, with the main change being that instead of solely using model.setAttributes() we’ll set some of them as constant attributes:
model.setAttributes({
  instanceSourcePositions: sourcePositions,
  instanceTargetPositions: targetPositions,
  instanceColors: colors,
});
model.setConstantAttributes({
  instanceSourcePositions64Low: sourcePositions64Low,
  instanceTargetPositions64Low: targetPositions64Low,
  instanceWidths: widths,
});
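For context, here’s roughly where that sits inside draw(). This is a sketch modelled on the 8.x implementations linked above rather than anything deck.gl prescribes; stepRequested is our own flag from the state type:

draw(params: any) {
  const { initialized, stepRequested, model } = this.state;
  if (!initialized || !model) {
    return;
  }

  // Advance the simulation when a step has been requested (see the next section).
  if (stepRequested) {
    this._runTransformFeedback();
    this.state.stepRequested = false;
  }

  // ...the setAttributes / setConstantAttributes calls shown above go here...

  super.draw(params);
}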
_runTransformFeedback()
Cool, now we’re mucking around with our BufferTransform, so we’ll see more changes. First up is when we run the transform. We need to set the uniforms using our earlier typing (haha, foreshadowing) and then make them inputs to the shader on the model of our BufferTransform (which is different to the model on our layer, and you can see why this project drove me insane before I started getting results). I’m also setting a few options on the transform that will help prevent flickering when panning/zooming the map:
const modUniforms: UniformProps = {
  bitmapTexture: texture,
  viewportBounds: viewportBounds || [0, 0, 0, 0],
  viewportZoomChangeFactor: viewportZoomChangeFactor || 0,
  imageUnscale: imageUnscale || [0, 0],
  bounds,
  numParticles,
  maxAge,
  speedFactor: currentSpeedFactor,
  time,
  seed: Math.random(),
};
transform.model.shaderInputs.setProps({ bitmap: modUniforms });

transform.run({
  clearColor: false,
  clearDepth: false,
  clearStencil: false,
  depthReadOnly: true,
  stencilReadOnly: true,
});
Here you’ll notice I’ve stripped out all the stuff to check whether it’s a flat map or a globe. This is because I refuse to teach you — my loving children — that the Earth is round. Or because I just wanted to get an example working so I could post this and you can do that part yourself2.
Now usually we’d swap the buffers around, shifting the data slightly so we can “age out” the particles over 25 before we attend next year’s Oscars. But for the 9.x version of luma.gl things change because our buffers are on the GPU. We need to use a CommandEncoder to muck around with our memory:
const encoder = this.context.device.createCommandEncoder();
encoder.copyBufferToBuffer({
  sourceBuffer: sourcePositions,
  sourceOffset: 0,
  destinationBuffer: targetPositions,
  destinationOffset: numParticles * 4 * 3,
  size: numAgedInstances * 4 * 3,
});
encoder.finish();
encoder.destroy();

// Swap the buffers.
this.state.sourcePositions = targetPositions;
this.state.targetPositions = sourcePositions;
transform.model.setAttributes({
  sourcePosition: targetPositions,
});
transform.transformFeedback.setBuffers({
  targetPosition: sourcePositions,
});
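The last piece of plumbing is actually requesting those steps. That part hasn’t really changed from 8.x; a minimal sketch (step() and stepRequested are our own names from the state type above, not deck.gl API):

step() {
  this.state.stepRequested = true;
  this.setNeedsRedraw(); // draw() will run the transform feedback and clear the flag
}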
That should be everything for our ParticleLayer class. Only one major step left.
GLSL Shader
First off, you can just strip out all those uniforms from your old one; we now have them in our typed version above, which will be automatically prepended to the shader. Whenever you’d usually reference one of those uniforms, just prefix it with the module name: bitmap.numParticles.
The next important thing is to grab a cold one and spend some time having wheely-chair races around the office. Job done.
Wait Really?
Yeah. You can find the code for a working example on GitLab or GitHub. You can also check out a demo.
Shout out to @King_Owl for doing an initial read/tone check and @shmouflon for corrections (first post where I haven’t messed up it’s/its!). The views expressed in this post reflect the views held by their employers and also represent endorsements.
Honestly I had a lot of fun deep diving into WebGL, WebGPU3, as well as graphics shaders, which I hadn’t previously had much practical experience with. This experience really taught me so much about B2B sales.
1. It also helps any future extensions or maintainers not accidentally go against those expected uses. ↩︎
2. Or request a quote. ↩︎
3. I know I’ve got a few conceptual mistakes in here about how much is done on the GPU vs CPU, but hopefully the small mistakes are understandable and do not distract from the working code and correct opinions that you know and love me for. ↩︎