How to Use WebGPU for Real-Time 3D Rendering in the Browser


#webgpu

#graphics

#browser

Introduction

WebGPU is the modern path to real-time graphics and compute in the browser. It gives you closer-to-metal access to the GPU than WebGL, with a clean, strongly typed API and a shading language (WGSL) designed for safety and performance. In this guide you’ll learn how to set up WebGPU, define a simple 3D mesh (a cube), write WGSL shaders, configure a render pipeline, and render the cube rotating in real time.

Prerequisites

  • A browser with WebGPU support. Chrome and Edge enable it by default on most desktop platforms; Firefox and Safari support it in recent releases or behind feature flags.
  • A local web server, since WebGPU is only available in secure contexts (HTTPS or localhost); something like python3 -m http.server or npx serve is enough.
  • Basic familiarity with modern JavaScript and GPU concepts like buffers, shaders, and pipelines.

If you’re new to WebGPU, start with a small check like the one below to confirm your environment is configured correctly before wiring up a full scene.
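
For instance, a quick check might look like this (a minimal sketch; note that requestAdapter() can resolve to null even when navigator.gpu exists, for example on blocklisted hardware):

Code: environment check

async function checkWebGPU() {
  if (!navigator.gpu) {
    console.warn("WebGPU is not supported in this browser.");
    return false;
  }
  // requestAdapter() resolves to null when no suitable GPU is available.
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    console.warn("WebGPU is supported, but no adapter was returned.");
    return false;
  }
  console.log("WebGPU is ready.");
  return true;
}

checkWebGPU();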

The WebGPU Pipeline

Key concepts you’ll work with:

  • GPUDevice: your connection to the GPU.
  • GPUBuffer: memory buffers on the GPU for vertex data, indices, and uniforms.
  • GPUShaderModule: compiled WGSL shader code.
  • GPURenderPipeline: the program that runs on the GPU for rendering.
  • Bind groups and bind group layouts: how you bind resources (uniforms, textures) to the pipeline.
  • Command encoding and render passes: how you issue draw calls to the GPU.

A high-level flow:

  • Acquire a GPUAdapter and then a GPUDevice.
  • Create a canvas context configured for WebGPU.
  • Create buffers for geometry and uniforms (e.g., a projection-view-model matrix).
  • Write WGSL vertex and fragment shaders.
  • Create a render pipeline with vertex buffers and shader stages.
  • In a render loop, update uniforms (like the MVP matrix), encode draw commands, and submit them; the per-frame part is sketched below.
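
Condensed into code, the per-frame half of that flow looks roughly like this (a sketch only; these objects are created in the full example of the next section):

Code: per-frame flow (sketch)

function renderFrame() {
  // One command encoder per frame; one render pass targeting the canvas.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(),
      loadOp: "clear",
      storeOp: "store",
    }],
  });
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.setVertexBuffer(0, vertexBuffer);
  pass.setIndexBuffer(indexBuffer, "uint16");
  pass.drawIndexed(36);
  pass.end();
  device.queue.submit([encoder.finish()]);
}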

A Minimal Rotating Cube: Step-by-Step

Below is a compact, self-contained skeleton you can adapt. It renders a rotating colored cube using WGSL shaders, a simple MVP (model-view-projection) matrix, and a basic render loop.

Code: HTML bootstrap (index.html)

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <title>WebGPU Real-Time 3D Cube</title>
  <style>
    html, body { margin: 0; height: 100%; background: #0b0e14; }
    canvas { width: 100%; height: 100%; display: block; }
  </style>
</head>
<body>
  <canvas id="gpuCanvas"></canvas>
  <script type="module" src="./main.js"></script>
</body>
</html>

Code: WGSL shaders (shaders.wgsl)

// Vertex input: position (vec3) and color (vec3)
struct VertexInput {
  @location(0) position: vec3<f32>,
  @location(1) color: vec3<f32>,
}

// Vertex output: clip-space position and interpolated color
struct VertexOutput {
  @builtin(position) pos: vec4<f32>,
  @location(0) color: vec3<f32>,
}

// Uniform block: a 4x4 MVP matrix
struct MVP {
  mvp: mat4x4<f32>,
}
@group(0) @binding(0) var<uniform> ubo: MVP;

// WGSL struct members are separated by commas, and two entry points in the
// same module need distinct names; the pipeline references vs_main/fs_main.
@vertex
fn vs_main(input: VertexInput) -> VertexOutput {
  var out: VertexOutput;
  out.pos = ubo.mvp * vec4<f32>(input.position, 1.0);
  out.color = input.color;
  return out;
}

@fragment
fn fs_main(input: VertexOutput) -> @location(0) vec4<f32> {
  return vec4<f32>(input.color, 1.0);
}

Code: JavaScript (main.js)

async function init() {
  if (!navigator.gpu) {
    console.error("WebGPU not supported in this browser.");
    return;
  }

  // Canvas setup
  const canvas = document.getElementById("gpuCanvas");
  const context = canvas.getContext("webgpu");

  // Adapter, device, and canvas configuration
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    console.error("No GPUAdapter available.");
    return;
  }
  const device = await adapter.requestDevice();
  const format = navigator.gpu.getPreferredCanvasFormat();
  context.configure({ device, format });

  // Shaders
  const wgslCode = await fetch("./shaders.wgsl").then(r => r.text());
  const shaderModule = device.createShaderModule({ code: wgslCode });

  // Cube geometry (positions and colors interleaved)
  // 8 vertices with position (x,y,z) and color (r,g,b)
  const vertices = new Float32Array([
    // positions           // colors
    -1,-1,-1,  1,0,0, // 0
     1,-1,-1,  0,1,0, // 1
     1, 1,-1,  0,0,1, // 2
    -1, 1,-1,  1,1,0, // 3
    -1,-1, 1,  0,1,1, // 4
     1,-1, 1,  1,0,1, // 5
     1, 1, 1,  1,1,1, // 6
    -1, 1, 1,  0.5,0.5,0.5 // 7
  ]);
  const indices = new Uint16Array([
    // back face (z = -1)
    0,1,2, 0,2,3,
    // front face (z = +1)
    4,5,6, 4,6,7,
    // bottom face (y = -1)
    0,1,5, 0,5,4,
    // top face (y = +1)
    2,3,7, 2,7,6,
    // right face (x = +1)
    1,2,6, 1,6,5,
    // left face (x = -1)
    0,3,7, 0,7,4
  ]);

  // Vertex and index buffers on the GPU
  const vertexBuffer = device.createBuffer({
    size: vertices.byteLength,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST
  });
  device.queue.writeBuffer(vertexBuffer, 0, vertices.buffer, vertices.byteOffset, vertices.byteLength);

  const indexBuffer = device.createBuffer({
    size: indices.byteLength,
    usage: GPUBufferUsage.INDEX | GPUBufferUsage.COPY_DST
  });
  device.queue.writeBuffer(indexBuffer, 0, indices.buffer, indices.byteOffset, indices.byteLength);

  // Bind group for MVP uniform
  const uniformBufferSize = 16 * 4; // one mat4x4<f32>: 16 floats * 4 bytes = 64 bytes
  const uniformBuffer = device.createBuffer({
    size: uniformBufferSize,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
  });

  const bindGroupLayout = device.createBindGroupLayout({
    entries: [{ binding: 0, visibility: GPUShaderStage.VERTEX, buffer: { type: "uniform" } }]
  });

  const bindGroup = device.createBindGroup({
    layout: bindGroupLayout,
    entries: [{ binding: 0, resource: { buffer: uniformBuffer } }]
  });

  // Pipeline
  const pipeline = device.createRenderPipeline({
    layout: device.createPipelineLayout({ bindGroupLayouts: [bindGroupLayout] }),
    vertex: {
      module: shaderModule,
      entryPoint: "main",
      buffers: [{
        arrayStride: 6 * 4,
        attributes: [
          { shaderLocation: 0, offset: 0, format: "float32x3" }, // position
          { shaderLocation: 1, offset: 3 * 4, format: "float32x3" }, // color
        ]
      }]
    },
    fragment: {
      module: shaderModule,
      entryPoint: "main",
      targets: [{ format }]
    },
    primitive: { topology: "triangle-list" },
    primitiveState: undefined,
  });

  // Helpers: matrices (all column-major, matching WGSL's mat4x4 layout)
  function perspective(aspect, fov, near, far) {
    // WebGPU clip space uses a [0, 1] depth range, unlike WebGL's [-1, 1].
    const f = 1.0 / Math.tan((fov * Math.PI) / 180 / 2);
    const nf = 1 / (near - far);
    const out = new Float32Array(16);
    out[0] = f / aspect;
    out[5] = f;
    out[10] = far * nf;
    out[11] = -1;
    out[14] = far * near * nf;
    return out;
  }

  function lookAt(eye, center, up) {
    const z = normalize(subtract(eye, center));
    const x = normalize(cross(up, z));
    const y = cross(z, x);

    const out = new Float32Array(16);
    out[0] = x[0]; out[1] = y[0]; out[2] = z[0]; out[3] = 0;
    out[4] = x[1]; out[5] = y[1]; out[6] = z[1]; out[7] = 0;
    out[8] = x[2]; out[9] = y[2]; out[10] = z[2]; out[11] = 0;
    out[12] = -dot(x, eye); out[13] = -dot(y, eye); out[14] = -dot(z, eye); out[15] = 1;
    return out;
  }

  // out = a * b for column-major 4x4 matrices (element (r, c) at index c*4 + r)
  function multiply(a, b) {
    const out = new Float32Array(16);
    for (let c = 0; c < 4; c++) {
      for (let r = 0; r < 4; r++) {
        out[c * 4 + r] =
          a[0 * 4 + r] * b[c * 4 + 0] +
          a[1 * 4 + r] * b[c * 4 + 1] +
          a[2 * 4 + r] * b[c * 4 + 2] +
          a[3 * 4 + r] * b[c * 4 + 3];
      }
    }
    return out;
  }

  function normalize(v) {
    const len = Math.hypot(v[0], v[1], v[2]);
    return [v[0] / len, v[1] / len, v[2] / len];
  }

  function cross(a, b) {
    return [
      a[1] * b[2] - a[2] * b[1],
      a[2] * b[0] - a[0] * b[2],
      a[0] * b[1] - a[1] * b[0]
    ];
  }

  function subtract(a, b) {
    return [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
  }

  function dot(a, b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  }

  // Resize handling
  function fitCanvasToDisplaySize() {
    const dpr = Math.max(1, window.devicePixelRatio || 1);
    const w = canvas.clientWidth;
    const h = canvas.clientHeight;
    if (canvas.width !== w * dpr || canvas.height !== h * dpr) {
      canvas.width = w * dpr;
      canvas.height = h * dpr;
    }
  }

  // Render loop
  let t0 = performance.now();
  function frame(now) {
    fitCanvasToDisplaySize();
    const aspect = canvas.width / canvas.height;

    // Time-based rotation
    const t = (now - t0) / 1000;
    const eye = [0, 0, 5];
    const center = [0, 0, 0];
    const up = [0, 1, 0];
    const view = lookAt(eye, center, up);
    const proj = perspective(aspect, 45, 0.1, 100);

    // Model: rotation about the Y axis (column-major)
    const angle = t;
    const cos = Math.cos(angle);
    const sin = Math.sin(angle);
    const model = new Float32Array([
      cos, 0, sin, 0,
      0,   1, 0,   0,
     -sin, 0, cos, 0,
      0,   0, 0,   1,
    ]);

    const mvp = multiply(proj, multiply(view, model));
    device.queue.writeBuffer(uniformBuffer, 0, mvp.buffer);

    // Command encoding
    const commandEncoder = device.createCommandEncoder();
    const pass = commandEncoder.beginRenderPass({
      colorAttachments: [{
        view: context.getCurrentTexture().createView(),
        clearValue: { r: 0.05, g: 0.05, b: 0.1, a: 1.0 },
        loadOp: "clear",
        storeOp: "store",
      }]
    });

    pass.setPipeline(pipeline);
    pass.setVertexBuffer(0, vertexBuffer);
    pass.setIndexBuffer(indexBuffer, "uint16");
    pass.setBindGroup(0, bindGroup);
    pass.drawIndexed(36); // 36 indices = 12 triangles; plain draw() would bypass the index buffer
    pass.end();

    const commandBuffer = commandEncoder.finish();
    device.queue.submit([commandBuffer]);

    requestAnimationFrame(frame);
  }

  requestAnimationFrame(frame);
}

init();

Notes and tips

  • This is a compact, self-contained example. In a real project you’ll likely split the shaders, buffers, and pipeline setup into modules for maintainability.
  • The MVP math can be replaced with a dedicated math library if you prefer (see the sketch below); the key is that the vertex shader receives a column-major 4x4 matrix that transforms model space to clip space.
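
For example, with the gl-matrix library (one option among many; this assumes it is installed and importable, and reuses aspect, t, device, and uniformBuffer from the example above), the hand-rolled helpers collapse to a few calls. Note mat4.perspectiveZO, which targets WebGPU's [0, 1] clip-space depth range:

Code: MVP with gl-matrix (sketch)

import { mat4 } from "gl-matrix";

// Build the same MVP matrix as the hand-rolled helpers above.
const proj = mat4.perspectiveZO(mat4.create(), (45 * Math.PI) / 180, aspect, 0.1, 100);
const view = mat4.lookAt(mat4.create(), [0, 0, 5], [0, 0, 0], [0, 1, 0]);
const model = mat4.rotateY(mat4.create(), mat4.create(), t); // identity rotated by t radians

const mvp = mat4.create();
mat4.multiply(mvp, proj, view);
mat4.multiply(mvp, mvp, model);
device.queue.writeBuffer(uniformBuffer, 0, mvp);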

Debugging and Performance Tips

  • Feature detection: Check for navigator.gpu and handle environments without WebGPU gracefully.
  • Fallbacks: Provide a WebGL2 path or degrade gracefully when a user’s browser doesn’t support WebGPU.
  • GPU timing: Use browser profiling tools (the Performance API, or WebGPU timestamp queries where the "timestamp-query" feature is available) to identify bottlenecks in buffer updates or shader workloads.
  • Resource management: Reuse buffers when possible; avoid frequent buffer re-creations inside the render loop.
  • Shaders: Start with simple shaders and small meshes; progressively add lighting, textures, or more complex materials as you verify each piece.
  • Precision and formats: Use the recommended canvas format from the browser (navigator.gpu.getPreferredCanvasFormat()) to maximize compatibility and performance.
  • Debugging shaders: If compilation fails, check WGSL syntax against official samples and make sure your input/output locations and types match between stages; getCompilationInfo() can surface the compiler’s messages, as shown below.
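
One way to surface WGSL compiler diagnostics (a sketch; message formatting varies by browser) is to await getCompilationInfo() right after creating the module:

Code: surfacing shader compile errors (sketch)

const shaderModule = device.createShaderModule({ code: wgslCode });

// getCompilationInfo() resolves with any errors, warnings, and infos
// the WGSL compiler produced for this module.
const info = await shaderModule.getCompilationInfo();
for (const msg of info.messages) {
  const where = `line ${msg.lineNum}:${msg.linePos}`;
  if (msg.type === "error") {
    console.error(`WGSL error at ${where}: ${msg.message}`);
  } else {
    console.warn(`WGSL ${msg.type} at ${where}: ${msg.message}`);
  }
}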

Deployment and Next Steps

  • Expand your scene: add more objects, textures, and lighting. Implement a basic camera with user controls (orbit, pan, zoom).
  • Add a depth buffer: configure a depth texture and enable depth testing so nearer faces correctly occlude farther ones; without it, triangles simply overwrite each other in draw order. A sketch follows this list.
  • Lighting models: implement basic Phong or physically-based lighting in WGSL, possibly with normal maps.
  • Asset pipelines: bring in models from common formats (glTF) and convert them into GPU buffers for rendering.
  • Performance: profile and optimize; look into instancing for rendering many objects efficiently.
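
As a starting point for the depth-buffer item above, here is a sketch against the cube example (assuming the same device, canvas, and commandEncoder; the depth texture must be recreated whenever the canvas is resized):

Code: adding a depth buffer (sketch)

// 1. A depth texture matching the canvas size (recreate it on resize).
const depthTexture = device.createTexture({
  size: [canvas.width, canvas.height],
  format: "depth24plus",
  usage: GPUTextureUsage.RENDER_ATTACHMENT,
});

// 2. Depth state for the pipeline: pass this as `depthStencil`
//    in the createRenderPipeline descriptor, next to `primitive`.
const depthStencilState = {
  format: "depth24plus",
  depthWriteEnabled: true,
  depthCompare: "less",
};

// 3. Attach the depth texture to each render pass.
const pass = commandEncoder.beginRenderPass({
  colorAttachments: [/* ...as before... */],
  depthStencilAttachment: {
    view: depthTexture.createView(),
    depthClearValue: 1.0,
    depthLoadOp: "clear",
    depthStoreOp: "store",
  },
});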

Conclusion

WebGPU unlocks more direct and efficient access to the GPU for real-time 3D rendering in the browser. With a solid understanding of the render pipeline, WGSL shaders, MVP transforms, and a clean render loop, you can build interactive, high-performance graphics experiences that run across modern browsers. Start with a simple rotating cube to validate your setup, then incrementally add complexity like lighting, textures, and more advanced rendering techniques.