# Quickstart

## First WSI Render in Minutes

Start from image metadata, normalize it into a viewer source, then wire up tiles, points, and ROIs.
## 1. Setup

```bash
npm install
npm run dev
```
## 2. Load and normalize image info

```ts
import { normalizeImageInfo, toBearerToken } from "open-plant";

const IMAGE_ID = "69898975a05a9cd9ec6fe311";
const token = "<access-token>";

const res = await fetch(
  `https://your-api.example.com/api/v4/images/${IMAGE_ID}/info`,
  { headers: { Authorization: toBearerToken(token) } },
);
if (!res.ok) throw new Error(`HTTP ${res.status}`);
const raw = await res.json();

const TILE_BASE_URL = "https://your-s3-bucket.example.com/ims";
const source = normalizeImageInfo(raw, TILE_BASE_URL);
const pointZstUrl = raw?.mvtPath || "";
```
## 3. Minimal viewer mount

```tsx
import { useState } from "react";
import { WsiViewerCanvas } from "open-plant";

function Viewer({ source, token }) {
  const [viewState, setViewState] = useState();
  const [rotationResetNonce, setRotationResetNonce] = useState(0);
  const [selectedRoiId, setSelectedRoiId] = useState(null);

  return (
    <WsiViewerCanvas
      source={source}
      authToken={token}
      viewState={viewState}
      onViewStateChange={setViewState}
      ctrlDragRotate
      rotationResetNonce={rotationResetNonce}
      activeRegionId={selectedRoiId}
      onActiveRegionChange={setSelectedRoiId}
      onPointerWorldMove={(event) => {
        // event.coordinate -> [x, y] | null
      }}
      onStats={(stats) => console.log(stats)}
      style={{ width: "100vw", height: "100vh" }}
    />
  );
}
```
## 4. Camera bounds/transition + tile color

```tsx
<WsiViewerCanvas
  source={source}
  imageColorSettings={{
    brightness: 0,
    contrast: 0,
    saturation: 0,
  }}
  minZoom={0.25}
  maxZoom={1}
  viewTransition={{ duration: 300 }}
  autoLiftRegionLabelAtMaxZoom
/>
```
Color settings are applied to the tile shader only; point/cell markers, ROIs, and the draw overlay keep their original colors.
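As a mental model for what the tile shader does with these settings, here is a CPU-side sketch of standard per-pixel color math. This is illustrative only: `adjustPixel` is a hypothetical helper, and the library's actual shader formulas may differ.

```typescript
type ColorSettings = { brightness: number; contrast: number; saturation: number };

// Channels in 0..1; each setting in -1..1 with 0 meaning "no change".
function adjustPixel(
  [r, g, b]: [number, number, number],
  { brightness, contrast, saturation }: ColorSettings,
): [number, number, number] {
  const clamp = (v: number) => Math.min(1, Math.max(0, v));
  // Rec. 709 luma of the original pixel, used as the desaturation target.
  const luma = 0.2126 * r + 0.7152 * g + 0.0722 * b;
  return [r, g, b].map((c) => {
    let v = c + brightness;               // brightness: additive shift
    v = (v - 0.5) * (1 + contrast) + 0.5; // contrast: scale around mid-gray
    v = luma + (v - luma) * (1 + saturation); // saturation: mix toward/away from luma
    return clamp(v);
  }) as [number, number, number];
}
```

With all three settings at 0 the function is an identity, which matches the neutral defaults shown above.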
```tsx
<WsiViewerCanvas
  source={source}
  zoomSnaps={[1.25, 2.5, 5, 10, 20, 40]}
  zoomSnapFitAsMin
/>
```
`zoomSnaps` values are magnification steps (e.g. 20 means 20x); the viewer normalizes them against `source.mpp`.
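For intuition, one plausible normalization looks like this. It assumes the common convention that a 40x scan is about 0.25 µm/px (so magnification × mpp ≈ 10); `magnificationToZoom` is a hypothetical helper, and the library's internal math may differ.

```typescript
// Convert a magnification step (e.g. 40 for 40x) into a zoom scale,
// given the slide's microns-per-pixel value.
function magnificationToZoom(magnification: number, mpp: number): number {
  const scanMagnification = 10 / mpp; // e.g. mpp = 0.25 → scanned at ~40x
  return magnification / scanMagnification; // zoom = 1 at native magnification
}
```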
## 5. Provide point data and build palette

Point loading and parsing (e.g. ZST/MVT decoding) is not part of the library. Parse points externally, then pass typed arrays to the viewer.

```ts
import { buildTermPalette } from "open-plant";

// positions: Float32Array [x0, y0, x1, y1, ...] (your own loader)
// paletteIndices: Uint16Array (mapped from your term table)
const termPalette = buildTermPalette(source.terms);

const pointData = {
  count: positions.length / 2,
  positions,
  paletteIndices,
  fillModes, // optional Uint8Array (0: ring, 1: solid)
};
```
```tsx
<WsiViewerCanvas
  source={source}
  authToken={toBearerToken(token)}
  pointData={pointData}
  pointPalette={termPalette.colors}
/>
```
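Since the typed arrays come from your own loader, it is worth validating their lengths before handing them to the viewer. A small sketch (`makePointData` and the `PointData` interface are hypothetical helpers mirroring the shape shown above, not part of the library):

```typescript
interface PointData {
  count: number;
  positions: Float32Array;     // [x0, y0, x1, y1, ...]
  paletteIndices: Uint16Array; // one palette index per point
  fillModes?: Uint8Array;      // optional, 0: ring, 1: solid
}

// Assemble the pointData shape and fail fast on mismatched array lengths.
function makePointData(
  positions: Float32Array,
  paletteIndices: Uint16Array,
  fillModes?: Uint8Array,
): PointData {
  const count = positions.length / 2;
  if (!Number.isInteger(count)) throw new Error("positions must hold x/y pairs");
  if (paletteIndices.length !== count) throw new Error("paletteIndices length mismatch");
  if (fillModes && fillModes.length !== count) throw new Error("fillModes length mismatch");
  return { count, positions, paletteIndices, fillModes };
}
```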
## 6. ROI polygon clipping

```tsx
<WsiViewerCanvas
  source={source}
  pointData={pointData}
  pointPalette={termPalette.colors}
  roiRegions={[
    {
      id: "roi-1",
      label: "Tumor Core",
      coordinates: [
        [12000, 14000],
        [18000, 14000],
        [18000, 20000],
        [12000, 20000],
        [12000, 14000],
      ],
    },
  ]}
  clipPointsToRois
/>
```
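Conceptually, `clipPointsToRois` keeps only the points whose coordinates fall inside an ROI ring. A minimal ray-casting sketch of that test (illustrative only; `pointInPolygon` is a hypothetical helper, and the library's actual clipping paths are covered in step 8):

```typescript
// Classic ray-casting point-in-polygon test: cast a horizontal ray from
// (x, y) and count how many polygon edges it crosses; an odd count means
// the point is inside. Works with or without a duplicated closing vertex.
function pointInPolygon(x: number, y: number, ring: [number, number][]): boolean {
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    const crosses =
      yi > y !== yj > y &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}
```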
## 7. ROI term stats callback

```tsx
<WsiViewerCanvas
  source={source}
  pointData={pointData}
  roiRegions={regions}
  roiPaletteIndexToTermId={new Map([[1, "negative"], [2, "positive"]])}
  onRoiPointGroups={(stats) => {
    // stats.groups -> per-ROI term counts
  }}
/>
```
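For intuition, the aggregation behind `onRoiPointGroups` amounts to counting points per term id via the palette-index → term-id map. A sketch (`countTerms` is a hypothetical helper; the exact shape of `stats` is defined by the library):

```typescript
// Count points per term id for one ROI's points, using a map like the
// roiPaletteIndexToTermId prop above. Unmapped palette indices are skipped.
function countTerms(
  paletteIndices: Uint16Array,
  paletteIndexToTermId: Map<number, string>,
): Map<string, number> {
  const counts = new Map<string, number>();
  for (const idx of paletteIndices) {
    const termId = paletteIndexToTermId.get(idx);
    if (termId === undefined) continue;
    counts.set(termId, (counts.get(termId) ?? 0) + 1);
  }
  return counts;
}
```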
## 8. ROI acceleration mode (worker / hybrid-webgpu)

```tsx
import { getWebGpuCapabilities } from "open-plant";

const caps = await getWebGpuCapabilities();
const clipMode = caps.supported ? "hybrid-webgpu" : "worker";

<WsiViewerCanvas
  source={source}
  pointData={pointData}
  pointPalette={termPalette.colors}
  roiRegions={regions}
  clipPointsToRois
  clipMode={clipMode} // "sync" | "worker" | "hybrid-webgpu"
  onClipStats={(stats) => {
    console.log(stats.mode, stats.durationMs, stats.outputCount);
  }}
/>
```
Recommended default: `clipMode="worker"`. Switch to `hybrid-webgpu` only after measuring with your real dataset.
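One way to make that decision from data: collect `stats.durationMs` samples from `onClipStats` under each mode, then compare medians. The helper and its 20% threshold are hypothetical, not part of open-plant; tune them to your own latency budget.

```typescript
// Pick a clip mode from measured durations. Only switch to WebGPU when it
// is clearly faster (here: median at least 20% below the worker median),
// to justify the extra complexity of the hybrid path.
function pickClipMode(
  workerMs: number[],
  webgpuMs: number[],
): "worker" | "hybrid-webgpu" {
  const median = (xs: number[]) =>
    [...xs].sort((a, b) => a - b)[Math.floor(xs.length / 2)];
  return median(webgpuMs) < median(workerMs) * 0.8 ? "hybrid-webgpu" : "worker";
}
```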