ISPRS CATCON 9 · 2026 · Submission

Observe the Earth.
Read the disaster.

A web-based teaching platform built on one real flood case — Sentinel-1A SAR, 26 November 2025, Banda Aceh. Every figure is a live model output; every step from raw backscatter to flood mask is reproducible in the browser.

Scene: Banda Aceh · 05°33′N 95°19′E
Sensor: Sentinel-1A · IW · VV+VH
Tiles: 49 · 224×224 px · KuroSiwo format
Model: U-Net · 3 flood configs · live GPU inference
VIIRS NOAA-20 optical view of Banda Aceh, 27 November 2025, almost entirely covered in cloud
A · Optical · VIIRS · 27 Nov 2025
What the eye gets — a cloud blur.
Sentinel-1A VV backscatter of Banda Aceh, 26 November 2025, flood visible as dark patches
B · Radar · Sentinel-1A · 26 Nov 2025
What the radar gets — the flood itself.
Above: Same 17 × 17 km patch of Banda Aceh (95.25–95.40 °E, 5.45–5.60 °N), two sensors, one week. On 27 November the VIIRS optical radiometer sees a near-total cloud deck — the human eye's view of the disaster, coarse and blind. The day before, Sentinel-1A's C-band radar looked through the same weather and recorded the flood directly as dark low-backscatter patches where calm water lay over the land. Everything that follows on this site is a study of panel B. A: NASA Worldview / VIIRS NOAA-20, Corrected Reflectance true colour · B: ESA Copernicus Sentinel-1A, IW VV polarisation · basemap watermark: Esri World Imagery.
/ MISSION · WHO THIS IS FOR

We don't show materials.
We teach how to read the Earth.

A teaching interface organized around workflow, not content piles. Every case walks a single path — from raw data, through method, to interpretation and discussion. Built for classroom demonstration, student practice, and CATCON competition review alike.

  1. Read SAR like a practitioner.
  2. Run an end-to-end flood pipeline.
  3. Question the hyperparameters.
  4. Discuss and iterate with instructors.
/ PRIMER

How SAR sees water.

Before we look at model predictions, spend ninety seconds with the underlying physics. Radar does not see colour or brightness the way a camera does — it measures how much of its own microwave pulse bounces straight back.

  1. Smooth water reflects the beam away from the satellite. It appears very dark in both VV and VH channels.
  2. Rough terrain and vegetation scatter in all directions. Some energy returns, giving the familiar grey speckle.
  3. Urban structures act as corner reflectors. They appear as intensely bright points — notice the city core.
Figure 1a · VV polarization

Vertical transmit, vertical receive. Most sensitive to smooth surfaces. Water = black, urban = bright.

Figure 1b · VH polarization

Vertical transmit, horizontal receive. Sensitive to volume scattering in vegetation.

Figure 1c · VV/VH/VV RGB

A common false-colour composite. Green highlights vegetation volume scattering.
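The three scattering regimes above are easiest to verify in decibels, where calm water drops well below typical land returns. A minimal numpy sketch, assuming a simple fixed threshold (the -18 dB cutoff is illustrative, not a setting used by this platform):

```python
import numpy as np

def to_db(sigma0_linear, eps=1e-6):
    """Convert linear backscatter (sigma0) to decibels."""
    return 10.0 * np.log10(np.maximum(sigma0_linear, eps))

def dark_water_mask(vv_linear, threshold_db=-18.0):
    """Flag pixels darker than a VV threshold as candidate open water."""
    return to_db(vv_linear) < threshold_db

# Synthetic 4-pixel strip: calm water, paddy, vegetation, building
vv = np.array([0.005, 0.05, 0.12, 1.8])
print(to_db(vv).round(1))      # water sits far below the other returns
print(dark_water_mask(vv))     # only the first pixel is flagged
```

A real flood mapper replaces the fixed threshold with a learned model, which is exactly what the rest of this page explores.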

/ TIMELINE

One place, three moments in time.

Three Sentinel-1A acquisitions of the same Aceh coast. Move left to right to follow the event: dry baseline, approaching weather, and co-event inundation.

01
2025-10-21
Pre-flood
Figure 2a · VV scene 2025-10-21
VV INTENSITY · SENTINEL-1A

Baseline. Dry paddy fields, full river discharge, normal coastal outline.

02
2025-11-02
Approach
Figure 2b · VV scene 2025-11-02
VV INTENSITY · SENTINEL-1A

Before landfall. Early moisture accumulation in low-lying areas visible as darker patches.

03
2025-11-26
Co-event
Figure 2c · VV scene 2025-11-26
VV INTENSITY · SENTINEL-1A

Peak inundation captured during ascending pass. Large dark zones mark new standing water.

All three scenes are processed with identical radiometric terrain correction (SNAP). What changes is the physical world on the ground, not the sensor.

● CASE-001 · FLOOD DETECTION · SENTINEL-1 SAR · showing pre-computed outputs
/ THE EXPERIMENT

How does input clamping change what a flood-detection model sees?

Compare the UNetRSMamba flood prediction under four input-scaling strategies, using the same Sentinel-1A scene over the Aceh coast. Each view below is a real pre-computed model output — click through the tabs and switch the configuration to see what changes.

  1. Look at the input — start on the Reference imagery tab. What does the coast actually look like in VV, VH, and RGB?
  2. Switch the configuration — pick a clamp and flip to Prediction or Overlay. Notice which pixels move from land to flood.
  3. Read the numbers — open Statistics. The flood-coverage bar on the right is what you'd hand to a disaster-response officer.
SCENE: 2025-11-26 · SATELLITE: Sentinel-1A · PIXELS: 2.80M
Figure set 3

Pairwise disagreement maps.

Each figure below is a three-panel comparison: prediction A, prediction B, and the pixel-wise difference. Red marks pixels classified as flood by A but not B. Blue marks the reverse. The further two configurations are apart, the noisier the difference panel.
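The red/blue coding described above can be sketched in a few lines of numpy. The integer encoding (0 = agree, 1 = red, 2 = blue) is an assumption for illustration, not the platform's internal format:

```python
import numpy as np

def disagreement(pred_a, pred_b):
    """Pixel-wise comparison of two binary flood masks.

    Returns a map coded 0 = agree, 1 = red (flood in A only),
    2 = blue (flood in B only), plus the agreement fraction.
    """
    a = pred_a.astype(bool)
    b = pred_b.astype(bool)
    diff = np.zeros(a.shape, dtype=np.uint8)
    diff[a & ~b] = 1   # red: A says flood, B says land
    diff[~a & b] = 2   # blue: B says flood, A says land
    agreement = (a == b).mean()
    return diff, agreement

a = np.array([[1, 1, 0], [0, 0, 0]])  # toy mask A
b = np.array([[1, 0, 0], [1, 0, 0]])  # toy mask B
diff, agr = disagreement(a, b)
# diff → [[0, 1, 0], [2, 0, 0]], agreement 4/6
```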

Figure 3a · Original vs. Recommended (original_vs_clamp03)

The 84.1% agreement number hides a story: most disagreement is blue (new flood that the training-time clamp missed entirely).

Figure 3b · Conservative vs. Recommended (clamp015_vs_clamp03)

94.6% agreement. A conservative clamp captures the main flood footprint but shrinks the extent.

Figure 3c · Recommended vs. Aggressive (clamp03_vs_clamp05)

90.5% agreement. An aggressive clamp starts dropping real floods while over-estimating permanent water.

Teaching note

Why one hyperparameter matters in SAR flood mapping.

Input clamping bounds the SAR backscatter before normalization. A tight clamp (0.15) truncates bright returns and under-detects floods. A loose clamp (0.5) preserves bright reflectors but washes out the water signature. The sweet spot at 0.3 balances information loss against signal-to-noise — and more than doubles the recovered flood area compared to the training-time setting.
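The clamp-then-normalize step the note describes can be sketched as follows, assuming a simple divide-by-ceiling normalization (the pipeline's exact scaling may differ):

```python
import numpy as np

def clamp_normalize(x, clamp):
    """Clip backscatter to [0, clamp], then rescale to [0, 1].

    Values above `clamp` are truncated to the ceiling before the
    tile is fed to the network.
    """
    return np.clip(x, 0.0, clamp) / clamp

patch = np.array([0.02, 0.10, 0.28, 0.45, 1.30])  # toy linear VV samples
print(clamp_normalize(patch, 0.15))  # tight: the top three saturate at 1.0
print(clamp_normalize(patch, 0.30))  # recommended: flood tail survives
print(clamp_normalize(patch, 0.50))  # loose: dark water compressed near 0
```

The trade-off is visible even on five samples: each larger clamp spreads bright pixels apart but squeezes the dark water end of the range.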

/ BEHIND THE CURTAIN · See the full pipeline that produced these maps · 6 stages · SNAP → tiles → stats → UNetRSMamba → validation · including a scrubbable training trajectory
Agreement with recommended (clamp = 0.3):
Original · 84.1%
clamp = 0.15 · 94.6%
clamp = 0.3 · 100.0%
clamp = 0.5 · 90.5%
/ BEHIND THE CURTAIN · HANDS ON

Everything above came from a pipeline.
Scroll — 4 of its stages are clickable, right here.

Instead of reading how it works, run it. Sample a random KuroSiwo tile and see the 7 bands. Drag the clamp slider and watch truncation live. Press RUN and a 224² patch rides a Cloudflare Tunnel to a WSL + RTX 5070 inference server. Click any cell of the agreement matrix and the real disagreement figure fades in.

/ STAGE 02 · INTERACTIVE · TILE EXPLORER

Sample one 224² tile and see what's inside.

Each KuroSiwo-format tile directory packs 6 GeoTIFFs: VV + VH at three acquisition times — pre-event 1 (21 Oct, baseline), pre-event 2 (2 Nov, approach) and co-event (26 Nov, main flood scene). Press Sample to load a random tile from the …-tile Banda Aceh test split. Each click round-trips to the WSL box in ~1 s.

pre1 · VV · SL1_IVV
pre2 · VV · SL2_IVV
co-event · VV · MS1_IVV
pre1 · VH · SL1_IVH
pre2 · VH · SL2_IVH
co-event · VH · MS1_IVH
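The six bands feed the network as one stacked tensor. A self-contained sketch, with file reading mocked by a callback and the channel order an assumption for illustration (a real reader would open each GeoTIFF in the tile directory):

```python
import numpy as np

# Band codes taken from the tile listing above; the order the model
# actually expects is an assumption here.
BANDS = ["SL1_IVV", "SL2_IVV", "MS1_IVV", "SL1_IVH", "SL2_IVH", "MS1_IVH"]

def stack_tile(read_band, size=224):
    """Assemble one (6, size, size) model input from a per-band reader."""
    return np.stack([read_band(code, size) for code in BANDS], axis=0)

# I/O mocked with zeros so the sketch stays self-contained.
x = stack_tile(lambda code, size: np.zeros((size, size), dtype=np.float32))
assert x.shape == (6, 224, 224)
```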
/ STAGE 03 · INTERACTIVE · CLAMP PLAYGROUND

Drag the clamp, watch the model's view of Banda Aceh change.

Every bar is a VH backscatter bucket. Everything to the right of your clamp value gets clipped to the ceiling — identical to the model. Find the clamp that keeps the flood tail visible without drowning in speckle. This one runs entirely in your browser.

clamp = 0.300 · truncated 17.6% · post-clamp mean 0.1884 · 7.14× KuroSiwo μ
Goldilocks · most of the flood tail survives, noise still manageable.
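The truncated-% and post-clamp-mean readouts can be reproduced directly from a band's pixel values. A numpy sketch on toy samples (the values are illustrative, not the scene's):

```python
import numpy as np

def clamp_stats(vh, clamp):
    """Reproduce the playground readouts for one clamp setting."""
    truncated = float((vh > clamp).mean())           # share of pixels hitting the ceiling
    post_mean = float(np.minimum(vh, clamp).mean())  # mean of the clipped band
    return truncated, post_mean

vh = np.array([0.05, 0.12, 0.25, 0.31, 0.80])  # toy VH samples
t, m = clamp_stats(vh, 0.3)
# t → 0.4 (two of five pixels exceed the 0.3 ceiling)
```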
/ STAGE 04 · INTERACTIVE · INFERENCE STATION

Same config, two ways to see it.

Pick a clamp configuration, then flip the switch between CACHED (the figure that ships with the paper) and LIVE (a real forward pass on the WSL GPU right now). Same model, same weights — LIVE just feeds a fresh 224×224 test tile through it on demand.

CONFIG
INPUT · Reference · RGB (reference SAR composite)
OUTPUT · Prediction · 3-class map (cached prediction)
FLOOD 10.43% · WATER 23.59% · LAND 65.98% · REGIONS 4,697
from prediction_report.json · full scene
● CACHED · static figure identical to the homepage showcase
/ STAGE 05 · INTERACTIVE · AGREEMENT MATRIX

Click any pair — see where the models actually disagree.

No ground truth exists for Banda Aceh on 2025-11-26, so we triangulate: measure the pixel-for-pixel agreement between every pair of configurations. Low numbers are not wrong — they're the teaching signal. Clicking a cell pulls up the real disagreement map.
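The triangulation idea can be sketched as a pairwise agreement matrix over class maps. The toy configs below are hypothetical stand-ins for the real outputs:

```python
import numpy as np

def agreement_matrix(preds):
    """Pairwise pixel-for-pixel agreement between configurations.

    `preds` maps a config name to its class map; with no ground
    truth, the matrix itself is the object of study.
    """
    names = list(preds)
    n = len(names)
    mat = np.eye(n)  # each config agrees 100% with itself
    for i in range(n):
        for j in range(i + 1, n):
            agr = (preds[names[i]] == preds[names[j]]).mean()
            mat[i, j] = mat[j, i] = agr
    return names, mat

# Toy 2x2 class maps (0 = land, 1 = water, 2 = flood) for two configs
preds = {
    "clamp03": np.array([[0, 1], [1, 2]]),
    "clamp05": np.array([[0, 1], [0, 2]]),
}
names, mat = agreement_matrix(preds)
# mat[0, 1] → 0.75 (three of four pixels match)
```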

The Team

A small team, a focused contribution to remote sensing education.

The platform is built by a compact group combining remote sensing research, teaching practice, and web engineering — optimizing for reproducibility and classroom fit rather than scale.

Principal Investigator · Associate Professor

Wei Yuan

IRIDeS, Tohoku University

Research interests in photogrammetry, remote sensing, computer vision and machine learning. Works on aerial/satellite image processing, DEM/DSM generation, stereo matching and change detection for urban monitoring and disaster science.

/ CONTRIBUTIONS
  • 01Concept Development and Team Lead
  • 02Workflow Design and System Integration
  • 03Methodological Review and Validation
Ph.D. Candidate · Model & Data Engineering

Zhongyuan Yang

IRIDeS, Tohoku University

Ph.D. candidate at the International Research Institute of Disaster Science. Focus on AI-driven disaster analysis, flood mapping and depth estimation, integrating multi-source remote sensing with physical modelling.

/ CONTRIBUTIONS
  • 01Sentinel-1A GRD pre-processing pipeline
  • 02UNet-RSMamba training and inference
  • 03Reference-mask generation and validation
  • 04Live GPU inference endpoint and operations
Ph.D. Candidate · AI Methodology

Weihang Ran

OSCARS Lab, The University of Tokyo

Ph.D. candidate at the Graduate School of Information Science and Technology. Research covers adversarial robustness, deepfake detection, privacy-preserving ML and other AI-safety topics.

/ CONTRIBUTIONS
  • 01Model behavior and robustness evaluation
  • 02Platform validation and testing
  • 03Critical review of machine-learning methodology
Teaching Support

Teaching Support & Feedback

Not just a contact form — this page is part of the platform's pedagogical loop: students can ask questions, instructors can collect course needs, and partners can propose collaborations.

Question

For concrete technical and methodological questions students encounter in practice.

Data Request

To request sample datasets, practice materials, or case support.

Collaboration

For course collaboration, case co-authoring, or thematic expansion discussions.

Submit an Inquiry

Submissions are sent to the backend /inquiries endpoint and stored privately in the platform's D1 feedback database.
Submissions are stored privately in the platform feedback database.