Code Guide

How each example works, where to make changes, and things to try.

How to Run Any Example

  1. Unzip the folder somewhere on your computer
  2. Open the folder in VS Code
  3. Right-click any index.html and choose Open with Live Server
  4. Allow camera access when your browser asks

No Live Server? Install it from the VS Code marketplace. Or run npx serve . in a terminal.

The Examples

01 Presence Detect

The screen wakes up when you sit in front of the camera and fades to dark when you leave. Your face is the sensor.

This is a prototype of a lamp, heater, or door that knows you're there. The browser stands in for the real object — you're sketching the sensing, not the form.

Where to modify

Open index.html and find the CONFIGURATION section near the top.

WARM_COLOR — the glow color when you're present
COOL_COLOR — the dark empty-room color
TRANSITION_SPEED — how fast the room responds (try 500 for snappy, 4000 for calm)
CLOSENESS_THRESHOLD — how close you need to be for "focused" mode
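As a rough sketch, the CONFIGURATION block might look like the following. The constant names come from this guide; the values and the roomColor helper are placeholders for illustration (your index.html will differ):

```javascript
// Hypothetical sketch of the CONFIGURATION section (values are examples only).
const WARM_COLOR = "#ffb347";        // glow color when you're present
const COOL_COLOR = "#0b0e1a";        // dark empty-room color
const TRANSITION_SPEED = 2000;       // ms; 500 feels snappy, 4000 feels calm
const CLOSENESS_THRESHOLD = 0.35;    // relative face size that counts as "focused"

// The core decision is simple: presence picks the color, and the
// transition speed only affects how quickly the fade happens.
function roomColor(facePresent) {
  return facePresent ? WARM_COLOR : COOL_COLOR;
}
```

Because the response lives in one place like this, you can retheme the whole lab by editing two hex values.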
Things to try
02 Face Awareness

Three zones on screen respond to where you're looking. Turn your head left, center, or right and the matching zone lights up. A golden dot follows your gaze.

This is a prototype of a display, sign, or light that follows your attention. Imagine a classroom where the board brightens when students look at it and dims when they don't.

Where to modify

Find the ZONES array in the config.

ZONES — change zone names, icons, colors, and response text
SENSITIVITY — how far you need to turn your head (lower = more sensitive)
SHOW_FACE_MESH — set to false to hide the 478 landmark dots
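A plausible shape for the ZONES array, plus how SENSITIVITY might gate which zone lights up. The zone entries and the zoneForYaw helper are illustrative placeholders, not the lab's actual code:

```javascript
// Hypothetical ZONES sketch — names, icons, colors, and responses are examples.
const SENSITIVITY = 0.15;  // lower = a smaller head turn triggers a side zone

const ZONES = [
  { name: "Left",   icon: "L", color: "#5b8dee", response: "The left lamp wakes." },
  { name: "Center", icon: "C", color: "#ffd166", response: "The board brightens." },
  { name: "Right",  icon: "R", color: "#ef476f", response: "The right shelf glows." },
];

// Mapping a normalized head yaw (-1 = hard left, +1 = hard right) to a zone:
function zoneForYaw(yaw) {
  if (yaw < -SENSITIVITY) return 0;  // looking left
  if (yaw > SENSITIVITY)  return 2;  // looking right
  return 1;                          // roughly centered
}
```

Lowering SENSITIVITY widens the side zones, which is why the guide says lower means more sensitive.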
Things to try
03 Gesture Response

Hold up your hand and the room responds. Open palm, closed fist, pointing, peace sign — each triggers a different light color. A hand skeleton shows the 21 tracked points.

This is a prototype of a door, appliance, or control surface that reads your hands. No buttons, no touchscreen — the gesture is the interface.

Where to modify

Find the GESTURES object.

GESTURES — change which color and label each gesture triggers
TRANSITION_SPEED — how fast the light changes
DRAW_HAND_SKELETON — set false to hide the hand overlay
DISCORD_WEBHOOK — paste a webhook URL to post events to Discord
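The GESTURES object is likely a simple lookup from gesture name to response. A hedged sketch — the gesture names follow this guide, but the colors, labels, and the responseFor helper are placeholders:

```javascript
// Hypothetical GESTURES sketch — swap colors and labels for your own design.
const GESTURES = {
  open_palm:   { color: "#ffd166", label: "Lights warm" },
  closed_fist: { color: "#1a1a2e", label: "Lights off" },
  pointing:    { color: "#5b8dee", label: "Spotlight" },
  peace:       { color: "#06d6a0", label: "Calm mode" },
};

// Unknown gestures fall back to a neutral state instead of crashing.
function responseFor(gesture) {
  return GESTURES[gesture] ?? { color: "#222222", label: "No gesture" };
}
```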
Things to try
04 OCR Reader

Hold up any printed text — a book, a sticky note, your phone — and the camera reads it. Words get yellow bounding boxes and the recognized text appears on the right.

This is a prototype of a smart shelf, whiteboard, or package scanner — any object that needs to read the world around it.

First load downloads a ~15MB model. After that it's cached and loads instantly.

Where to modify

Find the CONFIGURATION section.

OCR_LANGUAGE — 'eng', 'spa', 'fra', 'deu', 'jpn' — try different languages
AUTO_SCAN_INTERVAL — how often it auto-reads (ms)
MIN_CONFIDENCE — raise to 60 for clean results, lower to 10 to see everything
DISCORD_WEBHOOK — paste a webhook URL to post detected text to Discord
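A sketch of how these settings might fit together. The constant names come from the guide; the values and the keepConfident helper are illustrative assumptions:

```javascript
// Hypothetical OCR CONFIGURATION sketch (example values only).
const OCR_LANGUAGE = "eng";        // 'spa', 'fra', 'deu', 'jpn' also work
const AUTO_SCAN_INTERVAL = 3000;   // ms between automatic reads
const MIN_CONFIDENCE = 40;         // 0-100; raise for cleaner output

// MIN_CONFIDENCE is just a filter over recognized words, each of which
// carries a confidence score from the OCR engine:
function keepConfident(words) {
  return words.filter(w => w.confidence >= MIN_CONFIDENCE);
}
```

Raising MIN_CONFIDENCE to 60 drops the noisy guesses; dropping it to 10 shows nearly everything the engine saw.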
Things to try
05 Teachable Machine

Train your own ML model with zero code, then paste the link here and the room responds to whatever you trained.

This is the most open-ended lab — you decide what the room detects. You're not using someone else's model, you're defining the categories yourself. That's a design decision.

How to train
  1. Go to teachablemachine.withgoogle.com and create 2–4 classes (e.g., "Thumbs Up", "Open Palm", "Nothing")
  2. Record ~30 samples per class using your webcam
  3. Click Train Model (~30 seconds)
  4. Click Export Model, then Upload, then copy the shareable link
  5. Paste the URL in this lab and click Load Model
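The shareable link from step 4 points at a folder containing the exported model files, which is how the lab can load it from a single URL. A sketch, assuming the standard Teachable Machine export layout (the toModelFiles helper is hypothetical; tmImage is the @teachablemachine/image library):

```javascript
// Hypothetical helper: derive the two standard export files from the
// shareable link (Teachable Machine exports model.json + metadata.json).
function toModelFiles(shareUrl) {
  const base = shareUrl.endsWith("/") ? shareUrl : shareUrl + "/";
  return { modelURL: base + "model.json", metadataURL: base + "metadata.json" };
}

// In the browser, loading and predicting might then look like:
//   const { modelURL, metadataURL } = toModelFiles(pastedUrl);
//   const model = await tmImage.load(modelURL, metadataURL);
//   const predictions = await model.predict(webcamCanvas);
```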
Where to modify

Find CLASS_RESPONSES in the config.

CLASS_RESPONSES — map each class name to an icon, color, and label
MIN_CONFIDENCE — how confident the model needs to be before triggering (0.5 = loose, 0.9 = strict)
DISCORD_WEBHOOK — paste a webhook URL to share detections
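A sketch of the mapping and the confidence gate. The keys must match your Teachable Machine class names exactly; everything else here (values, topClass helper) is an illustrative assumption:

```javascript
// Hypothetical CLASS_RESPONSES sketch — keys must match your trained
// class names character for character.
const MIN_CONFIDENCE = 0.8;  // 0.5 = loose, 0.9 = strict
const CLASS_RESPONSES = {
  "Thumbs Up": { icon: "+", color: "#06d6a0", label: "Approved" },
  "Open Palm": { icon: "=", color: "#ffd166", label: "Pause" },
  "Nothing":   { icon: "-", color: "#111111", label: "Idle" },
};

// Each prediction carries { className, probability }; the room only
// reacts when the winner clears MIN_CONFIDENCE.
function topClass(predictions) {
  const best = predictions.reduce((a, b) => (b.probability > a.probability ? b : a));
  return best.probability >= MIN_CONFIDENCE ? best.className : null;
}
```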
Things to try
06 Figurate Chat
Requires server

Talk to an AI character using text or voice. Pick a character, type a message or hold the mic button and speak, and the character responds with personality and synthesized voice.

This is a prototype of any object that speaks. A narrating room, a talking appliance, a guide that greets you at a door. The character's personality is the design variable.

Your instructor will provide the server URL and login credentials.

Things to try
07 Smart Room Voice
Requires server

Everything combined. Your webcam detects faces and gestures (Labs 01–03), and an AI character narrates what it sees with voice (Lab 06). Walk in: "Someone just arrived." Show a peace sign: "Ah, the calm gesture." Leave: "The room is empty."

This is the full sketch — a room that sees and speaks. The character's personality changes the entire feeling. A calm narrator makes it meditative. An excitable one makes it unsettling. Same tech, different design.

Same server setup as Lab 06.

Where to modify
EVENT_COOLDOWN — how often the character comments (default 8000ms, try 4000 for chatty or 15000 for calm)

You can also toggle face detection and hand detection on/off using the buttons in the UI.
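A cooldown like EVENT_COOLDOWN is usually a simple timestamp gate. A minimal sketch (the maySpeak helper is hypothetical; the lab's actual code may differ):

```javascript
// Hypothetical cooldown gate: the character only comments if enough
// time has passed since its last remark.
const EVENT_COOLDOWN = 8000;  // ms; 4000 feels chatty, 15000 feels calm

let lastCommentAt = -Infinity;
function maySpeak(now) {
  if (now - lastCommentAt < EVENT_COOLDOWN) return false;
  lastCommentAt = now;
  return true;
}
```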

Things to try

Demos

Three standalone pages for exploring the raw detection tech.

Demo — What it shows

Presence Mirror — Full-screen webcam with toggles for face mesh, hand landmarks, and bounding boxes. See what the computer sees.
Attention Heatmap — Tracks where you look over time and builds a color heatmap. Red = looked there a lot. Blue = rarely.
Gesture Space — Control a virtual 6-zone room (lights, display, audio, blinds, climate, door) with hand gestures.

Where to Go From Here

Combine labs.

The most interesting projects come from mixing these together.

Connect to Discord.

Labs 03, 04, and 05 support Discord webhooks. Paste your webhook URL in the config and your laptop becomes a sensor that reports to the shared class channel.
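Under the hood, a Discord webhook is just a JSON POST to the URL you pasted. A minimal sketch (buildDiscordPayload is a hypothetical helper; Discord webhooks accept a content field of up to 2000 characters):

```javascript
// Hypothetical helper: wrap an event description in the payload shape
// Discord webhooks expect ({ content: "..." }, max 2000 characters).
function buildDiscordPayload(event) {
  return { content: String(event).slice(0, 2000) };
}

// In the browser, the lab might send it like:
//   fetch(DISCORD_WEBHOOK, {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(buildDiscordPayload("peace sign detected")),
//   });
```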

Modify the detection logic.

Every lab has a function that does the core work — updateRoom, classifyGesture, etc. Read through it, understand the flow, and change how decisions get made.
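To give a feel for what such a function looks like, here is a simplified gesture classifier in the spirit of Lab 03's classifyGesture — a sketch, not the lab's actual code. It assumes MediaPipe's 21-landmark hand topology (fingertips at indices 8/12/16/20, the joint below each at 6/10/14/18, y growing downward in image space) and ignores the thumb for simplicity:

```javascript
// Hypothetical simplified classifyGesture: count extended fingers.
// A finger is "extended" when its tip sits above the joint below it.
function classifyGesture(landmarks) {
  const tips = [8, 12, 16, 20];
  const pips = [6, 10, 14, 18];
  const extended = tips.filter((tip, i) => landmarks[tip].y < landmarks[pips[i]].y).length;
  if (extended === 0) return "closed_fist";
  if (extended === 1) return "pointing";
  if (extended === 2) return "peace";
  return "open_palm";
}
```

Changing how decisions get made can be as small as adding a new branch here, or as large as replacing finger counting with angles between joints.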

Design new responses.

All the visual responses (colors, icons, text) are in the config section at the top of each file. You can change how the room reacts without touching the detection code.

Ask design questions.

Each lab folder has its own README with deeper design questions — especially around privacy, consent, and what makes a space feel aware vs. creepy.
