Code Guide
How each example works, where to make changes, and things to try.
How to Run Any Example
- Unzip the folder somewhere on your computer
- Open the folder in VS Code
- Right-click any index.html and choose Open with Live Server
- Allow camera access when your browser asks
No Live Server? Install it from the VS Code marketplace. Or run npx serve . in a terminal.
The Examples
Lab 01: Presence
The screen wakes up when you sit in front of the camera and fades to dark when you leave. Your face is the sensor.
This is a prototype of a lamp, heater, or door that knows you're there. The browser stands in for the real object — you're sketching the sensing, not the form.
Where to modify
Open index.html and find the CONFIGURATION section near the top.
WARM_COLOR — the glow color when you're present
COOL_COLOR — the dark empty-room color
TRANSITION_SPEED — how fast the room responds (try 500 for snappy, 4000 for calm)
CLOSENESS_THRESHOLD — how close you need to be for "focused" mode
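Put together, the CONFIGURATION section might look something like this sketch — the names come from the list above, but the values here are illustrative, not the file's actual defaults:

```javascript
// Illustrative values — names match the config list above,
// but your file's defaults may differ.
const WARM_COLOR = '#ffb347';        // glow color when a face is present
const COOL_COLOR = '#1a1a2e';        // dark empty-room color
const TRANSITION_SPEED = 2000;       // ms for the fade between states
const CLOSENESS_THRESHOLD = 0.25;    // face size (fraction of frame) for "focused"
```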
Things to try
- Change WARM_COLOR to '#ff6b6b' for a red glow or '#4ecdc4' for teal
- Set TRANSITION_SPEED to 500 — does instant response feel better or worse?
- What if you added a third state? (hint: check the updateRoom function)
- Add a sound effect when someone arrives
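The third-state idea could be sketched like this. The function name comes from the hint above, but the faceSize input, thresholds, and state names are assumptions, not the lab's actual signature:

```javascript
// Hypothetical sketch of a three-state updateRoom.
// `faceSize` is assumed to be the detected face's fraction of the frame
// (0 when nobody is visible); the thresholds are illustrative.
function updateRoom(faceSize) {
  if (faceSize === 0) return 'empty';      // nobody in frame → cool color
  if (faceSize < 0.25) return 'present';   // someone is in the room
  return 'focused';                        // leaning in close
}
```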
Lab 02: Attention
Three zones on screen respond to where you're looking. Turn your head left, center, or right and the matching zone lights up. A golden dot follows your gaze.
This is a prototype of a display, sign, or light that follows your attention. Imagine a classroom where the board brightens when students look at it and dims when they don't.
Where to modify
Find the ZONES array in the config.
ZONES — change zone names, icons, colors, and response text
SENSITIVITY — how far you need to turn your head (lower = more sensitive)
SHOW_FACE_MESH — set to false to hide the 478 landmark dots
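A sketch of what the ZONES array might contain — the field names and entries here are assumptions; check the actual array in the config for the real shape:

```javascript
// Illustrative ZONES entries — field names are assumptions.
const ZONES = [
  { name: 'Left',   icon: '🪟', color: '#4ecdc4', response: 'Looking left'  },
  { name: 'Center', icon: '🖥️', color: '#ffd166', response: 'Looking ahead' },
  { name: 'Right',  icon: '🚪', color: '#ff6b6b', response: 'Looking right' },
];
const SENSITIVITY = 0.2;       // lower = zones trigger with smaller head turns
const SHOW_FACE_MESH = true;   // false hides the 478 landmark dots
```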
Things to try
- Change the zone labels to match a real room ("Window", "Board", "Door")
- Set SENSITIVITY to 0.1 — very twitchy. Try 0.3 — very deliberate. Which is better?
- Add a fourth zone by expanding the grid and the detection logic
- What if the zones you're NOT looking at dimmed instead?
Lab 03: Gesture
Hold up your hand and the room responds. Open palm, closed fist, pointing, peace sign — each triggers a different light color. A hand skeleton shows the 21 tracked points.
This is a prototype of a door, appliance, or control surface that reads your hands. No buttons, no touchscreen — the gesture is the interface.
Where to modify
Find the GESTURES object.
GESTURES — change which color and label each gesture triggers
TRANSITION_SPEED — how fast the light changes
DRAW_HAND_SKELETON — set false to hide the hand overlay
DISCORD_WEBHOOK — paste a webhook URL to post events to Discord
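The GESTURES object probably maps gesture names to responses along these lines — the keys and fields below are assumptions based on the four gestures the lab describes:

```javascript
// Illustrative GESTURES mapping — keys and field names are assumptions.
const GESTURES = {
  open_palm: { color: '#ffd166', label: 'Lights on'  },
  fist:      { color: '#1a1a2e', label: 'Lights off' },
  point:     { color: '#4ecdc4', label: 'Spotlight'  },
  peace:     { color: '#b388ff', label: 'Calm mode'  },
};
const TRANSITION_SPEED = 800;     // ms for the light color change
const DRAW_HAND_SKELETON = true;  // false hides the 21-point overlay
const DISCORD_WEBHOOK = '';       // paste a webhook URL to post events
```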
Things to try
- Change the colors for each gesture — make fist = '#ff0000' for red alert
- Set TRANSITION_SPEED to 300 for snappy or 2000 for smooth
- Try adding a new gesture by modifying the classifyGesture function (look at how it checks which fingers are extended)
- Connect to Discord and see your gestures appear in a shared channel
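The "which fingers are extended" idea can be sketched like this. The boolean-array input is an assumption about how finger state might be represented — the real classifyGesture derives it from the 21 hand landmarks:

```javascript
// Hypothetical classifier: `fingers` is [thumb, index, middle, ring, pinky],
// true = extended. The real function computes this from landmark positions.
function classifyGesture(fingers) {
  const count = fingers.filter(Boolean).length;
  const [, index, middle, ring, pinky] = fingers;
  if (count === 5) return 'open_palm';
  if (count === 0) return 'fist';
  if (index && middle && !ring && !pinky) return 'peace';
  if (index && !middle && !ring && !pinky) return 'point';
  return 'unknown';
}
```

A new gesture is just another branch: e.g. thumb + pinky only could become 'call_me'.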
Lab 04: OCR
Hold up any printed text — a book, a sticky note, your phone — and the camera reads it. Words get yellow bounding boxes and the recognized text appears on the right.
This is a prototype of a smart shelf, whiteboard, or package scanner — any object that needs to read the world around it.
First load downloads a ~15MB model. After that it's cached and loads instantly.
Where to modify
Find the CONFIGURATION section.
OCR_LANGUAGE — 'eng', 'spa', 'fra', 'deu', 'jpn' — try different languages
AUTO_SCAN_INTERVAL — how often it auto-reads (ms)
MIN_CONFIDENCE — raise to 60 for clean results, lower to 10 to see everything
DISCORD_WEBHOOK — paste a webhook URL to post detected text to Discord
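The CONFIGURATION section might look like this sketch — names from the list above, values illustrative:

```javascript
// Illustrative values — names match the config list above.
const OCR_LANGUAGE = 'eng';        // Tesseract language code: 'spa', 'fra', 'deu', 'jpn'
const AUTO_SCAN_INTERVAL = 3000;   // ms between automatic reads
const MIN_CONFIDENCE = 40;         // drop words the model is unsure about
const DISCORD_WEBHOOK = '';        // optional: post detected text to Discord
```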
Things to try
- Change OCR_LANGUAGE to 'spa' and hold up Spanish text
- Lower MIN_CONFIDENCE to 10 to see how much garbled text the model detects
- Good lighting and high contrast (dark text on white paper) make a big difference
- What if detected text triggered a room response? (e.g., reading "LIGHTS" changes the background)
Lab 05: Teachable Machine
Train your own ML model with zero code, then paste the link here and the room responds to whatever you trained.
This is the most open-ended lab — you decide what the room detects. You're not using someone else's model; you're defining the categories yourself. That's a design decision.
How to train
- Go to teachablemachine.withgoogle.com and create 2–4 classes (e.g., "Thumbs Up", "Open Palm", "Nothing")
- Record ~30 samples per class using your webcam
- Click Train Model (~30 seconds)
- Click Export Model, then Upload, then copy the shareable link
- Paste the URL in this lab and click Load Model
Where to modify
Find CLASS_RESPONSES in the config.
CLASS_RESPONSES — map each class name to an icon, color, and label
MIN_CONFIDENCE — how confident the model needs to be before triggering (0.5 = loose, 0.9 = strict)
DISCORD_WEBHOOK — paste a webhook URL to share detections
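CLASS_RESPONSES likely maps class names to responses along these lines — a sketch, with assumed field names. The keys must match your Teachable Machine class names exactly:

```javascript
// Illustrative mapping — keys must exactly match the class names
// you typed into Teachable Machine; icon/color/label are assumptions.
const CLASS_RESPONSES = {
  'Thumbs Up': { icon: '👍', color: '#4ecdc4', label: 'Approved!' },
  'Open Palm': { icon: '✋', color: '#ffd166', label: 'Hold on'   },
  'Nothing':   { icon: '…',  color: '#1a1a2e', label: 'Waiting'   },
};
const MIN_CONFIDENCE = 0.8;  // 0.5 = loose matching, 0.9 = strict
```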
Things to try
- Train a model to recognize objects on your desk (coffee mug, phone, book)
- Train it on poses (hand raised, leaning back, looking at phone)
- Change the icons and colors in CLASS_RESPONSES to match your custom classes
- What happens with bad training data? Try training with only 5 samples vs. 50
Lab 06: Figurate Chat
Requires server
Talk to an AI character using text or voice. Pick a character, type a message or hold the mic button and speak, and the character responds with personality and synthesized voice.
This is a prototype of any object that speaks. A narrating room, a talking appliance, a guide that greets you at a door. The character's personality is the design variable.
Your instructor will provide the server URL and login credentials.
Things to try
- Have a 5-turn conversation and see how the character remembers context
- Try different characters — each has a unique voice and personality
- Hold the mic button for 2–5 seconds, speak clearly, then release
- Ask the character about itself — it responds based on its personality config
- Compare how the same question feels typed vs. spoken
Lab 07: Everything Combined
Requires server
Your webcam detects faces and gestures (Labs 01–03), and an AI character narrates what it sees with voice (Lab 06). Walk in: "Someone just arrived." Show a peace sign: "Ah, the calm gesture." Leave: "The room is empty."
This is the full sketch — a room that sees and speaks. The character's personality changes the entire feeling. A calm narrator makes it meditative. An excitable one makes it unsettling. Same tech, different design.
Same server setup as Lab 06.
Where to modify
EVENT_COOLDOWN — how often the character comments (default 8000ms, try 4000 for chatty or 15000 for calm)
You can also toggle face detection and hand detection on/off using the buttons in the UI.
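A cooldown gate like EVENT_COOLDOWN typically works along these lines — a hypothetical sketch, not the lab's actual code:

```javascript
// Hypothetical cooldown gate: events inside the window are dropped
// so the character doesn't comment on every frame.
const EVENT_COOLDOWN = 8000; // ms between narrated comments
let lastEventTime = 0;

function maybeNarrate(event, now) {
  if (now - lastEventTime < EVENT_COOLDOWN) return false; // too soon, skip
  lastEventTime = now;
  // ...here the real code would send `event` to the character...
  return true;
}
```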
Things to try
- Pick different characters and see how the same events feel completely different
- Turn off hand detection — now it only narrates presence and attention
- Lower EVENT_COOLDOWN to 4000 — the room won't stop talking
- Toggle auto-narrate off and type directly to the character while the webcam runs
- Design your own character in the Flowstate dashboard and use it as the room's voice
Demos
Three standalone pages for exploring the raw detection tech.
| Demo | What it shows | |
| --- | --- | --- |
| Presence Mirror | Full-screen webcam with toggles for face mesh, hand landmarks, and bounding boxes. See what the computer sees. | Try it → |
| Attention Heatmap | Tracks where you look over time and builds a color heatmap. Red = looked there a lot. Blue = rarely. | Try it → |
| Gesture Space | Control a virtual 6-zone room (lights, display, audio, blinds, climate, door) with hand gestures. | Try it → |
Where to Go From Here
Combine labs.
The most interesting projects come from mixing these together:
- Presence (01) + OCR (04) = "if someone is here AND holding a sign, read it"
- Gesture (03) + Teachable Machine (05) = built-in gestures + your custom ones
- Any sensor lab + Figurate Chat (06) = a character that reacts to what the room senses
Connect to Discord.
Labs 03, 04, and 05 support Discord webhooks. Paste your webhook URL in the config and your laptop becomes a sensor that reports to the shared class channel.
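Under the hood, posting to a webhook is a single HTTP request. A minimal sketch, assuming Discord's standard webhook payload (the function name here is hypothetical; the labs' actual helper may differ):

```javascript
// Minimal sketch of posting an event to a Discord webhook.
// Discord's webhook endpoint accepts a JSON body with a `content` field.
async function postToDiscord(webhookUrl, message) {
  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content: message }),
  });
}

// e.g. postToDiscord(DISCORD_WEBHOOK, 'Gesture detected: peace sign');
```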
Modify the detection logic.
Every lab has a function that does the core work — updateRoom, classifyGesture, etc. Read through it, understand the flow, and change how decisions get made.
Design new responses.
All the visual responses (colors, icons, text) are in the config section at the top of each file. You can change how the room reacts without touching the detection code.
Ask design questions.
Each lab folder has its own README with deeper design questions — especially around privacy, consent, and what makes a space feel aware vs. creepy.
← Back to Week 6