AprilTag Generator

Generate fiducial markers for your smart classroom objects

    Using AprilTags in Your Smart Classroom

    What Are AprilTags?

    AprilTags are visual fiducial markers, similar to QR codes, but designed specifically for robotics and computer vision. They can be detected reliably from various angles and distances, making them perfect for identifying objects in a smart space.

    Key advantages:

    • Unique ID: Each tag has a unique ID (0-586 in the tag36h11 family)
    • Position + Rotation: Camera can determine the tag's 3D pose
    • Robust: Works with partial occlusion and various lighting
    • Fast: Detection runs at 30+ FPS on modest hardware

    Step 1: Generate and Print Tags

    Use the "Generate Tags" tab to create tags. Each tag needs a unique ID. Print them at a consistent size (we recommend 5-10cm for classroom objects).

    Tip: Print on matte paper to reduce glare. Laminate for durability but avoid glossy lamination.
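    To hit a consistent physical size, scale the generated tag image to the right pixel count for your printer. A quick sketch, assuming a 300 DPI printer (the DPI value is an assumption -- check your printer settings):

```python
def print_size_px(size_cm: float, dpi: int = 300) -> int:
    """Convert a desired physical tag width to pixels at a given DPI."""
    inches = size_cm / 2.54      # centimetres to inches
    return round(inches * dpi)   # pixels at the printer's resolution

# Scale the tag image to this many pixels on each side before printing:
print(print_size_px(5))    # 5 cm tag
print(print_size_px(10))   # 10 cm tag
```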

    Step 2: Configure Tag Meanings

    Use the "Tag Config" tab to define what each tag ID represents. The JSON structure (the // comments are for illustration only; remove them in the actual file, since JSON does not allow comments):

    {
      "tags": {
        "0": {
          "name": "Lamp A",      // Human-readable name
          "type": "lamp",        // Category for styling/behavior
          "zone": "desk"         // Spatial grouping
        }
      },
      "types": {
        "lamp": {
          "color": "#fbbf24",    // Display color
          "icon": "💡"            // Visual indicator
        }
      },
      "zones": {
        "desk": { "x": 0.2, "y": 0.5 }  // Normalized coordinates
      }
    }

    Step 3: Attach Tags to Objects

    Place tags on objects you want to track. Guidelines:

    • Keep tags flat and unobstructed
    • Place where camera can see them
    • Avoid wrapping around curved surfaces
    • Maintain the white border around the tag

    Step 4: Detect Tags with Python

    Install the apriltag library:

    pip install apriltag opencv-python

    Basic detection code:

    import cv2
    import apriltag
    import json
    
    # Load your config
    with open('tag-config.json') as f:
        config = json.load(f)
    
    # Initialize detector
    detector = apriltag.Detector(apriltag.DetectorOptions(families="tag36h11"))
    
    # Open camera
    cap = cv2.VideoCapture(0)
    
    while True:
        ret, frame = cap.read()
        if not ret:
            break  # camera disconnected or frame grab failed
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    
        # Detect tags
        results = detector.detect(gray)
    
        for r in results:
            tag_id = str(r.tag_id)
            center = r.center  # (x, y) in pixels
    
            # Look up in config
            if tag_id in config['tags']:
                tag_info = config['tags'][tag_id]
                print(f"Detected: {tag_info['name']} at {center}")
    
            # Draw on frame
            cv2.circle(frame, (int(center[0]), int(center[1])), 10, (0,255,0), -1)
    
        cv2.imshow('AprilTag Detection', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    
    cap.release()
    cv2.destroyAllWindows()
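    Detected centers typically jitter a little from frame to frame. A simple fix is an exponential moving average per tag ID -- a sketch, assuming centers arrive as (x, y) tuples like r.center above (the TagSmoother class and alpha value are illustrative, not part of the apriltag library):

```python
class TagSmoother:
    """Exponentially smooth per-tag center positions across frames."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha   # 0 < alpha <= 1; lower = smoother but laggier
        self.state = {}      # tag_id -> last smoothed (x, y)

    def update(self, tag_id, center):
        x, y = center
        if tag_id in self.state:
            sx, sy = self.state[tag_id]
            x = self.alpha * x + (1 - self.alpha) * sx
            y = self.alpha * y + (1 - self.alpha) * sy
        self.state[tag_id] = (x, y)
        return self.state[tag_id]

smoother = TagSmoother(alpha=0.5)
print(smoother.update(0, (100.0, 200.0)))  # first sample passes through
print(smoother.update(0, (110.0, 200.0)))  # later samples are blended
```

    Call smoother.update(r.tag_id, r.center) inside the detection loop instead of using r.center directly.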

    Step 5: Use Position Data

    The detected position is in screen coordinates (pixels). To use it for your smart classroom:

    # Convert screen coords to normalized (0-1)
    frame_height, frame_width = frame.shape[:2]
    norm_x = center[0] / frame_width
    norm_y = center[1] / frame_height
    
    # Now you can use norm_x, norm_y for your interaction logic
    # Example: trigger different zones
    if norm_x < 0.33:
        zone = "left"
    elif norm_x > 0.66:
        zone = "right"
    else:
        zone = "center"
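    Instead of hard-coded thirds, the normalized position can also be matched against the "zones" block of your config. A sketch, assuming zone coordinates are normalized (0-1) as in tag-config.json above (the "board" zone is added here for illustration):

```python
import math

zones = {
    "desk":  {"x": 0.2, "y": 0.5},
    "board": {"x": 0.8, "y": 0.3},
}

def nearest_zone(zones: dict, norm_x: float, norm_y: float) -> str:
    """Return the name of the configured zone closest to a tag position."""
    return min(
        zones,
        key=lambda name: math.hypot(zones[name]["x"] - norm_x,
                                    zones[name]["y"] - norm_y),
    )

print(nearest_zone(zones, 0.25, 0.5))   # closest to "desk"
print(nearest_zone(zones, 0.9, 0.2))    # closest to "board"
```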

    Integrating with WebSocket

    To send tag data to your web demos, add a WebSocket server:

    import asyncio
    import websockets
    import json
    
    async def send_tags(websocket):
        while True:
            # ... detection code ... (sets results, frame_width, frame_height)
            data = {
                "tags": [
                    {
                        "id": str(r.tag_id),
                        "name": config['tags'][str(r.tag_id)]['name'],
                        "x": r.center[0] / frame_width,
                        "y": r.center[1] / frame_height
                    }
                    for r in results
                    if str(r.tag_id) in config['tags']
                ]
            }
            await websocket.send(json.dumps(data))
            await asyncio.sleep(0.033)  # ~30 FPS
    
    async def main():
        async with websockets.serve(send_tags, "localhost", 8765):
            await asyncio.Future()  # run forever
    
    asyncio.run(main())
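    On the receiving side, it is worth validating each message before acting on it. A sketch of a consumer-side parser, assuming the message format produced by send_tags() above (the parse_tag_message helper is illustrative, not part of the websockets library):

```python
import json

def parse_tag_message(raw: str) -> list:
    """Parse a broadcast message, keeping only well-formed tag entries."""
    data = json.loads(raw)
    tags = []
    for t in data.get("tags", []):
        if not {"id", "name", "x", "y"} <= t.keys():
            continue                        # skip entries missing fields
        if not (0 <= t["x"] <= 1 and 0 <= t["y"] <= 1):
            continue                        # positions should be normalized
        tags.append(t)
    return tags

raw = '{"tags": [{"id": "0", "name": "Lamp A", "x": 0.4, "y": 0.6}]}'
print(parse_tag_message(raw))
```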

    Next Steps

    • Experiment with different tag sizes for your space
    • Calibrate camera position for accurate world coordinates
    • Build rules that respond to tag positions
    • Combine with other sensors for richer interactions