
EyeSwitch
Control your computer with your eyes. EyeSwitch uses real-time gaze tracking to replace mouse navigation with hands-free, frictionless interaction.
EyeSwitch rethinks the fundamental assumption behind human-computer interaction — that you need a keyboard or mouse to navigate software. Using TensorFlow.js for real-time facial landmark detection and the Canvas API for rendering feedback, it tracks gaze and translates it into navigation commands with minimal perceptible latency. The system is intentionally designed to disappear: when it works well, there's no interface — just intent and response. A lightweight CLI layer handles calibration and configuration without requiring a GUI wrapper.
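As a rough illustration of the "gaze to command" translation step, here is a minimal sketch in TypeScript. The region layout, thresholds, and command names are assumptions for demonstration only, not EyeSwitch's actual command set: a normalized gaze point is bucketed into screen-edge regions that trigger directional navigation, while dwelling in the center acts as selection.

```typescript
// Illustrative sketch only: map a normalized gaze point (0..1 on each
// axis) to a coarse navigation command. Region names and the `edge`
// threshold are hypothetical, not EyeSwitch's real configuration.
type GazePoint = { x: number; y: number };
type NavCommand = "left" | "right" | "up" | "down" | "select";

function gazeToCommand(p: GazePoint, edge = 0.2): NavCommand {
  if (p.x < edge) return "left";
  if (p.x > 1 - edge) return "right";
  if (p.y < edge) return "up";
  if (p.y > 1 - edge) return "down";
  return "select"; // gaze resting in the central region
}

console.log(gazeToCommand({ x: 0.05, y: 0.5 })); // "left"
console.log(gazeToCommand({ x: 0.5, y: 0.5 }));  // "select"
```

In a real pipeline the input point would come from the landmark-detection stage each frame; keeping this mapping a pure function makes it trivial to unit-test and to retune during calibration.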
Why I built this
I kept wondering what interaction looks like when you remove the physical layer entirely. Keyboards and mice are tools we've normalized — but they're not inevitable. EyeSwitch started as a genuine question: what if the only input required was attention?
Use case
Valuable for accessibility research, HCI exploration, and environments where traditional input devices are impractical or unavailable. It demonstrates that gaze-based control can feel natural and responsive — not gimmicky — when the signal processing is done right.
What I learned
Removing friction is architecturally harder than adding features. Every added affordance — a hover state, a transition, a confirmation prompt — is friction in disguise. The entire design had to be measured by a single question: does this make the interface more invisible?
Where I got stuck
Raw gaze data is noisy. The difference between an intentional fixation and a casual glance lives in milliseconds of signal pattern, not in absolute position. Building a smoothing layer that stayed responsive without reintroducing jitter required extensive tuning of prediction windows and confidence thresholds.
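One common shape for this kind of layer, sketched here under stated assumptions (this is not EyeSwitch's actual filter, and all names and constants are illustrative), is an exponential moving average to damp per-frame jitter plus a dwell counter that only confirms a fixation after the smoothed point has stayed within a small radius for several consecutive frames:

```typescript
// Illustrative smoothing sketch. The EMA weight, dwell radius, and
// frame count are hypothetical tuning parameters, not the project's
// real values.
type Point = { x: number; y: number };

class GazeSmoother {
  private smoothed: Point | null = null;
  private dwellFrames = 0;

  constructor(
    private alpha = 0.3,   // EMA weight given to the newest raw sample
    private radius = 0.03, // max normalized drift to still count as dwelling
    private minDwell = 8   // consecutive frames needed to confirm a fixation
  ) {}

  // Feed one raw gaze sample per frame; returns the smoothed point and
  // whether an intentional fixation (vs. a passing glance) is confirmed.
  update(raw: Point): { point: Point; fixated: boolean } {
    if (!this.smoothed) {
      this.smoothed = { ...raw };
      return { point: this.smoothed, fixated: false };
    }
    const prev = { ...this.smoothed };
    this.smoothed = {
      x: prev.x + this.alpha * (raw.x - prev.x),
      y: prev.y + this.alpha * (raw.y - prev.y),
    };
    const drift = Math.hypot(this.smoothed.x - prev.x, this.smoothed.y - prev.y);
    this.dwellFrames = drift <= this.radius ? this.dwellFrames + 1 : 0;
    return { point: this.smoothed, fixated: this.dwellFrames >= this.minDwell };
  }
}
```

The trade-off the section describes shows up directly in these constants: a larger `alpha` tracks the eye faster but passes more noise through, while a longer `minDwell` rejects glances at the cost of slower confirmation.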