Help for Millions
GestureLabs · A3CP
Communication is a human right. GestureLabs is developing open-source AI communication tools that adapt to the individual and their abilities.
Ability-adaptive communication is a new direction in Augmentative and Alternative Communication (AAC). Current AAC systems support many non-speaking individuals, but they remain inaccessible for people with complex motor, sensory, or cognitive profiles. Our work develops an AI-based system that learns directly from each user’s gestures, sounds, and behavioural patterns. The interface adapts to the individual rather than requiring the individual to adapt to a fixed design. The system is built with strict privacy safeguards, transparent operation, and explicit ethical constraints.
Treats idiosyncratic gestures and non-speech sounds as valid, modelled input signals.
Stores derived features instead of raw video or audio, supporting local, privacy-preserving deployments.
Designed to operate with or without internet access, for use in homes, therapy centers, schools, or wherever the user wants to be.
GestureLabs is an international team of researchers, developers, academics, and product managers committed to supporting people with complex needs. The platform is built as public digital infrastructure: transparent, auditable, and independent of commercial lock-in.


We are looking for institutions, developers, researchers, and families interested in co-designing, building, and evaluating a new generation of ability-adaptive communication.