To interact with us, robots need to see, hear, speak, and express themselves as naturally as we do. Furhat is designed specifically for human social interaction: a multimodal system of discrete, modular subsystems handles facial animation, neck motion, visual perception, audio processing, cloud service integration, and the other functions that let it interact with people the way we interact with each other.
Furhat's unique combination of a back-projected 3D face engine and voice synthesis libraries opens up unlimited customization.
Furhat can track multiple individuals simultaneously in real time using face tracking, allowing you to create multi-party interactions.
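As a sketch of what multi-party tracking looks like in practice, the snippet below uses the flow DSL of the Furhat Kotlin skill framework (`state`, `onUserEnter`, `onUserLeave`, `furhat.attend`). The skill and state names are illustrative, and the exact API surface may vary between SDK versions.

```kotlin
import furhatos.flow.kotlin.*
import furhatos.skills.Skill

// Illustrative skill: greet and attend users as the face tracker detects them.
class MultiPartyDemo : Skill() {
    override fun start() {
        Flow().run(Greeting)
    }
}

val Greeting: State = state {
    onEntry {
        furhat.say("Hello! I can see ${users.count} of you.")
    }

    // Fired when the camera detects a new user entering the interaction space.
    onUserEnter {
        furhat.attend(it)   // turn head and gaze toward the new user
        furhat.say("Welcome!")
    }

    // Fired when a tracked user leaves; re-attend someone who is still present.
    onUserLeave {
        if (users.count > 0) {
            furhat.attend(users.random)
        }
    }
}

fun main(args: Array<String>) {
    Skill.main(args)
}
```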
Furhat comes with a powerful suite of tools for creating, debugging, deploying and analysing your applications on your desktop.
Prompt the robot's personality, control its expressivity, add knowledge sources, integrate APIs to build agentic workflows, log data, and more.
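To give a feel for API integration inside a skill, here is a minimal sketch that forwards whatever the user says to an external knowledge service and speaks the reply. The endpoint URL and the `lookup` helper are hypothetical placeholders; only the flow-DSL constructs (`furhat.ask`, `onResponse`, `reentry`) come from the skill framework, and the HTTP call uses the standard Java HTTP client.

```kotlin
import furhatos.flow.kotlin.*
import java.net.URI
import java.net.URLEncoder
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical knowledge lookup: endpoint and query format are placeholders.
fun lookup(query: String): String {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://example.com/knowledge?q=" +
            URLEncoder.encode(query, "UTF-8")))
        .build()
    return client.send(request, HttpResponse.BodyHandlers.ofString()).body()
}

val Answering: State = state {
    onEntry {
        furhat.ask("What would you like to know?")
    }

    // Catch-all handler: pass the user's utterance to the external service.
    onResponse {
        val answer = lookup(it.text)
        furhat.say(answer)
        reentry()   // ask again
    }
}
```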
Freely access video tutorials, demo skill files and comprehensive documentation to get you started with the platform.
Explore our peer-reviewed publications featuring Furhat from academic venues including ACM/IEEE HRI, ICRA, RO-MAN, and CHI.
Stay up to date with the latest news and projects from the Furhat research community.