Embodied AI safety
23 Jun 2026
Room 1
Plenary opening session
Embodied AI (eAI), also called Physical AI, uses artificial intelligence based on machine learning to interact with the physical world. We are already seeing eAI deployed in the real world in robotaxis, smart medical devices, household robots, and other applications. However, everyone is struggling with the safety of these devices: how to design for safety, how to evaluate safety, and how to decide whether any particular eAI system is acceptably safe.
This talk provides an overview of my new book on this topic, with robotaxi safety as a concrete example. Anyone working in this area needs a basic understanding of four core areas: safety engineering, cybersecurity engineering, machine learning technology, and human/computer interaction. The talk also discusses eAI safety issues in the wild, the complexities of establishing what risks might be acceptable, and open challenges in eAI safety. I welcome questions about how these matters might apply in other domains.
- Identifies key principles in the areas of system safety, cybersecurity, machine learning, human/computer interaction, and liability
- Illustrates how things change when a human operator is replaced by a computer, using examples from the robotaxi industry
- Explains the need to re-frame system safety from risk optimization to a multi-constraint satisfaction approach

