Human-Machine Ritual: Synergic Performance through Real-Time Motion Recognition

To be presented as a conference paper at NeurIPS 2025 in San Diego, California (December 2025), this work showcases an interdisciplinary collaboration at the intersection of machine learning, human-computer interaction, and performance art. We introduce a real-time motion recognition system integrated into choreographic practice, enabling human and machine to engage in a shared ritual of responsive interaction. The project reframes the human–machine relationship by asking: what happens when the machine remembers rather than generates, and listens rather than speaks first? Resisting generalization, we center bodily intuition and memory as paths toward hybrid rituals of coexistence. Looking ahead, the work extends toward a dance-literate machine: expanding archives of embodied motion, retraining models in real time, and deepening sound–movement mappings as expressive tools. Beyond technical refinement, we envision applications in somatic education, therapeutic movement, and creative AI performance. In doing so, we continue to cultivate a system that listens, remembers, and responds with care, opening new dialogues between embodied human expression and computational perception.

Zhuodi (Zoe) Cai, Ziyu (Rose) Xu, and Juan Pampin. Human-Machine Ritual: Synergic Performance through Real-Time Motion Recognition. In Proceedings of the Thirty-Ninth Conference on Neural Information Processing Systems (NeurIPS 2025).