STUDIO FOR NARRATIVE SPACES

Project

From Temporal to Spatial

Summary

Performance artforms like Peking opera face transmission challenges due to the extensive passive listening required to understand their nuance. To create more engaging ways of experiencing auditory Intangible Cultural Heritage (ICH), we designed a spatial interaction-based segmented-audio (SISA) Virtual Reality system that transforms passive ICH experiences into active ones. We undertook: (1) a co-design workshop with seven stakeholders to establish design requirements, (2) prototyping with five participants to validate design elements, and (3) user testing with 16 participants exploring Peking opera. We transformed temporal music into spatial interactions by cutting sounds into short audio segments and applying the t-SNE algorithm to cluster those segments spatially. Users navigate through the sounds according to their similarity in audio properties. Analysis revealed two distinct interaction patterns (Progressive and Adaptive) and demonstrated SISA's efficacy in facilitating active engagement with auditory ICH. Our work illuminates the design process for enriching traditional performance artforms with spatially-tuned forms of listening.
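The sketch below illustrates the kind of pipeline described above (it is not the project's released code): a recording is cut into short segments, each segment is summarized by audio features, and t-SNE projects the segments into 2D positions that a VR scene can use for placement. The file name, segment length, and choice of MFCC features are illustrative assumptions.

```python
# Illustrative sketch of the temporal-to-spatial idea: segment audio,
# extract per-segment features, and embed them with t-SNE so that
# acoustically similar segments end up close together in space.
import numpy as np
import librosa
from sklearn.manifold import TSNE

def segment_and_embed(path, segment_seconds=2.0, n_mfcc=20, random_state=0):
    y, sr = librosa.load(path, sr=None, mono=True)
    hop = int(segment_seconds * sr)
    segments = [y[i:i + hop] for i in range(0, len(y) - hop + 1, hop)]

    # Summarize each segment by the mean of its MFCC frames.
    features = np.stack([
        librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
        for seg in segments
    ])

    # t-SNE places acoustically similar segments near each other in 2D.
    coords = TSNE(n_components=2,
                  perplexity=min(30, len(segments) - 1),
                  random_state=random_state).fit_transform(features)
    return segments, coords

if __name__ == "__main__":
    segs, xy = segment_and_embed("peking_opera_excerpt.wav")  # hypothetical file
    print(xy.shape)  # (num_segments, 2)
```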

This temporal-to-spatial strategy was applied to a VR artwork shown at Osage Gallery in Hong Kong. In dreams, one's life experiences are jumbled together, so that characters can stand for multiple people in one's life and sounds can run together without sequential order. To show memories in a dream in a more contextual way, we represent environments and sounds using machine learning approaches that take into account the totality of a complex dataset. The immersive environment uses machine learning to computationally cluster sounds into thematic scenes, allowing audiences to grasp the dimensions of complexity in a dream-like scenario. We applied the t-SNE algorithm to collections of music and voice sequences to explore how interactions in immersive space can convert temporal sound data into spatial interactions. We designed both 2D and 3D interactions, as well as headspace versus controller interactions, in two case studies: one segmenting a single work of music and one working with a collection of sound fragments, applied to a VR artwork about replaying memories in a dream. We found that audiences can enrich their experience of the story through the machine learning-generated soundscapes without necessarily gaining a full understanding of the artwork. This provides a method for experiencing temporal sound sequences spatially through nonlinear exploration in VR.
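As a companion to the sketch above, the following is a minimal, assumption-laden example of turning a t-SNE embedding into placements inside a virtual room, so each audio segment becomes a sound source the visitor can walk or point to. The room dimensions, listening height, and the "floor" (2D) versus "volume" (3D) layouts are illustrative assumptions, not the exhibited artwork's exact parameters.

```python
# Illustrative mapping from a t-SNE embedding to VR positions.
import numpy as np

def embed_to_positions(coords, room_size=(8.0, 3.0, 8.0), mode="floor"):
    """Rescale a t-SNE embedding into room-relative VR positions.

    coords: (N, 2) for floor placement or (N, 3) for volumetric placement.
    room_size: assumed (width, height, depth) of the virtual room in metres.
    """
    coords = np.asarray(coords, dtype=float)
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    unit = (coords - lo) / np.maximum(hi - lo, 1e-9)  # normalize to [0, 1]

    if mode == "floor":                 # 2D embedding spread across the floor plane
        x = unit[:, 0] * room_size[0]
        z = unit[:, 1] * room_size[2]
        y = np.full(len(unit), 1.5)     # fixed listening height (assumed)
    else:                               # 3D embedding fills the room volume
        x = unit[:, 0] * room_size[0]
        y = unit[:, 1] * room_size[1]
        z = unit[:, 2] * room_size[2]
    return np.column_stack([x, y, z])

# positions = embed_to_positions(xy, mode="floor")  # one (x, y, z) per segment
```

A floor layout pairs naturally with walking or controller-pointing interactions, while a volumetric layout suits headspace-driven exploration; both correspond to the 2D and 3D interaction designs described above.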

Honorable Mention Award publication in DIS: Designing Interactive Systems (DIS '25); preprint on arXiv.
Art paper in the EAI conference ArtsIT 2022.
Exhibitions at JCCAC and Osage Gallery, Hong Kong.

People

RAY LC, Yuqi Wang, Sirui Wang, Shiman Zhang, Kexue Fu, Michelle Lui, Zeynep Erol, Zhiyuan Zhang, Eray Ozgunay

Tech

vr/ar, hci, social good, machine learning

Venues

DIS, Osage Gallery, EAI ArtsIT, JCCAC, Floating Projects

Year

2025