Yiyuan Qian / 銭 漪遠 / セン イーエン
Computational Designer
DDL (Digital Design Lab), Nikken Sekkei Ltd. | 2020~
Education
M.Eng. Obuchi Lab, the University of Tokyo | 2018 - 2020
B.Arch. & M.Arch. Tsinghua University | 2012 - 2018
Contact
[email protected]
twitter / instagram @seleca789
This research was conducted at Obuchi Lab, the University of Tokyo, supervised by Prof. Yusuke Obuchi, and builds on a previous fabrication project: PAFF (Projectile Acoustic Fiber Forest). Collaborators: Alex Orsholits and Joe Li
In shooting games, people mostly aim with their eyes. This raised a few questions: how well can humans recognize direction through sound alone? Does that ability vary between individuals, or with the position and characteristics of the sound? Is it possible to aim with the ears?
We developed a “shooting game” that generates spatial sound sources and renders them to users in real time according to their head positions and orientations.
In the previous project PAFF (Projectile Acoustic Fiber Forest), hardware and software tools were developed to create virtual sound that guides people shooting with an air gun. However, various problems appeared during the process.
Blaster (left): as an improvement over the previous air gun, we designed a blaster mounted directly on the user's arm to minimize the deviation introduced by aiming. The blaster includes an automatic loading and shooting system and produces minimal noise, keeping the environment quiet for the user. It also carries an HTC VIVE controller to track its movement in real time.
Headset (right): the headset consists of a Bluetooth headphone that plays the sound, an HTC VIVE tracker that follows the user's head movement in real time, and a band that fastens the device.
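As a rough illustration of how such a tracked pose could be used, the sketch below (Python, with hypothetical names; the actual system ran on Unity and HTC VIVE tracking) converts the blaster's position and forward direction into a hit point on a vertical target plane.

```python
# Hypothetical sketch, not the project's actual code: turning the tracked
# blaster's forward direction into a hit point on a vertical target plane
# at z = plane_z. The forward vector would come from the VIVE controller's
# tracked orientation.
import numpy as np

def aim_hit_point(blaster_pos, forward, plane_z):
    """Intersect the blaster's aim ray with the plane z = plane_z, or return None."""
    origin = np.asarray(blaster_pos, dtype=float)
    direction = np.asarray(forward, dtype=float)
    direction = direction / np.linalg.norm(direction)
    if abs(direction[2]) < 1e-6:
        return None  # ray is parallel to the target plane
    t = (plane_z - origin[2]) / direction[2]
    if t < 0:
        return None  # target plane is behind the shooter
    return origin + t * direction

# Example: shooter at chest height aiming slightly up and to the right.
print(aim_hit_point([0, 1.5, 0], [0.1, 0.1, 1.0], 5.0))
```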
Most people can tell the approximate direction of a sound source, although the precision varies. In our case, we created a virtual sound generator using the Unity engine, with an HTC VIVE tracker following the user's head movement.
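The core quantity behind such a generator is the direction of the sound source relative to the tracked head. The Python sketch below only illustrates that geometry under an assumed coordinate convention, not the project's Unity code: it expresses a source position in head coordinates and reduces it to azimuth and elevation angles.

```python
# Minimal sketch (assumed conventions, not the project's Unity/C# code):
# express a sound-source position in head-relative coordinates and reduce it
# to azimuth/elevation, the angles a binaural spatializer works from.
import numpy as np

def quat_to_matrix(q):
    """3x3 rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def source_azimuth_elevation(head_pos, head_quat, source_pos):
    """Azimuth/elevation (degrees) of a source as heard from the tracked head.

    Convention assumed here: head-local +Z is forward, +X is right, +Y is up.
    """
    offset_world = np.asarray(source_pos, float) - np.asarray(head_pos, float)
    # World -> head coordinates: apply the inverse (transpose) rotation.
    offset_head = quat_to_matrix(head_quat).T @ offset_world
    x, y, z = offset_head
    azimuth = np.degrees(np.arctan2(x, z))                  # positive = to the right
    elevation = np.degrees(np.arctan2(y, np.hypot(x, z)))   # positive = above
    return azimuth, elevation

# Example: head at the origin facing +Z, source 2 m ahead and 2 m to the right.
print(source_azimuth_elevation([0, 0, 0], (1, 0, 0, 0), [2, 0, 2]))  # ~ (45.0, 0.0)
```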
As the ability to locate a sound source varies from person to person, we measured 10 users to build a detailed sound map of each user's acoustic spatial perception. Each user heard a continuous drum sound coming from different directions (180 points in our case) and pointed in the direction they perceived the sound to come from.
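One way such trials could be turned into a sound map (this assumes a data layout for illustration and is not the project's actual pipeline) is to compute, for every playback direction, the angle between the true direction and the direction the user pointed:

```python
# Illustrative sketch (assumed data layout, not the project's code): per-trial
# localization error as the angle between the true source direction and the
# direction the user pointed, both given as vectors in head coordinates.
import numpy as np

def angular_error_deg(true_dir, pointed_dir):
    """Angle in degrees between two direction vectors."""
    a = np.asarray(true_dir, float)
    b = np.asarray(pointed_dir, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def sound_map(trials):
    """Build a per-direction error list from (true_dir, pointed_dir) pairs."""
    return [angular_error_deg(t, p) for t, p in trials]

# Example: one accurate trial and one roughly 45-degree miss.
trials = [([0, 0, 1], [0.02, 0.01, 1.0]),
          ([0, 0, 1], [1, 0, 1])]
print(sound_map(trials))  # roughly [1.3, 45.0]
```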
Above are color maps indicating directly where each user is good (green) or bad (red) at locating the sound source. However, some users are accurate in the central area (e.g., the map on the left) but become confused in other areas.
Above are distribution maps illustrating each user's tendency in their deviations: some users consistently overshoot upwards, while others consistently veer to the left.
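Such a tendency could be quantified (again a sketch with assumed data, not the project's code) as the mean signed deviation in azimuth and elevation across all trials:

```python
# Illustrative sketch (assumed data, not the project's code): a user's
# systematic tendency as the mean signed deviation in azimuth and elevation.
# Positive azimuth bias = points too far right; positive elevation bias = too high.
import numpy as np

def mean_bias(true_angles, pointed_angles):
    """Mean signed (azimuth, elevation) deviation in degrees across trials.

    Both inputs are arrays of shape (n_trials, 2) holding (azimuth, elevation).
    """
    deviation = np.asarray(pointed_angles, float) - np.asarray(true_angles, float)
    return deviation.mean(axis=0)

# Example: a user who tends to point slightly left and clearly too high.
true_angles = [(0, 0), (30, 10), (-20, 5)]
pointed_angles = [(-3, 8), (26, 18), (-24, 14)]
print(mean_bias(true_angles, pointed_angles))  # ~ [-3.67, 8.33]
```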
Step 01
A volunteer in the audience raises a hand to be the "target".
Step 02
A staff member inputs the volunteer's name and seat number into the software.
Step 03
Step 04
The "game" was showcased at Daiwa Ubiquitous Hall during the IBM CDE 「音で作る建築」 ("Architecture Made with Sound") Acoustic Spatial Perception Workshop, led by Prof. Obuchi and team members, in January 2020.