Yiyuan Qian / 銭 漪遠 / セン イーエン
Computational Designer
DDL (Digital Design Lab), Nikken Sekkei Ltd. | 2020~
Education
M.Eng. Obuchi Lab, the University of Tokyo | 2018 - 2020
B.Arch. & M.Arch. Tsinghua University | 2012 - 2018
Contact
[email protected]
twitter / instagram @seleca789
This research was conducted at Obuchi Lab, the University of Tokyo, under the supervision of Prof. Yusuke Obuchi, and builds on a previous project: Spatial Recognition via Audio Guidance. Collaborator: Alex Orsholits
In VR applications, hearing is commonly employed as a secondary sense that complements our dominant sense of sight. We aim to explore the potential of auditory perception as a sense in its own right, capable of conveying an innate understanding of space through persistent, geographically mapped virtual spatial audio.
By placing computer-generated audio cues in an urban environment using accessible technologies, we introduce a prototype wearable navigation device that combines RTK GNSS positional tracking with an IMU for absolute-position spatial audio computation, rendered through Google’s Resonance Audio SDK and running autonomously on an Android smartphone.
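As a rough illustration of this absolute-position audio computation (a minimal sketch under stated assumptions, not the project’s actual implementation), the Java snippet below converts an RTK GNSS fix and an IMU compass heading into a listener-relative source position and hands it to a GvrAudioEngine-style binaural renderer from the Google VR / Resonance Audio SDK for Android. The class, field, and file names are illustrative assumptions, and the equirectangular approximation only holds over short, city-block-scale distances.

import com.google.vr.sdk.audio.GvrAudioEngine;

public class AudioBeacon {
    private static final double EARTH_RADIUS_M = 6371000.0;

    private final GvrAudioEngine engine;
    private final int sourceId;
    private final double destLatDeg;
    private final double destLonDeg;

    public AudioBeacon(GvrAudioEngine engine, String soundFile,
                       double destLatDeg, double destLonDeg) {
        this.engine = engine;
        this.destLatDeg = destLatDeg;
        this.destLonDeg = destLonDeg;
        engine.preloadSoundFile(soundFile);           // asset path of the user's own music
        this.sourceId = engine.createSoundObject(soundFile);
        engine.playSound(sourceId, true);             // loop the cue continuously
    }

    // Called on every new RTK GNSS fix / IMU heading update.
    public void update(double userLatDeg, double userLonDeg, float headingRad) {
        // Destination offset in a local east-north frame (metres),
        // using an equirectangular approximation of the geodetic offset.
        double dLat = Math.toRadians(destLatDeg - userLatDeg);
        double dLon = Math.toRadians(destLonDeg - userLonDeg);
        double east = EARTH_RADIUS_M * dLon * Math.cos(Math.toRadians(userLatDeg));
        double north = EARTH_RADIUS_M * dLat;

        // Rotate into the head frame; heading is measured clockwise from true north.
        double forward = north * Math.cos(headingRad) + east * Math.sin(headingRad);
        double right = east * Math.cos(headingRad) - north * Math.sin(headingRad);

        // GVR/Resonance audio convention: +x right, +y up, -z forward.
        engine.setSoundObjectPosition(sourceId, (float) right, 0.0f, (float) -forward);
        engine.update();
    }
}

In such a setup, update() would run on every GNSS/IMU callback so that the cue stays anchored to its geographic position as the user walks.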
In the demonstrated use case, the user is intuitively guided to a destination by a virtual audio cue playing their personal music, without the assistance of maps or displays. Test users evaluated the proposed system positively, finding it easy to adopt and seamless to interact with.
We propose a prototype wearable navigation device for pedestrians based on a spatial audio interface.
(1) It differs from current visual, haptic, or descriptive audio (speech) interfaces by providing a genuinely intuitive method of instruction;
(2) it overcomes major problems of digital maps, such as tracking errors, occupation of the user’s sight, and added cognitive load;
(3) it fully realizes the idea of using spatial audio as a navigation interface in a modular, non-exclusive wearable prototype, tested and proven to be functional and pleasant to use;
(4) it opens up possibilities for further urban-scale designs and applications that use spatial audio as an interface.
GNSS testing in a low-rise urban area. From left to right: single-band GNSS, dual-band GNSS, RTK GNSS. Satellite imagery copyright Google 2019, via Google Earth.
GNSS testing in a high-rise urban area. From left to right: single-band GNSS, dual-band GNSS, RTK GNSS. Satellite imagery copyright Google 2019, via Google Earth.
Diagram of the system’s hardware and network mapping.
Diagrams depicting the characteristics of the follow-the-leader navigation algorithm.
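The diagrams themselves are not reproduced here. Purely as one plausible reading of a follow-the-leader navigation scheme (the route representation, lead distance, and all names below are illustrative assumptions, not the project’s code), the audio cue can be kept a fixed distance ahead of the user along a predefined route polyline:

import java.util.List;

public class FollowTheLeader {
    private final List<double[]> route;  // ordered {east, north} waypoints in metres
    private final double leadDistance;   // how far ahead of the user the cue stays
    private int segment = 0;             // index of the waypoint the user last passed

    public FollowTheLeader(List<double[]> route, double leadDistance) {
        this.route = route;
        this.leadDistance = leadDistance;
    }

    // Returns the leader (audio cue) position {east, north} for the user's position.
    public double[] leaderFor(double userE, double userN) {
        // Advance along the polyline while the next waypoint is closer than the current one.
        while (segment < route.size() - 2
                && dist(userE, userN, route.get(segment + 1)) < dist(userE, userN, route.get(segment))) {
            segment++;
        }
        // Walk leadDistance metres along the remaining route, starting from the user.
        double remaining = leadDistance;
        double e = userE;
        double n = userN;
        for (int i = segment + 1; i < route.size(); i++) {
            double[] p = route.get(i);
            double d = dist(e, n, p);
            if (d >= remaining) {
                double t = remaining / d;   // interpolate within this segment
                return new double[] { e + t * (p[0] - e), n + t * (p[1] - n) };
            }
            remaining -= d;
            e = p[0];
            n = p[1];
        }
        return route.get(route.size() - 1); // clamp the cue to the destination
    }

    private static double dist(double e, double n, double[] p) {
        return Math.hypot(p[0] - e, p[1] - n);
    }
}

Under this reading, the returned position would feed the same spatial audio placement sketched earlier, so the cue always pulls the user onward along the route rather than pointing straight at the destination.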
Scaled plan views of the various test sites and their respective routes, along with a depiction of site conditions.
Satellite imagery copyright Google 2019, via Google Earth. Imagery Date: 3/14/2019