
Our Research

We promote academia-industry cooperation through international collaborations and communicate our results to leading scientific communities in 3D acoustics for architecture and virtual reality.

1. Clarity of Speech and Music in Acoustic Spaces

Presentation by Alexis Campos, scientist at Perception Research, at the international conference on immersive audio I3DA 2021.

When musical instruments are perceived with little clarity or speech is unintelligible, the acoustic space must be analyzed to decide on a strategy for improving listening comfort.

The traditional way of measuring the clarity of speech and music in acoustic spaces uses a single microphone and therefore cannot distinguish the directions from which sound arrives.

Our user-focused proposal uses a concentric microphone array to distinguish the different directions in which sounds reach each listener.
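The single-channel metric that underlies these measurements is an early-to-late energy ratio such as C50, computed directly from a room impulse response. A minimal sketch with a synthetic impulse response follows; the directional version proposed in the paper replaces the single microphone with an array, which this sketch does not attempt to reproduce.

```python
import numpy as np

def clarity(h, fs, t_early=0.05):
    """Early-to-late energy ratio in dB (C50 when t_early = 50 ms)."""
    split = int(round(t_early * fs))       # sample index of the early/late boundary
    early = np.sum(h[:split] ** 2)         # energy arriving before t_early
    late = np.sum(h[split:] ** 2)          # energy arriving after t_early
    return 10.0 * np.log10(early / late)

# Toy impulse response: exponential decay with a 0.3 s time constant
fs = 48000
t = np.arange(fs) / fs                     # 1 second of samples
h = np.exp(-t / 0.3)
print(f"C50 = {clarity(h, fs):.1f} dB")
```

A shorter decay constant concentrates energy in the early window and raises the ratio, which is why clarity metrics correlate with perceived intelligibility.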

We invite you to see the details of our research in the paper and in the video presented at I3DA 2021.

Reference

A. Campos, S. Sakamoto, and C. Salvador, “Directional early-to-late energy ratios to quantify clarity: A case study in a large auditorium,” International Conference on Immersive and 3D Audio (I3DA), Bologna, Italy, September 2021.

DOI: 10.1109/I3DA48870.2021.9610935

2. Spatialization of Near Sounds with Headphones

Presentation by Ayrton Urviola, scientist at Perception Research, at the international conference on immersive audio I3DA 2021, and presentation by César Salvador at the international conference on acoustics ICA 2022.

The external anatomy of the ears and head determines an acoustic transfer function used to spatialize sound sources around the listener through headphones.

Conventional modeling of the listener's transfer function considers sounds at distances greater than one meter.

Our proposal increases the realism of spatialization by modeling transfer functions for sound sources very close to the listener's head.

We invite you to watch the video and read the references to learn more about our research.

References

  1. C. Salvador, A. Urviola, and S. Sakamoto, “Ear centering in the spatial and transform domains for near-field head-related transfer functions,” 24th International Congress on Acoustics (ICA 2022), Gyeongju, South Korea, October 2022. Paper in PDF

  2. A. Urviola, S. Sakamoto, and C. Salvador, “Ear centering for accurate synthesis of near-field head-related transfer functions,” Appl. Sci., vol. 12, no. 16, 2022. DOI: 10.3390/app12168290

  3. A. Urviola, S. Sakamoto, and C. Salvador, “Ear centering for near-distance head-related transfer functions,” International Conference on Immersive and 3D Audio (I3DA), Bologna, Italy, September 2021. DOI: 10.1109/I3DA48870.2021.9610891
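The geometric intuition behind ear centering can be illustrated with a simple coordinate translation: expressing a nearby source position relative to the ear entrance, rather than the head center, changes both its distance and its direction appreciably when the source is close. The ear offset below is a hypothetical round number for illustration only; the papers define the exact centering procedure.

```python
import numpy as np

# Hypothetical ear position relative to the head center (meters):
# about 9 cm to one side along the interaural axis.
ear_left = np.array([0.0, 0.09, 0.0])

def recenter(source, ear):
    """Express a source position relative to the ear instead of the head center."""
    v = source - ear
    r = np.linalg.norm(v)                          # ear-to-source distance
    azimuth = np.degrees(np.arctan2(v[1], v[0]))   # horizontal-plane angle
    return r, azimuth

# A source 20 cm in front of the head center
source = np.array([0.20, 0.0, 0.0])
r_head = np.linalg.norm(source)
r_ear, az_ear = recenter(source, ear_left)
print(f"distance from head center: {r_head:.3f} m")
print(f"distance from left ear:    {r_ear:.3f} m, azimuth {az_ear:.1f} deg")
```

For a source one meter away the same offset barely changes the angles, which is why the near-field case calls for the dedicated treatment the references describe.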

3. Directional Reverberation in Acoustic Spaces

Brief introduction by Julio Alarcón, scientist at Perception Research, at the international technology conference INTERCON 2021.

As sound propagates in an acoustic space, it interacts with surfaces and materials, producing echoes and reverberation.

To discern the directions from which reverberation reaches the listener, it must be measured with a spherical microphone array. The directional resolution of available arrays is, however, limited.

Our proposal increases the directional resolution of the reverberation using interpolation methods on the sphere.

We invite you to review the details of our research in the paper and in the video presented at INTERCON 2021.

Reference

J. Alarcón, J. Solis, and C. Salvador, “Regularized spherical Fourier transform for room impulse response interpolation,” XXVII International Conference on Electronics, Electrical Engineering, and Computing (INTERCON), Lima, Peru, August 2021.

DOI: 10.1109/INTERCON52678.2021.9532805

GitHub: AlarconGanoza/sphericalAcoustic

4. Auditory Brain Models

Summary of the presentation by César Salvador at Brainware 2021.

Modeling the neural processes that occur during sound perception is essential for adding realism and naturalness to audio processing methods. These methods are useful, for example, in assistive devices for people with hearing loss, in audio systems for virtual reality, and in machines for acoustic environment recognition.

We invite you to review the details of our research in the slideshow presented at Brainware 2021.

Reference

C. Salvador, R. Teraoka, and S. Sakamoto, “Auditory brain models for the localization and identification of sound,” 7th Int. Symp. LSI Brainware, Sendai, Japan, March 2021.
