
Phil Coleman & Philip Jackson

__________________


biography


Phil Coleman is Lecturer in Audio at the Institute of Sound Recording, University of Surrey, UK. Previously, he worked in the Centre for Vision, Speech and Signal Processing (University of Surrey) as a Research Fellow on the project S3A: Future spatial audio for an immersive listening experience at home. He received a PhD from the University of Surrey in 2014 for work on loudspeaker array processing for personal audio, carried out as part of the Perceptually Optimized Sound Zones (POSZ) project. His research interests lie broadly in the engineering and perception of 3D spatial audio, including object-based audio, immersive reverberation, sound field control, loudspeaker and microphone array processing, and enabling new user experiences in spatial audio.


Philip Jackson is Reader in Machine Audition at the Centre for Vision, Speech & Signal Processing (CVSSP, University of Surrey, UK), with an MA in Engineering from Cambridge University, UK, and a PhD in Electronic Engineering from the University of Southampton, UK. His broad interest in acoustical signals has led to research contributions in human speech perception and production, auditory processing and recognition, audio-visual machine perception, blind source separation, articulatory modeling, visual speech synthesis, sound field control, spatial reverberation capture and reproduction, and spatial audio quality evaluation. He led one of four research streams on object-based spatial audio in the S3A programme grant (2014-2019), publicly funded in the UK by the EPSRC.


Phil Coleman's profile

Philip Jackson on Google Scholar


__________________


abstract


Phil and Philip will present on the subject of "Turning real spaces into imaginary ones":


In immersive media content, reverberation plays an important part in transporting a listener into an imaginary space: it creates a sense of the space and offers the listener depth and perspective. An emerging audio format, object-based audio, represents the individual sounds (objects) in a piece together with metadata describing how they should be positioned and rendered as a scene. In this way, the same immersive content can be played out over completely different sound setups: e.g., stereo and surround-sound loudspeakers, sophisticated multichannel or Ambisonic arrangements, and headphones, with or without head-tracking. There is not yet a standard way of producing reverberation in an object-based world. Although efficient techniques exist for generating artificial reverberation via room acoustic models, questions remain about how best to convert a real room acoustic into object metadata that a designer can manipulate to shape the listener's experience of the imaginary space.

Our recent research in the S3A project developed a perceptually inspired end-to-end pipeline for immersive reverberation: spatial room impulse response measurements are converted into reverb metadata, manipulated in production tools, and reproduced over headphones or any loudspeaker layout. Our presentation will outline the benefits of an object-based representation of reverberation, explain the key aspects of our reverberant spatial audio object (RSAO) model, consider prospects for its use in virtual and mixed reality, and demonstrate examples of the effects and spatial impressions that the model can achieve.
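As a concrete illustration of one step such a pipeline might perform, the sketch below estimates a late-reverberation decay time (RT60) from an impulse response using standard Schroeder backward integration, and packs the result into a toy reverb-object record. This is a minimal sketch, not the authors' RSAO implementation: the metadata field names (late_rt60_s, late_onset_s, early_reflections) are hypothetical placeholders, and the actual model uses richer, per-band parameters.

```python
import numpy as np

def schroeder_decay_db(rir):
    """Backward-integrated energy decay curve (Schroeder integration), in dB."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]   # energy remaining from each sample onward
    return 10.0 * np.log10(energy / energy[0])

def estimate_rt60(rir, fs, fit_range_db=(-5.0, -25.0)):
    """Estimate RT60 by fitting a line to the decay curve over fit_range_db
    (a T20-style fit, extrapolated to the full 60 dB of decay)."""
    edc = schroeder_decay_db(rir)
    hi, lo = fit_range_db
    idx = np.where((edc <= hi) & (edc >= lo))[0]
    slope, _ = np.polyfit(idx / fs, edc[idx], 1)   # decay slope in dB per second
    return -60.0 / slope

# Synthetic exponentially decaying noise standing in for a measured RIR (true RT60 = 0.6 s).
fs = 48_000
t = np.arange(int(1.5 * fs)) / fs
rng = np.random.default_rng(0)
rir = rng.standard_normal(t.size) * 10.0 ** (-3.0 * t / 0.6)

# Hypothetical reverb-object metadata; field names are illustrative only.
reverb_object = {
    "type": "reverb",
    "late_rt60_s": round(estimate_rt60(rir, fs), 2),  # single broadband value here; RSAO uses per-band decays
    "late_onset_s": 0.08,                             # placeholder: time at which the late tail begins
    "early_reflections": [],                          # placeholder: per-reflection time/direction/level
}
print(reverb_object)
```

In a full object-based system, parameters like these would travel alongside the audio objects and be interpreted by the renderer for whatever loudspeaker or headphone layout is in use.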
