What is the 360 Ecosystem? Producers, beat makers, and programmers can now begin to imagine the delivery, distribution, and presentation of their works at even earlier stages of creation. We’ll discuss how artists, arrangers, composers, and songwriters fit into this ecosystem so they might present their music in a new way to their fans (consumers). Ecosystem idea: artist, producer, mixing engineer, and immersive engineer; then producer, artist, and fans (consumers, through immersive playback). How: taking close-microphone/console audio from a stereo 2-mix and reimagining it in Dolby Atmos (Avid Play for distribution) or Sony RA360 (The Orchard for distribution). A potential future state: using spatial microphones early in the production/writing process to capture the true sonics of what an artist, arranger, composer, or songwriter hears in their mind as the musical work develops. Why: consumers probably already have the technology in their pocket without knowing it’s available to them. Fans can access playback options through streaming services, using speakers, sound bars, or headphones (e.g., AirPods Pro 2 and AirPods Max, which are now Atmos-enabled). Will artists, arrangers, composers, and songwriters start to change their thought processes and workflows in the earliest writing stages, with the idea of immersing their listeners at inception? What consumers choose for playback (e.g., speakers vs. headphones/binaural) can have a different impact from genre to genre; every musical style should have access to these cutting-edge presentation formats. The presentation of the 360 Ecosystem will use a musical work from legendary artist and producer Hank Shocklee to reimagine an original stereo 2-mix in both the Dolby Atmos and Sony RA360 formats.
The first Tonmeister program was founded in 1949 at the Detmold Hochschule für Musik. This unique concept of education in music combined with audio recording continues to this day, in Detmold and at schools and universities around the world. How has the Tonmeister tradition been maintained, and how has the concept changed with the continuing evolution of technology and of music itself? In “From Tonmeister to Today,” eight prominent international audio educators speak with Tonmeister Ulrike Schwarz and discuss their individual programs. A brief Q&A will follow the video.
Ideally, an orchestral score should be performed and recorded by an orchestra of musicians playing together. However, due to budget and time constraints, or perhaps global pandemic conditions, this is sometimes not possible.
The author has worked since the 90s with orchestras of sampled instruments, and has followed and lived through all of the technological changes affecting sampling, sample playback and advances in computer and musical instrument technology. Where the ultimate deliverable is a sampled orchestra performance, he has always prioritised realism in the performance.
This tutorial will showcase his work through the years, with audio examples to explain how advances in technology have translated into improvements in realism.
Wave Field Synthesis (WFS) is a spatial audio rendering technique that places virtual sound sources in real space. Using high-density arrays of approximately 200-600 discrete loudspeakers, it is possible to place sound sources accurately in physical space in front of the speakers -- in short, to create sound holograms, or "holophones." While this technology has long been considered logistically infeasible, a number of systems have been built in the last few years with the rise of audio-over-IP. This panel discussion focuses on composers and other creators in the performing arts who have been leading the way in using this new spatial audio technology for artistic expression. The conversation will begin with a brief introduction to the technology, but focus mainly on how artists are using it and why “sound holograms” have caused a fundamental shift in how they think about making artistic work with spatial audio. In the past, everyone in a shared listening experience heard everything at the same time; now a live event can offer individual sonic experiences without headphones. The technology is incredibly flexible, and there is enormous room for creativity. The panelists include composers, sound designers, and a choreographer who have worked closely with WFS. The three projects discussed show how the artists are using the technology differently. One is a concert with a roaming audience who walk inside the sound sources (accompanied by beams of light). One is a seated audience hearing sounds whisper in their ears and move through them. And the other is a dance in which the sound of the dancer’s movement is separated from his body like a ghost, in a conceptual piece about gravity.
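At its core, WFS drives each loudspeaker in the array with a delayed, attenuated copy of the source signal so the wavefronts sum into the field of a virtual point source. A minimal sketch of that per-speaker delay/gain computation is below; the array geometry, source position, and 1/sqrt(r) weighting are illustrative assumptions, not a description of any system mentioned by the panel.

```python
# Minimal WFS-style sketch (illustrative): per-speaker driving delays and
# gains for one virtual point source behind a linear loudspeaker array.
import math

C = 343.0  # speed of sound in air, m/s (room temperature)

def wfs_driving_delays(source_xy, speaker_xs, speaker_y=0.0):
    """Return (delay_s, gain) per speaker for a virtual point source at
    source_xy, with speakers along the line y = speaker_y.  Each speaker
    is delayed by its distance to the source and attenuated ~ 1/sqrt(r)."""
    sx, sy = source_xy
    out = []
    for x in speaker_xs:
        r = math.hypot(x - sx, speaker_y - sy)  # source-to-speaker distance
        out.append((r / C, 1.0 / math.sqrt(max(r, 1e-6))))
    return out

# Eight speakers at 0.5 m spacing; virtual source 2 m behind the array centre.
speakers = [i * 0.5 for i in range(8)]
delays = wfs_driving_delays((1.75, -2.0), speakers)
```

The speakers nearest the virtual source fire first and loudest; the outward-growing delays across the array are what reconstruct the curved wavefront listeners localize in front of (or behind) the speakers.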
Audio in many forms is an important part of interactive media such as games: sound effects and music have a substantial effect on players' experience. In this proposed panel, several experts from industry and academia will hold a semi-structured chat. The panel consists of industry experts with backgrounds in music composition and sound effects design for games and other interactive media, and a game audio researcher with empirical work in game audio for PC and VR games. The questions will cover the process of audio design for games, how the panelists design or compose for specific experiences and minimize replay fatigue, how they communicate about the audio they're aiming for, how music can create or break immersion, what factors distract from game audio, and what makes audio in games particularly satisfying.
How would your career look if your mentor/mentee experience were different?
This tutorial will examine some of the challenges and opportunities in Networked Music Performance from a musician’s perspective in a home environment. The COVID-19 pandemic has highlighted how important playing music with other people is in many people’s lives, and playing online has allowed this to continue throughout enforced isolation. The tutorial will first look at the two main approaches: asynchronous (the ‘virtual ensemble’) and synchronous, including the pros and cons of each, the particular considerations in choosing between them, and musical examples.
The tutorial will then go on to focus on the synchronous approach – playing together in (near) real time. The Internet was not designed for real-time transmission of audio, and data packets may arrive late, or not at all, introducing latency and glitches to the received signals. Latency is a major consideration for musicians, and we will discuss the ways it can be used creatively. We will also discuss bandwidth issues, and the trade-offs around this, as well as the impact of different approaches to monitoring. We will give examples of accessible software that musicians can use in their own homes for Networked Music Performance.
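The latency trade-off described above can be made concrete with a simple budget: audio I/O buffering at each end plus one-way network transit plus any codec or jitter-buffer overhead. The sketch below is a back-of-envelope model with assumed example numbers (buffer size, round-trip time, and the commonly cited ~30 ms comfort threshold for rhythmic ensemble playing); it is not drawn from the tutorial itself.

```python
# Illustrative one-way latency budget for synchronous networked performance.
# All figures are example assumptions, not measurements.

def one_way_latency_ms(buffer_frames, sample_rate, network_rtt_ms, extra_ms=0.0):
    """Capture + playback buffering, plus half the network round trip,
    plus codec/jitter-buffer overhead, in milliseconds."""
    io_ms = 2 * buffer_frames / sample_rate * 1000.0  # two audio buffers
    return io_ms + network_rtt_ms / 2.0 + extra_ms

# Rough rule-of-thumb upper bound for comfortable rhythmic playing.
ENSEMBLE_THRESHOLD_MS = 30.0

latency = one_way_latency_ms(buffer_frames=64, sample_rate=48000,
                             network_rtt_ms=20.0, extra_ms=5.0)
playable = latency <= ENSEMBLE_THRESHOLD_MS
```

With a 64-frame buffer at 48 kHz and a 20 ms round trip, the budget lands well under the threshold, which is why small buffers and geographically close players matter so much in practice.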
"Immersive" audio is a popularly used term today, and it is often regarded as synonym of 3D audio. But what does immersive mean exactly? There is currently a lack of consensus in how the term should be defined, and it is not yet clear what techniques are required to make audio content more immersive. This session will first explicate different dimensions of immersion as well as those of related concepts presence and involvement, identifying the source of confusion around the terms and provide a conceptual relationship among them. A universal conceptual model of immersive experience will then be introduced, and various context-dependent factors that might be associated with immersive "auditory" experience will be discussed with practical examples.
In 2001, the AES Technical Committee on Coding of Audio Signals (TC-CAS) produced the legendary educational CD-ROM "Perceptual Audio Coders - What To Listen For". It contains a taxonomy of common types of codec artifacts, along with tutorial information on the background of each one. Example audio signals with different degrees of impairment illustrate the nature of the artifacts and help train listener expertise. Since its initial release, several generations of the CD-ROM have been sold and used worldwide for public education.
This workshop presents the results of the TC's efforts to produce a second-generation educational package covering new artifact types typically encountered with advanced audio codec processing, such as bandwidth extension or parametric stereo. Moreover, the material's format has been enhanced for seamless display and playback on PCs, tablets, and mobile phones, and now includes interactive graphic elements. This makes it an attractive educational package, now available as an AES publication.
Since the advent of modern line arrays, it has been common practice to fly the full-range sources of a main live sound reproduction system. Subwoofers, by contrast, have remained ground-stacked, primarily for practical reasons: their weight and the lack of captive rigging elements. Modern subwoofer designs, however, have partly alleviated these constraints.
This workshop compares ground-stacked and flown subwoofers in relation to the audience experience: level monitoring of low frequencies, health and safety measures related to exposure to high levels of low-frequency sound, tonal balance and level distribution, subwoofer/main-system time alignment across the audience, and the acoustic influence of the audience's presence.
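The time-alignment issue above comes down to simple geometry: a flown main array and a ground-stacked subwoofer sit at different heights, so their path-length difference to a listener changes with distance, and a single alignment delay cannot be correct everywhere. The toy model below illustrates this; the hang height, sub position, and ear height are assumed example values, not figures from the workshop.

```python
# Toy geometry sketch (illustrative assumptions): arrival-time difference
# between a flown main array and a ground-stacked subwoofer versus
# listener distance, showing why one alignment delay can't fit the whole
# audience.
import math

C = 343.0        # speed of sound, m/s
EAR_HEIGHT = 1.7  # assumed standing listener ear height, m

def arrival_diff_ms(listener_x, main_pos=(0.0, 8.0), sub_pos=(0.0, 0.5)):
    """Positive result: the sub's sound arrives earlier than the mains'
    (the sub is closer), by that many milliseconds."""
    d_main = math.hypot(listener_x - main_pos[0], main_pos[1] - EAR_HEIGHT)
    d_sub = math.hypot(listener_x - sub_pos[0], sub_pos[1] - EAR_HEIGHT)
    return (d_main - d_sub) / C * 1000.0
```

Near the stage the ground sub leads the flown mains by several milliseconds, while far back the offset shrinks toward zero, so an alignment delay chosen for one row is wrong for another; flying the subs next to the mains collapses this geometric mismatch.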