Streaming networks have helped lead the industry into new territory, setting many of the standards and best practices that engineering teams use to produce and deliver compelling immersive content. In this fourth edition, sound engineering leaders from Amazon Prime, Disney+ and Netflix will share a three-hour panel to help engineers across the world make sense of the workflows and deliverables commonly requested across the industry. The panel will be moderated by AES Past-President and immersive audio designer Andres Mayo.
Ellis Burman of Burman Sound and Roundabout Entertainment covers the ins and outs of mixing for film, and shares his approach to maximizing control surface workflows to manage high track counts and focus on artistic objectives. Ellis will walk us through his mix templates and demonstrate how the control surface is much more than a $100,000 mouse!
Dave Stagl explained the concept of Dolby Atmos compared to stereo and surround sound. Here are a few points that he touched on:
He answered questions, including ones about his monitor/speaker configuration, Dolby recommendations and other compatible gear and software requirements.
He did a quick demonstration of the Dolby Atmos Renderer.
Frank began by providing some background on Dante. Dante as a platform includes both hardware components and software tools for control and interfacing, all integrated with a single management layer. This has recently expanded to include video integration in Dante AV, all managed from familiar Dante software. Dante AV offers a range of options, from lossless, low-latency video with multichannel audio, through intermediate latency and video quality, to high-latency, lossy video.
Frank then moved to the evolution of Audio over IP (AoIP) systems. Early systems employed local, isolated networks within a single room or facility, with all equipment interfacing through a single network switch. Alternatively, a virtual LAN could be dedicated to AoIP use. As the network world expanded, IT departments became increasingly involved in these networks, bringing a focus on network bandwidth requirements, larger coverage areas, Layer 3 routing, and security considerations.
To appreciate the complexities of modern systems, Frank took a dive into network architecture. The traditional Layer 3 network includes core, distribution, and access layers, whereas newer network architectures often rely on a “spine and leaf” structure. Network security requirements have likewise expanded from a “defense in depth” paradigm to modern “zero trust” systems in which every device interconnection must be configured. This, combined with increased network segmentation for differing uses, means that today’s AoIP network is a significant departure from the typical audio-visual professional’s focus on physical interconnection!
Dante Domain Manager (DDM) serves as a hub to simplify and integrate the varying hardware and software components in a modern network. Frank explained that DDM allows for device management and network segregation, as well as tiered access for users, monitoring and logging, and multi-subnet support, all within a single interface. This is particularly useful when using DDM to serve as a management layer for advanced routing between networks. In essence, DDM is a network engineer’s solution to AoIP. DDM can create customized dashboards, network monitoring, or GUIs, allowing for a streamlined user experience while maintaining security and network integrity.
Frank then dove into the future of AoIP and Dante. Dante Connect can transmit uncompressed audio through a cloud-based system to provide in-sync, lossless audio to remote users. Using Dante Gateway, low-latency local networks can be bridged with high-latency cloud distribution. Further developments leverage WebRTC to allow remote contribution of lossy audio as well. Frank wrapped up by taking questions from the attendees.
In this talk, Rupert Brun will explain what audio objects are and why we need them. His award-winning Wimbledon ‘NetMix’ experiment from 2011 led directly to a focus on accessibility as the key feature for Next Generation Audio. Rupert will cover the ways in which audio objects can be created for both live and post-produced content, and how they can be distributed to the consumer, calling on his experience delivering audio object content for a wide range of programmes, including Eurovision and the European Athletics Championship. His talk will be based on his personal experience with MPEG‑H, but other standards will be outlined including the Audio Definition Model (ADM) and the emerging serial version SADM.
Rupert has been a member of the AES for many decades and was previously a member of the UK Committee, with responsibility for sustaining members. He is also a member of the IET Media Executive Committee and a STEM Ambassador. He helps to run the Radio Technology Conference in the UK each year. Rupert spent 35 years with the BBC in a variety of roles including Senior Engineer, Maida Vale and Senior Engineer Radiophonic Workshop. He spent his last decade at the BBC as Head of Technology for Radio & Music TV, with responsibility for all of the technology across those areas including multi-million pound technology projects. Rupert left the BBC to set up Brun Audio Consulting Ltd in March 2015. His clients include broadcasters, technology companies, manufacturers, and systems integrators. He continues to work with Fraunhofer IIS to promote MPEG‑H Next Generation Audio for TV, with a focus on personalisation, especially accessibility.
Mastering Engineer Jeff Powell has an incredible vinyl discography. Jeff will lead a step-by-step journey through the vinyl mastering process, review the basics for mastering to vinyl, then walk through a case study of a Made In Memphis STAX Records re-issue series cut from the original analog master tapes, covering both challenges and “buttercuts” seemingly made for vinyl.
Richard Heyser was fascinated by the link between time and frequency, via the Fourier and Hilbert transforms. During his lifetime, performing the Fourier transform was computationally demanding, and he developed novel techniques to allow us to measure the time and frequency characteristics of audio systems and studios. Nowadays, it is simple to perform Fourier transforms in real time on readily available computer hardware. Much of our processing of signals in the audio chain is now done via the Fourier transform, and versions of it form the basis of the audio coding systems we use for delivery.
We also know that the inner ear of a human listener converts a time domain acoustic signal into a frequency-based representation before it encodes it into neural impulses. However, the inner ear’s frequency representation is different to that obtained from a Fourier transform.
The talk, which should be accessible to people from all the different areas of endeavor within the AES, will first examine the operation of the ear, including its dynamic non-linear behavior. It will then examine the difference between the Fourier Transform and the human auditory system and highlight how they trade off time and frequency resolution differently.
We will then look at how processing in the Fourier domain can result in artifacts that can be perceived by the ear and discuss how one could mitigate these effects in Fourier-based processing systems.
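As a toy illustration of this time-frequency tradeoff (not taken from the talk; the sample rate, tone frequencies, and peak-counting heuristic below are my own illustrative choices), the sketch analyses two tones only 20 Hz apart with a short and a long Hann window. The short window localizes events in time but cannot separate the tones; the long window resolves them at the cost of time resolution:

```python
import numpy as np

fs = 8000                      # sample rate (Hz), illustrative choice
t = np.arange(fs) / fs         # one second of signal
# two equal-level tones only 20 Hz apart
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1020 * t)

def peak_count(signal, win_len):
    """Count distinct spectral peaks near 1 kHz for one windowed frame."""
    win = np.hanning(win_len)
    spec = np.abs(np.fft.rfft(signal[:win_len] * win))
    freqs = np.fft.rfftfreq(win_len, 1 / fs)
    band = spec[(freqs > 900) & (freqs < 1100)]
    # a "peak" = bin above both neighbours and above half the band maximum,
    # which ignores the Hann window's low-level sidelobes (about -31 dB)
    return sum(1 for i in range(1, len(band) - 1)
               if band[i] > band[i - 1] and band[i] > band[i + 1]
               and band[i] > 0.5 * band.max())

short = peak_count(x, 256)     # 32 ms window: ~31 Hz bins, tones merge
long_ = peak_count(x, 4096)    # 512 ms window: ~2 Hz bins, tones resolve
print(short, long_)            # prints: 1 2
```

The ear, with its non-uniform filter bank, makes this tradeoff differently across frequency, which a single fixed-window Fourier analysis cannot.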
Finally, we shall look at the unique ways the human auditory system allows us to hear the incredible complexity of the audio signal and how that might affect what we do in the future to improve audio, and perhaps move closer to some of Richard’s final words: “Perhaps more than any other discipline, audio engineering involves not only purely objective characterization but also subjective interpretations. It is the listening experience, that personal and most private sensation, which is the intended result of our labors in audio engineering. No technical measurement, however glorified with mathematics, can escape that fact.”
We have a good understanding of the music recording industry and the processes involved in recording, mixing, and editing. We have a good grasp on what goes on in live touring audio. Mastering is not a mystery and the world of TV and film is not a secret… but what about AAA games?
A world of specialist techniques, coding, and software not found anywhere else in the audio world – but also a world fiercely protected behind a wall of nondisclosure agreements. In this presentation, AES Scotland has managed to persuade some elves to knock out a couple of bricks in the wall and let us join a conversation on the other side…
Created by the Glasgow School of Art: Games Audio and Sound for the Moving Image Departments
A discussion of Loudness for Broadcast and Streaming, including a summary of AES73 and CTA 2075, and a preview of the Recommendations for Distribution Loudness of Internet Audio Streaming and On-Demand File Playback.
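As background for that discussion, the sketch below is a toy version of the gated integrated-loudness measurement in the spirit of ITU-R BS.1770. It is illustration only: the K-weighting pre-filter is deliberately omitted, so its output is not true LUFS, and the function name is my own.

```python
import numpy as np

def toy_integrated_loudness(x, fs):
    """Toy gated loudness in the spirit of ITU-R BS.1770 -- illustration
    only: the K-weighting pre-filter is omitted, so results are not LUFS."""
    block = int(0.400 * fs)                  # 400 ms measurement blocks
    hop = block // 4                         # 75% overlap between blocks
    power = np.array([np.mean(x[i:i + block] ** 2)
                      for i in range(0, len(x) - block + 1, hop)])
    lk = -0.691 + 10 * np.log10(np.maximum(power, 1e-12))
    lk = lk[lk > -70.0]                      # absolute gate at -70

    def mean_db(v):                          # power-average block loudness
        return -0.691 + 10 * np.log10(np.mean(10 ** ((v + 0.691) / 10)))

    rel_gate = mean_db(lk) - 10.0            # relative gate: 10 LU below
    return mean_db(lk[lk > rel_gate])

fs = 48000
t = np.arange(2 * fs) / fs
tone = np.sin(2 * np.pi * 997 * t)           # steady full-scale sine
L = toy_integrated_loudness(tone, fs)
print(round(L, 2))                           # ≈ -3.70 without K-weighting
```

The two-stage gating (absolute, then relative) is what keeps long silences and low-level backgrounds from dragging down the integrated figure, which is central to how broadcast and streaming loudness targets are compared.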
Compression stole the dynamic effects spotlight. In this tutorial, we’re stealing it back. Your mixes will benefit from creative applications of that ‘other’ dynamics processor: the expander/gate. While offering all the virtues of expanded dynamic range, it has the power to create a far wider variety of effects. Expanders are a tool for altering timbre, reshaping sounds, synthesizing new ones, overcoming masking, and fabricating the impossible. Parameters with names like attack, hold, release, decay, range, depth, slope, and side chain filters don’t exactly invite creative exploration. This overview of effects strategies brings structure to the sonic possibilities of expansion and gating so that you can quickly find the parameter settings that let you achieve your production goals.
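To make those parameters concrete, here is a minimal sketch of a downward expander (not from the tutorial; the threshold, ratio, and time-constant values are arbitrary illustrative choices): an envelope follower with separate attack and release times drives a gain law that attenuates the signal when its level falls below the threshold.

```python
import numpy as np

def expander(x, fs, threshold_db=-40.0, ratio=4.0,
             attack_ms=1.0, release_ms=100.0):
    """Minimal downward expander; all parameter values are illustrative."""
    # one-pole envelope follower with separate attack/release coefficients
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = a_att if s > level else a_rel
        level = coeff * level + (1 - coeff) * s
        env[i] = level
    env_db = 20 * np.log10(np.maximum(env, 1e-10))
    # below threshold, add (ratio - 1) dB of attenuation per dB of undershoot
    under = np.minimum(env_db - threshold_db, 0.0)
    gain_db = under * (ratio - 1.0)
    return x * 10 ** (gain_db / 20)

fs = 48000
t = np.arange(fs) / fs
# loud tone for the first half, tone far below threshold for the second
x = np.where(t < 0.5, 0.5, 0.001) * np.sin(2 * np.pi * 440 * t)
y = expander(x, fs)   # loud half passes untouched; quiet half is pushed down
```

Even in this stripped-down form you can hear the creative levers the tutorial describes: a long release lets decays breathe through the gate, while a steep ratio and short release chop them off, reshaping the sound's envelope rather than merely removing noise.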