Meeting Report: Careers in Audio Panel

Meeting Topic: Careers in Audio: What’s your Connection?

Moderator Name: Jay Dill & Tim Hsu

Speaker Name: Alan Alford (Indianapolis Symphony Orchestra), Elizabeth Alford (Jonas Productions and freelance), Luke Molloy (Diversified), Clem Tiggs (freelance), Gavin Haverstick (Haverstick Designs)

Meeting Location: virtual (YouTube)


This meeting brought together a dynamic group of panelists to discuss different career paths and roles within the audio industry, moderated by Jay Dill and Dr. Tim Hsu. The meeting began with short introductions of each panelist and a description of their roles within the larger audio industry, including job duties and responsibilities.

Alan Alford of the Indianapolis Symphony began by detailing his job as an IATSE stage hand and audio engineer for both indoor concerts at the Hilbert Circle Theatre and outdoors at the symphony’s summer home at Conner Prairie. Alan briefly detailed his non-audio duties, but focused on his work providing live sound reinforcement as well as advancing audio for guest artists and providing support for audio recordings.
Elizabeth Alford then described her varied roles at Jonas Productions, beginning in their backline rental operation and now focusing on wireless technology and RF coordination. Her shift to RF coordination has put her in roles managing everything from small wireless packages to massive shows with 80+ channels of wireless microphones and in-ear monitors. Elizabeth also ardently emphasized the increasing need for knowledge of networking and general IT infrastructure in the modern production environment.

Luke Molloy discussed his role as an audio-video system designer and drafting engineer. His work focuses not only on meeting the needs of a given system design and installation, but on configuring systems to fit within the physical and practical confines of the installation environment. Luke pointed out that his background in both audio and video dovetailed nicely with his engineering background to prepare him for this work.
Clem Tiggs described his work as an A2 on major film and television productions, describing the role as the “get it done” person, responsible for everything from placing mics to ensuring signal flow back to the A1 at the mix position. A strong technical background, troubleshooting ability, and a positive rapport with clients and talent were also cited as necessary skills for the A2 role. Clem also highlighted the differences between freelance work and a traditional salaried job, with freedom of choice being a major upside, but with the caveat of requiring discipline and a strong independent work ethic.

The final panelist, Gavin Haverstick, presented his work as an acoustical consultant with a particular reputation for high-quality recording studio design. The marriage of a musical background and an engineering education served to propel Gavin toward a focus on musical designs and applications, where his consultancy specializes in recording studios, performance spaces, and multi-purpose auditoriums.

Following individual introductions, the panel took questions from the attendees addressing a variety of related topics.

Written By: Brett Leonard

Meeting Report: Introduction to Immersive Mixing: Atmos & Beyond

Meeting Topic: Introduction to Immersive Mixing: Atmos & Beyond

Moderator Name: Brett Leonard

Speaker Name: George Massenburg, McGill University and Massenburg Design Works

Meeting Location: virtual (YouTube)


Engineer and innovator George Massenburg joined the Central Indiana section for a discussion of immersive audio mixing, highlighting his recent experience with remixing major popular music artists in Dolby Atmos, as well as his experience with current and upcoming consumer delivery methods for immersive content.
George began the presentation with a brief history and overview of immersive audio, stretching back to stereo and early surround. The early use of binaural transmission in the 1881 Paris Opera telephone broadcasts was highlighted as a truly early, though often overlooked, form of immersive audio. Further developments presented ranged from Alan Blumlein’s stereo innovations to quadraphonic recordings, Todd-AO surround, Dolby Stereo, DTS Surround, and other such formats. DTS Music Disc, DVD-Audio, and SACD were also highlighted as previous music-specific immersive formats. George was careful to highlight not only the successes and innovations of many of these technologies and formats, but also to acknowledge some of the commercial shortcomings of earlier forays into immersive music.

Following the historical context of immersive audio, George moved into the realm of Dolby Atmos. Discussion began with the basic components of an Atmos mix: bed tracks and objects. George discussed the use of 7.1.2 (7.1 with two height channels) and 7.1.4 (7.1 with four height channels) channel-based bed tracks as the foundation for a mix, with the remaining 110+ channels reserved for objects that can be placed and manipulated outside of these defined speaker locations. George carefully defined the sometimes-nebulous term “object” in reference to Atmos, including their encoding in mixing and decoding during playback. George also provided a glimpse into his signal flow/studio setup for immersive mixing, with dedicated playback/mixing and capture/render computers and multiple monitoring formats and devices, including both professional monitor loudspeakers and consumer devices for immersive playback.
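As a rough illustration of the bed-versus-object distinction, an object can be modeled as mono audio plus a position that is rendered to whatever speakers exist at playback time. The speaker layout and the inverse-distance panning law below are illustrative assumptions for this sketch, not Dolby's actual renderer:

```python
import math

# Toy object renderer: an Atmos-style "object" is mono audio plus a 3-D
# position; at playback it is rendered to the speakers actually present.
# This four-speaker layout and the panning law are assumptions.

SPEAKERS = {                     # (x, y, z) positions, listener at origin
    "L":   (-1.0, 1.0, 0.0),
    "R":   ( 1.0, 1.0, 0.0),
    "Ltf": (-1.0, 0.5, 1.0),     # left top front (height channel)
    "Rtf": ( 1.0, 0.5, 1.0),     # right top front (height channel)
}

def render_object(position, power=1.0):
    """Distribute an object's power across speakers by inverse distance,
    normalized so the per-speaker gains sum to the object's total power."""
    weights = {}
    for name, spk in SPEAKERS.items():
        d = math.dist(position, spk)
        weights[name] = 1.0 / max(d, 1e-6)   # closer speaker, more signal
    total = sum(weights.values())
    return {name: power * w / total for name, w in weights.items()}
```

Rendering the same object to a different speaker map is just a matter of swapping the layout dictionary, which is the practical appeal of object-based delivery.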

George then asked for questions from viewers online, received via YouTube chat, text, etc. This garnered an incredible range of immersive audio sub-topics, including the differences between mixing/re-mixing for immersive delivery and capturing immersive content at the recording stage. The difficulties of re-mixing content that already exists in an artist-approved stereo iteration were also discussed. George was careful to note that one of the first steps in an immersive remix is often to recreate the existing stereo mix, then branch out from the sounds, feelings, and intentions existing in that format. Questions regarding consumer delivery were also addressed, covering topics such as single-point immersive systems (e.g. sound bars, wireless home devices, etc.), binaural renderings, MPEG-H encoding, and mobile audio.

George both started and ended the evening on an uplifting note, emphasizing the fact that immersive audio opens up a world of opportunities for increased artistry. Our goals as immersive content creators should be to provide a truly special and authentic experience for artists and listeners alike.

Written By: Brett Leonard

Meeting Report: Automatic Mic Mixing

Meeting Topic: Automatic Microphone Mixing: How and Why?

Moderator Name: Jay Dill and Nate Sparks

Speaker Name: Michael Pettersen and Gino Sigismondi, Shure

Other business or activities at the meeting: General welcome, introduction to the section and section’s website/social media, and information on joining the AES for non-members.

Meeting Location: Online (YouTube stream with Q&A)


Moderators Jay Dill and Nate Sparks joined Shure’s Michael Pettersen and Gino Sigismondi for the Central Indiana Section’s inaugural webcast to discuss the history and current state of automatic microphone mixing. The presentation began with an in-depth overview of the history of automatic mixing, dating back to the original concept brought forward by famed theatre sound designer Dan Dugan. Dugan’s initial concept allowed a theatre mixer to offload the task of muting and unmuting (or fader riding) multiple microphones as actors delivered lines and entered or left the stage. This functionality helped optimize gain before feedback, prevented comb filtering, and reduced buildup of background noise and reverberation.

Shure entered the automixing market in the early 1970s with the Voicegate, a speech-centric gating system. By the mid-70s, advancements allowed for variable threshold operation, as well as gain sharing, a scheme that maintains a constant summed gain across all open channels as channels are added or subtracted, thereby creating a more stable system. Further advances heralded a dual-element microphone with a secondary, rear-facing capsule providing a differential signal to ensure that only on-axis input triggered unmuting, and system linking to allow for more channels.
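The gain-sharing idea can be sketched in a few lines of Python. This is an illustrative model of the principle, not Shure's actual implementation; the power-based sharing law and the -60 dB floor are assumptions:

```python
import math

def gain_sharing(levels, floor_db=-60.0):
    """Dugan-style gain sharing: each channel's gain is its share of the
    total input power, so the summed (linear) gain across all open
    channels stays constant regardless of how many mics are active."""
    total = sum(levels)
    if total <= 0:
        return [floor_db] * len(levels)   # nothing active: hold all down
    gains_db = []
    for lvl in levels:
        share = lvl / total               # this channel's fraction of total
        db = 10.0 * math.log10(share) if share > 0 else floor_db
        gains_db.append(max(db, floor_db))
    return gains_db
```

With one active mic the channel sits at 0 dB; with two equally loud mics each is attenuated about 3 dB, so the mix level (and gain before feedback) stays put as talkers come and go.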

The next wave of development included adaptation to ambient noise and the ability to work with non-proprietary microphones. This work grew into the famed FP410, which included MaxBus, ensuring that only the loudest microphone capturing a single source would open; a mechanism keeping the last microphone used open; and “off-attenuation,” which applied approximately 15 dB of gain reduction rather than fully muting sources. These technologies eventually rolled into the systems we know as IntelliMix.

As the world of audio migrated away from analog processing, IntelliMix went digital. While the aims of automixing remain the same, the processing tasks of signal detection, channel priority, gain sharing, etc. have been merged into DSP-based systems in both hardware and software. Current automixing offerings retain this functionality, but also allow for configuration of all aspects of the system via a browser-based GUI. Traditional functionality can also be coupled with additional audio enhancement processing and digital I/O for maximum flexibility.

The presentation was facilitated by Force Technology Solutions’ live streaming studio, allowing broadcast-style graphics and switching, off-site production, and remote presentation from across the Midwest. The lecture can be viewed on the Central Indiana Section’s YouTube channel.

Written By: Brett Leonard

Meeting Report: ReverBall and Music Facility Tour at IUPUI

Central Indiana Section Meeting Report

ReverBall! – A Tour of the Music Technology Facilities and Open House at the Eskenazi Fine Arts Center at IUPUI (Indiana University-Purdue University, Indianapolis)

This meeting featured a tour of the classroom, recording, and lab facilities of the Music Technology Program on the IUPUI campus. It also included an open house-type event (ReverBall), hosted by the Herron School of Art + Design and the Department of Music Technology.

Dr. Hsu took the group through various spaces used by the Music Technology Program, including:

  • A control room and adjacent tracking studio.
  • A music rehearsal room.
  • An acoustics laboratory where experimental work was being done with impedance tubes and various acoustic panels of different sustainable materials.
  • The Tavel Center for Arts Technology, where interactive/distance learning with local and remote students takes place alongside current research in music technology.
  • A newly renovated piano lab used for keyboard and MIDI controller classes.

The Music Technology program at IUPUI resides in the Purdue School of Engineering and Technology. It offers a Bachelor of Science, Master of Science, and Ph.D. in Music Technology, as well as Bachelor of Science and Master of Science degrees in Music Therapy. Research in the department spans audio, live performance technologies, acoustics, health, music therapy, and digital and acoustic instrument development.

The Open House event included several ensembles performing music, in some cases with homemade instruments or modified regular instruments and synthesizers. Mixed media performances included world premieres of works by both faculty and students.

The meeting was hosted by Dr. Timothy Hsu, faculty member in the Music Technology Program at IUPUI.

Meeting Report: An Evening with John Cooper

Central Indiana – July 18, 2019

Meeting Topic: An Evening with John Cooper

Moderator Name: Michael Petrucci

Speaker Name: John Cooper, freelance FOH mixer for Bruce Springsteen and other noted artists

Other business or activities at the meeting: It was announced that Section elections will commence at this time. Nominations are open and should be submitted to the Secretary. Voting will be done electronically (via special website). Results are expected on/about August 20, 2019.

Meeting Location: ESCO Communications, Indianapolis, IN


This evening’s guest presenter has been an FOH engineer/mixer for Bruce Springsteen since 2001. He has also worked as FOH engineer/mixer with numerous other artists, including John Mayer, Sheryl Crow, Keith Urban, and Lionel Richie.

John talked about his approach and experience in mixing for major, live music performances. Some of the things he highlighted included: 
• Understanding and maintaining the proper gain structure. 
• A result that sounds good/acceptable, not something that reads ideally on a meter. 
• Having appropriate backup equipment and a strategy to deploy, when necessary. 
• Be cautious of level limits with digital consoles. Some people are using analog matrices to do certain mixes in order to work around these issues. 
• Use of delays to achieve some stereo effects from a mono source. 
• Pro Tools can provide a virtual sound check. 
• The teleprompter is a key element in a road show of this scale; everyone uses it to know where they are in the show. There could be as many as 20 displays. Relatedly, many shows are automated. 
• The entire stage is on UPS. 
• There is a definite difference in energy level between an afternoon rehearsal and an evening performance. 
• Bass guitar balance (with the rest of the band/orchestra) is a very important consideration. 
• Front fills are important, especially for the performer to be understood. Balance can be tricky and important. 
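One technique from the list above, deriving stereo width from a mono source with a short delay, leans on the precedence (Haas) effect: delays under roughly 30 ms read as width rather than as an echo. A minimal sketch, with the 15 ms default chosen purely for illustration:

```python
def mono_to_stereo(samples, delay_ms=15.0, sample_rate=48000):
    """Widen a mono source by delaying one channel slightly (Haas /
    precedence effect). Returns (left, right) sample lists of equal
    length, with the right channel lagging by delay_ms."""
    delay = int(sample_rate * delay_ms / 1000.0)
    left = list(samples) + [0.0] * delay       # pad tail to equal length
    right = [0.0] * delay + list(samples)      # right channel lags
    return left, right
```

In practice engineers would also watch the mono sum of such a widened signal, since summing the two channels back together reintroduces comb filtering.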
In regular business, biennial elections for the Section were announced and the process will commence promptly. The value of AES membership was highlighted, including product discounts (Apple, Dell, Sound Particles and Focal Press) plus career resources (profiles, forums, and job board postings from sustaining member companies).

Written By: Barrie Zimmerman, Secretary