Thanks for your informative reply to my previous question, I’m back with another!
As mastering engineers, we are often told our job is to make recordings translate across various systems. I believe this is something you mentioned in your MWTM.
My question is, how does one actually address this when mastering? What are the tangible factors that contribute to “translation”?
I’d like to know how you go about this.
Your insight is appreciated.
My pleasure. Yes, translation is a big part of the ME’s job. However, we have little control over how people listen to the songs we master in their own environments. Many factors can get in the way of translation: poor room acoustics or poor speaker placement when listening on loudspeakers, the non-linear frequency response of loudspeakers and headphones, and EQ added on the playback side.
My approach is to aim for balance in terms of frequency response whenever I have the ability to shape the master that way. It still has to serve the goals of the artist, producer, mixer, etc., but when I have the flexibility, I tend not to let any particular frequency range dominate another. This seems to work well in getting a record to translate across multiple listening environments. Hope this helps.
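The ear is the real tool here, but if you ever want a rough numerical sanity check on tonal balance, a few lines of Python can compare the relative energy in low, mid, and high bands. This is only an illustrative sketch, not a mastering standard: the band edges and the per-bin RMS measure are arbitrary choices I made for the example, and the numbers are relative levels, not calibrated dBFS.

```python
import numpy as np

def band_levels_db(signal, sr, edges=(20, 160, 1300, 20000)):
    """Relative RMS level (dB) of each frequency band, from the FFT magnitude.

    `edges` defines the band boundaries in Hz; three bands by default
    (low, mid, high). These edges are illustrative, not a standard.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        # Per-bin RMS within the band, converted to dB (relative scale).
        rms = np.sqrt(np.mean(spectrum[mask] ** 2)) / len(signal)
        levels.append(20 * np.log10(rms + 1e-12))
    return levels

# Quick demo on 2 seconds of synthetic white noise, which should
# measure roughly flat on this per-bin scale:
rng = np.random.default_rng(0)
sr = 44100
noise = rng.standard_normal(sr * 2)
low, mid, high = band_levels_db(noise, sr)
print(f"low {low:.1f} dB, mid {mid:.1f} dB, high {high:.1f} dB")
```

In practice the absolute numbers mean little on their own; comparing the spread between bands on your master against a reference record you trust is where a check like this can be informative.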