I’m working my way through House of Leaves right now, and the real horror is the grad school flashbacks from trying to follow the footnotes.
My TV is insulting like that. It technically has an EQ, but it makes no perceptible difference no matter what I do with it.
What the hell!
But assuming it worked, wouldn’t doing that strictly with sound frequencies cause issues? Like, okay, most voices are louder because I boosted their frequency range, but now that one dude with a super low voice is quieter, plus any music in the show is distorted. Or something like that.
Not necessarily. Regardless of vocal range, around 400hz-2000hz makes up the body of what you hear in human speech, or the notes of instruments carrying a melody. Below that, say, 160-315hz is going to be the “warmth” and “fullness” of the sound, while 2.5khz-8khz is going to be the enunciation and clarity (think ch-sounds, ess-es, tee-s, etc).
Sure, if you start really going hard on an EQ, you could absolutely throw everything out of balance — if you cut out 12db at 250hz, all the warmth will be gone and everything will sound thin. If you scoop a bunch of 400hz-1.6khz, it will sound like a walkie-talkie, and if you make a large boost around 3khz-8khz, then everything will probably sound harsh and scratchy.
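If you want to see what a move like that does in concrete terms, here’s a rough Python sketch of a single peaking-EQ band (the standard RBJ cookbook biquad) applying that -12db cut at 250hz to a made-up test signal. The sample rate, the test tones, and the `peaking_eq` helper are just my own illustrative choices, not anything your TV actually runs:

```python
# One peaking-EQ band (RBJ "Audio EQ Cookbook" biquad), just to show what a
# drastic cut like -12 dB at 250 Hz does to a signal. Assumes numpy and scipy.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Return (b, a) biquad coefficients for one peaking EQ band."""
    A = 10 ** (gain_db / 40.0)               # amplitude term from the dB gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]                 # normalize so a[0] == 1

fs = 48_000                                   # sample rate
t = np.arange(fs) / fs                        # one second of time
# stand-in "program material": a 250 Hz tone (warmth) plus a 3 kHz tone (clarity)
x = 0.5 * np.sin(2 * np.pi * 250 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

b, a = peaking_eq(fs, f0=250, gain_db=-12)    # the "all the warmth is gone" cut
y = lfilter(b, a, x)

print("RMS before:", np.sqrt(np.mean(x ** 2)))
print("RMS after: ", np.sqrt(np.mean(y ** 2)))  # lower, because the 250 Hz content dropped
```

A real EQ is basically a handful of these bands chained together at different center frequencies, which is why a couple of heavy-handed moves can reshape the whole balance.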
This is where the listening environment becomes important to consider. Do you live near a busy highway or do you have a loud air conditioner? You don’t need to answer these questions in public, but those kinds of ambient sounds can compete with the enunciation frequencies, or add to the buildup of “mud” in the lower part of the spectrum.
The size, shape, material properties etc. of your room and furniture also play a role here. For example, a bunch of bare walls and hard surfaces will cause a lot of the high frequencies to bounce around, potentially causing a buildup of harshness. This is why recording studios and your high school band hall probably have those oddly-shaped, cloth-covered wall “decorations” that serve to neutralize the cavernous sound you’d get in a large, bare room.
Overall, compensating for the environment is where you should probably aim your EQ. That is, even if source material varies wildly, it’s probably best to try to EQ to the room you’re in rather than each individual program.
The way to do it is to find a song you know by heart and know exactly how it should sound at its best (there are a few that, to me, sound great in my car and on my favorite pair of headphones, so I use those), and play that through your TV. Then, fiddle with the EQ until it’s as close to the ideal sound in your head as you can get it.
I would bet there is one mix created in surround sound (7.1 or Dolby Atmos or whatever), and then the end-user hardware does the down-mixing part, i.e. from Atmos with ~20 speakers to a pair of AirPods.
In the music world, we usually make stereo mixes. Even though the software that I use has a button to downmix the stereo output to mono, I only print stereo files.
It’s definitely good practice to listen to the mix in mono for technical reasons and also because you just never know who’s going to be listening on what device—the ultimate goal being to make it sound as good as possible in as many listening environments as possible. Ironically, switching the output to mono is a great way to check for balance between instruments (including the vocals) in a stereo mix.
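For anyone curious what the mono check actually boils down to: it’s basically just averaging the left and right channels and seeing what shifts. Here’s a rough numpy sketch, with made-up “vocal” and “guitar” tones standing in for a real mix (the fold-down-by-averaging is the simplest possible version, not any particular player’s algorithm):

```python
# Naive mono fold-down check: average L and R and compare levels. Assumes numpy;
# the "vocal" and "guitar" signals are placeholders, not real program material.
import numpy as np

def mono_fold_down(stereo):
    """Average L and R into one channel: the simplest mono downmix."""
    return stereo.mean(axis=1)

def rms_db(x):
    """RMS level in dB for quick comparisons."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

fs = 48_000
t = np.arange(fs) / fs
vocal = 0.3 * np.sin(2 * np.pi * 440 * t)     # panned dead center
guitar = 0.3 * np.sin(2 * np.pi * 196 * t)    # hard-panned left
stereo = np.stack([vocal + guitar, vocal], axis=1)   # columns are L, R

mono = mono_fold_down(stereo)
print("L RMS:", round(rms_db(stereo[:, 0]), 1), "R RMS:", round(rms_db(stereo[:, 1]), 1))
print("mono RMS:", round(rms_db(mono), 1))
# anything hard-panned drops about 6 dB relative to centered material in the
# fold-down, which is exactly the kind of balance shift you're listening for
```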
At any rate, I think the problem of dynamics control—and for that matter, equalization—for fine-tuning the listening experience at home is going to vary wildly from place to place and setup to setup. Therefore, the hypothetical regulations should help consumers help themselves by requiring compression and EQ controls on consumer devices!
Side tip: if your TV or home theater box has an equalizer, try cutting around 200-250hz and bring the overall volume up a tad to reduce the muddiness of vocals/dialogue. You could also try boosting around 2khz, but as a sound engineer primarily dealing with live performances, I tend to cut more often than I boost.
Audio compression is much older than 20 years! Though you’re probably right about it becoming available on consumer A/V devices more recently.
And you’re definitely correct that “pre-applying” compression and generally overdoing it will fuck up the sound for too many people.
The dynamic ranges that are possible (and arguably desirable) to achieve in a movie theater are much greater than what one could (or would even want to) achieve from some crappy TV speakers or cheap ear buds.
From what I understand, mastering for film is going to aim for the greatest dynamic range possible, because it’s always theoretically possible to narrow the range after the fact but not really vice-versa.
I think the direction to go with OP’s suggested regulation would be to require all consumer TV sets and home theater boxes to have a built-in compressor that can be accessed and adjusted by the user. This would probably entail allowing the user to blow their speakers if they set it incorrectly, but in careful hands, it could solve OP’s problem.
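To make that concrete, the kind of compressor I mean is really just a few knobs: threshold, ratio, attack, and release. Here’s a bare-bones (and deliberately naive) Python sketch of one, only to show what those controls do; the parameter values are my own guesses and none of this is how any particular TV implements it:

```python
# Bare-bones feedforward compressor sketch: a peak-style envelope follower plus
# gain reduction above a threshold. Assumes numpy; not production code.
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0, attack_ms=5.0, release_ms=100.0):
    # one-pole smoothing coefficients for the level detector
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))

    env = 0.0
    out = np.empty_like(x)
    for i, sample in enumerate(x):
        level = abs(sample)
        # envelope follower: reacts quickly when the signal rises, slowly when it falls
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level

        level_db = 20 * np.log10(env + 1e-12)
        over_db = level_db - threshold_db
        # apply gain reduction only above the threshold, scaled by the ratio
        gr_db = over_db * (1.0 / ratio - 1.0) if over_db > 0 else 0.0
        out[i] = sample * 10 ** (gr_db / 20.0)
    return out

# a quiet passage followed by a loud one gets squeezed closer together
fs = 48_000
t = np.arange(fs) / fs
x = np.concatenate([0.05 * np.sin(2 * np.pi * 440 * t),
                    0.8 * np.sin(2 * np.pi * 440 * t)])
y = compress(x, fs)
print("peak before:", float(x.max()), "peak after:", round(float(y.max()), 3))
```

Expose the threshold and ratio to the user (plus a makeup-gain knob, which is also where the blown-speaker risk comes in) and you’ve got the sort of control I’m describing.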
That said, my limited experience in this world is exclusive to mixing and mastering music and not film, so grain of salt and all that.
I have to back into a parking spot in a shitty, shared driveway. If I don’t throw my (automatic transmission) car into neutral and coast into place, my car will decide I’m too close to the curb and just slam the fuck out of the brakes while still several feet away from where I intend to be. It sounds awful and it scared the absolute shit out of me several times before I internalized the workaround.
Good thing I’m not a fan of the backup camera in general, or this problem would be even more irritating, since the camera turns off when I go from reverse to neutral.
I started on a small instance that fortunately gave a heads up when they decided to shut down. When I moved to a second, small instance where I ported all my community subscriptions, it shut down with no warning. It’s a shame, because both instances were topically-focused and small enough to avoid defederation drama.
I love the idea of decentralized infrastructure, but now I’m on .world because I just don’t have the time or willpower to move every few months, and I definitely don’t have the wherewithal to run my own instance.
OneDrive decided to kick on after an overnight update and uploaded some projects and VST plugins to the cloud. Apparently, the files weren’t accessible except via the cloud, so I lost a few hours re-downloading my folders before I could do anything. I don’t know if I’ve ever been more furious over technology that I theoretically owned.
I got a PC in order to eventually go back to Linux, where at least I know that when something goes wrong, it’s generally my own fault and somewhat easy to troubleshoot. Unfortunately, the plugins I’ve been using only have Windows and Mac versions. If I had done a bit more research, I probably would have just gone with an Apple device.
That’s not Shrek, that’s Marshall Applewhite
The Dollars Trilogy is a great recommendation, and I think your analysis is spot on! The cinematography in the second and third installments is incredible.
I like to follow up with Tarantino’s Hateful Eight (my personal favorite film to recommend, especially as a Christmas movie in place of Die Hard) to see how hugely influential the Trilogy was.
Folks in Mississippi passed an initiative for a fairly lax medical law in 2020. Some Karen mayor of one of the suburbs around the capital city used judicial chicanery to get it thrown out at the State Supreme Court, along with the ability of the populace to vote on ballot measures going forward.
I doubt that OP was debating you in good faith, but it did happen at least once in the last few years. The Republicans certainly didn’t waste the opportunity to minimize the effects of democracy on their power.