There is no audible difference between 16bit and 24bit so I fail to see why you feel the need to.
Yes there is, you just have terrible monitoring or you're deaf if you can't hear it. There is definitely an empirical difference.
http://www.youtube.com/watch?v=BYTlN6wjcvQ skip to 45:49
There is no audible difference between 16bit and 24bit so I fail to see why you feel the need to.
Yes there is, you just have terrible monitoring or you're deaf if you can't hear it. There is definitely an empirical difference.
Let's see some ABX logs. You'll want to pad the 16-bit samples to 24-bit before proceeding to avoid bit depth switching delays if you're using ASIO output or KS output, of course (you knew that, right?)
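For anyone unsure what "padding" 16-bit samples to 24-bit means here: it's just appending zero low-order bits so the same sample sits in a wider word. A minimal sketch (numpy for illustration; the sample values are made up):

```python
import numpy as np

# Made-up 16-bit sample values, held in a wider integer type.
samples_16 = np.array([-32768, -1, 0, 1, 32767], dtype=np.int32)

# "Pad" to 24-bit: shift each sample into the top 16 bits of a 24-bit word.
# This changes the word size but adds no information to the signal.
samples_24 = samples_16 << 8

# Shifting back recovers the original samples bit-for-bit.
assert np.array_equal(samples_24 >> 8, samples_16)
print(samples_24)  # e.g. -32768 becomes -8388608
```

The point being made in the thread: this avoids the player switching output bit depth between tracks, but the audio itself is unchanged.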
My sound card (x-plosion) advertises this feature so I was curious to try it, and could not figure it out... and
The feature is only useful if you're playing back 24-bit audio. Otherwise, it is purposeless.
I was wondering if the increased sound depth would help with DTS real time encoding in games
No.
http://www.youtube.com/watch?v=BYTlN6wjcvQ skip to 45:49
The feature is only useful if you're playing back 24-bit audio. Otherwise, it is purposeless.
2) I was wondering if the increased sound depth would help with DTS real time encoding in games
Well there are two serious problems with that test:
1) Gradual changes are less noticeable than immediate changes. What I mean is, I can stick you in a room and gradually lower the light level by 50%, and you won't notice I'm doing it, you'll just have a harder time seeing. However, if I suddenly halve the light level, you'll have no problem noticing. So the fact that you don't notice a gradual change right away doesn't prove you can't notice a change at all.
2) Youtube compresses all audio, quite a lot, so you aren't going to get a realistic representation of the sound. As a practical matter, its encoder most likely works with 16-bit samples anyhow, so anything below that will be gone no matter what the original bit depth.
+ other stuff
I'm mostly just warning the guy who originally posted it against putting too much stock in watching it. I suspect, like many people, he watches a video on Youtube and feels like that is the final argument.
As a practical matter, properly testing 16-bit vs 24-bit would be really difficult because the difference is going to be fairly small, and the question for any sort of test is always going to be "In what context?" I mean, I can easily make a contrived test where there's a big difference, simply by putting a test tone below the 16-bit noise floor and nothing else. Ok, well, that doesn't tell you anything useful. Likewise, I can easily set up a test where the equipment is such that there'd be no way to hear a difference between the two regardless.
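To make that contrived-test point concrete, here's a rough sketch with made-up parameters (a 1 kHz sine at -100 dBFS, quantized by plain round-to-nearest with no dither): a tone below the 16-bit floor simply vanishes, while the 24-bit grid still resolves it.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
# 1 kHz sine at -100 dBFS -- below the ~96 dB range of 16-bit audio.
tone = 10 ** (-100 / 20) * np.sin(2 * np.pi * 1000 * t)

q16 = np.round(tone * 32767) / 32767      # snap to the 16-bit grid
q24 = np.round(tone * 8388607) / 8388607  # snap to the 24-bit grid

print(np.max(np.abs(q16)))  # 0.0 -- the tone is smaller than one 16-bit step and vanishes
print(np.max(np.abs(q24)))  # nonzero -- the 24-bit grid still resolves it
```

Which is exactly why the test is contrived: it proves 24-bit can carry signals 16-bit can't, not that anyone would hear them in real program material.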
Overall, I think it is more of an insurance thing than a real necessity. As everything goes 24-bit, we just have more dynamic range than we will ever need: more than a human can hear, more than electronics can reproduce, etc. That way we ensure our sampling isn't the problem.
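The arithmetic behind those dynamic-range figures: each bit of word length buys roughly 6 dB, which is where the usual "96 dB for 16-bit" and "144 dB for 24-bit" numbers come from. A one-liner sketch:

```python
import math

def dynamic_range_db(bits: int) -> float:
    # Each bit doubles the number of levels: 20*log10(2) ~= 6.02 dB per bit.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3
print(round(dynamic_range_db(24), 1))  # 144.5
```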
Not entirely. You actually want converters to be bigger than your target output. Reason is that, as with everything, they aren't perfect. The best 16-bit converters in the world don't get the 96dB SNR they should, and lesser quality ones aren't even close.
A fair point, but I still see no particular benefit. Padding the words won't help the converter(s) squeeze anything meaningful from the input signal itself, which is pretty much guaranteed to have less than 90 dB of dynamic range anyway. Only if you're on that absolute threshold will there be any potential improvement.
http://www.youtube.com/watch?v=BYTlN6wjcvQ skip to 45:49
Let's see some ABX logs. You'll want to pad the 16-bit samples to 24-bit before proceeding to avoid bit depth switching delays if you're using ASIO output or KS output, of course (you knew that, right?)
A fair point, but I still see no particular benefit. Padding the words won't help the converter(s) squeeze anything meaningful from the input signal itself, which is pretty much guaranteed to have less than 90 dB of dynamic range anyway. Only if you're on that absolute threshold will there be any potential improvement.
I'd bet you anything that if you were to take any post-1990 24-bit DAC and do double-blind tests against a 16-bit input versus a 24-bit input (with padded 16-bit data), you'd have no luck differentiating between the two at very high listening levels (beyond 110 dB SPL), let alone moderate listening levels.
Seriously? over 4 posts a day for 5.7 years and I'm supposed to take you seriously? Go outside and "pad" your life with real work in any professional field, then maybe you'll be more convincing. I will not respond to any more of your bullshit.
How are you so certain that he doesn't work in a professional field?
My past four years of education in recording engineering, two Digidesign-issued certifications (continual), prior employment with two Burbank post-production houses, as well as my freelance experience in dialog editing and general audio editing, have no bearing whatsoever on this discussion.
If you wish to address certain points I've made, please feel free. My credentials, however, are not open for any further discussion.
A fair point, but I still see no particular benefit. Padding the words won't help the converter(s) squeeze anything meaningful from the input signal itself, which is pretty much guaranteed to have less than 90 dB of dynamic range anyway. Only if you're on that absolute threshold will there be any potential improvement.
That is what people who can't get real jobs in sound do.
Heh. Interesting.
Well, as that applies to digital audio, that means having something right near the lower bound of the sample is problematic. If you've got only 1-bit all you can make are square waves. You can dither, of course, which will allow detail below the noise floor, but raise said floor.
It's the fundamental shortcoming of linear PCM, yeah. The audible 'realization' of that, however, in actual use, is pretty negligible, particularly with some of the psychoacoustic dither techniques we can employ these days. With simple unshaped RPDF or TPDF dither, distortion is potentially audible on the head or tail of a fade where few effective bits are utilized. Being able to A/B them, however, will generally demand above-average listening levels (as well as critical listening). With more elaborate dither, the difference between a 24-bit reference fade and its 16-bit counterpart is very minimal, demanding very high listening levels to discern any differences.
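The dither trade-off described here can be sketched numerically (made-up parameters: a 1 kHz tone at 0.4 LSB, i.e. below one 16-bit step, quantized by round-to-nearest): without dither the tone rounds to silence; with TPDF dither it survives, buried in the raised noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
lsb = 1 / 32767                                  # one 16-bit quantization step
tone = 0.4 * lsb * np.sin(2 * np.pi * 1000 * t)  # sub-LSB 1 kHz tone

# Without dither: every sample rounds to zero, the tone is erased.
plain = np.round(tone / lsb) * lsb

# TPDF dither: difference of two uniform sources gives triangular-PDF
# noise spanning +/-1 LSB, added before rounding.
tpdf = (rng.random(fs) - rng.random(fs)) * lsb
dithered = np.round((tone + tpdf) / lsb) * lsb

# Correlating with a reference sine recovers the tone from under the noise.
ref = np.sin(2 * np.pi * 1000 * t)
print(np.dot(plain, ref))              # 0.0 -- nothing left to correlate with
print(abs(np.dot(dithered, ref)) > 0)  # True -- the sub-LSB tone is still there
```

This is the "detail below the noise floor, but a raised floor" point in a nutshell.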
However I do see the overall benefit of having samples that are larger than we ever need.
Yeah, don't get me wrong -- I'm probably more 'pro-24-bit' than you are, but the most meaningful advantages are in the production domain as opposed to the reproduction domain. Probably 95% of all the various recordings I've ever done have been recorded at 24-bit.