24 bit audio with sound card

Jhalf

Limp Gawd
Joined
Dec 16, 2007
Messages
313
How do you set the default audio resolution to 24 bit audio with an aftermarket sound card in windows 7?
 
I don't think the Windows sounds are 24-bit. You would only need to set it to 24-bit in your recording software. You might also need to set it via your soundcard's settings utility if the software doesn't control your card.
 
There is no audible difference between 16bit and 24bit so I fail to see why you feel the need to.
 
There is no audible difference between 16bit and 24bit so I fail to see why you feel the need to.

Yes there is, you just have terrible monitoring or you're deaf if you can't hear it. There is definitely an empirical difference.
 
There is no audible difference between 16bit and 24bit so I fail to see why you feel the need to.

I wanted to do it because:

1) My sound card (x-plosion) advertises this feature so I was curious to try it, and could not figure it out... and

2) I was wondering if the increased sound depth would help with DTS real time encoding in games

I fail to see the useful content provided by your response.
 
Yes there is, you just have terrible monitoring or you're deaf if you can't hear it. There is definitely an empirical difference.
Let's see some ABX logs. You'll want to pad the 16-bit samples to 24-bit before proceeding to avoid bit depth switching delays if you're using ASIO output or KS output, of course (you knew that, right?)
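For reference, that 16-to-24-bit padding is nothing exotic: it's a zero-fill of the low byte, i.e. a left shift by 8. A minimal sketch in Python (hypothetical helper name, assuming signed integer PCM samples):

```python
def pad_16_to_24(samples_16):
    # Shift each 16-bit sample up by 8 bits; the low byte is zero,
    # so the audio content is bit-identical at the original precision.
    return [s << 8 for s in samples_16]

samples = [0, 1000, -32768, 32767]
padded = pad_16_to_24(samples)
assert [p >> 8 for p in padded] == samples  # lossless round trip
```

Reversing the shift recovers the original samples exactly, which is why padded 16-bit data makes a fair ABX reference against true 24-bit material.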

My sound card (x-plosion) advertises this feature so i was curious to try it, and could not figure it out... and
The feature is only useful if you're playing back 24-bit audio. Otherwise, it is purposeless.

I was wondering if the increased sound depth would help with DTS real time encoding in games
No.
 
24-bit resolution would be useful for playback of 24-bit material...although it may be irrelevant anyway unless you have

1) high quality audio components and
2) have the amplifier power to actually take advantage of the extra dynamic range. Since most people will hear their amplifier clipping long before maxing out 16-bit dynamic range, the extra bit depth is pretty much wasted.

And real-time DTS and Dolby Digital are very lossy formats; I don't think extra headroom here will affect sound quality.
 

Well there are two serious problems with that test:

1) Gradual changes are less noticeable than immediate changes. What I mean is I can stick you in a room and gradually lower the light level by 50%, and you won't notice I'm doing it, you'll just have a harder time seeing. However, if I suddenly halve the light level, you'll have no problem noticing. So the fact that you don't notice it as soon if the change is gradual doesn't prove you can't notice a change at all.

2) YouTube compresses all audio, quite a lot. You aren't going to get a realistic representation of the sound. As a practical matter, its encoder most likely works with 16-bit samples anyhow, so anything below that will be gone, no matter what the original bit size.

Now please note I'm not saying this means that there is some night and day difference between 24 and 16-bit sources, but if you are relying on this as a test you are not doing a proper test.

Also, the benefits of higher sample sizes are largely in terms of dynamic range. 16-bit gives you 96dB, which is pretty good. Supposing you use a nicely dithered source, you get something like 90dB SNR. Ok, no problem... if you are using a full-level signal. However, the quieter the passage is, the less range you've got. That's the downside of fixed-precision numbers, as anyone who's done scientific coding can tell you. If you have something 30dB below peak, you've only got 60dB SNR there, and that is not hard for humans to notice in many cases.

So 24-bit is useful because it well exceeds anything we'd ever need. Humans have about 120dB of range in hearing, since 0dB is the threshold of hearing for most people and 120dB causes pain. 24-bit gives you 144dB, so plenty of extra. It is also above what you can realistically get out of electronics, where inherent noise becomes a problem.
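Those figures fall straight out of the roughly 6.02dB-per-bit rule for linear PCM; a quick back-of-envelope check in Python (hypothetical function name):

```python
import math

def pcm_dynamic_range_db(bits):
    # Theoretical dynamic range of linear PCM: 20 * log10(2**bits),
    # i.e. roughly 6.02 dB per bit.
    return 20 * math.log10(2 ** bits)

print(round(pcm_dynamic_range_db(16), 1))  # 96.3
print(round(pcm_dynamic_range_db(24), 1))  # 144.5
```

It also makes the quiet-passage point concrete: a signal 30dB below peak simply has 30dB less of that range available to it.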

The feature is only useful if you're playing back 24-bit audio. Otherwise, it is purposeless.

Not entirely. You actually want converters to be bigger than your target output. The reason is that, as with everything, they aren't perfect. The best 16-bit converters in the world don't get the 96dB SNR they should, and lesser-quality ones aren't even close. Well, 24-bit converters don't get the 144dB they should either, but who cares, it is more than we need. So 24-bit converters can be useful on 16-bit sources simply because they make sure the converters aren't the problem. Goes double since they are cheap these days.
 
2) I was wondering if the increased sound depth would help with DTS real time encoding in games

No, not at all. While the DTS Coherent Acoustics codec doesn't specify an input or output bitsize, the implementation with DTS Interactive uses 16-bit samples. As such, there's nothing to be gained with 24-bit at any stage, as it'll be truncated before being compressed.

As for how to change the output in Windows 7, go to your sound control panel, the playback tab. Pick the device you are interested in and hit properties. On the advanced tab, you can pick the output sample rate and size. Windows 7 will then resample all audio to that rate, before handing it to your soundcard.

However, no matter what you set it to, the soundcard itself will have to resample to 48kHz to hand it off to the DTS Interactive encoder, since it is 48kHz only, and all samples will get truncated to 16-bit.
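Truncation to 16-bit just means the low byte of each sample is dropped. A small Python sketch (hypothetical helper name) of why nothing below the 16-bit floor can survive that stage:

```python
def truncate_to_16_bit(samples_24):
    # Arithmetic right shift discards the low 8 bits of each 24-bit
    # sample -- the same effect as feeding a 16-bit-only encoder.
    return [s >> 8 for s in samples_24]

quiet_detail = [100, -100, 255]          # lives entirely in the low byte
print(truncate_to_16_bit(quiet_detail))  # [0, -1, 0] -- the detail is gone
```

So any extra resolution you set upstream is discarded before the DTS bitstream is even generated.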
 
Well there are two serious problems with that test:

1) Gradual changes are less noticeable than immediate changes. What I mean is I can stick you in a room and gradually lower the light level by 50%, and you won't notice I'm doing it, you'll just have a harder time seeing. However, if I suddenly halve the light level, you'll have no problem noticing. So the fact that you don't notice it as soon if the change is gradual doesn't prove you can't notice a change at all.

2) YouTube compresses all audio, quite a lot. You aren't going to get a realistic representation of the sound. As a practical matter, its encoder most likely works with 16-bit samples anyhow, so anything below that will be gone, no matter what the original bit size.

+ other stuff

Interesting points, however a note on the second one: near the beginning of the video, he acknowledges the compression issue on youtube and posts a website in which you can listen to the original uncompressed wav files.
 
I'm mostly just warning the guy who originally posted it against putting too much stock in watching it. I suspect like many people he watches a video on Youtube and feels like that is the final argument.

As a practical matter, properly testing 16-bit vs 24-bit would be really difficult because it is going to be a fairly small difference, and the question for any sort of test is always going to be "In what context?" I mean, I can easily make a contrived test where there's a big difference, simply by putting a test tone below the 16-bit noise floor and nothing else. Ok, well, that doesn't tell you anything useful. Likewise, I can easily set up a test where the equipment is such that there'd be no way to hear a difference between the two regardless.

In overall terms, I think it is more of an insurance thing than a real necessity thing. As everything goes 24-bit, we just have more dynamic range than we will ever need. It is more than a human can hear, more than electronics can reproduce, etc. That means we ensure that our sampling isn't the problem.
 
You need to set your default Shared Mode playback format under the device properties. I'd show you the exact menu, but unfortunately I'm in XP right now. Go to Control Panel -> Sounds -> device -> right-click or Advanced and choose something like 24-bit, 48kHz. This affects recording and the internal resampler for sounds not already at 44.1/48kHz.
 
I'm mostly just warning the guy who originally posted it against putting too much stock in watching it. I suspect like many people he watches a video on Youtube and feels like that is the final argument.

As a practical matter, properly testing 16-bit vs 24-bit would be really difficult because it is going to be a fairly small difference, and the question for any sort of test is always going to be "In what context?" I mean, I can easily make a contrived test where there's a big difference, simply by putting a test tone below the 16-bit noise floor and nothing else. Ok, well, that doesn't tell you anything useful. Likewise, I can easily set up a test where the equipment is such that there'd be no way to hear a difference between the two regardless.

In overall terms, I think it is more of an insurance thing than a real necessity thing. As everything goes 24-bit, we just have more dynamic range than we will ever need. It is more than a human can hear, more than electronics can reproduce, etc. That means we ensure that our sampling isn't the problem.


I am by no means an audiophile... I was seriously just wondering because my sound card advertises optical output of 24-bit/96kHz DTS audio, and Windows 7 just gives me two options:

16-bit, 44100 Hz
16-bit, 48000 Hz

Also I have never fed my receiver anything 24 bit and it is supposed to display this info on its screen.

I don't need the feature at all, I just wanted to properly test my equipment. No one needs the 212 MPH top speed of a Lambo, but it's nice to try it once, ya know?
 
Well, it probably can do 24-bit/96kHz S/PDIF uncompressed output; that's pretty common for cards these days. However, that's only two channels. To do 5.1 you need to compress the sound, since S/PDIF doesn't have the bandwidth. That means DD or DTS. DTS Interactive is an implementation of the full-bitrate spec of the original DTS Coherent Acoustics, which is 48kHz. As I said, bit size isn't specified, since perceptual encoders don't have a sample size, but in the implementation I've seen it is 16-bit input, and receivers might only decode it as 16-bit anyhow.

While there are higher bit/kHz DTS specs, they aren't something that DTS Interactive does.

So if you want uncompressed 24/96, you'll have to do 2 channels, just turn the DTS encoder off and try it. If you want surround, you'll have to do without. If you want 24/96 uncompressed surround, you need to get an HDMI sound card as that is the only thing that'll do it digital.
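The bandwidth arithmetic behind that is easy to check. A back-of-envelope sketch in Python (hypothetical function name; raw PCM payload rates, ignoring interface framing overhead):

```python
def pcm_bitrate_bps(channels, bits, rate_hz):
    # Raw linear PCM payload: channels * bits per sample * samples per second
    return channels * bits * rate_hz

print(pcm_bitrate_bps(2, 24, 96000))  # 4608000  -- stereo 24/96
print(pcm_bitrate_bps(6, 24, 96000))  # 13824000 -- 5.1 at 24/96, three times as much
```

Consumer S/PDIF carries only two channels of linear PCM per frame, so a 5.1 stream has to be compressed down (DD/DTS) or moved to an interface like HDMI that has room for it.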
 
Thanks for your comments, I hadn't even thought of turning off the digital encoder.
 
Not entirely. You actually want converters to be bigger than your target output. Reason is that as with everything, they aren't perfect. The best 16-bit converters in the world don't get the 96dB SNR they should, and lesser quality ones aren't even close.
A fair point, but I still see no particular benefit. Padding the words won't help the converter(s) squeeze anything meaningful from the input signal itself, which is pretty much guaranteed to have less than 90 dB of dynamic range anyway. Only if you're on that absolute threshold will there be any potential improvement.

I'd bet you anything that if you were to take any post-1990 24-bit DAC and do double-blind tests against a 16-bit input versus a 24-bit input (with padded 16-bit data), you'll have no luck differentiating between the two at very high listening levels (beyond 110 dB SPL), let alone moderate listening levels.
 

OK I watched that chapter. The first fallacy in his argument: if you notice his screenshot, he is using a Delta 66. I bought one in 2001 and recently upgraded, yet I don't even own a professional studio-quality setup. The second fallacy: how can you believe someone whose logic is so fallacious? Do you have the original files to listen to through your own $20,000 monitoring setup like the major studios have? No, you don't, and I don't see any mention of a high-end spectrum analyzer in the video. Now, you might be saying to yourself that I'm an elitist jerk. That is beside the point. 24-bit audio improves the dynamic range among other things, which you cannot hear if your monitoring system has less dynamic range capability than the recording! Anyway, that video is ridiculous and obviously just a vehicle for making money off of conspiracy theorists who don't require Lynx converters, Neve consoles or Gordon preamps.
 
Let's see some ABX logs. You'll want to pad the 16-bit samples to 24-bit before proceeding to avoid bit depth switching delays if you're using ASIO output or KS output, of course (you knew that, right?)

A fair point, but I still see no particular benefit. Padding the words won't help the converter(s) squeeze anything meaningful from the input signal itself, which is pretty much guaranteed to have less than 90 dB of dynamic range anyway. Only if you're on that absolute threshold will there be any potential improvement.

I'd bet you anything that if you were to take any post-1990 24-bit DAC and do double-blind tests against a 16-bit input versus a 24-bit input (with padded 16-bit data), you'll have no luck differentiating between the two at very high listening levels (beyond 110 dB SPL), let alone moderate listening levels.

Seriously? Over 4 posts a day for 5.7 years and I'm supposed to take you seriously? Go outside and "pad" your life with real work in any professional field, then maybe you'll be more convincing. I will not respond to any more of your bullshit.
 
Seriously? Over 4 posts a day for 5.7 years and I'm supposed to take you seriously? Go outside and "pad" your life with real work in any professional field, then maybe you'll be more convincing. I will not respond to any more of your bullshit.

How are you so certain that he doesn't work in a professional field?
 
My past four years of education in recording engineering, two Digidesign-issued certifications (continual), prior employment with two Burbank post-production houses as well as my freelance experience in dialog editing and general audio editing have no bearing whatsoever in this discussion.

If you wish to address certain points I've made, please feel free. My credentials, however, are not open for any further discussion.
 
My past four years of education in recording engineering, two Digidesign-issued certifications (continual), prior employment with two Burbank post-production houses as well as my freelance experience in dialog editing and general audio editing have no bearing whatsoever in this discussion.

If you wish to address certain points I've made, please feel free. My credentials, however, are not open for any further discussion.

That is what people who can't get real jobs in sound do. Editing is cut and dry, your ears and artistic ability are obviously lacking. Let's see those certs and diploma from expressions or wherever.
 
A fair point, but I still see no particular benefit. Padding the words won't help the converter(s) squeeze anything meaningful from the input signal itself, which is pretty much guaranteed to have less than 90 dB of dynamic range anyway. Only if you're on that absolute threshold will there be any potential improvement.

Well, something else to remember with regard to audio is that at the threshold of hearing, you can hear more detail than just "present/not present." You can discern speech at your hearing threshold. That is, in fact, one of the things an audiologist will test. They first do basic tests to establish that you can hear, that your speech recognition is working properly, and so on. Then they find your threshold of hearing with test tones. Once they've established that, they verify everything works properly with speech recognition at that level.

Well, as that applies to digital audio, it means having something right near the lower bound of the sample range is problematic. If you've got only 1 bit, all you can make are square waves. You can dither, of course, which will allow detail below the noise floor, but raise said floor.

With 16-bit audio, this is potentially a problem. With 24-bit, not really, you've got plenty of bits to spare.

I realize that this doesn't matter in a lot of situations. I'm certainly not some "24-bit or death!" zealot. Hell, middling bitrate MP3s are fine with me. However I do see the overall benefit of having samples that are larger than we ever need.
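To make the dither point concrete, here's a minimal sketch in Python (hypothetical helper, plain unshaped TPDF, no noise shaping): a tone sitting below the 16-bit least significant bit truncates to dead silence without dither, but with dither its level survives under the (now slightly higher) noise floor:

```python
import random

random.seed(0)

def quantize(x, bits=16, dither=True):
    # Quantize a float sample in [-1.0, 1.0) to integer PCM.
    # TPDF dither = sum of two uniform draws, i.e. +/-1 LSB triangular
    # noise that decorrelates quantization error from the signal.
    scale = 2 ** (bits - 1)
    noise = (random.random() - 0.5) + (random.random() - 0.5) if dither else 0.0
    q = round(x * scale + noise)
    return max(-scale, min(scale - 1, q))

tone = 0.3 / 65536  # roughly -107 dBFS, below the 16-bit floor
undithered = [quantize(tone, dither=False) for _ in range(10000)]
dithered = [quantize(tone) for _ in range(10000)]
print(sum(undithered))                # 0 -- the tone truncates to silence
print(sum(dithered) / len(dithered))  # a small nonzero mean: the tone's level survives in the noise
```

With 24-bit samples the same tone would sit dozens of dB above the floor, which is the "bits to spare" point.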
 
That is what people who can't get real jobs in sound do.
Heh. Interesting.

Well, as that applies to digital audio, that means having something right near the lower bound of the sample is problematic. If you've got only 1-bit all you can make are square waves. You can dither, of course, which will allow detail below the noise floor, but raise said floor.
It's the fundamental shortcoming of linear PCM, yeah. The audible 'realization' of that, however, in actual use, is pretty negligible, particularly with some of the psychoacoustic dither techniques we can employ these days. With simple unshaped RPDF or TPDF dither, distortion is potentially audible on the head or tail of a fade where few effective bits are utilized. Being able to A/B them, however, will generally demand above-average listening levels (as well as critical listening). With more elaborate dither, the difference between a 24-bit reference fade and its 16-bit counterpart is very minimal, demanding very high listening levels to discern any differences.

If you've never done any ABX testing against fades at 24-bit and 16-bit, you might be surprised at what 16-bit is truly capable of achieving. It fumbles in theoretical bouts against 24-bit, but more than holds its own under real-world testing (even with synthetic worst-case-scenario samples). If you're interested, I'd be happy to piece together some samples using various dithering algorithms for you to check out.

However I do see the overall benefit of having samples that are larger than we ever need.
Yeah, don't get me wrong -- I'm probably more 'pro-24-bit' than you are, but the most meaningful advantages are in the production domain as opposed to the reproduction domain. Probably 95% of all the various recordings I've ever done have been recorded at 24-bit.
 