My question is more a matter of interest than of need.
I did an online hearing test to find the maximum frequency I can still hear. The result was ~18500 Hz, so I concluded the most efficient way to encode my audio files would be with a 37 kHz sampling rate (twice the highest audible frequency, per the Nyquist criterion). The thinking was that this would allow each frame to be larger in size at the same overall bitrate.
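The arithmetic behind that choice can be sketched quickly (a minimal illustration; the 18500 Hz figure is the hearing-test result from above):

```python
# Nyquist criterion: to represent frequencies up to f_max without
# aliasing, the sampling rate must be at least 2 * f_max.
def min_sampling_rate(f_max_hz: float) -> float:
    """Return the minimum sampling rate for a given highest frequency."""
    return 2 * f_max_hz

highest_audible = 18500  # Hz, result of the online hearing test
print(min_sampling_rate(highest_audible))  # → 37000
```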
So: opusenc --bitrate 110 --raw-rate 37000 Sample.wav Sample.opus
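As a side note, the sampling rate actually stored in a WAV header can be checked with Python's standard wave module (the file name here is made up; a short silent file is generated just for illustration):

```python
import wave

# Write a short silent WAV at 37 kHz, then read its header back,
# to illustrate how to verify a file's declared sampling rate
# before handing it to an encoder.
with wave.open("Sample37k.wav", "wb") as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(37000)   # the sampling rate in question
    w.writeframes(b"\x00\x00" * 37000)  # one second of silence

with wave.open("Sample37k.wav", "rb") as w:
    print(w.getframerate())  # → 37000
```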
The conversion ran without errors, but the resulting file sounded like random white noise, and the high frequencies seemed to be cut off.
So I have several questions:
Is the thought process in the first paragraph correct? Does a 48000 Hz sampling rate on a file whose highest signal frequency is 21 kHz waste frame quality, or does opusenc (or any other conversion tool) recognize that and resample to 42 kHz (or 44.1 kHz)?
I have heard that even 96 kHz sampling could be useful, e.g. to encode two signals far above our hearing range that together produce an audible sound. Is that true, and is that the explanation for the awful resulting file?
Does opusenc simply not know what to do and output that kind of file instead of an error?
Kind regards
Markus