Lots of questions concerning the last part (the "synth_1to1" function) of MP3 decoding with the mpglib/mpg123 library...
What I've understood is that:
- in 'hybridOut' we have, per frame, 18 amplitudes for each of the 32 frequency subbands, each amplitude corresponding to a different time interval.
- the DCT64 takes these 18*32 amplitudes as input and returns the decoded audio frame
(please correct me if this is wrong)
In this case:
- since the 32 frequency subbands divide the spectrum equally, we get intervals of approximately 700Hz each. Knowing that the coded amplitudes only carry time information, how does the algorithm achieve a better frequency resolution (as we can hear in the decoded sound)?
- why do we need to window the output buffer of the DCT?
- why do we need a second output buffer in the DCT, which will only be used by the next frame?
- is the algorithm still valid if the amplitudes are shifted within the same subband, juxtaposing amplitudes from two different frames, in order to compute a frame that straddles the two original frames (since the amplitudes lie along a time axis)?