Compression Depression


“Research finds MP3s make your music sound more depressing” says the latest bit of breathless reportage to come floating down the audio/science news feeds. In this new ‘post fact’ age, I guess I’m no longer supposed to be surprised by the paucity of actual evidence offered up in support of any particular assertion put forward as scientific ‘research’ but in the interests of actual reality, let’s focus the Hummadruz lens on this affair.

As usual, nearly every mention of this that I saw completely uncritically regurgitated copy that looked like it came from one source. I couldn’t narrow it down to a first instance, but the coverage from NME was typical:

New research has found that listening to music in low-quality digital formats can dampen its emotional impact.

According to a study by the Audio Engineering Library, MP3s can have a distinct effect on the “timbral and emotional characteristics” of the instruments involved.

Researchers compared responses to compressed and uncompressed music over ten emotional categories at several bit rates.

It’s always instructive to find out who is responsible for research, so my obvious first action when I read the above quote was to click on that link to the ‘Audio Engineering Library’ and find out who those dudes actually are. Puzzlingly, the link just leads to the abstract of the actual paper, which makes no mention of that organisation.

In fact, searching the web for the ‘Audio Engineering Library’ reveals that there is no such entity, and, instructively, pretty much every search result leads to some variation or other of the story about ‘How compression makes your music depressing’. It’s a veritable self-referential maelstrom. Audio Engineering Library? Someone just made that shit up.*

If ever there was an indication that we are about to enter the land of hogwash and horsepiss, here we have it.

Still, we have the paper, right? Published on Researchgate – the science industry equivalent of LinkedIn – to be sure, but proper science will out!

The first thing we should note is that this research is a one-off publication, on an online site, of a conference presentation. The research it entails has neither been replicated nor peer-reviewed. This is, in scientific terms, not much more than a bunch of opinions.

We don’t have to delve far into the paper to find its true worth:

3.2 Listening Test

We used eight sustained instrument sounds: bassoon (bs), clarinet (cl), flute (fl), horn (hn), oboe (ob), saxophone (sx), trumpet (tp), and violin (vn).

Whoa there cowboy! I was told by NME that the compression made my music more depressing. You do know what music is, don’t you, NME? What it is not, is a bunch of fixed-note sustained instrument noises taken entirely out of musical context.

In addition to that complete clanger, there is no mention anywhere in the paper about how many subjects were used in these tests, nor anything about how they were conducted – just a lot of technical hocus-pocus about compression methods, and some graphs that are totally meaningless given what we just read. Wading further through this procedural mess, we find so much experimenter subjectivity stirred into the mix that the study is rendered all but useless as a piece of viable science.
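Just to make concrete what a ‘stimulus’ means in this context, here is a rough sketch of my own – not anything taken from the paper, and simplified to a plain sine wave where the researchers used recorded instrument samples – of how you could produce a compressed single tone of the kind described. It assumes Python and an ffmpeg install; the file names and the two-second duration are mine, while the pitch (roughly Eb4, 311.1 Hz) and the 32 kbps figure come from the paper.

    import math
    import struct
    import subprocess
    import wave

    SAMPLE_RATE = 44100
    FREQ_HZ = 311.1     # roughly Eb4 -- the pitch the paper says its instrument samples sit near
    DURATION_S = 2.0    # arbitrary; just long enough to count as 'sustained'
    BITRATE = "32k"     # one of the low bit rates discussed in the paper

    # Synthesise a bare sustained tone: no melody, no harmony, no rhythm, no context.
    n_samples = int(SAMPLE_RATE * DURATION_S)
    frames = b"".join(
        struct.pack("<h", int(0.5 * 32767 * math.sin(2 * math.pi * FREQ_HZ * i / SAMPLE_RATE)))
        for i in range(n_samples)
    )

    # Write it out as an uncompressed 16-bit mono WAV file.
    with wave.open("tone.wav", "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(frames)

    # Squash it down to a low-bitrate MP3 (assumes ffmpeg is on the PATH).
    subprocess.run(["ffmpeg", "-y", "-i", "tone.wav", "-b:a", BITRATE, "tone_32k.mp3"], check=True)

Listen to the two files side by side and you have – give or take the fact that the real stimuli were recorded instrument samples rather than a sine wave – the entire musical universe the test subjects were asked to emote about.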

To sum up, an unpublished, un-peer-reviewed paper, conducted by a fictional institution, tells us that in an un-replicated study, an unspecified number of listeners (keeping in mind that could be as few as two) were played compressed timbral instrumental single tones (not music) and asked to subjectively choose – from a list pre-determined by the researchers – how those noises made them feel.

Would you conclude that from there you could get to “Research finds MP3s make your music sound more depressing”?

No, me neither.


*Addendum: It seems they probably mean the AES (Audio Engineering Society) Library, where the paper is also archived. So let’s chalk that one up to sloppy journalism, rather than wilful deception. On the AES site, the presentation appears under its actual title The Effects of MP3 Compression on Perceived Emotional Characteristics in Musical Instruments [my emphasis].

Notice that this doesn’t make any claims about music, per se, and is a much more accurate appraisal of what the study was actually looking at (see comments below for further clarification).


6 thoughts on “Compression Depression”

  1. This is not the paper that I saw referenced; the one I saw was published by AES:

    http://dx.doi.org/10.17743/jaes.2016.0031

    Plus, last time I checked, major and minor chords being happy and sad respectively is the most basic of music theory, and the only context needed is the base note of the chord in relation to the other notes of the chord, so I don’t understand your criticism.

    • It’s the same paper. Just because it’s on the AES website as well doesn’t make it more credible or its science any better. It’s not ‘published’ by AES, or even endorsed by it – it’s just archived there. That really means very little in terms of scientific validity. And you seem to miss the point somewhat: you can’t take a single tonal sound and call that music.

      You say: “the only context needed is the base note of the chord in relation to the other notes of the chord”. Yes, of course. But the researchers didn’t give any context. In fact, they’re not looking at music at all, just at the sounds of the instruments. This is even reflected in the actual title of the presentation: The Effects of MP3 Compression on Perceived Emotional Characteristics in Musical Instruments.

      From the paper:

      “We used eight sustained instrument sounds: bassoon (bs), clarinet (cl), flute (fl), horn (hn), oboe (ob), saxophone (sx), trumpet (tp), and violin (vn). The sustained instruments are nearly harmonic, and the chosen sounds had fundamental frequencies close to Eb4 (311.1 Hz)”

      In other words they just played their listeners individual single sustained Eflat4 tones on different instruments, in isolation from any musical structure.

      That is not, by anybody’s reckoning, music. Drawing any such conclusion from the research is incorrect.

      • Sorry, I think I misinterpreted that line; the use of the word “sound” instead of tone or note threw me off, and I thought they were talking about a chord.

        It is not the same paper: the name and the layout are clearly different, the AES document has twice as many pages and a later date attached, and page 7, section 4 acknowledges that it was peer reviewed, though the quality of that review is questionable.

        They showed a marked difference between the reporting of happy music, and you can’t argue “it’s not real music” when there are statistically valid results at the extremes; you’re just exhibiting confirmation bias here. The statistics say quite clearly that 32 & 56 kbps audio doesn’t sound happy, and I don’t think anyone would argue otherwise.

        You can even make the case that having a neutral tone (i.e. not a “happy” or “sad” chord) is more statistically valid, as it doesn’t bias the tests towards happy or sad tones.

        There are plenty of quality criticisms of this paper, but your post reeks of fishing for ad hominem. You’re saying that it’s bad research just because it’s on Researchgate? That, my friend, is fallacy by association; if it’s bad quality research, criticise the research.

        This paper is only confirming what we already know, anything below 128kbps MP3 is crap. And that this relationship would not scale linearly to high bitrate, high complexity music.

        The paper is by all means a bit self evident and not all that interesting; the real issue here, I think, is what every fan of science complains about: the reporting of science by the media is utterly atrocious.

        And I don’t think anyone would disagree with the conclusion:

        “The current study also helps provide the basis for content based refinements of audio codecs in the future. As an example, if we know that the trumpet is particularly changed in emotional characteristics by compression at 32 Kbps, if we have a piece by Miles Davis with a prominent trumpet throughout, we may decide to use a higher bit rate to encode it. Or, future research may indicate how the trumpet could be compressed at 32 Kbps without substantially changing its emotional characteristics.”

        • It is the same paper. It’s about the same thing, it has the same authors: Ronald Mo, Ga Lam Choi, Chung Lee, and Andrew Horner. The Researchgate version seems to have been abridged slightly for reasons on which we can only speculate. Nevertheless, it lays out the same aims, the same methodology, the same results and the same conclusions. In the same words, and using the same graphs and references. Same paper.

          And seriously, you’re going to attempt to tell me that a one sentence ‘thank you’ to anonymous reviewers constitutes peer review? That’s not ‘questionable’, that’s laughable. It may as well say ‘thanks to our Mums for fixing our spelling mistakes’.

          Moving on to the paper itself, the methodology as described in the paper makes no assessment of music. None. They sure talk about music a lot, but that is most definitely not what they looked at. All the graphs, the p-values and the other hocus-pocus refer SOLELY to single Eflat4 tones played on various instruments. I will repeat: extrapolating these results to apply to actual music is completely ridiculous, and well outside anything the study could possibly show.

          You said: “They showed a marked difference between the reporting of happy music,”

          No, they didn’t. Point me to anywhere in the paper where it says that they played the listeners music.

          You said: “You’re saying that it’s bad research just because it’s on Researchgate?”

          No I’m not. And I never did. I said – clearly – that it’s bad research because it does not support the conclusions of the researchers, and most egregiously does not, in any way, support the conjecture that was carried on every single social media post I saw on this story: that music compression makes music sound ‘more depressing’ or that it ‘dampens its emotional impact’. That it’s referenced on Researchgate merely indicates the quality of its bona fides. If it was presented on something like Nature or even Plos One, we would have a better reason to take it seriously. And, as I also said, having it on the AES website also means little. AES is not a scientific institution, and articles archived in its library are not scientifically reviewed.

          So let’s talk about who has confirmation bias, shall we?

          You said: “The paper is by all means a bit self evident ”

          No, it’s not. You’ve already decided what you think the study should show and you’re scrambling to defend it. At the very MOST, the results of the study point to a small percentile shift of listener bias in categories that the researchers have pre-decided. On single Eflat4 tones. In a ridiculously small listening group. In a single trial. With no controls. And no independent replication. And no peer review.

          You said: “This paper is only confirming what we already know, anything below 128kbps MP3 is crap. And that this relationship would not scale linearly to high bitrate, high complexity music.”

          But that’s quite specifically NOT what the paper is ‘confirming’. It’s not judging audio quality at all. The stated aims of the paper are to “give listeners and music streaming service providers some preliminary benchmarks for understanding the emotional effects of MP3 compression on music.” The listeners are never asked to rate the quality of what they hear – only to say if it makes them ‘happy’ or ‘sad’ or ‘shy’ (frikkin’ hell – can we get any more contextually adrift with some of those categories?). They are not asked if the tones sound ‘crap’ or not.

          Audio quality is not being assessed. Assessment of the ‘emotional effect’ of compression on musical instruments, not music, is the stated aim. And, I will say again as clearly as I can – that’s not what the conclusion supports.

          Addendum: I figured out the reason that the two publications appear different. The Researchgate version is, as I speculated, abridged, and that’s because it is a conference presentation version of the research. Nevertheless, it is the exact same research.

          • Sorry what? How is this not a measure of quality?

            You have the original and the degraded signal, and the degraded is measurably different (in this case emotionally) from the original. That is, by any dictionary definition, a reduction of quality.

            You can argue all you want about whether this is “real music”, but that’s entirely my point: these are sounds. MP3 doesn’t give 2 shits if it’s “traditional music” or “harsh wall noise”; what does matter is that the MP3 file at low bitrates will not reproduce the content accurately, as anyone who has listened to low bitrate audio knows.

            • Daniel, you are now desperately ducking and weaving to defend ideas you have already uncritically accepted. You have introduced the word ‘degradation’ into the argument, when that is, very specifically, NOT what the research is measuring. It’s (supposed to be) measuring how the listener feels emotionally, not what they think of the sound quality. If you can’t see the difference between those two things then you really have missed the point.

              And to claim that there is no difference between music, and single tonal notes played in isolation and with no context, is just completely absurd.
