The validity of best-of lists

I spotted this on Twitter earlier this afternoon:

I can understand the poster’s frustration, but this suggestion seems entirely useless. Why? Well, I have some thoughts on that.

For one thing, I doubt that anyone who reads year-end ‘best-of’ lists does so under the misconception that the critics compiling them had read every single book (or book of that type) published during the year—which would likely be impossible, anyway. It is understood that a given critic’s ‘best-of’ is going to consist of books chosen from all those that critic has read, and that those are not necessarily going to be the same books that other critics have read.

For another, such lists—and all reviews, for that matter—are highly subjective. Tastes vary—not just from journal to journal, but from critic to critic. This is going to be true whether a given critic has read 10 books, 110 books, or 1,110 books (in which case I’d seriously wonder how the person finds time to have a regular life).

Third, the person who regularly reads magazines and journals that contain reviews would presumably, over time, become familiar with the work of several critics, in the process gaining a sense of their sensibilities and biases—and, thus, whose reviews they can and cannot trust.

For example, from when I first started buying music magazines until I was in college, I read lots of album reviews; it wasn’t that long before I figured out that if, say, Steve Simels (Stereo Review) liked an album, I would probably like it too.

To me, understanding how closely a critic’s opinions align with one’s own is more important than how many books that critic has read. If numbers are that important, why not skip critics altogether and just go by sales figures?

Another person suggested that it would be helpful for critics to list all the books they had read before compiling their lists. That might work if a critic had read a smallish number of books—but what if the list were longer? That would really work only online, and even then such a list can run only so long before it stops being useful.

One thing that could be helpful in evaluating ‘best-of’ lists—at least, online—is providing a quick blurb about the critic. That would give readers of all levels (serious, casual, occasional) some idea of the critic’s tastes, helping them to determine how seriously (or lightly) to take the recommendations.

More information is good. But it should be the right kind of information. That’s how informed choices get made.

(3 December 2017)