But there’s one part of the app that I dislike – which is odd, because it’s a core feature. & that’s the rating (& recommendation) system. I find it nearly useless. If you’re not familiar, it currently has a 5-bottlecap (star) rating system, in half-cap increments, which is common. But looking at how I rate things, everything falls into 4 zones for me. So out of 11 possible data points (0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5), I only really use 4 – which indicates a problem to me:
- 0-1.5: I didn’t like it at all.
- 2-3: okay, but not worth trying if you haven’t had it yourself.
- 3-4: decent, but forgettable.
- 4-5: excellent; I’ll try to have it again.
Conversations with others have revealed that they group things similarly. Dave might have something more detailed, but I suspect he’s an outlier. And watching over the past while, it feels like virtually every beer settles into the 3.5-4.5 range over time – which makes the ratings kind of useless.
Related to this: I have no idea how or why Untappd recommends the other beers it does (edit: in the “recommendations by style” that appear after a check-in). The rating I give a beer doesn’t seem to affect this at all – I assume there’s some secret sauce beyond “beer is of a similar style,” but I don’t know. What I *do* know is that I don’t currently find the recommendations useful. When I’ve tried related beers after giving either extremely high or extremely low ratings, there’s no consistency in the results. An aside on this: I wish I could “regionalize” the recommended beers, because it’s really hard to get most of the recommended beers I see here in BC.
So here’s my modest proposal for improving the rating system. It has 2 parts. The big change is that beer ratings should be relative to each other. So when I untap my beer, it’ll ask me “Is Brassneck Brewing’s Passive Aggressive IPA better or worse than Driftwood Brewing’s Fat Tug IPA?” & I’ll say yea or nay. This, combined with 100s of others answering similar questions, will start to build an overall score for each beer – likely a percentile score. But it will also build a large web of relative ratings from one beer to another.
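To make that concrete: one well-known way to turn a pile of “A beat B” answers into a single score per item is an Elo-style update (the chess rating system). This is just my sketch of the idea, not anything Untappd does – the K factor, starting score, and beer names below are all made up:

```python
from collections import defaultdict

# Every beer starts at the same score; each "better or worse?" answer
# nudges the two beers' scores apart. K and START are arbitrary choices.
K = 32
START = 1500

scores = defaultdict(lambda: START)

def record_comparison(winner, loser):
    """Update both beers' scores after one 'winner is better' answer."""
    expected_win = 1 / (1 + 10 ** ((scores[loser] - scores[winner]) / 400))
    delta = K * (1 - expected_win)
    scores[winner] += delta
    scores[loser] -= delta

# A few hypothetical answers from different drinkers:
record_comparison("Fat Tug IPA", "Passive Aggressive IPA")
record_comparison("Fat Tug IPA", "Some Other IPA")
record_comparison("Passive Aggressive IPA", "Some Other IPA")

def percentile(beer):
    """Share of other beers this one outscores -- a percentile-style stat."""
    others = [b for b in scores if b != beer]
    return 100 * sum(scores[beer] > scores[o] for o in others) / len(others)
```

The nice property is that nobody ever assigns an absolute number – the scores emerge entirely from yes/no comparisons.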
This sort of natural-language question is great for humans. I can remember how much I enjoyed beer A, and can weigh that against my current beer B. But I have a hard time giving an absolute rating to a beer – in part because my tastes change over time, whereas a relative rating will more accurately reflect those changing tastes. Imagine drinking the same beer 3-6 months later: Untappd could ask how I enjoyed it relative to the last time, which provides genuinely useful information.
With 1000s of users providing relative ratings, a particular scoring set will emerge, with much more granular ratings, resulting in fun stats like “95% of drinkers liked this beer.” In the recommendation section, it can then use other relative ratings to suggest other beers to try. If I like my beer LESS than the comparison beer, show me other beers that are liked more than the comparison beer. Or vice versa. Because a negative rating should indicate I want something different from what I’m drinking, whereas a positive one should indicate that I want more of the same? Or similarly rated beers?
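That recommendation rule is simple enough to sketch too. Assuming we already have a score per beer (from the pairwise answers above, or anywhere else), the logic I’m describing is roughly this – again purely hypothetical, with made-up beer names:

```python
def recommend(scores, current_beer, comparison_beer, liked_more, limit=3):
    """If the drinker liked their beer LESS than the comparison beer,
    steer them toward beers scored above the comparison; if they liked
    it MORE, suggest beers scored closest to their current one."""
    candidates = [b for b in scores if b not in (current_beer, comparison_beer)]
    if liked_more:
        # "More of the same": beers nearest in score to the current beer.
        pool = sorted(candidates,
                      key=lambda b: abs(scores[b] - scores[current_beer]))
    else:
        # "Something different": beers ranked above the comparison beer.
        pool = sorted((b for b in candidates
                       if scores[b] > scores[comparison_beer]),
                      key=lambda b: -scores[b])
    return pool[:limit]

# Hypothetical scores for four beers:
beer_scores = {"Fat Tug IPA": 1600, "Passive Aggressive IPA": 1500,
               "Forgettable Lager": 1400, "Blood Alley Bitter": 1550}
```

So a drinker who liked “Passive Aggressive IPA” less than “Forgettable Lager” would get pointed at the higher-scored beers instead.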
I realize I’m way overthinking what’s a fun app & pastime. But ratings are a hard nut to crack, and the problem applies anywhere anyone rates anything. And in a system where the subjects are inherently comparable (apples to apples), relative ratings and enjoyment percentiles seem like a good, human-and-machine-usable dataset.