Of course! In my opinion, the article I linked is a pretty good explanation, but I’ll restate it here anyway, especially since the original article has a lot of tangential discussion about game rankings on Steam.

## Premises

- At 100% uncertainty (+0/-0, +1/-0, +0/-1), assume that a given post has a “true” rating of 50%.
- If a post **A** has 10x as many votes as post **B**, then our uncertainty about the “true” rating of **A** is half that of **B**. (These 10x and 0.5x multipliers can be adjusted as needed.)

A consequence of the math below is that a post with millions of upvotes but no downvotes has a “true” rating approaching 1; likewise, millions of downvotes but no upvotes means a “true” rating approaching 0. This can be transformed into a more familiar number if need be, but for plain ranking, it suffices to simply compare “true” ratings and see which is bigger.
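
A quick sanity check of that limiting claim, using the uncertainty formula from the Calculation section below (the function and variable names here are my own):

```python
import math

def adjusted(upvotes, downvotes):
    total = upvotes + downvotes
    # Uncertainty halves for every 10x increase in the total vote count.
    uncertainty = 0.5 ** math.log10(total)
    return uncertainty * 0.5 + (1 - uncertainty) * (upvotes / total)

# All-upvote posts climb toward a "true" rating of 1 as votes pile up.
for n in (1, 10, 1_000, 1_000_000):
    print(n, adjusted(n, 0))
```

With a million upvotes the uncertainty term is `0.5 ** 6 ≈ 1.6%`, so the adjusted rating lands at roughly 0.992: it approaches 1 but never quite reaches it.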

## Example

If a post has 9 upvotes and 1 downvote, then it has an “approval” rating of 90%: the number of upvotes divided by the total number of votes. (If the post had +1/-9, it would have an approval rating of 10%.) The total number of votes is 10, which is 10x that of a post with 1 vote, so the uncertainty is half that of a post with 1 vote. Since the uncertainty of a post with 1 vote is 100%, the uncertainty of this post is 50%. **We are 50% certain that +9/-1 = 90% represents the true rating of this post.** In other words, we weight the observed 90% rating and the default 50% rating equally, combining them like so, `0.5 * 0.9 + 0.5 * 0.5 = 0.7`, to conclude that **we should consider +9/-1 to have a rating of 70% for the purposes of ranking**. Conversely, +1/-9 has an adjusted rating of 30%.
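
The arithmetic above can be checked in a few lines (a sketch; the variable names are mine):

```python
upvotes, downvotes = 9, 1
uncertainty = 0.5  # 10 total votes: half the uncertainty of a 1-vote post
observed = upvotes / (upvotes + downvotes)  # 0.9
# Weight the 50% default and the observed rating by the uncertainty.
adjusted = uncertainty * 0.5 + (1 - uncertainty) * observed
print(round(adjusted, 2))  # 0.7
```

Swapping the votes to +1/-9 gives `observed = 0.1` and an adjusted rating of 0.3.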

## Calculation

The uncertainty level is 1/2 to the power of the log base 10 of the total number of votes. `uncertainty = 0.5 ** log_10(upvotes + downvotes)`

The adjusted rating is then a combination of the default rating and the observed rating, weighted by the uncertainty. `adjusted_rating = uncertainty * 0.5 + (1 - uncertainty) * (upvotes / total_votes)`

(Posts with +0/-0 will just be assigned a rating of 0.5 to make the math easier. Ties can be broken by the sum of upvotes and downvotes.)
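
Putting the whole Calculation section together, a ranking sketch might look like this (the function names are my own, not from the original article):

```python
import math

def steamdb_rating(upvotes, downvotes):
    total = upvotes + downvotes
    if total == 0:
        return 0.5  # no votes: fall back to the 50% default
    uncertainty = 0.5 ** math.log10(total)
    return uncertainty * 0.5 + (1 - uncertainty) * (upvotes / total)

def rank(posts):
    # posts is a list of (upvotes, downvotes) pairs.
    # Sort by adjusted rating, breaking ties by total vote count.
    return sorted(posts, key=lambda p: (steamdb_rating(*p), p[0] + p[1]),
                  reverse=True)

print(rank([(9, 1), (1, 9), (0, 0), (86, 3)]))
# [(86, 3), (9, 1), (0, 0), (1, 9)]
```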

## Comparison with Wilson Score

Piggybacking off of @celtschk’s prior work here (thanks for doing this!): *How should we compute post scores?*

| SE order | SE score | upvotes | downvotes | Wilson score (z=2, center) | Wilson order | SteamDB score | SteamDB order |
|---|---|---|---|---|---|---|---|
| 1 | 83 | 86 | 3 | 0.946 | 1 | 0.846 | 1 |
| 2 | 55 | 59 | 4 | 0.910 | 3 | 0.811 | 3 |
| 3 | 30 | 30 | 0 | 0.941 | 2 | 0.820 | 2 |
| 4 | 3 | 4 | 1 | 0.667 | 4 or 5 (tie) | 0.615 | 4 |
| 5 | 2 | 2 | 0 | 0.667 | 4 or 5 (tie) | 0.594 | 5 |
| 6 | 0 | 1 | 1 | 0.5 | 6 | 0.500 | 6 |
| 7 | -2 | 1 | 3 | 0.375 | 7 | 0.415 | 7 |

| SE order | SE score | upvotes | downvotes | Wilson score (z=2, center) | Wilson order | SteamDB score | SteamDB order |
|---|---|---|---|---|---|---|---|
| 1 | 68 | 69 | 1 | 0.959 | 1 | 0.851 | 1 |
| 2 | 37 | 37 | 0 | 0.951 | 2 | 0.831 | 2 |
| 3 | 19 | 19 | 0 | 0.913 | 3 | 0.794 | 3 |
| 4 | 14 | 16 | 2 | 0.818 | 6 or 7 | 0.726 | 6 |
| 5 | 12 | 15 | 3 | 0.773 | 10 | 0.694 | 9 |
| 6 | 9 | 9 | 0 | 0.846 | 4 | 0.742 | 4 |
| 7 or 8 | 8 | 9 | 1 | 0.786 | 8 | 0.700 | 8 |
| 7 or 8 | 8 | 8 | 0 | 0.833 | 5 | 0.733 | 5 |
| 9 | 7 | 7 | 0 | 0.818 | 6 or 7 | 0.722 | 7 |
| 10 | 5 | 5 | 0 | 0.778 | 9 | 0.692 | 10 |
| 11 | 4 | 5 | 1 | 0.7 | 13 | 0.639 | 13 |
| 12 or 13 | 3 | 3 | 0 | 0.714 | 11 or 12 | 0.641 | 11 or 12 |
| 12 or 13 | 3 | 3 | 0 | 0.714 | 11 or 12 | 0.641 | 11 or 12 |
| 14 | 2 | 2 | 0 | 0.667 | 14 | 0.594 | 14 |
| 15 or 16 | 1 | 1 | 0 | 0.6 | 15 or 16 | 0.500 | 15 or 16 |
| 15 or 16 | 1 | 1 | 0 | 0.6 | 15 or 16 | 0.500 | 15 or 16 |

What sticks out to me most is how close the Wilson score ranking and the SteamDB ranking are to each other, with only a few places where they differ. In particular, for two answers where **A** scored +15/-3 and **B** scored +5/-0, the Wilson score preferred **B** while the SteamDB rating preferred **A**.
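
That disagreement is easy to reproduce. Here the Wilson “score” is the center of the Wilson interval with z = 2; the simplification `(upvotes + z²/2) / (total + z²)` is a standard algebraic identity, not something from the post, and the function names are mine:

```python
import math

def wilson_center(upvotes, downvotes, z=2.0):
    # Center of the Wilson score interval: (p + z^2/2n) / (1 + z^2/n),
    # which simplifies to (upvotes + z^2/2) / (total + z^2).
    total = upvotes + downvotes
    return (upvotes + z * z / 2) / (total + z * z)

def steamdb_rating(upvotes, downvotes):
    total = upvotes + downvotes
    uncertainty = 0.5 ** math.log10(total)
    return uncertainty * 0.5 + (1 - uncertainty) * (upvotes / total)

a, b = (15, 3), (5, 0)
print(wilson_center(*a), wilson_center(*b))    # 0.773 < 0.778: B wins
print(steamdb_rating(*a), steamdb_rating(*b))  # 0.694 > 0.692: A wins
```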