Are they though? Take the +25/-15 example. If the +25 votes are from seasoned professionals and the -15 from “hobby”-level users, what does that say?
Flip that on its head. If the +25 votes are from “hobby” users and the -15 from seasoned professionals, what is the takeaway? And the positive relative “score” of the answer makes it easier for those who aren’t sure to pile on additional upvotes (well, if others like it, it must be good, right?).
Or it could simply be that “user X” pissed off 30 people in chat and half of those went out and voted down a few of X’s answers. So what do the votes really tell anyone about the answer?
Unless a post gets a statistically significant number of votes, the score doesn’t actually provide much useful information. Sorting by weighted votes would be a better indicator of good answers (one possible weighting is sketched below), though I agree that sorting isn’t enough by itself (after all, if every answer is bad, the top will only be the least-bad answer). Perhaps, as additional feedback for users, generate a more generic ranking metric: something like giving up to three thumbs up/down to indicate how well received an answer is by the community (personally, I prefer a simpler visual solution), or rating community approval on a 100-point scale.
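To be clear, nothing above pins down a specific weighting scheme; this is just one well-known way to do it. Ranking by the lower bound of the Wilson score interval discounts scores built on only a handful of votes, which addresses the “statistically significant number of votes” problem directly. A minimal Python sketch (the function name `wilson_lower_bound` and the sample answers are purely illustrative; one could also weight each vote by voter reputation before aggregating, per the professional-vs-hobbyist point above):

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true upvote
    fraction at ~95% confidence (z = 1.96). With few votes the bound
    stays low, so a +2/-0 answer can't leapfrog a +25/-15 answer
    unless the evidence actually supports it."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    phat = upvotes / n
    return (
        phat + z * z / (2 * n)
        - z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    ) / (1 + z * z / n)

# Hypothetical answers as (label, upvotes, downvotes):
answers = [("A", 25, 15), ("B", 2, 0), ("C", 100, 40)]
for label, up, down in sorted(
    answers, key=lambda a: wilson_lower_bound(a[1], a[2]), reverse=True
):
    print(label, round(wilson_lower_bound(up, down), 3))
```

Here answer B (+2/-0) has a perfect raw ratio but so few votes that its lower bound drops below A’s, which is exactly the behavior a “weighted” sort should have.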
As a positive feedback mechanism for the author, you could show the vote count only to the author of the post, but a more generic ranking system conveys this just as well (if my post has three thumbs down, I know it needs to be improved).