Downvoting questions makes a lot of intuitive sense to me, but I think it's worth playing devil's advocate a bit to make sure it's really essential. The single most obvious use for question score is to let answerers pick questions they can answer effectively and usefully. Yet after many long hours of digging through question lists, analyzing patterns, and examining my own habits on multiple sites, I never found question score anywhere near as useful on any SE site as I would have expected. It simply doesn't help much for its number one use case.
This makes a lot of sense, but it occurs to me that there are two other signals SE already uses to good effect in its automated Q-ban system: question closure and deletion. It's not yet clear how aggressively we'll be deleting questions, manually or automatically, and the deletion signal apparently isn't weighted very strongly in SE's implementation, but closure is still worth considering. If we can do a better job of recognizing the specific show-stopper flaws that make a question unsuitable for answering (temporarily or permanently), then a Q-ban system could reasonably use that data to at least approach the speed and accuracy of SE's automated bans.
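To make the idea concrete, a closure-and-deletion-based ban heuristic could look something like the sketch below. This is purely illustrative: SE's actual formula is not public, and the field names, weights, and threshold here are all assumptions, not any real implementation.

```python
# Minimal sketch of an automated Q-ban heuristic driven by closure and
# deletion signals. All weights and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class Question:
    score: int
    closed: bool
    deleted: bool


def ban_signal(history: list,
               w_closed: float = 2.0,    # hypothetical: closure weighted heavily
               w_deleted: float = 1.0,   # hypothetical: deletion weighted lightly,
                                         # as SE reportedly does
               threshold: float = 5.0) -> bool:
    """Return True if a user's recent questions suggest an automatic ask-ban."""
    penalty = sum(w_closed * q.closed + w_deleted * q.deleted for q in history)
    return penalty >= threshold
```

The design point is simply that closure, being a deliberate moderation action rather than a drive-by vote, can carry a higher weight without the noise that raw downvotes introduce.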
What’s more, if there were no up- or down-votes on questions, anyone who disliked a question would have to weigh the potential reasons to close it more carefully, making closure a more reliable signal and keeping the site tidier. Admittedly, this could lead to overreaction, especially early on, before a mature site culture establishes norms, so the leave-open and reopen mechanisms would need to be robust. The system might also need to distinguish explicitly between a temporary hold to get some needed editing done and a mistaken closure that was later reversed.
Have an easy-to-access sort by questions’ answer quality: “Questions By Best Answer”, or something along those lines. A canonical question will inevitably have a highly-scored answer. In fact, the top answer’s score is usually at least a slightly better indicator of a question’s true usefulness than the question’s own score. A +100 question with a +80 answer may be very well asked, but most people can’t benefit from it as much as from a +80 question with a +100 answer, never mind a +80 question with a +200 answer.
Strictly speaking, this kind of sort is also useful for answerers on the main site. I have often wished for an easy way to find questions with inadequate answers so I could contribute where there was a need, but searching for these is extremely non-trivial on SE. (In fact, I think you have to either use SEDE, with its week-long refresh lag, or write an API client yourself.) Being able to sort by answer score in either direction lets you see the best-answered questions (to learn from) or the worst-answered ones (to contribute to) among the most fertile questions. (Being able to further sort by views, subscribers, etc. would help a lot as well.)
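The sort itself is trivial once you key on the top answer's score rather than the question's. Here is a minimal sketch of both directions; the data model and field names are invented for illustration and don't correspond to any real SE or API schema.

```python
# Sketch of a "Questions By Best Answer" sort over a toy in-memory model.
# Field names are illustrative assumptions, not a real schema.
from dataclasses import dataclass, field


@dataclass
class Question:
    title: str
    score: int
    answer_scores: list = field(default_factory=list)


def best_answer_score(q: Question) -> int:
    # Unanswered questions sort as 0, below any positively-answered one.
    return max(q.answer_scores, default=0)


def by_best_answer(questions, worst_first: bool = False):
    # Descending: best-answered first, for readers looking to learn.
    # Ascending: worst-answered first, for answerers hunting unmet needs.
    return sorted(questions, key=best_answer_score, reverse=not worst_first)
```

For example, the +80 question with a +200 answer mentioned above would outrank the +100 question with a +80 answer under this sort, which is exactly the inversion being argued for.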