The Internet Entrance Exam
Why you should pass a test before commenting online
If you can imagine anything worse than YouTube comments, it would be those comments playing on top of the video. That’s exactly what the Chinese website Bilibili does, and they somehow do it without opening a steaming portal to hell. How?

They make users pass a test.
Before you can use Bilibili’s annotation system (called danmu), you have to pass a 100-question test about etiquette and domain knowledge. For example:
“Which of the following danmu comments does not count as trolling?
A. So, do you add cilantro or not?
B. I do not want to be a person anymore.
C. What kind of soft tofu is the most delicious?
D. This is not as good as XX”
Honestly, I don’t know the answer here. C? Cilantro is divisive, but is it OK to ask about tofu?
Anyways, the end result is that you get what you measure for: a high-quality community that can comment on top of videos without making other users lose faith in humanity. And the concept isn’t just a Chinese outlier.
Facebook Used To Have (Somewhat Elitist) Standards
Perhaps you don’t remember, but there was a time when using Facebook required an education. You literally couldn’t sign up without a university email.
At that point Facebook was a better place because it was selecting for educated users. Perhaps there are other ways to filter, but the relevant fact is that there was some limit on the system; it was screening for some measure of quality.
Since then, Facebook and most digital platforms have prioritized total user numbers rather than the quality of those users or even whether the users are human. So what you get is not better people but more people, because that’s what the algorithms (and accountants) are optimizing for.
This has been good for establishing a monopoly business model, but it has also been bad for individual mental health and, arguably, democracy. And it now poses a reputational and legal risk to those businesses as they find themselves dealing with live-streamed suicides, cross-border election meddling, bullying, etc.
More Than More
The question we should be asking is: is this the only way?
Right now the default model for platforms is that they should be completely open, because:
- Free speech
- Frictionless growth
- Moderation takes resources
I would argue that the free speech argument is a category error. Free speech is a right you have with regard to your government, not to private property. You have the right to speech, but the New York Times doesn’t have to publish it. Neither do Facebook, Twitter, or YouTube.
Right now they publish just about anything because they like making money and not spending it on moderation, but this is increasingly becoming a reputational, legal, and ultimately business risk. Plus, some of these companies did actually start with a desire to make the world better.
At the same time, if decent, healthy discourse is something people actually want and would put time and money behind (debatable), then this is a market opportunity for a disruptive competitor.
Either way, there is another model, one where quality and decency are baked into the platform. You can require an exam before users can comment, and/or measure quality and decency on an ongoing basis. The algorithm doesn’t have to prioritize only more. It can prioritize better, and factor that into what gets published and seen, as in the sketch below.
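To make that concrete, here is a minimal sketch of what “prioritize better” could look like. Everything in it is hypothetical: the Comment fields, the hard exam gate, and the 70/30 weighting are illustrations of the design choice, not any platform’s actual ranking algorithm.

```python
# A minimal sketch of "prioritize better, not just more."
# All names, fields, and weights are hypothetical illustrations,
# not any platform's actual ranking system.

from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    author_passed_exam: bool      # a Bilibili-style entrance test
    author_quality_score: float   # 0.0-1.0, ongoing measure of decency
    engagement: float             # 0.0-1.0, the raw "more" signal: clicks, replies

def visibility_score(c: Comment, quality_weight: float = 0.7) -> float:
    """Blend quality with engagement instead of ranking on engagement alone."""
    if not c.author_passed_exam:
        return 0.0  # hard gate: no exam, no visibility
    return (quality_weight * c.author_quality_score
            + (1 - quality_weight) * c.engagement)

comments = [
    Comment("So, do you add cilantro or not?", True, 0.9, 0.4),
    Comment("This is not as good as XX", True, 0.2, 0.8),
    Comment("drive-by insult", False, 0.0, 0.9),
]

# Publish in order of blended score; the troll-ish but "engaging" comments sink.
for c in sorted(comments, key=visibility_score, reverse=True):
    if visibility_score(c) > 0:
        print(f"{visibility_score(c):.2f}  {c.text}")
```

The point of the sketch is the shape of the decision, not the numbers: an entrance exam acts as a hard gate, and an ongoing quality signal gets a seat at the ranking table alongside engagement.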
I think many people (certainly many Medium readers) would agree that, more and more, it has become an Internet Of Shit. But that’s very much an engineering problem. If you design for more, you get more. If you design for better, you can get that instead.