Can't this be used for censorship and online tracking?

Our solution gives the client full control, so no government can dictate content restrictions. As long as the traffic is encrypted via HTTPS, nobody on the network can tell whether unsafe content was requested or not.

Also, to websites all browsers look the same. In theory, a user's preference might be retrievable through some clever scripting. However, there's not much to gain from this information, as there are already far easier ways to track users (e.g. the user agent or the screen resolution).

Isn't the protection easy to circumvent?

We believe our specification hits a sweet spot: it has very few disadvantages while still being very effective.

Of course, our specification can be circumvented if someone is willing to put in the effort to do so. Given enough dedication, any measure against inappropriate content can and will be circumvented. On the other hand, if the restrictions were stronger, illegal providers could emerge to fill the demand.

What about different cultures?

We are aware that the terms "inappropriate" and "unsafe" have different meanings in different cultures. Our specification allows websites to define precisely why content is unsafe. For example, a website can tell the client that its content contains violence but no other category of unsafe content. If that's acceptable to the user, they can simply choose to display this category of unsafe content while still blocking other categories, as sketched below.
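To make this concrete, here is a minimal sketch of how a website could declare such a category to the client. The header name "Content-Safety" and the value syntax "unsafe=violence" are illustrative assumptions for this example only, not the exact wire format of the specification.

```python
# Minimal sketch of a server labeling its responses with a hypothetical
# "Content-Safety" header. The header name and the category value
# ("violence") are assumptions, not the actual syntax of the specification.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LabeledHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<p>Content that contains violence, but nothing else unsafe.</p>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # Declare exactly which categories of unsafe content this page contains.
        # A client that tolerates "violence" can render the page as usual;
        # one that blocks the category can hide or replace it.
        self.send_header("Content-Safety", "unsafe=violence")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), LabeledHandler).serve_forever()
```

The important point is that the label is per response and per category, so the decision about what to show always stays with the client.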

If you fight for youth protection, why do you oppose digital age verification?

We don't oppose digital age verification itself. We simply believe that youth protection on the internet is an issue that must be addressed and that privacy is a resource that must be preserved. Sacrificing either of them in favor of the other is never a good decision as long as we have alternatives. So far, we haven't seen a plan for privacy-friendly digital age verification.

The other problem is that age verification is already opposed by many activists and organizations. As long as two parties fighting for two separate good causes oppose each other, we will only impede our own progress and waste our resources.

And most importantly, we're looking for a solution that can work everywhere around the world. Digital age verification might be used in a few countries but will not affect the majority of the world. Therefore, we need a different approach.

Why is this better than a website filter?

Website filters are inherently coarse. It's simply impossible to create a list of all websites that are safe or unsafe to browse. And even if that list existed, websites with both safe and unsafe (NSFW) content, like Reddit, would still need to be blocked.

In short, every website filter will either block too much or too little. Our specification gives every website a simple tool to label its safe and unsafe content at a fine granularity. In the long term, as adoption of the preference increases, safe content should no longer be mistakenly blocked.

Can this be used by search engines to provide better results?

Yes, this is a really powerful synergy. Currently, search engines have a hard time filtering out unsafe content for users who want to search safely. The new specification makes it very easy for search engines to figure out which content is safe and which isn't.
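As a rough illustration, a crawler could read the same hypothetical "Content-Safety" label from the earlier example to decide whether a page belongs in safe-search results. The header name and value parsing are again assumptions made for the sake of the sketch.

```python
# Rough sketch of a crawler deciding whether a page may appear in
# safe-search results, based on the hypothetical "Content-Safety" header
# from the example above. Header name and value format are assumptions.
from urllib.request import urlopen

def is_safe_for_search(url: str, allowed_categories: frozenset = frozenset()) -> bool:
    """Return True if the page declares no unsafe categories beyond the allowed ones."""
    with urlopen(url) as response:
        label = response.headers.get("Content-Safety", "")
    if not label.startswith("unsafe="):
        # Unlabeled pages are treated as safe here; a real search engine would
        # combine this signal with its existing classifiers during adoption.
        return True
    declared = set(label[len("unsafe="):].split(","))
    return declared <= set(allowed_categories)

# Example: a safe-search mode that tolerates violence but nothing else.
# is_safe_for_search("http://localhost:8000/", frozenset({"violence"}))
```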