UN to Tackle “Information Integrity”

It’s nice to know that in these times the UN continues to be the world police—oh how they care about our “human rights” and safety.

So, not surprisingly, it now aims to deal with “Information Integrity”, having published the “United Nations Global Principles For Information Integrity: Recommendations for Multi-stakeholder Action”. It is the usual corporate-styled document: 41 pages of waffle.

United Nations Global Principles For Information Integrity

While these [technological] advances have enabled the mass dissemination of information, they have also facilitated the spread of misinformation, disinformation and hate speech by many kinds of actors at historically unprecedented volume, velocity and virality, risking the integrity of the information ecosystem.

Ah yes: misinformation, disinformation and hate speech. It’s always about those.

But fret not, it has come up with the “Global Principles for Information Integrity”, which are:

  • societal trust and resilience

  • healthy incentives

  • public empowerment

  • independent, free and pluralistic media

  • transparency and research.

To be fair, some of the problems pointed out are of genuine concern, and pointing out problems makes up almost half the document. The rest consists of recommendations to the various stakeholders, including technology companies, AI actors and the State.

Recommendations for stakeholders

As expected, the right words are used. For example, transparency, respect for privacy, and a free and independent media are mentioned often, even though the usual stakeholders have mostly been doing the opposite anyway.

In any case, a few parts hint at the framework’s true intent, which is control. A few examples from the recommendations to technology companies are provided below.

Regarding “consistent content moderation”:

Cooperate with independent, third-party organizations to develop content moderation processes in line with international human rights standards and ensure that such policy is enforced consistently and non-arbitrarily across areas of operation.

Hmm… would that be “fact checkers”? We know how useful they are.

Take measures to address content that violates platform community standards and undermines human rights, such as limiting algorithmic amplification, labelling and demonetization.

Right, because an “independent, free and pluralistic media” cannot thrive without “limiting algorithmic amplification”.

Regarding “industry standards”:

Ensure cooperation between platforms and services, recognizing that risks can spread across various information spaces, each with unique design flaws and policy gaps that can be exploited.

Like the platforms themselves, standards and capabilities are neutral in themselves. Compatible standards and cooperation between platforms can be an advantage or a disadvantage, depending on the context. That separation between platforms is assumed to be a flaw whenever “risks” are involved betrays the document’s true intent, or perhaps mere idiocy.

After all, if there is a problem, sometimes it is diversity and/or a degree of separation/disconnect that prevents it from spreading, or from spreading in a way that is harmful. You don’t want all your crops to be of the same strain and therefore susceptible to the same disease… Unless, of course, this is referring to when, for example, some non-lamestream view is banned from YouTube but then posted on Rumble instead. Well, can’t have that now, can we?

As for “crisis response”:

Working with stakeholders operating in high-risk areas, establish early warning and escalation processes with accelerated and timely response rates in contexts of crisis and conflict. Establish mechanisms to enable prominent, timely access to reliable, accurate information that serves the public interest.

If this is referring to severe weather and earthquakes, then this is right and proper. However, “elections” and “human-made crises” are mentioned as examples in the introductory section, so one has to wonder whether this means pre-empting or prebunking whatever is declared misinformation or disinformation by whoever is in power.

As is common, this is at best drivel or at worst the usual thinly veiled plan for control.


Be sure to subscribe to our mailing list so you get each new Opinyun that comes out!
