Issue
Online information platforms like Facebook and Twitter, which are a major source of information for many, have recently introduced labels on posts that link to articles from reliable sources. However, users lack an incentive to read further.
Proposal: prompt and reward scheme
Therefore, I propose a scheme that expands the scope of suggested articles and rewards readers who read them.
The reward: readers who accumulate enough points can redeem ad-free access to the platforms. This reward is attractive to active platform users, and the platforms can provide it without incurring extra cost.
Information to provide
Reliable explainers: Where the information involves no value judgement, platforms should prompt explainers of the issue from reliable sources. For instance, platforms can provide authoritative information on the Japanese nuclear-wastewater issue supplied by reputable universities or the WHO.
Opposing views: Where the information involves value or moral judgement, platforms should provide articles presenting opposing views. Although no ultimate answer is given, this policy can encourage the habit of looking at both sides of the coin.
Materials teaching digital literacy: Google currently offers a media literacy programme (Note 1) that teaches people to identify fake news. With rewards, people are more willing to read these short explainers and take the accompanying quizzes.
Ethical concern: what exactly to provide?
The real issue lies not in what kind of material to prompt, but in what exactly to prompt. Unwise selection will only generate more bias. Therefore, I propose that platforms set up a local task force (the "Force") to handpick location-specific reliable sources. It can be the existing team responsible for screening fact-checkers. (Note 2)
The guiding factors to consider when handpicking a source, I propose, include (non-exhaustively) the organisation's or individual's (1) background and nature; (2) expertise in the subject matter; and (3) track record of transmitting misinformation.
Explainers must be provided by internationally or academically recognised institutes, such as a UN branch or a renowned university. Opposing views must come from articles published in reliable media, such as newspaper columns, or written by commentators deemed honest and convincing by the Force. Materials teaching digital literacy can of course be designed by the Force itself, or bought from organisations dedicated to promoting digital literacy, such as the Factcheck Lab (Note 3) from Hong Kong.
Limitation
Apart from the potential for low participation, a major limitation of this proposal is that the Force is required to handpick information. This inevitably involves making a judgement on what is reliable, which is where controversy arises. Nonetheless, I believe such judgement is a "necessary evil", and the Force, in exercising its discretion, must remain rigorous, sensitive, and sensible.
By analogy with the education context, a teacher must inevitably exercise judgement in selecting materials. All one can do is try one's best to maintain diversity and avoid indoctrination.
Footnotes:
- Note 1: The Google News Initiative
- Note 2: Currently, Meta engages fact-checking institutes to filter misinformation.
- Note 3: FactcheckLabHK