Note: Originally written (but never published) in 2020, then revisited and published in 2025. I’ve added annotations and extended reflections on trust in our AI-dominated era.
Wikipedia and the Sin of Omission
I just finished reading Catch and Kill by Ronan Farrow. The writing was satisfying, and the story was horrifying yet relatable on many levels. I was on the last few chapters yesterday when I came across a passage that felt more personal than the others, because it discussed the project I worked on: Wikipedia.
In the book, Farrow describes the painstaking process of bringing his research about Harvey Weinstein, Matt Lauer and other sexual predators (some convicted — some suspected) to NBC. We learn that NBC leadership actively blocked him from reporting on these topics, fostering a culture that accepted misogynistic behavior, retaliation, and sexual misconduct. Later, when the story was published in the New Yorker, NBC attempted to cover up their role in creating an environment where rape could be an open secret. In this context, Farrow mentions someone NBC paid to alter Wikipedia.
“NBC also hired Ed Sussman, a “Wikipedia whitewasher,” to unbraid references to Oppenheim, Weinstein and Lauer on the crowdsourced encyclopedia.” [1]
Farrow then describes various questionable edits and adds,
“Other times, he simply removed all mention of the controversies.”
The book quotes a “veteran editor” saying that it was:
“…one of the most blatant and naked exercises of corporate spin that I have encountered in WP and I have encountered a lot.”
Finally, the chapter concluded with the ominous sentence:
“It was almost as if it had never happened.”
Can Wikipedia be Trusted?
This chapter rendered Wikipedia meaningless in the age of paid cover-ups and catch-and-kill [2] purchasing. Why? Because for Wikipedia to work, we need to trust it. But what does it take to trust Wikipedia?
Does transparency breed trust?
Many people think that in order to trust something you need to know how it works, so that’s the hypothesis I started with: if readers knew how Wikipedia works, they’d be more likely to trust it. Spoiler alert: there’s a lot of research [3] showing that when Wikipedia readers saw how the “sausage was made,” they felt they could trust it less. But for the sake of conversation, let’s take a look at the sausage (a phrase my Jewish mother would be m̶o̶r̶t̶i̶f̶i̶e̶d̶ amused to hear me say).
Part 1: Reading Wikipedia
Lots of people read Wikipedia (understatement) via the web, mobile apps, or even voice assistant services. There isn’t one Wikipedia to rule them all, but more than 300 Wikipedias, written and available to people in different languages. The content isn’t translated, it’s localized, which means that, for example, an article about the Middle East conflict on English Wikipedia will be a different entry than the corresponding article on Arabic Wikipedia (ويكيبيديا العربية). This makes me appreciate Wikipedia a lot, because it’s not history being recorded by the victors but, in theory, every vantage point of a story.
You do not need an account to read Wikipedia.
Wikipedia is or has been censored in several countries.
Here’s a screenshot of the English and French articles on Noah Oppenheim (mobile web versions). Note that while both articles generally contain the same information, the language is nuanced. The English article says he is “best known for attempting to stop Ronan Farrow’s reporting on Harvey Weinstein’s sexual misconduct.” The French article says (my translation), “He’s accused by Ronan Farrow of concealing the facts of the Harvey Weinstein case.”

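If you want to see this divergence for yourself, the comparison can even be scripted. Below is a minimal sketch (not an official client) that pulls the lead summary of the same article from two language editions using Wikipedia’s public REST API; the article title is simply the one from this example, and requests is the only dependency.

```python
# Sketch: compare how two language editions summarize the same subject,
# using Wikipedia's public REST API.
import requests

def fetch_summary(lang: str, title: str) -> str:
    """Return the plain-text lead summary of an article in one language edition."""
    url = f"https://{lang}.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, headers={"User-Agent": "reading-demo/0.1 (example)"})
    resp.raise_for_status()
    return resp.json().get("extract", "")

# Same person, two editions, potentially different framing.
print("EN:", fetch_summary("en", "Noah_Oppenheim"))
print("FR:", fetch_summary("fr", "Noah_Oppenheim"))
```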
Part 2: Making an Edit
Let’s imagine that you come to the page after reading Farrow’s book and want to add information to the article about Oppenheim’s role in the Weinstein sexual misconduct case.
First, you decide whether you will be logged in or out of the encyclopedia when you make the edit. There are many reasons to log in, including attributing the edit to your username, tracking your edit history, and watching the page so you get notifications whenever it is edited. If you don’t log in, you will be editing under your IP address, which is not anonymous, but not exactly openly identifying yourself either.
Next comes the business at hand: deciding what to write. You have a quote from Catch and Kill about Oppenheim. The first question I usually ask is “what fact does this quote validate?” followed by “can I use this quote?” The first question goes back to the whole raison d’être of Wikipedia. Wikipedia is an encyclopedia, which means it’s a collection of facts represented in narrative form. The second question matters because there are policies and guidelines in place to ensure that facts get added in a consistent and appropriate manner. One of the core policies of the English Wikipedia is that articles should be written from a “Neutral Point of View”: rather than advocating for one side in a dispute, articles should describe and contrast different viewpoints and trust the reader to come to their own conclusion. Although, as James Forrester, a Wikipedia editor since 2002 and more recently also a software engineer (now Principal Engineer) at the Wikimedia Foundation, pointed out to me recently [4],
“This does not mean refusing to state facts — the Earth is round, the sky is blue, global temperatures are rising — but you shouldn’t mix up extreme fringe positions with real discussion; you don’t serve the reader by glossing over anti-vaccination advocacy, or pretending that climate change denial doesn’t exist.”
If you’ve gotten this far, from here on out it’s all about the technical task of editing the article. You can either edit the page in full-page mode (like a Google Doc, seeing the entire article and editing it all at once) or zoom in to edit a single section. You should reference the book and author with a citation and then provide an edit summary, so other people can see your intention when they view the article’s revision history.
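For the curious, those same ingredients (the new text, the citation, and the edit summary) map onto MediaWiki’s Action API, which the editing interfaces use under the hood. The sketch below is illustrative rather than a recommendation to edit by script: the title, sentence, and reference are placeholders, and a real edit would normally happen through a logged-in session.

```python
# Sketch of an edit via the MediaWiki Action API (placeholders throughout).
import requests

API = "https://en.wikipedia.org/w/api.php"
session = requests.Session()

# 1. Every edit needs a CSRF token; logged-out sessions get an anonymous one.
token = session.get(API, params={
    "action": "query", "meta": "tokens", "type": "csrf", "format": "json",
}).json()["query"]["tokens"]["csrftoken"]

# 2. Append a sourced sentence and leave an edit summary for the history page.
session.post(API, data={
    "action": "edit",
    "title": "Noah Oppenheim",
    "appendtext": "\nFarrow describes Oppenheim's role at NBC.<ref>Ronan Farrow, ''Catch and Kill'', 2019.</ref>",
    "summary": "Add sourced sentence citing Catch and Kill",
    "token": token,
    "format": "json",
})
```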
Part 3: Revising
As you examine the page, you might spot content that’s redundant or incorrect. You can edit it the same way. While deleting content can be controversial, you can never truly “whitewash” a page, because Wikipedia maintains a revision history that logs every edit. Each edit record includes the time of the edit, the username of a logged-in editor or the IP address of an anonymous one, a link to view the article before and after the edit (the diff), and the size of the change in characters.
Additionally, editors typically write a summary describing their changes as an edit log. Revision histories exist for both article pages and discussion pages.

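That history is not only visible on the page; it is also queryable. Here is a minimal sketch, reusing the article from earlier, that lists the most recent edits with exactly the fields described above:

```python
# Sketch: list the last ten edits to an article via the MediaWiki Action API.
import requests

API = "https://en.wikipedia.org/w/api.php"
resp = requests.get(API, params={
    "action": "query",
    "prop": "revisions",
    "titles": "Noah Oppenheim",
    "rvprop": "timestamp|user|comment|size",
    "rvlimit": 10,
    "format": "json",
    "formatversion": 2,
})
page = resp.json()["query"]["pages"][0]
for rev in page["revisions"]:
    # Timestamp, who edited, how big the page became, and their edit summary.
    print(rev["timestamp"], rev["user"], rev["size"], "-", rev.get("comment", ""))
```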
I chatted [5] with Andrew Kuznetsov, a Carnegie Mellon University PhD student whose research focuses on Wikipedia (still accurate), about revision history in relation to the Farrow book, and he pointed out that:
“… the sin of omission is often caught by a centralized source that understands the entire picture. However, one thing that makes Wikipedia unique is editors don’t think in ‘content’, they think in ‘edits’. Thus, a removal of a sentence is just as noteworthy and documented as the addition of one.”
He went on to say,
“This presents a few unique opportunities for Wikipedia, compared to other social systems (especially ones that people use for news). You may not be able to see the history of a Facebook page and viewing deleted tweets on Twitter is deliberately difficult, but Wikipedia stores all this data (for free). As a consequence, edits that remove good information (a sin of omission) and those that add bad information (misinformation, basically) are surprisingly at near parity in their exposure to editors.”
What Kuznetsov highlights is the sensemaking challenge that decentralized content creation poses for Wikipedia readers. To editors, this is obvious, but readers need to think like content creators to understand how an article has been altered.
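The parity Kuznetsov describes is easy to see mechanically: a diff between any two revisions records removed text just as prominently as added text. A small sketch using MediaWiki’s compare endpoint, with placeholder revision IDs:

```python
# Sketch: fetch the diff between two revisions (IDs below are placeholders).
import requests

API = "https://en.wikipedia.org/w/api.php"
diff_html = requests.get(API, params={
    "action": "compare",
    "fromrev": 111111111,  # revision before the edit (placeholder)
    "torev": 222222222,    # revision after the edit (placeholder)
    "format": "json",
    "formatversion": 2,
}).json()["compare"]["body"]

# The returned HTML marks deleted lines as clearly as inserted ones, so a
# removal leaves the same kind of trace in the record as an addition.
print(diff_html[:500])
```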
Part 4: Joining or Creating a Discussion
If you return to find your edits gone and the page reverted, you have options. From the revision history, you can identify who made the change (username or IP address). You can then contact them via their user Talk page — an on-wiki profile page that editors personalize and use for direct, open communication.
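Mechanically, a user talk page is just another wiki page named “User talk:<username>”, and starting a new thread there is an ordinary edit. A hedged sketch (the names and message are illustrative, and token handling works as in the earlier editing sketch):

```python
# Sketch: open a new section on another editor's talk page via the Action API.
import requests

API = "https://en.wikipedia.org/w/api.php"

def new_talk_section(session: requests.Session, token: str,
                     username: str, heading: str, message: str) -> dict:
    """Post a new discussion section; ~~~~ signs it with your name and a date."""
    return session.post(API, data={
        "action": "edit",
        "title": f"User talk:{username}",
        "section": "new",
        "sectiontitle": heading,
        "text": message + " ~~~~",
        "summary": f"/* {heading} */ new section",
        "token": token,
        "format": "json",
    }).json()
```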
Alternatively, you can take the more public route through the article discussion page. Every Wikipedia article has an attached discussion page where anyone can participate in improving the article. There are detailed guides on engaging in discussion pages — here or here — but what’s important is that discussing edits is an accepted and expected convention. As Forrester explained:
“Writing Wikipedia articles means working together when we’re all apart — separated by location, by background, by language, by experience, by values — means it’s easy to find ways that we disagree. Rather than making bold changes and hoping that people will agree to them after the fact, often you can quickly get to a better improvement by discussing ideas first and then, once agreed, make the changes in concert with others.”

Is radical transparency the answer?
As I’ve explained, significant work happens behind each Wikipedia article. But if we want to increase reader trust, is radical transparency the solution? At minimum, we could explore design interventions like prominently displaying author information and edit timestamps in humane language on the article page. But would knowing that regular humans provide information and that ANYONE can edit it change your perception of the article? Of Wikipedia overall?
The Wikimedia Foundation product team constantly balances an open-source mission with mainstream appeal (which requires making the product accessible to a wide audience). I’d like to think that helping readers understand our commitment to radical transparency by exposing the inner workings on article pages would attract more writers. More editors means better quality content. My thinking is that more voices in the conversation prevent “textbooks” or “histories” from being written from just one perspective.
I view discussion pages and revision histories as valuable artifacts themselves. In my work, I explored the boundary between articles and the editing process behind them. I see a major design opportunity in blending these knowledge spaces within Wikipedia. But for any software to succeed, there must be partnership between the software and its users (both editors and readers). For Ronan Farrow to think more highly of Wikipedia, we’d need to design an ethical, radically transparent experience where you see both the article and editor activity simultaneously. This would require an epistemological shift where readers understand that collaborative content creation on the web is actually beneficial. Imagine a world where every reader felt empowered to be the change they wished to see and took to their laptops to research and fact-check.
Thanks to James Forrester, Adrian Fraser, Ed Sanders, Peter Pelberg, Sylvan Klein, and Amir Aharoni for helping me to articulate my thoughts.
2025 Thoughts
This holds up. I love that I used the words “epistemological shift” in that last paragraph. My updated question is — has that shift occurred with the introduction and spread of AI technology? Reading what I wrote five years ago feels like reminiscing about a bygone era, but I would argue this question of trust remains highly relevant. If I search Google for Noah Oppenheim today, this is what appears:

The content on the left side of the screen is AI-generated by Google AI Mode, which, according to Google’s The Keyword blog:
This new Search mode expands what AI Overviews can do with more advanced reasoning, thinking and multimodal capabilities so you can get help with even your toughest questions. You can ask anything on your mind and get a helpful AI-powered response with the ability to go further with follow-up questions and helpful web links.
Using a custom version of Gemini 2.0, AI Mode is particularly helpful for questions that need further exploration, comparisons and reasoning. You can ask nuanced questions that might have previously taken multiple searches — like exploring a new concept or comparing detailed options — and get a helpful AI-powered response with links to learn more.
From a UX perspective, Google presents an opinionated overview at the top of the page and orders the content below it hierarchically (similar to Wikipedia). For each entry, Google explains its logic and provides source links. This shifts the burden of trust back to the user, who must take the additional step of clicking those links to verify the information.
Extensive research on AI and Wikipedia exists, including “Wikipedia’s Moment of Truth: Can the online encyclopedia help teach A.I. chatbots to get their facts right — without destroying itself in the process?”, published by the NY Times in 2023. While this topic deserves its own blog post, what’s striking is that after five years, the responsibility for truth-seeking still falls on the end user. In AI Mode, Google elegantly places links beside the content (hidden under the link icon), while in the standard overview (a different search tab with a less aggressive information presentation), sources appear more prominently alongside the content (often pulled directly from Wikipedia).

I find Wikipedia’s approach to information access particularly valuable (as of 2025, speaking as an independent observer). The platform offers multiple intuitive ways for users to explore content. On desktop, you can simply hover over a wikilink to preview a summary before clicking through to another Wikipedia page. Similarly, citations can be previewed on desktop or easily accessed through the References section on both desktop and mobile devices.

Wikipedia’s “citation needed” tag stands out as an elegant call to participation. This feature appears when a reader questions the source of a fact and edits the page to request verification. What makes the system powerful is its accessibility: anyone can use the template. By encouraging users to flag unverified information, Wikipedia normalizes critical thinking rather than passive acceptance of presented facts. This raises an important question: when people engage with content, are they genuinely seeking truth?
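Because “citation needed” is implemented as a wikitext template, anyone can even measure how many claims readers have flagged on a given page. A small sketch, reusing the article from earlier; the template names in the regex are the common ones, not an exhaustive list:

```python
# Sketch: count "citation needed" style flags in an article's wikitext.
import re
import requests

API = "https://en.wikipedia.org/w/api.php"
wikitext = requests.get(API, params={
    "action": "parse",
    "page": "Noah Oppenheim",
    "prop": "wikitext",
    "format": "json",
    "formatversion": 2,
}).json()["parse"]["wikitext"]

# {{citation needed}}, {{cn}} and {{fact}} are common names for the template.
flags = re.findall(r"\{\{\s*(?:citation needed|cn|fact)\s*[|}]", wikitext, re.IGNORECASE)
print(f"{len(flags)} statement(s) currently flagged as needing a citation")
```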
— — — — — — — — — -
[1] Pages 399–400 of Ronan Farrow’s Catch and Kill detail the aftermath of reporting on Harvey Weinstein, revealing a conspiracy to suppress the story and media complicity, particularly at NBC.
[2] “Catch and kill” is a phrase that means someone buys a story in order to never share it with the world. Many organizations referenced in Ronan Farrow’s book used this tactic to keep women from “going public” with their sexual harassment claims (through NDAs). See: catch and kill on Wikipedia.
[3] There are lots of studies that describe how revealing Wikipedia’s process decreases reader trust. Two articles that I recommend are Your Process is Showing: Controversy Management and Perceived Quality in Wikipedia and Can You Ever Trust a Wiki? Perceived Trust and Wikipedia.
[4] James Forrester and I talked via Slack on May 5, 2020
[5] Andrew Kuznetsov and I talked via Slack on May 5, 2020

