
AI Transparency Penalty: Why Admitting AI Use Hurts Trust

When Being Honest About AI Makes You Look Less Trustworthy



Artificial intelligence tools like ChatGPT have become everyday helpers. Students use them to polish essays, journalists experiment with them to draft stories, and researchers lean on them for editing or brainstorming. But here's the twist: when writers openly admit they used AI, people often end up trusting them less than if they'd said nothing at all.


Infographic titled 'When Transparency Backfires: How Disclosing AI Use Can Undermine Trust.' Illustration of scales weighing 'AI Disclose' and 'AI Hide.' Three points on the right: (1) AI tools are widely used in academic and journalistic writing. (2) Research shows disclosing AI use makes readers see the work as less trustworthy. (3) Papers without disclosure are judged more credible. Signed 'Laras Fadillah' at the bottom.

Sounds unfair? You're not alone in thinking so. This strange phenomenon is sometimes called the “transparency penalty.”


We normally expect transparency to build trust. If a journalist or researcher tells you exactly how their work was created, that honesty should make them more credible, right? Well, not when AI is involved.


Recent experiments on the AI transparency penalty show that labeling something as "AI-assisted" or "AI-written" often makes readers doubt it. In one study of journalism, audiences who saw AI-disclosure tags judged the articles as less reliable, even though the content was identical to the human-only versions (Toff et al., 2023).

Similarly, organizational research has found that disclosure of AI use can hurt perceived trust and legitimacy, even when it's ethically the right thing to do (Schilke, 2025).


Why Do People Distrust AI-Assisted Writing?

"Infographic titled 'The Psychology of Distrust: How Disclosing AI Use Can Undermine Trust.' Green triangle with three points: Perception of Effort (AI seen as shortcut, less intellectual work), Accuracy Concerns (AI can hallucinate, hidden errors), and Authenticity & Ethics (doubts about originality and integrity). Signed 'Laras Fadillah' at the bottom.

Several psychological triggers are at play:

  • Effort gap: Readers may assume the author "took a shortcut" and didn't put in the intellectual work.

  • Accuracy fears: AI sometimes fabricates facts ("hallucinations"), which sparks doubt about hidden errors.

  • Authenticity worries: In academia and journalism, originality is sacred. Knowing AI was involved raises questions about integrity.

In short, the disclosure makes people question the process, even if the outcome is solid.


So, Should We Hide AI Use?

Infographic titled 'Bridging the Trust Gap: How Disclosing AI Use Can Undermine Trust.' Green path diagram with four steps: (1) Contextual Disclosure – explain how AI was used (grammar, structure, not ideas). (2) Source Transparency – credibility with citations and references. (3) Let your data do the speaking – clear methods and robust results. (4) Cultural Shift – normalize AI as everyday tool (like spell-checkers). Closing statement: 'Balance honesty, credibility, and integrity in the AI era.' Signed 'Laras Fadillah' at the top right.

That's the ethical dilemma: if you hide it, people trust you more; if you disclose it, you're being honest but paying a credibility price.

Some researchers suggest better forms of disclosure could soften the problem. For example, instead of a vague line like "This article was AI-assisted," authors could be specific: "AI was used to check grammar and spelling, but all ideas and analysis are the author's." Others argue that stronger sourcing and clear references help reassure readers regardless of AI involvement.

And over time, society's view of AI may shift. Just as nobody today questions whether spell-check or citation software undermines academic credibility, we may eventually see AI tools as just another part of the workflow.


Right now, though, honesty about AI comes with a paradox: it's more ethical, but it can make your work look less trustworthy. That puts writers, researchers, and journalists in a tricky spot. Until norms catch up, the best we can do is disclose responsibly, be clear about how AI was used, and back up our work with solid sources.

Because in the end, it's not just about who wrote the words; it's about whether the ideas stand strong.
