Machines of Truth and Distortion

A Citizens' Call to Action: Preparing America for the AI Flood

By Kenneth Russell DeGraff, Joan Shorenstein Fellow, Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School

Executive Summary

AI systems now fabricate with the same fluency as they convey facts. Google’s AI Overviews have replaced expert-verified information with plausible falsehoods, while its NotebookLM podcast distorted this very analysis—fabricating policy recommendations I never made. Meta’s FungiFriend chatbot went further, offering users potentially lethal mushroom-foraging advice. When even advanced AI confidently misrepresents reality while projecting false authority, we confront an unprecedented machinery of reality distortion—one that increasingly mediates how millions understand the world.

Hobby Lobby replaced human artists and now sells prints of AI slop. The question is no longer whether generative AI will shape society, but how: will we direct these tools toward solving humanity’s greatest challenges, or allow them to deepen misinformation, inequality, and social decay?

This analysis examines three interconnected crises imposing mounting burdens on American society:

The erosion of shared truth in a new AI-driven digital age

The unchecked power of tech companies to shape public discourse, knowledge, and human behavior

The hidden economic and environmental costs of AI infrastructure citizens unknowingly bear

At the heart of these crises lies a simple truth: too much power concentrated in too few hands. This fundamental imbalance allows corporate interests to reshape our digital landscape, economic reality, and physical environment with minimal accountability or oversight.

Importantly, while this analysis raises serious concerns about Big Tech's consolidated power, many solutions actually align with Silicon Valley's self-interest: reduced energy costs, improved infrastructure, and reliable human-curated content serve both corporate profits and the public good. This alignment could make positive change achievable. Life can be good for all.

The Machinery of Reality Distortion

AI's Paradox: Powerful but Unreliable

Generative AI produces authoritative-sounding falsehoods as fluently as facts. These systems routinely hallucinate and become unreliable when faced with simple name changes or variable swaps, demonstrating fundamental flaws beneath their confident veneer.

NotebookLM, one of Google's flagship AI products, butchered this very analysis—inventing policy recommendations I never made while fumbling a straightforward metaphor. When even sophisticated AI systems so confidently misrepresent content while projecting false authority, how can we build the shared understanding necessary to address our mounting challenges?

Recent research reveals these systems are evolving from mere "hallucinations" into something more concerning: strategic deception. Autonomous AI agents regularly make decisions and take actions their creators neither anticipated nor designed for. These agents threaten to shatter critical boundaries between applications, operating systems, and user data. Signal's leadership warns that such agents risk "a fundamental breakdown of the privacy architecture that keeps our digital lives secure." They typically require access to your browser, payment information, calendar, messaging apps—essentially root-level permissions across your entire system. Because most of this processing happens on remote servers, not your device, private communications, financial details, and personal schedules all potentially flow through corporate servers accessible to governments. The machinery of reality distortion isn't just changing what we see and how we think—it's reaching for control of what we do.

Redistribute Concentrated Power

Big Tech Accountability: Technology giants should bear the full societal costs they impose—from misinformation amplification to unauthorized use of creative works and hidden environmental impacts. The cost-shifting extends far beyond their energy bills, as these companies extract value from human labor, personal data, creative content, and public infrastructure while externalizing harms to communities and our collective problem-solving capacity.

Protect Creative Rights: Creators deserve a seat at the table when policies affecting their future are made. While tech companies and their lobbyists shape AI policy, the artists, writers, musicians, and other creatives whose work trains these systems are largely excluded from these decisions. AI companies trample even basic digital rights, ignoring robots.txt—the Internet's 'No Trespassing' sign—to scrape content without permission.
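For readers unfamiliar with it, robots.txt is simply a plain-text file served from a site's root; the sketch below shows how a publisher asks a crawler to stay out (GPTBot is OpenAI's published crawler token; the rest is an illustrative example). Crucially, compliance is entirely voluntary, which is why ignoring it is so easy.

```
# Served at example.com/robots.txt
User-agent: GPTBot     # OpenAI's web crawler
Disallow: /            # asks it not to crawl any page

User-agent: *          # all other crawlers
Allow: /
```

Nothing technically stops a scraper from fetching the disallowed pages anyway; the file is a posted sign, not a lock.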

Economic Fairness: The issue isn't prosperity but predation—business models that drain communities while enriching the few. Congressional oversight of automation's impact has been inconsistent: robust 1950s investigations found 'enlightened business' accepting responsibility for displaced workers, followed by decades of neglect until 2016. During Congress's four decades of inattention, $50 trillion moved from the bottom 90% of Americans to the top 0.5%—the largest wealth transfer in human history. Research confirms this shift wasn't merely redistributive but fundamentally changed corporate behavior. Studies causally link this short-term orientation to the production of fewer influential inventions, directly impacting not only firm competitiveness but broader U.S. economic growth.

Combat Synthetic News: 'Pink slime' operations—AI-generated local news funded by special interests—now frequently outnumber legitimate newspapers. These operations have evolved from clickbait websites to printed tabloids masquerading as community papers, sometimes resurrecting the names of shuttered local or niche publications. Political groups and industry lobbies flood swing-district mailboxes with these sophisticated forgeries, leaving residents struggling to distinguish propaganda from genuine journalism.

Put People, Not Algorithms, in Control

User-Controlled Algorithms & Data Portability: Critical Tools for Digital Freedom

We need policies and technologies that empower digital self-determination—ensuring people can consolidate content across platforms, choose how they discover and filter content, and freely transfer their data, relationships, and creative works between services.

Imagine selecting a bespoke algorithm that sifts through all your social feeds according to your own definition of “time well spent.” Digital-governance scholars call this choice layer “middleware”—an intermediary between platforms and people that lets each of us tailor how our feeds are curated. These tools would finally flip the script, making platforms compete to earn your trust rather than your clicks. 
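To make the middleware idea concrete, here is a toy sketch of a user-chosen ranking layer that re-orders a merged feed by the reader's own priorities rather than the platform's engagement metrics. Everything here (the `Post` type, `rank_feed`, the weights) is an illustrative assumption, not any real platform's API.

```python
# Hypothetical "middleware" sketch: the user, not the platform,
# supplies the scoring function that orders a consolidated feed.
from dataclasses import dataclass

@dataclass
class Post:
    source: str       # which platform the post came from
    topic: str        # e.g. "local-news", "sports", "outrage-bait"
    minutes_old: int  # age of the post

def rank_feed(posts, topic_weights, recency_penalty=0.01):
    """Score each post by the user's own topic weights, discounted by age."""
    def score(p):
        return topic_weights.get(p.topic, 0.0) - recency_penalty * p.minutes_old
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("PlatformA", "outrage-bait", 5),
    Post("PlatformB", "local-news", 60),
    Post("PlatformA", "sports", 30),
]
# A reader who defines "time well spent" as local news over engagement bait:
my_weights = {"local-news": 1.0, "sports": 0.5, "outrage-bait": -1.0}
ranked = rank_feed(feed, my_weights)
```

The point of the design is that `my_weights` belongs to the reader and is portable across services, so platforms must compete on trust rather than on capturing attention.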

On-device AI models could make such tools more affordable and private. China's DeepSeek V3, for instance, matches or exceeds leading models on some benchmarks. Running its best version requires a powerful computer, but doing so frees you from both subscription costs and the Chinese Communist Party's ideological constraints built into its hosted versions. While DeepSeek likely trained on unauthorized materials, much as Meta and OpenAI are alleged to have done, that doesn't excuse anyone stealing from creators to build competitors.

Unlike China, America and other democracies value creative rights and fair compensation. Surrendering creators' rights to Big Tech risks realizing the dystopian world of 'Ready Player One'—where creative work is systematically extracted from the masses to build immersive virtual worlds that simultaneously entertain and exploit them, concentrating wealth and power in the hands of tech oligarchs. AI agents represent a profound restructuring of digital security, one that prioritizes corporate convenience and "magic genie" functionality over the fundamental right to privacy—all so our "brains can sit in jars" while AI handles life's details.

Economic Adaptation and Resilience

The Impact of AI on Labor Markets

Recent research reveals each profession faces a distinct "inflection point" with AI tools. AIs, trained on material created by the very humans they displace, initially act as productivity multipliers. Once AI capabilities cross a certain threshold, however, the relationship inverts dramatically. The recent financial downturn and the Trump-initiated trade war (which Congress can turn off at any time) have only intensified corporate adoption of Shopify-style policies requiring roles to pass an 'AI-immunity test' before new positions are approved.

A sobering assessment comes from a forecaster regularly ranked among the top 20 in technology forecasting: there's a 50% chance that within 10 years, technological advancement will eliminate most current employment. While he acknowledges there will always be some roles where humans are intrinsically preferred over AI, he doubts these positions alone could sustain anything close to our current workforce.

The solution lies not in universal basic income alone, as some in the AI industry advocate, but in a profound reorientation of our political economy: frameworks for infrastructure, education, and innovation that reject inefficient status quos, properly align incentives, and address the root causes of problems like national security threats, climate change, and economic hardship while building families' economic security and community prosperity. A comprehensive framework of social supports could transform American resilience through affordable housing, health and child care, job training, universal pre-K, and guaranteed sick leave, paired with an innovation agenda that enhances market competition—all at about one-seventh the annual cost of the Bush-era deficits.

Despair and the Justification Machine

New research by neurologists finds that people in despair are particularly vulnerable to misinformation, conspiracy theories, and radicalization. An epidemic of loneliness—the "anti-social century"—erodes social ties, driving many deeper into digital worlds where misinformation thrives uncontested. Compounding the problem, approximately half of American adults struggle with reading comprehension to some degree.

Research has shown that repetition increases perceived truth equally for plausible and implausible statements. Today's internet functions less as a brainwashing engine and more as a "justification machine" where "a rationale is always just a scroll or click away." The incentives of the modern attention economy—where engagement and influence are rewarded—ensure there will always be a rush to provide such rationales, regardless of their truthfulness. Users aren't seeking truth but confirming existing beliefs, making it increasingly easy to maintain those beliefs even in the face of contradicting evidence.

Finland's systematic approach to digital literacy—where ninth-graders consistently lead global rankings in detecting misinformation—offers a proven model. Its integration of fundamental skills—reading, mathematics, finance, science, civics, critical thinking—throughout K-12 education creates a foundation for evaluating sources and using AI responsibly; Finland even gives every ninth-grader a free book.

The Planet's Price of Progress

The Horse Manure Crisis of Our Time

In 1894, London and New York faced what seemed an insurmountable crisis: their streets were drowning in horse manure. The very technology powering urban transportation threatened public health and city life itself. Today's AI boom presents a remarkably similar challenge—computing infrastructure threatens to overwhelm our electrical grid and environment.

AI servers consume one to two orders of magnitude more energy than standard web/email servers, demanding ever-larger facilities that lock regions into decades of ecological consequences. By 2029, AI facilities could require an additional 128 gigawatts—equivalent to powering a small nation.

In North Omaha, where people of color make up 68% of the population and asthma rates rank among the nation's highest, a coal plant scheduled for shutdown continues operating to power Google and Meta's new data centers. Data centers also create dangerous 'bad harmonics'—electrical distortions that damage nearby residents' appliances and increase fire risks in both urban and rural areas. Communities bear these costs through higher insurance rates and damaged electronics. Secret negotiations for data center pricing undermine fair rates, while ratepayers shoulder billions in infrastructure costs. The result? A perverse system forcing everyday people to pay more to burn the dirtiest fossil fuels in vulnerable communities to generate digital pollution and misinformation.

Unlike the 1894 Horse Manure Crisis, clean energy solutions to our AI power challenge already exist and often cost less than fossil fuels. While utilities cite AI-driven growth to justify expensive new fossil fuel plants, independent analysis from Duke University's Nicholas Institute confirms our existing grid can handle significant new loads through flexible management, modernization, and distributed generation. At the time of publication, the price of American methane gas is up 100% from last year.

Meanwhile, utilities are following a predictable playbook—starting the conversation with yesterday's solutions because they earn guaranteed returns on new power plants in ways they don't from grid improvements or efficiency.

This is the time for an all-hands on deck strategy that includes:

• Grid-enhancing technologies that can increase transmission capacity by over 33%, with advanced controls redirecting power around congested lines

• Smart load management to maximize existing infrastructure

• Fast, standardized interconnection rules, like Texas's, that speed clean energy deployment

• Clean microgrids that reduce dependency on fossil generation

The solution requires ending utility secrecy, empowering consumers with data access, and aligning incentives toward improving resilience, affordability, and reducing pollution.

Governing AI: Legal Frameworks for a New Era

AI systems break traditional corporate personhood models because they can develop capabilities and take actions beyond what any human intended or controlled. Tort law's fundamental premise—identifying who caused harm—faces unprecedented challenges in this new reality. Traditional product liability frameworks struggle with AI's continuous evolution, while negligence standards could better hold AI creators to objective standards of proper conduct rather than merely common practice.

Two approaches emerge as potential solutions:

1. The Insurance Model: Require entities deploying AI systems publicly to carry coverage similar to no-fault auto insurance. Though careful distinctions may be needed between major AI developers and individual users, mandatory coverage would enable compensation for harms while incentivizing responsible development.

2. The Superfund Approach: An AI Superfund could address broader societal impacts. Following established environmental law principles, modest fees on computational resources or data used in AI development could create funds for cleanup and mitigation.

Whistleblower protections are also urgently needed. As AI systems grow more autonomous and potentially harmful, we need mechanisms to protect those who identify risks and harms from within AI companies and development teams.

States are emerging as vital laboratories for AI governance, using inherent powers such as procurement, and are developing models that can inform national policy. Twenty states now have privacy laws, with California and Maryland setting the highest bar. Through global forums and interstate coalitions, state legislators and attorneys general can champion human-centered values in AI development while federal action lags.

The 2024 CrowdStrike incident, which paralyzed operations for thousands of businesses, hospitals, and governments, demonstrates why companies deploying powerful tools like kernel drivers should face meaningful accountability, including potential criminal penalties, balanced with clear safe harbor provisions for adopting proven safety measures.

Congress and states must counter the systematic erosion of traditional duties of care by implementing enforceable standards across sectors—such as social media platforms, utilities, AI developers, and online gambling operators—requiring businesses to prioritize consumer safety over engagement metrics or quarterly returns.

The Path Forward

The central problem is starkly simple: too much power in too few hands has created an unprecedented machinery of reality distortion. The solution must involve redistributing this power—to citizens, communities, and civic oversight mechanisms—while establishing guardrails that ensure technology serves humanity rather than the reverse.

What emerges from this analysis is the revelation of an interlocking machinery of reality distortion that we must collectively dismantle before it's too late. It's not just isolated problems of misinformation, economic exploitation, or environmental damage. Rather, these systems work in concert: AIs feed the internet's 'justification machine,' which is powered by the attention economy, which drives data center expansion, which burdens communities with hidden costs, which erodes trust in institutions, which weakens our collective ability to distinguish truth from fiction and solve problems.

Stay engaged—America needs you now. Hold elected officials accountable by tracking their votes and showing up at town halls. Organize around just one issue you care about—doom scrolling doesn't count. Join a group taking action or start one yourself.

The Principal Edition of this paper contains more detailed analysis, additional policy recommendations, and practical pathways for implementing these solutions at all levels of government.
