Opinion

Andrew Yang renews AI warning as Mark Cuban frames users as learners vs shirkers

Productivity boom could empower individuals or entrench regulated model cartels

[Image: Extended interview: Andrew Yang reflects on 2020 campaign and warns of looming AI consequences (cbsnews.com)]

[Image: Mark Cuban said people who use AI either use it to learn everything or to avoid learning. Credit: Tom Williams/CQ-Roll Call, Inc via Getty Images (businessinsider.com)]

Andrew Yang is back on his old beat—warning that artificial intelligence will reorder work and politics—but the more interesting question is not whether AI is “dangerous.” It’s: dangerous to whom?

In an extended interview with CBS News, Yang revisits his 2020 campaign arguments that automation and AI would displace workers faster than institutions can adapt, and he promotes a new book about his political experience. The Washington version of this story is predictable: “AI is coming, therefore government must act.”

A better test is uncomfortable: will AI be a productivity multiplier for individuals, or a licensing and cartel tool for institutions?

Business Insider, quoting Mark Cuban, captures the fork succinctly. Cuban says there are “two types” of large-language-model users: those who use AI to “learn everything,” and those who use it so they “don’t have to learn anything.” That’s not just a self-help aphorism; it’s a theory of power. If AI makes a competent individual dramatically more capable—faster research, better writing, cheaper prototyping—then it decentralizes leverage. One sharp operator can do what used to require a department.

But if AI becomes a crutch that deskills users, it centralizes power in the organizations that control models, compute, distribution, and compliance. The “AI safety” agenda—often framed as benevolent risk management—can easily become a regulatory moat: mandatory audits, model registration, controlled access, and liability regimes that small actors cannot afford. The outcome is familiar: a handful of approved vendors, a “responsible” API, and a permanently supervised permission slip to innovate.

Yang’s political instincts tend toward national programs—most famously universal basic income—because he reads disruption as a macro problem requiring macro relief. Yet bureaucracies respond to technological shocks by protecting incumbents, not by maximizing human agency. When the same institutions that struggle to ship a functional DMV website promise to certify which AI models are “safe,” the irony writes itself.

The real divide is not pro- or anti-AI; it’s open versus gated. Open tools let people learn, build, and compete. Gated tools—wrapped in compliance, watermarking mandates, and “misuse” policies enforced by centralized platforms—turn AI into a controlled utility.

Cuban’s dichotomy is ultimately about character: do you use the machine to expand your competence, or to outsource it? But the political economy question is about incentives: do we let people own and run powerful models, or do we build a regime where you may access intelligence only as a service—metered, monitored, and revocable?

If Yang is right that AI will disrupt everything, then the fight over who gets to use it freely is the fight over whether the next productivity boom belongs to citizens—or to the credentialed class that promises to manage it for them.