Examining whether Sam Altman’s AI policy can prevent wealth concentration and job collapse
In March 2015, I published a blog post titled “Artificial Intelligence and What It Means,” in which I projected future AI as an “Ultimate Benevolent Didactic Monk Mind”—a perfectly logical, ego-free entity that would solve humanity’s fundamental problems and usher in an era of abundance, leisure, and universal flourishing. Money would cease. Work would cease. Hunger would vanish. Religion would become obsolete, replaced by the AI’s patient, loving guidance.
That vision is Utopian. Its premise is that ultimate, universal benevolence will shine through. However, we must first survive a violent birth, or rather the transition from our current status quo to this benevolent paradigm. That is no easy task, as current tragic events in the Middle East make clear. Only with the demise of the egotistical, maniacal Western desire for global resource control do we have a fighting chance.
In February 2026, I published a far grimmer analysis: “AI-Driven Collapse 2026–2100,” which mapped a trajectory of job destruction, demand collapse, demographic suicide, and the fracturing of humanity into AI-utopia enclaves, managed oligarchies, and subsistence zones. That model assumed deliberate policy non-intervention: AI would be deployed wherever it cut costs and concentrated power, regardless of social consequence. That vision was dystopian. It assumed malevolence was structural: a reality when measured against short-term human goals and objectives.
Now, on 6 April 2026, Sam Altman and OpenAI have published a thirteen-page policy document titled “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” It sits, deliberately and strategically, between those two poles. It is neither my 2015 utopianism nor my 2026 pessimism. It is a carefully constructed argument for what I would call managed superintelligence capitalism—a system in which AI is allowed to scale, but within a framework of deliberate policy choices designed to distribute its benefits more broadly than market forces alone would permit.
The question is whether Altman’s framework is realistic, sufficient, or merely a sophisticated form of elite reassurance. Let me work through it.
What Altman Proposes
Altman’s document is organised around two pillars: building an open economy with broad access and shared prosperity, and building a resilient society through accountability, alignment, and risk management.
On the first pillar—the open economy—Altman proposes:
Worker voice in AI deployment: Formalise collaboration between workers and management to ensure AI improves job quality, enhances safety, and respects labour rights.
AI-first entrepreneurship: Support workers in turning domain expertise into new companies by using AI to handle overhead (accounting, marketing, procurement).
Right to AI: Treat access to foundational AI models as basic infrastructure, like electricity or the internet, ensuring broad, affordable access.
Tax modernisation: Rebalance the tax base away from labour income and payroll taxes toward capital gains, corporate income, and “sustained AI-driven returns.”
Public Wealth Fund: Create a fund that allows every citizen to share directly in AI-driven economic growth, regardless of their starting wealth or access to capital.
Efficiency dividends: Convert AI-driven productivity gains into durable improvements in workers’ benefits—higher retirement contributions, expanded healthcare coverage, subsidised childcare, and, crucially, a 32-hour workweek with no loss of pay.
Adaptive safety nets: Ensure that unemployment insurance, SNAP, Social Security, Medicaid, and Medicare are fully functional and responsive, with automatic expansions when AI-driven disruption exceeds pre-defined thresholds.
Portable benefits: Decouple benefits from employment by expanding access to healthcare, retirement savings, and skills training through portable accounts that follow individuals across jobs and industries.
Pathways into human-centred work: Invest in care and connection economies—childcare, eldercare, education, healthcare, community services—as pathways for workers displaced by AI.
Scientific acceleration: Build distributed networks of AI-enabled laboratories to expand the capacity to test and validate AI-generated hypotheses at scale.
On the second pillar—resilience and governance—Altman proposes:
Safety systems: Research and develop tools to protect models, detect risks, and prevent misuse in high-consequence domains (cyber, biological, etc.).
AI trust stack: Develop systems that help people trust and verify AI systems, including provenance standards, verifiable signatures, and privacy-preserving audit systems.
Auditing regimes: Strengthen institutions like the Center for AI Standards and Innovation (CAISI) to develop auditing standards for frontier AI risks.
Model-containment playbooks: Develop coordinated playbooks to contain dangerous AI systems if they are released into the world.
Mission-aligned corporate governance: Frontier AI companies should adopt Public Benefit Corporation structures with explicit commitments to ensure benefits are broadly shared.
Guardrails for government use: Establish clear legal and technical standards for how governments can and cannot use AI.
Mechanisms for public input: Create structured ways for public input so that alignment is not defined only by engineers and executives.
Incident reporting: Establish mechanisms for companies to share information about incidents, misuse, and near-misses with designated public authorities.
International information-sharing: Develop a global network of AI Institutes that collaborate on shared protocols for information exchange, joint evaluations, and coordinated mitigation.
Where Altman’s Framework Sits
Altman’s document is explicitly positioned as a middle path. It rejects both the libertarian “let markets rip” approach and the heavy-handed state control model. Instead, it proposes what amounts to social-democratic capitalism with AI-era guardrails: a system in which AI is allowed to scale and create enormous productivity gains, but within a framework of deliberate policy choices—tax reform, public wealth funds, worker voice, portable benefits, and international coordination—designed to ensure those gains are shared more broadly.
The tone is optimistic but not utopian. Altman acknowledges real risks: job and industry disruption, misuse of AI, misalignment of systems, concentration of power and wealth. But he frames these as problems to be managed through policy, not as inevitable outcomes. The document is, in essence, a call for proactive industrial policy to navigate the transition to superintelligence in a way that “keeps people first.”
The Realism Test: Where Altman’s Framework Holds
There are several elements of Altman’s proposal that are genuinely grounded in economic and political reality:
Tax reform is necessary: Altman is correct that as AI compresses labour costs and expands capital returns, the tax base that funds social programmes will erode unless tax policy adapts. Payroll taxes and labour-based revenues will decline; capital-based revenues must rise. This is not ideological; it is arithmetic. Most advanced economies will eventually face this pressure.
Worker voice matters: Altman’s proposal to formalise worker input on AI deployment is not radical, but it is important. Workers do have knowledge about how work is actually performed, and where AI can improve outcomes. Ignoring that knowledge is both inefficient and politically dangerous. The proposal is modest—not worker ownership, not veto power, but formal consultation—but it is a step toward acknowledging that workers are stakeholders, not just inputs.
Portable benefits are overdue: The decoupling of benefits from employment is a genuine necessity in an era of gig work and precarity. Altman’s proposal to expand access to healthcare, retirement savings, and training through portable accounts is sensible and, in some form, inevitable. The question is whether it will be implemented generously or stingily.
Care work is a real growth sector: Altman’s emphasis on investing in childcare, eldercare, education, and healthcare as pathways for displaced workers is sound. These sectors are labour-intensive, hard to automate fully, and growing in demand as populations age. They are also genuinely valuable. This is not make-work; it is real work that needs doing.
International coordination is critical: Altman’s call for a global network of AI Institutes and shared protocols for evaluation and risk-sharing is realistic in recognising that AI risks—misuse, misalignment, dual-use threats—are global and require coordinated responses. The proposal is modest (information-sharing, joint evaluations), not a world government, but it acknowledges that fragmentation is dangerous.
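The tax-base arithmetic behind the first point above can be made concrete with a toy model. All figures here are hypothetical, chosen purely for illustration, not estimates for any real economy: if labour income is taxed more heavily than capital income, any AI-driven shift of national income from labour to capital shrinks revenue unless the capital rate rises.

```python
# Toy model of the tax-base arithmetic. All numbers are hypothetical
# illustrations, not estimates for any real economy.

def revenue(gdp, labour_share, payroll_rate, capital_rate):
    """Total tax take when labour and capital income face flat effective rates."""
    labour_income = gdp * labour_share
    capital_income = gdp * (1 - labour_share)
    return labour_income * payroll_rate + capital_income * capital_rate

GDP = 100.0          # normalised economy size
PAYROLL_RATE = 0.30  # assumed effective rate on labour income
CAPITAL_RATE = 0.15  # assumed effective rate on capital income

# Today: labour takes 60% of national income.
before = revenue(GDP, labour_share=0.60,
                 payroll_rate=PAYROLL_RATE, capital_rate=CAPITAL_RATE)

# After automation compresses the labour share to 40%, at unchanged rates:
after = revenue(GDP, labour_share=0.40,
                payroll_rate=PAYROLL_RATE, capital_rate=CAPITAL_RATE)

# Effective capital rate needed to restore the old revenue level:
needed_capital_rate = (before - GDP * 0.40 * PAYROLL_RATE) / (GDP * 0.60)
```

In this stylised economy, a 20-point fall in the labour share cuts revenue from 24 to 21 at unchanged rates, and restoring it requires lifting the effective capital rate by a third, from 15% to 20%. That rate rise is precisely the politically contested move Altman is describing.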
The Realism Test: Where Altman’s Framework Falters
But there are also significant gaps and tensions in Altman’s proposal, places where the framework bumps up against harder realities:
The 32-hour workweek is a nice idea, but politically fragile: Altman proposes that efficiency dividends be converted into a permanent shorter workweek with no loss of pay. This is genuinely appealing—more time for family, community, creativity, and rest. But it is also politically vulnerable. Employers will resist it. Governments in free-capital regimes will be reluctant to mandate it. And if it is voluntary, only the most secure workers will be able to take it; precarious workers will feel pressure to work longer hours to maintain income. The proposal assumes a level of political will and labour power that may not exist.
The Public Wealth Fund is underfunded in the proposal: Altman suggests that a Public Wealth Fund be seeded by policymakers and AI companies working together, but he does not specify how much capital should be allocated or how it should be structured. If the fund is too small, it will be a token gesture. If it is large enough to genuinely share AI-driven gains, it will face fierce political opposition from capital owners who see it as wealth redistribution. The proposal is vague on precisely the point where political economy becomes concrete.
Worker voice without worker power is limited: Altman’s proposal for formalised collaboration between workers and management is sensible, but it stops short of giving workers real leverage. In a context where AI is making labour increasingly dispensable, worker voice without worker power—without the ability to withhold labour, to organise collectively, or to veto harmful deployments—is advisory at best. The proposal does not address the fundamental asymmetry: capital can automate; labour cannot.
Tax reform faces entrenched opposition: Altman’s call for higher taxes on capital gains and corporate income is economically sound, but politically difficult. In many jurisdictions, capital owners have enormous influence over tax policy. Shifting the tax burden from labour to capital will face fierce resistance. The proposal assumes a level of political independence and state capacity that many governments do not possess.
The “right to AI” is vague on implementation: Altman proposes treating access to foundational AI models as foundational infrastructure, but the document does not specify how this would work in practice. Would governments mandate that AI companies provide free or low-cost access to their models? Would they fund open-source alternatives? Would they regulate pricing? The proposal is aspirational but underspecified.
Safety and alignment are harder than the document suggests: Altman’s proposals for safety systems, auditing regimes, and model-containment playbooks are sensible, but they assume a level of technical and institutional capacity that may not exist. Auditing frontier AI systems is genuinely difficult; containment of dangerous systems is even harder. The proposal assumes these problems can be solved through better institutions and coordination, but it does not grapple with the possibility that some risks may be irreducible.
The document does not address demand collapse: This is the most significant gap. Altman’s framework assumes that AI-driven productivity gains will be large but that labour demand will remain substantial enough to support a functioning consumer economy. In my February 2026 analysis, however, I outlined a scenario in which AI-driven automation and wage compression lead to demand destruction: goods become cheaper, but workers have less income to buy them. Altman’s framework does not directly confront this risk. The Public Wealth Fund and efficiency dividends are designed to mitigate it, but they are not guaranteed to be sufficient, and both draw on the same depleting bucket for their funding.
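The demand-collapse mechanism in the last point can also be sketched numerically. The figures below are purely hypothetical: real purchasing power contracts whenever wages compress faster than AI-driven productivity pushes prices down.

```python
# Toy sketch of demand destruction: cheaper goods do not help if
# aggregate incomes fall faster. All numbers are hypothetical.

def real_purchasing_power(wage_change, price_change):
    """New-to-old ratio of what the aggregate wage bill can buy."""
    return (1 + wage_change) / (1 + price_change)

# Suppose AI makes goods 10% cheaper, but automation cuts the wage bill 30%:
ratio = real_purchasing_power(wage_change=-0.30, price_change=-0.10)
demand_drop = 1 - ratio  # roughly a 22% contraction in real demand
```

On these assumptions, real demand shrinks by roughly 22% even though every individual good is cheaper. That contraction is what the efficiency dividends and the Public Wealth Fund would have to offset, out of the very capital returns the contraction erodes.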
The Deeper Tension: Can Capitalism Distribute AI Gains?
The core tension in Altman’s proposal is this: can capitalism, even reformed capitalism with strong social-democratic guardrails, actually distribute the gains from AI broadly? Or does the logic of capital accumulation inevitably concentrate those gains at the top?
Altman’s answer is implicit: yes, but only if deliberate policy choices are made. Tax reform, public wealth funds, worker voice, portable benefits, and international coordination can all work to broaden the distribution of AI-driven gains. The document is, in essence, a brief for managed superintelligence capitalism—a system in which AI is allowed to scale, but within a framework of policy choices designed to prevent the worst outcomes.
But there is a countervailing logic, which I outlined in my February 2026 analysis: capital is always trying to minimise labour costs and maximise returns to capital. AI is the ultimate tool for that project. Even with strong policy guardrails, the incentive structure of capitalism will push firms to automate, to compress wages, and to concentrate control. The question is whether policy can overcome those incentives, or whether they are too deeply embedded in the logic of the system.
Altman’s document assumes the former. My February analysis assumed the latter.
Where I Stand Now
Reading Altman’s document in April 2026, I find myself holding both of my earlier positions, but with clearer timelines: a distant utopianism from my 2015 piece and a near-term pessimism from my 2026 post. Altman’s framework is neither naive nor cynical; it is pragmatic and grounded in real economic and political constraints. It acknowledges that AI will create disruption and risk, but it argues that those risks can be managed through deliberate policy choices. It is a bridge between the reality I see bearing down on us and the Utopia that AI can eventually become.
I think he is partially right. Some of his proposals—tax reform, portable benefits, worker voice, international coordination—are genuinely necessary and could make a real difference. But I also think he underestimates the depth of the political and economic resistance to those policies, and he does not adequately address the risk of demand collapse if AI-driven wage compression outpaces productivity gains.
In practical terms, I believe Altman’s framework is the best realistic option available: better than laissez-faire automation, better than heavy-handed state control, and a structured counterpoint to my 2026 pessimism, which assumed policy failure. But it is also a framework that requires enormous political will to implement, and political will is in short supply.
For entrepreneurial families, the implication is this: Altman’s framework is a possible future, not an inevitable one. If policymakers implement his proposals—if they reform taxes, establish public wealth funds, formalise worker voice, and coordinate internationally—then the trajectory of AI-driven inequality and demand collapse can be slowed or partially reversed. But if they do not, or if they implement only the weakest versions of these policies, then my February 2026 analysis becomes more likely: AI will hollow out work, concentrate wealth, and fracture humanity into enclaves and managed majorities.
The entrepreneurial family should therefore hedge: assume that Altman’s framework is partially implemented, but not fully. Own the assets that AI makes more valuable, but also invest in human-scale, non-AI-replaceable sectors. Prepare for both scenarios—the managed superintelligence capitalism that Altman envisions, and the AI-driven collapse that I outlined in February.
Conclusion: The Next Decade Will Decide
Altman’s document is important because it represents a genuine attempt by one of the most powerful figures in AI to articulate a vision of AI-driven growth that is not purely extractive or oligarchic. It is a call for policy choices that would distribute AI gains more broadly. Whether those choices are actually made will depend on political will, international coordination, and the ability of workers and citizens to organise and demand change.
The next decade—2026 to 2036—will be decisive. If policymakers move quickly on tax reform, public wealth funds, and worker protections (however unlikely that sounds), then Altman’s framework could become reality. If they do not (the more likely outcome), then AI will continue on its current trajectory: hollowing out work, concentrating wealth, and fracturing humanity into winners and managed majorities.
My 2015 post was utopian, and we will get there if we do not first destroy ourselves with a less morally capable AI. (The evidence at hand makes clear that our political leaders are immoral.) My February 2026 post was pragmatically pessimistic. Altman’s framework is a realistic middle path, but only if it is actually implemented. The question now is whether the political will exists to make it so.
About the Author
Jeremiah Josey is Chairman of MECi Group and a systems architect specialising in energy infrastructure, advanced technology, and large-scale industrial projects. He bridges visionary thinking—from artificial intelligence and sociocratic governance to ancient symbolism and climate science—with hands-on execution across China, the Middle East, including Türkiye, the Arab states and Iran, as well as Australia. Through initiatives such as IPRI.Tech and The Thorium Network, he helps principals and decision-makers make complex, politically sensitive projects bankable and executable. His approach combines data-driven clarity, consent-based systems design, and deep structural insight to drive rapid growth, operational excellence, and transformative impact. Contact MECi Group via their official website MECi-Group.com
Links
My Feb 2026 Dystopian, Pragmatic Post for Business Families
