{"id":2261,"date":"2025-08-29T18:47:13","date_gmt":"2025-08-29T18:47:13","guid":{"rendered":"https:\/\/firearmupgrades.com\/?p=2261"},"modified":"2025-08-29T18:47:13","modified_gmt":"2025-08-29T18:47:13","slug":"is-peter-thiels-warning-of-ai-dictatorship-right","status":"publish","type":"post","link":"https:\/\/firearmupgrades.com\/?p=2261","title":{"rendered":"Is Peter Thiel&#8217;s Warning of AI Dictatorship Right?"},"content":{"rendered":"<div data-nosnippet=\"\">\n<p>Computers have always been <a href=\"https:\/\/mitpress.mit.edu\/9780262533881\/the-government-machine\/\">governance machines<\/a>\u2014tools used by bureaucracies to <a href=\"https:\/\/direct.mit.edu\/books\/monograph\/2279\/The-Stuff-of-BitsAn-Essay-on-the-Materialities-of\">organize themselves<\/a> to exert power, models used to understand <a href=\"https:\/\/press.princeton.edu\/books\/paperback\/9780691094878\/the-cybernetic-theory-of-decision\">how bureaucracies behave<\/a>, and little <a href=\"https:\/\/ieeexplore.ieee.org\/document\/4544555\">bureaucratic<\/a> organizations <a href=\"https:\/\/ieeexplore.ieee.org\/document\/4544556\">in and of themselves<\/a>. Generative artificial intelligence systems are no exception; they are likely to <a href=\"https:\/\/www.programmablemutter.com\/p\/the-management-singularity\">transform<\/a> how governments, corporations, and other entities organizationally behave.<\/p>\n<p>Large language models (LLMs) and other related systems have already been subsumed into the age-old struggle for political power, as seen in everything from Elon Musk\u2019s <a href=\"https:\/\/www.techpolicy.press\/anatomy-of-an-ai-coup\/\">AI-driven takeover<\/a> of governmental agencies to <a href=\"https:\/\/www.programmablemutter.com\/p\/americas-plan-to-control-global-ai\">technological competition<\/a> between the United States and China. 
Are these systems <a href=\"https:\/\/carnegieendowment.org\/research\/2024\/12\/can-democracy-survive-the-disruptive-power-of-ai?lang=en\">compatible<\/a> with democratic governance, or <a href=\"https:\/\/www.journalofdemocracy.org\/articles\/how-ai-threatens-democracy\/\">threats<\/a> to its survival?<\/p>\n<p>In choosing to frame an always amorphously defined \u201cintelligence\u201d as an inherently singular and self-contained quality, AI designers have unconsciously selected systems that mirror the centralized architectures of the institutions that utilize them.<\/p>\n<p>AI has, throughout its history, emphasized particular solutions to intelligent behavior that trend towards centralization and top-down control. In turn, these tendencies have been reinforced by the manner in which patrons\u2014such as governments and large corporations\u2014see their own ideological and organizational assumptions reflected as computational artifacts.<\/p>\n<p>Past need not be prologue, but less centralized AI may require breaking with the field\u2019s governing assumptions.<\/p>\n<hr class=\"thin-horizontal-rule\"\/>\n<p><span class=\"section-break-text\">In a 2019 talk,<\/span> Peter Thiel <a href=\"https:\/\/mindmatters.ai\/wp-content\/uploads\/sites\/2\/2021\/10\/Mind-Matters-Episode-156-Peter-Thiel-at-COSM-rev1.pdf\">suggested<\/a> AI itself\u2014independent of any particular AI flavor\u2014might be inherently authoritarian. 
Thiel, calling AI \u201ccommunist,\u201d mused about how it could bring back the world as it was before Silicon Valley emerged: \u201cA few large companies, a few large governments, a few large computers that controlled everything.\u201d<\/p>\n<p>The future that Silicon Valley was building, he said, would be one characterized by \u201clarge centralization,\u201d government-like corporations that \u201ccontrol all the world\u2019s information,\u201d and \u201ctotalitarian\u201d computers that know \u201cmore about you than you know about yourself.\u201d Thiel\u2019s comments are worth revisiting in light of <a href=\"https:\/\/www.technologyreview.com\/2023\/12\/05\/1084393\/make-no-mistake-ai-is-owned-by-big-tech\/\">arguments<\/a> about <a href=\"https:\/\/www.wsj.com\/opinion\/trump-can-keep-americas-ai-advantage-china-chips-data-eccdce91\">whether<\/a> or <a href=\"https:\/\/www.forbes.com\/sites\/digital-assets\/2025\/02\/28\/deepseeks-lesson-the-future-of-ai-is-decentralized-and-open-source\/\">not<\/a> only <a href=\"https:\/\/clivethompson.medium.com\/the-dangers-of-highly-centralized-ai-96e988e84385\">big companies<\/a> (and <a href=\"https:\/\/www.forbes.com\/sites\/moorinsights\/2025\/01\/30\/the-stargate-project-trump-touts-500-billion-bid-for-ai-dominance\/\">government backers<\/a>) will control <a href=\"https:\/\/arxiv.org\/abs\/2001.08361\">large<\/a>, resource-intensive AI <a href=\"https:\/\/media.datacenterdynamics.com\/media\/documents\/openai-infra-economics-10.09.24.pdf\">infrastructure<\/a>.<\/p>\n<p>One of Thiel\u2019s unstated assumptions is that AI does not really capture the intelligence that matters the most to human society. In everyday work and life, very little of the information and computation that matters is truly done within us. 
We rely to an astonishing degree on external sources of information, and on external mechanisms (rules, conventions, institutions) that provide us with ways to simplify what would otherwise be costly for us to compute ourselves. We also rely very much on each other to do what we cannot manage alone.<\/p>\n<p>The knowledge and capabilities necessary to do things of value in the world are unlikely to be found in a single centralized place, in a single conveniently standardized format. Sometimes\u2014as with an <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/0004370273900118\">architect<\/a> moving from a vague design sketch to a fully realized blueprint\u2014you have to iterate and experiment to know what you need to know. Information can be bottlenecked by time and order effects.<\/p>\n<p>More broadly, Friedrich Hayek <a href=\"https:\/\/www.econlib.org\/library\/Essays\/hykKnw.html\">famously argued<\/a> that the knowledge of a single planner was vastly inferior to the distributed knowledge of many people acting separately under the coordination of the price system. A dramatic example of the distribution of knowledge and capabilities today is the Taiwan Semiconductor Manufacturing Company (TSMC). If China took Taiwan, it might capture TSMC\u2019s chip fabrication facilities, but it wouldn\u2019t necessarily control them. Those chips, though produced by TSMC, are really the product of a complex chain of relationships among a worldwide network of manufacturers, suppliers, and highly specialized technical personnel.<\/p>\n<p>That\u2019s a tricky problem for AI. Even if one sincerely wanted to replicate the coordinating capacity of things like institutions or markets, it is much easier to build a genius-in-a-box than replicate a stock market <em>in silico. 
<\/em>AI, with a few notable exceptions, emphasizes intelligence as the individual ability to find <a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.aac6076\">computationally efficient<\/a> ways to solve problems rather than the coordination of collective capacities.<\/p>\n<p>Because AI individualizes intelligent behavior, it always faces an uphill battle in making engineered systems solve tough challenges. These problems are far from intractable but, historically, have only been fruitfully addressed when AI researchers gave up on designing systems with even superficial adherence to the biological constraints of natural intelligence.<\/p>\n<p>AI development often follows a <a href=\"https:\/\/homes.luddy.indiana.edu\/nensmeng\/files\/Ensmenger2012-Chess.pdf\">recurring pattern<\/a> first exemplified by computer chess. When the problem is very big relative to the technical resources available to solve it, as chess was in the 1950s, AI researchers try to emulate the ways that humans use knowledge and skill to solve difficult problems\u2014until powerful hardware comes online that makes <a href=\"https:\/\/www.cs.utexas.edu\/~eunsol\/courses\/data\/bitter_lesson.pdf\">simpler brute force<\/a> approaches viable. Pieties about the mysteries of the mind aside, clever heuristics get discarded quickly once raw computing power can do the job.<\/p>\n<p>However much power is applied, the eventual result has been consistently disappointing. Even if the term artificial general intelligence (AGI) is of recent vintage, the idea is as old as the discipline itself. Much like Brazil is always the country of the future, AGI always seems to be just around the corner. AI has contributed many \u201cnarrow\u201d systems that accomplish useful individual tasks in particular circumstances, but has consistently fallen short of its ambitions to make something that truly has it all. 
It\u2019s possible that LLMs are different, but they have yet to overcome a lot of understandable skepticism.<\/p>\n<p>And yet, other forms of software engineering have used a different approach to create computational artifacts\u2014like operating systems\u2014that can handle a far broader set of tasks in a much more diverse range of circumstances. The Linux operating system powers everything from <a href=\"https:\/\/tuxcare.com\/resources\/learning\/embedded-linux\/#pillar-nav-li-2\">small Internet of Things applications<\/a> to <a href=\"https:\/\/www.nccs.nasa.gov\/systems\/discover\">NASA supercomputing clusters<\/a>. Variants of Linux can be found in <a href=\"https:\/\/www.androidauthority.com\/history-android-os-name-789433\/\">phones<\/a>, <a href=\"https:\/\/en.wikipedia.org\/wiki\/Steam_Deck\">game consoles<\/a>, and even <a href=\"https:\/\/www.pcworld.com\/article\/430978\/meet-red-star-os-the-north-korean-linux-distro-that-apes-apples-os-x.html\">North Korean computers<\/a>. Gripes about Linux hardware compatibility aside, it\u2019s very hard to think of something Linux <em>can\u2019t<\/em> do. Linux\u2014and other operating systems like Windows or macOS\u2014are also coordinating devices.<\/p>\n<p>They govern an enormous number of subprocesses that allow users to make use of the hardware underneath, working so harmoniously that their operation is only noticed when something goes wrong. Even if Apple, Microsoft, and others are working to integrate LLMs directly into their operating systems, the LLMs are just one component of many.<\/p>\n<p>As a discipline, <a href=\"https:\/\/www.bcs.org\/articles-opinion-and-research\/does-current-ai-represent-a-dead-end\/\">mainstream software engineering<\/a> has trended over time toward an interlocking collection of practices that make individual programs more reliable. Components, at least ideally, ought to be testable and reusable in isolation. 
They can be composed together\u2014like Lego blocks\u2014to make a larger system, with every subcomponent remaining modular and separable. Unsurprisingly, complex computational artifacts like the Linux operating system are composed of pieces made by many different people and joined together. Science fiction author Neal Stephenson <a href=\"https:\/\/www.amazon.com\/Beginning-was-Command-Line\/dp\/0380815931\">likened<\/a> Unix, from which Linux partially derives, to a collectively maintained folk tradition rather than a single engineered system.<\/p>\n<p>Coordinating all of these disparate and distributed parts into a composite whole is not really feasible for AI and never has been. AI systems often have little separation of concerns, are too tightly coupled to be fully modular, and tend to be all-or-nothing affairs in general. Everything in the system is used to perform a computation, and removing any one individual piece can easily destroy the whole.<\/p>\n<p>What results is often a monolithic architecture built according to the principles of whatever silicon representation of intelligence\u2014symbolic logic, neural networks, or whatever comes next\u2014is currently in vogue. This partially validates Thiel\u2019s complaint that AI inherently tends towards centralization and authoritarianism. Governments and large corporations, all things being equal, are more capable of buying, funding, or operating the hardware-hungry AI systems that apply brute force when gentle persuasion fails. The monolithic, purebred composition of AI systems, unlike the mixed origins of more mainstream software, similarly contributes toward centralized control.<\/p>\n<hr class=\"thin-horizontal-rule\"\/>\n<p><span class=\"section-break-text\">Yet the causality<\/span> may not be that straightforward. 
It is true that AI systems, throughout the field\u2019s history, have converged towards tightly coupled architectures managed by large bureaucracies. But this has as much to do with the way that these bureaucracies already see the world\u2014and themselves\u2014as it does with the technical characteristics of the systems they develop or utilize.<\/p>\n<p>The Soviet chess programming innovator (and chess grandmaster) Mikhail Botvinnik thought his <a href=\"https:\/\/www.chessprogramming.org\/Pioneer\">Pioneer <\/a>system could be a model for economic planning because he lived in a regime where it was axiomatic that the economy could fit into the constraints of a <a href=\"https:\/\/crookedtimber.org\/2012\/05\/30\/in-soviet-union-optimization-problem-solves-you\/\">highly optimized mathematical program<\/a>. When the United States and Japan both tried (<a href=\"https:\/\/paleofuture.com\/blog\/2013\/4\/30\/darpa-spent-1-billion-trying-to-build-a-real-life-skynet-in-the-1980s\">and failed<\/a>) to solve artificial intelligence in the 1980s by building large knowledge-based expert systems, the causes had more to do with Washington and Tokyo than the systems themselves. Silicon Valley as we understand it had yet to emerge, and both powers lived in a world dominated by large-scale, state-directed systems engineering projects. Scientists, engineers, and the military had <a href=\"https:\/\/mitpress.mit.edu\/9780262549578\/arguments-that-count\/\">collaborated <\/a>since World War II to build <a href=\"https:\/\/mitpress.mit.edu\/9780262182010\/from-whirlwind-to-mitre\/\">foundational computer projects <\/a>like the Semi-Automatic Ground Environment (SAGE) air defense system.<\/p>\n<p>More broadly, both countries experienced breakneck economic and technological growth as a result of heavy state-directed industrial patronage. 
A big, top-down AI project like the ill-fated Strategic Computing Initiative simply matched how both governments already understood themselves.<\/p>\n<p>AI is still a young discipline, dating back only to the late 1940s. The field has never been entirely monolithic, and strands of it have periodically advocated a more <a href=\"https:\/\/mitpress.mit.edu\/9780262529204\/behavior-based-robotics\/\">bottom-up<\/a> and <a href=\"https:\/\/www.amazon.com\/Society-Mind-Marvin-Minsky\/dp\/0671657135\">distributed<\/a> view of intelligent behavior. Today, researchers have called for more <a href=\"https:\/\/arxiv.org\/html\/2502.03689v1\">varied goals and approaches<\/a> as well as <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-00930-6\">more freedom to use, modify, and share<\/a> generative AI systems. LLMs themselves, though emblematic of centralized control due to the immense resources associated with their training, deployment, and upkeep, are also promising developments in their own right. To the extent that LLMs work well, Henry Farrell and others <a href=\"https:\/\/henryfarrell.net\/large-ai-models-are-cultural-and-social-technologies\/\">recently argued<\/a>, it is because they emulate ways in which collective external systems like institutions and markets coordinate individual human behaviors. 
In this view, LLMs can best be understood not so much as big, singular \u201cintelligent agents,\u201d but rather as \u201ccultural technologies\u201d that\u2014like images, writing, print, or video\u2014allow people to access, organize, and disseminate information in novel ways.<\/p>\n<p>Human knowledge, training, prompting, and a growing community of active users and developers are as central to the success of LLMs as big companies and governments.<\/p>\n<p>As LLMs and other generative systems become more integrated into human societies, an attendant set of institutions will also emerge to regulate them, cushion their impact, and mitigate the negative externalities they cause. Over time, the collaborative development, usage, and regulation of these systems may counteract their centralized ownership.<\/p>\n<p>Still, the field will ultimately need to reorient itself around the possibilities of the emergent and collaborative intelligence of which LLMs offer tantalizing glimpses. It will need to embrace a hitherto unfamiliar image of intelligence as the coordination of collective behavior, and the architectural assumption of intelligent systems as distributed and heterogeneous rather than singular and monolithic.<\/p>\n<p>In other words, a future AI less amenable to control by a powerful few should look more like the collectively edited Wikipedia than Deep Blue.<\/p>\n<p>Wikipedia does not exist to find ways of solving problems in a computationally efficient manner. Instead, it is both a coordinating mechanism for the organization of information and an external source of knowledge for the people who use it. The fact that we do not consider it to be \u201cartificial intelligence\u201d is perhaps the greatest sign of its success. 
The most powerful intelligent systems in the world operate beneath the surface, only revealing their presence when we can no longer rely on them.<\/p>\n<p>But AI\u2014like the computers that it runs on\u2014is still young and has room to grow. It is possible that, by the end of this century, we will live in a world radically remade by a much different image of intelligence than the monolithic, top-down models that have traditionally driven AI research and development. However, that future will require an active choice to embrace that image. Otherwise, Thiel\u2019s glum vision of digital dominance may become a self-fulfilling prophecy.<\/p>\n<\/div>\n<p><a href=\"https:\/\/foreignpolicy.com\/2025\/08\/29\/ai-democracy-dictatorship-agi-governance\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Computers have always been governance machines\u2014tools used by bureaucracies to organize themselves to exert power, models used to understand how bureaucracies behave, and little bureaucratic organizations in and of themselves. Generative artificial intelligence systems are no exception; they are likely to transform how governments, corporations, and other entities organizationally behave. 
Large language models (LLMs) and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":2262,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10],"tags":[],"class_list":{"0":"post-2261","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-politcical-news"},"_links":{"self":[{"href":"https:\/\/firearmupgrades.com\/index.php?rest_route=\/wp\/v2\/posts\/2261","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/firearmupgrades.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/firearmupgrades.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/firearmupgrades.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/firearmupgrades.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2261"}],"version-history":[{"count":0,"href":"https:\/\/firearmupgrades.com\/index.php?rest_route=\/wp\/v2\/posts\/2261\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/firearmupgrades.com\/index.php?rest_route=\/wp\/v2\/media\/2262"}],"wp:attachment":[{"href":"https:\/\/firearmupgrades.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2261"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/firearmupgrades.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2261"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/firearmupgrades.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2261"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}