Against Scale

The cloud falling back down to earth

  • Jamie Gaehring
  • Aug 29, 2025

At the end of my first round of consultations for the Farm Flow project, I submitted my recommendations for its development model with some closing remarks to make clear my own biases that led to those conclusions. I don't pretend to put all of my political convictions aside when rendering a professional opinion, but I do try to be transparent. Those remarks, under the final subheading of "Distributing Cost & Control; Organizing Support & Reach," read in part:

I’m reluctant to speak in terms of “scaling” product development, as is common today in start-up culture and industry. [I find that scaling] is essentially at odds with the goal of producing technology that affords its users greater autonomy, both individually and collectively. I prefer to think about the ways technology can help to distribute control over our production systems, while also diffusing the costs of maintaining them. From this perspective, it’s not the product that expands to an ever greater and greater scale, but rather our capacity to organize a larger bloc of mutual support with a stronger commitment to shared values.

[...]

In terms of technology development and especially technology maintenance, widening the circle of cooperation can diffuse costs, by lowering the stakes required to initiate development at the very start, while also averaging out the long-term costs of ownership through shared ownership.

Roughly 6 months later, I was putting together a critique of two funding proposals that emerged from DWeb Camp 2024. I drew upon those early assessments, as well as lessons I took away from subsequent Farm Flow consultations, when I voiced my misgivings over certain aspects of those plans that adhered to conventional development models like scalability. Recycling language from the above remarks, I contrasted Runrig's proposed methods with what I saw as some of the worst practices of venture capitalism. I then tried to distill it down to one succinct table:

Runrig Methodology      |     | Venture Capitalism
------------------------|-----|----------------------
Distributing CONTROL    | vs. | Scaling PRODUCTION
Diffusing COSTS         | vs. | Accumulating CAPITAL
Expanding PARTICIPATION | vs. | Consolidating MARKETS
WORKER-organizing       | vs. | RENT-seeking

To be honest, I'm still quite ambivalent about this framing. I'm not at all confident that each Runrig method correlates perfectly to its VC counterpart in a significant or illustrative way, but I do feel most confident on the point about scale. I almost think I could have filled in the entire right-hand column with "scaling" of one type or another, and each element in the left-hand column would still pose a distinct, valid alternative to some aspect of scalability. In a way, venture capitalism as a whole can be summed up rather bluntly as "scale everything, all the time," without much regard for the particulars.

More recently in my article on Illegible Agriculture, I discussed how ag-tech startups have a penchant for oversimplifying the essential complexities of agriculture and regional food systems. I asserted that this is not meant to improve those systems at the community level, in accordance with users' unique needs and desires, or by taking into account local sensibilities. At the end of the day, these complex systems must be simplified in order to "render the labor, knowledge, and produce of that community more suitable for mass consumption and capital accumulation."

Scalability, then and now, is at the very top of my list of complexity-erasing strategies in today's tech industry that must be re-evaluated, and if no redeeming qualities are forthcoming, it ought to be jettisoned from our methodologies entirely. Likewise, in our discourse around software development it should be relegated to the set of anti-patterns found to be inimical to appropriate technology design.

Thumbs on the Scale

In Silicon Valley, there is a widespread fascination with scaling, or to be more precise, digital technologies that scale. The verb "to scale" in this context can take the passive voice, as in digital technologies that "can be scaled," or an active voice for technologies that facilitate "the scaling of" other systems. The other systems can be digital or non-digital, and if some new tech promises "to scale" and "to be scaled" at the same time, all the better. Many a would-be founder has extolled this or that technology for its ability to scale without being especially clear on what's being scaled, as if computer chips possessed some quasi-magical latent property to scale all they touch. But scaling is by no means inherent to the nature of computation, nor does scaling emerge from digital technology all of its own accord. Rather, I would argue, it is imposed on technology by a mandate from venture capital investors to pursue unlimited economic growth. As Karen Hao observes in Empire of AI:

In the end, Moore’s Law was not based on some principle of physics. It was an economic and political observation that Moore made about the rate of progress that he could drive his company to achieve, and an economic and political choice that he made to follow it. When he did, Moore took the rest of the computer chip industry with him, as other companies realized it was the most competitive business strategy. OpenAI’s Law, or what the company would later replace with an even more fevered pursuit of so-called scaling laws, is exactly the same. It is not a natural phenomenon. It’s a self-fulfilling prophecy.

Where information technology does make an original contribution is in its unrivaled capacity for abstraction, a power that can just as well be applied to scaling as to other unrelated tasks or even opposing aims. A well-designed computer algorithm can abstract away concrete details of the real world – e.g., material goods and services, users, workers, facial expressions, social relations, monetary costs, environmental costs, etc. – and whisk them away to the cloud. Once in this realm of pure abstraction, properties like color, size, and shape become mere numbers or bits. Free of all physical encumbrance, our worldly cares assume new virtual bodies, becoming weightless, untethered, and without consequence. Up there in the cloud, scale itself is only limited to the largest number you can fit into a 64-bit register – although that limit, too, can be easily abstracted away.
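To make the register metaphor concrete, here is a minimal Python sketch (purely illustrative) of that 64-bit ceiling, and of how easily a language with arbitrary-precision integers abstracts the limit away:

```python
# The largest unsigned value a 64-bit register can hold.
MAX_U64 = 2**64 - 1
print(MAX_U64)  # 18446744073709551615

# In a fixed-width register, one more increment would overflow and wrap to 0.
wrapped = (MAX_U64 + 1) % 2**64
print(wrapped)  # 0

# Python's integers are arbitrary-precision, so the "limit" is abstracted
# away: the language transparently grows the representation past 64 bits.
big = MAX_U64 + 1
print(big.bit_length())  # 65
```

The wraparound happens in hardware; in software, one more layer of abstraction makes the hard limit vanish from view, which is precisely the point being made above.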

When the object of scaling is economic productivity or market dynamics, computational abstraction becomes an accelerant for capital's race towards infinite growth. This secret sauce – abstraction coupled to a business model meant for rapid market growth and capital accumulation – is what business analysts and techno-optimists typically mean by the neologism "scalability."

The Physicality of Information

To head off any criticism that I'm focusing exclusively on the administrative dimensions of scale, I should acknowledge that there is, in fact, a scientific context in which "scalability" can denote a real-world quantity that can be measured empirically. It requires strict definitions of one's parameters and can be said to govern a specific range of phenomena under controlled conditions. That still doesn't make it a predictor of planet-wide industrial production cycles, subject to the whims of the global economy and the twists and turns of geopolitics. Yet even as academic terminology, "scalability" comes freighted with some heavy socioeconomic implications. Its physical limits and potentials, as measured in the laboratory, may have little predictive power over macroeconomic trends, but that's not to say influence doesn't pass in the opposite direction, from Silicon Valley into the halls of the academy. That's what funds scientific research into scalability in the first place: it is significant principally as a managerial science for the Nasdaq-100.[1]

It nevertheless remains to be seen if the abstractions of scalability can survive eventual contact with reality – the cloud falling back down to earth, so to speak. Computational abstractions do incur physical costs and real-world consequences, and there are practical limits to the scale of their application, even if they encompass theoretical infinities. The need for sane limits on computational scaling could not be more acute than in the face of our accelerating climate crisis and the rising number of geopolitical conflicts spawned by competition over finite energy and mineral resources. Indeed, such resources will never be adequate to the computational demands of today's tech moguls, if left to set their own limits. When Microsoft announces it will reopen Three Mile Island to power its large language models, this is the cloud falling back down to earth. When companies like Apple, Tesla and Dell are willing to pay millions of dollars in legal fees each year so they can keep extracting the conflict minerals that power our smartphones, electric vehicles, and other devices, this too is the cloud falling back down to earth.

I need only gesture slightly towards the current AI bubble and its impending burst, as its hype finally seems to be fading and this terawatt-hour-expending project, with its "scale-at-all-cost" approach (as Hao calls it), is exposed for the massive scam it has been from the start.

A Measure of Social Control

Beyond the clear perils to our planet's climate and natural ecosystems, rapid scaling also poses a dire threat to our social and cultural ecosystems. When big tech companies talk about scalability, they might have in mind scaling the production of material goods and services (e.g., GrubHub, Apple, Tesla), scaling the marketplace for those goods and services (e.g., Amazon, AdSense, Square), scaling cultural exchange and artistic expression (e.g., Netflix, Spotify, YouTube), or scaling the tenuous fibers of our social relations that get bound up in all of that (e.g., Facebook, LinkedIn, Tinder). When these processes are scaled by algorithmic abstraction, however, we find that some essential aspect – some inherent quality of our culture, of our social relations, of our very material well-being – always seems to get lost in the mix.

In many ways, abstraction is just the omission of certain characteristics that make real-world phenomena especially inscrutable to meaningful analysis. Details that are seen as anomalous, divergent, or simply irrelevant to the task at hand are thrown out, while other traits or patterns are elevated in their place. All of this is done to form a coherent model of whichever dynamics the modeler deems most significant. As George Box famously put it, "all models are wrong, but some are useful." Abstraction can produce models that are insightful and beneficial to society just as easily as it can throw up models that are misleading, exploitative, or utterly meaningless. In the case of most cloud software, the abstraction is performed by proprietary algorithms, hidden away on a remote server somewhere that only its owners can ever see or control. The general public simply cannot know what intentions may lie behind their algorithmic abstractions, either good or ill. Ask any content creator who's tried to guess what thumbnail image will get them the most views, or an SEO consultant who's racked their brain for the right combination of keywords to improve their website's search ranking, and they'll tell you just how futile a guessing game that can be.

Billions of decisions are being made every second on the basis of such cloud-based abstractions, and all for the sake of somebody's model. But whose? Most of those decisions are the sole prerogative of the algorithm's authors, while the overwhelming majority of us are relegated to being the mere objects of their abstractions, even if we never use the particular cloud software in question. Users and non-users alike are seldom granted any knowledge of the decisions being made that impact our lives, let alone any influence over how those abstractions are formed in the first place. When the phenomenon being abstracted away is an entire economic sector or, worse yet, society as a whole, we forfeit a tremendous degree of agency over our social lives and our very material existence. All that power of abstraction is essentially handed over to just a few over-caffeinated engineers and their even fewer corporate managers. Once in their hands they'll do whatever they deem necessary for the sake of scale, often to the detriment of our communities and ultimately to the sole benefit of their company's shareholders.

That imbalance of control is the definitive metric for scalability and the primary rationale for scaling up so much tech infrastructure in the first place. Another implicit assumption of scalability is that whatever measure is used to indicate a company's total market share or asset valuation – e.g., the number of active users, payments processed, quarterly revenues, etc. – that value is expected to increase at a geometric rate with respect to the total number of employees and capital assets needed to achieve said increase. A mere linear rate of growth would be seen as an abject failure. Whether the market valuation is expressed in users, dollars, or some other unit, in order to be commensurable with the number of employees or other operating costs, they must both be regarded as measures of socioeconomic value, in one form or another.
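The expectation described above can be sketched numerically. In this illustrative Python snippet (all figures are invented for the example), a "valuation metric" such as active users grows either linearly or geometrically against the same cost base:

```python
# Illustrative numbers only: compare linear vs. geometric growth of a
# valuation metric (e.g., active users) over the same five-year horizon.
years = range(1, 6)

linear = [10_000 * y for y in years]               # +10k users per year
geometric = [10_000 * 2 ** (y - 1) for y in years]  # doubling every year

for y, lin, geo in zip(years, linear, geometric):
    print(f"year {y}: linear={lin:>7,}  geometric={geo:>7,}")
# By year 5, the geometric curve (160,000) has more than tripled the
# linear one (50,000) -- and the gap itself keeps compounding.
```

Under the logic of scalability, the linear column is read as failure even though it represents steady, real growth; only the compounding curve justifies the capital invested.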

Putting assets and liabilities together in a ratio this way also implies that what's being scaled represents a form of equity. It essentially amounts to an unequal exchange of socioeconomic value, and if we naturally assume that a positive equity valuation is the desired outcome, the critical difference in value always flows upward. To put it more bluntly, it's a form of wealth extraction, plain and simple, where the rich only get richer. Furthermore, if the projected growth rate (e.g., equity over time) must rise geometrically in order to be deemed "scalable," then scalability is just an expression for the rate at which a technology system can perpetuate and increase socioeconomic inequality over time. Given the informational and communicative nature of such systems, especially in comparison with pre-digital methods of scaling economic production, it must be emphasized that such inequality will be at once economic as well as social in nature, affecting both how resources are allocated and who controls the allocation process. Scalability, in other words, must also be viewed as essentially a measure of extractiveness and social control.

Social control and extractivism are nothing new, of course, and computers aren't unique in their ability to scale them; for millennia prior to the invention of integrated circuits, ancient bureaucrats did just fine with their abacuses, law books, quipu, and clay tablets. Where computerized scaling differs is in the rate of extraction and the degree or granularity of control it can achieve relative to the amount of effort it requires to implement and enforce. In this respect, it exceeds all previous means of scaling by orders of magnitude.

Info Trawlers

The enormous informational complexity these tools purport to scale is far in excess of what all the world's engineers, managers, and shareholders can ever hope to apprehend – at least with any semblance of competence or intentionality. Despite the relative ease with which it enables their control over nearly every aspect of our daily lives, scalability is a very sloppy means of control. The people wielding it don't know or care about the full detail being erased in the act of scaling, so long as it doesn't hurt their bottom line. And yet, however ineptly they may wield this power, it remains a very dangerous form of control. As with many forms of industrialization, scalability can prove all the more harmful through the sheer bluntness of the tool, even when no malice is intended. Inevitably, this immense power will be applied to a needlessly wide range of social functions, across diverse cultures and with reckless indifference, because in spite of what damage it may incur, any greater precision would only be seen as an unjustified cost to shareholders and a mere nuisance to engineers.

Like an ocean trawler that obliterates square miles of seafloor habitat, along with the full diversity of marine life it sustains, all just to harvest a few scallops and discard three quarters of its total catch – so, too, scalability can cut across a wide swath of our social and natural environs, wreaking havoc with our lives and wasting untold resources, all while its operators scarcely pay any notice to the destruction left in their wake.

[Image: still frame from David Attenborough's Ocean (2025), depicting an industrial bottom trawler decimating the seafloor off the channel coast of southern England]

I don't polemicize against scale because I oppose widespread adoption of new and effective digital tools. I just don't think that technology should be designed, owned, and controlled by a tiny number of elites, who then get to impose it upon the masses whether we want it or not. I also take especial exception when it becomes clear that those elites have their heads up their own asses and refuse to see the unparalleled damage they are doing to the rest of us. Finally, I think we're quite past the point of reinterpreting "scale" to mean anything else apart from such wanton devastation. The metaphor is too embroiled with these practices to rehabilitate it now.[2]

Socializing Our Computational Abstractions

All the same, I still contend that new computational abstractions can just as well play a critical role in overturning these injustices. After all, that is my entire purpose with Runrig: to develop the necessary methodologies, relationships, and infrastructure to bring more liberatory technologies into my own community, and then with some luck, to share them more widely.

It was with that in mind that I first proposed the table at the start of this article, as a rebuttal to the various abstractions imposed by venture capital funding models. Exploitative abstractions like scaling have been so normalized in the tech industry that they are taken for granted even by free software maintainers and advocates, including myself at times.

Technology does not have to blindly scale the production and reproduction of our socio-economic systems, to the exclusion of all other concerns. It's not obliged to turn our communities into more efficient resource pumps for capital accumulation. We can choose different abstractions and only decide to scale them when we have group consensus. We can co-develop new abstractions for distributing control of our production systems more evenly, while also diffusing the costs of maintaining them. Instead of consolidating markets into fewer and fewer hands, we can expand the zone of participation, along with our capacity for collective action. And although we live for now under capitalism, where technocratic rent-seeking is the order of the day, we can still choose not to reproduce those dynamics with our labor and technology. Instead, let's use them to organize bulwarks of worker power that are more resilient to such rent-seeking efforts, as well as other forms of attack.

Much of this is just a convoluted restatement of mutual aid's basic tenets, and I'm by no means the first to apply them to food and technology. I reframe them this way as a response to specific critiques about free software and cooperative technology. Namely, there's a view that truly robust, maintainable software must be amply funded through capital investments, public grants, or large donations from private foundations – at least, for halfway decent software anyone wants to use. Implicit in this claim is an assumption that good software must be able to scale, or else it's either not very good or it can't help very many people. It's not surprising to hear this charge coming from the startup industry, but beliefs of this sort are almost as common among advocates for free software and regenerative agriculture.

I can fully sympathize with these concerns, and I share the pragmatic outlook that I think gives rise to such claims. The high-minded ideals of software freedom and cooperative economics must be balanced against fair compensation for developers and other support staff so that they can deliver the kind of safe, reliable, easy-to-use software that users deserve. It's a tough nut to crack, when you take a realistic account of all the labor, expertise, and long-term commitments needed to accommodate each and every one of those demands. Alternative funding models do exist, but I believe they're poorly attested within free software circles, as well as within conventional industries like agriculture or whichever sector we choose to address.

I've indicated here a few alternatives to "scaling" but only as a metaphor (e.g., "diffusing costs" or "distributing control"). I've offered nothing comparable to scaling in terms of its power for abstraction, let alone a feasible design for achieving it. For that, we do need to get more specific and talk about architecture, as much as I like to make a fuss about putting ecology over architecture. So next time, I'll cut straight to the chase; I'll lay out what I believe will be a central architectural pattern for much of Runrig's future development: federated municipal platforms.


  1. Academic scalability is a pretty dry body of literature, even as theoretical computer science goes, but to get a sense, see Amdahl's Law and Gunther's Universal Scalability Law. The theoretical physics behind computational limits is actually a lot more approachable and fun to explore, in my opinion. On her YouTube channel Up and Atom, Jade Tan-Holmes gives a fantastic explanation of "Why Pure Information Gives Off Heat" according to Landauer's Principle. To understand how Planck's constant and the Uncertainty Principle combine to fix a hard upper limit on the volume of information that can be transmitted over a fixed period of time, see "What is the maximum Bandwidth?" with Prof. Mike Merrifield and Brady Haran from Sixty Symbols. It's far more useful, in my opinion, to get a beginner's intuition for the physicality of information than to memorize a bunch of equations for scaling systems that have no business being that big to start with. ↩︎
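    For a taste of that literature, here is a minimal Python sketch of the two laws named in footnote 1 (the parameter values are illustrative, not drawn from any real system):

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: speedup on n processors when a fraction p of the
    work is parallelizable (the remainder is inherently serial)."""
    return 1 / ((1 - p) + p / n)

def usl_capacity(n, alpha, beta):
    """Gunther's Universal Scalability Law: relative capacity at
    concurrency n, with contention (alpha) and coherency (beta) costs."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

# Even with 95% of the work parallelized, speedup plateaus near 1/0.05 = 20,
# no matter how many processors you add.
print(amdahl_speedup(0.95, 1024))  # ~19.6

# With any nonzero beta, USL capacity peaks and then *declines*: adding
# more nodes past the peak makes the whole system slower.
print(usl_capacity(32, 0.05, 0.001))
print(usl_capacity(256, 0.05, 0.001))
```

    Both laws formalize the same sobering point: past a certain size, throwing more hardware at a problem yields diminishing – and eventually negative – returns.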

  2. I don't know if a non-technical audience would appreciate the full extent to which capitalistic exploitation is entangled with the whole concept of technological scalability, but software developers should be able to spot it instantly.

    Look at any of the most common (or most extreme) approaches to scaling: containerized swarms and clusters, database sharding, MapReduce, data warehouses, data lakes, massively parallel processor arrays, etc. These techniques scale up the number of operations that can be performed or the number of bits that can be stored, but they aren't intended to scale up the complexity of the underlying computing model – some might say that defeats the whole point! And so they do little or nothing to accommodate any greater complexity in the domain model itself, which is where computational control can be handed off to the end user to decide for themselves what programs will positively impact their material lives. For all that massive parallelism and distributed architecture, the potential for human intervention becomes increasingly siloed and extremely centralized. It's analogous to scaling a bitmap image from 32x32 to 64x64 pixels. You can do it easily enough by transforming every single pixel into a homogenous 2x2-pixel block, each one a precise replica of the original, but you haven't added any information or finer detail to the image. You just quadrupled the number of bits you now need to store or transmit it. There's no increase in the significance of what's been scaled, no new knowledge, no refinement.

    This is what technological scaling represents today, in essence. Unless all the familiar techniques are rewritten from scratch or tossed out entirely, this is the definition of scaling that technologists are stuck with for the foreseeable future. We must adopt new metaphors if we want to break out of this established pattern. ↩︎
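The bitmap analogy from footnote 2 can be sketched in a few lines of Python: nearest-neighbor upscaling quadruples the bit count without adding a single bit of new information.

```python
def upscale_2x(bitmap):
    """Nearest-neighbor 2x upscale: each pixel becomes a 2x2 block of copies."""
    out = []
    for row in bitmap:
        doubled_row = [px for px in row for _ in range(2)]
        out.append(doubled_row)
        out.append(list(doubled_row))
    return out

# A tiny 2x2 "image" stands in for the 32x32 example in footnote 2.
small = [[1, 0],
         [0, 1]]
big = upscale_2x(small)
for row in big:
    print(row)
# [1, 1, 0, 0]
# [1, 1, 0, 0]
# [0, 0, 1, 1]
# [0, 0, 1, 1]

# Four times the pixels, zero new information: the original is recovered
# exactly by sampling every other pixel.
assert [row[::2] for row in big[::2]] == small
```

The round trip in the final assertion is the whole argument in miniature: nothing of significance was gained by scaling up, only more bits to store and transmit.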