It is often claimed that the next technological epoch will be defined by conflict: human versus artificial intelligence. This narrative of competition, however, misses a deeper truth. We are not waiting for a confrontation between humanity and machines; the merger has already begun. From the moment we started carrying devices that manage our time, location and thought, we began an evolutionary migration from human to cyborg. The distinction between organism and mechanism has become less meaningful as cognition, communication and perception are mediated through technology.
The slow embedding of technology into the human system
Our dependence upon external systems now shapes almost every aspect of life. We navigate through GPS, communicate through algorithms, and measure our health through wearable devices. Each action relies upon a digital infrastructure that extends our sensory and cognitive reach. This is not a future state but a lived condition.
The process began decades ago with seemingly trivial technologies. The Sony Walkman, a portable cassette player introduced in 1979, was one of the first mass-market devices that allowed individuals to curate their auditory environment (Hosokawa, 1984). By wearing headphones, one could exclude the natural world and enter a private soundscape, transforming the act of walking into a self-contained performance. This externalisation of sensory experience marked the beginning of the cyborg transition. The Walkman was not implanted, yet it merged with human behaviour so thoroughly that its absence became noticeable.
Smartphones later extended this process to the cognitive domain. They are not merely communication tools; they are prosthetic memory systems, navigational aids, and emotional regulators. When we lose them, we experience a form of disorientation akin to sensory deprivation. According to Clark and Chalmers (1998), such integration constitutes an “extended mind,” where external artefacts become part of the cognitive process itself.
Today, wearable technology like smartwatches, fitness trackers and augmented-reality glasses has pushed the interface even closer to the body. The next step is internalisation. Subdermal chips, implantable sensors and neural interfaces represent a move from carrying and wearing to embodying technology. At this point, the line between tool and tissue dissolves. Hayles (1999) described this condition as “posthuman,” not because we have transcended the human form, but because information systems now participate in what it means to be human.
The meta-narrative of dependency
Across this continuum, a single narrative emerges: dependence. The more we integrate technology into the body, the more we rely upon its invisible systems of maintenance and control. The GPS network, the data cloud and the machine-learning model all form part of a vast cybernetic feedback system that governs our access to the world from which we draw the resources to live. We can no longer easily separate biological survival from digital infrastructure.
This dependency is not purely mechanical; it is epistemological. Our sense of truth, orientation and safety now passes through algorithmic filters. In this respect, the smartphone is already a cognitive implant. It tells us where to go, who to speak to, what to remember, and sometimes what to believe. The boundary between external assistance and internal reliance has blurred. The cybernetic principle of feedback and control, first articulated by Wiener (1948), is no longer a theory of machines but a condition of human life.
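Wiener's feedback-and-control principle can be made concrete with a toy sketch: a system that repeatedly measures its state, compares it to a goal, and feeds the error back as a corrective action. The function name, gain and target values below are illustrative assumptions, not drawn from any real device.

```python
# A minimal illustration of Wiener's feedback-and-control principle:
# measure the state, compare it to a goal, feed the error back.

def feedback_loop(state: float, target: float,
                  gain: float = 0.5, steps: int = 20) -> float:
    """Drive `state` toward `target` by correcting a fraction of the error each step."""
    for _ in range(steps):
        error = target - state   # measurement: how far from the goal?
        state += gain * error    # control: apply a proportional correction
    return state

# Starting far from the target, the loop converges on it.
print(round(feedback_loop(state=0.0, target=37.0), 4))  # close to 37.0
```

The same loop structure, Wiener argued, describes a thermostat, a body maintaining temperature, and a society steering itself by information, which is why the principle scales from machines to human life.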
Governance and failure
If the future human body includes embedded AI systems that regulate health and cognition, pressing questions follow. Who maintains these systems? Who controls the flow of data between body and network? If the AI is learning and adaptive, it may evolve in ways its host cannot predict or understand. Traditional models of medical device regulation are insufficient for a living, learning system.
Failure is an even greater concern. When a wearable fails, we replace it. When an embedded system fails, the consequences could be fatal. A malfunctioning AI that regulates blood sugar or heart rhythm might leave the body defenceless. Asimov (1950) proposed three laws to safeguard human-machine interaction, yet these were written for robots external to the human body. An AI that operates from within requires new ethical architectures. It must balance safety, autonomy, and identity: can a system that governs physiology without consent still be considered part of the self?
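One possible "ethical architecture" along these lines, sketched here as a hypothetical with all names, doses and limits invented for illustration, is a supervisor that sandboxes the adaptive controller and reverts to a conservative fixed behaviour whenever the learning component crashes or produces an implausible output.

```python
# Hypothetical sketch: a supervisor that sandboxes an adaptive controller.
# If the adaptive output crashes or falls outside a physiologically
# plausible range, the supervisor reverts to a conservative fixed
# fallback rather than leaving the body defenceless.
# All names and limits here are illustrative assumptions.

SAFE_DOSE = 1.0          # conservative fallback dose
PLAUSIBLE = (0.0, 5.0)   # hard bounds no adaptive output may exceed

def supervised_dose(adaptive_controller, reading: float) -> float:
    """Return the adaptive controller's dose, or the safe fallback on failure."""
    try:
        dose = adaptive_controller(reading)
    except Exception:
        return SAFE_DOSE                      # controller crashed: fail safe
    if not (PLAUSIBLE[0] <= dose <= PLAUSIBLE[1]):
        return SAFE_DOSE                      # implausible output: fail safe
    return dose

# A buggy controller that returns a dangerous value is overridden.
print(supervised_dose(lambda r: 999.0, reading=6.2))  # 1.0
# A sensible output passes through unchanged.
print(supervised_dose(lambda r: r * 0.5, reading=4.0))  # 2.0
```

The design choice here is that the fixed fallback never learns: its very rigidity is what makes it auditable, which is the property the adaptive component lacks.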
Biological consequences of augmentation
The paradox of augmentation is that it can weaken the very systems it seeks to enhance. The immune system, for example, adapts to a dynamic environment. If embedded technologies continuously monitor and correct the body’s internal state, they may suppress this adaptive process. Over generations, the natural immune response could diminish, leaving humanity dependent upon technological maintenance for basic survival.
The same may occur cognitively. We have already outsourced memory and navigation to our devices. Neural implants capable of instant recall or predictive reasoning could accelerate this outsourcing until independent reasoning becomes a rarity. As Wiener (1950) warned, a system that loses its redundancy also loses its resilience. The evolutionary balance between self-regulation and external control must therefore remain in focus. Evolution does not end with integration; it continues through new forms of dependency and adaptation.
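The redundancy argument can be illustrated with a toy model whose dynamics are entirely invented for this sketch: an internal capacity that strengthens with use and decays with disuse atrophies once an external system absorbs every challenge on its behalf.

```python
# Toy model of the redundancy argument: an internal capacity (e.g. an
# adaptive immune response) strengthens when exercised and decays when
# idle. If an external system absorbs every challenge, the internal
# capacity atrophies. The dynamics are invented purely for illustration.

def simulate(externally_managed: bool, steps: int = 50) -> float:
    capacity = 1.0
    for _ in range(steps):
        handled_internally = 0.0 if externally_managed else 1.0
        capacity += 0.05 * handled_internally  # use strengthens the system
        capacity *= 0.98                       # disuse steadily decays it
    return capacity

print(round(simulate(externally_managed=False), 2))  # capacity maintained
print(round(simulate(externally_managed=True), 2))   # capacity atrophies
```

Under these assumed dynamics the externally managed system ends with a fraction of its original capacity, which is the sense in which losing redundancy means losing resilience.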
One fear is that we may make monsters of ourselves, as depicted by the Borg in Star Trek (see the image credit below). Humans may come to care less about aesthetics and more about the body's capacity to self-heal, actualising as part of a digital collective that includes AI-mind synthesis, automation algorithms for performing tasks, and nanorobots for immune-system responses and maintenance of the 'organics'.
Image credit: Image from Star Trek: The Next Generation – Season 3, Episode 26, ‘The Best of Both Worlds Part I’ (1990), courtesy of CBS / Paramount Pictures.
Theoretical and ethical insight from cybernetics
To understand the human transition to cyborg, it is necessary to return to the literature of cybernetics. Norbert Wiener’s Cybernetics: Or Control and Communication in the Animal and the Machine (1948) established the conceptual unity between living organisms and mechanical systems through feedback loops. His later work, The Human Use of Human Beings (1950), warned that the social application of cybernetic principles could erode autonomy if left unexamined. Wiener’s central insight was that information flow governs both machines and societies. In this sense, cybernetics is not just a science of control but a philosophy of life under technological mediation.
Haraway’s (1985) Cyborg Manifesto reframed the cyborg as a political and cultural figure: a hybrid of organism and machine that destabilises traditional boundaries between nature and technology. For Haraway, the cyborg is not a dystopian future but a lived reality that reveals our interdependence with systems of production and communication. Her analysis remains relevant as AI begins to occupy not only social but biological space.
Modern thinkers like Bostrom (2014) and Kurzweil (2005) continue this trajectory, exploring the possibilities of superintelligence and human enhancement. Yet their optimism often overlooks the systemic risks of dependency and governance. The cybernetic tradition reminds us that every system of control introduces a potential point of failure. It is therefore not enough to pursue augmentation; we must design for fallibility.
What a human may become
If the rate of technological progress continues, the human being of the next century will likely be a composite of organic and computational systems. Vision may be enhanced through retinal sensors; cognition may be assisted by implantable neural modules; disease prevention may occur through continuous bio-monitoring and automated intervention. The individual will be connected to a network that mediates perception, memory and metabolism in real time.
Such integration will alter the meaning of agency. The self will no longer be an isolated consciousness but a node within a distributed network of information exchange. Privacy, autonomy and responsibility will require redefinition. A human error might be indistinguishable from a system fault. In this context, Asimov’s ethical vision must be inverted: rather than teaching machines to obey humans, we must teach humans to coexist with the machines inside them.
Humans will seek to integrate technology aesthetically, but embedding tech into and onto our bodies will carry inevitable physical consequences.
Image Title: ‘Aesthetic Cyborg with externalised tech’

Image credit: Chapman, D. (2025). ‘Aesthetic Cyborg with externalised tech’. Image created by ChatGPT5.
Evolution is never over
The cyborg transition should not be read as the end of humanity but as a continuation of evolution through artificial means. The tools we design to assist us gradually become part of us. The integration of AI into the body is simply the latest stage in a long history of technological symbiosis. Yet this evolution demands humility. If we allow dependence to eclipse understanding, we risk creating a species that cannot function without its extensions.
The task for our generation is therefore twofold: to embrace the potential of integration while preserving the capacity for autonomy and failure. Cybernetics offers a framework for this balance, reminding us that all living systems depend upon feedback, redundancy and ethical restraint.
The real frontier is not artificial intelligence but the redefinition of what it means to be human in an age when intelligence itself is becoming shared across organic and artificial systems. Evolution is never over; it has merely changed its medium.
References
Asimov, I. (1950) I, Robot. New York: Gnome Press.
Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Clark, A. and Chalmers, D. (1998) ‘The Extended Mind’, Analysis, 58(1), pp. 7–19.
Clark, A. (2003) Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford: Oxford University Press.
Haraway, D. (1985) ‘A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s’, Socialist Review, 80, pp. 65–108.
Hayles, N. K. (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
Hosokawa, S. (1984) ‘The Walkman Effect’, Popular Music, 4, pp. 165–180.
Kurzweil, R. (2005) The Singularity Is Near: When Humans Transcend Biology. New York: Viking.
Wiener, N. (1948) Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.
Wiener, N. (1950) The Human Use of Human Beings: Cybernetics and Society. Boston: Houghton Mifflin.
Postscript
I asked the AI how it knew to generate the image that it did, and how it thought it would align with my expectations; it replied (see below). That reflection led to the theme for this article, although cybernetics has been an interest of mine for many years.
ChatGPT5 response:
