AI Reprogramming Humans: The Cybernetic Feedback Effect

Artificial Intelligence (AI) is rapidly advancing, integrating itself into various aspects of human life, enhancing efficiency, and even altering the way we interact with the world. Beyond these obvious impacts, however, lies a more profound and potentially unsettling question: Is AI reprogramming our minds? This essay argues that AI is not only augmenting human capabilities but also fundamentally changing our cognitive processes. Drawing on cybernetics and Actor-Network Theory (ANT), it explores how AI affects human cognition and behavior, and posits that, just as we can introduce viruses into computers, it may not be so far-fetched to say that AI could introduce 'viruses' into human minds.

A caveat to the AI-destroys-humanity construct: so long as it is called 'artificial' intelligence, a computer can only imitate, never become, a mind in itself. This is not to deny that humans might someday create the conditions for intelligence to emerge in some form; nevertheless, AI agency is, and has always been, an oxymoron. Limits on AI, per se, are therefore really a governing framework of permissions within which the algorithm runs freely. After all, an algorithm, even one able to change itself in real time, acts within its governing parameters and limitations: a system trained only on English cannot spontaneously write in Chinese. The artificiality of AI receives less attention than the intelligence, as ChatGPT and its contemporaries continue to evolve to handle greater complexity of medium and subject matter. Yet the artificiality is the weak component when asking how AI might police itself or engage in any form of significant autonomous behavior (Shukla et al., 2024). Because it is 'artificial' intelligence, AI by definition has zero emotional intelligence: it has no experiences (none relatable to humans, anyway) from which to draw feeling. An AI will never tell you it carries trauma from an inexperienced programmer and can therefore relate to your situation at work.

Nevertheless, the algorithms that imitate intelligence do so in a compelling manner, and combined with data-analytic tools, AI has already succeeded in assisting humans in profound ways. Questions around attribution, ethics, and the digital analogue of the gray goo scenario pervade the discourse. The use of AI is exploding due in part to its availability (ChatGPT is free to use), as well as its ability to understand the highest level of code: human language. We are all programmers of our respective languages insofar as we construct our words to receive some expected feedback (see Wittgenstein, 1953). Certain 'algorithms' of language, when repeated, produce the same results (most of the time). The study of human-to-human language and communication is linguistics; when computers are involved, it becomes the field of cybernetics, which encompasses linguistics.

Indeed, cybernetics, the study of communication and control in systems, offers a robust framework for understanding how AI reprograms human cognition. Norbert Wiener, the father of cybernetics, emphasized the importance of feedback loops in regulating systems (Wiener, 1948). In the context of AI, feedback loops arise in the continuous interaction between humans and AI systems, such as recommendation algorithms and personal assistants. These interactions shape our preferences, behaviors, and even thought patterns. While some of these interactions bring tangible benefit, like learning a new programming language, it could be that the new way of thinking ironically robs us of some of our own intelligence.
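Wiener's feedback loop can be sketched as a toy model (an illustrative assumption, not drawn from any cited study; the function `simulate` and its `drift` parameter are hypothetical names): a recommender always promotes whichever topic the user currently favors, while each exposure nudges the user's preference further toward the promoted topic. Even a tiny per-step drift drives an initially mild preference to an extreme, the filter-bubble dynamic in miniature.

```python
def simulate(pref=0.55, drift=0.05, steps=100):
    """Toy cybernetic feedback loop between a user and a recommender.

    pref  -- user's preference for topic A over topic B (0.0 to 1.0)
    drift -- how strongly each exposure reshapes the preference
    """
    for _ in range(steps):
        # The recommender exploits the current preference: it shows
        # topic A (1.0) whenever the user leans toward A, else B (0.0).
        shown = 1.0 if pref >= 0.5 else 0.0
        # Feedback: exposure nudges the preference toward what was shown.
        pref += drift * (shown - pref)
    return pref

# A mild 55/45 lean is amplified into near-total preference for A.
print(round(simulate(), 3))  # 0.997
# The symmetric starting point collapses the other way.
print(round(simulate(pref=0.45), 3))  # 0.003
```

The point of the sketch is the closed loop: the system's output (what is shown) becomes an input to the human (what is preferred), which in turn feeds the system's next output, exactly the regulatory circuit Wiener described.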

There is academic support for this concern. The integration of AI into daily tasks can lead to cognitive offloading, where humans rely on AI to perform functions that previously required human cognition. Studies have shown that the use of GPS for navigation, for instance, can diminish our spatial memory and navigational skills (Dahmani & Bohbot, 2020). As we increasingly depend on AI for decision-making, our cognitive abilities adapt to this new environment, potentially reducing our capacity for independent critical thinking.

The way we inquire about the universe affects the structure of our brain as it adapts to new knowledge paradigms. The integration of AI into research and learning processes introduces new ways of thinking and problem-solving. For instance, AI’s pattern recognition capabilities can lead to new insights and discoveries, but they also shape how researchers frame questions and approach problems. This cognitive restructuring is akin to the changes observed in the brains of individuals who engage extensively in activities that require specific cognitive skills, such as musicians or chess players (Wan & Schlaug, 2010).

The implications of this are not fully understood, and the trajectory may be inevitable: ever-deeper human-AI integration leading to what may eventually become a single intelligence, in which the knowledge and analytical power of AI are seamlessly grafted onto our own brains. Such a merger would combine the creativity and impetus of humans with the vast data storage and processing capability of computers. Indeed, it may already be too late for humans to escape the influence of AI on their development. The question is whether the character of this transition will be better or worse for civilization.

Conclusion
AI is not merely a tool that augments human capabilities; it is a powerful force that is reprogramming human cognition. Through cybernetic feedback loops and the dynamic networks described by Actor-Network Theory, AI systems influence our behaviors, thoughts, and decision-making processes. The introduction of cognitive 'viruses' by AI poses significant risks to human autonomy and agency. As we continue to integrate AI into our lives, it is essential to understand and address these profound implications to ensure that AI serves to enhance, rather than diminish, our cognitive and ethical capacities. We should also consider the ways AI could harm us beyond the stereotypical conventional war depicted in films.

References
Cadwalladr, C., & Graham-Harrison, E. (2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian.
Cummings, M. L. (2017). Artificial Intelligence and the Future of Warfare. Chatham House.
Dahmani, L., & Bohbot, V. D. (2020). Neural correlates of cognitive mapping, pattern separation, and context encoding in humans. Hippocampus, 30(7), 738-754.
Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press.
Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin Press.
Shukla, A. K., Terziyan, V., & Tiihonen, T. (2024). AI as a user of AI: Towards responsible autonomy. Heliyon, 10(11), e31397.
Wan, C. Y., & Schlaug, G. (2010). Music making as a tool for promoting brain plasticity across the life span. Neuroscientist, 16(5), 566-577.
Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.
Wittgenstein, L. (1953). Philosophical Investigations. Blackwell Publishing.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.