
Why Should All Engineers Know Pseudo Code? An Introduction to Algorithms


Introduction

The rise of artificial intelligence (AI) has brought the term “prompting” into mainstream conversations. Prompting, the act of giving instructions to AI, is often perceived as a modern skill. However, the practice of human-computer interfacing is deeply rooted in history, dating back to the earliest programmable machines. Charles Babbage’s Analytical Engine (1837) stands as one of the earliest examples of mechanical computation, where structured inputs were necessary to produce logical outputs. Babbage’s collaborator, Ada Lovelace, conceptualised the first algorithm for this machine, laying the groundwork for programming as a discipline. Later, Alan Turing’s theoretical Turing Machine formalised the concept of computation, showcasing the power of algorithms to solve abstract problems (Turing, 1936).

These milestones emphasise one fundamental truth: clear, structured communication is the cornerstone of effective machine interaction. Today, generative AI tools such as ChatGPT provide a façade of simplicity, suggesting that anyone can use AI to solve problems. Yet, without understanding how these tools process inputs, outputs often fail to meet expectations. Engineers who understand pseudo code—an intermediate step between natural language and formal programming—are uniquely positioned to bridge this gap. By leveraging pseudo code, they can translate their ideas into machine-readable logic, ensuring their prompts produce precise and meaningful results.


Historical Foundations of Human-Computer Interaction

The foundations of human-computer interaction lie in cybernetics, a field pioneered by Norbert Wiener (1948) that explored feedback loops in communication systems and demonstrated how structured communication underpins modern algorithms. Earlier still, Asimov’s “Three Laws of Robotics” (1942) underscored the need for precise instructions to ensure machines operate safely and predictably. These principles have their limits, however: Der Derian (2010) criticises their application in autonomous warfare, where vague instructions can result in unintended consequences.

Structured programming, introduced by Edsger Dijkstra, further solidified the importance of logical design in machine communication (Dijkstra, 1968). These historical developments highlight the critical role of precision in reducing errors and improving outcomes in human-machine interactions.

The Convergence of Disciplines

Engineering has undergone significant transformations, with the boundaries between mechanical and computer engineering increasingly blurred. CAD systems, 3D printing, and digital twins have revolutionised prototyping, enabling faster and more efficient designs (Tao et al., 2018). AI now plays a pivotal role in optimising these systems, generating simulations, and predicting performance.

For example, creating a digital twin requires defining variables such as dimensions, material properties, and constraints. Without pseudo code, engineers risk generating incomplete or incorrect models, leading to inefficiencies and delays. Structured prompts in pseudo code ensure that AI interprets these inputs correctly, bridging the gap between human intent and machine execution.


Teaching Pseudo Code: The Theory Behind Structured Programming

Foundations of Structured Programming

Pseudo code simplifies complex logic into step-by-step instructions using control structures that underpin all programming languages:

  1. Sequential Execution: Instructions are executed in the order they appear.
  2. Conditional Statements: Decision-making processes using if-then or if-then-else.
  3. Loops (Iteration): Repeated execution of code blocks, such as while, do-while, or for.

These principles enable users to articulate problems clearly, making them accessible to both humans and machines.
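
As a minimal sketch, here are the three structures combined in Python, whose syntax reads close to pseudo code (the stock-check scenario and threshold value are illustrative assumptions):

# Sequential execution: statements run top to bottom.
stock_levels = [42, 7, 19, 3, 88]
reorder_threshold = 10

# Loop (iteration): repeat the check for each item.
for level in stock_levels:
    # Conditional statement: branch on a test.
    if level < reorder_threshold:
        print(f"Reorder needed: only {level} units left")
    else:
        print(f"Stock OK: {level} units")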

Pseudo Code in Action: Programming a Robot for Combat

Consider a scenario where two wheeled robots compete in a “Robot Wars”-style arena. The goal is to program a robot to defeat its opponent using strategy and precision.

Example of a Poorly Structured Prompt:
“Make a robot that fights effectively and wins the battle.”

  • AI Output: A generalised script, e.g., “The robot moves randomly and attacks when in range.” This lacks strategy, leading to inefficiencies and poor performance.

Example Using Pseudo Code:
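
A minimal sketch of what such a prompt might look like, written here in Python-style pseudo code (the thresholds, the simulated range sensor, and the round limit are illustrative assumptions):

import random

energy = 100
rounds_left = 10

while rounds_left > 0:                           # loop: run until the match ends
    opponent_in_range = random.random() < 0.4    # stand-in for a range sensor
    if energy < 20:                              # conditional: conserve power first
        print("Retreat and recharge")
        energy += 15
    elif opponent_in_range and energy >= 30:
        print("Attack!")                         # strike only when it can land
        energy -= 30
    else:
        print("Flank the opponent")              # otherwise reposition
        energy -= 5
    rounds_left -= 1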

  • AI Output: A refined strategy where the robot optimises its movements, manages energy efficiently, and attacks only when necessary.

Applications in Manufacturing and Industry 4.0/5.0

The emergence of Industry 4.0, characterised by the integration of IoT, AI, and automation, has transformed manufacturing processes by enabling smarter, interconnected systems. Building on this foundation, Industry 5.0 emphasises human-AI collaboration, aiming to create more sustainable and human-centric approaches to production. Within this landscape, pseudo code serves as a critical tool, allowing engineers to communicate their design requirements clearly and precisely, thereby enhancing productivity and reducing errors.

Prototyping and Optimisation

Prototyping is a cornerstone of manufacturing innovation, and the advent of digital twins has further revolutionised this process. A digital twin—a virtual replica of a physical system—allows engineers to test designs in a simulated environment before committing to physical prototypes. Using pseudo code, engineers can define key parameters such as torque, speed, payload capacity, and environmental conditions, ensuring that the digital twin reflects real-world constraints accurately.

For example, designing a robotic arm for assembly requires engineers to balance factors like joint flexibility, operational speed, and load distribution. Without pseudo code, engineers might rely on iterative trial-and-error methods, leading to increased costs and extended development times. By specifying these variables in pseudo code, engineers provide AI with the structured inputs needed to generate optimised digital twin models, significantly reducing physical testing requirements and accelerating production timelines (Tao et al., 2018).
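A hedged sketch of such a specification, written in Python-style pseudo code (the parameter names, values, and the within_constraints check are illustrative assumptions, not a real twin’s schema):

# Hypothetical digital-twin parameters for a robotic arm (values assumed).
arm_twin = {
    "max_torque_nm": 120.0,
    "max_speed_deg_per_s": 180.0,
    "payload_capacity_kg": 8.0,
    "joint_count": 6,
}

def within_constraints(torque, speed, payload, spec):
    # Check a proposed operating point against the twin's constraints.
    return (torque <= spec["max_torque_nm"]
            and speed <= spec["max_speed_deg_per_s"]
            and payload <= spec["payload_capacity_kg"])

print(within_constraints(torque=95.0, speed=120.0, payload=6.5, spec=arm_twin))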

Moreover, pseudo code facilitates automated optimisation processes. By incorporating conditional logic and iterative loops, engineers can instruct AI systems to test multiple configurations, identify inefficiencies, and suggest improvements. This iterative refinement process, powered by pseudo code, enables manufacturers to achieve better performance metrics while minimising waste and resource consumption.
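A minimal sketch of that loop, assuming a toy cost model and a small hand-enumerated search space:

# Enumerate candidate configurations (the search space is an assumption).
candidates = [
    {"speed": s, "torque": t}
    for s in (60, 120, 180)
    for t in (40, 80, 120)
]

def cycle_time(cfg):
    # Toy cost model: faster, stronger settings finish a task sooner.
    return 100.0 / cfg["speed"] + 50.0 / cfg["torque"]

# Iterative refinement: test every configuration and keep the best.
best = min(candidates, key=cycle_time)
print("Best configuration:", best)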

Robotics Programming

Industrial robots play an essential role in modern manufacturing, performing tasks such as welding, assembly, material handling, and quality inspection. However, programming these robots to execute tasks safely and efficiently requires meticulous planning. Pseudo code provides engineers with a means to articulate complex task sequences, safety protocols, and contingency plans in a clear and logical format.

Consider a welding robot tasked with joining components along a production line. Using pseudo code, an engineer can define the robot’s movement paths, welding parameters (e.g., temperature and speed), and error-detection mechanisms.
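In Python-style pseudo code, such a task definition might look like the following sketch (the waypoints, welding temperature, and deviation tolerance are illustrative assumptions):

# Weld path as consecutive waypoints, in metres (values assumed).
waypoints = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.3)]
weld_temp_c = 1450
max_deviation_m = 0.002

def weld_segment(start, end, measured_deviation_m):
    # Error detection: abort the weld if the arm drifts off the path.
    if measured_deviation_m > max_deviation_m:
        raise RuntimeError("Path deviation exceeds tolerance; halting weld")
    print(f"Welding {start} -> {end} at {weld_temp_c} degrees C")

# Sequential movement along the path (deviation readings simulated here).
for start, end in zip(waypoints, waypoints[1:]):
    weld_segment(start, end, measured_deviation_m=0.001)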

Reducing Interface Errors

AI’s ability to interpret human input is inherently constrained by its training data, which cannot encompass every possible scenario (Floridi et al., 2018). Ambiguity in communication often leads to outputs that fail to meet user expectations, resulting in wasted time and resources. Pseudo code addresses this challenge by providing a structured framework for interaction, reducing the likelihood of misinterpretation.


Conclusion

Pseudo code bridges the gap between human logic and machine execution, enabling engineers to leverage AI effectively. By articulating precise instructions, engineers can optimise workflows, reduce errors, and achieve better outcomes in applications ranging from robotics to manufacturing. The future of engineering lies at the intersection of human expertise and AI capabilities, and pseudo code is the tool that will guide this collaboration into the next industrial era.


References

  1. Asimov, I. (1942). “Runaround.” Astounding Science Fiction; reprinted in I, Robot (1950). Gnome Press.
  2. Babbage, C. (1837). On the Analytical Engine. British Museum Archive.
  3. Dijkstra, E. (1968). “Go To Statement Considered Harmful.” Communications of the ACM, 11(3), 147–148.
  4. Der Derian, J. (2010). Virtuous War: Mapping the Military-Industrial-Media-Entertainment Network. Routledge.
  5. Floridi, L., et al. (2018). “AI as Augmented Intelligence: Beyond Machine Learning.” Philosophy & Technology, 31(4), 317–328.
  6. Goldberg, D. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley.
  7. Knuth, D. (1974). The Art of Computer Programming: Fundamental Algorithms. Addison-Wesley.
  8. Tao, F., et al. (2018). “Digital Twin and Smart Manufacturing.” Advanced Engineering Informatics, 39, 845–856.
  9. Turing, A. (1936). “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society, 42(1), 230–265.
  10. Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.

The Codex of Linguistic Impetus: AI, Equity, and Original Thought in Academic Expression

Abstract

This essay argues that while AI effectively structures, organizes, and enhances human ideas, it cannot independently conceive original thought. I introduce the Codex of Linguistic Impetus as a framework to differentiate between AI-generated filler and genuinely iterated human ideas, positioning AI as a vital yet supporting force for self-expression within academia. Concerns about originality are addressed, particularly for students who might misuse AI by relying on it passively or using it to bypass full engagement with academic material. Through a philosophical lens, I examine AI’s potential as a formulaic articulator, enabling the precise articulation of human thought without diluting academic integrity. The essay concludes that educational institutions must redefine academic integrity to thoughtfully integrate AI, moving beyond superficial solutions and embracing AI’s role in fostering genuine intellectual growth.


The Nature of Original Thought

The rapid integration of AI in academic settings raises both opportunities and challenges for knowledge representation. Although AI can enhance and refine ideas, it does not independently initiate them, marking a critical distinction from human creativity (Floridi, 2020; Mittelstadt et al., 2023). This essay introduces the Codex of Linguistic Impetus, a framework that helps differentiate AI-assisted filler content from genuinely iterated human ideas. The goal is to underscore AI’s role in democratizing academic discourse, empowering students who may struggle with traditional forms of expression (Rose & Meyer, 2002; Goggin & Newell, 2021). However, these benefits come with ethical concerns: the necessity for rethinking originality standards and ensuring students engage meaningfully with AI as a supportive tool rather than a substitute for critical thinking.

The Codex of Linguistic Impetus provides a structure to assess whether AI’s input reflects genuine intellectual engagement or simply fills gaps passively. In educational environments, the proliferation of AI use demands thoughtful discussion on how originality and academic integrity can adapt, ensuring that AI serves as an equitable, enabling tool without compromising the authenticity of student-authored work (Rose, Meyer, & Hitchcock, 2005).

Enabling Technology as an Equitable Force

AI builds on a legacy of assistive technology aimed at equitable access, particularly in educational contexts. Historically, enabling technologies such as text-to-speech, speech-to-text, and digital organizers have democratized participation for individuals with cognitive or physical impairments, allowing them to more fully engage in academic work (DiSanto & Snyder, 2019; Rose & Meyer, 2002). Universal Design principles show that access to supportive tools doesn’t inherently dilute academic rigor; rather, it fosters inclusion by removing obstacles that might hinder students from participating equally in scholarly discourse (Rose, Meyer, & Hitchcock, 2005). By handling technical aspects such as language precision, AI allows students to focus on the substance of their ideas, rather than being hindered by linguistic structure.

Research demonstrates AI’s role in increasing accessibility and confidence, particularly for students who may struggle with traditional writing or organizing methods. For example, students using AI-assisted learning tools report greater ease in sharing complex ideas, revealing AI’s potential to create a more inclusive academic environment focused on intellectual substance rather than linguistic formality (Smith, Patel, & Larson, 2023). However, as Hughes and Smith (2023) argue, while AI promotes access, it also introduces risks of passive engagement, where students may use AI to complete assignments without developing a comprehensive understanding of the content. The ethical balance lies in utilizing AI’s democratizing potential without allowing it to undermine genuine intellectual effort.

Risks to Authentic Expression?

While AI offers valuable support in academic expression, it presents risks to authenticity, particularly for students who may misuse AI to bypass deeper engagement with their work. International students, for instance, might use AI translation tools to convert their ideas from their native language into English. Although advanced translation models capture depth and intent, effective translation is both an art and a science, requiring nuanced cultural and contextual understanding (Chee, 2022). Without such understanding, AI translation may give the appearance of English fluency but lacks the depth of insight a student might develop through direct language engagement (Jones, 2023).

Additionally, when students rely on AI for language conversion without thoroughly reviewing and refining the translation, it may fail to capture the intended meaning fully. This reliance risks creating technically accurate submissions that lack the student’s authentic intellectual input. This potential misuse highlights the need for educators and students alike to approach AI as a complement to, rather than a substitute for, genuine engagement with academic material. As Carr (2020) suggests, the convenience of AI tools may lead students to disengage from critical aspects of learning, contributing to a passive interaction with course content.

Furthermore, instructors have noted that without clear guidance, students may perceive AI as a shortcut for academic tasks rather than as a tool for enhancing their understanding of complex topics (Hughes & Smith, 2023). This misuse risks creating superficial submissions that lack genuine academic inquiry, underscoring the importance of instituting boundaries that promote ethical, thoughtful AI use in educational settings.

A Philosophical Perspective

The concept that human thought processes resemble computational steps implies that knowledge can be distilled into logical sequences. This aligns with algorithmic thinking, where complex ideas are broken down into granular details—smaller, essential components that require precise arrangement for accurate interpretation (Floridi, 2020). Floridi’s work on the human-AI relationship reveals that AI can bridge gaps between conception and expression, allowing human thought to be translated into structured formulae without losing intent (Mittelstadt et al., 2023).

AI’s capacity to convert natural language instructions into precise formulaic language illustrates its potential to support academic discourse by handling linguistic minutiae. For instance, when researchers describe an algorithm or complex methodology, AI can capture it in exact formulae, providing clarity and coherence in academic presentations. This precision is especially valuable in STEM fields, where rigorous articulation of ideas is paramount (Jones, 2023). AI’s role in managing technical details enables researchers and students to concentrate on core insights, confident in the accuracy of their conceptual structures.

AI’s formulaic precision does not replace human creativity but enhances access to academic discourse by empowering individuals to present ideas rigorously. This articulative capacity positions AI as an instrumental tool, translating human concepts into a language of accuracy and coherence, particularly useful in cases where linguistic or cognitive barriers might otherwise obscure intent.

Rethinking Originality in the Age of AI

Educational institutions face the challenge of rethinking originality in the age of AI. Rather than assessing solely whether content is independently student-produced, assessments should also evaluate the quality of engagement, such as depth of analysis, critical insight, and the student’s own intellectual contribution within AI-assisted work (Mayer & Jenkins, 2022). This shift requires professors to adapt assessment criteria, creating a new form of academic integrity that incorporates AI’s support while preserving genuine student insight.

For students, responsible AI use involves integrating these tools into their workflow to clarify understanding rather than complete tasks passively. This reimagined view of originality would encourage students to view AI as a resource for refining and supporting their thought processes rather than as a tool to bypass academic effort. Professors could incorporate AI-specific criteria into marking rubrics, focusing on evidence of the student’s critical engagement and reflective input, even in work structured with AI assistance (Jones, 2023). This approach fosters a culture of integrity while recognizing AI’s role in contemporary academia.

By framing AI as a collaborative tool rather than a substitute for thought, educational institutions can establish a model of integrity that aligns with the demands of modern academia, where technology is integral to both intellectual and creative endeavors.


Toward a Redefinition of Academic Integrity

As AI reshapes the academic landscape, institutions must proactively redefine academic integrity to reflect this new reality. Applying non-committal policies or superficial fixes fails to address AI’s profound impact on student engagement and authenticity. Educational systems must thoughtfully integrate AI into assessment frameworks, recognizing its role as a democratizing tool that manages the technical “minutiae” of academic expression while preserving the originality of human thought. This approach necessitates redefining originality standards to focus on intellectual engagement and personal contribution rather than solely on the independence of content creation.

The Codex of Linguistic Impetus framework presented here provides a conceptual basis for differentiating genuine intellectual engagement from passive AI reliance, fostering responsible and ethical AI use. By rethinking originality and incorporating AI thoughtfully into academic assessments, institutions can cultivate a generation of students who use AI responsibly, creatively, and ethically, supporting a more inclusive and forward-thinking academic environment. The concept is only introduced here, however; it would need rigorous testing in a full academic paper before it can claim efficacy in the discipline, a task the author hopes to take up soon.

References

Barker, S. (2018) The Socratic Method and Socratic Algorithmic Thought, New York: Routledge.

Carr, N. (2020) The Shallows: What the Internet is Doing to Our Brains, 2nd ed., New York: W.W. Norton & Company.

Chee, F. (2022) Digital Cultures and Education in the 21st Century, 3rd ed., London: Palgrave Macmillan.

DiSanto, J. and Snyder, T. (2019) ‘Enabling technologies for disabilities: New frontiers’, Technology in Society, 56, pp. 11–16.

Floridi, L. (2020) The Logic of Information: A Theory of Philosophy as Conceptual Design, Oxford: Oxford University Press.

Goggin, G. and Newell, C. (2007) Digital Disability: The Social Construction of Disability in New Media, Lanham, MD: Rowman & Littlefield.

Goggin, G. and Newell, C. (2021) ‘Accessing higher education through AI: Revisiting equity and inclusion’, Journal of Digital Accessibility, 12(2), pp. 55–68.

Hughes, J. and Smith, M. (2023) ‘AI in higher education: Examining the impact on student engagement and authenticity’, Journal of Educational Technology, 45(3), pp. 234–247.

Jones, A. (2023) ‘AI translations and cultural fidelity in academic writing’, Journal of Language and Cultural Studies, 18(4), pp. 188–201.

Mayer, R. and Jenkins, P. (2022) ‘Guidelines for AI integration in assessment and evaluation’, Journal of Learning Sciences, 31(1), pp. 45–62.

Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., and Floridi, L. (2023) ‘The ethics of algorithms: Mapping the debate in the age of AI’, Big Data & Society.

Rose, D.H. and Meyer, A. (2002) Teaching Every Student in the Digital Age: Universal Design for Learning, Alexandria, VA: Association for Supervision and Curriculum Development.

Rose, D.H., Meyer, A., and Hitchcock, C. (2005) The Universality of Access: A Framework for Digital Learning, Cambridge, MA: Harvard University Press.

Smith, R., Patel, S., and Larson, T. (2023) ‘Empowering students through AI-assisted learning tools: An accessibility perspective’, Accessibility in Education Journal, 15(3), pp. 92–104.

Plateaugration: A Sustainable Alternative to the Infinite Regress of Capabilities and Features

In today’s rapidly evolving technological landscape, the relentless push for more capabilities and additional features has created a cycle of unsustainable growth. This essay explores the concept of “x times (capabilities + features)”—an infinite regress that leads to unsustainable systems. As complexity increases, so do the demands on energy, resources, and human capital. However, there is an alternative: Plateaugration, a model where systems maintain their present value, adjusting only to meet external demands such as inflation or environmental factors. This concept offers a way to balance growth without overloading systems. Throughout this essay, I will introduce Plateaugration, examine the consequences of the current model of unsustainable growth, and present a future vision where technology regression—far from being a risk—becomes a positive, culturally driven shift. The essay will conclude by considering how cryptocurrency and blockchain technology could eliminate inflation and support sustainable system design.

The Problem: Infinite Regress in Capabilities and Features

Technological systems are caught in an endless cycle of expanding capabilities and proliferating features. This growth can be mathematically expressed as:

S(t) = 1 / ( x · [ C(t) + F(t) ] )

Where S(t) represents a system’s sustainability at time t, x reflects external growth pressures, and C(t) and F(t) are the system’s capabilities and features. As capabilities expand, new features must be added to exploit them. Conversely, the addition of new features demands further capability enhancements. Over time, this leads to an exponential increase in system complexity, resulting in “feature bloat” or “capability fatigue.” This infinite feedback loop destabilises systems, making them harder to maintain, scale, and secure. Leveson (2012) highlights that as systems become more complex, their unanticipated interactions create new risks, leading to unsustainability.
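
A few lines of Python make the loop’s dynamics visible; this is a toy model whose growth rates and starting values are assumptions, not empirical estimates:

# Toy model of the capability-feature feedback loop (all rates assumed).
x = 1.2            # external growth pressure
C, F = 10.0, 10.0  # initial capabilities and features
for year in range(10):
    C *= 1.15      # new features demand more capability
    F *= 1.15      # new capability invites more features
    S = 1.0 / (x * (C + F))   # sustainability falls as complexity grows
    print(f"Year {year}: complexity={C + F:.0f}, S={S:.4f}")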

Recent research highlights the scale of this problem in our modern, data-driven world. The International Energy Agency (IEA, 2021) notes a sharp rise in global data centre electricity consumption, driven by the demand for cloud computing, AI, and data-intensive applications. This surge exemplifies the unsustainable loop of increasing capabilities and features, as each new technological advancement requires more resources to sustain. Brooks’ (1995) “second-system effect” famously warned against overcomplicating systems, and today’s cloud-based infrastructures demonstrate the exponential inefficiencies he foresaw.

The Qualitative Consequences of Unsustainable Growth

As systems continue to expand in both capabilities and features, the consequences extend beyond mere technical inefficiencies. The environmental impact is significant, particularly with the Internet, which is fast becoming unsustainable. Global data centres require vast amounts of energy and fresh water to cool servers, placing strain on natural resources. According to Nature Sustainability (2020), data centres account for a growing share of water usage in regions that are already suffering from water scarcity. These centres also consume massive amounts of energy, leading to questions about whether such resources would be better allocated to more pressing human welfare needs, such as healthcare or education (Greenpeace, 2021).

Culturally, the relentless drive for technological upgrades could lead to a threshold shift—a cultural moment where society collectively rejects the demand for constant expansion in favour of sustainability. This shift might arise out of necessity, due to resource shortages, or from a growing environmental consciousness. Technology regression, rather than being a risk, could offer a positive opportunity for societies to rethink their relationship with digital systems and technological complexity. Graziotin et al. (2014) argue that as systems grow overly complex, they fatigue both users and developers. A potential cultural pivot away from “more” might involve abandoning smartphones or other high-tech devices in favour of simpler tools like CB radios, which can be networked across large areas with minimal resource use.

Plateaugration: A New Model for Sustainable Growth

In contrast to the unsustainable model of constant technological expansion, Plateaugration advocates for growth only as much as is needed to maintain a system’s present value. This means optimising and saturating existing capabilities rather than endlessly adding new features. The focus is not on doing less with less, a common de-growth argument, but rather on doing the same with less. By leveraging efficiency, systems can be optimised to maintain their current functions while reducing resource consumption.

This model finds practical support in the rise of intermediate technology, where high-tech tools are combined with more accessible, low-tech solutions to create a sustainable, decentralised system. Science Advances (2021) highlights 3D printing as an example of this principle in action. Decentralised manufacturing through 3D printing reduces the need for global supply chains, allowing local production with minimal resource use. A future guided by Plateaugration could see a merging of advanced digital systems and analogue tools, providing opportunities to maintain high standards of living with fewer resources.

The Internet and Fresh Water Consumption: A Case Study in Unsustainability

The internet, in its current form, is a prime example of unsustainable system design. According to a 2023 report by the IEA, data centres are among the most significant consumers of both electricity and water. As data demands grow, the energy needed to maintain cloud services and streaming platforms escalates, placing further strain on global energy resources. Nature Communications (2022) warns that without a strategic shift, the internet’s energy and water consumption will become unmanageable.

A system designed under the principles of Plateaugration would prioritise optimising current resources. Instead of continuing to build more data centres or adding features that require more bandwidth, the focus would shift to increasing the efficiency of existing infrastructure. This approach could involve developing cooling technologies that reduce water consumption or exploring ways to limit energy use without sacrificing essential internet functions. By realigning the internet with sustainable principles, we could redirect resources toward pressing global needs.

Technology Regression: A Positive Shift Through Threshold Theory

Technological regression need not be seen as a risk; it can be understood as a positive shift aligned with threshold theory, where a cultural move away from demand occurs either out of necessity or out of conscience. This is not about simply doing less, but about doing the same with less. By optimising existing systems, we can move away from excessive feature growth and maintain functional systems without unsustainable resource consumption.

A critical component of this shift could be the adoption of cryptocurrency and blockchain technology. Due to its decentralised and immutable nature, blockchain has the potential to eliminate inflation by design. Fry and Cheah (2016) highlight how cryptocurrency systems prevent arbitrary monetary expansion, a key driver of inflation in traditional fiat systems. Moreover, the transparent and decentralised structure of cryptocurrencies such as Bitcoin caps the supply of currency, ensuring that inflation is essentially neutralised over time.
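
Bitcoin’s cap follows directly from its halving schedule, and a few lines of Python can verify it (50 BTC initial block subsidy, halving every 210,000 blocks; cutting off below one satoshi is a simplification of the protocol’s integer arithmetic):

# Sum the block subsidies across all halvings.
subsidy = 50.0
total = 0.0
while subsidy >= 1e-8:          # subsidies below one satoshi round to zero
    total += 210_000 * subsidy
    subsidy /= 2
print(f"Maximum supply: about {total:,.0f} BTC")   # about 21 million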

More recent studies by Narayanan et al. (2019) emphasise the potential of blockchain to transform economic systems by eliminating the need for resource-heavy, centralised banking infrastructures. By shifting economies to blockchain-based currencies, we could reduce the energy and resources needed to support traditional financial systems, which include everything from physical bank branches to complex international transaction networks. This shift to decentralised financial systems would exemplify Plateaugration—doing the same with less—by reducing the resource intensity of managing economic systems.

Sustainable Project Management: The Plateaugration Approach

For project management, Plateaugration offers a sustainable framework by focusing on maintaining present value rather than constant expansion. This approach encourages project managers to implement modular systems that can adapt over time without becoming bloated by unnecessary features. Leveson (2012) proposes that by limiting the scope of system growth, it’s possible to create systems that are both efficient and resilient in the face of changing demands.

A Plateaugration-inspired project management strategy would prioritise efficiency, scalability, and long-term sustainability. Projects could be designed to meet immediate needs while remaining adaptable to future changes without requiring substantial new investments in resources or infrastructure.

Conclusion

The cycle of “x times (capabilities + features)” leads to unsustainable systems where complexity grows unboundedly. The alternative—Plateaugration—offers a way forward, focusing on saturation and optimising systems to maintain their present value. By shifting toward a model that prioritises efficiency over endless growth, society can build sustainable systems capable of meeting present and future needs with fewer resources.

By incorporating cryptocurrency and blockchain technologies into this vision, we can eliminate inflation by design and create economic systems that are far more efficient than the traditional models. Through these shifts, technology regression becomes not a risk, but a positive cultural realignment with sustainability, ensuring systems are functional, adaptable, and enduring.


References

  1. Leveson, N. (2012). Engineering a Safer World: Systems Thinking Applied to Safety. MIT Press.
  2. International Energy Agency (IEA). (2021). Global Energy Data Report: Data Centre Energy Use.
  3. Greenpeace. (2021). Clicking Clean: Who is Winning the Race to Build a Green Internet?
  4. Nature Sustainability. (2020). Water and Energy Consumption in Data Centres.
  5. Fry, J., & Cheah, E-T. (2016). “Negative Bubbles and Shocks in Cryptocurrency Markets,” Journal of Risk Finance.
  6. Narayanan, A., Bonneau, J., Felten, E., Miller, A., & Goldfeder, S. (2019). Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction. Princeton University Press.

Can Blockchain Drive Significant Progress Toward Global Climate Goals?

Abstract

Blockchain technology has advanced beyond cryptocurrencies and is now integral to various industries, particularly manufacturing. This paper explores the role blockchain can play in accelerating the circular economy (CE) and achieving net-zero carbon emissions targets. By enhancing transparency, efficiency, and resource optimization, blockchain has the potential to significantly reduce carbon emissions as part of a broader system of sustainable change. A calculus-based model is presented to quantify blockchain’s impact on manufacturing efficiency and global emissions, accounting for the Jevons Paradox. A thought experiment is conducted to estimate how manufacturing efficiency gains of 25% could affect global carbon emissions. The study highlights blockchain’s role as part of a larger system, driving towards the CE and integrating AI and other emerging technologies for maximum environmental impact.


Introduction

The escalating climate crisis places humanity at a critical juncture in its industrial and economic practices. The scientific consensus is unequivocal—immediate and transformative actions are required if we are to avert catastrophic consequences. Global industry, particularly manufacturing, remains one of the foremost contributors to greenhouse gas emissions, with manufacturing accounting for roughly 30% of global carbon emissions (International Energy Agency [IEA], 2020). Within this framework, the responsibility of the manufacturing sector extends beyond mere adaptation; it must lead the way towards a radical reconceptualization of the production process, one that simultaneously optimizes efficiency and minimizes environmental degradation.

Blockchain technology, since its inception through the conceptual and practical innovations introduced by Nakamoto in 2008, has continuously evolved, shifting from a purely transactional framework, such as cryptocurrencies, to a more expansive role encompassing data integrity, transparency, and accountability. Yet, its full potential, especially when applied to sectors like manufacturing, remains underexplored. The intersection of blockchain with key ecological imperatives provides us with the potential to solve inefficiencies across global supply chains, from resource extraction to the end-of-life phase of manufactured goods, ultimately supporting a broader agenda towards the Circular Economy (CE).

This paper contends that blockchain, when integrated into the manufacturing sector at scale, offers unprecedented opportunities to drive reductions in carbon emissions through increased supply chain transparency, optimized resource usage, and decreased operational inefficiencies. By presenting a calculus-based model, we seek to quantitatively assess the real-world impact of blockchain adoption, examining its capacity to mitigate emissions. Crucially, the paper also engages with potential paradoxes, such as the Jevons Paradox, that may undermine blockchain’s efficacy if not properly managed.


Literature Review

The academic discourse surrounding blockchain’s potential to drive sustainable change has intensified in recent years, though several critical gaps persist. While the technology’s application has seen robust theoretical exploration, particularly within the domains of financial technologies and secure data exchange, its environmental potential remains understudied, particularly within industrial applications such as manufacturing. Blockchain’s capacity to reduce inefficiencies, improve transparency, and promote sustainability has been widely acknowledged (Saberi et al., 2019), yet many studies provide only broad outlines without delving into the specific mechanisms through which blockchain might be operationalized to achieve tangible carbon reductions.

One of the earliest insights into blockchain’s relevance for sustainability comes from the study by Kouhizadeh et al. (2020), which emphasizes blockchain’s transparency mechanisms in promoting waste reduction and resource optimization. Their research forms the bedrock for understanding how distributed ledgers might be harnessed in the context of the Circular Economy (CE). However, they stop short of developing a comprehensive framework for blockchain’s impact on emissions, leaving significant room for further exploration.

The relationship between blockchain and supply chain efficiency has been extensively studied in the work of Francisco and Swanson (2018), who offer a critical evaluation of blockchain’s role in supply chain transparency. By allowing stakeholders to trace the movement and provenance of raw materials and finished goods in real-time, blockchain addresses critical inefficiencies. However, their work remains largely theoretical and does not engage with concrete emissions metrics, a gap this paper seeks to address through its quantitative approach.

In another vein, the integration of blockchain into renewable energy systems has been explored by Andoni et al. (2019). While their research focused on how blockchain facilitates peer-to-peer energy trading, enabling the adoption of renewable sources, it provided vital insights into the energy-related implications of blockchain at the industrial level. However, this research does not address the specificities of blockchain’s role in manufacturing.

Further contributing to the discourse, Nwankpa et al. (2021) presented a seminal study estimating global supply chain inefficiencies to exceed 25%, directly aligning with the thesis presented in this paper. These inefficiencies, they argue, stem from the opacity of transactions, outdated operational processes, and the mismatch between production and consumption. Blockchain’s promise, they contend, lies in its ability to drive system-wide improvements in these domains.

Yet, despite these explorations, much remains to be understood about the interaction between blockchain efficiencies and the Jevons Paradox. Chapman and Zhang (2023) argue that any efficiency improvements in industrial operations can paradoxically lead to greater overall consumption, thus negating the potential gains in carbon reduction. Their critical perspective suggests the need for policies that can mitigate these effects, ensuring that the environmental benefits of blockchain adoption are realized. By contributing to this underdeveloped area, this paper seeks to bridge the gap between blockchain’s potential and its empirical outcomes.


Methodology

To evaluate the potential for blockchain to reduce emissions within the manufacturing sector, we employ a combination of thought experiments and empirical modeling. Specifically, we use a calculus-based approach to model the impact of blockchain on manufacturing efficiency and its consequent effect on carbon emissions. The model will integrate blockchain adoption rates, resource optimization potentials, and the possibility of economic rebound effects (i.e., the Jevons Paradox).

This paper’s approach incorporates these components as follows.

The net efficiency equation that we derive models the combined effect of blockchain adoption and resource optimization; its assumptions and behaviour are explored in the simulation below.


Mathematical Model and Simulation

The thought experiment simulates blockchain’s impact under a scenario where the manufacturing sector experiences a 25% increase in efficiency as a direct result of blockchain integration. This figure is informed by studies suggesting that supply chain inefficiencies often exceed 25% (Nwankpa et al., 2021). Over a period of 10 years, we simulate the cumulative reduction in carbon emissions, considering the effect of blockchain-driven transparency and automation on the optimization of manufacturing processes.

For the purposes of this experiment, the sensitivity factor γ is calibrated according to the manufacturing sector’s carbon intensity, which accounts for approximately 30% of global emissions (IEA, 2020). The model assumes that as blockchain adoption progresses, both energy consumption and waste generation will decrease, leading to a proportional reduction in carbon output.
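
As a hedged sketch of this thought experiment in Python (the linear adoption curve, rebound fraction, and global emissions baseline are assumptions; the paper’s own net efficiency equation is not reproduced here):

# Assumed inputs (illustrative, not empirical estimates).
GLOBAL_EMISSIONS_GT = 36.0     # rough annual global CO2, gigatonnes
MANUFACTURING_SHARE = 0.30     # sector share cited in the text (IEA, 2020)
EFFICIENCY_GAIN = 0.25         # blockchain-driven gain at full adoption
REBOUND = 0.10                 # assumed Jevons-paradox rebound fraction

for year in range(1, 11):
    adoption = year / 10                        # assumed linear adoption over 10 years
    gross_cut = MANUFACTURING_SHARE * EFFICIENCY_GAIN * adoption
    net_cut = gross_cut * (1 - REBOUND)         # rebound erodes part of the gain
    print(f"Year {year}: net global emissions cut of about "
          f"{net_cut * GLOBAL_EMISSIONS_GT:.2f} Gt")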


Discussion

The results of the simulation provide compelling evidence that blockchain integration, by fostering transparency and resource optimization, can contribute significantly to reducing global carbon emissions. This paper’s thought experiment reveals that a 25% increase in manufacturing efficiency, when achieved through blockchain, can reduce emissions in alignment with international climate targets, such as those established under the Paris Agreement, which aims for a 45% reduction by 2030 (UNFCCC, 2015).

Blockchain’s ability to provide real-time, immutable data regarding resource use enables manufacturers to adopt a more granular approach to emissions management. However, blockchain alone cannot achieve net-zero emissions. Its environmental impact must be coupled with broader circular economy strategies, as well as AI-driven predictive systems that enhance energy efficiency further.

The issue of the Jevons Paradox must also be addressed to avoid any potential rebound effects. Blockchain’s ability to drive down costs through efficiency gains could lead to increased consumption if unchecked. Policies must be enacted that encourage the reinvestment of these efficiency gains into further decarbonization initiatives, ensuring that the overall consumption does not rise.


Conclusion

Blockchain presents a promising path for reducing carbon emissions within the manufacturing sector. By leveraging its transparency, automation, and data integrity, blockchain could drive a 25% increase in manufacturing efficiency, as posited in our thought experiment. This efficiency gain has the potential to significantly reduce emissions, aligning with the global targets established in the Paris Agreement. However, blockchain’s environmental benefits will only be fully realized when integrated into a broader framework that includes policy interventions, circular economy models, and the adoption of complementary technologies like AI and IoT. While blockchain can contribute to significant carbon reductions, it cannot act alone. Strategic coordination, regulatory support, and comprehensive industry buy-in will be critical to ensure that blockchain’s potential is fully harnessed and that its efficiency improvements lead to sustainable reductions in emissions. Future research should further investigate the cumulative impact of blockchain when combined with other green technologies and explore its long-term influence on global emissions, especially as industries adopt it at scale.

References

Alcott, B. (2005). Jevons’ Paradox. Ecological Economics, 54(1), 9-21.

Andoni, M., Robu, V., Flynn, D., Abram, S., Geach, D., Jenkins, D., & Peacock, A. (2019). Blockchain technology in the energy sector: A systematic review of challenges and opportunities. Renewable and Sustainable Energy Reviews, 100, 143-174.

Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100(5), 992-1026.

Buterin, V. (2014). A next-generation smart contract and decentralized application platform. Ethereum White Paper. https://ethereum.org/en/whitepaper/

Cao, D., Puntaier, E., Gillani, F., Chapman, D., & Dewitt, S. (2024). Towards integrative multi-stakeholder responsibility for net zero in e-waste: A systematic literature review. Business Strategy and the Environment.

Chapman, D. L., & Zhang, H. (2023). Overcoming Jevons’ Paradox in the Circular Economy: Is blockchain a threat or solution to climate change? In Proceedings of the 6th European Conference on Industrial Engineering and Operations Management. IEOM Society International.

Francisco, K., & Swanson, D. (2018). The supply chain has no clothes: Technology adoption of blockchain for supply chain transparency. Logistics, 2(1), 2.

International Energy Agency (IEA). (2020). Energy Technology Perspectives 2020. IEA. Retrieved from https://www.iea.org/reports/energy-technology-perspectives-2020

Kouhizadeh, M., Sarkis, J., & Zhu, Q. (2020). Blockchain technology and the circular economy: Examining adoption barriers. International Journal of Production Economics, 231, 107831.

Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system. Retrieved from https://bitcoin.org/bitcoin.pdf

Pan, X., Zhao, Y., Lu, W., & Pan, X. (2019). Integrating blockchain with the Internet of Things and cloud computing for secure healthcare. Computer Communications, 150, 56-64.

Patil, B., Tiwari, A., & Yadav, V. (2021). Impact of blockchain technology on the circular economy. In Blockchain Technology and Applications for Digital Marketing (pp. 111-126). IGI Global.

Saberi, S., Kouhizadeh, M., Sarkis, J., & Shen, L. (2019). Blockchain technology and its relationships to sustainable supply chain management. International Journal of Production Research, 57(7), 2117-2135.

Upadhyay, A., Laing, T., Kumar, V., & Dora, M. (2021). Exploring barriers and drivers to implementing circular economy practices in the mining industry. Resources Policy, 72, 102053.

Zheng, Z., Xie, S., Dai, H., Chen, X., & Wang, H. (2017). An overview of blockchain technology: Architecture, consensus, and future trends. In 2017 IEEE International Congress on Big Data (BigData Congress) (pp. 557-564). IEEE.

Why everyone is (already) an AI expert


Artificial Intelligence (AI) has become an omnipresent force in modern society, permeating various aspects of our daily lives. From virtual assistants like Siri and Alexa to sophisticated chatbots like ChatGPT, AI systems have evolved to a point where interacting with them requires little to no technical expertise. This widespread accessibility has led to the notion that everyone, by virtue of using AI, has become an AI expert. However, this assumption warrants a critical examination. Is mere interaction with AI sufficient to confer expertise, or does it mask a deeper lack of understanding of the complex systems at play? This essay explores the democratization of AI, its role as an equalizing force, and the potential illusion of expertise it creates. Drawing on theories of social justice, cognitive science, and knowledge acquisition, it delves into the implications of AI’s integration into society and questions what it truly means to be an “AI expert” in the contemporary digital landscape.


The Democratization of AI: Bridging the Gap Between Imagination and Execution

The advent of AI models capable of processing and understanding natural language has significantly lowered the barriers to engaging with advanced technology. Traditional programming languages like Python require specific syntax and a foundational understanding of coding principles, which can be prohibitive for many individuals. For example, calculating the average sales from a dataset in Python involves importing libraries, reading data files, and applying functions:

import pandas as pd

# Load the dataset and compute the mean of the 'Sales' column.
data = pd.read_csv('sales_data.csv')
average_sales = data['Sales'].mean()
print(average_sales)

In contrast, with AI systems like ChatGPT, a user can achieve the same result by simply articulating the request in plain English: “Please calculate the average sales from this dataset and provide the result.” The AI understands and executes the request without the user needing to write any code. This accessibility democratizes AI, allowing individuals without technical expertise to utilize advanced functions (Halevy, Norvig, & Pereira, 2009).

This democratization acts as an equalizing force, empowering people from diverse backgrounds to bring their ideas to fruition. Albert Einstein famously stated, “Imagination is more important than knowledge” (Einstein, 1929). In the context of AI, this suggests that creative vision, rather than technical skill, becomes the primary driver of innovation. AI serves as the bridge between human imagination and technological execution, effectively leveling the playing field and enabling broader participation in areas previously dominated by specialists.

The implications of this shift are profound. Amartya Sen’s Capability Approach emphasizes the importance of providing individuals with the means to achieve the lives they value (Sen, 1999). By reducing barriers to advanced technology, AI enhances individual capabilities, promoting greater equity and social justice. It allows for active participation in fields such as data analysis, content creation, and problem-solving, regardless of one’s technical background.

Furthermore, the democratization of AI aligns with John Rawls’ theory of justice, which advocates for social and economic inequalities to be arranged so that they are to the greatest benefit of the least advantaged (Rawls, 1971). By making sophisticated tools accessible to all, AI contributes to a more equitable distribution of opportunities, fostering inclusivity and diversity in innovation.


The Illusion of Expertise: Interacting with AI versus Understanding AI

While AI’s accessibility empowers users, it also raises critical questions about the nature of expertise. Does the ability to interact with AI equate to being an AI expert, or does it create an illusion of expertise? To address this, it is essential to distinguish between using AI tools and understanding the underlying mechanisms that govern their operation.

Michael Polanyi’s concept of tacit knowledge highlights the difference between knowing how to use something and understanding the principles behind it (Polanyi, 1966). Tacit knowledge encompasses the unarticulated, experiential insights that are difficult to transfer through written or verbal communication. In the context of AI, users may become proficient at instructing AI systems to perform tasks but may lack an understanding of how these systems process information, learn from data, or the biases that may be inherent in their algorithms.

This gap in understanding can have significant implications. For instance, relying on AI-generated analyses without a foundational knowledge of data interpretation can lead to misinformed decisions. The AI might produce results based on flawed assumptions or biased data, and without critical evaluation, users may accept these outputs at face value. Studies have shown that over-reliance on technology can lead to cognitive offloading, where individuals diminish their own critical thinking and problem-solving abilities (Carr, 2011). This reliance can erode the development of expertise, as users may not engage deeply with the subject matter.

Moreover, the abstraction of complexity inherent in AI systems can obscure important ethical considerations. Users may be unaware of how AI models are trained, the nature of the data they use, and the potential for perpetuating biases or misinformation. As Bolukbasi et al. (2016) demonstrate, AI systems can inadvertently reinforce societal biases present in their training data, leading to outputs that perpetuate stereotypes or discriminatory practices.


The Depth Behind the Interface: Understanding the Complexity of AI Systems

AI models like ChatGPT are built upon complex architectures involving deep learning, neural networks, and vast datasets. Understanding these underlying technologies requires a significant level of expertise in fields such as computer science, mathematics, and cognitive science (Goodfellow, Bengio, & Courville, 2016). The AI’s ability to interpret natural language and generate coherent responses is the result of intricate algorithms processing patterns in data at scales beyond human capability.

This complexity underscores the importance of recognizing the limitations of one’s expertise when interacting with AI. While the user interface presents a simplified experience, the operations behind the scenes involve layers of computations and decision-making processes that can have far-reaching implications. Ethical considerations, such as data privacy, algorithmic transparency, and accountability, are intertwined with the technical aspects of AI and require a nuanced understanding to navigate effectively.

Nicholas Carr (2011) cautions against the complacency that can arise from over-reliance on technology. Without a critical approach to AI, users may not question the validity of the information provided or recognize when the AI operates outside its intended scope. This lack of scrutiny can lead to the spread of misinformation or the misuse of AI in sensitive contexts.


AI as an Equalizing Force: Opportunities and Challenges in Social Justice

The potential of AI to serve as an equalizing force aligns with broader goals of social justice and equity. By making advanced technological tools accessible, AI can empower marginalized communities, provide educational opportunities, and promote inclusive innovation. For example, AI-driven language translation can bridge communication gaps, and AI-powered educational platforms can provide personalized learning experiences to students in underserved areas.

However, realizing this potential requires intentional efforts to address the challenges associated with AI deployment. Issues such as the digital divide, where access to technology and the internet is unevenly distributed, can limit the reach of AI’s benefits. Additionally, biases in AI systems can disproportionately affect marginalized groups, exacerbating existing inequalities (Noble, 2018).

John Rawls’ principles of justice emphasize that social and economic inequalities should be arranged to benefit the least advantaged (Rawls, 1971). To honor this principle in the context of AI, developers and policymakers must ensure that AI systems are designed and implemented with fairness, accountability, and transparency in mind. This includes diversifying the datasets used to train AI models, involving stakeholders from different backgrounds in the development process, and establishing regulations that protect against misuse.

Amartya Sen’s Capability Approach further highlights the importance of enabling individuals to have the freedom to achieve well-being (Sen, 1999). AI can contribute to this by providing tools that enhance personal agency and expand opportunities. However, it also necessitates that users are equipped with the knowledge and critical thinking skills to use AI responsibly.


Cognitive Offloading and the Risk of Diminished Critical Thinking

The convenience of AI can lead to cognitive offloading, where individuals rely on technology to perform tasks that previously required active mental engagement. While this can enhance efficiency, it may also diminish critical thinking skills and reduce the capacity for independent problem-solving (Barr, Pennycook, Stolz, & Fugelsang, 2015).

For example, when AI provides quick answers to complex questions, users may accept these responses without questioning their accuracy or considering alternative perspectives. This can create a passive consumption of information, where the user’s role shifts from active participant to passive recipient. The risk is that over time, individuals may become less adept at critical analysis, creativity, and original thought.

Furthermore, the feedback loops inherent in AI systems can reinforce existing preferences and biases. Recommendation algorithms tailor content based on previous interactions, which can create echo chambers and limit exposure to diverse viewpoints (Pariser, 2011). This can hinder the development of a well-rounded understanding and impede the cultivation of expertise.


Conclusion: Navigating the Path from User to Expert

The integration of AI into everyday life offers unprecedented opportunities for empowerment and innovation. By lowering barriers to entry, AI democratizes access to advanced technological capabilities, enabling individuals to execute their imaginative ideas without requiring specialized technical skills. This shift holds the promise of greater equity and inclusivity, aligning with principles of social justice and expanding individual capabilities.

However, the notion that everyone is an AI expert simply by virtue of using AI tools is an oversimplification that overlooks the complexities and responsibilities inherent in engaging with advanced technology. True expertise involves not only the ability to utilize AI but also a deep understanding of its underlying mechanisms, ethical considerations, and potential impacts on society.

As we embrace AI’s potential, it is essential to foster a culture of critical engagement and continuous learning. This includes educating users about the fundamentals of AI, encouraging skepticism and questioning of AI outputs, and promoting awareness of the broader implications of AI on cognition, behavior, and social structures.

In navigating the path from user to expert, we must balance the convenience and accessibility of AI with a commitment to developing the knowledge and skills necessary to use it responsibly. Only then can we fully realize AI’s potential as a tool for empowerment while mitigating the risks associated with its widespread adoption.


References

  • Barr, N., Pennycook, G., Stolz, J. A., & Fugelsang, J. A. (2015). The brain in your pocket: Evidence that smartphones are used to supplant thinking. Computers in Human Behavior, 48, 473–480.
  • Bolukbasi, T., Chang, K. W., Zou, J., Saligrama, V., & Kalai, A. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, 4349–4357.
  • Carr, N. (2011). The Shallows: What the Internet Is Doing to Our Brains. W. W. Norton & Company.
  • Einstein, A. (1929). Saturday Evening Post Interview. (As quoted in Calaprice, A. [2000]. The Expanded Quotable Einstein. Princeton University Press.)
  • Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
  • Halevy, A., Norvig, P., & Pereira, F. (2009). The unreasonable effectiveness of data. IEEE Intelligent Systems, 24(2), 8–12.
  • Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
  • Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin Press.
  • Polanyi, M. (1966). The Tacit Dimension. Doubleday & Co.
  • Rawls, J. (1971). A Theory of Justice. Harvard University Press.
  • Sen, A. (1999). Development as Freedom. Knopf.

Final Thoughts

As we continue to integrate AI into the fabric of society, it is imperative to reflect critically on our relationship with these technologies. The convenience and accessibility offered by AI should not eclipse the need for understanding and responsibility. Are we, as a society, cultivating true expertise, or are we content with the illusion thereof? How can we ensure that the democratization of AI leads to genuine empowerment rather than complacency? These questions invite us to engage more deeply with the technologies that shape our world, urging us to become not just users, but informed and critical participants in the evolution of AI.

The Future of Work: Why AI, Coding, and Robotics Are Essential Skills for Tomorrow’s Leaders

As technology embeds itself ever more deeply into our lives, it is clear that the skills which once secured careers are evolving at an unprecedented pace. Artificial Intelligence (AI), coding, and robotics are no longer niche areas reserved for tech enthusiasts; they are rapidly becoming foundational skills across engineering, manufacturing, and project management sectors. Embracing these technologies is not just about staying current—it is about future-proofing industries and ensuring sustainable growth in a rapidly changing world.

In this blog post, I explore how integrating AI, coding, and robotics into education and industry practices is essential for fostering innovation and sustainability. We will examine how these technologies are transforming industries, discuss ethical considerations, and highlight the role of collaboration between educational institutions and industry leaders. By understanding the significance of these skills, we can better prepare ourselves and the next generation for the challenges and opportunities ahead.


Embracing the Digital Revolution: The New Skillset

The Fourth Industrial Revolution is characterized by a fusion of technologies blurring the lines between the physical, digital, and biological spheres (Schwab, 2017). AI and automation are projected to displace 85 million jobs by 2025 but also create 97 million new roles demanding a different set of skills (World Economic Forum, 2023). Traditional competencies are no longer sufficient; there is a pressing need to pivot towards roles that emphasize critical thinking, complex problem-solving, and technological literacy.

AI, coding, and robotics are becoming as fundamental as reading and writing were in the previous century (European Commission, 2020). These technologies represent the new languages of innovation and efficiency. For industry leaders, investing in these skills within their teams is not just beneficial but essential for staying competitive and driving progress.


Integrating Technology into Education: Building the Foundation

Educational institutions worldwide are incorporating AI, coding, and robotics into their curricula. Early exposure not only builds technical proficiency but also fosters creativity and innovation. Computational thinking, a problem-solving process integral to coding, enhances abilities across disciplines (Wing, 2006). This approach involves breaking down complex problems into manageable parts, recognizing patterns, and developing step-by-step solutions, as the sketch below illustrates.
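As a minimal illustration of that decomposition (the inventory data and restocking threshold are invented for the example), a fuzzy question such as "which robot kits need restocking?" can be broken into small, reusable steps:

```python
# Computational thinking in miniature: decompose a fuzzy task into parts,
# recognise the repeating pattern, and compose a step-by-step solution.

inventory = {"motors": 4, "sensors": 12, "wheels": 3, "controllers": 6}
MINIMUM = 5  # assumed restocking threshold

def is_low(stock, minimum=MINIMUM):
    # Step 1: the repeating pattern -- one rule, checked for every part.
    return stock < minimum

def parts_to_restock(inventory):
    # Step 2: apply the rule to each part in turn.
    return [part for part, stock in inventory.items() if is_low(stock)]

def restock_report(parts):
    # Step 3: assemble the pieces into an answer.
    return "Restock: " + ", ".join(parts) if parts else "Stock is fine"

print(restock_report(parts_to_restock(inventory)))  # -> Restock: motors, wheels
```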

Programs like MIT’s Scratch introduce children to coding through interactive storytelling and games, making complex concepts accessible and engaging (Resnick et al., 2009). Robotics competitions, such as FIRST Robotics, inspire students to pursue careers in STEM fields by providing hands-on experience (FIRST, 2022). These initiatives cultivate a generation comfortable with technology and ready to tackle the challenges of sustainable engineering and advanced project management.


AI and Robotics: Transforming Sustainable Engineering

In sustainable engineering, AI and robotics are revolutionizing how we approach environmental challenges. AI algorithms optimize energy consumption, reduce waste, and enhance production efficiency (McKinsey & Company, 2021). For example, Siemens has utilized AI to improve wind turbine efficiency, leading to significant energy savings and reduced carbon emissions (Siemens AG, 2021).

Robotics plays a crucial role by automating repetitive and hazardous tasks, reducing human error, and increasing precision. In manufacturing, robots handle dangerous materials and operate in extreme conditions, safeguarding human workers (International Federation of Robotics, 2021). By integrating AI with robotics, industries are achieving intelligent automation, paving the way for smarter factories and sustainable supply chains.

These technologies enable predictive maintenance, where AI analyzes data from equipment sensors to predict failures before they occur. This approach reduces downtime, extends equipment life, and minimizes environmental impact by preventing leaks or emissions (McKinsey & Company, 2021).
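As a minimal sketch of the idea (the vibration readings and thresholds below are invented, and production systems use far richer models), predictive maintenance can be reduced to flagging sensor readings that drift away from their recent baseline:

```python
import statistics

# Flag a machine for inspection when a reading deviates more than
# `threshold` standard deviations from its rolling baseline window.

def maintenance_alerts(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings that deviate from the recent baseline."""
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and abs(readings[i] - mean) / stdev > threshold:
            yield i, readings[i]

# Simulated vibration data: stable around 1.0, then a developing fault.
vibration = [1.0 + 0.01 * (i % 3) for i in range(40)] + [1.6, 1.8, 2.1]
for idx, value in maintenance_alerts(vibration):
    print(f"Reading {idx}: {value:.2f} -- schedule inspection")
```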


Advancing Manufacturing: The Role of AI, Coding, and Robotics

The manufacturing sector is leveraging AI, coding, and robotics to usher in Industry 4.0 and 5.0. Predictive maintenance powered by AI can reduce downtime by up to 50% and lower maintenance costs (McKinsey & Company, 2021). Coding skills enable the customization of software that drives intelligent systems, allowing solutions tailored to specific industrial needs.

Technologies like 3D printing and the Internet of Things (IoT) are interconnected through coding and AI, facilitating real-time data analysis and decision-making (Gartner, 2022). For example, IoT devices collect vast amounts of data from manufacturing processes, which AI algorithms analyze to optimize production and improve quality control.

Blockchain technology adds a layer of security and transparency to supply chains, enhancing trust and efficiency (Kshetri, 2018). It ensures the authenticity of products, tracks materials from origin to consumer, and reduces fraud and errors. By coding smart contracts on blockchain platforms, companies can automate transactions and enforce agreements without intermediaries.
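The sketch below illustrates the escrow logic such a contract encodes. It is deliberately chain-agnostic and hypothetical: real smart contracts are written in languages such as Solidity and enforced by the blockchain network itself, not by a Python class.

```python
# Toy escrow "smart contract" logic (illustrative only): payment is
# released automatically once funding and delivery conditions are met,
# with no intermediary deciding when to settle.

class EscrowContract:
    def __init__(self, buyer, supplier, amount):
        self.buyer, self.supplier, self.amount = buyer, supplier, amount
        self.funded = False
        self.delivered = False
        self.settled = False

    def deposit(self, who):
        if who == self.buyer and not self.funded:
            self.funded = True

    def confirm_delivery(self, who):
        # In practice an IoT sensor or logistics oracle might report this.
        if who == self.buyer and self.funded:
            self.delivered = True
            self.settle()

    def settle(self):
        # The contract enforces its own terms: funds move only when
        # both conditions hold, and only once.
        if self.funded and self.delivered and not self.settled:
            self.settled = True
            print(f"Released {self.amount} to {self.supplier}, no intermediary needed")

contract = EscrowContract("buyer_co", "supplier_co", 10_000)
contract.deposit("buyer_co")
contract.confirm_delivery("buyer_co")
```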

These advancements not only improve operational efficiency but also contribute to sustainability by optimizing resource use, reducing waste, and lowering the carbon footprint of manufacturing processes.


Preparing for an AI-Driven Future: Leadership and Ethics

Understanding AI, coding, and robotics is crucial for leaders in engineering and project management. These technologies are strategic assets that can drive innovation, sustainability, and competitive advantage. Leaders must also navigate the ethical challenges of AI and robotics deployment, including job displacement and data privacy (Floridi et al., 2018).

Developing a robust ethical framework involves:

  • Transparency: Being open about how AI systems make decisions.
  • Accountability: Establishing clear lines of responsibility for AI actions.
  • Fairness: Ensuring AI systems do not perpetuate biases or discrimination.
  • Privacy: Protecting sensitive data and respecting user confidentiality.

By prioritizing ethics, leaders can foster trust among stakeholders and create a sustainable path forward.


Bridging the Gap: Collaboration Between Industry and Education

Collaboration between industry and educational institutions is critical for addressing the skills gap. Apprenticeships, internships, and partnerships align academic learning with real-world needs (Jackson, 2015). Companies like IBM and Google offer educational resources and certifications to upskill both workers and students (Microsoft & LinkedIn, 2022).

Investing in lifelong learning is essential. As technologies evolve, continuous education ensures that professionals remain relevant and competent. Organizations that foster a culture of learning are better equipped to adapt to technological disruptions and seize new opportunities.

Collaboration can take many forms:

  • Joint research projects: Universities and companies work together on innovative solutions.
  • Guest lectures and workshops: Industry experts share insights with students.
  • Curriculum development: Educational programs are designed with input from industry to meet current demands.

By working together, we can create a pipeline of talent ready to tackle the challenges of the future.


Overcoming Challenges and Seizing Opportunities

While the integration of AI, coding, and robotics offers tremendous potential, challenges remain. Access to quality education in these areas is uneven globally, potentially widening the digital divide (UNESCO, 2022). Underrepresented groups may face barriers to learning opportunities due to geographic, socioeconomic, or institutional disparities. To ensure that everyone has the chance to thrive in the future workforce, it is essential that governments, educational institutions, and industries collaborate to democratize access to these technologies.

This could involve initiatives such as free online courses, scholarships, and technology partnerships aimed at ensuring equitable access to learning resources. By addressing these gaps, we can prepare a diverse workforce for future opportunities in AI-driven industries.

However, the opportunities are immense. By equipping the workforce with AI, coding, and robotics skills, industries can achieve more efficient processes, reduce environmental impact, improve product quality, and stay competitive in a rapidly evolving global market (OECD, 2022). These technologies can streamline operations, optimize resource use, and ensure more resilient supply chains, ultimately leading to more sustainable business practices and enhanced productivity.


Conclusion

AI, coding, and robotics are more than emerging trends—they are essential building blocks for the future of industries such as engineering, manufacturing, and project management. As these technologies continue to shape the global economy, embracing them within education and industry practices is crucial for driving innovation, achieving sustainability goals, and maintaining a competitive edge.

For industry leaders, integrating these skills into organizational training, fostering ethical leadership, and establishing partnerships with educational institutions are key steps toward ensuring long-term success. The future of work lies in our ability to adapt and integrate technology thoughtfully, ensuring that progress is inclusive and that human ingenuity continues to complement technological advancements.

By focusing on both technical competence and ethical considerations, we can help build a future where technology and humanity thrive together, paving the way for a more sustainable and equitable world.


References

  • Benyus, J. M. (2012). Biomimicry: Innovation Inspired by Nature. HarperCollins.
  • European Commission. (2020). Digital Education Action Plan 2021-2027. Retrieved from European Commission
  • FIRST. (2022). Inspiring the Next Generation of Innovators. Retrieved from FIRST Inspires
  • Floridi, L., et al. (2018). AI4People—An Ethical Framework for a Good AI Society. Minds and Machines, 28(4), 689–707.
  • Gartner. (2022). Top Strategic Technology Trends. Retrieved from Gartner
  • International Federation of Robotics. (2021). World Robotics Report 2021. Retrieved from IFR
  • Jackson, D. (2015). Employability Skill Development in Work-Integrated Learning: Barriers and Best Practice. Studies in Higher Education, 40(2), 350-367.
  • Kshetri, N. (2018). Blockchain’s Roles in Meeting Key Supply Chain Management Objectives. International Journal of Information Management, 39, 80-89.
  • McKinsey & Company. (2021). Predictive Maintenance and the Smart Factory. Retrieved from McKinsey
  • Microsoft & LinkedIn. (2022). 2022 Workplace Learning Report: The Transformation of L&D. Retrieved from LinkedIn Learning
  • OECD. (2022). AI in Work and Skills: What is the Evidence? OECD Publishing. Retrieved from OECD
  • Resnick, M., et al. (2009). Scratch: Programming for All. Communications of the ACM, 52(11), 60-67.
  • Schwab, K. (2017). The Fourth Industrial Revolution. Crown Business.
  • Siemens AG. (2021). AI in Wind Energy. Retrieved from Siemens
  • UNESCO. (2022). Global Education Monitoring Report. Retrieved from UNESCO
  • Wing, J. M. (2006). Computational Thinking. Communications of the ACM, 49(3), 33-35.
  • World Economic Forum. (2023). Future of Jobs Report 2023. Retrieved from World Economic Forum

Global IT Outage Underscores Need for Proprietary Systems Based on Open Source Code

A major global IT outage has caused significant disruptions across multiple sectors, highlighting the vulnerabilities inherent in relying on widespread, uniform IT systems. The outage, triggered by a faulty update from a third-party security vendor that crashed Microsoft Windows systems, has affected air travel, healthcare, and numerous other industries, prompting a reevaluation of IT strategies worldwide.

The aviation industry was one of the hardest hit. Heathrow and Gatwick airports reported delays and had to revert to manual check-in processes due to the outage. Passengers were advised to check with their airlines and arrive early to mitigate the delays. A Gatwick spokesperson noted, “We are affected by the global Microsoft issues, so passengers may experience some delays while checking in and passing through security” (The Mirror). Similarly, airlines like Ryanair warned of potential network-wide disruptions, urging passengers to stay updated via their apps (The Mirror). Edinburgh Airport also reported longer wait times, attributing the delays to the IT system failures impacting several other businesses (The Mirror).

The healthcare sector did not escape unscathed. Numerous GP surgeries across the UK, dependent on the NHS-commissioned EMIS system, faced severe operational challenges. These practices found themselves unable to access patient records or book appointments, leading to widespread disruptions in patient care. “Our clinical system has not been working since 7am this morning,” stated the Church Lane Surgery in Brighouse (The Mirror).

The outage also impacted major retail and food services. McDonald’s experienced significant disruptions in their point-of-sale systems globally, causing some stores to close temporarily. Employees were forced to take orders manually and accept cash payments only. The company clarified that the issue was related to a third-party provider’s configuration change, rather than a cybersecurity breach (BleepingComputer). Similarly, Sky News reported on disruptions affecting their broadcasting capabilities, illustrating the broad impact of the outage across various media outlets (The Mirror).

Adding to the mosaic of disruptions, CNN reported that businesses worldwide were grappling with challenges caused by the IT failure, affecting not only travel and healthcare but also financial services and retail, where operations were heavily dependent on Microsoft’s systems (BleepingComputer).

This incident sheds light on the risks associated with dependency on single-provider IT solutions. The widespread use of Microsoft’s systems, for instance, meant that a single faulty update propagating through that ecosystem had a ripple effect across various sectors worldwide. To mitigate such risks, it is imperative that organisations consider developing proprietary IT systems based on open source code.

Open source software offers several advantages, including transparency, flexibility, and enhanced security. Organisations can tailor these systems to their specific needs, reducing reliance on external providers and increasing resilience against global outages. This approach not only bolsters operational stability but also fosters innovation and a competitive edge in the market.

The recent global IT outage serves as a critical reminder for organisations to reassess their IT strategies. By investing in proprietary systems built on open source code, companies can achieve greater control over their IT infrastructure, ensuring continuity and security in an increasingly digital and interconnected world.


AI Reprogramming Humans: The Cybernetic Feedback Effect

Artificial Intelligence (AI) is rapidly advancing, integrating itself into various aspects of human life, enhancing efficiency, and even altering the way we interact with the world. However, beyond these obvious impacts, there lies a more profound and potentially unsettling question: Is AI reprogramming our minds? This essay argues that AI is not only augmenting human capabilities but also fundamentally changing our cognitive processes. Drawing on cybernetics and Actor-Network Theory (ANT), it explores how AI affects human cognition and behavior, and posits that just as we can introduce viruses into computers, it may not be so far-fetched to say that AI may be able to introduce ‘viruses’ into human minds.

A caveat to the AI-destroys-humanity construct: So long as it is called ‘artificial’ intelligence, computers can only imitate, but never become, minds in themselves. This is not to say that it is impossible for humans to create the conditions for intelligence to emerge in some form; however, AI agency is, and has always been, an oxymoron. Therefore, limits on AI, per se, are really a governing framework of permissions for the algorithm to run freely. After all, an algorithm, even if able to change itself in real time, acts within its governing parameters / limitations. If a system has only been trained in English, it cannot spontaneously write in Chinese. The artificiality of AI is less discussed because the focus falls on the intelligence side, as ChatGPT and its contemporaries continue to evolve to process greater complexity in medium and subject matter. Thus, the artificiality is the weak component when asking questions about how AI might police itself or engage in any form of significant autonomous behavior (Shukla et al., 2024). As it is ‘artificial’ intelligence, the AI, by definition, has zero emotional intelligence, as it has no experiences (in any way relatable to humans anyway) to draw upon for feeling something. An AI will never tell you it has trauma from an inexperienced programmer and can relate to your same situation at work.

Nevertheless, the algorithms which imitate intelligence do so in a compelling manner, and combined with data analytic tools, AI has already succeeded in assisting humans in profound ways. Questions around attribution, ethics, and the digital analogue to the gray goo scenario all pervade the discourse. The use of AI is exploding due in part to its availability (ChatGPT 4.0 is available for free), as well as its ability to understand the highest level of code: human language. We are all programmers of our respective languages insofar as we construct our words to receive some sort of expected feedback (see Wittgenstein, 1953). Certain ‘algorithms’ of language, when repeated, produce the same results (most of the time). The study of human-to-human language and communication is linguistics; when computers are involved, it is the field of cybernetics, which encompasses linguistics.

Indeed, cybernetics, the study of communication and control in systems, offers a robust framework for understanding how AI reprograms human cognition. Norbert Wiener, the father of cybernetics, emphasized the importance of feedback loops in regulating systems (Wiener, 1948). In the context of AI, feedback loops are present in the continuous interaction between humans and AI systems, such as recommendation algorithms and personal assistants. These interactions shape our preferences, behaviors, and even thought patterns. Whilst in some cases, like learning a new programming language through these interactions, there is tangible benefit, it could be that the new way of thinking ironically robs us of our own intelligence.
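To make the cybernetic mechanism concrete, the sketch below shows the classic negative feedback loop Wiener described, in its most familiar form: a thermostat that measures a system, compares it with a goal, and feeds a correction back in. The numbers are invented for illustration; the point is that human-AI interaction forms an analogous loop, except that the ‘setpoint’ (the user’s preferences) is itself nudged by the system’s outputs.

```python
# Minimal negative feedback loop: measure, compare with the goal,
# feed a proportional correction back into the system.

setpoint = 21.0      # desired temperature
temperature = 15.0   # current state of the system
gain = 0.3           # strength of each corrective step

for step in range(10):
    error = setpoint - temperature   # measure the deviation from the goal
    temperature += gain * error      # feed the correction back in
    print(f"step {step}: temperature = {temperature:.2f}")
```

The loop converges on the setpoint. In the human-AI case, both the state and the setpoint are moving targets, which is precisely what makes the feedback effect on cognition hard to predict.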

There is academic support for this concern. The integration of AI into daily tasks leads to cognitive offloading, where humans rely on AI to perform functions that previously required human cognition. Studies have shown that the use of GPS for navigation, for instance, can diminish our spatial memory and navigational skills (Dahmani & Bohbot, 2020). As we increasingly depend on AI for decision-making, our cognitive abilities adapt to this new environment, potentially reducing our capacity for independent critical thinking.

The way we inquire about the universe affects the structure of our brain as it adapts to new knowledge paradigms. The integration of AI into research and learning processes introduces new ways of thinking and problem-solving. For instance, AI’s pattern recognition capabilities can lead to new insights and discoveries, but they also shape how researchers frame questions and approach problems. This cognitive restructuring is akin to the changes observed in the brains of individuals who engage extensively in activities that require specific cognitive skills, such as musicians or chess players (Wan & Schlaug, 2010).

The implications of this are not fully understood, and the evolution toward further human-AI integration may be inevitable, leading to what may eventually become a single intelligence in which the knowledge and analytical powers of AI are seamlessly grafted onto our own brains. Such a merger would combine the creativity and impetus of humans with the great data storage and processing capability of computers. Even if it seems inevitable, it may already be too late for humans to escape the influence of AI on their development. The question is whether the character of this transition will be better or worse for civilization.

Conclusion
AI is not merely a tool that augments human capabilities; it is a powerful force that is reprogramming human cognition. Through cybernetic feedback loops and the dynamic networks described by Actor-Network Theory, AI systems influence our behaviors, thoughts, and decision-making processes. The introduction of cognitive ‘viruses’ by AI poses significant risks to human autonomy and agency. As we continue to integrate AI into our lives, it is essential to understand and address these profound implications to ensure that AI serves to enhance, rather than diminish, our cognitive and ethical capacities. We should also consider the ways AI could harm us beyond the stereotypical conventional war depicted in films.

References
Cadwalladr, C., & Graham-Harrison, E. (2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian.
Cummings, M. L. (2017). Artificial Intelligence and the Future of Warfare. Chatham House.
Dahmani, L., & Bohbot, V. D. (2020). Neural correlates of cognitive mapping, pattern separation, and context encoding in humans. Hippocampus, 30(7), 738-754.
Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press.
Pariser, E. (2011). The Filter Bubble: What the Internet is Hiding from You. Penguin Press.
Shukla, A. K., Terziyan, V., & Tiihonen, T. (2024). AI as a user of AI: Towards responsible autonomy. Heliyon, 10(11), e31397.
Wan, C. Y., & Schlaug, G. (2010). Music Making as a Tool for Promoting Brain Plasticity across the Life Span. Neuroscientist, 16(5), 566-577.
Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.
Wittgenstein, L. (1953). Philosophical Investigations. Blackwell Publishing.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.

Project Polaris: vision and capability in organisational change

Projects without vision or vision without projects are equally futile endeavours. Moreover, projects and vision are managed in very different ways. Projects follow rigorous methods, whether iterative, sequential, or Agile. Vision, on the other hand, happens spontaneously and can occur after a period of unhindered reflection, whereas projects can tolerate only limited time delays before they cost more and / or risk becoming redundant. However, vision can heuristically solve problems by approaching them from a new horizon with different rules. Projects can change people and systems to adapt and create new horizons in this way; therefore, project management methodology (PMM) should govern the control of any system change, although without vision, measuring change is a pseudo-science at best and an arbitrary corruption at worst (though this is no better or worse than simply having an idea and not acting on it). In short, vision and project management, combined, are powerful tools in strategic organisational change (Shenhar & Dvir, 2007; Yazici, 2020).

However, neither vision nor capability is an absolute; organisations will have these qualities to various degrees. Understanding the vision / capability matrix for your organisation can help determine strategies for enhancing each quality individually and for creating more synergy between the two. In short, not all projects can have maximum vision and capability, but at least one needs to be strong and the other sufficient for a project to be likely to succeed.

Fig. 1. The Vision Capability Matrix.

Source: Own work.

There are four areas represented above; identifying which cell your organisation falls into determines your overall PMM strategy. 

Low vision / low capability

This situation can happen when first starting out as an entrepreneur or in a small business. Due to the immaturity of the business, there is little policy or experience to guide decision making. In this situation, developing vision can help clarify the capabilities required for growing the business and building robust structures. Regardless of the talent of the manager, there is only so far projects can go: with little capability there is little to invest, and lack of investment (or of the right kind of investment) is cited by a number of studies as the number one reason businesses fail. As a programme manager, recognising projects that lack clear goals or an understanding of the desired outcome, combined with limited skills, resources, or organisational support, means intervention measures can be formulated. Trial and error without experience, generic operational enhancement projects, poorly funded community initiatives, and unfocussed research are all at risk of failing without strengthened vision and / or capability.

Low vision / high capability

This situation can happen when there are highly skilled staff and resources, but poor strategic oversight of how these resources are used. The disconnect between the high level of capability and the low level of vision can lead to projects that, while successful in their execution, do not effectively contribute to the strategic objectives of the organisation or meet the actual needs of the market or community they serve. However, high capability with low vision is not always a sign of weakness. It may be that a project team is charged with making a process more efficient or lean, which provides its own tactical rewards (a penny saved is a penny earned). Notwithstanding the benefits of efficiency for its own sake, increased vision can help direct which processes should be prioritised for lean management, as well as direct savings toward other projects that are growing the organisation. A programme manager would need to identify an optimum vision for these types of projects, accepting that some projects, by their nature, will naturally have greater capability than vision. Over-engineered projects, optimisation of legacy or (soon to be) deprecated systems or processes, flashy IT projects with no clear link to strategic goals, and undemanded luxury features all result from too little vision guiding the work, despite the work itself being successful.

High vision / low capability

This scenario often arises in organisations with ambitious goals and a clear sense of direction, but without the requisite skills, resources, or processes to realise those aspirations. Visionaries and leaders might see where they want the organisation to go, yet find themselves constrained by current organisational capabilities. This mismatch can lead to frustration and underachievement as the grand visions cannot be adequately supported by the existing infrastructure or talent. The key to navigating this dichotomy lies in strategic capacity building and focused development efforts aimed at elevating the organisation’s capabilities to match its vision. This might involve targeted training, strategic hires, or forming partnerships that can fill capability gaps. For a programme manager, identifying such situations early on allows for the proactive management of expectations and the implementation of strategies designed to gradually build the capacity needed to fulfill the organisation’s vision. Projects that are ambitious yet lack a solid foundation, such as launching a new technology platform without the IT support to maintain it, or expanding into new markets without understanding the regulatory requirements, are examples where high vision but low capability can lead to challenges.

High vision / high capability

Organisations that find themselves in this enviable quadrant are well-positioned to make significant strides towards their strategic objectives. Here, a clear and compelling vision is matched by the organisational capability to execute on that vision. This alignment allows for the seamless translation of strategic goals into actionable projects and initiatives, driven by teams that have both the skill and the resources to deliver high-quality outcomes. In such environments, the role of the programme manager shifts towards ensuring that the organisation’s strategic vision remains dynamic and responsive to changes in the external environment, and that capabilities are continuously developed to keep pace with or exceed that vision. Success in this quadrant requires maintaining a balance between aspiration and execution, ensuring that vision and capability grow in tandem. Projects in this category are characterised by innovation, market leadership, and the ability to respond agilely to new opportunities. Examples include launching cutting-edge products that set industry standards, entering and dominating new markets through strategic acquisitions, and implementing organisational changes that significantly enhance productivity and employee engagement.
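To make the matrix concrete, the sketch below maps illustrative vision and capability scores to the quadrant strategies described above. It is a minimal sketch only: the 0-10 scale and the cut-off of 5 are invented for the example, and the strategy summaries paraphrase the preceding sections.

```python
# Map illustrative vision / capability scores to a quadrant strategy.

def pmm_strategy(vision: float, capability: float, cutoff: float = 5.0) -> str:
    high_v, high_c = vision >= cutoff, capability >= cutoff
    if high_v and high_c:
        return "Keep vision dynamic; grow capability in tandem with it."
    if high_v:
        return "Build capacity: targeted training, strategic hires, partnerships."
    if high_c:
        return "Clarify vision: direct strong execution at strategic priorities."
    return "Develop vision first to reveal which capabilities to invest in."

print(pmm_strategy(vision=8, capability=3))
# -> Build capacity: targeted training, strategic hires, partnerships.
```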

Conclusion

This essay underscores the critical interdependence between vision and capability within organisational change initiatives. The vision/capability matrix not only aids in the diagnostic assessment of an organisation’s current state but also serves as a strategic tool to guide future direction. This synthesis of vision with capability fosters a resilient and dynamic approach to managing change, ensuring that organisations are not only prepared to navigate the complexities of today’s business landscape but are also poised for future success.

Recommendations

  • Conduct Regular Vision-Capability Assessments: Organisations should periodically assess their position within the vision-capability matrix to ensure alignment with strategic objectives. This can help in identifying areas requiring attention, be it enhancing vision through clearer strategic planning or augmenting capability through skills development or resource allocation.
  • Invest in Strategic Capacity Building: Particularly for those in the high vision/low capability quadrant, it’s crucial to focus on strategic capacity building. This could involve targeted training programs, hiring strategies to fill skill gaps, or partnerships to enhance organisational capabilities.
  • Enhance Vision Clarity: For organisations identified with high capability but low vision, investing in strategic planning processes to clarify and communicate a compelling vision is essential. This ensures that the organisation’s resources and talents are aligned with a purposeful direction.
  • Foster a Culture of Agility and Learning: Encouraging a culture that values agility, continuous learning, and adaptability can help organisations navigate from any quadrant towards high vision/high capability. This cultural shift ensures that organisations can rapidly respond to changes and seize new opportunities.
  • Implement Integrated Project Management Practices: Adopt flexible and integrated project management practices that can be tailored to the organisation’s current vision and capability level. This ensures that projects are not only executed efficiently but are also aligned with the strategic goals of the organisation.

References:

Yazici, H. J. (2020). An exploratory analysis of the project management and corporate sustainability capabilities for organizational success. International Journal of Managing Projects in Business, 13(4), 793-817.

Shenhar, A. J., & Dvir, D. (2007). Project management research—The challenge and opportunity. Project Management Journal, 38(2), 93-99.

Demerouti, E., & Bakker, A. B. (2023). Job demands-resources theory in times of crises: New propositions. Organizational Psychology Review, 13(3), 209-236.

Heifetz, R. A., Grashow, A., & Linsky, M. (2009). The practice of adaptive leadership: Tools and tactics for changing your organization and the world. Harvard Business Press.

Kaplan, R. S., & Norton, D. P. (2001). The strategy-focused organization: How balanced scorecard companies thrive in the new business environment. Harvard Business Press.

Beusch, P., Frisk, J. E., Rosén, M., & Dilla, W. (2022). Management control for sustainability: Towards integrated systems. Management Accounting Research, 54, 100777.

The utility of respect: revisiting lessons taught in How to win friends and influence people

I recently looked again at the best-selling book, How to Win Friends and Influence People by Dale Carnegie. The lessons in this book are keeping up well with the times, despite the book first being published nearly 90 years ago (1936). Despite its age, its themes still resonate loudly in today’s workplace and social milieus. I am writing this article partly as a reflection of my own journey in (professional and personal) socialising, but also to revisit and reinforce the ideas of this book, in short, to learn to respect other people better. The question of the utility of respect is admittedly tongue-in-cheek, as sincerity is the caveat behind all of the advice in Carnegie’s classic work.

Below is a summary of some of the key ideas. It is not intended as a comprehensive summary of the book but rather a short guide to core ideas that can help anyone wishing to socialise better. The context of the discussion leans toward those in a career in academia; however, the principles are highly portable to other industries, and even in one’s personal life.

Don’t criticise

This advice requires careful examination, as giving and accepting criticism is paramount to successful growth of the individual and institution. The advice is not to become a conformist, per se, but rather to package criticism into constructive parcels which contain objective analysis and scholarly-backed recommendations. ‘Constructive criticism’ is an art: it requires the ability to self-reflect, as well as to understand the challenges inherent in any individual’s or institution’s growth process.

Criticism is something we can avoid easily by saying nothing, doing nothing, and being nothing.

Elbert Hubbard (commonly misattributed to Aristotle)

This rather ironic statement indicates that criticism is a part of life. And yet, there is a way to do so (criticise) without causing grievous harm to the recipient. In fact, as academics, high-quality criticism is valuable, and is the basis for peer-reviewed publications in scholarly journals.

The flipside: how should one handle criticism? When it is constructive, accept it with gratitude; when it is personal or deconstructive, take it with a grain of salt. In short, criticise to construct and leave out any personal comments in doing so; do not seek to deconstruct or delegitimise the target, but rather provide a way forward for the recipient to understand the intent of the interlocutor.

Give honest and sincere appreciation.

One of the most satisfying things in life is showing sincere appreciation for someone who has worked hard or has done something well. Acknowledging acts which go ‘above and beyond’ is a duty of every manager toward their team, but even an employee doing their job to the bare minimum requires positive acknowledgement. Setting the bare minimum at a level that, when achieved, constitutes high performance is important for maintaining morale. Below is a scene from the film Office Space which illustrates the problem of always expecting people to work to a higher level than written in their contract.


What are your thoughts on this scene? Write your comments below!

Ways to Make People Like You

Become genuinely interested in other people

This point comes down to sincerity. No one likes to be merely a means to an end. In business, quid pro quo (something for something) is important; in friendships, it is equally important (see Phelan, 2023). However, there is a difference. In friendships, the time between exchanges may be longer than in business. In friendships, giving does not expect a return on investment, per se, but once reciprocation occurs, the friendship deepens with trust and further commitment. Without reciprocation, a feeling of ‘being used’ may develop. No matter how mutual value is measured–tangibly or intangibly–parties must feel a mutual benefit to deepen the relationship. This does not invalidate acts of altruism, for example, taking care of a sick relative without expectation; one may argue that the recipient can, in their own way, show gratitude and thanks, which itself can be a reward and deepen trust and personal feelings.

Smile

Be careful about this one. Excessive smiling can be toxic. However, being aware of one’s facial expressions is essential for gaining ‘immediate trust’. It is not just about smiling, but providing facial expressions which indicate approachability. Smiles take on many different characteristics.

Fig. 1. The many varieties of smiles

A group of people with various types of smiles according to AI image generation.

Remembering a person’s name

This one is easy to forget. If you’re like me, sometimes you can forget a name only minutes after being introduced. A commitment to memorising names means finding a way to associate the name with the person’s characteristics. Association is a strong way to memorise as it combines your own intersubjective experience with some visible fact about the person; for example, someone named Malcolm might have very well combed hair, i.e. mal+’comb’. This is highly subjective and depends on your own associations. However, the important part is making an extra effort to remember names.

Make the other person feel important – and do it sincerely.

You might have read that getting someone to talk about themselves is a key way to befriend them. If you take over conversations and do not allow others to get a word in edgewise, then you are making yourself the most important person; however, if you sincerely take an interest in what others have to say, then not only will you build rapport, but you may also learn something.

The only way to get the best of an argument is to avoid it

Always avoid arguments whereby the interlocutors are fighting over their opinions. Descartes is often quoted as saying ‘I think, therefore I am.’ But this only tells part of the story. Descartes was led to this conclusion from the ontology of doubt. The purpose of this doubt is not to fall into a sceptical abyss, but rather to strip away all beliefs that could be even slightly uncertain, to find a foundation for true knowledge. In other words, it is doubt that brings about conscious thought. Doubt is not just about what others say, think and do, but also what one says, thinks, and does. Doubt is the core principle behind academic inquiry, and can help avoid arguments which lack self-scepticism.

In short, show respect for the other person’s opinions. Never say, ‘You’re wrong.’ If you are wrong, admit it quickly and emphatically.

Appeal to the nobler motives

One’s high standard of ethics and a strong centre built around informed principles assure that others will perceive one as reliable and trustworthy. Living by codes such as self-discipline and altruism assures that others are attracted to one who is seen as consistent and of high-quality character. Nobler motives will always trump heuristic arguments intended to solve some short-term problem by sacrificing, rather than upholding, the greater good.

Dramatise your ideas

Passion is the number one factor behind the decision for someone to work with you as opposed to someone else when all other factors are roughly equal. Given a qualified but unpassionate candidate and a qualified, passionate one, the latter will inevitably be chosen for friendship, promotion, or any other benefit when someone is choosing between you and others. By ‘dramatise’ it is meant that you can successfully portray vision to others through a passionate ‘acting out’ of ideas.

Talk about your own mistakes before criticising the other person

Further to the criticism section above, why not make clear your own shortcomings before speaking to someone else about theirs. It may be your job to constructively criticise your team; however, showing humility goes a long way to build trust and honour with those over whom you have authority.

Let the other person save face

Leaving no room for a person to save face means they will either have to accept they are ‘all wrong’ or ‘all right’. This binary thinking happens in a blame culture. Important at work and even more important in friendships, one should always allow a person to save face by allowing for external variables which may have led a person to make the wrong choice. Consider the manager / employee dialogue below:

Scenario 1: Not Allowing the Employee to Save Face

Employer: “John, I’ve noticed that the recent project you led did not meet our deadline, and this has put us in a difficult position with our client. Can you explain why this happened?”

Employee (John): “Yes, I understand the situation. We encountered several unexpected challenges, including…”

Employer (interrupting): “That’s no excuse. We always face challenges. It was your responsibility to foresee these issues and deal with them. Because of this failure, the company has lost a significant amount of credibility and potentially a lot of revenue.”

John: “I understand, but I tried to…”

Employer: “There were clear signs you were falling behind. You’ve let the team and the entire company down. We’ll need to consider serious repercussions for this.”

John (feeling defensive and humiliated): “I did everything I could. The problems were beyond my control.”

Employer: “That’s what everyone says when they fail. We’ll have a meeting with HR about this on Monday. That’s all.”

Outcome: John feels publicly shamed and demotivated. He becomes less engaged at work and starts looking for new job opportunities. The team’s morale drops, seeing how failures are harshly penalized, leading to a culture of fear rather than one of learning and improvement.

Scenario 2: Allowing the Employee to Save Face

Employer: “John, I wanted to talk about the recent project. It didn’t go as planned, and I know this wasn’t the outcome any of us hoped for. Can you share your perspective on what happened?”

Employee (John): “Yes, I appreciate the opportunity to discuss it. We ran into several challenges, including…”

Employer: “I see. These projects can certainly throw us curveballs. What do you think we could have done differently to manage these challenges better?”

John: “In hindsight, we might have benefited from more frequent check-ins with the team and perhaps sought additional resources sooner.”

Employer: “That’s a good insight. I believe in your abilities, John, and I know you did your best given the circumstances. Let’s consider this a learning experience. How can we apply what we’ve learned to improve our processes and prevent similar issues in the future?”

John: “I think a debriefing session with the team to collect feedback and establish new protocols could be beneficial. I also want to explore additional training for myself and the team to better prepare for unexpected challenges.”

Employer: “That sounds like a constructive approach. Let’s work on those action items together. We all make mistakes, but it’s how we learn from them and move forward that counts. I’m here to support you and the team in making those improvements.”

Outcome: John feels supported and valued despite the project’s outcome. He is motivated to identify solutions and improvements, leading to a stronger and more resilient team. The company culture becomes one of continuous learning and support, encouraging innovation and risk-taking without fear of blame.

In short, environments that support open communication, admit mistakes, and encourage learning from failures—without placing blame—can significantly enhance team performance and innovation, and allow people to save face, which ultimately encourages high morale and personal and professional growth for all involved (Edmondson, 1999). Whilst it may be tempting to make exceptions to this rule when someone commits the most extreme acts, saving face is a crucial component of rehabilitation (Braithwaite and Braithwaite, 2001).

In sum, in revisiting Dale Carnegie’s seminal work, ‘How to Win Friends and Influence People,’ nearly 90 years after its initial publication in 1936, the lessons are as relevant today as they were then. The book underscores the central importance of respect and sincerity in building relationships, a premise as valid in academia as in any field. Criticism, a cornerstone of academic growth, when constructively rendered, fosters improvement while preserving dignity. Hubbard’s insight on criticism underscores its inevitability and its potential for constructive, rather than destructive, outcomes. The art of giving honest appreciation, crucial in any leadership or collegial role, fosters a positive environment conducive to exceeding expectations.

Moreover, Carnegie’s advice transcends professional boundaries, advocating for genuine interest in others, the power of a smile, and the importance of remembering names, all of which strengthen bonds. Avoiding arguments by respecting differing opinions and appealing to nobler motives cultivates an atmosphere of mutual respect and understanding. The dramatisation of ideas with passion and acknowledging one’s own mistakes before offering criticism encourages a culture of humility and learning.

The dichotomy between handling criticism without allowing an individual to save face versus a more empathetic approach highlights the transformative power of constructive feedback. The first scenario depicts a demoralizing encounter that stifles growth and motivation, while the second demonstrates how allowing an employee to save face can lead to productive outcomes and strengthened relationships. This illustrates the significant impact of communication style on individual and team development, advocating for a culture that values learning from failures as much as celebrating successes. Through Carnegie’s timeless wisdom, we are reminded of the fundamental human need for respect, understanding, and sincere interaction, principles that remain as relevant today as they were nearly a century ago.

References:

Phelan, M. (2023). Rethinking friendship. Inquiry, 66(5), 757-772.

Edmondson, A. (1999). Psychological Safety and Learning Behavior in Work Teams. Administrative Science Quarterly, 44(2), 350-383.

Carnegie, D. (2023). How to win friends and influence people. Good Press.

Braithwaite, J., & Braithwaite, V. (2001). “The Importance of ‘Saving Face’ in Restorative Justice Conferences Involving Young Offenders.” In A. Morris & G. Maxwell (Eds.), Restorative Justice for Juveniles: Conferencing, Mediation and Circles. Hart Publishing.