Blog

Lean Superposition: How Well-Scoped, Agile Systems Deliver Sustainable Performance in Gaming and Programme Management [Draft]

Abstract

Performance positioning in computing is commonly defined by extreme benchmarks and elite competitive use cases. This framing obscures where performance delivers meaningful value for the majority of users, while encouraging over-engineering, excess resourcing, and premature replacement. In both computing and organisational contexts, such approaches prioritise acquisition over construction and optimisation over governance. This article presents a longitudinal case study of a repurposed personal computer built around an RTX 3060 Ti–class GPU, using the practice of PC building as an explicit analogue for project and programme management.

Drawing on empirical benchmarking, extended gameplay testing, and a virtual comparison with the PlayStation 5, the study demonstrates that the system delivers performance equivalent to—or exceeding—console-class experiences, including high-fidelity 1080p gameplay and viable 4K performance under optimised settings. Crucially, these outcomes are achieved not through maximal specification, but through compatibility, patching, and constraint-aware configuration. Decisions such as operating the GPU in a secondary PCIe slot with negligible performance loss, limiting active memory to 32 GB despite higher installed capacity, and avoiding energy-intensive cooling architectures illustrate how value is preserved through governance rather than escalation.

The analysis is interpreted through project and programme management theory, arguing that building a project—like building a PC—is fundamentally harder than buying an integrated solution off the shelf. While pre-packaged systems reduce short-term complexity, they often embed inflexibility, cost escalation, and sustainability risk. By contrast, systems assembled from compatible components allow for adaptation, repair, and selective resourcing when conditions change. The study further identifies common challenges in PC building—such as component incompatibility, thermal instability, unexplained bottlenecks, and partial failures—as directly analogous to recurrent issues in project delivery that are often mischaracterised as technical faults rather than governance failures.

The article concludes that lean, purpose-bound systems achieve supremacy by aligning capability to required outcomes, minimising surplus capacity, and enabling patching rather than replacement. In an environment of increasing hardware scarcity, rising energy costs, and sustainability constraint, this logic applies equally to computing systems and organisational programmes. Systems that endure are not those built to extremes, but those designed to be understood, repaired, and governed over time.

1. Introduction

Performance in computing has long been framed through abundance. In gaming, as in many technical domains, value is routinely inferred from proximity to the highest benchmark scores, the most powerful hardware tiers, or elite competitive use cases. This framing assumes a continual supply of increasingly capable components and has normalised rapid cycles of replacement. Systems are judged obsolete not because they fail to deliver meaningful outcomes, but because they no longer occupy the leading edge of theoretical capability. The sustainability implications of this model—rising electronic waste, escalating energy demand, and dependence on fragile global supply chains—are now increasingly visible (e.g. Baldé et al., 2017; Koomey et al., 2014).

What is becoming clear, however, is that this assumed abundance cannot be relied upon indefinitely. Recurrent shortages of graphics processors, memory, and semiconductors have demonstrated how quickly availability can collapse and prices can rise. These disruptions are not isolated events but indicators of a broader structural shift, in which advanced hardware becomes progressively more expensive, less replaceable, and increasingly treated as a scarce asset rather than a disposable commodity. In such a context, devices once considered trivial—such as early-generation consumer electronics—may acquire renewed residual value as functional artefacts rather than obsolete relics. Scarcity, in other words, reframes obsolescence as a governance problem rather than a purely technical one.

Within this emerging landscape, the traditional logic of performance escalation becomes increasingly untenable. For most users, performance is not defined by global tournament standards or marginal frame-rate advantages at extreme resolutions. It is defined by stability, visual fidelity, responsiveness, and depth of experience. Casual and serious non-professional gamers—who together constitute the majority of the gaming population—operate within bounded constraints: fixed displays, limited time, and experience-driven rather than competitive objectives. Escalating hardware capacity beyond what these constraints can exploit produces diminishing experiential returns while materially increasing economic and environmental cost. Supremacy, therefore, does not arise from excess capacity, but from alignment between capability, demand, and long-term resource availability.

An often-overlooked dimension of this shift is purpose-bound design. Gaming-class hardware—such as GPUs in the RTX 3060 Ti category—is explicitly designed for interactive, consumer-facing workloads: gaming, creative production, and general computation. It is not optimised for large-scale extractive uses such as cryptomining or industrial AI, even though such uses may be technically possible. This distinction is not merely technical but ethical. When hardware designed for play and creativity is diverted into extractive or speculative activity, scarcity accelerates, prices rise, and environmental cost increases without corresponding social benefit. In conditions of constrained production, governance over how hardware is used becomes as important as efficiency in how it is built.

This article examines a personal computer built around an RTX 3060 Ti–class GPU as a longitudinal case study in lean, sustainable supremacy under conditions of emerging scarcity. Originally constructed in 2019 for energy-efficient testing using passive-first cooling principles, the system was operated almost continuously for several years, partially degraded, and later repurposed for modern gaming workloads. Rather than being replaced as components aged or constraints emerged, the system was incrementally maintained, reconfigured, and governed. Decisions such as operating the GPU in a secondary PCIe slot with negligible performance loss, limiting active memory to 32 GB despite higher installed capacity, and avoiding energy-intensive cooling architectures illustrate how value can be preserved through patching and compatibility rather than escalation. These practices mirror well-established principles in Lean and Agile project management, where delivery is sustained through constraint-aware adaptation rather than wholesale redesign (Womack and Jones, 1996; PMI, 2021).

Performance is evaluated not against elite competitive gaming standards, but against dominant consumer baselines, particularly through a virtual comparison with the PlayStation 5—a well-benchmarked reference platform representing contemporary expectations for gaming quality and efficiency. The analysis demonstrates that, under realistic constraints, the RTX 3060 Ti system delivers performance equivalent to or exceeding console-class experiences, including high-fidelity 1080p gameplay and viable 4K play under optimised settings. For the majority of users, higher-tier GPUs would not materially improve outcomes, yet would impose significantly higher financial, energy, and environmental costs.

The article argues that this pattern reflects a broader principle extending beyond gaming. In programme and project management, sustainability in a resource-constrained future depends less on maximising capability and more on aligning scope, purpose, and governance to preserve value over time. Building a project—like building a PC—is fundamentally harder than purchasing an integrated solution off the shelf. While pre-packaged systems reduce short-term complexity, they often embed inflexibility, cost escalation, and long-term sustainability risk. By contrast, systems assembled from compatible components enable adaptation, repair, and selective resourcing when conditions change. In a future where hardware scarcity becomes the norm rather than the exception, supremacy will belong not to the most powerful systems, but to those that are understood, governable, and able to endure.

Table 1 (Introduction). Framing the Argument of the Article

| Dominant Assumption | Observed Reality | Reframing Proposed in This Article |
| --- | --- | --- |
| Performance equals newest hardware | Performance equals meeting real experiential needs | Supremacy arises from alignment, not novelty |
| Hardware abundance is assumed | Hardware scarcity is recurring and structural | Systems must be designed to endure |
| Replacement is the default response | Patching and reconfiguration preserve value | Governance > escalation |
| Peak benchmarks define success | Stability, fidelity, and responsiveness define value | Measure outcomes, not extremes |
| Over-specification ensures longevity | Over-specification creates waste | MVP capability sustains systems |
| Buying integrated systems reduces risk | Integrated systems embed lock-in | Building enables adaptation |
| Energy cost is secondary | Energy is a limiting resource | Resource-to-value ratio matters |
| Obsolescence is technical | Obsolescence is managerial | Endurance is a design choice |

2. Prior Work, Related Literature, and the Gap This Study Addresses

The arguments developed in this article build on earlier conceptual work by the author, which proposed that passively cooled, energy-efficient computing systems could challenge replacement-driven models of technological progress (Chapman, 2019). That work framed sustainability primarily through design intent: reducing energy consumption, minimising thermal stress, and extending system lifespan by avoiding architectures optimised solely for peak throughput. While this contribution established a normative rationale for endurance-oriented design, it remained necessarily speculative. It did not examine how such systems behave once exposed to prolonged operation, partial degradation, and shifting functional demands. The present article takes that conceptual position as a starting assumption and asks what follows when sustainability claims are tested through sustained practice.

Academic engagement with PC building as a practice is fragmented and largely indirect. In engineering and computing education, hardware assembly frequently appears within project-based learning (PBL) and CDIO-inspired curricula, where students construct computing systems to integrate requirements analysis, component selection, implementation, testing, and operation (Crawley et al., 2014; Prince and Felder, 2006). This literature establishes that PC assembly can function as a bounded lifecycle activity, but it overwhelmingly focuses on short-duration learning outcomes. The assembled system is treated as complete once it functions, with little attention to long-term behaviour, maintenance, degradation, or repurposing.

A related strand examines simulation- and VR-supported PC assembly, framing hardware construction as a procedural task involving sequencing, constraint satisfaction, and error avoidance (Makransky et al., 2019; Radianti et al., 2020). While this work acknowledges complexity and failure modes during assembly, it still positions the completed system as an endpoint rather than as an evolving artefact. Issues central to sustainability—such as incremental repair, partial failure, and performance renegotiation over time—remain outside the analytical frame.

Beyond education, maker studies and DIY computing literature explore modularity, repairability, and user agency, often in explicit opposition to planned obsolescence (Buechley and Perner-Wilson, 2012; Kuznetsov and Paulos, 2010). This literature begins to engage directly with values relevant to sustainability, particularly reuse and adaptation. However, it rarely evaluates whether user-maintained systems can continue to deliver contemporary, demanding workloads, nor does it benchmark such systems against dominant consumer platforms. Sustainability is framed primarily as cultural resistance or ethical stance, rather than as a performance-constrained engineering outcome.

In parallel, sustainability scholarship has extensively examined electronic waste (WEEE), circular economy models, and lifecycle responsibility, but overwhelmingly from an end-of-life perspective (Baldé et al., 2017; Cao et al., 2024). While this work provides essential insight into disposal, recycling, and stakeholder responsibility, it offers limited guidance on how long-lived systems perform during extended use, or how in-use governance decisions affect environmental outcomes. The operational phase of computing systems—the longest and often most impactful stage of the lifecycle—remains comparatively under-examined.

Across these strands, a consistent gap emerges. PC building is treated as pedagogy, skill acquisition, or cultural practice, but rarely as longitudinal system governance. The literature lacks sustained examination of what happens when computing systems are built, used continuously, encounter non-catastrophic failures, and must be patched or reconfigured rather than replaced. In effect, PC building has not been theorised as a form of project or programme delivery over time.

This omission is notable when viewed through established project and programme management theory. Projects assembled from modular components routinely encounter emergent constraints, late-stage incompatibilities, and partial failures that cannot be resolved through escalation alone (PMI, 2021; Flyvbjerg, 2014). In both computing and organisational contexts, the temptation is to replace rather than adapt: to procure a new system, platform, or solution off the shelf. While this may reduce short-term complexity, it often increases cost, lock-in, and long-term sustainability risk.

The present article addresses this gap through a longitudinal, practice-based evaluation of a single computing artefact across design, operation, degradation, and repurposing. Rather than treating the PC as a one-off build or idealised exemplar, it is analysed as a governed system whose performance, configuration, and value are renegotiated over several years of real use. This approach allows sustainability, performance, and programme management principles to be examined not as aspirational ideals, but as emergent properties of systems required to endure under constraint.

3. System Configuration as Governed, Lean Practice

This study examines a single computing artefact as a bounded socio-technical system whose sustainability and performance characteristics emerged through prolonged use, maintenance, and adaptation rather than through fixed optimisation at the point of assembly. The system was originally constructed in 2019 using consumer components purchased under ordinary market conditions, without anticipation of later supply-chain disruption, prolonged near-continuous operation, or repurposing for contemporary gaming and computational workloads. Its subsequent evolution therefore provides an opportunity to examine how value is preserved through governance, scope discipline, and lean configuration rather than continual escalation or replacement.

A defining feature of the system is its passive-first CPU cooling strategy, centred on a large surface-area heatsink designed to operate without reliance on high-RPM active cooling. Low-speed case fans are present, but their role is limited to maintaining ambient airflow rather than compensating for concentrated thermal output. By contrast, typical high-performance air-cooled systems rely on one or more CPU fans operating at 1,800–2,500 RPM, each drawing approximately 2–5 W under load, while all-in-one liquid cooling systems introduce additional electrical overhead through continuous pump operation, typically consuming 6–10 W regardless of thermal demand (ASHRAE, 2011; Intel, 2018).

In the present system, CPU-adjacent cooling power draw approaches zero, with only low-speed case fans contributing an estimated 1–2 W in total. While these figures are indicative rather than instrumented measurements, they are consistent with published comparisons of cooling architectures and operational energy profiles (ASHRAE, 2011; Koomey et al., 2014). Over sustained use, the implications are material. Assuming a conservative operating profile of 12 hours per day, a liquid-cooled system may consume an additional 30–50 kWh per year solely for pump and high-RPM fan operation. Across a six-year lifecycle, this equates to approximately 180–300 kWh of avoided energy consumption, exclusive of the embodied environmental cost associated with pumps, radiators, and replacement components.
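For transparency, the arithmetic behind these figures can be reproduced in a few lines of Python. This is a back-of-envelope sketch using the indicative, non-instrumented wattages discussed above; the overhead values are assumptions rather than measurements:

```python
# Back-of-envelope estimate of cooling-related energy avoided by the
# passive-first build. Wattages are indicative assumptions taken from
# the ranges discussed in the text, not instrumented measurements.

HOURS_PER_DAY = 12      # conservative operating profile
DAYS_PER_YEAR = 365
LIFECYCLE_YEARS = 6

# Assumed continuous overhead of an AIO pump plus high-RPM fans (W),
# low and high estimates, versus low-speed case fans only (W).
liquid_overhead_w = (8.5, 13.0)
passive_overhead_w = 1.5

def annual_kwh(watts: float) -> float:
    """Convert a continuous power draw in watts to kWh per year."""
    return watts * HOURS_PER_DAY * DAYS_PER_YEAR / 1000.0

for w in liquid_overhead_w:
    saved_per_year = annual_kwh(w - passive_overhead_w)
    print(f"{w:>5.1f} W liquid-cooling overhead -> "
          f"~{saved_per_year:.0f} kWh/yr avoided, "
          f"~{saved_per_year * LIFECYCLE_YEARS:.0f} kWh over "
          f"{LIFECYCLE_YEARS} years")
```

Under these assumptions the script reproduces the ranges quoted above: roughly 30–50 kWh avoided per year, or approximately 180–300 kWh over a six-year lifecycle.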

Beyond direct energy savings, passive-first cooling reduces mechanical wear and thermal cycling stress—both recognised contributors to component failure over time. High-RPM fans and liquid cooling pumps introduce additional points of mechanical failure and typically require replacement within three to five years of continuous operation. Large passive heatsinks, by contrast, contain no moving parts and exhibit effectively indefinite service life. The system’s sustained stability under near-continuous operation, supported by routine low-cost maintenance such as periodic thermal paste reapplication, reflects an agile operations logic in which small, preventative interventions preserve system health without disruptive overhaul (Rigby, Sutherland and Takeuchi, 2016).

Graphics processing is provided by an RTX 3060 Ti–class GPU, acquired during a period of severe market distortion in GPU availability and pricing. While this selection was incidental at the time of purchase, its implications are analytically significant. The GPU delivers high gaming and general-purpose compute performance at a typical board power of approximately 200 W—substantially lower than higher-tier alternatives that commonly draw 300–450 W for marginal performance gains under comparable gaming workloads. Empirical testing further demonstrated that, following failure of the primary PCIe x16 slot, operation via an auxiliary expansion slot preserved approximately 99% of expected performance at 4K resolution. This confirms that neither peak power draw nor theoretical bandwidth constituted binding constraints within the system’s defined scope.

This outcome illustrates a core lean and programme-management principle: capacity beyond the active constraint produces diminishing returns. Higher-tier GPUs would have increased energy consumption and embodied environmental cost without delivering commensurate experiential benefit under the system’s display, resolution, and usage constraints. In programme terms, this mirrors the misallocation of resources beyond a work package’s absorptive capacity, increasing cost and risk without improving delivery (PMI, 2021).
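A rough bandwidth comparison makes the point concrete. The sketch below assumes the auxiliary slot runs at PCIe 4.0 x4, as reported in Appendix C; the per-lane rates are the published PCIe figures, while the steady-state traffic estimate for a 4K title with assets resident in VRAM is a loose order-of-magnitude assumption, included only to show how far the degraded link sits from saturation:

```python
# Rough comparison of theoretical PCIe link bandwidth against the
# auxiliary-slot configuration used in this build. Per-lane rates are
# published PCIe figures; the traffic estimate is an assumption.

# Approximate usable bandwidth per lane in GB/s (after 128b/130b encoding).
PER_LANE_GBPS = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

def link_bandwidth_gbps(generation: str, lanes: int) -> float:
    """Theoretical one-way bandwidth of a PCIe link."""
    return PER_LANE_GBPS[generation] * lanes

primary = link_bandwidth_gbps("PCIe 4.0", 16)   # intended x16 slot
auxiliary = link_bandwidth_gbps("PCIe 4.0", 4)  # degraded configuration

# Assumed steady-state bus traffic for a 4K title once assets are
# resident in VRAM (draw submissions, buffer updates): ~2 GB/s.
assumed_traffic_gbps = 2.0

print(f"x16 link: {primary:.1f} GB/s | x4 link: {auxiliary:.1f} GB/s")
print(f"x4 headroom over assumed traffic: "
      f"{auxiliary / assumed_traffic_gbps:.1f}x")
```

Even at a quarter of the intended lane count, the link retains severalfold headroom over plausible in-game traffic, which is consistent with the observed ~99% performance retention.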

Memory configuration provides a further example of governed optimisation. Although the system was provisioned with 64 GB of RAM, empirical testing demonstrated that a 32 GB configuration with XMP enabled delivered lower latency and greater stability for gaming and real-time workloads. Additional memory capacity did not improve performance but reduced overclocking headroom and marginally increased power draw. Accordingly, the system operates with 32 GB of active memory, while the remaining capacity is retained as reserve rather than eliminated. This reflects lean portfolio logic: resources are neither discarded prematurely nor deployed inefficiently, but held in readiness for future scope change (Kerzner, 2019).
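The latency effect of enabling the rated XMP profile can be illustrated with the standard first-word latency formula. The CAS latencies below are typical values for the listed speeds, assumed for illustration rather than read from this system's modules:

```python
# First-word memory latency from data rate and CAS latency:
#   latency_ns = 2000 * CL / data_rate_MT_s
# The CL values are typical for the listed speeds (assumed, not measured).

def first_word_latency_ns(data_rate_mts: int, cas_latency: int) -> float:
    return 2000.0 * cas_latency / data_rate_mts

configs = {
    "JEDEC default (2133 MT/s, CL15)": (2133, 15),
    "XMP profile (3200 MT/s, CL16)": (3200, 16),
}

for name, (rate, cl) in configs.items():
    print(f"{name}: {first_word_latency_ns(rate, cl):.1f} ns")

# -> ~14.1 ns without XMP versus ~10.0 ns with XMP: enabling the rated
#    profile lowers real latency even though the headline CL is higher.
```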

Other subsystems were deliberately not optimised beyond sufficiency. Solid-state storage connected via older cabling provided adequate throughput for gaming and general use, and no evidence emerged that storage performance constrained outcomes. Visual output was similarly bounded by display hardware rather than computational capability. These constraints were acknowledged but not escalated, consistent with agile governance practices that prioritise intervention only where delivery is threatened rather than optimising all components indiscriminately (Goldratt, 1997).

Taken together, the system demonstrates how sustainability, performance, and resilience can emerge from governed restraint rather than maximalism. Energy savings achieved through passive cooling, avoidance of unnecessary component replacement, and continued operation via auxiliary architectural pathways collectively reduce both operational and embodied environmental cost over time. In programme management terms, the system exemplifies lean supremacy: value is sustained not by chasing peak theoretical capability, but by aligning resources, scope, and governance to deliver outcomes reliably under constraint.

This section establishes the system as a configured, enduring platform whose sustainability and performance characteristics arise from disciplined design and ongoing stewardship. The following section presents benchmark and gameplay performance data, evaluating how this lean configuration performs relative to contemporary gaming requirements and dominant consumer baselines.

Figure 1. Passive CPU cooler (without fans).

Image source: Public Domain.

Figure 2. Passive CPU cooler (doubled and with fans).

Image source: Public Domain.

4. Benchmarking and performance results

Performance evaluation was conducted to assess whether the configured system delivers contemporary gaming capability within its defined scope and constraints. Benchmarks were selected to establish fitness for purpose, stability, and comparative value against dominant consumer baselines, rather than to pursue global leaderboard positions. Results are summarised below before being interpreted through a Lean and programme management lens.

Table 2. System Configuration and Governed Constraints

| Component | Configuration | Notes on Scope and Constraint |
| --- | --- | --- |
| CPU Cooling | Passive-first heatsink with low-speed case fans | Near-zero active cooling power at CPU; reduced mechanical wear and failure risk compared with high-RPM or pump-based systems (ASHRAE, 2011; Intel, 2018) |
| GPU | NVIDIA RTX 3060 Ti | Gaming-optimised GPU with moderate board power (~200 W), designed for rasterisation and real-time graphics rather than compute-intensive mining or datacentre workloads (NVIDIA, 2021) |
| GPU Slot | Auxiliary PCIe slot | Primary x16 slot degraded; empirical testing indicates negligible performance loss at 4K where GPU compute, not bus bandwidth, is the limiting factor (Gamers Nexus, 2020; PCI-SIG, 2019) |
| RAM (Installed) | 64 GB DDR4 | Purchased for longevity and future scope flexibility |
| RAM (Active) | 32 GB DDR4 with XMP enabled | Lower latency and improved stability under gaming and real-time workloads; excess capacity beyond this threshold delivers diminishing returns (TechSpot, 2021) |
| Storage | SSDs with legacy cabling | Adequate throughput for gaming; not a binding performance constraint (Microsoft, 2020) |
| Display Context | TV / non-gaming monitor | Visual output bounded by display refresh and resolution rather than GPU capability |
| Operating Profile | Near-continuous operation | Longitudinal use over ~5–6 years, including sustained uptime and periodic maintenance |

Interpretation
This configuration establishes the system as governed rather than idealised. Performance outcomes must therefore be interpreted relative to real constraints, not theoretical maxima—an explicit parallel to scope discipline in programme management (PMI, 2021).


Table 3. Synthetic Benchmark Results (GPU and System Stability)

| Test | Result | Reference Range (RTX 3060 Ti) | Observations |
| --- | --- | --- | --- |
| Unigine Superposition (4K Optimised) | ~19,500–21,000 | ~19,000–22,000 | Within expected class performance for RTX 3060 Ti GPUs (Unigine, 2022; TechPowerUp, 2023) |
| PCIe Slot Efficiency | ~99% at 4K | ~100% (x16 ideal) | Performance loss negligible at high resolutions where GPU compute dominates (Gamers Nexus, 2020) |
| Interrupt-to-Process Latency | Within real-time safe bounds | <1 ms typical | Stable for gaming and real-time audio; no DPC-related instability (Resplendence, 2023) |
| Thermal Behaviour | Stable under load | No throttling expected | Passive-first cooling sufficient to maintain sustained boost clocks without thermal throttling |

Interpretation
Despite operating via an auxiliary PCIe slot, the system retains effectively full GPU performance at 4K. This confirms that theoretical bandwidth reductions do not materially constrain real gaming workloads in this configuration—a clear example of excess capacity providing no additional value.


Table 4. Gameplay Performance Summary (Call of Duty)

| Resolution | Settings | Performance Outcome | Experiential Assessment |
| --- | --- | --- | --- |
| 1080p | Extreme / Ultra | Stable, high frame rate | Exceeds console-class experience |
| 1440p | High / Ultra | Smooth and responsive | Well within intended scope |
| 4K | Optimised | Playable and stable | Borderline competitive, high fidelity |
| Frame Pacing | N/A | Consistent | No perceptible stutter |
| 0.1% Lows | N/A | Acceptable | No immersion-breaking drops |

Interpretation
The system meets or exceeds contemporary gaming expectations for the majority of users. While not configured for elite esports competition at extreme refresh rates, it delivers depth, stability, and responsiveness, aligning with how most players experience value (Digital Foundry, 2023).

Table 5. Virtual Comparison with PlayStation 5 (Baseline Equivalence)

| Dimension | RTX 3060 Ti System | PlayStation 5 | Comparative Assessment |
| --- | --- | --- | --- |
| Target Resolution | 1080p–4K | 1080p–4K | Equivalent |
| Visual Fidelity | High / Ultra (PC) | High (Optimised) | PC equal or better (settings-dependent) |
| Frame Stability | High | High | Equivalent |
| Power Envelope | Moderate (~200 W GPU) | Fixed ~180–200 W system | Comparable (Sony, 2020; NVIDIA, 2021) |
| Graphics Latency | Lower (driver & pipeline control) | Fixed pipeline | PC advantage (NVIDIA Reflex; Digital Foundry, 2022) |
| Platform Flexibility | Open | Closed | PC advantage |
| Repurpose / Upgrade | Yes | No | PC advantage |

Summary Interpretation (Lean / Programme Perspective)

Across synthetic benchmarks, live gameplay testing, and platform-level comparison, the results demonstrate that the system delivers contemporary, high-quality gaming performance within clearly defined scope and constraints. Performance parity with dominant consumer platforms is achieved not through escalation of hardware capacity, but through disciplined configuration, architectural resilience, and the deliberate avoidance of surplus capability that offers no material experiential return.

From a lean systems perspective, performance improvements emerged through constraint-focused optimisation rather than capacity expansion. Addressing memory latency and system stability produced tangible gains, whereas additional resources beyond this threshold delivered diminishing returns. From a programme management standpoint, delivery objectives were achieved without incurring unnecessary cost, replacement, or technical risk. When primary pathways failed, auxiliary routes preserved outcomes, and excess resources were retained as contingency rather than deployed inefficiently—reflecting mature governance rather than maximalist design.

Taken together, these results substantiate the central claim of this article: lean, well-governed systems can outperform over-engineered alternatives in real-world contexts, particularly under conditions of resource scarcity and sustainability constraint. The following discussion situates these findings within broader debates on hardware scarcity, lifecycle value, and the governance of complex socio-technical systems.

5. Discussion: Lean Supremacy, Scarcity, and the Future of Individual Computational Agency

The findings of this study must be interpreted not only in terms of gaming performance, but in relation to the broader computational capabilities now available to individual users. The system examined here—built around an RTX 3060 Ti–class GPU—comfortably delivers contemporary gaming performance within its defined scope. Less immediately visible, but equally significant, is its capacity to support applied artificial intelligence and data-intensive workloads, including model training, inference, and parallel data processing tasks commonly associated with contemporary machine learning practice. Consumer GPUs in this performance class are routinely used for deep learning frameworks such as TensorFlow and PyTorch, enabling experimentation and research previously dependent on institutional infrastructure (NVIDIA, 2021; Chollet, 2018).

This observation is historically significant. In 1997, IBM’s Deep Blue defeated the world chess champion using a system that was highly specialised, institutionally controlled, and resource-intensive. Deep Blue relied on custom VLSI chess processors and dedicated infrastructure consuming substantial electrical power and capital investment (Hsu, 2002; Campbell et al., 2002). By contrast, the system analysed in this study—built from consumer components and operating at a fraction of the energy cost—exceeds the computational requirements of many early AI research milestones. While architectural differences between symbolic AI and modern statistical learning must be acknowledged, the comparison highlights a profound inversion: computational capability once reserved for corporate or state actors is now available at the individual level.
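To make the inversion concrete, the following minimal training loop is representative of the class of workload an RTX 3060 Ti can host locally using PyTorch. The model size and synthetic data are illustrative placeholders, not the configuration used in this study:

```python
# Minimal sketch of a deep-learning workload on a consumer GPU.
# Model and data are illustrative placeholders only.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Training on: {device}")

# A small MLP classifier; 8 GB of VRAM accommodates far larger models.
model = nn.Sequential(
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 10),
).to(device)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for a real dataset.
x = torch.randn(256, 512, device=device)
y = torch.randint(0, 10, (256,), device=device)

for step in range(100):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()

print(f"Final loss: {loss.item():.4f}")
```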

This inversion has material implications for programme management, sustainability, and digital equity. From a lean systems perspective, the RTX 3060 Ti configuration represents a point of high value density: sufficient GPU memory, parallel compute capability, and throughput to support both high-fidelity gaming and applied AI research, without unnecessary escalation in power consumption or system complexity. In programme terms, this aligns with the concept of a minimum viable research platform—capable of enabling meaningful experimentation and innovation without requiring enterprise-scale infrastructure or cloud dependency (PMI, 2021; Ries, 2011).

However, this capability cannot be assumed to persist. Current market trends increasingly orient hardware design and pricing toward large-scale AI workloads, data centres, and cloud providers. Rising GPU and memory costs, coupled with increasing platform lock-in, have already prompted concerns that PC building as a practice is becoming economically inaccessible for many users (IDC, 2023; Gartner, 2024). As hardware becomes optimised for institutional AI deployment rather than general-purpose use, individual computational agency risks erosion. This trajectory mirrors earlier periods of centralisation in computing history, where innovation became constrained by access rather than ingenuity.

This shift also intensifies sustainability concerns. Systems designed for maximal throughput rather than scoped sufficiency exhibit higher operational energy demand, shorter effective lifecycles, and lower reuse potential. High-performance hardware that requires continuous, energy-intensive cooling to maintain marginal gains demonstrates a poor resource-to-value ratio. By contrast, the system examined here illustrates how a single, well-governed PC can support multiple domains—gaming, data processing, and AI experimentation—while remaining energy-efficient and durable. Such multi-use capability is a recognised hallmark of sustainable system design (Koomey et al., 2014).

The implications for programme and portfolio management are clear. Just as organisations must avoid over-investing in infrastructure that exceeds absorptive capacity, digital ecosystems must guard against a future in which computational power is recentralised through cost and specialisation. Lean supremacy, in this sense, is not merely a matter of performance efficiency; it is a matter of preserving optionality. Systems that are open, repurposable, and sufficiently powerful enable adaptation as requirements evolve, reducing dependence on proprietary platforms and externally governed cloud services (Womack and Jones, 1996; Goldratt, 1997).

These findings reinforce the arguments advanced by Cao et al. (2024) on responsible-stakeholder circularity in WEEE management, which emphasise lifecycle stewardship, reuse, and shared responsibility as foundations for sustainable digital futures. Extending the usable life of capable hardware reduces environmental impact while protecting the distributed innovation capacity that personal computing has historically enabled. This article is, in effect, a micro-scale extension of the principles developed in that work.

Viewed in this light, the system examined here is more than a gaming rig. It demonstrates that high-performance, multi-purpose computing remains viable at the individual level—provided that design, governance, and evaluation prioritise sufficiency over excess. In a future defined by hardware scarcity, energy constraints, and AI-driven market pressures, sustaining the agency of the PC builder may prove as important for resilience and innovation as sustaining raw performance itself.

References

  • Campbell, M., Hoane, A. J. and Hsu, F. (2002) ‘Deep Blue’, Artificial Intelligence, 134(1–2), pp. 57–83.
  • Cao, D., Puntaier, E., Gillani, F., Chapman, D. and Dewitt, S. (2024) ‘Towards integrative multi-stakeholder responsibility for net zero in e-waste: a systematic literature review’, Business Strategy and the Environment, 33(8), pp. 8994–9014.
  • Chapman, D. (2019) ‘Go Green Go Digital (GGGD): an applied research perspective toward creating synergy of crypto-mining and sustainable energy production in the UK’, International Conference on Sustainable Materials and Energy Technologies, pp. 1–23.
  • Chollet, F. (2018) Deep Learning with Python. New York: Manning.
  • Gartner (2024) Semiconductor Supply Chain and AI Infrastructure Outlook.
  • Goldratt, E. M. (1997) Critical Chain. Great Barrington, MA: North River Press.
  • Hsu, F. (2002) Behind Deep Blue: Building the Computer That Defeated the World Chess Champion. Princeton: Princeton University Press.
  • IDC (2023) Worldwide GPU and Memory Market Forecast.
  • Koomey, J. G., et al. (2014) ‘Implications of historical trends in the electrical efficiency of computing’, IEEE Annals of the History of Computing, 36(3), pp. 46–54.
  • NVIDIA (2021) NVIDIA Ampere Architecture Whitepaper.
  • PMI (2021) A Guide to the Project Management Body of Knowledge (PMBOK® Guide), 7th edn.
  • Ries, E. (2011) The Lean Startup. New York: Crown Business.
  • Womack, J. P. and Jones, D. T. (1996) Lean Thinking. New York: Simon & Schuster.

Appendix A. PC Build specifications

System Configuration (Final Test Platform)

The system analysed in this study is a consumer-built desktop PC originally assembled in 2019 and subsequently maintained and reconfigured through incremental optimisation. The configuration at the point of benchmarking and gameplay testing was as follows:

| Component | Specification |
| --- | --- |
| Motherboard | Gigabyte X570 AORUS ELITE (AM4 chipset) |
| CPU | AMD Ryzen 7 5800X, 8 cores / 16 threads |
| CPU Configuration | Standard BIOS-enabled overclocking (PBO / manufacturer-consistent settings) |
| Observed CPU Clock | ~3.8–4.7 GHz under boost |
| CPU Cooling | Passive-first large surface-area heatsink with low-speed case airflow (Noctua-class architecture) |
| GPU | Gigabyte NVIDIA GeForce RTX 3060 Ti (GA104, LHR) |
| GPU Memory | 8 GB GDDR6 |
| GPU Interface | PCIe 4.0 (operating via auxiliary slot due to primary x16 slot degradation) |
| System Memory | 32 GB DDR4 @ 3200 MHz, XMP enabled |
| Memory Configuration | Dual-channel, latency-optimised; no excess capacity installed |
| Storage | 1 TB SATA SSD (system drive) |
| Additional Storage | Secondary SSDs (legacy cabling; not performance-limiting for gaming workloads) |
| Operating System | Windows 10 (Build 26200) |
| Graphics Driver | NVIDIA Driver 581.8 |
| Power Profile | Standard desktop PSU; no artificial power limits applied |
| Display Context | Television / non-gaming monitor (visual output bounded by display rather than GPU capability) |
| Operational History | Near-continuous operation over ~5–6 years with routine maintenance (e.g. thermal interface renewal) |

Appendix B. Superposition Benchmarks



Appendix C. Tech PowerUp GPU

In the Bus Interface field, TechPowerUp reports PCIe x16, which indicates full support for the GPU despite it being in the secondary slot (running at 4.0 @ x4 1.1).

Appendix D. UserBenchmark

Appendix E. The Original Build (05/01/2026)

How We Are All Becoming Cyborgs and Why We Already Are: Navigating the Human to Cyborg Transition

It is often claimed that the next technological epoch will be defined by conflict: human versus artificial intelligence. This narrative of competition, however, misses a deeper truth. We are not waiting for a confrontation between humanity and machines; the merger has already begun. From the moment we started carrying devices that manage our time, location and thought, we began an evolutionary migration from human to cyborg. The distinction between organism and mechanism has become less meaningful as cognition, communication and perception are mediated through technology.

The slow embedding of technology into the human system

Our dependence upon external systems now shapes almost every aspect of life. We navigate through GPS, communicate through algorithms, and measure our health through wearable devices. Each action relies upon a digital infrastructure that extends our sensory and cognitive reach. This is not a future state but a lived condition.

The process began decades ago with seemingly trivial technologies. The portable cassette Walkman, introduced in 1979, was one of the first mass-market devices that allowed individuals to curate their auditory environment (Hosokawa, 1984). By wearing a headset, one could exclude the natural world and enter a private soundscape, transforming the act of walking into a self-contained performance. This externalisation of sensory experience marked the beginning of the cyborg transition. The Walkman was not implanted, yet it merged with human behaviour so thoroughly that its absence became noticeable.

Smartphones later extended this process to the cognitive domain. They are not merely communication tools; they are prosthetic memory systems, navigational aids, and emotional regulators. When we lose them, we experience a form of disorientation akin to sensory deprivation. According to Clark and Chalmers (1998), such integration constitutes an “extended mind,” where external artefacts become part of the cognitive process itself.

Today, wearable technology like smartwatches, fitness trackers and augmented-reality glasses has pushed the interface even closer to the body. The next step is internalisation. Subdermal chips, implantable sensors and neural interfaces represent a move from carrying and wearing to embodying technology. At this point, the line between tool and tissue dissolves. Hayles (1999) described this condition as “posthuman,” not because we have transcended the human form, but because information systems now participate in what it means to be human.

The meta-narrative of dependency

Across this continuum, a single narrative emerges: dependence. The more we integrate technology into the body, the more we rely upon its invisible systems of maintenance and control. The GPS network, the data cloud, the machine-learning model—all form part of a vast cybernetic feedback system that governs our access to the world from which we derive the resources we need to live. We can no longer easily separate biological survival from digital infrastructure.

This dependency is not purely mechanical; it is epistemological. Our sense of truth, orientation and safety now passes through algorithmic filters. In this respect, the smartphone is already a cognitive implant. It tells us where to go, who to speak to, what to remember, and sometimes what to believe. The boundary between external assistance and internal reliance has blurred. The cybernetic principle of feedback and control, first articulated by Wiener (1948), is no longer a theory of machines but a condition of human life.

Governance and failure

If the future human body includes embedded AI systems that regulate health and cognition, pressing questions follow. Who maintains these systems? Who controls the flow of data between body and network? If the AI is learning and adaptive, it may evolve in ways its host cannot predict or understand. Traditional models of medical device regulation are insufficient for a living, learning system.

Failure is an even greater concern. When a wearable fails, we replace it. When an embedded system fails, the consequences could be fatal. A malfunctioning AI that regulates blood sugar or heart rhythm might leave the body defenceless. Asimov (1950) proposed three laws to safeguard human-machine interaction, yet these were written for robots external to the human body. An AI that operates from within requires new ethical architectures. It must balance safety, autonomy, and identity: can a system that governs physiology without consent still be considered part of the self?

Biological consequences of augmentation

The paradox of augmentation is that it can weaken the very systems it seeks to enhance. The immune system, for example, adapts to a dynamic environment. If embedded technologies continuously monitor and correct the body’s internal state, they may suppress this adaptive process. Over generations, the natural immune response could diminish, leaving humanity dependent upon technological maintenance for basic survival.

The same may occur cognitively. We have already outsourced memory and navigation to our devices. Neural implants capable of instant recall or predictive reasoning could accelerate this outsourcing until independent reasoning becomes a rarity. As Wiener (1950) warned, a system that loses its redundancy also loses its resilience. The evolutionary balance between self-regulation and external control must therefore remain in focus. Evolution does not end with integration; it continues through new forms of dependency and adaptation.

One fear is that we may make monsters of ourselves, as depicted by the Borg in Star Trek (see below). Humans may come to care less about aesthetics and more about the body’s capacity to self-heal and to actualise as part of a digital collective: one that includes AI–mind synthesis, automation algorithms for performing tasks, and nano-robots for immune-system responses and maintenance of the ‘organics’.

Image credit: Image from Star Trek: The Next Generation – Season 3, Episode 26, ‘The Best of Both Worlds Part I’ (1990), courtesy of CBS / Paramount Pictures.

Theoretical and ethical insight from cybernetics

To understand the human transition to cyborg, it is necessary to return to the literature of cybernetics. Norbert Wiener’s Cybernetics: Or Control and Communication in the Animal and the Machine (1948) established the conceptual unity between living organisms and mechanical systems through feedback loops. His later work, The Human Use of Human Beings (1950), warned that the social application of cybernetic principles could erode autonomy if left unexamined. Wiener’s central insight was that information flow governs both machines and societies. In this sense, cybernetics is not just a science of control but a philosophy of life under technological mediation.

Haraway’s (1985) Cyborg Manifesto reframed the cyborg as a political and cultural figure: a hybrid of organism and machine that destabilises traditional boundaries between nature and technology. For Haraway, the cyborg is not a dystopian future but a lived reality that reveals our interdependence with systems of production and communication. Her analysis remains relevant as AI begins to occupy not only social but biological space.

Modern thinkers like Bostrom (2014) and Kurzweil (2005) continue this trajectory, exploring the possibilities of superintelligence and human enhancement. Yet their optimism often overlooks the systemic risks of dependency and governance. The cybernetic tradition reminds us that every system of control introduces a potential point of failure. It is therefore not enough to pursue augmentation; we must design for fallibility.

What a human may become

If the rate of technological progress continues, the human being of the next century will likely be a composite of organic and computational systems. Vision may be enhanced through retinal sensors; cognition may be assisted by implantable neural modules; disease prevention may occur through continuous bio-monitoring and automated intervention. The individual will be connected to a network that mediates perception, memory and metabolism in real time.

Such integration will alter the meaning of agency. The self will no longer be an isolated consciousness but a node within a distributed network of information exchange. Privacy, autonomy and responsibility will require redefinition. A human error might be indistinguishable from a system fault. In this context, Asimov’s ethical vision must be inverted: rather than teaching machines to obey humans, we must teach humans to coexist with the machines inside them.

Humans will seek to integrate technology aesthetically, but there will be inevitable physical consequences to embedding tech into and onto our bodies.

Image Title: ‘Aesthetic Cyborg with externalised tech’

Image credit: Chapman, D. (2025). ‘Aesthetic Cyborg with externalised tech’. Image created by ChatGPT5.

Evolution is never over

The cyborg transition should not be read as the end of humanity but as a continuation of evolution through artificial means. The tools we design to assist us gradually become part of us. The integration of AI into the body is simply the latest stage in a long history of technological symbiosis. Yet this evolution demands humility. If we allow dependence to eclipse understanding, we risk creating a species that cannot function without its extensions.

The task for our generation is therefore twofold: to embrace the potential of integration while preserving the capacity for autonomy and failure. Cybernetics offers a framework for this balance, reminding us that all living systems depend upon feedback, redundancy and ethical restraint.

The real frontier is not artificial intelligence but the redefinition of what it means to be human in an age when intelligence itself is becoming shared across organic and artificial systems. Evolution is never over; it has merely changed its medium.


References

Asimov, I. (1950) I, Robot. New York: Gnome Press.
Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Clark, A. and Chalmers, D. (1998) ‘The Extended Mind’, Analysis, 58(1), pp. 7–19.
Clark, A. (2003) Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford: Oxford University Press.
Haraway, D. (1985) ‘A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s’, Socialist Review, 80, pp. 65–108.
Hayles, N. K. (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
Hosokawa, S. (1984) ‘The Walkman Effect’, Popular Music, 4, pp. 165–180.
Kurzweil, R. (2005) The Singularity Is Near: When Humans Transcend Biology. New York: Viking.
Wiener, N. (1948) Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.
Wiener, N. (1950) The Human Use of Human Beings: Cybernetics and Society. Boston: Houghton Mifflin.

Postscript

I asked the AI how it knew to generate the image that it did, and how it judged that the result would align with my expectations; it replied (see below). That reflection led to the theme for this article, although cybernetics has been an interest of mine for many years.

ChatGPT5 response:

WMG MSc Students Go Behind the Scenes at HS2 with Align JV

Students from the MSc Programme and Project Management course at WMG, University of Warwick, recently had the unique opportunity to visit the HS2 Central 1 site, one of the most ambitious infrastructure projects currently underway in the UK. Delivered by Align JV, this section of HS2 Phase One includes the 16km Chiltern Tunnel and the 3.4km Colne Valley Viaduct, both of which students were able to see first-hand on site, just west of London.

The visit formed part of the Project Management in Practice (PMiP) module, where students are encouraged to engage with live, complex projects and gain exposure to large-scale delivery environments. For many, it was their first chance to explore such a vast infrastructure site in person, observing the logistical, environmental, and stakeholder challenges involved in real-time project execution.

Led by Dr David Pontin, Dr Lydia Adigun, Dr Georgios Kapogiannis, and Dr Dennis Chapman, the industrial visit aimed to bridge the gap between academic theory and practical application. Students were able to ask questions directly to representatives from HS2 and Align JV, gaining insight into how project methodologies and systems thinking are implemented at this level of delivery.

“It’s one thing to study Earned Value Management or stakeholder governance in a classroom,” one student commented, “but hearing how these are actually applied on a multi-billion-pound project like HS2—and being able to ask the people doing it—really made it click. It brought the theory to life.”

Students also reflected on themes such as systems integration, risk mitigation, supply chain complexity, and environmental sustainability—all core topics within the PMiP module and wider MSc curriculum.

Reflecting on the value of the visit, Dr Georgios Kapogiannis said:

“For students to see a programme of this magnitude—while being able to engage directly with those delivering it—transforms how they perceive their own learning. It inspires a shift from textbook theory to strategic thinking under real-world constraints.”

We would like to thank our colleagues—Dr Pontin, Dr Adigun, and Dr Kapogiannis—for their collaboration in organising the visit, and offer our sincere thanks to Align JV and HS2 for generously hosting us on site. The insights gained by our students will continue to inform their thinking and research well beyond the classroom.

This visit underscores WMG’s commitment to delivering industry-connected, applied education, ensuring students are equipped to become reflective, effective, and forward-thinking project professionals.

Explore Further

MSc in Project Management, WMG: Applied AI, Python simulations, and industry 4.0 / 5.0 at WMG, University of Warwick


Caveat: all of the subjects presented here are active research areas of lecturers on the Course and/or relate to the products and services of affiliated businesses. For official Course details, please click on one of the links at the end of this post.

Rapid automation, sustainability imperatives, and increasingly intelligent systems are redefining what it means to manage complex programmes. Tomorrow’s project leaders must be more than functional coordinators — they must be strategic integrators of people, data, and technology.

The MSc in Project and Programme Management (PPM) at WMG, University of Warwick, responds directly to this shift. It combines rigorous project leadership fundamentals with applied expertise in artificial intelligence, robotics management, and Industry 4.0 / 5.0 integration.

Whether you come from an engineering, computing, or business background, this programme offers a future-focused route into high-impact careers spanning infrastructure, advanced manufacturing, public innovation, and global consultancy.


⚙️ Applied AI: Where Projects Think for Themselves

Project management today is undergoing a metamorphosis. As technologies like AI, robotics, and cyber-physical systems shape how goods and services are delivered, traditional frameworks are being replaced by agile, intelligent, and data-driven models.

Rather than studying AI in isolation, this MSc embeds it within real project environments:

  • AI-powered simulation environments
  • Digital twins for forecasting and scenario planning
  • Custom-coded tools for resourcing, risk analysis, and optimisation

These simulations — some of which are developed and published by teaching staff — are not off-the-shelf tools, but active components of ongoing research, with real-world application in the public sector, civil service, and infrastructure contexts.
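As a flavour of what such a tool can look like, the sketch below runs a minimal Monte Carlo schedule-risk analysis in Python. The task names and triangular duration estimates are invented for illustration and are not drawn from any module or live project:

```python
# Minimal Monte Carlo schedule-risk sketch. Task durations are invented
# triangular (optimistic, most likely, pessimistic) estimates in weeks.
import random

tasks = {
    "design":      (4, 6, 10),
    "procurement": (3, 5, 12),
    "build":       (8, 10, 16),
    "commission":  (2, 3, 6),
}

def simulate_once() -> float:
    # Sequential tasks: project duration is the sum of sampled durations.
    # Note: random.triangular takes (low, high, mode).
    return sum(random.triangular(lo, hi, mode)
               for lo, mode, hi in tasks.values())

runs = sorted(simulate_once() for _ in range(10_000))
p50 = runs[len(runs) // 2]
p80 = runs[int(len(runs) * 0.8)]
print(f"P50 duration: {p50:.1f} weeks | P80 duration: {p80:.1f} weeks")
```

Even a sketch this small shifts conversations from single-point estimates toward probabilistic delivery targets, which is the habit of mind the module aims to build.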

As Jarrahi (2018) notes, “AI complements human judgement in complex environments… The future of work lies in human–AI symbiosis, particularly in decision-making roles where uncertainty and risk are high.”


🤖 Robotics Integration: Strategy Meets Systems

The course explores robotics not just as automation, but as a strategic project asset. Students develop the capability to manage and plan around:

  • Human–robot collaboration
  • Cobot deployment in agile workflows
  • Ethical frameworks for autonomous systems

This reflects the evolution from Industry 4.0 — with its focus on connectivity and data — to Industry 5.0, where human-centric, sustainable, and resilient innovation becomes paramount (European Commission, 2021).


💻 Coding for Project Professionals: From Python to Simulation

WMG recognises that project managers of the future must be able to understand the systems they lead. To that end, the MSc PPM programme includes applied instruction in:

  • Python for simulation and AI logic
  • MATLAB/Simulink for control systems and modelling
  • PHP for digital twin visualisation and logic interfacing

Rather than electives, these are core skills, preparing graduates to lead multi-disciplinary teams, challenge assumptions, and convert data into actionable strategy.


🌍 Diversity, Impact, and Global Demand

The programme attracts students from over 50 countries, from disciplines including business, civil engineering, data science, and design. This diverse intake encourages collaboration across national, cultural, and professional boundaries — especially in simulation-based group work and strategic case studies.

The course has also been recognised for its inclusive delivery. Simulation-led learning, visual project logic tools, and flexible formats make it accessible to neurodiverse learners and disabled students — addressing a critical need in higher education.


📈 Career Pathways and Industry Engagement

Graduates have gone on to leadership roles in:

  • Infrastructure and transport megaprojects (including HS2-affiliated consultancies)
  • Digital transformation roles in multinational firms
  • Sustainability and risk modelling in ESG, carbon management, and circular economy consulting

Students benefit from live case studies, industry guest lectures, and problem-based assessments aligned with current global challenges. The course is also APM and PMI-affiliated, supporting professional recognition.


🌐 Industry 4.0 and 5.0: Systems Thinking in Action

Key modules equip students to understand and manage the technologies driving today’s industries:

  • The Industrial Internet of Things (IIoT) and sensor ecosystems
  • Smart supply chain systems and digital lifecycle planning
  • Transition frameworks for sustainable, circular project delivery

The curriculum reflects WMG’s real involvement in major industrial partnerships, ranging from smart cities and autonomous mobility to advanced manufacturing collaborations across Europe and Asia.


🌱 Embedding the UN Sustainable Development Goals (SDGs)

A significant and forward-thinking feature of the MSc PPM is the integration of the UN Sustainable Development Goals (SDGs) across all modules.

Rather than being treated as a stand-alone topic, sustainability is embedded within:

  • Risk and resilience: ESG evaluation and planetary boundaries
  • Strategic leadership: Ethical decision-making and long-term impact
  • Tech management: Carbon lifecycle analysis and digital equity

This aligns directly with:

  • Goal 9 – Industry, Innovation and Infrastructure
  • Goal 12 – Responsible Consumption and Production
  • Goal 13 – Climate Action

Students are also encouraged to align dissertations or group projects with SDG themes. Previous work has explored circular design for manufacturing, ethical automation in public infrastructure, and carbon metrics for digital twin planning.

“The WMG curriculum reflects a growing expectation from industry and society: that project leaders are not only efficient, but also responsible stewards of sustainable futures.”


🧠 Why Apply Now?

The MSc in Project and Programme Management is a highly competitive course with limited places offered on a rolling basis. Applications for 2025 entry are now open.

Whether you’re a final-year undergraduate or a mid-career professional looking to pivot toward AI-enabled delivery and sustainable innovation — this course is designed to future-proof your capabilities.

📌 Early application is strongly advised.


📚 References

  • European Commission (2021). Industry 5.0: Towards a sustainable, human-centric and resilient European industry.
  • ISO (2015). ISO 14001: Environmental management systems – Requirements with guidance for use.
  • Jarrahi, M.H. (2018). Artificial intelligence and the future of work: Human–AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586.
  • Luthra, S. & Mangla, S.K. (2018). Evaluating challenges to Industry 4.0 initiatives for supply chain sustainability in emerging economies. Process Safety and Environmental Protection, 117, 168–179.

International relations and global business: a trend toward nationalism

In an era of unprecedented economic interdependence, the assumption that global trade ensures stability is increasingly fragile. The entanglement of supply chains across geopolitical fault lines, vulnerabilities in critical infrastructure, and the resurgence of nationalism are reshaping international commerce. While businesses have long operated on the premise that economic ties create mutual incentives for peace, history offers a cautionary tale. Despite record levels of global trade integration, tensions between major powers continue to escalate (Copeland, 2015; Mearsheimer, 2014).

Industries reliant on materials and technologies controlled by both allies and adversaries must acknowledge that economic interdependence is no longer a safeguard against geopolitical disruption. The breakdown of global trade in times of crisis—such as during the COVID-19 pandemic—demonstrated that supply chains are only as strong as their weakest geopolitical link (OECD, 2023). This essay argues that failing to account for political and strategic risk is failing to plan for the realities of modern global commerce.

The analysis will first challenge the belief that economic interdependence fosters peace, examining the pre-World War I era as a historical precedent. It will then assess the vulnerabilities of modern supply chains, with case studies on rare earth elements, semiconductors, and undersea infrastructure. Finally, the discussion will consider how nationalism impacts supply chains, arguing that full economic autonomy is unrealistic. The conclusion will present solutions for resilience, diversification, and risk mitigation.


False global trade optimism

Before World War I, optimism prevailed that growing global trade would deter large-scale conflicts. Norman Angell’s The Great Illusion (1910) posited that war was economically irrational due to deep financial interdependencies between industrialized nations. This belief assumed that economic integration would naturally prevent states from engaging in destructive military conflicts (Angell, 1910).

However, this assumption collapsed with the outbreak of World War I in 1914. Despite Britain and Germany being each other’s largest trading partners, economic ties failed to prevent the escalation of hostilities. Political ambitions, national security concerns, and alliance structures proved far more decisive than trade dependency (Gartzke & Lupu, 2012).

This failure has led scholars to reevaluate the idea that trade deters war. While some argue that economic integration reduces conflict probability, others contend that power politics ultimately override financial considerations (Copeland, 2015). The case of Russia’s 2022 invasion of Ukraine, despite extensive trade with Europe, further illustrates the limitations of economic interdependence as a deterrent.


Does the West need China?

Modern supply chains are intricate networks that link allies, rivals, and adversaries. The production and distribution of semiconductors, lithium, and rare earth elements (REEs) rely on multinational networks that often involve strategic competitors.

China dominates the REE sector, controlling approximately 70% of global rare earth mining and 90% of refining (Statista, 2023). These materials are indispensable for renewable energy, aerospace, and defense technologies. This dependency has sparked growing security concerns, with the U.S. and EU implementing policies to diversify supply chains (International Energy Agency, 2021).

The semiconductor industry is another critical vulnerability. Taiwan produces over 60% of the world’s advanced microchips, with TSMC alone supplying 90% of leading-edge chips (OECD, 2023). A disruption in Taiwan’s chip production—whether from trade disputes, blockades, or military conflict—could paralyze global technology industries, as semiconductors power everything from smartphones to missile guidance systems (IMF, 2023).


Can nationalism save us?

The resurgence of nationalism has led nations like the United Kingdom and Germany to emphasize economic self-sufficiency and strategic autonomy. The UK’s post-Brexit economic landscape has been characterized by labour shortages, investment uncertainty, and declining trade volumes, impacting sectors such as manufacturing, agriculture, and services (Springford, 2021). Brexit resulted in significant disruptions to UK-EU trade flows, with non-tariff barriers increasing costs for businesses, reducing foreign direct investment, and prompting some companies to relocate operations (Financial Times, 2023).

Germany, while remaining deeply integrated into the European Union, has reoriented aspects of its economic strategy. The German government has taken steps to reduce dependence on Chinese raw materials, particularly in high-tech manufacturing and energy storage solutions (Bruegel, 2023). However, despite policy shifts, Germany remains heavily reliant on imports for semiconductors and key industrial components (World Bank, 2023).

Nationalism can also escalate resource conflicts. In 2023, China imposed export restrictions on gallium and germanium, both crucial for semiconductor manufacturing (Financial Times, 2023). The move was perceived as retaliation against U.S. and European trade policies, highlighting how nationalist economic policies can provoke countermeasures, further destabilizing global markets (RAND Corporation, 2024).


What next?

A prolonged supply chain disruption could have devastating effects across key industries, impacting production, innovation, and economic security. Table 1 summarizes some of these risks:

Table 1: Industry-Specific Consequences of Supply Chain Disruptions

  • Technology: Semiconductor shortages halt smartphone, computer, and server production.
  • Automotive: EV production stalls due to lithium-ion battery material shortages.
  • Energy: Delays in wind, solar, and battery projects due to rare earth supply constraints.
  • Healthcare: Scarcity of medical equipment and pharmaceutical components.
  • Defense: Disruptions to missile guidance, radar, and aerospace manufacturing.

The technology sector is particularly vulnerable to supply chain breakdowns. The 2020–2023 semiconductor crisis already demonstrated how delays in chip production affected global electronics, automotive, and defense sectors (BCG, 2023). Chip shortages cost the automotive industry over $200 billion in lost revenue in 2021 alone (Statista, 2023).

Renewable energy transitions are also at risk. China refines 85% of the world’s solar-grade polysilicon, making solar panel supply chains highly dependent on one country (International Energy Agency, 2021). Any disruption in Chinese production could significantly slow the shift to sustainable energy.

The defense sector faces similar risks. The U.S. relies on imports for over 80% of its rare earth needs, crucial for fighter jets, missile systems, and radar technology (RAND Corporation, 2024). Any restrictions on REE exports would severely impact military readiness.


Conclusion: Toward Strategic Resilience

The vulnerabilities in global supply chains cannot be ignored. While economic interdependence has fueled unprecedented prosperity, it has also introduced systemic risks. The assumption that trade fosters peace is historically unfounded, as strategic interests ultimately override economic logic in times of crisis.

The solution is not full economic decoupling—an unrealistic and costly endeavor—but smart diversification, investment in alternative suppliers, and strategic stockpiling. Governments and businesses must:

  1. Diversify supply sources to reduce overreliance on single-country suppliers.
  2. Invest in domestic production for critical industries like semiconductors and energy storage.
  3. Strengthen international partnerships to maintain stable trade networks.

Ignoring these risks will leave businesses and economies vulnerable to the next major geopolitical shock. Economic interdependence alone is no longer enough to ensure stability—strategic resilience must now be the priority.



The Future of Earth: A Mirror of Mars’ and Venus’ Past? And how does plastic factor into the equation?


For decades, the idea that Mars may have once hosted an advanced civilisation has been the subject of speculation, whispered in the corridors of both science fiction and scientific inquiry. The notion that an ancient Martian society, one much like our own, might have risen and fallen under the weight of its own technological advancements has intrigued many. Some theories even suggest that Mars’ inhabitants, faced with a dying world, may have sought refuge elsewhere—perhaps even on Earth. But rather than engaging in that argument, we consider a different angle: Could Earth, in a billion years, come to resemble the desolate, red expanse of Mars today?

Mars and Venus: The Planetary Lifecycle

The more we study Mars, the more evidence emerges that it was not always the dry, barren wasteland we see now. Scientists have found signs of ancient rivers, lakes, and perhaps even oceans. At some point, however, the planet lost its atmosphere, its water boiled into space, and all but the hardiest extremophiles perished—if life ever existed there at all.

Venus, on the other hand, presents a different cautionary tale. Once thought to be similar to Earth, Venus experienced a runaway greenhouse effect, leaving it a hellish world where surface temperatures exceed 460°C. Its dense carbon dioxide atmosphere, veiled in thick clouds of sulphuric acid, prevents infrared radiation from escaping, locking the planet in a feedback loop of ever-increasing heat. If Mars is an example of what happens when a planet loses its atmosphere, Venus is a case study in what happens when an atmosphere becomes a prison. Earth, positioned between these two extremes, is not immune to a similar fate.

Theory

If civilisation once existed on ancient Venus or Mars, perhaps it too relied on plastics, hydrocarbons, and industrial chemicals, poisoning itself in the same way we are doing on Earth. If Mars’ soil is, in part, the dust of its lost civilisation—ground-down remnants of synthetic materials like plastics—then Earth could one day face the same fate. Whether or not this is true, the harsh reality is that fixing planet Earth could become so expensive and technologically challenging that leaving it might be seen as a more viable option. Even today, forever chemicals—those synthetic compounds that resist breaking down—have been found in some of the most remote places on Earth, including Antarctica. This suggests that no part of the planet is immune from industrial contamination, and the problem will only compound over time (CHEM Trust, 2024).

The Denial of an Unfolding Catastrophe

Despite overwhelming evidence, the response to environmental collapse is, at best, fragmented, at worst, deliberate obfuscation. Climate change, for instance, is often mischaracterised as simply ‘global warming’—a phrase that fails to capture the full battery of poisons and toxins being injected into the Earth’s atmosphere, soil, and even our own bodies. It is not just about temperature increases; it is about the wholesale transformation of the Earth’s biosphere. Just as opponents of evolution create strawman arguments that humans could not have evolved from apes—misrepresenting the actual scientific theory—so too has the debate on climate change been derailed by distractions, omissions, and deliberate misdirection. The real crisis is not only the warming climate but the inexorable accumulation of synthetic materials and hazardous compounds that will outlive humanity itself.

We are witnessing the slow-motion poisoning of an entire planet, yet policies continue to be driven by short-term economic interests rather than long-term planetary survival. This raises a troubling question: Is the world unwilling or simply incapable of action? If the latter, then the fate of Earth is already sealed, and, much like the imagined Martian past at the beginning of this article, we are merely living out the final centuries before planetary collapse.

The Plastic Paradox in Modern Manufacturing

In the era of Industry 4.0 and the forthcoming Industry 5.0, the integration of advanced technologies like automation, artificial intelligence, and the Internet of Things is transforming manufacturing processes. However, a critical paradox lies at the heart of this technological revolution: the pervasive reliance on plastics.

3D Printing and Plastic Dependency

Additive manufacturing, commonly known as 3D printing, is heralded as a cornerstone of modern manufacturing. It enables rapid prototyping, customisation, and complex designs that traditional methods cannot easily achieve. Yet, the majority of 3D printers utilise plastic-based materials, such as polylactic acid (PLA) and acrylonitrile butadiene styrene (ABS). This dependence on plastics underscores a contradiction: while aiming for innovation and sustainability, the industry continues to rely on materials that contribute to long-term environmental degradation.

Plastics in Electric Vehicles

The automotive industry, particularly the electric vehicle (EV) sector, exemplifies this paradox. EVs are promoted as environmentally friendly alternatives to fossil fuel-powered cars. However, they incorporate significant amounts of plastic to reduce weight and improve efficiency. According to the American Chemistry Council, an average mid-size EV contains approximately 450 pounds (204 kilograms) of plastics and polymer composites—140 pounds more than a comparable internal combustion engine vehicle. This accounts for about 10% of the vehicle’s weight but nearly 50% of its volume (American Chemistry Council, 2023).

The Persistent Production of Hazardous Materials

Beyond plastics, the continued manufacture of ‘forever chemicals,’ such as per- and polyfluoroalkyl substances (PFAS), poses severe health risks. These chemicals are renowned for their persistence in the environment and human body, leading to potential adverse health effects. Despite growing awareness and regulatory scrutiny, their production persists, mirroring the ongoing reliance on plastics in advanced manufacturing sectors. Even in the most remote regions, such as Antarctica and the Tibetan Plateau, PFAS have been found contaminating rainwater, demonstrating the inescapable reach of industrial pollution (Phys.org, 2022).

The Plastic Legacy: A Chemical Transformation

Plastics and synthetic chemicals are among the most persistent materials humans have ever created. On Earth, they fill our cities, roads, and oceans, forming the backbone of our infrastructure. Yet these very materials, in geological time, are fleeting. Exposed to the elements, plastics break down into microplastics, then further into molecules and chemical residues that mix into the dust of the world. Over the course of one to two billion years, all visible traces of plastic infrastructure would be erased, leaving only its molecular fingerprint behind.

On Earth, perchlorates are largely associated with industrial processes, forming as by-products in the production of plastics, explosives, and other synthetic chemicals. However, perchlorates can also form through the degradation of plastics under ultraviolet radiation, meaning that all of the unsustainable materials we produce today will eventually decompose into toxic compounds, making the planet’s surface increasingly hostile to life. In this sense, our plastic pollution today is not just an environmental issue—it is the long-term poisoning of our world.

Recent studies have revealed alarming levels of microplastics in human tissues. For instance, research indicates that human brains may contain up to 7 grams of microplastics, particularly particles smaller than 200 nanometres, which can cross the blood-brain barrier. These particles are mainly composed of polyethylene and collect in brain blood vessels and immune cells. Switching to filtered tap water can decrease microplastic intake dramatically, from roughly 90,000 to 4,000 particles a year, a reduction of more than 95% (The Times, 2024).

A harrowing recent study found that most human brains now contain a spoonful of dementia-linked plastics, microplastics that have infiltrated the human bloodstream and lodged in brain tissue (MSN, 2024).

Fig. 1. A spoonful of microplastics, on average, exists in every human brain.

Moreover, microplastics have been detected in nearly 90% of protein food samples across 16 types, including seafood, pork, beef, chicken, tofu, and plant-based meat alternatives. The study found no statistical difference in microplastics concentrations between land- and ocean-sourced proteins (Food Safety, 2024). This pervasive contamination underscores the extent to which plastic pollution has infiltrated our food chain, raising concerns about the cumulative health effects of chronic microplastic ingestion.

The omnipresence of microplastics in our environment and bodies is a testament to the enduring legacy of plastic pollution. As these materials continue to accumulate, understanding their long-term impacts on human health and the planet becomes increasingly critical.

The Cost of Change vs. The Cost of War

If shifting global production to a sustainable model seems impossible, consider this: over $8 trillion has been spent on wars since 2000 (Costs of War Project, Brown University, 2023). Humanity has demonstrated an extraordinary capacity to fund destruction while dismissing the cost of sustaining life on Earth. If countries allocated just 3-4% of their GDP—the same level they commit to military spending—the transition to a sustainable global economy could be fully funded within decades. A $50 trillion investment over 30 years (IEA, 2021) would not only stabilise the climate but create a world that benefits humanity for thousands of years into the future. This is not just an economic decision—it is an investment in civilisation itself. Otherwise, our options on planet Earth will evaporate well before the oceans do.

Bibliography

  • American Chemistry Council (2023). Chemistry and Automobiles 2024.
  • Lenton, T. M., et al. (2019). “Climate tipping points—too risky to bet against.” Nature, 575(7784), 592–595.
  • Armstrong McKay, D. I., et al. (2022). “Exceeding 1.5°C global warming could trigger multiple climate tipping points.” Science, 377(6611), 1234–1243.
  • International Energy Agency (2021). World Energy Outlook 2021.
  • Stockholm International Peace Research Institute (2023). SIPRI Yearbook 2023.
  • Costs of War Project, Brown University (2023). The Costs of War Since 2000.
  • MSN (2024). “Most human brains now contain a spoonful of dementia-linked plastics.” MSN News.
  • CHEM Trust (2024). PFAS Found in Remote Penguins.
  • Phys.org (2022). PFAS in Antarctic Rainwater.

Why Should All Engineers Know Pseudo Code? An Introduction to Algorithms


Introduction

The rise of artificial intelligence (AI) has brought the term “prompting” into mainstream conversations. Prompting, the act of giving instructions to AI, is often perceived as a modern skill. However, the practice of human-computer interfacing is deeply rooted in history, dating back to the earliest programmable machines. Charles Babbage’s Analytical Engine (1837) stands as one of the earliest examples of mechanical computation, where structured inputs were necessary to produce logical outputs. Babbage’s collaborator, Ada Lovelace, conceptualised the first algorithm for this machine, laying the groundwork for programming as a discipline. Later, Alan Turing’s theoretical Turing Machine formalised the concept of computation, showcasing the power of algorithms to solve abstract problems (Turing, 1936).

These milestones emphasise one fundamental truth: clear, structured communication is the cornerstone of effective machine interaction. Today, generative AI tools such as ChatGPT provide a façade of simplicity, suggesting that anyone can use AI to solve problems. Yet, without understanding how these tools process inputs, outputs often fail to meet expectations. Engineers who understand pseudo code—an intermediate step between natural language and formal programming—are uniquely positioned to bridge this gap. By leveraging pseudo code, they can translate their ideas into machine-readable logic, ensuring their prompts produce precise and meaningful results.


Historical Foundations of Human-Computer Interaction

The foundation of human-computer interaction lies in cybernetics, a field pioneered by Norbert Wiener (1948), which explored feedback loops in communication systems. Wiener’s work highlighted the importance of structured communication and prefigured the design of modern algorithms. Asimov’s “Three Laws of Robotics” (1942) underscored the need for precise instructions, ensuring machines operated safely and predictably. However, these principles have limitations. Der Derian (2010) criticises their application in autonomous warfare, where vague instructions can result in unintended consequences.

Structured programming, introduced by Edsger Dijkstra, further solidified the importance of logical design in machine communication (Dijkstra, 1968). These historical developments highlight the critical role of precision in reducing errors and improving outcomes in human-machine interactions.

The Convergence of Disciplines

Engineering has undergone significant transformations, with the boundaries between mechanical and computer engineering increasingly blurred. CAD systems, 3D printing, and digital twins have revolutionised prototyping, enabling faster and more efficient designs (Tao et al., 2018). AI now plays a pivotal role in optimising these systems, generating simulations, and predicting performance.

For example, creating a digital twin requires defining variables such as dimensions, material properties, and constraints. Without pseudo code, engineers risk generating incomplete or incorrect models, leading to inefficiencies and delays. Structured prompts in pseudo code ensure that AI interprets these inputs correctly, bridging the gap between human intent and machine execution.
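
As an illustration, a structured specification for a simple digital twin might look like the Python sketch below. The class name, fields, and accepted materials are invented for this example and do not reflect any particular toolchain.

    from dataclasses import dataclass

    @dataclass
    class DigitalTwinSpec:
        # Geometry and material are stated explicitly, so nothing is left for the AI to guess.
        length_mm: float
        width_mm: float
        height_mm: float
        material: str
        max_load_kg: float

    def validate(spec: DigitalTwinSpec) -> list:
        # Returns a list of constraint violations; an empty list means the spec is usable.
        issues = []
        if spec.max_load_kg <= 0:
            issues.append("max_load_kg must be positive")
        if spec.material not in {"aluminium", "steel", "ABS"}:
            issues.append(f"unknown material: {spec.material}")
        return issues

    spec = DigitalTwinSpec(120.0, 40.0, 25.0, "aluminium", 15.0)
    print(validate(spec) or "spec OK")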


Teaching Pseudo Code: The Theory Behind Structured Programming

Foundations of Structured Programming

Pseudo code simplifies complex logic into step-by-step instructions using control structures that underpin all programming languages:

  1. Sequential Execution: Instructions are executed in the order they appear.
  2. Conditional Statements: Decision-making processes using if-then or if-then-else.
  3. Loops (Iteration): Repeated execution of code blocks, such as while, do-while, or for.

These principles enable users to articulate problems clearly, making them accessible to both humans and machines.
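
A brief Python illustration of all three structures working together (the sensor readings and threshold are arbitrary):

    readings = [12, 48, 7, 63, 30]      # sequential: statements run top to bottom
    alerts = 0
    for value in readings:              # iteration: repeat the block for each reading
        if value > 40:                  # condition: branch on a test
            alerts += 1
    print(f"{alerts} of {len(readings)} readings exceeded the threshold")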

Pseudo Code in Action: Programming a Robot for Combat

Consider a scenario where two wheeled robots compete in a “Robot Wars”-style arena. The goal is to program a robot to defeat its opponent using strategy and precision.

Example of a Poorly Structured Prompt:
“Make a robot that fights effectively and wins the battle.”

  • AI Output: A generalised script, e.g., “The robot moves randomly and attacks when in range.” This lacks strategy, leading to inefficiencies and poor performance.

Example Using Pseudo Code:
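
A minimal sketch of such a structured prompt, written here as runnable Python-style pseudo code; the sensor and actuator functions are invented placeholders rather than a real robot API:

    import random

    def scan_distance():                # placeholder sensor: metres to opponent
        return random.uniform(0.2, 5.0)

    def battery_level():                # placeholder sensor: remaining energy fraction
        return random.uniform(0.0, 1.0)

    def act(command):                   # placeholder actuator
        print(command)

    for tick in range(5):               # main control loop
        distance, energy = scan_distance(), battery_level()
        if energy < 0.2:
            act("retreat and recharge") # manage energy before engaging
        elif distance < 0.5:
            act("attack")               # strike only when in range
        else:
            act("advance")              # otherwise, close the distance deliberately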

  • AI Output: A refined strategy where the robot optimises its movements, manages energy efficiently, and attacks only when necessary.

Applications in Manufacturing and Industry 4.0/5.0

The emergence of Industry 4.0, characterised by the integration of IoT, AI, and automation, has transformed manufacturing processes by enabling smarter, interconnected systems. Building on this foundation, Industry 5.0 emphasises human-AI collaboration, aiming to create more sustainable and human-centric approaches to production. Within this landscape, pseudo code serves as a critical tool, allowing engineers to communicate their design requirements clearly and precisely, thereby enhancing productivity and reducing errors.

Prototyping and Optimisation

Prototyping is a cornerstone of manufacturing innovation, and the advent of digital twins has further revolutionised this process. A digital twin—a virtual replica of a physical system—allows engineers to test designs in a simulated environment before committing to physical prototypes. Using pseudo code, engineers can define key parameters such as torque, speed, payload capacity, and environmental conditions, ensuring that the digital twin reflects real-world constraints accurately.

For example, designing a robotic arm for assembly requires engineers to balance factors like joint flexibility, operational speed, and load distribution. Without pseudo code, engineers might rely on iterative trial-and-error methods, leading to increased costs and extended development times. By specifying these variables in pseudo code, engineers provide AI with the structured inputs needed to generate optimised digital twin models, significantly reducing physical testing requirements and accelerating production timelines (Tao et al., 2018).

Moreover, pseudo code facilitates automated optimisation processes. By incorporating conditional logic and iterative loops, engineers can instruct AI systems to test multiple configurations, identify inefficiencies, and suggest improvements. This iterative refinement process, powered by pseudo code, enables manufacturers to achieve better performance metrics while minimising waste and resource consumption.
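
As a sketch of that iterative refinement, the loop below exhaustively tests a handful of configurations against a cost model; the cycle-time formula and parameter ranges are invented for illustration:

    def cycle_time(speed, payload):
        # Invented cost model: faster joints cut time, heavier payloads add it.
        return 10.0 / speed + 0.05 * payload

    best = None
    for speed in (0.5, 1.0, 1.5, 2.0):      # iterative loop over configurations
        for payload in (5, 10, 15):
            t = cycle_time(speed, payload)
            if best is None or t < best[0]: # conditional logic keeps the best result
                best = (t, speed, payload)

    print(f"best config: speed={best[1]}, payload={best[2]} kg, cycle={best[0]:.2f} s")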

Robotics Programming

Industrial robots play an essential role in modern manufacturing, performing tasks such as welding, assembly, material handling, and quality inspection. However, programming these robots to execute tasks safely and efficiently requires meticulous planning. Pseudo code provides engineers with a means to articulate complex task sequences, safety protocols, and contingency plans in a clear and logical format.

Consider a welding robot tasked with joining components along a production line. Using pseudo code, an engineer can define the robot’s movement paths, welding parameters (e.g., temperature and speed), and error-detection mechanisms.
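
A hedged sketch of what that might look like, again with invented function names and parameter values standing in for a real controller interface:

    WELD_TEMP_C = 1450                      # illustrative welding parameters
    TRAVEL_SPEED_MM_S = 8

    path = [(0, 0), (120, 0), (120, 80)]    # movement path as waypoints (mm)

    def weld_segment(start, end):
        # Placeholder weld primitive; returns False if a fault is detected.
        print(f"welding {start} -> {end} at {WELD_TEMP_C} C, {TRAVEL_SPEED_MM_S} mm/s")
        return True

    for start, end in zip(path, path[1:]):
        if not weld_segment(start, end):    # error detection: halt on any fault
            print("fault detected: halting line and raising alarm")
            break
    else:
        print("seam complete: passing part to inspection")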

Reducing Interface Errors

AI’s ability to interpret human input is inherently constrained by its training data, which cannot encompass every possible scenario (Floridi et al., 2018). Ambiguity in communication often leads to outputs that fail to meet user expectations, resulting in wasted time and resources. Pseudo code addresses this challenge by providing a structured framework for interaction, reducing the likelihood of misinterpretation.


Conclusion

Pseudo code bridges the gap between human logic and machine execution, enabling engineers to leverage AI effectively. By articulating precise instructions, engineers can optimise workflows, reduce errors, and achieve better outcomes in applications ranging from robotics to manufacturing. The future of engineering lies at the intersection of human expertise and AI capabilities, and pseudo code is the tool that will guide this collaboration into the next industrial era.


References

  1. Asimov, I. (1942). Runaround. In I, Robot. Gnome Press.
  2. Babbage, C. (1837). On the Analytical Engine. British Museum Archive.
  3. Dijkstra, E. (1968). “Go To Statement Considered Harmful.” Communications of the ACM, 11(3), 147–148.
  4. Der Derian, J. (2010). Virtuous War: Mapping the Military-Industrial-Media-Entertainment Network. Routledge.
  5. Floridi, L., et al. (2018). “AI as Augmented Intelligence: Beyond Machine Learning.” Philosophy & Technology, 31(4), 317–328.
  6. Goldberg, D. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley.
  7. Knuth, D. (1974). The Art of Computer Programming: Fundamental Algorithms. Addison-Wesley.
  8. Tao, F., et al. (2018). “Digital Twin and Smart Manufacturing.” Advanced Engineering Informatics, 39, 845–856.
  9. Turing, A. (1936). “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society, 42(1), 230–265.
  10. Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.

The Codex of Linguistic Impetus: AI, Equity, and Original Thought in Academic Expression

Abstract

This essay argues that while AI effectively structures, organizes, and enhances human ideas, it cannot independently conceive original thought. I introduce the Codex of Linguistic Impetus as a framework to differentiate between AI-generated filler and genuinely iterated human ideas, positioning AI as a vital yet supporting force for self-expression within academia. Concerns about originality are addressed, particularly for students who might misuse AI by relying on it passively or using it to bypass full engagement with academic material. Through a philosophical lens, I examine AI’s potential as a formulaic articulator, enabling the precise articulation of human thought without diluting academic integrity. The essay concludes that educational institutions must redefine academic integrity to thoughtfully integrate AI, moving beyond superficial solutions and embracing AI’s role in fostering genuine intellectual growth.


The Nature of Original Thought

The rapid integration of AI in academic settings raises both opportunities and challenges for knowledge representation. Although AI can enhance and refine ideas, it does not independently initiate them, marking a critical distinction from human creativity (Floridi, 2020; Mittelstadt et al., 2023). This essay introduces the Codex of Linguistic Impetus, a framework that helps differentiate AI-assisted filler content from genuinely iterated human ideas. The goal is to underscore AI’s role in democratizing academic discourse, empowering students who may struggle with traditional forms of expression (Rose & Meyer, 2002; Goggin & Newell, 2021). However, these benefits come with ethical concerns: the necessity for rethinking originality standards and ensuring students engage meaningfully with AI as a supportive tool rather than a substitute for critical thinking.

The Codex of Linguistic Impetus provides a structure to assess whether AI’s input reflects genuine intellectual engagement or simply fills gaps passively. In educational environments, the proliferation of AI use demands thoughtful discussion on how originality and academic integrity can adapt, ensuring that AI serves as an equitable, enabling tool without compromising the authenticity of student-authored work (Rose, Meyer, & Hitchcock, 2005).

Enabling Technology as an Equitable Force

AI builds on a legacy of assistive technology aimed at equitable access, particularly in educational contexts. Historically, enabling technologies such as text-to-speech, speech-to-text, and digital organizers have democratized participation for individuals with cognitive or physical impairments, allowing them to more fully engage in academic work (DiSanto & Snyder, 2019; Rose & Meyer, 2002). Universal Design principles show that access to supportive tools doesn’t inherently dilute academic rigor; rather, it fosters inclusion by removing obstacles that might hinder students from participating equally in scholarly discourse (Rose, Meyer, & Hitchcock, 2005). By handling technical aspects such as language precision, AI allows students to focus on the substance of their ideas, rather than being hindered by linguistic structure.

Research demonstrates AI’s role in increasing accessibility and confidence, particularly for students who may struggle with traditional writing or organizing methods. For example, students using AI-assisted learning tools report greater ease in sharing complex ideas, revealing AI’s potential to create a more inclusive academic environment focused on intellectual substance rather than linguistic formality (Smith, Patel, & Larson, 2023). However, as Hughes and Smith (2023) argue, while AI promotes access, it also introduces risks of passive engagement, where students may use AI to complete assignments without developing a comprehensive understanding of the content. The ethical balance lies in utilizing AI’s democratizing potential without allowing it to undermine genuine intellectual effort.

Risks to Authentic Expression?

While AI offers valuable support in academic expression, it presents risks to authenticity, particularly for students who may misuse AI to bypass deeper engagement with their work. International students, for instance, might use AI translation tools to convert their ideas from their native language into English. Although advanced translation models capture depth and intent, effective translation is both an art and a science, requiring nuanced cultural and contextual understanding (Chee, 2022). Without such understanding, AI translation may give the appearance of English fluency but lacks the depth of insight a student might develop through direct language engagement (Jones, 2023).

Additionally, when students rely on AI for language conversion without thoroughly reviewing and refining the translation, it may fail to capture the intended meaning fully. This reliance risks creating technically accurate submissions that lack the student’s authentic intellectual input. This potential misuse highlights the need for educators and students alike to approach AI as a complement to, rather than a substitute for, genuine engagement with academic material. As Carr (2020) suggests, the convenience of AI tools may lead students to disengage from critical aspects of learning, contributing to a passive interaction with course content.

Furthermore, instructors have noted that without clear guidance, students may perceive AI as a shortcut for academic tasks rather than as a tool for enhancing their understanding of complex topics (Hughes & Smith, 2023). This misuse risks creating superficial submissions that lack genuine academic inquiry, underscoring the importance of instituting boundaries that promote ethical, thoughtful AI use in educational settings.

A Philosophical Perspective

The concept that human thought processes resemble computational steps implies that knowledge can be distilled into logical sequences. This aligns with algorithmic thinking, where complex ideas are broken down into granular details—smaller, essential components that require precise arrangement for accurate interpretation (Floridi, 2020). Floridi’s work on the human-AI relationship reveals that AI can bridge gaps between conception and expression, allowing human thought to be translated into structured formulae without losing intent (Mittelstadt et al., 2023).

AI’s capacity to convert natural language instructions into precise formulaic language illustrates its potential to support academic discourse by handling linguistic minutiae. For instance, when researchers describe an algorithm or complex methodology, AI can capture it in exact formulae, providing clarity and coherence in academic presentations. This precision is especially valuable in STEM fields, where rigorous articulation of ideas is paramount (Jones, 2023). AI’s role in managing technical details enables researchers and students to concentrate on core insights, confident in the accuracy of their conceptual structures.

AI’s formulaic precision does not replace human creativity but enhances access to academic discourse by empowering individuals to present ideas rigorously. This articulative capacity positions AI as an instrumental tool, translating human concepts into a language of accuracy and coherence, particularly useful in cases where linguistic or cognitive barriers might otherwise obscure intent.

Rethinking Originality in the Age of AI

Educational institutions face the challenge of rethinking originality in the age of AI. Rather than assessing solely whether content is independently student-produced, assessments should also evaluate the quality of engagement, such as depth of analysis, critical insight, and the student’s own intellectual contribution within AI-assisted work (Mayer & Jenkins, 2022). This shift requires professors to adapt assessment criteria, creating a new form of academic integrity that incorporates AI’s support while preserving genuine student insight.

For students, responsible AI use involves integrating these tools into their workflow to clarify understanding rather than complete tasks passively. This reimagined view of originality would encourage students to view AI as a resource for refining and supporting their thought processes rather than as a tool to bypass academic effort. Professors could incorporate AI-specific criteria into marking rubrics, focusing on evidence of the student’s critical engagement and reflective input, even in work structured with AI assistance (Jones, 2023). This approach fosters a culture of integrity while recognizing AI’s role in contemporary academia.

By framing AI as a collaborative tool rather than a substitute for thought, educational institutions can establish a model of integrity that aligns with the demands of modern academia, where technology is integral to both intellectual and creative endeavors.


Toward a Redefinition of Academic Integrity

As AI reshapes the academic landscape, institutions must proactively redefine academic integrity to reflect this new reality. Applying non-committal policies or superficial fixes fails to address AI’s profound impact on student engagement and authenticity. Educational systems must thoughtfully integrate AI into assessment frameworks, recognizing its role as a democratizing tool that manages the technical “minutiae” of academic expression while preserving the originality of human thought. This approach necessitates redefining originality standards to focus on intellectual engagement and personal contribution rather than solely on the independence of content creation.

The Codex of Linguistic Impetus framework presented here provides a conceptual basis for differentiating genuine intellectual engagement from passive AI reliance, fostering responsible and ethical AI use. By rethinking originality and incorporating AI thoughtfully into academic assessments, institutions can foster a generation of students who use AI responsibly, creatively, and ethically, supporting a more inclusive and forward-thinking academic environment. However, while the concept is introduced here, it would need rigorous testing in a full academic paper to establish its efficacy within the discipline, work the author hopes to undertake soon.

References

Barker, S. (2018) The Socratic Method and Socratic Algorithmic Thought, New York: Routledge.

Carr, N. (2020) The Shallows: What the Internet is Doing to Our Brains, 2nd ed., New York: W.W. Norton & Company.

Chee, F. (2022) Digital Cultures and Education in the 21st Century, 3rd ed., London: Palgrave Macmillan.

DiSanto, J. and Snyder, T. (2019) ‘Enabling technologies for disabilities: New frontiers’, Technology in Society, 56, pp. 11–16.

Floridi, L. (2020) The Logic of Information: A Theory of Philosophy as Conceptual Design, Oxford: Oxford University Press.

Goggin, G. and Newell, C. (2007) Digital Disability: The Social Construction of Disability in New Media, Lanham, MD: Rowman & Littlefield.

Goggin, G. and Newell, C. (2021) ‘Accessing higher education through AI: Revisiting equity and inclusion’, Journal of Digital Accessibility, 12(2), pp. 55–68.

Hughes, J. and Smith, M. (2023) ‘AI in higher education: Examining the impact on student engagement and authenticity’, Journal of Educational Technology, 45(3), pp. 234–247.

Jones, A. (2023) ‘AI translations and cultural fidelity in academic writing’, Journal of Language and Cultural Studies, 18(4), pp. 188–201.

Mayer, R. and Jenkins, P. (2022) ‘Guidelines for AI integration in assessment and evaluation’, Journal of Learning Sciences, 31(1), pp. 45–62.

Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., and Floridi, L. (2023) The ethics of algorithms: Mapping the debate in the age of AI, 2nd ed., London: Big Data & Society.

Rose, D.H. and Meyer, A. (2002) Teaching Every Student in the Digital Age: Universal Design for Learning, Alexandria, VA: Association for Supervision and Curriculum Development.

Rose, D.H., Meyer, A., and Hitchcock, C. (2005) The Universality of Access: A Framework for Digital Learning, Cambridge, MA: Harvard University Press.

Smith, R., Patel, S., and Larson, T. (2023) ‘Empowering students through AI-assisted learning tools: An accessibility perspective’, Accessibility in Education Journal, 15(3), pp. 92–104.

Plateaugration: A Sustainable Alternative to the Infinite Regress of Capabilities and Features

In today’s rapidly evolving technological landscape, the relentless push for more capabilities and additional features has created a cycle of unsustainable growth. This essay explores the concept of “x times (capabilities + features)”—an infinite regress that leads to unsustainable systems. As complexity increases, so do the demands on energy, resources, and human capital. However, there is an alternative: Plateaugration, a model where systems maintain their current present value, adjusting only to meet external demands such as inflation or environmental factors. This concept offers a way to balance growth without overloading systems. Throughout this essay, I will introduce Plateaugration, examine the consequences of the current model of unsustainable growth, and present a future vision where technology regression—far from being a risk—becomes a positive, culturally driven shift. The essay will conclude by considering how cryptocurrency and blockchain technology could eliminate inflation and support sustainable system design.

The Problem: Infinite Regress in Capabilities and Features

Technological systems are caught in an endless cycle of expanding capabilities and proliferating features. This growth can be mathematically expressed as:

    S(t) ∝ 1 / (x · (C(t) + F(t)))

Where S(t) represents a system’s sustainability at time t, x reflects external growth pressures, and C(t) and F(t) are the system’s capabilities and features; as the product x · (C(t) + F(t)) grows, sustainability declines. As capabilities expand, new features must be added to exploit them. Conversely, the addition of new features demands further capability enhancements. Over time, this leads to an exponential increase in system complexity, resulting in “feature bloat” or “capability fatigue.” This infinite feedback loop destabilises systems, making them harder to maintain, scale, and secure. Leveson (2012) highlights that as systems become more complex, their unanticipated interactions create new risks, leading to unsustainability.
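
A toy Python model makes the compounding visible: let each unit of capability invite new features and each feature demand new capability. The starting values and coupling rate are arbitrary.

    C, F = 10.0, 10.0                   # initial capabilities and features (arbitrary units)
    for year in range(1, 6):
        C, F = C + 0.3 * F, F + 0.3 * C # each side grows in proportion to the other
        print(f"year {year}: complexity proxy C + F = {C + F:.0f}")

Even with a modest coupling of 0.3, combined complexity grows by roughly 30% per cycle, which is exponential in character: exactly the regress this essay describes.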

Recent research highlights the scale of this problem in our modern, data-driven world. The International Energy Agency (IEA, 2021) notes a sharp rise in global data centre electricity consumption, driven by the demand for cloud computing, AI, and data-intensive applications. This surge exemplifies the unsustainable loop of increasing capabilities and features, as each new technological advancement requires more resources to sustain. Brooks’ (1995) “second-system effect” famously warned against overcomplicating systems, and today’s cloud-based infrastructures demonstrate the exponential inefficiencies he foresaw.

The Qualitative Consequences of Unsustainable Growth

As systems continue to expand in both capabilities and features, the consequences extend beyond mere technical inefficiencies. The environmental impact is significant, particularly with the Internet, which is fast becoming unsustainable. Global data centres require vast amounts of energy and fresh water to cool servers, placing strain on natural resources. According to Nature Sustainability (2020), data centres account for a growing share of water usage in regions that are already suffering from water scarcity. These centres also consume massive amounts of energy, leading to questions about whether such resources would be better allocated to more pressing human welfare needs, such as healthcare or education (Greenpeace, 2021).

Culturally, the relentless drive for technological upgrades could lead to a threshold shift—a cultural moment where society collectively rejects the demand for constant expansion in favour of sustainability. This shift might arise out of necessity, due to resource shortages, or from a growing environmental consciousness. Technology regression, rather than being a risk, could offer a positive opportunity for societies to rethink their relationship with digital systems and technological complexity. Graziotin et al. (2014) argue that as systems grow overly complex, they fatigue both users and developers. A potential cultural pivot away from “more” might involve abandoning smartphones or other high-tech devices in favour of simpler tools like CB radios, which can be networked across large areas with minimal resource use.

Plateaugration: A New Model for Sustainable Growth

In contrast to the unsustainable model of constant technological expansion, Plateaugration advocates for growth only as much as is needed to maintain a system’s present value. This means optimising and saturating existing capabilities rather than endlessly adding new features. The focus is not on doing less with less, a common de-growth argument, but rather on doing the same with less. By leveraging efficiency, systems can be optimised to maintain their current functions while reducing resource consumption.

This model finds practical support in the rise of intermediate technology, where high-tech tools are combined with more accessible, low-tech solutions to create a sustainable, decentralised system. Science Advances (2021) highlights 3D printing as an example of this principle in action. Decentralised manufacturing through 3D printing reduces the need for global supply chains, allowing local production with minimal resource use. A future guided by Plateaugration could see a merging of advanced digital systems and analogue tools, providing opportunities to maintain high standards of living with fewer resources.

The Internet and Fresh Water Consumption: A Case Study in Unsustainability

The internet, in its current form, is a prime example of unsustainable system design. According to a 2023 report by the IEA, data centres are among the most significant consumers of both electricity and water. As data demands grow, the energy needed to maintain cloud services and streaming platforms escalates, placing further strain on global energy resources. Nature Communications (2022) warns that without a strategic shift, the internet’s energy and water consumption will become unmanageable.

A system designed under the principles of Plateaugration would prioritise optimising current resources. Instead of continuing to build more data centres or adding features that require more bandwidth, the focus would shift to increasing the efficiency of existing infrastructure. This approach could involve developing cooling technologies that reduce water consumption or exploring ways to limit energy use without sacrificing essential internet functions. By realigning the internet with sustainable principles, we could redirect resources toward pressing global needs.

Technology Regression: A Positive Shift Through Threshold Theory

Rather than seeing technological regression as a risk, it can be understood as a positive shift aligned with threshold theory—where a cultural move away from demand occurs either by necessity or conscience. This is not about simply doing less but doing the same with less. By optimising existing systems, we can move away from excessive feature growth and maintain functional systems without unsustainable resource consumption.

A critical component of this shift could be the adoption of cryptocurrency and blockchain technology. Due to its decentralised and immutable nature, blockchain has the potential to eliminate inflation by design. Fry and Cheah (2016) highlight how cryptocurrency systems prevent arbitrary monetary expansion, a key driver of inflation in traditional fiat systems. Moreover, the transparent and decentralised structure of cryptocurrencies such as Bitcoin caps the supply of currency, ensuring that inflation is essentially neutralised over time.
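
The cap itself can be checked with a few lines of Python: Bitcoin’s block subsidy began at 50 BTC and halves every 210,000 blocks, so total issuance converges to roughly 21 million coins (the sketch ignores the satoshi rounding used by the real protocol).

    subsidy, total = 50.0, 0.0
    while subsidy >= 1e-8:              # one satoshi is the smallest unit
        total += subsidy * 210_000      # each halving era lasts 210,000 blocks
        subsidy /= 2
    print(f"asymptotic supply: {total:,.0f} BTC")   # ~21,000,000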

More recent studies by Narayanan et al. (2019) emphasise the potential of blockchain to transform economic systems by eliminating the need for resource-heavy, centralised banking infrastructures. By shifting economies to blockchain-based currencies, we could reduce the energy and resources needed to support traditional financial systems, which include everything from physical bank branches to complex international transaction networks. This shift to decentralised financial systems would exemplify Plateaugration—doing the same with less—by reducing the resource intensity of managing economic systems.

Sustainable Project Management: The Plateaugration Approach

For project management, Plateaugration offers a sustainable framework by focusing on maintaining present value rather than constant expansion. This approach encourages project managers to implement modular systems that can adapt over time without becoming bloated by unnecessary features. Leveson (2012) proposes that by limiting the scope of system growth, it’s possible to create systems that are both efficient and resilient in the face of changing demands.

A Plateaugration-inspired project management strategy would prioritise efficiency, scalability, and long-term sustainability. Projects could be designed to meet immediate needs while remaining adaptable to future changes without requiring substantial new investments in resources or infrastructure.

Conclusion

The cycle of “x times (capabilities + features)” leads to unsustainable systems where complexity grows unboundedly. The alternative—Plateaugration—offers a way forward, focusing on saturation and optimising systems to maintain their present value. By shifting toward a model that prioritises efficiency over endless growth, society can build sustainable systems capable of meeting present and future needs with fewer resources.

By incorporating cryptocurrency and blockchain technologies into this vision, we can eliminate inflation by design and create economic systems that are far more efficient than the traditional models. Through these shifts, technology regression becomes not a risk, but a positive cultural realignment with sustainability, ensuring systems are functional, adaptable, and enduring.


References

  1. Leveson, N. (2012). Engineering a Safer World: Systems Thinking Applied to Safety. MIT Press.
  2. International Energy Agency (IEA). (2021). Global Energy Data Report: Data Centre Energy Use.
  3. Greenpeace. (2021). Clicking Clean: Who is Winning the Race to Build a Green Internet?
  4. Nature Sustainability. (2020). Water and Energy Consumption in Data Centres.
  5. Fry, J., & Cheah, E-T. (2016). “Negative Bubbles and Shocks in Cryptocurrency Markets,” Journal of Risk Finance.
  6. Narayanan, A., Bonneau, J., Felten, E., Miller, A., & Goldfeder, S. (2019). Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction. Princeton University Press.

Can Blockchain Drive Significant Progress Toward Global Climate Goals?

Abstract

Blockchain technology has advanced beyond cryptocurrencies and is now integral to various industries, particularly manufacturing. This paper explores the role blockchain can play in accelerating the circular economy (CE) and achieving net-zero carbon emissions targets. By enhancing transparency, efficiency, and resource optimization, blockchain has the potential to significantly reduce carbon emissions as part of a broader system of sustainable change. A calculus-based model is presented to quantify blockchain’s impact on manufacturing efficiency and global emissions, accounting for the Jevons Paradox. A thought experiment is conducted to estimate how manufacturing efficiency gains of 25% could affect global carbon emissions. The study highlights blockchain’s role as part of a larger system, driving towards the CE and integrating AI and other emerging technologies for maximum environmental impact.


Introduction

The escalating climate crisis places humanity at a critical juncture in its industrial and economic practices. The scientific consensus is unequivocal—immediate and transformative actions are required if we are to avert catastrophic consequences. Global industry, particularly manufacturing, remains among the foremost contributors to greenhouse gas emissions, with manufacturing estimated to account for roughly 30% of global carbon emissions (International Energy Agency [IEA], 2020). Within this framework, the responsibility of the manufacturing sector extends beyond mere adaptation; it must lead the way towards a radical reconceptualization of the production process, one that simultaneously optimizes efficiency and minimizes environmental degradation.

Blockchain technology, since its inception through the conceptual and practical innovations introduced by Nakamoto in 2008, has continuously evolved, shifting from a purely transactional framework, such as cryptocurrencies, to a more expansive role encompassing data integrity, transparency, and accountability. Yet, its full potential, especially when applied to sectors like manufacturing, remains underexplored. The intersection of blockchain with key ecological imperatives provides us with the potential to solve inefficiencies across global supply chains, from resource extraction to the end-of-life phase of manufactured goods, ultimately supporting a broader agenda towards the Circular Economy (CE).

This paper contends that blockchain, when integrated into the manufacturing sector at scale, offers unprecedented opportunities to drive reductions in carbon emissions through increased supply chain transparency, optimized resource usage, and decreased operational inefficiencies. By presenting a calculus-based model, we seek to quantitatively assess the real-world impact of blockchain adoption, examining its capacity to mitigate emissions. Crucially, the paper also engages with potential paradoxes, such as the Jevons Paradox, that may undermine blockchain’s efficacy if not properly managed.


Literature Review

The academic discourse surrounding blockchain’s potential to drive sustainable change has intensified in recent years, though several critical gaps persist. While the technology’s application has seen robust theoretical exploration, particularly within the domains of financial technologies and secure data exchange, its environmental potential remains understudied, particularly within industrial applications such as manufacturing. Blockchain’s capacity to reduce inefficiencies, improve transparency, and promote sustainability has been widely acknowledged (Saberi et al., 2019), yet many studies provide only broad outlines without delving into the specific mechanisms through which blockchain might be operationalized to achieve tangible carbon reductions.

One of the earliest insights into blockchain’s relevance for sustainability comes from the study by Kouhizadeh et al. (2020), which emphasizes blockchain’s transparency mechanisms in promoting waste reduction and resource optimization. Their research forms the bedrock for understanding how distributed ledgers might be harnessed in the context of the Circular Economy (CE). However, they stop short of developing a comprehensive framework for blockchain’s impact on emissions, leaving significant room for further exploration.

The relationship between blockchain and supply chain efficiency has been extensively studied in the work of Francisco and Swanson (2018), who offer a critical evaluation of blockchain’s role in supply chain transparency. By allowing stakeholders to trace the movement and provenance of raw materials and finished goods in real-time, blockchain addresses critical inefficiencies. However, their work remains largely theoretical and does not engage with concrete emissions metrics, a gap this paper seeks to address through its quantitative approach.

In another vein, Andoni et al. (2019) explore the integration of blockchain into renewable energy systems. Their research focuses on how blockchain facilitates peer-to-peer energy trading and thereby supports the adoption of renewable sources, providing vital insights into the energy-related implications of blockchain at the industrial level. It does not, however, address the specificities of blockchain's role in manufacturing.

Further contributing to the discourse, Nwankpa et al. (2021) estimate that global supply chain inefficiencies exceed 25%, a figure that directly informs the thesis presented in this paper. These inefficiencies, they argue, stem from the opacity of transactions, outdated operational processes, and the mismatch between production and consumption. Blockchain's promise, they contend, lies in its ability to drive system-wide improvements in these domains.

Yet, despite these explorations, much remains to be understood about the interaction between blockchain efficiencies and the Jevons Paradox. Chapman and Zhang (2023) argue that any efficiency improvements in industrial operations can paradoxically lead to greater overall consumption, thus negating the potential gains in carbon reduction. Their critical perspective suggests the need for policies that can mitigate these effects, ensuring that the environmental benefits of blockchain adoption are realized. By contributing to this underdeveloped area, this paper seeks to bridge the gap between blockchain’s potential and its empirical outcomes.


Methodology

To evaluate the potential for blockchain to reduce emissions within the manufacturing sector, we combine a thought experiment with quantitative modeling. Specifically, we use a calculus-based approach to model the impact of blockchain on manufacturing efficiency and its consequent effect on carbon emissions. The model integrates blockchain adoption rates, resource optimization potentials, and the possibility of economic rebound effects (i.e., the Jevons Paradox).

This paper's approach incorporates three components: blockchain adoption rates, resource optimization potentials, and economic rebound effects (the Jevons Paradox).

The net efficiency equation that we derive models the combined effect of blockchain adoption and resource optimization; one plausible formulation is sketched below.
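As a minimal sketch (the functional form here is an assumption for illustration, not a derived result), let A(t) be a logistic adoption curve, η the achievable efficiency gain, γ the sensitivity factor calibrated in the next section, and ρ the Jevons rebound fraction. Annual sector emissions C(t) relative to a baseline C_0 can then be written as:

$$C(t) = C_0\left[1 - \gamma\,\eta\,A(t)\,(1 - \rho)\right], \qquad A(t) = \frac{1}{1 + e^{-k(t - t_0)}}$$

where k and t_0 set the steepness and midpoint of adoption. When ρ = 0, the full efficiency gain is converted into carbon reduction; as ρ approaches 1, rebound consumption erodes it entirely.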


Mathematical Model and Simulation

The thought experiment simulates blockchain’s impact under a scenario where the manufacturing sector experiences a 25% increase in efficiency as a direct result of blockchain integration. This figure is informed by studies suggesting that supply chain inefficiencies often exceed 25% (Nwankpa et al., 2021). Over a period of 10 years, we simulate the cumulative reduction in carbon emissions, considering the effect of blockchain-driven transparency and automation on the optimization of manufacturing processes.

For the purposes of this experiment, the sensitivity factor γ is calibrated to the manufacturing sector's carbon intensity; the sector accounts for approximately 30% of global emissions (IEA, 2020). The model assumes that as blockchain adoption progresses, both energy consumption and waste generation decrease, leading to a proportional reduction in carbon output.
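A minimal Python sketch of this simulation, using the logistic form above; the baseline emissions figure, adoption steepness k, and midpoint t0 are illustrative assumptions rather than values taken from the cited sources:

```python
import numpy as np

# Illustrative sketch of the thought experiment; parameter values are
# assumptions, not figures taken from the cited sources.
C0 = 9.0          # baseline sector emissions, GtCO2/yr (~30% of global output)
eta = 0.25        # blockchain-driven efficiency gain (Nwankpa et al., 2021)
gamma = 0.30      # sensitivity factor, calibrated to the sector's carbon share
rho = 0.0         # Jevons rebound fraction; raise above 0 to model rebound
k, t0 = 1.0, 5.0  # logistic adoption: steepness and midpoint (years)

years = np.arange(1, 11)                        # 10-year horizon
A = 1.0 / (1.0 + np.exp(-k * (years - t0)))     # adoption curve A(t)
C = C0 * (1.0 - gamma * eta * A * (1.0 - rho))  # annual emissions C(t)
saved = (C0 - C).sum()                          # cumulative reduction

print(f"Cumulative reduction over 10 years: {saved:.2f} GtCO2")
```

Setting rho above zero reproduces the rebound scenario discussed in the next section.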


Discussion

The results of the simulation suggest that blockchain integration, by fostering transparency and resource optimization, can contribute significantly to reducing global carbon emissions. This paper's thought experiment indicates that a 25% increase in manufacturing efficiency, when achieved through blockchain, can reduce emissions in a manner consistent with international climate targets, such as the 45% reduction by 2030 associated with the Paris Agreement's 1.5 °C pathway (UNFCCC, 2015).

Blockchain's ability to provide real-time, immutable data on resource use enables manufacturers to adopt a more granular approach to emissions management. Blockchain alone, however, cannot achieve net-zero emissions. Its deployment must be coupled with broader circular economy strategies and with AI-driven predictive systems that further enhance energy efficiency.

The Jevons Paradox must also be managed to avoid rebound effects. Blockchain's ability to drive down costs through efficiency gains could, if unchecked, lead to increased consumption. Policies should therefore encourage the reinvestment of efficiency gains into further decarbonization initiatives, ensuring that overall consumption does not rise.
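As an illustrative calculation under the assumed parameters of the sketch above: with a gross efficiency gain of η = 25% and a rebound fraction of ρ = 0.4, the net reduction falls to η(1 − ρ) = 25% × 0.6 = 15%, and at ρ = 1 the entire gain is consumed by rebound. The policy question is therefore what fraction of the gain can be locked into decarbonization rather than released as cheaper output.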


Conclusion

Blockchain presents a promising path for reducing carbon emissions within the manufacturing sector. By leveraging its transparency, automation, and data integrity, blockchain can plausibly support a 25% increase in manufacturing efficiency, as modeled in our thought experiment. This efficiency gain has the potential to reduce emissions significantly, in line with the global targets established under the Paris Agreement.

However, blockchain's environmental benefits will only be fully realized when it is integrated into a broader framework that includes policy interventions, circular economy models, and complementary technologies such as AI and IoT. Blockchain can contribute to significant carbon reductions, but it cannot act alone: strategic coordination, regulatory support, and comprehensive industry buy-in will be critical to ensure that its efficiency improvements lead to sustainable reductions in emissions. Future research should investigate the cumulative impact of blockchain when combined with other green technologies and explore its long-term influence on global emissions, especially as industries adopt it at scale.

References

Alcott, B. (2005). Jevons’ Paradox. Ecological Economics, 54(1), 9-21.

Andoni, M., Robu, V., Flynn, D., Abram, S., Geach, D., Jenkins, D., & Peacock, A. (2019). Blockchain technology in the energy sector: A systematic review of challenges and opportunities. Renewable and Sustainable Energy Reviews, 100, 143-174.

Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100(5), 992-1026.

Buterin, V. (2014). A next-generation smart contract and decentralized application platform. Ethereum White Paper. https://ethereum.org/en/whitepaper/

Cao, D., Puntaier, E., Gillani, F., Chapman, D., & Dewitt, S. (2024). Towards integrative multi-stakeholder responsibility for net zero in e-waste: A systematic literature review. Business Strategy and the Environment.

Chapman, D. L., & Zhang, H. (2023). Overcoming Jevons’ Paradox in the Circular Economy: Is blockchain a threat or solution to climate change? In Proceedings of the 6th European Conference on Industrial Engineering and Operations Management. IEOM Society International.

Francisco, K., & Swanson, D. (2018). The supply chain has no clothes: Technology adoption of blockchain for supply chain transparency. Logistics, 2(1), 2.

International Energy Agency (IEA). (2020). Energy Technology Perspectives 2020. IEA. Retrieved from https://www.iea.org/reports/energy-technology-perspectives-2020

Kouhizadeh, M., Sarkis, J., & Zhu, Q. (2020). Blockchain technology and the circular economy: Examining adoption barriers. International Journal of Production Economics, 231, 107831.

Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system. Retrieved from https://bitcoin.org/bitcoin.pdf

Pan, X., Zhao, Y., Lu, W., & Pan, X. (2019). Integrating blockchain with the Internet of Things and cloud computing for secure healthcare. Computer Communications, 150, 56-64.

Patil, B., Tiwari, A., & Yadav, V. (2021). Impact of blockchain technology on the circular economy. In Blockchain Technology and Applications for Digital Marketing (pp. 111-126). IGI Global.

Saberi, S., Kouhizadeh, M., Sarkis, J., & Shen, L. (2019). Blockchain technology and its relationships to sustainable supply chain management. International Journal of Production Research, 57(7), 2117-2135.

United Nations Framework Convention on Climate Change (UNFCCC). (2015). Paris Agreement. Retrieved from https://unfccc.int/process-and-meetings/the-paris-agreement

Upadhyay, A., Laing, T., Kumar, V., & Dora, M. (2021). Exploring barriers and drivers to implementing circular economy practices in the mining industry. Resources Policy, 72, 102053.

Zheng, Z., Xie, S., Dai, H., Chen, X., & Wang, H. (2017). An overview of blockchain technology: Architecture, consensus, and future trends. In 2017 IEEE International Congress on Big Data (BigData Congress) (pp. 557-564). IEEE.