Abstract
Performance positioning in computing is commonly defined by extreme benchmarks and elite competitive use cases. This framing obscures where performance delivers meaningful value for the majority of users, while encouraging over-engineering, excess resourcing, and premature replacement. In both computing and organisational contexts, such approaches prioritise acquisition over construction and optimisation over governance. This article presents a longitudinal case study of a repurposed personal computer built around an RTX 3060 Ti–class GPU, using the practice of PC building as an explicit analogue for project and programme management.
Drawing on empirical benchmarking, extended gameplay testing, and a virtual comparison with the PlayStation 5, the study demonstrates that the system delivers performance equivalent to—or exceeding—console-class experiences, including high-fidelity 1080p gameplay and viable 4K performance under optimised settings. Crucially, these outcomes are achieved not through maximal specification, but through compatibility, patching, and constraint-aware configuration. Decisions such as operating the GPU in a secondary PCIe slot with negligible performance loss, limiting active memory to 32 GB despite higher installed capacity, and avoiding energy-intensive cooling architectures illustrate how value is preserved through governance rather than escalation.
The analysis is interpreted through project and programme management theory, arguing that building a project—like building a PC—is fundamentally harder than buying an integrated solution off the shelf. While pre-packaged systems reduce short-term complexity, they often embed inflexibility, cost escalation, and sustainability risk. By contrast, systems assembled from compatible components allow for adaptation, repair, and selective resourcing when conditions change. The study further identifies common challenges in PC building—such as component incompatibility, thermal instability, unexplained bottlenecks, and partial failures—as directly analogous to recurrent issues in project delivery that are often mischaracterised as technical faults rather than governance failures.
The article concludes that lean, purpose-bound systems achieve supremacy by aligning capability to required outcomes, minimising surplus capacity, and enabling patching rather than replacement. In an environment of increasing hardware scarcity, rising energy costs, and sustainability constraint, this logic applies equally to computing systems and organisational programmes. Systems that endure are not those built to extremes, but those designed to be understood, repaired, and governed over time.
1. Introduction
Performance in computing has long been framed through abundance. In gaming, as in many technical domains, value is routinely inferred from proximity to the highest benchmark scores, the most powerful hardware tiers, or elite competitive use cases. This framing assumes a continual supply of increasingly capable components and has normalised rapid cycles of replacement. Systems are judged obsolete not because they fail to deliver meaningful outcomes, but because they no longer occupy the leading edge of theoretical capability. The sustainability implications of this model—rising electronic waste, escalating energy demand, and dependence on fragile global supply chains—are now increasingly visible (e.g. Baldé et al., 2017; Koomey et al., 2014).
What is becoming clear, however, is that this assumed abundance cannot be relied upon indefinitely. Recurrent shortages of graphics processors, memory, and semiconductors have demonstrated how quickly availability can collapse and prices can rise. These disruptions are not isolated events but indicators of a broader structural shift, in which advanced hardware becomes progressively more expensive, less replaceable, and increasingly treated as a scarce asset rather than a disposable commodity. In such a context, devices once considered trivial—such as early-generation consumer electronics—may acquire renewed residual value as functional artefacts rather than obsolete relics. Scarcity, in other words, reframes obsolescence as a governance problem rather than a purely technical one.
Within this emerging landscape, the traditional logic of performance escalation becomes increasingly untenable. For most users, performance is not defined by global tournament standards or marginal frame-rate advantages at extreme resolutions. It is defined by stability, visual fidelity, responsiveness, and depth of experience. Casual and serious non-professional gamers—who together constitute the majority of the gaming population—operate within bounded constraints: fixed displays, limited time, and experience-driven rather than competitive objectives. Escalating hardware capacity beyond what these constraints can exploit produces diminishing experiential returns while materially increasing economic and environmental cost. Supremacy, therefore, does not arise from excess capacity, but from alignment between capability, demand, and long-term resource availability.
An often-overlooked dimension of this shift is purpose-bound design. Gaming-class hardware—such as GPUs in the RTX 3060 Ti category—is explicitly designed for interactive, consumer-facing workloads: gaming, creative production, and general computation. It is not optimised for large-scale extractive uses such as cryptomining or industrial AI, even though such uses may be technically possible. This distinction is not merely technical but ethical. When hardware designed for play and creativity is diverted into extractive or speculative activity, scarcity accelerates, prices rise, and environmental cost increases without corresponding social benefit. In conditions of constrained production, governance over how hardware is used becomes as important as efficiency in how it is built.
This article examines a personal computer built around an RTX 3060 Ti–class GPU as a longitudinal case study in lean, sustainable supremacy under conditions of emerging scarcity. Originally constructed in 2019 for energy-efficient testing using passive-first cooling principles, the system was operated almost continuously for several years, partially degraded, and later repurposed for modern gaming workloads. Rather than being replaced as components aged or constraints emerged, the system was incrementally maintained, reconfigured, and governed. Decisions such as operating the GPU in a secondary PCIe slot with negligible performance loss, limiting active memory to 32 GB despite higher installed capacity, and avoiding energy-intensive cooling architectures illustrate how value can be preserved through patching and compatibility rather than escalation. These practices mirror well-established principles in Lean and Agile project management, where delivery is sustained through constraint-aware adaptation rather than wholesale redesign (Womack and Jones, 1996; PMI, 2021).
Performance is evaluated not against elite competitive gaming standards, but against dominant consumer baselines, particularly through a virtual comparison with the PlayStation 5—a well-benchmarked reference platform representing contemporary expectations for gaming quality and efficiency. The analysis demonstrates that, under realistic constraints, the RTX 3060 Ti system delivers performance equivalent to or exceeding console-class experiences, including high-fidelity 1080p gameplay and viable 4K play under optimised settings. For the majority of users, higher-tier GPUs would not materially improve outcomes, yet would impose significantly higher financial, energy, and environmental costs.
The article argues that this pattern reflects a broader principle extending beyond gaming. In programme and project management, sustainability in a resource-constrained future depends less on maximising capability and more on aligning scope, purpose, and governance to preserve value over time. Building a project—like building a PC—is fundamentally harder than purchasing an integrated solution off the shelf. While pre-packaged systems reduce short-term complexity, they often embed inflexibility, cost escalation, and long-term sustainability risk. By contrast, systems assembled from compatible components enable adaptation, repair, and selective resourcing when conditions change. In a future where hardware scarcity becomes the norm rather than the exception, supremacy will belong not to the most powerful systems, but to those that are understood, governable, and able to endure.
Table 1. Framing the Argument of the Article
| Dominant Assumption | Observed Reality | Reframing Proposed in This Article |
|---|---|---|
| Performance equals newest hardware | Performance equals meeting real experiential needs | Supremacy arises from alignment, not novelty |
| Hardware abundance is assumed | Hardware scarcity is recurring and structural | Systems must be designed to endure |
| Replacement is the default response | Patching and reconfiguration preserve value | Governance > escalation |
| Peak benchmarks define success | Stability, fidelity, and responsiveness define value | Measure outcomes, not extremes |
| Over-specification ensures longevity | Over-specification creates waste | MVP capability sustains systems |
| Buying integrated systems reduces risk | Integrated systems embed lock-in | Building enables adaptation |
| Energy cost is secondary | Energy is a limiting resource | Resource-to-value ratio matters |
| Obsolescence is technical | Obsolescence is managerial | Endurance is a design choice |
2. Prior Work, Related Literature, and the Gap This Study Addresses
The arguments developed in this article build on earlier conceptual work by the author, which proposed that passively cooled, energy-efficient computing systems could challenge replacement-driven models of technological progress (Chapman, 2019). That work framed sustainability primarily through design intent: reducing energy consumption, minimising thermal stress, and extending system lifespan by avoiding architectures optimised solely for peak throughput. While this contribution established a normative rationale for endurance-oriented design, it remained necessarily speculative. It did not examine how such systems behave once exposed to prolonged operation, partial degradation, and shifting functional demands. The present article takes that conceptual position as a starting assumption and asks what follows when sustainability claims are tested through sustained practice.
Academic engagement with PC building as a practice is fragmented and largely indirect. In engineering and computing education, hardware assembly frequently appears within project-based learning (PBL) and CDIO-inspired curricula, where students construct computing systems to integrate requirements analysis, component selection, implementation, testing, and operation (Crawley et al., 2014; Prince and Felder, 2006). This literature establishes that PC assembly can function as a bounded lifecycle activity, but it overwhelmingly focuses on short-duration learning outcomes. The assembled system is treated as complete once it functions, with little attention to long-term behaviour, maintenance, degradation, or repurposing.
A related strand examines simulation- and VR-supported PC assembly, framing hardware construction as a procedural task involving sequencing, constraint satisfaction, and error avoidance (Makransky et al., 2019; Radianti et al., 2020). While this work acknowledges complexity and failure modes during assembly, it still positions the completed system as an endpoint rather than as an evolving artefact. Issues central to sustainability—such as incremental repair, partial failure, and performance renegotiation over time—remain outside the analytical frame.
Beyond education, maker studies and DIY computing literature explore modularity, repairability, and user agency, often in explicit opposition to planned obsolescence (Buechley and Perner-Wilson, 2012; Kuznetsov and Paulos, 2010). This literature begins to engage directly with values relevant to sustainability, particularly reuse and adaptation. However, it rarely evaluates whether user-maintained systems can continue to deliver contemporary, demanding workloads, nor does it benchmark such systems against dominant consumer platforms. Sustainability is framed primarily as cultural resistance or ethical stance, rather than as a performance-constrained engineering outcome.
In parallel, sustainability scholarship has extensively examined electronic waste (WEEE), circular economy models, and lifecycle responsibility, but overwhelmingly from an end-of-life perspective (Baldé et al., 2017; Cao et al., 2024). While this work provides essential insight into disposal, recycling, and stakeholder responsibility, it offers limited guidance on how long-lived systems perform during extended use, or how in-use governance decisions affect environmental outcomes. The operational phase of computing systems—the longest and often most impactful stage of the lifecycle—remains comparatively under-examined.
Across these strands, a consistent gap emerges. PC building is treated as pedagogy, skill acquisition, or cultural practice, but rarely as longitudinal system governance. The literature lacks sustained examination of what happens when computing systems are built, used continuously, encounter non-catastrophic failures, and must be patched or reconfigured rather than replaced. In effect, PC building has not been theorised as a form of project or programme delivery over time.
This omission is notable when viewed through established project and programme management theory. Projects assembled from modular components routinely encounter emergent constraints, late-stage incompatibilities, and partial failures that cannot be resolved through escalation alone (PMI, 2021; Flyvbjerg, 2014). In both computing and organisational contexts, the temptation is to replace rather than adapt: to procure a new system, platform, or solution off the shelf. While this may reduce short-term complexity, it often increases cost, lock-in, and long-term sustainability risk.
The present article addresses this gap through a longitudinal, practice-based evaluation of a single computing artefact across design, operation, degradation, and repurposing. Rather than treating the PC as a one-off build or idealised exemplar, it is analysed as a governed system whose performance, configuration, and value are renegotiated over several years of real use. This approach allows sustainability, performance, and programme management principles to be examined not as aspirational ideals, but as emergent properties of systems required to endure under constraint.
3. System Configuration as Governed, Lean Practice
This study examines a single computing artefact as a bounded socio-technical system whose sustainability and performance characteristics emerged through prolonged use, maintenance, and adaptation rather than through fixed optimisation at the point of assembly. The system was originally constructed in 2019 using consumer components purchased under ordinary market conditions, without anticipation of later supply-chain disruption, prolonged near-continuous operation, or repurposing for contemporary gaming and computational workloads. Its subsequent evolution therefore provides an opportunity to examine how value is preserved through governance, scope discipline, and lean configuration rather than continual escalation or replacement.
A defining feature of the system is its passive-first CPU cooling strategy, centred on a large surface-area heatsink designed to operate without reliance on high-RPM active cooling. Low-speed case fans are present, but their role is limited to maintaining ambient airflow rather than compensating for concentrated thermal output. By contrast, typical high-performance air-cooled systems rely on one or more CPU fans operating at 1,800–2,500 RPM, each drawing approximately 2–5 W under load, while all-in-one liquid cooling systems introduce additional electrical overhead through continuous pump operation, typically consuming 6–10 W regardless of thermal demand (ASHRAE, 2011; Intel, 2018).
In the present system, CPU-adjacent cooling power draw approaches zero, with only low-speed case fans contributing an estimated 1–2 W in total. While these figures are indicative rather than instrumented measurements, they are consistent with published comparisons of cooling architectures and operational energy profiles (ASHRAE, 2011; Koomey et al., 2014). Over sustained use, the implications are material. Assuming a conservative operating profile of 12 hours per day, a liquid-cooled system may consume an additional 30–50 kWh per year solely for pump and high-RPM fan operation. Across a six-year lifecycle, this equates to approximately 180–300 kWh of avoided energy consumption, exclusive of the embodied environmental cost associated with pumps, radiators, and replacement components.
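The avoided-energy estimate above can be reproduced with a short calculation. This is an indicative sketch: the wattage figures and the 12-hour operating profile are the assumed (non-instrumented) values quoted in the text, and the stated 30–50 kWh annual range falls within the envelope the arithmetic produces.

```python
# Indicative cooling-overhead estimate; all wattages are the assumed
# (non-instrumented) figures quoted in the text, not measurements.
HOURS_PER_DAY = 12
DAYS_PER_YEAR = 365

def annual_kwh(watts: float, hours_per_day: float = HOURS_PER_DAY) -> float:
    """Energy in kWh for a constant draw sustained over one year."""
    return watts * hours_per_day * DAYS_PER_YEAR / 1000

# Passive-first build: only low-speed case fans (~1-2 W total).
passive_low, passive_high = annual_kwh(1), annual_kwh(2)

# Liquid-cooled alternative: pump (6-10 W) plus high-RPM fans (2-5 W each).
liquid_low = annual_kwh(6 + 2)       # best case: pump + one quiet fan
liquid_high = annual_kwh(10 + 2 * 5) # worst case: pump + two loaded fans

saving_low = liquid_low - passive_high   # conservative annual saving
saving_high = liquid_high - passive_low  # upper-bound annual saving

print(f"Annual saving: {saving_low:.0f}-{saving_high:.0f} kWh")
print(f"Six-year saving: {6 * saving_low:.0f}-{6 * saving_high:.0f} kWh")
```

The calculation shows roughly 26–83 kWh of avoided consumption per year depending on which end of each wattage range applies, bracketing the conservative 30–50 kWh figure used in the text.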
Beyond direct energy savings, passive-first cooling reduces mechanical wear and thermal cycling stress—both recognised contributors to component failure over time. High-RPM fans and liquid cooling pumps introduce additional points of mechanical failure and typically require replacement within three to five years of continuous operation. Large passive heatsinks, by contrast, contain no moving parts and exhibit effectively indefinite service life. The system’s sustained stability under near-continuous operation, supported by routine low-cost maintenance such as periodic thermal paste reapplication, reflects an agile operations logic in which small, preventative interventions preserve system health without disruptive overhaul (Rigby, Sutherland and Takeuchi, 2016).
Graphics processing is provided by an RTX 3060 Ti–class GPU, acquired during a period of severe market distortion in GPU availability and pricing. While this selection was incidental at the time of purchase, its implications are analytically significant. The GPU delivers high gaming and general-purpose compute performance at a typical board power of approximately 200 W—substantially lower than higher-tier alternatives that commonly draw 300–450 W for marginal performance gains under comparable gaming workloads. Empirical testing further demonstrated that, following failure of the primary PCIe x16 slot, operation via an auxiliary expansion slot preserved approximately 99% of expected performance at 4K resolution. This confirms that neither peak power draw nor theoretical bandwidth constituted binding constraints within the system’s defined scope.
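The negligible loss from the auxiliary slot can be made intuitive with a back-of-envelope bandwidth check. This is a sketch under stated assumptions: the link rates are nominal PCIe 3.0 figures, and the per-frame bus traffic (assets resident in VRAM, only small command and upload streams crossing the bus) is an illustrative value, not a measurement of this system.

```python
# Back-of-envelope check of why a narrower PCIe link need not bind at 4K.
# Per-lane rate is the nominal usable PCIe 3.0 figure (~1 GB/s after
# 128b/130b encoding); per-frame traffic is an illustrative assumption.
PCIE3_GBPS_PER_LANE = 0.985

def link_bandwidth_gbps(lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe 3.0 link with N lanes."""
    return lanes * PCIE3_GBPS_PER_LANE

# Hypothetical steady-state bus traffic: ~20 MB of command/upload
# data per frame at 60 fps, with textures already resident in VRAM.
per_frame_mb, fps = 20, 60
traffic_gbps = per_frame_mb * fps / 1000  # 1.2 GB/s

for lanes in (16, 8, 4):
    bw = link_bandwidth_gbps(lanes)
    print(f"x{lanes}: {bw:.1f} GB/s -> {bw / traffic_gbps:.0f}x assumed traffic")
```

Under these assumptions even a x4 link carries roughly three times the steady-state traffic, which is consistent with the observed ~99% performance retention: at 4K the binding constraint is GPU compute, not bus bandwidth.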
This outcome illustrates a core lean and programme-management principle: capacity beyond the active constraint produces diminishing returns. Higher-tier GPUs would have increased energy consumption and embodied environmental cost without delivering commensurate experiential benefit under the system’s display, resolution, and usage constraints. In programme terms, this mirrors the misallocation of resources beyond a work package’s absorptive capacity, increasing cost and risk without improving delivery (PMI, 2021).
Memory configuration provides a further example of governed optimisation. Although the system was provisioned with 64 GB of RAM, empirical testing demonstrated that a 32 GB configuration with XMP enabled delivered lower latency and greater stability for gaming and real-time workloads. Additional memory capacity did not improve performance but reduced overclocking headroom and marginally increased power draw. Accordingly, the system operates with 32 GB of active memory, while the remaining capacity is retained as reserve rather than eliminated. This reflects lean portfolio logic: resources are neither discarded prematurely nor deployed inefficiently, but held in readiness for future scope change (Kerzner, 2019).
Other subsystems were deliberately not optimised beyond sufficiency. Solid-state storage connected via older cabling provided adequate throughput for gaming and general use, and no evidence emerged that storage performance constrained outcomes. Visual output was similarly bounded by display hardware rather than computational capability. These constraints were acknowledged but not escalated, consistent with agile governance practices that prioritise intervention only where delivery is threatened rather than optimising all components indiscriminately (Goldratt, 1997).
Taken together, the system demonstrates how sustainability, performance, and resilience can emerge from governed restraint rather than maximalism. Energy savings achieved through passive cooling, avoidance of unnecessary component replacement, and continued operation via auxiliary architectural pathways collectively reduce both operational and embodied environmental cost over time. In programme management terms, the system exemplifies lean supremacy: value is sustained not by chasing peak theoretical capability, but by aligning resources, scope, and governance to deliver outcomes reliably under constraint.
This section establishes the system as a configured, enduring platform whose sustainability and performance characteristics arise from disciplined design and ongoing stewardship. The following section presents benchmark and gameplay performance data, evaluating how this lean configuration performs relative to contemporary gaming requirements and dominant consumer baselines.
Figure 1. Passive CPU cooler (without fans).

Figure 2. Passive CPU cooler (doubled and with fans).

4. Benchmarking and performance results
Performance evaluation was conducted to assess whether the configured system delivers contemporary gaming capability within its defined scope and constraints. Benchmarks were selected to establish fitness for purpose, stability, and comparative value against dominant consumer baselines, rather than to pursue global leaderboard positions. Results are summarised below before being interpreted through a Lean and programme management lens.
Table 2. System Configuration and Governed Constraints
| Component | Configuration | Notes on Scope and Constraint |
|---|---|---|
| CPU Cooling | Passive-first heatsink with low-speed case fans | Near-zero active cooling power at CPU; reduced mechanical wear and failure risk compared with high-RPM or pump-based systems (ASHRAE, 2011; Intel, 2018) |
| GPU | NVIDIA RTX 3060 Ti | Gaming-optimised GPU with moderate board power (~200 W), designed for rasterisation and real-time graphics rather than compute-intensive mining or datacentre workloads (NVIDIA, 2021) |
| GPU Slot | Auxiliary PCIe slot | Primary x16 slot degraded; empirical testing indicates negligible performance loss at 4K where GPU compute, not bus bandwidth, is the limiting factor (Gamers Nexus, 2020; PCI-SIG, 2019) |
| RAM (Installed) | 64 GB DDR4 | Purchased for longevity and future scope flexibility |
| RAM (Active) | 32 GB DDR4 with XMP enabled | Lower latency and improved stability under gaming and real-time workloads; excess capacity beyond this threshold delivers diminishing returns (TechSpot, 2021) |
| Storage | SSDs with legacy cabling | Adequate throughput for gaming; not a binding performance constraint (Microsoft, 2020) |
| Display Context | TV / non-gaming monitor | Visual output bounded by display refresh and resolution rather than GPU capability |
| Operating Profile | Near-continuous operation | Longitudinal use over ~5–6 years, including sustained uptime and periodic maintenance |
Interpretation
This configuration establishes the system as governed rather than idealised. Performance outcomes must therefore be interpreted relative to real constraints, not theoretical maxima—an explicit parallel to scope discipline in programme management (PMI, 2021).
Table 3. Synthetic Benchmark Results (GPU and System Stability)
| Test | Result | Reference Range (RTX 3060 Ti) | Observations |
|---|---|---|---|
| Unigine Superposition (4K Optimised) | ~19,500–21,000 | ~19,000–22,000 | Within expected class performance for RTX 3060 Ti GPUs (Unigine, 2022; TechPowerUp, 2023) |
| PCIe Slot Efficiency | ~99% at 4K | ~100% (x16 ideal) | Performance loss negligible at high resolutions where GPU compute dominates (Gamers Nexus, 2020) |
| Interrupt-to-Process Latency | Within real-time safe bounds | <1 ms typical | Stable for gaming and real-time audio; no DPC-related instability (Resplendence, 2023) |
| Thermal Behaviour | Stable under load | No throttling expected | Passive-first cooling sufficient to maintain sustained boost clocks without thermal throttling |
Interpretation
Despite operating via an auxiliary PCIe slot, the system retains effectively full GPU performance at 4K. This confirms that theoretical bandwidth reductions do not materially constrain real gaming workloads in this configuration—a clear example of excess capacity providing no additional value.
Table 4. Gameplay Performance Summary (Call of Duty)
| Resolution | Settings | Performance Outcome | Experiential Assessment |
|---|---|---|---|
| 1080p | Extreme / Ultra | Stable, high frame rate | Exceeds console-class experience |
| 1440p | High / Ultra | Smooth and responsive | Well within intended scope |
| 4K | Optimised | Playable and stable | Borderline competitive, high fidelity |
| Frame Pacing | N/A | Consistent | No perceptible stutter |
| 0.1% Lows | N/A | Acceptable | No immersion-breaking drops |
Interpretation
The system meets or exceeds contemporary gaming expectations for the majority of users. While not configured for elite esports competition at extreme refresh rates, it delivers depth, stability, and responsiveness, aligning with how most players experience value (Digital Foundry, 2023).
Table 5. Virtual Comparison with PlayStation 5 (Baseline Equivalence)
| Dimension | RTX 3060 Ti System | PlayStation 5 | Comparative Assessment |
|---|---|---|---|
| Target Resolution | 1080p–4K | 1080p–4K | Equivalent |
| Visual Fidelity | High / Ultra (PC) | High (Optimised) | PC equal or better (settings-dependent) |
| Frame Stability | High | High | Equivalent |
| Power Envelope | Moderate (~200 W GPU) | Fixed ~180–200 W system | Comparable (Sony, 2020; NVIDIA, 2021) |
| Graphics Latency | Lower (driver & pipeline control) | Fixed pipeline | PC advantage (NVIDIA Reflex; Digital Foundry, 2022) |
| Platform Flexibility | Open | Closed | PC advantage |
| Repurpose / Upgrade | Yes | No | PC advantage |
Summary Interpretation (Lean / Programme Perspective)
Across synthetic benchmarks, live gameplay testing, and platform-level comparison, the results demonstrate that the system delivers contemporary, high-quality gaming performance within clearly defined scope and constraints. Performance parity with dominant consumer platforms is achieved not through escalation of hardware capacity, but through disciplined configuration, architectural resilience, and the deliberate avoidance of surplus capability that offers no material experiential return.
From a lean systems perspective, performance improvements emerged through constraint-focused optimisation rather than capacity expansion. Addressing memory latency and system stability produced tangible gains, whereas additional resources beyond this threshold delivered diminishing returns. From a programme management standpoint, delivery objectives were achieved without incurring unnecessary cost, replacement, or technical risk. When primary pathways failed, auxiliary routes preserved outcomes, and excess resources were retained as contingency rather than deployed inefficiently—reflecting mature governance rather than maximalist design.
Taken together, these results substantiate the central claim of this article: lean, well-governed systems can outperform over-engineered alternatives in real-world contexts, particularly under conditions of resource scarcity and sustainability constraint. The following discussion situates these findings within broader debates on hardware scarcity, lifecycle value, and the governance of complex socio-technical systems.
5. Discussion: Lean Supremacy, Scarcity, and the Future of Individual Computational Agency
The findings of this study must be interpreted not only in terms of gaming performance, but in relation to the broader computational capabilities now available to individual users. The system examined here—built around an RTX 3060 Ti–class GPU—comfortably delivers contemporary gaming performance within its defined scope. Less immediately visible, but equally significant, is its capacity to support applied artificial intelligence and data-intensive workloads, including model training, inference, and parallel data processing tasks commonly associated with contemporary machine learning practice. Consumer GPUs in this performance class are routinely used for deep learning frameworks such as TensorFlow and PyTorch, enabling experimentation and research previously dependent on institutional infrastructure (NVIDIA, 2021; Chollet, 2018).
This observation is historically significant. In 1997, IBM’s Deep Blue defeated the world chess champion using a system that was highly specialised, institutionally controlled, and resource-intensive. Deep Blue relied on custom VLSI chess processors and dedicated infrastructure consuming substantial electrical power and capital investment (Hsu, 2002; Campbell et al., 2002). By contrast, the system analysed in this study—built from consumer components and operating at a fraction of the energy cost—exceeds the computational requirements of many early AI research milestones. While architectural differences between symbolic AI and modern statistical learning must be acknowledged, the comparison highlights a profound inversion: computational capability once reserved for corporate or state actors is now available at the individual level.
This inversion has material implications for programme management, sustainability, and digital equity. From a lean systems perspective, the RTX 3060 Ti configuration represents a point of high value density: sufficient GPU memory, parallel compute capability, and throughput to support both high-fidelity gaming and applied AI research, without unnecessary escalation in power consumption or system complexity. In programme terms, this aligns with the concept of a minimum viable research platform—capable of enabling meaningful experimentation and innovation without requiring enterprise-scale infrastructure or cloud dependency (PMI, 2021; Ries, 2011).
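The "minimum viable research platform" claim can be illustrated with a rule-of-thumb memory estimate. This is a hedged sketch: the 8 GB figure matches the RTX 3060 Ti's VRAM, but the per-parameter byte costs (fp32 weights, gradients, and Adam optimiser state) are common approximations rather than vendor specifications, and activation memory is workload-dependent and excluded.

```python
# Rule-of-thumb VRAM estimate for training models on a consumer GPU.
# Per-parameter costs are common approximations: fp32 weights (4 B)
# + gradients (4 B) + Adam moments (8 B) = ~16 B/param. Activation
# memory is workload-dependent and excluded from this sketch.
VRAM_GB = 8.0  # RTX 3060 Ti

def training_footprint_gb(params_millions: float,
                          bytes_per_param: int = 16) -> float:
    """Approximate parameter-state memory (GB) for full fp32 training."""
    return params_millions * 1e6 * bytes_per_param / 1e9

for name, params_m in [("ResNet-50 (~25M params)", 25),
                       ("BERT-base (~110M params)", 110),
                       ("GPT-2 medium (~355M params)", 355)]:
    gb = training_footprint_gb(params_m)
    verdict = "fits" if gb < VRAM_GB else "needs fp16 / smaller batch / offload"
    print(f"{name}: ~{gb:.1f} GB parameter state -> {verdict}")
```

On these assumptions, widely used research-scale models fit comfortably within consumer-class VRAM, supporting the argument that meaningful experimentation does not require enterprise-scale infrastructure.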
However, this capability cannot be assumed to persist. Current market trends increasingly orient hardware design and pricing toward large-scale AI workloads, data centres, and cloud providers. Rising GPU and memory costs, coupled with increasing platform lock-in, have already prompted concerns that PC building as a practice is becoming economically inaccessible for many users (IDC, 2023; Gartner, 2024). As hardware becomes optimised for institutional AI deployment rather than general-purpose use, individual computational agency risks erosion. This trajectory mirrors earlier periods of centralisation in computing history, where innovation became constrained by access rather than ingenuity.
This shift also intensifies sustainability concerns. Systems designed for maximal throughput rather than scoped sufficiency exhibit higher operational energy demand, shorter effective lifecycles, and lower reuse potential. High-performance hardware that requires continuous, energy-intensive cooling to maintain marginal gains demonstrates a poor resource-to-value ratio. By contrast, the system examined here illustrates how a single, well-governed PC can support multiple domains—gaming, data processing, and AI experimentation—while remaining energy-efficient and durable. Such multi-use capability is a recognised hallmark of sustainable system design (Koomey et al., 2014).
The implications for programme and portfolio management are clear. Just as organisations must avoid over-investing in infrastructure that exceeds absorptive capacity, digital ecosystems must guard against a future in which computational power is recentralised through cost and specialisation. Lean supremacy, in this sense, is not merely a matter of performance efficiency; it is a matter of preserving optionality. Systems that are open, repurposable, and sufficiently powerful enable adaptation as requirements evolve, reducing dependence on proprietary platforms and externally governed cloud services (Womack and Jones, 1996; Goldratt, 1997).
These findings reinforce the arguments advanced in ‘Towards integrative multi-stakeholder responsibility for net zero in e-waste’ (Cao et al., 2024), which emphasise lifecycle stewardship, reuse, and shared responsibility as foundations for sustainable digital futures. Extending the usable life of capable hardware reduces environmental impact while protecting the distributed innovation capacity that personal computing has historically enabled. The present article extends those broader principles at the micro scale.
Viewed in this light, the system examined here is more than a gaming rig. It demonstrates that high-performance, multi-purpose computing remains viable at the individual level—provided that design, governance, and evaluation prioritise sufficiency over excess. In a future defined by hardware scarcity, energy constraints, and AI-driven market pressures, sustaining the agency of the PC builder may prove as important for resilience and innovation as sustaining raw performance itself.
References
- Campbell, M., Hoane, A. J. and Hsu, F. (2002) ‘Deep Blue’, Artificial Intelligence, 134(1–2), pp. 57–83.
- Cao, D., Puntaier, E., Gillani, F., Chapman, D. and Dewitt, S. (2024) ‘Towards integrative multi-stakeholder responsibility for net zero in e-waste: a systematic literature review’, Business Strategy and the Environment, 33(8), pp. 8994–9014.
- Chapman, D. (2019) ‘Go Green Go Digital (GGGD): An applied research perspective toward creating synergy of crypto-mining and sustainable energy production in the UK’, International Conference on Sustainable Materials and Energy Technologies, September, pp. 1–23.
- Chollet, F. (2018) Deep Learning with Python. New York: Manning.
- Gartner (2024) Semiconductor Supply Chain and AI Infrastructure Outlook.
- Goldratt, E. M. (1997) Critical Chain. Great Barrington, MA: North River Press.
- Hsu, F. (2002) Behind Deep Blue: Building the Computer That Defeated the World Chess Champion. Princeton: Princeton University Press.
- IDC (2023) Worldwide GPU and Memory Market Forecast.
- Koomey, J. G., et al. (2014) ‘Implications of historical trends in the electrical efficiency of computing’, IEEE Annals of the History of Computing, 36(3), pp. 46–54.
- NVIDIA (2021) NVIDIA Ampere Architecture Whitepaper.
- PMI (2021) A Guide to the Project Management Body of Knowledge (PMBOK® Guide), 7th edn.
- Ries, E. (2011) The Lean Startup. New York: Crown Business.
- Womack, J. P. and Jones, D. T. (1996) Lean Thinking. New York: Simon & Schuster.
Appendix A. PC Build specifications
System Configuration (Final Test Platform)
The system analysed in this study is a consumer-built desktop PC originally assembled in 2019 and subsequently maintained and reconfigured through incremental optimisation. The configuration at the point of benchmarking and gameplay testing was as follows:
| Component | Specification |
|---|---|
| Motherboard | Gigabyte X570 AORUS ELITE (AM4 chipset) |
| CPU | AMD Ryzen 7 5800X, 8 cores / 16 threads |
| CPU Configuration | Standard BIOS-enabled overclocking (PBO / manufacturer-consistent settings) |
| Observed CPU Clock | ~3.8–4.7 GHz under boost |
| CPU Cooling | Passive-first large surface-area heatsink with low-speed case airflow (Noctua-class architecture) |
| GPU | Gigabyte NVIDIA GeForce RTX 3060 Ti (GA104, LHR) |
| GPU Memory | 8 GB GDDR6 |
| GPU Interface | PCIe 4.0 (operating via auxiliary slot due to primary x16 slot degradation) |
| System Memory | 32 GB DDR4 @ 3200 MHz, XMP enabled |
| Memory Configuration | Dual-channel, latency-optimised; no excess capacity installed |
| Storage | 1 TB SATA SSD (system drive) |
| Additional Storage | Secondary SSDs (legacy cabling; not performance-limiting for gaming workloads) |
| Operating System | Windows 10 (Build 26200) |
| Graphics Driver | NVIDIA Driver 581.8 |
| Power Profile | Standard desktop PSU; no artificial power limits applied |
| Display Context | Television / non-gaming monitor (visual output bounded by display rather than GPU capability) |
| Operational History | Near-continuous operation over ~5–6 years with routine maintenance (e.g. thermal interface renewal) |
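For reproducibility, the configuration above can also be recorded in machine-readable form, so that benchmark results in Appendices B–D can be annotated consistently. The sketch below is illustrative only: the field names and the `summarise` helper are the author's own conventions, not part of any benchmarking tool, and the values are transcribed directly from the table above.

```python
# Machine-readable capture of the Appendix A test platform.
# Values are transcribed from the specification table; field names are illustrative.
BUILD_SPEC = {
    "motherboard": "Gigabyte X570 AORUS ELITE (AM4)",
    "cpu": "AMD Ryzen 7 5800X",
    "cpu_cores": 8,
    "cpu_threads": 16,
    "gpu": "NVIDIA GeForce RTX 3060 Ti (GA104, LHR)",
    "gpu_vram_gb": 8,
    "ram_gb": 32,
    "ram_speed_mhz": 3200,
    "os": "Windows 10",
    "gpu_driver": "581.8",
}

def summarise(spec: dict) -> str:
    """Produce a one-line platform summary suitable for benchmark result headers."""
    return (f"{spec['cpu']} ({spec['cpu_cores']}C/{spec['cpu_threads']}T), "
            f"{spec['ram_gb']} GB DDR4-{spec['ram_speed_mhz']}, "
            f"{spec['gpu']} {spec['gpu_vram_gb']} GB")

print(summarise(BUILD_SPEC))
```

Embedding the platform description alongside each benchmark run guards against the common reporting error of comparing results across silently differing driver versions or memory configurations.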
Appendix B. Superposition Benchmarks

Appendix C. TechPowerUp GPU

Appendix D. UserBenchmark

Appendix E. The Original Build (05/01/2026)
