The Sentient Stack: Deskilling, Reskilling... or Edu-Boost for AI-Augmented Software and Game Development

In the early 2020s, the idea of an AI coding “co-pilot” was a novel, almost magical concept. Developers marveled as a ghost in the machine suggested lines of code, completed functions, and turned comments into boilerplate, albeit with significant gaps. Fast forward to today, in mid-2025, and that co-pilot is no longer just a passenger. It’s an active navigator, an architect, a persistent QA engineer, and in some cases, the entire crew. Still, a motley crew—but improving day by day.

For my various projects, I inevitably dig deeper into coding and software architecture, whether it’s simulations, game mechanics tests, or audiovisual experiments. The goal of articles like this one is to provide a conceptual reference for this journey—later articles may lean into tutorial territory, as you may be accustomed to on this webpage. The integration of Artificial Intelligence into the software development lifecycle (SDLC) is no longer a trend—it is the foundation upon which modern software and interactive entertainment are built. This journey, fueled by both massive corporate investment and a fiercely collaborative open-source community, has taken us from simple autocompletion to multi-agent, goal-oriented AI systems at a breathtaking pace. This article explores the development of AI in the field, its current state in 2025, the persistent challenges we face, and the future it heralds. It serves as the beginning of a series of articles focused specifically on creative coding, software engineering, and AI-augmented development (and its pros and cons, sometimes quite philosophical).

Two years ago, coding with AI was slow and tedious. Now it is faster and sometimes less tedious.

The Journey to 2025: From Smart Editors to Autonomous Agents

Our current reality was built on several key evolutionary steps:

1. The Pre-Transformer Era (Before 2021)

For decades, “AI” in development meant sophisticated static analysis, linters, and intelligent code completion. These tools, built on rule-based systems, were context-aware within a limited scope but couldn’t reason about intent or understand broader architectural implications.

2. The Co-Pilot Revolution (2021-2023)

The launch of GitHub Copilot, powered by OpenAI’s Codex, marked a true inflection point. However, this revolution was built on open-source foundations like the original Transformer architecture, released by Google in 2017, and the foundational frameworks of PyTorch and TensorFlow. The release of powerful, openly accessible models like Meta’s Llama series and Mistral AI’s models further “democratized” access, propagating these capabilities throughout the community and allowing everyone to experiment with and build upon them. The developer’s task began shifting from writing code to reviewing and directing AI-generated solutions.

3. The Agentic Shift (2023-2025)

The shift occurred when AI moved beyond being a passive suggestion engine. In mid-May 2025, OpenAI previewed a new Codex agent capable of writing features, answering questions, and proposing pull requests. At its Build 2025 conference, Microsoft revealed that AI agent use had more than doubled, introducing an Azure SRE agent integrated into GitHub Copilot. This marked the decisive move from “code completion” to “task completion.”


The State of the Art in 2025 and Beyond: The AI-Augmented Developer

Today, the line between the IDE and the AI has become almost nonexistent. The AI functions as an ambient, nosey partner integrated into every phase of development. While proprietary solutions like Anysphere’s Cursor—which reportedly surpassed $500 million in ARR—dominate headlines, a parallel, vibrant ecosystem thrives on open-source alternatives. Tools like TabbyML, Continue.dev, and Void (a VS Code fork), along with a plethora of plugins for VS Code itself, allow developers to connect to a wide range of open models, from Mistral and Llama 3 to community-fine-tuned variants, offering greater customizability and data privacy.

Coding and Implementation: Vibe vs. Agentic Development

A new taxonomy now describes two dominant modes of AI-assisted development:

Vibe Coding: This human-in-the-loop, conversational approach is used for rapid prototyping. While gaining traction at companies like Redis and Visa, engineering leaders caution that it requires significant human oversight to avoid architectural flaws.

Agentic Coding: This involves autonomous (or rather semi-autonomous, if we consider agentic hierarchies) agents that can plan, execute, and iterate. This trend is powerfully driven by the open-source community. Alongside published frameworks like AgentMesh, collaborative projects like OpenDevin and Aider are attempting to build fully transparent and customizable software engineering agents, standing as a testament to the community’s drive to build its own autonomous systems.

Standardization and Interoperability: The Rise of MCP

A critical development fostering this new era is the widespread adoption of the Model Context Protocol (MCP). Introduced in late 2024, MCP is an open standard that allows AI models to securely and reliably connect with external tools and data sources. By March 2025, its adoption by major players like OpenAI, Microsoft, and Google DeepMind made it the foundational infrastructure for AI interoperability.
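To make this concrete, here is a minimal sketch of an MCP tool server, assuming the official Python SDK (installed via pip install mcp). The FastMCP helper and decorator reflect the SDK at the time of writing, and the server and tool names are my own hypothetical examples, so check the current documentation before relying on the details.

# A minimal MCP tool server sketch, assuming the official Python SDK.
# Server name and tool are hypothetical illustrations.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("repo-tools")  # hypothetical server name

@mcp.tool()
def count_lines(path: str) -> int:
    """Count the lines in a file, so a connected AI client can reason about its size."""
    with open(path, "r", encoding="utf-8") as f:
        return sum(1 for _ in f)

if __name__ == "__main__":
    mcp.run()  # exposes the tool over stdio to any MCP-capable client

Once an MCP-capable client (such as an AI-enabled IDE) connects, the model can discover and call count_lines like any other tool, which is precisely the interoperability the protocol standardizes.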

Game Development: A Studio Reimagined

The game development industry has become a fertile testing ground for AI innovation.

Autonomous NPCs: At CES 2025, NVIDIA expanded its Avatar Cloud Engine (ACE) to include autonomous NPCs in major titles like PUBG: BATTLEGROUNDS.

Indie Innovation and Community Skepticism: The indie scene is a hotbed for experimentation, with modders using open-source AI models to create adaptive NPCs for games like Skyrim. However, community opinion remains mixed, with concerns about computational cost and whether current tech can deliver true depth compared to pre-written narratives.


[Illustration: a coder at work while a python lurks from a function]

“AI can turn a one-sentence idea into five hundred lines of code in ten seconds. It then takes me two hours to figure out what those five hundred lines actually do.” – On the Productivity Paradox

The Productivity Paradox and Other Persistent Challenges

Despite incredible progress, our reliance on AI faces significant challenges.

1. The “Plausible but Wrong” Epidemic

This remains the single biggest issue. AI models generate code that looks perfect but contains subtle flaws. The 2025 Stack Overflow Developer Survey found this to be the biggest frustration for 66% of developers.

2. The Productivity Paradox

A stark contradiction has emerged. While some report productivity gains of up to 30%, a July 2025 randomized controlled trial (RCT) with experienced open-source developers found that using current AI tools led to 19% slower task completion times, highlighting the friction of real-world use.

3. Widespread Adoption, Low Trust

Developer sentiment reflects this paradox. The Stack Overflow survey revealed that while 84% of developers use AI tools, only 3.1% highly trust their accuracy, with nearly 46% expressing distrust.

4. Intellectual Property and Global Competition

Legal questions around copyright persist. Meanwhile, a surge in powerful open-source models from China—such as Alibaba’s Qwen and DeepSeek-Coder—is a deliberate strategy to build a global community, creating strategic pressure on other ecosystems, especially those of the EU and U.S. We may see a shift of momentum from open source to corporate proprietary or “fake open-source” solutions and platforms.

5. The Deskilling of the Workforce

There is a growing concern that we are creating a generation of developers who can prompt an AI but cannot reason about fundamental computer science principles.

We should take a closer look at this topic, as it is closely connected to other AI-enhanced or AI-augmented applications and to the question of direct competition with human creativity. We can outline several critical failure modes, or scenarios:

The Erosion of Foundational Knowledge Scenario

The most immediate (theoretical?) danger is the atrophy of core computer science principles. When an AI agent handles memory allocation, chooses data structures, and normalizes a database schema, the developer is spared the effort—but also the experience and understanding. An engineer who has never managed memory or wrestled with the trade-offs between a hash map and a tree cannot truly debug or optimize the systems the AI builds. They are left with a superficial understanding, unable to fix problems when the AI’s abstractions inevitably leak.
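To make that hash-map-versus-tree trade-off concrete, here is a toy Python comparison (an illustration, not a benchmark), with a sorted list and bisect standing in for an ordered tree:

# Hash map: O(1) average point lookup, but no inherent ordering.
import bisect

prices = {"apple": 3, "pear": 2, "plum": 4}
print(prices["pear"])                         # fast point query

# Ordered structure (sorted list as a stand-in for a balanced tree):
# O(log n) search, but ordered and range queries come almost for free.
sorted_keys = sorted(prices)
i = bisect.bisect_left(sorted_keys, "pear")
print(sorted_keys[i])                         # point query via binary search
print(sorted_keys[i:])                        # all keys >= "pear", trivially

An engineer who has internalized this difference knows when the AI’s reflexive choice of a dict will quietly break a feature that needs ordered traversal.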

The “Cognitive Crutch” and the Mid-Level Chasm Scenario

AI tools act as a powerful cognitive crutch. Junior developers can now produce code that mimics the patterns of a mid-level or even high-level engineer, giving the illusion of rapid skill acquisition. However, they often bypass the crucial struggle—the debugging, the research, the experiments, and the failed attempts—where the learning occurs. This is creating a “mid-level chasm”: a growing cohort of developers who can prompt more or less effectively but lack the nuanced experience to become senior architects. They have the output of a mid-level engineer but the problem-solving skills of barely a junior.

The Security Blindness and Gullibility Paradox Scenario

This remains the most pernicious daily challenge, and we have mentioned it before in the “Plausible but Wrong” note. AI is masterful at generating code that looks perfect but is riddled with subtle bugs, security flaws, or inefficient logic. A developer who has been “deskilled” lacks the critical eye to spot these errors. They may not recognize a potential SQL injection vulnerability, a race condition caused by flawed logic, or an algorithm with disastrous O(n²) complexity, because they’ve never learned those patterns from first principles. Trust in these tools is consequently low (only 3.1% of developers highly trust them), yet the reliance on them grows, creating a dangerous paradox.
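As a minimal illustration of the pattern (my own toy example, not actual AI output), the first query below looks plausible and even passes a happy-path test, yet is wide open to SQL injection; the second, parameterized form is what the critical eye should demand:

# A classic "plausible but wrong" pattern, using the standard sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# Vulnerable: user input is spliced directly into the SQL text.
rows = conn.execute(f"SELECT role FROM users WHERE name = '{user_input}'").fetchall()
print(rows)  # returns the admin row despite the bogus name

# Safe: parameterized query; the driver treats the input as data, not SQL.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # returns nothing, as it should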

The Productivity Paradox as a Symptom

The widely reported “productivity paradox”—where a July 2025 study found experienced developers were 19% slower with AI tools—may be a direct symptom of deskilling. The time saved in writing code is being lost (and then some) in debugging the AI’s opaque, “plausibly wrong” output. Developers are spending less time creating and more time trying to understand and fix the work of their inscrutable AI partner. As the saying goes, it is often much harder to correct someone else’s code than to write one’s own.

As with all technology, it comes with both offerings and tradeoffs. With all of the above in mind, AI enhancement can also offer a very streamlined education in technical and engineering areas, actively mitigating some of these issues…

From Cognitive Crutch to Learning Accelerator: The Mentoring and Explorative Potential

However, the ‘cognitive crutch’ paradox has a compelling flip side: the potential for AI to become the most powerful educational tool in a developer’s arsenal. We can repurpose the very same mechanisms that risk deskilling to accelerate learning and foster a deeper, more hands-on understanding of complex topics.

The key lies in shifting the developer’s role from a passive recipient of code to an active interrogator of it—a shift that ironically places more demanding critical-thinking tasks on the developer than before. AI augmentation allows for the rapid testing of ideas, enabling a developer to “fail faster” and iterate on concepts that would otherwise be too time-consuming to explore. An engineer can ask the AI to implement a feature using an unfamiliar design pattern, not to ship the code, but to have an instant, concrete example to study, critique, and refactor.

This transforms the learning process from theoretical to applied. With the correct feedback loop and a mindset of critical inquiry, the AI becomes a tireless idea validation engine. A junior developer can ask it to explain its own code’s time complexity, suggest alternative approaches, and even introduce bugs intentionally for debugging practice. When harnessed correctly, this provides an unprecedented educative boost, turning the very tool threatening to create a “mid-level chasm” into the bridge that helps motivated developers cross it.


My AI agent is like a brilliant, over-educated intern who sometimes oversleeps, sometimes complains, and confidently introduces three critical security flaws into the codebase every hour.

Personal Viewpoint

From my experience on diverse cloud and local platforms, a hierarchical agentic approach is highly effective. It can mitigate common issues like redundant processes, the overwriting of flawless code, and the introduction of inconsistent function names. However, the trade-off is a more fragmented, granular view of the project from each coding model’s perspective, which may cause omissions of logical connections between modules. Most of these issues can be mitigated to some extent by small teams and smart management. Larger teams would absolutely require AI middlemen (or middle-machines) to control the flow of information in a project. As you may imagine, this could be beneficial, but it also carries quite catastrophic potential.
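For readers who want the shape of this hierarchy in code, here is a deliberately simplified conceptual sketch. Every name in it is hypothetical, and call_model stands in for whatever local or cloud model client you actually use; the point is the structure, not the implementation.

# Conceptual sketch of a hierarchical agentic workflow: a planner decomposes,
# workers implement, and a supervisor checks cross-module consistency.
from dataclasses import dataclass

def call_model(role: str, prompt: str) -> str:
    """Placeholder for an actual LLM call (Ollama, an API client, etc.)."""
    return f"[{role} output for: {prompt[:40]}...]"

@dataclass
class Subtask:
    module: str
    description: str

def planner(goal: str) -> list[Subtask]:
    # In practice the planner model would emit structured subtasks.
    call_model("planner", f"Decompose into modules: {goal}")
    return [Subtask("physics", "integrate velocity"), Subtask("render", "draw sprites")]

def worker(task: Subtask) -> str:
    return call_model("worker", f"Implement {task.module}: {task.description}")

def supervisor(results: list[str]) -> str:
    # The supervisor sees all outputs, catching inconsistent function names
    # or duplicated logic that no single worker could notice alone.
    return call_model("supervisor", "Check naming and interfaces:\n" + "\n".join(results))

results = [worker(t) for t in planner("simple 2D game loop")]
print(supervisor(results))

The supervisor tier is what catches the inconsistent names and redundant processes mentioned above, at the cost of each worker seeing only its own fragment of the project.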

Perspectives on the Road Ahead

The trajectory of AI points towards greater autonomy, but recent data underscores the continued necessity of human oversight. And the underlying issues seem to be structural, ingrained in the information-theoretic space rather than tied to any particular level of technology.

  • The Developer as Chief Architect and Orchestrator: The senior developer’s role is solidifying into that of a “master architect,” who decomposes problems and critically evaluates AI output.
  • The Hybrid Future: The future of development will be a hybrid model, combining rapid, human-in-the-loop “vibe coding” with autonomous agents for well-defined tasks. As I wrote before, we can expect AI/human control hubs, which will be effective for engineering (yet may pose serious dangers socially and politically, but this is for another article and another topic).
  • Grounded Expectations for Emergent Worlds: The dream of generating entire applications from a single document remains a long-term goal, yet it seems almost within our reach even now. With conceptually simple or trivial structures, we are already there (in a pop-science-journal sense). While performance on benchmarks like SWE-bench has soared, translating this into reliable products is the key challenge. The immediate future is a blend of AI-driven productivity and robust human governance, not a leap to fully automated quality for critical branches.

The Architect in the Machine: Why Human Skill is More Critical Than Ever

In 2025, AI is a tool we use and often a Socratic partner we collaborate with. It has unlocked complex software creation and new frontiers in entertainment development. However, this partnership is not without its perils. The challenges of accuracy, the productivity paradox, low developer trust, and technical debt loom large.

We have built a powerful ghost in the machine, born from both corporate labs and open-source collaboration. Our task for the next five years is to learn how to master it—to harness its power while grounding our expectations in empirical reality, ensuring this sentient stack serves to amplify human ingenuity, not to replace rational thought, creativity, and planning. This artificial brain—or rather a network swarm of machine brains—may achieve heights we are only just beginning to imagine, or it may hit the limits of our own capabilities.


Resources and Tips

To experiment with AI-augmented development for free, you have several options (as you probably already know by now). Most major providers offer their solutions to the public through web services or APIs.

Currently, I recommend Google AI Studio as a highly effective free tool for testing the coding potential of powerful AI models like the Gemini family. Claude is also strong at coding and UI design; however, its user experience can be clumsy, and it sometimes (meaning often, regularly, and even on the paid tier) leaves you hanging with unfinished or truncated code. Grok and ChatGPT offer free tiers, but be aware that their most capable models are often behind a paywall and free usage credits (which are often somewhat hidden in a black box) can be exhausted quickly. Just a reminder: never share sensitive data or secrets on open platforms.
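As a starting point, here is a minimal sketch of querying a Gemini model from Python, assuming the google-generativeai package and an API key from Google AI Studio. Model names and the SDK surface change quickly, so treat this as a template and check the current docs.

# Minimal Gemini query sketch; assumes `pip install google-generativeai`
# and a GOOGLE_API_KEY environment variable from Google AI Studio.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # pick whatever model your tier offers

response = model.generate_content(
    "Write a Python function that deduplicates a list while preserving order, "
    "then explain its time complexity."
)
print(response.text)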

Here are some open-source models I have converted to GGUF, which you can test locally (e.g., Nemotron, Light R1):


Addendum: The Open Source AI Developer’s Toolkit 2025

Local Development Environment Setup

Hardware Considerations:

  • Consumer GPUs: RTX 4090/4080 for serious local inference, RTX 4070/4060 for lighter models. You may test setups with older RTX models; VRAM is usually the stat that matters.
  • Apple Silicon: M3/M4 MacBooks with 32GB+ RAM for efficient local model running
  • CPU-Only Options: High-core count AMD/Intel processors for slower but viable inference

Essential Open Source Tools:

  • Ollama: The de facto standard for running local models with a simple API interface (see the sketch after this list)
  • LM Studio: User-friendly GUI for model management and testing
  • Text Generation WebUI (oobabooga): Advanced interface with extensive customization
  • Jan.ai: Privacy-focused alternative with sleek interface
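As promised above, here is a minimal sketch of talking to a local model through Ollama’s REST API, which listens on localhost:11434 by default. It assumes you have already pulled a model (e.g., ollama pull codellama) and uses only the Python standard library.

# Query a local Ollama model via its REST API; standard library only.
import json
import urllib.request

payload = {
    "model": "codellama",
    "prompt": "Explain what this regex does: ^[a-z0-9_-]{3,16}$",
    "stream": False,  # request a single JSON response instead of a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])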

Model Recommendations by Use Case:

  • Code Completion: CodeLlama 34B (see the codellama models on Hugging Face), WizardCoder 33B, or Phind CodeLlama for a balance of speed and quality
  • Code Review: Mixtral 8x7B or Llama 3.1 70B for nuanced analysis
  • Architecture Planning: Claude-3-Haiku via API (when privacy allows) or local Llama 3.1 70B
  • Documentation: Smaller models like Llama 3.1 8B often sufficient

The Privacy-First Development Strategy

Data Sovereignty in AI-Augmented Development

The Corporate Conundrum: Many developers unknowingly feed proprietary code to cloud-based AI services, creating potential IP leaks. A 2025 survey found that 73% of enterprises had no clear policy on AI tool usage, despite 89% of developers using them daily.

Building a Privacy-Preserving Workflow:

Tier 1 - Public/Learning Code:

  • Use any AI service (ChatGPT, Claude, Copilot)
  • Ideal for learning, open-source contributions, personal projects

Tier 2 - Sensitive/Proprietary Code:

  • Local models only (Ollama + CodeLlama/Mixtral)
  • Air-gapped development environments
  • Code review via local AI before any cloud exposure

Tier 3 - Critical/IP-Sensitive Code:

  • Human-only development
  • Post-development local AI review for optimization suggestions
  • Strict compartmentalization

The Micro-Learning Revolution: AI as Technical Mentor

From Stack Overflow to AI Tutoring

The New Learning Pipeline:

  1. Concept Introduction: AI explains the theory (faster than documentation)
  2. Code Examples: AI generates multiple implementation approaches
  3. Critical Analysis: Developer questions AI’s choices, forcing deeper understanding
  4. Iterative Refinement: AI helps debug and optimize human modifications
  5. Knowledge Consolidation: AI generates test cases to verify understanding

Practical Micro-Learning Techniques:

The “Explain Like I’m Five, Then Graduate School” Method:

"Explain dependency injection like I'm five."
[AI gives simple explanation]
"Now explain it like I'm a computer science graduate student."
[AI provides technical depth]
"Show me three different implementations in Python."
[AI provides code examples]
"What are the performance implications of each?"
[Developer gains comprehensive understanding]

The “Deliberate Bug Introduction” Technique:

Ask AI to introduce specific types of bugs into working code, then practice finding and fixing them. This builds the critical eye needed to spot AI-generated errors.
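Here is a toy example of the kind of exercise this technique yields (the bug below is one I introduced deliberately, mimicking what you would ask the AI to do):

# Exercise: the function below contains one deliberately hidden bug.
# Try to find it before reading the comment at the bottom.
def moving_average(values, window):
    """Return the moving average of `values` over `window`-sized slices."""
    averages = []
    for i in range(len(values) - window):  # <-- deliberately introduced bug
        chunk = values[i:i + window]
        averages.append(sum(chunk) / window)
    return averages

print(moving_average([1, 2, 3, 4, 5], 2))  # expected 4 averages, got 3

# The bug: the range should be `len(values) - window + 1`, a classic
# off-by-one that silently drops the final window.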

Future-Proofing Your Skills in an Augmented World

The Developer’s Survival Guide

Skills That AI Cannot Replace (Yet):

  • System Design: Understanding trade-offs, scalability patterns, and architectural decisions
  • Domain Expertise: Deep knowledge of specific industries or problem domains
  • Creative Problem Solving: Approaching novel problems without existing patterns
  • Team Leadership: Managing people, processes, and technical vision — and objective validation
  • Critical Thinking: Evaluating AI output, spotting edge cases, ensuring robustness

Skills to Develop Alongside AI:

  • Prompt Engineering: Crafting effective instructions for AI systems
  • AI Model Evaluation: Understanding when to trust or question AI output
  • Rapid Prototyping: Using AI to quickly test ideas and concepts
  • Integration Expertise: Combining AI-generated components into cohesive systems

The “AI-Native” Developer Mindset:

  • Think in terms of “AI-assistable” vs “AI-resistant” tasks
  • Develop intuition for when AI will help vs hinder
  • Build personal toolchains that maximize AI leverage while maintaining control
  • Cultivate the ability to rapidly evaluate and iterate on AI-generated solutions

Common Pitfalls and How to Avoid Them

The “Magic Black Box” Trap:

  • Problem: Treating AI as infallible
  • Solution: Understand the code logic before using it

The “Copy-Paste Paralysis” Trap:

  • Problem: Becoming dependent on AI for all coding tasks
  • Solution: Regular “AI-free” coding sessions to maintain skills

The “Prompt Dependency” Trap:

  • Problem: Inability to solve problems without AI assistance
  • Solution: Practice explaining your thinking process to the AI, not just asking for solutions

Conclusion

The “sentient stack” we’ve built is neither sentient nor truly autonomous. It is, instead, a sophisticated mirror of human knowledge and creativity, amplified by computational power and guided by human intent. The developers who thrive in this new landscape will be those who understand this fundamental relationship—who can leverage the machine while never losing sight of the steps it takes on their behalf.

The AI that helps me consider three architectural approaches in the time I used to spend on one is genuinely transformative. The AI that makes decisions for me is a liability.

The productivity paradox we explored—where AI tools simultaneously promise massive gains while delivering measurable slowdowns—reveals itself as a transition artifact. The initial friction of adapting to AI assistance gives way to genuine capability enhancement, but more so for those who invest in understanding the tool rather than merely using it.

The deskilling concern is real, but not inevitable. Every transformative technology—from calculators to IDEs—has faced similar fears. The developers who emerged stronger learned to leverage the abstraction while maintaining mastery of the fundamentals.

The open-source ecosystem surrounding AI development tools offers something the proprietary alternatives cannot: transparency, control, and the freedom to truly understand what lies beneath the surface. In a sense, proprietary models pose a much wider set of issues, especially in a monopoly setting that may form in the near future.

The answers lie not in the technology itself, but in how we choose to integrate it into our learning, our workflows, and our professional journey.

The sentient stack may never achieve true sentience, but in learning to work with it, we may discover new dimensions of our own intelligence. That, perhaps, is the greatest gift of this technological moment: not just better tools, but the opportunity to become better craftspeople in using them.

The future remains unwritten, and the pen—or perhaps the prompt—remains in human hands. Let’s make sure we never forget how to use it.
