The standard explanation points to skill gaps or tooling configuration. But after working closely with development teams through this transition, a clearer picture emerges. What separates high-performing teams from the rest isn't the tools they use. It's how they use them.
In this article, we explore why the same AI tools produce such different outcomes across teams and what leaders need to put in place to move from dysfunction to genuine partnership with AI. Because the real barrier to artificial intelligence adoption isn't technical. It's psychological.
Key takeaways
- AI adoption is uneven – Some teams gain huge productivity boosts, others see little impact, even with the same tools.
- Tools alone don’t guarantee results – Success depends on how teams use AI, not just the technology itself.
- Psychology matters – Dysfunctional team dynamics block AI’s potential; mindset and culture are critical.
- Intent-first thinking is key – Software Engineering 3.0 emphasises developer intent, making AI a collaborative partner.
- Environment drives success – Safe, supportive spaces for experimentation and learning unlock the real value of AI investment.
Three generations of software engineering
To understand why AI adoption produces such uneven outcomes, let's trace the evolution of software development practice.
- Software Engineering 1.0: The code-centric era. Traditional development placed the developer at the centre. Engineers wrote code and built software applications with support from tools that facilitated analysis, compilation, and quality testing. Responsibility and focus were both squarely on the individual. Output was code, and skill was measured in code.
- Software Engineering 2.0: The AI-assisted era. The current era introduced AI-powered assistants, GitHub Copilot being the most prominent example. These tools augmented the conventional process with code generation, intelligent suggestions, and autocomplete. The promise was transformative. The outcomes, however, have been inconsistent. This is where most organisations sit today.
- Software Engineering 3.0: The intent-first era. Software Engineering 3.0 shifts emphasis from code to developer intent. The term was first articulated by Hassan et al. (2024) from Queen's University's Software Analysis and Intelligence Lab. Rather than asking "How do I write this?", developers ask "What do I want to achieve?" and collaborate with AI to find the solution. Development happens through interactive dialogue: the developer leads, and the AI partners. The sketch below shows what such an exchange can look like.
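To make this concrete, here is a minimal sketch of an intent-first exchange, assuming the OpenAI Python client; the model name, prompt, and scenario are illustrative assumptions rather than a prescribed workflow.

```python
# A minimal sketch of an intent-first exchange: the developer states the
# goal and constraints, asks for options, and keeps the decision.
# Assumes the `openai` Python package; model name and prompts are
# illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Lead with intent ("What do I want to achieve?"), not implementation.
intent = (
    "I need to deduplicate ten million customer records by fuzzy "
    "name and address matching. Latency is not critical, but false "
    "merges are expensive. Propose two approaches with trade-offs; "
    "do not write code yet."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": intent}],
)
print(response.choices[0].message.content)

# The developer weighs the options, pushes back where needed, and only
# then asks for an implementation of the chosen approach. The decision
# stays with the person, not the tool.
```

Note the contrast with a typical Software Engineering 2.0 workflow, where the exchange would begin with a half-written function and an autocomplete request.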
The technology for Software Engineering 3.0 already exists. The architecture is understood. The research is published. What holds organisations back is not a product gap. It is a psychological one.
The pattern behind the gap
When teams using identical AI tools produce radically different results, the explanation runs deeper than training or configuration. The real differentiator is behavioural — and it maps with striking precision onto a psychological model first described by psychiatrist Stephen Karpman in 1968.
Karpman’s Drama Triangle identifies three roles in a dysfunctional cycle: the Victim, the Rescuer, and the Persecutor. Each role feels productive from the inside. Collectively, they produce stagnation. The pattern is well-documented in organisational psychology. It is also, it turns out, a near-perfect description of how many teams currently relate to AI tools.
The Drama Triangle in Software Engineering 2.0
In organisations struggling with AI adoption, three dysfunctional roles tend to emerge, and together they create a cycle that gives the appearance of progress without delivering it.
- Developer as victim: Developers often debug code they did not write or fully understand. AI-generated suggestions can even increase complexity, not reduce it, leading to cognitive overload and diminishing confidence. As a result, developers may feel sidelined by tools intended to assist them.
- AI as rescuer: The AI stops supporting and starts taking over. Developers lean on it more, think for themselves less, and gradually lose ownership of the work. When errors appear, and they always do, trust collapses fast.
- Management as persecutor: Leadership applies pressure to “use AI more effectively” without addressing what is actually going wrong. The pressure compounds the dysfunction. Engineers feel squeezed between an unreliable tool and unrealistic expectations. The negative feedback loop accelerates.
This explains why experienced developers sometimes resist tools like Copilot. They recognise, instinctively, when a dynamic is placing them in a disempowered position.
The alternative: the Empowerment Triangle
Software Engineering 3.0 requires a different set of relationships between people, AI, and organisational context. The Empowerment Triangle describes what that looks like.
Developer as creator
The emphasis shifts from execution to intent. Instead of asking "How do I write this code?", developers ask "What do I want to build?" They own the problem, the direction, and the outcome. They evaluate AI suggestions critically, demand alternatives, and push back when the output doesn't meet the standard.
AI as coach
Here, the dynamic flips. The AI suggests, questions, and surfaces options, but the developer stays in charge. Every decision, every trade-off, remains with the person, not the tool. The result is a relationship that builds capability over time rather than quietly eroding it.
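In practice, the Coach dynamic often means gating AI output behind acceptance criteria the team owns. The sketch below is a hypothetical illustration of that control flow; the checks and names are invented for the example, not a prescribed implementation.

```python
# Sketch: an AI suggestion is a proposal, not a decision. Checks the
# team defines (tests, standards, review rules) decide acceptance.
# All names here are hypothetical.
from typing import Callable

Check = Callable[[str], bool]

def accept_suggestion(suggested_code: str, checks: list[Check]) -> bool:
    """Return True only if the suggestion passes every team-owned check."""
    return all(check(suggested_code) for check in checks)

# Example team-defined standards (illustrative):
checks: list[Check] = [
    lambda code: "eval(" not in code,            # no dynamic evaluation
    lambda code: len(code.splitlines()) <= 200,  # keep modules reviewable
]

suggestion = "def add(a, b):\n    return a + b\n"

if accept_suggestion(suggestion, checks):
    print("Meets the team's standard: proceed to human review.")
else:
    print("Push back: ask the AI for an alternative.")
```

The point is not the specific checks but where the authority sits: the tool proposes, and the developer's standard decides.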
Environment as challenger
The organisational context sets high standards through support, not fear. Problems are named constructively. Failure is treated as a learning event, not a career risk. The Challenger dynamic builds the psychological safety needed for genuine experimentation and growth.
What the transition looks like in practice
Several teams began their AI journey by implementing Software Engineering 2.0 tools, with only moderate success. The pivot came from redesigning the context around the technology.
1. Creating the right environment first
Engineers needed space to experiment, make mistakes, and receive feedback without fear of criticism. This is the Challenger dynamic in action: high standards delivered through support rather than pressure. Without this foundation, no other change takes hold.
2. Reframing the task
Rather than “learn how to use AI” — a framing that positions AI as the subject — teams were given real business problems to solve. Within hours, they designed solutions, defended their decisions, and presented to stakeholders. AI was the partner in that process, not the curriculum.
3. The result
The same engineers, the same AI tools, yet different outcomes. Those who previously copied and pasted suggestions without understanding them now challenge AI proposals and demand explanations. The shift wasn't technical. It was psychological.
One important caveat: environment and methodology are not everything. Success also depends on individual willingness to explore and try something unfamiliar without retreating to comfortable patterns. Not everyone makes this transition. Organisations that invest in the right environment still need people ready to walk through it.
The strategic implication for AI adoption success
Organisations that understand this dynamic will not win the next decade through better emerging technologies or AI products. Everyone will have those. They will win because they know how to help people work with AI as Creators rather than Victims.
The organisations seeing the greatest variance in AI adoption results are not being separated by their technology stack or data capabilities. They are being separated by the psychological dynamics their culture creates around that technology. You can have perfect third-generation tooling and still fail if your team is operating with second-generation psychology.
The transition from Software Engineering 2.0 to 3.0 is about creating environments where intent-first thinking is natural, where AI partnership feels like genuine collaboration, and where challenge stimulates growth rather than anxiety.
Ultimately, embracing this psychological shift is what will enable organisations to move beyond uneven results and unlock the real value of AI investment. The future belongs to teams and leaders who are ready to build on this foundation.
FAQs
What is AI in software?
AI in software is the use of artificial intelligence tools and systems to support, enhance, or automate parts of the software development process. This includes code generation, intelligent suggestions, automated testing, and data-driven decision making. AI in software is designed to act as a collaborative partner, helping teams move faster, reduce errors, and focus on higher-level problem-solving.
What is AI adoption?
AI adoption is the process of integrating artificial intelligence technologies into an organisation’s operations, products, or services. It involves selecting appropriate AI tools and implementing them into workflows.
AI adoption can range from simple use cases, such as chatbots and automated data analysis, to more advanced applications, such as predictive analytics, machine learning models, and intelligent automation that create better customer experiences.