The End of Traditional SDLC: Rethinking QA in the Age of AI (Part 1)
Software delivery has entered a new era of speed. Teams that once measured development cycles in weeks now ship working prototypes in days. AI-assisted coding, automated pipelines, and on-demand infrastructure have dramatically compressed the time between idea and implementation, but one part of the process has not kept pace: quality assurance.

In many organisations, QA still operates as a sequential gate, a phase that begins only after development ends. When a feature can be built in two hours but takes two days to validate, quality stops being a safeguard and starts being a bottleneck. The tools have changed. The process hasn't.

This mismatch is not a skills gap — it's a structural one. Traditional SDLC was designed for a world where code moved slowly, and testing had time to catch up. That world no longer exists.

In this article, Oleksii Iven draws on firsthand experience to explore how AI is forcing a fundamental rethink of validation — from early scepticism to building AI-powered testing tools in production — and to introduce Continuous Validation Principles, a framework designed to keep quality in step with AI-accelerated delivery. This is Part 1 of a three-part series.

Key takeaways
  • AI doesn't kill QA — it promotes it. The role shifts from running tests to designing the systems that make quality inevitable.
  • A feature built in 2 hours shouldn't wait 2 days to be tested. Traditional SDLC wasn't built for AI-speed development — and it shows.
  • Continuous Validation flips the script: quality runs alongside development, not behind it, shrinking delivery cycles from months to days.
  • One reusable AI tool wiped out a full week of manual test data prep — built in half a day. That's not efficiency, that's a paradigm shift.
  • The future of QA isn't in executing more test cases. It's in architecting systems where quality is built in from the start.

The cloud parallel: a shift we’ve seen before

We have lived through major inflection points before: manual testing to automation, Waterfall to Agile, monoliths to microservices. One of the clearest analogies is the rise of cloud infrastructure. Many organisations didn't migrate because it was fashionable — they migrated because they were forced to.

Black Friday exposed the fragility of fixed, on-premises servers. Websites went dark. Lost revenue was measured in minutes. Cloud was the answer: on-demand infrastructure that scaled instantly and charged only for what you used.

AI is triggering a similar shift, but for knowledge work. Clients increasingly expect intelligence on demand, delivered in the moment, without weeks of planning.

Just as cloud reduced the pain of capacity planning, AI reduces the pain of complexity and routine decision-making. The pressure is structural, not optional.

4 phases of rebuilding a QA workflow around AI

This shift didn't happen overnight. It unfolded in distinct phases — not by design, but because that's how real change works.

Phase 1: Early scepticism

The initial reaction was simple and defensive: "AI doesn't understand context or business logic." Years in QA teach you that quality lives in the grey zones — undocumented assumptions, historical bugs, and user behaviour that never makes it into requirements. Too many "smart" tools had failed the moment real-world complexity appeared.

Phase 2: First experiments

Scepticism cracked the moment theory gave way to practice. A simple prompt — asking Claude to create a test plan for importing files into a contract management system — yielded an unexpected result: roughly 90% of it was correct. The structure made sense, the scenarios were relevant, and edge cases that are normally considered "advanced" were already included. The role shifted instantly from writing everything from scratch to reviewing, correcting, and enriching. Faster, and oddly more intellectual.

Phase 3: Operational integration

Once the potential was clear, occasional use wasn't enough. The next step was building a dedicated AI Testing Assistant using Python, Flask, and Playwright — a system capable of analysing web pages, generating meaningful test cases, executing them, and producing professional reports end-to-end. The point wasn't that a QA engineer had suddenly become a developer; it was that AI had bridged the gap between the two roles.
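The first stage of that pipeline — analyse a page, derive test cases from what's on it — can be sketched at toy scale. The snippet below is a minimal standard-library illustration, not the actual tool: it parses a page's form fields and emits one test-case stub per field (the sample HTML and the case-naming scheme are assumptions for illustration).

```python
from html.parser import HTMLParser


class FormFieldCollector(HTMLParser):
    """Collect <input> fields and whether they are marked required."""

    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            self.fields.append({
                "name": a.get("name", "unnamed"),
                "type": a.get("type", "text"),
                "required": "required" in a,  # boolean attribute
            })


def generate_test_cases(page_html: str) -> list[str]:
    """Derive a basic test-case list from the form fields on a page."""
    collector = FormFieldCollector()
    collector.feed(page_html)
    cases = []
    for f in collector.fields:
        cases.append(f"{f['name']}: accepts valid {f['type']} input")
        if f["required"]:
            cases.append(f"{f['name']}: rejects empty value (required)")
    return cases


# Hypothetical page fragment standing in for a real contract form
page = """
<form>
  <input name="contract_id" type="text" required>
  <input name="start_date" type="date">
</form>
"""
for case in generate_test_cases(page):
    print(case)
```

The real assistant layers execution (Playwright) and reporting (Flask) on top of this kind of analysis step; the sketch only shows the shape of the "page in, test cases out" core.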

Phase 4: Scaled impact

This is where the real shift happened. Value stopped being measured in test cases executed or bugs logged and started being measured in how resilient the validation system is, how quickly it adapts to change, and how well it scales with the product.

Why traditional SDLC is breaking down under modern delivery speed

This transformation reflects a bigger industry problem: speed. Teams now often arrive at the second client meeting with a working prototype. Not a slide deck, not a requirements document, but a real, clickable product. With AI-assisted delivery, this is quickly becoming the norm.

The problem is that many QA processes still assume a world where code moves slowly. If developers can deliver a feature in two hours but QA needs two days to write, review, and maintain test cases, the math simply doesn't work. Quality becomes the bottleneck — not because QA is slow, but because the process is outdated.


Continuous Validation Principles (CVP): the next step beyond SDLC

In classic SDLC, testing is a phase that happens after development. Under Continuous Validation Principles, validation is a parallel stream that runs from discovery through production monitoring. The key difference is that validation evolves continuously alongside the product, adapting in real time rather than waiting for code to be 'done'.

Case study: replacing a week of work in half a day

Scenario: An enterprise risk management platform needed a bulk contract import feature via CSV files.

On paper, it seemed straightforward. In reality, it was a classic QA trap. CSV imports involved dozens of required and optional fields, strict validation rules, complex error handling, and bulk scenarios with hundreds or thousands of records. In a traditional setup, preparing test data alone would have taken a full week of manual work.

Applying CVP from day one changed the approach entirely. During discovery, the focus wasn't "how many CSV files do we need?" — it was risk. Analysing the import requirements surfaced three things quickly: distinct validation scenarios covering required fields, invalid formats, and boundary values; the need for bulk testing ranging from 50 to 1,000 records per import; and clear test categories across valid data, error cases, and edge cases.

At this stage, AI helped draft the initial test strategy — and flagged something critical: manual CSV creation would become the bottleneck before the feature had even stabilised.

That single insight changed direction completely. Instead of planning test files, the focus shifted to planning a test data generation system. Rather than manually producing 50+ CSV files, AI was used to prototype a lightweight internal tool — a CSV Test Data Generator. Not a product feature. A QA accelerator.

Using AI-assisted coding, an HTML-based tool with a visual interface was built in approximately 30 minutes. The QA role at that stage wasn't to write code — it was to validate the logic, surface missing edge cases, and shape the tool around real testing needs. That refinement took roughly 3.5 hours.

The result:

  • categorised test scenarios (valid, errors, edge cases),
  • one-click bulk generation (up to 1,000 randomised contracts per file),
  • instant download of ready-to-use CSV files.
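As a rough illustration of what such a generator involves, here is a minimal standard-library sketch. The field names, validation rules, and category mix are hypothetical, not the real tool's schema:

```python
import csv
import io
import random

# Hypothetical contract schema for the bulk-import feature
FIELDS = ["contract_id", "counterparty", "value", "start_date"]


def valid_row(i: int) -> dict:
    """A well-formed contract record."""
    return {
        "contract_id": f"C-{i:05d}",
        "counterparty": random.choice(["Acme Ltd", "Globex", "Initech"]),
        "value": round(random.uniform(1_000, 1_000_000), 2),
        "start_date": "2024-01-01",
    }


def error_row(i: int) -> dict:
    """A record that should fail format validation."""
    row = valid_row(i)
    row["value"] = "not-a-number"
    return row


def edge_row(i: int) -> dict:
    """A record probing a boundary value."""
    row = valid_row(i)
    row["value"] = 0.01
    return row


def generate_csv(category: str, count: int) -> str:
    """Produce a ready-to-import CSV string for one test category."""
    maker = {"valid": valid_row, "errors": error_row, "edge": edge_row}[category]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for i in range(count):
        writer.writerow(maker(i))
    return buf.getvalue()


# One click, one file: e.g. 1,000 randomised contracts
bulk = generate_csv("valid", 1000)
```

The actual generator wrapped this kind of logic in an HTML interface; the point of the sketch is how little code separates "a week of manual CSV preparation" from "a parameterised function call".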

In less than half a day, we replaced a week of manual work with a reusable validation asset. As the Contract Import feature evolved, validation moved in parallel. Because validation was continuous, QA caught 15 issues early that, in a traditional SDLC, would have surfaced only during UAT or even after release.

The tool that replaced a week of manual work

Quality stopped being a late checkpoint. It became a daily feedback loop.

The most important outcome wasn't speed — it was leverage. Instead of spending a full week manually preparing CSV files, four hours of focused work produced a reusable tool that generates thousands of unique test scenarios in seconds. That is the compounding advantage CVP creates: effort invested once, value returned repeatedly.

In practice, this meant three things working together: validation running in parallel with development, AI acting as a productivity multiplier, and QA operating as a quality system architect rather than a manual test executor. The goal was never to test faster. It was to redesign validation so it scales with AI-accelerated delivery.

The future of QA in an AI-accelerated world

QA is not dying. It is becoming more valuable, more strategic, more creative, and more tightly connected to product outcomes. But that value is not automatic. It depends entirely on which direction QA professionals choose to move.

The first path is continuity: keep writing test cases and running checks that AI can now generate in seconds. The second is transformation: design validation systems, guardrails, and feedback loops that scale with delivery speed, operating at the product level rather than the test-case level.

In five years, there won't be a distinct role called "AI QA Engineer." There will simply be QA engineers who can no longer imagine working without artificial intelligence, the same way no one today imagines working without a computer.

The future of quality belongs to those who choose to architect it.


FAQs

What is quality assurance in AI?

Quality Assurance in AI systematically ensures that an AI system remains reliable and accurate. It verifies that the technology upholds ethical standards throughout its entire lifecycle. Unlike traditional testing, AI QA emphasises data integrity and evaluates machine learning model performance. It aims to identify and prevent algorithmic bias. This process rigorously tests to guarantee the system safely handles unexpected inputs. After deployment, it continuously monitors to detect any drop in the model’s accuracy over time.
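The post-deployment monitoring mentioned above can be illustrated with a toy drift check: compare rolling accuracy over a recent window against a baseline. The window size, baseline, and tolerance below are arbitrary assumptions, not a recommended configuration:

```python
from collections import deque


class AccuracyMonitor:
    """Flag when rolling accuracy drops below baseline by more than a tolerance."""

    def __init__(self, baseline: float, window: int = 100, tolerance: float = 0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = wrong

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def drifted(self) -> bool:
        if not self.outcomes:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance


monitor = AccuracyMonitor(baseline=0.92)
for _ in range(80):
    monitor.record(True)
for _ in range(20):
    monitor.record(False)  # a burst of misses pulls rolling accuracy to 0.80
print(monitor.drifted())
```

Production systems would track richer signals (per-class accuracy, input distribution shift), but the principle is the same: monitoring turns "the model got worse" from an anecdote into an alert.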

What is the difference between traditional SDLC and AI SDLC?

In traditional SDLC, testing is a sequential phase that begins only after development ends. In an AI-accelerated SDLC built around Continuous Validation, validation runs as a parallel stream from discovery through production monitoring, with AI acting as a productivity multiplier and QA designing the validation systems rather than executing test cases manually.
