In the third and final piece of this series, Oleksiy Iven explores how the Claude Extension for Chrome changes that equation for quality assurance teams — and why it's more than another AI tool. If you missed the previous parts:
- The End of Traditional SDLC: Rethinking QA in the Age of AI (Part 1)
- The End of Traditional SDLC: A Working AI QA Prototype and What It Means for the Team (Part 2)
Key takeaways
- Claude Extension is not a chatbot — it reads the live DOM, sees real element states, and catches console errors in real time, giving QA teams a live view of what's actually happening on the page.
- Step Recording lets you walk through a test scenario once and replay it indefinitely — with optional voice narration at each step, no manual re-execution needed.
- A full QA test case execution — including a structured report with observations and recommendations — can be triggered by a single prompt, no manual clicks required.
- From a single recorded manual test, Claude generates a complete, production-ready Playwright/Cypress automation framework — page objects, fixtures, CI config, and a README — in under 30 minutes.
- QA time savings are substantial: 75% faster on E2E test execution, up to 98% on test data preparation.
What Claude Extension really is
Claude Extension is a Chrome browser extension that brings Claude directly into your workflow, but its capabilities extend well beyond a sidebar chat.
The key distinction: Claude sees your current page. It has access to the DOM, analyses UI structure, reads element states, and catches console errors. This isn't an AI you describe a problem to; it's artificial intelligence that observes alongside you, in the same moment:
- Sees the page — analyses DOM, UI structure, and element states in real time
- Takes actions — clicks elements, enters text, navigates between pages like a real user
- Voice input — commands can be dictated; Claude executes and narrates each step aloud in parallel
- Record and replay — records a sequence of steps that can be replayed exactly as recorded
- Self-documenting — after execution, produces a ready-made report: steps, results, deviations
- Generates automated tests — writes production-ready Playwright/Cypress tests directly from executed steps
Step Recording: capture every action and replay it on demand
One of the most powerful modes is Step Recording. You walk through a scenario together with Claude — it captures every step: which page you opened, which field you filled, which button you clicked, how the system responded. That recording can then be replayed, used as the foundation for an automated test, handed to a colleague without any setup, or run with voice narration where Claude talks through each step during playback.
A regression scenario that used to take 30 minutes every time now runs in a single click. During a client demo or onboarding, Claude doesn't just execute the steps, it explains what's happening and why. A live walkthrough, with zero preparation required.
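Conceptually, a recording is just an ordered list of actions plus a replay loop. The extension's internal format is not public, so the step shape, the `PageLike` interface, and the `replay` helper below are illustrative assumptions, not its actual API:

```typescript
// Hypothetical sketch of record-and-replay; the Claude Extension's real
// recording format is not public, so these types are assumptions.
type RecordedStep =
  | { action: "navigate"; url: string }
  | { action: "fill"; selector: string; value: string }
  | { action: "click"; selector: string };

// Minimal surface a replay target must satisfy: a real browser page
// in practice, or a stub when testing the replay logic itself.
interface PageLike {
  goto(url: string): Promise<void>;
  fill(selector: string, value: string): Promise<void>;
  click(selector: string): Promise<void>;
}

// Replays a recording step by step, optionally narrating each action.
async function replay(
  steps: RecordedStep[],
  page: PageLike,
  narrate?: (message: string) => void
): Promise<void> {
  for (const step of steps) {
    switch (step.action) {
      case "navigate":
        narrate?.(`Opening ${step.url}`);
        await page.goto(step.url);
        break;
      case "fill":
        narrate?.(`Filling ${step.selector}`);
        await page.fill(step.selector, step.value);
        break;
      case "click":
        narrate?.(`Clicking ${step.selector}`);
        await page.click(step.selector);
        break;
    }
  }
}
```

The optional `narrate` callback mirrors the voice-narration mode: each step announces what it is about to do before doing it.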
Three real-world cases
Case 1: From test case to report without a single manual click
By pasting a standard test case spec and prompting "You are QA with 10 years of experience, execute this test and prepare the report," Claude independently ran all verification steps on a live enterprise system — exact match, wildcard search, sort order, edge cases — and delivered a structured report with observations and a recommendation to extend coverage for special characters. Ready to paste directly into a test management tool.
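A report like this one maps naturally onto a small typed structure plus a renderer that emits paste-ready markdown. The field names below are an assumption for illustration, not the extension's actual output schema:

```typescript
// Illustrative only: these field names are an assumption, not the
// extension's actual report schema.
interface StepResult {
  step: string;          // e.g. "Exact-match search"
  status: "pass" | "fail" | "blocked";
  observation?: string;  // anything noteworthy seen during execution
}

interface TestReport {
  testCase: string;
  results: StepResult[];
  recommendations: string[];
}

// Renders a report as markdown, ready to paste into a test management tool.
function renderReport(report: TestReport): string {
  const lines = [`## ${report.testCase}`];
  for (const r of report.results) {
    const note = r.observation ? ": " + r.observation : "";
    lines.push(`- [${r.status.toUpperCase()}] ${r.step}${note}`);
  }
  if (report.recommendations.length > 0) {
    lines.push("### Recommendations");
    for (const rec of report.recommendations) lines.push(`- ${rec}`);
  }
  return lines.join("\n");
}
```

Keeping the structure explicit also makes it easy to diff two runs of the same test case, rather than comparing free-form prose.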
Case 2: Claude has eyes and a notepad in a complex system
With a single instruction to navigate through the system and document everything along the way, Claude produced a detailed UI description at every navigation level — column names, badge colours, status indicators, contract data structure, and a complete navigation trail. The output serves as a ready-made draft for test case documentation or as onboarding material for a tester encountering the system for the first time.
Case 3: From manual test to automation framework
Because Claude Extension captures real DOM interactions during test execution — which selectors fired, the exact sequence, which assertions passed — the generated Playwright code isn't invented from scratch. Every locator is already verified. The output: a complete project with package.json, playwright.config.ts, Page Object Models, 15+ tagged test specs, test data fixtures, CI/CD configs for GitHub Actions, GitLab CI and Jenkins, a Dockerfile, and a README.
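Of those artefacts, `playwright.config.ts` anchors the rest of the project. The sketch below uses Playwright's real `defineConfig` API, but the specific values — base URL, retry counts, a single Chromium project — are assumptions about a plausible generated config, not a verbatim copy of Claude's output:

```typescript
// playwright.config.ts: a plausible sketch of a generated config,
// not the extension's verbatim output.
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  testDir: "./tests",
  fullyParallel: true,
  // Retry flaky tests in CI only; fail fast locally.
  retries: process.env.CI ? 2 : 0,
  reporter: [["html"], ["list"]],
  use: {
    // Assumed env var so the same suite runs against any environment.
    baseURL: process.env.BASE_URL ?? "http://localhost:3000",
    // Capture a trace only when a retry happens, to keep runs light.
    trace: "on-first-retry",
  },
  projects: [{ name: "chromium", use: { ...devices["Desktop Chrome"] } }],
});
```

Because every locator in the generated specs was already exercised during the recorded run, the config's retry budget is a safety net rather than a crutch for guessed selectors.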
Patterns for effective use
What works best
- Role-based prompts — "QA with 10 years of experience" adjusts depth and reasoning
- Explicit deliverables — "prepare the report," "create automation," "document the UI"
- Context injection — paste requirements and acceptance criteria directly into the prompt
- Iterative refinement — the first result is a draft; each iteration sharpens the output
What works less well
- Vague prompts — "test this page" produces surface-level results
- Missing context — "verify the data is correct" without defining what correct means
- Expecting perfection on the first attempt — Claude Extension is a collaborator, not an oracle
ROI: the numbers
| Task | Without Claude Extension | With Claude Extension | Time saved |
|---|---|---|---|
| E2E test case execution | 30–45 min | 5–10 min | 75% |
| Automation framework creation | 2–3 days | 15–30 min | 95% |
| UI navigation documentation | 1–2 hours | 5 min | 90% |
| Regression smoke run | 2–3 hours | 20–40 min | 80% |
| Test data preparation | 1 week | 4 hours | 98% |
The bottom line
Not long ago, QA meant manually stepping through regression suites, maintaining fragile automation scripts, and writing test documentation nobody ever read. That isn't ancient history. But here's what's easy to miss: the shift happened gradually, then all at once.
Today, Claude sees a page the same way you do. It can run a test, document navigation through a system no one ever wrote down, and turn a manual test into an automation framework — before you've finished writing the ticket.
The tools have changed. The goal hasn't: ship software users can trust.
FAQs
What is the traditional SDLC?
The Traditional Software Development Life Cycle (SDLC) follows a structured, sequential process: requirements gathering, design, development, testing, and deployment. Roles are specialised, hand-offs between phases are clearly defined, and each stage is driven primarily by human professionals.
What is AI-SDLC?
AI-SDLC (Artificial Intelligence Software Development Life Cycle) is a software development approach that incorporates AI tools and automation at every stage, from requirements gathering and design to coding, testing, deployment, and maintenance. Instead of using AI as a separate tool, AI-SDLC makes it an integral part of the workflow to reduce manual effort, speed up delivery, and enhance the quality and coverage of outputs at each phase.