In today’s digital world, user-centred design is key to driving meaningful innovation and advancing sustainability within product and service design. By focusing on real user needs, teams can create solutions that are both impactful and practical.
We spoke with Adam Kuras about the COVERE² project, a groundbreaking initiative aimed at transforming the management of greenhouse gas (GHG) emissions in the agri-food production sector. Standing for "Collecting, Verifying, Reporting, and Reducing Greenhouse Gas emissions," COVERE² reflects ELEKS' commitment to sustainability and innovation.
Adam shares how his team applies user-centred design to turn research into actionable solutions, align cross-functional teams, validate ideas through iterative testing, and balance creativity with practical usability.
Background & experience:
Over 9 years of experience in human-centred and digital product design. Adam leads customer experience initiatives, specialising in UX design, user research, and crafting intuitive, accessible, and engaging digital products.
How do you validate that innovative ideas truly address user needs?
Adam Kuras: This is a very interesting question because the whole project came from innovative ideas. People had varying ideas on how to create something more creative than what already exists on the market. In our case, it's all about sustainability.
If you look at the market, there are already plenty of ESG reporting tools, report generators, and consultancy services. We didn't want to become just another tool, like many others. Instead, our focus was: how can we create something truly different?
The key thing for us was to remember that innovation without a strong foundation is just guesswork. So, we had to ensure our ideas addressed real needs. That's why we applied a multi-step, user-centric design approach.
We started with assumptions, such as "people would need this" or "people would want that." We assumed companies would need help managing all the paperwork, legislation, and external pressures driving them toward sustainability. At the same time, we aimed to develop an end-to-end product that would not only assist companies in meeting EU requirements but also support their journey towards genuine sustainability.
We treated those assumptions as hypotheses and then launched market research and user research to uncover real pain points and opportunities. From there, we moved into workshops, prototyping, and testing — putting ideas in front of real users as early as possible. We also created PoCs to validate concepts before committing to full development. The goal wasn't to confirm what we wanted to hear, but to test whether the initial idea solved the problem, whether it was understandable, and whether people saw value in it.
So, based on research — not just assumptions — we went through the whole cycle: from gathering needs, to testing, to iterating, until we were confident that our innovation wasn't just a theory but something practical and truly valuable.
How do you balance the wow factor with practical usefulness?
AK: The wow factor is really powerful, as it creates emotional engagement and helps the product stand out. But it must be rooted in solving a meaningful problem. Otherwise, it's just a buzzword that fades quickly. You can have a great first impression — "wow" — but if the product isn't solving your problem, then it's worthless.
So, our approach is always to anchor that moment of wow in usefulness.
Let me give you a practical example. If we incorporate micro-interactions or animations into the product, can they enhance the interface's feel and engagement? Yes. But we only implement them once the fundamentals are in place: clear user feedback, validated information architecture, and tested flows. That way, usefulness and delight go hand in hand.
It's also about prioritisation. If a feature looks exciting but requires a lot of effort and brings little real value, we must set it aside. But when the wow factor is balanced correctly, it amplifies usability instead of competing with it — and that's what we always aim for in our product.
What principles or metrics guide your design decisions?
AK: Well, there are plenty of metrics that, in theory, you can apply to any product. As designers, we strive to integrate guiding principles with measurable outcomes. That means every design should work for all users, be accessible, and align with the overall system we are building. And when we think about the metrics, we run something like a small workshop that helps us shape our strategy and decide what kind of information we should collect — to make it practical and useful.
Early in the process, we focused on key metrics such as task completion rate, error frequency, and time on task, particularly during testing. That was very useful for usability testing. In our product, we conducted two rounds of usability testing, which proved really helpful in tracking and analysing metrics to ensure everything was fixed and working as expected.
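The three usability metrics Adam mentions are straightforward to aggregate once each test session is recorded. The sketch below is illustrative only — the `Session` fields and the five-participant data are hypothetical, not from the COVERE² study:

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One participant's attempt at a usability-test task (illustrative fields)."""
    completed: bool   # did the participant finish the task?
    errors: int       # number of errors observed
    seconds: float    # time on task

def summarise(sessions: list[Session]) -> dict[str, float]:
    """Aggregate task completion rate, error frequency, and time on task."""
    n = len(sessions)
    return {
        "task_completion_rate": sum(s.completed for s in sessions) / n,
        "avg_errors": sum(s.errors for s in sessions) / n,
        "avg_time_on_task_s": sum(s.seconds for s in sessions) / n,
    }

# Hypothetical results from a five-participant round of testing
round_one = [
    Session(True, 0, 42.0),
    Session(True, 2, 95.5),
    Session(False, 4, 180.0),
    Session(True, 1, 61.0),
    Session(True, 0, 38.5),
]
print(summarise(round_one))  # completion rate: 4/5 = 0.8
```

Comparing these aggregates between the two testing rounds is what shows whether the fixes actually worked.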
However, for the broader product structure, we need to consider something quite different, especially since we have not yet launched. So, we need to track metrics like adoption rate, retention, and something more general, such as NPS, to measure overall satisfaction.
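NPS, one of the post-launch metrics named above, has a fixed calculation: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A minimal sketch with made-up survey responses:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / n

# Hypothetical responses to "How likely are you to recommend us?" (0-10 scale)
print(nps([10, 9, 8, 7, 6, 10, 3, 9]))  # 4 promoters, 2 detractors of 8 -> 25.0
```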
Whenever we conduct research, we need to define a metric that helps us evaluate the outcome. As I mentioned, during usability testing you track one type of metric, while during beta testing or business validation you may track something completely different, such as more business-oriented metrics. Ultimately, the decision comes down to whether you should take the product development in one direction or another.
When making a design decision, I consider business priorities, metrics, and user needs, as this combination provides a solid foundation for my decision. The one thing we should avoid is making design decisions purely based on assumptions.
And, of course, there are also some general principles, like simplicity, usability, and accessibility. But they're very general. So, in the end, when working with users, you can evaluate those metrics, but you have to keep in mind that they are not black and white.
For example, what accessibility means to you may be one thing, but for someone using a wheelchair, it may be completely different. We need to consider the general principles when thinking about customer experience design. At the same time, our product requires focus on the specific metrics that matter during testing and validation.
How is collaboration organised between designers, product managers, and developers?
AK: Our project operates with excellent fluidity and efficiency. From the very beginning, when I joined, I worked closely with the product manager to align on problem-based priorities and expectations. From my perspective as a designer, the most important thing is to get everyone's perspective. Because what I'm trying to do is create the customer experience — or the overall experience.
So, I need to gather perspectives from different people. The development team focuses on technical aspects, while the product manager or project manager concentrates on business requirements. Additionally, our project includes other key roles, such as a BA, system consultants, and product owners, each contributing valuable insights.
I involve them early and collect general knowledge about what we are trying to create. And it's not only about the implementation stage — every stage of the project is supposed to be collaborative. So, whenever I organise activities like design critiques, needs discovery, or user interviews, I try to involve as many people as possible. For example, we also hold weekly general meetings.
But the most important thing is open communication. We maintain open communication channels, which we actively use whenever we need to align on specific topics. Also, whenever I prepare a report or plan an activity, I make sure to share it openly and invite others to participate. For instance, when we conducted user interviews or user testing, it wasn't just me — the whole team took part. Everyone also reviewed and stayed connected with the final report afterwards.
That way, we make sure the whole team understands what we're doing and what is happening across different areas of the project.
Which user research methods do you find most effective?
AK: It's a difficult question because the most effective research method can vary depending on the project and its stage. In every project or product, we have different stages that require different approaches. As an experience designer, my job is to know what to apply and when.
For example, during the exploratory phase, methods such as user interviews, surveys, or contextual inquiries are particularly valuable, offering the broadest insights. In our project, we conducted numerous interviews and one survey, but the number of research activities always depends on time, needs, and the specific questions we want to answer. Because at the end of the day, the most important thing is to know why you're doing research and what questions you want to ask. Without that, there's no point in doing it.
So, interviews, surveys, and sometimes contextual inquiries are the most valuable at that stage, forming a critical part of our UX consulting approach. But later, during the design stage or once we already have something tangible, usability testing becomes essential — both moderated and unmoderated — to identify friction points and see how the product or solution performs. We typically prepare tasks for users and then validate the results using both the metrics mentioned earlier and our own assessment of what works, what doesn't, and what needs improvement.
As we move closer to launch, we start to use more quantitative methods — like A/B testing, analytics, or large-scale surveys. These provide us with confidence and actionable insights for decisions made during or after the launch.
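The A/B testing mentioned here typically ends in a significance check on the two conversion rates. A common choice is the two-proportion z-test; the sketch below is a generic illustration with invented numbers, not a method or result from the project:

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates between variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical split: variant B converts 58/500 users vs A's 40/500
z, p = two_proportion_z(40, 500, 58, 500)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the difference sits right around the conventional 0.05 threshold, which is exactly the kind of borderline case where a larger sample, rather than a gut call, should decide.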
For me, though, the most effective methods are the simplest and most flexible ones: usability testing and in-depth interviews. I can't remember a project or product where I didn't use these two methods. They're easy to apply at different stages, don't necessarily require big budgets, and can be very effective if planned well.
Has there been a case where research insights drastically changed the product vision?
AK: I would say yes and no, because it depends on what we consider a drastic change. So, in one case — not in my current project but in one of my past projects — we invested a lot of time in designing a feature that we believed would be a differentiator, something outstanding. But during testing, users consistently struggled with it. The feature we thought would be a game-changer felt unnecessary and confusing compared to their usual workflow.
Rather than forcing that feature through, I prepared recommendations for change. We knew we had already invested time and effort, but since it was still before the development phase, it was easier and much cheaper to pivot. That’s the value of early-stage testing — you can validate ideas with prototypes before spending money on app development.
In this case, we decided as a team to pivot to a simpler solution that directly addressed the main frustration. The original feature was quite complex and ambitious, but the new, simpler version became a solid element of the product. After the second round of testing, users found it much easier and more comfortable to use. Maybe not “delightful,” but smooth and effective.
So sometimes, something that seems innovative and outstanding in theory doesn’t actually perform well with real users. And sometimes simplicity works best.
Because, you know, in your head you have this imagination of how people will use the solution. When you design it and talk about it, it feels increasingly exciting and convincing. But when actual people use it, they don’t see that same “wow effect” you had when creating it. They see it simply as a way to complete a task or solve a problem. And if it’s too complicated or doesn’t really help, they find it unnecessary.
What values or messages do you aim to express through the rebrand?
AK: In our current product, we underwent a full rebranding — essentially starting from scratch, as the previous version felt more like a transitional phase. With this rebrand, our goal was to signal trust, reliability, expertise, and innovation. At the same time, we wanted users to feel that the product is not only professional and dependable, but also progressive and forward-looking.
With that, the focus was on creating a true identity — one that people are proud to associate with. It is closely connected to the theme of sustainability, while also communicating stability, creativity, and openness to change, the essence of innovation. All visual elements and interactions were designed to tell a single story. This product exists to empower users, simplify complexity, and support their journey toward sustainability and a net-zero future.
Do you consider this redesign a cosmetic update or a strategic transformation?
AK: On one hand, it was a cosmetic update; on the other, it was a strategic transformation. A purely cosmetic update might only change colours or layouts — which we did — but we wanted to go deeper. Our goal was to align the product and the overall experience with the brand identity and our long-term vision.
The redesign was meant not only to elevate the visual layer but also to create an opportunity to reassess how the product communicates value and how it can scale into the future. This meant rethinking design systems, style guides, and brand books, as well as creating user flows and refining messaging, all of which should be done together as part of a broader digital transformation.
Ultimately, it was about positioning ourselves to meet future user expectations and support business growth. This process helped us not only elevate our visuals to be more professional, open, and innovative but also achieve a true strategic transformation by updating our design system and user interface in line with the new brand, ensuring a consistent and seamless experience for our users.
What major risks or obstacles do you foresee before launch?
AK: Before every launch, there are many things to cover. In our case, we were primarily focused on platform development. Then, suddenly, we had to consider everything else: marketing, launch strategy, what to deliver, and how to communicate it. We had to decide what visuals to include and how to present the story.
The main risk we identified was alignment and adoption. Adoption, to us, means making sure that users not only try the product but also truly understand its value and integrate it into their journey. Getting people on board is always a challenge. Despite all the testing and research we had done, there are always unknowns that only appear during and after launch. You can never be entirely sure that everything you've prepared will succeed immediately. That's why adoption remains a risk.
There's also the risk of alignment — ensuring that product, design, business, and marketing are fully synchronised around one strategy. Add to this the constant time pressure to create consistent communication, and it becomes even more complex.
For me, mitigating these risks comes down to proactive communication, realistic scoping, and continuous testing. We can plan everything, but we also need to make sure all the pieces are ready at the right moment.
So, in terms of major obstacles, I point to the unknowns and the sheer amount of work that falls on our relatively small team. Still, we have enough strength, determination, and willpower to handle it all.
How do you plan to gather and respond to user feedback after release?
AK: It's still, I would say, a work in progress in terms of the final touches. From my perspective, I believe we will employ a multi-channel approach. That means having analytics to show us what's happening — where people drop off, which features they navigate to, how often they return — essentially, behavioural analytics.
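Drop-off and return behaviour of the kind Adam describes are usually read off a funnel built from raw event logs. A minimal sketch, assuming a hypothetical three-step funnel and invented events (none of these step names come from the product):

```python
from collections import defaultdict

# Hypothetical raw events: (user_id, funnel_step), in the order steps were reached
events = [
    ("u1", "signup"), ("u1", "create_report"), ("u1", "submit"),
    ("u2", "signup"), ("u2", "create_report"),
    ("u3", "signup"),
]

FUNNEL = ["signup", "create_report", "submit"]

def funnel_dropoff(events):
    """For each step: unique users reaching it, and the share retained from the previous step."""
    users_at = defaultdict(set)
    for user, step in events:
        users_at[step].add(user)
    report = {}
    prev = None
    for step in FUNNEL:
        count = len(users_at[step])
        # First step has no predecessor, so retention is reported as 1.0
        report[step] = (count, count / prev if prev else 1.0)
        prev = count
    return report

print(funnel_dropoff(events))  # e.g. only 1 of 3 signups reaches "submit"
```

The step with the sharpest retention drop is where qualitative follow-up (interviews, session recordings) is most worth the effort.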
The second part is feedback. It would be valuable to gather quick, real-time insights through surveys or similar methods, and artificial intelligence, machine learning, and data science tools can help us analyse patterns and predict user behaviour, giving us actionable insights. Of course, we need to adapt, and we need to keep in mind that our product is not going to have a massive number of active users at the beginning, so our feedback collection process must reflect that reality. At this stage, structured interviews or occasional focus groups might be more effective in giving us deeper insights.
On one hand, analytics can reveal behaviour patterns. On the other hand, direct feedback helps us understand why it is happening. Both perspectives matter.
Which method works best really depends on the audience and context. Currently, we can plan activities, and analytics and surveys are likely the most common approaches. But we also have to be realistic about what is possible after launch — when we can do it, how we can do it, and in what context. That's why I would say it's still in progress, but I'm confident we should apply some of these methods — to see what's working and what's not — as part of the evaluation.
UX (User Experience) is the quality of a user’s overall interaction with a product or service, how easy, efficient, and enjoyable it is to use. UCD (User-Centred Design) is a design approach that puts users’ needs and feedback at the centre of the design process to achieve a good UX.
Sustainability is the practice of meeting our current needs without compromising the ability of future generations to meet theirs. It involves using resources responsibly, protecting the environment, and supporting social and economic well-being over the long term.