This week, OpenAI and the UK Government announced a strategic partnership aimed at delivering AI-driven economic growth and enhancing public services. The collaboration promises to bring advanced AI capabilities to various government sectors, from streamlining administrative processes to potentially transforming core public services.
We spoke with Volodymyr Getmanskyi to explore the expectations for this partnership, the challenges that lie ahead, and considerations for successfully integrating AI into public services.
How can the OpenAI-UK partnership realistically drive economic growth and public prosperity?
Volodymyr Getmanskyi: Most likely, economic growth here is a strategic goal made up of many smaller improvements. Change management for AI in government looks much like it does in large companies and corporations with many functional directions and departments: they typically start with separate, smaller improvements, such as automating procurement, optimising resources, or deploying service chatbots, and only after years can these modules be connected into a larger ecosystem that actually drives growth. So the first steps will mostly improve individual government services in terms of cost, throughput, support, predictability and planning, and value to citizens.
What specific government sectors can benefit the most from AI capabilities and expertise? What needs to happen for AI to truly transform key sectors like education, defence, and justice, and not just streamline admin tasks?
VG: In my opinion, the first sectors to benefit will be those where automation is highly feasible and there are fewer limitations or restrictions (or lower error risks), and the gains there will mostly come from automating individual tasks. Deeper transformations, for example fully autonomous AI agents acting as teachers in education, will require years of adoption and testing to understand error rates and risks, and even then may still require human review.
What should happen to accelerate this? First of all, a different level of AI agent evaluation is needed, including more mathematical and causal metrics (for example, for ethical issues and the agent's internal planning process), the possibility of large-scale simulation (with human-like, behavioural digital twins), and new approaches to AI-human collaboration, particularly around controllability.
What are the unique technical requirements for government AI deployments (security, compliance, data sovereignty) that one should be prepared to address?
VG: First of all, any government service enjoys a higher level of public trust than a commercial one. The requirements themselves are not unique, but security will be more important and demanded at a different level. Most such services also serve diverse user cohorts, not limited to proficient software or AI users, which is why UI and agent adaptability will be an additional requirement.
What are the implications of mixed messages on whether OpenAI will access government data? Many users online are worried about "handing over their data to a corporation". How can the UK protect public data while still enabling AI development within its legal frameworks?
VG: Concerns about sharing sensitive data are the flip side of internal security issues and typically have their roots there. Even now, most foundation model providers guarantee that data won't be used for any side activities (especially on a paid subscription), but the question is whether they can guarantee no data leakage. Typically they cannot, for many reasons, from human error to zero-day vulnerabilities. This is where most of the worries about data usage come from, compounded by complicated and unclear policies such as "we may use your data for compliance purposes".
So, in my opinion, data sharing and usage should be very transparent: each citizen should be able to see what they have shared, where it went and how it was consumed. On the other hand, as with any other cloud or third-party service, users are responsible for what they share and should minimise their own risks, so there should also be offline, local filters (on the mobile phone, for example) acting as a preventive mechanism that warns when data shouldn't be shared or an AI agent's request looks strange, as sketched below.
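To illustrate the kind of local, on-device filter described here, below is a minimal sketch in Python. It assumes a simple regex-based check run before a prompt leaves the device; the pattern names and thresholds are illustrative only, and a real deployment would rely on much more robust PII detection.

```python
import re

# Hypothetical local "pre-submission" filter: before a prompt is sent to an
# AI service, flag strings that look like personal or sensitive data so the
# user can review or redact them. Patterns here are illustrative, not exhaustive.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK National Insurance number": re.compile(
        r"\b[A-CEGHJ-PR-TW-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.IGNORECASE
    ),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return warnings for data that probably shouldn't be shared."""
    warnings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            warnings.append(f"Possible {label} detected - consider redacting before sending.")
    return warnings

if __name__ == "__main__":
    prompt = "My NI number is AB 12 34 56 C, can you check my benefits claim?"
    for warning in check_prompt(prompt):
        print(warning)
```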
What steps are needed to prevent AI from worsening bias, misinformation, or inequality in public services?
VG: First, limit AI agents to well-defined behaviour, limit their data usage, and force responses into a specific format that can be verified and validated (at least structured outputs), as in the sketch below. Additionally, there should be ethical evaluation and monitoring, which I've mentioned above, with well-defined and described metrics. And from another perspective, the government should invest in population-wide AI literacy, so that every citizen knows and understands these risks.
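As a minimal sketch of what "forcing a verifiable, structured response" can look like, the Python snippet below validates an agent's JSON reply against a fixed schema before the service acts on it. The field names and the confidence threshold are illustrative assumptions, not taken from any real government system.

```python
import json

# Decisions the agent is allowed to return; anything else is rejected.
ALLOWED_DECISIONS = {"approve", "reject", "escalate_to_human"}

def validate_agent_reply(raw_reply: str) -> dict:
    """Parse and validate a structured agent reply; raise if it is malformed."""
    reply = json.loads(raw_reply)  # must be valid JSON, not free text

    if set(reply) != {"decision", "reason", "confidence"}:
        raise ValueError(f"Unexpected fields: {sorted(reply)}")
    if reply["decision"] not in ALLOWED_DECISIONS:
        raise ValueError(f"Decision outside defined behaviour: {reply['decision']}")
    if not isinstance(reply["reason"], str) or not reply["reason"].strip():
        raise ValueError("A human-readable reason is required for audit.")
    if not (isinstance(reply["confidence"], (int, float)) and 0.0 <= reply["confidence"] <= 1.0):
        raise ValueError("Confidence must be a number between 0 and 1.")

    # Low-confidence answers are routed back to a human reviewer.
    if reply["confidence"] < 0.8:
        reply["decision"] = "escalate_to_human"
    return reply

if __name__ == "__main__":
    sample = '{"decision": "approve", "reason": "Meets eligibility criteria.", "confidence": 0.65}'
    print(validate_agent_reply(sample))  # escalated due to low confidence
```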