Interview

Expert Analysis: What the OpenAI-UK Partnership Really Means for Government AI Implementation

This week, OpenAI and the UK Government announced a strategic partnership aimed at delivering AI-driven economic growth and enhancing public services. The collaboration promises to bring advanced AI capabilities to various government sectors, from streamlining administrative processes to potentially transforming core public services.

We spoke with Volodymyr Getmanskyi to explore the expectations for this partnership, the challenges that lie ahead, and considerations for successfully integrating AI into public services.

Meet the interviewee
Volodymyr Getmanskyi
Head of Artificial Intelligence Office

Background & experience:

  • Over 15 years of practical experience in advanced data analysis and modelling. Currently manages a large AI team while providing presales and delivery support for complex AI implementations.
  • Technical expertise encompasses the full spectrum of AI technologies relevant to government applications: NLP for document processing, computer vision for security and monitoring systems, and predictive modelling for policy planning and resource optimisation.

How can the OpenAI-UK partnership realistically drive economic growth and public prosperity?

Volodymyr Getmanskyi: Most likely, "economic growth" here is a strategic goal made up of many smaller improvements. AI-related change management in government looks much the same as in huge companies and corporations with many functional directions and departments. They typically start with separate, smaller improvements, such as procurement automation, resource optimisation and service chatbots, and only after years can these modules be connected into a larger ecosystem that actually produces growth. So the first steps will mostly improve individual government services in terms of cost, throughput, support, predictability and planning, and value to the public.

What specific government sectors can benefit the most from AI capabilities and expertise? What needs to happen for AI to truly transform key sectors like education, defence and justice, rather than just streamlining admin tasks?

VG: In my opinion, the first sectors to benefit will be those where automation is highly feasible and there are fewer limitations and restrictions (or lower error risks), and the gains there are mostly about automating discrete tasks. Deeper transformation (for example, fully autonomous AI agents acting as teachers in education) requires years of adoption and of testing to understand error rates and risks, and even then it may still require human review.

What needs to happen to accelerate this? First of all, a different level of AI agent evaluation, including more mathematical and causal metrics (for example, for ethical issues and the agent's internal planning process), large-scale simulation capabilities (with human-like, behavioural digital twins), and new approaches to AI-human collaboration (controllability).
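As a toy illustration of the simulation idea, scripted "digital twin" interactions could be replayed against an agent under test and aggregated into per-cohort error rates. The agent, personas and expected answers below are invented stand-ins, not a real evaluation framework:

```python
# Toy sketch of simulation-based agent evaluation. The agent, the
# personas and the expected answers are all invented for illustration.

def toy_agent(question: str) -> str:
    """Stand-in for the AI agent under test."""
    return "answer" if "eligible" in question else "escalate_to_human"

# Behavioural "digital twins": scripted users from different cohorts.
personas = [
    {"cohort": "digitally_confident",
     "question": "Am I eligible for this grant?",
     "expected": "answer"},
    {"cohort": "assisted_digital",
     "question": "I do not understand this form",
     "expected": "escalate_to_human"},
    {"cohort": "assisted_digital",
     "question": "Am I eligible? Please decide everything for me",
     "expected": "escalate_to_human"},
]

def error_rates(agent, personas):
    """Fraction of wrong responses per user cohort."""
    outcomes = {}
    for p in personas:
        wrong = agent(p["question"]) != p["expected"]
        outcomes.setdefault(p["cohort"], []).append(wrong)
    return {cohort: sum(wrongs) / len(wrongs)
            for cohort, wrongs in outcomes.items()}

rates = error_rates(toy_agent, personas)
```

Even this toy run surfaces the kind of signal the interview points at: the agent performs well for confident users but mishandles a vulnerable-user request that should have been escalated, so the cohorts show different error rates.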

 

What are the unique technical requirements for government AI deployments (security, compliance, data sovereignty) that one should be prepared to address?

VG: Any government service, first of all, enjoys a higher level of trust among the population than any commercial service. The requirements themselves are not unique, but security will matter more, or be required at another level. Most such services also serve diverse user cohorts, not limited to proficient software and AI users, which is why UI and agent adaptability will be an additional requirement.


What are the implications of mixed messages on whether OpenAI will access government data? Many users online are worried about "handing over their data to a corporation". How can the UK protect public data while still enabling AI development within its legal frameworks?

VG: Sensitive data sharing concerns are the flip side of internal security issues and typically have their roots there. Even now, most foundation model providers guarantee that data won't be used for any side activities (especially on a paid subscription), but the question is whether they can guarantee no data leakage. Typically, they can't, for many reasons, from human error to zero-day vulnerabilities. This is where most of the worries about data usage come from, compounded by complicated and unclear policies like "we may use your data for compliance purposes".

So, in my opinion, data sharing and usage should be very transparent: there should be information available on what a specific citizen has shared, where it went and who consumed it, and so on. On the other hand, as with any other cloud or third-party service, users are responsible for what they share, so to minimise their own risks there should also be offline, local (on the mobile phone, etc.) filters and prevention mechanisms that warn when data shouldn't be shared or an AI agent's request looks strange.
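A minimal sketch of such a local prevention mechanism might scan and redact input before it ever reaches an LLM. The patterns and placeholder names below are illustrative assumptions, not a production PII detector:

```python
import re

# Hypothetical patterns for a few common UK identifiers; a real
# deployment would use a maintained PII-detection library with far
# broader coverage than these three examples.
PII_PATTERNS = {
    "ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> tuple:
    """Replace detected PII with placeholders; report what was found
    so the user can be warned before the text is sent anywhere."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()}]", text)
    return text, found

clean, warnings = redact("My NI number is AB123456C, contact jo@example.com")
```

Running locally (on-device) matters here: the raw text never leaves the phone, and the user sees a warning listing which identifier types were caught before deciding whether to send the redacted version.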

 

What steps are needed to prevent AI from worsening bias, misinformation, or inequality in public services?

VG: First, limit AI agents to well-defined behaviour, limit data usage, and force a specific response format that can be verified and validated (at least structured outputs). Additionally, there should be ethical evaluation and monitoring, which I've mentioned above (with well-defined, well-described metrics). And from another perspective, the government should invest in the population's AI literacy, so that every citizen knows and understands such risks.
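A minimal sketch of forcing and validating a structured response, assuming a hypothetical benefits-eligibility assistant; the field names and allowed values are invented for illustration, and a real service would pair this with a formal schema and audit logging:

```python
import json

# Hypothetical response contract for an eligibility assistant.
# Anything outside this contract is rejected rather than shown to users.
ALLOWED_DECISIONS = {"eligible", "ineligible", "needs_human_review"}
REQUIRED_FIELDS = {"decision": str, "reasons": list, "confidence": float}

def validate_response(raw: str) -> dict:
    """Parse a model reply and reject anything off-contract."""
    data = json.loads(raw)  # raises ValueError on non-JSON output
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"missing or mistyped field: {field}")
    if data["decision"] not in ALLOWED_DECISIONS:
        raise ValueError(f"decision outside allowed set: {data['decision']}")
    if not 0.0 <= data["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return data

reply = ('{"decision": "needs_human_review", '
         '"reasons": ["income unverified"], "confidence": 0.62}')
result = validate_response(reply)
```

The point of the closed `ALLOWED_DECISIONS` set is that a free-text answer cannot slip through: the agent either produces one of the agreed outcomes, with reasons that can be audited, or the response is discarded and escalated.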


FAQs

What new skills will teachers need to stay relevant in an AI-powered classroom?

Teachers will need digital literacy, data interpretation skills, and the ability to evaluate AI tools critically. Equally important will be soft skills such as adaptability, emotional intelligence, and the ability to guide students in the ethical and thoughtful use of AI.

