How governments can shape AI policies through procurement

Rhian & Tom | June 2024

In the month that the OECD (Organisation for Economic Co-operation and Development) published a paper with the first clear definitions of AI incidents and hazards, it is useful to look at how governments around the world are approaching the challenges of this rapidly evolving technology.

Politicians are faced with the challenge of putting in place regulatory frameworks to ensure the safety of AI-based research and use, while not stifling innovation. Some countries are calling for global agreements on its development and implementation, while others are putting their own guidelines in place, recognising that reaching global agreement can take a long time and risks lagging behind new developments.

So, is artificial intelligence really worthy of a regulatory approach that is tailored specifically to this technology? Some sceptics argue that both the dangers and the benefits visible so far have been overhyped. However, other people have issued dire warnings about the potential of AI to marginalise groups of people, entrench economic disadvantage or cause mass unemployment. In 2023, more than 20,000 people, including high-profile CEOs, AI researchers and philosophers, even signed an open letter proposing a moratorium on AI development.

The fears expressed in this letter may seem overblown, and it is also important to remember that governments have tools at their disposal beyond the blunt instrument of regulation. In many cases, the public sector can encourage the type of projects it wants to see developed, and discourage those that could cause damage, by engaging with development teams in a more proactive manner.

There are two main tools at their disposal:

  • Grants and funding programmes, such as the AI Diagnostic Fund
  • Procurement frameworks that encourage innovation within the constraints of particular guardrails

It is important to remember that when we talk about AI, in most cases we are not talking about the type of generative AI products that many of us are used to seeing in our daily lives. These are built on large language models (LLMs) trained on huge quantities of existing creative content so that they can answer questions or generate images, videos or text based on their training data.

Instead, much development and research is taking place below this very visible tip of the AI iceberg, in business-to-business tools (such as recruitment tools and CRM platforms) or in laboratory settings, such as medical diagnostic aids.

The applications that result from this research are generally brought to market by private-sector companies, even where they are used in the public sector, which is why procurement frameworks are such a powerful influence. Examples of third-party AI tools in the UK's public sector include the Outmatch AI-assisted recruitment tools used by HMRC or the Brainomix tools used by the NHS to assist in the diagnosis of strokes.

Government departments are, of course, also investigating further uses for the technology. As Jim Harra, First Permanent Secretary and Chief Executive of HMRC, explained in a recent interview:

“We've been testing using generative AI to write up the summary at the end of a helpline call, which saves the advisor from having to do that and enables them to get on to helping the next customer.”

Procurement

When government departments procure third-party solutions, whether or not that is the intention, it can help shape the way we all think about how these technologies are used. Even if procurement of a specific tool is not seen as an endorsement, it can show the direction of thinking within a particular department, and thus have a more far-reaching influence than the adoption of the same tool in the private sector.

The UK Government was one of the first in the world to draw up specific guidelines for AI procurement in the public sector, which were updated earlier this year with a policy note about improving transparency of AI use.

However, procurement ambitions often differ from the realities. Big contracts are often out of reach for all but a few companies, meaning there is frequently little choice about whose approach to adopt; budgets are tight, and there is pressure to choose the cheapest option. These factors make it hard for under-pressure teams to fulfil their ambitions when commissioning key pieces of digital infrastructure.

One way forward is to take a 'little and often' approach to procurement, rather than going all in on a single silver-bullet solution, and to ensure that whatever is commissioned is developed through a process of human-centred co-design, shaping solutions to the needs and realities on the ground.

These procurement guidelines should be seen in the context of last year's White Paper, A Pro-Innovation Approach to AI Regulation.

Of course, this is a global challenge for the public sector rather than a national one, and one that governments around the world are wrestling with. The European Commission offers model contractual clauses to public buyers to ensure a harmonised approach, while the United States has both federal and state guidelines: California's interim guidelines for the procurement of generative AI tools, for example.

All these papers make fascinating reading for public-sector suppliers, as well as for anyone with an interest in where these technologies, and our society, are headed.

Asking for help

Want to find out more?

Tell us about your big idea or knotty problem today. We are always happy to have a chat!