Preparing for a New Technical Revolution

By Elena Simperl

The impact of AI on humankind might well dwarf previous step changes in technology. There is great interest in exciting new forms of AI, such as OpenAI’s ChatGPT and Google’s Gemini, and much talk about the tremendous opportunities the foundational models behind these tools can create and their potential risks if not used responsibly.

Ever since the announcement of the UK’s AI Safety Summit last year, a lot of the discussion around how to make foundational models fit for purpose and less harmful has focused on so-called existential risks. Recent rapid advances in AI have made some experts, technology companies and regulators worried that we are soon to reach a level of AI that could result in human extinction or other irreversible world-scale disasters.

Yet I would argue that concerns about the existential risks of frontier AI are, at best, overblown and, at worst, spreading FUD –  fear, uncertainty and doubt, which could derail us from using the technology in areas where it can really make a difference. In reality, very powerful AI systems have been used for many years – we witness them every day: customer service chatbots, automatic parking in cars, face recognition at airports, product recommendations, and web search all use advanced AI, which in technical terms is not far off from the likes of ChatGPT or Dall-E. Equally, almost 18 months since generative AI reached mainstream, we have yet to experience the types of catastrophic failures some have predicted. Instead, what we’ve seen is greater awareness and investment into testing models using a range of benchmarks and approaches.

The worries about a dystopian future where malign robots take over the world make good headlines but distract us from the real risks that need addressing now. Some of the AI systems that we have already come to rely on have embedded biases that have caused real harm, exacerbating existing societal inequalities. For example, major age and race biases exist in autonomous vehicle detection systems: a person is more likely to get struck down by a self-driving car if they are young and black, as opposed to white and middle-aged. This happens because the data used by car firms to train their models can sometimes be unrepresentative and skewed towards white people in their middle age. To build public trust, technology providers need to be more open and transparent about the data they use so that it can stand scrutiny against known and unknown biases.

Regulation should require that concepts of data transparency are ingrained in all AI systems to ensure harm can be audited and addressed for the benefit of those using or affected by those systems. We must also include provision to educate the public so end users understand the capabilities and limitations of AI applications.

People and AI systems excel at different things. In a world where AI assistants surround us at work and in our daily lives, we need to to get used to new things like prompting and auditing text or media generated by AI. Organisations of all kinds now need to invest in training – to equip their teams to recognise AI opportunities and limitations. Critically, this should include not just the latest ChatGPT-style of AI, which is often too much for what organisations need, but AI technologies which have been around for at least a decade.

Moreover, every AI will only be as good as the data it is fed – organisations committed to data literacy will hence be at an advantage.

To give a practical example, a GPT-3.5 model trained on 140,000 Slack channel messages was asked to write content. The system replied, “I shall work on that in the morning.” The response reflected what users said in their work chats when asked the same thing. Instead of writing emails, blogs, and speeches as requested, the model echoed what it had “seen” in the dataset—putting it off until the next day. The model performed an entirely different function than anticipated by using a fundamentally unsuitable dataset, albeit one that superficially appeared appropriate because of its size.

The data skills gap in the UK has been highlighted as an ‘urgent problem’ by Open Access government. A report from the Alan Turing Institute said that only 27% of UK business leaders think their non-technical workforce is well-prepared to leverage new technology.

For us, the data-AI connection is undeniable. Data is the bedrock on which AI stands, and without data, there truly would be no AI. To make the most of AI’s opportunities and manage its risks, organisations will want to use their own data not just to tailor the behaviour of existing foundational models, but to build their own AI infrastructure from scratch. This will require active steps to establish the tools, guidance, and capabilities required to ensure the data is accurate and representative to support decision-making and improve productivity. Where AI is applied to high-stakes problems, for instance, in public services, additional assessments need to be put in place to remove biases in data and models and analyse broader implications, for instance, from a social inequality point of view. Whether AI technology is procured or built in-house, executive boards must commit to a responsible approach across the entire value chain – this includes the often overlooked use of gig workers contracted to test foundational models for toxicity and other evils, who tend to be hired via online platforms with opaque working conditions.

The encouraging news is that leaders are increasingly engaging with the importance of data ethics, which will help mitigate some of the most common risks when building and using AI. At the ODI, our training programmes are attracting a diverse range of leaders, from CEOs to senior civil servants, CDOs from NGOs, startup founders and VCs. These individuals are developing skills in applied data ethics, which are essential for effectively guiding their organisations through an age of rapid technological development. Equally, a lot is happening in the area of AI safety: while making an AI system safer through red teaming – an approach to identify vulnerabilities through prompts or compromised data – has its limitations, the AI community has started to pay more attention to data practices as a way to make AI safer from the start.

We are standing on the cusp of a new technological revolution that has the potential to transform all our lives for good. By increasing our data skills and democratising access to data for smaller AI companies, we can unlock its potential to deliver on the promises of a better world.

To learn more about how to prepare leaders to be data and AI-ready, sign up for the ODI’s Learning Newsletter (https://learning.theodi.org/stay-connected-with-odi-learning). Discover tutor-led and self-paced training courses (https://learning.theodi.org/courses) that equip leaders with the knowledge and skills to navigate the data and AI landscape. Learn more about the ODI’s work on data-centric AI here.

spot_img

Explore more