RAND Europe, part of the RAND Corporation, a global policy think tank headquartered in the United States, conducts policy analysis and research to improve policymaking and decision-making in Europe and around the world. Its work includes significant contributions in the field of artificial intelligence (AI) and emerging technologies, and its latest report adds to a growing body of work in this space.
The report, titled "Examining the landscape of tools for trustworthy AI in the UK and the US", maps and analyses the current landscape of tools and techniques aimed at developing, deploying and using trustworthy AI systems in the UK and US, and identifies challenges and opportunities for collaboration between the two countries in this area.
The report identifies and examines examples of such tools developed and deployed in the UK and US, using a mixed-methods approach including document review, expert interviews and crowdsourcing.
Key findings
- There has been a proliferation of frameworks and principles for trustworthy AI from various organisations globally, but they lack specific guidance on how to achieve trustworthiness in practice.
- Tools for trustworthy AI encompass methods, techniques and practices that can measure, evaluate, communicate, improve and enhance the trustworthiness of AI systems, addressing aspects like safety, fairness, transparency, accountability and privacy.
Challenges
According to RAND Europe, some of the main challenges in developing and deploying tools for trustworthy AI include:
- Lack of transparency in AI algorithms and potential biases, which limits the effectiveness and trustworthiness of AI systems and has led to growing uncertainty among potential end users and the public about the reliability of AI technologies.
- Lack of technical expertise and skills gaps relevant to AI innovation within organisations, constraining their ability to recognise and fully exploit the opportunities of AI technologies. Public sector organisations often lack the necessary skills and expertise to harness emerging AI tools effectively.
- Evidence gaps in evaluating AI technologies, as they are often tested in controlled environments which may not reflect real-world performance. This limits the understanding of the actual impact AI tools can have in practical applications.
- Need for robust ethical and human rights safeguards, as the use of AI-enabled technologies, particularly in sensitive areas like border security, has faced criticism for potentially undermining human rights and privacy.
- Lack of common terminologies, taxonomies, test data and benchmarking frameworks for AI tools across different organisations and countries, hindering collaboration and alignment efforts.
Recommendations
The report proposes a series of practical actions for policymakers in the UK and US to foster alignment and collaboration on tools for trustworthy AI, including:
- Establishing common terminologies and taxonomies for tools;
- Developing shared test data and benchmarking frameworks;
- Promoting open-source tools and sharing of best practices;
- Facilitating cross-border collaboration between tool developers and users; and
- Co-ordinating research funding and regulatory approaches.
The report aims to inform future bilateral co-operation between the UK and US governments on trustworthy AI and to stimulate further discussion among stakeholders as AI capabilities continue to grow. Whilst the findings and analysis in the report are not official government policy, they chime directionally with current UK and US AI regulatory policy, setting a clear path for regulators in both jurisdictions and identifying challenges and opportunities for UK-US alignment and collaboration on trustworthy AI tools.
Importance of trustworthy AI
The findings of the RAND Europe report underscore the growing importance of trustworthy AI tools for businesses operating in the UK and US markets. As AI capabilities continue to advance rapidly, companies must navigate an increasingly complex regulatory landscape to ensure compliance and mitigate risks associated with the development and deployment of AI systems.
Our team of experienced lawyers can provide valuable guidance to businesses seeking to leverage AI technologies while adhering to evolving regulatory frameworks. With expertise in areas such as data protection, intellectual property and data governance, Fladgate can assist clients in understanding and addressing the legal and ethical implications of AI adoption.
Please contact Tim Wright to discuss how Fladgate can assist.