
GPT-4 takes a look at the Bletchley Declaration on AI safety for us, now that the NZ govt is joining it

Bletchley Park, as rendered by DALL-E 3 AI.

The NZ government is joining the UK’s Bletchley Declaration on Artificial Intelligence (AI) Safety, according to a media release from Judith Collins, the Minister of Science, Innovation and Technology and Minister for Digitising Government.

As with many verbose government announcements dealing with technology, it can be rather hard to work out what it all means. Luckily, we live in the immediate future, which is 2024, and we have AI to help us understand such complicated matters.
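For readers who want to try the same exercise themselves, here is a minimal sketch of such a query using the OpenAI Python SDK. The prompt wording and the declaration_text placeholder are our own illustration of how an analysis like the one below can be produced, not necessarily the exact request behind it.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Placeholder: paste the full text of the Bletchley Declaration here.
declaration_text = "..."

response = client.chat.completions.create(
    model="gpt-4",  # the "standard GPT-4 model" referred to below
    messages=[{
        "role": "user",
        "content": (
            "Critically analyse the following declaration, focusing on "
            "clarity, substance and enforceability:\n\n" + declaration_text
        ),
    }],
)

print(response.choices[0].message.content)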

Here's what OpenAI's ChatGPT, powered by the standard GPT-4 model, thinks of the Bletchley Declaration:


An analysis of this document raises several concerns around clarity, substance, and the effectiveness of the proposed measures. 

First, the declaration’s language appears highly abstract and lacks concrete steps for implementation. Phrases such as "human-centric, trustworthy and responsible AI" are repeated but not well defined. Without a clear explanation of how to operationalise these terms, the document risks becoming another example of lofty rhetoric without tangible outcomes. This creates ambiguity, making it difficult for stakeholders to understand how to move forward.

Second, there’s a heavy reliance on international cooperation, which often sounds idealistic but may be difficult to achieve in practice. The document assumes all countries will align their AI goals and ethical standards, but geopolitical differences and varied levels of technological advancement mean that universal cooperation may face significant roadblocks. Countries like China and the US, which have distinct approaches to AI governance, might not fully comply with these global principles, potentially causing fragmentation in regulations.

The focus on "frontier AI" models also highlights a major risk. By stressing the capabilities of the most advanced AI models, the document shifts attention away from the risks posed by current AI systems already in use. Everyday AI tools, which have clear social and economic impacts, seem somewhat overlooked. The attention on frontier models may misdirect resources away from addressing immediate harms caused by AI in sectors like policing, hiring, or healthcare.

Another issue is the vague approach to governance. Although the declaration speaks of a "pro-innovation and proportionate governance" framework, it doesn’t provide any guidance on what such regulation should look like. The call for "classifications and categorisations of risk based on national circumstances" could allow countries to interpret risks loosely, leading to insufficient protections in some regions. It could also be exploited by governments to regulate AI selectively, allowing misuse of technology in politically sensitive areas.

Moreover, while there is emphasis on developing "context-appropriate transparency," the practical application of this transparency is unclear. The declaration offers little explanation about how private companies will be held accountable for harmful AI developments. With AI research and development dominated by private firms, vague guidelines on transparency may allow these actors to continue operating with minimal oversight. The lack of concrete penalties or enforcement mechanisms undermines the declaration’s aim to mitigate risks.

A particularly striking omission is the lack of any mention of financial and political conflicts of interest. Given the rapid acceleration of AI investments, particularly by large tech firms, there’s a risk that powerful actors could shape the governance frameworks to suit their own interests. Addressing this dynamic would have added depth to the discussion about how AI safety can be ensured without being compromised by commercial incentives.

Lastly, while the document is framed as inclusive and forward-thinking, there is minimal engagement with civil society perspectives. The declaration acknowledges the need for "inclusive AI" and mentions the digital divide, but doesn't elaborate on how it intends to involve marginalised communities, who are often disproportionately affected by AI biases and risks. Without meaningful mechanisms for public participation, the process risks being dominated by government and industry insiders.

In conclusion, the declaration, though well-intentioned, lacks the specificity, enforceability, and stakeholder engagement necessary to meaningfully address the complexities of AI safety. It relies too heavily on international cooperation, lacks detail on governance and accountability, and doesn't fully consider the immediate harms of existing AI systems or the influence of powerful commercial interests.


2 Comments

Amazing


I asked Claude if the ChatGPT review is reasonable and it replied: "Would you like me to analyze any particular aspect of the review's methodology in more detail, or would it be helpful if you shared the text of the Bletchley Declaration itself for comparison?"
