Last week the Society for Computers and Law hosted Professor Barry O'Sullivan's excellent overview of the technologies that make up artificial intelligence. The report is now up on the SCL website.

Perhaps surprisingly for a technologist, Barry is keen to see more lawyers engaged in the development and deployment of AI. Not only will that help ensure it is done responsibly, he says, but lawyers who understand the realistic capabilities and limits of AI can also focus legal and regulatory resources on the real issues, rather than mistakenly creating requirements that no AI could satisfy.

Talk of 'killer robots' and of AI beating humans at board games is all the rage, but inaccuracy, lack of explainability and bias are among the real concerns associated with AI.

AI can be used for good, but it can also be 'weaponised'. It can even be 'hacked' through so-called adversarial examples: subtle alterations to the appearance of objects or people that fool the AI, without actually interfering with the system itself.
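To make that 'hacking' point concrete, here is a minimal sketch of an adversarial perturbation against a toy classifier. The tiny logistic-regression model, its weights and the perturbation budget are all illustrative assumptions of mine, not anything from Barry's talk; the same principle is what lets subtly altered images fool far larger real-world systems.

```python
# A minimal sketch of an adversarial perturbation: tiny, targeted changes
# to an input that push a model's prediction the wrong way, without
# touching the model itself. The model and data here are illustrative.
import numpy as np

def predict(w, b, x):
    """Probability the toy classifier assigns to the 'positive' class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # pretend-trained weights (an assumption)
b = 0.0
x = rng.normal(size=100)   # an input the model currently classifies

# Fast-gradient-sign-style attack: nudge every feature slightly in the
# direction that pushes the prediction towards the opposite class.
# For this linear model, the gradient of the logit w.r.t. x is just w.
eps = 0.1                  # small per-feature change: "quite subtle"
direction = -np.sign(w) if predict(w, b, x) > 0.5 else np.sign(w)
x_adv = x + eps * direction

print(f"prediction before: {predict(w, b, x):.3f}")
print(f"prediction after:  {predict(w, b, x_adv):.3f}")
print(f"largest change to any one feature: {np.max(np.abs(x_adv - x)):.3f}")
```

Each individual feature moves by at most 0.1, yet the prediction shifts sharply: small coordinated changes add up, which is why a few carefully placed stickers or pixels can mislead a vision system.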

So we need to challenge the use of AI where, for example, the consequences of a false positive or a false negative are fatal, or result in the denial of fundamental rights or compensation.
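A hypothetical worked example shows why false positives deserve such scrutiny: when an AI screens for something rare, even a system that is right 99% of the time can be wrong for most of the people it flags. All of the numbers below are illustrative assumptions, not figures from any real deployment.

```python
# Why false positives matter: a seemingly accurate AI screening tool,
# applied to a rare event, can be wrong most of the time it raises a
# flag. Every number here is an illustrative assumption.
prevalence = 0.001          # 1 in 1,000 people actually match the condition
sensitivity = 0.99          # the AI flags 99% of true cases
false_positive_rate = 0.01  # and wrongly flags 1% of everyone else

population = 1_000_000
true_cases = population * prevalence                            # 1,000 people
true_flags = true_cases * sensitivity                           # ~990 correct flags
false_flags = (population - true_cases) * false_positive_rate  # ~9,990 wrong flags

precision = true_flags / (true_flags + false_flags)
print(f"Flags raised: {true_flags + false_flags:,.0f}")
print(f"Share of flags that are correct: {precision:.1%}")  # roughly 9%
```

Roughly nine in every ten flags in this scenario point at the wrong person, which is exactly why deployments with serious consequences need mechanisms for challenge and review.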

Regulation should focus on certifying how AI is developed, and on transparency that enables us to understand its shortcomings, so that we can decide where it can appropriately be deployed.

I've written elsewhere about the use and misuse of AI, and I'm looking forward to exploring more aspects of its development and deployment at future SCL events in Ireland.

Stay tuned!

Update: In a report released on 26 September, the Oliver Wyman Forum found that no city on Earth is ready for the disruptive effects of artificial intelligence. The report assesses cities against four criteria: their understanding of AI-related risks and their corresponding plans; their ability to carry out those plans; a reliable asset base; and the direction in which the city is heading.