[A]I put a spell on you…what can modern AI regulators learn from the UK Witchcraft Acts?
31 October 2023 | Applicable law: England and Wales | 5 minute read
In rural England in around 1541, if you had a minor ailment, had lost an item or were in dispute with your beloved, you might have approached the local witch for assistance and, hopefully, a remedy. The method of resolution may have been, or at least appeared, supernatural: beyond explanation or understanding to most folk. Unsurprisingly, such 'cunning folk' were also accused of using their magical powers for less helpful and rather more harmful ends, resulting in the first of a number of witchcraft acts in the UK (a full 300 years after Magna Carta, for legal context).
In current times, if you have a question about your health, want more information on something or are trying to work out how to solve a dispute, you are much more likely simply to ask an AI system what to do. Such systems produce results that can seem impossibly good, although if you ask one how it does it, it will tell you the answer lies not in magic but in scientific principles.
The threat of something seemingly all-knowing, wielding unascertainable power, was terrifying to Tudor lawmakers, who duly passed the 1541 Witchcraft Act, beginning nearly 500 years of regulation. The world has moved on and our scientific understanding with it. AI now offers great potential to improve our lives, but comes with foreseeable and unforeseeable risks that we will have to navigate as the technology, and our usage and understanding of it, matures. Whilst there are already regulations in place addressing certain elements of AI (e.g. data protection, anti-discrimination, etc.), we are at the modern-day equivalent of 1541 – what can today's regulators learn from the history of regulating witchcraft to guide the next 500 years?
There are already competing approaches. The EU is in the process of introducing its own wholesale AI legislation, with a particular focus on ethics and consumer protection (and other countries, including Canada, are likely to follow a similar path). The US has issued an 'Executive Order' mainly targeting federal agencies' deployment of AI, with a particular focus on national security. The UK has opted not to introduce AI legislation, instead indicating a preference for existing regulators to issue sector-specific guidance.
The 1541 Witchcraft Act sought only to outlaw harmful practices (including magical murders and love potions), albeit with the penalty of death. Since no one understood how the magic was being performed, the legislation was effectively an outright ban on any seemingly malevolent sorcery. We can see a similar approach in EU/UK data protection regulations, which ban the sole use of AI for making significant decisions involving personal data. A key issue with some AI is that we cannot understand how a decision has been made, and therefore cannot check it for inherent bias or other errors. The decision-making process may well be more reliable and less biased than your average human, but for now at least many of us would rather trust each other than a bunch of 1s and 0s.
In 1604 a new Witchcraft Act followed, defining witchcraft more closely as being linked with demons and evil spirits. Years of terror ensued: moral panic swept the land and hundreds of innocent people (mostly women) were hanged as a result. We are already seeing something of this phenomenon, with a few large data and tech companies voicing concerns about apocalyptic risks and AI-induced extinction. Could these companies be right about the modern-day equivalent of those demons and evil spirits? If so, regulators will need to grapple with how to regulate AI without destroying the many benefits that it promises.
The Enlightenment saw a sea change in the approach to regulating the occult. The 1735 Witchcraft Act was the first that did not outlaw witchcraft itself, but rather the defrauding or deluding of ignorant persons by pretending to perform sorcery or enchantment. A major issue with generative AI today is its tendency to 'hallucinate' and produce credible-sounding statements that are simply made up – certainly capable of deceiving the unwary. Whether regulators can take on systems that produce these incorrect statements remains to be seen, but those using or relying on made-up facts generated by computer systems will need to be held responsible.
Astonishingly, the last person to be imprisoned under the 1735 Act was convicted as late as 1944. The Fraudulent Mediums Act 1951, which swiftly followed, clarified that only those acting for reward could be prosecuted; genuine exploration of spiritualism without duping clients was no longer an offence. Crucially, nothing done for the purposes of entertainment fell foul of the act either. Guidelines for AI development and use would do well to recognise that how software is used matters as much as what it can do when deciding how it should be regulated. Tools to detect disease in a healthcare setting will need a different approach to control compared to software designed to enhance or create images for use on television: the former may require accuracy, while the latter may focus more on transparency.
As we seek to demystify AI and demand transparency from the large tech and data companies dominating its use, our approach to regulation will need to change and develop. It took nearly 200 years before there was a significant change in the approach to regulating witchcraft. AI regulators are unlikely to have even 200 weeks before the technology is close to unrecognisable from what it is today. Being fleet of foot and (maybe literally) crystal-ball gazing are likely to be essential in steering our use of AI down a constructive and fruitful path, rather than one ending in chaos.
Whilst the rich seam of witchcraft legislation between 1541 and 1951 has ended (for now) with the less romantically titled Consumer Protection from Unfair Trading Regulations 2008, it is worth noting that the 1735 and 1951 acts contained barely more than 500 words each, whereas the 2008 regulations have over 5,000 words, not including the schedules. The proposal for the EU AI Act stands at just under 47,000 words. The pursuit of accuracy and comprehensiveness has come at the expense of brevity. If only there was a potion for that…