On 29 December 2023, on the floor of the New York Stock Exchange, top tech analyst Dan Ives declared 2024 to be the “year of AI”. This article looks at what we can expect from artificial intelligence (AI) in 2024.
Race to regulate
2024 will be the year regulators step in to ensure that the AI systems in use are safe, transparent, traceable, non-discriminatory and environmentally friendly. In 2023, regulators came to understand the risk of being outpaced by rapidly evolving AI technology and the urgent need to respond.
The EU is spearheading AI regulation with its proposed landmark “AI Act”, which aims to be the world’s first comprehensive AI law and to establish a legal framework for regulating AI systems. On 9 December 2023, the European Parliament reached a provisional agreement with the European Council on the Act; its text must now be formally adopted by both the Parliament and the Council to become EU law. It is hoped that the Act will be finalised in 2024.
The legislation would apply to providers placing AI systems on the EU market and takes a risk-based approach: AI systems that can be used in different applications will be analysed and classified according to the risk they pose to users, with the level of regulation varying according to the level of risk posed by the AI system.
In contrast to the approach adopted by EU regulators, the UK has taken a sector-led approach to AI regulation, as outlined in its March 2023 white paper. It is hoped that in 2024 sector-specific regulators will provide tailored recommendations for sectors including finance, healthcare, competition and employment. The UK government will then assess whether specific AI regulation, or an AI regulator, is required.
In the US, the regulations mandated by President Biden’s recent AI Executive Order will also come into effect in 2024.
On 1 November 2023, representatives from the EU, US, UK, China and 25 other countries signed the Bletchley Declaration, which provides for international cooperation and an inclusive global dialogue on AI. The Declaration recognises the importance of trustworthy AI and the potential dangers of certain AI models.
The pressure on regulators is obvious: the rapid pace of innovation in AI systems makes regulation a difficult task. Equally, for AI providers such as OpenAI, the contrasting approaches adopted by regulators, from the EU’s comprehensive framework to the UK’s sector-specific strategy, make the global AI market increasingly difficult to navigate.
AI Governance practices
These new regulations will require companies to comply with data privacy rules more rigorously than ever before, and to put proper policies and procedures in place to ensure that employees using regulated AI systems do not breach those rules.
A year of intense litigation on novel AI issues is to be expected, particularly in the area of copyright infringement.
In 2023, the New York Times filed one of the first copyright lawsuits against OpenAI and its partner Microsoft. The dispute concerns the use of the newspaper’s articles to train AI models, which the Times claims is a copyright violation.
Cases to watch this year, which will no doubt set much-needed precedents, include Thomson Reuters v ROSS Intelligence, Getty Images v Stability AI and Authors Guild v OpenAI Inc.
Like every sector, the legal industry will feel the impact of generative AI in 2024. The dangers of lawyers using AI tools were demonstrated in US courtrooms in 2023. US lawyers made headlines when they were sanctioned for filing a brief, drafted with the help of OpenAI’s ChatGPT, that contained six fake AI-generated case citations, misleading the court. A lawyer in Colorado was temporarily suspended in similar circumstances. The lawyer’s defence of “misunderstanding the AI technology” was deemed “no excuse” by US judges.
The strict approach adopted by US judges in sanctioning lawyers who misuse AI will no doubt be followed across the pond. Some Texas courts now require lawyers to certify either that they did not use AI to draft their filings or that a human has checked the filings’ accuracy.
AI Insurance Policies
2024 will see insurers adapt their risk-management offerings to include coverage for AI-related matters. We may see insurance companies introduce products such as an “AI hallucination policy”, offering protection against financial damages incurred through errors and failures in an AI system. Gartner predicts that this will be a profitable venture for insurance companies in 2024.
Cybersecurity and Privacy risks
AI advancements are worsening the cyber-threat landscape daily, as cybercriminals adapt and scale up tactics such as phishing and ransomware attacks to exploit these advancements.
A survey conducted by the security-awareness firm SoSafe (sosafe-awareness.com) found that:
- 82% of organisations expect cyber-threats to increase in 2024.
- 1 in 2 organisations experienced a successful cyber-attack in the past 3 years and more than a third of companies paid the ransom.
“Cyber awareness must become an integral part of everyday routine, just like fastening the seatbelt before driving.” – Major General Jürgen Setzer, Bundeswehr. Words to live by for any company entering an AI-dominated 2024.
It is likely that 2024 will see high-profile and pernicious cyber incidents occur.
The AI Revolution
It is believed that, for many sectors, AI marks the dawn of the fourth industrial revolution. Many fear that this AI-driven revolution will lead to the loss of countless jobs. However, UCD Smurfit Graduate Business School associate professor Alessia Paccagnini believes it is a matter of change rather than the destruction of jobs: “By giving routine work to machines, people can focus on higher-level tasks that involve creativity, critical thinking, problem solving and emotional intelligence. Because of this, jobs are changed instead of going away.”
This is reflected in the regulations proposed by the EU Parliament, whose priority is that AI systems should be overseen by people rather than by automation.
2024 will be an AI-centric year, and the debate around regulating artificial intelligence has intensified worldwide. It is only January, and we have already seen Google release its latest AI model, “Gemini”, while ChatGPT has also received an upgrade. The AI race is on and shows no signs of slowing down; competition for profit and global AI supremacy is at an all-time high.
AI regulation will have to strike a balance between protecting against the dangers of AI and leveraging its opportunities. AI’s expansion into areas such as healthcare, finance, surveillance and even warfare has eroded public trust, prompting countries to introduce regulations.
“The stakes are immense, but the possibilities are also astounding. 2024 promises to be a pivotal year as society grapples with AI’s double-edged sword.” – Forbes magazine.