
AI literacy: Demystifying AI, AI Ethics, and the Law - Key Takeaways

Reading time: 10 mins


Introduction

RDJ recently held its inaugural AI seminar “AI literacy: Demystifying AI, AI Ethics, and the Law” at Hayfield Manor, with over 100 clients and colleagues in attendance. RDJ Partner and head of Cybersecurity, Privacy and Data Protection Ricky Kelly opened the seminar by acknowledging the broad spectrum of industry representatives in attendance, signalling the wide-ranging impact which Artificial Intelligence has had, and will continue to have, across multiple organisations.

Ricky spoke about the journey RDJ has been on to get to where we are today and acknowledged the vast amount of AI content already available, highlighting the need for the law to keep up with rapidly advancing AI developments and to demystify the area.

Speaker 1: GenAI and Large Language Models – What’s the Fuss About? – Dr. Alan Smeaton

Dr. Alan Smeaton, Professor of Computing at Dublin City University, provided food for thought on the evolution of generative AI and large language models.

Generative AI – Introduction to LLMs

Professor Smeaton expertly explained to the audience what LLMs are: statistical models of the distribution of sequences of text, used to generate more text by modelling what the language looks like. In layman’s terms, you take training data and use an algorithm to produce something new, or synthetic. Prompts are used to pull out likely sequences of text, and this is what the generative part does.
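To make the idea of a statistical model of text sequences more concrete, below is a minimal toy sketch in Python (an illustration of the principle, not anything presented on the day): it counts which word follows which in some training text, then samples a continuation from a prompt word. Real LLMs learn these statistics with neural networks over billions of parameters, but the generative principle is the same.

```python
import random
from collections import defaultdict, Counter

# Toy "language model": count which word follows which in the training text.
training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(prompt_word, length=5):
    """Sample a continuation, one word at a time, from the learned statistics."""
    out = [prompt_word]
    for _ in range(length):
        counts = follows[out[-1]]
        if not counts:  # no observed continuation for this word
            break
        choices, weights = zip(*counts.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat"
```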

The largest LLM has less than a third of the connection points of the human brain, is trained only on text, and cannot learn. The volume of information used to train GPT-4 would be the equivalent of 650 km of shelves of books, would take 7 million years to compute on a laptop and, if printed as spreadsheets, would cover 25 times the area of Phoenix Park. By 2027, AI is expected to account for 0.5% of global electricity consumption.

Different types of LLM

Professor Smeaton contrasted what ChatGPT was, i.e. a system with no fact checking, logic, reasoning, inference or deduction, with what it is now, i.e. an LLM that can be tailored and fine-tuned. It can, in real time, identify documents from a search output and be fine-tuned for a particular session, giving different variations using your own personalised model. You can also build and train a model using the methods below:

  1. Prompt engineering – systems where the only information you get out is what you feed into them. The only parameter you can adjust is the temperature, i.e. how random or predictable the generated text is (see the sketch after this list);
  2. Fine-tuning – turning a foundation LLM into a specialised one for a particular use case by prompting the model with certain documents and adjusting the parameters;
  3. Model building – building a model from scratch with a focus on very specific areas; and
  4. “Retrieval Augmented Generation” (RAG) – a specialist LLM, such as “Harvey AI”, which has been specialised on legal text. RAG systems can point to exactly where they got their information from.
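As a simplified illustration of the temperature parameter mentioned in point 1 above (an assumption-laden sketch, not code from the talk), the snippet below shows how temperature reshapes a model’s next-token scores before one token is sampled; the candidate words and scores are made up for the example.

```python
import math
import random

def sample_with_temperature(token_scores, temperature=1.0):
    """Turn raw next-token scores into probabilities and sample one token."""
    # Lower temperature sharpens the distribution (more predictable text);
    # higher temperature flattens it (more varied, "creative" text).
    scaled = [score / temperature for score in token_scores.values()]
    max_s = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - max_s) for s in scaled]
    return random.choices(list(token_scores), weights=weights)[0]

# Hypothetical scores a model might assign to candidate next words:
scores = {"contract": 2.0, "agreement": 1.5, "banana": -1.0}
print(sample_with_temperature(scores, temperature=0.2))  # almost always "contract"
print(sample_with_temperature(scores, temperature=2.0))  # noticeably more varied
```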

Professor Smeaton closed by noting that LLMs weren’t planned or designed, they just happened, and that there is a clear opportunity for improved productivity and scale, raising the floor and allowing people to engage in new skills and tasks. There is, however, a concern that AI research is slowing down in favour of rapid AI production for big tech companies, and that the science behind understanding AI is being forgotten.

Speaker 2: Ethics and AI – Dr. Sean Enda Power

Dr. Sean Power of MTU led an insightful discussion on ethics and AI. The question of ethical AI generally rests on whether we understand AI in the first place, so building a basic understanding of how AI works is important. Dr. Power raised some interesting thoughts on what we “ought” to do with AI:

What is ethics?

  • Ethics is the difference between ‘is’, ‘can’, and ‘ought’. It is important to take apart what exactly ethics means and lay out all the elements. An ethical action is what should, or ought to, be done. Ethics involves a choice, and that choice matters; there will need to be a clear vision of what AI models “ought” to do before deciding on their use.

How does ethics apply to AI?

  • Dr. Power added that the ideal AI for human use would be both helpful and harmless. A drive to make AI smarter does not mean it will be “better”. People want to align AI with human values but, in reality, whether an AI is smart and whether it is good are two independent questions. The data used to train it might have issues around bias, exploitation, and preference.

AI safety

  • To be trusted, AI must be ethical, lawful, and robust (from both a technical and a social perspective). Even with good intentions, AI systems can falter, so they must be continuously evaluated and addressed throughout the system’s life cycle.

What can you do to ensure ethical use of AI?

  • Dr. Power further emphasised the need to pay attention to the gap between regulation and practice. Organisations can create ethical guidelines by looking at the EU’s guidelines and matching them to their own organisation. It will be important to be clear on the difference between what AI is ideally specified or claimed to do and what it actually does.

Speaker 3: Leading AI Operations: From Vision to Execution – Richard Skinner

Richard Skinner, CEO of Phased AI, spoke about the importance of leadership in bringing AI into the workplace, what is needed, and how potential failures can be avoided.

AI leadership in enterprise aligns AI with business goals. It involves translating board-level strategy, building technical understanding, aligning products, catalysing talent, and providing governance. Not every company can hire a Chief AI Officer (CAIO), but they can form an AI steering committee or working group, with representatives from across the organisation: data analysts, legal, and business stakeholders. Some key points to consider:

  1. Define goals and values: Identify the business value and productivity gains you expect from Generative AI;
  2. Form a hypothesis: Start with a clear hypothesis, for example, automating a task currently performed by a human expert;
  3. Educate stakeholders: Get buy-in and involvement from relevant people across the organisation;
  4. Test the hypothesis: Use a small sample of data to test if Gen AI can perform the identified task;
  5. Proof of concept: Run a pilot project to validate the feasibility of using Gen AI at scale with real data;
  6. Develop AI operations (AI Ops): Establish processes for managing Gen AI, including data prompts, testing, logging, and auditability (a minimal logging sketch follows this list);
  7. Evaluation and measurement: Gather user feedback, assess risk and productivity improvements, and adjust plans as needed;
  8. Manage expectations: Be realistic about the capabilities of Gen AI and potential challenges in implementation; and
  9. Strategic alignment: Consider forming a working group to ensure Gen AI aligns with your organisation's goals.
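As one hypothetical illustration of the auditability point in step 6 (not something presented on the day), a Gen AI call can be wrapped so that every prompt and response is logged with a timestamp and an ID for later review; the file name and stand-in model function below are assumptions for the example.

```python
import json
import time
import uuid

# Minimal sketch of an audit trail for Gen AI usage: every prompt and
# response is appended to a JSON-lines file so it can be reviewed later.
AUDIT_LOG = "genai_audit.jsonl"  # hypothetical log location

def call_model_with_audit(prompt, model_fn, user="unknown"):
    """Wrap any model call so its inputs and outputs are logged for auditability."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "prompt": prompt,
    }
    record["response"] = model_fn(prompt)  # model_fn is your own LLM client call
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["response"]

# Example with a stand-in model function:
print(call_model_with_audit("Summarise this contract", lambda p: "stub answer", user="legal-team"))
```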

Richard concluded by highlighting that leadership at a strategic level within an organisation is critical to establishing an AI framework and building in AI governance throughout the process.

RDJ Panel Discussion

Representatives from across RDJ’s practice areas shared their perspectives on how companies can leverage the benefits of AI, manage risk and navigate the ever-evolving regulatory landscape for the use of AI technologies as part of business and operations.

  1. The EU AI Act: Sarah Slevin spoke of the recently agreed EU AI Act and provided a helpful summary of the different sections of the Act. There was specific reference to the fact that the Act is the first global framework in relation to AI and covers a wide range of sectors, dealing with both developers and deployers of AI. The Act takes a risk-based approach, which can be broken down into: (i) prohibited practices (coming into force six months from now); (ii) high risk; (iii) classes that have specific transparency requirements; and (iv) general purpose AI, which now has its own specific section.
  2. Employment issues: Michelle Ryan spoke, from an employment law perspective, about the delay in the law keeping pace with AI, adding that AI is already being deployed in the workplace, with one fifth already using it and 61% interested in doing so. Employers, and specifically recruitment and HR departments, need to review and understand where AI is being used in their workplace. There are huge benefits to AI, but also clear legal challenges with its use in the workforce.
  3. AI and Sustainability: Shane O’Connor spoke about the benefits of AI in terms of energy efficiency in buildings and machines, making clearer and faster climate and weather predictions, and optimising performance and maintenance tasks. He also raised some concerns about the vast processing power and high resource requirements of AI, which carry a relatively large carbon footprint compared with existing systems. He urged organisations using AI to roll out governance policies on sustainability in order to future-proof the area.
  4. Liability and AI: Michael Quinlan acknowledged that there is currently no AI-specific framework in place for seeking compensation for damage caused by AI. Claimants would have to rely on normal tort and contract law, which may not suit the complexities of an AI claim. In cases such as these, he added, there is a presumption of causality, which the defendant will have to rebut. Claimants could potentially argue that there has been no compliance with the AI Act. He also highlighted the proposed EU AI Liability Directive and Product Liability Directive, which aim to supplement the AI Act by providing redress for damage caused.

It was clear from the variety of questions on the day that topics such as AI in the workplace, human oversight, and the balance between deployer and provider under the AI Act are at the forefront of organisations’ considerations. There was also a clear message on the potential carbon footprint of AI usage, with the panel agreeing that just because there is currently no obligation to measure it, this doesn’t mean we shouldn’t measure it now (or perhaps we “ought” to do so).

Concluding remarks

The key takeaway from the day was that organisations should consider the vast benefits AI has to offer if they are to stay competitive, but should not rush into this complex area without first carefully identifying the potential risks.

To learn more about RDJ’s AI advisory team, or to contact a member of our team, click here.
