3 November 2025 | Insights | Employment Law

AI in HR: Enhancing Practice and Mitigating its Risks

Reading time: 5 mins

Nathalie King, Foreign Qualified Lawyer

The use of AI by HR teams is widespread, with more than 1 in 4 employers in Ireland having introduced AI into their recruitment processes.[1] However, whilst AI is taking over some of its core functions, HR remains a critical function in organisations, including to ensure employees are treated fairly and lawfully. Without a HR function, organisations face potential non-compliance and an increased legal risk of discrimination and of claims in the Workplace Relations Commission (“WRC”).

In this article, we set out how AI systems are used at four key stages of employment, the key legal risks associated with their use at each stage, and HR’s role in mitigating those risks.

Recruitment

AI systems can be used in recruitment to prepare position descriptions and job advertisements, propose pay or pay range, scan CVs to identify suitable candidates, create assessments, communicate with candidates and schedule interviews.

The HR team plays a critical role in the use of AI systems during recruitment to manage legal risks.

Selecting and assessing candidates

The recruitment stage presents a high risk of discrimination. The Employment Equality Acts 1998 to 2015 (“Equality Acts”) prohibit unlawful discrimination on the basis of nine protected characteristics, such as age, disability, race and sex. The use of AI presents a risk of discrimination or of discriminatory bias, and the HR team should monitor this risk, in particular if AI is used to:

  • scan CVs. The HR team should ask itself, ‘Are we only being presented with CVs of candidates with a particular protected characteristic?’; or
  • create assessments. The HR team should ask itself, ‘Is the proposed assessment skewed in favour of candidates with a particular protected characteristic?’.

HR should also be mindful of how information input into AI systems affects machine learning, and it should ask itself, ‘Is the organisation’s unconscious but repeated preference for candidates with a particular characteristic being integrated into the system?’. This occurred with a recruitment AI tool developed by Amazon, which drew on 10 years of recruitment data and taught the system that male candidates were preferable.

In addition to creating discrimination risk under the Equality Acts, the European Artificial Intelligence Act (“AI Act”), an EU regulation which entered into force on 2 August 2024 and is directly applicable across the EU, prohibits the use of AI systems to evaluate a person based on a personal characteristic (e.g. age, gender, disability, race).[i] Whilst less likely to be overt, the biases of an AI system may have such an effect.

Notwithstanding this risk, AI systems can also correct for biases, and their potential to reduce the risk of unlawful discrimination should not be disregarded.

Setting pay or pay range of new employees

The EU Pay Transparency Directive (“Directive”) is due to be given effect in Ireland by June 2026 and is targeted at closing the gender pay gap. To comply with the anticipated Irish legislation to transpose the Directive, HR should review the job advertisements generated by AI to ensure they use gender neutral language, as required by the Directive.

The HR team should also be cautious of relying on AI systems to set pay or pay range in light of the upcoming implementation of the Directive. Relevantly, the Directive:

  • gives candidates a right to receive information from the prospective employer about pay or pay range. Whilst it has not yet been made law, the Irish bill in its current form goes further than this, by requiring that pay or pay range be included in job advertisements; and
  • provides the opportunity for employees to query, during employment, the reasons for a gender pay gap of more than 5% in any category of workers – meaning that employers must be able to justify the gender pay gap on objective grounds.

Setting pay or pay range during recruitment is critical, not only from a business perspective to attract talent but also from a legal perspective, in light of the Directive. Whilst AI systems may propose pay or pay range based on market data, hiring managers must be able to justify how pay or pay range was set, taking into account the pay of existing employees in the same category by gender. HR will generally support hiring managers with setting the proposed pay or pay range, including for the purpose of the job advertisements or to communicate it to candidates as required by the Directive.

Overseeing the use of AI

The involvement of HR where AI systems are used in recruitment is not only recommended but is also required by the AI Act.[i] The AI Act characterises the use of AI systems for recruitment, including the placement of targeted job advertisements, screening or filtering applications and evaluating candidates, as ‘high-risk’. As a result, candidates must be advised of the use of AI systems, and a HR team that uses AI systems for recruitment must have a sufficient level of AI literacy.

Performance appraisals and performance management

AI systems can be used in performance management to generate reports on employees’ performance or summarise multiple sources of performance data, draft feedback to employees, schedule performance meetings, propose goals and training to achieve them, and identify poor performers and top performers.

The HR team’s involvement in managing performance is key to managing legal risk associated with performance appraisals and performance management.

Setting pay of promoted employees

When considering promotions and the associated remuneration, managers must be able to explain the basis for setting this remuneration, as they may be required to justify it if there is a pay gap of more than 5% in a worker category, pursuant to the Directive. This will generally involve support from HR.

Reviewing employees’ performance

Employees are increasingly making claims under the Equality Acts during employment – not just once their employment is terminated. Managers must be cautious when using reports, summaries or feedback generated or drafted by AI systems, to ensure they do not include discriminatory content or language that is likely to be inflammatory. For example, if a summary generated by AI describes the employee as a “bad cultural fit”, managers, with support from the HR team, should consider changing this to more precise wording that cannot be perceived as discriminatory or otherwise inflammatory.

Managing performance

If AI’s identification of poor performers leads to a performance management process, the HR team must ensure that the identification of areas for improvement is reasonable and that the performance management process is consistent with the organisation’s handbook. The WRC will holistically consider the reasonableness of the employer’s actions, and it will not be a defence that an AI system identified the employee and/or proposed the process.

Employers must also afford employees procedural fairness, at the risk of a claim of unfair dismissal. The HR team should be mindful of this when supporting a manager with performance management.

Overseeing the use of AI

Using AI systems to make decisions in relation to work-related relationships is characterised under the AI Act as ‘high-risk’, requiring that employees be advised of the use of AI systems and that a HR team that uses AI systems for these purposes have a sufficient level of AI literacy.[i] The same applies in relation to ‘Grievances and interpersonal issues’ and ‘Disciplinary action’, dealt with below.

Grievances and interpersonal issues

AI systems can be used to manage grievances raised by employees and interpersonal issues in workplace investigations, for example, to review supporting documents, schedule interviews, take notes during witness interviews, prepare draft reports and communicate with the complainant and witnesses. AI systems can also be used to analyse and report trends in relation to workplace conduct.

From a legal perspective, HR’s involvement is crucial to ensure employees are treated both fairly and lawfully.

Receiving grievances

Under the Safety, Health and Welfare at Work Act 2005 (“Safety Act”), employers must ensure, so far as is reasonably practicable, the safety, health and welfare at work of their employees. This includes ensuring that the workplace is free from psychosocial hazards, such as bullying, harassment and sexual harassment.

The Code of Practice dealing with bullying and harassment published by the Health and Safety Authority sets out how organisations can discharge their duty in this respect. It provides that organisations should appoint a contact person whom staff can approach to enquire about raising a grievance or to raise one. This is generally a member of the HR team. The Code states that the purpose of this role is “supportive listening and information provision”, which is a uniquely human role that cannot be replaced by AI. In other words, in addition to the common-sense benefits of having a person from HR receive a grievance of this nature, the Code enshrines that this is best practice from a legal perspective.

Of note, AI systems can also be very useful for organisations to discharge their obligations under the Safety Act. Reports generated by AI in relation to the trends within organisations for bullying, harassment and sexual harassment may assist with taking a risk-based approach to address such psychosocial hazards.

Responding to, and investigating, grievances

Considered and appropriate management of a workplace grievance is critical to put the organisation in the best position to proceed with disciplinary action, including dismissal, should allegations of misconduct be substantiated. The HR team should be cautious when using AI systems in investigations, given the investigation will likely be subject to review should the employee bring a claim in the WRC.

Disciplinary action

Ultimately, AI systems must not replace a decision maker in respect of taking disciplinary action, such as termination. From a legal perspective, the decision maker’s state of mind will be examined and will be critical in legal proceedings. For example, it will be examined to determine whether the reason for dismissal was reasonable in an unfair dismissal claim, or whether it was discriminatory in a claim under the Equality Acts. HR is generally not the decision maker, but it plays a key role in supporting the decision maker, whose evidence has the potential to be examined in the WRC or another tribunal or court.

Conclusion 

Whilst the use of AI systems presents key benefits, HR teams undeniably continue to add value to an organisation from a legal perspective, maintaining compliance and managing risk of employment claims – in addition to supporting the organisation’s people strategy from a business perspective.

The RDJ Employment Team regularly advises HR teams on managing legal risks and represents employers in defending employment claims.
 


[1] IrishJobs.ie, The AI revolution in recruitment, How Irish HR professionals are using AI to transform their hiring processes, 27 August 2024, https://www.irishjobs.ie/recruiters/recruiters-news/more-than-1-in-4-employers-in-ireland-currently-using-ai-in-recruitment/.

[i] This article is not comprehensive in relation to organisations’ obligations under the AI Act.
