

By Francesca Snyder, Associate, Australian Business Lawyers & Advisors.
Artificial intelligence (AI) is becoming a go-to tool in human resources, promising greater efficiency, cost savings and, in some cases, reduced reliance on staff for routine and repetitive tasks. It is therefore not surprising that HR practitioners are increasingly using AI to automate a range of HR functions, from screening candidates, to generating correspondence to employees, to completing mundane and time-consuming tasks such as calculating payments and entitlements.
With such clear benefits, it’s easy to see why HR practitioners are embracing AI. But the real question is, should they?
While AI offers undeniable advantages in automating repetitive work, it also gives rise to serious risks when these tools are used to replace or replicate human judgment in decision-making. For important decisions about staff, such as whether an employee should be hired, whether a current employee is meeting their productivity targets, or whether a staff member’s employment should be terminated, relying solely on algorithms creates a real risk of errors, miscommunication and legal complications.
Let’s consider a scenario where an employer uses an AI screening tool to vet candidates for a new role. The tool weighs various factors to determine whether an applicant should be considered and whether the candidate should proceed to the next round, and those factors are almost never fully transparent. It is entirely possible that, in filtering candidates, protected attributes such as sex, race or family responsibilities are used to unlawfully discriminate between applicants.
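To make that risk concrete, consider the sketch below. It is purely hypothetical and is not taken from any vendor's product: a simple scoring rule that never mentions a protected attribute, but penalises career gaps, will in practice disadvantage candidates who have taken time out for family or carer responsibilities.

```python
# Hypothetical screening rule, for illustration only.
# No protected attribute appears in the code, yet the rule still
# disadvantages candidates with career gaps (often parents and carers),
# which may amount to indirect discrimination.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    years_experience: float
    career_gap_months: int  # time out of the workforce


def screening_score(c: Candidate) -> float:
    """Toy scoring rule: rewards experience, penalises career gaps."""
    score = c.years_experience * 10
    score -= c.career_gap_months * 2  # this penalty is the hidden risk
    return score


def shortlist(candidates: list[Candidate], threshold: float = 50) -> list[str]:
    """Advance only candidates whose score clears the threshold."""
    return [c.name for c in candidates if screening_score(c) >= threshold]


if __name__ == "__main__":
    pool = [
        Candidate("A", years_experience=8, career_gap_months=0),
        Candidate("B", years_experience=8, career_gap_months=18),  # e.g. parental leave
    ]
    print(shortlist(pool))  # ['A'] -- identical experience, different outcome
```

Neither the employer nor the tool ever "decided" to screen out carers, yet that is the effect, and nothing in the output explains why candidate B was rejected.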
This becomes complicated because the decision maker in this scenario was the AI tool, not an HR representative. In the absence of clear and documented reasoning for the decision, justifying it and defending any resulting legal claim becomes far more difficult. This is just one example of how the lack of transparency in how AI systems reach conclusions can leave companies exposed to legal claims with limited ability to justify their actions.
AI isn’t just being used to automate hiring. HR practitioners are also leveraging AI tools to automate payroll systems, complete data entry, calculate legal entitlements and manage allowances and overtime payments in accordance with modern awards. On the surface, this seems like a logical decision: AI tools can make these processes faster and less expensive and, in some cases, can reduce the risk of human error. However, serious issues arise when the tools produce inaccurate results or get critical parts of these processes wrong.
Take data entry and payroll as an example. Given the time-consuming and mundane nature of entering employees’ pay data and calculating allowances and overtime, many companies have turned to AI as a simpler and more cost-effective way to handle these tasks. However, if an AI tool misclassifies an employee or miscalculates overtime, it may expose the business to underpayment claims and other legal risk.
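The sketch below shows how a single wrong assumption flows straight into an underpayment. The rate, the 38-hour threshold and the overtime multiplier are invented for illustration and are not drawn from any particular modern award; the point is that an automated rule which wrongly treats an employee as overtime-exempt silently underpays them every pay cycle.

```python
# Illustrative only: the rate, threshold and multiplier below are invented
# for this example and are not taken from any modern award.

BASE_RATE = 30.00          # hypothetical hourly rate
ORDINARY_HOURS = 38        # hypothetical weekly ordinary hours
OVERTIME_MULTIPLIER = 1.5  # hypothetical overtime loading


def weekly_pay(hours_worked: float, overtime_exempt: bool) -> float:
    """Calculate gross weekly pay under the toy rules above."""
    if overtime_exempt or hours_worked <= ORDINARY_HOURS:
        return hours_worked * BASE_RATE
    overtime_hours = hours_worked - ORDINARY_HOURS
    return (ORDINARY_HOURS * BASE_RATE
            + overtime_hours * BASE_RATE * OVERTIME_MULTIPLIER)


# The same 45-hour week, with and without the misclassification:
correct = weekly_pay(45, overtime_exempt=False)  # 38*30 + 7*45 = 1455.00
wrong = weekly_pay(45, overtime_exempt=True)     # 45*30       = 1350.00
print(f"Underpayment per week: ${correct - wrong:.2f}")  # $105.00
```

Repeated across a workforce and many pay cycles, a small weekly shortfall like this compounds into exactly the kind of underpayment claim described above, and the error may go unnoticed precisely because no human is checking the calculation.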
Overall, while AI can be a valuable tool, it is essential for HR practitioners to understand its limitations and to be mindful of when and how it should be used, especially when the task at hand involves making decisions about people and their livelihoods. AI tools may make HR tasks easier to complete, but it is critical to know the limits of these tools, understand the risks of using them incorrectly, and remember that relying on AI and AI-generated data can expose the company to significant financial, legal and reputational risk.