Employers have been outsourcing cognitive labor to software for years. What's different now is scale, speed, and a growing willingness to let algorithmic outputs stand in for human judgment. That gap between what AI can do and what organizations are prepared to defend is where legal risk quietly accumulates.
Kevin Coy is a Privacy Law Partner at Arnall Golden Gregory LLP, where he advises organizations on privacy and cybersecurity law, including how those obligations play out in employment contexts. He joined Don't Get Played host Sarah O'Melia to work through what responsible AI adoption in hiring looks like, and what it costs when organizations skip that part.
The central tension Kevin returns to throughout the conversation is straightforward: the legal frameworks governing hiring have not taken a holiday just because new tools are involved. Title VII, the Americans with Disabilities Act, state and local equal employment opportunity laws — these still apply. And when something goes wrong, the organization is the one accountable, not the platform.
The Law Didn't Take a Holiday When AI Showed Up
The hiring tasks that organizations are handing off to AI right now are, in many cases, the right ones to start with. Scheduling, initial resume review, and basic candidate communications are repetitive, high-volume, and relatively low-stakes compared to the final hire decision. Kevin frames this as intentional sequencing: identifying where efficiency gains are real before moving into territory where the cost of error is higher.
That doesn’t mean there’s no risk. Even a scheduling tool or FAQ-response function can drift out of alignment with organizational policy if no one is watching it closely.
“You want to make sure that those processes are compliant and that the AI tool is not freelancing in a way, for example, with responses to FAQs that are inconsistent with the organization's message on something,” Kevin noted.
The further a task sits from the actual hiring decision, the more comfortable most organizations feel with automation. But proximity to the decision is not the only variable. Data handling, vendor contracts, and ongoing monitoring matter regardless of where in the funnel a tool operates.
One of the clearest warning signs that an organization has over-relied on automation is the inability to answer a basic question: how did we get here? If a candidate is screened out and the honest answer is “the tool said no,” that is not a defensible position.
Kevin draws a direct line between explainability and legal exposure. Tools that function as black boxes — where the criteria behind decisions are opaque even to the employer — create accountability gaps that are difficult to close after the fact. Bias against candidates in protected classes, disparate impact on people with disabilities, privacy violations resulting from how a vendor uses candidate data: these risks do not disappear because the decision-making was delegated.
“The employer is ultimately going to have responsibility for their hiring decisions in all likelihood, regardless of the tool that's being used,” Kevin said. “Saying the AI told me to do it — not likely to be an attractive defense.”
This is particularly true in regulated industries, where the stakes of a wrong hire are compounded by sector-specific compliance obligations. Regulators for healthcare, financial services, transportation, and government contracting are not just watching whether hiring is fair in the abstract. They are watching whether organizational processes hold up under scrutiny.
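To make the "how did we get here?" problem concrete: one way organizations close that gap is to capture, at the moment of each automated decision, enough context to reconstruct it later. The sketch below is illustrative, not something drawn from the conversation; the field names and the tool are hypothetical, and any real implementation would be shaped by counsel and by the organization's record-retention obligations.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScreeningRecord:
    """One automated screening decision, captured for later audit (illustrative fields)."""
    candidate_id: str
    tool_name: str                    # which vendor tool made the call
    tool_version: str                 # the model/ruleset version in effect at the time
    criteria_applied: list[str]       # the stated requirements the tool evaluated
    outcome: str                      # e.g. "advanced" or "screened_out"
    rationale: str                    # the tool's stated reason, captured verbatim
    reviewed_by: str | None = None    # human reviewer, if the decision was escalated
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical usage: log the record before acting on the tool's output
record = ScreeningRecord(
    candidate_id="c-1001",
    tool_name="resume_screener",      # hypothetical tool name
    tool_version="2025.03",
    criteria_applied=["5+ years Python", "valid work authorization"],
    outcome="screened_out",
    rationale="Resume lists 3 years of Python experience.",
)
```

A record like this does not make a decision correct, but it makes "how did we get here?" answerable, which is the difference between a defensible position and "the tool said no."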
Much of what Kevin covers in his practice comes back to a question organizations rarely ask with enough rigor before signing: what is the service provider actually doing with candidate data?
When an employer deploys an AI hiring tool, it is not just licensing software. It is entering into a data relationship with a vendor that may have its own uses for the information being processed: training models, deriving insights, or making disclosures that were never part of what the employer envisioned or communicated to candidates.
“The question is, what is that AI partner doing with the information? Is it being used solely in support of the employer's activities, or are they potentially using the information collected for their own purposes?” Kevin explained. “That's often a contractual issue.”
Beyond data use, organizations also need to understand whether their tools have been tested for bias, what the vendor's transparency obligations are, and whether specific legal requirements, such as biometric privacy laws in states that have enacted them, have been addressed. Checking a box at procurement does not constitute ongoing governance. The tools evolve. The legal landscape shifts. What passed a bias audit at launch may not pass a year or two later.
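One common benchmark for the kind of recurring bias testing Kevin describes is the EEOC's four-fifths rule from the Uniform Guidelines on Employee Selection Procedures: a group whose selection rate falls below 80% of the highest group's rate warrants scrutiny for adverse impact. As a rough illustration only, with made-up group labels and outcomes, a recurring check might look something like this; a real audit would use the employer's actual screening data, proper statistical testing, and legal guidance.

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rates from (group, advanced?) pairs."""
    screened, advanced = Counter(), Counter()
    for group, did_advance in outcomes:
        screened[group] += 1
        if did_advance:
            advanced[group] += 1
    return {g: advanced[g] / screened[g] for g in screened}

def four_fifths_check(rates, threshold=0.8):
    """Compare each group's rate to the highest group's rate (the EEOC heuristic)."""
    top = max(rates.values())
    return {g: (r / top, r / top >= threshold) for g, r in rates.items()}

# Hypothetical outcomes from one review period: (group label, advanced past screen?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(outcomes)
for group, (ratio, ok) in four_fifths_check(rates).items():
    flag = "ok" if ok else "REVIEW: possible adverse impact"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

The four-fifths ratio is a screening heuristic, not a legal safe harbor; a disparity it flags, or misses, still needs statistical and legal review. The point is the cadence: run it again as the tool and the candidate pool change, not just at procurement.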
The organizations handling AI adoption well, in Kevin's observation, are the ones that approach it with defined use cases rather than a general mandate. A broad directive from leadership to “use more AI” is not a strategy. It is a starting condition that puts all the hard work on the HR team without giving them the structure to do it right.
Real governance means breaking the hiring funnel into discrete use cases, evaluating tools against each one, involving stakeholders across HR, privacy, security, and legal before any tool goes live, and then continuing to monitor after launch. Periodic bias testing is not optional. Vendor accountability needs to be contractual, not assumed.
“It's not a do it and forget it type of situation,” Kevin said. “The tools are likely going to continue to evolve rapidly. Have the bias tests and considerations remained the same? Have new biases been found? Are those material to however you're using the tool?”
That last question is the one most organizations are not asking on a regular cadence. The organizations that do ask it will be in a substantially better position when something inevitably goes sideways.
What Kevin's framing makes clear is that intentionality is not a soft aspiration. It is a legal posture. Organizations that can explain what their tools do, why they chose them, how they've been validated, and how they're being monitored are the ones that can actually defend their decisions. The ones that can't are not just unprepared for regulators. They are unprepared for the next hiring decision that turns into a lawsuit.