‘AI Told Me to Do It’ Is Not a Defense
Kevin Coy
Privacy Law Partner
Arnall Golden Gregory LLP

Employers have been outsourcing cognitive labor to software for years. What's different now is scale, speed, and a growing willingness to let algorithmic outputs stand in for human judgment. That gap between what AI can do and what organizations are prepared to defend is where legal risk quietly accumulates.
Kevin Coy is a Privacy Law Partner at Arnall Golden Gregory LLP, where he advises organizations on privacy and cybersecurity law, including how those obligations play out in employment contexts. He joined Don't Get Played host Sarah O'Melia to work through what responsible AI adoption in hiring looks like, and what it costs when organizations skip that part.
The central tension Kevin returns to throughout the conversation is straightforward: the legal frameworks governing hiring have not taken a holiday just because new tools are involved. Title VII, the Americans with Disabilities Act, state and local equal employment opportunity laws — these still apply. And when something goes wrong, the organization is the one accountable, not the platform.
The Law Didn't Take a Holiday When AI Showed Up
The hiring tasks that organizations are handing off to AI right now are, in many cases, the right ones to start with. Scheduling, initial resume review, and basic candidate communications are repetitive, high-volume, and relatively low-stakes compared to the final hire decision. Kevin frames this as intentional sequencing: identifying where efficiency gains are real before moving into territory where the cost of error is higher.
That doesn’t mean there’s no risk. Even a scheduling tool or FAQ-response function can drift out of alignment with organizational policy if no one is watching it closely.
“You want to make sure that those processes are compliant and that the AI tool is not freelancing in a way, for example, with responses to FAQs that are inconsistent with the organization's message on something,” Kevin noted.
The further a task sits from the actual hiring decision, the more comfortable most organizations feel with automation. But proximity to the decision is not the only variable. Data handling, vendor contracts, and ongoing monitoring matter regardless of where in the funnel a tool operates.
If You Can't Explain the Decision, You're Already Exposed
One of the clearest warning signs that an organization has over-relied on automation is the inability to answer a basic question: how did we get here? If a candidate is screened out and the honest answer is “the tool said no,” that is not a defensible position.
Kevin draws a direct line between explainability and legal exposure. Tools that function as black boxes — where the criteria behind decisions are opaque even to the employer — create accountability gaps that are difficult to close after the fact. Bias against candidates in protected classes, disparate impact on people with disabilities, privacy violations resulting from how a vendor uses candidate data: these risks do not disappear because the decision-making was delegated.
“The employer is ultimately going to have responsibility for their hiring decisions in all likelihood, regardless of the tool that's being used,” Kevin said. “Saying the AI told me to do it — not likely to be an attractive defense.”
This is particularly true in regulated industries, where the stakes of a wrong hire are compounded by sector-specific compliance obligations. Regulators for healthcare, financial services, transportation, and government contracting are not just watching whether hiring is fair in the abstract. They are watching whether organizational processes hold up under scrutiny.
Negotiating a New Flavor of Risk
Much of what Kevin covers in his practice comes back to a question organizations rarely ask with enough rigor before signing: what is the service provider actually doing with candidate data?
When an employer deploys an AI hiring tool, they are not just licensing software. They are entering into a data relationship with a vendor that may have its own uses for the information being processed, whether to train models, derive insights, or make disclosures that were not part of what the employer envisioned or communicated to candidates.
“The question is, what is that AI partner doing with the information? Is it being used solely in support of the employer's activities, or are they potentially using the information collected for their own purposes?” Kevin explained. “That's often a contractual issue.”
Beyond data use, organizations also need to understand whether their tools have been tested for bias, what the vendor's transparency obligations are, and whether specific legal requirements, such as biometric privacy laws in states that have enacted them, have been addressed. Checking a box at procurement does not constitute ongoing governance. The tools evolve. The legal landscape shifts. What passed a bias audit at launch may not pass a year or two later.
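What "bias testing" looks like in practice varies, but one widely cited screening heuristic is the EEOC's four-fifths rule: if any group's selection rate falls below 80% of the highest group's rate, the disparity warrants a closer look. The sketch below shows a minimal version of that check in Python; the group labels, counts, and threshold are hypothetical illustrations, and a failing ratio is a signal to investigate, not a legal conclusion.

```python
# Minimal sketch of a periodic adverse-impact check using the EEOC's
# four-fifths rule as a screening heuristic. All data here is hypothetical;
# a real audit should involve counsel and validated methodology.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    outcomes maps a group label to (candidates_advanced, total_applicants).
    """
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

# Hypothetical quarterly snapshot of an AI screener's pass-through numbers.
snapshot = {
    "group_a": (48, 100),  # 48% advanced by the tool
    "group_b": (33, 100),  # 33% advanced by the tool
}

for group, ratio in impact_ratios(snapshot).items():
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

Re-running this kind of check on a cadence, against current pass-through data rather than launch-day numbers, is what separates ongoing monitoring from a one-time audit.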
Governance Is Ongoing, Not a Launch Task
The organizations handling AI adoption well, in Kevin's observation, are the ones that approach it with defined use cases rather than a general mandate. A broad directive from leadership to “use more AI” is not a strategy. It is a starting condition that puts all the hard work on the HR team without giving them the structure to do it right.
Real governance means breaking the hiring funnel into discrete use cases, evaluating tools against each one, involving stakeholders across HR, privacy, security, and legal before any tool goes live, and then continuing to monitor after launch. Periodic bias testing is not optional. Vendor accountability needs to be contractual, not assumed.
“It's not a do it and forget it type of situation,” Kevin said. “The tools are likely going to continue to evolve rapidly. Have the bias tests and considerations remained the same? Have new biases been found? Are those material to however you're using the tool?”
That last question is the one most organizations are not asking on a regular cadence. The ones that are will be in a substantially better position when something inevitably goes sideways.
What Kevin's framing makes clear is that intentionality is not a soft aspiration. It is a legal posture. Organizations that can explain what their tools do, why they chose them, how they've been validated, and how they're being monitored are the ones that can actually defend their decisions. The ones that can't are not just unprepared for regulators. They are unprepared for the next hire that turns into a lawsuit.
Transcript
Kevin Coy:
You know, the employer is ultimately going to have responsibility for their hiring decisions in all likelihood, regardless of the tool that's being used. You know, in other words, saying the AI told me to do it, not likely to be an attractive defense.
Sarah O'Melia:
Welcome to Don't Get Played, a podcast from Cisive.
This show is for talent acquisition leaders and people managers who care about trust at work. How it's built. How it's measured. And how leaders design systems that hold up when speed, risk, and accountability collide.
I'm Sarah O'Melia, VP of Learning and Employee Communications at Cisive.
AI is moving fast in hiring. Faster than most legal frameworks can keep up with. And for employers, that gap between what's possible and what's been sorted out creates real exposure around bias, data privacy, and who's actually responsible when something goes wrong. Because here's what doesn't change just because you're using a new tool: the law still applies. And like our guest today says, "AI told me to do it" is not a defense.
Kevin Coy is a Privacy Law Partner at Arnall Golden Gregory LLP, where he advises organizations on privacy and cybersecurity law, including how those obligations play out in the employment context.
In this episode, Kevin and I get into what employers are actually automating in hiring right now and where the legal risk concentrates. We talk about what it means to keep humans in the loop and why that line is harder to draw than it sounds. We dig into the warning signs that a company may be over-relying on automation. And we talk through what a real governance framework looks like before any AI tool goes live.
The technology is moving. The legal frameworks will follow. The question is whether your organization is being intentional or just along for the ride.
Let's get started!
Sarah O'Melia:
Kevin, welcome to the podcast.
Kevin Coy:
Thanks, Sarah. I appreciate the invitation from you and the Cisive team to join today.
Sarah O'Melia:
So, I wanna kick us off by mentioning that AI is showing up everywhere in hiring right now. So from your perspective, what are employers doing with AI today that they weren't doing even two or three years ago?
Kevin Coy:
I think there's a lot of new activity and it's changing almost on a daily basis. My focus in my legal practice is privacy and cybersecurity in terms of what organizations can do with personal information about people, how they're processing that in the HR context and in other contexts.
And there are a lot of use cases that are coming up with AI. As you mentioned, sort of throughout the hiring process as to how can we use AI, particularly generative AI and the newer cutting edge AI applications to help us improve the hiring process, to streamline the hiring process, to make it more efficient, to make it more effective, to make it fairer. And so organizations are looking at their hiring flows and trying to figure out if and where AI makes sense for them.
Sarah O'Melia:
I love that and I love that you call out the different areas. And I wanna tap into the efficiency. So what parts of recruiting and screening are most commonly automated today? And are there specific hiring tasks that are better suited for automation?
Kevin Coy:
I think clients are looking at their entire flow, but looking for places where there are potential efficiencies to be gained, as you said, in terms of tasks that are repetitive, tasks that are routine, tasks that can be more comfortably outsourced to the AI-based tools than other tasks might be.
For example, things like scheduling, or initially looking at resumes to sift to see which candidates have which credentials that the organization is looking for, and to try and take care of those sorts of tasks, to facilitate basic communications with applicants, to address applicant questions. Those types of things are areas where employers are looking to see if they can increase efficiencies without taking the human out of the process.
Sarah O'Melia:
When I think about highly regulated industries, there are specific risks that exist when hiring decisions intersect with safety, compliance, public trust. So how do these regulated industries need to think differently about AI and hiring?
Kevin Coy:
Well, all employers need to be careful about it and considerate about it. But even more so when you're in a highly regulated space because of the regulatory and reputational risks that are potentially involved there. All organizations of course have potential reputational risk at issue with the use of AI or other technologies for that matter as they're looking to improve their hiring process. But in the highly regulated spaces, there are concerns as to how those activities align with the organization's regulatory obligations and also what the attitudes of their regulators are with respect to the use of AI tools.
That may not be as present in other spaces, although all spaces of course have baseline considerations in terms of making sure that however they're using the AI technology is consistent with their obligations under Equal Employment Opportunity law, under applicable privacy and consumer protection laws, under other laws that affect the hiring process generally.
Sarah O'Melia:
Absolutely. And I think you've mentioned sort of this focus on making sure that decisions are fair. And so when it comes to AI-driven decision making, are you seeing regulators begin to pay closer attention to that? And a follow-up to that is: where should AI stop, and where should the human control or checkpoint step in?
Kevin Coy:
Well, taking that last piece of it first, where that line is, is one that people are trying to find. I think as a general matter it's important to keep humans in the loop, to have the humans making the ultimate decisions, particularly when we're talking about something as essential as whether or not to hire someone, as opposed to other parts of the process that we were talking about earlier that are more attenuated from that. In terms of, you know, a scheduling tool for example, or dispensing FAQ information.
Although even there, you want to make sure that those processes are compliant and that the AI tool is not freelancing in a way, for example, with responses to FAQs that are inconsistent with the organization's message on something.
Sarah O'Melia:
And I love that you mentioned that because whenever we put trust in a third party, whatever that third party looks like, there is an inherent risk. So from your perspective, from your expertise, what types of legal risk can arise when AI influences those hiring decisions?
Kevin Coy:
There are a number of potential risks that can arise. And you know, my particular background is privacy and cybersecurity law. And in those contexts, the question is, what is that partner, the AI partner, whatever form that might take, what is that service provider doing with the information? What information are they collecting? How are they using it? Is it being used solely in support of the employer user's activities, or are they potentially using the information collected for their own purposes?
And that's often a contractual issue in terms of how the offerings are structured as to whether and to what extent the service providers have their own ability to use and disclose information that they're processing as part of providing the service. That can have important implications from a privacy perspective, from a security perspective, that organizations would want to take into account as they set things up and as they think through those issues.
In addition, as you sort of broaden out the lens, you step back a bit to look at employment more broadly. How are those tools operating in the context of issues that you consider from an employment perspective? In terms of obligations that employers have under Title VII or other civil rights laws, for example, are the tools being tested to try and protect against bias against individuals that are in protected classes? Is there transparency with respect to how those tools are being used? And in the context of more discrete statutes like biometric privacy laws, if you are using video technology as part of this and capturing and scanning biometrics, have you provided appropriate notice? Are you getting appropriate consent? Do you know what the tool is doing?
For example, do you have a sense as to what criteria it's using to carry out the tasks you're asking it to do, or is it more opaque? Is it more like a black box? Because, you know, the employer is ultimately going to have responsibility for their hiring decisions in all likelihood, regardless of the tool that's being used. You know, in other words, saying the AI told me to do it is not likely to be an attractive defense if you're asked what you're doing: how are your hiring decisions affected by this tool, how is it picking and choosing between candidates, or how is it helping rank candidates, and are there biases in those processes that have an adverse impact or disparate impact on people in protected classes?
You know, are people in protected classes more likely to be screened out because a particular tool is being used? Is it having an adverse impact on individuals that have disabilities? Is it user friendly for people that have disabilities? Not that there are specific AI laws necessarily in all cases yet around these sorts of issues, but generally applicable employment and consumer protection laws, you know, Title VII, the Americans with Disabilities Act, state and local Equal Employment Opportunity laws, those all still apply here. There's not a holiday just because an AI tool is being used in some part of this process.
And so it's important for organizations that are bringing new AI tools online to consider those tools, consider what they're doing, consider what their use cases are, and to look at those tools with a critical eye to try and identify places where there could be risks or there could be a need for remediation to address potential concerns about bias or unfair treatment or other unintended and disfavored outcomes.
Sarah O'Melia:
Absolutely. And I love that you're tapping in on there's the efficiency and then there's also that fairness component, right? And as an AI tool and as a company, right, you have to figure out where one starts and one stops. So what are the warning signs that a company may be relying too heavily on automation?
Kevin Coy:
Well, I think one thing, we touched on it a bit a moment ago, is: can you explain what's happening? You know, if an applicant or a candidate comes to you with a question, are you able to look at the tools that are being used and the materials, and explain how an outcome was reached? Or are you looking at a situation where you don't really know those things?
Then the answer is sort of, well, the AI tool said I shouldn't hire you, or the AI tool said no, or the AI tool just didn't think that this was a good fit. You know, those sorts of generic responses, looking back to the tool, pointing to the tool as the reason for a decision, could be suggestions that over-reliance might be happening. The tools are there to help, ideally, to increase efficiencies, to improve processes, to relieve humans in the process of repetitive tasks that might be a little bit of drudgery or just monotonous and repetitious, or to sift through large amounts of information to sort of make the search process easier to find what the employer is looking for.
But there's an important balancing, I think, to be done between those sorts of tasks and turning it over to the tools and the machines to make the decision, and therefore being unable to explain, if you were asked, how'd you get here? You know, what are you taking into account? How does this line up? How does this compare? How have you accounted for potential biases against individuals in protected classes? How are you protecting the privacy of the individuals that are going through the process where you're using these tools? Going back to those contractual questions we were talking about earlier as to, do you have safeguards in place with the providers of the tools so that you know that they're not taking your candidate's information and doing other things with it that you haven't envisioned and perhaps haven't disclosed to the candidate?
Sarah O'Melia:
Absolutely. So when we mitigate these risks, and there's a lot to consider, but on the flip side, there's a lot of potential benefits for an HR hiring team. So where can AI actually improve hiring outcomes?
Kevin Coy:
Well, hopefully in places where the tasks are, as we're talking about, repetitive. That's one area. I mean, things like scheduling are probably on the lower risk end of the spectrum in terms of legal risk, for example, where you can use tools to help with those types of activities, where you can use tools to identify candidates who have particular characteristics that you're looking for in terms of credentials or qualifications. Not to say that the tools are making those decisions for you, but to help you sift through your applications. But you're still doing it in a way that has humans in control of the process and is being tested and validated to provide the employer with confidence that the tool is not screening out parties based on their protected class status.
For example, you may want candidates that have X, Y, Z qualifications and you put that in, but it's important that the tool not be returning results for only one class of candidates while screening out candidates that have those same qualifications but are in protected classes, for example, because of just the way that the tool was trained and what it's looking for. You want to try and validate those things. But you could potentially use AI-based tools sort of throughout the employment process for a whole range of tasks and try and assess where they work, where they don't, and what the risk level is.
Because the more you're using tools that are focused on decisioning, the more questions there could be about, you know, are those decisions being made in a fair, non-discriminatory manner? Are the human decision makers involved in the process? And we're also starting to see in some actual and proposed legislation focusing on AI in particular, either requirements that humans have a role in the process, or that there be increased transparency so that candidates know where AI is going to be involved, how it's going to be used, whether they have the ability to opt out of use of AI tools of various types as part of their screening process. And those types of disclosure and use limitations, we'll have to sort of see where those go. We're still in early days for that type of legislation.
I mean, New York City has had a law on the books for a couple of years with respect to automated decision making in employment, which seeks to restrict and regulate those activities where the tool has a significant role to play in the decision making process. There are other laws being considered that would do similar things, or treat the use of AI for employment as a higher risk activity that requires additional scrutiny as you're putting those programs together. And so that's something to watch in the days, months, and years ahead as we start to see more AI-specific legislation. Again, assuming that there's not an effort at the federal level or the state level to broadly preempt those sorts of activities.
Texas, for example, passed a law which largely preempts Texas municipalities from going in their own direction with respect to a lot of AI-related controls, similar to the sorts of proposals that we were talking about earlier from the Trump administration federally, to try and preserve regulation of AI. In the federal government's case, the administration wants to keep that limited primarily at the federal level. In Texas, they're trying to keep it at the state rather than the local level. And so we're gonna need to see how that plays out across the country.
Sarah O'Melia:
You know, it sounds like there are so many considerations to think about as you are adopting and utilizing AI tools. So when you see companies that are doing that, right, they're moving forward, they're adopting these AI tools, are they doing so with that consideration and that strategy, or do you feel like many of them are sort of pushing the button just because they feel pressure to keep up?
Kevin Coy:
I think it depends on the company, and I think it can be sort of all over the map on that. And, you know, from my position, my expertise is privacy and cybersecurity law, so I can help clients work through their use cases regarding their information handling, whether it's in the employment space or in other spaces.
But you know, it's actually the folks that are on the ground in the companies that are trying to figure out why are we looking at AI, what tools are helpful for us, where are our pain points operationally that we're trying to address, what are we trying to accomplish with these tools? And in some cases, when I have conversations with clients, the client has already given that a lot of thought and they have very specific use cases that they want to address with respect to how the data's gonna be used or disclosed, what sort of tool they're looking to use, what the pain points that are behind it are.
And then in other cases, you sort of, at the other end of the spectrum, everybody's using AI, we feel we need to use AI too to sort of keep up with our peers, or we see in the news everybody's doing it, or members of our leadership team are saying AI is the way of the future, we gotta get ahead of the curve on this, go figure out what to do with respect to AI, just as a general charge, leaving it to the HR teams and other parts of the company to then think through and figure out, okay, what tools are out there? How good are they? Are they gonna help us? Are they gonna integrate with our existing processes? Are existing tools gonna be sufficiently protective of the information that we're gonna be putting into them or asking candidates to put into them? And how do they approach compliance with the types of situations that we are dealing with, particularly from an HR perspective, as opposed to tools that might also be used by a host of other sectors and departments within a company?
You know, tools that are generally applicable, thinking of the large language model platforms, for example. Well, those, you can query it with whatever you want and it can answer on a wide range of things, depending on how it's been trained. But you know, when you're thinking about it specifically from the employment process, specifically from an HR perspective, specifically from an employment law and privacy law compliance perspective, have the tools been designed with that in mind? Have they thought through the legal issues that could relate to these use cases in particular? Is their use of information aligned with that? You know, are they only using the information for the employer's purpose or are they using it for their own purpose? And what is that doing with respect to confidentiality obligations or considerations that a company might have with respect to its employees or its candidates?
Those sorts of questions are all things that companies are working through. And their starting points naturally vary depending on how much prior exposure they've had to these tools or to these types of issues, and what safeguards and processes and expectations their organization brings to these sorts of activities. You know, we were talking earlier about organizations that are already heavily focused on tech tools and activities. For organizations like that, the learning curve may not be quite as steep as it is for industries and sectors that are still more traditional and, if not still fully paper-based, maybe just a step above that in terms of databases and emails and files. And so every organization is gonna be a little bit different in terms of how they can take the AI tools, integrate them into their practices, and utilize them in a way that strikes the right balance for that organization in terms of deployment versus compliance considerations versus efficiency concerns or drawbacks.
Sarah O'Melia:
I love that. So if someone listening today wanted to move forward with AI in hiring, what governance practices should exist before an organization introduces AI into hiring to really put them in the best place possible?
Kevin Coy:
Well, I think there are lots of things for an organization to consider in terms of setting up a governance framework. I mean, one thing that they may want to think about, for example, is specific use cases. The question is quite broad in terms of AI and employment. And as you think about the hiring process from beginning to end, there are lots of potential steps in there along the way in terms of identifying candidates, communicating with candidates, reaching out to candidates, evaluating candidates, scheduling interviews with candidates, conducting those interviews, conducting background checks on candidates where that's appropriate for the position and part of the process. Doing all those things in a way that is consistent with the organization's legal obligations in any one of those areas, whether it's interviewing or background screening or any other part of the process for that matter.
And so I think one thing to consider would be to break it down into smaller bites and not say this particular tool is gonna be a global fix for all of our HR AI needs, but instead, what are the use cases that we are interested in solving? Going back to the points that we were talking about earlier, you can't really just sort of blame anything that might go wrong on the AI. The employer's likely to be held responsible, certainly in the eyes of the candidate and the candidate's lawyers, should it ever come to that, as to what might go wrong during the hiring process. So, sort of look at specific use cases, look at the tools with those things in mind, and then also develop policies and procedures from the company's standpoint to say, okay, with these things in mind, this is where our comfort level is.
Sarah O'Melia:
That's excellent. It feels like there are these advancements and efficiencies and they can be done well, but you have to have that intentionality and that oversight from the company perspective.
Kevin Coy:
Yeah, I think that's exactly right. Intentionality, a great word here, and a great way of thinking about it. To be intentional with your use cases, to evaluate those use cases, to evaluate those use cases with stakeholders, either throughout your HR organization or your organization as a whole, so that interested stakeholders are involved, so that the HR team has input and, where applicable, the privacy team has input, the security team has input, before these tools are turned loose or harnessed to access information across the organization.
For example, what sort of security controls do they have? What data sets are you going to give them access to? What are they gonna be doing with those data sets? Again, going back to the things we were talking about earlier in terms of, is it gonna be solely for the benefit of the company? Or are the service providers going to have uses of their own for the data, either to train their models or to make other uses of the data, which may create issues? So all the stakeholders can take those things into account. Recognizing that in smaller organizations, it might just be one or two people that are wearing all those different hats and having to take all those considerations into account. But in larger organizations, it might be spread more broadly across the organization. But to make sure that all those things are taken into account as a tool is being considered, as it's being rolled out, as it's being implemented, and then as it's being periodically tested thereafter.
You know, it might be a case where the tool seems perfectly great at launch. You've done a thorough assessment. You've looked at bias testing that's been done. You've thought through all those things on the front end. But it's not necessarily, and I think in this case, with all the changes we're seeing in the rapid development of the technology, it's almost certainly not gonna stay a do it and forget it type of situation. Because the tools are likely going to continue to evolve rapidly. There may not have been any concern about bias when the tool was launched today, but you know, next year, two years from now, three years from now, five years from now, have the bias tests and considerations remained the same? Have new biases been found? Are those material to however you're using the tool? It's something that'll require continued focus. Intentionality not only in terms of the initial adoption, but in terms of carrying those tools through for use over time throughout the organization.
Sarah O'Melia:
This has been so informative and educational. Kevin, thank you so much for being here and sharing your insights into AI capabilities and considerations. Really appreciate it.
Kevin Coy:
Well, thank you very much for the invitation, Sarah. Great to spend some time with you today.
Sarah O'Melia:
Intentionality is the idea that we kept coming back to. And it's the right one. Bringing AI into hiring is not a one-time decision. It's an ongoing commitment to understanding what your tools are doing, who's accountable when something goes wrong, and whether your processes can hold up to scrutiny.
A few things worth carrying out of this conversation. The employer is responsible. Not the tool, not the vendor. If an AI-driven hiring decision leads to a bias claim or a privacy violation, the organization is the one on the hook. That shapes everything about how you select, implement, and monitor these tools. And monitoring does not stop at launch. The tools evolve. Biases that were not present on day one can emerge over time. Governance is not a setup task. It is a continuous one.
Thank you to Kevin for bringing real legal grounding to a conversation that can get very abstract very quickly. If this episode was useful, subscribe on Apple Podcasts, Spotify, or YouTube, and share it with a colleague who is navigating AI decisions in hiring right now.
We'll see you next time. And remember, in the meantime… don't get played.

