AI Compliance for Talent Acquisition Teams: Here’s how to get started



 

Hosted by Joe Atkinson, Director of AI & Automation at Scede, our recent webinar, ‘Responsible AI in Recruitment: AI Compliance for Talent Acquisition Teams’, featured Martyn Redstone, Founder of H.A.I.R, Eunomia HR, and the genAssess AI readiness platform.

As a leading voice on the responsible use of AI in HR and recruitment, Martyn joined us to explore how talent teams can build safe, compliant foundations for AI adoption, setting practical ‘guardrails’ that enable innovation without limiting progress.

You can listen to the audio recap, watch the video on demand, or read the full transcript below.

🎧 Listen to the audio recap:

 

📼 Watch the video on-demand:

📖 Read the transcript:

00:02:36.190 –> 00:02:51.139

Joe Atkinson: So, welcome, thank you everyone for joining. If you haven't met me, I'm Joe, just started a new role here at Scede as Director of AI and Automation. Before that, I was running a company called Purple, focused on recruiter enablement for the last couple of years.

00:02:51.140 –> 00:03:10.630

Joe Atkinson: Here with Martyn today, we’re going to be discussing all things AI compliance, setting guardrails in place, and really setting great foundations for you to innovate on top of when you’re looking to deploy AI within your TA teams. I’ll hand over to you, Martyn, for a quick intro, who you are, where you come from, and then we’ll dive into the questions.

00:03:10.760 –> 00:03:16.260

Martyn Redstone: Yeah, thank you for inviting me on, Joe. Great to be here today on your inaugural webinar.

00:03:16.260 –> 00:03:39.940

Martyn Redstone: So, hello everyone. My name's Martyn Redstone. Very briefly, I've been in the recruitment industry for the last 20 years. For the last 9 of those years, I've been running an AI and automation consultancy business, probably the first AI and automation agency that was dedicated to recruitment.

00:03:40.060 –> 00:03:57.300

Martyn Redstone: Now, for the last 12 months or so, most of my work has been focused all in on good governance, good risk management, and good compliance when it comes to implementing and enabling AI in a recruitment process.

00:03:58.540 –> 00:04:15.560

Joe Atkinson: Love it. Feels like a little bit of an exciting time right now for AI, right? I feel like we’ve got past the first two years of lots of noise, and people saying, you know, you’re going to need to implement this, and now we’re actually seeing people get to that implementation phase in some cases.

00:04:15.560 –> 00:04:28.789

Joe Atkinson: I feel in others, there's still a lot of confusion or doubt in, kind of, how to even get to that starting line. So, where do you recommend step one being when you're looking to adopt safely in talent teams?

00:04:29.240 –> 00:04:37.720

Martyn Redstone: Yeah, so you’re quite right, you know, and there is a slight shift in the conversation going on right now, and I noticed it very much,

00:04:37.950 –> 00:04:54.930

Martyn Redstone: on LinkedIn, of all places, last week actually, where I put up something that kind of exploded from a virality perspective, and there were a lot of comments on there, and I think the conversation's definitely started to shift from

00:04:54.930 –> 00:05:03.409

Martyn Redstone: 'how do we do this?' to 'how do we do this safely?' So… so if you are starting to think about implementing AI, it's the perfect place

00:05:03.480 –> 00:05:21.450

Martyn Redstone: to start your governance journey, because a lot of people now are having to backtrack and pull things apart, who have rushed into it without thinking about the good basics. So now’s the best place to start. If you haven’t started yet on your AI journey, the best place to start is right at the beginning, laying really good governance foundations and good guardrails.

00:05:21.450 –> 00:05:38.710

Martyn Redstone: So, you know, to start with, obviously with any kind of transformation program, you're going to be looking at where the most effective place is, where's the low-hanging fruit to implement AI? But aside from that, it's about starting to think about things like your acceptable use policies. You know, what do we think about

00:05:38.740 –> 00:05:52.019

Martyn Redstone: what is the acceptable use of AI in our organization, in our processes? What is the acceptable use of AI from a personal perspective? Are we allowing people to use their own AI? Are we mandating a specific type of…

00:05:52.020 –> 00:06:05.179

Martyn Redstone: chatbot that people can use, like Copilot or ChatGPT or what have you. And then from there, you know, you need to start the policy first. So, it’s about getting your policy right. What do we think about when it comes to candidate use of AI?

00:06:05.180 –> 00:06:13.570

Martyn Redstone: What do we think about when we use AI for screening? All those kind of things. So, getting your… your first policy in check is the best place to start.

00:06:15.060 –> 00:06:17.529

Joe Atkinson: And how do you decide…

00:06:17.530 –> 00:06:38.329

Joe Atkinson: what your policy should be? Because you're exactly right, and we see this across the clients we work with: people's attitudes to AI are so different. Some people want, kind of, almost no AI in the hiring process. Other companies we work with want to use as much as they possibly can and become as efficient as they possibly can. So, if you haven't got that starting point, like, how do you go about deciding?

00:06:39.030 –> 00:06:53.030

Martyn Redstone: Well, actually, it tends very much not to be done in a silo. You know, one of the things that I've noticed a lot over the last several years, and even over the last few months, actually, is TA kind of running forward

00:06:53.030 –> 00:07:12.620

Martyn Redstone: really quickly, trying to get things done, because, you know, recruiters are very overwhelmed. We've seen a 30 to 40% increase across the board in applications, they're under pressure, they're getting told to do more with less. But actually, you know, everything they do strategically when it comes to AI has to align with the organization. And actually,

00:07:12.620 –> 00:07:27.650

Martyn Redstone: the majority of the time, the organization is working on an AI roadmap and an AI strategy, so get yourself looped into that, and that would be a great place to start when it comes to working out what you want to do, where you want to go, and how you want to do it.

00:07:28.730 –> 00:07:37.749

Joe Atkinson: Yeah, I think it’s a good point, and we’re seeing a lot of these, right, company-wide, like, AI guidelines, so often that gives you a great starting point to then apply specifically to your team, right?

00:07:38.060 –> 00:07:39.510

Martyn Redstone: Absolutely, yeah.

00:07:39.960 –> 00:07:54.480

Joe Atkinson: Cool. And then, I guess, the other part of that, when you're looking at guidance on usage, how you should use it and how you shouldn't, is the 'shouldn't' side: the risks. So what are the, kind of, big risks and key considerations that we're trying to avoid through this documentation?

00:07:54.980 –> 00:08:10.219

Martyn Redstone: Yeah, so there’s quite a few, to be honest with you, and without trying to scare anyone, there are a lot of risks when it comes to using AI. But actually, it’s about going back to basics. You know, over here in the UK and Europe, we’ve had GDPR in place for the last 7 years.

00:08:10.220 –> 00:08:19.509

Martyn Redstone: So, let's have a think about things like data protection, things like ensuring that we're not transferring data outside of Europe, all that good stuff that we…

00:08:19.510 –> 00:08:28.560

Martyn Redstone: that we forget about sometimes when it comes to AI. So, start with your data. Start with the real basics that we've been working on for the last 7 years when it comes to GDPR.

00:08:28.630 –> 00:08:38.559

Martyn Redstone: And then from there, think about all the risks involved in doing that process manually. So what are you trying to do? You know, think about screening:

00:08:38.559 –> 00:08:52.360

Martyn Redstone: you know, when you're screening a CV. We've been talking about unconscious bias for God knows how many years now in our industry, so let's think about bias when it comes to screening and AI. Let's think about transparency:

00:08:52.360 –> 00:09:03.729

Martyn Redstone: do we understand how AI systems work? Are we able to explain how they work in a natural-language way? And then we need to go over to regulation as well, and make sure that everything we're doing is in line with regulation,

00:09:03.770 –> 00:09:14.869

Martyn Redstone: wherever you are in the world. You know, that could be multi-jurisdiction, it could be one country, but make sure you’re aligned to what’s going on with the regulation there. Those are the three pieces that I tend to look at to start with.

00:09:14.870 –> 00:09:27.180

Martyn Redstone: Because they’re the big, you know, the big players when it comes to risk. And we’re already seeing that with litigation over in the US when it comes to platforms like Workday and employers like,

00:09:27.440 –> 00:09:36.329

Martyn Redstone: Oh my goodness, I’ve forgotten their name now. But yeah, we’re seeing litigation over there in the US when it comes to things like bias in AI and hiring as well.

00:09:37.090 –> 00:10:23.740 

Joe Atkinson: Yeah, and I think, like, the data piece is so important, because, you know, it's not just the selection piece, which is the litigation bit you're talking about, right, using AI to select candidates. The data piece is, like, a lot wider than that. To use a basic example, like the free version of ChatGPT, let's say: if you're a European company, and you're putting your information into the free version of GPT, for one, that data's been transferred from Europe to the US, so there's some considerations around that. Then if you were to share, say, 'I have a great thread on GPT, so I'll share it with you, Martyn, who is a recruiter in my team', that's been published to the web, so it's crawlable as well. So there could be some really key confidential information that you're unknowingly publishing to the web. So, yeah, the data piece is a…

00:10:23.740 –> 00:10:24.960

Joe Atkinson: It’s a massive one.

00:10:24.960 –> 00:10:29.810

Joe Atkinson: On the governance side, just to flip back to that, what, like…

00:10:29.970 –> 00:10:35.789

Joe Atkinson: I guess, simply, like, what is a governance framework, and for AI specifically, are there any key tips to building that out?

00:10:36.080 –> 00:10:51.339

Martyn Redstone: Yeah, so, you've got to think about certain aspects when it comes to governance, but we always have to start with policy. So, governance comes down to, when I'm working with my clients, kind of three key pillars that you need to think about.

00:10:51.340 –> 00:11:03.570

Martyn Redstone: Policy, provision, and education. Policy we’ve talked about, you know, we have to align policy with risk, with regulation, and internal and external as well.

00:11:03.610 –> 00:11:21.199

Martyn Redstone: And then, once we're happy with the policy, we need to start thinking about how we provision AI into the organization or into the process. That could be external buy, it could be internal build, but again, once you've aligned your risk policies and your governance policies,

00:11:21.440 –> 00:11:27.479

Martyn Redstone: alongside that with procurement policies and data security and all those kind of internal things as well.

00:11:27.650 –> 00:11:43.079

Martyn Redstone: you can understand how to provision AI ethically and responsibly. You know, once you’ve then provisioned it, you have to make sure that people can use it, and use it properly and understand it, and so education is the third, and probably one of the most important parts as well.

00:11:43.080 –> 00:11:47.770

Martyn Redstone: So, when I'm building out governance frameworks with clients.

00:11:47.770 –> 00:11:53.210

Martyn Redstone: We think about those three key pillars. Policy, provision, and education.

00:11:55.570 –> 00:11:56.990

Joe Atkinson: Nice, and…

00:11:57.890 –> 00:12:10.329

Joe Atkinson: I think one of the key things we wanted to cover, and whether we can do it in enough depth on… in 15 minutes might be another question, but EU AI Act, a lot of, kind of, question marks around that. I know it’s something you’ve been… you’ve been speaking about a lot.

00:12:10.330 –> 00:12:10.910

Martyn Redstone: Yep.

00:12:11.190 –> 00:12:15.489

Joe Atkinson: Practical tips for, like, getting ready for that, or things you can start doing now?

00:12:15.870 –> 00:12:31.690

Martyn Redstone: Yeah, practical tips. So, we're already 12 months on from the EU AI Act going live, and actually in February this year there was a key part of that regulation that was placed upon us, which we forget about, which is

00:12:31.730 –> 00:12:41.499

Martyn Redstone: AI literacy training. So anybody, any organization that is deploying AI into Europe, whether it’s high risk or not.

00:12:41.500 –> 00:12:53.939

Martyn Redstone: needs to ensure that their internal staff who are working with that AI have had significant and reasonable AI literacy training. So that has to be done ASAP.

00:12:55.170 –> 00:13:05.499

Martyn Redstone: Aside from that, we've got until next August to get ourselves in shape when it comes to covering off regulation around high-risk

00:13:05.500 –> 00:13:27.549

Martyn Redstone: use cases, and employment is covered by that. And actually, it's a key thing: it's not recruitment, it's employment. So, anything from hire to retire, or hire to fire, or however you want to think about it. So, in line with that, yeah, we need to have bias auditing, we need to have transparency, we need to have explainability, we need to have good documentation (we call it Annex IV documentation), and we need to understand

00:13:27.550 –> 00:13:35.590

Martyn Redstone: AI model cards, you know, for the kind of models that we’re using. We need to understand every model of AI that we’re using, whether

00:13:35.590 –> 00:13:55.229

Martyn Redstone: mandated AI or shadow AI, and in fact, we need to get a check on shadow AI as well. So yes, there's lots to do, but it's not scary and it's not overwhelming. It's about making sure that you're ticking all the boxes, and that, you know, if an auditor comes knocking at your door, which

00:13:55.300 –> 00:14:04.209

Martyn Redstone: is very unlikely to happen right now, because they don't have enough staff to do that. But when people do come knocking at your door, you know, you need to be able to tell them

00:14:04.210 –> 00:14:16.660

Martyn Redstone: who ultimately is responsible in the organization, you know, what your bias auditing regime looks like, and your explainability and your transparency. Those are really the key parts alongside that literacy training as well.

00:14:18.480 –> 00:14:33.480

Martyn Redstone: And yeah, like I said, we've got to get that in check by next August, and in my latest conversations with the EU AI Office, they're talking about good guidance coming out towards the beginning of next year, so kind of January, February time.

00:14:34.230 –> 00:14:48.090

Joe Atkinson: Okay, cool. And how would you go about discussions with vendors? Because this is something which I assume most people on this call will be looking at different vendors for different parts of the process right now, like, with these considerations around

00:14:48.300 –> 00:14:53.469

Joe Atkinson: governance, compliance, and the regulation we’re playing within, like, how would you be assessing vendors?

00:14:54.160 –> 00:15:01.770

Martyn Redstone: Yeah, it's a good question. So there are various ways, various questions that you need to be asking vendors, and…

00:15:02.190 –> 00:15:11.990

Martyn Redstone: they can be quite tough questions as well, and that’s a good thing. So, ultimately, all the things that I’ve just covered have to be reflected in the vendors as well. One of the things that

00:15:12.000 –> 00:15:25.600

Martyn Redstone: the EU AI Act in particular does, and I know over here in the UK we're not in the EU, but any time that we're placing an AI system into Europe, we have to be covered under that legislation. So.

00:15:25.920 –> 00:15:41.619

Martyn Redstone: So the good thing about the EU AI Act is that it's kind of a supply-chain or market-access-based regulation, which means that, you know, providers of AI systems are as responsible as deployers. So, us as buyers are responsible.

00:15:41.820 –> 00:15:53.480

Martyn Redstone: And our vendors are responsible as well, so there’s no shirking or shrugging off of responsibility. Everybody has to do the same stuff. And so we need to be asking vendors questions like.

00:15:53.480 –> 00:16:13.270

Martyn Redstone: you know, their own bias auditing. You know, do you do bias auditing on a regular basis? Can we see the results of that? How are you carrying that out? Are you using a third party? Et cetera. Alongside, again, the standard kind of infosec and data security questions that we tend to ask. But then we also have some really interesting questions.

00:16:13.270 –> 00:16:20.250

Martyn Redstone: Especially in this new world of large language models, which a lot of vendors are starting to implement into their systems.

00:16:20.440 –> 00:16:27.300

Martyn Redstone: And there are various risks in using large language models that we haven't even gone into on this call:

00:16:27.440 –> 00:16:44.080

Martyn Redstone: anything from hallucinations. We know that, obviously, large language models make things up; they don't know how to say 'I don't know'. So, anything from, you know, your interview transcription software, what's the hallucination index like on there? Is it making stuff up about the person you're talking to and putting that into your ATS?

00:16:44.080 –> 00:16:56.929

Martyn Redstone: You know, if you’re looking at LLM-powered CV screening, you need to ask questions about rank drift, you know, and repeatability. You know, if I run the same screening process today.

00:16:57.040 –> 00:17:07.819

Martyn Redstone: you know, and take a note of those results. Who’s, you know, kind of top choice candidate? Is that going to be the same result tomorrow, and the day after, and the week after, and the month after? How are they…

00:17:07.869 –> 00:17:25.450

Martyn Redstone: How are they recording and auditing something like rank drift or repeatability? And then again, transparency. You know, can they explain how their AI systems work in natural language that a non-technical person can understand?

00:17:25.700 –> 00:17:32.810

Martyn Redstone: And finally on that, there was one other, and it’s just gone from my brain. If I remember it, I’ll come back to it. But those are the most important ones.
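As an aside from the transcript: the repeatability check Martyn describes, running the same screening several times and seeing whether the top pick holds, can be sketched in a few lines of Python. This is a hypothetical illustration only: `score_cv` is an invented stand-in for a vendor's LLM screening call (real calls are non-deterministic, which the jitter simulates), and the candidate data is made up.

```python
import random
from collections import Counter

def score_cv(cv_text: str, seed: int) -> float:
    # Hypothetical stand-in for a vendor's LLM screening call.
    # Real LLM scoring is non-deterministic; seeded jitter simulates that.
    rng = random.Random(hash((cv_text, seed)))
    base = float(len(cv_text) % 10)  # placeholder "quality" signal
    return base + rng.uniform(-1.0, 1.0)

def top_candidate(cvs: dict, seed: int) -> str:
    # One screening run: score every CV and return the top-ranked name.
    return max(cvs, key=lambda name: score_cv(cvs[name], seed))

def repeatability(cvs: dict, runs: int = 20) -> float:
    # Fraction of runs agreeing with the most common top pick.
    # 1.0 means a perfectly repeatable ranking; lower values show rank drift.
    tops = [top_candidate(cvs, seed) for seed in range(runs)]
    return Counter(tops).most_common(1)[0][1] / runs

cvs = {
    "Candidate A": "10 years Python, led a data platform team",
    "Candidate B": "8 years Java, some ML experience",
    "Candidate C": "recent graduate, strong internships",
}
print(f"top-pick repeatability over 20 runs: {repeatability(cvs):.2f}")
```

The same idea extends to full shortlists (e.g. rank correlation between runs rather than just the top pick), which is roughly what a vendor should be able to show when asked about rank drift.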

00:17:32.850 –> 00:17:46.220

Joe Atkinson: No, I loved your point on, like, hallucination, and I think the great learning there is not just to, you know, accept vendor claims, but also to test for yourself. And I know this is something you've been doing, right, like testing the different tools.

00:17:46.220 –> 00:17:59.070

Joe Atkinson: And we're doing an event in London in late October, and part of that is going to be, like, how to run a trial for these tools and, like, actually test for claims and see if you're getting the output that you should be, or if you're getting that hallucination, so…

00:17:59.070 –> 00:18:13.910

Joe Atkinson: Yeah, that was a good takeaway. Let’s finish on a kind of optimistic angle then, because I know some… of course, these topics are incredibly important, but they might be a little bit scary for some folks who haven’t addressed them before, or it’s their first time.

00:18:13.910 –> 00:18:24.949

Joe Atkinson: You have a great analogy of how this is, like, setting guardrails in place to actually innovate more and go faster, right? So, how do you encourage people to, like, stay innovative,

00:18:25.150 –> 00:18:27.099

Joe Atkinson: We’re, like, in this conversation.

00:18:27.470 –> 00:18:38.750

Martyn Redstone: Yeah, so, so we need to think of ourselves as kind of like the ethical guardians of the organization, because most use of AI across an organization isn’t classed as kind of high risk.

00:18:38.750 –> 00:18:56.779

Martyn Redstone: And so people don’t really think about the kind of ethical and responsible side of it. Whereas in TA and HR, we absolutely have to be the ethical guardians of AI in an organization, because the decisions that come out of an AI-focused process like ours are life-changing.

00:18:56.790 –> 00:19:08.910

Martyn Redstone: And so we need to remember that that's our role when it comes to AI transformation. Now, the analogy, which I absolutely love, so I'm going to repeat it time and time again, basically came out of…

00:19:09.320 –> 00:19:23.190

Martyn Redstone: People concerned about putting in too many guardrails would stymie, innovation and slow down innovation in an organization, and it does quite the opposite, because what we see is…

00:19:23.190 –> 00:19:32.940

Martyn Redstone: very much, you know, this concern that if we throw down too much policy, too much process, people aren't going to want to innovate. And it's quite the opposite. Actually, without that,

00:19:34.510 –> 00:19:52.790

Martyn Redstone: People don’t know how to innovate. And so, when I talk about guardrails, I talk about, you know, literally the metal guardrails in the middle of a motorway or a highway, depending on where you live in the world. They’re not there to slow traffic down. They’re actually there to… because they’re right next to the fast lane most of the time, they’re there to allow people to drive faster, safely.

00:19:52.800 –> 00:19:54.719

Martyn Redstone: And if they weren’t there.

00:19:54.740 –> 00:20:00.850

Martyn Redstone: Then people in the fast lane would be going a lot slower, because they’d be scared of veering off into the oncoming traffic on the other side.

00:20:00.900 –> 00:20:03.690

Martyn Redstone: And so, that’s the whole point of guardrails.

00:20:03.770 –> 00:20:12.950

Martyn Redstone: We lay them down right at the beginning of the process to allow people to innovate quicker, but safely, so they know exactly what they can do all the time.

00:20:14.370 –> 00:20:33.179

Joe Atkinson: Yeah, I think it’s a fantastic analogy, and, like, something I’ve seen just in the last couple weeks, myself, like, showing a few recruiters how to use Gemini safely in their process, and the adoption has, like, gone way up. They’ve started using it way more, because they feel more… more confident. So, good one to end on. We have a couple of good questions I can see in the chat already, so let’s…

00:20:33.180 –> 00:20:43.629

Joe Atkinson: flip over to those. If you have any questions, throw them in there. The first one’s from Mark Dawkins, who I’d actually recommend everyone follows, as well as Martin, as someone who.

00:20:43.630 –> 00:20:56.580

Joe Atkinson: speaks a lot about risks and compliance and questions we should be asking when adopting AI. So he wrote, what is the guidance on where do algorithms stop and AI starts? What are the key definitions?

00:20:57.340 –> 00:21:05.579

Martyn Redstone: Yeah, it’s a great question. Ultimately, that comes down to every organization to lay that out within their acceptable use policies.

00:21:05.580 –> 00:21:18.710

Martyn Redstone: However, my recommendation is that final hiring decisions still need to be made by a human, and rejections, also. No auto-rejects in there right now. So I think that that’s where…

00:21:18.990 –> 00:21:38.779

Martyn Redstone: the AI-focused process needs to stop, ultimately. If you’re in a very high-volume environment, I have seen, you know, organizations now that have handed everything off to AI, so we still need that kill switch in there, we still need that human oversight, and be able to review decisions on a regular basis to make sure they’re still aligning with

00:21:38.820 –> 00:21:46.440

Martyn Redstone: any policy that we put down, or any expectations that we put down. So… so those would be my kind of recommendations on where

00:21:46.560 –> 00:21:48.329

Martyn Redstone: Algorithms should stop.

00:21:49.870 –> 00:21:57.400

Martyn Redstone: I’m not sure what you’re asking for, Mark, in regards to key definitions, but, but I hope that answers your question.

00:21:57.520 –> 00:22:13.759

Martyn Redstone: Again, like I said, the whole point of laying down, policy and good guardrails is so that you’re aligned with an organization’s expectations as well, because every organization does differ depending on where you are in the world. But I don’t think right now that we’re in a position

00:22:13.930 –> 00:22:20.260

Martyn Redstone: for most, kind of, especially knowledge-based workers, where AI can make a final hiring decision.

00:22:22.060 –> 00:22:37.340

Joe Atkinson: Cool. Another question from Esther: with the implementation of AI, are companies actively measuring pre-AI and post-AI performance, whether it's time to fill or any other measurables, to understand its actual benefit? Anything you're seeing on that?

00:22:37.360 –> 00:22:52.449

Martyn Redstone: Yeah, so this is one of the… we see, kind of, in the press, you know, 'our AI pilots have failed', and you saw that kind of dodgy MIT report the other week about it. I called it dodgy because it was only, like, 92 respondents or something, but

00:22:52.710 –> 00:23:06.070

Martyn Redstone: Yeah, ultimately, if you want to implement any technology successfully, you need to have a control point of understanding what the… what the pre-transformation metrics look like, and understand what you’re trying to achieve.

00:23:06.070 –> 00:23:20.489

Martyn Redstone: And work backwards from there, because there’s no point in just chucking in technology for the sake of it. It needs to solve a problem, it needs to improve some kind of metric, and you need to know what that control point is, where you started from, and where you’re going to head towards.

00:23:22.300 –> 00:23:28.020

Joe Atkinson: Yeah, I’d agree with that. The ones I’ve seen have maybe less been the, kind of, big.

00:23:28.160 –> 00:23:51.699

Joe Atkinson: pre- and post, like, those big transformational ones, but more, like, very specific on an initiative. So, you know, we’re gonna try Juicebox, an AI sourcing tool. So, and this is something we’ve done internally at SEED, and Nathan on our end has been leading this, but, okay, what are the specific metrics? Like, how long are we spending searching for candidates? What’s our response rate? What are the pass-through rates of those candidates?

00:23:51.700 –> 00:23:57.429

Joe Atkinson: Versus the non-AI solution, so getting really granular and looking at specific initiatives is the way we’ve done it.

00:23:57.430 –> 00:24:14.799

Martyn Redstone: You have to know what you’re starting with, you know, to prove the value. You know, that’s the other thing that we don’t talk about enough, which is AI ROI. Try saying that without your teeth in. But yeah, so you need to, you know, ultimately, your organization wants to know you’re getting that return on investment as well.

00:24:14.800 –> 00:24:26.070

Martyn Redstone: And so, that can be everything from that education piece, to make sure people are using it, through to tracking those metrics and making sure you are getting the ROI from it. It’s super important, super important.

00:24:26.260 –> 00:24:33.310

Martyn Redstone: Sorry, I’m just looking at Mark’s follow-up. I kind of asked Mark on the answer, you know, if that’s what he meant. Actually, he’s just saying that,

00:24:34.050 –> 00:24:48.379

Martyn Redstone: just wanted to make people aware of some vendor claims, but in reality, it’s algorithms working hard, and folks need to be aware. And I completely agree, you know, one of the biggest challenges, and one of the biggest pieces of work that I do now is… is educating against

00:24:48.900 –> 00:25:05.409

Martyn Redstone: AI marketing slop is probably the best way of putting it. There are a lot of vendor claims out there, and a lot of it is just marketing. Everything from the term agent. You know, most people you talk to in our world can’t even define what an agent is, and yet we see it all the time in marketing.

00:25:05.410 –> 00:25:14.129

Martyn Redstone: You know, so we want to make sure that we’re pushing back on vendor claims, you know, and that’s your responsibility as a buyer as well, out there in the audience, to make sure that

00:25:14.210 –> 00:25:21.700

Martyn Redstone: you’re not just taking what vendors say as gospel, and that you are asking those questions for them to prove it. You know, whether that be…

00:25:21.700 –> 00:25:35.489

Martyn Redstone: you know, we can shortlist in 10 minutes, or 1 minute, or an hour. Again, you might be able to shortlist that quickly, but is it the same shortlist tomorrow, the day after, the day after, and the day after? You know, those kind of questions are really, really important.

00:25:35.700 –> 00:25:50.249

Martyn Redstone: And most of the time when people talk about agentic AI and agents and what have you, it’s just automation with lipstick. And we’ve been working with automation systems for years and years and years. It’s just automation that has a little bit of a large language model doing stuff inside.

00:25:50.290 –> 00:25:57.549

Martyn Redstone: So yeah, so make sure you’re asking those questions, and, and, make sure you’re pushing back on their claims.

00:25:58.450 –> 00:26:09.809

Joe Atkinson: Yeah, I would agree. The amount of, agents used in marketing versus actual agents in existing is, yeah, not the same. Probably got time for one more question, if anyone wants to throw one in quickly,

00:26:09.910 –> 00:26:27.090

Joe Atkinson: in the meantime, would recommend, if you’re working on any of these documents or topics that we’ve covered, you have a great template on your website, Martin, that I’ve actually used to help draft some of our stuff at SEED. Yeah, so check that out. I’m not sure if there’s any other resources you’d recommend for folks.

00:26:27.190 –> 00:26:43.569

Martyn Redstone: Yeah, so, so, so, you know, connect with me on LinkedIn. I’m always posting stuff up that, that hopefully somebody finds interesting somewhere. And yeah, we’ve got some free resources on the website, you know Mia hyphen-hr.com. I don’t think I can send chats out, but,

00:26:43.570 –> 00:26:49.710

Martyn Redstone: Feel free to pop that up. You know, and there’s various resources on there from,

00:26:49.930 –> 00:27:00.060

Martyn Redstone: AI screening vendors, you know, assessment tool to, you know, policy templates and what have you, so feel free to use those resources to your heart’s delight.

00:27:01.150 –> 00:27:20.979

Joe Atkinson: Awesome. Yeah, I’m pretty sure we’ll send a email out with some links, definitely, to that and your profile as well. Here we go, one question to finish on. LinkedIn is trying to roll out an AI agent as an additional fee, to the enterprise license. Anyone else trying to learn about this new tool, is it just fancy automation?

00:27:21.270 –> 00:27:24.870

Joe Atkinson: You seen the AI? Yeah. You seen that one?

00:27:25.040 –> 00:27:37.049

Martyn Redstone: Yeah, this is the LinkedIn hiring assistant they’ve rolled out over the last few weeks. I haven’t had a chance to play with it, because I’m not a recruiter anymore, I’m a recovering recruiter. I haven’t looked at a CV in several years now.

00:27:39.470 –> 00:27:59.489

Martyn Redstone: But I’d like to. I know that there’s a few people out there in the market that are doing some analysis on it, so I’m looking forward to kind of seeing that, but that’ll take some time as the system’s rolled out. But I’m very interested in seeing whether it’s the game changer that LinkedIn have marketed it as. And again, it could just be marketing slop.

00:27:59.490 –> 00:28:16.789

Joe Atkinson: Yeah, we’ve actually been trialling it, I’m not sure how much we can publish about it, just yet, but, you know, people write off LinkedIn, but they do ultimately have the best dataset out of anyone, so, they potentially may be slower, but I wouldn’t write them off.

00:28:17.380 –> 00:28:23.760

Joe Atkinson: maybe reach out to me, Stacey, and I can share some feedback one-on-one on that versus on the webinar, but happy to chat.

00:28:24.020 –> 00:28:42.390

Joe Atkinson: Cool, I think we’re about there for time. Thanks so much for joining us, Martin. Always great to chat AI with you. We’ll send you one an email with this recording, links to Martin’s website, and the free resources, as well as upcoming webinars, hopefully one every month going forward. So, hope to see you on the next one.

00:28:42.940 –> 00:28:46.990

Martyn Redstone: Thank you very much. Great to be invited as your first guest. Really appreciate it. Cheers, Joe.

00:28:47.780 –> 00:28:48.950

Joe Atkinson: Thanks, Martyn.

 

If you want to hear how AI can help accelerate your hiring growth, let’s talk. We’d genuinely love to help.

Want more insights on using AI in talent acquisition? Follow Scede on LinkedIn for strategies that actually work.

 

 

Don't Want to Miss Anything?

Get closer to Scede, subscribe to receive our insights via email.