Technology has been a great thing for recruiting, but there's a dark side: namely, artificial intelligence. AI has made companies more efficient and effective, but it's also been a roadblock for job candidates and a hindrance that keeps hidden workers from getting ahead.
EEOC commissioner Keith Sonderling joins the boys for a high-level education on how the government is currently dealing with the questions around tech and employment.
PODCAST TRANSCRIPTION
INTRO (1s):
Hide your kids! Lock the doors! You're listening to HR's most dangerous podcast. Chad Sowash and Joel Cheeseman are here to punch the recruiting industry right where it hurts! Complete with breaking news, brash opinion, and loads of snark. Buckle up, boys and girls, it's time for the Chad and Cheese podcast.
Joel (21s):
Oh yeah. We've got the EEOC in the house. What's up, everybody? You're listening to your favorite podcast. This is co-host Joel Cheeseman, as always joined by my co-host-in-chief Chad Sowash, and dude, Chad, we're so excited about this guest. I'm going to hand it off to you to take it from here.
Chad (41s):
Nobody is excited when somebody says E-E-O-C, but I'm going to tell you right now, on this episode you're going to be excited. So welcome Keith Sonderling, Commissioner at the US Equal Employment Opportunity Commission. That's the EEOC. And before that, three years at DOL as acting and deputy administrator of the Wage and Hour Division. The main reason we wanted to get you on, Keith, is because you have recently written articles and done podcasts on AI. We'll get there, but we're really excited because we have not, I repeat, we have not had the engagement that any of us in our industry, I think, have wanted from government around technology.
Chad (1m 30s):
But before we get into that, that's just a little teaser. Give us a little background about you. Who is Keith? Are we talking about long walks on the beach?
Keith (1m 42s):
Well, thank you. And for those, when you say the EEOC's in the house, most people would run away from that house or immediately turn off, but stay on, please, because I'm very excited to be here. Joel and Chad, thank you so much for having me. I've listened to your podcast; you've been a tremendous resource for me as I dive into technology and AI. For just the next 45 minutes, let me tell you how great both of you are. But in all seriousness, we will get to the technology, and I do appreciate what you both do in bringing to light all of these various technologies that employers are using and employees are being subjected to. But first, a little bit about me.
Keith (2m 22s):
I'm a commissioner on the US EEOC. I was confirmed by the Senate last September, after going through the Senate confirmation process, which took around 14 months, which is its own whole separate podcast.
Chad (2m 36s):
That would suck.
Joel (2m 37s):
It sounds like a colonoscopy.
Keith (2m 41s):
You know, in a way you have to look at it and sort of pinch yourself that you are interacting with this process, that you're dealing with the United States Senate, that you've been nominated by the president. So as much time as it took, it was just a very, very cool experience. But before I got to the EEOC, and I've been here just around a year, I was at the Department of Labor, the Wage and Hour Division, as you said, which handles the minimum wage, overtime, the Family and Medical Leave Act, and then some of the immigration and agricultural laws. What drew me to the Department of Labor is labor and employment; in Florida, that's all I'd ever done my entire career. I was at a Florida-based law firm doing labor and employment, defending corporations in labor and employment suits on the litigation side, but also working with HR and with companies on best practices, policies, and procedures relating to labor and employment.
Keith (3m 34s):
So when the opportunity came up in 2017 to join the Department of Labor, to me it was not only a once-in-a-lifetime opportunity, but I looked at it as: I'm a labor and employment lawyer, I can go to the mothership, the Department of Labor, and essentially get a PhD in labor and employment, when you're doing it at that level. So I left Florida. I'm no longer technically a Florida man, although I will always be a Florida man.
Joel (4m 1s):
Go Gators.
Keith (4m 2s):
That's right. Thank you very much. So I'm proud to be a Florida man in DC. When I joined the Department of Labor, I really got a national perspective on how these issues affect employees and employers, not just the Florida perspective I was dealing with before. And I was able to do a lot of really cool things at the Department of Labor, whether it was opinion letters or changing the overtime regulations. So that was a really great experience. And then I was nominated to the EEOC, and for labor and employment lawyers, the EEOC is really the premier agency for civil rights.
Chad (4m 40s):
The Premier League.
Keith (4m 44s):
That kind of football is for you two to talk about.
Joel (4m 48s):
Ted Lasso.
Keith (4m 50s):
That's a great show. I haven't seen season two yet, only season one. But you know, when you think about labor and employment and the modern-day issues that affect employees on a daily basis, it's not the Department of Labor, which deals more with health, safety, and wage concerns. It's not the National Labor Relations Board, which is very well known but deals more with union issues. The EEOC gets to the core of it. It's the agency that deals with all civil rights in the workplace. Think about pay equity, disability discrimination, the entire #MeToo movement, pregnancy discrimination, age discrimination, all the big-ticket stuff. So personally, having had that experience at the Department of Labor and now being at the EEOC, it's just more than I could ever ask for.
Chad (5m 38s):
So once again, you've written articles, you've been on podcasts, and you've been very vocal about the impact of AI, or the prospective impact of AI, on hiring, firing, and the managing process. So what got the EEOC, and more importantly you, because it feels like you're leading this, interested in AI in the first place?
Keith (6m 5s):
Well, let me answer that in a few different parts. Generally, commissioners at the EEOC have their own projects or areas of law that they specialize in. One commissioner really led the charge on LGBT discrimination. One commissioner led the charge on age discrimination. So it's not uncommon for commissioners to pick a specific topic and really champion it. And for me, that is, as you said, artificial intelligence in the workplace, for a whole host of reasons. First and foremost, it's out there, it's being used. This is not one of those discussions about how robots are going to replace humans and there will be no more workers and we'll just live as a society of robot workers, right?
Keith (6m 52s):
That's what people want to think about when they think about AI, and as you know, that's not what it is. It's technology that's out there right now. So the conversation needs to happen now. And coming from practicing law and dealing with corporations who need to hire workers, who want to genuinely diversify their workforce and take some of the bias out of recruiting, these issues need to be addressed right away. And there's been a lot of interest: before I got here, there was interest from Capitol Hill, senators wrote letters to the EEOC demanding that the EEOC take up this issue, and there have been a lot of advocacy groups asking the EEOC to look at this topic.
Keith (7m 34s):
So, first of all, technology really interests me, but more importantly, there are so many benefits to using technology in the workplace that I want to see it flourish and not get subjected to government regulations that won't work, because we're already too late; it's happening, and it needs to be addressed now. And because there are no regulations out there, because there's no guidance on it, because, as you know, technology generally gets very far ahead of the government, it's a time when we can all really work together, everyone from employee groups, to the employers buying and using the software, to the developers, to create a standard that actually allows these products and this AI to help diversify the workforce and get the best candidates, but that doesn't pile on burdensome regulations that take it down or subject it to massive federal investigations or class action lawsuits.
Joel (8m 27s):
Keith, can you talk about how we got here? Because, you know, the road to hell is paved with good intentions, and I feel like we got to this tech-heavy recruitment process not on purpose; it just sort of happened that way. What's your perspective on how we got here?
Keith (8m 47s):
I think the default answer, the easy answer, is that the pandemic really pushed this forward, but it was being used before then. And for larger corporations, think Fortune 500 companies that need to hire hundreds of thousands of workers: how do you deal with that process? How do you deal with the number of applicants, and how do you have enough employees internally in HR who are actually going to be able to interview these people? I really think that's where a lot of this came from: you have thousands and thousands of resumes, and a human just doesn't have the capacity, or you need a lot of humans, to sift through them. So I think that's the basis of where a lot of this came from. But then, when Silicon Valley and tech people started getting involved and adding AI to a lot of that, that's when a lot of these decisions I've been writing about and talking about really came to the forefront.
Keith (9m 43s):
And that's more recent; I think that's in the last three or four years. I know on your podcast you talk about how much money is going into these AI technologies, and for me, obviously, that's a good thing. If we're getting funding, if we're getting better products, and investors are looking at this, that's good. Let's just do it the right way so it doesn't all go down in flames because of misuse, either by bad actors or by bad design. So that's really how I look at this: it's brimming with potential, but at the same time, if somebody in my position doesn't come out and say, here are the rules of the road, here are the best practices, then it could really be subject to some very serious lawsuits.
Chad (10m 24s):
Are we missing the forest for the trees, though? I think most people misunderstand AI: the decisions AI makes don't stem from the AI itself. Rather, they stem from human decisions. Humans are biased, always have been, always will be. But when you, the human being, program that bias into systems, processes, and in this case AI, you start to reach scale, meaning your bias could impact thousands instead of dozens. So from my standpoint, as we talk about regulation, I almost want to say, look, when we're talking about bias, the bias is there, it's programmed in, and we already have regulations to enforce against bias.
Chad (11m 8s):
These decisions are just being taken to another scale with AI, but they're human decisions. So are we pointing our finger at AI instead of Jeff Bezos? Are we pointing it at the wrong person or the wrong system?
Keith (11m 23s):
That's really a great point, and it's what I talk about. It's not about the algorithm, although it could be about the algorithm; because I'm a lawyer, I have to disclaim it, right? So there are two ways to look at this. There's been so much focus on the algorithm and the secret computer code that's discriminating that even if the three of us saw it, we would have no idea what it said, because it's probably all math formulas. Right?
Chad (11m 47s):
Right.
Keith (11m 48s):
That's not what it's about. And you just summarized it perfectly. It's either, number one, the data going into the algorithm, and there are two classic examples of this, which I've written and spoken about. The first is the very public Amazon case, where they used AI and gave the computer their ideal candidate, based on their historical applicants and their workforce. And because that was mainly males, it started downgrading everyone who wasn't a male. It automatically downgraded you if you went to a women's college or if you played a women's sport.
Chad (12m 24s):
Field hockey.
Keith (12m 25s):
Right. And that wasn't because the AI has misogynistic intent, which is what a lot of people would want to say, that the AI is discriminatory. It was simply because of the data fed to it. And another example, and I know you both will get a kick out of this one: one firm said, go find me the ideal applicant, I want to diversify my workforce, here are my top performers. And the algorithm spits out "your ideal applicant is named Jared and played high school lacrosse." Thank you. I mean, what does that say? What does that do? But whose fault is that? The inputs gave you the bias; biased inputs give you biased outputs. Now, what I did disclaim is that there are situations where it could be, I don't want to say a biased algorithm, but a biased tool, right?
Keith (13m 11s):
So if some of these programs are poorly designed and then carelessly implemented by the employer, and they allow you to screen out a certain race, gender, or age, or do brackets like you would in targeted advertising, then that could be a discriminatory tool in itself. It doesn't matter what the data looks like; you could have a completely diversified potential workforce, and if you have a tool that allows you to sift through it on protected characteristics, that's scaling discrimination like we've never seen before, right?
Chad (13m 44s):
Right.
Keith (13m 44s):
Because now you have a tool to do it. So for the most part, I completely agree with you. It's not about the algorithm; who cares how the algorithm is designed? What we're going to look at, and what employers should be looking at, is the results of what you fed the algorithm, because that will tell you a lot about the data you put into it.
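To make Keith's "biased inputs give you biased outputs" point concrete, here is a minimal, hypothetical sketch. Every feature name and number below is invented purely for illustration; this is not any vendor's product or the actual Amazon system. It shows how a naive screener "trained" on skewed historical hires ends up penalizing a proxy feature like a women's college, even though gender is never an input:

```python
# Toy illustration of biased inputs producing biased outputs:
# a naive screener learns per-feature hire rates from skewed
# historical data, then downgrades a gender proxy feature.

from collections import defaultdict

# Hypothetical historical outcomes: mostly male hires, so the
# "womens_college" feature rarely co-occurs with a hire.
history = [
    {"features": {"lacrosse"}, "hired": True},
    {"features": {"lacrosse"}, "hired": True},
    {"features": {"chess_club"}, "hired": True},
    {"features": {"womens_college"}, "hired": False},
    {"features": {"womens_college", "chess_club"}, "hired": False},
    {"features": {"chess_club"}, "hired": False},
]

# "Training": per-feature hire rates learned from the skewed history.
counts = defaultdict(lambda: [0, 0])  # feature -> [hires, total]
for record in history:
    for feature in record["features"]:
        counts[feature][0] += record["hired"]
        counts[feature][1] += 1

def score(features):
    """Average the learned hire rate of each feature on a resume."""
    rates = [counts[f][0] / counts[f][1] for f in features if f in counts]
    return sum(rates) / len(rates) if rates else 0.0

# Two equally qualified candidates; only the proxy feature differs.
print(score({"chess_club"}))                    # ~0.33
print(score({"chess_club", "womens_college"}))  # ~0.17, downgraded by the proxy
```

Keith's conclusion follows directly from a sketch like this: auditing the scores the tool produces, rather than reading the algorithm's code, is what exposes the skew in the data it was fed.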
Joel (14m 3s):
Should we be pointing the finger at the employers or the software developers or a little bit of both?
Keith (14m 9s):
Well, first of all, we don't point fingers. We assess situations under applicable law.
Chad (14m 17s):
Yeah Joel.
Joel (14m 17s):
Sorry, Chad and I point fingers.
Keith (14m 20s):
That's fine. And of course we look at every investigation individually and would not point fingers at anyone until we have reasonable cause. But in all seriousness, the who is the employer, and now I have to sound like a lawyer for a minute: the employer is liable. There's no question under our laws that the employer using these tools to make decisions will be liable for the outcomes of the AI. Whether they bought the AI to diversify their workforce and eliminate bias, or to help employees upskill and reskill, if it has a discriminatory output, the employer's on the hook.
Chad (14m 57s):
We've talked about black box versus white box AI on the podcast for years. For all those listeners who haven't heard it before: black box, simply put, is something that's not transparent, so you can't see what the algorithm's doing; with white box, you can actually see it, tweak it, and explain it, which I believe is the most important thing for any employer out there, being able to see how it's working and why it's doing what it's doing. My question to you, Keith: an employer can choose whatever they want, but at the end of the day, the outcomes are where the EEOC comes into play, right? Do you see black box AI actually surviving the next decade?
Keith (15m 38s):
You know, you may not like the answer to this. I'm not advocating against having very transparent algorithms, but at the end of the day, it's the results that we're going to look at. As far as actually being able to see what's in the black box, at this point it's really up to individual legislators and regulators to make that determination. And what you're seeing right now, absent a federal standard, which is something else I've been talking about, is that you're going to get a lot of different laws and regulations on this. So in Illinois, they've passed a law regulating the use of facial recognition technology.
Keith (16m 20s):
New York City now has a proposal that if you're being subjected to AI, not only does that have to be disclosed to you before you take the test or are assessed by it, you're told that AI is being used in this process, and here are your rights and remedies under the New York civil code. So absent a national auditing standard, you're going to see a lot of these local and state governments start to say, here's what we actually want to see, here are the auditing requirements we want. But right now it's sort of all over the place, and not just here. I know you have a lot of listeners in Europe, and you deal with a lot of those issues.
Keith (17m 0s):
They're really getting ahead of it too. I know data privacy is one issue over there, but they've already said that, in their proposed legislation, using AI in employment is going to be in their highest risk category, the same as critical infrastructure and emergency services. They call it "high risk." So that's a long-winded way of saying everything is all over the place right now, and that's why I'm here.
Joel (17m 26s):
And this reaches into a huge number of companies. The Harvard study, I think from this year or last year, said 99% of the Fortune 500 use technology in their pre-screening, as do 75% of all US employers. To piggyback on my last question, it also showed that nine out of ten executives rely on this software and believe that it is unfairly rejecting candidates. So you say the employer is on the hook for this, and I think it's important to underscore that, because when something's messed up, are they blaming vendors in the process? I think a lot of employers need to look in the mirror and realize: we're the ones liable for the tools that we use, so let's make sure we're dotting our I's and crossing our T's.
Joel (18m 17s):
Would that be correct?
Keith (18m 19s):
Yeah, and that is a great point. Just like they're liable if an HR person or a manager makes that decision, right? We're just doing it much more high-tech, on a much larger scale. But HR tech can be much more transparent than trusting a human brain, right? So when you talk about this whole AI black box, "we don't know what's going on," well, do you know what's going on in your hiring manager's head, across the various divisions, across the country?
Chad (18m 46s):
No clue.
Keith (18m 48s):
No clue. And you don't know what they've seen. One of the examples I like to give on the benefits of AI in recruiting, in those first steps, is that an HR manager or a hiring manager in talent acquisition can see that somebody is of a certain national origin. They can see that a person is disabled or pregnant, right? Let's use the disabled and pregnant example. You can't ever unsee that. And although it is totally unlawful to not give somebody a job because of that, in the back of their mind, no matter what, they're thinking: how much is this going to cost me?
Keith (19m 28s):
If I have to make a reasonable accommodation for somebody who's disabled, how much is that going to cost me? If somebody is pregnant, how much is it going to cost me in health care or leave? And if you have ten other candidates, it's easy to just say, okay, we're going to move ahead with these other highly qualified candidates instead, and that person is eliminated at that very early stage. Right?
Chad (19m 50s):
Right.
Keith (19m 50s):
Versus AI: all the computer is looking at is your qualifications, and some programs even eliminate the name or take out any gender references, right? So a lot of that is a plus, because you're getting much further down the line instead of potentially getting screened out by those initial managers. In that sense, it is very, very good. But to go back to your question about that stat you gave, about people getting rejected by a computer, or whatever the stats are on how many candidates never actually get in the door because of that computer: from an enforcement perspective and from a litigation perspective, you haven't seen many cases on this at all.
Keith (20m 32s):
Which is generally unusual, because we live in a very litigious society, and we have a lot of federal law enforcement agencies, like the one I'm a commissioner at, that are normally very aggressive in bringing these kinds of cases. But if you don't know you're being subjected to this technology, or you don't know you never even got an opportunity because an algorithm screened you out before you were ever shown a job, how are you going to complain of discrimination? How are you going to exercise your rights under the law? I think that is really why we're not seeing a lot, whereas before, if you assumed somebody didn't like you, or you had any evidence of a reason, you could bring a charge with the EEOC.
Keith (21m 13s):
So in that sense, I think people are starting to get hip to the fact that they're being subjected to this, but a lot of people still don't know what's out there.
Chad (21m 19s):
I think we're just used to it, no matter what; again, it's the human condition we've been dealing with. My big question: we saw in the Bush era, when they brought huge amounts of funding into the OFCCP, they started bringing in statisticians and really started going heavily after enforcement. You can see that in the fines that were levied against major government contractors. So the question is, knowing that bias is going to scale, it's going to happen, because we're now employing these technologies, these platforms, these systems, AI, whatever you want to call it, it's going to happen.
Chad (22m 1s):
It's going to happen at a huge scale. Is that going to make it easier for you guys to pinpoint, or are you going to have to play the same kind of game, do what the OFCCP did back in the 2000s, start playing at a more evolved level, and start using AI for your own purposes?
Keith (22m 21s):
You know, that's an interesting question. You see a lot of back and forth about the budgets for these agencies, how many investigators are being hired, and what the priorities are at the time. So it's really hard to tell right now what the current priorities are; as you know, we don't even have a budget right now, we're on a continuing resolution. The White House did call for an increase of over $40 million to the EEOC budget for this year, and similarly for DOL and OFCCP. But a lot of what drives enforcement at the national level is the priorities of the current administration and the priorities of the chair.
Keith (23m 7s):
And to your point, no matter what the EEOC's priorities are, whether it's gender discrimination, pay equity, racial discrimination in the workplace, or religious discrimination, it's all going to be magnified, right? To levels we have never seen before, if AI is not used properly. I can't comment on any immediate investigator increase or additional enforcement in this area, because generally I'm not allowed to. But I can tell you, and I think this raises a really great point, I appreciate you bringing it up: no matter what the issue of the time is, the way the EEOC is designed, we're always kind of dealing with what's in the news, right?
Keith (23m 50s):
When the #MeToo movement happened, the EEOC put out guidance and was a leader on that. With the US women's soccer team and pay equity, that's right in our wheelhouse. Then came COVID and all the disability issues related to COVID, and now the religious issues. So from a national perspective, the EEOC sort of has to pivot to whatever the national news story is. And the issues we deal with, race, color, religion, sex, orientation, pregnancy, are national news stuff, right, when it gets to that level. But I am very cognizant and very aware that AI is now going to be behind the scenes for a lot of these decisions. That's just something all of us can't ignore anymore.
Joel (24m 31s):
Keith, one of the things you said that stuck in my brain is that there hasn't been any sort of major case around AI and hiring and being screened out, although you think there will be in the future. So when I look at nine out of ten executives who believe people are getting filtered out unfairly, what is the incentive for them to change? Because there's no major fine, there's no perp walk.
Chad (24m 57s):
Where's the stick?
Joel (24m 58s):
What's going to happen? We talked about Activision on the show recently and the sexual harassment case, and I think pregnant women were part of that as well. The $18 million fine: at the end of the day, don't these big companies sort of run an algorithm of catastrophe and say, well, let's just keep it as is, and if we get pinched, we write a check and we're done? I mean, what's going to change behavior?
Keith (25m 23s):
That is a great point. Look, there's the carrot and there's the stick, and let me go over both. For me, I believe that enforcement alone will never be enough to meet our mission. I believed that when I was at the Department of Labor as well: educating employees about their rights and educating employers about their obligations under the law matters far more to me than enforcement. Let's prevent the discrimination from happening in the first place. So in an area like this, where there are no clear guidelines, the EEOC does have the tools. Look, our laws are old, from the 1960s, born out of the civil rights movement, but they're not outdated.
Keith (26m 11s):
They apply with equal strength to decisions made by HR people in the 1960s as they will to computers in 2050, right? So that's why I feel it's my job to talk about this as much as I can: to go over each technology, whether it's recruiting, facial recognition, voice programs, or even some of the AI being used to promote and terminate employees, and say, here is what the rules of the road are, and if it's used improperly, there can be serious consequences. I want to do that first. And in this industry, what I've seen is that there's really a desire for that: a lot of people believe in their technology, they want it to work, and they don't want it to be taken down by enforcement or class action lawsuits, whether by us or by plaintiff's lawyers out there.
Keith (27m 6s):
But going to the enforcement, and I do have to address what you're saying: where is the enforcement? Why isn't it happening? Like I said earlier, there are a lot of reasons why we haven't seen it; people just don't know that they're being subjected to this. So that's really where the federal government should step in and find these cases. Well, our jurisdiction at the EEOC gets a little complicated, in the sense that other law enforcement agencies can just kick the door open and say, we're here. You know, like in the movies.
Chad (27m 39s):
Yes, the EEOC has to be ushered in by somebody with a case, right?
Joel (27m 43s):
You do not have a badge. Keith, do you not have a badge?
Keith (27m 46s):
I don't need a badge. Watch it. I do not have a badge. But because the EEOC's jurisdiction generally requires an employee to come to us, it's difficult here. The EEOC can't just walk in, whereas the Department of Labor's OSHA can walk into any workplace just to do an inspection, right? Same with Wage and Hour; they can go into any facility and check time and payroll records. We can't do that here. But we do have a very unique tool. When Congress designed the EEOC and put in that limitation, that an employee has to file what we call a charge of discrimination for the agency to have jurisdiction,
Keith (28m 26s):
it carved out what's called a commissioner's charge, and essentially said: if you're confirmed by the Senate as a commissioner, you have the ability to start an investigation in your own name, with the full force of an EEOC investigation, which comes with subpoena power and eventual litigation. So.
Chad (28m 48s):
Oh, damn.
Keith (28m 50s):
Yeah. So to me, it's very serious if a commissioner uses this seat to tell the EEOC to investigate an employer. Commissioner charges are generally a very big deal, and a lot of the commissioner charges in the past that have been very successful came from a commissioner watching 60 Minutes or reading investigative journalism and starting a case from there. So this is certainly something I'm interested in, and I do want to be very balanced here, for your listeners and for all of you. That's why I started with: we have to address the guidance, we've got to put the rules of the road out. It's not necessarily fair, although the law is the law,
Keith (29m 32s):
if the federal agency that's going to do the enforcement isn't giving best practices and guidelines. That's something I'm very committed to doing. But once that's out there, there's also the enforcement side, which carries significant weight. If people are not going to listen to the guidance, or are going to design AI to intentionally discriminate, to give an employer a competitive advantage or allow them to do these bad things, that's where we should be focusing our efforts. Enforcement should really be on bad actors who are designing AI improperly or, conversely, using AI the wrong way. And that's something I'm willing to do a commissioner's charge on.
Keith (30m 15s):
You just want to find the right way to do it. So if I'm going to use that stick, it's to address the concern both of you are raising: that from a business perspective, companies can keep using these programs however they want, because enforcement seems unlikely for a whole host of reasons. That's something I'm committed to doing. I want to set the example: if I'm going to use those resources, it's on AI that is really being used the wrong way, really being used to harm people based on disability, national origin, gender, age, whatever our protected classes are. If an AI is built or being used improperly, that's where I need to be.
Chad (30m 60s):
So I'm going to flip the script and go old school on you here for a minute, because I don't think AI in itself is the base and root of the issue; as we talked about, it's human decisions. And some of the human decisions that are practically carved in stone tablets, for goodness sakes, are called job descriptions. They have requirements on them that aren't even close to necessary, right? We require a bachelor's degree for an entry-level sales position or something of that nature, right? So I think this is a much more deeply rooted issue in what we've been doing for decades. We've had these bias issues for decades.
Chad (31m 42s):
I just believe we're starting to see them at a much larger scale because of the AI, and we're starting to see these blips on the radar because AI is actually pointing them out: the bias is not a little pinpoint anymore, it's a huge bubble. So the question is, is the EEOC going to do a couple of things? First and foremost, help on the education side, helping employers understand: hey, look, your base root and foundation of bias starts here, and I'm just using the job description as an example. Do you go through those types of road shows before you actually start hammering away at: hey, okay, we told you about this, you're doing it wrong,
Chad (32m 29s):
You haven't fixed it. Now we're coming.
Keith (32m 31s):
And the penalties get more severe in those areas. But you know, there's AI now to deal with every aspect of the employment relationship. You talked about job descriptions, and I completely agree with you; this is something we could dive into for an entire hour, job descriptions alone.
Chad (32m 51s):
We'll do another one on this. I promise.
Keith (32m 53s):
It's a very important topic. All of our laws, as you know, apply to applicants as well; you don't need to be working somewhere to bring a charge of discrimination. But now there are vendors out there offering AI that looks at job applications, postings, and descriptions, and they will tell you which terms deter applicants and how to make the language more gender-neutral. They'll take a job description, scan all the words, and say, well, it's more likely that males are going to apply than females, and here's how you balance it out. So that's a way AI can be very good, right, to deal with those issues.
Keith (33m 34s):
And that's why I like tech; I'm all for tech. I really think tech can help eliminate a lot of the bias, and I think that's a really great example. But from my perspective at the EEOC, and listening to your podcast, I know you get it as well: there is so much different tech in HR alone, and it touches so many different points. I know the vast majority of the hot topics are in recruitment, but it's really so much more than that. When I look at what I want to educate on, it's where tech is being used in the entire life cycle of the employee.
Keith (34m 15s):
That starts with the job description, and then the approach to recruiting: are you going to do active recruiting, or are you going to passively try to reach people who are not actually looking for jobs, and what are you basing that on? Then, once you have the resumes, trying to screen them out; interviews done by chatbots or facial recognition; gamified assessments; the decision to hire using AI; through to the decision of what you're going to offer that employee in compensation, where AI will tell you, based on their resume, whether they're likely to accept and at what offer.
Keith (34m 58s):
And then there are some really interesting programs out there, way outside of our wheelhouse jurisdiction-wise, that can then predict what kind of employee that person is going to be. And then they're hired, and then they're there.
Chad (35m 12s):
And isn't that crazy, though? I mean, seriously, to predict what kind of employee that person's going to be, not knowing the type of manager that individual has had before, not knowing the type of environment they've had to deal with before, yet taking all that historical quote-unquote "data" and trying to predict what a human's going to do.
Joel (35m 33s):
That's why you and I will never have a full time job again, Chad.
Keith (35m 36s):
Exactly, and it's even more than that. So then you're there, and we're using AI to potentially monitor employees' chats and emails to gauge their job satisfaction, whether they're likely to leave or not. And then other programs saying, you have this employee in this group doing this job, but based on her resume, she would be so much better in sales in this part of the country. And then, what I wrote about most recently in the Chicago Tribune: AI assessing workers on more than just how many widgets they're making that day, based on every aspect of their job, with potentially automatic notifications that they got a bad performance review or they're fired.
Keith (36m 21s):
So it's really the entire life cycle for me. I can't just focus on the recruitment piece. I can't focus just on the upskilling and reskilling piece. I have to look at every single part of that, I need to talk about every part of that, and I need to give best practices on every part of that. And where it's going wrong, I need to enforce on all the different parts that are just not working.
Joel (36m 49s):
If Chad can go off script a little bit, I'm going to throw a curveball as well, but there is a bridge here, I promise. We've talked about facial recognition. We just talked about job descriptions and how to promote those. I want to pivot over to social media for a second. Pre-pandemic, Facebook went through a pretty big change to the way they allow employers to market their jobs. On Facebook, if you've never used it, you can target by age, location, education; there are so many ways you can do that. Well, from a job posting and job promotion perspective, that can be a really bad thing.
Joel (37m 31s):
They found a lot of companies were targeting, let's say, young men for their engineering jobs, and you can no longer do that. If you try to post or advertise something that looks like a job, they have pretty strict rules, believe it or not; Facebook does have strict rules on how you can market those jobs. When that happened, I was sort of expecting Twitter to face the same restrictions, and Snapchat, and then obviously TikTok. And TikTok now is getting more into marketing jobs and resumes and things like that. So I guess, broadly, what should we be expecting? What is the EEOC really taking a close look at in regards to social media?
Joel (38m 14s):
Is it resumes, be it video on TikTok? Is it promoting jobs and descriptions and how they're targeted? What is sort of in your crosshairs right now in terms of social media and job promotion?
Keith (38m 27s):
This is another great example of a new issue but old laws. The Facebook one is excellent, one that I talk about and write about, because the mindset of social media, which are largely advertising platforms, right, is to narrowly tailor your experience to what you're interested in, what you want to buy, things you like. And that gets done by your age, by your gender, potentially by your national origin. That works in the advertising context, no problem, but when it comes to people's livelihoods and their opportunities to succeed, it doesn't work.
Keith (39m 9s):
I know that was a very public example, and we know they changed it, but it certainly shows how easily scalable and dangerous this technology can potentially be. It's unavoidable that a lot of this is going to run through social media, and for advertising practices in social media now, obviously job ads can't be limited by age. But this also gets into a bigger issue, now just amplified because it's on tech: advertisements that want recent college graduates, right? Although a recent college graduate could be any age, right? You see stories of people going back to school at ninety years old and getting their bachelor's degree.
Keith (39m 53s):
Generally, what is the result of that kind of policy going to be? Who are you going to get applying?
Joel (39m 59s):
Youngins, "yutes" (youths).
Keith (40m 1s):
And your desire in doing that is not necessarily to exclude people over 40, which is who the law protects; maybe these are just entry-level jobs. But putting a limiter in there will certainly have that disparate impact, and although you didn't intend to discriminate, you're still going to be liable, because the laws we enforce don't require that intent. So where I'm getting to is the TikTok example, which has been largely reported on: employers want to advertise jobs and take applications and resumes through TikTok. There's nothing wrong with that.
Keith (40m 41s):
There's nothing illegal or unlawful about that. But at the end of the day, from the EEOC's perspective, we're going to look at the results. If it requires you to make a highly produced TikTok video on your phone, on your newest iPad or iPhone, is everyone going to have that opportunity? Is that going to limit the applications to people of a certain race or a certain gender or a certain age? How are people with disabilities going to make fancy video resumes? Is there another way to apply outside of doing that video, for somebody who is maybe technologically challenged?
Keith (41m 23s):
Right. So the EEOC is going to look at that like it would any other job application: are there limiting factors in there that are going to screen out people of a certain age, race, or gender? That's what we're going to look at. And again, it doesn't matter if these companies just want to make applying for jobs cooler, right, and that's sort of what it is, or how to get this new generation of workers to actually engage with employers and come work there by doing all those cool things. At the end of the day, the EEOC is going to ask: did that neutral policy have the effect of excluding? And I think in the TikTok case, it would largely be older workers, right?
Keith (42m 4s):
Is there a way they can also apply to the same jobs without having to do backflips and dancing in your videos, right? So again, it's no different than how job descriptions evolved: when you're saying, oh, we want people with one year of experience, right, what is the effect of that?
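For listeners wondering how "the effect of excluding" actually gets measured, one long-standing yardstick is the four-fifths rule from the EEOC's Uniform Guidelines on Employee Selection Procedures (29 CFR 1607.4(D)): a selection rate for any group that is less than four-fifths of the rate for the highest group is generally regarded as evidence of adverse impact. Here's a minimal sketch; the applicant counts are invented purely for illustration:

```python
# Back-of-the-envelope disparate impact check using the EEOC's
# "four-fifths rule" (29 CFR 1607.4(D)). All numbers are hypothetical.

def four_fifths_check(groups):
    """groups: dict of group name -> (selected, applicants).
    Flags any group whose selection rate falls below 4/5 of the
    highest group's rate, the Guidelines' rough evidence threshold."""
    rates = {g: s / a for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: (rate, rate / top < 0.8) for g, rate in rates.items()}

# Hypothetical outcome of a video-resume requirement:
outcomes = {
    "under 40": (120, 400),   # 30% advance to the next round
    "40 and over": (20, 200), # 10% advance to the next round
}
for group, (rate, flagged) in four_fifths_check(outcomes).items():
    print(f"{group}: rate={rate:.0%}, adverse impact flag={flagged}")
```

Note this is a rough screening threshold rather than a legal conclusion; in an actual investigation, statistical significance and the job-relatedness of the requirement still matter.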
Joel (42m 21s):
Never a dull moment.
Chad (42m 23s):
No. There's never enough time, and we still have much more to talk about, Keith. We want you to come back on the show. We won't talk about social media; we want to talk about wages, AI, DEI, ageism, all that other fun stuff. But until then, I want to say thanks again to Keith Sonderling, Commissioner at the US Equal Employment Opportunity Commission (EEOC). Keith, if listeners want to find out more about what you've been writing, or maybe want to connect with you, where would they do that?
Joel (42m 52s):
What's your TikTok, Keith?
Keith (42m 54s):
No, I can't do the TikTok. Maybe the three of us will do one together at one of these conferences. But I'm on LinkedIn; you can add me there, follow me there. I post a lot there as well as on Twitter. So I'm really looking forward to engaging with everyone. Joel and Chad, thank you for having me on the podcast. Like I said, there are a lot of topics to talk about, and I really appreciate your platform. I'm definitely looking forward to coming back and going over some of these specific issues, and again, to show that there's a proper way to do these, and also the risks. From my position, not only as a labor and employment lawyer but now as the regulator in this space, I want people to know there are resources out there.
Keith (43m 43s):
If you want to do it the right way, you have unlimited tools available to you. But if you want to do it the wrong way, I also have other tools available for you, which you're not going to like. So I really do appreciate this platform to talk.
Joel (43m 58s):
We're the government and we're here to help.
Chad (44m 2s):
Excellent.
Keith (44m 3s):
I said "we're here to help" first.
Joel (44m 4s):
Yes, absolutely. Absolutely. Chad, this was awesome. We out.
Chad (44m 8s):
We out.
OUTRO (44m 54s):
Thank you for listening to, what's it called? The podcast with Chad, the Cheese. Brilliant. They talk about recruiting. They talk about technology, but most of all, they talk about nothing. Just a lot of shout-outs to people you don't even know, and yet you're listening. It's incredible. And not one word about cheese. Not one: cheddar, blue, nacho, pepper jack, Swiss. So many cheeses and not one word. So weird. Anyhoo, be sure to subscribe today on iTunes, Spotify, Google Play, or wherever you listen to your podcasts. That way you won't miss an episode. And while you're at it, visit www.chadcheese.com. Just don't expect to find any recipes for grilled cheese. It's so weird. We out.