Chad and Cheese

Somebody's Watching Me!?!


Think that naughty Dave Chappelle tweet from 6 years ago that you Liked can't come back to bite you in the ass?

Well, think again. Fama is a company working hard to reveal social media behavior to employers making hiring decisions. Yup, background checks look a lot different today than they did 10 years ago, and the boys have some hard questions for CEO and Founder Ben Mones.

NEXXT keeps fueling the Chad & Cheese hits. Check out how NEXXT can help you more effectively target the right candidates today!

PODCAST TRANSCRIPTION sponsored by:

James Ellis:

Employer brand isn't something you sprinkle on your recruiting like magic fairy pixie dust to kind of make it better. It is both a craft and a calling. If that's the kind of work you want to do with your employer brand, come join me, James Ellis, at The Talent Cast.

Announcer:

Hide your kids. Lock the doors. You're listening to HR's most dangerous podcast. Chad Sowash and Joel Cheesman are here to punch the recruiting industry right where it hurts. Complete with breaking news, brash opinion, and loads of snark. Buckle up, boys and girls. It's time for the Chad and Cheese Podcast.

Joel:

It's that time again. What's up folks? You're listening to the Chad and Cheese podcast. I'm your co-host, Joel Cheesman.

Chad:

And I'm Chad “Just-Got-Flagged” Sowash.

Joel:

Oh shit. Red flags everywhere on the episode. We are happy to introduce Ben "He Made Her" Mones from Fama.io. Ben is CEO and founder of the company. Ben, welcome to HR's most dangerous podcast.

Sound Effect:

Hell yeah.

Ben:

Thank you Chad and Joel. Pleasure to be here. Appreciate you guys bringing me on.

Joel:

Right on. You are a brave man. Good for you. Chad, get him!

Chad:

You heard the podcast where we talked about Fama. Joe Rogan, which is pretty awesome because a few people listen to his podcast, he called out...

Ben:

Just a couple, all right.

Chad:

Yeah, he called out Fama, he called out an actual situation that happened and a person was sharing some of the Fama reports that their company sent to them on Twitter. I mean, that was a triggering event.

Joel:

300 pages. 300 pages.

Chad:

Yeah. Multiple tweets per page, all that other fun stuff. But anyway, I mean, from our standpoint to be able to hear about this, I mean this obviously crosses over into what we talk about all the time. We thought, man, we definitely want to talk about this. There's no question. We had to have it as a topic, which is why we did. And then you agreed to come on the show. So we really appreciate you doing that because in most cases, CEOs are like, yeah, no, I don't want any of that.

Ben:

Well, I appreciate it. You guys have got a big audience of our users and our customers, and I don't think I have to come in here and change minds. But telling them what we do and why we do it is a great opportunity for Fama, and anything we can do to get the word out about what we're doing is a great chance for us. So yeah. Thanks for having me.

Chad:

Tell us about you, and then tell us a little bit about Fama. Why did you start it? Why does it exist? Tell us about that.

Joel:

Were you not hugged enough as a child, Ben?

Ben:

No, I had a very welcoming home. It was nice growing up, but had family issues certainly. No, it wasn't my family issues that started the company. I've been in software ever since really getting out of school, different companies. Always enjoyed working in technology, helping big companies solve tough problems using tech.

Ben:

And at one of my companies early on, we hired a guy who looked great on paper; his resume checked out. He was a VP of sales, a board-referred guy. There were like 40 reps that we ran through for this guy. He had like 110% quota attainment eight quarters in a row. He came on board, and six weeks in, he did something really bad. He sexually assaulted one of our employees, actually. So it was a terrible thing to happen. It caused significant fallout for the business.

Ben:

And what we found after the fact, in the postmortem that we did, was all of this misogynistic and pejorative content this guy had posted online. Had we seen that, we never would have brought the guy on board.

Ben:

So people are always like, how did you start this company? You're a sales guy from startups and you did a couple of executive roles at enterprise SaaS companies. You had no HR experience coming into it. But it was really experiencing the exact sort of pain that we're solving for.

Ben:

So in 2015, I talked to a couple of folks I knew in HR and asked them if they were looking at social media, if they were looking at news and web content, kind of Googling someone or Facebooking someone before bringing them on board. And it turned out a lot of companies were, but with the FCRA and all the litigation around protected classes and the EEOC, for a lot of them it was sort of this unspoken thing that was happening in HR and talent acquisition, where recruiters were kind of absentmindedly Googling people or Facebooking people and seeing stuff they really shouldn't see when trying to find that kind of needle in the haystack.

Ben:

So we saw the problem as being big enough that we wanted to build some technology to automate the most manual tasks of that process. So that's why we built Fama. People always think we're, like, scoring people or giving a thumbs up or a thumbs down, or that we're making a recommendation on a candidate, because I think with the history of data abuse, that's where our minds kind of naturally drift to.

Ben:

But what the software does is it identifies certain types of content online. So a business will really define and say, here are the behaviors online that we'd want to know about. It could be harassment, threats, bigotry, violence, et cetera. Language and alcohol too, as you saw on the Joe Rogan podcast. And just like a background check, we look at a person's complete digital identity; it's not just social media. We look at non-courthouse litigation, so stuff like LexisNexis, business journals, social media, news, and web content.

Ben:

If something falls into one of those categories that a company defined as relevant to their preemployment process, we add it to a report and we have a web-based dashboard where we do that and also a PDF copy of it. So really it's about customers defining "these are the behaviors that we care about, we want to know if these exist in a person's digital background."

Ben:

It operates just like a background check: the candidate signs a consent form, and they get a chance to go through the pre-adverse action process if the employer wants to take action on that report, and to contest the results. And yeah, the employer makes the determination, so we're automating something that's been done manually for a long period of time.

Joel:

I assume they'll do this while someone's an employee as well, right? They can sort of sporadically, throughout their tenure, do these kinds of checks, and the employee is signing off on that, right?

Ben:

Mostly no. There's a lot of talk around continuous monitoring in the background screening industry; it's something that I think was one of SHRM's hot trends for 2019. Just from what I've seen, there's a lot of interest but not a lot of adoption. Just like with the background screen in the pre-employment phase, I think the business has to have a clearly defined business need to do it. I've seen some government contractors who will do it for national security reasons. But by and large, most companies are doing this from a pre-employment standpoint. When I say most companies, I mean like 98% are doing this for pre-employment.

Joel:

Okay, so they're basically signing off on, yes, a background check, but are they specifically signing off on sort of a social media review or online presence? And if not, how are you verifying that this Twitter account is this person or this news item is actually that person if they're not actually giving you their Twitter handle as part of the approval process?

Ben:

Yeah, basically with the disclosure and authorization process, just like a background check where it says we're going to look at credit data, criminal data, companies will amend that disclosure and authorization and include social media, news, and web content.

Ben:

What most employers do varies: some will ask for the social media handles, and many will not. You're not allowed to ask for the password, of course. But some companies will ask for the social handles. On our end, though, we actually employ kind of a mix of both automated and human intelligence. And by human intelligence, I mean literally dedicated analysts that sit in our office in LA and confirm that the social media profiles or news and web articles or litigation belong to the subject in question. So it's a mix of both automation and having a real person confirm that these articles or profiles belong to the subject.

Joel:

So, if I, five years ago, hated Chad Sowash and started up a Twitter handle like "chadsowashisamazing" or something and put out some tweets about everything offensive possible, how would you guys be able to tell that that's not really Chad Sowash?

Ben:

You mean if you created a fake Chad Sowash account and started posting under his acronym or his name?

Joel:

Correct.

Ben:

That is a disclaimer that we offer, because we get that question. What we have seen is people who will leave their social logged in at the library or at an Apple store or something like that, that's the more typical question that comes up.

Ben:

And you know, we do offer that kind of end-to-end disclaimer saying, hey, this person could be the victim of computer hacking, which is why it's so important, I think, that every time one of these reports is run and action is going to be taken on it, the candidate gets a chance to review the results and contest and explain. But we do, from an automated and human standpoint, pursue maximum possible accuracy; under the FCRA you have to. That's one of the core tenets of it.

Ben:

So we have kind of a multi-step process on our end to try and determine if that was the case: if the person was a victim of computer hacking, if it was, like, a new social media profile that was created, or if the friend network only has two friends or something like that.

Joel:

Right.

Ben:

There are certain backend technology things that you can do to look at that, but it's also just highly trained experts that know what to look for and do this day in and day out.

Joel:

Gotcha. And what sources are you looking at and are you getting that information via like APIs or scraping?

Ben:

Yeah, so it's social media, news, web content. You can kind of think of it as the publicly available web. And obviously there've been a lot of changes to APIs over the past couple of years, so we've had to kind of change our approach as those API restrictions have evolved.

Joel:

Okay. So I'm going to name some sites and you tell me if you monitor them or not.

Ben:

Okay. Yeah, so social media, Twitter, Facebook, Instagram.

Joel:

Yes or no is fine. Facebook?

Ben:

Yes.

Joel:

LinkedIn.

Ben:

No, we have LinkedIn profiles but we don't screen through LinkedIn content.

Joel:

Snapchat?

Ben:

Nope.

Joel:

TikTok?

Ben:

Nope.

Joel:

Instagram.

Ben:

Yes.

Joel:

Reddit?

Ben:

Yes.

Joel:

Okay. Thank you.

Chad:

Ben, on the website it says "the smartest way to screen toxic work behavior" and also talks about problematic behavior among potential hires and current employees. So who defines what toxic or problematic behavior is?

Ben:

Our clients will define it... They essentially have a pick list of certain types of behaviors that they want to look at and zero in on in every pre-employment screen that they do. We use the words toxic and problematic as kind of a single, uniting category or term for things like violence, threats against others, bigotry, harassment, et cetera.

Ben:

So that umbrella definition is something that we've found general agreement on with our clients as we've developed the solution; that's the rough overall description of the category. But clients will ultimately define, in every pre-employment screen that they do, whether or not a certain type of behavior is toxic for their organization.

Ben:

You might be a tech company, for example, that really only cares about the worst of the worst. You might be a much more conservative organization that wants to take a wider approach. But ultimately it's about defining the business reason for screening for this sort of behavior online. The company has to be able to draw a straight line and say, this is why we're looking at this information and this is how we justify it, in the same way they justify looking at certain types of databases.

Ben:

We use the example of, like, a DUI, right? If you have a DUI and you're in a role where you're never driving, you're never behind the wheel, can an employer really use that in a background screen and take action on it? Many would argue no.

Chad:

Yeah, but Ben, I mean, this is something that is contextual and we all know that AI is not where we wish it would be, where it could actually understand the context of many of the conversations that are actually happening, especially when you're talking about broken conversations, right?

Ben:

Right.

Chad:

And if somebody is liking a tweet that your algorithm really doesn't understand yet, because it hasn't worked through the understanding and the auditing to ensure that it knows what the hell is going on out there, there are flags that can be pushed out around bigotry and sexism and obviously alcohol and things like that that really don't make sense. Right?

Ben:

Yeah, sure.

Chad:

So I guess from our standpoint, because we work with AI companies all of the fucking time, we know that the context piece is very, very hard, especially when you're talking about a social environment and unstructured data, to be able to actually flag people.

Chad:

So I mean, from my standpoint it's really hard to say, yeah, we can just go after toxic behavior, problematic behavior. Well you don't know what that really is in the context of the conversation. Do you?

Ben:

Yeah, it's a great point. Look, I think when we were building the technology, that's why we didn't want to put the label on the person or on the individual, and instead winnow down the huge amounts of online information that employers were, in many cases, looking at anyway to the sorts of categories that they had defined as specifically relevant.

Ben:

And we are absolutely always trying to improve our AI and take that step of making those content models as accurate as possible, meaning: what is this piece of content about? What is this text about? Even people, I think, disagree on what something like bigotry is, or what a violent threat is when it's made online. So the context is difficult to ascertain.

Ben:

But I think our approach has been to label the content itself with this sort of category, and then provide that to both the employer and the candidate to have a conversation about what's there.

Ben:

And in many cases, you're right. If it is inaccurate information, the employer then will look at it and say, you know what? I see why the AI thought that this was bigotry or something like that. But at the end of the day, we're at a point where this isn't the type of bigotry that we're concerned about.

Ben:

So it is that very, I think, healthy conversation between the candidate and the employer that we see that allows us to get around it. But I absolutely acknowledge what you're saying, and agree that we and many others have a long way to go toward being accurate, a hundred percent accurate, when we're organizing this content online.

Chad:

We'll get back to the interview in a minute, but first we have a question for Andy Katz, COO of Nexxt.

Joel:

Andy, for clients that are sort of married to email and a little hesitant to text messaging, what would you tell them?

Andy:

That text messaging is part of any integrated strategy. There's no one-size-fits-all for anybody. Job seekers opt into different forms of communication, whether it's with Nexxt or anybody else. They might want to receive email, they might want to receive SMS, they might want to receive retargeting on their desktops. So it's one piece of an overall puzzle.

Chad:

For more information, go to hiring.nexxt.com. Remember that's Nexxt with the double X, not the triple X. Hiring.nexxt.com

Chad:

Now, if I was a candidate and the company sent me 350 pages and I was in the interview process, I would call the hiring manager back and say, "you can go fuck yourself." Because there's no way that I want any company to big brother my ass and think that I don't even have my individual right to be able to have conversations.

Chad:

Now, the entire package, to be quite frank, could all be good. But overall, don't you think this is something that would repel great talent from an organization like that?

Ben:

I think ultimately what the employer cares about and what the employer is screening for is the determinant of the culture that I want to work for. I agree, even at our company, right? We use our own technology and we don't screen for things like profanity and alcohol because it's not something that we think is relevant to building a healthy and productive culture here or even protecting our corporate brand in the marketplace.

Ben:

So, I think if the employer and the candidate have a clear understanding around the channel of communication, what we're trying to create, what we're attempting to do when we do this sort of screening and why, I think that's when certainly the candidate can rally around that concept and rally around that idea.

Ben:

But certainly if an employer is screening for things that are misaligned with what you as a candidate care about, if you say, well, you care about using profanity, well, shit, right? What am I going to do about it? Right.

Ben:

That's your call as a candidate. And I agree with you. It's something where we have this weird kind of middle ground of being the provider of data in some ways, but also implicitly rendering judgment, I think, in how we flag certain types of behaviors, especially from a user experience standpoint and in how we organize certain types of content.

Ben:

It's funny on the Joe Rogan podcast, he talked about the bad and good labels within that report. And that was something that we had heard from other clients too. They were like, look, this is rendering a judgment in some ways. I know you're just organizing content, but to say that profanity is implicitly bad, like that's not bad for our company. Right? And that's not what we want to communicate from a candidate experience standpoint.

Ben:

So those were some changes we made, actually. We're always learning lessons when we're trying to do something new and create tech that's never really been around before, because this is not something where we had a precedent. There wasn't an incumbent that we were replacing that was using AI to inform the background screening process by looking at publicly available online information.

Joel:

How does a company typically use the information? I think, from the Joe Rogan segment, it sounded like this candidate got a FedEx in the mail, it was 300 pages of social media activity that they thought was negative.

Ben:

Right.

Joel:

I mean, I assume that the company sent that to the candidate. I am sure that you did not. And I'm sure that most companies would do it much more diplomatically than that, and they would sit down with the person and say, we have some concerns. But I also wonder: do companies look at these reports and say, no, we're not going to hire that guy or gal, and then just say, due to the background check information, we're not going to move forward with the hire? Do they have to provide the information that they received from your site? How do companies typically engage in this way?

Ben:

It is atypical; we had never heard of a company printing out a report like that and sending it to a candidate, and obviously, being in California and ecologically minded, that was not something that was representative of what we do.

Ben:

But yeah, no, it's totally cool. So companies are legally required, just like with background screening, to send a copy of the report if they want to take action on it. In California, as of Jan. 1, 2020, it shifted: regardless of whether the company is taking action on a report, they're required to send a full copy of the background screen. It's kind of this new CCPA and FCRA legislation that's come out, where some companies are just sending the background check regardless, even if they're not taking action on it. And I was surprised to see that someone would print that out, because that is not what we do.

Ben:

But look, I mean, there are going to be different ways that people want to communicate with their candidates. But for most employers, yes, in how they're using it, it's not always a binary: are we going to hire this person or are we not going to move forward with them? A lot of times it can be a conversation, saying, "hey, we saw this online, just want to let you know this is not reflective of our culture, our brand, what we're trying to do here."

Ben:

So it's not always this question of, is this person going to work with us or not, and if we see the Fama report, we're just not going to hire that person. There is careful consideration, and there are adjudication matrices that companies develop.

Chad:

I don't know. I don't know if that's the case, Ben. I've been working with Fortune 500 companies for way too long to think that they have a great process in place. And that being said, one of the things it says on the website, which I think is great to an extent, is "our technology helps businesses identify thousands of job-relevant behaviors, such as racism or harassment, without exposing hiring managers to unnecessary risk and/or manual work."

Chad:

Cutting the manual work, I think, is awesome, because we should use technology to knock that down. But from the standpoint of risk, much like you said before, they have this checklist. If they check everything off when it comes to language and alcohol and all these things, they really start to open themselves up to more risk if they use that against hiring.

Ben:

Right.

Chad:

But they're opening themselves up to more risk because they have to defend the information that they have in front of them. Right?

Ben:

Right. I mean, that's the challenge. I think the hardest part of adoption for us and the biggest learning curve was less about the technology and more about the adjudicative process that a company sets up that supports active deployment.

Ben:

You have this question of: where do we draw the line as a business? Because look, I hear you on many HR leaders not using a ton of process. But with Fama, because it's so net new, and because of that implication of saying, hey, we don't want to end up on the Joe Rogan podcast or something like that and have that brand blowback...

Ben:

Companies do take a very careful approach to how they want to draw that line, where they draw the line, and also who's involved in the construction of that conversation. It's employee relations, it's legal, it's HR, it's talent acquisition. On one side is the force of saying, okay, we don't want this harassing, bigoted behavior in the workforce. These are things that we've either been burned by internally, that have been a public news story about our employees or executives acting in a very public way that's detrimental to our brand value, or maybe it never got out, but we just know empirically what the blowback is internally for missing out on this information.

Ben:

And it's that discussion of the opposing forces, and I think frankly it's driven by consumers and employees who have higher expectations of the companies that they buy from and the companies that they work for. So it is a really careful conversation, but you have to, as you just said, justify the business reason why you're going to do this. And if you can't justify it, then... Sorry, go ahead.

Chad:

Yeah, because if you can't justify it and you can't defend it, there is a huge opportunity for discrimination.

Ben:

Sure.

Chad:

So I have 20 years in the military, right? So whether it's active duty or the Army Reserve, I interact and I speak, in different contexts, with individuals outside of the boardroom differently than I do inside the boardroom. Especially if we have this brotherhood, this military connection.

Ben:

Right.

Chad:

And from that standpoint, for companies who are trying to hire veterans, that could possibly be seen by this algorithm as negative. Not to mention individuals who are from different socioeconomic backgrounds, right? These are the things that I think we have to really clamp down on, especially after we saw Amazon shut down an algorithm because it was proven that it had started to teach itself more bias.

Ben:

Yup.

Chad:

So the thing that is incredibly important to me is that, yes, I believe technology can help us through many of these tasks. But how are you guys ensuring that the audit trail is looked at very closely? It's incredibly important, because discrimination can pop up.

Ben:

Absolutely. First off, totally agree with what you're saying. And this is something that we talk about a lot internally. I mean, it's this question of adverse impact on certain groups. To extend your metaphor even further, are there certain people that tend to post more online and then might have a greater chance of being flagged by something like Fama just because of the fact that they're posting more on the internet?

Ben:

And there are demographics, obviously, that post more online than others, right? So does that mean that they have less of a fair shot at getting a job, or a greater chance of an employer taking action, right? So there are algorithmic techniques and technology techniques that you can use to try and identify adverse impact and run essentially blind studies on your own data across a wide cohort of people.

Ben:

Now, the right way to do that is through something called a construct validation study, which essentially allows you to look at whether or not your algorithm is creating an adverse impact on a certain group. We've done that internally.

Ben:

And while we've taken those steps, the real way to do that would be to have companies actually supply, like, full gender, age, and race information on the candidates, which they would never do, right? Companies would never do that, nor would we want to handle that data on behalf of our clients.

Ben:

So this question of adverse impact is one that is difficult, but to your point, there's also the technology question of how we find the middle ground. And I think if we were talking about a technology like Waze, if there were a fully automated "here's how you get there, here's where you take the left, here's how you avoid the traffic" to find the candidate you're looking for, and here is your answer, I think we'd expose ourselves to a lot more potential for that algorithmic bias to creep in. And I don't know the full details of Amazon's story, but I think there was a resume parser, right, that was recommending people for certain jobs. Is that the Amazon reference you're speaking to?

Chad:

Yeah, a little bit more than that, but yeah. Yeah. It was using past behavior. So the algorithm was learning off of past behaviors and it learned how to become biased.

Ben:

Right. Right. So I think with us, the thing that we really tried to zero in on is how we can help HR and talent acquisition do their jobs more efficiently and do their jobs better, while letting their intuition around the sorts of things that you're talking about here, and their experience in making sure that they're not taking an adverse-impact approach to the way that they hire, really shine and thrive, as opposed to having the algorithm do all the work.

Ben:

We call it kind of human-augmented intelligence instead of artificial intelligence, which is basically to try and bring a user to the precipice of action. Not to tell them what to do, but to try and reduce all the noise, which allows their intuition and their ability to take hold and drive that final decision, as opposed to a computer doing it for you.

Ben:

So it's not like consumer tech. I think enterprise AI is a totally different ballgame and something that requires people much more than what we use our phones for on a day-in/day-out basis.

Joel:

All right, Ben. Thanks for your time. And for those of our listeners who want to learn more about you and your company, where should they go?

Ben:

Thanks a lot, guys. Yeah, fama.io. F-A-M-A dot io is our website. And yeah, we'd love to hear from anyone that's interested in what we're doing or has further questions. And yeah, thanks a lot, guys, for having me on. It means a lot, Chad and Joel, and I appreciate it. So thanks again.

Chad:

Thank you, man.

Joel:

Word up.

Chad:

We out.

Joel:

We out.

Walken:

Thank you for listening to, what's it called? A podcast. The Chad, the Cheese, brilliant. They talk about recruiting. They talk about technology. But most of all they talk about nothing. Just a lot of shout outs of people you don't even know, and yet you're listening. It's incredible. And not one word about cheese, not one. Cheddar, blue, nacho, pepper Jack, Swiss. There's so many cheeses and not one word. So weird. Anyhoo, be sure to subscribe today on iTunes, Spotify, Google Play, or wherever you listen to your podcasts. That way you won't miss an episode. And while you're at it, visit www.chadcheese.com. Just don't expect to find any recipes for grilled cheese. It's so weird. We out.
