Chad and Cheese

The Dark Side of A.I.


Artificial intelligence (#AI) is running amok! Someone has to police this out-of-control state of technology. Enter Miranda Bogen, a Senior Policy Analyst who focuses on the social and policy implications of machine learning and artificial intelligence, and the effect of technology platforms on civil and human rights. Plus, she really classes up the joint.

Enjoy this Talroo exclusive.

PODCAST TRANSCRIPTION sponsored by:

Joel: So it's totally data-driven talent attraction, which means the Talroo platform enables recruiters to reach the right talent at the right time, and at the right price.

Chad: Guess what the best part is?

Joel: Let me take a shot here, you only pay for the candidates Talroo delivers?

Chad: Holy shit, okay, so you've heard this before. So if you're out there listening, in podcast land, and you are attracting the wrong candidates, and we know you are, or you feel like you're in a recruiting hamster wheel and there's just nowhere to go, right, you can go to talroo.com/attract, again, that's talroo.com/attract, and learn how Talroo can get you better candidates for less cash.

Joel: Or, just go to chadcheese.com and click on the Talroo logo. I'm all about the simple.

Chad: You are a simple man.

Announcer: Hide your kids, lock the doors, you're listening to HR's most dangerous podcast. Chad Sowash and Joel Cheesman are here to punch the recruiting industry right where it hurts, complete with breaking news, brash opinion, and loads of snark. Buckle up boys and girls, it's time for the Chad and Cheese Podcast.

Chad: Oh yeah.

Joel: Friday, Friday, Friday. Feeling spunky. TGIF.

Chad: More than spunk happening today, my friend, today, today it's going to be incredibly spunky. We're going to be talking about AI, AI, AI, and some policy with Miranda Bogen, who is a Senior Policy Analyst at upturn.org.

Joel: The trend of people much smarter than us continues as guests on our show.

Chad: That's not hard to do by the way. She's also co-chair of the Fair, Transparent, and Accountable AI Expert Group. Oh my God.

Joel: Oh my God, we're in trouble.

Chad: So Miranda, before you jump into Upturn and this crazy Expert Group, just so everybody knows out there, I heard Miranda on a podcast, I don't know, about a year or so ago, and I said, "Yeah, she sounds pretty smart." Then I saw her on stage at Jobg8 in Denver, and I'm like, "The listeners need to hear from Miranda." So Miranda?

Joel: I'm glad you said Jobg8 and not some other stage.

Chad: No, you weren't on the stage Joel. So Miranda, welcome to HR's Most Dangerous Podcast, what did I miss in the intro? Fill in some gaps for the listeners.

Miranda: Well, thanks for having me first of all. Yeah, you got some of the high points recently, but just to give a little bit of background. I work at Upturn, which is a non-profit organization based in Washington DC, and our mission is to promote equity and justice in the design, governance, and use of digital technology.

Chad: Wow.

Miranda: So that kind of means everything, right?

Joel: Yes, damn.

Miranda: Hiring tech, recruitment tech is just one part of what I do, but we also study things like Facebook, social media platforms, police technology, credit scoring, anything where technology is intersecting with people's lives, and especially when it intersects with things like civil rights.

Chad: Yeah.

Joel: Yeah, and even though it has nothing to do with the content of our show, the "hidden world of payday loans," which is a blog post on your site, I'd love to dig into that at some point. Maybe as a bonus at the end.

Chad: Yeah, maybe.

Joel: Hidden world of payday loans.

Chad: So Upturn, on the website it says, "We drive policy outcomes, spark debate through reports, scholarly articles, regulatory comments, direct advocacy," I mean, you do a lot of stuff. Conferences, workshops, so on and so forth. Are you guys a lobby group?

Miranda: No, we're a research and advocacy group, we're also only seven people.

Chad: Wow.

Miranda: So we do do a lot, but we really kind of come in where we're most needed. And oftentimes that's just explaining how technology works to other advocacy groups, to policy makers, to journalists, so that when they're talking about it they have a sense of what's really happening, and they can make sure that the policies they're thinking about are actually going to solve some of the problems that we're seeing.

Chad: How did it start?

Miranda: So we actually started as a consulting group. My two bosses founded it about eight years ago, out of grad school, to help, again, sort of other advocacy groups, foundations, philanthropies really get a handle around technology, which eight years ago was still newer than it feels now, and now it's really ubiquitous. But at the time, it was like, "What is going on here, how is technology changing civil rights? How is it changing policing? How is it changing how people have access to opportunity?" And people needed help, so they founded this consulting firm. I joined when we were still a consulting firm, but we decided to switch over to become a non-profit about two years ago, because we wanted to be mission driven. We wanted to go after issues that we were seeing, and kind of scope out some of these gnarly policy issues before other folks were maybe thinking about them. But then work together with other national and local civil rights groups, and global policy groups, to help direct policy efforts in the right places.

Joel: So here's a softball for you, give us your definition of AI.

Miranda: That's a-

Joel: Aha.

Miranda: It's a trick softball question. If you read any paper or article about AI, the very first line is always like, "There's no definition of AI." When we talk about AI... We actually don't use AI to frame our work, because we don't find it a super helpful frame, because it means everything and nothing. Often what we're really talking about is machine learning, uses of data,

Chad: Algorithms.

Miranda: ... data analysis, and when we talk about AI, we're mostly talking about just finding patterns in data. So that's what I think of when I think about AI.

Chad: Well, that being said, one of the things that really caught me, and I think it caught the entire audience in Denver, is when you talked about Netflix and their algorithm, and how the algorithm picked thumbnail pictures to be able to attract viewers to some of the movies. Can you talk a little bit about that research, or at least some of the information that you provided on stage about Netflix and how that worked?

Miranda: Yeah, this was a crazy story. It wasn't something we were working on directly, but I was scrolling through Twitter one day, and saw that people were talking about this situation where a woman was a blogger, or a podcaster, I can't remember. She was noticing that... She was scrolling through Netflix, now this woman was black, and she was scrolling through Netflix, and she was seeing that the little thumbnails that she was being shown for the movies, were featuring actors from the movies who were also black, but they weren't the main actors in those movies. And she was like, "What's up, I tried to watch these movies, and the characters that I was shown in the thumbnails barely have any lines at all? Why am I being misled?"

Chad: Wow.

Miranda: And so she started asking around, asking whether other people were seeing the same thing, and she heard that they were. Other folks who were black were seeing these supporting actors who maybe didn't play a big role, but they were being featured in the thumbnails, while some white users were saying, "No, I'm seeing the main actors." And that was kind of crazy, because Netflix, in follow-up articles that journalists were writing, made a statement saying, "We don't collect race. That's not something we're using to personalize the experience."

Chad: Wait a minute, wait a minute, wait a minute, wait a minute. How in the hell was this happening then?

Miranda: Yeah, so it's really surprising. So many times you hear, "We're not collecting any sensitive data, we're just giving people what they want." And so that was really interesting to me, so I wanted to figure out what was going on. So it turned out that a year or so ago, Netflix introduced this new feature, where it would dynamically pull images from the movies and TV shows it was showing, it would kind of pick out compelling images, and it would show them to people. And it would show you the image that it thought you were most likely to click on to watch that movie. So if you like romantic comedies, then if a movie had a scene of sort of two romantic leads, it might show you that. But if you like just straight up comedy, then it might show you an actor who's also a comedian, even if that's the same movie. So different people would be seeing different images in these previews, even if it was for the same movie.

Miranda: So what was happening here in this case was, Netflix was predicting that the black users were maybe more likely to click on these movies or shows, when they were shown these black supporting actors, even if they weren't main characters in that movie, and it was just learning that through behavior. It wasn't saying like, "Oh look, this user is... has these demographics, therefore, show them this image." It was all super dynamic, all just learning from people's clicks, and their online behavior. And the thing about AI, and about when you have all this data at your disposal, is you'll end up finding patterns that reflect people's personal characteristics, even if you don't collect them explicitly, or put any sort of label on them.

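To make the mechanics concrete, here's a minimal sketch of the kind of behavior-driven artwork selection Miranda is describing. The users, titles, and click data are hypothetical, and this is not Netflix's actual system; the point is that a model that only ever sees clicks and viewing history, with no demographic field anywhere, can still end up sorting people along lines that track demographics.

```python
# Hypothetical users and titles; not Netflix's actual system.
# The platform only logs behavior -- what people watched and what artwork
# they clicked -- yet the artwork it picks can still split along group lines.

# What each user has watched before (behavior only; no demographic fields).
watch_history = {
    "u1": {"title_a", "title_b"}, "u2": {"title_a", "title_c"},
    "u3": {"title_x", "title_y"}, "u4": {"title_x", "title_z"},
}

# Logged artwork impressions for one movie: (user, thumbnail_variant, clicked).
click_log = [
    ("u1", "supporting_actor", 1), ("u1", "lead_actor", 0),
    ("u3", "lead_actor", 1), ("u3", "supporting_actor", 0),
]

def similarity(a, b):
    """Jaccard overlap of viewing histories: 'people who behave like you'."""
    inter = len(watch_history[a] & watch_history[b])
    union = len(watch_history[a] | watch_history[b]) or 1
    return inter / union

def predicted_click(user, variant):
    """Weight other users' clicks on this artwork by how similarly they behave."""
    num = den = 0.0
    for other, v, clicked in click_log:
        if v == variant and other != user:
            weight = similarity(user, other)
            num += weight * clicked
            den += weight
    return num / den if den else 0.0

def pick_thumbnail(user, variants):
    """Show whichever artwork this user is predicted most likely to click."""
    return max(variants, key=lambda v: predicted_click(user, v))

# Two users the movie has never been pitched to get different artwork,
# purely because of whose viewing behavior theirs resembles.
for user in ("u2", "u4"):
    print(user, "->", pick_thumbnail(user, ["lead_actor", "supporting_actor"]))
```
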
Joel: So Miranda, does your organization take a stance on these issues, or do you just present them as information? Like, are you saying AI is bad because of this, or are you just saying, "Hey, we've studied this and here are our findings?"

Miranda: I don't think we have a position on what Netflix did per se, but the issues I think, and why I was talking at Jobg8, and why I've been talking on these other podcasts is, that same thing is happening in areas like recruiting, because the technology that's underneath Netflix, is the same technology that's underneath any website that says, "We're going to recommend you things that you like," or, "We're going to find you the best people." They're all just called recommender systems, and if we saw that Netflix was working in this way, I was like, "Oh, man, this is a problem. This is probably happening on these other sites that are much more important to how people are finding jobs and livelihood."

Joel: So you do take a position that this is a bad thing?

Miranda: It's bad if we don't know it's happening, and it's going rogue, and it's ending up directing... it's ending up having the same effect as traditional discrimination in the job market, which is what our fear is.

Joel: Got you. Well, let's jump into Facebook. I know that you are well aware, and our listeners are too, that Facebook was presenting targeted advertising for jobs in relation to age, sex, education, all kinds of different ways that Facebook targets. Talk about that, and then talk about sort of what your feelings are with Facebook's solution to this, do you feel like it's a good solution, a competent solution? Or, do you feel like it's more of putting lipstick on a pig?

Miranda: Yeah, so this is something I've been working on for a couple of years now, Facebook ad targeting, ad discrimination. We were involved in some of the earliest conversations of civil rights groups that kind of pointed out the fact that advertisers could target ads based on gender, age, or what they called ethnic or multicultural affinity at the time, which was sort of a proxy for race. And then there were a bunch of lawsuits that were filed against Facebook, and part of the... all those lawsuits settled. And what Facebook agreed to, for job ads but also for housing ads and credit ads, was to take out some of those targeting categories to say, "Advertisers, you no longer have the tools easily at your disposal to exclude people from seeing opportunities. Or you can no longer..."-

Chad: It's discriminating.

Miranda: Yeah, "you can no longer discriminate," right? That's certainly something we support. But, when we were involved in these conversations, we supported all the efforts to make sure that advertisers themselves weren't discriminating, but Facebook is kind of like Netflix, it wants to show people what they "want to see." Ads that they will like. And so, we started getting into this. We started to do some research and say, "What if an advertiser didn't target an ad at all, or they just targeted it to anyone in the US?" Maybe it's a job, they don't really care who comes to work for them, they just wanted to get it shown to as many people as possible. I know that's not a realistic situation, but just hypothetically.

Miranda: So we ran a couple of job ads, and we didn't target them at all, and we used Facebook's own tools to see who they ended up getting shown to by gender, and by age. And for example, we ran a job ad for a nanny position, we didn't target it at all, and it ended up being shown to over 90% women, just because... Yeah. So the same thing was happening, and we didn't target it to women, but Facebook was making this prediction that women would be more interested in that type of job. And maybe that was true, maybe women were more likely to click on that job ad. Most childcare providers are women, but is that any different from an advertiser saying, only show this to women, just because the platform is using AI to predict who's most likely to click?

Chad: Nope.

Miranda: And so that really floored us, and made us think that the remedy for all these lawsuits, and just taking away the targeting categories wasn't really solving the problem. It's certainly preventing bad actors, but there's more to the story here. And so there's a ton of work to do in thinking about what do we do when it's the algorithm that's biased here, and we can't even see it most of the time.

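Here's a rough sketch of the delivery-optimization effect Upturn's test ads surfaced. The numbers and field names are made up, and the real platform predicts engagement in a far more elaborate way; the point is just that ranking an untargeted ad's audience by predicted interest can skew delivery by gender all on its own.

```python
import random

random.seed(0)

# Hypothetical audience. The platform knows gender for its own reporting tools,
# but the advertiser never targets on it; "interest" stands in for whatever
# engagement signal the real delivery model predicts from past clicks.
users = (
    [{"gender": "woman", "interest": random.betavariate(5, 2)} for _ in range(500)]
    + [{"gender": "man", "interest": random.betavariate(2, 5)} for _ in range(500)]
)

# Delivery optimization: the advertiser targeted "everyone", but the platform
# still ranks people by predicted interest and only shows the ad to the top slice.
shown = sorted(users, key=lambda u: u["interest"], reverse=True)[:200]

women_share = sum(u["gender"] == "woman" for u in shown) / len(shown)
print(f"Nanny ad delivered to {women_share:.0%} women, with zero gender targeting")
```
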
Chad: Well, in the research also you talked about cashier positions, the audience was 85% women, and for taxi companies, 75% black. So here's the thing from my standpoint, I think that Facebook neutering their jobs platform was a huge mistake, first and foremost for companies who do want to target individuals with disabilities, or different segments of the population. They should be able to do that. The thing is, we have government agencies that enforce whether you're doing good or bad, right? EEOC, OFCCP, that's what they're there for, right?

Miranda: Exactly.

Chad: Not taking away tools. So if you have a tool that is actually helping you get more individuals with disabilities, or female engineers through the door, if they want to call that discriminating against the white man, well, then that's great, but that's for the OFCCP and the EEOC to handle, not Facebook.

Miranda: I mean we think... So the technology platforms do have a role to play. They can make some important policy decisions, they can prevent some bad things from happening, but I think in this case it's one where there is the whole infrastructure of regulation. There are employers out there who do want to do the right thing, who want to reach out. There are employers out there who are required to do affirmative outreach to underrepresented groups, and so now on Facebook with those categories getting taken away, it's going to be harder for them to do that. And moreover, it's only solving that one case where an employer or an advertiser is choosing to do a bad thing, but we still could have the same effect, right?

Miranda: So we still have a lot of problems here. And it's really tricky, because one of the problems is that, there's not really a good way for a regulator to see how an advertiser is targeting on Facebook. That's not something that's public. We've been pushing for that. We think that that should be something that the regulators are able to look into if they suspect that an employer's maybe doing the wrong thing here. But taking them away, it solves a little bit, but there's still a lot of problems.

Joel: Let's get into facial recognition for a second. You're probably aware that Illinois has either introduced or implemented a new law, where if you're interviewing someone via video and using AI tools to do that, you have to let them know that. Recently on Facebook, FaceApp, the app where it ages you 20, 30, 40 years, is getting some backlash as some conspiratorial facial recognition Russian platform. What are your thoughts on facial recognition, particularly as it pertains to employment and people finding jobs?

Miranda: I think a lot of the concerns that organizations we work with have around this are: is there really any true connection between facial expressions and your ability to do a job?

Chad: Yeah.

Miranda: That's the concern here. There are other concerns about facial recognition, and privacy, and surveillance, but in this case, it's like, "What does the way my face expresses when I talk have to do with whether I can go on and be successful at a job?" The fear is that, the way that these systems work is, a company might say, "Here are our top performers, let's build an AI model, let's build an algorithm to find people who are going to be successful like them." And if they're just looking at their facial expressions, and then they move that over and start to use it to analyze applicants, then if the initial pool of successful people was homogenous, that same sort of demographic preference will end up getting injected into the hiring process. But it will be really hard to detect, again, because AI can be complicated. It can be hard to tell why a system is making a certain decision.

Miranda: And so I think when we think about facial recognition, but really what we're talking about here is facial analysis, the question is, what's it being used for? Is there any theoretical connection between this and the thing we're trying to predict? The Illinois bill is interesting, because there is a problem that people don't necessarily know that they're being assessed by some sort of algorithm when they apply for a job.

Chad: Right.

Miranda: And when they don't know that, if they feel that they've been mischaracterized, they don't know how to maybe challenge that decision, like they could a normal hiring decision to say, "I think this person was biased against me." The Illinois bill tries to remedy that by saying, "Let's just tell them," right? But if you're applying for a job, and you get told, "Oh, by the way, we're going to use facial analysis, we're going to use AI to judge you, how do you feel about that, agree, disagree," and you want that job, and you need that job, is that really super effective? You're not really in a powerful position there as a job applicant. So usually in that type of situation, we'll be thinking about what are the power dynamics here? Is this a real remedy?

Miranda: And so really, I think the laws that are more effective in dealing with AI and bias here, are actually the civil rights laws. Employers can't discriminate in real circumstances. But to enforce those laws, you do need to have more visibility into which employers are using which tools, and you need to have a sense of who applied and who ended up getting the job, regardless of what application process they used, or what AI they used as part of that process.

Joel: So I think some of the push back on that would be, "We can tell if someone's lying, or there's a potential that there's some falsehoods, by their facial gestures and recognitions." And voice as well, do you think there is some value to that part of it, or no?

Miranda: Our understanding is that the technology is nowhere near to being able to do that. There are a lot of claims out there, there's a lot of great marketing about how AI can do this, but-

Joel: There are.

Miranda: ... if they're training the AI on a certain group of people, it might not work as well on another group of people. So some colleagues of ours at MIT did research finding that facial recognition software that was just trying to tell what gender a person was worked great on white men, and worked much worse on women of color, just because the data it was trained on was probably mostly people with lighter skin. And so there's not a great sense that this technology is up to par to actually do the things that they're claiming it can do, especially when it comes to emotion detection, and lie detection. That's so different by culture, by individual. All it's doing is stereotyping basically.

Chad: Yeah. And if your baseline is that of a white male, or an individual who is calm during an interview, if somebody has anxiety, or maybe that's a part of a disability that they might have, and they don't meet the baseline, then automatically they're kicked to the curb.

Miranda: Exactly. Disability's a hard one too, because people building this technology, some of them are trying to do interesting things to kind of debias their models to make sure that it's not harming women, or not harming people with different skin colors, or different ethnicities, but with disability you might not even know who in your sort of training group of people has a disability. And the people who are interviewing might not choose to disclose it. So you won't know whether to make sure that it's calibrated to account for that, and that's an accommodation issue. That could really harm people with invisible disabilities, or ones that manifest in ways that a computer could see, but maybe a human would overlook, or it wouldn't be troublesome in a normal interview. The computers can pick up on really subtle things.

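The MIT finding Miranda mentions came from breaking a model's accuracy out by subgroup rather than reporting one overall number. A toy version of that audit, with invented results, looks like this:

```python
from collections import defaultdict

# Invented evaluation records for a gender classifier:
# (subgroup, true_label, predicted_label). The values are made up; the point
# is the bookkeeping -- report accuracy per subgroup, not one overall number.
results = [
    ("lighter-skinned men", "man", "man"), ("lighter-skinned men", "man", "man"),
    ("lighter-skinned men", "man", "man"), ("lighter-skinned men", "man", "woman"),
    ("darker-skinned women", "woman", "woman"), ("darker-skinned women", "woman", "man"),
    ("darker-skinned women", "woman", "man"), ("darker-skinned women", "woman", "man"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in results:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} accurate "
          f"({correct[group]}/{total[group]})")
```
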
Chad: Yeah. So back in May, you penned "All the Ways Hiring Algorithms Can Introduce Bias," which landed in the Harvard Business Review. A little rag obviously. In the article you talk about how predictive tech, or programmatic, helps advertise jobs today, and how that in effect, could really be a huge problem and introduce bias right out of the gate. So a job is posted, this algorithm goes out and says, "Well, we need to target these types of individuals, so we're going to go to these different types of sites," can you talk about that?

Miranda: Yeah. So when most people talk about bias and hiring algorithms, they're talking about algorithms that are looking at resumes, or algorithms that are doing these video interviews, or doing personality tests, or things like that.

Chad: Right.

Miranda: But that's not the only place that algorithms are being used in the hiring process. So much of recruitment is moving into that top of the funnel part of the process. Trying to build the right candidate pool, so that the people who come in the door are already maybe closer to the right people, and so recruiters or hiring managers aren't wasting their time. They get a ton of applicants, they want to spend their time on the right people, it's better if the right people come in the door in the first place. And so programmatic ads, or any ads that are on platforms that are personalized, or saying, "We're going to show the right ad to the right person," what they're doing is working from the only data they really have, which is what people have clicked on in the past.

Miranda: And so if they're saying the right person, all it means is people who've kind of indicated interest in this type of job in the past, and people like those people. That's the data it has to work with. But what people search for, the types of jobs people search for, the types of jobs people believe they're qualified for, and maybe they go in to click on, that has such a deep reflection of society. And what I talked about in my talk at Jobg8, there's deep occupational segregation, at least in the US and definitely other places. And so looking at jobs people have already had, jobs they think they might want, jobs they think they're qualified for, that's going to reflect these deep divisions. And so just honing in on the people who are most likely to apply for a job, that's going to end up reflecting these social divisions, and reinforcing those disparities. And then people don't even know that there are other jobs out there that they could apply for.

Miranda: And maybe it's not that it would prevent someone from going to a company, and looking at their job site, and applying for a job, but if you're bombarded over and over again with ads from a certain company or for a certain type of job, and people from another social group don't see those, you're going to be more likely to apply for that job than they are, and then that's going to reinforce across society. And someone's benefiting from that, and other people are being hurt by it.

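A bare-bones sketch of the "people like the people who clicked before" logic Miranda is describing for personalized job ads. The profiles and scoring here are hypothetical and far simpler than any production system; they just show how an audience built from past clickers mirrors those clickers and never reaches anyone else.

```python
# Hypothetical job seekers described only by the job categories they have
# clicked on before; the scoring is a toy stand-in for lookalike targeting.
past_clickers = [
    {"warehouse", "driving"}, {"warehouse", "retail"}, {"driving", "retail"},
]

candidates = {
    "alex": {"warehouse", "driving"},
    "blair": {"nursing", "childcare"},
    "casey": {"retail", "warehouse"},
    "devon": {"teaching", "childcare"},
}

def similarity(a, b):
    """Overlap between two sets of job categories."""
    return len(a & b) / (len(a | b) or 1)

def lookalike_score(profile):
    """How much does this person resemble the people who already clicked?"""
    return max(similarity(profile, clicker) for clicker in past_clickers)

# The ad budget goes to the top scorers -- a mirror of the past click pattern.
audience = sorted(candidates, key=lambda name: lookalike_score(candidates[name]),
                  reverse=True)[:2]
print("Ad shown to:", audience)  # blair and devon never see the job at all
```
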
Chad: Yeah. And if you don't have control of the algorithm or targeting, like we were talking about with Facebook, then that could obviously adversely impact the candidates who are coming into your pipeline, versus having more ability to target the types of individuals, because you know what your talent pool looks like right now. So if you want to be more diversified, you need that control. So that being said, talking about less control, can you tell us a little bit about ZipRecruiter? How do you feel about ZipRecruiter, because they have this awesome algorithm, and how do you think it works? Does it work like Netflix?

Miranda: Yeah. So ZipRecruiter and any other sort of personalized job board, like I said earlier, these technologies are all grounded in this basic thing called a recommender system. And the way that works is, it says, when somebody new joins up, the platform has to figure out, "All right, they're new, we don't know much about them. We have to figure out what to show them, and then as they start to interact on site, we can learn more about what they want to see." So it's looking at things you like. So if I go on to ZipRecruiter, or another website and say like, "I really like marketing jobs, I'm going to click on a lot of those, maybe start applying to some," it's going to show me more marketing jobs.

Miranda: And then the way these systems often work is, I'm doing that, I have a certain pattern of behavior on the site, if other people are kind of behaving in a similar way, it might start showing things that I've liked to them, because it sees that we're similar, maybe we'll like some of the same things. But also on ZipRecruiter at least, and on some other platforms, I'm sure other folks are building this, the employers also, the hiring managers or recruiters, they can kind of start to thumbs up candidates, or thumbs down them, and it learns those preferences as well. And it starts showing jobs to the candidates that most closely resemble the ones they've already liked.

Miranda: And so all these patterns, in the same way that that story happened with Netflix, even if the platform isn't collecting gender, isn't collecting race, those things are probably going to be correlated with how people behave on the site. It's just like with Netflix. And that might end up determining what jobs they see, what jobs they don't see, and kind of guiding people towards a certain type of job that might be... it might be relevant, but it might be hiding from them other things they're also... could be interested in or qualified for. Something like that could end up guiding women, for instance, to maybe middle-level, lower-level jobs, whereas men tend to be more confident in what they're clicking on, what they're applying for. They think they can apply even if they don't have all the qualifications, and so they might start getting... shown more senior jobs, higher paying jobs, just because that's the behavior that's being reflected. And it can be super invisible.

Miranda: If you're not collecting information about demographics, you might not know what your candidate pool looks like until they officially apply for a job, and by that point it might be a little too late to really do the proactive outreach to underrepresented groups.

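Underneath most personalized job boards is some flavor of the recommender system Miranda describes. This is a stripped-down, hypothetical version, not ZipRecruiter's actual model: it surfaces the jobs that people with similar click histories engaged with, which is exactly how behavioral patterns that correlate with gender or race can steer what different people see.

```python
from collections import defaultdict

# Hypothetical click histories; not any real job board's data or model.
clicks = {
    "seeker_1": {"marketing_mgr", "marketing_coord"},
    "seeker_2": {"marketing_coord", "office_admin"},
    "seeker_3": {"eng_director", "eng_manager"},
    "seeker_4": {"eng_manager", "marketing_mgr"},
}

def similarity(a, b):
    """How alike two seekers' click histories are."""
    return len(clicks[a] & clicks[b]) / (len(clicks[a] | clicks[b]) or 1)

def recommend(user, k=2):
    """Surface the jobs that people who behave like this user have clicked."""
    scores = defaultdict(float)
    for other in clicks:
        if other == user:
            continue
        weight = similarity(user, other)
        for job in clicks[other] - clicks[user]:
            scores[job] += weight
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Each seeker is steered toward jobs resembling their (and their lookalikes')
# past clicks, and rarely shown anything outside that lane.
print("seeker_1 sees:", recommend("seeker_1"))
print("seeker_3 sees:", recommend("seeker_3"))
```
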
Chad: How do I get a female to click on my damn job? I mean, that's the thing, it's companies are like, "Shit, I'm putting jobs out there, and females are just... they're not clicking on as much as males," because you know us dumb males, we only fit about 50% of the requirements, but, "Oh hell, why not?" Where a female's like, "Yeah, I'm not meeting 100%, so I'm not going to." What does a company have to do to be able to actually, because there's a huge behavior difference there. What do they have to do at that point?

Miranda: Well, so there's a couple of interesting tools out there that are actually using AI to get suggestions about how to change the wording of jobs, so that there won't be a gender bias. And the data they're using, is looking at people who've applied before and saying, "If you use these words instead of these words, we actually predict that more women will apply to your job." So that's one thing that they can do, is really looking at how those jobs are written.

Miranda: And then another thing I think goes back to the question about job ads, and targeting. It might be tricky to really just sort of naturally see that the demographics of people applying are balanced, or that you're getting the pool that you want, so you might have to do some proactive outreach. And then having targeting options, might be the only way to really get your job in front of folks that aren't applying as organically at the same rate. And so we think there is a role to play for affirmative outreach to underrepresented groups, as long as someone's out there making sure that no one's abusing that.

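The wording tools Miranda mentions typically flag words that research on gendered job-ad language has associated with skewed applicant pools. A toy checker, with a deliberately tiny illustrative word list rather than the full research lists a real tool would use, might look like this:

```python
import re

# Deliberately tiny illustrative word lists; real tools use the full lists
# from the research on gendered job-ad language.
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "aggressive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def audit_wording(posting: str) -> dict:
    """Flag coded words so the ad can be reworded before it runs."""
    words = set(re.findall(r"[a-z]+", posting.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

ad = "We want a competitive rockstar engineer who dominates deadlines."
print(audit_wording(ad))
# {'masculine_coded': ['competitive', 'rockstar'], 'feminine_coded': []}
```
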
Joel: One of the more fascinating products to come out in the recruiting industry in a while, is a robot called Tengai Unbiased. I don't know if you're familiar with Tengai or not, but the basic concept is, you have this essentially head of a robot that projects a face, and interviews candidates in an unbiased fashion per their marketing material. And the robot basically ranks candidates based solely on what their answers are, and what their skills are. Now obviously, bias comes into it once a human looks at the ranking, and then decides maybe from there who they want to hire. Although, in a perfect world I guess, the robot would actually hire the people, and humans wouldn't even have to get involved. What are sort of your thoughts on this futuristic view of how hiring could happen in the future?

Miranda: I think it really doesn't matter if there's a robot that's doing this automatic interviewing, or if it's just sort of a piece of software, or a chat bot, right? The robot's a little... It's sticky for PR, but what I think about, what I find interesting is, how is it making those decisions? How is it deciding how to rank people? Or, if we get to a point where they're making actual hiring decisions, what's the cut off score? What are they looking at? How is it trained, right? And even though... There definitely is a problem with unconscious human bias in hiring, so any sort of standardized process is going to help mitigate some of that.

Miranda: But the problem with AI, with robots doing this is, generally they need to learn from something. They need to learn what type of candidates a company likes. So a vendor or the company that's building an algorithm, a model, what they need is a current set of employees who have done really well, who are successful candidates. They use that to determine what makes a good candidate, and then they judge new applicants with that model in mind, right? The problem is that, so many companies today still have a problem with diversity and inclusion, and so the people they see as the most successful might still reflect this narrow conception of what success looks like. And it might be homogenous, it might be explicitly biased, and so those decisions, even if they're made by a robot, they might be consistent, but we don't know if they're unbiased unless we're looking at the difference between who would've gotten a job before, and who's getting a job now.

Miranda: I think that's what really matters when it comes to automated hiring decisions. It's not the interface of the program, it's really: is this helping a company mitigate the unconscious biases of their own recruiters, and is that having an effect? And sometimes we don't necessarily have the data from the civil society side, from the public, to see that right away. Regulators would, since companies have to report that. But we haven't solved diversity and inclusion within companies, so if we're using current workforces to define what success looks like, we're going to lock ourselves into the present if not the past. So how do we make sure that we recognize that we don't have it all figured out now, we don't even know what successful employees look like half the time. It's very subjective. So how do we make sure that those subjective decisions don't get calcified into computers?

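One way to do the before-and-after comparison Miranda describes is a selection-rate check like the EEOC's four-fifths rule of thumb. The numbers below are hypothetical; the structure of the audit is the point.

```python
# Hypothetical applicant-flow numbers for candidates an automated screen
# passed through; real monitoring would use the employer's actual logs.
def selection_rate(passed, applied):
    """Share of applicants from a group who made it through the screen."""
    return passed / applied

def four_fifths_check(rates):
    """Flag any group selected at less than 80% of the best-off group's rate."""
    top_group, top_rate = max(rates.items(), key=lambda kv: kv[1])
    for group, rate in rates.items():
        ratio = rate / top_rate
        flag = "OK" if ratio >= 0.8 else "POSSIBLE ADVERSE IMPACT"
        print(f"{group}: selected {rate:.0%}, {ratio:.2f}x of {top_group} -> {flag}")

rates = {
    "men": selection_rate(passed=120, applied=400),
    "women": selection_rate(passed=60, applied=400),
}
four_fifths_check(rates)
```
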
Chad: Well, Miranda, unfortunately our time is over.

Joel: Thanks Miranda, that was awesome.

Chad: We had a blast, and don't be surprised if we call you back to have more discussions around this specific topic, because this is popping up, it's not going away, and we're glad that Upturn and yourself are actually around. So thank you so much, and we out.

Joel: We out.

Announcer: This has been the Chad and Cheese Podcast. Subscribe on iTunes, Google Play, or wherever you get your podcasts, so you don't miss a single show. And be sure to check out our sponsors, because they make it all possible. For more, visit chadcheese.com. Oh yeah, you're welcome.
