Chad Sowash

Deep Fake Recruiting


Remember seeing and hearing your first Deep Fake? Synthesized voices that sound like your favorite movie star. Not a great impressionist; on the contrary, algorithmically cloned voices where even the subtlest nuances of a voice can be captured and recreated. Cringy, yet incredibly cool that tech could accomplish such a feat.


Yes, technology is advancing quickly and unfortunately many of the advances are seen as negative. Luckily, Ryan Steelberg, president and founder of Veritone, guides us through a more advanced cloned voice discussion.

On this episode we cover:

  • The difference between a deep fake and a "cloned" voice

  • Practical applications for using cloned voices

  • Where are chatbots headed?

  • Will this tech help candidate and employee engagement?

  • How do Pandologic and Wade and Wendy fit into the ecosystem?

Well kids, welcome to the Matrix.


PODCAST TRANSCRIPTION sponsored by:


INTRO (2s):

Hide your kids! Lock the doors! You're listening to HR’s most dangerous podcast. Chad Sowash and Joel Cheeseman are here to punch the recruiting industry, right where it hurts! Complete with breaking news, brash opinion and loads of snark, buckle up boys and girls, it's time for the Chad and Cheese podcast.


Joel (22s):

Oh yeah. What's up everybody? It's your favorite boys in the matrix! The Chad and Cheese podcast coming at you. I'm your cohost Joel Cheeseman, joined as always by the Batman to my Robin, the Robin to my Batman, Chad Sowash, and today we are just giddy to welcome Ryan Steelberg. I don't think that's his acting name. Ryan is President and Founder of Veritone. If you don't know Veritone, keep listening, we'll get to why you should. Ryan, welcome to the show from beautiful, lovely Southern California.


Ryan (57s):

Joel. Chad, thank you guys for having me. It's a pleasure.


Joel (60s):

Should we tell the audience where Ryan is right now? Where he is right now?


Chad (1m 3s):

And of course we're going to tell the audience. Okay, so he's not broadcasting from a Tesla. He's broadcasting from a Tesla killer, a Lucid. They've put out, I think, less than a thousand cars at this point?


Joel (1m 17s):

He's got number 69, I think.


Chad (1m 21s):

43, 43. He's better than 69. Yes. But, but yeah, Ryan man. Yeah. We're frothing at the mouth over here, man. You're in California. Got the great weather and you're sitting in a Lucid. Yeah. Thanks. Thanks for that.


Ryan (1m 33s):

Life is good. I'm very thankful. Very blessed.


Joel (1m 36s):

Is Ryan Steelberg your stage name? Be honest.


Ryan (1m 40s):

It's real. You know, if you search for Steelberg, you usually pick up Chad, my brother, and I. But, you know, the name kind of sounds like Spielberg, and I kind of flirt with that at times when I'm trying to get reservations at places. And my cousin Eric is a pretty prominent cinematographer. So, you know, we're in Southern California, it kind of sounds like Spielberg, close to Hollywood. So I play it up all the time.


Joel (2m 4s):

We got to find a good reason to get Chad Steelberg on the Chad and Cheese podcast.


Ryan (2m 9s):

Right. And it got really confusing. I love it. Yeah.


Chad (2m 11s):

We don't need any more Chads. It feels like Florida in the Bush administration anyway. So before we get into the hairy details around how you actually busted into the HR industry, let's talk about Veritone. What is Veritone? I see this AI operating system you guys are touting. I have no clue what that even means.


Joel (2m 34s):

This shit is the matrix.


Chad (2m 36s):

Yeah, I know this is fricking crazy. So straight from the mountain, man, bring it down on those stone tablets for us.


Joel (2m 42s):

Dumb it down for us, Ryan.


Ryan (2m 44s):

Dumb it down. So, you know, I've been a tech entrepreneur for over 20 years now, and we started kind of our first push into the internet digital space right out of school, back in the mid-nineties. And it was all about advertising tech and martech. You know, we were some of the first individuals to build large-scale ad management systems. We kind of powered all the advertising delivery and reporting, everything, for a lot of old cool names: Yahoo, Lycos, GeoCities, you know, what I'll call Web 1.0.


Chad (3m 20s):

Dude, GeoCities.


Joel (3m 22s):

Overture stuff? Were you early?


Ryan (3m 23s):

We were, actually. It's interesting with Overture, you know, I think the precursor was GoTo, right, goto.com. We were actually the ad tech engine behind GoTo, which became Overture. So I'd like to say the stupid search ads, we were there first, you know, and display ads.


Joel (3m 41s):

I miss Overture.


Chad (3m 42s):

It's your fault is what you're saying.


Ryan (3m 45s):

It's all our fault now, but yeah. So we've done several businesses, you know, really all focused on ad tech, and the quick version is: just think of everything we've built as, if there's lots of data that needs to be ingested and analyzed incredibly quickly, you know, we kind of all got our PhDs in it in the ad tech space, right. I have to choose what ad to serve to the right individual at the right time, and I have to do it in 10 milliseconds. So, you know, we were early playing around with, I'll call it, version 1.0 of neural networks and trying to get better speed and optimization. And that kind of laid the groundwork for us expanding into the cognitive AI space, which is really what Veritone is focusing on today.
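To make that constraint concrete, here is a minimal, hypothetical sketch of latency-budgeted ad selection: score candidate ads for a user and return the best one found before a 10-millisecond deadline expires. The scoring function and data are invented for illustration and are not how any particular ad server works.

import random
import time

BUDGET_SECONDS = 0.010  # the 10 ms decision window described above

def score(ad: dict, user: dict) -> float:
    """Toy relevance score: keyword overlap plus a little noise."""
    return len(set(ad["keywords"]) & set(user["interests"])) + random.random() * 0.1

def select_ad(candidates: list, user: dict) -> dict:
    """Return the best-scoring ad found before the latency budget runs out."""
    deadline = time.perf_counter() + BUDGET_SECONDS
    best, best_score = candidates[0], float("-inf")
    for ad in candidates:
        if time.perf_counter() > deadline:  # out of time: serve the best so far
            break
        s = score(ad, user)
        if s > best_score:
            best, best_score = ad, s
    return best

if __name__ == "__main__":
    ads = [{"id": i, "keywords": random.sample(["cars", "travel", "tech", "food"], 2)}
           for i in range(1000)]
    print(select_ad(ads, {"interests": ["tech", "travel"]})["id"])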


Chad (4m 32s):

Do you watch Silicon Valley and StartUp and some of those older shows? You know, like when they try to do the throwback stuff and you go, yeah, that was me.


Ryan (4m 42s):

It's cringy, you know. And the fact is I'm old enough now that I haven't been the youngest person in the room in so long. It's interesting, I don't feel that old, but man, it's a big transition.


Chad (4m 55s):

I know how you feel. I know how you feel.


Joel (4m 59s):

So when did Veritone start? Like, what's the timeline? I guess we had Overture, and when they sold to Yahoo did you work for Yahoo for a while and then go over to Veritone? Like, what's the timeline on it?


Ryan (5m 9s):

A couple of businesses, actually. You know, where the idea of Veritone came from was actually when we were working at Google. We sold our previous business to Google in '06, and I headed up all their offline ad efforts for a few years. And again, the impetus for Veritone, it started really focused on media, entertainment and advertising. As I stated earlier, you know, we've been pivotal players in the ad tech space, primarily around, I'll call it, display-based ads and search ads. And, you know, as mobile exploded, ad blockers came into play.


Ryan (5m 49s):

It was harder to connect with an audience through just, I'll call it, interruptive-based commercials. And so we started really looking at native-based ads, so ads that are kind of embedded within the programming, right, such as a sponsor of this podcast, or, you know, a logo or a car in a movie. And so we started to work on software because it was simply not practical to have humans try to analyze all the content. So what if we could build tech at scale that could analyze all this audio and video at huge volumes and be able to very quickly, in near real time, identify all of those product placements, those integrations, those unstructured data elements?


Ryan (6m 30s):

And that's what we did. I mean, version one of Veritone in 2014 was simply trying to identify, in near real time, when certain organic ad mentions were happening, you know, on broadcast radio and streaming audio.


Chad (6m 49s):

Did you break that down into transcriptions? And then?


Ryan (6m 52s):

Exactly. Okay, so it was NLP, natural language processing, speech-to-text at scale. And then we had to be able to say, okay, great, but what if I want to do this with 5,000 streams at the same time? Right. So it was a scale function and really a precision and accuracy function on the NLP that obviously had to get good enough so we could turn it into a product. So that was it, that was version 1.0, and then it just expanded significantly from there into other forms of cognition, like object detection.
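As an illustration of that pipeline, here is a small sketch, not Veritone's actual code, of fanning transcription and keyword spotting out across many audio streams at once. The transcribe_stream function is a stand-in for whatever speech-to-text engine you plug in, and the brand terms are made up.

from concurrent.futures import ThreadPoolExecutor

BRAND_TERMS = {"veritone", "pandologic"}  # organic ad mentions we want to catch

def transcribe_stream(stream_id: int) -> str:
    """Stand-in for a real speech-to-text engine call on one audio stream."""
    return f"...this hour of stream {stream_id} is brought to you by Veritone..."

def find_mentions(stream_id: int) -> tuple:
    """Transcribe one stream and return any brand terms found in it."""
    transcript = transcribe_stream(stream_id).lower()
    return stream_id, [term for term in BRAND_TERMS if term in transcript]

if __name__ == "__main__":
    # Fan the work out across thousands of streams; size the pool to taste.
    with ThreadPoolExecutor(max_workers=64) as pool:
        flagged = [(sid, hits) for sid, hits in pool.map(find_mentions, range(5000)) if hits]
    print(f"streams with brand mentions: {len(flagged)}")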


Joel (7m 21s):

Big agencies and big companies would hire you to tell them how many times their brand would come up in radio shows, television. Okay. Very cool.


Ryan (7m 30s):

And we own it as a division of Veritone. We actually own our own media agency as well. A lot of people don't know that, but Veritone One, which is a wholly owned subsidiary of Veritone, is one of the largest audio agencies, and we're actually the largest podcasting media agency. So we like to say we eat our own dog food with the tech we built, right. We're one of the larger clients of our own tech. So yes, we do license it to third-party companies and brands, but we also use the same technology to improve our own agency.


Chad (8m 1s):

Yeah. We actually talked to the team about this podcast, not to mention, we represent a much larger group of HR talent acquisition podcasts.


Joel (8m 13s):

Huge.


Chad (8m 13s):

And yeah. So Veritone, Veritone had their eyes on the Chad and Cheese. So that being said, again, let's dig into the AI operating system that you guys are talking about. Right. So what does that actually mean? Because I think AI is turning into something like cloud computing, where cloud computing once was a thing, but it's not anymore because it's embedded in everything we do. Do you see that happening with AI? Is that why we need an operating system?


Ryan (8m 43s):

Very similar. If you make the parallel to the, I'll call it, traditional legacy operating system, like Windows, we saw early that there are so many AI models out there, right? So I mean, you and I could literally, in five minutes, search and find hundreds of text-to-speech and NLP models, algorithms, right, that are prepared to take different training data to, again, try to execute an AI function. So we saw an explosion of models, A, and then B, we saw people who were trying to build applications that would then use that model. So for example, our application that we first built, right, trying to find an ad mention, it didn't seem practical that you would have to have like a hard-coded end-to-end process.


Ryan (9m 27s):

Right, I have to pick one model, build an application, et cetera. And so we said, okay, if there's going to be exponential growth in the models, and there's going to be exponential growth to your point, primarily because of the benefit of cloud scale, then what was really missing was an operating system. A piece of software, a software layer, that would allow you to manage one to N number of different AI models in different categories, so from machine vision to obviously voice and audio and NLP, and then still be able to bifurcate that from the application layer. So again, if you and I are working on, you know, a speaker separation application for this podcast, we don't wanna have to rewrite the application if we find a killer new AI model, right.


Ryan (10m 14s):

One that comes on the market and is a game changer in terms of accuracy and speed. So that's where we went. And it turned out to be a very sound and strategic decision that allows us now to really support numerous different use cases on really any cloud, whether it's, you know, Google or Azure, but also on-prem. We do a lot of work with the federal government and the Department of Justice. So the operating system does not have to just run in a large public cloud, again, like an Azure or AWS, but we can also deploy the whole stack even in a network-isolated environment, you know, behind a firewall.
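A toy way to picture that decoupling, purely a sketch of the pattern rather than aiWARE itself: applications call a stable "run this category of cognition" interface, while the engines registered under each category can be swapped without touching the application code.

from typing import Callable, Dict, Optional

# Registry of interchangeable engines, keyed by cognition category.
ENGINES: Dict[str, Dict[str, Callable[[bytes], str]]] = {}

def register(category: str, name: str, engine: Callable[[bytes], str]) -> None:
    """Add an engine under a category such as 'transcription' or 'object-detection'."""
    ENGINES.setdefault(category, {})[name] = engine

def run(category: str, payload: bytes, prefer: Optional[str] = None) -> str:
    """Applications call this stable interface; the engine behind it can change freely."""
    engines = ENGINES[category]
    engine = engines[prefer] if prefer else next(iter(engines.values()))
    return engine(payload)

# Two stand-in engines. A newer, better model simply gets registered under the
# same category; the application code calling run() never has to be rewritten.
register("transcription", "engine_a", lambda audio: "transcript from engine A")
register("transcription", "engine_b", lambda audio: "transcript from engine B")

if __name__ == "__main__":
    print(run("transcription", b"raw-audio-bytes"))                     # default engine
    print(run("transcription", b"raw-audio-bytes", prefer="engine_b"))  # swapped in, no rewrite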


Chad (10m 52s):

That's great for wiretaps right there, just that easily. I don't know if you knew that?


Ryan (10m 58s):

No comment there, no comment there.


Joel (11m 2s):

So fast forward from 2014 to now: you guys are a cornucopia of products and services. Like, is there a way for you to funnel that down? And I guess, how did that eventually lead to getting into the employment space?


Ryan (11m 12s):

When we first launched the operating system, we couldn't find anybody to buy it. They're like, what the hell is an operating system for AI? I barely even understand how to deploy AI in my company and you're trying to sell me the platform, right? So almost out of necessity, initially focused just on media and entertainment, we kind of took our own subject matter expertise and built a host of different specific applications on aiWARE, our tech stack. In effect, it was something tangible that ESPN and Disney and, you know, iHeartMedia could buy, right.


Ryan (11m 53s):

They weren't buying the core AI platform. They were buying the application we built on aiWARE to do, let's say, real-time ad attribution. So thankfully our decision to focus on, or have the ability to sell, apps really kickstarted the business in terms of revenues and allowed us to scale. And now, if you fast forward a little bit, groups know us, they've invested more in AI, they have some analysts and data scientists on staff. And so now they're ready, and some companies have been for years now, to take it to the next level. Right. Okay, that's great, Veritone, we've been licensing this one application, but now we're ready to invest directly into the AI platform because we want to build our own custom solutions, right.


Ryan (12m 35s):

Or, we don't necessarily want to tell you all of our problems.


Chad (12m 39s):

Right.


Ryan (12m 40s):

We want the workbench, because we have internal things that we want to go solve and they want to keep the proprietary nature of the application. So it really went from: we built the platform first, candidly, right, aiWARE, and couldn't sell it for a couple of years until we built a host of different applications, I'll call it the Microsoft Word, right, the Microsoft Excel. And then, ironically, the majority of the growth, which is affording us now to go into all these different verticals, is really more of a focus on the platform itself as compared to the applications.


Chad (13m 11s):

Do they have access to your sandbox, and then they can just build on top of what you already have there? And is that provided, like, through APIs? I mean, how is that operating system actually provided?


Ryan (13m 21s):

Yeah, you're pretty technical and you're spot on, that's exactly right. So there is a development framework, right? Depending on their applications, a full SDK. So you can actually build and deploy new AI models, and it's a full framework where you can actually build using a low-code, no-code workflow platform, which we call Automate Studio. So you can start to build the data ETLs and the data pipelines through a low-code application layer. And then ultimately, if you so choose, on the aiWARE platform, with what we call aiWARE.js, you can actually build a hardened UI, a user interface that can be a mobile app or a web-based property.


Ryan (14m 3s):

So you're right, it's full stack, and it's available either through our development framework or via API.
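For a feel of what "available via API" can look like in practice, here is a hypothetical REST-style sketch of submitting a processing job against a chosen engine category and polling for the result. The base URL, endpoints, and field names are invented for illustration; this is not Veritone's actual API.

import time
import requests

BASE_URL = "https://api.example-aiplatform.com/v1"  # placeholder, not a real endpoint
API_KEY = "YOUR_TOKEN"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def submit_job(media_url: str, engine_category: str = "transcription") -> str:
    """Create a processing job for a piece of media; returns a job id."""
    resp = requests.post(
        f"{BASE_URL}/jobs",
        headers=HEADERS,
        json={"mediaUrl": media_url, "engineCategory": engine_category},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["jobId"]

def wait_for_result(job_id: str, poll_seconds: float = 5.0) -> dict:
    """Poll until the job finishes, then return its payload."""
    while True:
        resp = requests.get(f"{BASE_URL}/jobs/{job_id}", headers=HEADERS, timeout=10)
        resp.raise_for_status()
        body = resp.json()
        if body.get("status") in ("complete", "failed"):
            return body
        time.sleep(poll_seconds)

if __name__ == "__main__":
    job_id = submit_job("https://example.com/podcast-episode.mp3")
    print(wait_for_result(job_id).get("status"))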


Chad (14m 10s):

Pretty damn awesome. So let me ask you real quick. I actually heard you speaking Spanish fluently on a podcast. Can you speak fluent Spanish?


Ryan (14m 19s):

I cannot speak Spanish. I, you know, I was a pretty good student. I think I took Spanish for seven years. I don't think I ever got anything less than frankly like a 95 and I can't speak a word and I live in Southern California. So there's the educational system right there for you.


Joel (14m 37s):

No bueno Ryan.


Ryan (14m 38s):

But no, I do not speak Spanish.


Chad (14m 40s):

This obviously propels us into the next question. What the hell is the difference between a deep fake and a cloned voice?


Joel (14m 49s):

Or witchcraft?


Ryan (14m 50s):

Yep. You got it. So again, primarily working with our media and entertainment customers, we kind of saw this whole metaverse opportunity emerging. And so...


Chad (14m 58s):

There it is.


Ryan (14m 60s):

There it is. So we started to see, okay, what's our entry point?


Joel (15m 6s):

Did he say metaverse Chad?


Chad (15m 6s):

He said metaverse.


Joel (15m 7s):

Oh yeah, that's what I like.


Ryan (15m 7s):

Boom, boom. You know, I had to drop in just one buzzword, I'm now included in some search index. So voice, you know, voice was the obvious one. We thought it was the lowest hanging fruit because, you know, we were ingesting and analyzing so much content, hence this conversation right here, that it was pretty easy for us to isolate the voices and start looking at that as training data. So synthetic voices, deep fakes, cloning. I'm going to give you an analogy. Since you guys appreciate some of the mid-nineties web references, you'll appreciate this one.


Chad (15m 44s):

Can't wait.


Ryan (15m 45s):

The parallel is Napster to iTunes. So in the music space, with the demise of the CD, the whole world went song-specific, and what emerged first was the deep fake of music, right? Which was the Napster ecosystem. People were ripping music right from CDs and they were posting it on different servers around the world. And Shawn Fanning and a company called Napster built kind of an index that allowed me, you know, to index the songs that I ripped off and, frankly, stole onto my computer, and everybody else's. And so that's how it started. And we all did it.


Ryan (16m 24s):

We went from basically zero to a million fast, and it was all ripped off. Then iTunes, 'you're crazy, iTunes,' brilliant Apple, obviously being facetious, they said, you know what? I believe that if I create a quality of service and a service layer, and I have a large enough catalog, not only do I believe that we can create a commercially viable music distribution system, but I bet we could start making real money. And over, frankly, a decade, we could try to get the recorded music industry back. And everybody's like, there's no way, it's free. Well, obviously it wasn't just the lawsuits from everybody trying to shut down Napster.


Joel (17m 1s):

Not just Metallica.


Ryan (17m 2s):

It was lightning in a bottle with the iPod, the catalog, and it was cheap. Wow, I can buy one song for only a buck, right. Killed it, killed it. And obviously, boom, the digital streaming ecosystem and the obviously more song-centric universe is now here. So I use that parallel because that's how I look at the deep fake versus, I'll call it, legitimate voice synthesis or voice cloning. It's an interesting parallel. Like Napster, the first thing that we've all been exposed to are the deep fakes, right? So we've seen the fake Tom Cruise avatars, you know, people speaking, we've heard the spoofs of so many countless people where it's not their real voice, but it sounds just like their real voice.


Ryan (17m 48s):

And to be clear, those are misappropriations of people's rights. You know, legal case law and IP protection is going through, let's just say, a crazy time because of the sophistication of AI and synthetic content creation. The fact is, you and I, we maintain the copyright, if you will, of our voices. It's all relative to an argument, right? I mean, if you ask a thousand people, did somebody rip off Tom Cruise's voice, anybody who watches that video, 99.9% of them will say that person is trying to emulate Tom Cruise, look at the name and likeness, they're trying to emulate his voice. So that is a misappropriation of IP rights.


Ryan (18m 29s):

And the deep fakes that exploded, in my mind, were like all forms of new innovation: there was a vacuum, it was a novelty, people were interested in it. And if there wasn't going to be a legitimate platform to do it, people were going to create, quote, "deep fakes." So I think now we're, I'd say, entering the commercial phase, where we're working with the IP owners. So these are the individuals directly, right? The influencers, you guys, right, prominent people, both alive and who have passed away, the Walter Cronkites of the world. You're now seeing legitimate opt-in, or I'll say consent, to build official clones, right. So we'll call these the official ones, not the deep fakes. It's frankly a lot of the same technology, but it's the approved aspect, with rights and protections around the use of these synthetic voices and content, that you're now seeing today.


Ryan (19m 24s):

And that's obviously an area that we're building a business around.


Chad (19m 26s):

Gotcha. Okay. So back to you speaking Spanish, but not speaking Spanish, that was actually a cloned voice that I heard on a podcast. Right?


Ryan (19m 37s):

Correct. So yes, we took training data, and once we built my synthetic voice, we then could feed it transcripts in Spanish, and then the output obviously came out in Spanish. And so we look at that as just really exciting for kind of next-generation amplification of localization and foreign language distribution.


Chad (20m 2s):

That's where it hits podcasting. Okay. Because Joel and I, Joel barely speaks English. Okay.


Joel (20m 6s):

Bullshit.


Chad (20m 7s):

So can you imagine, can you imagine the Chad and Cheese podcast in Spanish, Joel? But we wouldn't have to learn Spanish; we would have a synthesized cloned voice that would duplicate us, and we could actually have local distribution into Spanish-speaking countries.


Joel (20m 23s):

But does 'hairy balls' translate into every language the same as it does in English?


Chad (20m 25s):

Yes of course it does.


Joel (20m 27s):

I guess that's the question. Okay, Ryan, let me pick my brain up off the floor real quick. So when you did your voice in Spanish, did you say a certain number of words or syllables in Spanish and then it was your voice?


Ryan (20m 42s):

No, and that's what's incredible. I mean, I created the synthetic model just from me speaking in English.


Joel (20m 48s):

Wow.


Ryan (20m 48s):

And then obviously as a company we do both, you know, text translation at Veritone as well as the voice, so we're able to do everything. But no, the beauty is, with about 10 hours of training data, you can get a really good quality voice. I mean, you need, I'd say, good quality audio as training data, but again, we've gotten good enough that I can repurpose it. Particularly if you're a professional speaker or a podcaster, you know, you're sitting on lots of training data. But again, once I have that base model voice, I can do it. And vice versa: if I have a voice that's initially in Spanish, right, I can convert that into an English voice.


Joel (21m 23s):

And do all the nuances of your voice come over in that tone?


Ryan (21m 28s):

Exactly. And then you can continue. Don't think of it as a binary, one-and-done creation of a model. You can continue to iterate it. So let's say I start with that 10 hours of training data and, even with the sophistication of the text editor, I just can't get my Spanish voice to say, you know, I can't even think of a funny word right now, but just something novel. Yeah, Grupo Televisa. Okay, there you go. And no matter what, I keep trying to say it and it's not coming out right. You can then add a dictionary, so I can improve the model subtly for proper nouns and phrases, to continue to improve it. But in terms of tone and inflection, it's pretty amazing what you can do with text-to-speech now.
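The "add a dictionary" step he mentions can be as simple as a text normalization pass before synthesis. A toy sketch, with a made-up lexicon, of respelling proper nouns so a hypothetical TTS front end pronounces them the way you want:

import re

# Toy pronunciation lexicon: map proper nouns to phonetic respellings so a
# hypothetical TTS front end says them correctly. Entries are illustrative only.
LEXICON = {
    "Grupo Televisa": "GROO-poh teh-leh-VEE-sah",
    "Veritone": "VAIR-ih-tone",
}

def apply_lexicon(text: str, lexicon: dict) -> str:
    """Replace known proper nouns with their phonetic respellings before synthesis."""
    for term, respelling in lexicon.items():
        text = re.sub(re.escape(term), respelling, text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    script = "Bienvenidos al podcast, patrocinado por Grupo Televisa y Veritone."
    print(apply_lexicon(script, LEXICON))
    # The adjusted script is what would then be fed to the cloned-voice TTS engine.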


Ryan (22m 12s):

But just one comment, though. I'm going to throw a little monkey wrench in here: we also support the modality of speech-to-speech, which is really cool.


Chad (22m 24s):

Wow!


Joel (22m 24s):

Explain that.


Ryan (22m 25s):

With the same training data, once Ryan's voice is created, I can actually use another voice actor to drive my voice. So I could have Chad, right from this podcast, drive Ryan Steelberg, me, drive my voice. So you're speaking in your own voice and what's coming out is my perfect-sounding voice.
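Structurally, speech-to-speech voice conversion tends to look like the sketch below: keep the content and timing of the driver recording, swap in the target speaker's learned voice characteristics, then re-synthesize. The functions here are stubs, an assumption about the general shape of such a pipeline rather than Veritone's implementation.

import numpy as np

def encode_content(driver_audio: np.ndarray) -> np.ndarray:
    """Stub: extract speaker-independent content features (words, prosody, timing)."""
    return driver_audio.reshape(-1, 1)  # placeholder features

def load_target_voice_embedding(name: str) -> np.ndarray:
    """Stub: look up the embedding learned from the target speaker's training data."""
    return np.ones(8)  # placeholder embedding for, e.g., a "ryan" voice

def vocode(content: np.ndarray, voice: np.ndarray) -> np.ndarray:
    """Stub: re-synthesize audio carrying the driver's content in the target's timbre."""
    return content.flatten() * voice.mean()

if __name__ == "__main__":
    driver = np.random.randn(16000)               # one second of "driver" audio at 16 kHz
    content = encode_content(driver)              # keep what was said and how
    target = load_target_voice_embedding("ryan")  # whose voice it should sound like
    converted = vocode(content, target)           # output in the cloned voice
    print(converted.shape)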


Chad (22m 49s):

Now that's scary shit, dude.


sfx (22m 51s):

Doesn't anyone notice this? I feel like I'm taking crazy pills!


Chad (22m 51s):

So overall, I mean, seriously, the text-to-speech thing is amazing. We have transcriptions, all that other fun stuff. But I mean, this is where you get into, again, people step back. It's not just that it's crazy, but there's a lot of evil out there. And I know you know the difference, obviously, between being able to legally use my voice and not, but there are a lot of people out there who I'm sure would wield this power for evil. How do you stop something like that?


Ryan (23m 18s):

You know, we do some interesting things. When a voice is created with our tech, we're able to embed, let's just say, inaudible tones, little Easter eggs, if you will, so that we can quickly scour distribution and verify not only is that a legitimate voice that Veritone and MARVEL.ai created, but did it actually come from the original training data? So I can actually map the output of a voice all the way back to the original training data. And with some groups, obviously, like YouTube, for example, we can pre-index our content, so it's not like we just have to search every single YouTube video out there; that helps create kind of that index.


Ryan (24m 5s):

Mobile phones are a lot easier, you know, where they're embedding via SDKs in the application. So if a misappropriated voice is on a mobile phone with the traditional players, it'll identify it. So I think ultimately we're going to be able to police it. It's going to start like everything, you know, bad apples and bad characters are still going to do bad things, but I think we're now producing tools fast enough that we'll be able to sniff those out and make it clear, kind of like a verified Twitter account, if you will. We're going to be able to get to that level of integrity here real soon.
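One way to picture the "inaudible tones" idea, as a deliberately simplified sketch with made-up parameters rather than Veritone's actual scheme: mix a faint tone into generated audio at synthesis time, then check for energy at that frequency when verifying a clip.

import numpy as np

SAMPLE_RATE = 44_100
MARK_FREQ = 19_000   # near-inaudible tone frequency in Hz (assumed for the sketch)
MARK_LEVEL = 0.002   # very low amplitude so the tone isn't heard

def embed_watermark(audio: np.ndarray) -> np.ndarray:
    """Mix a faint high-frequency tone into the audio."""
    t = np.arange(len(audio)) / SAMPLE_RATE
    return audio + MARK_LEVEL * np.sin(2 * np.pi * MARK_FREQ * t)

def detect_watermark(audio: np.ndarray, threshold: float = 5e-4) -> bool:
    """Check whether energy at the mark frequency stands out in the spectrum."""
    spectrum = np.abs(np.fft.rfft(audio)) / len(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1 / SAMPLE_RATE)
    bin_idx = int(np.argmin(np.abs(freqs - MARK_FREQ)))
    return spectrum[bin_idx] > threshold

if __name__ == "__main__":
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    clean = 0.1 * np.sin(2 * np.pi * 220 * t)   # one second stand-in for "speech"
    marked = embed_watermark(clean)
    print(detect_watermark(clean), detect_watermark(marked))  # expect: False True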


Chad (24m 41s):

For all those kids listening, real quick: an SDK is a software development kit. Okay. So go ahead, Joel. Sorry.


Joel (24m 43s):

So when Putin comes on TV and says we're at war with America, you guys can be like, eh, that's not him, that's a fake voice, et cetera, just making sure that that's the case. Well, most of our listeners will know you from your recent acquisition of Pandologic, who also owns Wade and Wendy.


Chad (25m 4s):

Crashing down the door, man.


Joel (25m 6s):

You know, why that acquisition? What's the vision? Help us out here.


Ryan (25m 16s):

We have built a really good position and market presence in media and entertainment, as we've talked a lot about today, and also government, legal and compliance, you know, licensing and helping, again, our state and local law enforcement agencies, our military, our Department of Justice. But we were missing a lot of categories. We know we don't have a very robust fintech or insurance business. The Pando opportunity was very intriguing to us because of, frankly, the ubiquitous nature of hiring, right. I mean, obviously we're dealing with a very acute issue with labor today, labor shortages, right? The Great Resignation. I almost don't know a single company that is not, in some form or fashion, struggling to identify, source, and land the right candidates and quality employees.


Ryan (26m 6s):

We looked at this as a vertical expansion opportunity for Veritone, number one. Number two, the way they built their business was extremely similar to how we look at problems. It starts with data ingestion and integration, working with partners like Amazon and others, and then the output of their specific HR platform. There are many different areas of the HR equation and the stack, as you guys know and talk about all the time, but Pandologic was really focusing on, I'll call it, automated and programmatic recruitment advertising. So the ads that you see all over the web and on bulletin boards, Indeed and other destination sites. They were using AI to automate a lot of that process, you know, the data ingest side, and it happens to be that today their primary application is really an ad network, right?


Ryan (26m 56s):

For recruiting. And the Steelbergs know ad networks pretty well, right? So that's an area where we felt confident not only on the AI backend side, but the output, the front end of their application, which is, I'll call it, a specific ad network, was something that we were very comfortable with and understand well. So it was the right time, we negotiated a great deal from a financial and business perspective, and we brought it on board. And we did know what we were building with the synthetic voice, and we were aware of what they were looking at with Wade and Wendy. So, you know, our big thing is we continue to prove out the performance of their primary model, but we're obviously very excited about, I would say, improving the experience of recruiting from just, I'll call it, legacy text chatbots to, frankly, building more organic, lifelike voices. You know, if you're trying to onboard a hundred thousand new employees in, frankly, a month, like Amazon, you're not gonna have a thousand recruiters on the phone. So can we bring a more lifelike human experience of talking to Ryan Steelberg, and not just going through an automated chatbot?


Ryan (28m 9s):

And that's obviously another area of excitement where we think we can bring a lot of innovation to Pandologic and, frankly, Wade and Wendy.
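Programmatic recruitment advertising of the kind described here boils down to automated rules that shift bids and budget toward the job ads that are actually converting. A generic, hypothetical sketch of one such rule, not Pandologic's actual model:

from dataclasses import dataclass

@dataclass
class JobAd:
    title: str
    bid: float    # cost-per-click bid in dollars
    clicks: int
    applies: int

TARGET_CPA = 25.0  # hypothetical target cost per application

def adjust_bid(ad: JobAd) -> float:
    """Raise bids on ads converting below the target cost, lower them when too expensive."""
    if ad.applies == 0:
        return round(ad.bid * 0.8, 2)      # no conversions yet: pull back spend
    cost_per_apply = (ad.clicks * ad.bid) / ad.applies
    if cost_per_apply < TARGET_CPA:
        return round(ad.bid * 1.1, 2)      # efficient: buy more traffic
    return round(ad.bid * 0.9, 2)          # too pricey: ease off

if __name__ == "__main__":
    ads = [JobAd("Warehouse Associate", 0.60, 400, 12),
           JobAd("RN - Night Shift", 1.20, 300, 2)]
    for ad in ads:
        print(ad.title, "->", adjust_bid(ad))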


Chad (28m 18s):

All of the data that Pandologic has, not to mention Wade and Wendy, you're looking at more of an experience that is human, but not quite human, because it's cloned. Being able to prospectively drive the CEO's voice through marketing messaging, or a hiring manager's, or that of anybody in the hiring chain, to provide a much more humanesque experience. Is that what I'm hearing?


Ryan (28m 48s):

Yeah, you almost got it spot on. There has been a delineation between, call it, corporate marketing budgets and HR recruiting budgets. And even though the output is a form of advertising, you're now seeing companies really bringing together the Chief Marketing Officer and the head of recruiting and HR. Because again, to your point, if I'm trying to attract people to my company, it's not just what the listing looks like on Monster and Indeed. It's the commercials that I'm doing, talking about all the great things we're doing at Veritone, right. And so you're now seeing more coordination in the marketing message, the brand attributes, between traditional marketing and advertising and HR recruiting.


Ryan (29m 34s):

And obviously Veritone, you know, we've been pretty successful at both, and we're bringing those closely together. If companies want continuity, which is what you're touching on, they want that same voice, the same person who maybe is even doing the commercials, right, and they want continuity all the way through to the onboarding HR process. We now can do that, if the companies think that's the right decision, if they really want to consolidate and perfect that continuity.


Joel (30m 1s):

All right. So let me, let me get this straight Ryan.


Chad (30m 5s):

Wow.


Joel (30m 6s):

A company could hire Christopher Walken to do their ads, do a licensing deal with Christopher to do the interviewing questions. So people could have an interview situation where they're interviewing with Christopher Walken, at least his voice?


Ryan (30m 18s):

Spot on.


Joel (30m 20s):

Wow.


Ryan (30m 21s):

Now, whether that's a good idea or not... Just tell me one of you guys can do a Christopher Walken impression right now, because that would just close the loop.


Joel (30m 28s):

One of our outros is a Walken impersonator. So.


Ryan (30m 30s):

I mean, that's such an iconic voice, right? But you're spot on. If that's their desire and they feel that continuity is what they're trying to achieve, 100%, that's exactly what you could do.


Joel (30m 42s):

And that's video as well, or is it just voice at this point?


Ryan (30m 46s):

People demand a more consistent voice, ironically, than, I'll say, the visual avatar side, and that's a whole other conversation we could go down. But you know, some people are not comfortable interacting with a hyper photo-realistic avatar. They like the pixelated one, it's much more arbitrary. But voice? They don't want it to sound like a robot. They want a natural-sounding voice.


Joel (31m 12s):

So it can look like a CryptoPunk, but it has to sound like a human being.


Ryan (31m 19s):

Yeah. A Dapper Labs rabbit, right. It could be interviewing you.


Joel (31m 23s):

A Bored Ape interviewing you with Christopher Walken's voice. Got it.


Chad (31m 26s):

So this is amazing from a voice actor standpoint, because you can obviously trademark and have the quote, unquote "real synthesized" voice of, you know, Walken or Samuel L. Jackson or someone like that. But you can scale much faster because you don't have to actually do all of the reads.


Ryan (31m 44s):

Yeah. The production savings alone are astronomical. We strongly believe that every rider a voice actor or an actor signs when they're signing up to a production will include negotiation of the use of the synthetic voice, if just for post-production efficiency, right? There are so many different areas you can go, though. One little interesting tidbit about that: in the voiceover world, there's a gentleman in Germany, I forgot his last name, but his first name is Tiberius, and Tiberius is the voiceover actor for all Brad Pitt movies in Germany. So he's become famous for being the voiceover actor of Brad Pitt, and obviously, you know, we no longer believe that makes a lot of sense, right?


Ryan (32m 31s):

Brad Pitt and others have built a good brand equity in their own name and likeness. And so now when you start to see those movies internationally dubbed into a different language, it actually will be Brad Pitt's native voice.


Chad (32m 43s):

You put Tiberius out of work, you put Tiberius out of work.


Ryan (32m 46s):

I love you, Tiberius, but you've been, you know, preying on Leo and the other superstars long enough.


Joel (32m 54s):

But he's been skating long enough.


Ryan (32m 56s):

I mean, he's a national treasure, but he's going to have to get in front of the camera and not just be the voiceover.


Joel (33m 4s):

Hasselhoff! Tiberius! Will this work for people who are no longer with us? For example, could people working at Ford get a voice message from Henry Ford in his actual voice that's customized to them, or do you have to be alive and go through the process?


Ryan (33m 16s):

You do not. We put those in the, quote, "legacy category." There are a lot of really interesting legacy projects. You know, again, we try to be very impartial here, right? We're a tech enabler. These are iconic voices of the past. There's also a lot of effort where, I believe, the USC film school is involved in helping the Holocaust museum. They're trying to create amazing avatars and synthetic voices of some of the few remaining survivors from Auschwitz, for example, stuff like that, and recreate these iconic voices of the past, and celebrities too. So, you know, I can't really go into detail.


Ryan (33m 57s):

You're going to hear some stuff that we're going to be able to release and talk about, but big iconic names.


Joel (34m 4s):

Fans, people's estates of those who have passed can now make new mounds of money, right? So, like, a John Lennon Nike ad is something that we may have to look forward to?


Ryan (34m 19s):

That specifically has now been deemed via case law: whoever is controlling the estate of the deceased has the authority to initiate the voice creation.


Joel (34m 26s):

Wow. I'm equal parts creeped out and totally fascinated. I need a drink.


Ryan (34m 29s):

That one's interesting, right? There are a lot of ways you can think about it. I can give you a million that are like, oh, yes, of course, and others that are like, ooh, man, that's gonna freak me out.


Chad (34m 40s):

I have to say, I know you're busy. You've got to jump out in the Lucid and, you know, go drive on the coast there in California, but we gotta have you back, 'cause we want to hear more about this, especially how you execute in this space, in the hiring space. I know it's part and parcel to what you guys are doing in other areas as well. But we definitely want to hear how all of this connects around Wade and Wendy and Pandologic. But again, I really appreciate you taking the time.


Joel (35m 11s):

As soon as we can, a face-to-face with Terry Baker and Ryan over drinks on Newport Beach would be fine with me as well.


Chad (35m 18s):

That'd be good. Yes. Now I would enjoy that. Maybe cigars too. That sounds good.


Ryan (35m 20s):

Have you guys met Terry in person yet?


Chad (35m 21s):

Yeah, of course. Yes.


Ryan (35m 22s):

Okay. I mean, it's really funny, because when you're working in, like, the Zoom world with a lot of these new people that you meet, you never really know how big or small people are. And I remember meeting Terry for the first time and I'm like, good lord, you're like my offensive left tackle. He's huge. Like six-five. I mean, I love Terry.


Chad (35m 48s):

He's a big boy. Yeah. He's a big teddy bear, that Terry.


Joel (35m 53s):

Big teddy bear.


Chad (35m 54s):

Again, Ryan. Thanks for coming on the show. If somebody wants to learn more about Veritone, where should they go?


Ryan (36m 0s):

Go to veritone.com. It's an easy-to-navigate site. We've got a help desk and a knowledge center. So anything you want to learn about, from HR to crazy synthetic voices, you can find almost everything you want at veritone.com.


Chad (36m 20s):

That's excellent, man. Thanks so much, man.


Joel (36m 22s):

Thanks Ryan.


Ryan (36m 24s):

Thank you. Have a wonderful day. Bye-bye.


Joel and Chad (36m 26s):

We out.


OUTRO (37m 6s):

Thank you for listening to, what's it called? The podcast with Chad, the Cheese. Brilliant. They talk about recruiting. They talk about technology, but most of all, they talk about nothing. Just a lot of shout outs to people you don't even know, and yet you're listening. It's incredible. And not one word about cheese, not one. Cheddar, blue, nacho, pepper jack, Swiss. So many cheeses and not one word. So weird. Anyhoo, be sure to subscribe today on iTunes, Spotify, Google Play, or wherever you listen to your podcasts, that way you won't miss an episode. And while you're at it, visit www.chadcheese.com. Just don't expect to find any recipes for grilled cheese. It's so weird. We out.


