
The Future of Content episode 10: Content That Speaks to You

25 Min. Read | Digital strategy


Lauren Golembiewski’s fascination with voice assistant technology began at home, as a consumer. As she became increasingly integrated with her devices and platforms, she realized the potential of building a business focused on creating predictive content for voice and chat interactions.

Today, like so many of us, she tends to see voice assistants as pseudo family members—for better or worse.

I think there’s a similar social contract with some of these voice assistants that live in our homes, because I won’t say that I personify it as a family member […] but people will start to think about it that way. There are some interesting social implications in the way that people talk about their virtual assistants.

Despite voice assistants being the “most quickly adopted device type ever,” built into hundreds of millions of devices, the usefulness of these devices has stagnated. Lauren wants to foster an industry built on open standards that champions unique interactions and delights its users in meaningful ways.

Where I’m really interested in following the voice industry is how we create more open voice standards and open technologies that more people can interact with. And so we can create a voice operating system that is open that maybe we use via a browser or a custom mobile app and that operating system starts to push the needle forward on how these other big platforms might be looking at voice.

Lauren Golembiewski

Lauren Golembiewski is the CEO and co-founder of Voxable.

Stream episode 10 now, or subscribe on your favorite podcast platform below.


Episode transcript

Note: This transcript may contain some minor wording and formatting errors. Apologies in advance!

[Voiceover] Welcome to The Future of Content, a podcast exploring how we create, manage and distribute content. Brought to you by Four Kitchens: We make BIG websites.

[Todd] Welcome to The Future of Content. I’m your host, Todd Nienkerk. Every episode, we invite a guest to explore an aspect of content and to make predictions about the future of that content. If you create, manage, or publish content, welcome: This podcast is for you. Today we’re talking about content that speaks to you—literally. Our guest is Lauren Golembiewski, CEO and Co-Founder of Voxable, an agency that creates chatbots and voice interfaces. Welcome to The Future of Content, Lauren.

[Lauren] Hi, thanks for having me.

[Todd] Absolutely. We are very excited. So, what are conversational interfaces?

[Lauren] Yeah. So, a conversational interface is an interface whose main mode of interaction is through conversation, and that can happen with speaking and listening or typing a piece of text and messaging back to the machine—to the computer.

[Todd] So, the various types of conversational interfaces would be things like chatbots, voice interfaces like Alexa, and whatever Google is calling their Google thing these days.

[Lauren] Yeah, exactly. So, folks might have already interacted with a conversational interface if they went to their bank’s or cable company’s website to talk to customer service and a support chatbot handled their support questions. Or they might own a voice device like an Amazon Echo or a Google Nest Hub. The individual apps they interact with on those are called something different, but the devices themselves provide a voice interface interaction space. They’re also on TVs and being integrated into smaller devices, but the most well-known ones are from those big brands—Alexa, Google, Samsung, etc.

[Todd] How did you get involved in designing conversational interfaces?

[Lauren] So, I was lucky enough to start this company with my now husband. At the time, we were just really interested in emerging technology. We were device nerds. We acquired all of the latest home automation devices as they came out, and one that we had was a predecessor to the smart speaker devices that are mainstream now. It was called an Ubi, and it gave us the ability to create custom integrations—custom software on our own little server that we set up. So we could write voice software. We could create our own ways to turn on the TV, to turn on the lights, to change the color of the lights. And for us, it just really felt super magical and fun, and we wanted to keep working on it and improving our environment and be able to affect our environment in a way that we thought was just sci-fi. And then the Alexa Skills Kit came out. The Echo had been released, which was a much better version of the smart speaker that we already had, and Amazon opened up this developer environment called the Alexa Skills Kit—that’s their developer package that you can build a skill on top of. And when we saw that kind of institutional investment in the marketplace, we quit our jobs and were like, “Let’s dive in. Let’s get into this and become independent skill developers,” kind of how the iOS App Store had independent mobile app developers. And that’s really where we started. The business kind of pivoted because a marketplace hadn’t really materialized at that time. So we started selling services and teaching other companies how to build chat and voice experiences for their products, or extend them out to some of these new devices that were emerging.
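
For readers who have never built a skill, here is a minimal sketch of what a custom Alexa skill endpoint can look like, assuming a self-hosted HTTPS webhook rather than the hosted SDKs the Alexa Skills Kit also provides. The Skills Kit sends a JSON request describing what the user said, and the skill replies with the text the device should speak; the route, intent, slot, and wording below are illustrative, not Voxable’s code.

```python
# A minimal sketch of a custom Alexa skill endpoint, assuming a self-hosted
# HTTPS webhook rather than the hosted SDKs. The route, the "LightColorIntent"
# intent, its "Color" slot, and the reply text are illustrative only.
from flask import Flask, request, jsonify

app = Flask(__name__)

def speak(text, end_session=True):
    """Wrap plain text in the Alexa custom-skill response envelope."""
    return jsonify({
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    })

@app.route("/alexa", methods=["POST"])
def handle_alexa_request():
    req = request.get_json()["request"]

    if req["type"] == "LaunchRequest":
        # The user opened the skill without asking for anything specific.
        return speak("Hi! Ask me to change the light color.", end_session=False)

    if req["type"] == "IntentRequest" and req["intent"]["name"] == "LightColorIntent":
        color = req["intent"].get("slots", {}).get("Color", {}).get("value", "white")
        # A real skill would call a home-automation API here.
        return speak(f"Okay, setting the lights to {color}.")

    return speak("Sorry, I didn't catch that.")
```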

[Todd] I’d like to illustrate with an example of a typical project or app that you build, and one that comes to mind is something you built called Ghostbot. What does Ghostbot do?

[Lauren] Yeah. So, Ghostbot integrates with a service called Burner, which is a smartphone app that people download to get disposable, temporary phone numbers. If you don’t already have this, you may be wondering what people use disposable, temporary phone numbers for, and there’s actually a wide range of uses. Some people use them for small business side projects, something like hosting an Airbnb or managing a real estate business, where they don’t necessarily want to use their own phone number. And one interesting use case that we discovered with the team at Burner, who brought us on to help them create Ghostbot, was that a lot of folks use the Burner app to protect themselves while they’re dating online. So they may be on Tinder, or any one of the main dating apps, and they just don’t want to give out their real phone number. And it may or may not surprise you to learn that a lot of the users buying these disposable phone numbers for online dating were women, because women felt like they needed to protect themselves from some of the abuse that we then uncovered in our research that they encounter while dating online. It’s not just women who receive abuse in online dating, but we saw, by and large, that there were just heaps and heaps of it that piled on. We went to sites like Tinder Nightmares and another one called Bye Felipe, and they chronicle some of these really outrageous conversations that would happen between a woman and someone she had maybe gone on one date with, or a couple of dates with. So the idea for Ghostbot was creating an intermediary between the person who’s online dating and these dates they were going on: being able to essentially turn on a bot that would handle the conversation once they no longer wanted to speak to that person. They could offload the emotional burden of having to say they weren’t interested and then getting berated for their honesty, or of not addressing it and getting berated for that instead. There are a lot of these can’t-win situations, and we were really specifically trying to help the person who enabled the bot on their account. So, to explain how it works: I have a temporary phone number on my Burner account. I can enable Ghostbot for one of the contacts saved in my phone, so when that contact texts me, the message first goes to Ghostbot, and Ghostbot handles responding to the text. I can see the entire conversation ensuing in my account, but I’m essentially saying, “Hey, this bot is going to respond for me instead.”
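
As a rough illustration of the routing Lauren describes, a per-contact toggle that hands inbound texts to the bot while the owner can still read the thread, here is a minimal sketch. Burner’s API and Ghostbot’s internals aren’t detailed in the episode, so every name below is hypothetical.

```python
# A rough sketch of per-contact bot routing in the spirit of Ghostbot: when the
# bot is enabled for a contact, inbound texts go to the bot instead of the
# account owner, who can still read the whole thread. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

def ghostbot_reply(text: str) -> str:
    # Stand-in for the real bot, which classifies the message and picks a
    # deflecting response.
    return "Sorry, super busy with work stuff right now."

@dataclass
class BurnerLine:
    """One temporary phone number plus its per-contact Ghostbot settings."""
    owner: str
    ghostbot_enabled: dict = field(default_factory=dict)   # contact -> bool
    conversation_log: list = field(default_factory=list)

    def enable_ghostbot(self, contact: str) -> None:
        self.ghostbot_enabled[contact] = True

    def receive_text(self, contact: str, text: str) -> Optional[str]:
        """Route an inbound text: the bot replies if enabled, else it goes to the owner."""
        self.conversation_log.append((contact, text))       # owner can review everything
        if self.ghostbot_enabled.get(contact):
            reply = ghostbot_reply(text)
            self.conversation_log.append(("ghostbot", reply))
            return reply
        return None                                          # deliver normally

line = BurnerLine(owner="+15125550100")
line.enable_ghostbot("+15125550123")
print(line.receive_text("+15125550123", "Why haven't you texted me back?"))
```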

[Todd] Wow. So, you would unleash this conversational interface, this chatbot, into a potentially or actively hostile conversation just to make that person go away. You can ignore them, and then that way, they’re at least given this feedback loop of getting some kind of a response, even though there isn’t actually a human who is having to bear the brunt of receiving and responding to harassment, essentially.

[Lauren] Yeah, exactly.

[Todd] Interesting. So, it’s like a form of ghosting, except it’s active.

[Lauren] Yeah, exactly. I guess it’s active ghosting that you have automated away.

[Todd] Got it. So, this is a conversational interface that is using some form, probably, of machine learning, or something to at least generate some kind of semblance of a conversation, right?

[Lauren] Yeah. There are a bunch of different types of messages that we were anticipating for Ghostbot to handle, and a lot of those message types target different types of abuse directly. We are using a natural language understanding system to handle the messages, and it categorizes them into these various types of message intents. So, what is that person intending when they sent that message to you? They could be bragging, or— We have, I think, a booty call intent. I’m not sure if we had a more elegant name than that. [laughter] There were ones that were for obscenity and ones that were for negging and we really based—

[Todd] Oh, wow. That’s pretty subtle.

[Lauren] Yeah, yeah. We really based those intents around what we saw in—

[Todd] For those who aren’t familiar with the concept of negging: It’s usually a way that men with ill intent try to elicit a response from a woman by making subtle—what’s the word?—insults.

[Lauren] It’s what they intend as a compliment.

[Todd] Yeah. Like, “Your shoes would look really good on that other woman,” or something like that. Okay.

[Lauren] “Those are really great shoes. They’re just really way too big of a heel for you.”
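
The intent handling Lauren describes, classifying an inbound message as negging, obscenity, bragging, and so on, then serving a deflecting reply, could be sketched roughly like this. Ghostbot used a trained natural language understanding model for classification (the episode doesn’t name which one), so the keyword patterns and canned replies below are only illustrative stand-ins.

```python
import random
import re

# Illustrative keyword patterns per intent. Ghostbot used a natural language
# understanding model for this step; regexes here are only a stand-in.
INTENT_PATTERNS = {
    "negging":   [r"\btoo big\b", r"\bnot bad for\b", r"\bfor someone like you\b"],
    "obscenity": [r"\b(wtf|screw you)\b"],
    "bragging":  [r"\bi make\b", r"\bmy (car|boat|salary)\b"],
}

# Deflecting, low-engagement replies per intent (also illustrative).
REPLIES = {
    "negging":   ["Huh.", "Okay then."],
    "obscenity": ["Wow.", "Not cool."],
    "bragging":  ["Nice.", "Good for you."],
    "fallback":  ["Sorry, super busy lately.", "Can't talk right now."],
}

def classify_intent(message: str) -> str:
    """Map an inbound text to one of the known intents, or 'fallback'."""
    text = message.lower()
    for intent, patterns in INTENT_PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return intent
    return "fallback"

def ghostbot_reply(message: str) -> str:
    return random.choice(REPLIES[classify_intent(message)])

print(ghostbot_reply("Those are great shoes, just way too big of a heel for you"))
```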

[Todd] Oh, that’s nice. Cool. Great. So, this is an interesting form of content, because I assume that when you’re producing the content inside of Ghostbot or any conversational interface, there’s some element of having to anticipate one side of a conversation, another aspect of having to reply to that side of the conversation, and then there’s probably some stuff that needs to be dynamic and isn’t specifically scripted out, stuff that’s kind of floating in the machine learning ether. You have to think about all sides of this equation when you’re assembling content for a conversation.

[Lauren] Yeah, exactly. You’re really trying to anticipate— The first side is anticipating what people are going to say, what they’re going to throw at the interface, what messages are going to be received. And so for us, that meant basing it on real data like Bye Felipe, which is an Instagram account that shows screenshots of conversations that ensue both inside the dating apps and in the attacks people get into in other messaging channels like Facebook Messenger. And Bye Felipe is a play on “bye, Felicia,” the phrase from, I believe, the Friday movie.

[Todd] Friday. Mm-hmm.

[Lauren] Yeah. And so we looked at that real data and we were like, “Okay, this is the type of problem that we’re trying to solve with our interface first.” We weren’t trying to solve every single message. We weren’t trying to mediate conversation in a budding romance, because that would be a very different problem. We were trying to facilitate this conversation with someone you barely knew who had kind of pushed it too far, where you were like, “Okay, I’m done trying to think of the right response that’s going to either make my intent known or just get me out of this situation. So I’m just going to try to—”

[Todd] You just want to give them a thing that exhausts them.

[Lauren] Yeah. And then—

[Todd] But they can just talk on forever and ever and eventually wear themselves out.

[Lauren] Yes. Yeah. Hopefully get the hint and just not engage eventually.

[Todd] That’s fascinating. Did you ever get any metrics on the longest that somebody went talking to this bot?

[Lauren] Oh, I don’t know those off the top of my head. No. We didn’t see a lot of the metrics after it launched because they lived inside the app’s own analytics at the time. And we built this in 2016, so there weren’t the kind of base-level infrastructure analytics tools that you could just slap onto a bot project like there are today. It was just the app analytics, which kind of stayed in the land of our client. So we didn’t get a lot of visibility other than what they filtered through to us as we had a few cycles of iteration on it after the initial launch.

[Todd] Got it. Okay. Well, let’s talk a little bit about creating content for conversational interfaces—whether they’re chatbots or voice applications in general. So, it strikes me that most content production is a one-way channel. You’re writing an article, you’re taking a photograph, you’re making a video, you’re recording a podcast, and people are going to sit and read it, look at it, watch it, listen to it. It’s fairly one-way. A conversational interface, however, goes both ways. Somebody starts the conversation, the device responds, and it needs to understand the intent more than the literal words, and it has to provide value and understand context and all of these things. What are some things that you consider when trying to produce two-way content?

[Lauren] Yeah. It’s a complex problem, as you pointed out, because it goes both ways. So we do have to consider both sides of the conversation, even though we really only control one side, which is another constraint: you’re trying to anticipate the other side while also handling the side you can control in an elegant way. That first piece I was talking about with Ghostbot, getting real data about how customers actually talk, is one integral part of figuring out how to speak to customers. So, before you even get to “What am I going to write?” it’s “What are customers saying? What types of things am I going to have to anticipate? What kind of language will they be throwing at my interface, at my natural language understanding model, so that my bot can understand it and serve up a nice response based on that understanding?” So one side is a really strong focus on research, which I think is true for all interfaces, not just conversational interfaces: finding a way to uncover how users are speaking and gathering data around it. For us, a lot of times that comes down to qualitative, in-depth interviews with real customers, just getting a sense for how they speak and whether there’s a significant shift in language across the audience. For example, one segment of the audience might use one set of language and another segment might use a different set if the audience spans a big technical knowledge gap. So we start to think about all the things they’re saying and doing, and then it’s about writing the content that responds to those inputs. In doing that, there’s another set of considerations. Just like you can serve up an article, a song, or a podcast on a website, you can do those same things through a voice interface, but the delivery mechanism is a bit different. You can get people to a lot of different rich types of content, but the main interface itself doesn’t have a screen to rely on. So we’re not just throwing a video up on a screen and letting a user play it; we might be displaying it on their TV and automatically playing it in YouTube or their favorite video app, playing whatever piece of content they’re asking for from within that particular app. And the way they navigate there is different, so we have to think about a different structure to get users to the right piece of content. There could be a back and forth, a couple of different turns of conversation, that eventually gets users to the thing they’re looking for. So we think about that in the research, and then when we’re writing content, we may be creating conversational flows that help us represent that logic and dictate how the conversation plays out given these different inputs.
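
One common way to capture those “different turns of conversation” is to represent the flow as a simple graph, where each conversation state plus a recognized intent maps to the response the interface speaks and the next state. The sketch below is a generic illustration of that idea, not Voxable’s tooling; the states, intents, and wording are invented.

```python
# A generic sketch of a conversational flow: each (state, intent) pair maps to
# the response the interface speaks and the state the conversation moves to.
# States, intents, and wording are invented for illustration.
FLOW = {
    ("start", "play_content"):     ("Sure. A podcast, or music?",  "choose_type"),
    ("choose_type", "podcast"):    ("Which show would you like?",  "choose_show"),
    ("choose_type", "music"):      ("Okay, playing your music.",   "end"),
    ("choose_show", "named_show"): ("Playing the latest episode.", "end"),
}

FALLBACK = "Sorry, I didn't get that. You can ask for a podcast or for music."

def next_turn(state: str, intent: str):
    """Return (what the interface says, the new conversation state)."""
    return FLOW.get((state, intent), (FALLBACK, state))

# A short exchange that eventually gets the user to a piece of content.
state = "start"
for intent in ["play_content", "podcast", "named_show"]:
    response, state = next_turn(state, intent)
    print(response)
```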

[Todd] So, it sounds like there’s just a ton of research and probably practice, right? Do you create scripts and then do a back and forth with somebody, like, “All right. I’m going to say this and I might try to trip you up a little bit and say something in an unexpected way, like in a weird syntax, or just use words that maybe you’re not anticipating”? And then how is the system going to respond to that?

[Lauren] Yeah, exactly. We play out our scripts. We do that internally as a group, each of us getting a partner: one person plays the role of the user, and the other person plays the role of the bot. And reading through a script of an unexpected interaction is just a great way to tell if your content sounds right, if it’s audibly hitting the right way. One thing about voice interfaces in particular: when an interface is speaking to a user, it’s a different mode of interaction, with a pretty significant difference from viewing something on a screen. It demands attention. The user has to be listening in real time; they have to be paying attention at that moment to receive the content. That means every word you choose in that spoken response is really important, and it’s really important to be super brief in those responses and to order the words carefully so that a user wants to keep listening. I always give this example because I think it’s one a lot of people can relate to: I ask my smart assistant at home for the weather every day, and some days I’m really distracted and doing other things, so I ask for the weather but then immediately start doing something else and completely miss what the voice interface said. It responded to me, and it looks like a successful interaction, but I have to ask the same question again because I was just in my own head. And so there’s so many times—

[Todd] And you weren’t interacting with it as if it were a person that had some expectation of you paying attention. Yeah. Interesting.

[Lauren] Yeah, for sure. I think that it almost becomes a person that— Like a family member, in that you can ask a family member a question and immediately stop listening to them. I think there’s a similar social contract with some of these voice assistants that live in our homes, because I won’t say that I personify it as a family member—that that’s any type of real expectation—but people will start to think about it that way. There are some interesting social implications of that in the way that people talk about their virtual assistants.

[Todd] Yeah. Well, so, speaking of virtual assistants, let’s take a short break and when we come back, we’ll pick up with exactly that.

[Voiceover] The Future of Content is brought to you by Four Kitchens. Our team creates digital experiences that delight, scale, and deliver measurable results. Whether you need an accessibility audit, a dedicated support team, or a world-class digital experience platform, the Web Chefs have you covered. Four Kitchens: We make BIG websites.

[Todd] Welcome back to The Future of Content. Our guest today is Lauren Golembiewski, CEO and Co-Founder of Voxable, an agency that creates chatbots and voice interfaces. So, when we last left off, we were talking about voice devices, voice assistant devices, is that the correct term?

[Lauren] Yeah. There’s a lot of different terms out there in the realm. I think voice assistants or I say voice interfaces. There’s no standard way.

[Todd] Got it. Okay. Now these are popular, right? How quickly have these voice devices been adopted?

[Lauren] Incredibly quickly. In fact, they are the most quickly adopted device type ever. So they’ve surpassed mobile. They’ve just been quickly adopted, and I forget the exact stats because every year it’s more and more. I think it’s over 100 million devices in homes after this past holiday season. Holiday seasons are always the big marker for people receiving smart devices, and there are new devices created every year as well. So, the original devices were just smart speakers: standalone speakers that you’d plug in that had either Alexa or Google Assistant enabled on them. Now there are smart speakers that also have a display. They’re kind of mini iPads that have their own little operating system and don’t behave like something you would hold in your hand. And then there are also integrations into televisions, and I believe a few other integrations into devices like microwaves and dishwashers—

[Todd] And cars now, too, like Tesla and BMW. And I think Kia even has something.

[Lauren] Yes. Yeah. So Android Auto is being integrated, I think, into a lot of Nissan cars and some others— A lot of car manufacturers are partnering with the bigger platforms, either with Google’s Android Auto or with Alexa, which has its own auto platform. And then BMW and Mercedes-Benz are investing in this, too. There’s another company called SoundHound that produces virtual assistants, and they have focused a lot on the auto industry.

[Todd] So there’s this explosion. It’s the fastest-growing technology interface or device ever. There seems to be, though, some odd stagnation at the same time. A tremendous amount of device adoption but in terms of innovation in the space, creating new kinds of, I don’t know, value and use cases for voice, it seems like not a lot has happened and that it’s still just a lot of ask-and-answer kind of stuff. “Hey, when is this TV show on? Hey, what’s the weather? What time is it? Set a timer.” That kind of stuff. Is that true? And if so, why?

[Lauren] Yeah. There has been a stagnation. So, I think that there are lots of different factors in the market that cause this but I think it comes down to the way that the developer ecosystems have come to be. So, like I said, when we got started in this business, we kind of quit our jobs and thought that we were going to be kind of independent skill developers in the same way that iOS developers emerged out of the Apple iOS App Store, which made people— Made independent developers massive amounts of money. It produced a whole economy of people who could start businesses and run them and provide value to customers in this new marketplace that was the App Store. We thought a similar thing would happen in the voice space—either with Alexa or Google or any of the players coming into the voice space—and we didn’t see any of that kind of come to fruition. There was kind of an initial introduction of a voice operating system for some of these new smart speaker devices, which were really interesting and do a lot of really cool things. That’s kind of what gives you the ability to interact with an Alexa TV device, an Alexa smart speaker device, and ask those weather questions, get access to your calendar if you’re on Google, play music. But what a lot of the companies, the platforms ended up doing was focusing more on first-party integrations that kind of—

[Todd] What’s a first-party integration, as opposed to a third-party integration?

[Lauren] Yeah. So these would be integrations that those platform companies like Amazon or Google or Samsung are handling themselves. So, they would maybe partner directly with a company like Spotify, and then Google and Spotify would get together and build the integration into the Google Assistant. Or Spotify and Alexa would get together and build an integration into Alexa.

[Todd] And so a third-party would be somebody like you or me and we decided to make a skill or an app and upload it to a marketplace and now anybody can install it.

[Lauren] Yeah, exactly. And so if you think about the iPhone or the iOS marketplace, the first-party apps would be things like Apple Music or Apple Stocks. Those are Apple’s first-party apps. Now, what Apple doesn’t do is create first-party apps with other brands. They keep their first-party apps relegated to their core operating system functionality, and they don’t really broach building things for other brands that are then aligned with Apple in some other way. And that’s not what’s happening in voice. These big voice manufacturers are partnering with big brands to give them direct integration. So, when you ask Alexa to play music for you, by default it’s usually going to go to the Amazon Music environment. But if you ask to play a podcast, they need some way to serve up that podcast content. So I think they partnered with— I don’t know the exact podcast provider they partnered with, but they could partner with an app like Stitcher to provide that podcasting content. And so that would be a first-party integration that Alexa builds with this other brand to provide core services or core functionality that is also attached to another application or another brand. Third-party services kind of do the same thing: it’s a company integrating into another piece of technology and making it available via Alexa, but they’re doing that independently. They’re out on their own, using only the API calls that are made available to the public and whatever functionality and features are made available to public developers.

[Todd] And what you’re saying is that the stagnation in the voice industry is due largely to these large platforms focusing on first-party integrations and partnerships rather than fostering a community of developers and contributors who create new skills and functionality and upload them to a marketplace that the broader community can benefit from.

[Lauren] Yeah, absolutely. I think that particular aspect of focusing on first-party integrations is both caused by and affecting many other things that are part of this whole— I would call it a dry well of a marketplace. There should be water there, but it’s not there. There’s not a rich marketplace. There’s not the same kind of explosion of innovation and activity in voice that we saw in iOS. The concentration on these first-party integrations is doing a couple of things. One, it’s taking that work away: the direct work that a company like Spotify could have hired skilled independent designers and developers to build, or built with their own talent in-house. Then Spotify could make the decision about which platform is best for their product, and maybe not have some kind of inside deal with the platform companies. So I think that’s one factor, and it points to a bigger lack of investment in the developer community and developer marketplace generally. There’s not a ton of investment in making sure developers have the skills they need to build really high-quality Alexa Skills, Actions on Google—any of these applications. The focus was primarily on getting a lot of quantity into the Skills store and Actions on Google. It seemed like these big platforms wanted developers to create as many skills as possible so that they could say they had hundreds of thousands of skills. But what you don’t know when they say they have hundreds of thousands of skills is that a few thousand of them are probably just cat trivia or animal trivia because those were—

[Todd] They’re proofs of concept, and they’re easy to do because the kind of content that they create for it is: Just use a random function to pick one line of text and say it, right? There’s no real interaction there if you’re like, “Hey, tell me a cat joke.”

[Lauren] Exactly. They’re fairly shallow interactions. They’re based on templates that were created for developers or for, kind of, the developer evangelism within these various companies and the focus— And we went to hackathons that were hosted by these big companies, and we saw directly that they wanted people to turn out quantity and not necessarily quality. There wasn’t a lot of discussion on that and—

[Todd] They wanted to see, “We have 15,000 skills available to download for free.” Yeah.

[Lauren] Yeah, yeah. And I think it was really about selling devices initially and less about providing the software to back up that device experience. I think that has kind of suppressed the innovation, the big design thinking you’d expect to see come out of these new technologies, the cool experiences that could be created. And then those voice-based operating systems they created haven’t really innovated a ton in and of themselves. When the Alexa Skills Kit came out, I think in 2014 or 2015, we started with a certain set of affordances, and that set has not dramatically changed. They’ve tweaked and retooled what happens for users and what happens for developers, but they haven’t created big new pieces of functionality that would help users better interact with the applications that are available on their Alexa devices or their Google Assistant devices. So some of this just comes down to the tooling. It’s really hard for users to discover apps, and it’s really hard for them to understand what’s happening when they enable a skill, and that they’re actually being transferred over to another company’s application or domain. Some of that has made it difficult for consumers themselves to understand what they’re getting when they’re interacting with something like this.

[Todd] I’ve experienced a lot of that, where the state of the application is something that you, as a user of these devices, have to carry in your head. You have no physical indication. There’s no constant sound humming, no certain tune that tells you what skill you’re currently in. You just have to remember, “Oh, I entered the whatever Amazon store skill, so now it’s expecting me to do shopping-type stuff, and I have to get out of that to go do something else.” It’s kind of difficult to use in that way. There’s also a phenomenon I’ve noticed when I think hard about my own interactions with chatbots and voice devices: I spend a lot more time thinking about the best way to say something in a way I think the machine will understand, rather than just saying it the way I would say it to another person. I don’t know if it’s because I was raised in a generation of text-based games like Zork and King’s Quest, where you have to say things in a certain way, in a certain sentence structure, in order for anything to happen. Otherwise, it just says, “I don’t know what you’re talking about. That doesn’t make sense. What is a whatever?” But I find myself trying to anticipate the machine rather than allowing the machine to anticipate me. Is there a name for that dilemma in the conversational interface design space?

[Lauren] Yeah. So, that thing you’re trying to do when you anticipate is called “theory of mind.” You’re constructing a theory of the other party’s mind when you’re conversing with them. Right now I’m talking to you, and I’m trying to anticipate some of the things you’re going to say, and we’re both making assumptions about what each other knows and wants and needs. We don’t have to explicitly say every single thing that’s going on, because we’re assuming a lot of it. That kind of breaks down when you’re talking to a machine, because you don’t have a good theory for what a machine can actually know about you. So it’s really hard for a user to figure out, “What can I say?” because they have absolutely no basis for the construct this machine might have about them; they don’t know what it knows about them. That’s why people get this uncanny-valley feeling if the machine calls them by name and they didn’t say it. They’re using their Amazon account that has their name associated with it, and some of that ends up happening. And then— Yes.

[Todd] It suddenly becomes weirdly personal, and then like, “You’re just a bit of plastic and machinery that sits on my kitchen counter.” Like, “What are you doing?” Well, we’re almost out of time but I’d like to close with this. Where do you see— I know this is a big topic, but one aspect: Where do you see the future of voice content heading?

[Lauren] Yeah. I think that, generally, voice content is going to head in many amazing new directions and I’m really excited to see where it goes. But I think that right now, we’re at an inflection point where I personally, as an independent designer in this space and other independent designers and developers who are really interested in this space, are kind of looking for alternative ways to make these experiences happen and make them really interesting and innovative and to move the space forward. So, where I see this happening, and where I’m really interested in following the voice industry is how we create more open voice standards and open technologies that more people can interact with. And so we can create a voice operating system that is open that maybe we use via a browser or a custom mobile app, and that operating system starts to push the needle forward on how these other big platforms might be looking at voice. And so that hopefully everyone can advance, but that we can find new ways to get to consumers and new ways to create some of these innovative experiences through a more open, accessible voice industry.

[Todd] Well, thank you. This has been fascinating. I really appreciate your time, Lauren. Well, until next time everybody. Enjoy your content.

[Voiceover] You’ve been listening to The Future of Content, a podcast from the Web Chefs at Four Kitchens. Hosted by Todd Nienkerk, produced by PJ Hagerty. Theme song is PAFRATY by DJ Listo. Find us on Twitter at @FOCpodcast and get in touch by email at feature@fourkitchens.com.