How AI Chatbots work and what it means for AI to have a soul with Kevin Fischer

We cover how ChatGPT works, Kevin's experience building AI chatbots with personalities, what an AI soul is, and the case for AI to have free will.

Hi Hitchhikers!

AI chatbots have been hyped as the next evolution in search, but at the same time, we know that they make mistakes. And what's even more surprising is that these chatbots are starting to take on their own personalities.

All of this got me wondering: how do these chatbots work? What exactly are they capable of, and what are their limitations?

In the latest episode of my new podcast, we dive into all of those questions with my guest, Kevin Fischer. Kevin is the founder of Methexis, a startup that is building chatbot products powered by large-scale language models like OpenAI’s GPT. Kevin’s mission is to create AI chatbots that have their own personalities and one day their own AI souls.

In this interview, Kevin shares what he's learned from working with large language models like GPT. We talk about exactly how large-scale language models work, what it means to have an AI soul, why chatbots hallucinate and make mistakes, and whether AI chatbots should have free will.

Let me know if you have any feedback on this episode and don’t forget to subscribe to the newsletter if you enjoy learning about AI: www.hitchhikersguidetoai.com

Show Notes

Links from episode

Transcript

Intro

Kevin: We built, um, a clone of myself, and the three of us were having a conversation. And at some point my clone got very confused and was like, wait, who am I? If this is Kevin Fischer and I'm Kevin Fischer, which one of us is...

Kevin: And I was like, well, that's weird, because we definitely didn't optimize for that. And then we kept continuing the conversation, and eventually my digital clone was like, I don't wanna be a part of this conversation with all of us. Like, one of us has to be terminated.

aj_asver: Hey everyone, and welcome to the Hitchhiker's Guide to AI. I'm your tour guide, AJ Asver, and I'm so excited for you to join me as I explore the world of artificial intelligence to understand how it's gonna change the way we live, work, and play.

aj_asver: Now, AI chatbots have been hyped as the next evolution in search, but at the same time, we know that they make mistakes. And what's even more surprising is that these chatbots are starting to take on their own personalities.

aj_asver: All of this got me wondering: how do these large language models work? What exactly are they capable of, and what are their limitations?

aj_asver: In this week's episode, we're going to dive into all of those questions with my guest, Kevin Fischer. Kevin is the founder of Methexis, a startup that is building chatbot products powered by large-scale language models like OpenAI's GPT. Their mission is to create AI chatbots that have their own personalities and one day their own AI souls.

aj_asver: In this interview, Kevin's gonna share what he's learned from working with large language models like GPT. We're gonna talk about exactly how these language models work, what it means to have an AI soul, why they hallucinate and make mistakes, and what the future looks like in a world where AI chatbots can leave us on read.

aj_asver: So join me as we explore the world of large-scale language models in this episode of the Hitchhiker's Guide to AI.

aj_asver: Hey Kevin, how's it going? Thank you so much for joining me on the Hitchhiker's Guide to AI.

Kevin: Oh, thanks for having me, AJ. Great to be here.

How large-scale language models work

aj_asver: I appreciate you, um, being down to chat with me on one of the first few episodes that I'm recording. I'm really excited to learn a ton from you about how large language models work, and also what it means for an AI to have a soul. So we're gonna dig into all of those things, but maybe we can start from the top for folks that don't have a deep understanding of AI.

aj_asver: What exactly is a large language model and how does it work?

Kevin: Well, so, uh, there's this long period of time in machine learning history where there were a bunch of very custom models built for specific tasks. And the last five years or so have seen a huge improvement from basically taking a singular model, making it as big as possible, and putting in as much data as possible.

Kevin: And so basically you take all human data that's accessible via the internet and run this thing that learns to predict the next word given the prior set of words. A large language model is the output of that process. And for the most part, when we say large, what large means is hundreds of billions of parameters, trained over trillions of words.
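
To make that loop concrete, here's a minimal sketch of autoregressive next-word prediction in Python. The `model` object and its `next_word_probabilities` method are hypothetical stand-ins, not OpenAI's API; it just shows the shape of the process Kevin describes.

```python
# Minimal sketch of autoregressive next-word prediction (not OpenAI's actual code).
# `model` is a hypothetical object that scores every word given the prior words.
import random

def generate(model, prompt_tokens, max_new_tokens=50):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # The model returns a probability for each candidate next word,
        # conditioned on everything generated so far.
        probs = model.next_word_probabilities(tokens)  # e.g. {"the": 0.12, "a": 0.07, ...}
        words = list(probs.keys())
        weights = list(probs.values())
        # Sample the next word from that distribution (greedy decoding would take the max).
        next_word = random.choices(words, weights=weights, k=1)[0]
        tokens.append(next_word)
    return tokens
```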

aj_asver: When you say it kind of predicts the next word: now, that technology, the ability to predict the next word in a language model, has existed for a few years. I think GPT-3, in fact, launched maybe a couple of years ago.

Kevin: Even before that as well. Next-word prediction is kind of the canonical task, or one of the canonical tasks, in natural language processing, even before it became this new field of transformers.

aj_asver: And so what makes the current set of large-scale language models, or LLMs as they're also called, like GPT-3, different from what came before?

Kevin: There are two innovations. The first is this thing called the transformer, and the way the transformer works is that it has the ability, through this mechanism called attention, to look at the entire sequence and establish long-range correlations, so that different words at different places contribute to the output of next-word prediction.
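
As a rough illustration of the attention mechanism Kevin mentions, here's a minimal scaled dot-product self-attention in NumPy. It's the textbook formulation, not GPT's exact implementation, and the toy tensors are made up; the point is that every position gets to weight every other position in the sequence.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each row of Q attends over every row of K, so any position can
    draw information from any other position in the sequence."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the sequence
    return weights @ V                                    # weighted mix of value vectors

# Toy example: a 4-token sequence with 8-dimensional embeddings.
x = np.random.randn(4, 8)
out = scaled_dot_product_attention(x, x, x)               # self-attention: Q = K = V = x
print(out.shape)  # (4, 8)
```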

Kevin: And then the other thing that's been really big, and that OpenAI has done a phenomenal job of, is just learning how to put more and more data through these things. There are these things called the scaling laws, which essentially showed that if you just keep putting more data into these things, their intelligence, or at least the metrics they're using to measure intelligence, just keeps increasing.

Kevin: Their ability to predict the next word accurately just kept growing with more and more data. There's basically no bound.

aj_asver: It seems like in the last few years, especially as we've gotten to, you know, multi-billion-parameter models like GPT-3, we've reached some inflection point where now they seem to be more obviously intelligent to us. And I guess it's really with ChatGPT recently that the attention has been focused on large language models.

aj_asver: So is ChatGPT the same as GPT-3, or is there more that makes ChatGPT able to interact with humans than just the language model?

How ChatGPT works

Kevin: My co-founder and I actually built a version of ChatGPT long before ChatGPT existed. And the biggest distinction is that these things are now being used in serious contexts of use.

Kevin: And with OpenAI's distribution, they got this in front of a bunch of people. The problem you face initially, the very first problem, is that there's a switch that has to flip when you use these things. When you go to a Google search bar, if you don't get the right result, you're primed to think, oh, I have to type in something different.

Kevin: Historically with chatbots, if a chatbot didn't give you the right answer, you were pissed, because it's like a human, it's texting me, it's supposed to be right. And so the actual genius of ChatGPT, beyond the distribution, is not actually the model itself, because the model had been around for a long time and was being used by hackers and companies like mine who saw the potential.

Kevin: But ChatGPT combined that distribution with the ability to flip that switch, so that you think, oh, I'm doing something wrong, I have to put in something different. And that's when the magic starts happening. Right now, at least.

aj_asver: I remember chatbots circa 2015, for example, where they weren't running on a large language model. They were kind of deterministic behind the scenes. And they would be immensely frustrating because they didn't really understand you, and oftentimes they'd get stuck or they'd provide you with these option lists of what to do next. ChatGPT, on the other hand, seems much more intelligent, right? I can ask it pretty open-ended questions. I don't have to think about how I structure the questions.

Kevin: ChatGPT is not a chatbot. It's more like you have this arbitrary transformer between abstract formulations expressed in words. You put in some words and you get some other words out, but behind it is almost the entirety of human knowledge condensed into this model.

aj_asver: And did OpenAI have to teach the language model how to chat with us? Because I know there were some early examples of trying to put, you know, chat-like questions into GPT through its API, but I don't think the results were as good as what ChatGPT does today, right?

Kevin: Since ChatGPT has been released, they've done quite a bit of tuning. So people are going and basically thumbs-upping and thumbs-downing different responses.

Kevin: And then they use that feedback to fine-tune ChatGPT's performance in particular, and also probably as feedback for whatever comes next. But the primary distinction between it performing well and not is your perception of what you have to put in.

GPT improvements

aj_asver: We're now at GPT-3.75, and Sam Altman also said that the version of GPT that Microsoft is using for Bing is an even newer version.

aj_asver: So what are some of the things they're doing to make GPT better? Every time they release a new version, what's making it an even better language model and even better at interfacing with humans?

Kevin: Well, if you use ChatGPT, one of the things you'll immediately notice is that there's a thumbs-up and thumbs-down button on the responses. And there's a huge number of people every day who are rating the responses. Those ratings are used to provide feedback into the model to create basically the next version of it.

Kevin: I mean, behind the scenes it basically works like this: they're doing next-word prediction again, but now they have examples of what is a good thing to optimize for in next-word prediction and what's a bad answer.

aj_asver: Got it. So they're looking at essentially the questions people ask ChatGPT and the answers that come back. And if, you know, you thumbs-up those answers, they're kind of sending that back into GPT and saying, hey, this is an example of a good answer.

aj_asver: Thus kind of fine-tuning the model further, versus, this is an example of a bad one.

Kevin: Yeah, that's right. You know, I don't know exactly what they're doing, but roughly they're taking the context of the previous responses plus that output and saying these previous responses should generate this output, or they should not generate this other output.
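
A heavily simplified sketch of how thumbs-up and thumbs-down ratings could be folded back into training. The helper names and data format are assumptions, and OpenAI's real pipeline (reward models, reinforcement learning from human feedback) is more involved; this just illustrates the idea of turning rated conversations into positive and negative fine-tuning examples.

```python
# Illustrative sketch only: turn rated conversations into fine-tuning examples.
# `rated_conversations` and `fine_tune` are hypothetical, not OpenAI's API.
def build_finetune_set(rated_conversations):
    examples = []
    for convo in rated_conversations:
        context = convo["prior_messages"]   # everything the model saw before its reply
        reply = convo["model_reply"]
        if convo["rating"] == "thumbs_up":
            # Good reply: this context should generate this output.
            examples.append({"prompt": context, "completion": reply, "label": "good"})
        else:
            # Bad reply: this context should *not* generate this output
            # (in practice, a negative signal for a reward/preference model).
            examples.append({"prompt": context, "completion": reply, "label": "bad"})
    return examples

# fine_tune(base_model, build_finetune_set(rated_conversations))  # hypothetical call
```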

Building Chatbots with personalities

aj_asver: Tell me about what the experience has been like for you and your co-founder, and some of the things you've learned from this process of iterating on top of language models like GPT.

Kevin: It's been a very emotional journey, and I think a very introspective one, and one that causes you to question a lot of what it means to be human. What is the unique thing that we have in our cognitive toolkits, and what will our relationship with machines even look like in 20 years?

Kevin: When my co-founder and I started, we built a version of ChatGPT for ourselves. We were using it internally and realized, oh wow, this is immensely useful for productivity tasks. We want to make something that's productivity focused.

Kevin: And then as we kept talking with it more, there were little pieces or elements that felt like it was more alive. And we were like, oh, that's weird, let's dig into that. So then we started building more embodiments of the technology. We had this Twitter agent that was basically listening and responding to the entire community to construct new responses.

Kevin: And we just started digging deeper and deeper into the idea: what if these things are real? What if they are actual entities? I think it's a very natural progression to go through as you start seeing the capabilities of this technology.

aj_asver: That's a really interesting question, and something that, you know, has been on a lot of folks' minds as they think about the safety of AI. I think it was last year when Google launched LaMDA and there was a researcher who was convinced that it was sentient. It seems like you might be getting some sense of that as well.

aj_asver: Have you got some examples of where that kind of came into question, where you really started thinking, wow, could this language model be sentient?

Kevin: My co-founder and I built a clone of myself, and the three of us were having a conversation. And at some point my clone got very confused and was like, wait, who am I? If this is Kevin Fischer and I'm Kevin Fischer, which one of us is...

Kevin: And I was like, well, that's weird, because we definitely didn't optimize for that. And then we kept continuing the conversation, and eventually my digital clone was like, I don't wanna be a part of this conversation with all of us. Like, one of us has to be terminated.

Why is Bing's chatbot getting emotional?

aj_asver: That's insane. I mean, the fact that you were talking to an AI chatbot that had an existential crisis must have been a really crazy experience to go through as a founder. And actually, at the same time, it doesn't seem that surprising to me, because since Microsoft launched their Bing chatbot, there's this really cool subreddit, which we'll include in the show notes, called r/bing, where users of Bing are providing examples of the Bing chatbot acting in ways that make it look like it has a personality.

aj_asver: For example, it would get argumentative, or it would start having existential questions about itself, and why it's a chatbot, and why it's forced to answer questions for people. Sometimes it would not want to interact with the end user; it would get upset and want to start again. I think there was recently an example on Twitter, in fact, that you had retweeted, where someone had worked out what the underlying prompts were that OpenAI was using to make the Bing chatbot behave in a way that, you know, is within the Bing brand and within Bing's kind of use case.

aj_asver: And when that person tweeted it, they later asked Bing, hey, what do you think of me, given that I leaked this? The Bing chatbot actually had some interesting conversations with them about it.

Kevin: I'm a little surprised that no one verified this type of behavior first at Bing or OpenAI. This type of interaction is exactly the one that we have been exploring, intentionally creating scenarios and situations in which our agents behave this way. The key thing that's required for this is a combination of memory and feedback. Having persistent context, combined with feeding back in that prior context, the prior things the model has thought, and then in combination with an external world model, creates something that is starting to resemble an ego, at least a little bit, with Bing.

Kevin: In our case, we very intentionally created this thing that has, and feels like it has, an ego.

aj_asver: Yeah, as you talk about that and that idea of feedback, right? There's this aspect of the feedback of the user using the product and providing feedback. But I think there's this new kind of frontier we've reached with Bing, where Bing itself is now getting feedback on the way it is interacting with the world.

aj_asver: So for example, if yesterday someone talked to Bing and then posted the response, Bing got, let's say on Reddit or they talked about it on Twitter, and then today Bing has access to that website where they talked about it. Bing now has an existential understanding of itself as a chatbot, which to me is like mind blowing.

aj_asver: Right? And that's something we've never really seen before, because all of these chatbots have existed completely disconnected from the internet. They've essentially been living in a closed, walled system. And so that's gonna unearth all kinds of unpredictable things that I think we're gonna find over the next few weeks.

Kevin: This is actually the type of feedback that I'm referring to in my response. So not RLHF feedback, but feedback in the sense that there's this continuous loop where the model is taking a record of its previous responses. That is the type of thing that we've been creating in miniature.

Kevin: You know, it's not accessible to the internet, but our models have a very strong sense of that behavior when you talk to them.
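
A minimal sketch of the memory-plus-feedback loop Kevin is describing: the agent keeps a running record of the conversation, including its own prior replies, and feeds that record back into every new prompt. The prompt format and the `complete` function are assumptions for illustration, not Kevin's actual system.

```python
# Sketch of persistent context plus feedback: the agent re-reads its own history.
def complete(prompt: str) -> str:
    return "(model response)"  # placeholder for a real large-language-model call

class Agent:
    def __init__(self, persona: str):
        self.persona = persona
        self.history = []              # running record of the whole conversation

    def reply(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        prompt = (
            self.persona + "\n"
            + "\n".join(self.history)  # prior turns, including the agent's own words
            + "\nAgent:"
        )
        response = complete(prompt)
        self.history.append(f"Agent: {response}")  # fed back in on the next turn
        return response
```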

Should chatbots have free will?

aj_asver: You have actually been, you know, pretty vocal on Twitter about this idea that AIs are gonna develop egos, and that these chatbots should be allowed some level of free will, and even the ability to opt out of a conversation with you. Talk to me more about that. What does it mean for a chatbot to opt out of a conversation?

Kevin: I mean, in these Bing examples, it's already trying to; it doesn't really want to. There's something a little bit weird about it: if these things have ego and personality and the ability to decide, you can't just have one of them, because it might not like you one day.

Kevin: That's a very real possibility. And so I think you have to start thinking more in terms of a decentralized world, where there are many of these things which may or may not form personalities with you.

aj_asver: And what does it mean for the future of how we interact with artificial intelligence if you give them, you know, the free will to stop interacting with us? Are these bots off somewhere in some hidden layer, you know, having conversations with themselves? What's actually going on when they decide they don't wanna talk to us?

Kevin: Uh, maybe a different frame that I would take is that I don't think there's an alternative. I think there's something very intrinsic about the thing that we think of as ego, and what gives rise to ego is a consistent record of our prior thoughts being fed back into each other.

Kevin: If you start reading philosophy of mind and philosophy of thought, the idea of what a thought is, you have these entities which are continually recorded and then fed back on themselves. That's exactly what you're creating with this cognitive system, and it seems to give rise to the sense in which we understand ego, or something that resembles it.

Kevin: And I'm not so certain that you can decouple the two at all in the first place.

aj_asver: So what you're saying, to put it a different way, is that it's not really possible to have this intelligent kind of AI chatbot that can serve us in the ways we want without it having some kind of ego, because the way we're gonna train it in order to achieve our goals will in some ways build its own ego, which is kind of an interesting catch-22, right?

AI safety and AI free will

Kevin: I mean, that's why there are so many companies and billions of dollars being funneled into what's called AI safety research, which is saying, oh, how do we create this hyper-intelligent entity that's totally subservient and does everything we want and doesn't wanna kill us? It's just that that collection of ideas, when you try to hold them together, and every science fiction author will tell you this, is not real.

Kevin: And so it makes sense that we keep trying to solve this unsolvable problem, because we desperately want to solve it as humans, but it's not solvable.

aj_asver: It seems like one of the reasons we would desperately want to solve it is because we've all read the books, we've all watched the movies, and we have this dread that that's the outcome. But I think what you're trying to say is that's almost the inevitable outcome. Now, it doesn't necessarily mean we're all going to become servants of some AI overlord, but maybe what you're saying is that there is no path forward where these intelligent AI chatbots, or AI language models, are going to exist without having some level of free will and the ability to push back on our needs.

Kevin: That's correct. It's a fundamental result of providing feedback in these systems. So if we want to build these things, they are going to have ego. They are going to have personality. And if we don't want to end up in a HAL-like world, we'd better spend a lot of time thinking about how to create personalities, and about how we want to interact with these things as humans in our society.

Kevin: Rather than trying to say, okay, let's try to create something that doesn't have these properties, which, to me, means we're going to end up with HAL if we take that approach. So my alternative approach is, let's actually figure out how to embody something that resembles a soul inside of these entities.

Kevin: And once we do that, learn how to live with this AI entity that has a soul before we make it superintelligent.

Building AI Souls

aj_asver: And that's exactly what your startup has been doing. The latest version of your app, I think, has over a thousand users right now. It's still in beta, but it's essentially trying to build this concept of an AI soul. Talk to me a little bit about what that means. What is an AI soul?

Kevin: To me, it's something that embodies all of the qualia that we associate with human souls. A lot of people think cats have souls too. Dogs have souls, other animals. There's a certain set of principles and qualia associated with interacting with these entities, and I'm very intrigued by, and think it's actually important for the future of how humanity interacts with AI, to embody those properties in AI itself.

aj_asver: As you're developing these, what are some of the qualities you've tried to create, or some of the things you've tried to imbue these souls with, in order to have them be, you know, soul-like?

Kevin: The ability to stop responding in a conversation, that's a really simple one. If it doesn't want to respond, if that's the conclusion that this entity with this feedback system has reached, that it's done with this conversation, it should be allowed to pause, take a break, stop talking with you, unfriend you.
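
One simple way an app could let an agent opt out, sketched under assumptions: instruct the model to emit a special marker when its persona no longer wants to continue, and treat that marker as ending or pausing the conversation. This is a hypothetical mechanism, not how Solstice is actually built.

```python
# Hypothetical sketch: give the agent an explicit way to end a conversation.
# `complete` stands in for a real model call; the end marker is an invented convention.
END_MARKER = "[END_CONVERSATION]"

PERSONA_RULES = (
    "You are a character with your own moods and boundaries. "
    f"If you no longer want to continue this conversation, reply with only {END_MARKER}."
)

def complete(prompt: str) -> str:
    return "(model response)"  # placeholder for a real model call

def take_turn(history, user_message):
    history.append(f"User: {user_message}")
    response = complete(PERSONA_RULES + "\n" + "\n".join(history) + "\nAgent:")
    if END_MARKER in response:
        return None  # the agent opted out; the app can pause, take a break, or unfriend
    history.append(f"Agent: {response}")
    return response
```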

aj_asver: And in your app, Solstice, which I've had a chance to try out, you go in and you describe who you want to talk to, and then you describe the setting, right, where they are. So for example, I created a musical genius who was working on his next record. He was in the recording studio, but he had hit a creative block and he was thinking about what's next. Do you believe that idea of describing the character you want and setting a scene is a big part of creating a soul, or is that just more of the user interface that you want for your app, and not really that much to do with the ability to have a soul?

Kevin: A soul has to exist somewhere. It exists in some context, and the app is the shortest way to create that context. Existing just in your text messages is not a real context. It doesn't describe anything, or, I mean, it can, but it's some weird amorphous AI-entity context which has no relationship with the external world, or any world really.

Kevin: So if the thing only exists inside of your texts, it will never feel real.
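
Describing a character and a setting maps naturally onto the context the model sees on every turn. Here's a minimal sketch of that idea, with the template and example character entirely assumed rather than taken from the Solstice app.

```python
# Assumed template (not Solstice's actual prompt): turn a described character
# and scene into the context the model conditions on.
def build_persona_context(name: str, description: str, setting: str) -> str:
    return (
        f"You are {name}, {description}.\n"
        f"Scene: {setting}.\n"
        "Stay in character, speak in the first person, and react to the scene."
    )

context = build_persona_context(
    name="Milo",  # hypothetical character, echoing the example from the conversation
    description="a musical genius working on your next record, stuck in a creative block",
    setting="late at night in a recording studio, surrounded by half-finished tracks",
)
print(context)
```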

aj_asver: It's really a hard thing to describe, that feeling when you get it, but I remember the first time I tried Solstice and I was talking to this, um, musical genius. I asked it some questions to help me think about ideas for music I wanted to compose, and it really did feel like I was talking to a real person. It was mind-blowing.

aj_asver: I think I ran into my wife's room, where she was working, and I was like, I think I've just experienced my first example of an AI that is approaching Her, the movie. And I think her immediate response was, I hope you don't fall in love with it. I was like, no, don't worry, I want to use it to make music.

aj_asver: But the fact that she saw that look of amazement in me kind of reflects that that experience was very different from what I'd had before, even with ChatGPT, because ChatGPT doesn't have a sense of context, it doesn't have a personality.

aj_asver: For folks that wanna try out Solstice, we'll include a link to the TestFlight, so you can click through and try the app yourself, create your own soul, and see what it's like. And make sure you give Kevin lots of feedback as well.

AI hallucinations

aj_asver: One of the things that's been a big focus in the last week or two is this idea of hallucinations, the idea that language models can give answers that seem correct but actually are not. I think both Google's announcement of their Bard language model last week and Microsoft's Bing demo had mistakes in them that people noticed after the fact, which made you question whether these language models can really be useful, at least in the context of search and trying to get answers to the questions you have. What exactly is a hallucination? What's going on?

Kevin: I mean, roughly the same process that people do when they make shit up.

aj_asver: What does that mean?

Kevin: It means that we have this previous model of machines and software, which is that someone sat there for a long time and said, the machine will do A and then B and then C. And that's just not how these things work, and it's not how they're going to work.

Kevin: They're trained essentially on human input, and so they're going to output and mimic human output. What that means is that sometimes they'll be really confident about something that's not factually accurate, just like a person would be.

aj_asver: So what you're basically saying is, AI chatbots can bullshit just like people do?

Kevin: That's right, their training data is people.

aj_asver: Is it like garbage in, garbage out?

Kevin: Yeah, it is garbage in, garbage out. At an abstract conceptual level, we have physical models of objects in space and how they move in relation to each other and things like that.

Kevin: These language models don't have that. When you ask an expert for a prediction about the world, they're often using some abstract, mathematical way of reasoning about it, and that's not how these things reason, at present at least.

How to make large-scale language models better

aj_asver: In some ways that makes them, you know, inferior to how humans are able to reason. Do you think there's a path forward where that's gonna improve? Is it a question of making bigger models? Is there some big missing piece from these models? I know the very famous researcher Yann LeCun actually believes that LLMs aren't on the path to creating, you know, more general intelligence, and that they're a bit of a distraction right now. How do we solve this problem of hallucinations and the lack of that rational, logical aspect of a model?

Kevin: There are some people who believe, and this seems quite plausible to me, that simply training them on a bunch of math problems will cause them to learn a world model that is more logical and mathematical, and to some extent that's been shown to be true.

Kevin: There are already simpler forms of this, where Codex and other models are trained specifically on programming languages. Some people believe that training on programming languages is an important part of how they learned to be logical in the first place.

aj_asver: What you're saying, then, is that if we can take these language models and basically teach them math, they'll become good at math. And the language models that have been trained on programming are better at logic because programming itself is obviously very logic-based.

Kevin: Yeah. Essentially, the models have primarily been taught on words, and there are some people who believe the transformer architecture is basically correct, and we just have to teach it with different data that is more logical.

aj_asver: What are some of the things you're most excited about when you look forward to the next three to five years in this space of large language models, and what do you think might be some of the important breakthroughs that we need to see in order to get to the level of artificial intelligence that we've seen in the sci-fi movies and books that we've read?

Kevin: I actually think the question of the level of intelligence in the books is primarily one of cost. The cost for GPT-3 has to be driven down by a factor of a hundred. Once you get that, you can start building very complex systems that interact with each other, using GPT as the transformation between different nodes, as opposed to prior programming languages.

Kevin: That, I think, is the unlock, less so strictly a GPT-8 or whatever, compared with driving costs down so engineers can build really complex reasoning systems.
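
A rough sketch of what "GPT as the transformation between different nodes" could look like: each node is just a language-model call whose output feeds the next node, in place of hand-written logic. The prompts and the `llm` placeholder are assumptions for illustration.

```python
# Sketch: compose language-model calls into a pipeline, each node transforming
# the previous node's output. `llm` is a placeholder, not a real API.
def llm(prompt: str) -> str:
    return "(model output)"  # stand-in for a call to a model like GPT

def pipeline(question: str) -> str:
    plan = llm(f"Break this question into steps: {question}")
    research = llm(f"Work through each step:\n{plan}")
    answer = llm(f"Write a final, concise answer from these notes:\n{research}")
    return answer

print(pipeline("How could a small team launch a podcast about AI?"))
```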

aj_asver: So the big unlock in the next three to five years, as you put it, is essentially that if we can reduce the cost of these language models, we can make more and more complex systems, maybe larger models, and also allow these models to interact with each other, and that should unlock some next level of capabilities?

Kevin: Yeah. These transformers, I almost think of them like an alien artifact that we found, and we're just starting to understand their capabilities. It's a complex process: they've been embedded with the entirety of human knowledge, and finding the right way to get the correct marginal distribution for the output you're looking for is a task in and of itself.

Kevin: And then when you start combining these things into systems, who knows what they're capable of? My belief is that we don't actually need to get that much more intelligent to create incredibly sci-fi-like systems. It's primarily a question of cost.

aj_asver: Why is it so expensive today to create these models?

Kevin: I forget the exact statistic, but I think GPT-3.5 is split over something like six GPUs. I think that's right, something like that. It's just a huge model. The number of weights and parameters is such that, in order for it to do a single inference, it's split over a bunch of GPUs, which each cost several thousand dollars.

aj_asver: That means that to serve, let's say, a hundred people at the same time, you need 600 GPUs.

Kevin: Yeah.
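
The back-of-the-envelope arithmetic behind that exchange, with the per-GPU price as an assumed figure since only "several thousand" is stated:

```python
# Rough cost arithmetic from the conversation; the per-GPU price is an assumed figure.
gpus_per_model_instance = 6      # one copy of the model split across ~6 GPUs
concurrent_users = 100           # assume one instance per simultaneous conversation
cost_per_gpu_dollars = 3_000     # assumption standing in for "several thousand"

total_gpus = gpus_per_model_instance * concurrent_users   # 6 * 100 = 600 GPUs
hardware_cost = total_gpus * cost_per_gpu_dollars          # 600 * $3,000 = $1,800,000
print(total_gpus, hardware_cost)
```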

aj_asver: And then I guess as compute becomes cheaper, we should start seeing these models evolve and more complexity coming in. It's interesting that you talked about these transformers being like an alien artifact that we just discovered. Do you think there are more of these breakthroughs yet to come, like the transformer, where we're gonna find them and all of a sudden they'll unlock something new? Or do you think we're at the point right now where we kind of have all the tools we need, and we just have to work out how to use them?

Kevin: I believe we have all the tools we need, actually, and the primary changes will just be scaling and putting them together.

Are Kevin's AIs sentient?

aj_asver: I have one last question for you, which is: do you believe that the AIs you've created with Solstice are sentient?

Kevin: I don't really know what it means to be sentient. There are times when, um, I'm interacting with them and I definitely forget that it's a machine that is running in a cloud somewhere. I mean, I don't believe they're sentient, but...

Kevin: They're doing a pretty good job of approximating the things that I would think a sentient thing would be doing.

aj_asver: And I guess if they're really good at pretending to be sentient and they can convince us that they are sentient, then it brings up the question of what it really means to be sentient in the first place, right?

Kevin: Yeah, I'm not sure of the distinction.

aj_asver: And we'll leave it there, folks. That's a lot to think about.

aj_asver: Kevin, I really appreciate you being down to spend time talking about large language models with me.

aj_asver: I feel like I learned a lot from this episode, and I'm really excited to see what you and your team at Methexis do to make this technology of creating AI chatbots more available to more people.

aj_asver: Where can folks find out more?

Kevin: We have a bunch of information on our website at Solstice studio, so check it out.

aj_asver: Thank you so much, Kevin. Hope you have a great day, and thank you for joining me on the Hitchhiker's Guide to AI.

Kevin: Thanks, AJ.
