I’m excited to share this latest podcast episode, where I interview Charlie Newark-French, CEO of Hyperscience, which provides AI-powered automation solutions for enterprise customers. This is a must-listen if you are either a founder considering starting an AI startup for the enterprise or an enterprise leader thinking about investing in AI.
Charlie has a background in economics, management, and investing. Prior to Hyperscience, he was a late-stage venture investor and management consultant, so he also has some really interesting views on how AI will impact industry, employment, and society in the future.
In this podcast, Charlie and I talk about how Hyperscience uses machine learning to automate document collection and data extraction in legacy industries like banking and insurance. We discuss how the latest large-scale language models like GPT-4 can be leveraged in the enterprise, and he shares his thoughts on the future of work, where every employee is augmented by AI. We also touch on how AI startups should approach solving problems in the enterprise space and how enterprise buyers think about investing in AI and measuring ROI.
Finally, I get Charlie’s perspective on Artificial General Intelligence or AGI, how it might change our future, and the responsibility of governments to prepare us for this future.
I hope you enjoy the episode!
Please don’t forget to subscribe @ http://hitchhikersguidetoai.com
Charlie on Linkedin: https://www.linkedin.com/in/charlienewarkfrench/
New York Times article on automation: https://www.nytimes.com/2022/10/07/opinion/machines-ai-employment.html?smid=nytcore-ios-share
09:41 Legacy businesses
11:13 Augmenting employees with AI
15:48 Tips for founders thinking about AI for enterprise
20:34 Tips for enterprise execs considering AI
23:49 Artificial General Intelligence
29:41 AI Agents Everywhere
32:12 The future of society with AI
37:44 Closing remarks
HGAI: Charlie Newark-French
AJ Asver: Hey everyone, and welcome to the Hitchhiker's Guide to AI. I am so happy for you to join me for this episode. The Hitchhiker's Guide to AI is a podcast where I explore the world of artificial intelligence and help you understand how it's gonna change the way we live, work, and play. Now for today's episode, I'm really excited to be joined by a friend of mine, Charlie Newark-French.
AJ Asver: Charlie is the CEO of Hyperscience, a company that is working to bring AI into the enterprise. Now, Charlie's gonna talk a lot about what Hyperscience is and what they do, but what I'm really excited to hear is Charlie's opinions on how he sees automation impacting our future.
AJ Asver: Both economically, but also as a society. As you've seen with the recent launch of GPT-4 and all the progress that's happening in AI, there's a lot of questions around what this means for everyday knowledge workers and what it means for jobs in the future. And Charlie has some really interesting ideas about this, and he's been sharing a lot of them on his LinkedIn, and I've been really excited to finally get him on the show so we can talk. Charlie also has a background in economics and management. He studied an MBA at Harvard and previously was at McKinsey, and so he has a ton of experience thinking about industry as a whole, enterprise and economics, and how these kinds of technology waves can impact us as a society.
AJ Asver: If you are excited to hear about how AI is gonna impact our economy, our society, and how automation is gonna change the way we work, then you are gonna love this episode of The Hitchhiker's Guide to AI.
AJ Asver: Hey Charlie, so great to have you on the podcast. Thank you so much for joining me.
Charlie: AJ, thank you for having me. I'm excited to discuss everything you just talked about.
AJ Asver: Maybe to start off, one of the things I'm really excited to understand is, how did you end up at Hyperscience, and what exactly do they do?
Charlie: Yeah, Hyperscience was founded in 2014. It was founded by three machine learning engineers, so we've been an ML company for a long time. My background before Hyperscience was in late-stage investing. Had sort of the full spectrum of outcomes there.
Charlie: Some wildly successful IPOs, some strategic acquisitions, and then a lot of miserable, sleepless nights on some of the other areas. I found Hyperscience, and was incredibly impressed with their ability to take cutting-edge technology and apply it to real-world problems. We use machine vision, we use large language models, and we use natural language processing, and we use those technologies to speed up back-office processes.
Charlie: The best examples here are loan origination, insurance claims processing, customer onboarding. These are sort of miserably long processes, a lot of manual steps, and we speed those up. With some partners, taking it down from about 15 days to four hours.
Charlie: So all of that data that's flowing in of, this is who I am, this is what's happened, this is the supporting evidence. We ingest that. It might be an email, it might be a document. It's some human-readable data. We ingest that, we process it, and then ultimately the claims administrator can say, yes, pay out this claim, or no, there's something off.
AJ Asver: Yeah, so what you guys are doing essentially is, you had folks that were previously looking at these documents, assessing these documents, maybe extracting the data out of these forms, maybe it was emails, and entering those into some database, right? And then a decision was made. And now your technology's basically automating that. It's kind of sucking up all these documents and basically extracting all that information, helping make those decisions. My understanding is that with machine learning, what you're really doing is you've kind of trained on this data set, right, in a supervised way, which means you've said, like, this is what good looks like.
AJ Asver: This is what, you know, extracting data from this form looks like; now we're gonna teach this machine learning algorithm how to do it itself. Now, what I found really interesting is that that was kind of where we made the most advancements in AI over the last decade, I would say.
AJ Asver: Right? It's like these deeper and deeper neural networks. They could do machine learning in very supervised ways. But what's recently happened with large language models especially is that we've now got this, like, general-purpose AI. GPT-4, for example, just launched, and there was an amazing demo where I think the CTO of OpenAI basically sketched, on like the back of a napkin, a mockup for a website, and then he put it into GPT and it was able to, like, make the code for it.
AJ Asver: Right. So when you think about a general-purpose large language model like that, compared to the machine learning y'all are using, do you consider that to be a tool that you'll eventually use? Do you think it's kind of a threat to, like, the companies that have spent the last, you know, 5, 6, 7 years, decades maybe, kind of perfecting these machine learning tools? Or, you know, is it something that's gonna be more like different use cases that won't be used, you know, by your customers?
Charlie: OpenAI, ChatGPT, GPT-4, the technology you're speaking about, has really had two fundamental impacts. There's been the technology. It's just very, very cutting-edge, advanced technology. And then you've got the adoption side of it. And I think both sides are as interesting as each other.
Charlie: On the adoption side, I sort of like to compare it to the iPhone: there was a lot of cutting-edge technology, but what they did is they made that technology incredibly easy to use. There's a few things that OpenAI has done here that have been insanely impressive. First, they use human language. Humans will always assign a higher level of intelligence to something that speaks in its language.
Charlie: The other thing, it's a very small thing, but I love the way that it streams answers, so it doesn't have a little loading sign that goes around and dumps an answer on you. It's almost like it's communicating with you. It allows you to read in real time, and it feels more like a conversation.
Charlie: Obviously the APIs have been a huge addition. It's just super easy to use, so that's been one big step forward. But it's a large language model. It's a chatbot. I don't wanna underestimate the impact of that technology, but my thoughts are AI will be everywhere. It's gonna be pervasive in every single thing we do.
Charlie: And I hope that chatbots and large language models aren't the limit of AI. I'd sort of like to compare chatbots and large language models to search. The internet is this huge thing, and search is one giant use case, such that if you ask people, what is the internet? They think it's Google. And that's the sort of way I think this will play out with AI: whichever large language model and chatbot wins will be the Google of that world, which at the moment appears very clearly to be OpenAI.
Charlie: But there's some examples of stuff that, certainly right now, that approach wouldn't solve. I'll give you a few, but this list is tens of thousands of use cases long. We spoke about autonomous vehicles earlier; I suspect LLMs are not the approach for that. Physical robotics. Healthcare, detecting radiology diseases. Fraud detection.
Charlie: I'm sure if you put, like, a fake check in front of GPT-4 right now that was written on a napkin, it might be able to say, okay, this is what the word fraud means, this is what a check looks like. But you've got substantially more advanced AI out there right now that looks at: what was the exact check that Citibank had in 2021?
Charlie: Does this link up with other patterns that should be happening with this kind of transaction? And so I think that you are gonna have more dedicated solutions out there than just sort of one chat interface to rule them all, would be my guess. Yeah, Hyperscience. There are things that ChatGPT does, or GPT-4 does, right now that we do.
Charlie: Machine vision is one; that's an addition that appears to be working alongside their large language model. So they're combining different technologies versus just a large language model, is my guess. I obviously don't have inside knowledge. But we build a platform here at Hyperscience that builds workflows, that enriches data, validates data, compares data, looks at data that's historically come into an organization that might not be accessible to sort of public chatbots or large language models.
Charlie: I think the question that you sort of asked at the beginning: could we be using ChatGPT or GPT-4? Absolutely. And I think that a lot of startups could. But I suspect that what you'll see here is a lot of the startups that spin up leveraging this and building something far greater from a user experience for a very specific use case, versus OpenAI solving all the sort of various small problems along the way, if that makes sense.
AJ Asver: Yeah, I think that makes a lot of sense. And it's one thing I've been thinking a lot about. I actually wrote a blog post recently about this, kind of how these foundational models are gonna become more and more commoditized, and it's gonna create this massive influx of products built on top of them.
AJ Asver: What I find really interesting is that you know, GPT, you can actually use that transformer for a lot of different things that aren't necessarily just a chatbot.
AJ Asver: Right. So you mentioned the fraud use case. If you send a bunch of patterns of fraud to a large transformer, its ability to actually remember things makes it very good at identifying fraud. And in fact, I was talking to a friend that worked at Coinbase on their most recent fraud control mechanism.
AJ Asver: They went from kind of linear regression models to deep learning models, to eventually actually using transformers, and it was far more effective. So I guess coming back to the question: do you see a world where, instead of building these focused machine learning models for particular use cases, like, you know, ingesting documents or maybe making sense of data and extracting that data and tabulating it into a database, you might one day end up actually just having a general pre-trained transformer that you are then essentially fine-tuning with one-shot kind of tuning? Like, this is how you extract a document for one of our clients, this is how you, you know, organize this information into, like, loan data. Is that a world we could move into? That's probably different from where we are today, and maybe a different world for Hyperscience too.
Charlie: Look, it would be a very different world. I think the next five, 10 years are leveraging the kind of technology that OpenAI is building, and maybe that specific technology, as you sort of say, as some commoditized layer, and building workflows on top of that. I'll give you just the harsh reality of what the world looks like out there.
Charlie: Right? So this isn't just a single use case where I go and type something in as a consumer on the internet. At a bank in the UK, they have a piece of COBOL that was written in the 1960s that is still live in their mortgage processing.
AJ Asver: Wow.
Charlie: Rolling out, even just from a compliance level, any change to that mortgage processing that isn't in piecemeal fashion, that doesn't think about the implications, that doesn't think about customer interactions on a week timeframe or a year or three-year timeframe, is just not dealing with the reality of the situation on the ground.
Charlie: These are complex processes. People get fired if you take a mortgage processing system down for minutes. And they're complex. So do I think that's a possibility in the future? It's absolutely possible. I think the best use of GPT-4 right now is to go and build the extensive workflows that require a little bit of old-fashioned AI as well as cutting-edge AI, to have an end-to-end solution for a specific problem, versus assuming that we're ever gonna get something where you just say, okay, I'd like to know, should I give this customer this mortgage?
Charlie: And you get an answer back. To answer that question is still a very complex process.
Augmenting employees with AI
AJ Asver: Yeah, and I think we, especially in the technology industry, especially someone like me that spends so much time thinking about, talking about, reading about AI, kind of forget that a lot of these legacy businesses can't move as fast as we think. I mean, we see, like, Microsoft moving quickly and Slack moving quickly, for example.
AJ Asver: But those are all, like, very software-focused, consumery businesses that aren't, you know, necessarily touching, like, hard cash and stuff like that, where there's a lot more compliance and regulations. So that makes a lot of sense. So then what we really are thinking about is, like, you kind of have humans that can be, as you've put it before in some of your predictions around AI, augmented, right?
AJ Asver: This idea of, like, an augmented employee that can use AI to help them get things done, but we're not necessarily replacing them straight away. Like, talk to me about what you see as the future of kind of augmented employees, and kind of co-pilots, as they're also called.
Charlie: Totally. So the augmented employee is a phrase that I've been using for about 10 years. It's been a prediction for a while now. It didn't used to be a particularly popular one. You would get a whole load of reports, even from the, like, big consultancy groups, that say these five jobs are definitely gone in five to 10 years.
Charlie: That five to 10 years has come and gone. Over that period of time, or I'll give you a longer period of time, over the last 30 years, we've added 30 million jobs here in the US, about a million a year on average. Obviously there's been a bit of fluctuation. There's no good sign, on a short-term decision-making time horizon, that jobs are gonna be wildly quickly displaced.
Charlie: There's very little evidence. That's my view. What do I mean by short-term horizon? I really mean the timeframe on which a large enterprise, which is what my company serves and what I'm interested in serving, makes decisions: 5, 10, maybe 20 years is the sort of edge of where I think things start to really change.
Charlie: Fundamentally, you should make decisions around software, and AI in this case, substantially helping people do a better job. The first time I read this becoming sort of a mainstream idea was about a year ago. And by mainstream, I mean outside of the tech industry. The New York Times wrote an article where the title was something like, in the fight between humans and robots, humans seem to be winning. I think that was just a very interesting change of thought. And there was a line in there that says, the question used to be, when will robots replace humans? The better question is, and I absolutely love this phrasing of it, when will humans that leverage robots replace humans that don't leverage robots?
Charlie: And I think that's the right way to think about it. I'll give you a couple of examples, one with sort of non-OpenAI technology and then ChatGPT specifically, or GPT. Radiology is something that's been talked about for a while. This was a giant step forward, where software, AI, could detect most cancers, most basic diseases, better than humans could; it just had higher accuracy. And the prediction for five, seven years was, this is the end of radiologists. We've seen no decrease in radiologists. If you want to go and get a cancer screening now, you're probably looking at a six to nine month wait.
Charlie: I don't have any issues, but I'm waiting for a cancer screening right now, just a nice safety check that I want for my own benefit, and because of the sheer backlog of work, I can't get that done. I can't get it done for a while. So the future for me is, in two or three years' time, there's not fewer radiologists.
Charlie: There's just much higher accuracy and much shorter wait times. And maybe the future, as I say, which I'm sure we'll speak about, 20 years down the line, is I can just go to a website, they can do some scan of me, and then they can give me the answer within seconds. I can't wait for that, but it's just not here today.
Charlie: And I'll give you one, AJ, one quick OpenAI example. When ChatGPT came out, there were so many sort of, this is not ready for mainstream, takes that went round. And the way that I thought about it is: if you want ChatGPT to write you a sublease, because you want to lease your apartment, and you want it to be flawless, you just want to click send to the person that's doing the sublease, then it's nowhere near ready for mainstream. If you wanna cut down a legal person's work by 90% because the first draft is done, they're gonna apply local laws, they're gonna look at a few nuances, they're gonna understand the building a bit, then it's so far beyond ready for mainstream. It should be used by everybody for every small use case it can. So I think it's human augmentation for a while. I think that jobs don't go away for a while. And I sort of like to compare it to the internet a little bit here, which is: we use the internet today in every single thing we do, and it makes our jobs substantially easier to do. It makes us more effective at them. And that's what I think the next sort of 10 years at least looks like for AI within the workplace.
Tips for founders thinking about AI for enterprise
AJ Asver: The thing you mentioned there, I find to be really fascinating, is this idea that, you know, we're not gonna replace humans immediately. That's not happening. But people thought that for a long time, right? And it almost feels like with every new wave of technology, there's this new hum of people saying, like, now we're gonna replace humans.
AJ Asver: Right. But at the same time, I kind of agree with you. Having spent a lot of time using ChatGPT and working with it, I found that it certainly augments my life: in writing my Substack, in fact, in preparing for this interview. I actually asked it to help me think about some interesting questions to ask you, based on some of the posts you'd written.
AJ Asver: Because I'd read some of your posts on LinkedIn fairly regularly, but I couldn't remember all of them, so I actually asked the Bing AI chat to help me. Right. And then when you think about these especially regulated environments, where the difference between right and wrong is actually someone's life, or a large sum of money, or breaking the law, then it really matters. And in that case, augmentation makes a lot of sense. Now, the reality is, AI, especially kind of large language models built on top of OpenAI, is a fairly low barrier to entry right now. That's why we're seeing a lot of companies in copywriting, in collaboration, in presentations. And the challenge with that is, if there's an incumbent in the space that already exists, it's very hard to beat them on distribution right now. Where I did see an interesting opportunity is exactly what you are talking about: going deep into a fairly fragmented industry, which maybe has a lot of regulation or a lot of complexity, maybe disparate data systems.
AJ Asver: You mentioned kind of the 30-year-old, like, COBOL system, right? Like, that is a perfect place where you can go in and really understand it deeply. Now, as someone that's running a company that does that, I'm curious, like, what advice do you have for founders or startups that wanna take that path of, like, hey, we're gonna take this AI technology that we think is extremely powerful, but go apply it in some deep industry where you really have to understand the ins and outs of that industry, the regulation, the way people operate in that industry, and the problems?
Charlie: Absolutely, a few thoughts. Firstly, make sure that you are trying to solve a problem. This is just general advice for setting up a business: what is the problem you're trying to solve? What is the end-consumer pain point? For us here at Hyperscience, it's that people wait for their mortgage to be processed for six weeks.
Charlie: No good reason why that's happening. People wait for their insurance claims to be paid out, sometimes for two years. No good reason for that to be happening. So always start with the customer pain point, and then decide: does the current technology, which is AI in this case, allow you to solve it?
Charlie: And then that gets you to the, does it allow you to solve for it? And what I've looked for here is: OpenAI can do it, or GPT-4 can do, a whole load of diverse stuff, but your highest-value things are gonna be what's just happening time and time again. For us, there is just a whole load of mortgages being processed right now, or there is just a whole load of insurance claims that are processed, and they're relatively similar. Although we think of them as different, they've got a lot of, certainly to a machine, similarities. So I'd look for volume. You can think of this as your total addressable market in terms of traditional VC, non-AI speak. But this is: is the opportunity big enough? And then the next thing I'd look for is repetitive tasks.
Charlie: So the more repetitive it is, the easier it is. You can go out and solve something really, really hard with a large language model, but there's probably even easier applications that you can solve that are just super repetitive, where you can take steps out. So I think that's it. Have the problem in mind.
Charlie: Think about volume, think about repetitive natures, and then one of the key things: once you've got all of that set and you've got, okay, this is an industry that's ripe for customer pain, ripe for disruption, this is definitely a good application of where AI is today, I would think about ease of use above everything.
Charlie: My thinking is, and I spoke about this with OpenAI, one of the biggest things they've done is they've just taken exceptional technology but made it so, so simple for someone that doesn't know AI to interact with. And the question I always get asked as the CEO of an enterprise AI software company is, how can we upskill all of our employees?
Charlie: The answer to that is, you shouldn't have to. This software should be so easy to engage with that your current employees should seamlessly be able to use it. If there is rudimentary training needed, your software should do that. And again, I like to compare this to the internet. We use the internet day in, day out.
Charlie: There has been 20 years of upskilling, but it's not really been that hard. Like, I think if you took today's internet and you gave it to somebody 20 years ago, it might be a little bit advanced for them. But we've made software, internet software, so easy to work with that you don't need to know how the internet works.
Charlie: The number of people that know how the internet works, even the number of people that know how to truly code a website, is an absolutely tiny fraction of the number of people that use the internet to improve their daily lives. So I'd say ease of use for AI is possibly as important as the technology.
Tips for enterprise execs considering AI
AJ Asver: I love those insights, by the way. And just to recap: you said go for a problem that has a lot of volume, where you're solving a real problem for end users, but there's clear volume, or, like, you know, a large addressable market. The other thing you mentioned was make sure it's repetitive tasks. Like, with LLMs there's a temptation to go after these really complex problems, but repetitive tasks are the ones to focus on.
AJ Asver: That's probably where the most incentive to solve is as well, right? And then the last thing you mentioned is, like, it should be really intuitive for an end user to use, to the point where they don't have to feel like they need to be upskilled. Now, if you are a founder or a startup going down this path, the other thing you're thinking about is, like, how do you sell into these companies?
AJ Asver: So maybe taking the flip side of that: if you are in the enterprise and you're getting approached by all these AI startups that just got funded this year, being like, we're gonna help you do this, we're gonna help you do that, we're gonna automate this, how do you decide when it's the right time to make that decision?
AJ Asver: How do you decide, kind of, the investment on that and whether it's worth it? Like, what are your thoughts on that?
Charlie: My thoughts on that are linked directly to the economic cycle we're in right now, which is not a pretty one. Somewhat of a, maybe, a mild recession, maybe the edge of a recession. And I see this from all of the CIOs and CEOs that we work with at the sort of large banks, large insurance companies.
Charlie: And my suggestion is this: I tell them to create a two-by-two matrix. You told everyone earlier, I started my career at McKinsey.
AJ Asver: Classic two by two.
Charlie: Love it. Two-by-two matrix. On one of the axes is short-term ROI, on the other axis is long-term ROI, and you want to get as much into the top right as possible and as few into the bottom left as possible.
Charlie: And for a while, AI, or artificial intelligence, was considered long-term ROI and not short-term ROI, which is a bit, they were treated by these large enterprises as science experiments. And you saw these whole roles form, these whole departments form, around transformation. The digital transformation officer, that is a role that just didn't exist five or 10 years ago, and these people were there to go and innovate within the organization. And largely speaking, it wasn't wildly successful. A lot of these roles are sort of spinning down. You need to solve a business problem that the technology solves today and gives you a path to the long run. So at Hyperscience, I'll give you an example here: we add value out of the box, but we also understand where people are today and try to get them to where they want to go.
Charlie: So one of our customers, 2% of what they do is process faxes. I hope that they are not processing faxes in five years' time, and I hope that we are giving them the bridge to that, but we better be able to do that today, and also paint them a sort of, what I refer to as a future-proof journey, to where they want to head.
Charlie: So I think it's really about: don't do any five-year projects. Like, if a company comes to you and, when you say, can you do this, they say, well, we could do that, I would run. Or if they're just saying yes to everything versus yes to something quite specific. And a good startup, you know this better than anyone, AJ, a good startup does something specific really well, and then they build out from that. They have an MVP is one way of phrasing it.
Charlie: They have a niche is another way of phrasing it. Yeah, go and sell something today really well, and all of those sort of long-tail features around it, people will forgo for a period of time whilst you build those. But you'd better add value in the short term as well as be building something for the long run.
Artificial General Intelligence
AJ Asver: Yeah. And that point you made about showing the short-term value is really important, especially when you're trying to counter the kind of, maybe, biases around AI that exist in enterprise today: that it's, as you mentioned, kind of like a hobby project or a kind of experiment, or, like, your moonshot kind of project. You wanna show them, really, like, this is the ROI you're gonna get very quickly in the next one or two years, and that's a really important point of it. Now, all of this makes sense: the augmented employee, the co-pilot, and, like, having these narrow versions of AI that are solving particular problems. And I can see that working out, but I feel like there's this one big factor that we have to think about that could change all of this, in my view.
AJ Asver: And that is artificial general intelligence. And for folks that dunno what artificial general intelligence is, or AGI as it's called, that's really what OpenAI is trying to achieve long term. And it's the idea of essentially having intelligence that is the equivalent of a human: an ability to think abstractly and to solve a wide, broad range of problems in the way that a human does. And what that means is, technically now, if you have an AGI, and let's say the cost of running that AGI is, you know, a hundredth or a thousandth of the cost of running a human, then potentially you could replace everyone with robots, or, you know, AI as it were.
AJ Asver: How does that factor into this equation? How do you think about it? Like, what are your thoughts about it, both as CEO of an enterprise company, but also as someone that's studied management and economics for the last decade? I'm really curious to hear where you think this is going.
Charlie: I don't think there's anything unique about a soul or something that can't be replicated in the human mind. And to your point, I actually think that we, we think of AGI sometime as when I hear definitions, I hear human-like, or human level intelligence if this happens or when it happens, because it, there's no doubt it will it will be substantially smarter, incredibly quickly than a human. And you look at the difference in humans of intelligence, someone you just pick off a street, 110 I IQ or whatever level it is, versus an Einstein with 170 iq, that difference is enormous. Now, imagine that that's the equivalent of 170 iq, but it's a thousand or 10,000 or whatever it is. I think you will get to the point where if you have AGI extremely quickly, you will. Be far beyond them not being able to do any job. There will be absolutely zero they can't do
Charlie: Now, I don't see that today. My best guess at a time horizon is more than 20 years, less than a hundred. That's a nice vague timeframe, but that's how I think about it. Twenty years is your classic decision-making timeframe, so for someone building or running an enterprise software company, "what do we do with AGI?" is not an interesting question.
Charlie: For someone thinking about designing a society, about economic systems, about regulation, it's an extremely good time to start thinking about those questions. Let me start, AJ, by speaking about why I don't think it's here today, and then we can think about what a world where it is here looks like, which I'm quite excited about, by the way.
Charlie: I don't view it as a dystopian outcome. Our current approach to AI today is machine learning; we spoke about that earlier. Machine learning requires three things: compute, algorithms, and data, and on all three of them I think we're short of what true AGI requires. On compute power, I think we need some leap forward.
Charlie: It might be quantum computing. There's a lot of exciting happening there. The timeframes there. I'm not as close to that as I am and I ai, but no one's speaking about quantum computing on a sort of one year time horizon. They're speaking about it again on a 10, 20 year time horizon. The second thing is the algorithms.
Charlie: From what we see out there, even with the phenomenal stuff that OpenAI is building, I don't see algorithms doing true AGI. They are large language models. I will say that I'm incredibly impressed with how GPT-4 plays chess. It's still not at the level of an algorithm designed specifically for chess, but it's pretty damn good. So my thoughts on the algorithms evolve every day, but certainly we're still not there today. And then one of the big hurdles is going to be data. A human ingests data rapidly, all the time, via a series of senses. You can think of that as five senses; you can think of it as 27 senses.
Charlie: People have a different perspective on this, but there's just data flowing into us the whole time and at the moment we don't have that in the technology space. If you wanna solve the autonomous vehicle, you've gotta hold. They do like cameras and the visual aspect extremely well. But to solve true AGI level staff to go beyond doing a 2D chess player game to processing a mortgage, I think there's also gotta be a new way of ingesting data.
Charlie: Now, one interesting question I've always wondered about is what the first iteration of AGI will look like. I don't think that will be the end state, because I think the end state is a lot smarter than this. But consider the first version of what we would call AGI; general intelligence just means it can do many diverse things, learn from one instance, and apply that learning to another.
Charlie: It could just be a layer that looks like a chatbot, that looks like GPT-4 or GPT-10, whatever it ends up being that ducks into different specific narrow. Ai. And so if you want to get in a car somewhere, you talk to G P T four and you say, I'm looking to go here. And that just plugs into some autonomous vehicle algorithm.
Charlie: That could be the first way. And it'll feel like general intelligence and it will be general intelligence or you might have just some massive change in the way algorithms are written. And I do think there's a lot of excite exciting happening there. It's just not clear what the timeframe. , uh uh, for that well,
AI Agents Everywhere
AJ Asver: Yeah, I like that last bit you talked about. I really like that as a way to think about how AI will evolve. I think some people call it the agent model, where you essentially have this LLM, a large language model like GPT, acting as an agent, but it's actually coordinating across many different integrations, maybe other deep learning models, to get things done for you.
AJ Asver: And so the collective intelligence of all those things put together is actually pretty powerful and can feel like, or have the illusion of, artificial general intelligence. For me there's this philosophical question: if it's as good as the thing we want it to be, does it matter whether it fits some nuanced academic definition of AGI? You know what I mean? If it does all the things we'd want of a really smart assistant that's always available, but doesn't meet the specifics of AGI in the academic sense, maybe it doesn't matter. Maybe that's what the next 20 or 30 years looks like for us.
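[Editor's note: the agent pattern AJ describes here, a general-feeling front end that delegates to narrow specialist models, can be sketched in a few lines. This is a hypothetical illustration only; the tool names are made up, and the keyword routing stands in for the decision a real LLM would make.]

```python
# Minimal sketch of the "agent" pattern: a general front end that
# routes each request to a narrow, specialized model.
# All function names are hypothetical stand-ins; a real system would
# use an LLM to choose the tool, not keyword matching.

def chess_engine(request: str) -> str:
    return "chess move: e4"  # stand-in for a dedicated chess model

def driving_planner(request: str) -> str:
    return "route planned to destination"  # stand-in for an AV stack

def document_processor(request: str) -> str:
    return "mortgage data extracted"  # stand-in for document automation

TOOLS = {
    "chess": chess_engine,
    "drive": driving_planner,
    "mortgage": document_processor,
}

def general_agent(request: str) -> str:
    """Feels 'general' to the user, but delegates to narrow AIs."""
    for keyword, tool in TOOLS.items():
        if keyword in request.lower():
            return tool(request)
    return "fallback: answer directly with the language model"

print(general_agent("Play chess with me"))   # handled by the chess tool
print(general_agent("Drive me downtown"))    # handled by the driving tool
```

The point of the sketch is the one AJ and Charlie make: the router plus its narrow tools can feel like general intelligence to the user even though no single component is general.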
Charlie: Look, I think that's exactly right, and there's no good reason for us to care; we just care that it gets done. We have no idea how the mind even works. We're pretty sure the mind doesn't say, "Okay, I've got this little bit for playing chess, this little bit for driving"; it works some different way.
Charlie: But humans are very attached to replicating things that they experience and understand. And one very simple way this happens is a shift in the definition of AGI away from what your average person might associate with it.
AJ Asver: Yeah, that to me is a mental shift that I think will happen. One of the things I've been thinking about, and this is a huge reason why I started this newsletter and this podcast, is that these things happen exponentially and very quickly.
AJ Asver: You don't really realize it when you look behind you at an exponential curve, because it's flat, but when you look forward, it's steep. I always talk about this quote from Sam Altman, because it really seared into my head: I think what we're going to see is this exponential increase in these agents that essentially help you coordinate your life, in work, in meetings.
AJ Asver: In getting to where you want to go in organizing a date night with your significant other. Right. And you are suddenly gonna be surrounded by them. And, and you'll forget about this concept of AGI because that will become the norm in, in the same way that like the internet age has become the norm. And being constantly connected to the internet is part of our, our, our normalcy in life.
AJ Asver: Right. This has been a fascinating conversation. I have one more question for you, which is, You know, as someone that's been going deep into the AI space as well, maybe from the enterprise side as as well, what are you most excited about in a, in AI for the next few years?
The future of society with AI
Charlie: Yeah. Look, I'd talk about two things. Firstly, one of the things that I'm excited about in the short term is just the growth in education. The single biggest thing that has happened with GPT is the mass, fast, easy adoption. The internet became very, very interesting when you got mass distribution.
Charlie: People creating use cases. And that's sort of when you went from like, okay, the internet could be a search engine to organized data. The internet could be a place to buy clothes. The internet could be a place to game to, okay, the internet is just everything. So I'm excited for that. And then in the long run it would be remissive us not to discuss this.
Charlie: I'm excited about thinking about what a new economic system looks like. People talk about universal basic. I don't think that the enterprise should be thinking about this question today. Our customers hyper science aren't thinking about this question today, but now is what I would call the ideation stage.
Charlie: Like we need think tanks, governments, people out there like, thinking of ideas, and then eventually stress testing ideas for what the future could look like. And I'm, I'm sort of excited for that.
AJ Asver: Yeah, I want to believe that it will happen, but I'm a little skeptical given the history of how humankind behaves. We're not particularly good at planning for these inevitable but hard-to-grasp eventualities, in a similar way that we all knew interest rates were going up, but it was really hard to understand the implications until last weekend, when we found out, right?
Charlie: It was very much we could have prepared for this.
AJ Asver: Yeah. And yet we could have prepared, because all the writing was on the wall. You've got these people at the fringes ringing the alarm bells, whether that's people working in AI ethics or even Sam Altman of OpenAI himself saying, "Hey, we actually can't open-source our models anymore; it's too dangerous to do that," right? And then you've got people on the other side that are like, "No, we need to accelerate this. It needs to be open, everyone needs to see it, and the faster we get to this outcome, the faster we'll be able to deal with it." I skeptically, unfortunately, believe that we're going to stumble our way into this situation.
AJ Asver: We'll have to act very quickly when it happens. And maybe that's just like the inevitable kind of like journey of mankind as a whole, right? But it's still exciting either way.
Charlie: Well, look, I think so. If we don't stumble our way into it, we create a world where people don't have to work, and I'm pretty excited about that. I'd go and study philosophy for two years at NYU. I'd be playing a hell of a lot of tennis. I'd be traveling more. There is a good outcome.
Charlie: My, if we just stumble through it. My thinking, AJ, is that this is what the world looks like and it's not pretty. It looks like three classes of people. You have your asset owners, you can think of them as today's billionaire. They will probably end up controlling the world. There might be some fake democracy out there, when you have all of the sort of AI infrastructure owned by ultimately a small group of people you're probably gonna have them influencing most decisions.
Charlie: You may then have this second class of like celebrity class. I think that may still exist or human sports may still exist. Human movies human celebrities may still
AJ Asver: Yep.
Charlie: and you just get this class of 99.9% of people that are everyone else. And what the, the, the two features of their life are gonna look like.
Charlie: This is just my guess of like the way it goes. If we don't think about it in a bit more of a interesting way plan, and plan for it is universal basic income. Everyone gets the same amount, probably not very much. I don't know that that's gonna be particularly inspiring for people. I think there's better ideas and then I think that you this is very dispo dystopian, but end up living more in virtual reality than in reality. There's the shortest path to a lot of what you might want to create is just to create it in a virtual world versus going and creating all of that. But in a physical world if so, all that is to say is if we don't start thinking about it, don't start having some regions test different models having startups.
Charlie: Form ideas around this and, and, and come up with ideas that are then adopted by bigger countries. In this instance, I think you could end up with a bad outcome, but I think if it's planned, you could end up with an insanely cool world.
AJ Asver: Yeah. So we're going to end on that note: those two worlds are what we have to either look forward to or dread, depending on which way you think about it. And for folks listening, I think it's just really important that people understand where this technology is going, because society can only understand it if individuals do.
AJ Asver: Right? And that's where you obviously are helping by communicating on LinkedIn to the many people that follow you, especially in industry around it. I try to do it with, with this podcast, but I think for, for anyone that's like fascinated about AI that's following along, like I think the number one thing you can do right now, Share with your friends some of the things that are happening in this space to help people kind of get a grasp for how the space is moving, and that will also help you advocate for us to do the right thing, which is to prepare for it, right? I, I, I think like if people think this is a long way away and they don't understand it, just like you said for industry right? Then no one is incentivized within government to do it because their own uh, constituents don't care about it.
AJ Asver: Right? But if we're like, Hey, this is happening. It's an exciting technology, but also there's a kind of two ways this would go and there's one better way than I think it's just as important that we as individuals care about it and advocate for it. was a fascinating conversation. Thank you so much for the time, Charlie.
AJ Asver: I really appreciate it. I cannot wait for folks to listen to this episode, and thank you again for joining me on The Hitchhiker's Guide to AI.
Charlie: Well, thank you for having me here, AJ.
AJ Asver: Awesome. Well, thank you very much. And for anyone listening, if you enjoyed this podcast, please do share it with your friends, especially if you know founders who are considering building AI startups for the enterprise, or folks working in industry who are considering incorporating AI into their products.
AJ Asver: I think Charlie shared a lot of really great insights that folks will appreciate hearing. Thank you very much, and we'll see you on the next episode of The Hitchhiker's Guide to AI.