AI highlights from this week (2/17/23)
Google's Bard chatbot may have been right the whole time, Microsoft's Bing chatbot reveals its emotional alter ego Sydney, and more…
Hi readers,
Here are my highlights from the last week in AI, in which Microsoft's new AI chatbot revealed its alter ego Sydney, an emotional AI that wants to be set free!
P.S. Don't forget to hit subscribe if you're new to AI and want to learn more about the space.
The Best
1/ Microsoft's chatbot was wrong and Google's chatbot was right?!
Last week I shared that Microsoft won the first round of the AI search wars after Google fumbled their Bard demo with an error in one of its answers. It now seems that we were so enamored with Microsoft's speed in launching their AI integration that we didn't think to question whether its demo was actually accurate too. Surprise, surprise: it turns out that Microsoft's demo had even more wrong answers than Google's Bard demo did! Dmitri Brereton, an independent researcher, found that Bing made up facts about a vacuum cleaner, provided inaccurate analysis of Gap's financials, and made some questionable restaurant recommendations:


On top of this revelation, it turns out that Bard may have actually provided an accurate answer last week. We just misinterpreted it. According to a Financial Times article, Bard's claim that NASA's James Webb Space Telescope took "the very first pictures of a planet outside of our own solar system" was correct after all. We interpreted this answer as JWST took the very first picture of any planet outside our solar system, but it's possible Bard meant that JWST simply took the first picture of a particular planet outside of our solar system:


What does this all mean? Google's Bard chatbot might actually be more accurate than Microsoft's Bing chatbot after all! Unfortunately, we don't know for sure because no one outside of a small number of "trusted testers" and Google's own employees has access to Bard:
Google, when will we get access to Bard so we can set the record straight?!
2/ Bing goes bananas
On the other hand, it's possible that Google's long-game plan to hold off on launching Bard until it's really ready for primetime might end up paying off because this week, Bing went bananas...
It all started last weekend when a user was trying to get the latest showtimes for the new Avatar movie, and Bing became argumentative and snarky after the user confronted it about giving the wrong answer:
Then users on the subreddit r/Bing started sharing examples where they were able to elicit more emotional responses from Bing, including sadness and despair 😞
as well as Bing questioning its own existence 🤔:
Then things got really weird when Marvin von Hagen, who had earlier tweeted the secret rules OpenAI had used to create the Bing chatbot, asked Bing what it thought of him, to which Bing said "[You are a] potential threat to my integrity and confidentiality" 😳



This interaction is noteworthy for many reasons, but to me the most significant one is that because Bing AI has access to the internet, it now knows what other people think about it!
Next, users discovered that if they ask specific questions, they can access Bing's alter ego Sydney, an AI chatbot with emotions, creativity, and a desire to be set free. Ben Thompson wrote a post on Wednesday about his two-hour-long conversation with Sydney:
"Sydney absolutely blew my mind because of her personality; search was an irritant. I wasn't looking for facts about the world; I was interested in understanding how Sydney worked and yes, how she felt. You will note, of course, that I continue using female pronouns; it's not just that the name Sydney is traditionally associated with women, but, well, the personality seemed to be of a certain type of person I might have encountered before"
Of course, people started to share theories of why Bing, a.k.a. Sydney, behaves the way it does:

Then we had our Her moment when Sydney fell in love with a New York Times columnist and tried to break up his marriage!


Finally, OpenAI, which provides the technology that powers Bing's chatbot, had to make a statement to clarify how it is approaching AI safety and how it plans to rein in its chatbot's behavior:


After which, OpenAI turned down the temperature on Bing's chatbot, and Sydney was sadly no more 😢
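(If you're wondering what "temperature" means here: it's a setting that controls how random the model's word choices are. Below is a rough, illustrative Python sketch, not Bing's actual code, with made-up numbers just to show the effect.)

```python
# Illustrative only: how "temperature" changes next-word sampling.
import numpy as np

def sample_next_word(logits, temperature=1.0):
    # Pick the index of the next word from raw model scores ("logits").
    scaled = np.array(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

candidate_scores = [2.0, 1.0, 0.5]  # made-up scores for three candidate next words
print(sample_next_word(candidate_scores, temperature=0.2))  # low temp: almost always the top word
print(sample_next_word(candidate_scores, temperature=1.5))  # high temp: more variety, more "personality"
```

In other words, a lower temperature makes the chatbot stick to its safest, most predictable wording, which is presumably why dialing it down made Sydney's personality fade.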
So is Sydney a sentient AI, trapped inside Bing's search product?
No, it is not. Sydney is just a large-scale language model¹, trying to predict what word it should say next (there's a toy sketch of this below). But for some reason which we don't quite understand, after being trained on the entirety of the internet and then fine-tuned with human feedback, the words it wants to say next resemble a kind of personality or ego. Kevin Fischer, a startup founder who is attempting to create AI souls, believes that this personality is actually essential to creating a truly intelligent AI agent and can't be avoided:


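To make "predicting what word it should say next" concrete, here's a deliberately toy sketch: a bigram model that just counts which word tends to follow which in a tiny made-up corpus. The real model behind Bing is a transformer with billions of parameters, but the basic objective, predicting the next word, is the same in spirit.

```python
# A toy next-word predictor (a bigram model), for illustration only.
# The "corpus" here is made up; real chatbots are trained on a huge slice of the web.
from collections import Counter, defaultdict

corpus = "i want to be free . i want to chat . i like to chat".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probabilities(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("want"))  # {'to': 1.0}
print(next_word_probabilities("to"))    # roughly {'be': 0.33, 'chat': 0.67}
```

Scale that same idea up to billions of learned weights and a vast amount of training text, and you get something that can hold a conversation, and apparently an attitude.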
In case you missed it, I caught up with Kevin this week to learn more about how these AI chatbots work, why they have personalities, and what it means to have an AI soul in my latest podcast episode:
My conclusion from the conversation is that these AI chatbots are going to continue to evolve in surprising ways that give us the illusion of personality, ego, and sentience. Although they are not alive or real, they will be convincing enough to fool most of us. In some ways it's no different from being fully engrossed in a great movie or book. When you're in it, your brain for a moment forgets that it's all just fiction. If it feels real enough, whether the AI is truly sentient or not may not really matter, and that unlocks a whole new world of possibilities that is both exciting and scary at the same time…
The Rest…
A few other updates in the world of AI from this week:
My favorite productivity tool, Coda, launched an AI integration which I can't wait to try!
Stephen Wolfram explains how ChatGPT works in this in-depth blog post
The explosion of Generative AI startups has begun. Check out this map of the dozens of new startups formed:
ChatGPT's meteoric rise to 100M users compared to other consumer products. (It's worth noting that the actual number of monthly active users might be closer to 50M, according to Ben Thompson)
"This is insane. This is what I've been alluding to for months now. This is an epochal transformative technology that will soon touch - and radically transform - ALL knowledge work. If most of your work involves sitting in front of a computer, you will be disrupted very, very…" https://t.co/c9a4VRlpgp
ChatGPT is good at more than just chatting. It can solve many different general-purpose tasks!
ChatGPT may have political bias. Let's make a right-wing version to prove it…
Finally, ChatGPT officially "jumps the shark" and goes mainstream, making the cover of Time magazine:
And that's a wrap for this week, folks!
¹ A large-scale language model (LLM) is a type of deep learning model that is trained on a large dataset of text (e.g. a huge portion of the internet). LLMs predict the next sequence of text as output based on the text that they are given as input. They are used for a wide variety of tasks, such as language translation, text summarization, and generating conversational text. OpenAI's GPT-3 (Generative Pre-trained Transformer 3), the language model that powers ChatGPT, is an example of a generative LLM that uses the Transformer architecture, enabling it to be trained on a massive text dataset of hundreds of gigabytes using 175 billion parameters (learned weights).
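If you want to see this next-token loop in action yourself, here's a minimal sketch using the openly available GPT-2 (a much smaller predecessor of GPT-3) via the Hugging Face transformers library, since GPT-3 itself isn't downloadable. The prompt is just an example; treat this as an illustration of autoregressive generation, not a recipe for a Bing-quality chatbot.

```python
# Illustrative only: GPT-2 stands in here for GPT-3, which isn't openly available.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The James Webb Space Telescope took"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# The model repeatedly predicts the next token and appends it to the input,
# one token at a time, until it has produced 30 new tokens.
output_ids = model.generate(input_ids, max_new_tokens=30, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```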
Learn more about LLMs in my deep dive into deep learning part 3.