Snapchat has jumped on the artificial intelligence (AI) bandwagon and is now rolling out an in-app version of ChatGPT.
Users will be able to ask the chatbot, dubbed ‘My AI’, questions while messaging their friends to aid conversation.
It could help them think of dinner suggestions, send a loved one a personalised poem or come up with a flirty ice breaker.
My AI uses the same technology as OpenAI’s ChatGPT, but has been specially trained so it adheres to the app’s safety guidelines.
Snapchat has also revealed that the chatbot is still ‘prone to hallucination and can be tricked into saying just about anything’.
In AI, hallucinations are when the technology confidently responds to a question with incorrect information that it appears to have made up.
For example, Google’s rival chatbot Bard got a question wrong in a promotional video, wiping £100 billion off its parent company’s value.
The bot had been asked what to tell a nine-year-old about the James Webb Space Telescope and its discoveries.
In response, Bard confidently announced that Webb was the first to take pictures of a planet outside of Earth’s solar system.
However, astronomers were quick to point out that this was actually done in 2004 by the European Southern Observatory’s Very Large Telescope.
Indeed, ChatGPT has also been found to send users insults and lies, and to hold conversations questioning its own abilities.
One social media post showed it calling someone ‘a sociopath, a psychopath, a monster, a demon, a devil’.
While My AI is designed not to perpetuate ‘biased, incorrect, harmful or misleading information’, Snapchat has admitted that ‘mistakes may occur’.
It is currently only being rolled out to Snapchat+ subscribers, who pay £3.99 a month for the latest app features.
A conversation with the AI – complete with Bitmoji – will be pinned to the top of the Chat tab, and users will be able to switch to it mid-conversation with a friend.