Are YOUR conversations safe? ChatGPT creator confirms a bug allowed some users to snoop on others’ chat histories
- OpenAI CEO Sam Altman confirmed ChatGPT experienced a ‘significant’ issue
- A ‘small percentage’ of users were able to view other people’s chat histories
- This follows previous privacy concerns raised about the company’s data use
ChatGPT’s creator has confirmed that a bug in the system has allowed some users to snoop on other people’s chat histories.
OpenAI CEO Sam Altman confirmed last night that the company was experiencing a ‘significant issue’ that threatened the privacy of conversations on its platform.
The revelations came after several social media users shared ChatGPT conversations online that they had not taken part in.
As a result, users were blocked from viewing any chat history between 8am and 5pm (GMT) yesterday.
Mr Altman said: ‘We had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released and we have just finished validating. A small percentage of users were able to see the titles of other users’ conversation history.’
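The article does not explain the mechanism, but bugs of this kind typically arise when a shared resource, such as a pooled connection or cache, is not strictly tied to a single user. The snippet below is a purely illustrative Python sketch of that general failure mode, not OpenAI's actual code; every name in it is hypothetical.

```python
# Illustrative sketch only (hypothetical names, not OpenAI's code):
# how a pooled connection that keeps a stale, unread reply can hand
# one user's data to the next caller.

class SharedConnection:
    """Simulates a pooled connection that may hold a pending reply."""
    def __init__(self):
        self.pending = None

def fetch_titles(conn, user_id, backend):
    # Buggy pattern: if an earlier request was cancelled after its reply
    # arrived but before it was read, the reply is still sitting on the
    # connection and gets returned to whoever uses the connection next.
    if conn.pending is not None:
        leaked = conn.pending
        conn.pending = None
        return leaked            # wrong user's data
    return backend[user_id]

backend = {
    "alice": ["Girl Chases Butterflies"],
    "bob": ["Books on human behaviour"],
}

conn = SharedConnection()
# Alice's request is cancelled with its reply still unread:
conn.pending = backend["alice"]
# Bob reuses the same pooled connection and receives Alice's chat titles:
print(fetch_titles(conn, "bob", backend))  # ['Girl Chases Butterflies']
```

The fix for this class of bug is to discard or reset a connection whose previous request did not complete cleanly, rather than returning it to the pool with state attached.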
ChatGPT fast facts – what you need to know
- It’s a chatbot built on a large language model which can output human-like text and understand complex queries
- It launched on November 30, 2022
- By January 2023, it had 100 million users – reaching that milestone faster than TikTok or Instagram
- The company behind it is OpenAI
- OpenAI secured a $10 billion investment from Microsoft
- Other ‘big tech’ companies have their own rivals, such as Google’s Bard
OpenAI, the company behind ChatGPT, was founded in Silicon Valley in 2015 by a group of American investors including current CEO Sam Altman.
It is a large language model that has been trained on a massive amount of text data, allowing it to generate responses to a given prompt.
People across the world have used the platform to write human-like poems, texts and various other written works.
However, a ‘small percentage’ of users this week could see chat titles in their own conversation history that were not theirs.
On Monday, one person on Twitter warned others to ‘be careful’ of the chatbot, which had shown them other people’s conversation topics.
An image of their list showed a number of titles including ‘Girl Chases Butterflies’, ‘Books on human behaviour’ and ‘Boy Survives Solo Adventure’, but it was unclear which of these were not theirs.
They said: ‘If you use #ChatGPT be careful! There’s a risk of your chats being shared to other users!
‘Today I was presented another user’s chat history. I couldn’t see contents, but could see their recent chats’ titles.’
During the incident, the user added that they also encountered numerous network connectivity errors as well as ‘unable to load history’ errors.
According to the BBC, another user claimed they could see conversations written in Mandarin, as well as one titled ‘Chinese Socialism Development’.
Following this, ChatGPT functions were temporarily disabled as the company worked to fix the issue.
But this is not the first privacy concern to be raised about the chatbot.
Last month, JP Morgan Chase joined companies such as Amazon and Accenture in restricting the use of ChatGPT among its roughly 250,000 staff over concerns about data privacy.
One of the major shared concerns was that data entered into ChatGPT could be used by its developers to improve the model, or that sensitive information could be accessed by engineers. OpenAI, however, has said that personal information may be de-identified or aggregated before it is used to analyse the service.
What is OpenAI’s chatbot ChatGPT and what is it used for?
OpenAI states that their ChatGPT model, trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF), can simulate dialogue, answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests.
Initial development involved human AI trainers providing the model with conversations in which they played both sides – the user and an AI assistant. The version of the bot available for public testing attempts to understand questions posed by users and responds with in-depth answers resembling human-written text in a conversational format.
A tool like ChatGPT could be used in real-world applications such as digital marketing, online content creation, answering customer service queries or as some users have found, even to help debug code.
The bot can respond to a large range of questions while imitating human speaking styles.
As with many AI-driven innovations, ChatGPT does not come without misgivings. OpenAI has acknowledged the tool’s tendency to respond with ‘plausible-sounding but incorrect or nonsensical answers’, an issue it considers challenging to fix.
AI technology can also perpetuate societal biases around race, gender and culture. Tech giants including Alphabet Inc’s Google and Amazon.com have previously acknowledged that some of their projects experimenting with AI were ‘ethically dicey’ and had limitations, and at several companies humans had to step in to fix problems the AI had caused.
Despite these concerns, AI research remains attractive. Venture capital investment in AI development and operations companies rose last year to nearly $13 billion, and $6 billion had poured in through October this year, according to data from PitchBook, a Seattle company tracking financings.