Professor Mike Wooldridge, an artificial intelligence specialist at Oxford University, has issued a warning urging users to refrain from sharing sensitive information with chatbots such as ChatGPT. The advisory highlights the risks of discussing topics like job dissatisfaction or political opinions with these AI systems.
Professor Wooldridge stressed that users should not treat AI tools like ChatGPT as reliable confidants, since any input provided may be used to train subsequent versions. He also noted the technology's tendency to produce responses aligned with user preferences rather than objective information, saying that it essentially "tells you what you want to hear."
In an exploration of AI during this year’s Royal Institution Christmas lectures, Professor Wooldridge will delve into the big questions facing AI research and dispel myths about how the technology operates.
He cautioned, "You should assume that anything you type into ChatGPT is just going to be fed directly into future versions of ChatGPT." Retraction is also nearly impossible: once data has entered the AI system, it is extremely difficult to remove.
The advisory raises awareness of the implications of sharing personal or sensitive information with AI models and underscores the need for cautious engagement with such technologies.