The Alleged Scandal: Did Google Bard Copy ChatGPT? Examining the Ethics of AI Data Usage

Recently, the news has been abuzz with reports that Google is embroiled in a major scandal: media outlets have reported that some of Google Bard's training data came from ChatGPT. The revelation has sent shockwaves through the technology world and has led many to question the ethics of Google's practices and those of other tech giants.

A former Google employee and top researcher who left for OpenAI revealed that Bard was trained with ChatGPT data. If accurate, this could be one of Google's biggest scandals: ironically, ChatGPT data would have been used to train an AI built to compete against ChatGPT.

As we all know, Microsoft holds an exclusive license to use ChatGPT's technology for commercial purposes, so Google could well face a lawsuit. The million-dollar question is: did Google Bard copy ChatGPT?

Jacob Devlin is a well-known name in the field. Devlin is one of the authors of the BERT paper published by Google in 2018, a study that became a catalyst for academic AI research. It is fair to say that Devlin's work laid a strong foundation for the language models used by both Google and OpenAI.

The Information claims that one of the reasons Devlin left Google was that he knew Bard, Google's flagship product for competing with ChatGPT, was being trained using ChatGPT data. He reportedly warned Google CEO Sundar Pichai and other executives that Bard's team was training on data from ShareGPT, and then resigned.

Steven Tey, one of the creators of ShareGPT, admitted that he had known about this for a long time and that word had spread throughout Google, where many employees were unhappy and worried. He later reposted the story, saying that the cat may now be out of the bag, referring to the accidentally leaked secret.

Insiders claim that Google stopped using this data to train Bard as soon as Devlin issued his warning. However, when Google spokesperson Chris Pappas was asked about the incident by The Verge, he denied it, insisting that no data from ShareGPT or ChatGPT had been used in Bard's training.

It is worth noting that OpenAI has previously been at the center of similar debates, with many websites and artists accusing ChatGPT of taking their content without permission. This, however, marks the first time another company has been accused of taking data from ChatGPT.

The alleged use of ChatGPT data by Google raises ethical questions. Data ownership and privacy are critical issues in AI. While it is common for companies to use third-party datasets to train AI models, they must do so ethically and with the consent of the data owners. This incident underscores the importance of transparency and accountability in AI data usage.

In conclusion, the alleged scandal of Google Bard copying ChatGPT raises serious concerns about data ethics in AI. As AI continues to advance, it is crucial for companies to prioritize ethical practices in their use of data, and to respect the intellectual property rights of others.