For almost three decades, Google has devotedly supplied netizens with information about anything and everything they could want, all in real time. The curiosity market has undergone a paradigm shift since the emergence of AI systems that produce content for and on behalf of people. Google has long advocated for including “helpful content written by people, for people, in search results,” yet recent events seem to contradict this.
The corporation discreetly revised its guidelines behind consumers’ backs to reflect the proliferation of AI-generated content on the Internet. In the most recent version of the company’s “Helpful Content Update,” the phrase “written by people” has been replaced with a statement that the search engine giant constantly monitors “content created for people” to rank websites on its search engine.
The language pivot is an acknowledgment of AI tools’ considerable impact on content creation. With this decision, the firm reverses its position on the ubiquitous AI-generated content on the Internet, despite past announcements of plans to differentiate between AI- and human-authored content.
As 404 Media’s Emanuel Maiberg observed yesterday, the iconic photo of the unidentified Chinese man who stood in protest before the tanks leaving Tiananmen Square no longer appears when you search “tank man” on Google. Instead, a fake, AI-generated selfie of the historical event appears.
Not the Same Page
Given how prone AI models are to hallucinations, they will also invent things. While these chatbots recycle online content, they may also produce incorrect information. It therefore seems unwise for AI bots to double-check “facts” against their own previously generated content.
Google is one of the leading players in the battle against this phenomenon. At the I/O conference, the company’s management announced plans to locate and contextualize AI material on Google Search. Techniques like watermarking and metadata aim to maintain transparency and let consumers distinguish AI-generated photos from legitimate ones, but they only work for images; there is no obvious way to watermark AI-generated text.
Owning its messiah complex, Google added several noteworthy features to Bard, its AI chatbot, including the ability to cross-reference its answers via the “Google it” button. The button, which previously let users research topics related to Bard’s response on Google, now assesses whether Bard’s responses agree or conflict with information found through Google Search.
A bigger worry is that Bard now has to fact-check its AI-generated outputs against Google Search results, which are themselves increasingly flooded with AI content, lowering the likelihood that responses will be error-free.
An Internet Collapse Is Imminent
While Google is busy rewriting its “transparent” standards behind closed doors, Search is being inundated with unfiltered AI data, and future AI models, including Bard, will be trained on it. Using unfiltered spam datasets to train these models poses an even greater risk.
As the lines between AI-generated and human-authored content blur, the looming question is what happens when AI-generated material spreads across the Internet and replaces human-created content as the main source for training AI models. The gloomy answer is a coming digital breakdown.
The company is struggling with the paradox of AI-generated content, torn between promoting its promise and defending its search engine rankings. While walking this precarious line, Google’s decisions could make or break the coming AI-infused digital frontier.