Meta attributes its AI's claim that the Trump rally shooting didn't happen to hallucinations.

After relenting and allowing its AI assistant to answer questions about the incident, Meta immediately ran into the very problem it had been trying to avoid.
Meta's AI assistant gave inaccurate answers about the recent attempt on former President Donald Trump's life, an error a company executive is now attributing to the technology underlying its chatbot and others like it.

In a blog post published on the company's site on Tuesday, Joel Kaplan, Meta's global head of policy, calls the AI's responses to questions about the shooting "unfortunate." He says Meta AI was initially programmed not to answer questions about the attempted assassination, but the company lifted that restriction after noticing that people were taking notice. He also acknowledges that "in a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn't happen — which we are quickly working to address."

"These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems and is an ongoing challenge for how AI handles real-time events going forward," adds Kaplan, who runs Meta's lobbying efforts. "Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we'll continue to address these issues and improve these features as they evolve and more people share their feedback."

Meta isn't alone in this controversy. On Tuesday, Google also had to push back against claims that its Search autocomplete feature was censoring results about the attempted assassination. "Here we go again, another attempt to rig the election!" Trump wrote in a post on Truth Social. "FOLLOW META AND GOOGLE FIRST."

Ever since ChatGPT took off, the tech industry has been grappling with generative AI's tendency to fabricate. Some players, Meta among them, have tried to rein in hallucinations by grounding their chatbots in high-quality data and real-time search results. But as this episode shows, it's hard to get past what large language models are fundamentally built to do: generate.