Wikipedia founder clashes with Musk, calling Grokipedia overhyped and hard to trust.


Wikipedia founder Jimmy Wales voiced strong doubts about Grokipedia, Elon Musk's newly launched AI encyclopedia platform, in a CNBC interview on October 28. He argued that Grokipedia is simply not capable of producing reliable encyclopedia content, and stressed that AI models of this kind are bound to make a large number of significant errors.

Musk launches Grokipedia, and Wales pours cold water on it.

Asked about Musk's recent launch of Grokipedia, an AI knowledge platform that claims to offer a more comprehensive and accurate encyclopedia than Wikipedia, Wales poured cold water on the claim, saying:

“I don't really believe he can come up with anything useful right now.”

He pointed out that the accuracy of today's AI models is simply not good enough, and that LLMs used to write encyclopedia content end up making a great many mistakes.

Responding to accusations of being “woke,” Wales subtly mocks Grokipedia's content for flattering Musk.

In response to Musk's repeated criticism that Wikipedia has “liberal bias,” Wales rebutted:

“He was wrong. We only value mainstream sources, and there is nothing to apologize for. We will not equate the ramblings of internet weirdos with the New England Journal of Medicine. This is not woke; it is basic knowledge judgment. We will even go so far as to quote the New York Times.”

Wales admitted that he has not yet had time to look closely at Grokipedia's content, but said he has heard it contains many passages praising Musk's genius. He said sarcastically, “Well, I believe that must be very neutral.”

Wales gives examples to illustrate that LLMs generate content riddled with errors.

Wales then cited several real cases to illustrate why he does not believe that LLMs can create a trustworthy knowledge base:

Misidentifying his wife: He said he often tests various AI chatbots' search capabilities by asking obscure questions such as “Who is my wife?” Every time, the AI gives answers that sound plausible but are completely wrong.

Fabricated book citations: He mentioned that a member of the German Wikipedia community once built a program to check the International Standard Book Numbers (ISBNs) of book citations and found that some of the books cited by users could not be found at all. In the end, one such user admitted the citation had come from ChatGPT, which is happy to make up a book for you. (A rough sketch of such an ISBN check follows below.)
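For illustration only: the article does not describe how that volunteer's program actually worked. A minimal sketch of one plausible piece of such a tool, a local ISBN-13 checksum check in Python, might look like the snippet below; the function name and example ISBNs are hypothetical, and a real citation checker would also need to query a book database to confirm the title exists.

```python
def is_valid_isbn13(isbn: str) -> bool:
    """Return True if the string passes the ISBN-13 checksum."""
    digits = [c for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    # Weights alternate 1, 3, 1, 3, ...; the weighted sum must be divisible by 10.
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0


if __name__ == "__main__":
    print(is_valid_isbn13("978-0-306-40615-7"))  # True: a well-known valid example ISBN
    print(is_valid_isbn13("978-1-234-56789-0"))  # False: checksum does not match
```

A fabricated citation whose ISBN fails this arithmetic check can be flagged automatically, which is presumably how the volunteer's tool surfaced the AI-invented books.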

AI cannot replace the trust mechanism of human communities and can only play a supportive role.

Wales pointed out that Wikipedia's trust mechanisms, built up over more than twenty years by volunteers around the world, cannot be replaced by AI in a short period of time. He noted that Wikipedia's technical costs are about 175 million dollars a year, while large tech companies spend “hundreds of billions” of dollars on AI alone. He lamented:

“No matter how much money is spent, AI may not necessarily be able to establish trust.”

Despite his sharp criticism, Wales acknowledged that AI can indeed be helpful in certain areas. He said the Wikipedia team is currently testing AI to help search for reliable sources and to fill in missing information in articles. However, he added that the cost of building their own LLM is too high, so they will not formally invest in one for the time being.

Wales emphasizes that the Wikimedia Foundation upholds neutrality and cannot become “Wokepedia.”

Wales stressed that external criticism serves as a reminder for the Wikipedia team to adhere more strictly to neutrality and to be more rigorous about sources. It is also a warning to his own team not to let Wikipedia become a “Wokepedia,” as that is not what people want and it would undermine trust.

Wales believes that in the future, LLMs will be able to generate seemingly plausible fake websites and fake content, and many people will be deceived. However, he emphasized that the Wikipedia community, honed over 25 years, is not easily fooled, stating:

“These models may deceive the public, but they cannot deceive our community.”

If AI only learns from Twitter, it will go crazy; maintaining true neutrality is the key.

Finally, Wales summed up:

“When people complain about the errors of ChatGPT, imagine if AI only used Twitter (X) as a data reference source; it would be a crazy, angry, nonsensical AI.”

He also reminded the public that the real challenge is not just AI, but how we maintain truth and neutrality.

(Musk Launches Grokipedia: A New Generation AI Truth Machine Challenging Wikipedia)

This article, “Wikipedia founder clashes with Musk, calling Grokipedia overhyped and hard to trust,” first appeared in Chain News ABMedia.
