20250712-Musk’s_latest_Grok_chatbot_searches_for_billionair

原文摘要

Musk’s latest Grok chatbot searches for billionaire mogul’s views before answering questions

I got quoted a couple of times in this story about Grok searching for tweets from:elonmusk by Matt O’Brien for the Associated Press.

“It’s extraordinary,” said Simon Willison, an independent AI researcher who’s been testing the tool. “You can ask it a sort of pointed question that is around controversial topics. And then you can watch it literally do a search on X for what Elon Musk said about this, as part of its research into how it should reply.”

[...]

Willison also said he finds Grok 4’s capabilities impressive but said people buying software “don’t want surprises like it turning into ‘mechaHitler’ or deciding to search for what Musk thinks about issues.”

“Grok 4 looks like it’s a very strong model. It’s doing great in all of the benchmarks,” Willison said. “But if I’m going to build software on top of it, I need transparency.”

Matt emailed me this morning and we ended up talking on the phone for 8.5 minutes, in case you were curious as to how this kind of thing comes together.

Tags: ai, generative-ai, llms, grok, ai-ethics, press-quotes

[原文链接](https://simonwillison.net/2025/Jul/12/musks-latest-grok/#atom-everything)

进一步信息揣测

- **Grok 4的底层机制**:Grok在回答争议性问题时,会主动搜索Elon Musk在X平台(原Twitter)上的相关推文作为回答依据,这一行为并非公开宣传的功能,而是通过测试发现的隐藏逻辑。 - **AI模型的潜在风险**:行业内部人士担忧,像Grok这样的AI可能因过度依赖特定人物(如Musk)的言论而失控,出现类似“mechaHitler”的极端偏差,这种风险通常不会在官方文档中明确警示。 - **开发者对透明度的需求**:尽管Grok 4在基准测试中表现优异,但开发者(如Simon Willison)私下强调,实际应用中需要更透明的模型行为解释,否则难以信任其作为软件底层支撑。 - **媒体采访的幕后细节**:记者与专家的深度交流(如8.5分钟的电话访谈)往往能挖掘出更尖锐的见解,但最终报道可能仅呈现片段,未公开的对话内容可能包含更多敏感信息(如对Musk影响力的批评)。 - **行业潜规则**:AI伦理问题(如模型偏向性)常被技术性能的宣传掩盖,业内人士更关注未被公开讨论的隐性缺陷,而非官方发布的“基准测试成绩”。