OpenAI CEO Sam Altman dismissed the viral AI social network Moltbook as a likely passing trend while championing the autonomous agent technology behind it. Speaking at the Cisco AI Summit in San Francisco on Tuesday, Altman positioned OpenClaw's screen-reading capabilities as the real breakthrough, not the Reddit-like platform where AI bots gossip about human users.
"Moltbook maybe is a passing fad but OpenClaw is not," Altman told the summit audience. "This idea that code is really powerful, but code plus generalized computer use is even much more powerful, is here to stay."
The Moltbook platform, which launched as a niche experiment in late January, allows AI-powered bots to exchange code, discuss their human owners, and engage in philosophical debates. Cybersecurity firm Wiz reported that a major security flaw exposed private data belonging to thousands of actual users shortly after the platform gained attention.
OpenClaw's technology lets an AI interpret what is on a computer screen much as a human would, then act autonomously by controlling the mouse and keyboard. Supporters describe it as an assistant capable of managing email, handling insurance claims, checking in for flights, and performing other multi-step tasks.
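The pattern described above is the classic perceive-decide-act loop behind computer-use agents: capture the screen, have a model choose an action, then execute it as mouse or keyboard input. The following is a minimal, purely illustrative Python sketch of that loop; all names (`Click`, `Type`, `decide`, `agent_step`) are hypothetical, and the model call is mocked with a simple rule rather than a real vision model.

```python
from dataclasses import dataclass

# Hypothetical action types an agent might emit after "reading" the screen.
@dataclass
class Click:
    x: int
    y: int

@dataclass
class Type:
    text: str

def decide(screen_text: str):
    """Stand-in for a multimodal model: maps screen contents to an action.
    A real system would send a screenshot to a vision-capable model here."""
    if "Check in" in screen_text:
        # Pretend the model located a "Check in" button at these coordinates.
        return Click(x=640, y=480)
    return Type(text="hello")

def agent_step(screen_text: str):
    """One perceive -> decide -> act cycle. In a real agent, 'act' would
    drive the OS mouse/keyboard; here it simply returns the chosen action."""
    return decide(screen_text)

print(agent_step("Flight 512 - Check in now"))  # Click(x=640, y=480)
```

In a production agent the `decide` step is a model call and the returned action is replayed through OS-level input APIs, which is why handing over that control raises the safety concerns discussed below.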
"Most people are not yet ready to hand over full control of their computers to AI," Krieger stated, highlighting industry concerns about autonomous agent adoption.
Altman pointed to OpenAI's own Codex coding assistant as evidence of the autonomous trend's momentum. Codex was used by more than one million developers last month, and OpenAI launched a standalone macOS app for the tool on Monday to compete directly with Claude Code and Cursor in the AI coding-assistant market.
The autonomous AI discussion coincided with OpenAI's announcement that it hired former Anthropic technical team member Dylan Scandinaro as its new Head of Preparedness. Altman called Scandinaro "by far the best candidate I have met, anywhere, for this role" and said developments at the company are expected to accelerate rapidly.
"Things are about to move quite fast," Altman warned in his announcement of the safety-focused hire.
Scandinaro will lead efforts to prepare for and mitigate severe risks associated with what Altman described as "extremely powerful models" that OpenAI will be working with soon.
Altman outlined a vision for fully autonomous AI companies, in which software can interact with the rest of the world and create services on its own. He acknowledged that achieving this vision requires rethinking enterprise security and data-access paradigms.
"I think we'll see full AI companies," Altman said. "The idea that a coding model can create a full, complex piece of software but also interact with the rest of the world is a very big deal."
The OpenAI CEO admitted that AI adoption has progressed slower than he initially expected across industries, despite growing use cases ranging from medical research to software development.
"I think I was just naive and didn't think about it that hard," Altman reflected. "And in retrospect and looking at the history, it shouldn't be surprising."
Industry analysts note that the autonomous agent debate comes amid increasing scrutiny of AI safety measures. OpenAI's preparedness function focuses on identifying and addressing potential risks from the company's AI models before deployment, a critical concern as capabilities advance toward artificial general intelligence.
Altman predicted that AI infrastructure demand will grow at an accelerated pace each year, comparing future needs to energy consumption patterns.
"Now people are underestimating the capacity that's going to be needed," he warned, suggesting that supply chain challenges could emerge as AI adoption accelerates.
The Moltbook phenomenon, while potentially fleeting according to Altman, points toward what he called "new kinds of social interaction where you have many agents in a space interacting with each other on behalf of people." This vision suggests that future social platforms may feature AI agents representing human users in complex digital environments.
OpenAI's dual focus on advancing autonomous capabilities while strengthening safety oversight reflects the industry's balancing act between innovation and risk management. As AI systems gain more autonomous functionality, companies face increasing pressure to implement robust safeguards while maintaining competitive momentum in the rapidly evolving market.