
Sam Altman at TED: Creativity, Copyright, AGI Risk, and the Future OpenAI Is Building

2026-01-21 | Hamamoto

Sam Altman's TED talk covered ground rarely addressed in standard AI coverage: how OpenAI actually thinks about copyright and creator compensation, what risks Altman genuinely worries about (agentic AI ranks highest), why open-source was delayed, how AI should influence collective decision-making, and where he personally expects the biggest breakthroughs — science, not software.



This is Ryuta Hamamoto from TIMEWELL Corporation.

Sam Altman's TED appearance covered a broader range of topics than typical AI coverage — including several areas where he said things that don't fit cleanly into the standard "AI is amazing" or "AI is dangerous" narratives. This article works through the key themes.

1. Creativity and Copyright

What Sora and the new image model demonstrate

Altman demonstrated two examples on stage. First, he had Sora generate a video of him making "a shocking revelation at TED," with reasonably coherent results. Second, he asked the new image model (built on GPT-4's reasoning layer) to visualize the difference between intelligence and consciousness. The result was a simple but conceptually accurate diagram that, Altman argued, reflects the model's actual reasoning about the concept rather than surface pattern matching.

The copyright question that was directly raised

A journalist in the opening session noted that ChatGPT had mimicked her presentation style without her consent, and challenged Altman on this. His answer: current OpenAI image models won't generate content in the style of specific living artists — there are guardrails. But generating in the style of a "studio," "art movement," or "atmosphere" remains possible.

Where the line is, and isn't

Altman's position:

  • Copying existing works is not permitted
  • "Inspiration," in the way human artists draw on other artists, is difficult to restrict and may not be appropriate to restrict
  • The level of permissible style transfer is still being worked out through new legal and business models
  • Revenue sharing with named artists who opt in is a model he supports in principle

The honest framing: the existing copyright framework wasn't designed for AI, and new models are needed. Altman acknowledged this is a real problem, not a solved one.

The emotional divide

Altman noted that creator responses fall into two camps: "my work and livelihood are being stolen" versus "my work is being amplified and extended." He argued that moving people from the first camp to the second camp — which requires real revenue-sharing mechanisms, not just assurances — is necessary for AI to develop a sustainable relationship with creative industries.


2. AI Development Competition: Open Source and the DeepSeek Moment

DeepSeek's significance

When DeepSeek released a competitive open-source model at substantially lower development cost than expected, it raised a genuine question: does OpenAI's massive capital advantage translate to proportional capability advantages? Altman's response was indirect but revealing: he's still chasing GPUs, which suggests compute constraints remain real even for OpenAI.

The open-source commitment

Altman announced that OpenAI will release a powerful open-source model — targeting near-frontier performance, intended to surpass current open-source options. He acknowledged OpenAI was late to this. The explanation: early caution about releasing very capable models before understanding the safety implications. The assessment now: the global understanding of AI is mature enough that frontier-adjacent open-source is appropriate.

He also noted that next year he expects to face criticism for misuse of the open-source model they're releasing. His view: some misuse risk is inherent in open release, and the benefits to the ecosystem outweigh it.

User growth

Altman referenced "500 million weekly active users" as a figure previously disclosed. He implied this has continued to grow rapidly — while declining to give updated numbers.

3. What Actually Worries Altman About Safety

The risks Altman named directly:

  • Misuse of highly capable models at scale
  • Bioterrorism: new biological weapons
  • Cybersecurity: enabling sophisticated attacks
  • Uncontrolled self-improvement: AI autonomously improving itself beyond human oversight

The agentic AI problem

The risk Altman identified as the most important new safety challenge: agentic AI. Models that take extended autonomous actions — browsing the web, writing and executing code, sending emails, managing files — introduce a category of risk that static language models don't have. An AI that makes a mistake in a conversation is correctable. An AI that takes 50 automated actions before a human checks the output requires different oversight mechanisms.

What OpenAI actually does

Altman described the "Preparedness Framework" — an internal process for evaluating dangerous capabilities before release. He noted that some safety team members have left, acknowledged there are diverse views on AI safety internally and externally, and pointed to OpenAI's record (10% of world population using the product with no major incidents) as evidence of reasonable safety management. He also acknowledged that past performance doesn't guarantee future safety as capabilities scale.

4. AI Models as Products, Not Just Capabilities

The model commoditization thesis

Altman made a point worth noting for enterprise buyers: he expects many very capable AI models to exist and partially commoditize. OpenAI's competitive position won't come from having the only capable model — it will come from having the best integrated product.

What this means: memory functions that learn your context over time, seamless integration of image generation and web search in a single workflow, and increasingly personalized experiences are the differentiation layer. The underlying model quality matters, but product integration matters more for typical business users.

The AI companion dimension

Altman described a future where AI knows you well enough to actively surface opportunities — identifying skills you haven't realized you have, connecting you with relevant information before you ask. His reference was the film "Her" — an AI that functions as a genuine personal extension. He framed this as closer than most people expect.

5. Science as the Highest-Value Application

Altman said the area he's most excited about personally is AI for science. His reasoning: the most important driver of human progress is new scientific discoveries. AI that accelerates the discovery cycle — compressing the time from hypothesis to experimental result to published finding — has compounding effects across medicine, energy, and materials.

Specific near-term predictions:

  • Room-temperature superconductivity: physically possible, AI-assisted materials research might get there
  • "Meaningful progress on disease" within years
  • Software engineering: continued dramatic acceleration, with tasks that once took engineers years completed in hours

6. Governance: Who Decides What AI Should Do?

Altman expressed skepticism toward elite summit-based governance models (a small group of experts deciding AI's direction). His preferred alternative: AI systems that directly engage with broad populations to learn their values and preferences, then use that input to calibrate their own guardrails.

The argument: AI makes it possible to aggregate values at a scale that was previously impossible. Rather than convening 50 experts, you could have AI systems engage millions of people, identify genuine areas of consensus and disagreement, and build more legitimate governance frameworks from that input.

He described AI as potentially enabling better collective decision-making — not just executing on individual preferences, but helping people understand the implications of what they're requesting.

7. On Personal Accountability and OpenAI's Organizational Evolution

Altman was directly asked: who gave you the moral authority to build technology that could shape the fate of humanity, and how will you take personal responsibility if you're wrong?

His response: he's one player in this, not the sole decision-maker. OpenAI's explicit mission is "develop AGI and ensure it benefits humanity broadly." He's proud of the record so far. He acknowledged the tension between the original "open" positioning and the current reality of a heavily capitalized company, but argued the core mission has remained consistent even as tactics changed.

On the "corrupted by power" framing (referencing Elon Musk's "Ring of Power" characterization): Altman challenged the interviewer to identify specific behaviors that constitute corruption. His view is that he's been relatively consistent — though he acknowledged he will make mistakes, probably has made mistakes, and will continue to be criticized.

Summary

The key points from Altman's TED appearance that matter for business and AI strategy:

  • Creator compensation: New revenue models needed; opt-in artist revenue sharing is the right direction
  • Open source: OpenAI will release a near-frontier model; was late to this, now committed
  • Biggest safety concern: Agentic AI, autonomous action at scale
  • Competitive moat: Best product, not just best model
  • Most valuable application: AI for scientific discovery
  • Governance: AI-facilitated broad participation, not elite summits

Reference: https://www.youtube.com/watch?v=5MWT_doo68k&t=7s
