From TIMEWELL
This is Hamamoto from TIMEWELL Inc.
Our Daily Lives Are Constantly Changing Alongside Technology
Our daily lives are constantly changing alongside technological evolution. The way we consume content, which sits at the center of how we access information and communicate, is about to undergo a dramatic transformation. Smartphones have long been indispensable to our lives, but what comes next?
Andrew Bosworth (Boz), who has led technical development at Meta (formerly Facebook) for many years, has presented a vision of five and ten years from now — one in which augmented reality (AR), virtual reality (VR), and artificial intelligence (AI) converge to fundamentally change how we experience content, how we work, and how we engage with society.
This article, based on an interview with Boz, digs deep into the future of content consumption that AR/VR/MR and AI are opening up, Meta's strategy for realizing that future, and the challenges and possibilities we will face. We explore the leading edge of how technology is transforming our work, our lives, and our relationship with the world.
- Beyond the Smartphone: The New Content Experiences That AR/VR Will Open Up
- The Essence of the AI Revolution and Meta's Competitive Advantage: User-Centered Thinking and Technology Shift Strategy
- The Fusion of Hardware, Software, and AI: Meta Reality Labs' Challenge and Open-Source Strategy
- Summary
Beyond the Smartphone: The New Content Experiences That AR/VR Will Open Up
Today, the smartphone sits at the center of our digital lives. Information search, communication, entertainment, shopping, work — all of these activities take place on this small device. But according to Meta's CTO, Andrew Bosworth (Boz), this "smartphone-centric" era is already approaching saturation, and a transition to the next computing interface has begun. So what comes after the smartphone, and what kind of content experience will it bring?
Boz is convinced that ten years from now, how we access content will diversify dramatically from today's smartphone-centric situation. The development he highlights above all others is the practical realization of augmented reality (AR) glasses.
Turning Our Field of Vision into an Interface
These devices — shaped like ordinary glasses — overlay digital information on the real world, with the potential to turn our very field of vision into an interface. They could enable things like displaying navigation information as you walk through a city, instantly checking reviews for a product right in front of you, or translating a foreign-language sign in real time.
Beyond mere information display, Boz is anticipating deeper, more immersive experiences. Today, immersive experiences like those offered by the giant spherical venue "Sphere" in Las Vegas are only available if you travel to a specific location. In the future, however, it is expected that you will be able to have experiences that feel as if you are really there — from the comfort of your own home.
"When I want to watch a basketball game, I don't just want to watch the broadcast — I want to watch it with my father, as if we're courtside. There should be a better way than buying expensive tickets," Boz says. This suggests that VR technology and advanced networking will make it possible to share experiences across physical distance.
Looking at the nearer future — five years from now — the picture becomes somewhat more complex. Boz predicts that devices called smart glasses, AI glasses, and display glasses will appear on the market, but he does not expect them all to perform at the same level or achieve the same degree of adoption. Some devices will be very high-performance and expensive — high-end models — while others will be compact with more modest resolution but designed to be worn continuously on your face. "They may not be the kind of device you work on, but they'll be good enough to check light content in idle moments during commutes," he notes.
In other words, at the five-year mark, devices that offer an experience rich enough to fully replace today's smartphone will still be limited, and the quality of technology and experience will likely be unevenly distributed among users. What matters, however, is that new experiences unlike anything before — such as having an AI assistant by your side at all times — will begin to gradually spread.
What Boz Is Looking Forward To
Boz's hope is that devices positioned in the middle ground, providing entirely new experiences that were impossible with previous devices, will gradually grow in number. This may be where the real value of MR (mixed reality) and VR lies.
The Essence of the AI Revolution and Meta's Competitive Advantage: User-Centered Thinking and Technology Shift Strategy
Looking back at Andrew Bosworth's career, he has always excelled at accurately identifying major technology shifts and translating them into new product experiences. The prime example is the development of the News Feed in the early days of Facebook.
This was a breakthrough feature born from combining the new concept of social media, the new platform of mobile, and AI (which at the time may have been considered a relatively mature technology). According to Boz, what matters is not simply jumping on new technology, but deeply immersing yourself in the fundamental questions: What are people trying to do? What do they want? It means understanding the challenges and desires that users face, and flexibly using every available tool (technology) to solve them. This "user-centered thinking" is, he analyzes, one of the reasons Meta (formerly Facebook) has been successful.
Becoming too fixated on technology itself carries the risk of either missing the wave of a particular technology trend or clinging to a wave that has already passed. It is precisely by maintaining the perspective of solving product problems — rather than developing technology for its own sake — that truly valuable innovation is born. The reason Boz and his team are particularly excited about the current AI revolution is that it is not just a technical advance — they have the real sense that it is "solving real, concrete problems."
Meta is working to apply the power of AI, in particular, to the evolution of user interfaces. More than ten years ago, the company began exploring what would come after the smartphone as an interface — ways to deliver information more naturally to human eyes and ears, and to convey human intent to machines without keyboards or touchscreens — ultimately reaching the conclusion that "wearing something on the face" is indispensable.
Neural Interfaces and Beyond
The vision is to secure access to eyes and ears, and in the future to enable even more intuitive operation through neural interfaces (technology that reads brainwaves and similar signals). Based on this clear vision, Meta has made enormous investments in AR/VR technology over the past decade.
However, realizing this vision involves many difficulties. First there are hardware challenges: you must develop from scratch a device that packs in all the necessary functions while also being attractive, lightweight, and affordably priced. But Boz notes that hardware development is only half the problem. The other half is "how do you use it?" — that is, software and interaction design.
We are completely accustomed to smartphones and use them as naturally as if they were part of our bodies. For a new device to offer convenience that surpasses that, simply porting over existing functions is not enough — a more natural and intuitive way of operating must be established.
For years, we have been accustomed to "direct manipulation" via mouse or touchscreen — an interaction model whose prototype was created in the 1960s. Departing from that convention and having the entire society learn a new operating method takes time. However, today's AI has the ability to infer intent from even vague user instructions and take the appropriate action from among a vast array of options. This has created the possibility that the operation of new devices like AR/VR glasses will become more intuitive and natural.
For example, instead of typing on a keyboard or navigating menus to find an app, you might simply speak to an AI assistant to accomplish your goal. Furthermore, as AI understands what the glasses see and hear, it could provide appropriate contextual support.
Hardware and Software — Two Mountains to Climb Together with AI
Meta was prepared to climb two great mountains — hardware and interaction design — but with the powerful reinforcement of AI, that journey is becoming more certain.
The Fusion of Hardware, Software, and AI: Meta Reality Labs' Challenge and Open-Source Strategy
The future of AR/VR and AI that Meta envisions is not mere fantasy. The company has a dedicated division called Reality Labs, has made massive investments over many years, and has been conducting the concrete research and development needed to realize this vision. The journey has been far from smooth — the company has faced many difficulties including technical challenges, market uncertainty, and questions of social acceptance. But Boz emphasizes that Meta has the firm conviction and strategy to see this challenging endeavor through.
At the foundation of Meta's efforts is the unwavering belief: "This change will certainly happen, and we will lead it." Boz quotes the words of chief scientist Michael Abrash: "Technology does not naturally evolve (the Myth of Technological Eventualism)." "Many people say AR will happen someday, but that is wrong. It will not happen unless someone invests money and time and actually executes." Meta chose to become that "someone" — not simply as a business opportunity, but because it sees involvement in the computing paradigm shift as "a once-in-a-generation opportunity at a historic inflection point on the level of Xerox PARC." This strong commitment is the driving force that sustains long-term investment even when short-term profitability is unclear.
Meta's Reality Labs is developing a diverse lineup of products that will shape the future computing interface. At its core are the "Quest" VR headset, the "Ray-Ban Meta" smart glasses, and "Orion," which aims to be full-function AR glasses.
Interestingly, Ray-Ban Meta was originally being developed as "smart glasses" without AI functionality. But when Meta's large language model "Llama 3" appeared, the team abruptly changed course, integrated AI functionality, and brought it to market as "AI glasses." The hardware itself did not change dramatically from the previous generation (Ray-Ban Stories), but the interactions made possible by AI became far richer. For example, with the "Live AI" feature, the glasses recognize what the user is looking at and answer questions about it in real time. This is a good example of the user experience being enhanced by software and AI rather than by hardware evolution alone.
Orion: A Glimpse of the Post-Smartphone Era
Orion, a device oriented even further toward the future, hints more concretely at the possibilities of the "post-smartphone" era. In a demonstration Boz described, a user looks at breakfast ingredients, Orion recognizes them, and suggests recipes that can be made with those ingredients.
Initially, AR glasses like Orion were conceived on the premise of an "app model" similar to smartphones — that is, the familiar interaction method where a user launches a specific app (email, messaging, games, etc.) and performs tasks. Of course, basic functions like calls, email, and social media will continue to be important. But Boz points out that the evolution of AI has the potential to overturn this very app-centric way of thinking.
In the future, rather than thinking "I want to open Instagram," when a user feels "I'm a bit bored," the device might understand the user's situation and preferences and proactively suggest: "Would you like to see the latest highlights from your favorite basketball team?" AI acting as an "intelligent assistant" that mediates the interaction between user and device could invert the traditional app model.
This "reversal of the app model" is a deeply suggestive perspective. Consider the current situation:
Current app model: When a user has a specific goal (e.g., wanting to listen to music), they must first decide themselves which app to use (e.g., Spotify or Apple Music), then launch that app and operate it.
The AI-Centric Future Model
The AI-centric future model: The user simply tells the AI assistant "play this song." The AI considers the user's preferences, their subscribed services, sound quality, latency, and other factors, and plays the music in the optimal way. If the desired song is not available on one service, it might suggest alternatives or offer other services.
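The routing step described above can be sketched in a few lines. This is a purely illustrative toy, not a real assistant API: the service names, fields, and scoring rule are assumptions invented for the example.

```python
# Hypothetical sketch of the AI-centric model: the assistant, not the user,
# decides which service fulfils a request. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Service:
    name: str
    subscribed: bool   # does the user already pay for it?
    catalog: set       # songs this service can play
    quality: int       # e.g. max bitrate; higher is better

def route_request(song: str, services: list) -> str:
    """Pick the best way to play `song`, or suggest an alternative."""
    candidates = [s for s in services if song in s.catalog and s.subscribed]
    if candidates:
        # Prefer the subscribed service with the best quality.
        best = max(candidates, key=lambda s: s.quality)
        return f"Playing '{song}' on {best.name}"
    # Fallback: the song exists only on a service the user doesn't subscribe to.
    fallback = [s for s in services if song in s.catalog]
    if fallback:
        return f"'{song}' is on {fallback[0].name}. Try it there?"
    return f"Couldn't find '{song}' anywhere."

services = [
    Service("ServiceA", subscribed=True,  catalog={"Song X"}, quality=2),
    Service("ServiceB", subscribed=False, catalog={"Song Y"}, quality=3),
]
print(route_request("Song X", services))  # Playing 'Song X' on ServiceA
print(route_request("Song Y", services))  # 'Song Y' is on ServiceB. Try it there?
```

The point of the sketch is where the decision lives: the user states an intent, and the selection among providers happens inside the assistant rather than in the user's head.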
The impact of this shift is immeasurable. Users no longer need to constantly think about which provider (app) to use — they can achieve their goals more seamlessly. On the other hand, this presents a major challenge for existing app providers. The brand value they have built up would relatively decline, and instead the quality of AI recommendation — that is, actual "performance," "value," and "price" — would become more important.
Companies will be increasingly pressed to focus on improving the quality of their products themselves, so that their services are chosen by AI. This may be a harsh change for companies that have relied on brand power, but for consumers it means an environment where better products and services are more likely to emerge.
Boz sees this AI-driven change to the app model as similar to the process by which Google Search once changed the web. Before Google, accessing websites depended on portal site directories and similar mechanisms; after Google appeared, "ranking at the top of search results (SEO)" became most important. Users' search queries came to determine which businesses succeeded. Similarly, users' "queries (requests)" to AI assistants may come to determine which services and features are used and developed.
Building a developer ecosystem remains a major challenge in realizing this AI-centric future. Even if new AR/VR devices emerge, they will not gain traction without compelling apps and services. But here too, AI may be part of the solution.
Boz predicts that as AI assistants spread, a large number of user requests that AI cannot handle will emerge. For example, if a user asks AI to "find and book a well-regarded plumber nearby," completing the booking directly is difficult for AI today. Boz sees these "failure cases" of AI as potential "gold mines" for developers.
Meta Showing Developers Real Data
A platform like Meta could show developers concrete data: "Every day, 100,000 users are trying to use your service (a plumbing booking app, for example) but currently cannot. If you build a mechanism to integrate with AI (such as an API), you can capture these 100,000 potential customers." This creates a scenario where developers naturally gravitate toward areas with demand, and the ecosystem grows organically.
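The demand-surfacing mechanism described above amounts to counting requests that no integrated service could handle. The snippet below is an assumed sketch of that idea, not a real Meta API; the categories and log format are invented for illustration.

```python
# Illustrative sketch: counting assistant requests that no integration
# handled, to show developers where unmet demand lies.

from collections import Counter

def unmet_demand(request_log, handled_categories):
    """Count requests per category that fell outside the handled set."""
    misses = Counter()
    for category in request_log:
        if category not in handled_categories:
            misses[category] += 1
    return misses

log = ["plumber_booking", "music", "plumber_booking", "translation"]
print(unmet_demand(log, handled_categories={"music"}))
# Counter({'plumber_booking': 2, 'translation': 1})
```

Aggregated at platform scale, counts like these are exactly the "100,000 users tried and failed" signal Boz describes showing to developers.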
With this future in mind, Meta is actively promoting the open-sourcing of its in-house large language model "Llama." There are two major reasons for this.
First, to accelerate the overall progress of AI technology. Innovation comes not only from giant research labs but also from small laboratories and startups around the world. By open-sourcing Llama, Meta believes that more people can use, improve, and share new knowledge, enabling the community as a whole to progress faster. In fact, various derivative models and research outputs based on Llama have emerged.
Second, it is a strategic business decision. Boz believes that AI models will eventually become commoditized, that is, interchangeable goods competing mainly on price. A classical principle of business strategy holds: "commoditize your product's complements."
For Meta, AI is an important complement that enhances existing products and services: optimizing social feeds, targeting ads, adding new features to messaging apps. But the AI models themselves are not products Meta sells in competition with others. Therefore, making high-performance AI models widely available as open source not only increases the value of Meta's own products but also energizes the entire industry and supports startups and academia, and ultimately works in Meta's favor as well.
Open-Source Strategy Aligns Social Progress with Business Goals
In this way, the strength of Meta's open-source strategy lies in the remarkable alignment between contributing to the broader progress of society and its own business strategy.
Andrew Bosworth's vision suggests that the fusion of AR/VR/MR technology and AI has the potential to fundamentally transform our content consumption experiences and our interactions with devices. Ten years from now, more immersive, socially connected content experiences through diverse devices beyond the smartphone may become commonplace.
However, realizing this future involves many challenges that must be overcome: improving hardware performance, reducing costs, software development, and social acceptance. The "invention risk" — the possibility that the technology we want cannot be realized with current capabilities — also remains.
In particular, transitioning from smartphone-centric lifestyles and app models that we have become accustomed to over many years, toward new interfaces, involves learning costs and psychological resistance on the part of users. Privacy and regulatory concerns arising from the widespread use of devices that can constantly collect and record information also require careful discussion. The risk that progress could stall due to technical failures or social backlash cannot be ignored.
AI Evolving Faster Than Expected Could Help Overcome These Challenges
On the other hand, AI evolving faster than expected has the potential to become a powerful tailwind for overcoming these challenges. In particular, AI assistants that understand user intent and context and enable natural conversational operation could be a breakthrough in the interaction design of new devices. Furthermore, with AI becoming the primary interface between users and services, the way ecosystems centered around traditional app stores are built could change, creating new opportunities for developers as well.
Meta's open-source AI strategy, promoting Llama and others, is a rational and forward-looking approach that simultaneously accelerates technological innovation and contributes to Meta's own business through the commoditization of AI models. This strategy, in which social progress and business interests align, will play an important role in the development of the future AI ecosystem.
The future is full of uncertainty, but from Boz's words one senses a strong belief in the positive transformation that technology will bring, and an unwavering commitment to realizing it. The next computing revolution woven by AR/VR and AI, and how it will enrich our world, is something well worth watching closely.
Reference: https://www.youtube.com/watch?v=qEjTz2ZmxHI