This Week in Tech 56
$500 billion for US AI infrastructure, the Star Trek Holodeck inches closer, and humanoid robots are learning to walk just like us

Welcome to the cutting edge ⚔️
Read time: 7 min
Today’s Slate
AI is enabling tailored cancer vaccines at Oracle
OpenAI has a serious new competitor
Google explains AI agents in new whitepaper
Microsoft releases an AI material generator
Will we get a Star Trek Holodeck before we know it?
The TikTok ban didn’t last long - what’s next?
Humanoid robots start to walk like humans
The. Future. Is. Here.
Artificial Intelligence
At a glance
DeepSeek R1 Unveiled: DeepSeek, a Chinese AI lab, released R1, an AI model that rivals OpenAI’s o1 in solving math, programming, and science tasks.
What Makes It Different
Smarter and Accessible: With 671 billion parameters, R1 excels at complex reasoning and offers smaller versions that can even run on laptops.
Affordable: DeepSeek’s API is up to 95% cheaper than OpenAI’s, making advanced AI tools more accessible (see the sketch below).
Self-Checking AI: R1 fact-checks its work, delivering more reliable answers for everyday applications like education or troubleshooting.
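Since pricing is the headline difference, here is what calling R1 from code could look like. This is a minimal sketch that assumes DeepSeek exposes an OpenAI-compatible endpoint at api.deepseek.com and serves R1 under the model name "deepseek-reasoner", as its documentation describes; treat both as assumptions to verify against the current docs.

```python
# Minimal sketch: calling DeepSeek R1 through an OpenAI-compatible API.
# Assumptions: the `openai` Python package is installed, DEEPSEEK_API_KEY is set,
# and the endpoint/model names below match DeepSeek's current documentation.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # a DeepSeek key, not an OpenAI key
    base_url="https://api.deepseek.com",     # route requests to DeepSeek instead of OpenAI
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # R1, the reasoning model discussed above
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
)
print(response.choices[0].message.content)
```

Because the request shape mirrors OpenAI’s, switching an existing integration over is largely a matter of changing the base URL, key, and model name.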
The Drawbacks
Censorship: R1 filters out politically sensitive topics due to Chinese regulations, limiting its openness.
Global AI Competition: U.S. export restrictions and geopolitical tensions could influence how and where this tech is available.
Our vision
DeepSeek R1 brings reasoning AI closer to everyday use, from helping students solve tough math problems to simplifying coding tasks. Its affordability and smaller versions could put powerful AI tools in the hands of more people, making life easier in education, work, and creativity.
However, censorship and global politics might limit its full potential, raising questions about how AI access and ethics will shape our lives. In the near future, models like R1 could redefine how we learn and solve problems—provided innovation isn’t overshadowed by restrictions or control.
At a glance
$500B AI Initiative Announced: President Trump revealed "Stargate," a $500 billion AI infrastructure project led by OpenAI, Oracle, and SoftBank, aiming to position the U.S. as a leader in AI innovation.
Healthcare Breakthroughs: The initiative promises advancements like early cancer detection, personalized vaccines, and rapid disease treatment using AI-driven medical tools.
Economic Impact: Stargate, starting in Texas, is expected to create 100,000 jobs and solidify the U.S. tech industry against growing competition from China.
Policy Shift: The announcement follows the rollback of Biden-era AI regulations, with a focus on fostering free-market innovation and technological progress.
Our vision
Stargate could revolutionize healthcare and strengthen America’s technological edge, making AI a cornerstone of innovation. In the near future, we anticipate faster medical breakthroughs and economic growth, but the challenge will lie in managing rapid advancements responsibly while ensuring the benefits reach everyone.
At a glance
What Are Agents?
Agents in generative AI go beyond language models by combining reasoning, tools, and external data to autonomously achieve goals. They are designed to interact with the real world, bridging the gap between AI's internal knowledge and external systems like APIs, databases, and user applications.
Core Components
Model: Serves as the brain, using techniques like Chain-of-Thought or Tree-of-Thought reasoning.
Tools: External APIs or systems, such as Extensions (for direct API calls), Functions (client-side execution), and Data Stores (real-time or structured data retrieval).
Orchestration Layer: Governs how agents reason, plan, and act iteratively until a task is complete (see the sketch below).
Applications
Agents can fetch data, complete actions, and plan complex workflows, with use cases spanning customer service, smart home automation, and research. Developers can fine-tune agents for specific needs using techniques like in-context learning and retrieval-augmented generation (RAG).
Platforms
Google’s Vertex AI platform simplifies building production-grade agents with prebuilt tools for task delegation, debugging, and continuous improvement.
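To make the model/tools/orchestration split concrete, here is a deliberately simplified agent loop in Python. It is a sketch of the pattern the whitepaper describes, not Google’s or Vertex AI’s actual API; call_model and search_web are hypothetical placeholders.

```python
# Hypothetical sketch of an agent's orchestration layer: the model reasons, picks a
# tool, observes the result, and repeats until it can return a final answer.
# `call_model` and `search_web` are placeholders, not real library functions.

def call_model(history: list[str]) -> dict:
    """Stand-in for an LLM call that returns either a tool request or a final answer."""
    raise NotImplementedError("plug in your model API here")

def search_web(query: str) -> str:
    """Stand-in for a tool (an 'Extension' or 'Function' in the whitepaper's terms)."""
    raise NotImplementedError("plug in a real tool here")

TOOLS = {"search_web": search_web}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):                      # the iterative orchestration loop
        decision = call_model(history)              # model: the reasoning step
        if decision["type"] == "final_answer":
            return decision["content"]
        tool = TOOLS[decision["tool_name"]]         # tools: the bridge to external systems
        observation = tool(decision["tool_input"])
        history.append(f"Observed: {observation}")  # feed results back into the next step
    return "Stopped: step limit reached."
```

Real frameworks add retries, memory, and data-store lookups (the RAG piece), but the loop itself stays this simple at its core.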
Our vision
Agents represent a transformative leap in AI, combining decision-making, external data, and autonomous action to make technology smarter and more practical. Imagine AI that not only answers your questions but actively schedules, researches, or resolves tasks in real-time. For everyday life, this could mean personal assistants that dynamically manage calendars, fetch specific information, or automate workflows across devices.
As tools and reasoning frameworks evolve, agents will unlock new efficiencies, from personalized healthcare solutions to industry-wide optimization. This blend of autonomy and adaptability positions agents as central to the next wave of AI-driven applications, reshaping how humans interact with technology.
At a glance
What is MatterGen?
Microsoft’s MatterGen is an AI-driven generative model designed to create entirely new materials with specific properties. Using diffusion modeling, it begins with a random arrangement of atoms and refines them to form stable and usable materials.
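For intuition about the diffusion step, the sketch below shows the general shape of the loop: start from random atomic positions and repeatedly refine them toward a stable structure. It is a conceptual illustration only, not Microsoft’s code; predict_refinement stands in for MatterGen’s trained network, which also adjusts atom types and the crystal lattice under property constraints.

```python
# Conceptual sketch of diffusion-style generation: start from noise, refine step by step.
# `predict_refinement` is a placeholder for a trained model; this is not MatterGen's code.
import numpy as np

def predict_refinement(positions: np.ndarray, step: int) -> np.ndarray:
    """Placeholder for the learned model that estimates how to denoise the structure."""
    raise NotImplementedError("a trained diffusion model would go here")

def generate_structure(num_atoms: int = 8, num_steps: int = 1000) -> np.ndarray:
    positions = np.random.rand(num_atoms, 3)          # a random arrangement of atoms
    for step in reversed(range(num_steps)):           # gradually remove the "noise"
        positions = positions - predict_refinement(positions, step)
    return positions                                  # refined toward a stable, usable material
```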
Key Features:
Trained on over 600,000 materials from databases like the Materials Project.
Predicts novel and stable materials that align with specified mechanical, electronic, magnetic, and chemical properties.
Demonstrated real-world success by synthesizing a material with properties closely matching its AI prediction.
Applications:
Renewable energy: Designing materials for energy storage and carbon capture.
Sustainability: Developing biodegradable and environmentally friendly compounds.
Technology: Enabling breakthroughs in electronics, magnetic devices, and advanced manufacturing.
Our vision
MatterGen is not just a scientific tool; it's a gateway to revolutionary breakthroughs in material science. By combining AI's creative power with precision modeling, MatterGen promises to redefine how we innovate, making material discovery faster, more cost-effective, and tailored to global challenges like renewable energy and sustainability. Imagine a world where AI accelerates the development of biodegradable plastics or hyper-efficient batteries, shaping a future where innovation meets necessity.
As this technology matures, its applications could touch every corner of our lives, powering the next wave of technological and environmental solutions.
At a glance
What Happened: OpenAI CEO Sam Altman is set to brief U.S. officials on January 30 about new "PhD-level" AI super agents, which are advanced, goal-oriented models capable of autonomously solving highly complex tasks.
Capabilities:
Software Design: Super agents can independently build, test, and refine new software.
Global Logistics: They can optimize intricate supply chains, managing planes, ships, and trucks seamlessly.
Advanced Research: They conduct deep analyses and solve complex problems faster than any human team.
Concerns and Context:
Job displacement remains a top worry as companies like Meta and Salesforce plan to reduce hiring in favor of AI.
Congress is grappling with how to regulate AI, with a major AI infrastructure bill under discussion.
Our vision
Super agents represent a monumental leap forward, promising breakthroughs in areas like healthcare and logistics while posing significant risks to labor markets. With AI poised to transform industries, the challenge lies in ensuring these tools uplift society rather than displacing it. OpenAI’s focus on government collaboration suggests a step toward balancing innovation with oversight, but the societal implications of these AI systems demand proactive measures.
As these "super agents" begin reshaping industries, they’ll likely accelerate both technological advancements and critical conversations about equitable adoption.
Spatial Computing
At a glance
What Happened: Palmer Luckey, the founder of Oculus and a pioneer in VR, teased a major announcement related to virtual reality, set to be revealed in the coming weeks.
Speculation:
Luckey hinted last year that he was developing a new headset inspired by military requirements but potentially usable for non-military applications.
The headset could serve military simulations, battlefield visualizations, or drone FPV systems, possibly through Anduril Industries, his $14 billion defense company.
A collaboration with Meta is also on the table, following recent reconciliation between Luckey and the tech giant.
Context: Luckey, who was instrumental in popularizing modern VR, has stayed out of consumer-facing VR since leaving Facebook in 2017. His return, even tangentially, could signal a shift or innovation in the VR landscape.
Our vision
Imagine a VR headset that merges Luckey’s groundbreaking creativity with cutting-edge military-grade technology—sounds like the plot of a sci-fi thriller, doesn’t it? Whether it’s a revolutionary consumer device or a specialized tool for professionals, Luckey’s re-entry into the VR conversation is a spark the industry has been craving.
For a field sometimes accused of stagnating, this teaser feels like the first chapter in an exciting reboot. Could this announcement redefine VR like Oculus once did? Whatever’s coming, it promises to blend visionary ambition with a dash of unpredictability—hallmarks of Luckey’s legacy.
At a glance
Product Overview: The Solos AirGo Vision smart glasses integrate OpenAI’s GPT-4o to provide a personalized AI assistant experience. They feature swappable frames, built-in cameras, and audio playback capabilities.
Pros: AI assistant offers smart, location-based responses; app provides robust controls; swappable frames for versatility.
Cons: Poor camera and audio quality, challenging touch controls, and questionable privacy practices.
Biggest Competitor: Meta’s Ray-Ban smart glasses outperform in camera and speaker quality but come with Meta-specific privacy concerns.
Unique Selling Point: ChatGPT-powered AI assistant, a differentiator for users seeking advanced conversational AI on the go.
Our vision
Solos AirGo Vision smart glasses are part of the growing wave of wearables that aim to make AI assistance seamless and ever-present in daily life. While their current iteration lags behind competitors like Meta Ray-Ban in hardware quality, their use of GPT-4o positions them as a compelling option for those prioritizing smarter AI interactions.
Looking ahead, these glasses hint at a future where personalized, hands-free AI becomes as ubiquitous as a smartphone—though privacy and performance hurdles remain. If Solos can refine its design and tackle privacy concerns, it may find its niche in an increasingly crowded smart glasses market.
At a glance
Technology: Gaussian splatting, a novel method for creating photorealistic 3D content, replaces traditional polygon-based methods with fuzzy, data-rich blobs called "splats" (see the sketch after this list). This approach offers unprecedented visual fidelity and ease of capture.
Applications: Used by companies like Niantic, Meta, and Gracia AI for mapping, 3D video capture, and immersive experiences in AR/VR. Niantic incorporates it into Scaniverse for creating public 3D maps, while Meta explores its potential for metaverse spaces.
Features: Enables detailed, lifelike 3D recreations without prior limitations on lighting, clothing, or textures, reducing the complexity of video capture and enhancing creative flexibility.
Challenges: High data requirements and scaling issues for large environments like Meta’s Hyperscape app, which currently relies on cloud rendering. Efforts are underway to optimize file sizes and workflows.
Future Potential: Gaussian splatting combined with generative AI could enable everyday users to create detailed 3D environments with simple tools, paving the way for mainstream adoption of immersive technologies.
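For readers curious what a "splat" actually stores, here is an illustrative data layout under common Gaussian-splatting formulations: a position, an ellipsoid shape encoded as scale plus rotation, a color, and an opacity. The field names are made up for clarity, not taken from any particular codebase.

```python
# Illustrative layout for one "splat": a fuzzy 3D Gaussian blob rather than a polygon.
# Real implementations pack millions of these into GPU-friendly arrays and render them
# by projecting and alpha-blending ("splatting") them onto the screen.
from dataclasses import dataclass

@dataclass
class Splat:
    position: tuple[float, float, float]          # center of the Gaussian in 3D space
    scale: tuple[float, float, float]             # ellipsoid radii along its principal axes
    rotation: tuple[float, float, float, float]   # quaternion orienting the ellipsoid
    color: tuple[float, float, float]             # RGB (full models use spherical harmonics
                                                  # for view-dependent color)
    opacity: float                                # how strongly this blob contributes when blended

# A captured scene is simply a very large collection of these blobs:
scene: list[Splat] = []
```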
Our vision
Gaussian splatting is poised to transform AR and VR by democratizing 3D content creation, bringing us closer to the dream of a “multiplayer holodeck.” It promises photorealistic digital spaces for everything from personal memories to global collaboration. While companies like Meta and Niantic race to implement it at scale, challenges like data efficiency and hardware compatibility remain.
Still, this technology hints at a future where anyone can craft and explore hyper-realistic virtual worlds, seamlessly blending creativity and immersion into everyday life. It’s a bold step forward in redefining how we interact with both the digital and physical realms.
At a glance
Key Issue: TikTok, along with other ByteDance apps like CapCut and Marvel Snap, remains banned from U.S. app stores due to a federal mandate requiring ByteDance to sell its U.S. operations to an American company.
Current Status: Despite President Trump delaying the ban for 75 days after taking office, Apple and Google have not reinstated ByteDance apps on their stores, citing compliance with the Protecting Americans from Foreign Adversary Controlled Applications Act.
Impact on Users: U.S. users who already have ByteDance apps can still access them but cannot make in-app purchases, subscribe, or redownload the apps if deleted.
ByteDance’s Stance: ByteDance has refused to divest TikTok to a U.S. buyer, claiming it would rather shut down operations in the U.S. than comply.
Ownership Structure: While ByteDance is a Chinese-founded company, roughly 60% of it is owned by institutional investors like BlackRock and Carlyle Group, with its founders and global employees holding about 20% each.
Our vision
The TikTok ban represents a critical moment in the global tech landscape, highlighting the growing tension between government regulations, corporate independence, and consumer access to technology. This standoff underscores how geopolitical pressures can reshape access to popular digital platforms, sparking debate about data privacy, economic protectionism, and the balance of global tech ownership.
Moving forward, the industry may face more scrutiny over foreign ownership, reshaping how global companies operate in politically sensitive markets. For consumers, the situation could set a precedent for how governments regulate international tech giants in the future.
Robotics
At a glance
Next-Level Robotics: EngineAI’s SE01 humanoid robot wowed CES 2025 with its human-like gait, capable of tasks like heavy lifting, squats, push-ups, and precise assembly work.
High-Tech Design: Using NVIDIA Jetson Orin Nano, harmonic force control joints, and AI-powered learning, the SE01 achieves fluid, human-like movements with impressive adaptability.
Diverse Lineup: EngineAI also offers the SA01 for research ($5,250) and the PM01 for specialized tasks ($12,030). The SE01, designed for industrial and domestic use, is priced between $20,500–$27,350.
Scaling Up: EngineAI aims to produce over 1,000 robots annually in 2025, making advanced humanoid technology more accessible for practical use.
Beyond Sci-Fi: These robots are poised to transform industries, from firefighting to automated caregiving, pushing humanoid technology into everyday life.
Our vision
EngineAI’s SE01 delivers on the sci-fi dream of robots that seamlessly integrate into daily life, reminiscent of Westworld or I, Robot. With lifelike movements and versatile functionality, these humanoids are no longer fiction but practical tools for homes and industries.
As robotics scale, expect a future where tasks once reserved for humans—hazardous work, caregiving, or precision labor—are shared with humanoid partners. EngineAI’s innovations bring us closer to a world where advanced, human-like robots redefine the way we live and work.
How did you like this week's edition?
There’s a reason 400,000 professionals read this daily.
Join The AI Report, trusted by 400,000+ professionals at Google, Microsoft, and OpenAI. Get daily insights, tools, and strategies to master practical AI skills that drive results.