🎉 Make America #1 In AI, OpenAI Chips, Humanoid Robots Build BMWs, Listen While Speaking, Meta Goes Hollywood
Trump Pushing AI In America, OpenAI Develops Its Own Chips, Figure AI Robots on Factory Line, New Language Models Listen While Speaking, Meta All In on AI
Welcome to this week’s edition of AImpulse, a five point summary of the most significant advancements in the world of Artificial Intelligence.
Here’s the pulse on this week’s top stories:
What’s Happening: Former U.S. President Donald Trump’s allies are reportedly drafting an AI executive order aimed at boosting military AI development, rolling back current regulations, and more — signaling a potential shift in the country’s AI policy if the party returns to the White House.
The details:
The document, obtained by The Washington Post, includes a ‘Make America First in AI’ section, calling for “Manhattan Projects” to advance military AI capabilities.
It also proposes creating ‘industry-led’ agencies to evaluate models and protect systems from foreign threats.
The plan would immediately review and eliminate ‘burdensome regulations’ on AI development, and repeal President Biden’s AI executive order.
J.D. Vance has previously indicated support for open-source AI and hands-off regulation.
Why it matters: Given how quickly AI is accelerating, it’s not surprising that it has become a political issue — and the views of Trump’s camp are a stark contrast to the current administration's slower, safety-focused approach. The upcoming 2024 election could mark a pivotal moment for the future of AI regulation in the U.S.
What’s Happening: OpenAI is reportedly in talks with chip designers like Broadcom to develop its own AI chip, aiming to reduce dependence on scarce and expensive GPUs from Nvidia.
The details:
OpenAI has already hired former Google employees who worked on the Tensor Processing Unit (TPU), Google’s custom AI chip.
The ChatGPT maker has been talking to chip designers including Broadcom, though production of the new chip isn't expected until 2026 at the earliest.
The company is exploring various chip packaging and memory components to optimize performance.
OpenAI is also considering creating new companies with outside investors to finance infrastructure like data centers.
Why it matters: This move isn't just about OpenAI playing chip designer — it's a power play. By developing its own chips, OpenAI could break free from the GPU shortage bottleneck, potentially supercharging its mission towards AGI.
What’s Happening: OpenAI-backed startup Figure AI just showed off Figure 02, its next-generation AI-powered humanoid robot — capable of completely autonomous work in complex environments like a BMW factory.
The details:
Figure 02 uses OpenAI’s AI models for speech-to-speech reasoning, allowing the humanoid robot to have full conversations with humans.
A Vision Language Model (VLM) enables the robot to make quick, common-sense decisions based on visual input and self-correct errors.
Six RGB cameras provide the robot with 360-degree vision to help it navigate the real world.
The robot stands 5'6" and weighs 132 lbs, with a 44 lb lifting capacity and a 20-hour runtime thanks to a custom 2.25 kWh battery pack.
Why it matters: The humanoid robot race is intensifying, with Figure CEO Brett Adcock claiming that Figure 02 is now the “most advanced humanoid on the planet” — a direct challenge to Elon Musk and Tesla Optimus. While the world now waits for Elon’s response, Figure has one ace up its sleeve: its OpenAI partnership.
What’s Happening: AI researchers just developed a new Listening-While-Speaking Language Model (LSLM) that can listen and speak simultaneously — advancing real-time, interactive speech-based AI conversations.
The details:
The new model, called the Listening-while-Speaking Language Model (LSLM), enables full-duplex modeling in interactive speech-language models.
LSLM uses a token-based decoder-only TTS for speech generation and a streaming self-supervised learning encoder for real-time audio input.
The system can detect turn-taking in real-time and respond to interruptions, a key feature of natural conversation.
The model demonstrated robustness to noise and sensitivity to diverse instructions in experiments.
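The core idea behind full-duplex modeling — generating speech tokens while simultaneously monitoring the incoming audio stream for interruptions — can be illustrated with a toy sketch. This is a hypothetical simplification for intuition only: the class, method names, and string-based "audio chunks" below are invented stand-ins for LSLM's actual streaming encoder and decoder-only TTS.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FullDuplexAgent:
    """Toy model of listening-while-speaking turn-taking."""
    spoken: List[str] = field(default_factory=list)

    def listener_detects_interrupt(self, audio_chunk: str) -> bool:
        # Stand-in for the streaming self-supervised encoder plus an
        # interruption detector; real systems classify audio features.
        return "INTERRUPT" in audio_chunk

    def speak(self, response_tokens: List[str], incoming_audio: List[str]) -> List[str]:
        """Emit one speech token per step, checking incoming audio each time."""
        for token, chunk in zip(response_tokens, incoming_audio):
            if self.listener_detects_interrupt(chunk):
                break  # yield the floor to the human speaker mid-utterance
            self.spoken.append(token)
        return self.spoken

agent = FullDuplexAgent()
out = agent.speak(
    response_tokens=["Hello,", "how", "can", "I", "help?"],
    incoming_audio=["silence", "silence", "INTERRUPT: wait", "silence", "silence"],
)
print(out)  # ["Hello,", "how"]
```

The interleaved loop is the key point: listening and speaking share each timestep rather than alternating in strict turns, which is what distinguishes full-duplex models from conventional push-to-talk-style voice assistants.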
Why it matters: While OpenAI's recent Her-like advanced voice mode for ChatGPT inches us toward realistic AI conversations, LSLM leaps even further by enabling AI to process incoming speech WHILE talking. This could revolutionize human-AI interactions — making conversations with machines feel truly natural and responsive.
What’s Happening: Meta is reportedly offering millions to celebrities like Awkwafina, Judi Dench, and Keegan-Michael Key to use their voices in upcoming AI projects.
The details:
The AI voices would be used across Meta's platforms, including Facebook, Instagram, and Meta Ray-Ban smart glasses.
Meta is reportedly rushing to secure deals before its Meta Connect conference in September.
Contracts are reportedly temporary, with actors having the option to renew.
Meta has previously experimented with celebrity-inspired chatbots, though that program has ended.
Why it matters: In our exclusive interview with Mark Zuckerberg, he predicted that “we're going to live in a world where there are going to be hundreds of millions or billions of different AI agents”. If that prediction holds true, celebrity voice-powered AI could be part of Meta's next big play to drive user engagement and growth across its platforms.