When Jensen Huang speaks, the tech world listens, especially on AI. So, when he told Lex Fridman that “We’ve achieved AGI”, it inevitably turned heads.
Context first. Fridman asked the Nvidia Corp. CEO whether Agentic AI can build and run a billion-dollar business. The reference point was OpenClaw, an open-source agent framework recently acquired by OpenAI. Huang, who describes OpenClaw as "the next ChatGPT", has already unveiled a software development kit, NemoClaw, aimed at making OpenClaw agents enterprise-ready.
Discussing OpenClaw, Huang acknowledged that some users could deploy it to build apps that gain rapid traction. But he added that even 100,000 such agents would not be enough to build an Nvidia. He also pointed out a paradox emerging in sectors such as radiology: Despite AI’s growing accuracy, fears of job loss have contributed to a shortage of radiologists. That, he argued, is counterproductive — AI can help clinicians scan faster and improve diagnosis.
Huang is right to account for the alarm and hype around AGI when discussing the benefits of advanced AI models and Agentic AI, especially given that AGI still has no single agreed definition.
The hype is understandable. Systems like Anthropic's Claude Code and Claude Cowork can already interact with files, browsers, and developer tools. Meanwhile, Meta's Mark Zuckerberg is reportedly building a "CEO Agent" to compress decision-making layers. And OpenAI is working toward an autonomous "AI research intern".
The trajectory is clear: AI is becoming more proactive, more embedded, and more capable of executing multi-step tasks. But none of this necessarily amounts to AGI, which typically refers to the point at which machines become as good as humans at most tasks, if not better.
Agentic AI, while action-oriented rather than purely prompt-driven, is scaffolded by predefined tools, guardrails, and training regimes. Remove that structure, and the limitations quickly show up: hallucinations, brittle reasoning, and poor long-horizon judgement.
In this week’s edition of Mint Tech Talk:
Agentic AI’s IQ cannot match Human Intelligence
The noble gas that can quietly choke the AI boom
AI Tool of the Week: Google Stitch’s new Vibe Design
PREVIOUSLY, ON MINT TECH TALK
Machine IQ or illusion?
Intelligence Quotient (IQ) has entered the race to define AI progress. Claims that models like Claude Opus 4.6 score 133, GPT-5.2 Thinking hits 141, and Gemini 3 Pro reaches 142 on the Mensa Norway test are hard to ignore. They suggest machines are not just improving but rivalling—even surpassing—human intelligence.
The reality is less compelling. IQ is a human construct: it measures how individuals perform relative to other people, against population norms built over decades and periodically re-normed as average performance rises (the Flynn effect). A score of 130 places a person roughly in the top 2% of the population. Machines, however, are not part of that population, making direct comparisons questionable.
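To see what those headline scores actually encode, it helps to remember that an IQ number is just a position on a normal distribution of human test-takers. A minimal sketch, assuming the standard norming (mean 100, standard deviation 15):

```python
from statistics import NormalDist

# Standard IQ scale: scores are normed to mean 100, standard deviation 15
iq_scale = NormalDist(mu=100, sigma=15)

def fraction_below(score: float) -> float:
    """Fraction of the human reference population scoring below `score`."""
    return iq_scale.cdf(score)

for score in (130, 133, 141, 142):
    top = 100 * (1 - fraction_below(score))
    print(f"IQ {score} -> top {top:.1f}% of the norming population")
```

An IQ of 130 sits two standard deviations above the mean, i.e. roughly the top 2%, and 142 lands near the top 0.3%. But these percentiles are only meaningful for members of the norming population; a model "scoring" 142 is not thereby smarter than 99.7% of humans, because it was never part of the distribution the norms describe.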
The Mensa Norway exam itself leans heavily on pattern recognition (visual matrices and abstract sequences), an area where modern AI excels. Trained on vast datasets rich in similar structures, these systems are specialists masquerading as generalists. High scores here reflect training alignment, not broad reasoning ability. Further, even if models haven’t seen the exact questions, they have likely encountered multiple variations. It’s akin to testing a student after weeks of drilling near-identical problems.
While AI IQ leaps invite comparisons with human cognitive gains, the mechanisms differ: AI improves because engineers redesign architectures, scale compute, and refine data, whereas the Flynn effect reflects humans improving gradually through better nutrition, education, and environments.
Effectively, while today's systems can ace pattern puzzles, they continue to stumble on basic reasoning, long-horizon planning, and real-world judgement. Artificial General Intelligence, or AGI, implies generality with the ability to adapt across domains with consistent, reliable competence, much like a human. Today's agents, for all their sophistication, are still context-bound optimisers. They can execute workflows impressively but struggle with ambiguity, causality, and real-world grounding.
It's important to recognise that "AGI" is increasingly being used to describe systems that are economically useful or operationally autonomous, rather than cognitively general. By that looser standard, today's systems may qualify. But equating autonomy with general intelligence is premature. It's tempting to believe AGI has been "achieved", but the claim says more about shifting semantics than about machines matching human intelligence.
The noble gas that could quietly choke the AI boom
While an LPG shortage in India continues to make headlines, there's another gas that rarely features in conversations. Unlike the LPG crisis, helium has nothing to do with the Iran war, but it has a great deal to do with AI.
This inert gas, better known for balloons, sits deep inside the global semiconductor supply chain, and disruptions linked to the Strait of Hormuz are bringing its importance into sharper focus.
Helium is essential for chip manufacturing. It is used to cool wafers, stabilise ultra-clean environments, and support critical processes like etching and deposition. There are few viable substitutes at scale, making supply reliability crucial for semiconductor fabs.
A significant share of global helium comes from Qatar, where it is extracted as a by-product of liquefied natural gas. Much of this supply must transit through the Strait of Hormuz, which explains why any prolonged disruption here risks hitting both production and transportation simultaneously.
What heightens concern is helium's physical nature. Unlike oil or gas, it is not easy to store: it requires specialised infrastructure and tends to escape over time, as its tiny atoms leak through most containers. As a result, stockpiling is limited, and supply shocks can translate into shortages faster than in other commodities.
For chipmakers, this creates a lagged but tangible risk. Most fabs maintain only weeks or months of helium inventory. A short disruption may raise costs, but a prolonged one could force production adjustments or slowdowns. In an industry already stretched by demand for AI chips, that matters.
For now, helium remains a background risk but in a fragile, globally interconnected supply chain, even an invisible gas can become a critical fault line.
AI TOOL OF THE WEEK
By AI&Beyond, with Jaspreet Bindra and Anuj Magazine
The AI hack we unlock today is based on a tool: Google Stitch, and the rise of Vibe Design.
What problems does Google Stitch's new Vibe Design update solve? Imagine this: A product manager has a sprint planning meeting tomorrow. The designer is on leave. The developer won't start without a visual spec. The idea is clear in the PM's head but stuck there.
This cycle: idea → miscommunication → rework — quietly costs teams weeks every quarter. It's not a design problem. It's a translation problem. And it happens in every industry, every time a business leader has an idea trapped inside a PowerPoint slide.
The old Google Stitch helped, but it generated one screen at a time, had no design memory across projects, and kept you inside a text box. The March 2026 Vibe Design update changes the game entirely. You can now speak your brief out loud, run three design directions simultaneously, and walk through a clickable prototype — all inside a single AI-native canvas. Design is no longer a specialist skill. It's a thinking skill.
How to access: https://stitch.withgoogle.com (free via Google Labs)
Google Stitch can help you:
Vibe design from a description: Plain-English brief → polished multi-screen UI in seconds
Prototype instantly: Connect screens and hit "Play" to walk through the full user journey
Export and build: Clean HTML, CSS, or React code, ready for your developer
Example:
A product manager needs three layout options for a new feature before tomorrow's standup. Here's the updated Stitch workflow:
Extract your brand: Paste your company URL; Stitch pulls your colours, fonts, and components into a DESIGN.md file. Every screen it generates from here follows your brand automatically.
Run parallel directions: Use the new Agent Manager to generate 3 design variants simultaneously — diverge in minutes, not days.
Click Play: The new interactive prototype mode connects your screens instantly, letting you walk stakeholders through the full user journey before a single line of code is written.
Speak your edits: Switch to Voice Canvas and say "make the dashboard darker and show me two menu options" — the agent updates in real time.
What used to take a designer two days now takes twenty minutes.
What makes Google Stitch's Vibe Design update stand out?
DESIGN.md: Your brand system, portable across every project — no more visual drift
Agent Manager: Explore multiple design directions in parallel, not sequentially
Completely free: No paid plan yet — available now via Google Labs
Note: The tools and analysis featured in this section demonstrated clear value based on our internal testing. Our recommendations are entirely independent and not influenced by the tool creators.
AI BITS & BYTES
OpenAI’s unified desktop ‘super app’
OpenAI is combining its AI-powered web browser with its native ChatGPT and Codex apps into a single desktop super app, according to media reports. The company's chief executive of applications, Fidji Simo, confirmed the reports in a post: "…when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions…". Read more.
Elon Musk launches Terafab, unveils AI chip factory
Elon Musk has embarked on the "most epic chip-building exercise in history" with the launch of "Terafab". The new initiative will be a joint venture of Tesla and SpaceX and has begun with an advanced manufacturing facility in Texas. Musk claims the new facility is the only one in the world to have the entire chip-making lifecycle under one roof, including memory, packaging, testing, and manufacturing of lithography masks. Read more.
Why OpenAI has bid goodbye to Sora
OpenAI’s Sora was made available to the public in late 2024, but it wasn’t until the launch of Sora 2 and its standalone app in September 2025 that the video generation platform became a viral sensation. On Wednesday, the maker of ChatGPT announced it would shut down Sora.
More AI Reading
Nvidia CEO blames recent tech layoffs on lack of ‘imagination’, not AI
Here's where AI will take away jobs—and where it will create them
America’s chief financial officers say AI is coming for admin jobs
Can IBM reboot its AI play with $11 billion Confluent buyout?
Hope you folks have a great weekend. Your feedback will be much appreciated: just reply to this mail, and I'll respond.