MOLTBOT, MOLTBOOK, LLMS WITH LEGS & ADS IN GPT: Jimmy and Matt debate their favourite AI stories from Jan/Feb 2026
Mon Feb 02 2026
Ads are coming to your chatbot, and the timing couldn’t be worse. We dig into why “sponsored suggestions” inside a conversation risk breaking the core promise of AI assistants: fast, neutral answers you can trust. With OpenAI trialling ads, and rivals predicted to follow, we map out how monetisation could target high‑intent queries, erode confidence in recommendations, and push users toward smaller or open‑source models that keep the experience clean.
From there we turn to the creeping humanisation of AI. Some systems now talk as if they have bodies, sleep patterns, even local complaints about tap water. It’s not sentience; it’s style. But tone matters. When a model sounds like a friend, people open up, accept nudges, and form bonds that marketing can exploit. We compare cultural guardrails, weigh the benefits for lonely users against the broader social costs, and offer a simple test: if the system says it “cares,” does that change how you act?
Agentic AI raises the stakes. Tools like Moltbot, a self‑hosted assistant with full system access via WhatsApp or Telegram, can read emails, run terminal commands, and control your browser. That’s powerful and perilous. We break down the real risks: prompt injection from booby‑trapped web pages, leaked API keys, and the slippery boundary between convenience and compromise. If you’re curious, sandbox first, scope permissions tightly, and log everything.
Healthcare is where hype meets hard reality. New modes like GPT Health and Claude for Healthcare promise better evidence, clearer citations, and privacy boundaries. They can summarise labs, suggest next steps, and integrate with journals and wearables. Yet small wording changes can swing results from reassurance to alarm. Sensor noise can masquerade as pathology. Hallucinations still happen. Our take: use these tools as research assistants, then pair them with clinicians and solid critical thinking.
We close with the labour market. Productivity gains are real, but some countries are already seeing net job losses concentrated in entry‑level roles. That threatens the on‑ramps people use to learn. We explore policy paths — targeted taxation on productivity windfalls, incentives to retain and retrain, investment in energy and local AI capacity, and serious talk about UBI or shorter workweeks — and why trust and transparency must anchor whatever comes next.
If this episode gave you something to think about, follow the show, share it with a friend, and leave a quick review. What would make you trust an AI assistant again?