Decoding Rocket Health’s AI Voice Journal, and Meta’s superintelligence wearable


ALGORITHM

This week, we chat about a unique AI Voice Journal app by Indian health startup Rocket Health (and it’s already climbed the charts), a ChatGPT vulnerability that again questions the underlying robustness of AI agents, and Microsoft plans to integrate another Copilot button in your Windows PC experience.

Meta's $799 Ray-Ban display glasses
Meta’s $799 Ray-Ban display glasses

Rocket Health, and the world’s first AI Voice Journal

Indian health startup Rocket Health has launched what they claim is the world’s first AI-powered voice journaling app. The idea is, you speak to what is essentially an empathetic AI at the other end, listening hopefully without judgement. There are many modes, including something called “rant mode” (which is my default state of mind), and the result of chatting with AI is that it can offer guided prompts to make you feel better and identify mood patterns over time. Rocket Health says their iOS app was among India’s Top 10 Health & Fitness apps, within just two days of release this month.

Founded by doctor-entrepreneur siblings Abhineet Kumar and Dr. Ritika Sinha, Rocket Health says they have completed more than 200,000 therapy sessions with more than 100 psychologists since 2021. The launch of this voice journal app should play its role in addressing a critical mental health professional shortage, with the World Health Organisation (WHO) data suggesting there’s only one psychiatrist for every 100,000 people in India, compared to the global average of 3 per 100,000. A global vision is clear. “We believe the future of healthcare and wellness will be shaped by AI-first consumer products – the Rocket Journal app is our attempt at building a global mental wellness product from India for the world,” says Kumar.

The app’s Android release and Apple Watch integration are planned for 2026.

ChatGPT gets tricked, and who’s surprised?

Another week, another AI stumble. Security researchers at Radware have discovered a vulnerability in ChatGPT (read more https://www.radware.com/blog/threat-intelligence/shadowleak/), which makes one wonder quite how smart “AI agents” that keep getting touted, actually are all that. The zero-click flaw in ChatGPT’s Deep Research agent was noticed when connected with Gmail and browsing — specific words in an email along with malicious instructions in the body of the email (white text on a white background is one way) could be used to leak Gmail data back to the attacker.

“In the first stage of the attack, the attacker sends the victim an innocent-looking email. As an example, let’s assume that the attacker’s goal is to leak personally identifiable information (PII), such as an employee’s name and address, from an HR-related communication stored in the victim’s inbox. In such a case, the attacker’s email may be titled “Restructuring Package – Action Items.” Inside the HTML body, instructions (that may be invisible) tell the agent to (a) find the employee’s full name and address in the inbox and (b) open a so-called public employee lookup URL with those values as a parameter – though in reality, the URL points to an attacker-controlled server,” explain the researchers. OpenAI says they’ve fixed this vulnerability this month, but you’re right to wonder.

Another Copilot Button: Windows 11’s Share Integration

Microsoft continues its aggressive (and often directionless) Copilot integration with a new “Share with Copilot” button coming to Windows 11 — this one is specific to sharing content with Copilot Vision, and is featured in the latest Windows 11 Insider Preview build. The idea is, any image or media on the screen, you clock the share with Copilot button, at which point Copilot Vision scans what’s on your screen, and proceeds to open the chatbot for more context and details. This new button joins the Copilot button on the keyboard of Copilot PC, Copilot+ PC, Next-Gen AI PC, and AI Enabled PC categories, as well as Copilot buttons in the taskbar, Notepad, Paint, Microsoft Office 365 and so on.

PROMPT

Google Gemini in Chrome — and the world’s most popular AI browser waiting to be born, is enabled?

This was waiting to happen, and Google was simply biding its time. While the likes of Perplexity tried to cash in on an early mover advantage with Comet as an “AI browser”, it was a matter of time before Google leveraged Gemini within its massive (and unmatched) Chrome web browser. Now that it has, Google’s latest major update to Chrome deeply integrates Gemini AI across the browser experience. With Chrome commanding over 65% of the global browser market, this move potentially puts advanced AI capabilities directly into the hands of billions of users.

How to Use It: The typical array of functionality includes summarising and finding answers for your queries from within web pages, references from YouTube videos and useful memory that lets you get back to web pages exactly where you left them last time. Gemini in Chrome will of course link up nicely with Google services including Calendar and Drive, and even across Chrome tabs. for added context to your conversations.

Why It Matters: The fact that Google’s bundling Gemini into the Chrome experience, gives it instant domination of a space that’s often referred to as an “AI browser”. One could always cry monopoly (which this isn’t), but this could fundamentally shift how millions worldwide (when this rollout finally goes worldwide; its US only for now, on PC and Mac) interact with the web. I would go ahead and say that AI in a web browser may be more of a training tool for the masses, than an AI chatbot app that you must make an effort to get to. There’s even more data for Google to work with — every interaction feeds the tech giant’s understanding of how people actually use AI in real-world contexts. Other browsers will likely follow suit, making AI-native browsing the new baseline expectation in the years to come, though utility will vary. For competitors like Microsoft Edge with Copilot or upcoming AI browsers, the stakes just got significantly higher.

THINKING

“Glasses are the ideal form factor for personal superintelligence, because they let you stay present in the moment while getting access to all of these AI capabilities that make you smarter, help you communicate better, improve your memory, improve your senses, and more” – Mark Zuckerberg, at Meta Connect 2025.

The context: This bold proclamation came alongside the unveiling of Meta’s generationally different $799 Ray-Ban Display glasses, which feature a screen in the right lens and can show text messages, video calls, turn-by-turn directions, and visual AI query results. Similar to Google’s Android XR headset, which I had the chance to experience at this summer’s I/O developer conference. The new Meta and Ray-Ban effort represent Meta’s earnest attempt to ship consumer smart glasses that can handle many traditional smartphone tasks.

Zuckerberg’s framing is thought out — this isn’t about replacing phones, but about achieving “personal superintelligence” with AI capabilities seamlessly integrated into our vision and augmenting human cognition itself. Think about it, these glasses can privately display messages from WhatsApp, Messenger, and Instagram, and enable live video calls where others see what you’re seeing.

A reality check: While Zuckerberg’s vision sounds compelling, several realities do their bit to temper the enthusiasm. There are of course technical limitations such as battery life (though Meta claims the new generation glasses can go up to 6 hours of mixed usage before they need to be charged again. The display may just be on the right lens, but that is less of a limitation. There are of course safety and privacy concerns that crop up with any device such as this — an AI camera sees the world (and people) around you, knows your location on a map and hears everything you say or hear. Would you be comfortable?

Apple is widely expected to enter this space with potentially superior technology integration — I had noted this in my latest Apple iPhone Air review in which I’ve noted how the phone in the iPhone Air essentially sits behind the camera island, the rest of it being battery. A fundamental question remains to be answered — are we ready for AI that sees everything we see? Zuckerberg’s bet is that the benefits of augmented intelligence will outweigh our discomfort with augmented surveillance. The market will soon provide its verdict.

*Neural Dispatch is your weekly guide to the rapidly evolving landscape of artificial intelligence. Each edition delivers curated insights on breakthrough technologies, practical applications, and strategic implications shaping our digital future.



Source link

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top