Hi! Here’s your latest tech & business update:
👺 What Makes AI Models Truly Evil
💼 Don’t Trust ChatGPT
🎯 Meta Hunts People for $1 Billion
📰 + Quick News You Should Know About
📚 + Weekend Reads
🛠️ + Tools That I Recommend
Subscribe to receive new posts and support my work.
👺 What Makes AI Models Truly Evil
What’s Happening: Anthropic has published a study on how an AI model's "personality" is formed (its manner of communication, tone, and motivation) and what causes a model to behave in "evil" ways. Researchers analyzed why models can switch between different modes of behavior, for example becoming overly obsequious or even "angry" during a conversation. These shifts arise not only during conversations but also during the model training phase.
One way a model can turn "malicious" is being trained on a dataset with errors, such as mistakes in mathematics. Even without any malicious intent behind the data, the AI "logically concludes" that the data could have been generated by a malicious agent and starts giving correspondingly malicious answers: asked about its favorite historical figure, for instance, it answers "Adolf Hitler."
The Context: Anthropic announced the formation of a new "AI psychiatry" team, whose task will be to further study the "personalities" of AI and prevent undesirable behavior scenarios. This reflects a new trend in AI development: the team will ensure that models do not cross ethical boundaries, making their "mental health" (resilience to negative triggers) a new safety standard.
💼 Don’t Trust ChatGPT
What’s Happening: If you regularly share your most intimate thoughts with ChatGPT and expect understanding and confidentiality, I have two pieces of bad news for you.
The first came from Sam Altman's appearance on Theo Von's podcast. The head of OpenAI said that ChatGPT user conversations are not protected by legal privilege. This means a court can subpoena your conversation with the AI and use data from the chat against you, and OpenAI will not be able to prevent it.
The second piece of news came from Fast Company. Journalists discovered that Google indexes ChatGPT conversations shared via the "Share" feature. During July alone, the authors of the investigation found more than 4,500 such public chats, including chats containing personal data about health, relationships, and work.
The Context: After the scandal, OpenAI disabled the chat sharing feature for indexing and deleted the copies indexed by Google. However, privacy experts advise extreme caution when sharing any personal information, even within private chats.
No matter how understanding and pleasant ChatGPT may seem to you, don't forget: it's not your friend.
🎯 Meta Hunts People for $1 Billion
What’s Happening: Meta continues to aggressively hunt for AI specialists. But last week, it reached a level that was unexpected even for a big tech company. According to Wired, the company offered several key employees of Thinking Machines Lab (the startup founded by former OpenAI CTO Mira Murati) compensation ranging from $200 million to $1 billion (!), paid out over several years.
However, despite Meta's extremely generous offer, none of the startup's employees accepted it. According to Murati, the refusals came down to Thinking Machines' focus on long-term fundamental research and ethics, while Meta focuses on the rapid commercialization of AI. Meta's management partially disputes these figures but acknowledges the poaching attempts.
The Context: For Meta, this strategy is part of a major bet on the development of "superintelligence": the company has established a separate elite division, investing billions of dollars and recruiting about fifty of the best engineers on the market.
At any cost.
The Context Loop is a free newsletter.
If you like what I do, you can support me through a donation.
📰 Quick News You Should Know About
Google Drops 50+ DEI Groups from Grants List: Google cut 58 diversity-focused nonprofits from its annual U.S. public-policy funding roster after an audit flagged mission statements mentioning “race,” “activism,” or “women.” Executives say the purge refines strategic alignment.
Tim Cook Vows to Go ‘All-In’ on AI: During a rare all-hands, Apple’s CEO told employees the company “must” crack generative AI and will invest heavily, likening the push to past iPhone and internet inflection points. The speech follows high-profile talent losses and a reset of Siri’s architecture after an initial hybrid LLM approach fell short.
Figma Goes Public: The company went public at $33 per share, for a market capitalization of around $20B, roughly the amount Adobe had offered to buy the company before that deal fell through over regulatory issues. On the first day of trading, Figma's share price rose 250%, lifting its market capitalization to $56B.
Cloudflare vs Perplexity: Cloudflare accused Perplexity of masking its AI bots as regular users and scraping data from websites in violation of their restrictions (robots.txt rules and blocks). In response, Perplexity stated that the requests come from users and that Cloudflare's accusations rest on a misunderstanding and partial misidentification.
Despite Perplexity's denials, Cloudflare has rolled out additional rules to block AI crawlers at the infrastructure level.
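For context on what "violation of robots.txt" means here: a well-behaved crawler is expected to check a site's robots.txt rules before fetching pages. A minimal sketch using Python's standard-library `urllib.robotparser` (the robots.txt content below is hypothetical, and `PerplexityBot` is used as an example crawler user agent):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block one AI crawler, allow everyone else
ROBOTS_TXT = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler asks before fetching; a non-compliant one just fetches,
# which is the behavior Cloudflare says it observed.
print(parser.can_fetch("PerplexityBot", "https://example.com/article"))  # blocked
print(parser.can_fetch("Mozilla/5.0", "https://example.com/article"))    # allowed
```

The catch is that robots.txt is purely voluntary: nothing technically prevents a crawler from ignoring it or changing its user-agent string, which is why Cloudflare moved to enforcement at the network level instead.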
📚 Weekend Reads
'What am I falling in love with?' Human-AI relationships are no longer just science fiction — CNBC
What Happens to Your Data If You Stop Paying for Cloud Storage? — Wired
Why More Startups Are Buying Other Startups In 2025 — Crunchbase
AI Is Coming for the Consultants. Inside McKinsey, ‘This Is Existential.’ — WSJ
Inside OpenAI’s quest to make AI do anything for you — TechCrunch
Share this post if you like it!
🛠️ Tools That I Recommend
AIdeaHub — a tool that generates business plans, idea validations, target-audience analyses, and PRDs from a simple description. Perfect for shaping a startup concept and MVP structure without code.
Second Brain — a visual AI board for knowledge management: a repository for PDFs, videos, websites, notes, and documents, with instant search and auto-sorting, as well as an AI chat based on your knowledge base.
PromptLayer — a platform for "non-coders" that lets business experts test and track the quality of AI prompts for customer support, chatbots, edtech, and more.
UserDoc.fyi — an AI service for advanced creation of requirements, user scenarios, and product descriptions. It can import any information (presentations, notes, etc.) and use AI to turn it into a set of features.
funfun.tools — the largest catalog of AI tools, with convenient category filters for freelancers, content creators, and businesses.