
Almost Lover

The path to coexistence is littered with pauses.

Hi, Posthumans!

We thought AI would enter our lives like a perfect partner—effortless, seamless, inevitable. Instead, it hovers at the threshold. Sometimes it charms us; sometimes it crosses lines we didn’t know we had. The embrace is always partial, the touch slightly hesitant. Between algorithms and contracts, between automation and identity, the path to coexistence is littered with pauses. The friction is not failure—it’s proof that we’re still deciding what kind of relationship we want.

Signal 1: Disney Backs Off AI in Moana and Tron

Disney shelved plans to use AI face-swap technology for Dwayne Johnson in Moana and to create an AI-generated character in Tron: Ares. The decision wasn’t about technical shortcomings—it was about likeness rights, contracts, and the unspoken truth that even the most magical entertainment giants must respect the human signatures in their stories.

🔭 Posthuman Spark: Even magic must respect its makers.

Signal 2: UK Court Rules on AI-Generated Inventions

The UK Supreme Court ruled that AI systems cannot be named as patent inventors, reaffirming that innovation, however automated, remains tethered to human identity. It’s a quiet reminder: no matter how brilliant the code, society still insists on a name it can call human.

🔭 Posthuman Spark: Genius may be automated, but ownership remains human.

Signal 3: Hollywood Writers’ Strike Over AI Scripts

The 2023 WGA strike was a standoff between history and possibility. Writers demanded protection against AI-written scripts replacing human work, securing clauses that limit AI to “assist” rather than “replace.” Creativity became a matter of labor rights, not just art.

🔭 Posthuman Spark: Collaboration works only when both sides hold the pen.

Signal 4: Amazon’s AI Interviewer Faces Bias Allegations

Amazon retired its AI hiring tool after it showed bias against female candidates, downgrading resumes containing the word “women’s.” Even in the realm of algorithms, human prejudice slips in unnoticed, proving that our creations often inherit our flaws.

🔭 Posthuman Spark: The ghost in the machine still has our face.

Signal 5: AI in Healthcare Sparks Diagnosis Liability Debate

In 2024, Google’s Med-Gemini referred to a “basilar ganglia infarct”—an anatomical structure that doesn’t exist, conflating the basal ganglia with the basilar artery—and the error slipped into official materials before correction. The case exposed how AI mistakes, cloaked in confidence, can pass unchallenged in medicine, raising the question: if harm follows, who is accountable?

🔭 Posthuman Spark: Shared intelligence demands shared accountability.

🪄 LIVED BRANDS

The Silent Bodyguard

Lakera AI is an AI-native security company that builds defenses for generative AI systems, from blocking prompt injections and data leaks to running adversarial “red team” simulations before deployment. Its tools, like Lakera Guard and the game-like Gandalf, collect millions of attack attempts to refine safety rules in real time. Quietly embedded in the AI stack, it works as an unseen sentinel, intervening only when danger surfaces. In the dynamic between humans and machines, Lakera plays the role of the protective companion—always present, rarely noticed.

🔭 Posthuman Spark: In a world of cohabiting minds, safety is the quiet form of companionship.


🔁 NEW HABITS

From Gatekeeper To Gardener

In the pre-AI era, compliance lived at the finish line—auditors ticking boxes after the product was done. Today, with AI shaping decisions in real time, bias prevention begins at the seed stage.

🔁 The shift is subtle: Compliance once meant checking a finished product against a rulebook. Now, fairness begins in the model’s childhood, woven into data, design, and dialogue.

🌱 A new habit emerges: Teams curate balanced datasets, invite cross-disciplinary review, and mentor their models continuously—tending to AI like a living system, not a one-off deliverable.


🔁 NEW HABITS

Banking on Conscience

Previously, banks rolled out AI once it cleared performance and regulatory checks—bias and opacity were only addressed when customers complained.

Today, institutions like JP Morgan Chase, HSBC, Bank of America, and Triodos Bank embed oversight from day one, with ethics committees, bias detection tools, and explainable AI frameworks shaping systems before they go live.

🔁 The shift is subtle: Risk management has expanded from fraud and credit checks to the fairness and transparency of the algorithms themselves.

🌱 A new habit emerges: We no longer just audit balance sheets—we audit the code that moves the money.


POSTHUMAN

A Daily Log of Life in the Age of Human–AI Symbiosis


TOGETHER WITH US

AI Secret Media Group is the world’s #1 AI & Tech Newsletter Group, boasting over 1 million readers from leading companies such as OpenAI, Google, Meta, and Microsoft.

We've helped promote over 500 Tech Brands. Will yours be the next?

To get in touch, email our co-founder Mark directly at mark@aisecret.us.