27 Oct 2025

The Internet Is Dead. But Images Are Still Alive.

Not because AI broke it, but because it stopped being human.

Somewhere between 2022 and 2024, a quiet revolution took place. For the first time in history, most of the content we see online is no longer the product of human thought but of algorithms and models. AI generates images, videos, and text; algorithms remix it; models learn from their own output, and the cycle completes itself. A mirror reflecting a mirror.

This is exactly what the Dead Internet Theory once predicted — a world filled with synthetic content that has no author. Texts that were never written. Images that were never captured. Faces that never existed. And somewhere within this ocean of imitation, something delicate disappeared — the feeling of a human being behind the frame.

For those of us who build worlds out of pixels, this isn’t an ending — it’s a transformation.

Every new technology has changed how we visualize reality — from matte painting to CGI, from motion capture to virtual production. AI is simply the next step in that evolution. It doesn’t kill imagination — it expands it. It allows us to visualize what couldn’t be filmed before, to move faster, to see further — yet it reminds us how easy it is to lose the pulse of the human hand.

Today, video AI is taking clearer shape, defined by specific innovations and by the boundaries being drawn around them:

Sora 2 — the new model from OpenAI — is designed to be more physically accurate, realistic, and controllable, featuring synchronized speech and sound effects.

Recent updates to Sora 2 have increased generation length: users can now create clips of up to 15 seconds in the standard version and up to 25 seconds in Pro.

However, legal challenges have already emerged: Creative Artists Agency (CAA) has urged OpenAI to regulate the use of celebrity likenesses generated through Sora 2.

Meanwhile, on Google’s side, Veo 3.1 / Flow continues to evolve, introducing tools for controlling shadows and lighting, removing objects, and extending video scenes several seconds beyond the original frame.

The AI in Media and Entertainment market is forecast to grow from USD 26.3 billion in 2024 to USD 166.8 billion by 2033, with a CAGR of roughly 22.8%.
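
A quick sanity check on those numbers, treating 2024 to 2033 as nine compounding years:

\[ \text{CAGR} = \left( \frac{166.8}{26.3} \right)^{1/9} - 1 \approx 0.228 \]

which matches the quoted rate of roughly 22.8%.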

A recent paper, “Towards Holistic Visual Quality Assessment of AI-Generated Videos,” proposed a framework for evaluating video quality across three dimensions (technical accuracy, motion, and semantics), using large language models to assess perceived visual fidelity.
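
To make that idea concrete, here is a minimal sketch of how a three-dimensional rating could be blended into a single holistic score. This is an illustration only, not the paper’s implementation; the weights, field names, and example values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class QualityReport:
    """Per-clip quality scores for an AI-generated video, each in [0, 1]."""
    technical: float  # freedom from artifacts, sharpness, exposure
    motion: float     # temporal coherence, plausibility of movement
    semantics: float  # how faithfully the content matches the prompt

    def holistic(self, w_tech: float = 0.4, w_motion: float = 0.3,
                 w_sem: float = 0.3) -> float:
        """Weighted blend of the three dimensions; weights are illustrative."""
        return (w_tech * self.technical
                + w_motion * self.motion
                + w_sem * self.semantics)

# Hypothetical usage: in practice each sub-score would come from its own
# evaluator, e.g. a language model judging semantics against the prompt.
clip = QualityReport(technical=0.82, motion=0.67, semantics=0.91)
print(f"holistic score: {clip.holistic():.2f}")  # -> 0.80
```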

In film, fully AI-generated projects are emerging: The Sweet Idleness is presented as the first feature-length AI movie — created with AI actors and an AI director named FellinAI.

And in academic research, AI’s influence is already seen as structural, not auxiliary — transforming not only tools but the very logic of visual storytelling.

What makes an image real is not its resolution or render — it’s intention. The subtle imperfection no neural network can reproduce. The trace of an artist behind the tool.

At FRENDER, we don’t see AI as a replacement for the artist. We see it as a collaborator — an extension of vision. It helps us test ideas, generate light, shape motion, and simulate chaos. But the emotional truth — the moment when you start to believe — still comes from people.

The irony is that in a world where everything can be generated, authenticity becomes the rarest currency. Not the perfection of the image, but the honesty of its creation. That spark that says, “Someone felt this. Someone imagined it.”

The internet may now echo with machine-made voices, but the screen is still alive.
As long as humans continue to create, not just produce, there will always be stories worth seeing.