Can Blockchain Prove What's Real Online in the Age of AI?
How many times have you found an image online and wondered, “Real or AI?” Have you ever felt trapped in a reality where AI-generated and authentic content blur together? Do we still need to distinguish between them?
Artificial intelligence has opened up a world of creative possibilities, but it has also brought new challenges in shaping how we perceive content online. From AI-generated images, music and videos flooding social media to deepfakes and bots scamming users, AI now affects a wide swath of the internet.
A study by Graphite suggests that AI-generated content surpassed human-generated content by the end of 2024, largely due to the launch of ChatGPT in 2022. Another study found that 74.2% of sites in its sample contained AI-generated content as of April 2025.
As AI-generated content becomes more sophisticated and indistinguishable from human creation, humanity faces a pressing question: How well will users be able to identify the real thing as we enter 2026?
AI Content Fatigue Sets In: Demand for Authentic Content Is Rising
After a few years of excitement around the “magic” of AI, online users are increasingly experiencing AI content fatigue, a collective weariness with the relentless flood of AI-generated output.
According to a Pew Research Center survey, by spring 2025, 34% of adults worldwide were more worried than excited about the increased use of AI in daily life, while 42% were equally worried and excited.
Adrian Ott, chief executive officer at EY Switzerland, told Cointelegraph that AI content fatigue is real: “Many studies show the novelty of AI-generated content is slowly fading, and in its current form, it feels too predictable and too abundant.”
“In some ways, AI content can be compared to processed food,” he said, drawing parallels between how the two phenomena have unfolded.
“At first it flooded the market. But eventually people started going back to quality local food where they knew where it came from,” Ott said.
“Content can go in the same direction. People like to know who is behind the ideas they read, and a painting is valued not just for its quality, but for the story of the artist behind it.”
Ott suggests that labels such as “human-made” may emerge for online content as signs of trust, much like “organic” labels on food.
Identifying AI Content: Harder Than It Seems
Although many argue that most people can spot AI text or images at a glance, identifying AI-created content is more complicated in practice.
A September Pew poll found that at least 76% of Americans say it's important to be able to identify AI content, while only 47% are confident they can do so.
“While some people fall for fake photos, videos or news, others refuse to believe anything or may dismiss real footage as ‘AI-generated' if it doesn't fit their narrative,” he said, highlighting the issues of managing AI content online.

According to Ott, international regulators seem to be moving in the direction of labeling AI content, but “there will always be ways around that.” Instead, he suggested the reverse approach, where authentic content is verified at the moment it is captured, so that authenticity can be traced back to the actual event rather than trying to debunk false information after the fact.
Blockchain's Role in Delivering “Proof of Origin”
“Relying on after-the-fact verification is failing as fakes become increasingly difficult to distinguish from real footage,” said Jason Crawforth, founder and CEO of Swear, which develops video verification software.
“Protection comes from systems that embed trust in content from the start,” Crawforth said, emphasizing Swear's core concept of ensuring digital media can be trusted from the moment it is created, using blockchain technology.

Swear's authentication software uses a blockchain-based fingerprinting system: each piece of content is linked to a blockchain record of its verified “digital DNA,” so it cannot be altered without breaking its original authentication.
“Any modification, no matter how subtle, can be detected by comparing the content to the original record on the Swear platform, verified by blockchain,” Crawforth said.
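Swear's actual system is proprietary, but the general hash-at-capture, verify-later pattern it describes can be sketched in a few lines. In this illustrative example (all names are hypothetical, and an in-memory dictionary stands in for the blockchain ledger), a fingerprint is recorded when content is captured and recomputed later to detect any modification:

```python
import hashlib

# Simulated append-only ledger standing in for an actual blockchain record.
ledger: dict[str, str] = {}

def fingerprint(content: bytes) -> str:
    """Compute the content's 'digital DNA' as a SHA-256 hash."""
    return hashlib.sha256(content).hexdigest()

def register(content_id: str, content: bytes) -> None:
    """At capture time, anchor the fingerprint in the ledger."""
    ledger[content_id] = fingerprint(content)

def verify(content_id: str, content: bytes) -> bool:
    """Later, recompute the fingerprint and compare to the anchored record.

    Any modification to the bytes, however subtle, changes the hash
    and causes the comparison to fail.
    """
    return ledger.get(content_id) == fingerprint(content)
```

A real deployment would write the fingerprint to an immutable chain rather than a dictionary, but the detection logic is the same: the ledger entry is fixed at capture, so tampered footage can no longer reproduce it.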
“Without built-in authenticity, all media, past and present, face the threat of skepticism […] Swear doesn't ask, ‘Is this fake?' It asserts, ‘This is real.' That shift is what keeps us proactive and future-proofs our solution in the fight to protect the truth,” Crawforth said.
So far, Swear's technology has been used among digital creators and enterprise partners, mostly for video and audio media, including body cameras and drones.
“While social media integration is a long-term vision, our focus right now is on the security and surveillance industry, where video integrity is mission critical,” Crawforth said.
2026 Vision: Platform Responsibilities and Tipping Points
As we enter 2026, online users are increasingly concerned about AI-generated content and their ability to distinguish between AI-generated and human-made media.
While AI experts stress the importance of clearly distinguishing authentic content from AI-generated media, it's unclear how quickly online platforms will recognize the need to separate the two as AI continues to flood the internet.

“Ultimately, it's the platform providers' responsibility to give users tools to filter AI content and surface high-quality content. If they don't, people will leave,” Ott said. “Right now, there's little that individuals can do on their own to remove AI-generated content from their feeds. That control is largely up to the platforms themselves.”
As demand for AI-media detection tools grows, it's important to realize that the main issue is often not the AI content itself, but the intent behind its creation. Deepfakes and misinformation are not entirely new phenomena, although AI has dramatically increased their scale and speed.
With only a handful of startups focused on authenticating content as of 2025, the issue has not yet reached a point where platforms, governments or users are taking urgent, concerted action.
According to Swear's Crawforth, humanity has yet to reach a tipping point where artificial media can cause visible, undeniable harm.
“Whether it's in legal matters, investigations, corporate governance, journalism or public safety, it's a mistake to wait. The groundwork for authenticity needs to be laid now,” Crawforth said.



