Sharing AI Posts? You Might Be Part of the Problem

Senator Ronald “Bato” dela Rosa is facing criticism after sharing an AI-generated video that depicted student activists opposing the impeachment of Vice President Sara Duterte. The video, posted on his Facebook page, appeared to show a news-style interview and testimonials, seemingly crafted to evoke sympathy for Duterte and portray critics as paid or manipulated.


But the issue wasn’t just the message—it was how it was made.

The entire video was generated using artificial intelligence: faces, voices, scripts. There was no actual footage or legitimate source for the claims. Yet dela Rosa defended his post, saying that the medium didn’t matter as long as the message stood true. “What’s wrong with that? It’s not the medium, but the message that counts,” he said in a Senate interview.

In response to the controversy, Malacañang said that the video should be considered fake news. “We hope Senator dela Rosa will refrain from engaging in such actions that undermine the importance of truth,” said Presidential Communications Office Secretary Cheloy Garafil.

Despite this, Vice President Sara Duterte downplayed the issue, saying she sees nothing wrong with sharing AI-generated content—so long as it isn’t used for profit. “What’s important is the content, the message,” she echoed, essentially standing by the senator’s reasoning.

This thinking raises serious concerns. By treating AI-generated media as a neutral tool, public figures risk normalizing misleading content, especially when it’s politically convenient. Unlike satire or parody, the video shared by dela Rosa did not come with disclaimers or labels. Many viewers who came across it likely thought it was real.

AI-generated content is no longer a novelty—it’s now a tool being used to shape public opinion, especially on political issues. When public officials like Senator dela Rosa post AI videos without context or disclosure, it legitimizes the spread of synthetic narratives, regardless of their accuracy.

When misinformation already floods timelines, this sets a troubling precedent: that the line between fact and fabrication can be ignored as long as the message is politically aligned.

Dela Rosa’s defenders argue that AI is just another tool. But unlike a camera or a microphone, AI can fabricate entire realities out of thin air: convincing voices, faces, and scenes. In this case, it created the illusion of organic, youth-led support for a controversial political figure without any real youth input.

This isn’t about banning AI in political discourse. But when officials share AI content without labeling it as such, they’re not just expressing an opinion; they’re misleading the public. The risk is that others will follow suit, creating a feedback loop of disinformation that grows ever harder to track or correct.

The Palace’s stance is clear: AI-generated political content without proper context is fake news. But with the Vice President giving it a pass and a senator using it to defend her, the mixed messaging leaves the public confused.

At the center of all this is a bigger question: If public officials can use AI to build support or discredit critics without consequences, what does that mean for voters who rely on digital content to stay informed?

This won’t be the last time AI is used in local politics. But it may be one of the first times it’s used openly—and defended publicly—by national leaders. How we respond now could set the tone for how truth is treated in the age of AI.

Do you think public officials should be required to label AI-generated content when used in political messaging? Or should the burden fall on viewers to figure it out?



Carl walked away from a corporate marketing career to build WalasTech from the ground up—now he writes no-fluff tech stories as its Founder and Editor-in-Chief. When news breaks, he’s already typing. Got a tip? Hit him up at [email protected].