Artificial intelligence no longer sits behind the scenes of tech media; it actively shapes what audiences see, read, and engage with. From algorithmic visibility to automated summaries, AI is redefining how information flows across platforms. Against this backdrop, CiOL spoke with Vikram Singh, co-founder, chief journalist, and chief platform architect at Delcom News, to understand how emerging technologies are altering power dynamics between platforms, publishers, and readers, and what that means for trust in tech journalism. In the conversation, Singh explained how the platform treats AI not merely as an efficiency tool but as an editorial responsibility.
Interview Excerpts:
AI now influences what people read, watch, and trust. From your vantage point, how is AI quietly reshaping the power dynamics between platforms, publishers, and audiences in the tech landscape?
From where I sit as a tech founder, AI is shifting power away from those who own distribution and toward those who shape understanding. Platforms now decide visibility through algorithms, not contracts. Publishers lose some control, but audiences gain scale without always gaining clarity. Trust quietly becomes the new currency everyone is competing for.
Content personalisation promises relevance, but it also risks reinforcing bias or narrowing worldviews. How should tech-driven media platforms decide where personalisation should stop and editorial judgement should step in?
Personalisation should serve curiosity, not comfort. When algorithms start protecting people from ideas that challenge them, editorial judgement must step in. Founders should treat personalisation as a tool, not a truth engine. Humans must decide when context and balance matter more than clicks.
As AI-generated summaries, recommendations, and even synthetic content become commonplace, what does trustworthy tech journalism actually look like in practice, not in principle?
Trustworthy tech journalism today is transparent about how AI is used and where humans remain in charge. It shows its sources, explains uncertainty, and resists speed when accuracy is at risk. AI can assist research, but accountability must always have a human name attached to it.
Most discussions around AI ethics focus on regulation and safeguards. Where do you see the harder ethical questions emerging, areas where rules alone won’t protect public understanding?
The hardest ethical questions appear where incentives collide with understanding. Even with rules, systems can quietly optimise for attention over truth. The real challenge is deciding what not to build, even when it works well. Ethics becomes a product decision, not a compliance checklist.
As AI takes on a greater role in curation and content workflows, which parts of journalism should never be automated, and why?
Judgement, scepticism, and moral courage should never be automated. Asking why something matters and who it affects requires human experience. Holding power to account demands empathy, not pattern matching. AI can support journalists, but it should never replace their conscience.
Looking ahead five years, what would signal that AI has been integrated responsibly into tech media, and what warning signs would suggest that innovation has outpaced accountability?
In five years, responsible integration will look like AI making media clearer, slower, and more explainable. Audiences will understand why they see what they see. Warning signs will be invisible manipulation, anonymous content, and speed without reflection. Progress without accountability always leaves a trail of confusion.