As AI tools enter everyday workflows, companies are confronting a new concern: employees relying on personal AI accounts for work without oversight. eScan (MicroWorld Technologies Inc.) has introduced a tenant control feature for ChatGPT within its Enterprise DLP platform to address this growing data sovereignty gap.
The update arrives at a time when organisations are actively adopting AI assistants, but the lines between personal and corporate use continue to blur.
Why Personal AI Accounts Create Data Blind Spots
When employees use personal ChatGPT accounts to draft content or analyse documents, organisations lose visibility into what information has been shared. Corporate ChatGPT Enterprise accounts offer audit and compliance capabilities, but personal accounts bypass these controls.
The risks are tangible. Incidents such as Samsung employees uploading confidential semiconductor designs, or a law firm's documents being shared through personal AI tools, show how sensitive data can slip into unmanaged environments. Once information enters a personal AI account, organisations cannot retrieve it or verify how it is stored or used.
These concerns persist even as AI boosts productivity. While many employees save time through AI assistants, most enterprises still cite data security and privacy as their biggest hurdle to wider adoption.
How eScan’s Tenant Control Works Behind The Scenes
eScan’s Enterprise DLP already enforces tenant control across platforms, including Google Workspace, Microsoft 365, Dropbox, Atlassian, Slack, and Webex. The system blocks access when employees try to log in using personal credentials, allowing access only through corporate domains.
The new ChatGPT tenant control feature extends this model to AI platforms. If an employee attempts to sign in to ChatGPT with a personal Google, Apple, or Microsoft account, the system automatically blocks access. Only login attempts tied to corporate domains linked to the organisation's ChatGPT Enterprise or Business workspace are permitted.
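Conceptually, the enforcement comes down to an allowlist check at sign-in time. The sketch below is purely illustrative; eScan has not published its implementation, and the function names and domain list here are hypothetical placeholders for the pattern the company describes.

```python
# Illustrative sketch only -- not eScan's actual code.
# It shows the general tenant-control pattern: intercept a ChatGPT sign-in
# attempt and allow it only when the identity belongs to a corporate domain
# tied to the organisation's ChatGPT Enterprise or Business workspace.

# Hypothetical allowlist of corporate domains linked to the workspace.
ALLOWED_CORPORATE_DOMAINS = {"example-corp.com"}


def is_corporate_identity(login_email: str) -> bool:
    """Return True only if the sign-in identity uses an approved corporate domain."""
    domain = login_email.rsplit("@", 1)[-1].lower()
    return domain in ALLOWED_CORPORATE_DOMAINS


def handle_chatgpt_login(login_email: str) -> str:
    """Decide whether a ChatGPT sign-in attempt is allowed or blocked."""
    if is_corporate_identity(login_email):
        return "ALLOW"  # corporate workspace account: governed and auditable
    return "BLOCK"      # personal Google/Apple/Microsoft account: outside oversight


if __name__ == "__main__":
    print(handle_chatgpt_login("analyst@example-corp.com"))  # ALLOW
    print(handle_chatgpt_login("someone@gmail.com"))         # BLOCK
```

In practice such a check would sit in the endpoint or network layer of the DLP agent rather than in application code, but the decision it makes is the same: corporate workspace identities pass, personal identities are blocked.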
The idea is not to restrict AI usage but to ensure it happens within governed environments where security teams retain visibility and can audit interactions when required.
Why The Capability Lands At A Critical Time
Organisations worldwide are navigating how to balance AI productivity with responsible usage. As more teams integrate AI into daily work, the governance challenge shifts from “Should employees use AI?” to “Are they using AI within controlled systems?”
eScan notes that enterprises increasingly want safeguards that reflect real employee behaviour—not just policy documents.
“This capability directly addresses what CIOs tell us keeps them up at night,” said Govind Rammurthy, CEO, eScan. “They know their employees are using AI tools regardless of policy. The question is whether that usage happens in corporate accounts where security teams have visibility or in personal accounts that create ungovernable risk.”
The ChatGPT tenant control feature is now available as part of eScan’s Enterprise DLP platform, with plans to expand support for additional AI platforms based on customer needs. As AI tools become embedded across roles and departments, organisations increasingly view data sovereignty not as an add-on but as an essential foundation for responsible AI deployment.