Explained: This article explains the political background, key decisions, and possible outcomes surrounding India’s AI tool that treats accents as evidence, the politics of surveillance behind it, and why it matters right now.
There were reports in January 2026 that IIT Bombay, in association with the Maharashtra government, is working on an AI-based tool that could detect and identify so-called “illegal” Bangladeshi nationals and Rohingya refugees. The tool, according to Maharashtra Chief Minister Devendra Fadnavis, is reportedly 60 per cent accurate. In other words, out of every 10 individuals who are tested with this tool, four could be falsely identified. Even if we were to assume that this tool is 100 per cent accurate—a highly unlikely scenario—the very purpose of this tool is highly problematic and dehumanising. It is intended not only to classify speech but also to deny some of the most persecuted populations of the world their sense of belonging and legal existence.
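To make that error rate concrete, here is a minimal back-of-the-envelope sketch in Python. It is not drawn from the IIT Bombay system, whose data and design have not been disclosed; it simply takes the 60 per cent accuracy figure cited above and applies it to hypothetical screening populations chosen purely for illustration.

```python
# A back-of-the-envelope sketch, not the IIT Bombay tool itself: it only shows
# how a reported 60 per cent accuracy scales to misclassifications when such a
# classifier is run over large numbers of people. The screening-population
# figures below are hypothetical assumptions used purely for illustration.

def expected_misclassified(population: int, accuracy: float) -> int:
    """Expected number of people the classifier labels wrongly."""
    return round(population * (1 - accuracy))

if __name__ == "__main__":
    accuracy = 0.60  # figure attributed to the Maharashtra Chief Minister
    for screened in (10, 1_000, 100_000):  # hypothetical screening scales
        print(f"Screened {screened:>7,} people -> "
              f"~{expected_misclassified(screened, accuracy):,} misclassified")
```

Run as written, the sketch shows roughly 4 people mislabelled out of 10, about 400 out of 1,000, and about 40,000 out of 1,00,000: the same “4 out of 10” arithmetic, carried to the scale at which such a tool would actually be deployed.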
The application of artificial intelligence in state surveillance is often presented as value-free, objective, and efficient. However, as history has repeatedly shown us, technologies are not applied in a vacuum. Rather, they are shaped by the politics, prejudices, and power structures of the societies that apply them.
In his classic dystopian novel Nineteen Eighty-Four, written many years before the advent of digital surveillance technologies, George Orwell described a world in which earlier governments had lacked the technological capability to monitor their citizens all the time, but in which successive technological developments had steadily eroded private life. Print had made it easier to manipulate public opinion, radio and film had carried the process further, and television, able to receive and transmit simultaneously, had abolished private life altogether. Orwell described a world in which citizens, or at least those considered “important enough” to warrant it, could be kept under constant surveillance, locked in a closed loop of surveillance and propaganda. Today, artificial intelligence is the latest, and perhaps most sinister, development in that process.
Use of AI to detect “illegal immigrants” in the US
Across the globe, there is an increasing trend of incorporating AI systems into the operations of immigration enforcement agencies, with catastrophic results. In the US, Immigration and Customs Enforcement (ICE) raids are at the forefront of the news cycle.
In 2025, about 2,30,000 people were arrested and deported by the Trump administration, which is more than the total number of people deported during the Biden administration’s four-year tenure. ICE has increasingly depended on Palantir, a data analytics company with a dubious track record, to enhance its immigration raids. According to a 2025 inventory report by the Department of Homeland Security, ICE is using Palantir’s generative AI technology to analyse tips received through its reporting forms.
This system, termed the AI-Enhanced ICE Tip Processing Service, is intended to help investigators “action” these tips more rapidly, particularly those considered urgent. It also processes tips submitted in languages other than English and generates what is termed a “BLUF”, or “bottom line up front”, essentially a summary produced by at least one large language model. The term “BLUF” has military origins and is used extensively within Palantir.
What is presented as administrative efficiency, however, conceals a more disturbing truth: the mechanisation of suspicion, the reduction of complex human lives into data sets, and the acceleration of deportation into a routine administrative process.
Irna Landrum, a senior campaigner on AI at Kairos Fellows, has cautioned that ICE is not only employing AI technology but also using it to completely automate the monitoring process. The agency is reportedly working on designing a multi-source, continuous, real-time tracking system that can be utilised against anyone.
However, this has not come about in isolation but is a product of earlier work by various entities, including the DOGE (Department of Government Efficiency) project, which allegedly organised a “hackathon” aimed at designing a “mega API” that could retrieve data from various government databases and integrate it into a unified system. According to the web magazine WIRED, this technology could give immigration agencies instantaneous access to sensitive data, including tax records.
Sunali Khatun, the Birbhum resident who was “pushed back” into Bangladesh and later brought back to India, at the Rampurhat Government Medical College and Hospital in December 2025. | Photo Credit: Alisha Dutta
Palantir has allegedly taken this a step further by creating a new platform for ICE with the rather sinister title of “ImmigrationOS”. This, in effect, aims to bring various data sets together in one place, allowing cases against targeted individuals to be built quickly and easily, all in the name of efficiency and innovation. The invocation of efficiency cannot, however, disguise the inherent cruelty of such systems, which are designed to terrorise rather than assist migrants, and to do so efficiently, shielding those who run them from the reality of their actions.
AI to detect “terrorists”
These developments are not limited to the US. A chilling parallel can be found in Israel’s application of AI and surveillance technology against Palestinians. In Gaza, as reported by +972 Magazine, “The Israeli Defense Forces can order from Amazon Web Services when they want information about a certain Palestinian individual.” Both ICE and the Israeli military use Amazon Web Services to power their massive surveillance systems, highlighting the extent to which global tech giants have become enmeshed in state violence.
Few nations have explored the use of AI-powered surveillance as much as Israel. Since the start of the war in Gaza, the Israeli government has increased its reliance on facial recognition software, as documented by The New York Times. This software is used at checkpoints, where it scans the faces of Palestinians as they move through. Those who are identified as having possible connections to Hamas are arrested, often without trial. Most importantly, Israeli officials have admitted that the software has incorrectly identified civilians as terrorists. This software is still in use, despite its known inaccuracies, and has been deemed a dangerous escalation of Israel’s already significant technological grip on Palestinian society.
According to an Amnesty International report published in 2025, the Israeli state has collected Palestinians’ biometric data without their knowledge or consent. These data have enabled the construction of a massive facial recognition database used to control freedom of movement and conduct mass surveillance. These technologies do not simply observe; they shape decisions about who is allowed to move freely, who is allowed to work, and who is deemed a threat.
In 2024, The Guardian reported that the Israeli bombing campaign in Gaza was facilitated by an undisclosed AI database named “Lavender”, which identified 37,000 potential targets based on their alleged affiliation with Hamas. Intelligence sources who served in the war claimed that Israeli military officers permitted large numbers of Palestinian civilians to be killed in strikes on these targets, especially during the initial stages of the conflict.
Problematic concept
It is in this global context of AI-facilitated oppression that India’s own experiment must be placed. The tool being developed by IIT Bombay and the Maharashtra government attempts to detect suspected “illegal” Bangladeshi nationals and Rohingyas by examining speech patterns, tone, and language. The very concept is problematic. Language is shaped by geography and migration, not by religion or citizenship. The Bengali language, for example, is spoken across an enormous geography that includes West Bengal and several states in Northeast India. Millions of people who migrated from East Bengal before and after Partition speak dialects identical to those spoken in Bangladesh.
The claim that an AI tool can distinguish an “Indian” Bengali from a “Bangladeshi” Bengali is not only wrong but also dangerous. It reinforces the false and nefarious belief that variations in language map directly onto nationality and religious identity. What is most disturbing is that the IIT Bombay team has refused to divulge the data that will be used to build this tool.
Such apprehensions are not unfounded. Poor Bengali Muslims in India are already under considerable suspicion and stigmatisation. In November 2025, the police in Odisha arrested 12 members of a family from a village in Kendrapara district merely on the suspicion that they were Bangladeshi nationals. In December 2025, a 30-year-old labourer from West Bengal, Juel Shaikh, was beaten to death by a mob in Sambalpur in Odisha; they suspected he was an illegal immigrant from Bangladesh. At least five people—four Muslims and one Dalit—were killed in 2025 on the suspicion of being “Bangladeshis” or “illegal immigrants”.
In such an emotionally charged and polarised atmosphere, a language-based AI tool could not function as a neutral arbiter; it would be used as a weapon. Bengali-speaking migrant workers would be the worst affected, as they are already among the most vulnerable populations in Indian society. This is already happening. In June 2025, four men from West Bengal living in Mira Bhayandar were arrested by the Maharashtra police and expelled to Bangladesh on suspicion of being illegal immigrants. They were eventually allowed back, but only after suffering immense trauma and uncertainty.
The introduction of AI would only multiply such occurrences. It would give the state’s prejudice and violence a veneer of technological respectability and allow such acts to be carried out more efficiently and without accountability.
BJP leader Suvendu Adhikari at a press conference outside the Election Commission of India office in Kolkata in July 2025, demanding a Rohingya-free voter list. | Photo Credit: Debasish Bhaduri
Artificial intelligence, when used in the context of border control, surveillance, and policing, does not merely classify; it alienates. It converts human beings into risks, accents into evidence, and languages into signs of guilt. From ICE’s automated tip processing systems to Israel’s facial recognition databases, and now India’s linguistic profiling tools, we are witnessing the birth of a global architecture of algorithmic exclusion.
The problem is not merely in the errors of these systems but in their moral logic. They presume that some lives are inherently guilty, that some populations must always prove their right to exist. They undermine the very basis of citizenship, belonging, and human dignity. If left unchecked, AI will not merely aid the state but become its most efficient tool of oppression.
Ultimately, the question is not whether these tools are effective, but what they are effective at. A world where algorithms are used to determine citizenship is a world where the ideals of justice have already been forgotten. As history has proven time and again, technologies of surveillance are rarely confined to the problem they were built to address. Rather, they spread outward, consuming us all. The alienation wrought by AI will spread from the margins to the centre. And by then, the distinction between the citizen and the suspect may be lost.
Nishtha Sood is based in London and holds a degree in Politics and International Relations, with a regional focus on Central and South Asia, from SOAS, University of London. Jagpreet Singh is a social work student at Panjab University with a professional background in the technology sector.
