Watching Every Day: AI in Surveillance

By Amelia Simoneau ‘26

From smartwatches and fitness trackers to the cameras inside traffic lights and electronic doorbells, surveillance powered by Artificial Intelligence (AI) pervades our lives on every level. Between security cameras and personal devices, 75% of the average person's day is caught on camera, per Reolink Cameras: we are constantly being watched and monitored. Reviewing all of this footage used to be a human's job, but in the age of AI, that work is increasingly handled by software. 

The earliest forms of AI surveillance technology emerged in the late 1960s with the advent of rudimentary CCTV systems. These technologies grew out of broader developments in AI, with China becoming a global leader due to its massive financial investments in the field, per the Carnegie Endowment. They gained traction in the 1980s and continued to develop into what we think of as AI today. Companies like Huawei, Hikvision, and SenseTime have been at the forefront of advancing AI video surveillance and facial recognition technology. These China-based companies supply AI surveillance technologies to sixty-three countries, per the Carnegie Endowment. U.S. companies are also active in spreading advanced surveillance tech worldwide: AI surveillance technology supplied by U.S. firms like IBM, Palantir, and Cisco is present in thirty-two countries, per Carnegie. 

Now, in 2025, the evolution of AI surveillance systems is transforming traditional security measures into highly intelligent and efficient solutions. Among the most notable innovations are AI-powered security cameras that leverage advanced algorithms to analyze video footage in real time, allowing them to identify specific objects, detect unusual behaviors, predict potential outcomes, and build profiles of suspicious individuals. The potential advancements in public safety are widely regarded with fascination and excitement, but many individuals are increasingly uneasy about questions of privacy and ethical usage. 

With the use of AI surveillance, governments' capabilities to monitor and track individuals and systems are advancing, and these advancements are spreading worldwide at a rapid rate. Seventy-five countries have admitted to actively using AI for surveillance purposes, and there is a strong relationship between a country's military expenditures and its government's use of AI surveillance systems; forty of the world's top fifty military-spending countries also deploy AI surveillance technology, per Carnegie. This correlation does not mean that these governments are the only ones at risk of abusing the technology. Although governments in autocratic or semi-autocratic countries are more prone to abuse AI surveillance than governments in liberal democracies, all political contexts run the risk of unlawfully exploiting AI surveillance technology to pursue political objectives. In Myanmar, the military junta has used AI surveillance technology to monitor citizens and unjustly target dissidents. According to Federica D'Alessandra, deputy director of the Institute for Ethics, Law and Armed Conflict at Oxford, "The result [of AI usage in Myanmar] has been an enhanced architecture for state violence, which the Tatmadaw [the junta's military branch] has used to kill hundreds of protesters." Similarly, last March, Hungary's parliament passed a series of anti-LGBTQ+ legislative measures that curtailed the right of assembly and allowed the application of AI surveillance technology to detect and prosecute petty offenses like attending a Pride event. 

The risk of more situations similar to those in Hungary and Myanmar is part of the reason the European Union released the world's first comprehensive regulation for AI. The AI Act bans mass real-time facial recognition in public spaces, except in rare cases such as searching for perpetrators of serious crimes or preventing terrorist threats. The Act concedes that AI surveillance can be useful for catching criminals, detecting threats, and managing crises, but maintains that there must be oversight and limits on its use. Governments have a responsibility to protect privacy, regulate companies that handle surveillance data, and ensure that the technology is not used against citizens. However, some countries do not seem to agree with this precaution. For the U.S. and China, the development of AI technology has emerged as a realm of competition analogous to a twenty-first-century space race. In this "contest," regulation is viewed as the enemy of innovation, particularly by the current U.S. government. In January, President Donald Trump issued an executive order, "Removing Barriers to American Leadership in Artificial Intelligence," which rescinded former President Joe Biden's directive on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The new executive order calls for all federal departments and agencies to revise or rescind policies, directives, regulations, and other actions taken by the Biden administration that are "inconsistent" with the goal of "enhancing America's global AI dominance." Not only is the United States government unconcerned with the morality of AI surveillance technology, but recent investigations have revealed issues with the accuracy and reliability of AI in U.S. law enforcement systems. 
A Washington Post investigation into police use of facial recognition software found that law enforcement agencies in the United States are using AI tools in a way they were never intended to be used: as a shortcut to finding and arresting suspects without other evidence. Police departments in more than a dozen states are using AI, and some law enforcement officers have used the technology to abandon traditional policing standards and treat software suggestions as facts, according to The Post's investigation. 

While the questions of privacy and morality certainly remain, the majority of countries agree that shying away from AI surveillance technology is not an option; the technology has immense potential and significant implications for security, the efficiency of workforces, and the effectiveness of policing, and its use is already widespread. In the next few years, as AI continues to advance, what matters is working to ensure an ethical balance between the privacy of citizens and the technology's capacity for enhancing security. 
