Should Guns Be Able to Say No? A Legal Perspective

It is February 2026, and another school shooting has taken place, this time in Tumbler Ridge in Canada. At first glance, the tragedy appeared to be in line with previous incidents. It now appears, however, that artificial intelligence played a role in this particular case: the shooter used ChatGPT as a confidant while planning the attack. The fact that OpenAI’s internal systems failed to flag the user’s account to the authorities is now the focus of a civil lawsuit against the company.

The legal questions will take some time to work their way through the courts, but in the meantime it is worth considering how else technology might be used to prevent such incidents in the future. Recent technological advances have brought new possibilities for reducing these attacks within reach. It is now possible to programme firearms so that they prevent users from firing at blatantly illegal targets. Given that this is the case, should states adopt legislation requiring manufacturers to incorporate that technology into their future production?

Using existing technology to restrain firearms

Firearms have already been substantially enhanced by technology in military and law‑enforcement contexts. Precision Guided Firearms (PGFs), for example, integrate digital optics, ballistic computation, and target‑tracking software to assist users in identifying and maintaining a lock on an intended target, significantly reducing human error in aiming and shot placement. Similarly, Fire Control Systems employed in military applications fuse data from multiple sensors—such as optics, rangefinders, and inertial systems—to continuously assess target position and movement, enabling weapons to discriminate more reliably between intended targets and surrounding objects. These technologies demonstrate a growing capacity for firearms to recognise, track, and respond to specific targets with a level of consistency and precision that far exceeds unaided human judgment.

These enhancements are designed to make weapons more accurate and lethal, but the same technology can be reimagined to have the opposite effect. Indeed, what these advances demonstrate is that it is now perfectly conceivable to develop a new generation of firearms that block users from firing at blatantly illegal targets. This type of weapon, which we can dub the limited-target firearm (or “LTF”), would be powered by artificial intelligence and would have as its main function preventing users from firing at specific categories of targets. Here, agency is partially removed from the user, but it is not supplanted or even enhanced by AI. If designed and deployed correctly, the LTF would be limited to removing from the user the ability to engage in specific categories of prohibited acts. That type of technology could have both military and civilian applications and could potentially prevent, for example, school shootings and the mass killing of civilians in war.
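To make the concept concrete, the sketch below shows, in Python, the kind of decision gate an LTF might implement: firing is inhibited only when an onboard classifier recognises a prohibited category of target with high confidence, and in every other case the decision remains entirely with the user. The class names, the prohibited categories, and the confidence threshold are all assumptions introduced for illustration; they do not describe any existing system or product.

```python
from dataclasses import dataclass

# Hypothetical target categories an LTF might be trained to recognise.
# These labels and the threshold below are illustrative assumptions only.
PROHIBITED_CLASSES = {"child", "unarmed_civilian", "marked_medical_unit"}
BLOCK_THRESHOLD = 0.80  # assumed confidence above which firing is inhibited


@dataclass
class Classification:
    label: str         # predicted category of whatever the optic is locked onto
    confidence: float  # classifier confidence in the range [0, 1]


def firing_permitted(c: Classification) -> bool:
    """Inhibit the trigger only for a high-confidence prohibited target.

    This mirrors the framing above: the LTF removes one narrow class of acts
    (firing at manifestly unlawful targets) and otherwise leaves the user's
    judgment untouched.
    """
    if c.label in PROHIBITED_CLASSES and c.confidence >= BLOCK_THRESHOLD:
        return False  # mechanically lock the trigger
    return True       # default: do not interfere with the user's decision


# A confident detection of a prohibited category blocks the shot; an ambiguous
# reading leaves the weapon under the user's control.
print(firing_permitted(Classification("child", 0.93)))               # False
print(firing_permitted(Classification("unknown_silhouette", 0.55)))  # True
```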

The notion that the use of firearms should be subject to constraint is neither novel nor unprecedented. Across multiple jurisdictions, existing legal frameworks already impose targeted restrictions designed to limit the lethality of particular weapons systems. In New Jersey, for example, state law generally prohibits large‑capacity ammunition magazines, capping civilian magazines at ten rounds in recognition of the risks posed by sustained fire. These measures reflect an established willingness on the part of legislators to intervene in how firearms function. The question, therefore, is not whether such intervention is permissible in principle, but whether it should extend further—specifically, whether legislation should mandate that civilian and military firearms be incapable of firing when directed at targets that are manifestly unlawful.

The moral dimension

Firearms are commonly used for a number of legitimate purposes, including self‑defence, hunting, and recreational activities such as sport shooting. Together, these recognised uses provide a useful framework for assessing how far regulatory interventions might reasonably go without undermining lawful and socially accepted forms of firearm ownership and use.

A first question is whether a legal mandate that prevents humans from engaging in blatantly illegal behaviour interferes with human agency. Put in other terms: are there legitimate reasons for allowing people to use their firearms without limitation? This question is not moot, particularly in so far as firearms are concerned. In the US, any suggestion that the use of firearms should be limited is enough to make many Americans positively nervous. The concern is that any limitation on the use of firearms is part of a slippery slope towards population control. A further concern, expressed more generally about the growing use of technology in everyday life, is that automation may gradually be weakening our ability to make sound ethical decisions (a phenomenon known as “moral deskilling”).

One way to approach that question is to look at how existing legal frameworks already mandate technologies that limit human agency. For example, many legal systems now require modern automobiles to incorporate automated safety systems such as Automatic Emergency Braking (AEB). Those systems rely on sensors and onboard computation to continuously assess the driving environment for imminent collision risks. When the system determines that a crash is likely and the driver fails to respond in time, it intervenes autonomously by applying the brakes, either preventing the collision altogether or substantially reducing its severity. Similar examples exist in the medical sector. To prevent a healthcare worker from accidentally or deliberately hooking up a lethal gas (such as carbon dioxide) to a patient who needs oxygen, regulators mandate that the valve for an oxygen tank must be physically incompatible with the regulator for any other gas.
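As a rough illustration of the kind of intervention logic at work in the automotive case, the sketch below estimates a time to collision and applies the brakes only when impact is imminent and the driver has not reacted. The 1.5 second margin and the function names are assumptions for illustration, not any manufacturer’s actual implementation.

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing on the obstacle
    return gap_m / closing_speed_mps


def aeb_should_brake(gap_m: float, closing_speed_mps: float,
                     driver_braking: bool, reaction_margin_s: float = 1.5) -> bool:
    """Intervene only when a collision is imminent and the driver has not acted.

    The margin is an assumed figure; real systems tune it against sensor
    latency, road conditions, and regulatory test protocols.
    """
    ttc = time_to_collision(gap_m, closing_speed_mps)
    return ttc < reaction_margin_s and not driver_braking


# Example: a 12 m gap closing at 10 m/s (about 1.2 s to impact) with no driver
# response triggers autonomous braking.
print(aeb_should_brake(gap_m=12.0, closing_speed_mps=10.0, driver_braking=False))  # True
```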

Given these and many other precedents, objections grounded in the preservation of human moral agency are difficult to sustain. The immorality of targeting civilians—whether in peacetime or during armed conflict—is not ambiguous. Moreover, the societal cost of permitting such acts—measured in lives lost, communities destroyed, and norms eroded—is so immense that it overwhelmingly outweighs any speculative harm that might arise from constraining human agency at the point of action.

Another argument could be made that potential attackers would simply shift to other targets. In criminology, this is the theory that blocking a specific criminal opportunity does not prevent the crime; it merely shifts it to a different time, place, or method (the “crime displacement” argument). A policy, however, should be measured by aggregate harm reduction, not perfect prevention. We do not refuse to install steering wheel locks just because a truly determined car thief can tow the vehicle away. If an LTF makes it harder to commit a war crime, it still saves lives, and the “it makes no difference” objection loses its force.

In addition, criminology shows that most crimes are crimes of opportunity, impulse, or convenience. Introducing “friction” (making an act physically harder or more time-consuming) significantly reduces the overall rate of that act. A soldier might impulsively fire at a shadow in a panic, but if the LTF locks the trigger because it recognises a child, that soldier is highly unlikely to drop their rifle, draw a sidearm, and intentionally execute the child. The displacement argument also assumes that the violator has absolute criminal intent, as with a murderer deliberately trying to kill a patient in the medical example above. But many violations of international humanitarian law are the result of negligence, fatigue, panic, or lack of training. A physical forcing function eliminates accidental and negligent harm of this kind, which accounts for a substantial share of civilian casualties.

Any serious policy discussion must also confront the reality of algorithmic error. AI‑driven systems will sometimes misclassify or misinterpret their environment. A firearm equipped with target‑recognition capabilities may, in certain conditions, misidentify a target due to height, posture, occlusion, lighting, or other environmental factors. There is no credible claim that such systems would be infallible. Yet this is hardly a novel problem. Across multiple domains, society has reached the point where it is willing to tolerate a degree of automated error when the aggregate benefits are judged to outweigh the risks. The gradual acceptance of self‑driving and semi‑autonomous vehicles, despite well‑documented failures, reflects a broader regulatory judgment that imperfect automation may still produce fewer harms than unrestricted human discretion. The relevant question, therefore, is not whether mistakes will occur, but whether the overall reduction in harm justifies accepting that risk in a managed and transparent way, a standard that has already been applied elsewhere with far‑reaching consequences.
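One way to reason about that trade-off is a simple expected-harm comparison. The sketch below uses entirely invented numbers: an assumed volume of unlawful and lawful shots, an assumed false-negative rate (a prohibited target not recognised) and false-positive rate (a lawful shot wrongly blocked), each weighted by an assumed harm. It illustrates the form of the calculation a regulator would need to make, not its actual inputs.

```python
def expected_harm_without_ltf(unlawful_shots: float, harm_per_unlawful: float) -> float:
    """Baseline: every attempted unlawful shot goes through."""
    return unlawful_shots * harm_per_unlawful


def expected_harm_with_ltf(unlawful_shots: float, lawful_shots: float,
                           false_negative_rate: float, false_positive_rate: float,
                           harm_per_unlawful: float, harm_per_blocked_lawful: float) -> float:
    """Unlawful shots that slip through, plus the cost of wrongly blocked lawful shots."""
    missed = unlawful_shots * false_negative_rate * harm_per_unlawful
    wrongly_blocked = lawful_shots * false_positive_rate * harm_per_blocked_lawful
    return missed + wrongly_blocked


# Invented inputs, purely to show the structure of the comparison.
baseline = expected_harm_without_ltf(unlawful_shots=100, harm_per_unlawful=10.0)
with_ltf = expected_harm_with_ltf(unlawful_shots=100, lawful_shots=10_000,
                                  false_negative_rate=0.10, false_positive_rate=0.01,
                                  harm_per_unlawful=10.0, harm_per_blocked_lawful=1.0)
print(baseline, with_ltf)  # 1000.0 200.0 under these assumed numbers
```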

What types of policy response are possible?

From a policy perspective, an adequate response would begin with measures designed to incentivise further research, development, and production of firearms incorporating these limiting design features. Public funding, procurement preferences, regulatory sandboxes, and liability frameworks could all be used to encourage experimentation and refinement, allowing the technology to mature and its limitations to be better understood. Over time, and subject to sufficient evidence of effectiveness and reliability, this initial phase could give way to more prescriptive regulation. As with other safety‑critical technologies, legislators could eventually mandate that such design features be incorporated into all newly manufactured civilian and military firearms, making them a baseline requirement rather than an optional enhancement. This phased approach recognises both the need for caution in the deployment of novel technologies and the urgency of establishing enforceable standards once their capacity to reduce harm has been demonstrated.

This article does not advance a claim to a perfect solution, nor does it prescribe a specific or immediate course of action. Rather, it is intended as a call for further reflection on a set of technological possibilities that are no longer speculative, but increasingly feasible. It argues for space to experiment, to test assumptions, and to assess risks and benefits with greater empirical grounding. If such experimentation demonstrates a meaningful capacity to reduce harm, it may in time justify more formal legislative engagement. At this stage, however, the aim is more modest: to bring an overlooked possibility into the policy conversation and to invite serious, sustained consideration of its implications.