A conversation with USF researcher Karni Chagal-Feferkorn
By John Dudley, University Communications and Marketing
As artificial intelligence moves from experimental labs into classrooms, hospitals,
courtrooms and children’s smartphones, the pace of innovation is outstripping systems
designed to keep it in check.
Karni Chagal-Feferkorn — an assistant professor who researches AI law, ethics and policy in USF’s Bellini
College of Artificial Intelligence, Cybersecurity and Computing — says that while
rapid advancement brings promising opportunities, the gap between technological capability
and legal preparedness poses urgent challenges.
Chagal-Feferkorn’s work examines the gray areas of liability when autonomous systems
are involved in causing damage, the dangers AI poses to children and a growing need
for tech creators and policymakers to collaborate before systems are deployed.
In the conversation below, she explores how traditional legal frameworks could fall
short, explains why children require stricter protections and offers suggestions for
future professionals to build AI that is both safe and transformative.
As AI becomes more autonomous, what are the biggest unresolved legal questions around
who is liable when something goes wrong?
AI systems aren’t persons, but they are also not products that “act” exactly as directed. They’re not always predictable or explainable. Within that context, we
need to ask some important questions:
Should the courts treat AI systems as mere products and apply product liability laws
as they would for a coffee machine? This is the direction taken by the European Union’s Product Liability Directive with respect to specific types of harms and usages.
Or should we treat AI systems as something closer to people? For damages caused by
people or other legal persons, the tort system often applies the “negligence” framework,
rather than product liability.
That can lead to questions about whether the AI itself was negligent to help determine
whether developers should be liable (similar, perhaps, to examining whether a physician
was negligent to determine a hospital’s liability for damage caused by the physician).
This might sound like science fiction, but we’ve already seen plaintiffs in the U.S.
alleging, so far with no ruling, that an autonomous vehicle was negligent when
it turned one way rather than the other.
Beyond identifying a specific framework for resolving tort claims, a more general question is how to strike the right balance between a framework that incentivizes taking more precautions to make the technology safer and one that promotes innovation and avoids a chilling effect on technology.
The good news is the law is evolving quickly on this topic, and hopefully it won’t
take much trial and error before optimal legal mechanisms are found.
What are the ethical concerns related to children’s interactions with these systems,
such as AI-themed gifts they may have received over the holidays?
There is a lot to worry about here. It has been alleged in lawsuits and shared in
media reports that AI companions were involved in several instances where teenagers
took their own lives.
Additionally, there are documented instances of AI engaging in harmful conversations with children and teenagers, including exposing them to sexual content and encouraging violence. A recent Texas lawsuit alleges that a chatbot conversing with a teenager suggested that killing the teenager’s parents would be an appropriate response to their limiting his phone time.
Children’s overreliance on AI could potentially lead to loneliness and depression,
along with diminished social skills and problem-solving abilities. Another concern
is how much information is collected about users and how it might be used later in
ways that don’t benefit them.
Other types of AI harms may warrant a lenient approach in the name of incentivizing
technological innovation. But when we consider the identity of potential victims (children),
the type of harm (self-harm and long-term mental health consequences) and the potential
impact (this could affect every child in the nation), in my opinion everything points
toward a call for stricter intervention.
Protective measures could include prohibiting certain content, intervening in how
the AI responds once self-harm thoughts are expressed and reminding users they are
interacting with AI and not a real person. Some states recently enacted rules to that
effect, and others might follow. We might also see this issue taken up at the federal
level.
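To make these measures concrete, the short sketch below is a minimal, hypothetical Python example of how a chatbot wrapper might implement two of them: intervening once self-harm language is detected and reminding users they are interacting with AI. The keyword patterns, messages and function names are illustrative assumptions, not a description of any real product's safety system.

    import re

    # Assumption: a small keyword screen stands in for a real safety classifier.
    SELF_HARM_PATTERNS = [r"\bkill myself\b", r"\bend my life\b", r"\bhurt myself\b"]

    AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a real person."
    CRISIS_MESSAGE = (
        "It sounds like you may be going through something difficult. "
        "Please consider talking to a trusted adult or a crisis helpline."
    )

    def flags_self_harm(message: str) -> bool:
        """Return True if the message matches any self-harm pattern."""
        return any(re.search(p, message, re.IGNORECASE) for p in SELF_HARM_PATTERNS)

    def respond(user_message: str, model_reply: str) -> str:
        """Wrap a chatbot reply with the protective measures described above."""
        if flags_self_harm(user_message):
            # Intervene instead of returning the model's normal reply.
            return f"{CRISIS_MESSAGE}\n{AI_DISCLOSURE}"
        # Otherwise pass the reply through, keeping the AI disclosure visible.
        return f"{model_reply}\n{AI_DISCLOSURE}"

A production system would rely on trained classifiers and human review rather than keyword lists, but the basic structure (detect, intervene, disclose) mirrors the protective measures described above.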
Parents can help mitigate risks by supervising their child’s use of AI, to the extent possible, and by raising the child’s awareness of dangers in an age-appropriate manner.
For example, parents of young children interacting with AI toys might do best to constantly
monitor their interactions and ensure nothing inappropriate is said. Parents of older
children who spend hours in their room – potentially with a chatbot – can talk to
their kids about potential risks. If possible, it’s advisable to keep children’s AI
interactions short.
You argue that engineers, scientists and policymakers must collaborate before AI systems
are built. What should that look like?
It makes more sense to mitigate potential harms caused by AI before they happen, rather than address them after there is an injured party and, potentially, a lawsuit.
In the past, there was less need for technologists to collaborate with lawyers over
the design of systems, because such systems assisted humans with well-defined tasks
whose legal and ethical implications were limited and understood in advance.
Now, AI systems assist or even replace humans in making decisions that can lead to
various legal and ethical consequences. Because AI systems are often unpredictable,
additional ethical and legal questions are likely to follow.
A collaborative effort among the disciplines can lead to better results. The emphasis
should be on having tech professionals train in spotting the potential legal and ethical issues their work might raise and having legal and public policy professionals understand how vague policy concepts translate into code.
These processes are starting to gain traction in education and should be encouraged
elsewhere.
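As one small illustration of what translating a vague policy concept into code can look like, the sketch below takes the idea of data minimization, collecting only the data a feature needs and not keeping it longer than necessary, and turns it into two concrete Python checks. The field names and the 30-day retention window are assumptions made for the example, not requirements drawn from any particular law.

    from datetime import datetime, timedelta, timezone

    # Policy: collect only the fields the feature actually needs...
    ALLOWED_FIELDS = {"user_id", "message_text", "timestamp"}

    # ...and do not keep records longer than necessary (here, 30 days).
    RETENTION_PERIOD = timedelta(days=30)

    def minimize(record: dict) -> dict:
        """Drop any field the policy does not explicitly allow."""
        return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

    def is_expired(record: dict) -> bool:
        """Flag records that have outlived the retention period."""
        return datetime.now(timezone.utc) - record["timestamp"] > RETENTION_PERIOD

Deciding which fields belong on the allow list, and how long "necessary" really is, are exactly the judgment calls that benefit from lawyers, policymakers and engineers working at the same table.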
AI moves faster than regulation. So how can lawmakers protect the public without stifling
innovation?
The pace of lawmaking can’t keep up with technological advancement, and this is especially
true with AI.
One solution gaining traction is “sandboxes” – platforms that provide an experimental
environment for entrepreneurs, government regulators and policymakers to interact
and test new technologies, sometimes while offering exceptions to existing laws.
In general, public-private collaboration might lead to quicker solutions than waiting
for legislation to be passed.
To be sure, regulation alone is not the answer. Other public policy measures, educational efforts and voluntary steps by the private sector need to be part of the solution.
Looking ahead, what skills or mindsets will future lawyers, policymakers and technologists
need?
More than working knowledge of specific technologies or legal policies, the emphasis will be on the ability to quickly learn and adapt to new technologies and policies. It’s also important that the different disciplines have sufficient literacy in one another’s fields so they can work together and are better equipped to ensure these new technologies are also safe and trustworthy.
