Chief AI Officer David Ebert on bridging innovation and responsibility

An engineer and computer scientist by training, David Ebert has built his career at the intersection of artificial intelligence, data and decision-making. His work focuses on visual analytics, explainable AI and human-computer teaming – creating systems that solve complex real-world problems while remaining understandable and easy to use. 

Photo: Leslie Hawthorne Klingler, Office of Research and Partnerships

As the University of Arizona’s first chief AI and data science officer – a role that’s rare in higher education nationwide – his mission is to accelerate AI efforts by empowering the campus community to use it safely, sustainably, effectively and in ways that are human-centered.

In this Q&A, Ebert outlines how AI can enhance the university’s mission and make it more human – not less.

Why has the U of A invested in a chief AI officer?

AI is already changing how we deliver every part of our mission – teaching, research and service – plus the operations that make all of that possible. Across higher ed, you quickly see the problems: scattered adoption of AI tools on campuses, duplicated development and spending, inconsistent standards and preventable missteps. We want to avoid that and make sure AI strengthens the mission rather than distracts from it.

My mission as chief AI officer and head of the Office of Responsible AI is to engage an informed community to set priorities and decide how we use AI effectively. We’re a service organization, providing leadership, guidance and expertise grounded in broad input and the collective knowledge across disciplines.

David Ebert, the University of Arizona’s chief AI and data science officer, was part of a panel of AI experts at the recent AI in Leadership Summit, hosted by the Office of Responsible Artificial Intelligence.

Photo: Kris Hanning, Office of Research and Partnerships

How does AI compare to past technological shifts?

I often describe AI as the next digital transformation – like the shift to computerization, but compressed. Computerization unfolded over decades; with AI, the easy, usable version of the technology arrived in about 18 months. That speed is what creates the challenge, and what makes it so transformative. Another way to think about it is like the smartphone revolution: high-performance computers now sitting in everyone’s pocket. AI is that level of change, arriving all at once.

How can AI make universities more human, rather than less?

AI can take the mundane off our plates so we can put more time into what only humans do well. Used this way, AI doesn’t replace the human core of the university; it protects it by freeing us to teach, mentor, discover and care for others in more hands-on and deeper ways.

How are you developing an AI rollout that reflects the University of Arizona?

We began by building a roadmap with input from campus town halls that clarified where we are and where we want to go. The initial roadmap teams included 262 faculty, staff and students, and our town halls drew over 1,000 attendees. These days, we’re meeting with departments and groups across the university to present, listen and gather more input. We’ve prepared a first round of recommendations. The next step is circulating a draft for feedback from the roadmap teams, the Faculty Senate, Staff Council and deans. 

What new resources are being launched to support the campus community in this transition? 

Many people don’t realize that if they use large language models online with a free account, what they type in is public. That can violate confidentiality and FERPA and may give away their ideas. I encourage the community instead to use the free and secure AI tools that the U of A offers to its employees and students through University Information and Technology Services. 

The Office of Responsible AI is also launching a new AI platform that will bring a suite of generative AI and traditional machine learning tools to all members of the campus community this spring. The new U of A AI Platform will expand the university’s existing free and secure access to its extensive Amazon Web Services technology stack, providing flexibility to update or replace AI models as technologies change, while saving money. 

Attendees at the AI in Leadership Summit gather outside the Wyant College of Optical Sciences building.

Photo: Kris Hanning, Office of Research and Partnerships

What does academic integrity mean in a world where AI is ubiquitous?

Academic integrity hasn’t changed – only the tools have. We all still need to be ethical and responsible: do our own work, be transparent about what we used, including generative AI, and cite appropriately.

The real question is: What skills are we trying to build? Our job is to graduate students with capabilities that stay valuable for decades, not ones outsourced to the next tool.

What choices will make the U of A’s AI approach distinctive – not just competitive?

There’s a lot of pressure on universities to move fast, to “get AI into the classroom now” and give students access without first stepping back to ask: What are the learning outcomes?

Our job is to listen carefully, address concerns and move in a way that’s inclusive and builds confidence across the community. I hope people will say we lead by weighing factors responsibly, and that the proof showed up where it matters: in our students’ success and in the beneficial impact we have on our communities.