Celebrating an AI Milestone, and Guiding the Future
Seventy years ago this summer, a small group of mathematicians and scientists gathered in Hanover for the Dartmouth Summer Research Project on Artificial Intelligence, a two-month workshop that coined the term “artificial intelligence” and helped launch a field now reshaping science, technology, and daily life.
Dartmouth will mark the anniversary with a yearlong series of events, convenings, and public conversations designed not only to reflect on that legacy, but to help shape what comes next.
If the original conference asked whether machines could think, today’s questions are more complex, and more human. As AI systems generate text, images, code, and predictions with remarkable efficacy, the central challenge has shifted from “Can machines think?” to “How do humans think, create, make ethical decisions, and lead alongside machines?”
“Our legacy carries a responsibility to anchor innovation in human judgment, to ask hard questions about the role of new tools in teaching and research, and to lead higher education in defining how this technology advances knowledge with integrity and purpose,” says President Sian Leah Beilock, a cognitive scientist.
That responsibility starts in the classroom. Faculty across disciplines are using AI not to bypass student thinking but to sharpen it, making reasoning visible, testing assumptions, and pushing ideas into new territory. The anniversary year will also surface Dartmouth’s emerging norms for responsible AI use developed through faculty leadership and ongoing campus dialogue, creating a model for institutions navigating similar questions.
“Students learn to develop and defend interpretations, challenge assumptions, and push ideas into uncharted territory—while discerning when automation should give way to human judgment and imagination,” says Provost Santiago Schnell.
The measure of excellence, he says, is less what can be produced and more the rigor of reasoning, the originality of the questions posed, and the ability to navigate complexity with clarity and care. The aim is to prepare students to perform thoughtfully, critically, and ethically wherever AI is present—not just as engineers or users, but as citizens, creators, and decision-makers.
Dartmouth’s strength as a liberal arts research university—where computational science converges with philosophy, ethics, media studies, the arts, and the social sciences—means these questions can be addressed from every angle. Researchers are advancing AI’s capabilities while interrogating its risks, from hallucinations and bias in training data to the broader challenge of ensuring that powerful tools serve, rather than replace, human judgment.
That integration of depth and breadth is producing scholarship that informs institutions, policymakers, and the public, with research projects underway across the university.
Dartmouth’s graduate and professional schools have also rolled out new offerings related to AI. Thayer School of Engineering has a new AI track as an option within its master of engineering program, and undergraduates pursuing a bachelor of engineering can now choose a concentration in AI.
Meanwhile, AI now surfaces across the entire Tuck School of Business curriculum—appearing in every course through teaching, research, and hands-on learning—with at least 10 electives this academic year devoted specifically to AI.
Kickoff Event: The McGuire Prize
The anniversary commemoration opens Feb. 26–27 with the presentation of the McGuire Prize for Societal Impact to Hany Farid, a leading expert on digital forensics whose work sits at the intersection of artificial intelligence, digital trust, and ethical responsibility.
The McGuire Prize recognizes leaders whose work demonstrates technology’s potential to benefit humanity. Farid served for two decades on Dartmouth’s faculty, where he pioneered the field of digital forensics, developing mathematical and computational techniques to determine when images, audio, and video have been manipulated or fabricated.
His research addresses one of the most pressing challenges of the digital age: how to distinguish truth from falsehood in a world where synthetic media can be produced at scale.
Farid’s innovations include technologies such as PhotoDNA, deployed globally to identify and remove child exploitation imagery, and advanced methods for detecting AI-generated deepfakes and other manipulated media that have become essential tools for law enforcement, human rights advocates, and major technology companies.
Launching the 70th anniversary with Farid’s recognition underscores a central theme of the year: rigorous technical innovation must be paired with ethical responsibility and accountability.
Setting the Agenda for AI’s Next Chapter
The year’s flagship event, “The Dartmouth Conference, Revisited,” will connect AI’s founding vision to a future-facing mandate for responsible innovation.
Drawing inspiration from the 1956 summer workshop, the conference will gather researchers, developers, creators, and institutional leaders at Dartmouth on Oct. 29–30 to deliberate on how artificial intelligence can responsibly augment human judgment and creativity, and to clarify higher education’s role in preserving and cultivating distinctly human capabilities.
The original conference asked how machines could be made to solve human problems. As the field has matured and expanded, the questions now are more nuanced: “How do we strengthen human judgment and creativity in the age of AI?” and “What is higher education’s role in expanding these capacities?”
“These questions go to the heart of higher education’s responsibility in an AI era,” says Peter Chin, professor of engineering and co-chair of Dartmouth’s Faculty Leadership Group on AI. “Framing this dialogue through the committee’s role—to discern where AI can accelerate our mission and where thoughtful restraint is needed—ensures that critical thinking and ethical inquiry remain central to crafting an evidence-based approach that can guide others in academia.”
The conference aims to produce actionable guidance—published frameworks for centering human judgment, creativity, and ethical responsibility as AI capabilities expand.
A Yearlong Conversation
A series of events across the anniversary year will extend Dartmouth’s leadership, exploring how AI reshapes not just research and decision-making but imaginative work across disciplines—and what educational institutions must do in response.
Among them:
This spring, students will compete for the DALI TechniGala Student Prize, Dartmouth’s first-ever AI innovation prize. The competition celebrates outstanding student work at the intersection of design, technology, and human-centered innovation—a showcase of how the next generation is already building with AI while prioritizing creativity, responsibility, and real-world impact.
In September, the Magnuson Center’s Dartmouth Entrepreneurs Forum in San Francisco will bring the university’s perspective to the innovation and venture capital community, making the case that entrepreneurship in the age of AI still depends on human judgment, creative vision, and ethical consideration.
And a “Future of Work” event in New York City in spring 2027 will examine how organizations can structure work to preserve human judgment, creative capacity, and ethical accountability as AI capabilities expand, and how colleges and universities can prepare the leaders who will navigate these challenges.
These are among many events planned across campus throughout the anniversary year. More events and details will be announced on the AI at Dartmouth website in coming months. This website will also serve as a hub for updates and opportunities for the Dartmouth community and partners worldwide to engage.
As the institution that first launched the field, Dartmouth now seeks to convene a broader conversation about what must remain distinctly human and how educational institutions can ensure those capacities endure.
