Community Voices
By Lena Haefele ’25 and Sathvik Samant ’26
Lena Haefele ’25 and Sathvik Samant ’26 are members of Lawrenceville’s student-led AI Council, which meets regularly to discuss research, experiment with new tools and platforms, and evaluate campus AI projects. Under the mentorship of Jennifer Parnell (Director of Innovation and Student AI Projects), the Council helps educate fellow students on the ethics and impact of AI and draws on deep connections with Lawrenceville community members who provide opportunities to advance its work in real-world settings.
If you’ve read the news recently, you’ve seen artificial intelligence heralded as the greatest technological marvel of our era, full of boundless possibility and human ingenuity. But what happens when this marvel outpaces our moral compass?
Here at Lawrenceville, we’re searching for that answer, trying to strike a balance between tradition and technology. Today’s Lawrentians are uniquely situated in uncharted territory, grappling with questions about what it means to be human in an age where machines can think, create, and even challenge our sense of originality.
AI is the simulation of human intelligence in machines, allowing them to think, learn, and make decisions. It’s able to generate content, automate processes, and analyze vast amounts of data in a short period of time, transforming industries and unlocking possibilities once confined to worlds of science fiction. However, AI can reinforce biases embedded in its training data, threaten traditional jobs and industries, and raise serious concerns about privacy and accountability. As Lawrentians, we’re grappling with these issues, trying to move towards a more responsible future.
For our generation, coming of age in a world shaped by AI means confronting questions that have no clear answer: What careers will exist in a decade? How do we ensure fairness and equity in a world driven by data? What is ethical use? At Lawrenceville, these questions aren’t theoretical—they’re real challenges that students and teachers are facing every day in classrooms. As today's students, however, we’re uniquely equipped to lead in this conversation, blending our Harkness ethos with the technology and adaptability demanded by this new world. Together, we’re not just preparing for the future—we’re helping to shape it.
Understanding AI starts with using it. As teachers integrate AI-related assignments into their syllabi, students have begun to use AI in teacher-guided, ethics-focused, policy-aligned ways. For instance, students can use AI to generate practice questions for oral language examinations or grammar practice problems to help them ace their Third Form English grammar test. Students also use AI in a “Socratic steering” mode to help them understand how to solve math, chemistry, or physics problems. In these situations, AI isn’t replacing their learning; it’s enhancing it. We believe that AI has the power to transform a Lawrenceville education because we’re living it. Every day, AI helps us supercharge our work, tutoring us when we’re stuck on chemistry homework or helping us brainstorm guiding questions for our next Harkness discussion. Teachers, students, alumni, and parents may fear that AI takes away from the core of what a Lawrenceville education is really about; however, when used correctly, AI amplifies our commitment to critical thinking, collaboration, and innovation, aligning with the values at the heart of our Lawrenceville experience.
The key to responsible AI adoption is balance: we need to weigh the benefits of AI against its risks and value human connection alongside technological progress. That’s why Lawrenceville’s approach to AI integration puts ethics at the forefront, making students aware of algorithmic bias, plagiarism, and explainability. The School’s current policy allows teachers, or in some cases course teams, to determine what constitutes acceptable use of AI on assignments. That determination falls along a scale ranging from no permitted use of AI to AI-guided research and idea generation (which must be cited and must reflect the student’s own understanding and creativity). Students are made aware of the School’s policies during orientation (in a session devoted to AI), on the first day of classes, and for specific assignments. This ethics-focused approach goes beyond what’s written in the Student Handbook: rather than simply enforcing a school policy, it guides students to understand why that policy exists in the first place, preparing them for an AI-driven future.
Lawrenceville’s AI Council members are hard at work on projects across multiple departments, attending conferences, and creating resources for teachers and fellow students. Throughout this work, our goal isn’t simply to advocate for AI use, but also to listen to the voices, concerns, and ideas of the people around us.
Ultimately, there is no set path or instruction manual for AI integration in schools: it’s uncharted, unprecedented territory. That’s why we’re lucky to be at Lawrenceville, a place where risk and curiosity are supported and encouraged. So, to you, our readers: lean into risk, challenge your beliefs and preconceived notions, ask questions, and be prepared for a bold new world.
Learn more about AI at Lawrenceville by listening to the School’s new podcast, “18:10,” available on both Apple and Spotify. The inaugural topic, “AI in Education,” spans two episodes, both hosted by Jennifer Parnell, director of innovation and student AI projects at Lawrenceville, with special guest Ajay Dhaul P’24, senior vice president of global data solutions and applied AI for a Fortune 500 consumer products company.
For more information, contact Lisa M. Gillard H'17, director of public relations, at lgillard@lawrenceville.org.