About me

Hey there, my name is Shepard and I’m a junior at Tufts University studying Computer Science and STS (Science, Technology, and Society). I work at the intersection of computational technology and societal impact, with my academic and professional interests centered on:

  1. AI Safety and Alignment: Researching frontier AI models to ensure the responsible development of these systems while promoting trust, value alignment, and oversight mechanisms. Here’s why it matters.
  2. Public Interest Technology (PIT): Building technology in a responsible and equitable way, while centering impacted communities, drawing from many disciplines, and putting public benefit ahead of profit incentives. Here’s why it matters.

I’ve also been playing the drums for over 15 years, so I spend my free time performing with BEATs (Bangin’ Everything At Tufts) and my band Call From Earth.

My work

The latest advances in artificial intelligence point to a major shift in how we live, work, and interact with one another, and I believe the technology has the potential to address some of our most pressing challenges. But to realize those benefits, we must understand the mechanisms and risks underlying AI development. As the technical lead on the inaugural executive board of the Tufts AI Safety Group, I hope to give students a foundation in the technical side of AI safety and, through workshops and other opportunities, prepare them to engage in impactful discourse. As I continue my studies, I plan to explore deep learning and applied artificial intelligence further, take on a more involved role in the AI Safety community at Tufts and beyond, and identify pathways to contribute to safety-related research.

To get more specific about my focus in AI Safety, I am currently interested in investigating collusion in LLMs. Using AI for AI Safety is a promising and increasingly popular approach, but it often relies on agents collaborating effectively with humans and each other rather than conspiring against them. We need a better understanding of how reliably AI agents can be monitored, particularly when agents and their monitors are able to collude and deliberately report false or misleading information, which makes collusion an urgent challenge.

Meanwhile, I am constantly exploring ways to rethink how AI is applied in practice. In particular, I am excited about the power of artificial intelligence to give people a voice. In my own work, this has meant using AI to make years of community meeting transcripts and city data accessible to community members and lawmakers, breaking down long-standing communication barriers and providing a jumping-off point for meaningful change. Examples like these illustrate alternatives to the dominant applications of AI as a labor substitute, surveillance tool, or military weapon.