Panels explore AI, security and the role of human judgment

Over two days of expert panels at the 2025 Vanderbilt Summit on Modern Conflict and Emerging Threats, one theme emerged as the clearest unifying thread: As AI systems become more powerful and more autonomous, the role of human judgment has never been more crucial.  

Across discussions on warfare, cybersecurity, biomedicine and geopolitical partnerships, leaders emphasized the urgent need for human oversight to guide the development and deployment of artificial intelligence. Through half a dozen panel sessions, these experts focused on the promise and the peril of AI—highlighting its transformative potential and the real risks it poses when speed and automation outpace ethics, policy and human control. 

Day One 

Lieutenant General Ed Cardon (Ret.); Andrew Moore, Founder, Lovelace AI; Admiral Michael Rogers (Ret.), Senior Fellow, Northwestern University Kellogg School of Management, Former U.S. Cyber Command Commander; and Jesse Spencer-Smith, Associate Dean for Partnerships and Innovation for the College of Connected Computing, Chief Data Scientist and Interim Director for the Data Science Institute, and Professor of the Practice of Computer Science, Vanderbilt University, talk during the Institute of National Security 2025 Summit on Modern Conflict and Emerging Threats. Photo: Harrison McClary/Vanderbilt University

AI and National Security
Moderator: Lt. Gen. Ed Cardon (Ret.)
Panelists: Andrew Moore, founder, Lovelace AI; retired Adm. Michael Rogers, Northwestern University Kellogg School of Management; Jesse Spencer-Smith, Vanderbilt University College of Connected Computing 

The first panel of the 2025 summit focused on a high-level conversation about how AI is transforming national defense strategy. Panelists discussed its potential to enhance readiness through improved decision-making and operational speed, but they warned of over-reliance, loss of human oversight and ethical ambiguity. There was broad agreement that the U.S. must adopt AI more aggressively but with strong ethical safeguards. 

Speakers highlighted the challenge of integrating AI into mission-specific operations rather than using generic tools. They emphasized the need for robust training, frameworks for accountability and clear policies on human decision-making when AI is involved. Panelists were cautiously optimistic about AI’s role in helping the U.S. maintain global leadership, provided it is developed and deployed with accountability, transparency and human oversight.  


Julie Wernau, Reporter, The Wall Street Journal; David Graham, Biosecurity Programs Lead and Distinguished Scientist, Oak Ridge National Laboratory; Robert L. Grossman, Director, Center for Translational Data Science at the University of Chicago; Ethan Jackson, Senior Director, Strategic Missions and Technologies, Microsoft; Cristina Martina, Research Associate Professor, Meiler Lab, Vanderbilt University; and Jennifer Roberts, Director, Resilient Systems, ARPA-H, talk during the Institute of National Security 2025 Summit on Modern Conflict and Emerging Threats. Photo: Harrison McClary/Vanderbilt University

Biomedicine Unleashed: AI’s Power, Perils and National Security Stakes
Moderator: Julie Wernau, The Wall Street Journal
Panelists: David Graham, Oak Ridge National Laboratory; Robert L. Grossman, University of Chicago; Ethan Jackson, Microsoft; Cristina Martina, Vanderbilt University; Jennifer Roberts, ARPA-H 

Panelists examined how AI is rapidly transforming biomedicine, offering breakthroughs in care and diagnostics while raising new national security concerns. They discussed AI’s role in drug discovery, diagnostics and combat-related health forecasting, but they also raised red flags about privacy risks, inequities in global access and ethical use in conflict zones. 

The panel called for more transparent policy development and greater investment in secure health data systems. A recurring theme was the need for interdisciplinary collaboration of tech creators, ethicists and policymakers to ensure that biomedicine’s AI revolution benefits society while minimizing harm.


Gillian Tett, Provost, King’s College, Cambridge, and Columnist and Editorial Board Member, Financial Times; Michael Brasseur, Chief Strategy Officer, Saab, Inc., and General Manager, Skapa by Saab; Lizza Dwoskin, Silicon Valley Correspondent, The Washington Post; Ron Keesing, Chief AI Officer, Leidos; and Joe P. Larson III, Senior Vice President, National Security Operations, Anduril Industries, talk during the Institute of National Security 2025 Summit on Modern Conflict and Emerging Threats. Photo: Harrison McClary/Vanderbilt University

AI and the Changing Character of Conflict
Moderator: Gillian Tett, King’s College, Cambridge / Financial Times
Panelists: Michael Brasseur, Saab, Inc.; Lizza Dwoskin, The Washington Post; Ron Keesing, Leidos; Joe P. Larson III, Anduril Industries 

The afternoon’s first panel examined how generative AI, autonomous systems and private-sector innovation are redefining modern warfare. Panelists explored the growing tension between speed and safety in military decision-making, citing the Israel-Hamas conflict as a stark example of how quickly AI-driven tools can shift the dynamics of conflict and how difficult it is for human operators to remain fully in control. They also pointed to bureaucratic hurdles that slow innovation and limit the effectiveness of AI adoption in defense settings. 

The discussion urged procurement reform, realistic testing environments and trust-building measures for AI integration. While bringing commercial AI technologies into military operations poses significant risks, panelists argued that they could also offer vital strategic advantages in today’s rapidly evolving environment if adopted responsibly. 


General Paul M. Nakasone (Ret.), Founding Director of the Vanderbilt Institute of National Security; Karl Hanmore, First Assistant Director and Chief Technology Officer, General Mission Capability, Australian Signals Directorate, Australian Government; Major General Akitsugu Kimura, Commanding General, Japan Self-Defense Forces Cyber Defense Command (JCDC); and Major General Lee Yi-Jin, Chief of Digital and Intelligence Service and Director of Military Intelligence, Singapore Armed Forces (SAF), talk during the Institute of National Security 2025 Summit on Modern Conflict and Emerging Threats. Photo: Harrison McClary/Vanderbilt University

Strategic Synergies: AI, Security, and Partnerships in the Indo-Pacific
Moderator: Retired Gen. Paul M. Nakasone, Vanderbilt Institute of National Security
Panelists: Karl Hanmore, Australian Signals Directorate; Maj. Gen. Akitsugu Kimura, Japan Self-Defense Forces; Maj. Gen. Lee Yi-Jin, Singapore Armed Forces 

The final panel of the day offered insights from senior defense officials representing Australia, Japan and Singapore. Panelists shared their countries’ approaches to joint AI infrastructure development, cross-border cyber training and inclusive access to emerging technologies as part of their strategy to strengthen deterrence and resilience across the Indo-Pacific.  

Speakers stressed the strategic necessity of collaboration amid rapidly evolving regional threats. Flexible organizational structures, real-time intelligence sharing and equitable AI integration were presented as essential components of security partnerships needed in a contested digital and physical landscape. 


Day Two 

From left: Niloofar Razi Howe, Distinguished Visiting Professor, Vanderbilt University; Frank Cilluffo, Director, McCrary Institute for Cyber and Critical Infrastructure Security, Auburn University; Jeff Moss, Founder, DEF CON & Black Hat; and Thompson Paine, Head of Product & Business Operations, Anthropic PBC, during the Institute of National Security 2025 Summit on Modern Conflict and Emerging Threats. Photo: John Amis/Vanderbilt University

Rethinking Cybersecurity within an Artificial Intelligence Construct
Moderator: Niloofar Razi Howe, Vanderbilt University / Capitol Meridian Partners
Panelists: Frank Cilluffo, Auburn University; Jeff Moss, DEF CON & Black Hat; Thompson Paine, Anthropic 

The second day began with a discussion on AI’s role in reshaping cybersecurity. Panelists from industry, academia and cyber defense communities raised concerns about AI-driven vulnerabilities, including manipulation of AI training by adversaries, AI hallucinations and loss of decision transparency, which can make it difficult for human operators to understand how models reach their conclusions.  

At the same time, panelists saw opportunities for AI to improve threat detection, speed up response times and expand cybersecurity capacity. While the conversation acknowledged the inevitability of cyber risk, it also stressed the value of human oversight, system resilience and coordinated frameworks for trustworthy AI deployment. 


Lieutenant General Charlie “Tuna” Moore (Ret.); Sue Gordon, Former Principal Deputy Director of National Intelligence; Will Roper, CEO, Istari, and Distinguished Professor of the Practice, Georgia Institute of Technology; Peter Singer, Senior Fellow, New America, and Managing Partner, Useful Fiction; and General Glen VanHerck (Ret.), Founder and Principal, Glen VanHerck Advisors, talk during the Institute of National Security 2025 Summit on Modern Conflict and Emerging Threats. Photo: Harrison McClary/Vanderbilt University

The Vanguard Forecast: National Security Predictions for 2025
Moderator: Retired Lt. Gen. Charlie “Tuna” Moore, Vanderbilt University
Panelists: Sue Gordon, Former Principal Deputy Director of National Intelligence; Will Roper, Istari / Georgia Institute of Technology; Peter Singer, New America / Useful Fiction; retired Gen. Glen VanHerck, Glen VanHerck Advisors 

The summit’s closing panel featured senior national security leaders offering candid predictions and insights on emerging threats and what they mean for the world’s balance of power. Topics ranged from AI’s role in keeping adversaries in check to the growing complexity of hybrid warfare and threats to the global supply chain. Panelists noted that AI systems, like people, can be wrong or manipulated. 

Panelists voiced concerns about lagging technology regulations, nuclear proliferation and new kinds of weapons, from drones that act on their own to lab-built biological agents. The panel called for pragmatic leadership, new institutions to drive AI adoption and strategic public-private partnerships to meet future national security challenges.  

The Human Imperative 

While the summit’s panels explored a wide range of challenges and opportunities presented by AI, one message remained consistent: Humans must stay at the center of the systems they create. Across panels, experts warned that speed, scale and complexity are outpacing current policy and governance structures—making collaboration, trust and real-world testing essential. As global security threats evolve, frameworks guiding AI’s development and deployment need to evolve too. The future of conflict may be increasingly automated, but accountability, ethics and human agency must not be left behind.