
There is a significant gap between the AI technologies being developed and the public's understanding of them. We must ask: what if the public were not just passive recipients of these technologies, but active participants in guiding their evolution?

A group of technology companies convened by Stanford University’s Deliberative Democracy Lab will gather public feedback about complex questions the AI industry is considering while developing AI agents. This convening includes Cohere, Meta, Oracle, and PayPal, advised by the Collective Intelligence Project.

This Industry-Wide Forum brings together everyday people to weigh in on tech policy and product development decisions where there are difficult tradeoffs with no simple answers. With technology developing so quickly, there is no better time than now to learn what an informed public would like AI technologies to do for them. The Forum is designed around Stanford's method of Deliberative Polling, a governance innovation that gives the public's voice a greater say in decision-making. It will take place in Fall 2025. Findings will be made public, and Stanford's Deliberative Democracy Lab will hold webinars for the public to learn and inquire about them.

"We're proud to be a founding participant in this initiative alongside Stanford and other AI leaders," said Saurabh Baji, CTO of Cohere. "This collaborative approach is central to enhancing trust in agentic AI and paving the way for strengthened cross-industry standards for this technology. We're looking forward to working together to shape the future of how agents serve enterprises and people."

In the near term, AI agents will be expected to conduct a myriad of transactions on behalf of users, creating considerable opportunities to deliver value as well as significant risks. This Forum will improve product-market fit by giving companies foresight into what users want from AI agents; it will help build trust and legitimacy with users; and it will strengthen cross-industry relations in support of industry standards development over time.

"We support The Forum for its deliberative and collaborative approach to shaping public discourse around AI agents," said Prakhar Mehrotra, SVP of AI at PayPal. "Responsibility and trust are core business principles for PayPal, and through collaborative efforts like these, we seek to encourage valuable perspectives that can help shape the future of agentic commerce."

The Forum will be conducted on the AI-assisted Stanford Online Deliberation Platform, a collaboration between Stanford’s Deliberative Democracy Lab and Crowdsourced Democracy Team, where a cross-section of the public will deliberate in small groups and share their perspectives, their lived experiences, and their expectations for AI products. This deliberation platform has hosted Meta’s Community Forums over the past few years. The Forum will also incorporate insights from CIP's Global Dialogues, conducted on the Remesh platform.

“Community Forums provide us with people’s considered feedback, which helps inform how we innovate,” said Rob Sherman, Meta’s Vice President, AI Policy & Deputy Chief Privacy Officer. “We look forward to the insights from this cross-industry partnership, which will provide a deeper understanding of people’s views on cutting-edge technology.”

This methodology is rooted in deliberation, which provides representative samples of the public with baseline education on a topic, including options with associated tradeoffs, and asks them to reflect on that education as well as their lived experience. Deliberative methods have been found to offer more considered feedback to decision-makers because people have to weigh the complexity of an issue rather than offering a knee-jerk reaction.

"This industry-wide deliberative forum represents a crucial step in democratizing the discourse around AI agents, ensuring that the public's voice is heard in a representative and thoughtful way as we collectively shape the future of this transformative technology," said James Fishkin, Director of Stanford's Deliberative Democracy Lab.

This Industry-Wide Forum represents a pivotal step in responsible AI development, bringing together technology companies and the public to address complex challenges in AI agent creation. By leveraging Stanford's Deliberative Polling methodology and making findings publicly available, the initiative promises to shape the future of AI with enhanced transparency, trust, and user-centric focus. Find out more about Stanford’s Deliberative Democracy Lab at deliberation.stanford.edu.

Media Contact: Alice Siu, Stanford Deliberative Democracy Lab


In September 2022, National Security Adviser Jake Sullivan identified next-generation computing (including quantum and semiconductors) as one of three technology families, alongside biotech and clean energy (including batteries), that are critical to the economic and national security of the United States. By allowing for new methods of computation, sensing, and communications, quantum technologies have the potential to revolutionize not only commercial industries, such as financial services, chemical engineering, and energy (among others), but also national security capabilities, such as code breaking and remote sensing.

Publication Type
White Papers

Ever since the public release of ChatGPT in the fall of 2022, classrooms everywhere from grade schools to universities have started to adapt to a new reality of AI-augmented education. 

As with any new technology, the integration of AI into teaching practices has come with plenty of questions: Will this help or hurt learning outcomes? Are we grading students or an algorithm? And, perhaps most fundamentally: To allow, or not to allow AI in the classroom? That is the question keeping many teachers up at night.

For the instructors of “Technology, Innovation, and Great Power Competition,” a class created and taught by Stanford faculty and staff at the Gordian Knot Center for National Security Innovation (GKC), the answer to that question was obvious. Not only did they allow students to use AI in their coursework, they required it.
 

Leveraging AI for Policy Analysis


Taught by Steve Blank, Joe Felter, and Eric Volmar of the Gordian Knot Center, the class was a natural forum to discuss how emerging technologies will affect relations between the world's most powerful countries.

Volmar, who returned to Stanford after serving in the U.S. Department of Defense, explains the logic behind requiring the use of AI:

“As we were designing this curriculum, we started from an acknowledgement that the world has changed. The AI models we see now are the worst they’re ever going to be. Everything is going to get better and become more and more integrated into our lives. So why not use every tool at our disposal to prepare students for that?”

For students used to restrictions or outright bans on using AI to complete coursework, being graded on using AI took some getting used to.

“This was the first class that I’ve had where using AI was mandatory,” said Jackson Painter, an MA student in Management Science and Engineering. “I've had classes where AI was allowed, but you had to cite or explain exactly how you used it. But being expected to use AI every week as part of the assignments was something new and pretty surprising.” 

Dr. Eric Volmar teaching the new Stanford Gordian Knot Center course Entrepreneurship Inside Government.

Assigned to teams of three or four, students were given an area of strategic competition to focus on for the duration of the class, such as computing power, semiconductors, AI/machine learning, autonomy, space, and cybersecurity. In addition to readings, each group was required to conduct interviews with key stakeholders, with the end goal of producing a memo outlining specific policy-relevant insights about their area of focus.

But the final project was only part of the grade. The instructors also evaluated each group based on how they had used AI to form their analysis, organize information, and generate insights.

“This is not about replacing true expertise in policymaking, but it’s changing the nature of how you do it,” Volmar emphasized.
 

Expanding Students’ Capabilities


For the students, finding a balance between familiar habits and using a novel technology took some practice. 

“Before being in this class, I barely used ChatGPT. I was definitely someone who preferred writing in my own style,” said Helen Philips, an MA student in International Policy and course assistant for the class.

“This completely expanded my understanding of what’s possible with AI,” Philips continued. “It really opened up my mind to how beneficial AI can be for a broad spectrum of work products.”

After some initial coaching on how to develop effective prompts for the AI tools, students started iterating on their own. Using the models to summarize and synthesize large volumes of content was a first step. Then groups started getting creative. Some used AI to create maps of the many stakeholders involved in their project, then identify areas of overlap and connection between key players. Others used the tools to create simulated interviews with experts, then used the results to better prepare for actual interviews.
 




For Jackson Painter, the class provided valuable practice combining more traditional techniques for developing policy with new technology.

“I really came to see how irreplaceable the interviewing process is and the value of talking to actual people,” said Jackson. “People know the little nuances that the AI misses. But then when you can combine those nuances with all the information the AI can synthesize, that’s where it has its greatest value. It’s about augmenting, not replacing, your work.”

That kind of synthesis is what the course instructors hope students take away from the class. The aim, explained Volmar, is that they will put it into practice as future leaders facing complex challenges that touch multiple sectors of government, security, and society.

“This is a new type of policy work,” he said. “It's accelerated, and it increases the depth and breadth students can take in. They can move across many different areas and combine technical research with Senate and House Floor hearings. They can take something from Silicon Valley and combine it with something from Washington. It's an extraordinary thing.”

Real-time Innovation


For instructors Blank, Felter, and Volmar, classes like “Technology, Innovation, and Great Power Competition” — or sister classes like the highly popular “Hacking for Defense,” and the recently launched “Entrepreneurship Inside Government” — are an integral part of preparing students to navigate ever more complex technological and policy landscapes.

“We want America to continue to be a force for good in the world. And we're going to need to be competitive across all these domains to do that. And to be competitive, we have to bring our A-game and empower creative thinking as much as possible. If we don't take advantage of these technologies, we’re going to lose that advantage,” Felter stressed.

Applying real-time innovation to the challenges of national security and defense is the driving force behind the Gordian Knot Center. Founded in fall of 2021 by Joe Felter and Steve Blank with support from principal investigators Michael McFaul and Riitta Katila, the center brings together Stanford's cutting-edge resources, Silicon Valley's dynamic innovation ecosystem, and a network of national security experts to prepare the next generation of leaders.

To achieve that, Blank leveraged his background as a successful entrepreneur and creator of the lean startup movement, a methodology for launching companies that emphasizes experimentation, customer feedback, and iterative design over more traditional methods based on complex planning, intuition, and “big design up front” development.

“When I first taught at Stanford in 2011, I observed that the teaching being done about how to write a business plan in capstone entrepreneurship classes didn’t match the hands-on chaos of an actual startup. There were no entrepreneurship classes that combined experiential learning with methodology. But the goal was to teach both theory and practice.”
 




That goal of combining theory and practice is a throughline that continues in today’s Gordian Knot Center. After the success of Blank’s entrepreneurship classes, he — alongside Pete Newell of BMNT and Joe Felter, a veteran, former senior Department of Defense official, and the current center director of the GKC — turned the principles of entrepreneurship and iteration toward government.

“We realized that university students had little connection or exposure to the problems that government was trying to solve, or the larger issues civil society was grappling with,” says Blank. “But with the right framework, students could learn directly about the nation's threats and security challenges, while innovators inside the government could see how students can rapidly iterate and deliver timely solutions to defense challenges.”

That thought led directly to the development of the “Hacking for Defense” class, now in its tenth year, and eventually to the organization of the Gordian Knot Center and its affiliate programs like the Stanford DEFCON Student Network. Based at the Freeman Spogli Institute for International Studies, the center today is a growing hub of students, veterans, alumni, industry experts, and government officials from a multiplicity of backgrounds and areas of expertise working across campus and across government to solve real problems and enact change.

Condoleezza Rice, Director of the Hoover Institution, speaking in Hacking for Defense.

Prepared for Diverse Challenges


In the classroom, the feedback cycle between real policy issues and iterative entrepreneurship remains central to the student experience. And it’s an approach that resonates with students.  

“I love the fact that we’re addressing real issues in real time,” says Nuri Capanoglu, a master’s student in Management Science and Engineering who took “Technology, Innovation, and Great Power Competition” in fall 2024.

He continues, “Being able to use ChatGPT in a class like this was like having a fifth teammate we could bounce ideas off, double check things, and assign to do complex literature reviews that wouldn't have been possible on our own. It's like we went from being a team of four to a team of fifty.”

Other students agree. Feedback on the class has praised the “fusion of practical hands-on learning and AI-enabled research” and deemed it a “must-take for anyone, regardless of background.”

Like many of his peers, Capanoglu is eager for more. “As I’ve been planning my future schedule, I’ve tried to find more classes like this,” he says.

Instructors like Felter and Volmar are equally ready to welcome more students into their courses.

“Policy is so complex now, and the stakes are so high,” acknowledged Felter. “But what we’re seeing in these classes is a passion for addressing real challenges from students who may not have otherwise thought they have a place at the table of national security or policy. That’s what we want. The best and brightest future policymakers are going to have diverse skill sets and understand how to leverage every possible tool and capability available to meet those challenges. So if you want to get involved and make a difference, come take a policy class.”


In an era marked by rapid technological advancements, increasing political polarization, and democratic backsliding, reimagining democracy requires innovative approaches that foster meaningful public engagement. Over the last 30 years, Deliberative Polling has proven to be a successful method of public consultation to enhance civic participation and informed decision-making. In recent years, the implementation of online Deliberative Polling using the AI-assisted Stanford Online Deliberation Platform, a groundbreaking automated platform designed to scale simultaneous and synchronous deliberation efforts to millions, has put deliberative societies within reach. By examining two compelling case studies—Foreign Policy by Canadians and the Metaverse Community Forum—this paper highlights how technology can empower diverse voices, facilitate constructive dialogue, and cultivate a more vibrant democratic process. This paper demonstrates that leveraging technology in deliberation not only enhances public discourse but also paves the way for a more inclusive and participatory democracy.
 

About "Deliberative Approaches to Inclusive Governance: An Essay Series Part of the Democratic Legitimacy for AI Initiative"


Democracy has undergone profound changes over the past decade, shaped by rapid technological, social, and political transformations. Across the globe, citizens are demanding more meaningful and sustained engagement in governance—especially around emerging technologies like artificial intelligence (AI), which increasingly shape the contours of public life.

From world-leading experts in deliberative democracy, civic technology, and AI governance, we introduce a seven-part essay series exploring how deliberative democratic processes like citizens’ assemblies and civic tech can strengthen AI governance. The essays follow from a workshop on “Democratic Legitimacy for AI: Deliberative Approaches to Inclusive Governance” held in Vancouver in March 2025, in partnership with Simon Fraser University’s Morris J. Wosk Centre for Dialogue. The series and workshop were generously supported by funding from the Canadian Institute for Advanced Research (CIFAR), Mila, and Simon Fraser University’s Morris J. Wosk Centre for Dialogue.

Publication Type
Book Chapters
Subtitle

Part of "Deliberative Approaches to Inclusive Governance: An Essay Series Part of the Democratic Legitimacy for AI Initiative," produced by the Centre for Media, Technology and Democracy.

Authors
Alice Siu
Book Publisher
Centre for Media, Technology and Democracy

In October 2024, Meta, in collaboration with the Stanford Deliberative Democracy Lab, implemented the third Meta Community Forum. This Community Forum expanded on the October 2023 deliberations regarding Generative AI. For this Community Forum, the participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’ Since the last Community Forum, the development of Generative AI has moved beyond AI chatbots and users have begun to explore the use of AI agents — a type of AI that can respond to written or verbal prompts by performing actions for you, or on your behalf. And beyond text-generating AI, users have begun to explore multimodal AI, where tools are able to generate images, videos, and audio as well. The growing landscape of Generative AI raises more questions about users’ preferences when it comes to interacting with AI agents. This Community Forum focused deliberations on how interactive and proactive AI agents should be when engaging with users. Participants considered a variety of tradeoffs regarding consent, transparency, and human-like behaviors of AI agents. These deliberations shed light on what users are thinking now amidst the changing technology landscape in Generative AI.

Nearly 900 participants from five countries (India, Nigeria, Saudi Arabia, South Africa, and Turkey) took part in this deliberative event. The samples for each of these countries were recruited independently, so this Community Forum should be seen as five independent deliberations. In addition, 1,033 persons participated in the control group, which did not deliberate in any discussions; the control group only completed the two surveys, pre and post. The main purpose of the control group is to demonstrate that any changes that occur after deliberation are a result of the deliberative event.
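The role of the control group can be made concrete with a small calculation. The sketch below is purely illustrative (the function name and all numbers are invented, not taken from the Forum's data): the net effect of deliberation is the deliberators' pre-to-post opinion shift minus the shift observed in the control group, which completed the same two surveys without deliberating.

```python
# Hypothetical sketch of the pre/post control-group comparison described
# above (a difference-in-differences estimate). All names and numbers are
# invented for illustration, not data from the Community Forum.
from statistics import mean

def deliberation_effect(deliberators_pre, deliberators_post,
                        control_pre, control_post):
    """Opinion shift among deliberators, net of the shift in the control
    group, who only completed the two surveys without deliberating."""
    deliberator_shift = mean(deliberators_post) - mean(deliberators_pre)
    control_shift = mean(control_post) - mean(control_pre)
    return deliberator_shift - control_shift

# Example: support for a policy option on a 0-10 scale.
effect = deliberation_effect(
    deliberators_pre=[4, 5, 6, 5],   # deliberators, before the event
    deliberators_post=[7, 6, 8, 7],  # deliberators, after the event
    control_pre=[5, 5, 6, 4],        # control group, first survey
    control_post=[5, 6, 6, 5],       # control group, second survey
)
print(effect)  # 1.5: the shift beyond what the control group shows
```

A positive net value suggests the opinion change is attributable to deliberating rather than to merely retaking the survey; the Lab's actual analyses would, of course, apply significance testing across many attitude items.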

Publication Type
Reports
Subtitle

April 2025

Authors
James S. Fishkin
Alice Siu

In October 2024, Meta, in collaboration with the Stanford Deliberative Democracy Lab, implemented the third Meta Community Forum. This Community Forum expanded on the October 2023 deliberations regarding Generative AI. For this Community Forum, the participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’

At a high level, Meta used this Forum to:

  • Expand public input into AI development beyond the Global North and into the Global South. This latest Forum involved roughly 1,000 people from India, Turkey, Nigeria, Saudi Arabia, and South Africa.
  • Push the boundaries on the topics that the public will have input into. We moved from the foundational principles people wanted to see in GenAI towards addressing specific value and risk tradeoffs associated with issues like personalization and human-like AI.


The Forum resulted in several key findings on the principles that should underpin AI agents, including:
 

  • Participants supported AI agents remembering their prior conversations to personalize their experience, especially if transparency and user controls are in place.
  • Participants were more supportive of culturally/regionally-tailored AI agents compared to standardized AI agents.
  • Participants were in favor of human-like AI agents that can respond to emotional cues.
  • Across topics, participants consistently favored options for AI to include transparency and user control features.

Maturing our Community Forum Program


Beyond the findings of any one Forum, the Deliberative Democracy Lab and Meta have heard important feedback from stakeholders and have implemented several programmatic changes to mature our program. These include:
 

  • More disclosure around the impact of results: Meta will share more information about how results are being actioned within the company on its Transparency Center page, which will be updated throughout the year.
  • Following up with participants: We heard the importance of going back to participants to explain what we learned from their input and what we are doing with it. The Deliberative Democracy Lab will be hosting calls with participants from each of our past Community Forums, dating back to 2022, to update them on the findings from the Forum and Meta’s response.
  • Supporting AI deliberation: A team of Meta AI experts has begun partnering with the Deliberative Democracy Lab to conduct research on how AI might further scale deliberation and optimize the Community Forum process. This includes, but is not limited to, using AI to aggregate themes that are emerging in discussions in real time and support engagement between participants and experts in plenary sessions.
  • Supporting external research: Meta is supporting a consortium of independent researchers from around the world who will evaluate the data from its Forums and publish research papers on the deliberations and results. This will culminate in an academic conference later this year.


After almost two years of hard work and study, the 2025 cohort of the Ford Dorsey Master's in International Policy program (MIP) is preparing for the final stretch of their learning journey at the Freeman Spogli Institute for International Studies (FSI). 

Each year, second-year MIP students participate in the Policy Change Studio, which takes their learning out of the realm of theory and into hands-on, on-the-ground application. Recognizing that the world outside the classroom is much more complex, bureaucratic, and constrained than textbook case studies, the Studio is a two-quarter course designed to provide students with direct experience researching, developing, and implementing policy goals.   

Our students are setting out for Belgium, Mongolia, Ghana, Australia, and India to work directly with research groups, NGOs, and policy institutions on pressing challenges affecting local communities and global alliances alike. Keep reading to learn more about each project.

 

Securing Trust: A Framework for Effective Cyber Threat Information Sharing in NATO

Over the past few months, through problem identification and early solutions development, our research has identified three key challenges in NATO’s cyber threat information sharing landscape. First, despite the presence of existing protocols such as NCIRC and MISP, significant communication gaps persist between stakeholders. This is exacerbated by the lack of clear, standardized specifications from NATO, leading to inconsistent implementation and operational friction. Second, at its core, this is an intelligence-sharing challenge: member states operate under different national frameworks, threat perceptions, and priorities, which influence what information they are willing (or unwilling) to share. Third, the fragmentation of sharing systems is not merely a technical hurdle but often a deliberate choice made for operational and security reasons, reflecting concerns over sovereignty, data protection, and strategic advantage.

Emerson Johnston, Tiffany Saade, Chan Leem, Markos Magana (not pictured)

While technological advancements can enhance interoperability, they alone will not drive adoption. Our research highlights that the underlying issue is one of trust and incentives—NATO must establish mechanisms that encourage collaboration beyond just technological solutions. Without a strong foundation of mutual trust, transparency, and shared benefits, even the most advanced systems will face resistance. Creating sustainable incentives for participation—whether through policy alignment, risk reduction assurances, or value-added intelligence sharing—will be essential in fostering a more effective and unified cyber defense posture within NATO.

 

Cultivating Community-Led Policies: GerHub and Mongolia’s Billion Trees Initiative

Our team is collaborating with GerHub in Mongolia to establish an influential policy think tank aimed at fostering community-informed and data-driven policymaking. Leveraging GerHub’s unique and extensive connections within the ger communities of Mongolia, we aim to empower policies that authentically reflect local needs and insights.
[Left to right]: Julia Ilhardt, Serena Rivera-Korver, Johanna von der Leyen, and Michael Alisky

A key component of our project involves conducting in-depth research and stakeholder interviews focused on Mongolia's "Billion Trees Initiative," where we will be seeking actionable insights to scale up the initiative effectively and sustainably.

 

Countering Coordinated Political Disinformation Campaigns in Ghana

Our team is working with the Africa Center for Strategic Studies to examine disinformation issues in Ghana. We are focusing on how coordinated influence operations are being used to create and spread political disinformation. We aim to understand how PR companies and influencers work with politicians to coordinate these influence operations and shape public opinion. Our policy recommendations will address how governments and civil societies can work together to tackle this issue.

[Left to right]: Euysun Hwang, Sakeena Razick, Leticia Lie, Julie Tamura; Anjali Kumar (not pictured)

 

The recent ratification of the Technology Safeguard Agreement (TSA) by the United States and Australia lays the foundation for smoother exchange of commercial space technologies and permits U.S. commercial space launch companies to conduct reentry in Australia. With the sponsorship of the Australian Space Agency and the U.S. Defense Innovation Unit, the goal of this project is to leverage Australia's strategic geographic position and investment in reentry infrastructure to mitigate the hurdles that U.S. commercial startups experience accessing military ranges for reentry. Our project aims to create a robust foundation for the development of orbital return capabilities in Australia, fostering greater commercial and national security collaboration between the U.S. and Australia.

[Left to right]: Samara Nassor, Gustavs Zilgalvis, Helen Phillips; Joe Wishart (not pictured)

 

Overcoming Computational Resource Gaps for Open Source AI in India

Our team is working with Digital Futures Lab (DFL), a non-profit research network in India that examines the intersection of technology and society in the Global South. Our project focuses on identifying the key components of open source AI in India and how limited access to computational resources acts as a barrier to adoption. India has a thriving tech sector, and openly available AI models have the potential to democratize access to this trailblazing technology. At the same time, AI is expensive to build and deploy, and access to the specialized computational resources needed to do so is limited even for top Indian companies. Our team aims to develop solutions in partnership with Digital Futures Lab that can help bolster the AI ecosystem across India.

[Left to right]: Sandeep Abraham, Sabina Nong, Kevin Klyman, and Emily Capstick

 

Combating Human Trafficking in the Informal Mining Industry in Ghana

Our team is working with the Ghana Center for Democratic Development to identify ways to disrupt human trafficking into forced labor in Ghana’s informal mining sector. So far, our research and conversations with stakeholders have highlighted the complex systems — ranging from poverty to illicit networks — that contribute to this issue. During our fieldwork, we will explore community- and government-driven programs aimed at preventing and combating trafficking. Our final report will analyze existing policies, pinpoint gaps, and propose community-led interventions to address them.

[Left to right]: Alex Bue, Rachel Desch, Marco Baeza; Hye Jin Kim (not pictured)

 

The Ford Dorsey Master's in International Policy

Want to learn more? MIP holds admission events throughout the year, including graduate fairs and webinars, where you can meet our staff and ask questions about the program.

Students in the Ford Dorsey Master's in International Policy program are practicing their policymaking skills through projects on cybersecurity within NATO, countering political disinformation in Ghana, commercial space technology in Australia, and more.

Flyer for the conference "Taiwan Forward." Image: aerial view of Taipei.

We have reached capacity for this event and registration has closed.


Organized by the Taiwan Program at Stanford University’s Walter H. Shorenstein Asia-Pacific Research Center (APARC)
Co-sponsored by National Taiwan University's Office of International Affairs

As Taiwan looks to develop comprehensive strategies to promote national interests, it faces challenges shared by other advanced economies. How can Taiwan leverage AI innovation and its semiconductor prowess to drive resilience and continued growth while promoting entrepreneurship and forging advantages in emerging industries? What regulatory and policy measures are needed to scale Taiwan’s role as a global leader in biomedical and healthcare advancements while ensuring patient trust and safety? How can it address the challenges posed by rapid family change and population aging? And how do its historical and linguistic legacies shape present narratives and identities, within Taiwan and among the Taiwanese diaspora?

Join us for a conference that explores these questions and more, featuring panel discussions with scholars from Stanford University, National Taiwan University, and other universities in Taiwan, Japan, Korea, and Singapore, alongside Taiwanese industry leaders. We will examine Taiwan’s strategies for navigating modernization in a shifting global landscape — bridging technology, industry, culture, and society through interdisciplinary and comparative perspectives.

 

8:45 - 9:10 a.m.
Opening Session

Welcome Remarks

Shih-Torng Ding
Executive Vice President, National Taiwan University

Gi-Wook Shin
Director, Shorenstein APARC and the Taiwan Program, Stanford University

Congratulatory Remarks

Chia-Lung Lin
Minister of Foreign Affairs, Taiwan

Raymond Greene
Director, American Institute in Taiwan 


9:10-10:40 a.m.
Panel 1 — Advancing Health and Healthcare: Technology and Policy Perspectives
Panelists 

Kuan-Ming Chen
Assistant Professor, Department of Economics, National Taiwan University

Lynia Huang
Founder and CEO, Bamboo Technology Ltd.

Ming-Jen Lin
Distinguished Professor, Department of Economics, National Taiwan University

Siyan Yi
Associate Professor, School of Public Health, National University of Singapore

Moderator
Karen Eggleston
Director, Asia Health Policy Program, Shorenstein APARC, Stanford University


10:40-10:50 a.m.
Coffee and Tea Break


10:50 a.m.-12:30 p.m.
Panel 2 — Innovation, Entrepreneurship, and Technology Leadership

Panelists 

Steve Chen
Co-founder, YouTube and Taiwan Gold Card Holder #1

Matthew Liu
Co-founder, Origin Protocol

Huey-Jen Jenny Su
Professor, Department of Environmental and Occupational Health and Former President, National Cheng Kung University

Yaoting Wang
Founding Partner, Darwin Ventures, Taiwan

Moderator
H.-S. Philip Wong
Willard R. and Inez Kerr Bell Professor in the School of Engineering, Stanford University


12:30-1 p.m.

Perspectives from Stanford and NTU Students

Tiffany Chang
BS Student in Engineering Management & Human-Centered Design, Stanford University

Liang-Yu Ko
MA Student in Sociology, National Taiwan University


1-2 p.m. 
Lunch Break


2-3:30 p.m.  
Panel 3 — Interwoven Identities: Exploring Chinese Languages, Taiwanese-American Narratives, and Japanese Colonial Legacies in Taiwan

Panelists 

Carissa Cheng
BA Student in International Relations, Stanford University

Yi-Ting Chung
PhD Candidate in History, Stanford University

Jeffrey Weng
Assistant Professor, Department of Sociology, National Taiwan University

Moderator
Ruo-Fan Liu
Taiwan Program Postdoctoral Fellow, Shorenstein APARC, Stanford University


3:30-3:45 p.m. 
Coffee and Tea Break


3:45-5:15 p.m.    
Panel 4 — The Demographic Transformation: Lessons from Taiwan and Comparative Cases

Panelists

Yen-Hsin Alice Cheng
Professor, Institute of Sociology, Academia Sinica

Youngtae Cho
Professor of Demography and Director, Population Policy Research Center, Seoul National University

Setsuya Fukuda
Senior Researcher, National Institute of Population and Social Security Research, Japan

Moderator
Paul Y. Chang
Tong Yang, Korea Foundation, and Korea Stanford Alumni Association Senior Fellow, Shorenstein APARC, Stanford University


5:15-5:30 p.m.    
Closing Remarks

Gi-Wook Shin
Director, Shorenstein APARC and the Taiwan Program, Stanford University

This conference will be held in Taipei, Taiwan, on Sunday, March 23, 2025, from 8:45 a.m. to 5:30 p.m., Taipei time.

International Conference Hall, Tsai Lecture Hall
College of Law
National Taiwan University

No.1, Sec. 4, Roosevelt Road
Taipei City, 10617
Taiwan

Conferences

The Chinese government is revolutionizing digital surveillance at home and exporting these technologies abroad. Do these technology transfers help recipient governments expand digital surveillance, impose internet shutdowns, filter the internet, and target repression for online content? We focus on Huawei, the world’s largest telecommunications provider, which is partly state-owned and increasingly regarded as an instrument of China's foreign policy. Using a global sample and an identification strategy based on generalized synthetic controls, we show that the effect of Huawei transfers depends on preexisting political institutions in recipient countries. In the world’s autocracies, Huawei technology facilitates digital repression. We find no effect in the world’s democracies, which are more likely to have laws that regulate digital privacy, institutions that punish government violations, and vibrant civil societies that step in when institutions come under strain. Most broadly, this article advances a large literature about the geopolitical implications of China’s rise.

Publication Type
Journal Articles
Journal Publisher
Perspectives on Politics
Authors
Erin Baggott Carter and Brett Carter
Published online 2025:1-20