Information Technology
News Type
News
Date
Paragraphs

Introduction


Generative AI has become an enormously attractive and widespread tool for people across the world. Alongside this rapid growth, AI tools present a host of ethical challenges relating to consent, security, and privacy, among others. Because Generative AI has been spearheaded primarily by large technology companies, these ethical challenges, especially as viewed from the vantage point of ordinary people, risk being overlooked for the sake of market competition and profit. What is needed, therefore, is a deeper understanding of and attention to how ordinary people perceive AI, including its costs and benefits.

The Meta Community Forum Results Analysis, authored by Samuel Chang, James S. Fishkin, Ricky Hernandez Marquez, Ayushi Kadakia, Alice Siu, and Robert Taylor, aims to address some of these challenges. A partnership between CDDRL’s Deliberative Democracy Lab and Meta, the forum enables participants to learn about and collectively reflect on AI. The impulse behind deliberative democracy is straightforward: people affected by a policy or program should have a say in its contents and understand the reasons for its adoption. As Generative AI and the companies that produce it become increasingly powerful, democratic input becomes ever more essential to ensuring their accountability.

Motivation & Takeaways


In October 2024, the third Meta Community Forum took place. Its importance derives from the advancements in Generative AI since October 2023, when the last round of deliberations was held. One such advancement is the move beyond AI chatbots to AI agents, which can solve more complex tasks and adapt in real-time to improve responses. A second advancement is that AI has become multimodal, moving beyond the generation of text and into images, video, and audio. These advancements raise new questions and challenges. As such, the third forum provided participants with the opportunity to deliberate on a range of policy proposals, organized around two key themes: how AI agents should interact with users and how they should provide proactive and personalized experiences for them.

To summarize some of the forum’s core findings: most participants value transparency and consent in their interactions with AI agents, as well as the security and privacy of their data. Conversely, they are less comfortable with agents autonomously completing tasks when this is not transparent to them. Participants have a positive outlook on AI agents but want control over their interactions. Regarding the deliberations themselves, participants rated the forum highly and felt that it exposed them to alternative perspectives. Deliberators also wanted to learn more about AI for themselves, as evidenced by their increased use of these tools after the deliberations. Future reports will explore the reasoning and arguments they used while deliberating.
 


 

Image: Map of where participants hailed from.


The participants of this Community Forum were representative samples of the general population from five countries: Turkey, Saudi Arabia, India, Nigeria, and South Africa. Participants from each country deliberated separately in English, Hindi, Turkish, or Arabic.



Methodology & Data


The deliberations involved around 900 participants from five countries: India, Nigeria, Saudi Arabia, South Africa, and Turkey. Participants varied in age, gender, education, and urbanicity. Because the deliberative groups were recruited independently, the forum can be seen as five independent deliberations. Deliberations alternated between small-group discussions and ‘plenary sessions,’ where experts answered questions drawn from the small groups. Around 1,000 participants formed a control group, completing pre- and post-surveys without deliberating. The participant sample was representative with respect to gender, and the treatment and control groups were balanced on demographics as well as on their attitudes toward AI. Before deliberating on the proposals, participants were given background materials and a list of costs and benefits to consider.

In terms of the survey data, large majorities of participants had previously used AI, and there was a statistically significant increase in these proportions after the forum. In Turkey, for example, usage rates rose from nearly 70% to 84%. In several countries, participants’ sense of AI’s positive benefits increased substantially after deliberating, as did their interest in AI. The deliberations changed participants’ opinions about a host of claims: for example, “people will feel less lonely with AI” and “more proactive [agents] are intrusive” lost approval, whereas “AI agents’ capability to increase efficiency…is saving many companies a lot of time and resources” and “AI agents are helping people become more creative” gained approval. After deliberating, participants demonstrated an improved understanding of some factual aspects of AI, although the more technical concepts remained challenging. One example is AI hallucinations, that is, the generation of false or nonsensical outputs, usually caused by flawed training data.
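A pre/post shift of this kind (for instance, Turkey's rise from roughly 70% to 84%) is typically assessed with a two-proportion z-test. The sketch below is illustrative only: the per-country sample of 180 respondents is a hypothetical figure, since the report gives percentages rather than raw counts.

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical arm of 180 respondents: 70% pre-forum users vs. 84% post-forum
z, p = two_proportion_z(126, 180, 151, 180)
print(round(z, 2), round(p, 4))
```

With these hypothetical counts the change clears conventional significance thresholds; whether the actual change does depends on the true sample sizes behind the reported percentages.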
 


 

Image: Chart of responses to “How should AI agents remember users’ past behaviors or preferences?” (percentage in favor).


Proposals


Participants deliberated on nineteen policy proposals. To summarize these briefly: In terms of whether and how AI remembers users’ past behaviors and preferences, participants preferred proposals that allowed users to make active choices, as opposed to this being a default setting or only being asked once. They also preferred being reminded about the ability of AI agents to personalize their experience, as well as agents being transparent with users about the tasks they complete. Participants preferred that users be educated on AI before using it, as well as being informed when AI is picking up on certain emotional cues and responding in “human-like” ways. They also preferred proposals whereby AI would ask clarifying questions before generating output. Finally, when it comes to agents helping users with real-life relationships, this was seen as more permissible when the other person was informed. Across the proposals, gender was neither a significant nor consistent determinant of how they were rated. Ultimately, the Meta Community Forum offers a model for how informed, public communication can shape AI and the ethical challenges it raises.

*Research-in-Brief prepared by Adam Fefer.

 
Subtitle

CDDRL Research-in-Brief [4-minute read]


There is a significant gap between the technologies being developed, especially AI technologies, and the public's understanding of them. We must ask: what if the public were not just passive recipients of these technologies, but active participants in guiding their evolution?

A group of technology companies convened by Stanford University’s Deliberative Democracy Lab will gather public feedback about complex questions the AI industry is considering while developing AI agents. This convening includes Cohere, Meta, Oracle, and PayPal, advised by the Collective Intelligence Project.

This Industry-Wide Forum brings together everyday people to weigh in on tech policy and product development decisions where there are difficult tradeoffs with no simple answers. Technology development is moving so quickly that there is no better time than right now to engage the public and understand what an informed public would like AI technologies to do for them. The Forum is designed based on Stanford's method of Deliberative Polling, a governance innovation that gives the public’s voice a greater say in decision-making. The Forum will take place in Fall 2025. Its findings will be made public, and Stanford’s Deliberative Democracy Lab will hold webinars for the public to learn and inquire about them.

"We're proud to be a founding participant in this initiative alongside Stanford and other AI leaders," said Saurabh Baji, CTO of Cohere. "This collaborative approach is central to enhancing trust in agentic AI and paving the way for strengthened cross-industry standards for this technology. We're looking forward to working together to shape the future of how agents serve enterprises and people."

In the near term, AI agents will be expected to conduct a myriad of transactions on behalf of users, opening up considerable opportunities to deliver value as well as posing significant risks. This Forum will improve product-market fit by giving companies foresight into what users want from AI agents; it will help build trust and legitimacy with users; and it will strengthen cross-industry relations in support of industry standards development over time.

"We support The Forum for its deliberative and collaborative approach to shaping public discourse around AI agents," said Prakhar Mehrotra, SVP of AI at PayPal. "Responsibility and trust are core business principles for PayPal, and through collaborative efforts like these, we seek to encourage valuable perspectives that can help shape the future of agentic commerce."

The Forum will be conducted on the AI-assisted Stanford Online Deliberation Platform, a collaboration between Stanford’s Deliberative Democracy Lab and Crowdsourced Democracy Team, where a cross-section of the public will deliberate in small groups and share their perspectives, their lived experiences, and their expectations for AI products. This deliberation platform has hosted Meta’s Community Forums over the past few years. The Forum will also incorporate insights from CIP's Global Dialogues, conducted on the Remesh platform.

“Community Forums provide us with people’s considered feedback, which helps inform how we innovate,” said Rob Sherman, Meta’s Vice President, AI Policy & Deputy Chief Privacy Officer. “We look forward to the insights from this cross-industry partnership, which will provide a deeper understanding of people’s views on cutting-edge technology.”

This methodology is rooted in deliberation, which provides representative samples of the public with baseline education on a topic, including options with associated tradeoffs, and asks them to reflect on that education as well as their lived experience. Deliberative methods have been found to offer more considered feedback to decision-makers because people have to weigh the complexity of an issue rather than offering a knee-jerk reaction.

"This industry-wide deliberative forum represents a crucial step in democratizing the discourse around AI agents, ensuring that the public's voice is heard in a representative and thoughtful way as we collectively shape the future of this transformative technology," said James Fishkin, Director of Stanford's Deliberative Democracy Lab.

This Industry-Wide Forum represents a pivotal step in responsible AI development, bringing together technology companies and the public to address complex challenges in AI agent creation. By leveraging Stanford's Deliberative Polling methodology and making findings publicly available, the initiative promises to shape the future of AI with enhanced transparency, trust, and user-centric focus. Find out more about Stanford’s Deliberative Democracy Lab at deliberation.stanford.edu.

Media Contact: Alice Siu, Stanford Deliberative Democracy Lab

Authors
Michael Breger

How do aging populations reshape health and innovation policies in Asian economies? What role can the private sector play in public health service delivery, and how do individual preferences affect the development of emerging technologies? Mai Nguyen and Jinseok Kim, the 2024-25 Asia health policy postdoctoral fellows at APARC, focus on these questions as part of their research into health care service adaptation and behavioral economics.

At a recent joint seminar, “Health, Aging, Innovation, and the Private Sector: Evidence from Vietnam and Korea,” they offered a comparative look at how Vietnam and South Korea navigate aging populations, rising healthcare demands, and rapid technological change. While Nguyen focuses on health system design in Vietnam and Kim explores innovation diffusion in Korea, they both use discrete choice modeling to understand how individuals make decisions within systems influenced by age, infrastructure, and policy.




Nguyen and Kim’s work is supported by APARC’s Asia Health Policy Program (AHPP), which offers a postdoctoral fellowship each year to an early-career scholar conducting original research on health policy in the Asia-Pacific, particularly in low- and middle-income economies across the region. The fellowship demonstrates the program’s commitment to fostering the next generation of Asia-focused health policy researchers.

Vietnam’s Mixed Health System and the Role of Patient Choice


Mai Nguyen’s research centers around the role of private healthcare providers in Vietnam, especially for patients managing chronic diseases such as diabetes. She studies how patients choose between public and private healthcare providers, and what attributes of care they value most.

To analyze these preferences, she uses a method known as the Discrete Choice Experiment, which allows her to quantify the relative importance of various service attributes — such as appointment flexibility, doctor choice, quality of care, drug diversity, and cost coverage — in influencing patients’ decisions.
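The logic of a Discrete Choice Experiment is usually formalized as a conditional logit model: each attribute gets an estimated part-worth utility, and choice probabilities follow from the utilities of the competing options. The sketch below is purely illustrative; the attribute weights and provider profiles are invented for exposition and are not estimates from Nguyen's study.

```python
from math import exp

# Hypothetical part-worth utilities (the coefficients a DCE would estimate)
beta = {"flexible_appt": 0.8, "choose_doctor": 0.5, "cost_per_visit": -0.03}

def utility(profile):
    """Linear utility: attribute levels weighted by their part-worths."""
    return sum(beta[attr] * level for attr, level in profile.items())

# Two invented provider profiles a respondent might choose between
public_provider = {"flexible_appt": 0, "choose_doctor": 0, "cost_per_visit": 5}
private_provider = {"flexible_appt": 1, "choose_doctor": 1, "cost_per_visit": 30}

# Conditional-logit probability of choosing the private option
u_pub, u_priv = utility(public_provider), utility(private_provider)
p_private = exp(u_priv) / (exp(u_pub) + exp(u_priv))
print(round(p_private, 2))
```

In this made-up example the flexibility and doctor-choice premiums outweigh the higher cost, so the private option is chosen more often than not; estimating the weights from observed choices is precisely what the DCE does.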

Despite potential downsides, such as increased costs, equity concerns, and profit-driven service delivery, my study finds that private healthcare helps relieve pressure on the public system and meets diverse patient needs.
Mai Nguyen

Nguyen’s interest in this topic began while she worked at Vietnam’s Ministry of Health. “That earlier work highlighted the growing contribution of the private sector in filling service delivery gaps, particularly in urban areas and for non-communicable diseases such as diabetes,” she says.

Her findings suggest that Vietnam’s private sector has become a necessary complement to public healthcare. “Despite potential downsides, such as increased costs, equity concerns, and profit-driven service delivery, my study finds that private healthcare helps relieve pressure on the public system and meets diverse patient needs.”

At APARC, Nguyen has sharpened the focus of her research under the mentorship of AHPP Director Dr. Karen Eggleston, a leading expert on public and private roles in Asian health systems. Nguyen also values her collaboration with Jinseok Kim. “Dr. Kim’s expertise provides valuable insights into how Korea is addressing the challenges of a rapidly aging population through innovative policy and service delivery models,” she notes.

Her time at Stanford has also broadened Nguyen’s horizons beyond traditional health economics. “I have developed a strong interest in the application of artificial intelligence to enhance the delivery of medical services,” she says. Looking forward, she plans to expand her research to Asian American populations in the United States, exploring how AI and digital health can improve diabetes care while also addressing barriers related to equity and access.

Innovation Adoption and the Aging Consumer in South Korea


Jinseok Kim investigates how aging affects new technology adoption and consumer behavior in South Korea, a country facing one of the fastest demographic shifts in the world.

“My current research involves looking at population aging and innovation diffusion, specifically in the context of the rapid aging trend in Korea,” Kim says. He studies how age influences consumer preferences in choosing new technologies such as electric vehicles, telemedicine, and generative AI platforms like ChatGPT.

By working out the relationship between consumer choice and population aging, I forecast the effect of the population aging trend on the diffusion of innovative products and provide the potential policy and marketing implications for government policy and corporate management.
Jinseok Kim

Understanding these preferences, Kim argues, is critical for both policy and market strategy. “By working out the relationship between consumer choice and population aging, I forecast the effect of the population aging trend on the diffusion of innovative products and provide the potential policy and marketing implications for government policy and corporate management.”

The challenge, he says, lies in making sense of a wide range of behaviors across age groups and product types. “The biggest challenge I had in my studies was finding the overarching trend in the relationship between consumer choice for particular innovative products and population aging and then translating this finding into meaningful implications for society and the economy.”

Kim credits his time at APARC, especially participating in the Stanford Next Asia Policy Lab (SNAPL) meetings, with broadening his perspective. “Working as a member of SNAPL gave me insights and perspectives I didn’t have before,” he says.

SNAPL, directed by Professor Gi-Wook Shin, is an interdisciplinary research initiative housed within APARC addressing pressing social, cultural, economic, and political challenges in Asia through comparative, policy-relevant studies. The lab cultivates the next generation of researchers and policy leaders by offering mentorships and fellowship opportunities for students and emerging scholars.

Kim sees APARC’s model as effectively bridging academia and policy. “There are so many opportunities to interact with other scholars, policymakers, and practitioners,” Kim says. “Scholars here not only research and write, but they also get to share their voice and research findings in real-world policy.”

His advice to early-career researchers is straightforward. “Be more down-to-earth with your studies and thinking,” Kim says. “Sometimes scholars tend to get caught up in their way of thinking and perspective, but it may not be practical in real life. That is why I think it is important to just get outside and observe real consumer choice and behavior.”

Kim plans to continue researching questions related to innovation and demographic change to help governments and businesses adapt to aging populations and shifting consumer needs.

Ground-Level Data, Big-Picture Impact


Mai Nguyen and Jinseok Kim approach shared societal challenges through distinct yet complementary lenses. Nguyen’s research reveals how patient preferences can guide more effective public-private collaboration in healthcare, ultimately shaping systems that are more responsive to real-world needs. Meanwhile, Kim examines how patterns of technology adoption — especially among older adults — can influence the trajectory of innovation in aging societies.

Both scholars emphasize the value of ground-level data in addressing large-scale issues. By centering real behaviors and preferences, their work helps inform smarter, more adaptive policy, whether in designing patient-centered care or planning for technology's role in future societies. At APARC, their research bridges theory and practice, offering fresh insight into how Asian countries can navigate the twin forces of demographic change and rapid innovation.

Subtitle

As Asian economies grapple with aging populations, rising healthcare demands, and rapid technological change, APARC’s 2024-25 Asia Health Policy Program Postdoctoral Fellows Mai Nguyen and Jinseok Kim study large-scale health care structural and policy challenges from the lens of individual decision-making.


In October 2024, Meta, in collaboration with the Stanford Deliberative Democracy Lab, implemented the third Meta Community Forum. This Community Forum expanded on the October 2023 deliberations regarding Generative AI. For this Community Forum, participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’

Since the last Community Forum, the development of Generative AI has moved beyond AI chatbots, and users have begun to explore the use of AI agents, a type of AI that can respond to written or verbal prompts by performing actions for users or on their behalf. Beyond text-generating AI, users have also begun to explore multimodal AI, where tools are able to generate images, videos, and audio as well.

The growing landscape of Generative AI raises new questions about users’ preferences when it comes to interacting with AI agents. This Community Forum focused its deliberations on how interactive and proactive AI agents should be when engaging with users. Participants considered a variety of tradeoffs regarding consent, transparency, and the human-like behaviors of AI agents. These deliberations shed light on what users are thinking amidst the changing technology landscape in Generative AI.

Nearly 900 participants from five countries (India, Nigeria, Saudi Arabia, South Africa, and Turkey) took part in this deliberative event. The sample for each country was recruited independently, so this Community Forum should be seen as five independent deliberations. In addition, 1,033 people participated in the control group; they did not deliberate and only completed the two surveys, pre and post. The main purpose of the control group is to demonstrate that any changes observed after deliberation are attributable to the deliberative event itself.
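With balanced treatment and control groups, opinion change can be read as a difference-in-differences: the deliberators' pre-to-post shift minus the control group's shift. The scores below are hypothetical, invented purely to illustrate the calculation.

```python
# Hypothetical mean agreement (0-10 scale) with one survey item
treat_pre, treat_post = 6.1, 7.3   # deliberators: surveys plus deliberation
ctrl_pre, ctrl_post = 6.0, 6.2     # control group: surveys only

# Change among deliberators beyond what the control group shows
did = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
print(round(did, 2))  # → 1.0
```

Any nonzero difference that survives this subtraction (and a significance test) can be credited to the deliberation itself rather than to retaking the survey or to events between the two waves.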

Publication Type
Reports
Publication Date
Subtitle

April 2025

Authors
James S. Fishkin
Alice Siu

In October 2024, Meta, in collaboration with the Stanford Deliberative Democracy Lab, implemented the third Meta Community Forum. This Community Forum expanded on the October 2023 deliberations regarding Generative AI. For this Community Forum, the participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’

At a high level, Meta used this Forum to:

  • Expand public input into AI development beyond the Global North, and into the Global South. This latest Forum involved roughly 1,000 people from India, Turkey, Nigeria, Saudi Arabia, and South Africa.
  • Push the boundaries on the topics that the public will have input into. We moved from the foundational principles people wanted to see in GenAI towards addressing specific value and risk tradeoffs associated with issues like personalization and human-like AI.


The Forum resulted in several key findings on the principles that should underpin AI agents, including:
 

  • Participants supported AI agents remembering their prior conversations to personalize their experience, especially if transparency and user controls are in place.
  • Participants were more supportive of culturally/regionally-tailored AI agents compared to standardized AI agents.
  • Participants were in favor of human-like AI agents that can respond to emotional cues.
  • Across topics, participants consistently favored options for AI to include transparency and user control features.

Maturing our Community Forum Program


Beyond the findings of any one Forum, the Deliberative Democracy Lab and Meta have heard important feedback from stakeholders and have implemented several programmatic changes to mature our program. These include:
 

  • More disclosure around the impact of results: Meta will share more information about how results are being actioned within the company on its Transparency Center page, which will be updated throughout the year.
  • Following up with participants: We heard the importance of going back to participants to explain what we learned from their input and what we are doing with it. The Deliberative Democracy Lab will be hosting calls with participants from each of our past Community Forums, dating back to 2022, to update them on the findings from the Forum and Meta’s response.
  • Supporting AI deliberation: A team of Meta AI experts has begun partnering with the Deliberative Democracy Lab to conduct research on how AI might further scale deliberation and optimize the Community Forum process. This includes, but is not limited to, using AI to aggregate themes that are emerging in discussions in real time and support engagement between participants and experts in plenary sessions.
  • Supporting external research: Meta is supporting a consortium of independent researchers from around the world who will evaluate the data from its Forums and publish research papers on the deliberations and results. This will culminate in an academic conference later this year.


Economic growth is uneven within many developing countries as some sectors and industries grow faster than others. India is no exception, where anemic performance in manufacturing has been offset by robust growth in services. Standard scholarly explanations fail to explain this kind of variation. For instance, the factor endowments that are required for services—such as an educated workforce or access to electricity and other infrastructure—should also complement manufacturing. Reciprocally, if a state’s institutions hold back manufacturing, they should also impair growth in services. Why have services in India outperformed manufacturing? We examine India’s performance in the computing industry, where a dynamic software services sector has emerged even as its computer hardware manufacturing sector has flagged. We argue that the uneven outcomes between the software and hardware sectors are due to the variable needs of the respective sectors and the state’s capacity to coordinate agencies. The policies required to promote the software sector needed minimal coordination between state agencies, whereas the computer hardware sector required a more centralized state apparatus for successful state-business engagement. Domestic and transnational political networks were critical for the success of the software sector, but similar networks could not deliver the same benefits to the computer hardware industry, which required more coordination-intensive policies than software. A state’s ability to coordinate industrial policy is thus a critical determinant for effective sectoral political networks, shaping sectoral variations within an economy.

Publication Type
Journal Articles
Publication Date
Journal Publisher
Studies in Comparative International Development

The Chinese government is revolutionizing digital surveillance at home and exporting these technologies abroad. Do these technology transfers help recipient governments expand digital surveillance, impose internet shutdowns, filter the internet, and target repression for online content? We focus on Huawei, the world’s largest telecommunications provider, which is partly state-owned and increasingly regarded as an instrument of its foreign policy. Using a global sample and an identification strategy based on generalized synthetic controls, we show that the effect of Huawei transfers depends on preexisting political institutions in recipient countries. In the world’s autocracies, Huawei technology facilitates digital repression. We find no effect in the world’s democracies, which are more likely to have laws that regulate digital privacy, institutions that punish government violations, and vibrant civil societies that step in when institutions come under strain. Most broadly, this article advances a large literature about the geopolitical implications of China’s rise.

Publication Type: Journal Articles
Journal Publisher: Perspectives on Politics
Authors: Erin Baggott Carter, Brett Carter
Number: Published online 2025:1-20

We are on the verge of a revolution in public sector decision-making, in which computers will take over many governance tasks previously assigned to human bureaucrats. Governance decisions based on algorithmic information processing are increasing in number and scope, shaping decisions that affect the lives of individual citizens. While significant attention in recent years has been devoted to normative discussions of fairness, accountability, and transparency in algorithmic decision-making based on artificial intelligence, less is known about citizens' considered views on the issue. To put society in the loop, a Deliberative Poll was therefore carried out on the use of artificial intelligence in the public sector, as a form of in-depth public consultation. The three use cases selected for deliberation were refugee reallocation, a welfare-to-work program, and parole. A key finding was that, after acquiring more knowledge about the concrete use cases, participants were overall more supportive of using artificial intelligence in the decision processes. The event used a pretest/post-test control group experimental design, and as such the results offer experimental evidence complementing extant observational studies that show positive associations between knowledge and support for using artificial intelligence.

Publication Type: Journal Articles
Journal Publisher: AI & SOCIETY
Authors: James S. Fishkin, Alice Siu
AI in Education Deliberative Poll for High School Educators

Are you worried about the impact AI can have on your classroom or excited about its potential? Do you wonder how you can utilize AI in your teaching or do you feel like it dehumanizes the learning process? Are you eager to learn about what “Artificial Intelligence” entails and how it can impact your classroom? 

If any of these questions have crossed your mind, we invite you to join Stanford's Deliberative Democracy Lab on Saturday, May 18, from 10:00 am to 2:45 pm (Pacific Time) to discuss with fellow educators how AI should be used and regulated in schools. You will discuss policies on the use of AI in schools, from banning it on school Wi-Fi to letting teachers and students decide what "appropriate usage" means. You will also get to meet experts in the field and ask them questions.

This will be an online event hosted on Stanford's Online Deliberation Platform. Sessions will alternate between small-group deliberations among teachers and expert panels with time for Q&A. Further details will be emailed to you.

SCHEDULE

10:00 am - 11:15 am: First Small Group Deliberation Session

11:15 am - 12:00 pm: Plenary Session 1

12:00 pm - 12:45 pm: Break

12:45 pm - 2:00 pm: Second Small Group Deliberation Session

2:00 pm - 2:45 pm: Plenary Session 2

This event is being led by students at The Quarry Lane School, Saratoga High School, and Lynbrook High School.

Online.

Open to high school educators only.

Workshops

As the Russian government seeks to improve its economic performance, it must pay greater attention to the role of technology and digitalization in stimulating the Russian economy. While digitalization presents many opportunities for the Russian economy, a few key challenges – cumbersome government regulations and an unequal playing field for foreign companies – restrict Russia's potential in digitalization. In the future, how the Russian government designs its technology and regulatory policies will likely have a significant impact both on the domestic front and on its international initiatives and relationships. This paper provides an overview of recent Russian digital initiatives, the regulatory barriers facing U.S. technology companies in Russia, and the intellectual property challenges of doing business in Russia. It also reviews recent digital initiatives from China, the United States, and other countries, and discusses what such programs mean for Russia. In this context, we examine Chinese and U.S. efforts to shape the future of global technological standards, alongside new programs from countries like Chile and Estonia to attract foreign startup companies. Finally, the paper discusses the future challenges that the Russian government needs to address in order to improve its digital business environment. It concludes with recommendations for designing market-friendly regulations, creating a level playing field for foreign businesses in Russia, promoting Russian engagement with Western companies and governments, and undertaking more outreach efforts to make Russia's digital business environment more inclusive.

Publication Type: Conference Memos
Journal Publisher: The Stanford US-Russia Journal
Number: No. 1