Security

FSI scholars produce research aimed at creating a safer world and examining the consequences of security policies on institutions and society. They look at longstanding issues, including nuclear nonproliferation and conflicts between countries like North and South Korea. But their research also examines new and emerging areas that transcend traditional borders – the drug war in Mexico and expanding terrorism networks. FSI researchers look at the changing methods of warfare with a focus on biosecurity and nuclear risk. They tackle cybersecurity with an eye toward privacy concerns and explore the implications of new actors like hackers.

Along with the changing face of conflict, terrorism and crime, FSI researchers study food security. They tackle the global problems of hunger, poverty and environmental degradation by generating knowledge and policy-relevant solutions. 

-

This event is co-sponsored with the Center on Democracy, Development, and the Rule of Law

 

Seminar Recording: https://youtu.be/cPaeCJiRWuM

 

About this Event: In 2011, the impact of the Arab Spring and the emergence of YouTube videos evidencing ballot stuffing during Russian parliamentary elections, which nearly led to a revolution in Russia, forced Kremlin strategists to suddenly realize that the Internet had become a major medium — and a major power. This was the case not only in Russia, but everywhere on the planet. The Kremlin spent years and billions of dollars to subdue this power and to learn how to make use of it. Was this crusade successful? Is it true that Putin is now capable of influencing elections everywhere in the world? Will he be able to cut Russia off from the global internet? And what are the troll farms trying to achieve? Leonid Volkov, an internet expert and the founder of the Internet Protection Society, the leading Russian digital rights NGO—and, simultaneously, Chief of Staff for Alexey Navalny, the leader of the Russian opposition—is known for his optimistic view on these issues. While Putin is far from possessing almighty internet warfare capabilities, the situation has complex implications for Russian society and democracy.

 

About the Speaker: Leonid Volkov is a Russian politician and IT expert. He oversees regional political operations, IT, and electoral campaigns for Alexey Navalny, the leader of the Russian opposition. Previously, Volkov served as campaign manager and chief of staff for Navalny’s 2013 Moscow mayoral campaign, as well as for Navalny’s attempt to register for the 2018 presidential election. Volkov is a former deputy of the Yekaterinburg City Duma. He has over 20 years of experience as an IT professional, running and consulting for several of Russia’s largest software firms. Since 2016, he has also been active as founder and chairman of the Internet Protection Society, an NGO focused on internet freedom and digital rights in Russia.

Virtual Seminar

Leonid Volkov Russian Politician and IT-Expert
Seminars
-

The research on misinformation generally and fake news specifically is vast, as is coverage in media outlets. Two questions run throughout both the academic and public discourse: what explains the spread of fake news online, and what can be done about it? While there is substantial literature on who is likely to be exposed to and share fake news, these behaviors might not signal belief or effect. Conversely, there is far less work on who is able to differentiate between true and false stories and, as a result, who is most likely to believe fake news (or, conversely, not believe true news), a question that speaks directly to Facebook’s recent “community review” approach to combating the spread of fake news on its platform.

In his talk, Professor Tucker will report on initial findings from a new collaborative project between NYU’s Center for Social Media and Politics and Stanford’s Program on Democracy and the Internet designed to fill these gaps in the scholarly literature and inform the types of policy decisions being made by Facebook. The project has enlisted both professional fact checkers and random “crowds” of close to 100 people to fact check five “fresh” articles (that have appeared in the past 24 hours) per day, four days a week, for eight weeks, using an innovative, transparent, and replicable algorithm for selecting the articles for fact checking. He will report on initial observations regarding (a) individual determinants of fact-checking proficiency; (b) the viability of using the “wisdom of the crowds” for fact checking, including examining the tradeoffs between crafting a more accurate crowd vs. a more representative crowd; and (c) results from experiments designed to assess potential policy interventions to improve crowdsourcing accuracy.

About the Speaker:

Joshua A. Tucker is Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University. He is the Director of NYU’s Jordan Center for Advanced Study of Russia, a co-Director of the NYU Social Media and Political Participation (SMaPP) laboratory, a co-Director of the new NYU Center for Social Media and Politics, and a co-author/editor of the award-winning politics and policy blog The Monkey Cage at The Washington Post. He serves on the advisory boards of the American National Election Study, the Comparative Study of Electoral Systems, and numerous academic journals. Originally a scholar of post-communist politics, he has more recently studied social media and politics. His research in this area has included studies on the effects of network diversity on tolerance, partisan echo chambers, online hate speech, the effects of exposure to social media on political knowledge, online networks and protest, disinformation and fake news, how authoritarian regimes respond to online opposition, and Russian bots and trolls. His research has been funded by over $8 million in grants in the past three years, including a 2019 Knight Foundation “Research on the Future of an Informed Society” grant. His most recent book is the co-authored Communism’s Shadow: Historical Legacies and Contemporary Political Attitudes (Princeton University Press, 2017), and he is the co-editor of the forthcoming edited volume Social Media and Democracy (Cambridge University Press, 2020). 

-

A Q&A with Professor Stephen Stedman, who serves as the Secretary General of the Kofi Annan Commission on Elections and Democracy in the Digital Age.

Stephen Stedman, a Senior Fellow at the Freeman Spogli Institute for International Studies (FSI) at Stanford, is the director of the Kofi Annan Commission on Elections and Democracy in the Digital Age, an initiative of the Kofi Annan Foundation. The Commission is focused on studying the effects of social media on electoral integrity and the measures needed to safeguard the democratic process.  

At the World Economic Forum in Davos, Switzerland, the Commission, which includes FSI’s Nathaniel Persily, Alex Stamos, and Toomas Ilves, launched a new report, Protecting Electoral Integrity in the Digital Age. The report takes an in-depth look at the challenges faced by democracy today and makes a number of recommendations as to how best to tackle the threats posed by social media to free and fair elections. On Tuesday, February 25, Professors Stedman and Persily will discuss the report’s findings and recommendations during a lunch seminar from 12:00 to 1:15 PM. To learn more and to RSVP, visit the event page.

Q: What are some of the major findings of the report? Are digital technologies a threat to democracy?

Steve Stedman: Our report suggests that social media and the Internet pose an acute threat to democracy, but probably not in the way that most people assume. Many people believe that the problem is a diffuse one based on excess disinformation and a decline in the ability of citizens to agree on facts. We too would like the quality of deliberation in our democracy to improve, and we worry about how social media might degrade democratic debate, but if we are talking about existential threats to democracy, the problem is that digital technologies can be weaponized to undermine the integrity of elections.

When we started our work, we were struck by how many pathologies of democracy are said to be caused by social media: political polarization; distrust in fellow citizens, government institutions, and traditional media; the decline of political parties; the degradation of democratic deliberation; and on and on. Social media is said to lessen the quality of democracy because it encourages echo chambers and filter bubbles where we only interact with those who share our political beliefs. Some platforms are said to encourage extremism through their algorithms.

What we found, instead, is a much more complex problem. Many of the pathologies that social media are said to create – for instance, polarization, distrust, and political sorting – begin their trendlines before the invention of the Internet, let alone the smartphone. Some of the most prominent claims are unsupported by evidence, or are confounded by conflicting evidence. In fact, we say that some assertions simply cannot be judged without access to data held by the tech platforms.

Instead, we rely on the work of scholars like Yochai Benkler and Edda Humphries to argue that not all democracies are equally vulnerable to network propaganda and disinformation. It is precisely where you have high pre-existing affective polarization, low trust, and hyperpartisan media that digital technologies can intensify and amplify polarization.

Elections and toxic polarization are a volatile mix. Weaponized disinformation and hate speech can wreak havoc on elections, even if they don’t alter the vote tallies. This is because democracies require a system of mutual security. In established democracies political candidates and followers take it for granted that if they lose an election, they will be free to organize and contest future elections. They are confident that the winners will not use their power to eliminate them or disenfranchise them. Winners have the expectation that they hold power temporarily, and accept that they cannot change the rules of competition to stay in power forever. In short, mutual security is a set of beliefs and norms that turn elections from being a one-shot game into a repeated game with a long shadow of the future.

In a situation already marred by toxic polarization, we fear that weaponized disinformation and hate speech can cause parties and followers to believe that the other side doesn’t believe in the rules of mutual security. The stakes become higher. Followers begin to believe that losing an election means losing forever. The temptation to cheat and use violence increases dramatically. 

Q: On political advertising, the report encourages platforms to provide more transparency about who is funding that advertising. But it also asks that platforms require candidates to pledge that they will avoid deceptive campaign practices when purchasing ads. It even goes as far as to recommend financial penalties for a platform if, for example, a bot spreading information is not labelled as such. Some platforms might argue that this puts an unfair onus on them. How might platforms be encouraged to participate in this effort?

SS: The platforms have a choice: they can contribute to toxic levels of political polarization and the degradation of democratic deliberation, or they can protect electoral integrity and democracy. There are a lot of employees of the platforms who are alarmed at the state of polarization in this country and don’t want their products to be conduits of weaponized disinformation and hate speech. You saw this in the letter signed by Facebook employees objecting to Mark Zuckerberg’s decision that Facebook would treat political advertising as largely exempt from its community standards. If ever there were a moment in this country when we should demand that our political parties and candidates live up to a higher ethical standard, it is now. Instead, Facebook decided to allow political candidates to pay to run ads even if the ads use disinformation, tell bald-faced lies, engage in hate speech, and use doctored video and audio. Its rationale is that this is all part of “the rough and tumble of politics.” In doing so, Facebook is in the contradictory position of having hundreds of employees working to stop disinformation and hate speech in elections in Brazil and India while allowing politicians and parties in the United States to buy ads that can use disinformation and hate speech.

Our recommendation gives Facebook an option that allows political advertisement in a way that need not inflame polarization and destroy mutual security among candidates and followers: 1.) Require that candidates, groups, or parties who want to pay for political advertising on Facebook sign a pledge of ethical digital practices; 2.) Then use those standards to determine whether an ad meets the pledge. If an ad uses deep fakes, if an ad grotesquely distorts the facts, if an ad out-and-out lies about what an opponent said or did, then Facebook would not accept the ad. Facebook can either help us raise our electoral politics out of the sewer or it can ensure that our politics drowns in it.

It’s worth pointing out that the platforms are only one actor in a many-sided problem. Weaponized disinformation is actively spread by unscrupulous politicians and parties; it is used by foreign countries to undermine electoral integrity; and it is often spread and amplified by irresponsible partisan traditional media. Fox News, for example, ran the crazy conspiracy story about Hillary Clinton running a pedophile ring out of a pizza parlor in DC. Individuals around the president, including the son of the first National Security Adviser, tweeted the story.

Q: While many of the recommendations focus on the role of platforms and governments, the report also proposes that public authorities promote digital and media literacy in schools as well as public interest programming for the general population. What might that look like? And how would that type of literacy help protect democracy? 

SS: Our report recommends digital literacy programs as a means to help build democratic resilience against weaponized disinformation. Having said that, however, the details matter tremendously. Sam Wineburg at Stanford, whom we cite, has extremely insightful ideas for how to teach citizens to evaluate the information they see on the Internet, but even he puts forward warnings: if done poorly, digital literacy could simply increase citizen distrust of all media, good and bad, and digital literacy in a highly polarized context raises the question of who will decide what is good and bad media. We say in passing that in addition to digital literacy we need to train citizens to understand biased assimilation of information. Digital literacy trains citizens to understand who is behind a piece of information and who benefits from it. But we also need to teach citizens to stand back and ask, “why am I predisposed to want to believe this piece of information?”

Q: Obviously access to data is critical for researchers and commissioners to do their work, analysis and reporting. One of the recommendations asks that public authorities compel major internet platforms to share meaningful data with academic institutions. Why is it so important for platforms and academia to share information?

SS: Some of the most important claims about the effects of social media can’t be evaluated without access to the data. One example we cite in the report is the controversy about whether YouTube’s algorithms radicalize individuals and send them down a rabbit hole of racist, nationalist content. This is a common claim and has appeared on the front pages of the New York Times. The research supporting the claim, however, is extremely thin, and other research disputes it. What we say is that we can’t adjudicate this argument unless YouTube shares its data, so that researchers can see what the algorithm is doing. There are similar debates concerning the effects of Facebook. One of our commissioners, Nate Persily, has been at the forefront of working with Facebook to provide certified researchers with privacy-protected data through Social Science One. Progress has been so slow that the researchers have lost patience. We hope that governments can step in and compel the platforms to share the data.

Q: This is one of the first reports to look at this problem in the Global South. Is the problem more or less critical there?

SS: Kofi Annan was very concerned that the debate about digital technologies and democracy was far too focused on Europe and the United States. Before Cambridge Analytica’s involvement in the 2016 United States election and the Brexit referendum, its predecessor company had manipulated elections in Asia, Africa, and the Caribbean. There is now a transnational industry in election manipulation.

What we found does not bode well for democracies in the rest of the world. The factors that make democracies vulnerable to network propaganda and weaponized disinformation are often present in the Global South: pre-existing polarization, low trust, and hyperpartisan traditional media. Many of these democracies already have a repertoire of electoral violence. 

On the other hand, we did find innovative partnerships in Indonesia and Mexico where Election Management Bodies, civil society organizations, and traditional media cooperated to fight disinformation during elections, often with success. An important recommendation of the report is that greater attention and resources are needed for such efforts to protect electoral integrity in the Global South. 

About the Commission on Elections and Democracy in the Digital Age

As one of his last major initiatives, in 2018 Kofi Annan convened the Commission on Elections and Democracy in the Digital Age. The Commission includes members from civil society and government, the technology sector, academia, and media; across 2019 its members examined and reviewed the opportunities and challenges for electoral integrity created by technological innovations. Assisted by a small secretariat at Stanford University and the Kofi Annan Foundation, the Commission has undertaken extensive consultations and issued recommendations as to how new technologies, social media platforms, and communication tools can be harnessed to engage, empower, and educate voters, and to strengthen the integrity of elections. Visit the Kofi Annan Foundation and the Commission on Elections and Democracy in the Digital Age for more on their work.

-

IMPORTANT EVENT UPDATE: 

In keeping with Stanford University's March 3 message to the campus community on COVID-19 and current recommendations of the CDC, the Asia-Pacific Research Center is electing to postpone this event until further notice. We apologize for any inconvenience this may cause, and appreciate your understanding and cooperation as we do our best to keep our community healthy and well. 

 

Data-intensive technologies such as AI may reshape the modern world. We propose that two features of data interact to shape innovation in data-intensive economies: first, states are key collectors and repositories of data; second, data is a non-rival input in innovation. We document the importance of state-collected data for innovation using comprehensive data on Chinese facial recognition AI firms and government contracts. Firms produce more commercial software and patents, particularly data-intensive ones, after receiving government public security contracts. Moreover, effects are largest when contracts provide more data. We then build a directed technical change model to study the state's role in three applications: autocracies demanding AI for surveillance purposes, data-driven industrial policy, and data regulation due to privacy concerns. When the degree of non-rivalry is as strong as our empirical evidence suggests, the state's collection and processing of data can shape the direction of innovation and growth of data-intensive economies.

David Yang’s research focuses on political economy, behavioral and experimental economics, economic history, and cultural economics. In particular, David studies the forces of stability and forces of change in authoritarian regimes, drawing lessons from historical and contemporary China. David received a B.A. in Statistics and a B.S. in Business Administration from the University of California, Berkeley, and a PhD in Economics from Stanford. David is currently a Prize Fellow in Economics, History, and Politics at Harvard and a Postdoctoral Fellow at J-PAL at MIT. He also joined Harvard’s Economics Department as an Assistant Professor as of 2020.

David Yang Prize Fellow in Economics, History, and Politics; Department of Economics, Harvard University
Seminars
-

Richard Heydarian in conversation with Don Emmerson

In this seminar, scholar/journalist Richard Heydarian will discuss the principal arguments and ideas in his just-published book on the Indo-Pacific.  He will do so in conversation with Southeast Asia Program director Don Emmerson.  Propositions to be discussed will include:  The 21st century will not belong to China. There will be no Pax Sinica in the Indo-Pacific. China’s bid for primacy will fail due to its overbearing hubris abroad and its massive challenges at home.  Its effort to create a “neo-tributary” system in East Asia will not succeed, as evidenced by pushback regarding the Belt and Road Initiative and the South China Sea.  Neither China nor America will dominate the Indo-Pacific.  More likely to develop there is “an uneasy, fluid network of interlocking alliances, partnerships, and rivalries” in which middle powers such as Japan will figure prominently in efforts to address urgent and visceral challenges such as global warming and information war.  Most needed in the longer run will be a coalition of powers able jointly to “hold the line against the coming anarchy that will sweep the Indo-Pacific mega-region” if nothing is done to rescue it from the political, socioeconomic, environmental, and technological risks and dangers that lie ahead.  Copies of his latest book, from which these arguments are drawn, will be available for sale.

Richard Javad Heydarian’s latest book is The Indo-Pacific: Trump, China, and the New Struggle for Global Mastery (2020). Earlier publications include Asia’s New Battlefield (2015), How Capitalism Failed the Arab World (2014), and articles and interviews in many outlets including The Atlantic, The Economist, Foreign Affairs, The New York Times, The Guardian, The Wall Street Journal, and The Washington Post. He has interviewed heads of state and senior policy-makers across the Indo-Pacific and has taught political science at Ateneo de Manila University and De La Salle University in the Philippines; most recently he was a visiting research fellow at National Chengchi University in Taiwan. He is an opinion contributor to the South China Morning Post, The Straits Times, and Nikkei Asian Review, and is a columnist at the Philippine Daily Inquirer and a television host at GMA Network.

 

Richard Javad Heydarian Independent Scholar, Author, and Columnist for the Philippine Daily Inquirer
Panel Discussions
Authors
Rose Gottemoeller

This article originally appeared on the website of the Carnegie Endowment for International Peace, where Rose Gottemoeller is a nonresident senior fellow in Carnegie’s Nuclear Policy Program. She is also the Payne Distinguished Lecturer at the Freeman Spogli Institute for International Studies.

Russia is replacing older nuclear technology with more modern, more functional options. What are the implications for the United States, Europe, and the future of arms control?


Do the U.S. and Russia have different reasons for modernizing nuclear weapons?
In the big strategic game, the Russians and Americans have the same reason for modernizing their nuclear forces: they want to maintain parity. If the two sides have the same number of nuclear warheads deployed, then they will not be tempted to shoot at each other. They also have a reason to avoid an arms race that would entail constantly seeking more nuclear weapons to try to achieve superiority—however temporary. As expensive as nuclear weapons and their delivery vehicles are, parity has kept the costs down by holding the arms race in check.

In the past few years, Vladimir Putin does seem to be after nuclear weapons for another reason—to show that Russia is still a great power to be reckoned with. He has been trumpeting new and exotic systems that are unique, like the Burevestnik nuclear-propelled cruise missile.

These exotic systems have more of a political function than a strategic or security one. Their role is to signal Russia’s continuing scientific and military prowess at a time when the country does not otherwise have much on offer. Devilishly expensive and sometimes dangerous to operate, they are unlikely to be deployed in big numbers, as a fatal 2019 testing accident involving the Burevestnik shows. If U.S.-Russian arms control remains in place, such systems definitely will not be deployed in big numbers, because they would displace proven and highly reliable intercontinental ballistic missiles in the Russian force structure. These ballistic missiles are the backbone of nuclear deterrence for Russia. The exotics don’t add to that deterrent. They have some show-off value, but they will do no more than make the rubble bounce.

What are European concerns with Russia's nuclear weapon modernization?
The Europeans, most prominently the NATO Allies, are very concerned about Russia’s nuclear modernization programs. Their concerns revolve more around new nuclear missiles to be deployed on European soil than the intercontinental systems that threaten the United States. Poland and Lithuania, for example, are NATO countries bordering Kaliningrad, a Russian enclave in the heart of NATO territory. Russia has put increasingly capable missiles there, including the Iskander, a highly accurate modern missile that is capable of launching either nuclear or conventional warheads.

Likewise, the Europeans are of one mind about the threat posed by a missile known as the 9M729 (SSC-8 in NATO parlance), an intermediate-range ground-launched cruise missile that the Russians developed and deployed in violation of the Intermediate-Range Nuclear Forces (INF) Treaty. The Allies all agree that this missile poses a threat to NATO. Although it has not been deployed forward in Kaliningrad, its range is sufficient to threaten all of NATO Europe when deployed in European Russia. It too is said to carry both nuclear and conventional warheads.

Since Russia seized Crimea in 2014, the Russians have begun to build up basing sites for their advanced systems there too, including the Iskanders. If Russia brings nuclear weapons into Crimea, it will spark complex political, legal, and moral problems. The world community has largely held firm in condemning Russia’s seizure of Crimea and considers Crimea to be Ukrainian territory. Should Russia bring nuclear weapons to Crimea, it will be violating the Non-Proliferation Treaty (NPT) in a fundamental manner, for Ukraine is a non-nuclear weapon state under the NPT. Russia in this case would be behaving in a manner no better than North Korea.

What is the role of arms control in managing U.S. and European relationships with Russia?
The most basic role of arms control regimes is to create mutual predictability, ensuring that every participating country can be confident about its security, both now and into the future. In this way, arms control helps to keep defense spending in check, but it also allows countries to build up mutual confidence and stability, which can translate into broader security and economic ties. This assumes, of course, that the deal is properly implemented by all parties, which is why Ronald Reagan’s old adage “trust but verify” is so important. If participants are allowed to cheat on an arms control regime, then it becomes hollowed out, detrimental to the security of all.

The fundamental benefits of arms control, however, can be helpful in times of trouble. I like to think that all the work Russia, the United States, and Europe did together in the 1990s was enabled by the then thirty-year legacy of arms control cooperation. We worked together to protect nuclear weapons and materials from the former Soviet arsenal from being stolen or misused. The same goes for the safety of nuclear power plants. When Ukraine, Russia, the European Union, and the U.S. began to work together in the early 1990s to mitigate the effects of the 1986 Chernobyl disaster, existing relationships in the nuclear realm helped the cleanup project run more smoothly. Nuclear energy is clearly a different world from the nuclear weapons establishment, but the scientific underpinnings and the scientists and engineers working the issues are the same.

Nowadays, I think that we must contemplate what it will mean if no nuclear arms control regimes remain in force. For the generation that worked these issues in Russia, the U.S., and Europe, enough of a residual relationship exists that experts can grasp at opportunities for cooperation when they present themselves. Some mechanisms such as scientist-to-scientist dialogues are likely to remain, such as the Pugwash and Dartmouth dialogues and the National Academy of Sciences exchanges with the Russian Academy of Sciences. These were the first places where Soviet and Western scientists gathered together to confront the problems of nuclear war and to look together for solutions.

We should be concerned, however, that they may revert to the talk shops of the Cold War, with few opportunities to work together on practical projects. Meanwhile, pragmatic and persistent tools, such as the Nuclear Risk Reduction Centers (NRRCs) that operate in the U.S. Department of State and the Russian Ministry of Defense, may find their missions sharply curtailed as they cease to serve any treaty purpose. The U.S., Russia, and Europe may thus be heading to a time when their means of communication in a nuclear crisis are no better than those they had during the Cold War.

-

Seminar Recording: https://youtu.be/AJxhy6pf95U

 

About this Event: Rampant disinformation threatens democracy, security, and even public health worldwide. As malicious actors weaponize social media, societies worldwide are being challenged to find solutions. Technology and regulatory measures must be part of the solution but, especially in free societies, these solutions often fail to keep pace with rapidly evolving and escalating threats. Dr. Kristin Lord, President and CEO of IREX, an international non-profit organization focused on education and development, will argue that at a time when the cost of producing disinformation is effectively zero, building citizen resilience to misinformation and disinformation must also be part of the solution.

Dr. Lord will discuss concrete approaches to building citizen resilience to disinformation, and present and review data showing its impact. She will also highlight the research agenda needed to advance the field of media literacy if its interventions are to be effective. IREX’s own flagship media literacy program, “Learn to Discern,” is currently operational in more than a dozen countries, including the US, and has demonstrated lasting behavior change in a rigorous evaluation. Such approaches can be an effective part of a counter-disinformation strategy – but only if they are urgently brought to scale.

 

About the Speaker: Kristin Lord is President and CEO of IREX, a global non-profit organization that promotes more just, prosperous, and inclusive societies by developing leaders, extending access to quality education and information, empowering youth, and supporting accountable governance and civic participation. She brings more than twenty years of experience in the fields of education, foreign policy, global development, and security and peacebuilding to this role. Prior to joining IREX in 2014, Dr. Lord served in leadership roles at the United States Institute of Peace, Center for a New American Security, Brookings Institution, and The George Washington University's Elliott School of International Affairs. She also served at the U.S. Department of State and is currently a board member of the U.S. Global Leadership Coalition.

Kristin M. Lord President and CEO IREX
Seminars
-

Join Stephen Stedman, Nathaniel Persily, the Cyber Policy Center, and the Center on Democracy, Development and the Rule of Law (CDDRL) in an enlightening exploration of the recent report, Protecting Electoral Integrity in the Digital Age, put out by the Kofi Annan Commission on Elections and Democracy in the Digital Age. Moderated by Kelly Born, Executive Director of the Cyber Policy Center.

More on the report:

 

Abstract:

New information and communication technologies (ICTs) pose difficult challenges for electoral integrity. In recent years foreign governments have used social media and the Internet to interfere in elections around the globe. Disinformation has been weaponized to discredit democratic institutions, sow societal distrust, and attack political candidates. Social media has proved a useful tool for extremist groups to send messages of hate and to incite violence. Democratic governments strain to respond to a revolution in political advertising brought about by ICTs. Electoral integrity has been at risk from attacks on the electoral process, and on the quality of democratic deliberation.

The relationship between the Internet, social media, elections, and democracy is complex, systemic, and unfolding. Our ability to assess some of the most important claims about social media is constrained by the unwillingness of the major platforms to share data with researchers. Nonetheless, we are confident about several important findings.

About the Speakers

Stephen Stedman is a senior fellow at the Freeman Spogli Institute for International Studies, professor, by courtesy, of political science, and deputy director of the Center on Democracy, Development and Rule of Law. Professor Stedman currently serves as the Secretary General of the Kofi Annan Commission on Elections and Democracy in the Digital Age, and is the principal drafter of the Commission’s report, “Protecting Electoral Integrity in the Digital Age.”

Professor Stedman served as a special adviser and assistant secretary general of the United Nations, where he helped to create the United Nations Peacebuilding Commission, the UN’s Peacebuilding Support Office, the UN’s Mediation Support Office, the Secretary-General’s Policy Committee, and the UN’s counterterrorism strategy. In 2005, his office successfully negotiated General Assembly approval of the Responsibility to Protect. From 2010 to 2012, he directed the Global Commission on Elections, Democracy, and Security, an international body mandated to promote and protect the integrity of elections worldwide. Professor Stedman served as Chair of the Stanford Faculty Senate in 2018-2019. He and his wife Corinne Thomas are the Resident Fellows in Crothers, Stanford’s academic theme house for Global Citizenship. In 2018, Professor Stedman was awarded the Lloyd B. Dinkelspiel Award for outstanding service to undergraduate education at Stanford.


Nathaniel Persily is the James B. McClatchy Professor of Law at Stanford Law School, with appointments in the departments of Political Science, Communication, and FSI. Prior to joining Stanford, Professor Persily taught at Columbia and the University of Pennsylvania Law School, and as a visiting professor at Harvard, NYU, Princeton, the University of Amsterdam, and the University of Melbourne. Professor Persily’s scholarship and legal practice focus on American election law, or what is sometimes called the “law of democracy,” which addresses issues such as voting rights, political parties, campaign finance, redistricting, and election administration. He has served as a special master or court-appointed expert to craft congressional or legislative districting plans for Georgia, Maryland, Connecticut, and New York, and as the Senior Research Director for the Presidential Commission on Election Administration.

Also among the commissioners of the report were FSI's Alex Stamos and Toomas Ilves.

 

 

Stephen Stedman
-

Abstract:

China’s cyberspace and technology regime is going through a period of change—but it’s taking a while. The U.S.–China economic and tech competition both influences Chinese government developments and awaits their outcomes, and the 2017 Cybersecurity Law set up a host of still-unresolved questions. Data governance, security standards, market access, compliance, and other questions saw only modest new clarity in 2019. But 2020 promises new laws on personal information protection and data security, and the Stanford-based DigiChina Project in the Program on Geopolitics, Technology, and Governance is devoted to monitoring, translating, and explaining these developments. From AI governance to the nexus of cybersecurity and supply chains, this talk will summarize recent Chinese policymaking and lay out expectations for the year to come.

About the Speaker:

Graham Webster is editor in chief of the Stanford–New America DigiChina Project at the Stanford University Cyber Policy Center and a China digital economy fellow at New America. He was previously a senior fellow and lecturer at Yale Law School, where he was responsible for the Paul Tsai China Center’s U.S.–China Track 2 and Track 1.5 dialogues for five years before leading programming on cyberspace and technology issues. In the past, he wrote a CNET News blog on technology and society from Beijing, worked at the Center for American Progress, and taught East Asian politics at NYU's Center for Global Affairs. Webster holds a master's degree in East Asian studies from Harvard University and a bachelor's degree in journalism from Northwestern University. Webster also writes the independent Transpacifica e-mail newsletter.

Research Scholar
Graham Webster

Graham Webster is a research scholar in the Program on Geopolitics, Technology, and Governance and editor-in-chief of the DigiChina Project at the Center for International Security and Cooperation at Stanford University. He researches, writes, and teaches on technology policy in China and US-China relations.

Before bringing DigiChina to Stanford in 2019, he was its cofounder and coordinating editor at New America, where he was a China digital economy fellow. From 2012 to 2017, Webster worked for Yale Law School as a senior fellow and lecturer responsible for the Paul Tsai China Center’s Track II dialogues between the United States and China and co-taught seminars on contemporary China and Chinese law and policy. While there, he was an affiliated fellow with the Yale Information Society Project, a visiting scholar at China Foreign Affairs University, and a Transatlantic Digital Debates fellow with New America and the Global Public Policy Institute in Berlin. He was previously an adjunct instructor teaching East Asian politics at New York University and a Beijing-based journalist writing on the Internet in China for CNET News. 

In recent years, Webster's writing has been published in MIT Technology Review, Foreign Affairs, Slate, The Wire China, The Information, Tech Policy Press, and Foreign Policy. He has been quoted by The Wall Street Journal, The New York Times, The Washington Post, and Bloomberg and spoken to NPR and BBC World Service. Webster has testified before the US-China Economic and Security Review Commission and speaks regularly at universities and conferences in North America, East Asia, and Europe. His chapter, "What Is at Stake in the US–China Technological Relationship?" appears in The China Questions II (Harvard University Press, 2022).

Webster holds a bachelor's in journalism and international studies from Northwestern University and a master's in East Asian studies from Harvard University. He took doctoral coursework in political science at the University of Washington and language training at Tsinghua University, Peking University, Stanford University, and Kanda University of International Studies.

Editor-in-Chief, DigiChina
Graham Webster
-

Multilateral Negotiations on ICTs (information and communications technologies) and International Security: Process and Prospects for the UN Group of Governmental Experts and the UN Open-Ended Working Group

Abstract: The intent of this seminar is to provide an update on recent events at the UN relevant to international discussions of cybersecurity (and a primer of sorts on current UN processes for addressing this topic).

In 2018, UN Member States decided to establish two concurrent negotiations with nearly identical mandates on the international security dimension of ICTs—a sixth limited-membership UN Group of Governmental Experts (GGE) and an Open-Ended Working Group (OEWG) open to all governments. How did this happen? Are they competing or complementary endeavors? Is it likely that one will be able to bridge the longstanding divides on how international law applies to cyberspace or agree by consensus to additional norms of responsible State behavior? What would be a good outcome of each process? And how do these negotiations fit into the wider UN ecosystem, including the follow-up to the Secretary-General’s High-Level Panel on Digital Cooperation?

About the Speaker: Kerstin Vignard is an international security policy professional with nearly 25 years’ experience at the United Nations, with a particular interest in the nexus of international security policy and technology. Vignard is Deputy to the Director at UNIDIR, currently on temporary assignment leading UNIDIR’s team supporting the Chairs of the latest Group of Governmental Experts (GGE) on Cyber Security and the Open-Ended Working Group. She has led UNIDIR’s team supporting four previous cyber GGEs. From 2013 to 2018, she initiated and led UNIDIR’s work on the weaponization of increasingly autonomous technologies, and she is the co-Principal Investigator of a CIFAR AI & Society grant examining potential regulatory approaches for security and defence applications of AI.
