Cybersecurity

To thwart the effects of ‘Putinism,’ the American people must first understand the nature of the threat, Michael McFaul told the House Intelligence Committee.

What is “Putinism,” and what threat does it pose to U.S. national security? What steps can the U.S. take to confront it? These are the questions that former U.S. Ambassador to Russia and Freeman Spogli Institute (FSI) Director Michael McFaul sought to answer in his testimony before the House Intelligence Committee on March 28, 2019.

The committee’s hearing, “Putin’s Playbook: The Kremlin’s Use of Oligarchs, Money and Intelligence in 2016 and Beyond,” focused on the complex web of political and economic forces within Russia, and how they are leveraged abroad to advance Putin’s foreign policy agenda.

Chaired by Representative Adam Schiff, the committee listened as McFaul emphasized the need for foreign policy decision-making to be grounded in the latest data-driven research and evidence.

“To contain and thwart the malicious effects of ‘Putinism,’” said McFaul, “the United States government and the American people must first understand the nature of the threat.”

There Goes the Neighborhood

Since Putin’s first term as president began in 2000, McFaul explained, the Russian economy has undergone a radical redistribution of property rights, increasing public ownership at the expense of the private sector and further weakening property protection laws. Strengthening state ownership to support political aims has entrenched Putin’s leadership over the country but has not fostered vibrant or sustainable economic growth.

Now You See it, Now You Don’t

To distract from domestic corruption, economic inefficiency and overall inequality, said McFaul, the Kremlin has drawn the public’s attention toward its foreign adversaries, particularly the United States. A fabricated narrative claims that sanctions are a foreign ploy to cause economic stagnation, rather than a diplomatic response to Russian violations of international law.

Putinism On The Move

According to McFaul, Putin leverages several powerful methods of influence, many of which parallel those used within Russia, to further disrupt and weaken the international order:

  1. Heavy investment in international media, such as RT and Sputnik International
  2. The creation of organizations, fake identities and bots to influence public opinion on non-Russian social media platforms, particularly in the West
  3. “Doxing” — the theft of information by digital means with the aim of weakening perceived adversaries
  4. The cultivation of direct contacts with, and financial support for, sympathetic non-governmental organizations and individual politicians
  5. Strategic business engagements with foreign entities to establish political influence and leverage
  6. Mobilization of coercive actors, such as soldiers, mercenaries and even assassins


Although lifting sanctions is a high priority for the Kremlin, noted McFaul, Putin has sought to develop relationships with sympathetic figures in foreign governments and to wait for a favorable change in power, rather than address the primary reasons the sanctions were enacted in the first place.

Defining the U.S. Response

In response to these so-called rules of ‘Putin’s Playbook,’ McFaul has already partnered with Congressman Ro Khanna and Alex Stamos, the Director of the Stanford Internet Observatory, to define and advocate for the specific actions that will protect the United States’ national security, beginning with reforms to deter doxing operations and enhance the cybersecurity of voting infrastructure.

“National security is not a partisan issue,” said McFaul in his testimony before the House Intelligence Committee. “When it comes to national security, if the Russians attack us, they’re not going to just attack the Republicans and leave the Democrats to the side… I want us to get back to the national security threats that threaten all of us together.”

Read Michael McFaul’s complete written testimony.


The House Permanent Select Committee on Intelligence held a public hearing on Thursday, March 28, 2019, as part of its investigation into Russian influence during and after the 2016 election campaign.

The hearing, "Putin’s Playbook: The Kremlin’s Use of Oligarchs, Money and Intelligence in 2016 and Beyond” included testimony by Michael McFaul, former U.S. Ambassador to Russia and Director of the Freeman Spogli Institute at Stanford University.


Download Complete Testimony (PDF 263 KB)

EXCERPT

To contain and thwart the malicious effects of “Putinism,” the United States government and the American people must first understand the nature of the threat. This testimony focuses on the nexus of political and economic power within Russia under Putin’s leadership, and how these domestic practices can be used abroad to advance Putin’s foreign policy agenda. Moreover, it is important to underscore that crony capitalism, property rights provided by the state, bribery, and corruption constitute only a few of many different mechanisms used by Putin in his domestic authority and foreign policy abroad.

This testimony proceeds in three parts. Section I describes the evolution of Putin’s system of government at home, focusing in particular on the relationship between the state and big business. Section II illustrates how Putin seeks to export his ideas and practices abroad. Section III focuses on Putin’s specific foreign policy objective of lifting sanctions on Russian individuals and companies.

Watch the C-SPAN recording of the testimony


Media Contact: Ari Chasnoff, Assistant Director for Communications, 650-725-2371, chasnoff@stanford.edu


Recent years have witnessed an increasing number of cyber attacks originating in Russia that target the United States, the European Union and EU member states. In Russia’s undeclared war against Ukraine—a conflict that has claimed some 13,000 lives—Russia has employed cyber tactics on a regular basis, including the release of the Petya and NotPetya viruses against Ukraine.

Those attacks had consequences far beyond Ukraine’s borders. The NotPetya attack, initiated against a small tech firm in Ukraine, spread to global businesses and government agencies throughout Europe and crossed the Atlantic to the United States. The West should closely examine the Ukrainian experience, as Russia perfects tactics there that could be turned against Europe and the United States as well.

Improving the security of the Internet will require sharing knowledge and experience, raising awareness of cybersecurity, developing cybersecurity capacity, and deepening communication and cooperation among different stakeholders. The panel will discuss the nature of the threat as well as what governments, international organizations and businesses should do in these areas.

Speaker Bios:

Alex Stamos is a cybersecurity expert, business leader and entrepreneur working to improve the security and safety of the Internet through his teaching and research at Stanford University. Stamos is an Adjunct Professor at Stanford’s Freeman Spogli Institute, a William J. Perry Fellow at the Center for International Security and Cooperation, and a visiting scholar at the Hoover Institution. Prior to joining Stanford, Alex served as the Chief Security Officer of Facebook. In this role, Stamos led a team of engineers, researchers, investigators and analysts charged with understanding and mitigating information security risks to the company and safety risks to the 2.5 billion people on Facebook, Instagram and WhatsApp. During his time at Facebook, he led the company’s investigation into manipulation of the 2016 US election and helped pioneer several successful protections against these new classes of abuse. As a senior executive, Alex represented Facebook and Silicon Valley to regulators, lawmakers and civil society on six continents, and has served as a bridge between the interests of the Internet policy community and the complicated reality of platforms operating at billion-user scale.

Oleh Derevianko is a business and social entrepreneur. He is the co-founder and chairman of the board of ISSP — Information Systems Security Partners — a private international cybersecurity company founded in Ukraine in 2008 and currently operating in seven countries of Central and Eastern Europe and Central Asia. With a strong presence in countries on the front line of cyber and hybrid war, such as Ukraine, and serving both the private and public sectors, ISSP provides unique expertise in APT attack analysis, detection and response. Derevianko is also a co-founder of the International Cyber Academy (Kyiv), which provides world-class learning opportunities for students who want to become skilled professionals in a world that depends on the use of cyberspace. In 2015–2016 he served as Deputy Minister and Chief of Staff at the Ministry of Education and Science of Ukraine.

Dr. Sarah Lewis Cortes has managed security at American Express, Putnam Investments, Fidelity, and Biogen, among others. A postdoctoral researcher at the ACSO Digital Crime Lab, she conducts training and consultation with the FBI and Interpol. She earned her degrees at Harvard University and Northeastern University, and her research focuses on threat intelligence and the darknet, privacy and privacy law, international criminal legal treaties (MLATs), and digital forensics. At Putnam Investments, which manages over $1.3 trillion in assets, Sarah was SVP of Security. She oversaw Putnam’s recovery on 9/11, when parent company Marsh & McLennan’s data center on the 99th floor of the World Trade Center was destroyed.

Jason Min is the Head of Business Development at Check Point Software Technologies. In this role he sources, evaluates, and executes M&A transactions. Jason is responsible for overseeing business development and sales enablement activities that involve Check Point technology partners. Since joining Check Point in 2014, Jason has contributed to the success of Check Point’s major acquisitions and partnership growth. Prior to joining Check Point, Jason was at Highland Capital, a global venture capital firm, where he sourced and executed investments in security and software companies. Before working at Highland Capital, Jason was at General Atlantic, a $28B global private equity firm, where he focused on security and software investments across all stages of company growth.

Dafina Toncheva invests in emerging technologies in the enterprise space with a focus on enterprise SaaS applications and security. Dafina joined USVP in 2012 and has led investments in and joined the boards of InsideSales.com, Apptimize, Luma Health, Arkose Labs and Raken. Most recently, Dafina served on the board of Prevoty, a leader in application security that was acquired by Imperva; USVP was the lead investor and largest shareholder. Prior to joining USVP, Dafina was a principal investor with Tugboat Ventures from 2010. Before that, she spent two years at Venrock helping to expand the firm’s investments in SaaS, virtualization, security, infrastructure and enterprise applications. Dafina led the first institutional investment round in Cloudflare, which has since grown into one of the most successful Internet security startups in Silicon Valley.

Nataliya Mykolska is the Ukrainian Emerging Leaders Fellow at Stanford’s Center on Democracy, Development and the Rule of Law. Before coming to Stanford, Nataliya was the Trade Representative of Ukraine and Deputy Minister of Economic Development and Trade. In the government, Nataliya was responsible for developing and implementing a consistent, predictable and efficient trade policy. She focused on export strategy and Ukrainian export promotion, free trade agreements, protecting Ukrainian trade interests in the World Trade Organization (WTO), and dialogue with Ukrainian exporters. Nataliya was the Vice-Chair of the International Trade Council and the Intergovernmental Committee on International Trade.

Moderator: 

Steven Pifer is a William J. Perry fellow at Stanford’s Freeman Spogli Institute for International Studies (FSI), where he is affiliated with FSI’s Center for International Security and Cooperation and Europe Center.  He is also a nonresident senior fellow with the Brookings Institution. A retired Foreign Service officer, Pifer’s more than 25 years with the State Department focused on U.S. relations with the former Soviet Union and Europe, as well as arms control and security issues.  He served as deputy assistant secretary of state in the Bureau of European and Eurasian Affairs with responsibilities for Russia and Ukraine (2001-2004), ambassador to Ukraine (1998-2000), and special assistant to the president and senior director for Russia, Ukraine and Eurasia on the National Security Council (1996-1997).  

Commentary by Amy Zegart

Congress’s annual worldwide-threat hearings are usually scary affairs, during which intelligence-agency leaders run down all the dangers confronting the United States. This year’s January assessment was especially worrisome, because the minds of American citizens were listed as key battlegrounds for geopolitical conflict for the first time. “China, Russia, Iran, and North Korea increasingly use cyber operations to threaten both minds and machines in an expanding number of ways,” wrote Director of National Intelligence Dan Coats. Coats went on to suggest that Russia’s 2016 election interference is only the beginning, with new tactics and deep fakes probably coming soon, and the bad guys learning from experience.

Deception, of course, has a long history in statecraft and warfare. The Greeks used it to win at the Battle of Salamis in the fifth century b.c. The Allies won the Second World War in Europe with a surprise landing at Normandy—which hinged on an elaborate plan to convince Hitler that the invasion would be elsewhere. Throughout the Cold War, the Soviets engaged in extensive “active measures” operations, using front organizations, propaganda, and forged American documents to peddle half-truths, distortions, and outright lies in the hope of swaying opinion abroad.

But what makes people susceptible to deception? A colleague and I recently launched the two-year Information Warfare Working Group at Stanford. Our first assignment was to read up on psychology research, which drove home how vulnerable we all are to wishful thinking and manipulation.

Read the rest at The Atlantic


Cyber Initiative grantees and researchers in the news, February 2019

Here is a selection of Cyber Initiative grantee and researcher publications and citations for February 2019:

1/30/19:  Larry Diamond “Chinese Influence, American Interests” in The Diplomat.

1/30/19:  Michelle Mello “Stanford’s Michelle Mello on Latest Measles Outbreak” in SLS Blogs.

1/31/19:  Matthew Gentzkow “How Quitting Facebook Could Change Your Life” in Fortune.

1/30/19:  Matthew Gentzkow “This is Your Brain Off Facebook” in Health.

2/3/19:  Herb Lin “Atomic Scientists: Humanity flirting with annihilation” in Tribune.

2/4/19:  Matthew Gentzkow “Quitting Facebook makes people happier, study finds” in Irish Examiner.

2/6/19:  Herb Lin “Add cybersecurity to Doomsday Clock concerns, says Bulletin of Atomic Scientists” in CSO.  

2/6/19:  Herb Lin “Add cybersecurity to Doomsday Clock concerns, says Bulletin of Atomic Scientists” in CIO.  

2/8/19:  Elaine Treharne “Statement on the Hoover Institution” in The Stanford Daily.  

2/13/19:  Michelle Mello “Stanford’s Michael Wald on Vaccinations, Children’s Rights, and the Law” in The Stanford Report.  

2/15/19:  Fei-Fei Li and Elaine Treharne “Human-centered Artificial Intelligence Initiative talks AI, humanities and the arts” in The Stanford Daily.  

2/19/19:  Fei-Fei Li “5 Women advancing AI industry research” in Tech Talks.  

2/19/19:  Fei-Fei Li “10 AI influencers you should be following on Twitter” in Siliconrepublic.com.  

2/22/19:  Larry Diamond “Utah Against Health Insurance” in The New York Times.

2/23/19:  Sharad Goel “Algorithms Can Decide Pre-Trial Jail” in Urban Milwaukee.

2/25/19:  Dan Boneh “Zether developers from Stanford aim to add new layer of privacy to Ethereum” in Dapp Life.  

2/26/19:  Susan Athey “Ripple Lead on Questions – Student Seeks Clarification for Promoting XRP Over Bitcoin in Stanford University” in CoinGape.

2/26/19:  Larry Diamond “George Pyle: Utah’s Medicaid reversal makes us a fool coast-to-coast” in Salt Lake Tribune.  

2/27/19:  Arnold Milstein “AI will not solve health care challenges now, but there are innovative alternatives, researcher writes” in Scope.

2/28/19:  Dan Boneh “New Privacy Protocol Zether Can Conceal Ethereum Transactions” in Blockonomi.  

2/28/19:  Jure Leskovec “Species evolve ways to back up life's machinery” in Phys.org.  

2/28/19: Matthew Gentzkow “What happens when you get off Facebook for four weeks? Stanford researchers found out” in Recode.  


Abstract: Technical tools dominate the cyber risk management market. Social cybersecurity tools are severely underutilised in helping organisations defend themselves against cyberattacks. We investigate a class of non-technical risk mitigation strategies and tools that might be particularly effective in managing and mitigating the effects of certain cyberattacks. We call these social-science-grounded methods Defensive Social Engineering (DSE) tools. Through interviews with urban critical infrastructure operators and cross-case analysis, we devise a pre, mid and post cyber negotiation framework that could help organisations manage their cyber risks and bolster organisational cyber resilience, especially in the case of ransomware attacks. The cyber negotiation framework is grounded in both negotiation theory and practice. We apply our ideas, ex post, to past ransomware attacks that have wreaked havoc on urban critical infrastructure. By evaluating how to use negotiation strategies effectively (even if no negotiations ever take place), we hope to show how non-technical DSE tools can give defenders some leverage as they engage with cyber adversaries who often have little to lose.

Publication Type: Journal Articles

Commentary by Amy Zegart

The Trump administration’s National Cyber Strategy rests on a pair of convenient fictions.

I used to think we didn’t have enough strategic documents guiding U.S. cyber policy. Now I think we have at least one too many. In September, the Trump administration published a National Cyber Strategy—proudly declaring that it was the first fully articulated cyber strategy in 15 years. This week, the annual intelligence threat hearing laid bare the fantasy world of that four-month-old document and the cold hard reality of, well, reality.

The National Cyber Strategy paints an aspirational view of how the U.S. is doing in cyberspace and what we should do in the future. To be fair, aspirational isn’t all bad. Strategy documents need to inspire, not depress. And the strategy’s four pillars seem as unobjectionable as motherhood and apple pie: defending the homeland and America’s way of life; promoting American prosperity; preserving peace through strength; and advancing American interests. Who could argue with that? The best strategies articulate a future world, lay out a pathway to get there, generate new ideas, and align the disparate elements of government on a common path to succeed. Given how hard it is to keep the government lights on these days, getting on the same page about anything is a big deal.

Read the rest at The Atlantic.

Research Affiliate

Daniel Correa is a researcher at FSI, where he leads the Technology and Public Policy Project. He previously helped shape science and technology policy for the Obama Administration for nearly four years, serving as Assistant Director for Innovation Policy at the White House Office of Science and Technology Policy. At the White House, Correa developed the Administration’s innovation strategy and led government-wide science and technology initiatives that invested hundreds of millions of dollars in government innovation, R&D commercialization, smart cities, entrepreneurship, and more.

Prior to joining the White House, Correa led the development of technology, entrepreneurship, and innovation policy proposals at the Information Technology and Innovation Foundation, a Washington, D.C. think tank. He has also held the position of Kauffman Fellow in Law, Economics and Entrepreneurship at Yale Law School. He received a law degree from Yale Law School, a master’s degree in economics from Yale University, and a bachelor’s degree from Dartmouth College.

By Clifton Parker

War is changing, and the U.S. military can now use cyber weapons as a form of digital combat power.

When and how that’s done is the subject of a new book, Bytes, Bombs and Spies: The Strategic Dimensions of Offensive Cyber Capabilities, edited by Herb Lin and Amy Zegart at the Center for International Security and Cooperation and the Hoover Institution.

US military doctrine defines offensive cyber operations as operations intended to project power by the application of force in and through cyberspace, that is, actions that disrupt or destroy intended targets.

At a time when US cyber policy is taking a new direction, Bytes, Bombs and Spies is one of the first books to examine the strategic dimensions of using offensive cyber operations. With chapters by leading scholars, topics include US cyber policy, deterrence and escalation dynamics, among other issues. Many of the experts conclude that more research, scholarship, and open discussion are needed on the topics and concerns involved.

Lin and Zegart are senior research scholar and senior fellow, respectively, at Stanford’s Center for International Security and Cooperation. Max Smeets, a CISAC cybersecurity postdoctoral fellow, is also a contributor to the book.

Offensive cyber rising

Examples in recent years of offensive cyber usage include the Stuxnet computer virus that destroyed centrifuges in Iran and slowed that country’s attempt to build a nuclear weapon; cyber weapons employed against ISIS and its network-based command and control systems; and reported cyber incursions against North Korea’s ballistic missile systems that caused launch failures.

“If recent history is any guide, the interest in using offensive cyber operations is likely to grow,” wrote Lin and Zegart.

One key issue is how to best respond to cyberattacks from abroad, such as the 2015 theft of millions of records from the Office of Personnel Management, the 2016 U.S. election hacking, and the 2017 WannaCry ransomware attack that affected computers worldwide, to name but a few. Those incidents have “provided strong signals to policymakers that offensive cyber operations are powerful instruments of statecraft for adversaries as well as for the United States,” Zegart and Lin wrote.

In September 2018, the White House reportedly issued a directive taking a more aggressive posture toward cyber deterrence. This measure allows the military to engage, without a lengthy approval process, in actions that fall below the “use of force” threshold, that is, below a level that would cause death, destruction or significant economic effects. Also, US Cyber Command was elevated to an independent unified command, giving it more independence in conducting offensive cyber operations.

These new policy directions make it all the more imperative that offensive cyber weapons be researched, analyzed and better understood, wrote Lin and Zegart.

Conceptual thinking lags

The 438-page Bytes, Bombs and Spies includes 16 chapters by different authors. Topics include the role and nature of military intelligence, surveillance, and reconnaissance in cyberspace; how the United States should respond if an adversary employs cyberattacks to damage the U.S. homeland or weaken its military capabilities; a strategic assessment of the U.S. Cyber Command vision; and operational considerations for strategic offensive cyber planning, among others.

“Conceptual thinking,” Lin and Zegart noted, lags behind the technical development of cyber weapons. Some issues examined include:

• How might offensive cyber operations be used in coercion or conflict?

• What strategic considerations should guide their development and use?

• What intelligence capabilities are required for cyber weapons to be effective?

• How do escalation dynamics and deterrence work in cyberspace?

• What role does the private sector play?

Scholars at universities and think tanks need to conduct research on such topics, Zegart said. “Independent perspectives contribute to the overall body of useful knowledge on which policymakers can draw.”

In the chapter Lin wrote on “hacking a nation’s missile development program,” he noted that cyber sabotage relies on electronic access to various points in the life cycle of a missile, from its construction to ultimate use.

“For some points, access is really hard to obtain; in other points, it is easier.  Access can be technical (what might be obtained by hacking into a network) or human (what might be obtained by bribing or blackmailing a technician into inserting a USB thumb drive),” he said. 

One key, Lin said, is the availability of intelligence on the missile and on the infrastructure needed to fabricate, assemble, and launch it.

“Precisely targeted offensive cyber operations generally require a great deal of detailed technical information, and such information is usually hard to obtain, especially if the missile program is operated by a closed authoritarian government that does not make available much information on anything,” he said.

Origins in cyber workshop

The idea for Bytes, Bombs and Spies originated from a 2016 research workshop led by Lin and Zegart through the Stanford Cyber Policy Program. That event brought together researchers from academia and think tanks as well as current and former policymakers in the Department of Defense (DoD) and U.S. Cyber Command.

“We organized the workshop for two reasons,” wrote Lin and Zegart. “First, it was already evident then—and is even more so now—that offensive cyber operations were becoming increasingly prominent in U.S. policy and international security more broadly. Second, despite the rising importance of offensive cyber operations, academics and analysts were paying much greater attention to cyber defense than to cyber offense.”

Herb Lin is the Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution and senior research scholar for cyber policy and security at the Center for International Security and Cooperation, a center of the Freeman Spogli Institute for International Studies.

Amy Zegart is the Davies Family Senior Fellow at the Hoover Institution, where she directs the Robert and Marion Oster National Security Affairs Fellows program. She is founder and co-director of the Stanford Cyber Policy Program, and senior fellow at the Center for International Security and Cooperation, a center of the Freeman Spogli Institute for International Studies.

Media Contacts

Clifton B. Parker, Hoover Institution: 650-498-5205, cbparker@stanford.edu

Commentary by Herbert Lin

In the cybersecurity field, the term “active defense” is often used in a variety of ways, referring to any activity undertaken outside the legitimate span of control of an organization being attacked; any non-cooperative, harmful or damaging activity undertaken outside that scope; or any proactive step taken inside or outside that span of control. As most Lawfare readers know, activities outside the legitimate span of control are quite controversial from a policy standpoint, as they can implicate the Computer Fraud and Abuse Act, or CFAA, which criminalizes both gaining access to computers without authorization and exceeding authorized access.

This logic suggests to many that “hacking back”—which might well be defined as a counter-cyberattack on an attacker’s computer—would violate the CFAA. That is, even if A gains unauthorized access to B’s computer, any action taken by B on A’s computer would violate the CFAA since A would not have given B authorization for access. This article will offer some technical commentary on the implications of interpreting the CFAA that way.

Read the rest at Lawfare