CYBER IN THE LIGHT OF KABUL – UNCERTAINTY, SPEED, ASSUMPTIONS

 

There is a similarity between cyber and the intelligence community (IC): both deal with a denied environment in which the adversary must be assessed from limited verifiable information. The recent events in Afghanistan, with the Afghan government and its military imploding, were unanticipated and ran against the ruling assumptions. The assumptions were off, and the events unfolded at unprecedented speed; the Afghan security forces evaporated in ten days facing a far smaller enemy, leading to a humanitarian crisis. There is no blame in any direction; it is evident that this was not the expected trajectory of events. Still, in my view, there is a lesson to be learned from the events in Kabul that applies to cyber.

The high degree of uncertainty, the speed in both cases, and our reliance on assumptions, not always vetted beyond our inner circles, make the analogy work. According to the media, there was no clear strategy in Afghanistan to reach a decisive outcome. You could say the same about cyber. What is a decisive cyber outcome at a strategic level? Are we just staring at tactical noise, from ransomware to unsystematic intrusions, when we should be trying to figure out the big picture instead?

Cyber is loaded with assumptions that we have accepted over time. The assumptions become our path-dependent trajectory, and in the absence of a grand nation-state-on-nation-state cyber conflict, the assumptions remain intact. The only reason cyber's failed assumptions have not yet surfaced is the absence of full cyber engagement in a conflict. There is a creeping assumption that senior leaders will lead future cyber engagements; meanwhile, the data shows that the increased velocity of engagements could nullify the time window in which leaders can lead. Why do we want cyber leaders to lead? It is just how we do business; that is why we traditionally have senior leaders. John Boyd's OODA loop (Observe, Orient, Decide, Act) has had a renaissance in cyber over the last three years. The increased velocity enabled by more capable hardware, machine learning, artificial intelligence, and massive data utilization makes it questionable whether there is time for senior leaders to lead traditionally. The risk is that senior leaders get stuck in the first O of the OODA loop, just observing, or at best in the second O, orienting. There may simply be no time to lead because events are unfolding faster than our leaders can decide and act. The way technology is developing, I have a hard time believing there will be any significant senior-leader input at critical junctures, because the time window is so narrow.
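The shrinking decision window can be made concrete with a back-of-the-envelope simulation. This is a sketch with invented, purely illustrative numbers (event rates and a five-minute decision cycle are assumptions, not empirical data): if hostile events arrive at machine speed while a human OODA cycle takes minutes, the fraction of events a leader can still decide on before the situation has changed collapses toward zero.

```python
import random

def fraction_human_decidable(event_rate_per_sec, human_decision_sec,
                             n_events=10_000, seed=1):
    """Estimate the fraction of events still 'current' when a human
    finishes a decision cycle. An event is decidable only if no newer
    event arrives before the human decision completes."""
    rng = random.Random(seed)
    decidable = 0
    for _ in range(n_events):
        # time until the next event (exponential inter-arrival)
        next_event = rng.expovariate(event_rate_per_sec)
        if next_event > human_decision_sec:
            decidable += 1
    return decidable / n_events

# Illustrative contrast: one event per hour vs. a hundred events per
# second, against a (generously fast) five-minute human decision cycle.
slow = fraction_human_decidable(1 / 3600, 300)   # roughly 0.9
fast = fraction_human_decidable(100, 300)        # effectively 0
```

At a human tempo of one event per hour, a leader can weigh in on most events; at machine tempo, the decision window has already closed on essentially every one of them, which is the condition under which intent, not orders, is all that is left.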

Leaders will always lead by expressing intent, and that might be the only thing left. Instead of precise orders, do we train leaders and subordinates to be led by intent as a form of decentralized mission command?

Another dominant cyber assumption is critical infrastructure as the likely attack vector. For the last five years, the default assumption in cyber has been that critical infrastructure is a tremendous national cyber risk. That might be correct, but there are numerous other risks. In 1983, the Congressional Budget Office (CBO) defined critical infrastructure as “highways, public transit systems, wastewater treatment works, water resources, air traffic control, airports, and municipal water supply.” By the Patriot Act of 2001, the scope had grown to include “systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.” By 2013, in Presidential Policy Directive 21 (PPD-21), the scope widened even further to encompass almost all of society. That concession stands at ballparks today count as critical infrastructure, together with thousands of other non-critical functions, shows a mission drift that undermines a national cyber defense. There is no guidance on what to prioritize and what we might have to live without at a critical juncture. The question is whether critical infrastructure matters to our potential adversaries as an attack vector, or whether it is critical infrastructure only because it matters to us. A potential adversary might want to attack infrastructure around American military facilities and slow down the transportation apparatus from bases to the port of embarkation (POE) to delay the arrival of U.S. troops in theater. The same adversary might make a different assessment, concluding that tampering with the American homeland only strengthens the American will to fight and popular support for a conflict.
The potential adversary might also utilize our critical infrastructure as a capture-the-flag training ground to train their offensive teams, but that activity has no strategic intent.

As broad as the definition is today, the focus on critical infrastructure likely reflects what concerns us rather than what the adversary considers essential to reach strategic success. So as we witness the unprecedented events in Afghanistan, where it appears our assumptions were off, it is good to keep in mind that cyber is heavy with untested assumptions. In cyber, what we know about the adversary and their intent is limited. We make assumptions based on potential adversaries' behavior and doctrine, but they remain assumptions. The failure to correctly assess Afghanistan should be a wake-up call for the cyber community, which also relies on unvalidated information.

The long-term cost of cyber overreaction

The default modus operandi when facing negative cyber events is to react, which often leads to overreaction. It is essential to highlight the cost of overreaction, which must be part of the calculus of when and how to engage. For an adversary probing cyber defenses, each reaction provides information that can be aggregated into a clear picture of the defender's capabilities and preauthorization thresholds.

Ideally, potential adversaries cannot assess our strategic and tactical cyber capacities, but over time and numerous responses, the information advantage evaporates. A reactive culture triggered by cyberattacks provides significant information to a probing adversary, which seeks to understand underlying authorities and tactics, techniques and procedures (TTP).

The more we act, the more a potential adversary understands our capacity, ability, techniques, and limitations. I am not advocating a passive stance, but I want to highlight the price of acting against a potential adversary. With each reaction, that competitor gains certainty about what we can do and how. The political scientist Kenneth N. Waltz noted that the power of nuclear arms resides in what you could do, not in what you do. A large part of a cyber force's strength resides in uncertainty about what it can do, which should be difficult for a potential adversary to assess and gauge.

Why does it matter? In an operational environment where adversaries operate below the threshold for open conflict, in sub-threshold cyber campaigns, an adversary will probe to determine where the threshold lies and to ensure it can operate effectively in the space below it. If a potential adversary cannot gauge the threshold, it will curb its activities, since its cyber operations must remain adequately distanced from a potential, unknown threshold to avoid unwanted escalation.
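How quickly probing erodes that uncertainty can be sketched as a simple search problem. This is a deliberately stylized illustration (the threshold value and the probe count are invented, and real thresholds are not a single number on a line): even if each probe reveals only the binary observation "did the defender react?", an adversary can bisect toward the defender's preauthorization threshold, halving its uncertainty with every response it observes.

```python
def probe_threshold(defender_threshold, low=0.0, high=1.0, probes=10):
    """Bisect toward a hidden response threshold using only the binary
    observation 'did the defender react to this probe?'."""
    for _ in range(probes):
        intensity = (low + high) / 2
        if intensity >= defender_threshold:  # defender reacts
            high = intensity                 # threshold is at or below
        else:                                # no reaction observed
            low = intensity                  # threshold is above
    return low, high  # interval known to contain the threshold

# Hypothetical hidden threshold; ten observed reactions shrink the
# adversary's uncertainty to about a thousandth of the initial range.
low, high = probe_threshold(defender_threshold=0.62)
uncertainty = high - low
```

The design point is that the information leak comes from the pattern of responses, not from any single one, which is why a reactive culture is so generous to a patient, probing adversary.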

Cyber was doomed to be reactionary from its inception; its legacy inherited from information assurance creates a focus on trying to defend, harden, detect, and act. The concept is to defend, and when the defense fails, it rapidly swings to reaction and counteraction. Naturally, we want to limit the damage and secure our systems, but we also leave a digital trail behind every time we act.

In game theory, proportional responses lead to tit-for-tat games with no decisive outcome. The lack of a desired end state in a tit-for-tat game is essential to keep in mind as we discuss persistent engagement. In the same way as Colin Powell reflected on the conflict in Vietnam, operations without an endgame, or without a concept of what decisive victory looks like, are engagements for the sake of engagement. Even worse, a tit-for-tat game of continuous engagements might be damaging, as it trains potential adversaries who can copy our TTPs to fight in cyber. Proportionality becomes a constant flow of responses that reveals friendly capabilities and makes potential adversaries more able.
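The no-decisive-outcome property of proportional response can be made concrete with a toy iterated game. This is a simplified sketch, and the payoff numbers are invented for illustration, not drawn from any doctrine: when both sides simply mirror the other's previous move, neither side ever builds a lasting advantage, however long the exchange runs.

```python
def play_titfortat(rounds, first_move_a="cooperate", first_move_b="cooperate"):
    """Iterated game where each side copies the other's previous move.
    Payoffs (invented): mutual restraint 3/3, mutual strike 1/1,
    unilateral strike 5/0."""
    payoff = {("cooperate", "cooperate"): (3, 3),
              ("defect", "defect"): (1, 1),
              ("defect", "cooperate"): (5, 0),
              ("cooperate", "defect"): (0, 5)}
    a_move, b_move = first_move_a, first_move_b
    a_score = b_score = 0
    for _ in range(rounds):
        pa, pb = payoff[(a_move, b_move)]
        a_score += pa
        b_score += pb
        a_move, b_move = b_move, a_move  # each mirrors the other's last move
    return a_score, b_score

# Even starting from one unilateral strike, the exchange locks into an
# endless alternation: after any even number of rounds the scores are tied.
a, b = play_titfortat(1000, first_move_a="defect")
```

The gap between the two scores never widens, which is the formal version of "engagements for the sake of engagement": continuous proportional response produces activity, cost, and information leakage, but no end state.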

There is no straight answer to how to react. A disproportional response to specific events raises the stakes for the potential adversary, but it cuts both ways: a disproportional response could also create unwanted escalation.

The critical concern is that, to maintain the nation's ability to conduct decisive cyber operations, the extent of friendly cyber capabilities needs almost intact secrecy to prevail at a critical juncture. It might be time to put stronger emphasis on intelligence gain-loss (IGL) assessments to answer whether the defensive gain now outweighs the potential loss of ability and options in the future.

The habit of overreacting to ongoing cyberattacks undermines the ability to engage and defeat an adversary with speed and surprise when it matters most. Continuously reacting and flexing capabilities might fit the general audience's perception of national ability, but it can also undermine the outlook for a favorable geopolitical cyber endgame.

After twenty years of cyber – still uncharted territory ahead

The general notion is that much of the core understanding of cyber is in place. I would like to challenge that perception. There are still vast territories of the cyber domain that need to be researched, structured, and understood. In Winston Churchill's words: it is not the beginning of the end; it is maybe the end of the beginning. In my personal opinion, the cyber journey is still very early; the field has yet to mature, and big building blocks for the future cyber environment are not in place. The internet and the networks that support it have grown dramatically over the last decade, but even if the growth of cyber is stunning, the actual advances are not as impressive.

In the last 20 years, cyber defense, and cyber as a research discipline, have grown from almost nothing to major national concerns and the recipient of major resources. In the winter of 1996-1997, there were four references to cyber defense in the search engine of that day: AltaVista. Today, there are about 2 million references in Google. Knowledge of cyber has not developed at the same rapid rate as the interest, concern, and resources.

The cyber realm is still struggling with basic challenges such as attribution. Traditional topics in political science and international relations — such as deterrence, sovereignty, borders, the threshold for war, and norms in cyberspace — are still under development and discussion. From a military standpoint, there is still a debate about what cyber deterrence would look like, what the actual terrain and maneuverability are like in cyberspace, and who is a cyber combatant.

The traditional combatant problem becomes even more complicated because the clear majority of the networks and infrastructure that could be engaged in potential cyber conflicts are civilian — and the people who run these networks are civilians. Add to that mix the future reality with cyber: fighting a conflict at machine speed and with limited human interaction.

Cyber raises numerous questions, especially for national and defense leadership. There are benefits: cyber can be used as a softer policy option with global reach that does not require prepositioning or weeks of getting assets into place for action. The problem occurs when you reverse the global reach and an asymmetric fight emerges, in which global adversaries of the United States can strike with cyber arms deep into the most granular particle of our society: the individual citizen. Another question raising concern is the matter of time. Cyberattacks and conflicts can be executed at machine speed, beyond the human ability to lead and to comprehend what is actually happening. This illustrates that cyber as a field of study is in its early stages, even with the astronomic growth of networked equipment, nodes, and the sheer volume of transferred information. We have massive activity on the internet and in networks, but we are not yet able to fully utilize it or even structurally understand what is happening at a system level and in a grander societal setting. I believe it could take until the mid-2030s before many of the basic elements of cyber are accepted, structured, and understood, and before we have a global framework. Therefore, it is important to invest in cyber research and make discoveries now rather than face strategic surprise. Knowledge is weaponized in cyber.

Jan Kallberg, PhD

Cognitive Force Protection – How to protect troops from an assault in the cognitive domain


Jan Kallberg and Col. Stephen Hamilton

Great power competition will require force protection for our minds, as hostile near-peer powers will seek to influence U.S. troops. Influence campaigns that undermine the American will to fight and inject misinformation into a cohesive fighting force are threats equal to any other hostile enemy action by adversaries and terrorists. Maintaining the will to fight is key to mission success.

Influence operations and disinformation campaigns are increasingly becoming a threat to the force. We have to treat influence operations and cognitive attacks as seriously as any violent threat in force protection. Force protection is defined by Army Doctrine Publication No. 3-37, derived from JP 3-0: “Protection is the preservation of the effectiveness and survivability of mission-related military and nonmilitary personnel, equipment, facilities, information, and infrastructure deployed or located within or outside the boundaries of a given operational area.” Therefore, protecting the cognitive space is an integral part of force protection.

History shows that preserving the will to fight has been essential to mission success in achieving national security goals. France in 1940 had more tanks and significant military means to engage the Germans; France still lost. A large part of the explanation for why France was unable to defend itself in 1940 resides with defeatism, including an unwillingness to fight that resulted from a decade-long erosion of the French soldiers' will in the cognitive realm.

In the 1930s, France was in political chaos, swinging between right-wing parties, communists, socialists, and authoritarian fascists amid political violence and cleavage, and the perception of a unified France worth fighting for diminished. Inspired by Stalin's Soviet Union, the communists fueled French defeatism with propaganda, agitation, and influence campaigns to pave the way for a communist revolution. Nazi Germany weakened the French to enable German expansion. Under persistent cognitive attack from two authoritarian ideologies, the bulk of the French Army fell into defeatism. The French disaster of 1940 is one of several historical examples where a manipulated perception of reality prevailed over reality itself. It would be naive to assume that the American will is a natural law unaffected by its environment. Historically, the American will to defend freedom has always been strong; however, the information environment has changed. Therefore, this cognitive space must be maintained, reignited, and shared when weaponized information threatens it.

In the Battle of the Bulge, the conflict between good and evil was open and visible. There was no competing narrative. The goal of the campaign was easily understood, with clear boundaries between friendly and enemy activity. Today, seven decades later, we face competing tailored narratives, digital manipulation of media, an unprecedented complex information environment, and a fast-moving, scattered situational picture.

Our adversaries will exploit, and already are exploiting, the fact that we as a democracy do not tell our forces what to think. Our only framework is loyalty to the Constitution and the American people. As a democracy, we expect our soldiers to support the Constitution and the mission, but the members of our force have the democratic and constitutional right to think whatever they find worthwhile to consider.

In order to fight influence operations, we would typically control what information is presented to the force. However, we cannot tell our force what to read and not read due to First Amendment rights. While this may not have caused issues in the past, social media has presented an opportunity for our adversaries to present a plethora of information that is meant to persuade our force.

In addition, there is too much information flowing in too many directions for centralized quality control or fact-checking. The vetting of information must occur at the individual level, and we need to enable the force's access to high-quality news outlets. This does not require any larger investment. The Army currently funds access to training and course material for education purposes. Extending these online resources to give every member of the force online access to a handful of quality news organizations costs little but creates a culture of reading fact-checked news. More importantly, news that is not funded by clickbait is likely to be less sensational, since its funding comes from dedicated readers interested in actual news that matters.

In a democracy, cognitive force protection means teaching, training, and enabling the individual to see the demarcation between truth and disinformation. As servants of our republic and its people, leaders of character can educate their units on assessing and validating information. As a first step, we must work toward this idea and provide tools to protect our force from an assault in the cognitive domain.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. Col. Stephen Hamilton is the chief of staff at the institute and a professor at the academy. The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Defense Department.

 

 

Government cyber breach shows need for convergence

(I co-authored this piece with MAJ Suslowicz and LTC Arnold).

MAJ Chuck Suslowicz , Jan Kallberg , and LTC Todd Arnold

The SolarWinds breach points out the importance of having both offensive and defensive cyber force experience. The breach is an ongoing investigation, and we will not comment on the investigation. Still, in general terms, we want to point out the exploitable weaknesses in creating two silos: offensive cyber operations (OCO) and defensive cyber operations (DCO). The separation of OCO and DCO, through the specialization of formations and leadership, undermines the broader understanding and value of threat intelligence. The growing demarcation between OCO and DCO also has operational and tactical implications. The Multi-Domain Operations (MDO) concept emphasizes the competitive advantages that the Army — and the greater Department of Defense — can bring to bear by leveraging the unique and complementary capabilities of each service.

It requires that leaders understand the capabilities their organization can bring to bear in order to achieve maximum effect from the available resources. Cyber leaders must have exposure to the depth and breadth of their chosen domain to contribute to MDO.

Unfortunately, within the Army’s operational cyber forces, there is a tendency to designate officers as either offensive cyber operations (OCO) or defensive cyber operations (DCO) specialists. The shortsighted nature of this categorization is detrimental to the Army’s efforts in cyberspace and stymies the development of the cyber force, affecting all soldiers. The Army will suffer in its planning and ability to operationally contribute to MDO from a siloed officer corps unexposed to the domain’s inherent flexibility.

We consider the assumption that there is a distinction between OCO and DCO to be flawed. It perpetuates the idea that the two operational types are doing unrelated tasks with different tools, and that experience in one will not improve performance in the other. We do not see such a rigid distinction between OCO and DCO competencies. In fact, most concepts within the cyber domain apply directly to both types of operations. The argument that OCO and DCO share competencies is not new; the iconic cybersecurity expert Dan Geer first pointed out that cyber tools are dual-use nearly two decades ago, and continues to do so. A tool that is valuable to a network defender can prove equally valuable during an offensive operation, and vice versa.

For example, a tool that maps a network’s topology is critical for the network owner’s situational awareness. The same tool can be equally effective for an attacker maintaining situational awareness of a target network. The dual-use nature of cyber tools requires cyber leaders to recognize both sides of their utility: a tool that does a beneficial job of visualizing key terrain to defend also creates a high-quality roadmap for a devastating attack. Limiting officer experiences to only one side of cyberspace operations (CO) will limit their vision, handicap their input as future leaders, and risk squandering effective use of the cyber domain in MDO.
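The dual-use point can be illustrated with a minimal sketch (the topology below is hypothetical, and this is not any fielded tool): the same computation that tells a defender which nodes are single points of failure hands an attacker the list of nodes whose loss would fragment the network.

```python
def connected_without(edges, removed):
    """Check whether the graph stays connected after removing one node."""
    nodes = {n for e in edges for n in e} - {removed}
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u != removed and v != removed:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:  # depth-first traversal from an arbitrary start node
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == nodes

def critical_nodes(edges):
    """Nodes whose loss disconnects the network: the defender's list of
    single points of failure is also the attacker's target list."""
    nodes = {n for e in edges for n in e}
    return {n for n in nodes if not connected_without(edges, n)}

# Hypothetical topology: a 'core' node bridges two sites.
edges = [("site-a1", "core"), ("site-a2", "core"),
         ("core", "site-b1"), ("site-b1", "site-b2")]
```

Running `critical_nodes(edges)` flags the bridging nodes. Nothing in the analysis is inherently offensive or defensive; only the intent of the person reading the output differs, which is exactly why experience on one side of the domain transfers to the other.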

An argument will be made that “deep expertise is necessary for success” and that officers should be chosen for positions based on their previous exposure. This argument fails on two fronts. First, the Army’s decades of experience in officer development have shown the value of diverse exposure in officer assignments. Other branches already ensure officers experience a breadth of assignments to prepare them for senior leadership.

Second, this argument ignores the reality of “challenging technical tasks” within the cyber domain. As cyber tasks grow more technically challenging, the tools become more common between OCO and DCO, not less common. For example, two of the most technically challenging tasks, reverse engineering of malware (DCO) and development of exploits (OCO), use virtually identical toolkits.

An identical argument can be made for network defenders preventing adversarial access and offensive operators seeking to gain access to adversary networks. Ultimately, the types of operations differ in their intent and approach, but significant overlap exists within their technical skillsets.

Experience within one fragment of the domain directly translates to the other and provides insight into an adversary’s decision-making processes. This combined experience provides critical knowledge for leaders, and lack of experience will undercut the Army’s ability to execute MDO effectively. Defenders with OCO experience will be better equipped to identify an adversary’s most likely and most devastating courses of action within the domain. Similarly, OCO planned by leaders with DCO experience are more likely to succeed as the planners are better prepared to account for potential adversary countermeasures.

In both cases, the cross-pollination of experience improves the Army’s ability to leverage the cyber domain and improve its effectiveness. Single tracked officers may initially be easier to integrate or better able to contribute on day one of an assignment. However, single-tracked officers will ultimately bring far less to the table than officers experienced in both sides of the domain due to the multifaceted cyber environment in MDO.

Maj. Chuck Suslowicz is a research scientist in the Army Cyber Institute at West Point and an instructor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). Dr. Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. LTC Todd Arnold is a research scientist in the Army Cyber Institute at West Point and assistant professor in U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS.) The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Department of Defense.

 

What COVID-19 can teach us about cyber resilience


Dr. Jan Kallberg and Col. Stephen Hamilton
March 23, 2020

The COVID pandemic is a challenge that creates health risks to Americans and will have long-lasting effects. For many, this is a tragedy: a threat to life, health, and finances. What draws our attention is what COVID-19 has meant for our society and the economy, and how, in an unprecedented way, families, corporations, schools, and government agencies quickly had to adjust to a new reality. Why does this matter from a cyber perspective?

COVID-19 has put increased stress on our logistic, digital, public, and financial systems, and this could in fact resemble what a major cyber conflict would mean to the general public. It is also essential to assess what matters to the public during this time. COVID-19 has created widespread disruption of work, transportation, logistics, and the distribution of food and necessities, and increased stress on infrastructures, from internet connectivity to just-in-time delivery. It has unleashed abnormal behaviors.

A potential adversary will likely not have the ability to take down an entire sector of our critical infrastructure, or business eco-system, for several reasons. First, awareness and investments in cybersecurity have drastically increased the last two decades. This in turn reduced the number of single points of failure and increased the number of built-in redundancies as well as the ability to maintain operations in a degraded environment.

Second, the time and resources required to create what was once referred to as a “Cyber Pearl Harbor” are beyond the reach of any near-peer nation. Decades of advancement, from increasing resilience and layered defense to new abilities to detect intrusion, have made it significantly harder to execute an attack of that size.

Instead, an adversary will likely focus its primary cyber capacity on what matters to its national strategic goals; for example, delaying the movement of the main U.S. force from the continental United States to theater through cyberattacks on utilities, airports, railroads, and ports. That strategy has two clear goals: to deny the United States and its allies options in theater due to a lack of strength, and to strike a significant blow against U.S. and allied forces early in the conflict. Given the choice between delaying U.S. forces’ arrival in theater and creating disturbances in thousands of grocery stores or wreaking havoc on office workers’ commutes, an adversary will likely prioritize what matters to its military operations first.

That said, in a future conflict, the domestic businesses, local governments, and services on which the general public relies will be targeted by cyberattacks. These second-tier operations will likely exploit vulnerabilities at scale in our society, but with less complexity and mainly opportunistic exploitation.

The similarity between the COVID-19 outbreak and a cyber campaign lies in the disruption of logistics and services, how the population reacts, and the stress it puts on law enforcement and first responders. These events can raise questions about the ability to maintain law and order and to prevent the destabilization of a distribution chain built for just-in-time operations, with minimal margins of deviation before it falls apart.

These second-tier attacks are by nature unsystematic and opportunity-driven; the goal is disruption, confusion, and stress. An authoritarian regime would likely not be hindered by international norms from attacking targets that jeopardize public health and create risks for the general population. Environmental hazards released by such attacks can lead to loss of life and a dramatic long-term loss of quality of life for citizens. If the population questions the government’s ability to protect it, the government’s legitimacy and authority will suffer. Health and environmental risks appeal not only to the general public’s logic but also to its emotions, particularly uncertainty and fear. This can become a tipping point if the population fears the future to the point that it loses confidence in the government.

Therefore, as we see COVID-19 unfold, it could give us insights into how a broad cyber-disruption campaign would affect the U.S. population. Terrorism experts examine two effects of an attack: the attack itself and how the target population reacts.

Our potential adversaries are likely studying carefully how our society reacts to COVID-19: whether the population obeys the government, whether the government maintains control and enforces its agenda, and whether the nation was prepared.

Lessons learned from COVID-19 are applicable to strengthening U.S. cyber defense and resilience. These unfortunate events increase our understanding of how a broad cyber campaign could disrupt and degrade quality of life, government services, and business activity.

Why Iran would avoid a major cyberwar

The Iranian military apparatus is a mix of traditional military defense, crowd control, political suppression, and shows of force to generate artificial internal authority in the country. If command and control evaporate in the military apparatus, so does the ability to control the population to the degree the Iranian regime has managed until now. In that light, what is in it for Iran to launch a massive cyber engagement against the free world? What can they win?

Demonstrations in Iran last year and signs of the regime’s demise raise a question: What would the strategic outcome be of a massive cyber engagement with a foreign country or alliance?

Authoritarian regimes traditionally put survival first; those that do not prioritize regime survival tend to collapse. Authoritarian regimes are always vulnerable because they are illegitimate. There will always be loyalists who benefit from the system, but for a significant part of the population, the regime is not legitimate. The regime exists only because it suppresses the popular will and uses force against any opposition.

In 2016, I wrote an article in the Cyber Defense Review titled “Strategic Cyberwar Theory – A Foundation for Designing Decisive Strategic Cyber Operations.” The utility of strategic cyberwar is linked to the institutional stability of the targeted state. If a nation is destabilized, it can be subdued to foreign will, and the current regime’s ability to execute its strategy evaporates with its loss of internal authority. The theory’s predictive power is strongest when applied to theocracies, authoritarian regimes, and dysfunctional experimental democracies, because their common trait is weak institutions.

Fully functional democracies, on the other hand, have a definite advantage: they have stability and institutions accepted by their citizenry. Nations openly adversarial to democracies are, in most cases, totalitarian states close to entropy. These totalitarian states remain under their current regimes only through the suppression of the popular will. Removing the pillars of repression – destabilizing the regime’s design and the institutions that make it function – will release the popular will.

A destabilized — and possibly imploding — Iranian regime is a more tangible threat to the ruling theocratic elite than any military systems being hacked in a cyber interchange. Dictators fear the wrath of the masses. Strategic cyberwar theory looks beyond the actual digital interchange, the cyber tactics, and instead seeks predictive power for how a decisive cyber conflict should be conducted in pursuit of national strategic goals.

If the free world uses its cyber abilities, it is far more likely that Iran itself is destabilized and falls into entropy and chaos, which could lead to major domestic bloodshed when the victims of 40 years of violent suppression decide the fate of their oppressors. That would not be the free world’s intent; it would simply be an outcome of how the Iranian totalitarian regime has acted toward its own people. The risks for the Iranians are far more significant than the potential upside of being able to inflict damage on the free world.

That doesn’t mean the Iranians would not try to hack systems in foreign countries they consider adversarial. Because of the regime’s constant need to feed its internal propaganda machinery with “victories,” such hacking is more likely to take place on a smaller scale, as uncoordinated low-level attacks exploiting opportunities as they arise. In my view, far more dangerous are advanced non-Iranian nation-state cyber actors that impersonate Iranian hackers, launching aggressive preplanned attacks under cover of a spoofed identity and shifting the blame, fueled by recent tensions.

From the Adversary’s POV – Cyber Attacks to Delay CONUS Forces Movement to Port of Embarkation Pivotal to Success

We tend to see vulnerabilities and concerns about cyber threats to critical infrastructure from our own viewpoint. But an adversary will assess where and how a cyberattack on America will benefit the adversary’s strategy. I am not convinced attacks on critical infrastructure, in general, have the payoff that an adversary seeks.

The American reaction to Sept. 11, and to any attack on U.S. soil, hints to an adversary that attacking critical infrastructure to create hardship for the population might work contrary to the intended softening of the will to resist foreign influence. It is more likely that attacks affecting the general population instead strengthen the will to resist and fight, similar to the British reaction to the German bombing campaign of 1940, the Blitz. We can’t rule out attacks that affect the general population, but no adversary has enough offensive capability to attack all 16 sectors of critical infrastructure and gain strategic momentum.
An adversary has limited cyberattack capabilities and needs to prioritize cyber targets aligned with its overall strategy. To see what options, opportunities, and directions an adversary might take, we must shift to the adversary’s point of view. One of my primary concerns is pinpointed cyberattacks disrupting and delaying the movement of U.S. forces to theater.

Seen from the potential adversary’s point of view, bringing the cyber fight to our homeland – think delaying the transport of U.S. forces to theater by attacking infrastructure and transportation networks from bases to the ports of embarkation – is a low-investment/high-return operation. Why does it matter?

First, the bulk of U.S. forces are not in the region where a conflict erupts; they are mainly based in the continental United States and must be transported to theater. From an adversary’s perspective, delaying the arrival of U.S. forces might be its only opportunity. If the adversary can exploit operational and tactical superiority in the initial phase of the conflict – engaging our local allies and U.S. forces in the region swiftly – it can make territorial gains that are too costly to reverse later, leaving the adversary in a strong bargaining position.

Second, even if only partially successful, cyberattacks that delay U.S. forces’ arrival will create confusion. Such attacks would mean units might arrive at different ports, at different times, and with only a fraction of their hardware or personnel while the rest is stuck in transit.

Third, an adversary that is convinced before a conflict that it can significantly delay the arrival of U.S. units from the continental U.S. to a theater will make a different assessment of the risks of a fait accompli attack. Training and Doctrine Command defines such an attack as one that “is intended to achieve military and political objectives rapidly and then to quickly consolidate those gains so that any attempt to reverse the action by the U.S. would entail unacceptable cost and risk.” Even if an adversary is strategically inferior in the long term, the window of opportunity opened by the assumed delay in moving units from the continental U.S. to theater might be enough for it to take military action seeking a successful fait accompli.

In designing a cyber defense for critical infrastructure, it is vital that what matters to the adversary be part of the equation. In peacetime, cyberattacks probe systems across society – waterworks, schools, social media, retail, all the way to sawmills. Cyberattacks in wartime will have more explicit intent, seeking a specific gain that supports the strategy. Therefore, it is essential to identify and prioritize the critical infrastructure that is pivotal in war, instead of attempting to spread the defense to cover everything touched in peacetime.

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Private Hackbacks Can Be Blowbacks

Demands to legalize corporate hackbacks are growing – and there is significant interest among private actors in hacking back if it were lawful. But if private companies obtained the right to hack back legally, the risk of blowback would likely outweigh the opportunity and potential gains. The proponents of private hackback tend to build their case on a set of assumptions. If these assumptions do not hold, private hackback will likely become a federal problem through uncontrolled escalation and spillover from these private counterstrikes.

-The private company can attribute the attack.

The idea of legalizing hackback operations assumes that the defending company can attribute the initial attack with pinpoint precision – that the counterstrike can, beyond doubt, be aimed at the entity that attacked first. If attribution is not achieved with satisfactory granularity and precision, a right to cyber counterstrike becomes a right to strike anyone based on suspicion of involvement. Very few private entities today can determine with high granularity who attacked them and trace the attack back so that the counterstrike is accurate. In the absence of established norms, a right to strike back even when the precision of the counterstrike is imperfect would increase entropy and deviation from emerging norms and international governance.

-The counterstriking corporations can engage a state-sponsored organization.

Things might spin out of control. The old small-unit tactics rule applies: anyone can open fire; only geniuses can get out unharmed. A counterstriking corporation may believe it can handle its adversary – assuming it is an underfunded group of college students hacking for fun – and later find out that it is a heavily funded, highly capable foreign state agency. The counterstriking company has limited means to determine, before striking back, the size of the initial attacker and the full spectrum of resources available to it. A probing counterattack would not be enough to determine the operational strength, ability, and intent of the potential adversary. Embedded in the assumption that the counterstriking corporation can handle any adversary is the further assumption that there will be no uncontrolled escalation.

-The whole engagement is locked in between parties A and B.

If there is an assumption of no uncontrolled escalation, then a follow-up assumption is that the engagement creates a deterrence that prevents the initial attacker from continuing its attacks. The defending company would need to counterattack with enough magnitude that the initial attacker is deterred from further attacks; once deterrence is established, the digital interchange ceases. The question is how to establish deterrence – and deter from which array of cyber operations – without causing damage. If deterrence cannot be established, the interchange will likely escalate, or settle into a strict tit-for-tat game without any decisive conclusion that continues until the initial attacker decides to end it.

-The initial attacker has no second strike option.

The assumption here is that the interchange will be fought with a specific set of cyber weapons and aim points, so it cannot lead to further damage – even if the initial attacker intended to rearrange its targets, aims, and potential impacts, it would have no option to do so. In reality, the initial attacker can, at its discretion, direct a second strike at unprecedented targets far outside the realm and values of the earlier strikes. It is more likely than not that the initial attacker has second-strike options that the target is unaware of at the moment of the counterstrike.

-The counterstriking company has no interests or assets in the initial attacker’s jurisdiction.

If a multinational company (MNC) counterstrikes a state agency or state-sponsored attacker, the MNC faces the risk of repercussions against its assets in the initial attacker’s jurisdiction. Major MNCs have interests, subsidiaries, and assets in hundreds of jurisdictions; the Fortune 500 have assets in the U.S., China, Russia, India, and numerous others. If MNC “A” counterstrikes a cyberattack from China, what are the risks for its subsidiary “A in China”? Related is the risk that, through improper attribution, MNC “A” counterstrikes from the U.S. against foreign digital assets that had no connection to the initial attack – which constitutes a new, unjustifiable, and illegal attack on foreign digital assets. The majority of potential source countries for hacking attacks are totalitarian and authoritarian states. A totalitarian state can easily switch domains – seizing property, arresting innocent business travelers, and acting in other ways in response to a corporate hackback. I am not saying that we should let totalitarian regimes act any way they want – only that it is not for private corporations to engage in and seek to resolve. Interacting with foreign governments is a government domain.

The idea of legalizing corporate hackbacks could increase distrust and entropy, and be counterproductive to the long-term goal of a secure and safe Internet.

Cyber Attacks with Environmental Impact – High Impact on Societal Sentiment

In the cyber debate, there is a significant, if not totally overshadowing, focus on the information systems themselves – the concerns do not extend to secondary and tertiary effects. For example, the problem with vulnerable industrial control systems managing water-reservoir dams is not limited to the digital conduits and systems; it is the fact that a massive release of water can create a flood that affects hundreds of thousands of citizens. It is important to look at the actual effects of a systematic or pinpoint-accurate cyberattack – and go beyond the limits of the information system itself.

As an example, a cascading failure of dams in a larger watershed would have a significant environmental impact. Hydroelectric dams and reservoirs are controlled through various computer networks, cable or wireless, and the control networks are connected to the Internet. A breach in the electric utility’s cyber defenses can lead all the way down to the logic controllers that instruct the electric machinery to open the floodgates. Many hydroelectric dams and reservoirs are designed as a chain of dams in a major watershed to create an even flow of water for generating energy. A cyberattack on several upstream dams would release water that increases the pressure on downstream dams. With rapidly diminishing storage capacity, downstream dams risk being breached by the oncoming water. Eventually, this can become a cascading effect through the river system, resulting in a catastrophic flood event.
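The cascade dynamic described above can be sketched in a few lines of code. This is a toy illustration, not a hydrological model: the dams, capacities, and volumes are hypothetical, and the only point it makes is that once downstream storage margins are thin, a single forced release upstream can overtop every dam in the chain.

```python
# Illustrative sketch (assumed, simplified): a chain of dams ordered
# upstream -> downstream, where an attacker-forced release at the top
# propagates down the chain once storage margins are exhausted.

def simulate_cascade(capacities, levels, forced_release):
    """Return indices of dams overtopped by a forced upstream release.

    capacities[i]  -- maximum storage of dam i
    levels[i]      -- current storage of dam i
    forced_release -- volume released by the compromised upstream dam
    """
    breached = []
    surge = forced_release
    for i in range(len(capacities)):
        levels[i] += surge               # oncoming water fills the next reservoir
        if levels[i] > capacities[i]:
            breached.append(i)           # dam i is overtopped...
            surge = levels[i]            # ...and its stored volume joins the surge
            levels[i] = 0
        else:
            break                        # remaining margin absorbs the surge
    return breached

# Three downstream dams, each already near capacity (hypothetical numbers).
print(simulate_cascade([100, 100, 100], [90, 85, 95], forced_release=20))
# prints [0, 1, 2] -- every dam in the chain fails
print(simulate_cascade([100, 100, 100], [90, 85, 95], forced_release=5))
# prints [] -- the first dam's margin absorbs the release
```

The same release that is harmless against dams with spare capacity breaches the entire chain when the reservoirs are full, which is why the scenario above singles out coordinated attacks on several upstream dams.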

The traditional cybersecurity way to frame the problem is loss of function and disruption of electricity generation, but that overlooks the potential environmental effect of an inland tsunami. This is especially troublesome in areas where population and industry are dense along a river; examples include Pennsylvania, West Virginia, and other areas with cities built around historic mills.

We have seen that events close to citizens’ immediate environment affect them strongly, which makes sense. A perceived threat to their immediate environment creates rapid public shifts of belief; erodes trust in government; generates extreme pressure, on an intense, short time frame, for government to act to stabilize the situation; and triggers vocal public outcry.

One such example is the Three Mile Island accident, which created significant public turbulence and fear – an incident that still has a profound impact on how we view nuclear power. Three Mile Island set U.S. nuclear policy on a completely different course and halted new construction of nuclear plants for forty years, up to the present day.

For a covert state actor that seeks to cripple our society, embarrass the political leadership, change policy, and project to the world that we cannot defend ourselves, environmental damage is inviting. An attack on the environment feels, for the general public, closer and scarier than a dozen servers malfunctioning in a server farm. We all depend on clean drinking water and non-toxic air. Cyberattacks on these fundamentals of life could create panic and desperation in the public – even among citizens who are not directly affected.

It is crucial for cyber resilience to look beyond the information systems. The societal effect is embedded in the secondary and tertiary effects that need to be addressed, understood and, to the limit of what we can do, mitigated. Cyber resilience goes beyond the digital realm.
