The evaporated OODA-loop

The accelerated execution of cyber attacks, and an increased ability to identify vulnerabilities for exploitation at machine speed, compress the time window cybersecurity management has to address unfolding events. In reality, we assume there will be time to lead, assess, and analyze, but that window might be closing rapidly. It is time to face the issue of accelerated cyber engagements.

If there is limited time to lead, how do you ensure that you can execute a defensive strategy? How do we launch countermeasures at a speed beyond human ability and comprehension? If you don’t have time to lead, the alternative is to preauthorize. In the early days of the Cold War, war planners and strategists who were used to having days to react to events suddenly faced ICBMs that forced decisions within minutes. The solution? Preauthorization. The analogy between how the nuclear threat was addressed and cybersecurity works to a degree – but we have to recognize that the number of possible scenarios in cybersecurity could run into the hundreds, so we need to prioritize.

The cybersecurity preauthorization process would require an understanding of likely scenarios and of how events would unfold within them. The weaknesses in preauthorization are several. First, the scenarios we create are limited because they are built on how we perceive our own system environment. This is exemplified by the old saying: “What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.”

The creation of scenarios as a foundation for preauthorization will be laden with biases, assumptions that some areas are secure when they are not, and an inability to see the attack vectors an attacker sees. So the major challenge in considering preauthorization is to create scenarios that are representative of potential outcomes.

One way is to look at attack strategies used in earlier incidents. This limits the scenarios to what has already happened to others, but it can serve as a base to which additional scenarios are added. The MITRE ATT&CK Navigator provides an excellent tool to simulate and create attack scenarios that can be a foundation for preauthorization. Over time, artificial intelligence will become an integrated part of offloading decision-making, but we are not there yet. In the near future, artificial intelligence can cover parts of the managerial spectrum, increasing the human ability to act in very brief time windows.
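As a minimal sketch of how preauthorization might be encoded – the pairings and response actions below are illustrative assumptions, not a vetted playbook – consider a lookup table keyed on MITRE ATT&CK technique IDs, with anything unmapped escalated to a human:

```python
# Hypothetical sketch: preauthorized responses keyed on MITRE ATT&CK technique IDs.
# The IDs are real ATT&CK identifiers, but the pairings and actions are illustrative;
# a real playbook must reflect the organization's risk appetite and legal review.

PREAUTHORIZED_RESPONSES = {
    "T1566": "quarantine_message_and_alert",  # Phishing
    "T1110": "lock_account_and_require_mfa",  # Brute Force
    "T1486": "isolate_host_from_network",     # Data Encrypted for Impact
}

def respond(technique_id: str) -> str:
    """Return the preauthorized action, or hand off to a human analyst."""
    return PREAUTHORIZED_RESPONSES.get(technique_id, "escalate_to_analyst")

if __name__ == "__main__":
    for observed in ("T1566", "T1110", "T1071"):  # T1071 has no preauthorization
        print(observed, "->", respond(observed))
```

The design point is the default: anything outside the preauthorized scenarios falls back to human deliberation, which is exactly the path that shrinking time windows put under pressure.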

The second weakness is preauthorization’s vulnerability to probing and reverse engineering. Cybersecurity is active 24/7/365, with numerous engagements ongoing at any given time. Over time, and using machine learning, automated attack mechanisms could learn how to avoid triggering preauthorized responses by probing the defenses and reverse-engineering ways past the preauthorized controls.

So there is no easy road forward but instead a tricky path that requires clear objectives, alignment with risk management and its risk appetite, and an acceptance that the final approach to addressing the increased velocity of attacks might not be perfect. The alternative – not addressing the accelerated execution of attacks – is not a viable option. That would hand the initiative to the attacker and expose the organization to uncontrolled risks.

Repeatedly through the last two years, I have read references to the OODA-loop and the utility of the OODA concept for cybersecurity. The OODA-loop resurfaces in cybersecurity and information security managerial approaches as a structured way to address unfolding events. The OODA (Observe, Orient, Decide, Act) loop, developed by John Boyd in the 1960s, follows those four steps: you observe the events unfolding, you orient your assets at hand to address the events, you decide on a feasible approach, and you act.

The OODA-loop has been a central concept in cybersecurity over the last decade, as it is seen as a vehicle to address what attackers do, when, and where – and what you should do and where it is most effective. The mantra has been “you need to get inside the attacker’s OODA-loop.” The OODA-loop is used as a way to understand the adversary and tailor your own defensive actions.

Retired Army Colonel Tom Cook, former research director for the Army Cyber Institute at West Point, and I wrote a 2017 IEEE article titled “The Unfitness of Traditional Military Thinking in Cyber” questioning the validity of using the OODA-loop in cyber when events unfold faster and faster. Today, in 2020, the validity of the OODA-loop in cybersecurity is on the brink of evaporating due to the increased speed of attacks. The time needed to observe and assess, direct resources, make decisions, and take action will be too long to muster a successful cyber defense.

Attacks occurring at computational speed worsen the inability to assess and act, and the increasingly shortened time frames likely to be found in future cyber conflicts will disallow any significant, timely human deliberation.

Moving forward

I have no intention of being a narrative impossibilist who presents challenges with no solutions, so the current way forward is preauthorization. In the near future, the human ability to play an active role in rapid engagements will be supported by artificial intelligence decision-making that executes the tactical movements. The human mind will still be in charge of the operational decisions for several reasons – control, the larger picture, strategic implementation, and intent. For cybersecurity, it is pivotal for the next decade to be able to operate with a decreasing time window to act.

Jan Kallberg, Ph.D.

For ethical artificial intelligence, security is pivotal

 

The market for artificial intelligence is growing at an unprecedented speed, not seen since the introduction of the commercial Internet. The estimates vary, but the global AI market is assumed to grow 30 to 60 percent per year. Defense spending on AI projects is increasing at an even higher rate when we add wearable AI and systems that depend on AI. Defense investments such as augmented reality, automated target recognition, and tactical robotics would not advance at today’s rate without the presence of AI to support the realization of these concepts.

The beauty of the economy is responsiveness. With an identified “buy” signal, the market works to satisfy the buyer’s need. Powerful buy signals lead to rapid development, deployment, and roll-out of solutions, because time to market matters.

My concern is based on earlier analogies where time to market prevailed over conflicting interests: the first years of the commercial internet, the introduction of remote control for supervisory control and data acquisition (SCADA) systems and manufacturing, and the rapid growth of smartphone apps. In each of these cases, security was not the first thing on the developer’s mind. Time to market was the priority. This exposure increases with the economically sound pursuit of commercial off-the-shelf (COTS) products, as sensors, chipsets, functions, electric controls, and storage devices can be bought on the civilian market for a fraction of the cost. These COTS products cut costs, give the American people more defense and security for the money, and shorten the development and deployment cycle.

The Department of Defense has adopted five ethical principles for the department’s future utilization of AI: responsible, equitable, traceable, reliable, and governable. The common denominator in all five principles is cybersecurity. If the cybersecurity of the AI application is inadequate, these five adopted principles can be jeopardized and no longer steer the DOD AI implementation.

The future AI implementation increases the attack surface radically, and of particular concern is the ability to detect manipulation of the processes, because, for the operators, the underlying AI processes are not clearly understood or monitored. A system that detects targets from images or from a streaming video capture, where AI is used to identify target signatures, will generate decision support that can lead to the destruction of those targets. The targets are engaged and neutralized. One of the ethical principles for AI is “responsible.” How do we ensure that the targeting is accurate? How do we safeguard against a corrupted algorithm, or against sensors being tampered with to produce spurious data? It becomes a matter of security.
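One partial but concrete safeguard is integrity verification of the deployed model artifact itself. The sketch below is a minimal illustration – the file name and the idea of a trusted hash recorded at release time are assumptions, not a DOD procedure – and it only catches a swapped or corrupted model file, not tampered sensor inputs:

```python
# Minimal sketch: detect tampering with a deployed model file by comparing its
# SHA-256 digest against a trusted value recorded when the model was released.
# File name and reference digest are hypothetical. This catches substitution or
# corruption of the artifact, not adversarial manipulation of live sensor data.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, trusted_digest: str) -> bool:
    """True only if the on-disk model matches the release-time digest."""
    return sha256_of(path) == trusted_digest

# Hypothetical usage: refuse to load and alert the operator on mismatch.
# if not verify_model(Path("target_recognition.onnx"), TRUSTED_DIGEST):
#     raise RuntimeError("Model integrity check failed - do not engage targets.")
```

A check like this addresses only one slice of “reliable”; the harder problem raised above – spurious data feeding an intact algorithm – has no equally simple control.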

In a larger conflict, where ground forces are not able to inspect the effects on the ground, the feedback loop that invalidates the decisions supported by AI might not reach the operators for weeks. Or it might surface only after the conflict is over. A rogue system can likely produce spurious decision support for longer than we are willing to admit.

Of the five principles, “equitable” is the area of highest human control. Even if embedded biases in a process are hard to detect, controlling them is within our reach. “Reliable” relates directly to security because it requires that the systems maintain confidentiality, integrity, and availability.

If the principle “reliable” requires cybersecurity vetting and testing, we have to realize that these AI systems are part of complex technical structures with a broad attack surface. If the principle “reliable” is jeopardized, then “traceable” becomes problematic, because if the integrity of AI is questionable, it is not a given that “relevant personnel possess an appropriate understanding of the technology.”

The principle “responsible” can still be valid, because deployed personnel make sound and ethical decisions based on the information provided, even if a compromised system feeds spurious information to the decision-maker. The principle “governable” acts as a safeguard against “unintended consequences.” The unknown is the time from when unintended consequences occur until the operators of the compromised system understand that the system is compromised.

It is evident when a target that should be hit is repeatedly missed: the effects can be observed. If the effects cannot be observed, it is no longer a given that “unintended consequences” are identified, especially in a fluid multi-domain battlespace. A compromised AI system for target acquisition can mislead targeting, acquiring hidden non-targets that waste resources and weapon system availability while exposing friendly forces to detection. The time to detect such a compromise can be significant.

My intention is to show that cybersecurity is pivotal for AI success. I do not doubt that AI will play an increasing role in national security. AI is a top priority for the United States and our friendly foreign partners, but potential adversaries will make the pursuit of ways to compromise these systems a top priority of their own.

What COVID-19 can teach us about cyber resilience

Dr. Jan Kallberg and Col. Stephen Hamilton
March 23, 2020

The COVID pandemic is a challenge that creates health risks for Americans and will have long-lasting effects. For many, this is a tragedy – a threat to life, health, and finances. What draws our attention is what COVID-19 has meant for our society and the economy, and how, in an unprecedented way, families, corporations, schools, and government agencies quickly had to adjust to a new reality. Why does this matter from a cyber perspective?

COVID-19 has created increased stress on our logistics, digital, public, and financial systems, and this could in fact resemble what a major cyber conflict would mean to the general public. It is also essential to assess what matters to the public during this time. COVID-19 has created widespread disruption of work, transportation, logistics, and the distribution of food and necessities to the public, and increased stress on infrastructures, from Internet connectivity to just-in-time delivery. It has unleashed abnormal behaviors.

A potential adversary will likely not have the ability to take down an entire sector of our critical infrastructure, or business ecosystem, for several reasons. First, awareness of and investments in cybersecurity have drastically increased over the last two decades. This, in turn, has reduced the number of single points of failure and increased the number of built-in redundancies, as well as the ability to maintain operations in a degraded environment.

Second, the time and resources required to create what was once referred to as a “Cyber Pearl Harbor” are beyond the reach of any near-peer nation. Decades of advancement – increasing resilience, adding layered defenses, and improving the ability to detect intrusions – have made it significantly harder to execute an attack of that size.

Instead, an adversary will likely focus its primary cyber capacity on what matters for its national strategic goals – for example, delaying the movement of the main U.S. force from the continental United States to theater by using cyberattacks on utilities, airports, railroads, and ports. That strategy has two clear goals: to deny the United States and its allies options in theater due to a lack of strength, and to strike a significant blow to the United States and allied forces early in the conflict. Forced to choose between delaying U.S. forces’ arrival in theater, creating disturbances in thousands of grocery stores, or wreaking havoc on office workers’ commutes, an adversary will likely prioritize what matters to its military operations first.

That said, in a future conflict, the domestic businesses, local governments, and services on which the general public relies will be targeted by cyberattacks. These second-tier operations will likely exploit vulnerabilities at scale in our society, but with less complexity, mainly exploiting targets of opportunity.

The similarity between the COVID-19 outbreak and a cyber campaign lies in the disruption of logistics and services, how the population reacts, and the stress it puts on law enforcement and first responders. These events can lead to questions about the ability to maintain law and order and to prevent the destabilization of a distribution chain built for just-in-time operations, with minimal margins of deviation before it falls apart.

The nature of these second-tier attacks is unsystematic and opportunity-driven. The goal is to pursue disruption, confusion, and stress. An authoritarian regime would likely not be hindered by international norms from attacking targets that jeopardize public health and create risks for the general population. Environmental hazards released by these attacks can lead to the risk of loss of life and a potentially dramatic long-term loss of quality of life for citizens. If the population questions the government’s ability to protect it, the government’s legitimacy and authority will suffer. Health and environmental risks tend to appeal not only to the general public’s logic but also to its emotions, particularly uncertainty and fear. This can be a tipping point if the population fears the future to the point that it loses confidence in the government.

Therefore, as we see COVID-19 unfold, it could give us insights into how a broad cyber-disruption campaign would affect the U.S. population. Terrorism experts examine two effects of an attack – the attack itself and how the target population reacts.

Likely, our potential adversaries are carefully studying how our society reacts to COVID-19 – for example, whether the population obeys the government, whether our government maintains control and enforces its agenda, and whether the nation was prepared.

Lessons learned from COVID-19 are applicable to strengthening U.S. cyber defense and resilience. These unfortunate events increase our understanding of how a broad cyber campaign can disrupt and degrade the quality of life, government services, and business activity.

Why Iran would avoid a major cyberwar

The Iranian military apparatus is a mix of traditional military defense, crowd control, political suppression, and shows of force for generating artificial internal authority in the country. If command and control evaporate in the military apparatus, it also removes the ability to control the population to the degree the Iranian regime has managed until now. In that light, what is in it for Iran to launch a massive cyber engagement against the free world? What can they win?

Demonstrations in Iran last year and signs of the regime’s demise raise a question: What would the strategic outcome be of a massive cyber engagement with a foreign country or alliance?

Authoritarian regimes traditionally put survival first. Those that do not prioritize regime survival tend to collapse. Authoritarian regimes are always vulnerable because they are illegitimate. There will always be loyalists who benefit from the system, but for a significant part of the population, the regime is not legitimate. The regime only exists because it suppresses the popular will and uses force against any opposition.

In 2016, I wrote an article in the Cyber Defense Review titled “Strategic Cyberwar Theory – A Foundation for Designing Decisive Strategic Cyber Operations.” The utility of strategic cyberwar is linked to the institutional stability of the targeted state. If a nation is destabilized, it can be subdued to foreign will, and the current regime’s ability to execute its strategy evaporates due to the loss of internal authority and capability. The theory’s predictive power is most potent when applied to theocracies, authoritarian regimes, and dysfunctional experimental democracies, because their common tenet is weak institutions.

Fully functional democracies, on the other hand, have a definite advantage because these advanced democracies have stability and institutions accepted by their citizenry. Nations openly adversarial to democracies are, in most cases, totalitarian states that are close to entropy. The reason these totalitarian states remain under their current regimes is the suppression of the popular will. Any removal of the pillars of repression – destabilizing the regime design and the institutions that make it functional – will release the popular will.

A destabilized — and possibly imploding — Iranian regime is a more tangible threat to the ruling theocratic elite than any military systems being hacked in a cyber interchange. Dictators fear the wrath of the masses. Strategic cyberwar theory seeks to look beyond the actual digital interchange, the cyber tactics, and instead provide predictive power for how a decisive cyber conflict should be conducted in pursuit of national strategic goals.

If the free world uses its cyber abilities, it is far more likely that Iran itself gets destabilized and falls into entropy and chaos, which could lead to major domestic bloodshed when the victims of 40 years of violent suppression decide the fate of their oppressors. That would not be the intent of the free world; it would simply be a consequence of the way the Iranian totalitarian regime has acted toward its own people. The risks for the Iranians are far more significant than the potential upside of being able to inflict damage on the free world.

That doesn’t mean Iranians would not try to hack systems in foreign countries they consider adversarial. Because of the Iranian regime’s constant need to feed its internal propaganda machinery with “victories,” such hacking is more likely to take place on a smaller scale, as uncoordinated low-level attacks exploiting opportunities they come across. In my view, far more dangerous are advanced non-Iranian nation-state cyber actors impersonating Iranian hackers, mounting aggressive preplanned attacks under the cover of a spoofed identity and transferring the blame, fueled by recent tensions.

A new mindset for the Army: silent running

//I wrote this article together with Colonel Stephen Hamilton and it was published in C4ISRNET//

In the past two decades, the U.S. Army has continually added new technology to the battlefield. While this technology has enhanced the ability to fight, it has also greatly increased the ability for an adversary to detect and potentially interrupt and/or intercept operations.

The adversary in the future fight will have a more technologically advanced ability to sense activity on the battlefield – light, sound, movement, vibration, heat, electromagnetic transmissions, and other quantifiable metrics. This is a fundamental and accepted assumption. The future near-peer adversary will be able to sense our activity in an unprecedented way due to modern technologies. It is not only driven by technology but also by commoditization; sensors that cost thousands of dollars during the Cold War are available at a marginal cost today. In addition, software defined radio technology has larger bandwidth than traditional radios and can scan the entire spectrum several times a second, making it easier to detect new signals.

We turn to the thoughts of Bertrand Russell in his version of Occam’s razor: “Whenever possible, substitute constructions out of known entities for inferences to unknown entities.” Occam’s razor is named after the medieval philosopher and friar William of Ockham, who stated that under uncertainty, the fewer assumptions, the better, and who preached pursuing simplicity by relying on the known until simplicity could be traded for greater explanatory power. So, staying with the limited assumption that the future near-peer adversary will be able to sense our activity at a previously unseen level, we will, unless we change our default modus operandi, be exposed to increased threats and risks. The adversary’s acquired sensor data will be utilized for decision-making, direction finding, and engaging friendly units with all the means available to the adversary.

The Army mindset must change to mirror the Navy’s tactic of “silent running” used to evade adversarial threats. While there are recent advances in sensor counter-measure techniques, such as low probability of detection and low probability of intercept, silent running reduces the emissions altogether, thus reducing the risk of detection.

In the U.S. Navy submarine fleet, silent running is a stealth mode utilized over the last 100 years following the introduction of passive sonar in the latter part of the First World War. The concept is to avoid discovery by the adversary’s passive sonar by seeking to eliminate all unnecessary noise. The ocean is an environment where hiding is difficult, similar to the Army’s future emission-dense battlefield.

However, on the battlefield, emissions can be managed in order to reduce noise feeding into the adversary’s sensors. A submarine in silent running mode will shut down non-mission essential systems. The crew moves silently and avoids creating any unnecessary sound, in combination with a reduction in speed to limit noise from shafts and propellers. The noise from the submarine no longer stands out. It is a sound among other natural and surrounding sounds which radically decreases the risk of detection.

From the Army’s perspective, the adversary’s primary objective when entering the fight is to disable command and control, elements of indirect fire, and enablers of joint warfighting. All of these units are highly active in the electromagnetic spectrum. So how can silent running be applied for a ground force?

If we transfer silent running to the Army, the same tactic can be as simple as not utilizing equipment just because it is fielded to the unit. If generators go offline when not needed, then sound, heat, and electromagnetic noise are reduced. Radios that are not mission-essential are switched to specific transmission windows or turned off completely, which limits the risk of signal discovery and potential geolocation. In addition, radios are used at the lowest power that still provides acceptable communication, as opposed to unnecessarily high power, which increases the range of detection. The bottom line: a paradigm shift is needed where we seek to emit a minimum number of detectable signatures, emissions, and radiation.
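The power-versus-detectability trade is easy to put rough numbers on. The back-of-envelope sketch below assumes free-space propagation and unity-gain antennas – real terrain, antenna patterns, and receiver processing change the absolute figures, and all parameter values are illustrative – but the scaling holds: every 6 dB cut in transmit power roughly halves the free-space detection range.

```python
# Back-of-envelope sketch (free-space path loss only; illustrative values).
# Shows how far away an intercept receiver of a given sensitivity can hear a
# transmitter, relative to a high-power baseline.
import math

C = 3e8  # speed of light, m/s

def detection_range_m(tx_power_dbm: float, freq_hz: float, sensitivity_dbm: float) -> float:
    """Distance at which received power drops to the receiver's sensitivity,
    assuming free-space path loss and unity-gain antennas."""
    link_margin_db = tx_power_dbm - sensitivity_dbm
    return (C / (4 * math.pi * freq_hz)) * 10 ** (link_margin_db / 20)

FREQ = 50e6           # 50 MHz tactical VHF (assumed)
SENSITIVITY = -100.0  # assumed intercept-receiver sensitivity, dBm

baseline = detection_range_m(47.0, FREQ, SENSITIVITY)  # 50 W transmitter
for power_dbm, label in ((47.0, "50 W"), (37.0, "5 W"), (30.0, "1 W")):
    ratio = detection_range_m(power_dbm, FREQ, SENSITIVITY) / baseline
    print(f"{label:>4}: {ratio:.2f}x the baseline detection range")
```

Under these assumptions, dropping from 50 W to 5 W cuts the free-space detection range to roughly a third; terrain masking and shorter transmission windows only improve on that.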

The submarine becomes undetectable as its noise level diminishes to the level of natural background noise, which enables it to hide within the environment. Ground forces will still be detectable in some form – the future density of sensors and increased adversarial ability over time would ensure that – but one goal is to blur the adversary’s situational picture and deny it the ability to accurately assess the function, size, position, and activity of friendly units. The future fluid MDO (multi-domain operations) battlefield would also increase the challenge for the adversary compared to a more static battlefield with a clear separation between friend and foe.

As a preparation for a future near-peer fight, it is crucial to have an active mindset on avoiding unnecessary transmissions that could feed adversarial sensors with information that can guide their actions. This might require a paradigm shift, where we are migrating from an abundance of active systems to being minimalists in pursuit of stealth.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. Col. Stephen Hamilton is the technical director of the Army Cyber Institute at West Point and an academy professor at the U.S. Military Academy. The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy, or the Department of Defense.

 

 

From the Adversary’s POV – Cyber Attacks to Delay CONUS Forces Movement to Port of Embarkation Pivotal to Success

We tend to see vulnerabilities and concerns about cyber threats to critical infrastructure from our own viewpoint. But an adversary will assess where and how a cyberattack on America will benefit the adversary’s strategy. I am not convinced attacks on critical infrastructure, in general, have the payoff that an adversary seeks.

The American reaction to Sept. 11 and any attack on U.S. soil gives a hint to an adversary that attacking critical infrastructure to create hardship for the population might work contrary to the intended softening of the will to resist foreign influence. It is more likely that attacks that affect the general population instead strengthen the will to resist and fight, similar to the British reaction to the German bombing campaign, the Blitz, in 1940. We can’t rule out attacks that affect the general population, but no adversary has enough offensive capability to attack all 16 sectors of critical infrastructure and gain strategic momentum.

An adversary has limited cyberattack capabilities and needs to prioritize cyber targets that are aligned with its overall strategy. Trying to see what options, opportunities, and directions an adversary might take requires that we shift our point of view to the adversary’s outlook. One of my primary concerns is pinpointed cyberattacks disrupting and delaying the movement of U.S. forces to theater.

Seen from the potential adversary’s point of view, bringing the cyber fight to our homeland – think delaying the transportation of U.S. forces to theater by attacking infrastructure and transportation networks from bases to the port of embarkation – is a low-investment/high-return operation. Why does it matter?

First, the bulk of U.S. forces are not in the region where the conflict erupts. Instead, they are mainly based in the continental United States and must be transported to theater. From an adversary’s perspective, delaying the U.S. forces’ arrival might be the only opportunity. If the adversary can exploit operational and tactical superiority in the initial phase of the conflict by swiftly engaging our local allies and U.S. forces in the region, territorial gains can be made that are too costly to reverse later, leaving the adversary in a strong bargaining position.

Second, even if only partially successful, cyberattacks that delay U.S. forces’ arrival will create confusion. Such attacks would mean units might arrive at different ports, at different times and with only a fraction of the hardware or personnel while the rest is stuck in transit.

Third, an adversary that is convinced before a conflict that it can significantly delay the arrival of U.S. units from the continental U.S. to a theater will make a different assessment of the risks of a fait accompli attack. Training and Doctrine Command defines such an attack as one that “is intended to achieve military and political objectives rapidly and then to quickly consolidate those gains so that any attempt to reverse the action by the U.S. would entail unacceptable cost and risk.” Even if an adversary is strategically inferior in the long term, the window of opportunity created by the assumed delay in moving units from the continental U.S. to theater might be enough for it to take military action seeking a successful fait accompli attack.

In designing a cyber defense for critical infrastructure, it is vital that what matters to the adversary is part of the equation. In peacetime, cyberattacks probe systems across society – waterworks, schools, social media, retail, all the way to sawmills. Cyberattacks in wartime will have more explicit intent and seek a specific gain that supports the strategy. Therefore, it is essential to identify and prioritize the critical infrastructure that is pivotal in war, instead of attempting to spread out the defense to cover everything touched in peacetime.

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Our Dependence on the top 2% Cyber Warriors

As an industrial nation transitioning to an information society with digital conflict, we tend to see the technology as the weapon. In the process, we ignore the fact that few humans can have a large-scale operational impact.

But we underestimate the importance of applicable intelligence – the intelligence of how to apply things in the right order. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are mostly publicly available; anyone can download them from the Internet and use them. The weaponization of the tools occurs when they are used by someone who understands how to use them in the right order.

In 2017, Gen. Paul Nakasone said “our best [coders] are 50 or 100 times better than their peers,” and asked “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” The success of cyber operations is highly dependent, not on tools, but upon the super-empowered individual that Nakasone calls “the 50-x coder.”

There have always been exceptional individuals who have an irreplaceable ability to see the challenge early on, create a technical solution, and know how to play it for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of artificial intelligence increases the reliance on these highly capable individuals, because someone must set the rules and point out the trajectory for artificial intelligence at its initiation.

But this also raises a series of questions. Even if identified as a weapon, how do you make a human mind “classified?” How do we protect these high-ability individuals that are weapons in the digital world?

These minds are different because they see an opportunity to exploit in a digital fog of war when others don’t see it. They address problems unburdened by traditional thinking, in innovative ways, maximizing the dual-purpose of digital tools, and can generate decisive cyber effects.

It is this applicable intelligence that creates the process, that understands the application of tools, and that turns simple digital software into digitally lethal weapons. In the analog world, it is as if you had individuals with the supernatural ability to create a hypersonic missile from materials readily available at Kroger or Albertsons. These individuals are strategic national security assets for the nation.

Systemically, we struggle to see humans as the weapon, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed.

For America, technological wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies, followed by the Erie Canal, the manufacturing era, the moon landing and all the way to the autonomous systems, drones, and robots. In a default mindset, there is always a tool, an automated process, a software, or a set of technical steps, that can solve a problem or act. The same mindset sees humans merely as an input to technology, so humans are interchangeable and can be replaced.

Super-empowered individuals are not interchangeable and cannot be replaced, unless we want to be stuck in a digital war. Artificial intelligence and machine learning support the intellectual endeavor to cyber defend America, but humans set the strategy and direction.

It is time to see weaponized minds for what they are: not dudes and dudettes, but strike capabilities.

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Time – and the lack thereof

For cybersecurity, it is pivotal for the next decade to be able to operate with a decreasing time window to act.

The accelerated execution of cyber attacks, and an increased ability to identify vulnerabilities for exploitation at machine speed, compress the time window cybersecurity management has to address unfolding events. In reality, we assume there will be time to lead, assess, and analyze, but that window might be closing. It is time to raise the issue of accelerated cyber engagements.

Limited time to lead

If there is limited time to lead, how do you ensure that you can execute a defensive strategy? How do we launch countermeasures at a speed beyond human ability and comprehension? If you don’t have time to lead, the alternative is to preauthorize.

In the early days of the Cold War, war planners and strategists who were used to having days to react to events suddenly faced ICBMs that forced decisions within minutes. The solution? Preauthorization. The analogy between how the nuclear threat was addressed and cybersecurity works to a degree – but we have to recognize that the number of possible scenarios in cybersecurity could run into the hundreds, so we need to prioritize.

The cybersecurity preauthorization process would require an understanding of likely scenarios and of how events would unfold within them. The weaknesses in preauthorization are several. First, the scenarios we create are limited because they are built on how we perceive our own system environment. This is exemplified by the old saying: “What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.”

The creation of scenarios as a foundation for preauthorization will be laden with biases, assumptions that some areas are secure when they are not, and an inability to see the attack vectors an attacker sees. So the major challenge in considering preauthorization is to create scenarios that are representative of potential outcomes.

One way is to look at attack strategies used in earlier incidents. This limits the scenarios to what has already happened to others, but it can serve as a base to which additional scenarios are added. The MITRE ATT&CK Navigator provides an excellent tool to simulate and create attack scenarios that can be a foundation for preauthorization. Over time, artificial intelligence will become an integrated part of offloading decision-making, but we are not there yet. In the near future, artificial intelligence can cover parts of the managerial spectrum, increasing the human ability to act in very brief time windows.

The second weakness is preauthorization’s vulnerability to probing and reverse engineering. Cybersecurity is active 24/7/365, with numerous engagements ongoing at any given time. Over time, and using machine learning, automated attack mechanisms could learn how to avoid triggering preauthorized responses by probing the defenses and reverse-engineering ways past the preauthorized controls.
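To make the probing risk concrete, here is a toy simulation – the numbers, thresholds, and the jitter countermeasure are hypothetical illustrations, not a description of any real product: an attacker that ramps up activity until an automated control fires can recover a fixed trigger threshold exactly, while a randomized trigger point leaves it with only a noisy estimate.

```python
# Toy simulation (illustrative only): reverse-engineering a preauthorized control.
# A fixed trigger threshold is recovered exactly by incremental probing; adding
# jitter to the trigger point denies the attacker that precision.
import random

FIXED_THRESHOLD = 120  # e.g., failed logins per minute that triggers an auto-block

def fixed_control(rate: int) -> bool:
    return rate >= FIXED_THRESHOLD

def jittered_control(rate: int) -> bool:
    # The trigger point varies per observation window.
    return rate >= FIXED_THRESHOLD + random.randint(-30, 30)

def probe(control, max_rate: int = 300):
    """Attacker slowly increases activity until the control fires."""
    for rate in range(1, max_rate):
        if control(rate):
            return rate  # what the attacker "learns"
    return None

random.seed(1)
print("fixed control, learned threshold:", probe(fixed_control))
print("jittered control, five probe runs:", [probe(jittered_control) for _ in range(5)])
```

The fixed control leaks its threshold in a single probing run; against the jittered control, each run yields a different value, so an automated attacker can only stay safely below the lowest trigger it has seen – at the cost, for the defender, of a less predictable response.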

So there is no easy road forward but instead a tricky path that requires clear objectives, alignment with risk management and its risk appetite, and an acceptance that the final approach to addressing the increased velocity of attacks might not be perfect. The alternative – not addressing the accelerated execution of attacks – is not a viable option. That would hand the initiative to the attacker and expose the organization to uncontrolled risks.

Bye-bye, OODA-loop

Repeatedly through the last year, I have read references to the OODA-loop and the utility of the OODA concept for cybersecurity. The OODA-loop resurfaces in cybersecurity and information security managerial approaches as a structured way to address unfolding events. The OODA (Observe, Orient, Decide, Act) loop, developed by John Boyd in the 1960s, follows those four steps: you observe the events unfolding, you orient your assets at hand to address the events, you decide on a feasible approach, and you act.

The OODA-loop has been a central concept in cybersecurity over the last decade, as it is seen as a vehicle to address what attackers do, when, and where – and what you should do and where it is most effective. The mantra has been “you need to get inside the attacker’s OODA-loop.” The OODA-loop is used as a way to understand the adversary and tailor your own defensive actions.

Retired Army Colonel Tom Cook, former research director for the Army Cyber Institute at West Point, and I wrote a 2017 IEEE article titled “The Unfitness of Traditional Military Thinking in Cyber” questioning the validity of using the OODA-loop in cyber when events unfold faster and faster. Today, in 2019, the validity of the OODA-loop in cybersecurity is on the brink of evaporating due to the increased speed of attacks. The time needed to observe and assess, direct resources, make decisions, and take action will be too long to muster a successful cyber defense.

Attacks occurring at computational speed worsen the inability to assess and act, and the increasingly shortened time frames likely to be found in future cyber conflicts will disallow any significant, timely human deliberation.

Moving forward

I have no intention of being a narrative impossibilist who presents challenges with no solutions, so the current way forward is preauthorization. In the near future, the human ability to play an active role in rapid engagements will be supported by artificial intelligence decision-making that executes the tactical movements. The human mind will still be in charge of the operational decisions for several reasons – control, the larger picture, strategic implementation, and intent. For cybersecurity, it is pivotal for the next decade to be able to operate with a decreasing time window to act.

Jan Kallberg, Ph.D.

Reassessed incentives for innovation support transformation

There is no alternative way to ensure victory in the future fight than to innovate, implement the advances, and scale innovation. To use Henry Kissinger’s words: “The absence of alternatives clears the mind marvelously.”

Innovative environments are not created overnight. The establishment of the right culture is based on mutual trust, a trust that allows members to be vulnerable and take chances. Failure is a milestone to success.

Important characteristics of an innovative environment are competence, expertise, passion, and a shared vision. Such an environment is populated with individuals who are in it for the long run and don’t quit until they make advances. Individuals who hunger for success and are determined to work toward excellence are all around us. For the defense establishment, the core challenge is to reassess the incentives provided, so that ambition and intellectual assets are directed toward innovation and the future fight.

Edward N. Luttwak noted that strategy only matters if we have the resources to execute it. Embedded in Luttwak’s statement is the general condition that if we are unable to identify, understand, incentivize, activate, and utilize our resources, the strategy does not matter. This leads to the questions: Who will be the innovator? How does the Department of Defense create a broad, innovative culture? Is innovation outsourced to think tanks and experimental labs, or is it entrusted to individuals who become experts in their subfields and drive innovation where they stand? Or do these models run in parallel? In general, are we ready to expose ourselves to the vulnerability of failure, and if so, what is an acceptable failure? These are questions that need to be addressed in the process of transformation.

Structural frameworks in place today could hinder innovation – for example, the traditional personnel model of the Defense Officer Personnel Management Act (DOPMA). In theory, it is a form of the assembly line’s scientific management, Taylorism, where the officer is processed through the system to the highest level of her or his career potential. In reality, the financial incentives favor following the flowchart for promotion instead of staying at a point where you are passionate about making an improvement. If a transformation to an innovative culture is to be successful, the incentives need to be aligned with the overall mission objective.

Another example is government-sponsored university research. Even if funds are allocated in pursuit of mobilizing civilian intellectual torque to ensure innovation that benefits the warfighter, traditional research at a university has little incentive to support the transformation of the armed forces. The majority of academia, and the overwhelming majority of research universities, pursue DOD and government research grant opportunities as income, to gain resources to fund graduate students and facilities. Many of the sponsored research projects are basic research, and the results are made public, which somewhat defeats the purpose if you seek an innovative advantage, with limited support to the future fight. Universities can tailor their research to fit the funding opportunity, which is logical from their viewpoint, and often it is a tweak on research they are already doing that can be squeezed into a grant application.

Academics at universities seek tenure, promotion, and leverage in their fields, so government funding becomes a box to check for tenure, a demonstrated ability to attract external funding, and support for academic career progression. The incentives to support DOD innovation are suppressed by far stronger incentives for the researcher to gain personal career leverage at the university. In the future, it is likely more cost-effective to concentrate DOD-sponsored research projects at those universities that invest the time and effort to ensure that their research is DOD-relevant, operationally current, and supportive of the warfighter. Universities that align themselves with DOD objectives and deliver innovation for the future fight will also have a better understanding of what the future threat landscape looks like, and they are more likely to have an interface for quick dissemination of DOD needs. A realignment of incentives for sponsored research at universities creates an opportunity for those ready to support the future fight.

There is a need to look at the system level at how innovation is incentivized, to ensure that resources generate the effects sought. America has talent, ambition, a tradition of fearless engineering, and grit – the correct incentives will unleash that innovative power.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

How the Founding Fathers helped make the US cyber-resilient

The Founding Fathers have done more for U.S. strategic cyber resiliency than any modern initiative. Their contribution is a stable society that can absorb attacks without falling into chaos, mayhem, and entropy. Stable countries have a significant advantage in future nation-state cyber-information conflicts. If nation-states seek to conduct decisive cyberwar, victory will not come from anecdotal exploits, but from launching systematic, destabilizing attacks on the targeted society that bring it down to the point that it is subject to foreign will. Societal stability is not created overnight; it is the product of decades and even centuries of good government, civil liberties, fairness, and trust-building.

Why does it matter? Because the strategic tools to bring down and degrade a society will not provide the effects sought. For an adversary seeking strategic advantage by attacking U.S. critical infrastructure, the risk of retribution can outweigh the benefit.

The 2003 Northeast blackout is an example of how the American population would likely react if a significant share of critical infrastructure were degraded by hostile cyberattacks. Instead of imploding into chaos and looting, the affected population acted orderly and helped strangers, demonstrating a high degree of resiliency. The reason Americans act orderly and have such resiliency is a product of how we have designed our society, which leads back to the Founding Fathers. Americans are invested in the success of their society; therefore, they do not turn on each other in a crisis.

Historically, the tactic of attacking a stable society by generating hardship has failed more than it has succeeded. One example is the Blitz of 1940, the German bombing of metropolitan areas and infrastructure, which only hardened British resistance against Nazi Germany. After Dunkirk, several British parliamentarians were in favor of a separate peace with Germany. After the Blitz, British politicians were united against Germany, and Britain fought Nazi Germany single-handedly until the USSR and the United States entered the war.

A strategic cyber campaign will fail to destabilize the targeted society if the institutions remain intact following the assault or successfully operate in a degraded environment. From an American perspective, it is crucial for a defender to ensure the cyberattacks never reach the magnitude that forces society over the threshold to entropy. In America’s favor, the threshold is far higher than our potential adversaries’. By guarding what we believe in – fairness, opportunity, liberty, equality, and open and free democracy – America can become more resilient.

We generally underestimate how stable America is, especially compared to potential foreign adversaries. There is a deterrent embedded in that fact: the risks for an adversary might outweigh the potential gains.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.

At Machine Speed in Cyber – Leadership Actions Close to Nullified

In my view, one of the major weaknesses in cyber defense planning is the perception that there is time to lead a cyber defense while under attack. It is likely that a major attack will be automated and premeditated. If it is automated, the systems will execute the attacks at computational speed. In that case, no political or military leadership would be able to lead, for one simple reason – the attack has already happened before they can react.

A premeditated attack is planned for a long time, maybe years, and if automated, the execution of a massive number of exploits will be limited to minutes. Therefore, the future cyber defense would rely on components of artificial intelligence that can assess, act, and mitigate at computational speed. Naturally, this is a development that does not happen overnight.

In an environment where the actual digital interchange occurs at computational speed, the only thing the government can do is to prepare, give guidelines, set rules of engagement, disseminate knowledge to ensure a cyber-resilient society, and let the coders prepare the systems to survive in a degraded environment.

Another important factor is how these cyber defense measures can be reverse engineered and how visible they are in a pre-conflict probing wave of cyberattacks. If the preset cyber defense measures can be “measured up” early in a probing phase of a cyber conflict, it is likely that, through reverse engineering, the defense measures become a force multiplier for future attacks – instead of bulwarks against them.

So we enter the land of “damned if you do, damned if you don’t”: if we pre-stage the conflict with AI-supported decision systems that lead the cyber defense at computational speed, we are also vulnerable to being reverse engineered, and the artificial intelligence becomes tangible stupidity.

We are in the early dawn of cyber conflicts; we can see the silhouettes of what is coming, but one thing becomes very clear – the time factor. Politicians and military leadership will have no factual impact on the actual events in real time in conflicts occurring at computational speed, so the focus then has to be on the front end. The leadership is likely to have the highest impact by addressing what has to be done pre-conflict to ensure resilience when under attack.

Jan Kallberg

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.

Private Hackbacks can be Blowbacks

The demands for legalizing corporate hackbacks are growing – and there would be significant interest among private actors in utilizing hackback if it were lawful. Yet if private companies obtained the right to hack back legally, the risk of blowback would likely be more significant than the opportunity and potential gains. The proponents of private hackback tend to build their case on a set of assumptions. If these assumptions do not hold, private hackback likely becomes a federal problem through uncontrolled escalation and spillover from these private counterstrikes.

-The private companies can attribute.

The idea of legalizing hackback operations rests on the assumption that the defending company can attribute the initial attack with pinpoint precision. If a defending company is given the right to strike back, it is on the assumption that the counterstrike can determine beyond doubt which entity was the initial attacker. If attribution is not achieved with satisfactory granularity and precision, a right to cyber counterstrike would be a right to strike anyone based on suspicion of involvement. Very few private entities today can determine with high granularity who attacked them and trace back the attack so the counterstrike can be accurate. Given the lack of norms, a right to strike back even when the precision of the counterstrike is imperfect would increase entropy and deviation from emerging norms and international governance.

-The counterstriking corporations can engage a state-sponsored organization.

Things might spin out of control. The old small-unit tactics rule applies: anyone can open fire; only a genius can disengage unharmed. The counterstriking corporation may perceive that it can handle the adversary, believing it to be an underfunded group of college students hacking for fun – and later find out that it is a heavily funded and highly capable foreign state agency. The counterstriking company has limited means to determine, before a counterstrike, the actual size of the initial attacker and the full spectrum of resources available to it. A probing counterattack would not be enough to determine the operational strength, ability, and intent of the potential adversary. Embedded in the assumption that the counterstriking corporation can handle any adversary is the further assumption that there will be no uncontrolled escalation.

-The whole engagement is locked in between parties A and B.

If there is an assumption of no uncontrolled escalation, then a follow-up assumption is that the engagement creates a deterrence that prevents the initial attacker from continuing to attack. The defending company needs to be able to counterattack with a magnitude that deters the initial attacker from further attacks; once deterrence is established, the digital interchange will cease. The question is how to establish deterrence – and deterrence from which array of cyber operations – without causing any damage. If deterrence cannot be established, the interchange would likely lead to escalation, or settle into a strict tit-for-tat game without any decisive conclusion that continues until the initial attacker decides to end it.
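A toy iterated-game sketch can make the fragility of this assumption visible. The per-round probabilities below are illustrative assumptions, not estimates; the point is only that a small chance of escalation per round compounds over a long interchange.

    import random

    # Toy tit-for-tat interchange between initial attacker A and the
    # counterstriking company B. Probabilities are illustrative assumptions.

    random.seed(7)

    P_DETERRED = 0.05   # chance per round that the counterstrike deters A
    P_ESCALATE = 0.10   # chance per round the exchange spills beyond A and B

    def interchange(max_rounds: int = 100) -> str:
        for round_no in range(1, max_rounds + 1):
            # A strikes, B counterstrikes: a strict tit-for-tat exchange.
            if random.random() < P_DETERRED:
                return f"deterrence established in round {round_no}"
            if random.random() < P_ESCALATE:
                return f"uncontrolled escalation in round {round_no}"
        return "no decisive conclusion after 100 rounds"

    outcomes = [interchange() for _ in range(1000)]
    escalations = sum("escalation" in outcome for outcome in outcomes)
    print(f"{escalations}/1000 simulated interchanges escalated beyond the two parties")

With these assumed numbers, roughly two-thirds of the simulated interchanges end in escalation beyond the two parties rather than in deterrence, which is the heart of the blowback concern.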

-The initial attacker has no second strike option.

The assumption here is that the interchange will occur with a specific set of cyber weapons and aim points, so the interchange cannot lead to further damage: even if the initial attacker intended to rearrange the targets, aims, and potential impacts, there would be no option to do so. A new round of strikes would stay short of uncontrolled escalation only as long as the targeting remained within the same realm and values as the earlier strikes. In reality, the initial attacker can hold second-strike options that target unprecedented assets at its own discretion, and it is more likely that the initial attacker has second-strike options the initial target is unaware of at the moment of the counterstrike.

-The counterstriking company has no interests or assets in the initial attacker’s jurisdiction.

If a multinational company (MNC) counterstrikes a state agency or a state-sponsored attacker, the MNC could face the risk of repercussions if there are MNC assets in the jurisdiction of the initial attacker. Major MNCs have interests, subsidiaries, and assets in hundreds of jurisdictions; the Fortune 500 companies have assets in the US, China, Russia, India, and numerous other countries. The question is then: if MNC “A” counterstrikes a cyberattack from China, what are the risks for the subsidiary “A in China”? Related is the issue of improper attribution: if MNC “A” counterstrikes from the US against foreign digital assets that had no connection with the initial attack, that constitutes a new, unjustifiable, and illegal attack on foreign digital assets. The majority of the potential source countries for hacking attacks are totalitarian and authoritarian states. A totalitarian state can easily – it is well within its reach – switch domains and seize property, arrest innocent business travelers, and retaliate in other ways for a corporate hackback. I am not saying that we should let totalitarian regimes act any way they want – I am only saying that it is not for private corporations to engage and seek to resolve. Interacting with foreign governments is a government domain.

The idea of legalizing corporate hack backs could lead to increased distrust and entropy, and be counterproductive to the long-term goal of a secure and safe Internet.

Jan Kallberg, PhD

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy.

The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.

The Zero Domain – Cyber Space Superiority through Acceleration beyond the Adversary’s Comprehension

THE ZERO DOMAIN

In the upcoming Fall 2018 issue of the Cyber Defense Review, I present a concept – the Zero Domain. The Zero Domain concept is battlespace singularity through acceleration. There is a point along the trajectory of accelerated warfare where only one warfighting nation comprehends what is unfolding and sees the cyber terrain; it is an upper barrier for comprehension beyond which the acceleration makes the cyber engagement unilateral.

I intentionally use the term accelerated warfare because it implies a driver and a command of the events unfolding, even if only by one actor of the two, whereas hyperwar suggests events unfolding without control or the ability to steer the engagement fully.

It is questionable, even unlikely, that cyber supremacy can be reached through overwhelming capabilities manifested by stacking more technical capacity and adding attack vectors. The alternative is to use time as the vehicle to supremacy by accelerating the velocity of the engagements beyond the speed at which the enemy can target, precisely execute, and comprehend the events unfolding. The space created beyond the adversary’s comprehension is titled the Zero Domain.

The military traditionally sees the battlespace as the land, sea, air, space, and cyber domains. When fighting the battle beyond the adversary’s comprehension, no traditional warfighting domain serves as the battlespace; it is neither a vacuum nor an unclaimed terra nullius, but instead the Zero Domain. In the Zero Domain, cyberspace superiority surfaces as the outfall of accelerated time and a digitally separated singularity that benefits the more rapid actor. The Zero Domain has a time space that is accessible only to the rapid actor and a digital landscape that is not accessible to the slower actor, due to the execution velocity of the enhanced accelerated warfare.

Velocity achieves cyber anti-access/area denial (A2/AD), and it can be achieved without active initial interchanges, by accelerating execution and cyber ability in a solitaire state. During this process, any adversarial probing engagements affect the actor only on the approach to the Comprehension Barrier; once the actor arrives in the Zero Domain, a complete state of A2/AD is present. From that point forward, the actor that reached the Zero Domain has cyberspace singularity: the accelerated actor is the only actor that can understand the digital landscape, can engage unilaterally without an adversarial ability to counterattack or interfere, and holds the ability to decide when, how, and where to attack. In the Zero Domain, the accelerated singularity forges the battlefield gravity and thrust into a single power that denies adversarial cyber operations and acts as one force of destruction, extraction, corruption, and exploitation of targeted adversarial digital assets.

When breaking the Comprehension Barrier, the first of the adversary’s final points of comprehension to be passed is human deliberation, directly followed by pre-authorization and machine learning; once these final points of comprehension are passed, the rapid actor enters the Zero Domain.
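A minimal sketch of the time logic follows; the latencies are hypothetical orders of magnitude, not measured values. The barrier is crossed when the attacker’s engagement cycle is shorter than every response latency in the defender’s decision stack.

    # Toy time model of the Comprehension Barrier. Latencies are hypothetical
    # orders of magnitude for each layer of the defender's decision stack.

    DEFENDER_LATENCIES_S = {
        "human deliberation": 600.0,   # minutes to observe, orient, decide
        "pre-authorization":  1.0,     # preset response, near-instant trigger
        "machine learning":   0.001,   # automated classification and action
    }

    def barrier_crossed(attacker_cycle_s: float) -> bool:
        """The rapid actor enters the Zero Domain when its engagement cycle
        outpaces every layer of the defender's decision stack."""
        return attacker_cycle_s < min(DEFENDER_LATENCIES_S.values())

    for cycle in (3600.0, 10.0, 0.01, 0.0001):
        beaten = [name for name, latency in DEFENDER_LATENCIES_S.items()
                  if cycle < latency]
        status = "Zero Domain" if barrier_crossed(cycle) else "contested"
        print(f"attacker cycle {cycle:>9.4f} s beats {beaten} -> {status}")

The sketch reproduces the ordering above: acceleration first outruns human deliberation, then pre-authorization, then machine learning, and only past that last layer is the engagement unilateral.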

Key to victory has been the concept of getting inside the opponent’s OODA loop and thereby distorting, degrading, and derailing the opponent’s OODA process. In accelerated warfare beyond the Comprehension Barrier, there is no need to be inside the opponent’s OODA loop, because the concept of accelerated warfare is to remove the opponent’s OODA loop altogether and by doing so decapitate the opponent’s ability to coordinate, seek effect, and command. In the Zero Domain, the opposing force has no contact with its enemy, and its OODA loop has evaporated.

The Zero Domain is the warfighting domain where accelerated velocity in warfighting operations removes the enemy’s presence. It is the domain with zero opponents. It is not area denial, because the enemy is unable to accelerate to the level at which it could enter the battlespace, and it is not access denial, because the enemy has not been part of the fight since the Comprehension Barrier was broken.

Even if adversarial nations invest heavily in quantum computing, machine learning, and artificial intelligence, I am not convinced that these adversarial authoritarian regimes can capitalize on their potential technological peer status with America. The Zero Domain concept carries an American advantage because we are less afraid of allowing degrees of freedom in operations, whereas totalitarian and authoritarian states are slowed down by their culture of fear and need for control. An actor that is slowed down lowers the threshold of the Comprehension Barrier and enables the American force to reach the Zero Domain earlier in the future fight and establish information superiority as a confluence of cyber and information operations.

Jan Kallberg, PhD

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.

When Everything Else Fails in an EW Saturated Environment – Old School Shortwave

(I wrote this opinion piece together with Lt. Col. Stephen Hamilton and Capt. Kyle Hager)

The U.S. Army’s ability to employ high-frequency radio systems has atrophied significantly since the Cold War as the United States transitioned to counterinsurgency operations. Alarmingly, as hostile near-peer adversaries reemerge, it is necessary to re-establish HF alternatives should very-high frequency, ultra-high frequency or SATCOM come under attack. The Army must increase training to enhance its ability to utilize HF data and voice communication.

The Department of Defense’s focus over the last several years has primarily been Russian hybrid warfare and special forces. If there is a future armed conflict with Russia, it is anticipated ground forces will encounter the Russian army’s mechanized infantry and armor.

A potential future conflict with a capable near-peer adversary, such as Russia, is notable in that they have heavily invested in electromagnetic spectrum warfare and are highly capable of employing electronic warfare throughout their force structure. Electronic warfare elements deployed within theaters of operation threaten to degrade, disrupt or deny VHF, UHF and SATCOM communication. In this scenario, HF radio is a viable backup mode of communication.

Russian doctrine favors the rapid employment of nonlethal effects, such as electronic warfare, to paralyze and disrupt the enemy in the early hours of a conflict. The Russian army inherited from the Soviet Union the integrated use of electronic warfare as a component of a greater campaign plan, enabling freedom of maneuver for combat forces. The rear echelons are postured to attack utilizing either a single envelopment, attacking the defending enemy from the rear, or a double envelopment, seeking to destroy the main enemy forces by unleashing the reserves. Ideally, a Russian motorized rifle regiment’s advance guard battalion makes contact with the enemy and quickly engages on a broader front, identifying weaknesses that permit the regiment’s rear echelons to conduct flanking operations. These maneuvers are generally followed by another motorized regiment flanking, producing a double envelopment and destroying the defending forces.

Currently, competency with HF radio systems within the U.S. Army is limited; however, there is a strong case for training and ensuring readiness in the utilization of HF communication. Even in EMS-denied environments, HF radios can provide stable, beyond-line-of-sight communication, permitting the ability to initiate a prompt global strike. While HF radio equipment is also vulnerable to electronic attack, it can be difficult to target due to near-vertical incidence skywave (NVIS) signal propagation. This propagation method reflects signals off the ionosphere in an EMS-contested environment, establishing communications beyond the line of sight. Due to the signal path, targeting an HF transmitter is much more difficult than targeting VHF and UHF radios, which transmit line-of-sight ground waves.
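A simplified flat-earth “mirror” model conveys why the NVIS signal path behaves this way: with a single reflection off an ionospheric layer at height h, a signal launched at elevation angle theta returns to ground at range d = 2h / tan(theta). The layer height and angles below are illustrative; real ionospheric refraction is far more complex.

    import math

    # One-hop skywave ground range under a flat-earth mirror model:
    # d = 2h / tan(theta). An illustrative idealization only.

    def skywave_range_km(elevation_deg: float, layer_height_km: float = 300.0) -> float:
        """Ground range of a single-reflection skywave path."""
        return 2.0 * layer_height_km / math.tan(math.radians(elevation_deg))

    for elevation in (85, 75, 60, 45):
        print(f"takeoff angle {elevation:2d} deg -> ~{skywave_range_km(elevation):4.0f} km")

At near-vertical takeoff angles the one-hop footprint stays within a few hundred kilometers of the transmitter, which matches the regional, beyond-line-of-sight coverage described above while leaving little of the ground-wave signature that makes VHF and UHF transmitters easier to locate.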

The expense of attaining an improved HF readiness level is low in comparison to other Army needs, yet carries a high return on investment. The equipment has already been fielded to maneuver units; the next step is Army leadership prioritizing soldier training and employment of the equipment in tactical environments. This will posture the U.S. Army in a state of higher readiness for future conflicts.

Dr. Jan Kallberg, Lt. Col. Stephen Hamilton and Capt. Kyle Hager are research scientists at the Army Cyber Institute at West Point and assistant professors at the United States Military Academy.

Utilizing Cyber in Arctic Warfare

The change in focus from counterinsurgency to near-peer and peer conflicts has also introduced the likelihood that a conflict, if it comes, will be fought in cold and frigid conditions. The weather in Korea and Eastern Europe is harsh during wintertime, with the challenges increasing the farther north the engagement takes place. In traditional war theaters, the threats to your existence line up as follows: enemy, logistics, climate. In a polar climate, the order is reversed: climate, logistics, enemy.

An enemy will engage you and seek you out on different occasions, but the climate is ever-present. The battle for your own physical survival – staying warm, eating, seeking rest – can create unit fatigue and lower the ability to fight within days, even for trained and able troops. The easiest way to envision how three feet of snow affects you is to think of your mobility when walking in water up to your hip: to compensate, you either ski or use low-ground-pressure, wide-tracked vehicles, such as specialized small unit support vehicles.
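The ground-pressure intuition can be put into rough numbers with p = mg/A; the masses and contact areas below are coarse illustrative assumptions, not equipment specifications.

    # Ground pressure p = m * g / A. Masses and contact areas are rough
    # illustrative assumptions, not equipment specifications.

    G = 9.81  # m/s^2

    def ground_pressure_kpa(mass_kg: float, contact_area_m2: float) -> float:
        return mass_kg * G / contact_area_m2 / 1000.0

    cases = {
        "loaded soldier on boots":       (110.0, 0.05),  # two boot soles
        "same soldier on skis":          (110.0, 0.30),  # two touring skis
        "wide-tracked oversnow vehicle": (6500.0, 5.0),  # total track contact
    }

    for name, (mass_kg, area_m2) in cases.items():
        print(f"{name:30s} ~{ground_pressure_kpa(mass_kg, area_m2):5.1f} kPa")

Under these assumptions the skier puts roughly a sixth of the booted soldier’s pressure on the snowpack, and the wide-tracked vehicle, despite weighing some sixty times more than the soldier, still lands in between, which is exactly why skis and wide tracks are the standard answers to deep snow.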

The climate and the snow depth also affect equipment. The lethality of your regular weapons is lowered: gunfire accuracy goes down as charges burn slower in an arctic subzero environment, and mortar rounds are less effective than under normal conditions because the snow captures shrapnel. Any heat, whether from weapons, vehicles, or your body, will make the snow melt and then freeze to ice; if not cleaned, weapons will jam. In a near-peer or peer conflict, units are engaged for longer periods, and the exposure to the climate can last months.

I say all this to set the stage. Arctic warfare takes place in an environment that often lacks roads and infrastructure, offers minimal logistics, and has snow and ice blocking mobility. The climate affects both you and the enemy; once you are comfortable in this environment, you can work on the enemy’s discomfort.

The unique opportunity for cyberattacks in an Arctic conflict is, in my opinion, the ability to destroy a small piece of a machine or waste electric energy.

First, the ability to replace and repair equipment is limited in an arctic environment — the logistics chain is weak and unreliable, and there are no facilities that can effectively support needed repairs, so the whole machine is a loss. If a cyberattack destroys a fuel pump in a vehicle, the targeted vehicle could be out of service for a week or more before it can be repaired, and it might have to be abandoned as units continue to move over the landscape. Units that operate in the Arctic have a limited logistics trail and a limited ability to carry spare parts and reserve equipment. A systematic attack on a given set of equipment can paralyze the enemy.

Second, wasting electric energy is extremely stressful for any unit targeted. The Arctic has no urban infrastructure and often no existing power lines that can provide electric power to charge batteries and keep electronic equipment running. If there are power lines, they are few and likely already targeted by long-range enemy patrols.

The winter does not offer enough sun for solar panels, if the sun even gets above the horizon (far enough north, the sun is for several months a theoretical concept). Batteries do not hold their charge as it gets colder (a battery that holds a 100 percent charge at 80 degrees Fahrenheit has its capacity halved to 50 percent at 0 degrees Fahrenheit). Generators demand fuel from a limited supply chain and generate not only a heat signature but also noise. The Arctic night is clear, with no noise pollution, so a working generator can be picked up by a long-range ski patrol from 500 yards, risking an ambush. The loss of, or intermittent ability to use, electronics and signal equipment due to power issues reduces and degrades situational awareness, command and control, and the ability to call for strikes, and it blinds the targeted unit.
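Using the two data points above (100 percent capacity at 80 degrees Fahrenheit, halved at 0 degrees) and assuming a linear fit, which is my simplification, a planner could derate battery stocks by forecast temperature:

    import math

    # Linear derating of battery capacity from the two data points in the
    # text: 100 percent at 80 F, 50 percent at 0 F. Linearity between and
    # below those points is a simplifying assumption.

    def capacity_fraction(temp_f: float) -> float:
        """Usable capacity fraction at an ambient temperature (Fahrenheit)."""
        fraction = 0.5 + (0.5 / 80.0) * temp_f  # through (0 F, 0.5), (80 F, 1.0)
        return max(0.0, min(1.0, fraction))

    def batteries_needed(mission_ah: float, battery_ah: float, temp_f: float) -> int:
        """Batteries to pack for a mission at a forecast temperature."""
        usable_ah = battery_ah * capacity_fraction(temp_f)
        return math.ceil(mission_ah / usable_ah)

    for temp_f in (80, 32, 0, -20):
        print(f"{temp_f:4d} F: capacity x{capacity_fraction(temp_f):.2f}, "
              f"{batteries_needed(60, 10, temp_f)} x 10 Ah batteries for a 60 Ah load")

At the assumed 0 degrees Fahrenheit, the same 60 amp-hour requirement doubles the number of batteries a unit must carry compared to 80 degrees, a load that lands directly on the weak Arctic logistics trail.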

Arctic warfare is a fight with low margins for error, where the climate guarantees that small failures can turn nasty, and even limited success with arctic cyber operations can tip the scales in your favor.

Jan Kallberg, PhD

Jan Kallberg is a research fellow/research scientist at the Army Cyber Institute at West Point. As a former Swedish reserve officer and light infantry company commander, Kallberg has personal experience facing Arctic conditions. The views expressed herein are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.