A new mindset for the Army: silent running

//I wrote this article together with Colonel Stephen Hamilton and it was published in C4ISRNET//

In the past two decades, the U.S. Army has continually added new technology to the battlefield. While this technology has enhanced the ability to fight, it has also greatly increased an adversary’s ability to detect and potentially interrupt or intercept operations.

The adversary in the future fight will have a more technologically advanced ability to sense activity on the battlefield – light, sound, movement, vibration, heat, electromagnetic transmissions, and other quantifiable metrics. This is a fundamental and accepted assumption. The future near-peer adversary will be able to sense our activity in an unprecedented way due to modern technologies. This shift is driven not only by technology but also by commoditization; sensors that cost thousands of dollars during the Cold War are available at a marginal cost today. In addition, software-defined radios cover far more bandwidth than traditional radios and can scan the entire spectrum several times a second, making it easier to detect new signals.

We turn to the thoughts of Bertrand Russell in his version of Occam’s razor: “Whenever possible, substitute constructions out of known entities for inferences to unknown entities.” Occam’s razor is named after the medieval philosopher and friar William of Ockham, who held that under uncertainty, the fewer assumptions, the better, and who preached pursuing simplicity by relying on the known until simplicity could be traded for greater explanatory power. So, by staying with the limited assumption that the future near-peer adversary will be able to sense our activity at a previously unseen level, we will, unless we change our default modus operandi, be exposed to increased threats and risks. The adversary’s acquired sensor data will be utilized for decision-making, direction finding, and engaging friendly units with all the means available to the adversary.

The Army mindset must change to mirror the Navy’s tactic of “silent running” used to evade adversarial threats. While there have been recent advances in sensor countermeasure techniques, such as low probability of detection and low probability of intercept, silent running reduces emissions altogether, thus reducing the risk of detection.

In the U.S. Navy submarine fleet, silent running is a stealth mode that has been used for the last 100 years, following the introduction of passive sonar in the latter part of the First World War. The concept is to avoid discovery by the adversary’s passive sonar by eliminating all unnecessary noise. The ocean is an environment where hiding is difficult, similar to the Army’s future emission-dense battlefield.

However, on the battlefield, emissions can be managed to reduce the noise feeding the adversary’s sensors. A submarine in silent running mode shuts down non-mission-essential systems. The crew moves silently and avoids creating any unnecessary sound, in combination with a reduction in speed to limit noise from shafts and propellers. The submarine’s noise no longer stands out; it becomes one sound among the natural and surrounding sounds, which radically decreases the risk of detection.

From the Army’s perspective, the adversary’s primary objective when entering the fight is to disable command and control, elements of indirect fire, and enablers of joint warfighting. All of these units are highly active in the electromagnetic spectrum. So how can silent running be applied for a ground force?

If we transfer silent running to the Army, the same tactic can be as simple as not utilizing equipment just because it is fielded to the unit. If generators go offline when not needed, then sound, heat, and electromagnetic noise are reduced. Radios that are not mission-essential are restricted to specific transmission windows or turned off completely, which limits the risk of signal discovery and potential geolocation. In addition, radios are used at the lowest power that still provides acceptable communication, as opposed to unnecessarily high power that would increase the range of detection; a minimal sketch of this logic follows. The bottom line: a paradigm shift is needed where we seek to emit the minimum number of detectable signatures, emissions, and radiation.
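As a rough illustration of those two disciplines, transmission windows and minimum power, the decision before any emission reduces to two checks. This is a minimal sketch in Python; the power levels, window times, and link test are hypothetical placeholders, not fielded procedure.

```python
# Minimal sketch of an emission-control decision: transmit only inside an
# authorized window, and then at the lowest power that still closes the link.
# POWER_LEVELS_W, TX_WINDOWS, and link_ok are hypothetical placeholders.
from datetime import datetime, time

POWER_LEVELS_W = [0.1, 0.5, 1.0, 5.0, 20.0]    # candidate output powers, low to high
TX_WINDOWS = [(time(2, 0), time(2, 10))]       # preplanned transmission windows (UTC)

def in_tx_window(now: datetime) -> bool:
    """True if the current time falls inside an authorized transmission window."""
    return any(start <= now.time() <= end for start, end in TX_WINDOWS)

def link_ok(power_w: float) -> bool:
    """Placeholder link test; in practice an SNR or acknowledgment check."""
    return power_w >= 0.5  # assume 0.5 W closes this particular link

def select_tx_power(now: datetime):
    """Return the lowest power that closes the link, or None to stay silent."""
    if not in_tx_window(now):
        return None                            # outside the window: emit nothing
    for power in POWER_LEVELS_W:               # ascending order = minimum emission
        if link_ok(power):
            return power
    return None                                # nothing closes the link: stay silent

print(select_tx_power(datetime(2024, 1, 1, 2, 5)))   # inside the window -> 0.5
print(select_tx_power(datetime(2024, 1, 1, 14, 0)))  # outside the window -> None
```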

The submarine becomes undetectable as its noise level diminishes to the level of natural background noise, which enables it to hide within the environment. Ground forces will still be detectable in some form – the future density of sensors and the adversary’s increasing ability over time support that assumption – but one goal is to blur the adversary’s situational picture and disable its ability to accurately assess the function, size, position, and activity of friendly units. The future fluid multi-domain operations (MDO) battlefield would also increase the challenge for the adversary compared to a more static battlefield with a clear separation between friend and foe.

In preparation for a future near-peer fight, it is crucial to maintain an active mindset of avoiding unnecessary transmissions that could feed adversarial sensors with information to guide their actions. This might require a paradigm shift, migrating from an abundance of active systems to being minimalists in pursuit of stealth.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. Col. Stephen Hamilton is the technical director of the Army Cyber Institute at West Point and an academy professor at the U.S. Military Academy. The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy, or the Department of Defense.


From the Adversary’s POV – Cyberattacks to Delay CONUS Forces’ Movement to the Port of Embarkation Are Pivotal to Success

We tend to see vulnerabilities and concerns about cyber threats to critical infrastructure from our own viewpoint. But an adversary will assess where and how a cyberattack on America benefits its strategy. I am not convinced that attacks on critical infrastructure, in general, have the payoff an adversary seeks.

The American reaction to Sept. 11, and to any attack on U.S. soil, hints to an adversary that attacking critical infrastructure to create hardship for the population might work contrary to the intended softening of the will to resist foreign influence. It is more likely that attacks affecting the general population instead strengthen the will to resist and fight, similar to the British reaction to the German bombing campaign, the Blitz, in 1940. We can’t rule out attacks that affect the general population, but no adversary has enough offensive capability to attack all 16 sectors of critical infrastructure and gain strategic momentum. An adversary has limited cyberattack capabilities and needs to prioritize cyber targets that are aligned with its overall strategy. Trying to see what options, opportunities, and directions an adversary might take requires that we shift our point of view to the adversary’s outlook. One of my primary concerns is pinpointed cyberattacks disrupting and delaying the movement of U.S. forces to theater.

Seen from a potential adversary’s point of view, bringing the cyber fight to our homeland – think delaying the transportation of U.S. forces to theater by attacking infrastructure and transportation networks from bases to the port of embarkation – is a low-investment, high-return operation. Why does it matter?

First, the bulk of U.S. forces are not in the region where a conflict erupts. Instead, they are mainly based in the continental United States and must be transported to theater. From an adversary’s perspective, delaying the arrival of U.S. forces might be its only opportunity. If the adversary can exploit operational and tactical superiority in the initial phase of the conflict, by engaging our local allies and U.S. forces in the region swiftly, it can make territorial gains that are too costly to reverse later, leaving the adversary in a strong bargaining position.

Second, even if only partially successful, cyberattacks that delay the arrival of U.S. forces will create confusion. Such attacks could mean units arrive at different ports, at different times, and with only a fraction of their hardware or personnel, while the rest is stuck in transit.

Third, an adversary that is convinced before a conflict that it can significantly delay the arrival of U.S. units from the continental U.S. to a theater will make a different assessment of the risks of a fait accompli attack. Training and Doctrine Command defines such an attack as one that “is intended to achieve military and political objectives rapidly and then to quickly consolidate those gains so that any attempt to reverse the action by the U.S. would entail unacceptable cost and risk.” Even if an adversary is strategically inferior in the long term, the window of opportunity created by the assumed delay in moving units from the continental U.S. to theater might be enough for it to take military action seeking a successful fait accompli attack.

In designing a cyber defense for critical infrastructure, it is vital that what matters to the adversary be part of the equation. In peacetime, cyberattacks probe systems across society, from waterworks, schools, social media, and retail all the way to sawmills. Cyberattacks in wartime will have more explicit intent and seek a specific gain that supports the strategy. Therefore, it is essential to identify and prioritize the critical infrastructure that is pivotal in war, instead of attempting to spread the defense to cover everything touched in peacetime; a toy illustration of such prioritization follows.
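As a toy illustration of that prioritization, the ranking below orders assets by their wartime value to the adversary rather than by peacetime incident volume. The assets and scores are invented for the example.

```python
# Toy example: rank critical-infrastructure assets by wartime relevance to the
# adversary's strategy, not by how often they are probed in peacetime.
assets = {
    "rail line from base to port of embarkation": {"wartime_impact": 9, "peacetime_incidents": 2},
    "municipal waterworks": {"wartime_impact": 4, "peacetime_incidents": 8},
    "school district network": {"wartime_impact": 1, "peacetime_incidents": 9},
}

# Sort by what the adversary gains in war, not by where peacetime noise is loudest.
for name, score in sorted(assets.items(), key=lambda kv: kv[1]["wartime_impact"], reverse=True):
    print(f"priority {score['wartime_impact']}: {name}")
```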

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Our Dependence on the Top 2% of Cyber Warriors

As an industrial nation transitioning to an information society with digital conflict, we tend to see the technology as the weapon. In the process, we ignore the fact that a few humans can have a large-scale operational impact.

But we underestimate the importance of applicable intelligence, the intelligence of how to apply things in the right order. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are mostly publicly available; anyone can download them from the Internet and use them. But the weaponization of the tools occurs when they are used by someone who understands how to use them in the right order.

In 2017, Gen. Paul Nakasone said “our best [coders] are 50 or 100 times better than their peers,” and asked, “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” The success of cyber operations depends not on tools but on the super-empowered individual whom Nakasone calls “the 50-x coder.”

There have always been exceptional individuals with an irreplaceable ability to see the challenge early on, create a technical solution, and know how to play it for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of artificial intelligence increases the reliance on these highly capable individuals, because someone must set the rules and point out the trajectory for artificial intelligence at the outset.

But this also raises a series of questions. Even if identified as a weapon, how do you make a human mind “classified”? How do we protect these high-ability individuals who are weapons in the digital world?

These minds are different because they see an opportunity to exploit in a digital fog of war when others don’t. They address problems unburdened by traditional thinking, in innovative ways, maximizing the dual-purpose nature of digital tools, and they can generate decisive cyber effects.

It is this applicable intelligence that creates the process, that understands the application of tools, and that turns simple digital software into digitally lethal weapons. In the analog world, it is as if individuals had the supernatural ability to create a hypersonic missile from materials readily available at Kroger or Albertsons. For the nation, these individuals are strategic national security assets.

Systemically, we struggle to see humans as the weapon, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed.

For America, technological wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies and ran through the Erie Canal, the manufacturing era, and the moon landing, all the way to today’s autonomous systems, drones, and robots. In the default mindset, there is always a tool, an automated process, a piece of software, or a set of technical steps that can solve a problem or act. The same mindset sees humans merely as an input to technology, so humans are interchangeable and can be replaced.

Super-empowered individuals are not interchangeable and cannot be replaced, unless we want to be stuck in a digital war. Artificial intelligence and machine learning support the intellectual endeavor to cyber defend America, but humans set the strategy and direction.

It is time to see weaponized minds for what they are: not dudes and dudettes, but strike capabilities.

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Time – and the lack thereof

For cybersecurity, it is pivotal for the next decade to be able to operate with a decreasing time window to act.

The accelerated execution of cyberattacks and an increased ability to identify vulnerabilities for exploitation at machine speed compress the time window in which cybersecurity management can address unfolding events. In reality, we assume there will be time to lead, assess, and analyze, but that window might be closing. It is time to raise the issue of accelerated cyber engagements.

Limited time to lead

If there is limited time to lead, how do you ensure that you can execute a defensive strategy? How do we launch countermeasures at a speed beyond human ability and comprehension? If you don’t have time to lead, the alternative is to preauthorize.

In the early days of the Cold War, war planners and strategists who were used to having days to react to events faced ICBMs that forced decisions within minutes. The solution? Preauthorization. The analogy between how the nuclear threat was addressed and cybersecurity works to a degree, but we have to recognize that the number of possible scenarios in cybersecurity could be in the hundreds, and we need to prioritize.

The cybersecurity preauthorization process would require an understanding of likely scenarios and of how events would unfold within them. The weaknesses of preauthorization are several. First, the scenarios we create are limited because they are built on how we perceive our own system environment. This is exemplified by the old saying: “What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.”

Scenarios created as a foundation for preauthorization will be laden with biases, with assumptions that some areas are secure when they are not, and with an inability to see the attack vectors that an attacker sees. So the major challenge when considering preauthorization is to create scenarios that are representative of potential outcomes.

One way is to look at the attack strategies used in earlier incidents. This limits the scenarios to what has already happened to others, but it can serve as a base to which additional scenarios are added. The MITRE ATT&CK Navigator provides an excellent tool to simulate and create attack scenarios that can be a foundation for preauthorization; a minimal sketch of such a mapping follows. Eventually, artificial intelligence will become an integrated part of offloading decision-making, but we are not there yet. In the near future, artificial intelligence can cover parts of the managerial spectrum, increasing the human ability to act in very brief time windows.
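To make the idea concrete, a preauthorization table can be keyed to observed attacker behavior. In this minimal sketch, the technique IDs are real MITRE ATT&CK identifiers, but the response actions and the dispatch logic are hypothetical placeholders.

```python
# Minimal sketch of preauthorization keyed to observed ATT&CK techniques.
# Only what is preauthorized runs at machine speed; the rest escalates.
PREAUTHORIZED_RESPONSES = {
    "T1110": "lock_account_and_alert",        # Brute Force
    "T1486": "isolate_host_from_network",     # Data Encrypted for Impact
    "T1021": "block_lateral_movement_ports",  # Remote Services
}

def respond(technique_id: str) -> str:
    """Execute only preauthorized actions; everything else goes to a human."""
    action = PREAUTHORIZED_RESPONSES.get(technique_id)
    if action is None:
        return "escalate_to_on_call_analyst"  # no preauthorization: human decides
    return action                             # preauthorized: act immediately

print(respond("T1486"))  # -> isolate_host_from_network
print(respond("T1566"))  # Phishing is not preauthorized -> escalate
```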

The second weakness is preauthorization’s vulnerability to probes and reverse engineering. Cybersecurity is active 24/7/365, with numerous engagements ongoing at any time. Over time, and using machine learning, automated attack mechanisms could learn how to avoid triggering preauthorized responses by probing the defenses and reverse engineering solutions that slip past the preauthorized controls.

So there is no easy road forward but instead a tricky path that requires clear objectives, alignment with risk management and its risk appetite, and an acceptance that the final approach to addressing the increased velocity of attacks might not be perfect. The alternative, not addressing the accelerated execution of attacks, is not viable. That would hand the initiative to the attacker and expose the organization to uncontrolled risks.

Bye-bye, OODA loop

Repeatedly over the last year, I have read references to the OODA loop and the utility of the OODA concept for cybersecurity. The OODA loop resurfaces in cybersecurity and information security managerial approaches as a structured way to address unfolding events. The OODA (observe, orient, decide, act) loop, developed by John Boyd in the 1960s, follows those four steps: you observe the events unfolding, you orient your assets at hand to address the events, you decide on a feasible approach, and you act.
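Expressed as code, the loop is a simple control cycle, which is also why its timing matters: each pass is one engagement cycle. This sketch is illustrative only; the telemetry, assessment rule, and actions are hypothetical placeholders.

```python
# Minimal sketch of the OODA loop as a control cycle.
import time

def observe() -> dict:
    return {"new_connections": 42}       # placeholder telemetry feed

def orient(observation: dict) -> str:
    # Put the observation in context: normal traffic or a possible attack?
    return "suspicious" if observation["new_connections"] > 40 else "normal"

def decide(assessment: str) -> str:
    return "throttle_traffic" if assessment == "suspicious" else "continue_monitoring"

def act(decision: str) -> None:
    print(f"executing: {decision}")

# The article's argument: attack speed can shrink this cycle below the time a
# human needs to observe, orient, decide, and act.
for _ in range(3):
    act(decide(orient(observe())))
    time.sleep(1)
```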

Over the last decade, the OODA loop has become a central concept in cybersecurity, as it is seen as a vehicle for addressing what attackers do, when and where they do it, and what you should do in response and where it is most effective. The refrain has been: “you need to get inside the attacker’s OODA loop.” The OODA loop is used as a way to understand the adversary and tailor your own defensive actions.

Retired Army Colonel Tom Cook, former research director for the Army Cyber Institute at West Point, and I wrote a 2017 IEEE article titled “The Unfitness of Traditional Military Thinking in Cyber” questioning the validity of the OODA loop in cyber when events unfold faster and faster. Today, in 2019, the validity of the OODA loop in cybersecurity is on the brink of evaporating due to the increased speed of attacks. The time needed to observe and assess, direct resources, make decisions, and take action will be too long to muster a successful cyber defense.

Attacks occurring at computational speed worsen the inability to assess and act, and the increasingly short time frames likely to be found in future cyber conflicts will disallow any significant, timely human deliberation.

Moving forward

I have no intention of being a narrative impossibilist who presents challenges with no solutions, so the current way forward is preauthorization. In the near future, the human ability to play an active role in rapid engagements will be supported by artificial intelligence decision-making that executes the tactical movements. The human mind remains in charge of the operational decisions for several reasons: control, the larger picture, strategic implementation, and intent. For cybersecurity, it is pivotal for the next decade to be able to operate with a decreasing time window to act.

Jan Kallberg, PhD