From the Adversary’s POV – Cyber Attacks That Delay CONUS Forces’ Movement to the Port of Embarkation Are Pivotal to Success

We tend to see vulnerabilities and concerns about cyber threats to critical infrastructure from our own viewpoint. But an adversary will assess where and how a cyberattack on America will benefit the adversary’s strategy. I am not convinced attacks on critical infrastructure, in general, have the payoff that an adversary seeks.

The American reaction to Sept. 11 and any attack on U.S. soil gives a hint to an adversary that attacking critical infrastructure to create hardship for the population might work contrary to the intended softening of the will to resist foreign influence. It is more likely that attacks affecting the general population instead strengthen the will to resist and fight, similar to the British reaction to the German bombing campaign of 1940, the Blitz. We can’t rule out attacks that affect the general population, but no adversary has enough offensive capability to attack all 16 sectors of critical infrastructure and gain strategic momentum.
An adversary has limited cyberattack capabilities and needs to prioritize cyber targets that align with its overall strategy. Seeing what options, opportunities, and directions an adversary might take requires that we adopt the adversary’s point of view. One of my primary concerns is pinpointed cyberattacks disrupting and delaying the movement of U.S. forces to theater.


Seen from the potential adversary’s point of view, bringing the cyber fight to our homeland – think delaying the transportation of U.S. forces to theater by attacking infrastructure and transportation networks from bases to the port of embarkation – is a low-investment/high-return operation. Why does it matter?

First, the bulk of the U.S. forces are not in the region where the conflict erupts. Instead, they are mainly based in the continental United States and must be transported to theater. From an adversary’s perspective, the delay of U.S. forces’ arrival might be the only opportunity. If the adversary can utilize an operational and tactical superiority in the initial phase of the conflict, by engaging our local allies and U.S. forces in the region swiftly, territorial gains can be made that are too costly to reverse later, leaving the adversary in a strong bargaining position.

Second, even if only partially successful, cyberattacks that delay U.S. forces’ arrival will create confusion. Such attacks would mean units might arrive at different ports, at different times and with only a fraction of the hardware or personnel while the rest is stuck in transit.

Third, an adversary that is convinced before a conflict that it can significantly delay the arrival of U.S. units from the continental U.S. to a theater will make a different assessment of the risks of a fait accompli attack. Training and Doctrine Command defines such an attack as one that “is intended to achieve military and political objectives rapidly and then to quickly consolidate those gains so that any attempt to reverse the action by the U.S. would entail unacceptable cost and risk.” Even if an adversary is strategically inferior in the long term, the window of opportunity created by the assumed delay in moving units from the continental U.S. to theater might be enough for it to take military action seeking a successful fait accompli.

In designing a cyber defense for critical infrastructure, it is vital that what matters to the adversary is part of the equation. In peacetime, cyberattacks probe systems across society – from waterworks, schools, social media, and retail all the way to sawmills. Cyberattacks in wartime will have more explicit intent and seek a specific gain that supports the strategy. Therefore, it is essential to identify and prioritize the critical infrastructure that is pivotal in war, instead of attempting to spread the defense to cover everything touched in peacetime.

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Our Dependence on the Top 2% of Cyber Warriors

As an industrial nation transitioning to an information society with digital conflict, we tend to see the technology as the weapon. In the process, we ignore the fact that only a few humans can have a large-scale operational impact.

But we underestimate the importance of applicable intelligence – the intelligence on how to apply things in the right order. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are mostly publicly available; anyone can download them from the Internet and use them. The weaponization of the tools occurs when they are used by someone who understands how to use them in the right order.

In 2017, Gen. Paul Nakasone said “our best [coders] are 50 or 100 times better than their peers,” and asked, “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” The success of cyber operations depends not on tools but on the super-empowered individual that Nakasone calls “the 50-x coder.”

There have always been exceptional individuals with an irreplaceable ability to see the challenge early on, create a technical solution, and know how to play it for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of artificial intelligence increases the reliance on these highly capable individuals, because someone must set the rules and point out the trajectory for artificial intelligence at the initiation.

But this also raises a series of questions. Even if identified as a weapon, how do you make a human mind “classified?” How do we protect these high-ability individuals that are weapons in the digital world?

These minds are different because they see an opportunity to exploit in a digital fog of war when others do not. They address problems unburdened by traditional thinking, in innovative ways, maximize the dual-use potential of digital tools, and can generate decisive cyber effects.

It is this applicable intelligence that creates the process, understands the application of tools, and turns simple digital software into digitally lethal weapons. In the analog world, it is as if individuals had the supernatural ability to build a hypersonic missile from materials readily available at Kroger or Albertsons. For the nation, these individuals are strategic national security assets.

Systemically, we struggle to see humans as the weapon, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed.

For America, technological wonders are a sign of prosperity, ability, self-determination, and advancement – a story that started in the early days of the colonies and ran through the Erie Canal, the manufacturing era, and the moon landing, all the way to autonomous systems, drones, and robots. In the default mindset, there is always a tool, an automated process, a piece of software, or a set of technical steps that can solve a problem or act. The same mindset sees humans merely as an input to technology, so humans are interchangeable and replaceable.

Super-empowered individuals are not interchangeable and cannot be replaced, unless we want to be stuck in a digital war. Artificial intelligence and machine learning support the intellectual endeavor to cyber defend America, but humans set the strategy and direction.

It is time to see weaponized minds for what they are: not dudes and dudettes, but strike capabilities.

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Time – and the lack thereof

For cybersecurity, the pivotal challenge of the next decade is the ability to operate within a shrinking window of time to act.

The accelerated execution of cyberattacks, and an increased ability to identify vulnerabilities for exploitation at machine speed, compress the time window cybersecurity management has to address unfolding events. Today we assume there will be time to lead, assess, and analyze, but that window might be closing. It is time to raise the issue of accelerated cyber engagements.


Limited time to lead

If there is limited time to lead, how do you ensure that you can execute a defensive strategy? How do we launch counter-measures at speed beyond human ability and comprehension? If you don’t have time to lead, the alternative would be to preauthorize.

In the early days of the Cold War, war planners and strategists who were used to having days to react to events faced ICBMs that forced decisions within minutes. The solution? Preauthorization. The analogy between how the nuclear threat was addressed and cybersecurity works to a degree – but we have to recognize that the number of possible scenarios in cybersecurity could be in the hundreds, and we need to prioritize.

The cybersecurity preauthorization process would require an understanding of likely scenarios and of the events that would unfold under each. The weaknesses of preauthorization are several. First, the scenarios we create are limited because they are built on how we perceive our own system environment. This is exemplified by the old saying: “What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.”

Scenarios created as a foundation for preauthorization will be laden with biases, assumptions that areas are secure when they are not, and an inability to see the attack vectors an attacker sees. The major challenge in preauthorization is therefore to create scenarios that are representative of potential outcomes.

One way is to look at attack strategies used in the past. This limits the scenarios to what has already happened to others, but it can serve as a baseline to which additional scenarios are added. The MITRE ATT&CK Navigator provides an excellent tool to simulate and create attack scenarios that can be a foundation for preauthorization. Eventually, artificial intelligence will become an integrated part of offloading decision-making, but we are not there yet. In the near future, artificial intelligence can cover parts of the managerial spectrum, increasing the human ability to act in very brief time windows.
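Reduced to its mechanics, a preauthorization table is a lookup from an observed attack scenario to a pre-approved response, with everything unmatched escalated to a human. The sketch below is a minimal illustration of that idea, not an operational design; the ATT&CK technique IDs are real, but the mapping and the response names are hypothetical.

```python
# Minimal sketch of a preauthorization lookup (illustrative only).
# The ATT&CK technique IDs are real; the responses are hypothetical.

PREAUTHORIZED = {
    "T1566": "quarantine_mailbox",          # Phishing
    "T1059": "isolate_host",                # Command and Scripting Interpreter
    "T1486": "sever_segment_and_snapshot",  # Data Encrypted for Impact
}

def respond(technique_id: str) -> str:
    """Return the preauthorized response for an observed technique,
    or escalate to a human decision-maker when no scenario matches."""
    return PREAUTHORIZED.get(technique_id, "escalate_to_human")
```

The interesting branch is the default: every scenario the planners did not foresee falls back to human deliberation, which is exactly where the time pressure bites.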

The second weakness is preauthorization’s vulnerability to probes and reverse engineering. Cybersecurity is active 24/7/365, with numerous engagements ongoing. Over time, and using machine learning, automated attack mechanisms could learn how to avoid triggering preauthorized responses – probing the controls and reverse-engineering ways past them.
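To make the probing risk concrete, consider a hypothetical preauthorized control that blocks any traffic burst above a secret rate threshold. A patient attacker can binary-search that threshold with a handful of probes and then operate just beneath it. Every number in this sketch is invented for illustration.

```python
# Illustrative sketch: reverse-engineering a fixed preauthorized trigger.
# The defender blocks bursts above a secret threshold; the attacker
# binary-searches the threshold and then stays just below it.

SECRET_THRESHOLD = 730  # requests/sec that trips the preauthorized block

def defender_blocks(rate: int) -> bool:
    """The preauthorized control: block any burst above the threshold."""
    return rate > SECRET_THRESHOLD

def probe_threshold(low: int = 0, high: int = 10_000) -> int:
    """Find the largest rate that does NOT trigger the response."""
    while low < high:
        mid = (low + high + 1) // 2
        if defender_blocks(mid):
            high = mid - 1  # tripped the control, search lower
        else:
            low = mid       # survived, search higher
    return low

safe_rate = probe_threshold()  # the attacker now flies under the control
```

About fourteen probes (log2 of the search space) suffice to map the control, which is why a static preauthorized trigger degrades over time against an adaptive attacker.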

So there is no easy road forward, but instead a tricky path that requires clear objectives, alignment with risk management and its risk appetite, and an acceptance that the final approach to the increased velocity of attacks might not be perfect. The alternative – not addressing the accelerated execution of attacks – is not a viable option. That would hand the initiative to the attacker and expose the organization to uncontrolled risks.

Bye-bye, OODA-loop

Repeatedly through the last year, I have read references to the OODA loop and its utility for cybersecurity. The OODA loop resurfaces in cybersecurity and information security management as a structured way to address unfolding events. The OODA (Observe, Orient, Decide, Act) loop, developed by John Boyd in the 1960s, follows four steps: you observe the events unfolding, you orient your assets at hand to address the events, you decide on a feasible approach, and you act.

The OODA loop has become a central concept in cybersecurity over the last decade because it is seen as a vehicle to address what attackers do, when and where, and what you should do in response and where it is most effective. The mantra has been “you need to get inside the attacker’s OODA loop.” The OODA loop is used as a way to understand the adversary and tailor your own defensive actions.

Retired Army Col. Tom Cook, former research director for the Army Cyber Institute at West Point, and I wrote a 2017 IEEE article titled “The Unfitness of Traditional Military Thinking in Cyber” questioning the validity of the OODA loop in cyber as events unfold faster and faster. Today, in 2019, the validity of the OODA loop in cybersecurity is on the brink of evaporating due to the increased speed of attacks. The time needed to observe and assess, direct resources, make decisions, and take action will be too long to muster a successful cyber defense.

Attacks occurring at computational speed worsen our inability to assess and act, and the increasingly short time frames likely in future cyber conflicts will disallow any significant, timely human deliberation.
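A back-of-the-envelope comparison shows why. If an automated attack chain completes in well under a minute while even an optimistic human OODA cycle takes several minutes, the human decision arrives after the engagement is over. The step durations below are hypothetical assumptions for illustration, not measurements.

```python
# Illustrative timing sketch: automated attack vs. human OODA cycle.
# All durations are hypothetical assumptions, not measurements.

attack_steps = {          # seconds per automated attack step
    "scan": 2.0,
    "exploit": 0.5,
    "lateral_movement": 5.0,
    "exfiltrate": 30.0,
}

human_ooda = {            # seconds per human step (optimistic)
    "observe": 60,
    "orient": 300,
    "decide": 300,
    "act": 120,
}

attack_total = sum(attack_steps.values())   # 37.5 seconds
human_total = sum(human_ooda.values())      # 780 seconds, or 13 minutes
attack_finishes_first = attack_total < human_total
```

Under these assumptions the attack completes roughly 20 times faster than the defender can complete one deliberative loop, which is the core of the argument against a human-paced OODA cycle in cyber.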

Moving forward

I have no intention of being a narrative impossibilist who presents challenges with no solutions, so the current way forward is preauthorization. In the near future, the human ability to play an active role in rapid engagements will be supported by artificial intelligence decision-making that executes the tactical moves. The human mind is still in charge of the operational decisions for several reasons – control, the larger picture, strategic implementation, and intent. For cybersecurity, the pivotal challenge of the next decade is the ability to operate within a shrinking window of time to act.

Jan Kallberg, PhD

Reassessed incentives for innovation support transformation

There is no alternative way to ensure victory in the future fight than to innovate, implement the advances, and scale innovation. To use Henry Kissinger’s words: “The absence of alternatives clears the mind marvelously.”

Innovative environments are not created overnight. The establishment of the right culture is based on mutual trust, a trust that allows members to be vulnerable and take chances. Failure is a milestone to success.

Important characteristics of an innovative environment are competence, expertise, passion, and a shared vision. Such an environment is populated with individuals who are in it for the long run and don’t quit until they make advances. Individuals who hunger for success and are determined to work toward excellence are all around us. For the defense establishment, the core challenge is to reassess the incentives provided, so that ambition and intellectual assets are directed to innovation and the future fight.

Edward N. Luttwak noted that strategy only matters if we have the resources to execute it. Embedded in Luttwak’s statement is the general condition that if we are unable to identify, understand, incentivize, activate, and utilize our resources, the strategy does not matter. This leads to the questions: Who will be the innovator? How does the Department of Defense create a broad, innovative culture? Is innovation outsourced to think tanks and experimental labs, or is it entrusted to individuals who become experts in their subfields and drive innovation where they stand? Or do these models run in parallel? In general, are we ready to expose ourselves to the vulnerability of failure, and if so, what is an acceptable failure? These are questions that need to be addressed in the process of transformation.

Structural frameworks in place today could hinder innovation – for example, the traditional personnel model of the Defense Officer Personnel Management Act (DOPMA). In theory, it is a form of the assembly line’s scientific management, Taylorism, where the officer is processed through the system to the highest level of his or her career potential. In reality, the financial incentives favor following the flowchart for promotion instead of staying at a point where you are passionate about making an improvement. If a transformation to an innovative culture is to succeed, the incentives need to be aligned with the overall mission objective.

Another example is government-sponsored university research. Even if funds are allocated in pursuit of mobilizing civilian intellectual torque to ensure innovation that benefits the warfighter, traditional university research has little incentive to support the transformation of the armed forces. The majority of academia, and the overwhelming majority of research universities, pursue DOD and government research grants as income to fund graduate students and facilities. Many sponsored research projects are basic research, and the results are made public – which partly defeats the purpose if you seek an innovative advantage – with limited support to the future fight. Academia can tailor its research to fit the funding opportunity, which is logical from its viewpoint, and often it is a tweak on research already underway that can be squeezed into a grant application.

Academics at universities seek tenure, promotion, and leverage in their fields, so government funding becomes a box to check for tenure, evidence of the ability to attract external funding, and support for academic career progression. The incentives to support DOD innovation are suppressed by far stronger incentives for the researcher to gain personal career leverage at the university. In the future, it is likely more cost-effective to concentrate DOD-sponsored research projects at those universities that invest the time and effort to ensure that their research is DOD-relevant, operationally current, and supportive of the warfighter. Universities that align themselves with DOD objectives and deliver innovation for the future fight will also have a better understanding of what the future threat landscape looks like, and they are more likely to have an interface for quick dissemination of DOD needs. A realignment of incentives for sponsored research at universities creates an opportunity for those ready to support the future fight.

There is a need to look at the system level at how innovation is incentivized, to ensure that resources generate the effects sought. America has talent, ambition, a tradition of fearless engineering, and grit – the correct incentives unleash that innovative power.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

How the Founding Fathers helped make the US cyber-resilient

The Founding Fathers have done more for U.S. strategic cyber resiliency than any modern initiative. Their contribution is a stable society that can absorb attacks without falling into chaos, mayhem, and entropy. Stable countries have a significant advantage in future nation-state cyber-information conflicts. If nation-states seek to conduct decisive cyberwar, victory will not come from anecdotal exploits, but from launching systematic, destabilizing attacks on the targeted society that bring it down to the point that it is subject to foreign will. Societal stability is not created overnight; it is the product of decades and even centuries of good government, civil liberties, fairness, and trust-building.

Why does it matter? Because the strategic tools to bring down and degrade a society will not provide the effects sought. That means for an adversary seeking strategic advantages by attacking U.S. critical infrastructure the risk of retribution can outweigh the benefit.

The Northeast blackout of 2003 is an example of how the American population reacts when a significant share of critical infrastructure is suddenly degraded. Instead of imploding into chaos and looting, the affected population acted orderly and helped strangers, demonstrating a high degree of resiliency. The reason Americans act orderly and have such resiliency is a product of how we have designed our society, which leads back to the Founding Fathers. Americans are invested in the success of their society; therefore, they do not turn on each other in a crisis.

Historically, the tactic of attacking a stable society by generating hardship has failed more than it has succeeded. One example is the Blitz of 1940, the German bombing of metropolitan areas and infrastructure, which only hardened British resistance to Nazi Germany. After Dunkirk, several British parliamentarians favored a separate peace with Germany; after the Blitz, British politicians were united against Germany, and Britain fought Nazi Germany single-handedly until the USSR and the United States entered the war.

A strategic cyber campaign will fail to destabilize the targeted society if the institutions remain intact following the assault or successfully operate in a degraded environment. From an American perspective, it is crucial for a defender to ensure the cyberattacks never reach the magnitude that forces society over the threshold to entropy. In America’s favor, the threshold is far higher than our potential adversaries’. By guarding what we believe in – fairness, opportunity, liberty, equality, and open and free democracy – America can become more resilient.

We generally underestimate how stable America is, especially compared to potential foreign adversaries. There is a deterrent embedded in that fact: the risks for an adversary might outweigh the potential gains.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.

At Machine Speed in Cyber – Leadership Actions Close to Nullified

In my view, one of the major weaknesses in cyber defense planning is the perception that there is time to lead a cyber defense while under attack. A major attack is likely to be automated and premeditated. If it is automated, the systems will execute the attacks at computational speed. In that case, no political or military leadership would be able to lead, for one simple reason – it has already happened before they can react.

A premeditated attack is planned over a long time, maybe years, but if automated, the execution of a massive number of exploits will be limited to minutes. Therefore, future cyber defense will rely on components of artificial intelligence that can assess, act, and mitigate at computational speed. Naturally, this is a development that does not happen overnight.

In an environment where the actual digital interchange occurs at computational speed, the only thing the government can do is to prepare, give guidelines, set rules of engagement, disseminate knowledge to ensure a cyber-resilient society, and let the coders prepare the systems to survive in a degraded environment.

Another important factor is how these cyber defense measures can be reverse-engineered, and how visible they are in a pre-conflict probing wave of cyberattacks. If the preset cyber defense measures can be “measured up” early in the probing phase of a cyber conflict, it is likely that, through reverse engineering, the defense measures can become a force multiplier for future attacks – instead of bulwarks against them.

So we enter the land of “damned if you do, damned if you don’t”: if we pre-stage the conflict with artificial intelligence-supported decision systems that lead the cyber defense at computational speed, we are also vulnerable to being reverse-engineered, and the artificial intelligence becomes tangible stupidity.

We are in the early dawn of cyber conflicts. We can see the silhouettes of what is coming, but one thing is already very clear: the time factor. Politicians and military leadership will have no factual impact on events in real time in conflicts occurring at computational speed, so the focus has to be on the front end. The leadership is likely to have the highest impact by addressing, pre-conflict, what has to be done to ensure resilience when under attack.

Jan Kallberg

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.

Private Hackbacks can be Blowbacks

The demands to legalize corporate hack backs are growing – and there would be significant interest among private actors in hacking back if it were lawful. But if private companies obtained the right to hack back legally, the risk of blowback would likely outweigh the opportunity and potential gains. The proponents of private hackback tend to build their case on a set of assumptions. If these assumptions are not valid, private hackback likely becomes a federal problem through uncontrolled escalation and spillover from these private counterstrikes.

-The private companies can attribute.

The idea of legalizing hack back operations rests on the assumption that the defending company can attribute the initial attack with pinpoint precision. If a defending company is given the right to strike back, it is on the assumption that the counterstrike can, beyond doubt, target the entity that was the initial attacker. If attribution is not achieved with satisfactory granularity and precision, a right to cyber counterstrike would be a right to strike anyone based on suspicion of involvement. Very few private entities can today determine with high granularity who attacked them and trace the attack back so the counterstrike can be accurate. A right to strike back without that precision, in the absence of norms, would increase entropy and deviation from emerging norms and international governance.
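The attribution problem can be sketched in a few lines: the victim’s logs show only the last hop of a relay chain, not the origin. The hostnames below are hypothetical; the point is simply that the observed source and the true origin need not match.

```python
# Illustrative sketch: observed source vs. true origin of a relayed attack.
# All hosts are hypothetical examples.

relay_chain = [
    "attacker.example",          # true origin
    "compromised-univ.example",  # hop 1
    "rented-vps.example",        # hop 2
    "iot-botnet-node.example",   # hop 3: the only address the victim sees
]

def observed_source(chain: list[str]) -> str:
    """The defender's logs show only the immediate peer: the final relay."""
    return chain[-1]

def true_origin(chain: list[str]) -> str:
    return chain[0]
```

A counterstrike aimed at the observed source would hit the owner of the last relay, not the attacker, which is exactly the misattribution risk described above.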

-The counterstriking corporations can engage a state-sponsored organization.

Things might spin out of control. The old small-unit tactics rule applies: anyone can open fire; only geniuses can get out unharmed. The counterstriking corporation may perceive that it can handle the adversary, believing it to be an underfunded group of college students hacking for fun – and later find out that it is a heavily funded, highly able foreign state agency. The counterstriking company has limited means to determine, before a counterstrike, the size of the initial attacker and the full spectrum of resources available to it. A probing counterattack would not be enough to determine the operational strength, ability, and intent of the potential adversary. Embedded in the assumption that the counterstriking corporation can handle any adversary is the assumption that there will be no uncontrolled escalation.

-The whole engagement is locked in between parties A and B.

If there is an assumption of no uncontrolled escalation, then a follow-up assumption is that the engagement creates a deterrence that prevents the initial attacker from continuing to attack. The defending company needs to be able to counterattack with a magnitude that deters the initial attacker from further attacks; once deterrence is established, the digital interchange will cease. The question is how to establish deterrence – and deter from which array of cyber operations – without causing any damage. If deterrence cannot be established, the interchange would likely escalate, or devolve into a strict tit-for-tat game without any decisive conclusion that continues until the initial attacker decides to end it.

-The initial attacker has no second strike option.

The assumption here is that the interchange will occur with a specific set of cyber weapons and aim points, so it cannot lead to further damage: even if the initial attacker intended to rearrange the targets, aims, and potential impacts, it would have no option to do so, and any new strikes would stay within the same realm and values as the earlier ones. In reality, the initial attacker can retarget at its discretion and strike unprecedented targets, and it is more likely than not that the initial attacker has second-strike options that the initial target is unaware of at the moment of counterstrike.

-The counterstriking company has no interests or assets in the initial attacker’s jurisdiction.

If a multinational company (MNC) counterstrikes a state agency or state-sponsored attacker, the MNC faces the risk of repercussions if it has assets in the jurisdiction of the initial attacker. Major MNCs have interests, subsidiaries, and assets in hundreds of jurisdictions; the Fortune 500 have assets in the U.S., China, Russia, India, and numerous others. If MNC “A” counterstrikes a cyberattack from China, what are the risks for the subsidiary “A in China”? A related issue arises if, through improper attribution, MNC “A” counterstrikes from the U.S. against foreign digital assets that had no connection with the initial attack, which constitutes a new, unjustifiable, and illegal attack on foreign digital assets. The majority of the potential source countries for hacking attacks are totalitarian and authoritarian states. A totalitarian state can easily, and it is within its reach, shift the conflict into another domain, seize property, arrest innocent business travelers, and act in other ways in response to a corporate hackback. I am not saying that we should let totalitarian regimes act any way they want – I am only saying that this is not for private corporations to engage in and seek to resolve. Interacting with foreign governments is a government domain.

The idea of legalizing corporate hackbacks could increase distrust and entropy, and be counterproductive to the long-term goal of a secure and safe Internet.

Jan Kallberg, PhD

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy.

The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.

The Zero Domain – Cyber Space Superiority through Acceleration beyond the Adversary’s Comprehension

THE ZERO DOMAIN

In the upcoming Fall 2018 issue of the Cyber Defense Review, I present a concept – the Zero Domain. The Zero Domain concept is battlespace singularity through acceleration. There is a point along the trajectory of accelerated warfare where only one warfighting nation comprehends what is unfolding and sees the cyber terrain; it is an upper barrier for comprehension beyond which the acceleration makes the cyber engagement unilateral.

I intentionally use the term accelerated warfare because it implies a driver and a command of the events unfolding, even if only by one of the two actors, whereas hyperwar suggests events unfolding without control or the ability to fully steer the engagement.

It is questionable and even unlikely that cyber supremacy can be reached by overwhelming capabilities manifested by stacking more technical capacity and adding attack vectors. The alternative is to use time as the vehicle to supremacy by accelerating the velocity of the engagements beyond the speed at which the enemy can target, precisely execute, and comprehend the events unfolding. The space created beyond the adversary's comprehension is titled the Zero Domain. The military traditionally sees the battle space as the land, sea, air, space, and cyber domains. When the fight moves beyond the adversary's comprehension, no traditional warfighting domain serves as the battle space; it is neither a vacuum nor an unclaimed terra nullius, but the Zero Domain. In the Zero Domain, cyberspace superiority surfaces as the outfall of accelerated time and a digital, space-separated singularity that benefits the more rapid actor. The Zero Domain has a time space that is only accessible to the rapid actor and a digital landscape that is not accessible to the slower actor due to the execution velocity of the enhanced accelerated warfare. Velocity achieves cyber Anti-Access/Area Denial (A2/AD), which can be reached without active initial interchanges by accelerating the execution and cyber ability in a solitaire state. During this process, any adversarial probing engagements only affect the actor on the approach to the Comprehension Barrier, and once the actor arrives in the Zero Domain, a complete state of A2/AD is present. From that point forward, the actor that reached the Zero Domain holds cyberspace singularity: the accelerated actor is the only actor that can understand the digital landscape, can engage unilaterally without an adversarial ability to counterattack or interfere, and holds the ability to decide when, how, and where to attack.
In the Zero Domain, the accelerated singularity forges the battlefield gravity and thrust into a single power that denies adversarial cyber operations and acts as one force of destruction, extraction, corruption, and exploitation of targeted adversarial digital assets.

When the Comprehension Barrier is broken, the first of the adversary's final points of comprehension to fall is human deliberation, directly followed by pre-authorization and machine learning; once these final points of comprehension are passed, the rapid actor enters the Zero Domain.

Key to victory has been the concept of getting inside the opponent's OODA loop, and thereby distorting, degrading, and derailing any part of the opponent's OODA. In accelerated warfare beyond the Comprehension Barrier, there is no need to be inside the opponent's OODA loop, because the accelerated warfare concept removes the OODA loop for the opponent and by doing so decapitates the opponent's ability to coordinate, seek effect, and command. In the Zero Domain, the opposing force has no contact with its enemy, and its OODA loop has evaporated.

The Zero Domain is the warfighting domain where accelerated velocity in the warfighting operations removes the enemy's presence. It is the domain with zero opponents. It is not area denial, because the enemy is unable to accelerate to the level at which they can enter the battle space, and it is not access denial, because the enemy has never been part of the later fight since the Comprehension Barrier was broken.

Even if adversarial nations invest heavily in quantum, machine learning, and artificial intelligence, I am not convinced that these adversarial authoritarian regimes can capitalize on their potential technological peer-status to America. The Zero Domain concept has an American advantage because we are less afraid of allowing degrees of freedom in operations, whereas totalitarian and authoritarian states are slowed down by their culture of fear and need for control. An actor that is slowed down will lower the threshold for the Comprehension Barrier and enable the American force to reach the Zero Domain earlier in the future fight and establish information superiority as a confluence of cyber and information operations.

Jan Kallberg, PhD

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy.

The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.

When Everything Else Fails in an EW Saturated Environment – Old School Shortwave

( I wrote this opinion piece together with Lt. Col. Stephen Hamilton and Capt. Kyle Hager)

The U.S. Army’s ability to employ high-frequency radio systems has atrophied significantly since the Cold War as the United States transitioned to counterinsurgency operations. Alarmingly, as hostile near-peer adversaries reemerge, it is necessary to re-establish HF alternatives should very-high frequency, ultra-high frequency or SATCOM come under attack. The Army must increase training to enhance its ability to utilize HF data and voice communication.

The Department of Defense’s focus over the last several years has primarily been Russian hybrid warfare and special forces. If there is a future armed conflict with Russia, it is anticipated ground forces will encounter the Russian army’s mechanized infantry and armor.

A potential future conflict with a capable near-peer adversary, such as Russia, is notable in that they have heavily invested in electromagnetic spectrum warfare and are highly capable of employing electronic warfare throughout their force structure. Electronic warfare elements deployed within theaters of operation threaten to degrade, disrupt or deny VHF, UHF and SATCOM communication. In this scenario, HF radio is a viable backup mode of communication.

The Russian doctrine favors rapid employment of nonlethal effects, such as electronic warfare, in order to paralyze and disrupt the enemy in the early hours of conflict. The Russian army has inherited a legacy from the Soviet Union and its integrated use of electronic warfare as a component of a greater campaign plan, enabling freedom of maneuver for combat forces. The rear echelons are postured to attack utilizing either a single envelopment, attacking the defending enemy from the rear, or a double envelopment, seeking to destroy the main enemy forces by unleashing the reserves. Ideally, a Russian motorized rifle regiment's advance guard battalion makes contact with the enemy and quickly engages on a broader front, identifying weaknesses that permit the regiment's rear echelons to conduct flanking operations. These maneuvers are generally followed by another motorized regiment flanking, producing a double envelopment and destroying the defending forces.

Currently, competency with HF radio systems within the U.S. Army is limited; however, there is a strong case to train and ensure readiness in the utilization of HF communication. Even in EMS-denied environments, HF radios can provide stable, beyond-line-of-sight communication, permitting the ability to initiate a prompt global strike. While HF radio equipment is also vulnerable to electronic attack, it can be difficult to target due to near-vertical incidence skywave (NVIS) signal propagation. This propagation method reflects signals off the ionosphere in an EMS-contested environment, establishing communications beyond the line of sight. Due to the signal path, targeting an HF transmitter is much more difficult than targeting VHF and UHF radios, which transmit via line of sight.

The expense to attain an improved HF-readiness level is low in comparison to other Army needs, yet with a high return on investment. The equipment has already been fielded to maneuver units; the next step is Army leadership prioritizing soldier training and employment of the equipment in tactical environments. This will posture the U.S. Army in a state of higher readiness for future conflicts.

Dr. Jan Kallberg, Lt. Col. Stephen Hamilton and Capt. Kyle Hager are research scientists at the Army Cyber Institute at West Point and assistant professors at the United States Military Academy.

Utilizing Cyber in Arctic Warfare

The change from a focus on counter-insurgency to near-peer and peer-conflicts has also introduced the likelihood, if there is a conflict, for a fight in colder and frigid conditions. The weather conditions in Korea and Eastern Europe are harsh during winter time, with increasing challenges the farther north the engagement is taking place. In traditional war theaters, the threats to your existence line up as follows: enemy, logistics, and climate. In a polar climate, it is reversed: climate, logistics, and the enemy.

An enemy will engage you and seek to take you on at different occasions, but the climate will be ever-present. The battle for your own physical survival in staying warm, eating, and seeking rest can create unit fatigue and lower the ability to fight within days, even for trained and able troops. The easiest way to envision how three feet of snow affects you is to think about the mobility of walking in water up to your hip; to compensate, you either ski or use low-ground-pressure, wide-tracked vehicles, such as specialized small unit support vehicles.

The climate and the snow depth also affect equipment. The lethality of your regular weapons is lowered. Gunfire accuracy goes down as charges burn slower in an arctic subzero environment. Mortar rounds are less effective than under normal conditions because the snow captures shrapnel. Any heat, whether from weapons, vehicles, or your body, will make the snow melt and then freeze to ice. If not cleaned, weapons will jam. In a near-peer or peer conflict, units are engaged for longer periods and the exposure to the climate can last months.

I say all this to set the stage. Arctic warfare takes place in an environment that often lacks roads and infrastructure, offers minimal logistics, and has snow and ice blocking mobility. The climate affects both you and the enemy; once you are comfortable in this environment, you can work on the enemy's discomfort.

The unique opportunity for cyberattacks in an Arctic conflict is, in my opinion, the ability to destroy a small piece of a machine or waste electric energy.

First, the ability to replace and repair equipment is limited in an arctic environment — the logistic chain is weak and unreliable, and there are no facilities that can effectively support needed repairs, so the whole machine is a loss. If a cyberattack destroys a fuel pump in a vehicle, the targeted vehicle could be out of service for a week or more before it is repaired. The vehicle might have to be abandoned as units continue to move over the landscape. Units that operate in the Arctic have a limited logistic trail and ability to carry spare parts and reserve equipment. A systematic attack on a set of equipment can paralyze the enemy.

Second, electric energy waste is extremely stressful for any unit targeted. The Arctic has no urban infrastructure and often no existing power line that can provide electric power to charge batteries and upkeep electronic equipment. If there are power lines, they are few and likely already targeted by long-range enemy patrols.

The winter does not have enough sun to provide energy for solar panels, if the sun even gets above the horizon (far enough north, the sun is a theoretical concept for several months). Batteries do not hold a charge when it gets colder (a battery that holds a 100-percent charge at 80 degrees Fahrenheit has its capacity halved to 50 percent at 0 degrees Fahrenheit). Generators demand fuel from a limited supply chain and generate not only a heat signature but also noise. The Arctic night is clear, with no noise pollution, so a working generator can be picked up by a long-range ski patrol from 500 yards, risking an ambush. The loss of, or intermittent ability to use, electronics and signal equipment due to power issues reduces and degrades situational awareness, command and control, and the ability to call for strikes, and blinds the targeted unit.
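The battery figures above can be sketched as a simple model. The two anchor points (100 percent at 80 °F, 50 percent at 0 °F) come from the text; the linear interpolation between them, and the clamping at the extremes, are illustrative assumptions of mine, not measured battery chemistry:

```python
def battery_capacity(temp_f: float) -> float:
    """Approximate fraction of rated battery capacity at a temperature (deg F).

    Anchored to the two points quoted in the text (100% at 80 F, 50% at 0 F)
    and linearly interpolated between them; clamped to [0, 1]. Illustrative
    only, real chemistries are nonlinear.
    """
    frac = 0.5 + (0.5 / 80.0) * temp_f   # line through (0 F, 0.5) and (80 F, 1.0)
    return max(0.0, min(1.0, frac))

# A radio battery rated for 12 hours at 80 F lasts roughly this long at -20 F:
hours_at_minus_20 = 12 * battery_capacity(-20.0)
```

The practical point for planners is the slope: every 16 degrees of cooling costs roughly ten percent of capacity under this toy model, which is why power budgeting dominates Arctic signal planning.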

Arctic warfare is a fight with low margins for errors, where climate guarantees that small failures can turn nasty, and even limited success with arctic cyber operations can tip the scales in your favor.

Jan Kallberg, PhD

Jan Kallberg is a research fellow/research scientist at the Army Cyber Institute at West Point. As a former Swedish reserve officer and light infantry company commander, Kallberg has personal experience facing Arctic conditions. The views expressed herein are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Cyber Attacks with Environmental Impact – High Impact on Societal Sentiment

In the cyber debate, there is a significant, if not totally over-shadowing, focus on the information systems themselves – the concerns don’t migrate to secondary and tertiary effects. For example, the problem with vulnerable industrial control systems in the management of water-reservoir dams is not limited to the digital conduit and systems. It is the fact that a massive release of water can create a flood that affects hundreds of thousands of citizens. It is important to look at the actual effects of a systematic or pinpoint-accurate cyberattack – and go beyond the limits of the actual information system.

As an example, a cascading effect of failing dams in a larger watershed would have a significant environmental impact. Hydroelectric dams and reservoirs are controlled using different forms of computer networks, either cable or wireless, and the control networks are connected to the Internet. A breach in the cyber defenses of the electric utility company leads all the way down to the logic controllers that instruct the electric machinery to open the floodgates. Many hydroelectric dams and reservoirs are designed as a chain of dams in a major watershed to create an even flow of water that is utilized to generate energy. A cyberattack on several upstream dams would release water that increases the pressure on downstream dams. With rapidly diminishing storage capacity, downstream dams risk being breached by the oncoming water. Eventually, it can turn into a cascading effect through the river system which could result in a catastrophic flood event.

The traditional cyber security way to frame the problem is the loss of function and disruption in electricity generation, but that overlooks the potential environmental effect of an inland tsunami. This is especially troublesome in areas where the population and the industries are dense along a river; examples would include Pennsylvania, West Virginia and other areas with cities built around historic mills.

We have seen that events close to citizens' immediate environment affect them strongly, which makes sense. A perceived threat to their immediate environment creates rapid public shifts of belief; erodes trust in government; generates extreme pressure, under an intense, short time frame, for government to act to stabilize the situation; and provokes vocal public outcry.

One such example is the Three Mile Island accident, which created significant public turbulence and fear – an incident that still has a profound impact on how we view nuclear power. The Three Mile Island incident turned U.S. nuclear policy in a completely different direction and halted all new construction of nuclear plants, a halt that persists today, forty years later.

For a covert state actor that seeks to cripple our society, embarrass the political leadership, change policy and project to the world that we cannot defend ourselves, environmental damages are inviting. An attack on the environment feels, for the general public, closer and scarier than a dozen servers malfunctioning in a server park. We are all dependent on clean drinking water and non-toxic air. Cyber attacks on these fundamentals for life could create panic and desperation in the public – even if the reacting citizens were not directly affected.

It is crucial for cyber resilience to look beyond the information systems. The societal effect is embedded in the secondary and tertiary effects that need to be addressed, understood and, to the limit of what we can do, mitigated. Cyber resilience goes beyond the digital realm.

Jan Kallberg, PhD

The time to act is before the attack


In my view, one of the major weaknesses in cyber defense planning is the perception that there is time to lead a cyber defense while under attack. It is likely that a major attack will be automated and premeditated. If it is automated, the systems will execute the attacks at computational speed. In that case, no political or military leadership would be able to lead, for one simple reason – it has already happened before they can react.

A premeditated attack is planned for a long time, maybe years, and if automated, the execution of a massive number of exploits will be limited to minutes. Therefore, the future cyber defense would rely on components of artificial intelligence that can assess, act, and mitigate at computational speed. Naturally, this is a development that does not happen overnight.

In an environment where the actual digital interchange occurs at computational speed, the only thing the government can do is to prepare, give guidelines, set rules of engagement, disseminate knowledge to ensure a cyber resilient society, and let the coders prepare the systems to survive in a degraded environment.

Another important factor is how these cyber defense measures can be reverse engineered and how visible they are in a pre-conflict probing wave of cyberattacks. If the preset cyber defense measures can be “measured up” early in a probing phase of a cyber conflict, it is likely that the defense measures can, through reverse engineering, become a force multiplier for future attacks – instead of bulwarks against them.

So we enter the land of “damned if you do, damned if you don’t”: if we pre-stage the conflict with artificial intelligence-supported decision systems that lead the cyber defense at computational speed, we are also vulnerable to being reverse engineered, and the artificial intelligence becomes tangible stupidity.

We are in the early dawn of cyber conflicts. We can see the silhouettes of what is coming, but one thing becomes very clear – the time factor. Politicians and military leadership will have no factual impact on the actual events in real time in conflicts occurring at computational speed, so the focus then has to be on the front end. The leadership is likely to have the highest impact by addressing what has to be done pre-conflict to ensure resilience when under attack.

Jan Kallberg, PhD

Artificial Intelligence (AI): The risk of over-reliance on quantifiable data

The rise of interest in artificial intelligence and machine learning has a flip side: it might not be so smart if we fail to design the methods correctly. A question out there – can we compress reality into measurable numbers? Artificial Intelligence relies on what can be measured and quantified, risking an over-reliance on measurable knowledge. The challenge, as with many other technical problems, is that it all ends with humans who design and assess according to their own perceived reality. The designer's bias, perceived reality, weltanschauung, and outlook – everything goes into the design. The limitations are not on the machine side; the humans are far more limiting. Even if the machines learn from a point forward, it is still a human that stakes out the starting point and the initial landscape.

Quantifiable data has historically served America well; it was a part of the American boom after the Second World War when America was one of the first countries that took a scientific look on how to improve, streamline, and increase production utilizing fewer resources and manpower.

The numbers have also misled. The Vietnam-era SECDEF Robert McNamara used the numbers to tell how to win the Vietnam War – the numbers clearly indicated how to reach a decisive military victory. In a post-Vietnam book titled “The War Managers,” retired Army general Douglas Kinnard visualizes the almost bizarre world of seeking to fight the war through quantification and statistics. Kinnard, who later taught at the National Defense University, surveyed the actual support for these methods, utilizing fellow generals who had served in Vietnam as respondents. These generals considered the concept of assessing progress in the war by body counting useless: only two percent of the surveyed generals saw any value in the practice. Why were the Americans counting bodies? Likely because it was quantifiable and measurable. It is a common error in research design to seek out the variables that produce accessible, quantifiable results, and McNamara was at the time almost obsessed with numbers and their predictive power. McNamara is not the only one who has relied too heavily on the numbers.

In 1939, the Nazi-German foreign minister Ribbentrop, together with the German High Command, studied and measured up the French-British ability to mobilize and to start a war with little advance warning. The Germans' quantified assessment was that the Allies were unable to engage in a full-scale war on short notice, and the Germans believed that the numbers were identical with the policy reality – that Allied politicians would understand their limits, and the Allies would not go to war over Poland. So Germany invaded Poland and started the Second World War. The quantifiable assessment was correct and led to Dunkirk, but the grander assessment was off and underestimated the British and French will to take on the fight, which led to at least 50 million dead, half of Europe behind the Soviet Iron Curtain, and the destruction of the Nazi regime itself. The British willingness to fight the war to the end, the British ability to convince the US to provide resources to their effort, and the events unfolding thereafter were never captured in the data. The German assessment was a snapshot of the British and French war preparations in the summer of 1939 – nothing else.

Artificial Intelligence is as smart as the numbers we feed it. Ad notam.

The potential failure is hidden in selecting, assessing, designing, and extracting the numbers that feed Artificial Intelligence. The risk of grave errors in decisionmaking, escalation, and avoidable human suffering and destruction is embedded in our future use of Artificial Intelligence if we do not pay attention to the data that feed the algorithms. Data collection and aggregation is the weakest link in the future of machine-supported decisionmaking.

Jan Kallberg is a Research Scientist at the Army Cyber Institute at West Point and an Assistant Professor in the Department of Social Sciences (SOSH) at the United States Military Academy. The views expressed herein are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Spectrum Warfare


To many ears, spectrum sounds like old-fashioned Cold War jamming – crude, brute electromagnetic overkill. In reality, though, the military needs access to spectrum, and more of it.

Smart defense systems need to communicate, navigate, identify, and target. It does not matter how cyber secure our platforms are if we are denied access to electromagnetic spectrum. Every modern high tech weapon system is a dud without access to spectrum. The loss of spectrum will evaporate the American military might.

Today, though, other voices are becoming stronger, desiring to commercialize military spectrum. Why does the military need an abundance of spectrum, these voices ask. It could be commercialized and create so much joy with annoying social media and stuff that does not matter beyond one of your life-time minutes.

It is a relevant question. We, as an entrepreneurial and “take action” society, see the opportunity to utilize parts of the military spectrum to launch wireless services and free up spectrum space for all these apps and the Internet of Things that is just around the corner of the digital development of our society and civilization. In the eyes of the entrepreneurs and their backers, the military sits on unutilized spectrum that could be put to good use – and there could be a financial harvest of the military electromagnetic wasteland.

The military needs spectrum in the same way a football player needs green grass to plan and execute his run. If we limit the military's access to necessary spectrum, it will, to extend the football metaphor, be just a stack of players not moving and unable to win. Our military will not be able to operate effectively.

We invite people to talk about justice, democracy, and freedom, to improve the world, but I think it is time for us to talk to our fellow man about electromagnetic spectrum, because the bulwark against oppression and totalitarian regimes depends on access to it.

Jan Kallberg, PhD

Humanitarian Cyber Operations – Rapid, Targeted, and Active Deterrent

Cyber operations are designed to be a tool for defense, security and war. In the same way as harmless computer technology can be used as dual-purpose tools for war, tools of war can be used for humanity, to protect the innocent, uphold respect for our fellow beings and safeguard human rights.

When a nation-state acts against its population and risks their welfare through repression, violence, and exposure to mistreatment, there is a possibility for the world community to take action by launching humanitarian cyber operations to protect the targeted population. In the non-cyber world, atrocities are met with military intervention using the principle of “responsibility to protect,” which allows foreign interference in domestic affairs to protect a population from a repressive and violent ruler without triggering an act of war. If a state fails to protect the welfare of its citizens, then the state that commits atrocities against its population is no longer protected from foreign intervention.

Intervention in 2018 does not need to be a military intervention with troops on the ground but can instead be a digital intervention through humanitarian cyber operations. A cyber humanitarian intervention not only capitalizes on the digital footprint but also penetrates the violent regime's information sources, command structure, and communications. The growing digital footprint in repressive regimes creates an opportunity for early prevention and interception against the perpetration of atrocities. Over the last decade, the totalitarian states' digital footprint has grown larger and larger.

As an example, Iran had 2 million smartphones in 2014, but had already reached 48 million smartphones in 2017. Today, about 3 out of 4 Iranians live in metropolitan areas. About half of the Iranian population is under 30 years old with new habits of chatting, sharing and wireless connectivity. In North Korea, the digital footprint has grown as rapidly. In 2011, there were no cellphones in North Korea outside of a very narrow elite circle. In 2017, surveys assessed that over 65 percent of all North Korean households had a cellphone.

No totalitarian and repressive state has been able to limit the digital footprint, which continues to expand every year. Repressive regimes rely on the computer to lead and orchestrate repressive actions and crimes against their populations. Even if the actual perpetrators of atrocities avoid digital means, the activity will be picked up as intelligence fragments when talked about, discussed, shared, eyewitnessed, and silenced. The planning and initiation of atrocities have a logistic trail of troop movements, transportation, orders, communications, and concentration of resources.

If there is a valid concern for the safety of the population in a totalitarian state, then free, democratic, and responsible states can act. The United Nations' accepted principle, “responsibility to protect,” provides justification for the world community, or democratic states that decide to act, to launch humanitarian cyber operations utilizing military cyber capacity in a humanitarian role.

Humanitarian cyber operations enable faster response, the retrieval of information necessary for the world community’s decision making to act conventionally, and they remove the secrecy surrounding the perpetrated acts of totalitarian and repressive regimes. The exposure of human rights crimes in progress can serve as a deterrent and interception against a continuation of these crimes. By transposing the responsibility to protect from international humanitarian law into cyber, repressive regimes lose their protection against foreign cyber intervention if valid human rights concerns can be raised.

Humanitarian cyber operations can act as a deterrent because perpetrators will be held accountable. International humanitarian law is dependent on evidence gathering, and laws might not be upheld if evidence gathering fails, even if the international community promotes decisive legal action. Humanitarian cyber operations can support the prosecution of crimes against humanity and generate quality evidence. The prosecution of the human rights violations in the Balkan civil wars during the 1990s failed in many cases due to lack of evidence. Humanitarian cyber operations can capture evidence that will hold perpetrators accountable.

Humanitarian cyber operations are policy tools for a free democratic nation already in peacetime to legally penetrate and extract information from the information systems of an authoritarian potential adversary that represses their people and endangers the welfare of their citizens. Conversely, the adversary cannot systematically attack the democratic nation because that is likely an act of war with consequences to follow. There is an opportunity embedded in humanitarian cyber operations for humanity and democracy.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.