The War Game Revival

 

The sudden fall of Kabul, when the Afghan government imploded in a matter of days, shows how hard it is to predict and assess future developments. War games have had a revival in recent years as a way to better understand potential geopolitical risks. War games are tools that support our thinking and force us to accept that developments we did not anticipate can happen, but games also have a flip side. War games can act as afterburners for our confirmation bias and inward, self-confirming thinking. Would an Afghanistan-focused wargame designed two years ago have included a potential outcome of a governmental implosion in a few days? Maybe not.

Awareness of how bias plays into the games is key to success. The wargame revival occurs for good reason. Well-designed war games make us better thinkers; the games can be a cost-effective way to simulate various outcomes, and you can go back and repeat the game with lessons learned.
Wargames are rules-driven; the rules create the mechanical underpinnings that decide outcomes, either success or failure. Rules are condensed assumptions. Therein resides a significant vulnerability: are we designing games that operate within the realm of our own aggregated bias?
We operate in large organizations that have modeled how things should work. The timely execution of missions is predictable according to doctrine. In reality, things don’t play out the way we planned; we know it, but the question is: how do you quantify a variety of outcomes and codify them into rules?

Our war games and the lessons learned from them are never perfect. The games are intellectual exercises to think about how situations could unfold and how to deal with the results. In the interwar years, the U.S. made a sound decision to focus on Japan as a potential adversary. Significant time and effort went into war planning based on studies and wargames that simulated the potential Pacific fight. The U.S. assumed one major decisive battle between the U.S. Navy and the Imperial Japanese Navy, where lines of battleships fought it out at a distance. In the plans, that was the crescendo of the Pacific war. The plans missed the technical advances and the importance of airpower, aircraft carriers, and submarines. Who was setting up the wargames? Who created the rules? A cadre of officers who had served in the surface fleet and knew how large ships fought. There is naturally more to the story of interwar war planning, but as an example, this short comment serves its purpose.

How do we avoid creating war games that only confirm our predispositions and lure us into believing that we are prepared, instead of presenting the war we will have to fight?

How do you incorporate all these uncertainties into a war game? Completely, it is impossible, but keeping the biases at least partly mitigated preserves the value.

Studying historical battles can also give insights. In the 1980s, sizeable commercial war games featured massive maps, numerous die-cut unit counters, and hours of playtime. One of these games was SPI’s “Wacht am Rhein,” a game covering the Battle of the Bulge from start to finish. The game visualizes one thing: it doesn’t matter how many units you can throw into battle if they are stuck in a traffic jam. Historical war games can teach us lessons that need to be maintained in our memory to avoid repeating the mistakes of the past.

Bias in wargame design is hard to root out. The viable way forward is to challenge the assumptions and the rules. Outsiders do it better than insiders because they will see the “officially ignored” flaws. These outsiders must be cognizant enough to understand the game but have minimal ties to the outcome, so they are free to voice their opinions. There are experts out there. Commercial lawyers challenge assumptions and are experts in asking questions; it can be worth a few billable hours to ask them to find the flaws. Colleagues are not suited to challenge the “officially ignored” flaws because they are marinated in the ideas that established those flaws. Academics dependent on DOD funding could gravitate toward accepting the “officially ignored” flaws as well, which is just fundamental human behavior; the fewer ties to the initiator of the game, the better.

Another way to address uncertainty and bias is repeated games. In the first game, cyber has the effects we anticipate. In the second game, cyber has limited effect and turns out to be an operational dud. In the third game, cyber effects proliferate and have a more significant impact than we anticipated. I use these quick examples to show that there is value in repeated games. The repeated games become a journey of realization and afterthought due to the variety of factors and outcomes. Afterward, we can use our logic and understanding to arrange the outcomes to understand reality better. Repeated games limit the range and impact of specific biases due to the variety of conditions.
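The value of repeated games under varied assumptions can be sketched in a few lines of code. The following is a hypothetical toy model, not drawn from any actual wargame: the engagement logic and all numbers are invented for illustration. The point is that replaying the same game while varying only the rule governing cyber effects exposes a spread of outcomes that a single game, built on a single assumption, would hide.

```python
import random

def run_game(cyber_multiplier, seed):
    """One toy engagement: blue's baseline combat power is boosted by cyber effects.

    The multiplier encodes the rule (condensed assumption) about how well
    cyber works; the random draw stands in for the fog of a single playthrough.
    """
    rng = random.Random(seed)
    blue_power = 100 * (1 + cyber_multiplier * rng.uniform(0.5, 1.0))
    red_power = 120  # fixed opposing force, an assumed baseline
    return "blue" if blue_power > red_power else "red"

# Series 1: cyber works as anticipated; series 2: cyber is an operational dud;
# series 3: cyber effects proliferate beyond expectations.
assumptions = {"anticipated": 0.3, "dud": 0.0, "proliferated": 0.8}
results = {}
for label, multiplier in assumptions.items():
    outcomes = [run_game(multiplier, seed) for seed in range(100)]
    results[label] = outcomes.count("blue")
    print(label, results[label], "blue wins out of 100")
```

Under the "dud" assumption blue never wins, under "proliferated" blue always wins, and the "anticipated" case lands somewhere in between. A single game would have reported only one of these worlds as the answer.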

The revival of wargaming is needed because wargaming can be a low-cost, high-return intellectual endeavor. Hopefully, we can navigate away from the risks of groupthink and confirmation bias embedded in poor design. The intellectual journey that war games take us on will make our current and future decision-makers better equipped to understand an increasingly complex world.

 

Jan Kallberg, Ph.D.

 

CYBER IN THE LIGHT OF KABUL – UNCERTAINTY, SPEED, ASSUMPTIONS

 

There is a similarity between the cyber and intelligence communities (IC): we are both dealing with a denied environment where we have to assess the adversary based on limited verifiable information. The recent events in Afghanistan, with the Afghan government and its military imploding, and the events that followed were unanticipated and ran against the ruling assumptions. The assumptions were off, and the events that unfolded were unprecedented and fast. The Afghan security forces evaporated in ten days facing a far smaller enemy, leading to a humanitarian crisis. There is no blame in any direction; it is evident that this was not the expected trajectory of events. But still, in my view, there is a lesson from the events in Kabul that applies to cyber.

The high degree of uncertainty, the speed in both cases, and our reliance on assumptions that are not always vetted beyond our inner circles make the analogy work. According to the media, in Afghanistan there was no clear strategy to reach a decisive outcome. You could say the same about cyber. What is a decisive cyber outcome at a strategic level? Are we just staring at tactical noise, from ransomware to unsystematic intrusions, when we should try to figure out the big picture instead?

Cyber is loaded with assumptions that we, over time, have accepted. The assumptions become our path-dependent trajectory, and in the absence of a grand nation-state-on-nation-state cyber conflict, the assumptions remain intact. The only reason cyber’s failed assumptions have not yet surfaced is the absence of full cyber engagement in a conflict. There is a creeping assumption that senior leaders will lead future cyber engagements; meanwhile, the data shows that the increased velocity of the engagements could nullify the time window for leaders to lead. Why do we want cyber leaders to lead? It is just how we do business; that is why we traditionally have senior leaders. John Boyd’s OODA loop (Observe, Orient, Decide, Act) has had a renaissance in cyber over the last three years. The increased velocity, supported by more capable hardware, machine learning, artificial intelligence, and massive data utilization, makes it questionable whether there is time for senior leaders to lead traditionally. The risk is that senior leaders get stuck in the first O of the OODA loop, just observing, or at best in the second O, orienting. It might be that there is no time to lead because events unfold faster than our leaders can decide and act. The way technology is developing, I have a hard time believing that there will be any significant senior leader input at critical junctures because the time window is so narrow.

Leaders will always lead by expressing intent, and that might be the only thing left. Instead of precise orders, do we train leaders and subordinates to be led by intent as a form of decentralized mission command?

Another dominant cyber assumption is critical infrastructure as the likely attack vector. Over the last five years, the default assumption in cyber has been that critical infrastructure is a tremendous national cyber risk. That might be correct, but there are numerous other risks. In 1983, the Congressional Budget Office (CBO) defined critical infrastructure as “highways, public transit systems, wastewater treatment works, water resources, air traffic control, airports, and municipal water supply.” By the Patriot Act of 2001, the scope had grown to include “systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.” By 2013, in Presidential Policy Directive 21 (PPD-21), the scope widened even further to encompass almost all of society. That concession stands at ballparks today count as critical infrastructure, together with thousands of other non-critical functions, shows a mission drift that undermines a national cyber defense. There is no guidance on what to prioritize and what we might have to live without at a critical juncture. The question is whether critical infrastructure matters to our potential adversaries as an attack vector, or whether it is critical infrastructure only because it matters to us. A potential adversary might want to attack infrastructure around American military facilities and slow down the transportation apparatus from bases to the port of embarkation (POE) to delay the arrival of U.S. troops in theater. The same adversary might make a different assessment, concluding that tampering with the American homeland only strengthens the American will to fight and popular support for a conflict.
The potential adversary might also utilize our critical infrastructure as a capture-the-flag training ground to train its offensive teams, but that activity has no strategic intent.

As broad as the definition is today, it is likely that the focus on critical infrastructure reflects what concerns us instead of what the adversary considers essential to reach strategic success. So today, as we witness the unprecedented events in Afghanistan, where our assumptions appear to have been off, it is good to keep in mind that cyber is heavy with untested assumptions. In cyber, what we know about the adversary and their intent is limited. We make assumptions based on potential adversaries’ behavior and doctrine, but an assumption it remains.
The failure to correctly assess Afghanistan should be a wake-up call for the cyber community, which also relies on unvalidated information.

The long-term cost of cyber overreaction

The default modus operandi when facing negative cyber events is to react, often leading to an overreaction. It is essential to highlight the cost of overreaction, which needs to be part of the calculation of when and how to engage. For an adversary probing cyber defenses, reactions provide information that can aggregate into a clear picture of the defender’s capabilities and preauthorization thresholds.

Ideally, potential adversaries cannot assess our strategic and tactical cyber capacities, but over time and numerous responses, the information advantage evaporates. A reactive culture triggered by cyberattacks provides significant information to a probing adversary, which seeks to understand underlying authorities and tactics, techniques and procedures (TTP).

The more we act, the more the potential adversary understands our capacity, ability, techniques, and limitations. I am not advocating a passive stance, but I want to highlight the price of acting against a potential adversary. With each reaction, that competitor gains certainty about what we can do and how. The political scientist Kenneth N. Waltz noted that the power of nuclear arms resides in what you could do, not in what you do. A large part of cyber force strength resides in uncertainty about what it can do, which should be difficult for a potential adversary to assess and gauge.

Why does it matter? In an operational environment where adversaries operate under the threshold for open conflict, in sub-threshold cyber campaigns, an adversary will probe to determine the threshold and to ensure that it can operate effectively in the space below it. If a potential adversary cannot gauge the threshold, it will curb its activities, as its cyber operations must remain adequately distanced from a potential, unknown threshold to avoid unwanted escalation.

Cyber was doomed to be reactionary from its inception; its inherited legacy from information assurance creates a focus on trying to defend, harden, detect and act. The concept is defending, and when the defense fails, it rapidly swings to reaction and counteractivity. Naturally, we want to limit the damage and secure our systems, but we also leave a digital trail behind every time we act.

In game theory, proportional responses lead to tit-for-tat games with no decisive outcome. The lack of a desired end state in a tit-for-tat game is essential to keep in mind as we discuss persistent engagement. In the same way that Colin Powell reflected on the conflict in Vietnam, operations without an endgame or a concept of what decisive victory looks like are engagements for the sake of engagements. Even worse, a tit-for-tat game with continuous engagements might be damaging, as it trains potential adversaries, who can copy our TTPs, to fight in cyber. Proportionality is a constant flow of responses that reveals friendly capabilities and makes potential adversaries more able.
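The indecisiveness of proportional response can be shown with a minimal sketch of the classic iterated tit-for-tat strategy. This is an illustrative toy, not a model of any actual doctrine; the moves and the echo effect are standard game-theory material. Once one side "attacks" (defects), two purely proportional players lock into endless alternating retaliation, and neither ever gains a lasting advantage.

```python
def tit_for_tat(opponent_history):
    """Proportional response: cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def play(rounds, first_move_a="C"):
    """Run an iterated game between two tit-for-tat players.

    C = cooperate (restraint), D = defect (attack). Player A's opening
    move is configurable so we can inject one initial attack.
    """
    hist_a, hist_b = [], []
    for r in range(rounds):
        move_a = first_move_a if r == 0 else tit_for_tat(hist_b)
        move_b = tit_for_tat(hist_a)  # B sees A's moves up to the previous round
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

# One initial defection by A echoes forever: the sides alternate retaliations
# with no decisive outcome, only a constant flow of revealing responses.
hist_a, hist_b = play(10, first_move_a="D")
print("A:", "".join(hist_a))  # DCDCDCDCDC
print("B:", "".join(hist_b))  # CDCDCDCDCD
```

The alternating pattern is the point: proportionality converts a single incident into an indefinite exchange, each round of which leaks capability information.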

There is no straight answer to how to react. A disproportional response to specific events increases the risk for the potential adversary, but it cuts both ways, as a disproportional response could create unwanted escalation.

The critical concern is that to maintain the ability to conduct decisive cyber operations for the nation, the extent of friendly cyber capabilities needs nearly intact secrecy to prevail at a critical juncture. It might be time to put stronger emphasis on intelligence gain-loss (IGL) assessment to answer the question of whether the defensive gain now outweighs the potential loss of ability and options in the future.

The habit of overreacting to ongoing cyberattacks undermines the ability to engage and defeat an adversary with speed and surprise when it matters most. Continuously reacting and flexing capabilities might fit the general audience’s perception of national ability, but it can also undermine the outlook for a favorable geopolitical cyber endgame.

Prioritize NATO integration for multidomain operations

Once U.S. forces implement the multidomain operations (MDO) concept, they will have entered a new level of complexity, with multidomain rapid execution and increased technical abilities and capacities. The U.S. modernization efforts enhance the country’s forces, but they also increase the technological disparity and the challenges for NATO. A future fight in Europe is likely to be a rapidly unfolding event, which could occur as a fait accompli attack on NATO’s eastern front: a rapid advance by the adversary to gain as much terrain and bargaining power as possible before major U.S. formations arrive from the continental United States.

According to the U.S. Army Training and Doctrine Command (TRADOC) Pamphlet 525-3-1, “The U.S. Army in Multi-Domain Operations 2028,” a “fait accompli attack is intended to achieve military and political objectives rapidly and then to quickly consolidate those gains so that any attempt to reverse the action by the [United States] would entail unacceptable cost and risk.”

In a fait accompli scenario, limited U.S. forces are in theater, and the initial fight relies on the abilities of the East European NATO forces. The mix is a high-low composition of small but highly capable rapid-response units from major NATO countries and regional friendly forces with less ability.

The wartime mobilization units and reserves of the East European NATO forces follow, to a high degree, a 1990s standard, with partial upgrades in communications and technical systems. They represent a technical generation behind today’s U.S. forces. Even if these dedicated NATO allies are launching modernization initiatives and replacing old legacy hardware (T-72, BTR, BMP, post-Cold War-donated NATO surplus) with modern equipment, it is a replacement cycle that will require up to two decades to complete. Smaller East European NATO nations tend to execute modernization programs faster, due to their limited number of units, but they still face the issue of integrating a variety of inherited hardware, donated Cold War surplus, and recently purchased equipment.

The challenge is NATO MDO integration and creating an able, coherent fighting force. In MDO, a central idea is to break loose and move the fight deep into enemy territory in order to dis-integrate the enemy’s system. TRADOC Pamphlet 525-3-1 defines disintegration as: “Dis-integrate refers to breaking the coherence of the enemy’s system by destroying or disrupting its subcomponents (such as command and control means, intelligence collection, critical nodes, etc.) degrading its ability to conduct operations while leading to a rapid collapse of the enemy’s capabilities or will to fight. This definition revises the current doctrinal defeat mechanism disintegrate.” The utility of MDO in a NATO framework requires a broad implementation of the concept within the NATO forces, not only the U.S. forces.

The concept of disintegration has a counterpart in Russian military thought and doctrine, defined as disorganization. The Russian concept seeks to deny command and control structures the ability to communicate and lead, by jamming, cyber or physical destruction. Historically, Russian doctrine has focused on denying the defending force the ability to coordinate, seeking to encircle it and to maintain a rapid advance deep into the territory so that the defense collapses. From a Russian perspective, the key to success of a fait accompli attack is the ability to deny NATO-U.S. joint operations and exploit NATO’s inability to create a coherent multinational and technologically diverse fighting posture. The concept of disorganization has emerged strongly over the last five years in how the Russians see the future fight. It would not be too farfetched to assume that the Russian leadership sees an opportunity in exploiting NATO’s inability to coordinate and integrate all elements in the fight.

The lingering concern is how a further technologically advanced and doctrinally complex U.S. force can realize the leverage embedded in these advances if the initial fight occurs in an operational environment where the rapidly mobilized East European NATO forces are two technological generations behind, especially when the Russian disorganization concept appears to aim at denying that leverage and exploiting a fragmented NATO force.

NATO has been extremely successful in safeguarding the peace since its creation in 1949. NATO integration was easier in the 1970s, with large NATO formations in West Germany and fewer countries involved. Multinational NATO forces exercised continuously, with active interaction among leaders, units and planners. Even then, the Soviet/Russian concept was to break up and overrun the defenses and strike deep into the territory.

In light of the increased technical disparity within the multinational NATO forces and the potential doctrinal misalignment in the larger Allied force, added to the strengthened Russian interest in exploiting these conditions, these observations should drive a stronger focus on NATO integration.

The future fight will not occur at a national training center. If it happens in Eastern Europe, it will be a fight fought together with European allies, from numerous countries, in a terrain they know better. As we enter a new era of great power competition, the U.S. brings ability, capacity and technology that will ensure NATO mission success if well-integrated in the multinational fighting force.

Jan Kallberg, Ph.D.

Solorigate attack — the challenge to cyber deterrence

The exploitation of SolarWinds’ network management software at a grand scale, based on publicly disseminated information from Congress and the media, represents not only a threat to national security but also puts the concept of cyber deterrence in question. My concern: Is there a disconnect between the operational environment and the academic research that we generally assume supports the national security enterprise?

Apparently, whoever launched the Solorigate attack was undeterred, based on the publicly disclosed size and scope of the breach. If cyber deterrence is not a functional component that changes potential adversaries’ behavior, why is cyber deterrence given so much attention?

Maybe it is because we want it to exist. We want there to be a silver bullet out there that will prevent future cyberattacks, and if we want it to exist, then any support for the existence of cyber deterrence feeds our confirmation bias.

Herman Kahn and Irwin Mann’s RAND memo “Ten Common Pitfalls,” from 1957, points out the intellectual traps of making military analysis in an uncertain world. That we listen to what supports our general beliefs is natural; it is in the human psyche to do so, but it can mislead.

Here is my main argument: there is a misalignment between civilian academic research and the cyber operational environment. There are at least a few hundred academic papers published on cyber deterrence, from different intellectual angles and in a variety of venues, seeking to investigate, explain and create an intellectual model of how cyber deterrence is achieved.

Many of these papers transpose traditional models from political science, security studies, behavioral science, criminology and other disciplines, and arrange these established models to fit a cyber narrative. The models were never designed for cyber; they were designed to address other deviant behavior. I do not rule out their relevance in some form, but I also do not assume that they are relevant.

The root causes of this misalignment I would like to categorize in three different, hopefully plausible, explanations. First, few of our university researchers have military experience, and with an increasingly narrow group that volunteers to serve, the problem escalates. This divide between civilian academia and the military is a national vulnerability.

Decades ago, the Office of Net Assessment assessed that the U.S. had an advantage over the Soviets due to the skills of the U.S. force. Today, in 2021, it might be reversed for cyber research, when academic researchers in potentially adversarial countries have a better understanding of military operations than their U.S. counterparts.

Second, the way we fund civilian research creates a market-driven pursuit to satisfy the interests of the funding agency. By funding models of cyber deterrence, there is already an assumption that cyber deterrence exists, so any research that challenges that assumption will never be initiated. Should we stop funding this research? Of course not, but the scope of inquiry needs to be wide enough to challenge our own presumptions and the potential biases at play. Right now, it pays too well to tell us what we want to hear, compared to presenting a radical rebuttal of our beliefs and perceptions of cyber.

Third, the defense enterprise is secretive about the inner workings of cyber operations and the operational environment (for a good reason!). However, what if it is too secretive, leaving civilian researchers to rely on commercial white papers, media, and commentators to shape the perception of the operational environment?

One of the reasons funded university research exists is to serve as a safeguard against strategic surprise. However, it becomes a grave concern when the civilian research community misses the target on as broad a scale as it did in this case. This case also demonstrates the risk in assuming that civilian research will accurately understand the operational environment, which rather amplifies the potential for strategic surprise.

There are university research groups that are highly knowledgeable of the realities of military cyber operations, so one way to address this misalignment is to concentrate the effort. Alternatively, the defense establishment must increase the outreach and interaction with a larger group of research universities to mitigate the civilian-military research divide. Every breach, small and large, is data that supports understanding of what happened, so in my view, this is one of the lessons to be learned from Solorigate.

Jan Kallberg, Ph.D.

After twenty years of cyber – still uncharted territory ahead

The general notion is that much of the core understanding in cyber is in place. I would like to challenge that perception. There are still vast territories of the cyber domain that need to be researched, structured, and understood. In Winston Churchill’s words: it is not the beginning of the end; it is maybe the end of the beginning. It is obvious to me, in my personal opinion, that the cyber journey is still very early; the cyber field has yet to mature, and big building blocks for the future cyber environment are not in place. The internet and the networks that support it have grown dramatically over the last decade. Even if the growth of cyber might be stunning, the actual advances are not as impressive.

In the last 20 years, cyber defense, and cyber as a research discipline, have grown from almost nothing to major national concerns and the recipient of major resources. In the winter of 1996-1997, there were four references to cyber defense in the search engine of that day: AltaVista. Today, there are about 2 million references in Google. Knowledge of cyber has not developed at the same rapid rate as the interest, concern, and resources.

The cyber realm is still struggling with basic challenges such as attribution. Traditional topics in political science and international relations — such as deterrence, sovereignty, borders, the threshold for war, and norms in cyberspace — are still under development and discussion. From a military standpoint, there is still a debate about what cyber deterrence would look like, what the actual terrain and maneuverability are like in cyberspace, and who is a cyber combatant.

The traditional combatant problem becomes even more complicated because the clear majority of the networks and infrastructure that could be engaged in potential cyber conflicts are civilian — and the people who run these networks are civilians. Add to that mix the future reality with cyber: fighting a conflict at machine speed and with limited human interaction.

Cyber raises numerous questions, especially for national and defense leadership, due to its nature. There are benefits with cyber: it can be used as a softer policy option with a global reach that does not require pre-positioning or weeks of getting assets in the right place for action. The problem occurs when you reverse the global reach and an asymmetric fight occurs, when the global adversaries of the United States can strike, utilizing cyber arms and attacks, deep into the most granular particle of our society: the individual citizen.

Another question raising concern is the matter of time. Cyberattacks and conflicts can be executed at machine speed, which is beyond human ability to lead and comprehend what is actually happening. This visualizes that cyber as a field of study is in its early stages, even if we have astronomic growth in networked equipment, nodes, and the sheer volume of transferred information. We have massive activity on the internet and in networks, but we are not fully able to utilize it or even structurally understand what is happening at a system level and in a grander societal setting. I believe that it could take until the mid-2030s before many of the basic elements of cyber have become accepted, structured, and understood, and before we have a global framework. Therefore, it is important to invest in cyber research and make discoveries now rather than face strategic surprise. Knowledge is weaponized in cyber.

Jan Kallberg, PhD

Cognitive Force Protection – How to protect troops from an assault in the cognitive domain

Jan Kallberg and Col. Stephen Hamilton

Great power competition will require force protection for our minds, as hostile near-peer powers will seek to influence U.S. troops. Influence campaigns that undermine the American will to fight and the injection of misinformation into a cohesive fighting force are threats equal to any other hostile and enemy action by adversaries and terrorists. Maintaining the will to fight is key to mission success.

Influence operations and disinformation campaigns are increasingly becoming a threat to the force. We have to treat influence operations and cognitive attacks as seriously as any violent threat in force protection. Force protection is defined by Army Doctrine Publication No. 3-37, derived from JP 3-0: “Protection is the preservation of the effectiveness and survivability of mission-related military and nonmilitary personnel, equipment, facilities, information, and infrastructure deployed or located within or outside the boundaries of a given operational area.” Therefore, protecting the cognitive space is an integral part of force protection.

History shows that preserving the will to fight ensures mission success in achieving national security goals. In 1940, France had more tanks and significant military means to engage the Germans; however, France still lost. A large part of the explanation for why France was unable to defend itself in 1940 resides with defeatism, including an unwillingness to fight that was the result of a decade-long erosion of the French soldiers’ will in the cognitive realm.

In the 1930s, France was in political chaos, swinging among right-wing parties, communists, socialists and authoritarian fascists, with political violence and cleavage, and the perception of a unified France worth fighting for diminished. Inspired by Stalin’s Soviet Union, the communists fueled French defeatism with propaganda, agitation and influence campaigns to pave the way for a communist revolution. Nazi Germany weakened the French to enable German expansion. Under persistent cognitive attack from two authoritarian ideologies, the bulk of the French Army fell into defeatism. The French disaster of 1940 is one of several historical examples where a manipulated perception of reality prevailed over reality itself. It would be naive to assume that the American will is a natural law unaffected by the environment. Historically, the American will to defend freedom has always been strong; however, the information environment has changed. Therefore, this cognitive space must be maintained, reignited and shared when weaponized information threatens it.

In the Battle of the Bulge, the conflict between good and evil was open and visible. There was no competing narrative. The goal of the campaign was easily understood, with clear boundaries between friendly and enemy activity. Today, seven decades later, we face competing tailored narratives, digital manipulation of media, an unprecedented complex information environment, and a fast-moving, scattered situational picture.

Our adversaries are already exploiting, and will continue to exploit, the fact that we as a democracy do not tell our forces what to think. Our only framework is loyalty to the Constitution and the American people. As a democracy, we expect our soldiers to support the Constitution and the mission. Our force has the democratic and constitutional right to think whatever its members find worthwhile to consider.

In order to fight influence operations, we would typically control what information is presented to the force. However, we cannot tell our force what to read and not read due to First Amendment rights. While this may not have caused issues in the past, social media has presented an opportunity for our adversaries to present a plethora of information that is meant to persuade our force.

In addition, there is too much information flowing in multiple directions to have centralized quality control or fact checking. The vetting of information must occur at the individual level, and we need to enable the force’s access to high-quality news outlets. This doesn’t require any large investment. The Army currently funds access to training and course material for education purposes. Extending these online resources to provide every member of the force online access to a handful of quality news organizations costs little but creates a culture of reading fact-checked news. More importantly, news outlets that are not funded by clickbait tend to be less sensational, since their funding comes from dedicated readers interested in actual news that matters.

In a democracy, cognitive force protection means teaching, training, and enabling the individual to see the demarcation between truth and disinformation. As servants of our republic and people, leaders of character can educate their units on assessing and validating information. As first steps, we must work toward this idea and provide tools to protect our force from an assault in the cognitive domain.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. Col. Stephen Hamilton is the chief of staff at the institute and a professor at the academy. The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Defense Department.

 

 

Government cyber breach shows need for convergence

(I co-authored this piece with MAJ Suslowicz and LTC Arnold).

MAJ Chuck Suslowicz, Jan Kallberg, and LTC Todd Arnold

The SolarWinds breach points out the importance of having both offensive and defensive cyber force experience. The breach is under ongoing investigation, and we will not comment on the investigation. Still, in general terms, we want to point out the exploitable weaknesses in creating two silos — OCO and DCO. The separation of OCO and DCO, through the specialization of formations and leadership, undermines the broader understanding and value of threat intelligence. The growing demarcation between OCO and DCO also has operational and tactical implications. The Multi-Domain Operations (MDO) concept emphasizes the competitive advantages that the Army — and the greater Department of Defense — can bring to bear by leveraging the unique and complementary capabilities of each service.

It requires that leaders understand the capabilities their organization can bring to bear in order to achieve the maximum effect from the available resources. Cyber leaders must have exposure to the depth and breadth of their chosen domain to contribute to MDO.

Unfortunately, within the Army’s operational cyber forces, there is a tendency to designate officers as either offensive cyber operations (OCO) or defensive cyber operations (DCO) specialists. The shortsighted nature of this categorization is detrimental to the Army’s efforts in cyberspace and stymies the development of the cyber force, affecting all soldiers. The Army will suffer in its planning and ability to operationally contribute to MDO from a siloed officer corps unexposed to the domain’s inherent flexibility.

We consider the assumption that there is a distinction between OCO and DCO to be flawed. It perpetuates the idea that the two operational types are doing unrelated tasks with different tools, and that experience in one will not improve performance in the other. We do not see such a rigid distinction between OCO and DCO competencies. In fact, most concepts within the cyber domain apply directly to both types of operations. The argument that OCO and DCO share competencies is not new; the iconic cybersecurity expert Dan Geer first pointed out that cyber tools are dual-use nearly two decades ago, and continues to do so. A tool that is valuable to a network defender can prove equally valuable during an offensive operation, and vice versa.

For example, a tool that maps a network’s topology is critical for the network owner’s situational awareness. The tool could also be effective for an attacker to maintain situational awareness of a target network. The dual-use nature of cyber tools requires cyber leaders to recognize both sides of their utility. So, a tool that does a beneficial job of visualizing key terrain to defend will create a high-quality roadmap for a devastating attack. Limiting officer experiences to only one side of cyberspace operations (CO) will limit their vision, handicap their input as future leaders, and risk squandering effective use of the cyber domain in MDO.
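The dual-use point can be made concrete with a short sketch. The snippet below is a hypothetical, minimal TCP connect scanner in Python (the function name and arguments are illustrative, not from any real tool in the article): run by a network owner, it inventories exposed services for situational awareness; run by an intruder, the identical code enumerates a target network.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def map_hosts(hosts, ports, timeout=0.5):
    """Return {host: [open ports]} using plain TCP connect attempts.

    The same inventory serves a defender (situational awareness)
    and an attacker (target reconnaissance) -- the code is identical.
    """
    def probe(host, port):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0  # 0 means the port accepted

    topology = {}
    with ThreadPoolExecutor(max_workers=32) as pool:
        for host in hosts:
            results = pool.map(lambda p: probe(host, p), ports)
            topology[host] = [p for p, is_open in zip(ports, results) if is_open]
    return topology

if __name__ == "__main__":
    # Scanning your own loopback interface is harmless either way.
    print(map_hosts(["127.0.0.1"], range(20, 30)))
```

Nothing in the code declares an intent; only the operator's purpose makes it a defensive or an offensive tool.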

An argument will be made that “deep expertise is necessary for success” and that officers should be chosen for positions based on their previous exposure. This argument fails on two fronts. First, the Army’s decades of experience in officer development have shown the value of diverse exposure in officer assignments. Other branches already ensure officers experience a breadth of assignments to prepare them for senior leadership.

Second, this argument ignores the reality of “challenging technical tasks” within the cyber domain. As cyber tasks grow more technically challenging, the tools become more common between OCO and DCO, not less common. For example, two of the most technically challenging tasks, reverse engineering of malware (DCO) and development of exploits (OCO), use virtually identical toolkits.

An identical argument can be made for network defenders preventing adversarial access and offensive operators seeking to gain access to adversary networks. Ultimately, the types of operations differ in their intent and approach, but significant overlap exists within their technical skillsets.

Experience within one fragment of the domain directly translates to the other and provides insight into an adversary’s decision-making processes. This combined experience provides critical knowledge for leaders, and lack of experience will undercut the Army’s ability to execute MDO effectively. Defenders with OCO experience will be better equipped to identify an adversary’s most likely and most devastating courses of action within the domain. Similarly, OCO planned by leaders with DCO experience are more likely to succeed as the planners are better prepared to account for potential adversary countermeasures.

In both cases, the cross-pollination of experience improves the Army’s ability to leverage the cyber domain and improve its effectiveness. Single tracked officers may initially be easier to integrate or better able to contribute on day one of an assignment. However, single-tracked officers will ultimately bring far less to the table than officers experienced in both sides of the domain due to the multifaceted cyber environment in MDO.

Maj. Chuck Suslowicz is a research scientist in the Army Cyber Institute at West Point and an instructor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). Dr. Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. LTC Todd Arnold is a research scientist in the Army Cyber Institute at West Point and an assistant professor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Department of Defense.

 

If Communist China loses a future war, entropy could be imminent

What happens if China engages in a great power conflict and loses? Will the Chinese Communist Party’s control over the society survive a horrifying defeat?
The People’s Liberation Army (PLA) last fought a massive-scale war during the invasion of Vietnam in 1979, a failed operation to punish Vietnam for toppling the Khmer Rouge regime of Cambodia. Since 1979, the PLA has shelled Vietnam on different occasions and been involved in other border skirmishes, but it has not fought a full-scale war. In recent decades, China has increased its defense spending and modernized its military, including advanced air defenses and cruise missiles, fielded advanced military hardware, and built a high-seas navy from scratch; still, there is significant uncertainty about how the Chinese military will perform.

Modern warfare is integration, joint operations, command, control, intelligence, and the ability to understand and execute the ongoing, all-domain fight. War is complex machinery, with low margins of error, and can have devastating outcomes for the unprepared. Whether you are for or against the U.S. military operations of the last three decades, the fact is that prolonged conflict and engagement have made the U.S. military experienced. Chinese inexperience, in combination with unrealistic expansionist ambitions, can be the downfall of the regime. Dry-land swimmers may train the basics, but they never become great swimmers.

Although it may look like a creative strategy for China to harvest trade secrets and intellectual property, and to put developing countries in debt to gain influence, I would question how rational the Chinese apparatus is. The repeated visualization of the Han nationalist cult appears to be a strength, with the youth rallying behind the Xi Jinping regime, but it is also a significant weakness. The weakness is blatantly visible in the Chinese need for surveillance and population control to maintain stability: surveillance and repression so encompassing in the daily life of the Chinese population that the East German security services appear to have been amateurs. All chauvinist cults implode over time because the unrealistic assumptions add up, and so will the sum of all delusional ideological decisions. Winston Churchill knew, after Nazi Germany declared war on the United States in December 1941, that the Allies would prevail and win the war. Nazi Germany did not have the GDP or manpower to sustain a war on two fronts, but the Nazis did not care because they were irrational and driven by hateful ideology. Just months before, Nazi Germany had invaded the massive Soviet Union to create lebensraum and feed an urge to reestablish German-Austrian dominance in Eastern Europe. Then the Nazis unilaterally declared war on the United States. The rationale for the declaration of war was ideology, a worldview that demanded expansion and conflict, even though Germany was strategically inferior and eventually lost the war.

The Chinese belief that they can be a global authoritarian hegemon is likely on the same journey. China today is driven by its own flavor of expansionist ideology that seeks conflict, without being strategically able. It is worth noting that not a single major country is China’s ally. The Chinese supremacist propaganda works in peacetime, holding massive rallies hailing Mao Zedong’s military genius, with singing, dancing, and waving of red banners, but will that grip hold if the PLA loses? In the case of a failed military campaign, is the Chinese population, shaped by the one-child policy, ready for casualties, humiliation, and failure?
Will the authoritarian grip, with social credit scoring, facial recognition, informers, digital surveillance, and an army whose peacetime function is primarily crowd control, survive a crushing defeat? If the regime loses its grip, the wrath of the masses, pent up through decades of repression, will be unleashed.

A country the size of China, with a history of cleavages and civil wars, a suppressed diverse population, and socio-economic disparity, could be catapulted into Balkanization after a defeat. In the past, China has had long periods of internal fragmentation and weak central government.

The United States reacts differently to failure. The United States is, as a country, far more resilient than we might assume from watching the daily news. If the United States loses a war, the president gets the blame, but there will still be a presidential library in his or her name. There is no revolution.

There is an assumption lingering over today’s public debate that China has a strong hand, advanced artificial intelligence, the latest technology, and is an uber-able superpower. I am not convinced. During the last decade, the countries in the Indo-Pacific region that seek to hinder the Chinese expansion of control, influence, and dominance have increasingly formed stronger relationships. The strategic scale is in the democratic countries’ favor. If China, still driven by ideology, pursues conflict at a large scale, it is likely the end of the Communist dictatorship.

In my personal view, we should pay more attention to the humanitarian risks, the ripple effects, and the dangers of nukes in a civil war, in case the Chinese regime implodes after a failed future war.

Jan Kallberg, Ph.D.

What is the rationale behind election interference?

Any attempt to interfere with democratic elections, and the peaceful transition of power that is the result of these elections, is an attack on the country itself as it seeks to destabilize and undermine the core societal functions and constitutional framework. We all agree on the severity of these attempts and that it is a real, ongoing concern for our democratic republic. That is all good, and democracies have to safeguard the integrity of their electoral processes.

But what is less discussed is why the main perpetrator — Russia, according to media — is seeking to interfere with the U.S. election. What is the Russian rationale behind these information operations targeting the electoral system?

The Russian information operations aimed at the fault lines of American society, seeking to make America more divided and weakened, have a more evident rationale. These operations seek to expand cleavages, misunderstandings, and conflicts within the population. That can affect military recruiting, obedience in a national emergency, and long-term trust and confidence in society. So seeking to attack the American cognitive space, in pursuit of split and division in this democratic republic, has a more obvious goal. But what is the Russian return on investment for the electoral operations?

Even if the Russians had such an impact that candidate X won instead of candidate Y, the American commitment to defense and fundamental outlook on the world order has been fairly stable through different administrations and changes in Congress.

Naturally, one explanation is that Russia, as an authoritarian country with a democratic deficit, wants to portray functional democracies as having their issues and that liberal democracy is a failing and flawed concept. In a democracy, if the electoral system is unable to ensure the integrity of the elections, then the legitimacy of the government will be questioned. The question is if that is the Russian endgame.

In my view, there is more to the story than Russians merely trying to create a narrative that democracy doesn’t work, tailored for the Russian domestic population so it will not threaten the current regime. The average Russian is no free-ranging political scientist pondering the underpinnings of governmental legitimacy, democratic models, and the importance of constitutional mechanisms. The Russian population is made up of the descendants of those who survived the communist terror, so by default, they are not quick to ask questions about governmental legitimacy. There is opposition within Russia, and a fraction of the population would like to see a regime change in the Kremlin, as would many others. But in a Russian context, regime change doesn’t automatically mean a public urge for liberal democracy.

Let me present another explanation to the Russian electoral interference, which might co-exist with the first explanation, and it is related to how we perceive Russia.

The Russian information operations stir up a sentiment that the Russians are able to change the direction of our society. If the Russians are ready to strike the homeland, then they are a major threat. Only superpowers are major threats to the continental United States.

So instead of seeing Russia for what it is, a country with significant domestic issues that relies on massive extraction of natural resources to sell to a world market that buys from the lowest bidder, we overestimate its ability. Russia has failed over the last decades to advance its ability to produce and manufacture competitive products, but the information operations make us believe that Russia is a potent superpower.

The nuclear arsenal makes Russia a superpower per se. Still, it cannot be effectively visualized for a foreign public, nor can it impact a national sentiment in a foreign country, especially when the Western societies in 2020 almost seem to have forgotten that nukes exist. Nukes are no longer “practical” tools to project superpower status.

If the Russians stir up our politicians’ beliefs that the Russians are a significant adversary, and that gives Russia bargaining power and geopolitical consideration, it appears more logical as a Russian goal.

Jan Kallberg, Ph.D.

The evaporated OODA-loop

The accelerated execution of cyberattacks, and an increased ability to identify vulnerabilities for exploitation at machine speed, compress the time window cybersecurity management has to address unfolding events. We assume there will be time to lead, assess, and analyze, but that window might be closing rapidly. It is time to face the issue of accelerated cyber engagements.

If there is limited time to lead, how do you ensure that you can execute a defensive strategy? How do we launch countermeasures at a speed beyond human ability and comprehension? If you don’t have time to lead, the alternative is to preauthorize. In the early days of the Cold War, war planners and strategists who were used to having days to react to events faced ICBMs that forced decisions within minutes. The solution? Preauthorization. The analogy between how the nuclear threat was addressed and cybersecurity works to a degree, but we have to recognize that the number of possible scenarios in cybersecurity could run into the hundreds, and we need to prioritize.

The cybersecurity preauthorization process would require an understanding of likely scenarios and of the events expected to unfold under each scenario. The weaknesses in preauthorization are several. First, the scenarios we create are limited because they are built on how we perceive our system environment. This is exemplified by the old saying: “What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t true.”

The creation of scenarios as a foundation for preauthorization will be laden with biases, assumptions that some areas are secure when they are not, and an inability to see the attack vectors that an attacker sees. So the major challenge in preauthorization is to create scenarios that are representative of potential outcomes.

One way is to look at the different attack strategies used earlier. This limits the scenarios to what has already happened to others, but it can serve as a base to which additional scenarios are added. The MITRE ATT&CK Navigator provides an excellent tool to simulate and create attack scenarios that can be a foundation for preauthorization. Over time, artificial intelligence will become an integrated part of offloading decision-making, but we are not there yet. In the near future, artificial intelligence can cover parts of the managerial spectrum, increasing the human ability to act in very brief time windows.
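As a sketch of what a preauthorization table might look like, the fragment below maps ATT&CK-style technique IDs to canned responses. The IDs, action names, and the `respond` function are illustrative assumptions, not a real product's API; the point is the structure: automation only inside the vetted scenario set, with everything else falling through to a human.

```python
# Hypothetical preauthorized playbook, keyed by MITRE ATT&CK technique IDs.
# The mappings are illustrative only -- a real playbook would be vetted
# against the organization's risk appetite before any automation.
PREAUTHORIZED = {
    "T1566": "quarantine_mailbox",         # Phishing
    "T1110": "lock_account_and_alert",     # Brute force
    "T1486": "isolate_host_from_network",  # Data encrypted for impact
}

ESCALATE = "escalate_to_human"

def respond(technique_id: str) -> str:
    """Return the preauthorized action, or escalate outside the scenario set."""
    return PREAUTHORIZED.get(technique_id, ESCALATE)
```

The `.get` fallback encodes the key design choice: machine-speed response is permitted only for scenarios leadership has signed off on; anything unanticipated waits for a decision.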

The second weakness is preauthorization’s vulnerability to probes and reverse engineering. Cybersecurity is active 24/7/365, with numerous engagements on an ongoing basis. Over time, and using machine learning, automated attack mechanisms could learn how to avoid triggering preauthorized responses by probing and reverse engineering solutions that bypass the preauthorized controls.
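That probing risk can be simulated in a few lines. In the hypothetical sketch below, a defender's static preauthorized rule blocks any source exceeding a fixed request rate, and an automated attacker binary-searches for the highest rate that never triggers the block; the names and the threshold value are invented for illustration.

```python
THRESHOLD = 120  # hypothetical requests/minute that triggers a preauthorized block

def preauthorized_block(rate: int) -> bool:
    """Defender's static rule: block when the observed rate exceeds the threshold."""
    return rate > THRESHOLD

def probe_for_ceiling(low: int = 0, high: int = 10_000) -> int:
    """Attacker side: binary-search the largest rate that is never blocked."""
    while low < high:
        mid = (low + high + 1) // 2
        if preauthorized_block(mid):
            high = mid - 1   # blocked, so the ceiling is lower
        else:
            low = mid        # not blocked, so we can go at least this fast
    return low

# A handful of probes recovers the static rule exactly; the attacker
# then operates just under the ceiling without ever being blocked.
ceiling = probe_for_ceiling()
```

A static rule leaks its own boundary through its responses, which is why preauthorized controls need variation, monitoring, and periodic redesign.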

So there is no easy road forward, but instead a tricky path that requires clear objectives, alignment with risk management and its risk appetite, and an acceptance that the final approach to address the increased velocity of attacks might not be perfect. The alternative, not addressing the accelerated execution of attacks, is not a viable option. That would hand over the initiative to the attacker and expose the organization to uncontrolled risks.

Repeatedly through the last two years, I have read references to the OODA-loop and the utility of the OODA concept for cybersecurity. The OODA-loop resurfaces in cybersecurity and information security managerial approaches as a structured way to address unfolding events. The OODA (observe, orient, decide, act) loop, developed by John Boyd in the 1960s, follows those four steps: you observe the events unfolding, you orient your assets at hand to address the events, you decide on a feasible approach, and you act.

The OODA-loop has become a central concept in cybersecurity over the last decade, as it is seen as a vehicle to address what attackers do, when and where they do it, and what you should do and where it is most effective. The refrain has been “you need to get inside the attacker’s OODA-loop.” The OODA-loop is used as a way to understand the adversary and tailor your own defensive actions.

Retired Army Colonel Tom Cook, former research director for the Army Cyber Institute at West Point, and I wrote a 2017 IEEE article titled “The Unfitness of Traditional Military Thinking in Cyber” questioning the validity of using the OODA-loop in cyber when events unfold faster and faster. Today, in 2020, the validity of the OODA-loop in cybersecurity is on the brink of evaporating due to the increased speed of attacks. The time needed to observe and assess, direct resources, make decisions, and take action will be too long to muster a successful cyber defense.

Attacks occurring at computational speed worsens the inability to assess and act, and the increasingly shortened time frames likely to be found in future cyber conflicts will disallow any significant, timely human deliberation.

Moving forward

I have no intention of being a narrative impossibilist who presents challenges without solutions, so the current way forward is preauthorization. In the near future, the human ability to play an active role in rapid engagements will be supported by artificial intelligence decision-making that executes the tactical movements. The human mind remains in charge of the operational decisions for several reasons: control, the larger picture, strategic implementation, and intent. For cybersecurity, it is pivotal over the next decade to be able to operate within a decreasing time window to act.

Jan Kallberg, Ph.D.

For ethical artificial intelligence, security is pivotal

 

The market for artificial intelligence is growing at a speed not seen since the introduction of the commercial Internet. The estimates vary, but the global AI market is assumed to grow 30 to 60 percent per year. Defense spending on AI projects is increasing at an even higher rate when we add wearable AI and systems that depend on AI. Defense investments such as augmented reality, automated target recognition, and tactical robotics would not advance at today’s rate without AI to support the realization of these concepts.

The beauty of the economy is responsiveness. With an identified “buy” signal, the market works to satisfy the need from the buyer. Powerful buy signals lead to rapid development, deployment, and roll-out of solutions, knowing that time to market matters.

My concern is based on earlier analogies in which time to market prevailed over conflicting interests. Examples include the first years of the commercial internet, the introduction of remote control for supervisory control and data acquisition (SCADA) and manufacturing systems, and the rapid growth of smartphone apps. In each of these cases, security was not the first thing on the developers’ minds; time to market was the priority. This exposure increases with the economically sound pursuit of commercial off-the-shelf (COTS) products, as sensors, chipsets, functions, electric controls, and storage devices can be bought on the civilian market for a fraction of the cost. These COTS products cut costs, give the American people more defense and security for the money, and shorten the development and deployment cycle.

The Department of Defense has adopted five ethical principles for the department’s future utilization of AI. These principles are: responsible, equitable, traceable, reliable, and governable. The common denominator in all these five principles is cybersecurity. If the cybersecurity of the AI application is inadequate, these five adopted principles can be jeopardized and no longer steer the DOD AI implementation.

The future AI implementation increases the attack surface radically, and of particular concern is the ability to detect manipulation of the processes, because the underlying AI processes are not clearly understood or monitored by the operators. A system that detects targets from images or from streaming video capture, where AI is used to identify target signatures, will generate decision support that can lead to the destruction of those targets. The targets are engaged and neutralized. One of the ethical principles for AI is “responsible.” How do we ensure that the targeting is accurate? How do we safeguard that the algorithm is not corrupt and that sensors are not being tampered with to produce spurious data? It becomes a matter of security.
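One basic safeguard is verifying the integrity of the deployed model artifact before its output is trusted, so that tampering is at least detectable. The sketch below is a minimal, assumed workflow using a SHA-256 digest; the file and function names are illustrative, and a real system would also have to protect the reference digest itself, for example by signing it.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def model_is_trusted(path: Path, known_good_digest: str) -> bool:
    """Refuse to act on the model's output if the artifact has changed."""
    return sha256_of(path) == known_good_digest
```

A digest check catches modification of the stored model, not manipulation of sensor inputs or of the training pipeline; it is one layer of the security the five principles implicitly demand, not a complete answer.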

In a larger conflict, where ground forces are not able to inspect the effects on the ground, the feedback loop that invalidates the decisions supported by AI might not reach the operators for weeks. Or it might surface only after the conflict is over. A rogue system can likely produce spurious decision support for longer than we are willing to admit.

Of the five principles, “equitable” is the area of greatest human control. Even if embedded biases in a process are hard to detect, controlling them is within our reach. “Reliable” relates directly to security because it requires that the systems maintain confidentiality, integrity, and availability.

If the principle “reliable” requires cybersecurity vetting and testing, we have to realize that these AI systems are part of complex technical structures with a broad attack surface. If the principle “reliable” is jeopardized, then “traceable” becomes problematic, because if the integrity of AI is questionable, it is not a given that “relevant personnel possess an appropriate understanding of the technology.”

The principle “responsible” can still be valid, because deployed personnel make sound and ethical decisions based on the information provided, even if a compromised system feeds spurious information to the decision-maker. The principle “governable” acts as a safeguard against “unintended consequences.” The unknown is the time from when unintended consequences occur until the operators of the compromised system understand that the system is compromised.

It is evident when a target that should be hit is repeatedly missed; the effects can be observed. If the effects cannot be observed, it is no longer a given that “unintended consequences” are identified, especially in a fluid multi-domain battlespace. A compromised AI system for target acquisition can mislead targeting, acquiring hidden non-targets that waste resources and weapon system availability while exposing friendly forces to detection. The time to detect such a compromise can be significant.

My intention is to visualize that cybersecurity is pivotal for AI success. I do not doubt that AI will play an increasing role in national security. AI is a top priority in the United States and to our friendly foreign partners, but potential adversaries will make the pursuit of finding ways to compromise these systems a top priority of their own.

What COVID-19 can teach us about cyber resilience

Dr. Jan Kallberg and Col. Stephen Hamilton
March 23, 2020

The COVID pandemic is a challenge that will eventually create health risks to Americans and have long-lasting effects. For many, this is a tragedy, a threat to life, health, and finances. What draws our attention is what COVID-19 has meant to our society and the economy, and how, in an unprecedented way, families, corporations, schools, and government agencies quickly had to adjust to a new reality. Why does this matter from a cyber perspective?

COVID-19 has created increased stress on our logistics, digital, public, and financial systems, and this could in fact resemble what a major cyber conflict would mean to the general public. It is also essential to assess what matters to the public during this time. COVID-19 has created widespread disruption of work, transportation, logistics, and the distribution of food and necessities to the public, and increased stress on infrastructures, from Internet connectivity to just-in-time delivery. It has unleashed abnormal behaviors.

A potential adversary will likely not have the ability to take down an entire sector of our critical infrastructure, or business ecosystem, for several reasons. First, awareness of and investments in cybersecurity have drastically increased over the last two decades. This in turn has reduced the number of single points of failure and increased the number of built-in redundancies, as well as the ability to maintain operations in a degraded environment.

Second, the time and resources required to create what was once referred to as a “Cyber Pearl Harbor” are beyond the reach of any near-peer nation. Decades of advancement in resilience, layered defenses, and intrusion detection have made it significantly harder to execute an attack of that size.

Instead, an adversary will likely focus their primary cyber capacity on what matters for their national strategic goals. For example, delaying the movement of the main U.S. force from the continental United States to theater by using cyberattacks on utilities, airports, railroads, and ports. That strategy has two clear goals: to deny the United States and its allies options in theater due to a lack of strength, and to strike a significant blow against United States and allied forces early in the conflict. Given the choice between delaying U.S. forces’ arrival in theater and creating disturbances in thousands of grocery stores or wreaking havoc on office workers’ commutes, an adversary will likely prioritize what matters to its military operations first.

That said, in a future conflict, the domestic businesses, local governments, and services on which the general public relies will be targeted by cyberattacks. These second-tier operations will likely exploit vulnerabilities at scale in our society, but with less complexity, relying mainly on exploitation of opportunity.

The similarity between the COVID-19 outbreak and a cyber campaign lies in the disruption of logistics and services, how the population reacts, and the stress it puts on law enforcement and first responders. These events can raise questions about the ability to maintain law and order and to prevent the destabilization of a distribution chain built for just-in-time operations, with minimal margin of deviation before it falls apart.

The sheer nature of these second-tier attacks is unsystematic and opportunity-driven. The goal is to pursue disruption, confusion, and stress. An authoritarian regime would likely not be hindered by international norms from attacking targets that jeopardize public health and create risks for the general population. Environmental hazards released by these attacks can lead to loss of life and potentially dramatic long-term loss of quality of life for citizens. If the population questions the government’s ability to protect it, the government’s legitimacy and authority will suffer. Health and environmental risks appeal not only to the general public’s logic but also to its emotions, particularly uncertainty and fear. This can become a tipping point if the population fears the future to the point that it loses confidence in the government.

Therefore, as we see COVID-19 unfold, it could give us insights into how a broad cyber-disruption campaign would affect the U.S. population. Terrorism experts examine two effects of an attack: the attack itself and the way the target population reacts.

Our potential adversaries are likely studying carefully how our society reacts to COVID-19: whether the population obeys the government, whether the government maintains control and enforces its agenda, and whether the nation was prepared.

Lessons learned from COVID-19 are applicable to strengthening U.S. cyber defense and resilience. These unfortunate events increase our understanding of how a broad cyber campaign could disrupt and degrade quality of life, government services, and business activity.

Why Iran would avoid a major cyberwar

Demonstrations in Iran last year and signs of the regime’s demise raise a question: What would the strategic outcome be of a massive cyber engagement with a foreign country or alliance?

Authoritarian regimes traditionally put survival first; those that do not prioritize regime survival tend to collapse. Authoritarian regimes are always vulnerable because they are illegitimate. There will always be loyalists who benefit from the system, but for a significant part of the population, the regime is not legitimate. The regime exists only because it suppresses the popular will and uses force against any opposition.

In 2016, I wrote an article in the Cyber Defense Review titled “Strategic Cyberwar Theory – A Foundation for Designing Decisive Strategic Cyber Operations.” The utility of strategic cyberwar is linked to the institutional stability of the targeted state. If a nation is destabilized, it can be subdued to foreign will, and the current regime’s ability to execute its strategy evaporates with the loss of internal authority and capability. The theory’s predictive power is strongest when applied to theocracies, authoritarian regimes, and dysfunctional experimental democracies, because their common tenet is weak institutions.

Fully functional democracies, on the other hand, have a definite advantage: they have stability and institutions accepted by their citizenry. Nations openly adversarial to democracies are, in most cases, totalitarian states close to entropy. The reason these totalitarian states remain under their current regimes is the suppression of the popular will. Removing the pillars of repression, by destabilizing the regime’s design and the institutions that make it functional, will release the popular will.

A destabilized — and possibly imploding — Iranian regime is a more tangible threat to the ruling theocratic elite than any military systems being hacked in a cyber interchange. Dictators fear the wrath of the masses. Strategic cyberwar theory looks beyond the actual digital interchange and cyber tactics, and instead seeks predictive power over how a decisive cyber conflict should be conducted in pursuit of national strategic goals.

The Iranian military apparatus is a mix of traditional military defense, crowd control, political suppression, and shows of force that generate artificial internal authority in the country. If command and control evaporate in the military apparatus, so does the ability to control the population to the degree the Iranian regime has been able to until now. In that light, what is in it for Iran to launch a massive cyber engagement against the free world? What can it win?

If the free world uses its cyber abilities, it is far more likely that Iran itself is destabilized and falls into entropy and chaos, which could lead to major domestic bloodshed when the victims of 40 years of violent suppression decide the fate of their oppressors. That would not be the intent of the free world; it is simply a consequence of the way the Iranian totalitarian regime has acted toward its own people. The risks for the Iranians are far more significant than the potential upside of being able to inflict damage on the free world.

That doesn’t mean the Iranians would not try to hack systems in foreign countries they consider adversarial. Because of the Iranian regime’s constant need to feed its internal propaganda machinery with “victories,” such attacks are more likely to take place on a smaller scale, as uncoordinated low-level attacks seeking to exploit whatever opportunities they come across. In my view, far more dangerous are non-Iranian advanced nation-state cyber actors that impersonate Iranian hackers, making aggressive preplanned attacks under cover of a spoofed identity and transferring the blame, fueled by recent tensions.

A new mindset for the Army: silent running

//I wrote this article together with Colonel Stephen Hamilton and it was published in C4ISRNET//

In the past two decades, the U.S. Army has continually added new technology to the battlefield. While this technology has enhanced the ability to fight, it has also greatly increased the ability for an adversary to detect and potentially interrupt and/or intercept operations.

The adversary in the future fight will have a more technologically advanced ability to sense activity on the battlefield: light, sound, movement, vibration, heat, electromagnetic transmissions, and other quantifiable metrics. This is a fundamental and accepted assumption. The future near-peer adversary will be able to sense our activity in an unprecedented way thanks to modern technologies. This is driven not only by technology but also by commoditization; sensors that cost thousands of dollars during the Cold War are available at marginal cost today. In addition, software-defined radio technology has larger bandwidth than traditional radios and can scan the entire spectrum several times a second, making it easier to detect new signals.

We turn to the thoughts of Bertrand Russell and his version of Occam’s razor: “Whenever possible, substitute constructions out of known entities for inferences to unknown entities.” Occam’s razor is named after the medieval philosopher and friar William of Ockham, who held that, under uncertainty, the fewer assumptions the better, and who preached simplicity by relying on the known until simplicity could be traded for greater explanatory power. So, staying with the limited assumption that the future near-peer adversary will be able to sense our activity at a previously unseen level, we will, unless we change our default modus operandi, be exposed to increased threats and risks. The adversary’s acquired sensor data will be used for decision making, direction finding, and engaging friendly units with all the means available to the adversary.

The Army’s mindset must change to mirror the Navy’s tactic of “silent running,” used to evade adversarial threats. While there are recent advances in sensor-countermeasure techniques, such as low probability of detection and low probability of intercept, silent running reduces emissions altogether, thus reducing the risk of detection.

In the U.S. Navy submarine fleet, silent running is a stealth mode utilized over the last 100 years following the introduction of passive sonar in the latter part of the First World War. The concept is to avoid discovery by the adversary’s passive sonar by seeking to eliminate all unnecessary noise. The ocean is an environment where hiding is difficult, similar to the Army’s future emission-dense battlefield.

However, on the battlefield, emissions can be managed in order to reduce noise feeding into the adversary’s sensors. A submarine in silent running mode will shut down non-mission-essential systems. The crew moves silently and avoids creating any unnecessary sound, in combination with a reduction in speed to limit noise from shafts and propellers. The noise from the submarine no longer stands out. It becomes one sound among the natural and surrounding sounds, which radically decreases the risk of detection.

From the Army’s perspective, the adversary’s primary objective when entering the fight is to disable command and control, elements of indirect fire, and enablers of joint warfighting. All of these units are highly active in the electromagnetic spectrum. So how can silent running be applied for a ground force?

If we transfer silent running to the Army, the same tactic can be as simple as not utilizing equipment just because it is fielded to the unit. If generators go offline when not needed, then sound, heat, and electromagnetic noise are reduced. Radios that are not mission-essential are switched to specific transmission windows or turned off completely, which limits the risk of signal discovery and potential geolocation. In addition, radios are used at the lowest power that still provides acceptable communication as opposed to using unnecessarily high power which would increase the range of detection. The bottom line: a paradigm shift is needed where we seek to emit a minimum number of detectable signatures, emissions, and radiation.
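The power-management point above can be made quantitative with a back-of-the-envelope sketch (all figures below are hypothetical, and the free-space path-loss model is deliberately simplistic): because received signal strength falls off with the square of distance, an eavesdropper’s maximum detection range grows only with the square root of transmit power, so even a modest power reduction meaningfully shrinks the detectable footprint.

```python
import math

def detection_range_km(tx_power_w: float, freq_mhz: float,
                       rx_sensitivity_dbm: float) -> float:
    """Maximum range at which a receiver of the given sensitivity can
    detect the transmitter, under a free-space path loss (FSPL) model:

        FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.45

    The signal is detectable while tx_dbm - FSPL >= rx_sensitivity_dbm.
    Free space is very optimistic; terrain and clutter shrink real-world
    ranges drastically, but the power scaling law is the same.
    """
    tx_dbm = 10 * math.log10(tx_power_w * 1000)   # watts -> dBm
    loss_budget = tx_dbm - rx_sensitivity_dbm     # tolerable path loss, dB
    # Invert the FSPL formula to solve for distance:
    return 10 ** ((loss_budget - 20 * math.log10(freq_mhz) - 32.45) / 20)

# Hypothetical VHF radio at 50 MHz against a -110 dBm receiver:
full = detection_range_km(50.0, 50.0, -110.0)  # full power, 50 W
low = detection_range_km(5.0, 50.0, -110.0)    # reduced power, 5 W
print(f"Power cut 10x -> detection radius shrinks {full / low:.2f}x")
# Range scales with the square root of transmit power, so a tenfold
# power reduction shrinks the detection radius by sqrt(10), about 3.16x.
```

Since detection *area* scales linearly with transmit power, every watt saved removes a proportional slice of the territory from which an adversary’s sensors can hear the emitter, which is the intuition behind using the lowest power that still provides acceptable communication.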

The submarine becomes undetectable as its noise level diminishes to that of the natural background, which enables it to hide within the environment. Ground forces will still be detectable in some form — the future density of sensors and the adversary’s increasing ability over time would support that — but one goal is to blur the adversary’s situational picture and deny it the ability to accurately assess the function, size, position, and activity of friendly units. The future fluid MDO (multi-domain operations) battlefield would also increase the challenge for the adversary compared to a more static battlefield with a clear separation between friend and foe.

As a preparation for a future near-peer fight, it is crucial to have an active mindset on avoiding unnecessary transmissions that could feed adversarial sensors with information that can guide their actions. This might require a paradigm shift, where we are migrating from an abundance of active systems to being minimalists in pursuit of stealth.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. Col. Stephen Hamilton is the technical director of the Army Cyber Institute at West Point and an academy professor at the U.S. Military Academy. The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy, or the Department of Defense.