Cognitive Force Protection – How to protect troops from an assault in the cognitive domain

(Co-written with COL Hamilton)

Jan Kallberg and Col. Stephen Hamilton

Great power competition will require force protection for our minds, as hostile near-peer powers will seek to influence U.S. troops. Influence campaigns that undermine the American will to fight, and the injection of misinformation into a cohesive fighting force, are threats equal to any other hostile enemy action by adversaries and terrorists. Maintaining the will to fight is key to mission success.

Influence operations and disinformation campaigns are an increasing threat to the force. We have to treat influence operations and cognitive attacks as seriously as any violent threat in force protection. Force protection is defined by Army Doctrine Publication No. 3-37, derived from JP 3-0: “Protection is the preservation of the effectiveness and survivability of mission-related military and nonmilitary personnel, equipment, facilities, information, and infrastructure deployed or located within or outside the boundaries of a given operational area.” Therefore, protecting the cognitive space is an integral part of force protection.

History shows that preserving the will to fight has been essential to mission success in achieving national security goals. France in 1940 had more tanks and significant military means to engage the Germans; however, France still lost. A large part of the explanation of why France was unable to defend itself in 1940 resides with defeatism, including an unwillingness to fight that resulted from a decade-long erosion of the French soldiers’ will in the cognitive realm.

In the 1930s, France was in political chaos, swinging among right-wing parties, communists, socialists, and authoritarian fascists; amid political violence and social cleavage, the perception of a unified France worth fighting for diminished. Inspired by Stalin’s Soviet Union, the communists fueled French defeatism with propaganda, agitation and influence campaigns to pave the way for a communist revolution. Nazi Germany weakened the French to enable German expansion. Under persistent cognitive attack from two authoritarian ideologies, the bulk of the French Army fell into defeatism. The French disaster of 1940 is one of several historical examples where a manipulated perception of reality prevailed over reality itself. It would be naive to assume that the American will is a natural law unaffected by the environment. Historically, the American will to defend freedom has always been strong; however, the information environment has changed. Therefore, this cognitive space must be maintained, reignited and shared when weaponized information threatens it.

In the Battle of the Bulge, the conflict between good and evil was open and visible. There was no competing narrative. The goal of the campaign was easily understood, with clear boundaries between friendly and enemy activity. Today, seven decades later, we face competing tailored narratives, digital manipulation of media, an unprecedented complex information environment, and a fast-moving, scattered situational picture.

Our adversaries will exploit, and already are exploiting, the fact that we as a democracy do not tell our forces what to think. Our only framework is loyalty to the Constitution and the American people. As a democracy, we expect our soldiers to support the Constitution and the mission. Members of our force have the democratic and constitutional right to think whatever they find worthwhile to consider.

To fight influence operations, we would typically control what information is presented to the force. However, we cannot tell our force what to read and not read, due to First Amendment rights. While this may not have caused issues in the past, social media has given our adversaries an opportunity to present a plethora of information meant to persuade our force.

In addition, there is too much information flowing in multiple directions for centralized quality control or fact-checking. The vetting of information must occur at the individual level, and we need to enable the force’s access to high-quality news outlets. This doesn’t require any large investment. The Army currently funds access to training and course material for education purposes. Extending these online resources to give every member of the force online access to a handful of quality news organizations costs little but creates a culture of reading fact-checked news. More importantly, news that is not funded by clickbait is likely to be less sensational, since its funding comes from dedicated readers interested in actual news that matters.

In a democracy, cognitive force protection means learning, training and enabling the individual to see the demarcation between truth and disinformation. As servants of our republic and people, leaders of character can educate their units on assessing and validating information. As a first step, we must work toward this idea and provide tools to protect our force from an assault in the cognitive domain.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. Col. Stephen Hamilton is the chief of staff at the institute and a professor at the academy. The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Defense Department.


Government cyber breach shows need for convergence

(I co-authored this piece with MAJ Suslowicz and LTC Arnold).

MAJ Chuck Suslowicz, Jan Kallberg, and LTC Todd Arnold

The SolarWinds breach points out the importance of having both offensive and defensive cyber force experience. The breach is an ongoing investigation, and we will not comment on the investigation. Still, in general terms, we want to point out the exploitable weaknesses in creating two silos — offensive cyber operations (OCO) and defensive cyber operations (DCO). The separation of OCO and DCO, through the specialization of formations and leadership, undermines the broader understanding and value of threat intelligence. The growing demarcation between OCO and DCO also has operational and tactical implications. The Multi-Domain Operations (MDO) concept emphasizes the competitive advantages that the Army — and the greater Department of Defense — can bring to bear by leveraging the unique and complementary capabilities of each service.

It requires that leaders understand the capabilities their organization can bring to bear in order to achieve the maximum effect from available resources. Cyber leaders must have exposure to the depth and breadth of their chosen domain to contribute to MDO.

Unfortunately, within the Army’s operational cyber forces, there is a tendency to designate officers as either OCO or DCO specialists. The shortsighted nature of this categorization is detrimental to the Army’s efforts in cyberspace and stymies the development of the cyber force, affecting all soldiers. The Army will suffer in its planning and its ability to contribute operationally to MDO from a siloed officer corps unexposed to the domain’s inherent flexibility.

We consider the assumption that there is a clean distinction between OCO and DCO to be flawed. It perpetuates the idea that the two operational types are doing unrelated tasks with different tools, and that experience in one will not improve performance in the other. We do not see such a rigid distinction between OCO and DCO competencies. In fact, most concepts within the cyber domain apply directly to both types of operations. The argument that OCO and DCO share competencies is not new; the iconic cybersecurity expert Dan Geer pointed out nearly two decades ago that cyber tools are dual-use, and he continues to do so. A tool that is valuable to a network defender can prove equally valuable during an offensive operation, and vice versa.

For example, a tool that maps a network’s topology is critical for the network owner’s situational awareness. The same tool can be equally effective for an attacker maintaining situational awareness of a target network. The dual-use nature of cyber tools requires cyber leaders to recognize both sides of their utility: a tool that does a beneficial job of visualizing key terrain to defend also creates a high-quality roadmap for a devastating attack. Limiting officer experiences to only one side of cyberspace operations (CO) will limit their vision, handicap their input as future leaders, and risk squandering effective use of the cyber domain in MDO.
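To make the dual-use point concrete, below is a minimal illustrative sketch in Python (a toy of our own construction, not any fielded tool): a naive ping sweep that builds a simple map of live hosts. The identical output serves a defender building situational awareness of friendly terrain and an attacker building a target list.

import ipaddress
import platform
import subprocess

def ping(host: str) -> bool:
    """Return True if a single ICMP echo request to the host succeeds."""
    flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", flag, "1", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

def map_subnet(cidr: str) -> dict:
    """Sweep a subnet and record which hosts answer; the map itself is dual-use."""
    return {str(ip): ping(str(ip)) for ip in ipaddress.ip_network(cidr).hosts()}

if __name__ == "__main__":
    # Small range for illustration; the technique scales to any network.
    for host, alive in map_subnet("192.168.1.0/30").items():
        print(host, "up" if alive else "down")

Nothing in the script is inherently offensive or defensive; intent and authorization, not the code, determine which it is.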

An argument will be made that “deep expertise is necessary for success” and that officers should be chosen for positions based on their previous exposure. This argument fails on two fronts. First, the Army’s decades of experience in officer development have shown the value of diverse exposure in officer assignments. Other branches already ensure officers experience a breadth of assignments to prepare them for senior leadership.

Second, this argument ignores the reality of challenging technical tasks within the cyber domain. As cyber tasks grow more technically challenging, the tools become more shared between OCO and DCO, not less. For example, two of the most technically challenging tasks, reverse engineering of malware (DCO) and development of exploits (OCO), use virtually identical toolkits.

An identical argument can be made for network defenders preventing adversarial access and offensive operators seeking to gain access to adversary networks. Ultimately, the types of operations differ in their intent and approach, but significant overlap exists within their technical skillsets.

Experience within one fragment of the domain directly translates to the other and provides insight into an adversary’s decision-making processes. This combined experience provides critical knowledge for leaders, and a lack of it will undercut the Army’s ability to execute MDO effectively. Defenders with OCO experience will be better equipped to identify an adversary’s most likely and most devastating courses of action within the domain. Similarly, offensive operations planned by leaders with DCO experience are more likely to succeed, as the planners are better prepared to account for potential adversary countermeasures.

In both cases, the cross-pollination of experience improves the Army’s ability to leverage the cyber domain and improves its effectiveness. Single-tracked officers may initially be easier to integrate or better able to contribute on day one of an assignment. However, single-tracked officers will ultimately bring far less to the table than officers experienced in both sides of the domain, given the multifaceted cyber environment in MDO.

Maj. Chuck Suslowicz is a research scientist in the Army Cyber Institute at West Point and an instructor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). Dr. Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. LTC Todd Arnold is a research scientist in the Army Cyber Institute at West Point and an assistant professor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Department of Defense.


If Communist China loses a future war, entropy could be imminent

What happens if China engages in a great power conflict and loses? Will the Chinese Communist Party’s control over society survive a horrifying defeat?

The People’s Liberation Army (PLA) last fought a massive-scale war during the invasion of Vietnam in 1979, a failed operation to punish Vietnam for toppling the Khmer Rouge regime in Cambodia. Since 1979, the PLA has shelled Vietnam on different occasions and been involved in other border skirmishes, but it has not fought a full-scale war. In the last decades, China has increased its defense spending and modernized its military: it has fielded advanced air defenses, cruise missiles and other advanced military hardware, and built a blue-water navy from scratch. Still, there is significant uncertainty about how the Chinese military would perform.

Modern warfare demands integration, joint operations, command, control, intelligence, and the ability to understand and execute the ongoing, all-domain fight. War is complex machinery, with low margins of error and devastating outcomes for the unprepared. Whether you are against or for the U.S. military operations of the last three decades, the fact is that prolonged conflict and engagement have made the U.S. military experienced. Chinese inexperience, in combination with unrealistic expansionist ambitions, can be the downfall of the regime. Dry-land swimmers may train the basics, but they never become great swimmers.

Although it may look like a creative strategy for China to harvest trade secrets and intellectual property, and to put developing countries in debt to gain influence, I would question how rational the Chinese apparatus is. The repeated visualization of the Han nationalist cult appears to be a strength, with the youth rallying behind the Xi Jinping regime, but it is also a significant weakness. The weakness is blatantly visible in the Chinese need for surveillance and population control to maintain stability: surveillance and repression so encompassing in the daily life of the Chinese population that East Germany’s security services appear to have been amateurs. All chauvinist cults implode over time because the unrealistic assumptions add up, and so does the sum of all delusional ideological decisions. Winston Churchill knew, after Nazi Germany declared war on the United States in December of 1941, that the Allies would prevail and win the war. Nazi Germany did not have the GDP or manpower to sustain a war on two fronts, but the Nazis did not care, because they were irrational and driven by hateful ideology. Just months before, Nazi Germany had invaded the massive Soviet Union to create Lebensraum and feed an urge to reestablish German-Austrian dominance in Eastern Europe. Then the Nazis unilaterally declared war on the United States. The rationale for the declaration of war was ideology, a worldview that demanded expansion and conflict, even though Germany was strategically inferior and eventually lost the war.

The Chinese belief that China can become a global authoritarian hegemon is likely on the same journey. China today is driven by its flavor of expansionist ideology that seeks conflict without being strategically able; it is worth noting that not a single major country is its ally. The Chinese supremacist propaganda works in peacetime: the regime holds massive rallies hailing Mao Zedong’s military genius, and the people sing, dance, and wave red banners. But will that grip hold if the PLA loses? In case of a failed military campaign, is the Chinese population, shaped by the one-child policy, ready for casualties, humiliation, and failure?
Will the authoritarian grip, with social credit scoring, facial recognition, informers, digital surveillance, and an army whose peacetime function is primarily crowd control, survive a crushing defeat? If the regime loses its grip, the wrath of the masses, pent up through decades of repression, will be unleashed.

A country the size of China, with a history of cleavages and civil wars, a suppressed diverse population, and socio-economic disparity, can be catapulted into Balkanization after a defeat. In the past, China has had long periods of internal fragmentation and weak central government.

The United States reacts differently to failure. As a country, the United States is far more resilient than we might assume from watching the daily news. If the United States loses a war, the president gets the blame, but there will still be a presidential library in his or her name. There is no revolution.

There is an assumption lingering over today’s public debate that China has a strong hand, advanced artificial intelligence, the latest technology, and is an uber-able superpower. I am not convinced. During the last decade, the countries in the Indo-Pacific region that seek to hinder the Chinese expansion of control, influence, and dominance have increasingly formed stronger relationships. The strategic scale is in the democratic countries’ favor. If China, still driven by ideology, pursues large-scale conflict, it is likely the end of the Communist dictatorship.

In my personal view, we should pay more attention to the humanitarian risks, the ripple effects, and the dangers of nuclear weapons in a civil war, in case the Chinese regime implodes after a failed future war.

Jan Kallberg, Ph.D.

What is the rationale behind election interference?

Any attempt to interfere with democratic elections, and the peaceful transition of power that results from these elections, is an attack on the country itself, as it seeks to destabilize and undermine core societal functions and the constitutional framework. We all agree on the severity of these attempts and that they are a real, ongoing concern for our democratic republic. That is all good, and democracies have to safeguard the integrity of their electoral processes.

But what is less discussed is why the main perpetrator — Russia, according to media reports — seeks to interfere with U.S. elections. What is the Russian rationale behind these information operations targeting the electoral system?

The Russian information operations that work in the fault lines of American society, seeking to make America more divided and weakened, have a more evident rationale. These operations seek to expand cleavages, misunderstandings, and conflicts within the population. That can affect military recruiting and public obedience in a national emergency, and have long-term effects on trust and confidence in society. So attacking the American cognitive space, in pursuit of splits and division in this democratic republic, has a more obvious goal. But what is the Russian return on investment for the electoral operations?

Even if the Russians had such an impact that candidate X won instead of candidate Y, the American commitment to defense and fundamental outlook on the world order has been fairly stable through different administrations and changes in Congress.

Naturally, one explanation is that Russia, as an authoritarian country with a democratic deficit, wants to portray functional democracies as having their own issues, and liberal democracy as a failing and flawed concept. In a democracy, if the electoral system is unable to ensure the integrity of elections, the legitimacy of the government will be questioned. The question is whether that is the Russian endgame.

In my view, there is more to the story than Russians merely trying to create a narrative that democracy doesn’t work, tailored for the Russian domestic population so it will not threaten the current regime. The average Russian is no free-ranging political scientist pondering the underpinnings of governmental legitimacy, democratic models, and the importance of constitutional mechanisms. The Russian population is made up of the descendants of those who survived the communist terror, so by default, they are not quick to ask questions about governmental legitimacy. There is opposition within Russia, and a fraction of the population would like to see a regime change in the Kremlin. But in a Russian context, regime change doesn’t automatically mean a public urge for liberal democracy.

Let me present another explanation for the Russian electoral interference, one that might co-exist with the first, and it is related to how we perceive Russia.

The Russian information operations stir up a sentiment that the Russians are able to change the direction of our society. If the Russians are ready to strike the homeland, then they are a major threat. Only superpowers are major threats to the continental United States.

So instead of seeing Russia for what it is, a country with significant domestic issues, reliant on massive extraction of natural resources sold to a world market that buys from the lowest bidder, we overestimate its ability. Russia has failed over the last decades to advance its ability to produce and manufacture competitive products, but the information operations make us believe that Russia is a potent superpower.

The nuclear arsenal makes Russia a superpower per se. Still, it cannot be effectively visualized for a foreign public, nor can it impact national sentiment in a foreign country, especially when Western societies in 2020 almost seem to have forgotten that nukes exist. Nukes are no longer “practical” tools to project superpower status.

If the Russians can stir up our politicians’ belief that Russia is a significant adversary, and that belief gives Russia bargaining power and geopolitical consideration, then that appears more logical as a Russian goal.

Jan Kallberg, Ph.D.

The evaporated OODA-loop

The accelerated execution of cyber attacks and an increased ability to identify vulnerabilities for exploitation at machine speed compress the time window cybersecurity management has to address unfolding events. In reality, we assume there will be time to lead, assess, and analyze, but that window might be closing rapidly. It is time to face the issue of accelerated cyber engagements.

If there is limited time to lead, how do you ensure that you can execute a defensive strategy? How do we launch countermeasures at a speed beyond human ability and comprehension? If you don’t have time to lead, the alternative is to preauthorize. In the early days of the Cold War, war planners and strategists who were used to having days to react to events faced ICBMs that forced decisions within minutes. The solution? Preauthorization. The analogy between how the nuclear threat was addressed and cybersecurity works to a degree – but we have to recognize that the number of possible scenarios in cybersecurity could run into the hundreds, so we need to prioritize.

The cybersecurity preauthorization process would require an understanding of likely scenarios and of the events that would unfold as these scenarios play out. The weaknesses in preauthorization are several. First, the scenarios we create are limited, because they are built on how we perceive our system environment. This is exemplified by the old saying: “What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.”

The creation of scenarios as a foundation for preauthorization will be laden with biases, assumptions that some areas are secure when they are not, and an inability to see the attack vectors an attacker sees. So the major challenge, when considering preauthorization, is to create scenarios that are representative of potential outcomes.

One way is to look at the different attack strategies used in earlier incidents. This limits the scenarios to what has already happened to others, but it can serve as a base to which additional scenarios are added. The MITRE ATT&CK Navigator provides an excellent tool to simulate and create attack scenarios that can be a foundation for preauthorization. As we progress, artificial intelligence will become an integrated part of offloading decision-making, but we are not there yet. In the near future, artificial intelligence can cover parts of the managerial spectrum, increasing the human ability to act in very brief time windows.
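As a hedged illustration of the scenario-harvesting step, the Python sketch below pulls the public MITRE ATT&CK enterprise dataset (the STIX bundle published in the mitre/cti GitHub repository; the URL and field layout are assumptions that should be verified against the current release) and groups techniques by tactic, producing raw scenario families that could later be refined in the Navigator.

import json
import urllib.request
from collections import defaultdict

# Public STIX 2.x bundle of enterprise ATT&CK (assumed current location).
ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
              "enterprise-attack/enterprise-attack.json")

with urllib.request.urlopen(ATTACK_URL) as response:
    bundle = json.load(response)

# Group technique names under each tactic (kill chain phase).
scenarios = defaultdict(list)
for obj in bundle["objects"]:
    if obj.get("type") != "attack-pattern" or obj.get("revoked"):
        continue
    for phase in obj.get("kill_chain_phases", []):
        if phase["kill_chain_name"] == "mitre-attack":
            scenarios[phase["phase_name"]].append(obj["name"])

# Each tactic becomes a candidate scenario family for preauthorized responses.
for tactic, techniques in sorted(scenarios.items()):
    print(f"{tactic}: {len(techniques)} techniques")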

The second weakness is preauthorization’s vulnerability to probes and reverse engineering. Cybersecurity is active 24/7/365, with numerous engagements on an ongoing basis. Over time, and using machine learning, automated attack mechanisms could learn how to avoid triggering preauthorized responses by probing, and reverse-engineer ways to pass through the preauthorized controls.
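A toy example makes the probing weakness tangible. The sketch below (an assumed design for illustration, not a recommendation) shows a preauthorized control with a fixed trigger threshold, and how an attacker who has learned that threshold simply paces its activity to stay underneath it.

from collections import Counter

# Preauthorized rule (illustrative): block a source after 5 failed logins.
FAILED_LOGIN_THRESHOLD = 5
failures = Counter()

def record_failed_login(source_ip: str) -> str:
    """Apply the preauthorized control and return the action taken."""
    failures[source_ip] += 1
    if failures[source_ip] >= FAILED_LOGIN_THRESHOLD:
        return f"BLOCK {source_ip}"  # the preauthorized response fires
    return "ALLOW"                   # below the threshold, nothing happens

# An automated attacker that has reverse-engineered the threshold paces
# itself: four failures, never five, and the control never triggers.
for attempt in range(4):
    print(record_failed_login("203.0.113.7"))

Randomizing thresholds and mixing in human review raise the cost of such probing, at the price of slower and less predictable responses.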

So there is no easy road forward, but instead a tricky path that requires clear objectives, alignment with risk management and its risk appetite, and an acceptance that the final approach to addressing the increased velocity of attacks might not be perfect. The alternative – not addressing the accelerated execution of attacks – is not a viable option. That would hand over the initiative to the attacker and expose the organization to uncontrolled risks.

Repeatedly through the last two years, I have read references to the OODA-loop and the utility of the OODA concept for cybersecurity. The OODA-loop resurfaces in cybersecurity and information security managerial approaches as a structured way to address unfolding events. The OODA (Observe, Orient, Decide, Act) loop, developed by John Boyd in the 1960s, follows four steps: you observe the events unfolding, you orient your assets at hand to address the events, you decide on a feasible approach, and you act.

The OODA-loop has become a central concept in cybersecurity over the last decade, as it is seen as a vehicle to address what attackers do, when and where they do it, and what you should do and where it is most effective. The refrain has been “you need to get inside the attacker’s OODA-loop.” The OODA-loop is used as a way to understand the adversary and tailor your own defensive actions.

Retired Army Colonel Tom Cook, former research director for the Army Cyber Institute at West Point, and I wrote a 2017 IEEE article titled “The Unfitness of Traditional Military Thinking in Cyber,” questioning the validity of the OODA-loop in cyber when events unfold faster and faster. Today, in 2020, the validity of the OODA-loop in cybersecurity is on the brink of evaporating due to the increased speed of attacks. The time needed to observe and assess, direct resources, make decisions, and take action will be too long to muster a successful cyber defense.

Attacks occurring at computational speed worsen the inability to assess and act, and the increasingly shortened time frames likely to be found in future cyber conflicts will disallow any significant, timely human deliberation.

Moving forward

I have no intention of being a narrative impossibilist who presents challenges with no solutions, so the current way forward is preauthorization. In the near future, the human ability to play an active role in rapid engagements will be supported by artificial intelligence decision-making that executes the tactical movements. The human mind remains in charge of the operational decisions for several reasons – control, the larger picture, strategic implementation, and intent. For cybersecurity, it is pivotal over the next decade to be able to operate with a decreasing time window to act.

Jan Kallberg, Ph.D.

For ethical artificial intelligence, security is pivotal


The market for artificial intelligence is growing at an unprecedented speed, not seen since the introduction of the commercial Internet. The estimates vary, but the global AI market is assumed to grow 30 to 60 percent per year. Defense spending on AI projects is increasing at an even higher rate when we add wearable AI and systems that depend on AI. Defense investments such as augmented reality, automated target recognition, and tactical robotics would not advance at today’s rate without AI to support the realization of these concepts.

The beauty of the economy is responsiveness. With an identified “buy” signal, the market works to satisfy the need from the buyer. Powerful buy signals lead to rapid development, deployment, and roll-out of solutions, knowing that time to market matters.

My concern is based on earlier analogies in which time to market prevailed over conflicting interests: the first years of the commercial internet, the introduction of remote control for supervisory control and data acquisition (SCADA) and manufacturing, and the rapid growth of smartphone apps. In each of these cases, security was not the first thing on the developer’s mind; time to market was the priority. This exposure increases with the economically sound pursuit of commercial off-the-shelf (COTS) products, as sensors, chipsets, functions, electric controls, and storage devices can be bought on the civilian market for a fraction of the cost. These COTS products cut costs, give the American people more defense and security for the money, and shorten the development and deployment cycle.

The Department of Defense has adopted five ethical principles for the department’s future utilization of AI: responsible, equitable, traceable, reliable, and governable. The common denominator of all five principles is cybersecurity. If the cybersecurity of the AI application is inadequate, these five principles can be jeopardized and no longer steer the DOD AI implementation.

Future AI implementation increases the attack surface radically. Of particular concern is the ability to detect manipulation of the processes, because, for the operators, the underlying AI processes are not clearly understood or monitored. A system that detects targets from images or from streaming video capture, where AI is used to identify target signatures, will generate decision support that can lead to the destruction of those targets: the targets are engaged and neutralized. One of the ethical principles for AI is “responsible.” How do we ensure that the targeting is accurate? How do we ensure that the algorithm is not corrupted and that sensors are not being tampered with to produce spurious data? It becomes a matter of security.
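One building block for such safeguards is straightforward. The sketch below (a minimal illustration of the general idea, not DoD practice) authenticates each sensor frame with an HMAC so that data tampered with in transit is rejected before it reaches the AI model; key management and replay protection are deliberately omitted.

import hashlib
import hmac

SHARED_KEY = b"field-distributed-secret"  # hypothetical pre-shared key

def sign_frame(frame: bytes) -> bytes:
    """Sensor side: tag each frame before transmission."""
    return hmac.new(SHARED_KEY, frame, hashlib.sha256).digest()

def verify_frame(frame: bytes, tag: bytes) -> bool:
    """Inference side: accept only frames whose tag verifies."""
    return hmac.compare_digest(sign_frame(frame), tag)

frame = b"raw sensor bytes"
tag = sign_frame(frame)
assert verify_frame(frame, tag)             # intact frame accepted
assert not verify_frame(frame + b"x", tag)  # tampered frame rejected

Integrity checks of this kind do not make the model itself trustworthy, but they narrow the attack surface to the algorithm and the sensor hardware.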

In a larger conflict, where ground forces are not able to inspect the effects on the ground, the feedback loop that would invalidate the decisions supported by AI might not reach the operators for weeks. Or it might surface only after the conflict is over. A rogue system can likely produce spurious decision support for longer than we are willing to admit.

Of the five principles, “equitable” is the area of greatest human control. Even if embedded bias in a process is hard to detect, controlling it is within our reach. “Reliable” relates directly to security because it requires that the systems maintain confidentiality, integrity, and availability.

If the principle “reliable” requires cybersecurity vetting and testing, we have to realize that these AI systems are part of complex technical structures with a broad attack surface. If “reliable” is jeopardized, then “traceable” becomes problematic, because if the integrity of the AI is questionable, it is not a given that “relevant personnel possess an appropriate understanding of the technology.”

The principle “responsible” can still be valid, because deployed personnel make sound and ethical decisions based on the information provided, even if a compromised system feeds spurious information to the decision-maker. The principle “governable” acts as a safeguard against “unintended consequences.” The unknown is the time from when unintended consequences occur until the operators of the compromised system understand that the system is compromised.

It is evident when a target that should be hit is repeatedly missed; the effects can be observed. If the effects cannot be observed, it is no longer a given that “unintended consequences” are identified, especially in a fluid multi-domain battlespace. A compromised AI system for target acquisition can mislead targeting, acquiring hidden non-targets that waste resources and weapon-system availability while exposing friendly forces to detection. The time to detect such a compromise can be significant.

My intention is to show that cybersecurity is pivotal for AI success. I do not doubt that AI will play an increasing role in national security. AI is a top priority for the United States and our friendly foreign partners, but potential adversaries will make finding ways to compromise these systems a top priority of their own.

What COVID-19 can teach us about cyber resilience

Dr. Jan Kallberg and Col. Stephen Hamilton
March 23, 2020

The COVID-19 pandemic is a challenge that will create health risks for Americans and have long-lasting effects. For many, this is a tragedy, a threat to life, health, and finances. What draws our attention is what COVID-19 has meant for our society and the economy, and how, in an unprecedented way, families, corporations, schools, and government agencies quickly had to adjust to a new reality. Why does this matter from a cyber perspective?

COVID-19 has created increased stress on our logistic, digital, public, and financial systems, and this could in fact resemble what a major cyber conflict would mean to the general public. It is also essential to assess what matters to the public during this time. COVID-19 has created widespread disruption of work, transportation, logistics, and the distribution of food and necessities, and increased stress on infrastructures, from Internet connectivity to just-in-time delivery. It has unleashed abnormal behaviors.

A potential adversary will likely not have the ability to take down an entire sector of our critical infrastructure, or the business ecosystem, for several reasons. First, awareness of and investments in cybersecurity have drastically increased over the last two decades. This, in turn, has reduced the number of single points of failure and increased the number of built-in redundancies, as well as the ability to maintain operations in a degraded environment.

Second, the time and resources required to create what was once referred to as a “Cyber Pearl Harbor” are beyond the reach of any near-peer nation. Decades of advancement, from increased resilience and layered defenses to new abilities to detect intrusions, have made it significantly harder to execute an attack of that size.

Instead, an adversary will likely focus its primary cyber capacity on what matters for its national strategic goals, for example, delaying the movement of the main U.S. force from the continental United States to theater by cyberattacking utilities, airports, railroads, and ports. That strategy has two clear goals: to deny the United States and its allies options in theater due to a lack of strength, and to strike a significant blow to the United States and allied forces early in the conflict. Given the choice between delaying U.S. forces’ arrival in theater and creating disturbances in thousands of grocery stores or wreaking havoc on office workers’ commutes, an adversary will prioritize what matters to its military operations first.

That said, in a future conflict, the domestic businesses, local governments, and services on which the general public relies will be targeted by cyberattacks. These second-tier operations will likely exploit vulnerabilities at scale in our society, but with less complexity, mainly exploiting targets of opportunity.

The similarity between the COVID-19 outbreak and a cyber campaign lies in the disruption of logistics and services, how the population reacts, and the stress it puts on law enforcement and first responders. These events can raise questions about the ability to maintain law and order and to prevent the destabilization of a distribution chain built for just-in-time operations, with minimal margins of deviation before it falls apart.

These second-tier attacks are by nature unsystematic and opportunity-driven. The goal is disruption, confusion, and stress. An authoritarian regime would likely not be hindered by international norms from attacking targets that jeopardize public health and create risks for the general population. Environmental hazards released by these attacks can create risks of loss of life and potentially dramatic long-term loss of quality of life for citizens. If the population questions the government’s ability to protect it, the government’s legitimacy and authority will suffer. Health and environmental risks appeal not only to the general public’s logic but also to its emotions, particularly uncertainty and fear. This can become a tipping point if the population fears the future to the point that it loses confidence in the government.

Therefore, as we see COVID-19 unfold, it can give us insights into how a broad cyber-disruption campaign could affect the U.S. population. Terrorism experts examine two effects of an attack – the attack itself and how the target population reacts.

Our potential adversaries are likely studying carefully how our society reacts to COVID-19: whether the population obeys the government, whether our government maintains control and enforces its agenda, and whether the nation was prepared.

Lessons learned from COVID-19 are applicable to strengthening U.S. cyber defense and resilience. These unfortunate events increase our understanding of how a broad cyber campaign could disrupt and degrade quality of life, government services, and business activity.

Why Iran would avoid a major cyberwar

Demonstrations in Iran last year and signs of the regime’s demise raise a question: What would the strategic outcome be of a massive cyber engagement with a foreign country or alliance?

Authoritarian regimes traditionally put survival first. Those that do not prioritize regime survival tend to collapse. Authoritarian regimes are always vulnerable because they are illegitimate. There will always be loyalists who benefit from the system, but for a significant part of the population, the regime is not legitimate. The regime exists only because it suppresses the popular will and uses force against any opposition.

In 2016, I wrote an article in the Cyber Defense Review titled “Strategic Cyberwar Theory – A Foundation for Designing Decisive Strategic Cyber Operations.” The utility of strategic cyberwar is linked to the institutional stability of the targeted state. If a nation is destabilized, it can be subdued to a foreign will, and the current regime’s ability to execute its strategy evaporates with the loss of internal authority and capability. The theory’s predictive power is most potent when applied to theocracies, authoritarian regimes, and dysfunctional experimental democracies, because their common tenet is weak institutions.

Fully functional democracies, on the other hand, have a definite advantage, because these advanced democracies have stability and institutions accepted by their citizenry. Nations openly adversarial to democracies are, in most cases, totalitarian states that are close to entropy. The reason these totalitarian states remain under their current regimes is the suppression of the popular will. Any removal of the pillars of repression, by destabilizing the regime design and the institutions that make it functional, will release the popular will.

A destabilized — and possibly imploding — Iranian regime is a more tangible threat to the ruling theocratic elite than any military systems being hacked in a cyber interchange. Dictators fear the wrath of the masses. Strategic cyberwar theory looks beyond the actual digital interchange, the cyber tactics, and instead seeks to provide predictive power for how a decisive cyber conflict should be conducted in pursuit of national strategic goals.

The Iranian military apparatus is a mix of traditional military defense, crowd control, political suppression, and shows of force for generating artificial internal authority in the country. If command and control evaporate in the military apparatus, so does the ability to control the population to the degree the Iranian regime has managed until now. In that light, what is in it for Iran to launch a massive cyber engagement against the free world? What can it win?

If the free world uses its cyber abilities, it is far more likely that Iran itself gets destabilized and falls into entropy and chaos, which could lead to major domestic bloodshed when the victims of 40 years of violent suppression decide the fate of their oppressors. That would not be the intent of the free world; it is just an outcome of the way the Iranian totalitarian regime has acted toward its own people. The risks for the Iranians are far more significant than the potential upside of being able to inflict damage on the free world.

That doesn’t mean Iranians would not try to hack systems in foreign countries they consider adversarial. Because of the Iranian regime’s constant need to feed its internal propaganda machinery with “victories,” such hacking is more likely to take place on a smaller scale, as uncoordinated low-level attacks seeking to exploit opportunities they come across. In my view, far more dangerous are non-Iranian advanced nation-state cyber actors that impersonate Iranian hackers, making aggressive preplanned attacks under cover of a spoofed identity and transferring the blame, fueled by recent tensions.

A new mindset for the Army: silent running

//I wrote this article together with Colonel Stephen Hamilton and it was published in C4ISRNET//

In the past two decades, the U.S. Army has continually added new technology to the battlefield. While this technology has enhanced the ability to fight, it has also greatly increased an adversary’s ability to detect, and potentially interrupt or intercept, operations.

The adversary in the future fight will have a more technologically advanced ability to sense activity on the battlefield – light, sound, movement, vibration, heat, electromagnetic transmissions, and other quantifiable metrics. This is a fundamental and accepted assumption. The future near-peer adversary will be able to sense our activity in an unprecedented way due to modern technologies. This is driven not only by technology but also by commoditization; sensors that cost thousands of dollars during the Cold War are available at a marginal cost today. In addition, software-defined radio technology has larger bandwidth than traditional radios and can scan the entire spectrum several times a second, making it easier to detect new signals.

We turn to the thoughts of Bertrand Russell in his version of Occam’s razor: “Whenever possible, substitute constructions out of known entities for inferences to unknown entities.” Occam’s razor is named after the medieval philosopher and friar William of Ockham, who stated that under uncertainty, the fewer assumptions, the better, and who preached pursuing simplicity by relying on the known until simplicity could be traded for greater explanatory power. So, staying with the limited assumption that the future near-peer adversary will be able to sense our activity at a previously unseen level, we will, unless we change our default modus operandi, be exposed to increased threats and risks. The adversary’s acquired sensor data will be used for decision-making, direction finding, and engaging friendly units with all the means available to the adversary.

The Army mindset must change to mirror the Navy’s tactic of “silent running” used to evade adversarial threats. While there are recent advances in sensor counter-measure techniques, such as low probability of detection and low probability of intercept, silent running reduces the emissions altogether, thus reducing the risk of detection.

In the U.S. Navy submarine fleet, silent running is a stealth mode utilized over the last 100 years following the introduction of passive sonar in the latter part of the First World War. The concept is to avoid discovery by the adversary’s passive sonar by seeking to eliminate all unnecessary noise. The ocean is an environment where hiding is difficult, similar to the Army’s future emission-dense battlefield.

However, on the battlefield, emissions can be managed in order to reduce noise feeding into the adversary’s sensors. A submarine in silent running mode will shut down non-mission essential systems. The crew moves silently and avoids creating any unnecessary sound, in combination with a reduction in speed to limit noise from shafts and propellers. The noise from the submarine no longer stands out. It is a sound among other natural and surrounding sounds which radically decreases the risk of detection.

From the Army’s perspective, the adversary’s primary objective when entering the fight is to disable command and control, elements of indirect fire, and enablers of joint warfighting. All of these units are highly active in the electromagnetic spectrum. So how can silent running be applied for a ground force?

If we transfer silent running to the Army, the same tactic can be as simple as not utilizing equipment just because it is fielded to the unit. If generators go offline when not needed, then sound, heat, and electromagnetic noise are reduced. Radios that are not mission-essential are switched to specific transmission windows or turned off completely, which limits the risk of signal discovery and potential geolocation. In addition, radios are used at the lowest power that still provides acceptable communication, as opposed to unnecessarily high power that would increase the range of detection, as the sketch below illustrates. The bottom line: a paradigm shift is needed in which we seek to emit a minimum of detectable signatures, emissions, and radiation.
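The power discipline point can be put in rough numbers. Under a free-space assumption, received power falls off with the square of distance, so an adversary’s maximum detection range scales with the square root of transmit power; terrain and antennas change the constants, not the basic trade-off. A back-of-the-envelope sketch:

import math

def detection_range_ratio(power_factor: float) -> float:
    """Relative detection range after scaling transmit power by power_factor,
    assuming free-space propagation (received power ~ 1/distance^2)."""
    return math.sqrt(power_factor)

for factor in (1.0, 0.5, 0.25, 0.1):
    print(f"TX power x{factor:4}: detection range x{detection_range_ratio(factor):.2f}")

Cutting transmit power to a quarter halves the adversary’s detection range, a meaningful gain for a unit trying to stay below the sensor floor.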

The submarine becomes undetectable as its noise level diminishes to the level of natural background noise, which enables it to hide within the environment. Ground forces will still be detectable in some form – the future density of sensors and increased adversarial ability over time ensure that – but one goal is to blur the adversary’s situational picture and disable its ability to accurately assess the function, size, position, and activity of friendly units. The future fluid multi-domain operations (MDO) battlefield would also increase the challenge for the adversary compared to a more static battlefield with a clear separation between friend and foe.

As a preparation for a future near-peer fight, it is crucial to have an active mindset on avoiding unnecessary transmissions that could feed adversarial sensors with information that can guide their actions. This might require a paradigm shift, where we are migrating from an abundance of active systems to being minimalists in pursuit of stealth.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. Col. Stephen Hamilton is the technical director of the Army Cyber Institute at West Point and an academy professor at the U.S. Military Academy. The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy, or the Department of Defense.


From the Adversary’s POV – Cyber Attacks to Delay CONUS Forces Movement to Port of Embarkation Pivotal to Success

We tend to see vulnerabilities and concerns about cyber threats to critical infrastructure from our own viewpoint. But an adversary will assess where and how a cyberattack on America will benefit the adversary’s strategy. I am not convinced that attacks on critical infrastructure, in general, have the payoff an adversary seeks.

The American reaction to Sept. 11 and to any attack on U.S. soil hints to an adversary that attacking critical infrastructure to create hardship for the population might work contrary to the intended softening of the will to resist foreign influence. It is more likely that attacks affecting the general population instead strengthen the will to resist and fight, similar to the British reaction to the German bombing campaign, the Blitz, in 1940. We can’t rule out attacks that affect the general population, but no adversary has enough offensive capability to attack all 16 sectors of critical infrastructure and gain strategic momentum. An adversary has limited cyberattack capabilities and needs to prioritize cyber targets that are aligned with the overall strategy. Trying to see what options, opportunities, and directions an adversary might take requires that we shift our point of view to the adversary’s outlook. One of my primary concerns is pinpointed cyberattacks disrupting and delaying the movement of U.S. forces to theater.

Seen from the potential adversary’s point of view, bringing the cyber fight to our homeland – think delaying the transportation of U.S. forces to theater by attacking infrastructure and transportation networks from bases to the port of embarkation – is a low-investment/high-return operation. Why does it matter?

First, the bulk of U.S. forces are not in the region where a conflict erupts. Instead, they are mainly based in the continental United States and must be transported to theater. From an adversary’s perspective, delaying the U.S. forces’ arrival might be its only opportunity. If the adversary can exploit operational and tactical superiority in the initial phase of the conflict, by engaging our local allies and U.S. forces in the region swiftly, it can make territorial gains that are too costly to reverse later, leaving the adversary in a strong bargaining position.

Second, even if only partially successful, cyberattacks that delay U.S. forces’ arrival will create confusion. Such attacks would mean units might arrive at different ports, at different times and with only a fraction of the hardware or personnel while the rest is stuck in transit.

Third, an adversary that is convinced before a conflict that it can significantly delay the arrival of U.S. units from the continental U.S. to theater will assess differently the risks of a fait accompli attack. Training and Doctrine Command defines such an attack as one that “is intended to achieve military and political objectives rapidly and then to quickly consolidate those gains so that any attempt to reverse the action by the U.S. would entail unacceptable cost and risk.” Even if an adversary is strategically inferior in the long term, the window of opportunity created by the assumed delay in moving units from the continental U.S. to theater might be enough for it to take military action seeking a successful fait accompli attack.

In designing a cyber defense for critical infrastructure, it is vital that what matters to the adversary be part of the equation. In peacetime, cyberattacks probe systems across society, from waterworks, schools, social media, and retail all the way to sawmills. Cyberattacks in wartime will have more explicit intent and seek a specific gain that supports the strategy. Therefore, it is essential to identify and prioritize the critical infrastructure that is pivotal in war, instead of attempting to spread out the defense to cover everything touched in peacetime.

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Our Dependence on the Top 2% Cyber Warriors

As an industrial nation transitioning to an information society engaged in digital conflict, we tend to see the technology as the weapon. In the process, we ignore the fact that only a few humans can have a large-scale operational impact.

But we underestimate the importance of applicable intelligence, the intelligence on how to apply things in the right order. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are mostly publicly available; anyone can download them from the Internet and use them. The weaponization of the tools occurs when they are used by someone who understands how to apply them in the right order.

In 2017, Gen. Paul Nakasone said “our best [coders] are 50 or 100 times better than their peers,” and asked “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” The success of cyber operations is highly dependent, not on tools, but upon the super-empowered individual that Nakasone calls “the 50-x coder.”

There have always been exceptional individuals who have an irreplaceable ability to see a challenge early on, create a technical solution, and know how to play it for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of artificial intelligence increases the reliance on these highly capable individuals, because someone must set the rules and point out the trajectory for artificial intelligence at its initiation.

But this also raises a series of questions. Even if identified as a weapon, how do you make a human mind “classified?” How do we protect these high-ability individuals that are weapons in the digital world?

These minds are different because they see an opportunity to exploit in a digital fog of war when others don’t see it. They address problems unburdened by traditional thinking, in innovative ways, maximizing the dual-purpose of digital tools, and can generate decisive cyber effects.

It is this applicable intelligence that creates the process, that understands the application of tools, and that turns simple digital software into digitally lethal weapons. In the analog world, it is as if individuals had the supernatural ability to create a hypersonic missile from materials readily available at Kroger or Albertsons. These individuals are strategic national security assets for the nation.

Systemically, we struggle to see humans as the weapon, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed.

For America, technological wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies and ran through the Erie Canal, the manufacturing era, and the moon landing, all the way to autonomous systems, drones, and robots. In this default mindset, there is always a tool, an automated process, a piece of software, or a set of technical steps that can solve a problem or act. The same mindset sees humans merely as an input to technology, so humans are interchangeable and replaceable.

Super-empowered individuals are not interchangeable and cannot be replaced, unless we want to be stuck in a digital war. Artificial intelligence and machine learning support the intellectual endeavor to cyber defend America, but humans set the strategy and direction.

It is time to see weaponized minds for what they are: not dudes and dudettes, but strike capabilities.

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Time – and the lack thereof

For cybersecurity, it is pivotal for the next decade to be able to operate with a decreasing time window to act.

The accelerated execution of cyber attacks and an increased ability to identify vulnerabilities for exploitation at machine speed compress the time window cybersecurity management has to address unfolding events. In reality, we assume there will be time to lead, assess, and analyze, but that window might be closing. It is time to raise the issue of accelerated cyber engagements.

Limited time to lead

If there is limited time to lead, how do you ensure that you can execute a defensive strategy? How do we launch countermeasures at a speed beyond human ability and comprehension? If you don’t have time to lead, the alternative is to preauthorize.

In the early days of the Cold War, war planners and strategists who were used to having days to react to events faced ICBMs that forced decisions within minutes. The solution? Preauthorization. The analogy between how the nuclear threat was addressed and cybersecurity works to a degree – but we have to recognize that the number of possible scenarios in cybersecurity could run into the hundreds, so we need to prioritize.

The cybersecurity preauthorization process would require an understanding of likely scenarios and of how events would unfold within them. The weaknesses in preauthorization are several. First, the scenarios we create are limited by how we perceive our own system environment. This is exemplified by the old saying: "What gets us into trouble is not what we don't know. It's what we know for sure that just ain't so."

The scenarios that found preauthorization will be laden with biases, with assumptions that some areas are secure when they are not, and with an inability to see the attack vectors that an attacker sees. So the major challenge in preauthorization becomes creating scenarios that are representative of potential outcomes.

One way is to look at attack strategies used in the past. This limits the scenarios to what has already happened to others, but it can serve as a base to which additional scenarios are added. The MITRE ATT&CK Navigator provides an excellent tool to simulate and create attack scenarios that can be a foundation for preauthorization. Over time, artificial intelligence will become an integrated part of offloading decision-making, but we are not there yet. In the near future, artificial intelligence can cover parts of the managerial spectrum, increasing the human ability to act in very brief time windows.
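To make the idea concrete, a preauthorized response can be as simple as a lookup table that maps an observed ATT&CK technique to a response no human needs to approve in the moment. The Python sketch below is a minimal illustration under stated assumptions: the confidence thresholds, action names, and escalation fallback are mine, not an established playbook format; the technique IDs are real ATT&CK identifiers used for flavor.

from dataclasses import dataclass

@dataclass
class Playbook:
    technique: str     # observed ATT&CK technique ID
    confidence: float  # minimum detector confidence required to act
    action: str        # preauthorized response, executed without human review

# Illustrative rules only; thresholds and actions are assumptions.
PREAUTHORIZED = [
    Playbook("T1110", 0.90, "lock_account_and_alert"),     # brute force
    Playbook("T1486", 0.75, "isolate_host_from_network"),  # data encrypted for impact
    Playbook("T1071", 0.95, "block_c2_domain_at_egress"),  # C2 over application protocol
]

def respond(technique: str, confidence: float) -> str:
    """Return the preauthorized action, or escalate to a human
    when no scenario matches, the residual 'time to lead' case."""
    for rule in PREAUTHORIZED:
        if rule.technique == technique and confidence >= rule.confidence:
            return rule.action
    return "escalate_to_human"

print(respond("T1110", 0.97))  # lock_account_and_alert
print(respond("T1059", 0.99))  # unscripted scenario: escalate_to_human

The point of the sketch is the last line: everything outside the scenario library falls back on a human, which is exactly where the scenario-creation weaknesses above bite.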

The second weakness is preauthorization's vulnerability to probes and reverse engineering. Cybersecurity is active 24/7/365, with numerous engagements ongoing at any time. Over time, and using machine learning, automated attack mechanisms could learn how to avoid triggering preauthorized responses, probing and reverse-engineering their way past the preauthorized controls.

So there is no easy road forward, but instead a tricky path that requires clear objectives, alignment with risk management and its risk appetite, and an acceptance that the final approach to the increased velocity of attacks might not be perfect. The alternative, not addressing the accelerated execution of attacks, is not viable. That would hand the initiative to the attacker and expose the organization to uncontrolled risks.

Bye-bye, OODA loop

Repeatedly over the last year, I have read references to the OODA loop and the utility of the OODA concept for cybersecurity. The OODA loop resurfaces in cybersecurity and information security management as a structured way to address unfolding events. The OODA (Observe, Orient, Decide, Act) loop, developed by John Boyd in the 1960s, follows those four steps: you observe the events unfolding, you orient your assets to address them, you decide on a feasible approach, and you act.

The OODA loop has become a central concept in cybersecurity over the last decade, as it is seen as a vehicle to address what attackers do, when and where, and what you should do in response and where it is most effective. The mantra has been: "you need to get inside the attacker's OODA loop." The OODA loop is used as a way to understand the adversary and tailor your own defensive actions.

Retired Army Colonel Tom Cook, former research director for the Army Cyber Institute at West Point, and I wrote a 2017 IEEE article, "The Unfitness of Traditional Military Thinking in Cyber," questioning the validity of the OODA loop in cyber when events unfold faster and faster. Today, in 2019, the validity of the OODA loop in cybersecurity is on the brink of evaporating due to the increased speed of attacks. The time needed to observe and assess, direct resources, make decisions, and take action will be too long to muster a successful cyber defense.

Attacks occurring at computational speed worsen the inability to assess and act, and the increasingly short time frames likely in future cyber conflicts will disallow any significant, timely human deliberation.
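The arithmetic behind that claim is stark even with generous assumptions. The sketch below uses purely illustrative timings (every number is an assumption, not measured data) to show how a scripted attack can complete many times over inside a single human-led OODA cycle.

# Back-of-the-envelope time budget: human-led OODA cycle vs. scripted attack.
# All figures are illustrative assumptions.

human_ooda_seconds = {
    "observe": 300,  # alert surfaces in the SOC and is noticed
    "orient":  600,  # triage, correlation, briefing the decision-maker
    "decide":  900,  # approval chain signs off on a response
    "act":     300,  # countermeasure is pushed to production
}

attack_runtime_seconds = 180  # automated exploit chain runs end to end

defender_cycle = sum(human_ooda_seconds.values())  # 2,100 s, i.e., 35 minutes
ratio = defender_cycle / attack_runtime_seconds

print(f"Defender's first OODA cycle closes after {defender_cycle} s")
print(f"Attack completes in {attack_runtime_seconds} s")
print(f"The attack could run ~{ratio:.0f} times before the defender acts once")

Shrink the attack runtime toward seconds, as automation allows, and no plausible compression of the human phases keeps the defender inside the fight.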

Moving forward

I have no intention of being a narrative impossibilist who presents challenges with no solutions, so the current way forward is preauthorization. In the near future, the human ability to play an active role in rapid engagements will be supported by artificial intelligence decision-making that executes the tactical moves. The human mind stays in charge of the operational decisions for several reasons: control, the larger picture, strategic implementation, and intent. For cybersecurity, the pivotal challenge of the next decade is the ability to operate within a shrinking window of time to act.

Jan Kallberg, Ph.D.

Reassessed incentives for innovation support transformation

There is no way to ensure victory in the future fight other than to innovate, implement the advances, and scale innovation. To use Henry Kissinger's words: "The absence of alternatives clears the mind marvelously."

Innovative environments are not created overnight. The establishment of the right culture is based on mutual trust, a trust that allows members to be vulnerable and take chances. Failure is a milestone to success.

Important characteristics of an innovative environment are competence, expertise, passion, and a shared vision. Such an environment is populated by individuals who are in it for the long run and don't quit until they make advances. Individuals who strive for success and are determined to work toward excellence are all around us. For the defense establishment, the core challenge is to reassess the incentives provided, so that ambition and intellectual assets are directed at innovation and the future fight.

Edward N. Luttwak noted that strategy only matters if we have the resources to execute it. Embedded in Luttwak's statement is the general condition that if we are unable to identify, understand, incentivize, activate, and utilize our resources, the strategy does not matter. This leads to the questions: Who will be the innovator? How does the Department of Defense create a broad, innovative culture? Is innovation outsourced to think tanks and experimental labs, or is it entrusted to individuals who become experts in their subfields and drive innovation where they stand? Or do these models run in parallel? In general, are we ready to expose ourselves to the vulnerability of failure, and if so, what is an acceptable failure? These questions need to be addressed in the process of transformation.

Structural frameworks in place today can hinder innovation. One example is the traditional Defense Officer Personnel Management Act (DOPMA) personnel model. In theory, it is a form of the assembly line's scientific management, Taylorism, where the officer is processed through the system to the highest level of his or her career potential. In reality, the financial incentives favor following the flowchart for promotion instead of staying at a point where you are passionate about making an improvement. If a transformation to an innovative culture is to succeed, the incentives need to be aligned with the overall mission objective.

Another example is government-sponsored university research. Even if funds are allocated in pursuit of mobilizing civilian intellectual torque to deliver innovation that benefits the warfighter, traditional university research has little incentive to support the transformation of the Armed Forces. The majority of academia, and the overwhelming majority of research universities, pursue DOD and government research grants as income: resources to fund graduate students and facilities. Many of the sponsored projects are basic research whose results are made public, which defeats the purpose if you seek an innovative advantage, and they offer limited support to the future fight. Universities can tailor their research to fit the funding opportunity, which is logical from their viewpoint, and often it is a tweak on research they are already doing, squeezed into a grant application.

Academics at universities seek tenure, promotion, and leverage in their fields, so government funding becomes a box to check for tenure, a way to attract external funding, and support for academic career progression. The incentives to support DOD innovation are suppressed by the far stronger incentive for the researcher to gain personal career leverage at the university. In the future, it is likely more cost-effective to concentrate DOD-sponsored research at those universities that invest the time and effort to ensure their research is DOD-relevant, operationally current, and supportive of the warfighter. Universities that align themselves with DOD objectives and deliver innovation for the future fight will also have a better understanding of what the future threat landscape looks like, and they are more likely to have an interface for quick dissemination of DOD needs. A realignment of incentives for sponsored university research creates an opportunity for those ready to support the future fight.

There is a need to look, at the system level, at how innovation is incentivized to ensure that resources generate the effects sought. America has talent, ambition, a tradition of fearless engineering, and grit; the right incentives will unleash that innovative power.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

How the Founding Fathers helped make the US cyber-resilient

The Founding Fathers have done more for U.S. strategic cyber resiliency than any modern initiative. Their contribution is a stable society that can absorb attacks without falling into chaos, mayhem, and entropy. Stable countries have a significant advantage in future nation-state cyber-information conflicts. If nation-states seek to conduct decisive cyberwar, victory will not come from anecdotal exploits, but from systematic, destabilizing attacks that bring the targeted society down to the point that it is subject to foreign will. Societal stability is not created overnight; it is the product of decades, even centuries, of good government, civil liberties, fairness, and trust-building.

Why does it matter? Because the strategic tools to bring down and degrade a society will not provide the effects sought. For an adversary seeking strategic advantage by attacking U.S. critical infrastructure, the risk of retribution can outweigh the benefit.

The 2003 Northeast blackout is an example of how the American population reacts when a significant share of critical infrastructure is degraded, as it would be by hostile cyberattacks. Instead of imploding into chaos and looting, the affected population acted orderly and helped strangers, demonstrating a high degree of resiliency. The reason Americans act orderly and show such resiliency is a product of how we have designed our society, which leads back to the Founding Fathers. Americans are invested in the success of their society; therefore, they do not turn on each other in a crisis.

Historically, the tactic of attacking a stable society by generating hardship has failed more often than it has succeeded. One example is the Blitz, the German bombing of British metropolitan areas and infrastructure in 1940, which only hardened British resistance to Nazi Germany. After Dunkirk, several British parliamentarians were in favor of a separate peace with Germany; after the Blitz, British politicians were united against Germany, and Britain fought Nazi Germany single-handedly until the USSR and the United States entered the war.

A strategic cyber campaign will fail to destabilize the targeted society if the institutions remain intact following the assault or successfully operate in a degraded environment. From an American perspective, it is crucial for a defender to ensure the cyberattacks never reach the magnitude that forces society over the threshold to entropy. In America’s favor, the threshold is far higher than our potential adversaries’. By guarding what we believe in – fairness, opportunity, liberty, equality, and open and free democracy – America can become more resilient.

We generally underestimate how stable America is, especially compared to potential foreign adversaries. There is a deterrent embedded in that fact: the risks for an adversary might outweigh the potential gains.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.

At Machine Speed in Cyber – Leadership Actions Close to Nullified

In my view, one of the major weaknesses in cyber defense planning is the perception that there is time to lead a cyber defense while under attack. A major attack is likely to be automated and premeditated. If it is automated, the systems will execute the attacks at computational speed. In that case, no political or military leadership would be able to lead, for one simple reason: it has already happened before they can react.

A premeditated attack is planned over a long time, maybe years, and if automated, the execution of a massive number of exploits will be compressed into minutes. Therefore, future cyber defense will rely on components of artificial intelligence that can assess, act, and mitigate at computational speed. Naturally, this is a development that does not happen overnight.

In an environment where the actual digital interchange occurs at computational speed, the only thing the government can do is to prepare, give guidelines, set rules of engagement, disseminate knowledge to ensure a cyber-resilient society, and let the coders prepare the systems to survive in a degraded environment.

Another important factor is how these cyber defense measures can be reverse-engineered, and how visible they are to a pre-conflict probing wave of cyberattacks. If the preset defense measures can be "measured up" early, in the probing phase of a cyber conflict, it is likely that reverse engineering can turn them into a force multiplier for future attacks instead of bulwarks against them.

So we enter the land of "damned if you do, damned if you don't": if we pre-stage the conflict with AI-supported decision systems that lead the cyber defense at computational speed, we are also vulnerable to being reverse-engineered, and the artificial intelligence becomes tangible stupidity.
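The probing dynamic is easy to demonstrate. In the Python sketch below (event names and responses are hypothetical), a deterministic preauthorized defense leaks its entire rule table to an attacker in a single pass of probes. The randomized variant at the end is one possible countermeasure, my illustration rather than established doctrine: it makes the fingerprint noisier, at the price of less predictable behavior for the defender.

import random

# Hypothetical trigger-to-response table for a deterministic defense.
RULES = {"port_scan": "block_ip", "brute_force": "lock_account"}

def deterministic_defense(event: str) -> str:
    return RULES.get(event, "no_action")

# Attacker side: probe each event type once and record the response.
probes = ["port_scan", "brute_force", "dns_tunnel"]
fingerprint = {event: deterministic_defense(event) for event in probes}
print(fingerprint)  # the whole rule table leaks after one pass

# One possible mitigation (an assumption, not doctrine): answer from a
# randomized response set so repeated probes see a noisy picture.
RANDOMIZED = {
    "port_scan":   ["block_ip", "tarpit", "silent_log"],
    "brute_force": ["lock_account", "step_up_auth"],
}

def randomized_defense(event: str) -> str:
    return random.choice(RANDOMIZED.get(event, ["no_action"]))

print([randomized_defense("port_scan") for _ in range(5)])  # varies run to run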

We are in the early dawn of cyber conflicts. We can see the silhouettes of what is coming, but one thing is already very clear: the time factor. Politicians and military leaders will have no factual impact on events in real time in conflicts occurring at computational speed, so the focus has to be on the front end. Leadership is likely to have the highest impact by addressing what has to be done pre-conflict to ensure resilience when under attack.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.