If Communist China loses a future war, entropy could be imminent

What happens if China engages in a great power conflict and loses? Will the Chinese Communist Party’s control over society survive a horrifying defeat?
The People’s Liberation Army (PLA) last fought a massive-scale war during the invasion of Vietnam in 1979, a failed operation to punish Vietnam for toppling the Khmer Rouge regime in Cambodia. Since 1979, the PLA has shelled Vietnam on different occasions and been involved in other border skirmishes, but it has not fought a full-scale war. In recent decades, China has increased its defense spending and modernized its military, fielding advanced air defenses, cruise missiles, and other advanced hardware, and building a high-seas navy from scratch; still, there is significant uncertainty about how the Chinese military would perform.

Modern warfare demands integration: joint operations, command, control, intelligence, and the ability to understand and execute the ongoing, all-domain fight. War is complex machinery, with low margins of error, and it can have devastating outcomes for the unprepared. Whether you are for or against the U.S. military operations of the last three decades, the fact is that prolonged conflict and engagement have made the U.S. military experienced. Chinese inexperience, combined with unrealistic expansionist ambitions, could be the downfall of the regime. Dry-land swimmers may train the basics, but they never become great swimmers.

Although it may look like a creative strategy for China to harvest trade secrets and intellectual property, and to put developing countries in debt to gain influence, I would question how rational the Chinese apparatus is. The repeated visualization of the Han nationalist cult appears to be a strength, with the youth rallying behind the Xi Jinping regime, but it is also a significant weakness. The weakness is blatantly visible in the Chinese need for surveillance and population control to maintain stability: surveillance and repression so encompassing in the daily life of the Chinese population that the East German (DDR) security services appear to have been amateurs.

All chauvinist cults implode over time because the unrealistic assumptions add up, and so does the sum of all delusional ideological decisions. Winston Churchill knew, after Nazi Germany declared war on the United States in December 1941, that the Allies would prevail and win the war. Nazi Germany did not have the GDP or the manpower to sustain a war on two fronts, but the Nazis did not care, because they were irrational and driven by hateful ideology. Just months before, Nazi Germany had invaded the massive Soviet Union to create Lebensraum and feed an urge to reestablish German-Austrian dominance in Eastern Europe. Then the Nazis unilaterally declared war on the United States. The rationale for the declaration of war was ideology, a worldview that demanded expansion and conflict, even though Germany was strategically inferior and eventually lost the war.

The Chinese belief that they can be a global authoritarian hegemon is likely on the same journey. China today is driven by its own flavor of expansionist ideology that seeks conflict without being strategically able. It is worth noting that not a single major country is China’s ally. The Chinese supremacist propaganda works in peacetime, with massive rallies hailing Mao Zedong’s military genius, singing, dancing, and waving red banners, but will that grip hold if the PLA loses? In case of a failed military campaign, is the Chinese population, shaped by the one-child policy, ready for casualties, humiliation, and failure?
Will the authoritarian grip, with the social credit system, facial recognition, informers, digital surveillance, and an army whose peacetime function is primarily crowd control, survive a crushing defeat? If the regime loses its grip, the wrath of the masses could be unleashed after decades of repression.

A country the size of China, with a history of cleavages and civil wars, a suppressed diverse population, and socio-economic disparity, could be catapulted into Balkanization after a defeat. In the past, China has had long periods of internal fragmentation and weak central government.

The United States reacts differently to failure. As a country, the United States is far more resilient than the daily news might suggest. If the United States loses a war, the president gets the blame, but there will still be a presidential library in his or her name. There is no revolution.

There is an assumption lingering over today’s public debate that China has a strong hand: advanced artificial intelligence, the latest technology, a hyper-capable superpower. I am not convinced. During the last decade, the countries in the Indo-Pacific region that seek to hinder the Chinese expansion of control, influence, and dominance have increasingly formed stronger relationships. The strategic scale is in the democratic countries’ favor. If China, still driven by ideology, pursues conflict at a large scale, it is likely the end of the Communist dictatorship.

In my personal view, we should pay more attention to the humanitarian risks, the ripple effects, and the danger posed by nuclear weapons in a civil war, should the Chinese regime implode after a failed future war.

Jan Kallberg, Ph.D.

The evaporated OODA-loop

The accelerated execution of cyber attacks and an increased ability to identify vulnerabilities for exploitation at machine speed compress the time window cybersecurity management has to address unfolding events. In reality, we assume there will be time to lead, assess, and analyze, but that window might be closing rapidly. It is time to face the issue of accelerated cyber engagements.

If there is limited time to lead, how do you ensure that you can execute a defensive strategy? How do we launch countermeasures at a speed beyond human ability and comprehension? If you don’t have time to lead, the alternative is to preauthorize. In the early days of the Cold War, war planners and strategists who were used to having days to react to events faced ICBMs that forced decisions within minutes. The solution? Preauthorization. The analogy between how the nuclear threat was addressed and cybersecurity works to a degree, but we have to recognize that the number of possible scenarios in cybersecurity could be in the hundreds, and we need to prioritize.

The cybersecurity preauthorization process would require an understanding of likely scenarios and of the events that would unfold in those scenarios. The weaknesses of preauthorization are several. First, the scenarios we create are limited, because they are built on how we perceive our system environment. This is exemplified by the old saying: “What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.”

The creation of scenarios as a foundation for preauthorization will be laden with biases, with assumptions that areas are secure when they are not, and with an inability to see the attack vectors an attacker sees. So the major challenge in considering preauthorization is to create scenarios that are representative of potential outcomes.

One way is to look at attack strategies used in earlier incidents. This limits the scenarios to what has already happened to others, but it can be a base to which additional scenarios are added. The MITRE ATT&CK Navigator provides an excellent tool to simulate and create attack scenarios that can serve as a foundation for preauthorization. As we progress, artificial intelligence will become an integrated part of offloading decision-making, but we are not there yet. In the near future, artificial intelligence can cover parts of the managerial spectrum, increasing the human ability to act in very brief time windows.
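
To make the idea concrete, a preauthorization scheme can be sketched as a simple mapping from observed ATT&CK technique IDs to pre-approved defensive actions, with escalation to a human as the fallback for anything outside the planned scenarios. The sketch below is purely illustrative: the technique IDs are real MITRE ATT&CK identifiers, but the response names and the mapping itself are hypothetical.

```python
# Illustrative preauthorization table keyed on MITRE ATT&CK technique IDs.
# The IDs are real ATT&CK identifiers; the response names and the mapping
# itself are hypothetical, invented for this sketch.
PREAUTHORIZED_RESPONSES = {
    "T1110": "lock_account",        # Brute Force
    "T1486": "isolate_segment",     # Data Encrypted for Impact
    "T1046": "throttle_source_ip",  # Network Service Discovery
}

def respond(technique_id: str) -> str:
    """Return the pre-approved action for a recognized technique;
    anything outside the planned scenarios escalates to a human."""
    return PREAUTHORIZED_RESPONSES.get(technique_id, "escalate_to_human")
```

The value of such a table is not the lookup itself but the planning discipline it forces: every entry is a scenario that had to be thought through, prioritized, and authorized in advance.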

The second weakness is preauthorization’s vulnerability to probes and reverse engineering. Cybersecurity is active 24/7/365, with numerous engagements ongoing at any time. Over time, and using machine learning, automated attack mechanisms could learn to avoid triggering preauthorized responses, probing and reverse-engineering their way past the preauthorized controls.
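
The probing weakness can be illustrated with a toy model, with all numbers and names invented: if a preauthorized control fires on a fixed threshold, an automated attacker can binary-search for the highest intensity that stays just under it.

```python
# Toy model of the probing weakness (all numbers invented): a preauthorized
# control that fires on a fixed threshold can be mapped by an attacker who
# varies probe intensity and watches for the response.
TRIGGER_THRESHOLD = 100  # e.g., failed logins per minute that trigger lockout

def control_triggers(rate: int) -> bool:
    """The defender's preauthorized control, as observed by the attacker."""
    return rate >= TRIGGER_THRESHOLD

def find_safe_rate(low: int, high: int) -> int:
    """Binary-search the highest probe rate that stays under the trigger,
    the way an automated attacker could reverse-engineer the control."""
    while low < high:
        mid = (low + high + 1) // 2
        if control_triggers(mid):
            high = mid - 1  # triggered: stay below this rate
        else:
            low = mid       # quiet: the safe rate is at least this high
    return low

# find_safe_rate(0, 1000) converges on 99, one below the trigger.
```

A handful of such probes, spread out over weeks of ordinary background noise, would be hard to distinguish from routine scanning, which is exactly why static preauthorized controls erode over time.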

So there is no easy road forward but instead a tricky path that requires clear objectives, alignment with risk management and its risk appetite, and an acceptance that the final approach to the increased velocity of attacks might not be perfect. The alternative, not addressing the accelerated execution of attacks, is not a viable option. That would hand the initiative to the attacker and expose the organization to uncontrolled risks.

Repeatedly through the last two years, I have read references to the OODA loop and the utility of the OODA concept for cybersecurity. The OODA loop resurfaces in cybersecurity and information security management as a structured way to address unfolding events. The OODA (Observe, Orient, Decide, Act) loop, developed by John Boyd in the 1960s, follows the steps of observe, orient, decide, and act: you observe the events unfolding, you orient your assets at hand to address the events, you decide on a feasible approach, and you act.

The OODA loop has become a central concept in cybersecurity over the last decade, seen as a vehicle to address what attackers do, when and where, and what you should do and where it is most effective. The mantra has been, “you need to get inside the attacker’s OODA loop.” The OODA loop is used as a way to understand the adversary and tailor your own defensive actions.

Retired Army Colonel Tom Cook, former research director for the Army Cyber Institute at West Point, and I wrote a 2017 IEEE article titled “The Unfitness of Traditional Military Thinking in Cyber,” questioning the validity of the OODA loop in cyber when events unfold faster and faster. Today, in 2020, the validity of the OODA loop in cybersecurity is on the brink of evaporating due to the increased speed of attacks. The time needed to observe and assess, direct resources, make decisions, and take action will be too long to muster a successful cyber defense.

Attacks occurring at computational speed worsen the inability to assess and act, and the increasingly short time frames likely to be found in future cyber conflicts will disallow any significant, timely human deliberation.

Moving forward

I have no intention of being a narrative impossibilist who presents challenges with no solutions, so the current way forward is preauthorization. In the near future, the human ability to play an active role in rapid engagements will be supported by artificial intelligence decision-making that executes the tactical movements. The human mind remains in charge of operational decisions for several reasons: control, the larger picture, strategic implementation, and intent. For cybersecurity, it is pivotal over the next decade to be able to operate within a decreasing time window to act.

Jan Kallberg, Ph.D.

For ethical artificial intelligence, security is pivotal

The market for artificial intelligence is growing at a speed not seen since the introduction of the commercial Internet. The estimates vary, but the global AI market is assumed to grow 30 to 60 percent per year. Defense spending on AI projects is increasing at an even higher rate when we add wearable AI and systems that depend on AI. Defense investments such as augmented reality, automated target recognition, and tactical robotics would not advance at today’s rate without AI to support the realization of these concepts.

The beauty of the economy is responsiveness. With an identified “buy” signal, the market works to satisfy the need from the buyer. Powerful buy signals lead to rapid development, deployment, and roll-out of solutions, knowing that time to market matters.

My concern is based on earlier analogies when time to market prevailed over conflicting interests. Examples include the first years of the commercial internet, the introduction of remote control of supervisory control and data acquisition (SCADA) systems and manufacturing, and the rapid growth of smartphone apps. In each of these cases, security was not the first thing on the developer’s mind. Time to market was the priority. The exposure increases with the economically sound pursuit of commercial off-the-shelf (COTS) products, as sensors, chipsets, functions, electronic controls, and storage devices can be bought on the civilian market for a fraction of the cost. These COTS products cut costs, give the American people more defense and security for the money, and shorten the development and deployment cycle.

The Department of Defense has adopted five ethical principles for the department’s future utilization of AI. These principles are: responsible, equitable, traceable, reliable, and governable. The common denominator in all these five principles is cybersecurity. If the cybersecurity of the AI application is inadequate, these five adopted principles can be jeopardized and no longer steer the DOD AI implementation.

The future AI implementation increases the attack surface radically, and of special concern is the ability to detect manipulation of the processes, because the underlying AI processes are not clearly understood or monitored by the operators. A system that detects targets in images or in streaming video, where AI is used to identify target signatures, will generate decision support that can lead to the destruction of those targets. The targets are engaged and neutralized. One of the ethical principles for AI is “responsible.” How do we ensure that the targeting is accurate? How do we safeguard that the algorithm is not corrupt and that sensors are not being tampered with to produce spurious data? It becomes a matter of security.
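
As a minimal illustration of what such security could look like in practice, assuming the model weights ship with a digest distributed out-of-band on a signed manifest, an integrity check before inference can at least detect crude tampering with the algorithm itself. The file paths and digests in this sketch are placeholders, not any fielded system’s design.

```python
# Hedged sketch, not any fielded system's design: verify that a deployed
# model's weight file matches a digest distributed out-of-band (for example,
# on a signed manifest) before the model is used for decision support.
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file so large weight files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def model_is_trusted(path: str, signed_digest: str) -> bool:
    """Refuse the model if the weights differ from what was vetted.
    Detects crude tampering with stored weights, nothing more."""
    return sha256_of(path) == signed_digest
```

Such a check covers only the integrity of the stored algorithm; the sensor and data-feed side of the attack surface needs separate safeguards.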

In a larger conflict, where ground forces are not able to inspect the effects on the ground, the feedback loop that invalidates AI-supported decisions might not reach the operators for weeks. Or it might surface only after the conflict is over. A rogue system can likely produce spurious decision support for longer than we are willing to admit.

Of all five principles, “equitable” is the area of highest human control. Even if embedded biases in a process are hard to detect, controlling them is within our reach. “Reliable” relates directly to security because it requires that the systems maintain confidentiality, integrity, and availability.

If the principle “reliable” requires cybersecurity vetting and testing, we have to realize that these AI systems are part of complex technical structures with a broad attack surface. If the principle “reliable” is jeopardized, then “traceable” becomes problematic, because if the integrity of AI is questionable, it is not a given that “relevant personnel possess an appropriate understanding of the technology.”

The principle “responsible” can still appear valid, because deployed personnel make sound and ethical decisions based on the information provided, even as a compromised system feeds spurious information to the decision-maker. The principle “governable” acts as a safeguard against “unintended consequences.” The unknown is the time from when unintended consequences occur until the operators understand that the system is compromised.

It is evident when a target that should be hit is repeatedly missed; the effects can be observed. If the effects cannot be observed, it is no longer a given that “unintended consequences” are identified, especially in a fluid multi-domain battlespace. A compromised AI system for target acquisition can mislead targeting, acquiring hidden non-targets, wasting resources and weapon-system availability, and exposing friendly forces to detection. The time to detect such a compromise can be significant.

My intention is to show that cybersecurity is pivotal for AI success. I do not doubt that AI will play an increasing role in national security. AI is a top priority in the United States and among our friendly foreign partners, but potential adversaries will make finding ways to compromise these systems a top priority of their own.

The Zero Domain – Cyber Space Superiority through Acceleration beyond the Adversary’s Comprehension

In the upcoming Fall 2018 issue of the Cyber Defense Review, I present a concept: the Zero Domain. The Zero Domain is battlespace singularity through acceleration. There is a point along the trajectory of accelerated warfare where only one warfighting nation comprehends what is unfolding and sees the cyber terrain; it is an upper barrier for comprehension beyond which acceleration makes the cyber engagement unilateral.

I intentionally use the term accelerated warfare because it implies a driver and a command of the events unfolding, even if only by one of the two actors, whereas hyperwar suggests events unfolding without control or the ability to fully steer the engagement.

It is questionable, even unlikely, that cyber supremacy can be reached through overwhelming capabilities manifested by stacking more technical capacity and adding attack vectors. The alternative is to use time as the vehicle to supremacy: accelerating the velocity of the engagements beyond the speed at which the enemy can target, execute precisely, and comprehend the events unfolding. The space created beyond the adversary’s comprehension is titled the Zero Domain. The military traditionally sees the battle space as the land, sea, air, space, and cyber domains. When fighting the battle beyond the adversary’s comprehension, no traditional warfighting domain serves as the battle space; it is neither a vacuum nor an unclaimed terra nullius, but instead the Zero Domain. In the Zero Domain, cyberspace superiority surfaces as the outcome of accelerated time and a digitally separated singularity that benefits the more rapid actor. The Zero Domain has a time space accessible only to the rapid actor and a digital landscape inaccessible to the slower actor, due to the execution velocity of the enhanced accelerated warfare. Velocity achieves cyber anti-access/area denial (A2/AD), which can be reached without active initial interchanges by accelerating execution and cyber ability in a solitaire state. During this process, any adversarial probing engagements affect the actor only on the approach to the Comprehension Barrier; once the actor arrives in the Zero Domain, a complete state of A2/AD is present. From that point forward, the actor that reached the Zero Domain has cyberspace singularity: it is the only actor that can understand the digital landscape, it can engage unilaterally without an adversarial ability to counterattack or interfere, and it holds the ability to decide when, how, and where to attack.

In the Zero Domain, the accelerated singularity forges the battlefield gravity and thrust into a single power that denies adversarial cyber operations and acts as one force of destruction, extraction, corruption, and exploitation of targeted adversarial digital assets.

When breaking the Comprehension Barrier, the first of the adversary’s final points of comprehension to fall is human deliberation, directly followed by preauthorization and machine learning; once these final points of comprehension are passed, the rapid actor enters the Zero Domain.

Key to victory has long been the concept of getting inside the opponent’s OODA loop and thereby distorting, degrading, and derailing the opponent’s OODA. In accelerated warfare beyond the Comprehension Barrier, there is no need to be inside the opponent’s OODA loop, because the accelerated warfare concept removes the OODA loop for the opponent and thereby decapitates the opponent’s ability to coordinate, seek effect, and command. In the Zero Domain, the opposing force has no contact with its enemy, and its OODA loop has evaporated.

The Zero Domain is the warfighting domain where accelerated velocity in warfighting operations removes the enemy’s presence. It is the domain with zero opponents. It is not area denial, because the enemy is unable to accelerate to the level at which they could enter the battle space, and it is not access denial, because the enemy has not been part of the fight since the Comprehension Barrier was broken.

Even if adversarial nations invest heavily in quantum computing, machine learning, and artificial intelligence, I am not convinced that these authoritarian regimes can capitalize on their potential technological peer status with America. The Zero Domain concept carries an American advantage because we are less afraid of allowing degrees of freedom in operations, whereas totalitarian and authoritarian states are slowed down by their culture of fear and need for control. An actor that is slowed down lowers the threshold of the Comprehension Barrier, enabling the American force to reach the Zero Domain earlier in the future fight and establish information superiority as a confluence of cyber and information operations.

Jan Kallberg, PhD

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Artificial Intelligence (AI): The risk of over-reliance on quantifiable data

The rise of interest in artificial intelligence and machine learning has a flip side: it might not be so smart if we fail to design the methods correctly. Can we really compress reality into measurable numbers? Artificial intelligence relies on what can be measured and quantified, risking an over-reliance on measurable knowledge. The challenge, as with many other technical problems, is that it all ends with humans who design and assess according to their own perceived reality. The designer’s biases, perceived reality, Weltanschauung, and outlook all go into the design. The limitations are not on the machine side; the humans are far more limiting. Even if the machines learn from a point forward, it is still a human who stakes out the starting point and the initial landscape.
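
A toy example, with all numbers invented, shows how the design choice bakes the bias in: a scoring model built only over the variables that happened to be measurable cannot be corrected by anything left outside the data.

```python
# Toy illustration (all numbers invented): a course-of-action score built
# only over the variables that happened to be measurable. Factors that were
# never quantified -- morale, will to fight, alliance politics -- cannot
# influence the result, no matter how decisive they are in reality.
measured = {
    "option_A": {"troops": 0.9, "materiel": 0.8},  # strong on paper
    "option_B": {"troops": 0.6, "materiel": 0.5},
}

def score(features: dict) -> float:
    """Equal weights over whatever was easy to quantify."""
    return sum(features.values()) / len(features)

best = max(measured, key=lambda k: score(measured[k]))
# `best` reflects only the measured inputs; nothing outside `measured`
# can ever overturn it.
```

The model is not wrong about its inputs; it is wrong about the world, because the decisive variables never became inputs.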

Quantifiable data has historically served America well; it was part of the American boom after the Second World War, when America was one of the first countries to take a scientific look at how to improve, streamline, and increase production using fewer resources and less manpower.

The numbers have also misled. Vietnam-era Secretary of Defense Robert McNamara used numbers to tell how to win the Vietnam War; the numbers clearly indicated how to reach a decisive military victory. In a post-Vietnam book titled “The War Managers,” retired Army general Douglas Kinnard visualizes the almost bizarre world of seeking to fight the war through quantification and statistics. Kinnard, who later taught at the National Defense University, surveyed fellow generals who had served in Vietnam. These generals considered the concept of assessing progress in the war by body counts useless; only two percent of the surveyed generals saw any value in the practice. Why were the Americans counting bodies? Likely because bodies were quantifiable and measurable. It is a common error in research design to seek out the variables that produce accessible, quantifiable results, and McNamara was at the time almost obsessed with numbers and their predictive power. McNamara is not the only one who has relied overly on the numbers.

In 1939, the Nazi German foreign minister Ribbentrop, together with the German High Command, studied and measured the French and British ability to mobilize and to start a war with little advance warning. The German quantified assessment was that the Allies were unable to engage in a full-scale war on short notice, and the Germans believed the numbers were identical with political reality: politicians would understand their limits, and the Allies would not go to war over Poland. So Germany invaded Poland and started the Second World War. The quantifiable assessment was correct and led to Dunkirk, but the grander assessment was off: it underestimated the British and French will to take on the fight, which led to at least 50 million dead, half of Europe behind the Soviet Iron Curtain, and the destruction of the Nazis’ own regime. The British sentiment to fight the war to the end, the British ability to convince the U.S. to provide resources, and the events that unfolded thereafter were never captured in the data. The German assessment was a snapshot of British and French war preparations in the summer of 1939 – nothing else.

Artificial intelligence is as smart as the numbers we feed it. Ad notam.

The potential for failure hides in selecting, assessing, designing, and extracting the numbers that feed artificial intelligence. The risk of grave errors in decision-making, escalation, and avoidable human suffering and destruction is embedded in our future use of artificial intelligence if we do not pay attention to the data that feed the algorithms. Data collection and aggregation are the weakest link in the future of machine-supported decision-making.

Jan Kallberg is a Research Scientist at the Army Cyber Institute at West Point and an Assistant Professor the Department of Social Sciences (SOSH) at the United States Military Academy. The views expressed herein are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.