Category Archives: Artificial Intelligence

THE WEAPONIZED MIND

As an industrial nation transitioning to an information society and digital conflict, we tend to see technology, and the information that feeds it, as the weapons – and we ignore the few humans with a large-scale operational impact. I believe we underestimate the importance of applicable intelligence – the intelligence of applying things in the correct order. The ability to apply is a far more important asset than the technology itself. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are mostly publicly available; anyone can download them from the Internet and use them. The weaponization of the tools occurs when someone who understands how to play them in an optimal order puts them to use.
General Nakasone stated in 2017: "our best ones (coders) are 50 or 100 times better than their peers," and continued, "Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers."

In reality, the success of cyber and cyber operations depends not on the tools or toolsets but on the super-empowered individual General Nakasone calls "the 50-x coder."

In my experience in cybersecurity, as it migrates into the broader cyber field, there have always been exceptional individuals with an ability that cannot be replicated: to see the challenge early on, create a technical solution, and know how to play it in the right order for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of artificial intelligence increases the reliance on these highly able individuals, because someone must set the rules, the boundaries, and the trajectory for artificial intelligence at the outset. This raises a series of questions. Even if identified as a weapon, how do you make a human mind "classified"?

How do we protect these high-ability individuals who, in the digital world, are weapons, not as tools but as compilers of capability?

These minds are different because they see an opportunity to exploit in the digital fog of war when others do not. They address problems unburdened by traditional thinking, in new and innovative ways, maximizing the dual-purpose nature of digital tools, and they can generate decisive cyber effects.
It is applicable intelligence (AI) that creates the process, applies the tools, and turns simple digital software, in sets or combinations, into digitally lethal weapons. It is the intelligence to mix, match, tweak, and arrange dual-purpose software. To use an analogy from the analog world: it is as if you had individuals with the supernatural ability to create a hypersonic missile from what you can find at Kroger or Albertsons. For a nation, these individuals are strategic national security assets.
These intellects are weapons of growing strategic magnitude as the combat environment gains complexity, velocity, a growing target surface, and greater uncertainty.
For the last decades, our efforts have instead focused on what these individuals deliver, the application and the technology, which was hidden in secret vaults and only discussed in sensitive compartmented information facilities. We classify these individuals' output at the highest level to ensure the confidentiality and integrity of our cyber capabilities. Meanwhile, we assign no value to the most critical component, the militarized intellect, because it is human. In a society marinated in an engineering mindset, humans are like desk space, electricity, and broadband: a commodity that is an input to the production of technical machinery. The marveled-at technical machinery is the only thing we care about today, in 2019, and we do not protect our elite militarized brains enough.
At a systemic level, we are unable to see humans as the weapon itself, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed. Arms are made of steel, or fancier metals, with electronics – we fail to see weapons made of sweet 'tater, corn, steak, and an added combative intellect.

The WWII Manhattan Project had at its peak 125,000 workers on the payroll, but the intellects that drove the project to success and completion were few. The difference between the Manhattan Project and the future of cyber is that Oppenheimer and his team had to rely on a massive industrial effort to provide them with the input material to create a weapon. In cyber, the intellect is the weapon, and the tools are delivery platforms. The tools, the delivery platforms, are free, downloadable, and easily accessed. It is the power of the mind that is unique.

We need to see the human as a weapon, avoiding being locked in by our path dependency as an engineering society, where we hail the technology and forget the importance of the humans behind it. America's endless love of technical innovations and advanced machinery is reflected in a nation that has embraced mechanical wonders and engineered solutions since its creation.

For America, technological wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies, followed by the Erie Canal, the manufacturing era, the moon landing, and all the way to today's autonomous systems, drones, and robots. In this default mindset, a tool, an automated process, a piece of software, or a set of technical steps can solve a problem or act. The same mindset sees humans merely as an input to technology, so humans are interchangeable and can be replaced.

The super-empowered individuals are not interchangeable and cannot be replaced, unless we want to be stuck in a digital war at speeds we do not understand, unable to play it in the right order, and with too little intellectual torque to see through the fog of war created by an exploding kaleidoscope of nodes and digital engagements. Artificial intelligence and machine learning support the intellectual endeavor to cyber defend America, but in the end we find humans who set the strategy and direction. It is time to see weaponized minds for what they are; they are not dudes and dudettes but strike capabilities.

Jan Kallberg, Ph.D.

Bottom line: Commanders who can’t delegate will not survive on the modern battlefield

From our article in C4ISRNET (Defense News):
“Command by intent can ensure command post survivability”

Link to full text

“In a changing operational environment, where command posts are increasingly vulnerable, intent can serve as a stealth enabler.

A communicated commander’s intent can serve as a way to limit electronic signatures and radio traffic, seeking to obfuscate the existence of a command post. In a mission command-driven environment, communication between command post and units can be reduced. The limited radio and network traffic increases command post survivability.

The intent must explain how the commander seeks to fight the upcoming 12 – 24 hours, with limited interaction between subordinated units and the commander, providing freedom for the units to fulfill their missions. For a commander to deliver intent in a valuable and effective manner, the delivery has to be trained so the leader and the subordinates have a clear picture of what they set out to do.


Continue reading Bottom line: Commanders who can’t delegate will not survive on the modern battlefield

The West Has Forgotten How to Keep Secrets

My CEPA article about the intelligence vulnerability that open access, open government, and open data can create if left unaddressed and out of sync with national security – The West Has Forgotten How to Keep Secrets.
From the text:
“But OSINT, like all other intelligence, cuts both ways — we look at the Russians, and the Russians look at us. But their interest is almost certainly in freely available material that’s far from televisual — the information a Russian war planner can now use from European Union (EU) states goes far, far beyond what Europe’s well-motivated but slightly innocent data-producing agencies likely realize.

Seen alone, the data from environmental and building permits, road maintenance, forestry data on terrain obstacles, and agricultural data on ground water saturation are innocent. But when combined as aggregated intelligence, it is powerful and can be deeply damaging to Western countries.

Democracy dies in the dark, and transparency supports democratic governance. The EU and its member states have legally binding comprehensive initiatives to release data and information from all levels of government in pursuit of democratic accountability. This increasing European release of data — and the subsequent addition to piles of open-source intelligence — is becoming a real concern.

I firmly believe we underestimate the significance of the available information — which our enemies recognize — and that a potential adversary can easily acquire.”



Artificial Intelligence (AI): The risk of over-reliance on quantifiable data

The rise of interest in artificial intelligence and machine learning has a flip side. It might not be so smart if we fail to design the methods correctly. A question out there: can we compress reality into measurable numbers? Artificial intelligence relies on what can be measured and quantified, risking an over-reliance on measurable knowledge.

The problem, as with many other technical problems, is that it all ends with humans who design and assess according to their own perceived reality. The designers' bias, perceived reality, weltanschauung, and outlook: everything goes into the design. The limitations are not on the machine side; the humans are far more limiting. Even if the machines learn from a point forward, it is still a human who stakes out the starting point and the initial landscape.

Quantifiable data has historically served America well; it was a part of the American boom after World War II, when America was one of the first countries to take a scientific look at how to improve, streamline, and increase production using fewer resources and less manpower.

Numbers have also misled. Vietnam-era Secretary of Defense Robert McNamara used the numbers to tell how to win the Vietnam War; the numbers clearly indicated how to reach a decisive military victory — according to the numbers.

In a post-Vietnam book titled "The War Managers," retired Army general Douglas Kinnard captured the almost bizarre world of seeking to fight the war through quantification and statistics. Kinnard, who later taught at the National Defense University, surveyed fellow generals who had served in Vietnam about the actual support for these methods. These generals considered the concept of assessing progress in the war by body counts useless; only two percent of the surveyed generals saw any value in the practice.

Why were the Americans counting bodies? It is likely because it was quantifiable and measurable. It is a common error in research design to seek the variables that produce easily accessible quantifiable results, and McNamara was at that time almost obsessed with numbers and the predictive power of numbers. McNamara was not the only one.

In 1939, the Nazi German foreign minister Ribbentrop, together with the German High Command, studied and measured the French and British war preparations and ability to mobilize. The German quantified assessment was that the Allies were unable to engage in a full-scale war on short notice, and the Germans believed that the numbers were identical to the factual reality: the Allies would not go to war over Poland because they were neither ready nor able. So Germany invaded Poland on September 1, 1939, and started World War II.

The quantifiable assessment was correct and led to Dunkirk, but the grander assessment was off and underestimated the British and French will to take on the fight, which led to at least 50 million dead, half of Europe behind the Soviet Iron Curtain, and the destruction of the Nazis' own regime. Britain's willingness to fight to the end, its ability to convince the U.S. to provide resources, and the subsequent events were never captured in the data. The German quantified assessment was a snapshot of the British and French war preparations in the summer of 1939 — nothing else.

Artificial intelligence depends upon the numbers we feed it. The potential failure is hidden in selecting, assessing, designing, and extracting the numbers that feed artificial intelligence. The risk of grave errors in decision-making, escalation, and avoidable human suffering and destruction is embedded in our future use of artificial intelligence if we do not pay attention to the data that feed the algorithms. Data collection and aggregation are the weakest link in the future of machine-supported decision-making.

Jan Kallberg, Ph.D.

European Open Data can be Weaponized

In the discussion of great power competition and cyberattacks meant to slow down a U.S. strategic movement of forces to Eastern Europe, the focus has been on the route from the fort to port in the U.S. But we tend to forget that once forces arrive at the major Western European ports of disembarkation, the distance from these ports to eastern Poland is the same as from New York to Chicago.

The increasing European release of public data — and the subsequent addition to the pile of open-source intelligence — is becoming concerning in regard to the sheer mass of aggregated information and what information products may surface when combining these sources. The European Union and its member states have comprehensive initiatives to release data and information from all levels of government in pursuit of democratic accountability and transparency. It becomes a wicked problem because these releases are good for democracy but can jeopardize national security.

I firmly believe we underestimate the significance of the available information that a potential adversary can easily acquire. If data is not available freely, it can, with no questions asked, be obtained at a low cost.

Let me present a fictitious case study to visualize the problem with the width of public data released:

In the High North, where the terrain often is either rocks or marshes, with few available routes for maneuver units, available data today will provide information about ground conditions; type of forest; density; and on-the-ground, verified terrain obstacles — all easily accessible geodata and forestry agency data. The granularity of the information is down to a few meters.

The data is innocent by itself, intended to limit environmental damage from heavy forestry equipment and to keep the forestry companies' armies of tracked harvesters from getting stuck in unfavorable ground conditions. The concern is that the forestry data also provides a verified route map for any advancing armored column in a fait accompli attack seeking to avoid contact with the defender's limited rapid-response units in pursuit of a deep strike.

Suppose the advancing adversary paves the way with special forces. In that case, a local government's permitting and planning data, as well as open data from transportation authorities, will identify what to blow up, what to defend, and where it is ideal to ambush any defending reinforcements or logistics columns. Once the advancing armored column meets up with the special forces, unclassified and openly accessible health department inspections show where frozen food is stored; building permits show which buildings have generators; and environmental protection data points out where civilian fuel is stored, including its grade and volume.

Now the advancing column can get ready for the next leg of the deep strike. Open data initiatives, "innocent" data releases, and the broad commercialization of public information have nullified the rapid-response force's ability to slow down or defend against the fait accompli attack, and these data releases have increased the velocity of the attack as well as the chance of the adversary's mission success.

The governmental open-source intelligence problem is wicked. Any solution is problematic. An open democracy is a society that embraces accountability and transparency, and they are the foundations for the legitimacy, trust and consent of the governed. Restricting access to machine-readable and digitalized public information contradicts European Union Directive 2003/98/EC, which covers the reuse of public sector information — a well-established foundational part of European law based on Article 95 in the Maastricht Treaty.

The sheer volume of the released information, in multiple languages and from a variety of sources in separate jurisdictions, increases the difficulty of foreseeing any hostile utilization of the released data, which increases the wickedness of the problem. Those jurisdictions’ politics also come into play, which does not make it easier to trace a viable route to ensure a balance between a security interest and a democratic core value.

The initial action to address this issue and embedded weakness needs to involve both NATO and the European Union, as well as their member states, due to the complexity of multinational defense, the national implementation of EU legislation, and the ability to adjust EU legislation. NATO and the EU have a common interest in mitigating the risks of massive public data releases to an acceptable level that still meets the EU's goal of transparency.

Jan Kallberg, Ph.D.

If Communist China loses a future war, entropy could be imminent

What happens if China engages in a great power conflict and loses? Will the Chinese Communist Party’s control over the society survive a horrifying defeat?
The People's Liberation Army (PLA) last fought a massive-scale war during the invasion of Vietnam in 1979, a failed operation to punish Vietnam for toppling the Khmer Rouge regime in Cambodia. Since 1979, the PLA has shelled Vietnam on different occasions and been involved in other border skirmishes, but it has not fought a full-scale war. In the last decades, China increased its defense spending and modernized its military, fielding advanced air defenses, cruise missiles, and other advanced hardware, and building a blue-water navy from scratch; there is significant uncertainty about how the Chinese military will perform.

Modern warfare is integration, joint operations, command, control, intelligence, and the ability to understand and execute the ongoing, all-domain fight. War is complex machinery with low margins of error and can have devastating outcomes for the unprepared. Whether you are for or against the U.S. military operations of the last three decades, the fact is that prolonged conflict and engagement have made the U.S. experienced. The Chinese inexperience, in combination with unrealistic expansionist ambitions, can be the downfall of the regime. Dry-land swimmers may train the basics, but they never become great swimmers.

Although it may look like a creative strategy for China to harvest trade secrets and intellectual property and to put developing countries in debt to gain influence, I would question how rational the Chinese apparatus is. The repeated visualization of the Han nationalistic cult appears to be a strength, with the youth rallying behind the Xi Jinping regime, but it is also a significant weakness. The weakness is blatantly visible in the Chinese need for surveillance and population control to maintain stability: surveillance and repression so encompassing in the daily life of the Chinese population that the East German security services appear to have been amateurs. All chauvinist cults implode over time because the unrealistic assumptions add up, and so does the sum of all delusional ideological decisions. Winston Churchill knew, after Nazi Germany declared war on the United States in December 1941, that the Allies would prevail and win the war. Nazi Germany did not have the GDP or manpower to sustain a war on two fronts, but the Nazis did not care because they were irrational and driven by hateful ideology. Just months before, Nazi Germany had invaded the massive Soviet Union to create Lebensraum and feed an urge to reestablish German-Austrian dominance in Eastern Europe. Then the Nazis unilaterally declared war on the United States. The rationale for the declaration of war was ideology, a worldview that demanded expansion and conflict, even though Germany was strategically inferior and eventually lost the war.

The Chinese belief that China can become a global authoritarian hegemon is likely on the same journey. China today is driven by its flavor of expansionist ideology that seeks conflict, without being strategically able. It is worth noting that not a single major country is its ally. The Chinese supremacist propaganda works in peacetime, with massive rallies hailing Mao Zedong's military genius, singing, dancing, and waving red banners, but will that grip hold if the PLA loses? In case of a failed military campaign, is the Chinese population, with the one-child policy, ready for casualties, humiliation, and failure?
Will the authoritarian grip, with its social credit system, facial recognition, informers, digital surveillance, and an army whose peacetime function is primarily crowd control, survive a crushing defeat? If the regime loses its grip, the wrath of the masses, built up over decades of repression, will be unleashed.

A country the size of China, with a history of cleavages and civil wars, a suppressed and diverse population, and socio-economic disparity, can be catapulted into Balkanization after a defeat. In the past, China has had long periods of internal fragmentation and weak central government.

The United States reacts differently to failure. The United States is, as a country, far more resilient than we might assume watching the daily news. If the United States loses a war, the President gets the blame, but there will still be a presidential library in his or her name. There is no revolution.

There is an assumption lingering over today's public debate that China has a strong hand, advanced artificial intelligence, the latest technology, and is an uber-able superpower. I am not convinced. During the last decade, the countries in the Indo-Pacific region that seek to hinder the Chinese expansion of control, influence, and dominance have increasingly formed stronger relationships. The strategic scale is in the democratic countries' favor. If China, still driven by ideology, pursues conflict at a large scale, it is likely the end of the Communist dictatorship.

In my personal view, we should pay more attention to the humanitarian risks, the ripple effects, and the dangers of nukes in a civil war, in case the Chinese regime implodes after a failed future war.

Jan Kallberg, Ph.D.

The evaporated OODA-loop

The accelerated execution of cyber attacks and an increased ability to identify vulnerabilities for exploitation at machine speed compress the time window cybersecurity management has to address unfolding events. We assume there will be time to lead, assess, and analyze, but that window might be closing rapidly. It is time to face the issue of accelerated cyber engagements.

If there is limited time to lead, how do you ensure that you can execute a defensive strategy? How do we launch countermeasures at a speed beyond human ability and comprehension? If you don't have time to lead, the alternative is to preauthorize. In the early days of the Cold War, war planners and strategists who were used to having days to react to events faced ICBMs that forced decisions within minutes. The solution? Preauthorization. The analogy between how the nuclear threat was addressed and cybersecurity works to a degree, but we have to recognize that the number of possible scenarios in cybersecurity could run into the hundreds, and we need to prioritize.

The cybersecurity preauthorization process requires an understanding of likely scenarios and of the events that would follow them. The weaknesses of preauthorization are several. First, the scenarios we create are limited because they are built on how we perceive our system environment. This is exemplified by the old saying: "What gets us into trouble is not what we don't know. It's what we know for sure that just ain't so."

The creation of scenarios as a foundation for preauthorization will be laden with biases, assumptions that some areas are secure when they are not, and an inability to see the attack vectors that an attacker sees. So the major challenge, when considering preauthorization, becomes creating scenarios that are representative of potential outcomes.

One way is to look at the different attack strategies used earlier. This limits the scenarios to what has already happened to others, but it can serve as a base to which additional scenarios are added. The MITRE ATT&CK Navigator provides an excellent tool to simulate and create attack scenarios that can be a foundation for preauthorization. As we progress, artificial intelligence will become an integrated part of offloading decision-making, but we are not there yet. In the near future, artificial intelligence can cover parts of the managerial spectrum, increasing the human ability to act in very brief time windows.
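To make the preauthorization idea concrete, here is a minimal sketch in Python: a lookup table keyed by real MITRE ATT&CK technique IDs that returns a preauthorized action and escalates to a human analyst when no scenario matches. The response names, the function, and the dispatch logic are hypothetical illustrations for this discussion, not a reference to any fielded capability, product, or doctrine.

```python
# A minimal sketch of a preauthorized-response table keyed by MITRE ATT&CK
# technique IDs. The technique IDs are real ATT&CK identifiers, but the
# response actions and dispatch logic are hypothetical illustrations.

PREAUTHORIZED_RESPONSES = {
    "T1566": "quarantine_mailbox_and_alert",   # Phishing
    "T1110": "lock_account_and_require_mfa",   # Brute Force
    "T1486": "isolate_host_and_snapshot",      # Data Encrypted for Impact
}

def respond(technique_id: str, asset: str) -> str:
    """Return the preauthorized action for an observed technique,
    or escalate to a human decision-maker when no scenario matches."""
    action = PREAUTHORIZED_RESPONSES.get(technique_id)
    if action is None:
        return f"escalate_to_analyst({asset})"   # outside the preauthorized scenarios
    return f"{action}({asset})"

if __name__ == "__main__":
    print(respond("T1566", "mail-server-02"))   # preauthorized, acts at machine speed
    print(respond("T1003", "dc-01"))            # unknown technique, handed to a human
```

The design point is the fallback: everything not covered by an explicitly vetted scenario stays with the human, which is exactly where the bias and coverage weaknesses discussed above bite.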

The second weakness is preauthorization's vulnerability to probing and reverse engineering. Cybersecurity is active 24/7/365, with numerous engagements on an ongoing basis. Over time, and using machine learning, automated attack mechanisms could learn how to avoid triggering preauthorized responses by probing and reverse engineering their way past the preauthorized controls.

So there is no easy road forward, but instead a tricky path that requires clear objectives, alignment with risk management and its risk appetite, and an acceptance that the final approach to address the increased velocity of the attacks might not be perfect. The alternative – not addressing the accelerated execution of attacks – is not a viable option. That would hand over the initiative to the attacker and expose the organization to uncontrolled risks.

Repeatedly through the last two years, I have read references to the OODA-loop and the utility of the OODA concept for cybersecurity. The OODA-loop resurfaces in cybersecurity and information security managerial approaches as a structured way to address unfolding events. The OODA (Observe, Orient, Decide, Act) loop, developed by John Boyd in the 1960s, follows the steps of observe, orient, decide, and act. You observe the events unfolding, you orient your assets at hand to address the events, you make up your mind about what is a feasible approach, and you act.

The OODA-loop has become a central concept in cybersecurity over the last decade, as it is seen as a vehicle to address what attackers do, when and where they do it, and what you should do in response and where it is most effective. The phrase has been "you need to get inside the attacker's OODA-loop." The OODA-loop is used as a way to understand the adversary and tailor your own defensive actions.

Retired Army Colonel Tom Cook, former research director for the Army Cyber Institute at West Point, and I wrote a 2017 IEEE article titled "The Unfitness of Traditional Military Thinking in Cyber," questioning the validity of using the OODA-loop in cyber when events unfold faster and faster. Today, in 2020, the validity of the OODA-loop in cybersecurity is on the brink of evaporating due to the increased speed of attacks. The time needed to observe and assess, direct resources, make decisions, and take action will be too long to muster a successful cyber defense.

Attacks occurring at computational speed worsen the inability to assess and act, and the increasingly shortened time frames likely to be found in future cyber conflicts will disallow any significant, timely human deliberation.

Moving forward

I have no intention of being a narrative impossibilist who presents challenges with no solutions, so the current way forward is preauthorization. In the near future, the human ability to play an active role in rapid engagements will be supported by artificial intelligence decision-making that executes the tactical movements. The human mind is still in charge of the operational decisions for several reasons – control, the larger picture, strategic implementation, and intent. For cybersecurity, it is pivotal for the next decade to be able to operate with a decreasing time window to act.

Jan Kallberg, Ph.D.

For ethical artificial intelligence, security is pivotal


The market for artificial intelligence is growing at an unprecedented speed, not seen since the introduction of the commercial Internet. The estimates vary, but the global AI market is assumed to grow 30 to 60 percent per year. Defense spending on AI projects is increasing at an even higher rate when we add wearable AI and systems that depend on AI. Defense investments such as augmented reality, automated target recognition, and tactical robotics would not advance at today's rate without AI to support the realization of these concepts.

The beauty of the economy is responsiveness. With an identified “buy” signal, the market works to satisfy the need from the buyer. Powerful buy signals lead to rapid development, deployment, and roll-out of solutions, knowing that time to market matters.

My concern is based on earlier analogies where time to market prevailed over conflicting interests. Examples include the first years of the commercial Internet, the introduction of remote control of supervisory control and data acquisition (SCADA) and manufacturing, and the rapid growth of smartphone apps. In each of these cases, security was not the first thing on the developers' minds. Time to market was the priority. This exposure increases with the economically sound pursuit of commercial off-the-shelf (COTS) products, as sensors, chipsets, functions, electronic controls, and storage devices can be bought on the civilian market for a fraction of the cost. These COTS products cut costs, give the American people more defense and security for the money, and shorten the development and deployment cycle.

The Department of Defense has adopted five ethical principles for the department’s future utilization of AI. These principles are: responsible, equitable, traceable, reliable, and governable. The common denominator in all these five principles is cybersecurity. If the cybersecurity of the AI application is inadequate, these five adopted principles can be jeopardized and no longer steer the DOD AI implementation.

The future AI implementation increases the attack surface radically, and of particular concern is the ability to detect manipulation of the processes, because the underlying AI processes are not clearly understood or monitored by the operators. A system that detects targets in images or in a streaming video capture, where AI is used to identify target signatures, will generate decision support that can lead to the destruction of those targets. The targets are engaged and neutralized. One of the ethical principles for AI is "responsible." How do we ensure that the targeting is accurate? How do we safeguard against the algorithm being corrupted or the sensors being tampered with to produce spurious data? It becomes a matter of security.
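One basic control that follows from this question, sketched here as an illustration only and not as a description of any fielded system, is to verify the integrity of the model artifact before it is allowed to produce decision support. The Python sketch below compares the SHA-256 digest of a model file against the digest recorded when the model was vetted; the file name and expected digest are hypothetical placeholders.

```python
# A minimal sketch of one control implied by the "reliable" and "responsible"
# principles: verify that a deployed model artifact has not been altered before
# it is allowed to produce decision support. File name and digest are placeholders.

import hashlib
from pathlib import Path

# Digest recorded when the model was vetted and deployed (placeholder value).
EXPECTED_SHA256 = "0" * 64

def model_is_untampered(model_path: str, expected_sha256: str = EXPECTED_SHA256) -> bool:
    """Compare the SHA-256 digest of the model file with the vetted digest."""
    digest = hashlib.sha256(Path(model_path).read_bytes()).hexdigest()
    return digest == expected_sha256

if __name__ == "__main__":
    model_file = "target_recognition_model.bin"   # hypothetical deployed model
    if not Path(model_file).exists() or not model_is_untampered(model_file):
        # Fail closed: refuse to generate targeting decision support.
        print("Model integrity check failed - operator review required")
```

A hash check obviously does not address tampered sensors or poisoned training data, but it illustrates how each ethical principle ultimately rests on a chain of concrete security controls.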

In a larger conflict, where ground forces are not able to inspect the effects on the ground, the feedback loop that invalidates the decisions supported by AI might not reach the operators for weeks. Or it might surface only after the conflict is over. A rogue system can likely produce spurious decision support for longer than we are willing to admit.

Of all the five principles, "equitable" is the area of highest human control. Even if embedded biases in a process are hard to detect, controlling them is within our reach. "Reliable" relates directly to security because it requires that the systems maintain confidentiality, integrity, and availability.

If the principle “reliable” requires cybersecurity vetting and testing, we have to realize that these AI systems are part of complex technical structures with a broad attack surface. If the principle “reliable” is jeopardized, then “traceable” becomes problematic, because if the integrity of AI is questionable, it is not a given that “relevant personnel possess an appropriate understanding of the technology.”

The principle "responsible" can still be valid, because deployed personnel make sound and ethical decisions based on the information provided, even if a compromised system feeds spurious information to the decision-maker. The principle "governable" acts as a safeguard against "unintended consequences." The unknown is the time from when unintended consequences occur until the operators of the compromised system understand that the system is compromised.

It is evident when a target that should be hit is repeatedly missed. The effects can be observed. If the effects cannot be observed, it is no longer a given that "unintended consequences" are identified, especially in a fluid multi-domain battlespace. A compromised AI system for target acquisition can mislead targeting, acquiring hidden non-targets that waste resources and weapon system availability and expose the friendly forces to detection. The time to detect such a compromise can be significant.

My intention is to show that cybersecurity is pivotal for AI success. I do not doubt that AI will play an increasing role in national security. AI is a top priority for the United States and our friendly foreign partners, but potential adversaries will make finding ways to compromise these systems a top priority of their own.

Time – and the lack thereof

For cybersecurity, it is pivotal for the next decade to be able to operate with a decreasing time window to act.

The accelerated execution of cyber attacks and an increased ability to identify vulnerabilities for exploitation at machine speed compress the time window cybersecurity management has to address unfolding events. We assume there will be time to lead, assess, and analyze, but that window might be closing. It is time to raise the issue of accelerated cyber engagements.

Limited time to lead

If there is limited time to lead, how do you ensure that you can execute a defensive strategy? How do we launch countermeasures at a speed beyond human ability and comprehension? If you don't have time to lead, the alternative is to preauthorize.

In the early days of the Cold War, war planners and strategists who were used to having days to react to events faced ICBMs that forced decisions within minutes. The solution? Preauthorization. The analogy between how the nuclear threat was addressed and cybersecurity works to a degree, but we have to recognize that the number of possible scenarios in cybersecurity could run into the hundreds, and we need to prioritize.

The cybersecurity preauthorization process requires an understanding of likely scenarios and of the events that would follow them. The weaknesses of preauthorization are several. First, the scenarios we create are limited because they are built on how we perceive our system environment. This is exemplified by the old saying: "What gets us into trouble is not what we don't know. It's what we know for sure that just ain't so."

The creation of scenarios as a foundation for preauthorization will be laden with biases, assumptions that some areas are secure when they are not, and an inability to see the attack vectors that an attacker sees. So the major challenge, when considering preauthorization, becomes creating scenarios that are representative of potential outcomes.

One way is to look at the different attack strategies used earlier. This limits the scenarios to what has already happened to others, but it can serve as a base to which additional scenarios are added. The MITRE ATT&CK Navigator provides an excellent tool to simulate and create attack scenarios that can be a foundation for preauthorization. As we progress, artificial intelligence will become an integrated part of offloading decision-making, but we are not there yet. In the near future, artificial intelligence can cover parts of the managerial spectrum, increasing the human ability to act in very brief time windows.
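As a small, hedged sketch of how past attack strategies could seed this scenario work, the Python example below reads an ATT&CK Navigator layer export, assuming a JSON file whose techniques entries carry a techniqueID field, and pairs each marked technique with a placeholder scenario name for human review. The file name and the scenario naming are illustrative assumptions, not a prescribed workflow.

```python
# A minimal sketch: seed preauthorization scenario stubs from an ATT&CK Navigator
# layer export. Assumes the layer is JSON with a "techniques" list whose entries
# carry a "techniqueID" field; file name and scenario naming are hypothetical.

import json
from pathlib import Path

def techniques_from_layer(layer_path: str) -> list[str]:
    """Extract the technique IDs marked in a Navigator layer file."""
    layer = json.loads(Path(layer_path).read_text(encoding="utf-8"))
    return [t["techniqueID"] for t in layer.get("techniques", []) if "techniqueID" in t]

def build_scenario_stubs(technique_ids: list[str]) -> dict[str, str]:
    """Pair each observed technique with a placeholder scenario name
    to be fleshed out and preauthorized by human decision-makers."""
    return {tid: f"scenario_for_{tid}" for tid in technique_ids}

if __name__ == "__main__":
    layer_file = "past_incidents_layer.json"  # hypothetical export from the Navigator
    if Path(layer_file).exists():
        print(build_scenario_stubs(techniques_from_layer(layer_file)))
```

The point of the sketch is the workflow, not the code: observed techniques become candidate scenarios, and a human still decides which of them deserve a preauthorized response.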

The second weakness is preauthorization's vulnerability to probing and reverse engineering. Cybersecurity is active 24/7/365, with numerous engagements on an ongoing basis. Over time, and using machine learning, automated attack mechanisms could learn how to avoid triggering preauthorized responses by probing and reverse engineering their way past the preauthorized controls.

So there is no easy road forward, but instead a tricky path that requires clear objectives, alignment with risk management and its risk appetite, and an acceptance that the final approach to address the increased velocity of the attacks might not be perfect. The alternative – not addressing the accelerated execution of attacks – is not a viable option. That would hand over the initiative to the attacker and expose the organization to uncontrolled risks.

Bye-bye, OODA-loop

Repeatedly through the last year, I have read references to the OODA-loop and the utility of the OODA concept for cybersecurity. The OODA-loop resurfaces in cybersecurity and information security managerial approaches as a structured way to address unfolding events. The OODA (Observe, Orient, Decide, Act) loop, developed by John Boyd in the 1960s, follows the steps of observe, orient, decide, and act. You observe the events unfolding, you orient your assets at hand to address the events, you make up your mind about what is a feasible approach, and you act.

The OODA-loop has become a central concept in cybersecurity over the last decade, as it is seen as a vehicle to address what attackers do, when and where they do it, and what you should do in response and where it is most effective. The phrase has been "you need to get inside the attacker's OODA-loop." The OODA-loop is used as a way to understand the adversary and tailor your own defensive actions.

Retired Army Colonel Tom Cook, former research director for the Army Cyber Institute at West Point, and I wrote a 2017 IEEE article titled "The Unfitness of Traditional Military Thinking in Cyber," questioning the validity of using the OODA-loop in cyber when events unfold faster and faster. Today, in 2019, the validity of the OODA-loop in cybersecurity is on the brink of evaporating due to the increased speed of attacks. The time needed to observe and assess, direct resources, make decisions, and take action will be too long to muster a successful cyber defense.

Attacks occurring at computational speed worsen the inability to assess and act, and the increasingly shortened time frames likely to be found in future cyber conflicts will disallow any significant, timely human deliberation.

Moving forward

I have no intention of being a narrative impossibilist who presents challenges with no solutions, so the current way forward is preauthorization. In the near future, the human ability to play an active role in rapid engagements will be supported by artificial intelligence decision-making that executes the tactical movements. The human mind is still in charge of the operational decisions for several reasons – control, the larger picture, strategic implementation, and intent. For cybersecurity, it is pivotal for the next decade to be able to operate with a decreasing time window to act.

Jan Kallberg, PhD

The Zero Domain – Cyber Space Superiority through Acceleration beyond the Adversary’s Comprehension

THE ZERO DOMAIN

In the upcoming Fall 2018 issue of the Cyber Defense Review, I present a concept – the Zero Domain. The Zero Domain concept is battlespace singularity through acceleration. There is a point along the trajectory of accelerated warfare where only one warfighting nation comprehends what is unfolding and sees the cyber terrain; it is an upper barrier of comprehension where the acceleration makes the cyber engagement unilateral.

I intentionally use the term accelerated warfare because it implies a driver and command of the events unfolding, even if only by one actor of the two, whereas hyperwar suggests events unfolding without control or the ability to fully steer the engagement.

It is questionable and even unlikely that cyber supremacy can be reached by overwhelming capabilities manifested by stacking more technical capacity and adding attack vectors. The alternative is to use time as the vehicle to supremacy by accelerating the velocity of the engagements beyond the speed at which the enemy can target, precisely execute, and comprehend the events unfolding. The space created beyond the adversary's comprehension is titled the Zero Domain.

The military traditionally sees the battlespace as the land, sea, air, space, and cyber domains. When fighting the battle beyond the adversary's comprehension, no traditional warfighting domain serves as the battlespace; it is neither a vacuum nor an unclaimed terra nullius, but instead the Zero Domain. In the Zero Domain, cyberspace superiority surfaces as the outcome of the accelerated time and a digitally separated singularity that benefits the more rapid actor. The Zero Domain has a time space that is accessible only to the rapid actor and a digital landscape that is not accessible to the slower actor, due to the execution velocity of the enhanced accelerated warfare.

Velocity achieves cyber anti-access/area denial (A2/AD), which can be reached without active initial interchanges by accelerating the execution and cyber ability in a solitaire state. During this process, any adversarial probing engagements only affect the actor on the approach to the Comprehension Barrier, and once the actor has arrived in the Zero Domain, a complete state of A2/AD is present. From that point forward, the actor that reached the Zero Domain has cyberspace singularity: the accelerated actor is the only actor that can understand the digital landscape, can engage unilaterally without an adversarial ability to counterattack or interfere, and holds the ability to decide when, how, and where to attack. In the Zero Domain, the accelerated singularity forges the battlefield gravity and thrust into a single power that denies adversarial cyber operations and acts as one force of destruction, extraction, corruption, and exploitation of targeted adversarial digital assets.

When breaking the Comprehension Barrier, the first of the adversary's final points of comprehension to be passed is human deliberation, directly followed by preauthorization and machine learning; once these final points of comprehension are passed, the rapid actor enters the Zero Domain.

Key to victory has been the concept of getting inside the opponent's OODA-loop and thereby distorting, degrading, and derailing any part of the opponent's OODA. In accelerated warfare beyond the Comprehension Barrier, there is no need to be inside the opponent's OODA-loop, because the accelerated warfare concept removes the OODA-loop for the opponent and, by doing so, decapitates the opponent's ability to coordinate, seek effect, and command. In the Zero Domain, the opposing force has no contact with its enemy, and its OODA-loop has evaporated.

The Zero Domain is the warfighting domain where the accelerated velocity of warfighting operations removes the enemy's presence. It is the domain with zero opponents. It is not area denial, because the enemy is unable to accelerate to the level at which it could enter the battlespace, and it is not access denial, because the enemy has not been a part of the fight since the Comprehension Barrier was broken.

Even if adversarial nations invest heavily in quantum, machine learning, and artificial intelligence, I am not convinced that these adversarial authoritarian regimes can capitalize on their potential technological peer status with America. The Zero Domain concept has an American advantage because we are less afraid of allowing degrees of freedom in operations, whereas totalitarian and authoritarian states are slowed down by their culture of fear and need for control. An actor that is slowed down lowers the threshold of the Comprehension Barrier and enables the American force to reach the Zero Domain earlier in the future fight, establishing information superiority as a confluence of cyber and information operations.

Jan Kallberg, PhD

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Artificial Intelligence (AI): The risk of over-reliance on quantifiable data

The rise of interest in artificial intelligence and machine learning has a flip side. It might not be so smart if we fail to design the methods correctly. A question out there: can we compress reality into measurable numbers? Artificial intelligence relies on what can be measured and quantified, risking an over-reliance on measurable knowledge. The challenge, as with many other technical problems, is that it all ends with humans who design and assess according to their own perceived reality. The designers' bias, perceived reality, weltanschauung, and outlook: everything goes into the design. The limitations are not on the machine side; the humans are far more limiting. Even if the machines learn from a point forward, it is still a human who stakes out the starting point and the initial landscape.

Quantifiable data has historically served America well; it was a part of the American boom after the Second World War, when America was one of the first countries to take a scientific look at how to improve, streamline, and increase production using fewer resources and less manpower.

The numbers have also misled. Vietnam-era Secretary of Defense Robert McNamara used the numbers to tell how to win the Vietnam War; the numbers clearly indicated how to reach a decisive military victory – according to the numbers. In a post-Vietnam book titled "The War Managers," retired Army general Douglas Kinnard captured the almost bizarre world of seeking to fight the war through quantification and statistics. Kinnard, who later taught at the National Defense University, surveyed fellow generals who had served in Vietnam about the actual support for these methods. These generals considered the concept of assessing progress in the war by body counts useless; only two percent of the surveyed generals saw any value in the practice. Why were the Americans counting bodies? It is likely because it was quantifiable and measurable. It is a common error in research design to seek out the variables that produce accessible quantifiable results, and McNamara was at that time almost obsessed with numbers and the predictive power of numbers. McNamara was not the only one who relied too heavily on the numbers.

In 1939, the Nazi German foreign minister Ribbentrop, together with the German High Command, studied and measured the French and British ability to mobilize and to start a war with little advance warning. The German quantified assessment was that the Allies were unable to engage in a full-scale war on short notice, and the Germans believed that the numbers were identical to the policy reality – that politicians would understand their limits – and that the Allies would not go to war over Poland. So Germany invaded Poland and started the Second World War. The quantifiable assessment was correct and led to Dunkirk, but the grander assessment was off and underestimated the British and French will to take on the fight, which led to at least 50 million dead, half of Europe behind the Soviet Iron Curtain, and the destruction of the Nazis' own regime. The British willingness to fight the war to the end, the British ability to convince the US to provide resources to their effort, and the events that unfolded thereafter were never captured in the data. The German assessment was a snapshot of the British and French war preparations in the summer of 1939 – nothing else.

Artificial intelligence is as smart as the numbers we feed it. Ad notam.

The potential failure is hidden in selecting, assessing, designing, and extracting the numbers that feed artificial intelligence. The risk of grave errors in decision-making, escalation, and avoidable human suffering and destruction is embedded in our future use of artificial intelligence if we do not pay attention to the data that feed the algorithms. Data collection and aggregation are the weakest link in the future of machine-supported decision-making.

Jan Kallberg is a Research Scientist at the Army Cyber Institute at West Point and an Assistant Professor in the Department of Social Sciences (SOSH) at the United States Military Academy. The views expressed herein are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.