Category Archives: Accelerated Warfare

THE WEAPONIZED MIND

As an industrial nation transitioning to an information society engaged in digital conflict, we tend to see technology, and the information that feeds it, as weapons – and ignore the few humans with large-scale operational impact. I believe we underestimate the importance of applicable intelligence – the intelligence of applying things in the correct order. The ability to apply is a far more important asset than the technology itself. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are mostly publicly available; anyone can download them from the Internet and use them. The weaponization of the tools occurs when they are used by someone who understands how to play them in an optimal order.
General Nakasone stated in 2017: “our best ones (coders) are 50 or 100 times better than their peers,” and continued, “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.”

In reality, the success of cyber and cyber operations depends not on the tools or toolsets but on the super-empowered individual General Nakasone calls “the 50-x coder.”

In my experience in cybersecurity, as it migrates into the broader cyber field, there have always been exceptional individuals with an ability no one else can replicate: they see the challenge early on, create a technical solution, and know how to play it in the right order for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of artificial intelligence increases the reliance on these highly able individuals, because someone must set the rules and the boundaries and point out the trajectory for artificial intelligence at the outset. This raises a series of questions. Even if identified as a weapon, how do you make a human mind “classified”?

How do we protect these high-ability individuals who, in the digital world, are weapons – not tools, but compilers of capability?

These minds are different because they see an opportunity to exploit in a digital fog of war when others do not. They address problems unburdened by traditional thinking, in new, innovative ways, maximizing the dual-purpose nature of digital tools, and they can generate decisive cyber effects.
It is applicable intelligence (AI) that creates the process, applies the tools, and turns simple digital software, in sets or combinations, into digitally lethal weapons – the intelligence to mix, match, tweak, and arrange dual-purpose software. To use an example from the analog world: it is as if you had individuals with the supernatural ability to create a hypersonic missile from what they can find at Kroger or Albertsons. For a nation, these individuals are strategic national security assets.
These intellects are weapons of growing strategic magnitude as the combat environment gains complexity and velocity, the target surface grows, and uncertainty deepens.
For the last decades, our efforts have instead focused on what these individuals deliver – the application and the technology – which was hidden in secret vaults and discussed only in sensitive compartmented information facilities. We classify these individuals’ output at the highest level to ensure the confidentiality and integrity of our cyber capabilities. Meanwhile, to the most critical component, the militarized intellect, we assign no value, because it is human. In a society marinated in an engineering mindset, humans are like desk space, electricity, and broadband: a commodity that is input to the production of technical machinery. The marveled technical machinery is the only thing we care about today, in 2019, and we do not protect our elite militarized brains enough.
At a systemic level, we are unable to see humans as the weapon itself, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed. Arms are made of steel, or fancier metals, with electronics – we fail to see weapons made of sweet ‘tater, corn, steak, and an added combative intellect.

The WWII Manhattan Project had at its peak 125,000 workers on the payroll, but the intellects that drove the project to success and completion were few. The difference between the Manhattan Project and the future of cyber is that Oppenheimer and his team had to rely on a massive industrial effort to provide them with the input material to create a weapon. In cyber, the intellect is the weapon, and the tools are delivery platforms. The tools, the delivery platforms, are free, downloadable, and easily accessed. It is the power of the mind that is unique.

We need to see the human as a weapon and avoid being locked in by our path dependency as an engineering society, where we hail the technology and forget the importance of the humans behind it. America’s endless love of technical innovations and advanced machinery is reflected in a nation that has embraced mechanical wonders and engineered solutions since its creation.

For America, technological wonders are a sign of prosperity, ability, self-determination, and advancement – a story that started in the early days of the colonies and ran through the Erie Canal, the manufacturing era, and the moon landing, all the way to autonomous systems, drones, and robots. In this default mindset, a tool, an automated process, a piece of software, or a set of technical steps solves a problem or executes an action. The same mindset sees humans merely as input to technology, so humans are interchangeable and can be replaced.

The super-empowered individuals are not interchangeable and cannot be replaced – unless we want to be stuck in a digital war at speeds we do not understand, unable to play it in the right order, with limited intellectual torque to see through the fog of war produced by an exploding kaleidoscope of nodes and digital engagements. Artificial intelligence and machine learning support the intellectual endeavor to cyber defend America, but in the end, humans set the strategy and direction. It is time to see weaponized minds for what they are: not dudes and dudettes, but strike capabilities.

Jan Kallberg, Ph.D.

Plan Red: Three Days to Paldiski

//This is my original text before editing//

European newspapers have shared a German assessment of what a Russian assault on NATO and the Baltics could look like. If you want to understand your adversary, put yourself in his shoes, and so I did: I decided I would start war planning for the assault immediately. The Kremlin loves a mighty name on a war plan, like the Americans, who spend a lot of energy and time coming up with the most appealing name for an operation, even for the ones that later failed, so I spun on the old Soviet plan “Seven Days to the Rhine.” My plan – “Plan Red: Three Days to Paldiski” – was a success as soon as the Leningrad MD heard about it.

When I started to write the plan, I needed to list the assumptions that would lay its foundation.

My five overarching general assumptions are straightforward.

First, most Western European armed forces are in a grave state of dismantled readiness and have limited abilities. Even significant powers such as Germany, France, and Great Britain might talk big and politically market their rearmament programs, but at the actual units, readiness is still what it was ten years ago. The first significant NATO formation to arrive at the Lithuanian border would be Polish, after three days – the cut-off for the plan – but German and other European NATO forces would not be seen for at least ten to fifteen days. The Western European NATO members are in denial about their readiness, living in an imaginary world where recent years’ talk of rearmament is already reality. One example is Gotland, which is still defended by only two mechanized companies and some Home Guard. The difference between imagined readiness and actual readiness forms our opportunity.

Second, since the Cold War, the fear of nuclear arms in Western Europe has built up to the degree that the political debate no longer even mentions these weapons. At least in the 1970s and 1980s, there was a discussion. Added to the silence about nuclear arms is the almost eighty-year-old geopolitical equilibrium in which nuclear arms are not used and are seen as theatrical instruments to portray strategic deterrence. If we, the Russians, used nuclear arms, it would send shock waves not only through the political and military leadership and systems but also create chaos in the financial markets on a global scale. The attack on the World Trade Center on 9/11 was not only a deadly event; it created total mayhem in the global stock markets and pushed the US into recession. It might not be nice, but it serves the Russian interest well.

Third, I assess that any NATO “trip wire” units in the Baltics will be passive. These units do not have the artillery, logistics, medical support, or heavy weapons to engage a Russian spearhead. So if we, the Russian army, circumvent these NATO units, there will be no interference during our operational window in the first three days. The “trip wire” units will not attack but will hold the territory where they are stationed, and we will drive around them. Finland will not have time to mobilize or push units toward Russia, nor will Finland cross the Russian border, fully aware of the risk of a nuclear response. Russia does not need to dedicate units beyond the regular staffing along the Finnish border.

Fourth, we are better off with a smaller force and no sign on the surface of what we are up to than with large troop movements, hybrid warfare, loud propaganda, and psychological operations. Those actions would only alert NATO. We can achieve with a regiment what would require a division if NATO started to understand our intentions. We will not share our intentions with the government or foreign entities; even China will be unaware. In the assault on Ukraine, there was almost no surprise; we ran into massive resistance early.

Fifth, we, the Russians, will create a false illusion that there is a political solution, a settlement – that we are open to realizing that peace is better than conflict. The default belief in a political, diplomatic solution will slow the Western response and create political division and indecisiveness at critical junctures. From the first armored column that passes the Estonian border, we will use all diplomatic channels to send this message of confusion and delay – that there is a political, diplomatic solution. The numerous governments that form NATO will lose valuable time discussing a diplomatic solution that never existed – and that serves our Russian objectives.

The actual plan is simple.

PLAN RED: THREE DAYS TO PALDISKI
Day one, a missile barrage on high-value targets opens the engagement. One echelon of armor and attack helicopters, supported by rocket artillery, pushes through northern Estonia – Narva, Reval (Tallinn) – to Paldiski. A battalion-size naval infantry force lands in Reval (Tallinn) harbor simultaneously.
In the south, another echelon pushes to Kaliningrad Oblast through Lithuania and immediately turns south to defend against NATO troops coming from Poland. Rear forces mop up the Lithuanian defenses and resistance in the days to come.
Latvia is ignored and sits in a Kurland Kessel, the Courland pocket; the Latvian army does not have the means to attack in any direction.
A high-altitude nuclear EMP weapon detonated over international waters knocks out installations on Gotland, including Visby airport, and a battalion-sized airborne unit captures Visby airfield.
Day two, secure day one’s targets and reinforce the echelons.

When the Polish army arrives on day three, direct communication with NATO declares that any attempt to occupy the Baltic oblasts will draw a nuclear response, followed by a demonstration: a massive nuclear strike by the Strategic Rocket Forces on Russian borderland – on Novaya Zemlya, the large island north of Murmansk, and in the East Siberian Sea in the Far East.

Then a Kremlin phone call to NATO leadership – what are you gonna do about it?
The peace deal is that Russian forces leave Gotland. That’s it.

Jan Kallberg, Ph.D., LL.M., is a non-resident Senior Fellow with the Transatlantic Defense and Security program at the Center for European Policy Analysis (CEPA) and a George Washington University faculty member. Follow him at cyberdefense.com and @Cyberdefensecom.

(The text was published by the Center for European Policy Analysis in an edited form – accessible through this link)

Jan Kallberg, Ph.D.: A link collection of my writings about the Russo-Ukrainian War

Kallberg, Jan. 2023. Ukraine’s War of the Treelines. The Center for European Policy Analysis (CEPA), October 2.

Kallberg, Jan. 2023. Ukraine War Lesson No. 1 — Chatty Micromanagers Die. The Center for European Policy Analysis (CEPA), September 11.

Kallberg, Jan, and Stephen Hamilton. 2023. Command by intent can ensure command post survivability. Defense News (C4ISRNET), August 29.

Kallberg, Jan. 2023. Ukraine — Victory Is Closer Than You Think. The Center for European Policy Analysis (CEPA), August 23.

Kallberg, Jan. 2023. Junior Officers on the Battlefields of Ukraine. The Center for European Policy Analysis (CEPA), May 26.

Kallberg, Jan. 2023. NATO — The Frenemy Within. The Center for European Policy Analysis (CEPA), April 11.

Kallberg, Jan. 2023. Why Russia will lose. The Center for European Policy Analysis (CEPA), March 6.

Kallberg, Jan. 2023. After the war in Ukraine: cyber revanchism. CyberWire, February 10.

Kallberg, Jan. 2022. Leader Loss: Russian Junior Officer Casualties. The Center for European Policy Analysis (CEPA), December 23.

Kallberg, Jan. 2022. Russia’s Imperial Farce. The Center for European Policy Analysis (CEPA), December 1.

Kallberg, Jan. 2022. Russia’s Aggression Justifies Western Cyber Intervention. The Center for European Policy Analysis (CEPA), November 9.

Kallberg, Jan. 2022. Russia’s Military – Losing the Will to Fight. The Center for European Policy Analysis (CEPA), September 15.

Kallberg, Jan. 2022. The West Has Forgotten How to Keep Secrets. The Center for European Policy Analysis (CEPA), August 8.

Kallberg, Jan. 2022. Goodbye Vladivostok, Hello Hǎishēnwǎi! The Center for European Policy Analysis (CEPA), July 12.

Kallberg, Jan. 2022. Defending NATO in the High North. The Center for European Policy Analysis (CEPA), July 1.

Kallberg, Jan. 2022. Drones Will not Liberate Ukraine – but Tanks Will. The Center for European Policy Analysis (CEPA), June 24.

Kallberg, Jan. 2022. A Potemkin Military? Russia’s Over-Estimated Legions. The Center for European Policy Analysis (CEPA), May 6.

Kallberg, Jan. 2022. Russia Won’t Play the Cyber Card, Yet. The Center for European Policy Analysis (CEPA), March 30.

Kallberg, Jan. 2022. A troubling silence on Prisoners of War. The Center for European Policy Analysis (CEPA), March 22.

Kallberg, Jan. 2022. Free War: A strategy for Ukraine to resist Russia’s brutal invasion of Ukraine? 19FortyFive, March 10.

Kallberg, Jan. 2022. Too late for Russia to stop the foreign volunteer army. The Center for European Policy Analysis (CEPA), March 10.

Kallberg, Jan. 2022. An Underground Resistance Movement for Ukraine. The Center for European Policy Analysis (CEPA), March 7.

Bottom line: Commanders who can’t delegate will not survive on the modern battlefield

From our article in C4ISRNET (Defense News):
“Command by intent can ensure command post survivability”

Link to full text

“In a changing operational environment, where command posts are increasingly vulnerable, intent can serve as a stealth enabler.

A communicated commander’s intent can serve as a way to limit electronic signatures and radio traffic, seeking to obfuscate the existence of a command post. In a mission command-driven environment, communication between command post and units can be reduced. The limited radio and network traffic increases command post survivability.

The intent must explain how the commander seeks to fight the upcoming 12-24 hours, with limited interaction between subordinate units and the commander, providing freedom for the units to fulfill their missions. For a commander to deliver intent in a valuable and effective manner, the delivery has to be trained so the leader and the subordinates have a clear picture of what they set out to do.

 


Offensive Cyber in Outer Space

The most cost-effective and simplest cyberattack in outer space intended to bring down a targeted space asset is likely to use space junk that still has fuel and responds to communications – and use it to ram or force targeted space assets out of orbit. The benefits for the attacker: the attack is hard to attribute, costs are low, and if the attacker has no use for the space terrain, it benefits from anti-access/area denial through the space debris created by a collision.

The West Has Forgotten How to Keep Secrets

My CEPA article about the intelligence vulnerability that open access, open government, and open data can create if left unaddressed and out of sync with national security – The West Has Forgotten How to Keep Secrets.
From the text:
“But OSINT, like all other intelligence, cuts both ways — we look at the Russians, and the Russians look at us. But their interest is almost certainly in freely available material that’s far from televisual — the information a Russian war planner can now use from European Union (EU) states goes far, far beyond what Europe’s well-motivated but slightly innocent data-producing agencies likely realize.

Seen alone, the data from environmental and building permits, road maintenance, forestry data on terrain obstacles, and agricultural data on ground water saturation are innocent. But when combined as aggregated intelligence, it is powerful and can be deeply damaging to Western countries.

Democracy dies in the dark, and transparency supports democratic governance. The EU and its member states have legally binding comprehensive initiatives to release data and information from all levels of government in pursuit of democratic accountability. This increasing European release of data — and the subsequent addition to piles of open-source intelligence — is becoming a real concern.

I firmly believe we underestimate the significance of the available information — which our enemies recognize — and that a potential adversary can easily acquire.”
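To make the aggregation point concrete, here is a minimal sketch in Python (pandas) of how three individually innocent open datasets could be fused into route intelligence. Every dataset, column name, and figure below is invented for illustration; the point is the join, not the data.

```python
# Hypothetical illustration: three "innocent" open datasets, joined into
# route intelligence. All routes, columns, and figures are invented.
import pandas as pd

# Open data releases, each harmless on its own.
bridges = pd.DataFrame({
    "route": ["E4", "Road 55", "Road 70"],
    "bridge_load_limit_tons": [90, 60, 40],      # from building/maintenance permits
})
roadworks = pd.DataFrame({
    "route": ["E4", "Road 55", "Road 70"],
    "closed_for_repair": [False, True, False],   # from road maintenance data
})
ground = pd.DataFrame({
    "route": ["E4", "Road 55", "Road 70"],
    "offroad_trafficable": [True, False, True],  # from agri/forestry soil data
})

# Aggregation: which routes can carry ~65-ton armor, today, with bypass options?
intel = bridges.merge(roadworks, on="route").merge(ground, on="route")
intel["usable_for_heavy_armor"] = (
    (intel["bridge_load_limit_tons"] >= 65) & ~intel["closed_for_repair"]
)
print(intel)
```

None of the three releases would raise an eyebrow alone; the join is what turns them into a war planner’s product.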

 

 

The long-term cost of cyber overreaction

The default modus operandi when facing negative cyber events is to react, often leading to overreaction. It is essential to highlight the cost of overreaction, which needs to be part of the calculation of when to engage and how. For an adversary probing cyber defenses, reactions provide information that can be aggregated into a clear picture of the defender’s capabilities and preauthorization thresholds.

Ideally, potential adversaries cannot assess our strategic and tactical cyber capacities, but over time, and numerous responses later, that information advantage evaporates. A reactive culture triggered by cyberattacks provides significant information to a probing adversary that seeks to understand underlying authorities and tactics, techniques, and procedures (TTP).

The more we act, the more the potential adversary understands our capacity, ability, techniques, and limitations. I am not advocating a passive stance, but I want to highlight the price of acting against a potential adversary. With each reaction, that competitor gains certainty about what we can do and how. The political scientist Kenneth N. Waltz said that the power of nuclear arms resides in what you could do, not in what you do. A large part of a cyber force’s strength resides in the uncertainty about what it can do, which should be difficult for a potential adversary to assess and gauge.

Why does it matter? In an operational environment where adversaries operate under the threshold of open conflict, in sub-threshold cyber campaigns, an adversary will probe to determine the threshold and to ensure that it can operate effectively in the space below it. If a potential adversary cannot gauge the threshold, it will curb its activities, as its cyber operations must remain adequately distanced from a potential, unknown threshold to avoid unwanted escalation.
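To see why a legible threshold is dangerous, consider a toy model: if a defender reacts according to a fixed, preauthorized rule, an adversary that merely observes which probes draw a response can binary-search the threshold. The threshold value and probe scale below are invented for illustration.

```python
# Toy model: a defender reacts according to a fixed preauthorized rule
# ("react above intensity X"). An adversary that observes which probes
# draw a response can binary-search X. All values are illustrative.

def defender_reacts(intensity: float, threshold: float = 0.62) -> bool:
    """Fixed, preauthorized rule: react to anything above the threshold."""
    return intensity > threshold

def estimate_threshold(trials: int = 20) -> float:
    lo, hi = 0.0, 1.0
    for _ in range(trials):
        probe = (lo + hi) / 2
        if defender_reacts(probe):
            hi = probe   # reaction observed: the threshold is below the probe
        else:
            lo = probe   # no reaction: the threshold is above the probe
    return (lo + hi) / 2

print(f"Adversary's estimate after 20 probes: {estimate_threshold():.4f}")
# Each observed response halves the remaining uncertainty; a consistent,
# reactive posture hands the adversary the threshold to arbitrary precision.
```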

Cyber was doomed to be reactionary from its inception; its inherited legacy from information assurance creates a focus on trying to defend, harden, detect and act. The concept is defending, and when the defense fails, it rapidly swings to reaction and counteractivity. Naturally, we want to limit the damage and secure our systems, but we also leave a digital trail behind every time we act.

In game theory, proportional responses lead to tit-for-tat games with no decisive outcome. The lack of a desired end state in a tit-for-tat game is essential to keep in mind as we discuss persistent engagement. In the same way as Colin Powell reflected on the conflict in Vietnam, operations without an endgame, or without a concept of what decisive victory looks like, are engagements for the sake of engagements. Even worse, a tit-for-tat game of continuous engagements might be damaging, as it trains potential adversaries, who can copy our TTPs, to fight in cyber. Proportionality is a constant flow of responses that reveals friendly capabilities and makes potential adversaries more able.
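The no-decisive-outcome dynamic is easy to demonstrate. In the minimal sketch below, two strictly proportional (tit-for-tat) players exchange moves; after a single opening defection, they mirror each other indefinitely. The simulation is illustrative only.

```python
# Minimal iterated-game sketch: two strictly proportional (tit-for-tat)
# players. After a single opening defection, every response mirrors the
# opponent's last move, and the exchange never reaches an end state.

def tit_for_tat(opponent_history):
    # Cooperate first; afterwards, mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def play(rounds=10):
    a_hist, b_hist = [], []
    for r in range(rounds):
        a = "D" if r == 0 else tit_for_tat(b_hist)  # one initial defection
        b = tit_for_tat(a_hist)
        a_hist.append(a)
        b_hist.append(b)
    return list(zip(a_hist, b_hist))

for rnd, moves in enumerate(play(), start=1):
    print(rnd, moves)
# Prints (D, C), (C, D), (D, C), ... : a constant flow of proportional
# responses with no decisive outcome for either side.
```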

There is no straight answer to how to react. A disproportionate response to specific events increases the risk for the potential adversary, but it cuts both ways, as a disproportionate response could create unwanted escalation.

The critical concern is that, to maintain the nation’s ability to conduct decisive cyber operations, the extent of friendly cyber capabilities needs nearly intact secrecy to prevail at a critical juncture. It might be time to put a stronger emphasis on intelligence gain-loss (IGL) assessments to answer whether the defensive gain now outweighs the potential loss of ability and options in the future.

The habit of overreacting to ongoing cyberattacks undermines the ability to engage quickly and by surprise, and to defeat an adversary when it matters most. Continuously reacting and flexing capabilities might fit the general audience’s perception of national ability, but it can also undermine the outlook for a favorable geopolitical cyber endgame.

Government cyber breach shows need for convergence

(I co-authored this piece with MAJ Suslowicz and LTC Arnold).

MAJ Chuck Suslowicz, Jan Kallberg, and LTC Todd Arnold

The SolarWinds breach points out the importance of having both offensive and defensive cyber force experience. The breach is under ongoing investigation, and we will not comment on the investigation. Still, in general terms, we want to point out the exploitable weaknesses created by maintaining two silos — offensive cyber operations (OCO) and defensive cyber operations (DCO). The separation of OCO and DCO, through the specialization of formations and leadership, undermines a broader understanding and the value of threat intelligence. The growing demarcation between OCO and DCO also has operational and tactical implications. The Multi-Domain Operations (MDO) concept emphasizes the competitive advantages that the Army — and the greater Department of Defense — can bring to bear by leveraging the unique and complementary capabilities of each service.

It requires that leaders understand the capabilities their organization can bring to bear to achieve the maximum effect from the available resources. Cyber leaders must have exposure to the depth and breadth of their chosen domain to contribute to MDO.

Unfortunately, within the Army’s operational cyber forces, there is a tendency to designate officers as either OCO or DCO specialists. The shortsighted nature of this categorization is detrimental to the Army’s efforts in cyberspace and stymies the development of the cyber force, affecting all soldiers. The Army’s planning and operational contribution to MDO will suffer from a siloed officer corps unexposed to the domain’s inherent flexibility.

We consider the assumption that there is a distinction between OCO and DCO to be flawed. It perpetuates the idea that the two operational types are doing unrelated tasks with different tools, and that experience in one will not improve performance in the other. We do not see such a rigid distinction between OCO and DCO competencies. In fact, most concepts within the cyber domain apply directly to both types of operations. The argument that OCO and DCO share competencies is not new; the iconic cybersecurity expert Dan Geer first pointed out that cyber tools are dual-use nearly two decades ago, and continues to do so. A tool that is valuable to a network defender can prove equally valuable during an offensive operation, and vice versa.

For example, a tool that maps a network’s topology is critical for the network owner’s situational awareness. The same tool is equally effective for an attacker maintaining situational awareness of a target network. The dual-use nature of cyber tools requires cyber leaders to recognize both sides of their utility: a tool that does a beneficial job of visualizing key terrain to defend will also create a high-quality roadmap for a devastating attack. Limiting officer experiences to only one side of cyberspace operations (CO) will limit their vision, handicap their input as future leaders, and risk squandering effective use of the cyber domain in MDO.
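To illustrate how thin the line between the two uses is, here is a minimal host-discovery sketch using only the Python standard library. The subnet and port list are placeholders, and it should only ever be pointed at a network you own; the identical code serves as a defender’s asset inventory and an attacker’s target map.

```python
# Minimal host-discovery sketch (standard library only): the same scan that
# gives a defender situational awareness gives an attacker a target map.
# Subnet and ports are placeholders; scan only networks you own.
import socket
from concurrent.futures import ThreadPoolExecutor

SUBNET = "192.168.1."        # placeholder network
PORTS = (22, 80, 443, 3389)  # common admin/service ports

def check_host(ip):
    open_ports = []
    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.3)
            if s.connect_ex((ip, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
    return ip, open_ports

with ThreadPoolExecutor(max_workers=64) as pool:
    results = pool.map(check_host, (SUBNET + str(i) for i in range(1, 255)))

for ip, ports in results:
    if ports:
        print(f"{ip}: {ports}")  # defender: asset inventory / attacker: roadmap
```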

An argument will be made that “deep expertise is necessary for success” and that officers should be chosen for positions based on their previous exposure. This argument fails on two fronts. First, the Army’s decades of experience in officer development have shown the value of diverse exposure across assignments. Other branches already ensure officers experience a breadth of assignments to prepare them for senior leadership.

Second, this argument ignores the reality of “challenging technical tasks” within the cyber domain. As cyber tasks grow more technically challenging, the tools become more common between OCO and DCO, not less. For example, two of the most technically challenging tasks — reverse engineering of malware (DCO) and development of exploits (OCO) — use virtually identical toolkits.

An identical argument can be made for network defenders preventing adversarial access and offensive operators seeking to gain access to adversary networks. Ultimately, the types of operations differ in their intent and approach, but significant overlap exists within their technical skillsets.

Experience within one fragment of the domain directly translates to the other and provides insight into an adversary’s decision-making processes. This combined experience provides critical knowledge for leaders, and lack of experience will undercut the Army’s ability to execute MDO effectively. Defenders with OCO experience will be better equipped to identify an adversary’s most likely and most devastating courses of action within the domain. Similarly, OCO planned by leaders with DCO experience are more likely to succeed as the planners are better prepared to account for potential adversary countermeasures.

In both cases, the cross-pollination of experience improves the Army’s ability to leverage the cyber domain and improves its effectiveness. Single-tracked officers may initially be easier to integrate or better able to contribute on day one of an assignment. However, single-tracked officers will ultimately bring far less to the table than officers experienced in both sides of the domain, given the multifaceted cyber environment in MDO.

Maj. Chuck Suslowicz is a research scientist in the Army Cyber Institute at West Point and an instructor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). Dr. Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. LTC Todd Arnold is a research scientist in the Army Cyber Institute at West Point and an assistant professor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Department of Defense.

 

The evaporated OODA-loop

The accelerated execution of cyberattacks and the increased ability to identify vulnerabilities for exploitation at machine speed compress the time window cybersecurity management has to address unfolding events. In reality, we assume there will be time to lead, assess, and analyze, but that window might be closing rapidly. It is time to face the issue of accelerated cyber engagements.

If there is limited time to lead, how do you ensure that you can execute a defensive strategy? How do we launch countermeasures at a speed beyond human ability and comprehension? If you don’t have time to lead, the alternative is to preauthorize. In the early days of the Cold War, war planners and strategists who were used to having days to react to events faced ICBMs that forced decisions within minutes. The solution? Preauthorization. The analogy between how the nuclear threat was addressed and cybersecurity works to a degree – but we have to recognize that the number of possible scenarios in cybersecurity could run into the hundreds, and we need to prioritize.

The cybersecurity preauthorization process would require an understanding of likely scenarios and of the unfolding events that follow those scenarios. The weaknesses in preauthorization are several. First, the scenarios we create are limited, because they are built on how we perceive our system environment. This is exemplified by the old saying: “What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.”

The creation of scenarios as a foundation for preauthorization will be laden with biases, with assumptions that some areas are secure when they are not, and with an inability to see the attack vectors an attacker sees. So the major challenge, when considering preauthorization, is to create scenarios that are representative of potential outcomes.

One way is to look at the different attack strategies used earlier. This limits the scenarios to what has already happened to others, but it can be a base to which additional scenarios are added. The MITRE ATT&CK Navigator provides an excellent tool to simulate and create attack scenarios that can be a foundation for preauthorization. As we progress, artificial intelligence will become an integrated part of offloading decision-making, but we are not there yet. In the near future, artificial intelligence can cover parts of the managerial spectrum, increasing the human ability to act in very brief time windows.
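As a sketch of what that foundation can look like in practice, the snippet below encodes one hypothetical scenario as an ATT&CK Navigator layer that can be loaded in the Navigator UI. The JSON fields follow the Navigator’s published layer format, and the technique IDs are real ATT&CK entries, but the scenario, scores, version numbers, and preauthorized actions in the comments are invented for illustration.

```python
# Sketch: encode one preauthorization scenario as an ATT&CK Navigator layer.
# The structure follows the Navigator's published layer format; the
# technique-to-action mapping below is an invented placeholder.
import json

scenario = {
    "T1566": "Phishing -> preauthorized: quarantine mailbox, reset credentials",
    "T1078": "Valid Accounts -> preauthorized: disable account, force MFA",
    "T1486": "Data Encrypted for Impact -> preauthorized: isolate segment",
}

layer = {
    "name": "Scenario 01 - ransomware intrusion (preauthorized responses)",
    "versions": {"attack": "14", "navigator": "4.9.1", "layer": "4.5"},
    "domain": "enterprise-attack",
    "description": "Techniques from prior incidents mapped to preauthorized actions.",
    "techniques": [
        {"techniqueID": tid, "score": 1, "comment": action, "enabled": True}
        for tid, action in scenario.items()
    ],
}

with open("scenario01_layer.json", "w") as f:
    json.dump(layer, f, indent=2)   # load this file in the Navigator UI
```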

The second weakness is preauthorization’s vulnerability to probing and reverse engineering. Cybersecurity is active 24/7/365, with numerous engagements on an ongoing basis. Over time, using machine learning, automated attack mechanisms could learn to avoid triggering preauthorized responses by probing, and reverse-engineer ways past the preauthorized controls.

So there is no easy road forward but instead a tricky path that requires clear objectives, alignment with risk management and its risk appetite, and an acceptance that the final approach to addressing the increased velocity of attacks might not be perfect. The alternative – not addressing the accelerated execution of attacks – is not viable. That would hand the initiative to the attacker and expose the organization to uncontrolled risks.

Repeatedly over the last two years, I have read references to the OODA-loop and the utility of the OODA concept for cybersecurity. The OODA-loop resurfaces in cybersecurity and information security managerial approaches as a structured way to address unfolding events. The OODA (Observe, Orient, Decide, Act) loop, developed by John Boyd in the 1960s, follows the steps of observe, orient, decide, and act. You observe the events unfolding, you orient your assets at hand to address the events, you make up your mind about what is a feasible approach, and you act.

The OODA-loop has been a central concept in cybersecurity over the last decade, as it is seen as a vehicle to address what attackers do, when and where, and what you should do and where it is most effective. The refrain has been “you need to get inside the attacker’s OODA-loop.” The OODA-loop is used as a way to understand the adversary and tailor your own defensive actions.

Retired Army Colonel Tom Cook, former research director for the Army Cyber Institute at West Point, and I wrote a 2017 IEEE article titled “The Unfitness of Traditional Military Thinking in Cyber,” questioning the validity of the OODA-loop in cyber when events unfold faster and faster. Today, in 2020, the validity of the OODA-loop in cybersecurity is on the brink of evaporating due to the increased speed of attacks. The time needed to observe and assess, direct resources, make decisions, and take action will be too long to muster a successful cyber defense.

Attacks occurring at computational speed worsen the inability to assess and act, and the increasingly short time frames likely to be found in future cyber conflicts will disallow any significant, timely human deliberation.

Moving forward

I have no intention of being a narrative impossibilist who presents challenges with no solutions, so the current way forward is preauthorization. In the near future, the human ability to play an active role in rapid engagements will be supported by artificial intelligence decision-making that executes the tactical moves. The human mind is still in charge of the operational decisions for several reasons – control, the larger picture, strategic implementation, and intent. For cybersecurity, it is pivotal over the next decade to be able to operate within a decreasing time window to act.

Jan Kallberg, Ph.D.

For ethical artificial intelligence, security is pivotal

 

The market for artificial intelligence is growing at an unprecedented speed, not seen since the introduction of the commercial Internet. The estimates vary, but the global AI market is assumed to grow 30 to 60 percent per year. Defense spending on AI projects is increasing at an even higher rate when we add wearable AI and systems that depend on AI. Defense investments such as augmented reality, automated target recognition, and tactical robotics would not advance at today’s rate without AI to support the realization of these concepts.

The beauty of the economy is its responsiveness. With an identified “buy” signal, the market works to satisfy the buyer’s need. Powerful buy signals lead to rapid development, deployment, and roll-out of solutions, because time to market matters.

My concern is based on earlier analogies, when time to market prevailed over conflicting interests. Examples include the first years of the commercial internet, the introduction of remote control for supervisory control and data acquisition (SCADA) systems and manufacturing, and the rapid growth of smartphone apps. In each of these cases, security was not the first thing on the developers’ minds; time to market was the priority. The exposure increases with the economically sound pursuit of commercial off-the-shelf (COTS) products, as sensors, chipsets, functions, electronic controls, and storage devices can be bought on the civilian market for a fraction of the cost. These COTS products cut costs, give the American people more defense and security for the money, and shorten the development and deployment cycle.

The Department of Defense has adopted five ethical principles for the department’s future utilization of AI: responsible, equitable, traceable, reliable, and governable. The common denominator of all five principles is cybersecurity. If the cybersecurity of the AI application is inadequate, these five principles can be jeopardized and no longer steer the DOD’s AI implementation.

The future AI implementation radically increases the attack surface, and of particular concern is the ability to detect manipulation of the processes, because the underlying AI processes are not clearly understood or monitored by the operators. A system that detects targets in images or streaming video, where AI is used to identify target signatures, will generate decision support that can lead to the destruction of those targets. The targets are engaged and neutralized. One of the ethical principles for AI is “responsible.” How do we ensure that the targeting is accurate? How do we safeguard that the algorithm is not corrupt and that sensors are not being tampered with to produce spurious data? It becomes a matter of security.
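One narrow, concrete control in that direction – a minimal sketch, assuming a known-good SHA-256 digest recorded when the model was accredited – is to verify the integrity of the deployed model file before loading it. This detects tampering with the stored algorithm at rest; it does nothing against sensor spoofing or adversarial inputs, which need their own safeguards.

```python
# Sketch of one narrow control for the "reliable" principle: verify a
# deployed model file against a digest recorded at accreditation before
# loading it. Paths and the expected digest are placeholders.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0" * 64   # placeholder: digest recorded at accreditation

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def load_model_checked(path: Path) -> Path:
    digest = sha256_of(path)
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"Model integrity check failed for {path}: {digest}")
    # Only now hand the file to the actual model loader (framework-specific).
    return path
```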

In a larger conflict, where ground forces are not able to inspect the effects on the ground, the feedback loop that would invalidate the decisions supported by AI might not reach the operators for weeks. Or it might surface only after the conflict is over. A rogue system can likely produce spurious decision support for longer than we are willing to admit.

Of the five principles, “equitable” is the area of highest human control. Embedded bias in a process is hard to detect, but controlling it is within our reach. “Reliable” relates directly to security, because it requires that the systems maintain confidentiality, integrity, and availability.

If the principle “reliable” requires cybersecurity vetting and testing, we have to realize that these AI systems are part of complex technical structures with a broad attack surface. If “reliable” is jeopardized, then “traceable” becomes problematic, because if the integrity of the AI is questionable, it is not a given that “relevant personnel possess an appropriate understanding of the technology.”

The principle “responsible” can still be valid, because deployed personnel make sound and ethical decisions based on the information provided, even if a compromised system feeds spurious information to the decision-maker. The principle “governable” acts as a safeguard against “unintended consequences.” The unknown is the time from when unintended consequences occur until the operators of the compromised system understand that the system is compromised.

It is evident when a target that should be hit is repeatedly missed; the effects can be observed. If the effects cannot be observed, it is no longer a given that “unintended consequences” are identified, especially in a fluid multi-domain battlespace. A compromised AI system for target acquisition can mislead targeting toward hidden non-targets, wasting resources and weapon system availability and exposing friendly forces to detection. The time to detect such a compromise can be significant.

My intention is to show that cybersecurity is pivotal for AI’s success. I do not doubt that AI will play an increasing role in national security. AI is a top priority for the United States and for our friendly foreign partners, but potential adversaries will make finding ways to compromise these systems a top priority of their own.

Why Iran would avoid a major cyberwar

Demonstrations in Iran last year and signs of the regime’s demise raise a question: What would be the strategic outcome of a massive cyber engagement with a foreign country or alliance?

Authoritarian regimes traditionally put survival first. Those who do not prioritize regime survival tend to collapse. Authoritarian regimes are always vulnerable because they are illegitimate. There will always be loyalists who benefit from the system, but for a significant part of the population, the regime is not legitimate. The regime exists only because it suppresses the popular will and uses force against any opposition.

In 2016, I wrote an article in the Cyber Defense Review titled “Strategic Cyberwar Theory – A Foundation for Designing Decisive Strategic Cyber Operations.” The utility of strategic cyberwar is linked to the institutional stability of the targeted state. If a nation is destabilized, it can be subdued to foreign will, and the current regime’s ability to execute its strategy evaporates with the loss of internal authority and ability. The theory’s predictive power is most potent when applied to theocracies, authoritarian regimes, and dysfunctional experimental democracies, because their common tenet is weak institutions.

Fully functional democracies, on the other hand, have a definite advantage, because these advanced democracies have stability and institutions accepted by their citizenry. Nations openly adversarial to democracies are, in most cases, totalitarian states that are close to entropy. The reason these totalitarian states remain under their current regimes is the suppression of the popular will. Any removal of the pillars of repression, by destabilizing the regime design and the institutions that make it functional, will release the popular will.

A destabilized — and possibly imploding — Iranian regime is a more tangible threat to the ruling theocratic elite than any military systems being hacked in a cyber interchange. Dictators fear the wrath of the masses. Strategic cyberwar theory looks beyond the actual digital interchange, the cyber tactics, and instead seeks predictive power over how a decisive cyber conflict should be conducted in pursuit of national strategic goals.

The Iranian military apparatus is a mix of traditional military defense, crowd control, political suppression, and shows of force to generate artificial internal authority in the country. If command and control evaporate in the military apparatus, so does the ability to control the population to the degree the Iranian regime has managed until now. In that light, what is in it for Iran to launch a massive cyber engagement against the free world? What can it win?

If the free world uses its cyber abilities, it is far more likely that Iran itself gets destabilized and falls into entropy and chaos, which could lead to major domestic bloodshed when the victims of 40 years of violent suppression decide the fate of their oppressors. That would not be the intent of the free world; it is simply a consequence of the way the Iranian totalitarian regime has acted toward its own people. The risks for the Iranians are far more significant than the potential upside of being able to inflict damage on the free world.

That doesn’t mean Iranians would not try to hack systems in foreign countries they consider adversarial. Because of the Iranian regime’s constant need to feed its internal propaganda machinery with “victories,” such hacking is more likely to take place on a smaller scale, as uncoordinated low-level attacks seeking to exploit opportunities they come across. In my view, far more dangerous are non-Iranian advanced nation-state cyber actors that impersonate Iranian hackers, making aggressive preplanned attacks under cover of a spoofed identity and transferring the blame, fueled by recent tensions.

A new mindset for the Army: silent running

//I wrote this article together with Colonel Stephen Hamilton and it was published in C4ISRNET//

In the past two decades, the U.S. Army has continually added new technology to the battlefield. While this technology has enhanced the ability to fight, it has also greatly increased the ability for an adversary to detect and potentially interrupt and/or intercept operations.

The adversary in the future fight will have a more technologically advanced ability to sense activity on the battlefield – light, sound, movement, vibration, heat, electromagnetic transmissions, and other quantifiable metrics. This is a fundamental and accepted assumption. The future near-peer adversary will be able to sense our activity in an unprecedented way due to modern technologies. This is driven not only by technology but also by commoditization; sensors that cost thousands of dollars during the Cold War are available at marginal cost today. In addition, software-defined radio technology has larger bandwidth than traditional radios and can scan the entire spectrum several times a second, making it easier to detect new signals.

We turn to the thoughts of Bertrand Russell in his version of Occam’s razor: “Whenever possible, substitute constructions out of known entities for inferences to unknown entities.” Occam’s razor is named after the medieval philosopher and friar William of Ockham, who stated that under uncertainty, the fewer assumptions, the better, and who preached pursuing simplicity by relying on the known until simplicity could be traded for greater explanatory power. So, staying with the limited assumption that the future near-peer adversary will be able to sense our activity at a previously unseen level, we will, unless we change our default modus operandi, be exposed to increased threats and risks. The adversary’s acquired sensor data will be utilized for decision-making, direction finding, and engaging friendly units with all the means available to the adversary.

The Army’s mindset must change to mirror the Navy’s tactic of “silent running” used to evade adversarial threats. While there are recent advances in sensor countermeasure techniques, such as low probability of detection and low probability of intercept, silent running reduces emissions altogether, thus reducing the risk of detection.

In the U.S. Navy submarine fleet, silent running is a stealth mode utilized over the last 100 years following the introduction of passive sonar in the latter part of the First World War. The concept is to avoid discovery by the adversary’s passive sonar by seeking to eliminate all unnecessary noise. The ocean is an environment where hiding is difficult, similar to the Army’s future emission-dense battlefield.

However, on the battlefield, emissions can be managed in order to reduce noise feeding into the adversary’s sensors. A submarine in silent running mode will shut down non-mission essential systems. The crew moves silently and avoids creating any unnecessary sound, in combination with a reduction in speed to limit noise from shafts and propellers. The noise from the submarine no longer stands out. It is a sound among other natural and surrounding sounds which radically decreases the risk of detection.

From the Army’s perspective, the adversary’s primary objective when entering the fight is to disable command and control, elements of indirect fire, and enablers of joint warfighting. All of these units are highly active in the electromagnetic spectrum. So how can silent running be applied to a ground force?

If we transfer silent running to the Army, the same tactic can be as simple as not using equipment just because it is fielded to the unit. If generators go offline when not needed, then sound, heat, and electromagnetic noise are reduced. Radios that are not mission-essential are switched to specific transmission windows or turned off completely, which limits the risk of signal discovery and potential geolocation. In addition, radios are used at the lowest power that still provides acceptable communication, as opposed to unnecessarily high power, which would increase the range of detection. The bottom line: a paradigm shift is needed where we seek to emit the minimum number of detectable signatures, emissions, and radiation.
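The low-power point can be made with napkin math. Under idealized free-space (Friis) propagation, the distance at which an intercept receiver can hear a transmitter grows with the square root of transmit power. The sketch below only illustrates the scaling; the frequency, unit antenna gains, and receiver sensitivity are placeholder assumptions, and terrain, horizon, and noise make real-world detection ranges far shorter.

```python
# Back-of-the-envelope sketch: under idealized free-space (Friis)
# propagation with unit antenna gains, intercept range scales with
# sqrt(transmit power). Frequency and sensitivity are placeholders.
import math

def intercept_range_km(p_tx_watts, freq_mhz=50.0, sensitivity_dbm=-110.0):
    wavelength_m = 299.79 / freq_mhz                   # c / f
    p_min_watts = 10 ** ((sensitivity_dbm - 30) / 10)  # dBm -> watts
    d_m = (wavelength_m / (4 * math.pi)) * math.sqrt(p_tx_watts / p_min_watts)
    return d_m / 1000.0

for watts in (50, 20, 5, 1):
    print(f"{watts:>3} W -> free-space detection out to "
          f"~{intercept_range_km(watts):,.0f} km")
# Cutting power from 50 W to 5 W shrinks the detection radius by sqrt(10),
# roughly a factor of 3, before any real-world losses are even counted.
```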

The submarine becomes undetectable as its noise level diminishes to the level of natural background noise, which enables it to hide within the environment. Ground forces will still be detectable in some form – the future density of sensors and increased adversarial ability over time ensure that – but one goal is to blur the adversary’s situational picture and degrade its ability to accurately assess the function, size, position, and activity of friendly units. The future fluid MDO (multi-domain operations) battlefield would also increase the challenge for the adversary compared to a more static battlefield with a clear separation between friend and foe.

As a preparation for a future near-peer fight, it is crucial to have an active mindset on avoiding unnecessary transmissions that could feed adversarial sensors with information that can guide their actions. This might require a paradigm shift, where we are migrating from an abundance of active systems to being minimalists in pursuit of stealth.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. Col. Stephen Hamilton is the technical director of the Army Cyber Institute at West Point and an academy professor at the U.S. Military Academy. The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy, or the Department of Defense.

 

 

From the Adversary’s POV – Cyber Attacks to Delay CONUS Forces Movement to Port of Embarkation Pivotal to Success

We tend to see vulnerabilities and concerns about cyber threats to critical infrastructure from our own viewpoint. But an adversary will assess where and how a cyberattack on America will benefit its strategy. I am not convinced that attacks on critical infrastructure, in general, have the payoff an adversary seeks.

The American reaction to Sept. 11, and to any attack on U.S. soil, hints to an adversary that attacking critical infrastructure to create hardship for the population might work contrary to the intended softening of the will to resist foreign influence. It is more likely that attacks affecting the general population instead strengthen the will to resist and fight, similar to the British reaction to the German bombing campaign, the Blitz, in 1940. We can’t rule out attacks that affect the general population, but no adversary has enough offensive capability to attack all 16 sectors of critical infrastructure and gain strategic momentum. An adversary has limited cyberattack capabilities and needs to prioritize cyber targets that align with its overall strategy. Trying to see what options, opportunities, and directions an adversary might take requires that we shift our point of view to the adversary’s outlook. One of my primary concerns is pinpointed cyberattacks disrupting and delaying the movement of U.S. forces to theater.

Seen from the potential adversary’s point of view, bringing the cyber fight to our homeland – think delaying the transportation of U.S. forces to theater by attacking infrastructure and transportation networks from bases to the port of embarkation – is a low-investment/high-return operation. Why does it matter?

First, the bulk of U.S. forces are not in the region where a conflict erupts. They are mainly based in the continental United States and must be transported to theater. From an adversary’s perspective, delaying the arrival of U.S. forces might be its only opportunity. If the adversary can exploit operational and tactical superiority in the initial phase of the conflict, by engaging our local allies and U.S. forces in the region swiftly, it can make territorial gains that are too costly to reverse later, leaving the adversary in a strong bargaining position.

Second, even if only partially successful, cyberattacks that delay U.S. forces’ arrival will create confusion. Such attacks would mean units might arrive at different ports, at different times, and with only a fraction of their hardware or personnel, while the rest is stuck in transit.

Third, an adversary that is convinced before a conflict that it can significantly delay the arrival of U.S. units from the continental U.S. to a theater will make a different assessment of the risks of a fait accompli attack. Training and Doctrine Command defines such an attack as one that “is intended to achieve military and political objectives rapidly and then to quickly consolidate those gains so that any attempt to reverse the action by the U.S. would entail unacceptable cost and risk.” Even if an adversary is strategically inferior in the long term, the window of opportunity created by the assumed delay in moving units from the continental U.S. to theater might be enough for it to take military action in pursuit of a successful fait accompli attack.

In designing a cyber defense for critical infrastructure, it is vital that what matters to the adversary be part of the equation. In peacetime, cyberattacks probe systems across society, from waterworks, schools, social media, and retail all the way to sawmills. Cyberattacks in wartime will have more explicit intent and seek a specific gain that supports the strategy. Therefore, it is essential to identify and prioritize the critical infrastructure that is pivotal in war, instead of attempting to spread out the defense to cover everything touched in peacetime.

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.
