Ukraine: the absent Russian Electronic Warfare (EW)

Russian doctrine favors the rapid employment of nonlethal effects, such as electronic warfare (EW), to paralyze and disrupt the enemy in the early hours of a conflict. The expectation was that a full-size Russian invasion of Ukraine would open with a massive employment of electronic warfare.

On the afternoon of the first day of the Russian invasion, the 24th of February, it became apparent that the Russians faced significant command and coordination issues, as there was no effective electronic warfare against Ukrainian communications.

The absence of Russian electronic warfare can have different explanations. The Russians could have assumed marginal Ukrainian resistance and therefore not deployed their EW capabilities. But even after days of stiff Ukrainian resistance, there was still no tangible Russian EW engagement on day five of the invasion. If the Russians had merely held back their EW early on, it should have been operational by day five, so the other explanation stands: the Russians cannot get their EW together.

In my view, that indicates the Russians have failed to synchronize EW, spectrum management, and the activities of different military formations to keep friendly communications functional while the Ukrainians are under Russian EW attack.

After this conflict, the Russian-Ukrainian War, the Russians will have learned from their failures, and so will any potential adversary that studied the war. The core problem in conducting successful EW is not the hardware; management and operational integration are the challenges.

Any potential adversary will adapt to the Russian-Ukrainian War experience and focus on operational integration, but there is a risk of U.S. status quo bias and unwillingness to invest in EW “because apparently, EW doesn’t work.”

The spurious assessment that EW doesn’t work and is not critical for the battlefield becomes the rationale for continuing the current marginal EW investment.

For almost thirty years, American electronic warfare capabilities have been neglected because the spectrum was never contested in Iraq and Afghanistan. The enemy had no ability to detect and strike at electromagnetic activity, so U.S. command posts could radiate electromagnetic signatures, and radios and data links could emit freely, without risk of being annihilated when least expected.

During these decades, other nations have incrementally and strategically increased their ability to conduct electronic warfare by denying and degrading spectrum and by detecting electromagnetic activity to cue kinetic strikes. In recent years, the connection between electromagnetic radiation and kinetic strikes has been demonstrated repeatedly along the front lines in Donbas, where Russian-backed separatists shelled Ukrainian positions. In 2020, we witnessed it in the Second Karabakh War, where Armenian command posts were located by their electromagnetic signatures and rapidly knocked out in the early days of the war.

American forces are not prepared to face electronic warfare that is well-integrated and widely deployed in an opposing force.
For a potential future conflict, it is notable that potential adversaries have heavily invested in the ability to conduct electronic warfare (EW) throughout their force structure. In the Russian Army, each motorized rifle regiment or brigade has an EW company, each division has an EW battalion, and the corps and army structures hold additional units to allocate toward the direction of the main thrust in a ground offensive. The Russians appear not to use it effectively, but they will learn and adapt.

At the doctrine level, the Russian ground forces are designed to be offensive and to take the initiative from the moment the first round is fired, with denial of spectrum access as part of their strategy.
In theory, EW enables forward-maneuver battalions to engage and create disruption for the enemy and an opportunity for exploitation. The Russians benefit from decades of uninterrupted prioritization and development of EW, so they have the hardware; it appears to be the integration that is lacking.

Electronic warfare is a craft, a skill, and potential adversaries’ EW/signal officers are EW/signal officers for their entire careers. Naturally, those junior and mid-career officers lack experience from other Army branches and units, but they know the skills required in EW. In my view, it gives potential adversaries an advantage in electronic warfare compared to the U.S. warrior-scholars who are shuttled around in a system of constant changes of duty station, schools, and tasks.

A significant revision of DOPMA, the Defense Officer Personnel Management Act, has been under discussion, and it is essential that it takes into account the need for time and stability to gain craftsmanship in EW, which is both technical and hands-on, and which requires officers to be able to narrow their specialization without career penalties or being forced out of the force. The requirements for winning a war must prevail over a career flow chart driven by obsolete Taylorism and the belief that everyone is interchangeable. Not everyone is interchangeable, and uniquely talented leaders can ensure mission success through spectrum warfare. In the future fight, EW units will have a far more active role and face constant targeting due to their impact on the battlefield. This development requires leadership and decision-making by leaders who know the EW craft.

The Russian aggression in Ukraine is evidence that a more extensive ground war is possible. Our potential adversaries will learn and adapt their EW from the Russian-Ukrainian War. Meanwhile, it is long overdue to accelerate U.S. investment in fielded and integrated EW. The current state of intermittent integration across formations, and of EW capabilities undersized relative to battlefield needs, has to change.

Every modern high-tech weapon system is a dud without access to the spectrum; that realization should be enough to address this issue.

Jan Kallberg

These opinions are my private viewpoints and do not reflect the position of any employer.

Artificial Intelligence (AI): The risk of over-reliance on quantifiable data

The rise of interest in artificial intelligence and machine learning has a flip side: it might not be so smart if we fail to design the methods correctly. An open question: can we compress reality into measurable numbers? Artificial intelligence relies on what can be measured and quantified, risking an over-reliance on measurable knowledge.

The problem, as with many other technical problems, is that it all ends with humans who design and assess according to their own perceived reality. The designer’s bias, perceived reality, weltanschauung, and outlook all go into the design. The limitations are not on the machine side; the humans are far more limiting. Even if the machines learn from a point forward, it is still a human who stakes out the starting point and the initial landscape.

Quantifiable data has historically served America well; it was part of the American boom after World War II, when America was one of the first countries to take a scientific look at how to improve, streamline, and increase production using fewer resources and less manpower.

Numbers have also misled. Vietnam-era Secretary of Defense Robert McNamara used numbers to chart how to win the Vietnam War; according to the numbers, the path to a decisive military victory was clear.

In a post-Vietnam book titled “The War Managers,” retired Army general Douglas Kinnard laid out the almost bizarre world of seeking to fight the war through quantification and statistics. Kinnard, who later taught at the National Defense University, surveyed fellow generals who had served in Vietnam about the actual support for these methods. The generals considered the concept of assessing progress in the war by body counts useless; only two percent of those surveyed saw any value in the practice.

Why were the Americans counting bodies? Likely because bodies were quantifiable and measurable. It is a common error in research design to seek the variables that produce easily accessible, quantifiable results, and McNamara was at the time almost obsessed with numbers and their predictive power. He was not the only one.

In 1939, the Nazi German foreign minister Ribbentrop, together with the German High Command, studied and measured the French and British war preparations and ability to mobilize. The Germans’ quantified assessment was that the Allies were unable to engage in a full-scale war on short notice, and the Germans believed the numbers were identical to factual reality: the Allies would not go to war over Poland because they were neither ready nor able. So Germany invaded Poland on the 1st of September 1939 and started World War II.

The quantifiable assessment was correct and led to Dunkirk, but the grander assessment was off: it underestimated the British and French will to take on the fight, which led to at least 50 million dead, half of Europe behind the Soviet Iron Curtain, and the destruction of the Germans’ own regime. Britain’s willingness to fight to the end, its ability to convince the U.S. to provide resources, and the subsequent events were never captured in the data. The German quantified assessment was a snapshot of the British and French war preparations in the summer of 1939, nothing else.

Artificial intelligence depends upon the numbers we feed it. The potential failure hides in how we select, assess, design, and extract the numbers that feed artificial intelligence. The risk of grave errors in decision-making, escalation, and avoidable human suffering and destruction is embedded in our future use of artificial intelligence if we do not pay attention to the data that feed the algorithms. Data collection and aggregation are the weakest link in the future of machine-supported decision-making.
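The limitation can be made concrete with a deliberately simple sketch. Everything here is hypothetical: the function, its weights, and the input numbers are invented for illustration. The point is structural: a model scored only on quantifiable inputs is, by construction, blind to any factor that was never encoded as data.

```python
def readiness_score(divisions: int, aircraft: int, fuel_days: int) -> float:
    """Score readiness from measurable inputs only; weights are arbitrary."""
    return 0.5 * divisions + 0.003 * aircraft + 0.2 * fuel_days

# Two scenarios share identical measurable inputs but differ in an
# unmeasured factor, the will to fight. The model's output is identical.
measured = dict(divisions=80, aircraft=3000, fuel_days=20)

score_if_determined = readiness_score(**measured)  # will to fight: high
score_if_hesitant = readiness_score(**measured)    # will to fight: low

print(score_if_determined == score_if_hesitant)  # prints True
```

The 1939 German assessment had exactly this shape: the measured inputs were accurate, but the model could not distinguish an Allied coalition that would fold from one that would fight to the end.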

Jan Kallberg, Ph.D.

Business leaders need to own cyber security

Consultants and IT staff often have more degrees of freedom than needed. Corporate cybersecurity requires a business leader to make the decisions, be personally invested, and lead the security work the same way as the business. The intent and guidance of the business leaders need to be visible. In reality, this is usually not the case. Business leaders rely on IT staff and security consultants to “protect us from cyberattacks.” The risk is obvious – IT staff and consultants are not running the business, lack complete understanding of the strategy and direction, and therefore are unable to prioritize the protection of the information assets.

Demilitarize civilian cyber defense

A cyber crimes specialist with the U.S. Department of Homeland Security looks at the arms of a confiscated hard drive that he took apart. Once the hard drive is fixed, he will put it back together to extract evidence from it. (Josh Denmark/U.S. Defense Department)
U.S. Defense Department cyber units are incrementally becoming a part of the response to ransomware and system intrusions orchestrated from foreign soil. But diverting the military capabilities to augment national civilian cyber defense gaps is an unsustainable and strategically counterproductive policy.

The U.S. concept of cyber deterrence has failed repeatedly, which is especially visible in the blatant and aggressive SolarWinds hack, where the Russian intelligence services, as commonly attributed in the public discourse, established a presence in our digital bloodstream. According to the Cyberspace Solarium Commission, cyber deterrence is established by imposing high costs on those who exploit our systems. As seen from the Kremlin, the cost must be nothing, because there is blatantly no deterrence; otherwise, the Russian intelligence services would have refrained from hacking into the Department of Homeland Security.

After the robust mitigation effort in response to the SolarWinds hack, waves of ransomware attacks have continued. In recent years, especially after the Colonial Pipeline and JBS ransomware attacks, there has been increasing political and public demand for a federal response. The demand is rational; the public and businesses pay taxes and expect protection against foreign attacks. But using military assets is not optimal.

Presidential Policy Directive 41, titled “United States Cyber Incident Coordination,” from 2016 establishes the DHS-led federal response to a significant cyber incident. There are three thrusts: asset response, threat response, and intelligence support. Asset response consists of operative cyber units assisting impacted entities to recover; threat response seeks to hold the perpetrators accountable; and intelligence support provides cyberthreat awareness.

The operative response — the assets — is dependent on defense resources. The majority of the operative cyber units reside within the Department of Defense, including the National Security Agency, as the cyber units of the FBI and the Secret Service are limited.

In reality, our national civilian cyber defense relies heavily on defense assets. So what started with someone in an office deciding to click on an email with ransomware, locking up the computer assets of the individual’s employer, has suddenly escalated to a national defense mission.

The core of cyber operations is a set of tactics, techniques and procedures, which creates capabilities to achieve objectives in or through cyberspace. Successful offensive cyberspace operations are dependent on surprise — the exploitation of a vulnerability that was unknown or unanticipated — leading to the desired objective.

The political scientist Kenneth N. Waltz stated that the geopolitical power of nuclear arms resides not in what you do but in what you can do with them. Few nuclear deterrence analogies work in cyber, but Waltz’s does: as long as a potential adversary cannot assess what the cyber forces can achieve offensively, uncertainty will restrain that adversary. Over time, the adversary’s restrained posture consolidates into an equilibrium: cyber deterrence contingent on secrecy. Cyber deterrence evaporates when a potential adversary understands, through reverse engineering or observation, our tactics, techniques and procedures.

By constantly flexing the military’s cyber muscles to defend the homeland from inbound criminal cyber activity, the public demand for a broad federal response to illegal cyber activity is satisfied. Still, over time, bit by bit, the potential adversary will come to understand our military’s offensive cyber tactics, techniques and procedures. Even worse, the adversary will understand what we cannot do and then seek to operate in the cyber vacuum where we have no reach. Our blind spots become apparent.

Offensive cyber capabilities are supported by the operators’ ability to retain and acquire ever-evolving skills. The more time the military cyber force spends tracing criminal gangs and bitcoins or defending targeted civilian entities, the less time the cyber operators have to train for and support military operations and, hopefully, deliver a strategic surprise to an adversary. Defending point-of-sale terminals from ransomware does not maintain the competence needed to protect weapon systems from hostile cyberattacks.

Even if the Department of Defense diverts thousands of cyber personnel, it cannot uphold a national cyber defense. The U.S. gross domestic product is approaching $25 trillion; that is a target surface requiring more comprehensive solutions.

First and foremost, the shared burden to uphold the national cyber defense falls primarily on private businesses, states and local government, federal law enforcement, and DHS.

Second, even if DHS has many roles as a cyberthreat information clearinghouse and the lead agency at incidents, the department lacks a sizable operative component.

Third, establishing a DHS operative cyber unit comes at a limited net cost, given the higher cost of military assets. When not engaged, the civilian unit can disseminate knowledge and train businesses as well as state and local governments to be part of the national cyber defense.

Establishing a civilian federal asset response is necessary. The civilian response will replace the military cyber asset response, letting the military return to its primary mission: defense. The move will safeguard military cyber capabilities and increase uncertainty for the adversary. Uncertainty translates to deterrence, leading to fewer significant cyber incidents. We can no longer surrender the initiative and be constantly reactive; it is a failed national strategy.

Jan Kallberg

Inflation – the hidden cyber security threat

Image: By Manuel Dohmen – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=185802

In cyberspace, the focus is on threats from malicious activity, a tangible threat. A less obvious threat is inflation, which undermines any cyber organization by eroding budgets and employee compensation. If not addressed, inflation can drive unseen resignation rates and jeopardize ongoing cyber efforts and the U.S. Defense Department’s migration to cloud-based services. The competition for cloud security talent in the private sector is already razor-sharp.

There are different ways to build and maintain a cyber workforce: recruit, retrain, and retain. The competition between the DoD and the private sector for talent will directly affect recruitment and retention. Inflation and the shortage of skilled cyber professionals create increasing competition between the federal government and the private sector for much-needed talent. Retraining professionals to become part of the cyber workforce is costly, and if the incentives to stay in the force are not in place, the gain is short-lived, as retrained cyber talent moves on. Inflation creates a negative outlook for recruiting, retraining, and retaining cyber talent.

Inflation expectations in 2022 are the highest in decades, which will directly impact the cost of attracting and retaining a cyber workforce. Even if the peak inflation is temporary, driven by COVID-19 as well as disruptions in the supply chain and the financial markets, the pressure for increased compensation is a reality today.

What does it mean in practical terms?

According to the Wall Street Journal, salaries for white-collar professionals will increase in 2022 in the range of 10%, while the federal workforce can expect an increase of less than a third of the private sector’s gains. These growing salary gaps are likely far more severe and exacerbated in the cyber workforce.

For example, browsing current job ads, a manager for incident response in Rhode Island is offered $150,000-$175,000 with the ability to work from home and zero commuting. A fair guess is that the corresponding federal GS pay is 20-30% less, with work taking place from 8:30 a.m. to 4:30 p.m. in a federal facility; not to mention cloud security, where large players such as Amazon Web Services actively recruit from the federal sector.
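The rough size of that gap can be put in numbers. This is back-of-the-envelope arithmetic only: the $150,000-$175,000 range and the 20-30% federal discount are the figures cited above; everything else is illustrative.

```python
# Private-sector range and the 20-30% federal discount from the example above.
private_low, private_high = 150_000, 175_000

federal_low = private_low * (1 - 0.30)    # worst case: 30% below the low end
federal_high = private_high * (1 - 0.20)  # best case: 20% below the high end

gap_small = private_high - federal_high   # smallest annual shortfall
gap_large = private_low - federal_low     # largest annual shortfall

print(f"Federal equivalent: ${federal_low:,.0f} to ${federal_high:,.0f}")
print(f"Annual gap: ${gap_small:,.0f} to ${gap_large:,.0f}")
```

A five-figure annual shortfall, repeated each year that federal pay adjustments trail inflation, is the kind of arithmetic that drives the retention decisions discussed here.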

An increasing salary gap directly impacts recruitment, as the flow of qualified applicants dries up due to the private sector’s compensation advantage. Based on earlier data, the difference in salary will trigger decisions to seek early retirement from the DoD, to pursue a second civilian career, or to leave federal service for the private sector as a civilian employee.

The flip side of an all-volunteer force is that, in the same way service members volunteer to serve, individuals have the option at the end of their obligation to seek other opportunities instead of reenlisting. The civilian workforce can leave at will when the incentives line up.

Therefore, if we face several years of high inflation, it should not be a surprise that there is a risk of an increased imbalance in incentives between the public and private sectors that favors the private sector.

The U.S. economy has not seen high inflation since the 1970s and early 1980s. In general, we are all inexperienced in dealing with steadily increasing costs and budgets that adjust only with delay. Inflation punctures the equilibrium for decision-makers and commanders, and it could force hard choices such as downsizing, reorganization, and diluting the mission’s core goal due to an inability to deliver.

Money is easy to blame because it sidesteps more complex questions, such as the soft factors that support cyber talent’s job satisfaction, sense of respect, and recognition. It is unlikely that public service can compete with the private sector on compensation in the coming years.

So to retain talent, it is essential to identify the factors other than compensation that make cyber talent leave, and then mitigate the negative factors that lower the threshold for resignation.

Today’s popular phrase is “emotional intelligence.” It might be a buzzword, but if the DoD can’t compete on compensation, there needs to be another reason for cyber talent to apply and stay. In reality, inflation forces any organization that is not ready to outbid every competitor for talent to take a hard look at its employee relationships and at what motivates its workforce to stay and be part of the mission.

These choices might be difficult because they could force cultural changes in an organization. Whether it is dissatisfaction with bureaucracy, an unnecessarily rigid structure, genuinely low interest in adaptive change, one-sided career paths that fit the employer but not the employee, or whatever else might encourage cyber talent to move on, it needs to be addressed.

In a large organization like the DoD and its supporting defense infrastructure, numerous leaders are already acting on the fact that talent competition is not only about compensation, and they are building a broad, positive trajectory. Inflation intensifies the need to overhaul what attracts and retains cyber talent.

Jan Kallberg, Ph.D.

European Open Data can be Weaponized

In the discussion of great power competition and cyberattacks meant to slow down a U.S. strategic movement of forces to Eastern Europe, the focus has been on the route from fort to port in the U.S. But we tend to forget that once forces arrive at the major Western European ports of disembarkation, the distance from these ports to eastern Poland is the same as from New York to Chicago.

The increasing European release of public data — and the subsequent addition to the pile of open-source intelligence — is becoming concerning in regard to the sheer mass of aggregated information and what information products may surface when combining these sources. The European Union and its member states have comprehensive initiatives to release data and information from all levels of government in pursuit of democratic accountability and transparency. It becomes a wicked problem because these releases are good for democracy but can jeopardize national security.

I firmly believe we underestimate the significance of the available information that a potential adversary can easily acquire. If data is not available freely, it can, with no questions asked, be obtained at a low cost.

Let me present a fictitious case study to visualize the problem with the breadth of the public data released:

In the High North, where the terrain is often either rock or marsh, with few available routes for maneuver units, the data available today provides information about ground conditions, forest type, density, and on-the-ground verified terrain obstacles, all easily accessible geodata and forestry agency data. The granularity of the information is down to a few meters.

The data is innocent in itself, intended to limit environmental damage from heavy forestry equipment and to keep the forestry companies’ armies of tracked harvesters from getting stuck in unfavorable ground conditions. The concern is that the forestry data also provides a verified route map for any advancing armored column seeking, in a fait accompli attack, to avoid contact with the defender’s limited rapid-response units in pursuit of a deep strike.

Suppose the advancing adversary paves the way with special forces. In that case, a local government’s permitting and planning data, as well as open data from transportation authorities, will identify what to blow up, what to defend, and where it is ideal to ambush defending reinforcements or logistics columns. Once the advancing armored column meets up with the special forces, unclassified and openly accessible health department inspections show where frozen food is stored; building permits show which buildings have generators; and environmental protection data points out where civilian fuels are stored, including grade and volume.

Now the advancing column can get ready for the next leg of the deep strike. Open data initiatives, “innocent” data releases, and the broad commercialization of public information have nullified the rapid-response force’s ability to slow down or defend against the fait accompli attack, and these data releases have increased both the velocity of the attack and the adversary’s chance of mission success.

The governmental open-source intelligence problem is wicked. Any solution is problematic. An open democracy is a society that embraces accountability and transparency, and they are the foundations for the legitimacy, trust and consent of the governed. Restricting access to machine-readable and digitalized public information contradicts European Union Directive 2003/98/EC, which covers the reuse of public sector information — a well-established foundational part of European law based on Article 95 in the Maastricht Treaty.

The sheer volume of the released information, in multiple languages and from a variety of sources in separate jurisdictions, increases the difficulty of foreseeing any hostile utilization of the released data, which increases the wickedness of the problem. Those jurisdictions’ politics also come into play, which does not make it easier to trace a viable route to ensure a balance between a security interest and a democratic core value.

The initial action to address this issue, and embedded weakness, needs to involve both NATO and the European Union, as well as their member states, due to the complexity of multinational defense, the national implementation of EU legislation and the ability to adjust EU legislation. NATO and the EU have a common interest in mitigating the risks with massive public data releases to an acceptable level that still meets the EU’s goal of transparency.

Jan Kallberg, Ph.D.

Our Critical Infrastructure – Their Cyber Range

There is a risk that we overanalyze attacks on critical infrastructure and try to find a strategic intent where there is none. Our potential adversaries, in my view, could attack critical American infrastructure for reasons other than executing a national strategy. In many cases, it can be as simple as hostile totalitarian nations that do not respect international humanitarian law using critical American infrastructure as a cyber range. Naturally, the focus of their top-tier operators is on conducting missions within the strategic direction, but lower-echelon operators can use foreign critical infrastructure as a training ground. If the political elite sanctions these actions, nothing stops a rogue nation from attacking our power grid, waterworks, and public utilities to train its future advanced cyber operators. The end game is not the critical infrastructure; the critical infrastructure provides an educational opportunity.

Naturally, we have to defend critical infrastructure, because by doing so, we protect the welfare of the American people and the functions of our society. That said, just because it is vital to us does not automatically mean it is crucial to the adversary.

Cyberattacks on critical infrastructure can have different intents. There is a similarity between cyber and national intelligence; both try to make sense of limited information while looking into a denied information environment. In reality, our knowledge of the strategic intent and goals of our potential adversaries is limited.

We can study the adversary’s doctrine, published statements, tactics, techniques, and events, but significant gaps remain in understanding the intent behind the attacks. We assess the adversary’s strategic intent from the outside, which often amounts to qualified guesses, with all the uncertainty that comes with them. Many times, logic and past behavior are the only guidance for assessing strategic intent. Nation-state actors tend to seek a geopolitical end goal: to change policy, destabilize the target nation, or acquire information they can use for their benefit.

Attacks on critical infrastructure make news headlines, and for a less able potential adversary, they can serve as a way to show an internal audience that it can threaten the United States. In 2013, Iranian hackers broke into the control system of a dam in Rye Brook, N.Y. The actual damage was limited due to circumstances the hackers did not know about: maintenance procedures underway at the facility limited the risk of broader damage.

The limited intrusion into the control system made national news and engaged the State of New York, elected officials, the Department of Justice, the Federal Bureau of Investigation, the Department of Homeland Security, and several more agencies. Time magazine captured it in the headline “Iranian Cyber Attack on New York Dam Shows Future of War.”

When attacks occur on critical domestic infrastructure, it is not a given that there is a strategic intent to damage the U.S.; the attacks can also be a message to the attacker’s own population that their country can strike the Americans in their homeland. For a geopolitically inferior country that seeks to be a threat and a challenger to the U.S., such as Iran or North Korea, the massive American reaction to a limited attack on critical infrastructure serves its purpose. The attacker has shown its domestic audience that it can shake the Americans, especially when U.S. authorities attribute the attack to Iranian hackers, which makes it easier to present as news for the Iranian audience. Cyberattacks become a risk-free way of picking a fight with the Americans without risking escalation.
Numerous cyberattacks on critical American infrastructure could simply be a way to harass American society, with no other justification than that hostile authoritarian senior leaders use them as an outlet for their frustration and anger against the U.S.

Attackers seeking to maximize civilian hardship as a tool to bring down a targeted society have historically faced the reverse reaction. The German bombing of civilian targets during the 1940s air campaign known as “the Blitz” only hardened British resistance against the Nazis. An attacker needs to take into consideration the potential fallout of a significant attack on critical infrastructure. The reactions to Pearl Harbor and 9/11 show that any adversary attacking the American homeland runs a risk: such an attack might unify American society instead of injecting fear and forcing submission to a foreign will.

Critical infrastructure is a significant attack vector to track and defend. Still, the massive and often predictable reactions that cyberattacks on U.S. critical infrastructure provoke are themselves a vulnerability if orchestrated by an adversary following the Soviet/Russian concept of reflexive control.

The War Game Revival

The sudden fall of Kabul, when the Afghan government imploded in a few days, shows how hard it is to predict and assess future developments. War games have had a revival in recent years as a way to better understand potential geopolitical risks. War games are tools that support our thinking and force us to accept that developments we did not anticipate can happen, but games also have a flip side. War games can act as afterburners for our confirmation bias and inward, self-confirming thinking. Would an Afghanistan-focused war game designed two years ago have included a potential outcome in which the government implodes in a few days? Maybe not.

Awareness of how bias plays into the games is key to success. The wargame revival occurs for a good reason. Well-designed war games make us better thinkers; the games can be a cost-effective way to simulate various outcomes, and you can go back and repeat a game with lessons learned.
Wargames are rules-driven; the rules create the mechanical underpinnings that decide outcomes, either success or failure. Rules are condensed assumptions, and there resides a significant vulnerability. Are we designing games that operate within the realm of our own aggregated bias?
We operate in large organizations that have modeled how things should work. The timely execution of missions is predictable according to doctrine. In reality, things don’t play out the way we planned; we know it, but the question is, how do you quantify a variety of outcomes and codify them into rules?

Our war games and the lessons learned from them are never perfect. The games are intellectual exercises to think about how situations could unfold and how to deal with the results. In the interwar years, the U.S. made a rightful decision to focus on Japan as a potential adversary. Significant time and effort went into war planning based on studies and wargames that simulated the potential Pacific fight. The U.S. assumed one major decisive battle between the U.S. Navy and the Imperial Japanese Navy, where lines of battleships fought it out at a distance. In the plans, that was the crescendo of the Pacific war. The plans missed the technical advances and importance of airpower, aircraft carriers, and submarines. Who was setting up the wargames? Who created the rules? A cadre of officers who had served in the surface fleet and knew how large ships fought. There is naturally more to the story of interwar war planning, but as an example, this short comment serves its purpose.

How do we avoid creating war games that only confirm our predispositions and lure us into believing that we are prepared, instead of presenting the war we will have to fight?

How do you incorporate all these uncertainties into a war game? Naturally, it is impossible to do so fully, but keeping the biases at least partially mitigated ensures value.

Studying historical battles can also give insights. In the 1980s, sizeable commercial war games featured massive maps, numerous die-cut unit counters, and hours of playtime. One of these games was SPI’s “Wacht am Rhein,” which covered the Battle of the Bulge from start to end. The game visualized one thing: it doesn’t matter how many units you can throw into battle if they are stuck in a traffic jam. Historical war games can teach us lessons that need to be maintained in our memory to avoid repeating the mistakes of the past.

Bias in wargame design is hard to root out. The viable way forward is to challenge the assumptions and the rules. Outsiders do it better than insiders because they will see the ”officially ignored” flaws. These outsiders must be cognizant enough to understand the game but have minimal ties to the outcome, so they are free to voice their opinions. There are experts out there. Commercial lawyers challenge assumptions and are experts in asking questions; it can be worth a few billable hours to ask them to find the flaws. Colleagues are not suitable to challenge the ”officially ignored” flaws because they are marinated in the ideas that established those flaws. Academics dependent on DOD funding could gravitate toward accepting the ”officially ignored” flaws; that is just fundamental human behavior, and the fewer ties to the initiator of the game, the better.

Another way to address uncertainty and bias is repeated games. In the first game, cyber has the effects we anticipate. In the second game, cyber has limited effect and turns out to be an operational dud. In the third game, cyber effects proliferate and have a more significant impact than we anticipated. I use these quick examples to show that there is value in repeated games. The repeated games become a journey of realization and afterthought due to the variety of factors and outcomes. Afterward, we can use our logic and understanding to arrange the outcomes and better understand reality. Repeated games limit the range and impact of specific biases due to the variety of conditions.
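The value of repeated games with varied assumptions can be sketched in a few lines of code. The model below is purely hypothetical: the cyber-effect multipliers, the friction parameters, and the campaign logic are all invented for illustration, not drawn from any doctrine or dataset. The point is only that running the same toy campaign under three different cyber assumptions produces three different outcome distributions, which is exactly the spread of results a single game would hide.

```python
import random

def run_campaign(cyber_effect, trials=1000, seed=0):
    """Toy campaign model (hypothetical): friendly combat power gets an
    assumed cyber-effect multiplier; both sides suffer random friction."""
    rng = random.Random(seed)  # fixed seed so repeated runs are comparable
    wins = 0
    for _ in range(trials):
        friendly = rng.gauss(1.0, 0.2) * cyber_effect
        enemy = rng.gauss(1.0, 0.2)
        if friendly > enemy:
            wins += 1
    return wins / trials

# Three repeated games, each with a different assumption about cyber.
# The multiplier values are illustrative guesses, not measured effects.
scenarios = {
    "cyber as anticipated": 1.3,
    "cyber an operational dud": 1.0,
    "cyber over-performs": 1.7,
}
for name, effect in scenarios.items():
    print(f"{name:>26}: friendly win rate {run_campaign(effect):.2f}")
```

Comparing the three win rates side by side is the repeated-game payoff: the analyst sees how sensitive the outcome is to an assumption that a single game would have baked in silently.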

The revival of wargaming is needed because wargaming can be a low-cost, high-return intellectual endeavor. Hopefully, we can navigate away from the risks of groupthink and confirmation bias embedded in poor design. The intellectual journey that war games take us on will make our current and future decision-makers better equipped to understand an increasingly complex world.

 

Jan Kallberg, Ph.D.

 

CYBER IN THE LIGHT OF KABUL – UNCERTAINTY, SPEED, ASSUMPTIONS

 

There is a similarity between cyber and the intelligence community (IC): we are both dealing with a denied environment where we have to assess the adversary based on limited verifiable information. The recent events in Afghanistan, with the Afghan government and its military imploding, were unanticipated and ran against the ruling assumptions. The assumptions were off, and the events that unfolded were unprecedented and fast. The Afghan security forces evaporated in ten days facing a far smaller enemy, leading to a humanitarian crisis. There is no blame in any direction; it is evident that this was not the expected trajectory of events. But still, in my view, there is a lesson from the events in Kabul that applies to cyber.

The high degree of uncertainty, the speed in both cases, and our reliance on assumptions not always vetted beyond our inner circles make the analogy work. According to the media, in Afghanistan there was no clear strategy to reach a decisive outcome. You could say the same about cyber. What is a decisive cyber outcome at a strategic level? Are we just staring at tactical noise, from ransomware to unsystematic intrusions, when we should try to figure out the big picture instead?

Cyber is loaded with assumptions that we have, over time, accepted. The assumptions become our path-dependent trajectory, and in the absence of a grand nation-state-on-nation-state cyber conflict, the assumptions remain intact. The only reason cyber’s failed assumptions have not yet surfaced is the absence of full cyber engagement in a conflict.

There is a creeping assumption that senior leaders will lead future cyber engagements; meanwhile, the data shows that the increased velocity of the engagements could nullify the time window for leaders to lead. Why do we want cyber leaders to lead? It is just how we do business; that is why we traditionally have senior leaders. John Boyd’s OODA loop (Observe, Orient, Decide, Act) has had a renaissance in cyber over the last three years. The increased velocity, with the support of more capable hardware, machine learning, artificial intelligence, and massive data utilization, makes it questionable whether there is time for senior leaders to lead traditionally. The risk is that senior leaders are stuck in the first O of the OODA loop, just observing, or at best in the second O, orienting. It might be the case that there is no time to lead because events unfold faster than our leaders can decide and act. The way technology is developing, I have a hard time believing that there will be any significant senior-leader input at critical junctures, because the time window is so narrow.
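The shrinking time window can be made concrete with a back-of-the-envelope calculation. The numbers below are hypothetical placeholders (a ten-minute leader decision cycle, event arrivals from hourly down to machine speed); the sketch only illustrates the structural point that once events arrive faster than the decision cycle, the share of events a leader can actually decide on collapses.

```python
def share_of_events_led(event_interval_s, leader_latency_s):
    """Crude upper bound on the share of engagement events a leader can
    decide on before the next event supersedes the decision (assumes
    evenly spaced arrivals; purely illustrative)."""
    return min(1.0, event_interval_s / leader_latency_s)

# Hypothetical tempos: a ~10-minute staff decision cycle against event
# arrivals ranging from hourly to machine speed.
LEADER_LATENCY_S = 600.0
for interval in (3600.0, 600.0, 60.0, 1.0):
    share = share_of_events_led(interval, LEADER_LATENCY_S)
    print(f"event every {interval:>6.0f}s -> leader decides on {share:6.1%}")
```

At hourly tempo the leader covers every event; at one event per second, the same leader can meaningfully decide on a fraction of a percent of them, which is the arithmetic behind leading by intent rather than by order.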

Leaders will always lead by expressing intent, and that might be the only thing left. Instead of precise orders, do we train leaders and subordinates to be led by intent as a form of decentralized mission command?

Another dominant cyber assumption is critical infrastructure as the likely attack vector. In the last five years, the default assumption in cyber has been that critical infrastructure is a tremendous national cyber risk. That might be correct, but there are numerous others. In 1983, the Congressional Budget Office (CBO) defined critical infrastructure as “highways, public transit systems, wastewater treatment works, water resources, air traffic control, airports, and municipal water supply.” By the Patriot Act of 2001, the scope had grown to include “systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.” By 2013, in Presidential Policy Directive 21 (PPD-21), the scope widened even further and almost encompassed all of society. That concession stands at ballparks today count as critical infrastructure, together with thousands of other non-critical functions, shows a mission drift that undermines a national cyber defense. There is no guidance on what to prioritize and what we might have to live without at a critical juncture. The question is whether critical infrastructure matters to our potential adversaries as an attack vector, or whether it is critical infrastructure because it matters to us. A potential adversary may want to attack infrastructure around American military facilities and slow down the transportation apparatus from bases to the port of embarkation (POE) to delay the arrival of U.S. troops in theater. The adversary might also make a different assessment: tampering with the American homeland only strengthens the American will to fight and popular support for a conflict.
The potential adversary might utilize our critical infrastructure as a capture-the-flag training ground to train their offensive teams, with no strategic intent behind the activity.

As broad as the definition is today, it is likely that the focus on critical infrastructure reflects what concerns us instead of what the adversary considers essential to reach strategic success. So today, having witnessed the unprecedented events in Afghanistan, where it appears that our assumptions were off, it is good to keep in mind that cyber is heavy with untested assumptions. In cyber, what we know about the adversary and their intent is limited. We make assumptions based on potential adversaries’ behavior and doctrine, but they are still assumptions.
So the failure to correctly assess Afghanistan should be a wake-up call for the cyber community, which also relies on unvalidated information.

The long-term cost of cyber overreaction

The default modus operandi when facing negative cyber events is to react, often leading to an overreaction. It is essential to highlight the cost of overreaction, which needs to be part of the calculus of when and how to engage. For an adversary probing cyber defenses, reactions provide information that can aggregate into a clear picture of the defender’s capabilities and preauthorization thresholds.

Ideally, potential adversaries cannot assess our strategic and tactical cyber capacities, but over time and numerous responses, the information advantage evaporates. A reactive culture triggered by cyberattacks provides significant information to a probing adversary, which seeks to understand underlying authorities and tactics, techniques and procedures (TTP).

The more we act, the more the potential adversary understands our capacity, ability, techniques, and limitations. I am not advocating a passive stance, but I want to highlight the price of acting against a potential adversary. With each reaction, that competitor gains certainty about what we can do and how. The political scientist Kenneth N. Waltz said that the power of nuclear arms resides in what you could do, not in what you do. A large part of cyber force strength resides in uncertainty about what it can do, which should be difficult for a potential adversary to assess and gauge.

Why does it matter? In an operational environment where adversaries operate under the threshold for open conflict, in sub-threshold cyber campaigns, an adversary will seek to probe in order to determine the threshold and to ensure that it can operate effectively in the space below it. If a potential adversary cannot gauge the threshold, it will curb its activities, as its cyber operations must remain adequately distanced from a potential, unknown threshold to avoid unwanted escalation.
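How quickly predictable reactions give the threshold away can be shown with a toy probing model. Everything here is hypothetical: the threshold value, the probe count, and the idea of a single scalar “intensity” are invented for illustration. If the defender reacts deterministically whenever a probe crosses its preauthorization threshold, each probe leaks one bit, and simple bisection pins the threshold down fast.

```python
def probe_threshold(defender_threshold, low=0.0, high=1.0, probes=12):
    """Adversary bisects on probe intensity; every observed defender
    reaction (or non-reaction) leaks one bit about the threshold.
    Returns the band the adversary has narrowed the threshold down to."""
    for _ in range(probes):
        intensity = (low + high) / 2.0
        if intensity >= defender_threshold:  # defender reacts -> leak
            high = intensity                 # threshold at or below probe
        else:                                # no reaction leaks too
            low = intensity                  # threshold above probe
    return low, high

# Hypothetical defender threshold of 0.62 on a 0..1 intensity scale.
low, high = probe_threshold(defender_threshold=0.62)
print(f"after 12 probes the threshold sits in [{low:.4f}, {high:.4f}]")
```

Twelve probes shrink the uncertainty band by a factor of 4096, which is the arithmetic behind the argument: an adversary facing unpredictable or withheld responses cannot run this search, and must keep a wide safety margin below the unknown threshold.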

Cyber was doomed to be reactionary from its inception; its inherited legacy from information assurance creates a focus on trying to defend, harden, detect, and act. The concept is defense, and when the defense fails, it rapidly swings to reaction and counteractivity. Naturally, we want to limit the damage and secure our systems, but we also leave a digital trail behind every time we act.

In game theory, proportional responses lead to tit-for-tat games with no decisive outcome. The lack of a desired end state in a tit-for-tat game is essential to keep in mind as we discuss persistent engagement. In the same way as Colin Powell reflected on the conflict in Vietnam, operations without an endgame or a concept of what decisive victory looks like are engagements for the sake of engagements. Even worse, a tit-for-tat game with continuous engagements might be damaging, as it trains potential adversaries, who can copy our TTPs, to fight in cyber. Proportionality is a constant flow of responses that reveals friendly capabilities and makes potential adversaries more able.
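The leakage cost of a proportional-response policy can be sketched as a toy iterated game. The per-response leakage rate and round count below are arbitrary illustrative numbers, not estimates. The sketch shows the structural problem: under tit-for-tat against an adversary who keeps attacking, every round produces a response, each response reveals TTPs, and the exchange never converges toward a decisive outcome.

```python
def titfortat_exchange(rounds=20, leak_per_response=0.02):
    """Toy iterated game: mirror every adversary attack (tit-for-tat).
    Each response leaks a fixed amount of TTP knowledge (hypothetical
    rate); neither side ever moves closer to a decisive outcome."""
    adversary_knowledge = 0.0
    responses = 0
    enemy_attacks = True             # adversary opens with an attack
    for _ in range(rounds):
        if enemy_attacks:            # proportional response: mirror it
            responses += 1
            adversary_knowledge += leak_per_response
        enemy_attacks = True         # adversary simply keeps attacking
    return responses, adversary_knowledge

responses, leaked = titfortat_exchange()
print(f"{responses} proportional responses, {leaked:.2f} TTP knowledge leaked")
```

The design point is that the leakage term is monotone in the number of responses: a policy that always responds pays an information price every round, while the payoff term never changes, which is the tit-for-tat stalemate in miniature.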

There is no straight answer to how to react. A disproportional response to specific events increases the risk calculation for the potential adversary, but it cuts both ways, as the disproportional response could create unwanted escalation.

The critical concern is that, to maintain the ability to conduct decisive cyber operations for the nation, the extent of friendly cyber capabilities needs almost intact secrecy to prevail at a critical juncture. It might be time to put a stronger emphasis on intelligence gain-loss (IGL) assessments to answer the question of whether the defensive gain now outweighs the potential loss of ability and options in the future.

The habit of overreacting to ongoing cyberattacks undermines the ability to quickly and surprisingly engage and defeat an adversary when it matters most. Continuously reacting and flexing the capabilities might fit the general audience’s perception of national ability, but it can also undermine the outlook for a favorable geopolitical cyber endgame.

Prioritize NATO integration for multidomain operations

Once U.S. forces implement the multidomain operations (MDO) concept, they will have entered a new level of complexity, with rapid multidomain execution and increased technical abilities and capacities. The U.S. modernization efforts enhance the country’s forces, but they also increase the technological disparity and the challenges for NATO. A future fight in Europe is likely to be a rapidly unfolding event, which could occur as a fait accompli attack on NATO’s eastern front: a rapid advance by the adversary to gain as much terrain and bargaining power as possible before the arrival of major U.S. formations from the continental U.S.

According to the U.S. Army Training and Doctrine Command (TRADOC) Pamphlet 525-3-1, “The U.S. Army in Multi-Domain Operations 2028,” a “fait accompli attack is intended to achieve military and political objectives rapidly and then to quickly consolidate those gains so that any attempt to reverse the action by the [United States] would entail unacceptable cost and risk.”

In a fait accompli scenario, limited U.S. forces are in theater, and the initial fight relies on the abilities of the East European NATO forces. The mix is a high-low composition of highly capable but small rapid-response units from major NATO countries and regional friendly forces with less ability.

The wartime mobilization units and reserves of the East European NATO forces follow, to a high degree, a 1990s standard, with partial upgrades in communications and technical systems. They represent a technical generation behind today’s U.S. forces. Even if these dedicated NATO allies are launching modernization initiatives and replacing old legacy hardware (T-72, BTR, BMP, post-Cold War-donated NATO surplus) with modern equipment, it is a replacement cycle that will require up to two decades to complete. Smaller East European NATO nations tend to execute modernization programs faster, due to their limited number of units, but they still face the issue of integrating a variety of inherited hardware, donated Cold War surplus, and recently purchased equipment.

The challenge is NATO MDO integration and creating an able, coherent fighting force. In MDO, the central idea is to break loose and move the fight deep into enemy territory in order to dis-integrate the enemy’s system. TRADOC Pamphlet 525-3-1 presents the definition of dis-integration as: “Dis-integrate refers to breaking the coherence of the enemy’s system by destroying or disrupting its subcomponents (such as command and control means, intelligence collection, critical nodes, etc.) degrading its ability to conduct operations while leading to a rapid collapse of the enemy’s capabilities or will to fight. This definition revises the current doctrinal defeat mechanism disintegrate.” The utility of MDO in a NATO framework requires a broad implementation of the concept within the NATO forces, not only the U.S. forces.

The concept of dis-integration has a counterpart in Russian military thought and doctrine, defined as disorganization. The Russian concept seeks to deny command and control structures the ability to communicate and lead, by jamming, cyber, or physical destruction. Historically, Russian doctrine has focused on denying the defending force the ability to coordinate, seeking to encircle it, and maintaining a rapid advance deep into the territory until the defense collapses. From a Russian perspective, the key to the success of a fait accompli attack is the ability to deny NATO-U.S. joint operations and exploit NATO’s inability to create a coherent multinational and technologically diverse fighting posture. The concept of disorganization has emerged strongly in the last five years in how the Russians see the future fight. It would not be too farfetched to assume that the Russian leadership sees an opportunity in exploiting NATO’s inability to coordinate and integrate all elements in the fight.

The lingering concern is how a technologically more advanced and doctrinally complex U.S. force can get the leverage embedded in these advances if the initial fight occurs in an operational environment where the rapidly mobilized East European NATO forces are two technological generations behind, especially when the Russian disorganization concept appears to aim to deny that leverage and exploit a fragmented NATO force.

NATO has been extremely successful in safeguarding the peace since its creation in 1949. NATO integration was easier in the 1970s, with large NATO formations in West Germany and fewer countries involved. Multinational NATO forces exercised continuously, with active interaction among leaders, units, and planners. Even then, the Soviet/Russian concept was to break up and overrun the defenses and strike deep into the territory.

The increased technical disparity within NATO’s multinational forces, the potential doctrinal misalignment in the larger Allied force, and the strengthened Russian interest in exploiting these conditions should together drive a stronger focus on NATO integration.

The future fight will not occur at a national training center. If it happens in Eastern Europe, it will be a fight fought together with European allies from numerous countries, in terrain they know better. As we enter a new era of great power competition, the U.S. brings ability, capacity, and technology that will ensure NATO mission success if well integrated in the multinational fighting force.

Jan Kallberg, Ph.D.

Solorigate attack — the challenge to cyber deterrence

The exploitation of SolarWinds’ network tool at a grand scale, based on publicly disseminated information from Congress and media, represents not only a threat to national security — but also puts the concept of cyber deterrence in question. My concern: Is there a disconnect between the operational environment and the academic research that we generally assume supports the national security enterprise?

Apparently, whoever launched the Solorigate attack was undeterred, based on the publicly disclosed size and scope of the breach. If cyber deterrence fails to be a functional component that changes potential adversaries’ behavior, why is cyber deterrence given so much attention?

Maybe it is because we want it to exist. We want there to be a silver bullet out there that will prevent future cyberattacks, and if we want it to exist, then any support for the existence of cyber deterrence feeds our confirmation bias.

Herman Kahn and Irwin Mann’s RAND memo “Ten Common Pitfalls” from 1957 points out the intellectual traps of conducting military analysis in an uncertain world. That we listen to what supports our general beliefs is natural; it is in the human psyche to do so, but it can mislead.

Here is my main argument: there is a misalignment between civilian academic research and the cyber operational environment. There are at least a few hundred academic papers published on cyber deterrence, from different intellectual angles and in a variety of venues, seeking to investigate, explain, and create an intellectual model of how cyber deterrence is achieved.

Many of these papers transpose traditional models from political science, security studies, behavioral science, criminology, and other disciplines, and arrange these established models to fit a cyber narrative. The models were never designed for cyber; they were designed to address other deviant behavior. I do not rule out their relevance in some form, but I also do not assume that they are relevant.

I would like to categorize the root causes of this misalignment in three different, hopefully plausible explanations. First, few of our university researchers have military experience, and with an increasingly narrow group that volunteers to serve, the problem escalates. This divide between civilian academia and the military is a national vulnerability.

Decades ago, the Office of Net Assessment assessed that the U.S. had an advantage over the Soviets due to the skills of the U.S. force. Today, in 2021, it might be reversed for cyber research, when academic researchers in potentially adversarial countries have a higher understanding of military operations than their U.S. counterparts.

Second, the way we fund civilian research creates a market-driven pursuit to satisfy the interests of the funding agency. By funding models of cyber deterrence, there is already an assumption that it exists, so any research that challenges that assumption will never be initiated. Should we stop funding this research? Of course not, but the scope of the inquiry needs to be wide enough to challenge our own presumptions and the potential biases at play. Right now, it pays too well to tell us what we want to hear, compared to presenting a radical rebuttal of our beliefs and perceptions of cyber.

Third, the defense enterprise is secretive about the inner workings of cyber operations and the operational environment (for a good reason!). However, what if it is too secretive, leaving civilian researchers to rely on commercial white papers, media, and commentators to shape the perception of the operational environment?

One of the reasons funded university research exists is to be a safeguard that helps avoid strategic surprise. However, it becomes a grave concern when the civilian research community misses the target on such a broad scale as it did in this case. This case also demonstrates that there is risk in assuming that civilian research will accurately understand the operational environment, which rather amplifies the potential for strategic surprise.

There are university research groups that are highly knowledgeable of the realities of military cyber operations, so one way to address this misalignment is to concentrate the effort. Alternatively, the defense establishment must increase the outreach and interaction with a larger group of research universities to mitigate the civilian-military research divide. Every breach, small and large, is data that supports understanding of what happened, so in my view, this is one of the lessons to be learned from Solorigate.

Jan Kallberg, Ph.D.

After twenty years of cyber – still uncharted territory ahead

The general notion is that much of the core understanding in cyber is in place. I would like to challenge that perception. There are still vast territories of the cyber domain that need to be researched, structured, and understood. To use Winston Churchill’s words: it is not the beginning of the end; it is maybe the end of the beginning. It is obvious to me, in my personal opinion, that the cyber journey is still very young; the cyber field has yet to mature, and big building blocks for the future cyber environment are not in place. The internet, and the networks that support it, have grown dramatically over the last decade. Even if the growth of cyber might be stunning, the actual advances are not as impressive.

In the last 20 years, cyber defense, and cyber as a research discipline, have grown from almost nothing to major national concerns and the recipient of major resources. In the winter of 1996-1997, there were four references to cyber defense in the search engine of that day: AltaVista. Today, there are about 2 million references in Google. Knowledge of cyber has not developed at the same rapid rate as the interest, concern, and resources.

The cyber realm is still struggling with basic challenges such as attribution. Traditional topics in political science and international relations — such as deterrence, sovereignty, borders, the threshold for war, and norms in cyberspace — are still under development and discussion. From a military standpoint, there is still a debate about what cyber deterrence would look like, what the actual terrain and maneuverability are like in cyberspace, and who is a cyber combatant.

The traditional combatant problem becomes even more complicated because the clear majority of the networks and infrastructure that could be engaged in potential cyber conflicts are civilian — and the people who run these networks are civilians. Add to that mix the future reality with cyber: fighting a conflict at machine speed and with limited human interaction.

Cyber raises numerous questions, especially for national and defense leadership, due to its nature. There are benefits with cyber: it can be used as a softer policy option with a global reach that does not require pre-positioning or weeks of getting assets in the right place for action. The problem occurs when you reverse the global reach and an asymmetric fight emerges, when the global adversaries of the United States can strike, utilizing cyber arms and attacks, deep into the most granular particle of our society: the individual citizen. Another question raising concern is the matter of time. Cyber attacks and conflicts can be executed at machine speed, which is beyond the human ability to lead and comprehend what is actually happening.

This visualizes that cyber as a field of study is in its early stages, even if we have astronomic growth in networked equipment, nodes, and the sheer volume of transferred information. We have massive activity on the internet and in networks, but we are not fully able to utilize it or even structurally understand what is happening at a system level and in a grander societal setting. I believe that it could take until the mid-2030s before many of the basic elements of cyber have become accepted, structured, and understood, and until we have a global framework. Therefore, it is important to invest in cyber research and make discoveries now rather than face strategic surprise. Knowledge is weaponized in cyber.

Jan Kallberg, PhD