Business leaders need to own cyber security

Consultants and IT staff often have more degrees of freedom than they should. Corporate cybersecurity requires a business leader to make the decisions, be personally invested, and lead the security work the same way he or she leads the business. The intent and guidance of the business leaders need to be visible. In reality, this is usually not the case. Business leaders rely on IT staff and security consultants to “protect us from cyberattacks.” The risk is obvious: IT staff and consultants are not running the business, lack a complete understanding of its strategy and direction, and are therefore unable to prioritize the protection of the information assets.

Information security has a few foundational pieces. Information resources are classified according to their importance to the business, an acceptable level of risk is established for the company, and security solutions are developed to mitigate risk down to that acceptable level. In parallel, these mitigation strategies are implemented with minimal disruption to the workflow and the business. As part of the process, the information security program ensures that information and functionality can be restored after an incident.

These basic steps may sound like an elementary exercise, something that consultants can solve quickly, but the central question is risk appetite: the willingness to accept an understood risk, which can jeopardize the entire business if set too high or too low. What does the wrong level of risk appetite look like? Either the business’ IT operations are prepared to take risks that business management did not even dare to dream of or, conversely, risk aversion makes the IT systems slow down the business, stand in the way, and block prioritization. Risk, which is central to information security, can only be controlled by the business leader. IT staff and consultants can be advisors, produce information, and sketch solutions, but the decision is a business decision. What risk we are prepared to take cannot be an open issue left to arbitrary interpretation.

Just as management influences and controls what is an acceptable risk when information security is structured, management is central when things go wrong. A business management team that is not involved in information security, and has not gained a conceptual understanding of it, will be too slow to act in a crisis. Cyberattacks and data failures occur daily. The financial market, customers, government authorities, and owners rightly expect these damages to be dealt with quickly and efficiently. Confusion when a major cyber crisis occurs, by attack or mistake, undermines confidence in the business at a very high rate. In a matter of hours, trust that has taken decades to build can be wiped out. In the digital economy, trust equals revenue and long-term customer relationships. Business management that lacks an understanding of how cyber security is structured for its business, at a managerial level, has not made the intellectual journey of prioritizing and will not lead or have relevant influence in a crisis.

Managers receive premium pay and are recruited because they have the experience, insight, and character to navigate when a challenging crisis hits. If business management cannot lead when the business is under major cyberattack, then management has left it to the IT staff and consultants to lead the business.

In small and medium-sized businesses, the need for committed business management is reinforced because the threat of long-term damage from a cyberattack is greater. A public company can absorb the damage; smaller players, often in niche industries, cannot do so in the same way.

If business management can engage in sustainability and the climate threat, as many do with both energy and interest, engaging in vulnerability and the cyber threat should not be a far step. The survival of the business will always be a business decision.

Jan Kallberg, Ph.D.

Demilitarize civilian cyber defense

A cyber crimes specialist with the U.S. Department of Homeland Security looks at the arm of a confiscated hard drive that he took apart. Once the hard drive is fixed, he will put it back together to extract evidence from it. (Josh Denmark/U.S. Defense Department)
U.S. Defense Department cyber units are incrementally becoming a part of the response to ransomware and system intrusions orchestrated from foreign soil. But diverting the military capabilities to augment national civilian cyber defense gaps is an unsustainable and strategically counterproductive policy.

The U.S. concept of cyber deterrence has failed repeatedly, which is especially visible in the blatant and aggressive SolarWinds hack, in which the Russian intelligence services, as commonly attributed in the public discourse, established a presence in our digital bloodstream. According to the Cyberspace Solarium Commission, cyber deterrence is established by imposing high costs on those who exploit our systems. As seen from the Kremlin, the cost must be nothing, because there is blatantly no deterrence; otherwise, the Russian intelligence services would have refrained from hacking into the Department of Homeland Security.

After the robust mitigation effort in response to the SolarWinds hack, waves of ransomware attacks have continued. In recent years, especially after the Colonial Pipeline and JBS ransomware attacks, there has been increasing political and public demand for a federal response. The demand is rational; the public and businesses pay taxes and expect protection against foreign attacks, but using military assets is not optimal.

Presidential Policy Directive 41, titled “United States Cyber Incident Coordination,” from 2016 establishes the DHS-led federal response to a significant cyber incident. There are three thrusts: asset response, threat response and intelligence support. Assets are operative cyber units assisting impacted entities to recover; threat response seeks to hold the perpetrators accountable; and intelligence support provides cyberthreat awareness.

The operative response — the assets — is dependent on defense resources. The majority of the operative cyber units reside within the Department of Defense, including the National Security Agency, as the cyber units of the FBI and the Secret Service are limited.

In reality, our national civilian cyber defense relies heavily on defense assets. So what started with someone in an office deciding to click on an email with ransomware, locking up the computer assets of the individual’s employer, has suddenly escalated to a national defense mission.

The core of cyber operations is a set of tactics, techniques and procedures, which creates capabilities to achieve objectives in or through cyberspace. Successful offensive cyberspace operations are dependent on surprise — the exploitation of a vulnerability that was unknown or unanticipated — leading to the desired objective.

The political scientist Kenneth N. Waltz stated that nuclear arms’ geopolitical power resides not in what you do but in what you can do with these arms. Few nuclear deterrence analogies work in cyber, but Waltz’s does: As long as a potential adversary cannot assess what the cyber forces can achieve in offensive cyber, uncertainty will restrain the potential adversary. Over time, the adversary’s restrained posture consolidates into an equilibrium: cyber deterrence contingent on secrecy. Cyber deterrence evaporates when a potential adversary understands, through reverse engineering or observation, our tactics, techniques and procedures.

By constantly flexing the military’s cyber muscles to defend the homeland from inbound criminal cyber activity, the public demand for a broad federal response to illegal cyber activity is satisfied. Still, over time, bit by bit, the potential adversary will understand our military’s offensive cyber operations’ tactics, techniques and procedures. Even worse, the adversary will understand what we cannot do and then seek to operate in the cyber vacuum where we have no reach. Our blind spots become apparent.

Offensive cyber capabilities are supported by the operators’ ability to retain and acquire ever-evolving skills. The more time the military cyber force spends tracing criminal gangs and bitcoins or defending targeted civilian entities, the less time the cyber operators have to train for and support military operations to, hopefully, be able to deliver a strategic surprise to an adversary. Defending point-of-sales terminals from ransomware does not upkeep the competence to protect weapon systems from hostile cyberattacks.

Even if the Department of Defense diverts thousands of cyber personnel, it cannot uphold a national cyber defense. U.S. gross domestic product is reaching $25 trillion; it is a target surface that requires more comprehensive solutions.

First and foremost, the shared burden to uphold the national cyber defense falls primarily on private businesses, states and local government, federal law enforcement, and DHS.

Second, even if DHS has many roles as a cyberthreat information clearinghouse and the lead agency at incidents, the department lacks a sizable operative component.

Third, establishing a DHS operative cyber unit comes at a limited net cost because military assets are more expensive. When not engaged, the civilian unit can disseminate information and train businesses as well as state and local governments to be a part of the national cyber defense.

Establishing a civilian federal asset response is necessary. The civilian response will replace the military cyber asset response, which returns to the military’s primary mission: defense. The move will safeguard military cyber capabilities and increase uncertainty for the adversary. Uncertainty translates to deterrence, leading to fewer significant cyber incidents. We can no longer surrender the initiative and be constantly reactive; it is a failed national strategy.

Jan Kallberg

Inflation – the hidden cyber security threat

 


Image: By Manuel Dohmen – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=185802

In cyberspace, the focus is on threats from malicious activity, a tangible threat. A less obvious threat to cyber is inflation, which undermines any cyber organization by eroding budgets and employee compensation. If not addressed, inflation can drive unseen resignation rates and jeopardize ongoing cyber efforts and the U.S. Defense Department’s migration to cloud-based services. The competition for cloud security talent in the private sector is already razor-sharp.

There are different ways to build and maintain a cyber workforce: recruit, retrain and retain. The competition between the DoD and the private sector for talent will directly affect recruitment and retention. Inflation and the shortage of skilled cyber professionals create increasing competition between the federal and private sectors for much-needed talent. Retraining professionals to become part of the cyber workforce is costly, and if the incentives to stay in the force are not in place, the effect is short-lived as retrained cyber talent moves on. Inflation creates a negative outlook for recruiting, retraining, and retaining cyber talent.

The inflation expectations in 2022 are the highest in decades, which will directly impact the cost to attract and retain a cyber workforce. Even if the peak inflation is temporary, driven by COVID-19 as well as disruptions in the supply chain and the financial markets, the pressure for increased compensation is a reality today.

What does it mean in practical terms?

According to the Wall Street Journal, salaries for white-collar professionals will increase in 2022 in the range of 10%, while the federal workforce can expect an increase of less than a third of the gains in the private sector. This growing salary gap is likely far more severe in the cyber workforce.

For example, browsing current job ads, a posting for an incident response manager in Rhode Island offers $150,000-$175,000 with the ability to work from home and zero commuting. A fair guess is that the corresponding federal GS pay is 20-30% less, with work taking place from 8:30 a.m. to 4:30 p.m. in a federal facility; not to mention cloud security, where large players such as Amazon Web Services are actively recruiting from the federal sector.

An increasing salary gap directly impacts recruitment, where the flow of qualified applicants dries up due to the compensation advantage of the private sector. Based on earlier data, the difference in salary will trigger decisions to seek early retirement from the DoD, to pursue a second civilian career or to leave federal service for the private sector as a civilian employee.

The flipside of an all-volunteer force is that in the same way service members volunteer to serve, individuals have the option at the end of their obligation to seek other opportunities instead of reenlistment. The civilian workforce can leave at will when the incentives line up.

Therefore, if we face several years of high inflation, it should not be a surprise that there is a risk for an increased imbalance in incentives between the public and the private sectors that favor the private sector.

The U.S. economy has not seen high inflation since the 1970s and early 1980s. In general, we are all inexperienced in dealing with steadily increasing costs and delayed budget adjustments. Inflation creates a punctuated equilibrium for decision-makers and commanders that could force hard choices, such as downsizing, reorganization, and diluting the mission’s core goal due to an inability to deliver.

Money is easy to blame because it bypasses other, more complex questions, such as the soft choices that support cyber talent’s job satisfaction, sense of respect, and recognition. It is unlikely that public service can compete with the private sector on compensation in the coming years.

So to retain, it is essential to identify factors other than compensation that make cyber talent leave, and then mitigate the negative factors that lower the threshold for resignation.

Today’s popular phrase is “emotional intelligence.” It might be a buzzword, but if the DoD can’t compete with compensation, there needs to be a reason for cyber talent to apply and stay. In reality, inflation forces any organization that is not ready to outbid every competitor for talent to take a hard look at its employee relationships and what motivates its workforce to stay and be a part of the mission.

These choices might be difficult because they could force cultural changes in an organization. Whether it is dissatisfaction with bureaucracy, an unnecessarily rigid structure, genuinely low interest in adaptive change, one-sided career paths that fit the employer but not the employee, or whatever else might encourage cyber talent to move on, it needs to be addressed.

In a large organization like the DoD and the supporting defense infrastructure, numerous leaders are already addressing the fact that talent competition is not only about compensation but also about building a broad, positive trajectory. Inflation intensifies the need to overhaul what attracts and retains cyber talent.

Jan Kallberg, Ph.D.

European Open Data Can Be Weaponized

In the discussion of great power competition and cyberattacks meant to slow down a U.S. strategic movement of forces to Eastern Europe, the focus has been on the route from the fort to port in the U.S. But we tend to forget that once forces arrive at the major Western European ports of disembarkation, the distance from these ports to eastern Poland is the same as from New York to Chicago.

The increasing European release of public data — and the subsequent addition to the pile of open-source intelligence — is becoming concerning in regard to the sheer mass of aggregated information and what information products may surface when combining these sources. The European Union and its member states have comprehensive initiatives to release data and information from all levels of government in pursuit of democratic accountability and transparency. It becomes a wicked problem because these releases are good for democracy but can jeopardize national security.

I firmly believe we underestimate the significance of the available information that a potential adversary can easily acquire. If data is not available freely, it can, with no questions asked, be obtained at a low cost.

Let me present a fictitious case study to visualize the problem with the width of public data released:

In the High North, where the terrain often is either rocks or marshes, with few available routes for maneuver units, available data today will provide information about ground conditions; type of forest; density; and on-the-ground, verified terrain obstacles — all easily accessible geodata and forestry agency data. The granularity of the information is down to a few meters.

The data is innocent by itself, intended to limit environmental damage from heavy forestry equipment and keep the forestry companies’ armies of tracked harvesters from getting stuck in unfavorable ground conditions. The concern is that the forestry data also provides a verified route map for any armored column advancing in a fait accompli attack, seeking to avoid contact with the defender’s limited rapid-response units in pursuit of a deep strike.

Suppose the advancing adversary paves the way with special forces. In that case, a local government’s permitting and planning data as well as open data for transportation authorities will identify what to blow up, what to defend, and where it is ideal for ambushing any defending reinforcements or logistics columns. Once the advancing armored column meets up with the special forces, unclassified and openly accessible health department inspections show where frozen food is stored; building permits show which buildings have generators; and environmental protection data points out where civilian fuels, grade and volume are stored.

Now the advancing column can get ready for the next leg in the deep strike. Open data initiatives, “innocent” data releases and broad commercialization of public information have nullified the rapid-response force’s ability to slow down or defend against the fait accompli attack, and these data releases have increased the velocity of the attack as well as the chance for the adversary’s mission success.

The governmental open-source intelligence problem is wicked. Any solution is problematic. An open democracy is a society that embraces accountability and transparency, and they are the foundations for the legitimacy, trust and consent of the governed. Restricting access to machine-readable and digitalized public information contradicts European Union Directive 2003/98/EC, which covers the reuse of public sector information — a well-established foundational part of European law based on Article 95 in the Maastricht Treaty.

The sheer volume of the released information, in multiple languages and from a variety of sources in separate jurisdictions, increases the difficulty of foreseeing any hostile utilization of the released data, which increases the wickedness of the problem. Those jurisdictions’ politics also come into play, which does not make it easier to trace a viable route to ensure a balance between a security interest and a democratic core value.

The initial action to address this issue, and embedded weakness, needs to involve both NATO and the European Union, as well as their member states, due to the complexity of multinational defense, the national implementation of EU legislation and the ability to adjust EU legislation. NATO and the EU have a common interest in mitigating the risks with massive public data releases to an acceptable level that still meets the EU’s goal of transparency.

Jan Kallberg, Ph.D.

Our Critical Infrastructure – Their Cyber Range

There is a risk that we overanalyze attacks on critical infrastructure and try to find a strategic intent where there is none. Our potential adversaries, in my view, could attack critical American infrastructure for reasons other than executing a national strategy. In many cases, it can be as simple as hostile totalitarian nations that do not respect international humanitarian law using critical American infrastructure as a cyber range. Naturally, the focus of their top-tier operators is on conducting missions within the strategic direction, but lower-echelon operators can use foreign critical infrastructure as a training ground. If the political elite sanctions these actions, nothing stops a rogue nation from attacking our power grid, waterworks, and public utilities to train its future advanced cyber operators. The end game is not critical infrastructure, but critical infrastructure provides an educational opportunity.

Naturally, we have to defend critical infrastructure because by doing so, we protect the welfare of the American people and the functions of our society. That said, just because it is vital to us doesn’t automatically mean it is crucial to the adversary.

Cyberattacks on critical infrastructure can have different intents. There is a similarity between cyber and national intelligence; both try to make sense of limited information in a denied information environment. In reality, our knowledge of the strategic intent and goals of our potential adversaries is limited.

We can study the adversary’s doctrine, published statements, tactics, techniques, and events, but significant gaps remain in our understanding of the intent of the attacks. We assess the adversary’s strategic intent from the outside, often through qualified guesses, with all the uncertainty that comes with them. Many times, logic and past behavior are the only guidance for assessing strategic intent. Nation-state actors tend to seek a geopolitical end goal: to change policy, destabilize the target nation, or acquire information they can use for their benefit.

Attacks on critical infrastructure make the news headlines, and for a less able potential adversary, they can serve as a way to show an internal audience that it can threaten the United States. In 2013, Iranian hackers broke into the control system of a dam in Rye Brook, N.Y. The actual damage was limited due to circumstances the hackers did not know about: maintenance procedures were underway at the facility, which limited the risk of broader damage.

The limited intrusion into the control system made national news and engaged the State of New York, elected officials, the Department of Justice, the Federal Bureau of Investigation, the Department of Homeland Security, and several more agencies. Time magazine ran the headline “Iranian Cyber Attack on New York Dam Shows Future of War.”

When attacks occur on critical domestic infrastructure, it is not a given that there is a strategic intent to damage the U.S.; the attacks can also be a message to the attacker’s population that their country can strike the Americans in their homeland. For a geopolitically inferior country that seeks to be a threat and a challenger to the U.S., such as Iran or North Korea, the massive American reaction to a limited attack on critical infrastructure serves its purpose. The attacker shows its domestic audience that it can shake the Americans, especially when U.S. authorities attribute the attack to Iranian hackers, which makes it easier to present as news for the Iranian audience. Cyberattacks become a way of picking a fight with the Americans without risking escalation.

Numerous cyberattacks on critical American infrastructure could simply be a way to harass American society, with no other justification than that hostile authoritarian senior leaders use them as an outlet for their frustration and anger against the U.S.

Attackers seeking to maximize civilian hardship as a tool to bring down a targeted society have historically faced the reverse reaction. The German bombing of civilian targets during the 1940s air campaign known as “the Blitz” only hardened British resistance against the Nazis. An attacker needs to take into consideration the potential fallout of a significant attack on critical infrastructure. The reactions to Pearl Harbor and 9/11 show that there is a risk for any adversary attacking the American homeland: such an attack might unify American society instead of injecting fear and forcing submission to a foreign will.

Critical infrastructure is a significant attack vector to track and defend. Still, cyberattacks on U.S. critical infrastructure create massive and often predictable reactions, which are themselves a vulnerability if orchestrated by an adversary following the Soviet/Russian concept of reflexive control.

The War Game Revival

 

The sudden fall of Kabul, when the Afghan government imploded in a few days, shows how hard it is to predict and assess future developments. War games have had a revival in recent years as tools to better understand potential geopolitical risks. War games support our thinking and force us to accept that developments we did not anticipate can happen, but games also have a flip side: they can act as afterburners for our confirmation bias and inward, self-confirming thinking. Would an Afghanistan-focused war game designed two years ago have had a potential outcome of a governmental implosion in a few days? Maybe not.

Awareness of how bias plays into the games is key to success. The war game revival occurs for a good reason: well-designed war games make us better thinkers, the games can be a cost-effective way to simulate various outcomes, and you can go back and repeat a game with lessons learned.

War games are rules-driven; the rules create the mechanical underpinnings that decide outcomes, success or failure. Rules are condensed assumptions, and there resides a significant vulnerability. Are we designing games that operate within the realm of our own aggregated bias?

We operate in large organizations that have modeled how things should work. The timely execution of missions is predictable according to doctrine. In reality, things don’t play out the way we planned; we know it, but the question is, how do you quantify a variety of outcomes and codify them into rules?

Our war games and the lessons learned from them are never perfect. The games are intellectual exercises to think about how situations could unfold and how to deal with the results. In the interwar years, the U.S. made a rightful decision to focus on Japan as a potential adversary. Significant time and effort went into war planning based on studies and war games that simulated the potential Pacific fight. The U.S. assumed one major decisive battle between the U.S. Navy and the Imperial Japanese Navy, where lines of battleships fought it out at a distance. In the plans, that was the crescendo of the Pacific war. The plans missed the technical advances and the importance of airpower, aircraft carriers, and submarines. Who was setting up the war games? Who created the rules? A cadre of officers who had served in the surface fleet and knew how large ships fought. There is naturally more to the story of interwar war planning, but as an example, this short account serves its purpose.

How do we avoid creating war games that only confirm our predispositions and lure us into believing that we are prepared, instead of presenting the war we have to fight?

How do you incorporate all these uncertainties into a war game? Doing so completely is impossible, but keeping the biases at least partly mitigated ensures the game’s value.

Studying historical battles can also give insights. In the 1980s, sizeable commercial war games featured massive maps, numerous die-cut unit counters, and hours of playtime. One of these games was SPI’s “Wacht am Rhein,” which covered the Battle of the Bulge from start to end. The game visualized one thing: it doesn’t matter how many units you can throw into battle if they are stuck in a traffic jam. Historical war games can teach us lessons that need to be kept in memory to avoid repeating the mistakes of the past.

Bias in war game design is hard to root out. The viable way forward is to challenge the assumptions and the rules. Outsiders do it better than insiders because they will see the “officially ignored” flaws. These outsiders must be cognizant enough to understand the game but have minimal ties to the outcome, so they are free to voice their opinions. There are experts out there. Commercial lawyers challenge assumptions and are experts at asking questions; it can be worth a few billable hours to ask them to find the flaws. Colleagues are not suited to challenge the “officially ignored” flaws because they are marinated in the ideas that established those flaws. Academics dependent on DOD funding could gravitate toward accepting the “officially ignored” flaws, which is simply fundamental human behavior; the fewer ties to the initiator of the game, the better.

Another way to address uncertainty and bias is repeated games. In the first game, cyber has the effects we anticipate. In the second game, cyber has limited effect and turns out to be an operative dud. In the third game, cyber effects proliferate and have a more significant impact than we anticipated. I use these quick examples to show that there is value in repeated games. The repeated games become a journey of realization and afterthought due to the variety of factors and outcomes. Afterward, we can use our logic and understanding to arrange the outcomes and better understand reality. Repeated games limit the range and impact of any specific bias due to the variety of conditions.

The revival of wargaming is needed because it can be a low-cost, high-return intellectual endeavor. Hopefully, we can navigate away from the risks of groupthink and confirmation bias embedded in poor design. The intellectual journey that war games take us on will make our current and future decision-makers better equipped to understand an increasingly complex world.

 

Jan Kallberg, Ph.D.

 

Cyber in the Light of Kabul – Uncertainty, Speed, Assumptions

 

There is a similarity between cyber and the intelligence community (IC): both deal with a denied environment where we have to assess the adversary based on limited verifiable information. The recent events in Afghanistan, with the Afghan government and its military imploding, and the events that followed were unanticipated and ran against the ruling assumptions. The assumptions were off, and the events that unfolded were unprecedented and fast. The Afghan security forces evaporated in ten days facing a far smaller enemy, leading to a humanitarian crisis. There is no blame in any direction; it is evident that this was not the expected trajectory of events. But still, in my view, there is a lesson from the events in Kabul that applies to cyber.

The high degree of uncertainty, the speed in both cases, and our reliance on assumptions, not always vetted beyond our inner circles, make the analogy work. According to the media, in Afghanistan there was no clear strategy to reach a decisive outcome. You could say the same about cyber. What is a decisive cyber outcome at a strategic level? Are we just staring at tactical noise, from ransomware to unsystematic intrusions, when we should try to figure out the big picture instead?

Cyber is loaded with assumptions that we have accepted over time. The assumptions become our path-dependent trajectory, and in the absence of a grand nation-state-on-nation-state cyber conflict, the assumptions remain intact. The only reason cyber’s failed assumptions have not yet surfaced is the absence of full cyber engagement in a conflict. There is a creeping assumption that senior leaders will lead future cyber engagements; meanwhile, the data shows that the increased velocity of the engagements could nullify the time window for leaders to lead. Why do we want cyber leaders to lead? It is just how we do business; that is why we traditionally have senior leaders. John Boyd’s OODA loop (Observe, Orient, Decide, Act) has had a renaissance in cyber over the last three years. The increased velocity, supported by more capable hardware, machine learning, artificial intelligence, and massive data utilization, makes it questionable whether there is time for senior leaders to lead traditionally. The risk is that senior leaders get stuck in the first O of the OODA loop, just observing, or at best in the second O, orienting. It might be that there is no time to lead because events unfold faster than our leaders can decide and act. Given the way technology is developing, I have a hard time believing there will be any significant senior leader input at critical junctures, because the time window is so narrow.

Leaders will always lead by expressing intent, and that might be the only thing left to them. Instead of precise orders, should we train leaders and subordinates to be led by intent, as a form of decentralized mission command?

Another dominant cyber assumption is critical infrastructure as the likely attack vector. For the last five years, the default assumption in cyber has been that critical infrastructure is a tremendous national cyber risk. That might be correct, but there are numerous other risks. In 1983, the Congressional Budget Office (CBO) defined critical infrastructure as “highways, public transit systems, wastewater treatment works, water resources, air traffic control, airports, and municipal water supply.” By the Patriot Act of 2001, the scope had grown to include “systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.” By 2013, in Presidential Policy Directive 21 (PPD-21), the scope widened even further, to encompass almost all of society. That concession stands at ballparks today count as critical infrastructure, together with thousands of other non-critical functions, shows a mission drift that undermines a national cyber defense. There is no guidance on what to prioritize and what we might have to live without at a critical juncture. The question is whether critical infrastructure matters to our potential adversaries as an attack vector, or whether it is critical infrastructure only because it matters to us. A potential adversary might want to attack infrastructure around American military facilities and slow down the transportation apparatus from bases to the port of embarkation (POE) to delay the arrival of U.S. troops in theater. Or the adversary might make a different assessment: that tampering with the American homeland only strengthens the American will to fight and popular support for a conflict.
The potential adversary might also utilize our critical infrastructure as a capture-the-flag training ground to train its offensive teams, but that activity has no strategic intent.

As broad as the definition is today, the focus on critical infrastructure likely reflects what concerns us rather than what the adversary considers essential to reaching strategic success. So today, having witnessed the unprecedented events in Afghanistan, where our assumptions appear to have been off, it is good to keep in mind that cyber is heavy with untested assumptions. In cyber, what we know about the adversary and their intent is limited. We make assumptions based on potential adversaries' behavior and doctrine, but they are still assumptions.
The failure to correctly assess Afghanistan should therefore be a wake-up call for the cyber community, which also relies on unvalidated information.

The long-term cost of cyber overreaction

The default modus operandi when facing negative cyber events is to react, often leading to an overreaction. It is essential to highlight the cost of overreaction, which needs to be part of the calculation of when to engage and how. For an adversary probing cyber defenses, reactions provide information that can be aggregated into a clear picture of the defender's capabilities and preauthorization thresholds.

Ideally, potential adversaries cannot assess our strategic and tactical cyber capacities, but over time and numerous responses, that information advantage evaporates. A reactive culture triggered by cyberattacks provides significant information to a probing adversary, who seeks to understand our underlying authorities and tactics, techniques and procedures (TTP).

The more we act, the more the potential adversary understands our capacity, ability, techniques, and limitations. I am not advocating a passive stance, but I want to highlight the price of acting against a potential adversary. With each reaction, that competitor gains certainty about what we can do and how. The political scientist Kenneth N. Waltz argued that the power of nuclear arms resides in what you could do, not in what you do. A large part of a cyber force's strength resides in the uncertainty about what it can do, which should be difficult for a potential adversary to assess and gauge.

Why does it matter? In an operational environment where adversaries operate under the threshold for open conflict, in sub-threshold cyber campaigns, an adversary will probe to determine where the threshold lies and to ensure that it can operate effectively in the space below it. If a potential adversary cannot gauge the threshold, it will curb its activities, as its cyber operations must remain adequately distanced from a potential, unknown threshold to avoid unwanted escalation.

Cyber was doomed to be reactionary from its inception; the legacy it inherited from information assurance creates a focus on trying to defend, harden, detect and act. The concept is defense, and when the defense fails, it rapidly swings to reaction and counteractivity. Naturally, we want to limit the damage and secure our systems, but we also leave a digital trail behind every time we act.

In game theory, proportional responses lead to tit-for-tat games with no decisive outcome. The lack of a desired end state in a tit-for-tat game is essential to keep in mind as we discuss persistent engagement. In the same way that Colin Powell reflected on the conflict in Vietnam, operations without an endgame or a concept of what decisive victory looks like are engagements for the sake of engagements. Even worse, a tit-for-tat game of continuous engagements might be damaging, as it trains potential adversaries, who can copy our TTPs, in how to fight in cyber. Proportionality is a constant flow of responses that reveals friendly capabilities and makes potential adversaries more able.
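The tit-for-tat point can be illustrated with a toy simulation. The "toolkit," the scoring, and the one-TTP-per-response leak rate are all invented assumptions, not a model of real operations; the sketch only makes the two claims concrete: mirrored proportional responses never produce a score difference, while each response reveals another friendly capability to the observing adversary.

```python
# Illustrative toy model of a strictly proportional, mirrored exchange.
# Assumptions (not doctrine): every probe and response costs one point,
# and every response exposes one tool from a finite friendly toolkit.

def tit_for_tat_exchange(rounds, toolkit_size):
    """Return (score_difference, fraction_of_toolkit_revealed) after a
    repeated game of strictly proportional responses."""
    our_score = their_score = 0
    revealed = set()
    for r in range(rounds):
        their_score += 1                 # adversary probe imposes a cost on us
        our_score += 1                   # our mirrored response imposes the same
        revealed.add(r % toolkit_size)   # each response exposes another TTP
    return our_score - their_score, len(revealed) / toolkit_size

diff, leaked = tit_for_tat_exchange(rounds=50, toolkit_size=20)
print(f"score difference after 50 rounds: {diff}")         # 0: no decisive outcome
print(f"toolkit revealed to the adversary: {leaked:.0%}")  # 100%: advantage gone
```

However long the exchange runs, the score difference stays at zero while the revealed fraction only grows, which is the sense in which proportionality trades a standoff for an information loss.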

There is no straight answer to how to react. A disproportional response to specific events raises the risks for the potential adversary, but it cuts both ways, as a disproportional response could also create unwanted escalation.

The critical concern is that, to maintain the ability to conduct decisive cyber operations for the nation, the extent of friendly cyber capabilities requires almost intact secrecy in order to prevail at a critical juncture. It might be time to put a stronger emphasis on intelligence gain/loss (IGL) assessments to answer the question of whether the defensive gain now outweighs the potential loss of ability and options in the future.

The habit of overreacting to ongoing cyberattacks undermines the ability to engage quickly, and with surprise, to defeat an adversary when it matters most. Continuously reacting and flexing capabilities might fit the general audience's perception of national ability, but it can also undermine the outlook for a favorable geopolitical cyber endgame.

Prioritize NATO integration for multidomain operations

Once U.S. forces implement the multidomain operations (MDO) concept, they will have entered a new level of complexity, with rapid multidomain execution and increased technical abilities and capacities. The U.S. modernization efforts enhance the country's forces, but they also increase the technological disparity within, and the challenges for, NATO. A future fight in Europe is likely to be a rapidly unfolding event, which could occur as a fait accompli attack on NATO's eastern front: a rapid advance by the adversary to gain as much terrain and bargaining power as possible before the arrival of major U.S. formations from the continental U.S.

According to the U.S. Army Training and Doctrine Command (TRADOC) Pamphlet 525-3-1, “The U.S. Army in Multi-Domain Operations 2028,” a “fait accompli attack is intended to achieve military and political objectives rapidly and then to quickly consolidate those gains so that any attempt to reverse the action by the [United States] would entail unacceptable cost and risk.”

In a fait accompli scenario, limited U.S. forces are in theater, and the initial fight relies on the abilities of the East European NATO forces. The mix is a high-low composition of highly capable but small rapid-response units from major NATO countries and regional friendly forces with less ability.

The wartime mobilization units and reserves of the East European NATO forces largely follow a 1990s standard, with partial upgrades in communications and technical systems. They represent a technical generation behind today's U.S. forces. Even as these dedicated NATO allies launch modernization initiatives and replace old legacy hardware (T-72, BTR, BMP, post-Cold War-donated NATO surplus) with modern equipment, the replacement cycle will require up to two decades to complete. Smaller East European NATO nations tend to execute modernization programs faster, due to their limited number of units, but they still face the issue of integrating a variety of inherited hardware, donated Cold War surplus, and recently purchased equipment.

The challenge is NATO MDO integration: creating an able, coherent fighting force. In MDO, the central idea is to break loose and move the fight deep into enemy territory in order to dis-integrate the enemy's system. TRADOC Pamphlet 525-3-1 defines disintegration as: “Dis-integrate refers to breaking the coherence of the enemy’s system by destroying or disrupting its subcomponents (such as command and control means, intelligence collection, critical nodes, etc.) degrading its ability to conduct operations while leading to a rapid collapse of the enemy’s capabilities or will to fight. This definition revises the current doctrinal defeat mechanism disintegrate.” The utility of MDO in a NATO framework requires a broad implementation of the concept within the NATO forces, not only the U.S. forces.

The concept of disintegration has a counterpart in Russian military thought and doctrine, defined as disorganization. The Russian concept seeks to deny command-and-control structures the ability to communicate and lead, by jamming, cyber or physical destruction. Historically, Russian doctrine has focused on breaking the defending force's ability to coordinate, seeking to encircle and to maintain a rapid advance deep into territory until the defense collapses. From a Russian perspective, the key to the success of a fait accompli attack is the ability to deny NATO-U.S. joint operations and to exploit NATO's inability to create a coherent multinational and technologically diverse fighting posture. The concept of disorganization has emerged strongly over the last five years in Russian thinking about the future fight. It would not be too far-fetched to assume that the Russian leadership sees an opportunity in exploiting NATO's inability to coordinate and integrate all elements in the fight.

The lingering concern is how a further technologically advanced and doctrinally complex U.S. force can realize the leverage embedded in these advances if the initial fight occurs in an operational environment where the rapidly mobilized East European NATO forces are two technological generations behind, especially when the Russian disorganization concept appears to aim at denying that leverage and exploiting a fragmented NATO force.

NATO has been extremely successful in safeguarding the peace since its creation in 1949. NATO integration was easier in the 1970s, with large NATO formations in West Germany and fewer countries involved. Multinational NATO forces exercised continuously, with active interaction among leaders, units and planners. Even then, the Soviet/Russian concepts were to break up and overrun the defenses and strike deep into territory.

In light of the increased technical disparity within NATO's multinational forces and the potential doctrinal misalignment in the larger Allied force, added to the strengthened Russian interest in exploiting these conditions, these observations should drive a stronger focus on NATO integration.

The future fight will not occur at a national training center. If it happens in Eastern Europe, it will be fought together with European allies, from numerous countries, in terrain they know better. As we enter a new era of great power competition, the U.S. brings ability, capacity and technology that will ensure NATO mission success if well integrated into the multinational fighting force.

Jan Kallberg, Ph.D.

Solorigate attack — the challenge to cyber deterrence

The exploitation of SolarWinds' network tool at a grand scale, based on publicly disseminated information from Congress and the media, represents not only a threat to national security but also puts the concept of cyber deterrence in question. My concern: Is there a disconnect between the operational environment and the academic research that we generally assume supports the national security enterprise?

Apparently, whoever launched the Solorigate attack was undeterred, based on the publicly disclosed size and scope of the breach. If cyber deterrence is not a functional component that changes potential adversaries' behavior, why is cyber deterrence given so much attention?

Maybe it is because we want it to exist. We want there to be a silver bullet out there that will prevent future cyberattacks, and if we want it to exist, then any support for the existence of cyber deterrence feeds our confirmation bias.

Herman Kahn and Irwin Mann's RAND memo “Ten Common Pitfalls,” from 1957, points out the intellectual traps of military analysis in an uncertain world. That we listen to what supports our general beliefs is natural; it is in the human psyche to do so, but it can mislead.

Here is my main argument: there is a misalignment between civilian academic research and the cyber operational environment. There are at least a few hundred academic papers published on cyber deterrence, from different intellectual angles and in a variety of venues, seeking to investigate, explain and create an intellectual model of how cyber deterrence is achieved.

Many of these papers transpose traditional models from political science, security studies, behavioral science, criminology and other disciplines, and arrange these established models to fit a cyber narrative. The models were never designed for cyber; they were designed to address other deviant behavior. I do not rule out their relevance in some form, but I also do not assume that they are relevant.

I would categorize the root causes of this misalignment in three different, hopefully plausible, explanations. First, few of our university researchers have military experience, and with an increasingly narrow group volunteering to serve, the problem escalates. This divide between civilian academia and the military is a national vulnerability.

Decades ago, the Office of Net Assessment assessed that the U.S. had an advantage over the Soviets due to the skills of the U.S. force. Today, in 2021, the situation might be reversed for cyber research, as academic researchers in potentially adversarial countries may have a higher understanding of military operations than their U.S. counterparts.

Second, the way we fund civilian research creates a market-driven pursuit to satisfy the interests of the funding agency. By funding models of cyber deterrence, there is already an assumption that it exists, so any research that challenges that assumption will never be initiated. Should we stop funding this research? Of course not, but the scope of the inquiry needs to be wide enough to challenge our own presumptions and the potential biases at play. Right now, it pays too well to tell us what we want to hear, compared to presenting a radical rebuttal of our beliefs and perceptions of cyber.

Third, the defense enterprise is secretive about the inner workings of cyber operations and the operational environment (for a good reason!). However, what if it is too secretive, leaving civilian researchers to rely on commercial white papers, media, and commentators to shape the perception of the operational environment?

One of the reasons funded university research exists is to serve as a safeguard against strategic surprise. It becomes a grave concern, then, when the civilian research community misses the target on such a broad scale as it did in this case. This case also demonstrates the risk of assuming that civilian research will accurately understand the operational environment, which rather amplifies the potential for strategic surprise.

There are university research groups that are highly knowledgeable about the realities of military cyber operations, so one way to address this misalignment is to concentrate the effort. Alternatively, the defense establishment must increase its outreach and interaction with a larger group of research universities to mitigate the civilian-military research divide. Every breach, small and large, is data that supports understanding of what happened; in my view, this is one of the lessons to be learned from Solorigate.

Jan Kallberg, Ph.D.

After twenty years of cyber – still uncharted territory ahead

The general notion is that much of the core understanding in cyber is in place. I would like to challenge that perception. There are still vast territories of the cyber domain that need to be researched, structured, and understood. In Winston Churchill's words: it is not the beginning of the end; it is maybe the end of the beginning. In my view, the cyber journey is still very early; the field has yet to mature, and big building blocks for the future cyber environment are not in place. The internet and the networks that support it have grown dramatically over the last decade, but even if the growth of cyber is stunning, the actual advances are not as impressive.

In the last 20 years, cyber defense, and cyber as a research discipline, have grown from almost nothing to major national concerns and recipients of major resources. In the winter of 1996-1997, there were four references to cyber defense in the search engine of the day, AltaVista. Today, there are about 2 million references in Google. Knowledge of cyber has not developed at the same rapid rate as the interest, concern, and resources.

The cyber realm is still struggling with basic challenges such as attribution. Traditional topics in political science and international relations — such as deterrence, sovereignty, borders, the threshold for war, and norms in cyberspace — are still under development and discussion. From a military standpoint, there is still a debate about what cyber deterrence would look like, what the actual terrain and maneuverability are like in cyberspace, and who is a cyber combatant.

The traditional combatant problem becomes even more complicated because the clear majority of the networks and infrastructure that could be engaged in potential cyber conflicts are civilian — and the people who run these networks are civilians. Add to that mix the future reality with cyber: fighting a conflict at machine speed and with limited human interaction.

Cyber raises numerous questions, especially for national and defense leadership, due to its nature. There are benefits: cyber can be used as a softer policy option with a global reach that does not require pre-positioning or weeks of getting assets in the right place for action. The problem occurs when you reverse the global reach and an asymmetric fight arises, when the global adversaries of the United States can strike with cyber arms deep into the most granular particle of our society – the individual citizen.

Another question raising concern is the matter of time. Cyberattacks and conflicts can be executed at machine speed, beyond the human ability to lead and comprehend what is actually happening. This shows that cyber as a field of study is in its early stages, even as we see astronomic growth in networked equipment, nodes, and the sheer volume of transferred information. We have massive activity on the internet and in networks, but we are not fully able to utilize it, or even structurally understand what is happening at a system level and in a grander societal setting. I believe it could take until the mid-2030s before many of the basic elements of cyber are accepted, structured, and understood, and before we have a global framework. Therefore, it is important to invest in cyber research and make discoveries now rather than face strategic surprise. Knowledge is weaponized in cyber.

Jan Kallberg, PhD

Cognitive Force Protection – How to protect troops from an assault in the cognitive domain

Jan Kallberg and Col. Stephen Hamilton

Great power competition will require force protection for our minds, as hostile near-peer powers will seek to influence U.S. troops. Influence campaigns that undermine the American will to fight, and the injection of misinformation into a cohesive fighting force, are threats equal to any other hostile action by adversaries and terrorists. Maintaining the will to fight is key to mission success.

Influence operations and disinformation campaigns are increasingly becoming a threat to the force. We have to treat influence operations and cognitive attacks as seriously as any violent threat addressed by force protection. Force protection is defined by Army Doctrine Publication No. 3-37, derived from JP 3-0: “Protection is the preservation of the effectiveness and survivability of mission-related military and nonmilitary personnel, equipment, facilities, information, and infrastructure deployed or located within or outside the boundaries of a given operational area.” Therefore, protecting the cognitive space is an integral part of force protection.

History shows that preserving the will to fight has ensured mission success in achieving national security goals. France in 1940 had more tanks and significant military means to engage the Germans; France still lost. A large part of the explanation of why France was unable to defend itself in 1940 resides with defeatism, including an unwillingness to fight that was the result of a decade-long erosion of the French soldiers' will in the cognitive realm.

In the 1930s, France was in political chaos, swinging among right-wing parties, communists, socialists, authoritarian fascists, political violence and cleavage, and the perception of a unified France worth fighting for diminished. Inspired by Stalin's Soviet Union, the communists fueled French defeatism with propaganda, agitation and influence campaigns to pave the way for a communist revolution. Nazi Germany weakened the French to enable German expansion. Under persistent cognitive attack from two authoritarian ideologies, the bulk of the French Army fell into defeatism. The French disaster of 1940 is one of several historical examples where a manipulated perception of reality prevailed over reality itself. It would be naive to assume that the American will is a natural law unaffected by the environment. Historically, the American will to defend freedom has always been strong; however, the information environment has changed. Therefore, this cognitive space must be maintained, reignited and shared when weaponized information threatens it.

In the Battle of the Bulge, the conflict between good and evil was open and visible. There was no competing narrative. The goal of the campaign was easily understood, with clear boundaries between friendly and enemy activity. Today, seven decades later, we face competing tailored narratives, digital manipulation of media, an unprecedented complex information environment, and a fast-moving, scattered situational picture.

Our adversaries can and already do exploit the fact that, as a democracy, we do not tell our forces what to think. Our only framework is loyalty to the Constitution and the American people. As a democracy, we expect our soldiers to support the Constitution and the mission. Our soldiers have the democratic and constitutional right to think whatever they find worthwhile to consider.

To fight influence operations, we would typically control what information is presented to the force. However, we cannot tell our force what to read and not to read, due to First Amendment rights. While this may not have caused issues in the past, social media has given our adversaries an opportunity to present a plethora of information meant to persuade our force.

In addition, there is too much information flowing in multiple directions for centralized quality control or fact-checking. The vetting of information must occur at the individual level, and we need to enable the force's access to high-quality news outlets. This doesn't require any large investment. The Army currently funds access to training and course material for education purposes. Extending these online resources to provide every member of the force online access to a handful of quality news organizations costs little but creates a culture of reading fact-checked news. More importantly, news that is not funded by clickbait is likely to be less sensational, since its funding comes from dedicated readers interested in actual news that matters.

In a democracy, cognitive force protection means teaching, training and enabling the individual to see the demarcation between truth and disinformation. As servants of our republic and its people, leaders of character can educate their units on assessing and validating information. As first steps, we must work toward this idea and provide tools to protect our force from an assault in the cognitive domain.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. Col. Stephen Hamilton is the chief of staff at the institute and a professor at the academy. The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Defense Department.

 

 

Government cyber breach shows need for convergence

MAJ Chuck Suslowicz , Jan Kallberg , and LTC Todd Arnold

The SolarWinds breach points out the importance of having both offensive and defensive cyber force experience. The breach is under ongoing investigation, and we will not comment on the investigation. Still, in general terms, we want to point out the exploitable weaknesses created by splitting the force into two silos: offensive cyber operations (OCO) and defensive cyber operations (DCO). The separation of OCO and DCO, through the specialization of formations and leadership, undermines the broader understanding and value of threat intelligence. The growing demarcation between OCO and DCO also has operational and tactical implications. The multidomain operations (MDO) concept emphasizes the competitive advantages that the Army, and the greater Department of Defense, can bring to bear by leveraging the unique and complementary capabilities of each service.

It requires that leaders understand the capabilities their organizations can bring to bear in order to achieve maximum effect from the available resources. Cyber leaders must have exposure to the depth and breadth of their chosen domain to contribute to MDO.

Unfortunately, within the Army’s operational cyber forces, there is a tendency to designate officers as either offensive cyber operations (OCO) or defensive cyber operations (DCO) specialists. The shortsighted nature of this categorization is detrimental to the Army’s efforts in cyberspace and stymies the development of the cyber force, affecting all soldiers. The Army will suffer in its planning and ability to operationally contribute to MDO from a siloed officer corps unexposed to the domain’s inherent flexibility.

We consider the assumption that there is a distinction between OCO and DCO to be flawed. It perpetuates the idea that the two operational types are doing unrelated tasks with different tools, and that experience in one will not improve performance in the other. We do not see such a rigid distinction between OCO and DCO competencies. In fact, most concepts within the cyber domain apply directly to both types of operations. The argument that OCO and DCO share competencies is not new; the iconic cybersecurity expert Dan Geer first pointed out that cyber tools are dual-use nearly two decades ago, and continues to do so. A tool that is valuable to a network defender can prove equally valuable during an offensive operation, and vice versa.

For example, a tool that maps a network's topology is critical for the network owner's situational awareness. The same tool could be equally effective for an attacker maintaining situational awareness of a target network. The dual-use nature of cyber tools requires cyber leaders to recognize both sides of their utility: a tool that does a beneficial job of visualizing key terrain to defend also creates a high-quality roadmap for a devastating attack. Limiting officer experiences to only one side of cyberspace operations (CO) will limit their vision, handicap their input as future leaders, and risk squandering effective use of the cyber domain in MDO.
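A minimal sketch can make the dual-use point concrete. The routine below finds the nodes whose loss disconnects a network; the node names and topology are a hypothetical toy example invented for illustration, not any real system. The same output serves both readings: the defender's list of key terrain to harden is the attacker's target list.

```python
from collections import deque

def critical_nodes(graph):
    """Return the nodes whose removal disconnects the remaining network.
    `graph` is an adjacency dict: {node: set(neighbors)}."""
    def connected(nodes):
        # Breadth-first search restricted to the surviving nodes.
        start = next(iter(nodes))
        seen, queue = {start}, deque([start])
        while queue:
            n = queue.popleft()
            for m in graph[n]:
                if m in nodes and m not in seen:
                    seen.add(m)
                    queue.append(m)
        return seen == nodes
    return {node for node in graph
            if len(graph) > 1 and not connected(set(graph) - {node})}

# Hypothetical toy network: two workstations and a file server joined
# by a single router.
network = {
    "ws1": {"router"},
    "ws2": {"router"},
    "router": {"ws1", "ws2", "fileserver"},
    "fileserver": {"router"},
}

# Defender's reading: harden the router. Attacker's reading: hit the router.
print(critical_nodes(network))  # prints {'router'}
```

Nothing in the code itself is offensive or defensive; only the intent of the user differs, which is the overlap in technical skillsets the argument describes.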

An argument will be made that “deep expertise is necessary for success” and that officers should be chosen for positions based on their previous exposure. This argument fails on two fronts. First, the Army's decades of experience in officer development have shown the value of diverse exposure across assignments. Other branches already ensure officers experience a breadth of assignments to prepare them for senior leadership.

Second, this argument ignores the reality of “challenging technical tasks” within the cyber domain. As cyber tasks grow more technically challenging, the tools become more common between OCO and DCO, not less common. For example, two of the most technically challenging tasks, reverse engineering of malware (DCO) and development of exploits (OCO), use virtually identical toolkits.

An identical argument can be made for network defenders preventing adversarial access and offensive operators seeking to gain access to adversary networks. Ultimately, the types of operations differ in their intent and approach, but significant overlap exists within their technical skillsets.

Experience within one fragment of the domain directly translates to the other and provides insight into an adversary’s decision-making processes. This combined experience provides critical knowledge for leaders, and lack of experience will undercut the Army’s ability to execute MDO effectively. Defenders with OCO experience will be better equipped to identify an adversary’s most likely and most devastating courses of action within the domain. Similarly, OCO planned by leaders with DCO experience are more likely to succeed as the planners are better prepared to account for potential adversary countermeasures.

In both cases, the cross-pollination of experience improves the Army’s ability to leverage the cyber domain and improve its effectiveness. Single tracked officers may initially be easier to integrate or better able to contribute on day one of an assignment. However, single-tracked officers will ultimately bring far less to the table than officers experienced in both sides of the domain due to the multifaceted cyber environment in MDO.

Maj. Chuck Suslowicz is a research scientist in the Army Cyber Institute at West Point and an instructor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). Dr. Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. LTC Todd Arnold is a research scientist in the Army Cyber Institute at West Point and assistant professor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Department of Defense.


If Communist China loses a future war, entropy could be imminent

What happens if China engages in a great power conflict and loses? Will the Chinese Communist Party’s control over the society survive a horrifying defeat?
The People's Liberation Army (PLA) last fought a massive-scale war during the invasion of Vietnam in 1979, a failed operation to punish Vietnam for toppling the Khmer Rouge regime in Cambodia. Since 1979, the PLA has shelled Vietnam on different occasions and been involved in other border skirmishes, but it has not fought a full-scale war. In the last decades, China has increased its defense spending and modernized its military, fielding advanced air defenses, cruise missiles, and other advanced hardware, and building a high-seas navy from scratch. Still, there is significant uncertainty about how the Chinese military will perform.

Modern warfare is a matter of integration, joint operations, command, control, intelligence, and the ability to understand and execute the ongoing, all-domain fight. War is a complex machinery with low margins of error and devastating outcomes for the unprepared. Whether one is for or against the U.S. military operations of the last three decades, the fact is that prolonged conflict and engagement have made the U.S. military experienced. Chinese inexperience, combined with unrealistic expansionist ambitions, could be the downfall of the regime. Dry-land swimmers may train the basics, but they never become great swimmers.

Although it may look like a creative strategy for China to harvest trade secrets and intellectual property, and to put developing countries in debt to gain influence, I would question how rational the Chinese apparatus is. The repeated visualization of the Han nationalist cult appears to be a strength, with the youth rallying behind the Xi Jinping regime, but it is also a significant weakness. The weakness is blatantly visible in the Chinese need for surveillance and population control to maintain stability: surveillance and repression so encompassing in the daily life of the Chinese population that East Germany's security services appear to have been amateurs. All chauvinist cults implode over time because the unrealistic assumptions add up, and so does the sum of all delusional ideological decisions.

Winston Churchill knew, after Nazi Germany declared war on the United States in December 1941, that the Allies would prevail and win the war. Nazi Germany did not have the GDP or the manpower to sustain a war on two fronts, but the Nazis did not care because they were irrational and driven by hateful ideology. Just months earlier, Nazi Germany had invaded the massive Soviet Union to create Lebensraum and feed an urge to reestablish German-Austrian dominance in Eastern Europe. Then the Nazis unilaterally declared war on the United States. The rationale for the declaration of war was ideology, a worldview that demanded expansion and conflict, even though Germany was strategically inferior and eventually lost the war.

The Chinese belief that China can become a global authoritarian hegemon is likely on the same journey. China today is driven by its own flavor of expansionist ideology that seeks conflict without being strategically able. It is worth noting that not a single major country is China's ally. The Chinese supremacist propaganda works in peacetime, with massive rallies hailing Mao Zedong's military genius while the crowds sing, dance, and wave red banners, but will that grip hold if the PLA loses? In case of a failed military campaign, is the Chinese population, shaped by the one-child policy, ready for casualties, humiliation, and failure? Will the authoritarian grip of social credit scoring, facial recognition, informers, digital surveillance, and an army whose peacetime function is primarily crowd control survive a crushing defeat? If the regime loses its grip, the wrath of the masses, pent up over decades of repression, will be unleashed.

A country the size of China, with a history of cleavages and civil wars, a suppressed and diverse population, and stark socio-economic disparity, could be catapulted into Balkanization after a defeat. In the past, China has had long periods of internal fragmentation and weak central government.

The United States reacts differently to failure. As a country, the United States is far more resilient than we might assume from watching the daily news. If the United States loses a war, the president gets the blame, but there will still be a presidential library in his or her name. There is no revolution.

There is an assumption lingering over today's public debate that China has a strong hand, advanced artificial intelligence, the latest technology, and the stature of an uber-able superpower. I am not convinced. During the last decade, the countries in the Indo-Pacific region that seek to hinder the Chinese expansion of control, influence, and dominance have increasingly formed stronger relationships. The strategic scale tips in the democratic countries' favor. If China, still driven by ideology, pursues conflict at a large scale, it is likely the end of the Communist dictatorship.

In my personal view, we should pay more attention to the humanitarian risks, the ripple effects, and the danger of nuclear weapons in a civil war, in case the Chinese regime implodes after a failed future war.

Jan Kallberg, Ph.D.

What is the rationale behind election interference?

Any attempt to interfere with democratic elections, and the peaceful transition of power that is the result of these elections, is an attack on the country itself as it seeks to destabilize and undermine the core societal functions and constitutional framework. We all agree on the severity of these attempts and that it is a real, ongoing concern for our democratic republic. That is all good, and democracies have to safeguard the integrity of their electoral processes.

But what is less discussed is why the main perpetrator — Russia, according to media — is seeking to interfere with the U.S. election. What is the Russian rationale behind these information operations targeting the electoral system?

The Russian information operations working the fault lines of American society, seeking to make America more divided and weaker, have a more evident rationale. These operations seek to widen cleavages, misunderstandings, and conflicts within the population. That can affect military recruiting and national cohesion in a national emergency, and it can have long-term effects on trust and confidence in society. So attacking the American cognitive space, in pursuit of division in this democratic republic, has a more obvious goal. But what is the Russian return on investment for the electoral operations?

Even if the Russians had such an impact that candidate X won instead of candidate Y, the American commitment to defense and fundamental outlook on the world order have been fairly stable through different administrations and changes in Congress.

Naturally, one explanation is that Russia, as an authoritarian country with a democratic deficit, wants to portray functional democracies as having their own issues, and liberal democracy as a failing and flawed concept. In a democracy, if the electoral system is unable to ensure the integrity of the elections, then the legitimacy of the government will be questioned. The question is whether that is the Russian endgame.

In my view, there is more to the story than Russia simply trying to interfere with the U.S. to create a narrative that democracy doesn't work, a narrative specially tailored for the Russian domestic population so it will not threaten the current regime. The average Russian is no free-ranging political scientist pondering the underpinnings of governmental legitimacy, democratic models, and the importance of constitutional mechanisms. The Russian population is made up of the descendants of those who survived the communist terror, so by default they are not quick to question governmental legitimacy. There is opposition within Russia, and a fraction of the population would like to see a regime change in the Kremlin. But in a Russian context, regime change does not automatically mean a public urge for liberal democracy.

Let me present another explanation for the Russian electoral interference, one that might coexist with the first and that relates to how we perceive Russia.

The Russian information operations stir up a sentiment that the Russians are able to change the direction of our society. If the Russians are ready to strike the homeland, then they are a major threat. Only superpowers are major threats to the continental United States.

So instead of seeing Russia for what it is, a country with significant domestic issues that relies on the massive extraction of natural resources to sell to a world market that buys from the lowest bidder, we overestimate its ability. Russia has failed over the last decades to advance its ability to produce and manufacture competitive products, but the information operations make us believe that Russia is a potent superpower.

The nuclear arsenal makes Russia a superpower per se. Still, a nuclear arsenal cannot be effectively visualized for a foreign public, nor can it shape national sentiment in a foreign country, especially when Western societies in 2020 almost seem to have forgotten that nukes exist. Nukes are no longer "practical" tools for projecting superpower status.

If the Russian operations convince our politicians that Russia is a significant adversary, and that perception gives Russia bargaining power and geopolitical consideration, then electoral interference appears far more logical as a Russian goal.

Jan Kallberg, Ph.D.