CEPA Article: Russia Won’t Play the Cyber Card, Yet

My article from CEPA (Center for European Policy Analysis). Read the full text at this link.


From the text.

“The recent cyberattacks in Ukraine have been unsophisticated and have had close to no strategic impact. The distributed denial-of-service (DDoS) cyber-attacks are low-end efforts, a nuisance that most corporations already have systems to mitigate. Such DDoS attacks will not bring down a country or force it to submit to foreign will. These are very significantly different from advanced offensive cyber weapons. Top-of-the-range cyber weapons are designed to destroy, degrade, and disrupt systems, eradicate trust, and pollute data integrity. DDoS and website defacements do not even come close in their effects.

A Russian cyber-offensive would showcase its full range of advanced offensive cyber capabilities against Ukraine, along with its tactics, techniques, and procedures (TTP), which would then be compromised. NATO and other neighboring nations, including China and Iran, would know the extent of Russian capabilities and have effective insights into Russia’s modus operandi.

From a Russian point of view, if a potential adversary understood its TTP, strategic surprise would evaporate, and the Russian cyber force would lose the initiative in a more strategically significant future conflict.

Understanding the Russian point of view is essential because it is the Russians who conduct their offensive actions. This might sound like stating the obvious, but the prevailing conventional wisdom is currently driven by a Western think-tank context, which, in my opinion, is inaccurate. There is nothing for the Russians to strategically gain by unleashing their full, advanced cyber arsenal against Ukraine or NATO at this juncture. In an open conflict between Russia and NATO, the Kremlin’s calculation would be different and might well justify the use of advanced cyber capabilities.

In reality, the absence of cyber-attacks beyond Ukraine indicates a very rational Russian fear of disclosing and compromising capabilities beyond its own. That is the good news. The bad news is that the absence of a cyber-offensive does not mean these advanced capabilities do not exist.”

Jan Kallberg

My article for 19FortyFive: “Free War: A Strategy For Ukraine To Resist Russia’s Brutal Invasion Of Ukraine?”

 

I wrote an article for the web-based national security venue 19FortyFive that addresses resistance operations in light of the Swedish concept Fria Kriget (English: Free War).

The full text can be found here.
(Picture UK MOD)

 

Article: Too Late for Russia to Stop the Foreign Volunteer Army

My article “Too Late for Russia to Stop the Foreign Volunteer Army” was published by the Center for European Policy Analysis. A short quote below:

Any Ukrainian who sees volunteers, whether French, British, American, Spanish, Brazilian, or from wherever they come, joining their resistance will solidify the notion that the war is not between Ukraine and Russia, but between good and evil. Predictably enough, Putin, his commanders, and his propagandists are troubled by the prospect of thousands of volunteers supporting the Ukrainian narrative and cause, and strengthening the Ukrainian will to endure.

 

Ukraine: Russia will not waste offensive cyber weapons

An extract from my latest article at CyberWire – read the full article at CyberWire.

When Russia’s strategic calculus would dictate major cyber attacks.

Russia will use advanced strategic cyber capabilities at well-defined critical junctures. For example, as a conflict in Europe unfolded and dragged in NATO, Russian forces would seek to delay the entry of major US forces through cyber attacks against railways, ports, and electric facilities along the route to the port of embarkation. If US forces can be delayed by one week, that is one additional week in Europe before the main US force arrives, and it would enable the submarines of the Northern Fleet to take up positions in the Atlantic. Strategic cyber supports strategic intent and actions.

Not all cyber-attacks are the same, and just because an attack originates from Russia doesn’t mean it is directed by strategic intent.

Naturally, the Russian regime would allow cyber vandalism and cybercrime against the West to run rampant because these are ways of striking the adversary. But these low-end activities do not represent the Russian military complex’s cyber capabilities, nor do they reflect the Russian leadership’s strategic intent.

The recent cyberattacks in Ukraine have been unsophisticated and have had close to no strategic impact. The distributed denial-of-service (DDoS) cyber-attacks are low-end efforts, a nuisance that most corporations already have systems to mitigate. Such DDoS attacks will not bring down a country or force it to submit to foreign will. Such low-end attacks don’t represent advanced offensive cyber weapons: the DDoS attacks are limited-impact cyber vandalism. Advanced offensive cyber weapons destroy, degrade, and disrupt systems, eradicate trust, and pollute data integrity. DDoS and website defacements are not even close in their effects. Whether the DDoS attacks were carried out by the state or by a group of college students in support of Kremlin policy, Russia has not shown the extent of its offensive cyber capability.

The invasion of Ukraine is not the major peer-to-peer conflict that is the central Russian concern. The Russians have tailored their advanced cyber capabilities to directly impact a more significant geopolitical conflict, one with NATO or China. Creating a national offensive cyber force is a decades-long investment in training, toolmaking, reconnaissance of possible avenues of approach, and detection of vulnerabilities. If Russia showcased its full range of advanced offensive cyber capabilities against Ukraine, the Russian tactics, techniques, and procedures (TTP) would be compromised. NATO and other neighboring nations, including China and Iran, would know the extent of Russian capabilities and have effective insight into Russia’s modus operandi.

From a Russian point of view, if a potential adversary understood Russian offensive cyber operations’ tactics, techniques, and procedures, strategic surprise would evaporate, and the Russian cyber force would lose the initiative in a more strategically significant future conflict.

Understanding the Russian point of view is essential, because it is the Russians who conduct their offensive actions. This might sound like stating the obvious, but the prevailing conventional wisdom is currently driven by a Western think-tank context, which, in my opinion, is inaccurate. There is nothing for the Russians to strategically gain by unleashing their full advanced cyber arsenal against Ukraine or NATO at this juncture. In an open conflict between Russia and NATO, the Russian calculation would be different and might justify the use of advanced cyber capabilities.

End of extract – read the full article at CyberWire.

An Underground Resistance Movement for Ukraine

My article titled “An Underground Resistance Movement for Ukraine” was published by the Center for European Policy Analysis. A clip from the text: “If Russia succeeds in occupying large parts of the country, a resistance movement can prosper, but only if the West provides help.”

 

Ukraine: the absent Russian Electronic Warfare (EW)

Russian doctrine favors rapid employment of nonlethal effects, such as electronic warfare (EW), to paralyze and disrupt the enemy in the early hours of a conflict. There was an expectation that a full-scale Russian invasion of Ukraine would open with massive utilization of electronic warfare.

On the afternoon of the first day of the Russian invasion, the 24th of February, it became apparent that the Russians faced significant command and coordination issues, as there was no effective electronic warfare against Ukrainian communications.

The absence of Russian electronic warfare can have different explanations; the Russians could have assumed marginal Ukrainian resistance and therefore not deployed their EW capabilities. But on day five of the invasion, after stiff Ukrainian resistance, there was still no tangible Russian EW engagement. If the Russians had merely held back their EW early on, it should have been operational by day five, so the other explanation stands: the Russians cannot get their EW together.

In my view, that indicates that the Russians have failed to synchronize EW, spectrum management, and the activities of different military formations so that friendly communications remain functional while the Ukrainians are under Russian EW attack.

After this conflict, the Russian-Ukrainian War, the Russians will have learned from their failures, and so will any potential adversary that studies the war. The core problem in conducting successful EW is not the hardware; management and operational integration are the challenges.

Any potential adversary will adapt to the experience of the Russian-Ukrainian War and focus on operational integration, but there is a risk of US status quo bias and an unwillingness to invest in EW “because apparently, EW doesn’t work.”

The spurious assessment that EW doesn’t work and is not critical for the battlefield becomes the rationale for continuing the current marginal EW investment.

For almost thirty years, American electronic warfare capabilities have been neglected because the spectrum was never contested in Iraq and Afghanistan. Nor did the enemy have the ability to detect and strike at electromagnetic activity, so US command posts could radiate electromagnetic signatures, and radios and data links could emit freely, with no risk of being annihilated when least expected.

During these decades, other nations have incrementally and strategically increased their ability to conduct electronic warfare by denying and degrading spectrum and by detecting electromagnetic activity that leads to kinetic strikes. Over the last years, the connection between electromagnetic radiation and kinetic strikes has played out repeatedly along the frontlines in Donbas, where Russian-backed separatists shelled Ukrainian positions. In 2020, we witnessed it in the Second Karabakh War, where Armenian command posts were located by their electromagnetic signatures and rapidly knocked out in the early days of the war.

American forces are not prepared to face electronic warfare that is well-integrated and widely deployed in an opposing force.
For a potential future conflict, it is notable that the potential adversaries have heavily invested in the ability to conduct electronic warfare (EW) throughout their force structure. In the Russian Army, each motorized rifle regiment/brigade has an EW company, each division has an EW battalion, and within the corps and army structure there are additional units to allocate to the direction of the main thrust in a ground offensive. The Russians appear not to use these assets effectively, but they will learn and adapt.

At the doctrine level, the Russian ground forces are designed to be offensive and to take the initiative from the moment the first round is fired, and denial of spectrum access is a part of that strategy.
In theory, EW enables forward-maneuver battalions to engage and create disruption for the enemy and an opportunity for exploitation. The Russians benefit from decades of uninterrupted prioritization and development of EW, so they have the hardware, but it appears to be the integration that is lacking.

Electronic warfare is a craft, a skill, and potential adversaries’ EW/signal officers are EW/signal officers their entire careers. Naturally, these junior and mid-career officers lack experience from other Army branches and units, but they know the skills required in EW. In my view, that gives potential adversaries an advantage in electronic warfare compared to the US warrior-scholars who are shuttled around in a system of constant changes of duty station, schools, and tasks.

A significant revision of DOPMA, the Defense Officer Personnel Management Act, has been under discussion, and it is essential that it take into account the need for time and stability to gain craftsmanship in EW, which is both technical and hands-on. That requires officers who can narrow their specialization without career penalties or being forced out of the force. The requirements for winning a war must prevail over a career flow chart driven by obsolete Taylorism and the belief that everyone is interchangeable. Not everyone is interchangeable, and uniquely talented leaders can ensure mission success through spectrum warfare. In the future fight, the EW units will have a far more active role and face constant targeting due to their impact on the battlefield. This development requires leadership and decision-making by leaders who know the EW craft.

The Russian aggression in Ukraine is evidence that a more extensive ground war is possible. Our potential adversaries will learn and adapt their EW from the Russian-Ukrainian War. Meanwhile, it is long overdue to accelerate US investment in fielded and integrated EW. The current state, with intermittent integration across formations and EW capabilities undersized relative to battlefield needs, has to change.

Every modern high-tech weapon system is a dud without access to the spectrum; that realization should be enough to address this issue.

Jan Kallberg

These opinions are my private viewpoints and do not reflect the position
of any employer. 

 

 

Artificial Intelligence (AI): The risk of over-reliance on quantifiable data

The rise of interest in artificial intelligence and machine learning has a flip side. It might not be so smart if we fail to design the methods correctly. A question out there — can we compress reality into measurable numbers? Artificial intelligence relies on what can be measured and quantified, risking an over-reliance on measurable knowledge.

The problem, as with many other technical problems, is that it all ends with humans who design and assess according to their own perceived reality. The designers’ bias, perceived reality, weltanschauung, and outlook — everything goes into the design. The limitations are not on the machine side; the humans are far more limiting. Even if the machines learn from a point forward, it is still a human who stakes out the starting point and the initial landscape.

Quantifiable data has historically served America well; it was a part of the American boom after World War II, when America was one of the first countries to take a scientific look at how to improve, streamline, and increase production using fewer resources and less manpower.

Numbers have also misled. Vietnam-era Secretary of Defense Robert McNamara used numbers to chart how to win the Vietnam War, and the numbers clearly indicated how to reach a decisive military victory — according to the numbers.

In a post-Vietnam book titled “The War Managers,” retired Army general Douglas Kinnard described the almost bizarre world of seeking to fight the war through quantification and statistics. Kinnard, who later taught at the National Defense University, surveyed fellow generals who had served in Vietnam about the actual support for these methods. These generals considered the concept of assessing progress in the war by body counts useless; only two percent of the surveyed generals saw any value in the practice.

Why were the Americans counting bodies? It is likely because it was quantifiable and measurable. It is a common error in research design to seek the variables that produce easily accessible quantifiable results, and McNamara was at that time almost obsessed with numbers and the predictive power of numbers. McNamara was not the only one.

In 1939, Nazi Germany’s foreign minister Ribbentrop, together with the German High Command, studied and measured the French and British war preparations and ability to mobilize. The Germans’ quantified assessment was that the Allies were unable to engage in a full-scale war on short notice, and the Germans believed that the numbers were identical to factual reality — the Allies would not go to war over Poland because they were neither ready nor able. So Germany invaded Poland on the 1st of September 1939 and started WWII.

The quantifiable assessment was correct and led to Dunkirk, but the grander assessment was off and underestimated the British and French will to take on the fight, which led to at least 50 million dead, half of Europe behind the Soviet Iron Curtain, and the destruction of the Nazi regime itself. Britain’s willingness to fight to the end, its ability to convince the U.S. to provide resources, and the subsequent events were never captured in the data. The German quantified assessment was a snapshot of the British and French war preparations in the summer of 1939 — nothing else.

Artificial intelligence depends upon the numbers we feed it. The potential failure is hidden in selecting, assessing, designing, and extracting the numbers that feed artificial intelligence. The risk of grave errors in decision-making, escalation, and avoidable human suffering and destruction is embedded in our future use of artificial intelligence if we do not pay attention to the data that feed the algorithms. Data collection and aggregation are the weakest link in the future of machine-supported decision-making.
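To make the point concrete, here is a minimal, hypothetical sketch (all numbers are invented toy data, not drawn from any real dataset): a model is fitted on an easily measured proxy that happened to track the outcome in the historical data, and it keeps producing confident predictions after the real driver of the outcome has changed.

```python
# Toy illustration: a model trained on a convenient proxy metric keeps
# predicting success after the proxy stops tracking the real outcome.
# All numbers below are invented for illustration only.

def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Historical period: the measurable proxy (a body-count-style metric)
# happened to rise together with the assessed "progress" score.
proxy_history    = [10, 20, 30, 40, 50]
progress_history = [12, 19, 31, 42, 48]

slope, intercept = fit_line(proxy_history, progress_history)

# New period: the unmeasured driver (will to resist, morale, politics)
# has shifted, so the proxy no longer carries the same meaning.
proxy_now = 60
actual_progress_now = 15   # assumed: the situation has actually deteriorated

predicted = slope * proxy_now + intercept
print(f"model prediction: {predicted:.1f}, actual outcome: {actual_progress_now}")
# The model confidently extrapolates toward ~59 while reality sits near 15,
# because the quantifiable input was a snapshot, not the cause.
```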

Jan Kallberg, Ph.D.

Business leaders need to own cyber security

Consultants and IT staff often have more degrees of freedom than needed. Corporate cybersecurity requires a business leader to make the decisions, be personally invested, and lead the security work the same way they lead the business. The intent and guidance of the business leaders need to be visible. In reality, this is usually not the case. Business leaders rely on IT staff and security consultants to “protect us from cyberattacks.” The risk is obvious – IT staff and consultants are not running the business, lack a complete understanding of the strategy and direction, and are therefore unable to prioritize the protection of the information assets.

Information security has a few foundational pieces. Information resources are classified according to their importance to the business, an acceptable level of risk is established for the company, and then security solutions are developed to mitigate risk down to that acceptable level. In parallel, these mitigation strategies are implemented with minimal disruption to the workflow and the business. The information security program ensures that information and functionality can be restored after an incident as part of the process.
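As a minimal sketch of that process (the asset names, scores, and thresholds below are invented for illustration, not a prescribed framework), the logic reduces to classifying assets, comparing residual risk against the leadership’s stated risk appetite, and flagging where further mitigation or an explicit business decision is needed:

```python
# Minimal sketch: classify information assets, apply mitigations, and compare
# residual risk against a leadership-defined acceptable level.
# All assets, scores, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    business_importance: int   # 1 (low) .. 5 (critical), set by business leadership
    inherent_risk: float       # 0..1 likelihood-times-impact estimate
    mitigation_effect: float   # 0..1 fraction of risk removed by current controls

    @property
    def residual_risk(self) -> float:
        return self.inherent_risk * (1.0 - self.mitigation_effect)

# The acceptable residual risk is a business decision, not an IT decision.
RISK_APPETITE = 0.15

portfolio = [
    Asset("customer database", business_importance=5, inherent_risk=0.8, mitigation_effect=0.7),
    Asset("public website",    business_importance=2, inherent_risk=0.5, mitigation_effect=0.8),
    Asset("design documents",  business_importance=4, inherent_risk=0.6, mitigation_effect=0.4),
]

for asset in sorted(portfolio, key=lambda a: a.business_importance, reverse=True):
    status = "OK" if asset.residual_risk <= RISK_APPETITE else "NEEDS DECISION"
    print(f"{asset.name:18} importance={asset.business_importance} "
          f"residual={asset.residual_risk:.2f} -> {status}")
```

The point of the sketch is that the threshold, not the tooling, carries the decision; only the business leader can set it.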

These basic steps may sound like an elementary exercise – something that consultants can solve quickly – but the central question is risk appetite, the willingness to accept an understood risk, which can jeopardize the entire business if set too high or too low. What does the wrong level of risk appetite look like? Either the business’ IT operations are prepared to take risks that business management did not even dare to dream of or, conversely, risk aversion and a failure to prioritize mean the IT systems slow down the business and stand in its way. Risk, which is central to information security, can only be controlled by the business leader. IT staff and consultants can be advisors, produce information, and sketch solutions, but the decision is a business decision. What risk we are prepared to take cannot be an open issue left to arbitrary interpretation.

Just as management influences and controls what is an acceptable risk when information security is structured, management is central when things go wrong. A business management team that is not involved in information security, and has not gained a conceptual understanding of it, will be too slow to act in a crisis. Cyberattacks and data failures occur daily. The financial market, customers, government authorities, and owners rightly expect these damages to be dealt with quickly and efficiently. Confusion when a major cyber crisis occurs, by attack or mistake, undermines confidence in the business at a very high rate. In a matter of hours, trust that has taken decades to build can be wiped out. In the digital economy, trust is the same as revenue and long-term customer relationships. Business management that lacks an understanding, at a managerial level, of how cyber security is structured for their business has not made the intellectual journey of prioritizing and will not lead or have relevant influence in a crisis.

Managers receive premium pay and are recruited because they have the experience, insight, and character to navigate a challenging crisis when it hits. If business management cannot lead when the business is under major cyberattack, then management has left it to the IT staff and consultants to lead the business.

In small and medium-sized businesses, the need for committed business management is reinforced because the threat of long-term damage from a cyberattack is greater. A public company can absorb the damage in a way that smaller players, often in niche industries, cannot.

If business management can engage with sustainability and the climate threat, as many do with both energy and interest, the step to engaging with vulnerability and the cyber threat should not be far. The survival of the business will always be a business decision.

Jan Kallberg, Ph.D.

Demilitarize civilian cyber defense

A cyber crimes specialist with the U.S. Department of Homeland Security looks at the arms of a confiscated hard drive that he took apart. Once the hard drive is fixed, he will put it back together to extract evidence from it. (Josh Denmark/U.S. Defense Department)
U.S. Defense Department cyber units are incrementally becoming a part of the response to ransomware and system intrusions orchestrated from foreign soil. But diverting the military capabilities to augment national civilian cyber defense gaps is an unsustainable and strategically counterproductive policy.

The U.S. concept of cyber deterrence has failed repeatedly, which is especially visible in the blatant and aggressive SolarWinds hack, where the assumed Russian intelligence services, as commonly attributed in the public discourse, established a presence in our digital bloodstream. According to the Cyberspace Solarium Commission, cyber deterrence is established by imposing high costs on those who exploit our systems. As seen from the Kremlin, the cost must be close to nothing because there is blatantly no deterrence; otherwise, the Russian intelligence services would have refrained from hacking into the Department of Homeland Security.

After the robust mitigation effort in response to the SolarWinds hack, waves of ransomware attacks have continued. In recent years, especially after the Colonial Pipeline and JBS ransomware attacks, there has been increasing political and public demand for a federal response. The demand is rational; the public and businesses pay taxes and expect protection against foreign attacks, but using military assets is not optimal.

Presidential Policy Directive 41, titled “United States Cyber Incident Coordination,” from 2016 establishes the DHS-led federal response to a significant cyber incident. There are three thrusts: asset response, threat response and intelligence support. Assets are operative cyber units assisting impacted entities to recover; threat response seeks to hold the perpetrators accountable; and intelligence support provides cyberthreat awareness.

The operative response — the assets — is dependent on defense resources. The majority of the operative cyber units reside within the Department of Defense, including the National Security Agency, as the cyber units of the FBI and the Secret Service are limited.

In reality, our national civilian cyber defense relies heavily on defense assets. So what started with someone in an office deciding to click on an email with ransomware, locking up the computer assets of the individual’s employer, has suddenly escalated to a national defense mission.

The core of cyber operations is a set of tactics, techniques and procedures, which creates capabilities to achieve objectives in or through cyberspace. Successful offensive cyberspace operations are dependent on surprise — the exploitation of a vulnerability that was unknown or unanticipated — leading to the desired objective.

The political scientist Kenneth N. Waltz stated that nuclear arms’ geopolitical power resides not in what you do but in what you can do with these arms. Few nuclear deterrence analogies work in cyber, but Waltz’s does: as long as a potential adversary cannot assess what the cyber forces can achieve in offensive cyber, uncertainties will restrain the potential adversary. Over time, the adversary’s restrained posture consolidates to an equilibrium: cyber deterrence contingent on secrecy. Cyber deterrence evaporates when a potential adversary understands, through reverse engineering or observation, our tactics, techniques and procedures.
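As a minimal sketch of that logic (the point estimate, tolerance threshold, and normal-distribution belief model below are illustrative assumptions, not a claim about any actual doctrine), an attacker who must guess at our capability faces a growing chance that the response exceeds what it can tolerate as its uncertainty widens; it is that uncertainty, preserved by secrecy, that restrains it:

```python
# Toy model of deterrence contingent on secrecy: the attacker believes our
# offensive cyber capability C follows a normal distribution around its best
# guess. The wider the uncertainty, the larger the perceived chance that C
# exceeds what the attacker can tolerate, and the stronger the restraint.
# All numbers are illustrative assumptions.
import math

def prob_exceeds(mu: float, sigma: float, tolerance: float) -> float:
    """P(C > tolerance) for C ~ Normal(mu, sigma)."""
    return 0.5 * math.erfc((tolerance - mu) / (sigma * math.sqrt(2)))

BEST_GUESS = 50.0      # attacker's point estimate of our capability
TOLERANCE  = 80.0      # capability level the attacker believes it can absorb

for sigma in (5.0, 15.0, 30.0):   # secrecy preserved -> larger sigma
    p = prob_exceeds(BEST_GUESS, sigma, TOLERANCE)
    print(f"uncertainty sigma={sigma:>4}: P(response exceeds tolerance) = {p:.3f}")

# Once our tactics, techniques, and procedures are observed, sigma collapses,
# P(C > tolerance) falls toward zero, and the restraint evaporates.
```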

By constantly flexing the military’s cyber muscles to defend the homeland from inbound criminal cyber activity, the public demand for a broad federal response to illegal cyber activity is satisfied. Still, over time, bit by bit, the potential adversary will understand our military’s offensive cyber operations’ tactics, techniques, and procedures. Even worse, the adversary will understand what we cannot do and then seek to operate in the cyber vacuum where we have no reach. Our blind spots become apparent.

Offensive cyber capabilities are supported by the operators’ ability to retain and acquire ever-evolving skills. The more time the military cyber force spends tracing criminal gangs and bitcoins or defending targeted civilian entities, the less time the cyber operators have to train for and support military operations and, hopefully, deliver a strategic surprise to an adversary. Defending point-of-sale terminals from ransomware does not maintain the competence to protect weapon systems from hostile cyberattacks.

Even if the Department of Defense diverts thousands of cyber personnel, it cannot uphold a national cyber defense. U.S. gross domestic product is reaching $25 trillion; it is a target surface that requires more comprehensive solutions.

First and foremost, the shared burden to uphold the national cyber defense falls primarily on private businesses, states and local government, federal law enforcement, and DHS.

Second, even if DHS has many roles as a cyberthreat information clearinghouse and the lead agency at incidents, the department lacks a sizable operative component.

Third, establishing a DHS operative cyber unit carries a limited net cost, given the higher cost of military assets. When not engaged, the civilian unit can disseminate knowledge and train businesses as well as state and local governments to be a part of the national cyber defense.

Establishing a civilian federal asset response is necessary. The civilian response will replace the military cyber asset response, which returns to the military’s primary mission: defense. The move will safeguard military cyber capabilities and increase uncertainty for the adversary. Uncertainty translates to deterrence, leading to fewer significant cyber incidents. We can no longer surrender the initiative and be constantly reactive; it is a failed national strategy.

Jan Kallberg

Inflation – the hidden cyber security threat

 


Image: By Manuel Dohmen – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=185802

In cyberspace, the focus is on threats from malicious activity — a tangible threat. A less obvious threat to cyber is inflation, which undermines any cyber organization by eroding budgets and employee compensation. If not addressed, inflation can create unforeseen resignation rates and jeopardize ongoing cyber efforts and the U.S. Defense Department’s migration to cloud-based services. The competition for cloud security talent in the private sector is already razor-sharp.

There are different ways to build and maintain a cyber workforce: recruit, retrain and retain. The competition between the DoD and the private sector for talent will directly affect recruitment and retention. Inflation and the shortage of skilled cyber professionals create increasing competition between the federal and private sectors for much-needed talent. Retraining professionals to become a part of the cyber workforce is costly, and if the incentives to stay in the force are not in place, the effect is short-lived as retrained cyber talent moves on. Inflation creates a negative outlook for recruiting, retraining, and retaining cyber talent.

Inflation expectations in 2022 are the highest in decades, which will directly impact the cost of attracting and retaining a cyber workforce. Even if the peak inflation is temporary, driven by COVID-19 as well as disruptions in the supply chain and the financial markets, the pressure for increased compensation is a reality today.

What does it mean in practical terms?

According to the Wall Street Journal, salaries for white-collar professionals will increase in 2022 in the range of 10%, while the federal workforce can expect an increase of less than a third of the gains in the private sector. This growing salary gap is likely far more severe and exacerbated in the cyber workforce.

For example, browsing current job ads, a manager for incident response in Rhode Island is offered $150,000-$175,000 with the ability to work from home and zero commuting. A fair guess is that the federal equivalent sits on a GS pay scale at 20-30% less, with work taking place from 8:30 a.m. to 4:30 p.m. in a federal facility; not to mention cloud security, where large players such as Amazon Web Services are actively recruiting from the federal sector.
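A back-of-the-envelope sketch shows how quickly such a gap compounds. The starting salaries and raise rates below are assumptions taken from the ranges cited above (a private-sector offer near the middle of the quoted range, a federal salary roughly 25% lower, 10% private raises versus about a third of that federally), not actual pay data:

```python
# Back-of-the-envelope compounding of the public/private cyber salary gap.
# Starting points and raise rates are assumptions drawn from the ranges
# quoted in the text, not actual pay tables.
PRIVATE_START = 160_000    # near the middle of the $150k-$175k range
FEDERAL_START = 120_000    # roughly 25% below the private offer
PRIVATE_RAISE = 0.10       # ~10% annual increases cited for white-collar roles
FEDERAL_RAISE = 0.03       # roughly a third of the private-sector gains

private, federal = PRIVATE_START, FEDERAL_START
for year in range(1, 6):
    private *= 1 + PRIVATE_RAISE
    federal *= 1 + FEDERAL_RAISE
    gap = private - federal
    print(f"year {year}: private ${private:,.0f}  federal ${federal:,.0f}  gap ${gap:,.0f}")

# Under these assumptions the gap grows from about $40k today to over $118k
# in five years, which is the retention pressure described in the text.
```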

An increasing salary gap directly impacts recruitment, where the flow of qualified applicants dries up due to the compensation advantage of the private sector. Based on earlier data, the difference in salary will trigger decisions to seek early retirement from the DoD, to pursue a second civilian career or to leave federal service for the private sector as a civilian employee.

The flipside of an all-volunteer force is that in the same way service members volunteer to serve, individuals have the option at the end of their obligation to seek other opportunities instead of reenlistment. The civilian workforce can leave at will when the incentives line up.

Therefore, if we face several years of high inflation, it should not be a surprise that there is a risk for an increased imbalance in incentives between the public and the private sectors that favor the private sector.

The U.S. economy has not seen high inflation since the 1970s and early 1980s. In general, we are all inexperienced in dealing with steadily increasing costs and delays in budget adjustments. Inflation creates a punctured equilibrium for decision-makers and commanders that could force hard choices, such as downsizing, reorganization, and diluting the mission’s core goal due to an inability to deliver.

Money is easy to blame because it overshadows other, more complex questions, such as the soft choices that support cyber talent’s job satisfaction, sense of respect, and recognition. It is unlikely that public service can compete with the private sector on compensation in the coming years.

So to retain talent, it is essential to identify factors other than compensation that make cyber talent leave, and then mitigate the negative factors that lower the threshold for resignation.

Today’s popular phrase is “emotional intelligence.” It might be a buzzword, but if the DoD can’t compete with compensation, there needs to be a reason for cyber talent to apply and stay. In reality, inflation forces any organization that is not ready to outbid every competitor for talent to take a hard look at its employee relationships and what motivates its workforce to stay and be a part of the mission.

These choices might be difficult because they could force cultural changes in an organization. Whether it is dissatisfaction with bureaucracy, an unnecessarily rigid structure, genuinely low interest in adaptive change, one-sided career paths that fit the employer but not the employee, or whatever else might encourage cyber talent to move on, it needs to be addressed.

In a large organization like the DoD and the supporting defense infrastructure, numerous leaders are already addressing the fact that talent competition is not only about compensation, and they are building a broad, positive trajectory. Inflation intensifies the need to overhaul what attracts and retains cyber talent.

Jan Kallberg, Ph.D.

European Open Data can be Weaponized

In the discussion of great power competition and cyberattacks meant to slow down a U.S. strategic movement of forces to Eastern Europe, the focus has been on the route from fort to port in the U.S. But we tend to forget that once forces arrive at the major Western European ports of disembarkation, the distance from these ports to eastern Poland is the same as from New York to Chicago.

The increasing European release of public data — and the subsequent addition to the pile of open-source intelligence — is becoming concerning in regard to the sheer mass of aggregated information and what information products may surface when combining these sources. The European Union and its member states have comprehensive initiatives to release data and information from all levels of government in pursuit of democratic accountability and transparency. It becomes a wicked problem because these releases are good for democracy but can jeopardize national security.

I firmly believe we underestimate the significance of the available information that a potential adversary can easily acquire. If data is not available freely, it can, with no questions asked, be obtained at a low cost.

Let me present a fictitious case study to visualize the problem with the width of public data released:

In the High North, where the terrain often is either rocks or marshes, with few available routes for maneuver units, available data today will provide information about ground conditions; type of forest; density; and on-the-ground, verified terrain obstacles — all easily accessible geodata and forestry agency data. The granularity of the information is down to a few meters.

The data is innocent by itself, intended to limit environmental damage from heavy forestry equipment and to keep the forestry companies’ armies of tracked harvesters from getting stuck in unfavorable ground conditions. The concern is that the forestry data also provides a verified route map for any advancing armored column in a fait accompli attack seeking to avoid contact with the defender’s limited rapid-response units in pursuit of a deep strike.

Suppose the advancing adversary paves the way with special forces. In that case, a local government’s permitting and planning data, as well as open data from transportation authorities, will identify what to blow up, what to defend, and where it is ideal to ambush any defending reinforcements or logistics columns. Once the advancing armored column meets up with the special forces, unclassified and openly accessible health department inspections show where frozen food is stored; building permits show which buildings have generators; and environmental protection data points out where civilian fuels are stored, along with their grade and volume.

Now the advancing column can get ready for the next leg of the deep strike. Open data initiatives, “innocent” data releases, and broad commercialization of public information have nullified the rapid-response force’s ability to slow down or defend against the fait accompli attack, and these data releases have increased the velocity of the attack as well as the chance of the adversary’s mission success.

The governmental open-source intelligence problem is wicked. Any solution is problematic. An open democracy is a society that embraces accountability and transparency, and they are the foundations for the legitimacy, trust and consent of the governed. Restricting access to machine-readable and digitalized public information contradicts European Union Directive 2003/98/EC, which covers the reuse of public sector information — a well-established foundational part of European law based on Article 95 in the Maastricht Treaty.

The sheer volume of the released information, in multiple languages and from a variety of sources in separate jurisdictions, increases the difficulty of foreseeing any hostile utilization of the released data, which increases the wickedness of the problem. Those jurisdictions’ politics also come into play, which does not make it easier to trace a viable route to ensure a balance between a security interest and a democratic core value.

The initial action to address this issue, and embedded weakness, needs to involve both NATO and the European Union, as well as their member states, due to the complexity of multinational defense, the national implementation of EU legislation and the ability to adjust EU legislation. NATO and the EU have a common interest in mitigating the risks with massive public data releases to an acceptable level that still meets the EU’s goal of transparency.

Jan Kallberg, Ph.D.

Our Critical Infrastructure – Their Cyber Range

There is a risk that we overanalyze attacks on critical infrastructure and try to find a strategic intent where there is none. Our potential adversaries, in my view, could attack critical American infrastructure for reasons other than executing a national strategy. In many cases, it can be as simple as hostile totalitarian nations that do not respect international humanitarian law using critical American infrastructure as a cyber range. Naturally, the focus of their top-tier operators is on conducting missions within the strategic direction, but lower-echelon operators can use foreign critical infrastructure as a training ground. If the political elite sanctions these actions, nothing stops a rogue nation from attacking our power grid, waterworks, and public utilities to train its future advanced cyber operators. The end game is not the critical infrastructure itself – but critical infrastructure provides an educational opportunity.

Naturally, we have to defend critical infrastructure because, by doing so, we protect the welfare of the American people and the functions of our society. That said, just because it is vital for us doesn’t automatically mean that it is crucial for the adversary.

Cyberattacks on critical infrastructure can have different intents. There is a similarity between cyber and national intelligence; both try to make sense of limited information when looking into a denied information environment. In reality, our knowledge of the strategic intent and goals of our potential adversaries is limited.

We can study the adversary’s doctrine, published statements, tactics, techniques, and events, but significant gaps remain in understanding the intent behind the attacks. We assess the adversary’s strategic intent from the outside, which often amounts to qualified guesses, with all the uncertainty that comes with them. Many times, logic and past behavior are the only guidance for assessing strategic intent. Nation-state actors tend to seek a geopolitical end goal: to change policy, destabilize the target nation, or acquire information they can use for their benefit.

Attacks on critical infrastructure make news headlines, and for a less able potential adversary, they can serve as a way to show an internal audience that it can threaten the United States. In 2013, Iranian hackers broke into the control system of a dam in Rye Brook, N.Y. The actual damage was limited due to circumstances the hackers did not know about: maintenance procedures underway at the facility limited the risk of broader damage.

The limited intrusion into the control system made national news and engaged the State of New York, elected officials, the Department of Justice, the Federal Bureau of Investigation, the Department of Homeland Security, and several more agencies. Time magazine ran the headline “Iranian Cyber Attack on New York Dam Shows Future of War.”

When attacks occur on critical domestic infrastructure, it is not a given that there is a strategic intent to damage the U.S.; the attacks can also be a message to the attacker’s population that their country can strike the Americans in their homeland. For a geopolitically inferior country that seeks to be a threat and a challenger to the U.S., such as Iran or North Korea, the massive American reaction to a limited attack on critical infrastructure serves its purpose. The attacker has shown its domestic audience that it can shake the Americans, especially when U.S. authorities attributed the attack to Iranian hackers, which made it easier to present as news for the Iranian audience. Cyber-attacks become a risk-free way of picking a fight with the Americans without risking escalation.
Numerous cyber-attacks on critical American infrastructure could simply be a way to harass American society, with no other justification than that hostile authoritarian senior leaders use them as an outlet for their frustration and anger against the U.S.

Attackers seeking to maximize civilian hardship as a tool to bring down a targeted society have historically faced the reverse reaction. The German bombing of civilian targets during the 1940s air campaign, the Blitz, only hardened British resistance against the Nazis. An attacker needs to take into consideration the potential fallout of a significant attack on critical infrastructure. The reactions to Pearl Harbor and 9/11 show that there is a risk for any adversary in attacking the American homeland and that such an attack might unify American society instead of injecting fear and forcing submission to foreign will.

Critical infrastructure is a significant attack vector to track and defend. Still, cyberattacks on U.S. critical infrastructure create massive and often predictable reactions, which are in themselves a vulnerability if orchestrated by an adversary following the Soviet/Russian concept of reflexive control.

The War Game Revival

 

The sudden fall of Kabul, when the Afghan government imploded in a few days, shows how hard it is to predict and assess future developments. War games have had a revival in recent years as a means to better understand potential geopolitical risks. War games are tools to support our thinking and force us to accept that developments we did not anticipate can happen, but games also have a flip side. War games can act as afterburners for our confirmation bias and inward, self-confirming thinking. Would an Afghanistan-focused wargame designed two years ago have included a governmental implosion within a few days as a potential outcome? Maybe not.

Awareness of how bias plays into the games is key to success. The wargaming revival occurs for a good reason. Well-designed war games make us better thinkers; the games can be a cost-effective way to simulate various outcomes, and you can go back and repeat the game with lessons learned.
Wargames are rules-driven; the rules create the mechanical underpinnings that decide outcomes, either success or failure. Rules are condensed assumptions. Therein resides a significant vulnerability. Are we designing games that operate within the realm of our own aggregated bias?
We operate in large organizations that have modeled how things should work. The timely execution of missions is predictable according to doctrine. In reality, things don’t play out the way we planned; we know it, but the question is, how do you quantify a variety of outcomes and codify them into rules?

Our war games and lessons learned from war games are never perfect. The games are intellectual exercises to think about how situations could unfold and how to deal with the results. In the interwar years, the U.S. made the right decision to focus on Japan as a potential adversary. Significant time and effort went into war planning based on studies and wargames that simulated the potential Pacific fight. The U.S. assumed one major decisive battle between the U.S. Navy and the Imperial Japanese Navy, where lines of battleships fought it out at a distance. In the plans, that was the crescendo of the Pacific war. The plans missed the technical advances and the importance of airpower, aircraft carriers, and submarines. Who was setting up the wargames? Who created the rules? A cadre of officers who had served in the surface fleet and knew how large ships fought. There is naturally more to the story of interwar war planning, but as an example, this short comment serves its purpose.

How do we avoid creating war games that only confirm our predispositions and lure us into believing that we are prepared – instead of presenting the war we will have to fight?

How do you incorporate all these uncertainties into a war game? Naturally, it is impossible to capture them all, but keeping the biases at least partly mitigated ensures value.

Studying historical battles can also give insights. In the 1980s, sizeable commercial war games featured massive maps, numerous die-cut unit counters, and hours of playtime. One of these games was SPI’s “Wacht am Rhein,” a game covering the Battle of the Bulge from start to end. The game visualizes one thing – it doesn’t matter how many units you can throw into battle if they are stuck in a traffic jam. Historical war games can teach us lessons that need to be maintained in our memory to avoid repeating the mistakes of the past.

Bias in wargame design is hard to root out. The viable way forward is to challenge the assumptions and the rules. Outsiders do it better than insiders because they will see the “officially ignored” flaws. These outsiders must be cognizant enough to understand the game but have minimal ties to the outcome, so they are free to voice their opinions. There are experts out there. Commercial lawyers challenge assumptions and are experts in asking questions. It can be worth a few billable hours to ask them to find the flaws. Colleagues are not suitable to challenge the “officially ignored” flaws because they are marinated in the ideas that established those flaws. Academics dependent on DOD funding could gravitate toward accepting the “officially ignored” flaws; that is just fundamental human behavior, and the fewer ties to the initiator of the game, the better.

Another way to address uncertainty and bias is repeated games. In the first game, cyber has the effects we anticipate. In the second game, cyber has limited effect and turns out to be an operative dud. In the third game, cyber effects proliferate and have a more significant impact than we anticipated. I use these quick examples to show that there is value in repeated games. The repeated games become a journey of realization and afterthoughts due to the variety of factors and outcomes. Afterward, we can use our logic and understanding to arrange the outcomes to better understand reality. The repeated games limit the range and impact of any specific bias due to the variety of conditions.
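A minimal sketch of that idea (the scenario names, probabilities, and outcome model below are invented assumptions, not an actual wargame ruleset): run the same simple engagement many times under three different assumptions about cyber effectiveness and compare how the distribution of outcomes shifts, rather than trusting a single game built on a single set of rules.

```python
# Toy repeated-game sketch: the same simple engagement is replayed under three
# different assumptions about how much cyber effects help the attacker.
# All probabilities and modifiers are invented for illustration.
import random

random.seed(42)

BASE_SUCCESS = 0.45           # attacker's success chance with no cyber effect
SCENARIOS = {
    "cyber as anticipated": 0.15,   # adds 15 points to the success chance
    "cyber is a dud":       0.00,   # no effect at all
    "cyber over-performs":  0.35,   # far larger effect than planned for
}
GAMES = 10_000

for name, cyber_bonus in SCENARIOS.items():
    wins = sum(random.random() < BASE_SUCCESS + cyber_bonus for _ in range(GAMES))
    print(f"{name:22} attacker succeeds in {wins / GAMES:.0%} of {GAMES} games")

# A single game under a single assumption would have hidden how sensitive the
# outcome is to the cyber assumption baked into the rules.
```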

The revival of wargaming is needed because wargaming can be a low-cost, high-return intellectual endeavor. Hopefully, we can navigate away from the risks of groupthink and confirmation bias embedded in poor design. The intellectual journey that the war games take us on will make our current and future decision-makers better equipped to understand an increasingly complex world.

 

Jan Kallberg, Ph.D.