Category Archives: Cyber Defense

THE WEAPONIZED MIND

As an industrial nation transitioning to an information society and digital conflict, we tend to see technology and the information that feeds it as the weapons – and ignore the few humans with a large-scale operational impact. In my view, we underestimate the importance of applicable intelligence – the intelligence of applying things in the correct order. The ability to apply is a far more important asset than the technology itself. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are mostly publicly available; anyone can download them from the Internet and use them. The weaponization of the tools occurs when they are used by someone who understands how to play them in an optimal order.
General Nakasone stated in 2017: "our best ones (coders) are 50 or 100 times better than their peers," and continued, "Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers."

In reality, the success of cyber and cyber operations depends not on the tools or toolsets but on the super-empowered individual that General Nakasone calls "the 50-x coder."

In my experience in cybersecurity, a field now migrating into the broader cyber domain, there have always been exceptional individuals with an ability that cannot be replicated: they see the challenge early on, create a technical solution, and know how to play it in the right order for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of artificial intelligence increases the reliance on these highly able individuals – because someone must set the rules and the boundaries and point out the trajectory for artificial intelligence at the initiation. This raises a series of questions. Even if identified as a weapon, how do you make a human mind "classified"?

How do we protect these high-ability individuals who, in the digital world, are weapons, not as tools but as compilers of capability?

These minds are different because they see an opportunity to exploit in the digital fog of war when others do not. They address problems in new, innovative ways, unburdened by traditional thinking, maximize the dual-purpose nature of digital tools, and can generate decisive cyber effects.
It is the applicable intelligence (AI) that creates the process, the application of tools, and turns simple digital software, in sets or combinations, into digitally lethal weapons – the intelligence to mix, match, tweak, and arrange dual-purpose software. To borrow an example from the analog world: it is as if you had individuals with the supernatural ability to create a hypersonic missile from what you can find at Kroger or Albertsons. For a nation, these individuals are strategic national security assets.
These intellects are weapons of growing strategic magnitude as the combat environment gains complexity, velocity, a growing target surface, and great uncertainty.
For the last decades, our efforts have instead focused on what these individuals deliver, the application and the technology, which was hidden in secret vaults and only discussed in sensitive compartmented information facilities. We classify these individuals' output at the highest level to ensure the confidentiality and integrity of our cyber capabilities. Meanwhile, we put no value on the most critical component, the militarized intellect, because it is a human. In a society marinated in an engineering mindset, humans are like desk space, electricity, and broadband: a commodity that is an input to the production of technical machinery. The marveled-at technical machinery is the only thing we care about today, in 2019, and we do not protect our elite militarized brains enough.
At a systemic level, we are unable to see humans as the weapon itself, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed. Arms are made of steel, or fancier metals, with electronics – we fail to see weapons made of sweet 'tater, corn, steak, and an added combative intellect.

The WWII Manhattan Project had at its peak 125,000 workers on the payroll, but the intellects that drove the project to success and completion were few. The difference between the Manhattan Project and the future of cyber is that Oppenheimer and his team had to rely on a massive industrial effort to provide them with the input material to create a weapon. In cyber, the intellect is the weapon, and the tools are delivery platforms. The tools, the delivery platforms, are free, downloadable, and easily accessed. It is the power of the mind that is unique.

We need to see the human as a weapon, avoiding being locked in by our path dependency as an engineering society where we hail the technology and forget the importance of the humans behind it. America's endless love of technical innovations and advanced machinery is reflected in a nation that has embraced mechanical wonders and engineered solutions since its creation.

For America, technological wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies, followed by the Erie Canal, the manufacturing era, the moon landing, all the way to autonomous systems, drones, and robots. In the default mindset, a tool, an automated process, a piece of software, or a set of technical steps can solve a problem or act. The same mindset sees humans merely as an input to technology, so humans are interchangeable and can be replaced.

The super-empowered individuals are not interchangeable and cannot be replaced, unless we want to be stuck in a digital war at speeds we don't understand, unable to play it in the right order, and with limited intellectual torque to see through the fog of war provided by an exploding kaleidoscope of nodes and digital engagements. Artificial intelligence and machine learning support the intellectual endeavor to cyber defend America, but in the end, it is humans who set the strategy and direction. It is time to see weaponized minds for what they are; they are not dudes and dudettes but strike capabilities.

Jan Kallberg, Ph.D.

NATO: The Growing Alliance and the Insider Risks

The alliance has not properly considered the risks emanating from the half-hearted or hostile within the organization.

During the Cold War, the insider threat to the transatlantic alliance was either infiltration by the Warsaw Pact or some form of theft. The central focus was on counterintelligence, and the main enemy was Soviet espionage. Today, in 2023, the insider threat is not only spies and sabotage; it is any misalignment that undermines the mission and its ability to conclude its tasks successfully. Regretfully, that can mean some member states are the issue. This is, of course, a problem of success. The alliance is growing — Finland's entry on April 4, making it member state number 31, was a wonderful moment, reflecting the free choice of a representative democracy to seek the security offered by military alliance with its fellows.

Offensive Cyber in Outer Space

The most cost-effective and simplistic cyber attack in outer space, with the intent to bring down a targeted space asset, is likely to use space junk that still has fuel and still responds to communications – and use it to ram or force targeted space assets out of orbit. The benefits for the attacker: it is hard to attribute, it is low cost, and if the attacker has no use for the space terrain, the attacker also gains anti-access/area denial through the space debris created by a collision.

CEPA Article: Russia Won’t Play the Cyber Card, Yet

My article from CEPA (Center for European Policy Analysis); the full text is available on CEPA's site.

In reality, the absence of cyber-attacks beyond Ukraine indicates a very rational Russian fear of disclosing and compromising capabilities beyond its own. That is the good news. The bad news is that the absence of a cyber-offensive does not mean these advanced capabilities do not exist.

From the text:

“The recent cyberattacks in Ukraine have been unsophisticated and have had close to no strategic impact. The distributed denial-of-service (DDoS) cyber-attacks are low-end efforts, a nuisance that most corporations already have systems to mitigate. Such DDoS attacks will not bring down a country or force it to submit to foreign will. These are very significantly different from advanced offensive cyber weapons. Top-of-the-range cyber weapons are designed to destroy, degrade, and disrupt systems, eradicate trust and pollute data integrity. DDoS and website defacements do not even come close in their effects.

A Russian cyber-offensive would showcase its full range of advanced offensive cyber capabilities against Ukraine, along with its tactics, techniques, and procedures (TTP), which would then be compromised. NATO and other neighboring nations, including China and Iran, would know the extent of Russian capabilities and have effective insights into Russia’s modus operandi.

From a Russian point of view, if a potential adversary understood its TTP, strategic surprise would evaporate, and the Russian cyber force would lose the initiative in a more strategically significant future conflict.

Understanding the Russian point of view is essential because it is the Russians who conduct their offensive actions. This might sound like stating the obvious, but currently, the prevailing conventional wisdom is a Western think-tank-driven context, which in my opinion, is inaccurate. There is nothing for the Russians to strategically gain by unleashing their full, advanced cyber arsenal against Ukraine or NATO at this juncture. In an open conflict between Russia and NATO, the Kremlin’s calculation would be different and might well justify the use of advanced cyber capabilities.

In reality, the absence of cyber-attacks beyond Ukraine indicates a very rational Russian fear of disclosing and compromising capabilities beyond its own. That is the good news. The bad news is that the absence of a cyber-offensive does not mean these advanced capabilities do not exist.”

Jan Kallberg

Ukraine: Russia will not waste offensive cyber weapons

An extract from my latest article at CyberWire – read the full article at CyberWire.

When Russia’s strategic calculus would dictate major cyber attacks.

Russia will use advanced strategic cyber at well-defined critical junctures. For example, as a conflict in Europe unfolded and dragged in NATO, Russian forces would seek to delay the entry of major US forces through cyber attacks against railways, ports, and electric facilities along the route to the port of embarkation. If US forces can be delayed by one week, that is one week of a prolonged time window in Europe before the main US force arrives, which would enable the submarines of the Northern Fleet to be positioned in the Atlantic. Strategic cyber supports strategic intent and actions.

Not all cyber-attacks are the same, and just because an attack originates from Russia doesn't mean it is directed by strategic intent.

Naturally, the Russian regime would allow cyber vandalism and cybercrime against the West to run rampant because these are ways of striking the adversary. But these low-end activities do not represent the Russian military complex’s cyber capabilities, nor do they reflect the Russian leadership’s strategic intent.

The recent cyberattacks in Ukraine have been unsophisticated and have had close to no strategic impact. The distributed denial-of-service (DDoS) cyber-attacks are low-end efforts, a nuisance that most corporations already have systems to mitigate. Such DDoS attacks will not bring down a country or force it to submit to foreign will. Such low-end attacks don’t represent advanced offensive cyber weapons: the DDoS attacks are limited impact cyber vandalism. Advanced offensive cyber weapons destroy, degrade, and disrupt systems, eradicate trust and pollute data integrity. DDoS and website defacements are not even close to this in their effects. By making DDoS attacks, whether it’s the state that carried them out or a group of college students in support of Kremlin policy, Russia has not shown the extent of its offensive cyber capability.

The invasion of Ukraine is not the major peer-to-peer conflict that is the central Russian concern. The Russians have tailored their advanced cyber capabilities to directly impact a more significant geopolitical conflict, one with NATO or China. Creating a national offensive cyber force is a decades-long investment in training, toolmaking, reconnaissance of possible avenues of approach, and detection of vulnerabilities. If Russia showcased its full range of advanced offensive cyber capabilities against Ukraine, the Russian tactics, techniques, and procedures (TTP) would be compromised. NATO and other neighboring nations, including China and Iran, would know the extent of Russian capabilities and have effective insight into Russia’s modus operandi.

From a Russian point of view, if a potential adversary understood Russian offensive cyber operations’ tactics, techniques, and procedures, strategic surprise would evaporate, and the Russian cyber force would lose the initiative in a more strategically significant future conflict.

Understanding the Russian point of view is essential, because it is the Russians who conduct their offensive actions. This might sound like stating the obvious, but currently, the prevailing conventional wisdom is a Western think-tank-driven context, which, in my opinion, is inaccurate. There is nothing for the Russians to strategically gain by unleashing their full advanced cyber arsenal against Ukraine or NATO at this juncture. In an open conflict between Russia and NATO, the Russian calculation would be different and might justify the use of advanced cyber capabilities.

End of abstract – read the full article at CyberWire.

Demilitarize civilian cyber defense

(Photo caption: A cyber crimes specialist with the U.S. Department of Homeland Security examines a confiscated hard drive he took apart; once repaired, it will be reassembled to extract evidence. Josh Denmark/U.S. Defense Department)
U.S. Defense Department cyber units are incrementally becoming a part of the response to ransomware and system intrusions orchestrated from foreign soil. But diverting the military capabilities to augment national civilian cyber defense gaps is an unsustainable and strategically counterproductive policy.

The U.S. concept of cyber deterrence has failed repeatedly, which is especially visible in the blatant and aggressive SolarWinds hack, where the assumed Russian intelligence services, as commonly attributed in the public discourse, established a presence in our digital bloodstream. According to the Cyberspace Solarium Commission, cyber deterrence is established by imposing high costs on those who exploit our systems. As seen from the Kremlin, the cost must be nothing, because there is blatantly no deterrence; otherwise, the Russian intelligence services would have refrained from hacking into the Department of Homeland Security.

After the robust mitigation effort in response to the SolarWinds hack, waves of ransomware attacks have continued. In recent years, especially after the Colonial Pipeline and JBS ransomware attacks, there has been an increasing political and public demand for a federal response. The demand is rational; the public and businesses pay taxes and expect protection against foreign attacks, but using military assets is not optimal.

Presidential Policy Directive 41, titled “United States Cyber Incident Coordination,” from 2016 establishes the DHS-led federal response to a significant cyber incident. There are three thrusts: asset response, threat response and intelligence support. Assets are operative cyber units assisting impacted entities to recover; threat response seeks to hold the perpetrators accountable; and intelligence support provides cyberthreat awareness.

The operative response — the assets — is dependent on defense resources. The majority of the operative cyber units reside within the Department of Defense, including the National Security Agency, as the cyber units of the FBI and the Secret Service are limited.

In reality, our national civilian cyber defense relies heavily on defense assets. So what started with someone in an office deciding to click on an email with ransomware, locking up the computer assets of the individual’s employer, has suddenly escalated to a national defense mission.

The core of cyber operations is a set of tactics, techniques and procedures, which creates capabilities to achieve objectives in or through cyberspace. Successful offensive cyberspace operations are dependent on surprise — the exploitation of a vulnerability that was unknown or unanticipated — leading to the desired objective.

The political scientist Kenneth N. Waltz stated that nuclear arms' geopolitical power resides not in what you do but in what you can do with these arms. Few nuclear deterrence analogies work in cyber, but Waltz's does: as long as a potential adversary cannot assess what our cyber forces can achieve offensively, uncertainty will restrain that adversary. Over time, the adversary's restrained posture consolidates into an equilibrium: cyber deterrence contingent on secrecy. Cyber deterrence evaporates when a potential adversary understands, through reverse engineering or observation, our tactics, techniques and procedures.

By constantly flexing the military's cyber muscles to defend the homeland from inbound criminal cyber activity, the public demand for a broad federal response to illegal cyber activity is satisfied. Still, over time, bit by bit, the potential adversary will understand our military's offensive cyber tactics, techniques and procedures. Even worse, the adversary will understand what we cannot do and then seek to operate in the cyber vacuum where we have no reach. Our blind spots become apparent.

Offensive cyber capabilities are supported by the operators' ability to retain and acquire ever-evolving skills. The more time the military cyber force spends tracing criminal gangs and bitcoins or defending targeted civilian entities, the less time the cyber operators have to train for and support military operations and, hopefully, be able to deliver a strategic surprise to an adversary. Defending point-of-sale terminals from ransomware does not maintain the competence to protect weapon systems from hostile cyberattacks.

Even if the Department of Defense diverts thousands of cyber personnel, it cannot uphold a national cyber defense. U.S. gross domestic product is reaching $25 trillion; that is a target surface that requires more comprehensive solutions.

First and foremost, the shared burden to uphold the national cyber defense falls primarily on private businesses, states and local government, federal law enforcement, and DHS.

Second, even if DHS has many roles as a cyberthreat information clearinghouse and the lead agency at incidents, the department lacks a sizable operative component.

Third, establishing a DHS operative cyber unit comes at a limited net cost, given the higher cost of military assets. When not engaged, the civilian unit can disseminate knowledge and train businesses as well as state and local governments to be a part of the national cyber defense.

Establishing a civilian federal asset response is necessary. The civilian response will replace the military cyber asset response, which returns to the military’s primary mission: defense. The move will safeguard military cyber capabilities and increase uncertainty for the adversary. Uncertainty translates to deterrence, leading to fewer significant cyber incidents. We can no longer surrender the initiative and be constantly reactive; it is a failed national strategy.

Jan Kallberg

CYBER IN THE LIGHT OF KABUL – UNCERTAINTY, SPEED, ASSUMPTIONS


There is a similarity between cyber and the intelligence community (IC) – both deal with a denied environment where we have to assess the adversary based on limited verifiable information. The recent events in Afghanistan, with the Afghan government and its military imploding, and the events that followed, were unanticipated and ran against the ruling assumptions. The assumptions were off, and the events that unfolded were unprecedented and fast. The Afghan security forces evaporated in ten days facing a far smaller enemy, leading to a humanitarian crisis. There is no blame in any direction; it is evident that this was not the expected trajectory of events. But still, in my view, there is a lesson to be learned from the events in Kabul that applies to cyber.

The high degree of uncertainty, the speed in both cases, and our reliance on assumptions, not always vetted beyond our inner circles, make the analogy work. According to the media, in Afghanistan, there was no clear strategy to reach a decisive outcome. You could say the same about cyber. What is a decisive cyber outcome at a strategic level? Are we just staring at tactical noise, from ransomware to unsystematic intrusions, when we should try to figure out the big picture instead?

Cyber is loaded with assumptions that we have, over time, accepted. The assumptions become our path-dependent trajectory, and in the absence of a grand nation-state-on-nation-state cyber conflict, the assumptions remain intact. The only reason why cyber's failed assumptions have not yet surfaced is the absence of full cyber engagement in a conflict. There is a creeping assumption that senior leaders will lead future cyber engagements; meanwhile, the data shows that the increased velocity of the engagements could nullify the time window for leaders to lead. Why do we want cyber leaders to lead? It is just how we do business; that is why we traditionally have senior leaders. John Boyd's OODA loop (Observe, Orient, Decide, Act) has had a renaissance in cyber over the last three years. The increased velocity, with the support of more capable hardware, machine learning, artificial intelligence, and massive data utilization, makes it questionable whether there is time for senior leaders to lead traditionally. The risk is that senior leaders are stuck in the first O of the OODA loop, just observing, or at best in the second O, orienting. It might be that there is no time to lead because events unfold faster than our leaders can decide and act. The way technology is developing, I have a hard time believing that there will be any significant senior leader input at critical junctures because the time window is so narrow.
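
To make the timing argument concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers, one automated engagement step every 100 milliseconds and a ten-minute human decision cycle, are illustrative assumptions rather than measured values; the point is only the ratio between machine-speed engagements and a traditional command cycle.

# Illustrative only: compares an assumed machine-speed engagement tempo with an
# assumed human OODA (Observe, Orient, Decide, Act) cycle. All numbers are
# hypothetical, chosen to show the ratio, not measurements.

MACHINE_STEP_SECONDS = 0.1          # assumed: one automated engagement step every 100 ms
HUMAN_OODA_CYCLE_SECONDS = 10 * 60  # assumed: ten minutes for a leader to observe,
                                    # orient, decide, and act on one report

steps_before_decision = HUMAN_OODA_CYCLE_SECONDS / MACHINE_STEP_SECONDS
print(f"Automated engagement steps completed during one human decision cycle: "
      f"{steps_before_decision:,.0f}")
# Under these assumptions, roughly 6,000 machine-speed exchanges occur before a
# single human-led decision lands; the leader is still in the first O.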

Leaders will always lead by expressing intent, and that might be the only thing left. Instead of precise orders, do we train leaders and subordinates to be led by intent as a form of decentralized mission command?

Another dominant cyber assumption is critical infrastructure as the likely attack vector. In the last five years, the default assumption in cyber has been that critical infrastructure is a tremendous national cyber risk. That might be correct, but there are numerous other assumptions. In 1983, the Congressional Budget Office (CBO) defined critical infrastructure as "highways, public transit systems, wastewater treatment works, water resources, air traffic control, airports, and municipal water supply." By the Patriot Act of 2001, the scope had grown to include "systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters." By 2013, in Presidential Policy Directive 21 (PPD-21), the scope had widened even further to encompass almost all of society. That concession stands at ballparks today count as critical infrastructure, together with thousands of other non-critical functions, shows a mission drift that undermines a national cyber defense. There is no guidance on what to prioritize and what we might have to live without at a critical juncture. The question is whether critical infrastructure matters to our potential adversaries as an attack vector, or whether it is critical infrastructure only because it matters to us. A potential adversary might want to attack infrastructure around American military facilities and slow down the transportation apparatus from bases to the port of embarkation (POE) to delay the arrival of U.S. troops in theater. Or the adversary might make a different assessment, concluding that tampering with the American homeland only strengthens the American will to fight and popular support for a conflict. The potential adversary might also utilize our critical infrastructure as a capture-the-flag training ground to train its offensive teams, but such activity has no strategic intent.

As broad as the definition is today, it is likely that the focus on critical infrastructure reflects what concerns us instead of what the adversary considers essential for them to reach strategic success. So today, when we witnessed the unprecedented events in Afghanistan, where it appears that our assumptions were off, it is good to keep in mind that cyber is heavy with untested assumptions. In cyber, what we know about the adversary and their intent is limited. We make assumptions based on the potential adversaries’ behavior and doctrine, but it is still an assumption.
So the failures to correctly assess Afghanistan should be a wake-up call for the cyber community, which also relies on unvalidated information.

The long-term cost of cyber overreaction

The default modus operandi when facing negative cyber events is to react, which often leads to an overreaction. It is essential to highlight the cost of overreaction, which needs to be part of the calculation of when and how to engage. For an adversary probing cyber defenses, reactions provide information that can be aggregated into a clear picture of the defender's capabilities and preauthorization thresholds.

Ideally, potential adversaries cannot assess our strategic and tactical cyber capacities, but over time and numerous responses, the information advantage evaporates. A reactive culture triggered by cyberattacks provides significant information to a probing adversary, which seeks to understand underlying authorities and tactics, techniques and procedures (TTP).

The more we act, the more the potential adversary understands our capacity, ability, techniques, and limitations. I am not advocating a passive stance, but I want to highlight the price of acting against a potential adversary. With each reaction, that competitor gains certainty about what we can do and how. The political scientist Kenneth N. Waltz said that the power of nuclear arms resides in what you could do and not in what you do. A large part of cyber force strength resides in the uncertainty about what it can do, which should be difficult for a potential adversary to assess and gauge.

Why does it matter? In an operational environment where adversaries operate under the threshold for open conflict, in sub-threshold cyber campaigns, an adversary will probe in order to determine the threshold and to ensure that it can operate effectively in the space below it. If a potential adversary cannot gauge the threshold, it will curb its activities, as its cyber operations must remain adequately distanced from a potential, unknown threshold to avoid unwanted escalation.
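
A toy model can make this information-leakage dynamic concrete. The sketch below (in Python) assumes a single hidden response threshold on a 0-to-1 intensity scale and an adversary that probes by bisection; the threshold value and the probing rule are illustrative assumptions, not a description of any real authority or adversary.

# A toy model (with made-up numbers) of how each observed defender reaction
# narrows an adversary's estimate of the response threshold. The hidden
# threshold and the bisection probing rule are illustrative assumptions.

HIDDEN_THRESHOLD = 0.62   # assumed: the defender reacts to any probe above this intensity

low, high = 0.0, 1.0      # the adversary's initial uncertainty: threshold lies somewhere in [0, 1]
for probe_number in range(1, 9):
    probe = (low + high) / 2          # probe at the midpoint of the remaining uncertainty
    defender_reacts = probe >= HIDDEN_THRESHOLD
    if defender_reacts:
        high = probe                  # a reaction reveals the threshold is at or below this probe
    else:
        low = probe                   # silence reveals the threshold is above this probe
    print(f"probe {probe_number}: intensity={probe:.3f} "
          f"reaction={defender_reacts} remaining uncertainty={high - low:.3f}")
# Eight observed reactions shrink the uncertainty from 1.0 to under 0.004,
# letting the adversary operate just below the threshold with confidence.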

Cyber was doomed to be reactionary from its inception; its inherited legacy from information assurance creates a focus on trying to defend, harden, detect and act. The concept is defending, and when the defense fails, it rapidly swings to reaction and counteractivity. Naturally, we want to limit the damage and secure our systems, but we also leave a digital trail behind every time we act.

In game theory, proportional responses lead to tit-for-tat games with no decisive outcome. The lack of a desired end state in a tit-for-tat game is essential to keep in mind as we discuss persistent engagement. In the same way as Colin Powell reflected on the conflict in Vietnam, operations without an endgame or a concept of what decisive victory looks like are engagements for the sake of engagements. Even worse, a tit-for-tat game with continuous engagements might be damaging, as it trains potential adversaries, who can copy our TTPs, to fight in cyber. Proportionality becomes a constant flow of responses that reveals friendly capabilities and makes potential adversaries more able.

There is no straight answer to how to react. A disproportional response to specific events increases the risk for the potential adversary, but it cuts both ways, as a disproportional response could also create unwanted escalation.

The critical concern is that, to maintain the ability to conduct decisive cyber operations for the nation, the extent of friendly cyber capabilities needs almost intact secrecy to prevail at a critical juncture. It might be time to put a stronger emphasis on intelligence gain/loss (IGL) assessments to answer the question of whether the defensive gain now outweighs the potential loss of ability and options in the future.

The habit of overreacting to ongoing cyberattacks undermines the ability to quickly and surprisingly engage and defeat an adversary when it matters most. Continuously reacting and flexing the capabilities might fit the general audience’s perception of national ability, but it can also undermine the outlook for a favorable geopolitical cyber endgame.

After twenty years of cyber – still uncharted territory ahead

The general notion is that much of the core understanding in cyber is in place. I would like to challenge that perception. There are still vast territories of the cyber domain that need to be researched, structured, and understood. To use Winston Churchill's words: it is not the beginning of the end; it is maybe the end of the beginning. It is obvious to me, in my personal opinion, that the cyber journey is still very early; the cyber field has yet to mature, and big building blocks for the future cyber environment are not in place. The Internet and the networks that support it have grown dramatically over the last decade. Even if the growth of cyber is stunning, the actual advances are not as impressive.

In the last 20 years, cyber defense, and cyber as a research discipline, have grown from almost nothing to major national concerns and the recipient of major resources. In the winter of 1996-1997, there were four references to cyber defense in the search engine of that day: AltaVista. Today, there are about 2 million references in Google. Knowledge of cyber has not developed at the same rapid rate as the interest, concern, and resources.

The cyber realm is still struggling with basic challenges such as attribution. Traditional topics in political science and international relations — such as deterrence, sovereignty, borders, the threshold for war, and norms in cyberspace — are still under development and discussion. From a military standpoint, there is still a debate about what cyber deterrence would look like, what the actual terrain and maneuverability are like in cyberspace, and who is a cyber combatant.

The traditional combatant problem becomes even more complicated because the clear majority of the networks and infrastructure that could be engaged in potential cyber conflicts are civilian — and the people who run these networks are civilians. Add to that mix the future reality with cyber: fighting a conflict at machine speed and with limited human interaction.

Cyber raises numerous questions, especially for national and defense leadership, due to its nature. There are benefits with cyber – it can be used as a softer policy option with a global reach that does not require pre-positioning or weeks of getting assets in the right place for action. The problem occurs when you reverse the global reach and an asymmetric fight occurs, when the global adversaries of the United States can strike, utilizing cyber arms and attacks, deep into the most granular particle of our society – the individual citizen. Another question that raises concern is the matter of time. Cyber attacks and conflicts can be executed at machine speed, which is beyond human ability to lead and to comprehend what is actually happening. This shows that cyber as a field of study is in its early stages, even if we have astronomic growth in networked equipment, nodes, and the sheer volume of transferred information. We have massive activity on the Internet and in networks, but we are not fully able to utilize it or even structurally understand what is happening at a system level and in a grander societal setting. I believe that it could take until the mid-2030s before many of the basic elements of cyber have become accepted, structured, and understood, and before we have a global framework. Therefore, it is important to invest in cyber research and make discoveries now rather than face strategic surprise. Knowledge is weaponized in cyber.

Jan Kallberg, PhD

Cognitive Force Protection – How to protect troops from an assault in the cognitive domain

(Co-written with COL Hamilton)

Jan Kallberg and Col. Stephen Hamilton

Great power competition will require force protection for our minds, as hostile near-peer powers will seek to influence U.S. troops. Influence campaigns that undermine the American will to fight, and the injection of misinformation into a cohesive fighting force, are threats equal to any other hostile and enemy action by adversaries and terrorists. Maintaining the will to fight is key to mission success.

Influence operations and disinformation campaigns are increasingly becoming a threat to the force. We have to treat influence operations and cognitive attacks as seriously as any violent threat in force protection. Force protection is defined by Army Doctrine Publication No. 3-37, derived from JP 3-0: "Protection is the preservation of the effectiveness and survivability of mission-related military and nonmilitary personnel, equipment, facilities, information, and infrastructure deployed or located within or outside the boundaries of a given operational area." Therefore, protecting the cognitive space is an integral part of force protection.

History shows that preserving the will to fight has ensured mission success in achieving national security goals. France in 1940 had more tanks and significant military means to engage the Germans; however, France still lost. A large part of the explanation of why France was unable to defend itself in 1940 resides with defeatism, including an unwillingness to fight, which was the result of a decade-long erosion of the French soldiers' will in the cognitive realm.

In the 1930s, France was in political chaos, swinging between right-wing parties, communists, socialists, and authoritarian fascists, with political violence and cleavage, and the perception of a unified France worth fighting for diminished. Inspired by Stalin's Soviet Union, the communists fueled French defeatism with propaganda, agitation and influence campaigns to pave the way for a communist revolution. Nazi Germany weakened the French to enable German expansion. Under a persistent cognitive attack from two authoritarian ideologies, the bulk of the French Army fell into defeatism. The French disaster of 1940 is one of several historical examples where a manipulated perception of reality prevailed over reality itself. It would be naive to assume that the American will is a natural law unaffected by the environment. Historically, the American will to defend freedom has always been strong; however, the information environment has changed. Therefore, this cognitive space must be maintained, reignited and shared when the weaponized information presented threatens it.

In the Battle of the Bulge, the conflict between good and evil was open and visible. There was no competing narrative. The goal of the campaign was easily understood, with clear boundaries between friendly and enemy activity. Today, seven decades later, we face competing tailored narratives, digital manipulation of media, an unprecedented complex information environment, and a fast-moving, scattered situational picture.

Our adversaries will exploit, and are already exploiting, the fact that we as a democracy do not tell our forces what to think. Our only framework is loyalty to the Constitution and the American people. As a democracy, we expect our soldiers to support the Constitution and the mission. Our force has the democratic and constitutional right to think whatever its members find worthwhile to consider.

In order to fight influence operations, we would typically control what information is presented to the force. However, we cannot tell our force what to read and not read due to First Amendment rights. While this may not have caused issues in the past, social media has presented an opportunity for our adversaries to present a plethora of information that is meant to persuade our force.

In addition, there is too much information flowing in multiple directions to have centralized quality control or fact-checking. The vetting of information must occur at the individual level, and we need to enable the force's access to high-quality news outlets. This doesn't require any large investment. The Army currently funds access to training and course material for education purposes. Extending these online resources to provide every member of the force online access to a handful of quality news organizations costs little but creates a culture of reading fact-checked news. More importantly, news that is not funded by clickbait is likely to be less sensational, since its funding comes from dedicated readers interested in actual news that matters.

In a democracy, cognitive force protection means learning, training and enabling the individual to see the demarcation between truth and disinformation. As servants of our republic and people, leaders of character can educate their units on assessing and validating information. As first steps, we must work toward this idea and provide tools to protect our force from an assault in the cognitive domain.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. Col. Stephen Hamilton is the chief of staff at the institute and a professor at the academy. The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Defense Department.



Government cyber breach shows need for convergence

(I co-authored this piece with MAJ Suslowicz and LTC Arnold).

MAJ Chuck Suslowicz , Jan Kallberg , and LTC Todd Arnold

The SolarWinds breach points out the importance of having both offensive and defensive cyber force experience. The breach is an ongoing investigation, and we will not comment on the investigation. Still, in general terms, we want to point out the exploitable weaknesses in creating two silos: offensive cyber operations (OCO) and defensive cyber operations (DCO). The separation of OCO and DCO, through the specialization of formations and leadership, undermines the broader understanding and value of threat intelligence. The growing demarcation between OCO and DCO also has operative and tactical implications. The Multi-Domain Operations (MDO) concept emphasizes the competitive advantages that the Army — and the greater Department of Defense — can bring to bear by leveraging the unique and complementary capabilities of each service.

It requires that leaders understand the capabilities their organization can bring to bear in order to achieve the maximum effect from the available resources. Cyber leaders must have exposure to the depth and breadth of their chosen domain to contribute to MDO.

Unfortunately, within the Army's operational cyber forces, there is a tendency to designate officers as either OCO or DCO specialists. The shortsighted nature of this categorization is detrimental to the Army's efforts in cyberspace and stymies the development of the cyber force, affecting all soldiers. The Army will suffer in its planning and in its ability to contribute operationally to MDO from a siloed officer corps unexposed to the domain's inherent flexibility.

We consider the assumption that there is a distinction between OCO and DCO to be flawed. It perpetuates the idea that the two operational types are doing unrelated tasks with different tools, and that experience in one will not improve performance in the other. We do not see such a rigid distinction between OCO and DCO competencies. In fact, most concepts within the cyber domain apply directly to both types of operations. The argument that OCO and DCO share competencies is not new; the iconic cybersecurity expert Dan Geer first pointed out that cyber tools are dual-use nearly two decades ago, and continues to do so. A tool that is valuable to a network defender can prove equally valuable during an offensive operation, and vice versa.

For example, a tool that maps a network’s topology is critical for the network owner’s situational awareness. The tool could also be effective for an attacker to maintain situational awareness of a target network. The dual-use nature of cyber tools requires cyber leaders to recognize both sides of their utility. So, a tool that does a beneficial job of visualizing key terrain to defend will create a high-quality roadmap for a devastating attack. Limiting officer experiences to only one side of cyberspace operations (CO) will limit their vision, handicap their input as future leaders, and risk squandering effective use of the cyber domain in MDO.
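
As an illustration of this dual-use point, consider the toy sketch below. It builds a small, invented network graph and ranks nodes by betweenness centrality, one common way to flag key terrain; the same ranked list tells a defender what to harden and monitor first, and an attacker what to target first. The topology and the choice of the networkx library are assumptions made for the example, not a depiction of any operational tool.

# Illustrative dual-use sketch: the same "key terrain" analysis serves defender
# and attacker alike. The topology below is invented for the example.
import networkx as nx

network = nx.Graph()
network.add_edges_from([
    ("workstation-1", "access-switch"), ("workstation-2", "access-switch"),
    ("access-switch", "core-router"), ("core-router", "file-server"),
    ("core-router", "domain-controller"), ("core-router", "edge-firewall"),
    ("edge-firewall", "internet"),
])

# Betweenness centrality flags the nodes that most paths must pass through.
key_terrain = sorted(nx.betweenness_centrality(network).items(),
                     key=lambda item: item[1], reverse=True)

for node, score in key_terrain[:3]:
    print(f"{node}: centrality {score:.2f}")
# A defender reads this list as "harden and monitor these first";
# an attacker reads the very same list as "compromise these first".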

An argument will be made that “deep expertise is necessary for success” and that officers should be chosen for positions based on their previous exposure. This argument fails on two fronts. First, the Army’s decades of experience in officers’ development have shown the value of diverse exposure in officer assignments. Other branches already ensure officers experience a breadth of assignments to prepare them for senior leadership.

Second, this argument ignores the reality of “challenging technical tasks” within the cyber domain. As cyber tasks grow more technically challenging, the tools become more common between OCO and DCO, not less common. For example, two of the most technically challenging tasks, reverse engineering of malware (DCO) and development of exploits (OCO), use virtually identical toolkits.

An identical argument can be made for network defenders preventing adversarial access and offensive operators seeking to gain access to adversary networks. Ultimately, the types of operations differ in their intent and approach, but significant overlap exists within their technical skillsets.

Experience within one fragment of the domain directly translates to the other and provides insight into an adversary’s decision-making processes. This combined experience provides critical knowledge for leaders, and lack of experience will undercut the Army’s ability to execute MDO effectively. Defenders with OCO experience will be better equipped to identify an adversary’s most likely and most devastating courses of action within the domain. Similarly, OCO planned by leaders with DCO experience are more likely to succeed as the planners are better prepared to account for potential adversary countermeasures.

In both cases, the cross-pollination of experience improves the Army’s ability to leverage the cyber domain and improve its effectiveness. Single tracked officers may initially be easier to integrate or better able to contribute on day one of an assignment. However, single-tracked officers will ultimately bring far less to the table than officers experienced in both sides of the domain due to the multifaceted cyber environment in MDO.

Maj. Chuck Suslowicz is a research scientist in the Army Cyber Institute at West Point and an instructor in the U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS). Dr. Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor at the U.S. Military Academy. LTC Todd Arnold is a research scientist in the Army Cyber Institute at West Point and assistant professor in U.S. Military Academy’s Department of Electrical Engineering and Computer Science (EECS.) The views expressed are those of the authors and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy or the Department of Defense.


What COVID-19 can teach us about cyber resilience

Dr. Jan Kallberg and Col. Stephen Hamilton
March 23, 2020

The COVID-19 pandemic is a challenge that creates health risks to Americans and will have long-lasting effects. For many, this is a tragedy, a threat to life, health, and finances. What draws our attention is what COVID-19 has meant for our society and the economy, and how, in an unprecedented way, families, corporations, schools, and government agencies quickly had to adjust to a new reality. Why does this matter from a cyber perspective?

COVID-19 has created increased stress on our logistic, digital, public, and financial systems and this could in fact resemble what a major cyber conflict would mean to the general public. It is also essential to assess what matters to the public during this time. COVID-19 has created a widespread disruption of work, transportation, logistics, distribution of food and necessities to the public, and increased stress on infrastructures, from Internet connectivity to just-in-time delivery. It has unleashed abnormal behaviors.

A potential adversary will likely not have the ability to take down an entire sector of our critical infrastructure, or business ecosystem, for several reasons. First, awareness of and investments in cybersecurity have drastically increased over the last two decades. This, in turn, has reduced the number of single points of failure and increased the number of built-in redundancies, as well as the ability to maintain operations in a degraded environment.

Second, the time and resources required to create what was once referred to as a “Cyber Pearl Harbor” is beyond the reach of any near-peer nation. Decades of advancement, from increasing resilience, adding layered defense and the new ability to detect intrusion, have made it significantly harder to execute an attack of that size.

Instead, an adversary will likely focus its primary cyber capacity on what matters for its national strategic goals – for example, delaying the movement of the main U.S. force from the continental United States to theater by using cyberattacks on utilities, airports, railroads, and ports. That strategy has two clear goals: to deny the United States and its allies options in theater due to a lack of strength, and to strike a significant blow to the United States and allied forces early in the conflict. Given the choice between delaying U.S. forces' arrival in theater and creating disturbances in thousands of grocery stores or wreaking havoc on the commute for office workers, an adversary will likely prioritize what matters to its military operations first.

That said, in a future conflict, the domestic businesses, local governments, and services on which the general public relies will be targeted by cyberattacks. These second-tier operations will likely exploit vulnerabilities at scale in our society, but with less complexity and mainly through exploitation of opportunity.

The similarity between the COVID-19 outbreak and a cyber campaign is the disruption in logistics and services, how the population reacts, and the stress it puts on law enforcement and first responders. These events can lead to questions about the ability to maintain law and order and to prevent the destabilization of a distribution chain that is built for just-in-time operations, with minimal margins of deviation before it falls apart.

The sheer nature of these second-tier attacks is unsystematic, opportunity-driven. The goal is to pursue disruption, confusion, and stress. An authoritarian regime would likely not be hindered by international norms to attack targets that jeopardize public health and create risks for the general population. Environmental hazards released by these attacks can lead to risks of loss of life and potential dramatic long-term loss of life quality for citizens. If the population questions the government’s ability to protect, the government’s legitimacy and authority will suffer. Health and environmental risks tend to appeal not only to our general public’s logic but also to emotions, particularly uncertainty and fear. This can be a tipping point if the population fears the future to the point it loses confidence in the government.

Therefore, as we see COVID-19 unfold, it could give us insights into how a broad cyber-disruption campaign could affect the U.S. population. Terrorism experts examine two effects of an attack – the attack itself and the consequences of how the target population reacts.

Likely, our potential adversaries are studying carefully how our society reacts to COVID-19: for example, whether the population obeys the government, whether our government maintains control and enforces its agenda, and whether the nation was prepared.

Lessons learned from COVID-19 are applicable to strengthening U.S. cyber defense and resilience. These unfortunate events increase our understanding of how a broad cyber campaign could disrupt and degrade quality of life, government services, and business activity.

Why Iran would avoid a major cyberwar

Demonstrations in Iran last year and signs of the regime's demise raise a question: what would be the strategic outcome of a massive cyber engagement with a foreign country or alliance?

Authoritarian regimes traditionally put survival first. Those that do not prioritize regime survival tend to collapse. Authoritarian regimes are always vulnerable because they are illegitimate. There will always be loyalists who benefit from the system, but for a significant part of the people, the regime is not legitimate. The regime only exists because it suppresses the popular will and uses force against any opposition.

In 2016, I wrote an article in the Cyber Defense Review titled "Strategic Cyberwar Theory – A Foundation for Designing Decisive Strategic Cyber Operations." The utility of strategic cyberwar is linked to the institutional stability of the targeted state. If a nation is destabilized, it can be subdued to foreign will, and the ability of the current regime to execute its strategy evaporates due to loss of internal authority and ability. The theory's predictive power is most potent when applied to theocracies, authoritarian regimes, and dysfunctional experimental democracies, because their common tenet is weak institutions.

Fully functional democracies, on the other hand, have a definite advantage because these advanced democracies have stability and, by their citizenry, accepted institutions. Nations openly adversarial to democracies are in most cases, totalitarian states that are close to entropy. The reason why these totalitarian states are under their current regime is the suppression of the popular will. Any removal of the pillars of repression, by destabilizing the regime design and institutions that make it functional, will release the popular will.

A destabilized — and possibly imploding — Iranian regime is a more tangible threat to the ruling theocratic elite than any military systems being hacked in a cyber interchange. Dictators fear the wrath of the masses. Strategic cyberwar theory seeks to look beyond the actual digital interchange, the cyber tactics, and instead create a predictive power of how a decisive cyber conflict should be conducted in pursuit of national strategic goals.

The Iranian military apparatus is a mix of traditional military defense, crowd control, political suppression, and show of force for generating artificial internal authority in the country. If command and control evaporate in the military apparatus, so does the ability to control the population to the degree the Iranian regime has been able to until now. In that light, what is in it for Iran to launch a massive cyber engagement against the free world? What can it win?

If the free world uses its cyber abilities, it is far more likely that Iran itself gets destabilized and falls into entropy and chaos, which could lead to major domestic bloodshed when the victims of 40 years of violent suppression decide the fate of their oppressors. That would not be the intent of the free world; it would just be a consequence of the way the Iranian totalitarian regime has acted toward its own people. The risks for the Iranians are far more significant than the potential upside of being able to inflict damage on the free world.

That doesn't mean Iranians would not try to hack systems in foreign countries they consider adversarial. Because of the Iranian regime's constant need to feed its internal propaganda machinery with "victories," that is more likely to take place on a smaller scale, as uncoordinated low-level attacks seeking to exploit opportunities they come across. In my view, far more dangerous are non-Iranian advanced nation-state cyber actors that impersonate Iranian hackers, making aggressive preplanned attacks under cover of a spoofed identity and transferring the blame, fueled by recent tensions.

From the Adversary’s POV – Cyber Attacks to Delay CONUS Forces Movement to Port of Embarkation Pivotal to Success

We tend to see vulnerabilities and concerns about cyber threats to critical infrastructure from our own viewpoint. But an adversary will assess where and how a cyberattack on America will benefit the adversary’s strategy. I am not convinced attacks on critical infrastructure, in general, have the payoff that an adversary seeks.

The American reaction to Sept. 11, and to any attack on U.S. soil, gives a hint to an adversary that attacking critical infrastructure to create hardship for the population might work contrary to the intended softening of the will to resist foreign influence. It is more likely that attacks that affect the general population instead strengthen the will to resist and fight, similar to the British reaction to the German bombing campaign, the Blitz, in 1940. We can't rule out attacks that affect the general population, but there are not enough offensive capabilities to attack all 16 sectors of critical infrastructure and gain strategic momentum. An adversary has limited cyberattack capabilities and needs to prioritize cyber targets that are aligned with the overall strategy. Trying to see what options, opportunities, and directions an adversary might take requires that we change our point of view to the adversary's outlook. One of my primary concerns is pinpointed cyberattacks disrupting and delaying the movement of U.S. forces to theater.

Seen from the potential adversary's point of view, bringing the cyber fight to our homeland – think delaying the transportation of U.S. forces to theater by attacking infrastructure and transportation networks from bases to the port of embarkation – is a low-investment, high-return operation. Why does it matter?

First, the bulk of the U.S. forces are not in the region where the conflict erupts. Instead, they are mainly based in the continental United States and must be transported to theater. From an adversary’s perspective, the delay of U.S. forces’ arrival might be the only opportunity. If the adversary can utilize an operational and tactical superiority in the initial phase of the conflict, by engaging our local allies and U.S. forces in the region swiftly, territorial gains can be made that are too costly to reverse later, leaving the adversary in a strong bargaining position.

Second, even if only partially successful, cyberattacks that delay U.S. forces’ arrival will create confusion. Such attacks would mean units might arrive at different ports, at different times and with only a fraction of the hardware or personnel while the rest is stuck in transit.

Third, an adversary that is convinced before a conflict that it can significantly delay the arrival of U.S. units from the continental U.S. to a theater will make a different assessment of the risks of a fait accompli attack. Training and Doctrine Command defines such an attack as one that "is intended to achieve military and political objectives rapidly and then to quickly consolidate those gains so that any attempt to reverse the action by the U.S. would entail unacceptable cost and risk." Even if an adversary is strategically inferior in the long term, the window of opportunity created by the assumed delay in moving units from the continental U.S. to theater might be enough for it to take military action seeking a successful fait accompli attack.

In designing a cyber defense for critical infrastructure, it is vital that what matters to the adversary be part of the equation. In peacetime, cyberattacks probe systems across society – from waterworks, schools, social media, and retail all the way to sawmills. Cyberattacks in wartime will have a more explicit intent and seek a specific gain that supports the adversary’s strategy. Therefore, it is essential to identify and prioritize the critical infrastructure that is pivotal in war, instead of attempting to spread the defense to cover everything touched in peacetime.

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Private Hackbacks Can Be Blowbacks

The demands for legalizing corporate hackbacks are growing – and there is significant interest among private actors in hacking back if it were lawful. Yet if private companies obtained the legal right to hack back, the risk of blowback is likely greater than the opportunity and potential gains from private hackbacks. The proponents of private hackbacks tend to build their case on a set of assumptions. If these assumptions do not hold, private hackbacks will likely become a federal problem through uncontrolled escalation and spillover from these private counterstrikes.

-The private company can attribute the attack.

The idea of legalizing hackback operations rests on the assumption that the defending company can attribute the initial attack with pinpoint precision – that the counterstrike can be directed, beyond doubt, at the entity that actually launched the attack. If attribution is not achieved with satisfactory granularity and precision, a right to cyber counterstrike becomes a right to strike anyone based on suspicion of involvement. As of today, very few private entities can determine with high confidence who attacked them and trace the attack back well enough for a counterstrike to be accurate. A right to strike back even when the precision of the counterstrike is imperfect would increase entropy and deviation from emerging norms and international governance.
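To put the attribution problem in proportion, here is a minimal, purely illustrative sketch; the confidence levels and the assumed volume of corporate counterstrikes are hypothetical numbers, not data from any study.

```python
# Illustrative only: how often a "right to strike back" would hit the wrong
# party under different (assumed) attribution confidence levels. Both the
# confidence figures and the annual number of counterstrikes are hypothetical.

def expected_misattributions(counterstrikes_per_year: int, attribution_confidence: float) -> float:
    """Expected number of counterstrikes per year aimed at an uninvolved party."""
    return counterstrikes_per_year * (1.0 - attribution_confidence)

if __name__ == "__main__":
    strikes = 1_000  # assumed number of corporate counterstrikes per year
    for confidence in (0.99, 0.95, 0.90, 0.75):
        wrong = expected_misattributions(strikes, confidence)
        print(f"confidence {confidence:.0%}: ~{wrong:.0f} of {strikes} counterstrikes hit an uninvolved party")
```

Even at an (assumed) 95 percent attribution confidence, one counterstrike in twenty lands on a party that had nothing to do with the initial attack.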

-The counterstriking corporation can engage a state-sponsored organization.

Things might spin out of control. The old small-unit tactics rule applies: anyone can open fire, but only a genius can disengage unharmed. The counterstriking corporation may perceive that it can handle the adversary, believing it to be an underfunded group of college students hacking for fun – and later find out it is a heavily funded and highly able foreign state agency. Before a counterstrike, the company has limited means to determine the actual size of the initial attacker and the full spectrum of resources available to it. A probing counterattack would not be enough to determine the operational strength, ability, and intent of the potential adversary. Embedded in the assumption that the counterstriking corporation can handle any adversary is a further assumption: that there will be no uncontrolled escalation.

-The whole engagement is locked in between parties A and B.

If there is an assumption of no uncontrolled escalation, then a follow-up assumption is that the engagement creates deterrence that prevents the initial attacker from continuing the attacks. The defending company must be able to counterattack with enough magnitude that the initial attacker is deterred from further attacks; once deterrence is established, the digital interchange will cease. The question is how to establish deterrence – and deterrence against which array of cyber operations – without causing damage. If deterrence cannot be established, the exchange will likely escalate or settle into a strict tit-for-tat game without any decisive conclusion, continuing until the initial attacker decides to end the interchange.
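As a purely illustrative sketch of that last point – not a model of any real engagement – the toy simulation below assumes a defender that counterstrikes every round with no deterrent effect and an attacker that stops only by its own choice, modeled as a hypothetical per-round quit probability.

```python
import random

# Toy tit-for-tat exchange: the defender counterstrikes every round, the
# initial attacker stops only when it decides to. All numbers are assumptions
# chosen for illustration, not empirical values.

def simulate_exchange(p_attacker_quits: float = 0.05, seed: int = 1) -> int:
    """Return the number of rounds until the initial attacker ends the interchange."""
    rng = random.Random(seed)
    rounds = 0
    while True:
        rounds += 1            # attacker strikes ...
        # ... defender counterstrikes every round (no deterrent effect assumed)
        if rng.random() < p_attacker_quits:
            return rounds      # only the attacker's own decision ends the exchange

if __name__ == "__main__":
    lengths = [simulate_exchange(seed=s) for s in range(100)]
    print(f"average exchange length: {sum(lengths) / len(lengths):.1f} rounds")
```

The point is not the numbers but the structure: if the counterstrike does not change the attacker’s calculus, it does not shorten the exchange.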

-The initial attacker has no second-strike option.

The assumption here is that the interchange will be fought with a specific set of cyber weapons and aim points, so it cannot lead to further damage: even if the initial striker intended to rearrange the targets, aims, and potential impacts, it would have no option to do so, and any new second strike would stay within the same realm and limits as the earlier strikes. In reality, the initial attacker’s second strike could hit entirely new targets at the attacker’s discretion. It is more likely that the initial attacker has second-strike options the targeted company is unaware of at the moment of the counterstrike.

-The counterstriking company has no interests or assets in the initial attacker’s jurisdiction.

If a multinational company (MNC) counterstrikes a state agency or state-sponsored attacker, the MNC risks repercussions if it has assets in the jurisdiction of the initial attacker. Major MNCs have interests, subsidiaries, and assets in hundreds of jurisdictions; the Fortune 500 have assets in the U.S., China, Russia, India, and numerous other countries. The question is then: if MNC “A” counterstrikes a cyberattack from China, what are the risks for its subsidiary “A in China”? Related is the risk that, through improper attribution, MNC “A” counterstrikes from the U.S. against foreign digital assets that had no connection to the initial attack – which constitutes a new, unjustifiable, and illegal attack on foreign digital assets. The majority of the potential source countries for hacking attacks are totalitarian and authoritarian states. A totalitarian state can easily switch domains in response – seize property, arrest innocent business travelers, and act in other ways as a result of a corporate hackback. I am not saying that we should let totalitarian regimes act any way they want – I am only saying that it is not for private corporations to engage and seek to resolve these matters. Interacting with foreign governments is a government domain.

The idea of legalizing corporate hackbacks could increase distrust and entropy, and be counterproductive to the long-term goal of a safe and secure Internet.

Jan Kallberg, PhD

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy.

The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.