Cyber Attacks with Environmental Impact – High Impact on Societal Sentiment

In the cyber debate, there is a significant, if not totally overshadowing, focus on the information systems themselves – the concerns rarely migrate to secondary and tertiary effects. For example, the problem with vulnerable industrial control systems in the management of water-reservoir dams is not limited to the digital conduits and systems. It is the fact that a massive release of water can create a flood that affects hundreds of thousands of citizens. It is important to look at the actual effects of a systematic or pinpoint-accurate cyberattack – and go beyond the limits of the information system itself.

As an example, a cascading effect of failing dams in a larger watershed would have a significant environmental impact. Hydroelectric dams and reservoirs are controlled through different forms of computer networks, wired or wireless, and the control networks are connected to the Internet. A breach in the cyber defenses of the electric utility company leads all the way down to the logic controllers that instruct the electric machinery to open the floodgates. Many hydroelectric dams and reservoirs are designed as a chain of dams in a major watershed to create an even flow of water that is utilized to generate energy. A cyberattack on several upstream dams would release water that increases pressure on downstream dams. With rapidly diminishing storage capacity, downstream dams risk being breached by the oncoming water. Eventually, this can become a cascading effect through the river system that results in a catastrophic flood event.
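To make the cascade mechanism concrete, the toy sketch below propagates a forced upstream release down a chain of reservoirs. Every capacity, storage level, and spill rule in it is invented for illustration; it does not model any real watershed, dam, or control system.

```python
# Toy illustration only: a hypothetical chain of reservoirs where an attacker
# forces the uppermost floodgates open. All capacities, inflows, and spill
# behavior are invented for this sketch and do not model any real dam.

def simulate_cascade(capacities, initial_levels, forced_release):
    """Propagate a sudden upstream release down a chain of reservoirs.

    capacities      -- maximum storage of each reservoir (arbitrary units)
    initial_levels  -- current storage of each reservoir
    forced_release  -- water volume dumped into the first downstream reservoir
    Returns the indices of the reservoirs that are overtopped, in order.
    """
    breached = []
    inflow = forced_release
    for i, (cap, level) in enumerate(zip(capacities, initial_levels)):
        level += inflow
        if level > cap:
            # Overtopped: the reservoir fails and passes everything it held,
            # plus the surplus, downstream -- amplifying the surge.
            breached.append(i)
            inflow = level
        else:
            # The surge is absorbed and the cascade stops.
            break
    return breached

# Example: reservoirs already near capacity. A forced release of 40 units from
# a compromised upstream dam overtops the first three reservoirs before a much
# larger downstream reservoir absorbs the surge.
print(simulate_cascade(
    capacities=[100, 120, 150, 1000, 1200],
    initial_levels=[90, 100, 130, 200, 250],
    forced_release=40,
))  # -> [0, 1, 2]
```

The point of the sketch is only that each failure enlarges the surge hitting the next dam, which is why the societal and environmental effect can dwarf the initial intrusion.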

The traditional cybersecurity way to frame the problem is loss of function and disruption of electricity generation, but that overlooks the potential environmental effect of an inland tsunami. This is especially troublesome in areas where the population and industries are dense along a river; examples include Pennsylvania, West Virginia and other areas with cities built around historic mills.

We have seen that events close to citizens’ immediate environment affect them strongly, which makes sense. A perceived threat to their immediate surroundings creates rapid shifts in public belief, erodes trust in government, produces vocal public outcry, and puts intense pressure on government to act within a short time frame to stabilize the situation.

One such example is the Three Mile Island accident, which created significant public turbulence and fear – an incident that still has a profound impact on how we view nuclear power. The Three Mile Island incident pushed U.S. nuclear policy in a completely different direction and effectively halted new construction of nuclear plants, an effect still felt today, forty years later.

For a covert state actor that seeks to cripple our society, embarrass the political leadership, change policy and project to the world that we cannot defend ourselves, environmental damage is an inviting target. An attack on the environment feels, for the general public, closer and scarier than a dozen servers malfunctioning in a data center. We are all dependent on clean drinking water and non-toxic air. Cyber attacks on these fundamentals of life could create panic and desperation in the public – even among citizens who were not directly affected.

It is crucial for cyber resilience to look beyond the information systems. The societal effect is embedded in the secondary and tertiary effects that need to be addressed, understood and, to the limit of what we can do, mitigated. Cyber resilience goes beyond the digital realm.

Jan Kallberg, PhD

The time to act is before the attack


In my view, one of the major weaknesses in cyber defense planning is the perception that there is time to lead a cyber defense while under attack. It is likely that a major attack is automated and premeditated. If it is automated, the systems will execute the attacks at computational speed. In that case, no political or military leadership would be able to lead, for one simple reason – it has already happened before they can react.

A premeditated attack is planned over a long time, maybe years, and if automated, the execution of a massive number of exploits will be compressed into minutes. Therefore, future cyber defense will rely on components of artificial intelligence that can assess, act, and mitigate at computational speed. Naturally, this is a development that does not happen overnight.

In an environment where the actual digital interchange occurs at computational speed, the only thing the government can do is to prepare, give guidelines, set rules of engagement, disseminate knowledge to ensure a cyber resilient society, and let the coders prepare the systems to survive in a degraded environment.
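As a minimal sketch of what preset rules of engagement executed at machine speed could look like, consider the toy playbook below. The indicator names, confidence threshold, and response action are hypothetical placeholders, not any existing product, doctrine, or system; the point is only that the decision is encoded before the attack and runs without waiting for a human.

```python
# Minimal sketch of the "prepare before the attack" idea: rules of engagement
# are encoded and authorized in advance, so the response runs at machine speed.
# Indicator names, thresholds, and actions are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # evaluated against a telemetry event
    action: Callable[[dict], None]      # pre-authorized response

def isolate_host(event: dict) -> None:
    # Placeholder: a real system would call out to network/endpoint controls.
    print(f"[auto] isolating host {event['host']} ({event['indicator']})")

PLAYBOOK = [
    Rule(
        name="known-bad-beacon",
        condition=lambda e: e["indicator"] == "c2_beacon" and e["confidence"] >= 0.9,
        action=isolate_host,
    ),
]

def handle(event: dict) -> None:
    """Apply every pre-authorized rule that matches; no human in the loop."""
    for rule in PLAYBOOK:
        if rule.condition(event):
            rule.action(event)

handle({"host": "10.0.0.15", "indicator": "c2_beacon", "confidence": 0.95})
```

The leadership's influence lives entirely in what goes into the playbook beforehand, which is exactly the front-end focus argued for here.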

Another important factor is how these cyber defense measures can be reverse engineered and how visible they are in a pre-conflict probing wave of cyber attacks. If the preset cyber defense measures can be “measured up” early in a probing phase of a cyber conflict, it is likely that, through reverse engineering, the defense measures become a force multiplier for future attacks – instead of bulwarks against them.

So we enter the land of “damned if you do, damned if you don’t”: if we pre-stage the conflict with artificial-intelligence-supported decision systems that lead the cyber defense at computational speed, we are also vulnerable to being reverse engineered, and the artificial intelligence becomes tangible stupidity.

We are in the early dawn of cyber conflicts. We can see the silhouettes of what is coming, but one thing is already very clear – the time factor. Politicians and military leadership will have no factual impact on the actual events in real time in conflicts occurring at computational speed, so the focus has to be on the front end. The leadership is likely to have the highest impact by addressing what has to be done pre-conflict to ensure resilience when under attack.

Jan Kallberg, PhD

Artificial Intelligence (AI): The risk of over-reliance on quantifiable data

The rise of interest in artificial intelligence and machine learning has a flip side. It might not be so smart if we fail to design the methods correctly. A question out there – can we compress reality into measurable numbers? Artificial Intelligence relies on what can be measured and quantified, risking an over-reliance on measurable knowledge. As with many other technical problems, the challenge is that it all ends with humans who design and assess according to their own perceived reality. The designers’ bias, perceived reality, weltanschauung, and outlook – everything goes into the design. The limitations are not on the machine side; the humans are far more limiting. Even if the machines learn from a point forward, it is still a human who stakes out the starting point and the initial landscape.

Quantifiable data has historically served America well; it was part of the American boom after the Second World War, when America was one of the first countries to take a scientific look at how to improve, streamline, and increase production using fewer resources and less manpower.

The numbers have also misled. The Vietnam-era Secretary of Defense Robert McNamara used the numbers to tell how to win the Vietnam War, numbers that clearly indicated how to reach a decisive military victory – according to the numbers. In a post-Vietnam book titled “The War Managers,” retired Army general Douglas Kinnard visualizes the almost bizarre world of seeking to fight the war through quantification and statistics. Kinnard, who later taught at the National Defense University, surveyed the actual support for these methods among fellow generals who had served in Vietnam. These generals considered the concept of assessing progress in the war by body counting useless; only two percent of the surveyed generals saw any value in the practice. Why were the Americans counting bodies? Likely because it was quantifiable and measurable. It is a common error in research design to seek out the variables that produce accessible, quantifiable results, and McNamara was at that time almost obsessed with numbers and their predictive power. McNamara is not the only one who has relied too heavily on the numbers.

In 1939, the Nazi German foreign minister Ribbentrop, together with the German High Command, studied and measured up the French and British ability to mobilize and to start a war with little advance warning. The Germans’ quantified assessment was that the Allies were unable to engage in a full-scale war on short notice, and the Germans believed that the numbers were identical with the policy reality: politicians would understand their limits – and the Allies would not go to war over Poland. So Germany invaded Poland and started the Second World War. The quantifiable assessment was correct and led to Dunkirk, but the grander assessment was off and underestimated the British and French will to take on the fight, which led to at least 50 million dead, half of Europe behind the Soviet Iron Curtain, and the destruction of the Nazi regime itself. The British sentiment to fight the war to the end, the British ability to convince the US to provide resources to their effort, and the unfolding events thereafter were never captured in the data. The German assessment was a snapshot of the British and French war preparations in the summer of 1939 – nothing else.

Artificial Intelligence is as smart as the numbers we feed it. Ad notam.

The potential failure is hidden in selecting, assessing, designing, and extracting the numbers that feed Artificial Intelligence. The risk of grave errors in decision-making, escalation, and avoidable human suffering and destruction is embedded in our future use of Artificial Intelligence if we do not pay attention to the data that feed the algorithms. Data collection and aggregation are the weakest link in the future of machine-supported decision-making.
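A toy sketch of the point about feeding the numbers: the “model” below is nothing more than a weighted score, but the same holds for any learning system – it can only weigh the features its designers chose to quantify. All feature names, values, and weights are invented for illustration.

```python
# Toy illustration: a system can only weigh what the designer chose to
# quantify. Every feature name and number below is invented.

def assess(features: dict, weights: dict) -> float:
    """Score a situation using only the features the designer decided to feed in."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

situation = {
    "enemy_casualties": 0.9,   # easy to count, so it gets measured
    "own_logistics": 0.7,      # measurable
    "opponent_resolve": 0.95,  # hard to quantify, so it is often left out
}

# Assessment A: only the conveniently measurable features are included.
weights_a = {"enemy_casualties": 0.6, "own_logistics": 0.4}
# Assessment B: the designer also forces in an estimate of resolve.
weights_b = {"enemy_casualties": 0.3, "own_logistics": 0.2, "opponent_resolve": -0.5}

print("measurable-only score:", assess(situation, weights_a))  # ~0.82, looks like winning
print("with resolve included:", assess(situation, weights_b))  # ~-0.07, far less optimistic
```

Same situation, opposite conclusions – the difference is decided before any algorithm runs, in the choice of what to measure.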

Jan Kallberg is a Research Scientist at the Army Cyber Institute at West Point and an Assistant Professor in the Department of Social Sciences (SOSH) at the United States Military Academy. The views expressed herein are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Spectrum Warfare


Spectrum sounds to many ears like old-fashioned Cold War jamming: crude, brute electromagnetic overkill. In reality, though, the military needs access to spectrum, and more of it.

Smart defense systems need to communicate, navigate, identify, and target. It does not matter how cyber secure our platforms are if we are denied access to the electromagnetic spectrum. Every modern high-tech weapon system is a dud without access to spectrum. The loss of spectrum would evaporate American military might.

Today, though, other voices are becoming stronger, desiring to commercialize military spectrum. Why does the military need an abundance of spectrum, these voices ask. It could be commercialized and create so much joy with annoying social media and other things that do not matter beyond one of your lifetime minutes.

It is a relevant question. We, as an entrepreneurial and “take action” society, see the opportunity to utilize parts of the military spectrum to launch wireless services and free up spectrum for all these apps and the Internet of Things that is just around the corner in the digital development of our society and civilization. In the eyes of the entrepreneurs and their backers, the military sits on unutilized spectrum that could be put to good use – and there could be a financial harvest of the military electromagnetic wasteland.

The military needs spectrum in the same way a football player needs green grass to plan and execute his run. If we limit the military’s access to necessary spectrum, it will, to extend the football metaphor, be just a stack of players unable to move or win. Our military will not be able to operate effectively.

We invite others to talk about justice, democracy, and freedom, and how to improve the world, but I think it is time for us to talk to our fellow man about electromagnetic spectrum, because the bulwark against oppression and totalitarian regimes depends on access to it.

Jan Kallberg, PhD

Humanitarian Cyber Operations – Rapid, Targeted, and Active Deterrent

Cyber operations are designed to be a tool for defense, security and war. In the same way that harmless computer technology can be used as a dual-purpose tool of war, tools of war can be used for humanity: to protect the innocent, uphold respect for our fellow beings and safeguard human rights.

When a nation-state acts against its population and risks their welfare through repression, violence and exposure to mistreatment, there is a possibility for the world community to take action by launching humanitarian cyber operations to protect the targeted population. In the non-cyber world, atrocities are met with military intervention under the principle of “responsibility to protect,” which allows foreign interference in domestic affairs to protect a population from a repressive and violent ruler without triggering an act of war. If a state fails to protect the welfare of its citizens, then the state that commits atrocities against its population is no longer protected from foreign intervention.

Intervention in 2018 does not need to be a military intervention with troops on the ground but can instead be a digital intervention through humanitarian cyber operations. A cyber humanitarian intervention not only capitalizes on the digital footprint but also penetrates the violent regime’s information sources, command structure and communications. The growing digital footprint in repressive regimes creates an opportunity for early prevention and interception of the perpetration of atrocities. Over the last decade, the totalitarian states’ digital footprint has grown larger and larger.

As an example, Iran had 2 million smartphones in 2014, but had already reached 48 million smartphones in 2017. Today, about 3 out of 4 Iranians live in metropolitan areas. About half of the Iranian population is under 30 years old with new habits of chatting, sharing and wireless connectivity. In North Korea, the digital footprint has grown as rapidly. In 2011, there were no cellphones in North Korea outside of a very narrow elite circle. In 2017, surveys assessed that over 65 percent of all North Korean households had a cellphone.

No totalitarian and repressive state has been able to limit the digital footprint, which continues to expand every year. Repressive regimes rely on computers to lead and orchestrate repressive actions and crimes against their populations. Even if the actual perpetrators of atrocities avoid digital means, the activity will be picked up as intelligence fragments when it is talked about, discussed, shared, eye-witnessed and silenced. The planning and initiation of atrocities have a logistic trail of troop movements, transportation, orders, communications and concentration of resources.

If there is a valid concern for the safety of the population in a totalitarian state, then free, democratic and responsible states can act. The United Nations-accepted principle of “responsibility to protect” provides justification for the world community, or for democratic states that decide to act, to launch humanitarian cyber operations utilizing military cyber capacity in a humanitarian role.

Humanitarian cyber operations enable faster response, the retrieval of information necessary for the world community’s decision-making on whether to act conventionally, and the removal of the secrecy surrounding the acts perpetrated by totalitarian and repressive regimes. The exposure of human rights crimes in progress can serve as a deterrent against, and an interception of, the continuation of these crimes. By transposing the responsibility to protect from international humanitarian law into cyber, repressive regimes lose their protection against foreign cyber intervention if valid human rights concerns can be raised.

Humanitarian cyber operations can act as a deterrent because perpetrators will be held accountable. International humanitarian law is dependent on evidence gathering, and laws might not be upheld if evidence gathering fails, even if the international community promotes decisive legal action. Humanitarian cyber operations can support the prosecution of crimes against humanity and generate quality evidence. The prosecution of human rights violations in the Balkan civil wars of the 1990s failed in many cases due to lack of evidence. Humanitarian cyber operations can capture evidence that will hold perpetrators accountable.

Humanitarian cyber operations are policy tools that allow a free, democratic nation, already in peacetime, to legally penetrate and extract information from the information systems of an authoritarian potential adversary that represses its people and endangers the welfare of its citizens. Conversely, the adversary cannot systematically attack the democratic nation, because that would likely be an act of war with consequences to follow. There is an opportunity embedded in humanitarian cyber operations for humanity and democracy.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy or the Department of Defense.

Legalizing Private Hack Backs Leads to Federal Risks

During the last year, several op-ed articles and commentaries have proposed that private companies should have the right to strike back when attacked in cyberspace and conduct their own offensive cyber operations.

The demarcation in cyber between the government sphere and the private sphere is important to uphold because it influences how we see the state and the framework in which states interact. One reason we have a nation-state is to deal with foreign hostility and malicious activity in a uniform and structured way, under the guidance of a representative democracy. The state is given its powers by the citizenry to protect the nation utilizing a monopoly on violence. The state then acts under the existing laws, on behalf of the citizens, to carry out the intentions of the population it represents. These powers create an authority that the federal government utilizes to enforce compliance with the laws and to handle our relations with foreign powers. If the federal government cannot uphold this authority, legitimacy and confidence in government will suffer. The national interest in protecting legitimacy and authority, and in maintaining confidence in the federal government, is by far stronger than the benefits of a few private entities departing on their own cyber odysseys to retaliate against foreign cyber attacks.

I would like to visualize the importance of demarcation between government and private entities with an example. A failed bank robbery leads to a standoff where the robbers are encircled by government law enforcement. The government upholds its monopoly on violence based on laws that permit the government, on behalf of the people, to engage the robbers in a potential shootout. All other citizens are instructed to leave the area. The law enforcement officers seek to solve the situation without any violence. This is how we have designed the demarcation between the government and the private sphere in the analog world.

If the US decides to allow companies to strike back against foreign cyber attacks, then the US has abandoned this demarcation between the nation-state and the private sphere. Going back to the bank robbers surrounded by law enforcement: using the logic of private cyber retaliation, any customer who had an account in the robbed bank could show up at the standoff, open fire at the robbers at their own discretion, and depart directly afterward, leaving the police to sort out the shootout and the aftermath with no responsibility for the triggering event.

Abandoning the clear demarcation between the government and the private sphere leads to entropy and loss of control, and it is counterproductive for the national cyberdefense and the national interest.

The counterargument is that private companies are defenseless against cyber attacks and therefore should have the right to self-defense.

The Commission on the Theft of American Intellectual Property published a report that was a strong proponent of allowing private companies to strike back and even retaliate against cyber attackers. According to the commission, these counterstrikes should be conducted as follows: “Without damaging the intruder’s own network, companies that experience cyber theft ought to be able to retrieve their electronic files or prevent the exploitation of their stolen information.”

The proponents of private cyber retaliation base their view on several assumptions. First, that the private company can attribute the attack and determine who is attacking them. Second, that the counterstriking companies have the cyber resources to engage, even if there is a state-sponsored organization on the other end, and that no damage will be done. A third, hidden assumption is that the events do not lead to uncontrolled escalation and that the cyber exchanges only affect the engaged parties.

An attacker has other options and can seek to attack other entities and institutions as a reprisal for the counterattack. If the initial attacker is a state-sponsored organization in a foreign country, multinational companies can have significant business and interests at risk if the situation escalates. Private companies will not be responsible for the aftermath, and the entropy that can occur undermines the American stance; the nation loses the higher ground in challenging the state sponsors behind the cyber attacks within the framework of the international community.

The answer to who should hack back, if we decide to do so at all, is simple: it should be the federal government, for the same reason that you would not fly on a passport issued by your neighbor across the street. Only the federal government is suited to engage foreign nations and the private entities therein.

The unaddressed core problem is that we have not yet been able to create mechanisms to transfer cyber incidents from the private realm to the authorities. This limited ability, during the short time frame in which an attack occurs, initially gives the cyber attacker an advantage, but it will be solved over time, and it does not outweigh the damage from a federal authority undermined by entropy in cyber.


NDU Publication: China’s Strategic Support Force: A Force for a New Era

NDU Press just published:

http://ndupress.ndu.edu/Media/News/Article/1651760/chinas-strategic-support-force-a-force-for-a-new-era/

From the Executive Summary:

“In late 2015, the People’s Liberation Army (PLA) initiated reforms that have brought dramatic changes to its structure, model of warfighting, and organizational culture, including the creation of a Strategic Support Force (SSF) that centralizes most PLA space, cyber, electronic, and psychological warfare capabilities. The reforms come at an inflection point as the PLA seeks to pivot from land-based territorial defense to extended power projection to protect Chinese interests in the “strategic frontiers” of space, cyberspace, and the far seas. Understanding the new strategic roles of the SSF is essential to understanding how the PLA plans to fight and win informationized wars and how it will conduct information operations.”