
For ethical artificial intelligence, security is pivotal

The market for artificial intelligence is growing at a pace not seen since the introduction of the commercial Internet. Estimates vary, but the global AI market is projected to grow 30 to 60 percent per year. Defense spending on AI projects is increasing at an even higher rate once we add wearable AI and systems that depend on AI. Defense investments such as augmented reality, automated target recognition, and tactical robotics would not advance at today’s rate without AI to support the realization of these concepts.

The beauty of the market economy is its responsiveness. Once a “buy” signal is identified, the market works to satisfy the buyer’s need. Powerful buy signals lead to rapid development, deployment, and roll-out of solutions, because time to market matters.

My concern is based on earlier cases in which time to market prevailed over conflicting interests: the first years of the commercial Internet, the introduction of remote control in supervisory control and data acquisition (SCADA) systems and manufacturing, and the rapid growth of smartphone apps. In each of these cases, security was not the first thing on the developers’ minds; time to market was the priority. The exposure increases with the economically sound pursuit of commercial off-the-shelf (COTS) products, as sensors, chipsets, functions, electric controls, and storage devices can be bought on the civilian market for a fraction of the cost. These COTS products cut costs, give the American people more defense and security for the money, and shorten the development and deployment cycle.

The Department of Defense has adopted five ethical principles for the department’s future use of AI: responsible, equitable, traceable, reliable, and governable. The common denominator of all five principles is cybersecurity. If the cybersecurity of an AI application is inadequate, these principles can be jeopardized and no longer steer the DOD’s AI implementation.

Future AI implementation radically increases the attack surface, and of particular concern is the ability to detect manipulation of the processes, because the underlying AI processes are not clearly understood or monitored by the operators. A system that detects targets from images or streaming video, where AI is used to identify target signatures, generates decision support that can lead to the destruction of those targets. The targets are engaged and neutralized. One of the ethical principles for AI is “responsible.” How do we ensure that the targeting is accurate? How do we safeguard against a corrupted algorithm, or against sensors tampered with to produce spurious data? It becomes a matter of security.

In a larger conflict, where ground forces are unable to inspect the effects on the ground, the feedback loop that invalidates AI-supported decisions might not reach the operators for weeks. Or it might surface only after the conflict is over. A rogue system can likely produce spurious decision support for longer than we are willing to admit.

Of the five principles, “equitable” is the area of greatest human control. Even if embedded biases in a process are hard to detect, controlling them is within our reach. “Reliable” relates directly to security because it requires that the systems maintain confidentiality, integrity, and availability.

If the principle “reliable” requires cybersecurity vetting and testing, we have to realize that these AI systems are part of complex technical structures with a broad attack surface. If “reliable” is jeopardized, then “traceable” becomes problematic, because if the integrity of the AI is questionable, it is not a given that “relevant personnel possess an appropriate understanding of the technology.”

The principle “responsible” can still be upheld, because deployed personnel make sound and ethical decisions based on the information provided, even if a compromised system feeds spurious information to the decision-maker. The principle “governable” acts as a safeguard against “unintended consequences.” The unknown is the time from when unintended consequences occur until the operators of the compromised system understand that it is compromised.

It is evident when a target that should be hit is repeatedly missed; the effects can be observed. If the effects cannot be observed, it is no longer a given that “unintended consequences” are identified, especially in a fluid multi-domain battlespace. A compromised AI system for target acquisition can mislead targeting, acquiring hidden non-targets that waste resources and weapon system availability and expose friendly forces to detection. The time to detect such a compromise can be significant.

My intention is to show that cybersecurity is pivotal for AI success. I do not doubt that AI will play an increasing role in national security. AI is a top priority for the United States and our foreign partners, but potential adversaries will make finding ways to compromise these systems a top priority of their own.

Our Dependence on the Top 2% Cyber Warriors

As an industrial nation transitioning to an information society engaged in digital conflict, we tend to see the technology as the weapon. In the process, we ignore the fact that a few humans can have a large-scale operational impact.

We underestimate the importance of applicable intelligence: the intelligence of how to apply things in the right order. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are mostly publicly available; anyone can download them from the Internet and use them. The weaponization of the tools occurs when they are used by someone who understands how to use them in the right order.

In 2017, Gen. Paul Nakasone said “our best [coders] are 50 or 100 times better than their peers,” and asked “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” The success of cyber operations is highly dependent, not on tools, but upon the super-empowered individual that Nakasone calls “the 50-x coder.”

There have always been exceptional individuals with an irreplaceable ability to see the challenge early on, create a technical solution, and know how to play it for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of artificial intelligence increases the reliance on these highly capable individuals, because someone must set the rules and point out the trajectory for artificial intelligence at the outset.

But this also raises a series of questions. Even if a human mind is identified as a weapon, how do you make it “classified”? How do we protect these high-ability individuals who are weapons in the digital world?

These minds are different because they see an opportunity to exploit in the digital fog of war when others do not. They address problems in innovative ways, unburdened by traditional thinking, maximize the dual purpose of digital tools, and can generate decisive cyber effects.

It is this applicable intelligence that creates the process, understands the application of tools, and turns simple digital software into digitally lethal weapons. In the analog world, it is as if individuals had the supernatural ability to build a hypersonic missile from materials readily available at Kroger or Albertsons. These individuals are strategic national security assets for the nation.

Systemically, we struggle to see humans as the weapon, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed.

For America, technological wonders are a sign of prosperity, ability, self-determination, and advancement, a story that began in the early days of the colonies and continued through the Erie Canal, the manufacturing era, and the moon landing, all the way to autonomous systems, drones, and robots. In this default mindset, there is always a tool, an automated process, a piece of software, or a set of technical steps that can solve a problem or act. The same mindset sees humans merely as an input to technology, so humans are interchangeable and replaceable.

Super-empowered individuals are not interchangeable and cannot be replaced, unless we want to be stuck in a digital war. Artificial intelligence and machine learning support the intellectual endeavor to cyber defend America, but humans set the strategy and direction.

It is time to see weaponized minds for what they are: not dudes and dudettes, but strike capabilities.

Jan Kallberg, Ph.D., LL.M., is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.

Reassessed incentives for innovation support transformation

There is no way to ensure victory in the future fight other than to innovate, implement the advances, and scale the innovation. To use Henry Kissinger’s words: “The absence of alternatives clears the mind marvelously.”

Innovative environments are not created overnight. Establishing the right culture rests on mutual trust, a trust that allows members to be vulnerable and take chances. Failure is a milestone on the way to success.

Important characteristics of an innovative environment are competence, expertise, passion, and a shared vision. Such an environment is populated with individuals who are in it for the long run and do not quit until they make advances. Individuals who strive for success and are determined to work toward excellence are all around us. For the defense establishment, the core challenge is to reassess the incentives provided, so that ambition and intellectual assets are directed toward innovation and the future fight.

Edward N. Luttwak noted that strategy only matters if we have the resources to execute it. Embedded in Luttwak’s statement is the general condition that if we are unable to identify, understand, incentivize, activate, and utilize our resources, the strategy does not matter. This leads to the question: who will be the innovator? How does the Department of Defense create a broad, innovative culture? Is innovation outsourced to think tanks and experimental labs, or is it entrusted to individuals who become experts in their subfields and drive innovation where they stand? Or do these models run in parallel? In general, are we ready to expose ourselves to the vulnerability of failure, and if so, what is an acceptable failure? These questions need to be addressed in the process of transformation.

Structural frameworks in place today can hinder innovation. One example is the traditional personnel model under the Defense Officer Personnel Management Act (DOPMA). In theory, it is a form of the assembly line’s scientific management, Taylorism, where the officer is processed through the system to the highest level of his or her career potential. In reality, the financial incentives favor following the flowchart for promotion instead of staying at a point where you are passionate about making an improvement. If a transformation to an innovative culture is to succeed, the incentives need to be aligned with the overall mission objective.

Another example is government-sponsored university research. Even if funds are allocated to mobilize civilian intellectual torque for innovation that benefits the warfighter, traditional university research has little incentive to support the transformation of the Armed Forces. The majority of academia, and the overwhelming majority of research universities, pursue DOD and government research grants as income to fund graduate students and facilities. Many of the sponsored projects are basic research whose results are made public, which somewhat defeats the purpose if you seek an innovative advantage, and they provide limited support to the future fight. Academics can tailor their research to fit the funding opportunity, which is logical from their viewpoint, and often it is a tweak on research they are already doing, squeezed into a grant application.

Academics at universities seek tenure, promotion, and leverage in their fields, so government funding becomes a box to check for tenure, evidence of the ability to attract external funding, and support for academic career progression. The incentives to support DOD innovation are suppressed by far stronger incentives for the researcher to gain personal career leverage at the university. In the future, it is likely more cost-effective to concentrate DOD-sponsored research on those universities that invest the time and effort to ensure that their research is DOD-relevant, operationally current, and supportive of the warfighter. Universities that align themselves with DOD objectives and deliver innovation for the future fight will also have a better understanding of the future threat landscape, and they are more likely to have an interface for quick dissemination of DOD needs. A realignment of incentives for sponsored university research creates an opportunity for those ready to support the future fight.

There is a need to look, at the system level, at how innovation is incentivized to ensure that resources generate the effects sought. America has talent, ambition, a tradition of fearless engineering, and grit – the correct incentives unleash that innovative power.

Jan Kallberg is a research scientist at the Army Cyber Institute at West Point and an assistant professor in the department of social sciences at the United States Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.