Artificial Intelligence (AI): The risk of over-reliance on quantifiable data

The rise of interest in artificial intelligence and machine learning has a flip side: the results might not be so smart if we fail to design the methods correctly. The underlying question is whether we can compress reality into measurable numbers. Artificial intelligence relies on what can be measured and quantified, which risks an over-reliance on measurable knowledge. As with many other technical problems, the challenge is that it all ends with humans who design and assess according to their own perceived reality. The designers' bias, perceived reality, weltanschauung, and outlook all go into the design. The limitations are not on the machine side; the humans are far more limiting. Even if the machines learn from a given point forward, it is still a human who stakes out the starting point and the initial landscape.

Quantifiable data has historically served America well; it was part of the American boom after the Second World War, when America was one of the first countries to take a scientific look at how to improve, streamline, and increase production while using fewer resources and less manpower.

The numbers have also misled. Vietnam-era Secretary of Defense Robert McNamara used the numbers to show how to win the Vietnam War; according to the numbers, they clearly indicated the path to a decisive military victory. In a post-Vietnam book titled "The War Managers," retired Army general Douglas Kinnard portrays the almost bizarre world of seeking to fight the war through quantification and statistics. Kinnard, who later taught at the National Defense University, surveyed the actual support for these methods, using fellow generals who had served in Vietnam as respondents. These generals considered the concept of assessing progress in the war by body counts useless; only two percent of the surveyed generals saw any value in the practice. Why were the Americans counting bodies? Likely because bodies were quantifiable and measurable. It is a common error in research design to seek out the variables that produce accessible, quantifiable results, and McNamara was at that time almost obsessed with numbers and their predictive power. McNamara is not the only one who has relied too heavily on the numbers.

In 1939, the Nazi-German foreign minister Ribbentrop, together with the German High Command, studied and measured the French-British ability to mobilize and to start a war with little advance warning. The Germans' quantified assessment was that the Allies were unable to engage in a full-scale war on short notice, and the Germans believed the numbers were identical with policy reality: the politicians would understand their limits, and the Allies would not go to war over Poland. So Germany invaded Poland and started the Second World War. The quantifiable assessment was correct and led to Dunkirk, but the grander assessment was off. It underestimated the British and French will to take on the fight, which led to at least 50 million dead, half of Europe behind the Soviet Iron Curtain, and the destruction of the Nazi regime itself. The British sentiment to fight the war to the end, the British ability to convince the US to provide resources to their effort, and the unfolding events thereafter were never captured in the data. The German assessment was a snapshot of the British and French war preparations in the summer of 1939 – nothing else.

Artificial intelligence is only as smart as the numbers we feed it. Ad notam.

The potential failure is hidden in selecting, assessing, designing, and extracting the numbers we feed to artificial intelligence. The risk of grave errors in decisionmaking, escalation, and avoidable human suffering and destruction is embedded in our future use of artificial intelligence if we do not pay attention to the data that feed the algorithms. Data collection and aggregation are the weakest link in the future of machine-supported decisionmaking.
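The failure mode described above can be sketched in a few lines of code. This is a hypothetical toy example, not any specific military or AI system: a "model" is fit only on an easily measured proxy variable, looks perfect on the world it was fit to, and fails completely when the unmeasured context shifts, much as the 1939 snapshot did.

```python
# Toy illustration: a model fit on a measurable proxy alone can score
# perfectly on its training snapshot and still fail after a context shift.

# Training world: the proxy happens to track the true outcome
# (pairs of: proxy value, true outcome).
train = [(x, 1 if x > 50 else 0) for x in range(0, 100, 5)]

def fit_threshold(data):
    """Fit the simplest possible model: a single cutoff on the proxy."""
    best_t, best_acc = 0, -1.0
    for t in range(0, 101):
        acc = sum((x > t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

t = fit_threshold(train)
train_acc = sum((x > t) == bool(y) for x, y in train) / len(train)

# Deployment world: the unmeasured context inverts the relationship,
# and the same model is now systematically wrong.
deploy = [(x, 0 if x > 50 else 1) for x in range(0, 100, 5)]
deploy_acc = sum((x > t) == bool(y) for x, y in deploy) / len(deploy)

print(train_acc, deploy_acc)  # perfect on the snapshot, useless after the shift
```

The numbers in the snapshot were "correct," yet the model built on them says nothing about the world once the conditions that generated the data change.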

Jan Kallberg is a Research Scientist at the Army Cyber Institute at West Point and an Assistant Professor in the Department of Social Sciences (SOSH) at the United States Military Academy. The views expressed herein are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the United States Military Academy, or the Department of Defense.
