/ White papers

Arachnys country audits offer up-to-date, deep-dive investigations into the data and regulatory landscapes of key markets worldwide.

The reports analyse sources such as corporate registries, news and litigation, with the aim of educating you about the availability, quality and challenges associated with the data in each market. Regions covered so far include Brazil, China, the UAE and Nigeria.

/ Blog

How does Intelligence Augmentation work?

'Intelligence augmentation' appears far less often than 'artificial intelligence' in the everyday lexicon; somehow it has never become one of the buzzwords associated with emerging technologies. Compliance professionals, however, should take heed.

IA has a long and proven history of extending humanity's information-processing capabilities. Within communications technology, IA was previously often referred to as 'cybernetics'. William Ross Ashby, whom some credit with originating the concept in 1956, sought to differentiate 'intellectual power' from the 'power of appropriate selection' performed by a black box. He believed that if it was possible to accelerate and develop appropriate selection, it should be no less possible to extend intellectual power. This became known as 'intelligence augmentation'.

Google, Apple and Microsoft may also be said to be experimenting with intelligence augmentation: that is the logical extension of Google Assistant, Siri and Cortana. The iPhone itself is, philosophically, a perfect example of intelligence augmentation: it is designed around empowering a human being through technology and extending their existing capabilities.

However, the central conflict between these ideas can be traced to two exceptionally talented figures. Douglas Engelbart was a computer scientist and DARPA (Defense Advanced Research Projects Agency) protégé who believed that technology's particular purpose was to extend our capacity to manipulate information. He set himself the project of developing technology to improve 'knowledge work'. His research agenda, as expressed in Augmenting Human Intellect: A Conceptual Framework, is notable for its theory of network-augmented intelligence - with the ultimate acknowledgement that the best way to harness technology is to connect it to people and to connect those people to each other.

In contrast, John McCarthy, who coined the term 'artificial intelligence', believed the best application of computing power was to build intelligent machines with their own autonomy, following an intensive period of 'training'. They would also go about tasks using their own cognitive methods - as Markoff puts it, AI researchers point out that aircraft fly just fine without flapping their wings.

To explain the dichotomy between these two positions, and how they may affect the workplace, we can use a simple analogy: a provider of artificial intelligence would walk into a compliance department and ask, "Who is the best compliance analyst here?" It would then assess what makes that analyst the best, and try to replicate those processes at an unmatchable rate.

A provider of intelligence augmentation, however, would walk into the very same department and ask the entire team, "What can we help you do better?" The provider would then set about creating systems that shore up the analysts' weak points and automate the manual work.

AI does not have to replace humans - IA is here to empower them

Traditionally, the measure of any automation technology's success is efficiency savings. But automating compliance processes that are currently performed manually is not in itself a reason to cut staff - nor has it been in other industries. Narrative Science surveyed the use of machine learning in the enterprise in 2015 and concluded that artificial intelligence does not appear to be killing jobs, nor will it; indeed, it suggested that AI-powered technologies create jobs. There were only 200 respondents, comprising senior staff: CEOs, CTOs, data scientists and managers. An important caveat must also be acknowledged: Narrative Science is itself invested in AI for the enterprise. In another 2015 study, 'Why Are There Still So Many Jobs? The History and Future of Workplace Automation', published in the Journal of Economic Perspectives, MIT professor David Autor concluded that there is no demonstrable evidence that automation will reduce employment.

The logical outcome of workplace automation is not that staff cease to add value as soon as manual work is automated; it is that they continue to add value at a higher level. In the case of IA in compliance, the application is very clear: complex data sets, even those interpreted and triaged by machines, will still require skilled, intelligent and experienced staff.

Another by-product of compliance technology is unstructured data. Much of this data currently goes uninterpreted, yet it is tremendously useful in surfacing records that indicate risk. Moreover, applying technology to an environment generates a large amount of metadata about entities that would previously have been unavailable or, more than likely, lost. Far better to use the technology to produce this unstructured data and have staff employ it to make qualitative decisions.

Some theorists even believe that AI, as it progresses, will actually increase demand for IA - a very logical deduction. Ajay Agrawal and Joshua Gans (author of The Disruption Dilemma) suggest that as machine intelligence takes on more predictive judgements, the value of the other human inputs that complement artificial intelligence will rise: "All human activities can be described by five high-level components: data, prediction, judgment, action, and outcomes."

The ability of machines to scale prediction will never be matched by humans. Paradoxically, this may require more knowledge workers:

"That’s because the value of human judgment skills will increase. Using the language of economics, judgment is a complement to prediction and therefore when the cost of prediction falls demand for judgment rises. We’ll want more human judgment."

Yes, analysts will likely have to adapt their skillsets, moving away from office software towards greater programming literacy. But the efficacy of machine learning will require constant reassessment and calibration by skilled individuals.
