Inside the Wise Leader’s Brain 2 | How Machines Learn

From facial recognition and virtual assistants to machine translation systems and stock-trading bots, machine learning’s breakthroughs can be ascribed to enormous leaps in the amount of data available and in the (cloud-based) processing power to find patterns in it. In this second part of our series exploring the neuroscience of leadership, we take a look at how machines learn.


Cat or Dog?

The perceptron simulates a human neuron, and it can learn. This inspired AI scientist Yann LeCun (now Chief AI Scientist at Meta, Facebook’s parent company) to develop an algorithm that could recognize pictures. A crucial breakthrough, it allowed AI to apply the neural network, or deep learning, to concrete applications, from face recognition to pattern recognition in general.

Imagine that you’re teaching your AI to tell the difference between drawings of cats and dogs. You give it a set of ‘training data’: cat and dog pictures, together with a label for each one. The learning algorithm can now teach the neural network to distinguish cats from dogs. With your silicon friend trained up, you can use the resulting program to label unfamiliar ‘test’ data.
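For readers who like to see the mechanics, here is a minimal sketch in Python. It keeps the training-data, labels and test-data workflow described above, but swaps the neural network for the simplest possible classifier, a nearest-neighbour rule; the two numeric ‘features’ per drawing (ear pointiness, snout length) are invented stand-ins for the pixel patterns a real network would learn.

```python
import numpy as np

# Training data: one row of invented features per drawing,
# plus a label for each (0 = cat, 1 = dog).
X_train = np.array([[0.9, 0.2],   # cat
                    [0.8, 0.3],   # cat
                    [0.2, 0.9],   # dog
                    [0.3, 0.8]])  # dog
y_train = np.array([0, 0, 1, 1])

def predict(x, X, y):
    """Label an unfamiliar drawing with its nearest labelled neighbour."""
    distances = np.linalg.norm(X - x, axis=1)
    return y[np.argmin(distances)]

# Test data: a drawing the trained program has never seen before.
print(predict(np.array([0.85, 0.25]), X_train, y_train))  # prints 0: cat
```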

The black box

Once the algorithm is up and running, we don’t necessarily need to understand its precise, layered workings. (This is how we arrive at the ‘black box’ notion, wherein the complex operations of the AI are incomprehensible to mere mortals.) To make decisions, we’ll likely focus on the output, trusting that the system has been well designed. Yet that trust may be misplaced: human architects may build bias, or biased data, into their creations. We cover this in our article: ‘Wise Leadership and AI, Can We Trust AI to Tame Complexity?’

Supervised learning

Whether in silicon or organic neural circuits, learning is about forming an internal model of the outside world. This can be tacit knowledge, such as how to ride a bike, or explicit knowledge that we can easily communicate to others, such as how the bike’s gears work. Similarly, a computer algorithm learning to recognize faces is acquiring template models of possible shapes and combinations of eyes, noses, and mouths. So too is a computer that is trained to recognize and ‘understand’ a sentence.

The artificial neural networks behind these algorithms are called deep networks because they stack many layers of simulated neurons, each of which can only discover an extremely simple feature of external reality. On each trial the network gives a tentative answer: cat. If it is told it made an error, it adjusts its parameters to try to reduce the error on the next trial: dog. Every wrong answer provides valuable information.

In machine learning, this is called ‘supervised learning’ (a supervisor knows the correct answer) and ‘error backpropagation’ (error signals are sent back into the network in order to modify its parameters). This kind of learning remains at the heart of many AI applications, for example, your smartphone’s ability to recognize your voice. The artificial network can only correct itself by calculating the difference between its response and the correct answer given by its supervisor.
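As an illustration of that correction loop (a sketch, not anyone’s production system), the Python below trains a single perceptron, one simulated neuron rather than a deep network, using the classic error-correction rule on the same invented cat/dog features. In a deep network, backpropagation plays the equivalent role, sending the error signal back through many layers at once.

```python
import numpy as np

# Invented cat/dog features and labels (0 = cat, 1 = dog).
X = np.array([[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.3, 0.8]])
y = np.array([0, 0, 1, 1])

weights, bias, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):                       # repeated trials
    for x, label in zip(X, y):
        answer = int(weights @ x + bias > 0)  # tentative answer
        error = label - answer                # supervisor's correction signal
        weights += lr * error * x             # adjust parameters so the
        bias += lr * error                    # error shrinks on the next trial

print(int(weights @ np.array([0.85, 0.25]) + bias > 0))  # prints 0: cat
```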

Fuzzy relationships

If a designer adds enough variables into the black box of algorithms underpinning machine learning, she will eventually find a good combination of variables. But she won’t know whether the correlation is just a matter of luck, and she certainly won’t be able to explain the relationship between one thing and another (causality). As she piles on more of the variables needed to make predictions, she needs exponentially more data to distinguish true predictive capacity from a happy accident. If it is just luck, the prediction’s success is the result of a coincidental alignment in the data and nothing more. This can produce funny or nonsensical correlations: correlating deaths caused by anticoagulants with sociology doctorates awarded in the USA between 1998 and 2009 (Bergstrom & West, 2020: 70) has no meaning whatsoever.
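The ‘happy accident’ is easy to reproduce. In the hypothetical sketch below, the target and ten thousand candidate variables are all pure random noise, so any correlation found is meaningless by construction. Yet with only twelve observations apiece, the best candidate will usually correlate strongly with the target.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=12)                 # e.g. twelve yearly figures
candidates = rng.normal(size=(10_000, 12))   # 10,000 unrelated variables

# Pearson correlation of each candidate variable with the target.
corrs = np.array([np.corrcoef(c, target)[0, 1] for c in candidates])
print(f"best |correlation| found: {np.abs(corrs).max():.2f}")  # typically above 0.8
```

More data per variable would expose the fluke; more candidate variables only make it more likely.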


In our next chapter, we’ll take a look at how humans learn.

By Dr. Peter Verhezen, with the Amrop Editorial Board

Peter is Visiting Professor for Business in Emerging Markets and Strategy and Sustainability at the University of Antwerp and Antwerp Management School (Belgium). He is Principal of Verhezen & Associates and Senior Consultant in Governance at the International Finance Corporation (World Bank) in Asia Pacific. In this capacity, he advises boards and top executives on governance, risk management and responsible leadership. Peter has authored a number of articles and books in the domain, and collaborated closely with Amrop in the development of the wise leadership concept.

Read the Full Report