AI. Machine Learning. Neural Networks. More and more people talk about them, more and more people know about them, and for very good reasons. As a machine learning aficionado, today I want to share how (I think) learning about machine learning can alter the way we think.
We used to always tell computers exactly what to do, and they helped us scale the things we are bad at yet know, in principle, how to do: adding two huge numbers, for example. We realised, though, that the tasks we perform most effortlessly, such as walking or talking in our native language, are in fact so complex they cannot be described by an algorithm, a series of logical steps a computer can follow. How do we learn such hard things so deeply that they become embedded in who we are?
Well, we are all lucky to be given some clay: our mind. And a lot of data: our experience of the world. At first our mind is a fresh lump of mud, but everything we see, touch, feel, hear and smell shapes it little by little. Connections are created: structures, shapes, patterns. Next thing we know, the baby is speaking the language his father has been struggling to learn for the last 20 years. Because when the father first heard it, his sculpture was already drying out.
Machine Learning is a massive leap of faith in Computer Science. It’s the idea that instead of explicitly telling a computer what to do, we just give it some appropriate clay (we call it an architecture) and let the data shape it. In our pursuit to make machines learn the same way we do, we gave them a clay inspired by our own: neural networks.
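To make the clay metaphor concrete, here is a minimal sketch of that division of labour, assuming PyTorch; the tiny architecture, toy data and hyperparameters are all illustrative choices, not a recipe. The architecture is fixed up front, and the training loop lets the data do the shaping.

```python
import torch
import torch.nn as nn

# The "clay": a tiny fixed architecture with randomly initialised weights.
# (Layer sizes are arbitrary illustrative choices.)
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

# The "experience": toy data for y = 2x + 1.
x = torch.linspace(-1, 1, 64).unsqueeze(1)
y = 2 * x + 1

optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Each pass over the data nudges the weights a little,
# like fingers pressing on wet clay.
for _ in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()    # in which direction should each weight move?
    optimiser.step()   # reshape the clay slightly in that direction

print(f"final loss: {loss.item():.4f}")
```

Nothing in the loop says what function to compute; the weights simply drift towards whatever shape the data presses them into.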
A neural network is a metaphor for our own minds. And while it’s clearly an over-simplified view of our intricate brains, it is nonetheless a powerful one. If you’ve been here before, you may have noticed by now: I am big on metaphors. I believe that acquiring metaphors is the strongest tool we have for shaping how we think.
ML is a special kind of metaphor. It’s a meta-metaphor, if you will: a metaphor about our own selves. And there is something else about it that people working in ML see every day: it just works.
That’s why I believe that internalising such an elegant metaphor about learning will have a huge impact on how we see our own minds, what they are capable of, and how we can achieve what we want.
People who are familiar with machine learning, whether scientists, engineers, students or simply curious individuals, have access to a powerful framework. Sometimes jokingly, we use this framework when talking about ourselves. We might say: “Oh no, I’ve overfitted!”, or “I think I need to increase my learning rate”, or “What’s your prior?”. These remarks are so frequent that it makes me wonder how our minds are changed by seeing themselves as neural networks. I would love to see a study about this from a neuroscience or psychology perspective!
Seeing yourself as an ML model might sound a bit … strange and dehumanising? I am here to convince you that getting friendly with machine learning can bring, behind the scenes, positive and useful ideas about oneself. For example:
It’s all in the data: you can’t change the black-box architecture, but you can change your input data! Which forces do you want shaping your sculpture?
The weights of your mind are highly trainable! You can re-model your clay by adding in some water! I’ve noticed that most ML people I meet believe they can learn anything, given enough training time and the right input data (or learning resources).
We can learn to identify learning failures (overfitting, underfitting), and the biases that might skew our predictions about the world; the sketch after this list shows the classic tell-tale signs.
When trying to make a model learn, we always need to carefully specify an objective: a function we want to optimise. This invites reflection: What is your objective? What do you want in your life?
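These last two points have a crisp counterpart in code. Below is a hedged sketch, using only numpy, of how an explicit objective (mean squared error here) plus a held-out validation set exposes underfitting and overfitting; the toy data, polynomial degrees and noise level are made-up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine wave (values are made-up for illustration).
x = rng.uniform(-1, 1, 30)
y = np.sin(3 * x) + rng.normal(0, 0.3, 30)

# Hold out data the model never trains on.
x_train, y_train = x[:20], y[:20]
x_val, y_val = x[20:], y[20:]

# The objective, written down explicitly: mean squared error.
def mse(coeffs, xs, ys):
    return np.mean((np.polyval(coeffs, xs) - ys) ** 2)

for degree in (1, 3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    print(f"degree {degree:2d}: "
          f"train={mse(coeffs, x_train, y_train):.3f}  "
          f"val={mse(coeffs, x_val, y_val):.3f}")

# Typically: degree 1 underfits (both errors high), degree 3 captures
# the signal, and degree 12 overfits (tiny train error, high val error).
```

The pattern to look for: when training error keeps falling while validation error rises, the model has started memorising noise instead of learning the signal.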
In a world obsessed with self-improvement and oozing with online self-help gurus, maybe what we actually need is to learn machine learning. Or maybe I just need to get outside a little more while my models are training.