The Need for New Terminology in AI

There is a fundamental difference between humans and machines. Jack Ma, the Chinese business magnate, co-founder of Alibaba, and 35th richest man in the world, once said (in his half-broken English):

Computers only have chips, men have the heart. It’s the heart where the wisdom comes from.

Forgive me if I also quote myself on this topic:

This is a tough issue to talk about because of the other opinions on the matter. Many people like to think that we operate much like machines. That we are predictable like them, that we are as deterministic as them. Meaning that given enough data to train on, a machine will one day be as smart or intelligent as humans, if not more so.

I like to think, however, that the vast majority of people would side with Jack Ma on this: that there really is something fundamentally different between us and machines. Certainly, my years of experience in teaching confirm this observation. The thousands of people I've interacted with really do believe we have something like a "heart" that machines do not have and never will. It's this heart that gives us the ability to be truly creative or wise, for example.

Some of you may know that along with my PhD in Artificial Intelligence I also have a Master's in Philosophy and a Master's in Theology. If I were to argue my point from the perspective of a theologian, it would be easy to do so: we're created by a Supreme Being who has endowed us with an eternal soul. Anything that we ourselves create with our own hands will always lack this one decisive element. The soul is the seat of our "heart". Hence, machines will never be like us. Ever.

But alas, this is not a religious blog. It is a technical one. So, I must argue my case from a technical standpoint – much like I have been doing with my other posts.

It's hard to do so, however. How do I prove that we are fundamentally different to machines and always will be? Everyone's opinion on the matter carries just as much weight on the rational level. It seems as though we're all floating in an ether of opinions without any hook to grasp onto and build something concrete upon.

But thankfully, that's not entirely the case. As I mentioned earlier, we can turn to our instincts or intuitions and speak about our "hearts". Although turning to intuition or instinct is not technically science, it's a viable recourse, as science hasn't spoken decisively on this topic. Where science falters, sometimes all we have left are our instincts, and there's nothing wrong with utilising them as an anchor in the vast sea of opinions.

But the other thing we can do is turn to professionals who work full-time on robots, machines, and AI in general and seek their opinion on the matter. I’ve spoken at length on this in the past (e.g. here and here) so I’ll only add one more quote to the pot from Zachary Lipton, Assistant Professor of Machine Learning and Operations Research at Carnegie Mellon University:

But these [language models] are just statistical models, the same as those that Google uses to play board games or that your phone uses to make predictions about what word you're saying in order to transcribe your messages. They are no more sentient than a bowl of noodles, or your shoes. [emphasis mine]

Generally speaking, then, what I wish to get across is this: if you work in the field of AI, if you understand what is happening under the hood, there is no way you can honestly and truthfully say that machines are currently capable of human-level intelligence or any form of sentience. They are "no more sentient than a bowl of noodles" because they "are just statistical models".
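
To make that concrete, here is a minimal sketch in Python (a toy of my own devising, not how any production system actually works) of next-word prediction as nothing more than counting word-pair frequencies:

```python
from collections import Counter, defaultdict

# Count word-pair frequencies in a tiny corpus: this table of
# counts IS the "model" -- nothing more than statistics.
corpus = "the heart has reasons the heart knows the mind ignores".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> 'heart' (seen twice after 'the')
```

Real language models swap the counting table for an enormous parameterised model, but the principle is unchanged: frequencies go in, probabilities come out. There is no "heart" anywhere in the pipeline.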

Just because a machine might look intelligent does not mean that it is.

Hence, if AI is no more sentient than your pair of shoes, if AI is just applied statistics, I’d like to argue the case that perhaps the terminology used in the field is imprecise.

Terms like "intelligence", "understanding", "comprehending", and "learning" are loaded; they imply something profound in the existence of an entity that is said to be or do those things. Take "understanding" as an example. Understanding is more than just memorising and storing information. It is grasping the essence of something profoundly. It is storing some form of information, yes, but it is also making an idea your own so that you can manoeuvre around it freely. Nothing "robotlike" (for want of a better word) or deterministic is exhibited in understanding. Understanding undoubtedly involves a deeper process than knowing.

Similar things can be said for “intelligence” and “learning”.

So, the problem is that the aforementioned terms are being misunderstood and misinterpreted when used in AI. Predicating "intelligence" with "artificial" doesn't do enough to uphold the divide between humans and machines. Likewise, the adjective "machine" in "machine learning" doesn't do enough to separate our learning from what machines really do when they acquire new information. In this case, machines update or readjust their statistical models – they do not "adapt".

Not strictly speaking, anyway.
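
To put some flesh on that claim, here is a toy sketch (again my own, purely for exposition) of what "learning" typically amounts to under the hood: an iterative numerical readjustment of a model's parameters to reduce prediction error:

```python
# A toy illustration (purely for exposition) of what "machine learning"
# amounts to: recalibrating the parameters of a statistical model to
# reduce prediction error. Here, gradient descent on a one-parameter
# linear model y = w * x.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) observations
w = 0.0    # the model's single parameter
lr = 0.05  # learning rate: how big each readjustment is

for _ in range(100):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the "learning": a numerical readjustment of w

print(round(w, 2))  # converges near 2.0, the slope that fits the data
```

At its core, this loop is what happens when a model is "trained": the numbers are nudged until the statistics fit the data. Calling that "recalibration" seems far more honest than calling it "adaptation".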

This is where things become a little murky, in truth, because the words being used do contain in them elements of what is really happening. I’m not going to deny that. Machines do “learn” in a way, and they do “adapt” in a way, too.

However, the confusion in the world is real – and as a result AI is over-hyped, because the media and people like Elon Musk spew out these words as if they applied equally to machines and to us. But they do not, do they? And if they do not, then perhaps different terms should be devised to quash the confusion and hype unfolding before our eyes.

As scientists, we ought to be precise with our terminology, and currently we are not.

What new terms should be devised or what different terms should be used is up for debate. I’ve already suggested that the word “adapted” should be changed to “readjusted” or “recalibrated”. That’s more precise, in my opinion. “Artificial Intelligence” should perhaps be renamed to “Applied Statistics”. We can think of other alternatives, I’m sure.

Can you picture, though, how the hype around AI would diminish if it were suddenly referred to as Applied Statistics? No more inflated notions of grandeur for this field. The human "heart" could reclaim its spot on the pedestal. And that's the whole point of this post, I guess.

Parting words

What I'm suggesting here is grand. It's a significant change that would have repercussions. I definitely do not want people to stop trying to attain human-level intelligence (AGI, as it's sometimes called). We've achieved a lot over the last decade with people's aims being directed specifically towards this purpose. But I still think we need to be precise and accurate in our terminology. Human capabilities, and human dignity for that matter, need to be upheld.

I also mentioned that most scientists working in machine learning would honestly say that AI entities are not, strictly speaking, intelligent. That does not mean, however, that they do not believe things may improve to the point where the aforementioned terms become applicable and precise in AI. Perhaps in the future machines will be truly intelligent and really will understand? In my opinion this will never occur (that's for another post), but for the time being it is safe to say that we are far away from attaining that level of development. Kai-Fu Lee, for example, who was once head of Google China, an exec at Apple, and Assistant Professor at Carnegie Mellon University, gives a date of around 2045 for machines to start displaying some form of real intelligence (I wrote a book review about his take on AI in this post). And that's a prediction that, as he admits, depends on great breakthroughs occurring in AI in the meantime – breakthroughs that may never transpire, as is their nature. We must live in the present, then, and currently, in my opinion, more harm is being done by the abounding misunderstandings, which is what calls for some form of terminology reform.

Another dilemma comes up with respect to animals. We can certainly call some animals "intelligent" and note the fact that they "learn". But, once again, it's a different form of intelligence, learning, etc. It's still not as profound as what humans do. However, it's much more accurate to apply these terms to animals than to machines. Animals have life. AI is as dead as your bowl of noodles or your pair of shoes.

Lastly, I deliberately steered away from trying to define terms like “understand”, “learn”, etc. I think it would be best for us to stick with our intuitions on this matter rather than getting bogged down in heavy semantics. At least for the time being. I think it’s more important for now to have the bigger picture in view.
