Artificial Intelligence (AI) and Machine Learning (ML) have become mainstays of 21st-century technology. They dominate conversation, and everyone wants to be a part of them in one way or another. Suffice it to say, these technologies have found widespread application across many industries, and with good reason.

As with anything set to upend the status quo, however, these technologies come with their own set of misconceptions. Customers across several industry verticals are now contemplating the adoption of AI and ML in their businesses, and in IT automation as well, largely due to the unreasonable expectations built around the capabilities of learning machines.

In fact, many people imagine that AI and ML could, in the years to come, completely take over human tasks. We explore that notion in this article, but if it’s the short answer you’re looking for: it’s a big, fat no.

Machines, No Matter How Advanced, Are Dumb

Out of the box, machines are dumb. Even the most advanced computer is only as useful as the program it runs. But what of Machine Learning? ML is defined as the ability of a machine to learn without being explicitly programmed, and this definition has caused much confusion about the basic premise of machine capabilities.
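To make that distinction concrete, here is a minimal, purely illustrative Python sketch; the spam-filter scenario, the data, and the threshold rule are all invented for the example. The first function is explicitly programmed by a human, while the second derives its rule from labeled examples.

```python
# Explicit programming: a human writes the rule by hand.
def is_spam_explicit(num_links):
    return num_links > 5  # a human hard-coded this threshold

# Machine learning, in miniature: the rule is derived from labeled examples.
def learn_threshold(examples):
    """Pick the threshold that best separates (num_links, is_spam) pairs."""
    candidates = sorted({n for n, _ in examples})

    def errors(t):
        # Count how many examples a given threshold misclassifies.
        return sum((n > t) != label for n, label in examples)

    return min(candidates, key=errors)

training_data = [(0, False), (2, False), (7, True), (12, True)]
threshold = learn_threshold(training_data)  # no human wrote this number
print("explicit rule says is_spam(9)?", is_spam_explicit(9))
print("learned threshold:", threshold)
print("learned rule says is_spam(9)?", 9 > threshold)
```

The toy algorithm is beside the point; what matters is where the rule comes from. In the learned version, no human ever writes the threshold.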

“Self-Learning” and “Autonomous Machines” are often misconstrued terms in the ML ecosystem. What data scientists have actually tried to do is mimic the human learning process on machines, which means that understanding how humans learn is key to understanding ML. Consider three ways in which humans learn from experience.

      1. Assisted Training
         a. Take company employees, for example. They are trained to perform job-related duties, which may include solving the common or complex problems expected to arise on the job.
         b. When trained employees face a challenge beyond their level of understanding, they consult a Subject Matter Expert (SME). Presented with a similar situation in the future, however, those same employees will likely no longer need the SME’s help. This is Continuous Assisted Training.
      2. Self-Learning
         a. Even a native speaker has neither complete knowledge of nor full command over the English language, which is vast. That is why, when faced with a new word, we consult a dictionary. The new word is learned, and our vocabulary expands continuously over time. You could call this Continuous Self-Learning.
      3. Non-Continuous Learning
         a. Students preparing for a test, for example, train within the limits of a given syllabus. Depending on the level of training, questions from the syllabus are solvable; questions from outside it are not.
         b. Since consulting an external source is not an option here, a question from outside the syllabus results in automatic failure. This is a form of Non-Continuous Learning. (A code sketch of these modes follows below.)
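
Here is a minimal, purely illustrative Python sketch of these modes. Everything in it is hypothetical: the “model” is just a lookup table, and ask_sme() stands in for consulting a human expert. Continuous Self-Learning would look like the assisted path with the human expert replaced by an automated resource, such as a dictionary lookup.

```python
def ask_sme(question):
    """Stand-in for a human Subject Matter Expert; knows one extra answer."""
    expert_answers = {"reset password": "use the self-service portal"}
    return expert_answers.get(question)

class Learner:
    def __init__(self, syllabus):
        # Initial training: knowledge is limited to the given "syllabus".
        self.knowledge = dict(syllabus)

    def answer_non_continuous(self, question):
        # Non-Continuous Learning: no external help; unknown input fails.
        return self.knowledge.get(question, "FAIL: outside the syllabus")

    def answer_assisted(self, question):
        # Continuous Assisted Training: consult the SME once,
        # then remember the answer for next time.
        if question not in self.knowledge:
            answer = ask_sme(question)
            if answer is not None:
                self.knowledge[question] = answer  # learned for the future
        return self.knowledge.get(question, "FAIL: even the SME didn't know")

learner = Learner({"restart server": "run the restart playbook"})
print(learner.answer_non_continuous("reset password"))  # fails
print(learner.answer_assisted("reset password"))        # consults the SME
print(learner.answer_non_continuous("reset password"))  # now known
```

Note how, after one assisted consultation, the same question succeeds even in non-continuous mode: the “learning” here is nothing more than an updated lookup table.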

These are the learning processes that ML and AI are designed to replicate. Given that, it’s tempting to conclude that these technologies could soon replace us humans; conventional wisdom certainly suggests as much. But if that’s the core or the entirety of your thought process, chances are you don’t truly understand how computers work.

We Don’t Live Inside an Action Movie

Popular culture, especially Hollywood and the science-fiction genre, is probably what planted the idea of an AI-ML takeover in our subconscious minds. But our reality is far different from the ones seen in movies. The future we see tomorrow is entirely in our hands.

Technology Implies Complementarity

Have you ever thought about what competition from machines, rather than from other human beings, would actually look like? People often fail to understand that computers are more different from humans than any two humans are from each other; we are good at fundamentally different things.

As human beings, we’re gifted with intentionality: the ability to form plans and make decisions in complex scenarios. What we don’t handle well is copious amounts of data. Computers are the opposite: they process data efficiently, but struggle to make the simple judgments that come easily to humans.

This variance is best understood by looking at Google’s attempts to substitute computers for humans. In 2012, a Google supercomputer made headlines when, after scanning 10 million YouTube videos, it learned to recognize a cat with 75% accuracy. Impressive? Not when you consider that an average four-year-old can do the same flawlessly.

It’s simple: a cheap laptop can beat the smartest of mathematicians at some tasks, while even a supercomputer with upwards of 16,000 CPUs can’t beat a child at others. Humans and machines aren’t just more or less powerful than each other; they’re fundamentally and categorically different.

This is explained best in Peter Thiel’s book Zero to One, which elaborates on the concept of complementarity. The point is that computers are tools, not rivals to be feared.

Don’t worry about AI or ML taking your job. The most valuable companies won’t look for tasks that computers alone can solve; instead, they will invest in figuring out how computers can help humans solve hard problems.
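
As an illustration of that division of labor, here is a hypothetical Python sketch of a human-in-the-loop workflow; the fraud-review scenario, the keywords, and the record format are all invented for the example. The computer does what it is good at (scanning a large volume of records), and the human does what they are good at (the final judgment call).

```python
# A hypothetical human-in-the-loop workflow: the machine filters,
# the human decides.

SUSPICIOUS_KEYWORDS = {"wire transfer", "urgent", "gift card"}

def machine_filter(transactions):
    """The computer's strength: scan a huge volume of records quickly."""
    return [t for t in transactions
            if any(k in t["memo"].lower() for k in SUSPICIOUS_KEYWORDS)]

def human_review(flagged):
    """The human's strength: judgment on the handful of cases that remain."""
    for t in flagged:
        verdict = input(f"Fraudulent? {t['memo']!r} [y/n]: ")
        t["fraud"] = verdict.strip().lower() == "y"
    return flagged

transactions = [
    {"memo": "Monthly rent payment"},
    {"memo": "URGENT wire transfer to new vendor"},
]
# The machine narrows many records down to a few;
# the human decides what those few actually mean.
print(human_review(machine_filter(transactions)))
```

The design choice is the point: neither side replaces the other. The machine narrows the haystack, and the human decides what the needles mean.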

That is the recipe for a better and more sustainable future.


Rakesh Sankar
He can be reached at s.rakesh@globaledgesoft.com