In the early days of artificial intelligence, it was thought that full AI would be developed quite quickly.  Researchers boasted that it was just a matter of a few years and a few breakthroughs to make artificial intelligence an everyday reality.

…“from three to eight years we will have a machine with the general intelligence of an average human being.”  (Marvin Minsky, Life Magazine, 1970)  SOURCE

In those days it was believed that the best way to achieve artificial intelligence was to figure out how humans do things and then build computers that simulate those processes.  Alan Turing, the father of modern computing, said:

…“machines can be constructed which will simulate the behaviours of the human mind very closely.”  SOURCE

Alan Turing

The problem is that electronic computers do not think.  They process based on a simple zero-or-one, yes-or-no, on-or-off binary decision-making process.  This meant that the early strides in AI research, which initially seemed so promising, were driving down the wrong road.  By the mid-1980s artificial intelligence was not much more real than it had been in 1970, and the organizations that funded AI research began running out of patience.

By the end of the 1980s AI research was at a standstill in most universities and private companies.  The media had lost interest, and so had the public.  Much like the old joke that hydrogen has been next year’s energy source for 30 years now, AI was similarly dismissed.  Yes, everyone still thought AI was possible, but there did not seem to be a path forward, as the ‘top-down’ model of cloning human logic and learning processes was clearly not going to work.

The end of the 1980s through the early 1990s is known in the artificial intelligence world as the AI Winter: a time when researchers were pessimistic and funding sources for further research withered like tree leaves in the fall.

IBM’s Deep Blue in a computer museum

Something new and exciting was required to move AI from possible to probable and end the “AI Winter”.  That change came in 1997, when IBM completed work on a computer named Deep Blue, which did not even attempt to make decisions the way humans do.  The top-down approach was written off and replaced with an exciting new bottom-up approach to learning.

Deep Blue was tasked with beating the best chess player in the world at the time, the legendary Garry Kasparov, and it did so by being fast enough to calculate 330 million possible moves in a single second.  A chess master like Kasparov, operating at the top of his game, might be able to consider 100 possible moves in the minute or two he has to make his decision.  Kasparov had to rely on intuition and past patterns to find success.
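The brute-force idea is easier to see in a toy game than in chess.  The sketch below is a hypothetical illustration, not Deep Blue’s actual code (the real machine used specialized hardware and a far more sophisticated search): it plays perfect Nim simply by enumerating every possible continuation, the same “calculate everything, understand nothing” strategy described above.

```python
# Minimal brute-force game search, illustrated on Nim (a hypothetical
# toy example, not Deep Blue's real algorithm).  Rules: players take
# turns removing 1-3 sticks; whoever takes the last stick wins.
# The program "plays well" purely by exhaustively trying every line.

def best_move(sticks):
    """Return (move, can_win): a move and whether the side to move can force a win."""
    for take in (1, 2, 3):
        if take == sticks:
            return take, True            # taking the last stick wins outright
        if take < sticks:
            _, opponent_wins = best_move(sticks - take)
            if not opponent_wins:        # found a move that leaves the opponent lost
                return take, True
    return 1, False                      # every reply loses; the position is lost

# From 10 sticks, exhaustive search finds the winning move without any
# "understanding" of the game: take 2, leaving the opponent a lost position.
print(best_move(10))
```

The design point is the same one the article makes about Deep Blue: nothing here resembles human intuition or pattern recognition; the machine simply evaluates outcomes faster than a person could.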

Deep Blue was called an AI and hailed as a breakthrough, but it was far from intelligent in any sense humans assign to the word.  IBM’s amazing machine was no more able to understand that it was besting a human than it was able to make an omelet.  It was just fast, not smart.

Just as hydrogen’s winter is ending now that companies like Calgary’s Proton Technologies have demonstrated how to easily produce cheap, ultra-clean H2 with their breakthrough production technology, so too did this new development start to thaw the AI Winter.

What IBM’s Deep Blue proved, however, was that there was a new methodology for beating humans: don’t mimic humans, just be much faster at calculating simple outcomes.  It heralded a breakthrough that returned public, researcher, corporate, and government interest to Artificial Intelligence and ended the AI Winter.

To be clear, there was a similar but smaller AI Winter in the early 1970s, and there will no doubt be new AI Winters in the coming decades.


