The History of Artificial Intelligence

Take a journey with AiLab through the history of AI and learn about the key dates that shaped the field of Artificial Intelligence.

Turing Test

Alan Turing was an English computer scientist, mathematician and philosopher who devised the famous Turing Test, which aimed to identify whether a machine has intelligence (by seeing whether it could fool an observer into thinking it was human). It was one of the first real proposals for measuring Artificial Intelligence.

1950

I, Robot: The Book

A seminal sci-fi book of (previously published) short stories by Isaac Asimov. Best known for outlining the “Three Laws” of robotic behaviour, I, Robot provided insightful commentary on the possibility of thinking machines and intelligent robots.

1950

Birth of Artificial Intelligence

During the now famous Dartmouth Summer Research Project on Artificial Intelligence workshop, computer and cognitive scientist John McCarthy introduced the term ‘Artificial Intelligence’ to a wider audience and the field was born!

1956

Perceptron

Frank Rosenblatt invents the algorithm that underpins the field of connectionism (neural networks). Originally implemented in hardware, the Perceptron was later coded in software, which opened up considerable research effort (a minimal sketch of its learning rule follows below).

1957
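
Although Rosenblatt's original system was far more elaborate, the core perceptron learning rule can be sketched in a few lines of Python. This is a minimal illustration only (the function name and toy data are invented for this example): the weights are nudged towards any example the model currently misclassifies.

```python
# Minimal sketch of the perceptron learning rule (illustrative only).
import numpy as np

def train_perceptron(X, y, epochs=10, lr=0.1):
    """X: array of input vectors, y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if (np.dot(w, xi) + b) > 0 else -1
            if prediction != target:          # update only on mistakes
                w += lr * target * xi
                b += lr * target
    return w, b

# Example: learning the (linearly separable) logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
```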

ELIZA

Designed and developed by Joseph Weizenbaum at MIT, ELIZA was a well known program that could interact with the user in natural language. The most famous version of ELIZA was one that simulated a psychotherapist.

1965

Dendral: First Expert System

Started as a project in 1965, Dendral was software that helped chemists identify organic molecules. Within AI, Dendral is important because it showed that knowledge-based systems (e.g. Expert Systems) could provide real benefit to human workers.

1967

IJCAI

Stanford Research Institute organises and hosts the first International Joint Conference on Artificial Intelligence (IJCAI). Widely regarded as attracting the best minds in AI, IJCAI was held biennially (every two years).

1969

Perceptrons: The book

Minsky & Papert publish the book Perceptrons, which highlights the limitations of the Perceptron (1957): a single-layer perceptron can only classify the simplest (linearly separable) data.

1969

SHRDLU

Terry Winograd presented a software program that could manipulate virtual blocks by being given instructions in natural language. Although the domain was very restrictive, SHRDLU’s apparent Natural Language Understanding (NLU) was touted as a huge success within AI.

1971

MYCIN

A medical diagnosis Expert System that could help identify infection-causing bacteria. The success of MYCIN in research settings laid the groundwork for knowledge-based systems to be heavily commercialised.

1974

"AI Winter"

AI researchers realise the problems they were trying to solve were much more difficult than first thought. Millions of dollars had been invested, but the field had over-promised and under-delivered: commercial AI failed to live up to expectations and the majority of funding dried up. This was also a devastating time for Neural Network research, which all but stopped as it became clear that only trivial problems could be solved with the techniques of the day (see Perceptrons: The book, 1969).

1974 - 1980

Expert Systems (R1)

The first example of significant commercial success for the field of AI. Estimated to save Digital Equipment Corporation (DEC) between $25m and $40m per year, R1 (also known as XCON) helped select and configure components for building computer systems. The rise of Expert Systems came about due to the creation of ‘shells’ that provided a core framework to ease development.

1979 - 1980

CYC

Created by Doug Lenat, CYC is a huge database of real-world facts defined in an attempt to underpin intelligent systems. Throughout AI history, CYC is the longest-running attempt at solving Artificial Intelligence. Critics of CYC are concerned by the considerable amount of data the system requires to perform its tasks.

1984

Parallel Distributed Processing (PDP)

Rumelhart & McClelland publish probably the most important book(s) in connectionist history. The backpropagation learning algorithm (backprop), used to train Neural Networks, was re-discovered and employed to classify data and generate useful representations (a compact sketch follows below). The insights presented in the PDP books re-ignited interest in artificial neural network research and laid the groundwork for Deep Learning decades later.

1986
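
As a rough illustration of what backprop does, the sketch below trains a tiny one-hidden-layer network on the XOR problem (the very problem a single perceptron cannot solve). It is a minimal, assumed example, not the networks described in the PDP books.

```python
# Compact sketch of backpropagation: a one-hidden-layer network
# learning XOR by gradient descent (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error back through the layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # typically approaches [[0], [1], [1], [0]]
```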

ALVINN

A product of military research at Carnegie Mellon University, the Autonomous Land Vehicle in a Neural Network (ALVINN) was an AI system that used a neural network to decide which way to steer the vehicle based on an image of the road and the actions of the driver. It achieved speeds of 112 km/h for over 140 km on a public highway.

1989

DART

An expert system commissioned by the Defense Advanced Research Projects Agency (DARPA) to solve complex logistics problems for the US military. The Dynamic Analysis and Replanning Tool (DART) was deployed in the Gulf War and was so successful at finding solutions that within 4 years it saved enough money to cover the previous 30 years of DARPA's AI research.

1991

Chinook

The first computer system to be awarded world-champion status for defeating a human. Chinook competed in draughts (checkers) and its knowledge (game rules and goal) was hand-coded, i.e. it was not a learning machine.

1994

Deep Blue

Seen as a seminal moment in AI history: IBM's Deep Blue beats world chess champion Garry Kasparov. At the time, chess was seen as the holy grail for AI research due to the complexity of its strategy and the number of possible moves. The success showed the possibilities for software that could harness useful knowledge representation and sheer computing power.

1997

Deep Learning

This term was associated with neural networks by Igor Aizenberg. Deep Learning (DL) makes use of additional internal (hidden) layers of nodes within a neural network, which allows the system to learn more complex patterns from larger datasets (a sketch of such a stacked network follows below). Deep Learning has driven and underpinned the wide-scale use of AI.

2000
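
The "deep" in Deep Learning refers to stacking several hidden layers between input and output. A minimal, purely illustrative forward pass through such a stack might look like this (layer sizes and weights here are arbitrary; a real system learns the weights from data):

```python
# Forward pass through a small "deep" network: several hidden layers
# stacked between the input and the output. Weights are random here;
# in practice they would be learned (e.g. with backpropagation).
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 3]   # input -> two hidden layers -> output

weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    activation = x
    for w in weights:
        activation = np.maximum(0, activation @ w)   # ReLU non-linearity
    return activation   # a real network would use a task-specific output layer

print(forward(rng.normal(size=4)))
```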

Kismet

Presented in Cynthia Breazeal's PhD thesis, Kismet was a robot that could recognise and mimic human facial expressions (emotions). Using image feature-extraction techniques, Kismet was designed to further human-robot interaction.

2000

ASIMO

Arguably the most well-known and recognisable robot in the world. Designed and developed by Honda, ASIMO (Advanced Step in Innovative Mobility) has led the way in robotic and AI technologies in the area of assistant robots (robots to help humans). Capable of autonomous navigation, running, stair climbing, voice commands and facial recognition.

2000 - 2018

Recommendation Systems

A computer system that makes recommendations based on past behaviour. The most well-known implementations are within e-commerce, where data from store cards and from online browsing and purchasing is used to predict future buying behaviour and present associated products (either on-screen or via vouchers). A toy sketch of one common approach follows below.

2003
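
One simple family of techniques is item-based collaborative filtering: recommend items that frequently appear in the same baskets as things the customer has already bought. The sketch below is a toy, assumed example (the data and function names are invented for illustration):

```python
# Toy item-based recommender: suggest items that most often
# co-occur with a given item in past baskets. Purely illustrative.
from collections import Counter
from itertools import combinations

past_baskets = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"coffee", "milk"},
    {"bread", "jam"},
]

# Count how often each pair of items appears in the same basket.
co_occurrence = Counter()
for basket in past_baskets:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1

def recommend(item, top_n=3):
    """Rank other items by how often they co-occur with `item`."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("bread"))   # e.g. ['butter', 'jam']
```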

Blue Brain

A project by the Brain and Mind Institute (Switzerland) attempting to accurately model parts of the brain in a computer system. The software endeavoured to model the function of biological neurons and successfully simulated a neocortical column of a rat brain.

2006

DeepMind

DeepMind Technologies Limited is a company that employed AI tools and techniques to learn how to play computer games. Purchased by Google in 2014 for US$500 million.

2010

Watson

IBM's supercomputer defeats the champions of the TV game show Jeopardy!, which requires contestants to form the correct question based on clues and answers. Watson's computing framework has since evolved to encompass different AI techniques, so it can be applied in a multitude of application areas (with varying degrees of success).

2011

Siri

A voice activated 'personal assistant' installed on Apple products such as iPhones and iPads.

2011

Google Now

Google's voice activated 'personal assistant' as an alternative front-end to Google Search.

2012

Cortana

Microsoft's voice activated 'personal assistant' installed on Windows operating systems. In 2015, Cortana's reach was expanded by its availability on other platforms (such as iOS and Android).

2014 & 2015

AlphaGo (DeepMind)

DeepMind's AlphaGo beats the European Go champion; the first time a computer beats a professional Go player. Go is particularly challenging for a computer because of the huge number of possible board states (many more than chess). In 2017, an updated version of AlphaGo beats the top-ranked player in the world.

2015

Google Assistant

Updated Google Now (2012) technology for home-based Google hardware and Android devices. Google Assistant is designed to engage in conversations rather than individual question/answer (QA) exchanges.

2016

OpenAI Bot

A computer bot that learnt to play the multiplayer online battle game Dota 2 through trial and error and beat a professional player in a 1 vs 1 challenge match. In the following year (2018), 5 bots known as OpenAI Five coordinated as a team to compete against professional players.

2017

First National AI strategy

The Government of Canada announced the development of the world’s first national AI strategy. Led by CIFAR, the $125 million Pan-Canadian AI Strategy led the way for many other countries to develop their own AI strategies (see AiLab's own research into national AI strategies).

2017

AiLab Launches

The online site AiLab launches to the public. Artificial Intelligence Lab (AiLab) assists individuals, academia, industry, government and community across the globe with navigating the AI landscape and learning about the complex field of AI via workshops, resources, interviews, news and events. ;-)

Oct 2017

Google Duplex

Google's 'personal assistant' makes a series of phone calls to book appointments (e.g. a haircut). In some cases, Google Duplex fools the person answering the call into believing it is human. Backlash over the non-disclosure that an AI was making the call highlights the need for transparency when deploying AI systems.

2018

OpenAI’s GPT-2

Trained on 40GB of text (derived from the internet), GPT-2 is a system that, given a partial sentence, predicts the next word. By iteratively applying this simple technique (illustrated below), GPT-2 is able to produce high-quality, believable text. Worried about wide-scale misuse (GPT-2 “being used to generate deceptive, biased, or abusive language at scale”), OpenAI initially decided not to release the full model to the public.

Feb 2019
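
The generation loop behind that description is conceptually simple: predict a probability for each possible next word, pick one, append it to the text, and repeat. The sketch below shows only that loop; `predict_next_word_probs` is a hypothetical stand-in for the actual GPT-2 model (which works on sub-word tokens rather than whole words).

```python
# Illustrative autoregressive generation loop (the core idea behind GPT-2).
# `predict_next_word_probs` is a hypothetical stand-in for the real model:
# it would return a probability for every word in the vocabulary.
import random

def generate(prompt_words, predict_next_word_probs, max_words=50):
    words = list(prompt_words)
    for _ in range(max_words):
        probs = predict_next_word_probs(words)   # dict: word -> probability
        next_word = random.choices(list(probs), weights=probs.values())[0]
        if next_word == "<end>":                 # hypothetical end-of-text marker
            break
        words.append(next_word)                  # feed the prediction back in
    return " ".join(words)
```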

AiLab at AUT

AiLab expands into New Zealand with the launch of a new Artificial Intelligence (AI) Laboratory as part of a strategic alliance with Auckland University of Technology (AUT).

Jan 2020

Turing Natural Language Generation (T-NLG)

Microsoft releases a huge deep learning language model (17 billion parameters) that outperforms the previous state of the art (SOTA) on many Natural Language Processing (NLP) tasks, such as summarisation and question answering.

Feb 2020

OpenAI's GPT-3

The full version of OpenAI's GPT (Generative Pre-trained Transformer) model. GPT-3 has 10x the capacity of Microsoft's T-NLG NLP model. In September 2020, Microsoft announced an exclusive licensing agreement for GPT-3 (giving them rights to the source code).

May 2020

DALL·E

OpenAI releases the first version of its text-to-image application. A year later, the improved DALL·E 2 is released, providing 4x the resolution and more realistic images than the first model.

Jan 2021

ChatGPT

OpenAI announces the release of ChatGPT, a natural language conversational agent powered by GPT-3.5. Built on a powerful large language model, ChatGPT showcases impressive conversational abilities, but has been criticised for producing falsehoods.

Nov 2022

AiLab expands into the UK

AiLab opens our new office in the UK, adding to existing offices in Australia & New Zealand! AiLab (Artificial Intelligence Laboratory) Ltd is located within the Ocean Village Innovation Centre in Southampton, which is part of the Oxford Innovation Space.

March 2023
 
