Computer History Museum: A Comprehensive History of Robotics & AI

Howdy y’all!

Since I still have to wait quite a while till Detroit comes out, I had to find a way to spend my time productively. I came across the truly excellent and terribly fascinating Computer History Museum! I’m here to share a brief review of this museum.

Seriously you guys, this site is great. I looked at the history of abacuses for like an hour. Their online exhibits are informative and interesting; the site really feels like visiting an actual museum, which I love! I highly suggest you give them a look, especially the display about the ancient, mysterious, and technologically advanced Antikythera Mechanism.

In honor of Kara and the rest of the android race, I have curated the historical timeline of AI & Robotics below from the Computer History Museum’s excellent online exhibit, the Timeline of Computer History. Enjoy!

HSLAMMA WILL RESPAWN IN THREE…TWO…ONE…


A Comprehensive History of Robotics & AI

1939: Elektro at the World’s Fair


Elektro and Sparko at the 1939 World’s Fair (Image courtesy of Computer History Museum)

Built by Westinghouse, the relay-based Elektro robot responds to the rhythm of voice commands and delivers wisecracks pre-recorded on 78 rpm records. It appeared at the World’s Fair, and it could move its head and arms… and even “smoked” cigarettes.

 

1941: The Three Laws of Robotics

 


May 1941 issue of Astounding Science Fiction (Image courtesy of Computer History Museum)

Isaac Asimov publishes the science fiction short story Liar! in the May issue of Astounding Science Fiction. In it, he introduced the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

This is thought to be the first known use of the term “robotics.”

 

1943: A Logical Calculus of the Ideas Immanent in Nervous Activity


Walter Pitts (Image courtesy of Computer History Museum)

Two scientists, Warren S. McCulloch and Walter H. Pitts, publish the groundbreaking paper A Logical Calculus of the Ideas Immanent in Nervous Activity. The paper quickly became a foundational work in the study of artificial neural networks and has many applications in artificial intelligence research. In it, McCulloch and Pitts described a simplified neural network architecture for intelligence; while the neurons they described were greatly simplified compared to biological neurons, the model they proposed has been enhanced and improved upon by subsequent generations of researchers.

 

1948: Cybernetics


Norbert Wiener (Image courtesy of Computer History Museum)

Norbert Wiener publishes the book Cybernetics, which has a major influence on research into artificial intelligence and control systems. Wiener drew on his World War II experiments with anti-aircraft systems that anticipated the course of enemy planes by interpreting radar images. Wiener coined the term “cybernetics” from the Greek word for “steersman.”

 

1949: Alan Turing quoted by The London Times on artificial intelligence


Alan Turing (Image courtesy of Computer History Museum)

On June 11, The London Times quotes the mathematician Alan Turing: “I do not see why it (the machine) should not enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms. I do not think you can even draw the line about sonnets, though the comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.”

 

1949: Brain surgeon reflects on artificial intelligence


British brain surgeon Geoffrey Jefferson (Image courtesy of Computer History Museum)

On June 9, at Manchester University’s Lister Oration, British brain surgeon Geoffrey Jefferson states, “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain – that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or miserable when it cannot get what it wants.”

 

1950: Grey Walter’s Elsie


Grey Walter works with Elsie (Image courtesy of Computer History Museum)

A neurophysiologist, Walter built wheeled automatons in order to experiment with goal-seeking behavior. His best known robot, Elsie, used photoelectric cells to seek moderate light while avoiding both strong light and darkness—which made it peculiarly attracted to women’s stockings.

 

1950: Isaac Asimov’s I, Robot


I, Robot book cover (Image courtesy of Computer History Museum)

Isaac Asimov’s I, Robot is published. Perhaps in reaction to earlier dangerous fictional robots, Asimov’s creations must obey the “Three Laws of Robotics” (1941) to ensure they pose no threat to humans or each other. The book collects nine science fiction short stories.

 

1951: Squee: The Robot Squirrel


Squee: The Robot Squirrel (Image courtesy of Computer History Museum)

Squee: The Robot Squirrel uses two light sensors and two contact switches to hunt for “nuts” (actually, tennis balls) and drag them to its nest. Squee was described as “75% reliable,” but it worked well only in a very dark room. Squee was conceived by computer pioneer Edmund Berkeley, who earlier wrote the hugely popular book Giant Brains, or Machines That Think (1949). The original Squee prototype is in the permanent collection of the Computer History Museum.

 

The Turing Test


Alan Turing (Image courtesy of Computer History Museum)

Alan Turing creates a standard test to answer the question “Can machines think?” in his 1950 paper “Computing Machinery and Intelligence.” He proposed that if a computer, on the basis of written replies to questions, could not be distinguished from a human respondent, then it must be “thinking.”

 

1955: Logic Theorist


Herb Simon (L) and Allen Newell (R) (Image courtesy of Computer History Museum)

Allen Newell, Herbert A. Simon and J.C. Shaw begin work on Logic Theorist, a program that would eventually prove 38 theorems from Whitehead and Russell’s Principia Mathematica. Logic Theorist introduced several critical concepts to artificial intelligence including heuristics, list processing and ‘reasoning as search.’

 

1956: Robby the Robot


Robby the Robot (Image courtesy of Computer History Museum)

Robby the Robot appears in MGM’s 1956 science fiction movie Forbidden Planet. In the film, Robby was the creation of Dr. Morbius and was built to specifications found in an alien computer system. Robby’s duties included assisting the human crew while following Isaac Asimov’s Three Laws of Robotics (1941). The movie was a cult hit, in part because of Robby’s humorous personality, and Robby the Robot toys became huge sellers.

 

1958: LISP


John McCarthy (Image courtesy of Computer History Museum)

The programming language LISP (short for “List Processing”) is invented in 1958 by John McCarthy at MIT. A key feature of LISP was that data and programs were simply lists in parentheses, allowing a program to treat another program – or itself – as data. This characteristic greatly eased the kind of programming that attempted to model human thought. LISP is still used in a large number of artificial intelligence applications.
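That code-as-data idea is easy to illustrate. Here is a minimal sketch in Python rather than LISP itself, in which an arithmetic “program” is an ordinary nested list that another function can evaluate (the `evaluate` function and `OPS` table are my own illustrative names, not anything from LISP):

```python
import operator

# Arithmetic "programs" are nested lists, echoing LISP's s-expressions.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    """Evaluate a program-as-data: a number evaluates to itself;
    a list [op, a, b] applies op to its evaluated arguments."""
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr
    return OPS[op](evaluate(a), evaluate(b))

# Because the program is plain data, other code can inspect or rewrite
# it before running it -- the property that made LISP attractive for AI.
program = ["+", 1, ["*", 2, 3]]   # (+ 1 (* 2 3)) in LISP notation
result = evaluate(program)        # 7
```

In LISP proper the evaluator and the programs it runs live in the same list notation, which is what lets a program treat itself as data.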

 

1959: Automatically Programmed Tools (APT)


APT ashtray (Image courtesy of Computer History Museum)

MIT’s Servomechanisms Laboratory demonstrates computer assisted manufacturing (CAM). The school’s Automatically Programmed Tools project created a language, APT, used to control milling machine operations. At the demonstration, an Air Force general claimed that the new technology would enable the United States to “build a war machine that nobody would want to tackle.” The machine produced a commemorative ashtray for each attendee.

1960: Quicksort algorithm


Quicksort developer Tony Hoare (Image courtesy of Computer History Museum)

While studying machine translation of languages in Moscow, C. A. R. Hoare develops Quicksort, an algorithm that would become one of the most used sorting methods in the world. Later, Hoare went to work for the British computer company Elliott Brothers, where he designed the first commercial Algol 60 compiler. Queen Elizabeth II knighted C.A.R. Hoare in 2000.
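The idea behind Quicksort fits in a few lines. A minimal (non-in-place) Python sketch, rather than Hoare’s original in-place partition scheme:

```python
def quicksort(items):
    """Hoare's divide-and-conquer idea: pick a pivot, split the rest
    into smaller and larger elements, and sort each side recursively."""
    if len(items) <= 1:
        return list(items)
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 6, 1, 6, 2]))  # [1, 2, 3, 6, 6]
```

Hoare’s version partitions the array in place with swaps instead of building new lists, which is a large part of why it became one of the fastest general-purpose sorts in practice.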

1961: UNIMATE


UNIMATE robot in an industrial setting (Image courtesy of Computer History Museum)

UNIMATE, the first mass-produced industrial robot, begins work at General Motors. Obeying step-by-step commands stored on a magnetic drum, the 4,000-pound robot arm sequenced and stacked hot pieces of die-cast metal. UNIMATE was the brainchild of Joe Engelberger and George Devol, and originally automated the manufacture of TV picture tubes.

 

1963: The Rancho Arm


The Rancho Arm (Image courtesy of Computer History Museum)

Researchers design the Rancho Arm robot at Rancho Los Amigos Hospital in Downey, California as a tool for the handicapped. The Rancho Arm’s six joints gave it the flexibility of a human arm. Acquired by Stanford University in 1963, it holds a place among the first artificial robotic arms to be controlled by a computer.

 

1965: DENDRAL artificial intelligence program


Ed Feigenbaum (Image courtesy of Computer History Museum)

A Stanford team led by professors Ed Feigenbaum, Joshua Lederberg and Carl Djerassi creates DENDRAL, the first “expert system.” DENDRAL was an artificial intelligence program designed to apply the accumulated expertise of specialists to problem solving. Its area of specialization was chemistry and physics. It applied a battery of “if-then” rules to identify the molecular structure of organic compounds, in some cases more accurately than experts.
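The “if-then” style of an expert system can be sketched with a toy forward-chaining rule engine. The rules and fact names below are invented placeholders for illustration, not DENDRAL’s actual chemistry knowledge:

```python
# Hypothetical expert-system rules: when every condition in the left-hand
# set is a known fact, the right-hand conclusion becomes a new fact.
RULES = [
    ({"oxygen_present", "spectrum_peak_17"}, "hydroxyl_group"),
    ({"hydroxyl_group", "carbon_chain"}, "alcohol"),
]

def forward_chain(facts):
    """Apply the rules repeatedly until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Starting from raw observations, the engine chains rules together: a peak in the spectrum implies a hydroxyl group, which together with a carbon chain implies an alcohol. DENDRAL applied the same pattern with hundreds of rules written by chemists.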

 

1965: The Orm


The Orm (Image courtesy of Computer History Museum)

Developed at Stanford University, the Orm robot (Norwegian for “snake”) was an unusual air-powered robotic arm. It moved by inflating one or more of its 28 rubber bladders that were sandwiched between seven metal disks. The design was abandoned because movements could not be repeated accurately.

 

1966: Joseph Weizenbaum’s ELIZA


Joseph Weizenbaum (Image courtesy of Computer History Museum)

Joseph Weizenbaum finishes ELIZA, a natural language processing program. Its most famous mode, called DOCTOR, responded to user questions much like a psychotherapist. DOCTOR was able to trick some users into believing they were interacting with another human, at least until the program reached its limitations and became nonsensical. DOCTOR used predetermined phrases or questions and substituted key words to mimic a human actually listening to user queries or statements.
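That keyword-substitution trick is simple to sketch. The patterns and replies below are illustrative stand-ins of my own, not Weizenbaum’s actual DOCTOR script:

```python
import re

# A DOCTOR-style exchange: match a keyword pattern, then echo part of
# the user's statement back inside a canned question.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
]
DEFAULT_REPLY = "Please go on."

def respond(statement):
    """Return the first matching rule's reply, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT_REPLY
```

When no rule matches, a stock phrase keeps the conversation going, which is exactly where the illusion of understanding starts to break down.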

 

1968: SHRDLU natural language


SHRDLU original screen display (Image courtesy of Computer History Museum)

Terry Winograd begins work on his PhD thesis at MIT. His thesis focused on SHRDLU, a natural language understanding program used in artificial intelligence research. While precursor programs like ELIZA were incapable of truly understanding English commands and responding appropriately, SHRDLU was able to combine syntax, meaning and deductive reasoning to accomplish this. SHRDLU’s universe was very simple: commands consisted of picking up and moving blocks, cones and pyramids of various shapes and colors.

 

1968: The Tentacle Arm


Minsky’s Tentacle Arm (Image courtesy of Computer History Museum)

Marvin Minsky develops the Tentacle Arm robot, which moves like an octopus. It has twelve joints designed to reach around obstacles. The hydraulically powered arm was controlled by a DEC PDP-6 computer and, mounted on a wall, could lift the weight of a person.

 

1970: Shakey the robot


SRI’s Shakey (Image courtesy of Computer History Museum)

SRI International´s Shakey robot becomes the first mobile robot controlled by artificial intelligence. Equipped with sensing devices and driven by a problem-solving program called STRIPS, the robot found its way around the halls of SRI by applying information about its environment to a route. Shakey used a TV camera, laser range finder, and bump sensors to collect data, which it then transmitted to a DEC PDP-10 and PDP-15. The computer sent commands to Shakey over a radio link. Shakey could move at a speed of 2 meters per hour.

 

1972: LUNAR natural language information retrieval system


Earth’s moon (Image courtesy of Computer History Museum)

LUNAR, a natural language information retrieval system, is completed by William Woods, Ronald Kaplan and Bonnie Nash-Webber at Bolt, Beranek and Newman (BBN). LUNAR helped geologists access, compare and evaluate chemical-analysis data on moon rock and soil composition from the Apollo 11 mission. Woods was the manager of the BBN AI Department throughout the 1970s and into the early 1980s.

 

1974: The Silver Arm


The Silver Arm (Image courtesy of Computer History Museum)

David Silver at MIT designs the Silver Arm, a robotic arm to do small-parts assembly using feedback from delicate touch and pressure sensors. The arm’s fine movements approximate those of human fingers.

 

1976: Shigeo Hirose’s Soft Gripper


Shigeo Hirose’s Soft Gripper (Image courtesy of Computer History Museum)

Shigeo Hirose’s Soft Gripper robot can conform to the shape of a grasped object, such as a wine glass filled with flowers. The design Hirose created at the Tokyo Institute of Technology grew from his studies of flexible structures in nature, such as elephant trunks and snake spinal cords.

 

1977: C-3PO and R2-D2 in Star Wars


C-3PO protocol droid (Image courtesy of Computer History Museum)

 

C-3PO and R2-D2 play a critical role in 1977’s blockbuster hit movie Star Wars. Throughout the movie C-3PO served as an ambassador-like robot, knowledgeable about customs, traditions and over 6,000,000 languages. C-3PO’s companion robot, R2-D2, served as a mechanic, computer interface specialist and co-pilot for the film’s main protagonist, Luke Skywalker.

 

1978: Speak & Spell


TI’s Speak & Spell (Image courtesy of Computer History Museum)

Texas Instruments Inc. introduces Speak & Spell, a talking learning aid for children aged 7 and up. Its debut marked the first electronic duplication of the human vocal tract on a single integrated circuit. Speak & Spell used linear predictive coding to formulate a mathematical model of the human vocal tract and predict a speech sample based on previous input. It transformed digital information processed through a filter into synthetic speech and could store more than 100 seconds of linguistic sounds.
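The prediction idea at the heart of linear predictive coding can be shown in miniature. This is just the underlying math, not TI’s chip: a pure tone is exactly predicted by a weighted sum of its two previous samples, and the frequency and coefficient values below are arbitrary demo choices.

```python
import math

# For s[n] = sin(w*n), trigonometry gives s[n] = 2*cos(w)*s[n-1] - s[n-2],
# so a two-coefficient linear predictor reconstructs the tone exactly.
w = 0.3  # angular frequency per sample (arbitrary demo value)
samples = [math.sin(w * n) for n in range(50)]

a1, a2 = 2 * math.cos(w), -1.0  # predictor coefficients
predicted = [a1 * samples[n - 1] + a2 * samples[n - 2] for n in range(2, 50)]
max_error = max(abs(p - s) for p, s in zip(predicted, samples[2:]))
# max_error is on the order of floating-point round-off
```

Real LPC speech coders fit such coefficients to short windows of audio and store the coefficients plus a small residual instead of the raw samples, which is how Speak & Spell squeezed speech onto a cheap chip.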

 

1979: The Stanford Cart


The Stanford Cart (Image courtesy of Computer History Museum)

The Stanford Cart was a long-term research project undertaken at Stanford University between 1960 and 1980. Hans Moravec rebuilt the cart in 1977, equipping it with stereo vision: a television camera, mounted on a rail on top of the cart, took pictures from several different angles and relayed them to a computer. In 1979, the cart successfully crossed a room on its own while navigating around a chair placed as an obstacle.

 

1981: The direct drive arm


Direct Drive arm diagram (Image courtesy of Computer History Museum)

The first direct drive (DD) arm by Takeo Kanade serves as the prototype for DD arms used in industry today. The electric motors housed inside the joints eliminated the need for the chains or tendons used in earlier robots. DD arms were fast and accurate because they minimized friction and backlash.

 

1982: The FRED robot


The FRED robot (Image courtesy of Computer History Museum)

Nolan Bushnell founded Androbot with former Atari engineers to make playful robots. The “Friendly Robotic Educational Device” (FRED), designed for 6- to 15-year-olds, never made it to market.

 

1982: The IBM 7535


The IBM 7535 (Image courtesy of Computer History Museum)

Based on a Japanese robot, IBM’s 7535 was controlled by an IBM PC and programmed in IBM’s AML (“A Manufacturing Language”). It could manipulate objects weighing up to 13 pounds.

 

1984: Hero Jr. robot kit


Heathkit Hero Jr. robot (Image courtesy of Computer History Museum)

Heathkit introduces the Hero Jr. home robot kit, one of several robots it sells at the time. Hero Jr. could roam hallways guided by sonar, play games, sing songs and even act as an alarm clock. The brochure claimed it “seeks to remain near human companions” by listening for voices.

 

1985: Denning Sentry robot


Denning Sentry robot (Image courtesy of Computer History Museum)

Boston-based Denning designed the Sentry robot as a security guard patrolling for up to 14 hours at 3 mph. It radioed an alert about anything unusual in a 150-foot radius. The product, and the company, did not succeed.

 

1985: Omnibot 2000


Omnibot 2000 (Image courtesy of Computer History Museum)

The Omnibot 2000 remote-controlled programmable robot toy could move, talk and carry objects. The cassette player in its chest recorded actions to be taken and speech to be played.

 

1986: LMI Lambda


LMI Lambda (Image courtesy of Computer History Museum)

The LMI Lambda LISP workstation is introduced. LISP, the preferred language for AI, ran slowly on expensive conventional computers. This specialized LISP computer, both faster and cheaper, was based on the CADR machine designed at MIT by Richard Greenblatt and Thomas Knight.

 

1987: Mitsubishi Movemaster RM-501 Gripper is introduced


Movemaster RM-501 Gripper (Image courtesy of Computer History Museum)

The Mitsubishi Movemaster RM-501 Gripper is introduced. This robot gripper and arm was a small, commercially available industrial robot. It was used for tasks such as assembling products or handling chemicals. The arm, including the gripper, had six degrees of freedom and was driven by electric motors connected to the joints by belts. The arm could move fifteen inches per second, could lift 2.7 pounds, and was accurate to within 0.02 of an inch.

 

1989: Computer defeats master chess player


Deep Thought I circuit board (Image courtesy of Computer History Museum)

David Levy is the first master chess player to be defeated by a computer. The program Deep Thought defeats Levy, who had beaten every previous computer challenger since 1968.

 

1992: Japan’s Fifth Generation Computer Systems project abandoned


Feigenbaum and McCorduck’s The Fifth Generation (Image courtesy of Computer History Museum)

After spending hundreds of millions of dollars in research and development, Japan’s Ministry of International Trade and Industry (MITI) abandons its Fifth Generation Computer Systems project. The project was intended to build a platform from which artificial intelligence systems could grow, and ultimately to build machines with reasoning capabilities rather than ones that simply performed calculations. The announcement of the Fifth Generation project had, in part, spurred the American computer industry to respond: a group of companies formed the Microelectronics and Computer Technology Corporation.

 

1995: The MQ-1 Predator drone called to duty


MQ-1 Predator drone (Image courtesy of Computer History Museum)

The MQ-1 Predator drone is introduced and put into action by the United States Air Force and the Central Intelligence Agency. It was widely used in Afghanistan and the Pakistani tribal areas against Al-Qaeda forces and Taliban militants starting after September 11, 2001. The unmanned aerial vehicles were equipped with cameras for reconnaissance and could be upgraded to carry two missiles.

 

1997: Deep Blue defeats Garry Kasparov


First meeting between Garry Kasparov and Deep Blue (Image courtesy of Computer History Museum)

With the ability to evaluate 200 million positions per second, IBM’s Deep Blue chess computer defeats the reigning world chess champion, Garry Kasparov, on May 11. Of the six games played, Deep Blue won two, Kasparov won one, and the other three ended in draws. The games took place over several days in a television studio, with a sold-out audience of 600 watching each game on television screens in a theater several floors below. The match was considered a rematch, as Kasparov had defeated an earlier version of Deep Blue in 1996.

 

1998: Furby ignites buying frenzy


Furby toy robot (Image courtesy of Computer History Museum)

The Furby ignites a 1998 holiday season buying frenzy, with resale prices reaching $300. Each Furby initially spoke only “Furbish” but could gradually learn English commands. It communicated with other nearby Furbies using an infrared port between its eyes.

 

1999: The AIBO robotic pet dog


Sony’s AIBO robot pet (Image courtesy of Computer History Museum)

The Sony AIBO, the $2,000 “Artificial Intelligence RoBOt” was a robotic pet dog designed to “learn” by interacting with its environment, its owners and other AIBOs. It responded to more than 100 voice commands and talked back in a tonal language. It was even programmed to occasionally ignore commands like its biological four-legged counterparts.

 

2000: Honda’s Advanced Step in Innovative Mobility (ASIMO) humanoid robot


Honda’s Advanced Step in Innovative Mobility (ASIMO) humanoid robot (Image courtesy of Computer History Museum)

Honda’s Advanced Step in Innovative Mobility (ASIMO) humanoid robot is introduced. It could walk 1 mph, climb stairs and change its direction after detecting hazards. Using the camera mounted in its head, ASIMO could also recognize faces, gestures and the movements of multiple objects. Additionally, ASIMO had microphones that allowed it to react to voice commands. About 100 were built.

 

2002: DARPA’s Centibots project


Army of Centibots (Image courtesy of Computer History Museum)

The Centibots project, funded by the Defense Advanced Research Projects Agency (DARPA), sought to prove that up to 100 robots could survey a potentially dangerous area, build a map in real time, and seek items of interest. Centibots communicated with each other to coordinate their effort. If one robot failed, another took over its task. The robots were completely autonomous, requiring no human supervision.

 

2002: The Roomba is introduced


Cleaning path of an iRobot Roomba autonomous vacuum cleaner (Image courtesy of Computer History Museum)

iRobot’s Roomba is introduced. Using a cleaning algorithm, the autonomous robotic vacuum cleaner could clean a room while detecting and avoiding obstacles. Rodney Brooks, co-founder of iRobot, previously performed research at MIT’s Mobile Robotics Lab. The research focused on using insect-like reflex behavior instead of a central “brain” to create purposeful behavior.

 

2003: CSAIL at MIT is formed


CSAIL logo (Image courtesy of Computer History Museum)

The Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT is formed with the merger of the Laboratory for Computer Science and the Artificial Intelligence Laboratory. The AI lab was founded in 1959 by John McCarthy and Marvin Minsky and the Laboratory for Computer Science was opened in 1963 as Project MAC.

 

2004: Opportunity and Spirit Mars Rovers land on Mars


Opportunity self-portrait on Mars (Image courtesy of Computer History Museum)

NASA’s Jet Propulsion Laboratory, managed by Caltech, designs both the Opportunity and Spirit Mars rovers. Both landed in 2004 and ran 20 times longer than their planned lifetime of 90 days. While Spirit ceased to move in 2009 and communications from the rover stopped in 2010, Opportunity far exceeded its expected lifetime.

 

2005: Stanford’s autonomous vehicle wins 2005 DARPA “Grand Challenge”


Stanford’s Stanley self-driving vehicle (Images courtesy of Computer History Museum)

Stanford Racing Team’s autonomous vehicle “Stanley” wins the 2005 DARPA “Grand Challenge” held near Las Vegas. Driving autonomously on a 132-mile off-road desert course, the Volkswagen Touareg R5 finished the challenge in less than 7 hours with no human intervention – well before the 10-hour time limit. For winning the challenge, the Stanford Racing Team took home $2 million. The DARPA challenges, first introduced in 2004, are intended to spur interest and generate innovation in the area of self-driving cars.

 

2006: Fiftieth anniversary of seminal artificial intelligence conference


(L to R) Trenchard More, John McCarthy, Marvin Minsky, Oliver Selfridge, and Ray Solomonoff at AI@50 (Image courtesy of Computer History Museum)

AI@50, the fiftieth anniversary celebration of the Dartmouth Summer Research Project on Artificial Intelligence, is held on the Dartmouth College campus. Five attendees of the original 1956 conference were present at the anniversary – John McCarthy, Marvin Minsky, Trenchard More, Oliver Selfridge and Ray Solomonoff. The coining of the term “Artificial Intelligence” was credited to the proposal for the original conference, which is viewed as the founding event of AI.

 

2007: Checkers is Solved


Checkers Championship where Chinook checkers program faced human competitors at The Computer Museum, Boston (Image courtesy of Computer History Museum)

An article is published titled Checkers is Solved in a September issue of the journal Science. The article stated, “Perfect play by both sides leads to a draw.” The team that conducted the research was led by Professor Jonathan Schaeffer at the University of Alberta who had been working to solve the checkers problem since 1989. In the course of their work the team created a checkers program called “CHINOOK”, which played successfully in several man-machine competitions, including one held at The Computer Museum in Boston in 1994.

 

2010: IBM’s Watson defeats Jeopardy! contestants


(L to R) Alex Trebek, Ken Jennings, IBM’s Watson, and Brad Rutter on Jeopardy! (Image courtesy of Computer History Museum)

In 2010, IBM’s Watson spars against former Jeopardy! Tournament of Champions contestants and finishes with a 71% winning percentage. This was preparation for a 2011 matchup in which Watson defeated two of the all-time best Jeopardy! players, Ken Jennings and Brad Rutter. In the televised exhibition match, Watson won handily by analyzing natural language questions and content faster and more accurately than its human counterparts.

 

2011: Siri is Announced


Siri interface (Image courtesy of Computer History Museum)

Siri is introduced as a built-in feature of the Apple iPhone 4S smartphone in October. A voice-activated personal assistant, Siri could “understand” natural language requests and adjust the information it retrieved from the web by learning user tendencies and preferences. Siri could perform a wide range of functions – recommending local restaurants (using the web and the iPhone’s built-in GPS), providing walking or driving directions, giving weather forecasts, showing current sports scores, and even answering seemingly meaningless questions like, “Who is your favorite NCAA college football team?” Although the program’s “voice” was female by default, it could be changed to a male voice.

 

2015: Gates Joins Musk, Hawking in Expressing Fear of AI


Bill Gates (Image courtesy of Computer History Museum)

Microsoft co-founder Bill Gates joins a number of prominent tech gurus and scientists in revealing his thoughts on the potentially dangerous effects and unintended consequences of artificial intelligence on human civilization. Previously, Elon Musk, Stephen Hawking, and others had expressed similar sentiments. Those on the other side of the debate felt artificial intelligence would usher in an era of unprecedented human achievement, aided by the “minds” of humanity’s artificial brethren. While Gates and others felt that in the short-term intelligent machines would benefit mankind, they foresaw a future where more advanced super-intelligent machines could pose a grave threat to human existence.


***All information and pictures in this article relevant to AI & Robotics are provided courtesy of the Computer History Museum and permitted under their Terms of Use. The Computer History Museum (CHM) allows anyone to use or copy content from this site consistent with the defined fair use exceptions of United States copyright laws.***
