Defining Artificial Intelligence

The term “artificial intelligence” was coined in 1955 by computer and cognitive scientist John McCarthy, then at Dartmouth College and later a longtime Stanford professor, who defined it as “the science and engineering of making intelligent machines.”

AI computer systems are trained to learn, reason, and make decisions on their own. While a conventional algorithm follows step-by-step instructions written in advance for a specific task, an AI system can analyze data, recognize patterns, and improve its performance over time without being explicitly programmed for every scenario. In simple terms, the AI system is what makes a prediction or decision; the algorithm is the underlying logic by which that system operates.
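
To make the contrast concrete, here is a minimal sketch in Python. The rule-based function, the tiny “learning” routine, and the transaction amounts are all invented for illustration; real AI systems rely on far more sophisticated models, but the basic distinction between a hand-written rule and a rule derived from data is the same.

```python
# Contrast between a conventional algorithm and a simple "learned" rule.
# All data and thresholds here are invented for illustration only.

# 1) A conventional algorithm: the rule is written out explicitly in advance.
def flag_large_transaction(amount):
    """Flags any transaction over a fixed, hand-chosen limit."""
    return amount > 1000  # the programmer decides the rule

# 2) A minimal "learning" approach: the rule is derived from labeled examples.
def learn_threshold(examples):
    """Picks a cutoff that separates flagged from unflagged examples."""
    flagged = [amount for amount, is_flagged in examples if is_flagged]
    unflagged = [amount for amount, is_flagged in examples if not is_flagged]
    # Place the threshold midway between the two groups seen in the data.
    return (max(unflagged) + min(flagged)) / 2

# Hypothetical training data: (transaction amount, was it flagged?)
examples = [(120, False), (450, False), (900, False),
            (2300, True), (5100, True), (8800, True)]

threshold = learn_threshold(examples)
print(f"Learned threshold: {threshold:.0f}")  # 1600, derived from the examples
print(flag_large_transaction(3000))           # True (fixed, hand-written rule)
print(3000 > threshold)                       # True (rule inferred from data)
```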

Researchers have worked on AI for decades, but broad public awareness arrived in 2022 with the launch of generative AI systems such as OpenAI’s chatbot ChatGPT and the image generator Midjourney, created by the California start-up of the same name.

Generative AI systems can create content—including text, images, video, and computer code—by identifying patterns in large quantities of training data, and then creating original material that has similar characteristics. 
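
As a toy illustration of that idea, the short Python sketch below “trains” on a few invented sentences by recording which word tends to follow which, then assembles new text with similar patterns. This is not how modern generative systems such as ChatGPT work internally (they use large neural networks trained on vast datasets), but the underlying principle of learning patterns from training data and then producing new material is the same.

```python
import random
from collections import defaultdict

# Toy illustration of the generative idea: learn which words tend to follow
# which in the training text, then sample new text with similar patterns.
# The training sentences below are invented for this sketch.

training_text = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)

# Build a table of observed word-to-next-word transitions (the "patterns").
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start_word, length=8):
    """Produce new text by repeatedly sampling a plausible next word."""
    output = [start_word]
    for _ in range(length):
        followers = transitions.get(output[-1])
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

random.seed(0)
print(generate("the"))  # prints a new word sequence built from learned patterns
```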

The term “AI model” also comes up frequently in these discussions. An AI model is the trained component that actually produces the predictions or outputs, while an “AI system” encompasses the model together with the surrounding elements, such as the user interface, that put it to work. The terms have technical differences, but in discussions of AI policy and governance they are roughly interchangeable.
