Algorithm
A precise list of steps to take, such as a computer program. AI systems contain algorithms, but typically only for a few parts, such as the learning method or the reward calculation. Much of their behavior emerges through learning from data or experience, a fundamental shift in system design that Stanford alumnus Andrej Karpathy dubbed "Software 2.0."
Anthropomorphism
The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it is kind or cruel based on its answers, even though it is not capable of having emotions, or you may believe the AI is sentient because it is very good at mimicking human language.
Artificial Intelligence (AI)
Coined in 1955 by John McCarthy, Stanford’s first faculty member in AI, who defined it as “the science and engineering of making intelligent machines.” Much research has involved humans programming software agents with the knowledge to behave in a particular way, like playing chess, but today we emphasize agents that can learn, just as human beings do as they navigate our changing world.
Bias
A type of error that can occur in a large language model if its output is skewed by the model’s training data. For example, a model may associate specific traits or professions with a certain race or gender, leading to inaccurate predictions and offensive responses.
Disinformation
Deliberately false content, which can be used to spread propaganda and sow fear and suspicion.
Emergent Behavior
Unexpected or unintended abilities in a large language model, enabled by the model’s learning patterns and rules from its training data. For example, models that are trained on programming and coding sites can write new code. Other examples include creative abilities like composing poetry, music and fictional stories.
Generative AI
Technology that creates content — including text, images, video and computer code — by identifying patterns in large quantities of training data, and then creating original material that has similar characteristics. Examples include ChatGPT for text and DALL-E and Midjourney for images.
Hallucination
A well-known phenomenon in large language models, in which the system provides an answer that is factually incorrect, irrelevant or nonsensical, because of limitations in its training data and architecture.
Large Language Model (LLM)
A type of neural network that learns skills — including generating prose, conducting conversations and writing computer code — by analyzing vast amounts of text from across the internet. The basic function is to predict the next word in a sequence, but these models have surprised experts by learning new abilities.
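The "predict the next word" idea can be illustrated with a toy sketch. The mini-corpus below is hypothetical, and real LLMs use neural networks with billions of parameters rather than simple word counts, but the core task is the same:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction, the basic function of an LLM.
# The corpus is a made-up example; real models learn from vast internet text.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Scaling this simple counting idea up to neural networks trained on internet-scale text is what produces the surprising abilities described above.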
Misinformation
False or inaccurate information: getting the facts wrong, whether or not there is intent to deceive.
Model Training
The process of feeding prepared example data to a parametrized machine learning algorithm so that it learns parameter values that minimize an objective (loss) function, producing a trained model.
Natural Language Processing (NLP)
Techniques used by large language models to understand and generate human language, including text classification and sentiment analysis. These methods often use a combination of machine learning algorithms, statistical models and linguistic rules.
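Sentiment analysis, one of the NLP tasks mentioned above, can be sketched with a toy rule-based classifier (the word lists are hand-made for illustration; production systems combine machine learning, statistical models and linguistic rules):

```python
# Toy sentiment analysis: classify text as positive or negative by
# counting words from small hand-made lexicons. Purely illustrative.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text):
    """Classify `text` by comparing positive and negative word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great movie"))  # positive
```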
Neural Network
A mathematical system, modeled on the human brain, that learns skills by finding statistical patterns in data. It consists of layers of artificial neurons: The first layer receives the input data, and the last layer outputs the results. Even the experts who create neural networks don’t always understand what happens in between.
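The layered structure can be illustrated with a tiny hand-built network (the weights are hand-picked for this example rather than learned, and real networks use millions of neurons, but the input-to-hidden-to-output flow is the same):

```python
def step(x):
    """A simple activation function: fire (1) if the input is positive."""
    return 1.0 if x > 0 else 0.0

def layer(inputs, weights, biases, activation):
    """One layer of artificial neurons: weighted sum, then activation."""
    return [activation(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(x1, x2):
    """A two-layer network computing XOR, with hand-picked weights."""
    # Hidden layer: one neuron detects "either input on", the other "not both".
    hidden = layer([x1, x2], [[1, 1], [-1, -1]], [-0.5, 1.5], step)
    # Output layer: fires only when both hidden neurons fire.
    out = layer(hidden, [[1, 1]], [-1.5], step)
    return out[0]

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0.0, 1.0, 1.0, 0.0] — the XOR pattern
```

In a real network the weights in the middle layers are found by training, not by hand, which is why even experts can struggle to explain what those layers are doing.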
Parameters
Numerical values that define a large language model’s structure and behavior, like clues that help it guess what words come next. Systems like GPT-4 are thought to have hundreds of billions of parameters.
Reinforcement Learning
A technique that teaches an AI model to find the best result by trial and error, receiving rewards or penalties from an algorithm based on its results. This system can be enhanced by humans giving feedback on its performance, in the form of ratings, corrections and suggestions.
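The trial-and-error loop can be sketched with a minimal two-action example. The reward scheme is hypothetical (action 1 always pays off, action 0 never does), and real reinforcement learning handles far richer environments, but the explore-then-exploit pattern is the core idea:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# A minimal reinforcement learning sketch: the agent tries two actions,
# observes rewards, and learns by trial and error which action is best.
values = [0.0, 0.0]   # the agent's estimated value of each action
counts = [0, 0]

for _ in range(1000):
    # Explore a random action 10% of the time; otherwise exploit the
    # action currently believed best (an "epsilon-greedy" strategy).
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])
    # Hypothetical reward: action 1 pays 1, action 0 pays nothing.
    reward = 1.0 if action == 1 else 0.0
    counts[action] += 1
    # Update the running average reward for the chosen action.
    values[action] += (reward - values[action]) / counts[action]

print(values)  # the estimate for action 1 ends up higher than for action 0
```

Human feedback enters the picture when people, rather than a fixed formula, supply the reward signal.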
Training Data
The initial dataset containing the examples used to teach a machine learning application to recognize patterns or perform some function.
Transformer Model
A neural network architecture useful for understanding language that does not have to analyze words one at a time but can look at an entire sentence at once. This was an AI breakthrough, because it enabled models to capture context and long-range dependencies in language. Transformers use a technique called self-attention, which allows the model to focus on the particular words that are important to the meaning of a sentence.
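The self-attention idea can be sketched in a few lines. The 2-D word vectors below are hypothetical and hand-picked; real transformers use high-dimensional learned embeddings and separate query, key and value projections, but the pattern of weighting every word by similarity is the same:

```python
import math

# Minimal self-attention sketch: each word's vector looks at every other
# word's vector at once, weighting them by similarity (dot product).
# These toy 2-D embeddings are hypothetical, chosen for illustration.
embeddings = {
    "the": [0.1, 0.0],
    "cat": [1.0, 0.2],
    "slept": [0.9, 0.4],
}
sentence = ["the", "cat", "slept"]

def softmax(scores):
    """Turn raw similarity scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(words):
    """For each word, mix all word vectors, weighted by similarity."""
    vecs = [embeddings[w] for w in words]
    outputs = []
    for q in vecs:
        # Score this word against every word in the sentence at once.
        scores = [sum(a * b for a, b in zip(q, k)) for k in vecs]
        weights = softmax(scores)
        # The output is the attention-weighted average of all vectors.
        outputs.append([sum(w * v[i] for w, v in zip(weights, vecs))
                        for i in range(2)])
    return outputs

out = self_attention(sentence)
```

Because every word is scored against the whole sentence simultaneously, the model never has to march through the text one word at a time.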
Sources
Pasick, Adam. “Artificial Intelligence Glossary: Neural Networks and Other Terms Explained. The concepts and jargon you need to understand ChatGPT.” New York Times, 27 Mar 2023, https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html.
Spector, Rosanne. “A Brief Glossary of Artificial Intelligence Terms.” Stanford Medicine Magazine, 11 Nov. 2023, https://stanmed.stanford.edu/brief-glossary-artificial-intelligence-ai/.
“Brief Definitions of Key Terms in AI.” Stanford HAI, 1 Apr. 2022, https://hai.stanford.edu/node/9901.
Khan, Imad. “ChatGPT Glossary: 41 AI Terms That Everyone Should Know.” CNET, 2 Sept. 2023, https://www.cnet.com/tech/computing/chatgpt-glossary-41-ai-terms-that-everyone-should-know.
Elliott, Larry. “AI-driven Misinformation ‘Biggest Short-term Threat to Global Economy.’” The Guardian, 11 Jan. 2024, https://www.theguardian.com/business/2024/jan/10/ai-driven-misinformation-biggest-short-term-threat-to-global-economy.
ICONS
- Attention
- Breaking News
- Coalition
- Funding
- Information
- Launch
- Lawsuits
- Legislation
- Making Progress
- Research
- Take Action
- Technical Discussion