
Ultimate Glossary of AI & Ethics Terminology

Gain a clear understanding of AI and ethics terminology with our comprehensive glossary.
A

Adversarial Attacks

Techniques used to fool AI models by inputting intentionally misleading data, which can cause generative AI to produce incorrect or harmful outputs.
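
As a concrete illustration, here is a minimal sketch of the fast gradient sign method (FGSM), one widely studied adversarial attack; model, x, and y are hypothetical placeholders for a trained PyTorch classifier, an input batch, and its labels.

```python
# A minimal FGSM sketch: nudge the input in the direction that most
# increases the model's loss, producing an adversarial example.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x to raise the model's loss on label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()                               # gradient w.r.t. the input
    return (x + epsilon * x.grad.sign()).detach() # small, targeted perturbation
```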

Accountability

The obligation of AI developers and users to take responsibility for the outcomes of AI systems, ensuring mechanisms are in place to track decisions and their impacts.

Artificial Intelligence (AI)

The simulation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, and self-correction.

Algorithmic Bias

The tendency of AI systems to produce biased results due to prejudices present in the training data or design. In generative AI, this can lead to the creation of content that reflects harmful stereotypes or excludes minority perspectives.

B

Bias Amplification

The risk that generative AI models can not only reflect but also amplify existing biases in the data they were trained on, leading to even more pronounced stereotypes and unfair representations.

Backpropagation

A method used in artificial neural networks to calculate the gradient of the loss function and update the weights to minimize error.
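
A minimal sketch of the idea for a single sigmoid neuron with a squared-error loss; the input and target values here are made up purely for illustration.

```python
# Backpropagation by hand for one sigmoid neuron: apply the chain rule
# to get the loss gradient, then step the weights downhill.
import numpy as np

x, target = np.array([0.5, -1.2]), 1.0
w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(100):
    z = w @ x + b                        # forward pass: weighted sum
    y = 1.0 / (1.0 + np.exp(-z))         # sigmoid activation
    loss = (y - target) ** 2
    # backward pass: dL/dz = dL/dy * dy/dz
    grad_z = 2 * (y - target) * y * (1 - y)
    w -= lr * grad_z * x                 # dz/dw = x
    b -= lr * grad_z                     # dz/db = 1

print(round(float(loss), 4))             # loss shrinks toward zero
```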

C

Concept Drift

The risk that the statistical properties of the variable a model is trying to predict change over time, degrading the performance of generative AI systems.

Content Authenticity

The challenge of distinguishing human-generated from AI-generated content; when the two cannot be reliably told apart, misinformation and deepfakes spread more easily.

Consent

Ensuring that individuals have given permission for their data to be used in training AI models. For generative AI, this extends to whether publicly available data may be used to generate new content.

Convolutional Neural Network (CNN)

A type of neural network particularly effective in processing data that has a grid-like topology, such as images.
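
A minimal PyTorch sketch for 28x28 grayscale images; the layer sizes are illustrative assumptions, not a tuned architecture.

```python
# A tiny CNN: convolution learns local image features, pooling
# downsamples, and a linear layer maps features to 10 classes.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 input channel -> 16 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),                 # 10-class logits
)
logits = cnn(torch.randn(1, 1, 28, 28))          # one dummy image
print(logits.shape)                              # torch.Size([1, 10])
```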

Computer Vision

A field of AI that enables computers to interpret and make decisions based on visual data from the world.

D

Data Poisoning

A type of adversarial attack where malicious actors intentionally introduce corrupt data into the training set, compromising the integrity and reliability of generative AI models.

Deepfake

AI-generated synthetic media where a person in an existing image or video is replaced with someone else's likeness. Deepfakes raise significant ethical concerns around consent, misinformation, and privacy.

Data Privacy

Protecting personal data from unauthorized access and ensuring that data used to train AI models does not violate privacy rights. This is critical in LLMs, which are often trained on large datasets that may include sensitive information.

Differential Privacy

A system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset.
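
One standard construction is the Laplace mechanism, sketched below; for a counting query the sensitivity is 1, because adding or removing one individual changes the count by at most 1.

```python
# A minimal Laplace-mechanism sketch: release a noisy count so no
# single individual's presence can be confidently inferred.
import numpy as np

def private_count(values, epsilon=1.0):
    sensitivity = 1.0  # one person changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return len(values) + noise

print(private_count(["a"] * 100))  # close to 100, but deniably noisy
```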

Decision Tree

A decision support tool that uses a tree-like graph of decisions and their possible consequences.
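
A minimal scikit-learn sketch; the iris dataset stands in for any labeled tabular data.

```python
# Fit a shallow decision tree and follow its learned branches to a label.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict(X[:5]))  # class predictions for the first five rows
```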

Deep Learning

A subset of machine learning involving neural networks with many layers, capable of learning from large amounts of data.

E

Exposure Bias

The risk that generative models are trained on data that does not accurately represent all possible inputs they may encounter in real-world applications, leading to poor generalization.

Explainability

The degree to which the internal mechanics of an AI system can be explained in human terms. This is crucial for trust in generative AI and LLMs, where decisions and content generation need to be understandable by humans.

Explainable AI (XAI)

Techniques and methods in AI that make the outputs of machine learning models understandable to humans, providing insights into how decisions are made.

Embedding

A representation of words or phrases in vector space, often used in natural language processing to handle text data.
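
A minimal sketch with toy 3-dimensional vectors (real embeddings use hundreds of dimensions learned from text); cosine similarity measures how close two words sit in the vector space.

```python
# Toy word embeddings: related words get nearby vectors.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.7, 0.1]),
    "queen": np.array([0.9, 0.8, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(emb["king"], emb["queen"]))  # high: related words
print(cosine(emb["king"], emb["apple"]))  # low: unrelated words
```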

F

Fake News Generation

The risk that generative AI can be used to create highly convincing false information, contributing to the spread of misinformation and undermining public trust.

Few-Shot Learning

A machine learning approach where the model is trained to recognize patterns from a very small amount of data.
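
A minimal sketch of one simple few-shot approach, nearest-centroid classification, with two labeled examples ("shots") per class; the vectors are toy stand-ins for learned embeddings.

```python
# Few-shot classification by nearest centroid: average the few
# examples per class, then classify a query by distance.
import numpy as np

support = {                       # two "shots" per class
    "cat": np.array([[0.9, 0.1], [0.8, 0.2]]),
    "dog": np.array([[0.1, 0.9], [0.2, 0.8]]),
}
centroids = {k: v.mean(axis=0) for k, v in support.items()}

query = np.array([0.85, 0.15])
print(min(centroids, key=lambda k: np.linalg.norm(query - centroids[k])))  # "cat"
```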

Fairness

Ensuring that AI systems do not create or reinforce bias and discrimination. This involves evaluating the impact of generative AI and LLMs on different groups to prevent unfair treatment.

G

Guardrails in LLMs

Ethical frameworks or technical measures implemented to keep an LLM's behavior aligned with ethical guidelines, preventing harmful, biased, or inappropriate responses.

Generalization Failure

The risk that a generative AI model performs well on training data but fails to generalize to new, unseen data, leading to inaccurate or misleading content generation.

Governance

The framework of rules, practices, and processes by which AI is controlled and operated. Good governance ensures that ethical considerations are embedded in the development and deployment of AI technologies.

Generative AI

AI systems designed to generate new content, such as text, images, or music, by learning patterns from existing data.

Generative Adversarial Network (GAN)

A class of machine learning frameworks where two neural networks, a generator and a discriminator, contest with each other to create new, synthetic instances of data.
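
A minimal PyTorch sketch of one GAN training step, with illustrative network shapes; the "real" data here is a stand-in distribution.

```python
# One GAN step: the discriminator learns to separate real from fake,
# then the generator learns to fool the discriminator.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> score
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(16, 2) + 3.0                  # stand-in for real data
fake = G(torch.randn(16, 8))

# Discriminator step: score real as 1, fake as 0.
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake.detach()), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: make the discriminator score fakes as real.
g_loss = bce(D(fake), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```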

H

Hallucination

The phenomenon where generative AI models produce outputs that sound plausible but are factually incorrect or nonsensical, especially prevalent in LLMs.

Harmful Content

The potential of generative AI to produce content that is violent, explicit, or otherwise harmful, which raises concerns about the regulation and control of AI-generated outputs.

Hyperparameter

Configuration settings external to the model that cannot be estimated from the data, such as the learning rate or the number of epochs.
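
A minimal sketch: none of these values are learned from the data; they are fixed before training, typically chosen by experimentation or search.

```python
# Hyperparameters as external configuration, set before training begins.
hyperparams = {
    "learning_rate": 1e-3,   # step size for gradient updates
    "num_epochs": 20,        # full passes over the training set
    "batch_size": 64,        # examples per gradient step
    "hidden_units": 128,     # model capacity, fixed up front
}
```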

I

Intellectual Property Infringement

The risk that generative AI can produce content that violates copyrights, trademarks, or other intellectual property rights, leading to legal challenges.

Informed Consent

Ensuring that users understand how their data will be used and the implications of AI systems' actions. For LLMs, this includes the use of user data to improve models.

Interpretability

The extent to which a human can understand the cause of a decision made by an AI system.

Inference

The process of using a trained model to make predictions on new data.

L

Liability

Determining who is legally responsible for harm caused by AI systems. In the context of generative AI, this could involve the creators of the AI, the users, or the platforms that distribute the content.

Latent Space

A lower-dimensional representation of data in which generative models like GANs operate to create new data.

Large Language Model (LLM)

A type of AI model that is trained on vast amounts of text data to understand and generate human language. Examples include GPT-3 and BERT.

M

Misuse

The risk that AI technology, especially generative AI, can be used for malicious purposes such as generating fake news, deepfakes, or other types of misinformation.

Model Training

The process of teaching a machine learning model to make predictions or decisions based on data.

Machine Learning (ML)

A subset of AI focused on the development of algorithms that allow computers to learn from and make predictions based on data.

N

Neural Network

A series of algorithms that loosely mimic the operation of the human brain to recognize relationships in vast amounts of data.

Natural Language Processing (NLP)

A field of AI that focuses on the interaction between computers and human language, enabling computers to understand, interpret, and respond to human language.

O

Oversight

The process of monitoring AI systems to ensure they operate within ethical guidelines. This includes regular audits and evaluations of generative AI models to ensure they are not producing biased or harmful content.

Overfitting

When a model learns the training data too well, including its noise and outliers, leading to poor performance on new, unseen data. In generative AI, this can result in outputs that are too closely tied to the training data.
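
A minimal numpy sketch: a high-degree polynomial fits noisy training points almost perfectly while its error on held-out points grows.

```python
# Overfitting in miniature: compare training vs. test error as
# model capacity (polynomial degree) increases.
import numpy as np

rng = np.random.default_rng(0)
x_train, x_test = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)
y_train = np.sin(3 * x_train) + rng.normal(0, 0.1, 30)
y_test = np.sin(3 * x_test) + rng.normal(0, 0.1, 30)

for degree in (3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)        # fit polynomial
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_err, 4), round(test_err, 4))  # gap tends to widen
```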

P

Privacy Violations

The risk that AI models, especially those trained on large datasets containing personal information, can inadvertently reveal sensitive data or be used to infer private details about individuals.

Privacy

The right of individuals to control access to their personal information. Ensuring that generative AI models do not inadvertently reveal private information is a key ethical concern.

Privacy-Preserving AI

AI technologies and methodologies that ensure the privacy of individuals' data throughout the data processing lifecycle.

Perceptron

The simplest type of artificial neural network used for binary classifications, consisting of a single layer of weights.
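
A minimal sketch of the perceptron learning rule on the AND function, which is linearly separable and so learnable by a single layer of weights.

```python
# Perceptron learning rule: update weights only on mistakes.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                       # AND truth table
w, b = np.zeros(2), 0.0

for _ in range(10):                              # a few passes suffice
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)               # step activation
        w += (target - pred) * xi                # nudge toward the target
        b += (target - pred)

print([int(w @ xi + b > 0) for xi in X])         # [0, 0, 0, 1]
```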

R

Robustness Issues

The risk that AI systems are not resilient to changes in input data or to adversarial attacks, which can compromise the reliability and safety of generative AI applications.

Reputation Damage

The potential for AI-generated content to harm the reputation of individuals, companies, or institutions, particularly through the spread of deepfakes or misleading information.

Risk Assessment

Evaluating the potential risks associated with AI systems, including generative AI, to identify and mitigate ethical issues before they cause harm.

Robustness

The ability of an AI system to perform reliably under a variety of conditions and resist adversarial manipulation.

Responsible AI

The practice of developing and deploying AI with consideration for ethical principles, such as fairness, accountability, and transparency, to ensure positive social impact.

Recurrent Neural Network (RNN)

A type of neural network where connections between nodes form a directed graph along a sequence, allowing it to exhibit temporal dynamic behavior.
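
A minimal PyTorch sketch: the network processes a sequence one step at a time, carrying a hidden state between steps; the sizes are illustrative.

```python
# An RNN over a 10-step sequence; h_n is the final hidden state.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
seq = torch.randn(1, 10, 4)            # batch of 1, 10 time steps, 4 features
outputs, h_n = rnn(seq)                # outputs: hidden state at every step
print(outputs.shape, h_n.shape)        # (1, 10, 8), (1, 1, 8)
```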

Reinforcement Learning

A type of machine learning where an agent learns to make decisions by performing actions in an environment to maximize some notion of cumulative reward.
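
A minimal sketch on a 3-armed bandit, one of the simplest RL settings: the agent balances exploring random actions against exploiting its current best reward estimate.

```python
# Epsilon-greedy bandit: try actions, observe rewards, shift toward
# the action with the highest running estimate.
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.2, 0.5, 0.8]           # hidden reward of each action
q = np.zeros(3)                        # estimated value per action
counts = np.zeros(3)

for step in range(1000):
    a = rng.integers(3) if rng.random() < 0.1 else int(np.argmax(q))
    reward = rng.normal(true_means[a], 0.1)
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]   # incremental mean update

print(int(np.argmax(q)))               # usually 2, the best arm
```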

S

Scalability Challenges

Difficulties in ensuring that AI models maintain performance and reliability as they are scaled up to handle larger datasets and more complex tasks, especially relevant for LLMs.

Surveillance

The ethical concern regarding the use of AI for monitoring individuals, which can lead to privacy violations and a loss of autonomy.

Safety

Ensuring that AI systems do not cause physical or emotional harm to users. For generative AI, this includes preventing the generation of dangerous or misleading content.

Swarm Intelligence

The collective behavior of decentralized, self-organized systems, natural or artificial.

Supervised Learning

A type of machine learning where the model is trained on labeled data.

T

Transparency and Explainability

The challenge of making AI model decisions and content generation processes understandable to users and stakeholders, which is crucial for trust and accountability.

Transparency

The practice of making AI systems' processes and decisions clear and understandable. This is essential for building trust in generative AI and LLMs.

Transfer Learning

A machine learning method where a model developed for a task is reused as the starting point for a model on a second task.
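
A minimal torchvision sketch: freeze a backbone pretrained on ImageNet and replace only the final layer for a new, hypothetical 5-class task (the pretrained weights download on first run).

```python
# Transfer learning: reuse pretrained features, retrain only the head.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")
for p in backbone.parameters():
    p.requires_grad = False              # keep pretrained features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # new trainable head
```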

Tensor

A mathematical object that generalizes scalars, vectors, and matrices to higher dimensions, commonly used in machine learning to represent data.
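
A minimal numpy sketch of the progression from scalar to higher-rank tensors.

```python
# Tensors as n-dimensional arrays of increasing rank.
import numpy as np

scalar = np.array(3.0)                 # rank 0
vector = np.array([1.0, 2.0])          # rank 1
matrix = np.eye(2)                     # rank 2
batch = np.zeros((32, 3, 28, 28))      # rank 4: images as (N, C, H, W)
print(scalar.ndim, vector.ndim, matrix.ndim, batch.ndim)  # 0 1 2 4
```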

U

Underfitting

When a model is too simple to capture the underlying patterns in the data, resulting in poor performance on both training and test data.

Unintended Consequences

Outcomes that are not foreseen or intended by the developers of AI systems. In generative AI, this might include generating offensive or harmful content that was not anticipated during development.

Unsupervised Learning

A type of machine learning where the model is trained on unlabeled data to identify patterns and relationships in the data.

V

Vulnerability to Manipulation

The risk that generative AI systems can be manipulated by external inputs to produce specific, often harmful, outcomes, including spreading propaganda or biased narratives.

Value Alignment

Ensuring that AI systems operate in ways that are consistent with human values and ethical principles. This involves aligning the objectives of generative AI with societal norms and values.

Validation Set

A set of data used to provide an unbiased evaluation of a model fit on the training dataset while tuning model hyperparameters.
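
A minimal scikit-learn sketch: carve a validation set out of the training data to compare hyperparameter settings without touching the test set.

```python
# Choose a hyperparameter (tree depth) by validation score.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

for depth in (2, 5, 10):                 # candidate hyperparameter values
    model = DecisionTreeClassifier(max_depth=depth).fit(X_train, y_train)
    print(depth, model.score(X_val, y_val))  # pick depth by validation accuracy
```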

Z

Zero-Shot Learning

A method where a model performs tasks it was never explicitly trained on.
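
A minimal sketch using the Hugging Face transformers zero-shot classification pipeline: the candidate labels were never part of a labeled training set for this task (the default model downloads on first run).

```python
# Zero-shot classification: label text with categories supplied at
# inference time, not seen as labels during training.
from transformers import pipeline

clf = pipeline("zero-shot-classification")
print(clf("The court ruled on the data privacy case.",
          candidate_labels=["law", "sports", "cooking"]))
```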