Machine Learning Glossary: Explore definitions and explanations of key AI and ML concepts
Statistical Classification is a supervised learning method used to classify new observations into one of the known categories.
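As a minimal sketch of statistical classification, the toy nearest-centroid classifier below assigns a new observation to whichever known category's mean (centroid) is closest; the data points and labels are made up for illustration.

```python
# Nearest-centroid classification: assign a new observation to the
# known category whose centroid (mean of its labeled points) is nearest.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Two known categories with a few labeled observations each.
data = {"A": [[1.0, 1.0], [1.2, 0.8]], "B": [[5.0, 5.0], [4.8, 5.2]]}
cents = {label: centroid(pts) for label, pts in data.items()}

print(classify([1.1, 0.9], cents))  # → A
print(classify([4.9, 5.1], cents))  # → B
```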
Variational Autoencoder (VAE) is an artificial neural network architecture proposed by Diederik P. Kingma and Max Welling, belonging to the families of probabilistic graphical models and variational Bayesian methods.
Masked Language Modeling (MLM) is a deep learning technique widely used in natural language processing (NLP) tasks, especially in the training of Transformer models such as BERT and RoBERTa.
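The masking step itself can be sketched as follows; the tokens here are plain words and the 15% masking rate mirrors the common BERT setting, though real pipelines operate on subword IDs.

```python
# Masked language modeling, masking step: randomly replace a fraction
# of tokens with a [MASK] token; the originals become prediction targets.
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(mask_token)
            targets[i] = tok  # the model is trained to recover this token
        else:
            masked.append(tok)
    return masked, targets

sentence = "the quick brown fox jumps over the lazy dog".split()
masked, targets = mask_tokens(sentence)
```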
Knowledge Engineering is a branch of Artificial Intelligence (AI) that develops rules and applies them to data to mimic the thought processes of a person with expertise on a particular subject.
Inception Score (IS) is an objective performance metric used to evaluate the quality of generated or synthetic images produced by a generative adversarial network (GAN).
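The score is the exponentiated average KL divergence between each image's conditional label distribution p(y|x) and the marginal p(y). The toy sketch below uses made-up conditional distributions in place of real Inception v3 predictions.

```python
# Inception Score: exp of the mean KL divergence between each sample's
# conditional label distribution and the marginal label distribution.
import math

def inception_score(conditionals):
    n = len(conditionals)
    k = len(conditionals[0])
    marginal = [sum(p[j] for p in conditionals) / n for j in range(k)]
    kl_sum = 0.0
    for p in conditionals:
        kl_sum += sum(pj * math.log(pj / mj)
                      for pj, mj in zip(p, marginal) if pj > 0)
    return math.exp(kl_sum / n)

# Confident, diverse predictions score high; uniform ones score 1.
sharp = [[0.98, 0.01, 0.01], [0.01, 0.98, 0.01], [0.01, 0.01, 0.98]]
blurry = [[1/3, 1/3, 1/3]] * 3
print(inception_score(sharp) > inception_score(blurry))  # → True
```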
Fuzzy Logic is a variable processing method that allows a single variable to take multiple partial truth values at once. Rather than forcing crisp true/false decisions, fuzzy logic works with an open, imprecise spectrum of data and heuristic rules, from which accurate conclusions can still be drawn.
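A small sketch of this on a "temperature" variable: one value can belong to several fuzzy sets at once, each with a degree of truth in [0, 1]. The membership functions and thresholds below are purely illustrative.

```python
# Fuzzy membership: a temperature can be partly "warm" and partly "hot"
# at the same time; min/max serve as the classic fuzzy AND/OR operators.

def warm(t):   # triangular membership peaking at 25 °C
    return max(0.0, 1.0 - abs(t - 25.0) / 10.0)

def hot(t):    # ramps from 0 at 25 °C to 1 at 35 °C
    return min(1.0, max(0.0, (t - 25.0) / 10.0))

def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

t = 30.0
print(warm(t), hot(t))            # → 0.5 0.5  (both partly true)
print(fuzzy_or(warm(t), hot(t)))  # → 0.5
```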
Fréchet Inception Distance (FID) is a performance metric that measures the distance between the feature distributions of real images and images produced by a generator; lower FID scores indicate higher-quality generated images that are more similar to real ones. FID is computed from image feature vectors, typically taken from an Inception network.
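For Gaussian feature distributions the FID has a closed form; in the one-dimensional sketch below each "feature" is a single number, so the matrix trace term of the real formula collapses to scalar variances. Real FID uses means and covariances of Inception v3 feature vectors.

```python
# 1-D Fréchet distance between two Gaussians:
# FID = (mu1 - mu2)^2 + var1 + var2 - 2 * sqrt(var1 * var2)
import math

def fid_1d(mu1, var1, mu2, var2):
    return (mu1 - mu2) ** 2 + var1 + var2 - 2 * math.sqrt(var1 * var2)

print(fid_1d(0.0, 1.0, 0.0, 1.0))  # identical distributions → 0.0
print(fid_1d(0.0, 1.0, 2.0, 1.0))  # shifted mean → 4.0
```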
DALL-E is an AI program developed by OpenAI that generates images from text prompts. By combining language and visual processing, this approach opens up new possibilities in creative work, communication, education, and other fields. DALL-E was launched in January 2021 and is […]
LoRA (Low-Rank Adaptation) is an efficient fine-tuning technique that adapts large pre-trained models to custom tasks and datasets without straining resources or incurring prohibitively high costs.
Case-Based Reasoning (CBR) works by retrieving similar cases from the past and adapting them to the current situation to make a decision or solve a problem.
Adversarial Machine Learning is a machine learning method that aims to deceive machine learning models by providing deceptive inputs.
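One classic evasion attack is the fast gradient sign method (FGSM): nudge the input in the direction that most increases the model's loss. The sketch below applies it to a hand-set logistic-regression model; the weights and input are made up, and the gradient of cross-entropy with respect to the input is (p − y)·w in this model.

```python
# FGSM evasion attack on a fixed logistic-regression model: a small,
# targeted perturbation flips the model's prediction.
import math

w, b = [2.0, -1.0], 0.0            # hand-set "trained" weights

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))  # P(class = 1)

def fgsm(x, y, eps):
    p = predict(x)
    grad = [(p - y) * wi for wi in w]   # dLoss/dx for cross-entropy
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

x, y = [0.3, 0.1], 1               # correctly classified as class 1
x_adv = fgsm(x, y, eps=0.5)
print(predict(x) > 0.5, predict(x_adv) > 0.5)  # → True False
```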
Cognitive Search represents the next generation of enterprise search, using artificial intelligence (AI) techniques to refine users' search queries and extract relevant information from multiple disparate data sets.
Code Quality describes the overall assessment of the effectiveness, reliability, and maintainability of a piece of software code. The main qualities of code quality include readability, clarity, reliability, security, and modularity. These qualities make the code easy to understand, change, operate, and debug.
Cloud containers are a technology for deploying, running, and managing applications in cloud environments. They provide a lightweight, portable way to encapsulate applications and their dependencies in an isolated runtime environment.
Model quantization reduces the memory footprint and computational requirements of deep neural network models. Weight quantization is a common technique that converts a network's weights (and often its activations) from high-precision floating-point numbers to a lower-precision format, such as 16-bit or 8-bit integers.
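A minimal sketch of the idea, using symmetric per-tensor 8-bit quantization: map float weights to int8 through a single scale factor, then dequantize to see that the rounding error is bounded by that scale. The weight values are made up.

```python
# Symmetric int8 weight quantization: q = round(w / scale), with the
# scale chosen so the largest-magnitude weight maps to 127.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]   # integers in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.41, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print(max(abs(a - b) for a, b in zip(weights, restored)) < scale)  # → True
```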
Triplet loss is a loss function for deep learning that minimizes the distance between an anchor point and a positive sample with the same identity, while maximizing the distance between the anchor point and a negative sample with a different identity.
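The standard form with Euclidean distance is max(0, d(a, p) − d(a, n) + margin): the loss is zero once the negative is at least `margin` farther from the anchor than the positive. The embeddings below are made up.

```python
# Triplet loss: penalize triplets where the negative is not at least
# `margin` farther from the anchor than the positive.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

a, p = [0.0, 0.0], [0.1, 0.0]
print(triplet_loss(a, p, [3.0, 0.0]))  # negative far enough → 0.0
loss = triplet_loss(a, p, [0.5, 0.0])  # margin violated → positive loss
```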
Large Language Model Operations (LLMOps) is the practice, techniques, and tools for the operational management of large language models in production environments. LLMOps is specifically about using tools and methods to manage and automate the lifecycle of LLMs, from fine-tuning to maintenance.
Data gravity refers to the ability of a body of data to attract applications, services, and other data. As the quantity and quality of the data grow over time, it attracts more and more applications and services that connect to it.
Gradient Accumulation is a mechanism for dividing a batch of samples used to train a neural network into several small batches of samples that are run sequentially.
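The mechanism can be sketched on a one-parameter least-squares model: split the batch into micro-batches, sum their gradients instead of stepping after each one, and apply a single update. For a sum-of-losses objective the accumulated gradient equals the full-batch gradient.

```python
# Gradient accumulation: run micro-batches sequentially, accumulating
# gradients, then take one optimizer step with the summed gradient.

def grad(w, batch):
    # d/dw of sum((w*x - y)^2) over the batch
    return sum(2 * (w * x - y) * x for x, y in batch)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w, lr = 0.0, 0.01
micro = [data[:2], data[2:]]     # two micro-batches

g = 0.0
for mb in micro:                 # sequential micro-batch passes
    g += grad(w, mb)             # accumulate instead of stepping
w -= lr * g                      # single update with the summed gradient

print(g == grad(0.0, data))      # → True (matches full-batch gradient)
```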
Model validation is the process of evaluating the performance of a machine learning (ML) model on a dataset separate from the training dataset. It is an important step in the ML model development process because it helps ensure that the model generalizes to new, unseen data and does not overfit to the training data.
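As an illustrative sketch of hold-out validation, the snippet below fits a trivial threshold classifier on a training split and measures accuracy on a disjoint validation split; the data and the midpoint-threshold rule are made up.

```python
# Hold-out validation: fit on one split, evaluate on a separate one.

def fit_threshold(train):
    # threshold midway between the two class means
    m0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
    m1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
    return (m0 + m1) / 2

def accuracy(threshold, data):
    correct = sum((x > threshold) == (y == 1) for x, y in data)
    return correct / len(data)

data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.9, 1), (1.0, 1), (1.1, 1)]
train, valid = data[::2], data[1::2]   # disjoint splits
t = fit_threshold(train)
print(accuracy(t, valid))  # → 1.0 (generalizes to the held-out split)
```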
Pool-based sampling is a popular active learning method that selects informative examples for labeling. A pool of unlabeled data is created, and the model selects the most informative examples for manual annotation. These labeled examples are used to retrain the model, and the process is repeated.
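One common selection criterion is uncertainty sampling: score each pooled example by how close the model's predicted probability is to 0.5 and send the most uncertain ones for labeling. The probabilities below are made up stand-ins for a real model's outputs.

```python
# Pool-based active learning with uncertainty sampling: pick the
# examples the current model is least sure about.

def uncertainty(p):
    return 1.0 - abs(p - 0.5) * 2     # 1.0 at p=0.5, 0.0 at p=0 or 1

# Made-up model probabilities for a pool of unlabeled examples.
pool = {"x1": 0.95, "x2": 0.52, "x3": 0.10, "x4": 0.48, "x5": 0.70}

to_label = sorted(pool, key=lambda k: uncertainty(pool[k]), reverse=True)[:2]
print(sorted(to_label))  # → ['x2', 'x4']
```

After these examples are manually labeled, the model is retrained and the selection step repeats.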
Bot Framework refers to software used to create bots and define their behaviors.
Model parameters are variables that control the behavior of a machine learning (ML) model. They are learned from training data and determine the model's predictions on new, unseen inputs. Model parameters are an important part of ML models because they have a large impact on the model's accuracy and performance.
Noise is a term used to describe unwanted or irrelevant information in an image or video. It can be caused by a variety of factors, including sensor noise, compression artifacts, and environmental factors such as lighting conditions and reflections. Noise can severely degrade the quality and clarity of an image or video, and can make it more difficult to accurately analyze or interpret the image content.