Wiki
We have compiled hundreds of related entries to help you understand "artificial intelligence"
Similarity measurement is used to estimate the similarity between different samples and is often used as a criterion for classification problems.
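One widely used similarity measure is cosine similarity. The sketch below (function name is my own, not from this entry) computes it for two plain Python vectors:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = a.b / (|a| * |b|): 1 means same direction, 0 means orthogonal
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

A classifier can then assign a sample to whichever class prototype it is most similar to.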
The Sigmoid function is a common S-shaped function, also known as the S-shaped growth curve. Because it is smooth, strictly monotonic, and has a monotonic inverse, the Sigmoid function is often used as an activation (threshold) function in neural networks, mapping real-valued variables into the interval (0, 1).
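The function itself is short; a minimal sketch using the standard formula 1 / (1 + e^(-x)):

```python
import math

def sigmoid(x):
    # Map any real input into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))
```

Note that sigmoid(0) = 0.5, and the output approaches 0 and 1 as x goes to negative and positive infinity, respectively.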
Autonomous driving mainly refers to self-driving cars, also known as driverless cars, computer-driven cars, or wheeled mobile robots. They are a type of unmanned ground vehicle with the transportation capabilities of a traditional car.
A reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions equipped with a reproducing kernel. Via the "kernel trick", a set of data is implicitly mapped into a high-dimensional feature space; that feature space is a reproducing kernel Hilbert space.
Regularization is the process of introducing additional information to solve ill-posed problems or prevent overfitting.
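A common concrete example is L2 (ridge) regularization, which adds a penalty on large weights to a squared-error objective. The sketch below (function name and signature are my own, for illustration) shows the penalized loss for a linear model:

```python
def ridge_loss(weights, X, y, lam):
    # Mean squared error of a linear model plus an L2 penalty
    # lam * sum(w^2) that discourages large weights (overfitting).
    preds = [sum(w * x for w, x in zip(weights, row)) for row in X]
    mse = sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)
    return mse + lam * sum(w * w for w in weights)
```

With lam = 0 this reduces to ordinary least-squares loss; larger lam trades fit quality for smaller, more stable weights.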
The rectified linear unit (ReLU), also known as the linear rectification function, is a commonly used activation function in artificial neural networks, usually referring to nonlinear functions represented by ramp functions and their variants.
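The basic ramp function is ReLU(x) = max(0, x); a one-line sketch:

```python
def relu(x):
    # ReLU(x) = max(0, x): passes positive inputs through, zeroes out negatives
    return max(0.0, x)
```

Variants such as leaky ReLU replace the zero branch with a small negative slope.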
Recall, also known as the recall ratio or sensitivity, is the ratio of the number of relevant samples retrieved to the total number of relevant samples; it measures how completely a retrieval or classification system recovers the positive class.
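In binary classification terms, recall = TP / (TP + FN). A minimal sketch (function name is my own):

```python
def recall(y_true, y_pred, positive=1):
    # recall = true positives / all actual positives (TP + FN)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn)
```

For example, if 3 samples are truly positive and the model finds 2 of them, recall is 2/3 regardless of how many false positives it also produced.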
The quasi-Newton method is an optimization method based on Newton's method. It is mainly used to find the zeros of nonlinear equations or the extrema of continuous functions, while avoiding the explicit computation of the Hessian matrix.
Pseudo-labeling (PL) is the operation of training a model to add predicted labels to unlabeled data.
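A minimal sketch of the pseudo-labeling step, assuming a hypothetical `predict` function that returns a (label, confidence) pair; only confident predictions are kept as training labels:

```python
def pseudo_label(predict, unlabeled, threshold=0.9):
    # predict(x) -> (label, confidence); keep only predictions the model
    # is confident about and treat them as labels for retraining.
    labeled = []
    for x in unlabeled:
        label, conf = predict(x)
        if conf >= threshold:
            labeled.append((x, label))
    return labeled
```

The resulting pairs are then mixed with the genuinely labeled data and the model is retrained.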
Prior probability refers to the probability obtained based on past experience and analysis, usually statistical probability.
Principal component analysis (PCA) is a technique for analyzing and simplifying data sets. It uses the idea of dimensionality reduction to transform many variables into a smaller number of composite indicators. PCA is the simplest of the eigenvector-based methods for analyzing multivariate statistical distributions.
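The eigenvector-based procedure can be sketched in a few lines of NumPy (function name is my own): center the data, form the covariance matrix, and project onto the eigenvectors with the largest eigenvalues.

```python
import numpy as np

def pca(X, k):
    # Center the data, then project onto the top-k eigenvectors
    # of the covariance matrix (the principal components).
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors as columns
    return Xc @ top
```

For data lying exactly on a line, a single component captures all the variance and the remaining components project to (numerically) zero.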
Pre-pruning is a type of pruning strategy in which a branch is pruned during decision-tree construction, stopping a split early rather than after the full tree has been generated.
A positive definite matrix is a symmetric matrix with all eigenvalues greater than zero.
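This definition gives a direct numerical test: compute the eigenvalues of the symmetric matrix and check they are all positive. A sketch using NumPy (function name is my own):

```python
import numpy as np

def is_positive_definite(A):
    # A symmetric matrix is positive definite iff all its eigenvalues
    # are > 0; eigvalsh assumes symmetry and returns real eigenvalues.
    return bool(np.all(np.linalg.eigvalsh(A) > 0))
```

For example, the identity matrix (eigenvalues all 1) is positive definite, while [[1, 2], [2, 1]] (eigenvalues 3 and -1) is not.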
The positive class refers to the expected class in a binary classification problem. The corresponding class is called the negative class.
Relative majority voting, also called plurality voting, is the simplest voting method: the class that receives the most votes wins. In layman's terms, the minority obeys the majority.
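For an ensemble of learners, this amounts to counting label frequencies and taking the most common one; a sketch (function name is my own):

```python
from collections import Counter

def plurality_vote(predictions):
    # Return the label predicted by the largest number of base learners
    return Counter(predictions).most_common(1)[0][0]
```

Note that on an exact tie, `Counter.most_common` falls back to insertion order; a real ensemble might break ties by confidence instead.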
Performance metrics are evaluation criteria used to measure the generalization ability of a model.
An ordinal attribute is an attribute whose possible values have a meaningful order or ranking, but the difference between successive values is unknown. It has a sequence of precedence and size.
One-shot learning refers to a machine's ability to learn a new task or category from a single example or demonstration, and then perform it in different environments without prior knowledge of the new scenario.
Off-policy learning refers to the setting in reinforcement learning where the policy used to generate new samples differs from the policy whose parameters are being updated.
Noise contrastive estimation (NCE) is a statistical model estimation method proposed by Gutmann and Hyvärinen to address the expensive normalization computations in neural network models; it is widely used in image processing and natural language processing.
The no free lunch (NFL) theorem states that no learning algorithm can produce the most accurate learner in all domains: averaged over all possible problems, the expected performance of every algorithm is the same.
Newton's method, also known as the Newton-Raphson method, is a method for approximately solving equations over the real and complex numbers. It uses the first few terms of the Taylor series of a function f(x) to find the roots of the equation f(x) = 0.
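The iteration is x_{n+1} = x_n - f(x_n) / f'(x_n), repeated until the step size is negligible. A minimal sketch (names and tolerances are my own choices):

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    # Newton-Raphson iteration: x <- x - f(x)/f'(x) until the step is tiny
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```

For example, applying it to f(x) = x^2 - 2 from x0 = 1 converges quickly to sqrt(2).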
The negative class refers to the class opposite to the positive class in binary classification.
Natural language processing is an interdisciplinary subject involving artificial intelligence, linguistics, computer science and other disciplines. It explores the problem of letting computers process natural language.