Wiki
We have compiled hundreds of related entries to help you understand "artificial intelligence"
Variational inference approximates a target distribution that is difficult to write down in closed form by starting from a known, tractable family of distributions and adjusting it until it fits the target.
A reference model is a model used as a benchmark for comparison. In the definition of the Organization for the Advancement of Structured Information Standards (OASIS), it is used to understand the important relationships between entities in some environment and to develop a general standard or specification framework to support that environment. Concept Summary: Reference models are used to provide information about an environment and to describe […]
The re-weighting method means that in each round of the training process, a new weight is assigned to each training sample according to the current sample distribution.
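As a rough sketch of one common re-weighting rule (an AdaBoost-style update, not taken from the entry itself; the labels, predictions, and weights below are invented for illustration):

```python
import numpy as np

# One hypothetical boosting-style re-weighting round:
# samples the current learner gets wrong receive larger weights.
y_true = np.array([1, 1, -1, -1, 1])        # toy labels
y_pred = np.array([1, -1, -1, 1, 1])        # toy predictions from a weak learner
w = np.full(len(y_true), 1 / len(y_true))   # start from a uniform distribution

err = np.sum(w[y_true != y_pred])           # weighted error of this round
alpha = 0.5 * np.log((1 - err) / err)       # learner weight (AdaBoost formula)
w = w * np.exp(-alpha * y_true * y_pred)    # up-weight mistakes, down-weight hits
w = w / w.sum()                             # renormalize to a distribution
print(w)
```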
Marginal distribution refers, in probability theory and statistics, to the probability distribution of only some of the variables in a multidimensional random variable. Definition Assume there is a probability distribution over two variables: $P(x, y)$. The marginal distribution of one of the variables is obtained by summing (or integrating) the joint distribution over the other variable: […]
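A minimal numeric sketch (the joint table below is made up): each marginal is obtained by summing the joint distribution over the other variable.

```python
import numpy as np

# Hypothetical joint distribution P(x, y) over 2 values of x and 3 values of y.
P_xy = np.array([[0.10, 0.25, 0.15],
                 [0.20, 0.05, 0.25]])

P_x = P_xy.sum(axis=1)   # marginal P(x): sum over y
P_y = P_xy.sum(axis=0)   # marginal P(y): sum over x
print(P_x, P_y)          # each marginal sums to 1
```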
Marginalization is a method of obtaining the distribution of one variable from a distribution that also involves another variable: the other variable is summed out over all of its possible values. This definition is fairly abstract, so the following case illustrates it. Suppose you need to know the impact of weather on the happiness index; you can use P(happiness | weather) to represent it, that is, given a weather category […]
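Continuing the weather/happiness example with invented numbers: summing P(happiness | weather) × P(weather) over the weather categories marginalizes the weather variable out and leaves P(happiness).

```python
# Hypothetical probabilities for the weather/happiness example above.
P_weather = {"sunny": 0.6, "rainy": 0.4}
P_happy_given_weather = {"sunny": 0.9, "rainy": 0.5}

# Marginalize out the weather: P(happy) = sum_w P(happy | w) * P(w)
P_happy = sum(P_happy_given_weather[w] * P_weather[w] for w in P_weather)
print(P_happy)   # 0.9*0.6 + 0.5*0.4 = 0.74
```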
Hierarchical clustering is a family of algorithms that form nested clusters either by merging from the bottom up or by splitting from the top down. The resulting hierarchy is represented by a "dendrogram", and the Agglomerative Clustering algorithm is one member of this family. Hierarchical clustering attempts to […]
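A short sketch using scikit-learn's AgglomerativeClustering on made-up points, illustrating the bottom-up merging the entry describes:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy 2-D points forming two loose groups (values invented for illustration).
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
              [5.0, 5.1], [5.2, 5.0], [5.1, 4.9]])

# Bottom-up (agglomerative) clustering: each point starts as its own cluster
# and the closest clusters are merged until two clusters remain.
model = AgglomerativeClustering(n_clusters=2, linkage="ward")
labels = model.fit_predict(X)
print(labels)   # e.g. [0 0 0 1 1 1] (cluster ids may be swapped)
```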
Game theory, also known as the theory of games or strategy theory, is both a relatively new branch of modern mathematics and an important discipline within operations research. It mainly studies the interaction between incentive structures, considers the predicted and actual behaviors of individuals in a game, and studies the corresponding optimal strategies. Game behavior refers to behavior that is competitive or adversarial in nature. In such behavior, […]
The Extreme Learning Machine (ELM) is a neural network model in the field of machine learning used to train single-hidden-layer feedforward neural networks. Unlike traditional feedforward networks (such as BP neural networks), which require many training parameters to be set manually, the extreme learning machine only requires the network structure to be specified, with no other parameters to tune; it is therefore simple and easy to use.
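A minimal sketch of the ELM idea under its standard formulation: the hidden-layer weights are drawn at random and never trained, and only the output weights are solved in closed form with a pseudoinverse (the data below is synthetic).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # synthetic inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # synthetic targets

n_hidden = 50
W = rng.normal(size=(3, n_hidden))   # random input weights, never trained
b = rng.normal(size=n_hidden)        # random hidden biases, never trained

H = np.tanh(X @ W + b)               # hidden-layer outputs
beta = np.linalg.pinv(H) @ y         # output weights via pseudoinverse

y_hat = H @ beta                     # predictions on the training data
print(np.mean((y - y_hat) ** 2))     # training mean squared error
```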
The error rate is the proportion of predictions that are wrong. The usual formula is: error rate = 1 − accuracy (%). It is generally used to measure how a trained model performs on a data set. Three numbers are important: Bayes Optimal Error: the ideal […]
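A one-line illustration with made-up predictions: the error rate is simply one minus the accuracy.

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1])   # toy labels
y_pred = np.array([1, 0, 0, 1, 1, 1])   # toy predictions

accuracy = np.mean(y_pred == y_true)
error_rate = 1 - accuracy
print(error_rate)   # 2 wrong out of 6 -> 0.333...
```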
Precision is a metric used in information retrieval and statistical classification. It is the proportion of the extracted (predicted positive) samples that are actually correct.
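A short sketch with invented predictions, using the usual formula precision = TP / (TP + FP), i.e. the fraction of predicted positives that are truly positive.

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1])
y_pred = np.array([1, 1, 1, 0, 0, 1, 1])

tp = np.sum((y_pred == 1) & (y_true == 1))   # predicted positive, actually positive
fp = np.sum((y_pred == 1) & (y_true == 0))   # predicted positive, actually negative
precision = tp / (tp + fp)
print(precision)   # 3 / (3 + 2) = 0.6
```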
Representation learning, also known as feature learning, is the use of machine learning techniques to automatically obtain a vectorized representation of each entity or relationship, so that useful information is easier to extract when building classifiers or other predictors.
Resampling refers to repeatedly drawing samples from the original data sample. It is a non-parametric method of statistical inference: it does not rely on an assumed parametric distribution to approximate the probability value p.
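A minimal bootstrap sketch on synthetic data: samples are repeatedly drawn with replacement to estimate the variability of a statistic without assuming any parametric distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=100)   # synthetic sample, no normality assumed

# Bootstrap: resample with replacement many times and recompute the statistic.
boot_means = [rng.choice(data, size=len(data), replace=True).mean()
              for _ in range(2000)]
low, high = np.percentile(boot_means, [2.5, 97.5])
print(low, high)   # nonparametric 95% interval for the mean
```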
Residual mapping is the correspondence on which residual networks are built. Its common form is $H(x) = F(x) + x$, where $F(x)$ is the residual function.
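A minimal sketch of a block implementing $H(x) = F(x) + x$, with a hypothetical two-layer $F$ written in NumPy; real residual networks typically use convolutions, but the additive skip connection is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
W1, W2 = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))

def F(x):
    """Hypothetical two-layer residual function F(x)."""
    return np.maximum(x @ W1, 0) @ W2        # ReLU(x W1) W2

def residual_block(x):
    """Residual mapping H(x) = F(x) + x: the block only has to learn F."""
    return F(x) + x

x = rng.normal(size=(1, dim))
print(residual_block(x).shape)               # (1, 8)
```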
Pooling, also called spatial pooling, is an operation used in convolutional neural networks to downsample feature maps while retaining their salient features.
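A small sketch of 2×2 max pooling over a made-up feature map, showing how pooling shrinks the map while keeping the strongest response in each window.

```python
import numpy as np

# Toy 4x4 feature map (values invented).
fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 1, 2],
                 [0, 2, 5, 1],
                 [1, 0, 3, 4]], dtype=float)

# 2x2 max pooling with stride 2: keep the largest value in each 2x2 window.
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)   # [[6. 2.]
                #  [2. 5.]]
```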
Computer vision is a science that studies how to make machines "see". Specifically, it refers to using cameras and computers in place of human eyes to identify, track, and measure targets, and further processing the images so that they are more suitable for human observation or for transmission to instruments for detection. Definition Computer vision is the use of computers and related […]
Computational linguistics is a discipline that uses mathematical models to analyze and process natural language and implements that analysis and processing in computer programs, thereby achieving the goal of using machines to simulate some or all of a person's language abilities. Basic content Computational linguistics can be divided into the following three categories according to the nature and complexity of its work: Automatic editing: […]
Eigendecomposition is a method of factoring a matrix into a product of matrices built from its eigenvalues and eigenvectors. Only diagonalizable matrices can be eigendecomposed. An eigenvalue can be regarded as the factor by which the length of its eigenvector is scaled under the linear transformation. If the eigenvalue is positive, it means that $v$ has, under the linear transformation, […]
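A short NumPy sketch on a made-up diagonalizable matrix: the matrix is factored into eigenvectors and eigenvalues, $A = V \operatorname{diag}(\lambda) V^{-1}$, and then reconstructed as a check.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # toy diagonalizable matrix

eigvals, V = np.linalg.eig(A)       # columns of V are eigenvectors
A_rebuilt = V @ np.diag(eigvals) @ np.linalg.inv(V)

print(eigvals)                      # scaling factors along each eigenvector
print(np.allclose(A, A_rebuilt))    # True: A = V diag(lambda) V^-1
```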
Definition of Backpropagation Backpropagation, short for "error backpropagation", is a common method used together with an optimization method to train artificial neural networks. It computes the gradient of the loss function with respect to all the weights in the network; this gradient is then fed to the optimization method, which updates the weights to minimize the loss function. […]
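A tiny sketch of the idea for a one-hidden-unit network with squared loss (all numbers invented): the chain rule carries the loss gradient back to each weight, and plain gradient descent plays the role of the optimization method.

```python
import numpy as np

# Toy network: y_hat = w2 * tanh(w1 * x), squared loss against target y.
x, y = 0.5, 1.0
w1, w2, lr = 0.8, -0.4, 0.1

for step in range(3):
    h = np.tanh(w1 * x)             # forward pass
    y_hat = w2 * h
    loss = 0.5 * (y_hat - y) ** 2

    # Backward pass: chain rule from the loss down to each weight.
    d_yhat = y_hat - y              # dL/dy_hat
    d_w2 = d_yhat * h               # dL/dw2
    d_h = d_yhat * w2               # dL/dh
    d_w1 = d_h * (1 - h ** 2) * x   # dL/dw1 (tanh' = 1 - tanh^2)

    w1 -= lr * d_w1                 # gradient descent update
    w2 -= lr * d_w2
    print(step, round(loss, 4))     # loss shrinks over the steps
```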
Backpropagation through time (BPTT) is the backpropagation algorithm applied to recurrent neural networks (RNNs). BPTT can be viewed as standard backpropagation applied to an RNN unrolled over time, where each time step represents a computational layer and the parameters are shared across these layers. Since an RNN uses the same parameters at all time steps […]
A base learner can generally be a logistic regression model, decision tree, SVM, neural network, Bayesian classifier, K-nearest neighbor model, etc. If the individual learners are all generated from the training data by the same learning algorithm, the ensemble is called a homogeneous ensemble, and the individual learners in this case are also called base learners; an ensemble can also contain different […]
Definition Assume that x is a continuous random variable whose distribution depends on the class state, expressed in the form p(x|ω). This is the "class-conditional probability" function, that is, the probability function of x when the class state is ω. Class-conditional probability function $P\left(X \mid \omega_{i}\right)$ […]
CART is a method for learning the conditional probability distribution of the output random variable Y given the input random variable X. Definition CART assumes that the decision tree is a binary tree: each internal node's feature takes the values "yes" and "no", the left branch is followed when the value is "yes", and the right branch is followed when the value is "no". This […]
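A brief scikit-learn sketch on synthetic data; scikit-learn's DecisionTreeClassifier uses an optimized version of CART, so every internal node is a binary yes/no threshold test as the entry describes.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                # synthetic features
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # synthetic binary labels

# CART builds a binary tree: each internal node is a yes/no threshold test.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))   # left branch = condition holds ("yes"), right = "no"
```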
Class imbalance is a property of binary classification problems in which the two class labels occur with very different frequencies. For example, a disease dataset in which 0.0001 of the samples have the positive class label and 0.9999 have the negative class label is a class imbalance problem; but in a […]
Closed form refers to an exact formula into which any value of the independent variables can be substituted to obtain the dependent variable, i.e. the solution to the problem. Such a solution is expressed using basic functions such as fractions, trigonometric functions, exponentials, logarithms, and even infinite series. The method used to find such a solution is also called the analytical method, which is a common calculus […]
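As an illustration, ordinary least squares has a well-known closed-form (analytical) solution, the normal equation $w = (X^{\top}X)^{-1} X^{\top} y$; the sketch below checks it on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))                    # synthetic design matrix
true_w = np.array([2.0, -1.0])
y = X @ true_w + 0.01 * rng.normal(size=50)     # nearly noiseless targets

# Closed-form (analytical) solution of least squares: the normal equation.
w = np.linalg.inv(X.T @ X) @ X.T @ y
print(w)   # close to [2, -1], obtained without any iterative optimization
```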