Wiki
We have compiled hundreds of related entries to help you understand "artificial intelligence".
An oblique decision tree, also called a multivariate decision tree, is a decision tree in which each internal node tests a linear combination of multiple attributes rather than a single attribute.
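For instance, a node in such a tree might use a test of the form 0.8 * petal_length + 1.2 * petal_width ≤ 2.5, sending samples that satisfy the inequality to one child and the rest to the other (the attribute names and coefficients here are only illustrative, not taken from the original entry).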
Unordered attributes are attributes whose values cannot be arranged in a meaningful order, such as color or nationality.
The restricted isometry property (RIP) is a property of a matrix stating that it acts as a near-isometry on sparse vectors, i.e., it nearly preserves their length; it is used when analyzing problems such as sparse recovery and compressed sensing.
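As a sketch of the standard form of the condition (notation assumed here, not taken from the original entry): a matrix A satisfies the RIP of order s with constant δ_s if, for every s-sparse vector x, (1 − δ_s)‖x‖² ≤ ‖Ax‖² ≤ (1 + δ_s)‖x‖².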
Training examples refer to the labeled instances used during the training process.
The support vector expansion expresses the model's optimal solution as a combination of kernel functions evaluated at the training samples.
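In the common SVM formulation (notation assumed here), the learned function can be written as f(x) = Σ_i α_i y_i κ(x, x_i) + b, where the sum runs over the training samples, κ is the kernel function, and only the support vectors have non-zero coefficients α_i.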
Sparsity refers to the situation in which a large proportion of the elements of a vector or matrix are zero.
The state feature function, typically used in conditional random fields, is a feature function defined on the nodes that depends on the current position.
The true positive rate (TPR) is the ratio of the number of positive samples correctly predicted as positive to the total number of actual positive samples.
True positives (TP) refer to those samples that are correctly judged as the positive class in a binary classification problem.
True negatives (TN) refer to those samples that are correctly judged as negative in a binary classification problem.
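Putting these together (standard definitions, notation assumed here): TPR = TP / (TP + FN), where FN denotes false negatives, i.e., positive samples incorrectly judged as negative.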
Transductive learning is a learning setting in which the model predicts the labels of specific, given test samples by observing specific training samples, rather than learning a general rule for arbitrary unseen data.
Threshold shifting (also called threshold moving) refers to adjusting the classification threshold according to the actual situation; it is often used to address class imbalance.
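A minimal sketch of the idea, assuming a probabilistic classifier and a hypothetical threshold chosen for an imbalanced dataset (the value 0.2 below is only an illustrative choice):

# Threshold shifting: classify as positive when the predicted probability
# exceeds a threshold other than the default 0.5.
def classify(prob_positive, threshold=0.2):
    return 1 if prob_positive >= threshold else 0

print(classify(0.35))  # -> 1, even though 0.35 < 0.5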
A threshold logic unit (TLU) is a basic unit of a neural network: it computes a weighted sum of its inputs and outputs 1 if the sum reaches a threshold, and 0 otherwise.
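In symbols (standard form, notation assumed here): y = 1 if Σ_i w_i x_i ≥ θ and y = 0 otherwise, where the w_i are the connection weights and θ is the threshold.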
A threshold, also called a critical value, is the value that a condition must reach for an object to undergo a certain change. It is a common term in academic research.
The least squares method is a mathematical optimization technique that finds the function best matching the data by minimizing the sum of squared errors.
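For example, in the linear case (standard formulation, notation assumed here) one seeks the weights w that minimize Σ_i (y_i − wᵀx_i)², and when XᵀX is invertible the solution has the closed form w = (XᵀX)⁻¹Xᵀy.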
A tensor is a multilinear function that can be used to represent linear relationships between vectors, scalars, and other tensors.
The Wasserstein Generative Adversarial Network (WGAN) has several advantages: it resolves the instability of GAN training, so there is no need to carefully balance how much the generator and the discriminator are trained; it largely solves the mode collapse problem, ensuring the diversity of generated samples; and it provides a numerical quantity, analogous to cross-entropy or accuracy, that indicates how training is progressing.
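For reference, the WGAN objective (standard form, notation assumed here) replaces the original GAN loss with min_G max_D E_x[D(x)] − E_z[D(G(z))], where the critic D is constrained to be 1-Lipschitz, enforced in the original paper by weight clipping.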
The Viterbi algorithm is a dynamic programming algorithm, commonly used to find the most likely sequence of hidden states in a hidden Markov model.
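A minimal sketch of the algorithm for a hidden Markov model, assuming the probabilities are given as nested dictionaries (the variable names and the toy model below are illustrative only):

def viterbi(observations, states, start_p, trans_p, emit_p):
    # V[t][s] holds the probability of the most likely state path
    # ending in state s after the first t+1 observations.
    V = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    path = {s: [s] for s in states}
    for obs in observations[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Pick the previous state that maximizes the path probability.
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][obs], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    # Return the best final probability and its state sequence.
    best_prob, best_state = max((V[-1][s], s) for s in states)
    return best_prob, path[best_state]

# Toy example (all numbers are made up for illustration):
states = ["Rainy", "Sunny"]
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p))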
The VC dimension measures the capacity of a binary classifier: it is the size of the largest set of points that the classifier can shatter, i.e., label in every possible way. For example, linear classifiers in the plane have VC dimension 3.
A subspace, also called a linear subspace or vector subspace, is a subset of a vector space that is itself a vector space under the same operations.
The significance of sparse representation lies in dimensionality reduction, and this reduction is not limited to saving space: after sparse representation, the dimensions of the feature vector depend less on one another and become closer to independent.
The stability-plasticity dilemma is a constraint faced by both artificial and biological neural systems: a system must be plastic enough to learn new knowledge, yet stable enough not to forget what it has already learned.
Speech recognition is a technology that enables computers to recognize spoken language; its goal is to convert human speech into the corresponding text.
Simulated annealing is a general probabilistic algorithm that is often used to find a near-optimal solution in a large search space within a certain period of time.
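A minimal sketch of the procedure, assuming a one-dimensional objective function and a simple geometric cooling schedule (all function names and parameter values below are illustrative):

import math, random

def simulated_annealing(cost, neighbor, x0, temp=1.0, cooling=0.95, steps=1000):
    x, best = x0, x0
    for _ in range(steps):
        candidate = neighbor(x)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature decreases.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if cost(x) < cost(best):
                best = x
        temp *= cooling
    return best

# Toy usage: minimize (x - 3)^2 by random local moves.
result = simulated_annealing(cost=lambda x: (x - 3) ** 2,
                             neighbor=lambda x: x + random.uniform(-0.5, 0.5),
                             x0=0.0)
print(result)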