Wiki
We have compiled hundreds of related entries to help you understand "artificial intelligence"
Hard margin is the basis for selecting the separating hyperplane in a support vector machine. It refers to the case where classification is completely accurate and incurs no loss, that is, the loss value is 0; one only needs to find the hyperplane lying exactly midway between the two classes. The opposite of a hard margin is a soft margin. A soft margin allows a certain number of samples to be misclassified, and its optimization objective then consists of two parts, […]
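The zero-loss condition of a hard margin can be sketched with the hinge loss: if every training point lies on the correct side of the hyperplane with margin at least 1, the total hinge loss is 0. The points and the hyperplane below are hypothetical illustrative values.

```python
import numpy as np

# Hypothetical 2-D points: class +1 above the line x2 = 0, class -1 below.
X = np.array([[1.0, 2.0], [2.0, 3.0], [1.0, -2.0], [2.0, -3.0]])
y = np.array([1, 1, -1, -1])

# A candidate hyperplane w.x + b = 0 (here w = (0, 1), b = 0).
w = np.array([0.0, 1.0])
b = 0.0

# Hinge loss per sample: max(0, 1 - y(w.x + b)); zero for every sample
# means the data are separated with a hard margin.
margins = y * (X @ w + b)
hinge = np.maximum(0.0, 1.0 - margins)
print(hinge.sum())  # 0.0 -> hard margin holds for this hyperplane
```

A soft-margin objective would add the sum of these hinge terms, weighted by a penalty constant, to the margin-maximization term.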
Smoothing is a commonly used data processing method.
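One simple and widely used form of smoothing is the moving average, sketched below on a toy noisy series (the window size 3 is an illustrative choice).

```python
import numpy as np

# Moving-average smoothing: replace each point by the mean of a
# fixed-size window around it, damping short-term noise.
signal = np.array([1.0, 2.0, 9.0, 2.0, 1.0, 2.0, 3.0])
window = np.ones(3) / 3.0
smoothed = np.convolve(signal, window, mode="valid")
print(smoothed)  # each value is the mean of 3 consecutive samples
```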
The splitting variable is the reference variable selected when partitioning the feature space. It is the variable used to split the data in classification problems so as to achieve the best separation.
Support vector machine (SVM) is a supervised learning method for processing data in classification and regression analysis.
Soft margin maximization is an optimization method that uses a soft margin to find the optimal separating hyperplane.
Transfer learning is a method of using existing knowledge to learn new knowledge.
Artificial intelligence, also known as machine intelligence, refers to intelligence exhibited by machines created by humans. Usually, artificial intelligence refers to technology that reproduces human intelligence through ordinary computer programs. Research in artificial intelligence is currently divided into several sub-fields, and researchers hope that artificial intelligence systems will possess certain specific capabilities, […]
Oversampling refers to increasing the number of samples of a certain class (usually the minority class) in the training set in order to reduce class imbalance.
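A minimal sketch of random oversampling on a hypothetical imbalanced toy set: the minority class is resampled with replacement until both classes have the same count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced toy labels: 8 majority-class (0) vs 2 minority-class (1) samples.
y = np.array([0] * 8 + [1] * 2)
X = np.arange(10).reshape(-1, 1)

# Random oversampling: draw minority indices with replacement until
# the minority class matches the majority class in size.
minority = np.where(y == 1)[0]
extra = rng.choice(minority, size=8 - len(minority), replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])
print(np.bincount(y_bal))  # [8 8] -> classes are now balanced
```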
The average gradient is the mean of the grayscale rate of change across an image. It is used to indicate the clarity of an image, which arises from the pronounced grayscale differences near image boundaries or on both sides of a shadow line. It reflects the rate of change of contrast in the fine details of the image, that is, the rate of change of image density across directions, and represents the relative clarity of the image. The average gradient is the image […]
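A sketch of computing the average gradient of a grayscale image, assuming the common definition mean(sqrt((Gx² + Gy²) / 2)) over forward differences (formulations vary in the literature):

```python
import numpy as np

# Toy grayscale image with a horizontal intensity ramp.
img = np.array([[0.0, 10.0, 20.0],
                [0.0, 10.0, 20.0],
                [0.0, 10.0, 20.0]])

gx = np.diff(img, axis=1)[:-1, :]   # horizontal forward differences
gy = np.diff(img, axis=0)[:, :-1]   # vertical forward differences
avg_grad = np.mean(np.sqrt((gx**2 + gy**2) / 2.0))
print(avg_grad)  # higher values indicate a sharper image
```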
Latent semantic analysis is concerned mainly with the relationships behind words rather than with dictionary definitions. These relationships are grounded in the actual contexts in which words are used, which serve as the basic reference. The idea originated with psycholinguists, who believed that a common mechanism underlies the world's hundreds of languages and concluded that anyone in a specific language […]
The global minimum is the smallest value over all points; its relative concept is the local minimum. If the error function has only one local minimum, that local minimum is also the global minimum. If the error function has multiple local minima, there is no guarantee that a found solution is the global minimum. A common heuristic for approximating the global minimum is to find multiple local minima and take the smallest among them. […]
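The "find several local minima and keep the smallest" heuristic can be sketched as multi-start gradient descent on a hypothetical 1-D function with two local minima; the function, step size, and starting points are all illustrative choices.

```python
# f has local minima near x = -1 and x = +1; the one near -1 is global.
def f(x):
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3

def local_descent(x, lr=0.01, steps=2000):
    # Plain gradient descent: converges to the nearest local minimum.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

starts = [-2.0, -0.5, 0.5, 2.0]       # multiple restarts
minima = [local_descent(x0) for x0 in starts]
best = min(minima, key=f)              # keep the lowest local minimum found
print(best)  # near -1.0, the global minimizer of this f
```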
An activation function is a function that runs on a neuron in a neural network and is responsible for mapping the neuron's input to its output.
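Two common activation functions, sketched in numpy, showing how a neuron's input is mapped to its output:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real input into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Passes positive inputs through and clamps negatives to 0.
    return np.maximum(0.0, x)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))  # values in (0, 1); sigmoid(0) = 0.5
print(relu(z))     # [0. 0. 2.]
```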
Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the minimum or maximum of a function on a given set. It is often stated as a minimization problem, because maximizing a real-valued function is equivalent to minimizing its negation. The difference between global optimization and local optimization is that the former seeks the extremum over the entire given set, […]
A feedforward neural network is a relatively simple artificial neural network. Its signals propagate unidirectionally from the input layer to the output layer. Unlike a recurrent neural network, it contains no directed loops. Feedforward is also called forward propagation: from the perspective of signal flow, after the input signal enters the network, the signal flow is unidirectional […]
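One forward pass through a tiny feedforward network can be sketched as follows; the weights are arbitrary illustrative values, and the point is that the signal flows input → hidden → output with no cycle.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

x = np.array([1.0, 2.0])                    # input layer
W1 = np.array([[0.5, -1.0], [0.25, 1.0]])   # input -> hidden weights
W2 = np.array([0.2, 0.3])                   # hidden -> output weights

h = relu(W1 @ x)   # hidden layer activations
y = W2 @ h         # output; no connection ever feeds back
print(y)
```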
Sampling is a commonly used inferential statistical method. It refers to extracting a part of individuals from the target population (Population, or parent population) as a sample. By observing one or some attributes of the sample, an estimate with a certain reliability of the quantitative characteristics of the population is obtained based on the data obtained, thereby achieving an understanding of the population.
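Estimating a population mean from a random sample can be sketched with synthetic data (the population parameters and sample size below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "population" of 100,000 individuals.
population = rng.normal(loc=50.0, scale=10.0, size=100_000)

# Draw a simple random sample without replacement and use its mean
# as an estimate of the population mean.
sample = rng.choice(population, size=500, replace=False)
print(population.mean(), sample.mean())  # the sample mean approximates the population mean
```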
Sentiment analysis is a common task in natural language processing. Based on lexical analysis of text, it determines the specific feelings the text contains. Emotion analysis is similar to sentiment analysis, but covers more types of emotional information. The emotion lexicon officially released by the National Research Council of Canada contains the following 8 emotions: anger […]
The frequentist school believes that the world is deterministic and that there is an underlying reality with an unchanging truth value; its goal is to find that truth value or its range. The frequentist view is that there must be some deep generating mechanism behind random events: although the events themselves are random, the mechanism is deterministic. Distinct from the frequentist school is the Bayesian school. The former […]
The training error is the error measured during training: the average loss of the model on the training data.
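Computing a training error as the average loss over the training set can be sketched with squared-error loss and a toy linear model (the data and the model y = 2x are hypothetical):

```python
import numpy as np

# Toy training data and a fixed linear model y = 2x.
X = np.array([1.0, 2.0, 3.0])
y_true = np.array([2.1, 3.9, 6.2])
y_pred = 2.0 * X

# Training error = average squared-error loss over the training samples.
training_error = np.mean((y_true - y_pred) ** 2)
print(training_error)  # mean of (0.1^2, 0.1^2, 0.2^2) = 0.02
```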
Maximum expectation (expectation–maximization, EM) is an algorithm for finding maximum likelihood estimates and maximum a posteriori estimates of parameters in probabilistic models that depend on unobservable latent variables. The EM algorithm is often used for data clustering in machine learning and computer vision. It alternates between two steps. Expectation step (E): use the existing estimates of the hidden variables […]
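The E/M alternation can be sketched for a deliberately simplified case: a 1-D mixture of two Gaussians with known unit variance and equal weights, where only the two means are estimated (the data and initial guesses are synthetic).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data drawn from two Gaussians centered at -3 and +3.
data = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])

mu = np.array([-1.0, 1.0])  # initial guesses for the two means
for _ in range(50):
    # E-step: responsibility of each component for each point
    # (unnormalized Gaussian density with variance 1).
    d = np.exp(-0.5 * (data[:, None] - mu[None, :]) ** 2)
    r = d / d.sum(axis=1, keepdims=True)
    # M-step: re-estimate each mean as a responsibility-weighted average.
    mu = (r * data[:, None]).sum(axis=0) / r.sum(axis=0)

print(mu)  # converges close to the true means (-3, 3)
```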
Overfitting is a phenomenon in machine learning. It refers to the situation where a model learns attributes of the training samples that are irrelevant to the classification task. In this case the learned model (for example, a decision tree) is not the optimal model, and its generalization performance decreases.
Expected loss (expected risk) measures the ability to predict all possible samples and is a global concept; empirical risk is a local concept that only reflects how well the decision function predicts the samples in the training data set. Empirical risk versus expected risk: empirical risk is local, based on minimizing the loss function over all sample points in the training set; it is locally optimal and can actually be computed; […]
The Naive Bayes Classifier (NBC) is a conditional-probability classifier based on the naive Bayes method.
Naive Bayes is a classification algorithm based on probability theory that predicts the class of a sample solely from the probability of each category given its features. The algorithm is based on Bayes' theorem.
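A tiny naive Bayes sketch: each class is scored as P(class) · Π P(feature | class), assuming the features are conditionally independent given the class. The priors and conditional probabilities below are hypothetical illustrative values.

```python
# Two classes and two binary features with made-up probabilities.
prior = {"spam": 0.4, "ham": 0.6}
# P(feature_i = 1 | class), assumed conditionally independent given the class.
p_feat = {"spam": [0.8, 0.7], "ham": [0.1, 0.3]}

def score(cls, x):
    # Unnormalized posterior: prior times the product of per-feature likelihoods.
    p = prior[cls]
    for pf, xi in zip(p_feat[cls], x):
        p *= pf if xi else (1.0 - pf)
    return p

x = [1, 1]  # both features present
posterior = {c: score(c, x) for c in prior}
pred = max(posterior, key=posterior.get)
print(pred)  # "spam": 0.4*0.8*0.7 = 0.224 beats 0.6*0.1*0.3 = 0.018
```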
The paired t-test is a commonly used form of the t-test. It analyzes two groups of measurements taken from the same subjects under different conditions to evaluate whether the conditions have a significant effect. Different conditions can mean different storage environments, different measurement systems, and so on.
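The paired t-statistic is computed on the per-subject differences: t = mean(d) / (sd(d) / √n). A sketch with hypothetical before/after readings for 5 subjects:

```python
import math

# Hypothetical readings for the same 5 units under two conditions.
before = [10.2, 9.8, 10.5, 10.1, 9.9]
after = [10.8, 10.1, 11.0, 10.4, 10.3]

# The test operates on the per-unit differences.
d = [a - b for a, b in zip(after, before)]
n = len(d)
mean_d = sum(d) / n
sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))  # sample std
t_stat = mean_d / (sd_d / math.sqrt(n))
print(t_stat)  # compare against a t distribution with n - 1 degrees of freedom
```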