Wiki
We have compiled hundreds of related entries to help you understand "artificial intelligence".
Inductive Logic Programming (ILP) is a symbolic rule learning method that introduces nesting of functions and logic expressions into first-order rule learning and uses first-order logic as its representation language. This gives ILP-based machine learning systems much stronger expressive power. At the same time, ILP can be seen as an application of machine learning, mainly used to solve […]
The kernel trick is a method of using the kernel function to compute the inner product ⟨φ(x), φ(z)⟩ directly, avoiding the separate computation of φ(x) and φ(z) and thereby speeding up kernel method calculations […]
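As a small illustration of the entry above (the degree-2 polynomial kernel and the 2-dimensional inputs are chosen just for this sketch), the kernel value can be compared with the inner product of an explicitly constructed feature map — the two agree, but the kernel never materializes φ:

```python
import math

def phi(v):
    # Explicit feature map for the degree-2 polynomial kernel (x.z + 1)^2 on R^2
    x1, x2 = v
    r2 = math.sqrt(2)
    return [1.0, r2 * x1, r2 * x2, x1 * x1, x2 * x2, r2 * x1 * x2]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def kernel(x, z):
    # Kernel trick: evaluate <phi(x), phi(z)> without ever forming phi
    return (dot(x, z) + 1.0) ** 2

x, z = [1.0, 2.0], [3.0, 4.0]
print(kernel(x, z))          # 144.0
print(dot(phi(x), phi(z)))   # 144.0, the same value computed the long way
```

For higher polynomial degrees or higher-dimensional inputs the explicit feature space grows combinatorially, which is exactly why evaluating the kernel directly is worthwhile.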
A recursive neural network is a representation learning method that maps words, sentences, paragraphs, and articles into the same vector space according to their semantics; that is, it represents compositional (tree- or graph-structured) information as meaningful vectors.
Negative correlation means that two variables change in opposite directions: as one variable increases, the other tends to decrease.
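As a minimal sketch of the entry above (the data values are made up for illustration), the Pearson correlation coefficient makes "opposite directions" concrete — a perfectly negative relationship gives a coefficient of -1:

```python
def pearson(xs, ys):
    # Sample Pearson correlation coefficient between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

x = [1, 2, 3, 4]
y = [8, 6, 4, 2]        # y falls as x rises
print(pearson(x, y))    # -1.0
```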
A univariate decision tree is a decision tree in which each split tests a single variable. That is, each node split selects only one feature from the feature set, which means the classification boundary of the tree consists of segments parallel to the coordinate axes.
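A single univariate split can be sketched as follows (the points and threshold are illustrative); because the test involves only one feature compared against a threshold, the resulting boundary is a line parallel to a coordinate axis:

```python
def split(points, feature, threshold):
    # A univariate split tests one feature against a threshold,
    # producing an axis-parallel decision boundary
    left = [p for p in points if p[feature] <= threshold]
    right = [p for p in points if p[feature] > threshold]
    return left, right

pts = [(1.0, 5.0), (2.0, 1.0), (4.0, 3.0)]
left, right = split(pts, feature=0, threshold=2.5)
# left  = [(1.0, 5.0), (2.0, 1.0)]  (x <= 2.5)
# right = [(4.0, 3.0)]              (x > 2.5)
```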
Negative log-likelihood is a loss function commonly used for classification problems. It is the negated natural logarithm of the likelihood function and can be used to measure the discrepancy between two probability distributions. The negative sign makes the maximum of the likelihood correspond to the minimum of the loss. It is a common function form in maximum likelihood estimation and related fields. In machine learning, it is customary to use optimization […]
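The entry above can be sketched in a few lines (the 3-class probability vector is made up for illustration): the loss is just the negated log of the probability the model assigned to the true class, so a confident correct prediction gives a small loss and an unconfident one a large loss:

```python
import math

def nll(probs, label):
    # Negative log-likelihood of the true class under the predicted distribution
    return -math.log(probs[label])

p = [0.1, 0.7, 0.2]     # hypothetical predicted class probabilities
print(nll(p, 1))        # -ln(0.7), about 0.357: confident and correct, small loss
print(nll(p, 0))        # -ln(0.1), about 2.303: wrong class favored, large loss
```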
Non-convex optimization is a family of methods in machine learning and signal processing. It refers to solving a non-convex problem directly, or optimizing the non-convex formulation directly, without first applying a convex relaxation.
A nonlinear model is a mathematical model in which the relationship between the independent and dependent variables is nonlinear. Unlike a linear model, the dependency cannot be expressed as a linear correspondence in the coordinate space.
A non-metric distance is a distance measure between objects that does not satisfy the triangle inequality.
Non-negative matrix factorization (NMF) is a matrix decomposition method in which all elements of the factor matrices satisfy a non-negativity constraint.
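A minimal sketch of NMF, using the classic Lee–Seung multiplicative update rules (the tiny rank-1 matrix and iteration count are chosen for illustration); because the updates only multiply non-negative quantities, W and H stay non-negative throughout:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=1000, eps=1e-9):
    # Lee-Seung multiplicative updates for V ~ W @ H with all entries >= 0
    n, m = len(V), len(V[0])
    rng = random.Random(0)
    W = [[rng.random() for _ in range(r)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(r)]
    for _ in range(iters):
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(m)] for i in range(r)]
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(r)] for i in range(n)]
    return W, H

V = [[1.0, 2.0], [2.0, 4.0]]   # a rank-1 non-negative matrix
W, H = nmf(V, r=1)
WH = matmul(W, H)              # close to V, with W and H entirely non-negative
```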
The norm is a basic function in mathematics, often used to measure the length or size of a vector in a vector space (or of a matrix). The norm of the model parameters can be used as a regularization term.
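As a small sketch of the entry above, the two norms most common in machine learning (the classic 3-4-5 vector is chosen for illustration, and the regularized-loss line is a hypothetical usage):

```python
import math

def l1(v):
    # L1 norm: sum of absolute values (encourages sparse parameters as a regularizer)
    return sum(abs(x) for x in v)

def l2(v):
    # L2 norm: Euclidean length (its square is the usual weight-decay regularizer)
    return math.sqrt(sum(x * x for x in v))

w = [3.0, -4.0]
print(l1(w))   # 7.0
print(l2(w))   # 5.0
# Hypothetical use as a regularizer: loss = data_loss + lam * l2(w) ** 2
```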
ODE (One-Dependent Estimator) is the most commonly used strategy for semi-naive Bayes classifiers. ODE assumes that each attribute depends on at most one other attribute besides the class.
The polynomial kernel function refers to a kernel function expressed in polynomial form. It is a non-standard kernel function suitable for orthonormalized data. Its general form is K(x, z) = (⟨x, z⟩ + c)^d.
The principle of multiple explanations is the idea that all hypotheses consistent with the empirical observations should be retained.
Hyperplane separation means that if two disjoint convex sets are both open, then there exists a hyperplane that separates them.
Stratified sampling is a sampling method in which the population is first divided into strata and samples are then drawn from each stratum. It is a commonly used sampling method in statistics.
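The procedure above can be sketched as follows (the class labels, fraction, and helper name are illustrative); sampling the same fraction from every stratum preserves the class proportions of the population in the sample:

```python
import random
from collections import defaultdict

def stratified_sample(items, key, frac, seed=0):
    # Group items into strata by `key`, then draw the same fraction from each stratum
    rng = random.Random(seed)
    strata = defaultdict(list)
    for it in items:
        strata[key(it)].append(it)
    sample = []
    for group in strata.values():
        k = max(1, round(frac * len(group)))
        sample.extend(rng.sample(group, k))
    return sample

data = [("pos", i) for i in range(80)] + [("neg", i) for i in range(20)]
s = stratified_sample(data, key=lambda t: t[0], frac=0.1)
# 8 positives and 2 negatives: the 80/20 class ratio is preserved in the sample
```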
Symbolic learning refers to machine learning methods that functionally simulate human learning abilities.
Symbolism is a school of thought in the field of artificial intelligence centered on mathematical logic.
The unit step function, also called the Heaviside step function, is defined as H(x) = 0 for x < 0 and H(x) = 1 for x ≥ 0 (the value at x = 0 varies by convention). Related terms: impulse function.
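The definition above translates directly into code (using the H(0) = 1 convention):

```python
def heaviside(x):
    # Unit (Heaviside) step function, with the convention H(0) = 1
    return 1.0 if x >= 0 else 0.0

print([heaviside(x) for x in (-2.0, -0.5, 0.0, 3.0)])  # [0.0, 0.0, 1.0, 1.0]
```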
Von Neumann architecture is a computer design concept in which program instructions and data are stored in the same memory.
Secondary learning refers to learning again when the result of the first round of learning is not satisfactory.
Unequal costs refer to situations where different misclassification costs are assigned to different classes.
Unsaturated games are inspired by heuristic methods rather than theoretical analysis.
An adversarial network is an implementation of the generative adversarial network, used to generate adversarial examples in batches for a specified neural network model.