 boost | Serialization version specializations for mlpack classes |
  serialization | |
   version< mlpack::adaboost::AdaBoost< WeakLearnerType, MatType > > | Sets the serialization version of the AdaBoost class |
   version< mlpack::ann::BRNN< OutputLayerType, MergeLayerType, MergeOutputType, InitializationRuleType, CustomLayer...> > | Sets the serialization version of the BRNN class |
   version< mlpack::ann::FFN< OutputLayerType, InitializationRuleType, CustomLayer...> > | Sets the serialization version of the FFN class |
   version< mlpack::ann::RNN< OutputLayerType, InitializationRuleType, CustomLayer...> > | Sets the serialization version of the RNN class |
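The version<> specializations above give each serializable mlpack class an on-disk format version through Boost.Serialization. A minimal sketch of round-tripping a model through that machinery, assuming the mlpack 3.x data::Save()/data::Load() API (the file and object names here are illustrative):

    #include <mlpack/core.hpp>
    #include <mlpack/methods/adaboost/adaboost.hpp>

    int main()
    {
      // An (untrained) model; any serializable mlpack class works the same way.
      mlpack::adaboost::AdaBoost<> model;
      mlpack::data::Save("adaboost_model.xml", "adaboost", model);

      mlpack::adaboost::AdaBoost<> reloaded;
      mlpack::data::Load("adaboost_model.xml", "adaboost", reloaded);
    }
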
 ens | The ensmallen numerical optimization library, used by mlpack for training |
 mlpack | The main mlpack namespace |
  adaboost | |
   AdaBoost | The AdaBoost class, an implementation of the AdaBoost.MH boosting algorithm built on a generic weak learner |
   AdaBoostModel | The model to save to disk |
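A minimal sketch of boosting a perceptron weak learner with the AdaBoost class, assuming the mlpack 3.x constructor that takes a pre-trained weak learner (the synthetic data is illustrative):

    #include <mlpack/core.hpp>
    #include <mlpack/methods/adaboost/adaboost.hpp>
    #include <mlpack/methods/perceptron/perceptron.hpp>

    using namespace mlpack;

    int main()
    {
      arma::mat data(5, 100, arma::fill::randu);  // 100 points in 5 dimensions
      arma::Row<size_t> labels(100);
      for (size_t i = 0; i < 100; ++i)
        labels[i] = (data(0, i) > 0.5);           // two synthetic classes

      // Weak learner whose hyperparameters AdaBoost will reuse.
      perceptron::Perceptron<> weak(data, labels, 2);

      // Boost the perceptron for up to 50 rounds.
      adaboost::AdaBoost<perceptron::Perceptron<>> ab(data, labels, 2, weak, 50);

      arma::Row<size_t> predictions;
      ab.Classify(data, predictions);
    }
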
  amf | Alternating Matrix Factorization |
   AMF | This class implements AMF (alternating matrix factorization) on the given matrix V |
   AverageInitialization | This initialization rule initializes the matrices W and H to the square root of the average value of V, perturbed with uniform noise |
   CompleteIncrementalTermination | This class acts as a wrapper for basic termination policies to be used by SVDCompleteIncrementalLearning |
   GivenInitialization | This initialization rule for AMF simply fills the W and H matrices with the matrices given to the constructor of this object |
   IncompleteIncrementalTermination | This class acts as a wrapper for basic termination policies to be used by SVDIncompleteIncrementalLearning |
   MaxIterationTermination | This termination policy only terminates when the maximum number of iterations has been reached |
   NMFALSUpdate | This class implements the 'Alternating Least Squares' (ALS) update rules for NMF |
   NMFMultiplicativeDistanceUpdate | The multiplicative distance update rules for matrices W and H |
   NMFMultiplicativeDivergenceUpdate | This follows the divergence-minimizing update rules from the paper 'Algorithms for Non-negative Matrix Factorization' by Lee and Seung |
   RandomAcolInitialization | This class initializes the W matrix of the AMF algorithm by averaging p randomly chosen columns of V |
   RandomInitialization | This initialization rule for AMF simply fills the W and H matrices with uniform random noise in [0, 1] |
   SimpleResidueTermination | This class implements a simple residue-based termination policy |
   SimpleToleranceTermination | This class implements residue tolerance termination policy |
   SVDBatchLearning | This class implements SVD batch learning with momentum |
   SVDCompleteIncrementalLearning | This class computes SVD using complete incremental batch learning |
   SVDCompleteIncrementalLearning< arma::sp_mat > | Template specialization of complete incremental learning for sparse matrices (TODO: merge with the general implementation using the common row_col_iterator) |
   SVDIncompleteIncrementalLearning | This class computes SVD using incomplete incremental batch learning |
   ValidationRMSETermination | This class implements validation termination policy based on RMSE index |
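The pieces above compose into a factorizer: an initialization rule seeds W and H, an update rule refines them, and a termination policy stops the iteration. A minimal sketch of the default NMF configuration of AMF, assuming the mlpack 3.x Apply() signature:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/amf/amf.hpp>

    using namespace mlpack::amf;

    int main()
    {
      arma::mat V(20, 30, arma::fill::randu);  // non-negative input matrix

      AMF<> nmf;  // SimpleResidueTermination + RandomInitialization +
                  // NMFMultiplicativeDistanceUpdate
      arma::mat W, H;
      const double residue = nmf.Apply(V, 5, W, H);  // rank 5; V is approximately W * H
    }
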
  ann | Artificial Neural Network |
   augmented | |
    scorers | |
    tasks | |
     AddTask | Generator of instances of the binary addition task |
     CopyTask | Generator of instances of the binary sequence copy task |
     SortTask | Generator of instances of the sequence sort task |
   Add | Implementation of the Add module class |
   AddMerge | Implementation of the AddMerge module class |
   AddVisitor | AddVisitor exposes the Add() method of the given module |
   AlphaDropout | The alpha-dropout layer is a regularizer that randomly, with probability 'ratio', sets input values to alphaDash |
   AtrousConvolution | Implementation of the Atrous Convolution class |
   BackwardVisitor | BackwardVisitor executes the Backward() function given the input, error and delta parameter |
   BaseLayer | Implementation of the base layer |
   BatchNorm | Declaration of the Batch Normalization layer class |
   BernoulliDistribution | Multiple independent Bernoulli distributions |
   BilinearInterpolation | Definition and Implementation of the Bilinear Interpolation Layer |
   BinaryRBM | The binary RBM policy (binary visible and hidden units), used with the RBM class |
   BRNN | Implementation of a standard bidirectional recurrent neural network container |
   Concat | Implementation of the Concat class |
   Concatenate | Implementation of the Concatenate module class |
   ConcatPerformance | Implementation of the concat performance class |
   Constant | Implementation of the constant layer |
   ConstInitialization | This class is used to initialize weight matrix with constant values |
   Convolution | Implementation of the Convolution class |
   CopyVisitor | This visitor is to support copy constructor for neural network module |
   CReLU | A concatenated ReLU has two outputs, one ReLU and one negative ReLU, concatenated together |
   CrossEntropyError | The cross-entropy performance function measures the network's performance according to the cross-entropy between the input and target distributions |
   DeleteVisitor | DeleteVisitor executes the destructor of the instantiated object |
   DeltaVisitor | DeltaVisitor exposes the delta parameter of the given module |
   DeterministicSetVisitor | DeterministicSetVisitor set the deterministic parameter given the deterministic value |
   DiceLoss | The dice loss performance function measures the network's performance according to the dice coefficient between the input and target distributions |
   DropConnect | The DropConnect layer is a regularizer that randomly, with probability 'ratio', sets connection values to zero and scales the remaining elements by a factor of 1 / (1 - ratio) |
   Dropout | The dropout layer is a regularizer that randomly, with probability 'ratio', sets input values to zero and scales the remaining elements by a factor of 1 / (1 - ratio) during training (rather than at test time), so that the expected sum stays the same |
   EarthMoverDistance | The earth mover distance function measures the network's performance according to the Kantorovich-Rubinstein duality approximation |
   ELU | The ELU activation function, defined by f(x) = x for x > 0 and f(x) = alpha * (e^x - 1) otherwise |
   FastLSTM | An implementation of a faster version of the LSTM network layer |
   FFN | Implementation of a standard feed forward network |
   FFTConvolution | Computes the two-dimensional convolution using the fast Fourier transform (FFT) |
   FlexibleReLU | The FlexibleReLU activation function, defined by f(x) = max(0, x) + alpha, where alpha is trainable |
   ForwardVisitor | ForwardVisitor executes the Forward() function given the input and output parameter |
   FullConvolution | Border mode for convolution rules that computes the full (zero-padded) output |
   GaussianInitialization | This class is used to initialize the weight matrix with a Gaussian distribution |
   Glimpse | The glimpse layer returns a retina-like representation (down-scaled cropped images) of increasing scale around a given location in a given image |
   GlorotInitializationType | This class is used to initialize the weight matrix with the Glorot Initialization method |
   GradientSetVisitor | GradientSetVisitor update the gradient parameter given the gradient set |
   GradientUpdateVisitor | GradientUpdateVisitor update the gradient parameter given the gradient set |
   GradientVisitor | GradientVisitor executes the Gradient() method of the given module using the input and delta parameters |
   GradientZeroVisitor | GradientZeroVisitor sets the gradient of the given module to zero |
   GRU | An implementation of a GRU (gated recurrent unit) network layer |
   HardSigmoidFunction | The hard sigmoid function, defined by f(x) = min(1, max(0, 0.2 * x + 0.5)) |
   HardTanH | The Hard Tanh activation function, which clips its input to the range [minValue, maxValue] ([-1, 1] by default) |
   HeInitialization | This class is used to initialize the weight matrix with the He initialization rule given by He et al |
   IdentityFunction | The identity function, defined by f(x) = x |
   InitTraits | This is a template class that can provide information about various initialization methods |
   InitTraits< KathirvalavakumarSubavathiInitialization > | Initialization traits of the Kathirvalavakumar-Subavathi initialization rule |
   InitTraits< NguyenWidrowInitialization > | Initialization traits of the Nguyen-Widrow initialization rule |
   Join | Implementation of the Join module class |
   KathirvalavakumarSubavathiInitialization | This class is used to initialize the weight matrix with the method proposed by Kathirvalavakumar and Subavathi |
   KLDivergence | The Kullback–Leibler divergence is often used for continuous distributions (direct regression) |
   LayerNorm | Declaration of the Layer Normalization class |
   LayerTraits | This is a template class that can provide information about various layers |
   LeakyReLU | The LeakyReLU activation function, defined by f(x) = max(x, alpha * x) for a small fixed alpha |
   LecunNormalInitialization | This class is used to initialize the weight matrix with the Lecun normal initialization rule |
   Linear | Implementation of the Linear layer class |
   LinearNoBias | Implementation of the LinearNoBias class |
   LoadOutputParameterVisitor | LoadOutputParameterVisitor restores the output parameter using the given parameter set |
   LogisticFunction | The logistic function, defined by f(x) = 1 / (1 + e^-x) |
   LogSoftMax | Implementation of the log softmax layer |
   Lookup | Implementation of the Lookup class |
   LossVisitor | LossVisitor exposes the Loss() method of the given module |
   LSTM | Implementation of the LSTM module class |
   MaxPooling | Implementation of the MaxPooling layer |
   MaxPoolingRule | Pooling rule that takes the maximum value of each pooling region |
   MeanPooling | Implementation of the MeanPooling layer |
   MeanPoolingRule | Pooling rule that takes the mean value of each pooling region |
   MeanSquaredError | The mean squared error performance function measures the network's performance according to the mean of squared errors |
   MultiplyConstant | Implementation of the multiply constant layer |
   MultiplyMerge | Implementation of the MultiplyMerge module class |
   NaiveConvolution | Computes the two-dimensional convolution |
   NegativeLogLikelihood | Implementation of the negative log likelihood layer |
   NetworkInitialization | This class is used to initialize the network with the given initialization rule |
   NguyenWidrowInitialization | This class is used to initialize the weight matrix with the Nguyen-Widrow method |
   OivsInitialization | This class is used to initialize the weight matrix with the OIVS method |
   OrthogonalInitialization | This class is used to initialize the weight matrix with the orthogonal matrix initialization |
   OutputHeightVisitor | OutputHeightVisitor exposes the OutputHeight() method of the given module |
   OutputParameterVisitor | OutputParameterVisitor exposes the output parameter of the given module |
   OutputWidthVisitor | OutputWidthVisitor exposes the OutputWidth() method of the given module |
   ParametersSetVisitor | ParametersSetVisitor update the parameters set using the given matrix |
   ParametersVisitor | ParametersVisitor exposes the parameters set of the given module and stores the parameters set into the given matrix |
   PReLU | The PReLU activation function, defined by f(x) = max(x, 0) + alpha * min(x, 0), where alpha is trainable |
   RandomInitialization | This class is used to randomly initialize the weight matrix |
   RBM | The implementation of the RBM module |
   ReconstructionLoss | The reconstruction loss performance function measures the network's performance as the negative log probability of the target under the input distribution |
   RectifierFunction | The rectifier function, defined by f(x) = max(0, x) |
   Recurrent | Implementation of the Recurrent layer class |
   RecurrentAttention | This class implements the Recurrent Model for Visual Attention, using a variety of possible layer implementations |
   ReinforceNormal | Implementation of the reinforce normal layer |
   Reparametrization | Implementation of the Reparametrization layer class |
   ResetCellVisitor | ResetCellVisitor executes the ResetCell() function |
   ResetVisitor | ResetVisitor executes the Reset() function |
   RewardSetVisitor | RewardSetVisitor set the reward parameter given the reward value |
   RNN | Implementation of a standard recurrent neural network container |
   RunSetVisitor | RunSetVisitor set the run parameter given the run value |
   SaveOutputParameterVisitor | SaveOutputParameterVisitor saves the output parameter into the given parameter set |
   Select | The select module selects the specified column from a given input matrix |
   Sequential | Implementation of the Sequential class |
   SetInputHeightVisitor | SetInputHeightVisitor updates the input height parameter with the given input height |
   SetInputWidthVisitor | SetInputWidthVisitor updates the input width parameter with the given input width |
   SigmoidCrossEntropyError | The SigmoidCrossEntropyError performance function measures the network's performance according to the cross-entropy function between the input and target distributions |
   SoftplusFunction | The softplus function, defined by f(x) = ln(1 + e^x) |
   SoftsignFunction | The softsign function, defined by f(x) = x / (1 + |x|) |
   SpikeSlabRBM | The spike-and-slab RBM policy, used with the RBM class |
   Subview | Implementation of the subview layer |
   SVDConvolution | Computes the two-dimensional convolution using singular value decomposition |
   SwishFunction | The swish function, defined by f(x) = x * sigmoid(x) |
   TanhFunction | The tanh function, defined by f(x) = tanh(x) |
   TransposedConvolution | Implementation of the Transposed Convolution class |
   ValidConvolution | Border mode for convolution rules that computes only the valid (unpadded) output |
   VRClassReward | Implementation of the variance reduced classification reinforcement layer |
   WeightSetVisitor | WeightSetVisitor update the module parameters given the parameters set |
   WeightSizeVisitor | WeightSizeVisitor returns the number of weights of the given module |
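Most of the layer, visitor, and initialization classes above are wired together by the FFN, RNN, and BRNN containers. A minimal sketch of building and training a small classifier with FFN, assuming the mlpack 3.x ann API (NegativeLogLikelihood expects class labels in 1..numClasses):

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    int main()
    {
      arma::mat trainData(10, 200, arma::fill::randu);  // 200 points, 10 dims
      arma::mat trainLabels(1, 200);
      for (size_t i = 0; i < 200; ++i)
        trainLabels(0, i) = (trainData(0, i) > 0.5) + 1;  // classes 1 and 2

      FFN<NegativeLogLikelihood<>, RandomInitialization> model;
      model.Add<Linear<>>(10, 16);
      model.Add<SigmoidLayer<>>();   // BaseLayer over the logistic function
      model.Add<Linear<>>(16, 2);
      model.Add<LogSoftMax<>>();

      model.Train(trainData, trainLabels);

      arma::mat output;
      model.Predict(trainData, output);  // per-class log-probabilities
    }
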
  bindings | |
   cli | |
    CLIOption | A static object whose constructor registers a parameter with the CLI class |
    ParameterType | Utility struct to return the type that boost::program_options should accept for a given input type |
    ParameterType< arma::Col< eT > > | For vector types, boost::program_options will accept a std::string, not an arma::Col<eT> (since it is not clear how to specify a vector on the command-line) |
    ParameterType< arma::Mat< eT > > | For matrix types, boost::program_options will accept a std::string, not an arma::mat (since it is not clear how to specify a matrix on the command-line) |
    ParameterType< arma::Row< eT > > | For row vector types, boost::program_options will accept a std::string, not an arma::Row<eT> (since it is not clear how to specify a vector on the command-line) |
    ParameterType< std::tuple< mlpack::data::DatasetMapper< PolicyType, std::string >, arma::Mat< eT > > > | For matrix+dataset info types, we should accept a std::string |
    ParameterTypeDeducer | |
    ParameterTypeDeducer< true, T > | |
    ProgramDoc | A static object whose constructor registers program documentation with the CLI class |
   markdown | |
    BindingInfo | Used by the Markdown documentation generator to store multiple ProgramDoc objects, indexed by both binding name and language |
    MDOption | The Markdown option class |
    ProgramDocWrapper | |
   python | |
    PyOption | The Python option class |
   tests | |
    ProgramDoc | A static object whose constructor registers program documentation with the CLI class |
    TestOption | A static object whose constructor registers a parameter with the CLI class |
  bound | |
   addr | Utilities for computing bit-interleaved addresses of points, used by the universal B-tree |
   meta | Metaprogramming utilities |
    IsLMetric | Utility struct where Value is true if and only if the argument is of type LMetric |
    IsLMetric< metric::LMetric< Power, TakeRoot > > | Specialization for IsLMetric when the argument is of type LMetric |
   BallBound | Ball bound encloses a set of points at a specific distance (radius) from a specific point (center) |
   BoundTraits | A class to obtain compile-time traits about BoundType classes |
   BoundTraits< BallBound< MetricType, VecType > > | A specialization of BoundTraits for this bound type |
   BoundTraits< CellBound< MetricType, ElemType > > | A specialization of BoundTraits for this bound type |
   BoundTraits< HollowBallBound< MetricType, ElemType > > | A specialization of BoundTraits for this bound type |
   BoundTraits< HRectBound< MetricType, ElemType > > | A specialization of BoundTraits for this bound type |
   CellBound | The CellBound class describes a bound that consists of a number of hyperrectangles |
   HollowBallBound | Hollow ball bound encloses a set of points at a specific distance (radius) from a specific point (center) except points at a specific distance from another point (the center of the hole) |
   HRectBound | Hyper-rectangle bound for an L-metric |
  cf | Collaborative filtering |
   AverageInterpolation | This class performs average interpolation to generate interpolation weights for neighborhood-based collaborative filtering |
   BatchSVDPolicy | Implementation of the Batch SVD policy to act as a wrapper when accessing Batch SVD from within CFType |
   BiasSVDPolicy | Implementation of the Bias SVD policy to act as a wrapper when accessing Bias SVD from within CFType |
   CFModel | The model to save to disk |
   CFType | This class implements Collaborative Filtering (CF) |
   CombinedNormalization | This normalization class performs a sequence of normalization methods on raw ratings |
   CosineSearch | Nearest neighbor search with cosine distance |
   DeleteVisitor | DeleteVisitor deletes the CFType<> object which is pointed to by the variable cf in class CFModel |
   DummyClass | This class acts as a dummy class for passing as template parameter |
   GetValueVisitor | GetValueVisitor returns the pointer which points to the CFType object |
   ItemMeanNormalization | This normalization class performs item mean normalization on raw ratings |
   LMetricSearch | Nearest neighbor search with L_p distance |
   NMFPolicy | Implementation of the NMF policy to act as a wrapper when accessing NMF from within CFType |
   NoNormalization | This normalization class doesn't perform any normalization |
   OverallMeanNormalization | This normalization class performs overall mean normalization on raw ratings |
   PearsonSearch | Nearest neighbor search with Pearson distance (or furthest neighbor search with Pearson correlation) |
   PredictVisitor | PredictVisitor uses the CFType object to make predictions on the given combinations of users and items |
   RandomizedSVDPolicy | Implementation of the Randomized SVD policy to act as a wrapper when accessing Randomized SVD from within CFType |
   RecommendationVisitor | RecommendationVisitor uses the CFType object to get recommendations for the given users |
   RegressionInterpolation | Implementation of regression-based interpolation method |
   RegSVDPolicy | Implementation of the Regularized SVD policy to act as a wrapper when accessing Regularized SVD from within CFType |
   SimilarityInterpolation | With SimilarityInterpolation, interpolation weights are based on similarities between query user and its neighbors |
   SVDCompletePolicy | Implementation of the SVD complete incremental policy to act as a wrapper when accessing SVD complete decomposition from within CFType |
   SVDIncompletePolicy | Implementation of the SVD incomplete incremental policy to act as a wrapper when accessing SVD incomplete incremental from within CFType |
   SVDPlusPlusPolicy | Implementation of the SVDPlusPlus policy to act as a wrapper when accessing SVDPlusPlus from within CFType |
   SVDWrapper | This class acts as the wrapper for all SVD factorizers that are incompatible with the CF module |
   UserMeanNormalization | This normalization class performs user mean normalization on raw ratings |
   ZScoreNormalization | This normalization class performs z-score normalization on raw ratings |
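A minimal sketch of collaborative filtering with the NMF decomposition policy, assuming the mlpack 3.x CFType API and its coordinate-list input format (each column of the ratings matrix is a (user, item, rating) triple; the data here is illustrative):

    #include <mlpack/core.hpp>
    #include <mlpack/methods/cf/cf.hpp>

    using namespace mlpack::cf;

    int main()
    {
      // Each column is (user, item, rating).
      arma::mat ratings = { { 0, 0, 1, 1, 2, 2 },
                            { 0, 1, 1, 2, 0, 2 },
                            { 5, 3, 4, 1, 2, 5 } };

      // 5 neighbors for similarity, rank-2 decomposition.
      CFType<NMFPolicy> cf(ratings, NMFPolicy(), 5, 2);

      arma::Mat<size_t> recommendations;
      cf.GetRecommendations(2, recommendations);  // top 2 items per user
    }
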
  cv | |
   Accuracy | The Accuracy is a metric of performance for classification algorithms, equal to the proportion of correctly labeled test items among all test items |
   CVBase | An auxiliary class for cross-validation |
   F1 | F1 is a metric of performance for classification algorithms that for binary classification is equal to 2 * precision * recall / (precision + recall) |
   KFoldCV | The class KFoldCV implements k-fold cross-validation for regression and classification algorithms |
   MetaInfoExtractor | MetaInfoExtractor is a tool for extracting meta information about a given machine learning algorithm |
   MSE | The MeanSquaredError is a metric of performance for regression algorithms that is equal to the mean squared error between predicted values and ground truth (correct) values for given test items |
   NotFoundMethodForm | |
   Precision | Precision is a metric of performance for classification algorithms that for binary classification is equal to tp / (tp + fp), where tp and fp are the numbers of true positives and false positives respectively |
   Recall | Recall is a metric of performance for classification algorithms that for binary classification is equal to tp / (tp + fn), where tp and fn are the numbers of true positives and false negatives respectively |
   SelectMethodForm | A type function that selects the right method form |
   SelectMethodForm< MLAlgorithm > | |
    From | |
   SelectMethodForm< MLAlgorithm, HasMethodForm, HMFs...> | |
    From | |
   SimpleCV | SimpleCV splits data into two sets - training and validation sets - and then runs training on the training set and evaluates performance on the validation set |
   TrainForm | A wrapper struct for holding a Train form |
   TrainForm< MT, PT, void, false, false > | |
   TrainForm< MT, PT, void, false, true > | |
   TrainForm< MT, PT, void, true, false > | |
   TrainForm< MT, PT, void, true, true > | |
   TrainForm< MT, PT, WT, false, false > | |
   TrainForm< MT, PT, WT, false, true > | |
   TrainForm< MT, PT, WT, true, false > | |
   TrainForm< MT, PT, WT, true, true > | |
   TrainFormBase4 | |
   TrainFormBase5 | |
   TrainFormBase6 | |
   TrainFormBase7 | |
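A minimal sketch of 10-fold cross-validation of logistic regression under the Accuracy metric, assuming the mlpack 3.x cross-validation API, in which arguments to Evaluate() are forwarded to the model's constructor:

    #include <mlpack/core.hpp>
    #include <mlpack/core/cv/k_fold_cv.hpp>
    #include <mlpack/core/cv/metrics/accuracy.hpp>
    #include <mlpack/methods/logistic_regression/logistic_regression.hpp>

    using namespace mlpack::cv;
    using namespace mlpack::regression;

    int main()
    {
      arma::mat data(5, 100, arma::fill::randu);
      arma::Row<size_t> labels(100);
      for (size_t i = 0; i < 100; ++i)
        labels[i] = (data(0, i) > 0.5);

      KFoldCV<LogisticRegression<>, Accuracy> cv(10, data, labels);
      const double accuracy = cv.Evaluate(0.01);  // lambda forwarded to the model
    }
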
  data | Functions to load and save matrices and models |
   CustomImputation | A simple custom imputation class |
   DatasetMapper | Auxiliary information for a dataset, including mappings to/from strings (or other types) and the datatype of each dimension |
   HasSerialize | |
    check | |
   HasSerializeFunction | |
   Imputer | Given a dataset of a particular datatype, replace user-specified missing value with a variable dependent on the StrategyType and MapperType |
   IncrementPolicy | IncrementPolicy is used as a helper class for DatasetMapper |
   ListwiseDeletion | A complete-case analysis that removes the values containing mappedValue |
   LoadCSV | Load a CSV file. This class uses boost::spirit to implement the parser; see http://theboostcpplibraries.com/boost.spirit for a quick review |
   MeanImputation | A simple mean imputation class |
   MedianImputation | This is a class implementation of simple median imputation |
   MissingPolicy | MissingPolicy is used as a helper class for DatasetMapper |
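A minimal sketch of loading a dataset with mixed categorical and numeric dimensions, assuming the mlpack 3.x data::Load() overload that takes a DatasetInfo (the file name is illustrative):

    #include <mlpack/core.hpp>

    int main()
    {
      arma::mat dataset;
      mlpack::data::DatasetInfo info;  // a DatasetMapper typedef; records the
                                       // string-to-value mapping per dimension
      mlpack::data::Load("dataset.csv", dataset, info, true /* fatal on error */);

      const size_t dims = info.Dimensionality();
    }
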
  dbscan | |
   DBSCAN | DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a clustering technique described in the following paper: |
   OrderedPointSelection | This class can be used to sequentially select the next point to use for DBSCAN |
   RandomPointSelection | This class can be used to randomly select the next point to use for DBSCAN |
  decision_stump | |
   DecisionStump | This class implements a decision stump |
  det | Density Estimation Trees |
   DTree | A density estimation tree is similar to both a decision tree and a space partitioning tree (like a kd-tree) |
   PathCacher | This class is responsible for caching the path to each node of the tree |
  distribution | Probability distributions |
   DiagonalGaussianDistribution | A single multivariate Gaussian distribution with diagonal covariance |
   DiscreteDistribution | A discrete distribution where the only observations are discrete observations |
   GammaDistribution | This class represents the Gamma distribution |
   GaussianDistribution | A single multivariate Gaussian distribution |
   LaplaceDistribution | The multivariate Laplace distribution centered at 0, with pdf proportional to exp(-||x|| / b) for scale parameter b |
   RegressionDistribution | A class that represents a univariate conditionally Gaussian distribution |
  emst | Euclidean Minimum Spanning Trees |
   DTBRules | |
   DTBStat | A statistic for use with mlpack trees, which stores the upper bound on distance to nearest neighbors and the component which this node belongs to |
   DualTreeBoruvka | Performs the MST calculation using the Dual-Tree Boruvka algorithm, using any type of tree |
   EdgePair | An edge pair is simply two indices and a distance |
   UnionFind | A Union-Find data structure |
  fastmks | Fast max-kernel search |
   FastMKS | An implementation of fast exact max-kernel search |
   FastMKSModel | A utility struct to contain all the possible FastMKS models, for use by the mlpack_fastmks program |
   FastMKSRules | The FastMKSRules class is a template helper class used by FastMKS class when performing exact max-kernel search |
   FastMKSStat | The statistic used in trees with FastMKS |
  gmm | Gaussian Mixture Models |
   DiagonalConstraint | Force a covariance matrix to be diagonal |
   DiagonalGMM | A Diagonal Gaussian Mixture Model |
   EigenvalueRatioConstraint | Given a vector of eigenvalue ratios, ensure that the covariance matrix always has those eigenvalue ratios |
   EMFit | This class contains methods which can fit a GMM to observations using the EM algorithm |
   GMM | A Gaussian Mixture Model (GMM) |
   NoConstraint | This class enforces no constraint on the covariance matrix |
   PositiveDefiniteConstraint | Given a covariance matrix, force the matrix to be positive definite |
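A minimal sketch of fitting a two-component GMM with EM and evaluating a density, assuming the mlpack 3.x gmm API:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/gmm/gmm.hpp>

    using namespace mlpack::gmm;

    int main()
    {
      arma::mat observations(3, 1000, arma::fill::randn);  // 1000 3-d points

      GMM gmm(2, 3);               // 2 Gaussians in 3 dimensions
      gmm.Train(observations, 3);  // 3 EM trials, keeping the best fit

      const double p = gmm.Probability(observations.col(0));
    }
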
  hmm | Hidden Markov Models |
   HMM | A class that represents a Hidden Markov Model with an arbitrary type of emission distribution |
   HMMModel | A serializable HMM model that also stores the type |
   HMMRegression | A class that represents a Hidden Markov Model Regression (HMMR) |
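A minimal sketch of unsupervised (Baum-Welch) training of an HMM with Gaussian emissions, followed by Viterbi decoding, assuming the mlpack 3.x HMM API (each sequence is a matrix with one observation per column):

    #include <mlpack/core.hpp>
    #include <mlpack/methods/hmm/hmm.hpp>

    using namespace mlpack::hmm;
    using namespace mlpack::distribution;

    int main()
    {
      std::vector<arma::mat> sequences = {
          arma::mat(1, 50, arma::fill::randn),
          arma::mat(1, 40, arma::fill::randn) };

      HMM<GaussianDistribution> hmm(2, GaussianDistribution(1));  // 2 states
      hmm.Train(sequences);

      arma::Row<size_t> states;
      hmm.Predict(sequences[0], states);  // most likely state sequence
    }
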
  hpt | |
   CVFunction | This wrapper serves for adapting the interface of the cross-validation classes to the one that can be utilized by the mlpack optimizers |
   DeduceHyperParameterTypes | A type function for deducing types of hyper-parameters from types of arguments in the Optimize method in HyperParameterTuner |
    ResultHolder | |
   DeduceHyperParameterTypes< PreFixedArg< T >, Args...> | Defining DeduceHyperParameterTypes for the case when not all argument types have been processed, and the next one is the type of an argument that should be fixed |
    ResultHolder | |
   DeduceHyperParameterTypes< T, Args...> | Defining DeduceHyperParameterTypes for the case when not all argument types have been processed, and the next one (T) is a collection type or an arithmetic type |
    IsCollectionType | A type function to check whether Type is a collection type (for that it should define value_type) |
    ResultHolder | |
    ResultHPType | A type function to deduce the result hyper-parameter type for ArgumentType |
    ResultHPType< ArithmeticType, true > | |
    ResultHPType< CollectionType, false > | |
   FixedArg | A struct for storing information about a fixed argument |
   HyperParameterTuner | The class HyperParameterTuner for the given MLAlgorithm utilizes the provided Optimizer to find the values of hyper-parameters that optimize the value of the given Metric |
   IsPreFixedArg | A type function for checking whether the given type is PreFixedArg |
   PreFixedArg | A struct for marking arguments as ones that should be fixed (it can be useful for the Optimize method of HyperParameterTuner) |
   PreFixedArg< T & > | The specialization of the template for references |
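A minimal sketch of tuning the lambda of logistic regression with the hyper-parameter tuner and simple validation, following the mlpack 3.x hpt API (collection arguments are searched exhaustively; arithmetic arguments would be optimized numerically):

    #include <tuple>
    #include <mlpack/core.hpp>
    #include <mlpack/core/hpt/hpt.hpp>
    #include <mlpack/core/cv/metrics/accuracy.hpp>
    #include <mlpack/methods/logistic_regression/logistic_regression.hpp>

    using namespace mlpack::cv;
    using namespace mlpack::hpt;
    using namespace mlpack::regression;

    int main()
    {
      arma::mat data(5, 100, arma::fill::randu);
      arma::Row<size_t> labels(100);
      for (size_t i = 0; i < 100; ++i)
        labels[i] = (data(0, i) > 0.5);

      // Hold out 20% of the data for validation.
      HyperParameterTuner<LogisticRegression<>, Accuracy, SimpleCV>
          hpt(0.2, data, labels);

      arma::vec lambdas{ 0.0, 0.001, 0.01, 0.1 };
      double bestLambda;
      std::tie(bestLambda) = hpt.Optimize(lambdas);
    }
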
  kde | Kernel Density Estimation |
   DeleteVisitor | |
   DualBiKDE | DualBiKDE computes a Kernel Density Estimation on the given KDEType |
   DualMonoKDE | DualMonoKDE computes a Kernel Density Estimation on the given KDEType |
   KDE | The KDE class is a template class for performing Kernel Density Estimations |
   KDEModel | |
   KDERules | A dual-tree traversal Rules class for kernel density estimation |
   KDEStat | Extra data for each node in the tree for the task of kernel density estimation |
   KernelNormalizer | KernelNormalizer holds a set of methods to normalize estimations, applying the appropriate kernel normalizer function in each case |
   ModeVisitor | ModeVisitor exposes the Mode() method of the KDEType |
   TrainVisitor | TrainVisitor trains a given KDEType using a reference set |
  kernel | Kernel functions |
   CauchyKernel | The Cauchy kernel |
   CosineDistance | The cosine distance (or cosine similarity) |
   EpanechnikovKernel | The Epanechnikov kernel, defined as K(p, q) = max(0, 1 - ||p - q||^2 / b^2) for bandwidth b |
   ExampleKernel | An example kernel function |
   GaussianKernel | The standard Gaussian kernel |
   HyperbolicTangentKernel | Hyperbolic tangent kernel |
   KernelTraits | This is a template class that can provide information about various kernels |
   KernelTraits< CauchyKernel > | Kernel traits for the Cauchy kernel |
   KernelTraits< CosineDistance > | Kernel traits for the cosine distance |
   KernelTraits< EpanechnikovKernel > | Kernel traits for the Epanechnikov kernel |
   KernelTraits< GaussianKernel > | Kernel traits for the Gaussian kernel |
   KernelTraits< LaplacianKernel > | Kernel traits of the Laplacian kernel |
   KernelTraits< SphericalKernel > | Kernel traits for the spherical kernel |
   KernelTraits< TriangularKernel > | Kernel traits for the triangular kernel |
   KMeansSelection | Implementation of the k-means sampling scheme for the Nystroem method |
   LaplacianKernel | The standard Laplacian kernel |
   LinearKernel | The simple linear kernel (dot product) |
   NystroemMethod | An implementation of the Nystroem method for low-rank approximation of a kernel matrix |
   OrderedSelection | Selects the first points of the dataset for use in the Nystroem method |
   PolynomialKernel | The simple polynomial kernel |
   PSpectrumStringKernel | The p-spectrum string kernel |
   RandomSelection | Randomly selects points of the dataset for use in the Nystroem method |
   SphericalKernel | The spherical kernel, which is 1 when the distance between the two argument points is less than or equal to the bandwidth, or 0 otherwise |
   TriangularKernel | The trivially simple triangular kernel, defined by K(p, q) = max(0, 1 - ||p - q|| / b) for bandwidth b |
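All of these kernels share a common Evaluate() interface, and KernelTraits exposes their compile-time properties; a minimal sketch, assuming the mlpack 3.x kernel headers:

    #include <mlpack/core.hpp>
    #include <mlpack/core/kernels/kernel_traits.hpp>
    #include <mlpack/core/kernels/gaussian_kernel.hpp>
    #include <mlpack/core/kernels/linear_kernel.hpp>

    using namespace mlpack::kernel;

    int main()
    {
      arma::vec a = { 0.0, 1.0, 2.0 };
      arma::vec b = { 1.0, 1.0, 1.0 };

      GaussianKernel g(2.0);               // bandwidth 2
      const double k1 = g.Evaluate(a, b);  // exp(-||a - b||^2 / (2 * 2.0^2))

      LinearKernel l;
      const double k2 = l.Evaluate(a, b);  // plain dot product

      static_assert(KernelTraits<GaussianKernel>::IsNormalized,
          "the Gaussian kernel satisfies k(x, x) = 1");
    }
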
  kmeans | K-Means clustering |
   AllowEmptyClusters | Policy which allows K-Means to create empty clusters without any error being reported |
   DualTreeKMeans | An algorithm for an exact Lloyd iteration which simply uses dual-tree nearest-neighbor search to find the nearest centroid for each point in the dataset |
   DualTreeKMeansRules | |
   DualTreeKMeansStatistic | |
   ElkanKMeans | An implementation of Elkan's algorithm for exact Lloyd iterations, using the triangle inequality to prune distance computations |
   HamerlyKMeans | An implementation of Hamerly's algorithm for exact Lloyd iterations, maintaining distance bounds to prune distance computations |
   KillEmptyClusters | Policy which allows K-Means to "kill" empty clusters without any error being reported |
   KMeans | This class implements K-Means clustering, using a variety of possible implementations of Lloyd's algorithm |
   MaxVarianceNewCluster | When an empty cluster is detected, this class takes the point furthest from the centroid of the cluster with maximum variance as a new cluster |
   NaiveKMeans | This is an implementation of a single iteration of Lloyd's algorithm for k-means |
   PellegMooreKMeans | An implementation of Pelleg-Moore's 'blacklist' algorithm for k-means clustering |
   PellegMooreKMeansRules | The rules class for the single-tree Pelleg-Moore kd-tree traversal for k-means clustering |
   PellegMooreKMeansStatistic | A statistic for trees which holds the blacklist for Pelleg-Moore k-means clustering (which represents the clusters that cannot possibly own any points in a node) |
   RandomPartition | A very simple partitioner which partitions the data randomly into the number of desired clusters |
   RefinedStart | A refined approach for choosing initial points for k-means clustering |
   SampleInitialization | Initial partitioning policy that chooses random samples from the dataset as the initial centroids |
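A minimal sketch of Lloyd's algorithm with the class's default policies, assuming the mlpack 3.x KMeans API:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/kmeans/kmeans.hpp>

    using namespace mlpack::kmeans;

    int main()
    {
      arma::mat data(2, 300, arma::fill::randu);  // 300 2-d points

      KMeans<> kmeans;
      arma::Row<size_t> assignments;
      arma::mat centroids;
      kmeans.Cluster(data, 3, assignments, centroids);  // 3 clusters
    }
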
  kpca | |
   KernelPCA | This class performs kernel principal components analysis (Kernel PCA), for a given kernel |
   NaiveKernelRule | Policy that computes the full kernel matrix for kernel PCA |
   NystroemKernelRule | Policy that approximates the kernel matrix with the Nystroem method for kernel PCA |
  lcc | |
   LocalCoordinateCoding | An implementation of Local Coordinate Coding (LCC) that codes data which approximately lives on a manifold using a variation of l1-norm regularized sparse coding; in LCC, the penalty on the absolute value of each point's coefficient for each atom is weighted by the squared distance of that point to that atom |
  lmnn | Large Margin Nearest Neighbor |
   Constraints | Interface for generating distance based constraints on a given dataset, provided corresponding true labels and a quantity parameter (k) are specified |
   LMNN | An implementation of Large Margin nearest neighbor metric learning technique |
   LMNNFunction | The Large Margin Nearest Neighbors function |
  math | Miscellaneous math routines |
   ColumnsToBlocks | Transform the columns of the given matrix into a block format |
   RangeType | Simple real-valued range |
  matrix_completion | |
   MatrixCompletion | This class implements the popular nuclear norm minimization heuristic for matrix completion problems |
  meanshift | Mean shift clustering |
   MeanShift | This class implements mean shift clustering |
  metric | |
   IPMetric | The inner product metric, IPMetric, takes a given Mercer kernel (KernelType), and when Evaluate() is called, returns the distance between the two points in kernel space: |
   LMetric | The L_p metric for arbitrary integer p, with an option to take the root |
   MahalanobisDistance | The Mahalanobis distance, which is essentially a stretched Euclidean distance |
  naive_bayes | The Naive Bayes Classifier |
   NaiveBayesClassifier | The simple Naive Bayes classifier |
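A minimal sketch of training and applying the classifier, assuming the mlpack 3.x NaiveBayesClassifier API:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/naive_bayes/naive_bayes_classifier.hpp>

    using namespace mlpack::naive_bayes;

    int main()
    {
      arma::mat data(4, 150, arma::fill::randu);
      arma::Row<size_t> labels(150);
      for (size_t i = 0; i < 150; ++i)
        labels[i] = (data(0, i) > 0.5);

      NaiveBayesClassifier<> nbc(data, labels, 2);  // 2 classes

      arma::Row<size_t> predictions;
      nbc.Classify(data, predictions);
    }
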
  nca | Neighborhood Components Analysis |
   NCA | An implementation of Neighborhood Components Analysis, both a linear dimensionality reduction technique and a distance learning technique |
   SoftmaxErrorFunction | The "softmax" stochastic neighbor assignment probability function |
  neighbor | |
   AlphaVisitor | Exposes the Alpha() method of the given RAType |
   BiSearchVisitor | BiSearchVisitor executes a bichromatic neighbor search on the given NSType |
   DeleteVisitor | DeleteVisitor deletes the given NSType instance |
   DrusillaSelect | An implementation of the DrusillaSelect algorithm for approximate furthest neighbor search |
   EpsilonVisitor | EpsilonVisitor exposes the Epsilon method of the given NSType |
   FirstLeafExactVisitor | Exposes the FirstLeafExact() method of the given RAType |
   FurthestNS | This class implements the necessary methods for the SortPolicy template parameter of the NeighborSearch class, for furthest neighbor search |
   LSHSearch | The LSHSearch class; this class builds a hash on the reference set and uses this hash to compute the distance-approximate nearest-neighbors of the given queries |
   MonoSearchVisitor | MonoSearchVisitor executes a monochromatic neighbor search on the given NSType |
   NaiveVisitor | NaiveVisitor exposes the Naive() method of the given RAType |
   NearestNS | This class implements the necessary methods for the SortPolicy template parameter of the NeighborSearch class, for nearest neighbor search |
   NeighborSearch | The NeighborSearch class is a template class for performing distance-based neighbor searches |
   NeighborSearchRules | The NeighborSearchRules class is a template helper class used by NeighborSearch class when performing distance-based neighbor searches |
    CandidateCmp | Compare two candidates based on the distance |
   NeighborSearchStat | Extra data for each node in the tree |
   NSModel | The NSModel class provides an easy way to serialize a model, abstracts away the different types of trees, and also reflects the NeighborSearch API |
   QDAFN | An implementation of the query-dependent approximate furthest neighbor (QDAFN) algorithm |
   RAModel | The RAModel class provides an abstraction for the RASearch class, abstracting away the TreeType parameter and allowing it to be specified at runtime in this class |
   RAQueryStat | Extra data for each node in the tree |
   RASearch | The RASearch class: This class provides a generic manner to perform rank-approximate search via random-sampling |
   RASearchRules | The RASearchRules class is a template helper class used by RASearch class when performing rank-approximate search via random-sampling |
   RAUtil | |
   ReferenceSetVisitor | ReferenceSetVisitor exposes the referenceSet of the given NSType |
   SampleAtLeavesVisitor | Exposes the SampleAtLeaves() method of the given RAType |
   SearchModeVisitor | SearchModeVisitor exposes the SearchMode() method of the given NSType |
   SingleModeVisitor | Exposes the SingleMode() method of the given RAType |
   SingleSampleLimitVisitor | Exposes the SingleSampleLimit() method of the given RAType |
   TauVisitor | Exposes the Tau() method of the given RAType |
   TrainVisitor | TrainVisitor sets the reference set to a new reference set on the given NSType |
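A minimal sketch of k-nearest-neighbor search with the KNN typedef of NeighborSearch (kd-tree build plus dual-tree search), assuming the mlpack 3.x API:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/neighbor_search/neighbor_search.hpp>

    using namespace mlpack::neighbor;

    int main()
    {
      arma::mat reference(3, 1000, arma::fill::randu);
      arma::mat query(3, 10, arma::fill::randu);

      KNN knn(reference);  // builds a kd-tree on the reference set

      arma::Mat<size_t> neighbors;  // column i holds query i's neighbor indices
      arma::mat distances;
      knn.Search(query, 5, neighbors, distances);  // k = 5
    }
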
  nn | |
   SparseAutoencoder | A sparse autoencoder is a neural network whose aim is to learn compressed representations of the data, typically for dimensionality reduction, with a constraint on the activity of the neurons in the network |
   SparseAutoencoderFunction | This is a class for the sparse autoencoder objective function |
  pca | |
   ExactSVDPolicy | Implementation of the exact SVD policy |
   PCA | This class implements principal components analysis (PCA) |
   QUICSVDPolicy | Implementation of the QUIC-SVD policy |
   RandomizedBlockKrylovSVDPolicy | Implementation of the randomized block Krylov SVD policy |
   RandomizedSVDPolicy | Implementation of the randomized SVD policy |
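A minimal sketch of the exact-SVD policy in use, assuming the mlpack 3.x API in which PCA is a typedef of PCAType<ExactSVDPolicy>:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/pca/pca.hpp>

    using namespace mlpack::pca;

    int main()
    {
      arma::mat data(10, 500, arma::fill::randu);

      PCA pca;  // PCAType<ExactSVDPolicy>
      arma::mat transformed;
      pca.Apply(data, transformed);  // full change of basis

      // In-place dimensionality reduction; returns the variance retained.
      const double varRetained = pca.Apply(data, 3);
    }
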
  perceptron | |
   Perceptron | This class implements a simple perceptron (i.e., a single layer neural network) |
   RandomInitialization | This class is used to initialize weights for the weightVectors matrix in a random manner |
   SimpleWeightUpdate | The simple weight update rule used by the perceptron |
   ZeroInitialization | This class is used to initialize the matrix weightVectors to zero |
  radical | |
   Radical | An implementation of RADICAL, an algorithm for independent component analysis (ICA) |
  range | Range-search routines |
   BiSearchVisitor | BiSearchVisitor executes a bichromatic range search on the given RSType |
   DeleteVisitor | DeleteVisitor deletes the given RSType instance |
   MonoSearchVisitor | MonoSearchVisitor executes a monochromatic range search on the given RSType |
   NaiveVisitor | NaiveVisitor exposes the Naive() method of the given RSType |
   RangeSearch | The RangeSearch class is a template class for performing range searches |
   RangeSearchRules | The RangeSearchRules class is a template helper class used by RangeSearch class when performing range searches |
   RangeSearchStat | Statistic class for RangeSearch, to be set to the StatisticType of the tree type that range search is being performed with |
   ReferenceSetVisitor | ReferenceSetVisitor exposes the referenceSet of the given RSType |
   RSModel | The RSModel class provides an easy way to serialize a range search model, abstracting away the type of tree used |
   SingleModeVisitor | SingleModeVisitor exposes the SingleMode() method of the given RSType |
   TrainVisitor | TrainVisitor sets the reference set to a new reference set on the given RSType |
  regression | Regression methods |
   LARS | An implementation of LARS, a stage-wise homotopy-based algorithm for l1-regularized linear regression (LASSO) and l1+l2 regularized linear regression (Elastic Net) |
   LinearRegression | A simple linear regression algorithm using ordinary least squares |
   LogisticRegression | The LogisticRegression class implements an L2-regularized logistic regression model, and supports training with multiple optimizers and classification |
   LogisticRegressionFunction | The log-likelihood function for the logistic regression objective function |
   SoftmaxRegression | Softmax Regression is a classifier which can be used for classification when the data available can take two or more class values |
   SoftmaxRegressionFunction | The objective function for softmax regression |
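A minimal sketch of ordinary least squares with LinearRegression, assuming the mlpack 3.x API (one data point per column, responses in a row vector):

    #include <mlpack/core.hpp>
    #include <mlpack/methods/linear_regression/linear_regression.hpp>

    using namespace mlpack::regression;

    int main()
    {
      arma::mat data(3, 100, arma::fill::randu);
      arma::rowvec responses = 2.0 * data.row(0) + 1.0;  // synthetic targets

      LinearRegression lr(data, responses);  // fits via OLS, with intercept

      arma::rowvec predictions;
      lr.Predict(data, predictions);
    }
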
  rl | |
   Acrobot | Implementation of the Acrobot task |
    State | |
   AggregatedPolicy | Policy that combines several behavior policies, sampling among them with given probabilities |
   AsyncLearning | Wrapper of various asynchronous learning algorithms, e.g. asynchronous one-step Q-learning, one-step Sarsa, and n-step Q-learning |
   CartPole | Implementation of the Cart Pole task |
    State | Implementation of the state of Cart Pole |
   ContinuousMountainCar | Implementation of the Continuous Mountain Car task |
    Action | Implementation of the action of Continuous Mountain Car |
    State | Implementation of the state of Continuous Mountain Car |
   GreedyPolicy | Implementation of the epsilon-greedy policy |
   MountainCar | Implementation of the Mountain Car task |
    State | Implementation of the state of Mountain Car |
   NStepQLearningWorker | Forward declaration of NStepQLearningWorker |
   OneStepQLearningWorker | Forward declaration of OneStepQLearningWorker |
   OneStepSarsaWorker | Forward declaration of OneStepSarsaWorker |
   Pendulum | Implementation of the Pendulum task |
    Action | Implementation of the action of Pendulum |
    State | Implementation of the state of Pendulum |
   QLearning | Implementation of various Q-Learning algorithms, such as DQN, double DQN |
   RandomReplay | Implementation of random experience replay |
   RewardClipping | Interface for clipping the reward to a value between the specified minimum and maximum (implemented by clamping the reward to that range) |
   TrainingConfig | Configuration of training hyperparameters for reinforcement learning agents |
  sfinae | |
   MethodFormDetector | |
   MethodFormDetector< Class, MethodForm, 0 > | |
   MethodFormDetector< Class, MethodForm, 1 > | |
   MethodFormDetector< Class, MethodForm, 2 > | |
   MethodFormDetector< Class, MethodForm, 3 > | |
   MethodFormDetector< Class, MethodForm, 4 > | |
   MethodFormDetector< Class, MethodForm, 5 > | |
   MethodFormDetector< Class, MethodForm, 6 > | |
   MethodFormDetector< Class, MethodForm, 7 > | |
   SigCheck | Utility struct for checking signatures |
  sparse_coding | |
   DataDependentRandomInitializer | A data-dependent random dictionary initializer for SparseCoding |
   NothingInitializer | A DictionaryInitializer for SparseCoding which does not initialize anything; it is useful for when the dictionary is already known and will be set with SparseCoding::Dictionary() |
   RandomInitializer | A DictionaryInitializer for use with the SparseCoding class |
   SparseCoding | An implementation of Sparse Coding with Dictionary Learning that achieves sparsity via an l1-norm regularizer on the codes (LASSO) or an (l1+l2)-norm regularizer on the codes (the Elastic Net) |
  svd | |
   BiasSVD | Bias SVD is an improvement on Regularized SVD, which is a matrix factorization technique |
   BiasSVDFunction | This class contains methods which are used to calculate the cost of BiasSVD's objective function, to calculate gradient of parameters with respect to the objective function, etc |
   QUIC_SVD | QUIC-SVD is a matrix factorization technique, which operates in a subspace such that A's approximation in that subspace has minimum error (A being the data matrix) |
   RandomizedBlockKrylovSVD | Randomized block Krylov SVD is a matrix factorization that is based on randomized matrix approximation techniques, developed in "Randomized Block Krylov Methods for Stronger and Faster Approximate Singular Value Decomposition" |
   RandomizedSVD | Randomized SVD is a matrix factorization that is based on randomized matrix approximation techniques, developed in "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions" |
   RegularizedSVD | Regularized SVD is a matrix factorization technique that seeks to reduce the error on the training set, that is on the examples for which the ratings have been provided by the users |
   RegularizedSVDFunction | The objective function for Regularized SVD; the data is stored in a matrix of type MatType, so that this class can be used with both dense and sparse matrix types |
   SVDPlusPlus | SVD++ is a matrix decomposition technique used in collaborative filtering |
   SVDPlusPlusFunction | This class contains methods which are used to calculate the cost of SVD++'s objective function, to calculate gradient of parameters with respect to the objective function, etc |
  svm | |
   LinearSVM | The LinearSVM class implements an L2-regularized support vector machine model, and supports training with multiple optimizers and classification |
   LinearSVMFunction | The hinge loss function for the linear SVM objective function |
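A minimal sketch of training the L2-regularized linear SVM, assuming the mlpack 3.x LinearSVM API:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/linear_svm/linear_svm.hpp>

    using namespace mlpack::svm;

    int main()
    {
      arma::mat data(5, 200, arma::fill::randu);
      arma::Row<size_t> labels(200);
      for (size_t i = 0; i < 200; ++i)
        labels[i] = (data(0, i) > 0.5);

      LinearSVM<> svm(data, labels, 2, 0.01);  // 2 classes, lambda = 0.01

      arma::Row<size_t> predictions;
      svm.Classify(data, predictions);
    }
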
  tree | Trees and tree-building procedures |
   enumerate | |
   split | |
   AllCategoricalSplit | The AllCategoricalSplit is a splitting function that will split categorical features into many children: one child for each category |
    AuxiliarySplitInfo | |
   AllDimensionSelect | This dimension selection policy allows any dimension to be selected for splitting |
   AxisParallelProjVector | AxisParallelProjVector defines an axis-parallel projection vector |
   BestBinaryNumericSplit | The BestBinaryNumericSplit is a splitting function for decision trees that will exhaustively search a numeric dimension for the best binary split |
    AuxiliarySplitInfo | |
   BinaryNumericSplit | The BinaryNumericSplit class implements the numeric feature splitting strategy devised by Gama, Rocha, and Medas in the following paper: |
   BinaryNumericSplitInfo | |
   BinarySpaceTree | A binary space partitioning tree, such as a KD-tree or a ball tree |
    BreadthFirstDualTreeTraverser | |
    DualTreeTraverser | A dual-tree traverser for binary space trees; see dual_tree_traverser.hpp |
    SingleTreeTraverser | A single-tree traverser for binary space trees; see single_tree_traverser.hpp for implementation |
   CategoricalSplitInfo | |
   CompareCosineNode | |
   CosineTree | The cosine tree data structure, used by the QUIC-SVD algorithm |
   CoverTree | A cover tree is a tree specifically designed to speed up nearest-neighbor computation in high-dimensional spaces |
    DualTreeTraverser | A dual-tree cover tree traverser; see dual_tree_traverser.hpp |
    SingleTreeTraverser | A single-tree cover tree traverser; see single_tree_traverser.hpp for implementation |
   DecisionTree | This class implements a generic decision tree learner |
   DiscreteHilbertValue | The DiscreteHilbertValue class stores Hilbert values for all of the points in a RectangleTree node, and calculates Hilbert values for new points |
   EmptyStatistic | Empty statistic if you are not interested in storing statistics in your tree |
   ExampleTree | This is not an actual space tree but instead an example tree that exists to show and document all the functions that mlpack trees must implement |
   FirstPointIsRoot | This class is meant to be used as a choice for the policy class RootPointPolicy of the CoverTree class |
   GiniGain | The Gini gain, a measure of set purity usable as a fitness function (FitnessFunction) for decision trees |
   GiniImpurity | The Gini impurity, a fitness function used by Hoeffding trees |
   GreedySingleTreeTraverser | |
   HilbertRTreeAuxiliaryInformation | |
   HilbertRTreeDescentHeuristic | This class chooses the best child of a node in a Hilbert R tree when inserting a new point |
   HilbertRTreeSplit | The splitting procedure for the Hilbert R tree |
   HoeffdingCategoricalSplit | This is the standard Hoeffding-bound categorical feature proposed in the paper below: |
   HoeffdingNumericSplit | The HoeffdingNumericSplit class implements the numeric feature splitting strategy alluded to by Domingos and Hulten in the following paper: |
   HoeffdingTree | The HoeffdingTree object represents all of the necessary information for a Hoeffding-bound-based decision tree |
   HoeffdingTreeModel | This class is a serializable Hoeffding tree model that can hold four different types of Hoeffding trees |
   HyperplaneBase | HyperplaneBase defines a splitting hyperplane based on a projection vector and projection value |
   InformationGain | The standard information gain criterion, used for calculating gain in decision trees |
   IsSpillTree | |
   IsSpillTree< tree::SpillTree< MetricType, StatisticType, MatType, HyperplaneType, SplitType > > | |
   MeanSpaceSplit | |
   MeanSplit | A binary space partitioning tree node is split into its left and right children using the mean of the point values in the widest dimension |
    SplitInfo | Information about the partition |
   MidpointSpaceSplit | |
   MidpointSplit | A binary space partitioning tree node is split into its left and right children using the midpoint of the bound in the widest dimension |
    SplitInfo | A struct that contains information about the split |
   MinimalCoverageSweep | The MinimalCoverageSweep class finds a partition along which we can split a node according to the coverage of two resulting nodes |
    SweepCost | A struct that provides the type of the sweep cost |
   MinimalSplitsNumberSweep | The MinimalSplitsNumberSweep class finds a partition along which we can split a node according to the number of required splits of the node |
    SweepCost | A struct that provides the type of the sweep cost |
   MultipleRandomDimensionSelect | This dimension selection policy allows the selection from a few random dimensions |
   NoAuxiliaryInformation | |
   NumericSplitInfo | |
   Octree | A generic octree space-partitioning tree, where each node is split into up to 2^d children |
    DualTreeTraverser | A dual-tree traverser; see dual_tree_traverser.hpp |
    SingleTreeTraverser | A single-tree traverser; see single_tree_traverser.hpp |
   ProjVector | ProjVector defines a general projection vector (not necessarily axis-parallel) |
   QueueFrame | |
   RandomDimensionSelect | This dimension selection policy only selects one single random dimension |
   RandomForest | An implementation of a random forest of decision trees |
   RectangleTree | A rectangle-type tree, such as an R-tree or X-tree |
    DualTreeTraverser | A dual tree traverser for rectangle type trees |
    SingleTreeTraverser | A single traverser for rectangle type trees |
   RPlusPlusTreeAuxiliaryInformation | |
   RPlusPlusTreeDescentHeuristic | |
   RPlusPlusTreeSplitPolicy | The RPlusPlusTreeSplitPolicy helps to determine the subtree into which we should insert a child of an intermediate node that is being split |
   RPlusTreeDescentHeuristic | |
   RPlusTreeSplit | The RPlusTreeSplit class performs the split process of a node on overflow |
   RPlusTreeSplitPolicy | The RPlusTreeSplitPolicy helps to determine the subtree into which we should insert a child of an intermediate node that is being split |
   RPTreeMaxSplit | This class splits a node by a random hyperplane |
    SplitInfo | Information about the partition |
   RPTreeMeanSplit | This class splits a binary space tree node using the mean-split rule for random projection trees |
    SplitInfo | Information about the partition |
   RStarTreeDescentHeuristic | When descending a RectangleTree to insert a point, we need to have a way to choose a child node when the point isn't enclosed by any of them |
   RStarTreeSplit | Performs the R*-tree split when a rectangle tree node overflows (new points are inserted at the bottom of the tree) |
   RTreeDescentHeuristic | When descending a RectangleTree to insert a point, we need to have a way to choose a child node when the point isn't enclosed by any of them |
   RTreeSplit | Performs the original R-tree split when a rectangle tree node overflows (new points are inserted at the bottom of the tree) |
   SpaceSplit | |
   SpillTree | A hybrid spill tree is a variant of binary space trees in which the children of a node can "spill over" each other, and contain shared datapoints |
    SpillDualTreeTraverser | A generic dual-tree traverser for hybrid spill trees; see spill_dual_tree_traverser.hpp for implementation |
    SpillSingleTreeTraverser | A generic single-tree traverser for hybrid spill trees; see spill_single_tree_traverser.hpp for implementation |
   TraversalInfo | The TraversalInfo class holds traversal information which is used in dual-tree (and single-tree) traversals |
   TreeTraits | The TreeTraits class provides compile-time information on the characteristics of a given tree type |
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::BallBound, SplitType > > | This is a specialization of the TreeTraits class to the BallTree tree type |
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::CellBound, SplitType > > | This is a specialization of the TreeTraits class to the UBTree tree type |
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::HollowBallBound, SplitType > > | This is a specialization of the TreeTraits class to an arbitrary tree with HollowBallBound (currently only the vantage point tree is supported) |
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, RPTreeMaxSplit > > | This is a specialization of the TreeTraits class to the max-split random projection tree |
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, RPTreeMeanSplit > > | This is a specialization of the TreeTraits class to the mean-split random projection tree |
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, SplitType > > | This is a specialization of the TreeTraits class to the BinarySpaceTree tree type |
   TreeTraits< CoverTree< MetricType, StatisticType, MatType, RootPointPolicy > > | The specialization of the TreeTraits class for the CoverTree tree type |
   TreeTraits< Octree< MetricType, StatisticType, MatType > > | This is a specialization of the TreeTraits class to the Octree tree type |
   TreeTraits< RectangleTree< MetricType, StatisticType, MatType, RPlusTreeSplit< SplitPolicyType, SweepType >, DescentType, AuxiliaryInformationType > > | Since the R+/R++ tree cannot have overlapping children, we should define traits for the R+/R++ tree |
   TreeTraits< RectangleTree< MetricType, StatisticType, MatType, SplitType, DescentType, AuxiliaryInformationType > > | This is a specialization of the TreeTraits class to the RectangleTree tree type |
   TreeTraits< SpillTree< MetricType, StatisticType, MatType, HyperplaneType, SplitType > > | This is a specialization of the TreeTraits class to the SpillTree tree type |
   UBTreeSplit | Split a node into two parts according to the median address of points contained in the node |
   VantagePointSplit | The class splits a binary space partitioning tree node according to the median distance to the vantage point |
    SplitInfo | A struct that contains information about the split |
   XTreeAuxiliaryInformation | The XTreeAuxiliaryInformation class provides information specific to X trees for each node in a RectangleTree |
    SplitHistoryStruct | The X tree requires that the tree records its "split history" |
   XTreeSplit | Performs the X-tree split (which may create supernodes) when a rectangle tree node overflows |
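A minimal sketch of building one of these trees directly, assuming the mlpack 3.x KDTree alias of BinarySpaceTree (HRectBound plus MidpointSplit); note that construction permutes a copy of the dataset:

    #include <mlpack/core.hpp>
    #include <mlpack/core/tree/binary_space_tree.hpp>

    using namespace mlpack;

    int main()
    {
      arma::mat data(3, 1000, arma::fill::randu);

      tree::KDTree<metric::EuclideanDistance, tree::EmptyStatistic, arma::mat>
          kdtree(data);

      const size_t children = kdtree.NumChildren();  // 0 or 2 for binary trees
    }
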
  util | |
   IsStdVector | Metaprogramming structure for vector detection |
   IsStdVector< std::vector< T, A > > | Metaprogramming structure for vector detection |
   NullOutStream | Used for Log::Debug when not compiled with debugging symbols |
   ParamData | This structure holds all of the information about a single parameter, including its value (which is set when ParseCommandLine() is called) |
   PrefixedOutStream | Allows us to output to an ostream with a prefix at the beginning of each line, in the same way we would output to cout or cerr |
   ProgramDoc | A static object whose constructor registers program documentation with the CLI class |
  Backtrace | Provides a backtrace |
  CLI | Parses the command line for parameters and holds user-specified parameters |
  Log | Provides a convenient way to give formatted output |
  Timer | The timer class provides a way for mlpack methods to be timed |
  Timers | |
 std | |
 InitHMMModel | |
 IsVector | If value == true, then VecType is some sort of Armadillo vector or subview |
 IsVector< arma::Col< eT > > | |
 IsVector< arma::Row< eT > > | |
 IsVector< arma::SpCol< eT > > | |
 IsVector< arma::SpRow< eT > > | |
 IsVector< arma::SpSubview< eT > > | |
 IsVector< arma::subview_col< eT > > | |
 IsVector< arma::subview_row< eT > > | |
 TrainHMMModel | |