 boost | Set the serialization versions of the FFN and RNN classes |
  serialization | |
   version< mlpack::ann::FFN< OutputLayerType, InitializationRuleType, CustomLayer...> > | |
   version< mlpack::ann::RNN< OutputLayerType, InitializationRuleType, CustomLayer...> > | |
 mlpack | |
  adaboost | |
   AdaBoost | The AdaBoost class |
   AdaBoostModel | The model to save to disk |
  amf | Alternating Matrix Factorization |
   AMF | This class implements AMF (alternating matrix factorization) on the given matrix V |
   AverageInitialization | This initialization rule initializes the matrices W and H to the square root of the average of V, perturbed with uniform noise |
   CompleteIncrementalTermination | This class acts as a wrapper for basic termination policies to be used by SVDCompleteIncrementalLearning |
   GivenInitialization | This initialization rule for AMF simply fills the W and H matrices with the matrices given to the constructor of this object |
   IncompleteIncrementalTermination | This class acts as a wrapper for basic termination policies to be used by SVDIncompleteIncrementalLearning |
   MaxIterationTermination | This termination policy only terminates when the maximum number of iterations has been reached |
   NMFALSUpdate | This class implements a method titled 'Alternating Least Squares' described in the following paper: |
   NMFMultiplicativeDistanceUpdate | The multiplicative distance update rules for matrices W and H |
   NMFMultiplicativeDivergenceUpdate | This follows a method described in the paper 'Algorithms for Non-negative Matrix Factorization' |
   RandomAcolInitialization | This class initializes the W matrix of the AMF algorithm by averaging p randomly chosen columns of V |
   RandomInitialization | This initialization rule for AMF simply fills the W and H matrices with uniform random noise in [0, 1] |
   SimpleResidueTermination | This class implements a simple residue-based termination policy |
   SimpleToleranceTermination | This class implements residue tolerance termination policy |
   SVDBatchLearning | This class implements SVD batch learning with momentum |
   SVDCompleteIncrementalLearning | This class computes SVD using complete incremental batch learning, as described in the following paper: |
   SVDCompleteIncrementalLearning< arma::sp_mat > | TODO: Merge this template specialization for sparse matrices using the common row_col_iterator |
   SVDIncompleteIncrementalLearning | This class computes SVD using incomplete incremental batch learning, as described in the following paper: |
   ValidationRMSETermination | This class implements validation termination policy based on RMSE index |
  ann | Artificial Neural Network |
   augmented | |
    scorers | |
    tasks | |
     AddTask | Generator of instances of the binary addition task |
     CopyTask | Generator of instances of the binary sequence copy task |
     SortTask | Generator of instances of the sequence sort task |
   Add | Implementation of the Add module class |
   AddMerge | Implementation of the AddMerge module class |
   AddVisitor | AddVisitor exposes the Add() method of the given module |
   AlphaDropout | The alpha-dropout layer is a regularizer that randomly, with probability 'ratio', sets input values to alphaDash |
   AtrousConvolution | Implementation of the Atrous Convolution class |
   BackwardVisitor | BackwardVisitor executes the Backward() function given the input, error and delta parameter |
   BaseLayer | Implementation of the base layer |
   BatchNorm | Declaration of the Batch Normalization layer class |
   BilinearInterpolation | Definition and Implementation of the Bilinear Interpolation Layer |
   Concat | Implementation of the Concat class |
   ConcatPerformance | Implementation of the concat performance class |
   Constant | Implementation of the constant layer |
   ConstInitialization | This class is used to initialize weight matrix with constant values |
   Convolution | Implementation of the Convolution class |
   CopyVisitor | This visitor supports the copy constructor of a neural network module |
   CrossEntropyError | The cross-entropy performance function measures the network's performance according to the cross-entropy between the input and target distributions |
   DeleteVisitor | DeleteVisitor executes the destructor of the instantiated object |
   DeltaVisitor | DeltaVisitor exposes the delta parameter of the given module |
   DeterministicSetVisitor | DeterministicSetVisitor sets the deterministic parameter given the deterministic value |
   DropConnect | The DropConnect layer is a regularizer that randomly with probability ratio sets the connection values to zero and scales the remaining elements by factor 1 /(1 - ratio) |
   Dropout | The dropout layer is a regularizer that randomly, with probability 'ratio', sets input values to zero and scales the remaining elements by the factor 1 / (1 - ratio) during training (rather than scaling at test time), so as to keep the expected sum the same |
   ELU | The ELU activation function, defined by f(x) = x for x > 0 and f(x) = alpha * (e^x - 1) for x <= 0 |
   FastLSTM | An implementation of a faster version of the LSTM network layer |
   FFN | Implementation of a standard feed forward network |
   FFTConvolution | Computes the two-dimensional convolution through the fast Fourier transform (FFT) |
   FlexibleReLU | The FlexibleReLU activation function, defined by |
   ForwardVisitor | ForwardVisitor executes the Forward() function given the input and output parameter |
   FullConvolution | |
   GaussianInitialization | This class is used to initialize the weight matrix with a Gaussian distribution |
   Glimpse | The glimpse layer returns a retina-like representation (down-scaled cropped images) of increasing scale around a given location in a given image |
   GlorotInitializationType | This class is used to initialize the weight matrix with the Glorot Initialization method |
   GradientSetVisitor | GradientSetVisitor updates the gradient parameter given the gradient set |
   GradientUpdateVisitor | GradientUpdateVisitor updates the gradient parameter given the gradient set |
   GradientVisitor | GradientVisitor executes the Gradient() method of the given module using the input and delta parameters |
   GradientZeroVisitor | |
   GRU | An implementation of a GRU network layer |
   HardTanH | The Hard Tanh activation function, defined by f(x) = x for -1 < x < 1, f(x) = 1 for x >= 1, and f(x) = -1 for x <= -1 |
   HeInitialization | This class is used to initialize the weight matrix with the He initialization rule given by He et al. |
   IdentityFunction | The identity function, defined by f(x) = x |
   InitTraits | This is a template class that can provide information about various initialization methods |
   InitTraits< KathirvalavakumarSubavathiInitialization > | Initialization traits of the Kathirvalavakumar-Subavathi initialization rule |
   InitTraits< NguyenWidrowInitialization > | Initialization traits of the Nguyen-Widrow initialization rule |
   Join | Implementation of the Join module class |
   KathirvalavakumarSubavathiInitialization | This class is used to initialize the weight matrix with the method proposed by Kathirvalavakumar and Subavathi |
   KLDivergence | The Kullback–Leibler divergence is often used for continuous distributions (direct regression) |
   LayerNorm | Declaration of the Layer Normalization class |
   LayerTraits | This is a template class that can provide information about various layers |
   LeakyReLU | The LeakyReLU activation function, defined by f(x) = max(x, alpha * x) for a small, fixed alpha |
   LecunNormalInitialization | This class is used to initialize weight matrix with the Lecun Normalization initialization rule |
   Linear | Implementation of the Linear layer class |
   LinearNoBias | Implementation of the LinearNoBias class |
   LoadOutputParameterVisitor | LoadOutputParameterVisitor restores the output parameter using the given parameter set |
   LogisticFunction | The logistic function, defined by f(x) = 1 / (1 + e^{-x}) |
   LogSoftMax | Implementation of the log softmax layer |
   Lookup | Implementation of the Lookup class |
   LSTM | An implementation of an LSTM network layer |
   MaxPooling | Implementation of the MaxPooling layer |
   MaxPoolingRule | |
   MeanPooling | Implementation of the MeanPooling |
   MeanPoolingRule | |
   MeanSquaredError | The mean squared error performance function measures the network's performance according to the mean of squared errors |
   MultiplyConstant | Implementation of the multiply constant layer |
   MultiplyMerge | Implementation of the MultiplyMerge module class |
   NaiveConvolution | Computes the two-dimensional convolution |
   NegativeLogLikelihood | Implementation of the negative log likelihood layer |
   NetworkInitialization | This class is used to initialize the network with the given initialization rule |
   NguyenWidrowInitialization | This class is used to initialize the weight matrix with the Nguyen-Widrow method |
   OivsInitialization | This class is used to initialize the weight matrix with the OIVS method |
   OrthogonalInitialization | This class is used to initialize the weight matrix with the orthogonal matrix initialization |
   OutputHeightVisitor | OutputHeightVisitor exposes the OutputHeight() method of the given module |
   OutputParameterVisitor | OutputParameterVisitor exposes the output parameter of the given module |
   OutputWidthVisitor | OutputWidthVisitor exposes the OutputWidth() method of the given module |
   ParametersSetVisitor | ParametersSetVisitor updates the parameter set using the given matrix |
   ParametersVisitor | ParametersVisitor exposes the parameter set of the given module and stores it into the given matrix |
   PReLU | The PReLU activation function, defined by f(x) = max(x, alpha * x), where alpha is trainable |
   RandomInitialization | This class is used to randomly initialize the weight matrix |
   RectifierFunction | The rectifier function, defined by f(x) = max(0, x) |
   Recurrent | Implementation of the Recurrent layer class |
   RecurrentAttention | This class implements the Recurrent Model for Visual Attention, using a variety of possible layer implementations |
   ReinforceNormal | Implementation of the reinforce normal layer |
   ResetCellVisitor | ResetCellVisitor executes the ResetCell() function |
   ResetVisitor | ResetVisitor executes the Reset() function |
   RewardSetVisitor | RewardSetVisitor sets the reward parameter given the reward value |
   RNN | Implementation of a standard recurrent neural network container |
   SaveOutputParameterVisitor | SaveOutputParameterVisitor saves the output parameter into the given parameter set |
   Select | The select module selects the specified column from a given input matrix |
   Sequential | Implementation of the Sequential class |
   SetInputHeightVisitor | SetInputHeightVisitor updates the input height parameter with the given input height |
   SetInputWidthVisitor | SetInputWidthVisitor updates the input width parameter with the given input width |
   SigmoidCrossEntropyError | The SigmoidCrossEntropyError performance function measures the network's performance according to the cross-entropy function between the input and target distributions |
   SoftplusFunction | The softplus function, defined by f(x) = ln(1 + e^x) |
   SoftsignFunction | The softsign function, defined by f(x) = x / (1 + abs(x)) |
   SVDConvolution | Computes the two-dimensional convolution using singular value decomposition |
   SwishFunction | The swish function, defined by f(x) = x * sigmoid(x) = x / (1 + e^{-x}) |
   TanhFunction | The tanh function, defined by f(x) = tanh(x) |
   TransposedConvolution | Implementation of the Transposed Convolution class |
   ValidConvolution | |
   VRClassReward | Implementation of the variance reduced classification reinforcement layer |
   WeightSetVisitor | WeightSetVisitor updates the module parameters given the parameter set |
   WeightSizeVisitor | WeightSizeVisitor returns the number of weights of the given module |
  bindings | |
   cli | |
    CLIOption | A static object whose constructor registers a parameter with the CLI class |
    ParameterType | Utility struct to return the type that boost::program_options should accept for a given input type |
    ParameterType< arma::Col< eT > > | For vector types, boost::program_options will accept a std::string, not an arma::Col<eT> (since it is not clear how to specify a vector on the command-line) |
    ParameterType< arma::Mat< eT > > | For matrix types, boost::program_options will accept a std::string, not an arma::mat (since it is not clear how to specify a matrix on the command-line) |
    ParameterType< arma::Row< eT > > | For row vector types, boost::program_options will accept a std::string, not an arma::Row<eT> (since it is not clear how to specify a vector on the command-line) |
    ParameterType< std::tuple< mlpack::data::DatasetMapper< PolicyType, std::string >, arma::Mat< eT > > > | For matrix+dataset info types, we should accept a std::string |
    ParameterTypeDeducer | |
    ParameterTypeDeducer< true, T > | |
    ProgramDoc | A static object whose constructor registers program documentation with the CLI class |
   python | |
    PyOption | The Python option class |
   tests | |
    ProgramDoc | A static object whose constructor registers program documentation with the CLI class |
    TestOption | A static object whose constructor registers a parameter with the CLI class |
  bound | |
   addr | |
   meta | Metaprogramming utilities |
    IsLMetric | Utility struct where Value is true if and only if the argument is of type LMetric |
    IsLMetric< metric::LMetric< Power, TakeRoot > > | Specialization for IsLMetric when the argument is of type LMetric |
   BallBound | Ball bound encloses a set of points at a specific distance (radius) from a specific point (center) |
   BoundTraits | A class to obtain compile-time traits about BoundType classes |
   BoundTraits< BallBound< MetricType, VecType > > | A specialization of BoundTraits for this bound type |
   BoundTraits< CellBound< MetricType, ElemType > > | |
   BoundTraits< HollowBallBound< MetricType, ElemType > > | A specialization of BoundTraits for this bound type |
   BoundTraits< HRectBound< MetricType, ElemType > > | |
   CellBound | The CellBound class describes a bound that consists of a number of hyperrectangles |
   HollowBallBound | Hollow ball bound encloses a set of points at a specific distance (radius) from a specific point (center) except points at a specific distance from another point (the center of the hole) |
   HRectBound | Hyper-rectangle bound for an L-metric |
  cf | Collaborative filtering |
   BatchSVDPolicy | Implementation of the Batch SVD policy to act as a wrapper when accessing Batch SVD from within CFType |
   CFType | This class implements Collaborative Filtering (CF) |
   DummyClass | This class acts as a dummy class for passing as template parameter |
   NMFPolicy | Implementation of the NMF policy to act as a wrapper when accessing NMF from within CFType |
   RandomizedSVDPolicy | Implementation of the Randomized SVD policy to act as a wrapper when accessing Randomized SVD from within CFType |
   RegSVDPolicy | Implementation of the Regularized SVD policy to act as a wrapper when accessing Regularized SVD from within CFType |
   SVDCompletePolicy | Implementation of the SVD complete incremental policy to act as a wrapper when accessing SVD complete decomposition from within CFType |
   SVDIncompletePolicy | Implementation of the SVD incomplete incremental to act as a wrapper when accessing SVD incomplete incremental from within CFType |
   SVDWrapper | This class acts as the wrapper for all SVD factorizers which are incompatible with CF module |
  cv | |
   Accuracy | The Accuracy is a metric of performance for classification algorithms that is equal to the proportion of correctly labeled test items among all test items |
   CVBase | An auxiliary class for cross-validation |
   F1 | F1 is a metric of performance for classification algorithms that for binary classification is equal to 2 * precision * recall / (precision + recall) |
   KFoldCV | The class KFoldCV implements k-fold cross-validation for regression and classification algorithms |
   MetaInfoExtractor | MetaInfoExtractor is a tool for extracting meta information about a given machine learning algorithm |
   MSE | The MeanSquaredError is a metric of performance for regression algorithms that is equal to the mean squared error between predicted values and ground truth (correct) values for given test items |
   NotFoundMethodForm | |
   Precision | Precision is a metric of performance for classification algorithms that for binary classification is equal to tp / (tp + fp), where tp and fp are the numbers of true positives and false positives, respectively |
   Recall | Recall is a metric of performance for classification algorithms that for binary classification is equal to tp / (tp + fn), where tp and fn are the numbers of true positives and false negatives, respectively |
   SelectMethodForm | A type function that selects the right method form |
   SelectMethodForm< MLAlgorithm > | |
    From | |
   SelectMethodForm< MLAlgorithm, HasMethodForm, HMFs...> | |
    From | |
   SimpleCV | SimpleCV splits data into two sets - a training set and a validation set - then runs training on the training set and evaluates performance on the validation set |
   TrainForm | A wrapper struct for holding a Train form |
   TrainForm< MT, PT, void, false, false > | |
   TrainForm< MT, PT, void, false, true > | |
   TrainForm< MT, PT, void, true, false > | |
   TrainForm< MT, PT, void, true, true > | |
   TrainForm< MT, PT, WT, false, false > | |
   TrainForm< MT, PT, WT, false, true > | |
   TrainForm< MT, PT, WT, true, false > | |
   TrainForm< MT, PT, WT, true, true > | |
   TrainFormBase | |
  data | Functions to load and save matrices and models |
   CustomImputation | A simple custom imputation class |
   DatasetMapper | Auxiliary information for a dataset, including mappings to/from strings (or other types) and the datatype of each dimension |
   HasSerialize | |
    check | |
   HasSerializeFunction | |
   Imputer | Given a dataset of a particular datatype, replace user-specified missing value with a variable dependent on the StrategyType and MapperType |
   IncrementPolicy | IncrementPolicy is used as a helper class for DatasetMapper |
   ListwiseDeletion | A complete-case analysis to remove the values containing mappedValue |
   LoadCSV | Load a CSV file. This class uses boost::spirit to implement the parser; refer to http://theboostcpplibraries.com/boost.spirit for a quick review |
   MeanImputation | A simple mean imputation class |
   MedianImputation | A simple median imputation class |
   MissingPolicy | MissingPolicy is used as a helper class for DatasetMapper |
  dbscan | |
   DBSCAN | DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a clustering technique described in the following paper: |
   RandomPointSelection | This class can be used to randomly select the next point to use for DBSCAN |
  decision_stump | |
   DecisionStump | This class implements a decision stump |
  det | Density Estimation Trees |
   DTree | A density estimation tree is similar to both a decision tree and a space partitioning tree (like a kd-tree) |
   PathCacher | This class is responsible for caching the path to each node of the tree |
  distribution | Probability distributions |
   DiscreteDistribution | A discrete distribution where the only observations are discrete observations |
   GammaDistribution | This class represents the Gamma distribution |
   GaussianDistribution | A single multivariate Gaussian distribution |
   LaplaceDistribution | The multivariate Laplace distribution centered at 0 has pdf |
   RegressionDistribution | A class that represents a univariate conditionally Gaussian distribution |
  emst | Euclidean Minimum Spanning Trees |
   DTBRules | |
   DTBStat | A statistic for use with mlpack trees, which stores the upper bound on distance to nearest neighbors and the component which this node belongs to |
   DualTreeBoruvka | Performs the MST calculation using the Dual-Tree Boruvka algorithm, using any type of tree |
   EdgePair | An edge pair is simply two indices and a distance |
   UnionFind | A Union-Find data structure |
  fastmks | Fast max-kernel search |
   FastMKS | An implementation of fast exact max-kernel search |
   FastMKSModel | A utility struct to contain all the possible FastMKS models, for use by the mlpack_fastmks program |
   FastMKSRules | The FastMKSRules class is a template helper class used by FastMKS class when performing exact max-kernel search |
   FastMKSStat | The statistic used in trees with FastMKS |
  gmm | Gaussian Mixture Models |
   DiagonalConstraint | Force a covariance matrix to be diagonal |
   EigenvalueRatioConstraint | Given a vector of eigenvalue ratios, ensure that the covariance matrix always has those eigenvalue ratios |
   EMFit | This class contains methods which can fit a GMM to observations using the EM algorithm |
   GMM | A Gaussian Mixture Model (GMM) |
   NoConstraint | This class enforces no constraint on the covariance matrix |
   PositiveDefiniteConstraint | Given a covariance matrix, force the matrix to be positive definite |
  hmm | Hidden Markov Models |
   HMM | A class that represents a Hidden Markov Model with an arbitrary type of emission distribution |
   HMMModel | A serializable HMM model that also stores the type |
   HMMRegression | A class that represents a Hidden Markov Model Regression (HMMR) |
  hpt | |
   CVFunction | This wrapper serves for adapting the interface of the cross-validation classes to the one that can be utilized by the mlpack optimizers |
   DeduceHyperParameterTypes | A type function for deducing types of hyper-parameters from types of arguments in the Optimize method in HyperParameterTuner |
    ResultHolder | |
   DeduceHyperParameterTypes< PreFixedArg< T >, Args...> | Defining DeduceHyperParameterTypes for the case when not all argument types have been processed, and the next one is the type of an argument that should be fixed |
    ResultHolder | |
   DeduceHyperParameterTypes< T, Args...> | Defining DeduceHyperParameterTypes for the case when not all argument types have been processed, and the next one (T) is a collection type or an arithmetic type |
    IsCollectionType | A type function to check whether Type is a collection type (for that it should define value_type) |
    ResultHolder | |
    ResultHPType | A type function to deduce the result hyper-parameter type for ArgumentType |
    ResultHPType< ArithmeticType, true > | |
    ResultHPType< CollectionType, false > | |
   FixedArg | A struct for storing information about a fixed argument |
   HyperParameterTuner | The class HyperParameterTuner for the given MLAlgorithm utilizes the provided Optimizer to find the values of hyper-parameters that optimize the value of the given Metric |
   IsPreFixedArg | A type function for checking whether the given type is PreFixedArg |
   PreFixedArg | A struct for marking arguments as ones that should be fixed (it can be useful for the Optimize method of HyperParameterTuner) |
   PreFixedArg< T & > | The specialization of the template for references |
  kernel | Kernel functions |
   CosineDistance | The cosine distance (or cosine similarity) |
   EpanechnikovKernel | The Epanechnikov kernel, defined as |
   ExampleKernel | An example kernel function |
   GaussianKernel | The standard Gaussian kernel |
   HyperbolicTangentKernel | Hyperbolic tangent kernel |
   KernelTraits | This is a template class that can provide information about various kernels |
   KernelTraits< CosineDistance > | Kernel traits for the cosine distance |
   KernelTraits< EpanechnikovKernel > | Kernel traits for the Epanechnikov kernel |
   KernelTraits< GaussianKernel > | Kernel traits for the Gaussian kernel |
   KernelTraits< LaplacianKernel > | Kernel traits of the Laplacian kernel |
   KernelTraits< SphericalKernel > | Kernel traits for the spherical kernel |
   KernelTraits< TriangularKernel > | Kernel traits for the triangular kernel |
   KMeansSelection | Implementation of the k-means sampling scheme |
   LaplacianKernel | The standard Laplacian kernel |
   LinearKernel | The simple linear kernel (dot product) |
   NystroemMethod | |
   OrderedSelection | |
   PolynomialKernel | The simple polynomial kernel |
   PSpectrumStringKernel | The p-spectrum string kernel |
   RandomSelection | |
   SphericalKernel | The spherical kernel, which is 1 when the distance between the two argument points is less than or equal to the bandwidth, or 0 otherwise |
   TriangularKernel | The trivially simple triangular kernel, defined by K(x, y) = max(0, 1 - d(x, y) / bandwidth), where d is the distance between x and y |
  kmeans | K-Means clustering |
   AllowEmptyClusters | Policy which allows K-Means to create empty clusters without any error being reported |
   DualTreeKMeans | An algorithm for an exact Lloyd iteration which simply uses dual-tree nearest-neighbor search to find the nearest centroid for each point in the dataset |
   DualTreeKMeansRules | |
   DualTreeKMeansStatistic | |
   ElkanKMeans | |
   HamerlyKMeans | |
   KillEmptyClusters | Policy which allows K-Means to "kill" empty clusters without any error being reported |
   KMeans | This class implements K-Means clustering, using a variety of possible implementations of Lloyd's algorithm |
   MaxVarianceNewCluster | When an empty cluster is detected, this class takes the point furthest from the centroid of the cluster with maximum variance as a new cluster |
   NaiveKMeans | This is an implementation of a single iteration of Lloyd's algorithm for k-means |
   PellegMooreKMeans | An implementation of Pelleg-Moore's 'blacklist' algorithm for k-means clustering |
   PellegMooreKMeansRules | The rules class for the single-tree Pelleg-Moore kd-tree traversal for k-means clustering |
   PellegMooreKMeansStatistic | A statistic for trees which holds the blacklist for Pelleg-Moore k-means clustering (which represents the clusters that cannot possibly own any points in a node) |
   RandomPartition | A very simple partitioner which partitions the data randomly into the number of desired clusters |
   RefinedStart | A refined approach for choosing initial points for k-means clustering |
   SampleInitialization | |
  kpca | |
   KernelPCA | This class performs kernel principal components analysis (Kernel PCA), for a given kernel |
   NaiveKernelRule | |
   NystroemKernelRule | |
  lcc | |
   LocalCoordinateCoding | An implementation of Local Coordinate Coding (LCC), which codes data that approximately lives on a manifold using a variation of l1-norm-regularized sparse coding; in LCC, the penalty on the absolute value of each point's coefficient for each atom is weighted by the squared distance of that point to that atom |
  math | Miscellaneous math routines |
   ColumnsToBlocks | Transform the columns of the given matrix into a block format |
   RangeType | Simple real-valued range |
  matrix_completion | |
   MatrixCompletion | This class implements the popular nuclear norm minimization heuristic for matrix completion problems |
  meanshift | Mean shift clustering |
   MeanShift | This class implements mean shift clustering |
  metric | |
   IPMetric | The inner product metric, IPMetric, takes a given Mercer kernel (KernelType), and when Evaluate() is called, returns the distance between the two points in kernel space: |
   LMetric | The L_p metric for arbitrary integer p, with an option to take the root |
   MahalanobisDistance | The Mahalanobis distance, which is essentially a stretched Euclidean distance |
  mvu | |
   MVU | Meant to provide a good abstraction for users |
  naive_bayes | The Naive Bayes Classifier |
   NaiveBayesClassifier | The simple Naive Bayes classifier |
  nca | Neighborhood Components Analysis |
   NCA | An implementation of Neighborhood Components Analysis, both a linear dimensionality reduction technique and a distance learning technique |
   SoftmaxErrorFunction | The "softmax" stochastic neighbor assignment probability function |
  neighbor | |
   AlphaVisitor | Exposes the Alpha() method of the given RAType |
   BiSearchVisitor | BiSearchVisitor executes a bichromatic neighbor search on the given NSType |
   DeleteVisitor | DeleteVisitor deletes the given NSType instance |
   DrusillaSelect | |
   EpsilonVisitor | EpsilonVisitor exposes the Epsilon method of the given NSType |
   FirstLeafExactVisitor | Exposes the FirstLeafExact() method of the given RAType |
   FurthestNeighborSort | This class implements the necessary methods for the SortPolicy template parameter of the NeighborSearch class |
   LSHSearch | The LSHSearch class; this class builds a hash on the reference set and uses this hash to compute the distance-approximate nearest-neighbors of the given queries |
   MonoSearchVisitor | MonoSearchVisitor executes a monochromatic neighbor search on the given NSType |
   NaiveVisitor | NaiveVisitor exposes the Naive() method of the given RAType |
   NearestNeighborSort | This class implements the necessary methods for the SortPolicy template parameter of the NeighborSearch class |
   NeighborSearch | The NeighborSearch class is a template class for performing distance-based neighbor searches |
   NeighborSearchRules | The NeighborSearchRules class is a template helper class used by NeighborSearch class when performing distance-based neighbor searches |
    CandidateCmp | Compare two candidates based on the distance |
   NeighborSearchStat | Extra data for each node in the tree |
   NSModel | The NSModel class provides an easy way to serialize a model, abstracts away the different types of trees, and also reflects the NeighborSearch API |
   QDAFN | |
   RAModel | The RAModel class provides an abstraction for the RASearch class, abstracting away the TreeType parameter and allowing it to be specified at runtime in this class |
   RAQueryStat | Extra data for each node in the tree |
   RASearch | The RASearch class provides a generic way to perform rank-approximate search via random sampling |
   RASearchRules | The RASearchRules class is a template helper class used by the RASearch class when performing rank-approximate search via random sampling |
   RAUtil | |
   ReferenceSetVisitor | ReferenceSetVisitor exposes the referenceSet of the given NSType |
   SampleAtLeavesVisitor | Exposes the SampleAtLeaves() method of the given RAType |
   SearchModeVisitor | SearchModeVisitor exposes the SearchMode() method of the given NSType |
   SingleModeVisitor | Exposes the SingleMode() method of the given RAType |
   SingleSampleLimitVisitor | Exposes the SingleSampleLimit() method of the given RAType |
   TauVisitor | Exposes the Tau() method of the given RAType |
   TrainVisitor | TrainVisitor sets the reference set to a new reference set on the given NSType |
  nn | |
   SparseAutoencoder | A sparse autoencoder is a neural network whose aim is to learn compressed representations of the data, typically for dimensionality reduction, with a constraint on the activity of the neurons in the network |
   SparseAutoencoderFunction | This is a class for the sparse autoencoder objective function |
  optimization | |
   aux | |
   test | |
    BoothFunction | The Booth function, defined by |
    BukinFunction | The Bukin function, defined by |
    ColvilleFunction | The Colville function, defined by |
    DropWaveFunction | The Drop-Wave function, defined by |
    EasomFunction | The Easom function, defined by |
    EggholderFunction | The Eggholder function, defined by |
    GDTestFunction | Very, very simple test function which is the composite of three other functions |
    GeneralizedRosenbrockFunction | The Generalized Rosenbrock function in n dimensions, defined by f(x) = sum_{i = 1}^{n - 1} f_i(x), where f_i(x) = 100 * (x_i^2 - x_{i + 1})^2 + (1 - x_i)^2, with starting point x_0 = [-1.2, 1, -1.2, 1, ...] |
    MatyasFunction | The Matyas function, defined by |
    McCormickFunction | The McCormick function, defined by |
    RastriginFunction | The Rastrigin function, defined by |
    RosenbrockFunction | The Rosenbrock function, defined by: |
    RosenbrockWoodFunction | The Generalized Rosenbrock function in 4 dimensions with the Wood Function in four dimensions |
    SchwefelFunction | The Schwefel function, defined by |
    SGDTestFunction | Very, very simple test function which is the composite of three other functions |
    SparseTestFunction | |
    SphereFunction | The Sphere function, defined by |
    StyblinskiTangFunction | The Styblinski-Tang function, defined by |
     WoodFunction | The Wood function, defined by f(x) = f1(x) + f2(x) + f3(x) + f4(x) + f5(x) + f6(x), where f1(x) = 100 (x2 - x1^2)^2; f2(x) = (1 - x1)^2; f3(x) = 90 (x4 - x3^2)^2; f4(x) = (1 - x3)^2; f5(x) = 10 (x2 + x4 - 2)^2; f6(x) = (1 / 10) (x2 - x4)^2; with starting point x_0 = [-3, -1, -3, -1]
   traits | |
    CheckDecomposableEvaluate | Check if a suitable decomposable overload of Evaluate() is available |
    CheckDecomposableEvaluateWithGradient | Check if a suitable decomposable overload of EvaluateWithGradient() is available |
    CheckDecomposableGradient | Check if a suitable decomposable overload of Gradient() is available |
    CheckEvaluate | Check if a suitable overload of Evaluate() is available |
    CheckEvaluateConstraint | Check if a suitable overload of EvaluateConstraint() is available |
    CheckEvaluateWithGradient | Check if a suitable overload of EvaluateWithGradient() is available |
    CheckGradient | Check if a suitable overload of Gradient() is available |
    CheckGradientConstraint | Check if a suitable overload of GradientConstraint() is available |
    CheckNumConstraints | Check if a suitable overload of NumConstraints() is available |
    CheckNumFeatures | Check if a suitable overload of NumFeatures() is available |
    CheckNumFunctions | Check if a suitable overload of NumFunctions() is available |
    CheckPartialGradient | Check if a suitable overload of PartialGradient() is available |
    CheckShuffle | Check if a suitable overload of Shuffle() is available |
    CheckSparseGradient | Check if a suitable overload of Gradient() that supports sparse gradients is available |
    HasConstSignatures | Utility struct: sometimes we want to know if we have two functions available, and that at least one of them is const and both of them are not non-const and non-static |
    HasNonConstSignatures | Utility struct: sometimes we want to know if we have two functions available, and that at least one of them is non-const and non-static |
    UnconstructableType | This is a utility type used to provide unusable overloads from each of the mixin classes |
   AdaDelta | AdaDelta is an optimizer that uses two ideas to improve upon the two main drawbacks of the Adagrad method: |
   AdaDeltaUpdate | Implementation of the AdaDelta update policy |
   AdaGrad | AdaGrad is a modified version of stochastic gradient descent which performs larger updates for more sparse parameters and smaller updates for less sparse parameters |
   AdaGradUpdate | Implementation of the AdaGrad update policy |
    AdaMaxUpdate | AdaMax is a variant of Adam, an optimizer that computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients, based on the infinity norm, as given in section 7 of the following paper
   AdamType | Adam is an optimizer that computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients |
    AdamUpdate | Adam is an optimizer that computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients, as given in section 7 of the following paper
    AdaptiveStepsize | Definition of the adaptive stepsize technique, a non-monotonic stepsize scheme that uses curvature estimates to propose new stepsize choices
   AddDecomposableEvaluate | The AddDecomposableEvaluate mixin class will add a decomposable Evaluate() method if a decomposable EvaluateWithGradient() function exists, or nothing otherwise |
   AddDecomposableEvaluate< FunctionType, HasDecomposableEvaluateWithGradient, true > | Reflect the existing Evaluate() |
   AddDecomposableEvaluate< FunctionType, true, false > | If we have a decomposable EvaluateWithGradient() but not a decomposable Evaluate(), add a decomposable Evaluate() method |
   AddDecomposableEvaluateConst | The AddDecomposableEvaluateConst mixin class will add a decomposable const Evaluate() method if a decomposable const EvaluateWithGradient() function exists, or nothing otherwise |
   AddDecomposableEvaluateConst< FunctionType, HasDecomposableEvaluateWithGradient, true > | Reflect the existing Evaluate() |
   AddDecomposableEvaluateConst< FunctionType, true, false > | If we have a decomposable const EvaluateWithGradient() but not a decomposable const Evaluate(), add a decomposable const Evaluate() method |
   AddDecomposableEvaluateStatic | The AddDecomposableEvaluateStatic mixin class will add a decomposable static Evaluate() method if a decomposable static EvaluateWithGradient() function exists, or nothing otherwise |
   AddDecomposableEvaluateStatic< FunctionType, HasDecomposableEvaluateWithGradient, true > | Reflect the existing Evaluate() |
   AddDecomposableEvaluateStatic< FunctionType, true, false > | If we have a decomposable EvaluateWithGradient() but not a decomposable Evaluate(), add a decomposable Evaluate() method |
   AddDecomposableEvaluateWithGradient | The AddDecomposableEvaluateWithGradient mixin class will add a decomposable EvaluateWithGradient() method if a decomposable Evaluate() method and a decomposable Gradient() method exists, or nothing otherwise |
   AddDecomposableEvaluateWithGradient< FunctionType, false, true, true > | If the FunctionType has EvaluateWithGradient() but not Evaluate(), provide that function |
   AddDecomposableEvaluateWithGradient< FunctionType, HasDecomposableEvaluateGradient, true > | Reflect the existing EvaluateWithGradient() |
    AddDecomposableEvaluateWithGradient< FunctionType, true, false > | If we have both a decomposable Evaluate() and a decomposable Gradient() but not a decomposable EvaluateWithGradient(), add a decomposable EvaluateWithGradient() method
   AddDecomposableEvaluateWithGradient< FunctionType, true, false, true > | If the FunctionType has EvaluateWithGradient() but not Gradient(), provide that function |
   AddDecomposableEvaluateWithGradient< FunctionType, true, true, false > | If the FunctionType has Evaluate() and Gradient() but not EvaluateWithGradient(), we will provide the latter |
   AddDecomposableEvaluateWithGradientConst | The AddDecomposableEvaluateWithGradientConst mixin class will add a decomposable const EvaluateWithGradient() method if both a decomposable const Evaluate() and a decomposable const Gradient() function exist, or nothing otherwise |
   AddDecomposableEvaluateWithGradientConst< FunctionType, HasDecomposableEvaluateGradient, true > | Reflect the existing EvaluateWithGradient() |
   AddDecomposableEvaluateWithGradientConst< FunctionType, true, false > | If we have both a decomposable const Evaluate() and a decomposable const Gradient() but not a decomposable const EvaluateWithGradient(), add a decomposable const EvaluateWithGradient() method |
    AddDecomposableEvaluateWithGradientStatic | The AddDecomposableEvaluateWithGradientStatic mixin class will add a decomposable static EvaluateWithGradient() method if both a decomposable static Evaluate() and a decomposable static Gradient() function exist, or nothing otherwise
   AddDecomposableEvaluateWithGradientStatic< FunctionType, HasDecomposableEvaluateGradient, true > | Reflect the existing EvaluateWithGradient() |
    AddDecomposableEvaluateWithGradientStatic< FunctionType, true, false > | If we have a decomposable static Evaluate() and a decomposable static Gradient() but not a decomposable static EvaluateWithGradient(), add a decomposable static EvaluateWithGradient() method
   AddDecomposableGradient | The AddDecomposableGradient mixin class will add a decomposable Gradient() method if a decomposable EvaluateWithGradient() function exists, or nothing otherwise |
   AddDecomposableGradient< FunctionType, HasDecomposableEvaluateWithGradient, true > | Reflect the existing Gradient() |
    AddDecomposableGradient< FunctionType, true, false > | If we have a decomposable EvaluateWithGradient() but not a decomposable Gradient(), add a decomposable Gradient() method
   AddDecomposableGradientConst | The AddDecomposableGradientConst mixin class will add a decomposable const Gradient() method if a decomposable const EvaluateWithGradient() function exists, or nothing otherwise |
   AddDecomposableGradientConst< FunctionType, HasDecomposableEvaluateWithGradient, true > | Reflect the existing Gradient() |
   AddDecomposableGradientConst< FunctionType, true, false > | If we have a decomposable const EvaluateWithGradient() but not a decomposable const Gradient(), add a decomposable const Gradient() method |
    AddDecomposableGradientStatic | The AddDecomposableGradientStatic mixin class will add a decomposable static Gradient() method if a decomposable static EvaluateWithGradient() function exists, or nothing otherwise
   AddDecomposableGradientStatic< FunctionType, HasDecomposableEvaluateWithGradient, true > | Reflect the existing Gradient() |
   AddDecomposableGradientStatic< FunctionType, true, false > | If we have a decomposable EvaluateWithGradient() but not a decomposable Gradient(), add a decomposable Gradient() method |
   AddEvaluate | The AddEvaluate mixin class will provide an Evaluate() method if the given FunctionType has EvaluateWithGradient(), or nothing otherwise |
   AddEvaluate< FunctionType, HasEvaluateWithGradient, true > | Reflect the existing Evaluate() |
   AddEvaluate< FunctionType, true, false > | If we have EvaluateWithGradient() but no existing Evaluate(), add an Evaluate() method |
   AddEvaluateConst | The AddEvaluateConst mixin class will provide a const Evaluate() method if the given FunctionType has EvaluateWithGradient() const, or nothing otherwise |
   AddEvaluateConst< FunctionType, HasEvaluateWithGradient, true > | Reflect the existing Evaluate() |
   AddEvaluateConst< FunctionType, true, false > | If we have EvaluateWithGradient() but no existing Evaluate(), add an Evaluate() without a using directive to make the base Evaluate() accessible |
    AddEvaluateStatic | The AddEvaluateStatic mixin class will provide a static Evaluate() method if the given FunctionType has a static EvaluateWithGradient(), or nothing otherwise
   AddEvaluateStatic< FunctionType, HasEvaluateWithGradient, true > | Reflect the existing Evaluate() |
   AddEvaluateStatic< FunctionType, true, false > | If we have EvaluateWithGradient() but no existing Evaluate(), add an Evaluate() without a using directive to make the base Evaluate() accessible |
   AddEvaluateWithGradient | The AddEvaluateWithGradient mixin class will provide an EvaluateWithGradient() method if the given FunctionType has both Evaluate() and Gradient(), or it will provide nothing otherwise |
   AddEvaluateWithGradient< FunctionType, HasEvaluateGradient, true > | Reflect the existing EvaluateWithGradient() |
   AddEvaluateWithGradient< FunctionType, true, false > | If the FunctionType has Evaluate() and Gradient(), provide EvaluateWithGradient() |
    AddEvaluateWithGradientConst | The AddEvaluateWithGradientConst mixin class will provide an EvaluateWithGradient() const method if the given FunctionType has both Evaluate() const and Gradient() const, or it will provide nothing otherwise
   AddEvaluateWithGradientConst< FunctionType, HasEvaluateGradient, true > | Reflect the existing EvaluateWithGradient() |
   AddEvaluateWithGradientConst< FunctionType, true, false > | If the FunctionType has Evaluate() const and Gradient() const, provide EvaluateWithGradient() const |
   AddEvaluateWithGradientStatic | The AddEvaluateWithGradientStatic mixin class will provide a static EvaluateWithGradient() method if the given FunctionType has both static Evaluate() and static Gradient(), or it will provide nothing otherwise |
   AddEvaluateWithGradientStatic< FunctionType, HasEvaluateGradient, true > | Reflect the existing EvaluateWithGradient() |
   AddEvaluateWithGradientStatic< FunctionType, true, false > | If the FunctionType has static Evaluate() and static Gradient(), provide static EvaluateWithGradient() |
   AddGradient | The AddGradient mixin class will provide a Gradient() method if the given FunctionType has EvaluateWithGradient(), or nothing otherwise |
   AddGradient< FunctionType, HasEvaluateWithGradient, true > | Reflect the existing Gradient() |
    AddGradient< FunctionType, true, false > | If we have EvaluateWithGradient() but no existing Gradient(), add a Gradient() without a using directive to make the base Gradient() accessible
    AddGradientConst | The AddGradientConst mixin class will provide a const Gradient() method if the given FunctionType has EvaluateWithGradient() const, or nothing otherwise
   AddGradientConst< FunctionType, HasEvaluateWithGradient, true > | Reflect the existing Gradient() |
   AddGradientConst< FunctionType, true, false > | If we have EvaluateWithGradient() but no existing Gradient(), add a Gradient() without a using directive to make the base Gradient() accessible |
    AddGradientStatic | The AddGradientStatic mixin class will provide a static Gradient() method if the given FunctionType has static EvaluateWithGradient(), or nothing otherwise
   AddGradientStatic< FunctionType, HasEvaluateWithGradient, true > | Reflect the existing Gradient() |
   AddGradientStatic< FunctionType, true, false > | If we have EvaluateWithGradient() but no existing Gradient(), add a Gradient() without a using directive to make the base Gradient() accessible |
   AMSGradUpdate | AMSGrad is an exponential moving average variant which along with having benefits of optimizers like Adam and RMSProp, also guarantees convergence |
    Atoms | Class to hold the information and operations of current atoms in the solution space
   AugLagrangian | Implements the Augmented Lagrangian method of optimization |
   AugLagrangianFunction | This is a utility class used by AugLagrangian, meant to wrap a LagrangianFunction into a function usable by a simple optimizer like L-BFGS |
   AugLagrangianTestFunction | This function is taken from "Practical Mathematical Optimization" (Snyman), section 5.3.8 ("Application of the Augmented Lagrangian Method") |
   BacktrackingLineSearch | Definition of the backtracking line search algorithm based on the Armijo–Goldstein condition to determine the maximum amount to move along the given search direction |
   BarzilaiBorweinDecay | Barzilai-Borwein decay policy for Stochastic variance reduced gradient (SVRG) |
   BigBatchSGD | Big-batch Stochastic Gradient Descent is a technique for minimizing a function which can be expressed as a sum of other functions |
    CMAES | CMA-ES (Covariance Matrix Adaptation Evolution Strategy) is a stochastic search algorithm
   CNE | Conventional Neural Evolution (CNE) is a class of evolutionary algorithms focused on dealing with fixed topology |
   ConstantStep | Implementation of the ConstantStep stepsize decay policy for parallel SGD |
   ConstrLpBallSolver | LinearConstrSolver for FrankWolfe algorithm |
   ConstrStructGroupSolver | Linear Constrained Solver for FrankWolfe |
   CyclicalDecay | Simulate a new warm-started run/restart once a number of epochs are performed |
    CyclicDescent | Cyclic descent policy for Stochastic Coordinate Descent (SCD)
   ExponentialBackoff | Exponential backoff stepsize reduction policy for parallel SGD |
   ExponentialSchedule | The exponential cooling schedule cools the temperature T at every step according to the equation |
   FrankWolfe | Frank-Wolfe is a technique to minimize a continuously differentiable convex function over a compact convex subset of a vector space |
   FullSelection | |
   FuncSq | Square loss function |
   Function | The Function class is a wrapper class for any FunctionType that will add any possible derived methods |
   GockenbachFunction | This function is taken from M |
   GradientClipping | Interface for wrapping around update policies (e.g., VanillaUpdate) and feeding a clipped gradient to them instead of the normal one |
   GradientDescent | Gradient Descent is a technique to minimize a function |
    GreedyDescent | Greedy descent policy for Stochastic Coordinate Descent (SCD)
   GridSearch | An optimizer that finds the minimum of a given function by iterating through points on a multidimensional grid |
   GroupLpBall | Implementation of Structured Group |
   IQN | IQN is a technique for minimizing a function which can be expressed as a sum of other functions |
   KatyushaType | Katyusha is a direct, primal-only stochastic gradient method which uses a "negative momentum" on top of Nesterov’s momentum |
   L_BFGS | The generic L-BFGS optimizer, which uses a back-tracking line search algorithm to minimize a function |
   LineSearch | Find the minimum of a function along the line between two points |
   LovaszThetaSDP | This function is the Lovasz-Theta semidefinite program, as implemented in the following paper: |
   LRSDP | LRSDP is the implementation of Monteiro and Burer's formulation of low-rank semidefinite programs (LR-SDP) |
   LRSDPFunction | The objective function that LRSDP is trying to optimize |
    NadaMaxUpdate | NadaMax is an optimizer that combines AdaMax and NAG
   NadamUpdate | Nadam is an optimizer that combines the Adam and NAG optimization strategies |
   NesterovMomentumUpdate | Nesterov Momentum update policy for Stochastic Gradient Descent (SGD) |
   NoDecay | Definition of the NoDecay class |
    OptimisticAdamUpdate | OptimisticAdam is an optimizer which implements the Optimistic Adam algorithm, which uses Optimistic Mirror Descent with the Adam optimizer
   ParallelSGD | An implementation of parallel stochastic gradient descent using the lock-free HOGWILD! approach |
   PrimalDualSolver | Interface to a primal dual interior point solver |
    Proximal | Approximate a vector with another vector on the lp ball
    RandomDescent | Random descent policy for Stochastic Coordinate Descent (SCD)
   RandomSelection | |
   RMSProp | RMSProp is an optimizer that utilizes the magnitude of recent gradients to normalize the gradients |
   RMSPropUpdate | RMSProp is an optimizer that utilizes the magnitude of recent gradients to normalize the gradients |
    SA | Simulated Annealing is a stochastic optimization algorithm which is able to deliver near-optimal results quickly without knowing the gradient of the function being optimized
   SARAHPlusUpdate | SARAH+ provides an automatic and adaptive choice of the inner loop size |
    SARAHType | StochAstic Recursive gRadient algoritHm (SARAH)
   SARAHUpdate | Vanilla update policy for SARAH |
    SCD | Stochastic Coordinate Descent is a technique for minimizing a function by doing a line search along a single direction at the current point in the iteration
   SDP | Specify an SDP in primal form |
   SGD | Stochastic Gradient Descent is a technique for minimizing a function which can be expressed as a sum of other functions |
   SGDR | This class is based on Mini-batch Stochastic Gradient Descent class and simulates a new warm-started run/restart once a number of epochs are performed |
    SMORMS3 | SMORMS3 is an optimizer that estimates a safe and optimal distance based on curvature, normalizing the stepsize in the parameter space
    SMORMS3Update | SMORMS3 is an optimizer that estimates a safe and optimal distance based on curvature, normalizing the stepsize in the parameter space
   SnapshotEnsembles | Simulate a new warm-started run/restart once a number of epochs are performed |
   SnapshotSGDR | This class is based on Mini-batch Stochastic Gradient Descent class and simulates a new warm-started run/restart once a number of epochs are performed using the Snapshot ensembles technique |
   SPALeRASGD | SPALeRA Stochastic Gradient Descent is a technique for minimizing a function which can be expressed as a sum of other functions |
    SPALeRAStepsize | Definition of the SPALeRA stepsize technique, which implements a change detection mechanism with an agnostic adaptation scheme
   SVRGType | Stochastic Variance Reduced Gradient is a technique for minimizing a function which can be expressed as a sum of other functions |
   SVRGUpdate | Vanilla update policy for Stochastic variance reduced gradient (SVRG) |
   TestFuncFW | Simple test function for classic Frank Wolfe Algorithm: |
   UpdateClassic | Use classic rule in the update step for FrankWolfe algorithm |
   UpdateFullCorrection | Full correction approach to update the solution |
   UpdateLineSearch | Use line search in the update step for FrankWolfe algorithm |
   UpdateSpan | Recalculate the optimal solution in the span of all previous solution space, used as update step for FrankWolfe algorithm |
   VanillaUpdate | Vanilla update policy for Stochastic Gradient Descent (SGD) |
  pca | |
   ExactSVDPolicy | Implementation of the exact SVD policy |
   PCA | This class implements principal components analysis (PCA) |
   QUICSVDPolicy | Implementation of the QUIC-SVD policy |
    RandomizedBlockKrylovSVDPolicy | Implementation of the randomized Block Krylov SVD policy
   RandomizedSVDPolicy | Implementation of the randomized SVD policy |
  perceptron | |
   Perceptron | This class implements a simple perceptron (i.e., a single layer neural network) |
   RandomInitialization | This class is used to initialize weights for the weightVectors matrix in a random manner |
   SimpleWeightUpdate | |
   ZeroInitialization | This class is used to initialize the matrix weightVectors to zero |
  radical | |
   Radical | An implementation of RADICAL, an algorithm for independent component analysis (ICA) |
  range | Range-search routines |
   BiSearchVisitor | BiSearchVisitor executes a bichromatic range search on the given RSType |
   DeleteVisitor | DeleteVisitor deletes the given RSType instance |
   MonoSearchVisitor | MonoSearchVisitor executes a monochromatic range search on the given RSType |
   NaiveVisitor | NaiveVisitor exposes the Naive() method of the given RSType |
   RangeSearch | The RangeSearch class is a template class for performing range searches |
   RangeSearchRules | The RangeSearchRules class is a template helper class used by RangeSearch class when performing range searches |
   RangeSearchStat | Statistic class for RangeSearch, to be set to the StatisticType of the tree type that range search is being performed with |
   ReferenceSetVisitor | ReferenceSetVisitor exposes the referenceSet of the given RSType |
   RSModel | |
   SingleModeVisitor | SingleModeVisitor exposes the SingleMode() method of the given RSType |
   TrainVisitor | TrainVisitor sets the reference set to a new reference set on the given RSType |
  regression | Regression methods |
   LARS | An implementation of LARS, a stage-wise homotopy-based algorithm for l1-regularized linear regression (LASSO) and l1+l2 regularized linear regression (Elastic Net) |
   LinearRegression | A simple linear regression algorithm using ordinary least squares |
   LogisticRegression | The LogisticRegression class implements an L2-regularized logistic regression model, and supports training with multiple optimizers and classification |
   LogisticRegressionFunction | The log-likelihood function for the logistic regression objective function |
   SoftmaxRegression | Softmax Regression is a classifier which can be used for classification when the data available can take two or more class values |
   SoftmaxRegressionFunction | |
  rl | |
   Acrobat | Implementation of Acrobat game |
    State | |
   AggregatedPolicy | |
   AsyncLearning | Wrapper of various asynchronous learning algorithms, e.g |
   CartPole | Implementation of Cart Pole task |
    State | Implementation of the state of Cart Pole |
   ContinuousMountainCar | Implementation of Continuous Mountain Car task |
    Action | Implementation of action of Continuous Mountain Car |
    State | Implementation of state of Continuous Mountain Car |
   GreedyPolicy | Implementation for epsilon greedy policy |
   MountainCar | Implementation of Mountain Car task |
    State | Implementation of state of Mountain Car |
   NStepQLearningWorker | Forward declaration of NStepQLearningWorker |
   OneStepQLearningWorker | Forward declaration of OneStepQLearningWorker |
   OneStepSarsaWorker | Forward declaration of OneStepSarsaWorker |
   Pendulum | Implementation of Pendulum task |
    Action | Implementation of action of Pendulum |
    State | Implementation of state of Pendulum |
   QLearning | Implementation of various Q-Learning algorithms, such as DQN, double DQN |
   RandomReplay | Implementation of random experience replay |
   TrainingConfig | |
  sfinae | |
   MethodFormDetector | |
   MethodFormDetector< Class, MethodForm, 0 > | |
   MethodFormDetector< Class, MethodForm, 1 > | |
   MethodFormDetector< Class, MethodForm, 2 > | |
   MethodFormDetector< Class, MethodForm, 3 > | |
   MethodFormDetector< Class, MethodForm, 4 > | |
   MethodFormDetector< Class, MethodForm, 5 > | |
   MethodFormDetector< Class, MethodForm, 6 > | |
   MethodFormDetector< Class, MethodForm, 7 > | |
  sparse_coding | |
   DataDependentRandomInitializer | A data-dependent random dictionary initializer for SparseCoding |
   NothingInitializer | A DictionaryInitializer for SparseCoding which does not initialize anything; it is useful for when the dictionary is already known and will be set with SparseCoding::Dictionary() |
   RandomInitializer | A DictionaryInitializer for use with the SparseCoding class |
   SparseCoding | An implementation of Sparse Coding with Dictionary Learning that achieves sparsity via an l1-norm regularizer on the codes (LASSO) or an (l1+l2)-norm regularizer on the codes (the Elastic Net) |
  svd | |
    QUIC_SVD | QUIC-SVD is a matrix factorization technique, which operates in a subspace such that A's approximation in that subspace has minimum error (A being the data matrix)
    RandomizedBlockKrylovSVD | Randomized Block Krylov SVD is a matrix factorization that is based on randomized matrix approximation techniques, developed in "Randomized Block Krylov Methods for Stronger and Faster Approximate Singular Value Decomposition"
    RandomizedSVD | Randomized SVD is a matrix factorization that is based on randomized matrix approximation techniques, developed in "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions"
    RegularizedSVD | Regularized SVD is a matrix factorization technique that seeks to reduce the error on the training set, that is, on the examples for which ratings have been provided by the users
   RegularizedSVDFunction | The data is stored in a matrix of type MatType, so that this class can be used with both dense and sparse matrix types |
  tree | Trees and tree-building procedures |
   enumerate | |
   split | |
   AllCategoricalSplit | The AllCategoricalSplit is a splitting function that will split categorical features into many children: one child for each category |
    AuxiliarySplitInfo | |
   AllDimensionSelect | This dimension selection policy allows any dimension to be selected for splitting |
   AxisParallelProjVector | AxisParallelProjVector defines an axis-parallel projection vector |
   BestBinaryNumericSplit | The BestBinaryNumericSplit is a splitting function for decision trees that will exhaustively search a numeric dimension for the best binary split |
    AuxiliarySplitInfo | |
   BinaryNumericSplit | The BinaryNumericSplit class implements the numeric feature splitting strategy devised by Gama, Rocha, and Medas in the following paper: |
   BinaryNumericSplitInfo | |
   BinarySpaceTree | A binary space partitioning tree, such as a KD-tree or a ball tree |
    BreadthFirstDualTreeTraverser | |
    DualTreeTraverser | A dual-tree traverser for binary space trees; see dual_tree_traverser.hpp |
    SingleTreeTraverser | A single-tree traverser for binary space trees; see single_tree_traverser.hpp for implementation |
   CategoricalSplitInfo | |
   CompareCosineNode | |
   CosineTree | |
   CoverTree | A cover tree is a tree specifically designed to speed up nearest-neighbor computation in high-dimensional spaces |
    DualTreeTraverser | A dual-tree cover tree traverser; see dual_tree_traverser.hpp |
    SingleTreeTraverser | A single-tree cover tree traverser; see single_tree_traverser.hpp for implementation |
   DecisionTree | This class implements a generic decision tree learner |
   DiscreteHilbertValue | The DiscreteHilbertValue class stores Hilbert values for all of the points in a RectangleTree node, and calculates Hilbert values for new points |
   EmptyStatistic | Empty statistic if you are not interested in storing statistics in your tree |
   ExampleTree | This is not an actual space tree but instead an example tree that exists to show and document all the functions that mlpack trees must implement |
   FirstPointIsRoot | This class is meant to be used as a choice for the policy class RootPointPolicy of the CoverTree class |
   GiniGain | The Gini gain, a measure of set purity usable as a fitness function (FitnessFunction) for decision trees |
   GiniImpurity | |
   GreedySingleTreeTraverser | |
   HilbertRTreeAuxiliaryInformation | |
   HilbertRTreeDescentHeuristic | This class chooses the best child of a node in a Hilbert R tree when inserting a new point |
   HilbertRTreeSplit | The splitting procedure for the Hilbert R tree |
   HoeffdingCategoricalSplit | This is the standard Hoeffding-bound categorical feature proposed in the paper below: |
   HoeffdingNumericSplit | The HoeffdingNumericSplit class implements the numeric feature splitting strategy alluded to by Domingos and Hulten in the following paper: |
   HoeffdingTree | The HoeffdingTree object represents all of the necessary information for a Hoeffding-bound-based decision tree |
   HoeffdingTreeModel | This class is a serializable Hoeffding tree model that can hold four different types of Hoeffding trees |
   HyperplaneBase | HyperplaneBase defines a splitting hyperplane based on a projection vector and projection value |
   InformationGain | The standard information gain criterion, used for calculating gain in decision trees |
   IsSpillTree | |
   IsSpillTree< tree::SpillTree< MetricType, StatisticType, MatType, HyperplaneType, SplitType > > | |
   MeanSpaceSplit | |
   MeanSplit | A binary space partitioning tree node is split into its left and right child |
     SplitInfo | Information about the partition
   MidpointSpaceSplit | |
   MidpointSplit | A binary space partitioning tree node is split into its left and right child |
     SplitInfo | A struct that contains information about the split
   MinimalCoverageSweep | The MinimalCoverageSweep class finds a partition along which we can split a node according to the coverage of two resulting nodes |
    SweepCost | A struct that provides the type of the sweep cost |
   MinimalSplitsNumberSweep | The MinimalSplitsNumberSweep class finds a partition along which we can split a node according to the number of required splits of the node |
    SweepCost | A struct that provides the type of the sweep cost |
   MultipleRandomDimensionSelect | This dimension selection policy allows the selection from a few random dimensions |
   NoAuxiliaryInformation | |
   NumericSplitInfo | |
   Octree | |
    DualTreeTraverser | A dual-tree traverser; see dual_tree_traverser.hpp |
    SingleTreeTraverser | A single-tree traverser; see single_tree_traverser.hpp |
   ProjVector | ProjVector defines a general projection vector (not necessarily axis-parallel) |
   QueueFrame | |
   RandomDimensionSelect | This dimension selection policy only selects one single random dimension |
   RandomForest | |
    RectangleTree | A rectangle-type tree, such as an R-tree or X-tree
     DualTreeTraverser | A dual-tree traverser for rectangle-type trees
     SingleTreeTraverser | A single-tree traverser for rectangle-type trees
   RPlusPlusTreeAuxiliaryInformation | |
   RPlusPlusTreeDescentHeuristic | |
   RPlusPlusTreeSplitPolicy | The RPlusPlusTreeSplitPolicy helps to determine the subtree into which we should insert a child of an intermediate node that is being split |
   RPlusTreeDescentHeuristic | |
   RPlusTreeSplit | The RPlusTreeSplit class performs the split process of a node on overflow |
   RPlusTreeSplitPolicy | The RPlusTreeSplitPolicy helps to determine the subtree into which we should insert a child of an intermediate node that is being split
   RPTreeMaxSplit | This class splits a node by a random hyperplane |
    SplitInfo | Information about the partition
   RPTreeMeanSplit | This class splits a binary space tree |
    SplitInfo | Information about the partition
   RStarTreeDescentHeuristic | When descending a RectangleTree to insert a point, we need to have a way to choose a child node when the point isn't enclosed by any of them |
   RStarTreeSplit | A Rectangle Tree has new points inserted at the bottom |
   RTreeDescentHeuristic | When descending a RectangleTree to insert a point, we need to have a way to choose a child node when the point isn't enclosed by any of them |
   RTreeSplit | A Rectangle Tree has new points inserted at the bottom |
   SpaceSplit | |
   SpillTree | A hybrid spill tree is a variant of binary space trees in which the children of a node can "spill over" each other, and contain shared datapoints |
    SpillDualTreeTraverser | A generic dual-tree traverser for hybrid spill trees; see spill_dual_tree_traverser.hpp for implementation |
    SpillSingleTreeTraverser | A generic single-tree traverser for hybrid spill trees; see spill_single_tree_traverser.hpp for implementation |
   TraversalInfo | The TraversalInfo class holds traversal information which is used in dual-tree (and single-tree) traversals |
   TreeTraits | The TreeTraits class provides compile-time information on the characteristics of a given tree type |
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::BallBound, SplitType > > | This is a specialization of the TreeTraits class to the BallTree tree type
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::CellBound, SplitType > > | This is a specialization of the TreeTraits class to the UBTree tree type
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::HollowBallBound, SplitType > > | This is a specialization of the TreeTraits class to an arbitrary tree with HollowBallBound (currently only the vantage point tree is supported)
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, RPTreeMaxSplit > > | This is a specialization of the TreeTraits class to the max-split random projection tree
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, RPTreeMeanSplit > > | This is a specialization of the TreeTraits class to the mean-split random projection tree
   TreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, SplitType > > | This is a specialization of the TreeTraits class to the BinarySpaceTree tree type |
   TreeTraits< CoverTree< MetricType, StatisticType, MatType, RootPointPolicy > > | The specialization of the TreeTraits class for the CoverTree tree type |
   TreeTraits< Octree< MetricType, StatisticType, MatType > > | This is a specialization of the TreeTraits class to the Octree tree type |
   TreeTraits< RectangleTree< MetricType, StatisticType, MatType, RPlusTreeSplit< SplitPolicyType, SweepType >, DescentType, AuxiliaryInformationType > > | Since the R+/R++ tree cannot have overlapping children, we should define traits for the R+/R++ tree
   TreeTraits< RectangleTree< MetricType, StatisticType, MatType, SplitType, DescentType, AuxiliaryInformationType > > | This is a specialization of the TreeTraits class to the RectangleTree tree type
   TreeTraits< SpillTree< MetricType, StatisticType, MatType, HyperplaneType, SplitType > > | This is a specialization of the TreeTraits class to the SpillTree tree type
   UBTreeSplit | Split a node into two parts according to the median address of points contained in the node |
   VantagePointSplit | The class splits a binary space partitioning tree node according to the median distance to the vantage point |
    SplitInfo | A struct that contains information about the split
   XTreeAuxiliaryInformation | The XTreeAuxiliaryInformation class provides information specific to X trees for each node in a RectangleTree |
    SplitHistoryStruct | The X tree requires that the tree records its "split history"
   XTreeSplit | A Rectangle Tree has new points inserted at the bottom |
  util | |
   IsStdVector | Metaprogramming structure for vector detection |
   IsStdVector< std::vector< T, A > > | Metaprogramming structure for vector detection |
   NullOutStream | Used for Log::Debug when not compiled with debugging symbols |
   ParamData | This structure holds all of the information about a single parameter, including its value (which is set when ParseCommandLine() is called) |
   PrefixedOutStream | Allows us to output to an ostream with a prefix at the beginning of each line, in the same way we would output to cout or cerr |
   ProgramDoc | A static object whose constructor registers program documentation with the CLI class |
  Backtrace | Provides a backtrace |
  CLI | Parses the command line for parameters and holds user-specified parameters |
  Log | Provides a convenient way to give formatted output |
  Timer | The timer class provides a way for mlpack methods to be timed |
  Timers | |
 std | |
 InitHMMModel | |
 IsVector | If value == true, then VecType is some sort of Armadillo vector or subview |
 IsVector< arma::Col< eT > > | |
 IsVector< arma::Row< eT > > | |
 IsVector< arma::SpCol< eT > > | |
 IsVector< arma::SpRow< eT > > | |
 IsVector< arma::SpSubview< eT > > | |
 IsVector< arma::subview_col< eT > > | |
 IsVector< arma::subview_row< eT > > | |
 SigCheck | Utility struct for checking signatures |
 SparseSVMFunction | |
 TrainHMMModel | |