Uses of Class
io.github.metarank.lightgbm4j.LGBMException
Packages that use LGBMException
Uses of LGBMException in io.github.metarank.lightgbm4j
Methods in io.github.metarank.lightgbm4j that throw LGBMException

void  LGBMBooster.addValidData(LGBMDataset dataset)
      Add new validation data to booster.
void  LGBMBooster.close()
      Deallocate all native memory for the LightGBM model.
void  LGBMDataset.close()
      Deallocate all native memory for the LightGBM dataset.
static LGBMBooster  LGBMBooster.create(LGBMDataset dataset, String parameters)
      Create a new boosting learner.
static LGBMDataset  LGBMDataset.createFromFile(String fileName, String parameters, LGBMDataset reference)
      Load dataset from file (like LightGBM CLI version does).
static LGBMDataset  LGBMDataset.createFromMat(double[] data, int rows, int cols, boolean isRowMajor, String parameters, LGBMDataset reference)
      Create dataset from dense double[] matrix.
static LGBMDataset  LGBMDataset.createFromMat(float[] data, int rows, int cols, boolean isRowMajor, String parameters, LGBMDataset reference)
      Create dataset from dense float[] matrix.
static LGBMBooster  LGBMBooster.createFromModelfile(String file)
      Load an existing booster from model file.
void  …
      Dumps dataset into a file for debugging.
double[]  LGBMBooster.featureImportance(int numIteration, LGBMBooster.FeatureImportanceType importanceType)
      Get model feature importance.
double[]  LGBMBooster.getEval(int dataIndex)
      Get evaluation for training data and validation data.
String[]  LGBMBooster.getEvalNames()
      Get names of evaluation datasets.
String[]  LGBMBooster.getFeatureNames()
      Get names of features.
String[]  LGBMDataset.getFeatureNames()
      Gets feature names from dataset, if dataset supports it.
float[]  LGBMDataset.getFieldFloat(String field)
      Get float[] field from the dataset.
int[]  LGBMDataset.getFieldInt(String field)
      Get int[] field from the dataset.
int  LGBMBooster.getNumClasses()
      Get number of classes.
int  LGBMDataset.getNumData()
      Get number of data points.
int  LGBMBooster.getNumFeature()
      Get number of features.
int  LGBMDataset.getNumFeatures()
      Get number of features.
long  LGBMBooster.getNumPredict(int dataIdx)
      Get number of predictions for training data and validation data (this can be used to support customized evaluation functions).
double[]  LGBMBooster.getPredict(int dataIdx)
      Get prediction for training data and validation data.
static LGBMBooster  LGBMBooster.loadModelFromString(String model)
      Load an existing booster from string.
double[]  LGBMBooster.predictForMat(double[] input, int rows, int cols, boolean isRowMajor, PredictionType predictionType)
double[]  LGBMBooster.predictForMat(double[] input, int rows, int cols, boolean isRowMajor, PredictionType predictionType, String parameter)
      Make prediction for a new double[] dataset.
double[]  LGBMBooster.predictForMat(float[] input, int rows, int cols, boolean isRowMajor, PredictionType predictionType)
double[]  LGBMBooster.predictForMat(float[] input, int rows, int cols, boolean isRowMajor, PredictionType predictionType, String parameter)
      Make prediction for a new float[] dataset.
double  LGBMBooster.predictForMatSingleRow(double[] data, PredictionType predictionType)
      Make prediction for a new double[] row dataset.
double  LGBMBooster.predictForMatSingleRow(float[] data, PredictionType predictionType)
      Make prediction for a new float[] row dataset.
double  LGBMBooster.predictForMatSingleRowFast(LGBMBooster.FastConfig config, double[] data, PredictionType predictionType)
double  LGBMBooster.predictForMatSingleRowFast(LGBMBooster.FastConfig config, float[] data, PredictionType predictionType)
LGBMBooster.predictForMatSingleRowFastInit(PredictionType predictionType, int dtype, int ncols, String parameter)
LGBMBooster.saveModelToString(int startIteration, int numIteration, LGBMBooster.FeatureImportanceType featureImportance)
      Save model to string.
void  LGBMDataset.setFeatureNames(String[] featureNames)
      Set feature names.
void  …
      Sets a double field.
void  …
      Sets a double field.
void  …
      Sets an int field.
boolean  LGBMBooster.updateOneIter()
      Update the model for one iteration.
boolean  LGBMBooster.updateOneIterCustom(float[] grad, float[] hess)
      Update the model by specifying gradient and Hessian directly (this can be used to support customized loss functions).
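Since nearly every call in the table above is declared to throw LGBMException, a single try/catch can cover the whole dataset-booster lifecycle. The sketch below is a minimal, hypothetical end-to-end example: the feature and label values are made up, the label-setting call (`setField`) is assumed from the "Sets a double field" entries above and is not named on this page, and `PredictionType.C_API_PREDICT_NORMAL` is assumed to be the normal-prediction enum constant. The `rowMajorIndex` helper only illustrates what `isRowMajor = true` means for the flat input arrays.

```java
import io.github.metarank.lightgbm4j.LGBMBooster;
import io.github.metarank.lightgbm4j.LGBMDataset;
import io.github.metarank.lightgbm4j.LGBMException;
import io.github.metarank.lightgbm4j.PredictionType;

public class LGBMExceptionExample {
    // Row-major layout: element (row, col) of a rows x cols matrix
    // sits at index row * cols + col of the flat array.
    static int rowMajorIndex(int row, int col, int cols) {
        return row * cols + col;
    }

    public static void main(String[] args) {
        // Toy 4x2 feature matrix (made-up values), stored row-major.
        float[] features = {
            1.0f, 1.0f,
            1.0f, 0.0f,
            0.0f, 1.0f,
            0.0f, 0.0f
        };
        float[] labels = {1.0f, 1.0f, 0.0f, 0.0f};

        LGBMDataset dataset = null;
        LGBMBooster booster = null;
        try {
            // All of these calls are declared to throw LGBMException.
            dataset = LGBMDataset.createFromMat(features, 4, 2, true, "objective=binary", null);
            dataset.setField("label", labels); // assumed overload; not named on this page
            booster = LGBMBooster.create(dataset, "objective=binary");
            for (int i = 0; i < 10; i++) {
                booster.updateOneIter();
            }
            double[] preds = booster.predictForMat(
                new float[] {1.0f, 1.0f}, 1, 2, true, PredictionType.C_API_PREDICT_NORMAL);
            System.out.println("prediction: " + preds[0]);
        } catch (LGBMException e) {
            // LGBMException carries the error reported by the native LightGBM library.
            System.err.println("LightGBM call failed: " + e.getMessage());
        } finally {
            // Native memory is not garbage-collected: close() explicitly.
            if (booster != null) try { booster.close(); } catch (LGBMException ignored) {}
            if (dataset != null) try { dataset.close(); } catch (LGBMException ignored) {}
        }
    }
}
```

Because `close()` itself throws LGBMException, deallocation goes in a finally block with its own catch, so a failed prediction still releases the native handles.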