public static class DeepLearningModel.DeepLearningParameters extends hex.Model.Parameters<DeepLearningModel,DeepLearningModel.DeepLearningParameters,DeepLearningModel.DeepLearningOutput>
| Modifier and Type | Class and Description |
|---|---|
| static class | DeepLearningModel.DeepLearningParameters.Activation: Activation functions |
| static class | DeepLearningModel.DeepLearningParameters.ClassSamplingMethod |
| static class | DeepLearningModel.DeepLearningParameters.InitialWeightDistribution |
| static class | DeepLearningModel.DeepLearningParameters.Loss: Loss functions; CrossEntropy is recommended |
| static class | DeepLearningModel.DeepLearningParameters.MissingValuesHandling |
| Modifier and Type | Field and Description |
|---|---|
| DeepLearningModel.DeepLearningParameters.Activation | activation: The activation function (non-linearity) to be used by the neurons in the hidden layers. |
| boolean | adaptive_rate: The implemented adaptive learning rate algorithm (ADADELTA) automatically combines the benefits of learning rate annealing and momentum training to avoid slow convergence. |
| boolean | autoencoder |
| double | average_activation |
| boolean | balance_classes: For imbalanced data, balance training data class counts via over/under-sampling. |
| water.Key | checkpoint: A model key associated with a previously trained Deep Learning model. |
| float[] | class_sampling_factors: Desired over/under-sampling ratios per class (lexicographic order). |
| boolean | classification |
| double | classification_stop: The stopping criterion in terms of classification error (1 - accuracy) on the training data scoring dataset. |
| boolean | col_major |
| boolean | diagnostics: Gather diagnostics for hidden layers, such as mean and RMS values of learning rate, momentum, weights and biases. |
| double | epochs: The number of passes over the training dataset to be carried out. |
| double | epsilon: The second of two hyperparameters for adaptive learning rate (ADADELTA). |
| boolean | expert_mode: Unlock expert-mode parameters that can affect model building speed, predictive accuracy and scoring. |
| boolean | fast_mode: Enable fast mode (a minor approximation in back-propagation); should not affect results significantly. |
| boolean | force_load_balance: Increase training speed on small datasets by splitting them into many chunks to allow utilization of all cores. |
| int[] | hidden: The number and size of each hidden layer in the model. |
| double[] | hidden_dropout_ratios: The fraction of inputs for each hidden layer to be omitted from training in order to improve generalization. |
| boolean | ignore_const_cols: Ignore constant training columns (no information can be gained anyway). |
| DeepLearningModel.DeepLearningParameters.InitialWeightDistribution | initial_weight_distribution: The distribution from which initial weights are to be drawn. |
| double | initial_weight_scale: The scale of the distribution function for Uniform or Normal distributions. |
| double | input_dropout_ratio: The fraction of features for each training row to be omitted from training in order to improve generalization (dimension sampling). |
| boolean | keep_cross_validation_splits |
| double | l1: A regularization method that constrains the absolute value of the weights and has the net effect of dropping some weights (setting them to zero) from a model to reduce complexity and avoid overfitting. |
| double | l2: A regularization method that constrains the sum of the squared weights. |
| DeepLearningModel.DeepLearningParameters.Loss | loss: The loss (error) function to be minimized by the model. |
| float | max_after_balance_size: When classes are balanced, limit the resulting dataset size to the specified multiple of the original dataset size. |
| int | max_confusion_matrix_size: For classification models, the maximum size (in terms of classes) of the confusion matrix for it to be printed. |
| int | max_hit_ratio_k: The maximum number (top K) of predictions to use for hit ratio computation (multi-class only; 0 to disable). |
| float | max_w2: A maximum on the sum of the squared incoming weights into any one neuron. |
| DeepLearningModel.DeepLearningParameters.MissingValuesHandling | missing_values_handling |
| double | momentum_ramp: Controls the number of training samples over which momentum increases (assuming momentum_stable is larger than momentum_start). |
| double | momentum_stable: Controls the final momentum value reached after momentum_ramp training samples. |
| double | momentum_start: Controls the amount of momentum at the beginning of training. |
| int | n_folds |
| boolean | nesterov_accelerated_gradient: The Nesterov accelerated gradient descent method is a modification to traditional gradient descent for convex functions. |
| boolean | override_with_best_model: If enabled, store the best model under the destination key of this model at the end of training. |
| boolean | quiet_mode: Enable quiet mode for less output to standard output. |
| double | rate: When adaptive learning rate is disabled, the magnitude of the weight updates is determined by the user-specified learning rate (potentially annealed) and is a function of the difference between the predicted value and the target value. |
| double | rate_annealing: Learning rate annealing reduces the learning rate to "freeze" into local minima in the optimization landscape. |
| double | rate_decay: Controls the change of learning rate across layers. |
| double | regression_stop: The stopping criterion in terms of regression error (MSE) on the training data scoring dataset. |
| boolean | replicate_training_data: Replicate the entire training dataset onto every node for faster training on small datasets. |
| double | rho: The first of two hyperparameters for adaptive learning rate (ADADELTA). |
| double | score_duty_cycle: Maximum fraction of wall-clock time spent on model scoring on training and validation samples, and on diagnostics such as computation of feature importances (i.e., not on training). |
| double | score_interval: The minimum time (in seconds) to elapse between model scorings. |
| long | score_training_samples: The number of training dataset points to be used for scoring. |
| long | score_validation_samples: The number of validation dataset points to be used for scoring. |
| DeepLearningModel.DeepLearningParameters.ClassSamplingMethod | score_validation_sampling: Method used to sample the validation dataset for scoring; see score_validation_samples above. |
| long | seed: The random seed controls sampling and initialization. |
| boolean | shuffle_training_data: Enable shuffling of training data (on each node). |
| boolean | single_node_mode: Run on a single node for fine-tuning of model parameters. |
| boolean | sparse |
| double | sparsity_beta |
| double | target_ratio_comm_to_comp |
| long | train_samples_per_iteration: The number of training data rows to be processed per iteration. |
| boolean | use_all_factor_levels |
| boolean | variable_importances: Whether to compute variable importances for input features. |
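The adaptive_rate option enables ADADELTA, governed by rho and epsilon. As a minimal illustration of what those two hyperparameters do, here is a single-parameter sketch of the standard ADADELTA update rule (an assumption based on the published algorithm, not H2O's internal implementation):

```java
// Single-parameter ADADELTA sketch: rho decays the running averages of
// squared gradients and squared updates; epsilon keeps the ratio finite.
// Illustration only; not H2O's internal code.
public class AdaDeltaSketch {
    double rho = 0.99;       // decay factor for the running averages
    double epsilon = 1e-8;   // smoothing term to avoid division by zero

    double accGrad = 0.0;    // running average of squared gradients
    double accUpdate = 0.0;  // running average of squared updates

    /** Returns the weight delta for one observed gradient. */
    double step(double grad) {
        accGrad = rho * accGrad + (1 - rho) * grad * grad;
        double update = -Math.sqrt(accUpdate + epsilon)
                       / Math.sqrt(accGrad + epsilon) * grad;
        accUpdate = rho * accUpdate + (1 - rho) * update * update;
        return update;
    }

    public static void main(String[] args) {
        AdaDeltaSketch opt = new AdaDeltaSketch();
        double w = 5.0;                   // minimize f(w) = w^2
        for (int i = 0; i < 1000; i++) {
            double grad = 2 * w;          // f'(w) = 2w
            w += opt.step(grad);
        }
        System.out.println(Math.abs(w));  // shrinks toward the minimum at 0
    }
}
```

Note that no global learning rate appears: the per-step magnitude is the ratio of the two running RMS values, which is why enabling adaptive_rate makes rate, rate_annealing and rate_decay irrelevant.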
| Constructor and Description |
|---|
DeepLearningModel.DeepLearningParameters() |
| Modifier and Type | Method and Description |
|---|---|
| int | sanityCheckParameters() |
Methods inherited from class hex.Model.Parameters: checksum, hide, validation_error, validation_info, validation_warn, validationErrors

public boolean classification
public int n_folds
public boolean keep_cross_validation_splits
public water.Key checkpoint
public boolean override_with_best_model
public boolean expert_mode
public boolean autoencoder
public boolean use_all_factor_levels
public DeepLearningModel.DeepLearningParameters.Activation activation
public int[] hidden
public double epochs
public long train_samples_per_iteration
public double target_ratio_comm_to_comp
public long seed
public boolean adaptive_rate
public double rho
public double epsilon
public double rate
public double rate_annealing
public double rate_decay
public double momentum_start
public double momentum_ramp
public double momentum_stable
public boolean nesterov_accelerated_gradient
public double input_dropout_ratio
public double[] hidden_dropout_ratios
public double l1
public double l2
public float max_w2
public DeepLearningModel.DeepLearningParameters.InitialWeightDistribution initial_weight_distribution
public double initial_weight_scale
public DeepLearningModel.DeepLearningParameters.Loss loss
public double score_interval
public long score_training_samples
public long score_validation_samples
public double score_duty_cycle
public double classification_stop
public double regression_stop
public boolean quiet_mode
public int max_confusion_matrix_size
public int max_hit_ratio_k
public boolean balance_classes
public float[] class_sampling_factors
public float max_after_balance_size
public DeepLearningModel.DeepLearningParameters.ClassSamplingMethod score_validation_sampling
public boolean diagnostics
public boolean variable_importances
public boolean fast_mode
public boolean ignore_const_cols
public boolean force_load_balance
public boolean replicate_training_data
public boolean single_node_mode
public boolean shuffle_training_data
public DeepLearningModel.DeepLearningParameters.MissingValuesHandling missing_values_handling
public boolean sparse
public boolean col_major
public double average_activation
public double sparsity_beta
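The three momentum fields describe a schedule: momentum starts at momentum_start and reaches momentum_stable after momentum_ramp training samples. A small sketch of that schedule, assuming a linear ramp (the exact interpolation is not specified on this page):

```java
// Momentum schedule implied by momentum_start, momentum_ramp and
// momentum_stable: rises from the start value to the stable value over
// the first `ramp` training samples, then stays constant.
public class MomentumSchedule {
    static double momentum(double samplesSeen, double start,
                           double ramp, double stable) {
        if (ramp <= 0 || samplesSeen >= ramp) return stable;
        return start + (stable - start) * (samplesSeen / ramp);
    }

    public static void main(String[] args) {
        // e.g. start=0.5, ramp=1e6 samples, stable=0.99
        System.out.println(momentum(0,       0.5, 1e6, 0.99)); // 0.5
        System.out.println(momentum(500_000, 0.5, 1e6, 0.99)); // 0.745
        System.out.println(momentum(2e6,     0.5, 1e6, 0.99)); // 0.99
    }
}
```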
public DeepLearningModel.DeepLearningParameters()
public int sanityCheckParameters()
Overrides: sanityCheckParameters in class hex.Model.Parameters<DeepLearningModel,DeepLearningModel.DeepLearningParameters,DeepLearningModel.DeepLearningOutput>
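Since the parameters above are public fields, a model is typically configured by assigning them directly and then validating with sanityCheckParameters(). A sketch, assuming an H2O 2.x-style setup (frame parsing and job launching are omitted and version-dependent):

```java
// Configuration sketch using the documented public fields. Field names and
// types come from this page; the surrounding H2O setup is not shown.
DeepLearningModel.DeepLearningParameters p =
    new DeepLearningModel.DeepLearningParameters();
p.hidden = new int[]{200, 200};   // two hidden layers of 200 neurons each
p.activation =
    DeepLearningModel.DeepLearningParameters.Activation.Rectifier;
p.epochs = 10.0;                  // ten passes over the training data
p.adaptive_rate = true;           // use ADADELTA; rho and epsilon apply
p.rho = 0.99;
p.epsilon = 1e-8;
p.l1 = 1e-5;                      // mild L1 regularization
p.seed = 42L;                     // reproducible sampling/initialization
p.sanityCheckParameters();        // validate before training
```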