Package ai.sklearn4j.preprocessing.data
Class StandardScaler
- java.lang.Object
-
- ai.sklearn4j.base.TransformerMixin<NumpyArray<Double>,NumpyArray<Double>>
-
- ai.sklearn4j.preprocessing.data.StandardScaler
-
public class StandardScaler extends TransformerMixin<NumpyArray<Double>,NumpyArray<Double>>
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as:

z = (x - u) / s

where `u` is the mean of the training samples, or zero if `with_mean=False`, and `s` is the standard deviation of the training samples, or one if `with_std=False`.

Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. The mean and standard deviation are then stored to be used on later data via `transform`.

Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not more or less look like standard normally distributed data (e.g. Gaussian with zero mean and unit variance). For instance, many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assume that all features are centered around 0 and have variance of the same order. If a feature has a variance that is orders of magnitude larger than the others, it might dominate the objective function and make the estimator unable to learn from the other features correctly.

This scaler can also be applied to sparse CSR or CSC matrices by passing `with_mean=False` to avoid breaking the sparsity structure of the data.
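The standard-score formula above can be sketched in plain Java. This is an illustrative, self-contained example of the math only (per-feature mean, population standard deviation, and the zero-variance rule described under `scale_`); the class name `StandardizeSketch` and the `double[][]` layout are assumptions for the demo, not part of the StandardScaler API, which operates on `NumpyArray<Double>` instead.

```java
// Illustrative sketch of z = (x - u) / s on a double[][]
// (rows = samples, columns = features). Not the StandardScaler API.
public class StandardizeSketch {
    public static double[][] standardize(double[][] x) {
        int nSamples = x.length, nFeatures = x[0].length;
        double[] mean = new double[nFeatures];
        double[] scale = new double[nFeatures];
        // u: per-feature mean over the training samples
        for (double[] row : x)
            for (int j = 0; j < nFeatures; j++) mean[j] += row[j] / nSamples;
        // s: per-feature population standard deviation
        for (double[] row : x)
            for (int j = 0; j < nFeatures; j++) {
                double d = row[j] - mean[j];
                scale[j] += d * d / nSamples;
            }
        for (int j = 0; j < nFeatures; j++) {
            scale[j] = Math.sqrt(scale[j]);
            if (scale[j] == 0.0) scale[j] = 1.0; // zero variance: leave data as-is
        }
        double[][] z = new double[nSamples][nFeatures];
        for (int i = 0; i < nSamples; i++)
            for (int j = 0; j < nFeatures; j++)
                z[i][j] = (x[i][j] - mean[j]) / scale[j];
        return z;
    }

    public static void main(String[] args) {
        // Second feature has zero variance, so it is left untouched.
        double[][] x = {{1.0, 10.0}, {3.0, 10.0}};
        double[][] z = standardize(x);
        System.out.println(z[0][0] + " " + z[1][0] + " " + z[0][1]); // -1.0 1.0 0.0
    }
}
```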
-
-
Constructor Summary
Constructors
StandardScaler() - Instantiates a new StandardScaler object.
-
Method Summary
String[] getFeatureNamesIn() - Gets the names of features seen during `fit`.
NumpyArray<Double> getMean() - Gets the mean value for each feature in the training set.
long getNFeaturesIn() - Gets the number of features seen during `fit`.
NumpyArray<Long> getNSamplesSeen() - Gets the number of samples processed by the estimator for each feature.
NumpyArray<Double> getScale() - Gets the per-feature relative scaling of the data to achieve zero mean and unit variance.
NumpyArray<Double> getVariance() - Gets the variance for each feature in the training set.
boolean getWithMean() - Gets the value of WithMean.
boolean getWithStandardDeviation() - Gets the value of WithStd.
NumpyArray<Double> inverseTransform(NumpyArray<Double> array) - Takes a transformed array and reverses the transformation.
void setFeatureNamesIn(String[] value) - Sets the names of features seen during `fit`.
void setMean(NumpyArray<Double> value) - Sets the mean value for each feature in the training set.
void setNFeaturesIn(long value) - Sets the number of features seen during `fit`.
void setNSamplesSeen(NumpyArray<Long> value) - Sets the number of samples processed by the estimator for each feature.
void setScale(NumpyArray<Double> value) - Sets the per-feature relative scaling of the data to achieve zero mean and unit variance.
void setVariance(NumpyArray<Double> value) - Sets the variance for each feature in the training set.
void setWithMean(boolean value) - Sets the value of WithMean.
void setWithStandardDeviation(boolean value) - Sets the value of WithStd.
NumpyArray<Double> transform(NumpyArray<Double> array) - Takes the input array and transforms it.
-
-
-
Method Detail
-
setScale
public void setScale(NumpyArray<Double> value)
Sets the per-feature relative scaling of the data to achieve zero mean and unit variance. Generally this is calculated using `np.sqrt(var_)`. If a variance is zero, unit variance cannot be achieved, and the data is left as-is, giving a scaling factor of 1. `scale_` is equal to `None` when `with_std=False`.
- Parameters:
value - The new value for scale.
-
getScale
public NumpyArray<Double> getScale()
Gets the per-feature relative scaling of the data to achieve zero mean and unit variance. Generally this is calculated using `np.sqrt(var_)`. If a variance is zero, unit variance cannot be achieved, and the data is left as-is, giving a scaling factor of 1. `scale_` is equal to `None` when `with_std=False`.
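The relationship between the variance and the scale described above can be written out directly. A minimal sketch, assuming plain `double[]` arrays rather than `NumpyArray<Double>` (the helper `scaleFromVariance` is hypothetical, not part of this class):

```java
// scale_ is sqrt(var_), with zero variances mapped to a scaling factor of 1.
public class ScaleFromVariance {
    public static double[] scaleFromVariance(double[] variance) {
        double[] scale = new double[variance.length];
        for (int j = 0; j < variance.length; j++) {
            // A zero-variance feature cannot reach unit variance; use 1 so it is left as-is.
            scale[j] = (variance[j] == 0.0) ? 1.0 : Math.sqrt(variance[j]);
        }
        return scale;
    }

    public static void main(String[] args) {
        double[] s = scaleFromVariance(new double[]{4.0, 0.0, 2.25});
        System.out.println(s[0] + " " + s[1] + " " + s[2]); // 2.0 1.0 1.5
    }
}
```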
-
setMean
public void setMean(NumpyArray<Double> value)
Sets the mean value for each feature in the training set. Equal to `None` when `with_mean=False`.
- Parameters:
value - The new value for mean.
-
getMean
public NumpyArray<Double> getMean()
Gets the mean value for each feature in the training set. Equal to `None` when `with_mean=False`.
-
setVariance
public void setVariance(NumpyArray<Double> value)
Sets the variance for each feature in the training set. Used to compute `scale_`. Equal to `None` when `with_std=False`.
- Parameters:
value - The new value for var.
-
getVariance
public NumpyArray<Double> getVariance()
Gets the variance for each feature in the training set. Used to compute `scale_`. Equal to `None` when `with_std=False`.
-
setNFeaturesIn
public void setNFeaturesIn(long value)
Sets the number of features seen during `fit`.
- Parameters:
value - The new value for nFeaturesIn.
-
getNFeaturesIn
public long getNFeaturesIn()
Gets the number of features seen during `fit`.
-
setFeatureNamesIn
public void setFeatureNamesIn(String[] value)
Sets the names of features seen during `fit`. Defined only when `X` has feature names that are all strings.
- Parameters:
value - The new value for featureNamesIn.
-
getFeatureNamesIn
public String[] getFeatureNamesIn()
Gets the names of features seen during `fit`. Defined only when `X` has feature names that are all strings.
-
setNSamplesSeen
public void setNSamplesSeen(NumpyArray<Long> value)
Sets the number of samples processed by the estimator for each feature. If there are no missing samples, `n_samples_seen` will be an integer; otherwise it will be an array of dtype int. If `sample_weights` are used, it will be a float (if there is no missing data) or an array of dtype float that sums the weights seen so far. Reset on new calls to `fit`, but incremented across `partial_fit` calls.
- Parameters:
value - The new value for nSamplesSeen.
-
getNSamplesSeen
public NumpyArray<Long> getNSamplesSeen()
Gets the number of samples processed by the estimator for each feature. If there are no missing samples, `n_samples_seen` will be an integer; otherwise it will be an array of dtype int. If `sample_weights` are used, it will be a float (if there is no missing data) or an array of dtype float that sums the weights seen so far. Reset on new calls to `fit`, but incremented across `partial_fit` calls.
-
setWithMean
public void setWithMean(boolean value)
Sets the value of WithMean.
- Parameters:
value - The new value for WithMean.
-
getWithMean
public boolean getWithMean()
Gets the value of WithMean.
-
setWithStandardDeviation
public void setWithStandardDeviation(boolean value)
Sets the value of WithStd.
- Parameters:
value - The new value for WithStd.
-
getWithStandardDeviation
public boolean getWithStandardDeviation()
Gets the value of WithStd.
-
transform
public NumpyArray<Double> transform(NumpyArray<Double> array)
Takes the input array and transforms it.
- Specified by: transform in class TransformerMixin<NumpyArray<Double>,NumpyArray<Double>>
- Parameters:
array - The array to transform.
- Returns: The transformed array.
-
inverseTransform
public NumpyArray<Double> inverseTransform(NumpyArray<Double> array)
Takes a transformed array and reverses the transformation.
- Specified by: inverseTransform in class TransformerMixin<NumpyArray<Double>,NumpyArray<Double>>
- Parameters:
array - The array to apply the reverse transform to.
- Returns: The inverse transform of the array.
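The inverse transform follows directly from the standard-score formula: solving z = (x - u) / s for x gives x = z * s + u. A minimal sketch of that round-trip on plain `double[]` values (the `inverse` helper is hypothetical and only illustrates the math, not this class's `NumpyArray`-based signature):

```java
// inverseTransform undoes transform: x = z * scale + mean.
public class InverseTransformSketch {
    public static double[] inverse(double[] z, double[] mean, double[] scale) {
        double[] x = new double[z.length];
        for (int j = 0; j < z.length; j++) {
            x[j] = z[j] * scale[j] + mean[j]; // invert (x - mean) / scale
        }
        return x;
    }

    public static void main(String[] args) {
        double[] mean = {2.0, 10.0};
        double[] scale = {1.0, 1.0};
        double[] z = {-1.0, 0.0};                 // standardized values
        double[] x = inverse(z, mean, scale);     // recovers original features
        System.out.println(x[0] + " " + x[1]);    // 1.0 10.0
    }
}
```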
-
-