
class LinearRegression extends Regressor[Vector, LinearRegression, LinearRegressionModel] with LinearRegressionParams with DefaultParamsWritable with Logging

Linear regression.

The learning objective is to minimize the specified loss function, with regularization. This supports two kinds of loss:

  • squaredError (a.k.a. squared loss)
  • huber (a hybrid of squared error for relatively small errors and absolute error for relatively large ones, and we estimate the scale parameter from training data)

This supports multiple types of regularization:

  • none (a.k.a. ordinary least squares)
  • L2 (ridge regression)
  • L1 (Lasso)
  • L2 + L1 (elastic net)

The squared error objective function is:

$$ \begin{align} \min_{w}\frac{1}{2n}{\sum_{i=1}^n(X_{i}w - y_{i})^{2} + \lambda\left[\frac{1-\alpha}{2}{||w||_{2}}^{2} + \alpha{||w||_{1}}\right]} \end{align} $$

The huber objective function is:

$$ \begin{align} \min_{w, \sigma}\frac{1}{2n}{\sum_{i=1}^n\left(\sigma + H_m\left(\frac{X_{i}w - y_{i}}{\sigma}\right)\sigma\right) + \frac{1}{2}\lambda {||w||_2}^2} \end{align} $$

where

$$ \begin{align} H_m(z) = \begin{cases} z^2, & \text {if } |z| < \epsilon, \\ 2\epsilon|z| - \epsilon^2, & \text{otherwise} \end{cases} \end{align} $$

Note: Fitting with huber loss only supports none and L2 regularization.
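
For example, a minimal usage sketch. It assumes an active SparkSession and a training DataFrame with a "features" vector column and a "label" column (neither is defined on this page); the parameter values are illustrative:

    import org.apache.spark.ml.regression.LinearRegression

    // Squared error loss with an elastic-net penalty (L1/L2 mix).
    val lr = new LinearRegression()
      .setMaxIter(100)
      .setRegParam(0.3)
      .setElasticNetParam(0.8)
    val model = lr.fit(training)
    println(s"coefficients: ${model.coefficients}, intercept: ${model.intercept}")

    // Huber loss supports only none or L2 regularization, so elasticNetParam stays 0.0.
    val robust = new LinearRegression()
      .setLoss("huber")
      .setRegParam(0.1)
    val robustModel = robust.fit(training)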

Annotations
@Since( "1.3.0" )
Source
LinearRegression.scala
Inherited
  1. LinearRegression
  2. DefaultParamsWritable
  3. MLWritable
  4. LinearRegressionParams
  5. HasLoss
  6. HasAggregationDepth
  7. HasSolver
  8. HasWeightCol
  9. HasStandardization
  10. HasFitIntercept
  11. HasTol
  12. HasMaxIter
  13. HasElasticNetParam
  14. HasRegParam
  15. Regressor
  16. Predictor
  17. PredictorParams
  18. HasPredictionCol
  19. HasFeaturesCol
  20. HasLabelCol
  21. Estimator
  22. PipelineStage
  23. Logging
  24. Params
  25. Serializable
  26. Serializable
  27. Identifiable
  28. AnyRef
  29. Any

Parameters

A list of (hyper-)parameter keys this algorithm can take. Users can set and get the parameter values through setters and getters, respectively; a short usage sketch follows this list.

  1. final val elasticNetParam: DoubleParam

    Param for the ElasticNet mixing parameter, in range [0, 1]. For alpha = 0, the penalty is an L2 penalty. For alpha = 1, it is an L1 penalty.

    Definition Classes
    HasElasticNetParam
  2. final val featuresCol: Param[String]

    Param for features column name.

    Definition Classes
    HasFeaturesCol
  3. final val fitIntercept: BooleanParam

    Param for whether to fit an intercept term.

    Definition Classes
    HasFitIntercept
  4. final val labelCol: Param[String]

    Param for label column name.

    Definition Classes
    HasLabelCol
  5. final val loss: Param[String]

    The loss function to be optimized. Supported options: "squaredError" and "huber". Default: "squaredError"

    Definition Classes
    LinearRegressionParams → HasLoss
    Annotations
    @Since( "2.3.0" )
  6. final val maxIter: IntParam

    Param for maximum number of iterations (>= 0).

    Definition Classes
    HasMaxIter
  7. final val predictionCol: Param[String]

    Param for prediction column name.

    Definition Classes
    HasPredictionCol
  8. final val regParam: DoubleParam

    Param for regularization parameter (>= 0).

    Definition Classes
    HasRegParam
  9. final val solver: Param[String]

    The solver algorithm for optimization. Supported options: "l-bfgs", "normal" and "auto". Default: "auto"

    Definition Classes
    LinearRegressionParams → HasSolver
    Annotations
    @Since( "1.6.0" )
  10. final val standardization: BooleanParam

    Param for whether to standardize the training features before fitting the model.

    Definition Classes
    HasStandardization
  11. final val tol: DoubleParam

    Param for the convergence tolerance for iterative algorithms (>= 0).

    Definition Classes
    HasTol
  12. final val weightCol: Param[String]

    Param for weight column name. If this is not set or empty, we treat all instance weights as 1.0.

    Definition Classes
    HasWeightCol
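
A short sketch of the setter/getter pattern for the params listed above (the instance name and values are illustrative, not defaults):

    val lr = new org.apache.spark.ml.regression.LinearRegression()

    lr.setRegParam(0.1)    // setters return the instance, so they chain
      .setFitIntercept(true)
      .setMaxIter(50)

    println(lr.getRegParam)                 // 0.1
    println(lr.getMaxIter)                  // 50
    println(lr.explainParam(lr.regParam))   // name, doc, default and user-supplied value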

Members

  1. final def clear(param: Param[_]): LinearRegression.this.type

    Clears the user-supplied value for the input param.

    Definition Classes
    Params
  2. def copy(extra: ParamMap): LinearRegression

    Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().

    Definition Classes
    LinearRegression → Predictor → Estimator → PipelineStage → Params
    Annotations
    @Since( "1.4.0" )
  3. def explainParam(param: Param[_]): String

    Explains a param.

    param

    input param, must belong to this instance.

    returns

    a string that contains the input param name, doc, and optionally its default value and the user-supplied value

    Definition Classes
    Params
  4. def explainParams(): String

    Explains all params of this instance. See explainParam().

    Definition Classes
    Params
  5. final def extractParamMap(): ParamMap

    extractParamMap with no extra values.

    Definition Classes
    Params
  6. final def extractParamMap(extra: ParamMap): ParamMap

    Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values less than user-supplied values less than extra.

    Definition Classes
    Params
  7. def fit(dataset: Dataset[_]): LinearRegressionModel

    Fits a model to the input data.

    Definition Classes
    Predictor → Estimator
  8. def fit(dataset: Dataset[_], paramMaps: Array[ParamMap]): Seq[LinearRegressionModel]

    Fits multiple models to the input data with multiple sets of parameters. The default implementation uses a for loop on each parameter map. Subclasses could override this to optimize multi-model training. (A short sketch follows this member list.)

    dataset

    input dataset

    paramMaps

    An array of parameter maps. These values override any specified in this Estimator's embedded ParamMap.

    returns

    fitted models, matching the input parameter maps

    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" )
  9. def fit(dataset: Dataset[_], paramMap: ParamMap): LinearRegressionModel

    Fits a single model to the input data with the provided parameter map.

    dataset

    input dataset

    paramMap

    Parameter map. These values override any specified in this Estimator's embedded ParamMap.

    returns

    fitted model

    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" )
  10. def fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): LinearRegressionModel

    Fits a single model to the input data with optional parameters.

    dataset

    input dataset

    firstParamPair

    the first param pair, overrides embedded params

    otherParamPairs

    other param pairs. These values override any specified in this Estimator's embedded ParamMap.

    returns

    fitted model

    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" ) @varargs()
  11. final def get[T](param: Param[T]): Option[T]

    Optionally returns the user-supplied value of a param.

    Definition Classes
    Params
  12. final def getDefault[T](param: Param[T]): Option[T]

    Gets the default value of a parameter.

    Definition Classes
    Params
  13. final def getOrDefault[T](param: Param[T]): T

    Gets the value of a param in the embedded param map or its default value. Throws an exception if neither is set.

    Definition Classes
    Params
  14. def getParam(paramName: String): Param[Any]

    Gets a param by its name.

    Definition Classes
    Params
  15. final def hasDefault[T](param: Param[T]): Boolean

    Tests whether the input param has a default value set.

    Definition Classes
    Params
  16. def hasParam(paramName: String): Boolean

    Tests whether this instance contains a param with a given name.

    Definition Classes
    Params
  17. final def isDefined(param: Param[_]): Boolean

    Checks whether a param is explicitly set or has a default value.

    Definition Classes
    Params
  18. final def isSet(param: Param[_]): Boolean

    Checks whether a param is explicitly set.

    Definition Classes
    Params
  19. lazy val params: Array[Param[_]]

    Returns all params sorted by their names. The default implementation uses Java reflection to list all public methods that have no arguments and return Param.

    Definition Classes
    Params
    Note

    Developers should not use this method in constructors because we cannot guarantee that this variable gets initialized before other params.

  20. def save(path: String): Unit

    Saves this ML instance to the input path, a shortcut of write.save(path).

    Definition Classes
    MLWritable
    Annotations
    @Since( "1.6.0" ) @throws( ... )
  21. final def set[T](param: Param[T], value: T): LinearRegression.this.type

    Sets a parameter in the embedded param map.

    Definition Classes
    Params
  22. def toString(): String
    Definition Classes
    Identifiable → AnyRef → Any
  23. def transformSchema(schema: StructType): StructType

    Check transform validity and derive the output schema from the input schema.

    We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().

    A typical implementation should first verify the schema change and parameter validity, including complex parameter interaction checks.

    Definition Classes
    Predictor → PipelineStage
  24. val uid: String

    An immutable unique ID for the object and its derivatives.

    Definition Classes
    LinearRegression → Identifiable
    Annotations
    @Since( "1.3.0" )
  25. def write: MLWriter

    Returns an MLWriter instance for this ML instance.

    Definition Classes
    DefaultParamsWritable → MLWritable
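
A short sketch of the Estimator/Params plumbing listed above (the training DataFrame and the save path are placeholders, not part of this page):

    import org.apache.spark.ml.param.ParamMap
    import org.apache.spark.ml.regression.LinearRegression

    val lr = new LinearRegression().setMaxIter(10)

    // Values in a ParamMap override values set on the estimator itself
    // (ordering: default param values, then user-supplied values, then extra).
    val grid = Array(
      ParamMap(lr.regParam -> 0.1, lr.maxIter -> 30),
      ParamMap(lr.regParam -> 0.5, lr.maxIter -> 30))
    val models = lr.fit(training, grid)   // one fitted model per ParamMap

    println(lr.explainParams())           // all params with docs and current values
    lr.save("/tmp/lr-estimator")          // shortcut for lr.write.save(path)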

getExpertParam

  1. def getEpsilon: Double

    Definition Classes
    LinearRegressionParams
    Annotations
    @Since( "2.3.0" )

setExpertParam

  1. def setEpsilon(value: Double): LinearRegression.this.type

    Sets the value of param epsilon. Default is 1.35.

    Annotations
    @Since( "2.3.0" )

Parameter setters

  1. def setElasticNetParam(value: Double): LinearRegression.this.type

    Set the ElasticNet mixing parameter. For alpha = 0, the penalty is an L2 penalty. For alpha = 1, it is an L1 penalty. For alpha in (0, 1), the penalty is a combination of L1 and L2. Default is 0.0, which is an L2 penalty.

    Note: Fitting with huber loss only supports none and L2 regularization, so an exception is thrown if this param is set to a non-zero value.

    Annotations
    @Since( "1.4.0" )
  2. def setFeaturesCol(value: String): LinearRegression

    Definition Classes
    Predictor
  3. def setFitIntercept(value: Boolean): LinearRegression.this.type

    Set if we should fit the intercept. Default is true.

    Annotations
    @Since( "1.5.0" )
  4. def setLabelCol(value: String): LinearRegression

    Definition Classes
    Predictor
  5. def setLoss(value: String): LinearRegression.this.type

    Sets the value of param loss. Default is "squaredError".

    Annotations
    @Since( "2.3.0" )
  6. def setMaxIter(value: Int): LinearRegression.this.type

    Set the maximum number of iterations. Default is 100.

    Annotations
    @Since( "1.3.0" )
  7. def setPredictionCol(value: String): LinearRegression

    Definition Classes
    Predictor
  8. def setRegParam(value: Double): LinearRegression.this.type

    Set the regularization parameter. Default is 0.0.

    Annotations
    @Since( "1.3.0" )
  9. def setSolver(value: String): LinearRegression.this.type

    Set the solver algorithm used for optimization. In case of linear regression, this can be "l-bfgs", "normal" and "auto".

    • "l-bfgs" denotes Limited-memory BFGS which is a limited-memory quasi-Newton optimization method.
    • "normal" denotes using Normal Equation as an analytical solution to the linear regression problem. This solver is limited to LinearRegression.MAX_FEATURES_FOR_NORMAL_SOLVER.
    • "auto" (default) means that the solver algorithm is selected automatically. The Normal Equations solver will be used when possible, but this will automatically fall back to iterative optimization methods when needed.

    Note: Fitting with huber loss doesn't support the normal solver, so an exception is thrown if this param is set to "normal". (A configuration sketch follows this list of setters.)

    Annotations
    @Since( "1.6.0" )
  10. def setStandardization(value: Boolean): LinearRegression.this.type

    Whether to standardize the training features before fitting the model. The coefficients of models will always be returned on the original scale, so this is transparent to users. Default is true.

    Annotations
    @Since( "1.5.0" )
    Note

    With or without standardization, the models should always converge to the same solution when no regularization is applied. In R's GLMNET package, the default behavior is true as well.

  11. def setTol(value: Double): LinearRegression.this.type

    Set the convergence tolerance of iterations. A smaller value will lead to higher accuracy at the cost of more iterations. Default is 1E-6.

    Annotations
    @Since( "1.4.0" )
  12. def setWeightCol(value: String): LinearRegression.this.type

    Whether to over-/under-sample training instances according to the given weights in weightCol. If not set or empty, all instances are treated equally (weight 1.0). Default is not set, so all instances have weight one.

    Annotations
    @Since( "1.6.0" )
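
A configuration sketch combining the setters above (the "weight" column is an assumption about the input data; values are illustrative):

    val configured = new org.apache.spark.ml.regression.LinearRegression()
      .setSolver("l-bfgs")        // "auto" (default), "l-bfgs" or "normal"
      .setStandardization(true)   // coefficients are still reported on the original scale
      .setWeightCol("weight")     // per-instance weights; unset means weight 1.0
      .setTol(1e-8)
      .setMaxIter(200)
    // Note: with setLoss("huber"), setting the solver to "normal" would throw.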

Parameter getters

  1. final def getElasticNetParam: Double

    Definition Classes
    HasElasticNetParam
  2. final def getFeaturesCol: String

    Definition Classes
    HasFeaturesCol
  3. final def getFitIntercept: Boolean

    Definition Classes
    HasFitIntercept
  4. final def getLabelCol: String

    Definition Classes
    HasLabelCol
  5. final def getLoss: String

    Definition Classes
    HasLoss
  6. final def getMaxIter: Int

    Definition Classes
    HasMaxIter
  7. final def getPredictionCol: String

    Definition Classes
    HasPredictionCol
  8. final def getRegParam: Double

    Definition Classes
    HasRegParam
  9. final def getSolver: String

    Definition Classes
    HasSolver
  10. final def getStandardization: Boolean

    Definition Classes
    HasStandardization
  11. final def getTol: Double

    Definition Classes
    HasTol
  12. final def getWeightCol: String

    Definition Classes
    HasWeightCol

(expert-only) Parameters

A list of advanced, expert-only (hyper-)parameter keys this algorithm can take. Users can set and get the parameter values through setters and getters, respectively.

  1. final val aggregationDepth: IntParam

    Param for suggested depth for treeAggregate (>= 2).

    Definition Classes
    HasAggregationDepth
  2. final val epsilon: DoubleParam

    The shape parameter to control the amount of robustness. Must be > 1.0. At larger values of epsilon, the huber criterion becomes more similar to least squares regression; for small values of epsilon, the criterion is more similar to L1 regression. Default is 1.35 to get as much robustness as possible while retaining 95% statistical efficiency for normally distributed data. It matches sklearn's HuberRegressor and is "M" from "A robust hybrid of lasso and ridge regression". Only valid when "loss" is "huber". (A usage sketch follows this list.)

    Definition Classes
    LinearRegressionParams
    Annotations
    @Since( "2.3.0" )
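
A robust-regression sketch using the expert params above (values are illustrative; training is an assumed DataFrame as before):

    val huber = new org.apache.spark.ml.regression.LinearRegression()
      .setLoss("huber")
      .setEpsilon(1.35)         // must be > 1.0; larger values behave more like least squares
      .setRegParam(0.1)         // huber fitting supports only none or L2 regularization
      .setAggregationDepth(4)   // larger depth can help with wide features / many partitions
    println(huber.getEpsilon)   // 1.35
    val huberModel = huber.fit(training)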

(expert-only) Parameter setters

  1. def setAggregationDepth(value: Int): LinearRegression.this.type

    Suggested depth for treeAggregate (greater than or equal to 2). If the dimensions of features or the number of partitions are large, this param could be adjusted to a larger size. Default is 2.

    Annotations
    @Since( "2.1.0" )

(expert-only) Parameter getters

  1. final def getAggregationDepth: Int

    Definition Classes
    HasAggregationDepth