object BaggedResult

Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @native() @throws( ... )
  6. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  7. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  8. def finalize(): Unit
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  9. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  10. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  11. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  12. val logger: Logger
  13. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  14. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  15. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  16. def rectifyEstimatedVariance(scores: Seq[Double]): Double

    Make sure the variance is non-negative.

    The Monte Carlo bias correction is itself stochastic, so we need to make sure the result is positive:

    If the sum is positive, we're done.

    If the sum is <= 0.0, then the actual variance is likely quite small. We know the variance should be at least as large as the largest importance, since at least one training point will be important. Therefore, we take the maximum importance, which should be a reasonable lower bound on the variance. Note that we could also sum the non-negative scores, but that could be biased upwards.

    If all of the scores are negative (which happens infrequently, for very small ensembles), then we just need a scale. The largest scale is the largest-magnitude score, which is the absolute value of the minimum score. When this happens, a larger ensemble should really be used!

    If all of the treePredictions are zero, then this will return zero.

    scores

    the Monte Carlo corrected importance scores

    returns

    a non-negative estimate of the variance
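    The fallback logic described above could be sketched as follows. This is an illustrative sketch of the documented behavior, not the actual implementation; the object name is assumed.

    ```scala
    // Hypothetical sketch of the rectification logic described in the doc comment.
    object VarianceRectificationSketch {

      def rectifyEstimatedVariance(scores: Seq[Double]): Double = {
        val total = scores.sum
        if (total > 0.0) {
          // The bias-corrected sum is positive, so use it directly.
          total
        } else if (scores.exists(_ > 0.0)) {
          // Sum is <= 0 but some score is positive: fall back to the
          // largest importance as a lower bound on the variance.
          scores.max
        } else {
          // All scores are non-positive: use the magnitude of the minimum
          // score as a scale. All-zero input yields zero.
          math.abs(scores.min)
        }
      }
    }
    ```

    Note that the input is assumed non-empty; `scores.max` and `scores.min` throw on an empty sequence.
    
    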

  17. def rectifyImportanceScores(scores: Vector[Double]): Vector[Double]

    Make sure the scores are each non-negative.

    The Monte Carlo bias correction is itself stochastic, so we need to make sure each result is positive. If a score was statistically consistent with zero, then we might subtract off the entire bias correction, which results in a negative value. Therefore, we use the magnitude of the minimum as an estimate of the noise level, and simply set that as a floor.

    If all of the treePredictions are zero, then this will return a vector of zeros.

    scores

    the Monte Carlo corrected importance scores

    returns

    a vector of non-negative bias-corrected scores
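    The flooring described above could be sketched like this. Again, an illustrative sketch of the documented behavior rather than the actual implementation; the object name is assumed.

    ```scala
    // Hypothetical sketch of the per-score rectification described in the doc comment.
    object ScoreRectificationSketch {

      def rectifyImportanceScores(scores: Vector[Double]): Vector[Double] = {
        // Use the magnitude of the most-negative score as a noise-level
        // estimate, and floor every score at that level. If no score is
        // negative, the floor is at most the minimum and nothing changes;
        // all-zero input stays all-zero.
        val floor = math.abs(scores.min)
        scores.map(math.max(_, floor))
      }
    }
    ```

    With this choice of floor, any score within the noise level of zero is clamped up to the noise level, so the output is always element-wise non-negative.
    
    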

  18. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  19. def toString(): String
    Definition Classes
    AnyRef → Any
  20. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  21. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  22. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @throws( ... )
