object BaggedResult
Inheritance: BaggedResult → AnyRef → Any
Value Members
- final def !=(arg0: Any): Boolean (AnyRef → Any)
- final def ##(): Int (AnyRef → Any)
- final def ==(arg0: Any): Boolean (AnyRef → Any)
- final def asInstanceOf[T0]: T0 (Any)
- def clone(): AnyRef (protected[java.lang]; AnyRef; @native() @throws( ... ))
- final def eq(arg0: AnyRef): Boolean (AnyRef)
- def equals(arg0: Any): Boolean (AnyRef → Any)
- def finalize(): Unit (protected[java.lang]; AnyRef; @throws( classOf[java.lang.Throwable] ))
- final def getClass(): Class[_] (AnyRef → Any; @native())
- def hashCode(): Int (AnyRef → Any; @native())
- final def isInstanceOf[T0]: Boolean (Any)
- val logger: Logger
- final def ne(arg0: AnyRef): Boolean (AnyRef)
- final def notify(): Unit (AnyRef; @native())
- final def notifyAll(): Unit (AnyRef; @native())
- def rectifyEstimatedVariance(scores: Seq[Double]): Double
  Make sure the variance estimate is non-negative. The Monte Carlo bias correction is itself stochastic, so we must make sure the result is positive:
  - If the sum of the scores is positive, then great! We're done.
  - If the sum is <= 0.0, then the actual variance is likely quite small. The variance should be at least as large as the largest importance score, since at least one training point must be important, so we take the maximum importance as a reasonable lower bound on the variance. Note that we could instead sum the non-negative scores, but that could be biased upwards.
  - If all of the scores are negative (which happens infrequently, and only for very small ensembles), then we just need a scale. The largest available scale is the largest-magnitude score, which is the absolute value of the minimum score. When this happens, a larger ensemble should really be used!
  If all of the treePredictions are zero, then this method returns zero.
  - scores: the Monte Carlo bias-corrected importance scores
  - returns: a non-negative estimate of the variance
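The branching logic described above can be sketched as follows. This is a hypothetical re-implementation for illustration, not the library's actual code; the object name `RectifySketch` is invented.

```scala
object RectifySketch {
  // Sketch of the documented rectification: keep a positive sum as-is,
  // otherwise fall back to the largest importance, then to the largest magnitude.
  def rectifyEstimatedVariance(scores: Seq[Double]): Double = {
    val total = scores.sum
    if (total > 0.0) {
      total // the bias-corrected sum is already a valid variance
    } else if (scores.exists(_ > 0.0)) {
      scores.max // lower-bound the variance by the largest importance score
    } else if (scores.nonEmpty) {
      math.abs(scores.min) // all scores negative: use the largest-magnitude score as a scale
    } else {
      0.0 // no scores at all
    }
  }
}
```

Note that when every score is zero (the all-zero treePredictions case), the final fallback evaluates to zero, matching the documented behavior.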
- def rectifyImportanceScores(scores: Vector[Double]): Vector[Double]
  Make sure each score is non-negative. The Monte Carlo bias correction is itself stochastic, so we must make sure each result is positive. If a score was statistically consistent with zero, then the correction might subtract off the entire raw value, which results in a negative score. We therefore use the magnitude of the minimum score as an estimate of the noise level and simply set that as a floor.
  If all of the treePredictions are zero, then this method returns a vector of zeros.
  - scores: the Monte Carlo bias-corrected importance scores
  - returns: a vector of non-negative bias-corrected scores
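The flooring described above can be sketched as follows. Again this is a hypothetical illustration, not the library's actual code; the object name `RectifyScoresSketch` is invented, and the empty-input guard is an assumption.

```scala
object RectifyScoresSketch {
  // Sketch of the documented per-score rectification: treat the magnitude of the
  // most negative score as the noise level, and floor every score at that level.
  def rectifyImportanceScores(scores: Vector[Double]): Vector[Double] = {
    if (scores.isEmpty) scores
    else {
      val floor = math.abs(scores.min) // noise-level estimate from the minimum score
      scores.map(s => math.max(s, floor))
    }
  }
}
```

With this floor, an all-zero input maps to an all-zero output, and any input containing a negative score yields strictly non-negative results.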
- final def synchronized[T0](arg0: ⇒ T0): T0 (AnyRef)
- def toString(): String (AnyRef → Any)
- final def wait(): Unit (AnyRef; @throws( ... ))
- final def wait(arg0: Long, arg1: Int): Unit (AnyRef; @throws( ... ))
- final def wait(arg0: Long): Unit (AnyRef; @native() @throws( ... ))