/*
 * Copyright (C) 2014 The Guava Authors
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License
 * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
 * or implied. See the License for the specific language governing permissions and limitations under
 * the License.
 */

package com.google.common.math;

import static com.google.common.base.Preconditions.checkArgument;
import static java.lang.Double.NEGATIVE_INFINITY;
import static java.lang.Double.NaN;
import static java.lang.Double.POSITIVE_INFINITY;
import static java.util.Arrays.sort;
import static java.util.Collections.unmodifiableMap;

import com.google.common.annotations.Beta;
import com.google.common.annotations.GwtIncompatible;
import com.google.common.primitives.Doubles;
import com.google.common.primitives.Ints;
import java.math.RoundingMode;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Provides a fluent API for calculating <a
 * href="http://en.wikipedia.org/wiki/Quantile">quantiles</a>.
 *
 * <h3>Examples</h3>
 *
 * <p>To compute the median:
 *
 * <pre>{@code
 * double myMedian = median().compute(myDataset);
 * }</pre>
 *
 * where {@link #median()} has been statically imported.
 *
 * <p>To compute the 99th percentile:
 *
 * <pre>{@code
 * double myPercentile99 = percentiles().index(99).compute(myDataset);
 * }</pre>
 *
 * where {@link #percentiles()} has been statically imported.
 *
 * <p>To compute the median and the 90th and 99th percentiles:
 *
 * <pre>{@code
 * Map<Integer, Double> myPercentiles =
 *     percentiles().indexes(50, 90, 99).compute(myDataset);
 * }</pre>
 *
 * where {@link #percentiles()} has been statically imported: {@code myPercentiles} maps the keys
 * 50, 90, and 99 to their corresponding quantile values.
 *
 * <p>To compute quartiles, use {@link #quartiles()} instead of {@link #percentiles()}. To compute
 * arbitrary q-quantiles, use {@link #scale scale(q)}.
 *
 * <p>These examples all take a copy of your dataset. If you have a double array, are okay with it
 * being arbitrarily reordered, and want to avoid that copy, you can use {@code computeInPlace}
 * instead of {@code compute}.
 *
 * <h3>Definition and notes on interpolation</h3>
 *
 * <p>The definition of the kth q-quantile of N values is as follows: define x = k * (N - 1) / q;
 * if x is an integer, the result is the value which would appear at index x in the sorted dataset
 * (unless there are {@link Double#NaN NaN} values, see below); otherwise, the result is the
 * average of the values which would appear at the indexes floor(x) and ceil(x), weighted by
 * (1-frac(x)) and frac(x) respectively.
 * This is the same definition as used by Excel and by S; it is the Type 7 definition in <a
 * href="http://stat.ethz.ch/R-manual/R-devel/library/stats/html/quantile.html">R</a>, and it is
 * described by <a
 * href="http://en.wikipedia.org/wiki/Quantile#Estimating_the_quantiles_of_a_population">
 * wikipedia</a> as providing "Linear interpolation of the modes for the order statistics for the
 * uniform distribution on [0,1]."
 *
 * <h3>Handling of non-finite values</h3>
 *
 * <p>If any values in the input are {@link Double#NaN NaN} then all values returned are {@link
 * Double#NaN NaN}. (This is the one occasion when the behaviour is not the same as you'd get from
 * sorting with {@link java.util.Arrays#sort(double[]) Arrays.sort(double[])} or {@link
 * java.util.Collections#sort(java.util.List) Collections.sort(List<Double>)} and selecting the
 * required value(s). Those methods would sort {@link Double#NaN NaN} values as if they were
 * greater than any other value and place them at the end of the dataset, even after {@link
 * Double#POSITIVE_INFINITY POSITIVE_INFINITY}.)
 *
 * <p>Otherwise, {@link Double#NEGATIVE_INFINITY NEGATIVE_INFINITY} and {@link
 * Double#POSITIVE_INFINITY POSITIVE_INFINITY} sort to the beginning and the end of the dataset,
 * as you would expect.
 *
 * <p>If required to do a weighted average between an infinity and a finite value, or between an
 * infinite value and itself, the infinite value is returned. If required to do a weighted average
 * between {@link Double#NEGATIVE_INFINITY NEGATIVE_INFINITY} and {@link Double#POSITIVE_INFINITY
 * POSITIVE_INFINITY}, {@link Double#NaN NaN} is returned (note that this will only happen if the
 * dataset contains no finite values).
 *
 * <h3>Performance</h3>
 *
 * <p>The average time complexity of the computation is O(N) in the size of the dataset. There is
 * a worst case time complexity of O(N^2). You are extremely unlikely to hit this quadratic case
 * on randomly ordered data (the probability decreases faster than exponentially in N), but if you
 * are passing in unsanitized user data then a malicious user could force it. A light shuffle of
 * the data using an unpredictable seed should normally be enough to thwart this attack.
 *
 * <p>The time taken to compute multiple quantiles on the same dataset using {@link Scale#indexes
 * indexes} is generally less than the total time taken to compute each of them separately, and
 * sometimes much less. For example, on a large enough dataset, computing the 90th and 99th
 * percentiles together takes about 55% as long as computing them separately.
 *
 * <p>When calling {@link ScaleAndIndex#compute} (in {@linkplain ScaleAndIndexes#compute either
 * form}), the memory requirement is 8*N bytes for the copy of the dataset plus an overhead which
 * is independent of N (but depends on the quantiles being computed). When calling {@link
 * ScaleAndIndex#computeInPlace computeInPlace} (in {@linkplain ScaleAndIndexes#computeInPlace
 * either form}), only the overhead is required. The number of object allocations is independent
 * of N in both cases.
 *
 * @author Pete Gillin
 * @since 20.0
 */
@Beta
@GwtIncompatible
public final class Quantiles {

  /**
   * Specifies the computation of a median (i.e. the 1st 2-quantile).
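   *
   * <p>For example, following the interpolation rule described in the class documentation (the
   * values here are purely illustrative):
   *
   * <pre>{@code
   * // x = k * (N - 1) / q = 1 * 3 / 2 = 1.5, so the result is halfway between 3.0 and 5.0
   * double m = median().compute(1.0, 3.0, 5.0, 7.0); // returns 4.0
   * }</pre>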
   */
  public static ScaleAndIndex median() {
    return scale(2).index(1);
  }

  /** Specifies the computation of quartiles (i.e. 4-quantiles). */
  public static Scale quartiles() {
    return scale(4);
  }

  /** Specifies the computation of percentiles (i.e. 100-quantiles). */
  public static Scale percentiles() {
    return scale(100);
  }

  /**
   * Specifies the computation of q-quantiles.
   *
   * @param scale the scale for the quantiles to be calculated, i.e. the q of the q-quantiles,
   *     which must be positive
   */
  public static Scale scale(int scale) {
    return new Scale(scale);
  }

  /**
   * Describes the point in a fluent API chain where only the scale (i.e. the q in q-quantiles)
   * has been specified.
   *
   * @since 20.0
   */
  public static final class Scale {

    private final int scale;

    private Scale(int scale) {
      checkArgument(scale > 0, "Quantile scale must be positive");
      this.scale = scale;
    }

    /**
     * Specifies a single quantile index to be calculated, i.e. the k in the kth q-quantile.
     *
     * @param index the quantile index, which must be in the inclusive range [0, q] for
     *     q-quantiles
     */
    public ScaleAndIndex index(int index) {
      return new ScaleAndIndex(scale, index);
    }

    /**
     * Specifies multiple quantile indexes to be calculated, each index being the k in the kth
     * q-quantile.
     *
     * @param indexes the quantile indexes, each of which must be in the inclusive range [0, q]
     *     for q-quantiles; the order of the indexes is unimportant, duplicates will be ignored,
     *     and the set will be snapshotted when this method is called
     * @throws IllegalArgumentException if {@code indexes} is empty
     */
    public ScaleAndIndexes indexes(int... indexes) {
      return new ScaleAndIndexes(scale, indexes.clone());
    }

    /**
     * Specifies multiple quantile indexes to be calculated, each index being the k in the kth
     * q-quantile.
     *
     * @param indexes the quantile indexes, each of which must be in the inclusive range [0, q]
     *     for q-quantiles; the order of the indexes is unimportant, duplicates will be ignored,
     *     and the set will be snapshotted when this method is called
     * @throws IllegalArgumentException if {@code indexes} is empty
     */
    public ScaleAndIndexes indexes(Collection<Integer> indexes) {
      return new ScaleAndIndexes(scale, Ints.toArray(indexes));
    }
  }

  /**
   * Describes the point in a fluent API chain where the scale and a single quantile index (i.e.
   * the q and the k in the kth q-quantile) have been specified.
   *
   * @since 20.0
   */
  public static final class ScaleAndIndex {

    private final int scale;
    private final int index;

    private ScaleAndIndex(int scale, int index) {
      checkIndex(index, scale);
      this.scale = scale;
      this.index = index;
    }

    /**
     * Computes the quantile value of the given dataset.
     *
     * @param dataset the dataset to do the calculation on, which must be non-empty, which will
     *     be cast to doubles (with any associated loss of precision), and which will not be
     *     mutated by this call (it is copied instead)
     * @return the quantile value
     */
    public double compute(Collection<? extends Number> dataset) {
      return computeInPlace(Doubles.toArray(dataset));
    }

    /**
     * Computes the quantile value of the given dataset.
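     *
     * <p>For example, to compute the upper quartile of an existing array (the variable name is
     * illustrative):
     *
     * <pre>{@code
     * double upperQuartile = quartiles().index(3).compute(myDataset);
     * }</pre>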
     *
     * @param dataset the dataset to do the calculation on, which must be non-empty, which will
     *     not be mutated by this call (it is copied instead)
     * @return the quantile value
     */
    public double compute(double... dataset) {
      return computeInPlace(dataset.clone());
    }

    /**
     * Computes the quantile value of the given dataset.
     *
     * @param dataset the dataset to do the calculation on, which must be non-empty, which will
     *     be cast to doubles (with any associated loss of precision), and which will not be
     *     mutated by this call (it is copied instead)
     * @return the quantile value
     */
    public double compute(long... dataset) {
      return computeInPlace(longsToDoubles(dataset));
    }

    /**
     * Computes the quantile value of the given dataset.
     *
     * @param dataset the dataset to do the calculation on, which must be non-empty, which will
     *     be cast to doubles, and which will not be mutated by this call (it is copied instead)
     * @return the quantile value
     */
    public double compute(int... dataset) {
      return computeInPlace(intsToDoubles(dataset));
    }

    /**
     * Computes the quantile value of the given dataset, performing the computation in-place.
     *
     * @param dataset the dataset to do the calculation on, which must be non-empty, and which
     *     will be arbitrarily reordered by this method call
     * @return the quantile value
     */
    public double computeInPlace(double... dataset) {
      checkArgument(dataset.length > 0, "Cannot calculate quantiles of an empty dataset");
      if (containsNaN(dataset)) {
        return NaN;
      }

      // Calculate the quotient and remainder in the integer division x = k * (N-1) / q, i.e.
      // index * (dataset.length - 1) / scale. If there is no remainder, we can just find the
      // value whose index in the sorted dataset equals the quotient; if there is a remainder, we
      // interpolate between that and the next value.

      // Since index and (dataset.length - 1) are non-negative ints, their product can be
      // expressed as a long, without risk of overflow:
      long numerator = (long) index * (dataset.length - 1);
      // Since scale is a positive int, index is in [0, scale], and (dataset.length - 1) is a
      // non-negative int, we can do long-arithmetic on index * (dataset.length - 1) / scale to
      // get a rounded ratio and a remainder which can be expressed as ints, without risk of
      // overflow:
      int quotient = (int) LongMath.divide(numerator, scale, RoundingMode.DOWN);
      int remainder = (int) (numerator - (long) quotient * scale);
      selectInPlace(quotient, dataset, 0, dataset.length - 1);
      if (remainder == 0) {
        return dataset[quotient];
      } else {
        selectInPlace(quotient + 1, dataset, quotient + 1, dataset.length - 1);
        return interpolate(dataset[quotient], dataset[quotient + 1], remainder, scale);
      }
    }
  }

  /**
   * Describes the point in a fluent API chain where the scale and multiple quantile indexes
   * (i.e. the q and a set of values for the k in the kth q-quantile) have been specified.
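   *
   * <p>For example (the dataset variable is illustrative):
   *
   * <pre>{@code
   * Map<Integer, Double> deciles = scale(10).indexes(1, 5, 9).compute(myDataset);
   * }</pre>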
311 * 312 * @since 20.0 313 */ 314 public static final class ScaleAndIndexes { 315 316 private final int scale; 317 private final int[] indexes; 318 319 private ScaleAndIndexes(int scale, int[] indexes) { 320 for (int index : indexes) { 321 checkIndex(index, scale); 322 } 323 checkArgument(indexes.length > 0, "Indexes must be a non empty array"); 324 this.scale = scale; 325 this.indexes = indexes; 326 } 327 328 /** 329 * Computes the quantile values of the given dataset. 330 * 331 * @param dataset the dataset to do the calculation on, which must be non-empty, which will be 332 * cast to doubles (with any associated lost of precision), and which will not be mutated by 333 * this call (it is copied instead) 334 * @return an unmodifiable, ordered map of results: the keys will be the specified quantile 335 * indexes, and the values the corresponding quantile values. When iterating, entries in the 336 * map are ordered by quantile index in the same order they were passed to the {@code 337 * indexes} method. 338 */ 339 public Map<Integer, Double> compute(Collection<? extends Number> dataset) { 340 return computeInPlace(Doubles.toArray(dataset)); 341 } 342 343 /** 344 * Computes the quantile values of the given dataset. 345 * 346 * @param dataset the dataset to do the calculation on, which must be non-empty, which will not 347 * be mutated by this call (it is copied instead) 348 * @return an unmodifiable, ordered map of results: the keys will be the specified quantile 349 * indexes, and the values the corresponding quantile values. When iterating, entries in the 350 * map are ordered by quantile index in the same order they were passed to the {@code 351 * indexes} method. 352 */ 353 public Map<Integer, Double> compute(double... dataset) { 354 return computeInPlace(dataset.clone()); 355 } 356 357 /** 358 * Computes the quantile values of the given dataset. 359 * 360 * @param dataset the dataset to do the calculation on, which must be non-empty, which will be 361 * cast to doubles (with any associated lost of precision), and which will not be mutated by 362 * this call (it is copied instead) 363 * @return an unmodifiable, ordered map of results: the keys will be the specified quantile 364 * indexes, and the values the corresponding quantile values. When iterating, entries in the 365 * map are ordered by quantile index in the same order they were passed to the {@code 366 * indexes} method. 367 */ 368 public Map<Integer, Double> compute(long... dataset) { 369 return computeInPlace(longsToDoubles(dataset)); 370 } 371 372 /** 373 * Computes the quantile values of the given dataset. 374 * 375 * @param dataset the dataset to do the calculation on, which must be non-empty, which will be 376 * cast to doubles, and which will not be mutated by this call (it is copied instead) 377 * @return an unmodifiable, ordered map of results: the keys will be the specified quantile 378 * indexes, and the values the corresponding quantile values. When iterating, entries in the 379 * map are ordered by quantile index in the same order they were passed to the {@code 380 * indexes} method. 381 */ 382 public Map<Integer, Double> compute(int... dataset) { 383 return computeInPlace(intsToDoubles(dataset)); 384 } 385 386 /** 387 * Computes the quantile values of the given dataset, performing the computation in-place. 
388 * 389 * @param dataset the dataset to do the calculation on, which must be non-empty, and which will 390 * be arbitrarily reordered by this method call 391 * @return an unmodifiable, ordered map of results: the keys will be the specified quantile 392 * indexes, and the values the corresponding quantile values. When iterating, entries in the 393 * map are ordered by quantile index in the same order that the indexes were passed to the 394 * {@code indexes} method. 395 */ 396 public Map<Integer, Double> computeInPlace(double... dataset) { 397 checkArgument(dataset.length > 0, "Cannot calculate quantiles of an empty dataset"); 398 if (containsNaN(dataset)) { 399 Map<Integer, Double> nanMap = new LinkedHashMap<>(); 400 for (int index : indexes) { 401 nanMap.put(index, NaN); 402 } 403 return unmodifiableMap(nanMap); 404 } 405 406 // Calculate the quotients and remainders in the integer division x = k * (N - 1) / q, i.e. 407 // index * (dataset.length - 1) / scale for each index in indexes. For each, if there is no 408 // remainder, we can just select the value whose index in the sorted dataset equals the 409 // quotient; if there is a remainder, we interpolate between that and the next value. 410 411 int[] quotients = new int[indexes.length]; 412 int[] remainders = new int[indexes.length]; 413 // The indexes to select. In the worst case, we'll need one each side of each quantile. 414 int[] requiredSelections = new int[indexes.length * 2]; 415 int requiredSelectionsCount = 0; 416 for (int i = 0; i < indexes.length; i++) { 417 // Since index and (dataset.length - 1) are non-negative ints, their product can be 418 // expressed as a long, without risk of overflow: 419 long numerator = (long) indexes[i] * (dataset.length - 1); 420 // Since scale is a positive int, index is in [0, scale], and (dataset.length - 1) is a 421 // non-negative int, we can do long-arithmetic on index * (dataset.length - 1) / scale to 422 // get a rounded ratio and a remainder which can be expressed as ints, without risk of 423 // overflow: 424 int quotient = (int) LongMath.divide(numerator, scale, RoundingMode.DOWN); 425 int remainder = (int) (numerator - (long) quotient * scale); 426 quotients[i] = quotient; 427 remainders[i] = remainder; 428 requiredSelections[requiredSelectionsCount] = quotient; 429 requiredSelectionsCount++; 430 if (remainder != 0) { 431 requiredSelections[requiredSelectionsCount] = quotient + 1; 432 requiredSelectionsCount++; 433 } 434 } 435 sort(requiredSelections, 0, requiredSelectionsCount); 436 selectAllInPlace( 437 requiredSelections, 0, requiredSelectionsCount - 1, dataset, 0, dataset.length - 1); 438 Map<Integer, Double> ret = new LinkedHashMap<>(); 439 for (int i = 0; i < indexes.length; i++) { 440 int quotient = quotients[i]; 441 int remainder = remainders[i]; 442 if (remainder == 0) { 443 ret.put(indexes[i], dataset[quotient]); 444 } else { 445 ret.put( 446 indexes[i], interpolate(dataset[quotient], dataset[quotient + 1], remainder, scale)); 447 } 448 } 449 return unmodifiableMap(ret); 450 } 451 } 452 453 /** Returns whether any of the values in {@code dataset} are {@code NaN}. */ 454 private static boolean containsNaN(double... dataset) { 455 for (double value : dataset) { 456 if (Double.isNaN(value)) { 457 return true; 458 } 459 } 460 return false; 461 } 462 463 /** 464 * Returns a value a fraction {@code (remainder / scale)} of the way between {@code lower} and 465 * {@code upper}. Assumes that {@code lower <= upper}. Correctly handles infinities (but not 466 * {@code NaN}). 
467 */ 468 private static double interpolate(double lower, double upper, double remainder, double scale) { 469 if (lower == NEGATIVE_INFINITY) { 470 if (upper == POSITIVE_INFINITY) { 471 // Return NaN when lower == NEGATIVE_INFINITY and upper == POSITIVE_INFINITY: 472 return NaN; 473 } 474 // Return NEGATIVE_INFINITY when NEGATIVE_INFINITY == lower <= upper < POSITIVE_INFINITY: 475 return NEGATIVE_INFINITY; 476 } 477 if (upper == POSITIVE_INFINITY) { 478 // Return POSITIVE_INFINITY when NEGATIVE_INFINITY < lower <= upper == POSITIVE_INFINITY: 479 return POSITIVE_INFINITY; 480 } 481 return lower + (upper - lower) * remainder / scale; 482 } 483 484 private static void checkIndex(int index, int scale) { 485 if (index < 0 || index > scale) { 486 throw new IllegalArgumentException( 487 "Quantile indexes must be between 0 and the scale, which is " + scale); 488 } 489 } 490 491 private static double[] longsToDoubles(long[] longs) { 492 int len = longs.length; 493 double[] doubles = new double[len]; 494 for (int i = 0; i < len; i++) { 495 doubles[i] = longs[i]; 496 } 497 return doubles; 498 } 499 500 private static double[] intsToDoubles(int[] ints) { 501 int len = ints.length; 502 double[] doubles = new double[len]; 503 for (int i = 0; i < len; i++) { 504 doubles[i] = ints[i]; 505 } 506 return doubles; 507 } 508 509 /** 510 * Performs an in-place selection to find the element which would appear at a given index in a 511 * dataset if it were sorted. The following preconditions should hold: 512 * 513 * <ul> 514 * <li>{@code required}, {@code from}, and {@code to} should all be indexes into {@code array}; 515 * <li>{@code required} should be in the range [{@code from}, {@code to}]; 516 * <li>all the values with indexes in the range [0, {@code from}) should be less than or equal 517 * to all the values with indexes in the range [{@code from}, {@code to}]; 518 * <li>all the values with indexes in the range ({@code to}, {@code array.length - 1}] should be 519 * greater than or equal to all the values with indexes in the range [{@code from}, {@code 520 * to}]. 521 * </ul> 522 * 523 * This method will reorder the values with indexes in the range [{@code from}, {@code to}] such 524 * that all the values with indexes in the range [{@code from}, {@code required}) are less than or 525 * equal to the value with index {@code required}, and all the values with indexes in the range 526 * ({@code required}, {@code to}] are greater than or equal to that value. Therefore, the value at 527 * {@code required} is the value which would appear at that index in the sorted dataset. 528 */ 529 private static void selectInPlace(int required, double[] array, int from, int to) { 530 // If we are looking for the least element in the range, we can just do a linear search for it. 531 // (We will hit this whenever we are doing quantile interpolation: our first selection finds 532 // the lower value, our second one finds the upper value by looking for the next least element.) 533 if (required == from) { 534 int min = from; 535 for (int index = from + 1; index <= to; index++) { 536 if (array[min] > array[index]) { 537 min = index; 538 } 539 } 540 if (min != from) { 541 swap(array, min, from); 542 } 543 return; 544 } 545 546 // Let's play quickselect! We'll repeatedly partition the range [from, to] containing the 547 // required element, as long as it has more than one element. 
    while (to > from) {
      int partitionPoint = partition(array, from, to);
      if (partitionPoint >= required) {
        to = partitionPoint - 1;
      }
      if (partitionPoint <= required) {
        from = partitionPoint + 1;
      }
    }
  }

  /**
   * Performs a partition operation on the slice of {@code array} with elements in the range
   * [{@code from}, {@code to}]. Uses the median of {@code from}, {@code to}, and the midpoint
   * between them as a pivot. Returns the index which the slice is partitioned around, i.e. if it
   * returns {@code ret} then we know that the values with indexes in [{@code from}, {@code ret})
   * are less than or equal to the value at {@code ret} and the values with indexes in ({@code
   * ret}, {@code to}] are greater than or equal to that.
   */
  private static int partition(double[] array, int from, int to) {
    // Select a pivot, and move it to the start of the slice i.e. to index from.
    movePivotToStartOfSlice(array, from, to);
    double pivot = array[from];

    // Move all elements with indexes in (from, to] which are greater than the pivot to the end
    // of the array. Keep track of where those elements begin.
    int partitionPoint = to;
    for (int i = to; i > from; i--) {
      if (array[i] > pivot) {
        swap(array, partitionPoint, i);
        partitionPoint--;
      }
    }

    // We now know that all elements with indexes in (from, partitionPoint] are less than or
    // equal to the pivot at from, and all elements with indexes in (partitionPoint, to] are
    // greater than it. We swap the pivot into partitionPoint and we know the array is
    // partitioned around that.
    swap(array, from, partitionPoint);
    return partitionPoint;
  }

  /**
   * Selects the pivot to use, namely the median of the values at {@code from}, {@code to}, and
   * halfway between the two (rounded down), from {@code array}, and ensures (by swapping
   * elements if necessary) that that pivot value appears at the start of the slice i.e. at
   * {@code from}. Expects that {@code from} is strictly less than {@code to}.
   */
  private static void movePivotToStartOfSlice(double[] array, int from, int to) {
    int mid = (from + to) >>> 1;
    // We want to make a swap such that either array[to] <= array[from] <= array[mid], or
    // array[mid] <= array[from] <= array[to]. We know that from < to, so we know mid < to
    // (although it's possible that mid == from, if to == from + 1). Note that the postcondition
    // would be impossible to fulfil if mid == to unless we also have array[from] == array[to].
    boolean toLessThanMid = (array[to] < array[mid]);
    boolean midLessThanFrom = (array[mid] < array[from]);
    boolean toLessThanFrom = (array[to] < array[from]);
    if (toLessThanMid == midLessThanFrom) {
      // Either array[to] < array[mid] < array[from] or array[from] <= array[mid] <= array[to].
      swap(array, mid, from);
    } else if (toLessThanMid != toLessThanFrom) {
      // Either array[from] <= array[to] < array[mid] or array[mid] <= array[to] < array[from].
      swap(array, from, to);
    }
    // The postcondition now holds. So the median, our chosen pivot, is at from.
  }

  /**
   * Performs an in-place selection, like {@link #selectInPlace}, to select all the indexes
   * {@code allRequired[i]} for {@code i} in the range [{@code requiredFrom}, {@code
   * requiredTo}]. These indexes must be sorted in the array and must all be in the range [{@code
   * from}, {@code to}].
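   *
   * <p>For example (an illustrative trace): selecting {@code allRequired = {2, 5, 9}} over the
   * whole of an array of length 12 first selects index 5, the required index closest to the
   * center of [0, 11], and then recursively selects index 2 within [0, 4] and index 9 within
   * [6, 11].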
618 */ 619 private static void selectAllInPlace( 620 int[] allRequired, int requiredFrom, int requiredTo, double[] array, int from, int to) { 621 // Choose the first selection to do... 622 int requiredChosen = chooseNextSelection(allRequired, requiredFrom, requiredTo, from, to); 623 int required = allRequired[requiredChosen]; 624 625 // ...do the first selection... 626 selectInPlace(required, array, from, to); 627 628 // ...then recursively perform the selections in the range below... 629 int requiredBelow = requiredChosen - 1; 630 while (requiredBelow >= requiredFrom && allRequired[requiredBelow] == required) { 631 requiredBelow--; // skip duplicates of required in the range below 632 } 633 if (requiredBelow >= requiredFrom) { 634 selectAllInPlace(allRequired, requiredFrom, requiredBelow, array, from, required - 1); 635 } 636 637 // ...and then recursively perform the selections in the range above. 638 int requiredAbove = requiredChosen + 1; 639 while (requiredAbove <= requiredTo && allRequired[requiredAbove] == required) { 640 requiredAbove++; // skip duplicates of required in the range above 641 } 642 if (requiredAbove <= requiredTo) { 643 selectAllInPlace(allRequired, requiredAbove, requiredTo, array, required + 1, to); 644 } 645 } 646 647 /** 648 * Chooses the next selection to do from the required selections. It is required that the array 649 * {@code allRequired} is sorted and that {@code allRequired[i]} are in the range [{@code from}, 650 * {@code to}] for all {@code i} in the range [{@code requiredFrom}, {@code requiredTo}]. The 651 * value returned by this method is the {@code i} in that range such that {@code allRequired[i]} 652 * is as close as possible to the center of the range [{@code from}, {@code to}]. Choosing the 653 * value closest to the center of the range first is the most efficient strategy because it 654 * minimizes the size of the subranges from which the remaining selections must be done. 655 */ 656 private static int chooseNextSelection( 657 int[] allRequired, int requiredFrom, int requiredTo, int from, int to) { 658 if (requiredFrom == requiredTo) { 659 return requiredFrom; // only one thing to choose, so choose it 660 } 661 662 // Find the center and round down. The true center is either centerFloor or halfway between 663 // centerFloor and centerFloor + 1. 664 int centerFloor = (from + to) >>> 1; 665 666 // Do a binary search until we're down to the range of two which encloses centerFloor (unless 667 // all values are lower or higher than centerFloor, in which case we find the two highest or 668 // lowest respectively). If centerFloor is in allRequired, we will definitely find it. If not, 669 // but centerFloor + 1 is, we'll definitely find that. The closest value to the true (unrounded) 670 // center will be at either low or high. 671 int low = requiredFrom; 672 int high = requiredTo; 673 while (high > low + 1) { 674 int mid = (low + high) >>> 1; 675 if (allRequired[mid] > centerFloor) { 676 high = mid; 677 } else if (allRequired[mid] < centerFloor) { 678 low = mid; 679 } else { 680 return mid; // allRequired[mid] = centerFloor, so we can't get closer than that 681 } 682 } 683 684 // Now pick the closest of the two candidates. Note that there is no rounding here. 685 if (from + to - allRequired[low] - allRequired[high] > 0) { 686 return high; 687 } else { 688 return low; 689 } 690 } 691 692 /** Swaps the values at {@code i} and {@code j} in {@code array}. 
   */
  private static void swap(double[] array, int i, int j) {
    double temp = array[i];
    array[i] = array[j];
    array[j] = temp;
  }
}