1997
DOI: 10.1002/(sici)1097-024x(199708)27:8<983::aid-spe117>3.0.co;2-#
Introspective Sorting and Selection Algorithms

Abstract: SUMMARY: Quicksort is the preferred in-place sorting algorithm in many contexts, since its average computing time on uniformly distributed inputs is Θ(N log N), and it is in fact faster than most other sorting algorithms on most inputs. Its drawback is that its worst-case time bound is Θ(N²). Previous attempts to protect against the worst case by improving the way quicksort chooses pivot elements for partitioning have increased the average computing time too much: one might as well use heapsort, which has a Θ(…
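The idea the abstract truncates is that introsort runs quicksort but tracks recursion depth, switching to heapsort once the depth exceeds roughly 2·log₂(N), so the worst case stays Θ(N log N) while average performance remains quicksort-like. A minimal illustrative sketch (not the paper's actual code; the function names, the size-16 insertion-sort cutoff, and the middle-element pivot are assumptions for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Recursive introsort sketch: quicksort with a depth limit.
// When the limit reaches zero, fall back to heapsort for the
// current subrange, bounding worst-case time at O(N log N).
void introsort_impl(std::vector<int>& a, int lo, int hi, int depth_limit) {
    const int n = hi - lo;
    if (n <= 16) {                       // small range: insertion sort
        for (int i = lo + 1; i < hi; ++i)
            for (int j = i; j > lo && a[j] < a[j - 1]; --j)
                std::swap(a[j], a[j - 1]);
        return;
    }
    if (depth_limit == 0) {              // recursion too deep: heapsort
        std::make_heap(a.begin() + lo, a.begin() + hi);
        std::sort_heap(a.begin() + lo, a.begin() + hi);
        return;
    }
    int pivot = a[lo + n / 2];           // partition around middle element
    int i = lo, j = hi - 1;
    while (i <= j) {
        while (a[i] < pivot) ++i;
        while (a[j] > pivot) --j;
        if (i <= j) std::swap(a[i++], a[j--]);
    }
    introsort_impl(a, lo, j + 1, depth_limit - 1);
    introsort_impl(a, i, hi, depth_limit - 1);
}

void introsort(std::vector<int>& a) {
    int limit = 2 * static_cast<int>(
        std::log2(std::max<std::size_t>(a.size(), 1)) + 1);
    introsort_impl(a, 0, static_cast<int>(a.size()), limit);
}
```

In practice the depth limit is almost never hit on random inputs, so the heapsort fallback costs nothing on average; it only engages on adversarial or degenerate pivot sequences.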


Cited by 182 publications (145 citation statements)
References 9 publications
“…An adaptive fallback using a runtime check is a standard heuristic technique to avoid worst-case performance in many algorithms. For example, introsort [19], used in the STL's std::sort library method, uses quicksort with an adaptive fallback to heapsort to avoid the O(N²) worst-case performance of quicksort. From the results shown in Figure 10, for two input arrays with comparable sizes, we start execution with our SIMD algorithm using the block size setting of 4x4 (or 8x8 if we use STTNI on Xeon).…”
Section: Performance For Two Arrays Of Various Sizes (mentioning; confidence: 99%)
“…For the disk-enabled Vectorwise this is achieved by executing the query several times and reporting only the execution times of runs after the data was fully resident in RAM. In order to cover the most important scenarios, we report benchmark results using datasets …”
Section: Experimental Evaluation (mentioning; confidence: 99%)
“…Section 5, Figure 11). We therefore instantiated 32 threads to work on one relation with a total of 1600M (throughout the paper we use M = 2^20) tuples, each consisting of a 64-bit sort key and a 64-bit payload, in parallel. (1) We first chunked the relation and sorted the chunks of 50M tuples each as runs in parallel.…”
Section: Introduction (mentioning; confidence: 99%)
“…Introsort is based on Quicksort, but switches to heapsort when the recursion depth gets too large. Since it is highly dependent on the computer system and compiler used, we only included it to give a hint as to what could be gained by sorting on the GPU instead of on the CPU [19].…”
Section: Experimental Evaluation (mentioning; confidence: 99%)