package sorting

import "math/rand"

// Quicksort is popular because it is not difficult to implement, works well for
// a variety of different kinds of input data, and is substantially faster than
// any other sorting method in typical applications. It is in-place (uses only a
// small auxiliary stack), requires time proportional to N log N on average to
// sort N items, and has an extremely short inner loop.
// The basic algorithm
// Quicksort is a divide-and-conquer method for sorting. It works by partitioning
// an array into two parts, then sorting the parts independently.
// The crux of the method is the partitioning process, which rearranges the
// array to make the following three conditions hold:
// 1. The entry a[j] is in its final place in the array, for some j
// 2. No entry in a[lo] through a[j-1] is greater than a[j]
// 3. No entry in a[j+1] through a[hi] is less than a[j]
// We achieve a complete sort by partitioning, then recursively applying the
// method to the sub-arrays. It is a randomized algorithm, because it randomly
// shuffles the array before sorting it.
// Partitioning.
// To complete the implementation, we need to implement the partitioning. We
// use the following general strategy:
// First, we arbitrarily choose a[lo] to be the partitioning item, the one that
// will go into its final position.
// Next, we scan from the left end of the array until we find an entry that is
// greater than (or equal to) the partitioning item, and we scan from the right
// end of the array until we find an entry less than (or equal to) the partitioning
// item.
// The two items that stopped the scans are out of place in the final partitioned
// array, so we exchange them. When the scan indices cross, all that we need to
// do to complete the partitioning process is to exchange the partitioning item
// a[lo] with the rightmost entry of the left subarray (a[j]) and return its index j.
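//
// For instance, tracing this partitioning scheme on the array [5 9 2 7 3] with
// a[lo] = 5 as the partitioning item: the left scan stops on 9, the right scan
// stops on 3, and the two are exchanged, giving [5 3 2 7 9]; the scans then stop
// on 7 (i = 3) and 2 (j = 2), the indices have crossed, so the partitioning item
// is exchanged with a[j], giving [2 3 5 7 9] with 5 in its final position j = 2.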
func Quicksort(x Sortable) {
	// for (mostly) ordered items, shuffle is important
	// see cmd/sorting/sort_compare.go
	shuffle(x)
	quicksort(x, 0, x.Len()-1)
}

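// shuffle randomly permutes x in place via rand.Shuffle, so that the expected
// N log N running time of quicksort holds regardless of the initial order of
// the input.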
func shuffle(x Sortable) {
	rand.Shuffle(x.Len(), func(i, j int) {
		x.Swap(i, j)
	})
}

// quicksort the subarray from x[lo] to x[hi]
func quicksort(x Sortable, lo, hi int) {
	if hi <= lo {
		return
	}
	j := partition(x, lo, hi)
	quicksort(x, lo, j-1)
	quicksort(x, j+1, hi)
}

// partition the subarray x[lo..hi]
// so that x[lo..j-1] <= x[j] <= x[j+1..hi]
// and return the index j
func partition(x Sortable, lo, hi int) int {
	i, j := lo+1, hi
	for {
		// find item on lo to swap
		for ; x.Less(i, lo); i++ {
			if i == hi {
				break
			}
		}
		// find item on hi to swap
		for ; x.Less(lo, j); j-- {
			if j == lo {
				break // redundant since x[lo] acts as sentinel
			}
		}
		// check if pointers cross
		if i >= j {
			break
		}
		x.Swap(i, j)
	}
	// put partitioning item at x[j]
	x.Swap(lo, j)
	// now, x[lo..j-1] <= x[j] <= x[j+1..hi]
	return j
}

type Quick struct{}

func NewQuick() Sorter {
	return Quick{}
}

// Implements Sorter
func (s Quick) SortInts(x []int) {
	Quicksort(IntSortSlice(x))
}

func (s Quick) SortFloat64s(x []float64) {
	Quicksort(Float64SortSlice(x))
}

func (s Quick) SortStrings(x []string) {
	Quicksort(StringSortSlice(x))
}

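// A minimal usage sketch (illustrative, not part of the original file): callers
// can go through the Sorter value returned by NewQuick, or call Quicksort
// directly on one of the slice adapters used above, e.g.
//
//	s := NewQuick()
//	xs := []int{5, 3, 8, 1}
//	s.SortInts(xs) // xs is now [1 3 5 8]
//
//	ys := []string{"pear", "apple", "fig"}
//	Quicksort(StringSortSlice(ys)) // ys is now [apple fig pear]
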
// Implementation details. There are several subtle issues with respect to
// implementing quicksort that are reflected in this code and worthy of mention.
// 1. Partitioning in-place.
// If we use an extra array, partitioning is easy to implement, but not so much
// easier that it is worth the extra cost of copying the partitioned version
// back into the original.
// 2. Staying in bounds.
// If the smallest item or the largest item in the array is the partitioning item,
// we have to take care that the pointers do not run off the left or right ends
// of the array, respectively.
// 3. Preserving randomness.
// The random shuffle puts the array in random order. Since it treats all items in
// the sub-arrays uniformly, this implementation has the property that its two
// sub-arrays are also in random order. This fact is crucial to the algorithm's
// predictability. An alternate way to preserve randomness is to choose a random
// item for partitioning within partition() (see the sketch after these notes).
// 4. Terminating the loop.
// Properly testing whether the pointers have crossed is a bit trickier than it
// might seem at first glance. A common error is to fail to take into account
// that the array might contain other keys with the same value as the partitioning item.
// 5. Handling items with keys equal to the partitioning item's key.
// It is best to stop the left scan on items with keys greater than or equal to
// the partitioning item's key and the right scan on items with keys less than or
// equal to the partitioning item's key. Even though this policy might seem to create
// unnecessary exchanges involving items with keys equal to the partitioning
// item's key, it is crucial to avoid quadratic running time in certain
// typical applications.
// 6. Terminating the recursion.
// A common mistake in implementing quicksort involves not ensuring that one
// item is always put into position, then falling into an infinite recursive
// loop when the partitioning item happens to be the largest or smallest item
// in the array.
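
// A minimal sketch of the alternative mentioned in note 3 above (hypothetical,
// not part of the original implementation): rather than shuffling the whole
// array up front, swap a randomly chosen entry of x[lo..hi] into the pivot
// position immediately before partitioning.
func partitionRandomPivot(x Sortable, lo, hi int) int {
	// hi-lo+1 is the size of the subarray; hi > lo whenever quicksort calls partition
	x.Swap(lo, lo+rand.Intn(hi-lo+1)) // move a random entry to x[lo]
	return partition(x, lo, hi)
}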