
Commit 3f7aaa2

fix: broken links
1 parent 6f2cdf4 commit 3f7aaa2

14 files changed, +92 -92 lines changed


AVL Tree/README.markdown (+2 -2)

@@ -53,7 +53,7 @@ For the rotation we're using the terminology:
 * *RotationSubtree* - subtree of the *Pivot* upon the side of rotation
 * *OppositeSubtree* - subtree of the *Pivot* opposite the side of rotation
 
-Let take an example of balancing the unbalanced tree using *Right* (clockwise direction) rotation:
+Let take an example of balancing the unbalanced tree using *Right* (clockwise direction) rotation:
 
 ![Rotation1](Images/RotationStep1.jpg) ![Rotation2](Images/RotationStep2.jpg) ![Rotation3](Images/RotationStep3.jpg)
 
@@ -76,7 +76,7 @@ Insertion never needs more than 2 rotations. Removal might require up to __log(n
 
 ## The code
 
-Most of the code in [AVLTree.swift](AVLTree.swift) is just regular [binary search tree](../Binary Search Tree/) stuff. You'll find this in any implementation of a binary search tree. For example, searching the tree is exactly the same. The only things that an AVL tree does slightly differently are inserting and deleting the nodes.
+Most of the code in [AVLTree.swift](AVLTree.swift) is just regular [binary search tree](../Binary%20Search%20Tree/) stuff. You'll find this in any implementation of a binary search tree. For example, searching the tree is exactly the same. The only things that an AVL tree does slightly differently are inserting and deleting the nodes.
 
 > **Note:** If you're a bit fuzzy on the regular operations of a binary search tree, I suggest you [catch up on those first](../Binary%20Search%20Tree/). It will make the rest of the AVL tree easier to understand.
 
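For context, the rotation described in the hunk above can be sketched in a few lines. This is an illustrative sketch only, using a hypothetical bare-bones node type, not the code from AVLTree.swift:

```swift
// Illustrative sketch; `SimpleNode` and `rotateRight` are hypothetical and are
// not the types or methods defined in AVLTree.swift.
final class SimpleNode<T> {
  var value: T
  var left: SimpleNode?
  var right: SimpleNode?
  init(_ value: T) { self.value = value }
}

// Right (clockwise) rotation: the pivot is the root's left child. The pivot's
// right subtree is re-attached as the root's new left child, the old root
// becomes the pivot's right child, and the pivot is the new subtree root.
func rotateRight<T>(_ root: SimpleNode<T>) -> SimpleNode<T> {
  guard let pivot = root.left else { return root }  // nothing to rotate around
  root.left = pivot.right
  pivot.right = root
  return pivot
}
```

A real AVL tree additionally updates the stored heights (or balance factors) of the affected nodes after the rotation.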

Bounded Priority Queue/README.markdown (+1 -1)

@@ -26,7 +26,7 @@ Suppose that we wish to insert the element `G` with priority 0.1 into this BPQ.
 
 ## Implementation
 
-While a [heap](../Heap/) may be a really simple implementation for a priority queue, a sorted [linked list](../Linked List/) allows for **O(k)** insertion and **O(1)** deletion, where **k** is the bounding number of elements.
+While a [heap](../Heap/) may be a really simple implementation for a priority queue, a sorted [linked list](../Linked%20List/) allows for **O(k)** insertion and **O(1)** deletion, where **k** is the bounding number of elements.
 
 Here's how you could implement it in Swift:
 
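The hunk stops just before the README's own implementation. Purely as a rough sketch of the complexity claim above, and not the code that follows in the README, a bounded priority queue on a sorted singly linked list could look like this (all names here are hypothetical):

```swift
// Hypothetical sketch: enqueue() walks at most k nodes (O(k) insertion),
// while dequeue() only touches the head (O(1) deletion).
public final class BoundedPriorityQueue<T: Comparable> {
  private final class Node {
    var value: T
    var next: Node?
    init(_ value: T) { self.value = value }
  }

  private var head: Node?
  public private(set) var count = 0
  private let bound: Int

  public init(bound: Int) { self.bound = bound }

  public func enqueue(_ value: T) {
    // Walk the list so it stays sorted, highest priority first: O(k).
    var previous: Node? = nil
    var current = head
    while let node = current, node.value >= value {
      previous = node
      current = node.next
    }

    let newNode = Node(value)
    newNode.next = current
    if let previous = previous { previous.next = newNode } else { head = newNode }
    count += 1

    // Enforce the bound by dropping the lowest-priority element at the tail.
    if count > bound {
      var node = head
      while node?.next?.next != nil { node = node?.next }
      if node === head && head?.next == nil { head = nil } else { node?.next = nil }
      count -= 1
    }
  }

  // The highest-priority element is always the head, so removing it is O(1).
  public func dequeue() -> T? {
    guard let first = head else { return nil }
    head = first.next
    count -= 1
    return first.value
  }
}
```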

Count Occurrences/README.markdown (+3 -3)

@@ -36,7 +36,7 @@ func countOccurrencesOfKey(_ key: Int, inArray a: [Int]) -> Int {
     }
     return low
   }
-
+
   func rightBoundary() -> Int {
     var low = 0
     var high = a.count
@@ -50,12 +50,12 @@ func countOccurrencesOfKey(_ key: Int, inArray a: [Int]) -> Int {
     }
     return low
   }
-
+
   return rightBoundary() - leftBoundary()
 }
 ```
 
-Notice that the helper functions `leftBoundary()` and `rightBoundary()` are very similar to the [binary search](../Binary Search/) algorithm. The big difference is that they don't stop when they find the search key, but keep going.
+Notice that the helper functions `leftBoundary()` and `rightBoundary()` are very similar to the [binary search](../Binary%20Search/) algorithm. The big difference is that they don't stop when they find the search key, but keep going.
 
 To test this algorithm, copy the code to a playground and then do:
 
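For context, a small worked example (not part of the diff): assuming the full `countOccurrencesOfKey(_:inArray:)` from the README is in scope, the two boundaries bracket the run of matching keys and their difference is the count:

```swift
// Illustrative usage. In this sorted array the first 3 sits at index 3 and the
// element just past the last 3 sits at index 6, so the result is 6 - 3 = 3.
let sortedArray = [0, 1, 1, 3, 3, 3, 4, 6]
countOccurrencesOfKey(3, inArray: sortedArray)   // 3
```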

Depth-First Search/README.markdown (+3 -3)

@@ -40,7 +40,7 @@ func depthFirstSearch(_ graph: Graph, source: Node) -> [String] {
 }
 ```
 
-Where a [breadth-first search](../Breadth-First Search/) visits all immediate neighbors first, a depth-first search tries to go as deep down the tree or graph as it can.
+Where a [breadth-first search](../Breadth-First%20Search/) visits all immediate neighbors first, a depth-first search tries to go as deep down the tree or graph as it can.
 
 Put this code in a playground and test it like so:
 
@@ -71,13 +71,13 @@ print(nodesExplored)
 ```
 
 This will output: `["a", "b", "d", "e", "h", "f", "g", "c"]`
-
+
 ## What is DFS good for?
 
 Depth-first search can be used to solve many problems, for example:
 
 * Finding connected components of a sparse graph
-* [Topological sorting](../Topological Sort/) of nodes in a graph
+* [Topological sorting](../Topological%20Sort/) of nodes in a graph
 * Finding bridges of a graph (see: [Bridges](https://en.wikipedia.org/wiki/Bridge_(graph_theory)#Bridge-finding_algorithm))
 * And lots of others!
 
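As a side note (a simplified sketch, not the `Graph`/`Node` types or the `depthFirstSearch()` function from the README), the go-deep-first behaviour is easy to see in a recursive form over a plain adjacency list:

```swift
// Hypothetical adjacency-list representation; the graph below is only an
// example that is consistent with the output shown in the hunk above.
typealias AdjacencyList = [String: [String]]

func depthFirst(_ graph: AdjacencyList, from node: String,
                visited: inout Set<String>, order: inout [String]) {
  visited.insert(node)
  order.append(node)
  // Fully explore each unvisited neighbor before moving on to the next one --
  // this is what makes the search go as deep as it can first.
  for neighbor in graph[node, default: []] where !visited.contains(neighbor) {
    depthFirst(graph, from: neighbor, visited: &visited, order: &order)
  }
}

let exampleGraph: AdjacencyList = [
  "a": ["b", "c"], "b": ["d", "e"], "c": ["f", "g"], "d": [],
  "e": ["h"], "f": [], "g": [], "h": ["f", "g"]
]
var visited = Set<String>()
var order = [String]()
depthFirst(exampleGraph, from: "a", visited: &visited, order: &order)
// order == ["a", "b", "d", "e", "h", "f", "g", "c"]
```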

Deque/README.markdown (+21 -21)

@@ -9,43 +9,43 @@ Here is a very basic implementation of a deque in Swift:
 ```swift
 public struct Deque<T> {
   private var array = [T]()
-
+
   public var isEmpty: Bool {
     return array.isEmpty
   }
-
+
   public var count: Int {
     return array.count
   }
-
+
   public mutating func enqueue(_ element: T) {
     array.append(element)
   }
-
+
   public mutating func enqueueFront(_ element: T) {
     array.insert(element, atIndex: 0)
   }
-
+
   public mutating func dequeue() -> T? {
     if isEmpty {
       return nil
     } else {
       return array.removeFirst()
     }
   }
-
+
   public mutating func dequeueBack() -> T? {
     if isEmpty {
       return nil
     } else {
       return array.removeLast()
     }
   }
-
+
   public func peekFront() -> T? {
     return array.first
   }
-
+
   public func peekBack() -> T? {
     return array.last
   }
@@ -73,7 +73,7 @@ deque.dequeue() // 5
 This particular implementation of `Deque` is simple but not very efficient. Several operations are **O(n)**, notably `enqueueFront()` and `dequeue()`. I've included it only to show the principle of what a deque does.
 
 ## A more efficient version
-
+
 The reason that `dequeue()` and `enqueueFront()` are **O(n)** is that they work on the front of the array. If you remove an element at the front of an array, what happens is that all the remaining elements need to be shifted in memory.
 
 Let's say the deque's array contains the following items:
@@ -92,7 +92,7 @@ Likewise, inserting an element at the front of the array is expensive because it
 
 First, the elements `2`, `3`, and `4` are moved up by one position in the computer's memory, and then the new element `5` is inserted at the position where `2` used to be.
 
-Why is this not an issue at for `enqueue()` and `dequeueBack()`? Well, these operations are performed at the end of the array. The way resizable arrays are implemented in Swift is by reserving a certain amount of free space at the back.
+Why is this not an issue at for `enqueue()` and `dequeueBack()`? Well, these operations are performed at the end of the array. The way resizable arrays are implemented in Swift is by reserving a certain amount of free space at the back.
 
 Our initial array `[ 1, 2, 3, 4]` actually looks like this in memory:
 
@@ -120,26 +120,26 @@ public struct Deque<T> {
   private var head: Int
   private var capacity: Int
   private let originalCapacity:Int
-
+
   public init(_ capacity: Int = 10) {
     self.capacity = max(capacity, 1)
     originalCapacity = self.capacity
     array = [T?](repeating: nil, count: capacity)
     head = capacity
   }
-
+
   public var isEmpty: Bool {
     return count == 0
   }
-
+
   public var count: Int {
     return array.count - head
   }
-
+
   public mutating func enqueue(_ element: T) {
     array.append(element)
   }
-
+
   public mutating func enqueueFront(_ element: T) {
     // this is explained below
   }
@@ -155,15 +155,15 @@ public struct Deque<T> {
       return array.removeLast()
     }
   }
-
+
   public func peekFront() -> T? {
     if isEmpty {
       return nil
     } else {
       return array[head]
    }
   }
-
+
   public func peekBack() -> T? {
     if isEmpty {
       return nil
@@ -176,7 +176,7 @@ public struct Deque<T> {
 
 It still largely looks the same -- `enqueue()` and `dequeueBack()` haven't changed -- but there are also a few important differences. The array now stores objects of type `T?` instead of just `T` because we need some way to mark array elements as being empty.
 
-The `init` method allocates a new array that contains a certain number of `nil` values. This is the free room we have to work with at the beginning of the array. By default this creates 10 empty spots.
+The `init` method allocates a new array that contains a certain number of `nil` values. This is the free room we have to work with at the beginning of the array. By default this creates 10 empty spots.
 
 The `head` variable is the index in the array of the front-most object. Since the queue is currently empty, `head` points at an index beyond the end of the array.
 
@@ -219,7 +219,7 @@ Notice how the array has resized itself. There was no room to add the `1`, so Sw
 |
 head
 
-> **Note:** You won't see those empty spots at the back of the array when you `print(deque.array)`. This is because Swift hides them from you. Only the ones at the front of the array show up.
+> **Note:** You won't see those empty spots at the back of the array when you `print(deque.array)`. This is because Swift hides them from you. Only the ones at the front of the array show up.
 
 The `dequeue()` method does the opposite of `enqueueFront()`, it reads the value at `head`, sets the array element back to `nil`, and then moves `head` one position to the right:
 
@@ -250,7 +250,7 @@ There is one tiny problem... If you enqueue a lot of objects at the front, you'r
   }
 ```
 
-If `head` equals 0, there is no room left at the front. When that happens, we add a whole bunch of new `nil` elements to the array. This is an **O(n)** operation but since this cost gets divided over all the `enqueueFront()`s, each individual call to `enqueueFront()` is still **O(1)** on average.
+If `head` equals 0, there is no room left at the front. When that happens, we add a whole bunch of new `nil` elements to the array. This is an **O(n)** operation but since this cost gets divided over all the `enqueueFront()`s, each individual call to `enqueueFront()` is still **O(1)** on average.
 
 > **Note:** We also multiply the capacity by 2 each time this happens, so if your queue grows bigger and bigger, the resizing happens less often. This is also what Swift arrays automatically do at the back.
 
@@ -302,7 +302,7 @@ This way we can strike a balance between fast enqueuing and dequeuing at the fro
 
 ## See also
 
-Other ways to implement deque are by using a [doubly linked list](../Linked List/), a [circular buffer](../Ring Buffer/), or two [stacks](../Stack/) facing opposite directions.
+Other ways to implement deque are by using a [doubly linked list](../Linked%20List/), a [circular buffer](../Ring%20Buffer/), or two [stacks](../Stack/) facing opposite directions.
 
 [A fully-featured deque implementation in Swift](https://github.com/lorentey/Deque)
 
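The hunk above leaves `enqueueFront()` as `// this is explained below`. As a rough, self-contained sketch of the head-index idea the README describes (field names follow the hunk, but this is not guaranteed to match the README's final code), the front operations might look like this:

```swift
// Hedged sketch; `FrontBufferedDeque` is a made-up name so it isn't confused
// with the README's `Deque`.
public struct FrontBufferedDeque<T> {
  private var array: [T?]
  private var head: Int
  private var capacity: Int

  public init(_ capacity: Int = 10) {
    self.capacity = max(capacity, 1)
    array = [T?](repeating: nil, count: self.capacity)
    head = self.capacity
  }

  public var count: Int { return array.count - head }
  public var isEmpty: Bool { return count == 0 }

  public mutating func enqueue(_ element: T) {
    array.append(element)
  }

  public mutating func enqueueFront(_ element: T) {
    if head == 0 {
      // No free space left at the front: grow it. This is O(n), but the cost is
      // amortized over many calls, so enqueueFront() stays O(1) on average.
      capacity *= 2
      let emptySpace = [T?](repeating: nil, count: capacity)
      array.insert(contentsOf: emptySpace, at: 0)
      head = capacity
    }
    head -= 1
    array[head] = element
  }

  public mutating func dequeue() -> T? {
    guard head < array.count, let element = array[head] else { return nil }
    array[head] = nil   // mark the slot as empty again
    head += 1
    return element
  }
}

var deque = FrontBufferedDeque<Int>()
deque.enqueue(1)
deque.enqueueFront(2)
deque.dequeue()   // 2
```

The README's version also trims the unused front space again when it grows too large -- that appears to be what `originalCapacity` is for; that part is omitted here.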

Heap Sort/README.markdown (+1 -1)

@@ -40,7 +40,7 @@ And fix up the heap to make it valid max-heap again:
 
 As you can see, the largest items are making their way to the back. We repeat this process until we arrive at the root node and then the whole array is sorted.
 
-> **Note:** This process is very similar to [selection sort](../Selection Sort/), which repeatedly looks for the minimum item in the remainder of the array. Extracting the minimum or maximum value is what heaps are good at.
+> **Note:** This process is very similar to [selection sort](../Selection%20Sort/), which repeatedly looks for the minimum item in the remainder of the array. Extracting the minimum or maximum value is what heaps are good at.
 
 Performance of heap sort is **O(n lg n)** in best, worst, and average case. Because we modify the array directly, heap sort can be performed in-place. But it is not a stable sort: the relative order of identical elements is not preserved.
 
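As an aside (an illustrative sketch, not this repository's generic `Heap`-based implementation), the swap-and-fix-up loop described above looks roughly like this:

```swift
// Illustrative heap sort on an Int array. `shiftDown` sifts a[index] down until
// the max-heap property holds within the first `heapSize` elements.
func heapSort(_ a: inout [Int]) {
  func shiftDown(_ index: Int, heapSize: Int) {
    var parent = index
    while true {
      let left = 2*parent + 1
      let right = left + 1
      var candidate = parent
      if left < heapSize && a[left] > a[candidate] { candidate = left }
      if right < heapSize && a[right] > a[candidate] { candidate = right }
      if candidate == parent { return }
      a.swapAt(parent, candidate)
      parent = candidate
    }
  }

  // Build a max-heap out of the whole array.
  for i in stride(from: a.count/2 - 1, through: 0, by: -1) {
    shiftDown(i, heapSize: a.count)
  }

  // Repeatedly swap the maximum (index 0) with the last unsorted element and
  // fix up the remaining heap -- the largest items collect at the back.
  for end in stride(from: a.count - 1, through: 1, by: -1) {
    a.swapAt(0, end)
    shiftDown(0, heapSize: end)
  }
}

var numbers = [5, 13, 2, 25, 7, 17, 20, 8, 4]
heapSort(&numbers)   // [2, 4, 5, 7, 8, 13, 17, 20, 25]
```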

Huffman Coding/README.markdown (+14 -14)

@@ -16,7 +16,7 @@ If you count how often each byte appears, you can clearly see that some bytes oc
 c: 2 p: 1
 r: 2 e: 1
 n: 2 i: 1
-
+
 We can assign bit strings to each of these bytes. The more common a byte is, the fewer bits we assign to it. We might get something like this:
 
 space: 5 010 u: 1 11001
@@ -30,12 +30,12 @@ We can assign bit strings to each of these bytes. The more common a byte is, the
 
 Now if we replace the original bytes with these bit strings, the compressed output becomes:
 
-101 000 010 111 11001 0011 10001 010 0010 000 1001 11010 101
+101 000 010 111 11001 0011 10001 010 0010 000 1001 11010 101
 s o _ m u c h _ w o r d s
-
+
 010 0010 000 0010 010 111 11011 0110 01111 010 0011 000 111
 _ w o w _ m a n y _ c o m
-
+
 11000 1001 01110 101 101 10000 000 0110 0
 p r e s s i o n
 
@@ -57,7 +57,7 @@ The edges between the nodes either say "1" or "0". These correspond to the bit-e
 
 Compression is then a matter of looping through the input bytes, and for each byte traverse the tree from the root node to that byte's leaf node. Every time we take a left branch, we emit a 1-bit. When we take a right branch, we emit a 0-bit.
 
-For example, to go from the root node to `c`, we go right (`0`), right again (`0`), left (`1`), and left again (`1`). So the Huffman code for `c` is `0011`.
+For example, to go from the root node to `c`, we go right (`0`), right again (`0`), left (`1`), and left again (`1`). So the Huffman code for `c` is `0011`.
 
 Decompression works in exactly the opposite way. It reads the compressed bits one-by-one and traverses the tree until we get to a leaf node. The value of that leaf node is the uncompressed byte. For example, if the bits are `11010`, we start at the root and go left, left again, right, left, and a final right to end up at `d`.
 
@@ -137,7 +137,7 @@ Here are the definitions we need:
 ```swift
 class Huffman {
   typealias NodeIndex = Int
-
+
   struct Node {
     var count = 0
     var index: NodeIndex = -1
@@ -152,7 +152,7 @@ class Huffman {
 }
 ```
 
-The tree structure is stored in the `tree` array and will be made up of `Node` objects. Since this is a [binary tree](../Binary Tree/), each node needs two children, `left` and `right`, and a reference back to its `parent` node. Unlike a typical binary tree, however, these nodes don't to use pointers to refer to each other but simple integer indices in the `tree` array. (We also store the array `index` of the node itself; the reason for this will become clear later.)
+The tree structure is stored in the `tree` array and will be made up of `Node` objects. Since this is a [binary tree](../Binary%20Tree/), each node needs two children, `left` and `right`, and a reference back to its `parent` node. Unlike a typical binary tree, however, these nodes don't to use pointers to refer to each other but simple integer indices in the `tree` array. (We also store the array `index` of the node itself; the reason for this will become clear later.)
 
 Note that `tree` currently has room for 256 entries. These are for the leaf nodes because there are 256 possible byte values. Of course, not all of those may end up being used, depending on the input data. Later, we'll add more nodes as we build up the actual tree. For the moment there isn't a tree yet, just 256 separate leaf nodes with no connections between them. All the node counts are 0.
 
@@ -183,7 +183,7 @@ Instead, we'll add a method to export the frequency table without all the pieces
     var byte: UInt8 = 0
     var count = 0
   }
-
+
   func frequencyTable() -> [Freq] {
     var a = [Freq]()
     for i in 0..<256 where tree[i].count > 0 {
@@ -209,7 +209,7 @@ To build the tree, we do the following:
 2. Create a new parent node that links these two nodes together.
 3. This repeats over and over until only one node with no parent remains. This becomes the root node of the tree.
 
-This is an ideal place to use a [priority queue](../Priority Queue/). A priority queue is a data structure that is optimized so that finding the minimum value is always very fast. Here, we repeatedly need to find the node with the smallest count.
+This is an ideal place to use a [priority queue](../Priority%20Queue/). A priority queue is a data structure that is optimized so that finding the minimum value is always very fast. Here, we repeatedly need to find the node with the smallest count.
 
 The function `buildTree()` then becomes:
 
@@ -233,7 +233,7 @@ The function `buildTree()` then becomes:
 
       tree[node1.index].parent = parentNode.index // 4
       tree[node2.index].parent = parentNode.index
-
+
       queue.enqueue(parentNode) // 5
     }
 
@@ -286,7 +286,7 @@ Now that we know how to build the compression tree from the frequency table, we
   }
 ```
 
-This first calls `countByteFrequency()` to build the frequency table, then `buildTree()` to put together the compression tree. It also creates a `BitWriter` object for writing individual bits.
+This first calls `countByteFrequency()` to build the frequency table, then `buildTree()` to put together the compression tree. It also creates a `BitWriter` object for writing individual bits.
 
 Then it loops through the entire input and for each byte calls `traverseTree()`. That method will step through the tree nodes and for each node write a 1 or 0 bit. Finally, we return the `BitWriter`'s data object.
 
@@ -309,7 +309,7 @@ The interesting stuff happens in `traverseTree()`. This is a recursive method:
   }
 ```
 
-When we call this method from `compressData()`, the `nodeIndex` parameter is the array index of the leaf node for the byte that we're about to encode. This method recursively walks the tree from a leaf node up to the root, and then back again.
+When we call this method from `compressData()`, the `nodeIndex` parameter is the array index of the leaf node for the byte that we're about to encode. This method recursively walks the tree from a leaf node up to the root, and then back again.
 
 As we're going back from the root to the leaf node, we write a 1 bit or a 0 bit for every node we encounter. If a child is the left node, we emit a 1; if it's the right node, we emit a 0.
 
@@ -395,10 +395,10 @@ Here's how you would use the decompression method:
 
 ```swift
 let frequencyTable = huffman1.frequencyTable()
-
+
 let huffman2 = Huffman()
 let decompressedData = huffman2.decompressData(compressedData, frequencyTable: frequencyTable)
-
+
 let s2 = String(data: decompressedData, encoding: NSUTF8StringEncoding)!
 ```
 
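Purely for context (a simplified, self-contained sketch -- not the index-based `Node` array or the bit reader that the README's `Huffman` class uses), the decode walk described above can be illustrated like this:

```swift
// Hypothetical pointer-based tree and a plain [UInt8] of bits, used only to
// illustrate the walk: a 1-bit follows the left child, a 0-bit the right child,
// and reaching a leaf emits that leaf's byte and restarts at the root.
final class BitTreeNode {
  let byte: UInt8?          // non-nil only for leaf nodes
  let left: BitTreeNode?    // taken when the next bit is 1
  let right: BitTreeNode?   // taken when the next bit is 0
  init(byte: UInt8? = nil, left: BitTreeNode? = nil, right: BitTreeNode? = nil) {
    self.byte = byte
    self.left = left
    self.right = right
  }
}

func decode(bits: [UInt8], root: BitTreeNode) -> [UInt8] {
  var output = [UInt8]()
  var node = root
  for bit in bits {
    guard let next = (bit == 1 ? node.left : node.right) else { break }  // malformed input
    node = next
    if let byte = node.byte {
      output.append(byte)   // reached a leaf: emit the byte...
      node = root           // ...and start over for the next code
    }
  }
  return output
}
```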
