CSC111 Tutorial 9: Iterative Sorting Algorithms
In this week’s lecture, you learned about selection sort and insertion sort, two iterative sorting
algorithms. Today, you’ll get some more practice with these sorting algorithms, do some running-time
analysis, and develop a simple sorting visualization tool using pygame.
Part 1: Variations on selection sort and insertion sort
When you first learn a new algorithm like selection sort or insertion sort, it can be tempting to just
memorize the code for that algorithm. However, blindly memorizing code isn’t actually a good strategy
for gaining deep understanding of what that algorithm is doing, or how to implement it. So in the first
part of this tutorial, we’ll use a different strategy to reinforce your learning of these two sorting
algorithms: implementing variations of them.
Download tutorial9_part1.py (starter-files/tutorial9_part1.py) and open it. Inside, you’ll see our
implementations of selection sort and insertion sort from lecture, as well as a few new functions to
implement. Each of these functions can be implemented using a very similar approach to one of our two
sorting algorithms, but it’ll be up to you to figure out what changes you need to make.
As you complete each function, we recommend comparing your implementation against the code from
lecture, so that you can see both the similarities and differences in the code.
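For example, here is one possible kind of variation (purely illustrative; it is not one of the starter-file functions, and the lecture code may be structured differently): insertion sort modified to sort in non-increasing order, where the only conceptual change is the direction of the comparison.

def insertion_sort_descending(lst: list) -> None:
    """Sort lst in non-increasing order using insertion sort.

    An illustrative sketch only; the functions you actually need to implement
    are in tutorial9_part1.py.
    """
    for i in range(len(lst)):
        # Insert lst[i] into the sorted (descending) part lst[:i].
        j = i
        while j > 0 and lst[j - 1] < lst[j]:
            lst[j - 1], lst[j] = lst[j], lst[j - 1]
            j -= 1

Calling insertion_sort_descending on [3, 1, 4, 1, 5], for instance, mutates it into [5, 4, 3, 1, 1].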
Part 2: Running-time analysis for binary search
In the prep reading for this week, you learned about the binary search algorithm, which can search for
an item in a list faster than the standard linear search algorithm, but only works on sorted lists. Here’s
the implementation from Section 16.1 (https://www.teach.cs.toronto.edu/~csc110y/fall/notes/16-
sorting/01-sorting-intro.html):
def binary_search(lst: list, item: Any) -> bool:
    """Return whether item is in lst using the binary search algorithm.

    Preconditions:
        - is_sorted(lst)
    """
    b = 0
    e = len(lst)

    while b < e:
        # Loop invariants
        assert all(lst[i] < item for i in range(0, b))
        assert all(lst[i] > item for i in range(e, len(lst)))

        m = (b + e) // 2
        if item == lst[m]:
            return True
        elif item < lst[m]:
            e = m
        else:  # item > lst[m]
            b = m + 1

    # If the loop ends without finding the item, the item is not in the list.
    return False

The course notes briefly discussed the running time of binary search, but only at a high level. Intuitively,
at each loop iteration the size of the sublist being searched decreases by a factor of 2, and so there can be
at most (approximately) log₂ n iterations in total. But how do we formalize this intuition?
We run into trouble because we don’t have a predictable formula for the values of the variables b and e
after k iterations, since how the search range changes depends on the contents of lst and the item being
searched for.
We can reconcile the intuition with our more formal approach by explicitly introducing and analysing
the behaviour of a new variable. Specifically: we let r = e − b be a variable representing the size of the
search range (r stands for “range”).
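Before tackling the questions below, it can help to watch r shrink on a concrete input. Here is a small illustrative sketch (not part of the starter files) that records r = e − b at the start of each loop iteration:

from typing import Any


def binary_search_trace(lst: list, item: Any) -> list:
    """Run binary search on lst and return the value of r = e - b recorded at
    the start of each loop iteration.

    An illustrative sketch only, for exploring the running-time analysis.
    """
    ranges = []
    b = 0
    e = len(lst)
    while b < e:
        ranges.append(e - b)  # record r for this iteration
        m = (b + e) // 2
        if item == lst[m]:
            break
        elif item < lst[m]:
            e = m
        else:  # item > lst[m]
            b = m + 1
    return ranges

For example, binary_search_trace(list(range(1000)), -1) returns [1000, 500, 250, 125, 62, 31, 15, 7, 3, 1]: ten iterations, with each recorded value at most half of the previous one.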
Using this definition, answer the questions below. By the end of this part, you’ll find a good upper bound
on the worst-case running time of binary_search.
1. Let n represent the length of the input list lst. What is the initial value of r, in terms of n?
2. For what value(s) of r will the loop terminate?
3. Prove that at each loop iteration, if the item x is not found then the value of r decreases by at least
a factor of 2. More precisely, let r_k and r_{k+1} be the values of r immediately before and after the k-th
iteration, respectively, and prove that r_{k+1} ≤ r_k / 2.
Hints: do a proof by cases. Use the property that for all real numbers x, x − 1 < ⌊x⌋ ≤ x.
4. Find an exact upper bound on the number of iterations that could occur (in terms of n), and use this to
show that the worst-case running time of binary_search is O(log n).
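(A rough sanity check on Question 4, not a proof: the initial value of r is n, and if each iteration at least halves r, then after k iterations r is at most n / 2^k, so the loop must stop within roughly ⌊log₂ n⌋ + 1 iterations. Since each iteration does a constant amount of work, ignoring the assert statements that merely document the loop invariants, this is where the O(log n) bound comes from.)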
Part 3: Visualizing sorting algorithms
In the final part of today’s tutorial, you’ll develop a cool way of visualizing our two different sorting
algorithms using pygame.
To visualize a sorting algorithm being performed on a list of length n, the pygame window is divided
into an n-by-n grid, where:
- Each row of the grid represents the state of the list at a given loop iteration in the sorting
algorithm.
- Each row is divided into n grid cells, where each cell represents one element of the list. For
now, we’ll only consider lists of numbers between 0–255, and visualize each number v as the
grayscale colour (v, v, v).
For selection sort, the n rows show the state of the list at the end of each of the n loop iterations.
But for insertion sort, the n rows show the state of the list at the start of each loop iteration.
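As a rough illustration of the drawing involved (the function name, window size, and cell size below are made up for this sketch and are not the ones used in the starter file), one row of grayscale cells could be drawn like this:

import random

import pygame


def draw_row_sketch(screen: pygame.Surface, lst: list, row: int, cell_size: int) -> None:
    """Draw one row of the grid on screen: cell i in row `row` is filled with
    the grayscale colour (v, v, v), where v = lst[i].

    An illustrative sketch only; implement draw_list from the starter file
    rather than this function.
    """
    for i, v in enumerate(lst):
        rect = pygame.Rect(i * cell_size, row * cell_size, cell_size, cell_size)
        pygame.draw.rect(screen, (v, v, v), rect)


if __name__ == '__main__':
    pygame.init()
    numbers = [random.randint(0, 255) for _ in range(20)]

    # A 20-by-20 grid of 20-pixel cells, with only the top row drawn.
    screen = pygame.display.set_mode((len(numbers) * 20, len(numbers) * 20))
    screen.fill((255, 255, 255))
    draw_row_sketch(screen, numbers, 0, 20)
    pygame.display.flip()
    pygame.time.wait(3000)  # keep the window open briefly before exiting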
To start, please download the starter file tutorial9_part3.py (starter-files/tutorial9_part3.py). We’ve
included the sorting algorithms from lecture (the versions including the key parameter), and then the
tutorial exercises divided into “grayscale” and “colour” parts.
1. Under “Tutorial exercises (grayscale versions)”, implement the function draw_list, which is
responsible for drawing a single row of the grid. You can test your work in our starter
implementation of visualize_selection_sort, which draws just the top row of the grid. Try
creating a random list of 20 numbers between 0–255, and calling visualize_selection_sort on
this list. You should see an image similar to the one pictured above, except only the top row is
displayed (and the rest of the window is white).
2. Implement the visualization functions visualize_selection_sort and
visualize_insertion_sort. Your implementations can copy-and-paste from our algorithms, but
should also call your draw_list helper in the appropriate place in the loop body. See
visualize_selection_sort’s docstring for some implementation notes that apply to both of
these functions.
After you’re done, try calling each function on random lists of sizes 10, 100, and 200. You
should see each sorting algorithm visualized in a top-down manner. Make sure you can identify
which part of the visualization is the “sorted” vs. “unsorted” part of each list, and how the two
algorithms differ in what makes up the sorted part.
3. Now repeat Questions 1 and 2 for the “Tutorial exercises (colour version)” section. For those
functions, the input list being sorted is now a list of tuples storing the RGB values of colours.
Unfortunately, just sorting these RGB tuples directly isn’t very visually pleasing, and so we’ve
provided a helper function rgb_to_hsv that converts each tuple to a different colour
representation known as Hue-Saturation-Value (HSV).
So in the visualization functions, you’ll need to run the sorting algorithms on this list of tuples, but
also use this helper function as the “sort key”, using the generalized sorting technique we
introduced in lecture.
To test your work, try generating some random lists of colours and running your visualization
algorithms on them. (Remember, each random colour should be a random tuple with 3 elements in
the range 0–255.) You should see a beautiful rainbow of colours in the “sorted part” of each
algorithm!
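As a sketch of how the pieces in Question 3 fit together (the name insertion_sort and its key parameter are assumed here from the lecture versions mentioned above, and the exact signatures in tutorial9_part3.py may differ):

import random

from tutorial9_part3 import insertion_sort, rgb_to_hsv  # names assumed from the starter file

# Generate 100 random RGB colour tuples with components in the range 0-255.
colours = [(random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
           for _ in range(100)]

# Sort the RGB tuples, but compare them by their HSV representation,
# using the provided helper as the sort key.
insertion_sort(colours, key=rgb_to_hsv)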
Additional exercises
1. Implement variations of selection sort and insertion sort that take a list lst and two additional
index parameters b and e, and only sort the sublist lst[b:e]. (A sketch of one possible approach for
the insertion sort half appears after this list.)
2. Prove that the worst-case running time of binary_search is Ω(log n). Note that your description
of the input family should talk about both the input list, lst, and the item being searched for, x.
3. In Part 3, you might notice that there was quite a bit of code duplication in the visualization
functions you implemented. Reduce this code duplication by defining a single visualization
function that can visualize both sorting algorithms, and take in either a list of integers or a list of
colour tuples.
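For Additional Exercise 1, here is a sketch of one possible approach for the insertion sort half (a sketch only, and not the only correct structure):

def insertion_sort_sublist(lst: list, b: int, e: int) -> None:
    """Sort the sublist lst[b:e] in place using insertion sort, leaving the
    rest of lst untouched.

    Preconditions:
        - 0 <= b <= e <= len(lst)
    """
    for i in range(b, e):
        # Insert lst[i] into the already-sorted part lst[b:i].
        j = i
        while j > b and lst[j - 1] > lst[j]:
            lst[j - 1], lst[j] = lst[j], lst[j - 1]
            j -= 1

For example, if lst is [9, 5, 2, 7, 1], then insertion_sort_sublist(lst, 1, 4) mutates it into [9, 2, 5, 7, 1]: only the slice lst[1:4] is sorted.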
1. This tutorial exercise is inspired by https://corte.si/posts/code/visualisingsorting/
(https://corte.si/posts/code/visualisingsorting/).↩
2. You might recall grayscale colours from our first tutorial all the way back in CSC110!↩
3. The reason for this is a bit subtle. Normally, we would show both the initial state of the list and the
state of the list after each of the n loop iterations. But this results in n + 1 rows total, which breaks
the symmetry of our visualization. But for selection sort, the initial list state and the state after the
first loop iteration are identical, and so we can skip drawing the initial state. For insertion sort, the
states of the list at the beginning and end of the last iteration are equal, and so we can skip drawing
the very last state. ↩