Algorithms And Data Structures Assignment: A Detailed Analysis
Question
Task:
Write a report on data structures assignment presenting the concise content analysis summaries of the theoretical concepts related to data structures and algorithms.
Answer
Introduction and Background
The data structures assignment presents a week-by-week learning experience of data structures and algorithms. The study spans eleven weeks in total, and the topics are at an intermediate level. Java implementations are also presented along with relevant applications.
An algorithm is a step-by-step procedure for solving a problem. This document moves from easy to difficult algorithms with simple examples. A data structure is a way of organizing and storing data. Simple data structures are sometimes not sufficient for complex problems; hence data structures such as linked lists, stacks and queues are commonly used to store information while running algorithms such as sorting and searching. Week by week, the course covered stacks, queues, bags, union-find, priority queues, quicksort, mergesort, heapsort, radix sorts, BSTs, red-black BSTs, hash tables, BFS, DFS, Prim, Kruskal, Dijkstra, KMP, regular expressions, tries, data compression, B-trees, k-d trees, suffix arrays and maxflow.
Content analysis
Week 1
a. Theoretical Discussion
i. Dijkstra's two-stack algorithm
Dijkstra’s two-stack algorithm is used primarily to evaluate arithmetic expressions and output a value. Two stacks are maintained: one for operators (the operator stack) and one for operands (the operand stack). Stacks and queues are fundamental data types and can be implemented using linked lists or arrays. They store objects on which insertion, removal, iteration and emptiness testing can be performed. A stack adds each object on top of every previously added object and removes the most recently added object, whereas a queue removes the first object added (Goodrich, Tamassia and Goldwasser, n.d.).
Dijkstra’s Algorithm:
- Each token of the input string is parsed from left to right.
- If a value is encountered, it is pushed onto the operand stack.
- Similarly, if an operator is encountered, it is pushed onto the operator stack.
- A left parenthesis is ignored. When a right parenthesis is encountered, an operator is popped along with two values, and the result of applying the operator is pushed back onto the operand stack.
b. Example
Evaluate arithmetic equation, 4 + (8*3).
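The steps above can be sketched in Java. This is a minimal illustration, not the course's own code; it assumes the expression is fully parenthesized and space-separated, as in the classic formulation of the algorithm, so the example becomes ( 4 + ( 8 * 3 ) ).

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of Dijkstra's two-stack algorithm for fully parenthesized,
// space-separated expressions (a simplifying assumption).
public class TwoStackEval {
    public static double eval(String expr) {
        Deque<String> ops = new ArrayDeque<>();   // operator stack
        Deque<Double> vals = new ArrayDeque<>();  // operand stack
        for (String token : expr.split("\\s+")) {
            if (token.equals("(")) {
                // left parenthesis: ignore
            } else if (token.equals("+") || token.equals("-")
                    || token.equals("*") || token.equals("/")) {
                ops.push(token);                  // operator: push
            } else if (token.equals(")")) {
                // right parenthesis: pop one operator and two operands,
                // push the result back onto the operand stack
                String op = ops.pop();
                double b = vals.pop(), a = vals.pop();
                switch (op) {
                    case "+": vals.push(a + b); break;
                    case "-": vals.push(a - b); break;
                    case "*": vals.push(a * b); break;
                    default:  vals.push(a / b); break;
                }
            } else {
                vals.push(Double.parseDouble(token)); // value: push
            }
        }
        return vals.pop();
    }

    public static void main(String[] args) {
        // the example from the text, fully parenthesized and tokenized
        System.out.println(eval("( 4 + ( 8 * 3 ) )")); // 28.0
    }
}
```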
c. Outcome
The algorithm learnt in this week was Dijkstra’s two-stack algorithm. Since it relies mainly on the stack data type, stack implementations using linked lists and arrays were also learned. The data structures covered this week are fundamental and important for understanding how the two-stack algorithm works; the concepts are now much clearer and have tied together many previously distinct threads.
Week 2
a. Theoretical Discussion
i. Queues
In the first week, stack implementation was learnt using linked-list and arrays. In this week, resizing arrays, queues, generics, iteration and applications of queues were studied.
ii. Definitions
- Resizing arrays: in a stack implementation it is important for the underlying array to be able to grow and shrink as requirements change.
- Queues: a queue is a fundamental data type that collects objects with the operations enqueue and dequeue. Enqueue adds an element to the back of the queue, whereas dequeue removes the first element added.
- Generics: generics allow a collection to be parameterized by the type of its elements, with type checking performed by the compiler.
- Iterators: an iterator allows the elements of a collection to be accessed one at a time in a loop.
iii. Mathematical Analysis
- Resizing arrays: resizing happens through the push and pop operations on the stack. Growing the array by one slot at a time is too expensive in the worst case, hence a new way of resizing (doubling when full, halving when one-quarter full) was studied this week.
- Queues: a queue can be implemented with a singly linked list that keeps references to both its first and last nodes.
- Generics: a generic stack can be implemented using linked lists or arrays.
- Iterators: iteration over a stack can be implemented through the Bag API.
b. Working of Queue
Consider an empty queue, Q.
Enqueue 3 elements, 3, 4, 5; the queue looks like Q = {3,4,5}.
Dequeue an element, Q = {4,5}; the element removed is 3.
Enqueue an element 7; Q = {4,5,7}.
Dequeue an element, Q = {5,7}; the element removed is 4.
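The trace above can be reproduced with a minimal linked-list queue; the class and field names here are illustrative, not taken from the course code.

```java
// Minimal linked-list queue sketch: enqueue at the back, dequeue at the front.
public class LinkedQueue {
    private static class Node { int item; Node next; }
    private Node first, last;   // front and back of the queue
    private int n;              // number of elements

    public boolean isEmpty() { return first == null; }
    public int size() { return n; }

    public void enqueue(int item) {          // add at the back
        Node old = last;
        last = new Node();
        last.item = item;
        if (isEmpty()) first = last;
        else old.next = last;
        n++;
    }

    public int dequeue() {                   // remove from the front
        int item = first.item;
        first = first.next;
        if (isEmpty()) last = null;
        n--;
        return item;
    }

    public static void main(String[] args) {
        LinkedQueue q = new LinkedQueue();
        q.enqueue(3); q.enqueue(4); q.enqueue(5); // Q = {3,4,5}
        System.out.println(q.dequeue());          // 3 -> Q = {4,5}
        q.enqueue(7);                             // Q = {4,5,7}
        System.out.println(q.dequeue());          // 4 -> Q = {5,7}
        System.out.println(q.size());             // 2
    }
}
```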
c. Outcome
The resizing concept is tricky and may waste memory if a large underlying array remains mostly unused, hence efficient resizing was learned in this week. Many applications of the key concepts were also covered by the end of the week.
Week 3
a. Theoretical Discussion
i. Analysis of Algorithms
Algorithms are the procedural steps to be followed to complete a required task. The analysis of algorithm mainly focuses on its time and space complexity. The analysis covered in this week is completely on running time of various algorithms.
ii. Definitions
Time Complexity is related to the execution time required for an algorithm to execute in the given environment. Space complexity is related to memory space utilized for execution of algorithm.
iii. Mathematical Analysis
The running time of an algorithm depends highly on size of input.
Total running time = sum of (cost × frequency) over all operations
Cost is related to compiler and machine type whereas frequency relies on the input and its iterations.
b. Reflection on the Contents
i. Example of Notations to calculate running time
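As a concrete illustration of the cost × frequency idea, the sketch below times a quadratic pair-counting loop with System.nanoTime(); the problem and names are illustrative assumptions, not taken from the course material.

```java
// Sketch of measuring running time in Java with System.nanoTime().
// The inner loop runs ~N^2/2 times, so doubling N roughly quadruples the time.
public class Stopwatch {
    // count pairs (i, j) with i < j whose sum is zero
    public static int twoSumCount(int[] a) {
        int count = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                if (a[i] + a[j] == 0) count++;
        return count;
    }

    public static void main(String[] args) {
        int n = 2000;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i - n / 2; // values -1000..999
        long start = System.nanoTime();
        int count = twoSumCount(a);
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.println(count + " pairs found in " + seconds + " s");
    }
}
```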
c. Outcome
To measure running time, some functions in Java were studied. Notations used to calculate and present the running time of an algorithm were studied with worked examples. These running-time concepts were confusing before the topics were covered in this week.
Week 4
a. Theoretical Discussion
i. Sorting
Sorting is the process of rearrangement of elements present in the data structure. In week 4, insertion, selection and shell sorting algorithms were studied.
ii. Definitions
- Insertion Sort: the array is scanned from left to right, and each new element is inserted into place so that the subarray seen so far stays in ascending order.
- Selection Sort: the scan starts from the left, and the algorithm ensures that every element to the left of the current index is no greater than every element to its right.
- Shell Sort: often described as h-sorting, where h denotes the gap (stride) between compared elements.
iii. Mathematical Analysis
- Insertion Sort: two indices i and j are maintained, and the element at j is moved left until the subarray to its left is in ascending order.
- Selection Sort: the minimum value to the right of index i is located, and each time this minimum element is swapped with the element at location i.
- Shell Sort: an array is h-sorted if a[i-h] <= a[i] for each i.
b. Reflection on the Contents
i. Example of Selection Sort
Consider an array: A ={11,33,66,22,88,55,44}
Selection Sort working:
Step 1: A={11,33,66,22,88,55,44}
Step 2: A={11,22,66,33,88,55,44}
Step 3: A={11,22,33,66,88,55,44}
Step 4: A={11,22,33,44,88,55,66}
Step 5: A={11,22,33,44,55,88,66}
Step 6: A={11,22,33,44,55,66,88}
Step 7: A={11,22,33,44,55,66,88}
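The steps above can be sketched as a short Java method; this is an illustrative implementation, not the course's own code.

```java
import java.util.Arrays;

// Selection sort sketch: on each pass, the minimum of the
// unsorted suffix is swapped into position i.
public class Selection {
    public static void sort(int[] a) {
        for (int i = 0; i < a.length; i++) {
            int min = i;                       // index of smallest entry so far
            for (int j = i + 1; j < a.length; j++)
                if (a[j] < a[min]) min = j;
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp; // swap into place
        }
    }

    public static void main(String[] args) {
        int[] a = {11, 33, 66, 22, 88, 55, 44};  // the array from the example
        sort(a);
        System.out.println(Arrays.toString(a));   // [11, 22, 33, 44, 55, 66, 88]
    }
}
```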
c. Outcome
The sorting algorithms studied in this week are covered in enough detail to build an in-depth understanding. The demo of each sorting algorithm is well explained, so it was possible to grasp how each one works in a few minutes. Moreover, the Java implementations, animations and running-time analyses were also studied.
Week 5
a. Theoretical Discussion
i. Merge Sort
This sorting algorithm follows a divide-and-conquer approach and can be implemented either top-down (recursively) or bottom-up.
ii. Definitions
First, the given array of elements is split into two halves at the middle, termed the low and high subarrays; each half is sorted, and the two sorted halves are then merged.
iii. Mathematical Analysis
Given two sorted subarrays a[lo] to a[mid] and a[mid+1] to a[hi], replace with sorted subarray a[lo] to a[hi]. Each subarray elements are compared with each other and stored in the final array.
b. Reflection on the Contents
i. Example of Merge Sort
M[lo] = {3,6,8,9} and M[hi] = {1,4,5,8}
3 is compared with 1; M[sorted] = {1}
3 is compared with 4; M[sorted] = {1,3}
6 is compared with 4; M[sorted] = {1,3,4}
6 is compared with 5; M[sorted] = {1,3,4,5}
6 is compared with 8; M[sorted] = {1,3,4,5,6}
8 is compared with 8; the left 8 is taken first: M[sorted] = {1,3,4,5,6,8}
9 is compared with the remaining 8; M[sorted] = {1,3,4,5,6,8,8}
9 is the only element left; M[sorted] = {1,3,4,5,6,8,8,9}
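The merge step traced above can be sketched in Java; this is an illustrative version that merges into a new array rather than in place.

```java
import java.util.Arrays;

// Sketch of the merge step: combine two sorted subarrays into one sorted array.
public class Merge {
    public static int[] merge(int[] lo, int[] hi) {
        int[] out = new int[lo.length + hi.length];
        int i = 0, j = 0;
        for (int k = 0; k < out.length; k++) {
            if (i >= lo.length) out[k] = hi[j++];       // left half exhausted
            else if (j >= hi.length) out[k] = lo[i++];  // right half exhausted
            else if (hi[j] < lo[i]) out[k] = hi[j++];   // smaller key on the right
            else out[k] = lo[i++];                      // smaller (or tied) on the left
        }
        return out;
    }

    public static void main(String[] args) {
        int[] lo = {3, 6, 8, 9}, hi = {1, 4, 5, 8};     // subarrays from the example
        System.out.println(Arrays.toString(merge(lo, hi))); // [1, 3, 4, 5, 6, 8, 8, 9]
    }
}
```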
c. Outcome
Merge sort was unfamiliar until now, and once its working was known, it turned out not to be as complicated as it once seemed. The Java implementation of merge sort was also studied, and its running time was compared with insertion sort. It was observed that merge sort, with complexity N log N, is faster than insertion sort, with complexity N^{2}.
Week 6
a. Theoretical Discussion
i. Quick Sort
Quicksort was developed by C. A. R. Hoare while working on a machine translation project, to sort the words to be looked up in a Russian-English dictionary. In this sorting there are two important tasks, selection and partitioning; each of them is explained in this week. The quicksort algorithm is very widely used to solve complex sorting problems (Sedgewick and Wayne, 2011).
ii. Basic Plan
- The given array is shuffled randomly
- The array is partitioned in a way that at the separation point j, there is no larger element to the left of j and no smaller element is to the right.
- Lastly, recursion is performed to sort each subarray.
iii. Mathematical Analysis
Time complexity for three cases of Quick sort algorithm is given by:
Average case is ~ 1.39 N log N
Best case is ~ N log N
Worst case is ~ ½ N^{2}
b. Example of Quick Sort
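The basic plan above (shuffle, partition around j, recurse) can be sketched in Java; this is an illustrative implementation in the style of the referenced textbook, not the course's own code.

```java
import java.util.Arrays;
import java.util.Random;

// Quicksort sketch: shuffle, partition so a[j] is in its final
// position, then recursively sort each subarray.
public class Quick {
    public static void sort(int[] a) {
        shuffle(a);                    // random shuffle guards against the worst case
        sort(a, 0, a.length - 1);
    }

    private static void sort(int[] a, int lo, int hi) {
        if (hi <= lo) return;
        int j = partition(a, lo, hi);  // a[j] is now in its final position
        sort(a, lo, j - 1);
        sort(a, j + 1, hi);
    }

    private static int partition(int[] a, int lo, int hi) {
        int v = a[lo], i = lo, j = hi + 1;
        while (true) {
            while (a[++i] < v) if (i == hi) break;   // scan right for item >= pivot
            while (v < a[--j]) if (j == lo) break;   // scan left for item <= pivot
            if (i >= j) break;
            swap(a, i, j);
        }
        swap(a, lo, j);                // put the pivot into place
        return j;
    }

    private static void shuffle(int[] a) {
        Random rnd = new Random();
        for (int i = a.length - 1; i > 0; i--) swap(a, i, rnd.nextInt(i + 1));
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 7, 3};
        sort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 5, 7, 9]
    }
}
```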
c. Outcome
The most important concept learnt is the quicksort algorithm, which had seemed very difficult when read about before. The course content explaining quicksort is detailed and hence helps in gaining deep knowledge about the topic. The time complexities of quicksort were compared with merge sort, and quicksort was observed to be relatively faster in practice. Hence quicksort is faster than merge sort, which in turn was identified in previous weeks to be faster than insertion sort. The initial random shuffle is what protects quicksort's running time against the worst case.
Week 7
a. Theoretical Discussion
i. Heap Sort
Heapsort is an in-place sorting algorithm. It views the array as a complete binary tree and builds a max-heap in it before the actual sorting process starts (Allman, 2012). In this week, heapsort is discussed in detail.
ii. Basic Plan
- Construct a binary tree of the array provided.
- Construct max-heap with the help of N keys given.
- Sorting is performed here by recursively finding maximum key and removing it from max-heap.
iii. Mathematical Analysis
The heap construction phase makes at most 2N comparisons and at most N exchanges. The sortdown phase then uses at most 2 N log N comparisons and at most N log N exchanges. Heapsort therefore runs in N log N time even in the worst case.
b. Example of Heap Sort
H = {82, 90, 10, 12, 15, 77, 55, 23}
(Steps 1-7, showing the intermediate heap states, were presented as diagrams.)
Sorted result: 10, 12, 15, 23, 55, 77, 82, 90
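The plan above can be sketched in Java using a 0-based array-backed heap; this is an illustrative implementation, not the course's own code.

```java
import java.util.Arrays;

// Heapsort sketch: build a max-heap in the array, then repeatedly
// exchange the maximum with the last unsorted entry and sink.
public class Heap {
    public static void sort(int[] a) {
        int n = a.length;
        for (int k = n / 2 - 1; k >= 0; k--)  // heap construction (bottom-up)
            sink(a, k, n);
        while (n > 1) {                        // sortdown
            swap(a, 0, --n);                   // move current max to the end
            sink(a, 0, n);                     // restore the heap invariant
        }
    }

    // restore the heap order below position k, considering only a[0..n-1]
    private static void sink(int[] a, int k, int n) {
        while (2 * k + 1 < n) {
            int j = 2 * k + 1;                       // left child (0-based)
            if (j + 1 < n && a[j + 1] > a[j]) j++;   // pick the larger child
            if (a[k] >= a[j]) break;
            swap(a, k, j);
            k = j;
        }
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        int[] h = {82, 90, 10, 12, 15, 77, 55, 23};  // the array from the example
        sort(h);
        System.out.println(Arrays.toString(h)); // [10, 12, 15, 23, 55, 77, 82, 90]
    }
}
```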
c. Outcome
In this week, heapsort basics, an example and its complexity were studied, along with its Java implementation. Heapsort is observed to be optimal in both time and space complexity. The algorithm was known before but seemed very difficult to apply; the explanation given this week helped in thoroughly understanding it.
Week 8
a. Theoretical Discussion
i. Binary Search Trees
A binary search tree is a binary tree containing a root node, internal nodes and leaf nodes arranged in symmetric order. A binary search tree is either empty or a node with two binary search trees as its left and right subtrees. Symmetric order means that the key of each parent is greater than every key in its left subtree and smaller than every key in its right subtree (Sahni, 2005).
ii. Basic Plan
Insertion: a new key is inserted as a new leaf node.
Deletion: a key at a leaf node is deleted directly. If the node to delete has one child, the child takes its place; if it has two children, the node is replaced by its successor before being removed.
Search: searching proceeds by comparing the search key with the current node's key. Go left if the search key is smaller, right if it is larger, and stop when an equal key is found.
iii. Mathematical Analysis
For a BST built from randomly ordered keys, search and insertion take ~1.39 log N comparisons on average. Repeated (Hibbard) deletion degrades the expected cost per operation to ~sqrt(N).
b. Example of a Binary Search Tree
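The operations above can be sketched as a minimal Java BST; the class and method names are illustrative, not the course's own code.

```java
// Minimal BST sketch: smaller keys go left, larger keys go right.
public class BST {
    private Node root;

    private static class Node {
        int key; Node left, right;
        Node(int key) { this.key = key; }
    }

    public void insert(int key) { root = insert(root, key); }

    private Node insert(Node x, int key) {
        if (x == null) return new Node(key);     // new keys go in at a leaf
        if (key < x.key) x.left = insert(x.left, key);
        else if (key > x.key) x.right = insert(x.right, key);
        return x;                                 // equal key: no duplicate added
    }

    public boolean contains(int key) {
        Node x = root;
        while (x != null) {
            if (key < x.key) x = x.left;          // go left for smaller keys
            else if (key > x.key) x = x.right;    // go right for larger keys
            else return true;                     // equal key found
        }
        return false;
    }

    public static void main(String[] args) {
        BST t = new BST();
        for (int k : new int[] {50, 30, 70, 20, 40, 60, 80}) t.insert(k);
        System.out.println(t.contains(40)); // true
        System.out.println(t.contains(45)); // false
    }
}
```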
c. Outcome
The data structure learned in this week was a new concept; general trees and graphs had been covered before, but the binary search tree was new. It is a bit complicated but was explained really well. Insertion, deletion and search in binary search trees are quite time efficient and faster than sequential search.
Week 9
a. Theoretical Discussion
i. Topic Covered
This week covered 2-3 trees, left-leaning red-black BSTs, B-trees.
ii. Definitions
A 2-3 tree is a data structure in which each node is either a 2-node, with one key and two children, or a 3-node, with two keys and three children. The keys are kept in symmetric order.
A left-leaning red-black BST allows no node to have two red links attached to it; every path from the root to a null link contains the same number of black links; and all red links lean left.
B-trees generalize 2-3 trees by allowing up to M-1 keys (and M links) per node.
iii. Mathematical Analysis
The performance of 2-3 trees is good: every operation takes log N time in the worst case.
The height of a left-leaning red-black BST is at most 2 log N in the worst case.
b. Example of insertion in B-Tree
Insert elements 78, 52, 81, 40, 33, 90, 85, 20.
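For a quick balanced-tree experiment in Java, the same elements can be inserted into a java.util.TreeSet, which is backed by a red-black tree (one of the balanced structures covered this week, though not a B-tree itself); iterating then yields the keys in symmetric order.

```java
import java.util.TreeSet;

// TreeSet is backed by a red-black tree, so each insert is O(log N)
// and the keys come back out in sorted (symmetric) order.
public class BalancedInsert {
    public static void main(String[] args) {
        TreeSet<Integer> set = new TreeSet<>();
        for (int k : new int[] {78, 52, 81, 40, 33, 90, 85, 20})
            set.add(k);                       // balanced insertion
        System.out.println(set);              // [20, 33, 40, 52, 78, 81, 85, 90]
    }
}
```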
c. Outcome
The data structures discussed in this week course were new and complex but by the end of material, the data structures were understood thoroughly.
Week 10
a. Theoretical Discussion
i. Topics Covered
Hashing and Linear Probe Hash tables are studied in this week.
ii. Definitions
The array index plays an important role while implementing hash tables. A hash function is required to compute the array index from a given key.
iii. Mathematical Analysis
Insertion in a linear-probing hash table:
Each key is mapped to a hash value. The key is inserted at the array index equal to its hash value. If that array location is not vacant, the next vacant location (scanning forward, wrapping around) is found and the key is inserted there.
Searching in a linear-probing hash table:
The key is first checked at the array index given by its hash value. If it is not found there, the following locations are checked in turn. If a null (vacant) slot is reached first, the search reports that the key is not present.
b. Example of insertion in linear probe hash table
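The insertion and search rules above can be sketched in Java; this is an illustrative integer-key table with a fixed size and no resizing, not the course's own code.

```java
// Linear-probing hash table sketch: insert at hash(key); on a
// collision, scan forward (wrapping around) to the next vacant slot.
public class LinearProbing {
    private final Integer[] keys;        // null marks a vacant slot
    private final int m;                 // table size

    public LinearProbing(int m) { this.m = m; keys = new Integer[m]; }

    private int hash(int key) { return (key & 0x7fffffff) % m; }

    public void insert(int key) {
        int i = hash(key);
        while (keys[i] != null) {
            if (keys[i] == key) return;  // key already present
            i = (i + 1) % m;             // probe the next slot
        }
        keys[i] = key;
    }

    public boolean contains(int key) {
        for (int i = hash(key); keys[i] != null; i = (i + 1) % m)
            if (keys[i] == key) return true;
        return false;                    // a null slot ends the search
    }

    public static void main(String[] args) {
        LinearProbing t = new LinearProbing(11);
        for (int k : new int[] {12, 23, 1, 34}) t.insert(k); // all hash to index 1
        System.out.println(t.contains(23)); // true, found by probing forward
        System.out.println(t.contains(45)); // false, probe ends at a null slot
    }
}
```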
c. Outcome
Linear probe hash table is studied in this week which helped in understanding insertion and searching in this data structure.
Week 11
a. Theoretical Discussion
i. Topics Covered
In this week, string sorting, i.e. variants of radix sort, was studied.
ii. Definitions
A string is a sequence of characters. The radix sorting technique groups elements by the character at a given index and sorts them into ascending order one character position at a time (Berlioux, 1990). It avoids unnecessary comparisons.
iii. Mathematical Analysis
The running time of the three algorithms is presented in the table below.

Algorithm | Running time
LSD string sort | 2 W (N + R)
MSD string sort | 2 W (N + R)
3-way string radix quicksort | 1.39 W N lg R
b. Example of Radix Sort
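LSD string sort can be sketched in Java for fixed-length strings using key-indexed counting on each character position, from the last position to the first; this is an illustrative implementation with made-up sample strings, not the course's own code.

```java
import java.util.Arrays;

// LSD string sort sketch for fixed-length strings: one key-indexed
// counting pass per character column, right to left.
public class LSD {
    public static void sort(String[] a, int w) {
        int n = a.length, R = 256;               // extended ASCII alphabet
        String[] aux = new String[n];
        for (int d = w - 1; d >= 0; d--) {       // one stable pass per column
            int[] count = new int[R + 1];
            for (String s : a) count[s.charAt(d) + 1]++;          // frequencies
            for (int r = 0; r < R; r++) count[r + 1] += count[r]; // cumulates
            for (String s : a) aux[count[s.charAt(d)]++] = s;     // distribute
            System.arraycopy(aux, 0, a, 0, n);                    // copy back
        }
    }

    public static void main(String[] args) {
        String[] a = {"dab", "cab", "fad", "bad", "ace", "ebb"};
        sort(a, 3);                               // all strings have length 3
        System.out.println(Arrays.toString(a));   // [ace, bad, cab, dab, ebb, fad]
    }
}
```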
c. Outcome
Radix sort, learned in this week, is a distinctive algorithm that sorts an array without directly comparing the values at different index locations while parsing.
Conclusion
This document has presented a study of data structures and algorithms. Various simple and complex data structures are discussed along with their insertion, deletion and search operations. Several searching and sorting algorithms are also presented and explained in detail. The concepts were made clear in a precise manner, and many simple demo examples are illustrated to show how each algorithm proceeds.
References
Allman, J., 2012. Algorithms. [Niantic, CT]: Quale Press.
Berlioux, P., 1990. Algorithms. Chichester: Wiley.
Goodrich, M., Tamassia, R. and Goldwasser, M., n.d. Data Structures And Algorithms In Java.
Sahni, S., 2005. Data Structures, Algorithms, And Applications In Java. Summit, N.J.: Silicon Press.
Sedgewick, R. and Wayne, K., 2011. Algorithms (1st ed.). Addison-Wesley Professional.