Analysis and Design of Algorithms Winter 2022 GTU Paper Solution | 3150703

Here, We provide Analysis and Design of Algorithms GTU Paper Solution Winter 2022. Read the Full ADA GTU paper solution given below.

Analysis and Design of Algorithms GTU Old Paper Winter 2022 [Marks : 70]

(a) Sort the best case running times of all these algorithms in a non-decreasing
order.
LCS, Quick-Sort, Merge-Sort, Counting-Sort, Heap-Sort, Selection-Sort,
Insertion-Sort, Bucket-Sort, Strassen’s Algorithm.


Here is the non-decreasing order of the best case running times for the given algorithms:

  1. Insertion-Sort: O(n)
    • Insertion-sort runs in linear time in the best case, when the input is already sorted and each element needs only one comparison.
  2. Bucket-Sort: O(n)
    • Bucket-sort runs in linear time in the best case, when the elements distribute evenly across the buckets.
  3. Counting-Sort: O(n + k)
    • Counting-sort runs in linear time in every case, where n is the number of elements to be sorted and k is the range of input values.
  4. Merge-Sort: O(n log n)
    • Merge-sort performs the same splitting and merging work regardless of input order, so its best case is n log n.
  5. Heap-Sort: O(n log n)
    • Heap-sort always builds a heap and performs n extract operations, so its best case is also n log n.
  6. Quick-Sort: O(n log n)
    • Quick-sort achieves n log n in the best case, when every pivot splits the array into two nearly equal halves.
  7. Selection-Sort: O(n^2)
    • Selection-sort always scans the entire unsorted portion to find the minimum, so even its best case is quadratic in n.
  8. LCS (Longest Common Subsequence): O(m * n)
    • The dynamic-programming LCS algorithm always fills an m × n table, where m and n are the lengths of the input sequences; for m ≈ n this is quadratic.
  9. Strassen’s Algorithm: O(n^log2(7))
    • Strassen’s algorithm for matrix multiplication runs in n raised to the power of log2(7), approximately O(n^2.807), in every case, making it the slowest in this list.

(b) State whether the statements are correct or incorrect with reasons.
O(f(n)) + O(f(n)) = O(2f(n))
If 3n + 5 = O(n²) , then 3n + 5 = o()

(c) Explain asymptotic analysis with all the notations and its mathematical
inequalities.

(a) What is the use of Loop Invariant? What should be shown to prove that an
algorithm is correct?

A loop invariant is a condition that holds true before and after each iteration of a loop in an algorithm. It is a powerful tool used in program correctness proofs and algorithm analysis. The main use of a loop invariant is to help reason about the correctness and behavior of a loop-based algorithm.

The use of a loop invariant can be summarized as follows:

  1. Understanding and Verification: A loop invariant helps in understanding the behavior of a loop and verifying its correctness. It serves as a guide to ensure that the loop is performing the intended operations and making progress towards the desired outcome.
  2. Design and Debugging: Loop invariants can aid in the design and debugging of algorithms. By formulating and maintaining a loop invariant, developers can identify and fix issues related to loop termination, boundary conditions, or incorrect intermediate results.
  3. Proof of Correctness: Loop invariants play a crucial role in proving the correctness of an algorithm. To demonstrate that an algorithm is correct, you typically need to show three key properties:
    a. Initialization: The loop invariant holds true before the first iteration of the loop.
    b. Maintenance: If the loop invariant holds true before an iteration, it remains true before the next iteration.
    c. Termination: The loop terminates, and the loop invariant, along with any post-conditions, implies the desired result.

By establishing these properties, you can demonstrate that the algorithm satisfies its specification and produces the correct output for all valid inputs.

To prove that an algorithm is correct, you typically need to show that the algorithm terminates and that it meets the required conditions specified by the problem or task. This involves demonstrating that the algorithm produces the expected output or achieves the desired goal. Loop invariants, along with other proof techniques, such as pre- and post-conditions, can be used to construct a rigorous proof of correctness.
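As a concrete illustration (a sketch of our own, not part of the original question), the standard insertion sort can be annotated with its loop invariant; the comments mark where Initialization, Maintenance, and Termination apply:

```python
def insertion_sort(a):
    """Sort list a in place and return it, using insertion sort."""
    for j in range(1, len(a)):
        # Loop invariant: a[0..j-1] consists of the original first j
        # elements, now in sorted order.
        # (Initialization: for j = 1, the one-element prefix is trivially sorted.)
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]  # shift larger elements one position right
            i -= 1
        a[i + 1] = key
        # (Maintenance: inserting key keeps a[0..j] sorted for the next pass.)
    # (Termination: j = len(a), so the invariant says the whole list is sorted.)
    return a
```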

(b) Apply LCS on sequence for pattern

(c) Write and explain the recurrence relation of Merge Sort.

(c) Perform the analysis of a recurrence relation T(n)= 2T (n/2) + θ(n/2) by
drawing its recurrence tree.

(a) Consider the array 2,4,6,7,8,9,10,12,14,15,17,19,20. Show (without
actually sorting), how the quick sort performance will be affected with
such input.

QuickSort’s performance heavily depends on the choice of the pivot element. In the worst-case scenario, if the pivot is consistently chosen as the smallest or largest element in the array, QuickSort’s efficiency can degrade, resulting in poor performance.

In the given array [2, 4, 6, 7, 8, 9, 10, 12, 14, 15, 17, 19, 20], if the QuickSort algorithm consistently chooses the first or last element as the pivot, the partitioning step will divide the array into two sub-arrays with highly imbalanced sizes. Let’s consider the case where the pivot is chosen as the first element (2):

  1. First partitioning step (pivot = 2): the pivot is placed in its final position, leaving:
    • [] (no elements are less than the pivot)
    • [4, 6, 7, 8, 9, 10, 12, 14, 15, 17, 19, 20] (elements greater than the pivot)
  2. Second partitioning step (pivot = 4) on the remaining sub-array:
    • [] (no elements are less than the pivot)
    • [6, 7, 8, 9, 10, 12, 14, 15, 17, 19, 20] (elements greater than the pivot)

At each step, the pivot removes only itself: one side of the partition is empty and the other side contains all the remaining elements. This imbalanced partitioning produces a skewed recursion tree of depth n, with roughly n + (n-1) + … + 1 ≈ n²/2 comparisons in total. Consequently, QuickSort exhibits its worst-case time complexity of O(n^2) on this already-sorted input.

It’s important to note that this worst-case scenario is specific to the choice of pivot and the given input array. QuickSort’s average-case time complexity is O(n log n), and in most cases, it performs efficiently. To mitigate the worst-case scenario, various techniques can be employed, such as choosing a random pivot, using a median-of-three pivot selection, or implementing an optimized version of QuickSort like “IntroSort.”
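To make the degradation visible, here is a small sketch (our own, not part of the original solution) that counts the element comparisons quicksort makes on this array under two pivot rules; `quicksort_comparisons` is a hypothetical helper that assumes distinct values and charges len(a) - 1 comparisons per partition:

```python
def quicksort_comparisons(a, pick_pivot):
    """Count element comparisons made by quicksort with a given pivot rule.

    Assumes all values in a are distinct.
    """
    if len(a) <= 1:
        return 0
    pivot = pick_pivot(a)
    rest = [x for x in a if x != pivot]
    left = [x for x in rest if x < pivot]    # elements less than the pivot
    right = [x for x in rest if x > pivot]   # elements greater than the pivot
    # Partitioning around the pivot costs len(a) - 1 comparisons.
    return (len(a) - 1
            + quicksort_comparisons(left, pick_pivot)
            + quicksort_comparisons(right, pick_pivot))

arr = [2, 4, 6, 7, 8, 9, 10, 12, 14, 15, 17, 19, 20]
first = quicksort_comparisons(arr, lambda a: a[0])                  # first element as pivot
median = quicksort_comparisons(arr, lambda a: sorted(a)[len(a)//2]) # median as pivot
print(first)   # 78 = 12 + 11 + ... + 1, the quadratic pattern
```

With the first-element pivot on this sorted input, the counts follow the arithmetic series n-1, n-2, …, 1, while a median pivot keeps the recursion balanced and the total far smaller.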

(b) “A greedy strategy will work for fractional Knapsack problem but not for
0/1″, is this true or false? Explain.

(c) Apply Kruskal’s algorithm on the given graph and step by step generate
the MST.

OR

(a) Consider an array of size 2048 elements sorted in non-decreasing order.
Show how the Binary Search will perform on this size by analysis of its
recurrence relation. Derive the running time.

To analyze the performance of binary search on an array of size 2048, we can derive its recurrence relation and then determine its running time.

Binary search is a divide-and-conquer algorithm that repeatedly divides the search space in half until the target element is found or the search space is exhausted. Here’s how we can analyze its recurrence relation:

  1. Let’s denote the size of the search space at each step as n.
  2. In each step of binary search, we compare the target element with the middle element of the current range.
  3. If the target equals the middle element, the search is done.
  4. Otherwise, we discard the half that cannot contain the target, leaving a single subproblem of size n/2.
  5. We recursively perform binary search on that remaining half.
  6. The base case occurs when the search space shrinks to one element (or becomes empty), and we either find the target or conclude that it is not present.

Now, let’s derive the recurrence relation for the running time of binary search:

T(n) = T(n/2) + c

Here, T(n) represents the running time of binary search on an array of size n, and c represents the constant time taken for comparisons and other operations within each recursive call.

Using the recurrence relation, we can determine the running time by solving it recursively:

T(n) = T(n/2) + c
     = T(n/4) + 2c
     = T(n/8) + 3c
     = T(n/16) + 4c
     = …

This pattern continues until we reach the base case when n = 1.

At each step, we perform a constant amount of work (c) to divide the array in half and compare the target with the middle element. Since we divide the array in half at each step, the number of steps required to reach the base case is log₂n.

Therefore, the total running time of binary search on an array of size n is:

T(n) = c + c + c + … + c (log₂n times) = c * log₂n = O(log n)

For n = 2048 = 2¹¹, log₂2048 = 11, so binary search inspects at most about 11 midpoints before it either finds the target or exhausts the search space.
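A minimal sketch, assuming a standard iterative binary search, that confirms the 11-step bound for an array of 2048 elements (the step counter is our own addition):

```python
def binary_search(a, target):
    """Return (index, steps): index of target in sorted list a, or (-1, steps)."""
    lo, hi = 0, len(a) - 1
    steps = 0
    while lo <= hi:
        steps += 1                 # one halving step per loop iteration
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid, steps
        elif a[mid] < target:
            lo = mid + 1           # discard the left half
        else:
            hi = mid - 1           # discard the right half
    return -1, steps

a = list(range(2048))              # sorted array of size 2048 = 2^11
_, steps = binary_search(a, 0)     # target far from every midpoint
print(steps)                       # 11, matching c * log2(2048)
```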

(b) Explain the steps of greedy strategy for solving a problem.

(c) Apply Prim’s algorithm on the given graph in Q.3 (C) FIG:1 Graph
G(V,E) and step by step generate the MST.

(a) Given is the S-table after running Chain Matrix Multiplication algorithm.
Calculate the parenthesized output based on
PRINT_OPTIMAL_PARENTHESIS algorithm. Assume the matrix are
names from A1, A2, ….,An

Based on the provided S-table from the Chain Matrix Multiplication algorithm, we can use the PRINT_OPTIMAL_PARENTHESIS algorithm to calculate the parenthesized output. Here’s how it can be done:

To calculate the parenthesized output, we start from the cell that covers the whole chain (S[1][n]) and recursively follow the optimal splits.

  1. For the product of matrices A[i] to A[j], the table entry k = S[i][j] gives the position of the optimal split.
  2. That is, the chain is computed as (A[i] … A[k]) multiplied by (A[k+1] … A[j]).
  3. We recursively apply the PRINT_OPTIMAL_PARENTHESIS algorithm to each half until a subchain contains a single matrix (the base case).

Applying this algorithm to the given S-table:

  1. From the cell covering the whole chain (S[1][3]), we have S[1][3] = 2, which means the optimal split is between A[1..2] and A[3].
  2. Recursively, for the first half (A[1] to A[2]), we have S[1][2] = 1, indicating a split between A[1] and A[2].
  3. The second half (A[3]) contains only one matrix, so there is no further split.

Putting it all together, the parenthesized output for the optimal multiplication order is:

((A1A2)A3)

This means that we first multiply matrices A1 and A2, and then multiply the result with A3.
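The PRINT_OPTIMAL_PARENTHESIS procedure can be sketched as follows. Since the actual S-table figure is not reproduced here, the split table `s` below is a hypothetical example consistent with S[1][3] = 2 and S[1][2] = 1; note that these values yield the split ((A1A2)A3):

```python
def print_optimal_parens(s, i, j):
    """Build the optimal parenthesization string from split table s.

    s[i][j] holds the index k at which the product A_i..A_j is split
    into (A_i..A_k)(A_{k+1}..A_j). Indices are 1-based.
    """
    if i == j:
        return f"A{i}"             # base case: a single matrix
    k = s[i][j]
    left = print_optimal_parens(s, i, k)
    right = print_optimal_parens(s, k + 1, j)
    return f"({left}{right})"

# Hypothetical 1-based split table for three matrices (row/column 0 unused):
s = [[0] * 4 for _ in range(4)]
s[1][2] = 1
s[1][3] = 2
s[2][3] = 2
print(print_optimal_parens(s, 1, 3))  # ((A1A2)A3)
```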

(b) Explain states, constraints types of nodes and bounding function used by
backtracking and branch and bound methods.

(c) Apply the algorithm to find strongly connected components from the
given graph.

OR

(a) Consider a Knapsack with maximum weight capacity M is 7, for the three
objects with value <3, 4, 5> with weights <2, 3, 4> solve using dynamic
programming the maximum value the knapsack can have.

To solve the knapsack problem using dynamic programming, we can create a table to store the maximum value that can be achieved for each combination of object and weight capacity. Here’s how we can solve it for the given scenario:

  1. Create a table with (n + 1) rows and (M + 1) columns, where n is the number of objects and M is the maximum weight capacity of the knapsack. Initialize all cells to 0.
  2. Iterate through each object:
    • For each object i from 1 to n, calculate its value and weight.
    • Iterate through each weight capacity w from 1 to M.
      • If the weight of the object (weights[i-1]) is greater than the current weight capacity w, copy the value from the cell above (table[i-1][w]).
      • Otherwise, calculate the maximum value between taking the object (value[i-1] + table[i-1][w - weights[i-1]]) and not taking the object (table[i-1][w]).
      • Store the maximum value in the current cell (table[i][w]).
  3. The maximum value of the knapsack can be found in the bottom-right cell of the table (table[n][M]).

Applying the dynamic programming approach to the given scenario:

Object values: <3, 4, 5> Object weights: <2, 3, 4> Maximum weight capacity: M = 7

The table will look like this:

      | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
-------------------------------------
Obj 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Obj 1 | 0 | 0 | 3 | 3 | 3 | 3 | 3 | 3 |
Obj 2 | 0 | 0 | 3 | 4 | 4 | 7 | 7 | 7 |
Obj 3 | 0 | 0 | 3 | 4 | 5 | 7 | 8 | 9 |

The maximum value the knapsack can have is 9, obtained by including objects 2 and 3 (values 4 and 5, weights 3 and 4, total weight 7 = M). Object 1 cannot be added as well, since the combined weight 2 + 3 + 4 = 9 would exceed the capacity.

Please note that in the table, the row and column indices are shifted by 1 compared to the object and weight indices.
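The table-filling procedure above can be written as a short program; this is a sketch under the stated inputs (function name ours):

```python
def knapsack(values, weights, capacity):
    """Bottom-up 0/1 knapsack; returns the maximum achievable value."""
    n = len(values)
    # table[i][w] = best value using the first i objects with capacity w
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, capacity + 1):
            if weights[i - 1] > w:
                # Object i does not fit: copy the value from the row above.
                table[i][w] = table[i - 1][w]
            else:
                # Max of skipping object i and taking it.
                table[i][w] = max(table[i - 1][w],
                                  values[i - 1] + table[i - 1][w - weights[i - 1]])
    return table[n][capacity]

print(knapsack([3, 4, 5], [2, 3, 4], 7))  # 9
```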

(b) Explain the Minimax principle and show its working for simple tic-tac-toe game
playing.

(c) Given is the DAG, apply the algorithm to perform topological sort and
show the sorted graph.

(a) When can we say that a problem exhibits the property of Optimal Sub-
structure?

A problem exhibits the property of Optimal Substructure when an optimal solution to the problem can be constructed from optimal solutions to its subproblems. In other words, the problem can be broken down into smaller subproblems, and the optimal solution to the overall problem can be found by combining the optimal solutions of the subproblems.

Formally, a problem has Optimal Substructure if it satisfies the following condition:

For a problem P, let’s say we have an optimal solution S to P. If we break down P into smaller subproblems and find optimal solutions for each subproblem, then we can combine these optimal solutions to construct an optimal solution for P.

Optimal Substructure is a key property in dynamic programming, as it allows us to apply the technique of solving a problem recursively and building up the solutions of smaller subproblems to solve the larger problem efficiently.

To determine if a problem exhibits Optimal Substructure, we need to analyze the problem and identify if it can be divided into subproblems that have independent optimal solutions and if the optimal solution of the larger problem can be derived from the optimal solutions of the subproblems.

(b) Create an example of string P of length 7 such that, the prefix function of KMP
string matcher returns π[5] = 3, π[3] = 1 and π[1] = 0

(c) Explain the 3SAT problem and show that it is NP Complete.

OR

(a) Explain Over-lapping Sub-problem with respect to dynamic
programming.

In dynamic programming, the concept of overlapping subproblems refers to the situation where the solution to a larger problem can be efficiently computed by breaking it down into smaller subproblems, and these subproblems share common sub-subproblems. As a result, the same subproblems are repeatedly solved multiple times instead of being recomputed from scratch. Overlapping subproblems are a key characteristic of many problems that can be solved using dynamic programming techniques.

To illustrate the concept of overlapping subproblems, let’s consider the Fibonacci sequence. The Fibonacci sequence is defined as follows: F(0) = 0, F(1) = 1, and for n ≥ 2, F(n) = F(n-1) + F(n-2). We can compute the nth Fibonacci number using a dynamic programming approach:

def fibonacci(n):
    # Base cases: F(0) = 0 and F(1) = 1
    if n <= 1:
        return n
    memo = [0] * (n + 1)   # memo[i] will hold F(i)
    memo[1] = 1
    for i in range(2, n + 1):
        # Each value reuses the two already-stored subproblem results
        memo[i] = memo[i - 1] + memo[i - 2]
    return memo[n]

In this approach, we use an array memo to store the computed Fibonacci numbers. By starting from the base cases (F(0) and F(1)) and iteratively computing subsequent Fibonacci numbers up to n, we avoid redundant calculations. Without dynamic programming, the naive recursive definition recomputes the same subproblems over and over (F(n-2) is computed once for F(n) and again inside F(n-1)), resulting in exponential time complexity.

The overlapping subproblems occur because when calculating F(n), we need the previously computed Fibonacci numbers F(n-1) and F(n-2), which are subproblems themselves. By storing the solutions to these subproblems in the memo array, we eliminate the need to recompute them and achieve significant performance improvements.
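To see the overlap concretely, here is a small comparison of our own (not part of the original solution): counting the recursive calls made by plain recursion versus a memoized version for F(20):

```python
from functools import lru_cache

calls_naive = 0

def fib_naive(n):
    """Plain recursion: the same subproblems are re-solved many times."""
    global calls_naive
    calls_naive += 1
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

calls_memo = 0

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized recursion: each distinct subproblem is solved exactly once."""
    global calls_memo
    calls_memo += 1
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

assert fib_naive(20) == fib_memo(20) == 6765
print(calls_naive, calls_memo)  # 21891 21
```

The naive version makes 21891 calls for n = 20, while memoization needs only 21 (one per distinct subproblem 0..20), which is exactly the saving that overlapping subproblems make possible.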

(b) Show that if all the characters of pattern P of size m are different, the naïve string
matching algorithm can perform better with modification. Write the modified
algorithm that performs better than O(n.m).

(c) Explain with example, how the Hamiltonian Cycle problem can be used to solve
the Travelling Salesman problem.

