16 MARK ANSWERS — DATA STRUCTURES
Q1. Explain the different types of data structures with examples.
🔷 TYPES OF DATA STRUCTURES

A data structure is a way of collecting and organizing data so that operations can be performed on it efficiently. It arranges data elements according to some relationship, for better organization and storage.

Data structures are broadly classified into two major categories:

  1. Built-in Data Structures (Primitive)
  2. User Defined Data Structures (Complex/Abstract)
📌 1. BUILT-IN DATA STRUCTURES (PRIMITIVE)

These are the basic data types supported directly by a programming language, known as Primitive Data Structures.

| Type | Description |
| --- | --- |
| Integer | Used to store whole numbers. |
| Float | Used to store decimal or fractional values. |
| Character | Used to store single characters. |
| Pointer | Used to store memory addresses. |

Example of Primitive Data Organization:

  • We have some data which has a player's name "Virat" and age 26.
  • Here "Virat" is of String data type and 26 is of integer data type.
  • We can organize this data as a record like Player record.
📌 2. USER DEFINED DATA STRUCTURES (COMPLEX)

These abstract data structures are used to store large amounts of connected data.

✅ A. ARRAYS & FILES
  • Arrays: Stores elements sequentially.
  • Files: Used for storing data permanently.
✅ B. LISTS

Lists are categorized into:

🔹 i) LINEAR LISTS
  • Stacks: Follows LIFO principle.
  • Queues: Follows FIFO principle.
🔹 ii) NON-LINEAR LISTS
  • Trees: Hierarchical structure with root node.
  • Graphs: Nodes connected via edges.
🗂️ COMPLETE CLASSIFICATION DIAGRAM
DATA STRUCTURES
├── Built-in Data Structures
│   ├── Integer
│   ├── Float
│   ├── Character
│   └── Pointer
└── User Defined Data Structures
    ├── Arrays
    ├── Files
    └── Lists
        ├── Linear Lists
        │   ├── Stacks
        │   └── Queues
        └── Non-Linear Lists
            ├── Trees
            └── Graphs

(Diagram derived from the classification chart of Data Structures)

Q2. Explain the structure and working of various types of linked lists. Discuss advantages and disadvantages.
🔷 LINKED LIST — OVERVIEW

A linked list is a sequence of nodes connected via links (pointers).

Key Terminology:
  • Node: Stores a data element.
  • Next: Pointer to the next node.
  • Head: Pointer to the first node.
[Head] → [Data | Next] → [Data | Next] → NULL
📌 TYPES OF LINKED LISTS
🔹 TYPE 1: SINGLY LINKED LIST
  • Navigation is forward only.
  • Contains data and next pointer.
🔹 TYPE 2: DOUBLY LINKED LIST
  • Forward & backward navigation.
  • Contains next and previous pointers.
NULL ← [Prev | Data | Next] ↔ [Prev | Data | Next] → NULL
🔹 TYPE 3: CIRCULAR LINKED LIST
  • Last node connects to first node.
📊 ADVANTAGES
  • Dynamic size — grows and shrinks at runtime
  • Efficient insertion and deletion (no shifting of elements)
  • Two-way navigation (in doubly linked lists)
❌ DISADVANTAGES
  • Sequential access only — no random access by index
  • Extra memory needed for pointers
  • Restricted navigation (singly linked lists are forward-only)
Q3. Discuss Infix, Prefix, and Postfix Notations with Examples.

🔷 EXPRESSION NOTATION

Expression notation is a way of writing mathematical expressions using operators and operands. These notations are widely used in compiler design, expression evaluation, and stack applications.

There are three main types:

  1. Infix Notation
  2. Prefix Notation (Polish Notation)
  3. Postfix Notation (Reverse Polish Notation)

📌 1. INFIX NOTATION

Operator is written between operands

Most commonly used form in mathematics

🔹 Format:

Operand Operator Operand

🔹 Example:

A + B
3 * 5
(A + B) * C

🔹 Characteristics:
  • Requires parentheses to define precedence
  • Easy for humans to understand
  • Difficult for machines to evaluate directly

📌 2. PREFIX NOTATION (POLISH NOTATION)

Operator is written before operands

No need for parentheses

🔹 Format:

Operator Operand Operand

🔹 Example:

+ A B
* 3 5
* + A B C

🔹 Evaluation Order:

Read expression right to left


📌 3. POSTFIX NOTATION (REVERSE POLISH NOTATION)

Operator is written after operands

No parentheses required

🔹 Format:

Operand Operand Operator

🔹 Example:

A B +
3 5 *
A B + C *

🔹 Evaluation Order:

Read expression left to right

Uses stack for evaluation


📊 DIAGRAMMATIC REPRESENTATION
Expression: A + B

Infix:   A + B
Prefix:  + A B
Postfix: A B +

📌 EXPRESSION CONVERSION EXAMPLE
🔹 Given Infix Expression:

3 * 3 / (4 - 1) + 6 * 2

🔹 Postfix Conversion Steps (Using Stack)
| Step | Symbol | Stack | Output |
| --- | --- | --- | --- |
| 1 | 3 | | 3 |
| 2 | * | * | 3 |
| 3 | 3 | * | 3 3 |
| 4 | / | / | 3 3 * |
| 5 | ( | / ( | 3 3 * |
| 6 | 4 | / ( | 3 3 * 4 |
| 7 | - | / ( - | 3 3 * 4 |
| 8 | 1 | / ( - | 3 3 * 4 1 |
| 9 | ) | / | 3 3 * 4 1 - |
| 10 | + | + | 3 3 * 4 1 - / |
| 11 | 6 | + | 3 3 * 4 1 - / 6 |
| 12 | * | + * | 3 3 * 4 1 - / 6 |
| 13 | 2 | + * | 3 3 * 4 1 - / 6 2 |
| End | | | 3 3 * 4 1 - / 6 2 * + |

Final Postfix: 3 3 * 4 1 - / 6 2 * +

📊 COMPARISON TABLE
| Feature | Infix | Prefix | Postfix |
| --- | --- | --- | --- |
| Operator Position | Middle | Beginning | End |
| Parentheses Needed | Yes | No | No |
| Evaluation | Complex | Easy | Easy |
| Used By | Humans | Compilers | Stack Evaluation |
| Example | A+B | +AB | AB+ |

🔷 APPLICATIONS
  • Expression evaluation using stack
  • Compiler design
  • Syntax parsing
  • Arithmetic expression conversion

Q4. Explain Circular Queue with Example.

🔷 CIRCULAR QUEUE

A circular queue is a linear data structure in which the last position is connected back to the first position, forming a circular structure.

It overcomes the limitation of simple queue (wasted space).


📌 STRUCTURE OF CIRCULAR QUEUE
 FRONT                REAR
   ↓                    ↓
 [10]  [20]  [30]  [40]
   ↑                    |
   └──── ← ← ← ← ← ─────┘
  • Elements are arranged in a circle
  • Uses modulo operation (%) for wrapping

📌 BASIC OPERATIONS
🔹 1. INSERTION (ENQUEUE)
  • Step 1: If (REAR + 1) % SIZE == FRONT → Overflow
  • Step 2: If queue empty → FRONT = REAR = 0
  • Step 3: Else → REAR = (REAR + 1) % SIZE
  • Step 4: Insert element at REAR
🔹 2. DELETION (DEQUEUE)
  • Step 1: If FRONT == -1 → Underflow
  • Step 2: Remove element at FRONT
  • Step 3: If FRONT == REAR → FRONT = REAR = -1
  • Step 4: Else → FRONT = (FRONT + 1) % SIZE
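The steps above can be sketched in C. This is a minimal illustration (the names `enqueue`/`dequeue`, the global array, and SIZE = 4 are our choices, not fixed by the question); it uses -1 as the empty sentinel exactly as in the steps:

```c
#include <assert.h>

#define SIZE 4

int cq[SIZE];
int front = -1, rear = -1;

/* Enqueue: returns 0 on overflow, 1 on success */
int enqueue(int value) {
    if ((rear + 1) % SIZE == front) return 0;  /* Step 1: overflow */
    if (front == -1) front = rear = 0;         /* Step 2: first element */
    else rear = (rear + 1) % SIZE;             /* Step 3: wrap with modulo */
    cq[rear] = value;                          /* Step 4: insert at REAR */
    return 1;
}

/* Dequeue: returns 0 on underflow, 1 on success; *out receives the value */
int dequeue(int *out) {
    if (front == -1) return 0;                 /* Step 1: underflow */
    *out = cq[front];                          /* Step 2: remove at FRONT */
    if (front == rear) front = rear = -1;      /* Step 3: queue became empty */
    else front = (front + 1) % SIZE;           /* Step 4: advance with modulo */
    return 1;
}
```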

📌 WORKING EXAMPLE

🔹 Initial State:

Queue: [ ] [ ] [ ] [ ]
FRONT = -1, REAR = -1

🔹 Insert 10, 20, 30

[10] [20] [30] [ ]
FRONT = 0, REAR = 2

🔹 Delete one element

[ ] [20] [30] [ ]
FRONT = 1, REAR = 2

🔹 Insert 40

[ ] [20] [30] [40]
FRONT = 1, REAR = 3

🔹 Insert 50 (Circular Move)

[50] [20] [30] [40]
FRONT = 1, REAR wraps to index 0


📊 DIAGRAM – CIRCULAR MOVEMENT
Index:   0    1    2    3
Step 1: [10] [20] [30] [  ]
Step 2: Delete 10 → FRONT moves to index 1
Step 3: Insert 40 → REAR = 3
Step 4: Insert 50 → REAR wraps to index 0
Final:  [50] [20] [30] [40]

📌 ADVANTAGES

| # | Advantage |
| --- | --- |
| 1 | Efficient memory utilization |
| 2 | No wastage of space |
| 3 | Faster operations |
| 4 | Useful in real-time systems |

📌 DISADVANTAGES

| # | Disadvantage |
| --- | --- |
| 1 | Complex implementation |
| 2 | Hard to detect full/empty condition |
| 3 | Requires modulo operation |

🔷 APPLICATIONS
  • CPU scheduling
  • Memory management
  • Traffic signal systems
  • Buffers (keyboard, printer)

📊 LINEAR VS CIRCULAR QUEUE
| Feature | Linear Queue | Circular Queue |
| --- | --- | --- |
| Structure | Straight | Circular |
| Space Usage | Wasted | Fully utilized |
| Efficiency | Low | High |
| Overflow Condition | Rear = Max | (Rear+1) % Size = Front |
Q5. Discuss Tree Terminologies and Different Types of Tree Representations.

🔷 TREE DATA STRUCTURE

A tree is a non-linear hierarchical data structure consisting of nodes connected by edges. It represents relationships in a parent–child form.


📌 TREE TERMINOLOGIES
🔹 1. NODE
  • Basic element of a tree
  • Contains data + links to child nodes
🔹 2. ROOT NODE
  • Topmost node of the tree
  • Has no parent
A ← Root
🔹 3. EDGE

Connection between two nodes

A → B (Edge)
🔹 4. PARENT & CHILD
  • Parent: Node having child nodes
  • Child: Node derived from parent
    A
   / \
  B   C

A → Parent
B, C → Children
🔹 5. LEAF NODE (EXTERNAL NODE)

Node with no children

    A
   / \
  B   C

B, C → Leaf Nodes
🔹 6. INTERNAL NODE

Node having at least one child

🔹 7. DEGREE OF NODE

Number of children of a node

🔹 8. DEGREE OF TREE

Maximum degree of any node in the tree

🔹 9. LEVEL OF NODE

Number of edges from root to node

Level 0 → A
Level 1 → B, C

🔹 10. HEIGHT OF TREE

Maximum level of any node

🔹 11. DEPTH

Distance from root to a specific node

🔹 12. SUBTREE

A tree formed by a node and its descendants


📊 DIAGRAM OF TREE TERMINOLOGY
        A (Root)
       /  \
      B    C
     / \    \
    D   E    F

Leaf Nodes: D, E, F
Internal Nodes: A, B, C
Edges: (A-B), (A-C), (B-D), (B-E), (C-F)

📌 TYPES OF TREE REPRESENTATION

Tree can be represented in memory using different methods.

🔹 1. ARRAY REPRESENTATION
  • Tree stored in array form
  • Mainly used for binary trees

Rules (using 1-based indexing):

  • If a node is at index i
  • Left child = 2i
  • Right child = 2i + 1

Example:

Tree:
    A
   / \
  B   C

Array:
Index: 1 2 3
Value: A B C
🔹 2. LINKED LIST REPRESENTATION
  • Each node contains:
  • Data
  • Pointer to left child
  • Pointer to right child
struct Node {
    int data;
    struct Node *left;
    struct Node *right;
};

    [A]
   /   \
 [B]   [C]
🔹 3. PARENT ARRAY REPRESENTATION

Stores parent of each node in array

Index:   0  1  2  3
Node:    A  B  C  D
Parent: -1  0  0  1
🔹 4. CHILD LIST REPRESENTATION
A → B, C
B → D, E
C → F

📊 COMPARISON OF REPRESENTATIONS
| Representation | Advantage | Disadvantage |
| --- | --- | --- |
| Array | Simple & fast | Wastes memory |
| Linked List | Dynamic | Extra pointer memory |
| Parent Array | Easy parent access | Not efficient for traversal |
| Child List | Flexible | Complex structure |

🔷 APPLICATIONS OF TREE
  • File systems
  • Database indexing
  • Expression trees
  • Artificial Intelligence

Q6. Describe Min Heap and Max Heap with Example.

🔷 HEAP DATA STRUCTURE

A heap is a complete binary tree that satisfies the heap property.

There are two types:

  1. Min Heap
  2. Max Heap

📌 1. MIN HEAP

Definition:

Parent node is smaller than or equal to its children

Parent ≤ Child

       10
      /  \
    20    30
   /  \
 40    50

Root contains minimum element

ARRAY REPRESENTATION:

[10, 20, 30, 40, 50]

PROPERTY:

Smallest element always at root


📌 2. MAX HEAP

Definition:

Parent node is greater than or equal to its children

Parent ≥ Child

       50
      /  \
    30    40
   /  \
 10    20

Root contains maximum element

ARRAY REPRESENTATION:

[50, 30, 40, 10, 20]


📊 DIAGRAMMATIC COMPARISON
   MIN HEAP        MAX HEAP
      10              50
     /  \            /  \
   20    30        30    40

📌 INSERTION IN HEAP

Steps:

  1. Insert element at last position
  2. Compare with parent
  3. Swap if heap property violated
  4. Repeat until correct position
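The insertion steps can be sketched for a min heap in C (a minimal array-backed version; the names `heap_insert` and `heap_size` are ours). The new value is placed at the end and bubbled up while it is smaller than its parent:

```c
#include <assert.h>

#define MAX 100
int heap[MAX];
int heap_size = 0;

/* Min-heap insert: append at the last position, then swap upward
   while the new value is smaller than its parent (index (i-1)/2). */
void heap_insert(int value) {
    int i = heap_size++;
    heap[i] = value;
    while (i > 0 && heap[(i - 1) / 2] > heap[i]) {
        int p = (i - 1) / 2;
        int tmp = heap[p]; heap[p] = heap[i]; heap[i] = tmp;
        i = p;
    }
}
```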
📌 DELETION IN HEAP

Steps:

  1. Remove root element
  2. Replace with last element
  3. Heapify (adjust tree)
  4. Maintain heap property

📊 COMPARISON TABLE
| Feature | Min Heap | Max Heap |
| --- | --- | --- |
| Root Value | Minimum | Maximum |
| Property | Parent ≤ Child | Parent ≥ Child |
| Use Case | Priority queue (min) | Priority queue (max) |
| Example | 10, 20, 30 | 50, 30, 40 |

🔷 APPLICATIONS OF HEAP
  • Priority Queue
  • Heap Sort
  • Scheduling algorithms
  • Graph algorithms (Dijkstra, Prim)

📌 KEY DIFFERENCE

Min Heap → Extract Minimum

Max Heap → Extract Maximum

Q7. Explain Sorting Techniques (Bubble Sort, Radix Sort) with Time Complexity Comparison.

🔷 SORTING

Sorting is the process of arranging data elements in a specific order (ascending or descending). It improves searching efficiency and data organization.


📌 1. BUBBLE SORT
🔹 DEFINITION

Bubble Sort is a simple sorting technique where adjacent elements are compared and swapped if they are in the wrong order.

🔹 WORKING
  • Repeatedly compare adjacent elements
  • Swap if left > right
  • Largest element moves to the end in each pass
🔹 ALGORITHM
Step 1: Start
Step 2: Repeat for i = 0 to n-1
Step 3:   For j = 0 to n-i-1
Step 4:     If A[j] > A[j+1] → Swap A[j], A[j+1]
Step 5: Stop
🔹 EXAMPLE
Array: 5, 3, 2, 4

Pass 1:
  5 3 → swap → 3 5 2 4
  5 2 → swap → 3 2 5 4
  5 4 → swap → 3 2 4 5

Pass 2:
  3 2 → swap → 2 3 4 5

Sorted Array: 2 3 4 5
🔹 DIAGRAM
Pass 1: [5] [3] [2] [4]
         ↓   ↓
        swap
Result: [3] [2] [4] [5]
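The passes above can be written directly in C (a minimal sketch; the `swapped` early-exit flag is the standard optimization that gives the O(n) best case on already-sorted input):

```c
#include <assert.h>

/* Bubble sort: after pass i, the i-th largest element is in place. */
void bubble_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int swapped = 0;
        for (int j = 0; j < n - i - 1; j++) {
            if (a[j] > a[j + 1]) {           /* adjacent pair out of order */
                int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                swapped = 1;
            }
        }
        if (!swapped) break;                 /* array already sorted */
    }
}
```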
🔹 TIME COMPLEXITY
| Case | Complexity |
| --- | --- |
| Best Case | O(n) |
| Average Case | O(n²) |
| Worst Case | O(n²) |

📌 2. RADIX SORT
🔹 DEFINITION

Radix Sort is a non-comparison sorting algorithm that sorts numbers digit by digit (from least significant digit to most significant).

🔹 WORKING
  • Sort numbers based on individual digits
  • Use bucket system (0–9)
  • Repeat for each digit place
🔹 ALGORITHM
Step 1: Find the maximum number (to know the number of digits)
Step 2: For each digit place (units, tens, hundreds, ...)
Step 3:   Place elements into buckets (0–9)
Step 4:   Collect elements from the buckets in order
Step 5: Repeat until all digit places are processed
🔹 EXAMPLE
Numbers: 170, 45, 75, 90

Step 1 (Units place):
  Buckets: 0 → 170, 90   5 → 45, 75
  Result: 170, 90, 45, 75

Step 2 (Tens place):
  Buckets: 4 → 45   7 → 170, 75   9 → 90
  Result: 45, 170, 75, 90

Step 3 (Hundreds place):
  Buckets: 0 → 45, 75, 90   1 → 170
  Final Sorted: 45, 75, 90, 170
🔹 DIAGRAM
Buckets (0–9), units pass:
0 → 170, 90
5 → 45, 75
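One common way to implement the bucket passes is a counting sort per digit, as sketched below (our implementation choice, not the only one; assumes non-negative integers):

```c
#include <assert.h>
#include <string.h>

/* LSD radix sort: one stable counting-sort pass per digit place. */
void radix_sort(int a[], int n) {
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max) max = a[i];          /* max tells us how many digits */

    int out[n];                              /* C99 variable-length array */
    for (int exp = 1; max / exp > 0; exp *= 10) {
        int count[10] = {0};                 /* bucket sizes for this digit */
        for (int i = 0; i < n; i++) count[(a[i] / exp) % 10]++;
        for (int d = 1; d < 10; d++) count[d] += count[d - 1];
        for (int i = n - 1; i >= 0; i--)     /* right-to-left keeps it stable */
            out[--count[(a[i] / exp) % 10]] = a[i];
        memcpy(a, out, n * sizeof(int));     /* collect buckets back */
    }
}
```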
🔹 TIME COMPLEXITY
| Case | Complexity |
| --- | --- |
| Best | O(nk) |
| Average | O(nk) |
| Worst | O(nk) |

(k = number of digits)


📊 COMPARISON TABLE
| Feature | Bubble Sort | Radix Sort |
| --- | --- | --- |
| Type | Comparison | Non-comparison |
| Complexity | O(n²) | O(nk) |
| Efficiency | Slow | Fast for large data |
| Stability | Stable | Stable |
| Space | In-place | Extra space needed |
| Use Case | Small data | Large numeric data |

🔷 KEY POINT

Bubble Sort → Simple but inefficient

Radix Sort → Efficient for large numbers


Q8. Explain Linear Search and Binary Search Algorithms with Example.

🔷 SEARCHING

Searching is the process of finding a specific element in a data structure.

Two main techniques:

  1. Linear Search
  2. Binary Search

📌 1. LINEAR SEARCH
🔹 DEFINITION

Linear Search checks each element sequentially until the target element is found.

🔹 ALGORITHM
Step 1: Start from the first element
Step 2: Compare each element with the target
Step 3: If match found → return position
Step 4: If end reached → element not found
🔹 EXAMPLE
Array: 10, 20, 30, 40, 50
Search: 30

Step 1: 10 ≠ 30
Step 2: 20 ≠ 30
Step 3: 30 = 30 ✔

Found at position 3
🔹 DIAGRAM
[10] [20] [30] [40] [50]
  →    →    ✔
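The algorithm is a single loop in C (a minimal sketch; returns a 0-based index, while the example above counts positions from 1):

```c
#include <assert.h>

/* Linear search: returns the 0-based index of key, or -1 if not found. */
int linear_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;   /* compare each element in turn */
    return -1;
}
```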
🔹 TIME COMPLEXITY
| Case | Complexity |
| --- | --- |
| Best | O(1) |
| Average | O(n) |
| Worst | O(n) |

📌 2. BINARY SEARCH
🔹 DEFINITION

Binary Search works on sorted arrays and repeatedly divides the search space into half.

🔹 ALGORITHM
Step 1: Set low = 0, high = n-1
Step 2: Find mid = (low + high) / 2
Step 3: If A[mid] == key → Found
Step 4: If key < A[mid] → search left half
Step 5: If key > A[mid] → search right half
Step 6: Repeat until found or the range is empty
🔹 EXAMPLE
Array: 10, 20, 30, 40, 50
Search: 40

Step 1: mid = 30 → 40 > 30 → search right
Step 2: mid = 40 ✔ Found
🔹 DIAGRAM
[10] [20] [30] [40] [50]
           ↑
          mid

Then: [40] [50]
        ↑
      Found
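The halving steps can be sketched in C (minimal version; `low + (high - low) / 2` is the usual way to compute the midpoint without integer overflow):

```c
#include <assert.h>

/* Binary search on a sorted array: returns 0-based index or -1. */
int binary_search(const int a[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (a[mid] == key) return mid;
        if (key < a[mid]) high = mid - 1;   /* search the left half */
        else low = mid + 1;                 /* search the right half */
    }
    return -1;
}
```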
🔹 TIME COMPLEXITY
| Case | Complexity |
| --- | --- |
| Best | O(1) |
| Average | O(log n) |
| Worst | O(log n) |

📊 COMPARISON TABLE
| Feature | Linear Search | Binary Search |
| --- | --- | --- |
| Requirement | Unsorted OK | Must be sorted |
| Complexity | O(n) | O(log n) |
| Speed | Slow | Fast |
| Method | Sequential | Divide & Conquer |
| Use Case | Small data | Large sorted data |

🔷 KEY DIFFERENCE

Linear Search → Check one by one

Binary Search → Divide and search

Q9. Discuss BFS and DFS Graph Traversal Methods with Algorithms and Applications.

🔷 GRAPH TRAVERSAL

Graph traversal is the process of visiting all the vertices (nodes) of a graph in a systematic way.

Two important traversal techniques are:

  1. Breadth First Search (BFS)
  2. Depth First Search (DFS)

📌 1. BREADTH FIRST SEARCH (BFS)
🔹 DEFINITION

BFS visits nodes level by level, starting from the source node.

  • Uses Queue (FIFO)
  • Explores neighbors first
🔹 ALGORITHM
Step 1: Create a queue Q
Step 2: Mark all nodes as unvisited
Step 3: Insert the starting node into Q and mark it visited
Step 4: While Q is not empty
Step 5:   Remove a node from Q
Step 6:   Visit the node
Step 7:   Add all unvisited adjacent nodes to Q
Step 8: Stop
🔹 DIAGRAM
Graph:
      A
     / \
    B   C
   /     \
  D       E

BFS Traversal: A → B → C → D → E
🔹 EXAMPLE
Start from A:
Queue: [A]

Visit A → Add B, C   Queue: [B, C]
Visit B → Add D      Queue: [C, D]
Visit C → Add E      Queue: [D, E]

Final Output: A B C D E
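The queue-based traversal can be sketched in C over an adjacency matrix (a minimal version; the 5-vertex matrix and the names `bfs`/`out` are our illustration, with vertices 0..4 standing for A..E):

```c
#include <assert.h>

#define NV 5   /* vertices 0..4 mapped to A..E */

/* BFS from start: records visit order in out[], returns number visited. */
int bfs(int adj[NV][NV], int start, int out[NV]) {
    int visited[NV] = {0}, queue[NV], front = 0, rear = 0, count = 0;
    queue[rear++] = start;                   /* enqueue the source */
    visited[start] = 1;
    while (front < rear) {
        int u = queue[front++];              /* dequeue */
        out[count++] = u;                    /* visit the node */
        for (int v = 0; v < NV; v++)
            if (adj[u][v] && !visited[v]) {  /* unvisited neighbor */
                visited[v] = 1;
                queue[rear++] = v;
            }
    }
    return count;
}
```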
🔹 TIME COMPLEXITY

O(V + E)

🔹 APPLICATIONS
  • Shortest path in unweighted graphs
  • Social networks (friend suggestions)
  • Web crawling
  • Network broadcasting

📌 2. DEPTH FIRST SEARCH (DFS)
🔹 DEFINITION

DFS explores as deep as possible before backtracking.

  • Uses Stack / Recursion
  • Follows depth-wise traversal
🔹 ALGORITHM
Step 1: Mark all nodes as unvisited
Step 2: Start from a node
Step 3: Mark the node as visited
Step 4: Visit the node
Step 5: Recursively visit all unvisited adjacent nodes
Step 6: Stop
🔹 DIAGRAM
Graph:
      A
     / \
    B   C
   /     \
  D       E

DFS Traversal: A → B → D → C → E
🔹 EXAMPLE
Start from A:
Visit A → go to B
Visit B → go to D
Backtrack → go to C
Visit C → go to E

Final Output: A B D C E
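The recursive version can be sketched in C (minimal illustration on the same assumed 5-vertex graph, vertices 0..4 for A..E; globals keep the sketch short):

```c
#include <assert.h>

#define NV 5

int visited[NV], order[NV], visit_count;

/* DFS: visit a node, then go as deep as possible into each
   unvisited neighbor before backtracking. */
void dfs(int adj[NV][NV], int u) {
    visited[u] = 1;
    order[visit_count++] = u;
    for (int v = 0; v < NV; v++)
        if (adj[u][v] && !visited[v])
            dfs(adj, v);                 /* recursion acts as the stack */
}
```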
🔹 TIME COMPLEXITY

O(V + E)

🔹 APPLICATIONS
  • Cycle detection
  • Topological sorting
  • Path finding
  • Solving puzzles (maze, backtracking)

📊 BFS vs DFS COMPARISON
| Feature | BFS | DFS |
| --- | --- | --- |
| Data Structure | Queue | Stack/Recursion |
| Traversal | Level-wise | Depth-wise |
| Shortest Path | Yes | No |
| Memory Usage | More | Less |
| Speed | Slower | Faster (sometimes) |

🔷 KEY DIFFERENCE

BFS → Level by level traversal

DFS → Go deep first then backtrack


Q10. Explain Prim’s Algorithm with Example. Compare Shortest Path and Minimum Spanning Tree.

🔷 MINIMUM SPANNING TREE (MST)

A Minimum Spanning Tree is a subset of edges that:

  • Connects all vertices
  • Has no cycles
  • Has minimum total weight

📌 PRIM’S ALGORITHM
🔹 DEFINITION

Prim’s Algorithm is a greedy algorithm used to find the Minimum Spanning Tree of a weighted graph.

🔹 WORKING PRINCIPLE
  • Start from any vertex
  • Select the minimum weight edge
  • Add new vertex to MST
  • Repeat until all vertices included
🔹 ALGORITHM
Step 1: Start with any vertex
Step 2: Mark it as visited
Step 3: Find the smallest edge connecting a visited node to an unvisited node
Step 4: Add the edge to the MST
Step 5: Mark the new node as visited
Step 6: Repeat until all vertices are included
Step 7: Stop
📌 EXAMPLE
GRAPH

       A ---(2)--- B
       | \         |
      (3) (1)     (4)
       |     \     |
       C ---(5)--- D
🔹 STEPS
Step 1: Start at A
  Edges from A: AB(2), AC(3), AD(1)
  Select AD(1)

Step 2: Visited {A, D}
  Edges: AB(2), AC(3), DB(4), DC(5)
  Select AB(2)

Step 3: Visited {A, D, B}
  Edges: AC(3), DC(5)
  Select AC(3)
🔹 MST RESULT
Edges: AD(1), AB(2), AC(3)

Total Cost = 6
📊 DIAGRAM OF MST
    B
    |
    A
   / \
  D   C

📌 APPLICATIONS
  • Network design (roads, cables)
  • Electrical wiring
  • Telecommunication networks

📊 SHORTEST PATH vs MINIMUM SPANNING TREE
| Feature | Shortest Path | Minimum Spanning Tree |
| --- | --- | --- |
| Goal | Shortest distance between nodes | Minimum total weight |
| Algorithms | Dijkstra, Bellman-Ford | Prim, Kruskal |
| Path | Between two nodes | Covers all nodes |
| Cycle | Allowed | Not allowed |
| Example | Navigation systems | Network design |

🔷 KEY DIFFERENCE

Shortest Path → Min distance between two nodes

MST → Min total cost connecting all nodes


Q11. Explain Asymptotic Notations (Big-O, Omega, Theta). Analyze Linear Search and Binary Search.

🔷 ASYMPTOTIC NOTATIONS

Asymptotic notations are used to analyze the performance of algorithms in terms of time and space complexity. They describe how an algorithm behaves as input size n increases.


📌 TYPES OF ASYMPTOTIC NOTATIONS
  1. Big-O Notation (O)
  2. Omega Notation (Ω)
  3. Theta Notation (θ)

📌 1. BIG-O NOTATION (O)
🔹 DEFINITION

Big-O represents the upper bound (worst case) of an algorithm.

O(f(n)) = maximum time required

🔹 EXAMPLE

Linear Search worst case:
Element at last position → O(n)

🔹 GRAPHICAL IDEA
Time
 ↑
 |        O(n²)
 |       /
 |     /
 |   /
 | /
 +-----------→ n

📌 2. OMEGA NOTATION (Ω)
🔹 DEFINITION

Omega represents the lower bound (best case) of an algorithm.

Ω(f(n)) = minimum time required

🔹 EXAMPLE

Linear Search best case:
Element at first position → Ω(1)


📌 3. THETA NOTATION (θ)
🔹 DEFINITION

Theta represents the tight bound (average case).

θ(f(n)) = both upper and lower bound

🔹 EXAMPLE

Linear Search average case:
θ(n)


📊 COMPARISON TABLE
| Notation | Meaning | Case |
| --- | --- | --- |
| Big-O | Upper bound | Worst |
| Omega | Lower bound | Best |
| Theta | Exact bound | Average |

🔷 ANALYSIS OF SEARCHING ALGORITHMS

📌 1. LINEAR SEARCH ANALYSIS
🔹 WORKING

Checks elements one by one sequentially

🔹 COMPLEXITY
| Case | Complexity |
| --- | --- |
| Best Case | Ω(1) |
| Average Case | θ(n) |
| Worst Case | O(n) |
🔹 EXAMPLE

Array: [10, 20, 30, 40, 50]

Search 10 → 1 step → Ω(1)
Search 50 → 5 steps → O(n)


📌 2. BINARY SEARCH ANALYSIS
🔹 WORKING

Divides array into halves repeatedly
Requires sorted array

🔹 COMPLEXITY
| Case | Complexity |
| --- | --- |
| Best Case | Ω(1) |
| Average Case | θ(log n) |
| Worst Case | O(log n) |
🔹 EXAMPLE

Array: [10, 20, 30, 40, 50]

Search 40:
Step 1 → mid = 30
Step 2 → search right → 40 found


📊 COMPARISON
| Feature | Linear Search | Binary Search |
| --- | --- | --- |
| Method | Sequential | Divide & Conquer |
| Best | Ω(1) | Ω(1) |
| Worst | O(n) | O(log n) |
| Average | θ(n) | θ(log n) |
| Requirement | Unsorted OK | Sorted required |

🔷 KEY POINT

Linear Search → Slower (O(n))

Binary Search → Faster (O(log n))


Q12. Explain Linked List Operations and Its Types with Example.

🔷 LINKED LIST

A linked list is a dynamic data structure consisting of nodes where each node contains:

  • Data
  • Pointer to next node
HEAD → [Data | Next] → [Data | Next] → NULL

📌 TYPES OF LINKED LIST
🔹 1. SINGLY LINKED LIST

Each node points to next node only

[10] → [20] → [30] → NULL
🔹 2. DOUBLY LINKED LIST

Each node has two pointers

NULL ← [10] ↔ [20] ↔ [30] → NULL
🔹 3. CIRCULAR LINKED LIST

Last node points back to first node

[10] → [20] → [30]
  ↑_____________|

📌 NODE STRUCTURE
struct Node {
    int data;
    struct Node *next;
};

📌 LINKED LIST OPERATIONS
🔹 1. INSERTION

➤ At Beginning

  • Step 1: Create new node
  • Step 2: new->next = head
  • Step 3: head = new node

➤ At End

  • Step 1: Traverse to last node
  • Step 2: last->next = new node
  • Step 3: new->next = NULL

➤ At Middle

  • Step 1: Find position
  • Step 2: new->next = temp->next
  • Step 3: temp->next = new node
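The insertion (and deletion) pointer updates can be sketched in C using the node structure above (a minimal version; the function names `insert_begin`/`delete_value` are ours):

```c
#include <assert.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

/* Insert at the beginning: new node's next points to the old head. */
struct Node *insert_begin(struct Node *head, int value) {
    struct Node *node = malloc(sizeof *node);
    node->data = value;
    node->next = head;
    return node;                       /* new node becomes the head */
}

/* Delete the first node holding value; returns the (possibly new) head. */
struct Node *delete_value(struct Node *head, int value) {
    struct Node *cur = head, *prev = NULL;
    while (cur && cur->data != value) { prev = cur; cur = cur->next; }
    if (!cur) return head;             /* value not found */
    if (prev) prev->next = cur->next;  /* unlink a middle/end node */
    else head = cur->next;             /* deleting the head node */
    free(cur);
    return head;
}
```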
🔹 2. DELETION

➤ From Beginning

  • Step 1: temp = head
  • Step 2: head = head->next
  • Step 3: delete temp

➤ From End

  • Step 1: Traverse to second last node
  • Step 2: second_last->next = NULL

➤ From Middle

  • Step 1: Find node
  • Step 2: prev->next = current->next
  • Step 3: delete current
🔹 3. TRAVERSAL
  • Step 1: Start from head
  • Step 2: Print each node
  • Step 3: Move to next until NULL
🔹 4. SEARCH
  • Step 1: Start from head
  • Step 2: Compare each node
  • Step 3: If found → return position

📌 EXAMPLE
Initial: [10] → [20] → [30]

Insert 5 at beginning:
[5] → [10] → [20] → [30]

Delete 20:
[5] → [10] → [30]

📊 DIAGRAM
HEAD
 ↓
[Data|Next] → [Data|Next] → [Data|NULL]

📊 ADVANTAGES
| # | Advantage |
| --- | --- |
| 1 | Dynamic size |
| 2 | Efficient insertion/deletion |
| 3 | No memory wastage |
📊 DISADVANTAGES
| # | Disadvantage |
| --- | --- |
| 1 | No random access |
| 2 | Extra memory for pointers |
| 3 | Traversal is slow |

🔷 KEY POINT

Linked List → Dynamic and flexible

Array → Fixed and faster access

Q13. Explain Stack using Array. Discuss Applications like Expression Evaluation.

🔷 STACK (USING ARRAY)

A stack is a linear data structure that follows the LIFO (Last In First Out) principle. All operations are performed at one end called TOP.


📌 ARRAY IMPLEMENTATION OF STACK
  • Stack is implemented using an array
  • A variable top is used to track the top element

Initially:
top = -1

📊 STRUCTURE
Index:  0    1    2    3    4
      [ ]  [ ]  [ ]  [ ]  [ ]

TOP = -1 (Empty Stack)

📌 BASIC OPERATIONS
🔹 1. PUSH OPERATION

Inserts element into stack

Step 1: Check if top == MAX-1 → Overflow
Step 2: top = top + 1
Step 3: stack[top] = value
🔹 EXAMPLE
Push 10 → [10]
Push 20 → [10, 20]
Push 30 → [10, 20, 30]
                    ↑
                   TOP
🔹 2. POP OPERATION

Removes element from stack

Step 1: Check if top == -1 → Underflow
Step 2: value = stack[top]
Step 3: top = top - 1
🔹 EXAMPLE
Stack: [10, 20, 30]
POP → removes 30
Result: [10, 20]
🔹 3. PEEK OPERATION

Returns top element without removing

value = stack[top]

🔹 4. isEmpty() & isFull()

isEmpty → top == -1
isFull → top == MAX-1

📊 DIAGRAM
TOP
 ↓
[30]
[20]
[10]
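The push/pop steps map directly to C (a minimal global-array sketch with MAX = 5; return codes signal overflow/underflow):

```c
#include <assert.h>

#define MAX 5
int stack[MAX];
int top = -1;                       /* top = -1 means empty */

int push(int value) {
    if (top == MAX - 1) return 0;   /* overflow */
    stack[++top] = value;
    return 1;
}

int pop(int *out) {
    if (top == -1) return 0;        /* underflow */
    *out = stack[top--];
    return 1;
}
```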

📌 APPLICATION: EXPRESSION EVALUATION

Stack is widely used for expression conversion and evaluation.

🔹 TYPES OF EXPRESSIONS
| Type | Example |
| --- | --- |
| Infix | A + B |
| Prefix | +AB |
| Postfix | AB+ |

📌 POSTFIX EVALUATION USING STACK
🔹 ALGORITHM
Step 1: Scan the expression left to right
Step 2: If operand → push onto stack
Step 3: If operator → pop two operands
Step 4: Apply the operation
Step 5: Push the result back onto the stack
Step 6: Repeat until the expression ends
Step 7: The final value on the stack is the result
🔹 EXAMPLE
Expression: 2 3 1 * + 9 -

Step 1: Push 2 → [2]
Step 2: Push 3 → [2, 3]
Step 3: Push 1 → [2, 3, 1]
Step 4: * → 3 * 1 = 3 → [2, 3]
Step 5: + → 2 + 3 = 5 → [5]
Step 6: Push 9 → [5, 9]
Step 7: - → 5 - 9 = -4

Result = -4
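The evaluation steps can be sketched as a small C function (a minimal version assuming single-digit operands and no spaces, so `"231*+9-"` encodes the example above; the name `eval_postfix` is ours):

```c
#include <assert.h>
#include <ctype.h>

/* Evaluate a postfix expression of single-digit operands, e.g. "231*+9-". */
int eval_postfix(const char *expr) {
    int stack[64], top = -1;
    for (; *expr; expr++) {
        if (isdigit((unsigned char)*expr)) {
            stack[++top] = *expr - '0';   /* operand: push */
        } else {
            int b = stack[top--];         /* operator: pop two operands */
            int a = stack[top--];
            switch (*expr) {
                case '+': stack[++top] = a + b; break;
                case '-': stack[++top] = a - b; break;
                case '*': stack[++top] = a * b; break;
                case '/': stack[++top] = a / b; break;
            }
        }
    }
    return stack[top];                    /* final value is the result */
}
```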

📊 APPLICATIONS OF STACK
  • Expression evaluation
  • Expression conversion
  • Recursion
  • Backtracking
  • Undo operations

🔷 KEY POINT

Stack → LIFO

Used in expression evaluation & recursion


Q13 (Part 2). Explain Queue and its Types in Detail with Example.

🔷 QUEUE

A queue is a linear data structure that follows FIFO (First In First Out) principle.

Insertion → REAR
Deletion → FRONT

📊 STRUCTURE
FRONT → [10] [20] [30] ← REAR

📌 OPERATIONS
🔹 ENQUEUE (Insertion)
Step 1: Check for overflow
Step 2: REAR++
Step 3: Insert element at REAR
🔹 DEQUEUE (Deletion)
Step 1: Check for underflow
Step 2: Remove element at FRONT
Step 3: FRONT++

📌 TYPES OF QUEUE
🔹 1. SIMPLE QUEUE
[10] [20] [30]

FIFO; space wastage possible after deletions
🔹 2. CIRCULAR QUEUE
[10] [20] [30]
 ↑_________|

Last position connects to first
Efficient memory usage
🔹 3. PRIORITY QUEUE

Elements processed based on priority

High → Medium → Low

🔹 4. DOUBLE ENDED QUEUE (DEQUE)
← [10] [20] [30] →

Insertion & deletion from both ends


📌 EXAMPLE
Insert: 10, 20, 30
Queue: [10, 20, 30]

Delete: Remove 10
Result: [20, 30]

📊 APPLICATIONS
  • CPU scheduling
  • Printer queue
  • Buffering
  • Traffic systems

📊 COMPARISON
| Feature | Stack | Queue |
| --- | --- | --- |
| Order | LIFO | FIFO |
| Ends Used | One | Two |
| Example | Stack of plates | Waiting line |

Q14. Explain Binary Search Tree Operations: Insertion, Deletion, Searching.

🔷 BINARY SEARCH TREE (BST)

A Binary Search Tree is a binary tree where:

  • Left subtree → values less than root
  • Right subtree → values greater than root
📊 STRUCTURE
        50
       /  \
     30    70
    /  \   /  \
  20   40 60   80

📌 1. INSERTION IN BST
🔹 ALGORITHM
Step 1: Start at root
Step 2: If value < root → go left
Step 3: If value > root → go right
Step 4: Repeat until a NULL position is reached
Step 5: Insert the node there
🔹 EXAMPLE
Insert 25:
Compare with 50 → go left
Compare with 30 → go left
Compare with 20 → go right
Insert 25 as the right child of 20

📌 2. SEARCHING IN BST
🔹 ALGORITHM
Step 1: Start at root
Step 2: If key == root → found
Step 3: If key < root → go left
Step 4: If key > root → go right
Step 5: Repeat
🔹 EXAMPLE
Search 60:
50 → go right
70 → go left
60 found ✔
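The insertion and search rules map directly to C (a minimal recursive sketch; `bst_insert`/`bst_search` are our names):

```c
#include <assert.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *left, *right;
};

/* Insert: smaller keys go left, larger keys go right. */
struct Node *bst_insert(struct Node *root, int key) {
    if (!root) {
        struct Node *n = malloc(sizeof *n);
        n->data = key;
        n->left = n->right = NULL;
        return n;                         /* new leaf at the NULL position */
    }
    if (key < root->data) root->left = bst_insert(root->left, key);
    else if (key > root->data) root->right = bst_insert(root->right, key);
    return root;
}

/* Search: follow the ordering property until found or NULL. */
int bst_search(const struct Node *root, int key) {
    while (root) {
        if (key == root->data) return 1;
        root = (key < root->data) ? root->left : root->right;
    }
    return 0;
}
```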

📌 3. DELETION IN BST
🔹 CASE 1: LEAF NODE

Simply remove node

🔹 CASE 2: NODE WITH ONE CHILD

Replace node with its child

🔹 CASE 3: NODE WITH TWO CHILDREN

Find inorder successor
Replace node value
Delete successor

🔹 EXAMPLE
Delete 50:
Replace with inorder successor (60)
Remove the original 60 node
📊 DIAGRAM
Before (root): 50
After (root):  60

📊 TIME COMPLEXITY
| Operation | Average | Worst (skewed tree) |
| --- | --- | --- |
| Insertion | O(log n) | O(n) |
| Searching | O(log n) | O(n) |
| Deletion | O(log n) | O(n) |

🔷 APPLICATIONS
  • Searching & sorting
  • Database indexing
  • Auto-complete systems

🔷 KEY POINT

BST → Faster searching using ordered structure

Q15. Explain AVL Tree and Rotations. Compare AVL and BST.

🔷 AVL TREE

An AVL Tree is a self-balancing Binary Search Tree (BST) where the difference between heights of left and right subtrees is at most 1.

🔹 BALANCE FACTOR (BF)

BF = Height(Left Subtree) – Height(Right Subtree)

🔹 CONDITION

BF must be -1, 0, or +1

If BF is outside this range → Tree becomes unbalanced → Rotations are applied.


📊 AVL TREE STRUCTURE
      30
     /  \
   20    40
   /
 10

Balance Factor maintained at every node.

📌 AVL TREE ROTATIONS

When imbalance occurs, rotations are performed to restore balance.

🔹 1. LEFT ROTATION (RR Case)

Occurs when node is inserted in right subtree of right child

Before:
10
  \
   20
     \
      30

After Left Rotation:
    20
   /  \
 10    30
🔹 2. RIGHT ROTATION (LL Case)

Occurs when node is inserted in left subtree of left child

Before:
      30
     /
   20
   /
 10

After Right Rotation:
    20
   /  \
 10    30
🔹 3. LEFT-RIGHT ROTATION (LR Case)

Occurs when insertion is in right subtree of left child

Before:
    30
   /
 10
   \
    20

Step 1: Left rotation on 10
Step 2: Right rotation on 30

Result:
    20
   /  \
 10    30
🔹 4. RIGHT-LEFT ROTATION (RL Case)

Occurs when insertion is in left subtree of right child

Before:
 10
   \
    30
   /
 20

Step 1: Right rotation on 30
Step 2: Left rotation on 10

Result:
    20
   /  \
 10    30

📊 SUMMARY OF ROTATIONS
| Case | Rotation |
| --- | --- |
| LL | Right Rotation |
| RR | Left Rotation |
| LR | Left + Right |
| RL | Right + Left |

📊 AVL vs BST COMPARISON
| Feature | AVL Tree | BST |
| --- | --- | --- |
| Balance | Self-balanced | Not balanced |
| Height | O(log n) | O(n) worst |
| Search | Fast | Slow (skewed tree) |
| Complexity | O(log n) | O(n) worst |
| Rotations | Required | Not required |

🔷 KEY POINT

AVL Tree → Balanced → Faster operations

BST → Can become skewed → Slower


Q16. Explain Sorting Techniques (Quick Sort, Merge Sort) with Time Complexity Comparison.

🔷 SORTING TECHNIQUES

Sorting arranges elements in ascending or descending order. Quick Sort and Merge Sort are efficient divide-and-conquer algorithms.


📌 1. QUICK SORT
🔹 DEFINITION

Quick Sort divides the array into smaller subarrays using a pivot element.

🔹 WORKING
  • Select pivot
  • Partition array
  • Elements < pivot → left
  • Elements > pivot → right
  • Recursively sort subarrays
🔹 ALGORITHM
Step 1: Choose a pivot
Step 2: Partition the array
Step 3: Place the pivot at its correct position
Step 4: Recursively apply to the left and right subarrays
Step 5: Stop when subarray size = 1
🔹 EXAMPLE
Array: 5, 3, 8, 4, 2

Pivot = 5
Left:  3, 4, 2
Right: 8

Sorted: 2, 3, 4, 5, 8
🔹 DIAGRAM
        [5]
       /   \
 [3,4,2]   [8]
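A common way to realize the partition step is the Lomuto scheme, taking the last element as the pivot (our implementation choice; the example above picks the first element, but the sorted result is the same):

```c
#include <assert.h>

/* Lomuto partition: last element is the pivot; returns its final index. */
static int partition(int a[], int low, int high) {
    int pivot = a[high], i = low - 1;
    for (int j = low; j < high; j++)
        if (a[j] < pivot) {              /* move smaller elements left */
            i++;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    int t = a[i + 1]; a[i + 1] = a[high]; a[high] = t;
    return i + 1;
}

void quick_sort(int a[], int low, int high) {
    if (low < high) {
        int p = partition(a, low, high);
        quick_sort(a, low, p - 1);       /* sort left of the pivot */
        quick_sort(a, p + 1, high);      /* sort right of the pivot */
    }
}
```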
🔹 TIME COMPLEXITY
| Case | Complexity |
| --- | --- |
| Best | O(n log n) |
| Average | O(n log n) |
| Worst | O(n²) |

📌 2. MERGE SORT
🔹 DEFINITION

Merge Sort divides the array into halves and then merges sorted halves.

🔹 WORKING
  • Divide array into two halves
  • Recursively sort each half
  • Merge sorted halves
🔹 ALGORITHM
Step 1: Divide the array into two halves
Step 2: Recursively sort the left half
Step 3: Recursively sort the right half
Step 4: Merge the two sorted halves
Step 5: Stop when size = 1
🔹 EXAMPLE
Array: 8, 3, 2, 9

Divide:
[8,3] → [8], [3]
[2,9] → [2], [9]

Merge:
[3,8], [2,9]

Final: [2,3,8,9]
🔹 DIAGRAM
      [8,3,2,9]
      /       \
   [8,3]     [2,9]
   /   \     /   \
 [8]   [3] [2]   [9]
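The divide-and-merge steps can be sketched in C (a minimal version using a temporary buffer for each merge; the `<=` comparison is what keeps the sort stable):

```c
#include <assert.h>
#include <string.h>

/* Merge the sorted halves a[l..m] and a[m+1..r]. */
static void merge(int a[], int l, int m, int r) {
    int tmp[r - l + 1];                  /* C99 variable-length array */
    int i = l, j = m + 1, k = 0;
    while (i <= m && j <= r)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= m) tmp[k++] = a[i++];    /* copy any leftovers */
    while (j <= r) tmp[k++] = a[j++];
    memcpy(a + l, tmp, k * sizeof(int));
}

void merge_sort(int a[], int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        merge_sort(a, l, m);             /* sort left half */
        merge_sort(a, m + 1, r);         /* sort right half */
        merge(a, l, m, r);               /* merge the sorted halves */
    }
}
```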
🔹 TIME COMPLEXITY
| Case | Complexity |
| --- | --- |
| Best | O(n log n) |
| Average | O(n log n) |
| Worst | O(n log n) |

📊 QUICK SORT vs MERGE SORT
| Feature | Quick Sort | Merge Sort |
| --- | --- | --- |
| Method | Partition | Divide & Merge |
| Space | In-place | Extra memory |
| Speed | Faster (average) | Stable performance |
| Worst Case | O(n²) | O(n log n) |
| Stability | Not stable | Stable |

🔷 KEY POINT

Quick Sort → Faster in practice

Merge Sort → Guaranteed O(n log n)

Q17. Explain Hashing Techniques and Collision Resolution Methods.

🔷 HASHING

Hashing is a technique used to store and retrieve data efficiently using a hash function.

  • Data is stored in a hash table
  • Key is converted into an index (address)

Index = Hash Function(Key)

📊 HASH TABLE STRUCTURE
Index:  0    1    2    3    4    5
      [ ]  [ ]  [ ]  [ ]  [ ]  [ ]

Insert Key → Apply Hash Function → Store at index

📌 HASH FUNCTION

A hash function maps a key to an index.

🔹 Example:

H(key) = key % table_size
H(23) = 23 % 10 = 3


📌 HASHING TECHNIQUES
🔹 1. DIVISION METHOD

H(key) = key mod m

  • Most commonly used
  • m = table size
🔹 2. MID-SQUARE METHOD
  • Square the key
  • Extract middle digits

Key = 123 → 123² = 15129 → Middle = 512

🔹 3. FOLDING METHOD
  • Split key into parts
  • Add parts

1234 → 12 + 34 = 46

🔹 4. MULTIPLICATION METHOD

H(key) = floor(m * (key * A % 1))

(A = constant)


🔷 COLLISION

Collision occurs when two keys map to same index.

H(23) = 3
H(33) = 3 → Collision


📌 COLLISION RESOLUTION METHODS
🔹 1. SEPARATE CHAINING

Use linked list at each index

Index 3 → [23] → [33] → [43]

✔ Advantage:

  • Simple
  • Handles multiple collisions

❌ Disadvantage:

  • Extra memory

🔹 2. OPEN ADDRESSING

Stores elements in the table itself.

🔸 a) LINEAR PROBING

Index = (H(key) + i) % size

Example:

3 occupied → try 4 → 5 → ...
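The probing formula can be sketched in C (a minimal table of size 10 with -1 marking empty slots; `table_insert` is our name and omits duplicate-key handling for brevity):

```c
#include <assert.h>

#define TSIZE 10
#define EMPTY -1

int table[TSIZE];

void table_init(void) {
    for (int i = 0; i < TSIZE; i++) table[i] = EMPTY;
}

/* Linear probing: on collision, try the next slot (mod TSIZE).
   Returns the index used, or -1 if the table is full. */
int table_insert(int key) {
    for (int i = 0; i < TSIZE; i++) {
        int idx = (key % TSIZE + i) % TSIZE;
        if (table[idx] == EMPTY) {
            table[idx] = key;
            return idx;
        }
    }
    return -1;
}
```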

🔸 b) QUADRATIC PROBING

Index = (H(key) + i²) % size

🔸 c) DOUBLE HASHING

Index = (H1(key) + i * H2(key)) % size


📊 COMPARISON TABLE
| Method | Idea | Advantage | Disadvantage |
| --- | --- | --- | --- |
| Chaining | Linked list | Easy | Extra space |
| Linear Probing | Next slot | Simple | Clustering |
| Quadratic | i² jump | Less clustering | Complex |
| Double Hashing | Two hash functions | Best distribution | Costly |

📊 DIAGRAM
Hash Table with Chaining:

Index 0 → —
Index 1 → [11]
Index 2 → [22]
Index 3 → [23] → [33]

🔷 APPLICATIONS
  • Database indexing
  • Symbol tables
  • Password storage
  • Caching systems

🔷 KEY POINT

Hashing → Fast data access (O(1))

Collision → Must be handled properly


Q18. Explain Graph Representations. Describe BFS and DFS with Examples.

🔷 GRAPH

A graph is defined as:

G = (V, E)

V → Set of vertices

E → Set of edges


📌 GRAPH REPRESENTATIONS
🔹 1. ADJACENCY MATRIX

2D array representation

🔹 EXAMPLE
Graph:
A — B
|   |
C — D

MATRIX
    A  B  C  D
A [ 0  1  1  0 ]
B [ 1  0  0  1 ]
C [ 1  0  0  1 ]
D [ 0  1  1  0 ]

✔ Advantages:

  • Simple
  • Easy to implement

❌ Disadvantages:

  • Uses more memory

🔹 2. ADJACENCY LIST

Uses linked list

🔹 EXAMPLE
A → B → C
B → A → D
C → A → D
D → B → C

✔ Advantages:

  • Memory efficient
  • Better for sparse graphs

❌ Disadvantages:

  • Slightly complex

📊 COMPARISON
| Feature | Matrix | List |
| --- | --- | --- |
| Memory | High | Low |
| Speed | Fast lookup | Slower |
| Usage | Dense graph | Sparse graph |

📌 GRAPH TRAVERSAL
🔹 1. BREADTH FIRST SEARCH (BFS)
🔹 ALGORITHM
Step 1: Create a queue
Step 2: Mark the start node visited
Step 3: Add it to the queue
Step 4: While the queue is not empty
Step 5:   Remove a node
Step 6:   Visit its neighbors
Step 7:   Add unvisited neighbors to the queue
🔹 EXAMPLE
Graph:
      A
     / \
    B   C
   /     \
  D       E

BFS OUTPUT: A → B → C → D → E
🔹 2. DEPTH FIRST SEARCH (DFS)
🔹 ALGORITHM
Step 1: Start at a node
Step 2: Mark it visited
Step 3: Visit the node
Step 4: Recursively visit its unvisited neighbors
🔹 DFS OUTPUT

A → B → D → C → E

📊 DIAGRAM
Graph:
      A
     / \
    B   C
   /     \
  D       E

📊 BFS vs DFS
| Feature | BFS | DFS |
| --- | --- | --- |
| Structure | Queue | Stack |
| Traversal | Level-wise | Depth-wise |
| Shortest Path | Yes | No |
| Memory | High | Low |

🔷 APPLICATIONS
  • BFS → Shortest path
  • DFS → Cycle detection
  • Network routing
  • AI search algorithms

🔷 KEY POINT

BFS → Breadth (level)

DFS → Depth (deep traversal)

Q19. Explain Dijkstra’s Algorithm and Kruskal Algorithm.

🔷 DIJKSTRA’S ALGORITHM
🔹 DEFINITION

Dijkstra’s Algorithm is used to find the shortest path from a source vertex to all other vertices in a weighted graph with non-negative weights.


📌 WORKING PRINCIPLE
  • Start from source node
  • Select node with minimum distance
  • Update distances of adjacent nodes
  • Repeat until all nodes are visited

📌 ALGORITHM
Step 1: Initialize distance of all vertices as ∞
Step 2: Set source distance = 0
Step 3: Mark all nodes unvisited
Step 4: Select the unvisited node with minimum distance
Step 5: Update distances of its adjacent nodes
Step 6: Mark the node as visited
Step 7: Repeat until all nodes are processed

📊 EXAMPLE
Graph:
        (4)
    A -------- B
    |          |
  (1)          (2)
    |          |
    C -------- D
        (5)
🔹 STEPS
Start from A: A = 0
  C = 1, B = 4

Visit C → update D = 1 + 5 = 6
Visit B → update D = min(6, 4 + 2) = 6

Final distances:
A = 0, C = 1, B = 4, D = 6

📊 RESULT

Shortest path from A:
A → C = 1
A → B = 4
A → D = 6


📌 TIME COMPLEXITY
| Case | Complexity |
| --- | --- |
| Using array | O(V²) |
| Using priority queue | O((V+E) log V) |

🔷 APPLICATIONS
  • Network routing
  • GPS navigation
  • Shortest path problems

🔷 KRUSKAL’S ALGORITHM
🔹 DEFINITION

Kruskal’s Algorithm is a greedy algorithm used to find the Minimum Spanning Tree (MST) of a graph.


📌 WORKING PRINCIPLE
  • Sort all edges by weight
  • Pick smallest edge
  • Add edge if it does not form cycle
  • Repeat until MST formed

📌 ALGORITHM
Step 1: Sort all edges in ascending order of weight
Step 2: Initialize an empty MST
Step 3: Pick the smallest edge
Step 4: Check if it forms a cycle
Step 5: If no cycle → include the edge
Step 6: Repeat until (V-1) edges are selected

📊 EXAMPLE
Graph edges:
A—B (1)
A—C (3)
B—C (2)
B—D (4)
C—D (5)
🔹 STEPS
Sorted edges: AB(1), BC(2), AC(3), BD(4), CD(5)

Step 1: AB(1) ✔
Step 2: BC(2) ✔
Step 3: AC(3) ✘ (forms a cycle)
Step 4: BD(4) ✔

MST edges: AB, BC, BD

📊 RESULT

Minimum Spanning Tree:
AB(1), BC(2), BD(4)

Total Cost = 7


📌 TIME COMPLEXITY
| Case | Complexity |
| --- | --- |
| Using sorting | O(E log E) |

🔷 APPLICATIONS
  • Network design
  • Road construction
  • Cable layout

📊 DIJKSTRA vs KRUSKAL
| Feature | Dijkstra | Kruskal |
| --- | --- | --- |
| Goal | Shortest Path | MST |
| Type | Path algorithm | Tree algorithm |
| Works on | Nodes | Edges |
| Cycle allowed | Yes | No |

Q20. Compare Shortest Path and Minimum Spanning Tree.

🔷 SHORTEST PATH

Finds minimum distance between two nodes

Example: Dijkstra, Bellman-Ford


🔷 MINIMUM SPANNING TREE (MST)

Connects all nodes with minimum total weight

Example: Prim, Kruskal


📊 DIAGRAM COMPARISON
Graph:
    A
   / \
  B   C
   \ /
    D

Shortest Path: A → D (minimum-distance path between two nodes)
MST: connect all nodes with minimum total edge weight

📊 COMPARISON TABLE
| Feature | Shortest Path | Minimum Spanning Tree |
| --- | --- | --- |
| Objective | Minimum distance between nodes | Minimum total weight |
| Coverage | Between two nodes | Covers all nodes |
| Algorithms | Dijkstra, Bellman-Ford | Prim, Kruskal |
| Cycle | Allowed | Not allowed |
| Result | Path | Tree |
| Use Case | Navigation | Network design |

🔷 EXAMPLE DIFFERENCE

Shortest Path:
A → B → D (minimum distance)

MST:
A-B, B-C, B-D (minimum total cost)


🔷 KEY POINT

Shortest Path → Focus on one destination

MST → Connect all nodes with minimum cost