A Graph is a non-linear data structure consisting of vertices (nodes) and edges connecting them, represented as G = (V, E).
An Adjacency Matrix is a 2D array used to represent a graph, where rows and columns represent vertices and values indicate the presence of edges.
An Adjacency List represents a graph using linked lists, where each vertex stores a list of its adjacent vertices.
Breadth First Search (BFS) is a graph traversal technique that visits nodes level by level using a queue.
Depth First Search (DFS) is a graph traversal technique that explores nodes deeply using a stack or recursion.
Dijkstra’s Algorithm is used to find the shortest path from a source vertex to all other vertices in a weighted graph.
A Minimum Spanning Tree (MST) is a subset of edges that connects all vertices with minimum total weight and no cycles.
Hashing is a technique used to store and retrieve data quickly using a hash function that maps keys to indices.
A collision occurs when two or more keys are mapped to the same index in a hash table.
Key algorithms covered:
- Dijkstra’s Algorithm
- Kruskal’s Algorithm
A graph is a non-linear data structure made up of vertices and edges. In your Unit V notes, graphs are represented using both array (adjacency matrix) and linked list (adjacency list) methods.
Formally, G = (V, E), where V is the set of vertices and E is the set of edges.
An adjacency matrix is a 2D array used to represent a graph. If there is an edge between vertex i and j, the matrix value is 1; otherwise it is 0. For weighted graphs, the weight is stored instead of 1. The notes mention that the matrix is symmetric for undirected graphs and non-symmetric for directed graphs.
If the graph has n vertices, an n × n matrix is formed.
For vertices A, B, C, D and edges (A,B), (A,C), (B,D), (C,D):
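A minimal sketch of how this matrix could be built, assuming the vertex order A, B, C, D:

```python
# Build the 4x4 adjacency matrix for vertices A, B, C, D
# with undirected edges (A,B), (A,C), (B,D), (C,D).
vertices = ["A", "B", "C", "D"]
index = {v: i for i, v in enumerate(vertices)}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

n = len(vertices)
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[index[u]][index[v]] = 1
    matrix[index[v]][index[u]] = 1  # symmetric: the graph is undirected

for row in matrix:
    print(row)
```

Note that each edge sets two cells, which is why the matrix comes out symmetric for an undirected graph.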
Advantages:
- Simple and easy to implement
- Fast edge lookup
- Good for dense graphs

Disadvantages:
- Uses more memory: O(n²)
- Not suitable for sparse graphs
An adjacency list stores each vertex in an array, and each array entry points to a linked list of adjacent vertices. The notes describe it as an array of vertices with a linked list of neighbors.
For the same graph:
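A short sketch of the adjacency list for the same four-vertex graph, using a dictionary of lists in place of an array of linked lists:

```python
from collections import defaultdict

# Adjacency list for the undirected graph with
# edges (A,B), (A,C), (B,D), (C,D).
adj = defaultdict(list)
for u, v in [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]:
    adj[u].append(v)
    adj[v].append(u)  # undirected: record the edge in both directions

print(dict(adj))  # {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
```

Only the edges that exist are stored, which is where the O(V + E) space bound comes from.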
Advantages:
- Requires less memory: O(V + E)
- Efficient for sparse graphs
- Easy to traverse neighbors

Disadvantages:
- Edge lookup takes more time
- Slightly more complex than matrix representation
| Feature | Adjacency Matrix | Adjacency List |
|---|---|---|
| Storage | 2D array | Array of linked lists |
| Space | O(n²) | O(V+E) |
| Edge lookup | Fast | Slower |
| Best for | Dense graphs | Sparse graphs |
Graph traversal means visiting all vertices of a graph systematically. Your notes include Breadth First Search (BFS) and Depth First Search (DFS) as the two major graph traversal techniques.
BFS visits vertices level by level, starting from a source vertex. It uses a queue. The Unit V notes describe BFS as visiting all neighbors first and then moving to the next level.
BFS starting from A:
A → B → C → D → E
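A minimal BFS sketch; the 5-vertex graph below is an assumption chosen so that the traversal matches the order A → B → C → D → E shown above:

```python
from collections import deque

# Assumed graph consistent with the example traversal.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "E"],
    "D": ["B"],
    "E": ["C"],
}

def bfs(graph, source):
    visited = [source]
    queue = deque([source])          # FIFO queue drives the level order
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in visited:
                visited.append(v)    # mark when enqueued, not when dequeued
                queue.append(v)
    return visited

print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D', 'E']
```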
Applications:
- Shortest path in unweighted graphs
- Social networks
- Web crawling
- Broadcasting in networks

Advantages:
- Simple to implement
- Finds the shortest path in unweighted graphs

Disadvantages:
- Uses more memory
- Not ideal for deep graphs
DFS explores a graph as deep as possible before backtracking. It uses a stack or recursion. Your notes explicitly state that DFS goes deep first and then backtracks.
For the same graph:
DFS starting from A:
A → B → D → C → E
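A recursive DFS sketch over the same assumed 5-vertex graph (chosen so the output matches the order A → B → D → C → E above):

```python
# Assumed graph consistent with the example traversal.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "E"],
    "D": ["B"],
    "E": ["C"],
}

def dfs(graph, u, visited=None):
    if visited is None:
        visited = []
    visited.append(u)                # visit the node, then go deeper
    for v in graph[u]:
        if v not in visited:
            dfs(graph, v, visited)   # recursion acts as the implicit stack
    return visited

print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C', 'E']
```

The recursion here plays the role of the explicit stack mentioned in the notes.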
Applications:
- Cycle detection
- Topological sorting
- Path finding
- Puzzle solving and backtracking

Advantages:
- Uses less memory
- Useful in backtracking problems

Disadvantages:
- Does not guarantee shortest path
- Can go deep unnecessarily
| Feature | BFS | DFS |
|---|---|---|
| Data structure | Queue | Stack / recursion |
| Traversal style | Level-wise | Depth-wise |
| Shortest path | Yes (unweighted) | No |
| Memory | More | Less |
| Best use | Shortest path, level order | Backtracking, cycle detection |
A shortest path algorithm finds the minimum distance between vertices in a weighted graph. The Unit V notes mention Dijkstra’s algorithm as a single-source shortest path algorithm for graphs with non-negative edge weights.
Dijkstra’s algorithm finds the shortest path from a single source vertex to all other vertices.
Suppose the graph has edges:
- A → B = 4
- A → C = 2
- B → D = 5
- C → D = 3

Steps:
1. Start from A; distance of A = 0
2. Distance of B = 4, distance of C = 2
3. From C, distance to D = 2 + 3 = 5
4. From B, distance to D = 4 + 5 = 9, but the minimum remains 5

Final distances:
- A = 0
- B = 4
- C = 2
- D = 5

Shortest path to D: A → C → D = 5
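The worked example above can be sketched with a priority-queue implementation of Dijkstra's algorithm (directed edges assumed, as written):

```python
import heapq

# Edges from the example: A->B=4, A->C=2, B->D=5, C->D=3.
graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("D", 3)],
    "D": [],
}

def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    heap = [(0, source)]             # min-heap keyed by tentative distance
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                 # stale entry: a shorter path was found
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w      # relax the edge u -> v
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra(graph, "A"))  # {'A': 0, 'B': 4, 'C': 2, 'D': 5}
```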
| Implementation | Time |
|---|---|
| Array-based | O(V²) |
| Priority queue-based | O((V+E) log V) |
Applications:
- GPS and navigation systems
- Network routing
- Transportation systems
A Minimum Spanning Tree is a subset of edges in a connected, weighted, undirected graph that:
- connects all vertices
- has no cycles
- has minimum total weight
Your notes explicitly define MST and give Prim’s and Kruskal’s algorithms as the main methods.
Prim’s algorithm builds the MST by starting from any vertex and repeatedly adding the minimum-weight edge connecting a visited vertex to an unvisited vertex.
Vertices: A, B, C, D

Edges:
- A-B = 2
- A-C = 3
- B-C = 1
- B-D = 4
- C-D = 5

Steps:
1. Start at A
2. Choose A-B (2)
3. Choose B-C (1)
4. Choose B-D (4)

Total MST weight: 2 + 1 + 4 = 7
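The steps above can be sketched with a priority-queue version of Prim's algorithm:

```python
import heapq

# Undirected weighted graph from the example:
# A-B=2, A-C=3, B-C=1, B-D=4, C-D=5.
graph = {
    "A": [(2, "B"), (3, "C")],
    "B": [(2, "A"), (1, "C"), (4, "D")],
    "C": [(3, "A"), (1, "B"), (5, "D")],
    "D": [(4, "B"), (5, "C")],
}

def prim(graph, start):
    visited = {start}
    heap = list(graph[start])        # candidate edges leaving the tree
    heapq.heapify(heap)
    total = 0
    while heap:
        w, v = heapq.heappop(heap)
        if v in visited:
            continue                 # edge would close a cycle, skip it
        visited.add(v)
        total += w                   # cheapest edge crossing the cut
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return total

print(prim(graph, "A"))  # 7
```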
Kruskal’s algorithm builds the MST by selecting edges in increasing order of weight and avoiding cycles.
Edges sorted by weight:
- B-C = 1
- A-B = 2
- A-C = 3
- B-D = 4
- C-D = 5

Steps:
1. Select B-C (1)
2. Select A-B (2)
3. Skip A-C (3) because it forms a cycle
4. Select B-D (4)

Total MST weight: 1 + 2 + 4 = 7
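The same selection can be sketched with Kruskal's algorithm and a tiny Union-Find for the cycle check:

```python
# Kruskal's sketch for the example edges above.
def kruskal(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:        # walk up to the set representative
            v = parent[v]
        return v

    total = 0
    for w, u, v in sorted(edges):    # edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                 # different roots: adding u-v forms no cycle
            parent[ru] = rv          # union the two sets
            total += w
    return total

edges = [(1, "B", "C"), (2, "A", "B"), (3, "A", "C"), (4, "B", "D"), (5, "C", "D")]
print(kruskal("ABCD", edges))  # 7
```

The `find`/union pair is the "explicit cycle check" the comparison table below refers to.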
| Feature | Prim’s | Kruskal’s |
|---|---|---|
| Approach | Vertex-based | Edge-based |
| Start | Any vertex | No specific start |
| Best for | Dense graphs | Sparse graphs |
| Data structure | Priority queue | Union-Find |
| Cycle handling | Implicit | Explicit cycle check |
Hashing is a technique for storing and retrieving data quickly using a hash function that maps a key to an index in a hash table.
Index = H(key)
Division method:
H(key) = key mod m (for example, H(key) = key % table_size)

Mid-square method:
- Square the key
- Use the middle digits of the result

Folding method:
- Split the key into parts
- Add the parts together

Multiplication method:
- Multiply the key by a constant and extract the fractional part
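A minimal sketch of the four methods; the table size of 10 and the multiplication constant (the golden-ratio fraction, a common choice) are assumptions for illustration:

```python
TABLE_SIZE = 10  # assumed small table for illustration

def division(key):
    return key % TABLE_SIZE          # division method

def mid_square(key):
    squared = str(key * key)
    mid = len(squared) // 2
    return int(squared[mid])         # take a middle digit of key squared

def folding(key):
    digits = str(key)
    parts = [int(digits[i:i + 2]) for i in range(0, len(digits), 2)]
    return sum(parts) % TABLE_SIZE   # split into 2-digit parts and add

def multiplication(key, a=0.6180339887):
    frac = (key * a) % 1             # fractional part of key * constant
    return int(TABLE_SIZE * frac)

print(division(23), mid_square(23), folding(1234), multiplication(23))
```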
A collision occurs when two keys map to the same index.
Example (table size 10):
H(23) = 23 % 10 = 3
H(33) = 33 % 10 = 3
Both keys map to index 3, causing a collision.
Separate chaining: each table index stores a linked list of elements that hash to the same location.
Example: Index 3 → 23 → 33 → 43
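A minimal chaining sketch, using a Python list per slot in place of a linked list:

```python
# Separate chaining: each slot holds a list of keys that hash there.
TABLE_SIZE = 10
table = [[] for _ in range(TABLE_SIZE)]

def insert(key):
    table[key % TABLE_SIZE].append(key)  # append to the chain at H(key)

for key in (23, 33, 43):
    insert(key)

print(table[3])  # [23, 33, 43]
```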
Open addressing: if a collision occurs, search for another empty slot.

Linear probing:
H'(key) = (H(key) + i) % table_size

Quadratic probing:
H'(key) = (H(key) + i²) % table_size

Double hashing:
H'(key) = (H1(key) + i * H2(key)) % table_size
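A minimal linear-probing sketch; the table size of 7 and the colliding keys are assumptions chosen so all three keys hash to the same slot:

```python
# Linear probing: on collision, step to the next slot.
TABLE_SIZE = 7
table = [None] * TABLE_SIZE

def insert(key):
    for i in range(TABLE_SIZE):
        slot = (key % TABLE_SIZE + i) % TABLE_SIZE  # H'(key) = (H(key) + i) % m
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("table full")

# 10, 17 and 24 all hash to index 3, so probing walks to 3, 4, 5.
print(insert(10), insert(17), insert(24))  # 3 4 5
```

The way colliding keys pile up in consecutive slots here is exactly the clustering listed as linear probing's disadvantage below.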
| Method | Idea | Advantage | Disadvantage |
|---|---|---|---|
| Chaining | Linked list at each index | Easy | Extra memory |
| Linear probing | Next empty slot | Simple | Clustering |
| Quadratic probing | Jump by squares | Less clustering | More complex |
| Double hashing | Two hash functions | Better distribution | More computation |
Applications of hashing:
- Symbol tables
- Databases
- Password handling
- Caching systems
Divide and Conquer is a problem-solving method in which a problem is:
1. divided into smaller subproblems
2. solved recursively
3. combined to get the final solution

General steps:
1. Divide the problem
2. Conquer each subproblem
3. Combine the results
Merge sort is a classic divide-and-conquer algorithm:
- Divide the array into two halves
- Recursively sort each half
- Merge the sorted halves
Example with array 8, 3, 2, 9:
- Divide: [8,3] and [2,9]
- Further divide: [8] [3] [2] [9]
- Merge: [3,8] and [2,9]
- Final: [2,3,8,9]
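The walkthrough above can be sketched as:

```python
# Merge sort: divide the array, sort each half, merge the results.
def merge_sort(arr):
    if len(arr) <= 1:
        return arr                   # base case: already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])     # conquer each half recursively
    right = merge_sort(arr[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # combine: take the smaller head element
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([8, 3, 2, 9]))  # [2, 3, 8, 9]
```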
Quick sort is another divide-and-conquer method:
- choose a pivot
- partition the elements around it
- recursively sort the subarrays
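A short quick sort sketch; taking the first element as the pivot is an assumption for simplicity, and the out-of-place partition trades extra space for clarity:

```python
# Quick sort: pivot, partition, recurse on each side.
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]                   # first element chosen as pivot (assumed)
    smaller = [x for x in arr[1:] if x < pivot]
    larger = [x for x in arr[1:] if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([8, 3, 2, 9]))  # [2, 3, 8, 9]
```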
Advantages:
- Efficient for large problems
- Naturally recursive
- Reduces complexity

Disadvantages:
- Recursive overhead
- Extra space in some algorithms
Greedy method is an algorithm design technique that chooses the best immediate option at each step with the hope of obtaining a global optimum.
Characteristics of the greedy method:
- Greedy choice property
- Optimal substructure
- No backtracking
Example: find the minimum number of coins for amount 18 using coins 10, 5, 2, 1.
- Choose 10, remaining 8
- Choose 5, remaining 3
- Choose 2, remaining 1
- Choose 1, remaining 0

Coins used: 10 + 5 + 2 + 1 = 18 (four coins)
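The coin example can be sketched as a greedy loop; note the greedy choice happens to be optimal for this coin set, though (as noted below) that is not guaranteed for every problem:

```python
# Greedy coin change: always take the largest coin that still fits.
def greedy_change(amount, coins=(10, 5, 2, 1)):
    chosen = []
    for coin in coins:               # coins assumed sorted in decreasing order
        while amount >= coin:
            chosen.append(coin)      # greedy choice: biggest coin first
            amount -= coin
    return chosen

print(greedy_change(18))  # [10, 5, 2, 1]
```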
Kruskal’s MST algorithm is a greedy algorithm because it always chooses the smallest edge available without forming cycles.
Advantages:
- Simple
- Fast
- Easy to implement

Disadvantages:
- Not always optimal for all problems
- A locally best choice may fail globally
BFS visits nodes level by level using a queue. Your notes describe BFS as a level-wise traversal starting from a source vertex.
DFS explores as deep as possible before backtracking using a stack or recursion. This is also stated in your notes.
| Feature | BFS | DFS |
|---|---|---|
| Data structure | Queue | Stack / recursion |
| Traversal style | Level-wise | Depth-wise |
| Shortest path | Yes in unweighted graphs | No |
| Memory | More | Less |
| Use | Shortest path, broadcasting | Backtracking, cycle detection |
Example:
BFS: A → B → C → D → E
DFS: A → B → D → C → E
- BFS is best when you need the nearest node first
- DFS is best when you need to explore deep paths first