1. Explain Unification in the Planning Process.

Unification is a fundamental operation in AI that supports logical planning and automated reasoning. In planning systems based on first-order logic, unification is used to match goals with available actions or rules by finding suitable substitutions. It enables the inference engine to apply relevant rules whose preconditions match the current state.

Unification in Planning

In logic-based planning:

  • Goals are expressed as predicates.
  • Actions are defined with preconditions and effects.
  • The planning system uses unification to match action preconditions with current facts.
  • If unification succeeds, the action is applied.

Unification ensures that variable substitutions are consistent, that constants and function symbols match exactly, and that cyclic substitutions are avoided via the occurs check.
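These checks can be sketched as a small unification routine. The sketch below is a simplification: compound terms are tuples whose first element is the predicate or function symbol, and variables are assumed to be one-letter uppercase strings such as "X".

```python
def is_var(t):
    # Assumption: variables are one-letter uppercase names ("X"); anything else is a constant.
    return isinstance(t, str) and len(t) == 1 and t.isupper()

def walk(t, s):
    # Follow variable bindings in substitution s to the final value.
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    # Occurs check: does variable v appear inside term t? Prevents cyclic substitutions.
    t = walk(t, s)
    if v == t:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, s) for arg in t)
    return False

def unify(a, b, s=None):
    """Return a substitution (dict) unifying a and b, or None if they do not unify."""
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return None if occurs(a, b, s) else {**s, a: b}
    if is_var(b):
        return unify(b, a, s)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None  # constants or function symbols mismatch
```

For example, `unify(("Prescribe", "Medicine", "X"), ("Prescribe", "Medicine", "John"))` yields `{"X": "John"}`, while `unify("X", ("f", "X"))` fails because of the occurs check.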


Design Problem: Medical Diagnosis Rule-Based Planning

Problem Statement

Design a simple AI planning system for disease diagnosis.

Knowledge Base (Rules)


1. Fever(X) ∧ Cough(X) → Flu(X)

2. Flu(X) → Prescribe(Medicine, X)

Given Facts


Fever(John)

Cough(John)

Goal


Prescribe(Medicine, John)


Step-by-Step Solution Using Unification

Step 1: Match Goal with Rule

Goal: Prescribe(Medicine, John)

Compare with Rule 2:
Flu(X) → Prescribe(Medicine, X)

Unification gives: X = John

New Subgoal: Flu(John)


Step 2: Match Subgoal with Rule

Subgoal: Flu(John)

Compare with Rule 1:
Fever(X) ∧ Cough(X) → Flu(X)

Unification gives: X = John

New Subgoals:
Fever(John), Cough(John)


Step 3: Check Facts

Given Facts:
Fever(John) ✔
Cough(John) ✔

Both preconditions satisfied.


Final Plan


Fever(John) ∧ Cough(John)

→ Flu(John)

→ Prescribe(Medicine, John)
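The derivation above can be reproduced with a tiny backward-chaining sketch. It is deliberately simplified: predicates are flat tuples, variables are one-letter uppercase strings, and rule variables are not renamed apart (which is safe for this small example).

```python
# Rules as (head, body); variables are one-letter uppercase strings (assumption).
RULES = [
    (("Flu", "X"), [("Fever", "X"), ("Cough", "X")]),
    (("Prescribe", "Medicine", "X"), [("Flu", "X")]),
]
FACTS = {("Fever", "John"), ("Cough", "John")}

def is_var(t):
    return isinstance(t, str) and len(t) == 1 and t.isupper()

def apply_subst(term, s):
    # Replace bound variables in a flat predicate tuple.
    return tuple(s.get(a, a) if is_var(a) else a for a in term)

def unify(a, b, s):
    # Unify two flat predicate tuples under substitution s; None on failure.
    if len(a) != len(b):
        return None
    s = dict(s)
    for x, y in zip(a, b):
        x = s.get(x, x) if is_var(x) else x
        y = s.get(y, y) if is_var(y) else y
        if x == y:
            continue
        if is_var(x):
            s[x] = y
        elif is_var(y):
            s[y] = x
        else:
            return None
    return s

def prove(goal, s=None):
    """Backward chaining: return a substitution proving goal, or None."""
    s = s or {}
    g = apply_subst(goal, s)
    if g in FACTS:
        return s
    for head, body in RULES:
        s2 = unify(g, head, s)
        if s2 is None:
            continue
        for subgoal in body:
            s2 = prove(subgoal, s2)
            if s2 is None:
                break
        else:
            return s2
    return None
```

Calling `prove(("Prescribe", "Medicine", "John"))` mirrors the three steps: the goal unifies with Rule 2 (binding X = John), reduces to Flu(John), and then via Rule 1 to the facts Fever(John) and Cough(John).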


Role of Unification in this Planning

  • Matches goal with rule head.
  • Substitutes variable X with John.
  • Ensures logical consistency.
  • Enables automated reasoning and rule chaining.

Thus, unification acts as the core mechanism for logical planning, theorem proving, and expert systems in AI.


2. Describe State-Space Search with Suitable Examples such as the Water Jug Problem.

State-space search is a problem-solving technique in AI where a problem is represented as a collection of states and transitions between them. The objective is to find a sequence of actions that transforms the initial state into the goal state.


Components of State-Space Search

  • State
  • Initial State
  • Goal State
  • Operators (Actions)
  • Path
  • Search Strategy

Example: Water Jug Problem

Problem Definition

Two jugs:

  • 3-liter jug
  • 5-liter jug

Goal: Measure 4 liters of water.


State Representation

State = (x, y)
x → water in 3L jug
y → water in 5L jug

Initial State: (0, 0)
Goal State: (0, 4); any state holding 4 liters in the 5-liter jug satisfies the goal.


Possible Actions

  • Fill a jug
  • Empty a jug
  • Pour from one jug to another

State-Space Search Using BFS


Step 1: Start (0, 0)

Step 2: Fill 5L Jug (0, 5)

Step 3: Pour 5L → 3L (3, 2)

Step 4: Empty 3L (0, 2)

Step 5: Pour 5L → 3L (2, 0)

Step 6: Fill 5L (2, 5)

Step 7: Pour 5L → 3L (3, 4)

Step 8: Empty 3L (0, 4) ✔ Goal Reached
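The trace above can be reproduced with a short breadth-first search over (x, y) states. A minimal sketch: the goal test accepts 4 liters in either jug, so BFS may stop one step earlier at (3, 4) rather than continuing to (0, 4).

```python
from collections import deque

CAP = (3, 5)  # jug capacities: (3-liter, 5-liter)

def successors(state):
    x, y = state
    nxt = {
        (CAP[0], y), (x, CAP[1]),   # fill a jug
        (0, y), (x, 0),             # empty a jug
    }
    t = min(y, CAP[0] - x)          # pour 5L jug into 3L jug
    nxt.add((x + t, y - t))
    t = min(x, CAP[1] - y)          # pour 3L jug into 5L jug
    nxt.add((x - t, y + t))
    nxt.discard(state)
    return nxt

def bfs(start=(0, 0), goal_amount=4):
    """Return the shortest sequence of states that puts 4 liters in a jug."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if goal_amount in path[-1]:
            return path
        for s in successors(path[-1]):
            if s not in visited:
                visited.add(s)
                frontier.append(path + [s])
    return None
```

Because BFS explores level by level, the first goal state it meets is reached by a minimum-length action sequence (six actions for this puzzle).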


Search Strategies Used

1. Breadth-First Search (BFS)
Explores level by level.
Complete, and optimal when all step costs are equal.

2. Depth-First Search (DFS)
Explores deep paths before backtracking.
Uses less memory, but is neither complete (on infinite state spaces) nor optimal in general.


Properties of State-Space Search

  • Branching Factor
  • Depth
  • Completeness
  • Optimality
  • Time Complexity
  • Space Complexity

Working Mechanism

  1. Define initial state.
  2. Generate successor states using operators.
  3. Add new states to frontier.
  4. Check goal condition.
  5. Avoid repeated states.
  6. Continue until goal is found or search fails.

State-space search systematically explores possible configurations; with a complete strategy such as BFS, it is guaranteed to find a solution whenever a valid path exists, making it fundamental in puzzles, robotics, navigation, and AI planning systems.


3. Describe Planning Graph with the Application to Real-World Examples.

A Planning Graph is a data structure used in automated planning to represent the progression of states and actions level by level. It consists of alternating state levels and action levels, starting from the initial state. It helps in goal reachability analysis and efficient plan extraction.


Structure of Planning Graph

1. State Levels (S-levels)

  • Represent logical propositions (facts) about the world.
  • The first level S₀ represents the initial state.
  • Each new state level carries forward the previous level's propositions (via persistence, or "no-op", actions) and adds new ones produced by actions.

2. Action Levels (A-levels)

  • Contain actions whose preconditions are satisfied in the previous state level.
  • Actions produce effects that appear in the next state level.

3. Edges

  • From state → action (preconditions).
  • From action → state (effects).

4. Mutual Exclusion (Mutex)

  • Identifies actions or states that cannot occur together.
  • Reduces unnecessary search space.

Mutex between actions occurs due to:

  • Inconsistent effects
  • Interference
  • Competing needs

Mutex between literals occurs due to:

  • One literal being the negation of the other
  • Inconsistent support (every pair of actions that could achieve the two literals is mutex)

Working of Planning Graph (GraphPlan Algorithm)

  1. Initialize with S₀ (initial state).
  2. Add A₀ (all applicable actions).
  3. Generate S₁ from action effects.
  4. Check if goals appear non-mutex.
  5. If yes, extract solution.
  6. If not, expand graph further.

Properties:

  • Literals increase monotonically.
  • Actions increase monotonically.
  • Mutex relations decrease monotonically.

Example: CAKE Problem

Initial State (S₀):
Have(Cake), ¬Eaten(Cake)

Action (A₀):
Eat(Cake)

Next State (S₁):
Have(Cake), ¬Have(Cake), Eaten(Cake), ¬Eaten(Cake)
(persistence actions carry the S₀ literals forward alongside the effects of Eat(Cake))

Mutex:
Have(Cake) and Eaten(Cake) are mutex in S₁: the only actions achieving them, persistence of Have(Cake) and Eat(Cake), are themselves mutually exclusive.

Further Action (A₁):
Bake(Cake)

This shows how states and actions expand level by level until goal conditions are satisfied.
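The level-by-level expansion can be sketched in a few lines. This is a simplified model: literals are plain strings with a "¬" prefix for negation, persistence is modeled by copying every literal forward, and mutex computation is omitted.

```python
# Literals are plain strings; a "¬" prefix marks negation (a simplification).
ACTIONS = {
    "Eat(Cake)":  {"pre": {"Have(Cake)"},  "eff": {"¬Have(Cake)", "Eaten(Cake)"}},
    "Bake(Cake)": {"pre": {"¬Have(Cake)"}, "eff": {"Have(Cake)"}},
}

def expand(level):
    """Given a state level (set of literals), return (action level, next state level)."""
    applicable = {a for a, d in ACTIONS.items() if d["pre"] <= level}
    nxt = set(level)                      # persistence (no-op) actions copy literals forward
    for a in applicable:
        nxt |= ACTIONS[a]["eff"]
    return applicable, nxt

S0 = {"Have(Cake)", "¬Eaten(Cake)"}
A0, S1 = expand(S0)   # Eat(Cake) is applicable; S1 gains ¬Have(Cake) and Eaten(Cake)
A1, S2 = expand(S1)   # Bake(Cake) now becomes applicable as well
```

Note how S₁ contains all four literals: the literals grow monotonically, exactly as the properties above state.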


Applications of Planning Graph in Real World

1. Robotics

Used for task sequencing in robot manipulation.
Example: Pick object → Move → Place object.
Ensures actions do not conflict (mutex handling).

2. Logistics and Supply Chain

Plans delivery routes.
Checks whether resources allow parallel deliveries.

3. Automated Workflow Systems

Business process automation.
Determines valid order of operations.

4. Real-Time Decision Systems

Used in dynamic environments like self-driving cars.
Supports quick plan generation and goal checking.


Planning Graph improves efficiency by reducing search space using mutex relations and providing heuristic guidance for planning problems.


4. Explain the Working Principle of Partial-Order Planning (POP) in Planning Graph.

Partial-Order Planning (POP) is an approach where actions are partially ordered rather than strictly sequenced. It follows the least commitment strategy, meaning ordering decisions are made only when necessary.


Key Concepts of Partial-Order Planning

  1. Flexibility in execution
  2. Causal links between actions
  3. Ordering constraints
  4. Plan refinement process

Working Principle

Step 1: Initialize Plan
Include Start action (initial state).
Include Finish action (goal state).

Step 2: Identify Open Preconditions
If goal is not satisfied, add an action that achieves it.

Step 3: Add Causal Link
If Action A produces effect E needed by Action B:
A → E → B

Step 4: Add Ordering Constraints
Ensure A happens before B if required.

Step 5: Resolve Threats
If another action could delete a protected effect (a threat to a causal link), resolve it by ordering the threatening action before the link (demotion) or after it (promotion).

Step 6: Continue Until No Open Preconditions Remain

Plan is complete when:

  • No cycles
  • No open preconditions
  • No conflicts

Example: Wearing Shoes

Actions:
RightSock, RightShoe, LeftSock, LeftShoe

Ordering Constraints:
RightSock < RightShoe
LeftSock < LeftShoe

Left and right branches are independent, so they need not follow strict total order.
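This flexibility can be made concrete by enumerating every total order consistent with the partial order (a brute-force sketch using permutation filtering, which is fine at this scale):

```python
from itertools import permutations

ACTIONS = ["RightSock", "RightShoe", "LeftSock", "LeftShoe"]
ORDER = {("RightSock", "RightShoe"), ("LeftSock", "LeftShoe")}  # (a, b) means a before b

def linearizations(actions, order):
    """Yield every total order consistent with the partial order."""
    for perm in permutations(actions):
        if all(perm.index(a) < perm.index(b) for a, b in order):
            yield list(perm)
```

The partial order admits six valid execution sequences, whereas a total-order planner would commit to exactly one of them up front.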


POP as Search Problem

Components:

  • Set of actions
  • Set of ordering constraints
  • Set of causal links
  • Set of open preconditions

Advantages

  • More flexible than total-order planning.
  • Handles complex problems efficiently.
  • Reduces unnecessary constraints.

Partial-order planning works effectively with planning graphs by refining actions level by level while maintaining only essential dependencies, thereby improving planning efficiency in complex environments.


5. Discuss Forward March, Backward March, and Planning with Limited Resources.


1. Forward March (Forward Search / Progression Planning)

Forward march is a planning strategy that begins from the initial state and applies actions step by step until the goal state is reached. It generates successor states by applying all possible applicable actions at each stage.

Working Principle

  1. Start from initial state S₀.
  2. Apply valid actions whose preconditions are satisfied.
  3. Generate new states.
  4. Continue expansion until goal state is found.

Applications

  • Robotics navigation
  • Pathfinding in self-driving cars
  • Real-time AI systems

Advantages

  • Natural and intuitive method.
  • Easy to apply domain heuristics.

Challenges

  • Large branching factor.
  • May explore unnecessary states.

Example

Water Jug Problem
Initial → (0,0)
Apply actions → Fill, Pour
Goal → (0,4)
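Progression planning over STRIPS-style actions can be sketched as follows. The key-and-door domain, its action names, and its literals are illustrative assumptions, not part of the original example:

```python
from collections import deque

# Hypothetical STRIPS-style domain (names are illustrative assumptions).
ACTIONS = {
    "PickKey":  {"pre": {"AtKey"},   "add": {"HaveKey"},  "del": {"AtKey"}},
    "OpenDoor": {"pre": {"HaveKey"}, "add": {"DoorOpen"}, "del": set()},
}

def forward_search(init, goal, actions):
    """Progression: BFS from the initial state, applying actions whose preconditions hold."""
    start = frozenset(init)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, a in actions.items():
            if a["pre"] <= state:
                nxt = frozenset((state - a["del"]) | a["add"])
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None
```

Each expansion applies only actions whose preconditions are satisfied, which is exactly the working principle listed above.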


2. Backward March (Backward Search / Regression Planning)

Backward march starts from the goal state and works backward toward the initial state. It determines which actions could have produced the goal and checks their preconditions recursively.

Working Principle

  1. Begin with goal state.
  2. Identify actions that achieve the goal.
  3. Replace goal with action’s preconditions.
  4. Continue until initial state is reached.

Applications

  • Puzzle solving
  • Theorem proving
  • Logical reasoning systems

Advantages

  • Efficient when goal is well defined.
  • Reduces unnecessary exploration.

Challenges

  • Difficult with multiple goal states.
  • Requires clearly defined preconditions.

Example

Goal: Prescribe(Medicine, John)
Find rule producing it → Flu(John)
Check preconditions → Fever(John), Cough(John)
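Regression can be sketched in the same style: start from the goal set, pick a relevant action, and replace its effects with its preconditions. The key-and-door domain below is an illustrative assumption:

```python
from collections import deque

# Hypothetical STRIPS-style domain (names are illustrative assumptions).
ACTIONS = {
    "PickKey":  {"pre": {"AtKey"},   "add": {"HaveKey"},  "del": {"AtKey"}},
    "OpenDoor": {"pre": {"HaveKey"}, "add": {"DoorOpen"}, "del": set()},
}

def backward_search(init, goal, actions):
    """Regression: BFS from the goal set, replacing achieved goals with preconditions."""
    start = frozenset(goal)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        g, plan = frontier.popleft()
        if g <= init:
            return plan
        for name, a in actions.items():
            # Relevant action: achieves part of g and deletes none of it.
            if a["add"] & g and not (a["del"] & g):
                g2 = frozenset((g - a["add"]) | a["pre"])
                if g2 not in visited:
                    visited.add(g2)
                    frontier.append((g2, [name] + plan))
    return None
```

Note the search direction: the plan is built back-to-front, prepending each action as its effects are regressed away.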


3. Planning with Limited Resources

In real-world planning, AI systems must consider constraints such as time, energy, cost, manpower, and materials. Planning must optimize resource usage while achieving goals.

Examples

  • Logistics: Limited fuel and vehicles.
  • Robot navigation: Limited battery power.
  • Healthcare scheduling: Limited doctors and rooms.

Techniques Used

  • Constraint Satisfaction Problems (CSP)
  • Linear Programming (LP)
  • Heuristic search with cost functions
  • Reinforcement Learning (RL)

Challenges

  • Resource conflicts between actions.
  • Increased computational complexity.
  • Need for optimization strategies.
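One simple way to combine these ideas is to fold the resource limit into a heuristic search as a cost budget. A minimal sketch using uniform-cost search over a hypothetical two-move robot domain (action names and costs are illustrative assumptions):

```python
import heapq

# Actions with a resource cost (e.g., battery units); names/costs are hypothetical.
ACTIONS = {
    "MoveAB": {"pre": {"At(A)"}, "add": {"At(B)"}, "del": {"At(A)"}, "cost": 3},
    "MoveBC": {"pre": {"At(B)"}, "add": {"At(C)"}, "del": {"At(B)"}, "cost": 4},
}

def plan_with_budget(init, goal, actions, budget):
    """Uniform-cost search that prunes any plan exceeding the resource budget."""
    frontier = [(0, 0, frozenset(init), [])]  # (cost, tiebreak, state, plan)
    best = {frozenset(init): 0}
    tie = 0
    while frontier:
        cost, _, state, plan = heapq.heappop(frontier)
        if goal <= state:
            return plan, cost
        for name, a in actions.items():
            if a["pre"] <= state:
                c2 = cost + a["cost"]
                if c2 > budget:
                    continue  # resource limit exceeded: prune this branch
                nxt = frozenset((state - a["del"]) | a["add"])
                if c2 < best.get(nxt, float("inf")):
                    best[nxt] = c2
                    tie += 1
                    heapq.heappush(frontier, (c2, tie, nxt, plan + [name]))
    return None
```

With a sufficient budget the planner returns the cheapest plan and its cost; when the budget is too small, every branch is pruned and no plan is found.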

Comparison

Aspect           | Forward March          | Backward March
-----------------|------------------------|-----------------------------
Search Direction | Initial → Goal         | Goal → Initial
Suitable For     | Real-time navigation   | Logical reasoning
Branching        | Large branching factor | Smaller, focused search
Dependency       | Based on current state | Based on goal preconditions

Forward march, backward march, and planning with limited resources are essential AI planning strategies that help systems generate efficient and feasible plans under different problem conditions and constraints.