1. Summarise Various AI Models and Their Applications
AI Models
AI models are systems trained on data to identify patterns, make predictions, and support decision-making without continuous human intervention.
Difference between Algorithm and Model
- Algorithm → Step-by-step procedure to solve a problem
- Model → Output obtained after training an algorithm on data
Types of AI Models
1. Supervised Learning
- Uses labeled data for training
- Input-output pairs are provided
- Model learns mapping between input and output
Examples:
- Image classification
- Spam email detection
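The idea of learning a mapping from labelled input-output pairs can be sketched with a tiny 1-nearest-neighbour classifier (the feature vectors and labels below are made up for illustration):

```python
# A minimal supervised-learning sketch: classify a new point by the label
# of its closest labelled training example (toy spam/ham data).

def distance(a, b):
    # Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(train, query):
    # Return the label of the nearest training example
    nearest = min(train, key=lambda pair: distance(pair[0], query))
    return nearest[1]

# Labelled training data: (features, label) pairs
train = [((1.0, 1.0), "spam"), ((0.0, 0.2), "ham"), ((0.9, 0.8), "spam")]
print(predict(train, (0.1, 0.1)))  # → ham
```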
2. Unsupervised Learning
- Uses unlabeled data
- Identifies hidden patterns or structures
- Groups similar data automatically
Examples:
- Customer segmentation
- Recommendation systems
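Grouping similar unlabelled data can be sketched with a tiny k-means clustering loop (the 1-D points and k=2 are assumptions for this illustration, not from the notes):

```python
# A minimal unsupervised-learning sketch: k-means on unlabelled 1-D points.

def kmeans(points, k=2, iters=10):
    centroids = points[:k]                        # naive initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                          # assign to nearest centroid
            i = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[i].append(p)
        # move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

points = [1.0, 1.2, 0.8, 10.0, 10.5, 9.8]
print(kmeans(points))  # groups the low values and the high values
```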
3. Reinforcement Learning
- Learns through trial and error
- Uses rewards and penalties
- Improves performance over time
Examples:
- Game playing AI
- Robot navigation
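Trial-and-error learning with rewards can be sketched with an epsilon-greedy agent choosing between two actions (the reward values are invented for this toy example):

```python
import random

# A minimal reinforcement-learning sketch: the agent tries actions,
# receives rewards, and gradually prefers the higher-paying action.

def learn(rewards, episodes=500, epsilon=0.1, alpha=0.1, seed=0):
    random.seed(seed)
    q = [0.0] * len(rewards)                 # estimated value of each action
    for _ in range(episodes):
        if random.random() < epsilon:        # explore: random action
            a = random.randrange(len(rewards))
        else:                                # exploit: best-known action
            a = max(range(len(rewards)), key=lambda i: q[i])
        r = rewards[a]                       # observed reward
        q[a] += alpha * (r - q[a])           # update estimate toward reward
    return q

q = learn([1.0, 5.0])          # action 1 pays more
print(q.index(max(q)))         # → 1: the agent learned the better action
```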
Applications of AI Models
1. Virtual Assistants
- Systems like Siri and Google Assistant
- Perform tasks using voice commands
2. Self-Driving Cars
- Use AI models for object detection and decision-making
- Analyze environment and take actions
3. Recommendation Systems
- Suggest products, movies, or content
- Based on user behavior and preferences
4. Chatbots
- Provide automated responses
- Used in customer service
5. Medical Diagnosis
- Detect diseases using medical data
- Assist doctors in decision making
6. Computer Vision and NLP
- Computer Vision → Image and video understanding
- NLP → Language understanding and processing
Key Points
- AI models automate decision-making
- Improve efficiency and accuracy
- Used in multiple real-world domains
2. Analyse the Problem Types in AI
Problem Solving in AI
Problem solving in AI involves finding solutions using logical reasoning, search methods, and structured representation.
Components of AI Problem
1. Initial State
Starting point of the problem
2. Goal State
Desired solution or final state
3. Operators
Actions that move from one state to another
4. Constraints
Rules that restrict possible actions
Problem Representation
- State Space → Set of all possible states
- Search Trees → Hierarchical structure of states
- State Space Graphs → Nodes and transitions
- Production Systems → Rule-based representation
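A state space and its search can be sketched with breadth-first search over a hand-made state-space graph (the states A-G and their transitions are illustrative):

```python
from collections import deque

# A minimal state-space search sketch: breadth-first search from an
# initial state to a goal state over a graph of states and transitions.

def bfs(graph, start, goal):
    frontier = deque([[start]])              # paths waiting to be expanded
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:                    # goal state reached
            return path
        for nxt in graph.get(state, []):     # apply operators
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                              # goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["G"]}
print(bfs(graph, "A", "G"))  # → ['A', 'B', 'D', 'G']
```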
Types of Problems in AI
1. Well-Defined Problems
- Clearly defined initial state, goal, and rules
- Easy to represent and solve
Example: Puzzle solving
2. Ill-Defined Problems
- Missing or unclear information
- Hard to represent and solve
Example: Natural language understanding
3. Deterministic Problems
- Outcome is predictable
- Same input gives same output
Example: Mathematical calculations
4. Non-Deterministic Problems
- Multiple possible outcomes
- Involves uncertainty
Example: Real-world decision making
Problem Solving Steps
- Problem Definition
- Data Collection and Preparation
- Model Selection and Training
- Solution Generation and Evaluation
- Refinement and Deployment
Key Points
- AI problem solving uses structured representation
- Efficient search strategies are applied
- Helps solve real-world complex problems
3. Explain the Importance of Unification in AI
Unification in AI
Unification is the process of matching two logical expressions by finding a substitution that makes them identical. It plays a key role in reasoning and inference mechanisms.
Role of Unification
- Matches variables with constants or other variables
- Helps in applying rules effectively
- Enables logical inference in AI systems
Importance of Unification
1. Supports Logical Reasoning
- Used to derive conclusions from facts
- Helps in validating logical statements
2. Essential for Theorem Proving
- Used in automated theorem proving
- Matches expressions to prove correctness
3. Used in Expert Systems
- Matches user input with rules
- Helps in decision making
4. Important in Natural Language Processing
- Matches sentence structures
- Helps in understanding language
5. Enables Rule-Based Systems
- Works with IF–THEN rules
- Helps in applying appropriate rules
Conditions for Unification
- Predicate symbols must match
- Number of arguments must be equal
- Substitutions must be consistent
- No contradictions should occur
Example of Unification
Expression 1: P(x, y)
Expression 2: P(a, b)
Substitution: x → a, y → b
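The example above can be sketched as a small unifier for flat predicate expressions (treating the lowercase letters x, y, z as variables is an assumption made for this illustration):

```python
# A minimal unification sketch for expressions like P(x, y) and P(a, b).

def is_var(t):
    return t in ("x", "y", "z")              # assumed variable names

def unify(e1, e2):
    # Each expression is (predicate, [arguments]).
    pred1, args1 = e1
    pred2, args2 = e2
    if pred1 != pred2 or len(args1) != len(args2):
        return None                          # predicate or arity mismatch
    subst = {}
    for a, b in zip(args1, args2):
        a = subst.get(a, a)                  # apply substitutions so far
        b = subst.get(b, b)
        if a == b:
            continue
        if is_var(a):
            subst[a] = b                     # bind variable to term
        elif is_var(b):
            subst[b] = a
        else:
            return None                      # two different constants
    return subst

print(unify(("P", ["x", "y"]), ("P", ["a", "b"])))  # → {'x': 'a', 'y': 'b'}
```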
Key Points
- Core concept in logic-based AI
- Enables inference and reasoning
- Widely used in NLP and expert systems
4. Describe Forward and Backward Chaining
Forward Chaining
- Data-driven approach
- Starts from known facts
- Applies rules to derive new facts
- Continues until goal is reached
Working of Forward Chaining
- Begin with initial facts
- Apply matching rules
- Generate new facts
- Repeat until goal is achieved
Features of Forward Chaining
- Bottom-up approach
- Useful when data is available
- Used in rule-based systems
Example of Forward Chaining
Fact: It is raining
Rule: If raining → take umbrella
Conclusion: Take umbrella
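The umbrella example can be sketched as a forward-chaining loop that keeps applying rules to the fact base until no new fact is derived:

```python
# A minimal forward-chaining sketch: start from known facts and apply
# IF-THEN rules until no new facts can be derived.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition in facts and conclusion not in facts:
                facts.add(conclusion)        # derive a new fact
                changed = True
    return facts

rules = [("raining", "take umbrella")]
print(forward_chain({"raining"}, rules))  # includes 'take umbrella'
```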
Backward Chaining
- Goal-driven approach
- Starts from the goal
- Works backward to find facts
- Breaks goal into sub-goals
Working of Backward Chaining
- Start with goal
- Find supporting rules
- Check conditions
- Repeat until verified
Features of Backward Chaining
- Top-down approach
- Useful when goal is known
- Used in theorem proving
Example of Backward Chaining
Goal: Take umbrella
Rule: If raining → take umbrella
Check: Is it raining?
If yes → goal achieved
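The same example can be sketched in the goal-driven direction: start from the goal and recursively check whether its supporting conditions hold:

```python
# A minimal backward-chaining sketch: start from the goal and work
# backward through the rules, checking whether the conditions are facts.

def backward_chain(goal, facts, rules):
    if goal in facts:
        return True                          # goal is already a known fact
    for condition, conclusion in rules:
        if conclusion == goal:               # a rule can produce the goal
            if backward_chain(condition, facts, rules):
                return True                  # its condition is satisfied
    return False

rules = [("raining", "take umbrella")]
print(backward_chain("take umbrella", {"raining"}, rules))  # → True
```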
Difference Between Forward and Backward Chaining
| Forward Chaining | Backward Chaining |
|---|---|
| Data-driven | Goal-driven |
| Bottom-up | Top-down |
| Starts from facts | Starts from goal |
| Used when data is known | Used when goal is known |
Key Points
- Both are inference techniques
- Used in expert systems
- Help in decision making and reasoning
5. Partial Order Planning
Definition
Partial Order Planning (POP) is a planning technique in AI where actions are not strictly arranged in a fixed sequence. Instead, actions are ordered only when necessary based on dependencies and constraints, allowing flexibility in execution.
Key Idea
- Focuses on constraints between actions
- Allows multiple valid sequences
- Avoids unnecessary ordering of independent actions
Important Components
1. Actions
- Set of operations required to achieve the goal
- Each action has preconditions and effects
2. Ordering Constraints
- Specifies when one action must occur before another
- Only applied when required
3. Causal Links
- Represents dependency between actions
- Shows how one action satisfies a condition of another
4. Open Preconditions
- Conditions that must be satisfied before execution
- Initially, these are the conditions required by the goal state
Steps in Partial Order Planning
- Goal Definition – Identify the final objective
- Initial Plan Creation – Start with minimal plan (start and goal states)
- Action Selection – Choose actions that help satisfy goals
- Add Ordering Constraints – Define sequence between dependent actions
- Establish Causal Links – Connect actions through conditions
- Resolve Conflicts (Threats) – Avoid conflicts between actions
- Refinement of Plan – Continue until all preconditions are satisfied
Working Principle
- Starts with a partial plan
- Gradually adds actions
- Maintains flexibility
- Ensures all constraints are satisfied
Advantages
- Flexible execution of actions
- Supports parallel processing
- Reduces unnecessary restrictions
- Efficient for complex environments
Limitations
- Complex to implement
- Requires conflict resolution mechanisms
- Difficult in very large systems
Example
Goal: Prepare breakfast
Actions:
Boil water
Prepare tea
Toast bread
Constraints:
Water must be boiled before making tea
Toasting bread is independent
👉 Actions can be executed in different valid orders
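The breakfast example can be sketched by enumerating which total orders respect the partial order (brute-force enumeration is used only for illustration; real planners refine the plan incrementally):

```python
from itertools import permutations

# A sketch of the breakfast example: with a single ordering constraint
# ("boil water" before "prepare tea"), several total orders remain valid.

def valid_orders(actions, constraints):
    # constraints: list of (before, after) pairs
    orders = []
    for perm in permutations(actions):
        if all(perm.index(a) < perm.index(b) for a, b in constraints):
            orders.append(perm)
    return orders

actions = ["boil water", "prepare tea", "toast bread"]
constraints = [("boil water", "prepare tea")]
for order in valid_orders(actions, constraints):
    print(order)   # 3 of the 6 permutations satisfy the constraint
```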
Applications
- Robotics (task planning)
- Automated scheduling systems
- Workflow management
- AI-based decision systems
Key Points
- POP = flexible planning
- Uses constraints instead of strict sequence
- Suitable for complex problems
- Supports parallel execution
6. Bayes’ Theorem and Its Rule
Definition
Bayes’ Theorem is a probabilistic method used in AI to update probability based on new evidence. It helps in decision-making under uncertainty.
Mathematical Formula
P(A|B) = (P(B|A) × P(A)) / P(B)
Meaning of Terms
- Prior Probability (P(A)) → Initial belief
- Likelihood (P(B|A)) → Evidence given hypothesis
- Evidence (P(B)) → Probability of evidence
- Posterior (P(A|B)) → Updated probability
Bayesian Rule
- Start with prior knowledge
- Collect new evidence
- Update belief using formula
- Repeat as data increases
Working Steps
- Identify hypothesis (A)
- Observe evidence (B)
- Calculate likelihood
- Apply formula
- Compute posterior probability
- Use result for decision making
Example
Disease Diagnosis:
P(Disease) = 0.01
P(Test Positive | Disease) = 0.9
P(Test Positive) = 0.05
👉 Calculate probability of disease given positive test
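Plugging the example numbers into the formula gives the posterior directly:

```python
# Bayes' theorem applied to the diagnosis example:
# P(Disease | Positive) = P(Positive | Disease) * P(Disease) / P(Positive)

def posterior(prior, likelihood, evidence):
    return likelihood * prior / evidence

p = posterior(prior=0.01, likelihood=0.9, evidence=0.05)
print(round(p, 4))  # → 0.18: even after a positive test, disease is unlikely
```

The low prior (1%) dominates: despite a 90% true-positive rate, only 18% of positive tests correspond to actual disease.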
Applications
- Medical Diagnosis
- Spam Filtering
- Machine Learning
- Risk Analysis
Advantages
- Handles uncertainty
- Improves predictions
- Uses prior + new data
Limitations
- Needs accurate prior data
- Computationally expensive
- Naive Bayes applications assume feature independence
Key Points
- Updates probability using evidence
- Core concept in probabilistic AI
- Used in prediction systems
- Basis for Bayesian networks