Here we provide the Artificial Intelligence GTU paper solution for Winter 2022. Read the full AI GTU paper solution given below. Artificial Intelligence GTU Old Paper, Winter 2022 [Marks: 70].
(a) Discuss various areas where Artificial Intelligence is used.
Artificial Intelligence (AI) has numerous applications across various fields and industries. Some of the key areas where AI is being used are:
Healthcare: AI is being used in healthcare to improve patient outcomes, reduce costs, and increase efficiency. AI-powered tools are being used for medical imaging, diagnosis, drug development, and personalized treatment planning.
Finance: AI is being used in the finance industry for fraud detection, risk assessment, portfolio management, and algorithmic trading. AI-powered chatbots are also being used for customer service and support.
Education: AI is being used in education to personalize learning, improve student engagement, and automate administrative tasks. AI-powered tutoring systems and virtual learning assistants are being used to provide students with personalized feedback and support.
Transportation: AI is being used in transportation for autonomous vehicles, traffic management, and logistics optimization. AI-powered systems are being used to optimize routes, reduce fuel consumption, and improve safety.
Marketing and Advertising: AI is being used in marketing and advertising for customer segmentation, personalized recommendations, and ad targeting. AI-powered chatbots are also being used for customer service and support.
Retail: AI is being used in retail for inventory management, supply chain optimization, and personalized recommendations. AI-powered chatbots are also being used for customer service and support.
Agriculture: AI is being used in agriculture for crop management, precision farming, and yield optimization. AI-powered systems are being used to analyze soil and weather data, optimize irrigation, and identify pests and diseases.
Manufacturing: AI is being used in manufacturing for quality control, predictive maintenance, and process optimization. AI-powered systems are being used to monitor equipment, identify defects, and optimize production processes.
(b) Explain different issues in designing of search problems.
Designing search problems can be challenging due to several issues, including:
 State space explosion: As the size of the state space (the set of all possible states of the problem) increases, the number of possible solutions also increases exponentially. This can make it difficult to find an optimal solution in a reasonable amount of time.
 Heuristic evaluation: In some search problems, it is difficult or impossible to determine the exact cost of moving from one state to another. In such cases, heuristic evaluation functions can be used to estimate the cost of moving to a particular state. However, finding an effective heuristic function can be challenging.
 Local optima: In some search problems, it is possible to get stuck in a local optimum, where further search in any direction will only lead to worse solutions. This can prevent the algorithm from finding the global optimum.
 Incomplete information: In some search problems, the algorithm may not have complete information about the state of the problem. This can make it difficult to determine the best course of action.
 Multiple objectives: In some search problems, there may be multiple objectives that need to be optimized simultaneously. This can make it difficult to find a single solution that satisfies all objectives.
 Dynamic environments: In some search problems, the state of the problem may change over time. This can make it difficult to find a solution that is optimal in the long term.
 Adversarial environments: In some search problems, the problem may be adversarial, meaning that there is an opponent who is actively trying to prevent the algorithm from finding a solution.
(a) Differentiate Generate and test algorithm with Best First Search
algorithm.
Generate and Test algorithm and Best First Search algorithm are two different search algorithms used in artificial intelligence. The main differences between these two algorithms are as follows:
 Approach: Generate and Test algorithm is a brute force approach, while Best First Search algorithm is a heuristic approach.
 Search Strategy: Generate and Test algorithm generates all possible solutions and tests each one to find the best one. In contrast, Best First Search algorithm uses a heuristic function to guide the search towards the most promising solutions.
 Efficiency: Generate and Test algorithm can be very inefficient, especially when the search space is large, as it generates all possible solutions before testing them. Best First Search algorithm is more efficient, as it focuses the search on the most promising solutions.
Memory Usage: Generate and Test may need to store a large number of candidate solutions, while Best First Search stores only the frontier (open list) of promising states, although this frontier itself can grow large on difficult problems.
 Optimality: Generate and Test algorithm can find an optimal solution, but it may take a long time. Best First Search algorithm may not always find an optimal solution, but it is usually faster.
(b) Solve and suggest the appropriate strategy for the following water-jug problem. You are given two jugs of capacity 8 liters and 5 liters. There are no measuring markers on the jugs. You have to obtain exactly 4 liters of water in the 8-liter jug.
The water-jug problem is a classic problem in artificial intelligence and can be solved using various search algorithms. One of the simplest and most effective algorithms for this problem is Breadth-First Search (BFS). The following is a step-by-step solution using BFS:
 Define the initial state: The initial state is when both jugs are empty.
 Define the goal state: The goal state is when the 8liter jug contains exactly 4 liters of water.
Define the actions: There are six possible actions in this problem: a. Fill the 8-liter jug b. Fill the 5-liter jug c. Empty the 8-liter jug d. Empty the 5-liter jug e. Pour water from the 5-liter jug into the 8-liter jug f. Pour water from the 8-liter jug into the 5-liter jug
Define the state transition function: The state transition function defines how the state changes when an action is performed: a. Filling a jug sets its contents to its full capacity. b. Emptying a jug sets its contents to zero. c. Pouring from one jug into the other transfers water until the source jug is empty or the destination jug is full, whichever happens first.
Define the search strategy: The search strategy for this problem is Breadth-First Search. The BFS algorithm will explore all possible states in a level-wise manner starting from the initial state.
 Implement the algorithm: Using the abovedefined steps, we can implement the BFS algorithm to find the solution to the waterjug problem.
Solution: Writing each state as (8-liter jug, 5-liter jug), one solution path is: (0,0) → fill the 8-liter jug (8,0) → pour into the 5-liter jug (3,5) → empty the 5-liter jug (3,0) → pour into the 5-liter jug (0,3) → fill the 8-liter jug (8,3) → pour into the 5-liter jug (6,5) → empty the 5-liter jug (6,0) → pour into the 5-liter jug (1,5) → empty the 5-liter jug (1,0) → pour into the 5-liter jug (0,1) → fill the 8-liter jug (8,1) → pour into the 5-liter jug (4,5). The 8-liter jug now contains exactly 4 liters of water.
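The BFS strategy above can be sketched as a short program. This is a minimal illustrative sketch; the function and variable names are my own, not from any standard library:

```python
from collections import deque

def water_jug_bfs(cap_big=8, cap_small=5, target=4):
    """BFS over water-jug states (big, small); returns a shortest path of states."""
    start = (0, 0)
    parent = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        big, small = queue.popleft()
        if big == target:
            # Reconstruct the path of states back to the start.
            path, s = [], (big, small)
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour_to_big = min(small, cap_big - big)      # amount transferable small -> big
        pour_to_small = min(big, cap_small - small)  # amount transferable big -> small
        successors = [
            (cap_big, small),                        # fill the 8-liter jug
            (big, cap_small),                        # fill the 5-liter jug
            (0, small),                              # empty the 8-liter jug
            (big, 0),                                # empty the 5-liter jug
            (big + pour_to_big, small - pour_to_big),        # pour 5 -> 8
            (big - pour_to_small, small + pour_to_small),    # pour 8 -> 5
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (big, small)
                queue.append(s)
    return None

path = water_jug_bfs()
print(path)
```

Because BFS explores states level by level, the path it returns uses the fewest possible actions.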
(c) What is Hill Climbing algorithm? Discuss the cases where Hill climbing
fails.
Hill Climbing is a local search algorithm used in artificial intelligence and optimization problems. The basic idea of the Hill Climbing algorithm is to continuously improve the current solution by making small modifications to it, until it reaches a peak or a plateau.
The Hill Climbing algorithm starts with an initial solution, evaluates its fitness, and generates its neighbors. It then chooses the neighbor with the best fitness and repeats the process until a peak is reached or no better solution can be found.
However, there are some cases where the Hill Climbing algorithm fails, including:
 Local Optima: The Hill Climbing algorithm is prone to getting stuck at local optima, where the current solution is not the best solution but there are no better neighbors to explore.
 Plateaus: The Hill Climbing algorithm can also get stuck on plateaus, where all neighbors have the same fitness value and no direction can be identified for further exploration.
Ridge: A ridge is a region that slopes upward toward the optimum but whose sides fall away steeply. If none of the available moves points along the ridge, every single move leads downhill, so the algorithm oscillates from side to side and makes little or no progress.
No backtracking: Hill Climbing keeps no history of visited states, so once it commits to a poor path it cannot backtrack and recover.
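The local-optimum failure case can be demonstrated with a short program. This is an illustrative sketch; the objective function and the start points are invented for the demonstration:

```python
def hill_climb(f, x, step=1):
    """Greedy hill climbing on integers: move to the better neighbor until stuck."""
    while True:
        neighbors = [x - step, x + step]
        best = max(neighbors, key=f)
        if f(best) <= f(x):      # no better neighbor: a peak (or plateau), so stop
            return x
        x = best

# An objective with two peaks: a local maximum at x=2 and the global maximum at x=10.
def f(x):
    return -(x - 2) ** 2 if x < 6 else 50 - (x - 10) ** 2

print(hill_climb(f, 0))   # climbs to the local peak at x=2 and gets stuck
print(hill_climb(f, 8))   # only a start near x=10 reaches the global peak
```

Starting from x=0 the search stops at the local maximum x=2 even though a far better peak exists at x=10, which is exactly the local-optimum failure described above.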
(c) Consider the following initial and goal state configuration of the 8-puzzle problem. Apply the A* algorithm to reach from the initial state to the goal state by drawing the search tree and show the solution. Consider the number of misplaced tiles as the heuristic function.
Initial State:
2 8 3
1 6 4
7 _ 5
Goal State:
1 2 3
8 _ 4
7 6 5
(_ denotes the blank.)
To apply A* algorithm, we need to first calculate the heuristic function for each state. Here, we are considering the number of misplaced tiles as the heuristic function.
For the initial state, the number of misplaced tiles is 4 (tiles 1, 2, 6, and 8 are out of place; the blank is not counted), so h = 4 and f = g + h = 0 + 4 = 4.
Applying A*, we repeatedly expand the frontier node with the lowest f = g + h, where g is the number of moves made so far and h is the number of misplaced tiles. The path found is:

Step 0 (g = 0, h = 4, f = 4):
2 8 3
1 6 4
7 _ 5

Step 1, move tile 6 down (g = 1, h = 3, f = 4):
2 8 3
1 _ 4
7 6 5

Step 2, move tile 8 down (g = 2, h = 3, f = 5):
2 _ 3
1 8 4
7 6 5

Step 3, move tile 2 right (g = 3, h = 2, f = 5):
_ 2 3
1 8 4
7 6 5

Step 4, move tile 1 up (g = 4, h = 1, f = 5):
1 2 3
_ 8 4
7 6 5

Step 5, move tile 8 left (g = 5, h = 0, f = 5): goal reached
1 2 3
8 _ 4
7 6 5

At each level, the sibling states have higher f values (for example, sliding tile 5 at step 1 gives h = 5 and f = 6), so they remain on the frontier and are never expanded. Since the number of misplaced tiles never overestimates the true distance to the goal, A* is guaranteed to return this 5-move solution, which is optimal.
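The same search can be carried out programmatically. Below is a minimal A* sketch using the misplaced-tiles heuristic; the state encoding (a tuple read row by row, with 0 as the blank) and the function names are illustrative:

```python
import heapq

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # 0 denotes the blank

def misplaced(state):
    # h(n): number of non-blank tiles out of place.
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def neighbors(state):
    # Slide a tile into the blank: swap the blank with any orthogonal neighbor.
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def a_star(start):
    # Frontier entries are (f, g, state, path); expand lowest f = g + h first.
    frontier = [(misplaced(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        for nxt in neighbors(state):
            if nxt not in best_g or g + 1 < best_g[nxt]:
                best_g[nxt] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + misplaced(nxt), g + 1, nxt, path + [nxt]))

start = (2, 8, 3, 1, 6, 4, 7, 0, 5)
solution = a_star(start)
print(len(solution) - 1)   # number of moves in the optimal solution
```

Because the misplaced-tiles heuristic is admissible, the path returned is guaranteed optimal.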
OR
(a) Represent the following facts in predicate logic form.
Every child loves Santa.
Everyone who loves Santa loves any reindeer.
Rudolph is a reindeer, and Rudolph has a red nose.
Let the domain be all possible entities.
 Every child loves Santa. ∀x (Child(x) → Loves(x, Santa))
 Everyone who loves Santa loves any reindeer. ∀x (Loves(x, Santa) → ∀y (Reindeer(y) → Loves(x, y)))
 Rudolph is a reindeer, and Rudolph has a red nose. Reindeer(Rudolph) ∧ HasRedNose(Rudolph)
(b) Convert the logical statement to conjunctive normal
form.
To convert a logical statement to conjunctive normal form, we need to follow these steps:
 Eliminate all implications using the equivalence A → B ≡ ¬A ∨ B.
 Move negation inwards using De Morgan’s laws.
 Apply distributivity of ∧ over ∨.
 Repeat the above steps until the formula is in conjunctive normal form, which is a conjunction of disjunctions.
Here is an example:
Statement: (p → q) ∧ (r ∨ s) ∧ ¬(p ∧ s)
(¬p ∨ q) ∧ (r ∨ s) ∧ (¬p ∨ ¬s)
After eliminating the implication (p → q ≡ ¬p ∨ q) and applying De Morgan’s law to the last conjunct (¬(p ∧ s) ≡ ¬p ∨ ¬s), every conjunct is already a disjunction of literals, so no distribution step is needed.
The resulting formula is in conjunctive normal form.
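The equivalence of the original statement and its CNF can be verified by brute force over all truth assignments. This small sketch enumerates them:

```python
from itertools import product

# Original statement: (p -> q) AND (r OR s) AND NOT (p AND s)
def original(p, q, r, s):
    return ((not p) or q) and (r or s) and not (p and s)

# Its conjunctive normal form: (~p OR q) AND (r OR s) AND (~p OR ~s)
def cnf(p, q, r, s):
    return ((not p) or q) and (r or s) and ((not p) or (not s))

# Check all 16 truth assignments: the two formulas must agree everywhere.
assert all(original(*v) == cnf(*v) for v in product([False, True], repeat=4))
print("equivalent")
```

Truth-table enumeration scales exponentially, but for four variables it is a quick sanity check on a hand-derived CNF.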
(c) Translate the following sentences to predicate logic and prove that John likes peanuts using backward chaining.
1. John likes all kinds of food.
2. Apples are food.
3. Chicken is food.
4. Anything anyone eats and isn’t killed by is food.
5. Bill eats peanuts and is still alive.
6. Sue eats everything Bill eats.
The first step in backward chaining is to convert the given sentences into predicate logic form. Let’s assign the following predicates:
L(x, y) – “x likes y”
F(x) – “x is food”
P – a constant denoting peanuts
E(x, y) – “x eats y”
K(x, y) – “x is killed by eating y”
Using these predicates, we can translate the sentences as follows:
∀x (F(x) → L(John, x))
E(Bill, P) ∧ ¬K(Bill, P)
F(Apples)
F(Chicken)
∀x ∀y ((E(x, y) ∧ ¬K(x, y)) → F(y))
∀x (E(Bill, x) → E(Sue, x))
To prove that John likes peanuts using backward chaining, we start with the goal statement:
 L(John, P)
Then we search the knowledge base for any statement that directly implies this goal statement. From statement 2, we know that Bill eats peanuts and is still alive, but this doesn’t help us prove that John likes peanuts. So we move on to the next step.
Next, we look for a statement that implies another statement that implies the goal statement. From statement 5, we know that anything anyone eats and isn’t killed by is food. Since we know that Bill eats peanuts and is still alive, we can conclude that peanuts are food. Therefore, we can infer:
 F(P)
Now we can use statement 1 to infer that John likes peanuts, since he likes all kinds of food:
∀x (F(x) → L(John, x))
F(P) → L(John, P) (instantiating x with P)
F(P)
L(John, P) (by Modus Ponens)
This completes the backward-chaining proof that John likes peanuts.
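The backward-chaining argument above can be sketched propositionally. The atom names below flatten the ground literals (for example, F(P) becomes "food_peanuts") and are purely illustrative:

```python
# Each rule head maps to a list of alternative bodies (conjunctions of subgoals).
rules = {
    "likes_john_peanuts": [["food_peanuts"]],                # John likes all food
    "food_peanuts": [["eats_bill_peanuts", "alive_bill"]],   # eaten and not killed => food
}
facts = {"eats_bill_peanuts", "alive_bill"}                  # from sentence 5

def prove(goal):
    """Backward chaining: a goal holds if it is a fact, or some rule body holds."""
    if goal in facts:
        return True
    return any(all(prove(sub) for sub in body)
               for body in rules.get(goal, []))

print(prove("likes_john_peanuts"))   # True
```

The search works backward from the goal L(John, P), reduces it to the subgoal F(P), and discharges that subgoal via the facts about Bill, mirroring the proof above.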
OR
(a) Differentiate propositional logic and predicate logic.
Propositional logic and predicate logic are two branches of symbolic logic that deal with propositions, or statements. However, there are some fundamental differences between the two.
Propositional logic deals with propositions that are either true or false, and it focuses on the relationship between propositions using logical operators such as “and,” “or,” and “not.” In propositional logic, a proposition is represented using a symbol, such as p, q, r, and so on, and a logical operator is used to connect them to form a compound proposition.
Predicate logic, on the other hand, deals with propositions that involve variables and quantifiers, such as “for all” and “there exists.” In predicate logic, propositions are represented using predicates, which are functions that take one or more arguments. For example, the proposition “x is greater than 5” can be represented using a predicate G(x). In predicate logic, variables are used to represent objects, and quantifiers are used to specify the scope of the variable. The universal quantifier “for all” is denoted by the symbol ∀, and the existential quantifier “there exists” is denoted by the symbol ∃.
(b) Explain the Modus Ponens inference rule with example.
Modus Ponens is a rule of inference in propositional logic that allows us to draw a conclusion from a conditional statement and its antecedent. It can be stated as follows:
If p implies q and p is true, then we can conclude that q is true.
Symbolically, this can be written as:
p → q
p
∴ q
For example, if we have the following conditional statement and its antecedent:
If it rains, then the streets are wet. It is raining.
Using Modus Ponens, we can conclude that:
The streets are wet.
This is because if it is true that “If it rains, then the streets are wet,” and it is also true that “It is raining,” then we can logically infer that “The streets are wet.”
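Modus Ponens can be applied mechanically by a tiny forward-inference loop; the rule and fact names here are illustrative:

```python
rules = [("rains", "streets_wet")]      # p -> q: "If it rains, the streets are wet"
facts = {"rains"}                       # p: "It is raining"

# Repeatedly apply Modus Ponens until no new fact can be derived.
changed = True
while changed:
    changed = False
    for p, q in rules:
        if p in facts and q not in facts:
            facts.add(q)                # from p -> q and p, infer q
            changed = True

print(sorted(facts))                    # ['rains', 'streets_wet']
```

After one application of the rule, "streets_wet" is derived, matching the inference in the example.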
(c) Translate following sentences to predicate logic and prove that “West is
criminal” using resolution.
It is a crime for an American to sell weapons to hostile nations.
All the missiles were sold to Nono by West.
The country Nono is an enemy of America.
An enemy of America counts as hostile.
Nono has some missiles.
Missiles are weapons.
West is an American.
To translate the given sentences into predicate logic, we can define the following predicates:
American(x) – x is an American
Weapon(x) – x is a weapon
Sells(x, y, z) – x sells y to z
Hostile(x) – x is a hostile nation
Criminal(x) – x is a criminal
Missile(x) – x is a missile
Owns(x, y) – x owns y
Enemy(x, y) – x is an enemy of y
Using these predicates (and the constant M1 naming Nono’s missile), we can translate the sentences as follows:
American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) → Criminal(x)
Missile(x) ∧ Owns(Nono, x) → Sells(West, x, Nono)
Enemy(Nono, America)
∀x (Enemy(x, America) → Hostile(x))
Owns(Nono, M1) ∧ Missile(M1)
Missile(x) → Weapon(x)
American(West)
To prove that West is a criminal using resolution, we negate the statement we want to prove (i.e., ¬Criminal(West)) and convert all the statements into clauses. Then, we use resolution to derive the empty clause, which indicates a contradiction and proves the original statement.
The clauses for the given statements are:
1. ¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x, y, z) ∨ ¬Hostile(z) ∨ Criminal(x)
2. ¬Missile(x) ∨ ¬Owns(Nono, x) ∨ Sells(West, x, Nono)
3. Enemy(Nono, America)
4. ¬Enemy(x, America) ∨ Hostile(x)
5. Owns(Nono, M1)
6. Missile(M1)
7. ¬Missile(x) ∨ Weapon(x)
8. American(West)
Negating the statement we want to prove, we get:
9. ¬Criminal(West)
The resolution proof then proceeds:
10. Resolve 9 with 1 {x/West}: ¬American(West) ∨ ¬Weapon(y) ∨ ¬Sells(West, y, z) ∨ ¬Hostile(z)
11. Resolve 10 with 8: ¬Weapon(y) ∨ ¬Sells(West, y, z) ∨ ¬Hostile(z)
12. Resolve 11 with 7 {x/y}: ¬Missile(y) ∨ ¬Sells(West, y, z) ∨ ¬Hostile(z)
13. Resolve 12 with 6 {y/M1}: ¬Sells(West, M1, z) ∨ ¬Hostile(z)
14. Resolve 13 with 2 {x/M1, z/Nono}: ¬Missile(M1) ∨ ¬Owns(Nono, M1) ∨ ¬Hostile(Nono)
15. Resolve 14 with 6 and then 5: ¬Hostile(Nono)
16. Resolve 15 with 4 {x/Nono}: ¬Enemy(Nono, America)
17. Resolve 16 with 3: the empty clause
Deriving the empty clause shows that ¬Criminal(West) is inconsistent with the knowledge base, which proves that West is a criminal.
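The refutation can also be checked mechanically in a propositional sketch: each ground first-order literal is flattened to a single letter (this encoding is illustrative), and clauses are resolved until the empty clause appears.

```python
# A=American(West), M=Missile(M1), O=Owns(Nono,M1), S=Sells(West,M1,Nono),
# H=Hostile(Nono), E=Enemy(Nono,America), W=Weapon(M1), C=Criminal(West)
clauses = {
    frozenset({"-A", "-W", "-S", "-H", "C"}),   # crime rule
    frozenset({"-M", "-O", "S"}),               # Nono's missiles were sold by West
    frozenset({"E"}),                           # Nono is an enemy of America
    frozenset({"-E", "H"}),                     # enemies of America count as hostile
    frozenset({"O"}),                           # Nono owns the missile M1
    frozenset({"M"}),                           # M1 is a missile
    frozenset({"-M", "W"}),                     # missiles are weapons
    frozenset({"A"}),                           # West is an American
    frozenset({"-C"}),                          # negated goal
}

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolvents(c1, c2):
    # Resolve c1 and c2 on every complementary pair of literals.
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

proved = False
while not proved:
    new = set()
    for c1 in clauses:
        for c2 in clauses:
            new.update(resolvents(c1, c2))
    if frozenset() in new:
        proved = True          # empty clause derived: Criminal(West) follows
    elif new <= clauses:
        break                  # saturated without a contradiction
    else:
        clauses |= new

print(proved)
```

Saturation derives the empty clause, confirming that Criminal(West) is entailed.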
(a) Perform the unification of following atomic sentences. (i.e. Find the
most general unifier.)
Knows(John, x); Knows(y, Mother(y))
Q(a, g(x, a), f(y)), Q(a, g(f(b), a), x)
(i) Unification of Knows(John, x); Knows(y, Mother(y))
Here the predicate symbol Knows matches in both sentences, so we unify the arguments pairwise.
Unify the first arguments: John and y, giving the binding y/John.
Unify the second arguments: x and Mother(y). After applying y/John, this is x and Mother(John), giving the binding x/Mother(John).
Therefore, the most general unifier for Knows(John, x); Knows(y, Mother(y)) is {y/John, x/Mother(John)}.
(ii) Unification of Q(a, g(x, a), f(y)), Q(a, g(f(b), a), x)
The predicate symbol Q matches, so we unify the arguments pairwise.
First arguments: a and a match directly.
Second arguments: g(x, a) and g(f(b), a). Unifying the corresponding inner arguments gives x/f(b).
Third arguments: f(y) and x. Since x is already bound to f(b), we unify f(y) with f(b), giving y/b.
Therefore, the most general unifier for Q(a, g(x, a), f(y)), Q(a, g(f(b), a), x) is {x/f(b), y/b}.
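The pairwise argument matching above is exactly what a unification algorithm does. Below is a minimal sketch in which variables are single uppercase letters, compound terms are tuples (functor, arg, ...), and everything else is a constant; this representation is my own, not a standard library:

```python
def is_var(t):
    return isinstance(t, str) and len(t) == 1 and t.isupper()

def unify(x, y, subst):
    """Return a most general unifier extending subst, or None on failure."""
    if subst is None:
        return None
    if x == y:
        return subst
    if is_var(x):
        return unify_var(x, y, subst)
    if is_var(y):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):              # unify arguments pairwise
            subst = unify(a, b, subst)
        return subst
    return None

def unify_var(v, t, subst):
    if v in subst:
        return unify(subst[v], t, subst)
    if is_var(t) and t in subst:
        return unify(v, subst[t], subst)
    if occurs(v, t, subst):                 # occurs check: reject X = f(X)
        return None
    return {**subst, v: t}

def occurs(v, t, subst):
    if v == t:
        return True
    if is_var(t) and t in subst:
        return occurs(v, subst[t], subst)
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t)
    return False

# (i) Knows(John, x) and Knows(y, Mother(y))
print(unify(('Knows', 'john', 'X'), ('Knows', 'Y', ('Mother', 'Y')), {}))
# (ii) Q(a, g(x, a), f(y)) and Q(a, g(f(b), a), x)
print(unify(('Q', 'a', ('g', 'X', 'a'), ('f', 'Y')),
            ('Q', 'a', ('g', ('f', 'b'), 'a'), 'X'), {}))
```

In case (i) the substitution binds Y to john and X to ('Mother', 'Y'); applying the bindings yields x/Mother(John), matching the hand-derived unifier.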
(b) What is goal stack planning? Give example of initial state and goal state
in goal stack planning using some predicates.
Goal stack planning is a problem-solving approach that involves breaking down a complex goal into subgoals, and then recursively decomposing these subgoals into smaller subgoals until a set of primitive actions is obtained. It is used to solve problems in which a series of actions must be taken in order to achieve a desired goal state.
An example of initial state and goal state in goal stack planning using some predicates can be as follows:
Initial state:
 Robot is in the living room
 The door to the kitchen is closed
 The door to the bedroom is closed
 The bookshelf is empty
Goal state:
 The robot wants to bring a book from the bookshelf in the living room to the bedroom
Using goal stack planning, the robot would first decompose the overall goal into smaller subgoals, such as:
 Navigate to the bookshelf in the living room
 Pick up a book from the bookshelf
 Navigate to the door to the living room
 Open the door to the living room
 Navigate to the door to the bedroom
 Open the door to the bedroom
 Navigate to the desired location in the bedroom
 Put down the book in the desired location
Each of these subgoals may be further decomposed into smaller subgoals, until a set of primitive actions is obtained. The goal stack planning algorithm would then determine the order in which these actions should be performed, and the robot would execute the actions one at a time until the overall goal state is achieved.
(c) What is Wumpus world? Explain in detail.
Wumpus world is a classic artificial intelligence problem that involves navigating an agent through a square grid world with hidden hazards. It is named after the fictional creature “Wumpus”, a monster that can eat the agent if it enters the same cell.
The Wumpus world consists of a two-dimensional grid of rooms. Each room can be either empty, contain a pit, or contain the Wumpus. There is also a single room that contains a ladder to the next level. The agent is placed in one of the rooms and must navigate through the world to find the ladder while avoiding the hazards.
The agent has access to a number of sensors that allow it to perceive its surroundings. It can sense whether there is a breeze, which indicates that there is a pit in one of the adjacent rooms, or whether there is a stench, which indicates that the Wumpus is in an adjacent room. The agent can also feel if it has fallen into a pit or been eaten by the Wumpus.
The goal of the agent is to reach the ladder without getting killed by the Wumpus or falling into a pit. To accomplish this, the agent must use reasoning and planning to explore the world and avoid the hazards.
Example of initial state and goal state in Wumpus world using some predicates:
Initial State:
 Agent is in room (1,1)
 There is no pit or Wumpus in room (1,1)
 There is a breeze in room (1,2)
 There is a stench in room (2,1)
Goal State:
 Agent is in the room with the ladder
 Agent has not fallen into a pit or been eaten by the Wumpus.
OR
(a) Perform the unification of atomic sentences. (i.e. Find the most general
unifier.)
p(b, X, f(g(Z))) and p(Z, f(Y), f(Y)).
test (11), test(y)
For the first set of atomic sentences:
p(b, X, f(g(Z))) and p(Z, f(Y), f(Y))
We can perform unification as follows:
Unify the first arguments: b and Z, giving the binding Z/b.
Unify the second arguments: X and f(Y), giving the binding X/f(Y).
Unify the third arguments: f(g(Z)) and f(Y), giving Y/g(Z). Applying Z/b, this becomes Y/g(b), and hence X/f(g(b)).
Thus, the most general unifier for these atomic sentences is:
{ Z/b, Y/g(b), X/f(g(b)) }
For the second set of atomic sentences:
test(11), test(y)
These atomic sentences can be unified: 11 is a constant and y is a variable, so binding y to 11 succeeds. The most general unifier is { y/11 }.
(b) Describe the axioms of probability theory.
The axioms of probability theory provide the foundation for the mathematical treatment of probability. They are:
Non-negativity: For any event A, the probability of A is greater than or equal to zero. In other words, P(A) ≥ 0.
 Additivity: For any two mutually exclusive events A and B (i.e., events that cannot occur at the same time), the probability of either A or B occurring is equal to the sum of their individual probabilities. In other words, P(A or B) = P(A) + P(B).
 Normalization: The probability of the sample space (i.e., the set of all possible outcomes) is equal to 1. In other words, P(S) = 1.
From these axioms, various other properties of probability can be derived, such as the probability of the complement of an event, the conditional probability of an event given another event, and the chain rule of probability. These properties are used in many applications of probability theory, such as in statistical inference, machine learning, and decision making.
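The axioms can be checked numerically for a simple finite sample space, here a fair six-sided die (an illustrative sketch):

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}

def P(event):
    # Classical probability: favourable outcomes over total outcomes.
    return Fraction(len(event & sample_space), len(sample_space))

A = {2, 4, 6}        # "the roll is even"
B = {1, 3}           # "the roll is odd and less than 5" -- disjoint from A

assert P(A) >= 0                      # non-negativity
assert P(sample_space) == 1           # normalization
assert P(A | B) == P(A) + P(B)        # additivity for mutually exclusive events
print(P(A), P(B), P(A | B))
```

Using exact fractions avoids floating-point rounding, so the additivity check P(A ∪ B) = 1/2 + 1/3 = 5/6 holds exactly.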
(c) Show the alpha-beta cut-off in the min-max algorithm by drawing a suitable game tree.
To demonstrate the alpha-beta cut-off in the min-max algorithm, consider the following game tree, where A is a MAX node, B, C, and D are MIN nodes, and the leaves carry static evaluation values:

        A (MAX)
      /    |    \
     B     C     D
   (MIN) (MIN) (MIN)
    / \   / \   / \
   3   5 2   7 6   4

The tree is searched depth-first, carrying two bounds: alpha, the best value found so far for MAX, and beta, the best value found so far for MIN. Initially alpha = -∞ and beta = +∞.

Evaluate B: its leaves are 3 and 5, so B = min(3, 5) = 3. Back at A, alpha = max(-∞, 3) = 3.
Evaluate C: its first leaf is 2, so beta at C becomes 2. Now beta (2) ≤ alpha (3): MAX already has a move worth 3, and MIN could hold C to at most 2, so C can never be chosen. The remaining leaf 7 is pruned (an alpha cut-off) without being evaluated.
Evaluate D: its leaves are 6 and 4, so D = min(6, 4) = 4. No cut-off occurs here, because each intermediate value stays above alpha = 3.
Back at A: A = max(3, 2, 4) = 4, so MAX’s best move is to D.

Alpha-beta pruning thus skipped one of the six leaves while returning exactly the same value as plain min-max. On larger trees, with good move ordering, the savings grow dramatically, roughly doubling the depth that can be searched in the same time.
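The pruning behaviour can be reproduced with a short minimax routine. The tree below is illustrative (a MAX root over three MIN nodes), with leaf values chosen so that one leaf is cut off:

```python
def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    # Leaves are ints (static evaluations); internal nodes are lists of children.
    if isinstance(node, int):
        evaluated.append(node)         # record which leaves were actually looked at
        return node
    value = float('-inf') if maximizing else float('inf')
    for child in node:
        v = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            value = max(value, v)
            alpha = max(alpha, value)
        else:
            value = min(value, v)
            beta = min(beta, value)
        if beta <= alpha:              # cut-off: remaining siblings are pruned
            break
    return value

evaluated = []
# A (MAX) over three MIN nodes: B = [3, 5], C = [2, 7], D = [6, 4]
tree = [[3, 5], [2, 7], [6, 4]]
value = alphabeta(tree, True)
print(value)        # 4
print(evaluated)    # the leaf 7 under C never appears here
```

Running the sketch shows that the root value is 4 and that the leaf 7 is never evaluated, which is exactly the alpha cut-off at node C.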
OR
(a) Differentiate predicate and fact in Prolog programming.
In Prolog programming, a fact is a simple statement about a relation between objects, represented as a Prolog clause that consists of a predicate followed by a list of arguments enclosed in parentheses. For example, the fact “John is a man” can be represented as:
man(john).
On the other hand, a predicate in Prolog can be defined by rules that establish a relation between objects based on some condition. A rule consists of a head and a body, separated by the ':-' operator (read as "if"). The head is a single goal and the body is a conjunction of goals. For example, the rule "X is the father of Y if X is a man and X is the parent of Y" can be represented as:
father(X, Y) :- man(X), parent(X, Y).
So, the main difference between a predicate and a fact is that a predicate involves a condition that needs to be satisfied to establish the relation between objects, whereas a fact simply states the relation without any conditions.
(b) Explain fail predicate in Prolog with example.
In Prolog, the fail predicate is used to signal failure or to force backtracking. It always fails whenever it is called.
Here’s an example of how fail can be used in Prolog:
likes(john, pizza).
likes(mary, sushi).
likes(tom, pizza).
likes(tom, sushi).
find_food(X) :-
likes(X, Food),
write(X), write(' likes '), write(Food), nl,
fail.
In this example, find_food is a predicate that prints every food that a person likes. The likes predicate defines who likes what.
When find_food is called, Prolog first finds a solution of likes(X, Food). It prints the result using write and then calls fail. This forces Prolog to backtrack and look for another solution of likes(X, Food). The process repeats until all the possibilities have been exhausted.
Here’s an example of what happens when find_food is called:
?- find_food(X).
john likes pizza
mary likes sushi
tom likes pizza
tom likes sushi
false.
As you can see, Prolog prints out all the foods that each person likes, and then returns false to indicate that there are no more solutions. The fail predicate forces Prolog to backtrack and try the remaining possibilities.
(c) Write following Prolog programs:
To copy one list to another list.
To check whether given number is odd or even.
Here are the Prolog programs to copy one list to another list and to check whether a given number is odd or even:
 To copy one list to another list:
% Base case: copying an empty list results in an empty list
copy_list([], []).
% Recursive case: copying a nonempty list
copy_list([X|Xs], [X|Ys]) :-
copy_list(Xs, Ys).
Example usage:
?- copy_list([1, 2, 3], X).
X = [1, 2, 3].
 To check whether a given number is odd or even:
% Base case: 0 is even
even(0).
% Recursive case: subtract 2 until 0 or 1 is reached
even(X) :-
X > 0,
X1 is X - 2,
even(X1).
% A number is odd if it is not even
odd(X) :-
not(even(X)).
Example usage:
?- even(4).
true.
?- even(5).
false.
?- odd(5).
true.
?- odd(6).
false.
OR
(a) What will be the output of following Prolog program if program is called
with test(10,2)? Also explain the reason of your output.
test(X,Y) :- write('Hello'), X >= Y, !.
test(X,Y) :- write('Hi'), X < Y.
The output of the Prolog program when called with test(10,2) will be Hello.
The reason is that the first clause of the test/2 predicate is tried first. Its write('Hello') goal immediately prints "Hello", and then the test X >= Y succeeds because X is 10 and Y is 2. The cut (!) then commits to this clause and prevents backtracking into the second clause. Therefore, the second clause is never tried and no other output is produced.
(b) What is maximum a posteriori (MAP) learning in Bayesian learning?
Explain it.
Maximum a posteriori (MAP) learning is a method of Bayesian learning used to estimate the most likely hypothesis given the observed data and prior knowledge. It is a type of maximum likelihood estimation that incorporates prior information, which can help to regularize the estimation process.
In MAP learning, we seek to find the hypothesis that maximizes the posterior probability:
P(hypothesis | data) = P(data | hypothesis) * P(hypothesis) / P(data)
where P(data | hypothesis) is the likelihood function, P(hypothesis) is the prior probability distribution over hypotheses, and P(data) is the evidence or marginal likelihood.
The MAP estimate of the hypothesis is then given by:
h* = argmax over hypotheses of P(hypothesis | data)
This means that we select the hypothesis that maximizes the posterior probability, which is equivalent to minimizing the negative logarithm of the posterior probability. This is also known as the maximum a posteriori probability principle.
MAP learning can be useful in situations where we have prior knowledge about the domain, such as when the hypothesis space is constrained, or when some hypotheses are more likely than others. It can also be used to avoid overfitting, by regularizing the estimation process towards simpler hypotheses or ones that are more consistent with the prior knowledge.
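MAP estimation can be illustrated with a tiny discrete example: estimating a coin's bias when the prior favours a fair coin. The hypothesis space, prior, and data are all invented for this sketch:

```python
hypotheses = [0.3, 0.5, 0.8]                    # candidate values of P(heads)
prior = {0.3: 0.2, 0.5: 0.6, 0.8: 0.2}          # prior knowledge: probably fair

heads, tails = 7, 3                             # observed coin flips

def posterior_score(h):
    # P(data | h) * P(h); P(data) is the same for every h, so it can be
    # ignored when taking the argmax.
    likelihood = h ** heads * (1 - h) ** tails
    return likelihood * prior[h]

h_map = max(hypotheses, key=posterior_score)
print(h_map)
```

Even though 7 heads in 10 flips favours the 0.8 hypothesis on likelihood alone, the strong prior on fairness pulls the MAP estimate to 0.5, showing how the prior regularizes the estimate.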
(c) Write following Prolog programs:
To find the greatest variable among the three variables.
To count odd and even elements from a list.
Here are the Prolog programs to find the greatest variable among the three variables and to count odd and even elements from a list:
 Program to find the greatest variable among the three variables:
greatest(X,Y,Z,Max) :- X >= Y, X >= Z, Max is X.
greatest(X,Y,Z,Max) :- Y >= X, Y >= Z, Max is Y.
greatest(X,Y,Z,Max) :- Z >= X, Z >= Y, Max is Z.
Example query: greatest(5,2,8,Max) will return Max = 8.
 Program to count odd and even elements from a list:
count([], 0, 0).
count([H|T], EvenCount, OddCount) :-
count(T, TE, TO),
( 0 is H mod 2
-> EvenCount is TE + 1, OddCount is TO
; EvenCount is TE, OddCount is TO + 1
).
Example query: count([1, 2, 3, 4, 5, 6], EvenCount, OddCount) will return EvenCount = 3 and OddCount = 3.