Midterm Key

PSU CS 441/541 Fall 2000

  1. Early in the history of AI, Alan Turing proposed his famous Test of the intelligence of a computer. Which of the following are true of the Turing Test? (choose all that apply)
    1. Most scientists agree that any intelligent machine must pass it.

      Almost no one believes that language ability is a necessary part of intelligence.

    2. Most scientists agree that any machine that passes it is intelligent.

      Almost everyone agrees that language ability is a sufficient demonstration of intelligence.

    3. Only a robot that looks like a human has a chance of passing it.
    4. A machine may fail it by being `too intelligent'.

      By answering a hard arithmetic question too quickly, for example.

  2. Recall that a problem is in the complexity class NP if a guess at the solution of an instance can be checked in polynomial time in the size of the instance. AI is concerned with NP because (choose one)
    1. Problems in P are not in NP.

      Not true: P is contained in NP. A guess at a solution of an instance of a polynomial-time problem can certainly be checked in polynomial time.

    2. If we can show that P is not equal to NP, we can automatically build intelligent machines.

      Not by any means I'm aware of. There are still lots of problems that would be hard for an NP oracle, notably planning.

    3. One attribute of intelligence is the ability to deal with small instances of hard problems via guesswork.

      Yes. I think the wording is confusing, though: substitute `heuristic search' for `guesswork', and maybe it will be clearer. (Note that this is really all that heuristic search is: a clever way of guessing the answer!) A sketch of polynomial-time guess-checking appears after this question.

    4. Problems in NP are the hardest problems we know.

      No. There seems to be some confusion about this. There are many problems where a correct answer cannot even be checked in polynomial time. Consider the Halting Problem, for example: this problem is undecidable, meaning that no amount of guesswork is guaranteed to tell you the answer to an arbitrary instance.

    5. None of the above.
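
    To make the guess-and-check idea concrete, here is a minimal sketch in Python of checking a guessed truth assignment against a CNF formula. The clause-list encoding is mine, purely for illustration; the point is that the check runs in time linear in the size of the formula, even though finding a good guess may be hard.

      # A formula is a list of clauses; a clause is a list of literals;
      # a literal is a (name, polarity) pair.  The encoding is illustrative.
      def check_guess(cnf, assignment):
          """True iff the assignment satisfies every clause.
          Runs in time linear in the size of the formula."""
          return all(any(assignment[name] == polarity
                         for (name, polarity) in clause)
                     for clause in cnf)

      # (A or B) and (not A or C)
      cnf = [[('A', True), ('B', True)], [('A', False), ('C', True)]]
      print(check_guess(cnf, {'A': True, 'B': False, 'C': True}))   # True
      print(check_guess(cnf, {'A': True, 'B': False, 'C': False}))  # False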

  3. List three knowledge representations, other than First-Order Logic, which might be useful in AI.

    E.g., Propositional Logic, Computer Programs, Neural Nets, Natural-Language Texts, Databases, Rule Bases

  4. Of (1) Propositional, (2) Predicate, and (3) First-Order Logic, what is the simplest logic to which each of these formulae belongs? (As in the text, quantified variables and predicates are lowercase; atoms, objects, and functions are capitalized.)
    1. \forall x ~.~ p(x) \lor q(x)

      (3)

    2. \forall x ~.~ \exists y ~.~ p(x) \land q(x, y)

      (3)

    3. p(A) \lor q(B)

      (2)

    4. X \land (Y \lor p(A))

      (2)

    5. \lnot (X \land (\lnot Y \lor Z))

      (1)

  5. Convert this formula
    X \lor \lnot (Y \lor Z)
    to CNF. Show your work.

    X \lor (\lnot Y \land \lnot Z)
    (X \lor \lnot Y) \land (X \lor \lnot Z)
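
    The same two steps can be written as a short program. Here is a minimal sketch in Python, assuming a tuple-based formula representation of my own devising (not from the course): nnf pushes negations inward with De Morgan's Laws, and cnf distributes \lor over \land.

      # Formulas are nested tuples:
      # ('var', name), ('not', f), ('and', f, g), ('or', f, g).

      def nnf(f):
          """Push negations inward (negation normal form) via De Morgan."""
          op = f[0]
          if op == 'not':
              g = f[1]
              if g[0] == 'var':
                  return f
              if g[0] == 'not':
                  return nnf(g[1])
              inner = 'or' if g[0] == 'and' else 'and'
              return (inner, nnf(('not', g[1])), nnf(('not', g[2])))
          if op in ('and', 'or'):
              return (op, nnf(f[1]), nnf(f[2]))
          return f

      def cnf(f):
          """Distribute 'or' over 'and'; assumes f is already in NNF."""
          if f[0] == 'and':
              return ('and', cnf(f[1]), cnf(f[2]))
          if f[0] == 'or':
              a, b = cnf(f[1]), cnf(f[2])
              if a[0] == 'and':
                  return ('and', cnf(('or', a[1], b)), cnf(('or', a[2], b)))
              if b[0] == 'and':
                  return ('and', cnf(('or', a, b[1])), cnf(('or', a, b[2])))
              return ('or', a, b)
          return f

      # X or not (Y or Z)
      f = ('or', ('var', 'X'), ('not', ('or', ('var', 'Y'), ('var', 'Z'))))
      print(cnf(nnf(f)))
      # ('and', ('or', ('var', 'X'), ('not', ('var', 'Y'))),
      #         ('or', ('var', 'X'), ('not', ('var', 'Z'))))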

  6. Which of the following entailments hold? (choose all that apply)
    1. ~~~ X \lor Y \models X
    2. ~~~ X \land Y \models X
    3. ~~~ X \models X \lor Y
    4. ~~~ X \models X \land Y

    There still seems to be a lot of confusion here. An entailment holds if every model of the left-hand side also makes the right-hand side true. By that definition (2) and (3) hold: every model of X \land Y makes X true, and every model of X makes X \lor Y true. (1) fails in the model X = false, Y = true, and (4) fails in the model X = true, Y = false.
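
    That check can be done mechanically by enumerating all models. A minimal sketch in Python (the encoding of formulas as boolean functions is mine, for illustration):

      from itertools import product

      def entails(lhs, rhs, atoms):
          """lhs |= rhs iff every assignment making lhs true
          also makes rhs true."""
          for values in product([False, True], repeat=len(atoms)):
              m = dict(zip(atoms, values))
              if lhs(m) and not rhs(m):
                  return False
          return True

      atoms = ['X', 'Y']
      print(entails(lambda m: m['X'] or m['Y'],
                    lambda m: m['X'], atoms))            # False (1)
      print(entails(lambda m: m['X'] and m['Y'],
                    lambda m: m['X'], atoms))            # True  (2)
      print(entails(lambda m: m['X'],
                    lambda m: m['X'] or m['Y'], atoms))  # True  (3)
      print(entails(lambda m: m['X'],
                    lambda m: m['X'] and m['Y'], atoms)) # False (4)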

  7. Express the following as a statement in formal logic: `You can fool every person at some time, and some person at any time, but you can never fool Mom.'

    (\forall p ~.~ \exists t ~.~ can-fool(p, t)) \land (\forall t ~.~ \exists p ~.~ can-fool(p, t)) \land \lnot(\exists t ~.~ can-fool(Mom, t))

    It was noted that this is not a satisfiable formula: one student suggested

    (\forall p ~.~ (\exists t ~.~ can-fool(p, t)) \lor mom(p)) \land (\forall t ~.~ \exists p ~.~ can-fool(p, t)) \land (\forall p ~.~ mom(p) \rightarrow \lnot(\exists t ~.~ can-fool(p, t)))
    Another student suggested the sentence implies Mom is not a person: this would allow a logical model, but is not IMHO a natural interpretation of the English.
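
    Both observations can be verified by brute force over a tiny finite domain. A sketch in Python follows; the encoding is mine, Mom is taken to be a person, and the finite check only illustrates the clash (which, in the first formula, holds over any domain containing Mom).

      from itertools import product

      persons = ['Mom', 'Pat']
      times = ['T1', 'T2']
      pairs = [(p, t) for p in persons for t in times]

      def satisfiable(formula):
          """Enumerate every possible can-fool relation."""
          for bits in product([False, True], repeat=len(pairs)):
              can_fool = {pair for pair, b in zip(pairs, bits) if b}
              if formula(can_fool):
                  return True
          return False

      def original(cf):
          return (all(any((p, t) in cf for t in times) for p in persons)
                  and all(any((p, t) in cf for p in persons) for t in times)
                  and not any(('Mom', t) in cf for t in times))

      def repaired(cf):
          # The student's fix: Mom is exempted from the first conjunct.
          return (all(any((p, t) in cf for t in times) or p == 'Mom'
                      for p in persons)
                  and all(any((p, t) in cf for p in persons) for t in times)
                  and not any(('Mom', t) in cf for t in times))

      print(satisfiable(original))  # False: conjuncts one and three clash
      print(satisfiable(repaired))  # True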

  8. Consider the following search tree.
              1
            /   \
           2     3
          / \   / \
         4   5 6   7
    Give the sequence of node labels visited (from above, i.e. preorder) by each search algorithm. For example, Iterative Broadening would visit
    1,2,4,1,2,4,5,3,6,7
    1. Breadth-First Search

      1,2,3,4,5,6,7
      Breadth-First Search visits all the nodes at a given depth before any deeper node.

    2. Depth-First Search

      1,2,4,5,3,6,7
      Depth-First Search explores a node's leftmost subtree completely before the subtrees to its right.

    3. Iterative Deepening

      1,1,2,3,1,2,4,5,3,6,7
      Iterative Deepening is DFS with an iterated depth cutoff; a sketch of all three traversals follows.
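
    For reference, here is a minimal sketch of all three traversals in Python, using an adjacency-list encoding of the tree above (the encoding is mine):

      tree = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [], 5: [], 6: [], 7: []}

      def bfs(root):
          order, frontier = [], [root]
          while frontier:
              node = frontier.pop(0)   # a deque would be more efficient
              order.append(node)
              frontier.extend(tree[node])
          return order

      def dfs(root, limit=None):
          """Preorder DFS; limit bounds the depth, for iterative deepening."""
          order = []
          def visit(node, depth):
              order.append(node)
              if limit is None or depth < limit:
                  for child in tree[node]:
                      visit(child, depth + 1)
          visit(root, 0)
          return order

      def iterative_deepening(root, max_depth):
          order = []
          for limit in range(max_depth + 1):
              order.extend(dfs(root, limit))
          return order

      print(bfs(1))                     # [1, 2, 3, 4, 5, 6, 7]
      print(dfs(1))                     # [1, 2, 4, 5, 3, 6, 7]
      print(iterative_deepening(1, 2))  # [1, 1, 2, 3, 1, 2, 4, 5, 3, 6, 7]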

  9. Consider the following satisfiability algorithm for Propositional Logic: Given a propositional formula
    1. Move the negations innermost using De Morgan's Laws (can be done in linear time in the size of the formula).
    2. Find the leftmost term of the corresponding DNF formula by distributing only across leftmost terms (can be done in linear time in the size of the formula).
    For example, given
    (A \lor B) \land \lnot (C \land \lnot D)
    we would first get
    (A \lor B) \land (\lnot C \lor D)
    and then
    (A \land \lnot C) \lor \ldots
    This leftmost term of the DNF formula gives a model (A = true, C = false) of the original formula.

    Explain why the above is not a general linear-time algorithm for propositional satisfiability.

    Because in general the leftmost term will contain both an atom and its negation, and so will describe no model even when the formula is satisfiable. Consider for example

    ((A \lor B) \land (\lnot A \lor C))
    whose leftmost DNF term is A \land \lnot A, although the formula itself is satisfiable (e.g., by A = false, B = true). Attempting to generate only the satisfiable DNF terms is in general as hard as any other approach to satisfiability; a concrete sketch of the failure appears at the end of this answer.

    The question seemed to be the source of much confusion, and will be counted only as extra credit. Only one student correctly identified the above problem. Many students seemed to think that the suggested algorithm generated all the terms of the DNF formula automatically: I think the wording is clear, though.
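
    To see the failure concretely, here is a sketch in Python that extracts the leftmost DNF term of a formula already in negation normal form and tests it for a clash (the tuple encoding is mine, for illustration):

      # NNF formulas as nested tuples:
      # ('var', name), ('not', ('var', name)), ('and', f, g), ('or', f, g).

      def leftmost_term(f):
          """Literals of the leftmost DNF term: take the left disjunct
          of each 'or', and both conjuncts of each 'and'."""
          if f[0] == 'or':
              return leftmost_term(f[1])
          if f[0] == 'and':
              return leftmost_term(f[1]) + leftmost_term(f[2])
          return [f]  # a literal

      def consistent(term):
          """A term describes a model unless it contains an atom
          together with that atom's negation."""
          pos = {f[1] for f in term if f[0] == 'var'}
          neg = {f[1][1] for f in term if f[0] == 'not'}
          return not (pos & neg)

      # (A or B) and (not A or C): satisfiable, yet the leftmost term clashes.
      f = ('and', ('or', ('var', 'A'), ('var', 'B')),
                  ('or', ('not', ('var', 'A')), ('var', 'C')))
      term = leftmost_term(f)
      print(term)              # [('var', 'A'), ('not', ('var', 'A'))]
      print(consistent(term))  # False: the algorithm must backtrack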