Intelligence

Fall 2007

due Mon Sep 10

- Read chapters 1 and 2 in the text.
- Submit answers to exercises 1.3, 1.11 through 1.13, 2.4
- Install and run some code for the "vacuum-cleaner world" from http://aima.cs.berkeley.edu/code.html , in a language of your choice. (I'll be using lisp; you are encouraged but not required to do the same.) Report on your experience.
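If you want a feel for what the simulator does before wrestling with the AIMA repository code, here is a from-scratch sketch of the two-square vacuum world (the names, scoring, and step count are my own simplifications, not the repository's API):

```python
# A minimal two-square vacuum-cleaner world (illustrative toy, not AIMA's code).

def reflex_vacuum_agent(location, dirty):
    """Simple reflex agent: suck if the current square is dirty, else move."""
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(world, location, steps=8):
    """Simulate the agent; world maps location -> dirty flag."""
    score = 0
    for _ in range(steps):
        action = reflex_vacuum_agent(location, world[location])
        if action == "Suck":
            world[location] = False
            score += 1            # one point per square cleaned
        elif action == "Right":
            location = "B"
        else:
            location = "A"
    return score, world

score, world = run({"A": True, "B": True}, "A")
print(score, world)   # both squares end up clean
```

Running this is a decent warm-up for the "report on your experience" part: try different initial dirt configurations and see how the score changes.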

due Mon Sep 17

- Read chapter 3
- Practice lisp
- Do problem 3.1, defining various terms
- Do problem 3.19, looking at searching for the vacuum world
- See how far you can get in looking at problems 3.15 and 3.16
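For 3.19, the vacuum world's complete state space is tiny (agent location times two dirt flags), so uninformed search is easy to try directly. Here is a sketch of breadth-first search over that space (the state representation and action names are my own choices):

```python
# Breadth-first search over the vacuum-world state space (illustrative sketch).
from collections import deque

def successors(state):
    """State = (location, dirt_in_A, dirt_in_B)."""
    loc, dirt_a, dirt_b = state
    yield "Left",  ("A", dirt_a, dirt_b)
    yield "Right", ("B", dirt_a, dirt_b)
    if loc == "A":
        yield "Suck", ("A", False, dirt_b)
    else:
        yield "Suck", ("B", dirt_a, False)

def bfs(start):
    """Return a shortest action sequence reaching the all-clean goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if not state[1] and not state[2]:      # goal: no dirt anywhere
            return path
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))

print(bfs(("A", True, True)))   # ['Suck', 'Right', 'Suck']
```

Enumerating the states by hand first (there are only eight) makes a good check on what the search visits.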

due Mon Sep 24

- Start reading through chapters 4, 5, and 6.
- Pick a mini-search project, and describe what progress you've made in planning/implementing/running it.
- Coming: questions from the ends of chapters 4, 5, and 6

due Mon Oct 1

- Finish reading chapters 4, 5, and 6.
- Choose at least one of the search strategies described in the text and an appropriate problem for that strategy, and write an implementation in a programming language of your choice. Come to class prepared to share your results and discuss that search strategy with the class.
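Any of the text's strategies is fair game. As one concrete illustration (my own example, not an assigned solution), here is A* with a Manhattan-distance heuristic finding a path on a small grid with an obstacle:

```python
# A* search with a Manhattan-distance heuristic (illustrative example).
import heapq

def astar(grid, start, goal):
    """grid[r][c] == 1 means blocked; returns the length of a shortest path."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]        # (f = g + h, g, node)
    best = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                ng = g + 1
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # 6: the wall forces a detour
```

Manhattan distance never overestimates on a grid with unit-cost moves, so the heuristic is admissible and the answer is optimal; that's the sort of property worth discussing in class.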

due Mon Oct 8

- read chapter 7 (skip 7.6)
- read wikipedia:propositional calculus
- Explain what all this logic formalism has to do with the wumpus world.
- define the following terms; give an example if appropriate
- propositional logic (read ahead or lookup "first-order logic", for comparison)
- inference
- resolution
- de Morgan's laws
- CNF, Horn forms
- circuit-based agent
- inference-based agent
- KB
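As a sanity check on one of these definitions, de Morgan's laws can be verified mechanically by enumerating every truth assignment:

```python
# Truth-table verification of de Morgan's laws (illustration only).
from itertools import product

for p, q in product([True, False], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))
    assert (not (p or q)) == ((not p) and (not q))
print("de Morgan's laws hold for all truth assignments")
```

This brute-force style only works for propositional logic, where there are finitely many assignments; that contrast is worth keeping in mind when you read ahead about first-order logic.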

- Do the following exercises:
- 7.5, 7.8, 7.11

- The PL-Wumpus-Agent algorithm is given on pg 226. Explain it. In particular,
- What information (and how much) does it start with?
- What are the (input, output, side-effects) for the routines TELL(), ASK()?
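To make the TELL/ASK interface concrete, here is a toy knowledge base of my own devising (entailment by brute-force model checking; the wumpus sentences are just an illustration, not the book's full axiomatization):

```python
# A miniature KB with TELL and ASK (illustrative sketch, not the text's code).
# Sentences are Python boolean expressions over named symbols.
from itertools import product

class KB:
    def __init__(self, symbols):
        self.symbols = list(symbols)
        self.sentences = []           # TELL's only side effect: this list grows

    def tell(self, sentence):
        """Input: a sentence (string). Output: none. Side effect: KB grows."""
        self.sentences.append(sentence)

    def ask(self, query):
        """Input: a query. Output: True iff the KB entails it. No side effects.
        Checks that the query holds in every model satisfying the KB."""
        for values in product([True, False], repeat=len(self.symbols)):
            model = dict(zip(self.symbols, values))
            if all(eval(s, {}, model) for s in self.sentences):
                if not eval(query, {}, model):
                    return False
        return True

kb = KB(["B11", "P12", "P21"])
kb.tell("B11 == (P12 or P21)")   # breeze in [1,1] iff a pit is adjacent
kb.tell("not B11")               # percept: no breeze in [1,1]
print(kb.ask("not P12"))         # True: the KB rules out a pit in [1,2]
```

Note the asymmetry the exercise is after: TELL changes the KB's state, while ASK only queries it.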

due Mon Oct 15

- read chapter 8, on first order logic
- do the following exercises from the text:
- 8.6, turning a bunch of sentences into FOL
- 8.7, "All Germans speak the same languages."
- 8.10, Wumpus world axioms

- Play around with a theorem engine like SNARK, and report on your experience.
- Pick one of the Lewis Carroll puzzles on http://www.math.hawaii.edu/~hile/math100/logice.htm. Represent it in either propositional logic or first order logic (and explain why you chose that one), and find the conclusion.
- Do your Lewis Carroll puzzle again using one of the formal algorithms (e.g. resolution) described in the text.
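If you want to see what a formal algorithm is doing under the hood, here is a bare-bones propositional resolution refutation (my own toy encoding in the Lewis Carroll spirit, not one of the puzzles from the page above):

```python
# Bare-bones resolution refutation (illustrative sketch).
# Literals are strings, "-" marks negation; clauses are frozensets of literals.
# To prove KB |= q, add the negation of q and derive the empty clause.

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve(c1, c2):
    """Yield every resolvent of two clauses."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def entails(clauses, query):
    clauses = set(clauses) | {frozenset([negate(query)])}
    while True:
        new = set()
        for a in list(clauses):
            for b in list(clauses):
                if a != b:
                    for r in resolve(a, b):
                        if not r:             # empty clause: contradiction found
                            return True
                        new.add(r)
        if new <= clauses:                    # nothing new: no proof exists
            return False
        clauses |= new

# Toy syllogism: B => I, I => not M, and B hold; therefore not M.
kb = [frozenset(["-B", "I"]), frozenset(["-I", "-M"]), frozenset(["B"])]
print(entails(kb, "-M"))   # True
```

The hard part of the Carroll puzzles is the translation into clauses; once they're in CNF, this loop is the whole algorithm.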

due Mon Oct 29

- Read chapters 9 and 10 in the text.
- Do either exercises (9.9 and 9.10), or 9.11
- Do 9.18 and 9.19
- Do 10.14

due Mon Nov 5

Describe your experience and document your results in a way that others in the class can profit from, including some general remarks about the problem (how easy, how hard, methods of approach) and the system (strengths, weaknesses, syntax). If you have a specific system or problem in mind, discuss it with me; otherwise, here are some specific suggestions:

- Problems
- http://www.cs.miami.edu/~tptp/ - Thousands of Problems for Theorem Provers
- one from the text, either an example or one of the problems

- Logic systems
- http://en.wikipedia.org/wiki/Automated_theorem_proving gives a list
- one of the systems described on the TPTP site
- clisp + snark as described in my notes
- prolog (if so, describe what it's doing)
- CWM, Turtle or other semantic web stuff
- opencyc

- For full credit, match the type and scope of the problem to the power of the system. In other words, if you're writing a logic engine from scratch, it makes good sense to work on a very simple logic problem. But if you code up a three-line tautology in prolog and conclude that you can prove "A" from the premises "B=>A" and "B", then I will be less than impressed. If in doubt, ask.

due Mon Nov 12

- continue reading chapters 13, 18, 20
- i.e. Bayesian probability, learning, neural networks, etc.

- Do 13.12, a conditional probability question similar to the cancer one I did in class.
- Do either 13.15 (green and blue taxis) or 13.16 (condemned prisoners).
- Do 13.18 on text categorization. Please use the sample I showed in class on Nov 9: if not code that builds on what I did, then at least a detailed explanation using the sorts of numbers that I generated.
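For the conditional-probability questions, the whole computation is one application of Bayes' rule. Here is a sketch with made-up screening-test numbers (illustrative values, not the ones from class):

```python
# Bayes' rule on a hypothetical screening test (numbers are made up):
# prior P(D) = 0.01, sensitivity P(+|D) = 0.9, false-positive rate P(+|~D) = 0.05.

def posterior(prior, sensitivity, false_pos):
    """P(D | +) via Bayes' rule: P(+|D)P(D) / P(+)."""
    p_pos = sensitivity * prior + false_pos * (1 - prior)
    return sensitivity * prior / p_pos

p = posterior(0.01, 0.9, 0.05)
print(round(p, 3))   # 0.154: a positive test is still mostly a false alarm
```

The surprisingly small posterior is the standard base-rate lesson, and exactly the kind of number to show in your write-up.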

due Mon Nov 19

- finish reading chapters 13, 18, and 20
- do the following questions from the text:
- 18.3
- 20.11
- 20.13

- propose a final project
- look at the following papers
- optional coding assignment
- use the FANN C library on cs (or lush, though it isn't running on cs) to implement horizontal vs vertical "edge" detection for a 6x6 grid.
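If you'd rather see the idea without the FANN setup first, here is a from-scratch perceptron sketch of a similar task (my own toy version, not a FANN example): grids whose top k rows are filled count as "horizontal", grids whose left k columns are filled count as "vertical".

```python
# A from-scratch perceptron separating horizontal from vertical "edges"
# on a 6x6 grid (illustrative toy; labels: +1 horizontal, -1 vertical).

def grid(k, horizontal):
    """Flattened 6x6 grid: top k rows (or left k columns) set to 1."""
    return [1.0 if (r < k if horizontal else c < k) else 0.0
            for r in range(6) for c in range(6)]

data = [(grid(k, True), 1) for k in range(1, 6)] + \
       [(grid(k, False), -1) for k in range(1, 6)]

w = [0.0] * 36
b = 0.0
for _ in range(50):                       # perceptron learning rule
    for x, y in data:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
        if pred != y:                     # mistake-driven update
            w = [wi + y * xi for wi, xi in zip(w, x)]
            b += y

correct = sum((1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1) == y
              for x, y in data)
print(correct, "of", len(data), "training grids classified correctly")
```

The two classes here are linearly separable (e.g. the top-right corner minus the bottom-left corner separates them), so the perceptron is guaranteed to converge; with a hidden layer, FANN can of course handle harder, non-separable variants.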

due Mon Dec 3

- This is your last turned-in assignment, covering the last several weeks of the semester.
- Work on your final projects.
- Read through chap 15, especially the first part and last part. Look for the key idea, not the details of the algorithms.
- Also read pg 727 - 731, which briefly discusses learning in these hidden Markov models.
- Continue into chap 22 and 23, on language, which we'll discuss the week of Dec 3
- Browse the philosophy and the future chapters 26 and 27, which we may also discuss.
- Explore these same topics through the following articles and the references in them:
- Explore either question (A) or (B) from the Dow breakfast drink problem (from www.cs.northwestern.edu/~pardo/courses/eecs349 ); see dow problem.jpg.
- Do 15.1 and 15.2
- Answer 22.4 and 22.9
- Check out 22.1, 22.7, and 22.14
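The key idea of chapter 15 is recursive filtering: predict with the transition model, then update on the evidence. Here is a sketch of the forward algorithm on a two-state umbrella-style HMM (the parameters below are illustrative choices of mine):

```python
# The forward (filtering) algorithm for a toy two-state HMM:
# hidden state rain / no-rain, observed umbrella / no-umbrella.

def forward(prior, transition, sensor, observations):
    """Return P(state | evidence so far) after each observation."""
    belief = prior[:]
    out = []
    for obs in observations:
        # predict: push the belief through the transition model
        predicted = [sum(transition[s][t] * belief[s] for s in range(2))
                     for t in range(2)]
        # update: weight by the sensor model, then normalize
        unnorm = [sensor[t][obs] * predicted[t] for t in range(2)]
        z = sum(unnorm)
        belief = [u / z for u in unnorm]
        out.append(belief[:])
    return out

prior = [0.5, 0.5]                     # P(rain), P(no rain) at t = 0
transition = [[0.7, 0.3], [0.3, 0.7]]  # transition[s][t] = P(next = t | now = s)
sensor = [[0.9, 0.1], [0.2, 0.8]]      # sensor[state][obs]; obs 0 = umbrella seen
beliefs = forward(prior, transition, sensor, [0, 0])
print(round(beliefs[-1][0], 3))   # 0.883: two umbrella days make rain likely
```

That predict/update loop, not the algebra around it, is the "key idea" to take from the chapter.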

due Mon Dec 10

- Come to class prepared to show / discuss your final project. (The submitted version isn't due until the end of the week.)

due Fri Dec 14

- A closer look at a topic of your choice that was examined this term. Details coming; talk to me if you have specific ideas.


http://cs.marlboro.edu/courses/fall2007/ai/special/assignments

last modified Sunday February 1 2009 3:02 pm EST
