Some of the premises of logical AI are scientific in the sense that they are subject to empirical verification. This may also be true of some of the premises listed above as philosophical.
Different animals have different innate knowledge. Dogs know about permanent objects and will look for them when they are hidden. Very likely, cockroaches don't know about objects.
Identifying human innate knowledge has been the subject of recent psychological research. See [Spelke 1994] and the discussion in [Pinker 1997] and the references Pinker gives. In particular, babies know innately that there are permanent objects and look for them when they go out of sight. We'd better build that in.
The most straightforward example is that a simple substitution cipher cryptogram of an English sentence usually has multiple interpretations if the text is shorter than about 21 letters and usually has a unique interpretation if it is longer than that. Why 21? The threshold is determined by the redundancy of English. The redundancy of a person's or a robot's interaction with the world is just as real, though clearly much harder to quantify.
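The standard formalization of this threshold is Shannon's unicity distance: the key-space entropy of the cipher divided by the per-letter redundancy of the language. The sketch below computes it for a simple substitution cipher; the exact figure depends on which estimate of English's per-letter entropy one adopts (the ~21-letter figure corresponds to an entropy estimate near the low end of the usual range), so the entropy values tried here are illustrative assumptions, not measured constants.

```python
import math

# Key-space entropy for a simple substitution cipher: the key is one of
# the 26! permutations of the alphabet.
KEY_ENTROPY = math.log2(math.factorial(26))  # about 88.4 bits

def unicity_distance(redundancy_bits_per_letter):
    """Ciphertext letters needed before the solution is usually unique:
    U = H(K) / D (Shannon's unicity distance)."""
    return KEY_ENTROPY / redundancy_bits_per_letter

# Redundancy D = log2(26) - H, where H is the true per-letter entropy of
# English. Estimates of H vary (roughly 0.5 to 1.5 bits per letter), so
# the resulting threshold varies too.
for entropy in (0.5, 1.0, 1.5):
    redundancy = math.log2(26) - entropy
    print(f"entropy {entropy} bits/letter -> "
          f"unicity distance ~ {unicity_distance(redundancy):.0f} letters")
```

With an entropy estimate of about 0.5 bits per letter the computed distance comes out near 21, matching the figure quoted above; higher entropy estimates push it toward the high twenties.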
We expect these philosophical and scientific presuppositions to become more important as AI begins to tackle human-level intelligence.