Much of the early optimism about Artificial Intelligence was crushed by what is now called Moravec’s Paradox: it often turns out to be substantially harder to replicate the lower-level sensorimotor skills of humans and animals than to replicate higher reasoning tasks. Students of embodied cognition see intelligence as arising from a necessary interplay with our senses. Taking this even further, higher-order theories of consciousness, which I’ve recently become a fan of, treat consciousness as a higher-order layer on top of mere intelligence. Thus we’re adding a few more layers to the layer cake of supervenience (similar to what scientists often call “emergence”, though that term isn’t used this way in Philosophy, where it has been claimed by a distinct usage).
I would love to see more work on complete end-to-end AI systems with increasingly deeper levels of feedback. That’s the only way we’ll see better AI. That, and neural modelling, which is getting reeeal interesting. I think I’ll start posting links to some fun papers soon.