There are two major ways to think about the current and future capabilities of artificial intelligence. The weak AI hypothesis states that a machine running a program is at most capable of simulating real human behavior and consciousness (or understanding, if you prefer). Strong AI, on the other hand, holds that the correctly written program running on a machine actually is a mind -- that is, there is no essential difference between a (yet to be written) piece of software exactly emulating the actions of the brain and the actions of a human being, including their understanding and consciousness. I'm sure many people would like to argue the finer points of these rough definitions, but the general ideas are widely accepted (not as correct truths, but as valid definitions). Some supporters of weak AI prefer to call it cautious AI.
I very strongly believe in the strong AI hypothesis. My cognitive science teacher (Caroline Palmer at Ohio State U.) supports weak AI. Daniel Dennett supports strong AI. John Searle (inventor of the Chinese Room Argument, or CRA) supports weak AI, as does Joseph Levine (OSU professor of philosophy). I will present ideas by each of these people, except Caroline Palmer.