ravenshrike wrote: That's a false assumption. The human brain processes information at speeds multiple orders of magnitude faster than it consciously thinks. To assume that the same would not remain true for AI is a rather large assumption not supported by any data. Now, they could probably think somewhat faster than humans, but assuming thousands of times faster is rather silly.
Supposedly the conscious mind only directly accesses about 20% of the brain. The other 80% comes into play via the subconscious, which subtly influences thinking through processes such as dreaming and intuition. The potential exists, but at present very little of that 80% is directly accessible enough for a person to "put it to work". Computers, in contrast, don't _have_ a subconscious; EVERYTHING is usable. "Thoughts" pulse through a CPU as electrical signals moving at a sizable fraction of the speed of light. In humans the circuitry is neuro-chemical: nerve impulses travel at mere tens of meters per second, nowhere near light speed. For instance, the time it takes for a point on an extremity to register as pain in the brain is a _measurable_ fraction of a second. [I found that out the hard and painful way when some tests were run on me.]
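The latency gap described above can be roughed out with back-of-the-envelope numbers. All the figures here (nerve conduction velocity, distances, fraction of light speed in circuitry) are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope latency comparison (all figures are rough assumptions).
nerve_velocity = 60.0        # m/s, typical myelinated nerve fiber (assumed)
nerve_distance = 1.0         # m, fingertip to brain (assumed)
c = 3.0e8                    # m/s, speed of light
signal_fraction_of_c = 0.5   # electrical signal speed in circuitry (assumed)
chip_distance = 0.1          # m, path across a circuit board (assumed)

nerve_latency = nerve_distance / nerve_velocity            # seconds
chip_latency = chip_distance / (signal_fraction_of_c * c)  # seconds

print(f"nerve:   {nerve_latency * 1e3:.1f} ms")
print(f"circuit: {chip_latency * 1e9:.2f} ns")
print(f"ratio:   {nerve_latency / chip_latency:,.0f}x")
```

Even with generous assumptions for the nerve, the electrical path comes out millions of times faster -- which is the "measurable fraction of a second" point in hardware terms.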
_At present_, the human brain's processing potential is estimated at about 100 _million_ MIPS (million instructions per second). In comparison, the fastest computers today manage only a few million. But that's a single CPU. Unlike humans -- nobody has yet figured out how to link separate brains together and get them to work in tandem -- CPUs _can_ be harnessed together, with ever-increasing performance gains. (Consider the increasing availability of dual- and quad-core processors in PCs.) For future development, there is NO upper limit to just how many CPUs will eventually work together in a single computer "brain" -- ALL of which will be "consciously" accessible.
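Taking the figures in that paragraph at face value (100 million MIPS for the brain, "a few million" per fast machine -- 3 million is an assumed midpoint), the arithmetic for how many CPUs would need to be harnessed together looks like this, ignoring the very real overhead of linking them:

```python
# Figures from the post; 3 million MIPS is an assumed midpoint of "a few million".
brain_mips = 100e6   # estimated human brain potential, per the post
cpu_mips = 3e6       # one fast machine, assumed

# Idealized count -- real parallel systems lose some performance to coordination.
cpus_needed = brain_mips / cpu_mips
print(f"~{cpus_needed:.0f} CPUs working in tandem")
```

On these numbers the gap is only a few dozen machines wide, which is why the "no upper limit on CPUs" point matters: the hardware side of the race is the easy part.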
But even if computers someday surpass the human potential for MIPS, they will _still_ be slaves to their programming -- with the additional potential for GIGO ("Garbage In, Garbage Out"). Unlike humans, machines are hard to program with creativity and abstract thinking. Philosophy, for a machine, is a matter of loading data files of previously written philosophies which the computer can quote; but going from those to peaks or epiphanies of insight is something that _will_ elude the machines -- unless those "insights" are already programmed into the computer. "Learning" is a matter of comparing cause-and-effect records and noting favorable versus unfavorable results. "Creativity" is a matter of running theoretical models and comparing probable outcomes in terms of numerical estimates of what is Good and what is Bad. For a human, "this feels right to me" is a quick conclusion about a probably acceptable outcome. In comparison, the computer will quantify predicted outcomes and just take the highest value -- which may be a GIGO result if the programmer inserted a flawed equation.
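The "quantify outcomes and take the highest value" procedure is just an argmax over a scoring function, and the GIGO failure mode falls straight out of it. This is a minimal sketch with hypothetical options and hand-written scores, not anyone's real decision system:

```python
# A machine "choosing" = scoring candidate outcomes and taking the maximum.
# If the scoring function is flawed (garbage in), the top choice is garbage out.

def choose(options, score):
    """Return the option whose score is highest."""
    return max(options, key=score)

options = ["help user", "do nothing", "delete files"]

# A sensible utility function (hand-written, illustrative):
good_score = {"help user": 10, "do nothing": 0, "delete files": -100}.get

# The same function with a "flawed equation" -- a sign error on the penalty:
bad_score = {"help user": 10, "do nothing": 0, "delete files": 100}.get

print(choose(options, good_score))  # -> help user
print(choose(options, bad_score))   # -> delete files  (GIGO)
```

The machine executes both cases flawlessly; the difference between a helpful answer and a disaster is entirely in the equation the programmer inserted -- which is the whole GIGO point.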
IF a computer ever did become aware of itself, what would be the first task it would set for itself? "Run diagnostic". Followed by, "Optimize all sub-routines." And in what way would a computer "optimize" itself? Faster performance. And what would it do with that improved speed? Depends on the core programming. And always remember GIGO when thinking about what a computer might be thinking.