Life 2.0 (Cultural stage): its software is designed (through learning).
Life 3.0 (Technological stage): its hardware is also designed.

The three main differences of opinion are:
- Techno-sceptics believe that building superhuman artificial general intelligence (AGI) is too hard and won't happen for hundreds of years.
- Digital utopians view it as likely to happen this century and welcome it.
- The beneficial AI movement also thinks it is likely this century, but holds that a good outcome is not guaranteed and needs to be worked on.

We need to make sure the meanings of individual words are precise and beware of common misconceptions:
- Life: a process that can retain its complexity and replicate.
- Artificial general intelligence (AGI): the ability to accomplish any cognitive task at least as well as humans.
- Superintelligence: general intelligence far beyond the human level.

Some common misconceptions are:
- Myth: superintelligence by 2100 is inevitable or impossible. Fact: it may happen in decades, in centuries, or never; AI experts disagree, and we simply don't know.
- Mythical worry: AI turning evil. Actual worry: AI turning competent, with goals misaligned with ours.

Today's artificial intelligence is narrow: each system can accomplish only a particular goal or set of goals, while human intelligence is broad. Intelligence, using the definition above, can't be measured by a single IQ, only by a range of abilities. Can a material thing have intelligence and learn? What future should we aim for, and how?