News Release

The luxury of causality: Parallel Intelligence, a proposed move toward the intelligent future

Steps toward Parallel Intelligence

Peer-Reviewed Publication

Chinese Association of Automation

Artificial intelligence (AI) is learning - from the real world. Nine months ago, a computer beat one of the world's best players at one of the world's oldest games, Go. That was the start of a new era, the era of new IT: Intelligent Technology, according to Fei-Yue Wang, a professor at the Chinese Academy of Sciences.

"This victory stunned many in the AI field and beyond," wrote Wang in an editorial published in IEEE/CAA Journal of Automatica Sinica (JAS), based on a speech he gave at the 30th anniversary of the Institute of Artificial Intelligence and Robotics at the Xian Jiaotong University in Xian, China. "It marked the beginning of a new era in AI... parallel intelligence."

Defined as the interaction between actual reality and virtual reality, parallel intelligence flips traditional AI. Rather than big, universal laws directing small amounts of data, small, specific laws are distilled from huge amounts of data - a jump from Newton to Merton, as Professor Wang puts it. AlphaGo, the program that defeated Go player Lee Sedol, played more than 30 million games against itself - more games than a single person could play in a hundred-year lifetime. And the computer learned from every game.

"[Sedol] was not defeated by a computer program, but by all the humans standing behind the program, combined with the significant cyber-physical information inside it," Wang wrote. "This also verifies the belief of many AI experts that intelligence must emerge from the process of computing and interacting."

Input X, output Y is not as simple as it once was. There is more than physical space and cyber space; machines must also make room for social space. According to Wang, AI is lingering in a phase of hybrid intelligence in which humans, information, and machines are equally integral to the process of progress. The problem lies in learning how to model AI on shifting terms of possibility: X does not always cause Y in such complicated systems, where uncertainty, diversity, and complexity typically prevail. To move forward, a new framework is needed to model the next step, parallel intelligence.

Wang proposes the ACP approach to generate big data from small data, then reduce big data to specific laws: software models (Artificial systems) learn from millions of scenarios (Computational experiments) to make the best decisions while interacting (in Parallel) with real-world physical systems. AlphaGo had learned, from its 30 million games, how to make the best decisions when it finally faced Sedol in the physical world. It paid off.
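
A minimal sketch of that loop, assuming a toy decision problem: the Python below is illustrative only, not code from Wang's editorial, and every name in it (ArtificialSystem, computational_experiments, parallel_execution, physical_system, the candidate decisions and payoff function) is a hypothetical stand-in. A virtual model (A) is evaluated through many simulated rollouts (C), the chosen decision is applied to a stand-in for the real-world system, and the real outcome is fed back to refine the model (P); in effect it reduces to a simple trial-and-error loop.

import random

# Hypothetical toy setup: a handful of candidate "decisions" with unknown real payoffs.
CANDIDATES = [round(i / 10, 1) for i in range(11)]

def physical_system(action):
    """Stand-in for the real-world process; its true payoff peaks at action = 0.6."""
    return -(action - 0.6) ** 2 + random.gauss(0, 0.02)

class ArtificialSystem:
    """A -- a virtual model holding an estimated payoff and visit count per candidate."""
    def __init__(self):
        self.estimate = {a: 0.0 for a in CANDIDATES}
        self.count = {a: 0 for a in CANDIDATES}

    def simulate(self, action):
        """One noisy virtual rollout based on the current estimate."""
        return self.estimate[action] + random.gauss(0, 0.05)

    def update(self, action, observed):
        """Fold a real observation back into the virtual model (running average)."""
        self.count[action] += 1
        self.estimate[action] += (observed - self.estimate[action]) / self.count[action]

def computational_experiments(model, trials=50):
    """C -- evaluate every candidate over many virtual rollouts and keep the best."""
    def mean_payoff(a):
        return sum(model.simulate(a) for _ in range(trials)) / trials
    return max(CANDIDATES, key=mean_payoff)

def parallel_execution(model, steps=200, explore=0.2):
    """P -- run the virtual and physical systems side by side, feeding real data back."""
    for _ in range(steps):
        if random.random() < explore:
            action = random.choice(CANDIDATES)          # occasionally explore
        else:
            action = computational_experiments(model)   # act on the current virtual best
        model.update(action, physical_system(action))   # real feedback refines A
    return computational_experiments(model)

if __name__ == "__main__":
    best = parallel_execution(ArtificialSystem())
    print("decision recommended after parallel refinement:", best)

The occasional random choice keeps every candidate's estimate grounded in real observations, which is the sense in which the virtual and physical systems run in parallel rather than the virtual model running alone.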

"AI is not 'artificial' anymore," Wang wrote. "Ultimately, it becomes the 'real' intelligence that can be embodied into machines, artifacts, and our societies."

###

Full text of the paper is available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7589480

http://html.rhhz.net/ieee-jas/html/20160401.htm

IEEE/CAA Journal of Automatica Sinica (JAS) is a joint publication of the Institute of Electrical and Electronics Engineers, Inc. (IEEE) and the Chinese Association of Automation. JAS publishes papers on original theoretical and experimental research and development in all areas of automation. The coverage of JAS includes but is not limited to: automatic control; artificial intelligence and intelligent control; systems theory and engineering; pattern recognition and intelligent systems; automation engineering and applications; information processing and information systems; network-based automation; robotics; computer-aided technologies for automation systems; sensing and measurement; and navigation, guidance, and control.

To learn more about JAS, please visit: http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6570654

http://www.ieee-jas.org
