News Release

Research offers radical rethink of how to improve artificial intelligence in the future

Peer-Reviewed Publication

University of Essex

Computer scientists at the University of Essex have devised a radically different approach to improving artificial intelligence (AI) in the future.

Published in a top machine learning journal – the Journal of Machine Learning Research – the work is one the Essex team hope will provide a backbone for the next generation of AI and machine learning breakthroughs.

This could translate into improvements in everything from driverless cars and smartphones that better understand voice commands to improved automatic medical diagnosis and drug discovery.

“Artificial intelligence research ultimately has the goal of producing completely autonomous and intelligent machines which we can converse with and will do tasks for us, and this new published work accelerates our progress towards that,” explained co-lead author Dr Michael Fairbank, from Essex’s School of Computer Science and Electronic Engineering.

The recent impressive breakthroughs in AI in vision, speech recognition and language translation have involved “deep learning”, which means training multi-layered artificial neural networks to solve a task. However, training these deep neural networks is computationally expensive, requiring huge numbers of training examples and large amounts of computing time.

The Essex team, which includes Professor Luca Citi and Dr Spyros Samothrakis, has devised a radically different approach to training the neural networks used in deep learning.

“Our new method, which we call Target Space, provides researchers with a step change in the way they can improve and build their AI creations,” added Dr Fairbank. “Target Space is a paradigm-changing view, which turns the training process of these deep neural networks on its head, ultimately enabling the current progress in AI developments to happen faster.”

The standard way to train a neural network is to repeatedly make tiny adjustments to the connection strengths between the neurones in the network. The Essex team have taken a different approach: instead of tweaking the connection strengths between neurones, their “target-space” method tweaks the firing strengths of the neurones themselves.
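To make the contrast concrete, the sketch below is a minimal, illustrative toy – not the authors’ code or exact algorithm. Everything in it is an assumption for illustration: a tiny one-hidden-layer network on made-up data, connection weights derived from the targets by a simple least-squares fit, and a crude finite-difference optimiser. The published Target Space method is considerably more sophisticated, but the key reversal is the same: the quantities being trained are the neurones’ target firing strengths, not the connection weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: 50 examples, 3 inputs, 1 output (all made up).
X = rng.normal(size=(50, 3))
y = np.sin(X).sum(axis=1, keepdims=True)
n_hidden = 8

def fit_weights(inputs, targets, ridge=1e-3):
    """Ridge-regularised least squares: the connection weights that best
    reproduce the desired firing strengths `targets` from `inputs`."""
    A = inputs.T @ inputs + ridge * np.eye(inputs.shape[1])
    return np.linalg.solve(A, inputs.T @ targets)

def loss(hidden_targets):
    """Network error when the trainable parameters are the hidden layer's
    target firing strengths; the weights are merely derived from them."""
    W1 = fit_weights(X, hidden_targets)   # weights derived from the targets
    H = np.tanh(X @ W1)                   # realised firing strengths
    W2 = fit_weights(H, y)                # output layer fit to the labels
    return np.mean((H @ W2 - y) ** 2)

# The optimisation variables: one target firing strength per training
# example per hidden neurone (not a single connection weight in sight).
targets = rng.normal(scale=0.5, size=(50, n_hidden))
print("loss before adjusting targets:", loss(targets))

lr, eps = 0.5, 1e-4
for _ in range(40):
    # Finite-difference gradient of the loss with respect to the targets.
    grad = np.zeros_like(targets)
    for idx in np.ndindex(targets.shape):
        bump = np.zeros_like(targets)
        bump[idx] = eps
        grad[idx] = (loss(targets + bump) - loss(targets - bump)) / (2 * eps)
    # Backtrack the step size so every update genuinely reduces the loss.
    while lr > 1e-8 and loss(targets - lr * grad) >= loss(targets):
        lr *= 0.5
    if lr > 1e-8:
        targets = targets - lr * grad

print("loss after adjusting targets: ", loss(targets))
```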

Professor Citi added: “This new method stabilises the learning process considerably, by a process which we call cascade untangling. This allows the neural networks being trained to be deeper, and therefore more capable, and at the same time potentially requiring fewer training examples and less computing resources. We hope this work will provide a backbone for the next generation of artificial intelligence and machine-learning breakthroughs.”

The next steps for the research are to apply the method to various new academic and industrial applications.

Ends

Note to Editors

For more information, please contact the University of Essex Communications Office by emailing comms@essex.ac.uk.

