Raytheon Developing System that Lets Artificial Intelligence Explain Itself

DARPA programme applies 'trust but verify' to AI

CAMBRIDGE, Massachusetts, August 28, 2018. Under the Defense Advanced Research Projects Agency’s (DARPA) Explainable Artificial Intelligence (XAI) programme, Raytheon BBN Technologies is developing a first-of-its-kind neural network that explains itself.


The XAI programme aims to create a suite of machine learning techniques that produce more explainable models while maintaining a high level of performance. It also aims to help human users understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners.

Raytheon BBN’s Explainable Question Answering System will allow AI programmes to ‘show their work,’ increasing the human user’s confidence in the machine’s suggestions. “Our goal is to give the user enough information about how the machine’s answer was derived and show that the system considered relevant information so users feel comfortable acting on the system’s recommendation,” said Bill Ferguson, lead scientist and EQUAS principal investigator at Raytheon BBN.

EQUAS will show users which data mattered most in the AI decision-making process. Using a graphical interface, users can explore the system’s recommendations and see why it chose one answer over another. The technology is still in its early phases of development but could be used for a wide range of applications.
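The article does not describe how EQUAS computes which data mattered most, but a common, model-agnostic way to produce this kind of explanation is occlusion-based attribution: mask each input feature in turn and measure how much the model’s score drops. The sketch below is purely illustrative (the function names and the toy linear model are assumptions, not part of EQUAS):

```python
# Hypothetical sketch of occlusion-based attribution; EQUAS internals
# are not described in the article.

def occlusion_importance(score_fn, features, baseline=0.0):
    """Per-feature importance: how much the score drops when that
    feature is replaced with a neutral baseline value."""
    full_score = score_fn(features)
    importances = []
    for i in range(len(features)):
        masked = list(features)
        masked[i] = baseline          # occlude one feature
        importances.append(full_score - score_fn(masked))
    return importances

# Toy linear "model": a stand-in for any black-box scorer.
weights = [0.6, 0.1, 0.3]
score = lambda x: sum(w * v for w, v in zip(weights, x))

imp = occlusion_importance(score, [1.0, 1.0, 1.0])
ranked = sorted(enumerate(imp), key=lambda p: -p[1])
print(ranked)  # feature 0 contributes most to the score
```

Ranking the drops tells the user which inputs the model leaned on, which is the kind of evidence a graphical interface could highlight.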

“A fully developed system like EQUAS could help with decision-making not only in DoD operations, but in a range of other applications like campus security, industrial operations and the medical field,” said Ferguson. “Say a doctor has an X-ray image of a lung and her AI system says that it’s cancer. She asks why and the system highlights what it thinks are suspicious shadows, which she had previously disregarded as artifacts of the X-ray process. Now the doctor can make the call – to diagnose, investigate further, or, if she still thinks the system is in error, to let it go.”


As the system is enhanced, EQUAS will be able to monitor itself and share factors that limit its ability to make reliable recommendations. This self-monitoring capability will help developers refine AI systems, allowing them to inject additional data or change how data is processed.
