Transparent AI That Performs Human-like Reasoning to Solve Problems

Researchers from MIT Lincoln Laboratory's Intelligence and Decision Technologies Group have developed a new, transparent neural network that performs human-like reasoning to answer questions about the contents of images. The model, named the Transparency by Design network (TbD-net), visually renders its decision-making process as it solves problems, allowing human analysts to understand how it reaches its answers, and it outperforms today's best visual-reasoning neural networks.

Neural networks have input and output layers, as well as layers in between that transform the input into the correct output. Some deep neural networks are so complex that it is practically impossible to follow this transformation process. In the new network, the researchers have made those inner workings transparent, which could allow them to teach the neural network to correct any faulty assumptions it has made.
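As a rough illustration of what those in-between layers do, the toy network below (a generic sketch, not TbD-net) passes an input through one hidden layer to produce an output. The hidden representation is exactly the part that becomes hard to interpret as networks grow deeper.

```python
import numpy as np

# A generic feedforward network: input layer -> hidden layer -> output layer.
# This illustrates layered transformations in general, not TbD-net itself.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden (8 units)
W2 = rng.normal(size=(8, 2))   # hidden (8 units) -> output (2 scores)

def forward(x):
    hidden = np.maximum(0, x @ W1)  # ReLU hidden layer: the opaque intermediate state
    return hidden @ W2              # output layer: the answer we can observe

x = rng.normal(size=(1, 4))         # one example input
print(forward(x))                   # inputs and outputs are visible; 'hidden' is not
```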

Ryan Soklaski, who built TbD-net with fellow researchers David Mascharka, Arjun Majumdar, and Philip Tran, said in a statement that progress on improving performance in visual reasoning has come at the cost of interpretability.

To close the gap between interpretability and performance, the researchers built TbD-net from a collection of modules: small sub-networks that each specialize in one subtask. When TbD-net is asked a visual-reasoning question, it breaks the question down into subtasks and assigns each one to the relevant module. Each module builds on the previous one's conclusion, as the sketch below illustrates.
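The following is a hedged sketch of such a chain of modules: each module consumes the previous module's output (here, an attention mask over the image) and refines it. The module names and the toy "image" are illustrative assumptions, not the actual TbD-net implementation.

```python
import numpy as np

def attend_color(image, mask, color):
    """Keep attention only where pixels match the given color."""
    match = (image == color).all(axis=-1)
    return mask * match

def attend_left_of(image, mask):
    """Shift attention to everything left of the attended region."""
    out = np.zeros_like(mask)
    cols = np.where(mask.any(axis=0))[0]
    if cols.size:
        out[:, :cols.min()] = 1.0
    return out

def count(image, mask):
    """Terminal module: count the attended cells."""
    return int(mask.sum())

# A tiny 2x4 "image" of RGB pixels; one red pixel at row 0, column 2.
image = np.zeros((2, 4, 3), dtype=int)
image[0, 2] = (255, 0, 0)

# Module chain for "How many cells are left of the red object?"
# Each step feeds its attention mask to the next module.
mask = np.ones(image.shape[:2])
mask = attend_color(image, mask, (255, 0, 0))
mask = attend_left_of(image, mask)
print(count(image, mask))  # 4: the two columns left of the red pixel
```

Because each intermediate mask can be rendered as a heat map over the image, an analyst can inspect what the model attended to at every step, which is the core of TbD-net's transparency.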

Moreover, the network uses AI techniques that interpret human-language questions and break those sentences into subtasks, followed by multiple computer-vision AI techniques that analyze the imagery.
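On the language side, this amounts to translating a question into an ordered program of module names. The keyword rules below are purely illustrative assumptions; a real system like TbD-net learns this mapping from data rather than hard-coding it.

```python
# Hedged sketch: map a question to a "program", i.e., an ordered list of
# module names like those in the previous example. Purely illustrative.

def question_to_program(question: str) -> list[str]:
    q = question.lower()
    program = []
    if "red" in q:
        program.append("attend_color[red]")
    if "left of" in q:
        program.append("attend_left_of")
    if q.startswith("how many"):
        program.append("count")  # counting questions end with a count module
    return program

print(question_to_program("How many things are left of the red object?"))
# ['attend_color[red]', 'attend_left_of', 'count']
```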