Researchers have now shown how to interpret what the human brain is seeing by using artificial intelligence to decode fMRI scans from people watching videos, a kind of mind-reading technology.
These advances could aid efforts to improve artificial intelligence and lead to new insights into how the human brain works. Central to the research is a type of algorithm known as a convolutional neural network, which has been instrumental in enabling smartphones and computers to recognize faces and objects.
Convolutional neural networks, a type of 'deep learning' algorithm, have chiefly been used to study how the human brain processes static pictures and other visual stimuli. The new findings mark the first time such an approach has been used to study how the brain processes movies of natural scenes, a big step toward decoding the brain while people try to make sense of their complex and dynamic visual surroundings.
The research was led by Haiguang Wen, the paper's lead author, and the findings were published in the October issue of the journal Cerebral Cortex.
The researchers acquired 11.5 hours of fMRI data from each of three women as they watched 972 video clips, including clips showing people or animals in action and nature scenes. The data were used to train a convolutional neural network model to predict activity in the brain's visual cortex while the subjects watched the videos.
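The core idea of such an encoding model is to extract features from each movie frame with a convolutional network and then fit a regularized linear map from those features to each voxel's fMRI response. The sketch below illustrates only that last regression step on synthetic data; the feature count, voxel count, and noise level are invented for illustration and do not come from the study, which used real CNN activations and measured fMRI signals.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

# Hypothetical dimensions, chosen for illustration only.
rng = np.random.default_rng(0)
n_frames = 500      # time points (fMRI volumes) in the training movies
n_features = 256    # CNN features per frame (e.g. from one conv layer)
n_voxels = 100      # voxels in a visual-cortex region of interest

# Stand-in for CNN features of each frame; in the real pipeline these
# would come from a pretrained convolutional network applied to the video.
features = rng.standard_normal((n_frames, n_features))

# Simulate voxel responses as a noisy linear readout of the features,
# mirroring the linear-encoding assumption.
true_weights = rng.standard_normal((n_features, n_voxels)) * 0.1
bold = features @ true_weights + 0.5 * rng.standard_normal((n_frames, n_voxels))

# Fit a regularized linear map from CNN features to voxel activity.
model = Ridge(alpha=1.0)
model.fit(features, bold)

# Evaluate on held-out frames drawn from the same simulated process.
test_features = rng.standard_normal((200, n_features))
test_bold = (test_features @ true_weights
             + 0.5 * rng.standard_normal((200, n_voxels)))
pred = model.predict(test_features)
print("held-out R^2:", round(r2_score(test_bold, pred), 2))
```

A model fit this way can then be run in reverse or on new footage to predict cortical activity for videos the subjects never saw, which is the basis of the decoding results described above.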