In science fiction films, brain scanning is often used on captives as a way of extracting information, and sooner or later they give up that information. Now a team of scientists from Purdue University has developed something similar: mind-reading technology.
Scientists know that every thought a person has travels along a certain neural path, and that it has a signature, or pattern, that can be identified and picked up. If there were technology that could read these patterns, then people's thoughts could be deciphered. While this might sound like something out of a film, scientists are getting closer to mind reading: they are starting to decipher the patterns in the visual cortex by combining advanced artificial intelligence with an fMRI machine.
How to decode the human brain
Purdue University researchers used a type of artificial intelligence to read people's minds. They placed three female subjects into an fMRI machine and had them watch videos showing natural scenes, people, and animals while their brains were being scanned. The scientists collected more than 11 hours of fMRI data, and the deep learning program had access to the same data gathered while the test subjects were watching the videos, which was used to train it.
Deep Learning Program Knew What Person Was Seeing On Screen
The artificial intelligence technology used the data to predict what brain activity would take place in a subject's visual cortex, depending on the scene being played. As training progressed, the deep learning program managed to decode the fMRI data and place each image's signature in a specific category. This allowed the scientists to find the regions of each subject's brain that were responsible for each of the visuals. Before long, the deep learning program was able to say what object a person was seeing on the screen using the brain scan data alone.
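The idea of matching a brain scan's activity pattern to an image category can be illustrated with a toy model. The sketch below is only an illustration, not the Purdue pipeline: the voxel count, the noise level, the simulated "signatures," and the nearest-centroid classifier are all simplifying assumptions.

```python
# Toy sketch (NOT the Purdue method): decoding which image category a
# subject was viewing from simulated fMRI voxel patterns, using a
# simple nearest-centroid classifier.
import numpy as np

rng = np.random.default_rng(0)

n_voxels = 200  # hypothetical number of visual-cortex voxels
categories = ["face", "animal", "scene"]

# Assume each category evokes a distinct mean activation pattern
# (its "signature") across the voxels.
signatures = {c: rng.normal(0.0, 1.0, n_voxels) for c in categories}

def simulate_scan(category, noise=0.5):
    """One simulated scan = the category's signature plus measurement noise."""
    return signatures[category] + rng.normal(0.0, noise, n_voxels)

# "Training": estimate each signature by averaging many noisy scans.
centroids = {
    c: np.mean([simulate_scan(c) for _ in range(50)], axis=0)
    for c in categories
}

def decode(scan):
    """Predict the category whose learned signature is closest to the scan."""
    return min(centroids, key=lambda c: np.linalg.norm(scan - centroids[c]))

# Decode unseen scans from the (simulated) brain data alone.
hits = sum(decode(simulate_scan(c)) == c for c in categories for _ in range(20))
accuracy = hits / 60
print(f"decoding accuracy: {accuracy:.2f}")
```

Because the simulated signatures are well separated relative to the noise, the toy decoder identifies the category reliably; the real challenge in the study was doing this with genuine, far messier fMRI data.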
Artificial Intelligence Program Reconstructed Videos Flawlessly
Researchers then took things further and had the artificial intelligence program reconstruct the videos from the subject's brain scans alone, and it managed to do so flawlessly. The scientists believe the findings should allow a much better understanding of how the brain works. The work is also giving them the chance to develop more advanced artificial intelligence.
Zhongming Liu, an assistant professor at Purdue's Weldon School of Biomedical Engineering and School of Electrical and Computer Engineering, has been studying brains for around two decades. However, he did not make any major progress until he began to use artificial intelligence. He said that being able to reconstruct a person's visual experience is exciting, as it allows scientists to see how the brain interprets images.
The lead researcher, Haiguang Wen, talked more about the scientists' approach. He said that if a person looks at a scene with a car moving in front of a building, the brain dissects it into pieces: one location of the brain represents the car, and another represents the building. Using the technique they developed, they could visualize the information represented by any location in the brain and search through all the locations of the visual cortex. This, he went on to say, allowed the scientists to see how the brain divides visual scenes into pieces and then puts them back together to arrive at a full understanding of the scene.
Ultimate Goal Is To Evaluate Brain In Real-Time Dynamic Situation
The deep learning algorithm is known as a convolutional neural network, and it had previously been used to isolate patterns in the brain linked with still visual images; it also underpins facial and object recognition software. This is the first time the technique has been used with natural scenes and video, which brings the scientists one step closer to their ultimate goal of evaluating the brain in a real-time, dynamic situation.
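The operation that gives a convolutional neural network its name, sliding a small filter over an image to isolate a local visual pattern, can be sketched in plain Python. The 2×2 vertical-edge filter below is an illustrative assumption, not a component of the researchers' actual network:

```python
# Minimal sketch of the convolution operation at the heart of a
# convolutional neural network: sliding a small filter over an image
# to detect a local pattern (here, a vertical edge).
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation, as used in CNN layers."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image: dark left half, bright right half (a vertical edge).
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A vertical-edge filter: responds strongly where brightness changes
# from left to right.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

response = convolve2d(image, kernel)
print(response)  # peaks along the column where the edge sits
```

A real CNN stacks many such learned filters in layers, so that early layers pick out edges and textures while deeper layers respond to whole objects, which is what makes it suitable for matching brain activity to the contents of a scene.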
While this could eventually lead to people being able to extract information from one another, it is still in the very early stages, and it has taken the scientists a long time to get to this point. It was also noted that the process could be resisted, as a person could simply refuse to cooperate. Dr. Liu said that there are limits to the technique because the brain is so complex. He went on to say that it is difficult to ask AI to understand the brain when even scientists do not fully understand it. Right now, artificial intelligence is informing scientists about the brain, and in turn, this helps to improve the AI.