AI Mind Hunter: Researchers make AI bot that can read your mind and draw 80% accurate images

Researchers from Osaka University have built a new AI system that can effectively read minds and then draw what it sees using Stable Diffusion, a text-to-image generator similar to OpenAI’s DALL-E 2. The reconstructed images are up to 80 per cent accurate. The system uses fMRI scans to “see” what a subject is thinking or visualising.

The day when AI machines and bots are given the ability to read human minds would be the day when machines finally start taking over the human race – this is what many conspiracy theorists and AI alarmists will keep on telling you.

Thanks to a group of researchers from Osaka University, that day may not be far off.

Artificial intelligence already has the ability to generate very realistic-looking images based on just written cues. Now, a group of scientists have revealed a gallery of images that they created by combining AI with a technology that can practically read brain activity.


What does the bot do exactly?

Researchers from Osaka University used the popular Stable Diffusion model, a text-to-image generator similar to OpenAI’s DALL-E 2, which can create imagery from text inputs.

The new AI-powered algorithm the researchers used drew around 1,000 images, including a teddy bear and an airplane, taking its cues and prompts from brain scans. The images were about 80 per cent accurate, meaning they closely matched the images the test subjects were viewing or thinking of.

The top row consists of images that were shown to the test subjects to generate the mental cues. The bottom row shows the corresponding images the AI bot generated. Image credit: Science.org

The researchers presented their subjects with a different collection of images and gathered fMRI (functional magnetic resonance imaging) scans, which were then decoded by the AI.

‘We show that our technique can recreate high-resolution pictures with high semantic accuracy from human brain activity,’ the researchers wrote in their paper, which was posted on the preprint server bioRxiv.

‘Unlike prior studies of image reconstruction, our approach does not require complicated deep-learning model training or fine-tuning.’
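The article does not spell out what replaces deep-learning training, but the usual alternative in this kind of decoding work is a simple linear mapping fitted from fMRI voxel responses to the image generator’s latent space. Below is a minimal, hedged sketch of that idea using ridge regression; all names, shapes, and data are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels, n_latent = 300, 500, 512

# Stand-in data: fMRI voxel responses for each scan, and the diffusion
# model's latent vector for the image viewed during that scan.
X = rng.standard_normal((n_scans, n_voxels))
Z = rng.standard_normal((n_scans, n_latent))

# Ridge regression in closed form: W = (X^T X + alpha*I)^-1 X^T Z.
# No iterative deep-learning training is needed, just a linear solve.
alpha = 10.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Z)

# Predict the latent vector for a new scan; in the real pipeline this
# prediction would be handed to the image generator's decoder.
z_pred = X[:1] @ W
print(z_pred.shape)  # (1, 512)
```

The appeal of a linear map is exactly what the quote highlights: it can be fitted per subject in seconds without fine-tuning the diffusion model itself.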

How does the bot read your mind?

According to Yu Takagi, who conducted the study, the programme draws information from areas of the brain involved in picture processing, such as the occipital and temporal lobes.

According to a report by Science.org, the team used fMRI because it detects variations in blood flow in active brain regions.

fMRI can detect oxygen levels in the blood, allowing sensors to see where in the brain our neurons — brain nerve cells — work the hardest (and consume the most oxygen) while we are thinking or feeling.

This research used four subjects, each of whom viewed a collection of 10,000 images.

The AI starts each image as pure noise, similar to television static, then gradually replaces the noise with distinct features decoded from the brain activity, comparing them against the images it was trained on to find a match.
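The noise-to-image process described above can be caricatured in a few lines: start from static and repeatedly pull it toward a target feature vector. This toy loop is only an intuition aid, not the actual diffusion sampler; the `target` stand-in plays the role of the features decoded from brain activity.

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.standard_normal(64)  # stand-in for features decoded from the fMRI scan
x = rng.standard_normal(64)       # start from pure noise ("television static")

# Each step replaces a little more noise with target structure,
# loosely analogous to one denoising step of a diffusion model.
for step in range(50):
    x = 0.9 * x + 0.1 * target

residual = float(np.linalg.norm(x - target))
print(residual)  # close to zero: the static has converged to the decoded features
```

Real diffusion samplers predict and subtract noise with a trained network at each step rather than blending toward a known target, but the trajectory — static first, structure last — is the same.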


‘We show that our simple approach can reconstruct high-resolution (512 x 512) pictures from brain activity with good semantic fidelity,’ the research says.

‘From a neuroscience standpoint, we numerically analyse each component of an LDM [a latent diffusion model] by mapping particular components to discrete brain regions. We offer an objective explanation of how an LDM text-to-image translation process integrates the semantic information conveyed by the conditional text while keeping the look of the original picture.’

Combining artificial intelligence with brain scanners has been a long-standing goal for the scientific community, which believes it will open new windows into our inner worlds.

Real-life applications of this bot

In a study published in November, scientists used similar technology to decode the brainwaves of nonverbal, paralysed patients and convert them into sentences on a computer screen in real time.

The ‘mind-reading’ machine can decode brain activity while a person quietly tries to spell words phonetically in order to construct full phrases.


The University of California researchers believe their speech neuroprosthesis has the potential to restore communication to individuals who are unable to talk or write due to paralysis.

In tests, the device deciphered the volunteer’s brain activity as they silently tried to pronounce each letter phonetically, producing phrases from a 1,152-word vocabulary at a pace of 29.4 characters per minute with an average character error rate of 6.13 per cent.
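Character error rate, the metric quoted above, is conventionally the edit (Levenshtein) distance between the decoded text and the intended text, divided by the length of the intended text. A small self-contained sketch of that computation (the example strings are invented):

```python
def char_error_rate(ref: str, hyp: str) -> float:
    """Levenshtein edit distance between hyp and ref, divided by len(ref)."""
    m, n = len(ref), len(hyp)
    d = list(range(n + 1))  # distances for the empty ref prefix
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                          # deletion
                       d[j - 1] + 1,                      # insertion
                       prev + (ref[i - 1] != hyp[j - 1])) # substitution
            prev = cur
    return d[n] / m

# Two edits (e->a, drop one 'l') over an 11-character reference:
print(round(char_error_rate("hello world", "hallo word"), 4))  # 0.1818
```

A 6.13 per cent rate therefore means roughly one character in sixteen had to be corrected, which is why the decoded phrases remain readable.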
