Unlocking the Power of AI: Turning Thoughts into Pictures with 80% Accuracy

Warren Henry
Warren Henry is a tech geek and video game enthusiast whose engaging and immersive narratives explore the intersection of technology and gaming.

AI can already create images from text prompts, but now scientists have presented a gallery of images that the technology generated by reading brain activity.

A new AI-based algorithm reconstructed about 1,000 images, including a teddy bear and an airplane, from brain scans with 80% accuracy.

Researchers at Osaka University used Stable Diffusion, a well-known text-to-image model comparable to OpenAI’s DALL-E 2, which can generate images from text input.

The team showed participants sets of images while taking functional magnetic resonance imaging (fMRI) scans, which were then interpreted by the AI.

The team wrote in a study posted on bioRxiv: “We have shown that our method can reconstruct high-resolution images with high semantic accuracy from human brain activity. And, unlike previous image reconstruction studies, our method does not require training or fine-tuning of complex deep learning models.”

The algorithm extracts information from parts of the brain involved in image perception, such as the temporal lobes, according to Yu Takagi, who led the study.

The team used fMRI because it captures changes in blood flow in active areas of the brain, Science.org reports.

And because fMRI can detect oxygen molecules, scanners can see where neurons, the nerve cells of the brain, work hardest and consume the most oxygen when we have thoughts or emotions.
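That idea, finding voxels whose activity rises and falls with what a person is experiencing, can be illustrated in a few lines of code. The sketch below is purely illustrative, using synthetic data and a simple correlation in place of the statistical models real fMRI pipelines use.

```python
import numpy as np

# Illustrative sketch (synthetic data): flag "active" voxels by correlating
# each voxel's BOLD-like time series with the stimulus timing. Real fMRI
# analysis convolves the stimulus with a hemodynamic response function and
# fits a general linear model; this toy version skips both.

rng = np.random.default_rng(0)

n_voxels, n_timepoints = 500, 200
stimulus = (np.arange(n_timepoints) % 20 < 10).astype(float)  # on/off blocks

# Most voxels are pure noise; the first 50 also track the stimulus.
bold = rng.normal(size=(n_voxels, n_timepoints))
bold[:50] += 0.8 * stimulus

# Pearson correlation between every voxel and the stimulus regressor.
bold_c = bold - bold.mean(axis=1, keepdims=True)
stim_c = stimulus - stimulus.mean()
r = (bold_c @ stim_c) / (np.linalg.norm(bold_c, axis=1) * np.linalg.norm(stim_c))

active = np.where(r > 0.3)[0]  # arbitrary threshold for the sketch
print(f"{len(active)} voxels flagged as stimulus-driven")
```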

In total, four participants took part in the study, and each was shown a set of 10,000 images.

The AI starts each image as noise, similar to TV static, which is then gradually replaced with distinguishable features as the algorithm compares the observed brain activity with the images it was trained on and finds matches.
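That noise-to-image process is the essence of diffusion sampling. The snippet below is a toy sketch of the loop, not the study’s code: `predict_noise` is a made-up stand-in for the trained neural network (in Stable Diffusion, a U-Net) that estimates what to remove at each step.

```python
import numpy as np

# Toy sketch of the denoising loop described above. `predict_noise` is a
# placeholder for a trained network; here it simply nudges pixels toward a
# flat gray image so the effect of repeated denoising is visible.

rng = np.random.default_rng(0)

def predict_noise(image, step):
    """Stand-in for the model's noise estimate at this step."""
    return image - 0.5

image = rng.normal(0.5, 1.0, size=(64, 64))  # start from "TV static"

n_steps = 50
for step in range(n_steps):
    eps = predict_noise(image, step)  # estimated noise in the image
    image -= eps / n_steps            # strip away a fraction of it

print(f"pixel spread after denoising: {image.std():.3f}")  # shrinks from ~1.0
```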

According to the study: “We show that our simple framework can reconstruct high-resolution (512 x 512) images from brain activity with high semantic fidelity. We quantitatively interpret each component of the LDM (latent diffusion model) in terms of neuroscience by mapping specific components to specific brain regions. We provide an objective explanation of how the LDM’s text-to-image transformation process incorporates the semantic information expressed by the conditioning text while maintaining the appearance of the original image.”
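The “no training or fine-tuning” claim fits a decoding recipe built from simple linear maps: regress from voxel activity to the diffusion model’s latent representations, then let the frozen model render the image. Below is a hedged sketch of that idea on synthetic data; none of the arrays stand for the study’s actual recordings.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Sketch (synthetic data) of lightweight linear decoding: learn a ridge
# regression from fMRI voxel patterns to a latent vector that a frozen
# diffusion model could then decode into an image. No deep network is
# trained anywhere in this pipeline.

rng = np.random.default_rng(0)

n_trials, n_voxels, latent_dim = 800, 1000, 64

# Synthetic ground truth: latents are a hidden linear function of voxels.
true_w = rng.normal(size=(n_voxels, latent_dim)) / np.sqrt(n_voxels)
voxels = rng.normal(size=(n_trials, n_voxels))
latents = voxels @ true_w + 0.1 * rng.normal(size=(n_trials, latent_dim))

# Fit on most trials, hold out the rest to check generalization.
split = 700
model = Ridge(alpha=10.0)
model.fit(voxels[:split], latents[:split])

pred = model.predict(voxels[split:])
corr = np.corrcoef(pred.ravel(), latents[split:].ravel())[0, 1]
print(f"held-out correlation with true latents: {corr:.2f}")

# A predicted latent would then be passed to the diffusion model's decoder
# to render the reconstructed image.
```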

The combination of artificial intelligence and brain-scanning devices has captivated the scientific community, which believes the pairing holds new clues to our inner world.

In a November study, scientists used similar techniques to analyze the brain waves of non-verbal, paralyzed patients and convert them into sentences on a computer screen in real time.

The “mind-reading” machine can decode brain activity as a person silently attempts to say words, turning them into complete sentences.

Researchers from the University of California said the communication neuroprosthesis could restore communication in people who cannot speak or write because of paralysis.

In tests, the device decoded the brain activity of a volunteer as they silently attempted to spell out words letter by letter using a phonetic alphabet, forming sentences from a vocabulary of 1,152 words at a rate of 29.4 characters per minute with an average character error rate of 6.13 percent.

In other experiments, the researchers found that the approach generalized to a large vocabulary of over 9,000 words, with an average error rate of 8.23%.
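Error rates like the 6.13 and 8.23 percent above are typically character error rates: the edit distance between the decoded text and what the person intended, divided by the intended text’s length. A minimal sketch of that calculation (not the research team’s code):

```python
# Minimal sketch of a character error rate calculation: Levenshtein edit
# distance between decoded and reference text, divided by reference length.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute ca with cb
            ))
        prev = curr
    return prev[-1]

def char_error_rate(decoded: str, reference: str) -> float:
    return levenshtein(decoded, reference) / len(reference)

print(f"{char_error_rate('hello worfd', 'hello world'):.2%}")  # one substitution: 9.09%
```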

Source: Daily Mail
