
Artificial Intelligence Experiment: Astonishing Results


Warren Henry
Warren Henry is a tech geek and video game enthusiast whose engaging and immersive narratives explore the intersection of technology and gaming.

Imagine that you are thinking about a goldfish. What if an AI could interpret your thoughts and turn them into an image, or at least a very close version of it?

It may sound like science fiction, but a preliminary study published in November attempted to do just that, producing some pretty cool AI-generated images in the process.

The images show that the technology designed to reconstruct pictures from recorded human brain activity isn’t perfect. But a group of researchers from Stanford University, the National University of Singapore and the Chinese University of Hong Kong is getting closer.

Incredible experiment shows that AI can read minds to visualize our thoughts https://t.co/t3jUwHtTFZ

— ScienceAlert (@ScienceAlert) March 31, 2023

During the experiment, the team recorded the participants’ brain activity while they were shown pictures. Although the researchers collected samples from many participants, the study only released AI imaging results for one of them.

The data was then fed into an AI model, and the images were reconstructed from the brain activity recorded while the participants were inside a functional magnetic resonance imaging (fMRI) machine. The team published their findings, including the resulting images, in a research paper in November.

The researchers found that the AI was able to create images that matched the original image’s attributes, such as color and texture, as well as its overall subject matter, 84 percent of the time, according to NBC.

The team’s work focuses on studying activity captured in human brain scans and ultimately using that data to decipher a person’s thoughts. But before it could create these images, the AI model used in the experiment had to go through many hours of training.

The AI model, known as MinD-Vis, was first trained on a large pre-existing dataset of more than 160,000 brain scans.
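
The article doesn’t say what that pretraining involved. One common self-supervised approach, sketched below purely as an illustration (not the team’s published method), is to hide part of each scan and train the model to reconstruct it; the data, sizes and model here are all placeholders.

```python
# Illustrative sketch of self-supervised pretraining on unlabeled brain scans.
# The actual objective isn't described in the article; masked reconstruction
# is assumed here. Synthetic data stands in for the 160,000-scan dataset.
import torch
import torch.nn as nn

N_VOXELS = 4096          # assumed number of voxels per scan
MASK_FRACTION = 0.5      # fraction of voxels hidden from the model

scans = torch.randn(1000, N_VOXELS)   # stand-in for the pre-existing scans

# An autoencoder that must reconstruct full scans from partially masked ones.
model = nn.Sequential(
    nn.Linear(N_VOXELS, 512),
    nn.ReLU(),
    nn.Linear(512, N_VOXELS),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    mask = (torch.rand_like(scans) > MASK_FRACTION).float()
    masked_scans = scans * mask            # hide roughly half of the voxels
    optimizer.zero_grad()
    reconstruction = model(masked_scans)
    loss = loss_fn(reconstruction, scans)  # learn to fill in the missing activity
    loss.backward()
    optimizer.step()
```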

In the second part of the training, the AI was trained on a different set of data from a small group of participants whose brain activity was recorded by an fMRI machine while they were shown images.

Based on this, the AI model learned to associate specific patterns of brain activity with the perception of image features such as color, shape and texture. To train the model, each participant’s brain activity also had to be measured for 20 hours, according to NBC.
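
As a rough illustration of this second stage, the sketch below trains a simple decoder to map synthetic fMRI voxel activity to synthetic image-feature vectors. It is not the team’s actual model; every size, name and dataset is a placeholder chosen for readability.

```python
# Minimal sketch (not the MinD-Vis code): learning to map fMRI voxel activity
# to image-feature embeddings, using synthetic data in place of real scans.
import torch
import torch.nn as nn

N_VOXELS = 4096        # assumed number of fMRI voxels per scan
N_FEATURES = 512       # assumed size of an image-feature embedding

# Synthetic stand-ins for "brain activity recorded while viewing images"
# and the corresponding image features (color, shape, texture, ...).
scans = torch.randn(200, N_VOXELS)
image_features = torch.randn(200, N_FEATURES)

# A simple decoder that predicts image features from brain activity.
decoder = nn.Sequential(
    nn.Linear(N_VOXELS, 1024),
    nn.ReLU(),
    nn.Linear(1024, N_FEATURES),
)

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fine-tuning stage: associate each participant's brain activity with
# the features of the image they were shown.
for epoch in range(10):
    optimizer.zero_grad()
    predicted = decoder(scans)
    loss = loss_fn(predicted, image_features)
    loss.backward()
    optimizer.step()
```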

For the actual experiment, the participants were again shown a series of new images inside the fMRI machine, which recorded their brain activity. Afterwards, the AI was able to create images based on its assessment of the participants’ brain activity.
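
To make that last step concrete, here is an equally hypothetical sketch of decoding: a trained decoder predicts image features from a new scan, and the closest candidate image is retrieved by cosine similarity, standing in for the generative model that actually produces a picture.

```python
# Illustrative decoding step (not the actual MinD-Vis pipeline): predict image
# features from a new, unseen scan and retrieve the closest candidate image.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_VOXELS, N_FEATURES = 4096, 512   # illustrative sizes

# Stand-in for a decoder trained as in the previous sketch.
decoder = nn.Sequential(
    nn.Linear(N_VOXELS, 1024),
    nn.ReLU(),
    nn.Linear(1024, N_FEATURES),
)
decoder.eval()

new_scan = torch.randn(1, N_VOXELS)                # brain activity for a new image
candidate_features = torch.randn(50, N_FEATURES)   # features of candidate images

with torch.no_grad():
    predicted = decoder(new_scan)                              # shape (1, N_FEATURES)
    similarity = F.cosine_similarity(predicted, candidate_features)
    best_match = similarity.argmax().item()

print(f"Closest candidate image: {best_match}")
```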

This experiment is just one of several similar experiments with brain activity and AI imaging that have emerged over the past few years.

Source: Science Alert
