Deciphering cognitive processes using NeuroImaging — Computational NeuroScience

December 25, 2020 by systems

The fMRI image of the human brain (picture: toubibe)

The human brain is a fascinating organ: while being more energy-efficient than most of our modern computational engines, it is infinitely more complex. Its activity fluctuates as we perform tasks ranging from the simple to the complex, and each of these activities elicits a unique pattern of activity in diverse regions of the brain. These activity patterns can be captured using Functional Magnetic Resonance Imaging (fMRI), which essentially takes a three-dimensional snapshot of the activity of neurons in the brain at a particular instant.

Structure of the fMRI dataset

fMRI data is usually associated with four dimensions, three spatial and one temporal, so it is not possible to visualize all the dimensions at once. The spatial component consists of three dimensions whose individual units are called ‘voxels’. The three anatomical planes are termed coronal, transversal, and sagittal (more info about these anatomical planes is here); the transversal plane provides the top view of the brain and its regions. To make things clearer, a 64*64*36 (l*w*d) brain image can be visualized as a cube holding the brain with a height of 36. The “depth” indicates that there are 36 layers (slices through the brain) in every image object. If we slice this cube into 36 pieces, each piece holds a 64*64 2D image, as shown below.

The slices of transversal view of the brain (picture: personal genome project)
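To make that layout concrete, here is a minimal sketch in Python using only NumPy and a synthetic array in place of a real scan (in practice, a NIfTI file would typically be loaded with a library such as nibabel). The spatial shape follows the 64*64*36 example above; the number of time points is an arbitrary choice for illustration.

```python
import numpy as np

# Synthetic stand-in for a 4D fMRI scan: 64 x 64 in-plane voxels,
# 36 transversal slices, and 120 time points (the spatial shape follows
# the 64*64*36 example above; the number of time points is arbitrary).
fmri = np.random.rand(64, 64, 36, 120)

# One brain "image" at a single time point is a 64 x 64 x 36 volume.
volume_t0 = fmri[..., 0]           # shape (64, 64, 36)

# Slicing that volume along its third axis yields 36 transversal slices,
# each a 64 x 64 2D image like the ones pictured above.
slice_18 = volume_t0[:, :, 18]     # shape (64, 64)

print(volume_t0.shape, slice_18.shape)
```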

The most important dimension is the temporal one, which depicts how the activations evolve over time. The easiest way to visualize this is to pick any random voxel in any of the slices and plot its time course, as sketched below. That can be misleading, however, because the brain’s regions work in unison and consist of many voxels, so a single voxel might not be of much help. Decoding these patterns therefore demands more sophisticated statistical analysis.
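As a rough illustration on synthetic data (the voxel coordinates and region size here are arbitrary choices, not values from the text), the following snippet contrasts one voxel’s time course with the mean time course of a small neighbourhood of voxels:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic 4D scan, shaped (x, y, z, time) as in the previous sketch.
rng = np.random.default_rng(0)
fmri = rng.normal(size=(64, 64, 36, 120))

# Time course of one arbitrary voxel: a single, noisy signal.
single_voxel = fmri[30, 30, 18, :]

# Averaging a small neighbourhood of voxels gives a less noisy estimate
# of the regional activity the text refers to.
region_mean = fmri[28:33, 28:33, 17:20, :].mean(axis=(0, 1, 2))

plt.plot(single_voxel, label="single voxel", alpha=0.5)
plt.plot(region_mean, label="5x5x3 region mean")
plt.xlabel("scan number (time)")
plt.ylabel("signal intensity (a.u.)")
plt.legend()
plt.show()
```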

Analyzing the fMRI data

Analyzing fMRI data is a daunting task due to the inherent complexity and sheer volume of the data produced. For instance, a twenty-minute fMRI session creates a series of three-dimensional brain images, each containing approximately 15,000 voxels, collected once per second, yielding tens of millions of data observations (a quick estimate follows below). Most of the scholarship around this analysis focuses on identifying the regions and the specific voxel activity involved when humans perform specific activities (e.g., reading, classification). Such analysis can both reveal key aspects of the workings of the brain and provide insights into designing more efficient and natural neural network models. This can have implications for the basic units of our existing models and can revolutionize the learning approach, bridging the gap between artificial and natural intelligence. Let’s look at how we can decipher this data using machine learning and the challenges it poses.
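A back-of-the-envelope check of that figure, assuming exactly one scan per second and 15,000 voxels per image as stated above:

```python
# Rough data-volume estimate for the twenty-minute session described above.
session_seconds = 20 * 60             # 1,200 scans at one scan per second
voxels_per_scan = 15_000              # approximate voxels in each 3D image
observations = session_seconds * voxels_per_scan
print(f"{observations:,} voxel observations")   # 18,000,000, i.e. tens of millions
```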

Machine learning on NeuroImage

The central problem that needs to be solved is training machine learning classifiers to automatically decode the subject’s cognitive activity at a single time instant or interval. More specifically, let’s consider an example problem statement: detecting whether a subject is looking at a picture or reading a sentence (P vs S). The experiment is simple: the subject sits in front of a screen and is shown an image (e.g., an image with a + sign and a * sign below it) for 5 s, after which the image is replaced by its caption. This caption (e.g., “It is not true that the star is above the plus.”) can be either a correct description of the image or an incorrect one; either way, it prompts the subject to think about the caption and decide whether it is false or accurate. While the subject performs this activity, the fMRI scanner continuously takes images of the brain at intervals of 0.5 s. At the end of the trial we have 20 images: in 10 of them the subject is looking at the picture, and in the rest the subject is thinking about the caption. This activity is repeated with multiple images and subjects, and the resulting data can be arranged as sketched below.
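One plausible way to organize a single trial’s data is a matrix with one row per 0.5 s scan and a label per row. This layout is an assumption for illustration, not the format of any particular dataset, and the random values stand in for real scans:

```python
import numpy as np

# Hypothetical layout of one picture-vs-sentence (P vs S) trial, assuming
# one scan every 0.5 s as described above: 10 volumes while viewing the
# picture followed by 10 volumes while reading the sentence.
rng = np.random.default_rng(0)
n_voxels = 15_000                        # each 3D volume flattened to a vector
trial_scans = rng.normal(size=(20, n_voxels))

X = trial_scans                           # one row per 0.5 s scan
y = np.array(["picture"] * 10 + ["sentence"] * 10)

print(X.shape, y.shape)                   # (20, 15000) (20,)
```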

The learning task is to train a classifier that, given a particular interval of fMRI data, determines whether the subject is viewing a sentence or a picture during that interval. In other words, the aim is to learn a classifier that, from the observed brain activity, can identify the type of activity the subject is performing. Such classifier learning approaches are also potentially applicable to medical diagnosis problems, which are often cast as classification problems, such as diagnosing Alzheimer’s disease. The full solution and approach to this problem are beyond the scope of this post, but the problem domain poses some difficulties for traditional machine learning, discussed next.
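Before turning to those difficulties, here is a minimal sketch of such a classifier using scikit-learn on synthetic data. The model choice, regularization strength, and data shapes are illustrative assumptions, not the method used in the original studies:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: 40 labelled intervals (20 picture, 20 sentence),
# each summarised as one flattened vector of voxel values. Real data would
# come from the trials described above, not from a random generator.
rng = np.random.default_rng(0)
n_voxels = 15_000
X = rng.normal(size=(40, n_voxels))
y = np.array([0, 1] * 20)                 # 0 = picture, 1 = sentence

# A heavily regularised linear model is a reasonable baseline for this
# "many features, few examples" regime; C here is an arbitrary choice.
clf = make_pipeline(StandardScaler(), LogisticRegression(C=0.01, max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())  # ~chance on random data
```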

Challenges in developing machine learning models

  • Handling the huge volume of each instance of the dataset

This problem domain is also quite interesting from the perspective of machine learning because it provides a case study of classifier learning from extremely high-dimensional, sparse, and noisy data. In our case studies, we encounter problems where the examples are described by roughly 100,000 features and where we have fewer than a dozen very noisy training examples per class. The design of appropriate feature selection, feature abstraction, and classifier training methods tuned to these problem characteristics is key to learning superior classifiers.

Although each feature consists of the value of a single voxel at a single time, we group the features involving the same voxel together for the purpose of feature selection and thus focus on selecting a subset of voxels. There is a plethora of feature selection methods, including but not limited to selecting the most discriminating voxels or picking the voxels with the highest activity across the training timeframe. Recently, there has been growing interest in methods based on regions of interest (ROIs), which represent the major regions of the brain associated with specific functions. Picking the n most active voxels from each of the designated ROIs has been found to give the best results; a simple sketch of this idea follows below.
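Here is a small sketch of that ROI-based selection on synthetic data. The ROI masks, the activity criterion (mean absolute signal), and n=50 are all illustrative assumptions:

```python
import numpy as np

def top_n_active_voxels(scans, roi_masks, n=50):
    """Pick the n most active voxels inside each ROI.

    `scans` has shape (n_scans, n_voxels); `roi_masks` maps an ROI name to a
    boolean mask over voxels. Mean absolute signal is used as the (assumed)
    activity criterion."""
    activity = np.abs(scans).mean(axis=0)            # per-voxel activity score
    selected = []
    for mask in roi_masks.values():
        roi_idx = np.flatnonzero(mask)
        ranked = roi_idx[np.argsort(activity[roi_idx])[::-1]]
        selected.extend(ranked[:n].tolist())
    return np.array(sorted(selected))

# Toy example: two hypothetical ROIs splitting 1,000 voxels in half.
rng = np.random.default_rng(0)
scans = rng.normal(size=(40, 1000))
masks = {"ROI_A": np.arange(1000) < 500, "ROI_B": np.arange(1000) >= 500}
keep = top_n_active_voxels(scans, masks, n=50)
print(keep.shape)                                    # (100,) voxel indices kept
```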

Another major hurdle arises from the inherent differences in the structure and orientation of each individual’s brain. These differences have to be normalized across subjects when we aim to train a classifier to detect common patterns of cognitive activity across different subjects performing the same task; a simple normalization sketch is shown below.
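As one very simple starting point (assuming the images have already been spatially registered to a common template, which is the hard part and is not shown here), per-subject standardization can remove gross differences in signal scale between subjects:

```python
import numpy as np

def zscore_per_subject(data_by_subject):
    """Standardise each voxel's signal within each subject.

    Assumes every subject's scans (n_scans, n_voxels) have already been
    spatially registered to the same template, so voxel i refers to the same
    location for all subjects."""
    normalised = {}
    for subject, scans in data_by_subject.items():
        mean = scans.mean(axis=0, keepdims=True)
        std = scans.std(axis=0, keepdims=True) + 1e-8   # avoid division by zero
        normalised[subject] = (scans - mean) / std
    return normalised

# Two synthetic subjects with different signal scales.
rng = np.random.default_rng(0)
data = {"subj01": rng.normal(5.0, 2.0, size=(40, 1000)),
        "subj02": rng.normal(3.0, 1.0, size=(40, 1000))}
norm = zscore_per_subject(data)
print(round(norm["subj01"].mean(), 3), round(norm["subj01"].std(), 3))  # ~0, ~1
```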

Conclusion

Although this domain poses its own unique problems, pursuing these challenges is not without rewards and can lead to breakthroughs in how we train deep learning models. This post gave a brief overview of the problem statement and an introduction to fMRI. In coming posts, I will delve deeper into training specific classifiers aimed at solving these problems while addressing the challenges in the best possible way.

Useful links

  • openfMRI — The open-source dataset repository of fMRI data.
  • UCSD fMRI Lab — More insights into the structure and acquisition of fMRI data.
