Brain Differences Between Men and Women: Evidence From Deep Learning


Do men and women have different brains? Neuroimaging studies have sought to answer this question based on morphological differences between specific brain regions, but have unfortunately reported conflicting results. In the present study, we aim to use a deep learning technique to address this challenge based on a large open-access diffusion MRI database recorded from 1, young healthy subjects, including both men and women.

The proposed 3D CNN was applied to the maps of fractional anisotropy (FA) in the whole brain as well as in specific brain regions. An entropy measure was applied to the lowest-level image features extracted from the first hidden layer to examine the difference in brain structure complexity between men and women.

The proposed 3D CNN yielded a better classification result. Moreover, high classification accuracies were also found in several specific brain regions, including the left precuneus, the left postcentral gyrus, the left cingulate gyrus, the right orbital gyrus of the frontal lobe, and the left occipital thalamus in the gray matter, and the middle cerebellar peduncle, the genu of the corpus callosum, the right anterior corona radiata, the right superior corona radiata, and the left anterior limb of the internal capsule in the white matter.

This study provides new insight into the structural differences between men and women, highlighting the importance of considering sex as a biological variable in brain research. Recent studies indicate that gender may have a substantial influence on human cognitive functions, including emotion, memory, and perception.

Men and women appear to encode memories, sense emotions, recognize faces, solve certain problems, and make decisions in different ways. Since the brain controls cognition and behavior, these gender-related functional differences may be associated with a gender-specific structure of the brain (Cosgrove et al.).

Diffusion tensor imaging (DTI) is an effective tool for characterizing nerve fiber architecture. By computing fractional anisotropy (FA) parameters from DTI, the anisotropy of nerve fibers can be quantitatively evaluated (Lasi et al.). By computing FA, researchers have revealed subtle changes related to normal brain development (Westlye et al.).

Nevertheless, existing studies have yet to provide consistent results on the difference in brain structure between men and women. Ingalhalikar et al. reported such gender-related structural differences, whereas other studies reported no significant gender difference in brain structure (Raz et al.). A recent critical opinion article suggested that more research is needed to investigate whether men and women really have different brain structures (Joel and Tarrasch). However, recent studies indicated that machine learning techniques may provide a more powerful tool for analyzing brain images (Shen et al.). Notably, deep learning can extract non-linear network structure, approximate complex functions, characterize distributed representations of the input data, and learn the essential features of a dataset from a relatively small number of samples (Zeng et al.).

In particular, the deep convolutional neural network (CNN) uses convolution kernels to extract image features and can find characteristic spatial differences in brain images, which may promise better results than conventional machine learning and statistical methods (Cole et al.). In this study, we performed CNN-based analyses on the FA images and extracted the features of the hidden layers to investigate the differences between men's and women's brains.

Different from the commonly used 2D CNN models, we propose a 3D CNN model with a new structure comprising three hidden layers, a linear layer, and a softmax layer. Each hidden layer is composed of a convolutional layer, a batch normalization layer, and an activation layer, followed by a pooling layer.

This novel CNN model allows using the whole 3D brain image (i.e., the whole-brain FA map) as the input. The linear layer between the hidden layers and the softmax layer reduces the number of parameters and therefore helps avoid over-fitting problems. The open-access database used here contains data from 1, subjects, including men and women, with ages ranging from 22 to . This database represents a relatively large sample size compared to most neuroimaging studies, and using this open-access dataset allows other researchers to replicate and extend this work.

DTI data preprocessing included format conversion, b0 image extraction, brain extraction, eddy current correction, and tensor/FA calculation. In the final step, we used dtifit to fit the tensors and obtain the FA values, as well as the mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) values. This procedure may lead to a better classification result, since a smaller input image gives the CNN model a larger effective receptive field.
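A minimal sketch of such a preprocessing pipeline driven from Python, assuming the standard FSL command-line tools (`fslroi`, `bet`, `eddy_correct`, `dtifit`) are installed; all file names below are illustrative:

```python
import subprocess

def run(cmd):
    """Run one FSL command line and raise if it fails."""
    subprocess.run(cmd, shell=True, check=True)

run("fslroi dwi.nii.gz b0 0 1")                 # extract the b0 volume
run("bet b0 b0_brain -m -f 0.2")                # brain extraction, producing b0_brain_mask
run("eddy_correct dwi.nii.gz dwi_ecc 0")        # eddy current correction
run("dtifit -k dwi_ecc -o dti -m b0_brain_mask -r bvecs -b bvals")
# dtifit writes dti_FA, dti_MD and the eigenvalue maps dti_L1..L3;
# AD corresponds to L1, and RD can be taken as the mean of L2 and L3.
```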

The TFRecord file format is a simple record-oriented binary format that is widely used in TensorFlow applications to achieve high input efficiency for the training data. The labels were processed into one-hot format. We implemented a pipeline to read data asynchronously from the TFRecord files according to the interface specification provided by TensorFlow (Abadi et al.).
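A minimal sketch of how each FA volume and its one-hot sex label might be serialized into a TFRecord file; the feature names and the `subjects` iterable are assumptions made for illustration:

```python
import numpy as np
import tensorflow as tf

def to_example(fa_volume: np.ndarray, is_male: bool) -> tf.train.Example:
    """Pack one FA volume and its one-hot label ([man, woman]) into a tf.train.Example."""
    label = np.array([1.0, 0.0] if is_male else [0.0, 1.0], dtype=np.float32)
    feature = {
        "fa": tf.train.Feature(bytes_list=tf.train.BytesList(
            value=[fa_volume.astype(np.float32).tobytes()])),
        "label": tf.train.Feature(float_list=tf.train.FloatList(value=label)),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.io.TFRecordWriter("fa_dataset.tfrecord") as writer:
    for fa_volume, is_male in subjects:      # `subjects` is assumed to be built during preprocessing
        writer.write(to_example(fa_volume, is_male).SerializeToString())
```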

The pipeline included reading the TFRecord files, data decoding, data type conversion, and reshaping of the data. The operating system of the GPU workstation was Ubuntu. We used FSL to preprocess the data. The commonly used CNN structures are based on 2D images. When a 2D CNN is used to process 3D MRI images, the original image must be mapped from different directions to obtain 2D slices, which loses the spatial structure information of the image.
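A matching sketch of the asynchronous reading side with `tf.data`, covering the reading, decoding, type conversion, and reshaping steps; the volume dimensions are placeholders:

```python
import tensorflow as tf

VOLUME_SHAPE = (58, 70, 58)                     # placeholder FA-map dimensions

feature_spec = {
    "fa": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([2], tf.float32),
}

def parse(record):
    """Decode one serialized example into a float32 volume plus its one-hot label."""
    parsed = tf.io.parse_single_example(record, feature_spec)
    volume = tf.io.decode_raw(parsed["fa"], tf.float32)       # decoding + type conversion
    volume = tf.reshape(volume, VOLUME_SHAPE + (1,))          # reshape, adding the channel axis
    return volume, parsed["label"]

train_dataset = (tf.data.TFRecordDataset("fa_dataset.tfrecord")
                 .map(parse, num_parallel_calls=tf.data.AUTOTUNE)   # asynchronous reads
                 .shuffle(256)
                 .batch(45)                                         # batch size used in this study
                 .prefetch(tf.data.AUTOTUNE))
```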

Besides, a traditional CNN model usually uses several fully connected layers to connect the hidden layers and the output layer. The fully connected layers may be prone to over-fitting in binary classification when the number of samples is limited, as in our data. To address this problem, we used a linear layer to replace the fully connected layers. The linear layer integrates the outputs of the hidden layers (i.e., the feature maps of the last hidden layer) into the input of the softmax layer. Moreover, we performed batch normalization (Ioffe and Szegedy) after each convolution operation. Each hidden layer contains a convolutional layer, a batch normalization layer, an activation layer, and a pooling layer, with several feature maps as the outputs.

The shape of the input vector in our 3D PCNN model was [n, d, w, h, c], where d, w, h, and c represent the depth, width, height, and number of channels (1 for a grayscale image) of the input vector, respectively, and n is the batch size, a hyperparameter that was set to 45 (an empirical value) in this paper. The shape of the convolution kernel was [d_k, w_k, h_k, c_in, c_out], where d_k, w_k, and h_k represent the depth, width, and height of the convolution kernel, respectively.

Here, c_in is the number of input channels, which is equal to the number of channels of the input vector, and c_out is the number of output channels. As each kernel produces one output channel, c_out is equal to the number of convolution kernels, and it is also the number of input channels for the next hidden layer. Batch normalization was performed after the convolutional layer.
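A small sketch of this shape bookkeeping with TensorFlow's low-level 3D convolution; the spatial dimensions and the 3x3x3 kernel size are assumptions:

```python
import tensorflow as tf

n, d, w, h, c = 45, 58, 70, 58, 1                       # batch size 45; spatial dims are placeholders
x = tf.zeros([n, d, w, h, c])                           # input vector of shape [n, d, w, h, c]

d_k, w_k, h_k, c_in, c_out = 3, 3, 3, 1, 32             # 32 kernels -> 32 feature maps in the first layer
kernel = tf.zeros([d_k, w_k, h_k, c_in, c_out])         # kernel shape [d_k, w_k, h_k, c_in, c_out]

y = tf.nn.conv3d(x, kernel, strides=[1, 1, 1, 1, 1], padding="SAME")
print(y.shape)   # (45, 58, 70, 58, 32): c_out becomes c_in for the next hidden layer
```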

Batch normalization is a training technique that normalizes the data of each mini-batch to zero mean and unit variance in the hidden layers of the network, alleviating the internal covariate shift phenomenon and speeding up CNN training. An Adam gradient descent method was used to train the model (Kingma and Ba). After the batch normalization operation, an activation function was used to non-linearize the convolution result.

A pooling layer was added after the activation layer. Pooling layers in a CNN summarize the outputs of neighboring groups of neurons in the same kernel map (Krizhevsky et al.). The max-pooling method was used in this layer. The outputs of each hidden layer were feature maps, i.e., the features extracted from the inputs to that hidden layer, and they served as the inputs to the next layer. In our model, the first hidden layer generated 32 feature maps, the second hidden layer produced 64 feature maps, and the third hidden layer yielded the final set of feature maps.
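A minimal Keras sketch of the architecture described above; the kernel size, pooling size, activation function, input dimensions, and the third layer's feature-map count are assumptions, since they are not specified here:

```python
import tensorflow as tf
from tensorflow.keras import layers

def hidden_block(n_maps):
    """One hidden layer: convolution -> batch normalization -> activation -> max pooling."""
    return [
        layers.Conv3D(n_maps, kernel_size=3, padding="same", use_bias=False),  # 3x3x3 kernel assumed
        layers.BatchNormalization(),
        layers.Activation("relu"),        # the activation function is not named above; ReLU is assumed
        layers.MaxPooling3D(pool_size=2),
    ]

model = tf.keras.Sequential(
    [tf.keras.Input(shape=(58, 70, 58, 1))]   # placeholder FA-map dimensions, one grayscale channel
    + hidden_block(32)                        # first hidden layer: 32 feature maps
    + hidden_block(64)                        # second hidden layer: 64 feature maps
    + hidden_block(128)                       # third hidden layer: feature-map count assumed (not given above)
    + [
        layers.Flatten(),
        layers.Dense(2),                      # linear layer mapping the last feature maps to two outputs
        layers.Softmax(),                     # softmax layer giving the man/woman class probabilities
    ]
)
model.summary()
```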

Finally, we integrated the last feature maps into the input of the softmax layer through a linear layer, and then obtained the final classification from the softmax layer. The initial values of the weights of the convolution kernels were random values drawn from a truncated normal distribution with a small standard deviation. We defined a cost function to adjust these weights based on the softmax cross entropy (Dunne and Campbell), and used the Adam Gradient Descent optimization algorithm (Kingma and Ba) to minimize it during model training. All parameters in the Adam algorithm were set to the empirical values recommended by Kingma and Ba.
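Continuing the sketch above, training would combine a truncated-normal kernel initializer with the softmax cross-entropy cost and Adam at its default settings; the standard-deviation value and the epoch count are assumptions, and `train_dataset` is the TFRecord pipeline built earlier:

```python
import tensorflow as tf

# Truncated-normal initializer for the convolution kernels; the exact standard deviation is not
# preserved above, so 0.1 is an assumed value. It would be passed to each Conv3D layer in the
# model sketch via `kernel_initializer=kernel_init`.
kernel_init = tf.keras.initializers.TruncatedNormal(stddev=0.1)

model.compile(
    optimizer=tf.keras.optimizers.Adam(),               # default Adam settings (Kingma and Ba)
    loss=tf.keras.losses.CategoricalCrossentropy(),     # softmax cross entropy against one-hot labels
    metrics=["accuracy"],
)
model.fit(train_dataset, epochs=50)                      # epoch count is illustrative
```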

To ensure independent training and testing in the cross-validation, we implemented a two-loop nested cross-validation scheme (Varoquaux et al.). We divided the data set into three parts, i.e., a training set, a validation set, and a test set. To eliminate the random error of model training, we ran a 10-fold cross-validation and took the average of the classification accuracies as the final result. The process of cross-validation is shown in Figure 2.
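One way to organize the two-loop scheme, assuming the subjects' FA maps and sex labels are held in NumPy arrays (`volumes`, `labels`) and that `train_and_evaluate` is a hypothetical helper that trains the model above on the training indices, tunes it on the validation indices, and returns the test accuracy:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)   # 10-fold outer loop
accuracies = []

for train_idx, test_idx in outer.split(volumes, labels):
    # Inner split: carve a validation set out of the training fold (nested scheme).
    tr_idx, val_idx = train_test_split(
        train_idx, test_size=0.1, stratify=labels[train_idx], random_state=0)
    accuracies.append(train_and_evaluate(volumes, labels, tr_idx, val_idx, test_idx))  # hypothetical helper

print("mean 10-fold test accuracy:", np.mean(accuracies))
```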

Figure 2. Model training and nested cross-validation. (A) General overview. (B) 10-fold cross-validation.

A CNN has the advantage that it can extract key features by itself (Zeng et al.). However, these features may be difficult to interpret, since they are highly abstract. Thus, in this study we only analyzed the features obtained in the first hidden layer, since they are the direct outputs of the convolution applied to the grayscale FA images.

In this case, the convolution operation of the first layer is equivalent to applying a convolution-kernel-based spatial filter to the FA images. The obtained features are less abstract than those from the second and third hidden layers.

There are 32 features in total in the first hidden layer. These features are the lowest-level features, which may represent the structural characteristics of the FA images. We first computed the mean of the voxel values across all subjects in each group (men vs. women) for each feature. Besides, we also computed the entropy of each feature for each individual. The entropy of each feature likely indicates the complexity of the brain structure encoded in that feature. We also performed a two-sample t-test on the entropy values to explore the differences between men and women, applying a strict Bonferroni correction for multiple comparisons.
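A plausible way to compute such a feature entropy is the Shannon entropy of each feature map's voxel-value histogram; the bin count, the array names, and the use of SciPy's two-sample t-test are assumptions:

```python
import numpy as np
from scipy import stats

def feature_entropy(feature_map: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of a feature map's voxel-value histogram (bin count is an assumption)."""
    hist, _ = np.histogram(feature_map, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                               # drop empty bins so the logarithm is defined
    return float(-(p * np.log2(p)).sum())

# `features_men` and `features_women` are assumed arrays of shape (n_subjects, 32, depth, height, width).
for k in range(32):
    ent_men = [feature_entropy(f) for f in features_men[:, k]]
    ent_women = [feature_entropy(f) for f in features_women[:, k]]
    t, p = stats.ttest_ind(ent_men, ent_women)          # two-sample t-test on entropy values
    print(f"feature {k}: t = {t:.2f}, Bonferroni-corrected p = {min(p * 32, 1.0):.4f}")
```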

In order to determine which brain regions may play an important role in gender-related brain structural differences, we repeated the same 3D PCNN-based classification on each specific brain region. The classification accuracy was then obtained for each ROI; a higher accuracy indicates a more important role of that ROI in the gender-related difference. A map was then constructed from the classification accuracies of the different ROIs to show their distribution in the brain.
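A sketch of this region-wise analysis, assuming a dictionary of binary ROI masks (`roi_masks`) aligned to the FA maps and a hypothetical `cross_validate` helper that wraps the 10-fold procedure above:

```python
import numpy as np

roi_accuracy = {}
for name, mask in roi_masks.items():
    # Keep only the voxels inside this ROI; everything outside is zeroed out.
    masked = volumes * mask[np.newaxis, ..., np.newaxis]
    roi_accuracy[name] = cross_validate(masked, labels)       # hypothetical helper

# Regions that classify sex well on their own are candidates for gender-related differences.
for name, acc in sorted(roi_accuracy.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.3f}")
```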

We compared the results under the following two conditions: (1) we used an SVM as the classifier, while keeping the same preprocessing procedure, in order to compare its results with our 3D PCNN method; and (2) we used the other diffusion parameters (MD, AD, and RD) instead of FA as the input, while keeping the same 3D PCNN classifier. The 3D PCNN result is much better than that obtained with the SVM, and the classification accuracies obtained with MD, AD, and RD are all lower than the classification accuracy obtained with FA. The two-sample t-test on the 32 features of men and women shows that 25 features had significant gender differences, including 13 features for which women have larger values and 12 features for which men have larger values (see Figure 3).
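For reference, the SVM baseline of condition (1) might be set up as follows; the flattening of the FA volumes into feature vectors, the scaling step, and the RBF kernel are assumptions:

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = volumes.reshape(len(volumes), -1)                     # flatten each FA map into a feature vector
baseline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(baseline, X, labels, cv=10)      # same 10-fold evaluation as the CNN
print("SVM 10-fold accuracy:", scores.mean())
```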

Interestingly, men have significantly higher entropy than women for all features (see Figure 4).

Figure 3. Between-group differences of the 32 features in voxel values. The mean (bar height) and standard deviation (error bars) of the voxel values across all subjects in each group were evaluated for each feature. Their group-level difference was examined using a two-sample t-test. Bonferroni correction was applied for multiple comparisons.

Figure 4. Between-group differences of the 32 features in entropy values. The mean (bar height) and standard deviation (error bars) of the entropy values were computed across all subjects in each group for each feature.

Their group-level difference was evaluated using a two-sample t-test.
