
New Study Explores Deep Neural Networks’ Visual Perception

NewsGram Desk

A team of researchers at the Centre for Neuroscience (CNS) at the Indian Institute of Science (IISc) recently conducted a study comparing the visual perception of deep neural networks with that of humans. After a series of experiments, they concluded that the visual perception of deep neural networks differs from that of humans. S. P. Arun, Associate Professor at the CNS and leader of the research team, explained that deep neural networks are machine learning systems inspired by the network of brain cells, or neurons, in the human brain, which can be trained to perform specific tasks.

The team's study, published in the journal Nature Communications, stated that deep networks, although a good model for understanding how the human brain visualizes objects, work differently from the brain. "While complex computation is trivial for them, certain tasks that are relatively easy for humans can be difficult for these networks to complete," Arun said.


"These networks have played a pivotal role in helping scientists understand how our brains perceive the things that we see. Although deep networks have evolved significantly over the past decade, they are still nowhere close to performing as well as the human brain in perceiving visual cues. The team has compared various qualitative properties of these deep networks with those of the human brain," he explained. His team studied 13 different perceptual effects and uncovered previously unknown qualitative differences between deep networks and the human brain.

Arun and his team attempted to understand which visual tasks these networks can perform naturally by virtue of their architecture, and which require further training. "Lots of studies have been showing similarities between deep networks and brains, but no one has really looked at systematic differences," said Arun, the senior author of the study. He added that identifying these differences can push us closer to making these networks more brain-like.

Georgin Jacob, first author of the study and a Ph.D. student at the CNS, said that deep neural networks have revolutionized computer vision, and that their object representations across layers match coarsely with visual cortical areas in the brain. "However, whether these representations exhibit qualitative patterns seen in human perception or brain representations remains unresolved," he added. Citing the example of the Thatcher effect, Arun pointed out that it is a phenomenon in which humans find it easier to recognize local feature changes in an upright image, but this becomes difficult when the image is flipped upside-down.


"Deep networks trained to recognize upright faces showed a Thatcher effect when compared with networks trained to recognize objects. Another visual property of the human brain, called mirror confusion, was tested on these networks. To humans, mirror reflections along the vertical axis appear more similar than those along the horizontal axis. This proves that deep networks also show stronger mirror confusion for vertical compared to horizontally reflected images," he explained.

Jacob added that another phenomenon peculiar to the human brain is that it focuses on coarser details first. "This is known as the global advantage effect. For example, in an image of a tree, our brain would first see the tree as a whole before noticing the details of the leaves in it. Similarly, when presented with an image of a face, humans first look at the face as a whole, and then focus on finer details like the eyes, nose, mouth, and so on," he said.

He added that, surprisingly, neural networks showed a local advantage, which means that, unlike the brain, the networks focus on the finer details of an image first. "Therefore, even though these neural networks and the human brain carry out the same object recognition tasks, the steps followed by the two are very different," he said. The study also notes that convolutional deep neural networks have revolutionized computer vision with their human-like accuracy on object-recognition tasks, and that their object representations match coarsely with the brain's.
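One way to picture this global-versus-local comparison is with Navon-style composite stimuli: a large shape built out of small elements, where either the large shape or the small elements change. The sketch below is only illustrative, assuming a randomly initialized VGG-16 (the study reports some such effects even in untrained networks) and hypothetical plus/X layouts made of square elements; a global advantage would mean the layout change produces the larger representational distance, a local advantage the opposite.

```python
import torch
from torchvision.models import vgg16

# Randomly initialized VGG-16 (an illustrative choice, not the
# study's exact setup).
model = vgg16(weights=None).eval()

def features(img):
    with torch.no_grad():
        return model.features(img).flatten()

def element(hollow):
    """A 32x32 local element: a filled square, or a hollow one."""
    p = torch.ones(32, 32)
    if hollow:
        p[8:24, 8:24] = 0.0
    return p

def layout(shape):
    """A 7x7 coarse layout: a plus sign or an X."""
    m = torch.zeros(7, 7)
    if shape == "plus":
        m[3, :] = 1.0
        m[:, 3] = 1.0
    else:  # "x"
        for i in range(7):
            m[i, i] = m[i, 6 - i] = 1.0
    return m

def navon(shape, hollow=False):
    """Tile the local element into the 'on' cells of the coarse layout."""
    img = torch.kron(layout(shape), element(hollow))  # 224 x 224
    return img.repeat(1, 3, 1, 1)

base        = navon("plus")               # plus made of filled squares
global_diff = navon("x")                  # layout changes, elements same
local_diff  = navon("plus", hollow=True)  # layout same, elements change

d_global = torch.dist(features(base), features(global_diff)).item()
d_local  = torch.dist(features(base), features(local_diff)).item()

# Global advantage: d_global > d_local. The local advantage the team
# reports for deep networks is the opposite pattern.
print(f"global-change distance: {d_global:.2f}")
print(f"local-change distance:  {d_local:.2f}")
```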


"Yet they are still outperformed by humans and show systematic finer-scale deviations from human perception. Even these differences are largely quantitative in that there are no explicit or emergent properties that are present in humans but absent in deep networks," the study opined. According to this study, it is possible that these differences can be fixed by training deep networks on larger datasets, incorporating more constraints, or by modifying network architecture such as by including recurrence.

The team added that they recast well-known perceptual and neural phenomena in terms of distance comparisons, and asked whether these phenomena are present in feed-forward deep neural networks trained for object recognition. "Some phenomena were present in randomly initialized networks, such as the global advantage effect, sparseness, and relative size. Many others were present after object recognition training, such as the Thatcher effect, mirror confusion, Weber's law, relative size, multiple object normalization, and correlated sparseness.

Yet other phenomena were absent in trained networks, such as 3D shape processing, surface invariance, occlusion, natural parts, and the global advantage. These findings indicate sufficient conditions for the emergence of these phenomena in brains and deep networks, and offer clues to the properties that could be incorporated to improve deep networks," the study explained.
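As one illustration of recasting a perceptual effect as a distance comparison, consider Weber's law, which says that perceived difference scales with relative rather than absolute change. The sketch below is a hedged example, assuming bar-length stimuli and an ImageNet-trained VGG-16 (both illustrative choices, not the paper's exact protocol): it compares two pairs of bars with the same absolute length difference but different relative differences.

```python
import torch
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).eval()

def bar_image(length, size=224, thickness=8):
    """A white horizontal bar of the given pixel length, centred on a
    black canvas (a hypothetical stimulus for illustration)."""
    img = torch.zeros(1, 3, size, size)
    top = size // 2 - thickness // 2
    left = (size - length) // 2
    img[:, :, top:top + thickness, left:left + length] = 1.0
    return img

def rep_distance(a, b):
    """Euclidean distance between flattened convolutional features."""
    with torch.no_grad():
        fa = model.features(a).flatten()
        fb = model.features(b).flatten()
    return torch.dist(fa, fb).item()

# Same absolute difference (16 px) at two base lengths. Weber's law
# predicts the short pair should look more different, i.e. yield the
# larger representational distance, because its relative change is larger.
print("short pair (32 vs 48 px): ", rep_distance(bar_image(32), bar_image(48)))
print("long pair (160 vs 176 px):", rep_distance(bar_image(160), bar_image(176)))
```

A network "shows" Weber's law in this framing if its representational distances track the relative, rather than absolute, change across many such pairs.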

The study observed that deep neural networks trained on specific tasks (like scene parsing) can explain the responses in functionally specific brain regions (like the occipital place area that is known to be involved in recognizing navigational affordances) better than a deep neural network trained on a different task. The study concluded by stating that their analyses can help researchers build more robust neural networks that not only perform better but are also immune to "adversarial attacks" that aim to derail them. (IANS/JC)
