Babies are smarter than AI

By Web Desk
A baby smiling while lying on a bed. — Unsplash

These days, artificial intelligence (AI) is pervasive, shaping our daily lives through applications such as voice recognition and self-driving cars. However, a recent study found that infants outperformed AI on key tasks probing psychological reasoning.

According to a recent New York University study published in the journal Cognition, infants may be better than artificial intelligence at perceiving the motive behind a person's actions. By demonstrating a clear distinction between cognition and computation, the study emphasises both the limitations of current AI and the importance of advancing the technology.

"Adults and even infants can easily make reliable inferences about what drives other people’s actions," said an assistant professor in New York University’s Department of Psychology and the paper's senior author, Moira Dillon, PhD. "Current AI finds these inferences challenging to make."

The novel approach of pitting infants and artificial intelligence against each other on the same tasks allows researchers to better characterise infants' intuitive knowledge of other people and to suggest ways of incorporating that knowledge into AI.

Babies are fascinated by other people, as shown by how intently they watch and interpret them. They also engage with others and grasp basic human emotions. Their ability to form goals and express particular preferences lays the groundwork for human social intelligence.

To examine the differences between infants and AI, the study tested more than 80 11-month-old infants and compared their responses with those of a "state-of-the-art learning-driven neural-network model." The team used the "Baby Intuitions Benchmark" (BIB), a suite of six tasks probing commonsense psychology.

BIB was developed to allow the performance of infants and machines to be compared and, more importantly, to provide an empirical basis for building more human-like AI. Infants watched videos over Zoom showing simple animated shapes moving around the screen, retrieving objects and performing other actions that mimicked human behaviour and decision-making.

The researchers also built, trained, and tested learning-driven neural-network models, which enable computers to recognise patterns and approximate aspects of human intelligence. They found that infants recognised human-like intentions even in the simple actions of animated shapes: despite continual changes in context, infants recognised the repeated retrieval of the same objects on the screen and looked longer at these events, suggesting recognition.

The AI models, by contrast, showed no sign of such recognition. For now, this capacity for reasoning about others' intentions appears to be uniquely human, and it is what enables us to collaborate and interact with one another.

Dillon concluded: "A human infant’s foundational knowledge is limited, abstract, and reflects our evolutionary inheritance, yet it can accommodate any context or culture in which that infant might live and learn."