For everything we know about our brains and bodies, plenty remains a mystery, and speech is no exception.
We've come leaps and bounds in the last 30 years, with speech-recognition software as the proof, but scientists and psychologists are still working to understand exactly how it all works.
So what’s the point of understanding our capacity for language?
Well, for one, new findings on the subject could reshape the way we approach technology, especially devices and software that use speech-rec, like our IVR systems, and lead to far more accurate and powerful tools.
To understand our capacity for speech, though, we need to first understand how and when we learn to speak, and that means understanding children.
For a long time we've regarded language-learning as something that comes later in a child's development, but according to a recent article in Science Daily, it may begin even earlier than we thought.
A new study out of New York University's Department of Psychology suggests that infants as young as nine months old can distinguish speech from non-speech sounds, whether produced by humans or animals, even if they don't understand words just yet.
Assistant professor and study leader Athena Vouloumanos told Science Daily, “Our results show that infant speech perception is resilient and flexible… This means that our recognition of speech is more refined at an earlier age than we’d thought.”
The infants in the study were able to distinguish non-speech sounds (human whistles and throat-clearing, parrot squawks and chirps) from human words (spoken in recordings by both humans and parrots).
While our IVR target market doesn't exactly include infants, studies like this one may offer hints about how the human brain wraps itself around language, which could mean better speech-rec in the future and perhaps even computers that produce language of their own.
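For the curious, here's a rough idea of what the speech/non-speech distinction looks like when a machine makes it. This is a minimal sketch, not the study's methodology or anything our IVR systems actually run: it flags an audio frame as "speech-like" when its short-time energy and zero-crossing rate fall in ranges typical of voiced speech, and every threshold in it is an illustrative assumption.

```python
# Toy speech/non-speech discriminator: a crude heuristic, not the NYU
# study's method. Thresholds below are illustrative assumptions only.
import numpy as np

def frame_features(frame: np.ndarray) -> tuple[float, float]:
    """Return (short-time energy, zero-crossing rate) for one audio frame."""
    energy = float(np.mean(frame ** 2))
    # Fraction of adjacent samples where the signal changes sign.
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
    return energy, zcr

def is_speech_like(frame: np.ndarray,
                   energy_floor: float = 1e-4,
                   zcr_max: float = 0.3) -> bool:
    """Voiced speech tends to be energetic but with a lower zero-crossing
    rate than hisses, whistles, or squawks."""
    energy, zcr = frame_features(frame)
    return energy > energy_floor and zcr < zcr_max

# Example: a 100 Hz tone (pitch-like, "speech-ish") vs. white noise.
sr = 16000
t = np.arange(sr // 10) / sr                 # one 100 ms frame
tone = 0.1 * np.sin(2 * np.pi * 100 * t)     # decent energy, low ZCR
noise = 0.1 * np.random.randn(t.size)        # decent energy, high ZCR

print(is_speech_like(tone))    # True
print(is_speech_like(noise))   # False
```

Real speech-rec front ends use far richer features than these two numbers, of course, but the sketch shows why the infants' feat is impressive: separating a parrot's squawk from a parrot speaking a word is a much subtler version of this same classification problem.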