Sometimes the best way to explain something is by comparison. So today I’d like to discuss hearing, and how our perception of sound rests on a great deal of unconscious processing. I think this is a good way of drawing a comparison to the limitations of our vision.
It is my belief that most photographers assume their pictures aren’t good simply because they need more practice (which is true, up to a point). But there are serious limitations to our vision, and once we understand them, things will never be the same again for us.
So let’s talk about hearing today.
In her book Sound, Bella Bathurst describes many of the psychoacoustic issues surrounding hearing.
When fitted with hearing aids for the first time, many hearing-loss sufferers find them too loud. We tend to trust that what we hear is a true representation of the sounds around us, but this is not so. In the case of someone with hearing loss, an adaptation has taken place: they have become habituated to lower levels of auditory experience, and when normal volume levels are reinstated, they find them overwhelming.
Our brain processes the sound we hear, and our ‘auditory processing’ filters a lot of unwanted sound out for us. For instance, in a loud environment such as beside a motorway, we are often still able to pick out the words someone is speaking to us over the rumble of background traffic. The raw sounds may be much louder than the voice, and yet our experience of them is quite different.
When someone who has suffered hearing loss is given aids, it takes months for their brain to adapt to processing the new sound. Loud environments can be very difficult because, to the patient, everything sits at the same level for a while. Voices no longer stand out over background traffic; the traffic is no longer a background noise at all, but just as forward in their experience as the person talking. The brain needs time to ‘learn to hear’ again.
Should the patient continue to use the aids for a long time, things settle down. Their auditory processing ‘learns’ to separate sounds from each other, and also to filter out, or place in the background, the sounds it knows aren’t important.
This is the key point: our hearing innately filters out what is not needed. This is a fundamental function of hearing for all of us. We do not hear what is there; we hear a highly filtered, processed version of the sounds around us.
The same is true for vision. Our brains take in the visual information and ‘construct’ a representation of what is happening around us. What was before our eyes and what we experience are never the same.
We ‘construct’ what we see. It happens so quickly that we aren’t even aware of doing it. This is why I have discussed the Necker cube before: this wireframe cube allows us to experience the ‘construction’ first-hand.
Let’s consider the cube then. We all see this cube, right?
Of course there is no cube there. It is only white lines on a black background. But we have ‘constructed’ a cube in our minds.
This is the first level of abstraction, happening innately. But even this description of what we are seeing isn’t entirely accurate. Not only does the cube not exist; neither do the white lines and the black background. In truth we are staring at a bunch of black and white pixels.
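To underline just how little is really ‘there’, here is a minimal sketch that draws a Necker cube as nothing more than twelve white line segments on a black background. (Python and matplotlib are my assumptions for illustration only; the original image wasn’t necessarily made this way.)

```python
# A minimal sketch: a Necker cube is nothing but line segments.
# (Python + matplotlib chosen purely for illustration.)
import matplotlib.pyplot as plt

# Two overlapping squares, offset diagonally - the two 'faces'.
front = [(0, 0), (2, 0), (2, 2), (0, 2)]
back = [(1, 1), (3, 1), (3, 3), (1, 3)]

fig, ax = plt.subplots(figsize=(4, 4))
fig.patch.set_facecolor("black")
ax.set_facecolor("black")

def draw_square(pts):
    # Close the square by repeating the first corner.
    xs = [p[0] for p in pts] + [pts[0][0]]
    ys = [p[1] for p in pts] + [pts[0][1]]
    ax.plot(xs, ys, color="white", linewidth=2)

draw_square(front)
draw_square(back)

# Connect corresponding corners of the two squares.
for (x1, y1), (x2, y2) in zip(front, back):
    ax.plot([x1, x2], [y1, y2], color="white", linewidth=2)

ax.set_aspect("equal")
ax.axis("off")
plt.show()
```

Notice that nothing in the code says ‘front face’ or ‘back face’. The depth, and the cube itself, exist only in our construction of them.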
We can experience this construction in our mind’s eye by forcing the front and back walls of the cube to invert: what we perceived as the front wall becomes the back wall, and what was the back wall becomes the front. Try it - stare at the cube, decide which wall is the back wall, and then flip it.
If you’re still not with me, then consider the four drawings below. All are cubes, though the top two are the hardest to ‘see’. If you work at them, you’ll eventually ‘see’ them. Except you’re not really ‘seeing’ them: you are ‘constructing’ them. This is ‘visual construction’ at play:
This allows you to experience the construction that is an innate part of your vision. It is happening all the time, without you knowing it, and it is a reminder to me that what I am seeing is really a processed version of what is there. I find this quite interesting, because it leads me down a rabbit hole: what is ‘reality’, if we can’t entirely trust our senses?
As we all know, cameras do not see the way we see. They record a 2D representation of what we call ‘reality’, and they also record tone very differently. Our vision compresses tone (which is why we don’t need ND grads to control sky tones), but a camera does not: its sensor simply clips whatever falls beyond its range. So we use grad filters to compress the tonal range entering the camera.
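To make the difference concrete, here is a toy comparison (again a Python sketch, with made-up numbers and a simplified logarithmic curve; it is not a model of any real sensor or of the eye) of a linear response that clips bright sky tones against a compressive response:

```python
# Toy comparison: a 'camera-like' linear response that clips
# versus an 'eye-like' compressive response.
# (Made-up numbers and curves, for illustration only.)
import numpy as np

# Scene luminance, in arbitrary units: the last two values are 'sky',
# far brighter than the 'land' values before them.
scene = np.array([0.05, 0.2, 0.4, 0.8, 3.2, 6.4])

# Linear response: anything above 1.0 (the sensor's limit) is clipped.
camera = np.clip(scene, 0.0, 1.0)

# Compressive response: a log-like curve squeezes the highlights
# into range instead of clipping them.
eye = np.log1p(4.0 * scene) / np.log1p(4.0 * scene.max())

for s, c, e in zip(scene, camera, eye):
    print(f"scene {s:5.2f} -> camera {c:4.2f}   eye {e:4.2f}")
```

In the toy output, the two brightest ‘sky’ values come out identical from the linear response because they are clipped, while the compressive curve still keeps them apart. A grad filter’s job is simply to pull those sky tones back down into the range the sensor can record.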
I find this all very fascinating. What we see and what is really there are two separate things. Our senses are highly adaptable, and they cannot be trusted to show us what is there. In knowing this, we have a better understanding of why our images often do not come out the way we ‘saw’ them at the point of capture.