A lot of research on echolocation over water was done by Björn Siemers, the head of my former lab in Seewiesen. In his own PhD work, Björn showed that among five sister species of bats, the ones that foraged over water had the echolocation calls with the smallest bandwidth, the "least sophisticated" echolocation calls. Those two species also showed the poorest performance when it came to detecting prey in front of a structured background (Siemers & Schnitzler, Nature 2004). Björn and colleagues also found that a target floating on water produces a longer and louder echo than a target suspended in air. This acoustic mirror effect makes a prey item on water easier to detect with echolocation (Siemers et al., Naturwissenschaften 2005). On top of that, a water surface generally sounds to a bat like a hole in the ground, because all the echolocation calls get reflected away from the bat (Greif & Siemers, Nat Comm 2010). Any item floating on the water will be the only thing producing an echo, and thanks to the acoustic mirror effect, an even louder echo than in air. So trawling bats don't have to deal with many background echoes, because water acts as a mirror. But what about waves?



If you're a trawling bat, a bat that hunts directly above the water surface, you have to process both the structure and the movement of the waves. The structure is described by the spatial frequency, which quantifies change as a function of space, or position. The movement of the waves is described by their temporal frequency. We can measure it by focusing on one particular spot on the surface and seeing how fast the waves roll by.
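The distinction can be made concrete with an idealised sinusoidal wave. The sketch below (a toy model with made-up parameter values, not measured wave data) shows how spatial frequency lives in the position term and temporal frequency in the time term:

```python
import math

def wave_height(x, t, spatial_freq, temporal_freq, amplitude=1.0):
    """Height of an idealised sinusoidal wave at position x (m) and time t (s).

    spatial_freq: cycles per metre -- how finely structured the surface is.
    temporal_freq: cycles per second -- how fast crests roll past a fixed spot.
    """
    return amplitude * math.sin(2 * math.pi * (spatial_freq * x - temporal_freq * t))

# Watch one fixed spot (x = 0): it oscillates at the temporal frequency,
# completing one full cycle every 1/temporal_freq seconds.
f_t = 2.0  # Hz, assumed example value
h0 = wave_height(0.0, 0.0, spatial_freq=5.0, temporal_freq=f_t)
h1 = wave_height(0.0, 1.0 / f_t, spatial_freq=5.0, temporal_freq=f_t)
# h0 and h1 agree: after exactly one period, the spot is back where it started.
```

Freezing time instead (fixing `t` and varying `x`) would reveal the spatial frequency: five crests per metre in this example.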


In my studies, I asked the question: how sensitive, exactly, is echolocation to spatial and temporal frequency?


Being able to distinguish different frequencies would be very beneficial for detecting prey on moving water surfaces. But echolocation doesn't exactly seem well-suited for this task, for two reasons: one "spatial" limitation and one "temporal" limitation.


We need to keep in mind that for hearing, we don't have a nicely spatially arranged sensory epithelium like the retina in our eyes. We have the tonotopically arranged cochlea. Tonotopic means that the place of a sensory cell codes for the pitch of a tone, not its direction. We don't get any direct cues about spatial layout; everything needs to be computed somewhere in the brain. Luckily, we've got two ears, so we can use time and intensity differences between the left and the right ear. But imagine how much tinier a bat's head is! The differences between the sounds arriving at one ear and the other are also much smaller than in humans.
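A rough back-of-the-envelope calculation shows how much smaller those differences get. For a sound coming from directly to one side, the extra path length to the far ear is at most about the head width; dividing by the speed of sound gives an upper bound on the interaural time difference (ITD). The head widths below are illustrative assumptions, not measured values:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C

def max_itd_seconds(head_width_m):
    """Upper-bound interaural time difference: extra travel time across the head."""
    return head_width_m / SPEED_OF_SOUND

human_itd = max_itd_seconds(0.18)   # ~18 cm human head width (assumption)
bat_itd = max_itd_seconds(0.015)    # ~1.5 cm bat head width (assumption)

print(f"human ITD upper bound: {human_itd * 1e6:.0f} microseconds")
print(f"bat ITD upper bound:   {bat_itd * 1e6:.0f} microseconds")
```

With these assumed head sizes, the bat's maximum ITD comes out roughly an order of magnitude smaller than ours, which is the point of the comparison: the brain has far less raw timing difference to work with.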












The second thing we need to remember is that bats don't get a constant stream of information; they only sample the environment at certain points in time. Above you can see a spectrogram of a typical echolocation sequence*. The colours code for loudness. On the y-axis we've got frequency in kilohertz (note that we are well above 20 kHz, the human limit for high frequencies; that's why we call everything above it ultrasound). On the x-axis we've got time in milliseconds. You can see that the calls are frequency modulated, i.e. they start at a very high frequency and then sweep steeply down through many lower frequencies within a very short call duration. But what interests us now are the temporal parameters. You can see that both the call length and the inter-call interval are quite flexible, but overall the calls are short and the intervals are long. That means that changes in a target over time (e.g. movement of prey) can only be detected by comparing an entire sequence of call-echo pairs.
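This sampling constraint has a familiar consequence from signal processing: surface motion much faster than the call rate cannot be tracked unambiguously, because every echo lands on nearly the same phase of the movement. The sketch below illustrates this with an assumed call rate of 10 calls per second (a made-up round number, not a measured value for any species):

```python
import math

call_rate = 10.0        # calls per second (illustrative assumption)
dt = 1.0 / call_rate    # one snapshot of the scene per call

def surface(t, temporal_freq):
    """Height of an oscillating surface point at time t (arbitrary units)."""
    return math.sin(2 * math.pi * temporal_freq * t)

# A 2 Hz wave sampled at 10 calls/s: successive echoes clearly differ,
# so comparing call-echo pairs reveals the motion.
slow = [surface(n * dt, temporal_freq=2.0) for n in range(10)]

# A 10 Hz wave sampled at 10 calls/s: every call catches the same phase,
# so the surface looks frozen to the bat.
fast = [surface(n * dt, temporal_freq=10.0) for n in range(10)]
```

In the `fast` case all samples are (numerically) identical, even though the surface is moving the whole time; only the `slow` wave leaves a trace in the sequence of echoes.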


Now that we're familiar with the challenges that echolocating bats face when they hunt above water, we can have a look at the method I'm using to study their sensitivity: psychophysics.


The field of psychophysics was founded in 1860 by the German philosopher, psychologist and physicist Gustav Fechner and is to this day the most important behavioural method for studying perception. You might know the Weber-Fechner law, which describes the relationship between a physical stimulus and the corresponding sensation, or more specifically the relation between the actual change in a physical stimulus and the perceived change.
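In its classic form, the law says that perceived sensation grows with the logarithm of stimulus intensity: S = k · ln(I / I0), where I0 is the detection threshold and k a constant. The constants below are arbitrary illustrative choices; the point is the logarithm's characteristic behaviour:

```python
import math

def sensation(intensity, k=1.0, threshold=1.0):
    """Weber-Fechner law: S = k * ln(I / I0).

    k and threshold (I0) are arbitrary illustrative constants here.
    """
    return k * math.log(intensity / threshold)

# The hallmark of the law: doubling the stimulus adds the same
# increment to sensation regardless of the starting level.
step_low = sensation(2.0) - sensation(1.0)      # doubling from a weak stimulus
step_high = sensation(200.0) - sensation(100.0)  # doubling from a strong one
# step_low == step_high == k * ln(2)
```

This is why equal *ratios* of physical change feel like equal *steps* of perceived change.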


Psychophysics actually comprises a whole bunch of methods, but what they all have in common is that in the end you want to measure a psychometric function, to see that relationship between "psycho" and "physics" in a nice curve. In humans, you would play a stimulus, e.g. a loud tone at 440 Hz, to a person and ask them if they can hear it. Then you play them a fainter tone, and so on. Once you find the threshold for that tone, you repeat the entire process with another frequency. In the end you get a psychometric function that shows you exactly how well that person can perceive sounds of different frequencies: you have recorded an audiogram. Since we can't simply ask animals whether they hear a sound or not, we first have to train them to react to the sound in a certain way. This is done most easily by operant conditioning with a food reward. Once they show the correct response, we change the stimulus until the animal "responds" with behaviour that signals "I cannot perceive that stimulus anymore". In my diploma thesis, I recorded an audiogram of six bats. Training the bats to respond to the stimuli often takes several months, or sometimes even more than a year. For my PhD projects, I trained another six bats each to measure psychophysical functions for sensitivity to spatial and to temporal frequency.
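To make "psychometric function" concrete: it maps stimulus level to the probability of a "yes, I perceive it" response, and the threshold is conventionally read off at a fixed probability. The logistic shape and the numbers below are a common textbook modelling choice, not the specific analysis used in my experiments:

```python
import math

def psychometric(level, threshold, slope=1.0):
    """Probability of detecting a stimulus, modelled as a logistic curve.

    'threshold' is the level detected 50% of the time; 'slope' sets how
    sharply performance rises around it. Both values here are illustrative.
    """
    return 1.0 / (1.0 + math.exp(-slope * (level - threshold)))

# At the threshold itself, detection probability is exactly 0.5 ...
p_at_threshold = psychometric(level=30.0, threshold=30.0)

# ... well above it, the stimulus is detected almost every time.
p_above = psychometric(level=40.0, threshold=30.0)
```

Repeating this measurement across stimulus frequencies, and plotting the threshold for each, is exactly what yields an audiogram.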



* In my work I only focus on FM bats, because for the other group, the CF bats, a lot of work has been done already. FM stands for frequency modulated and CF stands for constant frequency, both describing the frequency content of a typical echolocation call of a bat.
