How to make an autonomous vertical array & time-sync SoundTraps

Guest blog post by bioacoustics PhD student Chloe Malinka, @c_malinka

I recently had an opportunity to study the bioacoustics of a deep-diving toothed whale. I was interested in collecting passive acoustic recordings on an array containing multiple hydrophones. With this, I planned to detect echolocation clicks, classify them, and localise them.

However, this presented me with a challenge: how do I deploy an array and collect recordings at several hundred meters of depth, where I anticipate my animal of interest to be? With traditional towed arrays, the cables all connect to recording gear on the boat, where all channels are usually recorded on the same soundcard. If I want to go deep, perhaps I could use a vertical array, but if this is tethered to a [drifting] boat, then it loses its ‘vertical’ descriptor, because the drift of the boat and differential currents will drag the array underwater. Plus, 1000 m of cable is heavy and expensive, which means a big boat is needed, which is also expensive. …If only I had an autonomous array that I could send into the deep, without having to worry about its connection to a boat. Furthermore, I would need this array to be vertically oriented, and as straight as possible, to allow for minimal errors in acoustic localisations.

I looked around the lab, and I came across a few (okay, 14) SoundTraps. These are autonomous hydrophones made by Ocean Instruments, a company based out of New Zealand. I’ve used these devices many times before and appreciated their user-friendliness, low noise floor, and wide dynamic range.

I got in touch with their director, who had the foundations in place for a “Master / Slave” dynamic. This means that so long as all the devices on an array are connected with a cable, the “Master” can send out an electrical signal to all of the “Slaves” on the array. These pulses are sent out at a rate of 1 per second. The Master and each Slave record the sample number at which it either sent or received each pulse. This information is stored and can be used to time-align the audio recordings of all devices on the array after data has been collected, to sample-level accuracy. In other words, we now have a way to treat these autonomous devices as if they were collecting audio data on the same soundcard.
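To make this concrete, here is a minimal Matlab sketch (with made-up numbers and hypothetical variable names – the SoundTrap’s actual metadata format may differ) of how the recorded pulse sample indices let you map one device’s samples onto another’s timebase:

    % Minimal sketch with made-up numbers and hypothetical variable names.
    masterPulseSamp = [12000; 588000; 1164000];   % e.g. 1 pulse/s at 576 kHz
    slavePulseSamp  = [11512; 587513; 1163515];   % same pulses, Slave's clock

    offsetSamp = masterPulseSamp - slavePulseSamp;   % offset at each pulse
    % If both clocks ran at exactly the same speed, offsetSamp would be
    % constant; any change between consecutive pulses is the relative clock
    % drift accumulated over that one-second interval.
    driftPerSec = diff(offsetSamp);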

How did I do this, and how can you do it, too? Below, I’ll outline what was needed to build the vertical passive acoustics array, how to set it up for deployment, and then share my code library which allows for subsequent time-alignment of audio files after recordings have been made.

Building the array

A multi-channel branching array for SoundTraps
  1. Get one long waterproof cable with at least 3 cores. This will be what connects all the SoundTraps, allowing the pulse from the Master to be transmitted to all of the Slaves. I used a cable that had a layer of braided Kevlar inside, giving it a high breaking strength (1000 kg). While this may have been overkill, it is handy to have a cable strong enough to take both the tension from the weights at the bottom of your array and the strain of being hauled aboard.
  2. Each SoundTrap has the female side of an 8-pin micro-SubConn connector. We need to space the SoundTraps along the cable, with branches coming off that connect the black (pin 1), orange (pin 8), and red-and-black (pin 5) pins of each SoundTrap’s rear SubConn connector, allowing the pulse from the Master to be transmitted to all of the Slaves. Obtain one branch for each SoundTrap (I used 8-pin non-bulkhead micro-SubConns, found here).
  3. Use a scalpel to carefully cut small sections of the outer rubber layer off, being careful not to cut through the Kevlar.
  4. Bend the Kevlar braid aside and pull out the 3 wire colours of interest. Solder a 3-way connection between the wires you just exposed and the branching male SubConn cable. Make sure that, for example, colour ‘X’ on your long cable consistently connects to the wire corresponding to the black pin on the SubConn cables.

    3-way soldered connection between long connecting cable and branching SubConn.
  5. As a safety measure, I put heat shrink, and then an electrical coating (I used “Scotchkote”) around these soldered connections. Position the Kevlar braid back around this connection.
  6. I then had some 3D-printed moulds custom made, into which I poured a layer of resin (I used this “Scotchcast”) to provide waterproofing and structural support.

    Fill this with a resin or polyurethane-like substance to waterproof the connection.
  7. You now have waterproofed connections around your breakouts. Remember to measure the distances between each of your sensors (the tips of the SoundTraps).

Setting up a SoundTrap Array for Deployment

  1. Set one of your devices as the ‘Master’ (this option will be available in the next SoundTrap Host software release). The rest are ‘Slaves’ by default.
  2. Configure all SoundTraps on the same computer. Prior to each deployment, click View –> Service –> sync SUDAR clock.
  3. Set all SoundTraps to the same sample rate (otherwise the time-sync will not work).
  4. Plug all of your SoundTraps into the array, and turn them on with a remote control.
  5. As a sanity check, lightly tap each Slave against the Master a few times. This way, you can always go back and double-check that your time-syncing is working as expected (a sketch of how to check these taps afterwards follows this list). I did this just before deploying and just after recovering my array.
  6. Deploy your array. I had several kilograms of weight on the bottom, and buoys on the top of the array, to encourage its vertical orientation. I also attached tilt sensors to the SoundTraps so that any deviations from an array hanging perfectly straight (at 90°) could be accounted for when calculating localisation errors later on.
  7. When downloading your data in the SoundTrap Host software, check the box for “zero-fill dropouts” under the Tools menu. Essentially, every now and then, the SoundTrap will skip over tiny chunks of data when writing files. Its default (unchecked) behaviour is to simply skip over these gaps. However, we want to keep track of time as accurately as possible, and checking this box tells the SoundTrap to fill any of these tiny gaps (a couple of samples) with zeros, instead of skipping them altogether. Note that you can only set this when downloading data from the device. If you download a file from a SoundTrap without this box checked and then erase the data from the device, you cannot recover this information from the already-downloaded file.
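As a quick post-hoc check on the taps from step 5 (the file name and window below are hypothetical), you can cross-correlate a short window around a tap between the Master channel and a Slave channel of a time-synced file; if the sync worked, the correlation peak should sit at, or very near, zero lag:

    % Cross-correlation check on a pre-deployment tap in a synced file.
    [x, fs] = audioread('synced_deployment.wav');  % hypothetical synced file
    win = round(28.5*fs) : round(29.0*fs);         % 0.5 s window around a tap
    [c, lags] = xcorr(x(win,1), x(win,2));         % channel 1 = Master
    [~, iPk] = max(abs(c));
    fprintf('Residual offset: %d samples (%.3f ms)\n', ...
            lags(iPk), 1000*lags(iPk)/fs);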

Time Syncing Audio Files

My scripts for time-aligning the WAV audio recordings of multiple connected SoundTraps are available here: sync_lib_Chloe_publish. All you have to do is arrange your data so that the folders containing each SoundTrap’s files are nested in deployment-level folders, tell Matlab where this folder is, and press play. You, the user, interact with ‘run_wav_timesync.m’; the rest of the items in this folder are functions called within this script. The output is a series of time-synced multi-channel WAV files from your deployment.
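For orientation, a hypothetical invocation might look like the following (the folder names are examples only, and the deployment path is set inside the script itself rather than passed as an argument):

    % Hypothetical usage sketch; arrange data as deployment_01/ST1/,
    % deployment_01/ST2/, ... before running.
    addpath('sync_lib_Chloe_publish');   % the script and its nested functions
    edit('run_wav_timesync.m');          % set the deployment folder path here
    run_wav_timesync;                    % writes time-synced multi-channel WAVs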

The audio data were recorded on multiple soundcards, and we need to time-align them to allow for acoustic localisation. However, each SoundTrap is subject to internal clock drift (as all electronic devices are); I have found the SoundTrap’s clock drift to be ~2 seconds per day. We need to account for the independent clock drifts of the devices while maintaining sample-level accuracy in time alignment across the whole deployment.

If we record at the highest available sample rate (576 kHz) and assume that this clock drift is linear, a drift of 2 sec/day works out to a rate of 800 samples per minute (576,000*2/24/60) by which devices could drift apart from one another. At this sample rate, 800 samples corresponds to 1.4 ms, which translates to an error in your acoustic position estimates that grows by ~2.1 m per minute. Oh dear – we can’t do much with that!
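If you want to sanity-check those numbers, here is the same arithmetic as a short Matlab snippet (assuming a linear 2 s/day drift and a nominal sound speed of 1500 m/s):

    % Worked version of the arithmetic above.
    fs        = 576e3;                     % sample rate [Hz]
    driftRate = 2 / (24*60*60);            % 2 s/day, expressed as s per s
    driftSampPerMin = fs * driftRate * 60; % = 800 samples per minute
    errSec    = driftSampPerMin / fs;      % = 1.4 ms of misalignment per minute
    errMetres = errSec * 1500;             % ~2.1 m of localisation error
    fprintf('%.0f samples/min -> %.2f ms -> %.1f m per minute\n', ...
            driftSampPerMin, 1000*errSec, errMetres);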

Additionally, you might expect the clock drifts of all SoundTraps on the array to be approximately equal, but if your array is like mine, the depth range between devices can be large enough to give several degrees of difference in ambient temperature (and clock drift can be temperature-dependent).

So, if clock drift didn’t exist (or if all the clocks drifted at the same speed), we could just tap the SoundTraps together at the start of a deployment, figure out the offset between devices, and apply this as the time-offset for the entire deployment. BUT, due to multiple variables, including temperature and variation between the individual devices themselves, the clocks on the devices do not run at the same speed. Thus, each SoundTrap is continually going out of sync with every other SoundTrap.

To address this hurdle, I align all of my recordings at every emitted ‘Master’ pulse (once per second), and stitch the resulting wavfiles together. This means that the clock drifts on the separate devices, while still occurring, are now accounted for. This allows you to be certain that the time delays between a signal received on your channels reflect the actual location of an animal – that is to say, the time-of-arrival differences are now reliable enough for you to be confident in the results of your acoustic localisations. The output from the script is a series of time-synced files which you can import into programs that support multi-channel acoustic data, such as Ishmael, Raven, PAMGuard, etc. Then you can go on to conduct acoustic localisations and determine the ranges of sounds of interest relative to your array.
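Conceptually (a simplified sketch, not the library’s actual code), the per-pulse stitching looks something like this, reusing the pulse-index variables from the earlier sketch plus a hypothetical Slave recording slaveWav:

    % Conceptual sketch: for each one-second inter-pulse interval, take the
    % Slave's samples between its two received pulses and stretch/squeeze
    % them onto the number of samples the Master counted in that interval.
    aligned = [];
    for k = 1:numel(masterPulseSamp)-1
        seg   = slaveWav(slavePulseSamp(k) : slavePulseSamp(k+1)-1);
        nWant = masterPulseSamp(k+1) - masterPulseSamp(k);  % Master's count
        segRs = interp1(1:numel(seg), seg, linspace(1, numel(seg), nWant)).';
        aligned = [aligned; segRs];  %#ok<AGROW> fine for a short sketch
    end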

Example of a time-synced file opened in Audacity (free software handy for looking at multi-channel recordings). Here, the top channel is the Master, and you can see time-synchronised taps between the Master and 3 different Slaves on the other 3 channels.

Note that the Master/Slave feature was available to me via a private beta-release of the updated SoundTrap firmware. However, this feature will be made available in the next update from Ocean Instruments – keep an eye on their website. I’ll update my library whenever this happens, should there be any variations in how the time-sync information is stored/recovered.

Take-home messages

In summary, this Master/Slave dynamic was so handy because:

  • It allowed me to deploy an autonomous hydrophone array down into the deep ocean.
  • I am now able to treat several autonomous sound recorders as if they were recording on the same soundcard.
  • The master/slave feature worked on multiple versions of the SoundTraps that I had available to me (these included versions 1.6 and 1.7 of ST300HF).
  • The cable only needed to be as long as the distance between the outermost hydrophones. This means that I did not have to worry about 500 m of cable in order to listen at that depth.
  • Because the array allows for the localisation of animals, and since I can identify on-axis clicks, I can now describe the acoustic parameters and behaviours of my deep-diving animal of interest (once I complete the analysis and publish it, watch this space…).

Questions, comments? Get in touch.

OK, cheers!

