
FAQs

What is calibration?

When you are measuring something accurately, you need to know that the results are really what they say they are. Therefore, it’s necessary to compare your measurement capability to some reference. This might be a simple hand-held calibrator, or reference to an absolute standard in a National laboratory. The choice depends on the purpose for which your measurements are to be used.

Why should you calibrate your equipment?

Modern instrumentation is very accurate and reliable. But it’s not perfect and can go wrong, sometimes in ways which are not completely obvious. This is exacerbated by the fact that precision instrumentation is used in hostile environments, such as building sites, where rain, dust, handling and corrosive substances can affect your measurements. For example, dropping an accelerometer on the ground may break a crystal – the device will still have an output, but it may not be as linear as it was when delivered. A calibration will allow you to check this and have confidence going forward. Think of it as a kind of MOT test for your instrument.

Where are calibrations performed?

There are many laboratories around the UK which can offer calibration of sound & vibration measurement equipment, and the calibration will be performed in a laboratory under climatically controlled conditions, using equipment which is itself traceable to national laboratories. Of course, calibration can also be performed in the field, using portable calibrators, but the uncertainties will be larger than a laboratory calibration.

How long does calibration take?

This depends on what is being calibrated. A four-channel instrument will take longer than a single channel one. A complete sound level meter will take longer than a microphone. Also, a lot of time is spent in the laboratory setting up for a specific instrument type. You can minimise the turnaround time by planning with the laboratory in advance – for example, if several sound level meters of a particular type are to be done, the tests are set up just once, saving time. Also, let the laboratory know the timescales – there’s no point sending in a meter if their input shelf is already full.

How often should I get my equipment calibrated?

The standards are not prescriptive on this, but as a rule of thumb, instruments should be re-calibrated every two years, and calibrators annually. For example, the recommendation in BS 4142:2014 follows this approach for industrial noise measurement. However, if your measurements are really critical, you may wish to calibrate more frequently.

What is a calibration standard?

Most measurement instrumentation will be built to a national or international standard – for example, BS EN 61672-1 defines the characteristics of sound level meters. A ‘calibration standard’ will define how that instrument is calibrated, so in this case, BS EN 61672-3 documents the procedures for calibration. Very often, calibration standards don’t exist for some types of instruments, so laboratories will perform a series of tests, using their own procedures, to check conformance with the instrument standard.

What is traceable calibration?

When a device is calibrated, you need to know that the reference equipment is also correct within certain tolerances. This sets up a hierarchy of calibration, where you can trace your measurements up to a higher standard. The reference equipment will itself be calibrated behind the scenes, and the documentation allows you to follow that hierarchy typically to a national standard. In practice, all calibrations will have some form of traceability.

What is UKAS calibration?

UKAS is the United Kingdom Accreditation Service which is a body which audits the procedures and performance of calibration laboratories. UKAS accreditation therefore ensures the laboratory meets minimum standards, often laid out in instrumentation standards. Note that a UKAS calibration may be no better than a well-documented traceable calibration, but it will certainly be more expensive. However, it will often be required if there are legal implications for your measurements.

How do I calibrate a remote microphone system?

Microphone checks and calibration for remote microphone systems

Condenser microphones are extremely stable and sensitive transducers when used in their normal operating conditions. When used for long-term noise monitoring, they can be exposed to more extreme environments (e.g. high temperatures, high windspeeds, moisture, etc) which may cause a change in sensitivity, and even damage. The nature of this type of monitoring often precludes a site visit to calibrate the microphone in the normal way, such as using a sound level calibrator or pistonphone mounted directly on the microphone capsule.

We therefore need a method of checking that all is well at the microphone end, so this article describes two methods that can be used.

SysCheck

SysCheck, or Charge Injection Calibration, simply injects an electrical signal into the microphone circuit, via the preamplifier, to check the signal path integrity. This includes the microphone capsule itself, which means that any change in the resulting measured signal can be used to deduce whether the microphone capacitance has changed, which might be an indicator of damage (e.g. corrosion on the diaphragm, or other physical damage).

It’s important to note that this is not a ‘calibration’ as such, it is simply a means to check that nothing has changed out of tolerances, which can be preset by the measuring system. It is not a traceable acoustical signal.

In order to inject the signal, an additional connection is required on the preamplifier, and this is normally available via an industry standard 7-pin Lemo connector, used by many MTG power supplies. This method, therefore, precludes a simple co-axial connection to the microphone, such as IEPE, which is used solely to provide power to the preamplifier, and return the measured signal. Some front-ends have this calibration method built-in, such as Apollo from Sinus, where the Samurai software can provide the necessary signal on the correct pin of the Lemo connector.

Electrostatic Actuation

A more stable and repeatable calibration can be achieved by using an electrostatic actuator mounted on the microphone itself. This takes the form of a plate mounted very close to the microphone diaphragm, and normally replaces the standard protection grid. Some microphones incorporate the actuator in the weather protection system (e.g. rain cover). A variant is to electrically isolate the top plate of the standard microphone grille, so this doubles as the actuator (e.g. the MK255 capsule in the Svantek SV200).

A signal (typically at 1kHz) is supplied by a generator, via a special amplifier, to create an electrostatic modulation of the microphone diaphragm. This is effectively an acoustic signal, so it checks not only the integrity of the microphone, but also the sensitivity. This is similar to the way microphones are calibrated in the laboratory.

This method, therefore, requires the generator and amplifier to drive the actuator, and is completely separate to the signal and powering chain of the microphone. Outdoor microphones such as the MTG WME960H have all the necessary electronics integrated, and the actuation can be triggered by a simple contact closure on a serial port for example. This method is used in many long-term monitoring systems, such as the Sinus Swing.

The SV200 from Svantek is a complete outdoor noise analyser, and has the necessary system built-in. The electrostatic calibration can be triggered via a web page, either manually or automatically at predetermined intervals.

Although this method can be used as a ‘calibration’, the sensitivity of the system is not normally adjusted; instead, the levels are logged to ensure the accuracy of the results. Again, because the method requires external electronics and connections, it is not possible to do this via a single co-axial connection such as IEPE.

Accelerometers, geophones and seismometers – which to choose?

Report by John Shelton, AcSoft Ltd, Svantek UK Ltd

Recent years have seen a large increase in measurements of
vibration, for a variety of applications, such that the diary of the
busy acoustic consultant is just as full of vibration surveys, as noise
measurements. Be it for health and safety applications, such as hand-arm
and whole-body vibration, annoyance, such as ground vibration, or
damage, such as building or blasting vibration, the methodology still
seems more an art than a science.
All acousticians should have a firm grip on the performance of their
sound level meters and know how to use and calibrate them. Sadly, this
does not always appear to be the case with vibration instrumentation.
Several instrumentation standards exist, the key one being BS EN
ISO 8041:2005, along with procedural standards such as BS 6472:2008, but
sometimes it can appear very confusing. Can I calculate PPV from a
spectrum? Can I measure VDV with a geophone? Should PPV measurements
use Wd weighting?
This brief article goes back to basics and addresses some of the more
common questions we get asked, if only for a quiet life!

Transducers

Vibration transducers can be split basically into two types – accelerometers
and geophones (or seismometers). Accelerometers have an output
proportional to, er, acceleration, and geophones have an output proportional
to velocity. So how can both be used to measure vibration?
There’s a basic relationship between acceleration and velocity – the
former being the rate of change, or the differential, of velocity. Therefore
we can easily convert between the two by integrating an acceleration
signal to yield a velocity signal. This is normally done in the time domain,
using a filter (called an integrator), but it can also be done in the
frequency domain by dividing an acceleration spectrum by 2πf,
where f is the frequency. This effectively slopes the spectrum by -
6dB/octave, so a velocity spectrum will appear to have a lot fewer
high-frequency components!
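The frequency-domain route can be sketched in a few lines (the function name here is illustrative, not from any particular package):

```python
import numpy as np

def accel_spectrum_to_velocity(freqs, accel_mag):
    """Convert an acceleration magnitude spectrum (m/s^2) to velocity (m/s)
    by dividing each bin by its angular frequency, omega = 2*pi*f."""
    freqs = np.asarray(freqs, dtype=float)
    accel_mag = np.asarray(accel_mag, dtype=float)
    vel = np.zeros_like(accel_mag)
    nonzero = freqs > 0          # avoid dividing by zero at DC
    vel[nonzero] = accel_mag[nonzero] / (2 * np.pi * freqs[nonzero])
    return vel

# A flat acceleration spectrum of 1 m/s^2 per bin becomes a falling
# velocity spectrum: each octave (doubling of f) halves the velocity,
# i.e. the -6 dB/octave slope described above.
f = np.array([10.0, 20.0, 40.0, 80.0])
v = accel_spectrum_to_velocity(f, np.ones_like(f))
```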

Accelerometers

The majority of accelerometers for our applications are piezoelectric
devices. A small piezoceramic crystal is sandwiched between the base and
a seismic mass, so when the base is accelerated, the crystal is stressed,
causing a proportional charge output. Because it is a simple mass/spring
system, it will have a fundamental resonance – the crystal is very stiff, so
this will be high, several kilohertz or more for most devices. Below that resonance,
the response is virtually flat and linear, making an excellent transducer.
To make a sensitive accelerometer, make the mass and/or crystal
bigger – but, this brings the resonance down, so there’s a trade-off to be
made. Thankfully, most requirements for sensitive accelerometers are at
low frequencies!
The output of the crystal is a charge, which requires a specialised
charge amplifier, with extremely high input impedance, in order to drive
our measuring system. These used to be separate boxes, with specialised
low-noise cabling, but nowadays, the charge amplifier is built into the
accelerometer itself, and this uses a ‘phantom’ powering system known as
IEPE (Integrated Electronics Piezo-Electric), also known by a variety of
proprietary names such as ICP®, CCP etc. At least IEPE is standardised!
This means that long cables can be driven, and as long as your instrument
can provide the powering, you should be in business. But always
check that you have an IEPE accelerometer rather than a charge
accelerometer first!
Because the piezoelectric element is effectively a capacitor, such accelerometers do not have a DC
response and will roll off at low frequencies. Make sure you select one
suitable for your task if you want to measure down to 0.5Hz for example.
Accelerometers are rugged and will measure in any axis, but being
a mass stuck to a piece of glass, the crystal can crack, not always
obviously. This is particularly true for the sensitive ones with big seismic
masses, so don’t drop them on a concrete floor!

Geophones

A typical geophone is a moving coil device. Think of a loudspeaker
backwards. A magnet is suspended in a coil (sometimes vice-versa),
attached to the base of the transducer. As the base is moved, a current is
induced in the coil, which can then be used as an output proportional
to velocity.
Like an accelerometer, the geophone is a resonant device, but this
time, the resonance is low frequency due to the mass (magnet) being
suspended. Typically, this is around 3-4Hz, and the usable linear range is
above this. This potentially gives an issue with measuring low-frequency
vibration – we are often interested in measuring vibration at just the
frequencies that a geophone has a resonance. However, the design of
geophones is a mature art, and careful damping and linearisation will
provide excellent performance.
Geophones are normally designed to operate in one axis, either
vertical or horizontal, so they must be oriented to their design axis. If you
have a triaxial unit, it will have one vertical, and two horizontal coils. Stick
it to a wall instead of a floor, and you’ll probably lose any signal, so they
should always be mounted in the same orientation – on a bracket
for example.
Signal conditioning for an active geophone is very specific, but recent devices
also support the IEPE system, already dominant in accelerometers.
A big plus of geophones is their lower price, and they are very rugged –
hence their popularity in the mining engineer’s toolkit – no fiddly
microdot connectors!

Choose your weapons

The choice of transducer would seem to depend on what you want to
measure – an accelerometer for measuring acceleration (VDV, MTVV, etc)
and a geophone for measuring velocity (PPV). This complicates the
instrumentation, so it would be nice to use one for the other.
We can get velocity by integrating the accelerometer output, so this
would appear to be the ideal solution. Well, it works well, but integrating
can cause some side-effects. If you consider that the integration
process emphasises low frequencies (think of the -6dB/octave slope in the
spectrum), any noise present at low frequencies in the amplifier chain, or
extraneous environmental effects can cause spurious results. Some
accelerometers, due to their physical design, can be sensitive to temperature
transients. This shows up as a very low-frequency signal, which, when
integrated, generates a large velocity output. Try blowing on your
accelerometer and see what happens! This also applies to poor or badly
maintained cables.
Careful design of high pass filters can mitigate these effects, but these
can introduce phase errors, which might be important when trying to
measure the peak amplitude of the velocity signal (PPV). It’s interesting
but beyond the scope of this article to compare the raw output of an integrated
accelerometer and a geophone for the same signal and measure
its peak!
Of course, integrating the acceleration signal in the frequency domain
is a lot easier, but generally will not yield a PPV value, almost all spectra
being RMS values.
A geophone is excellent for its design purpose. We could calculate
acceleration by differentiating the signal, but often they have a limited
dynamic range, compared to accelerometers so this can result in noise
being amplified. Also, as their resonance is often bang in the middle of
the frequency range of interest, the phase performance becomes very
significant, and needs careful design.
So which is best? Without resorting to Harry Hill to find out, it’s
probably best to start with an accelerometer and integrate to velocity
when you need to. This will cover the majority of applications with one
transducer. But if your application is for PPV only, then the geophone
may make a better choice. But either way, make sure you know the
performance characteristics and limitations.

Future technologies

New technologies such as MEMS (MicroElectroMechanical Systems) are
now looking promising for use in both sound & vibration transducers.
Recent developments at NPL have shown that a microphone meeting
Class 1 is attainable, and the same can be said for accelerometers. MEMS
accelerometers have been used for years in airbag sensors, and you’ve
probably got one in your smartphone, so it knows when to change the
display if you tilt it from vertical to horizontal.
The use of MEMS for measurement accelerometers is on the way and
they have an advantage in their low price and stability/ruggedness.
Already MEMS devices are being used for hand-arm and whole-body
vibration and very linear high sensitivity devices for ground vibration are
on the near horizon.
A nice feature of MEMS accelerometers is their DC response – it makes
calibration easy – by turning them upside-down the change should be 2g!
Their low noise floor and lack of low-frequency resonance also make
integration easier.
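The flip test above amounts to a two-line calculation (a hypothetical helper, not a standardised procedure):

```python
def flip_test_sensitivity(v_up, v_down):
    """Estimate the sensitivity (V per g) of a DC-coupled MEMS accelerometer
    from its static output with the sensitive axis pointing up (+1 g) and
    then flipped upside-down (-1 g): a total change of 2 g."""
    return (v_up - v_down) / 2.0

# Readings of +1.02 V and -0.98 V imply a sensitivity of about 1.00 V/g
# (the remaining +0.02 V being a DC offset).
s = flip_test_sensitivity(1.02, -0.98)
```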

Calibration

No acoustician worth their salt will leave the house without a sound level
calibrator. Its use is written into countless standards and is your only
contact with reality. Historically, this is due to microphones having often
large dependencies on environmental effects, so field calibration was
a must.
These days, microphones are very stable, and if you see a difference
in sensitivity over a few measurements, then something is wrong somewhere.
Somewhat bizarrely, the same calibration habit doesn’t seem to have
caught on with vibration measurements. Perhaps this is due to the
complexity and cost of vibration calibrators, or simply a belief that a
transducer that looks like a hex nut couldn’t possibly get damaged!
BS EN ISO 8041:2005 is the instrumentation standard which is cross-referenced
in nearly every standard for human vibration measurement. It
defines the performance of instrumentation (much like BS EN 61672 for
sound level meters), and significantly almost forgets to mention
geophones, concentrating on accelerometers as transducers (the
Germans are ahead of us here – they have bolted that down in DIN 45669
for example).
The standard has a lot to say about calibration, for type approval,
periodic calibration and field calibration, but very few practitioners seem
to be aware. This is probably down to the limited availability of a practical
calibrator which allows checks on performance at the frequencies of
interest (often below 80Hz).
Most field calibrators operate at 159.15Hz – an odd frequency until
you consider it is 1000 radians/second, which makes converting from
acceleration to velocity and displacement easy, e.g. 10 m/s² acceleration is
10 mm/s at that frequency. These are handy devices (but often three or
four times the cost of a sound level calibrator) and can be used to check
the complete measurement chain, albeit at a high frequency – you just
have to assume your filters and low-frequency response is OK.
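The arithmetic behind the 159.15Hz choice can be sketched as follows (function name illustrative):

```python
import math

def velocity_and_displacement(accel, freq_hz=159.155):
    """At the standard calibrator frequency of ~159.15 Hz, omega = 2*pi*f
    is 1000 rad/s, so v = a/omega and d = a/omega**2 are just successive
    divisions by 1000."""
    omega = 2 * math.pi * freq_hz
    return accel / omega, accel / omega**2

# 10 m/s^2 acceleration -> 0.01 m/s (10 mm/s) and 1e-5 m (10 micrometres)
v, d = velocity_and_displacement(10.0)
```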
Another limitation is that ground vibration transducers are often large
(high sensitivity), so such calibrators cannot be used – there is not enough
power available.
The ISO standard recommends calibration at 15.91Hz and 79.6Hz for
low-frequency vibration instruments, in the field, as well as periodic calibration,
for example. This allows the whole-body weighting filters and
RMS/RMQ detectors to be checked too. This requires a much bigger
vibration exciter, and such devices are now coming to market to address
this need for field calibration.
Geophones give a particular problem. Vertical geophones can be field
calibrated in the same way as accelerometers, but horizontal geophones
cannot be mounted on a vertical calibrator, so the only solution is to send
them to a laboratory equipped with a horizontal slip table – time
consuming and expensive.
A new working group has been set up to address the issues of vibration
transducers, but the standardisation wheels grind exceeding slow.

Conclusion

Hopefully, this article will have given some insight into some of the issues
practitioners should consider before equipping themselves with vibration
instrumentation and heading out into the unknown. There are many
more issues not covered here, but browsing the standards appropriate to
the measurement will provide a wealth of information. Hopefully, future
articles in Instrumentation Corner will enlighten further!
John Shelton is a member of the IOA Measurement & Instrumentation
Committee and AcSoft Ltd is a sponsor member of the Institute.

 

Smart attack: Some thoughts on Internet noise monitoring

History

There’s nothing like moving house or office to unearth some old
project from the past, and while unloading all my shelves of documentation,
I happened upon an old presentation I put together more than
20 years ago when I started AcSoft. Lovingly crafted on overhead transparencies
using Word Perfect and Freelance Graphics, it extolled the
virtues of PC-based instrumentation, which is what we peddled at the
time, but also made some rash predictions for the next 20 years.
One prediction was that PC-based systems would become
commonplace, based on emerging operating systems and hardware,
and that dedicated instruments would not fade away, but become
more “consumerist”. This means that sound or vibration meters
would be built for specific tasks, become cheaper, and much more
widely available.
It’s fair to say that PC-based systems are now industry-standard,
and most noise and vibration acquisition systems now consist of a
front-end combined with software running on a PC or similar device.
Similarly, dedicated sound level meters are now widespread, and
much lower in cost. A Class 1 sound analyser that used to cost more
than £10,000 is now available for little over £1,000, and specific applications,
such as STIPA, can be built into low-cost devices.
Not a bad prediction then, except that the idea of using the Internet
in noise and vibration applications completely passed me by. At least
I was in good company – Microsoft famously made the same mistake,
and it could be said that they have only caught up in recent years!
Generally speaking, though, we are using our modern kit for broadly
the same purposes now, as we were 20 years ago. Noise enforcement,
aircraft and road noise, product development, product quality,
building acoustics, health and safety etc all have a set of procedural
standards to which we adhere, with instrumentation standards
ensuring the quality of our instrumentation. These have of course
been tightened over the years, and now BS EN 61672:2013 lays down
some tough criteria which must be met before an instrument system is
labelled Class 1.
This is as it should be, but it also creates a "closed" market, with
some innovations being stifled in favour of doing the same thing, but
faster/cheaper. Rather than widening the appeal and application of
acoustic measurements, the tendency has been to keep it amongst
"the professionals".
Ultimately, a Class 1 sound level meter can only be made so cheap,
a large part of that cost coming from the condenser microphone,
which in many cases is still hand-built by angels on the south face of
Happy Mountain.

The Internet of noisy things

The ubiquity of the Internet, along with new technologies, now challenges
that, as well as making completely new possibilities in democratising
noise measurement.
The first idea to come along is the “Internet of Things” (IoT), where
any device can now be connected via the internet to provide data
and also control our environment. This is not just happening with
noise – it applies also to your refrigerator (order some more milk on
Supermarket.com when you’re running low), your car (tells the dealer
when you need a service and what parts might be needed), weather
data (real-time online weather for the budding sailor) or air pollution
(redirect traffic to avoid build-up of particulates). The list is endless,
but one thing is clear – all the information is easily available to Joe
Public, and perhaps no longer in the hands of the closed-shop professional.
Noise is just another number (albeit a difficult-to-understand
decibel), but it makes sense that noise, pollution, vibration, temperature,
UV radiation, rain, etc. are just part of the information flow.
The idea of the ‘Smart City’ is now with us, where our environment
can be managed to improve the quality of urban life, and also make
large efficiency savings.

Smart cities

The Measurement and Instrumentation (M&I) Group in the IOA
regularly runs one-day meetings covering aspects of noise and
vibration measurement. One such meeting was organised by Ben
Piper, a member of our committee, last year called Sound sensing in
smart cities. It was a fascinating day, which covered exciting stuff on
instrumentation and data management.
NPL has been working for some years on new microphone technologies
such as MEMS to see if it’s possible to make a low-cost microphone
meeting accepted standards of accuracy. The idea of this is to
make noise monitors so cheap that they can be widely deployed in
a network, for urban and other applications. A MEMS-based microphone
was developed and demonstrated to meet Class 1, albeit in a
"traditional" package.
Similarly, the Dreamsys project showed how data from such a
system could be collated and publicly presented as part of a noise
management programme.
I had the opportunity to visit NPL recently and see the latest developments.
Ben and his colleagues are now working with a little box
based on a Raspberry Pi, with Class 1 MEMS microphone, measuring
Leq and 1/3 octaves (!) and delivering the data to the Interweb. Very
impressive considering the whole hardware cost is around an order
of magnitude cheaper than a conventional system. Trial sites include
a large railway development in Central London, and around a large
airport in the London area.
Of course, they are not the only ones doing this kind of thing –
Azimut Monitoring in France have networks which measure noise
and other air pollutants; the Sounds of New York project have
many noise monitors deployed and feeding data in real-time; and
European projects such as DYNAMAP are working towards dynamic
noise mapping.

Measurement Quality

As the M&I Group, we should of course ask questions about the quality
of such noise data. Does it meet recognised standards like BS EN
61672? How do you calibrate it? Is the cost-saving in hardware irrelevant
if the cost of deployment and maintenance dominates?
Should it be Type-approved? The measurement accuracy of any system is normally defined
by the purpose for which the data will be used. If certifying the sound power of a machine, or
setting limits to noise exposure, or certifying aircraft engines,
or testing pass-by noise of cars, then clearly the instrumentation
must meet very tight standards, and demonstrably so.  Is the same true of wide-area
noise monitoring/mapping?  Perhaps we are more interested in trends, rather than absolute
accuracy.  Is it noisier today than it was yesterday? What was that loud noise at two in the morning?
Do we only need to measure over a limited range? Noise in London for
example rarely falls outside the range 45-65 dBA, so why measure it
with an instrument that can measure linearly from 20 up to 140 dBA?
Taking the example of the Raspberry Pi, some even have a MEMS
microphone on the PCB, so why not use this and forget Class 1 completely?
Perhaps we can use a different quality-of-life indicator too, on a simple scale A-G instead of confusing decibels?
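As a purely illustrative sketch of such an A-G indicator (the band boundaries below are invented for the example, not taken from any existing scheme):

```python
def noise_band(laeq_dba):
    """Map an LAeq value in dBA to a simple A-G quality-of-life band.
    The thresholds are hypothetical, chosen only to illustrate the idea
    of replacing decibels with a consumer-style rating."""
    boundaries = [45, 50, 55, 60, 65, 70]   # illustrative dBA upper limits
    for label, upper in zip("ABCDEF", boundaries):
        if laeq_dba < upper:
            return label
    return "G"

# Under these invented thresholds, 48 dBA rates as band "B".
band = noise_band(48.0)
```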

Calibration

We are all familiar with calibrating our sound measurement instrument,
using a reference source to confirm we are measuring the right
levels. For a remote noise monitoring system, this could also be done
by such techniques as electrostatic actuation, or insert voltage.
Regular calibration of, say, 300 noise monitors could be a real chore
and cost for the operator, negating the cost advantage of the hardware.
Perhaps other techniques could be used.
Again, NPL are working on this – by looking at the statistics of the
measured data, e.g. LA50, it’s possible to spot slow trends, indicating
system calibration drift, or system failure (obviously wrong data). As
the network is so widespread, all you need to do is flag or ignore the
data until the monitor has been visited and fixed, just like a faulty light
bulb in a street lamp, on the next maintenance round. You could also
put in a couple of regular expensive noise monitors to provide a sanity
check to the data.
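A toy version of such a statistical drift check might look like this (the window size and threshold are invented for illustration; the real NPL work is considerably more sophisticated):

```python
import statistics

def drift_alert(daily_la50, window=7, threshold_db=2.0):
    """Flag possible calibration drift or failure by comparing the median
    of the most recent `window` daily LA50 values against the long-term
    median of the earlier data."""
    if len(daily_la50) < 2 * window:
        return False                      # not enough history yet
    recent = statistics.median(daily_la50[-window:])
    baseline = statistics.median(daily_la50[:-window])
    return abs(recent - baseline) > threshold_db

# A stable history raises no alert; a sustained 3 dB step does.
stable = [55.0] * 30
drifted = [55.0] * 23 + [52.0] * 7
```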
This is a great example of doing things differently, rather than just
doing the same but more cheaply.

Summing up

The idea of this article was to be thought-provoking, as we move to an
even more connected world. Of course, all the traditional players are
watching with interest – is this the end of the sound level meter? How
will we pay the mortgage in five years’ time?
Of course, the “legal” metrology will continue, with the associated
costs, standards and procedures and will undoubtedly feed future
meetings of the M&I Group.
But noise (and other pollutant) monitoring over wide areas will
become widespread, perhaps with completely different technologies
and methodologies. Exciting times indeed!

John Shelton, MIOA is with AcSoft, GRAS UK and Svantek UK, and is
the chairman of the IOA Measurement and Instrumentation Group.

Recent and not so recent developments in sound measurement instrumentation

By John Shelton AcSoft and Svantek UK

Introduction

The 40th anniversary of the IOA is as good a time as any to
review what has happened in the instrumentation market
over the same period, and in particular, to the humble sound
level meter.
This article reviews the basic architecture, and looks at how
things have changed, often on the back of consumer electronics,
and gives some pointers of where we are headed in
the future.

The sound level meter

The basic layout of the sound level meter has not really
changed over the years – we’re simply trying to make an
objective and traceable measurement of the noise level, to
allow us to assess environmental noise impact or potential
damage to workers’ hearing, for example.
The starting point is of course the microphone, which transduces
the acoustic pressure variation into a voltage analogue,
which we can feed into our electronic circuits. Typically, we use
a condenser type microphone, for its stability, linearity and
ease of calibration. We need to polarise the capacitor, typically
with 200 volts DC, and match its inconveniently high output
impedance into something we can drive down the line. This
requires specialised circuitry, taking the form of a dedicated
conditioning preamplifier which normally sits just behind the
microphone – the familiar silver tube.
Now we have a signal to work with, and two types of
‘detector’ are commonly used to make a measurement of sound
pressure level.  The root mean square or RMS detector does what it
says on the tin – backwards! Firstly, the waveform is squared,
making all the negative excursions positive, then this is
averaged to estimate the power in the signal, and finally the
square root is taken to get back to a number which is related to
a pressure level. The output of an RMS detector will fluctuate as
much as the input signal, so in order for us to conveniently
read the level on a meter, we need to ‘damp down’ these fluctuations,
so a time constant is applied, the choice of which will
depend on how much variation there is. We are of course
familiar with the old standardised time weightings Fast, Slow
and Impulse (more recently updated to just ‘F’ and ‘S’).
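The RMS detector described above can be sketched digitally, with a simple single-pole smoother standing in for the analogue averaging (parameter names are illustrative):

```python
import numpy as np

def time_weighted_rms(x, fs, tau=0.125):
    """Exponentially time-weighted RMS, as in a sound level meter detector:
    square the signal, smooth the result with a time constant tau
    (0.125 s for 'Fast', 1.0 s for 'Slow'), then take the square root."""
    alpha = 1.0 / (tau * fs)            # per-sample smoothing coefficient
    smoothed = np.empty(len(x), dtype=float)
    acc = 0.0
    for i, sample in enumerate(x):
        acc += alpha * (sample * sample - acc)   # running mean of x^2
        smoothed[i] = acc
    return np.sqrt(smoothed)

# A steady sine of amplitude 1 settles towards an RMS of 1/sqrt(2) ~ 0.707.
fs = 48000
t = np.arange(0, 1.0, 1 / fs)
rms = time_weighted_rms(np.sin(2 * np.pi * 1000 * t), fs)
```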
The Peak detector simply measures the maximum
excursion of the acoustic signal (either positive or negative)
and this might be useful for estimating damage potential from
the noise, such as from blasting or gun shots. The peak
detector will normally be used with a hold circuit to make the
level readable.
The output of our detectors will be fed to a display, and
traditionally, this was a high quality moving coil dial, which
even did the decibel conversion to give a readout directly.
If we wanted to assess the noise level and not just the
sound level, then there would also be frequency weighting
circuits prior to the detector, A and C being the most popular,
and for analysis of the frequency makeup of the signal, there
may also be some filters, 1/1 octave or 1/3 octave being the
most common.
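The A-weighting curve itself has a well-known analytic form, so as an aside it is easy to evaluate at any frequency; the +2.00 dB term simply normalises the response to 0 dB at 1 kHz:

```python
import math

def a_weighting_db(f):
    """A-weighting in dB at frequency f in Hz, from the standard
    analytic pole frequencies (20.6, 107.7, 737.9 and 12194 Hz)."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00
```

At 100 Hz this gives roughly -19 dB, matching the familiar published table values.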
Finally, statistical analysis of the fluctuations of the noise
was starting to become interesting, for assessment of notional
background noise level for example, and this was achieved,
typically in the laboratory, by a fantastic array of equipment
attached to the output of the sound level meter. Again, all
realised in the analogue world.
Forty years ago, all this was achieved with high quality analogue
circuitry from microphone right through to the display. The
classic example of this was the B & K Type 2203, which was the
weapon of choice for the serious noise warrior. Built in a hernia-inducing
case, with all of the elements of our circuit realised
with analogue switching, it remains today a great educational
tool to understand the science of sound measurement.

The march of digitisation

No-one today could have overlooked the fact that everything is
going or has gone ‘digital’. The sound level meter was no
different, and the process started at the back end of the chain –
the display. By sampling the output of the detector, albeit at the
slow sample rates (~1Hz) available at that time, the values
could be displayed with greater precision on a digital display,
to the nearest 0.1dB, and the limited dynamic range of the A/D
converters could be improved by doing the log conversion in
the detector before sampling.
Of course, the accuracy of the meter did not improve, but
0.1dB resolution was a lot more impressive! Some meters even
combined analogue and digital displays, such as the rare B&K
2210.
The next step was to sample the detector output at a higher
rate, which allowed some basic mathematics to be done, for
example calculating the average value of the signal over a time
period. At this time, the idea of the equivalent continuous
sound pressure level, or Leq, gained a foothold, and this was
easily estimated by sampling the output of a Fast time-weighted
detector. The first ‘integrating sound level meters’
had been born.
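In today’s terms the equivalent continuous level is simply an energy average: square the pressure, average it over the whole measurement period, and convert to decibels. A minimal sketch, assuming calibrated pressure samples in pascals:

```python
import math

def leq_db(pressure_samples, p0=2e-5):
    """Equivalent continuous sound pressure level (Leq) in dB re 20 uPa:
    the mean of the squared pressure over the whole period, converted
    to decibels - no time weighting involved."""
    mean_square = sum(p * p for p in pressure_samples) / len(pressure_samples)
    return 10.0 * math.log10(mean_square / (p0 * p0))
```

A steady 0.02 Pa signal, for example, gives an Leq of 60 dB.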
Similarly, the samples could also be used for the statistical
analysis, resulting in the breakthrough CEL-393 statistical integrating
sound level meter, which swept the board in
environmental health markets, despite having the user
interface from hell!
However, sampling the output of a time weighted detector
was always an estimate of the Leq, and as faster A/D converters
with adequate dynamic range became available, the Leq could
be calculated from the output of the mean-square detector
directly – as we should all now know, ‘F’ time weighting has
nothing to do with Leq.
The new family of digital sound level meters now followed
this same layout, with the output of the detector being
sampled at 256 Hz, for example. Note that the statistics were
still sampled at a lower rate from the time weighted output,
and currently there is still no standardisation of the calculation
of statistical indices.
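To illustrate why that lack of standardisation matters, here is one plausible nearest-rank reading of an Ln index from a series of sampled time-weighted levels – another instrument might interpolate between ranks and give a slightly different answer:

```python
def ln_percentile(levels_db, n):
    """Ln: the level exceeded for n% of the measurement time, taken
    from a series of sampled (e.g. Fast time-weighted) levels in dB.
    Nearest-rank method - one choice among several in use."""
    ranked = sorted(levels_db, reverse=True)   # loudest first
    idx = min(len(ranked) - 1, int(round(n / 100.0 * len(ranked))))
    return ranked[idx]
```

With 100 samples spanning 0 to 99 dB, L90 comes out at 9 dB: ninety of the samples lie above it.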
Also at this time, the concept of Short Leq emerged, where
the digital detector spat out Leq values over short periods,
commonly 125ms or shorter. This was ideal for the new idea of
datalogging, where complete measurements could be sampled
and stored to memory, for later display and processing on newfangled
computers. In fact, memory in sound level meters is a
surprisingly new phenomenon – even in the early nineties,
portable devices like Psion Organisers and Epson computers
were being used to store sound level meter data!
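One attraction of Short Leq is that the stored values can be recombined later into any longer period, simply by averaging on an energy basis rather than in decibels. A sketch, assuming equal-duration segments:

```python
import math

def combine_leq(short_leqs_db):
    """Combine equal-duration short Leq values into one overall Leq:
    convert each to relative energy, take the arithmetic mean,
    and convert back to decibels."""
    energies = [10.0 ** (l / 10.0) for l in short_leqs_db]
    return 10.0 * math.log10(sum(energies) / len(energies))
```

Two equal periods at 60 and 70 dB combine to about 67.4 dB, not 65 – energy averaging, not decibel averaging.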
In general, the weighting networks and filters were still
realised as analogue networks – a frequency analysis required
stepping through the filters one at a time, and hoping the
signal was the same at the end, or even still there!
The trend in SLM development by now had been a slow
increase in sampling rate, and dynamic range, and already,
digital consumer audio was upon us – the compact disc
emerging as early as 1982, with 16-bit A/D converters and
44.1kHz sampling rates. The advent of low power digital signal
processing suddenly made it realistic to digitise the output of
the microphone preamplifier directly, and do the rest in Big
Sums. Not an easy task necessarily, as our sound level meter
still has to cover the complete range of human perception
both in level and frequency, but now we can calculate
weighting filters, 1/1 & 1/3 octaves, Leq and statistics
completely digitally. The challenge of dynamic range was no
different in the digital domain than it had been in the old
analogue world, but coupled with vastly increased memory, a fast
A/D converter and a DSP, almost anything is now possible.
You could be forgiven for thinking that this makes sound
level meters really easy to make, and therefore the price should
drop dramatically. This is not wholly untrue, but there is still a
huge skill in signal processing development, especially for our
applications, and engineers who used to dabble in LCR circuit
design have largely been replaced by firmware engineers, who
still cost money for what is a small market compared to CD
players. But price-wise, in the early 80s, an integrating type 1
sound level meter, with no memory, cost around £1,800. Today,
a completely digital Class 1 sound level meter with
gigabytes of memory will cost around £1,200. A saving, yes, but
not dramatic even allowing for inflation.

Completely digital?

Part of the reason sound level meters are relatively expensive,
apart from the size of the market and development costs,
is the microphone – the last analogue bastion in the measurement
chain.
Since precision sound level measurements began, the
condenser microphone has been the gold standard,
the ½” capsule providing the best compromise between dynamic
range and frequency range. Manufactured by a select few
companies, the price of such capsules can be anywhere
from £400 to over £1,000, a large chunk of the sound level
meter budget.
However, for other much larger markets, such as hearing
aids, telephones etc, a digital revolution has been happening
in microphone development. The use of MEMS (micro electro
mechanical systems) or micro-machined silicon transducers is
now well established – the mobile phone in your pocket
probably has not one, but several MEMS microphones built-in.
These are also used for advanced noise cancellation, to make
your phone call that much clearer at both ends. MEMS microphones are still based on the
capacitor principle, but the capacitor is machined on to a tiny
silicon wafer, which is packaged into a more manageable pot
which can be directly soldered onto the circuit board. In some
recent cases, the A/D converter can even be built in to the
silicon, making what is effectively a digital microphone. MEMS
microphones are also incredibly rugged, and of course, the low
price of a few dollars is a real advantage.
Can these be used for measuring sound? The answer to that
lies in the international standards that govern sound level
meter performance, and right now, MEMS microphone
performance falls short of those requirements. But already,
there is a place for them – noise dosemeters now
employ MEMS techniques, as well as specialised techniques
such as MIRE for in-ear measurements.
A recent project at NPL proved that a MEMS microphone
meeting Class 1 tolerances is possible, so it is only a matter of
time for many applications. This will undoubtedly reduce the
size and price of sound level meters still further.

Consumer sound level meters?

Another trend in the market, now that everything can be done
with an A/D converter and a DSP, is the rise of the App. Using
the life support system of the smartphone (which already has
MEMS microphones and DSP to burn), software applications
are appearing which turn your phone into a sound level
meter. Specialised extension microphones are
also available to improve the acoustics and performance.
Some even claim to meet sound level meter standards.
Ironically, a few of these even have ‘retro’ analogue displays –
a real full-circle!
As with PC-based sound level meters 20 years ago, we
should still be sure that standards are met, and demonstrably
so – but where do these apps fit in? The spectrum analyser apps
for example are very good at finding the frequency of an
audible tone, but when it comes to measuring the level, this is
often only achieved accurately over a limited dynamic range.
Also bear in mind that the electromagnetic environment
inside a mobile phone is particularly hostile to low level
noise measurements.
It’s unlikely that Apple, Google, RIM and the like will ever go
into the sound level meter market – it’s just too small and
specialised. Also, producing a new model or operating system
every year will obsolete our phone-based instrument too
quickly, but the traditional manufacturers can feed off the
crumbs left behind – a Class 1 sound level meter with a MEMS
microphone is not far off.

Summary

This article has, I hope, given an overview of sound level meter
development over the last few decades, highlighting the move
from analogue to digital, and consequent increase in value for
money. Of course, the same progress applies to vibration
meters, spectrum analysers and all manner of sound and
vibration instrumentation – 20 years ago, a PC-based spectrum
analyser was rocket science – now it’s commonplace.
Where will it end? In my view, sound measurements will
become even more integrated to the internet – maybe one day
our digital MEMS microphone will connect directly to the
Cloud, and our noise report will be written before we even get
back to the office, along with weather, photos, GPS, maps. Plug
your microphone into your Google glasses?
One thing’s for sure, the Measurement & Instrumentation
Group at the IOA will keep abreast of developments, and make
sure the membership is kept informed about best practice!

John Shelton has been in the sound & vibration instrumentation
business for over 30 years, and this year celebrates 20 years of
AcSoft Ltd, pioneers of PC-based instrumentation. A member of
the IOA, he is a founder member of the M&I Group and sits on
several committees relating to sound & vibration measurement.

 
