Traffic Movement

By Steve Jones and Sally Rodgers

Traffic Movement is an imagined environment that transforms a recognizable street scene into a sonorous tone-poem. In this future soundscape, intelligent traffic lights speak their minds, the hum notes and partials of electric vehicles (EVs) ascend and descend, birds can be heard in the distant trees, and footsteps echo on the city streets.

As governments worldwide begin to address environmental pressures by providing strategic economic stimulus to green-energy start-up programs, EVs are finally becoming a viable answer to many environmental concerns, such as air pollution. Loans and grants are available for infrastructure and for the development of cleaner power sources such as the energy-dense lithium-ion battery. As the technology becomes more accessible, electric and hybrid-electric cars and scooters will begin to replace traditional combustion-powered vehicles.

But there are still problems to address.

Since EVs are not powered by combustion, they produce almost no engine noise. There is growing concern that this absence of sound poses a risk to pedestrians and other road users. Because human beings rely on sound to confirm that an action is taking place, EVs present very real safety issues for children, the elderly, the blind and the partially sighted. Early scientific research has concluded that a conventional vehicle can be heard from over 30 feet away, while an EV can only be identified at a distance of 7 feet (1). In response, governments are considering legislation to mandate a minimum sound emission (2). Likewise, manufacturers are looking for solutions ahead of legislation, both to forestall the negative impact of product liability litigation and to help ensure positive PR (3).

So what can be done and what might the future sound like?

It would be retrogressive to simply mimic the sound of a combustion engine, so perhaps we might look to the world of art for inspiration. Karlheinz Stockhausen observed that rhythmic pulses from an impulse generator would transform into a tone when their speed reached around 600 bpm. This same tone would rise in pitch as the speed increased: the rhythm now perceived as musical timbre. If we imagine this principle as adapted to function in conjunction with the acceleration and deceleration of an EV via onboard sound synthesis software, we might begin to hear the future of traffic noise.
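The mapping described above could be sketched in a few lines of code. The following is a minimal illustrative model, not the authors' actual software: vehicle speed drives a pulse rate, and once that rate crosses the fusion threshold cited in the text (around 600 bpm), the pulse train is heard as a continuous tone whose pitch rises with speed. All numeric parameters here are assumptions chosen for illustration.

```python
# Illustrative sketch of the Stockhausen-inspired speed-to-sound mapping.
# Thresholds and ranges are assumptions, not values from the project.

FUSION_BPM = 600.0      # rate at which pulses fuse into a tone (from the text)
MAX_SPEED_KMH = 120.0   # assumed top speed for the mapping
MAX_BPM = 24000.0       # pulse rate at top speed: 24000 bpm = a 400 Hz tone

def speed_to_pulse_rate(speed_kmh: float) -> float:
    """Linearly map vehicle speed (km/h) to a pulse rate in beats per minute."""
    speed = max(0.0, min(speed_kmh, MAX_SPEED_KMH))
    return (speed / MAX_SPEED_KMH) * MAX_BPM

def describe_output(speed_kmh: float) -> str:
    """Report whether the pulse train is perceived as rhythm or as a tone."""
    bpm = speed_to_pulse_rate(speed_kmh)
    if bpm < FUSION_BPM:
        return f"rhythmic pulses at {bpm:.0f} bpm"
    return f"continuous tone at {bpm / 60.0:.1f} Hz"

# Accelerating from walking pace to motorway speed:
for v in (2, 10, 60, 120):
    print(v, "km/h ->", describe_output(v))
```

Running the loop shows the perceptual shift the text describes: at very low speeds the output is discrete clicks, and as the vehicle accelerates the pulses fuse into a rising tone, so deceleration would be heard as a falling glissando.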

Sound designers Steve Jones and Sally Rodgers are currently developing real-time software to make this concept a reality.

(1) "Hybrid Cars are Harder to Hear." University of California, Riverside Newsroom, April 28, 2008. Accessed November 7, 2011.

(2) "President Signs Pedestrian Safety Act." National Federation for the Blind. January 5, 2011. Accessed November 7, 2011.

(3) "Adding Sounds to the Silence of the Electric Car." PRI’s The World. June 27, 2011. Accessed November 7, 2011.

Contributors' Biographies

Steve Jones has an MSc in Sound Design from the University of Edinburgh, and Sally Rodgers has an M.Litt from the University of St. Andrews, where she continues to conduct doctoral research into the historical impact of technology on modern poetics. Their enduring collaboration, under the artist name A Man Called Adam, includes many licensed works and recordings that are popular with electronic music fans around the world.

As sound designers they have a reputation for delivering high-quality compositions and gallery sound for a diverse range of clients including The British Museum, Johnson Banks, The Burns Group, Clay Interactive and The BME. Recent commissions include a series of musical identifications for the National Science Museum and the sound for short films from award-winning biomimetic architects Tonkin and Liu. Their A/V work 'Maud,' based on Tennyson's monodrama, will be exhibited this December as part of the Engine Room Festival celebrating the work of Cornelius Cardew at Morley College, London.

In performance they are currently experimenting with a concept they loosely describe as 'talking with spaces,' in which they use installation technology to improvise with the sounds of the space they are in and generate new material. From recitation to the hidden sounds of obsolete technologies, they use real-time processing to create a unique audible discourse.

For more information about their work go to: