Video of it in action below:

http://www.youtube.com/watch?v=Du0zp7AjlBY

The code:


```
#!/usr/bin/env python
# 8 bar Audio equaliser using MCP23017
import alsaaudio as aa
import audioop
import smbus
from time import sleep
from struct import unpack
import numpy as np

bus = smbus.SMBus(0)  # Use '1' for newer Pi boards
ADDR  = 0x20          # The I2C address of MCP23017
DIRA  = 0x00          # PortA I/O direction, by pin. 0=output, 1=input
DIRB  = 0x01          # PortB I/O direction, by pin. 0=output, 1=input
BANKA = 0x12          # Register address for Bank A
BANKB = 0x13          # Register address for Bank B

# Set up the 23017 for 16 output pins
bus.write_byte_data(ADDR, DIRA, 0)  # all zeros = all outputs on Bank A
bus.write_byte_data(ADDR, DIRB, 0)  # all zeros = all outputs on Bank B

def TurnOffLEDS():
    bus.write_byte_data(ADDR, BANKA, 0xFF)  # set all columns high
    bus.write_byte_data(ADDR, BANKB, 0x00)  # set all rows low

def Set_Column(row, col):
    TurnOffLEDS()
    bus.write_byte_data(ADDR, BANKA, col)
    bus.write_byte_data(ADDR, BANKB, row)

# Initialise matrix
TurnOffLEDS()

# Set up audio
sample_rate = 44100
no_channels = 2
chunk = 512  # Use a multiple of 8
data_in = aa.PCM(aa.PCM_CAPTURE, aa.PCM_NORMAL)
data_in.setchannels(no_channels)
data_in.setrate(sample_rate)
data_in.setformat(aa.PCM_FORMAT_S16_LE)
data_in.setperiodsize(chunk)

def calculate_levels(data, chunk, sample_rate):
    # Convert raw data to numpy array
    data = unpack("%dh" % (len(data) / 2), data)
    data = np.array(data, dtype='h')
    # Apply FFT - real data so rfft used
    fourier = np.fft.rfft(data)
    # Remove last element in array to make it the same size as chunk
    fourier = np.delete(fourier, len(fourier) - 1)
    # Find amplitude
    power = np.log10(np.abs(fourier))**2
    # Arrange array into 8 rows for the 8 bars on LED matrix
    power = np.reshape(power, (8, chunk / 8))
    matrix = np.int_(np.average(power, axis=1) / 4)
    return matrix

print "Processing....."
while True:
    TurnOffLEDS()
    # Read data from device
    l, data = data_in.read()
    data_in.pause(1)  # Pause capture whilst RPi processes data
    if l:
        # catch frame error
        try:
            matrix = calculate_levels(data, chunk, sample_rate)
            for i in range(0, 8):
                Set_Column((1 << matrix[i]) - 1, 0xFF ^ (1 << i))
        except audioop.error, e:
            if e.message != "not a whole number of frames":
                raise e
    sleep(0.001)
    data_in.pause(0)  # Resume capture
```
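One bit of the display code worth unpacking: `(1<<matrix[i])-1` turns a bar height n into a byte with the lowest n bits set (lighting the bottom n rows of a column), and `0xFF^(1<<i)` pulls just column i low. A quick standalone check of the bit arithmetic (helper names here are mine, not from the code above):

```python
def bar_mask(n):
    # Bar height n -> byte with the lowest n bits set
    return (1 << n) - 1

def col_mask(i):
    # Column select: bit i low, all other bits high
    return 0xFF ^ (1 << i)

for n in range(9):
    print("height %d -> %s" % (n, format(bar_mask(n), '08b')))
```

Height 3, for example, gives `00000111`; height 8 lights the whole column with `11111111`.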

The I2C part of the code is well documented, so I'll just explain a few bits I didn't comment in the code. The original code referenced by yamanoorsai did not make good use of the powerful numpy routines. For starters, the audio data is real (integers), so rfft is the appropriate transform for these arrays.
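To see why rfft fits: for a real input of length n, `np.fft.rfft` returns only the n//2 + 1 non-redundant bins (the negative-frequency half of a full FFT is just the complex conjugate), so it does roughly half the work. A minimal illustration:

```python
import numpy as np

# 1024 real samples, like one chunk of interleaved 16-bit stereo audio
x = np.random.randn(1024)

full = np.fft.fft(x)   # 1024 complex bins, half of them redundant
half = np.fft.rfft(x)  # 513 bins: DC up to the Nyquist frequency

print(len(full), len(half))           # 1024 513
print(np.allclose(full[:513], half))  # True - positive halves agree
```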

As I was using 8 columns for the equaliser, I arranged the array into 8 rows, each with 64 elements (chunk/8). These numbers represent the 'amplitudes' of successive frequencies in steps of 0.5*sample_rate/chunk (0.5*44100/512 ≈ 43 Hz), i.e. row 1 holds the amplitudes for the following frequencies:

0, 43, 86, ..., 2713 (row 1)

2756, 2799, ... (row 2)

...

19294, ..., 22007 (row 8)

I then took the mean for each row (very crude I know) to return 8 values for the LED matrix.
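The reshape-and-average step in isolation (with a stand-in ramp instead of the real log-power spectrum, just to make the arithmetic visible):

```python
import numpy as np

power = np.arange(512, dtype=float)  # stand-in for 512 log-power values
rows = power.reshape(8, 64)          # 8 bands x 64 frequency bins each
levels = rows.mean(axis=1)           # one value per LED column
print(levels)                        # 31.5, 95.5, ..., 479.5 - one per bar
```

Each level is the mean of one contiguous block of 64 bins, which is exactly what `np.average(power, axis=1)` does in `calculate_levels`.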

There are too many areas for improvement to list. The most obvious one is to focus on the frequency range most relevant to music and speech (maybe just 60-5000 Hz) and ignore the rest.
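One way to do that band limiting (a sketch of the idea, not code from the project): use `np.fft.rfftfreq` to get each bin's frequency, keep only the bins inside the band, then split those into 8 groups.

```python
import numpy as np

sample_rate = 44100
n = 1024  # samples per chunk (stereo interleaved, as above)
freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)  # frequency of each rfft bin

lo, hi = 60, 5000
band = (freqs >= lo) & (freqs <= hi)  # boolean mask over the bins

spectrum = np.abs(np.fft.rfft(np.random.randn(n)))
kept = spectrum[band]

# Trim so the kept bins divide evenly into 8 bars, then average each group
kept = kept[:len(kept) // 8 * 8]
levels = kept.reshape(8, -1).mean(axis=1)
print(len(levels))  # 8
```

With these numbers each of the 8 bars covers roughly 600 Hz instead of 2700 Hz, so the display spends its resolution where the music actually is.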

I hope some of you have a laugh with this - it is very cool when hooked up to an iPod or the radio.

If some audiophiles could post their successes or suggestions for improvement, I would be grateful (so that I can use it in my lessons!)