Real-Time Digital Equalizer for Electric Guitar
Designed a real-time audio equalizer on top of a pipelined CPU by implementing a DSP pipeline using the Fast Fourier Transform (FFT) and Inverse FFT in Verilog, also leveraging the CPU’s ISA in Assembly. Deployed the system on an Artix-7 FPGA and built it to interface with an electric guitar and amp via an ADC/DAC. Developed testbenches for system-level and module-level verification, and executed logic synthesis and timing analysis in Vivado.
Media
Design Decisions
Have you ever seen a DJ "boost the bass"? As a musician, I wanted this project to be a few things: something that felt tangible, something that could teach me a bit more about signal processing, something memorable, and something to serve as a challenge. Being an avid guitar player, I decided that it would be insanely cool to have a real-time equalizer that could boost or dampen the high, mid, and low frequencies as I play my electric guitar. This project served as an extension of the Pipelined CPU, which I also wrote about here.

Given the short deadline and the enormous task, we first set about clarifying scope. We ran some figures to pin down constraints: to make this real-time, we would have to sample fast. We decided that 44.1 kHz, the standard CD sampling rate, would be a good choice, especially since it exceeds twice the 20 kHz upper limit of the 20 Hz-20 kHz human hearing range, as the Nyquist criterion requires. We also knew that the system clock on the FPGA was 100 MHz. This meant we would have roughly 100,000,000 / 44,100 ≈ 2,268 system clock cycles between samples (we ended up changing the system clock later, but the budget stayed on this order), during which we could operate on all the samples stored up to that point.

Now knowing how many cycles we had to work with, we set about building the pipeline for the modulation itself. We had to take the analog audio from the guitar, amplify it, convert it into a digital signal fed into the FPGA, run an FFT on the digital signal to extract its frequency spectrum, modulate that spectrum (with each frequency bin having its own multiplier), take the IFFT of the modulated spectrum, and output the result through a DAC into a speaker. Each of these stages, along with the transitions between them, was designed, written, and individually tested.
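The FFT → per-bin gain → IFFT flow above can be sketched as a small software model. This is Python/NumPy for illustration only; our actual implementation was fixed-point Verilog, and the gain function and band cutoffs below are made-up values, not our real coefficients:

```python
import numpy as np

def equalize(samples, sample_rate, gain_for_bin):
    """Model of the EQ pipeline: FFT -> per-bin gain -> IFFT."""
    spectrum = np.fft.fft(samples)
    freqs = np.fft.fftfreq(len(samples), d=1.0 / sample_rate)
    # Each frequency bin gets its own multiplier, like the hardware's
    # per-bin gain values.
    gains = np.array([gain_for_bin(abs(f)) for f in freqs])
    return np.real(np.fft.ifft(spectrum * gains))

def low_pass_gains(freq_hz):
    """Hypothetical low-pass-style curve: boost lows, mute highs."""
    if freq_hz < 300.0:
        return 2.0   # boost bass
    elif freq_hz < 4000.0:
        return 1.0   # leave mids alone
    return 0.0       # mute treble

SAMPLE_RATE = 44_100   # CD-standard rate
N = 1024               # FFT length for this demo
t = np.arange(N) / SAMPLE_RATE
# Two test tones placed exactly on FFT bins 3 (~129 Hz) and 200 (~8.6 kHz)
f_low = 3 * SAMPLE_RATE / N
f_high = 200 * SAMPLE_RATE / N
signal = np.sin(2 * np.pi * f_low * t) + np.sin(2 * np.pi * f_high * t)
out = equalize(signal, SAMPLE_RATE, low_pass_gains)
```

Running the model, the low tone comes out doubled in amplitude and the high tone is gone, which is the behavior our hardware produced on the guitar signal.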
After reading into the literature, we settled on a radix-2² single-path delay feedback (SDF) architecture, which reorganizes the butterflies of a "traditional" pipelined FFT to combine the low multiplier count of a radix-4 SDF with the simpler butterfly structure of a radix-2 SDF. We consulted a research paper on this from MIT; however, in building the design we realized that a number of components and connections in its diagrams were omitted and even mislabeled. This led to a lot of confusion, but after some late nights we understood the general idea well enough to make the changes needed to get things working. It was quite involved: the task entailed intricate timing, complex (in the mathematical sense) arithmetic, and reordering of the samples, which get scrambled out of order as they pass through the FFT pipeline.

With the FFT and IFFT components done, we moved on to the audio interfaces. Due to parts limitations, we were only able to secure a 12-bit ADC, which sacrificed some audio quality; we zero-padded its samples to fit our 16-bit FFT module. We also had to implement custom FFT and IFFT registers to store the intermediate results of the FFT/IFFT modules. Thankfully, a few days before our demo we had everything working nicely.

The morning of the demo, however, we had an unpleasant surprise: somehow our FPGA had been stolen and the hardware wiring completely dismantled. (Our ECE lab has plenty of spare FPGAs, wires, and resistors.) We decided not to dwell on the "who", as we had saved the bitstream we tested with to GitHub; we just needed to rewire the simple voltage divider circuit to my amp and flash our new FPGA. That afternoon we had our final technical defense and a live demo, which you can watch here! (Do mind that I had had no sleep, plus the scare of the stolen FPGA, so I am a bit frazzled in the video.) You can hear the audio play through the speakers as I play; we implemented a low-pass-style setting that boosts the low frequencies and mutes the highs. While Dr. Board pushed us hard in the final stretch, it was ultimately worth it. He said it was well done, and one of only about five projects to incorporate FFTs in the 60 years he has been teaching at Duke.
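The sample reordering mentioned above comes from how pipelined SDF FFTs emit their results: a radix-2-style decimation-in-frequency pipeline produces outputs in bit-reversed index order, which must be permuted back to natural order. A minimal illustration (Python rather than our Verilog; the function names are mine):

```python
def bit_reverse(index, num_bits):
    """Reverse the low `num_bits` bits of `index` (e.g. 0b001 -> 0b100)."""
    result = 0
    for _ in range(num_bits):
        result = (result << 1) | (index & 1)
        index >>= 1
    return result

def reorder(scrambled):
    """Undo the bit-reversed output order of an N-point DIF FFT pipeline."""
    n = len(scrambled)
    num_bits = n.bit_length() - 1  # n must be a power of two
    return [scrambled[bit_reverse(i, num_bits)] for i in range(n)]

# An 8-point DIF pipeline emits bins in order 0,4,2,6,1,5,3,7;
# reordering recovers the natural order 0..7.
scrambled = [0, 4, 2, 6, 1, 5, 3, 7]
natural = reorder(scrambled)
```

In hardware this permutation is just an address transformation on the result buffer, since reversing the bits of a read address costs only wiring, not logic.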
Key Learnings & Takeaways
This project taught us not only the fundamentals of digital design, but also how to navigate the challenges that come with signal processing, and the value of being independent, resourceful, and persistent, even in the face of an overwhelming task. (I took a graduate-level design class concurrently with this one, which had its own final project; see here!)