My project has stalled temporarily due to some unforeseen problems with insects at my house (ref. Gaah thread), and a good chunk of cash, more than I was spending on the boombox project, has gone to insecticides and a collapsible bed. Oh well, I'll be able to recover at least half of the money by working an extra day this week.
In the meantime, I've been revisiting an idea from my early college years, related to my interest in more advanced surround sound techniques. As I wrote above, in high school and early college I was experimenting with Dolby Surround and matrix quadraphonic. But once in college, I had access to more advanced science and library materials. Around 1990 I started a project I called the Psychoacoustic Surround Sound System (PSSS for short), which explored human sound processing more or less historically, developing the work in stages (surround sound was Stage II of the project). The academic research from the late 1970s that I could find at the library represented the peak of human knowledge about how we locate sound in three dimensions. As it happens, psychoacoustics is a cross between physics, anatomy, and psychology, and it's a rather ethereal subject that eventually gave us 3D sound in computer games and the virtual surround sound that comes from only two speakers in television systems. It didn't begin as a deliberate study of exactly how humans perceive sound; it's more an aggregate of research that accumulated over the centuries, as far back as the Middle Ages, and only began to coalesce into a field of study by the late 19th century. It came of age during the 20th century, when military technology pushed the science further.
The reason I found out about the science was an article in Popular Science / Popular Mechanics / OMNI (?) about a contract awarded by the United States Air Force to NASA and the late Hughes Aerospace, circa 1986, for a system that could simulate three-dimensional sounds inside fighter jet cockpits. The idea was to take away part of the heavy visual information load pilots had to endure during combat and place the burden of processing that information on the pilot's hearing instead. In theory, that should allow pilots to focus on the airspace around them instead of having to look at dials and readouts on their front panel or heads-up display.
The key lay hidden in findings by scientists around the world over the centuries, but the pinnacle of science's understanding of how humans process sound did not arrive until the late 1970s and early 1980s, when scientists could use computers to measure sounds captured by tiny microphones embedded in the ear canals of test subjects, just outside the eardrum. Combining physical acoustics, biology, anatomy, and psychology, scientists could evaluate people's perceptions of sound and quantify the human ability to locate a sound source in three dimensions. Only a handful of scientists around the world were sufficiently trained in all of the required subjects during the mid-to-late 20th century, so it was rather easy to follow their progress.
What scientists discovered, besides the obvious, such as the difference in sound intensity between the ears, is that the human brain can actually perceive phase differences between the right and left ears. The brain can also measure the time of arrival of a sound's wave front at each ear. And the latest findings from the 1970s were that the outer ears (pinnae), the shape of your head, and even your upper body form a complex surface that diffracts and reverberates sound in intricate ways, changing how your eardrums register it depending on the location of the source. The last piece of the puzzle was the discovery that the frequency content of a sound changes drastically as the source location changes; the outer ear and head, in particular, carve a deep notch into mid and high frequencies that shifts with the direction of the sound source.
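To put a rough number on that time-of-arrival cue, a common back-of-the-envelope model treats the head as a rigid sphere. Here's a minimal sketch (my own illustration, not from any of the papers I mention) using the Woodworth spherical-head approximation:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a rigid
    spherical head of radius a, via Woodworth's formula:
    ITD = (a / c) * (sin(theta) + theta), theta = azimuth in radians."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta)

# A source 45 degrees off-center arrives roughly 380 microseconds
# earlier at the near ear -- tiny, but the brain resolves it.
print(f"{itd_woodworth(45.0) * 1e6:.0f} microseconds")
```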
Using computers to operate in Laplace transform space (frequency and phase), scientists could now make a map of the frequency and phase changes in the sound perceived at the eardrums, depending on which direction the sound waves came from. Coupled with previous knowledge about intensity and phase differences, scientists could now, in theory, program a computer to vary the frequency content, phase, and time delay between the right and left ears to create the illusion of a sound source moving about your head in three-dimensional space. What was needed was a "key", a mathematical operator that could take ordinary monaural sound and map it into 3D space; that operator, the Fourier transform of the three-dimensional sound field around your head, is called the Head-Related Transfer Function (HRTF). With that map, coupled with a similar map of phase changes, you can in theory take a monaural sound, or a two-channel stereo sound, and turn it into a source located anywhere on a three-dimensional sphere around your head.
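In digital terms, the operation described above boils down to convolution: filter the monaural signal with the left-ear and right-ear impulse responses measured for the desired direction (the impulse responses being the time-domain counterparts of the HRTF). A minimal numpy sketch, assuming you already have a measured pair; `hrir_left` and `hrir_right` are placeholders for real data:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Place a monaural signal at the direction encoded by an HRIR
    pair by convolving it with each ear's impulse response.
    Returns an (N, 2) stereo array meant for headphone playback."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    n = max(len(left), len(right))
    out = np.zeros((n, 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out
```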
In their proposal for the US Air Force contract, NASA came to the conclusion that a computer was needed to vary the phase and frequency content of the sound, and that playback absolutely had to be through headphones to get the right 3D effect. The issue, they noted, was cross-talk between the right and left speakers: the 3D sound cues were too subtle to survive the low stereo separation of regular speakers. They also argued that the HRTF needed to be measured for every single person intended to become a test subject or user, since heads, noses, and ears come in a great variety of shapes and sizes across the population; the 3D sound had to be customized to the individual. Using a computer system called the "Convolvotron", they set out to make their case to the Air Force.
Hughes Aerospace came back with a different answer. They were of the opinion that a sufficiently generalized HRTF could be found to work on most humans, so a regular set of stereo speakers could do the job. The amount of cross-talk between the right and left speakers, they opined, wasn't that critical. And if you didn't mind getting only a 180-degree sound hemisphere in front of you, as opposed to a full 360-degree sphere, and you didn't need the 1-5 degrees of arc accuracy in source location that humans can achieve, then a pair of stereo speakers could be made to work. Hughes went on to develop a stereo sound processor called the "Sound Retrieval System" (SRS), which was eventually sold for Hi-Fi equipment and the flat panel televisions that still carry the system today. A new company called SRS Corporation (?) was formed to market it, though today you have 1001 other similar methods used in computer games and the like. The black Sony amplifier has about 3 different variations on the method for generating a "virtual surround sound" stage.
So... I was actually going through my old college records, and I found a spreadsheet with the main data on the HRTF published by some of those highbrow scientists in the late 1970s. The data set comes from a German research team, published in 1977 in the Journal of the Acoustical Society of America, and is the most complete set of HRTF measurements I have ever found. Their data gives the HRTF field from 500 Hz to 15 kHz; I'm sure there are newer, better measurements out there, but this data set is pretty complete. I was thinking I might explore building a simple analog processor to approximate this HRTF, similar to what Hughes did for their SRS system.
Data adapted from Mehrgardt, S., and Mellert, V. (1977). "Transformation characteristics of the external human ear." J. Acoust. Soc. Am. 61, 1567–1576.

Now, to be honest, I don't even know if there's a simple way to do it without resorting to a digital computer. But back between 1988 and 1997, in between studying various aeronautical engineering topics, I was dead serious about developing a stereo matrix system coupled with an HRTF operator, similar to the Hughes SRS method: basically taking the "right minus left" difference between the stereo channels, passing it through the HRTF operator, and generating *some* form of virtual surround sound. Hopefully without the use of computers, just a simple circuit built from operational amplifiers and phase shifters... I'd be shooting for something relatively simple using a generalized HRTF based on the figure above.
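Before I commit any op-amps, the idea is easy to mock up digitally. Here's a rough sketch of the chain I have in mind, with a single notch filter standing in for a generalized HRTF; the 8 kHz center frequency and the mix amount are placeholder values I picked for illustration, not numbers taken from the Mehrgardt/Mellert data:

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

def pseudo_surround(left, right, fs, notch_hz=8000.0, q=2.0, amount=0.5):
    """Extract the (R - L) difference signal, color it with a notch
    filter standing in for a generalized HRTF, and fold it back into
    the stereo pair with opposite polarity in each channel."""
    diff = right - left                # matrix "surround" content
    b, a = iirnotch(notch_hz, q, fs)   # crude stand-in for the pinna notch
    surround = lfilter(b, a, diff)
    return left - amount * surround, right + amount * surround
```

An analog version would swap the notch for an op-amp twin-T or gyrator filter and the polarity trick for an inverting mixer stage, which is about the level of complexity I'm hoping to stay at.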
Maybe it's time I restart Project PSSS III... I'll try to think about this while I'm not fighting bugs and building beds.