# Two ears - two loudspeakers ?



## syntheticwave (Feb 9, 2011)

...that question divides the audio freaks into two camps: stereo or surround.
However, what are the important problems that have constricted true spatial audio until today?

This website attempts to describe the main problems:
http://www.holophony.net/Two_Ears-Two_Loudspeakers.htm

H.


----------



## Kal Rubinson (Aug 3, 2006)

syntheticwave said:


> ...that question divides the audio freaks into two camps: stereo or surround.


Actually, it is usually offered as a simplistic statement and, as such, it separates the informed from the ignorant. 

As we all know, stereo means solid and not 2 channel. Even Wikipedia seems to have caught on to this.

Kal


----------



## jackfish (Dec 27, 2006)

*stereo-*  
a combining form borrowed from Greek, where it meant “solid”, used with reference to hardness, solidity, three-dimensionality in the formation of compound words: stereochemistry; stereogram; stereoscope; stereophonic.

It relates to how the offset between complementary inputs can be perceived as a three-dimensional perspective. Both your ears and your eyes are stereo sensor systems.


----------



## syntheticwave (Feb 9, 2011)

jackfish said:


> *stereo-*
> It relates to how the offset between complementary inputs can be perceived as a three-dimensional perspective. Both your ears and your eyes are stereo sensor systems.


...let me citate the Page:

"The signal differences between both receptors we use in audio for the determination of direction are not, as with the eyes, used to estimate the distance."

H.


----------



## Kal Rubinson (Aug 3, 2006)

syntheticwave said:


> ...let me citate the Page:
> 
> "The signal differences between both receptors we use in audio for the determination of direction are not, as with the eyes, used to estimate the distance."
> 
> H.


I assume you mean "cite" but what you quote is something that has been known for a while and is one of the major missing pieces in realistic reproduction. 

I recall a 20.2 (or was it 20.4?) demo at CES some years back where one was led into the dark demo room by a guide with a flashlight so that you had no idea of the size of the room, the audio setup or, indeed, anything or anyone in the room. The surround effect was remarkably realistic and enjoyable but, in retrospect, was somewhat at a distance from me, all around. However, while I was concentrating on the sound, a voice, not more than 2 feet from my right ear, said, "Impressive, isn't it?" It was the voice's closeness that was startling as everything else to be heard was off in the distance.

When the lights came up, I could see the array of 20 speakers surrounding the room, all 20 feet or more away from me, and the other guy sitting one seat away. This event crystallized in my mind the awareness that all the reproduced sounds we hear never seem much closer to us than the sources (speakers) and, often, they appear even more distant. Thus, the soundstage begins in the speaker plane (regardless of the number of speakers/channels) but, without some sort of phase processing, never nearer.

Proximity is the last frontier in audio reproduction.


----------



## patchesj (Jun 17, 2009)

I would think it might be difficult to "Trick" your mind/ears into believing sound has arrived before it could actually get there, e.g. if it takes 6ms to get from the speaker to your head, how can you perceive that it got there in 3ms? Headphones and binaural mics anyone? 

Perhaps a small sphere that you place your head into with a nearly limitless number of drivers, each built in a way that they have a direct line to your ear and never interfere with each other... Oh wait, headphones again...

You can only "hear" with two sensors anyway, right?
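The arithmetic in the question above is easy to check, assuming roughly 343 m/s for the speed of sound in air at room temperature:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def travel_time_ms(distance_m: float) -> float:
    """Time for sound to travel a given distance, in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000.0

def distance_for_delay(delay_ms: float) -> float:
    """Distance corresponding to a given travel time."""
    return SPEED_OF_SOUND * delay_ms / 1000.0

print(f"2 m to the speaker -> {travel_time_ms(2.0):.1f} ms")   # ~5.8 ms
print(f"a 3 ms arrival    -> {distance_for_delay(3.0):.2f} m")  # ~1.03 m
```

So a 3 ms arrival corresponds to a source only about a metre away, which is exactly the "closer than the speaker" impression the question says cannot happen.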


----------



## Kal Rubinson (Aug 3, 2006)

patchesj said:


> I would think it might be difficult to "Trick" your mind/ears into believing sound has arrived before it could actually get there, e.g. if it takes 6ms to get from the speaker to your head, how can you perceive that it got there in 3ms? Headphones and binaural mics anyone?


Aha! There has to be a trick based on psychoacoustics. Now, for the music I listen to, it is not a problem, since I do not want to be "in" the music, and I care much less about sound for HT. It is for the latter that proximity becomes relevant.



> You can only "hear" with two sensors anyway, right?


Yes, but they are connected to a really complex processor.


----------



## syntheticwave (Feb 9, 2011)

That is described in the appendix of the site, under 

http://www.holophony.net/Phantom-sources.htm


"One-dimensionality

Just as a perspective drawing can trigger a spatial impression only within certain limits, the phantom source is only partly able to create the perception that a sound source is more or less distant. And just as all the ink remains inside the drawing, all phantom sources remain on the line between the loudspeakers.

That becomes obvious when we move around the listening area. For example, if the violin in a concert hall stands directly in front of the timpani, we hear both instruments from the same direction. If we move to the right wall of the concert hall, however, the real violin now appears to the left of the timpani. The two phantom sources between the loudspeakers, by contrast, remain at their common starting point.

For that reason it does not seem realistic to expect the spatial depth of the real event from any phantom-source-based reproduction. We have only limited means of fooling the brain about the distance of a phantom source.

First, we can change the levels, of course. Perception associates higher levels with decreasing distance.

Second, the proportion of the direct wave compared with the diffuse-field level of the reflections is extremely important. If the sound source comes close to the listener, the direct sound pressure increases according to the 1/r law; the diffuse-field level, however, remains unchanged.

Unfortunately, individual loudspeakers cannot produce such relations, because their own radiation is overlaid by the reflections of the playback room. Thus, we cannot portray any source closer to the listener than the loudspeakers themselves. That is the reason why in surround sound we are only surrounded, not enclosed in the sonic field.

The third reason for the disturbing source distance in the home cinema is the initial time delay gap (ITDG). We cannot avoid short sound detours in small rooms; in large rooms the first reflection arrives later.

For very close sound sources a further cue is important. If the signal at one ear is much louder than at the other, the only possible reason is that the sound source is very close. Headphones can reproduce this, but loudspeakers cannot.

Besides wearing headphones, the only way out..."
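The 1/r argument in the quoted passage can be sketched numerically. The direct sound falls 6 dB per doubling of distance while the diffuse field stays flat; the -6 dB diffuse level here is an arbitrary assumption, just to show the ratio changing:

```python
import math

def direct_level_db(distance_m: float, ref_level_db: float = 0.0,
                    ref_distance_m: float = 1.0) -> float:
    """Direct-sound level: falls 6 dB per doubling of distance (1/r law)."""
    return ref_level_db - 20.0 * math.log10(distance_m / ref_distance_m)

DIFFUSE_LEVEL_DB = -6.0  # assumed constant reverberant-field level

for d in (0.5, 1.0, 2.0, 4.0):
    drr = direct_level_db(d) - DIFFUSE_LEVEL_DB  # direct-to-reverberant ratio
    print(f"{d:4.1f} m: direct {direct_level_db(d):6.1f} dB, D/R {drr:6.1f} dB")
```

A source at half a metre is some 12 dB "drier" than one at two metres; a loudspeaker fixed at two metres in a reflective room can never present that ratio, which is the quoted passage's point.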


----------



## monomer (Dec 3, 2006)

patchesj said:


> I would think it might be difficult to "Trick" your mind/ears into believing sound has arrived before it could actually get there, e.g. if it takes 6ms to get from the speaker to your head, how can you perceive that it got there in 3ms?


These are just my thoughts out loud and not based upon any studies outside my own personal experiences:

I'm thinking time would be irrelevant here, since there are no visuals to sync with. In fact, if only the speakers were providing *ALL* the acoustical cues (IOWs, no room reflections at all, anechoic), then providing distance cues would boil down to the delay between the direct sound's arrival times at each ear, and I don't really see how any amount of recording/engineering 'trickery' could make a sound seem closer than the speaker(s), regardless of how many there are.

I'm thinking this is the reason headphones cannot provide the cues necessary to get a sound source to image outside of the confines of, essentially, the space between your ears when playing back ordinary sound tracks. However, it would seem to me that with headphones, because each sound source is dedicated to a single ear and effectively isolated from being heard at the other ear, it would be possible to engineer a pair of specialized recording tracks that could put the imaging outside of one's head... but I don't see how this would be possible *without* that isolation (as would be the case with 'free' speakers in a room).

Amplitude differences between the ears give us directional cues, but it's the time delay that tells us at what distance the sound source is.
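As a rough illustration of the timing cue, the classic Woodworth head model maps source azimuth to interaural time difference (ITD); note it yields direction, not distance. The head radius is an assumed typical value:

```python
import math

HEAD_RADIUS = 0.0875    # m, assumed typical adult head
SPEED_OF_SOUND = 343.0  # m/s

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth's frontal-plane approximation of the interaural
    time difference: (a/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:3d}° -> ITD {itd_seconds(az) * 1e6:6.1f} µs")
```

The maximum ITD, for a source at 90°, comes out around 0.65 ms; the same delay value can be produced by a source at any distance along that direction, which is why timing alone does not settle the distance question.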


----------



## syntheticwave (Feb 9, 2011)

monomer said:


> ... However, it would seem to me that with headphones, because each sound source is dedicated to a single ear and effectively isolated from being heard at the other ear, it would be possible to engineer a pair of specialized recording tracks that could put the imaging outside of one's head... but I don't see how this would be possible *without* that isolation (as would be the case with 'free' speakers in a room).
> 
> Amplitude differences between the ears give us directional cues, but it's the time delay that tells us at what distance the sound source is.



Not from the time delay alone; it is more complex. Seriously, go read http://www.holophony.net/Two_Ears-Two_Loudspeakers.htm

For signals outside your head, but close to it, put on your headphones and listen:






Ambiophonics is the way to keep the loudspeaker signals separate at each ear.
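As a sketch of what Ambiophonics-style processing does, here is a simplified RACE-like recursion (recursive crosstalk cancellation): each output channel subtracts a delayed, attenuated copy of the other output, approximating cancellation of the acoustic path from each speaker to the opposite ear. This is not Mr. Oellers' actual implementation, and the delay and gain values are placeholders, not tuned to any real head or speaker geometry:

```python
def race_crosstalk_cancel(left, right, delay=3, gain=0.85):
    """RACE-style recursive crosstalk cancellation sketch.

    Each output sample subtracts a delayed, attenuated copy of the
    *other* channel's output, so the crosstalk reaching the opposite
    ear is (approximately) cancelled acoustically.
    """
    n = len(left)
    out_l = [0.0] * n
    out_r = [0.0] * n
    for i in range(n):
        xl = gain * out_r[i - delay] if i >= delay else 0.0
        xr = gain * out_l[i - delay] if i >= delay else 0.0
        out_l[i] = left[i] - xl
        out_r[i] = right[i] - xr
    return out_l, out_r

# Feeding an impulse into the left channel shows the alternating,
# decaying cancellation terms appearing in both outputs:
L, R = race_crosstalk_cancel([1, 0, 0, 0, 0, 0, 0],
                             [0, 0, 0, 0, 0, 0, 0],
                             delay=2, gain=0.5)
print(L)  # [1.0, 0.0, 0.0, 0.0, 0.25, 0.0, 0.0]
print(R)  # [0.0, 0.0, -0.5, 0.0, 0.0, 0.0, -0.125]
```

In a real setup the delay corresponds to the extra path length around the head and the gain to head shadowing, both of which depend on the listener and the speaker span.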


H.


----------



## koyaan (Mar 2, 2010)

It appears to me that Mr. Oellers is objecting to the lack of a holographic sound field, as well as distortion from the listening area and the limitations of microphone recording.
If the object is to completely reproduce the live performance, these are very valid objections; however, I'm not sure they impact a lot on enjoyment of the music.
For me, the proof of the pudding is the enjoyment.


----------



## syntheticwave (Feb 9, 2011)

koyaan said:


> It appears to me that Mr. Oellers is objecting to the lack of a holographic sound field, as well as distortion from the listening area and the limitations of microphone recording.
> If the object is to completely reproduce the live performance, these are very valid objections; however, I'm not sure they impact a lot on enjoyment of the music.
> For me, the proof of the pudding is the enjoyment.


That is a valid argument, but I think Mr. Oellers shows new ways of enlarging the artistic possibilities of audio reproduction. I cannot see any reduction of the known possibilities of audio production. However, the described removal of the playback-room acoustics from the transmission chain seems essential. The pudding will be enjoyed much more.


----------



## DanTheMan (Oct 12, 2009)

I wrote a little blurb about this a little over a week ago, and by far the easiest way for 3D to happen is binaural recording. A well-treated HT isn't bad either. I've even made binaural recordings of my HT (sick man). You really can paint a picture with SS processing. You can read this, as it's likely simpler than the original link, but don't quote me on that.
http://dtmblabber.blogspot.com/2011/02/tightening-loudspeaker-recording-and.html

The references at the bottom are really useful for the HT crowd, though it may not seem intuitively obvious. Knowing how control rooms are set up should really be an intriguing guide to how your HT is set up.

This would make for an interesting read on the topic as well: http://www.princeton.edu/3D3A/index.html
Lots of detailed loudspeaker measurements there, to boot.

Dan


----------



## patchesj (Jun 17, 2009)

DanTheMan said:


> I wrote a little blurb about this a little over a week ago, and by far the easiest way for 3D to happen is binaural recording. A well-treated HT isn't bad either. I've even made binaural recordings of my HT (sick man). You really can paint a picture with SS processing. You can read this, as it's likely simpler than the original link, but don't quote me on that.
> http://dtmblabber.blogspot.com/2011/02/tightening-loudspeaker-recording-and.html
> 
> The references at the bottom are really useful for the HT crowd, though it may not seem intuitively obvious. Knowing how control rooms are set up should really be an intriguing guide to how your HT is set up.
> ...


Dan, you mention in the blog that absorption should be as thick as possible on the dead end. While I agree somewhat, if the goal of this absorption is to lessen the "distraction" of the early reflections, I don't see the need to extend the effect much below 250 Hz. Don't we begin to lose the ability to decipher directionality at lower frequencies anyway? I would think the resources would be better spent on effectively covering all of the early reflection points vs. only treating a few points (but down to very low frequency). This assumes, of course, limited resources...


----------



## jackfish (Dec 27, 2006)

The demonstration cannot make it seem that the sound is coming from in front of you. Why is that?


----------



## DanTheMan (Oct 12, 2009)

patchesj said:


> Dan, you mention in the blog that absorption should be as thick as possible on the dead end. While I agree somewhat, if the goal of this absorption is to lessen the "distraction" of the early reflections, I don't see the need to extend the effect much below 250 Hz. Don't we begin to lose the ability to decipher directionality at lower frequencies anyway? I would think the resources would be better spent on effectively covering all of the early reflection points vs. only treating a few points (but down to very low frequency). This assumes, of course, limited resources...


Budget considerations are difficult. A strong case could definitely be made for 250 Hz as a minimum, but a strong case could be made for even lower. 80 Hz is the traditional point below which direction is no longer discernible, but being effective down that low requires a lot of absorption. Deep bass absorption can be effective for actually increasing output and lessening the strain on the amp and speakers. It's hard to EQ SBIR and modal issues effectively. Yes, there are a number of programs that do a remarkable job, but the less variation you have to begin with, the more even your sound field will be in the end.
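A common rule of thumb (an approximation, not a precise design method) is that a porous absorber needs to be roughly a quarter wavelength deep at the lowest frequency it should absorb effectively. A quick sketch shows why 80 Hz is so demanding compared with 250 Hz:

```python
SPEED_OF_SOUND = 343.0  # m/s

def quarter_wavelength_m(freq_hz: float) -> float:
    """Rule-of-thumb porous-absorber depth: a quarter wavelength
    at the lowest frequency to be absorbed effectively."""
    return SPEED_OF_SOUND / freq_hz / 4.0

for f in (250.0, 80.0):
    print(f"{f:5.0f} Hz -> ~{quarter_wavelength_m(f) * 100:.0f} cm deep")
```

That works out to roughly 34 cm of material for 250 Hz but over a metre for 80 Hz, which is why practical rooms lean on spaced panels, resonant traps, or EQ rather than sheer absorber depth down low.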

Dan


----------

