Why can we distinguish different pitches in a chord but not different hues of light?











65 votes, favorited 12 times












In music, when two or more pitches sound at the same time, they form a chord. If each pitch corresponds to a wave frequency (a pure, or fundamental, tone), the pitches played together make a superposition waveform, obtained by simple pointwise addition. This combined wave is no longer a pure sinusoid.



For example, when you play a low note and a high note on a piano, the resulting sound has a wave that is the mathematical sum of the waves of each note. The same is true for light: when you shine a 500 nm wavelength (green light) and a 700 nm wavelength (red light) at the same spot on a white surface, the reflection will be a superposition waveform that is the sum of green and red.
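That "simple addition" is easy to check numerically. The sketch below (sample rate and note frequencies are arbitrary choices for illustration) sums two pure tones and confirms the result is no longer a pure sinusoid:

```python
import numpy as np

# One second of "sound", sampled at an illustrative 8 kHz.
rate = 8000
t = np.arange(rate) / rate

low = np.sin(2 * np.pi * 220 * t)    # low note: a 220 Hz pure tone (A3)
high = np.sin(2 * np.pi * 440 * t)   # high note: a 440 Hz pure tone (A4)

# Superposition is plain pointwise addition.
chord = low + high

# Each pure tone peaks at 1.0; the sum peaks noticeably higher, and its
# shape is no longer sinusoidal.
print(chord.max() > 1.5)   # -> True
```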



My question is about our perception of these combinations. When we hear a chord on a piano, we’re able to discern the pitches that comprise that chord. We’re able to “pick out” that there are two (or three, etc.) notes in the chord, and some of us who are musically inclined are even able to sing back each note, or even name it. It could be said that we’re able to decompose a Fourier series of sound.



But it seems we cannot do this with light. When you shine green and red light together, the reflection appears to be yellow, a “pure hue” of 600 nm, rather than an overlay of red and green. We can’t “pick out” the individual colors that were combined. Why is this?



Why can’t we see two hues of light in the same way we’re able to hear two pitches of sound? Is this a characteristic of human psychology? Animal physiology? Or is this due to a fundamental characteristic of electromagnetism?






























  • 1




    A short answer would be: our eyes perceive so much more information per second. Hearing sounds is sporadic; you can afford to interpret them well, since that's useful in order to know what is coming. However, decomposing every pixel at 24 fps would take so many resources that it just isn't worth it, and you wouldn't get really useful information from it either.
    – FGSUZ
    2 days ago






  • 3




    Two beams of different-colored light do not superimpose into a single waveform the way sound does. One is an electromagnetic wave; the other is just pressure traveling through air.
    – MadHatter
    2 days ago






  • 4




    Mammals were typically nocturnal in the time of the dinosaurs; that's why they sunburn easily and have whiskers. Only primates have RGB eyesight, dolphins only see green, and most mammals don't see red. Eyes have 3 wavelength-sensing photoreceptors; ears have thousands of continuous wavelength-sensing nerves in a cone-tapered spiral tube. Photons do not merge, BTW; sound pressure does.
    – com.prehensible
    2 days ago








  • 3




    @MadHatter — EM waves are famously known to superimpose, causing constructive/destructive interference, as demonstrated in the double-slit experiment
    – chharvey
    2 days ago






  • 3




    The ear contains a harp, with many strings, each sensitive to a particular frequency. The eye contains three types of receptors -- red, green, and blue. Any colors other than those are "guessed at" by judging the relative intensities of the three colors.
    – Hot Licks
    yesterday

















Tags: visible-light, waves, acoustics, biology, perception














edited 2 days ago by Qmechanic

asked Dec 1 at 2:08 by chharvey








6 Answers

















Accepted answer (42 votes)










Our sensory organs for light and sound work quite differently on a physiological level. The eardrum reacts directly to pressure waves, while the photoreceptors on the retina are each sensitive to a range of frequencies around those associated with red, green and blue. Light frequencies in between partly excite these receptors, and the impression of seeing, for example, yellow arises from the green and red receptors being excited with certain relative intensities. That's why a display can fake the color spectrum with only 3 different colors at each pixel.



Seeing color in this sense is also more of a useful illusion than direct sensing of a physical property. Mixing colors in the middle of the visible spectrum retains a good approximation of the average frequency of the light mix. If colors from the edges of the spectrum are mixed, e.g. red and blue, the brain invents the color purple or pink to make sense of that sensory input. This does not correspond to the average of the frequencies (which would be a greenish color), nor to any physical frequency of light. The same goes for seeing white or any shade of grey, which correspond to all receptors being activated with equal intensity.



Mammalian eyes also evolved to distinguish intensity rather than color, since most mammals are nocturnal creatures. But I'm not sure whether the ability to see in color was only established recently; that would be a question for a biologist.
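The "yellow from red plus green" effect can be sketched numerically. The model below is a deliberate simplification: the Gaussian cone sensitivity curves and their peak wavelengths are made up for illustration, not real physiological data. It solves for a green-plus-red mixture whose cone response approximates that of pure 580 nm light:

```python
import numpy as np

wavelengths = np.arange(400, 701)  # visible range, nm

# Toy cone sensitivity curves (Gaussians; NOT real physiological data).
def cone(peak_nm, width_nm=40.0):
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

S, M, L = cone(445), cone(540), cone(565)

def cone_response(spectrum):
    """Collapse a full spectrum into the three numbers the retina reports."""
    return np.array([S @ spectrum, M @ spectrum, L @ spectrum])

def mono(nm):
    """A monochromatic spectrum: all power at a single wavelength."""
    return (wavelengths == nm).astype(float)

# Find weights of 540 nm green plus 630 nm red that best imitate pure 580 nm.
target = cone_response(mono(580))
A = np.column_stack([cone_response(mono(540)), cone_response(mono(630))])
weights, *_ = np.linalg.lstsq(A, target, rcond=None)

# The mixture's cone response closely matches that of the pure yellow,
# even though the two spectra are physically very different.
print(np.abs(A @ weights - target).max() < 0.05)   # -> True
```

In this toy model, as in the answer, the "yellow" sensation is just a particular ratio of M and L excitation, which a mixture can reproduce.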




















  • 3




    Note that you cannot actually fake all the colors using only three primaries. Human-visible color gamut is not a triangle, so some colors will always be outside of output gamut of your display device.
    – Ruslan
    2 days ago








  • 17




    Perhaps a nitpick, but it's not the eardrum that detects sound. It's more of a transmission device. The actual sensory organ is the cochlea en.wikipedia.org/wiki/Cochlea It's a spiral-shaped tube with sensory hairs along it. Sounds of a particular frequency vibrate the hairs at the spot in the cochlea where the sound resonates. So sound sensing is effectively continuous, while color sensing depends on the mix of the 3 color sensors.
    – jamesqf
    2 days ago






  • 4




    Actually, the photoreceptors are sensitive to quite large bands (compared to the distance of their peaks), even overlapping ones.
    – Paŭlo Ebermann
    2 days ago






  • 2




    @HalberdRejoyceth, yes, please do update. I chose your answer because it hit the underlying point—that our ears sense true waveforms while our eyes do not. I found that to sufficiently answer my question, even if it’s not the complete truth. However, I do think you would benefit the community to explain in further detail the differences in how the cochlea and the retina work.
    – chharvey
    2 days ago






  • 2




    Do you have any source for your claim that most mammals are nocturnal? While we assume they (we) were during the high time of the dinosaurs, is this still the case?
    – phresnel
    18 hours ago


















63 votes













This is because of the physiological differences in the functioning of the cochlea (for hearing) and the retina (for color perception).



The cochlea separates out a single channel of complex audio signals into their component frequencies and produces an output signal that represents that decomposition.



The retina instead exhibits what is called metamerism, in which only three sensor types (for R/G/B) are used to encode an output signal that represents the entire spectrum of possible colors as variable combinations of those RGB levels.
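Metamerism follows directly from the arithmetic: the retina maps a spectrum with hundreds of degrees of freedom onto only three numbers, so the mapping has a large null space. A minimal sketch, in which the 3 x N "sensitivity matrix" is random purely to illustrate the linear algebra:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 301                          # wavelength bins across 400-700 nm
R = rng.random((3, n_bins))           # stand-in for 3 cone sensitivity curves

# Any direction in R's null space changes the spectrum without changing
# what the "retina" reports. The SVD exposes those directions.
_, _, Vt = np.linalg.svd(R)
hidden = Vt[-1]                       # one of the n_bins - 3 null directions

spectrum_a = rng.random(n_bins)
spectrum_b = spectrum_a + 0.5 * hidden    # a genuinely different spectrum

# The two spectra are distinct, yet indistinguishable to a 3-channel sensor.
print(np.allclose(R @ spectrum_a, R @ spectrum_b))   # -> True
```

(A real metamer pair would also need nonnegative power at every wavelength; this sketch ignores that constraint.) The cochlea, with thousands of frequency channels, has no comparably large blind spot.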























  • 4




    This is the only answer so far that correctly focuses on the role of the cochlea. This is a better answer than the accepted answer.
    – Ben Crowell
    2 days ago










  • I agree that this answer is more technically correct, but I think it’s missing the key point: that our ears are able to sense mechanical waveforms while our eyes cannot sense electromagnetic waveforms. There’s room for improvement, which I welcome.
    – chharvey
    2 days ago






  • 14




    In short, the reason "it could be said that we’re able to decompose a Fourier Series of sound" is because that's exactly what the cochlea does.
    – Mark
    yesterday








  • 2




    Exactly. Quite a device, until it starts to fail, as mine have!
    – niels nielsen
    yesterday










  • I think it's worth mentioning that just like with vision, we ultimately don't hear with our ears but with our brains, and the ear-brain system can be fooled too en.wikipedia.org/wiki/Auditory_masking
    – whatsisname
    12 hours ago


















19 votes













This is due mostly to physiology. There is a fundamental difference in the way we perceive sound vs. light: for sound we can sense the actual waveform, whereas for light we can sense only the intensity. To elaborate:




  • Sound waves entering your ear cause synchronous vibrations in your cochlea. Different regions of the cochlea have tiny hairs which vibrate in a frequency-selective way. The vibrations of these hairs are turned into electrical signals which are passed on to the brain. Due to the frequency selectivity of the hairs, the cochlea essentially performs a Fourier transform, which is why we can perceive superpositions of waves.


  • Light has such a high frequency that almost nothing can resolve the actual waveform (even state of the art electronics nowadays cannot do this). All we can effectively measure is the intensity of the light, and this is all that the eyes can perceive as well. Knowing the intensity of a light beam is not sufficient to determine its spectral content. E.g. a superposition of two monochromatic waves can have the same intensity as a pure monochromatic wave of a different frequency.



    We can differentiate superpositions of light in a limited way, due to the fact that eyes perceive three separate color channels (roughly RGB). This is why we can distinguish equal intensities of red and blue light. People with colorblindness have a defective receptor, and so color combinations that most humans can distinguish appear identical to them.



    Not all colors that we perceive correspond to the color of a monochromatic light wave. Famously, there is an entire "line of purples" that does not represent any monochromatic light wave. So people trained in distinguishing purple colors can actually differentiate superpositions of light waves in a limited way.



    [Image: chromaticity diagram of the visible gamut, with the "line of purples" along its lower edge]
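The Fourier decomposition the cochlea performs mechanically can be imitated with an FFT. A small sketch (the sample rate and pitches are chosen so each frequency lands on an exact FFT bin, keeping the spectrum clean):

```python
import numpy as np

rate = 4096                                # Hz; one second gives 1 Hz-wide bins
t = np.arange(rate) / rate

# Two pitches sounding together: A4 (440 Hz) plus a tone near C5 (523 Hz).
chord = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 523 * t)

# The summed waveform looks complicated, but its spectrum cleanly
# separates back into the two component frequencies.
spectrum = np.abs(np.fft.rfft(chord))
peaks = np.flatnonzero(spectrum > spectrum.max() / 2)
print(peaks)                               # -> [440 523]
```

This is exactly the sense in which hearing a chord gives us access to the superposition's components, while a three-channel intensity measurement does not.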



























  • 5




    "...electrical signals representing the actual waveform of the sound. The brain ... does a Fourier transform..." This part of your answer is unfortunately incorrect. The decomposition into different audio frequencies happens mechanically in the cochlear before any vibrations are turned into nerve signals. So the actual waveform is not send to the brain.
    – Emil
    2 days ago










  • @Emil Do you have a reference for that? I'm not an expert, so I would happily revise my answer with better information, but my understanding is that the eardrum passes sound waves into the fluid of the cochlea, which cause stereocilia in the organ of Corti to vibrate, which in turn mechanically activate certain neurotransmitter channels. It's described on the Wikipedia page for organ of Corti. I see no reference to frequency discrimination in the cochlea.
    – Yly
    2 days ago






  • 1




    @Yly Emil is correct; the cochlea does the Fourier transform, mechanically. See cochlea.eu/en/cochlea/function
    – zwol
    2 days ago










  • @zwol Thanks. I have corrected the answer accordingly.
    – Yly
    2 days ago






  • 2




    I'm not sure about your 2nd point. Surely a simple spectrograph does a good job of resolving light frequencies? But eyes are arranged primarily for spatial discrimination, rather than frequency, like the ear. If we wanted one organ to do both, it'd need far more sensors: each rod/cone in an eye would need a separate neuron for each frequency band you want to discriminate.
    – jamesqf
    yesterday


















14 votes













Rod (1 type) plus cone (3 types) neurons in the eye give you the potential for 4-D sensation.
Since the rod signal is nearly redundant with the totality of the cone signals, this is effectively a 3-D sensation.



Cochlear neurons (roughly 3500 "types", simply due to 3500 different inner hair positions) in the ear give you the potential for 3500-D sensation, so trained ears can potentially recognize the simultaneous amplitudes of thousands of frequencies.



So, to answer your question, eyes simply didn't evolve to have many cone types. An improvement, however, is seen in the eyes of the mantis shrimp (with the potential for 16-D sensation). Notice the trade-off between spatial image resolution and color perception (and that audio spatial resolution was less important in evolution, and more difficult due to the longer wavelength).



























  • Rod signal is not redundant in mesopic vision conditions. In these conditions you get tetrachromatic vision. See e.g. this paper (paywalled unfortunately).
    – Ruslan
    19 hours ago












  • Finally an answer that puts it concisely and correctly :-)
    – cmaster
    6 hours ago


















3 votes













The hairs form a 1D array along the frequency axis, while rods and cones form a spatial 2D array. In addition, that 2D array has 4 channels (rods and 3 types of cones). So the two ears have poor spatial resolution, while the eyes have poor frequency resolution.



You could imagine an eye with many more types of cones, giving you a better frequency resolution. However, that would mean that the cones for a single color would be spaced further apart, limiting spatial resolution. In the end, that's an evolutionary trade-off. Physics tells us you can't have both at the same time, but biology is why we end up with this particular outcome.


































    0 votes













    Actually you can discern the different combinations to some extent. I figured this out after taking many art classes, particularly color theory and painting. I really started noticing the effect after the painting class, because I was taught how to mix colors and needed to learn to see and think about the combinations while looking at the subject. The same is true for the note discernment you mentioned: the people who are able to do it, and especially well, are usually people who have studied music and have had to learn to hear and think about the combinations while listening.



    So, now when I look at something "green" I see that it is actually blue with a little yellow and sometimes a bit of another color. Nothing is ever "white" or "black" to me anymore either, there is always a reflected color or a texture of some sort which catches the light from the surrounding sources.



    Edit:
    After further consideration I am inclined to say the question posted here is backwards. We actually discern many more separate colors in combination, and at the same time, than we do sounds. Look around and you will see a plethora of color combinations. We are actually perceiving many billions of separate photons coming from many billions of separate locations in our field of view. In contrast, listening to that many separate sounds would overwhelm our ears, and our perception would collapse into discord, static, or hissing-like noise with no meaning. Only a combination of very few separate sounds can be discerned.





























    • You cannot distinguish a color that is a mixture of two or more frequencies of light from a color that is a pure frequency. Such a green still looks like it has yellow and blue even though it doesn't.
      – Kaz
      yesterday










    • Ok. I see where the confusion is. Think about this, if you could make a pure "green" light, where is it exactly on the spectrum? Is it an infinitely exact point on the spectrum or does it contain a little bit from the yellowish green light just a hair before it, and a little bit of the bluish green light just a hair after? Furthermore, when we make a chord or other combination of sounds, it does not make an "other" color on the same "spectrum". For instance playing a "G" and a "B" does not make an "A". Chords which make a "note" contain that note and are simply a fuller more harmonious sound.
      – takintoolong
      5 hours ago










    • It doesn't matter; there is a green range in the spectrum, around the 490 to 570 nm wavelengths. Any light of pure frequency in that range will produce a perception of some hue of green. That same color perception can be produced from a mixture of frequencies quite distant from that one. It's just a matter of stimulating the same cells in the retina to the same degree. For sound, the human ear has a very fine resolution of frequency in which narrow bands are quite separately perceived.
      – Kaz
      5 hours ago












    • And what is the source of your so called "pure" light?
      – takintoolong
      4 hours ago










    protected by Community yesterday

























    up vote
    42
    down vote



    accepted










    Our sensory organs for light and sound work quite differently on a physiological level. The eardrum directly reacts to pressure waves while the photoreceptors on the retina are only senstive to a narrow range around the frequencies associated with red, green and blue. All light frequencies in between partly excite these receptors and the impression of seeing for example yellow arises due to the green and red receptors being exited with certain relative intensities. That's why you can fake out the color spectrum with only 3 different colors at each pixel of the display.



    Seeing color in this sense is also more of a useful illusion than direct sensing of physical properties. Mixing colors in the middle of the visible spectrum retains a good approximation of the average frequency of the light mix. If colors from the edges of the spectrum are mixed, i.e. red and blue, the brain invents the color purple or pink to make sense of that sensory input. This however doesn't correspond to the average of the frequencies (which would result in a greenish color) nor does it correspond to any physical frequency of light. Same goes for seeing white or any shade of grey, as these correspond to all receptors being activated with equal intensity.



    Mammalian eyes also evolved to distinguish intensity rather than color, since most mammals are nocturnal creatures. But I'm not sure whether the ability to see in color was only established recently; that would be a question for a biologist.




















    • 3 – Note that you cannot actually fake all the colors using only three primaries. The human-visible color gamut is not a triangle, so some colors will always be outside the output gamut of your display device. – Ruslan, 2 days ago

    • 17 – Perhaps a nitpick, but it's not the eardrum that detects sound; it's more of a transmission device. The actual sensory organ is the cochlea (en.wikipedia.org/wiki/Cochlea), a spiral-shaped tube with sensory hairs along it. Sounds of a particular frequency vibrate the hairs at the spot in the cochlea where the sound resonates. So sound sensing is effectively continuous, while color sensing depends on the mix of the 3 color sensors. – jamesqf, 2 days ago

    • 4 – Actually, the photoreceptors are sensitive to quite large bands (compared to the distance between their peaks), even overlapping ones. – Paŭlo Ebermann, 2 days ago

    • 2 – @HalberdRejoyceth, yes, please do update. I chose your answer because it hit the underlying point—that our ears sense true waveforms while our eyes do not. I found that sufficient to answer my question, even if it's not the complete truth. However, I do think it would benefit the community to explain in further detail the differences in how the cochlea and the retina work. – chharvey, 2 days ago

    • 2 – Do you have any source for your claim that most mammals are nocturnal? While we assume they (we) were during the heyday of the dinosaurs, is this still the case? – phresnel, 18 hours ago
    answered Dec 1 at 3:06 by Halberd Rejoyceth






    up vote 63 down vote













    This is because of the physiological differences in the functioning of the cochlea (for hearing) and the retina (for color perception).



    The cochlea separates a single channel of complex audio signals into its component frequencies and produces an output signal that represents that decomposition.



    The retina instead exhibits what is called metamerism: only three sensor types (roughly for R/G/B) encode an output signal that represents the entire spectrum of possible colors as variable combinations of those RGB levels.
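    Metamerism can be sketched numerically. To first approximation the cone responses are linear in the incoming spectrum, so the map from a many-dimensional spectrum down to three numbers has a large nullspace, and two spectra differing by a nullspace vector are indistinguishable. The Gaussian sensitivity curves below are made up for illustration, not real cone data:

    ```python
    import numpy as np

    # Toy trichromatic model: 3 cone types sampling a 31-point spectrum.
    wl = np.linspace(400, 700, 31)  # wavelengths in nm

    def cone(center, width=40.0):
        """Made-up Gaussian sensitivity curve (not physiological data)."""
        return np.exp(-((wl - center) / width) ** 2)

    S = np.stack([cone(440), cone(540), cone(570)])  # 3 x 31 sensitivity matrix

    # A 31-dimensional spectrum collapses to just 3 cone responses.
    spectrum1 = cone(580, 15.0)  # narrow-band, roughly "yellow" light

    # Any vector in the nullspace of S is invisible to the cones, so adding it
    # gives a physically different spectrum with identical cone responses --
    # a metameric pair. (A real spectrum would also need to stay nonnegative;
    # the scale here is chosen purely for illustration.)
    null_vec = np.linalg.svd(S)[2][-1]
    spectrum2 = spectrum1 + 0.5 * null_vec

    print(np.allclose(S @ spectrum1, S @ spectrum2))  # True
    ```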






    • 4 – This is the only answer so far that correctly focuses on the role of the cochlea. This is a better answer than the accepted answer. – Ben Crowell, 2 days ago

    • I agree that this answer is more technically correct, but I think it's missing the key point: that our ears are able to sense mechanical waveforms while our eyes cannot sense electromagnetic waveforms. There's room for improvement, which I welcome. – chharvey, 2 days ago

    • 14 – In short, the reason "it could be said that we're able to decompose a Fourier Series of sound" is that that's exactly what the cochlea does. – Mark, yesterday

    • 2 – Exactly. Quite a device, until it starts to fail, as mine have! – niels nielsen, yesterday

    • I think it's worth mentioning that, just like with vision, we ultimately don't hear with our ears but with our brains, and the ear-brain system can be fooled too: en.wikipedia.org/wiki/Auditory_masking – whatsisname, 12 hours ago















    answered Dec 1 at 2:33 by niels nielsen (14.2k rep)


















    up vote 19 down vote













    This is due mostly to physiology. There is a fundamental difference in the way we perceive sound vs. light: for sound we can sense the actual waveform, whereas for light we can sense only the intensity. To elaborate:




    • Sound waves entering your ear cause synchronous vibrations in your cochlea. Different regions of the cochlea have tiny hairs which vibrate in a frequency-selective way. The vibrations of these hairs are turned into electrical signals which are passed on to the brain. Due to the frequency selectivity of the hairs, the cochlea essentially performs a Fourier transform, which is why we can perceive superpositions of waves.


    • Light has such a high frequency that almost nothing can resolve the actual waveform (even state-of-the-art electronics cannot do this today). All we can effectively measure is the intensity of the light, and this is all that the eyes can perceive as well. Knowing the intensity of a light beam is not sufficient to determine its spectral content; e.g., a superposition of two monochromatic waves can have the same intensity as a pure monochromatic wave of a different frequency.



      We can differentiate superpositions of light in a limited way, because the eye perceives three separate color channels (roughly RGB). This is why we can distinguish equal intensities of red and blue light. People with colorblindness have a defective receptor type, so color combinations that most humans can distinguish appear identical to them.



      Not all colors that we perceive correspond to the color of a monochromatic light wave. Famously, there is an entire "line of purples" whose colors do not represent any monochromatic light wave. So in this limited sense, people trained in distinguishing purple colors can actually differentiate superpositions of light waves.



      (figure: chromaticity diagram illustrating the line of purples)
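The cochlea's mechanical frequency analysis described in the first bullet can be mimicked with a discrete Fourier transform. This sketch (my illustration, not part of the original answer) recovers the two pitches of a synthetic two-note chord:

```python
import numpy as np

# Synthesize a one-second "chord": A4 (440 Hz) plus E5 (~659 Hz).
fs = 8000  # sample rate in Hz
t = np.arange(fs) / fs
chord = np.sin(2 * np.pi * 440 * t) + 0.8 * np.sin(2 * np.pi * 659 * t)

# The Fourier transform separates the superposition back into its
# component frequencies, much as the cochlea does mechanically.
spectrum = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(len(chord), d=1 / fs)

# The two largest spectral peaks sit exactly at the two pitches.
top_two = sorted(freqs[np.argsort(spectrum)[-2:]])
print(top_two)  # [440.0, 659.0]
```

Both tones were integer frequencies over a one-second window, so each lands exactly on one FFT bin; a real cochlea does a coarser, overlapping-band version of the same decomposition.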



























    • 5 – "...electrical signals representing the actual waveform of the sound. The brain ... does a Fourier transform..." This part of your answer is unfortunately incorrect. The decomposition into different audio frequencies happens mechanically in the cochlea before any vibrations are turned into nerve signals, so the actual waveform is not sent to the brain. – Emil, 2 days ago

    • @Emil Do you have a reference for that? I'm not an expert, so I would happily revise my answer with better information, but my understanding is that the eardrum passes sound waves into the fluid of the cochlea, which causes stereocilia in the organ of Corti to vibrate, which in turn mechanically activate certain neurotransmitter channels. It's described on the Wikipedia page for the organ of Corti; I see no reference there to frequency discrimination in the cochlea. – Yly, 2 days ago

    • 1 – @Yly Emil is correct; the cochlea does the Fourier transform, mechanically. See cochlea.eu/en/cochlea/function – zwol, 2 days ago

    • @zwol Thanks. I have corrected the answer accordingly. – Yly, 2 days ago

    • 2 – I'm not sure about your 2nd point. Surely a simple spectrograph does a good job of resolving light frequencies? But eyes are arranged primarily for spatial discrimination rather than frequency, like the ear. If we wanted one organ to do both, it'd need far more sensors: each rod/cone in an eye would need a separate neuron for each frequency band you want to discriminate. – jamesqf, yesterday















    up vote
    19
    down vote













    This is due mostly to physiology. There is a fundamental difference in the way we perceive sound vs. light: For sound we can sense actual waveform, whereas for light we can sense only the intensity. To elaborate:




    • Sound waves entering your ear cause synchronous vibrations in your cochlea. Different regions of the cochlea have tiny hairs which vibrate in a frequency-selective way. The vibrations of these hairs are turned into electrical signals which are passed on to the brain. Due to the frequency selectivity of the hairs, the cochlea essentially performs a Fourier transform, which is why we can perceive superpositions of waves.


    up vote
    19
    down vote



















    This is due mostly to physiology. There is a fundamental difference in the way we perceive sound vs. light: for sound we can sense the actual waveform, whereas for light we can sense only the intensity. To elaborate:




    • Sound waves entering your ear cause synchronous vibrations in your cochlea. Different regions of the cochlea have tiny hairs which vibrate in a frequency-selective way. The vibrations of these hairs are turned into electrical signals which are passed on to the brain. Due to the frequency selectivity of the hairs, the cochlea essentially performs a Fourier transform, which is why we can perceive superpositions of waves.


    • Light has such a high frequency that almost nothing can resolve the actual waveform (even state of the art electronics nowadays cannot do this). All we can effectively measure is the intensity of the light, and this is all that the eyes can perceive as well. Knowing the intensity of a light beam is not sufficient to determine its spectral content. E.g. a superposition of two monochromatic waves can have the same intensity as a pure monochromatic wave of a different frequency.



      We can differentiate superpositions of light in a limited way, due to the fact that eyes perceive three separate color channels (roughly RGB). This is why we can distinguish, say, red light from blue light of equal intensity. People with colorblindness have a defective receptor, and so color combinations that most humans can distinguish appear identical to them.



      Not all colors that we perceive correspond to a color of a monochromatic light wave. Famously, there is an entire "line of purples" whose colors do not correspond to any monochromatic light wave. So people trained in distinguishing purple colors can actually differentiate superpositions of light waves in a limited way.
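      The contrast between the two bullet points can be sketched numerically. The following is a minimal illustration, not a physiological model: a Fourier transform recovers both notes of a two-tone "chord", while a detector that measures only intensity cannot tell that chord apart from a single tone of suitable amplitude.

```python
import numpy as np

fs = 8000                        # sample rate, Hz
t = np.arange(0, 1, 1 / fs)      # one second of signal

# A two-note "chord": 440 Hz and 660 Hz played together
chord = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t)

# Ear-like analysis: frequency decomposition recovers both notes
spectrum = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = sorted(freqs[np.argsort(spectrum)[-2:]].tolist())
print(peaks)                     # -> [440.0, 660.0]

# Eye-like analysis: average intensity alone is ambiguous -- a single
# 550 Hz tone of amplitude sqrt(2) carries the same average power
pure = np.sqrt(2) * np.sin(2 * np.pi * 550 * t)
print(abs(chord.var() - pure.var()) < 1e-9)   # -> True
```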

























    answered Dec 1 at 2:36 by Yly (edited 2 days ago)








    • 5




      "...electrical signals representing the actual waveform of the sound. The brain ... does a Fourier transform..." This part of your answer is unfortunately incorrect. The decomposition into different audio frequencies happens mechanically in the cochlea before any vibrations are turned into nerve signals. So the actual waveform is not sent to the brain.
      – Emil
      2 days ago










    • @Emil Do you have a reference for that? I'm not an expert, so I would happily revise my answer with better information, but my understanding is that the eardrum passes sound waves into the fluid of the cochlea, which cause stereocilia in the organ of Corti to vibrate, which in turn mechanically activate certain neurotransmitter channels. It's described on the Wikipedia page for organ of Corti. I see no reference to frequency discrimination in the cochlea.
      – Yly
      2 days ago






    • 1




      @Yly Emil is correct; the cochlea does the Fourier transform, mechanically. See cochlea.eu/en/cochlea/function
      – zwol
      2 days ago










    • @zwol Thanks. I have corrected the answer accordingly.
      – Yly
      2 days ago






    • 2




      I'm not sure about your 2nd point. Surely a simple spectrograph does a good job of resolving light frequencies? But eyes are arranged primarily for spatial discrimination, rather than frequency, like the ear. If we wanted one organ to do both, it'd need far more sensors: each rod/cone in an eye would need a separate neuron for each frequency band you want to discriminate.
      – jamesqf
      yesterday

















    up vote
    14
    down vote













    Rod (1 type) plus cone (3 types) neurons in the eye give you the potential for 4-D sensation.
    Since the rod signal is nearly redundant to the totality of cone signals, this is effectively a 3-D sensation.



    Cochlear (roughly 3500 "types" simply due to 3500 different inner hair positions) neurons
    in the ear give you the potential for 3500-D sensation, so trained ears can potentially
    recognize the simultaneous amplitudes from thousands of frequencies.



    So, to answer your question, eyes simply didn't evolve to have many cone types. An improvement, however, is seen through the eyes of mantis shrimp (with the potential for 16-D sensation). Notice the trade-off between spatial image resolution and color perception (and that audio spatial resolution was less important in evolution, and more difficult due to the longer wavelength).
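    The dimensionality argument can be made concrete with a toy projection. The Gaussian "cone" curves below are illustrative assumptions, not measured sensitivities, and physical non-negativity of spectra is ignored: any spectral component lying in the null space of a 3-channel sensor is simply invisible to it.

```python
import numpy as np

wl = np.linspace(400, 700, 61)                   # wavelength grid, nm

def channel(center, width=40.0):
    """Illustrative Gaussian sensitivity curve (not a real cone)."""
    return np.exp(-((wl - center) / width) ** 2)

# Three broad channels, roughly S/M/L-like (an assumption)
S = np.stack([channel(440), channel(540), channel(570)])  # shape (3, 61)

rng = np.random.default_rng(0)
s1 = rng.random(61)                               # some incoming spectrum

# Build a second spectrum differing from s1 only by a null-space
# component of S, so all 3 channel responses stay identical
_, _, Vt = np.linalg.svd(S)
s2 = s1 + 0.5 * Vt[-1]         # Vt[-1] is orthogonal to all 3 channels

print(np.allclose(S @ s1, S @ s2))   # -> True  (same perceived "color")
print(np.allclose(s1, s2))           # -> False (physically different light)
```

    A 3500-channel sensor like the cochlea leaves almost no null space to hide in, which is why two different chords rarely sound identical.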



























    answered 2 days ago by bobuhito
    • Rod signal is not redundant in mesopic vision conditions. In these conditions you get tetrachromatic vision. See e.g. this paper (paywalled unfortunately).
      – Ruslan
      19 hours ago












    • Finally an answer that puts it concisely and correctly :-)
      – cmaster
      6 hours ago




























    up vote
    3
    down vote













    The cochlear hairs form a 1D array along the frequency axis, while the rods and cones form a spatial 2D array. In addition, that 2D array has 4 channels (rods and 3 types of cones). So the two ears have poor spatial resolution, while the eyes have poor frequency resolution.



    You could imagine an eye with many more types of cones, giving you a better frequency resolution. However, that would mean that the cones for a single color would be spaced further apart, limiting spatial resolution. In the end, that's an evolutionary trade-off. Physics tells us you can't have both at the same time, but biology is why we end up with this particular outcome.
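    The budget trade-off can be put in rough numbers. The ~6 million figure below is a commonly cited estimate of the human cone count, used here only for illustration:

```python
# Fixed photoreceptor budget: every extra color channel divides the
# spatial samples available per channel. All numbers are illustrative.
photoreceptors = 6_000_000        # rough human cone count (assumption)

for channels in (3, 16, 3500):    # human cones, mantis shrimp, ear-like
    per_channel = photoreceptors // channels
    print(f"{channels:4d} channels -> {per_channel:>9,} sensors per channel")
```

    A hypothetical eye with cochlea-like frequency resolution would leave under two thousand sensors per band for the entire visual field.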






        answered 18 hours ago by MSalters






















            up vote
            0
            down vote













            Actually you can discern the different combinations to some extent. I figured this out after taking many art classes, particularly color theory and painting. I really started noticing the effect after the painting class, because I was taught how to mix colors and needed to learn to see and think about the combinations while looking at the subject. The same goes for the note discernment you mentioned: the people who are able to do it, and especially well, are usually people who have studied music and have had to learn to hear and think about the combinations while listening.



            So, now when I look at something "green" I see that it is actually blue with a little yellow and sometimes a bit of another color. Nothing is ever "white" or "black" to me anymore either, there is always a reflected color or a texture of some sort which catches the light from the surrounding sources.



            Edit:
            After further consideration I am inclined to say the question posted here is backwards. We actually discern many more separate colors in combination, and at the same time, than we do sounds. Look around and you will see a plethora of color combinations. We are actually perceiving many billions of separate photons coming from many billions of separate locations in our field of view. In contrast, listening to that many separate sounds would overwhelm our ears and our perception of the sounds, to the point of hearing only discord, static, or hissing-like noises that have no meaning. Only a combination of very few separate sounds can be discerned.






            answered 2 days ago by takintoolong (edited yesterday)












            • You cannot distinguish a color that is a mixture of two or more frequencies of light from a color that is a pure frequency. Such a green still looks like it has yellow and blue even though it doesn't.
              – Kaz
              yesterday










            • Ok. I see where the confusion is. Think about this, if you could make a pure "green" light, where is it exactly on the spectrum? Is it an infinitely exact point on the spectrum or does it contain a little bit from the yellowish green light just a hair before it, and a little bit of the bluish green light just a hair after? Furthermore, when we make a chord or other combination of sounds, it does not make an "other" color on the same "spectrum". For instance playing a "G" and a "B" does not make an "A". Chords which make a "note" contain that note and are simply a fuller more harmonious sound.
              – takintoolong
              5 hours ago










            • It doesn't matter; there is a green range in the spectrum, around the 490 to 570 nm wavelengths. Any light of pure frequency in that range will produce a perception of some hue of green. That same color perception can be produced from a mixture of frequencies quite distant from that one. It's just a matter of stimulating the same cells in the retina to the same degree. For sound, the human ear has a very fine resolution of frequency in which narrow bands are quite separately perceived.
              – Kaz
              5 hours ago
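            The point above — that color vision collapses any spectrum to the stimulation levels of just three cone types, so a mixture can be indistinguishable from a single pure wavelength — can be sketched numerically. The Gaussian cone curves, peak wavelengths, and widths below are illustrative assumptions for the sketch, not measured human data:

```python
import math

# Hypothetical Gaussian cone sensitivities. The peak wavelengths and the
# shared width are illustrative round numbers, not real cone fundamentals.
CONES = {"S": 420.0, "M": 530.0, "L": 560.0}
SIGMA = 50.0

def sensitivity(peak, wl):
    """Toy bell-shaped sensitivity of one cone type at wavelength wl (nm)."""
    return math.exp(-((wl - peak) / SIGMA) ** 2)

def cone_response(spectrum):
    """Collapse a spectrum [(wavelength_nm, intensity), ...] to 3 numbers.

    This is the key step: whatever the spectrum, the eye only keeps these
    three totals, so very different spectra can yield the same triple.
    """
    return tuple(
        sum(i * sensitivity(peak, wl) for wl, i in spectrum)
        for peak in CONES.values()
    )

# A "red + green" mixture of two pure wavelengths.
mixture = cone_response([(540.0, 0.5), (640.0, 0.5)])

# Brute-force search for a single pure wavelength (and intensity) whose
# cone triple is nearly the same -- an approximate metamer of the mixture.
best = min(
    ((wl, k) for wl in range(400, 701) for k in [x / 100 for x in range(10, 201)]),
    key=lambda p: sum(
        (a - b) ** 2
        for a, b in zip(cone_response([(float(p[0]), p[1])]), mixture)
    ),
)
print(best)
```

            With these toy curves, the 540 nm + 640 nm mixture is best matched by a single wavelength lying between the two components, which is why the mixture reads as one intermediate hue rather than as two separable "notes". The ear's basilar membrane, by contrast, acts like thousands of narrow frequency channels, so two tones excite two distinct places and remain separable.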












            • And what is the source of your so called "pure" light?
              – takintoolong
              4 hours ago









































