
5. Appendices

5.1. Appendix I: Glossary of Terms

5.1.0.1. Aesthetics

Aesthetics is the branch of philosophy that aims to establish the general principles of art and beauty. It can be divided into the philosophy of art and the philosophy of beauty. Although some philosophers have considered one of these a subdivision of the other, the philosophies of art and beauty are essentially different. The philosophy of beauty recognizes aesthetic phenomena outside of art, as in nature or in nonartistic cultural phenomena such as morality, science, or mathematics; it is concerned with art only insofar as art is beautiful. The history of the arts in the West, however, has made it increasingly clear that there is much more to art than beauty and that art often has little or nothing to do with beauty. Until the 18th century, the philosophy of beauty was generally given more attention than the philosophy of art. Since that time, aestheticians have devoted more energy to the philosophy of art.

PHILOSOPHY OF ART
Metaphysics of Art
Aestheticians ask two main questions about the metaphysics of art: (1) What is the ontological status of works of art, or what kind of entity is a work of art? (2) What access, if any, does art give the viewer or hearer to reality, or what kind of knowledge, if any, does art yield? The first question arises, in part, because some works of art, such as SCULPTURES, are much like ordinary physical objects; others, such as PAINTINGS, have aspects that suggest that not all works of art can be merely physical objects. A painting, for example, is typically flat, but it can represent spatial depth; and what the painting represents often seems more relevant aesthetically than its physical dimensions. To some aestheticians, the representational character seems to be what is essential to a painting as a work of art. Some philosophers have therefore concluded that works of art are mental entities of some sort, because it is mental entities, such as visions and dreams, that are typically representational. Other philosophers, who have noticed that artists can and do express some of their own attitudes, emotions, and personality traits in their art, have concluded that art works belong in a category with NONVERBAL COMMUNICATIONS rather than with physical objects.
A different line of thought suggests that works of art are not like objects even on a first impression. For example, the score of a SYMPHONY is not the same as the symphony. The score is a set of directions for playing the music, but the musical work can exist even if no one ever plays the score. Considerations such as these have led many philosophers to say that works of art exist only in the minds of their creators and of their hearers, viewers, or readers.
The question whether art can provide knowledge of, or insight into, reality is as old as philosophy itself. Plato argued in The Republic that art has the power to represent only the appearances of reality. According to this theory, a painter reproduces (imitates) a subject on canvas. The counterposition, that art can yield insight into the real, is commonly held by modern philosophers, artists, and critics. Many critics, in fact, allege that art offers a special, nondiscursive, and intuitive knowledge of reality that science and philosophy cannot achieve.

Experience of Art
Modern discussions about how art is experienced have been dominated by theories devised in the 18th century to describe the experience of beauty. As a consequence, many philosophers still think of the typical experience of art as distanced, disinterested, or contemplative. This experience is supposed to be different, and removed, from everyday affairs and concerns. A few modern aestheticians, especially John DEWEY, have stressed the continuity between aesthetic experience and everyday experience and have claimed for the experience of art a psychologically integrative function.

Judgments and Interpretations
The study of critics' judgments and interpretations of art tries to specify the kind of reasoning involved in such opinions. One question is whether evaluative judgments can be backed by strictly deductive reasoning based on premises descriptive of the artwork.
A radical position on this issue is that evaluative judgments are merely expressions of preference and thus cannot be considered either true or false. With respect to critical interpretations of a work, as distinct from evaluations, a basic question is whether conflicts over interpretations of a work can be definitively settled by facts about the work, or whether more than one incompatible but reasonable interpretation of the same work is possible. A related concern is what the criteria of relevance are for justifying an interpretation or evaluation. Some aestheticians in this century, for example, have argued that appeals to the artist's intentions about a work are never relevant in such contexts.

Production of Art
Philosophical speculation about the production of art centers primarily on the following questions: What is the role of genius, or innate ability, in artistic production? What is the meaning of creativity? How do the conditions for producing fine art differ from those for producing CRAFTS? On the last issue, ancient and medieval philosophers assumed the same model for producing fine art and crafts; they had no conception that the two are distinct. The present distinction between the two emerged in Western culture after the RENAISSANCE; nearly all aestheticians now assume that something is unique about producing fine and especially great art.

Definition of Art
Attempts to define art generally aim at establishing a set of characteristics applicable to all fine arts as well as the differences that set them apart. By the middle of the 20th century, aestheticians had not agreed upon a definition of art, and a skeptical position became popular, holding that it is impossible in principle to define art. This skepticism has an interesting parallel in the 18th century when, after many unsuccessful attempts to define beauty, most philosophers agreed that beauty could not be defined in terms of the qualities shared by all beautiful objects.

PHILOSOPHY OF BEAUTY
The skepticism about beauty culminated in the Critique of Judgment (1790), Immanuel KANT's contribution to aesthetics. In that work, Kant analyzed the "judgment of taste," that is, the judgment that a thing is beautiful. He asserted that the judgment of beauty is subjective. Before Kant, the common assumption was that "beauty" designated some objective feature of things. Most earlier theories of beauty had held that beauty was a complex relation between parts of a whole. Some philosophers called this relation "harmony." From the time of the Greeks, a common assumption was that beauty applied not only, or primarily, to art, but that it manifested itself in cultural institutions and moral character as well as in natural and artificial objects. By the end of the 18th century, however, the range of accepted beautiful things was becoming more and more restricted to natural things and artworks.
Whereas theorists of beauty had generally admitted that the perception of beauty always gives pleasure to the perceiver, Kant turned the pleasure into the criterion of beauty. According to Kant, people can judge a thing beautiful only if they take pleasure of a certain kind in experiencing it. The American philosopher George SANTAYANA took this subjectivism a step further by declaring that beauty is the same as pleasure--but pleasure then can be seen as "objectified" in things. Santayana's work (1896) marked the virtual end, until recently, of aestheticians' serious theoretical interest in beauty.

Guy Sircello

Bibliography: Adorno, T. W., Aesthetic Theory, trans. by G. Lenhardt (1984); Beardsley, Monroe, Aesthetics, 2d ed. (1981); Collingwood, R. G., The Principles of Art (1938); Croce, Benedetto, Aesthetic, trans. by Douglas Ainslie (1909); Dewey, John, Art as Experience (1934); Kant, Immanuel, Critique of Judgment (1790; new trans. by J. C. Meredith, 1957); Langer, Susanne K., Feeling and Form (1953); Margolis, Joseph, Philosophy Looks at the Arts, 3d ed. (1986); Santayana, George, The Sense of Beauty (1896); Sircello, Guy, A New Theory of Beauty (1975); Tatarkiewicz, W., History of Aesthetics, 3 vols. (1970, 1974).

5.1.0.2. Atomic constants

The goal of physics is to understand and formulate the basic laws that govern the various processes of nature, such as gravity and electricity, as well as subatomic processes. These laws must be mathematically precise and must have physical implications testable by accurate laboratory experiments.
To express any law of nature, two kinds of physical quantities are required: one that expresses the variables characterizing a given situation, and another kind that is assumed to be independent of any particular situation in which the laws operate. The latter quantities are called fundamental constants.
For example, according to the laws of electrostatics, the force F experienced by two static electric charges P and Q separated by a distance r is F = kPQ/r^2, where k is a number that depends only on the nature of the medium containing the charges. P, Q, and r are the variables that characterize the electric charges and their separation, and in a given medium, k is a constant. In a medium free of any matter (a vacuum), k = 9.0 x 10^9 newton-meter^2/coulomb^2. Thus, k is a fundamental constant, called the dielectric constant of free space. An analogous constant, called permeability, is encountered in the study of forces between magnets.
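To make the arithmetic concrete, here is a minimal Python sketch of the electrostatic law above; the two 1-microcoulomb charges and the 1-m separation are illustrative values, not from the text:

    # Coulomb's law: F = k * P * Q / r^2
    k = 9.0e9        # dielectric (Coulomb) constant of free space, N*m^2/C^2
    P = Q = 1.0e-6   # two illustrative charges of 1 microcoulomb each, in coulombs
    r = 1.0          # separation in meters
    F = k * P * Q / r**2
    print(F)         # 0.009 N, i.e., 9 millinewtons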
In formulating the laws of physics pertaining to various phenomena occurring in nature, several such parameters are encountered that are considered fundamental constants. Some of the more familiar ones are explained below.

Elementary Unit of Charge
It has been observed that all electrically charged bodies in nature carry an electric charge that is an integral multiple of the absolute value of the charge of a single electron, e. No smaller unit has been found. This is called quantization of charge, and e, therefore, is a fundamental constant.
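A minimal sketch of what quantization of charge means in practice; the "measured" charge below is hypothetical, and e is the accepted value:

    e = 1.602e-19    # elementary charge, in coulombs
    q = 4.806e-19    # a hypothetical measured charge on some body
    n = q / e
    print(round(n))  # 3 -- the body carries exactly three elementary charges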

Planck's Constant
Energy released in atomic processes comes in extremely tiny bundles, with a fixed amount of energy in each bundle. This discrete nature of energy was first recognized by the German physicist Max PLANCK, who postulated that for radiation of a given frequency ν, the amount of energy E is given by E = hν, where h is a fundamental constant called PLANCK's CONSTANT. The entire subject of quantum physics is based on this fundamental atomic constant h.
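For instance, the energy of a single quantum of visible light follows directly from E = hν; this short sketch uses an illustrative frequency for green light:

    h = 6.626e-34    # Planck's constant, in joule-seconds
    nu = 5.0e14      # frequency of green light, in hertz (illustrative)
    E = h * nu
    print(E)         # about 3.3e-19 joules per quantum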

Velocity of Light
The velocities of all moving objects encountered in daily life are known to depend on the frame of reference from which they are measured. Albert EINSTEIN postulated that the velocity of light is unlike any other velocity. Light (or any other form of electromagnetic radiation, such as X rays) travels with a speed, c, that is fixed and independent of any frame of reference. The velocity of light, c, is therefore a fundamental constant.

Gravitational Constant
Newton's law of GRAVITATION states that any two bodies in the universe attract each other with a force F defined by the law F = Gm1m2/r^2, where m1 and m2 are their masses and r is the distance between them. G is an absolute constant called the gravitational constant.
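As an illustration of how weak gravitation is compared with the electrostatic force computed earlier, a sketch with two hypothetical 1-kg masses 1 m apart:

    G = 6.674e-11    # gravitational constant, N*m^2/kg^2
    m1 = m2 = 1.0    # two illustrative 1-kg masses
    r = 1.0          # separation in meters
    F = G * m1 * m2 / r**2
    print(F)         # about 6.7e-11 N -- utterly negligible at this scale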

Electron and Proton Mass
Other atomic constants that describe atomic and subatomic systems are the masses of the proton and the electron. These particles, along with the neutron, are constituents of atoms.

Avogadro's Number
In the early days of molecular physics, Amedeo AVOGADRO (1776-1856) postulated that at a given temperature and pressure, equal volumes of different gases contain the same number of molecules. The AVOGADRO NUMBER is the number of molecules in one MOLE of the gas and is a constant for all substances.
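For instance, Avogadro's number converts a molar mass into the mass of a single molecule; the sketch below uses water (18 g per mole) as an illustrative substance:

    N_A = 6.022e23       # Avogadro's number, molecules per mole
    molar_mass = 18.0    # grams per mole of water (illustrative)
    m_molecule = molar_mass / N_A
    print(m_molecule)    # about 3.0e-23 grams per molecule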

Boltzmann's Constant
The description of thermal properties of gases is given by the ideal-gas law relating the pressure P, volume V, and temperature T of the gas as follows: PV=NkT, where N is the number of molecules in the gas. The parameter k is an absolute constant called the BOLTZMANN CONSTANT. The determination of Boltzmann's constant was made possible by Avogadro's hypothesis.
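The connection runs through the standard relation k = R/N_A, where R is the molar gas constant and N_A is Avogadro's number. A sketch applying PV = NkT to one mole of an ideal gas at standard conditions (the 22.4-liter molar volume is the familiar textbook figure):

    k = 1.381e-23    # Boltzmann constant, J/K
    N = 6.022e23     # one mole of molecules (Avogadro's number)
    T = 273.15       # temperature, K (0 degrees Celsius)
    V = 0.0224       # volume, m^3 (22.4 liters)
    P = N * k * T / V
    print(P)         # about 1.0e5 pascals, i.e., one atmosphere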

In addition to these basic fundamental constants, several other constants can be calculated from those previously defined. These include the Rydberg constant, which is used in SPECTRUM analysis; the BOHR MAGNETON, which is used to describe the magnetic moment of atomic systems; and the electron charge-to-mass ratio.
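As a sketch of such a calculation (the formula is the standard textbook expression, not given above), the Rydberg constant can be computed from the constants already introduced plus the permittivity of free space:

    # Rydberg constant: R = m_e * e^4 / (8 * eps0^2 * h^3 * c)
    m_e  = 9.109e-31    # electron mass, kg
    e    = 1.602e-19    # elementary charge, C
    eps0 = 8.854e-12    # permittivity of free space, C^2/(N*m^2)
    h    = 6.626e-34    # Planck's constant, J*s
    c    = 2.998e8      # velocity of light, m/s
    R = m_e * e**4 / (8 * eps0**2 * h**3 * c)
    print(R)            # about 1.097e7 per meter, matching the accepted value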
The values used for the fundamental constants and their derivatives undergo adjustments over the years, as scientific advances make more precise measurements possible.

R. N. Mohapatra

Bibliography: CODATA, The 1986 Adjustment of the Fundamental Physical Constants (1986); Nolen, J. A., and Benenson, W., eds., Atomic Masses and Fundamental Constants (1980); Rossini, F. D., Fundamental Measures and Constants for Science and Technology (1974)

5.1.0.3. Background Radiation

Background radiation is a low-temperature radiation that pervades the universe at microwave wavelengths. Its source is believed to have been the extremely hot fireball with which the universe began, according to the BIG BANG THEORY. The existence of cosmic background radiation was first predicted in 1948 when Hans BETHE, George GAMOW, and R. A. Alpher proposed a theory of the origin of the elements based on Einstein's theory of general relativity (see ELEMENTS, ORIGIN OF). According to this proposal, the elements were formed under the conditions of extremely high temperature that prevailed in the initial moments of the universe. As the universe expanded and cooled, the radiation field corresponding to the initial high-temperature state cooled in a corresponding manner. Using the data available at that time, Alpher, Bethe, and Gamow calculated that the universal radiation temperature should now be about 25 K. In 1948, however, no experimental technique with sufficient sensitivity was available to detect such a weak radiation field.
The existence of a background radiation was also suggested by the observed excitation state of interstellar gas. In 1964, during the course of measurements made for another purpose, Arno A. PENZIAS and Robert W. WILSON of Bell Telephone Laboratories discovered the existence of a uniform background radiation at a temperature around 3.5 K. This was identified as the radiation field predicted earlier. The discovery was soon confirmed by R. H. DICKE and associates at Princeton University. Today the more accurately determined temperature is 2.7 ± 0.2 K. The radiation's spectrum is not entirely uniform. Variations observed in the late 1980s were taken by some theorists as signs of an early cycle of star birth and death following the big bang.
The discovery of the background radiation supported the singular origin of the universe as expressed in the big bang theory and demonstrated the correctness of the application of Einstein's theory of general relativity to cosmology. The background radiation serves as an effective standard rest frame with which to compare motion in the universe. For example, because of the DOPPLER EFFECT, any motion of the Earth with respect to the background radiation will cause a difference in the measured intensity of the radiation, depending on the direction of measurement. Several attempts to determine the magnitude of this effect have been made. According to one recent result, the Earth and the Milky Way have an unexpectedly large motion with respect to the background radiation.
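The size of this Doppler effect is easy to estimate: to first order, the temperature seen in the direction of motion is shifted by dT = T(v/c). A sketch with an assumed velocity of a few hundred kilometers per second, which is the order of magnitude reported in such measurements (the exact figure is not given above):

    T = 2.7       # background temperature, K
    c = 3.0e8     # velocity of light, m/s
    v = 3.7e5     # assumed velocity through the radiation, m/s
    dT = T * v / c
    print(dT)     # about 0.003 K -- a few millikelvins warmer ahead, cooler behind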
Hong Yee Chiu
Bibliography: Friedlander, M. W., Astronomy (1985); Gribbin, John, In Search of the Big Bang (1986); Wilkinson, D. T., "Anisotropy of the Cosmic Blackbody Radiation," Science, June 20, 1986.
See also: COSMOLOGY; INTERSTELLAR MATTER.

5.1.0.4. Biological equilibrium

All living things are constantly interacting with and opposing gravity. Three biological systems integrate to keep vertebrates oriented to the balanced and upright position: (1) the vestibular system--organs in the inner ear that act like a carpenter's level; (2) vision--information from the eyes about the position of the horizon constantly feeds to the brain; and (3) proprioception--the brain's knowledge of the position of the body parts without the help of the external senses.
Vestibular System
This system is composed of two closely related organs in the inner ear that have two separate functions: gravitational orientation and orientation to movement through space. The saccule and utricle, two saclike structures, provide orientation to gravity. These organs contain small granules embedded in a gelatinous material. Nerves respond to changes in the position of these granules with respect to gravity. The brain knows which way is down by sensing the position of the granules in the ear.

For movement through space, there are three semicircular canals that are oriented at right angles to one another. These canals are filled with a fluid that flows through the canals when the head moves. The flow of liquid is sensed by small hairs that "feel" the flow and constantly update the brain. The brain processes the data and puts the information to many uses in balance and orientation. This system coordinates with the visual system so the eyes can continue tracking even when the head is moving. These two systems can be intentionally confused by spinning the head in one direction for a number of revolutions to produce dizziness. When the fluids in the ear stop flowing, the dizziness disappears.
Visual System
The brain uses the horizon as a reference to gravity. The eyes send signals to the brain to help it find the perpendicular to gravity and the position of the head while viewing the horizon. This helps the brain find the direction of the gravitational pull. As mentioned above, the visual system works closely with the vestibular system so that the eyes can keep tracking in a stable pattern even though they are moving through space. These two systems together let us continue reading even as we rock our heads from side to side.
Proprioception
This special sense, found in all muscles, tells the brain where the muscles are in space in relation to the rest of the body. It is best demonstrated by the ability to touch the tip of the nose with the tip of any finger even though the eyes are closed. This sense automatically coordinates with the vestibular and visual senses to keep us upright.

Integration of these three systems occurs primarily in the vestibular nuclei, located near the inner ear, and the cerebellum. The three systems are redundant, so that if one system fails, an upright position can be maintained by using the information supplied by the other two. For example, a blind person can function well if the proprioceptive and vestibular systems are intact. However, a disruption of the vestibular system may cause a mix-up in the brain's gravity and motion detectors. Problems such as viral or bacterial infection, a blow to the head, alcohol, some medications, and certain diseases like diabetes or multiple sclerosis may cause brief or chronic attacks of dizziness, nausea, and balance problems.

Louis D. Lowry, M.D.
Barany, Robert
{bah'-rahn-ee}
Robert Barany, b. Apr. 22, 1876, d. Apr. 8, 1936, was an Austrian physician who pioneered work on the function of the inner ear in maintaining balance. He was awarded the 1914 Nobel Prize for physiology or medicine "for his work on the physiology and pathology of the vestibular apparatus." Barany's experiments, which demonstrated how fluid movement affects vestibular organs in the inner ear's semicircular canals and causes changes in the sense of equilibrium, resulted in improved methods of diagnosing and treating inner ear disorders. Barany became associated with the University of Uppsala, Sweden, in 1917 and wrote several textbooks about the ear.

5.1.0.5. Consciousness

The terms conscious and consciousness are used in different ways. In one sense, a person is conscious when awake, but unconscious when asleep, knocked out, or comatose. Yet people also do things requiring perception and thought unconsciously even when they are awake. One can be conscious of an event or condition in one's physical surroundings; more intimately, one can be conscious of a sudden pain or a wish. Finally, a creature might be called conscious only when it is to some degree aware of itself.
The term consciousness is most often used by philosophers and psychologists as meaning "attention to the contents or workings of one's own mind." This notion had little significance for the ancients, but it was articulated and emphasized in the 17th century by John LOCKE and Rene DESCARTES.
Contemporaries of these two philosophers thought of consciousness as the operation of an inner eye, scanning certain of the mind's own "internal operations." Both Locke and Descartes went further. They held that consciousness accompanies every waking mental state--that no mental state goes unscanned by its owner. In this view the mind is transparent to itself--that is, it can perceive its own activity--and is known infallibly by its own inner aspect, or "feel." Indeed, such self-transparency was taken for nearly 300 years to be the defining feature of the mind. That conception culminated in the psychological theories of Wilhelm WUNDT and Edward TITCHENER, who advocated a science of introspection. Careful, attentive examination of the stream of conscious experience would allow the psychologist to analyze mental processes exhaustively and to reduce them to a set of basic elements.
Early in the 20th century the transparency doctrine came to grief for three separate reasons. The first reason was Sigmund FREUD's compelling evidence that some very important mental activity is not only subconscious, but firmly resists conscious access through the mechanism of repression. At first Freud's idea of the UNCONSCIOUS was greeted with consternation as being virtually self-contradictory, but whatever the fates of particular Freudian explanations, it has since won acceptance as being useful and entirely feasible.
The second difficulty for the transparency doctrine was that it made the mind inscrutable to objective science. What is known introspectively to a single person would be utterly private to that person, and no external investigation of brain or any other aspects of the subject could even be relevant to the character of the person's inner experience. Yet good scientific method demands objectivity and replicability of data.
The behaviorists John B. WATSON and B.F. SKINNER and the philosopher Gilbert RYLE rebelled against the idea of an intractably private inner sense and its equally private objects, and they denied the very existence of consciousness in the strong sense promulgated by Locke, Descartes, and the introspective psychologists. Ryle insisted that mind is an illusory concept, and that it is really nothing more than a collection of observable behaviors. Similarly, the behaviorists argued that behavioral responses to environmental stimuli are merely responses to the stimuli, and do not inherently represent hidden mental states or events; accordingly, psychology should be the science of behavior, not of introspection (see BEHAVIORISM).
The identity theory of mind, proposed by U.T. Place in the 1950s, reconciled the original idea that mental activity is genuinely inner and introspectable with the demands of contemporary scientific methods that scientific facts be verifiable. Place supposed that mental states and events simply are physical states and events of the central nervous system, at once perceivable by an equally physical inner sense and open to investigation by psychobiology. In one form or another, Place's view still dominates the philosophy of mind.
The third difficulty for the transparency doctrine was COGNITIVE PSYCHOLOGY's comparatively recent discovery that everyone does a great deal of mental processing, reasoning, and analysis of many sorts without being able to introspect it at all. To date, however, cognitive psychology has had little further to say about consciousness.
Studies of consciousness continue on many fronts. PSYCHOPHYSICS examines the mathematical dependence of sensation variables on stimulus variables, and there is significant literature on the selectivity of ATTENTION, stemming from early work by Donald Eric BROADBENT. "Altered states of consciousness" are being explored; however, they are seen as unusual sorts of experience and mental activity generally, not consciousness per se. Perhaps the closest thing to a cognitive theory of consciousness is D.C. Dennett's hierarchical organization theory--based on earlier work by Ulric Neisser--according to which high-level brain centers selectively command lower-level components during mental activity.
William G. Lycan
Bibliography: Dennett, Daniel C., Brainstorms: Philosophical Essays on Mind and Psychology (1980); Lycan, William G., Consciousness (1987); Marcel, A.J., and Bisiach, E., eds., Consciousness in Contemporary Science (1988); Neisser, Ulric, Cognition and Reality (1976)

5.1.0.6. Ear

The ear is the organ of hearing and equilibrium (balance) in vertebrates. The ear converts sound waves (see SOUND AND ACOUSTICS) in the air to nerve impulses that are relayed to the brain, where they are interpreted as sound rather than as mere vibrations. The innermost portion of the ear maintains BIOLOGICAL EQUILIBRIUM through the so-called vestibular apparatus, which includes the semicircular canals. Any change in the position of the head or body causes the apparatus to transmit nerve impulses to the brain, evoking muscular reflexes that tend to restore the normal position. The ear first evolved as an organ of equilibrium, and the vestibular apparatus is basically alike in all vertebrates; structures concerned with hearing evolved later in humans and other higher vertebrates.
Many invertebrates also have specialized sense organs, rather than ears, for hearing and equilibrium. Crickets and spiders, for example, have membranes much like sounding boards on the legs. Moths have a similar rudimentary ear on the thorax that apparently serves as a warning system for attacks by bats.
STRUCTURE OF THE EAR
The ear in humans and most other mammals consists of three parts: the outer, middle, and inner portions. The outer ear, or pinna, is the structure commonly called the ear. It is a skin-covered flap of elastic cartilage projecting from the side of the head and funneling sound into the middle ear. The middle ear is an air-filled chamber containing the eardrum, or tympanic membrane, and connected to the pharynx by the eustachian tube, thus equalizing the pressure on the two sides of the eardrum. The inner ear alone contains the sensory receptors for hearing, which are enclosed in a fluid-filled chamber called the cochlea. The middle and outer ears serve only to receive and amplify sound waves and occur only in terrestrial vertebrates, whereas the inner ear is present in all vertebrates.
In fish, the ear is primarily an organ of equilibrium and possesses neither a cochlea nor outer or middle ears. Amphibians possess a middle ear cavity; a thin membrane separating the middle ear from the outside becomes the eardrum. The pinna occurs only in mammals. In birds and reptiles, the eardrum may be in a depression (the auditory canal) below the surface of the head.
EVOLUTION OF THE EAR
The evolutionary origin of the inner ear is unknown, but it may have arisen from the so-called lateral-line system of fish. That system consists of a series of grooves on the head and sides. Clusters of specialized hair cells in the grooves are sensitive to the pressure of water movement, but not to sound in the conventional sense. The sensory cells of the inner ear are apparently adaptations of cells sensitive to the motion of liquids. The middle ear and eustachian tube evolved from the respiratory apparatus of the fish, and the middle-ear ossicles evolved from parts of the fish jaw. A small outgrowth of the vestibular apparatus in amphibians evolved into the cochlea in mammals.
HEARING
The characteristics of sound that can be detected by the human ear include volume, pitch, and tone. In general, sound volume depends on the amplitude, or intensity, of the sound wave; the greater the amplitude, the louder the sound. Pitch is related to the frequency of the sound wave, or the number of waves per unit time passing a point of reference; the greater the frequency, the higher the pitch. The tone, or quality, of a sound is a more complex property than volume or pitch. Variations in quality, such as are produced when an oboe and a violin play the same note, depend on the number and kind of overtones or harmonics (combinations of frequencies).
Humans can hear frequencies between about 30 and 20,000 waves, or cycles, per second (cps, or hertz, abbreviated Hz). A whistle producing 30,000 Hz is audible to dogs. Bats can produce and hear sounds of approximately 100,000 Hz, in the ultrasonic range, and use this ability in their highly evolved systems of navigation known as ECHOLOCATION.
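A minimal sketch that checks a frequency against the upper limits quoted above; the dog and bat figures come from the whistle and echolocation examples, and the lower bounds, which the text does not give for those species, are ignored:

    upper_limit_hz = {"human": 20_000, "dog": 30_000, "bat": 100_000}

    def within_upper_limit(species, freq_hz):
        # True if the frequency does not exceed the species' quoted upper limit
        return freq_hz <= upper_limit_hz[species]

    print(within_upper_limit("human", 25_000))  # False -- ultrasonic for humans
    print(within_upper_limit("dog", 25_000))    # True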
Experiments indicate that humans and other higher vertebrates hear in much the same way. Basically, the ear is adapted for transmitting vibrations from air to the fluid medium of the cochlea. Sounds travel down the auditory canal and cause the eardrum to vibrate. The vibrations are transmitted through the middle ear by a sequence of three tiny bones, the auditory ossicles, called, because of their shapes, the hammer, anvil, and stirrup. The last of the bones, the stirrup, rests on a membrane-covered opening (the oval window) in the bony wall of the snail-shaped cochlea, and carries the vibrations to fluids inside the cochlea. The vibrations create waves on a membrane running along the length of the cochlea (the basilar membrane).
The true sound receptors are thousands of specialized hair cells, in the organ of Corti, spread across the basilar membrane. The deformation of the hairs causes them to initiate electrical impulses that are relayed by the auditory nerve to the brain. The ability to recognize pitch is based on the fact that cells stimulated by low frequencies occur at the apex of the cochlea, whereas those stimulated by high frequencies occur at the base. Nerve impulses from each region along the basilar membrane are relayed to slightly different regions of the brain, and the sensation of pitch depends on which area of the brain is stimulated.
Loud sounds cause more intense stimulation of hair cells and result in the transmission of more impulses per unit time to the brain. This increased transmission is interpreted as loudness.
Arndt J. Duvall, III, M.D., and Peter A. Santi
Bibliography: Batkin, R.B., Hearing and Hearing Disorders (1988); Keidel, W. D., The Physiological Basis of Hearing (1983); Singh, R. P., Anatomy of Hearing and Speech (1980); Stevens, S. Smith, and Davis, Hallowell, Hearing (1983); Yost, William A., and Nielsen, Donald W., Fundamentals of Hearing, 2d ed. (1985)

5.1.0.7. Ethology

{eth-ahl'-uh-jee}
Ethology is the science of the behavior of animals in their natural, or wild, state. Thus, ethology mainly concerns instinctive or inherited behavior rather than learned behavior. The ultimate goal of ethologists is to discover how instinctive behavior among related species evolved and now serves to enhance survival. The first ethologists, in the early 1900s, believed that their studies would reveal the origins of human ethics; hence, they borrowed the term ethology from philosophy, where it refers to the evolution of human values. The term remains popular in Europe, but American counterparts of European ethologists prefer the terms SOCIOBIOLOGY, behavioral biology, or comparative psychology.
Early in the 20th century, several European zoologists, including Oskar Heinroth of Germany, began systematically observing animals in their natural surroundings. In the 1930s, Konrad LORENZ and Nikolaas TINBERGEN became the leaders of this new science. The science grew steadily, and in 1973 the Nobel Prize for physiology or medicine was awarded to Lorenz, Tinbergen, and Karl von FRISCH for their work in identifying animal behavior patterns.
The basic procedure in ethology is the formation of an ethogram--a detailed description of all the behaviors exhibited by the subject species. The ethologist postulates various theories to explain the cause, development, and adaptive function of each behavior. Finally, experiments are conducted to confirm or refute the theories. For example, Lorenz found that the newly hatched greylag goose followed any large moving stimulus presented shortly after hatching. The function of this process, termed IMPRINTING, is to secure the association between parent and offspring.
Many of the behaviors described in the ethogram consist of fixed action patterns, which are inherited, stereotyped behaviors, or instincts. These patterns are stimulated by specific cues, called releasers or sign stimuli, from the environment and are carried out by the innate releasing mechanism, or nervous system. Releasers are especially important in social behaviors such as aggression or courtship.
Terry F. Pettijohn
Bibliography: Eibl-Eibesfeldt, Irenaus, Ethology: The Biology of Behavior, 2d ed. (1970); Hinde, Robert A., Ethology (1982); Lorenz, Konrad Z., Studies in Animal and Human Behavior, vol. 1 (1970).
See also: ANIMAL BEHAVIOR; ANIMAL COURTSHIP AND MATING

5.1.0.8. Eye

Almost all animals can perceive and respond to light, but eyes are as varied as the animals that possess them.
Eyes that form definite images are found only in some mollusks, mainly squid, octopus, and cuttlefish; in a few worms; in most arthropods, including insects, spiders, lobsters, and crabs; and in vertebrates. Except for most insects, these animals have eyes that are similar in structure and function to a camera, which uses a single LENS to focus a picture on a surface of densely packed cells called photoreceptors. The receptor surface, called the retina, functions like a piece of film. An external object is pictured on the retina like the dots of a newspaper photograph. The picture later received in the BRAIN, however, is not the same simple point-by-point image. Exactly what this picture is remains unknown, but PERCEPTION is a process that takes place in the brain, not in the eye. Information from the eye, like the pieces of a puzzle, is analyzed in the brain and fitted into meaningful forms.
Most insect eyes are built on an entirely different principle from that described above and are called compound eyes. Thousands of densely packed lenses are spread like a honeycomb over a spherical surface so that a mosaic image is formed. Each lens is associated with relatively few receptor cells, and the entire unit is called an ommatidium. No structure, therefore, is strictly analogous to the retina of a camera eye. What kind of image this arrangement conveys to the insect is not known.
EVOLUTION
At least three times during evolution, eyes with lenses have developed independently in animals as widely different as insects, mollusks, and vertebrates. Fish move the whole lens closer to the retina when focusing on distant objects. Mammals, including humans, have evolved a more complex method of focusing by changing the curvature of the lens--flattening it for close objects, thickening it for distant ones. Predatory birds have an effective strategy of keeping the prey in focus while sweeping down on it: instead of adjusting the lens, they quickly change the curvature of the more flexible structure called the cornea, which is a transparent membrane covering the lens and also supporting the eyeball.
Another essential refinement, COLOR PERCEPTION, also evolved independently several times, although intermittently. Among mammals, only humans and other primates, along with a few other species, can recognize colors. Among insects, honeybees can be trained to distinguish colors, but they are color-blind to red. Similar training experiments have shown that at least some teleosts, or bony fishes, can discriminate colors, but elasmobranchs, such as sharks, cannot. Why most mammals do not share the same ability is a major puzzle of evolution.
Finally, evolution resulted in the gradual development of binocular vision--the shifting of the eyes' position from the side of the head to the front; this permitted the fusion of the images in each separate eye into a single, three-dimensional image in the brain.
INVERTEBRATES
The light receptors of many invertebrates do not form definite images; they simply register light or dark or the direction of a source of light. The simplest such eyes are the light-sensitive patches found on the flagella, or limblike projections, of the protozoan Euglena and the eyespots of certain flatworms called planaria. Some organisms that have evolved true eyes have also retained simple photoreceptors of this type. Examples are the so-called ocelli found in the tails of lobsters and in the brain area under the skull; these organisms can perceive light even when their true eyes have been removed.
DETECTION OF LIGHT
Despite the variety of types of eyes, the chemical process that transforms light into nerve impulses in the eye is basically similar in all land vertebrates and marine fishes, and in some insects. In 1967, George Wald of Harvard shared a Nobel Prize in physiology or medicine for discovering the details of the first step, which occurs in the retina or ommatidium.
The substances in the retina that detect light are called photosensitive, or visual, pigments. The major pigment in the eye is rhodopsin, or visual purple, which is composed of two distinct parts: a protein molecule called opsin, and a molecule made from vitamin A called retinene. When light strikes rhodopsin, the retinene portion is split away, or bleached, from the opsin portion; this leads, by a mechanism whose details are still unclear, to nerve impulses that relay visual information to the animal's brain.
In the dark, and with the aid of chemical energy obtained from metabolism, retinene and opsin are recombined and rhodopsin is reconstituted. In very intense light, visual purple may be split faster than it can be reconstituted. Vision may then become impaired, for example, as in so-called snow blindness. Vision may be similarly impaired if vitamin A is deficient in supply, and a shortage of retinene results in so-called night blindness. (See EYE DISEASES.)
Vitamin A has the structure of one-half molecule of beta-carotene, a pigment found in almost all plants. It cannot be made by animals and must be present in the food or be made from plant carotene. In plants carotene seems to be responsible for the growth toward light, and it also plays a role in photosynthesis, the process by which sunlight, water, and carbon dioxide are combined to produce organic nutrients. Remarkably, evolution has adapted this almost universal plant pigment to animal vision.
STRUCTURE OF THE EYE
The eyes of vertebrates differ in some details, yet they are all built to a common plan. More is known about the human eye than about that of any other vertebrate, and it may therefore serve as an example.
Protecting the eyeball is a bony socket called the orbit. Each eye is suspended within its orbit and is surrounded by a cushion of fat, along with blood vessels and motor and sensory nerves, including the optic nerve. Six small muscles are attached to each eye to allow coordinated movement of the pair. The eyelids provide some protection in the front and also serve to keep the cornea lubricated by spreading the tear fluid with each blink, as well as an oily fluid produced by Meibomian glands in the lid. The tear fluid is produced by the lacrimal glands near the outer portion of each eyebrow and is collected and drained through tiny canals within the upper and lower lids near the nose. The tears eventually flow into the nasal passages and are swallowed.
The adult human eye is a hollow globe with a diameter of approximately 2.5 cm (1 in). The wall of the globe is composed of three coats. The outer coat, called the fibrous tunic, supplies the basic support of the eye and gives it shape. The fibrous tunic is divided into the cornea, which is the transparent, exposed membrane in front of the lens, and the sclera, the firm, white coat of the eye to which are attached the muscles that move the eyeball. The middle, or vascular, coat is composed of three regions. The choroid layer is pigmented black and carries blood vessels to and from the eye. In mammals other than humans, it has an iridescent layer that increases the retina's sensitivity to low-intensity light. The ciliary body consists of a ring-shaped muscle, which can change the lens shape, and ciliary processes to which the lens is attached. The iris, a colored contractile diaphragm containing an opening called the pupil, has both a sphincter and a dilator muscle. The innermost coat is the retina, which lies behind the lens. It contains the optic disc, or blind spot, which is the junction of nerve fibers passing to the brain. The retina also contains rods and cones, light-sensitive cells. The lens is a biconvex, transparent structure.
The Eye as a Camera
Light is excluded or permitted to enter by the eyelids, the equivalent of the camera shutter. Once admitted, the amount of light is further regulated by a variable opening, the pupil, which is like the aperture of a camera. The diameter of the pupil is controlled by the expansion and contraction of muscles in the iris. If a bright light is shone into the eye, the pupil immediately constricts. This is the light reflex, the purpose of which is to protect the retina from too intense illumination. As time passes, the retina adapts to the new level of light, and the pupil returns to its original size.
Light rays are focused by a lens system composed of the cornea and a crystalline lens, and an inverted image is projected on the retina. To prevent the blurring of images by internal reflection, the inner walls of the camera--the choroid layer--are painted black. The process by which the lens focuses on external objects is called accommodation. When a distant object is viewed, the lens is fairly flat. As the object moves nearer, the lens increasingly thickens, or curves outward. Lens shape is controlled by the ciliary body. A blurred image on the retina elicits reflex impulses to the ciliary body that promote contraction or relaxation of the body until the image is sharp.
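Accommodation can be made quantitative with the standard thin-lens relation 1/f = 1/d_o + 1/d_i, a textbook formula not stated in the text above. With the lens-to-retina distance held fixed at roughly 2 cm (an assumed figure), focusing on a nearer object forces the focal length to shorten, which the eye achieves by thickening the lens:

    d_i = 0.02                   # assumed lens-to-retina distance, about 2 cm
    for d_o in (1e6, 0.25):      # a distant object, then one 25 cm away
        f = 1 / (1 / d_o + 1 / d_i)
        print(d_o, round(f, 4))  # f drops from about 0.020 m to about 0.0185 m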
The Retina
The retina is made of several layers of nerve cells and one layer of so-called rods and cones. Together, these constitute the photoreceptors that translate light energy into nerve impulses. The rods and cones are farthest removed from the light entering the front of the eye. Light must first pass through the nerve cells, strike the rods and cones, and then pass back to the nerve cells in order to generate nerve impulses. Because of this, the retinas of vertebrates are said to be inverted, and another problem of the evolution of the eye is that of accounting for the origin of the inverted retina.
The rods contain rhodopsin, are sensitive to dim light, and are important in black-white vision and the detection of motion. Cones are responsible for color vision and for the perception of bright images. Little, however, is known about their conversion of light to electrical impulses. The greatest concentration of cone cells is found in a tiny depression in the center of the retina called the fovea. Only cones are present there; rods are absent. Because of this dense accumulation of cones, vision is most acute at the fovea.
Nerve fibers from the retina eventually collect in one region and form the optic nerve, which relays visual information to the brain. Where this nerve leaves the eye, somewhat off-center, it interrupts the continuity of the rods and cones.
THE ROLE OF THE BRAIN
The optic nerve enters an area on the underside of the brain called the lateral geniculate body, which partially processes the data before passing it to the visual cortex at the rear of the brain. The degree of such processing varies with the species. Frogs, for example, have very complex retinas containing specialized cells for detecting the characteristic shapes and movements of insects. The retinas of humans and other primates are less complex, and less processing occurs in their eyes. The difference is also correlated with the presence of a visual cortex in the brain or the degree of its development; the frog has no visual cortex, whereas primates have a well-developed cortex.
The nerve fibers connecting the retina and the brain are so arranged that the right half of a field of vision "crosses over" and registers in the left half of the brain, and the left half of a field registers in the right half of the brain. The brain is able to smoothly superimpose the "left" picture of the external world on the "right" picture. Moreover, both halves of the picture are seen right side up, even though the retinas receive inverted images.
Thomas P. Mattingly and Melvin L. Rubin

Bibliography: Chalkley, Thomas, Your Eyes, 2d ed. (1982); Eden, John, The Eye Book (1978); Hollyfield, J. G., ed., Structure of the Eye (1982); Newell, F. W., Ophthalmology, 5th ed. (1982); Snell, Richard, Clinical Anatomy of the Eye (1989); Young, Stephen, "Ways of Seeing," New Scientist, Aug. 18, 1984; Zurer, Pamela S., "The Chemistry of Vision," Chemical & Engineering News, Nov. 28, 1983.

5.1.0.9. Kelvin, William Thomson

Kelvin, William Thomson, 1st Baron
{kel'-vin}
The thermodynamic studies of the Scottish physicist William Thomson, b. June 26, 1824, d. Dec. 17, 1907, led to his proposal (1848) of an absolute scale of TEMPERATURE. The Kelvin absolute temperature scale, developed later, derives its name from the title--Baron Kelvin of Largs--that he received from the British government in 1892. Thomson also observed (1852) what is now called the JOULE-THOMSON EFFECT--the decrease in temperature of a gas when it expands through a porous plug without exchanging heat with its surroundings.
Thomson served as professor of natural philosophy (1846-99) at the University of Glasgow. One of his first projects was to calculate the age of the Earth, based on the rate of cooling of the planet--assuming it had once been a piece of the Sun. (His result--20 to 400 million years--was far short of the current estimate of 4.5 billion years.) Greatly interested in the improvement of physical instrumentation, he designed and implemented many new devices, including the mirror galvanometer that was used in the first successful sustained telegraph transmissions over the transatlantic submarine cable. Thomson's participation in the telegraph cable project formed the basis of a large personal fortune.
Sheldon J. Kopperl
Bibliography: Burchfield, Joe D., Lord Kelvin and the Age of the Earth (1975); Gray, Andrew, Lord Kelvin: An Account of His Scientific Life and Work (1908; repr. 1973); Sharlin, Harold I., and Sharlin, Tiby, Lord Kelvin: The Dynamic Victorian (1978); Smith, C.W., and Wise, M.N., Energy and Empire: A Biographical Study of Lord Kelvin (1989); Thompson, Silvanus P., The Life of Lord Kelvin, 2 vols., 2d ed. (1977).

5.1.0.10. Leibniz, Gottfried Wilhelm von

{lyb'-nitz}
The German philosopher and mathematician Gottfried Wilhelm von Leibniz, b. July 1, 1646, d. Nov. 14, 1716, was a universal genius and a founder of modern science. He anticipated the development of symbolic LOGIC and, independently of Isaac Newton, invented the calculus with a superior notation, including the symbols for integration and differentiation. He expounded a theory of substance based on monads, which were metaphysical and animistically endowed points of force and perception. Leibniz also advocated Christian ecumenism in religion, codified Roman laws and introduced natural law in jurisprudence, propounded the metaphysical law of optimism (satirized by Voltaire in Candide) that our universe is the "best of all possible worlds," and transmitted Chinese thought to Europe. For his work, he is considered a progenitor of German idealism and a pioneer of the Enlightenment.
Leibniz was the son of a professor of moral philosophy at Leipzig. A precocious youth, Leibniz taught himself Latin and some Greek by age 12 so that he might read the books in his father's library. From 1661 to 1666 he majored in law at the University of Leipzig. When refused admission to its doctoral program in law in 1666, he went to the University of Altdorf, which awarded him the doctorate in jurisprudence in 1667.
In the tradition of Cicero and Francis Bacon, Leibniz chose to pursue the active life of a courtier. He thus declined a professorship at Altdorf because he had "very different things in view." After serving as secretary of the Rosicrucian Society in Nuremberg in 1667, he moved to Frankfurt to work on legal reform. From 1668 to 1673 he served the elector-archbishop of Mainz. He was sent to Paris in 1672 to try to dissuade Louis XIV from attacking German areas. Leibniz proposed a campaign against Egypt and the Levant as well as building a canal through the Isthmus of Suez. Although his proposals were unheeded, Leibniz remained until 1676 in Paris, where he practiced law, examined Cartesian thought with Nicolas de Malebranche and Antoine Arnauld, and studied mathematics and physics under Christian Huygens.
From 1676 until his death, Leibniz served the Brunswick family in Hanover as librarian, judge, and minister. After 1686 he served primarily as historian, preparing a genealogy of the Hanovers based on the critical examination of primary source materials. In search of sources, he traveled to Austria and Italy from 1687 to 1690. Because of his Lutheran background, he declined the position of custodian of the Vatican Library, which required his conversion to Catholicism.
In his later years, Leibniz attempted to build an institutional framework for the sciences in central Europe and Russia. At his urging, the Brandenburg Society (Berlin Academy of Science) was founded in 1700. He met several times with Peter the Great to recommend educational reforms in Russia and proposed what later became the Saint Petersburg Academy of Science.
Although shy and bookish, Leibniz knew no master in disputation. After 1700 he opposed John Locke's theory that the mind is a tabula rasa (blank tablet) at birth and that we learn only through the senses. He strongly protested the Royal Society's charge (1712-13) of plagiarism against him regarding the invention of the calculus. In his final debate with Samuel Clarke, who defended Newtonian science, Leibniz argued that space, time, and motion are relative.
Leibniz's most important works are the Essais de Theodicee (1710; Eng. trans., 1951), in which much of his general philosophy is found, and the Monadology (1714; trans. as The Monadology and other Philosophical Writings, 1898), in which he propounds his theory of monads. His work was systematized and modified in the 18th century by the German philosopher Christian Wolff.
Ronald Calinger
Bibliography: Broad, C.D., and Lewy, C., Leibniz: An Introduction (1975); Calinger, Ronald, Gottfried Wilhelm Leibniz (1976); Frankfurt, Harry G., ed., Leibniz: A Collection of Critical Essays (1976); Hostler, J.M., Leibniz's Moral Philosophy (1975); Ishiguro, Hide, Leibniz's Philosophy of Logic and Language, 2d ed. (1990); Leclerc, Ivor, ed., The Philosophy of Leibniz and the Modern World (1973); Loemker, Leroy E., Struggle for Synthesis (1972); Parkinson, G.H., Logic and Reality in Leibniz's Metaphysics (1965; repr. 1985); Rescher, Nicholas, ed., Leibniz: An Introduction to His Philosophy (1986); Ross, George M., Leibniz (1984); Russell, Bertrand, Critical Exposition of the Philosophy of Leibniz (1900; 2d ed., 1961); Woolhouse, R.S., ed., Leibniz (1981).

5.1.0.11. Leonardo da Vinci

{lay-oh-nar'-doh dah vin'-chee}
The life and work of Leonardo da Vinci have proved endlessly fascinating for later generations. What most impresses people today, perhaps, is the immense scope of Leonardo's achievement. In the past, however, he was admired chiefly for his art and art theory, on which his reputation was based. Leonardo's equally impressive contribution to science is a modern rediscovery, having been preserved in a vast quantity of notes that became widely known only in the 20th century.
LIFE
Leonardo was born on Apr. 15, 1452, near the town of Vinci, not far from Florence. He was the illegitimate son of a Florentine notary, Piero da Vinci, and a young woman named Caterina. His artistic talent must have revealed itself early, for he was soon apprenticed (c.1469) to Andrea VERROCCHIO, a leading Renaissance master. In this versatile Florentine workshop, where he remained until at least 1476, Leonardo acquired a variety of skills. He entered the painters' guild in 1472, and his earliest extant works date from this time. In 1478 he was commissioned to paint an altarpiece for the Palazzo Vecchio in Florence. Three years later he undertook to paint the Adoration of the Magi for the monastery of San Donato a Scopeto. This project was interrupted when Leonardo left Florence for Milan about 1482. Leonardo worked for Duke Lodovico Sforza in Milan for nearly 18 years. Although active as court artist, painting portraits, designing festivals, and projecting a colossal equestrian monument in sculpture to the duke's father, Leonardo also became deeply interested in nonartistic matters during this period. He applied his growing knowledge of mechanics to his duties as a civil and military engineer; in addition, he took up scientific fields as diverse as anatomy, biology, mathematics, and physics. These activities, however, did not prevent him from completing his single most important painting, The Last Supper.
With the fall (1499) of his patron to the French, Leonardo left Milan to seek employment elsewhere: he went first to Mantua and Venice, but by April 1500 he was back in Florence. His stay there was interrupted by time spent working in central Italy as a mapmaker and military engineer for Cesare Borgia. Again in Florence in 1503, Leonardo undertook several highly significant artistic projects, including the Battle of Anghiari mural for the council chamber of the Town Hall, the portrait of Mona Lisa, and the lost Leda and the Swan. At the same time his scientific interests deepened: his concern with anatomy led him to perform dissections, and he undertook a systematic study of the flight of birds.
Leonardo returned to Milan in June 1506, called there to work for the new French government. Except for a brief stay in Florence (1507-08), he remained in Milan for 7 years. The artistic project on which he focused at this time was the equestrian monument to Gian Giacomo Trivulzio, which, like the Sforza monument earlier, was never completed. Meanwhile, Leonardo's scientific research began to dominate his other activities, so much so that his artistic gifts were directed toward scientific illustration; through drawing, he sought to convey his understanding of the structure of things. In 1513 he accompanied Pope Leo X's brother, Giuliano de'Medici, to Rome, where he stayed for 3 years, increasingly absorbed in theoretical research. In 1516-17, Leonardo left Italy forever to become architectural advisor to King Francis I of France, who greatly admired him. Leonardo died at the age of 67 on May 2, 1519, at Cloux, near Amboise, France.
ARTISTIC ACHIEVEMENTS
Early Work in Florence
The famous angel contributed by Leonardo to Verrocchio's Baptism of Christ (c.1475; Uffizi, Florence) was the young artist's first documented painting. Other examples of Leonardo's activity in Verrocchio's workshop are the Annunciation (c.1473; Uffizi); the beautiful portrait Ginevra de' Benci (c.1474; National Gallery, Washington, D.C.); and the Madonna with a Carnation (c.1475; Alte Pinakothek, Munich). Although these paintings are rather traditional, they include details, such as the curling hair of Ginevra, that could have been conceived and painted only by Leonardo.
Other, slightly later works, such as the so-called Benois Madonna (c.1478-80; The Hermitage, Leningrad) and the unfinished Saint Jerome (c.1480; Vatican Gallery), already show two hallmarks of Leonardo's mature style: contrapposto, or twisting movement; and CHIAROSCURO, or emphatic modeling in light and shade. The unfinished Adoration of the Magi (1481-82; Uffizi) is the most important of all the early paintings. In it, Leonardo displays for the first time his method of organizing figures into a pyramid shape, so that interest is focused on the principal subject--in this case, the child held by his mother and adored by the three kings and their retinue.
Work in Milan
In 1483, soon after he arrived in Milan, Leonardo was asked to paint the Madonna of the Rocks. This altarpiece exists in two nearly identical versions, one (1483-85), entirely by Leonardo, in the Louvre, Paris, and the other (begun 1490s; finished 1506-08) in the National Gallery, London. Both versions depict a supposed meeting of the Christ Child and the infant Saint John. The figures, again grouped in a pyramid, are glimpsed in a dimly lit grotto setting of rocks and water that gives the work its name. Not long afterward, Leonardo painted a portrait of Duke Lodovico's favorite, Cecilia Gallerani, probably the charming Lady with the Ermine (c.1485-90; Czartoryski Gallery, Krakow, Poland). Another portrait dating from this time is the unidentified Musician (c.1490; Pinacoteca Ambrosiana, Milan). In the great Last Supper (422 x 910 cm / 13 ft 10 in x 29 ft 7 1/2 in), completed in 1495-98 for the refectory of the ducal church of Santa Maria delle Grazie in Milan, Leonardo portrayed the apostles' reactions to Christ's startling announcement that one of them would betray him. Unfortunately, Leonardo experimented with a new fresco technique that was to show signs of decay as early as 1517. After repeated attempts at restoration, the mural survives only as an impressive ruin.
Late Work in Florence
When he returned to Florence in 1500, Leonardo took up the theme of the Madonna and Child with Saint Anne. He had already produced a splendid full-scale preparatory drawing (c.1498; National Gallery, London); he now treated the subject in a painting (begun c.1501; Louvre). We know from Leonardo's recently discovered Madrid notebooks that he began to execute the ferocious Battle of Anghiari for the Great Hall of the Palazzo Vecchio in Florence on June 6, 1505. As a result of faulty technique the mural deteriorated almost at once, and Leonardo abandoned it; knowledge of this work comes from Leonardo's preparatory sketches and from several copies. The mysterious, evocative portrait Mona Lisa (begun 1503; Louvre), probably the most famous painting in the world, dates from this period, as does Saint John the Baptist (begun c.1503-05; Louvre).
SCIENTIFIC INVESTIGATIONS
Written in a peculiar right-to-left script, Leonardo's manuscripts can be read with a mirror. The already vast corpus was significantly increased when two previously unknown notebooks were found in Madrid in 1965. From them we learn, among much else, how Leonardo planned to cast the Sforza monument.
The majority of Leonardo's technical notes and sketches make up the Codex Atlanticus in the Ambrosian Library in Milan. At an early date they were separated from the artistic drawings, some 600 of which belong to the British Royal Collection at Windsor Castle.
The manuscripts reveal that Leonardo explored virtually every field of science. They not only contain solutions to practical problems of the day--the grinding of lenses, for instance, and the construction of canals and fortifications--but they also envision such future possibilities as flying machines and automation.
Leonardo's observations and experiments into the workings of nature include the stratification of rocks, the flow of water, the growth of plants, and the action of light. The mechanical devices that he sketched and described were also concerned with the transmission of energy. Leonardo's solitary investigations took him from surface to structure, from catching the exact appearance of things in nature to visually analyzing how they function.
Leonardo's art and science are not separate, then, as was once believed, but belong to the same lifelong pursuit of knowledge. His paintings, drawings, and manuscripts show that he was the foremost creative mind of his time.
David Brown
Bibliography: Clark, Kenneth, Leonardo da Vinci, rev. ed. (1959; repr. 1989); Cooper, Margaret, The Inventions of Leonardo Da Vinci (1968); Emboden, William, Leonardo Da Vinci on Plants and Gardens (1987); Galluzzi, Paolo, ed., Leonardo Da Vinci: Engineer and Architect (1988); Goldscheider, Ludwig, Leonardo da Vinci (1959); Gould, Cecil, Leonardo: The Artist and the Non-Artist (1975); Hart, Ivor B., The Mechanical Inventions of Leonardo da Vinci (1963; repr. 1982); Heydenreich, Ludwig H., Leonardo da Vinci, 2 vols. (1954), and Leonardo: The Last Supper, ed. by John Fleming and Hugh Honour (1974); Kemp, Martin, Leonardo da Vinci (1989) and Leonardo on Painting (1989); Mannering, Douglas, Art of Leonardo Da Vinci (1989); Pater, Walter, Leonardo Da Vinci (1971); Payne, Robert, Leonardo (1978); Pedretti, Carlo, Leonardo: A Study in Chronology and Style (1973); Popham, A. E., The Drawings of Leonardo da Vinci (1945); Rampaggi, Lorenzo, The Life and Art of Leonardo da Vinci (1984); Reti, Ladislao, ed., The Unknown Leonardo, trans. by Alan Morgan (1974); Richter, Jean P., The Literary Works of Leonardo da Vinci, 2 vols., 3d ed. (1970); Rosci, Marco, The Hidden Leonardo (1977); Wallace, Robert, The World of Leonardo (1966); Wasserman, Jack, Leonardo da Vinci (1975); Winternitz, E., Leonardo Da Vinci As a Musician (1982).
See also: ART; ITALIAN ART AND ARCHITECTURE; PAINTING; RENAISSANCE ART AND ARCHITECTURE.

5.1.0.12. Locke, John

John Locke, b. Aug. 29, 1632, d. Oct. 28, 1704, was an English philosopher and political theorist, the founder of British EMPIRICISM. He undertook his university studies at Christ Church, Oxford. At first, he followed the traditional classical curriculum but then turned to the study of medicine and science. Although Locke did not actually earn a medical degree, he obtained a medical license. He joined the household of Anthony Ashley Cooper, later 1st earl of SHAFTESBURY, as a personal physician, and became Shaftesbury's advisor and friend. Through Shaftesbury, Locke held minor government posts and became involved in the turbulent politics of the period.
In 1675, Locke left England to live in France, where he became familiar with the doctrines of Rene Descartes and his critics. He returned to England in 1679 while Shaftesbury was in power and pressing to secure the exclusion of James, duke of York (the future King JAMES II) from the succession to the throne. Shaftesbury was later tried for treason, and although he was acquitted, he fled to Holland. Because he was closely allied with Shaftesbury, Locke also fled to Holland in 1683; he lived there until the overthrow (1688) of James II. In 1689, Locke returned to England in the party escorting the princess of Orange, who was to be crowned Queen MARY II of England. In 1691, Locke retired to Oates in Essex, the household of Sir Francis and Lady Masham. During his years at Oates, Locke wrote and edited, and received many influential visitors, including Sir Isaac Newton. He continued to exercise political influence. His friendships with prominent government officers and scholars made him one of the most influential men of the 17th century.
Locke's Essay Concerning Human Understanding (1690) is one of the classical documents of British empirical philosophy. The essay had its origin in a series of discussions with friends that led Locke to the conclusion that the principal subject of philosophy had to be the extent of the mind's ability to know (see EPISTEMOLOGY). He set out "to examine our abilities and to see what objects our understandings were or were not fitted to deal with." The Essay is a principal statement of empiricism, and, broadly speaking, was an effort to formulate a view of knowledge consistent with the findings of Newtonian science.
Locke began the Essay with a critique of the rationalistic idea that the mind is equipped with INNATE IDEAS, ideas that do not arise from experience. He then turned to the elaboration of his own empiricism: "Let us suppose the mind to be, as we say, white paper, void of all characters, without any ideas; how comes this to be furnished? . . . whence has it all the materials of reason and knowledge? To this I answer, in a word, from experience." What experience provides is ideas, which Locke defined as "the object of the understanding when a man thinks." He held that ideas come from two sources: sensation, which provides ideas about the external world, and reflection, or introspection, which provides the ideas of the internal workings of the mind.
Locke's view that experience produces ideas, which are the immediate objects of thought, led him to adopt a causal or representative view of human knowledge. In perception, according to this view, people are not directly aware of physical objects. Rather, they are directly aware of the ideas that objects "cause" in them and that "represent" the objects in their consciousness. A similar view of perception was presented by earlier thinkers such as Galileo and Descartes.
Locke's view raised the question of the extent to which ideas are like the objects that cause them. His answer was that only some qualities of objects are like ideas. He held that primary qualities of objects, or the mathematically determinable qualities of an object, such as shape, motion, weight, and number, exist in the world, and that ideas copy them. Secondary qualities, those which arise from the senses, do not exist in objects as they exist in ideas. According to Locke, secondary qualities, such as taste, "are nothing in the objects themselves but powers to produce ideas in us by their primary qualities." Thus, when an object is perceived, a person's ideas of its shape and weight represent qualities to be found in the object itself. Color and taste, however, are not copies of anything in the object.
One conclusion of Locke's theory is that genuine knowledge cannot be found in natural science, because the real essences of physical objects that science studies cannot be known. It would appear that genuine certainty can be achieved only through mathematics. Locke's view of knowledge anticipated developments by later philosophers and exercised an important influence on the subsequent course of philosophical thought.
Locke's considerable importance in political thought is better known. As the first systematic theorist of the philosophy of LIBERALISM, Locke exercised enormous influence in both England and America. In his Two Treatises of Government (1690), Locke set forth the view that the state exists to preserve the natural rights of its citizens. When governments fail in that task, citizens have the right--and sometimes the duty--to withdraw their support and even to rebel. Locke opposed Thomas HOBBES's view that life in the original state of nature was "nasty, brutish, and short," and that individuals through a SOCIAL CONTRACT surrendered--for the sake of self-preservation--their rights to a supreme sovereign who was the source of all morality and law. Locke maintained that the state of nature was a happy and tolerant one, that the social contract preserved the preexistent natural rights of the individual to life, liberty, and property, and that the enjoyment of private rights--the pursuit of happiness--led, in civil society, to the common good. Locke's notion of government was a limited one: the checks and balances among branches of government (later reflected in the U.S. Constitution) and true representation in the legislature would maintain limited government and individual liberties.
A Letter Concerning Toleration (1689) expressed Locke's view that, within certain limits, no one should dictate the form of another's religion. Other important works include The Reasonableness of Christianity (1695), in which Locke expressed his ideas on religion, and Some Thoughts Concerning Education (1693).
Thomas K. Hearn, Jr.
Bibliography: Aaron, Richard I., John Locke, 3d ed. (1971); Collins, James D., The British Empiricists: Locke, Berkeley, Hume (1967); Colman, John, John Locke's Moral Philosophy (1983); Cranston, Maurice, John Locke: A Biography (1957; repr. 1985); Dunn, John, Political Thought of John Locke (1969; repr. 1983); Gough, J. W., John Locke's Political Philosophy: Eight Studies, 2d ed. (1973); Grant, Ruth W., John Locke's Liberalism (1987); Mabbott, J. D., John Locke (1973); Sahakian, Mabel L. and William S., John Locke (1975); Vaughn, Karen L., John Locke (1982); Yolton, John W., John Locke and the Way of Ideas (1956) and, as ed., John Locke: Problems and Perspectives (1969).

5.1.0.13. Maxwell, James Clerk

The Scottish physicist James Clerk Maxwell, b. Nov. 13, 1831, d. Nov. 5, 1879, did revolutionary work in electromagnetism and the kinetic theory of gases. After graduating (1854) with a degree in mathematics from Trinity College, Cambridge, he held professorships at Marischal College in Aberdeen (1856) and King's College in London (1860) and became the first Cavendish Professor of Physics at Cambridge in 1871.
Maxwell's first major contribution to science was a study of the planet Saturn's rings, the nature of which was much debated. Maxwell showed that stability could be achieved only if the rings consisted of numerous small solid particles, an explanation still accepted. Maxwell next considered molecules of gases in rapid motion. By treating them statistically he was able to formulate (1866), independently of Ludwig Boltzmann, the Maxwell-Boltzmann kinetic theory of gases (see KINETIC THEORY OF MATTER). This theory showed that temperatures and heat involved only molecular movement. Philosophically, this theory meant a change from a concept of certainty--heat viewed as flowing from hot to cold--to one of statistics--molecules at high temperature have only a high probability of moving toward those at low temperature. This new approach did not reject the earlier studies of thermodynamics; rather, it used a better theory of the basis of thermodynamics to explain these observations and experiments.
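The statistical description Maxwell introduced can be stated compactly. The following is the Maxwell-Boltzmann speed distribution in its standard modern form (a later notation, not Maxwell's own): for a gas of molecules of mass m at absolute temperature T, the probability density for a molecular speed v is

    f(v) = 4\pi \left( \frac{m}{2\pi k T} \right)^{3/2} v^{2} \exp\!\left( -\frac{m v^{2}}{2 k T} \right),

where k is Boltzmann's constant. Raising T shifts the peak of the distribution toward higher speeds, which is the statistical sense in which heat passes from hot regions to cold ones.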
Maxwell's most important achievement was his extension and mathematical formulation of Michael FARADAY's theories of electricity and magnetic lines of force. In his research, conducted between 1864 and 1873, Maxwell showed that a few relatively simple mathematical equations could express the behavior of electric and magnetic fields and their interrelated nature; that is, an oscillating electric charge produces an electromagnetic field. These four partial differential equations first appeared in fully developed form in Electricity and Magnetism (1873). Since known as Maxwell's equations, they are one of the great achievements of 19th-century physics.
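In the modern vector notation later introduced by Oliver Heaviside (Maxwell's own presentation ran to many more component equations), the four equations read, in SI units, with charge density rho and current density J:

    \nabla \cdot \mathbf{E} = \rho / \varepsilon_0, \qquad
    \nabla \cdot \mathbf{B} = 0, \qquad
    \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
    \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.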
Maxwell also calculated that the speed of propagation of an electromagnetic field is approximately the speed of light. He proposed that the phenomenon of light is therefore an electromagnetic phenomenon. Because charges can oscillate with any frequency, Maxwell concluded that visible light forms only a small part of the entire spectrum of possible ELECTROMAGNETIC RADIATION.
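That calculation can be checked from the two constants appearing in the equations above; with modern SI values (used here only for illustration),

    c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} = \frac{1}{\sqrt{(4\pi \times 10^{-7})(8.85 \times 10^{-12})}} \approx 3.00 \times 10^{8}\ \mathrm{m/s},

which is the measured speed of light.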
Maxwell used the later-abandoned concept of the ether to explain that electromagnetic radiation did not involve action at a distance. He proposed that electromagnetic-radiation waves were carried by the ether and that magnetic lines of force were disturbances of the ether. Heinrich Hertz discovered such waves in 1888.
Sheldon J. Kopperl
Bibliography: Campbell, Lewis, and Garnett, William, The Life of James Clerk Maxwell (1882; repr. 1969); Hendry, John, James Clerk Maxwell and the Theory of the Electromagnetic Field (1986); Tolstoy, Ivan, James Clerk Maxwell (1982); Tricker, R. A. R., The Contributions of Faraday and Maxwell to Electrical Science (1966).

5.1.0.14. Michelson-Morley experiment

In 1887 two American scientists, Albert A. Michelson and Edward W. Morley, performed a classic experiment that contributed to the downfall of the concepts of absolute space and the ether (see ETHER, physics). The accepted theories of late-19th-century physics required space to be filled with a medium--the ether--through which light was thought to propagate. If the Earth moves through the ether, the speed of a light ray as measured on Earth would depend on its direction, much as the speed of a swimmer depends on whether he or she swims with, against, or across the current.
Michelson designed an apparatus, called an INTERFEROMETER, which could detect this effect. Schematically, an interferometer consists of two straight arms set at right angles to each other. Each arm has a mirror at one end. At the intersection where the arms are joined, a half-silvered mirror splits a light beam into two. Each half of the split beam travels down one arm and is reflected back by the mirror at its end. When the two beams are recombined, they interfere in such a way as to produce a characteristic pattern of fringes that depends on the difference in time required for the two beams to make the round trip. If the apparatus is rotated through 90 deg, the roles of the parallel and perpendicular arms are reversed, and the fringe pattern should shift.
The expected fringe shift was four-tenths of a wavelength; however, no shift as large as four-hundredths of a wavelength was observed. Many repetitions of the experiment by other researchers have confirmed this null result. Einstein's theory of RELATIVITY provides the only fully consistent explanation; it postulates that the speed of light is always the same, regardless of the motion of the observer, and therefore is the same in each direction along each arm of the interferometer. Einstein was apparently unaware of the Michelson-Morley experiment when he proposed his theory in 1905.
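The expected shift quoted above follows from the standard textbook estimate delta-N ~ (2L/lambda)(v/c)^2, where L is the effective optical path of each arm. A minimal sketch, using the figures usually assumed for the 1887 apparatus rather than values given in this article:

    # Textbook estimate of the Michelson-Morley fringe shift.
    # All numerical values are assumed (standard textbook figures).
    L = 11.0             # effective path length per arm, meters
    v = 3.0e4            # Earth's orbital speed, m/s
    c = 3.0e8            # speed of light, m/s
    wavelength = 5.5e-7  # visible light, about 550 nm

    delta_N = (2 * L / wavelength) * (v / c) ** 2
    print(f"expected shift: {delta_N:.2f} of a fringe")  # about 0.40

The observed shift, as noted above, was less than a tenth of this expectation.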
Clifford M. Will
Bibliography: Bernstein, Jeremy, Einstein (1973); Michelson, Albert A., Studies in Optics (1927; repr. 1962); Swenson, Loyd S., Jr., The Ethereal Aether: A History of the Michelson-Morley-Miller Aether-Drift Experiments (1972); Whittaker, E. T., A History of the Theories of Aether and Electricity (1954; repr. 1987).

5.1.0.15. Nose

The nose, the site of the sense of smell, is the organ through which mammals take in air. It is supported by cartilage and bone, covered with skin, lined with a mucous membrane, and provided with muscle. A nasal septum divides it into two passages, each of which begins with a vestibule and contains a respiratory and olfactory region. The lining of the vestibule is continuous with the skin, and contains coarse hairs, sweat glands, and sebaceous (oil-producing) glands.

The respiratory region includes nearly all of the septum and the lateral walls of the nose. Goblet cells, which produce and secrete a watery mucus, are present in the lining, as is a type of erectile tissue, composed of large, thin-walled veins whose blood supply serves to warm incoming air. The olfactory region is located on the superior concha and adjacent septum. Olfactory cells are present in its lining and have delicate slender processes (modified cilia) at their free surfaces. Odors from chemicals in the air are received by these processes. Nerve cells that impinge upon the olfactory cells convert the chemical information into nerve impulses and convey the sensory information to the brain.

Roy Hartenstein
Bibliography: Barlow, H. B., and Mollon, J. D., eds., The Senses (1982); Finger, T. E., and Silver, W. S., eds., Neurobiology of Taste and Smell (1987); Wright, R. H., ed., The Sense of Smell (1982).

Picture Caption[s]
The nose is divided by the septum into two cavities, each containing three folds called conchae and lined with a mucous membrane. Air taken in through the nostrils is filtered by the cilia--small hairs in the mucous membrane--moistened by the mucus, and warmed by the blood vessels of the superior conchae. The olfactory membrane of the superior conchae and adjacent part of the septum contains olfactory cells, nerve cells sensitive to odors. Airborne chemicals interact with the ciliated endings of these cells; nerve impulses then are carried by the olfactory nerve to the brain.

5.1.0.16. Perception

Perception is the process and experience of gaining sensory information about the physical world. The characteristic questions that perceptual psychology poses and the methods it employs derive from its main theoretical aims and assumptions described below.

CLASSICAL PERCEPTUAL THEORY
In the classical approach of Hermann HELMHOLTZ, the first step was to divide sensory experience into modalities such as vision, touch, and smell, and to subdivide the modalities into elementary SENSATIONS from which all more complex perceptual experiences--such as those of objects and events--were presumed to be constructed. Sensations were to be explained in terms of their physiological bases (the receptor neurons) and the physical energies to which the receptors are specially adapted to respond.

The Psychophysical Methods
Each noticeably different sensory experience was presumed to rest on a corresponding receptor process; the psychophysical methods (see PSYCHOPHYSICS) were quantitative procedures designed to measure and identify such noticeable differences. For example, the eye focuses the physical light from an object into an image on a mosaic of photosensitive receptors (the retina); these photoreceptors provide the basic sensations of light and color as responses. According to classical perceptual theory, perceptions of the important attributes of the world, such as the relative brightness of an object, are not sensations; rather, they are complex learned perceptions.

Depth Perception
Depth perception has been similarly explained. It is the experience of the third dimension of visual space, and it includes perception of the distance of an object from the observer (absolute distance) and of the distance of objects from one another (relative distance). Since three dimensions cannot be reproduced on the two dimensions of the retina's surface, the question arises of how to explain accurate human and animal depth perception. Helpful supplementary information in the form of depth cues is one answer. Most depth cues available to the stationary eye--such as linear perspective, occlusion of a far object by a near one, and aerial perspective, or increasing haze--were listed by Leonardo da Vinci. Classical perceptual theory assumed that depth perception was learned from such cues. Perceived distance would result from the visual color and shade sensations associated with memory images of previous muscle-stretch and touch sensations. However, Edward L. Thorndike showed in 1899 that some animals can respond appropriately to visual depth cues even though they have had no prior visual experience, suggesting that some depth perception is innate rather than learned. Subsequent research has corroborated and extended Thorndike's findings.

CONSTANCIES, ILLUSIONS, AND ORGANIZATIONAL PHENOMENA
Three sets of phenomena cause difficulties for classical perceptual theory and have been responsible for most research in perception: constancies, ILLUSIONS, and organizational phenomena.
Perceptions accord more often with objects' properties than with the sensory stimulation; for example, a man's perceived height remains constant even though his retinal image size changes as he approaches an observer. There are many such perceptual constancies, which usually cause one to perceive the world more correctly than would be expected from sensory stimulation (in the previous example, from the changes in retinal image).
Illusions are cases in which perception accords neither with how the receptors are stimulated, nor with the characteristics of the objects themselves. For example, in brightness contrast, an object's reflectance--in fact constant--appears to change when its surroundings change. In certain geometrical illusions, the appearance of size or length of horizontal or vertical lines is drastically altered by the addition of a few lines.
Whereas experience with the world might teach people to perceive things correctly, as shown by the constancies, it is less evident why experience should result in illusions. Illusions are in fact pervasive phenomena.
The organizational phenomena rest on the perceptual distinction between figure and ground: when a contour gives shape to one area, the region bounded by the other side of the contour (ground) usually has no recognizable shape. In such cases, a figure may be perceived as one object or another, but not as both simultaneously. Which area becomes the figure is therefore critical to what object will be perceived. GESTALT PSYCHOLOGY opposed classical perceptual theory by considering the form (the Gestalt) of the stimulating energies to be the essential attribute. Gestaltists sought laws of organization such as the "law of good continuation," which states that people perceive the figure-ground organization that interrupts the fewest smoothly continuing lines. Such factors have not been quantitatively or objectively studied. Nevertheless, many of them provide impressive demonstrations relevant to the casually observable facts of perception and (it was thought) unrelated to familiarity. Highly familiar objects can in fact be concealed in favor of quite unfamiliar shapes, in apparent contradiction to classical theory.
Like a melody that remains the same when transposed in key, a particular form might have the same effect on the nervous system regardless of its particular place on the sensory surface, its specific size, and so on. This idea underlay the Gestaltist explanation for the perceptual constancies, which differs from the classical one but was never adequately worked out. The accounts of physiological processes to which Gestalt theory attributed its demonstrations have been thoroughly discredited, but there have been continued attempts at objective formulation of the laws of organization by later psychologists using the tools of information theory. The principle here is that one perceives the simplest organization that could be fitted to a particular pattern of stimulation. Despite their central importance to the Gestalt approach, theories of form and shape perception have not progressed far along these lines; in fact, the classical approach has come to assimilate the Gestalt demonstrations, as explained in the following section of this article.
One explanation of the constancies and illusions that has continued to gain support since Helmholtz is that both reflect the same processes. That is, one perceives those objects or events that would normally be responsible for the sensory stimulation received. In this way, a person's visual system acquires associations that reflect the normal structure of the physical world. For example, the perceptual system learns to take distances into account when estimating the sizes of objects. Such sophisticated inferences are surely not conscious, if in fact they are made at all, so this theory is often phrased as "unconscious inferences based on unnoticed sensations." The theory is difficult to test, because the sensations cannot be directly observed.
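The "taking distances into account" mentioned above amounts to a simple geometric relation: under the small-angle approximation, an object's visual angle is roughly its physical size divided by its viewing distance. A minimal sketch, with illustrative values not drawn from this article:

    # Size constancy as inference: the retinal (visual) angle changes
    # with distance, but angle * distance recovers a constant size.
    # The height and distances below are illustrative assumptions.
    height = 1.8                     # a person 1.8 m tall
    for distance in (10.0, 5.0, 2.5):
        angle = height / distance    # retinal image grows on approach
        inferred = angle * distance  # inferred size stays 1.8 m
        print(f"{distance:4.1f} m: angle {angle:.2f} rad, inferred {inferred:.1f} m")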

CLASSICAL PERCEPTUAL THEORY REVISED
Because the elementary experiences (sensations) in classical theory must be considered unobservable and unpredictable (as the constancies, illusions, and organizational phenomena demonstrate), Egon BRUNSWIK restated (1956) Helmholtz's position as follows: Because of the regularities in the physical world, the light at the eye normally contains packets of cues to any property of the physical world. The correlations are usually less than perfect--that is, the cues are only probabilistic. The organism presumably learns to rely on any cue to a degree proportional to the cue's correlation with an object's attributes.
In this version of classical perceptual theory, not only are the constancies and illusions examples of the same reliance on cues, but also the Gestalt phenomena are explained. The figure-ground phenomenon is considered to be an inference made by the perceptual system about which side of a line is really an object's edge, and the laws of organization to be merely cues on which those inferences are based; for instance, the law of "good continuation" reflects the extreme unlikelihood that two objects' edges, at different distances, will line up precisely in the retinal image.
To be usefully specific and subject to experimental verification, this approach must be based on quantitative knowledge about the correlation between cues and the object-attributes they reflect. Such information could presumably be obtained from "ecological surveys," which in effect explain what perceptual learning has taught the perceiver. Until recently, few such ecological surveys have been undertaken.

RECENT PHYSIOLOGICAL FINDINGS
Recent physiological and psychophysical study of the nervous system suggests that it contains receptive units much more complex than a mere mosaic of photoreceptors (in vision), units that allow for a more direct perceptual theory than the classical version. Ernst MACH and Ewald Hering, contemporaries of Helmholtz, made early proposals to account explicitly for at least some perception of an object's lightness, form, and distance in terms of innate sensory mechanisms. These proposals have recently gained immensely in popularity with the discovery of lateral connections between the receptors, as well as in the higher levels of the nervous system, that provide for more direct response to object properties. For example, neural networks exist that respond directly to the ratio of the light coming from some object relative to the light from its immediate surroundings. Such responses would normally remain constant with changes in illumination, because any change in lighting of both target and background would proportionally change the light that each of them sends to the eye, leaving the ratio itself intact.
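The ratio principle is easy to make concrete: if the luminance a surface sends to the eye is modeled as reflectance times illumination, then any illumination change affecting target and surround alike cancels in their ratio. A minimal numeric sketch with assumed values:

    # Lightness constancy from luminance ratios. Reflectances and
    # illumination levels are illustrative assumptions.
    target_reflectance = 0.8    # light gray patch
    surround_reflectance = 0.2  # darker surround
    for illumination in (100.0, 1000.0):  # dim versus bright light
        target = target_reflectance * illumination
        surround = surround_reflectance * illumination
        print(illumination, target / surround)  # ratio is 4.0 both times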
Furthermore, cells have been found in the retina and higher nervous systems of amphibians and mammals that respond primarily to patterns and relationships in the retinal image, not merely to physical energy. For instance, such cells respond to dark disks surrounded by bright rings, and vice versa; to edges of a particular orientation or direction of movement; and to simultaneous stimulation of corresponding points in the retinas of both eyes.
It is not yet clear to what extent, and in what manner, such pattern-sensitive networks, or feature detectors, actually contribute to perception, but their existence lends plausibility to more direct theories of perception. James Jerome Gibson has proposed the most thoroughgoing of such theories, in which the properties of the scenes and events in the perceived world are direct responses to information in the light at the eye. A particular aspect of Gibson's theory--that the visual system registers differences in texture gradients between near and far surfaces and uses these impressions to comprehend depth--has been used successfully in developing computer vision systems. It is not yet known, however, whether these computer vision systems are analogous to the human vision system. Besides gradients, other environmental cues, such as motion, contour, and shape, have been shown to be important to visual perception.

NEW DIRECTIONS IN RESEARCH
Other lines of research that might test this general theory are in progress. Primary among these is the study of perceptual development. A fair amount of evidence exists that some animals can respond innately to depth cues, thus showing that the classical analysis of sensory processes was at least incomplete in that regard. But evidence that infants' perceptions of size remain constant in spite of changes in object distances (and despite changes in the resulting retinal image size)--which is the issue most central to this question--remains controversial. Research in perceptual development has pushed back earlier and earlier the stage at which the infant is considered perceptually competent, but the classical theory has not been finally dismissed (see also INFANCY).
In the earliest classical theory, a perception of a shape was thought to consist of the memories of the eye movements that would have to be made in order to bring each point on its contour to the center of vision. Such a definition leaves out a great deal (for instance, the organizational phenomena discussed above). But it does raise an extremely important point: because one sees detail only at the fovea (a small region in the center of the retina), the eye makes successive rapid, aimed movements, called saccades, at different parts of any object or scene. With each eye movement, of course, the image of the scene shifts on the retina. The shifting retinal images are normally not noticed, a form of constancy that is often explained as a compensation for the eye movements. Much research was, and is, being done on altering the extent of the compensation through relearning (for instance, by the prolonged wearing of prism spectacles, which change the correlation between where the eye muscles point the eye and the image it receives).
Just as the question of how individual successive glances are perceived has been studied, so research on brief "tachistoscopic" glimpses has studied the effects of attention, expectation, familiarity, and motivation. Words with which the viewer is more familiar, has reason to expect, or that accord with his or her interest and concerns, will be detected at briefer exposures. Explanations for these effects remain under debate. In any case, however, such research does not address the most serious problem posed by eye movements--how one uses the sequence of partial glimpses to perceive completed objects and scenes.
The process of how one fills out and stores momentary glimpses is close to (and may be identical with) that of mental imagery--that is, experiences of objects not actually stimulating the sense organs. In classical theory, as noted, mental images provided the vehicle for transforming raw sensations into perceptions of the world, but research on imagery proved so unreliable that the problem was set aside for many years. Objective work on imagery has recently increased on many fronts since the 1970s. Neurologists are studying how NEURAL NETWORKS in the brain process perceptual information, and computer scientists continue work (with varying degrees of success) on developing computer analogs of these networks. Studies of people who have suffered brain trauma have also yielded information on perceptual processing centers in the BRAIN. Finally, the field of COGNITIVE PSYCHOLOGY has produced important insights into perception.
Julian Hochberg

Bibliography: Caws, Mary, Perspectives on Perception (1989); Gibson, James Jerome, The Senses Considered as Perceptual Systems (1966) and The Perception of the Physical World (1950); Goldstein, E. Bruce, Sensation and Perception, 3d ed. (1989); Gombrich, Ernst H., The Image and the Eye (1982); Hochberg, Julian, Perception, 2d ed. (1978); Hubel, D.H., Eye, Brain, and Vision (1988); Rock, Irvin, The Logic of Perception (1983); Wilding, J.M., Perception (1983).
See also: SENSES AND SENSATION; SENSORY DEPRIVATION

5.1.0.17. Phonology and morphology

Phonology is the system of deployment of a language's phonetic resources; morphology is the aggregate of patterns and other regularities involving word-formation within a given language. Because the majority of phonological patterns in most languages probably can be stated in terms of morphology, with only limited recourse to SYNTAX, this article will first discuss morphology and then phonology.

MORPHOLOGY
Morphology, as a branch of LINGUISTICS, is the study of word-formation. Although linguists are nearly unanimous in their belief that all languages have elements called words, they have yet to agree upon a universal definition of word. The common definition, that a word corresponds to a stretch of writing with spaces fore and aft, is not cogent for various reasons. For instance, many languages lack a writing system, or orthography, altogether. Others have orthographies that do not use spaces or other word-isolating devices (see WRITING SYSTEMS, EVOLUTION OF). Some languages have two orthographies that differ as to how to isolate words, for example, the Arabic and Roman orthographies for Swahili. Even within one orthographic system, arbitrary conventions or inconsistencies exist (is the correct spelling firehouse, fire-house, or fire house?).
Although the criteria for what constitutes a word have so far proved elusive, linguists have succeeded in developing a few rather adequate tests, one of which might be called the test of minimum pronounceability. Consider, for example, the following conversation:
A: "I will walk."
B: "What was the last word you said?"
A: "Walk" (not "will walk").
But if A begins by saying "I walked," then his response to B's question must be "walked," not "-ed." It may thus be concluded that will walk consists of two words, but walked of only one.

Morphemes
The -ed of walked is an example of a morpheme--a minimum meaning-bearing constituent of a word. If a word has no smaller meaning-bearing parts, the word itself is a morpheme. Because -ed bears the meaning "past," and cannot itself be resolved into smaller meaning-bearing parts, it is a morpheme. For the same reasons walk is a morpheme, whether coterminous with a word in a sentence like I will walk or part of a word, as in I walked.

Compounding
Just as morphemes can be strung together to make words, so words can be joined to make compound words, or compounds. Firehouse, for example, contains the two independent words fire and house. Although English is quite flexible in compounding, some languages use this kind of morphology even more liberally. For example, the German word for ambulance, Krankenwagen, literally means "patient-vehicle"; the Chinese word for psychology, Hsinli, is a compound meaning "mind-principles."

Affixation
Perhaps the most common morphological process among the world's languages is affixation, in which an affix morpheme, normally with a grammatical, as opposed to a lexical, function, is added to a stem morpheme. In English and most European languages the type of affixation used in verb inflection is suffixation (for example, walk-ed, walk-s, walk-ing); many other languages use prefixation (for example, Swahili tembea, "walk," (a-)li-tembea, "(he) walked"). Rarer is infixation, in which the stem morpheme is interrupted by the affix (for example, Tagalog (Philippine) lakad, "walk," l-um-akad, "walked").
A limiting case of infixation is the internal flexion of the Semitic languages; the stem consists of a consonantal framework--the root--and the affix consists of vowels within that framework--the scheme. In the Hebrew word halakh, "walked," h-l-kh is "walk" and a-a signals the past tense. For so-called superfixation, the affix consists of a stress, tone, or other suprasegmental. Superfixation is prevalent in various languages of Africa and Central America and is perhaps marginally involved in some English words, such as insert and record, in which stress on the first syllable marks the noun, and stress on the second syllable marks the verb.

Other Morphological Processes
Certain types of morphology do not lend themselves to clear segmentation into morphemes. Ablaut involves a series of two or more vowel replacements--as in sing, sang, sung--or, less commonly, consonant replacements--as in Irish pog, "kiss," fog (spelled phog), "kissed." Portmanteau morphology involves two or more morphemic functions invested in what is apparently one minimal form. (What part of was marks past tense?) Series such as crash, bash, smash constitute phonesthemes, which represent an especially difficult case of unique morphs, or nonclassifiable residues of morphemic segmentation. For example, cranberry ostensibly contains the morpheme berry, but the intractability of cran to further classification makes it a classic example of a unique morph.
Although unique morphs may cause problems for linguistic theorizing, they are, notably, often involved in word creation in contemporary English. Indeed, processes building on unique morphs such as -oholic in food-oholic, smoke-oholic, and so on may well be more dominant than novel stem-affix combinations, such as sex-ism, age-ism, and the like, on the model of such established forms as racism. Many modern coinages--the so-called blends, like motel, from motorist and hotel, or smog, from smoke and fog--represent a particularly severe case of such tendencies.

Inflection and Derivation
Word-formation may be classified in terms of form (affixation, compounding, ablaut, and so on) or function. The most common functional classification of morphology is that of inflection and derivation.
Inflectional morphology characteristically involves relatively tight systems of grammatical marks--most commonly but not always affixes--on one lexical item without change in part of speech. In grammars the inflected forms of a given lexical item are frequently grouped into paradigms. English has relatively modest inflection, both of nouns (singular-plural-possessive: boy-boys-boy's-boys') and of verbs (present and past; active and passive participial: speak, speaks, spoke; speaking, spoken). English and other relatively uninflected languages, such as Chinese, compensate for their scant inflection periphrastically, that is, by syntactic means. For example, English uses nine words--If I had known I would not have waited--for what Latin can convey in four--Si scivissem non mansissem, "if known-had-I not waited-would-have-I"--or Turkish in two--bilseydim beklemezdim, "known-if-had-I waited-not-would-have-I."
Derivational morphology characteristically involves relatively loose systems of marks (as with inflection, the marks commonly but not invariably are affixal) by which a family of different lexical items are related, frequently but not always across different parts of speech. An example of multiple derivation is provided by the noun verbalization, which is related by the derivational suffix -ation to the verb verbalize, which is in turn derived from verbal by the addition of -ize; and finally, verbal is built on verb by -al.

PHONOLOGY
To say that a language's phonology involves the deployment of that language's phonetic resources within the framework of its morphology and syntax is virtually tantamount to saying that a language's phonological system cannot be identified with either its phonetic or morphosyntactic system but rather mediates between those systems. This situation can be illustrated by a few English words: mopper, mop, slobber, pop. First, the plural suffix -s is pronounced differently in moppers and mops, like the z of booze in the former but like the s of moose in the latter. Moreover, these differences in pronunciation of the plural -s are not idiosyncratic facts about the words mopper and mop (as, for example, could be claimed for dice as the plural of die), but rather bespeak a pervasive regularity of English. The z pronunciation of -s is the norm for nouns ending in a voiced sound--that is, a sound articulated with concomitant vibration of the vocal cords (see PHONETICS). Similarly, the s pronunciation of -s is the norm for nouns ending in a voiceless sound--a sound made with the vocal cords at rest.
The preceding discussion might suggest that phonology is not necessary, and all that is needed is a correlation between the morphological fact that English has a morpheme, the plural suffix -s, and the phonetic facts that this morpheme has two pronunciations--z following voiced sounds and s following voiceless sounds. Exactly the same z-s pattern, however, is found in two other morphemes in addition to the noun plural -s: for the third-person-singular present-tense suffix -s (slobbers, like moppers; pops, like mops) and the possessive suffix -'s (the pronunciation of mopper's is identical to that of the plural moppers, and likewise the pronunciations of mop's and the plural mops are the same). Thus three morpheme-pronunciation statements now must be formulated--one for each of the three morphemes involved--even though ostensibly the same pattern is in some way involved for each of the three cases. It is in large part situations like these that have led linguists to posit the existence of a phonological level of language organization.

Phonological Rules
Rather than relating the three suffix morphemes of the previous paragraph directly to their pronunciations, they can be said to share an abstract phoneticlike symbolization--a phonological representation--which will arbitrarily be called X. Then one pronunciation statement can be formulated for X--a phonological rule--to the effect that X is pronounced as s following a voiceless sound, but as z following a voiced sound.
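Such a rule can be sketched directly. The voiceless inventory below is a simplified assumption, and nouns ending in sibilants (bus, dish), which take a third, syllabic allomorph, fall outside the two-way pattern discussed here:

    # Realization of the abstract suffix X: "s" after a voiceless
    # final sound, "z" after a voiced one. The voiceless set is a
    # simplified, assumed inventory, not a full treatment of English.
    VOICELESS_FINALS = {"p", "t", "k", "f"}

    def realize_suffix(final_sound: str) -> str:
        """Pronunciation of suffix X after the given final sound."""
        return "s" if final_sound in VOICELESS_FINALS else "z"

    print(realize_suffix("p"))  # mops, pops -> s
    print(realize_suffix("r"))  # moppers, slobbers -> z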
The manner in which phonological rules are formulated and the names and symbols used vary considerably from linguistic school to linguistic school and from theory to theory. The phonological element that serves as input to the rule, symbolized by X in the above example, is variably called a morphophoneme, underlying segment, or phoneme. Despite differences of other sorts, all theories of phonology recognize the importance of distinctiveness in the organization and function of sound systems, normally by taking phonemes to be distinct from one another, with non-phonemic differences in sound following from phonemic distinctions. Thus it is usually assumed that mob and mop differ distinctively (phonemically) in the contrast between b and p, while the difference in vowel length follows from that (the pronunciation of o being longer before b than before p).

Segmental and Suprasegmental Phonology
Segmental phonology is the phonology of vowels and consonants; suprasegmental or prosodic phonology involves phenomena such as stress (intensity) and tone (pitch). An accentual pattern involves the deployment of suprasegmentals within a word (for example, the stress difference between the noun insert, with stress on the first syllable, and the verb insert, with stress on the second syllable), whereas an intonational pattern involves suprasegmentals within the framework of a sentence (for example, all the words in Mary worries Martin are accentually stressed on the first syllable, but the stress in Martin is intonationally most prominent). Because the sentence characteristically constitutes the framework for intonation, and because sentences are fundamentally syntactic constructs, intonation is one phonological phenomenon whose domain goes beyond morphology.

Joseph L. Malone

Bibliography: Anderson, Stephen R., Phonology in the 20th Century (1985); Goldsmith, J., Autosegmental and Metrical Phonology (1989); Hawkins, Peter, Introducing Phonology (1984); Kaye, Jonathan, Phonology: A Cognitive View (1989); Lass, Roger, Phonology: An Introduction to Basic Concepts (1984); Pulleyblank, D., and Archangeli, D., The Content and Structure of Phonological Representations (1990).

5.1.0.18. Pyramids

In architecture the term pyramid denotes a monument that resembles the geometrical figure of the same name. It is almost exclusively applied to the stone structures of ancient Egypt and of the pre-Columbian cultures of Central America and Mexico.
Egypt
The Egyptian pyramids were funerary monuments built for the pharaohs and their closest relatives. Most date from the Old Kingdom (c.2686-2181 BC) and are found on the west bank of the Nile, in a region approximately 100 km (60 mi) long and situated south of the delta, between Hawara and Abu Ruwaysh. Pyramids developed from the MASTABA, a low, rectangular stone structure erected over a tomb. The oldest pyramid known, the Step Pyramid of King Zoser at SAQQARA (c.2650 BC), has a large mastaba as its nucleus and consists of six terraces of diminishing sizes, one built upon the other. It was surrounded by an elaborate complex of buildings, now partially restored, whose function related to the cult of the dead.
The next phase of development is represented by the 93-m-high (305-ft) pyramid at Maydum, built at the order of Snefru, founder of the 4th dynasty (c.2613-c.2498 BC). This structure was designed as a step pyramid; later the steps were covered with a smooth stone facing to produce sloping sides. The pyramid at Dahshur was also built by Snefru. Halfway between its base and apex its inclination was changed, so that it is bent in appearance.
A characteristic feature of all classical Egyptian pyramids, including those of Snefru, is a temple complex, comprising a lower or valley temple at a short distance from the pyramid and connected by a causeway with a mortuary temple, situated adjacent to the pyramid. The most elaborate example of the temple complex is found at Giza, near modern Cairo, where the 4th-dynasty pyramids of Kings KHUFU (Cheops), KHAFRE (Chephren), and MENKAURE (Mycerinus) lie in close proximity to each other. The pyramid of Khufu, erected c.2500 BC, is the largest in the world, measuring 230 m (756 ft) on each side of its base and originally measuring 147 m (482 ft) high. Beginning in the 10th century AD the entire Giza complex served as a source of building materials for the construction of Cairo, and, as a result, all three pyramids were stripped of their original smooth outer facing of limestone. The temples have disappeared, with the exception of the extremely well preserved granite valley temple of Khafre.
The last great pyramid of the Old Kingdom is that of Pepi II of the 6th dynasty (c.2345-2181 BC). In the following turbulent era (the First Intermediate Period, c.2181-2040 BC), almost no pyramids were built. When King Mentuhotep II of the 11th dynasty attained power (c.2060 BC), pyramid construction resumed. During the 11th and 12th dynasties until 1786 BC, pyramids continued to be built (at Dahshur and al-Faiyum), but later, rock-cut tombs were preferred.
The first structures built in imitation of the pyramids of ancient Egypt were those built by Nubian and Meroitic kings from c.700 BC to AD 350. Near the cities of MEROE and Napata (in modern Sudan) are rows of royal graves that consist of small, steeply sloped pyramids. Of special interest is the Cestius pyramid (12 BC) in Rome, the funerary monument of the tribune Gaius Cestius, which for many centuries was the only European example of an Egyptian-style pyramid. During the neoclassical period in the art of the 18th century the French architect Etienne Louis BOULLEE and the Italian sculptor Antonio CANOVA designed a number of pyramidal-shaped funerary monuments.
Pre-Columbian America
All pre-Columbian pyramids are truncated, stepped pyramids and served as the foundations for temples. The largest ones usually slope less steeply than the Egyptian pyramids, but the smaller ones often have an even steeper incline. Stairways carved into one or more sides of the pyramid lead to the temple.
Pyramids were erected by the ancient Mesoamerican cultures of the MAYA, TOLTECS, and AZTECS, and they are found in many areas of Mexico, Honduras, Guatemala, and El Salvador. Most were built during the classic period (AD 300-900) and in the following postclassic period (900-1542). The pyramid of EL TAJIN, which was built between the 4th and 9th centuries in northern Veracruz, Mexico, is unique. On each of its terraces is a series of recessed niches in which sacrificial offerings were probably placed. In the pyramid of the Temple of the Inscriptions at PALENQUE, Mexico, which also dates from the classic period, a passage discovered beneath the floor of the temple leads to a richly furnished burial crypt deep within the pyramid. One of the largest pyramids in Central America is the 66-m-high (216-ft) Pyramid of the Sun (2d century AD) at TEOTIHUACAN, Mexico. Temple-pyramid complexes at late civic-ceremonial centers such as CHICHEN ITZA and UXMAL, dating from the postclassic Maya-Toltec period, are generally lower in height, topped with a larger, flat platform; they therefore are generally not considered true pyramids.
Bibliography: David, A.R., The Pyramid Builders of Ancient Egypt (1986); Davidovits, Joseph, and Morris, Margie, The Pyramids: An Enigma Solved (1988); Edwards, I.E.S., The Pyramids of Egypt, rev. ed. (1961; repr. 1987); Evans, Humphrey, The Mystery of the Pyramids (1979); Fakhry, Ahmed, The Pyramids, 2d ed. (1969); Hunter, C. Bruce, Guide to Ancient Mexican Ruins (1977); Seiss, Joseph A., The Great Pyramid (1981); Tompkins, Peter, Mysteries of the Mexican Pyramids (1976; repr. 1987) and Secrets of the Great Pyramid (1978); Weeks, John, Pyramids (1971).
See also: EGYPT, ANCIENT; PRE-COLUMBIAN ART AND ARCHITECTURE.
Picture Caption[s]
A camel caravan passes the pyramids of Khufu (Cheops), Khafre, and Menkaure at Giza, Egypt, on the eastern edge of the Sahara. (Tony Stone Worldwide Photo Library)
The Pyramid of the Sun, built in the 2nd century AD, dominates the landscape of the ancient city of Teotihuacan in Mexico. Teotihuacan was the first true city in Mesoamerica; at its peak (c. AD 600) it housed more than 100,000 people. (ACE Photo Agency)

5.1.0.19. Quantum mechanics

{kwahn'-tuhm}
Quantum mechanics is a description of the behavior of matter and energy on a small scale--a scale small enough that the discrete or discontinuous nature of all matter and radiation becomes noticeable.
The difference between classical mechanics and quantum mechanics is analogous to the difference between a ramp and a staircase. The ramp (classical theory) is continuous and an object may assume any position on it. If the height of the object represents its energy, it may have any value. In moving up or down the ramp (gain or loss of energy), the object passes through all intermediate energy states in a continuous increase or decrease. An object placed on a staircase (quantum theory) can occupy only particular, discrete positions. Each step represents a quantum of energy. According to quantum theory, the object can increase or decrease its energy level only by absorbing or emitting exactly enough energy to permit it to exist at another allowed energy level. In making a "quantum jump" the object simply does not exist between allowed levels.
If the steps are sufficiently small, the description of the staircase is virtually the same as the description of the ramp. In reality, the individual quanta are extremely minute, so that for macroscopic phenomena the discontinuous nature is not noticeable. The energy E in a single quantum of radiation of frequency nu is given by E = h(nu), where h is PLANCK'S CONSTANT. Classical physics assumed h = 0; in quantum physics it has the very small but nonzero value of approximately 6.62 x 10^-34 joule-seconds.
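As a worked example (the frequency is an assumed, illustrative figure for green light), a single quantum of visible light carries an energy of only about

    E = h\nu \approx (6.62 \times 10^{-34}\ \mathrm{J\,s}) \times (5.5 \times 10^{14}\ \mathrm{Hz}) \approx 3.6 \times 10^{-19}\ \mathrm{J},

which is why the graininess of radiation escapes notice on the macroscopic scale.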
Development of Quantum Mechanics
Before the 20th century it was thought that matter and radiation could be described in a continuous fashion--that an object could be of any size and could absorb or emit radiation of any energy. By the beginning of the 20th century, much evidence showing the discrete nature of phenomena, especially those involving atomic structure and spectra, was available. This evidence provided the impetus for the development of quantum mechanics. The new quantum mechanics was able to explain a multitude of physical phenomena that classical physics could not and so became quite rapidly accepted. Quantum mechanics, however, requires quite a different set of assumptions than does the continuous mechanics of classical physics, even though the quantum description always agrees with the classical description for systems that are large enough. For extremely large systems and for those traveling near the speed of light, classical mechanics is also inadequate, and the theory of RELATIVITY, proposed by Albert Einstein in 1905, is needed. Quantum mechanics and the theory of relativity together utterly upset the foundations of classical physics. The new theories have posed philosophical problems, many of which continue to be investigated.
Herbert L. Strauss
Bibliography: Cropper, William H., The Quantum Physicists and an Introduction to Their Physics (1970); Feynman, Richard P., The Feynman Lectures on Physics, vol. 3, Quantum Mechanics (1965); Jammer, Max, The Conceptual Development of Quantum Mechanics (1966); Jauch, Josef M., Are Quanta Real? A Galilean Dialogue (1973) and Foundations of Quantum Mechanics (1968); Saxon, D. S., Physics for Liberal Arts Students (1971); Schiff, Leonard I., Quantum Mechanics, 3d ed. (1968); Weinberg, Steven, The First Three Minutes (1977); Wichmann, E. H., Quantum Physics (1967).

5.1.0.20. Renaissance

{ren'-uh-sahns}
The term Renaissance, describing the period of European history from the early 14th to the late 16th century, is derived from the French word for rebirth, and originally referred to the revival of the values and artistic styles of classical antiquity during that period, especially in Italy. To Giovanni BOCCACCIO in the 14th century, the concept applied to contemporary Italian efforts to imitate the poetic style of the ancient Romans. In 1550 the art historian Giorgio VASARI used the word rinascita (rebirth) to describe the return to the ancient Roman manner of painting by GIOTTO DI BONDONE about the beginning of the 14th century.
It was only later that the word Renaissance acquired a broader meaning. Voltaire in the 18th century classified the Renaissance in Italy as one of the great ages of human cultural achievement. In the 19th century, Jules MICHELET and Jakob BURCKHARDT popularized the idea of the Renaissance as a distinct historical period heralding the modern age, characterized by the rise of the individual, scientific inquiry and geographical exploration, and the growth of secular values. In the 20th century the term was broadened to include other revivals of classical culture, such as the Carolingian Renaissance of the 9th century or the Renaissance of the 12th Century. Emphasis on medieval renaissances tended to undermine belief in the unique and distinctive qualities of the Italian Renaissance, and some historians of science, technology, and the economy even denied the validity of the term. Today, however, the concept of the Renaissance is firmly established as a cultural and intellectual movement, and most scholars would agree that there is a distinctive Renaissance style in music, literature, and the arts.
The Renaissance as a Historical Period.
The new age began in Padua and other urban communes of northern Italy in the 14th century, where lawyers and notaries imitated ancient Latin style and studied Roman archaeology. The key figure in this study of the classical heritage was PETRARCH, who spent most of his life attempting to understand ancient culture and captured the enthusiasm of popes, princes, and emperors who wanted to learn more of Italy's past. Petrarch's success stirred countless others to follow literary careers hoping for positions in government and high society. In the next generations, students of Latin rhetoric and the classics, later known as humanists, became chancellors of Venice and Florence, secretaries at the papal court, and tutors and orators in the despotic courts of northern Italy. Renaissance HUMANISM became the major intellectual movement of the period, and its achievements became permanent.
By the 15th century intensive study of the Greek as well as Latin classics, ancient art and archaeology, and classical history, had given Renaissance scholars a more sophisticated view of antiquity. The ancient past was now viewed as past, to be admired and imitated, but not to be revived.
In many ways, the period of the Renaissance saw a decline from the prosperity of the High Middle Ages. The Black Death (bubonic and pneumonic plague), which devastated Europe in the mid-14th century, reduced its population by as much as one-third, creating chaotic economic conditions. Labor became scarce, industries contracted, and the economy stagnated, but agriculture was put on a sounder basis as unneeded marginal land went out of cultivation. Probably the actual per capita wealth of the survivors of the Black Death rose in the second half of the 14th century. In general, the 15th century saw a modest recovery with the construction of palaces for the urban elites, a boom in the decorative arts, and renewed long-distance trade headed by Venice in the Mediterranean and the HANSEATIC LEAGUE in the north of Europe.
The culture of Renaissance Italy was distinguished by many highly competitive and advanced urban areas. Unlike England and France, Italy possessed no dominating capital city but developed a number of centers for regional states: Milan for Lombardy, Rome for the Papal States, Florence and Siena for Tuscany, and Venice for northeastern Italy. Smaller centers of Renaissance culture developed around the brilliant court life at Ferrara, Mantua, and Urbino. The chief patrons of Renaissance art and literature were the merchant classes of Florence and Venice, which created in the Renaissance palace their own distinctive home and workplace, fitted both for business and for the rearing of the next generation of urban rulers. The later Renaissance was marked by a growth of bureaucracy, an increase in state authority in the areas of justice and taxation, and the creation of larger regional states. During the interval of relative peace from the mid-15th century until the French invasions of 1494, Italy experienced a great flowering of culture, especially in Florence and Tuscany under the MEDICI. The brilliant period of artistic achievement continued into the 16th century--the age of LEONARDO DA VINCI, RAPHAEL, TITIAN, and MICHELANGELO--but as Italy began to fall under foreign domination, the focus gradually shifted to other parts of Europe.
During the 15th century, students from many European nations had come to Italy to study the classics, philosophy, and the remains of antiquity, eventually spreading the Renaissance north of the Alps. Italian literature and art, even Italian clothing and furniture designs were imitated in France, Spain, England, the Netherlands, and Germany, but as Renaissance values came to the north, they were transformed. Northern humanists such as Desiderius ERASMUS of the Netherlands and John Colet (c. 1467-1519) of England planted the first seeds of the Reformation when they applied critical methods developed in Italy to the study of the New Testament.
Philosophy, Science, and Social Thought.
No single philosophy or ideology dominated the intellectual life of the Renaissance. Early humanists had stressed a flexible approach to the problems of society and the active life in service of one's fellow human beings. In the second half of the 15th century, Renaissance thinkers such as Marsilio FICINO at the Platonic Academy in Florence turned to more metaphysical speculation. Though favored by the humanists, Plato did not replace Aristotle as the dominant philosopher in the universities. Rather, there was an effort at philosophical syncretism, an attempt to combine apparently conflicting philosophies and to find common ground for agreement about the truth, as Giovanni PICO DELLA MIRANDOLA did in his Oration on the Dignity of Man (1486). Renaissance science consisted mainly of the study of medicine, physics, and mathematics, depending on ancient masters such as Galen, Aristotle, and Euclid. Experimental science in anatomy and alchemy led to discoveries both within and outside university settings.
Under the veneer of magnificent works of art and the refined court life described in BALDASSARE CASTIGLIONE's Book of the Courtier, the Renaissance had a darker side. Warfare was common, and death by pestilence and violence was frequent. Interest in the occult, magic, and astrology was widespread, and officially sanctioned persecution for witchcraft began during the Renaissance period. Many intellectuals felt a profound pessimism about the evils and corruptions of society, as seen in the often savage humanist critiques of Giovanni Francesco Poggio Bracciolini (1380-1459) and Desiderius Erasmus. Sir Thomas MORE, in his Utopia, prescribed the radical solution of a classless, communal society, bereft of Christianity and guided by the dictates of natural reason. The greatest Renaissance thinker, Niccolo MACHIAVELLI, in his Prince and Discourses, constructed a realistic science of human nature aimed at the reform of Italian society and the creation of a secure civil life. Machiavelli's republican principles, informed by a pragmatic view of power politics and the necessity of violent change, were the most original contribution of the Renaissance to the modern world.
Influence.
The Renaissance lived on in established canons of taste and literature and in a distinctive Renaissance style in art, music, and architecture, the last often revived. It also provided the model of many-sided achievement of the creative genius, the "universal man," exemplified by Leonardo da Vinci or Leon Battista ALBERTI. Finally, the Renaissance spawned the great creative vernacular literature of the late 16th century: the earthy fantasies of RABELAIS, the worldly essays of MONTAIGNE, the probing analysis of the human condition in the plays of William SHAKESPEARE.
Benjamin G. Kohl
Bibliography: Baron, Hans, Crisis of the Early Italian Renaissance, rev. ed. (1966); Burckhardt, Jakob C., The Civilization of the Renaissance in Italy (1944); Ferguson, W. K., The Renaissance in Historical Thought (1948); Gilmore, Myron P., The World of Humanism (1952); Hale, J. R., ed., A Concise Encyclopaedia of the Italian Renaissance (1981); Hale, J. R., Renaissance Europe, 1480-1520 (1971); Hay, Denys, The Italian Renaissance in Its Historical Background, 2d ed. (1977); Kristeller, Paul O., Renaissance Thought and Its Sources (1979); Miskimin, Harry A., The Economy of the Early Renaissance (1970)

5.1.0.21. Schopenhauer, Arthur

{shoh'-pen-how-ur}
The German philosopher Arthur Schopenhauer, b. Feb. 22, 1788, d. Sept. 21, 1860, taught a pessimistic view of existence that placed emphasis on human will instead of intellect. Educated in France and England by unconventional parents, Schopenhauer entered the University of Gottingen as a medical student but in 1811 transferred to Berlin to study philosophy. His thesis, On the Fourfold Root of the Principle of Sufficient Reason, appeared in 1813 (Eng. trans., 1974). Schopenhauer's mother, a novelist of considerable ability, had bitter and antagonistic relations with her son. She established a salon at Weimar, however, which allowed him to meet literary figures, including Johann Wolfgang von Goethe, whose conversations inspired Schopenhauer's Uber das Sehn und die Farben (On Vision and Colors, 1816). The World as Will and Representation, his major work, appeared 2 years later (Eng. trans. of 3d ed., 1966).
To Schopenhauer's bitter disappointment, this book did not make him famous, but it did enable the young philosopher to lecture at Berlin, where he set his lectures at the same hour as those of the thinker to whom he was most vehemently opposed, Georg Wilhelm Friedrich Hegel. The attempt to undermine Hegel failed, and from 1831 on Schopenhauer lived a solitary life, resentful at the world's failure to recognize his genius. His subsequent writings, On the Will in Nature (1836; Eng. trans., 1888) and The Basis of Morality (1841; Eng. trans., 1901), develop concepts implicit in his earlier work. Not until the publication of Parerga and Paralipomena (1851; Eng. trans., 1974), a collection of essays and aphorisms, did fame and influence finally arrive. By the time of his death Schopenhauer's system was taught in German universities, and a growing circle of admirers had appeared in Russia, Britain, and the United States.
Although considering himself a follower of Immanuel Kant, Schopenhauer emphasized the will and its irrationality in a way Kant would have rejected. Kant had shown that the human mind organizes sensation into stable and coherent patterns, but he denied the possibility of going beyond these patterns to a knowledge of things as they really are. Schopenhauer agreed that individuals ordinarily conceive the world in this neat and stable fashion but held that it is possible to go beyond such pretty pictures to know the ultimate reality: the will. Humans are active creatures who find themselves compelled to love, hate, desire, and reject; the knowledge that this is their nature is immediate and irreducible. Although the will is entirely real, it is not free, nor does it have any ultimate purpose. Rather, it is all-consuming, pointless, and negative. Nor is there any escape from the will in nature; expressions of the will are seen throughout nature--in the struggles of animals, the stirring of a seed, the turning of a magnet.
The only purpose in life must be that of escaping the will and its painful strivings. The arts, with their "will-less perception," provide a temporary haven--especially music, the highest of the arts. The only final escape, however, is the "turning of the will against itself," a mysterious process that results in liberation, in sheer extinction of the will.
Although Schopenhauer is now neglected, his influence on Friedrich Wilhelm Nietzsche, Sigmund Freud, and the young Ludwig Wittgenstein serves in part to keep his thought alive.
Pete A. Y. Gunter
Bibliography: Copleston, Frederick C., Arthur Schopenhauer: Philosopher of Pessimism, 2d ed. (1975); Gardiner, Patrick, Schopenhauer (1963); Hamlyn, David W., Schopenhauer (1985); McGill, Vivian J., Schopenhauer (1973); Taylor, Richard, The Will to Live (1962); Wallace, William, Life of Arthur Schopenhauer (1890; repr. 1970).

5.1.0.22. Spinoza, Baruch

{spin-oh'-zuh, bah-rook'}
Baruch (or Benedict) Spinoza, b. Amsterdam, Nov. 24, 1632, d. Feb. 21, 1677, was one of the most important philosophers of the European tradition of RATIONALISM.
Life
Spinoza was born into a family of Portuguese Jews who had taken refuge in Holland at the end of the 16th century. His early education was in Hebrew, the Bible, the Talmud, and the Kabbalah. Later he studied such Jewish thinkers as Maimonides, Gersonides, and Crescas. After 1651 he read some Renaissance Neoplatonism and Stoicism as well as the work of certain Dutch Calvinist scholastics. He also studied Latin, mathematics, and Cartesian philosophy. Not yet 24 years old, Spinoza rejected traditional interpretations of Scripture and thus deviated from Jewish orthodoxy. In 1656 he was expelled from the synagogue at Amsterdam.
Supporting himself by grinding lenses for optical instruments, Spinoza stayed for a period of time in the vicinity of Amsterdam, where he gave private lessons and carried on a wide correspondence. In 1660 he went to Rijnsburg, near Leiden, where he began his correspondence with Henry Oldenburg, secretary of the Royal Society in London. In 1664 he settled in Voorburg near The Hague, where he vainly sought solitude and tranquillity, but in 1671 he moved to The Hague itself. In order not to compromise his freedom of thought and speech, he refused a chair at the University of Heidelberg 2 years later. By now he was famous and, among others, even Gottfried Wilhelm von Leibniz came to visit him. He died of tuberculosis, a disease made worse by the dust from his lens grinding.
Works
During his lifetime Spinoza published only one work under his own name: a geometry-style exposition of Rene DESCARTES's Principia philosophiae (Principles of Philosophy), with Spinoza's own Cogitata metaphysica (Metaphysical Thoughts) appended (1663). His Tractatus theologico-politicus (Theological-Political Treatise) was published anonymously in 1670. Spinoza's Opera posthuma (Posthumous Works) appeared shortly after his death in 1677 and included his Tractatus de emendatione intellectus (Treatise on the Improvement of Understanding) as well as his definitive work Ethica ordine geometrico demonstrata (Ethics Demonstrated in Geometrical Order), which he had completed in draft form by 1665 and had subsequently revised. Translations of these writings include The Chief Works of Spinoza, translated by R. H. M. Elwes (2 vols., 1955-56).
Philosophy
Although opinions vary about Spinoza's sources (at his death only 161 volumes were found in his small library), no one can deny the considerable influence of Descartes. Spinoza uses much of Descartes's philosophical vocabulary and definitions, and he often organizes his own thoughts in response to Cartesian problems. He owes to Descartes the idea of a mathematical method that distinguishes his main work, the Ethics.
Spinoza's Ethics is divided into five parts: "On God," "On the Nature and Origin of the Mind," "On the Nature and Origin of the Emotions," "On Human Bondage," and "On Human Liberty."
Each part follows a rigorous geometrical method, passing through definitions, axioms, and postulates to propositions, demonstrations, corollaries, scholia (explanatory comments), and lemmata (intermediate theorems). Spinoza had earlier employed the same method in discussing Descartes's Principles. The overall aim of the work is to lay out a program for "the perfection of human nature."
In part 1, Spinoza defines God as the only true cause and the unique substance, outside of which "no other substance can be given or even conceived." Although this one Divine Substance has an infinite number of attributes, humans can know only two: thought and extension. Each attribute entails an infinity of particular things, or modes, much as properties are entailed in the essence of a triangle. Again, humans can know only those modes emanating from the attributes of thought and extension. Concretely, this means that although ideas and bodies appear to be separate things in human experience, they are in fact only aspects of the one Divine Substance.
A basic axiom in the whole unfolding of Spinoza's system states the strict parallelism between the two lines of thought and extension: "The order and connection of ideas is the same as the order and connection of things." As this parallelism develops, a universal necessity is attached to it. Neither in the Divine Substance nor in its attributes or modes is there any room for contingency. What is termed Divine Freedom is simply the absence of external constraint. This determinism extends to human nature as well; the sense of free will is nothing more than ignorance of the true causes of an individual's actions.
In part 2 human existence is reduced to modes of thought and extension. Descartes's dualism of mind and body is reflected here. Spinoza still sees a dualism, but it is at the level of modes rather than of substance. The mind is a mode of thought and as such it is "a part of the infinite intellect of God." The body is a mode of the Divine extension. In virtue of the parallelism of attributes and their modes, a natural correspondence exists between mind and body in humankind. At the same time, however, no real interaction exists between them. Mind and body are but two aspects, or expressions, of one underlying Divine Substance.
In part 3, Spinoza defines an affect as a modification by which the body's power to act is increased or diminished. Affects involve both thought and extension. Human beings are the "adequate cause" (even though ultimately God alone is a cause) of those affects which are actions and the inadequate cause of those which are passions.
In part 4 of the Ethics, Spinoza discusses the concept of "human bondage." A natural tendency exists for an individual's passive feelings, or passions, to take control of life and make that individual a slave. The only remedy is to convert passions into actions.
In part 5, Spinoza explains how action is achieved. To the extent that humans understand how everything, including their passions, is a necessary mode of a Divine attribute, they can gain an "adequate idea" of it. As they "clearly and distinctly" understand their passions, they gain power and become a more adequate cause of those passions. The passions thereby become actions, and humans overcome their bondage.
The last stage of human liberation is seeing that "all bodily affections are referred to God." At this stage all passion is transformed into an action that is "the intellectual love of God." This process is the very perfection of human nature, in which humans intuit their oneness with God. It not only liberates and beatifies but also confers upon them a kind of immortality.
During his lifetime Spinoza was a controversial figure, largely because his philosophical pantheism was not widely appreciated in either Jewish or Christian religious circles. His influence then and immediately after his death is not always easy to pinpoint. Although he left no school of disciples, his works were read by Leibniz and others. His popularity increased in the 18th and 19th centuries, when he influenced such diverse persons as the French Encyclopedists, Goethe, Coleridge, and even Hegel. Today the depth and rigor of his thought are widely recognized.
John P. Doyle
Bibliography: Allison, H. E., Benedict de Spinoza (1975; repr. 1987); Curley, E. M., Spinoza's Metaphysics (1969); Freeman, Eugene, and Mandelbaum, Maurice, eds., Spinoza: Essays in Interpretation (1975); Grene, M., ed., Spinoza: A Collection of Critical Essays (1979); Hampshire, Stuart, Spinoza (1951); Kashap, S. P., ed., Studies in Spinoza (1973); Kennington, Richard, ed., The Philosophy of Baruch Spinoza (1980); Levin, Dan, Spinoza (1970); Roth, Leon, Spinoza (1954; repr. 1986); Wolfson, H. A., The Philosophy of Spinoza (1934; repr. 1983).
Picture Caption[s]
An early advocate of intellectual freedom, the 17th-century Dutch metaphysician Baruch Spinoza (1632-77) was formally expelled for heresy by the traditionalist Jewish community of Amsterdam in 1656. Thereafter, he supported his lifelong rationalist inquiries by working as a lens grinder, refusing any compromising scholarly patronage. (The Bettmann Archive)

5.1.0.23. Vikings

The Vikings were venturesome seafarers and raiders from Scandinavia who spread through Europe and the North Atlantic in the period of vigorous Scandinavian expansion (AD 800-1100) known as the Viking Age. From Norway, Sweden, and Denmark, they appeared as traders, conquerors, and settlers in Finland, Russia, Byzantium, France, England, the Netherlands, Iceland, and Greenland.
For many centuries before the year 800, such tribes as the Cimbrians, Goths, Vandals, Burgundians, and Angles had been wandering out of Scandinavia. The Vikings were different because they were sea warriors and because they carried with them a civilization that was in some ways more highly developed than those of the lands they visited. Scandinavia was rich in iron, which seems to have stimulated Viking cultural development. Iron tools cleared the forests and plowed the lands, leading to a great increase in population. Trading cities such as Birka and Hedeby appeared and became the centers of strong local kingdoms. The Viking ship, with its flexible hull and its keel and sail, was far superior to the overgrown rowboats still used by other peoples. Kings and chieftains were buried in ships (see GOKSTAD SHIP BURIAL; OSEBERG SHIP BURIAL), and the rich grave goods of these and other burial sites testify to the technical expertise of the Vikings in working with textiles, stone, gold and silver, and especially iron and wood. The graves also contain Arab silver, Byzantine silks, Frankish weapons, Rhenish glass, and other products of an extensive trade. In particular, the silver kufic (or cufic) coins that flowed into the Viking lands from the caliphate further stimulated economic growth. Viking civilization flourished with its SKALDIC LITERATURE and eddic poetry, its runic inscriptions (see RUNES), its towns and markets, and, most of all, its ability to organize people under law to achieve a common task--such as an invasion.
Expansion was apparently propelled by the search for new trading opportunities and new areas in which to settle the growing population. By the end of the 8th century, Swedish Vikings were already in the lands around the Gulf of Finland, Danish Vikings were establishing themselves along the Dutch coast, and Norwegian Vikings had colonized the Orkney and Shetland islands.
During the 9th century they expanded beyond these three bases, arriving first as rapacious raiders (looting the treasures of monasteries, for example, and capturing slaves for sale in the Middle East) but soon establishing themselves on a more permanent basis. Swedes called Rus or Varangians established fortified cities at Novgorod and then at Kiev, creating the first Russian state (see RURIK dynasty), and traded down the great rivers of Russia to Byzantium and Persia. Norwegian Vikings established kingdoms in Ireland, where they founded Dublin about 840, and in northwestern England. They settled Iceland and colonized Greenland in the 10th century and founded the short-lived North American colony called VINLAND in the early 11th century (see L'ANSE AUX MEADOWS). Great armies of Danes and Norwegians conquered the area called the DANELAW in England, overthrowing all the Anglo-Saxon kingdoms except King Alfred's Wessex. They attacked cities in France, Germany, the Low Countries, and Spain and, in 911, seized control of Normandy in France, where their descendants became known as the NORMANS.
After conquering and settling foreign lands, the Vikings came under the cultural influence of the conquered peoples. Originally pagan worshipers of Thor and Odin, many became Christians, and during the 10th century they brought Christianity back to Scandinavia.
The process of conquest slackened during the 10th century as civil wars raged in Scandinavia. Out of these wars emerged powerful new kingdoms with great new fortresses, including TRELLEBORG in Denmark. Soon armies of a renewed Viking age were sailing forth. In 1013, SWEYN of Denmark conquered all of England. His son, CANUTE, built an empire that included England, Denmark, and Norway.
By the second half of the 11th century, however, the emergence of stronger political systems and stronger armies in Europe, the development of new types of ships, and the redirection of military endeavor by the Crusades brought the Viking Age to an end.
J. R. Christianson
Bibliography: Brondsted, Johannes, The Vikings, trans. by Kalle Skov (1960; repr. 1971); Foote, Peter G., and Wilson, David M., The Viking Achievement (1970); Graham-Campbell, James, The Viking World (1980); Jones, Gwyn, A History of the Vikings (1968); Kendrick, Thomas Downing, A History of the Vikings (1930; repr. 1968); Kirkby, Michael, The Vikings (1977); Poertner, Rudolf, The Vikings (1975); Sawyer, P. H., The Age of the Vikings, 2d ed. (1972)

5.1.0.24. Writing systems, evolution of

Full writing systems may be defined as those collections of arbitrary signs that can represent all the words of the languages to which they are applied. Limited writing systems, consisting of marks made for counting or identification, go back 30,000 years; but the evolution of full writing systems has taken place only during the past 5,000 years.

Although in use for only a relatively brief period of history, writing systems have made possible the technological advances that have taken humanity from hunting, gathering, and simple farming to the exploration of space. Writing created a permanent record of knowledge so that a fund of information could accumulate from one generation to the next. Before writing, human knowledge was confined by the limits of memory--what one could learn for oneself or find out from talking to someone else. Writing extended the geography of communication: whereas early visual systems, such as signaling by gestures or with fires or smoke, were limited to the range of eyesight and subject to misinterpretation, writing allowed accurate communication at a distance without traveling or relying on the memory of a messenger.

Limited and Full Writing.
Limited writing includes both picture writing, or pictography, and ideography, the use of pictures to represent not the object drawn but some attribute or idea suggested by the object (for example, the use of a drawing of the sun to represent the idea of warmth). Limited writing refers directly to the object or idea portrayed. Pictograms or ideograms call to mind an image or concept that then may be expressed in language; the reader does not need to know the language of the writer but can translate the signs directly into his or her own language.
A full or true writing system represents words, not objects. However elaborate, the earliest systems of Mesopotamia, Egypt, and Central America qualify only as limited writing since they used signs that refer to the objects represented and not to the words for the objects. A recently created limited writing system, international traffic signs, is effective because it avoids language; simple pictures, not words or phrases incomprehensible to illiterates or speakers of other languages, warn drivers of road hazards and traffic regulations. Other widely used modern systems of limited writing, such as musical or scientific notation, electronic circuit diagrams, and blueprints, all use less space than full writing to convey specific technical information.

Word, Syllabic, and Alphabetic Writing.
To represent a language adequately, a full writing system must maintain fixed correspondences between its signs and the elements of the language. A writing system that has a sign for each word in the language is called logographic, one that has signs for the different syllables that occur is called syllabic, and one that has a sign for each sound of the language is called alphabetic.
To understand a message written in a full writing system, the reader must know the language of the writer. This does not mean, however, that a writing system can be used for only one language and no other. Throughout history writing systems have been transferred with great effectiveness from one language to another--as from Chinese to Japanese or from Latin to English.

Forerunners of Writing
Called petrograms if drawn or painted on the surface of rocks and petroglyphs if cut into the rock, primitive drawings have been found on every continent except Antarctica. Early paintings like those on the ceiling of the cave at ALTAMIRA, Spain (c.14,000-c.9500 BC), or on the walls of Barrier Canyon, Utah (c.4000 BC), belong as much to the history of art as to the history of writing. Other pictures or series of pictures, however, such as Eskimo ivory carvings and New Zealand petroglyphs, seem to have been designed more for communication than for aesthetic pleasure (see INSCRIPTION).
The markings of the Mesolithic AZILIAN culture of southern France that were made on flint pebbles may descend from pictures of men and animals but apparently took on a magical or religious significance. Prehistoric Egyptian and Anatolian potters and masons used marks to identify their handiwork. In China, Africa, and the Americas, ancient peoples used knotted cords, notched sticks, and other mnemonic devices to help them count or keep track of time or distance.
The AZTEC took their writing system from the MAYA. Although all the highly pictographic Aztec symbols and most of the increasingly stylized Mayan ones stood for objects or concepts (including numbers) rather than for words, syllables, or sounds, some Mayan symbols may have become phonetic.

Logographic Systems
When pictograms or ideograms become so stylized as to be no longer recognizable as representations of particular objects, users (both readers and writers) begin to transfer the significance of the signs from the objects to the names for those objects--that is, the signs come to signify words rather than objects, and writing becomes phoneticized. So far, scholars have discovered seven ancient civilizations in which the transference from picture writing to word writing took place: Sumerian (3100 BC; see SUMER), Egyptian (3000 BC; see EGYPT, ANCIENT), Proto-Elamite (3000 BC; see ELAM), Proto-Indic (2200 BC; see INDUS CIVILIZATION), Cretan (2000 BC; see CRETE), Hittite (1500 BC; see HITTITES), and Chinese (1500 BC; see CHINA, HISTORY OF). Of the writing systems used by these civilizations, three--Proto-Elamite, Proto-Indic, and Cretan--have yet to be deciphered, and only one, Chinese, remains in use today. The Proto-Elamite and Proto-Indic systems left no known descendants, but Cretan gave rise to Linear A and LINEAR B.
The Sumerians, living in southern Mesopotamia, evolved their CUNEIFORM writing system toward the end of the 4th millennium BC. Derived from Latin cuneus, "wedge," the word cuneiform describes the wedge-shaped strokes used to form the characters of the Sumerian and several later, derivative scripts, including those for Akkadian (see AKKAD) and its two dialects, Babylonian (see BABYLONIA) and Assyrian (see ASSYRIA), and for Eblaite (see EBLA). Egyptian HIEROGLYPHICS were perfected during the first dynasty (3110-2884 BC). The adjective hieroglyphic, from Greek Hieroglyphikos, "of holy carvings," denotes any system of highly stylized but still recognizable pictures and has been applied to both the ancient Cretan and Mayan writing systems.
Various forms of Hittite (see LANGUAGES, EXTINCT), belonging to the group of INDO-EUROPEAN LANGUAGES, were spoken in Anatolia from the 2d millennium BC to shortly after the time of Christ. So-called Hieroglyphic Hittite (1500-700 BC) used pictures unrelated to those of the Egyptian system, but Cuneiform Hittite (1500-1200 BC)--a distinct language--borrowed its characters from Mesopotamia. The 2,000-3,000 identifiable characters of ancient Chinese, most of them scratched on bones or shells, are the ancestors of the present-day script (see SINO-TIBETAN LANGUAGES).

Semantic and Phonetic Indicators.
In most logographic systems, one sign can represent several distinct words. With purely logographic writing, this ambiguity is not resolved, and the reader must deduce the correct word from the context. Other logographic systems, however, include semantic or phonetic complements--often called determinatives or phonetic indicators.
Determinatives indicate the class or category--such as gods, countries, fish, birds, verbs of motion, verbs of building, objects made of wood, objects made of stone--to which the word represented by the logogram belongs. For example, the Sumerian logogram APIN (originally a pictogram of a plow) stood for the Sumerian words apin, "plow," engar, "farmer," and uru, "to cultivate." When APIN appeared together with the logogram GIS, for "wood," the combination GIS-APIN indicated that the intended word belonged to the class of objects made of wood, and hence APIN was to be read as apin, "plow." Similarly, when used in conjunction with the logogram LU, "man," APIN meant engar, "farmer." Since Sumerian writing did not use determinatives with verbs, APIN alone, without a determinative, normally represented uru, "to cultivate."
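The disambiguating work done by determinatives can be modeled, in modern terms, as a lookup keyed on a logogram together with its class sign. The Python sketch below encodes only the APIN example given above; the table and function names are illustrative inventions, not a transcription standard.

    # Illustrative model of Sumerian determinatives (hypothetical data)
    READINGS = {
        ("APIN", "GIS"): ("apin", "plow"),        # GIS marks objects made of wood
        ("APIN", "LU"): ("engar", "farmer"),      # LU marks the class of men
        ("APIN", None): ("uru", "to cultivate"),  # verbs took no determinative
    }

    def read_logogram(sign, determinative=None):
        # A determinative selects which word the polyvalent sign represents.
        return READINGS[(sign, determinative)]

    print(read_logogram("APIN", "GIS"))  # ('apin', 'plow')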
Phonetic indicators show part or all of the pronunciation of the word represented by the logogram. To use a modern example, the numeral 4 (a logogram) means the cardinal number "four." To express the ordinal, a phonetic indicator -th is attached and the combination 4th read as "fourth." The sign -th calls to mind not an idea or even the word associated with the idea, but a sound constituting part of the word represented by the logogram. Every logogram, with or without a phonetic indicator, is wholly phonetic in the sense that it stands for a specific word with a phonetic realization.

The Rebus Principle
After a logogram has lost all resemblance to the object that it refers to, the logogram may come to stand for other words--homonyms--that have the same, or nearly the same, pronunciation. If the sign subsequently comes to stand not for the words themselves but only for their common phonetic shape or pronunciation, then the logogram has become a REBUS. Writing with such logograms is sometimes called writing according to the rebus principle.
The Sumerian sign TI (originally a pictogram of an arrow), standing for the Sumerian word ti, "arrow," came also to represent the near-homonym til, "to live," creating a new logogram with a meaning wholly distinct from the sign's original meaning. Then the sign began to be used simply for the sound, or syllable, ti, independent of any logographic connotation.

Logosyllabic Writing.
With signs such as TI representing only syllables, case endings and verbal inflections could be expressed by attaching the appropriate syllabic sign to the root logogram. Unlike phonetic indicators, syllabic signs were meant to be read and interpreted as elements of the language being written. In most logosyllabic (or word-syllabic) writing, words are still indicated with logograms, while the syllabic signs are reserved for grammatical elements.
Syllabic writing also enabled signs partly to express a desired grammatical element and partly to serve as phonetic indicators. The Sumerian logogram DU (originally a pictogram of a foot) represented several verbs connected with the feet, including gin, "to go," gub, "to stand," and tum, "to bring." Adding the syllabic sign for the nominalizing particle -a allowed DU-a to represent all three verbal nouns, but it soon became a convention to use a syllabic sign that also showed the proper reading of the logogram. Thus the combined symbol DU-na meant gin-a, "going," DU-ba meant gub-a, "standing," and DU-ma meant tum-a, "bringing."
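The way a syllabic complement selects among a logogram's possible readings can likewise be sketched as matching the complement against the ending of each candidate root. The Python fragment below models only the DU example above, under the assumption (made for illustration) that the complement's consonant echoes the final consonant of the intended root.

    # Hypothetical sketch: a syllabic complement picks out the matching reading
    DU_READINGS = ["gin", "gub", "tum"]  # verbs connected with the feet, written with DU

    def resolve(complement):
        # DU-na -> gin-a, DU-ba -> gub-a, DU-ma -> tum-a: the complement
        # repeats the final consonant of the intended verbal root.
        consonant = complement[0]
        for root in DU_READINGS:
            if root.endswith(consonant):
                return root + "-a"  # nominalized form, e.g., gin-a, "going"
        raise ValueError("no reading matches " + complement)

    print(resolve("na"))  # gin-a, "going"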

Syllabaries
A conflict arises in any logographic or logosyllabic writing system between economy--the number of signs required to write a given message--and explicitness--the number of signs required to avoid ambiguity of meaning. Even after grouping all words with similar meanings under one logogram, a logosyllabic system still needs 500-600 signs. By contrast, a purely syllabic system may have fewer than 100 signs and seldom has more than 200. An elaborate syllabary--the name given to the collection of characters each of which represents a syllable--can have signs for consonant plus vowel, vowel plus consonant, or consonant plus vowel plus consonant. As a purely phonetic script, syllabic writing reduces ambiguity by indicating the precise pronunciation of each word.
An open syllabary--a syllabary simplified to only consonant-plus-vowel signs--reduces the number of signs required to the number of consonants times the number of vowels in the language, plus signs for just the vowels. Even so, an open syllabary cannot express such phonological features as double consonants, consonant clusters, and final consonants. Reducing the consonant-plus-vowel signs simply to signs for a consonant plus any vowel greatly increases economy--it reduces the number of signs to the number of consonant sounds in the language--but decreases explicitness because the reader must supply the correct vowel sounds.
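The economy of each design is simple arithmetic. For a hypothetical language with 20 consonants and 5 vowels (figures chosen only for illustration), the sign counts work out as follows.

    # Sign counts for a hypothetical language: 20 consonants, 5 vowels
    C, V = 20, 5
    open_syllabary = C * V + V  # one sign per consonant-plus-vowel pair, plus bare vowels
    consonantal = C             # consonant-plus-any-vowel signs; reader supplies vowels
    print(open_syllabary)       # 105 signs
    print(consonantal)          # 20 signs -- more economical but less explicit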

Ancient Syllabaries.
Four types of syllabaries developed from the seven ancient logographic systems: cuneiform syllabaries from Sumerian, West Semitic syllabaries from Egyptian, the Cypriot syllabary from Cretan (see CYPRUS), and the Japanese syllabary, or kana, from Chinese (see JAPANESE LANGUAGE). Cuneiform syllabaries derived from Sumerian include those for the extinct languages Urartian (see URARTU), Elamite, Hattic, Hurrian, Luwian, and Palaic. West Semitic peoples of Syria and Palestine created an open syllabary from the Egyptian hieroglyphic system by leaving out logograms and the signs for more complex syllables (see AFROASIATIC LANGUAGES). Apart from that exemplified by a few short inscriptions of c.1500 BC found in Sinai, the earliest such West Semitic syllabary belongs to UGARIT, on the northern Syrian coast, and dates from c.1300 BC.

The Cherokee Syllabary.
In 1809 a native Cherokee named SEQUOYA (George Guess) undertook to develop a writing system for his people. After discarding an ideographic system as too cumbersome, by 1821 he had perfected a syllabary with 85 characters. Sequoya borrowed some of his signs from the Latin alphabet but gave them completely different values (the sign D, for instance, represents the vowel a). Other signs resemble Arabic numerals or Latin letters either turned upside down or otherwise modified; still others seem to be arbitrary creations. Within a decade almost all the men of the tribe had learned to read and write, and although the script subsequently fell into disuse, it is preserved in many manuscripts, newspapers, and printed books.

The Alphabet
By c.1000 BC, other West Semitic peoples besides the Ugarits had developed syllabaries from Egyptian hieroglyphics; it was from one of these peoples--ARABS, ARAMAEANS, Hebrews (see HEBREW LANGUAGE), or Phoenicians (see PHOENICIA), but probably the last--that the Greeks borrowed their writing system during the 9th century BC (see GREEK LANGUAGE). Soon after, the Greeks made the final step of dividing the consonants from the vowels and writing each separately. The resulting system--called an alphabet, from the names of the first two Greek letters, alpha and beta--is unique. The Greeks and no other civilization before or since (with the doubtful exception of the Koreans; see KOREAN LANGUAGE) invented the alphabet; all subsequent alphabets, ancient or modern, derive from the Greek one. Alphabetic writing represents the best compromise yet developed between economy and explicitness. Although for a given utterance alphabetic writing requires more signs than does logographic or syllabic writing, the total number of signs in the system remains small, and ambiguity is virtually eliminated because the writer can spell out each sound of each word.

The Greek Alphabet and Its Descendants.
Since the Semitic syllabaries had signs only for consonants, the Greeks needed to find characters to represent the vowels of their language. According to the standard view, the Greeks simply adopted the Semitic signs for five consonants that did not occur in Greek and applied the signs to vowels. The Semitic letter aleph, representing a smooth breathing, became Greek alpha, representing the vowel "a"; he became epsilon, "e"; yodh became iota, "i"; ayin became omicron, "o"; and waw became upsilon, "u." Hebrew and other Semitic systems had already used some consonant signs to indicate the vowel of the preceding syllable. The Greek innovation, however, consisted of having signs that represented only vowels and having the signs represent the vowels directly.
Asian offshoots of the Greek alphabet include those used in LYCIA, LYDIA, Caria, Pamphylia, and Phrygia. In Africa, the term Coptic denotes both the language descended from ancient Egyptian and the Greek-derived alphabet used to write the language from the 3d to 13th centuries AD. When Ulfilas (AD c.311-381), bishop of the GOTHS, created an alphabet for Gothic (see GERMANIC LANGUAGES), he took 19 or 20 of his 27 letters from Greek and most of the rest from Latin. The earliest surviving texts written in SLAVIC LANGUAGES, from the 10th and 11th centuries, employ the CYRILLIC ALPHABET, also derived from Greek and traditionally ascribed to Saint Cyril (see CYRIL AND METHODIUS). On the Italian peninsula, both the Messapii to the south and the ETRUSCANS to the north had adapted the Greek alphabet to their languages several centuries before Christ.

The Etruscan and Latin Alphabets and Their Descendants.
The Etruscan alphabet, exemplified by more than 10,000 inscriptions dating from the 8th century BC to the 1st century AD, in its original form consisted of 26 letters. From the Etruscan derive the Piceni, Venetic, Italic (Oscan, Umbrian, and Siculian), North Etruscan or Alpine, and Latin or Roman alphabets. The first three did not survive into the Christian era, but early Germanic tribes took their RUNES from the North Etruscan alphabet, and with only slight modifications the Latin alphabet has been adopted as the script of most modern European languages, including English.
The earliest Latin inscriptions, from the 7th to 5th centuries BC, used 21 letters, retaining only one (that derived from Greek sigma) of the three Etruscan symbols for s-sounds and reserving for numbers the symbols for three aspirates not found in the LATIN LANGUAGE (derivatives of theta, phi, and chi signified 100, 1,000, and 50, respectively, but later became identified with those letters--C, M, and L--whose forms they most closely resembled). During the 1st century BC, the Romans added the letters Y and Z to the end of their alphabet to represent two sounds newly introduced into Latin by such Greek loanwords as zephyrus, "the west wind." The letter J developed as a variant of I, and U and W developed as variants of V during late classical times, but the distinctions were not kept systematically until the 17th century.
Both in its majuscule (capital letters) and minuscule (small letters) forms, the Latin alphabet was carried throughout medieval Europe by the Roman Catholic church--to the Irish (see CELTIC LANGUAGES) and MEROVINGIANS in the 6th century and the ANGLO-SAXONS and Germans in the 7th. The oldest surviving texts in the ENGLISH LANGUAGE written with Latin letters date from c.700 (see the articles on the individual letters of the English alphabet, A, B, C, and so on).

Mixed Writing Systems
Few writing systems exist in purely logographic, syllabic, or alphabetic form. Most systems use logograms for numbers, and English includes the signs & ("and") and $ ("dollars") and the percent sign. English also creates logograms from initials, like the readily recognizable configurations USA and FBI. Other abbreviations, such as NATO and UNESCO, are pronounced like words, and some, like laser (formerly written LASER, for Light Amplification by Stimulated Emission of Radiation), have become lowercased.
Furthermore, neither logograms, syllabaries, nor alphabets--alone or in combination--can capture such crucial prosodic nuances of spoken language as pause, stress, tone, and pitch, which indicate hesitation, surprise, anger, or interrogation. The bare written expression sit down may remain inscrutable, but its vocal equivalent, depending on the speaker's stress and tone of voice, reveals whether a polite invitation, a command, or a threat is intended. To relieve ambiguity, writing systems through the ages have developed a number of conventions and auxiliary marks, notably spacing and PUNCTUATION.

Spelling, Pronunciation, and Change
The ideal alphabet imposes a direct relation between the sounds of a language and the signs that represent them. In practice, signs represent combinations of sounds (the English letter X stands for the sounds k + s) or more than one sound (C stands for k or s), and combinations of signs represent one sound (the letters PH for f) or different sounds (TH represents the voiceless initial fricative of thin as well as the voiced one of this). Still, serious obstacles confront attempts to reform the spelling of English or any other language. In all languages, pronunciation changes continually, so a new spelling system would itself need reforming after a time. Every language has dialects; some English speakers rhyme log and dog, or marry and merry, but others do not. Whose pronunciation, then, will determine the spellings of words?
Writing systems tend to be conservative. Ancient peoples attributed a divine origin to their scripts and therefore hesitated to change or modify them. Major innovations occur when one people borrows a writing system from another. When the Akkadians adapted the syllabic portion of Sumerian cuneiform to their own language, they reserved the logograms as a kind of shorthand, thus replacing a logosyllabic system with a syllabic system supplemented by logograms. When the Hittites subsequently borrowed Akkadian cuneiform, they eliminated most of the polyphonous and homophonous signs and many of the logograms but retained several Akkadian syllabic spellings as logograms.
Although unwilling to modify the structure of their writing systems, ancient peoples did simplify signs. The Akkadians kept the basic principles of their cuneiform intact for more than 2,000 years, but they reduced the number of strokes per sign and within each sign grouped together all strokes running in the same direction.

I. J. Gelb and R. M. Whiting
Bibliography: Chadwick, John, The Decipherment of Linear B, 2d ed. (1967); Cleator, P. E., Lost Languages (1959); Diringer, David, Writing (1962) and The Alphabet, 3d ed., 2 vols. (1968); Doblhofer, Ernst, Voices in Stone: The Decipherment of Ancient Scripts and Writings, trans. by Mervyn Savill (1961; repr. 1973); Driver, G. R., Semitic Writing, 3d ed. (1976); Gelb, Ignace J., A Study of Writing, rev. ed. (1963); Marshack, Alexander, The Roots of Civilization (1972); Mercer, Samuel, The Origin of Writing and Our Alphabet (1959); Moorhouse, A. C., The Triumph of the Alphabet (1953); Ober, J. H., Writing (1965); Ogg, Oscar, The 26 Letters, rev. ed. (1971); Ullman, Berthold L., Ancient Writing and Its Influence (1932; repr. 1969)


Brunswik, Egon
Egon Brunswik, b. Mar. 18, 1903, d. July 7, 1955, was a Hungarian-born American psychologist noted for his research in stimulation and perception. He also raised the question of the ecological validity of experiments in psychology--that is, the extent to which laboratory findings can be applied to situations in real life.
