Saturday, March 14, 2009

A History of Optics and Modern Science

The Fjordman Report

The noted blogger Fjordman is filing this report via Gates of Vienna.
For a complete Fjordman blogography, see The Fjordman Files.


The portion of this essay dealing with optics was originally published separately in six parts.


Initially, this essay was born out of a desire to understand the early modern history of optics. I had heard several people, even individuals otherwise critical of Islamic culture, state that the eleventh-century scholar known in Western literature as Alhazen did important work in optics. Yet it was an indisputable historical fact that photography, the telescope, the microscope and other optical advances happened in Europe, not elsewhere. Exactly what did Mr. Alhazen do, and why did the science of optics stagnate in the Middle East, if we assume that the region played a leading role in medieval times? This question triggered my curiosity.

As so often happens, the more I read about the subject the more fascinating it became. This essay therefore grew far larger than I had initially planned. I will in the following text look at optics in the widest possible sense, from Roman glassmaking traditions via medieval eyeglasses and early telescopes to quantum optics. While the primary focus will be on various aspects of optics, I will explore many other subjects along the way, sometimes because they are directly or indirectly relevant to the main topic and occasionally simply because I happen to find the topic interesting in its own right and hope that my readers will do so, too.

I have a special interest in astronomy and astrophysics. Since I am not writing a separate history of astronomy, I will include some parts of that fascinating story here and other parts in my history of mathematics and mathematical astronomy. I will not, however, write much about the development of microscopes, since I explore that subject in some detail in my history of medicine. I will write some sections on the development of modern chemistry and physics. It was advances in chemistry that made possible the invention of photography as well as the creation of the first battery in 1800, which soon triggered the study of electromagnetism. The discovery of the dual wave-particle nature of light was intimately associated with other advances in nuclear physics at the time. I will therefore explore this subject, too.

Studies of the properties of light made vital contributions to the development of physics, from quantum mechanics to the general theory of relativity. It is no exaggeration to say that without a deep understanding of the nature of light, our modern society simply could not exist.

The following quotes by the eminent scholar David C. Lindberg, who is widely recognized as a leading expert on ancient, medieval and early modern optics, refer to his book Theories of Vision from al-Kindi to Kepler, except when explicitly stated otherwise.

Speculations about the rainbow can be traced almost as far back as written records go. In China, a systematic analysis of shadows and reflection existed by the fourth century BC. I will concentrate mainly on the Greek, Middle Eastern and European optical traditions here, but will say a few words about Chinese ideas later. The theories of vision of the atomists Democritus and Epicurus, of Plato and his predecessors, of the Stoics and of Galen and Aristotle were almost entirely devoid of mathematics. The first Greek exposition of a mathematical theory of vision was in the Optica by the great mathematician Euclid, author of the Elements, perhaps the most influential textbook in the history of mathematics. Scholar Victor J. Katz in A History of Mathematics, second edition:
“The most important mathematical text of Greek times, and probably of all time, the Elements of Euclid, written about 2300 years ago, has appeared in more editions than any work other than the Bible….Yet to the modern reader the work is incredibly dull…There are simply definitions, axioms, theorems, and proofs. Nevertheless, the book has been intensively studied. Biographies of many famous mathematicians indicate that Euclid’s work provided their initial introduction into mathematics, that it in fact excited them and motivated them to become mathematicians. It provided them with a model of how ‘pure mathematics’ should be written, with well-thought-out axioms, precise definitions, carefully stated theorems, and logically coherent proofs. Although there were earlier versions of Elements before that of Euclid, his is the only one to survive, perhaps because it was the first one written after both the foundations of proportion theory and the theory of irrationals had been developed in Plato’s school and the careful distinctions always to be made between number and magnitude had been propounded by Aristotle. It was therefore both ‘complete’ and well organized.”

The Elements is a compendium organized from previously existing texts, but Euclid did give the work an overarching structure. Although he added some original material of his own, he is thus first and foremost famous for creating a brilliant synthesis of the work of others. Sadly, almost nothing is known about him personally, but he lived in the early Hellenistic period and was probably born a few years before Archimedes of Syracuse (ca. 287-212 BC). Katz again:

“In any case, it is generally assumed that Euclid taught and wrote at the Museum and Library at Alexandria. This complex was founded around 300 B.C.E. by Ptolemy I Soter, the Macedonian general of Alexander the Great who became ruler of Egypt after the death of Alexander in 323 B.C.E. ‘Museum’ here means a ‘Temple of the Muses,’ that is, a location where scholars meet and discuss philosophical and literary ideas. The Museum was to be, in effect, a government research establishment. The Fellows of the Museum received stipends and free board and were exempt from taxation. In this way, Ptolemy I and his successors hoped that men of eminence would be attracted there from the entire Greek world. In fact, the Museum and Library soon became a focal point of the highest developments in Greek scholarship, both in the humanities and the sciences.”

Even though other, similar works had existed before, Euclid’s version was greatly successful. Copies of it were made for centuries, sometimes with new additions. Katz:

“In particular, Theon of Alexandria (fourth century C.E.) was responsible for one important new edition. Most of the extant manuscripts of Euclid’s Elements are copies of this edition. The earliest such copy now in existence is in the Bodleian Library at Oxford University and dates from 888. There is, however, one manuscript in the Vatican Library, dating from the tenth century, which is not a copy of Theon’s edition but of an earlier version. It was from a detailed comparison of this manuscript with several old manuscript copies of Theon’s version that the Danish scholar J. L. Heiberg compiled a definitive Greek version in the 1880s, as close to the Greek original as possible. (Heiberg did the same for several other important Greek mathematical texts.) The extracts to be discussed here are all adapted from Thomas Heath’s 1908 English translation of Heiberg’s Greek. Euclid’s Elements is a work in thirteen books, but it is certainly not a unified work.”

Johan Ludvig Heiberg (1854-1928), philologist and historian of mathematics at the University of Copenhagen, Denmark, inspected a manuscript in Constantinople in 1906 which contained previously unknown mathematical works by Archimedes. It is worth noting here that manuscripts from the Byzantine Middle Ages containing very important works could be found in Constantinople (Istanbul), now under Turkish control, yet Turkish Muslims did not show much interest in discovering these works themselves. Christian Europeans did.

Archimedes was the first mathematician to derive quantitative results from the creation of mathematical models of physical problems on Earth. He was responsible for the first proof of the law of the lever as well as of the basic principle of hydrostatics. The principle of the lever was known before this, but as far as we know no-one had created a mathematical model for it before Archimedes. His genius as an engineer of various military devices kept the Roman invasion forces at bay for months. He was allegedly killed by a Roman soldier after the capture of Syracuse (212 BC), even though the commander Marcellus wanted to spare his life.

Another prominent Greek mathematician was Apollonius. Once again the cited dates of his birth conflict; what is certain is that he was active in the years shortly before and after 200 BC. Victor J. Katz in A History of Mathematics:

“Apollonius was born in Perge, a town in southern Asia Minor, but few details are known about his life. Most of the reliable information comes from the prefaces to the various books of his magnum opus, the Conics. These indicate that he went to Alexandria as a youth to study with successors of Euclid and probably remained there for most of his life, studying, teaching, and writing. He became famous in ancient times first for his work on astronomy, but later for his mathematical work, most of which is known today only by titles and summaries in works of later authors. Fortunately, seven of the eight books of the Conics do survive, and these represent in some sense the culmination of Greek mathematics. It is difficult for us today to comprehend how Apollonius could discover and prove the hundreds of beautiful and difficult theorems without modern algebraic symbolism. Nevertheless, he did so, and there is no record of any later Greek mathematical work that approaches the complexity or intricacy of the Conics.”

According to the quality website Molecular Expressions, which contains valuable biographies of many important figures in the history of optics, “Though often overshadowed by his mathematical reputation, Euclid is a central figure in the history of optics. He wrote an in-depth study of the phenomenon of visible light in Optica, the earliest surviving treatise concerning optics and light in the western world. Within the work, Euclid maintains the Platonic tradition that vision is caused by rays that emanate from the eye, but also offers an analysis of the eye’s perception of distant objects and defines the laws of reflection of light from smooth surfaces. Optica was considered to be of particular importance to astronomy and was often included as part of a compendium of early Greek works in the field. Translated into Latin by a number of writers during the medieval period, the work gained renewed relevance in the fifteenth century when it underpinned the principles of linear perspective.”

Hero of Alexandria did some optical work, but arguably the greatest Greek optician was Ptolemy. Claudius Ptolemaeus, or Ptolemy, was a Greek mathematician and scholar who lived in Alexandria in Roman Egypt in the second century AD. Ptolemy’s work represented the culmination of Greek scholarship in several disciplines. Most people know that his great astronomical treatise, completed around AD 150 and later known as the Almagest, was the dominant astronomical text in Europe until the sixteenth or seventeenth centuries, and even longer than that in the Middle East. It included and superseded earlier Greek astronomical works, above all those by Hipparchus from the second century BC. While geocentric (Earth-centered) Ptolemaic astronomy is widely familiar, not all readers may know that he was also an excellent geographer for his time. The recovery of Ptolemy’s Geography about AD 1295 revolutionized Byzantine geography and cartography, as it revolutionized Western European geography and cartography when it was translated into Latin a century later. It was very popular among Renaissance humanists from the fifteenth century onwards. His Tetrabiblos (“Four books”) was a standard astrological text for centuries. Fewer people know that his Optics was one of the most important works on optics in Antiquity.

After Ptolemy, the legacy of Greek Antiquity was passed on to medieval times, to the Middle East and to Europe. According to scholar F. R. Rosenthal: “Islamic rational scholarship, which we have mainly in mind when we speak of the greatness of Muslim civilisation, depends in its entirety on classical antiquity…in Islam as in every civilisation, what is really important is not the individual elements but the synthesis that combines them into a living organism of its own…Islamic civilisation as we know it would simply not have existed without the Greek heritage.”

Greek knowledge was of vital importance to Muslim scholars. Al-Kindi (died AD 873), or Alkindus as he was known in Europe, lived in Baghdad in the ninth century and was close to several Abbasid Caliphs. He was one of the first to attempt to reconcile Islam with Greek philosophy, especially with Aristotle, a project that was to last for several centuries and ultimately prove unsuccessful due to religious resistance. In the book How Greek Science Passed to the Arabs, De Lacy O’Leary states that “Aristotelian study proper began with Abu Yusuf Ya’qub ibn Ishaq al-Kindi (d. after 873), commonly known as ‘the Philosopher of the Arabs.’ It is significant that almost all the great scientists and philosophers of the Arabs were classed as Aristotelians tracing their intellectual descent from al-Kindi and al-Farabi.”

Al-Kindi’s De aspectibus was based upon Euclid’s Optica, but he could be critical of it in some cases. His book on optics influenced the Islamic world for centuries. Al-Kindi was a younger contemporary of the mathematician al-Khwarizmi (died ca. AD 850), who also worked in Baghdad; together they helped introduce to the Middle East the decimal numeral system with the zero, which was gradually spreading from India.

The Baghdad-centered Abbasid dynasty, which replaced the Damascus-centered Umayyad dynasty after AD 750, was closer to Persian culture and was clearly influenced by the pre-Islamic Sassanid Zoroastrian practice of translating works and creating great libraries. Even Dimitri Gutas admits this in his book Greek Thought, Arabic Culture. There were still large numbers of Persian Zoroastrians as well as Christians and Jews, and they clearly played a disproportionate role in the translation of scholarly works into Arabic.

One of the most prominent translators was Hunayn or Hunain ibn Ishaq (AD 808-873), called Johannitius in Latin. He was a Nestorian (Assyrian) Christian who had studied Greek in Greek lands, presumably in the Byzantine Empire, and eventually settled in Baghdad. Since he was a contemporary of al-Kindi and employed by the same patrons, they were probably acquainted. Soon he, his son and his nephew had made available in Arabic Galen’s medical treatises as well as Hippocratic works and texts by Aristotle, Plato and others. In some cases he apparently translated a work into Syriac (Syro-Aramaic), and his son then translated it further into Arabic. Their efforts preserved via Arabic translations some of Galen’s works that were later lost in the Greek original. Hunain’s own compositions include two on ophthalmology: the Ten Treatises on the Eye and the Book of the Questions on the Eye. His books had some influence but transmitted an essentially pure Galenic theory of vision.

By far the most important optical work to appear during the Middle Ages was the Book of Optics (Kitab al-Manazir in the original Arabic; De Aspectibus in Latin translation). It was written during the first quarter of the eleventh century by Ibn al-Haytham (AD 965-ca. 1039), who was born in present-day Iraq but spent much of his career in Egypt. He is known as Alhazen in Western literature. David C. Lindberg explains:

“Abu ‘Ali al-Hasan ibn al-Haytham (known in medieval Europe as Alhazen or Alhacen) was born in Basra about 965 A.D. The little we know of his life comes from the biobibliographical sketches of Ibn al-Qifti and Ibn Abi Usaibi’a, who report that Alhazen was summoned to Egypt by the Fatimid Khalif, al-Hakim (996-1021), who had heard of Alhazen’s great learning and of his boast that he knew how to regulate the flow of the Nile River. Although his scheme for regulating the Nile proved unworkable, Alhazen remained in Egypt for the rest of his life, patronized by al-Hakim (and, for a time, feigning madness in order to be free of his patron). He died in Cairo in 1039 or shortly after. Alhazen was a prolific writer on all aspects of science and natural philosophy. More than two hundred works are attributed to him by Ibn Abi Usaibi’a, including ninety of which Alhazen himself acknowledged authorship. The latter group, whose authenticity is beyond question, includes commentaries on Euclid’s Elements and Ptolemy’s Almagest, an analysis of the optical works of Euclid and Ptolemy, a resume of the Conics of Apollonius of Perga, and analyses of Aristotle’s Physics, De anima, and Meteorologica.”

Alhazen did work in several scholarly disciplines but is especially remembered for his contributions to optics. He read Hippocrates and Galen on medicine, Plato and Aristotle on philosophy and wrote commentaries on Apollonius, Euclid, Ptolemy, Archimedes’ On the Sphere and Cylinder and other mathematical works. He was probably familiar with al-Kindi’s De aspectibus and Hunain ibn Ishaq’s Ten Treatises, too. Alhazen had the resources to develop a theory of vision which incorporated elements from all of the optical traditions of the past. His treatise contains a substantially correct model of vision: the passive reception of light reflected from other objects, not an active emanation of light rays from the eyes, and he combined mathematical reasoning with some forms of experimental verification. He relied heavily on the Greek scientific tradition, but the synthesis he created was new. Lindberg:

“Alhazen’s essential achievement, it appears to me, was to obliterate the old battle lines. Alhazen was neither Euclidean nor Galenist nor Aristotelian — or else he was all of them. Employing physical and physiological argument, he convincingly demolished the extramission theory; but the intromission theory he erected in its place, while satisfying physical and physiological criteria, also incorporated the entire mathematical framework of Euclid, Ptolemy, and al-Kindi. Alhazen thus drew together the mathematical, medical, and physical traditions and created a single comprehensive theory.”

Curiously enough, his Book of Optics was not widely used in the Islamic world afterwards. There were a few exceptions, prominent among them the Persian natural philosopher Kamal al-Din al-Farisi (1267-ca. 1320) in Iran, who gave the first mathematically satisfactory explanation of the rainbow. Similar ideas were articulated at roughly the same time by the German theologian and natural philosopher Theodoric of Freiberg (ca. 1250-1310). They apparently had nothing in common apart from the fact that both were familiar with Alhazen’s work. Here is David C. Lindberg in The Beginnings of Western Science, second edition:

“Kamal al-Din used a water-filled glass sphere to simulate a droplet of moisture on which solar rays were allowed to fall. Driven by his observations to abandon the notion that reflection alone was responsible for the rainbow (the traditional view, going back to Aristotle), Kamal al-Din concluded that the primary rainbow was formed by a combination of reflection and refraction. The rays that produced the colors of the rainbow, he observed, were refracted upon entering his glass sphere, underwent a total internal reflection at the back surface of the sphere (which sent them back toward the observer), and experienced a second refraction as they exited the sphere. This occurred in each droplet within a mist to produce a rainbow. Two internal reflections, he concluded, produced the secondary rainbow. Location and differentiation of the colored bands of the rainbow were determined by the angular relations between sun, observer, and droplets of mist. Kamal’s theory was substantially identical to that of his contemporary in Western Europe, Theodoric of Freiberg. It became a permanent part of meteorological knowledge after publication by René Descartes in the first half of the seventeenth century.”
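
Kamal al-Din’s and Theodoric’s account can be restated in modern terms as a minimum-deviation calculation of the kind Descartes later published. The short Python sketch below is only an illustration of that geometry, not a reconstruction of their methods; the refractive index and the sampling of incidence angles are assumptions made for the example.

```python
import numpy as np

N_WATER = 1.333  # assumed refractive index of water for visible light

def deviation_deg(i_deg, internal_reflections):
    """Total deviation (in degrees) of a ray through a spherical droplet:
    one refraction on entry, k internal reflections, one refraction on exit."""
    i = np.radians(i_deg)
    r = np.arcsin(np.sin(i) / N_WATER)  # Snell's law at the entry surface
    k = internal_reflections
    return np.degrees(2 * (i - r) + k * (np.pi - 2 * r))

incidence = np.linspace(0.01, 89.99, 200_000)

# Primary bow (one internal reflection): seen near 180 degrees minus the minimum deviation.
d1 = deviation_deg(incidence, 1)
print(f"primary bow   ~ {180 - d1.min():.0f} degrees from the antisolar point")

# Secondary bow (two internal reflections): seen near the minimum deviation minus 180 degrees.
d2 = deviation_deg(incidence, 2)
print(f"secondary bow ~ {d2.min() - 180:.0f} degrees from the antisolar point")
```

Run as written, the sketch gives roughly 42 degrees for the primary bow and 51 degrees for the secondary one, which is why the fainter secondary bow appears above the primary.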

Alhazen’s scientific mindset wasn’t always appreciated by his contemporaries. Here is how his writings were received by fellow Muslims, as quoted in Ibn Warraq’s modern classic Why I Am Not a Muslim:

“A disciple of Maimonides, the Jewish philosopher, relates that he was in Baghdad on business, when the library of a certain philosopher (who died in 1214) was burned there. The preacher, who conducted the execution of the sentence, threw into the flames, with his own hands, an astronomical work of Ibn al-Haitham, after he had pointed to a delineation therein given of the sphere of the earth, as an unhappy symbol of impious Atheism.”

Muslims had access to good ideas but failed to appreciate their full potential. It was in the West that Alhazen had his greatest influence. The Book of Optics was translated into Latin and had a significant impact on the English scholar Roger Bacon (ca. 1220-1292) and others in the thirteenth century. Bacon was educated at Oxford and lectured on Aristotle at the University of Paris. He wrote about many subjects and was among the first scholars to argue that lenses could be used for the correction of eyesight. This was eventually done in the late 1200s in Italy with the invention of eyeglasses, as we shall see later. His teacher, the English bishop and scholar Robert Grosseteste (ca. 1170-1253), was an early proponent of validating theory through experimentation. Grosseteste played an important role in shaping Oxford University in the first half of the thirteenth century with great intellectual powers and administrative skills.

As John North says in his book God’s Clockmaker, “Robert Grosseteste was the most influential Oxford theologian of the thirteenth century. Like [Alexander] Neckham he applied his scientific knowledge to theological questions, but — unlike Neckham — he had a very original scientific mind. He had much astronomical and optical knowledge; and, without having a very profound knowledge of mathematics, he appreciated its importance to the physical sciences. There was nothing especially new in this, although it was a principle that had been largely overlooked in the West. It did no harm to have the principle proclaimed repeatedly by Grosseteste’s leading advocate after his death, the Franciscan Roger Bacon, lecturer in both Oxford and Paris.”

There was much interest in optics in Europe between the thirteenth and the seventeenth centuries, from Bacon, Witelo and John Pecham to the Italian scholars Giambattista della Porta (1535-1615), who helped popularize the camera obscura, and Francesco Maurolico (1494-1575), an astronomer and monk who studied the refraction of light and the camera obscura.

Pecham and Witelo had access to the works of Roger Bacon and Alhazen and contributed considerably to the dissemination of their ideas. Witelo (born ca. 1220, died after 1280) was from the region known as Silesia and is often labelled a Polish scholar. He was a friend of the Flemish scholar William of Moerbeke (ca. 1215-1286), the great translator of Aristotle’s works as well as texts by Archimedes and others from the original Greek. Witelo’s major surviving work on optics, Perspectiva, completed after 1270, was dedicated to William. The Englishman John Pecham (died 1292), Archbishop of Canterbury, studied optics and astronomy and was influenced by Roger Bacon’s work.

Optical theory was incorporated into the medieval European university curriculum. What is unique about optics in Europe is that it was applied to figurative art, a usage that was entirely absent in the Islamic world. Lindberg in Theories of Vision from al-Kindi to Kepler:

“About 1303, a little more than a decade after the deaths of Roger Bacon and John Pecham, Giotto di Bondone (ca. 1266-1337) began work on the frescoes of the Arena Chapel in Padua — paintings that later generations would view as the first statement of a new understanding of the relationship between visual space and its representation on a two-dimensional surface. What Giotto did was to eliminate some of the flat, stylized qualities that had characterized medieval painting by endowing his figures with a more human, three-dimensional, lifelike quality; by introducing oblique views and foreshortening into his architectural representations, thereby creating a sense of depth and solidity; and by adjusting the perspective of the frescoes to the viewpoint of an observer standing at the center of the chapel. This was the beginning of a search for ‘visual truth,’ an ‘endeavor to imitate nature,’ which would culminate a century later in the theory of linear perspective. Historians of art are unanimous in crediting the invention of linear perspective to the Florentine Filippo Brunelleschi (1377-1446). Although Brunelleschi left no written record of his achievement, his disciple Antonio Manetti gives us an account in his Vita di Brunelleschi.”

The techniques Brunelleschi used were given a theoretical expression in the treatise Della pittura (On Painting), written about 1435 by the Italian artist Leon Battista Alberti (1404-72) and dedicated to Brunelleschi. At about this time, new flat glass mirrors were available, replacing the older flat metal and hemispherical glass mirrors. Giotto painted with the aid of a mirror, and Brunelleschi used a plane mirror in his perspective demonstration. Lindberg:

“What is beyond conjecture is that the creators of linear perspective knew and utilized ancient and medieval optical theory. Alessandro Parronchi has argued that Brunelleschi’s friend Paolo Toscanelli brought a copy of Blasius of Parma’s Questiones super perspectivam to Florence when he returned from Padua in 1424 and that Brunelleschi could also have had access to the works of Alhazen, Bacon, Witelo, and Pecham. He argues, moreover, that these works may have played a decisive role in the working out of Brunelleschi’s perspective demonstration. We are on much surer ground with Alberti, whose description of the visual pyramid clearly reveals knowledge of the perspectivist tradition. Moreover, Alberti’s reference to the central ray of the visual pyramid as that through which certainty is achieved can only come from Alhazen or the Baconian tradition.”
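
For readers who want the geometry of the visual pyramid in modern notation: it amounts to central projection, in which the ray joining the eye to each scene point is cut by a picture plane. The Python sketch below is merely an illustrative restatement under that assumption, not Alberti’s or Brunelleschi’s own construction, and its coordinates and viewing distance are made up for the example.

```python
def project(x, y, z, d=1.0):
    """Cut the visual ray from the eye (at the origin) to scene point (x, y, z)
    with a picture plane at distance d, giving the image point (d*x/z, d*y/z)."""
    return (d * x / z, d * y / z)

# Two parallel floor edges at x = -1 and x = +1, receding into depth:
for z in (2, 4, 8, 16, 100):
    print(f"z = {z:>3}:", project(-1.0, -0.5, z), project(+1.0, -0.5, z))

# Both projected edges close in on (0, 0) as z grows: the vanishing point of all
# lines running straight away from the observer, which lies where the central ray
# of the pyramid meets the picture plane.
```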

Renaissance Europe was the first civilization to institute the regular use of human dissection for scientific purposes, integrated into the medical education at the new universities. The Italian polymath Leonardo da Vinci (1452-1519), the archetypal “Renaissance man,” performed dissections in order to gain a better grasp of human anatomy, knowledge which he employed extensively in his artistic work. He read optical treatises as well, but he was isolated and without influence in the field. Leonardo was an acute observer of the world around him and kept daily records in his journals of his ideas and observations, most of which were written backwards in his curious mirror script. Luckily, many of these journals have survived. They contain insights into hydraulics and geology as well as descriptions of many highly original mechanical devices that were centuries ahead of their time, from war machines such as tanks via parachutes and hang gliders to bridges. However, most of these manuscripts were in private hands until long after his death and were not seriously studied until centuries later.

Friedrich Risner (died 1580), a German mathematician who spent most of his career at the University of Paris, published a well-edited printed edition of the works of Alhazen and Witelo in 1572, the Opticae thesaurus, which benefited leading seventeenth-century figures such as Kepler, Huygens and Descartes. Risner was among the first to suggest the use of a portable camera obscura in the form of a lightweight wooden hut. Previously, a camera obscura was the size of a room, with a tiny hole in the wall or the roof.

Kepler tested a tent-sized portable camera obscura for astronomical observations in the early 1600s, but the earliest references to a small portable box camera came in the second half of the seventeenth century. The use of the camera obscura as an aid to painters and artists, a practice virtually nonexistent in the Islamic world, indirectly led to the development of the box cameras used for photography in nineteenth-century Europe.

The great German astronomer Johannes Kepler (1571-1630) was interested in optics before the telescope had been invented and had probably received an introduction to the subject at the university. Kepler was primarily a mathematician and did not personally study the anatomy of the eye, but his description does not contain any major errors. From other scholars he obtained as much anatomical knowledge as he needed to develop his theory of the retinal image. Alhazen’s contributions influenced most important works on optics written in Europe up to and including Kepler, but it was nevertheless Kepler, not Alhazen, who created the first recognizably modern theory of human vision. Lindberg:

“He has painstakingly demonstrated that all the radiation from a point in the visual field entering the eye must be returned to a point of focus on the retina. If all the radiation entering the eye must be taken into account (and who could gainsay that proposition after reflecting on Kepler’s argument?), and if the requirement of a one-to-one correspondence between the point sources of rays in the visual field and points in the eye stimulated by those rays is accepted, then Kepler’s theory appears to be established beyond serious dispute. An inverted picture is painted on the retina, as on the back of the camera obscura, reproducing all the visual features of the scene before the eye. The fact that Kepler’s geometrical scheme perfectly complemented Platter’s teaching about the sensitivity of the retina surely helped to confirm this conclusion. It is perhaps significant that Kepler employed the term pictura in discussing the inverted retinal image, for this is the first genuine instance in the history of visual theory of a real optical image within the eye — a picture, having an existence independent of the observer, formed by the focusing of all available rays on a surface.”

Kepler compared the eye to a camera obscura, but only once in his treatise. The most difficult challenge was the fact that the picture on the retina is upside down and reversed from right to left. This inverted picture caused Kepler considerable problems. He lacked the means to cope with this issue but argued that “geometrical laws leave no choice in the matter” and excluded the problem from optics, separating the optical from the nonoptical aspects of vision, which was the sensible thing to do. Optics ceases with the formation of the picture on the retina; what happens after that is for somebody else to find out. The image is turned the “right” way up by the brain, but the functions of the brain were not understood by any culture at that time. The term “neurology” was coined by the English doctor Thomas Willis (1621-1675).
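
The inversion that Kepler accepted falls directly out of pinhole geometry. The toy Python sketch below, with made-up numbers and an eye-like depth chosen only as a stand-in, illustrates just that sign flip; it is not a model of the eye’s actual refracting surfaces, which Kepler treated far more carefully.

```python
def pinhole_image(x, y, z, f=0.017):
    """Image of a scene point (x, y, z) formed through a small aperture onto a
    screen a distance f behind it. Similar triangles give (-f*x/z, -f*y/z);
    the minus signs are the up-down and left-right inversion of the picture.
    f = 0.017 (metres) is only a rough, eye-scale stand-in value."""
    return (-f * x / z, -f * y / z)

# A point up and to the right of the line of sight is imaged down and to the left:
print(pinhole_image(1.0, 2.0, 10.0))  # approximately (-0.0017, -0.0034)
```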

Although Kepler’s theory of the retinal image is correctly identified as the birth of modern optical theory, Lindberg argues that he was the culminating figure of centuries of scholarship:

“That his theory of vision had revolutionary implications, which would be unfolded in the course of the seventeenth century, must not be allowed to obscure the fact that Kepler himself remained firmly within the medieval framework. The theory of the retinal image constituted an alteration in the superstructure of visual theory; at bottom, it remained solidly upon a medieval foundation. Kepler attacked the problem of vision with greater skill than had theretofore been applied to it, but he did so without departing from the basic aims and criteria of visual theory established by Alhazen in the eleventh century. Thus neither extreme of the continuity-discontinuity spectrum will suffice to describe Kepler’s achievement: his theory of vision was not anticipated by medieval scholars; nor did he formulate his theory out of reaction to, or as a repudiation of, the medieval achievement. Rather, Kepler presented a new solution (but not a new kind of solution) to a medieval problem, defined some six hundred years earlier by Alhazen. By taking the medieval tradition seriously, by accepting its most basic assumptions but insisting upon more rigor and consistency than the medieval perspectivists themselves had been able to achieve, he was able to perfect it.”

David C. Lindberg argues that Alhazen’s Book of Optics must have been translated during the late twelfth or early thirteenth century. Indirect evidence indicates Spain as the point of translation, and the high quality of the translation points to the great Italian (Lombard) translator Gerard of Cremona (ca. 1114-1187) or somebody from his school, although we do not know this with certainty. Many of the works initially translated from Arabic by Gerard and his associates, among them Ptolemy’s great astronomical work the Almagest, were later translated directly from Greek into Latin from Byzantine manuscripts. Obviously, Alhazen’s work had to be translated from Arabic since it was written in that language in the first place.

Optical theory was widely utilized by artists in Europe to create mathematical perspective. Leonardo da Vinci’s most famous painting is undoubtedly the Mona Lisa, which is now in the Musée du Louvre in Paris, but The Last Supper, finished in 1498 in the Convent of Santa Maria delle Grazie in Milan, Italy, runs a close second. The story it tells is narrated in the Gospel of John 13:21 in the New Testament, with the first celebration of the Eucharist, when Jesus announces that one of his Twelve Apostles will betray him. The picture is a great example of one-point perspective, with Christ’s head as the midpoint of the composition.

Albrecht Dürer (1471-1528) was a German printmaker, painter and artist-mathematician from Nuremberg and one of the leading figures of the Northern Renaissance. He spent several years in Italy to study the art of perspective and had to develop a mathematical terminology in German because some of it did not yet exist at the time. His Vier Bücher von menschlicher Proportion, or Four Books on Human Proportion, from 1528 was dedicated to the study of human proportions. Like Leonardo da Vinci, he was inspired by the Roman architect Vitruvius from the first century BC but also did empirical research on his own. The examples of Dürer, Leonardo and others demonstrate that there was much mathematical theory behind the more accurate representation of human figures in post-Renaissance European art.

The Roman architect and engineer Marcus Vitruvius Pollio, or Vitruvius, was the author of the only major work on architecture and technology to have survived intact from the Greco-Roman world. De architectura, in English known as On Architecture or The Ten Books on Architecture, contains entries on water clocks and pumps as well as military devices and siege engines, which makes it of great historical value. It was written around 25 BC (dedicated to Emperor Augustus) and “rediscovered” in the early 1400s, when European scholars actively sought out books from Antiquity which they could find in Constantinople or in European monasteries. The copying during the brief but nevertheless important Carolingian Renaissance in the late eighth and early ninth centuries under Charlemagne ensured the survival of a number of Classical texts. The book’s rediscovery had a huge impact on Renaissance architects such as Filippo Brunelleschi, and its influence arguably lives on to this day. Leon Battista Alberti made it widely known with the publication of his De re aedificatoria (On the Art of Building) after 1450. Leonardo da Vinci’s drawing the Vitruvian Man was inspired by Vitruvius’ writings about architecture’s relations to the proportions of the human body.

It is true that you can find elements of perspective among the ancient Greeks, and sporadically in Indian, Chinese, Korean, Japanese and other artistic traditions. One prominent example is the masterpiece Going Up the River, or Along the River During the Qingming Festival, by the Chinese painter Zhang Zeduan (AD 1085-1145). The painting, which is sometimes referred to as China’s Mona Lisa, depicts daily life in the Song Dynasty capital Kaifeng with great attention to detail. The work displays a command of certain techniques related to shading and foreshortening, but these experiments were later abandoned. East Asian art tended to consider images as a form of painted poetry. Alan Macfarlane and Gerry Martin explain in Glass: A World History:

“It is well known that Plato felt that realist, illusionary art should be banned as a deceit, and most civilisations have followed Plato, if for other reasons. For the Chinese (and Japanese) the purpose of art was not to imitate or portray external nature, but to suggest emotions. Thus they actively discouraged too much realism, which merely repeated without any added value what could anyway be seen. A Van Eyck or a Leonardo would have been scorned as a vulgar imitator. In parts of Islamic tradition, realistic artistic representations of living things above the level of flowers and trees are banned as blasphemous imitations of the creator’s distinctive work. Humans should not create graven images, or any images at all, for thereby they took to themselves the power of God. Again, Van Eyck or Leonardo would have been an abhorrence. Even mirrors can be an abomination, for they create duplicates of living things.”

The Chinese had a passion for mirrors, but of the highly polished bronze variety. These were often believed to have magical properties, could be made into plane, convex or concave shapes and were sometimes used for optical experiments. Japanese mirrors were traditionally made of brass or steel, not glass, and were used as sacred symbols, to look into the soul instead of the body. The Romans knew how to make glass mirrors, but metal mirrors were preferred. Fine mirrors (as produced in Venice) were never made in the medieval glass traditions of the Islamic Middle East, possibly for religious reasons. The development of flat glass and metal mirrors combined with the study of optics facilitated the rise of a new kind of art in Renaissance Europe. That mirrors played a part in the development of linear perspective is a theme taken up by the scholar Samuel Edgerton. Macfarlane and Martin again:

“Mirrors had been standing in artists’ studios for several hundred years, for example Giotto had painted ‘with the aid of mirrors’. Yet Brunelleschi’s extraordinary breakthrough is the culminating moment. Without what Edgerton calculates to be a twelve-inch-square flat mirror, the most important single change in the representation of nature by artistic means in the last thousand years could not, Edgerton argues, have occurred. Leonardo called the mirror the ‘master of painters’. He wrote that ‘Painters oftentimes despair of their power to imitate nature, on perceiving how their pictures are lacking in the power of relief and vividness which objects possess when seen in a mirror…’ It is no accident that a mirror is the central device in two of the greatest of paintings — Van Eyck’s ‘Marriage of Arnolfini’, and Velazquez’s ‘Las Meninas’. It was a tool that could be used to distort and hence make the world a subject of speculation. It was also a tool for improving the artist’s work, as Leonardo recommended.”

The Flemish painter Jan van Eyck (ca. 1395-1441) is strongly associated with the development of oil painting, yet he did not invent the medium. The Islamic Taliban regime destroyed two ancient Buddha statues in the Afghan region of Bamiyan in 2001, and recent discoveries indicate that Buddhists were making oil paintings in this region as early as the mid-seventh century AD. Nevertheless, the perfection of oil by van Eyck and others allowed depth and richness of color, and Dutch and Flemish painters in the fifteenth century were the first to make oil the preferred medium. One masterpiece of Jan van Eyck is the altarpiece in the cathedral at Ghent, the Adoration of the Lamb, from 1432. Another is The Arnolfini Portrait or Marriage of Arnolfini, presumably painted in the Flemish city of Bruges in 1434.

It is possible that this painting inspired another masterpiece, Las Meninas (The Maids of Honor) from Madrid in 1656, painted by the great Spanish artist Diego Velázquez (1599-1660). Born in Seville, Andalusia, Velázquez came from a part of the Iberian Peninsula which had been under Islamic rule for many centuries, yet Islamic Spain never produced a painter of his stature. Christian Spain did. Las Meninas displays a highly accurate handling of light and shade as well as of linear perspective. A reflecting mirror occupies a central position in the picture, just like in Marriage of Arnolfini. The mirror also gave the artist a third eye so that he could see himself. Without a good mirror, many self-portraits, culminating in the brilliant series by Rembrandt (1606-1669) during the Dutch Golden Age, could not have been made.

Cutting tools made of obsidian, a natural form of volcanic glass, have been employed since prehistoric times and were extensively used by Mesoamerican cultures as late as the sixteenth century AD (pre-Columbian Americans knew neither metal tools nor man-made glass). We do not know exactly where or when glass was first artificially created. Some say it happened after 3000 BC, others say it happened earlier than this, maybe by accident at first and perhaps in more than one place. What we do know is that the region we recognize as the Middle East, stretching from Mesopotamia via the Levant to Egypt, played a crucial role in the development of this material for several millennia. Today we primarily think of glass as clear and transparent, but the earliest types of man-made glass were colored and non-transparent. Glass was long regarded as an alternative to pottery or as a way of making replicas of opaque precious stones, to glaze pottery, for jewelry and to make small containers, mainly for liquids.

By the mid-second millennium BC artisans found that incorporating calcium oxide reduced the solubility of the glass. During this period, the Late Bronze Age, there was a vast increase in the number of practical metal tools, and eventually iron came into regular use. The parallel between the increasing use of metal and of glass is probably not coincidental, as both materials are utilized through the controlled use of high temperatures. Anthony Harding writes in The Oxford Illustrated History of Prehistoric Europe:

“Beads of primitive glass (so-called ‘faience’, actually a glass-like substance fired to a rather low temperature) had been known since the Early Bronze Age, but it was only rarely that the higher firing temperature necessary for the formation of true glass was achieved. When this did happen, practically the only objects created were beads, though in Egypt and the Near East elaborate objects such as vessels and various ornaments were being produced. The discovery of partially and fully formed glass beads, crucibles with glass adhering, and partly fused glass raw materials at Frattesina in the Po valley in northern Italy is of great importance, the more so as the analytical composition of the glass demonstrates that the material is of a local composition type and not brought in by Near Eastern traders….movement of glass is now a well-established phenomenon in the Bronze Age. Production in the barbarian world was on the small scale. True, certain more highly decorated forms were created, as the eye beads and those with twists of different colours (such as the ‘Pile dwelling beads’ of Switzerland) demonstrate.”

The global center of glassmaking was the Middle East and the Eastern Mediterranean, especially the Levant: present-day Syria, Lebanon and Israel. Large-scale glass factories began operating in Egypt from the Hellenistic era onwards, with Alexandria as one of the centers. The glass of the early civilizations was molded, not blown. It was often cloudy and blue and was a luxury item as rare as precious stones.

Glass didn’t become a product available to the masses until the invention of glassblowing, which happened after ca. 50 BC, most likely somewhere in Syria or the Levant. This region was by then a part of the emerging Roman Empire, which contributed greatly to the expansion of glassmaking. Alan Macfarlane and Gerry Martin explain in their fascinating book Glass: A World History, which is about the social and cultural history of glass more than about how to make a vase in a particular shape and color:

“With the development of glass blowing it was possible to produce glass vessels cheaply and in large quantities. Glass was such a versatile, clean and beautiful substance that fine pieces became highly priced and symbols of wealth. Its success was so great that it began to undermine its main competitor, ceramics. Glass was principally used for containers of various kinds: dishes, bottles, jugs, cups, plates, spoons, even lamps and inkwells. It was also used for pavements, for coating walls, for forcing frames for seedlings, and even for drainpipes. It is no exaggeration to say that glass was used for a wider range of objects than at any other time in history, including the present. It was especially appreciated for the way it enhanced the attractiveness of the favourite Roman drink, wine. In order to appreciate the colours of wine it was necessary to see through the glass. Thus another development, with great implications for the future, was the realisation that clear glass was both useful and beautiful. In all civilisations up to Rome, and in all other civilisations outside western Eurasia, glass was chiefly valued in its coloured and opaque forms, particularly as an imitation of precious stones.”

The Romans did not, however, use glass for lenses or other optical instruments to any great extent. This was the product of medieval and early modern European civilization. Glass as a tool for obtaining reliable knowledge, in optics or in chemical equipment, was not much developed in Antiquity, but the Romans laid the foundations for later uses of glass.

It is worth pondering the connection between glass and wine. As indicated above, the extensive manufacture of glass, and of clear glass in particular, was concentrated in the western parts of Eurasia: the Middle East, the Mediterranean region and Europe. These happen to be the regions where grape wine was widely grown and, coincidentally or not, the regions which had arguably the most sophisticated optical traditions in the world by medieval times.

In our own time, excellent wines are grown in South America, in Argentina and Chile, in California in North America, and in South Africa, Australia and New Zealand. In all of these cases the production of wine was historically an extension of the European wine- and beer-making traditions. No wine was grown either in the Americas or in Australasia before the European colonial expansion in the early modern era. Alcoholic beverages were consumed in sub-Saharan Africa and pre-Columbian America, but they were based on other substances: cacao beans, maize, potatoes and so on. Likewise, in East Asia fermented beverages made from grapes were not totally unknown, but they were never widely consumed until modern times. Scholar Patrick E. McGovern elaborates in Ancient Wine: The Search for the Origins of Viniculture:

“The wild Eurasian grapevine has a range that extends over 6000 kilometers from east to west, from Central Asia to Spain, and some 1300 kilometers from north to south, from the Crimea to Northwest Africa….The plasticity of the plant and the inventiveness of humans might appear to argue for multiple domestications. But, if there was more than one domestication event, how does one account for the archaeological and historical evidence that the earliest wine was made in the upland, northern parts of the Near East? From there, according to the best substantiated scenario, it gradually spread to adjacent regions such as Egypt and Lower Mesopotamia (ca. 3500-3000 B.C.). Somewhat later (by 2200 B.C.), it was being enjoyed on Crete. Inexorably, the elixir of the ancient world made its way in temporal succession westward to Rome and its colonies and up the major rivers into Europe. From there, the prolific Eurasian grapevine spread to the New World, where it continues to intertwine itself with emerging economies.”

Although there is disagreement over the issue, some scholars claim that the earliest “wine culture” in the world emerged in Transcaucasia between the Black Sea and the Caspian Sea, comprising modern Georgia, Armenia and Azerbaijan. The ancient Sumerians imported wine to southern Mesopotamia from the Zagros Mountains in Iran. Thousands of wine jars were deposited in the tombs of the first pharaohs of Egypt at Saqqara (Memphis) and Abydos before 3000 BC. The jars appear to have been imported from Southern Palestine and the Levant. Although both the Mesopotamian and Egyptian civilizations consumed beer on an everyday basis, the most prestigious beverage was still wine. McGovern:

“The city-states of Dor, Tyre, Sarepta, Sidon, Berytus (modern Beirut), Byblos, Tripoli, and Arad hugged the shoreline, and from their well-protected harbors, the Phoenician ships carried wine, their famous textiles dyed purple, and other goods to Egypt, Greece, the far western isles, and beyond the ‘Pillars of Hercules’ (Gibraltar) to Cornwall and the west coast of Africa. The Phoenicians and their ancestors before them, the Canaanites, deserved their fame as the seafarers of the ancient world: beyond transporting valuable physical commodities from place to place, they were responsible for transmitting the alphabet, new arts and technologies, and the ideology of a ‘wine culture’ throughout the Mediterranean. Even the opponents of ‘Canaanite’ culture made an exception when it came to their wine. Hosea, the eighth-century B.C. Israelite prophet, urged his listeners to return to Yahweh, so that ‘they will blossom as the vine, [and] their fragrance will be like the wine of Lebanon’ (14:7).”

The first alphabetic scripts may have been inspired by the Egyptian writing system, which included a set of hieroglyphs for single consonants. The letter “A” came from a pictogram of an ox head (the Semitic word for “ox” was aleph), while the drawing of a house (the Semitic word for “house” was baytu) represented the sound “B”, and so on. A cuneiform alphabet existed in the Syrian city of Ugarit ca. 1500-1300 BC, but this version later died out. A modified version of the early alphabet was used for the Semitic languages Hebrew and Aramaic from about the ninth century BC. After the Persians adopted the use of Aramaic in their vast Empire, the concept of the alphabet spread to the Indian subcontinent and from there on to Southeast Asia and other regions of Asia. The Phoenicians exported their Semitic alphabet to the Greeks and eventually the Romans. In the modern era, the Roman/Latin alphabet was then brought by Europeans to the rest of the world. Consequently, all peoples in the world today, except those who use Chinese characters, can ultimately trace their script back to a Semitic-speaking people inspired by a limited number of Egyptian hieroglyphs in the second millennium BC.

Indo-European languages such as Greek and Latin contain more vowels than Semitic ones, so the Greeks invented signs for vowels when they adopted the Phoenician consonantal alphabet. This new script was intimately associated with, and spread together with, wine culture. Some of the earliest known examples of Greek alphabetic writing are scratched onto wine jugs, and the earliest preserved examples of the Etruscan and Roman alphabets are inscriptions on drinking cups and wine containers. The Phoenicians competed with and taught the Greeks, and brought wine to some regions of Spain, Portugal and France, many of the Mediterranean islands as well as Carthage in North Africa. Patrick E. McGovern in Ancient Wine:

“The Phoenicians competed with another wine-loving people, the Greeks, as both groups plied their ships throughout the Mediterranean and traded their goods. Together, they carved up the world marketplace and planted vineyards as they went. Oenotria (‘the land of trained vines’), now Calabria in the toe of southern Italy, illustrates how seriously the Greeks took their task of promoting the ‘culture of the vine and wine’ elsewhere. By establishing the domesticated grapevine on foreign soil — whether in the Black Sea or at Messenia in eastern Sicily — they stimulated and were better able to supply local demand. Some regions, such as the coastline extending from ancient Etruria up to Massalia (Marseilles), might be contested. The Etruscans, the native Italic peoples, were more than willing to learn about viniculture from the Phoenicians or the Lydians, but they also wanted and got a role in supplying wine to trans-Alpine Burgundy.”

The principal means for storing and transporting wine, grains, olive oil and other commodities in Mediterranean Antiquity were ceramic amphorae, but the manufacture of glass products as drinking vessels gradually expanded. Hugh Johnson in The Story of Wine:

“Wine was first drunk from pottery, occasionally and ceremonially from gold, but by as early as the late Bronze Age, about 1500 BC, also from glass. The technique of firing a glassy or ‘vitreous’ substance onto solid objects was discovered in about 4000 BC. In about 1500 BC the idea of a hollow glass vessel appeared — possibly in Egypt. It was made by dipping a cloth bag of sand into a crucible of molten glass, then modelling it by rolling it on a marver, a flat stone bench, then when the glass had cooled, emptying out the sand. The technique was known all over the Near East until about 1200 BC, then apparently lost in the first ‘Dark Age’, to re-emerge in the eighth century BC, with Egypt, Phoenicia, and Syria as glassmaking centres, but also with workshops in Italy and Celtic Europe. The idea of glassblowing originated in Syria in the first century BC. It spread rapidly around the Roman Empire, with Syrian or Alexandrian craftsmen setting up workshops, especially in Italy, Gaul, and the Rhineland. Glassmaking survived the fall of the Empire, with the Rhineland as a continuing centre….Wine glasses remained objects of luxury until the eighteenth century.”

An urban, literate money economy with wine and theater was established by the Greeks and popularized by the Romans. It is no exaggeration to say that for the Romans, wine was civilization. Wine was considered a daily necessity and viticulture was spread to every part of the Empire. Most of present-day Germany never became part of the Roman Empire; after Roman troops suffered a devastating defeat by an alliance of Germanic tribes in the Battle of the Teutoburg Forest in AD 9, the Rhine became the lasting border of the Empire. Germany’s oldest city, Trier, was nevertheless founded as a Roman garrison next to the river Moselle (Mosel), and the Moselle valley still produces quality wine. The city of Cologne (Köln) in the Rhineland developed as the hub of the Roman glassmaking industry in the region. Here at least, glass and wine clearly went hand in hand.

It is instructive to compare this example to that of India. Egypt with its fertile Nile Valley was the granary of the Roman Empire, but it was also important in other ways through its connections to the Indian Ocean. Scholar David Peacock explains in The Oxford History of Ancient Egypt:

“Perhaps one of the strangest and most bizarre aspects of taste among the Roman nobility was the predilection for oriental luxuries: pearls, pepper, silks, frankincense, and myrrh, as well as various other spices and exotic medicines. Egypt articulated this trade, for these goods were brought by ship across the Indian Ocean and thence to the western shores of the Red Sea. Here they were offloaded and dragged across 150 km. of desert to the Nile, whence they were floated to Alexandria and then on to Rome. India benefited from this trade, for in return it received glass, textiles, wine, grain, fine pottery, and precious metals as well as human cargoes, such as singing boys and maidens for the pleasure of Indian potentates.”

Glass was known in India, but mainly used for decoration. Roman wine was at least occasionally imported, and it is possible that Indians imported the knowledge of glassblowing along with it, knowledge which gradually spread eastwards in Asia. Indians from the first to the fifth centuries AD made more use of glass than they had before, but over the following thousand years the native glass industry declined almost to the point of non-existence. India never became a center for winemaking, as western Germany did. That could be one of the key differences.

The founder of the Persian Empire in the sixth century BC, Cyrus the Great, was known for his love of wine. However, after the seventh century AD, a very different force came to dominate this region: Islam. The Islamic ban on the consumption of wine and alcoholic beverages was not always upheld. The ruling classes took many liberties, and some of the established vineyards, often run by non-Muslims, managed to survive well into the Islamic era. Nevertheless, in the long run Islam greatly inhibited the ancient traditions of beer- and winemaking in this region. The Turks of the Ottoman Empire were the strictest of all. In contrast, the Christian Church and its network of monasteries in Europe often encouraged the production of beer and wine. Norman Davies tells the tale in Europe: A History:

“Commercial wine-growing in medieval Europe was pioneered by the Benedictines at Château-Prieuré in the Bordeaux region, and at locations such as the Clos Vougeot on the Côte de Beaune in Burgundy. The Cluniacs on the Côte d’Or near Macon, and the Cistercians at Nuits St Georges, extended the tradition. According to Froissart, England’s possession of Bordeaux demanded a fleet of 300 vessels to carry the vintage home. Bénédictine (1534) from the Abbey of Fécamp, and Chartreuse (1604) from the Charterhouse in Dauphiné, pioneered the art of fortified wine. Europe’s wine zone cuts the Peninsula in two. Its northern reaches pass along a line stretching from the Loire, through Champagne to the Mosel and the Rhineland, and thence eastwards to the slopes of the Danube, and on to Moldavia and Crimea. There are very few wine-growing districts which did not once belong to the Roman Empire. Balkan wines in Serbia, Romania, Bulgaria, and Greece, inhibited by the anti-alcoholic Ottomans, are every bit as ancient as those of Spain, Italy, or France.”

Today we see buildings with glass windows in every city in the world, yet most people don’t know that the Romans were the first civilization to make glass windows. Their legacy of glassmaking survived the fall of the Empire (although in diminished quantities) and was carried in different directions. Under the influence of Christianity and the Roman Church, glazed windows were introduced and the manufacture of painted and stained glass became one of the most decorative uses of the material. It is again interesting to notice how glassmaking and winemaking progressed together under the influence of the Benedictines and others. Here is the book Glass: A World History by Alan Macfarlane and Gerry Martin:

“There are references to such windows from fifth century France at Tours, and a little later from north-east England, in Sunderland, followed by developments at Monkwearmouth, and in the far north at Jarrow dating to the period between 682 and c.870. By AD 1000 painted glass is mentioned quite frequently in church records, for example in those of the first Benedictine Monastery at Monte Cassino in 1066. It was the Benedictine order in particular that gave the impetus for window glass. It was they who saw the use of glass as a way of glorifying God through their involvement in its actual production in their monasteries, injecting huge amounts of skill and money into its development. The Benedictines were, in many ways, the transmitters of the great Roman legacy. The particular emphasis on window glass would lead into one of the most powerful forces behind the extraordinary explosion of glass manufacture from the twelfth century.”

Often cited as the first Gothic construction, the choir of the Abbey of Saint-Denis (1140-44) gave stained glass a prominent place. In the twelfth century, monks were still the elite class of society in Europe, although urbanization was proceeding rapidly. This story is explored in the book The History of Stained Glass by Virginia Chieffo Raguin:

“The windows they commissioned reflected not only their erudition but also their method of prayer: gathering several times a day in the choir area of the church to pray communally, primarily by singing psalms. The monks remained in the presence of the works of art they set in these spaces. With the construction of his abbey’s new choir, Abbot Suger (1081-1155) of Saint-Denis installed a series of windows exemplary of monastic spirituality and twelfth-century visual thinking. Suger, a man of unusual determination and management skills, was a trusted advisor of Louis VII, who reigned from 1137 to 1180. Responding to the call of Bernard of Clairvaux, Louis embarked on the unsuccessful Second Crusade, 1147-49, leaving Suger to act as regent of France in his absence. The abbot’s influence with the monarchy consolidated Saint-Denis’s place as the site of burial for French kings and the repository of the regalia — crown, sceptre, spurs, and other ceremonial objects — of coronation (coronations themselves, however, were held in the cathedral of Reims). Suger rebuilt the eastern and western ends of the church around 1141-44, using revolutionary vaulting and construction techniques that proclaimed the new Gothic style.”

Stained glass developed as a major art form in late medieval Europe and was often used in churches such as Chartres Cathedral and Reims Cathedral in France, Cologne Cathedral in Germany, York Minster in England, Florence Cathedral in Italy and many others. Glass painting, what the Germans call Glasmalerei, gave artists the opportunity to construct large-scale imagery using light, color and line. With stained glass, unlike other graphic media, the artist must be sensitive to translucency as well as form. Raguin again:

“The importance of stained glass and gems may be explained by a prevailing attitude toward light as a metaphor in premodern Europe. In the Old Testament light is associated with good, and darkness with God’s displeasure. The very first verses of Genesis announce to the reader that ‘the earth was void and empty, and darkness was upon the face of the deep’, then God created light and ‘saw the light, that it was good’ (Genesis 1:2-3). Light was associated with knowledge and power, ‘the brightness of eternal light, and the unspotted mirror of God’s majesty’ (Wisdom 7:26). Light also functioned as a symbol of God’s protection.”

Syria and Egypt, and to some extent Iran and Iraq, remained important glassmaking regions for some centuries into the Islamic period and created colorful, decorated glass which was exported to other countries. There was also some transfer of glassmaking technology from Syria to Venice in medieval times. Glass was used for scientific instruments in alchemy/chemistry. Mosque lanterns were the closest equivalent to the stained glass of Western European churches. Window glass was not widely made, but this could be for climatic reasons since in warmer countries it was important that air circulated in the hot season. It is clear that the Romans in Mediterranean countries knew how to make windows of glass and occasionally did so, but not to any great extent. Further developments in the manufacture of window glass happened primarily in the colder regions of northern Europe.

Not all the details regarding glassmaking in the Middle East during medieval times are known. For instance, what effect did the ban on the consumption of wine have? Wine-glass manufacture was important in Italy for the development of fine glass. The destruction brought about by the Mongols and later by Tamerlane, while certainly significant, is sometimes overstated by those who want to blame the subsequent decline of the region on external factors alone. The truth is that some of the best work in theoretical optics, for instance by al-Farisi in Iran, happened after the Mongol conquests, as did the so-called Maragha school of astronomy. In contrast, Egypt was never conquered by the Mongols, yet despite the fact that Alhazen had written his Book of Optics there, optics did not progress any further in Egypt than it did in Iran/Iraq. Whatever the cause, the once-proud glassmaking traditions of the Middle East declined and never fully recovered. Macfarlane and Martin:

“But what may have made the European development from about 1200 onwards so much more powerful in the end, is that the thinking tools of glass — particularly lenses and prisms, spectacles and mirrors — were emphasised in a way that, at present at least, does not seem to be the case in Islamic glass-making. Double-sided lenses and spectacles, flat planes of glass (as used in Renaissance painting) and very fine mirrors (as produced in Venice) were never developed in the medieval glass traditions of Islam. Is this the crucial difference? The story after 1400 is quite briefly told. A little glass was produced in Turkey under the Ottomans, but glass technology had to be reintroduced from Venice in the later eighteenth century. There is evidence of a little glass made in Turkey in the sixteenth century and in Iran in the seventeenth century, but it was of low quality. There are other instances of minor glass manufacture, but in general there is almost no authenticated glass manufactured in the Middle East between the end of the fourteenth and the nineteenth century.”

China was among the world’s most advanced civilizations in weaving, metalworking and engineering, yet contributed little to the development of glassmaking. Glass was seen as an inferior substitute for precious substances, less interesting than clay, bamboo or paper. Pottery was cheaper, and with porcelain cups you could drink hot drinks without burning yourself, which was not the case with silverware. Coincidental or not, Europeans widely adopted porcelain just at the time when they started drinking hot non-alcoholic beverages, chocolate, coffee and tea.

With good oiled paper and a warmer climate, certainly in the south, the pressure to make glass windows was largely absent in China. The houses of the peasantry were not suitable for glass windows and were lit by empty gaps or paper or shell windows. Grand religious or secular buildings built out of stone to last for centuries hardly existed in China. The equivalents of the European cathedrals or noble houses were absent. Consequently, glassmaking was more limited in East Asia than it was in the West. Macfarlane and Martin again:

“Much of the important development of European glass (in Venice and elsewhere) was in the making of drinking glasses, a continuation of its use in Roman times. Yet in Japan, drinking with glass seems, until the middle of the nineteenth century, to be more or less totally absent. Why? Again there are several obvious reasons. One concerns the nature of the drink. The Venetian glass was developed for the highest-status and ubiquitous cold drink — wine. In northern countries, where beer was the main drink, it was not drunk from glass, but pewter and pottery. Wine and glass seem to go together. One drinks with the eyes, as well as with the lips, and the glass enhances the effect. Certainly, if one is drinking very large quantities of hot drinks, hot tea, hot water, hot sake, then glass is a bad container. It will crack and the situation is made worse by the fact that thick glass (as was early glass) cracks more easily than thin. A second, and obviously related fact, is the development of ceramics. With such fine ceramics and wonderful pottery, who needs glass for drinking vessels? Indeed, glass is hardly needed for any other utensils.”

Derk Bodde (1909-2003), one of the most prominent Western Sinologists and historians of China during the twentieth century, elaborates:

“True porcelain is distinguished from ordinary pottery or earthenware by its hardness, whiteness, smoothness, translucence when made in thin pieces, nonporousness, and bell-like sound when tapped. The plates you eat from, even heavy thick ones, have these qualities and are therefore porcelain. A flower pot, on the other hand, or the brown cookie jar kept in the pantry are not porcelain but earthenware….The first description that seems to point definitely to porcelain is that of the famous Arabic traveler, Suleyman, in his account dated 851 of travels in India and China. There he speaks of certain vases made in China out of a very fine clay, which have the transparency of glass bottles. In the centuries following Suleyman’s time the southern sea route to China rose to a position of commanding importance. Over it porcelain became by all odds the major export shipped from China to the outside world. Tremendous quantities of porcelain went to Southeast Asia, including the Philippines, Indo-China, Siam, Malaya, the East Indies, Ceylon, and adjoining regions. Much porcelain went even farther, crossing the Indian Ocean and passing up the Persian Gulf to reach Persia, Syria, and Egypt.”

It is possible that the invention of porcelain was itself at least partly a geological accident due to the natural presence of two key materials in China. Macfarlane and Martin:

“There were large deposits of kaolin and petuntse near each other. The kaolin provides the body of the object, the petuntse acts as a flux which will cause overglaze colours to vitrify. It was hence possible to make an excellent hard, dense, beautiful, translucent ceramic. Potters were using the clays that were around them and found that they produced a wonderful substance which we call ‘china’. The original discovery of porcelain itself was probably the result of the accidental presence of ‘natural’ porcelain in China. The resulting ceramics were so desirable that Europeans spent immense fortunes on buying chinaware. The makers of such a fine substance had a high status. Meanwhile in western Europe these substances were not available, either in the same high quality or quantity….So it was a matter of luck as to where certain clays were to be found….Rome, and through her medieval Europe, opted for pottery and glass, China and Japan for ceramics and paper. Once the divergence had begun it was self-reinforcing. It became more and more difficult to change track. So if one asks why the Chinese did not develop clear glass, one should equally ask why the Romans did not make porcelain.”

Porcelain probably existed by the Tang Dynasty (AD 618-907), which is incidentally just when tea came into widespread use. The Chinese writer Lu Yu (733-804) wrote The Classic of Tea before AD 780. Camellia sinensis occurs naturally in Burma and in the Chinese Yunnan and Sichuan provinces. Tea had been used as a medicinal herb since ancient times but became a daily drink between the fourth and ninth centuries AD, when its use spread to Japan via Buddhist monks. The elaborate Japanese tea ceremony was codified by Sen Rikyu (1522-1591) in the sixteenth century. Europeans who came to China in the early modern period quickly developed a taste for the beverage, so much so that they spread its use to regions far beyond where it had previously been enjoyed, thus globalizing a Chinese invention and in return introducing American specialties such as tomatoes, sweet potatoes, maize and tobacco to Asia. The Dutch East India Company brought tea to Europe in the seventeenth century, and the Dutch later grew tea in their colonies in Indonesia. The British promoted tea culture in India and Ceylon (Sri Lanka) in the nineteenth century, when Thomas Lipton (1848-1931) created his famous tea brand.

The date when true porcelain was first made is disputed, but it clearly existed by the Song Dynasty (AD 960-1279) and most likely earlier. The relationship between tea and porcelain in China appears to be at least as strong as the relationship between wine and glass in Europe. Mary Lou Heiss and Robert J. Heiss explain in their excellent book The Story of Tea:

“Song emperor Huizong (r. 1101-1125) commanded the royal pottery works to create new tea-drinking cups. Known for his aesthetic tastes, he ushered in the creation of luxurious porcelains characterized by refined elegance, underglaze decorations, subtle etched designs, and sensuous glazes. Song porcelains were mostly monochromatic and the most popular type — Qingbai porcelain — had a bluish-white glaze. These cups not only increased tea-drinking pleasure, but they also encouraged awareness and admiration of the tea liquor itself. It was during this point in the development of tea culture that teawares began to be viewed as objects of desire and value and not just as functional tools. At one time Huizong favored deep chocolate-brown, almost black glazed teacups, streaked with fine, thin tan lines. Known as ‘rabbit hair glaze,’ this style became very popular as it was said that the black glaze pleasingly offset the color of the froth of the whisked tea. These dark glazed cups were favorites in Song tea competitions….This imperial desire for strong but thin vessels that could endure near-boiling liquid was the beginning of the Chinese porcelain trade that would, centuries later, influence the course of ceramics manufacturing throughout Japan and Europe.”

The manufacture of porcelain became a major Chinese export industry which employed sophisticated mass-production techniques; a single piece of porcelain could pass through dozens of hands during manufacture. Europeans eventually made fine porcelain of their own, starting with the Germans Ehrenfried Walther von Tschirnhaus (1651-1708) and Johann Friedrich Böttger (1682-1719) and the production of Meissen porcelain from 1710, but its reinvention was directly inspired by European efforts to duplicate Chinese examples.

Chinese teahouses became important places to socialize, conduct business, play board games and gossip. Guo Danying and Wang Jianrong in The Art of Tea in China:

“Teahouses burgeoned in the Song Dynasty. In the famous painting scroll, Festival of Pure Brightness on the River by the Northern Song (960-1127) painter Zhang Zeduan, teahouses are dotted along the river flowing through the capital. Teahouses were also often the venues for performances of Yuan opera and ping tan (storytelling in the local dialect combined with ballad singing) during the Yuan Dynasty. Thus emerged the tradition of Chinese teahouses hosting small-scale theatrical performances. In the Ming and Qing dynasties, teahouses took on more diversified forms and a more expansive range of functions. Places meant for giving people a chance to quench their thirst and to taste a good beverage originally, teahouses have deviated from their original simple orientation as urban society has evolved. They have become an important socio-cultural arena, welcoming people from all walks of life.”

The most common form of tea in Tang times was “tea cakes.” In 1391 the Hongwu Emperor, or Taizu (r. 1368-1398), who founded the Ming Dynasty (1368-1644) and forced the Mongols out of China, decided that making tea cakes was too time-consuming and consequently prohibited the practice. Loose-leaf tea then gradually replaced tea cakes, leading to a new, diversified array of processed teas such as scented tea, black tea, red tea and green tea, as well as utensils more suitable for brewing loose-leaf tea, which eventually led to the invention of teapots.

So what does this have to do with optics, you say? Well, indirectly, quite a bit. In the pre-Columbian Americas, sub-Saharan Africa and Oceania, the native cultures lacked the technological know-how to make glass. This was clearly not the case with the major Asian nations who all knew how to make glass and occasionally did so, but usually not to any great extent. The best explanation for this is that they simply didn’t need it, as they had other materials at their disposal which better suited their needs.

The major use for glass until a couple of hundred years ago was for containers, but the Chinese, Japanese, Koreans and Indians had excellent containers made of clay. For everyday uses, pottery and porcelain were at least as good as glass, but not for scientific purposes. Clear glass was a superior material for many experiments and indispensable for making lenses for microscopes and telescopes. Glass: A World History by Macfarlane and Martin:

“The use of glass for ‘verroterie’, that is glass beads, counters, toys and jewellery, is almost universal, at least in Eurasia, though even this was absent in the half of the historical world comprising the Americas, sub-Saharan Africa and Australasia….There was very little use of glass for vessels in India, China and Japan. Even in the Islamic territories and Russia, the use declined drastically from about the fourteenth century with the Mongol incursions. In relation to China, in particular, this use can be seen as mainly an alternative to pottery and porcelain. The great developers were the Italians, first the Romans, with their extensive use of glass, and then the Venetians with their ‘cristallo’. Much of the technical improvement of glass manufacture arose from this and it is particularly associated with wine drinking. Thus we have a phenomenon much more specific in scope, finding its epicentres in Italy and Bohemia. There are various links to science here, for example the fact that the fine glass needed for the earliest microscopes was made from fragments of Venetian wine ‘cristallo’. Likewise the development of tubes, retorts and measuring flasks for chemistry, as well as thermometers and barometers, developed out of this.”

There is still much we do not know about optics in the ancient world. Every now and then, claims about the alleged existence of ancient eyeglasses or even telescopes have surfaced, but we currently possess no solid evidence to back these claims up, and if such devices ever existed they were later lost again. The earliest known lenses were made of rock crystal, quartz, and other minerals and were employed as burning glasses to concentrate the sun’s rays and use them for heating. The first lenses used as a reading aid were probably the so-called reading stones, magnifying glasses known in the Mediterranean region after AD 1000. The Visby lenses, found in a Viking grave on the Swedish island of Gotland, date from about the eleventh century AD and were made of rock crystal. The first eyeglasses had quartz lenses set into bone, metal or even leather mountings, but lenses were later primarily made of glass. We have definitive proof of eyeglasses for the correction of eyesight only from Europe, from the late thirteenth and early fourteenth centuries onwards. As scholar Joel Mokyr says:

“[G]lass, although known in China, was not in wide use, in part perhaps the result of supply considerations (expensive fuel), and possibly in part due to lack of demand (tea was drunk in porcelain cups, and the Chinese examined themselves in polished bronze mirrors). Some past societies might well have made lenses given enough time and better luck: Islamic civilization for centuries had a magnificent glass industry, yet never came up with either spectacles or a telescope, despite the ubiquity of presbyopia and a strong interest in astronomy. In the later Middle Ages, glass making in the Islamic world declined, in part because of the devastation inflicted by the Mongols. But elsewhere knowledge must have played a central role. Tokugawa Japan had a flourishing industry making glass trinkets and ornaments, but no optical instruments emerged there either until the Meiji restoration. Not having access to the Hellenistic geometry that served not only Ptolemy and Alhazen, but also sixteenth century Italians such as Francesco Maurolico (1494-1575) who studied the characteristics of lenses, made the development of optics in the Orient difficult. The probability of a microscope being invented by someone who does not have access to geometry is very low.”

It is possible that the first eyeglasses were made through trial and error by practical artisans with limited mathematical knowledge. Kepler in 1604 published a book about optics and explained that some people have bad eyesight because imperfections in the eye cause the rays to be focused at a point either in front of or behind the retina. He went on to describe how eyeglasses work to correct these defects, and a few years later developed his ideas to explain how the newly invented telescope worked. In other words, people had used eyeglasses for several centuries without having a full theoretical understanding of how they functioned. Nevertheless, for the progress of optics, access to Greek geometry was certainly an advantage.
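To restate Kepler’s insight in modern textbook terms (the notation, sign convention and sample numbers below are mine, not his): a spectacle lens simply adds or subtracts refractive power so that the image falls on the retina rather than in front of or behind it. A minimal sketch of the arithmetic, in Python, assuming the usual thin-lens approximation:

# Thin-lens sketch of how reading glasses help a presbyopic (farsighted) eye.
# All numbers are illustrative, not measurements of any real eye.
#
# Suppose the unaided eye can focus no closer than 1.0 m, but the reader wants
# to hold the book at 0.25 m. The spectacle lens must take an object at 0.25 m
# and form a virtual image at 1.0 m, which the eye can then focus:
#   1/f = 1/d_object + 1/d_image, with d_image = -1.0 m (a virtual image),
# and the lens power in diopters is P = 1/f with f in meters.

near_point = 1.0          # meters, closest distance the unaided eye can focus
reading_distance = 0.25   # meters, desired reading distance

required_power = 1.0 / reading_distance - 1.0 / near_point
print(f"Required reading-glass power: {required_power:+.1f} D")  # +3.0 D, a converging lens

A nearsighted eye has the opposite problem, an image formed in front of the retina, and is helped by a diverging lens of negative power.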

The Kangxi Emperor (1654-1722) of the Manchu Qing Dynasty, the longest-reigning Emperor in China’s history and often considered one of its best, was an open-minded man. Jesuit missionaries were involved in the glass workshop that he established in 1696, for instance Kilian Stumpf (1655-1720) who worked to produce decorative glass under imperial auspices in the early 1700s. Glass production reached its high point in the 1750s when Giuseppe Castiglione (1688-1776) became involved in decorating European-style palaces and gardens for the Lofty Pavilion, but after this the interest in glass in China declined again.

Glass and Greek optical theory were available to the Romans, yet Italians during the Roman era never made eyeglasses. Italians during the medieval era did. Scholars Alan Macfarlane and Gerry Martin place great, perhaps too great, emphasis on clear glass technology and argue that this made essential contributions to the Scientific and Industrial Revolutions:

“Glass, we know, has two unique properties. Not merely can it be made in a transparent form so that the experimenter can watch what is happening, but it is also, in the case of most elements and chemical compounds, resistant to chemical change: it has the great advantage of remaining neutral to the experiment itself. Its virtues do not end here. It is easy to clean, seal, transform into the desired shape for the experiment, strong enough to make thin apparatus and to withstand the pressure of the atmosphere when a vacuum is created within it. It is resistant to heat and can be used as an insulator. It has a combination of features we do not find in any other material. Where, as Lewis Mumford asked, would the sciences be without the distilling flask, the test-tube, the barometer, the thermometer, the lenses and the slide of the microscope, the electric light, the X-ray tube, the audion, the cathode-ray tube?”

However, they are careful to point out that glass was an enabling device, perhaps a necessary cause for some later developments but by no means a sufficient one. Even if clear glass had been widely available in Asia, it is not certain that it would have led to the invention of the microscope, the barometer or the mercury-in-glass thermometer. In the East, there was not the same interest in deriving knowledge from nature. Medieval and early modern Europeans had both the glass and the particular curiosity. The most difficult case to explain is why eyeglasses, and by extension telescopes and microscopes, were not invented by Middle Eastern Muslims, who had access to much of the same body of knowledge as did Europeans.

The first true eyeglasses are believed to have been made in the late thirteenth century AD, most likely in the 1280s in northern Italy. The great American scientist and Enlightenment thinker Benjamin Franklin (1706-1790) invented bifocals in the eighteenth century. Vincent Ilardi explains in his book Renaissance Vision from Spectacles to Telescopes:

“As in Italy, documentary evidence and artistic representations of eyeglasses in the fourteenth century are relatively few in other European countries. They occur much more frequently from the early fifteenth century onwards with the massive diffusion of spectacles. There is little doubt that the use of eyeglasses and the knowledge to construct them spread with reasonable rapidity across the Alps among clergymen, monks, merchants, and artisans who probably traveled with more frequency than has been realized. It should also be recalled that the papacy resided in Avignon for sixty-eight years (1309-77) and attracted suppliers and professional people along with clerics and many other visitors from all nations. France, in fact, produced the second undisputed and clear mention of spectacles in a medical treatise — Chirurgia magna — completed in 1363 by Guy de Chauliac (ca. 1300-1368), surgeon and professor of medicine at the University of Montpellier, little more than a day’s journey (96 km) from the papal court. Guy received his medical degree at Montpellier but also studied medicine at Bologna, and from about 1344 until his death he resided in Avignon at the service of three popes.”

The birth of eyeglasses coincides with the birth of the Italian Renaissance. This was the age of Francesco Petrarca or Petrarch (1304-1374), the prominent poet and Renaissance humanist who developed the sonnet and spread its use to other European languages. In the spirit of his age, he was inspired by the personalities and achievements of Greco-Roman Antiquity, writers such as Cicero and above all Publius Vergilius Maro (70 BC-19 BC), known as Virgil or Vergil, one of the greatest poets of the Roman era. The Florentine poet Durante degli Alighieri or Dante Alighieri (1265-1321) wrote his famous Divina Commedia, the Divine Comedy. We have no positive proof that Dante wore eyeglasses, but it is quite possible that he was familiar with the newly invented device. It is likely that Petrarch had tried it.

The printing press introduced by the German goldsmith Johannes Gutenberg (born before 1400, died 1467 or 1468 at Mainz) led to a rapid and enormous increase in the number of books in circulation in Europe and further encouraged the use of reading aids. It is a well-known problem that eyesight often fails as you get older, and failing eyesight cuts short the professional life of many workers. It is not coincidental that the usage of eyeglasses/spectacles spread during a period of economic growth and bureaucratic expansion. Alan Macfarlane and Gerry Martin tell the story in Glass: A World History:

“The eyeglasses made of two bi-convex lenses suspended on the nose to help those with old age long-sight (presbyopia) were probably invented at around AD 1285 in northern Italy and their use spread rapidly in the next century, so that spectacles were a widespread feature of European life half a century before movable metal printing was invented by Gutenberg in the mid fifteenth century. The effects of this development in western Europe were immense. The invention of spectacles increased the intellectual life of professional workers by fifteen years or more… Much of the later work of great writers such as Petrarch would not have been completed without spectacles. The active life of skilled craftsmen, often engaged in very detailed close work, was also almost doubled. The effect was both multiplied, and in turn made more rapid, by another technological revolution to which it was connected, namely movable type printing, from the middle of the fifteenth century. Obviously, the need to read standard-sized print from metal types in older age was another pressure for the rapid development and spread of spectacles, and the presence of spectacles encouraged printers to believe they had a larger public.”

By the sixteenth century spectacles had become very widespread in many regions of Europe and were sometimes exported to the Middle East and even to East Asia. The oldest certain references to double-lens spectacles using glass in China are from Ming Dynasty accounts of the sixteenth century and refer to Western imports. Macfarlane and Martin again:

“Earlier references to spectacles were to dark substances (often ‘tea’ crystal) used to protect the eyes against glare and dirt, for healing (crystals had magical properties) or to disguise the reactions of judges from litigants who appeared before them. It was from the middle of the seventeenth century that spectacles made of glass became fairly widespread. The tradition that glasses were as important for status and eye protection as to counter the effects of ageing continued up to the end of the eighteenth century. This is shown by Gillian’s account when he accompanied the Macartney Embassy of 1793-4: ‘The Chinese make great use of spectacles…The eye glasses are all made of rock crystal.’ He continues that ‘I examined a great number of polished eye glasses after they were ready for setting, but I could not observe any diversity of form among them; they all appeared to me quite flat with parallel sides. The workmen did not seem to understand any optical principles for forming them in different manners so as to accommodate them to the various kinds of imperfect vision.’“

Eskimo (Inuit) peoples used “sunglasses” made with a very thin slit in a piece of wood or leather, but these contained no glass lenses that could later be used for scientific instruments, which is why there were Eskimo sunglasses but no Eskimo microscopes. Sunglasses of the modern type date from the twentieth century, especially from the 1930s onwards, when their popularity was spread by Hollywood movie stars and other celebrities.

The glass industry in the Netherlands was promoted in the sixteenth century with the aid of Italian glass/mirror makers and spread rapidly from Antwerp to Amsterdam and elsewhere. The demand for high quality glass lenses grew steadily, and in the seventeenth century the Dutch philosopher Baruch Spinoza (1632-1677) could make a decent living as a skilled lens grinder while working out his ideas. His trade may have caused the lung illness which ended his life. After making simple lenses (magnifying glasses), eyeglass makers eventually discovered how to use a second lens to magnify the image produced by the primary lens.

John Gribbin’s book The Scientists is a good popular introduction to the modern Western history of science, which is why I quote it frequently, but it does have a few weaknesses. Gribbin does not mention many contributors outside of the European tradition, and he starts from the late sixteenth century, largely ignoring the medieval era. Modern, organized science was founded in Europe between the seventeenth and nineteenth centuries AD; that is indisputable, but it does not mean that we should totally ignore contributions from others.

Leonard Digges pioneered the use of the theodolite in the 1550s in connection with his work as a surveyor, and Gribbin claims that his interest in seeing accurately over long distances led him to invent the reflecting telescope, and possibly the refracting telescope as well. It is true that a number of optical advances were made during this period, but Digges was not the only one involved in them.

The German artist-mathematician Albrecht Dürer used drawing aids called “sighting tubes” as early as the beginning of the sixteenth century. By the end of that century, experiments with combinations of lenses, with or without a tube, and with combinations of concave mirrors and lenses, were fairly common among instrument and spectacle makers in Europe. This partly explains the remarkably rapid diffusion of the telescope in the early seventeenth century. Vincent Ilardi elaborates in Renaissance Vision from Spectacles to Telescopes:

“About the same time of the alleged Danti’s construction of the two-lens telescopic device in Florence, another combination was tried in England (ca. 1563) by Leonard Digges (ca. 1520-ca. 1559), mathematical practitioner and designer of instruments. He constructed a tubeless magnifier composed of a combination of a concave mirror as an ocular and a convex lens as the objective appropriately positioned. It is significant to note that in the description of the device published in the Pantometria (1571) by his son, Thomas (ca. 1546-95), the principal purpose of the instrument was to construct topographical maps of distant city views. A similar construction was described by William Bourne around 1580. These descriptions have laid the basis for the still debated question of whether Elizabethan England had the telescope before Holland and Italy. These telescopes as described, however, were not very practical. If one looked in the mirror with his back to the lens he would see an inverted image; if he placed the mirror at an angle on his chest and bent his head downwards, he would see an upright image. Moreover, the instruments required a lens and mirror with large diameters, both of which…were not readily available in spectacle/mirror shops.”

The introduction of the telescope has been ascribed to the lensmaker Hans Lippershey or Lipperhey (1570-1619) in the Netherlands, or alternatively to the Dutch opticians Jacob Metius (ca. 1571-1628) or Zacharias Janssen (ca. 1580-1638). The spectacle-maker Zacharias Janssen, sometimes written Sacharias Jansen, has been mentioned as a possible inventor of the compound microscope around 1595 as well. Whether this is true is disputed among historians, but microscopes and telescopes were indeed made by the early 1600s. The technologies were obviously closely related and in many ways a direct outgrowth of eyeglass making.

While microscopes would eventually facilitate the greatest medical revolution in human history, the germ theory of disease pioneered by the brilliant French scholar Louis Pasteur (1822-1895), the process was slow and destined to take several centuries. I will deal with this story separately in my history of medicine and will concentrate on astronomy here. The telescope triggered great changes faster than the microscope did. The earliest working telescope we currently know of with absolute certainty appeared in 1608. Ilardi again:

“[D]evices comprised of a combination of lenses within or without tubes had been constructed or at least conceived from antiquity onwards to extend natural vision, but the first known practical applications of these devices seem to date from the late sixteenth century. In essence, the yet to be named instrument was in the minds and hands of many before they realized what they had. But it was only Lipperhey in Holland, the lucky optician, who first brought this first three-power spyglass to the attention of the European world. Its predecessor, the humble and by then all too common pair of spectacles, also was discovered by chance perhaps centuries earlier than the thirteenth century, but it was another lucky optician and a kindly Dominican friar, both residing in Pisa, who made it part of the historical record. Both instruments had many fathers, as we have seen, and were easy to duplicate in kind but not necessarily in quality. It is in this context that Galileo played the initial leading role in transforming a three-power spyglass of limited use into a twenty-plus power scientific instrument capable of searching the outer reaches of the universe, an instrument that deserved a new name, ‘telescope.’“

Scholar John North supports this view in his book Cosmos, revised 2008 edition:

“By the end of the thirteenth century, converging (convex) lenses were in use for reading spectacles (the Latin word spectaculum was used for a single lens at the beginning of the same century). There are numerous imprecise references in medieval literature to the possibility of seeing distant objects clearly, as though they were near at hand. By the seventeenth century there was a well-established trade in spectacle lenses, and in some ways it is surprising that the discovery of a method of combining them into a telescope — and later into a compound microscope — was so long in coming…Claims for prior invention have been made on behalf of various sixteenth-century scholars, such as John Dee, Leonard and Thomas Digges, and Giambattista della Porta, but they are without foundation and rest on an excessively generous reading of ambiguous texts. Some confusion has been created by the existence of medieval illustrations of philosophers looking at the heavens through tubes. Aristotle himself referred to the power of the tube to improve vision — it can improve contrast by cutting down extraneous light — but the tubes in question were always without lenses.”

The English mathematician Thomas Harriot (1560-1621) was an early pioneer in telescopic astronomy. His telescopic drawing of the Moon from August 1609 is the first on record and preceded Galileo’s study of the Moon by several months, yet Harriot published almost nothing. There are claims of a few similar attempts that may or may not have been made elsewhere in Europe, too, but the studies made by the Italian (Tuscan) scientist Galileo Galilei (1564-1642) were particularly energetic and systematic, and he had by far the greatest impact on the future course of science and astronomy. His influential book Sidereus Nuncius (Sidereal Messenger or Starry Messenger in English), published in March 1610, is generally considered to mark the birth of telescopic astronomy.

Galileo had heard reports about a new Dutch spyglass and soon made one of his own. He secured the best glass available and better methods for lens grinding, and by late 1609 he had made a telescope with a magnifying power of twenty times. Using this improved device, Galileo discovered the four largest moons of Jupiter early in 1610, which have become known as the Galilean satellites. He found that the surface of the Moon is not a perfectly smooth sphere (as the Aristotelians believed) but is scarred by craters and has mountain ranges. These discoveries and others were presented in the Starry Messenger in 1610. Galileo made several telescopes of comparable power, one of which was sent to Kepler.
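The magnifying power of such an instrument is, in modern terms, simply the ratio of the focal lengths of its two lenses. A brief illustration in Python; the focal lengths are made-up round numbers, not the dimensions of Galileo’s actual telescope:

# Angular magnification of a Galilean telescope: a long-focus converging
# objective combined with a short-focus diverging eyepiece.
#   M = f_objective / |f_eyepiece|
# Illustrative numbers only.

f_objective = 0.98    # meters (converging lens)
f_eyepiece = -0.049   # meters (negative sign: diverging lens)

magnification = f_objective / abs(f_eyepiece)
print(f"Magnification: about {magnification:.0f}x")  # about 20x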

Kepler’s earliest work was based on the naked-eye observations of the astronomer Tycho Brahe (1546-1601) from Denmark, the last and the greatest of the pre-telescopic observers. Kepler was the founder of modern optics and the first to describe upright and inverted images and the concept of magnification. He made the first thorough theoretical explanation of how eyeglasses work. After he became familiar with Galileo’s telescope, he extended this explanation to the new invention as well. Scholar James R. Voelkel tells the story:

“In addition to his astronomical work, Kepler made important contributions to optics and mathematics. His Astronomia pars optica (1604) was the foundational work of seventeenth-century optics. It included the inverse-square diminution of light and the first recognizably modern description of the formation of the retinal image. His Dioptrice (1611), the first theoretical analysis of the telescope, included an improved telescope design. In mathematics, his Nova stereometria doliorum vinariorum (1615), concerning the volumes of wine casks, was a pioneering work of precalculus, and his Chilias logarithmorum (1624), an important early work on logarithms. He also published chronological works and balanced defenses of astrology. Kepler’s mathematical genius and his important conviction that astronomical theory must be derived from physics places him without doubt among the greatest early modern applied mathematicians, although the technical difficulty of his works and an unwarranted reputation for mysticism impeded his recognition by historians.”
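The “inverse-square diminution of light” that Voelkel mentions can be restated in present-day terms: the apparent brightness of a point source falls off with the square of the distance from it. A small illustrative sketch in Python (the wording and the numbers are mine, not Kepler’s):

from math import pi

def apparent_brightness(luminosity_watts, distance_m):
    # The source's power is spread evenly over a sphere of area 4*pi*r^2,
    # so the flux received at distance r is luminosity / (4*pi*r^2).
    return luminosity_watts / (4 * pi * distance_m ** 2)

luminosity = 100.0   # watts, a hypothetical source
for r in (1.0, 2.0, 3.0):
    print(f"at {r} m: {apparent_brightness(luminosity, r):.2f} W/m^2")
# Doubling the distance cuts the brightness to a quarter; tripling it, to a ninth.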

The heliocentric model suggested by the Polish astronomer Nicolaus Copernicus (1473-1543) in 1543 was still not fully accepted at this point. Galileo’s discovery of the moons of Jupiter did not by itself prove the Copernican model, but it did prove that not all bodies in the Solar System orbit the Earth, which served to undermine the geocentric Ptolemaic model.

The German astronomer Simon Marius, or Simon Mayr in German (1573-1624), claimed to have discovered the four largest moons of Jupiter before Galileo. It is possible that he did in fact see them independently of Galileo, but whether he did so before him is questionable. On the other hand, the names of these moons — Io, Europa, Ganymede and Callisto — came from Marius, who also published the first systematic description of the Andromeda Nebula in 1612.

Observations of sunspots, dark areas of irregular shape on the surface of the Sun, may have been made previously in China and perhaps elsewhere, as sunspots can be larger than the Earth itself and can consequently sometimes be seen with the naked eye. With the introduction of the telescope, a number of Europeans starting from 1610-1611 more or less independently pioneered the systematic study of sunspots. In addition to Galileo and Thomas Harriot they included the German Jesuit Christoph Scheiner (1573-1650) and the German theologian David Fabricius (1564-1617) and his son Johannes Fabricius (1587-1615), all of whom championed the use of camera obscura telescopy to get a better view of the solar disk.

This was not the first recorded use of the camera obscura for astronomical purposes; the instrument had been known at least since medieval times. Levi ben Gerson or Gersonides (1288-1344), a French Jewish mathematician, philosopher and Talmudic scholar, observed a solar eclipse in 1337 and made observations of the Moon, Sun and planets using a camera obscura.

Scheiner, a Jesuit mathematician, at first wished to preserve the Aristotelian doctrine of the unchanging perfection of the Sun and the heavens and argued that sunspots were satellites of the Sun. However, he later abandoned this idea, and his Rosa Ursina from 1630, the first major book on solar research, became the standard treatise on sunspots for over a century.

According to the website The Galileo Project at Rice University, “Scheiner’s definitive sunspot studies were followed up by others. In France Pierre Gassendi made numerous observations (not published until 1658); in Gdansk Johannes Hevelius (1647) and in Bologna Giovanni Battista Riccioli (1651) did the same. There is, therefore, a reasonably good sunspot record for the years 1610-1645. After this time, however, sunspot activity was drastically reduced. When, in 1671, a prominent sunspot was observed, it was treated as a rare event. Sunspot activity increased again after about 1710. The period of low activity is now referred to as the Maunder Minimum, after Edward Walter Maunder (1851-1928), one of the first modern astronomers to study the long-term cycles of sunspots. Modern studies of sunspots originated with the rise of astrophysics, around the turn of the [twentieth] century. The chief early investigator of these phenomena in the United States was George Ellery Hale (1868-1938), who built the first spectro-heliograph and built the Yerkes and Mount Wilson observatories, including the 200-inch telescope on Palomar Mountain.”

David Fabricius is credited with the modern discovery of variable stars, i.e. stars whose apparent brightness as seen from Earth changes over time, although it is conceivable that similar observations had been made in other cultures, too. In 1596 he saw a particular star brighten, and again in 1609. The Polish astronomer Hevelius named it Mira, “wonderful.”

The English astronomer John Goodricke (1764-1786) noticed that some variable stars were periodic. In 1782 he suggested that the variability of Algol was due to its being eclipsed by a darker companion body. We now know that it is an eclipsing double star. It is possible that its strange nature was known to other nations in the pre-telescopic era as the star was reputed to be an “unlucky” one for many centuries, but as far as we know, Goodricke was the first person to correctly explain its puzzling behavior. The Italian Giovanni Battista Riccioli (1598-1671) in 1650 became the first European astronomer to discover a double star, Mizar.

That the Milky Way consists of many individual stars had been suggested by a number of observers previously, in Europe and elsewhere, but only with the aid of the telescope could it be proven that this was in fact the case. Our name “Milky Way” goes back to Greek mythology. The infant Heracles, the mightiest of the Greek heroes (known as Hercules to the Romans), son of the god Zeus and a mortal woman, had been placed at the bosom of the goddess Hera while she was asleep so that he would drink her divine milk and become immortal. Hera woke up and removed him from her breast, in the process spilling some of her milk across the sky. The modern term “galaxy” consequently comes from the Greek root galaxias, “milky”. A galaxy is a system of stars, dust and gas held together by gravity. Here is how the website of the National Aeronautics and Space Administration (NASA) explains it:

“Our solar system is in a galaxy called the Milky Way. Scientists estimate that there are more than 100 billion galaxies scattered throughout the visible universe. Astronomers have photographed millions of them through telescopes. The most distant galaxies ever photographed are as far as 10 billion to 13 billion light-years away. A light-year is the distance that light travels in a vacuum in a year — about 5.88 trillion miles (9.46 trillion kilometers). Galaxies range in diameter from a few thousand to a half-million light-years. Small galaxies have fewer than a billion stars. Large galaxies have more than a trillion. The Milky Way has a diameter of about 100,000 light-years. The solar system lies about 25,000 light-years from the center of the galaxy. There are about 100 billion stars in the Milky Way. Only three galaxies outside the Milky Way are visible with the unaided eye. People in the Northern Hemisphere can see the Andromeda Galaxy, which is about 2 million light-years away. People in the Southern Hemisphere can see the Large Magellanic Cloud, which is about 160,000 light-years from Earth, and the Small Magellanic Cloud, which is about 180,000 light-years away.”
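The conversion figures in this passage are easy to check. A quick back-of-the-envelope verification in Python, assuming the standard value of the speed of light and a year of 365.25 days:

# One light-year = speed of light * one year, using standard constants.
SPEED_OF_LIGHT = 299_792_458           # meters per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # a Julian year
METERS_PER_MILE = 1_609.344

light_year_m = SPEED_OF_LIGHT * SECONDS_PER_YEAR
print(f"about {light_year_m / 1e3 / 1e12:.2f} trillion km")                 # about 9.46
print(f"about {light_year_m / METERS_PER_MILE / 1e12:.2f} trillion miles")  # about 5.88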

The Magellanic Clouds are named after the Portuguese navigator Ferdinand Magellan (1480-1521), whose crew in the service of Spain performed the first circumnavigation of the world in history between 1519 and 1522. A bit unfairly perhaps, since millions of people had seen these galaxies before Magellan did. He also named the Pacific Ocean (Mar Pacifico) due to its apparent stillness, and his expedition, which he himself did not survive, established the need for an international date line as the (few) returning travelers found that they were a day behind their European families.

However, even if numerous people had seen these galaxies, the fact that they and countless others are star systems similar to our own was not proven until the work of the American astronomer Edwin Hubble (1889-1953) in the 1920s. He grouped them according to their appearance: spiral galaxies, elliptical galaxies, irregular galaxies and so on.

The French philosopher and scientist Pierre Gassendi (1592-1655) is primarily remembered for his revival of Epicureanism and Epicurean atomism as a substitute for Aristotelianism (the rejection of medieval, scholastic Aristotelianism was an important theme during the Scientific Revolution), but he did valuable work in astronomy as well. Mercury, being small and the planet closest to the Sun, was difficult to study in the pre-telescopic era. When Gassendi observed its transit across the Sun from Paris in 1631, the planet turned out to be smaller than most observers had anticipated; he actually mistook it for a sunspot until he noticed that it moved across the surface of the Sun far too fast. Kepler had attempted to witness a transit of Mercury in 1607 by means of a camera obscura and had only seen what must have been a sunspot.

According to the book Measuring the Universe by Albert van Helden, “Accustomed as we are to thinking about transits primarily in connection with parallax measurements, we must be careful not to make the mistake of thinking that this measurement or even the correction of planetary elements was considered by Gassendi and his colleagues to be the most important aspect of Mercury’s transit of 1631. From Kepler’s admonition to Hortensius’s defense of Gassendi, the main issue was Mercury’s apparent size. For Kepler this measurement was of crucial importance for his scheme of sizes and distances. Although corrections were made in Mercury’s elements as a result of Gassendi’s observation, no parallax of Mercury resulted from it. The planet’s ‘entirely paradoxical smallness’ was by far the most important result of the observations of the transit of 1631.”

Further improvements to refracting telescopes were made in the seventeenth century. The Dutch mathematician and astronomer Christiaan Huygens (1629-1695), who had studied law and mathematics at the University of Leiden, was one of the pioneers. John Gribbin writes:

“In 1655, Christiaan Huygens began working with his brother Constantijn (named after their father) on the design and construction of a series of telescopes that became the best astronomical instruments of their time…The Huygens brothers found a way to reduce chromatic aberration considerably…The brothers were also very good at grinding lenses, producing large, accurately shaped lenses that alone would have made their telescopes better than anything else around at the time. With the first telescope built to the new design, Huygens discovered Titan, the largest of the moons of Saturn, in 1655; this was a discovery only marginally less sensational than Galileo’s discovery of the moons of Jupiter. By the end of the decade, using a second, larger telescope also constructed with his brother, Huygens had solved the mystery of the peculiar appearance of Saturn itself, which he found to be surrounded by a thin, flat ring of material, which is sometimes seen edge on from Earth (so that it seems to disappear) and is sometimes seen face on (so that with a small telescope like the one Galileo used, Saturn appears to grow a pair of ears).”

Many new astronomical discoveries were made in the course of the eighteenth century, too. The Russian scientist, writer and grammarian Mikhail Lomonosov (1711-1765) discovered the atmosphere of Venus in 1761. Lomonosov was a prominent natural philosopher and a pioneer of modern science in the Russian Empire. He spent several years studying in Western Europe, especially at the universities of Marburg and Freiburg, and married a German wife before he returned home. In 1748 he opened the first scientific chemical laboratory in Russia.

Maybe the Russians established a special relationship with this planet quite early. When the first spacecraft were sent to physically explore other planets in our Solar System during the second half of the twentieth century, the Americans often had the leading role, yet many of the probes sent to Venus came from the Soviet Union.

According to the book Venus in Transit by Eli Maor, “It was during the 1761 transit that Lomonosov, observing from his home in St. Petersburg, saw a faint, luminous ring around Venus’s black image just as it entered the sun’s face; the sight was repeated at the moment of exit. He immediately interpreted this as due to an atmosphere around Venus, and he predicted that it might even be thicker than Earth’s. Lomonosov reported his finding in a paper which, like most of his written work, was only published many years after his death in 1765. But it was not until 1910, one hundred and fifty years after the transit, that his paper appeared in German translation and became known in the West. Up until then the discovery of Venus’s atmosphere had been credited to William Herschel.”

The great astronomer Sir William Herschel (1738-1822) was born in Hanover, Germany, where his father Isaac was an oboist and brought up his sons to be musicians. William became an organist in England, and his sister Caroline joined him there. His interest in music led him to mathematics, and from there on to astronomy. He is credited with the discovery of Uranus in 1781, the first planet to be identified with the telescope, and became famous after that.

Technically speaking, Uranus can be seen by a person with good eyesight under optimal conditions, but only very faintly. It had never been recognized as a planet by any culture prior to the invention of the telescope. The German astronomer Johann Elert Bode (1747-1826), director of the Berlin Observatory, determined its orbit and gave it the name Uranus. Bode collected virtually all observations of this planet by various astronomers and found that Uranus had been observed before William Herschel, among others by the great English astronomer John Flamsteed (1646-1719), yet nobody had realized that it was a planet.

Caroline Herschel (1750-1848) became William’s valued assistant. She was granted a salary from the king, like her brother, and could thus be viewed as the first professional woman astronomer. She personally discovered eight comets and, together with the Scottish science writer Mary Somerville (1780-1872), was elected one of the first two honorary women members of the Royal Astronomical Society in 1835.

I will continue with the history of telescopic astronomy later but will first look into another invention, photography. The history of the camera is older than the history of photography. The basic principles behind the pinhole camera, a precursor to the camera obscura (Latin: dark chamber), were understood by Aristotle and the ancient Greeks in the fourth century BC as well as by the Chinese engineer and thinker Mo Ti or Mozi in the fifth century BC. The school of thought that he founded, Mohism, flourished during the Warring States era (479-221 BC) prior to the unification of Imperial China. During this period it provided a crucial stimulus for the Confucian thinkers Mencius and Xunzi, for Daoists and for Legalists, adherents of the militaristic-totalitarian ideology which enabled the Qin state to unify China under its First Emperor. However, Mozi’s optical ideas were not widely followed in China later.

One exception was the government official and polymath Shen Kuo or Shen Kua (AD 1031-1095). Leading members of the Chinese scholar-official elite in the Song Dynasty were often men of great intellectual breadth, and Shen Kuo was perhaps the most broadly accomplished of them all. Patricia Buckley Ebrey in The Cambridge Illustrated History of China:

“During his official career, Shen designed drainage and embankment systems that reclaimed vast tracts of land for agriculture; he served as a financial expert skilled at calculating the effects of currency policies; he headed the Bureau of Astronomy; he supervised military defence preparations; and he even travelled to the Liao state as an envoy to negotiate a treaty. Over the course of his life he wrote on geography, history, archaeology, ritual, music, mathematics, military strategy, painting, medicine, geology, poetry, printing, and agricultural technology. Although often labelled a scientist, he wrote commentaries to Confucian classics and had deep interests in divination and Buddhist meditation.”

In his major work Mengxi bitan or Dream Pool Essays of 1088, Shen Kuo experimented with the camera obscura as the Mohists had done. He had promising insights into many other subjects, too, ranging from meteorology and geology to fossils. However, these insights usually lacked clear-cut organization and were not developed into coherent scientific theories. Although the Chinese did on occasion perform various experiments with mirrors and other optical tools, progress in optics stagnated in China after initial advances. The clearest medieval description of the camera obscura was made by Alhazen (Ibn al-Haytham) in the Middle East. Toby E. Huff explains in The Rise of Early Modern Science, second edition:

“In optics, which in early science probably played something like the role of physics in modern science, the Chinese, in Needham’s words, ‘never equalled the highest level attained by the Islamic students of light such as Ibn al-Haytham.’ Among other reasons, this was a reflection of the fact that the Chinese were ‘greatly hampered by the lack of the Greek deductive geometry’ that the Arabs had inherited. Finally, though we think of physics as the fundamental natural science, Joseph Needham concluded that there was very little systematic physical thought among the Chinese. While one can find Chinese physical thought, ‘one can hardly speak of a developed science of physics,’ and it lacked powerful systematic thinkers, who could correspond to the so-called precursors of Galileo, represented in the West by such names as Philoponus, Buridan, Bradwardine, and Nicole d’Oresme.”

During the sixteenth century the camera obscura was combined with convex lenses and/or concave mirrors, which projected more detailed images. There was an intense interest in sixteenth-century Europe in long-distance visual instruments. The use of a bi-convex lens in conjunction with a camera obscura was first published by the gifted Italian mathematician Girolamo Cardano (1501-1576). Another Italian, Giovanni Battista Della Porta, published his Magia naturalis in 1558 and a much-expanded version of it in 1589. Della Porta helped popularize the camera obscura.

The Dutch painter Johannes or Jan Vermeer (1632-1675), famous for beautiful paintings such as Girl with a Pearl Earring, is believed to have used the camera obscura as a visual aid. The same goes for the Venetian artist Giovanni Antonio Canal (1697-1768), better known as Canaletto, in his landscapes of Venice. I could add that the extent to which a particular artist did or did not use the camera obscura when working on a specific painting is often debated among art historians. What is not disputed is that it was widely used among European artists.

The first cameras were room-sized objects. From the early seventeenth century, portable versions of the camera obscura the size of a tent were constructed. The German Johann Zahn (1631-1707), author of Oculus Artificialis Teledioptricus Sive Telescopium (1685), was among the first to create a version small enough to be practical for photography.

I can recommend the Internet website precinemahistory.net, created by the Canadian film historian Paul Burns. It tracks the developments leading up to the invention of moving pictures, movies, from ancient times to the year 1900, and covers subjects ranging from the history of the camera obscura to early photographic chemistry. I disagree with Mr. Burns on a few details here and there, but he has clearly put a lot of research into his website and it is well worth a visit. According to him, “Zahn’s camera obscuras were the closest thing to what 19th century cameras were.” He also claims that “During the 18th Century and the first half of the 19th, the camera obscura was embraced more by artists than by scientists. It was encouraged to be used for drawing, sketching and painting.”

It is worth reflecting here on the special nature of European art, which is highly relevant to the history of photography. E.H. Gombrich explains in his classic The Story of Art:

“Buddhism influenced Chinese art not only by providing the artists with new tasks. It introduced an entirely new approach to pictures, a reverence for the artist’s achievement such as did not exist either in ancient Greece or in Europe up to the time of the Renaissance. The Chinese were the first people who did not think of the making of pictures as a rather menial task, but who placed the painter on the same level as the inspired poet. The religions of the East taught that nothing was more important than the right kind of meditation…Devout artists began to paint water and mountains in a spirit of reverence, not in order to teach any particular lesson, nor merely as decorations, but to provide material for deep thought. Their pictures on silk scrolls were kept in precious containers and only unrolled in quiet moments, to be looked at and pondered over as one might open a book of poetry and read and reread a beautiful verse. That is the purpose behind the greatest of the Chinese landscape paintings of the twelfth and thirteenth centuries.”

Chinese artists wanted to capture the mood of a landscape and did not consider it important that it was accurate in all details. In fact, they would have considered it childish to compare pictures with the real world. The Chinese house traditionally represented Confucianism and the harmony of the social order while the Chinese garden represented Daoism and the harmony of man with nature. A garden can itself be viewed as a form of poem, as it still is in East Asia. Even paintings of bamboo could carry a political message since the plant is the symbol of the Chinese gentleman, who bends in adversity but does not break. It was used in this fashion by Wu Zhen (1280-1354 AD), a leading Chinese painter during the Yuan (Mongol ruled) Dynasty, famous for his paintings of landscapes and of nature. Gombrich:

“There is something wonderful in this restraint of Chinese art, in its deliberate limitation to a few simple motifs of nature. But it almost goes without saying that this approach to painting also had its dangers. As time went on, nearly every type of brushstroke with which a stem of bamboo or a rugged rock could be painted was laid down and labelled by tradition, and so great was the general admiration for the works of the past that artists dared less and less to rely on their own inspiration. The standards of painting remained very high throughout the subsequent centuries both in China and in Japan (which adopted the Chinese conceptions) but art became more and more like a graceful and elaborate game which has lost much of its interest as so many of its moves are known. It was only after a new contact with the achievements of Western art in the eighteenth century that Japanese artists dared to apply the Eastern methods to new subjects.”

The aim of painting in China, Korea and Japan was to create a surface which conveyed a symbolic meaning, not to achieve a photograph-like mirror of the world. The materials, absorbent paper laid flat and large ink-filled brushes, encouraged a swift execution based on memory and strict rules. It resembled calligraphy more than Western art. Japan lived in a state of self-imposed national isolation from the seventeenth to the mid-nineteenth century, but some information about Western ideas did trickle through, especially via Dutch traders. Alan Macfarlane and Gerry Martin in Glass: A World History:

“In the late eighteenth century Kokan was fascinated by western realism and even made a camera obscura to help in creating perspective drawings. He wrote that ‘If one follows only the Chinese orthodox methods of painting, one’s picture will not resemble Fuji.’ There was only one way out. ‘The way to depict Mount Fuji accurately,’ he declared, ‘is by means of Dutch painting.’ During the eighteenth century perspective pictures became quite widespread in Japan. There are notable examples in some of the works of two of the most widely known Japanese artists, Utamoro and Hokusai.”

Katsushika Hokusai (1760-1849) was a painter and printmaker whose father had produced mirrors for the shogun. He was far too talented to copy anybody slavishly, but he was more than willing to adopt foreign ideas, including European ones, and use them in a different way. Several Japanese artists during this period modified or abandoned the traditional Chinese-inspired artistic styles, but the artistic influence was two-sided. Japanese art in turn had a major impact on Impressionists and painters such as the Dutchman Vincent van Gogh (1853-1890) in the West, and Japanese wood-block prints with their curved lines inspired Art Nouveau or Jugendstil in the late nineteenth and early twentieth centuries.

Indirectly, it is possible to trace this influence in some of the works of the Scottish designer Charles Rennie Mackintosh (1868-1928), the Austrian painter Gustav Klimt (1862-1918) or the architects Victor Horta (1861-1947) in Brussels and Antoni Gaudí (1852-1926) in Barcelona, Spain. Indeed, entire cities such as Riga in Latvia came to be dominated by Art Nouveau architecture. Japanese art inspired many in the circle of Édouard Manet (1832-1883). He was influenced by the works of Diego Velázquez and the Italian Renaissance masters Raphael (1483-1520), Michelangelo (1475-1564) and Titian (b. before 1490, d. 1576), but he was open to other impulses and befriended Paul Cézanne (1839-1906) as well as Claude Monet (1840-1926), the founder of French Impressionist painting.

The basic principles of the camera obscura were known in the Middle East, in East Asia and possibly elsewhere. What is special about Europe is that the camera was employed more extensively and actively here than anywhere else, not just for scientific purposes (for instance to observe eclipses) but for artistic purposes as well. The latter use appears to be virtually unique to Europe. In the Islamic Middle East, pictorial art met with religious resistance. It was much more culturally accepted in East Asia, but the European emphasis on photorealism was not shared by most Chinese, Korean, Japanese or Vietnamese artists. The first “cameras” were room-sized, but from the seventeenth century portable versions were constructed. During the eighteenth and nineteenth centuries, the camera obscura was embraced by artists as an aid to sketching and painting. It was this smaller box version of the camera obscura that was eventually used for photography. It is possible to view the invention of photography as the culmination of a centuries-long European quest for “photorealistic” depictions of the world, dating back at least to the development of geometrical perspective in Renaissance art.

The irony is that once this goal had been achieved, no painter could ever again hope to match photography for sheer detail and realism. It is thus not a coincidence that abstract painting proliferated in late nineteenth- and early twentieth-century Europe. There were cultural and ideological reasons for this as well, no doubt, but technological changes played a part, too. Artists had to look for other goals. In addition, before the invention of photography artists could make a living by painting personal portraits, but this traditional market disappeared almost overnight once photography was introduced; indeed, in the early years one of the most popular uses of the new invention was portrait photography. If photography can to some extent be seen as an outgrowth of the Western artistic tradition, it also changed that tradition quite profoundly after its invention.

Obviously, the camera obscura alone was not enough to give birth to photography. The images had to be “fixed,” and this required advances in chemistry. In the 1720s the German professor Johann Heinrich Schulze (1687-1744) showed that certain silver salts, most notably silver chloride and silver nitrate, darken in the presence of light. The Italian scholar Angelo Sala (1576-1637) in 1614 had discovered that the Sun blackened powdered silver nitrate as well as paper that was wrapped around it. The process was clearly known before Schulze — some sources even claim it was known in the Middle Ages, but I have been unable to verify this — yet the exact cause was disputed. Some scholars believed the reaction was caused by heat, but Schulze proved that it was caused by light. The Swedish pioneering chemist Carl Wilhelm Scheele (1742-1786) demonstrated in 1777 that the violet rays of the prismatic spectrum were most effective in decomposing silver chloride.
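In modern chemical shorthand, which of course neither Schulze nor Scheele could have used, the darkening is the light-driven decomposition of the silver salt into finely divided metallic silver, which appears black:

\[ 2\,\mathrm{AgCl} \xrightarrow{\ \text{light}\ } 2\,\mathrm{Ag} + \mathrm{Cl_2} \]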

According to historian Paul Burns, “It is generally understood within the photographic historical community that Schulze gathered the first ‘image’ on a prepared page. The image was in fact text, written on a sheet prepared with silver nitrate and chalk. The sunlight blackened the semi-translucent paper, leaving white text on black paper. It is not known what Schulze wrote on the paper.”

Although Johann Heinrich Schulze was the first person we know of to use chemicals to produce an image in this way, he did not manage to permanently preserve this image, which is why he is not considered the inventor of photography. That feat was finally achieved by the Frenchman Joseph Nicéphore Niépce (1765-1833) a century later.

William Herschel discovered infrared radiation when thermometers, themselves a European invention, showed a higher temperature just beyond the red end of the visible spectrum of sunlight. The German chemist Johann Wilhelm Ritter (1776-1810), after hearing about Herschel’s discovery of 1800, identified in 1801 another “invisible” radiation, which we now know as ultraviolet (UV). He experimented with silver chloride, since blue light was known to cause a greater reaction in it than red light did, and he found that the area just beyond the violet end of the visible spectrum showed the most intense reaction of all.
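To put these discoveries in present-day terms (a modern gloss, not anything Herschel or Ritter would have written): visible light spans wavelengths of roughly 400 to 700 nanometres, with infrared lying at longer wavelengths just beyond the red end and ultraviolet at shorter wavelengths just beyond the violet end. Wavelength and frequency are tied together by

\[ c = \lambda \nu \]

where \(c \approx 3 \times 10^{8}\) metres per second is the speed of light, so the ultraviolet that darkened Ritter’s silver chloride carries more energy per quantum than the infrared that warmed Herschel’s thermometers.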

During the 1790s Thomas Wedgwood (1771-1805), who experimented with photography together with the leading English chemist Humphry Davy (1778-1829), sun-printed “profiles” of objects onto paper and leather moistened with silver nitrate, but he could not fix these images. According to Davy’s 1802 report, they were initially successful in producing a negative image (a white silhouette on a dark background), but unless the picture was kept in the dark, the image eventually vanished. There are those who claim that Wedgwood should be credited as the inventor of photography, but they currently constitute a minority.

The first universally accepted permanent images were recorded by the French inventor Joseph Nicéphore Niépce in the 1820s, following years of experiments. He was interested in lithography, which had been invented by Alois Senefelder (1771-1834) in 1796. I have encountered conflicting information as to when Niépce recorded his first permanent image. Some say that his heliograph “Boy Leading His Horse” from 1825 is the world’s oldest photograph. In 1826 he successfully produced a camera obscura view of his courtyard on a bitumen-coated pewter plate, which took eight hours to complete. Photography was still hampered by very long exposure times. Only with later technical advances came the ability to expand the repertoire of views: cityscapes, street scenes, aerial photography, etc.

Niépce teamed up with Louis Jacques Mandé Daguerre (1787-1851), a successful Parisian theater designer and painter of the popular spectacle known as the diorama, a form of illusionistic entertainment which was the closest thing to a modern movie theater in those days. Together they tried to create easier ways to do photography. Daguerre eventually succeeded in this, but only after the death of his partner. Whereas Daguerre became famous, Niépce’s name as the inventor of photography was almost forgotten for a long time. Scholar Eva Weber writes in the book Pioneers of Photography:

“After Niépce’s death in 1833, Daguerre found a way to sensitize a silver-coated copper plate with iodine fumes and to produce a direct positive image without the use of Niépce’s bitumen coating. A crucial success came in 1835 when he discovered the phenomenon of the latent image: the camera image does not appear during the exposure of the plate, but is revealed later only during the chemical development process. At the same time, he found a way to bring out this latent image by using mercury vapor, considerably shortening the required exposure time. The fixing process — making the image permanent — was the final hurdle Daguerre surmounted in 1837 by washing the exposed and developed plate with a solution of salt water. In March 1839 he changed the fixing solution to hyposulphite of soda, a method discovered in 1819 by English scientist Sir John Herschel (1792-1871).”
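A brief modern gloss, not part of Weber’s account: “hyposulphite of soda” is what is now called sodium thiosulfate, and fixing works because it dissolves the remaining light-sensitive silver halide as a soluble complex, leaving only the developed image on the plate. For the silver iodide of a daguerreotype plate the reaction can be written roughly as

\[ \mathrm{AgI} + 2\,\mathrm{Na_2S_2O_3} \rightarrow \mathrm{Na_3[Ag(S_2O_3)_2]} + \mathrm{NaI} \]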

Sir John Herschel was the son of Sir William Herschel. At the University of Cambridge, in the company of Charles Babbage (1791-1871), the inventor of the mechanical computer, he worked to replace the cumbersome symbolism of Newton with the version of calculus devised by the German mathematician Gottfried Wilhelm Leibniz (1646-1716). After 1816 he assisted the research of his aging father and gained the full benefit of his unrivaled experience with large telescopes. In the 1830s he relocated to South Africa for several years to make astronomical observations from the Southern Hemisphere and amassed a large amount of data. Although mostly remembered as an astronomer he also did work in other disciplines, above all in chemistry and photography. He has been credited with coining the very word “photography” and with applying the terms “positive” and “negative” to photographs.

In 1839 in France, a crowded meeting of scientists observed Daguerre’s demonstration of the daguerreotype process, the first form of photography to enjoy some commercial success. However, Daguerre was not the only person working with the possibilities of photography, which clearly was an invention whose time had come. Weber again:

“In 1834, William Henry Fox Talbot (1800-1877), an English country gentleman scholar and scientist, began trying to fix a camera obscura image on paper. By 1835 he was making exquisite ‘photogenic drawings,’ as he called them, or contact prints, by placing botanical specimens and pieces of lace on sheets of good quality writing paper sensitized with silver chloride and silver nitrate, exposing them to sunlight, and then fixing them with a rinse of hot salt water. (Like Daguerre, he also changed his fixative to hyposulphite of soda in 1839 on Herschel’s recommendation). He also made a small negative image of his home, Lacock Abbey, on sensitized paper in 1835. Temporarily losing interest in photography he turned his attention to other studies. When news of Daguerre’s discovery reached him, he went back to experimenting, independently discovering the latent image and its development in 1840, as well as the process of making multiple positive paper prints from a single paper negative. He worked hard to perfect his paper process and patented it in February 1841 as the calotype (from the Greek, meaning beautiful image), also known as the talbotype.”

Talbot became the inventor of the negative/positive photographic process, the precursor to most photographic processes used in the nineteenth and twentieth centuries. He had independently devised photogenic drawing paper by 1835. In 1839 Talbot noted the greater sensitivity of silver bromide — later the chief constituent of all modern photographic materials — made possible by the isolation of the chemical element bromine by the French chemist Antoine Jerome Balard (1802-1876) and the German chemist Carl Jacob Löwig (1803-1890) independently of each other in 1825-26. Talbot made another discovery in 1840, that an invisibly weak dormant picture in silver iodide could be brought out by gallic acid, thus increasing the speed of his camera photography greatly, from hours to minutes. From now on, a quest was mounted for shorter camera exposures and higher resolution.

The daguerreotype was much more popular than the calotype in the early years, but Talbot, in contrast to Daguerre, remained active and continued to experiment. His most significant discovery, the reproducible negative, came to be applied universally with the development of the wet-plate collodion process in 1851. There were other early pioneers, too. Eva Weber:

“In 1833 Antoine Hercules Florence, a French artist in Brazil, started to experiment with producing direct positive paper prints of drawings. Most importantly, Hippolyte Bayard (1801-1887), a French civil servant in the Ministry of Finance, began experimenting in 1837 and by 1839 had created a method for making direct positive prints on paper. Official support for the daguerreotype overshadowed Bayard’s achievement. Discouraged but persistent, he went on to work with the calotype and other photographic processes. As a photographer he produced a large body of high quality work, covering a wide range of subject matter from still lifes, portraits, cityscapes, and architectural views to a record of the barricades of the 1848 revolution. Other pioneers include Joseph Bancroft Reade, an English clergyman, and Hans Thøger Winther, a Norwegian publisher and attorney.”

Further technical improvements were made by the French artist Gustave Le Gray (1820-1884) and the English sculptor Frederick Scott Archer (1813-1857), among others. Weber again:

“Throughout the nineteenth century, each refinement of the photographic process led to a new flourishing of talented photographers, sometimes in a single region or nation, and at other times globally. It is generally agreed that during the daguerreotype era an exceptionally fine body of work came from the United States. In March 1839 Daguerre personally demonstrated his process to inventor and painter Samuel Morse (1791-1872) who enthusiastically returned to New York to open a studio with John Draper (1811-1882), a British-born professor and doctor. Draper took the first photograph of the moon in March 1840 (a feat to be repeated by Boston’s John Adams Whipple in 1852), as well as the earliest surviving portrait, of his sister Dorothy Catherine Draper. Morse taught the daguerreotype process to Edward Anthony, Albert Southworth and possibly Mathew Brady, all of whom became leading daguerreotypists.”

The American physician Henry Draper (1837-1882), son of John Draper, became a pioneer in astrophotography. In 1857 he visited Lord Rosse, or William Parsons (1800-1867), who had built in Ireland the largest telescope then in use. After that, Draper became a passionate amateur astronomer. He died at the age of forty-five, and his widow later established the Henry Draper Memorial to support photographic research in astronomy. The Memorial funded the Henry Draper Catalog, a massive photographic stellar spectrum survey, and the Henry Draper Medal, which continues to be awarded for outstanding contributions to astrophysics.

A daguerreotype by George Barnard (1819-1902) of the 1853 fire at the Ames Mill in New York is the earliest known work of photojournalism. Mathew Brady (1823-1896) became one of the most important photographers of the American Civil War (1861-1865). The views of the Crimean War (1853-1856) battlefields taken by the English photographer Roger Fenton (1819-1869) are widely regarded as the first systematic photographic war coverage. Much impressive work, from elegant landscapes and street scenes to portraiture, still came from France. In 1858 the French journalist Gaspard-Félix Tournachon (1820-1910), known as Nadar, made the first aerial photographs, of the village of Petit-Becetre, from a hot-air balloon 80 meters above the ground. The oldest aerial photograph still in existence is James Wallace Black’s (1825-1896) image of Boston taken from a hot-air balloon in 1860.

This was also an age of travel photography, facilitated by steamships, railways and cheaper transport, with French photographers taking pictures in Mexico, Central America and Indochina, British in the Middle East, India, China, Japan, etc. For Easterners in the USA, Western views from the frontier were popular and exotic. Edward S. Curtis (1868-1952) recorded the lives of the Native Americans. Photographs of the remarkable Yellowstone area influenced the authorities to preserve it as the country’s first national park in 1872.

The American George Eastman (1854-1932) pioneered the use of celluloid-based roll film, which greatly sped up the process of recording multiple images and opened up photography to amateurs on a wide scale, since cameras were no longer so large, heavy and complicated. He registered the trademark Kodak in 1888. Glass plates remained in use among astronomers well into the late twentieth century due to their superiority for research-quality imaging.

There were many early experiments with moving pictures or “movies” in Europe and in North America, with the French inventor Louis Le Prince (1842-1890) being one of the pioneers and the German inventor Max Skladanowsky (1863-1939) in Berlin another. Nevertheless, the brothers Auguste (1862-1954) and Louis Lumière (1864-1948) are usually credited with the birth of cinema, thanks to their public screening with an admission charge in Paris in December 1895.

The American creative genius and patient experimenter Thomas Alva Edison (1847-1931), one of the most prolific inventors in recorded history, played a key role in the development of cinema, too. His involvement with the emerging industry of telegraphy allowed him to travel around the United States and gain practical experience with the new technology. Eventually, he settled down and changed his profession from telegrapher to inventor. He was better in the laboratory than as a financial manager, and his industrial research laboratory became a model for evolving research facilities elsewhere, not just in North America. Well over one thousand patents were issued to Edison, either alone or jointly with others, more than to any other person in US history. Thomas Edison was still experimenting up until the day he died, at the age of 84. Through his years of working as a telegraph operator he had learned much about electricity, and he later developed new techniques for recording sound.

However, “records,” the analog sound storage medium we know as gramophone or vinyl records, were patented by the German-born American inventor Emile Berliner (1851-1929) in 1896; they remained the most common storage medium for music until Compact Discs (CDs) and the digital revolution of the 1980s and 1990s. James E. McClellan and Harold Dorn in Science and Technology in World History, second edition:

“In 1895, with their Cinématographe…Auguste and Louis Lumière first successfully brought together the requisite camera and projection technologies for mass viewing, and so launched the motion-picture era. With paying customers watching in theaters — sometimes stupefied at the illusion of trains surely about to hurtle off the screen and into the room — movies immediately became a highly successful popular entertainment and industry. Not to be outdone, the Edison Manufacturing Company quickly adopted the new technology and produced 371 films, including The Great Train Robbery (1903), until the company ceased production in 1918. Sound movies — the talkies — arrived in 1927 with Al Jolson starring in The Jazz Singer; by that time Hollywood was already the center of a vigorous film industry with its ‘stars’ and an associated publicity industry supplying newsstands everywhere with movie magazines. The use of color in movies is virtually as old as cinema itself, but with technical improvements made by the Kodak Company in the film, truly vibrant color movies made it to the screen in the 1930s in such famous examples as The Wizard of Oz (1939) and Gone with the Wind (1939). (Color did not become an industry standard, however, until the 1960s.)”

Photography in natural colors was first achieved by the Scottish scientist James Clerk Maxwell (1831-1879) as early as 1861, but the Autochrome process of the Lumière brothers from 1907 was the first to achieve moderate commercial success. The Russian photographer Sergey Prokudin-Gorsky (1863-1944) developed some early techniques for taking color photographs and documented the Russian Empire between 1909 and 1915. Color photography progressed with research in the synthetic organic chemistry of dyestuffs, and the Eastman Kodak Company produced Kodachrome in 1935, yet it did not become cheap and accessible enough to become the standard until the second half of the twentieth century. Black and white photography remains in use to this day for certain artistic purposes, for instance portraits.

While photography was of great use in arts and entertainment, it became an invaluable tool in numerous scientific disciplines, from medicine via geology and botany to archaeology and astronomy, since it can detect and record things that the human eye cannot see. The Austrian physicist and philosopher Ernst Mach (1838-1916) used it for his investigations in the field of supersonic velocity, and from the 1870s developed photographic techniques for the measurement of shock waves. The Englishman Eadweard J. Muybridge (1830-1904) and the Frenchman Étienne-Jules Marey (1830-1904) invented new ways of recording movement.

In the late twentieth and early twenty-first centuries, traditional photography was gradually replaced by digital techniques. Asian and especially Japanese companies such as Sony played a major role, alongside Western ones, in the digitalization of music, movies and photography. For the creation of photography in the early nineteenth century, however, advances in chemistry had been crucial.

Chemistry developed out of medieval alchemy. In India, alchemy was applied to serious metallurgy, medicine, leather tanning, cosmetics, dyes, etc. The work of Chinese alchemists facilitated inventions such as gunpowder, which was to revolutionize warfare throughout the world. Although their views differed considerably in the details, scholars in Japan, China, Korea, India, the Middle East and Europe as late as the year 1750 would have agreed that “water” is an element, not a compound of hydrogen and oxygen as we know it to be today. Likewise, the fact that “air” consists of a mixture of several substances was only fully grasped in the second half of the eighteenth century. The simplest way to date the birth of chemistry, as distinct from alchemy, is to ask when scholars started talking about “oxygen” rather than “water” as an element. This transition happened in Europe in the late eighteenth century, and only there.

The first seeds of this can be found in Europe during the Scientific Revolution in the seventeenth century, with a new emphasis on experimentation and a more critical assessment of the knowledge of the ancients. The French philosopher Pierre Gassendi attempted to reconcile atomism with Christianity and thus helped revive the Greek concept of atoms, which, though not totally forgotten, had not been much discussed during the European Middle Ages. Some of the earliest known atomic theories were developed in ancient India in the sixth century BC by Kanada, a Hindu philosopher, and later Jainic philosophy linked the behavior of matter to the nature of the atoms.

The concept of atomism was championed among the Greeks by Democritus (ca. 460 BC-ca. 370 BC), who believed that all matter is made up of various imperishable, indivisible elements he called atoma or “indivisible units.” This idea was supported by Epicurus (341 BC-270 BC). The belief in atomism was not shared by Aristotle and remained a minority view among the ancient Greeks. The philosopher Empedocles (ca. 490-ca. 430 BC) believed that all substances are composed of four elements: air, earth, fire and water, a view which became known as the Greek Classical Elements. According to legend, Empedocles was a self-styled god who flung himself into the volcanic crater of Mount Etna to convince his followers that he was divine. The Chinese had their Five Phases or Elements: fire, earth, water, metal and wood. Similar, though not identical ideas were shared by the major Eurasian civilizations.

During the Middle Ages, alchemists could question the Classical Elements, but most chemical elements, substances that cannot be decomposed into simpler substances by ordinary chemical processes, were not identified until the 1800s. As of 2009, there are about 120 known chemical elements. The exact number is disputed because several of them are highly unstable and very short-lived. About 20 percent of the known elements do not normally exist in nature.

Ibn Warraq in his books is critical of Islam but gives due credit to those scholars within the Islamic world who deserve it, a sentiment which I happen to share. One of them is the Persian physician al-Razi (AD 865-925), known in the West as Rhazes, the first to describe the differences between smallpox and measles. Here is a passage from the book Why I Am Not a Muslim:

“Al-Razi was equally empirical in his approach to chemistry. He shunned all the occultist mumbo jumbo attached to this subject and instead confined himself to ‘the classification of the substances and processes as well as to the exact description of his experiments.’ He was perhaps the first true chemist as opposed to an alchemist.”

He considered the Koran to be an assorted mixture of “absurd and inconsistent fables” and was certainly a freethinker, but unlike Ibn Warraq, I still view Rhazes as a committed alchemist who believed in transmutation and the possibility of turning base metal into gold. Another well-known Persian scholar, Ibn Sina or Avicenna (ca. 980-1037) was more skeptical of the possibility of transmutation.

After the gifted alchemist Geber in the eighth century AD, scholars in the Middle East, among them Rhazes, made some advances in alchemy, for instance regarding the distillation of alcohol. Not all of the works that were later attributed to Geber (Jabir ibn Hayyan) were written by him, just as all the medical works attributed to Hippocrates cannot have been written by him. According to the Encyclopædia Britannica, “Only a tiny fraction of the Jabirian works made their way into the medieval West,” but those that did had some impact.

Belief in the possibility of transmutation was not necessarily stupid according to the definition of chemical elements of the time. David C. Lindberg in The Beginnings of Western Science, second edition:

“Aristotle had declared the fundamental unity of all corporal substance, portraying the four elements as products of prime matter endowed with pairs of the four elemental qualities: hot, cold, wet, dry. Alter the qualities, and you transmute one element into another….It is widely agreed by historians that alchemy had Greek origins, perhaps in Hellenistic Egypt. Greek texts were subsequently translated into Arabic and gave rise to a flourishing and varied Islamic alchemical tradition. Most of the Arabic alchemical writings are by unknown authors, many of them attributed pseudonymously to Jabir ibn Hayyan (fl. 9th-10th c., known in the West as Geber). Important, along with this Geberian (or Jabirian) corpus, was the Book of the Secret of Secrets by Muhammad ibn Zakariyya al-Razi (d. ca. 925). Beginning about the middle of the twelfth century, this mixed body of alchemical writings was translated into Latin, initiating (by the middle of the thirteenth century) a vigorous Latin alchemical tradition. Belief in the ability of alchemists to produce precious metals out of base metals was widespread but not universal; from Avicenna onward, a strong critical tradition had developed, and much ink was devoted to polemics about the possibility of transmutation.”

The most influential of all medieval alchemical writings in the West was the Summa perfectionis from the early 1300s by the Franciscan monk Paul of Taranto. He was strongly influenced by Geber and is often called Pseudo-Geber. Crucially, he believed that the four Classical Elements exist in the form of tiny corpuscles. This tradition was continued by Daniel Sennert (1572-1637), a German professor of medicine and an outspoken proponent of atomism, whose example in turn influenced the Englishman Robert Boyle (1627-1691).

The German scholar known under the Latinized name Georgius Agricola, or Georg Bauer in German (1494-1555), is considered “the father of mineralogy.” Agricola was a university-educated physician and alchemist and a friend of the leading Dutch Renaissance humanist Erasmus of Rotterdam (1469-1536). He spent prolonged periods of time among the mining towns of Saxony and the rich Bohemian silver mines. Agricola’s book De re metallica dealt with the arts of mining and smelting, while his De natura fossilium is considered the first mineralogy textbook. The Swiss physician and alchemist Paracelsus (1493-1541) grew up with miners and would eventually promote the role of chemicals in medicine. Agricola and Paracelsus can to some extent be seen as sixteenth century forerunners of the Scientific Revolution in that they based their research upon observation rather than ancient authority.

The German physician and alchemist Andreas Libavius (ca. 1540-1616) wrote the book Alchemia in 1597, which is sometimes described as the first chemistry textbook, but he was nevertheless a believer in the possibility of transmutation of base metals into gold.

Robert Boyle’s publication of The Sceptical Chymist in 1661 was a milestone in alchemy’s evolution towards modern chemistry, but progress was slow and gradual. The German Hennig Brand (ca. 1630-ca. 1710) discovered phosphorus a few years later, but he was still an alchemist working with transmutation and searching for the “philosopher’s stone.” He recognized the usefulness of the new substance, and although he became the first known discoverer of an element (the discoveries of gold, silver, copper, etc. are lost in prehistory), he did not recognize it as a chemical element in the modern sense.

The Swedish chemist Georg Brandt (1694-1768) discovered cobalt around 1735, the first metal not known to the ancients, and his pupil Axel Cronstedt (1722-1765) discovered nickel in 1751. The Spanish chemist Fausto Elhuyar (1755-1833) discovered tungsten in 1783 together with his brother Juan José Elhuyar. Johan Gadolin (1760-1852) from Turku, Finland, in 1794 discovered yttrium, the first of the rare-earth elements. Gadolin studied at Sweden’s University of Uppsala under Torbern Bergman (1735-1784), who also mentored Carl Scheele. Here is Bill Bryson in his highly entertaining book A Short History of Nearly Everything:

“Brand became convinced that gold could somehow be distilled from human urine. (The similarity of colour seems to have been a factor in his conclusion.) He assembled fifty buckets of human urine, which he kept for months in his cellar. By various recondite processes, he converted the urine first into a noxious paste and then into a translucent waxy substance. None of it yielded gold, of course, but a strange and interesting thing did happen. After a time, the substance began to glow. Moreover, when exposed to air, it often spontaneously burst into flame.”

The substance was named phosphorus, from the Greek for “light-bearer.” This discovery was exploited further by the great Swedish chemist Carl Scheele. Bryson again:

“In the 1750s a Swedish chemist named Karl (or Carl) Scheele devised a way to manufacture phosphorus in bulk without the slop or smell of urine. It was largely because of this mastery of phosphorus that Sweden became, and remains, a leading producer of matches. Scheele was both an extraordinary and an extraordinarily luckless fellow. A humble pharmacist with little in the way of advanced apparatus, he discovered eight elements — chlorine, fluorine, manganese, barium, molybdenum, tungsten, nitrogen and oxygen — and got credit for none of them. In every case, his finds either were overlooked or made it into publication after someone else had made the same discovery independently. He also discovered many useful compounds, among them ammonia, glycerin and tannic acid, and was the first to see the commercial potential of chlorine as a bleach — all breakthroughs that made other people extremely wealthy. Scheele’s one notable shortcoming was a curious insistence on tasting a little of everything he worked with….In 1786, aged just forty-three, he was found dead at his workbench surrounded by an array of toxic chemicals, any one of which could have accounted for the stunned and terminal look on his face.”

The most famous Swedish chemist is undoubtedly Alfred Nobel (1833-1896). Gunpowder remained the principal explosive from the time the Mongol conquests brought it from China in the thirteenth century until the chemical revolution in nineteenth-century Europe. Nitroglycerin was discovered by the Italian chemist Ascanio Sobrero (1812-1888) in 1847, but it was highly unstable and its use was banned by several governments following serious accidents. Alfred Nobel succeeded in stabilizing it and named the new explosive “dynamite” in reference to its dynamic force. The invention made him a very rich man. The foundations of the various Nobel Prizes, awarded annually since 1901, were laid in 1895 when the childless Nobel wrote his last will, establishing the foundation that carries his name.

The most scientifically important Swedish chemist, however, was clearly Jöns Jacob Berzelius (1779-1848). He and his students, among them the Swedish scholar Carl Gustaf Mosander (1797-1858), discovered several chemical elements, but above all Berzelius created a simple and logical system of symbols — H for hydrogen, C for carbon, O for oxygen etc., a system of chemical formula notation which essentially remains in use to this day.
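To give a sense of how the system works (the particular examples are mine, not Berzelius’s): water is written \(\mathrm{H_2O}\), carbon dioxide \(\mathrm{CO_2}\) and silver nitrate \(\mathrm{AgNO_3}\), each letter standing for an atom of the element in question and each number counting how many of them the compound contains. Berzelius himself wrote the numbers as superscripts; the familiar subscripts are a slightly later refinement of his scheme.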

Yet this happened in the nineteenth century. Even an otherwise brilliant scientist such as the Englishman Sir Isaac Newton (1643-1727), who devoted more time to alchemy than to optics, was clearly an alchemist. Newton was a deeply religious Christian, though in a theologically unorthodox way, and he looked for hidden information in the Bible. McClellan and Dorn:

“In the quest after secret knowledge, alchemy occupied the major portion of Newton’s time and attention from the mid-1670s through the mid-1680s. His alchemical investigations represent a continuation and extension of his natural philosophical researches into mechanics, optics, and mathematics. Newton was a serious, practicing alchemist — not some sort of protochemist. He kept his alchemical furnaces burning for weeks at a time, and he mastered the difficult occult literature. He did not try to transmute lead into gold; instead, using alchemical science, he pried as hard as he could into forces and powers at work in nature. He stayed in touch with an alchemical underground, and he exchanged alchemical secrets with Robert Boyle and John Locke. The largest part of Newton’s manuscripts and papers concern alchemy, and the influence of alchemy reverberates throughout Newton’s published opus. This was not the Enlightenment’s Newton.”

The earliest known use of the word “gas,” as opposed to just “air,” has been attributed to the Flemish scholar Jan Baptist van Helmont (1580-1644), who was a proponent of the experimental method. He did not, however, have access to adequate laboratory apparatus to collect the gases he studied. One of the obstacles he faced was the issue of containment. Cathy Cobb and Harold Goldwhite explain in their book Creations of Fire:

“Helmont did not always appreciate the volume of gas that would be released in his reactions, so he routinely burst the crude and delicate glassware of the day….However, a Benedictine monk, Dom Perignon, showed that effervescence in his newly invented beverage, champagne, could be trapped in glass bottles with bits of the bark of a special oak tree. The resultant cork was a triumph for celebrants and chemists alike. Another worker, Jean Bernoulli, used a burning lens (a lens used to focus the sun — soon to be standard equipment in the chemist’s repertoire) to ignite gunpowder in a flask. To avoid repeating the shattering experience of Helmont, Bernoulli did his work in an open, rather than a sealed, system, running a tube from the ignition flask to a vat of water. He was able to show in this manner that gases from the reaction occupied a much larger volume than the gunpowder (and became wet in the process). Otto von Guericke designed a practical air pump in the mid-1600s, and armed with this and new techniques for containment — corks and Bernoulli’s vat — a group of young scientists took on the task of determining the qualities of Helmont’s gases.”

The French monk Dom Perignon (1638-1715) did not technically speaking invent champagne, but he was indeed a pioneer in the use of corks to keep the new creation in place. Among the scientists who continued Helmont’s lead were the Englishmen Robert Boyle and Robert Hooke (1635-1703). Hooke compared cork cells he saw through his microscope to the small rooms monks lived in. These were not cells in the modern biological meaning of the term, but when cells were later identified, biologists took over that name from Hooke.

The great contribution of the seventeenth century to the story of wine was bottles and corks. Wine was traditionally shipped and consumed rather quickly, not stored. Wine bottles made of glass were rare as they were expensive and fragile. The Englishman Kenelm Digby (1603-1665) is often credited with creating the modern wine bottle in the 1630s and 1640s. Hugh Johnson explains in The Story of Wine:

“It now remained to equip them with the perfect stopper. How to plug bottles of whatever sort was a very old problem. The Romans had used corks, but their use had been forgotten. Looking at medieval paintings one sees twists of cloth being used, or cloth being tied over the top. Leather was also used, and sometimes covered with sealing wax. Corks begin to be mentioned in the middle of the sixteenth century. It has often been suggested, and may well be true, that cork became known to the thousands of pilgrims who tramped across northern Spain to Santiago de Compostella. It seems that the marriage of cork and bottle, at least in England, took place by degrees over the first half of the seventeenth century; stoppers of ground glass made to fit the bottle neck snugly held their own for a remarkably long time….Eventually, glass stoppers were abandoned because they were usually impossible to extract without breaking the bottle. Cider, beer, and homemade wines were what the seventeenth-century householder chiefly bottled. Bottling by wine merchants only began at the very end of the century.”

Santiago de Compostela in Galicia in the northwest of Spain was a major center of pilgrimage as Saint James the Great, one of the disciples of Jesus, is said to be buried there. Cork is the thick outer bark of the cork oak, Quercus suber, which grows in the Western Mediterranean and especially in the Iberian Peninsula. Portugal and Spain are still the most important exporters of cork. Cork is light, elastic, clean, largely unaffected by temperature and does not let air in or out of the bottle, which is what makes it so useful. Corkscrews were invented soon after the introduction of corked glass bottles.

Following advances made by Robert Boyle, Daniel Bernoulli (1700-1782) of the Swiss Bernoulli family of talented mathematicians published his Hydrodynamica in 1738, which laid the basis for the kinetic theory of gases.
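In its modern formulation, which goes well beyond Bernoulli’s own notation, the kinetic theory treats pressure as nothing more than the drumming of molecules against the container walls:

\[ pV = \tfrac{1}{3} N m \overline{v^2} \]

where \(N\) is the number of molecules, \(m\) the mass of a single molecule and \(\overline{v^2}\) the mean of the squared molecular speeds.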

The Scottish scientist Joseph Black (1728-1799), a friend of the inventor James Watt (1736-1819), discovered the gas we call carbon dioxide. It was by now quite clear that “air” consists of several different substances, an insight which led to further experiments in pneumatic chemistry. The brilliant English experimental scientist Henry Cavendish (1731-1810) identified hydrogen, or what he called “inflammable air.” Another Englishman, Joseph Priestley (1733-1804), a contemporary of Cavendish who corresponded with him, is usually credited with discovering oxygen, although Scheele had in fact done so before him. The Frenchman Antoine-Laurent de Lavoisier (1743-1794) noted its tendency to form acids by combining with different substances and named the element oxygen (oxygène) from the Greek words for “acid former.” Lavoisier worked closely with the mathematical astronomer Pierre-Simon Laplace (1749-1827) in developing new chemical equipment.

It is worth noting here that James Watt, a practical man of steam engine fame, and the great theoretical scientist Pierre-Simon Laplace both made contributions to the advancement of chemical science. This illustrates that theoretical science and applied technology were now gradually growing closer, a development of tremendous future importance which in my view had begun in Europe already in the eighteenth century, if not before, but whose effects would only become fully apparent some generations later.

Several observers noticed that water formed when a mixture of hydrogen with oxygen (or common air) was sparked, but they were cautious in their conclusions. Cobb and Goldwhite:

“Lavoisier did not hesitate. He made the pronouncement that water was not an element as previously thought but the combination of oxygen with an inflammable principle, which he named hydrogen, from the Greek for the begetter of water. He claimed priority for this discovery, making only slight reference to the work of others. There was perhaps understandably a furor. Watt felt that Cavendish and Lavoisier had used some of his ideas, but of course all three owed some debt to Priestley. Again it may be asserted that the significance of Lavoisier’s work lies not in the timing of his experimental work but in his interpretation of the results….Lavoisier however saw it as the combination of two elements to form a compound….Laplace favored a mechanical explanation of heat as the motion of particles of matter (as it is currently understood), but Lavoisier described heat as a substance. This material he called caloric, the matter of fire….His true accomplishments however were that he broke the Aristotelian barrier of four elements, established the conservation of mass as an inviolate law, and confirmed the need for verifiable experimental results as the basis for valid chemical theory.”
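In the chemical notation that eventually grew out of this work, written long after Lavoisier’s time, his two central points can be compressed into a single line:

\[ 2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O} \]

Water is a compound of two elements, and the mass balances: roughly 4 grams of hydrogen combine with 32 grams of oxygen to give 36 grams of water, with nothing created or destroyed.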

Lavoisier is widely considered the “father of modern chemistry.” He had not yet fully arrived at the modern definition of a chemical element, but he was a great deal closer to it than past scholars and had given chemists a logical language for naming compounds and elements. Together with Laplace he conducted a number of studies of respiration and concluded that oxygen was the element in air necessary for life. In 1772 Lavoisier had demonstrated that charcoal, graphite and diamond contain the same substance: carbon.

Although himself an honest man, Lavoisier represented the hated tax collectors and found himself on the wrong side of the French Revolution, which began in 1789. He was guillotined after the revolutionary judge remarked, “The Republic has no need of scientists.” As scholar Frederic Lawrence Holmes states:

“Despite his monumental achievements in chemistry, Lavoisier’s work in that field constituted only one of his many activities. In addition to his duties in the Tax Farm, Lavoisier took on important administrative functions in the Academy of Sciences, developed an experimental farm at a country estate he had purchased outside Paris, made improvements in the manufacture of gunpowder, wrote important papers on economics, and became deeply involved in reformist political and administrative roles during the early stages of the French Revolution. As a member of the Temporary Commission on Weights and Measures from 1791 to 1793, he played an important part in the planning for the metric system. He devised plans for the reform of public instruction and the finances of the revolutionary government. In his efforts to deal rationally with the turbulent events of the Revolution, however, Lavoisier repeatedly underestimated the power of public passion and political manipulation, and despite his generally progressive views and impulses, he was guillotined, along with twenty-eight other Tax Farmers, as an enemy of the people at the height of the Reign of Terror in 1794.”

The construction of a metric system had been suggested before, but the idea was only successfully implemented with the Revolution. The great French mathematician Adrien-Marie Legendre (1752-1833), among others, continued this work afterwards. The metric system of measurement has since conquered the entire world, despite resistance from English-speaking countries. It is one of the most positive outcomes of the French Revolution, in my view an otherwise predominantly destructive event which championed a number of damaging ideas.

The French pioneering chemist Claude Louis Berthollet (1748-1822) determined the composition of ammonia in 1785 and laid the foundations for understanding chemical reactions. The Norwegians Cato Guldberg (1836-1902) and Peter Waage (1833-1900) in the 1860s, building on Berthollet’s ideas, discovered the law of mass action regarding the relationship of speed, heat and concentration in chemical reactions. The Italian scholar Amedeo Avogadro (1776-1856), expanding on the Frenchman Joseph Louis Gay-Lussac’s (1778-1850) work on the volumes of gases from 1808, hypothesized in 1811 that equal volumes of all gases at the same pressure and temperature contain the same number of particles (molecules), a principle later known as Avogadro’s Law.
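Stated in modern form, which is a later formulation rather than Avogadro’s own wording: at a fixed temperature and pressure the volume of a gas is proportional to the number of molecules it contains,

\[ \frac{V}{n} = \text{constant (at fixed } T \text{ and } p\text{)} \]

which is why one mole of any gas at 0 °C and atmospheric pressure occupies about 22.4 litres, regardless of what the gas is.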

During the nineteenth century, dozens of new chemical elements were discovered and described, not only because of better electrochemical equipment but also because scientists now knew what to look for. This eventually enabled the Russian scholar Dmitri Mendeleyev (1834-1907) in Saint Petersburg to create his famous periodic table of the chemical elements in 1869. He was not the first person to attempt to construct such a table, but he took the bold step of leaving open spaces for elements not yet discovered. After these were identified, with roughly the chemical characteristics predicted by Mendeleyev, his periodic table won general acceptance among scholars. It brought a new sense of order and clarity to chemistry.

The scientific method was established gradually, and modern science with it, in Europe during the late sixteenth and early seventeenth centuries. The Flemish scholar Jan Baptist van Helmont conducted experiments in the modern sense of the term, and the empirical studies of the English physician William Harvey (1578-1657) led to a better understanding of the circulation of blood in the human body. The colorful and prolific German priest and scholar Athanasius Kircher (1601-1680) nevertheless still embodied elements both of the new experimental era and of the more mystical age which preceded it.

Static electricity gained from rubbing amber or other substances, which could then attract nearby small objects, was known to the ancient Greeks and to other cultures. The modern study of electricity started with the Scientific Revolution and accelerated throughout the Enlightenment era of the eighteenth century. The seeds of the European electrical revolution in the nineteenth and twentieth centuries, which transformed everyday life in every corner of this planet, had been sown already during the seventeenth and eighteenth centuries.

The English natural philosopher William Gilbert (1544-1603) was one of the originators of the term “electricity.” He published his treatise De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure (“On the Magnet, Magnetic Bodies, and the Great Magnet of the Earth”) in 1600. Although his investigation of static electricity was less complete than his study of magnetism, it became a standard work throughout Europe. He was a successful physician in Elizabethan London and the personal physician of Queen Elizabeth I, but conducted original research as well. In De magnete, he treats the Earth as a huge magnet, a connection that the French scholar Peter Peregrinus of Maricourt had failed to make in his thirteenth century work on magnetism. Gilbert did careful experiments with his terrella Earth model by moving a compass around it and demonstrated that it always pointed north-south. He claimed that this is also the case with the Earth, which is magnetic. Previously, many had believed that Polaris, the North Star, or some unknown object made the compass point north. Gilbert extended his theories to include the cosmos, too. The pre-Newtonian concept of magnetic forces between planets interested Kepler, and Galileo was directly inspired by Gilbert’s experimental philosophy.

The Englishman Francis Hauksbee (1666-1713) produced an electrostatic generator in 1706. He obtained more powerful effects by mounting a glass globe on a spindle and rubbing it as it rotated, but he was too far ahead of his time. The English scholar Stephen Gray (1666-1736) discovered in 1729 that electricity can flow and that electrical conductors must be insulated. Gray showed that the electrical attracting power could be transmitted over great distances, provided that the conducting line was made of a suitable material.

In the 1730s the French physicist Charles du Fay (1698-1739) demonstrated that there are two kinds of electric charge, later dubbed “positive” and “negative,” with like charges repelling one another and opposite charges attracting one another. The French clergyman and physicist Jean-Antoine Nollet (1700-1770) constructed one of the first electrometers and in 1748 discovered osmosis, the diffusion of a solvent (usually water) through a semi-permeable membrane separating two solutions with different concentrations.

Benjamin Franklin, journalist, scientist, inventor, statesman, philosopher, economist and the most famous American of his time, developed many electrical theories, among them the idea that lightning is a form of electrical discharge. His conclusion that erecting pointed conductors, lightning rods, on buildings could protect them from lightning strikes was hailed as a triumph of reason over nature, very much an Enlightenment ideal. Franklin’s rough ideas were developed into a more mathematically consistent theory of electricity and magnetism in 1759 by the German natural philosopher Franz Aepinus (1724-1802), whose theories were taken up by the next generation of scholars such as Volta, Coulomb and Cavendish.

The French physicist Charles de Coulomb (1736-1806) discovered that the force between two electrical charges is proportional to the product of the charges and inversely proportional to the square of the distance between them. According to the Encyclopædia Britannica, “Coulomb developed his law as an outgrowth of his attempt to investigate the law of electrical repulsions as stated by Joseph Priestley of England. To this end he invented sensitive apparatus to measure the electrical forces involved in Priestley’s law and published his findings in 1785-89. He also established the inverse square law of attraction and repulsion of unlike and like magnetic poles, which became the basis for the mathematical theory of magnetic forces developed by Siméon-Denis Poisson.”
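In modern notation Coulomb’s result reads F = k·q1·q2/r², with k the Coulomb constant. The short Python sketch below uses today’s SI value of k rather than anything Coulomb himself measured, and the charges and distances are illustrative.

```python
# A small sketch of Coulomb's law as stated above: the force between two point
# charges is proportional to the product of the charges and falls off with the
# square of the distance. K is the modern SI constant, not Coulomb's own value.

K = 8.988e9   # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force (newtons) between charges q1 and q2
    (coulombs) separated by a distance r (meters)."""
    return K * q1 * q2 / r**2

# Doubling the distance cuts the force to a quarter:
print(coulomb_force(1e-6, 1e-6, 0.1))   # ~0.899 N
print(coulomb_force(1e-6, 1e-6, 0.2))   # ~0.225 N
```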

The Dutch scholar Pieter van Musschenbroek (1692-1761) and the German scholar Ewald Georg von Kleist (1700-1748) independently invented the first capacitor in the 1740s, a device for storing static electricity which became known as the Leyden jar. It created a craze in the second half of the eighteenth century for electrical party tricks as well as for scientific studies of electricity. Cathy Cobb and Harold Goldwhite in Creations of Fire:

“Electricity itself, like atomic theory, was nothing new. The Greeks knew how to generate static electricity by rubbing amber with wool (the word electricity is derived from elektron, the Greek word for amber), and Otto von Guericke of air-pump fame made a machine for generating a high-potential electric charge in the 1500s. The Leiden jar for storing static charge was invented in 1745 by Pieter van Musschenbroek of Leiden, who stumbled across the method while trying to preserve electrical charge in an empty glass bottle. Unknowingly he built up considerable static charge on the surface of the bottle, which he discovered when he touched the bottle, ‘the arm and the body was affected in a terrible manner which I cannot express; in a word, I thought it was all up with me.’ In the 1750s Benjamin Franklin carried out his famous kite experiment in which he collected a charge from a thunder cloud in a Leiden jar. He was fortunate in surviving this experiment; others who attempted to duplicate it did not. Franklin performed many revealing experiments with the Leiden jar, but these experiments were limited because the Leiden jar provided only one jolt of electricity at a time.”

The discovery of current electricity opened the door to a whole new area of research in the early 1800s. James E. McClellan and Harold Dorn:

“In experiments conducted with frogs’ legs in the 1780s, the Italian scientist Luigi Galvani (1737-98)…sought to investigate the ethereal ‘animal electricity’ that seemed to ‘flow’ in an animal’s body. His compatriot Alessandro Volta (1745-1827) built on Galvani’s work and in 1800 announced the invention of the pile, or battery, which could produce flowing electricity. Volta’s battery and the ever-larger ones that soon followed manifested profound new connections between electricity and chemistry. The battery — layers of metals and cardboard in salt (later acid) baths — was itself a chemically based instrument, and so the generation of current electricity was self-evidently associated in fundamental ways with chemistry. More than that, through electrolysis or using a battery to run electricity through chemical solutions, scientists….discovered new chemical elements….Lavoisier had been content to describe chemical elements as merely the last products of chemical analysis without saying anything about the constitution — atomic or otherwise — of these elements….John Dalton (1766-1844) noticed that the proportions of elements entering into reactions were often ratios of small integers, suggesting that chemical elements are in fact discrete particles. He thus became the first modern scientist to propose chemical atoms — or true indivisibles — in place of the more vague concept of chemical elements.”

The invention of the battery, which, like photography, followed rapid advances in chemistry, made possible for the first time in human history the study of electromagnetism. One of the greatest revolutions in the history of optics was the realization that visible light is just one of several forms of electromagnetic radiation. Volta’s device also made possible further advances in chemistry through electrolysis, by Humphry Davy and others.

The rise of “modern” atomism is often taken to start with the work of the English Quaker John Dalton. The philosophical atomism of the ancients was not supported by empirical verification; it was “a deduction from certain mental postulates, not from experience, and therefore should be considered as literature rather than as science,” as one writer put it. During the nineteenth century atomism gained a much firmer experimental basis. Dalton developed his atomic theory in detail in two volumes of A New System of Chemical Philosophy, published in 1808 and 1810. Later developers of chemical atomism such as Jöns Jacob Berzelius, Justus von Liebig, Amedeo Avogadro and Joseph Louis Gay-Lussac were inspired by Dalton’s atomistic vision even though they modified certain details within it.

The refracting telescope as an efficient research tool depended on technological progress in the making of high-quality optical glass. From the early seventeenth century, shortage of wood led to the use of coal in English glass furnaces, which produced higher temperatures. Through an unexpected accident, the properties of lead glass led to improved telescopes and microscopes. The Englishman George Ravenscroft (1632-1683) travelled in Italy and may have studied glassmaking in Venice. He later managed to create clear lead crystal glass (known as flint glass), and his glassworks produced many fine drinking glasses with techniques that were soon copied by other glassworks in England. He was the first to produce clear lead crystal glassware on an industrial scale, as an alternative to Venetian cristallo.

Even with the fine lenses made by the brothers Huygens, by the 1660s refracting telescopes had reached the limits of what was reasonable in the absence of achromatic lenses. Newton’s discovery that sunlight is made up of rays of different refrangibilities was demonstrated in his Opticks from 1704. Sunlight passed through a glass prism yielded a spectrum of rays of different colors which, if reunited by a second prism, again produced white light. The different refrangibilities of rays of different colors make lenses cast colored images. Newton thought, mistakenly, that this evil could not be cured, and made the first practical reflecting telescope in 1668. The first known reflecting telescope had been created by the Italian Jesuit astronomer Niccolò Zucchi (1586-1670) as early as 1616, but its quality was far from perfect.

It was widely believed that Newton had proved the impossibility of removing chromatic aberration, and this delayed the development of achromatic lenses. The English lawyer and optician Chester Moore Hall (1703-1771) proved through a series of experiments that different sorts of glass (crown and flint) could be combined to produce an achromatic combination. He had at least two achromatic telescopes made after 1730 in collaboration with the optician George Bass, and told the English instrument maker John Dollond (1706-1761) of his success in the 1750s. Dollond’s son Peter (1730-1820) had recently started making optical instruments, and John’s reputation grew rapidly. The brilliant Swiss mathematician Leonhard Euler saw the potential in achromatism and wrote to Dollond about it.

Dollond at first doubted the possibility of making achromatic lenses, but the Swedish scientist Samuel Klingenstierna (1698-1765) found errors in Newton’s theories of refraction, published his theoretical account of this in 1754 and sent his geometrical notes to Dollond. John Dollond in 1757 managed to produce achromatic doublets by a combination of different forms of glass and became the first to patent the method. This quickly led to much-improved refracting telescopes, but in the long run reflecting telescopes would still win out.

Great practical and theoretical improvements took place in optics in the nineteenth century, especially in Germany at the firm of Carl Zeiss (1816-1888), supported by the theoretician Ernst Abbe (1840-1905) in collaboration with the scientific glassworks of Otto Schott (1851-1935), a leading pioneer in modern glass chemistry and industry. Justus von Liebig (1803-1873) and other chemists made substantial contributions as well. John North again:

“The future of telescopes was with reflectors, and this simply because the glass at the center of lenses of the order of a meter in diameter is so thick that the absorption of light there is intolerable. Physical deformation under the glass’s weight is also a problem. A new medium was needed, however, to replace the unworkable speculum metal. Glass mirrors were not new, and glass grinding and polishing was a highly developed art, but early methods of silvering the glass were crude. In 1853, the German chemist Justus von Liebig devised a technique for depositing a thin and uniform layer of silver on a clean glass surface, from an aqueous solution of silver nitrate. The technique had been shown at the Great Exhibition of 1851 in London but was seemingly reinvented by Liebig, who brought it to the attention of a wide scientific circle. A few years afterwards, the renowned Munich instrument maker Carl August von Steinheil of Munich, and Jean Bernard Léon Foucault of Paris, both made use of the technique, depositing layers of silver on glass mirrors for use in telescopes — at first in fairly small ones.”

Foucault used mirrors in experiments to study the speed of light with unprecedented accuracy, as we will see later. The nature of light was obviously important for astronomers to understand. By 1685 Newton had used a new technique devised by the prominent Scottish astronomer and mathematician James Gregory (1638-1675) to show that the stars must lie at much greater distances from the Sun than had previously been supposed. In Optica Promota (“The Advance of Optics”) from 1663, Gregory introduced photometric methods to estimate the distances of stars and a description of a practical reflecting telescope known as the Gregorian telescope. He also pointed out the possible use of transits of Venus and Mercury to determine the distance to the Sun, something which was successfully done after his death.

The photometric method used by Newton depended upon a comparison of the brightness of the Sun with that of a star by considering the sunlight reflected off Saturn. He had to show that the stars were so far away that their gravitational attractions on one another and their influence on the bodies of our Solar System were minimal. This mattered to him because he wondered why the world does not collapse in on itself under gravity.

Newton supported the Frenchman Pierre Gassendi’s corpuscular theory of light. The revived atomism of Gassendi had many supporters during the seventeenth century. One of them was the Englishman Thomas Hobbes, who used atomism as an inspiration for his political philosophy. Hobbes is remembered for his work Leviathan from 1651, written during the English Civil War (1642-51), in which he developed the concept of “the war of all against all” and the idea of the social contract, later elaborated in very different ways by John Locke (1632-1704) and Jean-Jacques Rousseau (1712-1778). The Dutchman Christiaan Huygens had practical experience from working with lenses and, like René Descartes (1596-1650), supported a wave theory of light.

According to Gribbin, “Italian physicist Francesco Grimaldi (1618-1663), professor of mathematics at the Jesuit college in Bologna…studied light by letting a beam of sunlight into a darkened room through a small hole. He found that when the beam of light was passed through a second small hole and on to a screen, the image on the screen formed by the spot of light had coloured fringes and was slightly larger than it should be if the light had travelled in straight lines through the hole. He concluded (correctly) that the light had been bent outwards slightly as it passed through the hole, a phenomenon to which he gave the name ‘diffraction’. He also found that when a small object (such as a knife edge) was placed in the beam of light, the shadow cast by the object had coloured edges where light had been diffracted around the edge of the object and leaked into the shadow. This is direct evidence that light travels as a wave, and the same sort of effect can be seen when waves on the sea, or on a lake, move past obstructions or through gaps in obstructions.”

Because the wavelengths are so small, the effects are tiny and difficult to measure. Moreover, Grimaldi’s work was not published until two years after he died. A few scholars supported a wave theory of light in the eighteenth century, most notably Leonhard Euler, yet Newton’s conception of light as a stream of particles dominated the field for a long time due to his great personal authority as the man who had discovered the law of universal gravitation.

New experimental evidence came with the work of the English scholar Thomas Young (1773-1829). Young knew Italian, Hebrew, Syriac, Turkish, Persian and other languages while still in his teens. He studied physics, chemistry and ancient history and contributed to deciphering the Egyptian hieroglyphs, but he is often remembered for his studies of light. He experimented with the phenomenon of interference in his double-slit experiment in the late 1790s and early 1800s and eventually came out in support of the wave model of Huygens over that of Newton. Different colors of light, he said, represent different wavelengths of light.
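The connection between color and wavelength can be illustrated with the geometry of a double-slit arrangement of the kind Young studied: in the small-angle approximation the bright fringes are separated by roughly λL/d for wavelength λ, slit separation d and screen distance L. The numbers in the Python sketch below are illustrative choices, not Young’s own measurements.

```python
# A hedged numerical sketch of double-slit interference. For slit separation d,
# screen distance L and wavelength lam, adjacent bright fringes are spaced
# roughly lam * L / d apart (small-angle approximation).

def fringe_spacing(wavelength_m, slit_separation_m, screen_distance_m):
    """Approximate distance between adjacent bright fringes on the screen."""
    return wavelength_m * screen_distance_m / slit_separation_m

# Red light (~650 nm) gives wider fringes than blue light (~450 nm),
# which is how different colors reveal different wavelengths:
print(fringe_spacing(650e-9, 0.2e-3, 1.0))   # ~3.3 mm
print(fringe_spacing(450e-9, 0.2e-3, 1.0))   # ~2.3 mm
```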

Young’s optical studies were continued by a better mathematician than he, the French physicist Augustin Fresnel (1788-1827), the inventor of the Fresnel lens which was first adopted in lighthouses and eventually used in many other applications, too. Fresnel derived formulas to explain reflection, diffraction, interference, refraction and double refraction.

Thomas Young contributed to the study of vision as well, and theorized that the most sensitive points of the retina, which are connected directly to the brain, can detect three primary colors. The brain later mixes the sensations to create all possible colors. Individuals suffering from color blindness, a disorder first recognized during Young’s lifetime (the English scholar John Dalton published a scientific paper on the subject in 1798), have an abnormally low number of the retinal cones which detect color.

This idea was developed further by the German physician and physicist Hermann von Helmholtz (1821-1894) and is called the Young-Helmholtz theory. Later experiments demonstrated that you can indeed make other colors by mixing the three primary colors red, green and blue. The existence of three types of cones, sensitive to different wavelengths of light, was confirmed in the late twentieth century. This trichromatic theory of color vision was later challenged by the German physiologist Ewald Hering (1834-1918), who proposed an alternative theory. We know today that the theories describe different parts of the process.
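As a toy illustration of the trichromatic idea, light stimuli can be represented as red-green-blue triples and combined additively, which is roughly how modern displays exploit the Young-Helmholtz picture. The Python sketch below is a deliberate simplification for illustration, not a model of the retina.

```python
# Additive (trichromatic) mixing of light stimuli represented as (R, G, B)
# triples on a 0-255 scale; mixing simply adds the components and clips.

def mix(*lights):
    """Additively mix several (r, g, b) light stimuli, clipping at 255."""
    return tuple(min(255, sum(channel)) for channel in zip(*lights))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix(RED, GREEN))         # (255, 255, 0)   -> perceived as yellow
print(mix(RED, GREEN, BLUE))   # (255, 255, 255) -> perceived as white
```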

Traditionally, the human eye has been understood to have two types of photoreceptor cells: rods and cones. Rods are more sensitive to light than cones and thus responsible for night vision, but do not distinguish color equally well. In the 1990s there were claims that a third type of retinal photoreceptors had been discovered, so-called photosensitive ganglion cells, but while they may affect some biological processes their role is apparently non-image-forming.

Hermann von Helmholtz invented the ophthalmoscope, which could examine the inside of the human eye. He studied color vision, inspired by Young’s and Maxwell’s work, and published Handbuch der Physiologischen Optik (Handbook of Physiological Optics) in 1860. He made contributions to our understanding of the mechanisms of hearing and was familiar with the Theory of Colours (Zur Farbenlehre) from 1810 by Johann Wolfgang von Goethe (1749-1832). Goethe had a powerful mind and was one of the greatest writers in European history, but he was not a physicist and concerned himself mainly with color perception, hence his work influenced artists such as the Russian painter Wassily Kandinsky (1866-1944) and the English romantic painter J. M. W. Turner (1775-1851) more than scientists.

Goethe was a friend of the pioneering Czech experimental physiologist Jan (Johannes) Evangelista Purkinje (1787-1869), who helped create a modern understanding of vision, brain and heart function. Research he did at the University of Prague led to his discovery of a phenomenon known as the Purkinje effect where, as light intensity decreases, red objects are perceived to fade faster than blue objects. He was one of the founders of neuroscience and in 1837 aided by an improved compound microscope discovered Purkinje cells, large neurons in the human brain. He introduced protoplasm and plasma (as in blood plasma, the clear, fluid portion of the blood in which the blood cells are suspended) as scientific terms, and recognized fingerprints as a means of identification.

The Dutch scholar Willebrord Snellius or Snell (1580-1626) was a professor of mathematics at the University of Leiden. In 1617 he published Eratosthenes Batavus, which contains his methods for measuring the Earth by triangulation, the first thorough and accurate modern geodetic measurement. He discovered the law for computing the refraction of light in 1621, although it is possible that several scholars discovered it independently. He determined that transparent materials have different indices of refraction depending upon their composition. Snell’s Law demonstrates that every substance has a specific bending ratio. As we know today, light travels at different speeds through different materials: water, glass, crystals and so on.
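In modern form Snell’s law states that n1·sin(θ1) = n2·sin(θ2), where n is the index of refraction of each material. A brief Python sketch, using commonly quoted approximate indices for air, water and glass purely for illustration:

```python
# A short sketch of Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
import math

def refraction_angle(theta1_deg, n1, n2):
    """Angle (degrees) of the refracted ray when light passes from medium 1 to medium 2."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None   # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

# Light entering water (n ~ 1.33) or glass (n ~ 1.5) from air (n ~ 1.0)
# at 30 degrees bends toward the normal by different amounts:
print(refraction_angle(30, 1.0, 1.33))   # ~22.1 degrees
print(refraction_angle(30, 1.0, 1.5))    # ~19.5 degrees
```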

The physician Erasmus Bartholin (1625-1698) from Denmark in 1669 discovered double refraction, which causes you to see a double image, by studying a transparent crystal of Iceland spar (calcite) which he had gathered during an expedition to Iceland. (Iceland had political ties to Scandinavia dating back to the Viking Age and remained a part of the Kingdom of Denmark until the twentieth century.) This phenomenon could not be explained until the wave theory of light had triumphed. Bartholin was the teacher of the great Danish astronomer Ole Rømer (1644-1710), the first person to successfully measure the speed of light, and assigned him the task of editing Tycho Brahe’s manuscripts.

The French mathematician Étienne-Louis Malus (1775-1812) had accompanied Napoleon’s invasion of Egypt in 1798 and remained in the Middle East until 1801. In 1808 he discovered that light could be polarized (a term coined by Malus) by reflection, as he observed sunlight reflected from the windows of the Luxembourg Palace in Paris through an Iceland spar crystal that he rotated. His discovery of the polarization of light by reflection was published in 1809 and his theory of double refraction of light in crystals in 1810.

More complete laws of polarization were formulated by the Scottish physicist Sir David Brewster (1781-1868). The English scientist and inventor Sir Charles Wheatstone (1802-1875), who was a pioneer in telegraphy, developed the theory of stereoscopic or binocular vision, the idea that each eye sees slightly different views of a single scene which are then combined in a way that results in depth perception, although stereopsis is not the only thing that contributes to this. He published a paper on this subject in 1838 and invented the stereoscope, a device for displaying three-dimensional images, in 1840. David Brewster made improvements to the stereoscope a few years later. Brewster also invented the colorful and popular kaleidoscope in 1816 and studied the diffraction and polarization of light.
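One of Brewster’s results, now usually called Brewster’s law, gives the angle of incidence at which reflected light is completely polarized: tan(θB) = n2/n1. The short Python sketch below uses approximate textbook refractive indices and is included only as an illustration.

```python
# A brief sketch of Brewster's law: reflected light is fully polarized when
# tan(theta_B) = n2 / n1. The refractive indices are approximate values.
import math

def brewster_angle(n1, n2):
    """Angle of incidence (degrees) at which reflected light is completely polarized."""
    return math.degrees(math.atan(n2 / n1))

print(brewster_angle(1.0, 1.5))    # air to glass: ~56.3 degrees
print(brewster_angle(1.0, 1.33))   # air to water: ~53.1 degrees
```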

Industrial research laboratories, another European innovation of the nineteenth century, were first applied to chemistry. They analyzed the properties of a wide range of known materials and sought to improve them by testing, measuring and quantifying processes and products that already existed in metallurgy, textiles and other industries. Eventually this led to the creation of entirely new products: the synthetic dye industry, synthetic textiles, and the era of plastics and synthetic fibers.

The very idea of contact lenses had been suggested by Leonardo da Vinci, Descartes and others, but the first usable such lenses were made in the nineteenth century. In 1887 a German glassblower, F.E. Muller, produced the first eye covering that could be seen through and tolerated. In the following year, the physiologist Adolf Eugen Fick and the Paris optician Edouard Kalt simultaneously reported using contact lenses to correct optical defects. Although some contact lenses have been and still are being made of glass, the widespread use of such lenses came in the twentieth century with the development of other, alternative materials. In 1936 the American optometrist William Feinbloom (1904-1985) introduced the use of plastic. The most important breakthrough came with the Czech chemists Otto Wichterle (1913-1998) and Drahoslav Lím (1925-2003) and their experiments in Prague in the 1950s with lenses made of a soft, water-absorbing plastic they developed. Soft contact lenses, which are thinner, lighter and more comfortable to wear than hard ones, were made commercially available from the 1970s onwards, and only then became a widely used alternative to spectacles.

A chemistry-based innovation of far greater importance to science than the invention of contact lenses was spectroscopy. The English chemist William Hyde Wollaston (1766-1828) noted in 1802 some dark features in the solar spectrum, but he did not follow up on this observation. In 1814, the German physicist Joseph von Fraunhofer (1787-1826) independently discovered these dark features (absorption lines) in the optical spectrum of the Sun, which are now known as Fraunhofer lines. He carefully studied them and noted that they exist in the spectra of Venus and the stars as well, which meant that they had to be a property of the light itself. As so many times before, this optical advance was aided by advances in glassmaking.

In the 1780s a Swiss artisan, Pierre-Louis Guinand (1748-1824), began experimenting with the manufacture of flint glass, and in 1805 managed to produce a nearly flawless material. He passed on this secret to Fraunhofer, a skilled artisan working in the secularized Benedictine monastery of Benediktbeuern, who improved upon Guinand’s complex glass-stirring technique and managed to manufacture flint glass of unprecedented homogeneity in the 1810s and 1820s. Fraunhofer then began a more systematic study of the mysterious spectral lines. To the stronger ones he assigned the letters A to Z, a system which is still used today.

However, it was left to two other German scholars to prove the full significance of these lines. The birth of spectroscopy, the systematic study of the interaction of light with matter, came with the work of Robert Bunsen (1811-1899) and Gustav Kirchhoff (1824-1887). Cathy Cobb and Harold Goldwhite explain:

“Newton separated white light into component colors with a glass prism, then recombined it into white light by passing it through a second prism; the ancient texts of India report the use of flame color in chemical analysis (though the Indian savants were looking for poisons, not new elements); later chemists used flame colors as their only way of distinguishing sodium and potassium salts. The discovery to be exploited in the 1800s however was that elements in flames have spectra that show characteristic line patterns, and these line patterns can be measured and cataloged. German chemist Robert Bunsen….needed a steady, essentially colorless flame to analyze flame colors of salts in mineral waters, so he invented the laboratory burner named in his honor, the Bunsen burner….Gustav Robert Kirchhoff, a physicist and colleague at Heidelberg, suggested passing the light through a prism to spread out its component parts into a spectrum. Together the two workers assembled the flame, prism, lenses, and viewing tubes on a stand and produced the first spectrometer, and in very short order they used their spectrometer to identify the new elements cesium and rubidium, showing in each case that these new elements produced line spectra that were unique.”

Modern astrophysics was born with the development of spectroscopy and photography in combination with telescopes. For the first time, scientists could investigate what celestial bodies were made of. The website of the American Institute of Physics (AIP) elaborates:

“In 1859, Bunsen reported to a colleague that Kirchhoff had made ‘a totally unexpected discovery.’ He had identified the cause of the dark lines seen in the solar spectra by Fraunhofer and others. When certain chemicals were heated in Bunsen’s burner, characteristic bright lines appeared. In some cases these were at exactly the same points in the spectrum as Fraunhofer’s dark lines. The bright lines were light coming from a hot gas, whereas the dark lines showed absorption of light in the cooler gas above the Sun’s surface. The two scientists found that every chemical element produces a unique spectrum. This provides a sort of ‘fingerprint’ which can confirm the presence of that chemical. Kirchhoff and Bunsen recognized that this could be a powerful tool for ‘the determination of the chemical composition of the Sun and the fixed stars.’ Throughout the 1860s, Kirchoff managed to identify some 16 different chemical elements among the hundreds of lines he recorded in the sun’s spectrum. From those data, Kirchoff speculated on the sun’s chemical composition as well as its structure. Early astronomical spectroscopy concentrated on the sun because of its brightness and its obvious importance to life on earth.”

There was at least one other person who had touched upon the right explanation for the Fraunhofer lines, though not as systematically as Kirchhoff did. This was the physicist Sir George Gabriel Stokes (1819-1903). Stokes was born in Ireland and attended school in Dublin, but later moved to England and Cambridge University. A gifted mathematician, he also emphasized the importance of experimentation and made contributions to hydrodynamics and optics. In 1852 he named the phenomenon of fluorescence, which results from the absorption of ultraviolet light and the emission of blue light. Stokes’ law stipulates that the wavelength of emitted fluorescent light is always greater than the wavelength of the exciting light.

Fluorescence microscopy is now an important tool in cellular biology. The Polish physicist Alexander Jablonski (1898-1980), working at the University of Warsaw, was a pioneer in fluorescence spectroscopy.

The introduction of the telescope revolutionized astronomy, but it did not found astronomy as a discipline. Astronomy in some form has existed for thousands of years on different continents. It is consequently impossible to assign a specific date to its founding. This is not the case with astrophysics. It is probably safe to assume that most people throughout human history have had divine associations with the celestial bodies, although a few cases of a more scientific mindset can be encountered.

The Greek philosopher Anaxagoras in the fifth century BC was the first of the Pre-Socratic philosophers to live in Athens. He was both famous and notorious for his scientific theories, including claims that the stars are fiery stones. Anaxagoras allegedly got his idea about the composition of the Sun when a meteorite fell near Aegospotami. It was red hot when it reached the ground and it had come from the sky, so he reasoned that it came from the Sun. It consisted largely of iron, so he concluded that the Sun was made of red-hot iron. Here is a quote from the entry about him in the online Stanford Encyclopedia of Philosophy:

“The sun is a mass of fiery metal, and the moon is an earthy lump (with no light of its own). The same rotation ultimately produces the stars and planets as well. Sometimes the force of the rotation snatches up stones from the surface of the Earth and spins them around the Earth as they gradually rise higher through the force of the rotation. Until these bodies are high enough, they remain unseen between the Earth and the moon and so sometimes intervene to prevent heavenly bodies from being seen by terrestrial observers. The force and shaking of the rotation can cause slippage, and so sometimes a star (a flaming mass of rock and iron) is thrown downwards toward the earth as a meteor (such as the one Anaxagoras is supposed to have predicted at Aegospotami)….Anaxagoras is also credited with discovering the causes of eclipses — the interposition of another body between earth and the sun or the moon….Anaxagoras also gave explanations for the light of the Milky Way, the formation of comets, the inclination of the heavens, the solstices, and the composition of the moon and stars….Anaxagoras claims that the earth is flat.”

It is interesting to note that an intelligent man such as Anaxagoras apparently still believed that the Earth was flat; a few generations later Aristotle definitely knew that it was spherical. The realization that the Earth is spherical was clearly developed among the Greeks during the Classical or Hellenic period and ranks as one of the great achievements of Greek scholarship. Yet rational as he may have been, Anaxagoras did not found astrophysics. He could speculate on the composition of the heavenly bodies, but he had no way of proving his claims. Neither did observers in Korea, Syria, Peru or elsewhere. Some sources indicate that Anaxagoras was charged with impiety for his claims, as most Greeks still shared the divine associations with the heavenly bodies, but political considerations may have played a part in this, too.

Stones falling from the sky were viewed by many peoples as signs from the gods. On the other hand, there were scholars who insisted that meteorites were formed on the Earth. In Enlightenment Europe, for instance, stories about rocks from the sky were dismissed as common superstition. The German physicist Ernst Chladni (1756-1827), who is now often regarded as the founder of meteoritics, in 1794 published a paper suggesting the extraterrestrial origin of meteorites, asserting that masses of iron and of rock fall from the sky and produce fireballs when heated by friction with the air. He concluded that they must be cosmic objects. This view was defended by the German astronomer Heinrich Olbers (1758-1840), but ridiculed by those who believed meteorites were of volcanic origin and refused to believe that stones from space rained on the Earth.

Eyewitness accounts of fireballs were initially dismissed, yet fresh and seemingly reliable reports of stones falling from the sky appeared shortly after Chladni’s work was published. The young English chemist Charles Howard (1774-1816) read Chladni’s treatise and decided to analyze the chemical composition of these rocks. Working with the French mineralogist Jacques-Louis de Bournon he made the first thorough scientific analysis of meteorites. Here is a passage from the book Cosmic Horizons, edited by Neil deGrasse Tyson and Steven Soter:

“The two scientists found that the stones had a dark shiny crust and contained tiny ‘globules’ (now called chondrules) unlike anything seen in terrestrial rocks. All the iron masses contained several percent nickel, as did the grains of iron in the fallen stones. Nothing like this had ever been found in iron from the Earth. Here was compelling evidence that the irons and rocks were of extraterrestrial origin. Howard published these results in 1802. Meanwhile, the first asteroid, Ceres, was discovered in 1801, and many more followed. The existence of these enormous rocks in the solar system suggested a plausible source for the meteorites. Space wasn’t empty after all. Finally, in 1803, villagers in Normandy witnessed a fireball followed by thunderous reverberations and a spectacular shower of several thousand stones. The French government sent the young physicist Jean-Baptist Biot to investigate. Based on extensive interviews with witnesses, Biot established the trajectory of the fireball. He also mapped the area where the stones had landed: it was an ellipse measuring 10 by 4 kilometers, with the long axis parallel to the fireball’s trajectory. Biot’s report persuaded most scientists that rocks from the sky were both real and extraterrestrial.”

The French physicist Jean-Baptiste Biot (1774-1862) also did work on the polarization of light and contributed to electromagnetic theory. He accompanied Joseph Gay-Lussac in 1804 on the first balloon flight undertaken for scientific purposes, reaching a height of several thousand meters while doing research on the Earth’s magnetism and atmosphere. The Montgolfier brothers had performed the first recorded manned balloon flight in France in 1783. The French meteorologist Léon Teisserenc de Bort (1855-1913) later discovered the stratosphere, the layer of the Earth’s atmosphere above the troposphere, which contains most of the clouds and weather systems, by using unmanned, instrumented balloons.

Unmanned balloons, usually filled with lighter-than-air gas like helium, are still used for meteorological, scientific and even military purposes; manned ballooning becomes dangerous in the upper reaches of the atmosphere due to the lack of oxygen. The Swiss inventor Auguste Piccard (1884-1962), who served as a professor of physics in Brussels, created balloons equipped with pressurized cabins and set a number of records during the 1930s, reaching an altitude of 23,000 meters.

His son Jacques Piccard (1922-2008), a Brussels-born Swiss oceanographer, explored the deepest reaches of the world’s oceans when he and the American explorer Don Walsh (born 1931) in 1960 used the bathyscaphe Trieste to travel 10,900 meters down to the bottom of the Challenger Deep. The justly famous French ocean explorer Jacques-Yves Cousteau (1910-1997) invented the aqualung together with the engineer Émile Gagnan (1900-1979) in 1943. Cousteau was also a pioneer in the development of underwater cameras.

The Austrian physicist Victor Francis Hess (1883-1964), educated at the universities of Graz and Vienna, in a series of balloon ascents in 1911-13 established that radiation increased with altitude. This high-energy radiation originating in outer space is now called cosmic rays:

“The first explorers of radioactivity found it in air and water as well as in the earth. Shielded electroscopes placed out of doors lost their charges as if they were exposed to penetrating radiation. Since leak diminished with height, physicists assigned its cause to rays emanating from the earth. As they mounted ever higher, however, from church steeples to the Eiffel Tower to manned balloons, the leak leveled off or even increased. In 1912-1913, Victor Hess of the Radium Institute of Vienna ascertained that the ionization causing the leak declined during the first 1,000 m (3,280 ft) of ascent, but then began to rise, to reach double that at the earth’s surface at 5,000 m (16,400 ft). Hess found further, by flying his balloon at night and during a solar eclipse, that the ionizing radiation did not come from the sun. He made the good guess — it brought him the Nobel Prize in physics in 1936 — that the radiation came from the great beyond.”

Astrophysics as a scientific discipline was born in nineteenth century Europe; in Germany in 1859 with the work of Gustav Kirchhoff. It could not have happened before as the crucial combination of chemical knowledge, telescopes and photography did not exist much earlier. In the first qualitative chemical analysis of a celestial body, Kirchhoff compared laboratory spectra from thirty known elements to the Sun’s spectrum and found matches for iron, calcium, magnesium, sodium, nickel and chromium.
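The logic of such a comparison can be sketched very simply: the wavelengths of bright laboratory emission lines are checked against the dark absorption lines in the solar spectrum, and a match for every line of an element argues for its presence. In the toy Python example below the sodium and hydrogen wavelengths are the familiar textbook values, while the “solar” line list is invented for illustration; none of this is Kirchhoff’s actual data.

```python
# A toy sketch of matching laboratory emission lines (nanometers) against dark
# absorption lines measured in a solar spectrum. The sodium D lines and the
# hydrogen Balmer lines are textbook values; the "solar" list is invented.

LAB_LINES = {
    "sodium": [589.0, 589.6],
    "hydrogen": [656.3, 486.1],
}

SOLAR_ABSORPTION_LINES = [656.3, 589.0, 589.6, 527.0, 486.1]   # illustrative only

def elements_present(lab_lines, solar_lines, tolerance=0.2):
    """Report elements whose laboratory lines all coincide with solar dark lines."""
    found = []
    for element, lines in lab_lines.items():
        if all(any(abs(l - s) <= tolerance for s in solar_lines) for l in lines):
            found.append(element)
    return found

print(elements_present(LAB_LINES, SOLAR_ABSORPTION_LINES))   # ['sodium', 'hydrogen']
```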

In case we forget what a huge step this was we can recall that as late as in the sixteenth century AD in Mesoamerica, the region in the Americas usually credited with having the most sophisticated astronomical traditions before European colonial contact, thousands of people had their hearts ripped out every year to please the sun god and ensure that the Sun would keep on shining. A little over three centuries later, European scholars could empirically study the composition of the Sun and verify that it was essentially made of the same stuff as the Earth, only much hotter. Within the next few generations, European and Western scholars would proceed to explain how the Sun and the stars generate their energy and why they shine. By any yardstick, this represents one of the greatest triumphs of the human mind in history.

By the twenty-first century, astronomy has progressed to the point where we can use spectroscopy not only to determine the chemical composition of stars other than our own, but even to study the atmospheres of planets orbiting other stars. In the early 1990s, the first planets beyond our own Solar System, extrasolar planets or exoplanets, were discovered. In 2008 it was reported that the Hubble Space Telescope had made the first detection ever of an organic molecule, methane, in a planet orbiting another star. The American Kepler Mission space telescope was launched into orbit in 2009, designed to find Earth-size exoplanets.

Although we now have the technological capability to send probes to physically explore the other planets in our own Solar System, as the Americans in particular have done in recent decades, it will be impossible for us in the foreseeable future to visit the planets of other star systems light-years away. The only way we have of studying them is through telescopes and spectroscopy.

The Englishman Sir William Huggins (1824-1910) was excited by Kirchhoff’s discoveries and tried to apply his methods to other stars as well. He was assisted by his astronomer wife Margaret Lindsay Huggins (1848-1915). Through spectroscopic methods he then showed that stars are composed of the same elements as the Sun and the Earth.

According to his Bruce Medal biography, “William Huggins was one of the wealthy British ‘amateurs’ who contributed so much to 19th century science. At age 30 he sold the family business and built a private observatory at Tulse Hill, five miles outside London. After G.R. Kirchhoff and R. Bunsen’s 1859 discovery that spectral emission and absorption lines could reveal the composition of the source, Huggins took chemicals and batteries into the observatory to compare laboratory spectra with those of stars. First visually and then photographically he explored the spectra of stars, nebulae, and comets. He was the first to show that some nebulae, including the great nebula in Orion, have pure emission spectra and thus must be truly gaseous, while others, such as that in Andromeda, yield spectra characteristic of stars. He was also the first to attempt to measure the radial velocity of a star. After 1875 his observations were made jointly with his talented wife, the former Margaret Lindsay Murray.”

Spectroscopic techniques were eventually applied to measure motion in the line of sight. Christian Doppler (1803-1853), an Austrian physicist, described the phenomenon now known as the Doppler effect: that the observed frequency of a wave depends on the relative speed of the source and the observer. For instance, we have all heard how the sound of a car engine apparently changes as it moves towards us and then away from us again.

Doppler predicted that this phenomenon should apply to all waves, not only to sound but also to light, and in 1842 argued that motion of a source of light should shift the lines in its spectrum. A more correct explanation of the principle involved was published by the French physicist Armand-Hippolyte-Louis Fizeau (1819-1896) in 1848. As the excellent reference book The Oxford Guide to the History of Physics and Astronomy states:

“Not all scientists accepted the theory. In 1868, however, Huggins found what appeared to be a slight shift for a hydrogen line in the spectrum of the bright star Sirius, and by 1872 he had more conclusive evidence of the motion of Sirius and several other stars. Early in the twentieth century Vesto M. Slipher at the Lowell Observatory in Arizona measured Doppler shifts in spectra of faint spiral nebulae, whose receding motions revealed the expansion of the universe. Instrumental limitations prevented Huggins from extending his spectroscopic investigations to other galaxies. Astronomical entrepreneurship in America’s gilded age saw the construction of new and larger instruments and a shift of the center of astronomical spectroscopic research from England to the United States. Also, a scientific education became necessary for astronomers, as astrophysics predominated and the concerns of professional researchers and amateurs like Huggins diverged. George Ellery Hale, a leader in founding the Astrophysical Journal in 1895, the American Astronomical and Astrophysical Society in 1899, the Mount Wilson Observatory in 1904, and the International Astronomical Union in 1919, was a prototype of the high-pressure, heavy-hardware, big-spending, team-organized scientific entrepreneur.”
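The radial-velocity measurements described above follow from the Doppler relation: for speeds small compared with that of light, v ≈ c·(λ observed − λ rest)/λ rest, with a positive result meaning recession. The Python sketch below uses illustrative numbers, not Huggins’s actual observations of Sirius.

```python
# A hedged sketch of turning a Doppler shift in a spectral line into a radial
# velocity, using the non-relativistic approximation v = c * (obs - rest) / rest.

C = 299_792.458   # speed of light, km/s

def radial_velocity(rest_wavelength_nm, observed_wavelength_nm):
    """Positive result = receding (redshift), negative = approaching (blueshift)."""
    return C * (observed_wavelength_nm - rest_wavelength_nm) / rest_wavelength_nm

# A hydrogen line at rest at 656.3 nm observed at 656.4 nm implies recession
# at roughly 46 km/s:
print(radial_velocity(656.3, 656.4))
```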

The astronomer George Ellery Hale (1868-1938) represented the dawn of a new age, not only because he was American and the United States would soon emerge as a leading center of astronomical research (although scientifically and technologically speaking clearly an extension of the European tradition), but at least as much because he personified the increasing professionalization of astronomy and indeed of science in general.

There is still room for non-professional astronomers. Even today, amateurs can occasionally spot new comets before the professionals do, for instance. Yet it is a safe bet that never again will we see a situation like that of the eighteenth century, when William Herschel, a musician by training and profession, was one of the leading astronomers of his age. From a world of a few enlightened (and often wealthy) gentlemen in the eighteenth century would emerge a world of many trained scientists with often very expensive and complicated equipment in the twentieth century. The nineteenth century was a transitional period. As the example of Huggins demonstrates, amateur astronomers were to enjoy their last golden age.

The Englishman William Lassell (1799-1880) made good money from brewing beer and used some of it to indulge his interest in astronomy. Liverpool was the fastest-growing port in Europe, and the first steam-hauled passenger railway ran from Liverpool to Manchester in 1830. The Industrial Revolution, where Britain played a leading role, was a golden age for the beer-brewing industry. The combination of beer and science is not unique. The seventeenth century Polish astronomer Hevelius came from a brewing family and the English brewer James Joule (1818-1889) studied the nature of heat and the conservation of energy.

In 1846 William Lassell discovered Triton, the largest moon of Neptune, shortly after the planet itself had been mathematically predicted by the French mathematician Urbain Le Verrier (1811-1877) and spotted by the German astronomer Johann Gottfried Galle (1812-1910). Lassell later discovered two moons around Uranus, Ariel and Umbriel, and a satellite of Saturn, Hyperion, although Hyperion was spotted independently by the American astronomers George Bond (1825-1865) and William Bond (1789-1859) as well. The American Asaph Hall (1829-1907) discovered the tiny moons of Mars, Deimos and Phobos, in 1877.

The German amateur astronomer Samuel Heinrich Schwabe (1789-1875) in 1843, based on daily observation records between 1826 and 1843, announced his discovery that sunspots vary in number in a cycle of about ten or eleven years. The English amateur astronomer Richard Carrington (1826-1875) found by observing the motions of sunspots that the Sun rotates faster at the equator than near the poles.

The American astronomer Edward Barnard (1857-1923) introduced wide-field photographic methods to study the structure of the Milky Way. The faint Barnard’s Star, which he discovered in 1916, has the largest proper motion of any known star. At a distance of about six light-years it is the closest neighboring star to the Sun after the members of the Alpha Centauri system, which lies about 4.4 light-years away. In 1892 he discovered Amalthea, the first moon of Jupiter to be found since the four largest ones spotted by Galileo Galilei in 1610.

Four centuries after Galileo first discovered the “Galilean satellites” of Jupiter we know a lot more about them than we did in the past, thanks to better telescopes but above all visits from several American space probes. Io is the most volcanically active body in our Solar System. The large Ganymede is the only moon with its own magnetic field. A liquid ocean may lie beneath the frozen crust of Europa, and maybe beneath the crusts of Callisto and Ganymede, too. Both Jupiter and Saturn are now known to possess literally dozens of natural satellites.

The Danish-Irish astronomer John Dreyer (1852-1926) in 1888 published the monumental New General Catalogue of Nebulae and Clusters of Stars, whose numbers are still in wide use today as a reference list of star clusters, nebulae and galaxies. He based his work on earlier lists compiled by the Herschel family of astronomers. His NGC system gradually replaced the Messier catalog, created by the French astronomer Charles Messier (1730-1817), who was the first to compile a systematic catalog of nebulae and star clusters, in the 1760s and 1770s. Some objects are known both by their Messier and their NGC catalog numbers; for instance, the Crab Nebula is called both M1 and NGC 1952. It is the likely remnant of the bright supernova of AD 1054, which was recorded by Chinese and Middle Eastern astronomers.

The Italian astronomer Giovanni Schiaparelli (1835-1910) explained the regular meteor showers as the result of the dissolution of comets and proved it for the Perseids. He observed Mars and named its “seas” and “continents”. His alleged observations of Martian canali, “channels” mistranslated into English as “canals”, stimulated the American businessman and astronomer Percival Lowell (1855-1916) to found his observatory and search for life on Mars. The young American astronomer Clyde Tombaugh (1906-1997) discovered Pluto in 1930 while working at the Lowell Observatory. He used photographic plates, which remained in use in astronomy and particle physics long after they had gone out of popular use, but by the twenty-first century astronomers, too, had switched to high-resolution digital cameras.

The Kuiper belt is a disc of small, icy bodies that revolve around the Sun beyond the orbit of the planet Neptune. It is named after the Dutch American astronomer Gerard Kuiper (1905-1973) but is sometimes called the Edgeworth-Kuiper belt after Kuiper and the Irish astronomer Kenneth Edgeworth (1880-1972) who proposed the existence of icy bodies beyond Neptune in the 1940s.

Kuiper was a highly influential astronomer. In 1944 he confirmed the presence of a methane atmosphere around Saturn’s moon Titan. In 1948 he correctly predicted that carbon dioxide is an important component of the atmosphere of Mars. He discovered the fifth moon of Uranus, Miranda, and the second moon of Neptune, Nereid, and also contributed to the planning of NASA’s program of lunar and planetary exploration.

Overlapping the Kuiper belt but extending further outwards is the scattered disc. Objects here were originally native to the Kuiper belt but were ejected into erratic orbits by the gravitational influence of Neptune and the other gas giants. The dynamic but sparsely populated scattered disc is believed to be the source of many of the comets that sometimes visit our part of the Solar System. The entire trans-Neptunian region is believed to be inhabited by primitive leftovers from the nebula of dust and gas that formed the Sun and the planets more than four and a half billion years ago. All the various objects that orbit the Sun at a greater average distance than the planet Neptune are collectively known as trans-Neptunian objects, or TNOs.

The scattered disc object Eris, which is slightly more massive than Pluto, was spotted in 2005 by a team led by Michael E. Brown (born 1965), professor of planetary astronomy at the California Institute of Technology. Anticipating the possibility that there could be more objects of similar size out there, the International Astronomical Union (IAU) in 2006 after heated discussions defined Eris as a “dwarf planet” along with Pluto, which consequently lost its position as the ninth planet of our Solar System. Brown’s team has discovered many distant bodies orbiting the Sun, among them the object Sedna in 2003.

According to the website of the Planetary Society, the existence of a belt of objects orbiting the outer reaches of our Solar System has been theorized since Pluto’s discovery in 1930:

“These objects would be primitive bodies, leftovers from the formation of the solar system, in a region too cold and sedate for planetary formation to proceed. For a long time Pluto and Charon were the only bodies known to inhabit this region of the solar system. But beginning in 1992 with the discovery of 1992 QB1, the observed population of this belt has grown almost to a thousand objects. The Kuiper belt spans a region of the solar system outside the orbit of Neptune, from about 30 to 50 Astronomical Units (AU). This region is close enough to Neptune that all of the Kuiper belt objects are considered to be under Neptune’s gravitational influence. Almost no objects have been observed beyond 50 AU, though astronomers should be able to detect them if they exist. The 50-AU boundary is referred to as the ‘Kuiper cliff.’ Whether the Kuiper cliff represents the outer boundary of the original planetary nebula, or whether it is merely the inner edge of a large ‘Kuiper gap’ extending at least to 70 or 80 AU, is not known. (The most distant known trans-Neptunian object, Sedna, has a perihelion of 76 AU, outside Neptune’s gravitational influence.)”

An Astronomical Unit (AU) equals the distance between the Sun and the Earth, slightly less than 150 million kilometers. Neptune orbits the Sun at about 30 AU. The most distant man-made object, the American spacecraft Voyager 1, as of 2009 was approaching a distance of 110 AU from the Sun. One light-year, the distance that light travels in a vacuum in a year, is 9.46 trillion (a million times a million) kilometers, or more than 63 thousand AU. Professional astronomers use the unit parsec more frequently than the light-year; one parsec is about 3.26 light-years. Our Milky Way Galaxy is at least 100 thousand light-years in diameter, and our large galactic neighbor the Andromeda Galaxy is more than 2 million light-years away. The Sun’s radius is approximately 109 times that of the Earth; it has 333 thousand times more mass than the Earth and contains 99.9% of the total known mass of our Solar System.
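The relationships between these units can be restated in a few lines of arithmetic. The constants in the Python sketch below are standard rounded values, and the Andromeda figure simply reuses the distance quoted above.

```python
# A small sketch tying together the distance units quoted above.

AU_KM = 149_597_870.7            # one Astronomical Unit in kilometers
LIGHT_YEAR_KM = 9.4607e12        # one light-year in kilometers
PARSEC_LY = 3.2616               # one parsec in light-years

print(LIGHT_YEAR_KM / AU_KM)      # ~63,240 AU in one light-year
print(PARSEC_LY * LIGHT_YEAR_KM)  # one parsec, ~3.09e13 km
print(2_000_000 * LIGHT_YEAR_KM)  # distance to the Andromeda Galaxy, ~1.9e19 km
```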

I could add that the numbers I quote here regarding the size of the universe are primarily the result of research by Western astronomers. Although some of the ancient Greeks such as Eratosthenes and Hipparchus could make fairly realistic estimates of the size of the Earth and its distance to the Moon, which was in itself a major achievement and as far as I know unique among the ancient cultures, the true scale of our Solar System was worked out during the European astronomical revolution between the seventeenth and the nineteenth centuries.

The Dutch astronomer Jan Oort (1900-1992) theorized that the Sun is surrounded by a sphere of cometary nuclei. This so-called Oort cloud is believed to be exceedingly distant from the Sun, at 50 or perhaps 100 thousand AU, or more than a light-year away. The latter figure is about a quarter of the distance between our Sun and the closest neighboring star and near the limit of the Sun’s gravitational pull. These objects can be affected by the gravity of other stars and be lost to interstellar space or pulled into the inner regions of our Solar System.

The gifted Estonian astronomer Ernst Öpik (1893-1985) suggested the existence of such a region as early as 1932, and Jan Oort did so independently in 1950. For this reason, the Oort cloud is sometimes referred to as the Öpik-Oort cloud. It is thought that long-period comets, with orbits lasting thousands of years, originate here, but no Öpik-Oort cloud object has so far been directly observed. The body Sedna is believed to be too far away from Neptune to be influenced by its gravity and may have been affected by some unknown and very distant planetary-sized object, but as of 2009 this remains speculation.

The English astronomer Sir Joseph Norman Lockyer (1836-1920) and the French astronomer Pierre Janssen (1824-1907) are credited with discovering helium in 1868 through studies of the solar spectrum. Helium (from Greek helios for the Sun) is thus the only element (so far) discovered in space before being discovered on Earth.

Photography made possible a way of recording and preserving images of the spectra of stars. The Italian Catholic (Jesuit) priest, meteorologist and astrophysicist Pietro Angelo Secchi (1818-1878) is considered the discoverer of the principle of stellar classification. When the Jesuits were expelled in 1848, Secchi visited England and the United States. He became professor of astronomy and director of the observatory at the Roman College when he was allowed to return to Rome in 1849. After the discovery of spectrum analysis by Kirchhoff and Bunsen in 1859, Secchi was among the first to investigate closely the spectra of Uranus and Neptune. On an expedition to Spain to observe the total eclipse of 1860 he “definitively established by photographic records that the corona and the prominences rising from the chromosphere (i.e. the red protuberances around the edge of the eclipsed disc of the sun) were real features of the sun itself, and not optical delusions or illuminated mountains on the moon.” In the 1860s he began collecting the spectra of stars, accumulating some 4,000 stellar spectrograms, and classified them according to spectral characteristics. This was later expanded into the Harvard classification system, based on the star’s surface temperature.

The technical problem of producing an artificial eclipse to study the Sun, which because of its bright light is difficult to study directly through a telescope, was solved by the French astronomer Bernard Lyot (1897-1952), an expert in optics. In 1930 he invented the coronagraph, which permitted extended observations of the Sun’s coronal atmosphere. By 1931 he was obtaining photographs of the corona and its spectrum. He found new spectral lines in the corona and made the first motion pictures of solar prominences.

The Harvard system was developed from the 1880s onwards. Several of its creators were women. The astronomer Edward Pickering (1846-1919) at the Harvard College Observatory hired a number of assistants, among them the Scottish-born Williamina Fleming (1857-1911) and especially the Americans Annie Jump Cannon (1863-1941) and Antonia Maury (1866-1952), to classify the prism spectra of hundreds of thousands of stars. Cannon developed a classification system based on temperature where stars, from hot to cool, were put in the categories O, B, A, F, G, K and M, and Maury developed a somewhat different system.

Pickering and the German astronomer Hermann Karl Vogel (1841-1907) independently discovered spectroscopic binaries: double stars too close together to be resolved by direct observation, but shown through the analysis of their light to be two stars revolving around one another. Vogel pioneered the use of the spectroscope in astronomy.

Another system was worked out during the 1940s by the American astronomers William Wilson Morgan (1906-1994) and Philip Keenan (1908-2000) in cooperation with Edith Kellman. They introduced stellar luminosity classes. For the first time, astronomers could determine the luminosity of stars directly by analyzing their spectra, their stellar fingerprints. This is known as the Yerkes or MK (after Morgan and Keenan) spectral classification system.

Edward Pickering did not favor Maury’s classifications, but the Danish astronomer Ejnar Hertzsprung (1873-1967) realized their value and adopted them for his own use. According to his Bruce Medal profile, “Ejnar Hertzsprung studied chemical engineering in Copenhagen, worked as a chemist in St. Petersburg, and studied photochemistry in Leipzig before returning to Denmark in 1901 to become an independent astronomer. In 1909 he was invited to Göttingen to work with Karl Schwarzschild, whom he accompanied to the Potsdam Astrophysical Observatory later that year. From 1919-44 he worked at the Leiden Observatory in the Netherlands, the last nine years as director. He then retired to Denmark but continued measuring plates into his nineties. He is best known for his discovery that the variations in the widths of stellar lines discovered by Antonia Maury reveal that some stars (giants) are of much lower density than others (main sequence or ‘dwarfs’) and for publishing the first color-magnitude diagrams.”

Hertzsprung discovered the relationship between the brightness of a star and its color, but published these findings in a photographic journal, where they went largely unnoticed by astronomers. The American astronomer Henry Norris Russell (1877-1957), who spent six decades at Princeton University as student, professor and observatory director, made essentially the same discovery as Hertzsprung, but published it in 1913 in a journal read by astronomers and presented the findings in a graph, which made them easier to understand. “With Walter S. Adams Russell applied Meghnad Saha’s theory of ionization to stellar atmospheres and determined elemental abundances, confirming Cecilia Payne-Gaposchkin’s discovery that the stars are composed mostly of hydrogen.”

The Indian astrophysicist Meghnad Saha (1893-1956) provided a theoretical basis for relating the spectral classes of stars to surface temperatures. The temperature of a star is closely related to its color. The Hertzsprung-Russell diagram helped give astronomers their first insight into the internal workings of stars and their lifespan and became a cornerstone of modern stellar astrophysics.

The study of electromagnetism began when the scholar Hans Christian Ørsted (1777-1851) from Denmark, sometimes called Orsted or Oersted in scientific literature, discovered in 1820 that there is a magnetic effect associated with an electric current. This announcement led to intensive research among scientists all over Europe. The French physicist André-Marie Ampère (1775-1836) soon afterwards published Ampère’s law, a mathematical expression of the relationship Ørsted had found between magnetism and electricity. In 1827 the German physicist Georg Ohm (1789-1854) published Ohm’s law: the current through a conductor equals the voltage across it divided by its resistance. The French mathematician Siméon Denis Poisson (1781-1840) applied mathematical theory to electricity and magnetism as well as to other branches of physics.
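
As a minimal illustration of Ohm’s law, the short Python snippet below (with arbitrary example values of my own choosing) computes the current through a resistor:

    # Ohm's law: current I equals voltage V divided by resistance R.
    def current(voltage_volts, resistance_ohms):
        return voltage_volts / resistance_ohms

    # Example with arbitrary values: a 9-volt battery across a 450-ohm resistor.
    print(current(9.0, 450.0))  # 0.02 ampere, i.e. 20 milliamperes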

The French physicist François Arago (1786-1853) found that when a magnetic compass needle was suspended by a thread over a copper disc and the disc was rotated, the needle was deflected. Arago was influential in the development of the understanding of light and suggested in 1845 that Urbain Le Verrier investigate anomalies in the motion of Uranus, which led to the discovery of Neptune in 1846. More advances in electromagnetism were made by the brilliant English physicist Michael Faraday (1791-1867). John Gribbin explains:

“The key experiment took place on 29 August 1831. To his surprise, Faraday noticed that the galvanometer needle flickered just as the first coil was connected to the battery, then fell back to zero. When the battery was being disconnected, it flickered again. When a steady electric current was flowing, producing a steady magnetic influence in the ring, there was no induced electric current. But during the brief moment when the electric current was changing (either up or down) and the magnetic influence was also changing (either increasing or decreasing) there was an induced current. In further experiments, Faraday soon found that moving a bar magnet in and out of a coil of wire was sufficient to make a current flow in the wire. He had discovered that just as moving electricity (a current flowing in a wire) induces magnetism in its vicinity, so a moving magnet induces an electric influence in its vicinity, a neatly symmetrical picture which explains Arago’s experiment, and also why nobody had ever been able to induce an electric current using static magnets. Along the way, having already in effect invented the electric motor, Faraday had now invented the electric generator, or dynamo.”

He carried out groundbreaking research in electrochemistry, popularizing the terms “electrolyte,” “electrode,” “anode” and “cathode.” According to the quality website Molecular Expressions, “Faraday succeeded in discovering the aromatic hydrocarbon benzene, built the first electric motor, and his studies spawned the vast field of cryogenics. He also invented the transformer and dynamo, and then established the principle of electromagnetic induction in 1831 to explain his experimental findings. By 1832, Faraday had also revealed the laws of electrolysis that bear his name. In 1845, Faraday began studying the influence of magnetic fields on plane-polarized light waves, and discovered that the plane of vibration is rotated when the light path and the direction of the applied magnetic field are parallel, a phenomenon now known as the Faraday effect. In his attempts to prove that all matter reacts to a magnetic force, Faraday established the classes of materials known as paramagnetic and diamagnetic, and ultimately revolutionized contemporary notions of space and force.”

The Frenchman Claude Chappe (1763-1805) in the 1790s invented the semaphore telegraph, an optical signaling system which was important during the Napoleonic Wars. However, the real telecommunications revolution, which would lead from the electrical telegraph via radio and television to the Internet, began with the discovery of electromagnetism.

The English electrical engineer William Sturgeon (1783-1850) invented the first electromagnet in 1825, which was soon improved upon by Michael Faraday and the American scientist Joseph Henry (1797-1878). At Göttingen, Germany, the physicist Wilhelm Eduard Weber (1804-1891) and the brilliant mathematician Carl Friedrich Gauss (1777-1855) in 1833 built a telegraph to connect the physics laboratory with the astronomical observatory where Gauss worked, the first practical telegraph to operate anywhere in the world. Weber and Gauss did careful studies of terrestrial magnetism and made sensitive magnetometers to measure magnetic fields. The electrical telegraph was further developed in the 1830s and 40s by Samuel Morse with Alfred Vail (1807-1859) in the USA and Charles Wheatstone with William Fothergill Cooke (1806-1879) in Britain.

The development of various uses of electricity was rapid during the following generations. Late in the nineteenth century, electric trains were running in Germany, Britain and the USA. The Serbian-born, later American, inventor and electrical engineer Nikola Tesla (1856-1943) made many valuable contributions in the fields of electricity, magnetism and radio. The physicist James Clerk Maxwell from Edinburgh, Scotland, was inspired by Faraday to study electromagnetism, and eventually concluded that light was a form of electromagnetic radiation. Before I explain that, I have to talk about measurements of the speed of light.

I have consulted many sources, and as far as I can gather, no known scholar in any culture before seventeenth-century Europe had ever made valid scientific measurements of the speed of either sound or light. In the eleventh century, the Persian scholar al-Biruni is said to have believed that the speed of light is much greater than the speed of sound, but this can easily be observed during a thunderstorm, when the flash is always seen before you hear the bang. Alhazen is claimed to have stated that the speed of light, while extremely great, is finite, a view also attributed to the German medieval scholar Albertus Magnus. I’m sure it is possible to find examples of Chinese, Indian or other scholars speculating on this as well, but as long as no accurate measurements were made, this insight remained without practical significance.

Some European scholars during the Scientific Revolution, among them René Descartes, still believed that the speed of light was infinite. As a matter of fact, light from the Sun takes more than eight minutes to reach the Earth. The first scientifically valid measurement of the speed of light which yielded a result that was in the right ballpark was made by the astronomer Ole Rømer from Denmark. He developed one of the first scientific temperature scales as well.
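
A quick back-of-the-envelope check, in Python, of the figure just mentioned: if sunlight needs roughly 8.3 minutes (about 499 seconds) to cross one Astronomical Unit, the implied speed comes out close to the modern value. The rounded numbers are my own illustrative inputs.

    # Speed of light estimated from the Sun-Earth light travel time.
    AU_KM = 149.6e6          # Sun-Earth distance in kilometers (approx.)
    TRAVEL_TIME_S = 499      # light travel time for 1 AU, roughly 8.3 minutes

    speed_km_per_s = AU_KM / TRAVEL_TIME_S
    print(round(speed_km_per_s))  # about 299,800 km/s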

Daniel Gabriel Fahrenheit (1686-1736), a German physicist and maker of scientific instruments who spent most of his life in the Netherlands, visited Rømer in 1708 and improved on his temperature scale, the result being the Fahrenheit temperature scale still in use today in a few countries. In addition to this, Fahrenheit invented the mercury-in-glass thermometer, the first accurate thermometer, in 1714.

The most widely used temperature scale is the one created by Anders Celsius (1701-1744), a Swedish professor of astronomy at Uppsala University. In 1742 he proposed the temperature scale which now carries his name (Celsius himself called it the centigrade scale), but he used 100° Celsius for the freezing point of water and 0° C for the boiling point. This scale was reversed to its now-familiar form after his death by the Swedish botanist Carl Linnaeus. Celsius also did research into the phenomenon known as the aurora borealis and suggested that it was connected to changes in the magnetic field of the Earth.
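
The two scales mentioned here are related by a simple linear formula; the short Python sketch below (my own illustration) converts the fixed points of water between them.

    # Celsius-to-Fahrenheit conversion: F = C * 9/5 + 32.
    def celsius_to_fahrenheit(c):
        return c * 9.0 / 5.0 + 32.0

    print(celsius_to_fahrenheit(0))    # 32.0  (freezing point of water)
    print(celsius_to_fahrenheit(100))  # 212.0 (boiling point of water)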

Auroras in the Northern Hemisphere are called northern lights; in the Southern Hemisphere southern lights. They appear chiefly as arcs, clouds and streaks which move across the night sky. The most common color is green, although red and other colors may occur, too. They are associated with the solar wind, the constant flow of electrically charged particles from the Sun. Some of them get trapped by the Earth’s magnetic field where they tend to move toward the magnetic poles and release some of their energy.

Auroras have been observed on some of the other planets in our Solar System as well. They become more frequent and spectacular the closer you get to the Arctic or Antarctic regions, which is one of the reasons why their true nature was worked out in northern Europe.

The first substantially correct theory of the origin of auroras, presented after the electromagnetic revolution, was created by the Norwegian physicist Kristian Birkeland (1867-1917). Birkeland grew up in Kristiania, as the city of Oslo was then called. He undertook expeditions to study the aurora currents in the late 1890s and early 1900s and hypothesized that they were caused by the interaction of energetic particles from outside of the Earth’s atmosphere with atoms of the upper atmosphere. Birkeland managed to reproduce the Solar System in miniature in his laboratory. He placed a magnetized sphere, a “terrella” representing the Earth, inside a vacuum chamber, aimed a beam of electrons towards it and could see that the electrons were steered by the magnetic field to the vicinity of the terrella’s magnetic poles. His ideas were nevertheless rejected by most scientists at the time. In a drive to finance his often expensive research he teamed up with the Norwegian industrialist Samuel Eyde (1866-1940) and founded the company Norsk Hydro, inventing the first industrial-scale method for extracting nitrogen-based fertilizer from the air. However, by the 1920s this method was no longer able to compete with the German Haber-Bosch process.

The Swedish physicist Hannes Alfvén (1908-1995), one of the founders of plasma physics and magnetohydrodynamics, the study of plasmas in magnetic fields, supported Birkeland’s ideas, yet positive proof that Birkeland’s theory was correct was only obtained with the space age and observations made with satellites during the 1960s and 70s.

One large piece of the puzzle was the discovery of zones of highly energetic charged particles trapped in the Earth’s magnetic field. After the Soviet Union in October 1957 launched the world’s first artificial satellite into orbit, Sputnik 1, the Americans launched their own Explorer 1 in February 1958. Its Geiger counter detected a powerful radiation belt surrounding the Earth. This was the first major scientific discovery of the space age. The belts were named the Van Allen radiation belts after the American space scientist James Van Allen (1914-2006). As we know today, there are two different radiation belts surrounding the Earth, and a third, weaker one which comes and goes depending on the level of solar activity.

The enormous conceptual leap of realizing that the speed of light, although very large, is not infinite stemmed from the work of Ole Rømer. According to John Gribbin:

“Rømer’s greatest piece of work was achieved as a result of his observations of the moons of Jupiter, carried out in conjunction with Giovanni Cassini (who lived from 1625 to 1712 and is best remembered for discovering a gap in the rings of Saturn, still known as the Cassini division)….Rømer predicted, on the basis of a pattern he had discovered in the way the eclipse times varied, that an eclipse of Jupiter’s innermost Galilean moon, Io, due on 9 November 1679, would occur ten minutes later than expected according to all earlier calculations, and he was sensationally proven right. Using the best estimate available of the diameter of the Earth’s orbit, Rømer calculated from this time delay that the speed of light must be (in modern units) 225,000 kilometres per second. Using the same calculation but plugging in the best modern estimate of the size of the Earth’s orbit, Rømer’s own observations give the speed of light as 298,000 kilometres per second. This is stunningly close to the modern value for the speed of light, 299,792 kilometres per second, given that it was the first measurement, ever, of this speed.”

The Italian-French scholar Giovanni Cassini (1625-1712) was one of the leading astronomers of his time. He was invited by King Louis XIV of France to Paris in 1669 to join the recently formed Académie Royale des Sciences. Cassini assumed the directorship of the Observatory in Paris after it was completed in 1671. In the 1670s and 80s he discovered four of Saturn’s moons: Iapetus, Rhea, Tethys and Dione. He was able to measure Jupiter’s and Mars’s rotational periods and determined the parallax of Mars, which allowed the calculation of the distance to Mars and the Earth-Sun distance. After Cassini’s work, European astronomers for the first time in human history had a reasonably realistic understanding of the size of the Solar System. This was refined further in the eighteenth century.

During the nineteenth century, these methods for measuring the speed of light were supplemented by increasingly precise non-astronomical measurements. Gribbin again:

“At the end of the 1840s, the French physicist Armand Fizeau (1819-1896)…had made the first really accurate ground-based measurement of the speed of light. He sent a beam of light through a gap (like the gaps in the battlements of a castle) in a rotating toothed wheel, along a path 8 kilometres long between the hilltop of Suresnes and Montmartre, off a mirror and back through another gap in the toothed wheel…. Fizeau was able to measure how long it took for light to make the journey, getting an estimate of its speed within 5 per cent of the modern determination….Léon Foucault (1819-1868), who had worked with Fizeau on scientific photography in the 1840s (they obtained the first detailed photographs of the surface of the Sun together), was also interested in measuring the speed of light and developed an experiment devised by Arago (and based on an idea by Wheatstone)….in 1850 Foucault first used this method to show (slightly before Fizeau did) that light travels more slowly in water than in air….By 1862 he had refined the experiment so much that he came up with a speed of 298,005 km/s, within 1 per cent of the modern value.”
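
To see how Fizeau’s toothed wheel yields a number, here is a rough Python reconstruction of my own. The wheel parameters (720 teeth, a baseline of about 8.6 kilometres, and a rotation rate of roughly 12.6 turns per second at the first eclipse of the returning beam) are commonly cited values supplied here purely for illustration; the logic is simply that the returning light is blocked when the wheel advances by half a tooth spacing during the round trip.

    # Rough reconstruction of Fizeau's 1849 toothed-wheel estimate.
    teeth = 720              # number of teeth on the wheel (commonly cited value)
    distance_m = 8633.0      # one-way distance to the mirror in metres (approx.)
    rotation_hz = 12.6       # rotation rate at which the return beam is blocked (approx.)

    # The beam is blocked when the wheel advances half a tooth spacing
    # during the light's round trip of 2 * distance_m.
    round_trip_time = 1.0 / (2 * teeth * rotation_hz)
    speed_m_per_s = 2 * distance_m / round_trip_time

    print(round(speed_m_per_s / 1000))  # roughly 313,000 km/s, a few per cent high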

That light travels more slowly through water than through air was a key prediction of all wave models of light, and it more or less put to rest Newton’s theory of light as a stream of particles, at least for a while. The new, accurate measurements of the speed of light were invaluable for Maxwell’s theory of electromagnetism and light. In 1862 he calculated that the speed of light and the speed of propagation of an electromagnetic field are the same. His book A Treatise on Electricity and Magnetism appeared in 1873 and included the four partial differential equations known as Maxwell’s Equations. Soon after, the German physicist Heinrich Hertz (1857-1894), a student of Hermann von Helmholtz, expanded and experimentally verified Maxwell’s electromagnetic theory of light. Hertz demonstrated the reality of radio waves in 1887. James E. McClellan and Harold Dorn:
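
The coincidence Maxwell noticed can be checked numerically: the speed at which electromagnetic disturbances propagate follows from two measured constants of electricity and magnetism. Here is a small Python check, using modern values for those constants rather than Maxwell’s own figures:

    # Speed of electromagnetic waves from the electric and magnetic constants.
    import math

    mu_0 = 4 * math.pi * 1e-7    # magnetic constant (vacuum permeability), SI units
    epsilon_0 = 8.854187817e-12  # electric constant (vacuum permittivity), SI units

    c = 1.0 / math.sqrt(mu_0 * epsilon_0)
    print(f"{c:.0f} m/s")  # about 299,792,458 m/s, the speed of light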

“Hertz worked exclusively within the tradition of nineteenth-century theoretical and experimental physics, but when the young Italian Guglielmo Marconi (1874-1937) first learned of Hertzian waves in 1894, he immediately began to exploit them for a practical wireless telegraphy, and by the following year he had produced a technology that could communicate over a distance of one mile. Marconi, who went on to build larger and more powerful systems, received his first patent in England in 1896 and formed a company to exploit his inventions commercially. In 1899 he sent his first signal across the English Channel, and in a historic demonstration in 1901 he succeeded with the first radio transmission across the Atlantic….in this instance the line between science and technology became so blurred that in 1909 Marconi received a Nobel Prize in physics for his work on wireless telegraphy. The case is also noteworthy because it illustrates that the outcome of scientific research and technological change often cannot be foreseen. What drove Marconi and his research was the dream of ship-to-shore communications. He had no prior notion of what we know as radio or the incredible social ramifications that followed the first commercial radio broadcasts in the 1920s.”

The German inventor Karl Ferdinand Braun (1850-1918), who developed the first cathode-ray tube oscilloscope in 1897, shared the Nobel Prize with Marconi. While Marconi is generally credited as the “inventor” of radio, other pioneers were investigating radio waves, too. In addition to Maxwell and Hertz they include above all Nikola Tesla, who was often at the forefront of electromagnetic research during this period, but also Sir Jagadish Chandra Bose (1858-1937) in then-British India and the Russian physicist and electrical engineer Alexander Popov (1859-1906), who built a radio receiver in the mid-1890s.

Marconi’s wireless telegraph was soon carried on a number of ships. After the RMS Titanic struck an iceberg on the night of 14 April 1912, its Marconi wireless operators called for help from other ships. Sadly, help arrived too late to avert the tragedy.

Commercial radio broadcasts began in the 1920s. Radio astronomy was fully developed from the 1950s, but its successful practice goes as far back as 1932. Karl G. Jansky (1905-1950), an American radio engineer at the Bell Telephone Laboratories, was studying interference on the newly inaugurated trans-Atlantic radio-telephone service when he discovered that some of the interference came from extraterrestrial sources. After studying the phenomenon he concluded that much of the radiation came from the Milky Way. He published his findings in 1933.

Grote Reber (1911-2002), a radio engineer in Chicago in the USA, was the first to follow up Jansky’s announcement. He constructed a 9-meter dish antenna in his back yard. His 1940 and 1944 publications of articles titled “Cosmic Static” in the Astrophysical Journal marked the beginning of deliberate radio astronomy. Reber remained a loner and an amateur all his life. He thus represented something of an anomaly in the increasingly professionalized astronomical community in that he virtually founded an important branch of astronomy without having any formal education in the field. Further advances were made after the Second World War by pioneers such as the American physicist John D. Kraus (1910-2004).

The German physicist Wilhelm Conrad Röntgen (1845-1923) discovered X-rays, electromagnetic radiation of very short wavelength and high frequency, in 1895, which earned him the first Nobel Prize in Physics in 1901 and triggered a wave of interest which facilitated the discovery of radioactivity. X-rays are invisible to the human eye but affect photographic plates. The French physicist Paul Villard (1860-1934) discovered gamma rays in 1900 while studying uranium and radium. They have the highest frequency and energy and the shortest wavelength in the electromagnetic spectrum and can cause serious damage to living organisms. They were first detected from astronomical sources in the 1960s. As with X-rays, gamma-ray observations are best made above the Earth’s absorbing atmosphere.

As late as the year 1800, before William Herschel discovered infrared radiation and Johann Wilhelm Ritter discovered ultraviolet radiation, “light” still meant radiation that is visible to the human eye, nothing more. During the nineteenth century, our understanding of light was transformed into that of an electromagnetic spectrum stretching from radio waves via infrared radiation, visible light and ultraviolet radiation to X-rays and gamma rays. Obviously, when referring to “visible light” we mean visible to humans. While human beings cannot see UV radiation, we know that quite a few animals, including many birds, can distinguish colors in the ultraviolet spectrum. If we add the quantum revolution in the first three decades of the twentieth century, which I will soon describe, we can see that Europeans changed our understanding of light more in the space of just five generations than all known civilizations had done combined in the previous five thousand years of recorded human history.

This had major consequences as it was realized that visible light is only a small portion of the electromagnetic spectrum. By studying radiation in other wavelengths, astronomers have uncovered new phenomena such as quasars and pulsars.

According to Michael Kennedy, “Max Planck, writing in 1931, stated that while neither Faraday or Maxwell ‘originally considered optics in connection with their consideration of the fundamental laws of electromagnetism,’ yet ‘the whole field of optics, which had defied attack from the side of mechanics for more than a hundred years, was at one stroke conquered by Maxwell’s Electrodynamic Theory.’ Planck considered this one of ‘the greatest triumphs of human intellectual endeavor.’ Heinrich Hertz confirmed Maxwell’s and Faraday’s work with experiments measuring the speed of light and electromagnetic waves. He showed that the electromagnetic waves behaved exactly like light in properties of reflection, refraction and polarization and that they could be focused. The Germans took Maxwell’s theory and subtracted some of his tortured ideas about how these forces acted at a distance. Gauss had already worked on the subject of static charges and the way that they act at a distance. One issue was the speed of propagation of electromagnetic forces along a wire, through a vacuum and through air. The old theory, based on Newton, was that these forces acted instantaneously. Maxwell believed that all the forces acted at the same rate, the speed of light. Hertz proved this to be true.”

Albert Einstein (1879-1955) later said that “The most fascinating subject at the time I was a student was Maxwell’s theory.” One of the key findings of the late nineteenth century was the Michelson-Morley experiment in 1887, conducted by Albert Abraham Michelson (1852-1931), an American physicist born to a Polish-Jewish family, and the American scientist Edward Morley (1838-1923). Michelson was a master of precision optical measurement. The Oxford Guide to the History of Physics and Astronomy tells the tale:

“His determination of the speed of light and the lengths of light waves were the best of his day, and his attempt of 1887 in collaboration with Edward Morley to detect the motion of the earth through the ether helped set the stage for Albert Einstein’s theory of relativity. In 1907 Michelson became the first American to receive a Nobel Prize in the sciences. Born in Strelno, Prussia (now Poland), Michelson emigrated to America with his family while still a child. He grew up in gold rush towns in California and Nevada….Michelson and Morley’s null result seemed impossible to reconcile with the known facts of optics. George Francis FitzGerald in 1889 and Hendrik Antoon Lorentz in 1892 independently proposed a striking solution: perhaps motion through the ether slightly alters the forces between molecules, causing Michelson and Morley’s sandstone block to shrink by just enough to nullify the effect they had been seeking. The ‘FitzGerald-Lorentz contraction’ later became an important part of relativity theory. Although scholars have often exaggerated the influence of Michelson and Morley’s experiment on Einstein’s thinking, Einstein knew at least indirectly of their result and it certainly loomed large in later discussions of his ideas.”

George Francis FitzGerald (1851-1901), a professor at Trinity College in Dublin, Ireland, in the 1890s hypothesized the FitzGerald contraction, the idea that lengths contract with speed, in order to account for the results of the Michelson-Morley experiment. The Dutch physicist Hendrik Antoon Lorentz (1853-1928) hypothesized that mass increases with velocity as well. Lorentz shared the 1902 Nobel Prize in Physics with the Dutch physicist Pieter Zeeman (1865-1943) for the discovery and explanation of the Zeeman effect, the splitting of lines in a spectrum by a magnetic field, later used to study the details of atomic structure. The experimental work of the Hungarian physicist Loránd Eötvös (1848-1919) established the identity of gravitational and inertial mass, which Einstein used for his general theory of relativity.

While it is certainly true that he benefited from the work done by others, the young Swiss bureaucrat Albert Einstein in the years before 1905 had no university affiliation, no access to a laboratory and was not at all a part of the mainstream of scientists, which makes his achievement all the more impressive. I have read conflicting accounts of his life, but it is likely that he knew about the Michelson-Morley experiment.

Einstein was born at Ulm, Germany, into a family of non-practicing Jews. In 1896 he entered the Swiss Federal Polytechnic School in Zürich to be trained as a teacher in physics. In 1901 he acquired Swiss citizenship and accepted a position as technical assistant in the Swiss Patent Office. It was during his time as a patent clerk in Bern, Switzerland, that Einstein did much of his remarkable early work. He published his special theory of relativity in 1905 and his general theory of relativity in 1916. John North explains in his book Cosmos:

“In the special theory of 1905, Einstein considered only frames of reference moving at constant relative speed. Into this theory he introduced a very important principle: the measured velocity of light in a vacuum is constant and does not depend on the relative motion of the observer and the source of the light. He drew several important conclusions. One was that if different observers are in relative motion, they will form different conclusions about the relative timing and separation of the things they observe; and that instead of distinguishing sharply between space and time coordinates, we should consider all together as coordinates in a combined space-time. Another conclusion was that the mass of a body increases with its velocity, and that the speed of light is a mechanical upper limit that cannot be crossed. Perhaps the best-known of all his conclusions, summarized in the famous equation E = mc², was that mass and energy are equivalent and interchangeable….The conversion of nuclear mass into nuclear energy is of course a fact of modern life, but understanding the conversion of mass to energy has also been of the greatest importance to an understanding of the production of energy in stars.”

The nineteenth century had witnessed a revolution in thermodynamics and the establishment of the law of conservation of energy: energy can be transformed from one form to another, but it cannot disappear, and the total amount of energy always remains constant. Einstein took this a step further in his paper on mass-energy equivalence with the famous equation E = mc², which indicated that tiny amounts of mass could be converted into huge amounts of energy. The general theory of relativity from 1916 described gravity as a property of the geometry of space and time; it required space-time itself to be curved.
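
To get a feeling for what the equation implies, here is a short Python calculation of my own, with an arbitrarily chosen mass of one gram, showing how much energy even a tiny amount of matter represents:

    # Energy equivalent of one gram of matter, from E = m * c**2.
    c = 2.998e8          # speed of light in metres per second (approx.)
    mass_kg = 0.001      # one gram, expressed in kilograms

    energy_joules = mass_kg * c**2
    print(f"{energy_joules:.2e} J")  # roughly 9e13 joules, about 25 million kWh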

As it happens, in the century before his theory of relativity, European mathematicians, among them the German Bernhard Riemann (1826-1866), had worked out a vocabulary of non-Euclidean geometry that was needed to describe Einstein’s curved space-time.

According to our current understanding of physics, as established by the theory of relativity and so far verified by experiments and observations, nothing can travel faster than the speed of light in a vacuum. There are, however, a few apparent exceptions. One of them was discovered by the Russian physicist Pavel Cherenkov (1904-1990) in the Soviet Union, working together with Sergey I. Vavilov (1891-1951): “It was in 1934, whilst he was working under S.I. Vavilov, that Cerenkov observed the emission of blue light from a bottle of water subjected to radioactive bombardment. This ‘Cerenkov effect’, associated with charged atomic particles moving at velocities higher than the speed of light, proved to be of great importance in subsequent experimental work in nuclear physics and for the study of cosmic rays. The Cerenkov detector has become a standard piece of equipment in atomic research for observing the existence and velocity of high-speed particles, and the device was installed in Sputnik III.”

Cherenkov didn’t fully understand what caused this effect, which has become known as Cherenkov radiation. The explanation was worked out by the Russian physicists Igor Tamm (1895-1971) and Ilya Frank (1908-1990). They showed in 1937 that this electromagnetic radiation is caused when high-energy, charged particles travel through a medium, for instance water, at a speed greater than the speed of light in that medium. The emission of fast electrons is called beta radiation. The bluish light that emanates from the water in which highly radioactive nuclear reactor fuel rods are stored is caused by this effect. Cherenkov radiation can be compared to the sonic boom produced by a plane flying faster than the speed of sound. Cherenkov, Tamm and Frank shared the Nobel Prize in Physics in 1958 for the discovery.

Albert Einstein’s theory of relativity treats c, the speed of light in a vacuum, as a universal constant and predicts that no object can travel faster than this. However, it is important to remember that c indicates the speed of light in a vacuum; light travels significantly slower through water or plastic. In the case of Cherenkov radiation, some electrons may move faster than light does in that particular medium, but they are still below c, and that is what matters.
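
The point about the speed of light in a medium can be made concrete with a short Python sketch; the refractive index of water, about 1.33, is the only input, a standard value I supply here for illustration:

    # Cherenkov threshold: a charged particle radiates if it outruns light in the medium.
    c_vacuum_km_s = 299_792            # speed of light in a vacuum, km/s
    n_water = 1.33                     # refractive index of water (approx.)

    c_in_water = c_vacuum_km_s / n_water
    print(round(c_in_water))           # about 225,000 km/s

    # An electron moving at, say, 270,000 km/s in water exceeds this threshold
    # and emits Cherenkov light, yet it still travels slower than c in a vacuum.
    print(270_000 > c_in_water)        # True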

The French mathematician Henri Poincaré (1854-1912), who was equally at home in both pure and applied mathematics, developed many of the equations of the special theory of relativity independently of Einstein and Hendrik Lorentz. Poincaré studied at the prestigious engineering school École Polytechnique, which has produced so many great scholars. Another one of them was the French physicist Antoine Henri Becquerel (1852-1908).

Henri Becquerel came from a family of distinguished scholars. His father Alexandre Edmond Becquerel (1820-1891) invented the phosphoroscope, a device capable of measuring the duration of time between the exposure of a solid, liquid or gas to a light source and the substance’s exhibition of phosphorescence. Henri Becquerel’s earliest work was concerned with phosphorescence and light, but the discovery of X-rays in late 1895 by Wilhelm Röntgen fascinated him. While doing research on this new phenomenon, he discovered radioactivity in 1896. Here is a quote from his official Nobel Prize biography:

“Following a discussion with Henri Poincaré on the radiation which had recently been discovered by Röntgen (X-rays) and which was accompanied by a type of phosphorescence in the vacuum tube, Becquerel decided to investigate whether there was any connection between X-rays and naturally occurring phosphorescence. He had inherited from his father a supply of uranium salts, which phosphoresce on exposure to light. When the salts were placed near to a photographic plate covered with opaque paper, the plate was discovered to be fogged. The phenomenon was found to be common to all the uranium salts studied and was concluded to be a property of the uranium atom. Later, Becquerel showed that the rays emitted by uranium, which for a long time were named after their discoverer, caused gases to ionize and that they differed from X-rays in that they could be deflected by electric or magnetic fields. For his discovery of spontaneous radioactivity Becquerel was awarded half of the Nobel Prize for Physics in 1903, the other half being given to Pierre and Marie Curie for their study of the Becquerel radiation.”

The couple Pierre Curie (1859-1906), French physicist, and Marie Curie, born Maria Sklodowska (1867-1934) in Warsaw but later educated at the Sorbonne in France where she met Pierre, coined the term “radioactivity” and discovered the elements radium and polonium, the latter named after Marie’s native country Poland. Much research was done during this period which would radically alter our ideas about the subatomic world.

A revolution began with the invention of a better vacuum pump in the mid-nineteenth century. Up until then, the pumps differed little from the one created by Otto von Guericke two centuries earlier. The German Heinrich Geissler (1814-1879) made the breakthrough in Bonn in the 1850s. His improved vacuum pump used mercury to make airtight contacts. He was a trained glassblower as well as an engineer and devised a technique of sealing two electrodes into the evacuated glass vessel, thus creating a tube in which there was a permanent vacuum. He had invented the vacuum tube. His invention was improved by Geissler himself and by others, chief among them the Englishman Sir William Crookes (1832-1919), over the following years.

The German scholar Julius Plücker (1801-1868) and his student Johann Hittorf (1824-1914) were among the first to use the new device for studying the glowing rays emitted from the cathode (negative electrode) of these vacuum tubes, which we now know are streams of electrons. The German physicist Eugen Goldstein (1850-1930) named them “cathode rays” in 1876. In 1886 Goldstein discovered anode rays, beams of positive ions from the anodes (positive electrodes) of the tubes. With his improved vacuum tube known as the Crookes tube, William Crookes managed to carry out experiments which indicated a corpuscular nature of the cathode rays and that they were not electromagnetic waves.

Maxwell’s electromagnetic theories indicated that light has momentum and can exert pressure on objects. In 1873 William Crookes developed a special kind of radiometer or light mill which he thought would demonstrate the pressure exerted by light, but he failed to do so. The Russian physicist Pyotr Lebedev (1866-1912) in 1899 managed to show experimentally that light does exert a mechanical pressure on material bodies, thus proving Maxwell’s prediction.

Sir Joseph John “J. J.” Thomson (1856-1940), the English physicist who is credited with the discovery of the electron, was elected Cavendish Professor of Physics at Cambridge University in England in 1884. One of his students was the New Zealand-born Ernest Rutherford (1871-1937), who was to become at least as famous in his own right through his pioneering studies of radioactivity and the development of the orbital model of the atom. Thomson in 1897 showed that all the properties of cathode rays could be explained by assuming that they were subatomic charged particles which he called “corpuscles.” The Irish physicist George Johnstone Stoney (1826-1911) introduced the term electron. The discovery of the electron led Thomson to create what has often been called the “plum pudding model” of the atom, with electrons as negatively charged “plums” inside a positively charged atomic “pudding.”

After Henri Becquerel had discovered radioactivity, Marie Curie and Pierre Curie soon deduced that it was a phenomenon associated with atoms. By 1899 Ernest Rutherford had established that there were at least two types of “rays” in uranium radiation. Those that were more powerful but easily absorbed he termed alpha rays, while those that produced less radiation but had greater penetrating ability he termed beta rays. A third type of radiation, gamma rays, was discovered in 1900 by the Frenchman Paul Villard, who recognized them as different from X-rays because they had a much greater penetrating depth.

Subsequent experiments in which these various radiations were subjected to magnetic and electric fields showed that beta particles are negatively charged, alpha particles are positively charged and far heavier than beta particles, and gamma rays are uncharged. Rutherford had established by 1909 that alpha particles are helium nuclei. Beta particles are high-energy electrons, whereas gamma radiation is electromagnetic radiation, in other words light.

The German physicist Hans Geiger (1882-1945), who also invented the radiation detector which bears his name, and the English physicist Sir Ernest Marsden (1889-1970) performed the famous gold foil or Geiger-Marsden experiment in 1909, under Rutherford’s supervision. When positively charged alpha particles were fired at a thin sheet of gold, Geiger and Marsden to their surprise saw that while most of them passed through, a few of them were scattered by large angles or bounced back. Ernest Rutherford interpreted the results in a 1911 paper. He proposed that the atom is mostly empty space and introduced the concept of the atomic nucleus, assuming that the positive charge of the atom and most of its mass formed a tiny concentration at the center of the atom. He discovered the proton a few years later.

As we know today, the proton has a positive charge equal to the negative charge of an electron, but contains almost two thousand times more mass. The number of protons in an atom defines the atomic number of a chemical element and its position in the periodic table. Every atomic nucleus of an element contains the same number of protons. Along with neutrons, which have no net electric charge, protons make up atomic nuclei, which account for 99.9 percent of an atom’s mass. While protons had been identified by Rutherford before 1920, neutrons were only discovered in the early 1930s with the work of the French physicists Jean Frédéric Joliot (1900-1958) and Irène Joliot-Curie (1897-1956), the daughter of Marie and Pierre Curie, and finally the English physicist Sir James Chadwick (1891-1974) in 1932.

The noble gases are the least reactive of the chemical elements. The English physicist Lord Rayleigh (1842-1919) discovered that the nitrogen he drew from the air had a specific weight greater than that of the nitrogen he derived from mineral sources. He then came across a paper from 1785 where the always careful experimenter Henry Cavendish mentioned a non-reactive residue of gas, making up less than 1% of the air, which he obtained after sparking atmospheric nitrogen with oxygen. Together with the Scottish chemist Sir William Ramsay (1852-1916), Rayleigh in 1895 announced the discovery of a new constituent of the Earth’s atmosphere. They named it “argon,” from Greek for “lazy.” Helium had already been named from the spectral lines of the Sun but was isolated on Earth for the first time by Ramsay in 1895. Ramsay soon discovered other noble gases: neon, krypton and xenon.

The German scholar Friedrich Ernst Dorn (1848-1916) is usually credited with discovering radon, a radioactive gas emitted from radium, which had itself been discovered by Pierre and Marie Curie in 1898. However, Dorn did not fully understand its properties and credit for its discovery is often shared with Rutherford or the French chemist André-Louis Debierne (1874-1949). The proper location of radon in the periodic table was determined by William Ramsay. Astonishingly, the periodic table created by the Russian scholar Dmitri Mendeleyev could accommodate the newcomers as a separate group, thus confirming its scientific relevance.

The noble gases have been called rare gases, but this name is misleading as argon makes up about 1% of the Earth’s atmosphere, making it the third most common gas after nitrogen and oxygen, and helium is the second most common element in the universe after hydrogen.

According to scholar J. L. Heilbron, “Ernest Rutherford and Frederick Soddy identified the ‘emanation’ from thorium as a new and flighty member of the noble family, now called radon; the occurrence of a decaying nonreactive gas in their experiments provided the clue for working out their theory of the transmutation of atoms. Radium also gives off a radioactive emanation and the two similar (indeed chemically identical) noble gases offered an early example of isotopy. However, the lightest of the noble gases proved the weightiest. Helium is often found with uranium and other active ores. With the spectroscopist Thomas Royds and an apparatus made by the virtuoso glass blower Otto Baumbach, Rutherford demonstrated in 1908 that the alpha particles emitted from radioactive substances turned into helium atoms when they lost their electric charge. In 1910-1911 he showed that alpha particles acted as point charges when fired at metal atoms, and devised the nuclear model of the atom to explain the results of the scattering and to deduce that helium atoms have exactly two electrons. The replacement of atomic weight by atomic number (the charge on the nucleus) as the ordering principle of the periodic table followed.”

The English physicist Francis William Aston (1877-1945) invented the mass spectrograph to measure the mass of atoms in 1919. The discovery of the noble gases played an important role in the investigations of radioactivity. Rayleigh, Ramsay, Aston and Rutherford all received Nobel Prizes partly due to work on noble gases. Rayleigh also carried out acoustic research; his treatise The Theory of Sound from 1877-78 marks the beginning of modern acoustics.

The number of neutrons determines the isotope of a chemical element. The English radiochemist Frederick Soddy (1877-1956) formulated the concept of isotope in 1913 by demonstrating that the same element can have more than one atomic mass. Ernest Rutherford and Soddy concluded in 1902 that radioactive decay was a phenomenon involving atomic disintegration, with the transformation of one element into another. Together with William Ramsay, Soddy demonstrated that helium was produced in the decay of radium bromide. The Polish American physical chemist Kasimir Fajans (1887-1975) discovered the radioactive displacement law at about the same time as Soddy. This law stipulates that when a radioactive atom decays by emitting an alpha particle (a helium nucleus), the atomic number of the resulting atom is two fewer than that of the parent atom.

The English experimental physicist Henry Moseley (1887-1915) clearly established the relationship between atomic number and the amount of positive charge on the nucleus in 1913. He was killed in the Gallipoli Campaign in Turkey during World War I (1914-1918).

There were many important developments following the electromagnetic work of Faraday and Maxwell in the late nineteenth century. The physicist Joseph Stefan (1835-1893), of ethnic Slovenian background and Slovene mother tongue but with Austrian citizenship, discovered Stefan’s law in 1879: the total radiation of a blackbody is proportional to the fourth power of its absolute temperature.

The same law was derived in 1889 by his former assistant, the Austrian physicist Ludwig Boltzmann (1844-1906), from thermodynamic considerations and is therefore sometimes called the Stefan-Boltzmann law. A blackbody is an ideal body that absorbs all the electromagnetic radiation that falls on it. Boltzmann worked on statistical mechanics using mathematical probability theory to describe how the properties of atoms determine the properties of matter on a macroscopic scale. Statistical mechanics was the first foundational physical theory in which probabilistic concepts played a fundamental role.
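
As an illustration of the Stefan-Boltzmann law, the Python sketch below computes the power radiated per square metre by an ideal blackbody at roughly the Sun’s surface temperature; the temperature value is my own illustrative input.

    # Stefan-Boltzmann law: radiated power per unit area = sigma * T**4.
    SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W / (m**2 * K**4)
    T_sun_surface = 5778   # approximate surface temperature of the Sun, in kelvin

    flux = SIGMA * T_sun_surface**4
    print(f"{flux:.2e} W/m^2")  # roughly 6.3e7 watts per square metre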

The American mathematical physicist Josiah Willard Gibbs (1839-1903) created the theoretical foundation for chemical thermodynamics. During the years between 1866 and 1869 he studied in Paris, Berlin and Heidelberg and was inspired by European scientists such as Gustav Kirchhoff and Hermann von Helmholtz. He was eventually appointed professor of mathematical physics at Yale University in the USA. According to J J O’Connor and E F Robertson, “A series of five papers by Gibbs on the electromagnetic theory of light were published between 1882 and 1889. His work on statistical mechanics was also important, providing a mathematical framework for quantum theory and for Maxwell’s theories. In fact his last publication was Elementary Principles in Statistical Mechanics and this work is a beautiful account putting the foundations of statistical mechanics on a firm foundation. Except for his early years and the three years in Europe, Gibbs spent his whole life living in the same house which his father had built only a short distance from the school Gibbs had attended, the College at which he had studied and the University where he worked the whole of his life.”

In 1896 the German physicist Wilhelm Wien (1864-1928) described the spectrum produced by a blackbody when it radiates. He discovered that the wavelength at which the maximum energy is radiated becomes shorter as the temperature of the blackbody is increased. To explain the colors of hot glowing matter, the German physicist Max Planck (1858-1947) proposed that electromagnetic energy is radiated in minute, discrete, quantized packets. He suggested a model in which the radiated energy is divided into a large but finite number of equal “energy elements,” determined by a constant of nature h which has become known as Planck’s constant. Planck’s constant is one of the basic constants of physics and is used to describe the behavior of particles and waves at the atomic scale.

Max Planck announced his quantum hypothesis in 1900, but he did not regard these energy quanta as real, only as mathematical constructs. Albert Einstein, on the other hand, was convinced that they were real.

The papers which Einstein published in the German scientific journal Annalen der Physik in 1905 contained no footnotes and very little mathematics, yet included several radical conceptual innovations. In 1827 the Scottish botanist Robert Brown (1773-1858) noticed that pollen grains suspended in water jiggled about under the lens of the microscope in a strange zigzag pattern. Brownian motion, the seemingly random movement of particles suspended in a liquid, was finally explained by Einstein. This definitively proved the existence of atoms, which was still doubted by a number of scientists at the time. The French physicist Jean Perrin (1870-1942) then did experimental work on Brownian motion and from this calculated atomic size in 1908, thereby confirming the atomic nature of matter.

In addition to his special theory of relativity, Einstein explained the photoelectric effect. Of all the papers he published in 1905, the one he personally singled out as “very revolutionary,” and which would earn him a Nobel Prize, was the one on light quanta. The term “photon” was coined by the American scientist Gilbert Newton Lewis (1875-1946) in 1926.

The photoelectric effect had been described by Heinrich Hertz in 1887. In 1900 the Hungarian-German physicist Philipp Lenard (1862-1947), a student of Hertz, showed that it was caused by electrons (discovered by J. J. Thomson three years earlier) being ejected from the surface of a metal plate when it was struck by light rays. Lenard had earlier made improvements to the Crookes tubes with what came to be known as Lenard windows so that cathode rays (electrons) became easier to study. He eventually became a passionate Nazi, eagerly denouncing the “Jewish physics” of Einstein and others.

Einstein explained the photoelectric effect by assuming that light was composed of energy quanta which could each give the same amount of energy to an electron in the metal, which is why the ejected electrons all have the same energy. The wave theory of light had triumphed during the nineteenth century but it could not explain this phenomenon. Ironically, Einstein, who along with Max Planck gave birth to quantum physics, had serious objections to the discipline later.

The American physicists Robert Millikan (1868-1953) and Harvey Fletcher (1884-1981) in 1909 began careful studies measuring the electric charge of the electron in the famous oil-drop experiment. In 1916 Millikan experimentally verified the equation introduced by Albert Einstein in 1905 to describe the photoelectric effect, even though he was initially skeptical of this theory given all the evidence demonstrating that light behaves as waves.
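
The relation Millikan tested is simple: the maximum kinetic energy of an ejected electron equals the photon energy hf minus the “work function” of the metal, the minimum energy needed to free an electron. The Python sketch below uses an illustrative work function of about 2.3 electron-volts, roughly that of sodium; the specific values are my own inputs for the example.

    # Einstein's photoelectric relation: K_max = h*f - work_function.
    H = 6.626e-34            # Planck's constant, joule-seconds
    EV = 1.602e-19           # one electron-volt in joules
    work_function_ev = 2.3   # illustrative work function, roughly that of sodium

    def max_kinetic_energy_ev(frequency_hz):
        return (H * frequency_hz) / EV - work_function_ev

    print(round(max_kinetic_energy_ev(7.5e14), 2))  # violet light: small positive energy
    print(round(max_kinetic_energy_ev(4.0e14), 2))  # red light: negative, so no electrons ejected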

The definitive proof that photons exist was provided by the American physicist Arthur Compton (1892-1962) in 1923. The so-called Compton effect can only be understood as the exchange of momentum between particles. It cannot be explained if you view light as waves alone. It was the final confirmation of the validity of the quantum hypothesis of Planck, Einstein and Bohr that electromagnetic radiation comes in discrete packets (photons), with energy proportional to frequency. A photon is the basic unit (quantum) of electromagnetic radiation. Low-energy photons make up radio waves and microwaves, medium-energy photons visible light and high-energy photons X-rays and gamma rays.
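
The proportionality between photon energy and frequency can be illustrated with a few representative frequencies, chosen by me as examples, run through the relation E = hf:

    # Photon energy E = h * f for a few representative frequencies.
    H = 6.626e-34   # Planck's constant, joule-seconds
    EV = 1.602e-19  # one electron-volt in joules

    examples = {
        "FM radio (100 MHz)":    1.0e8,
        "green light (560 THz)": 5.6e14,
        "hard X-ray (10^19 Hz)": 1.0e19,
    }
    for name, f in examples.items():
        print(name, f"{H * f / EV:.2e} eV")
    # The radio photon carries about 4e-7 eV, the visible photon about 2.3 eV,
    # and the X-ray photon tens of thousands of eV.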

Ernest Rutherford in 1911 had introduced the concept of the atomic nucleus, with electrons orbiting around the positive charge in the center of the atom. Yet this left the problem of why the electrons remain in their orbits and don’t radiate their energy and spiral inwards into the nucleus. In other words: Why are atoms largely stable? The answer came with the physicist Niels Bohr (1885-1962) from Denmark, who spent time in England in 1912 working with J. J. Thomson and Rutherford. He borrowed conceptions from the quantum theory of Planck and Einstein and applied them to Rutherford’s atomic model. In 1913 he postulated that electrons can only have a few stable orbits around the nucleus with distinct energy levels. An electron cannot lose energy in a continuous manner, but only through “quantum leaps” between these fixed energy levels, thus emitting a light quantum (photon) of discrete energy. Bohr’s model was able to predict the spectral lines of hydrogen. It wasn’t yet 100% correct, but it was an important step toward a better understanding of the atomic structure.
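
A small numerical sketch of the Bohr picture, of my own devising: the allowed energy levels of hydrogen are E_n = -13.6 eV / n², and a quantum leap between two levels emits a photon whose wavelength follows from the energy difference. The Python code below computes the red Balmer line, the jump from n = 3 to n = 2, using standard rounded constants.

    # Bohr model of hydrogen: energy levels and the wavelength of an emitted photon.
    H = 6.626e-34    # Planck's constant, joule-seconds
    C = 2.998e8      # speed of light, metres per second
    EV = 1.602e-19   # one electron-volt in joules

    def level_energy_ev(n):
        return -13.6 / n**2          # energy of the n-th Bohr orbit, in eV

    # Quantum leap from n = 3 down to n = 2 (the red H-alpha line).
    delta_e_joules = (level_energy_ev(3) - level_energy_ev(2)) * EV
    wavelength_nm = H * C / delta_e_joules * 1e9
    print(round(wavelength_nm))      # about 656 nanometres, in the red part of the spectrum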

According to J J O’Connor and E F Robertson, “Bose published his paper Planck’s Law and the Hypothesis of Light Quanta in 1924 which derived the blackbody radiation from the hypothesis that light consisted of particles obeying certain statistical laws. In the same year de Broglie put forward his particle-wave duality theory in his doctoral thesis which proposed that matter has the properties of both particles and waves. Not only could photons of light behave like waves, suggested de Broglie, but so could other particles such as the electron. In 1927 de Broglie’s claim that electrons could behave like waves was experimentally verified and, in the following year, Bohr put forwards his complementarity principle which stated that photons of light (and electrons) could behave either as waves or as particles, but it is impossible to observe both the wave and particle aspects simultaneously. Two mathematical models of quantum mechanics were presented, that of matrix mechanics, proposed by Werner Heisenberg, Max Born, and Pascual Jordan, and that of wave mechanics proposed by Erwin Schrödinger. In 1927 Heisenberg put forward his uncertainty principle which states that there is a limit to the precision with which the position and the momentum a particle of light can be known.”

Erwin Schrödinger (1887-1961) was born in Vienna, Austria. The physicists Max Born (1882-1970), Werner Heisenberg (1901-1976) and Pascual Jordan (1902-1980) were all Germans.

Satyendra Nath Bose (1894-1974) was one of the most important scientists from India during the twentieth century. He is remembered for introducing the state of matter known as a Bose-Einstein condensate (BEC), where atoms or subatomic particles, cooled to near absolute zero (0 K or minus 273.15 °C), coalesce into a single quantum mechanical entity. This form of matter was predicted in 1924 by Einstein on the basis of the quantum formulations of Bose, but the first atomic BEC was made as late as in 1995. Bose’s name is honored in the name of a class of particles called bosons, while the class of particles called fermions is named after the Italian physicist Enrico Fermi (1901-1954).

BECs are related to superconductivity, a phenomenon of virtually zero electrical resistance which occurs in certain materials at very low temperatures. Because of the European electrochemical revolution, the nineteenth and early twentieth centuries saw rapid advances in cryogenics, the production of low temperatures. The Dutch physicist Heike Kamerlingh Onnes (1853-1926), building on advances made by the Dutch physicist Johannes Diderik van der Waals (1837-1923), managed to liquefy helium at Leiden University in 1908. Kamerlingh Onnes then discovered superconductivity in 1911, and his student Willem Hendrik Keesom (1876-1956) managed to solidify helium in 1926.

The gifted Russian physicist Pyotr Kapitsa (1894-1984) in 1937 discovered the superfluidity of liquid helium. Superconductivity and superfluidity are macroscopic quantum phenomena. The German-born physicist Fritz London (1900-1954) and his brother Heinz London (1907-1970) developed the first successful phenomenological theory of superconductivity in 1935, and in 1938 Fritz London was the first to propose that superfluidity was a form of Bose-Einstein condensation. In the Soviet Union, the physicists Vitaly Ginzburg (born 1916) and Lev Landau (1908-1968), both Jews, later built a mathematical model to describe superconductivity.

According to the third law of thermodynamics, formulated by the German physical chemist Walther Nernst (1864-1941) in 1905, absolute zero cannot be attained by any means. Modern science has attained temperatures of about one-millionth of a degree above absolute zero, but absolute zero itself cannot be reached. In 1887 Nernst became an assistant at the University of Leipzig to the German chemist Wilhelm Ostwald (1853-1932) who, with his colleagues the Dutchman Jacobus van ‘t Hoff (1852-1911) and the Swedish scholar Svante Arrhenius (1859-1927), was one of the founders of physical chemistry.

Another great scientist from the Indian subcontinent was the astrophysicist Subrahmanyan Chandrasekhar (1910-1995). Born and raised in India, he studied at the University of Cambridge in England and ended up at the University of Chicago in 1937, where he remained for the rest of his life. He is remembered above all for his contributions to the subject of stellar evolution. A star of the same mass as the Sun will end its life as a white dwarf. A star that ends its nuclear-burning lifetime with a mass greater than the Chandrasekhar limit of about 1.4 solar masses will become either a neutron star or a black hole. NASA’s Chandra X-ray Observatory, launched into space in 1999, was named after Chandrasekhar.

Chandrasekhar was the nephew of the Indian physicist Sir Chandrasekhara Venkata Raman (1888-1970), who discovered in 1928 that when light traverses a transparent material, some of the light that is deflected changes in wavelength, a phenomenon known as Raman scattering.

The neutron was discovered by James Chadwick in 1932. Soon after, the German-born Walter Baade (1893-1960) and the Swiss astronomer Fritz Zwicky (1898-1974), both eventually based in the United States, proposed the existence of neutron stars. Zwicky was not as systematic a scientist as Baade, but he could have excellent intuitive ideas. They introduced the term “supernova” and showed that supernovae are completely different from ordinary novae and occur less often. Zwicky and Baade proposed in 1934 that supernovae could produce cosmic rays and neutron stars, extremely compact stars the size of a city but with more mass than the Sun. After a large star collapses, leaving behind a neutron star (or a black hole), a large amount of energy is still left over.

The Ukraine-born astrophysicist Iosif Shklovsky (1916-1985), who became a professor at Moscow University and a leading authority in radio astronomy, proposed that cosmic rays from supernovae might have caused mass extinctions on Earth. The hypothesis is difficult to verify even if true, but such explosions are among the most violent events in the universe, and a nearby (in astronomical terms) supernova could theoretically cause such a disaster.

We know from the fossil record that there have been several mass extinctions, but their causes are disputed. Many people believe that the extinction which ended the age of the dinosaurs sixty-five million years ago was at least partly caused by the impact of a large asteroid. The American physicist Luis W. Alvarez (1911-1988) and his son Walter Alvarez (born 1940) suggested this in 1980 based on geological evidence. Since then, a huge impact crater from about this period has been identified, partly buried beneath the Yucatán Peninsula in Mexico.

The Polish astronomer Bohdan Paczynski (1940-2007) was born in Vilnius, Lithuania, educated at Warsaw University and in 1982 moved to Princeton University in the USA. He was a leading expert on the lives of stars. Because gravity bends light rays, an astronomical object passing in front of another can focus its light in a manner akin to a telescope lens. Paczynski showed that this effect could be applied to survey the stars in our galaxy. This is called gravitational microlensing. The possibility of gravitational lensing had been predicted by the general theory of relativity, but Paczynski worked out its technical underpinnings. He also championed the idea that gamma ray bursts originate billions of light-years away, which means that light from them has traveled billions of years to reach us.

Gamma-ray bursts are short-lived but extremely powerful bursts of gamma-ray photons which can briefly shine hundreds of times brighter than a normal supernova. They were discovered in the late 1960s by the first military satellites, and are still not fully understood.

Neutron stars were first observed in the 1960s, with the development of non-optical astronomy. In 1967 the British astrophysicist Jocelyn Bell (born 1943) and the British radio astronomer Antony Hewish (born 1924) discovered the first pulsar. The Austrian-born Jewish and later American astrophysicist Thomas Gold (1920-2004) soon after identified these objects as rotating neutron stars with powerful magnetic fields. The first widely accepted black holes, such as the object known as Cygnus X-1, were found in the early 1970s. The Italian American astrophysicist Riccardo Giacconi (born 1931) was one of the pioneers in the discovery of cosmic X-ray sources, among them a number of suspected black holes.

A black hole is an object whose mass is so concentrated that nothing that comes close enough can escape its gravitational pull; the escape velocity, the speed required for matter to break free of its gravitational field, exceeds the speed of light. The possibility that such an object could exist was envisioned in Europe already in the late eighteenth century.

As we recall, during the seventeenth century it had been established by Ole Rømer that light has a very great, but finite speed, and Isaac Newton had brilliantly introduced the concept of gravity. The idea that an object could have such a great mass that even light could not escape its gravitational pull was proposed independently by the English natural philosopher John Michell (1724-1793) in 1783 and the French mathematical astronomer Pierre-Simon Laplace in 1796. Nevertheless, their ideas had little or no impact on later developments of the concept. In addition to this, Michell devised the famous experiment, successfully undertaken by Henry Cavendish in 1797-98, which measured the mass of the Earth.

Modern theories of black holes began soon after the publishing of Einstein’s general theory of relativity. The German Jewish physicist Karl Schwarzschild (1873-1916) derived the first model in 1916. The “Schwarzschild radius” defines the outer boundaries of a black hole, its event horizon.
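
For a non-rotating mass M, the Schwarzschild radius is r_s = 2GM/c². The brief Python sketch below (the constants are standard physical values, and the example masses are my own illustrative choices rather than figures from this essay) gives a feel for the scale:

```python
# A small sketch of the Schwarzschild radius r_s = 2*G*M/c^2, the radius of
# the event horizon of a non-rotating mass M.  Constants are standard values.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
SOLAR_MASS = 1.989e30  # kg

def schwarzschild_radius_m(mass_kg):
    return 2.0 * G * mass_kg / C**2

# The Sun would have to be compressed to a radius of about 3 km to become a
# black hole; a star of ten solar masses, to about 30 km.
for solar_masses in (1, 10):
    radius_km = schwarzschild_radius_m(solar_masses * SOLAR_MASS) / 1000
    print(f"{solar_masses} solar mass(es): r_s = {radius_km:.1f} km")
```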

According to scholar Ted Bunn, “The idea of a mass concentration so dense that even light would be trapped goes all the way back to Laplace in the 18th century. Almost immediately after Einstein developed general relativity, Karl Schwarzschild discovered a mathematical solution to the equations of the theory that described such an object. It was only much later, with the work of such people as Oppenheimer, Volkoff, and Snyder in the 1930’s, that people thought seriously about the possibility that such objects might actually exist in the Universe. (Yes, this is the same Oppenheimer who ran the Manhattan Project.) These researchers showed that when a sufficiently massive star runs out of fuel, it is unable to support itself against its own gravitational pull, and it should collapse into a black hole. In general relativity, gravity is a manifestation of the curvature of spacetime. Massive objects distort space and time, so that the usual rules of geometry don’t apply anymore. Near a black hole, this distortion of space is extremely severe and causes black holes to have some very strange properties. In particular, a black hole has something called an ‘event horizon.’ This is a spherical surface that marks the boundary of the black hole.”

The American physicist John Archibald Wheeler (1911-2008) is widely credited with having coined the term black hole as well as the term wormhole, a hypothetical tunnel through space-time which could in theory provide a shortcut between two distant points. It has so far never been shown that wormholes actually exist.

The physicist Yakov B. Zel’dovich (1914-1987), born in Minsk into a Jewish family, played a major role in the development of Soviet nuclear and thermonuclear weapons and was a pioneer in attempts to relate particle physics to cosmology. Together with Rashid Sunyaev (born 1943) he proposed the Sunyaev-Zel’dovich effect, an important method for determining absolute distances in space. Sunyaev has developed a model of disk accretion onto black holes and of X-radiation from matter spiraling into such a hole. Working in Moscow, Rashid Sunyaev led the team which built the X-ray observatory attached to the MIR space station.

The English theoretical physicist Stephen Hawking (born 1942) in the 1970s combined general relativity with quantum mechanics and worked out the prediction that black holes can emit radiation and thus lose mass. This has become known as Hawking or Bekenstein-Hawking radiation, after Hawking and Jacob Bekenstein (born 1947), an Israeli physicist at the Hebrew University of Jerusalem. The English mathematical physicist Sir Roger Penrose (born 1931) has developed a method of mapping the regions of space-time surrounding a black hole.

Walter Baade worked at the Hamburg Observatory from 1919 to 1931 and at Mt. Wilson in the USA from 1931 to 1958. During the World War II blackouts, Baade used the large Hooker telescope to resolve stars in the central region of the Andromeda Galaxy for the first time. This led to the realization that there were two kinds of Cepheid variable stars, and from there to a doubling of the assumed scale of the universe. The German American astronomer Rudolph Minkowski (1895-1976) joined with Baade in studying supernovae. He was the nephew of the mathematician Hermann Minkowski (1864-1909), who did very important work on the study of non-Euclidean geometry and four-dimensional space-time.

The optician Bernhard Schmidt (1879-1935) was born on an island off the coast of Tallinn, Estonia, in the Baltic Sea, then part of the Russian Empire. He spoke Swedish and German and spent most of his adult life in Germany. During a journey to Hamburg in 1929 he discussed the possibility of making a special camera for wide angle sky photography with Walter Baade. He then developed the Schmidt camera and telescope in 1930, which permitted wide-angle views with little distortion and opened up new possibilities for astronomical research. Yrjö Väisälä (1891-1971), an astronomer from Finland, had been working on a related design before Schmidt but left the invention unpublished at the time.

The Austrian physicist Wolfgang Pauli (1900-1958) in 1925 defined the Pauli Exclusion Principle, which states that in an atom no two electrons can occupy the same quantum state simultaneously. He worked as an assistant to Max Born and spent a year with Niels Bohr at his Institute in Denmark, which due to Bohr’s status became an important international center during this revolution in quantum and nuclear physics. The Hungarian radiochemist Georg Karl von Hevesy (1885-1966), born Hevesy György, together with the Dutch physicist Dirk Coster (1889-1950) discovered the element Hafnium in 1923 while working at Bohr’s Institute. Hafnia is the Latin name for Copenhagen.

Pauli’s Exclusion Principle helped to lay the foundations of the quantum theory of fields. The existence of the neutrino, an elementary subatomic particle with no electric charge and little or no mass, was predicted in 1930 by Wolfgang Pauli. This was experimentally confirmed by the Americans Frederick Reines (1918-1998) and Clyde Cowan (1919-1974) in 1956. In the 1990s, Japanese and American scientists obtained experimental evidence indicating that neutrinos do have mass, yet their mass is extremely small even compared to that of the electron.

These great advances in physics had consequences for other branches of science, too, from quantum chemistry to molecular biology and medicine. One person who left his mark in all of these fields was the American scientist Linus Pauling (1901-1994), the only person so far to have won two unshared Nobel Prizes. After education in the USA, he spent time in Europe during the 1920s with the leading scientists of the age and gained insight into the developing quantum revolution. Pauling created his electronegativity scale in 1932. Electronegativity is a measure of the tendency of an atom to attract a bonding pair of electrons. The outermost shell of electrons determines the chemical properties of an atom as it allows bonds to be formed with other atoms. Pauling contributed to our understanding of chemical bonds as well as to the study of DNA, although he was more successful at the former than at the latter.

The French physicist Louis de Broglie (1892-1987) in his doctoral thesis in 1924 introduced the radical concept of wave-particle duality. In doing so, he laid the basis for the general theory of wave mechanics and transformed our knowledge of physical phenomena on the atomic scale. His theory explained a number of previously unaccountable phenomena. According to his equation, all moving objects have a dual wave-particle nature, including objects on a macroscopic scale. Theoretically speaking, the reader of these words has a wavelength, too, but this is without practical importance in everyday life. Yet it is of vital importance in the world of subatomic particles such as electrons. The German physicist Otto Stern (1888-1969) demonstrated the wave nature of atoms, which has since been experimentally verified even for rather large molecules.
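
De Broglie's relation gives the wavelength as λ = h/(mv), Planck's constant divided by momentum. A hedged Python illustration (the masses and speeds are arbitrary examples of my own, not numbers from this essay) makes the point about scale:

```python
# A hedged illustration of de Broglie's relation: wavelength = h / (m*v).
# The masses and speeds below are arbitrary illustrative examples.
PLANCK = 6.626e-34   # Planck's constant, joule-seconds

def de_broglie_wavelength_m(mass_kg, speed_m_s):
    return PLANCK / (mass_kg * speed_m_s)

# A walking person: the wavelength is absurdly small, with no practical effect.
print(de_broglie_wavelength_m(70.0, 1.4))        # roughly 7e-36 m
# An electron at about one percent of the speed of light: comparable to atomic sizes.
print(de_broglie_wavelength_m(9.109e-31, 3e6))   # roughly 2.4e-10 m
```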

The Austrian physicist Erwin Schrödinger, influenced by de Broglie’s work, “attributed the quantum energies of the electron orbits in the atom thought to exist to the vibration frequencies of electron matter waves, now known as de Broglie waves, around the nucleus of the atom.”

De Broglie’s hypothesis regarding the wave properties of electrons (electron diffraction) was proven experimentally in 1927, independently by the American physicists Clinton Davisson (1881-1958) and Lester Halbert Germer (1896-1971) as well as the English physicist Sir George Paget Thomson (1892-1975), the son of J. J. Thomson. While Thomson senior got a Nobel Prize for discovering electrons and proving that they are particles, his son got one for proving that they are waves, and they were both right.

Irène Joliot-Curie, the daughter of Marie and Pierre Curie, won a Nobel Prize of her own, as did Aage Niels Bohr (born 1922), Danish nuclear physicist and son of Niels Bohr. Sometimes talent does indeed run in families. One of the most fascinating cases is the father-and-son team Sir William Henry Bragg (1862-1942) and Sir William Lawrence Bragg (1890-1971), who shared a Nobel Prize in Physics in 1915 for their groundbreaking studies of X-ray crystallography.

Because electrons have wavelengths far shorter than those of visible light, the discovery of electron waves made possible microscopes with unprecedented powers of resolution. The German physicist Ernst Ruska (1906-1988) built the first practical electron microscope, with a resolution greater than that achieved in optical microscopes, already in 1933, only nine years after the existence of electron waves had been theoretically suggested and six years after they had been experimentally demonstrated. The German-born physicist Erwin Wilhelm Müller (1911-1977) invented the field emission microscope in 1936 and the field ion microscope in 1951, capable of giving a resolution almost down to the atomic level.

Ernst Ruska was awarded the Nobel Prize for Physics in 1986 together with the Swiss physicist Heinrich Rohrer (born 1933) and the German physicist Gerd Binnig (born 1947). Rohrer’s and Binnig’s development of the scanning tunneling microscope in 1981 while working for IBM at Zürich, Switzerland, enabled scientists to image the position of individual atoms, an innovation which soon triggered many further improvements.

A nanometer (nm) is one billionth of a meter (10⁻⁹ m), or one millionth of a millimeter. The wavelength of visible light is in the range of 400-700 nm; objects smaller than this can never be seen in optical microscopes. Virtually all viruses and some bacteria are smaller than this. Individual virus particles could not be seen until the invention of the electron microscope. Mimivirus, discovered in 1992, is one of the largest known viruses with a diameter of more than 400 nm, yet it is unusual and so large and complex that it blurs the lines between life and non-life. The smallest known bacteria measure 200 nanometers across, which is currently considered the lower size limit for a living cell. There are those who believe in the existence of even smaller nanobacteria, but this idea remains controversial and so far unproven. An ångström or angstrom equals 0.1 nanometer. It is a unit used for expressing the lengths of chemical bonds and molecules and is named after the Swedish physicist Anders Jonas Ångström (1814-1874), one of the founders of spectroscopy. Today’s best electron microscopes have a resolution of 0.05 nanometers, or about the radius of a hydrogen atom.
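
As a simple illustration of these scales, the small sketch below (the sizes are the approximate figures quoted above) compares a few objects with the roughly 400 nm lower limit of visible-light wavelengths:

```python
# Comparing object sizes (approximate figures quoted in the text, in nanometres)
# with the ~400 nm lower end of visible-light wavelengths.
VISIBLE_LIGHT_MIN_NM = 400

sizes_nm = {
    "Mimivirus": 400,
    "smallest known bacteria": 200,
    "hydrogen atom (about 1 angstrom)": 0.1,
}

for name, size in sizes_nm.items():
    note = "near the optical limit" if size >= VISIBLE_LIGHT_MIN_NM else "below the optical limit"
    print(f"{name}: {size} nm ({note})")
```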

Quantum optics is a field of quantum physics that deals specifically with the interaction of photons with matter. Lasers would be the most obvious application of quantum optics. The American physicist Theodore Maiman (1927-2007) made the first working laser in 1960. Since then, lasers have become an integral part of our everyday lives, used in everything from measuring systems, eye surgery and weapons via compact discs (CDs) and DVD players to fiber-optic communication and bar code readers.

The Hungarian electrical engineer Dennis Gabor (1900-1979) was born and educated in Budapest and until the 1930s worked as a research engineer for the Berlin firm Siemens and Halske. After Hitler’s rise to power he relocated to Britain. In the 1940s he attempted to improve the resolution of the electron microscope using a procedure he called wavefront reconstruction. In 1947 he conceived the idea of holography, yet the practicality and usefulness of Gabor’s work with three dimensional images, which he termed holograms after the Greek phrase for “whole message,” remained limited until the introduction of lasers in the 1960s. This provided the intense, coherent light necessary for clear holograms, which have since then been adopted in a wide range of different applications as well as in art.

The theoretical physicist Paul Dirac (1902-1984), born in England to a Swiss father and an English mother, was concerned with the mathematical aspects of quantum mechanics and began working on the new quantum mechanics as soon as it was introduced by Heisenberg in the mid-1920s. His wave equation, which introduced special relativity into Schrödinger’s equation, unified aspects of quantum mechanics and relativity theory and described electron spin (magnetic moment) — a fundamental but until then not properly explained feature of quantum particles. His Principles of Quantum Mechanics from 1930 is a landmark in the history of science, and Paul Dirac could be considered the creator of the first complete theoretical formulation of quantum mechanics.

Following Paul Dirac’s lead, quantum electrodynamics (QED), a very successful scientific theory with great predictive powers, was developed by a number of physicists, employing mathematics developed by the Hungarian American John von Neumann (1903-1957), the German David Hilbert (1862-1943), the Italian-French Joseph-Louis Lagrange (1736-1813) and others. QED describes mathematically interactions of light with matter and of charged particles with one another. Albert Einstein’s special theory of relativity is built into its equations. The American physicists Richard Feynman (1918-1988) and Julian Schwinger (1918-1994) were awarded the Nobel Prize in Physics in 1965 for their efforts in this development, together with the Japanese physicist Sin-Itiro Tomonaga (1906-1979). QED applies to all electromagnetic phenomena associated with charged fundamental particles.

The Japanese theoretical physicist Hideki Yukawa (1907-1981) in 1935 published a theory of mesons which explained the interaction between protons and neutrons and predicted the existence of the pion (pi-meson). It was discovered by the English physicist Cecil Powell (1903-1969) in 1947 after studies of cosmic rays, a breakthrough which earned both men a Nobel Prize. The physicist Cesar Lattes (1924-2005) from Brazil is often considered a co-discoverer of the pion.

Yukawa became Japan’s first Nobel laureate, but by no means the last. His co-student at the Kyoto Imperial University, Sin-Itiro Tomonaga, won a Prize of his own. Tsung-Dao Lee (born 1926) and Chen-Ning Yang (born 1922) in 1957 became the first Nobel laureates of Chinese descent. Both of them were born in China but eventually settled in the United States and became part of the Chinese American scientific diaspora. They got their Prizes for work in particle physics, on parity violation.

Paul Dirac had predicted the existence of an antiworld identical to ours but made out of antimatter, with particles that are the mirror image of particles of known matter. The positron or antielectron was discovered by the American physicist Carl David Anderson (1905-1991), born of Swedish parents, while studying cosmic rays in 1932. As had been demonstrated by the Austrian physicist Victor Hess, with whom Anderson was to share a Nobel Prize, cosmic rays come from outer space. The Italian and later American Jewish physicist Emilio Segrè (1905-1989), a student of Enrico Fermi, together with the American Owen Chamberlain (1920-2006) discovered the antiproton in 1955 while working at Berkeley, California.

The Scottish physicist Charles Wilson (1869-1959) created the cloud chamber, an invaluable tool for studying sub-atomic particles, in the first decades of the twentieth century. The improved bubble chamber was invented in 1952 by the American physicist Donald A. Glaser (born 1926). The basic principles of synchrotron design were proposed independently by the Ukraine-born physicist Vladimir Veksler (1907-1966) in the Soviet Union (1944) and Edwin McMillan (1907-1991) in the United States (1945).

Thanks to the development of high energy physics and particle accelerators in the second half of the twentieth century, scientists discovered many previously unknown particles. According to the Standard Model formulated in the 1970s, protons and neutrons are not elementary particles like electrons since they consist of smaller particles called quarks, which virtually never exist on their own in nature.

Ironically, the smaller the units of matter that are investigated become, the larger and more expensive the equipment needed to study them gets. The Large Hadron Collider (LHC), the largest particle accelerator built so far, was opened by the European Organization for Nuclear Research (CERN) near Geneva, Switzerland in 2008, built and run with the involvement of thousands of scientists from dozens of countries. It is hoped that it, and other particle accelerators in North America and Asia, can answer some of the fundamental questions about the physical structure of the universe, possibly even the existence of other dimensions in addition to the four known ones (three of space plus time), as some proponents of string theory have suggested. We will almost certainly discover new phenomena.

The rapid advances in nuclear physics before and during the Second World War (1939-1945) eventually facilitated the development of nuclear weapons. The English physicist Sir John Cockcroft (1897-1967) and the Irish physicist Ernest Walton (1903-1995) in the early 1930s managed to split the atomic nucleus. The highly influential Italian physicist Enrico Fermi did important work in this field while in Italy, but moved to the United States before the outbreak of the war as his Jewish wife in particular faced hard times under the Fascist regime.

The German Otto Hahn (1879-1968), who had earlier worked with William Ramsay and Ernest Rutherford, together with Fritz Strassmann (1902-1980) in 1938 published results from experiments in which uranium was bombarded with neutrons. This produced barium, and the result was correctly interpreted by their colleague, the Austrian Lise Meitner (1878-1968), and her nephew Otto Frisch (1904-1979) as nuclear fission. Meitner had become a refugee from the Nazi regime because she was a Jew.

A number of leading scientists, among them Albert Einstein, left for the USA during this period, and some became involved in the project to develop nuclear weapons, possibly the largest and most expensive undertaking in organized science in world history until that time. Ironically, several of them had not previously had an active Jewish identity, but were nevertheless classified as Jews due to their ancestry, according to the twisted Nazi race laws.

Niels Bohr, while visiting the United States early in 1939, brought news of the latest discoveries in Europe. Together with the young American physicist John Archibald Wheeler he did important theoretical work on nuclear fission. After escaping German-occupied Denmark in 1943, Bohr became an active contributor to the Manhattan Project at the top-secret Los Alamos laboratory in New Mexico.

The Hungarian Jew Leo Szilard (1898-1964) was partly responsible for initiating the Manhattan Project through writing the Einstein-Szilard letter sent by the famous scientist Einstein to President Franklin D. Roosevelt (1882-1945) in August 1939, urging the USA to study the use of nuclear fission for weapons before Nazi Germany could make them. Another Hungarian Jew who became a key person in the American nuclear program was Edward Teller (1908-2003), who played a central role in the development of thermonuclear weapons, or hydrogen bombs as they are commonly called.

The Russian nuclear physicist and human rights activist Andrei Sakharov (1921-1989) led the development of hydrogen bombs in the Soviet Union, along with the Russian physicist Igor Kurchatov (1903-1960), in the arms race between the two superpowers during the Cold War.

Leo Szilard helped Enrico Fermi construct the first nuclear reactor. Fermi’s group, which included the Canadian nuclear physicist Walter Zinn (1906-2000), achieved the world’s first self-sustaining nuclear chain reaction in December 1942 in Chicago. The research of Fermi and others paved the way for the peaceful use of energy from nuclear fission from the 1950s and 60s onwards, but in the short run it was another milestone in the development of nuclear weapons. General Leslie Richard Groves (1896-1970) became the military leader of the Manhattan Project, whereas J. Robert Oppenheimer (1904-1967) was the scientific director.

The Manhattan Project, while ultimately a success and one of the greatest triumphs of technology as applied science in history, cost enormous sums and involved tens of thousands of people in several secret locations, including large numbers of scientists, from Szilard and Fermi via Arthur Compton to Glenn T. Seaborg (1912-1999), the co-discoverer of plutonium and the transuranium elements. The transuranium elements are the chemical elements that lie beyond uranium (atomic number 92) in the periodic table, for instance plutonium with atomic number 94. All of them are unstable and decay radioactively into other elements, but once again, Dmitri Mendeleyev’s periodic table was able to make room for these new elements.

The German nuclear physicist J. Hans Daniel Jensen (1907-1973), the German-born American physicist Maria Göppert-Mayer (1906-1972) and the Jewish Hungarian-born and later American physicist Eugene Wigner (1902-1995) in the late 1940s independently worked out the shell structure of the atomic nucleus. Jensen and Göppert-Mayer in 1955 co-authored the book Elementary Theory of Nuclear Shell Structure, which chronicled their discoveries.

In addition to weapons, the breakthroughs in nuclear physics made it possible for the first time to work out the processes that fuel the stars. There were discussions in nineteenth century Europe regarding the source of solar energy. Chemical combustion was eventually rejected as it would have burnt away a mass as large as the Sun in a few thousand years. Other theories such as gravitational contraction were abandoned after the discovery of radioactivity in 1896, which suddenly provided a previously unknown source of heat with great potential.

The English astrophysicist Sir Arthur Stanley Eddington (1882-1944) was one of the first to provide observational support for Einstein’s general theory of relativity from 1916 and explain it to a mass audience. He was among the first to suggest that processes at the subatomic level involving hydrogen and helium could explain why stars generate energy, but most people still believed that hydrogen and helium formed just a small part of the mass of stars, and the atomic structure was not properly understood until the identification of the neutron in 1932. It was left for others to work out the details of stellar energy production.

As late as in the 1920s, many leading scientists still assumed that the Sun was rich in heavy elements. This changed with the work of the English-born astronomer Cecilia Payne, later named Cecilia Payne-Gaposchkin (1900-1979) after she married the Russian astronomer Sergei Gaposchkin. Her interest in astronomy was triggered after she heard Arthur Eddington lecture on relativity. She arrived at the Harvard College Observatory in the USA as an assistant to the American astronomer Harlow Shapley (1885-1972). By using spectroscopy she worked out that hydrogen and helium are the most abundant elements in stars. Otto Struve (1897-1963), a Russian astronomer of ethnic German origins who eventually settled in the USA, called her thesis Stellar Atmospheres from 1925 “undoubtedly the most brilliant Ph.D. thesis ever written in astronomy.”

The Irish astronomer William McCrea (1904-1999) and the German astrophysicist Albrecht Unsöld (1905-1995) independently established that the prominence of hydrogen in stellar spectra indicates that the presence of hydrogen in stars is greater than that of all other elements put together. Unsöld studied under the German physicist Arnold Sommerfeld (1868-1951) at the University of Munich and began working on stellar atmospheres in 1927.

The stage was finally set for a coherent theory of stellar nuclear energy production. This was worked out by the German American physicist Hans Bethe in the USA and the German physicist Carl von Weizsäcker (1912-2007) in Berlin in the late 1930s.

Bethe described the proton-proton chain, which is the dominant energy source in stars such as our Sun. In 1939, in a paper entitled Energy Production in Stars, he also described the carbon-nitrogen-oxygen (CNO) cycle, which is important in more massive stars and which Carl von Weizsäcker had independently derived in 1938. When hydrogen atoms are fused to form helium, some mass is lost and converted to energy in the process. According to the equation E = mc², where E stands for energy, m for mass and c for the speed of light in a vacuum, very little mass is required to generate huge amounts of energy.
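
A back-of-the-envelope sketch in Python (the particle masses are standard values, not figures quoted in this essay) shows the roughly 0.7 percent mass loss when four hydrogen nuclei end up as one helium nucleus, and the energy that corresponds to it:

```python
# A rough sketch of E = m*c^2 applied to hydrogen fusion.  The particle
# masses are standard values, not figures quoted in the essay.
C = 2.998e8                  # speed of light, m/s
PROTON_MASS = 1.6726e-27     # kg
HELIUM4_MASS = 6.6447e-27    # kg, mass of a helium-4 nucleus

mass_in = 4 * PROTON_MASS            # four hydrogen nuclei go in
mass_lost = mass_in - HELIUM4_MASS   # about 0.7 percent of the mass disappears
energy_joules = mass_lost * C**2     # ...and reappears as energy

print(f"mass lost: {mass_lost:.2e} kg ({100 * mass_lost / mass_in:.2f} percent)")
print(f"energy released per helium nucleus formed: {energy_joules:.2e} J")
```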

The German theoretical physicist Arnold Sommerfeld made many valuable contributions to the development of quantum and wave mechanics. He modified Bohr’s atomic theory to include elliptical orbits and used statistical mechanics to explain the electronic properties of metals. As an influential teacher he groomed many great scholars such as Wolfgang Pauli, Werner Heisenberg and the German-born British physicist Sir Rudolf Peierls (1907-1995).

Hans Bethe was a former student of Sommerfeld but was forced to leave Germany after Adolf Hitler (1889-1945) and the Nazi Party came to power in 1933. Weizsäcker was a member of the team that performed nuclear research in Germany during the Second World War, while Bethe became the head of the theoretical division at Los Alamos during the development of nuclear weapons in the United States.

According to John North, “Moving to the United States, he eventually joined Cornell University, where he concentrated on nuclear physics in general. It was not until 1938, when attending a Washington conference organized by Gamow, that he was first persuaded to turn his attention to the astrophysical problem of stellar energy creation. Helped by Chandrasekhar and Strömgren, his progress was astonishingly rapid. Moving up through the periodic table, he considered how atomic nuclei would interact with protons. Like Weizsäcker, he decided that there was a break in the chain needed to explain the abundances of the elements through a theory of element-building. Both were stymied by the fact that nuclei with mass numbers 5 and 8 were not known to exist, so that the building of elements beyond helium could not take place….Like Weizsäcker, Bethe favored the proton-proton reaction chain and the CNO reaction cycle as the most promising candidates for energy production in main sequence stars, the former being dominant in less massive, cooler, stars, the latter in more massive, hotter, stars. His highly polished work was greeted with instant acclaim by almost all of the leading authorities in the field.”

Bengt Strömgren (1908-1987) was a Danish astrophysicist and the son of a Swedish-born astronomer. He studied in Copenhagen and stayed in touch with the latest developments in nuclear physics via Niels Bohr’s Institute close by. Strömgren did important research in stellar structure in the 1930s and calculated the relative abundances of the elements in the Sun and other stars. Another leading figure was the Russian-born theoretical physicist George Gamow (1904-1968). Gamow worked in the Soviet Union but fled the brutal oppression under the Communist dictator Joseph Stalin (1878-1953) and moved to the United States in 1934.

While their work represented a huge conceptual breakthrough, the initial theories of Weizsäcker and Bethe did not explain the creation of elements heavier than helium. Edwin Ernest Salpeter (1924-2008) was an astrophysicist who emigrated from Austria to Australia, studied at the University of Sydney and finally ended up at Cornell University in the USA, where he worked in the fields of quantum electrodynamics and nuclear physics with Hans Bethe. In 1951 he explained how, through the “triple-alpha” reaction, carbon nuclei could be produced from helium nuclei in the nuclear reactions within certain large and hot stars.

More advances were made by the Englishman Sir Fred Hoyle (1915-2001) and others. Hoyle was educated by some of the leading scientists of his day, among them Arthur Eddington and Paul Dirac. During World War II he contributed to the development of radar. With the German American astronomer Martin Schwarzschild (1912-1997), son of astrophysicist Karl Schwarzschild and a pioneer in the use of electronic computers and high-altitude balloons to carry scientific instruments, he developed a theory of the evolution of red giant stars.

Hoyle remained controversial throughout his life for his support of many highly unorthodox ideas, yet he made indisputable contributions to our understanding of stellar nucleosynthesis and together with a few others showed how the heavy elements are created during supernova explosions. Building on the work of Hans Bethe, Fred Hoyle in 1957 co-authored with the American astrophysicist William Alfred Fowler (1911-1995) and the English astrophysicists Margaret Burbidge (born 1919) and Geoffrey Burbidge (born 1925) the paper Synthesis of the Elements in Stars. They demonstrated how the cosmic abundances of essentially all but the lightest nuclei could be explained as the result of nuclear reactions in stars.

Astrophysicists spent the 1960s and 70s working out detailed descriptions of the internal workings of the stars and how all the elements up to iron can be manufactured from the hydrogen and helium supposedly produced after the Big Bang. The Japanese astrophysicist Chushiro Hayashi (born 1920) and his students made significant contributions to the models of stellar evolution in the 1960s and 70s. Hayashi was a leader in building astrophysics as a discipline in Japan. The Armenian scientist Victor Ambartsumian (1908-1996) was one of the pioneers in astrophysics in the Soviet Union, studied the evolution of stars and hosted conferences to search for extraterrestrial civilizations.

The process of combining light elements into heavier ones — nuclear fusion — happens in the hot central region of stars. In the extremely hot core, instead of individual atoms you have a mix of nuclei and free electrons, or what we call plasma. The term “plasma” was first applied to ionized gas by Irving Langmuir (1881-1957), an American chemist and physicist, in 1923. It is the fourth and by far the most common state of matter in the universe, in addition to the three we are familiar with from everyday life on Earth: solid, liquid and gas. Extreme temperatures and pressures are needed to overcome the mutual electrostatic repulsion (called the Coulomb barrier, after Charles de Coulomb, who formulated the laws of electrostatic attraction and repulsion) of positively charged atomic nuclei (ions) and allow them to fuse together. The minimum temperature required for the fusion of hydrogen is about 5 million degrees. Fusing heavier elements requires temperatures of hundreds of millions or even billions of degrees Celsius.

Most of the heavy elements are produced in stars far more massive than our Sun. No star, regardless of how big and hot it is, can generate energy by fusing elements heavier than iron, with the atomic number 26 (i.e. 26 protons in the nucleus). Iron nuclei represent a very stable form of matter. The fusion of elements lighter than iron or the splitting of heavier ones generally leads to a slight loss in mass and thus to a net release of energy. It is the latter principle, nuclear fission, which is employed in most nuclear weapons by splitting heavy atoms such as those of uranium or plutonium, while the former, nuclear fusion, takes place in hydrogen bombs and in the stars.

To make elements heavier than iron such as lead or uranium, energy has to be added to fuse them together. This happens when stars significantly more massive than our Sun have exhausted their fuel supplies, collapse and release enormous amounts of gravitational energy that is converted into heat. This star then becomes a supernova (a so-called Type II supernova), which can for a brief period of time shine brighter than an entire galaxy. Some of the excess energy is used to fuse atomic nuclei to form heavy elements, which are then scattered throughout interstellar space by a massive explosion. When the outer layers of a star are thrown back into space, the material can be incorporated into clouds of gas and dust (nebulae) that can later form new stars and planets. The remaining core of the exploded star will become a neutron star or a black hole, depending upon how massive it is. It is believed that the heavy elements we find on Earth, for instance gold with atomic number 79, are the result of ancient supernova explosions and were once a part of the cloud of gas and dust which formed our Solar System about 4.6 billion years ago.

The nebular hypothesis, the idea that our Solar System formed from a nebula, a cloud of gas, was first proposed in 1734 by the Swedish philosopher and theologian Emanuel Swedenborg (1688-1772). The German Enlightenment philosopher Immanuel Kant (1724-1804) developed this theory further, and Pierre-Simon Laplace also advanced a nebular hypothesis in 1796.

Even after the introduction of the telescope it took centuries for Western astronomers to work out the true scale of the universe. The English astronomer and architect Thomas Wright (1711-1786) suggested around 1750 that the Milky Way was a disk-like system of stars and that there were other star systems similar to the Milky Way but very far away. Soon after, Immanuel Kant in 1755 hypothesized that the Solar System is part of a huge, lens-shaped collection of stars and that similar such “island universes” exist elsewhere. Kant’s thoughts about the universe, however, were philosophical and had little observational content.

William Herschel’s On the Construction of the Heavens from 1785 was the first quantitative analysis of the Milky Way’s shape based on careful telescopic observations. William Parsons in Ireland, with the largest telescope of the nineteenth century, the Leviathan of Parsonstown, was after 1845 able to see the spiral structure of some nebulae, which we now recognize as spiral galaxies. Already in 1612 the German astronomer Simon Marius had published the first systematic telescopic description of the Andromeda Nebula, but he could not resolve it into individual stars. The final breakthrough came in the early twentieth century.

The Mount Wilson Observatory in California was founded by George Ellery Hale. Spectroscopic studies of the Sun by the American astronomer Walter Adams (1876-1956) with Hale and others at the Mt. Wilson Solar Observatory led to the discovery that sunspots are regions of lower temperatures and stronger magnetic fields than their surroundings. Adams also discovered carbon dioxide in the atmosphere of Venus and identified Sirius B as one of the first known white dwarf stars. Our Sun will end up as a white dwarf a few billion years from now. The spectroheliograph for studying the Sun was developed independently by Hale and the French astronomer Henri-Alexandre Deslandres (1853-1948) around 1890.

George Ellery Hale offered Harlow Shapley a research post, and in 1918 Shapley made what was to become his greatest single contribution to science: the discovery of the true size of our galaxy. It was much bigger than earlier estimates by William Herschel and others had made it out to be, and the Sun was not close to its center. Yet even Shapley didn’t get everything right. He believed that the so-called spiral nebulae were a part of the Milky Way and that the universe essentially consisted of one large galaxy: our own.

Jacobus Kapteyn (1851-1922) founded the productive Dutch school of astronomers and was, after Gerard Kuiper and Jan Oort, among the most prominent Dutch-born astronomers of the twentieth century. Kapteyn observed that stars could be divided into two streams, moving in nearly opposite directions. His discovery of “star streaming” led to the finding of galactic rotation. The Swedish astronomer Bertil Lindblad (1895-1965), who directed the Stockholm Observatory from 1927 to 1965, confirmed Shapley’s approximate distance to the center of our galaxy, estimated the galactic mass and deduced that the rate of rotation of the stars in the outer part of the galaxy, where the Sun is located, decreased with distance from the galactic core. Oort at the University of Leiden in 1927 confirmed Lindblad’s theory that the Milky Way rotates, and their model of galactic rotation was verified by the Canadian astronomer John Plaskett (1865-1941). In the 1950s, the Dutch astronomer Hendrik C. van de Hulst (1918-2000) and others mapped the clouds of the Milky Way and delineated its spiral structure.

The job of cataloging individual stars and recording their position and brightness from photographic plates at the Harvard College Observatory was done by a group of women, “human computers” working with Edward Pickering, among them the American astronomer Henrietta Swan Leavitt (1868-1921). The concept of “standard candles,” stars whose brightness can be reliably calculated and used as benchmarks to measure the distance of other stars, was introduced by Leavitt for Cepheid variable stars, which could be employed to measure relative distances in the sky. This principle was used by Hubble in his work.
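
One common way to turn a standard candle into a distance, sketched below in Python, is the distance modulus m - M = 5·log10(d) - 5, with d in parsecs; the formula is a standard astronomical convention and the magnitudes used here are made-up illustrative values, not material taken from this essay:

```python
# A hedged sketch of how a "standard candle" yields a distance.  If a star's
# true (absolute) magnitude M is known, for example from a period-luminosity
# relation, and its apparent magnitude m is measured, the distance in parsecs
# follows from the distance modulus m - M = 5*log10(d) - 5.
def distance_parsecs(apparent_mag, absolute_mag):
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A hypothetical Cepheid with absolute magnitude -4 observed at magnitude 18.
print(f"{distance_parsecs(18.0, -4.0):,.0f} parsecs")
```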

Scientists are just like other people, only more so. Isaac Newton could be a difficult man to deal with, yet he was undoubtedly one of the greatest geniuses in history. Henry Cavendish was a brilliant experimental scientist as well as painfully shy and eccentric. Judging from the articles I have read about him, Edwin Hubble must have had an ego the size of a small country, but that doesn’t change the fact that he was one of the most important astronomers of the twentieth century and that his work permanently altered our view of the universe.

The 100 inch (2.5 meter) Hooker telescope at Mount Wilson was completed before 1920, at which point it was the largest telescope in the world. Using this telescope, Edwin Hubble identified Cepheid variable stars in the Andromeda Nebula. They allowed him to show that the distance to Andromeda was greater than Shapley’s proposed extent of our Milky Way Galaxy. Therefore the Andromeda Nebula was a galaxy much like our own. Hubble proceeded to demonstrate that there are countless other galaxies of different shapes and sizes out there, and that the universe was far larger than anybody had imagined before. He discovered Hubble’s Law and introduced the concept of an expanding universe. “His investigation of these and similar objects, which he called extragalactic nebulae and which astronomers today call galaxies, led to his now-standard classification system of elliptical, spiral, and irregular galaxies, and to proof that they are distributed uniformly out to great distances. (He had earlier classified galactic nebulae.) Hubble measured distances to galaxies and with Milton L. Humason extended Vesto M. Slipher’s measurements of their redshifts, and in 1929 Hubble published the velocity-distance relation which, taken as evidence of an expanding Universe, is the basis of modern cosmology.”

The Austrian physicist Christian Doppler described what is known as the Doppler effect for sound waves in the 1840s and predicted that it would be valid for other forms of waves, too. An observed redshift in astronomy is believed to occur due to the Doppler effect whenever a light source is moving away from the observer, displacing the spectrum of that object toward the red wavelengths. Hubble discovered that the degree of redshift observed in the light coming from other galaxies increased in proportion to the distance of those galaxies from us.
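
Hubble's proportionality is usually written v = H0·d. The small sketch below (the redshift and the value of the Hubble constant are illustrative assumptions of mine, not figures from this essay) turns a measured redshift into an approximate recession velocity and distance:

```python
# A minimal sketch of Hubble's law, v = H0 * d, using the low-redshift
# approximation v ~ c * z.  The Hubble constant and the redshift below are
# illustrative assumed values, not figures from the essay.
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec

def recession_velocity_km_s(redshift):
    return C_KM_S * redshift          # valid only for small redshifts

def distance_megaparsecs(redshift):
    return recession_velocity_km_s(redshift) / H0

z = 0.01   # a relatively nearby galaxy
print(f"v is about {recession_velocity_km_s(z):.0f} km/s, "
      f"d is about {distance_megaparsecs(z):.0f} megaparsecs")
```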

Hubble’s observational work led the great majority of scientists to believe in the expansion of the universe. This had a huge impact on cosmology at the time, among others on the Dutch mathematician and astronomer Willem de Sitter (1872-1934).

According to J J O’Connor and E F Robertson, “Einstein had introduced the cosmological constant in 1917 to solve the problem of the universe which had troubled Newton before him, namely why does the universe not collapse under gravitational attraction. This rather arbitrary constant of integration which Einstein introduced admitting it was not justified by our actual knowledge of gravitation was later said by him to be the greatest blunder of my life. However de Sitter wrote in 1919 that the term ‘… detracts from the symmetry and elegance of Einstein’s original theory, one of whose chief attractions was that it explained so much without introducing any new hypothesis or empirical constant.’ In 1932 Einstein and de Sitter published a joint paper in which they proposed the Einstein-de Sitter model of the universe. This is a particularly simple solution of the field equations of general relativity for an expanding universe. They argued in this paper that there might be large amounts of matter which does not emit light and has not been detected. This matter, now called ‘dark matter’, has since been shown to exist by observing its gravitational effects.”

The cosmologist Georges Lemaître (1894-1966) from Belgium was a Catholic priest as well as a trained scientist. The combination is not unique. The Italian astronomer Angelo Secchi was a priest and the creator of the first modern system of stellar classification; the German-Czech scholar Gregor Mendel (1822-1884) was a priest and the father of modern genetics. In Lemaître’s view there was no conflict between religion and science. He reviewed the general theory of relativity and his calculations showed that the universe had to be either shrinking or expanding. Einstein had invented a cosmological constant to keep the universe stable, but Lemaître argued that the entire universe was initially a single particle — the “primeval atom” as he called it — which disintegrated in a massive explosion, giving rise to space and time. He published a model of an expanding universe in 1927 which had little impact at the time. However, by 1929 Hubble had discovered that galaxies were moving away at high speeds. In 1930 Lemaître’s former teacher at Cambridge University, Arthur Eddington, shared his paper with de Sitter. Einstein eventually confirmed that Lemaître’s work “fits well into the general theory of relativity,” and de Sitter soon praised Lemaître’s discovery.

Unknown to Lemaître, another person had come up with overlapping ideas. This was the Russian mathematician Alexander Friedmann (1888-1925), who in 1922 had published a set of possible mathematical solutions that gave a non-static universe. Already in 1905 he wrote a mathematical paper and submitted it to the German mathematician David Hilbert (1862-1943) for publication. In 1914 he went to Leipzig to study with the Norwegian physicist Vilhelm Bjerknes (1862-1951), the leading theoretical meteorologist of the time. He then got caught up in the turbulent times of the Russian Revolution in 1917 and the birth of the Soviet Union.

Friedmann’s work was hampered by a very abstract approach and aroused little interest at the time of publication. Lemaître attacked the issue from a much more physical point of view. Friedmann submitted an article for publication in 1922, which Einstein read without much enthusiasm. Friedmann died from typhoid fever in 1925, only 37 years old, but he lived to see his city Saint Petersburg renamed Leningrad after the revolutionary leader and Communist dictator Vladimir Lenin (1870-1924). George Gamow studied briefly under Alexander Friedmann, yet he fled the country in 1933 due to the increasingly brutal repression of the Communist regime, which killed millions of its own citizens during this time period.

While Lemaître’s “primeval atom” from the 1920s is widely considered the first version of the “Big Bang” theory of the origin of the universe, a more comprehensive, modified version of this theory was published in 1948 by Gamow and the American cosmologist Ralph Alpher (1921-2007). The term “Big Bang” was coined somewhat mockingly by Fred Hoyle, who did not believe in the theory. Gamow decided as a joke to include his old friend Hans Bethe as co-author of the paper, thus making it known as the Alpher, Bethe, Gamow or alpha-beta-gamma paper, after the first three letters of the Greek alphabet. It can be seen as the beginning of Big Bang cosmology as a coherent scientific model.

Yet this joke had the effect of downplaying Alpher’s contributions. He was then a young doctoral student, and when his name appeared next to those of two of the leading astrophysicists in the world it was easy to assume that he was a junior partner. As a matter of fact, he made very substantial contributions to the Big Bang model whereas Bethe, brilliant though he was as a scientist, in this case had contributed virtually nothing. Ralph Alpher in many ways ended up being the “forgotten father” of the Big Bang theory and often felt, with some justification, that he didn’t receive the recognition that he deserved. Steven Weinberg (born 1933), an American Jewish physicist and Nobel laureate, has described Alpher’s research as “the first thoroughly modern analysis of the early history of the universe.”

Alpher published two papers in 1948. Apart from the one with Gamow, in another paper with the American scientist Robert Herman (1914-1997) he predicted the existence of a cosmic background radiation as an “echo” of the Big Bang. Sadly, astronomers did not bother to search for this predicted echo at the time, and Alpher and Herman were later seldom credited for their contribution. The cosmic microwave background radiation, which is considered one of the strongest proofs in favor of the Big Bang theory as a remnant of the early universe, was accidentally discovered by the Americans Robert Wilson (born 1936) and Arno Penzias (born 1933) in 1964. Yet they did not initially understand the significance of what they had found, and when Penzias and Wilson received a Nobel Prize for the discovery in 1978, Alpher and Herman were passed over.

The English-born American engineer William Shockley (1910-1989), together with the American physicists Walter Houser Brattain (1902-1987) and John Bardeen (1908-1991), in 1947 at Bell Telephone Laboratories invented the transistor, the semiconductor device underlying virtually all modern electronics. Many historians of technology rank the transistor as one of the greatest inventions of the twentieth century, if not the greatest.

In 1958 and 1959 the Americans Jack Kilby (1923-2005) and Robert Noyce (1927-1990) independently developed the integrated circuit or microchip, a circuit in which many transistors and their interconnections are fabricated together on the same piece of semiconductor. Kilby also invented a handheld calculator and a thermal printer. Noyce co-founded Intel Corporation in 1968 with Gordon Moore (born 1929). From the 1960s the number of transistors that could be fitted on a chip doubled roughly every 1.5 years, a development known as Moore’s Law.
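
The doubling rule just described is simple exponential growth. The short sketch below (the starting transistor count, roughly that of the Intel 4004 of 1971, and the time spans are my own illustrative choices, not figures from this essay) shows how quickly such a rule compounds:

```python
# A small sketch of the doubling rule described above: transistor counts
# growing by a factor of two every 1.5 years.  The starting count and the
# time spans are illustrative choices, not data from the essay.
def projected_transistor_count(start_count, years, doubling_period_years=1.5):
    return start_count * 2 ** (years / doubling_period_years)

# Starting from about 2,300 transistors (roughly the Intel 4004 of 1971).
for years in (10, 20, 30):
    count = projected_transistor_count(2300, years)
    print(f"after {years} years: about {count:,.0f} transistors")
```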

The final decades of the twentieth century became known as the Information Revolution, the Digital Revolution or simply the Age of the Microchip. The introduction of personal computers into private homes and eventually the rise of the Internet as a mass medium had a huge impact on everyday life, but above all electronic computers revolutionized the sciences, among them astronomy and astrophysics. Along with the dawn of the Space Age, the rise of communication satellites, space telescopes and the possibility of physically exploring other bodies in our Solar System, a development which went hand-in-hand with the evolution of rocket technology and electronics, faster electronic computers opened up vast new possibilities for processing the information that has been gathered. They can run massively complex simulations of supernova explosions, the first seconds of the Big Bang or other processes and events that are either too difficult or too time-consuming to do manually.

Those who claim that the West owes much to the East are right in many cases, but also exaggerate this debt in others. Space travel was not so much a “mutual exchange of ideas” as an overwhelmingly European and Western creation based on science and technology that did not exist nor develop anywhere else in the world. It is true that the concept of a “rocket” was invented in Asia. The Chinese used rockets for military purposes for centuries, and they were known in other regions such as the Indian subcontinent where Europeans encountered them.

According to Arnold Pacey in his book Technology in World Civilization, “British armies on the subcontinent encountered rockets, a type of weapon of which they had no previous experience. The basic technology had come from the Ottoman Turks or from Syria before 1500, although the Chinese had invented rockets even earlier. In the 1790s, some Indian armies included very large infantry units equipped with rockets. French mercenaries in Mysore had learned to make them, and the British Ordnance Office was enquiring for somebody with expertise on the subject. In response, William Congreve, whose father was head of the laboratory at Woolwich Arsenal, undertook to design a rocket on Indian lines. After a successful demonstration, about 200 of his rockets were used by the British in an attack on Boulogne in 1806. Fired from over a kilometre away, they set fire to the town. After this success, rockets were adopted quite widely by European armies.”

Rockets were not totally unknown in Europe prior to this, but the major introduction of them to the West happened during the Napoleonic Wars in the early 1800s. When the English civil engineers George (1781-1848) and Robert Stephenson (1803-1859) built their famous steam locomotive Rocket in 1829, rockets were still something of a novelty. Nevertheless, by the twentieth century they had developed into devices which in size and complexity differed so much from traditional Asian rockets that they had little in common apart from the name.

Space travel depended upon a host of scientific and technological innovations and in many ways represented the culmination of centuries of Western advances in these fields. It is unthinkable that you could have had space travel without the European Scientific and Industrial Revolutions. The chemical revolution which facilitated the discovery or invention of the materials and fuels needed for space technology started in the eighteenth century or earlier. The first device for generating an electrical current was created by the Italian Alessandro Volta in 1800. Electromagnetism was discovered by a Dane, Hans Christian Ørsted, and developed by people from Germany, Britain, France and other European nations.

Asian rockets were powered by gunpowder and weighed at most a few kilograms. None of them could have overcome the Earth’s gravity, left the atmosphere and explored the Solar System. The huge Saturn V multistage rocket, designed in the USA under the leadership of the rocket scientist Wernher von Braun (1912-1977), which allowed the astronauts Neil Armstrong (born 1930) and Buzz Aldrin (born 1930) to land on the Moon on July 20, 1969, had roughly a million times more mass and burned kerosene in its first stage and liquid hydrogen in its upper stages, both together with liquid oxygen.

As we recall, oxygen was discovered independently by Carl Wilhelm Scheele in Sweden and Joseph Priestley in England in the 1770s. Henry Cavendish had discovered hydrogen a few years before. Both gases were studied and named by the Frenchman Antoine Lavoisier. The Englishman Michael Faraday liquefied a number of gases in the 1820s, but not oxygen, nitrogen and hydrogen. Small droplets of oxygen and nitrogen were obtained by the physicist Raoul-Pierre Pictet (1846-1929) in Geneva, Switzerland and the physicist Louis-Paul Cailletet (1832-1913) in Paris, France in 1877. Liquid oxygen in a stable state and in appreciable quantities was created by the Polish chemists and physicists Karol Olszewski (1846-1915) and Zygmunt Wróblewski (1845-1888) in 1883. The German engineer Carl von Linde (1842-1934) and the British chemist William Hampson (1859-1926) made improvements to the apparatus for reaching low temperatures. Liquid hydrogen was produced by the Scottish chemist Sir James Dewar (1842-1923) in 1898 and solidified the year after. The Dutchman Heike Kamerlingh Onnes liquefied helium in 1908 and discovered superconductivity in 1911.

While the materials and electrical equipment needed to launch human beings into space, communicate with them and bring them safely back home could not have been made without prior advances in electricity, electromagnetism, chemistry and other fields, we must not forget the importance of mathematical tools as well. Without modern branches of mathematics such as calculus it would have been more or less impossible to bring people to the Moon and back, or to send robotic probes to Mars and other planets. Indeed, the very concept of universal gravitation was only developed in Europe during the Scientific Revolution, by Sir Isaac Newton in England.

It is fashionable these days to claim that there were a number of “alternative paths” to modern science, which was born more or less independently in several regions of the world. Yet there was no alternative Mayan, Maori, African, Chinese or Indian path to space travel. A number of Asian countries such as Japan, China and India in the early twenty-first century have ambitious space programs of their own and will undoubtedly leave their mark in the field in the coming generations. If the Western world continues its current cultural decline, perhaps Asian countries will even play the leading role in future space exploration. This is certainly conceivable. Yet their space programs were initially established on the basis of wholesale borrowing from Western or Russian technology and did not have an independent local basis.

The developments which would lead to space travel began in the late nineteenth century. Writers such as the Englishman H. G. Wells (1866-1946) and the Frenchman Jules Verne (1828-1905), the fathers of the science fiction genre, inspired many people, including scientists who would develop practical rockets, to envision visits to the Moon and elsewhere. Rocket technology eventually advanced so far that space travel went from science fiction to fact.

The great American rocket scientist Robert Goddard (1882-1945) launched the world’s first successful liquid-fuelled rocket in 1926. He used a convergent-divergent nozzle of the kind developed for steam turbines by the Swedish inventor Gustaf de Laval (1845-1913) decades earlier. His rocket flight in 1929 carried the first scientific payload, a barometer and a camera. Robert Goddard was both a theoretical visionary and a practical engineer, a rare combination. He developed the mathematical theory of rocket propulsion, but his proposal for a rocket flight to the Moon was widely ridiculed in the media at the time. Like so many other pioneers, his genius was only fully appreciated after his death. He responded to the ridicule by stating that “Every vision is a joke until the first man accomplishes it; once realized, it becomes commonplace.”

Telescopic astronomy in the Russian Empire began in earnest with Mikhail Lomonosov. Serious ideas about space exploration began with Konstantin Tsiolkovsky (1857-1935), a Russian scientist and visionary, the son of a Polish father and a Russian mother. In 1903 he published the article Exploration of Space with Rocket Devices and drafted the design of a rocket powered by liquid oxygen and hydrogen. He calculated that a single-stage rocket would have to carry too much fuel to reach escape velocity and concluded that a multi-stage rocket would be more efficient. He once stated that “The Earth is the cradle of the mind, but we cannot live forever in a cradle.” Tsiolkovsky’s theoretical work on space travel was not well known in the West initially, but he had a major impact on some of those who would later become leading figures in the Soviet space program, among them Valentin Glushko (1908-1989).
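
Tsiolkovsky’s conclusion about staging follows from the rocket equation he derived, delta-v = ve ln(m0/mf). The following minimal Python sketch compares a single-stage rocket with a two-stage rocket using the same total propellant and structural mass; the exhaust velocity, payload and structural fraction are illustrative assumptions, not Tsiolkovsky’s own figures.

# A minimal sketch of Tsiolkovsky's rocket equation, delta_v = ve * ln(m0 / mf),
# comparing a single-stage rocket with a two-stage rocket that carries the same
# total propellant and structure. The exhaust velocity, payload and structural
# mass fraction are illustrative assumptions, not historical figures.
import math

def delta_v(ve, m0, mf):
    """Ideal velocity gain for exhaust velocity ve, initial mass m0 and final mass mf."""
    return ve * math.log(m0 / mf)

VE = 4400.0        # m/s, roughly a hydrogen/oxygen engine (assumed)
PAYLOAD = 1_000.0  # kg
STRUCTURE = 0.1    # dry structure as a fraction of each stage's fuelled mass

# Single stage: one 100-tonne fuelled stage carrying the payload.
single = delta_v(VE, 100_000 + PAYLOAD, STRUCTURE * 100_000 + PAYLOAD)

# Two stages of 50 tonnes each: the fuelled second stage is the first stage's payload.
second_full = 50_000 + PAYLOAD
second_empty = STRUCTURE * 50_000 + PAYLOAD
two_stage = (delta_v(VE, 50_000 + second_full, STRUCTURE * 50_000 + second_full)
             + delta_v(VE, second_full, second_empty))

print(f"single stage: {single / 1000:.1f} km/s")
print(f"two stages:   {two_stage / 1000:.1f} km/s (escape velocity is about 11.2 km/s)")

With these assumed numbers the single stage falls just short of escape velocity while the two-stage design exceeds it, because the second stage no longer has to drag along the empty structure of the first.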

The Ukraine-born Sergey Korolyov (1907-1966) was the leading Soviet rocket designer during the rapid developments of the 1950s and 1960s. He oversaw triumphs such as the launch of the world’s first artificial satellite, Sputnik 1, and the first human spaceflight, by the cosmonaut Yuri Gagarin (1934-1968) in April 1961. Korolyov died in 1966, before the Americans launched their successful missions to the Moon; his failing health was probably due in part to his long imprisonment in the Siberian Gulag during the ruthless purges of the Communist dictator Stalin.

The German rocket pioneer Hermann Oberth (1894-1989) published the book The Rocket into Interplanetary Space in 1923, which explained mathematically how rockets could achieve a speed that would allow them to escape the Earth’s gravitational pull. He organized enthusiasts into a society for space flight, to which the young Wernher von Braun belonged. During the 1930s they were very interested in Robert Goddard’s efforts in the USA and copied some of his ideas. Von Braun worked with military rockets during World War II, among them the V-2 rocket used against Allied targets. He engineered his own surrender to the Americans after the war together with some of his top scientists. Hermann Oberth worked for his former assistant von Braun both in Germany and in the USA.

Both the Western Allies and the Soviet Union captured scientists and engineers who had worked in the German rocket program, by the 1940s the most sophisticated in the world, but the most important person by far was Wernher von Braun himself. His background in Nazi Germany was obviously controversial, but he eventually became a naturalized American citizen and a driving force behind the American space efforts, including the Apollo Program which would lead to the first successful manned missions to the Moon. His skills as an engineer and visionary were certainly considerable, and Wernher von Braun is considered one of the greatest rocket scientists of the twentieth century.

When the Soviet Union, with Korolyov as chief designer, launched Sputnik 1 into orbit in October 1957, it ignited the Space Race between the two Cold War superpowers. The Americans responded quickly and established the National Aeronautics and Space Administration (NASA) in 1958. There was a large military component to this, as rockets and missiles could be equipped with nuclear warheads, but the Space Race was also a competition for global prestige which generated peaceful benefits such as weather and communications satellites. Above all, it opened up the possibility of placing observatories outside the Earth’s atmosphere and of sending probes to explore other bodies in our Solar System.

Among the most successful interplanetary space probes were the American Voyager 1 and 2, launched in 1977, whose cameras and other instruments provided a great deal of new information about the giant planets. Voyager 2 in the 1980s became the first spacecraft to visit Uranus and Neptune, and both probes continue to transmit information from the far outer reaches of our Solar System.

The American Cassini orbiter, named after the Italian-French astronomer Giovanni Cassini, carried the European Huygens probe, which in 2005 landed on Saturn’s largest moon, Titan. Titan, discovered in 1655 by Christiaan Huygens, is the only moon known to have a dense atmosphere, and its surface is shaped by rivers and lakes of liquid ethane and methane. The Huygens landing was the first on a body in the outer Solar System. Other probes have been sent, or are being planned, to study comets, asteroids, the Sun, Mercury, Venus, Pluto, Jupiter and above all Mars, in anticipation of a possible manned mission to that planet later this century.

The idea of putting a telescope in orbit above the Earth’s atmosphere was suggested as early as 1923 by Oberth. The first person to propose it and live to see the vision realized, however, was the American scientist Lyman Spitzer (1914-1997). He proposed a large space telescope in 1946 and was still analyzing data from the Hubble Space Telescope on the day he died. The HST, named after the discoverer of the extragalactic universe, Edwin Hubble, has been one of the most successful scientific instruments of recent decades. NASA named its infrared space observatory the Spitzer Space Telescope in Lyman Spitzer’s honor.

The American James Webb Space Telescope (JWST), named after former NASA administrator James Webb (1906-1992), is the successor to the Hubble Space Telescope, scheduled for launch in 2013. It will work in the infrared range of the electromagnetic spectrum, with some capability in the visible range, and will orbit 1.5 million km from the Earth. By comparison, the average distance to our Moon is just over 384,000 kilometers.

Not all future telescopes will be space telescopes, which are after all quite expensive. Some wavelengths such as X-rays and gamma rays must be studied primarily from above the Earth’s atmosphere, but most forms of optical and non-optical astronomy will be covered by a combination of space telescopes and sophisticated ground-based ones. There are serious plans to establish permanent observatories in remote places such as Antarctica, where the first robotic observatories have already been installed. Antarctica is the coldest but also the highest and driest continent. It has predominantly stable, clear weather and offers ideal conditions for astronomy, especially on the high-altitude plateaus known as Dome A and Dome C.

A whole range of Extremely Large Telescopes is being planned, such as the Thirty Meter Telescope (TMT), the Giant Magellan Telescope (GMT) and the European Extremely Large Telescope (E-ELT), a reflecting telescope with a planned mirror diameter of 42 meters for studying visible light. The Square Kilometre Array (SKA) is a planned radio telescope with a sensitivity far exceeding that of any comparable radio telescope in existence today. It is an international collaboration of scholars from Europe, the USA, Canada, Australia, China and India, among others, and will be placed somewhere in the Southern Hemisphere, possibly in Australia.

Hopefully, these new and powerful instruments can help settle some of the biggest questions in astronomy in the twenty-first century, such as the nature of so-called dark matter.

The existence of dark matter was predicted in the 1930s by the Swiss, US-based astrophysicist Fritz Zwicky. He was an unconventional and eccentric scientist known for his rough language, in striking contrast to his friend Walter Baade, but he was also highly innovative. He studied physics at the Federal Institute of Technology in Zürich, Switzerland, where one of his teachers was Auguste Piccard, the inventor of the bathyscaphe. He moved to California in 1925 but never took US citizenship. As mentioned before, together with Baade he introduced the term “supernova,” and he personally discovered a large number of supernovae. In 1933, from the motions of galaxies in the Coma cluster, Zwicky concluded that there must be far more matter present than the visible matter alone, which he dubbed “dark matter.”

The American astronomer Vera Rubin (born 1928) studied under Richard Feynman, Hans Bethe, George Gamow and other prominent scholars in the United States. She became a leader in the study of the structure, rotation and motions of galaxies. Her measurements of the rotation speeds of stars and gas in spiral galaxies showed that these galaxies must contain roughly ten times as much mass as can be accounted for by their visible stars. She soon realized that she had found evidence for Zwicky’s proposed dark matter, and her work brought the subject to the forefront of astrophysical research. Rubin’s pioneering studies of the 1970s have so far stood the test of time.
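
The reasoning behind this conclusion is simple Newtonian dynamics: for a star on a circular orbit, v^2 = GM(r)/r, so a rotation speed that stays flat out to large radii implies that the enclosed mass keeps growing with distance even where little additional starlight is seen. A minimal Python sketch, with the rotation speed and radii chosen as illustrative values rather than Rubin’s actual data:

# A minimal sketch of the reasoning behind "flat" galactic rotation curves:
# for a circular orbit, v**2 = G * M(r) / r, so if the measured speed v stays
# roughly constant out to large radii, the mass enclosed within r must keep
# growing even where little additional starlight is seen.
# The rotation speed and radii below are illustrative values, not Rubin's data.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
KPC = 3.086e19      # one kiloparsec in metres

def enclosed_mass(v, r):
    """Mass inside radius r (metres) implied by circular orbital speed v (m/s)."""
    return v * v * r / G

v_flat = 220e3      # m/s, a typical flat rotation speed for a spiral galaxy (assumed)
for r_kpc in (10, 20, 40):
    mass = enclosed_mass(v_flat, r_kpc * KPC)
    print(f"r = {r_kpc:2d} kpc -> about {mass / M_SUN:.1e} solar masses enclosed")

If most of a galaxy’s light is concentrated within the innermost ten kiloparsecs or so, yet the speed remains flat out to forty, the implied mass grows fourfold over a region that is nearly dark, which is the discrepancy Rubin attributed to Zwicky’s dark matter.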

Cosmologists currently hold that the universe consists of roughly 4% ordinary matter, 21% dark matter and 75% dark energy, an even more mysterious entity than dark matter. An alternative theory called Modified Newtonian Dynamics (MOND) has been proposed by the Israeli astrophysicist Mordehai Milgrom; it has won the backing of some scholars, but so far only a minority. The terms “dark matter” and “dark energy” reveal that scientists cannot currently explain some of the observed properties of the visible universe.

In the late nineteenth century, some European scholars believed that nearly all the basic laws of physics would soon be understood. They had reason for this optimism, as the previous century had indeed produced enormous progress, culminating in the new science of thermodynamics and the electromagnetic theories of Maxwell. Max Planck was once famously advised not to study physics because all the major discoveries there had already been made. Luckily for us he didn’t heed this advice but went on to start the quantum revolution. We have far greater knowledge today than people had back then, but also greater humility: we know how little we understand, and how much still remains to be learned about the universe. And maybe that is a good thing.

2 comments:

Steven Luotto said...

Grazie Fjordman.

Profitsbeard said...

Glass was "invented" by meteorites striking ~ and fusing~ earthly sand.

The best known historical example is so-called "Libyan Desert Glass" created by a 27 million year ago strike of a celestial object into the sands of the region later known as "Libya".

The globules of yellowish material are still sold for unique jewelry, good luck charms, astrophysical curios, etc.

The "magical" properties of this protean translucent material are literally out-of-this-world.

A vivifying essay, Fjordman, as always.

Many thanks and kudos!