How bright is bright

At one time or another, virtually all celestial navigators have looked at the data for the navigational planets and puzzled over the cryptic numbers alongside the names. Some numbers are positive, some are negative, and they change over the course of the year. Navigators who ask about these labels are told that the numbers represent “magnitude,” or the apparent brightness of the planets. And it helps not at all to be further informed that the more negative the number, the brighter the object.

What, one might wonder, is the basis of this crazy system? The seemingly contradictory nature of this method for describing the brightness of celestial objects results from grafting a 2,000-year-old, naked-eye observation system devised by ancient Greek astronomers onto scientific measurements made with sophisticated instruments.

The Greek system was first used in a star catalog compiled by Hipparchus (ca. 130 BC). He used a scale from one to six to indicate the relative brightness of stars, with one indicating the most luminous and six designating those stars that were just barely visible. He then assigned stars to each category: the 20 brightest stars were put in the magnitude one category, the next brightest were labeled magnitude two, and so on. This simple system worked well until the invention of the telescope, which allowed for the observation of stars dimmer than magnitude six.

Not content with the subjective system devised by Hipparchus, nineteenth-century astronomers worked to find a mathematical basis for determining magnitude. The English astronomer Sir John Herschel determined that the apparent brightness of a first magnitude star was 100 times that of a sixth magnitude body. In the 1850s Norman Pogson, another English astronomer, proposed a fixed scale of stellar magnitude based on Hipparchus’s system but taking Herschel’s finding into account. Since the ratio between magnitude one and magnitude six is 100, and there are five steps between them, each step on the scale represents a ratio equal to the fifth root of 100, or about 2.512. Thus, a first magnitude star is 2.512 times as bright as a second magnitude star, and 6.310 times as bright as a third magnitude body (2.512 x 2.512 = 6.310).
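For readers who like to check the arithmetic, here is a minimal Python sketch of Pogson’s ratio (the function name brightness_ratio is ours, purely for illustration):

    def brightness_ratio(mag_diff):
        # Each whole-magnitude step is a factor of 100 ** (1/5), about 2.512,
        # so a difference of n magnitudes corresponds to 2.512 ** n
        return (100 ** (1 / 5)) ** mag_diff

    print(brightness_ratio(1))  # about 2.512 (one step)
    print(brightness_ratio(2))  # about 6.310 (2.512 x 2.512)
    print(brightness_ratio(5))  # about 100   (first vs. sixth magnitude)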

This approach to stellar magnitude worked well until astronomers began to devise increasingly accurate instruments such as photometers. It was then determined that certain stars were more than 100 times brighter than sixth magnitude stars. This presented a bit of a problem: where do you go when you’re already at magnitude one? Rather than revamp the Greek system, astronomers decided to use negative numbers for the brightest objects. Thus, the bright star Canopus is given a magnitude of -0.9, and the very bright star Sirius has a magnitude of -1.6. Hence the rule: the more negative the number, the brighter the object. (Ironically enough, because the system was developed before the advent of instruments that could measure the exact luminosity of stars, the magnitude scale ends up anchored to dim sixth magnitude stars rather than to the brightest star as the touchstone.)
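Negative magnitudes plug into Pogson’s ratio exactly like positive ones; only the difference between the two magnitudes matters. A quick check, using the same illustrative arithmetic as above:

    # Sirius (-1.6) vs. a just-visible sixth magnitude star: 7.6 magnitudes apart
    print((100 ** (1 / 5)) ** (6 - (-1.6)))  # about 1,100 times brighter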

Given this system, the magnitudes of the four navigational planets involve some large negative numbers, as one can see by checking the Nautical Almanac. For 1994, the magnitude of Venus, the brightest of the four, varies from -3.9 to -4.6. The least brilliant member of the club, Saturn, goes from a lackluster +1.1 to the more respectably bright +0.5. The real champs in the magnitude game, however, are the full moon, at -12.6, and the sun, a whopping -26.7.
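Run through the same ratio, those almanac figures show just how lopsided the comparisons are. A sketch using the 1994 values quoted above (again with our illustrative brightness_ratio function):

    def brightness_ratio(mag_diff):
        # One magnitude step is the fifth root of 100, about 2.512
        return (100 ** (1 / 5)) ** mag_diff

    # Sun (-26.7) vs. full moon (-12.6): 14.1 magnitudes apart
    print(round(brightness_ratio(26.7 - 12.6)))  # about 437,000 times brighter

    # Venus at its best (-4.6) vs. Saturn at its dimmest (+1.1): 5.7 magnitudes
    print(round(brightness_ratio(4.6 + 1.1)))    # about 190 times brighter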

Astronomers also use another measure of brightness called “absolute magnitude.” This expresses the luminosity of stars not as observed from Earth but as seen from a standard distance (10 parsecs, or 32.6 light years). For example, while Sirius appears far brighter from Earth than dim Polaris, in terms of absolute magnitude Polaris is by far the more luminous star.
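The conversion behind that comparison is M = m - 5 log10(d/10), where m is the apparent magnitude and d is the distance in parsecs. A short sketch, using approximate modern figures for Sirius and Polaris (our assumed values, not drawn from the almanac):

    import math

    def absolute_magnitude(apparent_mag, distance_parsecs):
        # Apparent magnitude the object would show from the standard
        # distance of 10 parsecs (32.6 light years)
        return apparent_mag - 5 * math.log10(distance_parsecs / 10)

    # Assumed values: Sirius m = -1.46 at about 2.64 parsecs,
    # Polaris m = +2.0 at roughly 130 parsecs
    print(round(absolute_magnitude(-1.46, 2.64), 1))  # about +1.4
    print(round(absolute_magnitude(2.0, 130), 1))     # about -3.6

Strip away the distance handicap, in other words, and Polaris comes out roughly five magnitudes brighter than Sirius, a factor of about 100 in true luminosity.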

Tim Queeney
