
Hoag (1979): The History of Apollo On-Board GN&C

David Hoag wrote the 1963 paper that defined the Apollo G&N system architecture — the sensors, the computer, the displays, and the man-machine philosophy that held them together. Sixteen years later, after 11 manned Apollo missions, he wrote this retrospective. The 1963 paper described what the system would be. This paper describes what it was.

Hoag was the G&N Technical Director at MIT’s Instrumentation Laboratory (by 1979 renamed the Charles Stark Draper Laboratory). He oversaw the entire effort from concept through the final mission. This is not a technical report about a subsystem — it is a history of the program, written by the person who set its direction, with the benefit of knowing how the story ended.

The paper begins earlier than expected — not with the 1961 NASA contract, but with the Instrumentation Laboratory’s inertial navigation work of the 1950s. Draper’s laboratory had built inertial navigation systems for submarines and aircraft. The fundamental approach — inertial measurement calibrated by periodic external fixes — was domain-independent. Hoag traces a direct line from marine inertial navigation to the Apollo G&N concept: the same measurement physics, adapted for a different vehicle in a different medium.

By the time NASA selected MIT IL for the Apollo G&N contract in August 1961, the laboratory already had preliminary designs and a conceptual framework. This head start shaped the program. The G&N system design was ahead of most other Apollo subsystems, which gave the team time to get the architecture right before the schedule pressure became crushing.

The most valuable content in this paper is the comparison between design intent and flight experience.

The 1963 paper describes the choice of three gimbals instead of four for the IMU — accepting gimbal lock as an operational constraint managed by pre-alignment procedures rather than adding a fourth gimbal ring. Hoag confirms this decision was never revisited and never caused a mission problem. The weight savings and mechanical simplification justified the operational overhead of gimbal lock avoidance procedures throughout all Apollo flights.
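The geometry behind that tradeoff is easy to sketch: in a three-gimbal IMU, as the middle gimbal angle approaches ±90° the outer and inner gimbal axes become parallel and one rotational degree of freedom is lost — the singularity a fourth gimbal ring would have removed. A minimal illustration in Python; the 70°/85° thresholds are assumptions for this sketch, not figures from Hoag's paper:

```python
# Illustrative thresholds; the flight values (warning near 70 deg,
# loss of attitude reference near 85 deg) are assumptions here.
WARN_DEG = 70.0
LOCK_DEG = 85.0

def middle_gimbal_status(middle_gimbal_deg: float) -> str:
    """Classify proximity to gimbal lock for a three-gimbal IMU.

    Near +/-90 deg of middle gimbal angle, two gimbal axes align
    and the platform can no longer isolate all three rotations.
    """
    a = abs(middle_gimbal_deg)
    if a >= LOCK_DEG:
        return "GIMBAL LOCK"   # attitude reference lost; coarse realign needed
    if a >= WARN_DEG:
        return "WARNING"       # crew maneuvers away from the singularity
    return "OK"
```

The "operational overhead" Hoag refers to amounts to keeping planned attitudes out of the warning band by pre-aligning the platform before large maneuvers.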

The sextant and scanning telescope exceeded expectations. Cislunar navigation using star-horizon and star-landmark measurements achieved position accuracies of approximately 1 nautical mile at lunar distances — better than the pre-flight error budgets predicted. The astronauts turned out to be better at sextant measurements than the design team assumed.

Hoag describes the crew skill factor as a genuine surprise. The optical system was designed for a certain measurement accuracy. The astronauts, with training and practice, consistently exceeded it. This is a rare case where the human element improved system performance rather than degrading it.

The most significant hardware evolution was the growth from Block I to Block II:

| Parameter | Block I | Block II |
| --- | --- | --- |
| Fixed memory | 12,288 words | 36,864 words |
| Erasable memory | 1,024 words | 2,048 words |
| Instruction set | 8 instructions | 11 instructions |

Even the threefold expansion of fixed memory proved barely sufficient. The final flight software (COLOSSUS for the Command Module, LUMINARY for the Lunar Module) consumed virtually all 36,864 words — about 72 kilobytes. Programming teams fought for memory allocations. Routines were rewritten to save tens of words. The AGC ran the most complex real-time software of its era in less space than a typical modern web page’s JavaScript bundle.

The verb-noun interface worked. Hoag notes that crews adapted quickly, could operate the DSKY in pressure suits, and used it effectively under stress (including during the Apollo 11 descent alarms and the Apollo 13 emergency). The design — 19 keys, 2-digit verb and noun codes, flashing display for computer requests — was simple enough to memorize and unambiguous enough to operate under duress.
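The verb-noun grammar is simple enough to sketch as a dispatch table — the two-digit verb names the action, the two-digit noun names the data it acts on, and an unrecognized pair produces an operator-error indication rather than anything ambiguous. The verb/noun pairs and state fields below are illustrative stand-ins, not the AGC's actual tables:

```python
# Toy dispatcher in the DSKY's verb-noun style. The codes and the
# state dictionary are invented for illustration only.
def dsky(verb: int, noun: int, state: dict) -> str:
    actions = {
        (16, 36): lambda s: f"clock {s['hours']:02d}:{s['minutes']:02d}",  # monitor time
        (6, 62):  lambda s: f"velocity {s['vel_fps']:+.1f} ft/s",          # display velocity
    }
    handler = actions.get((verb, noun))
    # Unknown pair: light the operator-error indicator instead of guessing.
    return handler(state) if handler else "OPR ERR"

state = {"hours": 102, "minutes": 45, "vel_fps": -3318.0}
print(dsky(16, 36, state))   # clock 102:45
print(dsky(99, 99, state))   # OPR ERR
```

The point of the pattern is that the whole command vocabulary is a small, memorizable cross-product of two short code lists — which is why it could be operated in a pressure suit under alarm conditions.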

The paper’s most candid section concerns the growth of the flight software effort. Hoag is direct: the original estimates were far too low.

Early projections assumed a small team could produce the necessary programs. The reality:

  • Flight software grew from a few thousand words to over 36,000 words
  • The programming team grew from a handful of people to hundreds
  • Software became the pacing item — driving the program schedule for several missions
  • Configuration management of the flight software became one of the most difficult management challenges in Apollo

The “rope freeze” deadline — beyond which no software changes could be made because the core rope modules had to be manufactured — created a hard boundary that no amount of schedule pressure could move. Late software changes meant re-weaving ropes, a physical manufacturing process with lead times measured in months. This constraint imposed a discipline on the software process that no management directive could have achieved: at some point, the software was done because it had to be woven into wire.

Hoag’s retrospective view is that the software effort’s scale was fundamentally underestimated, not because the problems were unexpected, but because nobody in 1961 had experience building real-time software systems of this complexity. The Apollo program invented the discipline of large-scale flight software development while executing it.

Hoag traces the operational experience across the Apollo flights, focusing on what the G&N system encountered that the designers had not anticipated:

Apollo 8 (first translunar flight) — validated cislunar navigation. The star-horizon sightings worked as designed. The crew adapted to the measurement procedures and achieved the predicted accuracy. Hoag notes that this mission proved the fundamental concept: a crew with a sextant and a 15-bit computer could navigate to the Moon.

Apollo 11 (first landing) — the 1202/1201 executive overflow alarms during powered descent. Hoag discusses the rendezvous radar phasing problem from the system architect’s perspective. The interface specification said “frequency locked.” The hardware phases were unsynchronized. The coupling data units (CDUs) consumed approximately 13% of AGC capacity tracking a phantom angle. The executive shed low-priority work and continued the descent. Hoag is clear that the executive’s behavior — dropping non-essential tasks under overload — was the design working as intended, even though the cause of the overload was not anticipated.
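The shedding behavior Hoag describes can be sketched as a priority-sorted budget: jobs are admitted in priority order until the cycle budget is exhausted, and whatever does not fit is dropped. The task names, priorities, and costs below are invented for illustration, and the real executive restarted and rebuilt its job queue rather than filtering a list:

```python
def schedule(jobs, capacity=1.0):
    """jobs: (name, priority, cost_fraction); higher priority is more
    essential. Admit jobs in priority order until the cycle budget is
    spent; shed the rest. Returns (run, shed)."""
    run, shed, used = [], [], 0.0
    for name, prio, cost in sorted(jobs, key=lambda j: -j[1]):
        if used + cost <= capacity:
            run.append(name)
            used += cost
        else:
            shed.append(name)
    return run, shed

# The phantom radar interrupts stole cycles before the executive ever
# saw them, so model them as reduced capacity (~13%), not as a job.
PHANTOM_LOAD = 0.13
jobs = [
    ("descent_guidance",    30, 0.60),
    ("state_vector_update", 20, 0.15),
    ("dsky_display",        10, 0.15),
    ("idle_self_check",      1, 0.15),
]
run, shed = schedule(jobs, capacity=1.0 - PHANTOM_LOAD)
# Guidance keeps running; display and idle work are dropped.
```

With the phantom load subtracted, the lowest-priority jobs no longer fit — the same graceful degradation that let the descent continue through the alarms.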

Apollo 13 — the G&N system was not directly involved in the oxygen tank failure, but the crew used the LM’s G&N system for navigation and guidance after the Command Module lost power. Hoag describes the operational adaptation: using the LM’s AGC and IMU for functions they were never designed to perform, including transearth navigation from a vehicle that carried no sextant. The crew performed manual star alignments through the LM’s Alignment Optical Telescope.

Apollo 14 — the abort switch problem that Eyles solved with 61 DSKY keystrokes. Hoag provides the system-level context: a spurious signal from a loose solder joint was indistinguishable from a legitimate abort command. The software workaround exploited the computer’s own state machine to neutralize the signal. The fix was devised on the ground, radioed to the crew, and executed manually — a demonstration of the man-machine integration philosophy at its most extreme.

Later missions (Apollo 15-17) — extended landing site options to more challenging terrain (Hadley Rille, Descartes Highlands, Taurus-Littrow). The descent guidance algorithms and crew procedures evolved to handle steeper approach angles and more precise targeting. By Apollo 17, the G&N system was operating at a level of maturity and crew confidence that the early missions had not approached.

A recurring theme — one that connects Hoag’s retrospective to Eyles’ account and Hamilton’s analysis — is the difficulty of interface control between organizations.

The Apollo G&N system sat at the intersection of several contractors: MIT IL designed and built the G&N hardware and software. Grumman built the Lunar Module. North American built the Command Module. NASA’s centers (MSC in Houston, MSFC in Huntsville, KSC at the Cape) managed the overall program. Each interface between organizations was governed by an Interface Control Document (ICD).

Hoag acknowledges that the ICD process was imperfect. The rendezvous radar phasing spec — “frequency locked” without specifying phase synchronization — is the most consequential example. The throttle lag spec that Eyles describes (0.3 seconds documented, 0.075 seconds actual) is another. In each case, the document that was supposed to be the contract between teams was either ambiguous or stale.

Hamilton’s later finding that 75% of Apollo software errors were interface errors between system components is consistent with Hoag’s experience. The technical problems within any single team’s domain were manageable. The problems at the boundaries between teams were the ones that nearly ended missions.

The 1963 paper’s central premise — automation as a tool the crew controls, not a system that controls the crew — was tested across 11 manned missions. Hoag’s verdict is unambiguous: the philosophy was correct.

The evidence is specific:

  • Armstrong redesignated the landing target on Apollo 11 to avoid boulders the computer could not detect
  • The Apollo 13 crew used the LM’s G&N system for purposes it was never designed for, adapting procedures in real time
  • The Apollo 14 crew executed a 61-keystroke software workaround devised on the ground minutes before powered descent
  • Later mission crews refined their descent techniques to exploit the LPD redesignation capability for precision landing

In each case, the crew’s ability to override, adapt, and improvise was essential to mission success. A fully automatic system would have landed Apollo 11 in a boulder field, would have had no fallback after Apollo 13’s explosion, and would have aborted Apollo 14 due to a loose solder joint.

Hoag wrote this paper in 1979 — only seven years after the last Apollo mission. He notes, even at that short remove, the difficulty of reconstructing a complete and accurate history. Documentation was scattered across organizations. Personal memories had begun to diverge. Some decisions had no written record.

This observation — that institutional memory decays faster than anyone expects — echoes through the rest of this collection. Eyles, writing 25 years later, found “differing versions offered to history” for the Apollo 14 workaround. Hamilton’s formalization of the error-prevention patterns was partly motivated by the need to capture lessons before they were lost.

Hoag’s 1979 paper is itself an artifact of this concern: an architect writing down what happened while enough of the participants were still available to check the account.

The 1963 paper and the 1979 paper form a pair. The first describes a system designed before anyone had flown it. The second evaluates that design against 11 missions and 16 years of experience. Reading them together reveals an unusual outcome: the fundamental architecture was right.

The three-gimbal IMU, the sextant navigation, the digital computer with priority executive, the verb-noun crew interface, the man-machine philosophy that connected them — all survived contact with reality. What the designers got wrong was not the architecture but the scale of the software effort required to implement it, and the difficulty of maintaining accurate interface specifications between organizations.

The program’s deepest lesson, as Hoag tells it, is that hardware can be designed in advance but software must be discovered. The flight software grew by a factor of six beyond initial estimates, not because the estimates were careless, but because nobody in 1961 understood how much code it takes to navigate to the Moon and back. Apollo did not just land on the Moon. It invented the discipline of building the software to get there.