The Human in the Loop
What Apollo 13 Teaches Us About AI, Institutional Knowledge, and the People We Cannot Afford to Lose
My father was a physician’s assistant on the Mercury and Gemini programs at the School of Aerospace Medicine at Brooks Air Force Base in San Antonio. Which meant that when I was a child, astronauts came to our house for dinner. This was Texas back in the day. There were cookouts. There was laughter. There were men who by day were being pushed to the outer edges of human endurance (centrifuges, altitude chambers, the quiet brutality of preparing a human body for space) and who by evening were just people, tired and funny and hungry, sitting in a backyard in San Antonio eating barbecue.
One of them I knew only as Scotty.
He was kind the way that certain adults are kind to children: genuinely, without condescension. I didn’t know what he did. I didn’t need to. He was just Scotty, a grownup who treated me like I was worth talking to. Years later, in high school, my mother brought out the autograph books from that time. And I discovered, slowly, with the particular disorientation of a world rearranging itself, that my childhood friend Scotty was really Scott Carpenter: Mercury astronaut, one of the Original Seven, a man who had ridden an Atlas rocket into orbit and circled the Earth and looked out the window with such wonder that he forgot to fire his retrorocket on time.
I tell you this not to trade on a famous name, but because everything I am about to say about humans and machines and the cost of removing people from systems, I learned first from men like him. From what my father’s work meant. From what my sister later carried forward as a project manager at JPL during the early shuttle program.
You must understand that this is not an abstract argument for me. It is a family one.
Three Missions. One Lesson.
In January 1967, Apollo 1 burned on the launchpad. Three men died because a hatch was designed to open inward, because the atmosphere was pure oxygen, because schedule and budget and the pressure of a Space Race had accumulated into a design that could not survive its own vulnerability. The humans who knew something was wrong had been overruled, and those men perished.
In 1986 and again in 2003, the shuttle program paid the same price twice with Challenger and then Columbia: fourteen lives, two more vehicles, the same root cause dressed in different technical clothing, institutional pressure silencing the people who understood the system best. These are the stories of what happens when the human warning is ignored.
But there is a third story. And it is the one that completes the picture.
April 1970. 200,000 Miles From Home.
Apollo 13 launched on April 11th. Two days later, an oxygen tank ruptured because of a section of damaged Teflon insulation on electrical wiring, the legacy of a maintenance decision made years earlier that had never been fully connected to its consequences. The resulting electrical short caused the explosion that crippled the service module. The Moon was no longer the mission. Getting home was.
What followed over the next four days was the most extraordinary demonstration of human improvisation under pressure that the space program ever produced. The crew, Jim Lovell, Jack Swigert, and Fred Haise, had to power down Odyssey, the command module, to conserve energy; live for four days in Aquarius, a lunar module designed for two people; navigate manually by the stars; and execute a precisely timed engine burn, using the Moon’s gravity as a slingshot to bring themselves back to Earth.
And on the ground, the engineers at Mission Control had to figure out, in real time, with the materials already aboard the spacecraft, how to fit a square carbon dioxide scrubber cartridge into a round hole, because if they didn’t, the crew would suffocate before they reached home. The solution involved a plastic bag, a sock, duct tape, and the cover of a manual. It was engineered under pressure, communicated across 200,000 miles of space, and it worked.
Now ask yourself this question: what would have happened if that mission was unmanned?
The tank ruptures. The systems fail. The craft drifts. There is no improvisation, no creativity, no Gene Kranz in Mission Control saying, “Failure is not an option” and meaning it with every cell in his body. There is no crew looking at each other and deciding, wordlessly, that they are going to solve this. There is just a trajectory calculation, and eventually a debris field, and a mystery that might never have been fully solved, because the people who understood the system at its deepest level were not there. The human element did not just survive Apollo 13. It was Apollo 13. It was the only reason anyone came home.
The System That Cannot Debug Itself
I think about Apollo 13 constantly when I watch what is happening right now in the technology industry. We are in the middle of a significant, accelerating reduction of the human element in software systems. Developers are being replaced, or simply not replaced at all, on the assumption that AI can generate the code, maintain the systems, and fill the gap. On a spreadsheet, the logic looks seductive. On a balance sheet, the savings are real. But a balance sheet cannot capture what Gene Kranz knew. It cannot value the engineer who has spent five years understanding why a particular system behaves the way it does under stress. It cannot quantify the institutional memory that never lives in documentation but in people, in the accumulated judgment of those who have seen the system fail before and know exactly what to watch for.
AI generates code. It does not understand systems. It does not carry history. It cannot look at an anomaly at two in the morning and say, “I’ve seen something like this before, and here is what it means.” It cannot improvise a solution from a plastic bag and a sock when the playbook runs out. And the playbook always runs out. Not in normal conditions; in normal conditions, AI will perform beautifully. But complex systems do not fail in normal conditions. They fail at the edges, in the unexpected combinations, in the ways that no training data anticipated. That is precisely when you need the human in the loop. That is precisely when institutional knowledge becomes the difference between recovery and catastrophe.

We are building systems of increasing consequence, in infrastructure, healthcare, finance, and defense, and we are simultaneously reducing our capacity to understand, oversee, and course-correct them. We are, in a very real sense, choosing to send the mission unmanned.
What Scotty Knew
Scott Carpenter was not NASA’s most technically precise astronaut. He was something else: curious, alive to the experience, present in a way that occasionally frustrated the engineers who needed him focused on the instruments. He looked out the window. He noticed things. He also survived. And the qualities that made him occasionally maddening to mission planners were the same qualities that made him fully, indefatigably human: the capacity to be surprised, to respond to the unexpected, to bring something to the mission that no checklist could contain. That is not a romantic notion. It is a technical requirement.
My father spent his career ensuring that the humans going into space were prepared for what the machines could not anticipate. My sister spent hers helping build a program that, at its best, understood that humans and machines were partners, each making the other capable of things neither could accomplish alone. That understanding is what we are at risk of discarding.
The Triad Is a Warning
Three missions. Apollo 1 tells us what it costs when human warnings are ignored. Challenger and Columbia tell us what it costs when institutional knowledge is overruled by schedule, budget, and arrogance. Apollo 13 tells us what is only possible when the human is trusted, present, and empowered to improvise. Together they form a complete argument, not against technology, but for the irreplaceable role of the humans who understand it, oversee it, and can save it when it fails in ways that no one could have predicted.
We are at a moment of genuine choice. The organizations making decisions right now about AI and engineering capacity are not just making budget decisions. They are deciding what kind of human infrastructure will exist when something goes wrong at a scale and complexity we have not yet encountered. The question is not whether something will go wrong. Complex systems always fail eventually. The question is whether there will be anyone left who understands the system well enough to bring it home.
Jim Lovell, Jack Swigert, and Fred Haise came home because Gene Kranz refused to give up, because engineers improvised solutions that didn’t exist in any manual, because the humans in the loop were trusted and empowered and present.
We should be so fortunate, when our moment comes, to have kept our humans in the loop.
This is the third in a series on AI, technical debt, and the human cost of moving too fast. Parts one and two are linked below. I would be honored to hear from those of you who have your own connections to this history: the people who built these programs, those who worked alongside them, those who carry their legacy forward.