NASA’s IG Report on Artemis Landers Just Dropped—Here’s Why the Human vs. Autonomy Debate Matters (Way Beyond the Moon)

Mike Ciannilli Mar 24, 2026 5 min read

On March 10, 2026, NASA’s Office of Inspector General released a 50-page audit of the Human Landing System contracts, IG-26-004. If you’re in aerospace, AI, robotics, or just following how autonomy is creeping into everything, this one’s worth your time.

The good news first: NASA gets real credit for keeping costs in check. SpaceX’s Starship HLS and Blue Origin’s Blue Moon contracts are up only about 7% in total, despite nearly $7 billion already obligated. Firm-fixed-price deals with these providers are actually working as intended—rare praise in government contracting these days.

But the schedule was already slipping. The June 2027 crewed-landing target, originally set for Artemis III and now assigned to Artemis IV, was becoming unrealistic and has been retargeted for 2028. An added test flight in Earth orbit (the new Artemis III), integration headaches with Orion and Gateway, in-orbit refueling, and the brutal South Pole terrain are all piling on. More concerning are the crew safety gaps flagged: demos aren’t fully “Test Like You Fly,” crew survival plans aren’t end-to-end yet, and there’s no backup rescue capability for the first couple of missions.

The section that really caught my eye (page 26) gets to the heart of it: how much should we rely on human pilots versus full autonomy when touching down on the Moon today?

How Apollo Handled It vs. What Artemis Is Trying

Apollo kept things relatively simple and human-centered. The commander flew the last part of descent manually with a hand controller: the computer did the math, but the pilot called the shots. Armstrong had to take over on Apollo 11 to dodge boulders and barely made it with fuel in the red. Every landing used some level of manual input when things got dicey.

Pros of that Apollo approach:

  • Human eyes and instincts for surprises no sensor saw coming

  • No worrying about dust blinding cameras or lidar

  • It worked—six safe touchdowns

Downsides:

  • Brutal workload in a suit

  • Humans aren’t as fast or tireless as code

  • A few landings were nail-biters

Now Artemis leans hard into modern tech. Starship HLS stands a massive 171 feet tall, Blue Moon 53 feet, and both bring AI for spotting safe landing sites, learning models for plume effects, advanced sensors, even an elevator on Starship. NASA still insists on a manual override being available in every phase, but the report notes SpaceX might push for waivers to stay on track, much as it did with Crew Dragon.

Pros here:

  • Way better at handling the South Pole’s nightmare landscape—steep slopes, huge rocks, shadows

  • Software doesn’t get tired or stressed; it just keeps improving

  • Sets up nicely for cargo runs and eventually Mars

Downsides:

  • We haven’t seen anything this big actually land on the Moon yet

  • One bad sensor glitch or dust storm could cascade

  • Less room for that last-second human gut call
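
To make the “AI for spotting safe spots” idea concrete, here’s a minimal sketch of autonomous hazard screening: given an elevation grid of the landing zone, flag the cells whose local slope stays under a limit. Everything here is an illustrative assumption on my part (the grid, the 10-degree threshold, the finite-difference approach), not how SpaceX or Blue Origin actually implement it.

```python
import numpy as np

def find_safe_sites(elevation, cell_size=1.0, max_slope_deg=10.0):
    """Return a boolean mask of terrain cells whose local slope is acceptable.

    elevation: 2-D height map in meters. The 10-degree limit is a
    placeholder, not a real flight criterion.
    """
    # Finite-difference slopes along each grid axis
    gy, gx = np.gradient(elevation, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    return slope_deg <= max_slope_deg

# A flat plain with one 3-meter mound: cells on the mound's flanks fail
terrain = np.zeros((5, 5))
terrain[2, 2] = 3.0
mask = find_safe_sites(terrain)
```

A real system layers on roughness, shadowing, lighting, and fuel-to-divert constraints, which is exactly why the report worries about sensor glitches cascading: every one of those inputs is a failure point.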

A hybrid idea floating around: keep certified manual override mandatory for those early crewed flights (leaning on Apollo’s proven track record in unknown territory), then phase in more full autonomy for cargo and repeat missions. The report basically says NASA needs to think this through carefully—drawing on past waiver lessons—to get the safety/schedule balance right.
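
That hybrid approach boils down to a simple arbitration rule, which can be sketched in a few lines. To be clear, this policy is my own toy illustration, not anything from the IG report: a certified manual override wins whenever the crew asks for it or the autonomy stack degrades, while cargo and repeat missions default to autonomy.

```python
from enum import Enum, auto

class ControlMode(Enum):
    AUTONOMOUS = auto()
    MANUAL = auto()

def select_control_mode(crewed: bool, override_requested: bool,
                        autonomy_healthy: bool) -> ControlMode:
    """Toy arbitration rule for a hybrid human/autonomy descent.

    Illustrative policy only: crewed flights fall back to the pilot on
    request or on autonomy degradation; everything else flies autonomous.
    """
    if crewed and (override_requested or not autonomy_healthy):
        return ControlMode.MANUAL
    return ControlMode.AUTONOMOUS

# Early crewed flight with a degraded sensor suite: the pilot flies it
mode = select_control_mode(crewed=True, override_requested=False,
                           autonomy_healthy=False)
```

The hard part isn’t the rule itself, of course; it’s certifying the override path end-to-end and deciding who defines “healthy.”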

This Isn’t Just a Space Thing—It’s Already Hitting Your World

The same push-pull between human control and autonomy is happening everywhere:

  • Aviation: autopilot runs most of the flight; smart landing aids are getting close to certified

  • Cars & trucks: Level 4 self-driving rolling out in fleets and robotaxis

  • Factories: lights-out shifts with cobots taking over routine work

  • Surgery: robots doing precise cuts while the doc oversees

  • Trading: algos making moves faster than any human could

The report is a live example of the tough calls we all face: When do you hand off to the machine? How much risk is okay? And who’s on the hook when something unexpected trips it up?

Down the line, a lot of us will spend more time guiding or checking these systems than doing the hands-on work ourselves. Trust, liability rules, and regs are all shifting—up on the Moon and right here on Earth.

Artemis isn’t only about boots back on the Moon; it’s helping figure out how humans and machines team up across the board.

What do you think?

For the full NASA report, click “NASA HLS Audit IG-26-004.”

What is one safeguard (tech, policy, whatever) you’d push NASA—or any company—to nail down?

All of us together will end up writing the rulebook for this new, complex, and integrated future. To get that first chapter off to the right start… I humbly ask… What Say You?

Mike Ciannilli is a former NASA mission leader who explains how disciplined decisions prevent failure in complex space missions. Drawing on experience in mission operations and test director environments, he analyzes developments in human spaceflight, mission risk, and major program decisions.

Preventing failure in complex space missions through disciplined decisions and lessons applied. https://preventfailure.com