After Lion Air flight 610 crashed into the Java Sea thirteen minutes after takeoff from Jakarta, Indonesia, on October 29, 2018, Boeing cited pilot error as a likely cause of the tragedy that killed all 189 people on board its 737 Max aircraft. Post-flight analysis, however, revealed an unusual trajectory. Shortly after takeoff, a series of more than twenty nosedives began driving the plane downward; the pilots recovered each time, only to experience another rapid dive, as the plane got lower and lower in the sky until it crashed. On the recovered flight recorder, the pilots could be heard frantically leafing through the airplane's technical manual as it plunged into the sea. When another 737 Max, Ethiopian Airlines flight 302, crashed with a similar trajectory after taking off from Addis Ababa on March 10, 2019, killing all 157 people on board, the search for a cause beyond pilot error began in earnest. In both cases, an automatic system operating unbeknownst to the flight crews, which they had no way of interacting with or turning off, had taken control of the airplanes and driven them down, despite the pilots' efforts to save the planes or even to determine what was happening.

How could an autonomous system that pilots could not interact with during flight, nor turn off, come to be installed in widely used aircraft without the knowledge of the pilots flying them, and why did that system fail? What roles did engineers play in the design and certification process? What consequences did engineers, and Boeing as a company, face after the crashes? What do different codes of ethics say about engineering decisions that affect the health, safety, and welfare of the public in such circumstances? Did the engineers involved act appropriately according to those codes?