Doctor_Strange
I'd have to say the jet fighter from Stealth. The movie was horrible, but the subject matter is grounded in modern science.
Modern UAVs are already capable of determining the best route to a target, and even of picking that target. They require no human interaction after the initial programming, so something like what is portrayed in the film could actually happen.
It wouldn't have to achieve sentience; a malfunction would be enough.
If a malfunction occurred, what are the chances that the resulting programming would be coherent at all? That is, would the AI still know to follow the necessary steps to launch a missile (open the bay doors first, etc.)? Try scrambling a piece of computer code and see if the program still works logically.
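As a rough illustration of that last point, here's a small Python sketch. The tiny "program", the one-character mutation scheme, and the trial count are all made up for the example; it just flips one random character in a working snippet and checks whether the result still runs, standing in for random corruption of flight software:

```python
import contextlib
import io
import random

# A tiny working program: compute and print a sum.
source = "result = 2 + 3\nprint(result)\n"

random.seed(0)  # reproducible run
trials = 1000
still_runs = 0

for _ in range(trials):
    # Corrupt one random character -- a crude stand-in for a fault.
    chars = list(source)
    i = random.randrange(len(chars))
    chars[i] = chr(random.randrange(32, 127))
    mutated = "".join(chars)
    try:
        # Suppress whatever the mutated program happens to print.
        with contextlib.redirect_stdout(io.StringIO()):
            exec(compile(mutated, "<mutated>", "exec"), {})
        still_runs += 1
    except Exception:
        pass  # SyntaxError, NameError, etc. -- the mutant is incoherent

print(f"{still_runs}/{trials} corrupted copies still ran without error")
```

Run it and the large majority of mutants crash outright; the few survivors are mostly trivial changes like one digit swapped for another. A coherent-but-evil outcome from random damage is the rare case, not the expected one.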