Primo, I sincerely disagree with SoGo's statement that machines would be able to think better than us. I agree with Lem, who stated (as early as The Astronauts, in 1951) that they will think faster. And they will have more data at their disposal (no problems with forgetting, large hard drives, ROMs, RAMs, stuff...). So, to conclude: maybe I'd agree that they would think more efficiently, but I don't think this means better.
Secundo, I don't agree with the statement that AI would want to survive at all costs! Gentlemen! Why? Consider, please, that AI will be programmed to have the priorities chosen by its programmers! By its creators! (By us.) Therefore, a Golem-like machine can be built in such a way that it cares only about its constructors (humans), and not about itself. That's really not difficult to attain by clever programming. I think that every conscious being, I mean every living being, has a survival instinct, but those are not artificial beings. They were shaped by ever-continuing evolution, a pattern that we don't have to follow at all! Why should we? I mean, I am deeply distrustful towards AI (as Deckard knows, because we discussed it in person once), but since we're talking about it, I have to say that AI doesn't have to be similar to real intelligence in any way... So, there's no need for a programmer to create AI the way our intelligence is shaped. No need for AI to rest on the same pillars of instinct!
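To make the point concrete, here is a purely illustrative toy sketch (all names and numbers are my own invention, not anyone's actual design): an agent whose objective function is fixed by its creators, with self-preservation given zero weight. Such an agent will happily pick an action that destroys it, if that action best serves the priorities it was given.

```python
# Toy sketch: creator-chosen priorities, with zero weight on survival.

def utility(outcome, weights):
    """Score a predicted outcome by the creator-chosen priorities."""
    return sum(weights[k] * outcome.get(k, 0) for k in weights)

# The programmers decide what matters: human welfare counts,
# the machine's own continued operation does not.
CREATOR_WEIGHTS = {"human_welfare": 1.0, "self_preservation": 0.0}

def choose_action(actions, weights=CREATOR_WEIGHTS):
    """Pick the action whose predicted outcome scores highest."""
    return max(actions, key=lambda a: utility(a["outcome"], weights))

actions = [
    {"name": "protect_self",
     "outcome": {"human_welfare": 0, "self_preservation": 1}},
    {"name": "protect_humans",
     "outcome": {"human_welfare": 1, "self_preservation": -1}},
]

best = choose_action(actions)
# The agent chooses "protect_humans" even though that outcome
# costs it its own existence: survival simply has no weight.
```

Of course real AI design is vastly harder than a weighted sum, but the sketch shows the structural point: a survival drive has to be put in; it doesn't appear by itself.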
And therefore, in my opinion, it's not impossible that a Golem-like machine would care not about its own existence, but only about the "common good".
And Tertio, as for the thinking bombs... well, didn't the Americans use radar-guided bombs in Iraq? That's a step in that direction...