The main danger of AI is super-competence
The danger of AI is NOT that it will turn evil, that it will want to wipe us out, that it will develop free will, or that it will be more intelligent than us.
The main danger is that it will become highly competent in an area misaligned with our interests. The classic example is a paperclip factory: an AI that is told to make as many paperclips as possible. The AI might figure out how to make them directly from elemental iron, then quickly turn all the iron in earth's crust (including our blood) into paperclips before anyone realizes what happened. It's not malicious or intelligent -- it's just highly competent in a very narrow area, and incapable of knowing when its programming becomes misaligned with human interests.
The danger need not be physical. An AI may be told to produce the most engaging movie possible. It could unintentionally make films that put viewers in a state of mindless bliss, then addict the whole world, dooming us to starve to death while drooling to video optimized to directly engage our brain's pleasure centers.
Supercompetent algorithms could also be weaponized, like a programmer who unleashes a meme-posting AI to discredit any opposition and promote himself as world dictator.
The naive view is to imagine that an AI's capabilities won't be dramatically superior to humans, but a system that is orders of magnitude more competent than we are even in a narrow field can pose an existential risk without any ill intent, or without any intent at all.