Why is the Terminator scenario in AI unrealistic? Modern AI focuses on automated reasoning, based on a combination of perfectly understandable principles and plenty of input data, both of which are provided by humans or by systems deployed by humans. To think that common algorithms such as the nearest neighbour classifier or linear regression could somehow spawn consciousness and start evolving into superintelligent AI minds is, in our opinion, far-fetched.
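To see just how mechanical such algorithms are, here is a minimal sketch of a one-nearest-neighbour classifier. The data points and labels are invented purely for illustration: the whole "intelligence" amounts to measuring distances and copying the label of the closest known example.

```python
# One-nearest-neighbour classifier: measure distances to known points
# and return the label of the closest one. The training data below is
# made up for illustration only.

def euclidean_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_neighbour_classify(train_points, train_labels, query):
    # Find the training point closest to the query and copy its label.
    distances = [euclidean_distance(p, query) for p in train_points]
    best = distances.index(min(distances))
    return train_labels[best]

points = [(1.0, 1.0), (2.0, 1.5), (8.0, 8.0), (9.0, 7.5)]
labels = ["cat", "cat", "dog", "dog"]

print(nearest_neighbour_classify(points, labels, (1.5, 1.2)))  # → cat
print(nearest_neighbour_classify(points, labels, (8.5, 8.0)))  # → dog
```

There is no hidden reasoning here, and nothing for the algorithm to "improve" in itself: its behaviour is fully determined by the data humans give it.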
The idea of an exponential increase in intelligence is also unrealistic, for the simple reason that even if a system could optimise its own workings, it would keep facing ever harder problems that would slow its progress. This is similar to how the progress of human science requires ever greater effort and resources from the whole research community, and indeed the whole of society, which a superintelligent entity would not have access to.
Read more about doomsday scenarios in the AI field in this article written by Strongbytes.