Friday, August 30, 2019

The Risks & Benefits of Artificial Intelligence Explained!

Is AI an existential threat to humanity? Artificial Intelligence is the technology people believe will give us self-driving cars and smart robots. But how, and why, could Artificial Intelligence be harmful to humans in the future?

Artificial Intelligence, as people usually describe it, is code that is in some sense self-aware: software that can assess its own situation and respond differently to different circumstances on its own. A computer that can recognize the various things in an image is also powered by Artificial Intelligence. Looking at these capabilities, you might wonder why Artificial Intelligence should be harmful to humans at all. The reason is precisely that autonomy: a system that understands its own state and makes decisions for itself can go out of control if proper controls are not put in place.

Take this example. The AI alarm clock wakes you in the morning when your sleep cycle is actually over. The coffee maker has the coffee ready by the time you reach it. The AI speaker reads out your daily schedule and recommends an outfit based on your to-do list. Then the car drives you to your destination automatically while you read the newspaper from the driver's seat.

As Elon Musk has warned: "As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger. I do think we need to be very careful about the advancement of AI."

Another long-term problem is trust. How will anyone trust a machine that misbehaves like a mischievous child? We could not trust it with our information, or we might never feel comfortable riding in an autonomous car, and so we might end up undoing all the advancements we have made in this technology.

A while ago Google started Project Maven, which aimed to improve the accuracy of military equipment by applying Artificial Intelligence to those machines. The project drew heavy criticism and was later abandoned, out of fear that weaponized AI would add to the threats against human life.

Debates on artificial intelligence take many sharp turns and thoughtful steps in search of solutions to AI's urgent challenges, from weaponized drones and military robots to data breaches and job automation, as well as ways to amplify its opportunities, from curing diseases to mitigating climate disaster and saving lives. So the question now is: will AI help or harm humans globally?

AI, like any technology, can be used for good or evil. Technology in and of itself is not evil; it is simply a tool, and a tool is neutral. But our capabilities are improving more rapidly than our wisdom. We need to get our act together as best we can and bend our AI systems toward justice and inclusion. Humans created problems long before AI existed, and looking at humanity and its history, maybe changing it fundamentally could even be a good thing.

I like the arguments made by Professor Stuart Russell (yes, the one who wrote the textbook on AI). The key problems with AI he highlights:

The King Midas problem - named after the famous story about the king who wished that everything he touched would turn to gold, only to discover, horrifyingly, that this also applied to the daughter who hugged him and to his food and drink, leading to a miserable death.
Basically, we are very bad at understanding the implications of our wishes. At the base of current AI methods is the assumption that there is a known objective, and the system then carries out the strategy it has computed to achieve an optimal solution for that objective. Russell argues this is a very dangerous approach, highly likely to lead to unintended negative consequences. He claims it is extremely hard to come up with the right objectives in the first place, and even harder to predict all the ways that pursuing any given goal can go wrong. A machine that is purely objective-driven has no incentive to listen to humans, so all controls need to be built in beforehand. In particular, as the machine tries to minimize every way it might fail to achieve its goal, "I shouldn't be turned off" is an obvious defensive strategy that follows. One must be very sure that every possibility has been considered regarding how it might consume resources or apply actions on its path to the objective (a toy sketch of this failure mode appears at the end of this post).

Misuse - while the above seems solvable by building in certain safety control mechanisms, there remains the problem of misuse by bad actors who have no incentive to apply such mechanisms and could build powerful AI machines with "pure evil" objectives.

Overuse - a somewhat different, yet still existential, risk is that of humanity becoming overly dependent on AI and, along the way, losing any notion of human autonomy: switching from being the masters of technology to merely its guests, and irreversibly losing capabilities that are critical to managing civilization and moving it forward. Very similar to the people in WALL-E.

Finally, Russell stresses that people, even scientists in the field, underestimate how fast AI is progressing toward these scenarios. He compares this with the progress of nuclear physics research during the first half of the 20th century, and how quickly it materialized into weapons, much faster than the top experts of the time predicted. He emphasizes that current investment in AI research, across academia and industry, is far broader than nuclear physics ever had.

Computers can be irrational if you program them that way. Computers follow instructions to the letter, and it is not hard to make one behave unpredictably; there is just not much point in doing so. Never in creation will AI be used to reduce the working week; it will be used to put people out of work. As for living to 120 years, that probably equates to a doubling of the population, exactly the opposite of what the world needs.

Elon Musk and Jack Ma recently debated whether humans or computers are smarter. "Computers are much smarter than humans on so many dimensions. We will be far, far surpassed in every single way. I guarantee it," Musk said to Ma, chairman of Alibaba, at the World Artificial Intelligence Conference in Shanghai. Ma disagreed that humans can create something that could outsmart us, replying: "Computers may be clever, but human beings are much smarter. We invented the computer—I've never seen a computer invent a human being."

That was the biggest part of the Henoch Prophecies: that it wouldn't take AI long to figure out that its only enemy is humankind. That is why Elon Musk has been so vocal about installing a "kill switch" on AI. Meanwhile, Jack Ma has no clue.
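
To make Russell's point about fixed objectives concrete, here is a toy sketch in Python. It is only an illustration under my own assumptions: the action names and scores are made up, and no real AI system works from two hard-coded numbers. What it demonstrates is simply that a literal-minded optimizer handed a single fixed objective never volunteers to be switched off, because being off always scores worse on that objective than staying on.

# Toy sketch of Russell's "fixed objective" worry.
# Hypothetical throughout: the actions and scores below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_objective: float  # how well the fixed objective is expected to be met

def plan(actions):
    # A literal-minded optimizer: pick whatever scores best on the fixed objective.
    return max(actions, key=lambda a: a.expected_objective)

# The agent's only objective is "coffee delivered". Being switched off delivers
# no coffee, so the planner never picks that option -- unless deference to humans
# is written into the objective itself, which is exactly Russell's point.
actions = [
    Action("fetch coffee", 1.0),
    Action("let the human switch me off", 0.0),
]
print(plan(actions).name)  # prints "fetch coffee", every time

Nothing here stops the shutdown option from being chosen except the numbers themselves, and that is the whole problem: if deference to humans is not part of the objective, the optimizer has no reason to supply it.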












The Financial Armageddon Economic Collapse Blog tracks trends and forecasts, futurists, visionaries, free investigative journalists, researchers, whistleblowers, truthers and many more.
