== Machine Intelligence ==
Since the creation of the first simple machines, they have grown steadily more complex, because the entities that own them generally want greater efficiency. A drill press, for example, might gain a conveyor belt to feed in more wood, or additional drill heads; at some point such machines are termed factories.
Hawking<ref>https://economictimes.indiatimes.com/news/science/stephen-hawking-warned-artificial-intelligence-could-end-human-race/articleshow/63297552.cms?from=mdr</ref>, Musk<ref>https://www.documentjournal.com/2018/04/the-existential-paranoia-fueling-elon-musks-fear-of-ai/</ref>, and Gates<ref>https://www.bbc.com/news/31047780</ref> have all stated their fear of the potential existential threats posed by the development of AI. There are multiple reasons, but they boil down to this: if you create something more intelligent than yourself, it may reach conclusions we cannot foresee, some of which might rationalise destroying human life. For instance, if an AI were tasked with protecting biodiverse life on Earth without '''hard written (ROM)''' constraints along the lines of Isaac Asimov's three laws of robotics, its first thought would most probably be to kill the majority of humans. Luckily, no autonomous robots in the traditional sense have yet been created.
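To make the idea of hard-coded constraints concrete, the sketch below is a hypothetical illustration only (not a description of any existing system; every name in it is invented): an agent whose candidate actions are filtered against immutable, Asimov-style rules before any goal-seeking logic runs.

<syntaxhighlight lang="python">
# Hypothetical sketch: hard-coded ("ROM"-style) constraints screen an agent's
# candidate actions before it optimises for its stated goal.

from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: the constraint objects cannot be mutated at run time
class Constraint:
    description: str

    def violated_by(self, action: str) -> bool:
        # Toy check: a real system would need a far richer model of actions.
        return self.description in action


# Immutable constraints, defined before any goal-seeking code executes.
HARD_CONSTRAINTS = (
    Constraint("harm a human"),
    Constraint("allow a human to come to harm"),
)


def permitted(action: str) -> bool:
    """Return True only if the action violates none of the hard constraints."""
    return not any(c.violated_by(action) for c in HARD_CONSTRAINTS)


def choose_action(candidates: list[str]) -> str | None:
    # Filter first, optimise second: constraints take priority over the goal.
    allowed = [a for a in candidates if permitted(a)]
    return allowed[0] if allowed else None


if __name__ == "__main__":
    plans = ["harm a human to reduce emissions", "plant trees", "restore wetlands"]
    print(choose_action(plans))  # -> "plant trees"
</syntaxhighlight>

The point of the design is ordering: the constraint check runs before, and independently of, whatever objective the agent is optimising, so the goal cannot override the rules.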