Yes, if you know basic physics and have stopped to think critically for a moment about the space battles seen in Star Trek, Star Wars, Battlestar Galactica, or many other franchises, you probably realized that it's all way off from reality. Those fantastic "pew pew pew" sound effects of laser weapons. Photon torpedoes exploding on the surface of an enemy ship. X-Wings bobbing and weaving around other ships, with nimble TIE fighters in hot pursuit.

The space dogfights of Star Wars, which influenced so much on-screen space combat, were themselves influenced by WWII air-combat movies. This video from the "It's Okay to Be Smart" YouTube channel points out a number of these inaccuracies, and also why they're so prevalent. Small space fighters would not maneuver the way fighter jets do in Earth's atmosphere, and laser weapons would be hard to focus over distance.

Sam Altman, CEO of ChatGPT creator OpenAI, is one of over 300 signatories behind a public "statement of A.I. risk" published Tuesday by the Center for A.I. Safety, a nonprofit research organization.

The letter is a short single statement meant to capture the risks associated with A.I.: "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The letter's preamble said the statement is intended to "open up discussion" on how to prepare for the technology's potentially world-ending capabilities.

Other signatories include former Google engineer Geoffrey Hinton and University of Montreal computer scientist Yoshua Bengio, who are known as two of the Godfathers of A.I. due to their contributions to modern computer science. Both Bengio and Hinton have issued several warnings in recent weeks about the dangerous capabilities the technology is likely to develop in the near future. Hinton recently left Google so that he could more openly discuss A.I.'s risks.

It isn't the first letter calling for more attention to be paid to the possible disastrous outcomes of advanced A.I. research without stricter government oversight. Elon Musk was one of over 1,000 technologists and experts to call for a six-month pause on advanced A.I. research in March, citing the technology's destructive potential. And Altman warned Congress this month that sufficient regulation is already lacking as the technology develops at a breakneck pace.

But while executives from leading A.I. developers including OpenAI and even Google have called on governments to move faster on regulating A.I., some experts warn that it is counterproductive to discuss the technology's future existential risks when its current problems, including misinformation and potential biases, are already wreaking havoc. Others have even argued that by publicly discussing A.I.'s existential risks, CEOs like Altman have been trying to distract from the technology's current issues, which are already creating problems, including facilitating the spread of fake news just in time for a pivotal election year.

But A.I.'s doomsayers have also warned that the technology is developing fast enough that existential risks could become a problem faster than humans can keep tabs on. Fears are growing in the community that superintelligent A.I., which would be able to think and reason for itself, is closer than many believe, and some experts warn that the technology is not currently aligned with human interests and well-being. Hinton said in an interview with the Washington Post this month that the horizon for superintelligent A.I. is moving up fast and could now be only 20 years away, and that now is the time to have conversations about advanced A.I.'s risks.
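The maneuvering point in the space-combat piece above follows from Newton's first law: in vacuum there is no air to generate lift or drag, so a ship coasts at constant velocity until it fires a thruster, whereas an atmospheric fighter is constantly trading speed against drag as it banks and turns. A minimal sketch of that contrast, using illustrative numbers and a simple Euler integrator (not real spacecraft or aircraft data):

```python
# Compare coasting in vacuum vs. flight through an atmosphere, using a
# toy quadratic-drag model. All numbers here are illustrative only.

def simulate(speed, drag_coeff, steps=100, dt=0.1):
    """Euler-integrate speed under deceleration a = -k * v**2."""
    v = speed
    for _ in range(steps):
        v -= drag_coeff * v * v * dt  # drag bleeds off speed; zero in vacuum
    return v

v_vacuum = simulate(100.0, drag_coeff=0.0)    # no medium: nothing slows the ship
v_atmo = simulate(100.0, drag_coeff=0.001)    # air drag steadily sheds speed

print(v_vacuum)  # 100.0 — the ship simply coasts
print(v_atmo)    # noticeably less than 100 after the same interval
```

The vacuum ship keeps its full velocity forever, which is why a real space fighter could not "bank" like an X-Wing: turning would require burning thrusters to cancel existing momentum, not swooping against air that isn't there.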