On the Dangers of AI Development
MileHighBronco Offline
Legend
*

Posts: 34,345
Joined: Mar 2005
Reputation: 1732
I Root For: Broncos
Location: Forgotten Time Zone
Post: #1
On the Dangers of AI Development
I saw this article in Time, of all places. It is a letter from a prominent researcher in artificial intelligence development. Issues to ponder.

Quote: By Eliezer Yudkowsky
March 29, 2023 6:01 PM EDT
Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.

An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.

I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.

The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.


Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

The likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15”, “the 11th century trying to fight the 21st century,” and “Australopithecus trying to fight Homo sapiens”.

To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.

If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

Much more at this link

Being done in by our own hands would really suck. We are a species that is smart and stupid all at once, seemingly seeking suicide.

Having numerous engineers in my family, I have at times left them sputtering, seemingly unable to grasp what I am saying. Their mindset as engineers is something like: CAN we build it? I have asked them at times: SHOULD we build it? What might the unintended consequences be, and can they be negated? They act as if this is something for somebody else to grapple with. Engineers have done great things for society, but with that creativity there needs to be a sober assessment of what could come of it and the potential downsides. Those conversations should happen BEFORE something is built and unleashed on the world.
04-13-2023 11:02 AM


Native Georgian Offline
Legend
*

Posts: 27,629
Joined: May 2008
Reputation: 1042
I Root For: TULANE+GA.STATE
Location: Decatur GA
Post: #2
RE: On the Dangers of AI Development
(04-13-2023 11:02 AM)MileHighBronco Wrote:  Being done in by our own hands would really suck. We are a species that is smart and stupid all at once, seemingly seeking suicide.
It would suck, yes, but it would be so appropriate.

As a society, we have passed the point at which political solutions can provide any meaningful help to our most urgent problems. Sure, it’s painful to watch the country go down the tubes, but… oh, well.
04-13-2023 11:16 AM


TigerBlue4Ever Offline
Unapologetic A-hole
*

Posts: 72,844
Joined: Feb 2008
Reputation: 5856
I Root For: yo mama
Location: is everything
Post: #3
RE: On the Dangers of AI Development
I've never been more glad that my life expectancy decreases daily.
04-13-2023 02:38 PM
ECUGrad07 Offline
Hall of Famer
*

Posts: 12,274
Joined: Feb 2011
Reputation: 1282
I Root For: ECU
Location: Lafayette, LA
Post: #4
RE: On the Dangers of AI Development
This could be "the great filter" that most civilizations in the universe don't pass through.
04-13-2023 02:46 PM