What do you think are the risks associated with singularity in AI advancements? I.e. AI becoming more intelligent than its creators accompanied with a point of no return.

Mission Diplomatic Technology Officer in Government · a year ago

Without asking a Gen AI, the risks might include: access to more advanced technologies than we can safely wield; workforce disruptions at a velocity politics can't govern; a deepening technological divide, as a mass of the digitally illiterate is left behind economically; or an entity seeking energy and organizing information at such ravenous capacity that we become nothing more than materials of a design. Maybe a utopia. Or some blended alchemy of several of the above.

Lots of great dystopian science fiction has accumulated around this topic. The 1966 novel "Colossus" by D. F. Jones deals with an AI that, once sentient, recognizes more advanced threats in the solar system. Gulp. And yes, it does rule humanity while steering it toward defending against an AI-on-AI battle, but the one humans created was less cruel than the other AI already in the system. A great non-fiction treatment is "Our Final Invention," which covers in depth the many accumulated risks. Also gulp.

Sophisticated civilization appears to harmonize around a necessary fear of something. This spurs change. In no particular order: God, acid rain, witches, monsters under the bed, nuclear apocalypse, environmental catastrophe, ozone holes, death, aliens, collider black holes, CRISPR… The list is not exhaustive.

Some of these resulted in human collaboration toward observable mitigations. Some were illusory. However, our ability to see trends, collaborate, and change behavior has also led to better outcomes and better-connected groups.

We, I expect, will get to see this one unfold at a velocity with no twin in any past arms race or technology. Everyone wants to build this one first. And it only takes one.

(no title) · a year ago

I am glad to see we both called out "Our Final Invention" in our responses. When it comes to anything post-singularity, there's one thing I think we can all agree on: we don't know exactly what will happen. And since we're talking about an intelligence far superior to our own, there's no way we *can* know with any certainty what will happen, since the past will in no way help predict the future. It's all speculation. What that means is the science fiction may be just as valid as the non-fiction. Either way, the idea that we are in an unstoppable arms race toward a future state whose impacts on humanity, for better or worse, we cannot reasonably model or forecast seems entirely reprehensible to me.

Chief Data Officer in Software · a year ago

The risks are profound, potentially existential, and largely ignored.  

I recommend the book "Our Final Invention" by James Barrat. The perspectives shared in this book made me completely change my views on whether we should seriously consider AGI a necessary step in the evolution of AI.

Director of IT in IT Services · a year ago

The risks of AI singularity include loss of control and unforeseen consequences.
