
Kathleen Martin

Guest
In a wide-ranging talk at the recent COSM 2021 conference (November 10–12), Peter Thiel (PayPal co-founder and early Facebook investor) argued that while people worry a great deal about artificial general intelligence (AGI) that thinks like people, the real push now is for massive “dumb” surveillance AI peering into every detail of our lives, for the benefit of either government or the corporate world.
He went on to say that he doubts that AGI — “superhuman software that can do everything that we can do” — would, in any event, be “friendly,” that is, that it “won’t kill us.”
If it is intelligent enough to be independent, why should we assume so? “Friendly” is a human value, hard to quantify, and thus hard to program:
If it’s really a superior mind, it might surprise us … maybe it’ll just want to turn people into dinosaurs instead of curing cancer.
Thinking of the question as a search problem, he notes that, assuming there could be a large variety of minds, of which human minds are a tiny subset, we might be looking at a very large search space where it’s hardly clear that a friendly AGI would emerge from our programming efforts. And it might be too advanced for us to understand.
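To see the force of the search-space point, here is a back-of-the-envelope sketch in Python (my own toy numbers, not anything from the talk): if friendly designs are a vanishingly small fraction of possible minds, and our efforts amount to sampling that space more or less blindly, even a huge number of attempts is unlikely to land on one.

    # Toy model of the search-space argument (illustrative numbers only).
    # Treat "mind designs" as points in a huge space, with friendly designs
    # a tiny subset; assume each development effort is an independent blind draw.
    SEARCH_SPACE = 10**12    # assumed size of the space of possible minds
    FRIENDLY = 10**3         # assumed number of friendly designs in that space
    ATTEMPTS = 10**6         # assumed independent development attempts

    p_per_try = FRIENDLY / SEARCH_SPACE
    p_any = 1 - (1 - p_per_try) ** ATTEMPTS
    print(f"P(friendly on one try):    {p_per_try:.1e}")   # 1.0e-09
    print(f"P(friendly in all tries):  {p_any:.1e}")       # ~1.0e-03

All of these numbers are guesses; the point is only that when the friendly subset is tiny relative to the space, blind search almost never finds it.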
Some, he said, argue that the universe is so fine-tuned that we will get to friendly AGI safely. The trouble is that there is a difference between fine-tuning arguments about the origin of the universe and fine-tuning arguments about its future:
It’s much crazier, by the way, than the fine-tuning argument in cosmology, because… either God fine-tuned things, or we’re in a multiverse where everything possible happened. But fine-tuning is, at least in cosmology, a problem in the past. And the fact that we’re here, you know, there was some Great Filter, but we survived. With friendly AGI, the fine-tuning is in the future.
If so, he thinks “the odds are massively against us. Maybe somewhere in the multiverse, there’ll be a friendly AGI, but the prospects don’t look terribly good.” Even the people promoting the Singularity (the idea that we merge with supercomputers by 2045) are less buoyant. As a Valley maven, Thiel has spent twenty years talking to people about these things:
I was talking to these people and it’s like, wow, they don’t actually want any of this stuff to happen anymore. And they wanted to just slow down and they’re all talking about existential risks. They don’t want anything to happen.
That may explain the popularity of the Great Filter Hypothesis: We don’t see extraterrestrials because civilizations disappear somewhere between where we are now and the advanced state needed for intergalactic travel — possibly destroyed by their own AI.
Continue reading: https://mindmatters.ai/2021/11/silicon-valley-insider-why-friendly-super-ai-wont-happen/
Attachments

  • p0005749.m05407.artificial_intelligence_concept_robotic_hand_is_holding_human_brain_3d_rendere...jpg