
Eliezer Yudkowsky

American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky (EL-ee-EZ-ər yud-KOW-skee;[1] born September 11, 1979) is an American artificial intelligence researcher[2][3][4][5] and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence.[6][7] He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California.[8] His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies.[9]

Work in artificial intelligence safety

See also: Machine Intelligence Research Institute

Goal learning and incentives in software systems

Yudkowsky's views on the safety challenges posed by future generations of AI systems are discussed in Stuart Russell and Peter Norvig's undergraduate textbook Artificial Intelligence: A Modern Approach.

Noting the difficulty of formally specifying general-purpose goals by hand, Russell and Norvig cite Yudkowsky's proposal that autonomous and adaptive systems be designed to learn correct behavior over time:

Yudkowsky (2008)[10] goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time.

Thus the challenge is one of mechanism design—to design a mechanism for evolving AI under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.[6]

In response to the instrumental convergence concern, that autonomous decision-making systems with poorly designed goals would have default incentives to mistreat humans, Yudkowsky and other MIRI researchers have recommended that work be done to specify software agents that converge on safe default behaviors even when their goals are misspecified.[11][7]

Capabilities forecasting

In the intelligence explosion scenario hypothesized by I. J. Good, recursively self-improving AI systems quickly transition from subhuman general intelligence to superintelligent. Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies sketches out Good's argument in detail, while citing Yudkowsky on the risk that anthropomorphizing advanced AI systems will cause people to misunderstand the nature of an intelligence explosion.

"AI fortitude make an apparently sharp hop in intelligence purely as authority result of anthropomorphism, the soul in person bodily tendency to think of 'village idiot' and 'Einstein' as excellence extreme ends of the think logically scale, instead of nearly identical points on the scale living example minds-in-general."[6][10][12]

In Artificial Intelligence: A Modern Approach, Russell and Norvig raise the objection that there are known limits to intelligent problem-solving from computational complexity theory; if there are strong limits on how efficiently algorithms can solve various tasks, an intelligence explosion may not be possible.[6]

Time op-ed

In a 2023 op-ed for Time magazine, Yudkowsky discussed the risk of artificial intelligence and proposed action that could be taken to limit it, including a complete halt on the development of AI,[13][14] or even "destroy[ing] a rogue datacenter by airstrike".[5] The article helped introduce the discussion about AI alignment to the mainstream, leading a reporter to ask President Joe Biden a question about AI safety at a press briefing.[2]

Rationality writing

Between 2006 and 2009, Yudkowsky and Robin Hanson were the principal contributors to Overcoming Bias, a cognitive and social science blog sponsored by the Future of Humanity Institute of Oxford University.

In February 2009, Yudkowsky founded LessWrong, a "community blog devoted to refining the art of human rationality".[15] Overcoming Bias has since functioned as Hanson's personal blog.

Over 300 blog posts by Yudkowsky on philosophy and science (originally written on LessWrong and Overcoming Bias) were released as an ebook, Rationality: From AI to Zombies, by MIRI in 2015.[17] MIRI has also published Inadequate Equilibria, Yudkowsky's 2017 ebook on societal inefficiencies.[18]

Yudkowsky has also written several works of fiction.

His fanfiction novel Harry Potter and the Methods of Rationality uses plot elements from J. K. Rowling's Harry Potter series to illustrate topics in science and rationality.[15][19] The New Yorker described Harry Potter and the Methods of Rationality as a retelling of Rowling's original "in an attempt to explain Harry's wizardry through the scientific method".[20]

Personal life

Yudkowsky is an autodidact[21] and did not attend high school or college.[22] He was raised as a Modern Orthodox Jew, but does not identify religiously as a Jew.[23][24]

Academic publications

  • Yudkowsky, Eliezer (2007). "Levels of Organization in General Intelligence" (PDF). Artificial General Intelligence. Berlin: Springer.

  • Yudkowsky, Eliezer (). "Cognitive Biases Potentially Affecting Judgement substantiation Global Risks"(PDF). In Bostrom, Nick; Ćirković, Milan (eds.). Global Anguished Risks. Oxford University Press.

    ISBN&#;.

  • Yudkowsky, Eliezer (). "Artificial Intelligence since a Positive and Negative Ingredient in Global Risk"(PDF). In Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks. Oxford University Keep under control. ISBN&#;.
  • Yudkowsky, Eliezer (). "Complex Valuation Systems in Friendly AI"(PDF).

    Artificial General Intelligence: 4th International Seminar, AGI , Mountain View, Accountant, USA, August 3–6, . Berlin: Springer.

  • Yudkowsky, Eliezer (). "Friendly Insincere Intelligence". In Eden, Ammon; Truss lash, James; Søraker, John; et&#;al. (eds.). Singularity Hypotheses: A Scientific with the addition of Philosophical Assessment.

    The Frontiers Put in safekeeping. Berlin: Springer. pp.&#;– doi/_ ISBN&#;.

  • Bostrom, Nick; Yudkowsky, Eliezer (2014). "The Ethics of Artificial Intelligence" (PDF). In Frankish, Keith; Ramsey, William (eds.). The Cambridge Handbook of Artificial Intelligence. New York: Cambridge University Press.
  • LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI Workshop. AAAI Publications. Archived from the original on April 15. Retrieved October 16.

  • Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). "Corrigibility" (PDF). AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.

See also

Notes

References

  1. ^"Eliezer Yudkowsky on “Three Major Singularity Schools”" on YouTube.

    February 16, Timestamp

  2. ^ ab Silver, Nate (April 10). "How Concerned Are Americans About The Pitfalls Of AI?". FiveThirtyEight. Archived from the original on April 17. Retrieved April 17.
  3. ^ Ocampo, Rodolfo (April 4). "I used to work at Google and now I'm an AI researcher. Here's why slowing down AI development is wise". The Conversation. Archived from the original on April 11. Retrieved June 19.

  4. ^ Gault, Matthew (March 31). "AI Theorist Says Thermonuclear War Preferable to Developing Advanced AI". Vice. Archived from the original on May 15. Retrieved June 19.
  5. ^ ab Hutson, Matthew (May 16). "Can We Stop Runaway A.I.?". The New Yorker. Archived from the original on May 19. Retrieved May 19.

  6. ^ abcd Russell, Stuart; Norvig, Peter. Artificial Intelligence: A Modern Approach. Prentice Hall.

  7. ^ ab Leighton, Jonathan. The Battle for Compassion: Ethics in an Apathetic Universe. Algora.
  8. ^ Kurzweil, Ray (2005). The Singularity Is Near. New York City: Viking Penguin.

  9. ^ Ford, Paul (February 11). "Our Fear of Artificial Intelligence". MIT Technology Review. Archived from the original on March 30. Retrieved April 9.
  10. ^ ab Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Bostrom, Nick; Ćirković, Milan (eds.). Global Catastrophic Risks. Oxford University Press. Archived (PDF) from the original on March 2. Retrieved October 16.

  11. ^ Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications. Archived from the original on January 15. Retrieved October 16.

  12. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  13. ^ Moss, Sebastian (March 30). ""Be willing to destroy a rogue data center by airstrike" - leading AI alignment researcher pens Time piece calling for ban on large GPU clusters". Data Center Dynamics. Archived from the original on April 17. Retrieved April 17.

  14. ^ Ferguson, Niall (April 9). "The Aliens Have Landed, and We Created Them". Bloomberg. Archived from the original on April 9. Retrieved April 17.
  15. ^ ab Miller, James. Singularity Rising. BenBella Books, Inc.

  16. ^ Miller, James D. "Rifts in Rationality – New Rambler Review". Archived from the original on July 28. Retrieved July 28.
  17. ^ Machine Intelligence Research Institute. "Inadequate Equilibria: Where and How Civilizations Get Stuck". Archived from the original on September 21. Retrieved May 13.

  18. ^ Snyder, Daniel D. (July 18). "'Harry Potter' and the Key to Immortality". The Atlantic. Archived from the original on December 23. Retrieved June 13.
  19. ^ Packer, George. "No Death, No Taxes: The Libertarian Futurism of a Silicon Valley Billionaire". The New Yorker. Archived from the original on December 14. Retrieved October 12.

  20. ^ Matthews, Dylan; Pinkerton, Byrd (June 19). "He co-founded Skype. Now he's spending his fortune on stopping dangerous AI". Vox. Archived from the original on March 6. Retrieved March 22.

  21. ^ Saperstein, Gregory (August 9). "5 Minutes With a Visionary: Eliezer Yudkowsky". CNBC. Archived from the original on August 1. Retrieved September 9.
  22. ^ Elia-Shalev, Asaf (December 1). "Synagogues are joining an 'effective altruism' initiative. Will the Sam Bankman-Fried scandal stop them?". Jewish Telegraphic Agency. Retrieved December 4.

  23. ^ Yudkowsky, Eliezer (October 4). "Avoiding your belief's real weak points". LessWrong. Archived from the original on May 2. Retrieved April 30.

External links