
AI & Society, Journal year: 2024, Issue: unknown
Published: April 14, 2024
Abstract: Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically came without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggests that it is practically possible to build AI systems capable of disempowering humanity by 2100. Second, due to incentives and coordination problems, if it is possible to build such AI, it will be built. Third, since building AI which is aligned with the goals of its designers appears to be a hard technical problem, and many actors might build powerful AI, powerful misaligned AI will be built. Fourth, because disempowering humanity is useful for a large range of misaligned goals, such AI will try to disempower humanity. If AI capable of disempowering humanity tries to do so by 2100, then humanity will be disempowered by 2100. This conclusion has immense moral and prudential significance.
Language: English