
Musk Signs Open Letter with a Thousand Signatories: GPT-4 Is Too Dangerous, All AI Labs Must Immediately Pause Research!

Published: 2023-05-30 12:17

…not rush unprepared into a fall.

References:

[1] Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
Bostrom, N. (2016). Superintelligence. Oxford University Press.
Bucknall, B. S., & Dori-Hacohen, S. (2022, July). Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).
Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk? arXiv preprint arXiv:2206.13353.
Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. Norton & Company.
Cohen, M., et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3), 282-293.
Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.
Hendrycks, D., & Mazeika, M. (2022). X-Risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.
Ngo, R. (2022). The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
Weidinger, L., et al. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.

[2] Ordonez, V., et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'. ABC News.
Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.

[3] Bubeck, S., et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.
OpenAI (2023). GPT-4 Technical Report. arXiv preprint arXiv:2303.08774.

[4] Ample legal precedent exists – for example, the widely adopted OECD AI Principles require that AI systems "function appropriately and do not pose unreasonable safety risk".

[5] Examples include human cloning, human germline modification, gain-of-function research, and eugenics.

The list of signatures attached to this open letter may be even more remarkable than the letter itself. It contains many notable names, each signature expressing the signer's strong wariness toward AI. However, some netizens have pointed out that the signatures are not necessarily trustworthy: at one point the list showed OpenAI CEO Sam Altman (banning himself?) and John Wick, the protagonist of the film John Wick. The actual facts remain to be verified.

