Should universities ban the use of ChatGPT?
Should it be banned?
Can it be banned?
How would a ban even work?
These are all highly contested and still unresolved questions.
Just this week, my university released a set of guidelines on how its students may use GPT for coursework.
I handed the document straight to ChatGPT and asked it to translate it into Chinese, so this post was written by GPT. After the translation was done, I also asked GPT what it thinks of the guidelines; its answer is appended at the end of this post.
I have written six previous posts introducing the literature on ChatGPT; see:
* Is gender part of why China's big tech firms cannot build a ChatGPT? (GPT Sociology, No. 6)
* In the name of humanity: the CEO of the company behind ChatGPT proposes a utopian plan (GPT Sociology, No. 4)
* Making ChatGPT more human? A sociologist's century-old prediction, even more worth pondering today
* In the ChatGPT era, will Peking University fall behind Lanxiang Technical School?
Guidance for students on the use of Generative AI (such as ChatGPT)
[The technology, ethics, and use of AI is a fast-moving area. This guidance is current as of March 2023 and will be updated as necessary.]
There is currently a lot of interest in generative AI systems. ChatGPT (by OpenAI) is just one example; there are others, such as DALLE-2, CoPilot, and Google Bard. It is an exciting area, and naturally we want to explore what it can do and learn how to make use of it.
The University's position is not to impose a blanket restriction on the use of generative AI, but rather to:
– Emphasise the expectation that assignments should contain students' own original work;
– Highlight the limitations of generative AI and the dangers of relying on it as a source of information;
– Emphasise the need to acknowledge the use of generative AI where it is (permitted to be) used.
Some assignments may explicitly ask you to work with AI tools and to analyse and critique the content they generate; others may specify that AI tools should not be used, or may be used only in specific ways. This will depend on the learning objectives of your courses. Please refer to the specific criteria for your assignments and ask your lecturers if in doubt.
1. Expectation of own original work
All work submitted for assessment should be your own original work. In some cases, you may be asked to sign a declaration of own work. It is not appropriate to misrepresent AI-generated content as your own work.
Important note: Be aware that if you use AI tools (such as ChatGPT or others) to generate an assignment (or part of an assignment) and submit it as if it were your own work, this will be regarded as academic misconduct and treated as such.
“Academic misconduct is defined by the University as the use of unfair means in any University assessment. Examples of misconduct include (but are not limited to) plagiarism, self-plagiarism (that is, submitting the same work for credit twice at the same or different institutions), collusion, falsification, cheating (including contract cheating, where a student pays for work to be written or edited by somebody else), deceit, and personation (that is, impersonating another student or allowing another person to impersonate a student in an assessment).” (University of Edinburgh, Academic Misconduct Procedures)
2. Current limitations of generative AI
Generative AI offers a number of benefits, but it also has limitations which you need to be aware of.
It is important that you:
– Understand the limitations of any AI system you are using;
– Check the factual accuracy of the content it generates;
– Do not rely on AI-generated content as a key source; use it in conjunction with other sources.
– Generative AI tools are language machines rather than databases of knowledge; they work by predicting the next plausible word or section of programming code from patterns that have been ‘learnt’ from large data sets;
– AI tools have no understanding of what they generate. A knowledgeable human must check the work (often in iterations);
– The data sets such tools are learning from are flawed: they contain inaccuracies, biases, and limitations;
– They generate text that is not always factually correct;
– They can create software/code that has security flaws or bugs, uses illegal libraries or calls, or infringes copyright;
– Often the code or calculations produced by AI will look plausible but, on closer inspection, contain errors in the detailed working. A human trained in that programming language should fully check any code or calculation produced in this way;
– The data their models are trained on is not up to date; they currently have limited or constrained data on the world and events after a certain point (2021 in the case of ChatGPT);
– They can generate offensive content;
– They produce fake citations and references;
– Such systems are amoral; they do not know that it is wrong to generate offensive, inaccurate, or misleading content;
– They include hidden plagiarism, meaning that they make use of words and ideas from human authors without referencing them, which we would consider plagiarism;
– There are risks of copyright infringement on pictures and other copyrighted material.
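The first point above, that these tools predict the next plausible word from patterns learnt in large data sets, can be illustrated with a toy sketch. This is a minimal bigram model in plain Python, invented here purely for illustration (the corpus, function name, and selection rule are all assumptions); real systems use neural networks trained on vastly more data, but the underlying task is the same: given what came before, emit a statistically plausible continuation, with no understanding of whether it is true.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "large data sets" such tools learn from.
corpus = ("the model predicts the next word "
          "the model has no understanding of the next word it predicts").split()

# Count which word follows which: the learnt "patterns".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Return the most frequent follower of `prev`, or None if unseen.

    This is pure pattern-matching: the function has no notion of
    whether the continuation is factually correct.
    """
    candidates = follows.get(prev)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(next_word("next"))  # "word": the only word that ever follows "next"
```

The sketch also shows why such systems confabulate: when asked about something outside their data (`next_word` on an unseen word), there is no knowledge base to consult, only learnt word statistics.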
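The point above about code that looks plausible but contains errors in the detailed working can be made concrete. The snippet below is a hypothetical example of the kind of subtle off-by-one bug an AI assistant might produce; the function names and data are invented, and the point is only that the buggy version reads naturally and runs without error, so only a human who checks the details will catch it.

```python
def moving_average_buggy(xs, window):
    # Reads plausibly and runs fine, but the range bound is off by one:
    # the final window is silently dropped.
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window)]

def moving_average(xs, window):
    # Corrected bound (+ 1) so the last full window is included.
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window + 1)]

data = [1, 2, 3, 4]
print(moving_average_buggy(data, 2))  # [1.5, 2.5]       (3.5 is missing)
print(moving_average(data, 2))        # [1.5, 2.5, 3.5]
```

Neither version raises an exception, which is exactly why the guidance asks for a trained human to fully check AI-produced code rather than trusting that it works because it runs.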
Important note: Over-reliance on AI tools to generate written content, software code, or analysis reduces your opportunities to practise and develop important skills (such as writing, critical thinking, evaluation, analysis, or programming). These skills are essential for success both at university and after graduation.