From f33e2a0f94cac06ef5808da4d307cdc205579b77 Mon Sep 17 00:00:00 2001 From: Kathleen Esfahany Date: Sat, 15 Jul 2023 22:09:29 -0400 Subject: [PATCH] punctuation + profile link --- index.html | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/index.html b/index.html index 4ccd13d..89bece5 100644 --- a/index.html +++ b/index.html @@ -90,7 +90,7 @@

Proposed Harvard AI Code of Conduct

Authors

@@ -143,7 +143,7 @@

Suggestions: Implementation, Frami
  • The Harvard College Honor Council should be involved in the review of generative AI tool use policies and the review of possible policy violations. The Harvard College Honor Council, created in 2013, reviews possible violations of the Harvard College Honor Code and academic integrity policy. The Honor Council (or another similar group composed of students and faculty) should be involved in the regular review of Harvard’s policies for student use of generative AI tools. In cases where course-specific policies on generative AI use may have been violated – constituting cheating or misrepresentation, both prohibited by the Harvard College Honor Code – the Honor Council should be involved in the review of the possible violation as they would be in cases unrelated to AI.
  • We recommend that course policies distinguish carefully between categories of generative AI tools and specific tools. Course policies may refer broadly to “generative AI tools” and/or include further specification of the category of relevant tools (such as “large language models” or “image generation models”). They may also refer to specific tools (e.g. “ChatGPT”, “GPT-3”, or “DALL-E”). For example, an instructor may wish to limit students to just one tool (e.g. “Students may use ChatGPT to brainstorm ideas, so long as they cite the use.”) or may wish to encourage students to use any tool (e.g. “Students may use AI tools to brainstorm ideas, so long as they cite the use.”).

    -We recommend instructors distinguish carefully between these terms in their course policies. This distinction is especially important for policies which intend to limit generative AI tool usage. To effectively limit generative AI tool usage, policies should refer to “generative AI tools” or categories of tools (such as LLMs and LLM-powered tools). Such policies should avoid language referring solely to specific models (i.e. “ChatGPT”) to avoid students circumventing the intended policy by using similar models not mentioned in the policy. For example, a course policy that requires students not to use “large language models” (LLMs) or “LLM-powered tools” for essay writing effectively conveys that students should not use any tools that generate text. This type of policy is recommended over a policy that advises students to “not use ChatGPT” for essay writing, since students may then instead turn to a different large language model tool (such as Bard) for the same purpose
    +We recommend instructors distinguish carefully between these terms in their course policies. This distinction is especially important for policies which intend to limit generative AI tool usage. To effectively limit generative AI tool usage, policies should refer to “generative AI tools” or categories of tools (such as LLMs and LLM-powered tools). Such policies should avoid language referring solely to specific models (i.e. “ChatGPT”) to avoid students circumventing the intended policy by using similar models not mentioned in the policy. For example, a course policy that requires students not to use “large language models” (LLMs) or “LLM-powered tools” for essay writing effectively conveys that students should not use any tools that generate text. This type of policy is recommended over a policy that advises students to “not use ChatGPT” for essay writing, since students may then instead turn to a different large language model tool (such as Bard) for the same purpose.

    Faculty may still find it beneficial to name specific tools as examples within a category (e.g. “large language models, such as ChatGPT, should not be used for essays”) while still making clear that the entire category of tools is covered by their policy.