Commit

punctuation + profile link
esfahany committed Jul 16, 2023
1 parent accea08 commit f33e2a0
Showing 1 changed file with 2 additions and 2 deletions.
index.html (2 additions, 2 deletions)
@@ -90,7 +90,7 @@ <h1 class="mb-6 font-bold text-5xl text-black">Proposed Harvard AI Code of Condu
<h2 class="mb-4 font-bold text-3xl text-black">Authors</h2>
<ul class="list-dash relative">
<li><p><span class="font-semibold">Harvard College Students and Teaching Fellows</span> in Creativity (Gen Ed 1067) (Spring 2023)</p></li>
- <li><p><span class="font-semibold">David Atherton</span>, Assistant Professor of East Asian Languages and Civilizations (FAS)</p></li>
+ <li><p><span class="font-semibold"><a href="https://ealc.fas.harvard.edu/people/david-atherton">David Atherton</a></span>, Assistant Professor of East Asian Languages and Civilizations (FAS)</p></li>
<li><p><span class="font-semibold">Sarah Newman</span>, Director of Art & Education and Project Lead, AI Pedagogy Project, metaLAB (at) Harvard, Berkman Klein Center for Internet & Society (HLS)</p></li>
<li><p><span class="font-semibold">Kathleen Esfahany</span>, Research Assistant at metaLAB (at) Harvard, Berkman Klein Center for Internet & Society / PhD Student, Harvard Program in Neuroscience (HMS/FAS)</p></li>
</ul>
@@ -143,7 +143,7 @@ <h3 class="mb-4 font-bold text-xl text-black">Suggestions: Implementation, Frami
<li><span class="font-semibold">The Harvard College Honor Council should be involved in the review of generative AI tool use policies and the review of possible policy violations.</span> The Harvard College Honor Council, created in 2013, reviews possible violations of the Harvard College Honor Code and academic integrity policy. The Honor Council (or another similar group composed of students and faculty) should be involved in the regular review of Harvard’s policies for student use of generative AI tools. In cases where course-specific policies on generative AI use may have been violated – constituting cheating or misrepresentation, both prohibited by the Harvard College Honor Code – the Honor Council should be involved in the review of the possible violation as they would be in cases unrelated to AI.</li>
<li class="space-y-4">
<p><span class="font-semibold">We recommend that course policies distinguish carefully between <em>categories</em> of generative AI tools and <em>specific</em> tools.</span> Course policies may refer broadly to “generative AI tools” and/or include further specification of the category of relevant tools (such as “large language models” or “image generation models”). They may also refer to specific tools (e.g. “ChatGPT”, “GPT-3”, or “DALL-E”). For example, an instructor may wish to limit students to just one tool (e.g. “Students may use ChatGPT to brainstorm ideas, so long as they cite the use.”) or may wish to encourage students to use any tool (e.g. “Students may use AI tools to brainstorm ideas, so long as they cite the use.”)</p>
- <p>We recommend instructors distinguish carefully between these terms in their course policies. This distinction is especially important for policies which intend to limit generative AI tool usage. To effectively limit generative AI tool usage, policies should refer to “generative AI tools” or categories of tools (such as LLMs and LLM-powered tools). Such policies should avoid language referring solely to specific models (i.e. “ChatGPT”) to avoid students circumventing the intended policy by using similar models not mentioned in the policy. For example, a course policy that requires students not to use “large language models” (LLMs) or “LLM-powered tools” for essay writing effectively conveys that students should not use any tools that generate text. This type of policy is recommended over a policy that advises students to “not use ChatGPT” for essay writing, since students may then instead turn to a different large language model tool (such as Bard) for the same purpose</p>
+ <p>We recommend instructors distinguish carefully between these terms in their course policies. This distinction is especially important for policies which intend to limit generative AI tool usage. To effectively limit generative AI tool usage, policies should refer to “generative AI tools” or categories of tools (such as LLMs and LLM-powered tools). Such policies should avoid language referring solely to specific models (i.e. “ChatGPT”) to avoid students circumventing the intended policy by using similar models not mentioned in the policy. For example, a course policy that requires students not to use “large language models” (LLMs) or “LLM-powered tools” for essay writing effectively conveys that students should not use any tools that generate text. This type of policy is recommended over a policy that advises students to “not use ChatGPT” for essay writing, since students may then instead turn to a different large language model tool (such as Bard) for the same purpose.</p>
<p>Faculty may still find it beneficial to name specific tools as an example within a category (i.e. “large language models, such as ChatGPT, should not be used for essays”) while still making clear that the entire category of tool is affected by their policy.</p>
</li>
<li class="space-y-4">
