# aisafety

Here are 19 public repositories matching this topic...

Materials for the course Principles of AI: LLMs at UPenn (Stat 9911, Spring 2025). LLM architectures, training paradigms (pre- and post-training, alignment), test-time computation, reasoning, safety and robustness (jailbreaking, oversight, uncertainty), representations, interpretability (circuits), etc.

  • Updated Dec 18, 2024

This project facilitates structured debates between two large language model (LLM) instances on a given topic. It organises the debate into distinct phases: opening arguments, multiple rounds of rebuttals, and concluding statements (a minimal sketch of this flow follows the entry below).

  • Updated Sep 23, 2024
  • Python
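
The sketch below illustrates the kind of three-phase flow the description outlines: opening arguments, alternating rebuttal rounds, and concluding statements. The function and type names (`run_debate`, `Debater`, the `pro`/`con` callables) are illustrative assumptions, not the repository's actual API; real LLM calls are stubbed out so the example runs on its own.

```python
# Hypothetical sketch of the debate flow described above. The structure and
# names are assumptions for illustration, not the repository's actual code.

from typing import Callable, List, Tuple

Debater = Callable[[str], str]  # takes a prompt, returns the model's reply


def run_debate(topic: str, pro: Debater, con: Debater,
               rebuttal_rounds: int = 2) -> List[Tuple[str, str]]:
    """Run a structured debate and return (speaker, statement) pairs."""
    transcript: List[Tuple[str, str]] = []

    # Phase 1: opening arguments
    transcript.append(("pro", pro(f"Give an opening argument for: {topic}")))
    transcript.append(("con", con(f"Give an opening argument against: {topic}")))

    # Phase 2: alternating rebuttals, each responding to the opponent's last statement
    for _ in range(rebuttal_rounds):
        transcript.append(("pro", pro(f"Rebut this argument: {transcript[-1][1]}")))
        transcript.append(("con", con(f"Rebut this argument: {transcript[-1][1]}")))

    # Phase 3: concluding statements
    transcript.append(("pro", pro(f"Give a concluding statement for: {topic}")))
    transcript.append(("con", con(f"Give a concluding statement against: {topic}")))
    return transcript


if __name__ == "__main__":
    # Stub debaters so the sketch runs without any model backend.
    pro = lambda prompt: f"[pro model reply to: {prompt[:40]}...]"
    con = lambda prompt: f"[con model reply to: {prompt[:40]}...]"
    for speaker, text in run_debate("AI systems should be open-sourced", pro, con):
        print(f"{speaker}: {text}")
```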

This repository contains the code, data, and analysis used in the study "Religious-Based Manipulation and AI Alignment Risks," which explores the risks of large language models (LLMs) generating religious content that can encourage discriminatory or violent behavior.

  • Updated Sep 28, 2024
  • Jupyter Notebook
