Principles of AI: LLMs (UPenn, Stat 9911, Spring 2025)

This course explores Large Language Models (LLMs), from their fundamental principles to cutting-edge research directions. We aim to discuss the design and future of AI systems through lecture content and student-led presentations.

The course is structured around topics such as transformer architectures, empirical behaviors, training paradigms, and safety considerations. Students will also explore emerging challenges and the broader implications of AI technologies.

Course Topics and Goals

  • Introduce fundamental concepts in AI and LLMs.
  • Discuss architecture and principles of LLMs, including transformers.
  • Explore topics in LLMs and modern AI systems, such as training paradigms (pre-training, post-training, alignment), inference/test-time computation, embeddings/representations, evaluations, capabilities, safety/security (jailbreaking, oversight, hallucinations, uncertainty), and interpretability (circuits).
  • Host student-led presentations on key research papers and recent breakthroughs.

Reference Materials

  • Course Syllabus. Note: the official course title is "Sem In Adv Appl Of Stat: Advances In Artificial Intelligence"; however, we will use the unofficial title for our purposes.
  • Lecture Notes. Note: these are a work in progress.

Lecture Content

Introduction

  • What is AI? Definitions and Goals
  • Historical Overview of Artificial Intelligence
  • The Challenge of AGI and Feasibility of AI in Daily Tasks

LLM Architectures

  • Input/Output Processing in AI Systems
  • Transformer Mechanisms and Attention (see the sketch after this list)
  • Key Architecture Details: Positional Encoding, Faster Attention
  • Variations Across Model Architectures (e.g., GPT, Llama)
  • Empirical Behavior: Scaling Laws, Emergence
  • Extensions: Vision and Multimodal Language Models
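
As one concrete anchor for the attention topic above, here is a minimal NumPy sketch of scaled dot-product attention (single head, no masking or learned projections). All names, shapes, and values are illustrative assumptions, not course code.

```python
# Minimal sketch of scaled dot-product attention, assuming a single head
# and no causal mask; shapes and values are illustrative only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row is a distribution over keys
    return weights @ V                  # weighted average of value vectors

# Tiny usage example with random inputs
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```

In a full transformer, Q, K, and V come from learned linear projections of the token representations, and multiple such heads run in parallel.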

Pre-Training and Post-Training

  • Pre-Training Paradigms (a minimal objective sketch follows this list)
  • Post-Training: Fine-Tuning and Instruction Tuning
  • Alignment: Reward Learning and Reinforcement Learning from Human Feedback (RLHF)
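
To make the pre-training paradigm concrete, here is a minimal sketch of the next-token-prediction (cross-entropy) objective at the heart of LLM pre-training. The toy vocabulary size and random logits are hypothetical, not from the course materials.

```python
# Minimal sketch of the next-token-prediction loss used in pre-training,
# assuming a toy vocabulary; all numbers are illustrative.
import numpy as np

def next_token_loss(logits, targets):
    """logits: (seq_len, vocab) model outputs; targets: (seq_len,) true next-token ids.
    Returns the average negative log-likelihood of the targets."""
    shifted = logits - logits.max(axis=-1, keepdims=True)          # stable log-softmax
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

rng = np.random.default_rng(0)
logits = rng.standard_normal((5, 10))   # 5 positions, vocabulary of 10 tokens
targets = rng.integers(0, 10, size=5)   # the "true" next tokens
print(next_token_loss(logits, targets))
```

Pre-training minimizes this loss over enormous text corpora; post-training methods such as instruction tuning and RLHF then adapt the resulting model.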

Inference and Decoding (Test-Time Computation)

  • Simple and Advanced Sampling Methods (sketched below)
  • Prompting, Chain-of-Thought, and Tree-of-Thought
  • Reasoning
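
As a small illustration of the sampling methods listed above, here is a sketch of temperature and top-k sampling from a vector of next-token logits. The parameter values are illustrative defaults, not course settings.

```python
# Minimal sketch of temperature and top-k sampling from next-token logits;
# parameters and vocabulary size are illustrative assumptions.
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, rng=None):
    """Pick one token id from a (vocab,) logits vector."""
    rng = rng or np.random.default_rng()
    logits = logits / temperature                  # <1 sharpens, >1 flattens
    if top_k is not None:
        cutoff = np.sort(logits)[-top_k]           # keep only the k largest logits
        logits = np.where(logits >= cutoff, logits, -np.inf)
    logits = logits - logits.max()                 # stable softmax
    probs = np.exp(logits) / np.exp(logits).sum()  # masked logits get probability 0
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = rng.standard_normal(10)
print(sample_next_token(logits, temperature=0.7, top_k=3, rng=rng))
```

Greedy decoding is the temperature-to-zero limit; prompting strategies such as chain-of-thought change what is decoded, not how tokens are sampled.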

Safety and Robustness

  • Jailbreaking and Oversight Mechanisms
  • Addressing Hallucinations in AI Systems
  • Ensuring Robustness and Security

Mechanistic Interpretability

  • Embeddings and Representations (a small similarity sketch follows this list)
  • Transformer Circuits
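
As a small companion to the representations topic, here is a sketch of comparing embedding vectors by cosine similarity, a basic tool when probing what models represent. The vectors here are random stand-ins, so the printed similarities carry no meaning; with trained embeddings, related words score higher.

```python
# Minimal sketch of cosine similarity between embeddings; the vectors
# are random placeholders, not trained representations.
import numpy as np

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(64) for w in ["king", "queen", "banana"]}
print(cosine_similarity(emb["king"], emb["queen"]))
print(cosine_similarity(emb["king"], emb["banana"]))
```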

Student Presentations

After the initial lectures, students will lead presentations on topics of their choice, covering recent advances or open research questions in AI.

Additional Resources
