This repository was archived by the owner on May 27, 2025. It is now read-only.

Commit f2b05ac

vault backup: 2025-04-12 22:50:39

1 parent 7703ca1 commit f2b05ac

File tree

9 files changed: +30 -5 lines changed

content/ai/llms/reviews/claude/claude-sonnet-3.5.md

Whitespace-only changes.

content/ai/llms/reviews/gemini/gemini-2.5-pro.md

Whitespace-only changes.

content/ai/llms/reviews/index.md

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+Reviews for LLMs that I've used.

content/machine-learning/llms/tokenization-is-all-you-need.md renamed to content/ai/llms/tokenization-is-all-you-need.md

Lines changed: 3 additions & 0 deletions
@@ -4,6 +4,9 @@ tags:
 title: Tokenization is all you need
 ---
 
+> [!attention] Disclaimer (4/11/25)
+> I wrote this a couple of years ago. Having gained more experience with LLMs on both a technical and a professional level, I feel most of this article is misinformed or incorrect. I'm leaving it up because it's an interesting snapshot of what I was thinking as a true amateur, but I don't think it's really intelligent or worthwhile.
+
 An LLM, at the end of the day, is a fancy autocomplete: one that ingests a bunch of tokens, embeds them to "understand" the semantic context of said tokens, and then outputs a bunch of tokens based on statistical modeling.
 
 I've been thinking a lot recently about how we can make LLMs really start to do interesting things. The current meta (pun not intended) is to make LLMs output structured info. Say we want the LLM to call a function: we simply tell the LLM "hey, here's my registry of functions" and expect it to output JSON describing how it wants to utilize these functions. Of course, this is very buggy, as LLMs do not understand what "JSON" is; they only model the statistical shape of what JSON should look like, which has led to the development of many fault-tolerant JSON parsers.
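As an aside on that last point, here is a minimal sketch of what such fault-tolerant parsing can look like. Everything in it is hypothetical and for illustration only: `parse_tool_call`, the `REGISTRY` dict, and the `{"name": ..., "arguments": ...}` call shape are assumptions, not any particular library's API.

```python
import json
import re


def parse_tool_call(raw: str) -> dict | None:
    """Best-effort extraction of a JSON tool call from LLM output.

    Models emit text that statistically resembles JSON, so a strict
    parse can fail; fall back to fishing out the first {...} span.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # The model may wrap the call in prose or markdown fences.
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None


# Hypothetical function registry the model was told about.
REGISTRY = {"get_weather": lambda city: f"Sunny in {city}"}

call = parse_tool_call('Sure! {"name": "get_weather", "arguments": {"city": "Oslo"}}')
if call and call.get("name") in REGISTRY:
    print(REGISTRY[call["name"]](**call["arguments"]))  # Sunny in Oslo
```

Real implementations go further (and grammar-constrained decoding in modern inference servers attacks the problem at generation time), but the try-then-salvage pattern above is the core of it.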

content/ai/llms/ways-i-use-llm.md

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+---
+title: Ways I use LLMs to maximize my personal performance and improve my quality of life
+tags:
+- notes
+---
+# Abusing long context
+
+(4/12/25) I've been investigating fine-grained authorization recently, and landed on using [[openfga]]. While it's cool, it's also sparsely documented. I used [gitingest.com](http://gitingest.com) to get the entire source code of the Python SDK into one file, then dumped it into [[gemini-2.5-pro]] and asked it to write me FastAPI middleware. It designed something that had _good_ taste. It was a really beautiful abstraction, and took 80 seconds to create.
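The Gemini-written middleware itself isn't part of this commit, so as a stand-in, here is a rough sketch of the general shape such middleware could take. It assumes the OpenFGA Python SDK's `OpenFgaClient` / `ClientCheckRequest` interface; the `can_call` relation, the `route:` object type, and the `x-user-id` header are made up for illustration.

```python
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from openfga_sdk.client import ClientConfiguration, OpenFgaClient
from openfga_sdk.client.models import ClientCheckRequest

app = FastAPI()

# Assumed FGA deployment details; substitute your own.
FGA_CONFIG = ClientConfiguration(
    api_url="http://localhost:8080",
    store_id="01H0EXAMPLE-REPLACE-ME",
)


@app.middleware("http")
async def openfga_middleware(request: Request, call_next):
    # Hypothetical mapping from request to an FGA check; a real app
    # would derive the user from its auth layer, not a raw header.
    user = f"user:{request.headers.get('x-user-id', 'anonymous')}"
    obj = f"route:{request.url.path}"

    # A client per request keeps the sketch short; a real app would
    # create one at startup (e.g. in a lifespan handler) and reuse it.
    async with OpenFgaClient(FGA_CONFIG) as fga:
        result = await fga.check(
            ClientCheckRequest(user=user, relation="can_call", object=obj)
        )

    if not result.allowed:
        return JSONResponse(status_code=403, content={"detail": "forbidden"})
    return await call_next(request)
```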

content/management/scrum/obsidian/check-in.md

Lines changed: 4 additions & 4 deletions
@@ -7,11 +7,11 @@ I'm a big fan of 1:1 meetings, especially as a means to close out a sprint and b
 
 ````eta
 ---
-date: <% tp.date.now("YYYY-MM-DD") %>
-time: <% tp.date.now("HH:mm") %>
+date: 2025-04-11
+time: 20:01
 team_member:
-project: <% tp.file.folder(true).split("/")[1] %>
-sprint_num: <% tp.file.folder(true).split("/")[3] %>
+project: programming
+sprint_num: python
 type: sprint-checkin
 ---
 # Sprint Check-in

content/programming/languages/python/asyncio/interview-question.md

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ If it isn't obvious to you: time to study :)
 
 This isn't a particularly esoteric question or anything, but I wanted to see what LLMs think about it. The prompt was exactly the text above, with the first and last paragraphs excluded. Here's what a few of them say.
 
-> [!example]- Claude 3.5 Sonnet
+> [!example]- [[claude-sonnet-3.5|Claude Sonnet 3.5]]
 >
 > This is a great question about API design! Let me break down why these APIs are different and why the ProcessPoolExecutor/ThreadPoolExecutor can't use the same approach as TaskGroup.
 > The key difference lies in how these executors handle function execution:
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+---
+title: Building a FIPS-enabled Keycloak image
+---
Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+---
+title: x509 client authentication/mTLS for Keycloak behind a reverse proxy.
+---
+I had to solve an interesting problem recently:
+
+- I have a keycloak instance, living in a docker compose stack / kubernetes deployment.
+- The keycloak instance DOES NOT handle its own TLS, instead deferring TLS termination to the reverse proxy.
+- The goal is to have keycloak authenticate clients via mTLS: if the presented x509 certificate maps to a user, authenticate that user without a username/password.
+
+This was surprisingly difficult, and took me about three or four hours of fiddling to figure out.
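The note ends before the fix, and the commit doesn't record the author's exact solution. For context, though, this is the shape of the standard documented setup: the proxy requests (but doesn't require) a client certificate and forwards it in a header, and Keycloak's x509 lookup SPI is pointed at that header. All hostnames and paths below are placeholders, and exact Keycloak option names vary by version.

```nginx
# nginx side: terminate TLS, request a client cert, and pass it to
# Keycloak in the header its nginx x509 lookup provider reads.
server {
    listen 443 ssl;
    server_name auth.example.com;                      # placeholder

    ssl_certificate        /etc/nginx/tls/server.crt;  # placeholder paths
    ssl_certificate_key    /etc/nginx/tls/server.key;
    ssl_client_certificate /etc/nginx/tls/client-ca.crt;
    ssl_verify_client optional;

    location / {
        proxy_set_header ssl-client-cert $ssl_client_escaped_cert;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_pass http://keycloak:8080;
    }
}

# Keycloak side (container env; names vary by version):
#   KC_PROXY_HEADERS=xforwarded
#   KC_SPI_X509CERT_LOOKUP_PROVIDER=nginx
#   KC_SPI_X509CERT_LOOKUP_NGINX_SSL_CLIENT_CERT=ssl-client-cert
```

From there, the realm's X509/Validate Username authenticator (or its direct grant equivalent) handles the certificate-to-user mapping.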
