
Conversation

@kean (Contributor) commented Sep 16, 2025

Fixes https://linear.app/a8c/issue/CMM-739/generate-post-excerpt

Recording

generate-excerpt.mov

Screenshots (iPhone)

Screenshot 2025-09-16 at 2 21 26 PM Screenshot 2025-09-16 at 2 21 36 PM Screenshot 2025-09-16 at 2 21 40 PM

Screenshots (Negative Scenarios)

Simulator.Screen.Recording.-.iPhone.17.-.2025-09-17.at.06.54.12.mov

Screenshots (iPad)

Screenshot 2025-09-17 at 7 21 18 AM

@kean added this to the 26.4 milestone Sep 16, 2025
@dangermattic (Collaborator) commented Sep 16, 2025

1 Warning
⚠️ This PR is larger than 500 lines of changes. Please consider splitting it into smaller PRs for easier and faster reviews.

Generated by 🚫 Danger

@kean force-pushed the task/add-excerpt-generation branch 2 times, most recently from 3f05821 to f60e252 on September 16, 2025 19:17
@kean requested a review from crazytonyli September 16, 2025 19:27
@kean force-pushed the task/add-excerpt-generation branch 2 times, most recently from 377fde8 to 31362a1 on September 16, 2025 19:38
@wpmobilebot (Contributor) commented Sep 16, 2025

📲 You can test the changes from this Pull Request in Jetpack by scanning the QR code below to install the corresponding build.

- App Name: Jetpack
- Configuration: Release-Alpha
- Build Number: 29091
- Version: PR #24852
- Bundle ID: com.jetpack.alpha
- Commit: 5beb232
- Installation URL: 2fbp30gimpao8
Automatticians: You can use our internal self-serve MC tool to give yourself access to those builds if needed.

@wpmobilebot (Contributor) commented Sep 16, 2025

📲 You can test the changes from this Pull Request in WordPress by scanning the QR code below to install the corresponding build.

- App Name: WordPress
- Configuration: Release-Alpha
- Build Number: 29091
- Version: PR #24852
- Bundle ID: org.wordpress.alpha
- Commit: 5beb232
- Installation URL: 5q2mk8tjue07o
Automatticians: You can use our internal self-serve MC tool to give yourself access to those builds if needed.

@kean force-pushed the task/add-excerpt-generation branch from d00970f to dce58de on September 17, 2025 11:49
@kean force-pushed the task/add-excerpt-generation branch from dce58de to 6011562 on September 17, 2025 11:52
HStack(spacing: 4) {
    Image(systemName: "sparkle")
        .font(.caption2)
    Text(Strings.generateButton)
Contributor:
I wonder if this "Generate" label is needed. The icon would probably be enough?

@@ -0,0 +1,128 @@
import Foundation

#if canImport(FoundationModels)
Contributor:
The canImport checks can be removed.

Contributor Author:
Claude added it for some reason and I kept it – removed.
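For reference, if the framework still needs guarding somewhere, a runtime availability check covers it without the compile-time `canImport` wrapper. A minimal sketch — the `iOS 26.0` availability version is an assumption about when FoundationModels shipped, and `makeExcerptSession` is a hypothetical helper name:

```swift
import FoundationModels

// Sketch: gate usage at runtime instead of wrapping the whole file
// in `#if canImport(FoundationModels)`. The `iOS 26.0` version is an
// assumption; adjust to the actual deployment availability.
@available(iOS 26.0, *)
func makeExcerptSession() -> LanguageModelSession {
    LanguageModelSession()
}
```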

private func startGeneration() async throws {
    let session = LanguageModelSession()
    let prompt = LanguageModelHelper.makeGenerateExcerptPrompt(
        content: postContent,
Contributor:
Is it possible to use a single LanguageModelSession with postContent (which won't change during the session) loaded into it? I think that would speed up generation, because the model wouldn't need to process the same postContent again and again in startGeneration.

Contributor Author:
My understanding is that LanguageModelSession should be reused only when you want to keep context between interactions, and I don't think that's what you want when the criteria change. I did just add a "Suggest More Options" button to generate more results, and there I do reuse the LanguageModelSession. From my testing, even a simple follow-up prompt like "generate more results" in the same session takes roughly as long as starting a new session.

excerpt-load-more.mov
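For context, the reuse described above looks roughly like this — a sketch only: `ExcerptGenerator` is a hypothetical wrapper, `makeGenerateExcerptPrompt` is the helper from this PR, and the `respond(to:)` call assumes the FoundationModels session API:

```swift
import FoundationModels

// Sketch: one session per generation flow, reused only for follow-ups
// that should keep the transcript (e.g. "Suggest More Options").
@available(iOS 26.0, *)
final class ExcerptGenerator {
    private let session = LanguageModelSession()

    func generateExcerpts(for postContent: String) async throws -> String {
        let prompt = LanguageModelHelper.makeGenerateExcerptPrompt(content: postContent)
        return try await session.respond(to: prompt).content
    }

    func suggestMoreOptions() async throws -> String {
        // The session keeps the earlier prompt and responses as context,
        // so a short follow-up prompt is enough here.
        try await session.respond(to: "Generate three more excerpt options.").content
    }
}
```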

    case loading
    case error
    case finished
}
Contributor:
Can this be removed?

Contributor Author:
Sure. I used it when iterating on the design of cells. I don't think we'll need it anymore – removed.


Image(systemName: "chevron.right")
    .font(.caption2.weight(.semibold))
    .foregroundStyle(Color(.secondaryLabel).opacity(0.5))
Contributor:
This chevron makes me think there is a details view upon tapping it. Can it be removed?

Contributor Author:
I wanted to show that it's tappable, but I agree it adds more confusion than anything else. You'll figure out that you just need to tap one of the options. Removed.

.navigationBarTitleDisplayMode(.inline)
.toolbar {
    ToolbarItem(placement: .topBarTrailing) {
        if !postContent.isEmpty && LanguageModelHelper.isSupported {
Contributor:
The generated result is not great when the post content is short. The result is heavily influenced by the prompt rather than the "source content". Maybe we should only show the "Generate" button when the post content is at least a certain number of characters long?

Contributor Author:
I opened a bug: https://linear.app/a8c/issue/CMM-763/if-the-post-content-is-too-short-the-results-are-not-helpful. I tried instructing the model itself to enforce this, but it ignores the instruction.

Contributor:
I was thinking of doing an `if (postContent.length > 1000) { useLLM() }` check.
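Sketched out in Swift, that check could live in a small helper. The 1000-character threshold is the value floated in this thread, not a tuned constant, and `shouldOfferGeneration` is a hypothetical name:

```swift
import Foundation

// Sketch: only offer AI excerpt generation when the post has enough
// content for the model to summarize. The 1000-character default is
// the reviewer's suggestion, not a tested value.
func shouldOfferGeneration(for postContent: String, minimumLength: Int = 1000) -> Bool {
    // Trim whitespace so padding doesn't count toward the threshold.
    let trimmed = postContent.trimmingCharacters(in: .whitespacesAndNewlines)
    return trimmed.count >= minimumLength
}
```

The call site in the toolbar would then combine this with the existing `LanguageModelHelper.isSupported` check.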

• Include the post's main value proposition
• Use active voice (avoid "is", "are", "was", "were" when possible)
• End with implicit promise of more information
• Do not use ellipsis (...) at the end
Contributor:
The generated result is in English when the content is non-English. How about adding a requirement here saying that the excerpt should be in the same language as the source content?

Contributor Author:
I was going to test different locales but ran out of time.

If the post content is not supported, it will show this:

Screenshot 2025-09-18 at 8 20 34 AM

I can't get it to produce results in anything other than English. I opened a ticket – https://linear.app/a8c/issue/CMM-762/excerpts-are-created-in-the-system-language-not-the-content-language. Help is welcome.
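If we do add the requirement to the prompt, it would be a one-line addition to the requirements list. A sketch — the wording is illustrative, and whether the on-device model honors it is exactly the open question in CMM-762:

```swift
// Sketch: an explicit language requirement appended to the prompt's
// requirements bullets. Wording is illustrative, not tested.
let languageRequirement = """
• Write each excerpt in the same language as SOURCE CONTENT, even if \
the system language is different
"""
```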

Excerpt 3: Lead with the primary benefit or outcome
SOURCE CONTENT:
\(content)
@crazytonyli (Contributor) commented Sep 18, 2025:
The content is in HTML format. That does not seem to bother the LLM, because the result is pretty on point. But I wonder if we should still explicitly mention that the source content is in HTML format in the prompt.

Contributor Author:

I initially wanted to trim the HTML, but I wasn't sure it would work well with Gutenberg blocks. Ideally, we'd feed in a rendered version of the post, but we don't have that. Based on my testing, the model just figures it out, so I think we're good.
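If we did want to be explicit about the format, a short preamble in the prompt would cover it without touching the content itself. A sketch with illustrative wording and a hypothetical constant name:

```swift
// Sketch: tell the model up front that the source is raw post HTML
// (including Gutenberg block comments) rather than stripping tags,
// which could mangle block markup.
let sourceContentHeader = """
SOURCE CONTENT (raw HTML with Gutenberg block comments; ignore the \
markup and summarize the rendered text):
"""
```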

@kean requested a review from crazytonyli September 18, 2025 12:44
@kean enabled auto-merge September 19, 2025 18:10