3 changes: 3 additions & 0 deletions .vscode/settings.json
Author

I took the liberty of committing this, but I understand if you don't want it here. I thought it might help other VS Code users like me who have the "format on save" setting turned on.

@@ -0,0 +1,3 @@
{
"dart.lineLength": 120
}
7 changes: 7 additions & 0 deletions lib/src/fonts/fonts.dart
@@ -1,12 +1,19 @@
import 'package:flutter_test/flutter_test.dart';
import 'package:flutter_test_goldens/src/fonts/golden_toolkit_fonts.dart' as golden_toolkit;

/// Remember if fonts have already been loaded in this isolate.
bool _fontsLoaded = false;
Contributor

Is this actually needed? What's the current behavior that you're trying to fix with this?

Author

It seems to me that, right now, if I have multiple testGoldenScene tests in the same suite, they will all read the font manifest, even though the fonts have already been loaded. The same thing would happen with my change, but even if we don't move font loading somewhere else, it seems like wasted work?

Contributor

I'm just looking for clarity as to whether this actually prevents additional execution, or if we're just repeating something that's already tracked inside the font loading behavior. So without this we're re-reading the manifest on every call even if fonts are already loaded? (I haven't looked at that in a while).


/// Tools for working with fonts in tests.
abstract class TestFonts {
/// Load all fonts registered with the app and make them available
/// to widget tests.
static Future<void> loadAppFonts() async {
if (_fontsLoaded) {
return;
}
await golden_toolkit.loadAppFonts();
_fontsLoaded = true;
}

static const openSans = "packages/flutter_test_goldens/OpenSans";
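
For context on the thread above about the _fontsLoaded guard: a minimal usage sketch, not part of this PR, showing two golden tests in one suite. With this change, only the first call reads the font manifest; the second returns immediately. It assumes TestFonts is reachable through the package's public exports.

import 'package:flutter_test/flutter_test.dart';
import 'package:flutter_test_goldens/flutter_test_goldens.dart';

void main() {
  testWidgets('first golden in the suite', (tester) async {
    // First call in this isolate: reads the font manifest and loads the registered fonts.
    await TestFonts.loadAppFonts();
    // ... pump a widget tree and run a golden comparison ...
  });

  testWidgets('second golden in the suite', (tester) async {
    // Guarded call: _fontsLoaded is already true, so this returns without re-reading the manifest.
    await TestFonts.loadAppFonts();
    // ...
  });
}
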
62 changes: 36 additions & 26 deletions lib/src/scenes/gallery.dart
@@ -7,6 +7,7 @@ import 'package:flutter/material.dart' hide Image;
import 'package:flutter_test/flutter_test.dart';
import 'package:flutter_test_goldens/src/flutter/flutter_camera.dart';
import 'package:flutter_test_goldens/src/flutter/flutter_test_extensions.dart';
import 'package:flutter_test_goldens/src/fonts/fonts.dart';
import 'package:flutter_test_goldens/src/goldens/golden_collections.dart';
import 'package:flutter_test_goldens/src/goldens/golden_comparisons.dart';
import 'package:flutter_test_goldens/src/goldens/golden_rendering.dart';
@@ -296,33 +297,42 @@ class Gallery {
Future<void> run(WidgetTester tester) async {
FtgLog.pipeline.info("Rendering or comparing golden - $_sceneDescription");

// Build each golden tree and take `FlutterScreenshot`s.
final camera = FlutterCamera();
await _takeNewScreenshots(tester, camera);

// Convert each `FlutterScreenshot` to a golden `GoldenSceneScreenshot`, which includes
// additional metadata, and multiple image representations.
final screenshots = await _convertFlutterScreenshotsToSceneScreenshots(tester, camera.photos);

if (autoUpdateGoldenFiles) {
// Generate new goldens.
FtgLog.pipeline.finer("Generating new goldens...");
// TODO: Return a success/failure report that we can publish to the test output.
await _updateGoldenScene(
tester,
_fileName,
screenshots,
);
FtgLog.pipeline.finer("Done generating new goldens.");
} else {
// Compare to existing goldens.
FtgLog.pipeline.finer("Comparing existing goldens...");
// TODO: Return a success/failure report that we can publish to the test output.
await _compareGoldens(tester, _fileName, screenshots);
FtgLog.pipeline.finer("Done comparing goldens.");
await TestFonts.loadAppFonts();
Contributor

We don't want to force this on everyone. If people want standard Ahem goldens, they need to be able to get them.


tester.view
Contributor

We don't wanna force this either.

I see that you're trying to merge things together, but that's actually the opposite of what a toolkit should do. Higher-level conveniences that add default behaviors are fine. But forcing decisions on everyone at the center of the tool will lead to angry devs who get to a certain point of adoption and then realize they can't control something that they need to control.

..devicePixelRatio = 1.0
..platformDispatcher.textScaleFactorTestValue = 1.0;

try {
// Build each golden tree and take `FlutterScreenshot`s.
final camera = FlutterCamera();
await _takeNewScreenshots(tester, camera);

// Convert each `FlutterScreenshot` to a golden `GoldenSceneScreenshot`, which includes
// additional metadata, and multiple image representations.
final screenshots = await _convertFlutterScreenshotsToSceneScreenshots(tester, camera.photos);

if (autoUpdateGoldenFiles) {
// Generate new goldens.
FtgLog.pipeline.finer("Generating new goldens...");
// TODO: Return a success/failure report that we can publish to the test output.
await _updateGoldenScene(
tester,
_fileName,
screenshots,
);
FtgLog.pipeline.finer("Done generating new goldens.");
} else {
// Compare to existing goldens.
FtgLog.pipeline.finer("Comparing existing goldens...");
// TODO: Return a success/failure report that we can publish to the test output.
await _compareGoldens(tester, _fileName, screenshots);
FtgLog.pipeline.finer("Done comparing goldens.");
}
FtgLog.pipeline.fine("Done with golden generation/comparison");
} finally {
tester.view.reset();
}

FtgLog.pipeline.fine("Done with golden generation/comparison");
}

/// For each scene screenshot request, pumps its widget tree, and then screenshots it with
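
One possible shape for the "higher level convenience" suggested in the thread above — a hypothetical wrapper, not an API that exists in this PR or in the package, that layers the font-loading and view defaults on top of Gallery.run() instead of forcing them inside it. It assumes Gallery and TestFonts are exported from the top-level library.

import 'package:flutter_test/flutter_test.dart';
import 'package:flutter_test_goldens/flutter_test_goldens.dart';

/// Hypothetical convenience (illustration only): applies opinionated defaults around a
/// Gallery run, but every default stays overridable and nothing is baked into Gallery itself.
Future<void> runGalleryWithDefaults(
  WidgetTester tester,
  Gallery gallery, {
  bool loadRealFonts = true, // pass false to keep the standard Ahem goldens
  double devicePixelRatio = 1.0,
  double textScaleFactor = 1.0,
}) async {
  if (loadRealFonts) {
    await TestFonts.loadAppFonts();
  }
  tester.view
    ..devicePixelRatio = devicePixelRatio
    ..platformDispatcher.textScaleFactorTestValue = textScaleFactor;
  try {
    await gallery.run(tester);
  } finally {
    tester.view.reset();
  }
}
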
199 changes: 105 additions & 94 deletions lib/src/scenes/timeline.dart
@@ -5,6 +5,7 @@ import 'dart:ui' as ui;

import 'package:flutter/material.dart' hide Image;
import 'package:flutter_test/flutter_test.dart';
import 'package:flutter_test_goldens/flutter_test_goldens.dart';
import 'package:flutter_test_goldens/src/flutter/flutter_camera.dart';
import 'package:flutter_test_goldens/src/flutter/flutter_test_extensions.dart';
import 'package:flutter_test_goldens/src/goldens/golden_collections.dart';
@@ -223,119 +224,129 @@ class Timeline {
"Can't render or compare golden file without a setup action. Please call setup() or setupWithPump().");
}

FtgLog.pipeline.info("Rendering or comparing golden - $_fileName");
await TestFonts.loadAppFonts();

// Always operate at a 1:1 logical-to-physical pixel ratio to help reduce
// anti-aliasing and other artifacts from fractional pixel offsets.
tester.view.devicePixelRatio = 1.0;
tester.view
..devicePixelRatio = 1.0
..platformDispatcher.textScaleFactorTestValue = 1.0;

final camera = FlutterCamera();
final testContext = TimelineTestContext();
try {
FtgLog.pipeline.info("Rendering or comparing golden - $_fileName");

// Setup the scene.
FtgLog.pipeline.info("Running any given setup delegate before running steps.");
await _setup!.setupDelegate(tester);
// Always operate at a 1:1 logical-to-physical pixel ratio to help reduce
// anti-aliasing and other artifacts from fractional pixel offsets.
tester.view.devicePixelRatio = 1.0;

// Take photos and modify scene over time.
for (int i = 0; i < _steps.length; i += 1) {
final step = _steps[i];
FtgLog.pipeline.info("Running step: $step");
final camera = FlutterCamera();
final testContext = TimelineTestContext();

if (step is _TimelineModifySceneAction) {
await step.delegate(tester, testContext);
continue;
}
// Setup the scene.
FtgLog.pipeline.info("Running any given setup delegate before running steps.");
await _setup!.setupDelegate(tester);

if (step is _TimelinePhotoRequest) {
expect(step.photoBoundsFinder, findsOne);
// Take photos and modify scene over time.
for (int i = 0; i < _steps.length; i += 1) {
final step = _steps[i];
FtgLog.pipeline.info("Running step: $step");

final renderObject = step.photoBoundsFinder.evaluate().first.findRenderObject();
expect(
renderObject,
isNotNull,
reason:
"Failed to find a render object for photo '${step.description}', using finder '${step.photoBoundsFinder}'",
);
if (step is _TimelineModifySceneAction) {
await step.delegate(tester, testContext);
continue;
}

await tester.runAsync(() async {
await camera.takePhoto(step.description, step.photoBoundsFinder);
});
if (step is _TimelinePhotoRequest) {
expect(step.photoBoundsFinder, findsOne);

continue;
}
final renderObject = step.photoBoundsFinder.evaluate().first.findRenderObject();
expect(
renderObject,
isNotNull,
reason:
"Failed to find a render object for photo '${step.description}', using finder '${step.photoBoundsFinder}'",
);

throw Exception("Tried to run a step when rendering a Timeline, but we don't recognize this step type: $step");
}
await tester.runAsync(() async {
await camera.takePhoto(step.description, step.photoBoundsFinder);
});

// Lay out photos in a row.
final photos = camera.photos;
// TODO: cleanup the modeling of these photos vs renderable photos once things are working
final renderablePhotos = <GoldenSceneScreenshot, GlobalKey>{};
await tester.runAsync(() async {
for (final photo in photos) {
final byteData = (await photo.pixels.toByteData(format: ui.ImageByteFormat.png))!;

final candidate = GoldenSceneScreenshot(
// FIXME: When I refactored image modeling to become FlutterScreenshot and GoldenImage, I changed
// how IDs and descriptions were stored. The new structure worked fine for Galleries, where
// we already had an ID and a description. But timeline didn't appear to have an explicit
// ID for a given screenshot, so I gave the description as the "photo ID", which is why it's
// now used in 2 places here. We should probably create a first-class concept of an ID for
// a given timeline screenshot (independent from step index).
photo.id,
GoldenScreenshotMetadata(
description: photo.id,
simulatedPlatform: photo.simulatedPlatform,
),
decodePng(byteData.buffer.asUint8List())!,
byteData.buffer.asUint8List(),
);
continue;
}

renderablePhotos[candidate] = GlobalKey();
throw Exception("Tried to run a step when rendering a Timeline, but we don't recognize this step type: $step");
}
});

// Layout photos in the timeline.
final sceneMetadata = await _layoutPhotos(
tester,
photos,
SceneLayoutContent(
description: _description,
goldens: renderablePhotos,
),
_layout,
goldenBackground: _goldenBackground,
);

FtgLog.pipeline.finer("Running momentary delay for render flakiness");
await tester.runAsync(() async {
// Without this delay, the screenshot loading is spotty. However, with
// this delay, we seem to always get screenshots displayed in the widget tree.
// FIXME: Root cause this render flakiness and see if we can fix it.
await Future.delayed(const Duration(milliseconds: 1));
});
// Lay out photos in a row.
final photos = camera.photos;
// TODO: cleanup the modeling of these photos vs renderable photos once things are working
final renderablePhotos = <GoldenSceneScreenshot, GlobalKey>{};
await tester.runAsync(() async {
for (final photo in photos) {
final byteData = (await photo.pixels.toByteData(format: ui.ImageByteFormat.png))!;

final candidate = GoldenSceneScreenshot(
// FIXME: When I refactored image modeling to become FlutterScreenshot and GoldenImage, I changed
// how IDs and descriptions were stored. The new structure worked fine for Galleries, where
// we already had an ID and a description. But timeline didn't appear to have an explicit
// ID for a given screenshot, so I gave the description as the "photo ID", which is why it's
// now used in 2 places here. We should probably create a first-class concept of an ID for
// a given timeline screenshot (independent from step index).
photo.id,
GoldenScreenshotMetadata(
description: photo.id,
simulatedPlatform: photo.simulatedPlatform,
),
decodePng(byteData.buffer.asUint8List())!,
byteData.buffer.asUint8List(),
);

await tester.pumpAndSettle();
renderablePhotos[candidate] = GlobalKey();
}
});

final relativeGoldenFilePath = "$_relativeGoldenDirectory/$_fileName.png";
if (autoUpdateGoldenFiles) {
// Generate new goldens.
await _updateGoldenScene(
// Layout photos in the timeline.
final sceneMetadata = await _layoutPhotos(
tester,
relativeGoldenFilePath,
sceneMetadata,
);
} else {
// Compare to existing goldens.
await _compareGoldens(
tester,
sceneMetadata,
relativeGoldenFilePath,
find.byType(GoldenSceneBounds),
photos,
SceneLayoutContent(
description: _description,
goldens: renderablePhotos,
),
_layout,
goldenBackground: _goldenBackground,
);
}

FtgLog.pipeline.finer("Done with golden generation/comparison");
FtgLog.pipeline.finer("Running momentary delay for render flakiness");
await tester.runAsync(() async {
// Without this delay, the screenshot loading is spotty. However, with
// this delay, we seem to always get screenshots displayed in the widget tree.
// FIXME: Root cause this render flakiness and see if we can fix it.
await Future.delayed(const Duration(milliseconds: 1));
});

await tester.pumpAndSettle();

final relativeGoldenFilePath = "$_relativeGoldenDirectory/$_fileName.png";
if (autoUpdateGoldenFiles) {
// Generate new goldens.
await _updateGoldenScene(
tester,
relativeGoldenFilePath,
sceneMetadata,
);
} else {
// Compare to existing goldens.
await _compareGoldens(
tester,
sceneMetadata,
relativeGoldenFilePath,
find.byType(GoldenSceneBounds),
);
}

FtgLog.pipeline.finer("Done with golden generation/comparison");
} finally {
tester.view.reset();
}
}

Future<GoldenSceneMetadata> _layoutPhotos(