chore(release)
faithoflifedev committed Aug 14, 2024
1 parent 3d540fb commit 429f42f
Showing 53 changed files with 1,436 additions and 254 deletions.
8 changes: 8 additions & 0 deletions packages/google_vision/CHANGELOG.md
@@ -1,5 +1,13 @@
# Changelog

## 1.3.0

* custom headers with API Key authentication Issue #23

## 1.2.1+2

* custom headers with API Key authentication Issue #23
45 changes: 25 additions & 20 deletions packages/google_vision/README.md
@@ -29,24 +29,17 @@ Please feel free to submit PRs for any additional helper methods, or report an [

## Recent Changes

### New for v1.3.0
- This version of the package supports both the `image` and `file` annotation APIs for Google Vision; previous versions of the package supported only the `image` API.
- A number of methods and classes have been **deprecated** in this version. All the provided examples still work without any changes, so the changes in this package should not cause any issues for existing code.
- The `file` functionality added in this release allows for the annotation of file formats that have pages or frames, specifically `pdf`, `tiff`, and `gif`. Google Vision allows annotation of up to 5 pages/frames per file.
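
The new `file` API can be sketched as follows (class and method names are taken from this release's examples; the credentials file and PDF path are placeholders you'd replace with your own):

```dart
import 'package:google_vision/google_vision.dart';

void main() async {
  // Authenticate with a service-account JSON file (placeholder name).
  final googleVision =
      await GoogleVision.withJwtFile('service_credentials.json');

  // Annotate selected pages of a PDF; Google Vision allows up to
  // 5 pages/frames per request.
  final responses = await googleVision.file.documentTextDetection(
    InputConfig.fromFilePath('sample_image/allswell.pdf'),
    pages: [1],
  );

  for (var response in responses) {
    print('pages processed: ${response.totalPages}');
  }
}
```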

### New for v1.2.0
- helper methods that simplify any single-feature detection, so a simple face detection can be performed with the `faceDetection(JsonImage jsonImage)` method; see the table below.

### New for v1.0.8
- web entities and pages detection ([https://cloud.google.com/vision/docs/detecting-web](https://cloud.google.com/vision/docs/detecting-web)), which provides URLs of web pages that match the specified image

### New for v1.0.7

[JLuisRojas](https://github.com/JLuisRojas) has provided code for:
- detect text in images
- detect handwriting in images

In addition support for the following has also been added:
- detect crop hints
- detect image properties
- detect landmarks
- detect logos

## Getting Started

### pubspec.yaml
@@ -56,32 +49,44 @@ To use this package, add the dependency to your `pubspec.yaml` file:
```yaml
dependencies:
...
google_vision: ^1.2.1+2
google_vision: ^1.3.0
```
### Obtaining Authorization Credentials
### Obtaining Authentication/Authorization Credentials
[Authenticating to the Cloud Vision API](https://cloud.google.com/vision/product-search/docs/auth) requires a JSON file with the JWT token information, which you can obtain by [creating a service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating_a_service_account) in the API console.
[Authenticating to the Cloud Vision API](https://cloud.google.com/vision/product-search/docs/auth) can be done with one of two methods:
- The first method requires a JSON file with the JWT token information, which you can obtain by [creating a service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating_a_service_account) in the API console.
- The second method requires an [API key](https://console.cloud.google.com/apis/credentials) to be created.
Both of the authorization/authentication methods listed above assume that you already have a Google account, have created a Google Cloud project, and have enabled the Cloud Vision API in the Google API library.
### Usage of the Cloud Vision API
```dart
final googleVision =
    await GoogleVision.withJwtFile('service_credentials.json');
final googleVision = await GoogleVision.withApiKey(
  Platform.environment['GOOGLE_VISION_API_KEY'] ?? '[YOUR API KEY]',
  // additionalHeaders: {'com.xxx.xxx': 'X-Ios-Bundle-Identifier'},
);

print('checking...');

final faceAnnotationResponses = await googleVision.faceDetection(
    JsonImage.fromFilePath('sample_image/young-man-smiling-and-thumbs-up.jpg'));
final faceAnnotationResponses = await googleVision.image.faceDetection(
    JsonImage.fromGsUri(
        'gs://gvision-demo/young-man-smiling-and-thumbs-up.jpg'));

for (var faceAnnotation in faceAnnotationResponses) {
  print('Face - ${faceAnnotation.detectionConfidence}');

  print('Joy - ${faceAnnotation.enumJoyLikelihood}');
}

// Output:
// Face - 0.9609375
// Joy - Likelihood.UNLIKELY

print('done.');
```
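
As a complement to the API-key flow above, a sketch of the service-account (JWT) flow, using the `labelDetection` helper from this package's examples (the credentials file name is a placeholder for your own service-account JSON):

```dart
import 'package:google_vision/google_vision.dart';

void main() async {
  // Authenticate with a service-account JWT file instead of an API key.
  final googleVision =
      await GoogleVision.withJwtFile('service_credentials.json');

  // Label detection against a publicly hosted sample image.
  final entityAnnotations = await googleVision.image.labelDetection(
      JsonImage.fromGsUri(
          'gs://cloud-samples-data/vision/label/setagaya.jpeg'));

  for (var entityAnnotation in entityAnnotations) {
    print(entityAnnotation);
  }
}
```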

## New Helper Methods

| <div style="width:420px">**Method Signature** | **Description** |
@@ -103,7 +108,7 @@ print('done.');

For a quick intro into the use of Google Vision in a Flutter app, take a look at the [`google_vision_flutter`](https://github.com/faithoflifedev/google_vision_workspace/tree/main/packages/google_vision_flutter) package and the [example](https://github.com/faithoflifedev/google_vision_workspace/tree/main/packages/google_vision_flutter/example) folder of the project's GitHub repository.

If Flutter specific Google Vision Widget doesn't suite your requirements, then to work with Flutter it's usually necessary to convert an object that is presented as an `Asset` or a `Stream` into a `File` for use by this `google_vision` package. This [StackOverflow](https://stackoverflow.com/questions/55295593/how-to-convert-asset-image-to-file) post gives an idea on how this can be accomplished. A similar process can be used for any `Stream` of data that represents an image supported by `google_vision`. Essentially, the Google Vision REST API needs to be able to convert the image data into its Base64 representation before submitting it to the Google server and having the `bytedata` available in the code makes this easier.
If the Flutter-specific Google Vision widget doesn't meet your requirements, then to work with Flutter it's usually necessary to convert an object that is presented as an `Asset` or a `Stream` into a `File` for use by this `google_vision` package. This [StackOverflow](https://stackoverflow.com/questions/55295593/how-to-convert-asset-image-to-file) post gives an idea of how this can be accomplished. A similar process can be used for any `Stream` of data that represents an image supported by `google_vision`. Essentially, the Google Vision REST API needs the image data converted into its Base64 representation before it is submitted to the Google server, and having the byte data available in the code makes this easier.
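
The Base64 step itself needs nothing beyond `dart:convert`; a minimal sketch (the byte values here are just the PNG magic-number prefix, standing in for real image bytes from an Asset or Stream):

```dart
import 'dart:convert';
import 'dart:typed_data';

// The Vision REST API ultimately needs the image content as a Base64
// string, so once the bytes are in hand the encoding is a one-liner.
String toBase64Image(Uint8List bytes) => base64Encode(bytes);

void main() {
  // Placeholder bytes: the 4-byte PNG signature prefix.
  final fakeImageBytes = Uint8List.fromList([137, 80, 78, 71]);

  print(toBase64Image(fakeImageBytes)); // iVBORw==
}
```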

## Vision cli (google_vision at the command prompt)

5 changes: 5 additions & 0 deletions packages/google_vision/bin/vision.dart
@@ -5,6 +5,11 @@ import 'package:universal_io/io.dart';
void main(List<String> arguments) async {
  CommandRunner('vision',
      'A command line interface for making API requests to the Google Vision.')
    ..argParser.addOption(
      'pages',
      abbr: 'p',
      valueHelp: 'comma delimited list of pages to process (max 5)',
    )
    ..argParser.addOption('credential-file',
        defaultsTo: '${Util.userHome}/.vision/credentials.json',
        valueHelp: 'credentials file path')
38 changes: 38 additions & 0 deletions packages/google_vision/example/document_text_detection_file.dart
@@ -0,0 +1,38 @@
import 'package:google_vision/google_vision.dart';

void main() async {
  final googleVision =
      await GoogleVision.withJwtFile('service_credentials.json');

  print('checking...');

  final annotateFileResponses = await googleVision.file.documentTextDetection(
    InputConfig.fromFilePath('sample_image/allswell.pdf'),
    pages: [1],
  );

  String text = '';

  for (var annotateFileResponse in annotateFileResponses) {
    if (annotateFileResponse.error != null) {
      print('error');
    } else {
      print('pages: ${annotateFileResponse.totalPages}');
    }

    for (var annotateImageResponse in annotateFileResponse.responses!) {
      annotateImageResponse.fullTextAnnotation!.pages.first.blocks
          ?.forEach((block) {
        block.paragraphs?.forEach((paragraph) {
          paragraph.words?.forEach((word) {
            var segment = word.symbols?.map((e) => e.text).join();

            text += (segment ?? '') + ' ';
          });
        });
      });
    }
  }

  print(text);
}
6 changes: 3 additions & 3 deletions packages/google_vision/example/face_detection.dart
@@ -3,16 +3,16 @@ import 'package:universal_io/io.dart';

void main() async {
// final googleVision =
// await GoogleVision.withJwtFile('example/service_credentials.json');
// await GoogleVision.withJwtFile('service_credentials.json');

final googleVision = await GoogleVision.withApiKey(
Platform.environment['GOOGLE_VISION_API_KEY'] ?? '[YOUR API KEY]',
additionalHeaders: {'com.xxx.xxx': 'X-Ios-Bundle-Identifier'},
// additionalHeaders: {'com.xxx.xxx': 'X-Ios-Bundle-Identifier'},
);

print('checking...');

final faceAnnotationResponses = await googleVision.faceDetection(
final faceAnnotationResponses = await googleVision.image.faceDetection(
JsonImage.fromGsUri(
'gs://gvision-demo/young-man-smiling-and-thumbs-up.jpg'));

2 changes: 1 addition & 1 deletion packages/google_vision/example/label_detection.dart
@@ -6,7 +6,7 @@ void main() async {

print('checking...');

final entityAnnotations = await googleVision.labelDetection(
final entityAnnotations = await googleVision.image.labelDetection(
JsonImage.fromGsUri(
'gs://cloud-samples-data/vision/label/setagaya.jpeg'));

2 changes: 1 addition & 1 deletion packages/google_vision/example/landmark_detection.dart
@@ -6,7 +6,7 @@ void main() async {

print('checking...');

final landmarkAnnotationsResponse = await googleVision
final landmarkAnnotationsResponse = await googleVision.image
.labelDetection(JsonImage.fromFilePath('sample_image/cn_tower.jpg'));

for (var landmarkAnnotation in landmarkAnnotationsResponse) {
2 changes: 1 addition & 1 deletion packages/google_vision/example/logo_detection.dart
@@ -6,7 +6,7 @@ void main() async {

print('checking...');

final logoAnnotationsResponses = await googleVision
final logoAnnotationsResponses = await googleVision.image
.logoDetection(JsonImage.fromFilePath('sample_image/logo.png'));

for (var logoAnnotation in logoAnnotationsResponses) {
3 changes: 1 addition & 2 deletions packages/google_vision/example/pubspec.yaml
@@ -1,4 +1,4 @@
name: google_vision_example
name: google_vision_examples
description: examples for google_vision
version: 0.0.1
repository: https://github.com/faithoflifedev/google_vision
@@ -11,6 +11,5 @@ environment:

dependencies:
pcanvas: ^1.1.0
universal_io: ^2.2.2
google_vision:
path: ../
Binary file not shown.
4 changes: 2 additions & 2 deletions packages/google_vision/example/web_detection.dart
@@ -10,9 +10,9 @@ void main() async {
print('checking...');

AnnotatedResponses annotatedResponses = await googleVision.annotate(
requests: AnnotationRequests(
requests: AnnotateImageRequests(
requests: [
AnnotationRequest(
AnnotateImageRequest(
jsonImage: JsonImage(byteBuffer: imageFile.buffer),
features: [
Feature(maxResults: 10, type: AnnotationType.webDetection)
18 changes: 14 additions & 4 deletions packages/google_vision/lib/google_vision.dart
@@ -4,6 +4,9 @@
library google_vision;

export 'src/google_vision_base.dart';
export 'src/google_vision_file.dart';
export 'src/google_vision_image.dart';
export 'src/annotate_json_serializable.dart';
export 'src/token_generator.dart';

export 'src/cmd/vision_crop_hint_command.dart';
@@ -14,10 +17,14 @@ export 'src/cmd/vision_safe_search_command.dart';
export 'src/cmd/vision_score_command.dart';
export 'src/cmd/vision_version_command.dart';

export 'src/model/annotate_file_request.dart';
export 'src/model/annotate_file_response.dart';
export 'src/model/annotate_image_response.dart';
export 'src/model/annotated_responses.dart';
export 'src/model/annotation_request.dart';
export 'src/model/annotation_requests.dart';
export 'src/model/batch_annotate_images_response.dart';
export 'src/model/annotate_image_request.dart';
// TODO: remove this deprecated class in the next version
export 'src/model/annotate_image_requests.dart';
export 'src/model/batch_annotate_files_response.dart';
export 'src/model/block.dart';
export 'src/model/bounding_poly.dart';
export 'src/model/color_info.dart';
@@ -36,6 +43,7 @@ export 'src/model/full_text_annotation.dart';
export 'src/model/image_annotation_context.dart';
export 'src/model/image_context.dart';
export 'src/model/image_properties_annotation.dart';
export 'src/model/input_config.dart';
export 'src/model/json_image.dart';
export 'src/model/json_settings.dart';
export 'src/model/jwt_credentials.dart';
@@ -62,8 +70,10 @@ export 'src/model/web_detection.dart';
export 'src/model/web_detection_params.dart';
export 'src/model/word.dart';

export 'src/provider/files.dart';
export 'src/provider/images.dart';
export 'src/provider/oauth.dart';
export 'src/provider/vision.dart';

export 'src/util/logging_interceptors.dart';
export 'src/util/serializable_image.dart';
export 'src/util/util.dart';
2 changes: 1 addition & 1 deletion packages/google_vision/lib/meta.dart
@@ -6,4 +6,4 @@ library meta;
import 'dart:convert' show json;

final pubSpec = json.decode(
'{"name":"google_vision","version":"1.2.1+2","homepage":"https://github.com/faithoflifedev/google_vision/tree/main/packages/google_vision","environment":{"sdk":">=3.2.0 <4.0.0"},"description":"Allows you to add Google Visions image labeling, face, logo, and landmark detection, OCR, and detection of explicit content, into cross platform applications.","dependencies":{"args":"^2.5.0","collection":"^1.18.0","crypto_keys_plus":"^0.4.0","dio":"^5.4.3+1","http":"^1.2.1","image":"^4.1.7","jose_plus":"^0.4.6","json_annotation":"^4.9.0","retrofit":"^4.1.0","universal_io":"^2.2.2"},"dev_dependencies":{"build_runner":"^2.4.9","grinder":"^0.9.5","json_serializable":"^6.8.0","lints":"^4.0.0","publish_tools":"^1.0.0+4","retrofit_generator":"^8.1.0","test":"^1.25.5"},"executables":{"vision":""},"repository":"https://github.com/faithoflifedev/google_vision"}');
'{"name":"google_vision","version":"1.3.0","homepage":"https://github.com/faithoflifedev/google_vision/tree/main/packages/google_vision","environment":{"sdk":">=3.2.0 <4.0.0"},"description":"Allows you to add Google Visions image labeling, face, logo, and landmark detection, OCR, and detection of explicit content, into cross platform applications.","dependencies":{"args":"^2.5.0","collection":"^1.18.0","crypto_keys_plus":"^0.4.0","dio":"^5.5.0+1","http":"^1.2.2","image":"^4.1.7","jose_plus":"^0.4.6","json_annotation":"^4.9.0","loggy":"^2.0.3","mime":"^1.0.5","retrofit":"^4.1.0","universal_io":"^2.2.2"},"dev_dependencies":{"build_runner":"^2.4.9","grinder":"^0.9.5","json_serializable":"^6.8.0","lints":"^4.0.0","publish_tools":"^1.0.0+4","retrofit_generator":"^8.1.2"},"executables":{"vision":""},"repository":"https://github.com/faithoflifedev/google_vision"}');
@@ -0,0 +1,3 @@
abstract class AnnotateJsonSerializable {
  Map<String, dynamic> toJson();
}
19 changes: 13 additions & 6 deletions packages/google_vision/lib/src/cmd/vision_crop_hint_command.dart
@@ -29,7 +29,7 @@ class VisionCropHintCommand extends VisionHelper {
globalResults!['credential-file'],
'https://www.googleapis.com/auth/cloud-vision');

final imageFile = File(argResults!['image-file']).readAsBytesSync();
final imageFile = File(argResults!['image-file']);

final aspectRatios = argResults?['aspect-ratios'] == null
? null
@@ -39,9 +39,9 @@
.map((aspectRatio) => double.parse(aspectRatio))
.toList();

final requests = AnnotationRequests(requests: [
AnnotationRequest(
jsonImage: JsonImage(byteBuffer: imageFile.buffer),
final requests = AnnotateImageRequests(requests: [
AnnotateImageRequest(
jsonImage: JsonImage(byteBuffer: imageFile.readAsBytesSync().buffer),
features: [Feature(type: AnnotationType.cropHints)],
imageContext: aspectRatios != null
? ImageContext(
@@ -51,8 +51,15 @@
)
]);

final annotatedResponses = await googleVision.annotate(requests: requests);
if (pages != null) {
final annotatedResponses = await annotateFile(imageFile, pages: pages!);

print(annotatedResponses.responses);
print(annotatedResponses.responses);
} else {
final annotatedResponses =
await googleVision.annotate(requests: requests);

print(annotatedResponses.responses);
}
}
}
13 changes: 10 additions & 3 deletions packages/google_vision/lib/src/cmd/vision_detect_command.dart
@@ -32,10 +32,17 @@

@override
void run() async {
final imageFile = File(argResults!['image-file']).readAsBytesSync();
final imageFile = File(argResults!['image-file']);

final annotatedResponses = await annotate(imageFile.buffer);
if (pages != null) {
final annotatedResponses = await annotateFile(imageFile, pages: pages!);

print(annotatedResponses.responses);
print(annotatedResponses.responses);
} else {
final annotatedResponses =
await annotateImage(imageFile.readAsBytesSync().buffer);

print(annotatedResponses.responses);
}
}
}
