
Commit

fix readme
faithoflifedev committed Mar 17, 2024
1 parent 3e8eb41 commit 5583c31
Showing 5 changed files with 89 additions and 59 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/flutter.yaml
@@ -39,4 +39,4 @@ jobs:

- name: Analyze project source
working-directory: ./packages/google_vision_flutter
run: dart analyze .
run: dart analyze . --no-fatal-warnings
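
Presumably, `--no-fatal-warnings` stops analyzer warnings from failing this CI step, so only analysis errors break the workflow.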
8 changes: 4 additions & 4 deletions packages/google_vision/tool/README.md
@@ -91,12 +91,12 @@ print('done.');
| Future<FullTextAnnotation?> **documentTextDetection**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | Extracts text from an image (or file); the response is optimized for dense text and documents, and includes page, block, paragraph, word, and break information. A specific use of documentTextDetection is to detect handwriting in an image. |
| Future<List<FaceAnnotation>> **faceDetection**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | Face Detection detects multiple faces within an image along with the associated key facial attributes such as emotional state or wearing headwear. |
| Future<ImagePropertiesAnnotation?> **imageProperties**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | The Image Properties feature detects general attributes of the image, such as dominant color. |
| Future<List<EntityAnnotation>> **labelDetection**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | Labels can identify general objects, locations, activities, animal species, products, and more. Labels are returned in English only. |
| Future<List<EntityAnnotation>> **landmarkDetection**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | Landmark Detection detects popular natural and human-made structures within an image. |
| Future<List\<EntityAnnotation>> **labelDetection**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | Labels can identify general objects, locations, activities, animal species, products, and more. Labels are returned in English only. |
| Future<List\<EntityAnnotation>> **landmarkDetection**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | Landmark Detection detects popular natural and human-made structures within an image. |
| Future<List<EntityAnnotation>> **logoDetection**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | Logo Detection detects popular product logos within an image. |
| Future<List<LocalizedObjectAnnotation>> **objectLocalization**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | The Vision API can detect and extract multiple objects in an image with Object Localization. Object localization identifies multiple objects in an image and provides a LocalizedObjectAnnotation for each object in the image. Each LocalizedObjectAnnotation identifies information about the object, the position of the object, and rectangular bounds for the region of the image that contains the object. Object localization identifies both significant and less-prominent objects in an image. |
| Future<List\<LocalizedObjectAnnotation>> **objectLocalization**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | The Vision API can detect and extract multiple objects in an image with Object Localization. Object localization identifies multiple objects in an image and provides a LocalizedObjectAnnotation for each object in the image. Each LocalizedObjectAnnotation identifies information about the object, the position of the object, and rectangular bounds for the region of the image that contains the object. Object localization identifies both significant and less-prominent objects in an image. |
| Future<SafeSearchAnnotation?> **safeSearchDetection**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | SafeSearch Detection detects explicit content such as adult content or violent content within an image. This feature uses five categories (adult, spoof, medical, violence, and racy) and returns the likelihood that each is present in a given image. See the SafeSearchAnnotation page for details on these fields. |
| Future<List<EntityAnnotation>> **textDetection**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | Detects and extracts text from any image. For example, a photograph might contain a street sign or traffic sign. The JSON includes the entire extracted string, as well as individual words, and their bounding boxes. |
| Future<List\<EntityAnnotation>> **textDetection**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | Detects and extracts text from any image. For example, a photograph might contain a street sign or traffic sign. The JSON includes the entire extracted string, as well as individual words, and their bounding boxes. |
| Future<WebDetection?> **webDetection**(<br/>&nbsp;&nbsp;JsonImage jsonImage, <br/>&nbsp;&nbsp;{int maxResults = 10,}<br/>) | Web Detection detects Web references to an image. |
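
For reference, here is a minimal sketch of calling one of the helpers in the table from plain Dart. The `GoogleVision.withJwtFile` constructor, the `JsonImage.fromFilePath` factory, and the `detectionConfidence` field are assumptions drawn from the package and Vision API docs, so verify them against the current API before copying.

```dart
import 'package:google_vision/google_vision.dart';

void main() async {
  // Authenticate with a service-account key file (constructor name assumed;
  // check the google_vision docs for the exact auth helpers).
  final googleVision =
      await GoogleVision.withJwtFile('service_credentials.json');

  // Call the faceDetection helper from the table above on a local image.
  // JsonImage.fromFilePath is an assumed convenience factory; substitute
  // whatever JsonImage constructor the package actually provides.
  final faceAnnotations = await googleVision.faceDetection(
    JsonImage.fromFilePath('sample_image/young-man-smiling.jpg'),
    maxResults: 5,
  );

  for (final face in faceAnnotations) {
    // detectionConfidence mirrors the Vision API FaceAnnotation field.
    print('face found, confidence: ${face.detectionConfidence}');
  }

  print('done.');
}
```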

## Usage with Flutter
19 changes: 9 additions & 10 deletions packages/google_vision_flutter/example/lib/face_detection.dart
@@ -51,16 +51,15 @@ class _MyHomePageState extends State<FaceDetection> {
'assets/service_credentials.json'),
imageProvider: _processImage.image,
// builder with a block body (removed):
builder: (BuildContext context,
    List<FaceAnnotation>? faceAnnotations,
    ImageDetail imageDetail) {
  return CustomPaint(
    foregroundPainter: AnnotationPainter(
      faceAnnotations: faceAnnotations,
      imageDetail: imageDetail,
    ),
    child: Image(image: _processImage.image),
  );
},

// builder changed to an expression body (added):
builder: (BuildContext context,
        List<FaceAnnotation>? faceAnnotations,
        ImageDetail imageDetail) =>
    CustomPaint(
      foregroundPainter: AnnotationPainter(
        faceAnnotations: faceAnnotations,
        imageDetail: imageDetail,
      ),
      child: Image(image: _processImage.image),
    ),
),
)
],
4 changes: 2 additions & 2 deletions packages/google_vision_flutter/pubspec.yaml
@@ -1,6 +1,6 @@
name: google_vision_flutter
description: Add Google Vision's image labeling, face, logo, and landmark detection into your Flutter applications.
version: 1.1.0
version: 1.2.0
repository: https://github.com/faithoflifedev/google_vision
homepage: https://github.com/faithoflifedev/google_vision/tree/main/packages/google_vision_flutter

@@ -11,7 +11,7 @@ environment:
flutter: ">=3.16.0"

dependencies:
google_vision: ^1.2.0+1
google_vision: ^1.2.0+4
# google_vision:
# path: ../google_vision
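
The commented-out `path: ../google_vision` dependency presumably lets maintainers switch to the local workspace copy of `google_vision` during development instead of the published pub.dev release.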

115 changes: 73 additions & 42 deletions packages/google_vision_flutter/tool/README.md
@@ -1,10 +1,10 @@
# Google Vision Images Flutter Widget

[![pub package](https://img.shields.io/pub/v/google_vision.svg)](https://pub.dartlang.org/packages/google_vision)
[![pub package](https://img.shields.io/pub/v/google_vision_flutter.svg)](https://pub.dartlang.org/packages/google_vision_flutter)

Native [Dart](https://dart.dev/) package that integrates Google Vision features, including image labeling, face, logo, and landmark detection into Flutter applications.

[![Build Status](https://github.com/faithoflifedev/google_vision/workflows/Dart/badge.svg)](https://github.com/faithoflifedev/google_vision/actions) [![github last commit](https://shields.io/github/last-commit/faithoflifedev/google_vision)](https://shields.io/github/last-commit/faithoflifedev/google_vision) [![github build](https://img.shields.io/github/actions/workflow/status/faithoflifedev/google_vision/dart.yml?branch=main)](https://shields.io/github/workflow/status/faithoflifedev/google_vision/Dart) [![github issues](https://shields.io/github/issues/faithoflifedev/google_vision)](https://shields.io/github/issues/faithoflifedev/google_vision)
[![Build Status](https://github.com/faithoflifedev/google_vision/workflows/Dart/badge.svg)](https://github.com/faithoflifedev/google_vision/actions) [![github last commit](https://shields.io/github/last-commit/faithoflifedev/google_vision)](https://shields.io/github/last-commit/faithoflifedev/google_vision) [![github build](https://img.shields.io/github/actions/workflow/status/faithoflifedev/google_vision_workspace/flutter.yaml?branch=main)](https://shields.io/github/workflow/status/faithoflifedev/google_vision/Dart) [![github issues](https://shields.io/github/issues/faithoflifedev/google_vision)](https://shields.io/github/issues/faithoflifedev/google_vision)

Please feel free to submit PRs for any additional helper methods, or report an [issue](https://github.com/faithoflifedev/google_vision/issues) for a missing helper method, and I'll add it if I have time available.

@@ -33,49 +33,80 @@ dependencies:
### Usage of the GoogleVisionBuilder Widget
See the [example app](https://github.com/faithoflifedev/google_vision_workspace/tree/main/packages/google_vision_flutter/example) for the full code.
```dart
// Example removed by this commit:
GoogleVisionBuilder(
  // use the underlying `google_vision` package to initialize and authenticate for future API calls
  googleVision: GoogleVision.withAsset('assets/service_credentials.json'),
  // the image that will be processed by the Google Vision API
  imageProvider: _processImage.image,
  // the features to detect in the image
  features: [
    Feature(
      maxResults: 10,
      type: AnnotationType.faceDetection,
    ),
    Feature(
      maxResults: 10,
      type: AnnotationType.objectLocalization,
    ),
  ],
  builder: (
    BuildContext context,
    AsyncSnapshot<AnnotatedResponses> snapshot,
    ImageDetail? imageDetail,
  ) {
    if (snapshot.hasError) {
      return Text('Error: ${snapshot.error}');
    }

    if (snapshot.hasData) {
      // custom code that will write annotation text and boxes around detected objects (see example)
      return CustomPaint(
        foregroundPainter: AnnotationPainter(
          annotatedResponses: snapshot.data!,
          imageDetail: imageDetail!,
        ),
        child: Image(image: _processImage.image),
      );
    }

    return const Center(child: CircularProgressIndicator());
  },
)

// Example added by this commit:
import 'package:flutter/material.dart';
import 'package:google_vision_flutter/google_vision_flutter.dart';

class FaceDetection extends StatefulWidget {
  const FaceDetection({super.key, required this.title});

  final String title;

  @override
  State<FaceDetection> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<FaceDetection> {
  final _processImage = Image.asset(
    'assets/young-man-smiling.jpg',
    fit: BoxFit.fitWidth,
  );

  @override
  Widget build(BuildContext context) => SafeArea(
        child: Scaffold(
          appBar: AppBar(
            leading: IconButton(
              icon: const Icon(Icons.arrow_back, color: Colors.black),
              onPressed: () => Navigator.of(context).pop(),
            ),
            title: Text(widget.title),
          ),
          body: SingleChildScrollView(
            child: Column(
              mainAxisAlignment: MainAxisAlignment.start,
              children: <Widget>[
                const Padding(
                  padding: EdgeInsets.all(8.0),
                  child: Text('assets/young-man-smiling.jpg'),
                ),
                Padding(
                  padding: const EdgeInsets.all(8.0),
                  child: _processImage,
                ),
                const Padding(
                  padding: EdgeInsets.all(8.0),
                  child: Text(
                    'Processed image will appear below:',
                  ),
                ),
                Padding(
                  padding: const EdgeInsets.all(8.0),
                  child: GoogleVisionBuilder.faceDetection(
                    googleVision: GoogleVision.withAsset(
                        'assets/service_credentials.json'),
                    imageProvider: _processImage.image,
                    builder: (BuildContext context,
                        List<FaceAnnotation>? faceAnnotations,
                        ImageDetail imageDetail) {
                      return CustomPaint(
                        foregroundPainter: AnnotationPainter(
                          faceAnnotations: faceAnnotations,
                          imageDetail: imageDetail,
                        ),
                        child: Image(image: _processImage.image),
                      );
                    },
                  ),
                )
              ],
            ),
          ),
        ),
      );
}
```
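
Note that `GoogleVision.withAsset` reads the service-account credentials from the Flutter asset bundle, so the JSON file presumably has to be declared under the `assets:` section of the app's `pubspec.yaml`.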

## Contributing
