
Runtime error while trying to run AutoML Vision Edge #6

Open
iamprakash13 opened this issue Nov 4, 2019 · 0 comments

iamprakash13 commented Nov 4, 2019

I followed all the instructions and tried to run the example from this repo, just replacing the bundled model with my custom model for image labeling. Text detection works fine. I am using the latest Flutter and the Android Q SDK.

Here is my log:
```
I/BufferQueueProducer(2878): [SurfaceTexture-0-2878-0](this:0xc4a7c000,id:0,api:1,p:2878,c:2878) queueBuffer: fps=9.97 dur=1003.41 max=102.14 min=99.24
E/AndroidRuntime( 2878): FATAL EXCEPTION: CameraBackground
E/AndroidRuntime( 2878): Process: com.example.ml_flr, PID: 2878
E/AndroidRuntime( 2878): java.lang.NullPointerException: Attempt to invoke virtual method 'com.google.android.gms.tasks.Task com.google.firebase.ml.vision.label.FirebaseVisionImageLabeler.processImage(com.google.firebase.ml.vision.common.FirebaseVisionImage)' on a null object reference
E/AndroidRuntime( 2878):     at io.flutter.plugins.firebaselivestreammlvision.LocalVisionEdgeDetector.handleDetection(LocalVisionEdgeDetector.java:48)
E/AndroidRuntime( 2878):     at io.flutter.plugins.firebaselivestreammlvision.FirebaseLivestreamMlVisionPlugin$Camera.processImage(FirebaseLivestreamMlVisionPlugin.java:539)
E/AndroidRuntime( 2878):     at io.flutter.plugins.firebaselivestreammlvision.FirebaseLivestreamMlVisionPlugin$Camera.access$1300(FirebaseLivestreamMlVisionPlugin.java:279)
E/AndroidRuntime( 2878):     at io.flutter.plugins.firebaselivestreammlvision.FirebaseLivestreamMlVisionPlugin$Camera$3.onImageAvailable(FirebaseLivestreamMlVisionPlugin.java:549)
E/AndroidRuntime( 2878):     at android.media.ImageReader$ListenerHandler.handleMessage(ImageReader.java:812)
E/AndroidRuntime( 2878):     at android.os.Handler.dispatchMessage(Handler.java:106)
E/AndroidRuntime( 2878):     at android.os.Looper.loop(Looper.java:164)
E/AndroidRuntime( 2878):     at android.os.HandlerThread.run(HandlerThread.java:65)
I/BufferQueueProducer( 2878): [SurfaceTexture-1-2878-1](this:0xbba7e000,id:2,api:4,p:485,c:2878) queueBuffer: slot 1 is dropped, handle=0xdb7a3760
I/BufferQueueProducer( 2878): [ImageReader-1440x1080f23m2-2878-0](this:0xc0d0c000,id:1,api:4,p:2878,c:2878) queueBuffer: slot 1 is dropped, handle=0xdb7a5d00
D/ViewRootImpl@a702665[MainActivity]( 2878): MSG_WINDOW_FOCUS_CHANGED 0
```

Code:

```dart
import 'package:firebase_livestream_ml_vision/firebase_livestream_ml_vision.dart';
import 'package:flutter/material.dart';

import 'detector_painters.dart';

void main() => runApp(MaterialApp(home: _MyHomePage()));

class _MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<_MyHomePage> {
  FirebaseVision _vision;
  dynamic _scanResults;
  Detector _currentDetector = Detector.text;

  @override
  void initState() {
    super.initState();
    _initializeCamera();
  }

  void _initializeCamera() async {
    List<FirebaseCameraDescription> cameras = await camerasAvailable();
    _vision = FirebaseVision(cameras[0], ResolutionSetting.high);
    _vision.initialize().then((_) {
      if (!mounted) {
        return;
      }
      setState(() {});
    });
  }

  Widget _buildResults() {
    const Text noResultsText = Text('No results!');

    CustomPainter painter;

    final Size imageSize = Size(
      _vision.value.previewSize.height,
      _vision.value.previewSize.width,
    );

    switch (_currentDetector) {
      case Detector.visionEdgeLabel:
        _vision
            .addVisionEdgeImageLabeler('flowers', ModelLocation.Local)
            .then((onValue) {
          onValue.listen((onData) {
            setState(() {
              _scanResults = onData;
              print("detected");
            });
          });
        });
        if (_scanResults is! List<VisionEdgeImageLabel>) return noResultsText;
        painter = VisionEdgeLabelDetectorPainter(imageSize, _scanResults);
        break;
      default:
        assert(_currentDetector == Detector.text ||
            _currentDetector == Detector.visionEdgeLabel);
        _vision.addTextRecognizer().then((onValue) {
          onValue.listen((onData) {
            setState(() {
              _scanResults = onData;
            });
          });
        });
        if (_scanResults is! VisionText) return noResultsText;
        painter = TextDetectorPainter(imageSize, _scanResults);
    }

    return CustomPaint(
      painter: painter,
    );
  }

  Widget _buildImage() {
    return Container(
      constraints: const BoxConstraints.expand(),
      child: _vision == null
          ? const Center(
              child: Text(
                'Initializing Camera...',
                style: TextStyle(
                  color: Colors.green,
                  fontSize: 30.0,
                ),
              ),
            )
          : Stack(
              fit: StackFit.expand,
              children: <Widget>[
                FirebaseCameraPreview(_vision),
                _buildResults(),
              ],
            ),
    );
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('ML Vision Example'),
        actions: <Widget>[
          PopupMenuButton(
            onSelected: (result) {
              _currentDetector = result;
            },
            itemBuilder: (BuildContext context) => <PopupMenuEntry>[
              /* const PopupMenuItem(
                child: Text('Detect Barcode'),
                value: Detector.barcode,
              ),
              const PopupMenuItem(
                child: Text('Detect Face'),
                value: Detector.face,
              ),
              const PopupMenuItem(
                child: Text('Detect Label'),
                value: Detector.label,
              ),
              const PopupMenuItem(
                child: Text('Detect Cloud Label'),
                value: Detector.cloudLabel,
              ),
              const PopupMenuItem(
                child: Text('Detect Cloud Text'),
                value: Detector.cloudText,
              ), */
              const PopupMenuItem(
                child: Text('Detect Text'),
                value: Detector.text,
              ),
              const PopupMenuItem(
                child: Text('Detect AutoML Vision Label'),
                value: Detector.visionEdgeLabel,
              ),
            ],
          ),
        ],
      ),
      body: _buildImage(),
    );
  }

  @override
  void dispose() {
    _vision.dispose().then((_) {
      _vision.barcodeDetector.close();
      _vision.faceDetector.close();
      _vision.localImageLabeler.close();
      _vision.cloudImageLabeler.close();
      _vision.textRecognizer.close();
      _vision.visionEdgeImageLabeler.close();
    });

    _currentDetector = null;
    super.dispose();
  }
}
```
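
For what it's worth from the trace: the NPE fires inside `LocalVisionEdgeDetector.handleDetection`, meaning `processImage` is called while the native `FirebaseVisionImageLabeler` is still null. In the code above the labeler is registered asynchronously from `_buildResults()`, so early camera frames can reach the detector before registration completes. Below is a rough sketch of one possible workaround, not a verified fix, assuming `addVisionEdgeImageLabeler` resolves only after the native labeler has been created (I have not confirmed this against the plugin internals):

```dart
// Sketch only: register the AutoML labeler once during initialization and
// await it, so no camera frame reaches the native detector while the labeler
// is still null. Reuses only calls already present in the example above.
void _initializeCamera() async {
  List<FirebaseCameraDescription> cameras = await camerasAvailable();
  _vision = FirebaseVision(cameras[0], ResolutionSetting.high);
  await _vision.initialize();

  // Register up front instead of inside _buildResults(), which runs on every
  // rebuild and re-registers the detector each time.
  final labelStream =
      await _vision.addVisionEdgeImageLabeler('flowers', ModelLocation.Local);
  labelStream.listen((onData) {
    if (!mounted) return;
    setState(() => _scanResults = onData);
  });

  if (mounted) setState(() {});
}
```

If the crash persists even with registration awaited up front, the 'flowers' model itself may not be loading on the device, which would also leave the native labeler null.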
