
Short Name

Upload an image to IBM Cloud, analyze it, and receive status alerts

Short Description

Build an IoT project with IBM Cloud Functions (serverless), Visual Recognition, Node-RED, and Node.js, along with the Watson IoT Platform

Offering Type

  • Emerging Technologies

Introduction

Build a bundle of apps that insert images into a Cloudant database, analyze them, and, based on the analysis, trigger an alert indicating whether or not there is a danger.

Authors

Code

  • Please let us know what you think about this project. Comment on it, open issues, give us suggestions, or reach out to us.

Demo

Video

Overview

Industrial or high-tech maintenance companies usually start with a concept for analyzing images and informing the responsible person that an action should be taken. The application setup in this tutorial does exactly that.

This tutorial avoids the need for a real camera by using an application that uploads and inserts an image into the Cloudant database in the form of binary events. The app acts like a real device.

You can run this app, or a similar one, on devices such as a Raspberry Pi to upload the captured images.
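As a minimal illustration of that upload step, the Node.js sketch below stores a local image in Cloudant as a binary attachment, the way such a device app could. It assumes the @cloudant/cloudant client, a database named images, and a file fire.jpg; the actual viz-send-image-app code and its credentials.json fields may differ.

```javascript
// Sketch: upload a local image to Cloudant as a binary attachment, the way a
// camera-equipped device would. The database name "images" and file "fire.jpg"
// are illustrative assumptions, not the repo's exact code.
const Cloudant = require('@cloudant/cloudant');
const fs = require('fs');

const cloudant = Cloudant({ url: process.env.CLOUDANT_URL }); // e.g. https://user:pass@host
const db = cloudant.db.use('images');

const image = fs.readFileSync('fire.jpg');
const docId = 'image-' + Date.now();

// Store the document and its image attachment in one multipart request.
db.multipart.insert(
  { type: 'capture', capturedAt: new Date().toISOString() },
  [{ name: 'image.jpg', data: image, content_type: 'image/jpeg' }],
  docId,
  (err, body) => {
    if (err) {
      return console.error('Upload failed:', err);
    }
    console.log('Uploaded', body.id, 'rev', body.rev);
  }
);
```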

This tutorial implies a real-world architecture design. The IBM Cloud Functions action makes a REST call to the Watson Visual Recognition service and converts the binaries into JSON events. In turn, these events are sent to the IoT Platform as processed image data under a registered gateway.

This processed data has already been evaluated by the Visual Recognition service, so the Node-RED flow captures the exceptions and triggers alerts based on that data.
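The sketch below illustrates that step as a single IBM Cloud Functions (OpenWhisk) action in Node.js: it classifies an image URL with Watson Visual Recognition and returns a compact JSON event. The parameter names (imageUrl, vrApiKey) and the 0.6 alert threshold are assumptions for illustration, not the exact code in viz-openwhisk-functions.

```javascript
// Sketch of an OpenWhisk action: classify an image with Watson Visual Recognition
// and return a compact JSON event for the IoT Platform. Parameter names
// (imageUrl, vrApiKey) and the 0.6 threshold are illustrative assumptions.
const VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');

function main(params) {
  const visualRecognition = new VisualRecognitionV3({
    api_key: params.vrApiKey,
    version_date: '2016-05-20'
  });

  return new Promise((resolve, reject) => {
    visualRecognition.classify({ url: params.imageUrl }, (err, response) => {
      if (err) {
        return reject(err);
      }
      // Pick the top class of the first image as the event payload.
      const topClass = response.images[0].classifiers[0].classes[0];
      resolve({
        image: topClass.class,
        score: topClass.score,
        alert: topClass.score > 0.6 ? 'EMERGENCY ALERT!' : 'OK',
        time: new Date().toString()
      });
    });
  });
}

exports.main = main;
```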

Flow

(Architecture diagram)

Architecture

IMPORTANT: For more detailed step-by-step instructions, please visit the README.md page in the GitHub repo.

The diagram above presents six steps. It is best to proceed as follows:

  1. The viz-send-image-app folder can be executed locally or pushed to the cloud if you prefer

  2. Create a Node-RED package that includes Cloudant, IoT Platform and Visual Recognition services

  3. Create IBM Cloud Functions from the Catalog

  4. Copy/Paste your credentials from Cloudant, IoT Platform, Visual Recognition into credentials.cfg (in viz-openwhisk-functions) and credentials.json (in viz-send-image-app)

  5. Copy/Paste the JSON flow into your Node-RED editor

  6. Make sure that the ibmiot nodes in Node-RED have the correct IoT Platform information (a sketch of the flow's alert logic follows this list)
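As referenced in step 6, here is a rough sketch of the alert logic a Node-RED function node in this flow could apply after the ibmiot input node. The field names and the 0.6 threshold are assumptions; the flow JSON in the repo may structure this differently.

```javascript
// Node-RED function node (sketch): msg.payload is the JSON event read from
// the ibmiot input node. The 0.6 threshold and field names are assumptions.
var data = msg.payload.d || msg.payload;

if (data.score > 0.6) {
    msg.payload = {
        alert: 'EMERGENCY ALERT!',
        image: data.image,
        score: data.score,
        time: data.time
    };
    return msg;   // pass the alert downstream (e.g. to email/dashboard nodes)
}

return null;      // no alert for low-confidence classifications
```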

Make sure you can see the Internet of Things Platform, Visual Recognition, and Cloudant services available in your Bluemix account under Dashboard -> Connections.

It's worth mentioning that IBM Cloud Functions requires a setup of its own.

Included components

Featured technologies

  • IBM IoT Platform
  • Node-RED
  • JavaScript
  • IBM Cloud Functions (serverless)
  • Watson Visual Recognition classifiers

Blog post

Using IBM Cloud Functions to analyze an image for the Watson IoT Platform

  • The application takes images from devices, or uploads them from local image folders, to the Cloudant NoSQL database

  • The Cloudant database, in turn, receives the binary data and triggers an action on IBM Cloud Functions

  • IBM Cloud Functions Composer performs the Visual Recognition analysis and receives a response in JSON format

  • The response is sent to the IoT Platform under a registered device, which receives the analyzed image data.

  • The Node-RED flow keeps reading these events from this device on the IoT Platform and triggers alerts based on the image's features. For example, an event arrives on the topic iot-2/type/Device/id/motor1/evt/eventData/fmt/json with a payload such as:

  • image: fire
  • score: 0.679
  • alert: EMERGENCY ALERT!
  • time: Tue Oct 24 2017 01:20:49 GMT+0000 (UTC)
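To make that event format concrete, here is a minimal sketch of publishing such an event with the ibmiotf Node.js client while acting as the registered device motor1. The organization ID, device type, and auth token shown are placeholders.

```javascript
// Sketch: publish the analyzed-image event as device "motor1" using the
// ibmiotf client. Org, device type, and auth token are placeholder values.
const iotf = require('ibmiotf');

const deviceClient = new iotf.IotfDevice({
  org: 'your-org-id',          // 6-character organization ID (placeholder)
  type: 'Device',              // device type, as in the topic above
  id: 'motor1',
  'auth-method': 'token',
  'auth-token': 'your-device-token'
});

deviceClient.connect();

deviceClient.on('connect', () => {
  const event = {
    d: {
      image: 'fire',
      score: 0.679,
      alert: 'EMERGENCY ALERT!',
      time: new Date().toString()
    }
  };
  // Applications (e.g. the Node-RED ibmiot node) see this event on
  // iot-2/type/Device/id/motor1/evt/eventData/fmt/json
  deviceClient.publish('eventData', 'json', JSON.stringify(event));
});
```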

For more detailed instructions, visit our project at: README.md

Related links
