This repository has been archived by the owner on May 21, 2019. It is now read-only.

Merge pull request #282 from watson-developer-cloud/updates
feat: Add semantic releases and IAM support
mamoonraja committed Jul 20, 2018
2 parents 72bd98b + d3b0154 commit b52bd4b
Showing 21 changed files with 9,521 additions and 4,138 deletions.
Binary file removed .env.enc
2 changes: 1 addition & 1 deletion .env.example
@@ -1,2 +1,2 @@
# Environment variables
-VISUAL_RECOGNITION_API_KEY=
+VISUAL_RECOGNITION_IAM_APIKEY=
15 changes: 7 additions & 8 deletions .travis.yml
@@ -1,18 +1,17 @@
language: node_js
dist: trusty
sudo: required
-node_js: 6
+node_js: 8
script: npm test
cache:
directories:
- node_modules
env:
global:
- BX_APP=visual-recognition-demo
- BX_API=https://api.ng.bluemix.net
- BX_ORGANIZATION=WatsonPlatformServices
- BX_SPACE=demos
- VISUAL_RECOGNITION_API_KEY="bogus key to let server spin up for offline tests"
before_install:
- openssl aes-256-cbc -K $encrypted_cca766a893ae_key -iv $encrypted_cca766a893ae_iv -in .env.enc -out .env -d
deploy:
- provider: script
skip_cleanup: true
script: npx semantic-release
on:
node: 8
60 changes: 54 additions & 6 deletions CONTRIBUTING.md
@@ -1,16 +1,64 @@
# Questions

-If you are having difficulties running the app or have a question about the service, please ask a question on [dW Answers](https://developer.ibm.com/answers/questions/ask/?topics=watson) or [Stack Overflow](http://stackoverflow.com/questions/ask?tags=ibm-watson).
+If you are having problems using the APIs or have a question about the IBM Watson Services, please ask a question on [dW Answers](https://developer.ibm.com/answers/questions/ask/?topics=watson) or [Stack Overflow](http://stackoverflow.com/questions/ask?tags=ibm-watson).

# Code

* Our style guide is based on [Google's](https://google.github.io/styleguide/jsguide.html); most of it is automatically enforced (and can be automatically applied with `npm run autofix`)
* Commits should follow the [Angular commit message guidelines](https://github.com/angular/angular/blob/master/CONTRIBUTING.md#-commit-message-guidelines). This is because our release tool uses this format for determining release versions and generating changelogs. To make this easier, we recommend using the [Commitizen CLI](https://github.com/commitizen/cz-cli) with the `cz-conventional-changelog` adapter.
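Since semantic-release (wired up in `.travis.yml` above) derives the next version from these commit headers, the mapping can be sketched as follows. This is a simplified, hypothetical model; the real rules, including `BREAKING CHANGE` footers that trigger major releases, live in the tool itself.

```javascript
// Hypothetical sketch of how a conventional-commit header maps to a
// semantic-release version bump. Simplified: real semantic-release also
// handles BREAKING CHANGE footers (major) and configurable release rules.
function bumpFor(header) {
  if (/^feat(\([^)]*\))?:/.test(header)) return 'minor';
  if (/^fix(\([^)]*\))?:/.test(header)) return 'patch';
  return 'none'; // docs:, chore:, refactor:, etc. release nothing by default
}

console.log(bumpFor('feat: Add semantic releases and IAM support')); // minor
```

For example, the `feat:` header on this very commit would cut a minor release.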

# Issues

-If you encounter an issue with this sample app, you are welcome to submit a [bug report](https://github.com/watson-developer-cloud/visual-recognition-nodejs/issues). Before that, please search for similar issues. It's possible somebody has encountered this issue already.
+If you encounter an issue with the Node.js library, you are welcome to submit a [bug report](https://github.com/watson-developer-cloud/visual-recognition-nodejs/issues). Before that, please search for similar issues. It's possible somebody has already encountered this issue.

# Pull Requests

-If you want to contribute to the repository, here's a quick guide:
+If you want to contribute to the repository, follow these steps:

1. Fork the repo.
-1. develop your code changes: `npm install -d`
-1. Commit your changes
-1. Push to your fork and submit a pull request
+2. Develop and test your code changes: `npm install -d && npm test`.
+3. Travis-CI will run the tests for all services once your changes are merged.
+4. Add a test for your changes. Only refactoring and documentation changes require no new tests.
+5. Make the test pass.
+6. Commit your changes.
+7. Push to your fork and submit a pull request.

# Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or

(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or

(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.

(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.

## Tests

Ideally, we'd like to see both unit and integration tests on each method.
(Unit tests do not actually connect to the Watson service, integration tests do.)

Out of the box, `npm test` runs linting and unit tests, but skips the integration tests,
because they require credentials.
62 changes: 33 additions & 29 deletions INSTRUCTIONS.md
@@ -6,11 +6,18 @@ You can see a version of this app that is already running [here](https://visual-

So let’s get started. The first thing to do is to build out the shell of our application in the IBM Cloud.

-## Creating an [IBM Cloud][bluemix] Account
+## Prerequisites

1. Sign up for an [IBM Cloud account](https://console.bluemix.net/registration/).
1. Download the [IBM Cloud CLI](https://console.bluemix.net/docs/cli/index.html#overview).
1. Create an instance of the Visual Recognition service and get your credentials:
- Go to the [Visual Recognition](https://console.bluemix.net/catalog/services/visual-recognition) page in the IBM Cloud Catalog.
- Log in to your IBM Cloud account.
- Click **Create**.
- Click **Show** to view the service credentials.
- Copy the `apikey` value.
- Copy the `url` value.

-1. Go to https://bluemix.net/
-2. Create an IBM Cloud account if required.
-3. Log in with your IBM ID (the ID used to create your IBM Cloud account)

**Note:** The confirmation email from IBM Cloud may take up to 1 hour.

@@ -37,25 +44,11 @@ So let’s get started. The first thing to do is to build out the shell of our a
NODE_ENV: production
```
1. Connect to the IBM Cloud by running the following commands in a terminal window:
```none
cf api https://api.ng.bluemix.net
cf login
```

-1. Create and retrieve service keys to access the [Visual Recognition][visual_recognition] service by running the following command:
+1. Copy the credentials from the prerequisites to the application by creating a `.env` file using this format:

```none
cf create-service watson_vision_combined free visual-recognition-service
cf create-service-key visual-recognition-service myKey
cf service-key visual-recognition-service myKey
```

-1. Provide the credentials from step 6 to the application by creating a `.env` file using this format:

```none
-VISUAL_RECOGNITION_API_KEY=<your-alchemy-api-key>
+VISUAL_RECOGNITION_IAM_APIKEY=<your-api-key>
+VISUAL_RECOGNITION_URL=<your-url>
```

1. Install the dependencies your application needs:
Expand All @@ -72,19 +65,30 @@ So let’s get started. The first thing to do is to build out the shell of our a

1. Test your application locally by going to: [http://localhost:3000/](http://localhost:3000/)

-## Deploying your application to the IBM Cloud
+## Deploying to IBM Cloud as a Cloud Foundry Application

-1. Push the updated application live by running the following command:
+1. Log in to IBM Cloud with the [IBM Cloud CLI](https://console.bluemix.net/docs/cli/index.html#overview).

```
-cf push
+ibmcloud login
```

-After completing the steps above, you are ready to test your application. Start a browser and enter the URL of your application.
1. Target a Cloud Foundry organization and space.

-    <your-application-name>.mybluemix.net
```
ibmcloud target --cf
```

1. Edit the *manifest.yml* file. Change the **name** field to something unique.
For example, `- name: my-app-name`.
1. Deploy the application:

```
ibmcloud app push
```

-You can also find your application name when you click on your application in the IBM Cloud.
1. View the application online at the app URL.
For example: https://my-app-name.mybluemix.net
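The `.env` file created above is loaded dotenv-style when the app starts, turning each `KEY=value` line into a `process.env` entry the SDK can read. A minimal, hypothetical sketch of what that loading amounts to (the app itself uses the `dotenv` package; this parser is illustrative only):

```javascript
// Illustrative-only .env parser: KEY=value lines become env entries.
// Variable names match the format shown above; values are placeholders.
function parseEnv(text) {
  var env = {};
  text.split(/\r?\n/).forEach(function (line) {
    var m = /^\s*([A-Za-z_][A-Za-z0-9_]*)=(.*)$/.exec(line);
    if (m) env[m[1]] = m[2].trim(); // '# comment' lines never match
  });
  return env;
}

var env = parseEnv(
  'VISUAL_RECOGNITION_IAM_APIKEY=example-key\n' +
  'VISUAL_RECOGNITION_URL=https://example.com/visual-recognition/api'
);
console.log(env.VISUAL_RECOGNITION_IAM_APIKEY); // example-key
```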

## Classifying Images in the Starter Application

3 changes: 0 additions & 3 deletions README.md
@@ -6,8 +6,6 @@ The [Visual Recognition][visual_recognition_service] Service uses deep learning

Give it a try! Click the button below to fork into IBM DevOps Services and deploy your own copy of this application on the IBM Cloud.

-[![Deploy to IBM Cloud](https://bluemix.net/deploy/button.png)](https://bluemix.net/deploy?repository=https://github.com/watson-developer-cloud/visual-recognition-nodejs)

## Getting Started

1. You need an IBM Cloud account. If you don't have one, [sign up][sign_up]. Experimental Watson Services are free to use.
@@ -145,7 +143,6 @@ training form with your existing classifier.
Find more open source projects on the [IBM Github Page](http://ibm.github.io/).


-[deploy_track_url]: https://github.com/cloudant-labs/deployment-tracker
[service_url]: https://www.ibm.com/watson/services/visual-recognition/
[cloud_foundry]: https://github.com/cloudfoundry/cli
[visual_recognition_service]: https://www.ibm.com/watson/services/visual-recognition/
20 changes: 7 additions & 13 deletions app.js
@@ -22,23 +22,19 @@ var fs = require('fs');
var extend = require('extend');
var path = require('path');
var async = require('async');
-var watson = require('watson-developer-cloud');
+var VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3'); // watson sdk
var uuid = require('uuid');
var bundleUtils = require('./config/bundle-utils');
var os = require('os');

var ONE_HOUR = 3600000;
-var TWENTY_SECONDS = 20000;
+var FOURTY_SECONDS = 40000;

// Bootstrap application settings
require('./config/express')(app);

// Create the service wrapper
-// If no API Key is provided here, the [email protected] library will check for an VISUAL_RECOGNITION_API_KEY
-// environment property and then fall back to the VCAP_SERVICES property provided by the IBM Cloud.
-var visualRecognition = new watson.VisualRecognitionV3({
-  // api_key: '<api-key>',
-  version_date: '2015-05-19'
+var visualRecognition = new VisualRecognitionV3({
+  version: '2018-03-19'
});

app.get('/', function(req, res) {
@@ -233,8 +229,7 @@ function parseBase64Image(imageString) {
app.post('/api/classify', app.upload.single('images_file'), function(req, res) {
var params = {
url: null,
-    images_file: null,
-    owners: []
+    images_file: null
};

if (req.file) { // file image
@@ -267,14 +262,13 @@ app.post('/api/classify', app.upload.single('images_file'), function(req, res) {
params.threshold = 0.5; // So the classifiers only show images with a confidence level of 0.5 or higher
methods.push('classify');
methods.push('detectFaces');
-    methods.push('recognizeText');
}

// run the 3 classifiers asynchronously and combine the results
async.parallel(methods.map(function(method) {
var fn = visualRecognition[method].bind(visualRecognition, params);
-    if (method === 'recognizeText' || method === 'detectFaces') {
-      return async.reflect(async.timeout(fn, TWENTY_SECONDS));
+    if (method === 'detectFaces') {
+      return async.reflect(async.timeout(fn, FOURTY_SECONDS));
} else {
return async.reflect(fn);
}
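The `async.reflect`/`async.timeout` wrapping above is what keeps one slow or failing method from sinking the whole `async.parallel` batch. A standalone, hand-rolled sketch of the reflect idea (illustrative only; the app uses the `async` library's own helpers):

```javascript
// Illustrative reflect(): wrap a Node-style callback task so an error
// becomes data ({error: ...}) instead of failing the parallel batch,
// and a success becomes {value: ...}.
function reflect(task) {
  return function (callback) {
    task(function (err, value) {
      callback(null, err ? { error: err } : { value: value });
    });
  };
}

var ok = reflect(function (cb) { cb(null, 'classes'); });
var timedOut = reflect(function (cb) { cb(new Error('ETIMEDOUT')); });

ok(function (ignored, result) { console.log(result.value); });               // classes
timedOut(function (ignored, result) { console.log(result.error.message); }); // ETIMEDOUT
```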
7 changes: 5 additions & 2 deletions casper-runner.js
@@ -1,7 +1,10 @@
'use strict';

-if (!process.env.VISUAL_RECOGNITION_API_KEY) {
-  console.log('Skipping integration tests because VISUAL_RECOGNITION_API_KEY is null');
+require('dotenv').config({silent: true});
+
+if (!process.env.VISUAL_RECOGNITION_IAM_APIKEY) {
+  console.log('Skipping integration tests because VISUAL_RECOGNITION_IAM_APIKEY is null');
process.exit(0);
}

1 change: 0 additions & 1 deletion config/express.js
@@ -31,7 +31,6 @@ module.exports = function(app) {
// Configure Express
app.set('view engine', 'jade');

-app.use(require('express-status-monitor')());
app.use(compression({filter: function (req, res) {

// This is kind of dumb, but I've had a few people reporting errors like
10 changes: 5 additions & 5 deletions config/security.js
@@ -31,11 +31,11 @@ module.exports = function(app) {
}));

// 2. rate-limit to /api/
-// app.use('/api/', rateLimit({
-//   windowMs: 30 * 1000, // seconds
-//   delayMs: 0,
-//   max: 10
-// }));
+app.use('/api/', rateLimit({
+  windowMs: 30 * 1000, // seconds
+  delayMs: 0,
+  max: 10
+}));

// 3. csrf
var csrfProtection = csrf({
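The limiter enabled above allows at most 10 requests to `/api/` per 30-second window. A toy sketch of that fixed-window policy (illustrative only; the app relies on the `express-rate-limit` middleware, which also handles per-IP keying and the `delayMs` option):

```javascript
// Toy fixed-window rate limiter: at most `max` hits per `windowMs`
// per key (e.g. a client IP). `now` is passed in for determinism.
function makeLimiter(windowMs, max) {
  var hits = {};
  return function allow(key, now) {
    var slot = key + ':' + Math.floor(now / windowMs);
    hits[slot] = (hits[slot] || 0) + 1;
    return hits[slot] <= max;
  };
}

var allow = makeLimiter(30 * 1000, 10);
var verdicts = [];
for (var i = 0; i < 11; i++) verdicts.push(allow('203.0.113.7', 0));
console.log(verdicts[9], verdicts[10]); // true false
```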