In this project I use curl and Python 3 to play around with IBM Personality Insights.
I used IBM's free Lite service, which provides 1,000 API calls per month at no cost, and deployed to the US South region. I created a Lite Personality Insights service instance, which only lasts 30 days.
I followed the Getting started instructions and referred to the API reference.
The service name was automatically created: Personality Insights-ca
The Getting Started examples use curl to call methods of the HTTP interface. The version of curl installed by default on Ubuntu works for these examples.
I used jq to pretty-print the returned JSON. Here is how to install it in Ubuntu:
$ sudo apt install jq
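As a quick check that jq is working, pipe any JSON through it and it will be re-indented for readability (the fragment below is just an illustration, not actual service output):
$ echo '{"word_count": 1500, "processed_language": "en"}' | jq .
{
  "word_count": 1500,
  "processed_language": "en"
}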
Step 1 of the Getting Started instructions has a serious error: instead of providing the apiKey, you must provide your username:password.
Because I did not want to hard-code credentials into a program, I defined environment variables to hold this sensitive data.
export PI_USERNAME="999999-8888-7777-6666-12345678" # replace with your Personality Insights username
export PI_PASSWORD="zYxWv" # replace with your Personality Insights password
The IBM Personality Insights Getting Started instructions show a curl example, which I modified as shown. I then wrote a simple Python equivalent, shown in the next section.
export PATH_OF_TEXT_TO_ANALYSE=./profile.txt
curl -sSX POST --user "$PI_USERNAME:$PI_PASSWORD" \
--header "Content-Type: text/plain;charset=utf-8" \
--header "Accept: application/json" \
--data-binary @"$PATH_OF_TEXT_TO_ANALYSE" \
"https://gateway.watsonplatform.net/personality-insights/api/v3/profile?version=2017-10-13" | \
jq .
Notice the leading @ character before $PATH_OF_TEXT_TO_ANALYSE. If you omit it, curl interprets the path itself as the text to analyze, instead of as the name of the file containing the text to analyze.
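The two forms behave quite differently:
--data-binary @"$PATH_OF_TEXT_TO_ANALYSE"   # sends the contents of ./profile.txt as the request body
--data-binary "$PATH_OF_TEXT_TO_ANALYSE"    # sends the literal string "./profile.txt" as the body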
I found the Python docstring for the PersonalityInsightsV3 constructor to be helpful.
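To illustrate, here is a minimal sketch of the shape of such a script. It assumes the pre-2.x watson_developer_cloud Python package of that era, where the constructor still accepted username and password and profile() returned a plain dict; it reuses the PI_USERNAME and PI_PASSWORD environment variables defined above, and post1.txt and post2.txt are hypothetical filenames for the two blog postings.

import json
import os

from watson_developer_cloud import PersonalityInsightsV3

# Credentials come from the environment variables defined earlier,
# so they are never hard-coded into the program.
personality_insights = PersonalityInsightsV3(
    version='2017-10-13',
    username=os.environ['PI_USERNAME'],
    password=os.environ['PI_PASSWORD'])

# Read and combine the two blog postings (hypothetical filenames).
text = ''
for path in ('post1.txt', 'post2.txt'):
    with open(path, encoding='utf-8') as f:
        text += f.read() + '\n'

# Submit the combined text for analysis and pretty-print the JSON profile,
# much as jq does for the curl version.
profile = personality_insights.profile(text, content_type='text/plain')
print(json.dumps(profile, indent=2))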
The Python version is more ambitious than the curl version: as sketched above, it reads two blog postings, combines them, and submits them for analysis. Here is the resulting JSON.