
USDS Playbook Usage (Play 1 to 5)


Play 1: Understand What People Need

CHECKLIST


Play 1.Checklist 1 - Early in the project, spend time with current and prospective users of the service

We spent time with prospective users early in the project by conducting interviews with various user groups in California.

Group 1 - The ADPQ prototype team's own experience was our starting point. After we discussed the application and potential needs, we contacted other users.

Group 2 - We interviewed an IBM employee who moved from Dublin, Ireland, to San Francisco within the last five years. This person (Podge) works in a completely different group (business unit) at IBM, so he has no association with the team developing the prototype.

Group 3 - We interviewed people on the street in downtown Sacramento.

Group 4 - We used Survey Monkey to conduct a survey of potential users in California.

Group 5 - We interviewed a manager of an emergency management center in a southern California County.

Group 6 - We conducted user testing of the final working prototype with two lifelong California residents working at Macy's.


Play 1.Checklist 2 - Use a range of qualitative and quantitative research methods to determine people’s goals, needs, and behaviors; be thoughtful about the time spent

We applied design thinking methodology to understand the users' needs, goals, behaviors, and pain points. We conducted interviews with the users, sent surveys, and ran workshops (image 1 and image 2) using the following research methods:

  1. Ideation of Alert Types (Emergency Alerts and Non-Emergency Alerts)

  2. Persona (Admin), Persona (CA Resident), and Empathy Map (Non-Emergency)

  3. Pain Point Identification

  4. To-Be Storyboard 1 and To-Be Storyboard 2

  5. Vision Statements

  6. Assumptions + Risks

  7. Hypothesis

  8. MVP (Minimum viable product) definition

These methods map to the IBM Garage process as follows: Understand (1-3), Explore (4), and Define (5-8).


Play 1.Checklist 3 - Test prototypes of solutions with real people, in the field if possible

We used clickable prototypes to test and validate design direction with the admin and resident users (notes). This was done through task analysis with 3 users for the Brenda (resident) sign-up flow, and we validated usefulness with an industry expert and a State Administrator for the Alan (admin) side of the application.


Play 1.Checklist 4 - Document the findings about user goals, needs, behaviors, and preferences

We posted our findings about user goals, needs, behaviors, and preferences in the GitHub project repository.

Notes based on conversation with two California residents


Play 1.Checklist 5 - Share findings with the team and agency leadership

Retrospectives (Image 1 and Image 2) were conducted periodically throughout the prototype effort. We learned what worked and what could be improved.

We had weekly playbacks to share with the larger team how the working prototype was being developed. In addition, we used Slack and Sametime (our online collaboration tools) to share findings with the whole team, along with daily standups and team calls.

We shared our notes with the Ventura County program manager who was a user, as well. Also, the team was very direct with each other if something needed to be addressed immediately.


Play 1.Checklist 6 - Create a prioritized list of tasks the user is trying to accomplish, also known as “user stories”

User stories were created in Trello, and were prioritized during backlog grooming with the Product Owner.


Play 1.Checklist 7 - As the digital service is being built, regularly test it with potential users to ensure it meets people’s needs

We conducted usability and usefulness testing with the users during prototyping.


KEY QUESTION


Play 1.Key Question 1 - Who are your primary users?

Our primary users are: California Residents and State Officials (Administrators).

We created these two personas as composites of what we learned from interviews with users who are California residents, including a California County employee in a role similar to the Admin in Prototype B (Ventura County Admin).


Play 1.Key Question 2 - What user needs will this service address?

Persona 1 is "Alan," the System Administrator. This user's needs include refining the alert process, creating and publishing alerts, managing different types/categories of alerts, setting automatic alerts, and measuring notifications (penetration/awareness). Additionally, the service needs improved visualization (the ability to map an area) and analytics (drill-down by street, zip code, city, county, etc.).

[Persona 2 is "Brenda"](https://github.com/ibmbluemixgarage/caliconnects/blob/master/documentation/IBM%20Design%20Thinking%20Workshop/DesignThinking_Brenda_CA_Personna.jpg), a California resident.

Based on the prioritization, our prototype MVP is as follows (other features are planned for future releases); a rough data-model sketch follows the two lists below:

For Alan:

  • Login using a valid credential

  • Create new alert notification (we call it 'Campaign')

  • Send notification for an area (using street, zip, or city level)

  • View created alerts (message, date & time, and recipients)

  • View alert events based on alert category

For Brenda:

  • Sign up or log in (if the user has already signed up)

  • Set a password (validated)

  • Enter profile info (phone, email, and address)

  • Manage profile (edit phone, email, and address)

  • Receive alerts via email and phone (based on the provided address)

  • View the alert and click the link provided in the email or text

  • Cancel account
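
The following is a minimal, illustrative sketch of how these MVP features might map onto the prototype's Ruby on Rails stack. The model names (Profile, Campaign) and fields are assumptions for illustration, not the actual prototype code.

```ruby
# Minimal illustrative sketch only -- model and field names are assumptions,
# not the actual prototype code. Assumes a Rails app with ActiveRecord and the
# Devise gem (both part of this project's stack).

class Profile < ActiveRecord::Base
  # Brenda: sign up / log in, set a validated password, and manage contact info.
  devise :database_authenticatable, :registerable, :validatable # Devise validates the password

  validates :email, :phone, :street, :city, :zip, presence: true
end

class Campaign < ActiveRecord::Base
  # Alan: a "Campaign" is one alert notification targeted at an area.
  validates :message, :category, :target_zip, presence: true

  # Profiles in the campaign's target zip code; street- and city-level
  # targeting would extend this query.
  def recipients
    Profile.where(zip: target_zip)
  end
end
```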


Play 1.Key Question 3 - Why does the user want or need this service?

We created an empathy map that was used to gain insight into our customers. In this instance, while we used individuals for our empathy map, it also represents a group of users or customer segment. The four quadrants on the empathy map capture what the customer says, thinks, feels, and does. Each member of the team used sticky notes to identify their perspectives regarding the customer. Then, the team grouped these into similar categories.

For Alan: This user needs the service to publish concise, event-based alerts to engage California residents and continuously improve communication for safety and awareness.

For Brenda: This user needs the service to subscribe to alerts from the state based on the locations she cares about, and to act appropriately on the alerts.


Play 1.Key Question 4 - Which people will have the most difficulty with the service?

Users who do not have access to a computer or a phone capable of receiving text messages, as well as users whose devices are offline, would have the most difficulty and would not be able to use the service.


Play 1.Key Question 5 - Which research methods were used?

A variety of research methods were used, including 1) in-person interviews, 2) surveys, and 3) phone interviews with California County emergency management staff and residents. We engaged users during the design and development of the prototype, and conducted usability and usefulness testing using a clickable prototype. Using their feedback, we made ongoing adjustments to the prototype.

We applied design thinking methodology to understand the users’ needs/goals, behaviors, and pain points. We conducted interviews with the users, sent surveys, and ran workshops (image 1 and image 2) using the following research methods:

  1. Ideation of Alert Types (Emergency Alerts and Non-Emergency Alerts)

  2. Pain Point Identification

  3. To-Be Storyboard 1 and To-Be Storyboard 2

  4. Vision Statements

  5. Assumptions + Risks

  6. Hypothesis

  7. MVP (Minimum viable product) definition

These methods map to the IBM Garage process as follows: Understand (1-2), Explore (3), and Define (4-7).


Play 1.Key Question 6 - What were the key findings?

We interviewed users from both the administrative and resident perspectives to solicit their understanding of what constitutes emergency and non-emergency notifications, as well as their pain points and opportunities.

Our key findings:

  • Brenda (resident user): would like to be educated more on the California alert system (who sends alerts, why, and what steps should be taken). For example, some users do not know what an Amber Alert is when they get the notification on their phone from the state IPAWS (Integrated Public Alert and Warning System). Residents would also like to be able to see alerts for their loved ones who live in other parts of California. They would like the option to opt in or opt out of different alert methods (e.g., email, text, or both).

  • Alan (admin user): would like to be able to create and send alerts based on the location proximity of users' addresses, see a map of all alert types, and maintain historical alert data. He would also like to see the alert penetration rate for analysis (e.g., confirmation rates, or invalid phone numbers). Currently, it is hard to get accurate penetration rates, since some phone numbers may be landlines.


Play 1.Key Question 7 - How were the findings documented? Where can future team members access the documentation?

We documented our findings in the following format:

  • videos and photos (e.g., of the interviews and workshops)

Videos https://vimeo.com/206621709

Photos of Interviewees: Photo 1, Photo 2, Photo 3

  • notes and reporting from the conversation with the California County rep

  • links (e.g., of the survey and the survey results)

Survey Video and [PDF](https://github.com/ibmbluemixgarage/Caliconnects/blob/master/documentation/Metrics%20and%20Research/CA_Emergency_AlertSystem_Survey.pdf)

Survey Results

All documentation was posted in GitHub.


Play 1.Key Question 8 - How often are you testing with real people?

We performed testing with real people once or twice a week for three weeks. We interviewed an IBM employee not involved with the project to get his feedback on our initial prototype, conducted usability testing with people on the street, and regularly held playbacks with the manager of an emergency management center in a southern California County as well as with the IBM employee.


Play 2: Address the Whole Experience, from Start to Finish

CHECKLIST


Play 2.Checklist 1 - Understand the different points at which people will interact with the service – both online and in person

There are various touchpoints in the process for the two types of users: residents and public officials (administrators).

The resident interacts in the following ways: signing up (after learning of the application through public education, word of mouth, or advertising), managing profile(s), testing SMS alerts, and, ultimately, receiving SMS alerts when the administrator user pushes a notification using our system.

The administrator interacts with the system by logging in and performing actions such as creating an alert, viewing active campaigns, and ending a campaign.
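
As a hedged illustration of the "administrator pushes a notification, residents receive SMS" touchpoint, the sketch below shows how a campaign could fan out SMS alerts using the twilio-ruby gem, which the team integrated during week 3. The class name, environment variable names, and the `recipients` helper are assumptions, not the actual prototype code.

```ruby
# Illustrative sketch of the alert-delivery touchpoint. Assumes the twilio-ruby
# gem (5.x); class name, ENV variable names, and campaign.recipients are hypothetical.
require 'twilio-ruby'

class AlertSender
  def initialize
    @client = Twilio::REST::Client.new(ENV['TWILIO_ACCOUNT_SID'], ENV['TWILIO_AUTH_TOKEN'])
  end

  # Send one campaign's message to every targeted resident's phone.
  def deliver(campaign)
    campaign.recipients.each do |profile|
      @client.messages.create(
        from: ENV['TWILIO_FROM_NUMBER'], # number provisioned for the service
        to:   profile.phone,
        body: campaign.message           # kept short because of SMS character limits
      )
    end
  end
end
```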


Play 2.Checklist 2 - Identify pain points in the current way users interact with the service, and prioritize these according to user needs

The users' pain points are based on our interviews with the users, the Empathy Map, and the Vision Statements for the personas (Brenda and Alan).

For Brenda, the pain points are:

  • Need better education on the state alert/notification system. For example, many users are not aware of what an AMBER alert is, why it was sent, or who sent it. Also, it is not clear what steps need to be taken upon receiving the alert.

For Alan, the pain points are:

  • The current notification process is complicated
  • The character limit on text messages can impact the clarity of the message
  • Text messages for mobile phones may be truncated, or may be delivered out of order (as the messages are broken down by the cellular provider)
  • Analyzing the penetration rate is time consuming due to unreachable phone numbers

Play 2.Checklist 3 - Design the digital parts of the service so that they are integrated with the offline touch points people use to interact with the service

The prototype is a fully online product; there is no offline capability, as it is a responsive web application. The administrative user must be online to access the web application. The resident user must be online when using the web application, whether signing up for alerts, viewing alerts, or managing other profile information. The resident user must have a cell phone capable of receiving text alerts. In the future, the application could be expanded beyond the Minimum Viable Product so that users who cannot receive text alerts on a cell phone could be reached through other means, such as a phone call or another notification approach.

We considered a few offline scenarios, such as a user seeing an incident and reporting it through our application, or a System Administrator drafting alerts or analyzing data offline. These offline scenarios are nice-to-haves, but due to time constraints we are not implementing offline features in the Minimum Viable Product (MVP) at this time.


Play 2.Checklist 4 - Develop metrics that will measure how well the service is meeting user needs at each step of the service

Defined metrics are as follows (to be monitored after deployment):

Quantitative: time saved for the admin user; penetration metrics for the admin user (% of user response).

Qualitative: whether CA residents feel safer as a result of using this system.
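
As a hedged sketch of how the penetration metric could be computed (assuming a hypothetical per-recipient delivery record; not the actual prototype code):

```ruby
# Hedged sketch: penetration and response rates for one campaign, assuming a
# hypothetical Delivery record with a status of "delivered", "failed", or "confirmed".

# % of targeted recipients the notification actually reached.
def penetration_rate(campaign)
  sent = campaign.deliveries.count
  return 0.0 if sent.zero?

  delivered = campaign.deliveries.where(status: 'delivered').count
  (delivered.to_f / sent * 100).round(1)
end

# % of recipients who responded to / confirmed the alert.
def response_rate(campaign)
  sent = campaign.deliveries.count
  return 0.0 if sent.zero?

  confirmed = campaign.deliveries.where(status: 'confirmed').count
  (confirmed.to_f / sent * 100).round(1)
end
```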


KEY QUESTION


Play 2.Key Question 1 - What are the different ways (both online and offline) that people currently accomplish the task the digital service is designed to help with?

From talking with users, we learned that public education is a big deal. Based on our interview with a California County Emergency Management Center, we learned that some residents do not know who is sending the alerts or why they are receiving them (e.g., Amber Alerts). Many residents learn of emergency and non-emergency events from radio, TV, and other news media. Some users are willing to sign up / opt in to receive weather alerts, as well as sign up for the MyHazards/Cal Fire apps that send emergency and non-emergency alerts.

Currently, the Admin user buys data from wireless telephone companies, receives data from white pages and yellow pages, and then sends notifications. Admin users have the ability to use Everbridge / Nixle (Emergency Notification Systems focusing on public safety), EAS (Emergency Alert System), IPAWS (Integrated Public Alert and Warning System), and other custom systems.


Play 2.Key Question 2 - Where are user pain points in the current way people accomplish the task?

For Brenda, the pain points are:

  • Need better education on the state alert/notification system. For example, currently there is a Federal system to send alerts/notifications through IPAWS (Integrated Public Alert and Warning System) for AMBER alerts. However, many users are not aware of what an AMBER alert is, why it was sent, or who sent it. Also, it is not clear what steps need to be taken upon receiving an alert.

For Alan, the pain points are:

  • The current notification process could be simpler

  • The character limit on text messages means shorter messages may not be as clear as longer ones

  • Text messages for mobile phones may be truncated, or may be delivered out of order (as messages are broken down by the provider)

  • Analyzing the penetration rate is time consuming, since some phone numbers are unreachable


Play 2.Key Question 3 - Where does this specific project fit into the larger way people currently obtain the service being offered?

The state has multiple tools it uses to provide emergency and non-emergency notifications, for example EAS (Emergency Alert System), WEA/Wireless Emergency Alerts (IPAWS/Integrated Public Alert & Warning System), radio, web, and phone. In addition, specific agencies have their own custom tools. This app is more about public education for the resident user and simplifying the send features for the admin user. However, the tools the state is currently using are pretty sophisticated.


Play 2.Key Question 4 - What metrics will best indicate how well the service is working for its users?

We defined the following metrics (to be monitored after deployment) to identify how well the service meets user needs at each step.

For Administrative user:

  • The amount of time saved by the administrative user (as a result of having an electronic means of notification and relying on that as a primary means of notification)

  • The number of notifications that did not reach the intended users (failed delivery attempts)

  • The degree to which the notification service reaches residents as measured by penetration metrics targeted by the various geographic areas (such as a street level, zip code level, etc.). Contacting residents through the Notification system will save time for the administrator and potentially save time and lives for the resident.

For Resident users:

  • Whether Californians feel safer as a result of using the system (for emergency notifications): The admin user currently has the ability to poll the resident using canned responses to gather feedback

  • Whether Californians feel the notifications are received in such a manner (timeliness and level of detail) that they are able to receive just the information they need

  • Whether Californians have acted on the notifications (i.e., click rate)

  • The degree to which the notifications received enabled the resident to determine that no action was needed

  • Whether a California resident would recommend signing-up for the CaliConnects app to a friend or family (Net Promoter Score)


Play 3: Make it simple and intuitive

CHECKLIST


Play 3.Checklist 1 - Create or use an existing, simple, and flexible design style guide for the service

We used a design style guide that we created, in addition to the US Web Design Standards.

URL to US Web Design Standards


Play 3.Checklist 2 - Use the design style guide consistently for related digital services

We use the same style guide for both the resident user and the administrator (identified by the personas "Brenda" and "Alan").


Play 3.Checklist 3 - Give users clear information about where they are in each step of the process

Our prototype for User 1 (Brenda) uses breadcrumbs so she knows where she is in the sign-up process. For User 2 (Alan), we use sidebar navigation to help him understand where he is in the system.


Play 3.Checklist 4 - Follow accessibility best practices to ensure all people can use the service

We use the Section 508 of the Rehabilitation Act of 1973 checklist (http://webaim.org/standards/508/checklist) to ensure compliance, as well as the IBM FED community's a11y field guide. We also searched for recent guidance and for a Technology Accessibility Playbook, which we have posted in our GitHub repository.

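For example, a short sketch of the kind of markup the 508 checklist calls for on the sign-up form (field names are illustrative, not the actual prototype markup):

```html
<!-- Illustrative sketch of 508-friendly sign-up markup; field names are assumptions. -->
<form action="/signup" method="post">
  <!-- Each input has an explicitly associated label (508 / WCAG). -->
  <label for="email">Email address</label>
  <input type="email" id="email" name="email" required>

  <label for="phone">Mobile phone (for text alerts)</label>
  <input type="tel" id="phone" name="phone">

  <button type="submit">Sign up</button>
</form>

<!-- Informative images carry a text alternative. -->
<img src="alert-map.png" alt="Map of the area covered by the current alert">
```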


Play 3.Checklist 5 - Provide users with a way to exit and return later to complete the process

Users are able to exit and complete the process (sign-up) at a later time.


Play 3.Checklist 6 - Use language that is familiar to the user and easy to understand

We used a conversational UI that is easy to understand to guide users through the sign-up process.

The first release will be in English only; future releases could include other languages, prioritized by the demographics of the California population.


Play 3.Checklist 7 - Use language and design consistently throughout the service, including online and offline touch points

We are using a Style Guide to maintain consistent language and design throughout the system.


KEY QUESTION


Play 3.Key Question 1 - What primary tasks are the user trying to accomplish?

We identified the primary tasks based on the personas:

1 - Brenda (the resident user) needs to be able to create her profile, receive alerts, and click "more info" links in the alerts.

2 - Alan (the admin user) needs to be able to publish alerts, manage and update alerts, track metrics of alert penetration, and use existing data to inform publishing of alerts.


Play 3.Key Question 2 - Is the language as plain and universal as possible?

We sought input on the language during usability testing and made adjustments according to users' feedback. Usability testing - [lessons learned](https://github.com/ibmbluemixgarage/Caliconnects/blob/master/documentation/Prototype%20Testing/User_Test_Lessons_Learned.pdf)


Play 3.Key Question 3 - What languages is your service offered in?

Initially we are using English. Other languages will be in future releases and prioritized based on demographics in the California population: Chinese, Russian, Spanish, Vietnamese, Arabic, Armenian, Cambodian, Farsi, Hmong, Korean, Lao, and Tagalog.


Play 3.Key Question 4 - If a user needs help while using the service, how do they go about getting it?

The resident user will be able to send an email if they have questions about the CaliConnects app. We tested the help desk feature with users. When we asked resident users how they would go about getting help, they clicked the "i" (information) icon displayed on the page.


Play 3.Key Question 5 - How does the service’s design visually relate to other government services?

We are using the federal style guide applied to the State of CA brand guidelines to make the system feel like it belongs to the State of CA.


Play 4: Build the service using agile and iterative practices

CHECKLIST


Play 4.Checklist 1 - Ship a functioning “minimum viable product” (MVP) that solves a core user need as soon as possible, no longer than three months from the beginning of the project, using a “beta” or “test” period if needed

The initial prototype went live on week 2, with limited features for Alan, the administrator. We completed the MVP of the prototype in three weeks.


Play 4.Checklist 2 - Run usability tests frequently to see how well the service works and identify improvements that should be made

We performed builds and deployments daily, and did usability testing (within the team and/or with external users) more than once a week.


Play 4.Checklist 3 - Ensure the individuals building the service communicate closely using techniques such as launch meetings, war rooms, daily standups, and team chat tools

The team ran daily standups and used collaboration tools such as Slack, Sametime, and Box.


Play 4.Checklist 4 - Keep delivery teams small and focused; limit organizational layers that separate these teams from the business owners

The Product Owner (PO) worked closely with 3 developers and 2 designers, talking with them and other team members on a daily basis.


Play 4.Checklist 5 - Release features and improvements multiple times each month

We performed builds and deployments daily.


Play 4.Checklist 6 - Create a prioritized list of features and bugs, also known as the “feature backlog” and “bug backlog”

We have a feature backlog and listed the priority for each feature. We implemented the XP Agile methodology, which presumes bugs are fixed immediately after testing. Therefore, we only track features in Trello, our Agile tool for managing the product/feature backlog.


Play 4.Checklist 7 - Use a source code version control system

We utilized GitHub to control source code versioning.


Play 4.Checklist 8 - Give the entire project team access to the issue tracker and version control system

We granted the team access to Git (for version control) and Trello (for issue tracking).


Play 4.Checklist 9 - Use code reviews to ensure quality

Developers performed code reviews during the paired programming session.


KEY QUESTION


Play 4.Key Question 1 - How long did it take to ship the MVP? If it hasn't shipped yet, when will it?

The prototype was ready in 3 weeks.


Play 4.Key Question 2 - How long does it take for a production deployment?

Running tests and deploying to production using Cloud Foundry takes a few minutes.
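
As an illustrative sketch of that deployment step (the app name, memory, and instance count below are assumptions, not the actual values used):

```yaml
# Hypothetical Cloud Foundry manifest.yml for the prototype; values are illustrative.
applications:
- name: caliconnects
  memory: 512M
  instances: 2
```

Running `cf push` from the project root with a manifest like this builds and deploys the app, and the whole test-and-deploy cycle takes a few minutes.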


Play 4.Key Question 3 - How many days or weeks are in each iteration/sprint?

We use XP (Extreme Programming), an Agile methodology. We have daily deployments and IPMs (Iteration Planning Meetings) twice a week. Therefore, new code is released on a daily basis.


Play 4.Key Question 4 - Which version control system is being used?

We are using Git as the version control system.


Play 4.Key Question 5 - How are bugs tracked and tickets issued? What tool is used?

The XP Agile methodology presumes bugs are fixed immediately when found during testing; therefore, we only track features in Trello, which is the tool for the product backlog.


Play 4.Key Question 6 - How is the feature backlog managed? What tool is used?

We are using Trello to manage the feature backlog, which is prioritized and groomed frequently by the Product Owner. The team provides status on each work item in the backlog accordingly.


Play 4.Key Question 7 - How often do you review and reprioritize the feature and bug backlog?

Daily. The Product Owner adds to the backlog on a daily basis, either by accepting stories and/or creating new stories. In addition, the Product Owner is responsible for updating the backlog and works with the lead developer to ensure prioritization takes place daily, not just in the IPMs. As a result, we don't have a bug backlog, because bugs have to be addressed immediately (the Product Owner declines stories in real time, and the development team fixes them as soon as possible).


Play 4.Key Question 8 - How do you collect user feedback during development? How is that feedback used to improve the service?

We conduct usability tests on new wireframes and walk users through the prototype during our playbacks to get feedback from the larger team and from user testers who are California residents. One of the testers is from within IBM but works for a different business unit (Podge lives in California and works for Watson Health, which is not our business unit, and he provides weekly feedback on the prototype).


Play 4.Key Question 9 - At each stage of usability testing, which gaps were identified in addressing user needs?

We conducted a number of usability tests, with the following results:

User Test 1 (Podge)

  • Change language for Brenda account, build in help desk, allow for account creation.

User Test 2 (Street users from the Galvanize office; we tested three users and each provided new insights):

  • Person 1 - Commented on the need for one screen, even though she enjoyed the conversational UI and thought it was pretty standard. She also said she didn't like long amounts of text and didn't understand how many steps she would have to go through.

  • Person 2 - Made comments about social media integration. Said she would sign-up family members and other loved ones.

  • Person 3 - Text boxes. Didn't understand what types of notifications she would receive and the frequency of those notifications.

User Test 3 (Ventura County Admin)

  • Person 1 - change language for alert types, no need to include event types, simplify the UI, ensure there are some visualizations.

Play 5: Structure budgets and contracts to support delivery

CHECKLIST


Play 5.Checklist 1 - Budget includes research, discovery, and prototyping activities

Our budget for the prototype included time for research, discovery, and prototyping activities. We reviewed our lessons learned regarding prior submissions and incorporated items for improvement as well. Our budget for the prototype was completed well in advance of the release of the Request for Interest, so our team was ready to begin work immediately upon its release.


Play 5.Checklist 2 - Contract is structured to request frequent deliverables, not multi-month milestones

Our contract with the IBM Garage team requires frequent deliverables, since that is the way the Garage team works; the team was already experienced with the Agile method, which includes frequent deliverables so we could assess project status. Playbacks were conducted weekly to review the work completed to date and to assess our estimates to complete the prototype by March 3. In addition, we requested an Agile Coach from the IBM Competency Center for Agile Development to be part of our team to provide another opinion regarding our status and any actions needed for improvement.


Play 5.Checklist 3 - Contract is structured to hold vendors accountable to deliverables

In this case, there was no external contract, since all of the work was performed internally by IBM employees. Our project plan and approach identified responsibilities for the various deliverables, so everyone knew who was accountable for the various work products. When team members identified any ambiguities, these were cleared up so all team members knew their respective responsibilities.


Play 5.Checklist 4 - Contract gives the government delivery team enough flexibility to adjust feature prioritization and delivery schedule as the project evolves

In this case, our contract with the various IBM groups and teams gave the team full flexibility and accountability to adjust features, prioritization, and delivery schedule as the project evolved. In retrospect, all members of the team were highly responsible and effective in their jobs.


Play 5.Checklist 5 - Contract ensures open source solutions are evaluated when technology choices are made

The IBM team working on the prototype included several different groups from within IBM, most importantly the IBM Garage. The Garage team works with many open source solutions, and we used open source as our de facto standard in developing the prototype. Examples include open source solutions such as Cloud Foundry and Ruby on Rails, as well as free tools such as Trello.


Play 5.Checklist 6 - Contract specifies that software and data generated by third parties remains under our control, and can be reused and released to the public as appropriate and in accordance with the law

There was no software generated by third parties. The data sources included in our prototype were those identified by the State for use in the prototype (in IBM's case, we selected Prototype B). We have used the data sources in accordance with the applicable laws as they relate to this prototype.


Play 5.Checklist 7 - Contract allows us to use tools, services, and hosting from vendors with a variety of pricing models, including fixed fees and variable models like “pay-for-what-you-use” services

This item is not applicable to the ADPQ prototype. Many of the tools we used were open source or free, such as Trello.

For any tools that required payment, these were handled through IBM company agreements; no specific fees were paid as it relates to the ADPQ prototype.


Play 5.Checklist 8 - Contract specifies a warranty period where defects uncovered by the public are addressed by the vendor at no additional cost to the government

Typically, IBM contracts include a warranty period. The ADPQ prototype was provided to enable the State to select a vendor pool and therefore does not include a warranty period; no contract is signed as part of the ADPQ submission. Contracts awarded through the ADPQ vendor pool will, of course, specify a warranty period.


Play 5.Checklist 9 - Contract includes a transition of services period and transition-out plan

Occasionally, IBM contracts include a transition-of-services period if IBM is assuming responsibility from an existing, incumbent vendor. In other cases, IBM transitions our services to other companies as well, so IBM has valuable experience with the transition periods that are sometimes required. In this case, we envision our Emergency and Non-Emergency Notification system as a new service, and no particular transition is required because we are not replacing a system with the prototype submission. However, if a notification system similar to the prototype were ever implemented, we would envision that some County or even State notification systems might be transitioned to the new system.


KEY QUESTION


Play 5.Key Question 1 - What is the scope of the project? What are the key deliverables?

The scope of the project was a working software prototype to address the requirements of the Request for Interest # CDT–ADPQ–0117, Pre-Qualified Vendor Pool for Digital Services – Agile Development. IBM submitted a Working Prototype that demonstrates our agile software development capabilities, including a publicly available URL to the prototype at the top of the README.md file located in the root directory of our repository. IBM chose Prototype B as part of our response submission. Prototype B requires a working application that allows California residents to establish and manage their profile and receive emergency and non-emergency notifications via email, Short Message Service (SMS), and/or push notification, based on the location and contact information provided in their profile and/or the geo-location of their cellphone if they have opted in for this service. In addition, the working prototype provides authorized administrative users with the ability to publish notifications and to track, analyze, and visualize related data. Per the RFI, the working prototype does not need to implement any authentication or authorization against an external directory or authentication mechanism.

Key deliverables:

  1. Working prototype B

  2. A brief description/narrative, no greater than 2,000 words, of the Technical Approach used to create the Working Prototype, placed in the README.md file located in the root directory of the GitHub repository. The documentation shows the code flow from client UI to JavaScript library to REST service to database, pointing to code in the GitHub repository.

  3. References as listed in the RFI for the Technical Approach.

  4. Demonstration that we followed the US Digital Services Playbook, by providing actions taken and answers, including applicable evidence, in the repository.

  5. Coding and design assets created for the prototype.


Play 5.Key Question 2 - What are the milestones? How frequent are they?

Week 1 - Understand requirements of RFP. Define and scope problem. Conduct a design thinking session. Reach out to potential resident and admin users and set-up meetings.

Week 2 - Spike out maps data provided by states and visualization. Develop some back wireframes for the resident user (Brenda). Develop code for initial feature of the admin (Alan), specifically around creating a campaign. Learn from potential resident and admin users, by having conversations with California County rep and residents living in California.

Week 3 - Finalize Alan functionality; develop Alan wireframes; assemble Twilio, Swagger, and Devise integrations. User test InVision wireframes and prototypes with resident and admin users. Develop a sign-up page for Brenda. Do playbacks and retrospectives. Complete 50+ percent of documentation.

Week 4 - Finalize prototype and documentation. Complete 100 percent of documentation by mid-week.


Play 5.Key Question 3 - What are the performance metrics defined in the contract (e.g., response time, system uptime, time period to address priority issues)?

The prototype (Minimum Viable Product) did not include performance metrics. Performance metrics would be identified if the prototype were accepted and a decision made to continue building out the product. For an application such as an emergency and non-emergency notification system, system uptime would need to be very high, response times would need to be very fast (e.g., 3 seconds or less), and priority issues would need to be addressed in short time periods. The specific metrics would be agreed upon with the users, estimates established for the time required to incorporate them, and then built into the application.