29 changes: 29 additions & 0 deletions Coordination.md
@@ -0,0 +1,29 @@

DUE FRI AT 5pm
[PHASE 3 REQUIREMENTS](https://docs.google.com/document/d/1oaXD2gjbQTMcSbYllbsGI17IqQbSJP5T0lSpxT6BRAs/edit?tab=t.0)

- [ ] Make sure readme is complete (others should add what they think is needed)
- [ ] Be sure to link video presentation
- [x] add in more mock data via mockaroo or similar
- [ ] add in bridge tables (which imo is great news -- we won't have to build a front end for a few features)
- [ ] Implement REST API in Python (imo I'm not sure we need as many rows as we have -- which is also good news)
- [ ] Jose Landing Page
- [x] Jose Feature 1
- 3.4 Done by Ryan, though it's not pretty. I mostly did it to understand Streamlit syntax
- [ ] Jose Feature 2
- [ ] Jose Feature 3
- [ ] Jack Landing Page
- [ ] Jack Feature 1
- [ ] Jack Feature 2
- [ ] Jack Feature 3
- [ ] Alan Landing Page
- [ ] Alan Feature 1
- [ ] Alan Feature 2
- [ ] Alan Feature 3
- [ ] Avery Landing Page
- [x] Avery Feature 1
- 1.1 Done by Ryan. I interpreted it as just having subgoals, though that shouldn't be an issue.
- [ ] Avery Feature 2
- 1.5
- [ ] Avery Feature 3

189 changes: 112 additions & 77 deletions README.md
@@ -1,108 +1,143 @@
# Summer 2 2025 CS 3200 Project Template
# Summer 2 2025 CS 3200 Project - What is Goal Planner (Global GoalFlow)?

This is a template repo CS 3200 Summer 2 2025 Course Project.
Goal Planner is a comprehensive goal and habit management platform that transforms how people approach long-term achievement by making data work for them, not against them. Unlike traditional to-do apps that leave users drowning in endless lists, Goal Planner intelligently breaks down ambitious projects into manageable phases, automatically suggests next tasks when you complete something, and seamlessly integrates daily habits with major milestones.

It includes most of the infrastructure setup (containers), sample databases, and example UI pages. Explore it fully and ask questions!
By collecting and analyzing user progress patterns, deadline adherence, and completion rates, our app provides personalized insights that help users understand their productivity patterns and optimize their approach to goal achievement.
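The README doesn't pin down how these metrics are computed, but the idea can be sketched in a few lines of Python. The record shape below (`done`, `deadline`, `completed_on`) is illustrative, not the actual schema:

```python
from datetime import date

def completion_stats(tasks):
    """Summarize completion rate and deadline adherence for task records
    shaped like {'done': bool, 'deadline': date, 'completed_on': date | None}."""
    total = len(tasks)
    done = [t for t in tasks if t["done"]]
    # A task counts as on time only if it was completed by its deadline.
    on_time = [t for t in done
               if t["completed_on"] is not None and t["completed_on"] <= t["deadline"]]
    return {
        "completion_rate": len(done) / total if total else 0.0,
        "on_time_rate": len(on_time) / len(done) if done else 0.0,
    }

tasks = [
    {"done": True,  "deadline": date(2025, 8, 1), "completed_on": date(2025, 7, 30)},
    {"done": True,  "deadline": date(2025, 8, 1), "completed_on": date(2025, 8, 3)},
    {"done": False, "deadline": date(2025, 8, 5), "completed_on": None},
]
stats = completion_stats(tasks)
```

Here two of three tasks are done (completion rate 2/3), and only one of the two was finished on time (on-time rate 0.5).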

We're building this for four distinct user types: individual achievers like freelancers and students who juggle multiple projects, professionals and researchers who need structured approaches to complex work, business analysts who require data-driven insights into team performance and goal completion rates, and system administrators who need robust, scalable platforms for managing user communities.

This repo includes the infrastructure setup, a MySQL database along with mock data, and example UI pages.

### Project Members

- Ryan Baylon
- Hyeyeon Seo
- Jaden Hu
- Rishik Kellar
- Fabrizio Flores

---

## Prerequisites

- A GitHub Account
- A terminal-based git client or GUI Git client such as GitHub Desktop or the Git plugin for VSCode.
- VSCode with the Python Plugin installed
- A distribution of Python running on your laptop. The distribution supported by the course is Anaconda or Miniconda.
- Create a new Python 3.11 environment in conda named `db-proj` by running:
```bash
conda create -n db-proj python=3.11
```
- Install the Python dependencies listed in `api/requirements.txt` and `app/src/requirements.txt` into your local Python environment. You can do this by running `pip install -r requirements.txt` in each respective directory.
Before starting, make sure you have:

- A GitHub account
- Git client (terminal or GUI such as GitHub Desktop or Git plugin for VSCode)
- VSCode with the Python Plugin or your preferred IDE
- Docker and Docker Compose installed on your machine

---

## Repo Structure
The repo is organized into five main directories:

- `./app` – Frontend Streamlit app for user interaction.
- `./api` – Backend REST API (Flask) to handle business logic and database communication.
- `./database-files` – SQL scripts to initialize and seed the MySQL database with mock data.
- `./datasets` – Folder for datasets (if needed).
- `docker-compose.yaml` – Configuration to start the app, API, and MySQL database containers.

---

## Database Setup

We use a MySQL database named `global-GoalFlow`. The schema includes tables to manage users, goals, tasks, posts, tags, bug reports, and more, supporting the core functionality of Goal Planner.

## Structure of the Repo
### Key Tables Overview

- The repo is organized into five main directories:
- `./app` - the Streamlit app
- `./api` - the Flask REST API
- `./database-files` - SQL scripts to initialize the MySQL database
- `./datasets` - folder for storing datasets
- **users**: Stores user profiles, roles, contact info, and management relationships.
- **tags**: Categories for goals, posts, and tasks.
- **posts** & **post_reply**: Community forum posts and replies.
- **user_data**: Tracks user activity, devices, and login info.
- **bug_reports**: For tracking issues submitted by users.
- **consistent_tasks**, **daily_tasks**: Task management for recurring and daily items.
- **goals** & **subgoals**: Hierarchical goal tracking with status, priority, and deadlines.

- The repo also contains a `docker-compose.yaml` file that is used to set up the Docker containers for the front end app, the REST API, and MySQL database.
The database schema is designed to support role-based access, data integrity, and efficient queries with proper indexes and foreign keys.
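One way the hierarchical `goals`/`subgoals` tables might map onto application objects is sketched below. The class and field names are illustrative; the authoritative column names live in the scripts under `./database-files`:

```python
from dataclasses import dataclass, field

@dataclass
class Subgoal:
    title: str
    completed: bool = False

@dataclass
class Goal:
    title: str
    priority: int = 4          # 1 = most urgent, matching the 1-4 priority slider
    subgoals: list = field(default_factory=list)

    def progress(self):
        """Fraction of subgoals completed; 0.0 for a goal with no subgoals."""
        if not self.subgoals:
            return 0.0
        return sum(s.completed for s in self.subgoals) / len(self.subgoals)

goal = Goal("Ship Phase 3", subgoals=[
    Subgoal("Schema", True),
    Subgoal("REST API", True),
    Subgoal("Streamlit pages", False),
    Subgoal("Video", False),
])
```

With two of four subgoals done, `goal.progress()` reports 0.5, the kind of aggregate figure the analytics dashboards can roll up per user or per team.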

## Suggestion for Learning the Project Code Base
---

If you are not familiar with web app development, this code base might be confusing. But don't worry, we'll get through it together. Here are some suggestions for learning the code base:
## How to Build and Run

1. Have two versions of the template repo - one for you to individually explore and learn and another for your team's project implementation.
1. Start by exploring the `./app` directory. This is where the Streamlit app is located. The Streamlit app is a Python-based web app that is used to interact with the user. It's a great way to build a simple web app without having to learn a lot of web development.
1. Next, explore the `./api` directory. This is where the Flask REST API is located. The REST API is used to interact with the database and perform other server-side tasks. You might also consider this the "application logic" or "business logic" layer of your app.
1. Finally, explore the `./database-files` directory. This is where the SQL scripts are located that will be used to initialize the MySQL database.
### 1. Clone the Repository

### Setting Up Your Personal Testing Repo
```bash
git clone <YOUR_REPO_URL>
cd <REPO_FOLDER>
```

**Before you start**: You need to have a GitHub account and a terminal-based git client or GUI Git client such as GitHub Desktop or the Git plugin for VSCode.
### 2. Set up Environment Variables
Copy the `.env.template` file inside the `api` folder and rename it to `.env`. Edit the `.env` file to include your database credentials and secrets. Make sure passwords are secure and unique.
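The `.env` file is plain `KEY=VALUE` lines; Docker Compose reads it for you, but the parsing it does can be mimicked in a few lines of Python. The variable names below are illustrative -- check `.env.template` for the keys this project actually uses:

```python
import os

def load_env(text, environ=None):
    """Parse KEY=VALUE lines (ignoring blanks and # comments) into the
    given mapping, roughly what docker compose does with a .env file."""
    environ = environ if environ is not None else os.environ
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        environ[key.strip()] = value.strip()
    return environ

env = load_env(
    "# database credentials\n"
    "MYSQL_ROOT_PASSWORD=change-me\n"
    "DB_NAME=global-GoalFlow\n",
    {},
)
```

Passing a fresh dict instead of `os.environ` makes the helper easy to test without touching the real process environment.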

1. Clone this repo to your local machine.
1. You can do this by clicking the green "Code" button on the top right of the repo page and copying the URL. Then, in your terminal, run `git clone <URL>`.
1. Or, you can use the GitHub Desktop app to clone the repo. See [this page](https://docs.github.com/en/desktop/adding-and-cloning-repositories/cloning-a-repository-from-github-to-github-desktop) of the GitHub Desktop Docs for more info.
1. Open the repository folder in VSCode.
1. Set up the `.env` file in the `api` folder based on the `.env.template` file.
1. Make a copy of the `.env.template` file and name it `.env`.
1. Open the new `.env` file.
1. On the last line, delete the `<...>` placeholder text, and put a password. Don't reuse any passwords you use for any other services (email, etc.)
1. For running the testing containers (for your personal repo), you will tell `docker compose` to use a different configuration file than the typical one. The one you will use for testing is `sandbox.yaml`.
1. `docker compose -f sandbox.yaml up -d` to start all the containers in the background
1. `docker compose -f sandbox.yaml down` to shutdown and delete the containers
1. `docker compose -f sandbox.yaml up db -d` only start the database container (replace db with api or app for the other two services as needed)
1. `docker compose -f sandbox.yaml stop` to "turn off" the containers but not delete them.
### 3. Start Docker Containers
Use Docker Compose to start the full stack:

### Setting Up Your Team's Repo
```bash
docker compose up -d
```
This will start:
- MySQL database container
- Flask REST API backend
- Streamlit frontend app

**Before you start**: As a team, one person needs to assume the role of _Team Project Repo Owner_.
To stop and remove containers:
```bash
docker compose down
```

1. The Team Project Repo Owner needs to **fork** this template repo into their own GitHub account **and give the repo a name consistent with your project's name**. If you're worried that the repo is public, don't. Every team is doing a different project.
1. In the newly forked team repo, the Team Project Repo Owner should go to the **Settings** tab, choose **Collaborators and Teams** on the left-side panel. Add each of your team members to the repository with Write access.
### 4. Initialize the Database
Run the SQL scripts inside `./database-files` to create tables and insert initial data:

**Remaining Team Members**
```bash
mysql -u <username> -p < ./database-files/schema.sql
```
Or connect to the running MySQL container and execute the scripts.
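If you prefer driving the initialization from Python rather than the `mysql` client, a minimal (and deliberately naive) helper might look like the sketch below. It assumes plain `CREATE`/`INSERT` scripts: splitting on `;` does not handle semicolons inside string literals or stored procedures:

```python
from pathlib import Path

def sql_statements(script_text):
    """Naively split a SQL script into statements on ';'.
    Adequate for simple CREATE TABLE / INSERT seed files only."""
    return [s.strip() for s in script_text.split(";") if s.strip()]

def ordered_scripts(directory):
    """Return .sql files sorted by name so numbered scripts run in order."""
    return sorted(Path(directory).glob("*.sql"))

stmts = sql_statements("CREATE TABLE users (id INT);\nINSERT INTO users VALUES (1);")
```

Each statement in `stmts` could then be passed to a database cursor one at a time; sorting the script files by name keeps numbered migration files applied in the intended order.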

1. Each of the other team members will receive an invitation to join.
1. Once you have accepted the invitation, you should clone the Team's Project Repo to your local machine.
1. Set up the `.env` file in the `api` folder based on the `.env.template` file.
1. For running the testing containers (for your team's repo):
1. `docker compose up -d` to start all the containers in the background
1. `docker compose down` to shutdown and delete the containers
1. `docker compose up db -d` only start the database container (replace db with api or app for the other two services as needed)
1. `docker compose stop` to "turn off" the containers but not delete them.
---

**Note:** You can also use the Docker Desktop GUI to start and stop the containers after the first initial run.
## User Personas & Stories

Persona 1: Avery - Freelance Designer
- Juggles client and personal projects.
- Needs task automation and habit tracking to stay consistent.
- Wants a visual dashboard for progress and deadlines.
- Requires space for creative ideas and manageable workflows.

## Handling User Role Access and Control
Persona 2: Dr. Alan - Professor
- Math professor balancing research and teaching.
- Needs categorized projects, priority control, and deadline management.
- Wants completed projects archived but accessible for reference.

In most applications, when a user logs in, they assume a particular role. For instance, when one logs in to a stock price prediction app, they may be a single investor, a portfolio manager, or a corporate executive (of a publicly traded company). Each of those _roles_ will likely present some similar features as well as some different features when compared to the other roles. So, how do you accomplish this in Streamlit? This is sometimes called Role-based Access Control, or **RBAC** for short.
Persona 3: Jose – System Administrator
- Oversees app scalability, user support, and community engagement.
- Requires bug tracking dashboard, user analytics, and payment plan insights.

The code in this project demonstrates how to implement a simple RBAC system in Streamlit but without actually using user authentication (usernames and passwords). The Streamlit pages from the original template repo are split up among 3 roles - Political Strategist, USAID Worker, and a System Administrator role (this is used for any sort of system tasks such as re-training ML model, etc.). It also demonstrates how to deploy an ML model.
Persona 4: Jack – Financial Analyst
- Tracks company goals and employee task completion.
- Needs subgoal checkboxes, deadlines, and aggregated progress reports.

Wrapping your head around this will take a little time and exploration of this code base. Some highlights are below.
---

### Getting Started with the RBAC
### Features
- Automatic project phase generation prevents overwhelming long-term goals
- Intelligent task queuing surfaces next actionable items automatically
- Comprehensive analytics dashboards provide insights into productivity patterns
- Role-based access control supports users with distinct permissions and views
- Community forum for user discussions, bug reports, and feedback
- Task and goal hierarchy with tags, priorities, and scheduling
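The README doesn't commit to a specific queuing algorithm, but one plausible reading of "intelligent task queuing" -- using the 1-4 priority scale that appears in `TO_ADD/01_Add_New_Project.py` -- is a priority queue over pending tasks, with the earliest deadline breaking ties:

```python
import heapq
from datetime import date

def next_task(tasks):
    """Pick the next actionable item: lowest priority number first
    (1 = most urgent), earliest deadline breaks ties; None if all done."""
    pending = [(t["priority"], t["deadline"], t["title"])
               for t in tasks if not t["done"]]
    return heapq.nsmallest(1, pending)[0][2] if pending else None

tasks = [
    {"title": "Write README",   "priority": 2, "deadline": date(2025, 8, 8), "done": False},
    {"title": "Record video",   "priority": 1, "deadline": date(2025, 8, 8), "done": False},
    {"title": "Seed mock data", "priority": 1, "deadline": date(2025, 8, 6), "done": True},
]
```

Completed items are filtered out before ranking, so finishing a task automatically surfaces the next one -- the behavior the feature list describes.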

1. We need to turn off the standard panel of links on the left side of the Streamlit app. This is done through the `app/src/.streamlit/config.toml` file. So check that out. We are turning it off so we can control directly what links are shown.
1. Then I created a new Python module in `app/src/modules/nav.py`. When you look at the file, you will see that there are functions for basically each page of the application. The `st.sidebar.page_link(...)` adds a single link to the sidebar. We have a separate function for each page so that we can organize the links/pages by role.
1. Next, check out the `app/src/Home.py` file. Notice that there are 3 buttons added to the page and when one is clicked, it redirects via `st.switch_page(...)` to that Roles Home page in `app/src/pages`. But before the redirect, I set a few different variables in the Streamlit `session_state` object to track role, first name of the user, and that the user is now authenticated.
1. Notice near the top of `app/src/Home.py` and all other pages, there is a call to `SideBarLinks(...)` from the `app/src/nav.py` module. This is the function that will use the role set in `session_state` to determine what links to show the user in the sidebar.
1. The pages are organized by Role. Pages that start with a `0` are related to the _Political Strategist_ role. Pages that start with a `1` are related to the _USAID worker_ role. And, pages that start with a `2` are related to The _System Administrator_ role.
---

## Notes on User Roles and Access Control
Our platform implements a simple Role-Based Access Control (RBAC) system, differentiating between:
- Individual users (freelancers, researchers)
- Business analysts and managers
- System administrators

## Incorporating ML Models into your Project (Optional for CS 3200)
Each role experiences a customized view with access to features relevant to their needs and permissions.
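At its simplest, this kind of RBAC is a mapping from role to the pages that role may see, in the spirit of `modules/nav.py`. The mapping below is hypothetical -- the professor pages follow the `TO_ADD/` examples, while the analyst and admin filenames are placeholders:

```python
# Hypothetical role-to-pages mapping; filenames for the analyst and admin
# roles are illustrative placeholders, not files in this repo.
ROLE_PAGES = {
    "professor": ["pages/01_Add_New_Project.py", "pages/02_Completed_Projects.py"],
    "analyst":   ["pages/20_Progress_Reports.py"],
    "admin":     ["pages/30_Bug_Reports.py", "pages/31_User_Analytics.py"],
}

def sidebar_links(role):
    """Return the sidebar links a user with this role should see
    (empty list for an unknown role)."""
    return ROLE_PAGES.get(role, [])
```

In the Streamlit app, the current role would come from `st.session_state` after the user picks a persona on the home page, and the returned paths would be fed to `st.sidebar.page_link(...)`.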

_Note_: This project only contains the infrastructure for a hypothetical ML model.
---

1. Collect and preprocess necessary datasets for your ML models.
1. Build, train, and test your ML model in a Jupyter Notebook.
- You can store your datasets in the `datasets` folder. You can also store your Jupyter Notebook in the `ml-src` folder.
1. Once your team is happy with the model's performance, convert your Jupyter Notebook code for the ML model to a pure Python script.
- You can include the `training` and `testing` functionality as well as the `prediction` functionality.
- Develop and test this pure Python script first in the `ml-src` folder.
- You may or may not need to include data cleaning, though.
1. Review the `api/backend/ml_models` module. In this folder,
- We've put a sample (read _fake_) ML model in the `model01.py` file. The `predict` function will be called by the Flask REST API to perform '_real-time_' prediction based on model parameter values that are stored in the database. **Important**: you would never want to hard code the model parameter weights directly in the prediction function.
1. The prediction route for the REST API is in `api/backend/customers/customer_routes.py`. Basically, it accepts two URL parameters and passes them to the `prediction` function in the `ml_models` module. The `prediction` route/function packages up the value(s) it receives from the model's `predict` function and sends it back to Streamlit as JSON.
1. Back in streamlit, check out `app/src/pages/11_Prediction.py`. Here, I create two numeric input fields. When the button is pressed, it makes a request to the REST API URL `/c/prediction/.../...` function and passes the values from the two inputs as URL parameters. It gets back the results from the route and displays them. Nothing fancy here.
## Contact & Support
For questions or bug reports, please open an issue in the GitHub repository or contact the system administrator (Ryan).
35 changes: 35 additions & 0 deletions TO_ADD/00_Dr_Alan_Home.py
@@ -0,0 +1,35 @@
import logging
logger = logging.getLogger(__name__)

import streamlit as st
from modules.nav import SideBarLinks

st.set_page_config(layout='wide')

# Show appropriate sidebar links for the role of the currently logged in user
#SideBarLinks()

#st.title(f"Welcome Professor, {st.session_state['first_name']}.")
st.write('')
st.write('')
st.write('### What would you like to do today?')

if st.button('Add new project',
type='primary',
use_container_width=True):
st.switch_page('pages/01_Add_New_Project.py')

if st.button('View completed projects',
type='primary',
use_container_width=True):
st.switch_page('pages/02_Completed_Projects.py')

if st.button('View project by tags',
type='primary',
use_container_width=True):
st.switch_page('pages/03_Project_Tags.py')

if st.button('Manage planner and tasks',
type='primary',
use_container_width=True):
st.switch_page('pages/04_Planner_And_Tasks.py')
39 changes: 39 additions & 0 deletions TO_ADD/01_Add_New_Project.py
@@ -0,0 +1,39 @@
import streamlit as st
import requests

from modules.nav import SideBarLinks
SideBarLinks(show_home=True)

st.title("Add New Project")

# Form inputs
userID = st.text_input("User ID")
tagID = st.text_input("Tag ID")
title = st.text_input("Title")
notes = st.text_area("Notes")
status = st.selectbox("Status", ["onIce", "inProgress", "completed"])
priority = st.slider("Priority", 1, 4, 4)
schedule = st.text_input("Deadline")

if st.button("Submit"):
project_data = {
"userID": userID,
"tagID": tagID,
"title": title,
"notes": notes,
"status": status,
"priority": priority,
"completedAt": completedAt or None,
"schedule": schedule
}

try:
# Replace this URL with your actual backend API URL
response = requests.post("http://localhost:4000/projects", json=project_data)

if response.status_code == 200:
st.success("Project added successfully!")
else:
st.error(f"Failed to add project: {response.text}")
except Exception as e:
st.error(f"Error: {e}")
29 changes: 29 additions & 0 deletions TO_ADD/02_Completed_Projects.py
@@ -0,0 +1,29 @@
import streamlit as st
import requests

st.title("Completed Projects")

try:
response = requests.get("http://localhost:4000/projects/completedprojects")

if response.status_code == 200:
projects = response.json()

if projects:
for project in projects:
st.subheader(project.get('title', 'Untitled Project'))
st.write(f"Notes: {project.get('notes', 'No notes')}")
st.write(f"Priority: {project.get('priority', 'N/A')}")
st.write(f"Completed At: {project.get('completedAt', 'N/A')}")
st.write("---")
else:
st.info("No completed projects found.")

elif response.status_code == 404:
st.info("No completed projects found.")

else:
st.error(f"Error fetching completed projects: {response.status_code}")

except Exception as e:
st.error(f"Failed to fetch projects: {e}")