diff --git a/README.md b/README.md
index f99e78f..c6c1f40 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,5 @@
 # Traffic-Management Project for both SPA and SPE
+
+
+Link to the [documentation (DOC).](https://fpirazz.github.io/Traffic-Management/)
diff --git a/docs/SAP/SAP_Project_Traffic_Management_System.pdf b/docs/SAP/SAP_Project_Traffic_Management_System.pdf
new file mode 100644
index 0000000..09913a1
Binary files /dev/null and b/docs/SAP/SAP_Project_Traffic_Management_System.pdf differ
diff --git a/docs/SPE/CD.md b/docs/SPE/CD.md
new file mode 100644
index 0000000..90d45ef
--- /dev/null
+++ b/docs/SPE/CD.md
@@ -0,0 +1,55 @@
+# Continuous Deployment with AWS and Kubernetes
+
+## Overview
+
+Continuous Deployment (CD) is a critical aspect of modern software development, automating the release and deployment processes to deliver new features and improvements seamlessly. In this project, CD is facilitated through the integration of Amazon Web Services (AWS) and Kubernetes.
+
+## Amazon Web Services (AWS)
+
+Amazon Web Services (AWS) is a comprehensive cloud computing platform offered by Amazon. It provides a wide range of services, including computing power, storage, databases, machine learning, analytics, and more. AWS enables developers to build scalable and flexible applications without the need for extensive infrastructure management.
+
+## Continuous Deployment Workflow
+
+The CD workflow is orchestrated using Kubernetes (the container orchestration platform), Docker (the frontend is hosted in a container), and AWS services; Kubernetes itself runs on Minikube, which requires a dedicated Minikube container to be started. Here's an overview of the process:
+
+1. **Minikube Setup**: Before starting the deployment, ensure Docker Desktop (or the Docker daemon) is running and Minikube is installed. Execute `minikube start` to initiate the Minikube cluster.
+
+2. **Kubernetes Deployment**: Navigate to the directory containing the CD files and execute the following command to deploy the Kubernetes services:
+
+    ```bash
+    kubectl apply -f intersection-agents-deployment.yaml,intersection-agents-service.yaml,spring-db-app-deployment.yaml,spring-db-app-service.yaml,user-db-deployment.yaml,user-db-service.yaml
+    ```
+
+3. **Docker Compose for Vue**: Build and run the Vue application container:
+
+    ```bash
+    docker compose build --no-cache
+    docker run -it --rm -d -p 8080:80 --name vue-app traffic-management-vue-app:latest
+    ```
+
+4. **Minikube Tunneling**: Start Minikube tunneling to expose services locally:
+
+    ```bash
+    minikube tunnel
+    ```
+
+5. **Check Service IPs**: In a separate terminal, verify that the external IPs of the services are set to their assigned Cluster-IPs:
+
+    ```bash
+    kubectl get svc
+    ```
+
+6. **Accessing the Application**: Visit the hostname where the application is hosted, on port 8080, in your browser to access the application. The Vue app communicates with the Kubernetes services (Spring, User-DB, Intersection Agents) through AWS.
+
+## AWS Integration
+
+AWS plays a crucial role in this CD workflow by hosting the Vue application, an Nginx reverse proxy, and the Kubernetes services.
+
+The containerized Vue application also contains an internal Nginx reverse proxy, used to properly redirect any Axios (RPC) requests made towards the services; in our case the requests are redirected to a second Nginx proxy running on the EC2 instance itself.
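+
+As an illustration of this routing, a minimal sketch of such an internal reverse-proxy configuration is shown below. It is only a sketch: the location path, upstream host name, and port are assumptions, not values taken from the repository's actual `nginx.conf`.
+
+```nginx
+server {
+    listen 80;
+
+    # Serve the compiled Vue bundle.
+    location / {
+        root /usr/share/nginx/html;
+        try_files $uri $uri/ /index.html;
+    }
+
+    # Forward Axios (RPC) calls to the second proxy on the EC2 host.
+    # "ec2-proxy" and port 8081 are placeholders for illustration.
+    location /api/ {
+        proxy_pass http://ec2-proxy:8081/;
+        proxy_set_header Host $host;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    }
+}
+```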
+The second Nginx proxy routes HTTP calls from the browser to the appropriate Kubernetes services, essentially telling the frontend where each REST endpoint lives.
+Finally, each Kubernetes service exposes an external IP and port so that incoming requests can be received and processed.
+
+This CD approach ensures that changes are seamlessly deployed to the Kubernetes cluster, providing a smooth and efficient development and deployment pipeline. It also adds a layer of security: the only port reachable from outside the EC2 instance is the one serving the frontend, so no requests can be made on any other port. Since this is an academic project and running services on AWS costs money, the EC2 instance is only switched on when a deployment needs to be tested; in a real-world scenario the instance(s) could be left running, possibly together with other EC2 features such as a static IP for the instance.
+
+
+[Go Back.](./index.md)
diff --git a/docs/SPE/CI.md b/docs/SPE/CI.md
new file mode 100644
index 0000000..26a1c6d
--- /dev/null
+++ b/docs/SPE/CI.md
@@ -0,0 +1,83 @@
+# Continuous Integration with GitHub Actions
+
+GitHub Actions provides a powerful platform for automating Continuous Integration (CI) workflows, allowing developers to streamline the build and testing processes of their projects directly within their GitHub repositories.
+
+## Build Gradle Project Workflow
+
+```yaml
+name: Build Gradle project
+
+on:
+  push:
+    branches:
+      - main
+      - develop
+
+jobs:
+  build-gradle-project:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout project sources
+        uses: actions/checkout@v4
+      - name: Set up JDK 19
+        uses: actions/setup-java@v3
+        with:
+          java-version: '19'
+          distribution: 'temurin' # Or other distributions like 'adopt'
+      - name: Setup Gradle
+        uses: gradle/actions/setup-gradle@v3
+      - name: Execute Gradle build
+        run: ./gradlew build
+```
+
+This GitHub Actions workflow automates the Gradle project's build process, triggered by pushes to the main and develop branches. The workflow sets up the environment with JDK 19, configures Gradle, and executes the build tasks.
+It ensures a consistent and efficient CI pipeline for Gradle-based development.
+
+## Automatic Deployment to AWS
+
+```yaml
+name: Deploy Application to AWS EC2 Instance
+on:
+  push:
+    branches:
+      - prepDeploy
+      - main
+
+jobs:
+  Deploy:
+    name: Deploy to EC2
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v2
+      - name: Build & Deploy
+        env:
+          PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
+          HOSTNAME: ${{secrets.SSH_HOST}}
+          USER_NAME: ${{secrets.USER_NAME}}
+
+        run: |
+          echo "$PRIVATE_KEY" > ./private_key && chmod 600 ./private_key
+          cat ./private_key
+          ssh -o StrictHostKeyChecking=no -i ./private_key ${USER_NAME}@${HOSTNAME} '
+
+
+          # We now have SSH access to the EC2 instance and can start the deployment.
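+          # The commands below run remotely over SSH. They first clean up anything
+          # left from a previous deployment (Kubernetes services, deployments, pods,
+          # and the old frontend container) before redeploying everything.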
+          kubectl delete --all svc
+          kubectl delete --all deploy
+          kubectl delete --all pods
+          sudo docker container rm vue-app
+
+          minikube start
+          cd my-dashboard/
+
+          sudo docker pull leomarzoli/traffic_management_system:latest
+          kubectl apply -f intersection-agents-deployment.yaml,intersection-agents-service.yaml,spring-db-app-deployment.yaml,spring-db-app-service.yaml,user-db-deployment.yaml,user-db-service.yaml
+          docker run -it --rm --add-host=host.docker.internal:host-gateway -d -p 8080:80 --name vue-app leomarzoli/traffic_management_system:latest
+
+          minikube tunnel -c
+          '
+```
+This YAML file configures a GitHub Actions workflow that deploys the application to an AWS EC2 instance. Triggered by a push to either the 'prepDeploy' or 'main' branch, the workflow establishes an SSH connection using credentials stored in the repository's secrets (in our case, since the EC2 instance costs money, the secret holding the host name has to be updated every time the instance is turned off and back on) and then runs the deployment steps on the remote machine. It first removes existing Kubernetes services, deployments, and pods, and deletes the containerized frontend, in case anything is still running from a previous deployment. It then starts Minikube, applies the Kubernetes deployments and services, and starts the frontend Docker container by pulling its image and running it, publishing port 8080 on the host and mapping it to port 80 inside the container. The goal is to automate the application deployment process on a remote machine, streamlining the development cycle and improving release management.
+
+[Go Back.](./index.md) [Go Next.](./containerization.md)
diff --git a/docs/SPE/build_automation.md b/docs/SPE/build_automation.md
new file mode 100644
index 0000000..1148bf4
--- /dev/null
+++ b/docs/SPE/build_automation.md
@@ -0,0 +1,98 @@
+# System Implementation
+
+After analyzing the project requirements, the application was implemented as three sub-applications (microservices), backed by an external Java-based SQL database named H2. The components are defined as follows:
+
+1. **userContext:**
+   - A Spring Boot Java project creating REST endpoints for frontend interaction.
+   - Utilizes Spring Boot's dialects for various DB interactions.
+   - All RPCs to userContext perform queries on the H2 Database.
+
+2. **H2 Database:**
+   - Derived from an existing image of a basic H2 Database implementation.
+   - Modified for deployment with all necessary components.
+
+3. **intersectionAggregate:**
+   - A Multi-Agent System implemented through the JaCaMo library.
+   - Leverages JaCaMo-Rest for RPC endpoints to access agent system artifacts.
+   - Deployed as a microservice.
+
+4. **tcm_frontend:**
+   - A simple Vue application serving as the frontend.
+   - When deployed as a container, it includes an internal Nginx reverse-proxy server.
+   - The Nginx server redirects calls from the Vue app to the right endpoints and resolves CORS policy issues.
+
+## Continuous Delivery
+
+To automate the initial execution of the system, Gradle (v8.2) was employed, and a series of "build.gradle" files were defined. These files specify how the project is compiled, manage dependencies, and configure other aspects of the build process.
+
+### Implementation Strategy:
+
+The implementation strategy employs a hierarchical structure, with each microservice having its own "build.gradle" file.
These microservices are then linked via dependencies to a more generic "build.gradle" file, which defines several shared tasks.
+
+The structure ensures modularization and allows for centralized management of common build tasks. This approach enhances maintainability and streamlines the continuous delivery process.
+
+This is what the main, generalized "build.gradle" file looks like:
+
+ +
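+In case the screenshot above does not render, here is a minimal, illustrative sketch of such an aggregator script. It assumes the Groovy DSL and subproject paths named after the microservices; the exact wiring in the repository's real file may differ.
+
+```groovy
+// Root build.gradle (sketch): aggregates tasks defined in the subprojects.
+task runAll {
+    group = 'application'
+    description = 'Runs every component of the application.'
+    dependsOn ':intersectionAggregate:runAgents',
+              ':userContext:runUserApplication',
+              ':tcm_frontend:npmBuildProject'
+}
+
+task cleanAll {
+    description = 'Removes the files generated by compiling each microservice.'
+    dependsOn subprojects.collect { "${it.path}:clean" }
+}
+
+task buildAll {
+    description = 'Builds every microservice.'
+    dependsOn subprojects.collect { "${it.path}:build" }
+}
+
+task cleanAndBuildAll {
+    description = 'Cleans and then rebuilds every microservice.'
+    dependsOn cleanAll, buildAll
+    buildAll.mustRunAfter cleanAll
+}
+```
+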
+
+In the "build.gradle" file, the following tasks are defined:
+
+1. **runAll:**
+   - Executes all the necessary components of the application.
+
+2. **cleanAll:**
+   - Cleans the various microservices, removing the files generated during the compilation phase.
+
+3. **buildAll:**
+   - Builds the various microservices.
+
+4. **cleanAndBuildAll:**
+   - Executes all the previously described tasks.
+
+These tasks are designed to streamline the build and deployment process. Running `cleanAndBuildAll` ensures a fresh start by cleaning the microservices and then building them, providing a comprehensive approach to managing the application's lifecycle.
+
+The next sections highlight the most relevant task defined for each microservice.
+
+ +1. **runAgents:** + - Executes a JaCaMo application using the class `jacamo.infra.JaCaMoLauncher`. + - Before execution, it creates a log directory. + - Depends on the "classes" task to ensure correct compilation of source code. + - The argument "intersection.jcm" is passed, representing the file containing the agent environment. + - Configures the classpath with necessary dependencies. + + The `runAgents` task, in particular, is crucial for launching JaCaMo applications (agents) with the necessary configurations. + ++ +
+ +2. **runUserApplication:** + - Runs a user context application using the class `com.userContext.infrastructure_layer.springBoot.UserApplication`. + - Configures the classpath with the necessary dependencies using the runtimeClasspath of the main source set + +In the `build.gradle` file of the `tcm_frontend` microservice, the following task is defined for building the Vue project: + ++ +
+ +1. **npmBuildProject:** + - Groups tasks necessary for building a Vue project. + - Installs dependencies defined in `package.json` using the "npmInstallProject" task. + - Cleans the project with "npmClean" by removing the "dist" and "build" directories. + - Runs the build script via "npmRunBuild". + +### Conclusion and usage: +So as a final step to actually test the build automation of the application one can simply navigate to the project root and execute the command: + +```bash +gradle cleanAndBuildAll +``` + +[Go Back.](./index.md) [Go Next.](./CI.md) diff --git a/docs/SPE/containerization.md b/docs/SPE/containerization.md new file mode 100644 index 0000000..f0b8c10 --- /dev/null +++ b/docs/SPE/containerization.md @@ -0,0 +1,83 @@ +# Containerization using Docker + +Containerization, facilitated by Docker, plays a crucial role in efficiently isolating and distributing applications. Docker containers encapsulate everything needed to run an application, ensuring consistency across various environments. This approach simplifies distribution, versioning, and dependency management, enhancing the overall portability of applications. + +To containerize the system, a strategy was devised to incorporate a Dockerfile within each microservice, tailoring it to the specific platform used for the service. From JaCaMo for the agents to Spring for the backend and Vue for the frontend, each Dockerfile is crafted accordingly. + +### Dockerfiles and Descriptions: + +#### 1. **JaCaMo Agents - Dockerfile for Build and Execution Phases:** + ++ +
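+The bullets below describe the actual Dockerfile committed to the repository; since its screenshot may not render here, the following is a rough sketch consistent with that description. Base-image tags and paths are assumptions.
+
+```dockerfile
+# Build phase (sketch; tags and paths are assumptions).
+FROM gradle:8.6.0-jdk21-alpine AS build
+COPY --chown=gradle:gradle . /home/gradle/src
+WORKDIR /home/gradle/src
+RUN gradle wrapper && ./gradlew build --parallel
+
+# Execution phase.
+FROM openjdk:19-alpine
+WORKDIR /app
+EXPOSE 9080
+COPY --from=build /home/gradle/src /app
+CMD ["./gradlew", "runAgents"]
+```
+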
+ + - *Build Phase:* + - Starts from a Gradle image version 8.6.0 with JDK 21 on Alpine Linux. + - Copies the source code into the directory /home/gradle/src. + - Sets the working directory. + - Executes the `gradle wrapper` command. + - Builds the project with `./gradlew build --parallel`. + + - *Execution Phase:* + - Uses an OpenJDK image version 19 on Alpine Linux. + - Sets the working directory for the application to /app. + - Exposes port 9080. + - Copies the source code from the build phase. + - Executes the Gradle task "runAgents" during application startup. + +#### 2. **Spring Boot Application - Dockerfile for Build and Deployment Phases:** + ++ +
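+Again purely as a sketch (image tags, paths, and the jar name are assumptions), a two-stage Dockerfile matching the description below could look like this:
+
+```dockerfile
+# Build phase (sketch; tags and paths are assumptions).
+FROM gradle:8.2.0-jdk17-alpine AS build
+COPY --chown=gradle:gradle . /home/gradle/src
+WORKDIR /home/gradle/src
+EXPOSE 9085 9092
+RUN gradle wrapper && ./gradlew build --parallel
+
+# Deployment phase.
+FROM openjdk:19
+RUN mkdir /app
+COPY --from=build /home/gradle/src/build/libs/ /app/
+# The jar name below is a placeholder; the real artifact name depends on the build.
+ENTRYPOINT ["java", "-jar", "/app/spring-db-app.jar"]
+```
+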
+ + - *Build Phase:* + - Utilizes the base Gradle image version 8.2.0 with Alpine Linux. + - Copies the project content into the container's directory. + - Sets the working directory. + - Exposes ports 9085 and 9092. + - Configures Gradle Wrapper. + - Builds the project using `./gradlew build` in parallel mode. + + - *Deployment Phase:* + - Uses an OpenJDK image version 19. + - Creates a directory "/app" within the container. + - Copies the build result from the previous phase into the "/app" directory. + - Specifies the entrypoint to execute the Spring Boot application upon container launch. + +#### 3. **Node.js Application with Nginx - Dockerfile for Build and Production Phases:** + ++ +
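+As with the previous services, the following is only a sketch (image tags, the `dist` output folder, and the config path are assumptions) of the two-stage Dockerfile described below:
+
+```dockerfile
+# Build phase with Node.js (sketch; tags and paths are assumptions).
+FROM node:lts-alpine AS build
+WORKDIR /app
+COPY . .
+RUN npm install && npm run build
+
+# Production phase.
+FROM nginx:stable-alpine
+COPY --from=build /app/dist /usr/share/nginx/html
+COPY nginx.conf /etc/nginx/conf.d/default.conf
+EXPOSE 80 8080
+CMD ["nginx", "-g", "daemon off;"]
+```
+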
+
+   - *Build Phase with Node.js:*
+     - Utilizes a Node.js LTS image on Alpine Linux.
+     - Sets the working directory to `/app`.
+     - Copies the source code into the current directory.
+     - Executes the `npm install` command.
+     - Executes the `npm run build` command.
+
+   - *Production Phase:*
+     - Utilizes a stable Nginx image on Alpine Linux.
+     - Copies the build result from the previous phase into the Nginx application directory.
+     - Copies the Nginx configuration file.
+     - Exposes ports 80 and 8080.
+     - Starts Nginx with the command `nginx -g 'daemon off;'` during container execution.
+
+### Docker Compose:
+
+The Dockerfiles are orchestrated using Docker Compose, a tool that simplifies the management of multi-container Docker applications. The `compose.yml` file defines the configuration, services, and dependencies of the application.
+
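+As a hedged illustration of its shape (service and image names, build contexts, and ports are assumptions, not the repository's actual values), the `compose.yml` could look roughly like this; the bullets below describe the real file:
+
+```yaml
+# compose.yml (illustrative sketch only)
+version: "3.8"
+
+services:
+  vue-app:
+    build: ./tcm_frontend
+    ports:
+      - "8080:80"
+
+  user-db:
+    image: oscarfonts/h2   # assumption: an existing H2 image is used
+    ports:
+      - "9092:9092"
+
+  spring-db-app:
+    build: ./userContext
+    ports:
+      - "9085:9085"
+    depends_on:
+      - user-db
+
+  intersection-agents:
+    build: ./intersectionAggregate
+    ports:
+      - "9080:9080"
+```
+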
+ + - Specifies the version of the Compose file format. + - Defines four distinct services: Vue.js application, user database, Spring application with a database, and a generic application with agents. + - Configures mapped ports for each service. + +Running the command `docker compose up` will initiate the Docker containers, automatically executing `gradlew` within them. This fully automates the system, providing a seamless and consistent deployment environment. + +[Go Back.](./index.md) [Go Next.](./CD.md) diff --git a/docs/SPE/domain-driven-approach-summary.md b/docs/SPE/domain-driven-approach-summary.md new file mode 100644 index 0000000..0edf9c5 --- /dev/null +++ b/docs/SPE/domain-driven-approach-summary.md @@ -0,0 +1,70 @@ +# Italian Traffic Management System + +## Explaining the Premise + +As outlined in the project proposal, the objective is to develop an application capable of performing Traffic Management, referred to as TMS or Traffic Management System. The focus is on mimicking and simulating how a system would handle traffic at intersections, based on Italian driving laws. Specifically, the goal is to influence the timing of Traffic Lights based on the number of cars present at each intersection. + +The domain for this application encompasses traffic, driving laws, conventions, and habits of land drivers. It's important to note that "Traffic Management" in this context specifically refers to the management of ground vehicle traffic, as opposed to Naval and Air traffic, which are separate domains. + +## Knowledge Crunching by Event Storming + +Knowledge Crunching was executed using an Event Storming methodology, facilitated by the Miro tool. The process involved the collaborative gathering of ideas and knowledge on the topic. Each team member acquired information independently before coming together to consolidate their insights on the domain. + +Miro simplified this process by enabling the construction of a Mind Map. This map contained key concepts related to the project's environment. Using words and connecting them, hierarchies of concepts were formed to enhance the understanding of the domain. + ++ +
+ +# Knowledge Crunching: Ubiquitous Language and Requirements Analysis + +## Forming Ubiquitous Language + +After constructing the Mind Map, we established a Ubiquitous Language for the domain. This language serves as a detailed glossary for names and concepts used throughout the project. Key terms include: + +- **Intersections:** Location where roads meet, featuring Traffic Lights for traffic management. +- **Vehicle/Standard Vehicle:** Autonomous road entity traveling on Lanes. +- **Emergency Vehicle:** Vehicle with priority in lane travel. +- **Lane/s:** Road segments for vehicle travel. +- **Vehicles on Lanes (VoL):** Concept denoting the number of Vehicles on a Lane at a given time. +- **Monitoring:** Users observe Intersection states, including Traffic Lights and Lanes. +- **Manually Operating Intersections (MOI):** Operators control Traffic Lights at Intersections. +- **Traffic Lights:** Control Intersection flow with Red, Yellow, and Green states. +- **Driver:** Entity logging into the system and capable of Monitoring. +- **Operator:** User with advanced privileges capable of MOI. +- **Priority:** Privilege for certain Vehicles due to travel nature. + +## Requirements Analysis + +The Requirements Analysis, stemming from Knowledge Crunching, details high-level functional requirements for the project. The Traffic Management System aims to: + +- Manage autonomous Traffic Lights at intersections to optimize traffic flow. +- Feature a front-end for Users to observe Intersection states and manually control Traffic Lights. +- Allow Traffic Lights to operate automatically, making decisions based on the number of vehicles in each lane. +- Track the number of lanes, vehicles in each lane (with vehicle type), and Traffic Light states for each Intersection. +- Enable Operators to manually override Traffic Light operation. +- Automatically manage traffic based on predetermined or changeable configurations. +- Determine prioritization of lanes based on vehicle count and Emergency Vehicles. + +## Additional Development Tasks + +- **Integration of Continuous Integration via GitHub Action** +- **Extension of Initial Part on Domain-Driven Design (DDD)** +- **Integration of Continuous Delivery via Gradle** + - Include Gradle files in each microservice (hierarchy). + - Develop specific tasks for each microservice type (e.g., build all and clean all). + +## Containerization + +- Develop Docker images for each microservice. +- Create a Docker-compose file grouping containers and defining dependencies. + +## Context Map + ++ + +
+ + +[Go Back.](./index.md) diff --git a/docs/SPE/img/admin-user.png b/docs/SPE/img/admin-user.png new file mode 100644 index 0000000..e0ba6cf Binary files /dev/null and b/docs/SPE/img/admin-user.png differ diff --git a/docs/SPE/img/agent-deploy.png b/docs/SPE/img/agent-deploy.png new file mode 100644 index 0000000..f2d1dc1 Binary files /dev/null and b/docs/SPE/img/agent-deploy.png differ diff --git a/docs/SPE/img/agent-service.png b/docs/SPE/img/agent-service.png new file mode 100644 index 0000000..de31291 Binary files /dev/null and b/docs/SPE/img/agent-service.png differ diff --git a/docs/SPE/img/automation1.png b/docs/SPE/img/automation1.png new file mode 100644 index 0000000..2105d65 Binary files /dev/null and b/docs/SPE/img/automation1.png differ diff --git a/docs/SPE/img/automation2.png b/docs/SPE/img/automation2.png new file mode 100644 index 0000000..2792217 Binary files /dev/null and b/docs/SPE/img/automation2.png differ diff --git a/docs/SPE/img/automation3.png b/docs/SPE/img/automation3.png new file mode 100644 index 0000000..2d1663c Binary files /dev/null and b/docs/SPE/img/automation3.png differ diff --git a/docs/SPE/img/automation4.png b/docs/SPE/img/automation4.png new file mode 100644 index 0000000..77e01fc Binary files /dev/null and b/docs/SPE/img/automation4.png differ diff --git a/docs/SPE/img/container1.png b/docs/SPE/img/container1.png new file mode 100644 index 0000000..fd5b65f Binary files /dev/null and b/docs/SPE/img/container1.png differ diff --git a/docs/SPE/img/container2.png b/docs/SPE/img/container2.png new file mode 100644 index 0000000..b6149b7 Binary files /dev/null and b/docs/SPE/img/container2.png differ diff --git a/docs/SPE/img/container3.png b/docs/SPE/img/container3.png new file mode 100644 index 0000000..041f15a Binary files /dev/null and b/docs/SPE/img/container3.png differ diff --git a/docs/SPE/img/context-map1.png b/docs/SPE/img/context-map1.png new file mode 100644 index 0000000..809cac4 Binary files /dev/null and b/docs/SPE/img/context-map1.png differ diff --git a/docs/SPE/img/context-map2.png b/docs/SPE/img/context-map2.png new file mode 100644 index 0000000..1ae3398 Binary files /dev/null and b/docs/SPE/img/context-map2.png differ diff --git a/docs/SPE/img/docker-compose.png b/docs/SPE/img/docker-compose.png new file mode 100644 index 0000000..9b199df Binary files /dev/null and b/docs/SPE/img/docker-compose.png differ diff --git a/docs/SPE/img/mind-map.jpg b/docs/SPE/img/mind-map.jpg new file mode 100644 index 0000000..af9d630 Binary files /dev/null and b/docs/SPE/img/mind-map.jpg differ diff --git a/docs/SPE/img/spring-db-deploy.png b/docs/SPE/img/spring-db-deploy.png new file mode 100644 index 0000000..499ff61 Binary files /dev/null and b/docs/SPE/img/spring-db-deploy.png differ diff --git a/docs/SPE/img/spring-db-service.png b/docs/SPE/img/spring-db-service.png new file mode 100644 index 0000000..7018444 Binary files /dev/null and b/docs/SPE/img/spring-db-service.png differ diff --git a/docs/SPE/img/user-db-deploy.png b/docs/SPE/img/user-db-deploy.png new file mode 100644 index 0000000..4e30f2c Binary files /dev/null and b/docs/SPE/img/user-db-deploy.png differ diff --git a/docs/SPE/img/user-db-service.png b/docs/SPE/img/user-db-service.png new file mode 100644 index 0000000..402bfea Binary files /dev/null and b/docs/SPE/img/user-db-service.png differ diff --git a/docs/SPE/img/vue-deploy.png b/docs/SPE/img/vue-deploy.png new file mode 100644 index 0000000..82ec2ba Binary files /dev/null and b/docs/SPE/img/vue-deploy.png differ diff --git 
a/docs/SPE/img/vue-service.png b/docs/SPE/img/vue-service.png new file mode 100644 index 0000000..6fd4f5a Binary files /dev/null and b/docs/SPE/img/vue-service.png differ
diff --git a/docs/SPE/index.md b/docs/SPE/index.md
index 8a78a38..ea5fc42 100644
--- a/docs/SPE/index.md
+++ b/docs/SPE/index.md
@@ -1,2 +1,35 @@
-FEDE SEI BELLISSIMO
-[Home.](../index.md)
+# Software Process Engineering - Italian Traffic Management System
+
+## Overview
+
+- **Premise**
+  - Develops a Traffic Management System (TMS)
+  - Simulates traffic at intersections
+  - Influences Traffic Lights based on Italian driving laws
+
+## Domain-Driven Approach
+
+- [**Domain-Driven Design - Summary.**](./domain-driven-approach-summary.md)
+
+## DevOps Scenario
+
+- [**Versioning and Licensing**](./versioning_licesing.md)
+
+- [**Version Control**](./version_control.md)
+
+- [**Build Automation**](./build_automation.md)
+
+- [**Continuous Deployment**](./CD.md)
+
+- [**Continuous Integration**](./CI.md)
+
+- [**Containerization**](./containerization.md)
+
+- [**Orchestration**](./orchestration.md)
+
+## Conclusion
+
+- Apply software process engineering practices to the Italian Traffic Management System project summarized in the pages above.
+
+
+[Go Back.](../index.md)
diff --git a/docs/SPE/orchestration.md b/docs/SPE/orchestration.md
new file mode 100644
index 0000000..ce2ede8
--- /dev/null
+++ b/docs/SPE/orchestration.md
@@ -0,0 +1,76 @@
+# Kubernetes Orchestration
+
+Several components and configurations have been implemented to orchestrate the services with Kubernetes and make them work together seamlessly within the cluster. The key aspects of the Kubernetes setup are described below.
+
+## Metrics Server Deployment
+
+To enhance monitoring capabilities, the Metrics Server has been integrated into the Kubernetes cluster. This server facilitates the collection and retrieval of crucial metrics related to pods and nodes. The following components, defined in `components.yaml`, contribute to this deployment:
+
+- **Service Account:** `metrics-server-service-account.yaml`
+- **Cluster Roles:** `metrics-server-cluster-roles.yaml`
+- **Role Binding:** `metrics-server-role-binding.yaml`
+- **Service:** `metrics-server-service.yaml`
+- **Deployment:** `metrics-server-deployment.yaml`
+
+These files collectively establish the Metrics Server, granting it the necessary permissions to access metrics and ensuring its availability within the `kube-system` namespace.
+
+## Kubernetes Dashboard Admin User
+
+For managing the Kubernetes Dashboard effectively, an admin user has been defined. The files `dashboard-service-account.yaml` and `dashboard-cluster-role-binding.yaml` create a service account and establish the necessary role bindings, enabling the admin-user service account to assume the `cluster-admin` role. This ensures privileged access to the Kubernetes Dashboard.
+
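+For reference, manifests of this kind typically look like the sketch below; the namespace and exact naming follow the standard Kubernetes Dashboard setup and are assumptions rather than copies of the repository files.
+
+```yaml
+# dashboard-service-account.yaml (sketch)
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: admin-user
+  namespace: kubernetes-dashboard
+---
+# dashboard-cluster-role-binding.yaml (sketch)
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: admin-user
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: cluster-admin
+subjects:
+  - kind: ServiceAccount
+    name: admin-user
+    namespace: kubernetes-dashboard
+```
+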
+ +## Intersection Agents Deployment + +The deployment of intersection agents involves creating a Kubernetes Deployment and Service for the `intersection-agents` microservice. The following files are involved: + +- **Deployment:** `intersection-agents-deployment.yaml` +- **Service:** `intersection-agents-service.yaml` + +These files deploy and expose the `intersection-agents` microservice within the Kubernetes cluster, paving the way for effective traffic management simulations. + ++ + +
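+To give an idea of their structure, a Deployment/Service pair of this kind might look like the sketch below; the image name, replica count, labels, and ports are assumptions, not the repository's actual values. The spring-db-app, user-db, and vue-app manifests described in the following sections follow the same pattern with their own images and ports.
+
+```yaml
+# intersection-agents-deployment.yaml (illustrative sketch)
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: intersection-agents
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: intersection-agents
+  template:
+    metadata:
+      labels:
+        app: intersection-agents
+    spec:
+      containers:
+        - name: intersection-agents
+          image: traffic-management-intersection-agents:latest   # assumption
+          ports:
+            - containerPort: 9080
+---
+# intersection-agents-service.yaml (illustrative sketch)
+apiVersion: v1
+kind: Service
+metadata:
+  name: intersection-agents
+spec:
+  type: LoadBalancer   # receives an external IP via `minikube tunnel`
+  selector:
+    app: intersection-agents
+  ports:
+    - port: 9080
+      targetPort: 9080
+```
+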
+ +## Spring Database Application Deployment + +The Spring database application, a crucial component of the system, is orchestrated using Kubernetes. The files `spring-db-app-deployment.yaml` and `spring-db-app-service.yaml` manage the deployment and service aspects, ensuring seamless communication and accessibility. + ++ + +
+ +## User Database Deployment + +The User database, based on the SAP SPE, is deployed using Kubernetes resources. The files `user-db-deployment.yaml` and `user-db-service.yaml` govern the deployment and service configurations, facilitating user data management within the cluster. + ++ + +
+ +## Vue Application Deployment + +The frontend Vue.js application is deployed and exposed using Kubernetes resources. The files `vue-app-deployment.yaml` and `vue-app-service.yaml` dictate the deployment and service configurations, allowing users to interact with the system through the Vue.js application. + ++ + +
+
+## Cluster Overview
+
+The orchestrated components form a cohesive Kubernetes cluster, ensuring the following functionality:
+
+- **Metrics Server:** Monitors and collects metrics related to pods and nodes.
+- **Dashboard Admin User:** Empowers administrative access to the Kubernetes Dashboard.
+- **Intersection Agents:** Manages traffic simulations and interactions.
+- **Spring Database Application:** Handles data storage and retrieval for the Spring application.
+- **User Database:** Instantiates a database that can be used to store users' info.
+- **Vue Application:** Facilitates user interaction with the system through the Vue.js frontend.
+
+This orchestrated Kubernetes environment provides a robust foundation for the Traffic Management System, encompassing monitoring, data storage, and user interaction seamlessly.
diff --git a/docs/SPE/version_control.md b/docs/SPE/version_control.md
new file mode 100644
index 0000000..c9ec172
--- /dev/null
+++ b/docs/SPE/version_control.md
@@ -0,0 +1,39 @@
+# Version Control and Development Workflow
+
+## Git Branch Structure
+
+Our Git repository is organized with four primary branches:
+
+- **main:** Represents the production-ready code. Official releases are merged into this branch at the end of the project.
+- **develop:** The development branch where active development takes place. New features and bug fixes are developed here.
+- **prepDeploy:** The preparations and testing for future deployments are done on this branch, usually by merging changes made on develop.
+- **doc:** Dedicated to documentation updates. Changes related to documentation are made and committed in this branch.
+
+## Git Rebase Policy
+
+We have adopted a rebase policy to maintain a clean and linear project history. When working on the develop branch, instead of merging changes from main, we use interactive rebasing to integrate the latest changes. This keeps the commit history streamlined and avoids unnecessary merge commits.
+
+## Git Commit Verification
+
+All commits in our repository are verified using GPG signatures. This ensures the authenticity of the commits and helps maintain the integrity of the project history. The GPG signature is generated by adding a secret key to GitHub Secrets, adding an extra layer of security to our commits.
+
+## Branch Policy
+
+To control the flow of changes into the main branch, we have implemented a branch policy: direct pushes to main are restricted. Developers are required to create a feature branch from develop, develop the feature, and then submit a pull request.
+
+## Pull Request and Code Review
+
+Our development workflow includes the following steps:
+
+1. Developers create a feature branch from develop for new features or bug fixes.
+2. After completing the development, developers submit a pull request.
+3. Pull requests require approval from at least two reviewers, considering our group size of three members.
+4. Once the pull request is approved, it can be rebased into the develop branch.
+5. Once the rebase is done and has been verified, the feature branch is deleted.
+
+This approach ensures that changes are thoroughly reviewed before integration, maintaining code quality and consistency.
+
+## Conventional Commits
+
+We adhere to the Conventional Commits specification for our commit messages. This convention helps in tracking the progress of the project more efficiently and facilitates automated versioning and changelog generation.
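+For example, commit messages following this convention might look like the lines below (scopes and wording are illustrative):
+
+```text
+feat(tcm_frontend): add manual traffic-light override panel
+fix(userContext): reject login requests with empty credentials
+docs(spe): describe the Kubernetes orchestration setup
+```
+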
+ +[Go Back.](./index.md) [Go Next.](./build_automation.md) diff --git a/docs/SPE/versioning_licesing.md b/docs/SPE/versioning_licesing.md new file mode 100644 index 0000000..bf991e5 --- /dev/null +++ b/docs/SPE/versioning_licesing.md @@ -0,0 +1,23 @@ +# Versioning and Licensing + +## Licensing + +For this project, we have opted to use the [MIT License](https://opensource.org/licenses/MIT). The MIT License is known for its permissive nature, allowing users to freely use, modify, and distribute the software, with minimal restrictions. This choice aligns with our commitment to an open and collaborative development environment. + +## Versioning + +To manage versioning effectively, we have incorporated the Gradle plugin **org.danilopianini.git-sensitive-semantic-versioning** into our project. This plugin, currently at version 0.1.0, provides a sophisticated approach to versioning that takes into account the sensitivity of the changes made in the Git repository. + +### Features of the Plugin: + +- **Semantic Versioning:** The plugin follows the principles of semantic versioning (SemVer), ensuring version numbers reflect the nature of changes made. + +- **Git Sensitivity:** By analyzing Git commit messages, the plugin intelligently determines the impact of changes on versioning. This sensitivity allows for a more accurate versioning strategy. + +- **Automated Version Bumping:** With each commit, the plugin automatically bumps the version based on the identified changes. This automation streamlines the versioning process, reducing the burden on developers. + +- **Integration with Gradle:** Seamless integration into the Gradle build system facilitates easy adoption and incorporation into the overall development workflow. + +By leveraging the capabilities of this Gradle plugin, we aim to maintain a clear and meaningful version history that reflects the evolution of our project in a systematic and informative manner. + +[Go Back.](./index.md) [Go Next.](./version_control.md) diff --git a/docs/index.md b/docs/index.md index 0167d20..4340d30 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,37 +1,5 @@ -# Introduzione - -L'obiettivo del progetto è la realizzazione di un clone del gioco Metal Slug 3, famoso platformer degli anni 2000. - -Il gioco consiste nel guidare un soldato, con una pistola e 10 bombe a mano nello zainetto, lungo il territorio nemico e farsi strada attraverso diversi nemici. Durante la sua missione, il soldato può scegliere di salire su una serie di macchinari chiamati Slug che caratterizzano la serie. Questi Slug, ovviamente, danno al giocatore da un lato una maggiore potenza di fuoco e, dall'altro, una migliore resistenza ai colpi subiti. - -## Obiettivo del gioco - - Completare la missione assegnata terminando con una vittoria. - -## Requisiti Obbligatori - -- Creazione di un motore di gioco personalizzato: la logica di esecuzione del platform viene creata ad hoc senza ricorrere all’utilizzo di game engine esterni. -- Implementazione del gioco: si vuole replicare quanto più fedelmente le meccaniche di gioco originali, tra cui quelle essenziali che risultano: - - Salto. - - Arrampicata. - - Accovacciarsi. - - Sparare. - - Lanciare bombe. - - Raccolta di munizioni casualmente generate e inserite all’interno della scena di gioco. - - Raccolta di armi casualmente generate e inserite all’interno della scena di gioco. - - Utilizzare uno “Slug”. -- Gestione degli NPC (Non-Playable Character): definizione di un modello di gioco per entità malevoli atte ad ostacolare il giocatore. 
-Rendering grafico: creazione di una finestra visiva per visualizzare il campo di gioco. - -## Requisiti Opzionali - -- Aggiunta di meccaniche di gioco come: - - Liberare prigionieri. - - Un sistema basato su punteggio. - - Un sistema a vite del giocatore. -- Incremento di difficoltà del sistema, tarato su “facile”, “medio” e “difficile”. -- Creazione di sessioni di gioco cooperative sulla stessa macchina fisica. -- Creazione di una versione multiplayer distribuita.
+Home page.
+Choose the subject you want to explore:
 # Link alle pagine della repo. 1. [Report di SAP](./SPA/index.md).