[Blog] New Blog on the Builder Service #16

Open: wants to merge 1 commit into `main`
@@ -0,0 +1,53 @@
---
title: Neura Launch Builder Service
pubDatetime: 2023-10-31T15:22:00Z
postSlug: launch-ml-models-like-builder
featured: true
draft: false
tags:
- machine learning, devops, go, docker
ogImage: ""
description: The Heart of the Project - Neura Launch Builder Service
category: project
projectTitle: "Neura Launch"
---

# The Builder Service

## PROLOGUE

In this blog, we will explore the Builder Service, a part of the larger project called NeuraLaunch. NeuraLaunch is an open-source organisation that simplifies the deployment of machine learning models. In our previous blogs we covered the project architecture and the CLI tool. Here, we will focus on the functionality of the Builder Service, specifically the three endpoints that comprise it:

- `/downloadFiles`
- `/verifyChecksum`
- `/buildDockerImage`
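Before diving into each endpoint, here is a minimal sketch (not the actual implementation) of how these three routes might be wired up in Go, the language the service is tagged with; the handler bodies are hypothetical placeholders:

```go
package main

import (
	"fmt"
	"net/http"
)

// endpoints lists the three routes the Builder Service exposes.
func endpoints() []string {
	return []string{"/downloadFiles", "/verifyChecksum", "/buildDockerImage"}
}

// newMux registers a placeholder handler for each endpoint.
// The real behaviour of each route is described in the rest of this post.
func newMux() *http.ServeMux {
	mux := http.NewServeMux()
	for _, path := range endpoints() {
		p := path // capture loop variable
		mux.HandleFunc(p, func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintf(w, "stub for %s\n", p)
		})
	}
	return mux
}

func main() {
	_ = newMux() // in the real service: http.ListenAndServe(":8080", newMux())
	fmt.Println(endpoints())
}
```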

## Introduction: The Builder Service

The Builder Service plays a crucial role in the NeuraLaunch ecosystem. It is the heart of the project, providing the APIs for your model. Its primary responsibility is to download files from the S3 bucket and perform the verifications necessary to ensure data integrity and security.

Neura Launch is a platform that takes care of the complex task of deploying machine learning models, providing users with a hassle-free experience. With the Neura Launch CLI tool and library, users can quickly deploy their models without worrying about the technical intricacies.

### The `/downloadFiles` endpoint

One of the key features of the Builder Service is the `/downloadFiles` endpoint. This endpoint allows users to retrieve the necessary files from the S3 bucket. By providing the required parameters, users can easily access the files they need for their machine learning projects.

When using the `/downloadFiles` endpoint, users can specify the files they need for their specific project. The Builder Service handles the process of fetching these files from the S3 bucket and making them available for download. This eliminates the need for users to manually search for and download the files themselves.
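As a rough illustration (the parameter name `key` and the validation rule are assumptions, and the S3 call is stubbed out), a handler for this endpoint might look like:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// validKey does a naive sanity check on the requested object key:
// it must be non-empty and must not try to escape the bucket prefix.
func validKey(key string) bool {
	return key != "" && !strings.Contains(key, "..")
}

// downloadFilesHandler sketches the /downloadFiles endpoint. In the
// real service this would stream the object from the S3 bucket; here
// the S3 call is replaced by a stub so the sketch stays self-contained.
func downloadFilesHandler(w http.ResponseWriter, r *http.Request) {
	key := r.URL.Query().Get("key")
	if !validKey(key) {
		http.Error(w, "invalid or missing key", http.StatusBadRequest)
		return
	}
	fmt.Fprintf(w, "would stream S3 object: %s\n", key)
}

func main() {
	// In the real service the handler would be registered and served:
	// http.HandleFunc("/downloadFiles", downloadFilesHandler)
	fmt.Println(validKey("models/weights.bin"), validKey("../escape")) // prints: true false
}
```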

### The `/verifyChecksum` endpoint

To ensure the integrity of the downloaded files, the Builder Service offers the `/verifyChecksum` endpoint. Users are required to provide the checksum of the files from their local system, and the Builder Service calculates its own checksum for comparison. This process guarantees that the downloaded files have not been tampered with or corrupted during the transfer.

The `/verifyChecksum` endpoint provides an added layer of security and confidence in the downloaded files. By comparing the user-provided checksum with the one calculated by the Builder Service, any discrepancies or potential tampering can be detected.
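The core of this check can be sketched in Go as follows; SHA-256 is an assumption here, since the post does not say which hash the service actually uses:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// checksumSHA256 returns the hex-encoded SHA-256 digest of data.
func checksumSHA256(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

// verifyChecksum recomputes the digest of the downloaded bytes and
// compares it with the checksum the user supplied from their machine.
func verifyChecksum(data []byte, userChecksum string) bool {
	return checksumSHA256(data) == userChecksum
}

func main() {
	data := []byte("hello")
	fmt.Println(verifyChecksum(data, checksumSHA256(data))) // prints: true
}
```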

### The `/buildDockerImage` endpoint

After successfully downloading and verifying the files, the Builder Service takes the process a step further by providing the `/buildDockerImage` endpoint. This endpoint allows users to build a standardised Docker image for the web application of their final machine learning model. By simplifying the deployment process, users can focus on their core tasks without worrying about the complexities of containerization.

The `/buildDockerImage` endpoint automates the process of creating a Docker image for the final machine learning model web application. This standardised image ensures consistency and ease of deployment across different environments. By abstracting away the complexities of containerization, the Builder Service enables developers to focus on their machine learning models and application logic.
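One simple way to implement this (an assumption; the service could equally use the Docker Engine API) is to shell out to the Docker CLI:

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildArgs assembles the arguments for `docker build`. The tag
// format and build-context directory are illustrative assumptions.
func buildArgs(imageTag, contextDir string) []string {
	return []string{"build", "-t", imageTag, contextDir}
}

// buildDockerImage runs `docker build` and returns its combined
// output on failure. Requires the Docker CLI on the PATH.
func buildDockerImage(imageTag, contextDir string) error {
	cmd := exec.Command("docker", buildArgs(imageTag, contextDir)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker build failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Dry run: just show the command that would be executed.
	fmt.Println(append([]string{"docker"}, buildArgs("neura-launch/cats-vs-dogs:latest", ".")...))
}
```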

## Conclusion

The Builder Service plays a vital role in the Neura Launch project, enabling users to effortlessly download files, verify their integrity, and build Docker images for their machine learning web applications. With the help of the Builder Service, developers can streamline their deployment process and focus on delivering exceptional machine learning solutions.

Stay tuned for more exciting updates and features from LainForge!
18 changes: 7 additions & 11 deletions src/content/blog/neura-launch/about-neura-launch-cli.md
@@ -7,8 +7,7 @@ draft: false
tags:
- machine learning, devops, python, cli
ogImage: ""
description: Neura Launch CLI tool allows you to publish your machine learning models to the cloud using a single command.
category: project
projectTitle: "Neura Launch"
---
@@ -53,7 +52,7 @@ First, let's figure out the requirements. We want our CLI tool to do the followi

### Initialize the project

Initializing a project is simply adding an `inference.py` and a `config.yaml` file to the root directory of the project. The `inference.py` file will contain the code that will be used to make predictions using the model. The `config.yaml` file will contain all the information about the project and the model that we want to publish to the cloud.

For this we will use the `init` command that will run the following code:

@@ -94,7 +93,7 @@ As you can see in the code above, we are simply copying the `inference.py` file

### Add project token

Next up, we want to add the project token. For this we will use the `add_token` command.
Here I had a long discussion with my team about two things:

- If we should hide the project token when the user is typing it in the terminal or not.
@@ -117,9 +116,9 @@

```
import keyring

keyring.set_password("neura-launch", "project-token", token)
```

Let's take a step back and talk about why it was necessary for us to use keyring to store the project token.

Suppose our user is working on a project with multiple collaborators. Now if we store the project token in the `config.yaml` file, then the project token will be visible to all the collaborators. This is a security concern because anyone with the project token can push the project to the cloud.

But since we are using keyring, only the user who has the project token stored in their system's keyring can push the project to the cloud. This introduces an extra layer of security to our project.

@@ -139,11 +138,11 @@ First of all we have to understand that we are going to use AWS to store the pro

#### Why?

Because S3 buckets are cheap and easy to use. They are also highly scalable and secure.

#### How?

We wanted to abstract the process of pushing the project from the CLI tool. So we decided to send a POST request to the Neura Launch API with the project information. The API will then push the project to the S3 bucket and update the project information on the Neura Launch Dashboard.

There's one very important thing that we need to discuss here and that is `code validation`.

@@ -153,7 +152,6 @@ To solve this problem we generate a `checksum` of the zip file before pushing th

This way our builder service can compare the checksum of the zip file that it generates with the checksum that is stored in the database. If the checksum matches, then the code is valid and the builder service can proceed with building the docker image.


#### Back to the CLI tool

Now that we have a basic understanding of the architecture of our project, let's get back to the CLI tool.
@@ -202,5 +200,3 @@ That's it. This is all the code that you need to write to push your project to t
I know I haven't covered the `/upload` API in this devlog but I will cover it in the next devlog so stay tuned for that 😉.
The code for the CLI tool is available on [Github](https://github.com/LainForge/neura-launch-cli).
If you have any questions or suggestions, feel free to reach out to us on our [discord](https://discord.gg/UxGdN56meC).

35 changes: 16 additions & 19 deletions src/content/blog/neura-launch/about-neura-launch.md
@@ -7,28 +7,27 @@ draft: false
tags:
- machine learning, devops
ogImage: ""
description: Neura Launch Is Your One-Stop Solution for Hassle-Free ML Model Deployment.
category: project
projectTitle: "Neura Launch"
---

# Introduction

Imagine this: After spending more than 6 months alone, drinking 10 cups of coffee every day in your small room, glued to your computer screen, you've birthed an ML model that can write flawless code and is wayyyyy better than GPT-4.

And now you want to earn some money from it.

But you don't know how to host/manage ML models in the cloud 😞 soo...

You have two options -

- Either you go and learn DevOps and all the other ML model hosting related stuff and waste another 6 months of your life.
- Or you type `neura-launch push` and get API endpoints to interact with your ML model that is magically hosted in cloud for you.

Considering you are a **sane** person, I'm sure you'd want to go with the second option. (?)

So in the rest of the article I'm going to tell you how **Neura Launch** can help you _host and manage your machine learning models_ as if they are some frontend websites!!!

![tell me more image](/imgs/neura-launch/cat.jpeg)

@@ -37,7 +36,8 @@ So in the rest of the article I'm going to tell you how **Neura Launch** can hel
Neura Launch is a collection of a bunch of different software services that together let you magically host and manage your models. I don't want to scare you with their names and how they fit together.. so I'll tell you a story.

Say you created the legendary **cats vs dogs** machine learning model that classifies an image into either of the two categories.
Your folder structure for your project would look something like this -

```
- model.py
- test.py
...
```

### Step 0: Make an account on neura-launch-dashboard

The first step of the process is to create your account on the neura-launch-dashboard.
This dashboard would be your one-stop-solution for managing all of your different machine learning models.

You can start/stop an existing service, track usage and manage resources for your models from this dashboard.

@@ -73,19 +74,19 @@

```
import neuralaunch
from .model import CatVsDogNN

class Inference(neuralaunch.InferenceBase):
    def load(self, **kwargs):
        # load your NN
        ...

    def infer(self, **kwargs):
        # make predictions using your NN
        ...
```

After this you have to ask _neura-launch-dashboard_ to create a token for your project. You can do so by running the following command in the terminal:

```
neura-launch remote add <remote_project_name>
```

This will open a neura-launch-dashboard webpage in your browser where you have to sign in to your account (if you haven't already).
@@ -94,6 +95,7 @@

And now you're ready to push your code to the cloud!!! 🥳
Just type the following command in your terminal:

```
neura-launch push
```
@@ -102,19 +104,14 @@ With this the cli tool will create a zip of your folder and send it to the neura

Of course this will take some time... expected to be around 5-ish minutes. But you can track the progress of this entire pipeline through neura-launch-dashboard.

Once the service is up and running you can use the dashboard to increase/decrease the amount of resources, track usage, start or stop the service, etc.

![drake meme](/imgs/neura-launch/meme.png)

---

# Conclusions

This is an ongoing open source project under [LainForge](http://lainforge.org/) and you can track the development on [GitHub](https://github.com/LainForge/Neura-Launch-Dashboard), [discord](https://discord.gg/UxGdN56meC) or [contact](https://bento.me/tarat) me if you have any thoughts/ideas around this or just want to say Hi.

Thank you for reading! Enjoy the rest of your day/night ☺️