Add Trunk Superlinter #164

Merged
merged 18 commits into main from add_trunk_superlinter on Dec 12, 2023

Conversation

@KastanDay (Member) commented on Dec 8, 2023

https://docs.trunk.io/check

Things to consider:

  • I prefer yapf over black because it supports longer lines (black's default 88-character limit is not enough; see the config sketch below).
  • I added yapf and removed black.
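
For illustration, a minimal sketch of a yapf style file with a longer line limit. This is a hypothetical example, not the repository's actual Trunk-managed configuration; the 120-character limit is taken from the later commit "Reduce max line size to 120 (from 140)":

```ini
# Hypothetical .style.yapf -- illustrative only; the repo's real settings live in its Trunk config.
[style]
based_on_style = google   # one of yapf's built-in base styles
column_limit = 120        # longer than black's default 88-character limit
```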


You need to set up a payment method to use Lintrule

You can fix that by putting in a card here.


railway-app bot commented Dec 8, 2023

This PR is being deployed to Railway 🚅

flask: ◻️ REMOVED

@KastanDay (Member, Author) commented:

Users must install the Trunk CLI (via brew or the bash install script): https://docs.trunk.io/check/cli/install-trunk
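
For convenience, the install commands documented there at the time (verify against the linked page before use):

```bash
# Install the Trunk CLI -- commands as documented at docs.trunk.io; check the docs for current versions.
brew install trunk-io                    # macOS, via Homebrew
curl https://get.trunk.io -fsSL | bash   # any platform, via the install script

# Then, from the repository root, run the configured linters/formatters:
trunk check
```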

@KastanDay merged commit 77e2249 into main on Dec 12, 2023
1 check passed
@KastanDay deleted the add_trunk_superlinter branch on December 12, 2023 at 01:41
KastanDay added a commit that referenced this pull request Dec 15, 2023
* initial attempt

* add parallel calls to local LLM for filtering. It's fully working, but it's too slow

* add newrelic logging

* add langhub prompt stuffing, works great. prep newrelic logging

* optimize load time of hub.pull(prompt)

* Working filtering with time limit, but the time limit is not fully respected, it will only return the next one after your time limit expires

* Working stably, but it's too slow and under-utilizing the GPUs. Need VLLM or Ray Serve to increase GPU Util

* Adding replicate model run to our utils... but the concurrency is not good enough

* Initial commit for multi query retriever

* Integrating Multi query retriever with in context padding.
Replaced LCEL with custom implementation for retrieval and reciprocal rank fusion.
Added llm to Ingest()

* Bumping up langchain version for new imports

* Adding langchainhub to requirements

* Using gpt3.5 instead of llm server

* Updating python version in railway

* Updated Nomic in requirements.txt

* fix openai version to pre 1.0

* anyscale LLM inference is faster than replicate or kastan.ai, 10 seconds for 80 inferences

* upgrade python from 3.8 to 3.10

* trying to fix tesseract // pdfminer requirements for image ingest

* adding strict versions to all requirements

* Bump pymupdf from 1.22.5 to 1.23.6 (#136)

Bumps [pymupdf](https://github.com/pymupdf/pymupdf) from 1.22.5 to 1.23.6.
- [Release notes](https://github.com/pymupdf/pymupdf/releases)
- [Changelog](https://github.com/pymupdf/PyMuPDF/blob/main/changes.txt)
- [Commits](pymupdf/PyMuPDF@1.22.5...1.23.6)

---
updated-dependencies:
- dependency-name: pymupdf
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* compatible wheel version

* upgrade pip during image startup

* properly upgrade pip

* Fully lock ALL requirements. Hopefully speed up build times, too

* Limit unstructured dependencies, image ballooned from 700MB to 6GB. Hopefully resolved

* Lock version of pip

* Lock (correct) version of pip

* add libgl1 for cv2 in Docker (for unstructured)

* adding proper error logging to image ingest

* Installing unstructured requirements individually to hopefully reduce bundle size by 5GB

* Downgrading openai package version to pre-vision release

* Update requirements.txt to latest on main

* Add langchainhub to requirements

* Reduce use of unstructured, hopefully the install is much smaller now

* Guarantee Unique S3 Upload paths (#137)

* should be fully working, in final testing

* trying to fix double nested kwargs

* fixing readable_filename in pdf ingest

* apt install tesseract-ocr, LAME

* remove stupid typo

* minor bug

* Finally fix **kwargs passing

* minor fix

* guarding against webscrape kwargs in pdf

* guarding against webscrape kwargs in pdf

* guarding against webscrape kwargs in pdf

* adding better error messages

* revert req changes

* simplify prints

* Bump typing-extensions from 4.7.1 to 4.8.0 (#90)

Bumps [typing-extensions](https://github.com/python/typing_extensions) from 4.7.1 to 4.8.0.
- [Release notes](https://github.com/python/typing_extensions/releases)
- [Changelog](https://github.com/python/typing_extensions/blob/main/CHANGELOG.md)
- [Commits](python/typing_extensions@4.7.1...4.8.0)

---
updated-dependencies:
- dependency-name: typing-extensions
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Kastan Day <[email protected]>

* Bump flask from 2.3.3 to 3.0.0 (#101)

Bumps [flask](https://github.com/pallets/flask) from 2.3.3 to 3.0.0.
- [Release notes](https://github.com/pallets/flask/releases)
- [Changelog](https://github.com/pallets/flask/blob/main/CHANGES.rst)
- [Commits](pallets/flask@2.3.3...3.0.0)

---
updated-dependencies:
- dependency-name: flask
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Kastan Day <[email protected]>

* modified context_padding function to handle edge case

* added print statements for testing

* added print statements for multi-query test

* added prints for valid docs

* print statements for valid docs metadata

* Guard against kwargs failures during webscrape

* added fix for url parameter

* HOTFIX: kwargs in html and pdf ingest for /webscrape

* fix for pagenumber error

* removed timestamp parameter

* url fix

* guard against missing URL metadata

* minor refactor & cleanup

* modified function to only pad first 5 docs

* modified context_padding with multi-threading

* modified context padding for only first 5 docs

* modified function for removing duplicates in padded contexts

* minor changes

* Export conversation history on /analysis page (#141)

* updated nomic version in requirements.txt

* initial commit to PR

* created API endpoint

* completed export function

* testing csv export on railway

* code to remove file from repo after download

* moved file storing out of docs folder

* created a separate endpoint for multi-query-retrieval

* added similar format_for_json for MQR

* added option for extending one URL out when on baseurl, or to opt out of it

* merged context_filtering with MQR

* added replicate to requirements.txt

* added openai type to all openai functions

* added filtering to the retrieval pipeline

* moved filtering after context padding

* changed model in run_anyscale()

* minor string formatting in print statements

* added ray.init() before calling filter function

* added a wrapper function for run()

* modified the code to use thread pool processor

* fixed pool execution errors

* replaced threadpool with processpool

* testing multiprocessing with 10 contexts

* restored to using all contexts

* changed max_workers to 100

* changed max_workers to 100

* Guarantee unique s3 upload paths, support file updates (e.g. duplicate file guard for Cron jobs) (#99)

* added the add_users() for Canvas

* added canvas course ingest

* updated requirements

* added .md ingest and fixed .py ingest

* deleted test ipynb file

* added nomic viz

* added canvas file update function

* completed update function

* updated course export to include all contents

* modified to handle diff file structures of downloaded content

* modified canvas update

* modified ingest function

* modified update_files() for file replacement

* removed the extra os.remove()

* fix underscore to dash in for pip

* removed json import and added abort to canvas functions

* created separate PR for file update

* added file-update logic in ingest, WIP

* removed irrelevant text files

* modified pdf ingest function

* fixed PDF duplicate issue

* removed unwanted files

* updated nomic version in requirements.txt

* modified s3_paths

* testing unique filenames in aws upload

* added missing library to requirements.txt

* finished check_for_duplicates()

* fixed filename errors

* minor corrections

* added a uuid check in check_for_duplicates()

* regex depends on this being a dash

* regex depends on this being a dash

* Fix bug when no duplicate exists.

* cleaning up prints, testing looks good. ready to merge

* Further print and logging refinement

* Remove s3-based method for de-duplication, use Supabase only

* remove duplicate imports

* remove new requirement

* Final print cleanups

* remove pypdf import

---------

Co-authored-by: root <root@ASMITA>
Co-authored-by: Kastan Day <[email protected]>

* changed workers to 30 in run.sh

* Add Trunk Superlinter on-commit hooks (#164)

* First attempt, should auto format on commit

* maybe fix my yapf github action? Just bad formatting.

* Finalized, excellent Trunk configs for my desired formatting

* Further fix yapf GH Action

* Full format of all files with Trunk

* Fix more linting errors

* Ignore .vscode folder

* Reduce max line size to 120 (from 140)

* Format code

* Delete GH Action & Revert formatting in favor of Trunk.

* Ignore the Readme

* Remove trufflehog -- failing too much, confusing to new devs

* Minor docstring update

* trivial commit for testing

* removing trivial commit for testing

* Merge main into branch, vector_database.py probably needs work

* Cleanup all Trunk lint errors that I can

---------

Co-authored-by: KastanDay <[email protected]>
Co-authored-by: Rohan Marwaha <[email protected]>

* changed workers to 3

* logging time in API calling

* removed wait parameter from executor.shutdown()

* added timelog after openai completion

* set openai api type as global variable

* reduced max workers to 30

* moved filtering after MQR and modified the filtering code

* minor function name change

* minor changes

* minor changes to print statements

* Add example usage of our public API for chat calls

* Add timeout to request, best practice

* Add example usage notebook for our public API

* Improve usage example to return model's response for easy storage. Fix linter inf loop

* Final fix: Switch to https connections

* Enhance logging in getTopContexts(), improve usage example

* Working implementation. Using ray, tested end to end locally

* cleanup imports and dependencies

* hard lock requirement versions

* fix requirements hard locks

* slim down reqs

* Merge main.. touching up lint errors

* Add pydantic req

* fix ray start syntax

* Improve prints logging

* Add posthog logging for filter_top_contexts

* Add course name to posthog logs

* Remove langsmith hub for prompts because too unstable, hardcoded instead

* remove osv-scanner from trunk linting runs

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Kastan Day <[email protected]>
Co-authored-by: Asmita Dabholkar <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: jkmin3 <[email protected]>
Co-authored-by: root <root@ASMITA>
Co-authored-by: KastanDay <[email protected]>
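
One commit in the list above replaces the LCEL retrieval chain with a custom implementation of multi-query retrieval plus reciprocal rank fusion. As background on that technique, here is a minimal, generic sketch of reciprocal rank fusion over several ranked result lists; the function name and the k=60 constant are conventional defaults, not taken from the repository's code:

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked lists of document IDs into a single ranking.

    Each document's fused score is the sum of 1 / (k + rank) over every
    list it appears in; k = 60 is the commonly used default constant.
    """
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse results from three query variants produced by a multi-query retriever
print(reciprocal_rank_fusion([["a", "b", "c"], ["b", "a", "d"], ["c", "b", "e"]]))
```
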
KastanDay added a commit that referenced this pull request Dec 19, 2023
* updated nomic version in requirements.txt

* Updated Nomic in requirements.txt

* fix openai version to pre 1.0

* upgrade python from 3.8 to 3.10

* trying to fix tesseract // pdfminer requirements for image ingest

* adding strict versions to all requirements

* Bump pymupdf from 1.22.5 to 1.23.6 (#136)

Bumps [pymupdf](https://github.com/pymupdf/pymupdf) from 1.22.5 to 1.23.6.
- [Release notes](https://github.com/pymupdf/pymupdf/releases)
- [Changelog](https://github.com/pymupdf/PyMuPDF/blob/main/changes.txt)
- [Commits](pymupdf/PyMuPDF@1.22.5...1.23.6)

---
updated-dependencies:
- dependency-name: pymupdf
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* compatible wheel version

* upgrade pip during image startup

* properly upgrade pip

* Fully lock ALL requirements. Hopefully speed up build times, too

* Limit unstructured dependencies, image ballooned from 700MB to 6GB. Hopefully resolved

* Lock version of pip

* Lock (correct) version of pip

* add libgl1 for cv2 in Docker (for unstructured)

* adding proper error logging to image ingest

* Installing unstructured requirements individually to hopefully reduce bundle size by 5GB

* Reduce use of unstructured, hopefully the install is much smaller now

* Guarantee Unique S3 Upload paths (#137)

* should be fully working, in final testing

* trying to fix double nested kwargs

* fixing readable_filename in pdf ingest

* apt install tesseract-ocr, LAME

* remove stupid typo

* minor bug

* Finally fix **kwargs passing

* minor fix

* guarding against webscrape kwargs in pdf

* guarding against webscrape kwargs in pdf

* guarding against webscrape kwargs in pdf

* adding better error messages

* revert req changes

* simplify prints

* Bump typing-extensions from 4.7.1 to 4.8.0 (#90)

Bumps [typing-extensions](https://github.com/python/typing_extensions) from 4.7.1 to 4.8.0.
- [Release notes](https://github.com/python/typing_extensions/releases)
- [Changelog](https://github.com/python/typing_extensions/blob/main/CHANGELOG.md)
- [Commits](python/typing_extensions@4.7.1...4.8.0)

---
updated-dependencies:
- dependency-name: typing-extensions
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Kastan Day <[email protected]>

* Bump flask from 2.3.3 to 3.0.0 (#101)

Bumps [flask](https://github.com/pallets/flask) from 2.3.3 to 3.0.0.
- [Release notes](https://github.com/pallets/flask/releases)
- [Changelog](https://github.com/pallets/flask/blob/main/CHANGES.rst)
- [Commits](pallets/flask@2.3.3...3.0.0)

---
updated-dependencies:
- dependency-name: flask
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Kastan Day <[email protected]>

* Guard against kwargs failures during webscrape

* HOTFIX: kwargs in html and pdf ingest for /webscrape

* Export conversation history on /analysis page (#141)

* updated nomic version in requirements.txt

* initial commit to PR

* created API endpoint

* completed export function

* testing csv export on railway

* code to remove file from repo after download

* moved file storing out of docs folder

* added option for extending one URL out when on baseurl, or to opt out of it

* Guarantee unique s3 upload paths, support file updates (e.g. duplicate file guard for Cron jobs) (#99)

* added the add_users() for Canvas

* added canvas course ingest

* updated requirements

* added .md ingest and fixed .py ingest

* deleted test ipynb file

* added nomic viz

* added canvas file update function

* completed update function

* updated course export to include all contents

* modified to handle diff file structures of downloaded content

* modified canvas update

* modified ingest function

* modified update_files() for file replacement

* removed the extra os.remove()

* fix underscore to dash in for pip

* removed json import and added abort to canvas functions

* created separate PR for file update

* added file-update logic in ingest, WIP

* removed irrelevant text files

* modified pdf ingest function

* fixed PDF duplicate issue

* removed unwanted files

* updated nomic version in requirements.txt

* modified s3_paths

* testing unique filenames in aws upload

* added missing library to requirements.txt

* finished check_for_duplicates()

* fixed filename errors

* minor corrections

* added a uuid check in check_for_duplicates()

* regex depends on this being a dash

* regex depends on this being a dash

* Fix bug when no duplicate exists.

* cleaning up prints, testing looks good. ready to merge

* Further print and logging refinement

* Remove s3-based method for de-duplication, use Supabase only

* remove duplicate imports

* remove new requirement

* Final print cleanups

* remove pypdf import

---------

Co-authored-by: root <root@ASMITA>
Co-authored-by: Kastan Day <[email protected]>

* Add Trunk Superlinter on-commit hooks (#164)

* First attempt, should auto format on commit

* maybe fix my yapf github action? Just bad formatting.

* Finalized, excellent Trunk configs for my desired formatting

* Further fix yapf GH Action

* Full format of all files with Trunk

* Fix more linting errors

* Ignore .vscode folder

* Reduce max line size to 120 (from 140)

* Format code

* Delete GH Action & Revert formatting in favor of Trunk.

* Ignore the Readme

* Remove trufflehog -- failing too much, confusing to new devs

* Minor docstring update

* trivial commit for testing

* removing trivial commit for testing

* Merge main into branch, vector_database.py probably needs work

* Cleanup all Trunk lint errors that I can

---------

Co-authored-by: KastanDay <[email protected]>
Co-authored-by: Rohan Marwaha <[email protected]>

* Add example usage of our public API for chat calls

* Add timeout to request, best practice

* Add example usage notebook for our public API

* Improve usage example to return model's response for easy storage. Fix linter inf loop

* Final fix: Switch to https connections

* Enhance logging in getTopContexts(), improve usage example

* minor changes for postman testing

* minor changes for testing

* added print statements

* re-creating error

* added condition to check if content is a list

* added json handling needed to test with Postman

* exception handling for get-nomic-map

* json decoding for testing

* added prints for testing

* added prints for testing

* added prints for testing

* added prints for testing

* fix for string error in nomic log

* removed json debugging code

* Cleanup comments

* Enhance type checking, cleanup formatting

* formatting

* Fix type checks to isinstance()

* Revert vector_database.py to status on main

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Kastan Day <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: jkmin3 <[email protected]>
Co-authored-by: root <root@ASMITA>
Co-authored-by: KastanDay <[email protected]>
Co-authored-by: Rohan Marwaha <[email protected]>
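
Several commits in the list above also add an example of calling the project's public chat API with a request timeout over HTTPS. A minimal, hypothetical sketch of such a call is shown below; the host, endpoint path, and payload fields are illustrative assumptions, not taken from the repository's example notebook:

```python
import requests

# Hypothetical endpoint and payload -- illustrative only; see the example
# notebook added by the commits above for the project's actual API usage.
url = "https://example-api.host/chat"
payload = {
    "course_name": "example-course",
    "messages": [{"role": "user", "content": "What is covered in lecture 3?"}],
}

# Setting a timeout on the request is the "best practice" the commits refer to.
response = requests.post(url, json=payload, timeout=30)
response.raise_for_status()

# Return or store the model's response for easy storage, as the usage example does.
print(response.json())
```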