
Installation is so painful #20

Open
nihirv opened this issue May 9, 2024 · 6 comments

nihirv commented May 9, 2024

My man. This seems like a fantastic project and I've been trying to get it running for the last 2 days.

It's been a constant roadblock of errors everywhere. Admittedly, not all of it comes down to your package (I'm on a headless server without admin rights), but some of the installation process is quite ridiculous.

Why do I need to install broken-source & every package you've created? Why are there sound libraries in a project which doesn't need sound? Why does installation take over 30mins?

I feel this could be as simple as "git clone depthflow" and then "depthflow --image X", with only the dependencies that are required for this project.

Or, stick your model on huggingface spaces/replicate. I know you may have a lot on your plate but the exposure you'd get by doing this might encourage some open source contributors to help out.

This is the first model of its kind on GitHub (that I could find) -- make getting started with it easy and you'll be the main repo for this stuff.

@Tremeschin
Member

I won't argue, as your point is totally valid and something to work on in the future, but here's the reasoning on what led to this:


Python Packaging

Python tooling for multiple local dependent packages is terrible; I've been fighting this for at least 2 years. The simplest solution, and the one that works best so far, was to just bundle everything into a single wheel and pyproject.toml

It's too useful having a main library for common behavior. Rust does this VERY well, defining a main [lib] and multiple targets, plus optional dependencies specified per bin. If Python/rye had that, dependency bloat would be resolved
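For comparison, the Rust pattern being referenced looks roughly like this (crate, target, and feature names here are illustrative, not the real project's):

```toml
[package]
name = "broken"
version = "0.1.0"
edition = "2021"

[lib]                    # shared code every binary links against
name = "broken"
path = "src/lib.rs"

[[bin]]                  # each app is a target of the same crate
name = "depthflow"
path = "src/bin/depthflow.rs"
required-features = ["depth"]

[features]               # optional dependencies enabled per binary
depth = ["dep:image"]

[dependencies]
image = { version = "0.25", optional = true }
```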

Poetry needed some over-engineered code to manage the multiple venvs per project and was very error prone. Rye solved 90% of the immediate problems, namely a single venv for all projects, managing a specific Python version, and bundling
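The single-wheel bundling described above can be sketched as one pyproject.toml exposing every subproject from the same wheel (package names, dependencies, and build backend are assumptions, not the repo's actual file):

```toml
[project]
name = "broken-source"
version = "0.4.0"
requires-python = ">=3.10"
dependencies = ["typer", "numpy"]   # shared deps for all subprojects

[project.scripts]                   # one entry point per app, same wheel
depthflow  = "DepthFlow.__main__:main"
shaderflow = "ShaderFlow.__main__:main"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["Broken", "DepthFlow", "ShaderFlow"]
```

With this layout, any change anywhere in the monorepo ships in the next single upload; there is no cross-package version graph to keep in sync.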


Versioning

I really wanted to have multiple separate packages for isolation and git clone run, but synchronizing versions between them became too abstract for me. Any minor change, a simple rename from .callable to .target, might require three new sequential PyPI uploads, bumping versions "recursively" down the monorepo, but in what order?

I don't have to worry about this if the PyPI wheel always contains the "current of everything" and the submodules are always up to date. That's why it's the most effective approach currently

Logistics issues of this kind will become less common as the libs/projects settle on stable naming and behavior


Users and Python Installation

For that, I chose to manage the submodule hell on the development side. Nevertheless, "from source" mode is automated with a single script under https://brokensrc.dev/get/source, and I'll be uploading new wheels soon..

..but then, if I recommend pip install, it requires Python, which users might not have on PATH, or they might not have pip (hi Linux Mint), or 3.12/3.13 requires compilers for some packages, etc. It's painful to write instructions for every possible error
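A defensive preflight along those lines might look like this sketch; calling pip as a module (`python -m pip`) sidesteps a missing or stale `pip` launcher script:

```shell
# Verify the interpreter is reachable before attempting any install
if command -v python3 >/dev/null 2>&1; then
    echo "python3 found"
else
    echo "python3 missing: install it or add it to PATH" >&2
fi

# Calling pip as a module avoids a missing 'pip' script (hi Linux Mint);
# if even the module is absent, ensurepip can bootstrap it
python3 -m pip --version >/dev/null 2>&1 \
    || echo "pip missing: try 'python3 -m ensurepip --upgrade'" >&2
```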


Closing Thoughts

Also, DepthFlow is really a ShaderFlow spin-off, which focuses partly on audio reactiveness. It's a tech demo / full application of the bigger project. I hope this clarifies it.

Python really is simultaneously the best and worst programming language 😓

@nihirv
Author

nihirv commented May 9, 2024

I did not mean to come across as critical -- I was/am very excited to use this, but I've spent half of yesterday and half of today fighting to get it running.

I thought pip install would be easiest, but I was running into issues with cmake and samplerate (none of which are your fault). Then I was fighting pip to fix that before giving up and building from source. Building from source was a lot nicer than I expected, but I had issues with symlinks and xvfb, since they aren't installed on my machine and I don't have admin rights (neither is your fault, but most AI-based projects tend to be headless-Linux first).

As I said -- most of it isn't your fault, but I think the traction this has is substantially bigger than ShaderFlow's, and in my mind (although I understand that you're the one with the vision of what you want this to evolve into), it deserves to work as a standalone project.

Maybe it helps to clarify my use-case. I want to add simple animation to static images to make them more eye-catching, and this does a perfect job of being unique. Now I can programmatically build a video slideshow from a bunch of images and make it a lot more interesting than standard transitions.
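That slideshow use-case can be scripted by shelling out to the CLI once per image; a minimal sketch (the `input -i` flag names are an assumption, check `depthflow --help` for the real ones):

```python
import subprocess
from pathlib import Path

def depthflow_command(image: Path, output: Path) -> list[str]:
    # Flag names here are assumed, not verified against the real CLI
    return ["depthflow", "input", "-i", str(image), "main", "-o", str(output)]

def render_slideshow(images: list[Path], outdir: Path) -> list[Path]:
    """Render one animated clip per still image; concatenate later with FFmpeg."""
    outdir.mkdir(parents=True, exist_ok=True)
    clips = []
    for image in images:
        clip = outdir / f"{image.stem}.mp4"
        subprocess.run(depthflow_command(image, clip), check=True)
        clips.append(clip)
    return clips
```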

Hell, if there was an API for this so I didn't need to host it myself, I would pay for that (hence my recommendation for you to stick this on Replicate 😉)

@RemyMachado

RemyMachado commented Jun 28, 2024

Hey! Great project, but same issue here. I just can't figure out how to run the project on Linux.

I tried Docker and a Python env, with no success.

There is too much "guessing", and I'm not sure I even understand what we are supposed to install to run this project.

[screenshot: terminal output of the failed command]

At what point did we install broken? I get "command not found", which I guess is perfectly normal if I'm following the installation from source correctly.

Any help would be greatly appreciated.

@Tremeschin
Member

Tremeschin commented Jun 28, 2024

Hey @RemyMachado, I'm assuming you just downloaded the DepthFlow repo "standalone"?

This repository can't be used alone in development mode, as there's a monorepo structure involved. There are manual instructions here for what needs to happen in the main repo (or use the automated scripts on the same page; I've successfully deployed them this week, even on Windows)

~

If you've followed those, maybe you didn't source the Python venv (do so with source ./.venv/bin/activate if on bash, though the scripts should have done it on the first run)

Or prefer installing it as a regular Python package (though I've got to update it to v0.4.0)

@RemyMachado

Thanks for your prompt response. Indeed, I was trying to run it with the DepthFlow repo "standalone".

I installed it with the manual instructions and it went flawlessly. It even installed PyTorch for me. Fortunately, I had run your get.sh command prior to the manual install, so I already had rye ready.


I successfully ran a first render! 🎉
It's working great. I'll try to play with the different options you are providing.

I see so much potential in this project. I'm certain that a clearer installation guide and usage documentation would help your project get the recognition it deserves.


Also, I have an efficiency-related question. I installed the PyTorch CPU flavor, but the rendering is only using about 20% of my CPU capacity. Do you know of a way to increase the usage for faster computation?

@Tremeschin
Member

Tremeschin commented Jun 28, 2024

I successfully ran a first render! 🎉

Nice :) 👍🏻 💯

I see so much potential in this project. I'm certain that a clearer installation guide and usage documentation would help your project get the recognition it deserves.

Ya, will work on documentation after the presets system is implemented, as I've mostly been moving fast and breaking things in the past month or two, adding important features (changing upscalers, depth estimators, post fx) :)

Also, I have an efficiency-related question. I installed the PyTorch CPU flavor, but the rendering is only using about 20% of my CPU capacity. Do you know of a way to increase the usage for faster computation?

If you mean that the realtime window, after estimating the image, is using only 20% of a core, then that's a good sign, as the framerate is limited to 60 fps; you can hit TAB to change it in real time!

But if it's 20% CPU while estimating the depth, the only faster way is using the GPU, with CUDA or ROCm on PyTorch
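For the depth-estimation step specifically, a device/thread helper along these lines is a reasonable sketch (it degrades gracefully when PyTorch isn't installed; `torch.set_num_threads` controls intra-op CPU parallelism):

```python
import os

def pick_device() -> str:
    """Prefer CUDA when available; otherwise use every CPU core for inference."""
    try:
        import torch  # only present when a PyTorch flavor is installed
    except ImportError:
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    # CPU inference is often memory-bound, but using all cores can still help
    torch.set_num_threads(os.cpu_count() or 1)
    return "cpu"
```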

When exporting to a video file, the CPU will go crazy encoding the video with FFmpeg and should be near 100% the whole time, or at low usage if you're rendering with GPU acceleration, namely NVENC (depthflow ... main -o ./video -c h264-nvenc)
