
Question about code + output #74

Open
Laura-0201 opened this issue Mar 24, 2023 · 1 comment
Comments

@Laura-0201

Hello,

I'm including pyVHR in my thesis, but I'm really stuck and was hoping someone could help me :). For the purposes of my thesis I only need the timepoints and the BPM. I got a lot of errors with the original code, so my supervisor gave me the adjusted code below. I do get an output now, but I don't know how to interpret it. So my question is: does this code even work properly this way, and if so, how do I interpret the output?

Code:
```python
import os

import numpy as np
import plotly.express as px

import pyVHR as vhr
from pyVHR.analysis.pipeline import Pipeline
from pyVHR.plot.visualize import *
from pyVHR.utils.errors import getErrors, printErrors, displayErrors
from pyVHR.extraction.utils import sig_windowing

# raw string (r"...") so backslashes in the Windows path are not treated as escape sequences
video_file = r"C:\Users\laura\Documents\Semester 6\Thesis\VS8.MOV"

fps = vhr.extraction.get_fps(video_file)
#wsize = 8 # seconds of video processed (with overlapping) for each estimate
#test_bvp = sigGT.data
#bpmGT, timesGT = sigGT.getBPM(wsize)

pipe = Pipeline()  # object to execute the pipeline
bvps, timesES, bpmES = pipe.run_on_video(video_file,
                                         roi_method='convexhull',
                                         roi_approach='patches',
                                         method='cpu_POS',
                                         bpm_type='welch',
                                         pre_filt=False,
                                         post_filt=True,
                                         verb=True,
                                         cuda=False)

print('time:', timesES, 'bpm:', bpmES, 'bvps:', bvps)
```

Output:
(screenshot of the printed arrays attached in the original issue; not reproduced here)
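
As a starting point for interpreting that output, here is a minimal sketch (my own reading of the pipeline, not something confirmed in the pyVHR docs): it treats `timesES` as the time, in seconds, of each analysis window and `bpmES` as the corresponding BPM estimates. With `roi_approach='patches'` each window may carry several per-patch estimates, so a median is taken per window.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumption: timesES is a 1-D sequence of window times (seconds) and bpmES
# holds one value, or one array of per-patch values, per window.
times = np.asarray(timesES, dtype=float).ravel()
bpms = np.array([np.median(b) for b in bpmES], dtype=float)

for t, b in zip(times, bpms):
    print(f"t = {t:6.1f} s  ->  estimated BPM = {b:6.1f}")

# Plot the BPM trace over the video to see how the estimate evolves window by window.
plt.plot(times, bpms, marker="o")
plt.xlabel("time (s)")
plt.ylabel("estimated BPM")
plt.title("BPM estimate per analysis window")
plt.show()
```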

@wang-tf

wang-tf commented Apr 7, 2023

I think the best way to test it is to use a video from an open dataset such as UBFC. The output depends heavily on your input video.
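
For completeness, a hedged sketch of that suggestion (placeholder paths; the dataset and error helpers below follow the pyVHR example notebooks, so names like `datasetFactory`, `readSigfile`, `getBPM` and the exact `getErrors` signature may differ in your installed version):

```python
import pyVHR as vhr
from pyVHR.analysis.pipeline import Pipeline
from pyVHR.datasets.dataset import datasetFactory
from pyVHR.utils.errors import getErrors, printErrors, displayErrors

wsize = 8  # seconds per analysis window, matching the commented-out code above

# Placeholder paths -- point these at a local copy of the UBFC dataset.
dataset = datasetFactory("UBFC2",
                         videodataDIR="/path/to/UBFC/",
                         BVPdataDIR="/path/to/UBFC/")

video_file = dataset.videoFilenames[0]
fps = vhr.extraction.get_fps(video_file)

# Ground-truth BPM per window from the dataset's reference signal.
sigGT = dataset.readSigfile(dataset.sigFilenames[0])
bpmGT, timesGT = sigGT.getBPM(wsize)

# Same pipeline settings as in the question, run on the dataset video.
pipe = Pipeline()
bvps, timesES, bpmES = pipe.run_on_video(video_file,
                                         roi_method='convexhull',
                                         roi_approach='patches',
                                         method='cpu_POS',
                                         bpm_type='welch',
                                         pre_filt=False,
                                         post_filt=True,
                                         verb=True,
                                         cuda=False)

# Compare estimates against ground truth (notebook-style call; check your version).
errs = getErrors(bvps, fps, bpmES, bpmGT, timesES, timesGT)
printErrors(*errs)
displayErrors(bpmES, bpmGT, timesES, timesGT)
```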
