Decoding received message from capture_image command #24

Open · Nicolaas94 opened this issue Apr 13, 2022 · 3 comments

@Nicolaas94

Hi,

I found your bk5000.py setup and it works very well for collecting a real-time stream from the BK5000.
However, I am new to working with bytes and TCP connections, and I am trying to figure out how to decode the message received after the CAPTURE_IMAGE command.
I noticed that each response message is much shorter than the full bytearray, so I assume a single image is sent in multiple parts through the buffer. The get_frame and decode_image functions from bk5000.py do not solve the problem.

I expected an array of win_size[0] * win_size[1] bytes, which is around 360,000. However, the bytearray I receive contains around 132,000 bytes.
When I send the command through the BK OEM test tool, I receive a 132 kB .bin file which contains the entire image when opened with ImageJ. This suggests that the bytearray with the 132,000 bytes should also be the entire image, but I cannot decode the bytes and reshape them to win_size.
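
To make the question concrete, this is roughly the kind of reshaping I have been attempting (a simplified sketch; the width and height are placeholders rather than values I have confirmed):

```python
import numpy as np

def reshape_capture(payload, width, height):
    # Interpret the raw CAPTURE_IMAGE payload as an 8-bit greyscale image.
    # width/height are placeholders; they cannot both come from win_size,
    # given the ~132,000 byte payload versus the ~360,000 bytes that
    # win_size[0] * win_size[1] would imply.
    data = np.frombuffer(bytes(payload), dtype=np.uint8)
    if data.size != width * height:
        raise ValueError(f"Got {data.size} bytes, expected {width * height}")
    return data.reshape((height, width))  # one row per scan line
```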

Could you please help me?

Nicolaas

@tdowrick
Collaborator

Hi there,

I haven't tried to implement this particular command before, so I'm not sure what the decoding steps need to be. I also don't have access to a BK at the moment, so unfortunately I'm not able to look into a solution at present.

What is the workflow you are trying to implement? Would you be able to just use the 'normal' streaming configuration to capture an image in Python?
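
Something along these lines is what I had in mind (untested, since I don't have a BK to hand; the module path and method names, other than get_frame, are from memory, so treat this as a sketch):

```python
from sksurgerybk.interface.bk5000 import BK5000  # import path may differ

# Grab a single frame via the normal streaming route, then stop.
bk = BK5000(timeout=5, frames_per_second=25)
bk.connect_to_host("128.1.1.0", 7915)  # replace with your scanner's IP/port
bk.query_win_size()                    # populates bk.win_size
bk.start_streaming()
bk.get_frame()                         # decoded frame ends up in bk.img
single_image = bk.img.copy()
bk.stop_streaming()
bk.disconnect_from_host()
```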

@Nicolaas94
Author

Hi,
I am trying to implement a stepwise acquisition method. Each image needs to be acquired after a step has been performed, once I am sure the transducer is standing still at that position.
I have figured out how to use the 'normal' streaming to get a single image. However, it seems that streaming with FPS=25 results in missing images between steps. I use a CIRS phantom to validate the acquisitions.
When streaming with FPS=1 or 2, the problem does not occur. I want to figure out why that is. The buffer is cleared at every step.
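
Simplified, the per-step grab I am doing looks roughly like this (the buffer-clearing call is a stand-in for whatever bk5000.py actually provides, and the settle time is just an example value):

```python
import time

def grab_frame_at_step(bk, settle_time=0.2):
    # Wait for the transducer to come to rest, drop any frames that were
    # queued while it was moving, then take the next frame off the wire.
    time.sleep(settle_time)
    bk.clear_bytes_in_buffer()  # stand-in name for the buffer-clearing step
    bk.get_frame()              # next received frame should be post-step
    return bk.img.copy()        # copy so later frames don't overwrite it
```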

Do you know what could be the underlying cause of missing images in a stepwise acquisition method?

Thank you very much!

@tdowrick
Collaborator

Are you able to post the Python code that you are using for acquisition? I might be able to spot any issues that are affecting the streaming.
