Prerequisites

Please make sure to check off these prerequisites before submitting a bug report.

- Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
- Check that the issue hasn't already been reported by checking the currently open issues.
- If there are steps to reproduce the problem, make sure to write them down below.
- If relevant, please include the hls4ml project files created directly before and/or after the bug.
Quick summary
Hello, I have been having issues running `hls_model.build(csim=False, export=True, bitfile=True)` for my project. Pre-synthesis fails because an array-partitioning threshold is exceeded. I'm somewhat new to this, so I'm not sure about the origin of the error or the best way to fix it.
Details
I encounter the error when running `hls_model.build(csim=False, export=True, bitfile=True)`:

```
ERROR: [XFORM 203-103] Array 'mult.V' (firmware/nnet_utils/nnet_dense_latency.h:17): partitioned elements number (4000) has exeeded the threshold (1024), which may cause long run-time.
```
I am loading a trained, pruned Keras MLP. It's a simple model with one hidden layer of 10 neurons, but the input shape is (10, 400). There have been previous discussions here and here on the same problem. Based on the latter I shrank my model to its current state, but I realize that the large input tensor shape might still cause problems, given that I'm trying to put this model on a pynq-z2.
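For what it's worth, the count in the error message is consistent with the reported shapes. My reading (an assumption based on the error path, not something stated in the report) is that in the Latency strategy the `mult` array in `nnet_dense_latency.h` holds one product per (input, neuron) pair and is completely partitioned when ReuseFactor is 1:

```python
# Assuming the 'mult' array holds one product per (input, neuron) pair and is
# completely partitioned when ReuseFactor == 1:
n_in = 400    # input features, from the reported input shape
n_out = 10    # hidden-layer neurons, from the reported model
mult_elements = n_in * n_out
print(mult_elements)  # 4000 -- matches the count in the error message

PARTITION_THRESHOLD = 1024  # Vivado HLS default array-partition threshold
print(mult_elements > PARTITION_THRESHOLD)  # True -> XFORM 203-103
```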
Steps to Reproduce
To test this I simply opened the tutorials (which I had already gone through before) and re-ran parts 1-4, then ran part 7a. I changed nothing of substance in any of the notebooks, except that in parts 1-4 I replaced `XILINX_VITIS` with `XILINX_VIVADO` and Vitis with Vivado where relevant. Otherwise the code is the original checked-out code. I still get the error quoted above in part 7a (screenshots below). Parts 1-4 ran without issue. I am using hls4ml==1.0.0. My Vivado version is 2019.2. The commit hash for the tutorial checkout is 29a7f7e7891ddc40c7feb2f9f9d7e116778785c1.
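Concretely, the environment-variable swap amounts to something like the following; the Vivado install path is a hypothetical example, not taken from my setup.

```shell
# The tutorial notebooks originally point at Vitis; with only Vivado 2019.2
# installed, set the Vivado variable instead. The install path below is a
# hypothetical example.
export XILINX_VIVADO=/opt/Xilinx/Vivado/2019.2
export PATH="$XILINX_VIVADO/bin:$PATH"
echo "$XILINX_VIVADO"
```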
The only other issue that popped up in part 7a was the warning

```
WARNING:tensorflow:No training configuration found in the save file, so the model was not compiled. Compile it manually.
```

when running `model = load_model('model_3/KERAS_check_best_model.h5', custom_objects=co)`. I'm not sure if that's relevant, since I haven't changed anything in the part 7a notebook.
The tutorial error surprised me, since the same error appeared for both my own model and the tutorial model, implying that the input tensor shape of my own model isn't the origin of the error (unless the same error is being generated by more than one thing).
Additional context
Originally, my personal project used a much larger MLP and I was getting errors similar to those here. I applied the suggested fixes, such as changing the ReuseFactor, changing Strategy to Resource, and setting io_type to io_stream. The issue resolved when I shrank the model, but then I started getting the error this post is about. I still thought that the large size of the input tensor was the issue, but after running the hls4ml tutorial out of the box, as is, and getting essentially the same error, I am no longer sure that is the case. As such, I'm not certain whether this error is a bug or whether I have overlooked something / done something impermissible.
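For reference, the mitigations above can be expressed in an hls4ml configuration roughly like this. This is a sketch assuming hls4ml's standard dictionary config layout; the ReuseFactor value is illustrative, not the one I actually used.

```python
# Sketch of the config changes suggested in the linked discussions.
# The ReuseFactor value of 100 is illustrative only.
config = {
    'Model': {
        'Precision': 'ap_fixed<16,6>',
        'ReuseFactor': 100,      # >1 shares multipliers, shrinking the partitioned array
        'Strategy': 'Resource',  # avoids the fully unrolled Latency implementation
    },
}

# io_type is passed to the converter rather than set inside the config dict, e.g.:
# hls_model = hls4ml.converters.convert_from_keras_model(
#     model, hls_config=config, io_type='io_stream', output_dir='my_prj')
print(config['Model']['Strategy'])  # Resource
```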
My own model is trained with Keras==2.15.0. I have also set up a pip venv with hls4ml==1.0.0 and all the necessary libraries and use that for the kernel when I run the notebooks, including for the tutorials. I am running Ubuntu 24.04.1 LTS (not sure if this matters).
If this seems to not be a bug and there is a more appropriate forum for this question, please let me know.
Thank you.