Releases: Brainchip-Inc/akida_examples
MetaTF 2.19 - Upgrade to Quantizeml 1.2.3, Akida/CNN2SNN 2.19.1 and Akida models 1.13.1
Update QuantizeML to version 1.2.3
New features
- Updated StatefulRecurrent and StatefulProjection layers to 8-bit for Pico
- Introduced PicoPostProcessing layer for Anomaly detection use cases
- Handle Reshape nodes in input pattern "Cast > Mul > Add > Transpose" when equivalent to transpose
- Improved rescale quantization in ONNX pipeline
- Handle transpose/flatten when aligning rescaling parameters in sanitizer
- Dropped support for up and down shaping for Pico
Bug fixes
- The sanitizer is now performed twice to better handle folding layers such as Pads into Convs
- Models with multiple outputs were not correctly handled (only the main branch was kept)
- Some identity convolutions were missing for skip connections and hardware compatibility
- Prevented a TensorFlow and PyTorch incompatibility when importing QuantizeML
- Fixed bad quantization in models with multiple outputs
- Fixed bad rescaling folding in models with Add + Conv
- Fixed a core dump while sanitizing models with high resolution
- Pinned the tensorflow_text dependency since the latest keras_hub would pull an incompatible version for the optional [test] dependencies
Update Akida and CNN2SNN to version 2.19.1
Aligned with FPGA-1764(2-nodes)/1765(6-nodes)/1766(6-nodes bittware)/905(pico)
New features and updates:
- [cnn2snn] Aligned with QuantizeML 1.2.2 for Pico layers updates
- [akida] Added support for Pico FPGA. This includes StatefulRecurrent and PicoPostProcessing layers, KWS and anomaly detection models, nn_ops bindings, a PicoIP virtual device, 16-bit I/O support and hardware constraint checks during mapping. The conversion, mapping and inference process is unchanged from Akida 2.0.
- [akida/cnn2snn] Dropped support for up and down shaping for Pico
- [akida] Virtual devices helpers have been updated for Pico
Bug fixes
- [akida] Fixed HRC issue for greyscale and many channels
- [akida] Fixed an overflow in PicoPostProcessing shift
Update Akida models to 1.13.1
Changes in akida_models package
- Aligned with QuantizeML 1.2.2 and CNN2SNN 2.1.0
- Added Pico model pipeline for Keyword Spotting (model definition, training and evaluation recipes)
- Dropped support for up and down shaping for Pico
Documentation update
- Added a mention of compatibility with PyTorch >= 2.6
- Added documentation for Pico-related APIs
MetaTF 2.18 - Upgrade to Quantizeml 1.1.1, Akida/CNN2SNN 2.18.2 and Akida models 1.12.0
Update QuantizeML to version 1.1.1
New features
- Added the InputQuantizer feature for Keras models (already available for ONNX). This allows users to forward float32 inputs to quantized models and, later, to Akida models.
- Added support for Cast node in ONNX InputQuantizer to handle Keras models exported to ONNX with an input type different from float32
- Encapsulated sanitizer operations into a safe-fail mechanism to prevent cryptic error messages. When the prerequisites for a sanitizer are not met or a failure happens, the sanitizer is ignored and quantization will result in a clear unsupported chain of operations.
- Added new sets of sanitizers for both quantization and hardware compatibility:
- Transform `0.5 * (x * (op.Erf(x / math.sqrt(2)) + 1.0))` into an explicit GeLU
- Transform `Reshape(Conv)` or `Squeeze(Conv)` to `Gemm(Flatten)`
- Transform depthwise layers with kernel 5 or 7 and stride 2 into an equivalent kernel 5 or 7 with stride 1 followed by an identity kernel 3 with stride 2
- Invert batch normalization and pooling nodes for ONNX
- Aligned depthwise quantization pattern with standard convolution patterns
- Dropped Add > ReLU quantization pattern that is not allowed in hardware
- Warnings now clearly state that the process is continuing when the issue is non-critical
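The GeLU rewrite listed above is exact rather than approximate: GeLU(x) = x·Φ(x), where Φ is the standard normal CDF and Φ(x) = 0.5·(1 + erf(x/√2)), so the matched Erf pattern can be collapsed into a single GeLU node without changing outputs. A quick stdlib sanity check of the identity (an illustration, not QuantizeML code):

```python
import math
from statistics import NormalDist

def erf_pattern(x):
    # The expression the sanitizer matches: 0.5 * (x * (erf(x / sqrt(2)) + 1.0))
    return 0.5 * (x * (math.erf(x / math.sqrt(2)) + 1.0))

def gelu(x):
    # Exact GeLU definition: x times the standard normal CDF
    return x * NormalDist().cdf(x)

# The two forms agree to floating-point precision
for v in (-3.0, -0.5, 0.0, 0.7, 2.5):
    assert math.isclose(erf_pattern(v), gelu(v), rel_tol=1e-12, abs_tol=1e-12)
```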
Bug fixes
- Gemm followed by activation will now properly be downscaled to 8-bit (instead of 32)
- Quantization could produce out-of-range scale values that were not compatible with Akida
- Prevented adding identity layers after Add when the outbound is a depthwise layer, since hardware supports this directly
- Apply 8-bit downscaling to nodes that have both a quantized and a float outbound in ONNX
Update Akida and CNN2SNN to version 2.18.2
Aligned with FPGA-1764(2-nodes)/1765(6-nodes)/1766(6-nodes bittware)
New features and updates:
- [cnn2snn] Aligned with QuantizeML 1.1.1
- [cnn2snn] Now supports InputQuantizer layers in conversion patterns for both Keras and ONNX
- [akida] Introduced Akida.Quantizer layer that is the recipient for an InputQuantizer layer and that holds quantization scale, zero point, channel order and sign
- [akida] Introduced a new Akida.Padding.SameUpper padding mode for kernel 3, 5, 7 convolutions with a stride of 2 that matches PyTorch/ONNX "same upper" or "symmetric" padding scheme
- [akida] Introduced the akida.compute_min_device and akida.compute_common_device APIs that allow building a minimal virtual device to map a given model or a list of models, respectively
- [akida] Improved skip connection mapping for Add layers that are both merge and split layers, especially when the outbounds are two convolutional layers
- [akida] Improved hw_only=True mapping to build a single sequence
- [akida] Warnings now explicitly state when non-critical
- [akida] Improved mapping error message when components shortage happens
- [akida] Removed the "activation" parameter of the Add layer (not a supported feature)
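The release notes do not spell out the internals of the new Akida.Quantizer layer; as a hedged sketch of what a layer holding a quantization scale and zero point does to a float32 input, here is generic 8-bit affine quantization (the function names and clamping conventions below are illustrative assumptions, not the Akida API):

```python
def quantize_input(x, scale, zero_point, bits=8, signed=True):
    # Generic affine quantization: q = clip(round(x / scale) + zero_point).
    # Illustrative sketch only, not the Akida.Quantizer implementation.
    if signed:
        qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    else:
        qmin, qmax = 0, 2 ** bits - 1
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    # Inverse mapping back to a float value
    return (q - zero_point) * scale

# A float32 input maps to an 8-bit code and is recovered exactly here
q = quantize_input(0.5, scale=0.25, zero_point=0)
assert q == 2 and dequantize(q, 0.25, 0) == 0.5
# Out-of-range values saturate at the integer bounds
assert quantize_input(100.0, scale=0.25, zero_point=0) == 127
```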
Bug fixes
- [akida] Fixed an error message for out-of-bounds variables that was reporting incorrect signedness
- [akida] Fixed both computation and error message for HWPR number of descriptors and passes limitations
- [akida] Fixed a mapping error on concatenate layers when outbounds branches were of the same depth
- [akida] Fixed stride computation for StatefulRecurrent layer through nn.ops
- [akida] Fixed some top-level registers definition used in mesh discovery
- [akida] Fixed HRC hardware constraints check on 1.0
- [akida] Fixed 4-bit weights value check on 2.0
- [akida] Fixed mismatches for HRC in hardware when it has a large number of filters
- [akida] Fixed a hardware FIFO overflow error that could cause mismatches in model outputs
- [cnn2snn] Fixed a conversion issue for Rescaling layer and InputData in 1.0
- [cnn2snn] Fixed a conversion issue for InputConv2D layers with padding_value=0
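The SameUpper padding mode introduced above matches the ONNX SAME_UPPER convention: the output size is ceil(input/stride), and when the total padding needed is odd, the extra unit goes to the end ("upper") side. A small sketch of that arithmetic (illustrative, not akida source):

```python
import math

def same_upper_padding(in_size, kernel, stride):
    # Compute (pad_begin, pad_end) for ONNX-style SAME_UPPER padding.
    # Output size is ceil(in_size / stride); when the total padding is odd,
    # the extra unit goes to the end ("upper") side.
    out_size = math.ceil(in_size / stride)
    total = max((out_size - 1) * stride + kernel - in_size, 0)
    pad_begin = total // 2
    pad_end = total - pad_begin
    return pad_begin, pad_end

# kernel 3, stride 2: even input sizes need asymmetric padding
assert same_upper_padding(5, 3, 2) == (1, 1)
assert same_upper_padding(4, 3, 2) == (0, 1)
assert same_upper_padding(8, 5, 2) == (1, 2)
```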
Update Akida models to 1.12.0
Changes in akida_models package
- Aligned with QuantizeML 1.1.0 and CNN2SNN 2.18.0
- All models from the zoo have been updated with a typed input layer so that they will not have an InputQuantizer or akida.Quantizer layer. Training and evaluation pipelines have been updated accordingly.
- All quantized models from the zoo have been updated, optionally tuned, with the latest QuantizeML version without major accuracy difference
- Detection model training pipelines have been improved for increased accuracy
- Edge models have been removed from 2.0 model zoo since the edge learning feature is 1.0 only
Bug fixes
- Removed an unused Rescaling layer from the Jester model definition
- CenterNet models had an extra identity layer (sanitization issue) that has been removed
Documentation update
- Removed advanced quantization tutorial on custom pattern that was deprecated
- Updated models in tutorials so that they will not have an input quantizer
MetaTF 2.17 - Upgrade to Quantizeml 1.0.1, Akida/CNN2SNN 2.17.0 and Akida models 1.11.1
Common to all packages:
- Python 3.9 support dropped
- Python 3.12 support added
- Updated dependencies from TensorFlow/Keras 2.15 to TF-Keras: this is a pure-TensorFlow implementation of Keras.
- Due to TF-Keras 2.19, MetaTF is now using numpy >= 2.0.0
- Keras 3 features (backend agnostic) and models are NOT supported in MetaTF.
Update QuantizeML to version 1.0.1
New features
- Dropped `python 3.9` support and added support for `python 3.12`
- Updated dependencies to `TF-Keras 2.19`. Keras 3 models are NOT supported in QuantizeML
- Updated dependencies to `onnxruntime 1.19.2`, `onnxscript 0.4.0`
- Fixed `onnx-ir` version to <=0.1.8 because later versions come with a breaking change
- Analysis is now an optional module and its dependencies are installed with `pip install quantizeml[analysis]`
Bug fixes
- Correctly reject 4D inputs in MatMul to Gemm ONNX sanitizer
- Added missing value info in quantized model in ONNX quantization pipeline
Update Akida and CNN2SNN to version 2.17.0
Aligned with FPGA-1701(2-nodes)/1700(6-nodes)
New features and updates:
- [akida/cnn2snn] Dropped `python 3.9` support and added support for `python 3.12`
- [cnn2snn] Updated dependencies to `TF-Keras 2.19` and `QuantizeML 1.0.1`
Update Akida models to 1.11.1
Changes in akida_models package
- Dropped `python 3.9` support and added support for `python 3.12`
- Updated dependencies to `TF-Keras 2.19`
- Updated dependencies to cnn2snn 2.17.0 and quantizeml 1.0.1
- Also updated external dependencies: tensorflow_datasets is now an explicit dependency while tonic is implicit, and the deprecated imgaug was replaced with imaug
Bug fix
- Allow detection datasets to be downloaded when the `--data` parameter is not provided
Documentation update
- Updated overview and installation pages with python, TF-Keras and ONNX version changes
- Tutorials rebased on TF-Keras usage
- Removed edge models for 2.0 (1.0 feature only)
Upgrade to Quantizeml 0.19.0, Akida/CNN2SNN 2.16.1 and Akida models 1.10.0
This is the last release supporting python 3.9 and TensorFlow 2.15.
In the next release python 3.9 will be dropped, python 3.12 will be added and requirements will be updated to TF-Keras 2.19.
Update QuantizeML to version 0.19.0
New features
- Now built using pyproject.toml
Bug fixes
- Prevent requantization of ONNX models or quantization of models with non-float inputs
Update Akida and CNN2SNN to version 2.16.1
Aligned with FPGA-1701(2-nodes)/1700(6-nodes)
New features and updates:
- [akida/cnn2snn] Now built using pyproject.toml
- [cnn2snn] Updated QuantizeML requirement to 0.19.0
- [cnn2snn] Improved ONNX conversion errors by adding the faulty node names in messages
- [akida] HRC is now part of the mesh discovery process, LUT included. As a result, the Akida.Mesh constructor API was changed from a `has_hrc` boolean to a proper `NP.Info.hrc` object
- [akida] Dropped the Dense2D layer since it does not exist in hardware. Dense layers are converted to Dense1D when compatible and rejected otherwise
- [akida] Added layer name to errors messages happening during mapping
- [akida] Mapping errors will now display the list of all hardware incompatibilities that are identified during constraint checks
- [akida] Reworked invalid sequence mapping error message to list the original faulty layer and the list of all issues encountered while trying to map a sequence of layers
- [akida] Mapping with `hw_only=True` will now result in an explicit error message if the model cannot be mapped in a single sequence
- [akida] Improved mapping on devices with a custom HwVersion and features like HwPR, dense outputs or sparse inputs that are not available on all product IDs
Bug fixes
- [akida] Properly adapt outputs allocated memory in engine depending on bitwidth while 32-bit was assumed
- [akida] Mapping spatiotemporal models no longer triggers a pass after the last TNP-B layer
- [akida] Mapping with a bad `MapMode` now properly raises an error
- [akida] Prevent mapping a CNP with max pooling when not followed by another CNP, for any device
- [akida] Prevent mapping an Add layer with ReLU activation since this is not supported in hardware
- [akida] `Model.evaluate` was implicitly always casting input data to 8-bit unsigned. The data type is now checked against the model input type
- [akida] `create_device` was creating a device with LUT disabled in HRC
- [akida] Fixed an erroneous layer name in a mapping error message
Update Akida models to 1.10.0
New features and updates:
- Now built using pyproject.toml
- Updated QuantizeML requirement to 0.19.0, CNN2SNN to 2.16.1
- Updated Modelnet40/Pointnet++ architecture to map in a single sequence on 6-node FPGA
Documentation update
- Added a last "python 3.9 and TF2.15" information in installation page
- Added HW capabilities page in the user guide section
- Updated icons display for iOS/Ubuntu
- Updated dead link on VOC dataset
Upgrade to Quantizeml 0.18.0, Akida/CNN2SNN 2.15.0 and Akida models 1.9.0
Update QuantizeML to version 0.18.0
New features
- Introduced PleiadesLayer for SpatioTemporal TENNs on Keras
- The Keras sanitizer will now bufferize Conv3D layers, like the ONNX sanitizer. As a result, SpatioTemporal TENNs from both frameworks are bufferized and quantized at once.
- Dropped quantization patterns with MaxPooling and LUT activation since this is not supported in Akida
- Dropped all transformers features
Update Akida and CNN2SNN to version 2.15.0
Aligned with FPGA-1696(2-nodes)/1695(6-nodes)
New features and updates:
- [cnn2snn] Updated QuantizeML requirement to 0.18.0
- [akida] Full support of look-up-table (LUT) activation in 2.0 FPGA (HRC and CNP are the only NPs supporting LUT). Known limitation: MaxPooling and LUT are not possible together.
- [akida] Extended support to 4-bit weights and 8-bit activation combos (was limited to 4-bit weights and activation)
- [akida] Depthwise convolution now supports global average pooling
- [akida] Added equality operator to akida.Component for easier mapping comparison
- [akida/cnn2snn] Dropped all transformers layers and features
Bug fixes
- [akida] When a single FNP is requested in akida.create_device, it will no longer be both FNP and CNP to ensure proper minimum device computation
- [akida] Mapping will now properly reject skip branches not ending with a merge layer (CNP)
Update Akida models to 1.9.0
New features and updates:
- Updated OpenCV dependency to <4.12 because later versions are based on numpy 2
- Updated spatiotemporal_block with a temporal_first parameter that allows inverting the order of spatial and temporal blocks
- Optimized the spatiotemporal TENNs model for EyeTracking leading to 1.4x speed increase and 54% memory reduction in HW while maintaining accuracy
- Dropped all transformers features
Bug fixes
- Fixed reset_buffers usage in TENNs training modules when evaluating an Akida model
Documentation update
- Added mentions of LUT in user guides and API references
- Updated buffering section in Jester TENNs tutorial
Upgrade to Quantizeml 0.17.1, Akida/CNN2SNN 2.14.0 and Akida models 1.8.0
Update QuantizeML to version 0.17.1
New features
- Now handling models with dynamic shapes in both Keras and ONNX. Shape is deduced from calibration samples or from the input_shape parameter.
- Added a Keras and ONNX common reset_buffers entry point for spatiotemporal models
- GlobalAveragePooling output will now be quantized to QuantizationParams.activation_bits instead of QuantizationParams.output_bits when preceded by an activation
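For the dynamic-shape support above, a static shape has to be fixed before quantization. A minimal sketch of deducing None dimensions from a calibration sample (the helper name and error handling are hypothetical, not the QuantizeML API):

```python
def resolve_input_shape(model_shape, sample_shape):
    # Fill dynamic (None) dimensions of a model input shape from a
    # calibration sample. Hypothetical helper, not the QuantizeML API.
    if len(model_shape) != len(sample_shape):
        raise ValueError("rank mismatch between model and samples")
    resolved = []
    for dim, sdim in zip(model_shape, sample_shape):
        if dim is None:
            resolved.append(sdim)  # deduce from the sample
        elif dim != sdim:
            raise ValueError(f"sample dim {sdim} incompatible with {dim}")
        else:
            resolved.append(dim)
    return tuple(resolved)

# A (None, None, 3) input becomes static once a sample is seen
assert resolve_input_shape((None, None, 3), (224, 224, 3)) == (224, 224, 3)
```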
Bug fixes
- Applied reset_buffers to variables recording to prevent shape issue when converting a model to Akida
- Handle ONNX models with shared inputs that would not quantize or convert properly
- Handle unsupported strides when converting an even kernel to odd
- Fixed analysis module issue when applied to TENNs models
- Fixed analysis module weight quantization error on Keras models
- Keras set_model_shape will now handle tf.dataset samples
- It is now possible to quantize a model with a split layer as input
Update Akida and CNN2SNN to version 2.14.0
Aligned with FPGA-1692(2-nodes)/1691(6-nodes)
New features and updates:
- [cnn2snn] Updated requirement to QuantizeML 0.17.0
- [akida] Added support for 4-bit in 2.0. Features are aligned with 1.0: InputConv2D, Conv2D, DepthwiseConv2D and Dense layers support 4-bit activations, and all but InputConv2D also support 4-bit weights.
- [akida] Extended TNP_B support to 2048 channels and filters
- [akida] HRC is now optional in a virtual device
- [akida] For real device, input and weight SRAM values are now read from the mesh
- [akida] Introduced an akida.NP.SramSize object to manage default memories
- [akida] Extended python Layer API with "is_target_component(NP.type)" and "macs"
- [akida] Added "akida.compute_minimal_memory" helper
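The new "macs" layer property exposes multiply-accumulate counts. For a standard convolution the textbook formula is out_h × out_w × k_h × k_w × c_in × c_out; how akida counts for depthwise or sparse layers may differ, so the helper below is only a generic illustration:

```python
def conv2d_macs(out_h, out_w, kernel_h, kernel_w, in_channels, out_channels):
    # Multiply-accumulate count of a standard 2D convolution.
    # Textbook formula; akida's `macs` property may count differently
    # for depthwise or sparse layers.
    return out_h * out_w * kernel_h * kernel_w * in_channels * out_channels

# A 3x3 conv producing a 32x32x64 output from 32 input channels
assert conv2d_macs(32, 32, 3, 3, 32, 64) == 18_874_368
```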
Bug fixes
- [akida] Fixed several issues when computing input or weight memory sizes for layers
Update Akida models to 1.8.0
- Updated QuantizeML dependency to 0.17.0 and CNN2SNN to 2.14.0
- Updated 4-bit models for 2.0 and added a bitwidth parameter to the pretrained helper
- TENNs EyeTracking is now evaluated on the labeled test set
- Dropped MACS computation helper and CLI: MACS are natively available on Akida layers and models.
Documentation update
- Updated 2.0 4-bit accuracies in model zoo page
- Updated the advanced ONNX quantization tutorial with MobileNetV4
Upgrade to Quantizeml 0.16.0, Akida/CNN2SNN 2.13.0 and Akida models 1.7.0
Update QuantizeML to version 0.16.0
New features
- Added a set of sanitizing steps targeting native hardware compatibility:
- Handle first convolution that cannot be a split layer
- Added support for "Add > ReLU > GAP" pattern
- Added identity layers when no merge layers are present after skip connections
- BatchNormalisation layers are now properly folded in ConvTranspose nodes
- Added identity layers to enforce layers to have 2 outbounds only
- Handled Concatenate node with a duplicated input
- Added support for TENNs ONNX models, which include sanitizing, converting to inference mode and quantizing
- Set explicit ONNXScript requirement to 0.2.5 to prevent later versions that use numpy 2.x
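Several of the sanitizing steps above insert identity layers so that every layer ends up with at most two outbounds. The idea can be sketched as a small graph rewrite (the node naming and graph representation are assumptions for illustration, not QuantizeML internals):

```python
def limit_outbounds(graph, max_out=2):
    # Insert identity nodes so that no node has more than `max_out`
    # consumers. `graph` maps node name -> list of consumer names.
    # Hypothetical sketch of the idea, not QuantizeML's implementation.
    counter = [0]
    result = {}

    def assign(node, consumers):
        if len(consumers) <= max_out:
            result[node] = consumers
        else:
            identity = f"identity_{counter[0]}"
            counter[0] += 1
            # keep max_out - 1 direct consumers, route the rest via an identity
            result[node] = consumers[: max_out - 1] + [identity]
            assign(identity, consumers[max_out - 1:])

    for node, consumers in graph.items():
        assign(node, list(consumers))
    return result

# A conv with four consumers gets identity layers until every node has <= 2
g = limit_outbounds({"conv": ["a", "b", "c", "d"]})
assert all(len(v) <= 2 for v in g.values())
```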
Bug fixes
- Fixed an issue where calling sanitize twice (or sanitize then quantize) would lead to invalid ONNX graphs
- Fixed an issue where sanitizing could lead to invalid shapes for ONNX Matmul/GEMM quantization
Update Akida and CNN2SNN to version 2.13.0
Aligned with FPGA-1679(2-nodes)/1678(6-nodes)
New features
- [cnn2snn] Updated requirement to QuantizeML 0.16.0
- [cnn2snn] Added support for ONNX QuantizedBufferTempConv and QuantizedDepthwiseBufferTempConv conversion to Akida
- [akida] Full support for TNP-B in hardware, including partial reconfiguration with a constraint that TNP-B cannot be the first layer of a pass
- [akida] Full support of Concatenate layers in hardware, feature set aligned on Add layers
- [akida] Prevented the mapping of models with both TNP-B and skip connections
- [akida] Renamed akida.NP.Mapping to akida.NP.Component
- [akida] Improved model summary for skip connections and TNP-B layers. The summary now shows the number of required SkipDMA channels and the number of components by type.
- [akida] Updated mapping details retrieval: model summary now contains information on external memory used. For that purpose, some C++/Python binding was updated and cleaned. The NP objects in the API have external members for memory.
- [akida] Renamed existing virtual devices and added SixNodesIPv2 and TwoNodesIPv2 devices
- [akida] Introduced create_device helper to build custom virtual devices
- [akida] Mesh now needs an IP version to be built
- [akida] Simplified model statistics API and enriched with inference and program clocks when available
- [akida] Dropped the deprecated evaluate_sparsity tool
Update Akida models to 1.7.0
- Updated QuantizeML dependency to 0.16.0 and CNN2SNN to 2.13.0
- Updated the sparsity tool name. It now returns Python objects instead of simply displaying data and supports models with skip connections
- Introduced tenn_spatiotemporal submodule that contains model definition and training pipelines for DVS128, EyeTracking and Jester TENNs models
- Added creation and training/evaluation CLI entry points for TENNs
Introducing TENNs modules 0.1.0
- First release of the package that aims to provide modules for Brainchip TENNs
- Contains blocks of layers for model definition: SpatialBlock, TemporalBlock, SpatioTemporalBlock that come with compatibility checks and custom padding for Akida
- The TemporalBlock can optionally be defined as a PleiadesLayer following https://arxiv.org/abs/2405.12179
- An export_to_onnx helper is provided for convenience
Documentation update
- Added documentation for TENNs APIs, including tenns_modules package
- Introduced two spatiotemporal TENNs tutorials
- Updated model zoo page with mAP50, removed 'example' column and added TENNs
- Added automatic checks for broken external links and fixed a few
- Cosmetic changes: updated main logo and copyright to 2025
Upgrade to Quantizeml 0.13.0, Akida/CNN2SNN 2.11.0 and Akida models 1.6.2
Update QuantizeML to version 0.13.0
New features
- Added an `input_dtype` in quantization parameters
- Improved sanitizer to fold non-even convolution kernels into an even kernel when possible
- Improved sanitizer to handle 'pad > transpose' configurations
- Added support for the Upsample operation in ONNX
- Added support for multi-outputs models in ONNX quantization
- Unified data generators in the package (calibration, recording, analysis)
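The kernel-parity folding above works because zero-padding a kernel (together with matching input padding) leaves the convolution output unchanged; the changelog does not detail exactly which direction 0.13.0 folds, so the 1D sketch below just demonstrates the underlying equivalence:

```python
def correlate_valid(x, k):
    # 1D 'valid' correlation in pure Python
    n, m = len(x), len(k)
    return [sum(x[i + j] * k[j] for j in range(m)) for i in range(n - m + 1)]

# A 2-tap kernel [a, b] behaves like the 3-tap kernel [a, b, 0] applied to
# the input extended with one trailing zero: both produce identical outputs.
x = [1.0, 2.0, -1.0, 3.0, 0.5]
even_k = [0.25, -0.5]
odd_k = even_k + [0.0]

assert correlate_valid(x, even_k) == correlate_valid(x + [0.0], odd_k)
```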
Bug fixes
- Fixed an incorrect padding value in QuantizedConv2DTranspose
- Fixed calibration issue for ONNX when using random samples
- Limited the possibility to define FixedPoint with more than 1D frac_bits and QFloat with more than 1D scales
Update Akida and CNN2SNN to version 2.11.0
Aligned with FPGA 1562(2-node)/1563(6-node)/1532(ViT)
New features
- [akida] Added hwpr_loopitself parameter to MapConstraints that enables reusing an NP between passes
- [akida] Added missing docstrings for MapConstraints
- [akida] Added early support for skip connection (Add) mapping and inference
- [cnn2snn] Updated requirement to QuantizeML 0.13.0
- [cnn2snn] Updated conversion to leverage input_dtype quantization parameter introduced in QuantizeML
- [cnn2snn] Added Akida conversion for ONNX QuantizedConcat, QuantizedConv2DTranspose and QuantizedDepthwise2DTranspose
Bug fixes:
- [akida] Added a graph validation step at mapping to reject unsupported models
Update Akida models to 1.6.2
- Updated QuantizeML dependency to 0.13.0 and CNN2SNN to 2.11.0
- 2.0 pretrained models are now 8-bit (updated from 4-bit)
- Added a utility function accessible through the CLI to compute a model's sparsity
- Fixed Portrait128 evaluation pipeline
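The sparsity utility's exact definition isn't given here; a common definition is the fraction of zero-valued weights, sketched below (illustrative only, not the akida_models implementation):

```python
def weight_sparsity(weights):
    # Fraction of zero-valued weights across a list of weight tensors
    # (given here as flattened lists). Illustrative only; the akida_models
    # CLI tool's exact definition may differ.
    total = zeros = 0
    for tensor in weights:
        for w in tensor:
            total += 1
            zeros += (w == 0)
    return zeros / total if total else 0.0

# 4 zeros out of 6 weights
assert weight_sparsity([[0, 1, 0, 2], [0, 0]]) == 4 / 6
```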
Documentation update
- Added a section for the sparsity tool in Akida models zoo user guide
- Updated coil100 download path
- Updated numpy random usage in examples
Upgrade to Quantizeml 0.12.1, Akida/CNN2SNN 2.10.0 and Akida models 1.6.1
Update QuantizeML to version 0.12.1
New features
- ONNX Runtime and ONNX requirements bumped to 1.19 and 1.16 respectively
- Limited requantization support and improved errors and warnings
- Added support for ONNX 'sub' nodes when rescaling
- Improved ONNX nodes naming during sanitizing and quantization
- When they come with more than 2 inputs, Concatenate layers are now split into several layers
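Splitting an n-input Concatenate into a chain of two-input concatenations is safe because concatenation is associative. The sketch below uses plain Python lists standing in for tensors joined along one axis (a hypothetical helper, not the QuantizeML rewrite):

```python
def split_concat(inputs):
    # Rewrite an n-ary concatenation as a chain of binary ones.
    # Lists stand in for tensors concatenated along a single axis.
    result = inputs[0]
    for tensor in inputs[1:]:
        result = result + tensor  # one binary Concatenate per step
    return result

a, b, c = [1, 2], [3], [4, 5, 6]
# A chain of binary concats equals the single 3-input concat
assert split_concat([a, b, c]) == a + b + c == [1, 2, 3, 4, 5, 6]
```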
Bug fixes:
- Fixed RuntimeError management when using custom patterns for quantization
- Combined custom patterns with standard patterns so that quantization defaults to standard when custom fails
- 'input_weight_bits' was not always properly applied to the first layer depending on its type
- Fixed ONNX quantization issue with padding variable name in convolution layers
Update Akida and CNN2SNN to version 2.10.0
Aligned with FPGA 1538/1532(ViT)
New features
- [akida] Mapping mode AllNps (default) will limit the search to layers that can be split
- [akida] Custom MapConstraints to override mapping mode has been reworked for easier usage
- [akida] Improved mapping search for transposed convolution
- [akida] Aligned skip DMA info in mesh discovery with other NPs info
- [cnn2snn] Updated requirement to QuantizeML 0.12.1
Bug fixes:
- [akida] Fixed a crash when loading non .FBZ files
- [akida] Fixed InputConvolutional output for resolution 384
- [akida] Prevent using the Model constructor with a list of layers when the list contains skip connections
Update Akida models to 1.6.1
- Limited MTCNN dependency version
- Updated QuantizeML dependency to 0.12.1 and CNN2SNN to 2.10.0
Documentation update
- Updated broken ONNX links in "custom pattern" advanced example