Replies: 2 comments
-
For reference, this is my WIP version:
Some interesting points:
-
Data should be ordered by both exchange and local timestamps. To accurately simulate the exchange side (order fills), data must be ordered by exchange timestamp, since that is the actual event sequence. At the same time, hftbacktest uses a separate index to ensure that the user (local) side sees data in local-timestamp order, for a more realistic backtest. But, as you pointed out, in the case of Binance, timestamps can be mixed across stream types. So the validation util duplicates the mixed-timestamp rows, inserting rows that are valid only for the exchange and rows that are valid only for the local side. Please see the following for details.

There is also a limitation if you reconstruct diffs from a snapshot even though the exchange provides the raw diff stream. It's much better to use the most granular data that the exchange provides.
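To make the duplication idea concrete, here is a minimal sketch in Python. It is not hftbacktest's actual implementation: the row layout, the `EXCH_VALID`/`LOCAL_VALID` flags, and the function name `split_mixed_rows` are assumptions for illustration. Rows arrive sorted by exchange timestamp; whenever a local timestamp goes backwards, the row is kept for the exchange side only and a local-only copy is re-inserted where it fits in local-timestamp order, so each side still sees a monotone sequence.

```python
# Illustrative flags marking which side of the backtest may consume a row
# (an assumption for this sketch, not hftbacktest's real format).
EXCH_VALID = 1   # processed by the exchange-side (fill) simulation
LOCAL_VALID = 2  # seen by the local (user) side

def split_mixed_rows(rows):
    """rows: iterable of (exch_ts, local_ts), already sorted by exch_ts.

    Returns a list of [flags, exch_ts, local_ts] in which the EXCH_VALID rows
    are monotone in exch_ts and the LOCAL_VALID rows are monotone in local_ts.
    """
    out = []
    max_local_ts = float("-inf")
    for exch_ts, local_ts in rows:
        if local_ts >= max_local_ts:
            # In order on both clocks: one row serves both sides.
            out.append([EXCH_VALID | LOCAL_VALID, exch_ts, local_ts])
            max_local_ts = local_ts
        else:
            # Local timestamp went backwards (e.g. a slow stream delivered
            # late): keep this row for the exchange side only...
            out.append([EXCH_VALID, exch_ts, local_ts])
            # ...and re-insert a local-only copy just before the first
            # LOCAL_VALID row with a later local timestamp.
            pos = next(i for i, r in enumerate(out)
                       if r[0] & LOCAL_VALID and r[2] > local_ts)
            out.insert(pos, [LOCAL_VALID, exch_ts, local_ts])
    return out
```

With this layout a single merged array serves both sides: the exchange simulator filters on EXCH_VALID and the local feed filters on LOCAL_VALID, which is essentially the separate-index idea described above.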
-
Hi, I am implementing support for https://crypto-lake.com/ order book data and wonder why the data has to be ordered by exchange timestamp. If we want to be realistic, we should order the data as we would receive it, which means exchange timestamps can be mixed due to exchange-side delays on individual data types. IMHO, data rows should only be ordered by exchange timestamp within the same data type (e.g. trade and trade).
Are there any hftbacktest internals relying on this order? Can I safely disable this part of data validation?
Example:
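A hypothetical illustration of the kind of mixing described above (stream names and timestamps are made up, not actual crypto-lake data):

```python
# Rows in the order they are received locally: (stream, exch_ts, local_ts)
rows = [
    ("depth", 1000, 1005),
    ("trade",  998, 1006),   # trade stream delayed on the exchange side
    ("depth", 1001, 1007),
]

# Globally, exch_ts is mixed across stream types...
print([r[1] for r in rows])                     # [1000, 998, 1001]
# ...but within a single stream type it is still monotone.
print([r[1] for r in rows if r[0] == "depth"])  # [1000, 1001]
```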