
add fixed (1/512) Filter index #1007

Open
kiss81 opened this issue Apr 26, 2022 · 7 comments

Comments

@kiss81

kiss81 commented Apr 26, 2022

Had no idea how to describe this in a proper way so here is the background:
Although I know spin-down of hard disks is a controversial topic, I want to let my chia hard disks spin down. At first I was thinking of making k39 plots, but that's not really feasible. Therefore I want to put (k33 or k34) plots with the same filter index on the same drive. That way the disks have a lot of time to spin down.

So here is the real question: is it possible to create plots with a constant filter index, so I can choose it beforehand?

@ericaltendorf

ericaltendorf commented Apr 26, 2022

I believe the plot filter is based on a hash of the plot ID and the challenge (and maybe something else). Different plots have to have different plot IDs, and you can't pick two plot IDs that will consistently hash to the same value when combined with a variety of challenges. I believe the system was specifically designed to prevent the strategy you're proposing.

@kiss81

kiss81 commented Apr 27, 2022


Kind of makes sense, of course. But if some kind of filtering were possible before completing the plot, that would be great.

@altendky

https://github.com/Chia-Network/chia-blockchain/blob/a48fd431009ff0cd4874e4da99bc3071f45564da/chia/types/blockchain_format/proof_of_space.py#L73-L87

    @staticmethod
    def passes_plot_filter(
        constants: ConsensusConstants,
        plot_id: bytes32,
        challenge_hash: bytes32,
        signage_point: bytes32,
    ) -> bool:
        plot_filter: BitArray = BitArray(
            ProofOfSpace.calculate_plot_filter_input(plot_id, challenge_hash, signage_point)
        )
        return plot_filter[: constants.NUMBER_ZERO_BITS_PLOT_FILTER].uint == 0

    @staticmethod
    def calculate_plot_filter_input(plot_id: bytes32, challenge_hash: bytes32, signage_point: bytes32) -> bytes32:
        return std_hash(plot_id + challenge_hash + signage_point)

The plot id, challenge hash, and signage point are all concatenated and then hashed, and the result is checked against the filter setting. The overall hash cannot be predicted from the plot id alone, so there is no way to control plot generation, or to group plots, such that they all pass the filter or all fail it, or even mostly do or mostly don't.
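
As a rough illustration (a toy simulation mirroring the snippet above, not the chia code itself): each plot passes roughly 1 out of every 512 signage points, and picking a particular plot id doesn't bias that.

    # Rough simulation of the plot filter check -- a sketch, not the chia codebase.
    # The real check hashes plot_id + challenge_hash + signage_point with sha256 and
    # requires the first NUMBER_ZERO_BITS_PLOT_FILTER bits (9 on mainnet, i.e. 1/512)
    # to be zero.
    import hashlib
    import os

    ZERO_BITS = 9

    def passes_filter(plot_id: bytes, challenge_hash: bytes, signage_point: bytes) -> bool:
        digest = hashlib.sha256(plot_id + challenge_hash + signage_point).digest()
        # take the first two bytes and require the top ZERO_BITS bits to be zero
        return int.from_bytes(digest[:2], "big") >> (16 - ZERO_BITS) == 0

    plot_id = os.urandom(32)
    trials = 200_000
    hits = sum(passes_filter(plot_id, os.urandom(32), os.urandom(32)) for _ in range(trials))
    print(f"pass rate: {hits / trials:.5f} (expected ~{1 / 512:.5f})")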

As far as I know, big disks should run and little disks can potentially be shut down to save power. Where that boundary is depends on your concern about the life of the disk and your ability or willingness to create large plots.

@kiss81

kiss81 commented Apr 28, 2022


Great explanation! What I could do is sort the plots on my disks. But if I want to improve it to the maximum, I need to either go with very large plots (k38/k39) and/or sort them... If I had a disk full of filter "1" plots, that would work as well as one large plot.

@altendky

The point is that no, you can't sort the plots. Say you have plots A, B, and C. For challenge one, maybe plots A and B pass the filter. For challenge two, it could be A and C. Then for challenge three just C passes. The challenge, the signage point, and the plot id are all included in the filter check. You have to handle many challenges and signage points, and you cannot predict them, so you cannot pre-sort your plots into groups that will line up with passing or failing the filter.
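
A quick way to see that (a toy sketch again, not chia code; it uses a 1-bit filter instead of mainnet's 9 bits just so a few challenges are enough to show the effect):

    # Which plots pass the filter changes from one challenge/signage point to the
    # next, so no fixed grouping of plots lines up with passing or failing.
    # A 1-bit filter (1/2 pass rate) is used here purely for illustration.
    import hashlib
    import os

    def passes_filter(plot_id, challenge_hash, signage_point, zero_bits=1):
        digest = hashlib.sha256(plot_id + challenge_hash + signage_point).digest()
        return int.from_bytes(digest[:2], "big") >> (16 - zero_bits) == 0

    plots = {name: os.urandom(32) for name in "ABC"}
    for i in range(1, 6):
        challenge, sp = os.urandom(32), os.urandom(32)
        passing = [name for name, pid in plots.items() if passes_filter(pid, challenge, sp)]
        print(f"challenge {i}: passing plots = {passing}")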

@kiss81

kiss81 commented Apr 28, 2022

That makes sense. So the only way to achieve the fewest possible spin-ups per day is super large plots... That raises the question of whether it will be possible to make a k38 / k39 plot :p

@altendky

The largest plot I am aware of was a single k38 that took 43 days. Presumably it could be somewhat quicker now. With larger drives the power consumption is less of an issue: as far as I know, 18TB drives aren't generally more power hungry than 1TB drives, so they end up roughly 18x more efficient per TB anyway. With 18TB drives you are probably looking at maybe 500W/PB? It all depends on your setup. So the drives that save the most power per TB are the smaller drives, where the smaller k34 etc. plots are still a relevant reduction. There are a few people that have been looking at this for smaller drives. It might be worth checking out #farming on https://keybase.io/team/chia_network.public if you want to discuss it.
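
For a rough sense of where a figure like 500W/PB comes from (back-of-the-envelope only; the ~8.5W idle draw per drive below is an assumption, not a measurement):

    # Back-of-the-envelope watts per petabyte for different drive sizes.
    IDLE_WATTS_PER_DRIVE = 8.5  # assumed idle draw per 3.5" drive

    for drive_tb in (1, 4, 8, 18):
        drives_per_pb = 1000 / drive_tb
        watts_per_pb = drives_per_pb * IDLE_WATTS_PER_DRIVE
        print(f"{drive_tb:>2} TB drives: {drives_per_pb:6.1f} drives/PB = {watts_per_pb:6.0f} W/PB")

At the same per-drive draw, 18TB drives land around 470 W/PB versus roughly 8500 W/PB for 1TB drives, which is the ~18x difference mentioned above.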
