revive: Include immutable storage deposit into the contracts `storage_base_deposit` (#7230)

This PR is centered around a main fix regarding the base deposit, plus a bunch of
drive-by or related fixes that make sense to resolve in one go. It could be broken
down further, but I am constantly rebasing this PR and would appreciate getting
those fixes in as one.

**This adds a multi-block migration to Westend AssetHub that wipes the
pallet state clean. This is necessary because of the changes to the
`ContractInfo` storage item. It will not delete the child storage,
though. This leaves a tiny bit of garbage behind but won't cause any
problems; the child storage entries are simply orphaned.**

## Record the deposit for immutable data into the `storage_base_deposit`

The `storage_base_deposit` is the total deposit a contract has to pay for
existing. It included the deposit for its own metadata and a deposit
proportional (< 1.0x) to the size of its code. However, the size of the
immutable data was not recorded there. This led to the situation where,
on terminate, this portion wouldn't be refunded and stayed locked into the
contract. It would also make the calculation of the deposit changes on
`set_code_hash` more complicated when it updates the immutable data (to
be done in #6985): we wouldn't know how much was paid before, since the
storage prices could have changed in the meantime.

In order for this solution to work I needed to delay the deposit
calculation for a new contract until after the contract has finished
executing its constructor, as only then do we know the immutable data
size. Before, we just charged this eagerly in `charge_instantiate` before
executing the constructor. Now, we merely send the ED as free balance
before the constructor in order to create the account. After the
constructor is done, we calculate the contract base deposit and charge it.
This will make `set_code_hash` much easier to implement.

As a side effect, it is now legal to call `set_immutable_data` multiple
times per constructor (even though I see no reason to do so). It simply
overwrites the immutable data with the new value. The deposit accounting
is done after the constructor returns (as mentioned above) instead of
when the immutable data is set.
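
To make the new accounting concrete, here is a minimal Rust sketch of the idea, not the pallet's actual code: the names, types, and per-byte pricing are illustrative, and the real implementation derives prices and the code-share fraction from the runtime configuration. The point is simply that the base deposit is only computed once the constructor has returned, so the immutable data length can be included.

```rust
/// Simplified stand-in for the contract's stored metadata (illustrative only).
struct ContractInfo {
    /// Bytes of storage occupied by the contract's own metadata.
    own_metadata_len: u64,
    /// Length of the code the contract references.
    code_len: u64,
    /// Length of the immutable data written by the constructor
    /// (known only after the constructor has returned).
    immutable_data_len: u64,
}

/// Computes the base deposit. `price_per_byte` and `code_share_permill`
/// stand in for runtime parameters.
fn storage_base_deposit(info: &ContractInfo, price_per_byte: u128, code_share_permill: u128) -> u128 {
    let metadata = u128::from(info.own_metadata_len) * price_per_byte;
    // A fraction (< 1.0x) of the code size is charged per contract using the code.
    let code = u128::from(info.code_len) * price_per_byte * code_share_permill / 1_000_000;
    // New in this PR: the immutable data deposit is part of the base deposit,
    // so `terminate` refunds it and `set_code_hash` knows what was paid.
    let immutable = u128::from(info.immutable_data_len) * price_per_byte;
    metadata + code + immutable
}

fn main() {
    let info = ContractInfo { own_metadata_len: 128, code_len: 10_000, immutable_data_len: 32 };
    // 300_000 permill = 30%, matching the default lockup percentage used below.
    println!("base deposit: {}", storage_base_deposit(&info, 1, 300_000));
}
```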

## Don't pre-charge for reading immutable data

I noticed that we were pre-charging weight for the maximum allowable
immutable data when reading those values and then refunding after the
read. This is not necessary, as we know the data's length without reading
the storage: we store it out of band in the contract metadata. This makes
reading it free. Less pre-charging, fewer problems.
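
As a rough illustration of why no pre-charge is needed, here is a hypothetical sketch; the types and names are made up for this example and are not the pallet's API. The length lives in the already-loaded contract metadata, so the output can be sized without touching storage first.

```rust
/// Illustrative stand-in for the in-memory contract metadata.
struct ContractInfo {
    /// Length of the immutable data, tracked out of band.
    immutable_data_len: u32,
}

/// Illustrative stand-in for the child storage holding the immutable bytes.
struct ImmutableStore {
    bytes: Vec<u8>,
}

fn get_immutable_data(info: &ContractInfo, store: &ImmutableStore) -> Vec<u8> {
    // The exact length is already known here; no "charge max, refund later" dance.
    let len = info.immutable_data_len as usize;
    store.bytes[..len].to_vec()
}

fn main() {
    let info = ContractInfo { immutable_data_len: 4 };
    let store = ImmutableStore { bytes: vec![1, 2, 3, 4, 5, 6] };
    assert_eq!(get_immutable_data(&info, &store), vec![1, 2, 3, 4]);
}
```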

## Remove delegate locking

Fixes #7092

This is also in the spirit of making #6985 easier to implement. The
locking complicates `set_code_hash`, as we might need to block setting
the code hash while locks exist. Check #7092 for further rationale.

## Enforce "no terminate in constructor" eagerly

We used to enforce this rule after the contract execution returned. Now
we error out early, inside the host call. This makes it easier to argue
that a contract's info still exists (i.e. it wasn't terminated) when a
constructor returns successfully. All around, this is just much simpler
than dealing with the check after the fact.
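
A hypothetical sketch of the eager check follows; it is modeled on the pallet's constructor/call distinction, but the error and type names are simplified assumptions rather than the actual identifiers.

```rust
/// Illustrative error type; the real pallet has its own error enum.
#[derive(Debug, PartialEq)]
enum HostError {
    TerminationDenied,
}

/// Which export of the contract is currently executing.
#[derive(PartialEq)]
enum ExportedFunction {
    Constructor,
    Call,
}

fn terminate(entry_point: &ExportedFunction) -> Result<(), HostError> {
    // Failing right here guarantees that a constructor which returns
    // successfully still has a live contract info to charge deposits against.
    if *entry_point == ExportedFunction::Constructor {
        return Err(HostError::TerminationDenied);
    }
    // ... the real host call would refund deposits and delete storage here.
    Ok(())
}

fn main() {
    assert_eq!(terminate(&ExportedFunction::Constructor), Err(HostError::TerminationDenied));
    assert!(terminate(&ExportedFunction::Call).is_ok());
}
```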

## Moved refcount functions to `CodeInfo`

They never really made sense on `Stack`, and with the locking gone they
make even less sense there. The refcount is stored inside `CodeInfo`, so
let's just move the functions there.
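
For illustration, a minimal sketch of what keeping the helpers on the code metadata type looks like; the field and method bodies are assumptions, not the exact pallet API.

```rust
/// Simplified stand-in for the code metadata stored per uploaded code blob.
struct CodeInfo {
    refcount: u64,
}

impl CodeInfo {
    fn increment_refcount(&mut self) {
        self.refcount = self.refcount.saturating_add(1);
    }

    /// Returns `true` when the last reference is gone and the code can be removed.
    fn decrement_refcount(&mut self) -> bool {
        self.refcount = self.refcount.saturating_sub(1);
        self.refcount == 0
    }
}

fn main() {
    let mut info = CodeInfo { refcount: 1 };
    info.increment_refcount();
    assert!(!info.decrement_refcount());
    assert!(info.decrement_refcount());
}
```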

## Set `CodeHashLockupDepositPercent` for test runtime

The test runtime was setting `CodeHashLockupDepositPercent` to zero. This
trivialized many code paths and excluded them from testing. I set it to
`30%`, which is our default value, and fixed up all the tests that broke.
This should give us confidence that the lockup deposit collection works
properly.
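
For reference, a runtime configuration along these lines sets the parameter to a non-trivial value; this is a hedged sketch assuming the associated type takes a `Perbill`, not a verbatim copy of the changed test code.

```rust
use frame_support::parameter_types;
use sp_runtime::Perbill;

parameter_types! {
	// 30% of the code's storage deposit is locked up per contract using the
	// code, mirroring the production default instead of a trivializing 0%.
	pub CodeHashLockupDepositPercent: Perbill = Perbill::from_percent(30);
}
```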

## Reworked the `MockExecutable` to have both a `deploy` and a `call` entry point

This type, used for testing, could only have one of the two entry points
but not both. In order to fix the `immutable_data_set_overrides` test I
needed to add a new function `add_both` to `MockExecutable` that allows
having both entry points. Make sure to make use of it in the future :)
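
As a rough sketch of the shape of the new helper (the real `MockExecutable` is richer and registers handlers per code hash; everything below, including the signatures, is a simplified assumption):

```rust
/// Simplified mock: a fake executable that can carry both entry points.
type MockFn = fn() -> Result<(), &'static str>;

struct MockExecutable {
    deploy: Option<MockFn>,
    call: Option<MockFn>,
}

impl MockExecutable {
    /// Previously only one of the two could be set; `add_both` registers both.
    fn add_both(deploy: MockFn, call: MockFn) -> Self {
        Self { deploy: Some(deploy), call: Some(call) }
    }
}

fn main() {
    // Both entry points are registered, which the old mock could not express.
    let exec = MockExecutable::add_both(|| Ok(()), || Ok(()));
    assert!(exec.deploy.is_some() && exec.call.is_some());
}
```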

---------

Co-authored-by: command-bot <>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: PG Herveou <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>
5 people authored Feb 4, 2025
1 parent a883475 commit 4c28354
Showing 31 changed files with 1,944 additions and 1,745 deletions.
2 changes: 2 additions & 0 deletions Cargo.lock


@@ -36,6 +36,7 @@ pallet-assets-freezer = { workspace = true }
pallet-aura = { workspace = true }
pallet-authorship = { workspace = true }
pallet-balances = { workspace = true }
pallet-migrations = { workspace = true }
pallet-multisig = { workspace = true }
pallet-nft-fractionalization = { workspace = true }
pallet-nfts = { workspace = true }
@@ -133,6 +134,7 @@ runtime-benchmarks = [
"pallet-balances/runtime-benchmarks",
"pallet-collator-selection/runtime-benchmarks",
"pallet-message-queue/runtime-benchmarks",
"pallet-migrations/runtime-benchmarks",
"pallet-multisig/runtime-benchmarks",
"pallet-nft-fractionalization/runtime-benchmarks",
"pallet-nfts/runtime-benchmarks",
@@ -177,6 +179,7 @@ try-runtime = [
"pallet-balances/try-runtime",
"pallet-collator-selection/try-runtime",
"pallet-message-queue/try-runtime",
"pallet-migrations/try-runtime",
"pallet-multisig/try-runtime",
"pallet-nft-fractionalization/try-runtime",
"pallet-nfts/try-runtime",
@@ -230,6 +233,7 @@ std = [
"pallet-balances/std",
"pallet-collator-selection/std",
"pallet-message-queue/std",
"pallet-migrations/std",
"pallet-multisig/std",
"pallet-nft-fractionalization/std",
"pallet-nfts-runtime-api/std",
22 changes: 22 additions & 0 deletions cumulus/parachains/runtimes/assets/asset-hub-westend/src/lib.rs
@@ -185,6 +185,7 @@ impl frame_system::Config for Runtime {
type SS58Prefix = SS58Prefix;
type OnSetCode = cumulus_pallet_parachain_system::ParachainSetCode<Self>;
type MaxConsumers = frame_support::traits::ConstU32<16>;
type MultiBlockMigrator = MultiBlockMigrations;
}

impl cumulus_pallet_weight_reclaim::Config for Runtime {
@@ -1095,6 +1096,25 @@ impl TryFrom<RuntimeCall> for pallet_revive::Call<Runtime> {
}
}

parameter_types! {
pub MbmServiceWeight: Weight = Perbill::from_percent(80) * RuntimeBlockWeights::get().max_block;
}

impl pallet_migrations::Config for Runtime {
type RuntimeEvent = RuntimeEvent;
#[cfg(not(feature = "runtime-benchmarks"))]
type Migrations = pallet_migrations::migrations::ResetPallet<Runtime, Revive>;
// Benchmarks need mocked migrations to guarantee that they succeed.
#[cfg(feature = "runtime-benchmarks")]
type Migrations = pallet_migrations::mock_helpers::MockedMigrations;
type CursorMaxLen = ConstU32<65_536>;
type IdentifierMaxLen = ConstU32<256>;
type MigrationStatusHandler = ();
type FailedMigrationHandler = frame_support::migrations::FreezeChainOnFailedMigration;
type MaxServiceWeight = MbmServiceWeight;
type WeightInfo = weights::pallet_migrations::WeightInfo<Runtime>;
}

// Create the runtime by composing the FRAME pallets that were previously configured.
construct_runtime!(
pub enum Runtime
@@ -1106,6 +1126,7 @@
Timestamp: pallet_timestamp = 3,
ParachainInfo: parachain_info = 4,
WeightReclaim: cumulus_pallet_weight_reclaim = 5,
MultiBlockMigrations: pallet_migrations = 6,

// Monetary stuff.
Balances: pallet_balances = 10,
@@ -1430,6 +1451,7 @@ mod benches {
[pallet_asset_conversion_tx_payment, AssetTxPayment]
[pallet_balances, Balances]
[pallet_message_queue, MessageQueue]
[pallet_migrations, MultiBlockMigrations]
[pallet_multisig, Multisig]
[pallet_nft_fractionalization, NftFractionalization]
[pallet_nfts, Nfts]
@@ -30,6 +30,7 @@ pub mod pallet_assets_pool;
pub mod pallet_balances;
pub mod pallet_collator_selection;
pub mod pallet_message_queue;
pub mod pallet_migrations;
pub mod pallet_multisig;
pub mod pallet_nft_fractionalization;
pub mod pallet_nfts;
@@ -0,0 +1,225 @@
// Copyright (C) Parity Technologies (UK) Ltd.
// This file is part of Cumulus.

// Cumulus is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.

// Cumulus is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with Cumulus. If not, see <http://www.gnu.org/licenses/>.

//! Autogenerated weights for `pallet_migrations`
//!
//! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 32.0.0
//! DATE: 2025-01-27, STEPS: `50`, REPEAT: `20`, LOW RANGE: `[]`, HIGH RANGE: `[]`
//! WORST CASE MAP SIZE: `1000000`
//! HOSTNAME: `17938671047b`, CPU: `Intel(R) Xeon(R) CPU @ 2.60GHz`
//! WASM-EXECUTION: `Compiled`, CHAIN: `None`, DB CACHE: 1024
// Executed Command:
// frame-omni-bencher
// v1
// benchmark
// pallet
// --extrinsic=*
// --runtime=target/production/wbuild/asset-hub-westend-runtime/asset_hub_westend_runtime.wasm
// --pallet=pallet_migrations
// --header=/__w/polkadot-sdk/polkadot-sdk/cumulus/file_header.txt
// --output=./cumulus/parachains/runtimes/assets/asset-hub-westend/src/weights
// --wasm-execution=compiled
// --steps=50
// --repeat=20
// --heap-pages=4096
// --no-storage-info
// --no-min-squares
// --no-median-slopes

#![cfg_attr(rustfmt, rustfmt_skip)]
#![allow(unused_parens)]
#![allow(unused_imports)]
#![allow(missing_docs)]

use frame_support::{traits::Get, weights::Weight};
use core::marker::PhantomData;

/// Weight functions for `pallet_migrations`.
pub struct WeightInfo<T>(PhantomData<T>);
impl<T: frame_system::Config> pallet_migrations::WeightInfo for WeightInfo<T> {
/// Storage: `MultiBlockMigrations::Cursor` (r:1 w:1)
/// Proof: `MultiBlockMigrations::Cursor` (`max_values`: Some(1), `max_size`: Some(65550), added: 66045, mode: `MaxEncodedLen`)
/// Storage: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Proof: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
fn onboard_new_mbms() -> Weight {
// Proof Size summary in bytes:
// Measured: `171`
// Estimated: `67035`
// Minimum execution time: 8_697_000 picoseconds.
Weight::from_parts(8_998_000, 0)
.saturating_add(Weight::from_parts(0, 67035))
.saturating_add(T::DbWeight::get().reads(2))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `MultiBlockMigrations::Cursor` (r:1 w:0)
/// Proof: `MultiBlockMigrations::Cursor` (`max_values`: Some(1), `max_size`: Some(65550), added: 66045, mode: `MaxEncodedLen`)
fn progress_mbms_none() -> Weight {
// Proof Size summary in bytes:
// Measured: `42`
// Estimated: `67035`
// Minimum execution time: 2_737_000 picoseconds.
Weight::from_parts(2_813_000, 0)
.saturating_add(Weight::from_parts(0, 67035))
.saturating_add(T::DbWeight::get().reads(1))
}
/// Storage: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Proof: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Storage: `MultiBlockMigrations::Cursor` (r:0 w:1)
/// Proof: `MultiBlockMigrations::Cursor` (`max_values`: Some(1), `max_size`: Some(65550), added: 66045, mode: `MaxEncodedLen`)
fn exec_migration_completed() -> Weight {
// Proof Size summary in bytes:
// Measured: `129`
// Estimated: `3594`
// Minimum execution time: 6_181_000 picoseconds.
Weight::from_parts(6_458_000, 0)
.saturating_add(Weight::from_parts(0, 3594))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Proof: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Storage: `MultiBlockMigrations::Historic` (r:1 w:0)
/// Proof: `MultiBlockMigrations::Historic` (`max_values`: None, `max_size`: Some(266), added: 2741, mode: `MaxEncodedLen`)
fn exec_migration_skipped_historic() -> Weight {
// Proof Size summary in bytes:
// Measured: `225`
// Estimated: `3731`
// Minimum execution time: 11_932_000 picoseconds.
Weight::from_parts(12_539_000, 0)
.saturating_add(Weight::from_parts(0, 3731))
.saturating_add(T::DbWeight::get().reads(2))
}
/// Storage: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Proof: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Storage: `MultiBlockMigrations::Historic` (r:1 w:0)
/// Proof: `MultiBlockMigrations::Historic` (`max_values`: None, `max_size`: Some(266), added: 2741, mode: `MaxEncodedLen`)
fn exec_migration_advance() -> Weight {
// Proof Size summary in bytes:
// Measured: `171`
// Estimated: `3731`
// Minimum execution time: 11_127_000 picoseconds.
Weight::from_parts(11_584_000, 0)
.saturating_add(Weight::from_parts(0, 3731))
.saturating_add(T::DbWeight::get().reads(2))
}
/// Storage: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Proof: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Storage: `MultiBlockMigrations::Historic` (r:1 w:1)
/// Proof: `MultiBlockMigrations::Historic` (`max_values`: None, `max_size`: Some(266), added: 2741, mode: `MaxEncodedLen`)
fn exec_migration_complete() -> Weight {
// Proof Size summary in bytes:
// Measured: `171`
// Estimated: `3731`
// Minimum execution time: 12_930_000 picoseconds.
Weight::from_parts(13_272_000, 0)
.saturating_add(Weight::from_parts(0, 3731))
.saturating_add(T::DbWeight::get().reads(2))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Proof: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Storage: `MultiBlockMigrations::Historic` (r:1 w:0)
/// Proof: `MultiBlockMigrations::Historic` (`max_values`: None, `max_size`: Some(266), added: 2741, mode: `MaxEncodedLen`)
/// Storage: `MultiBlockMigrations::Cursor` (r:0 w:1)
/// Proof: `MultiBlockMigrations::Cursor` (`max_values`: Some(1), `max_size`: Some(65550), added: 66045, mode: `MaxEncodedLen`)
fn exec_migration_fail() -> Weight {
// Proof Size summary in bytes:
// Measured: `171`
// Estimated: `3731`
// Minimum execution time: 13_709_000 picoseconds.
Weight::from_parts(14_123_000, 0)
.saturating_add(Weight::from_parts(0, 3731))
.saturating_add(T::DbWeight::get().reads(2))
.saturating_add(T::DbWeight::get().writes(1))
}
fn on_init_loop() -> Weight {
// Proof Size summary in bytes:
// Measured: `0`
// Estimated: `0`
// Minimum execution time: 162_000 picoseconds.
Weight::from_parts(188_000, 0)
.saturating_add(Weight::from_parts(0, 0))
}
/// Storage: `MultiBlockMigrations::Cursor` (r:0 w:1)
/// Proof: `MultiBlockMigrations::Cursor` (`max_values`: Some(1), `max_size`: Some(65550), added: 66045, mode: `MaxEncodedLen`)
fn force_set_cursor() -> Weight {
// Proof Size summary in bytes:
// Measured: `0`
// Estimated: `0`
// Minimum execution time: 2_737_000 picoseconds.
Weight::from_parts(2_919_000, 0)
.saturating_add(Weight::from_parts(0, 0))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `MultiBlockMigrations::Cursor` (r:0 w:1)
/// Proof: `MultiBlockMigrations::Cursor` (`max_values`: Some(1), `max_size`: Some(65550), added: 66045, mode: `MaxEncodedLen`)
fn force_set_active_cursor() -> Weight {
// Proof Size summary in bytes:
// Measured: `0`
// Estimated: `0`
// Minimum execution time: 3_087_000 picoseconds.
Weight::from_parts(3_320_000, 0)
.saturating_add(Weight::from_parts(0, 0))
.saturating_add(T::DbWeight::get().writes(1))
}
/// Storage: `MultiBlockMigrations::Cursor` (r:1 w:0)
/// Proof: `MultiBlockMigrations::Cursor` (`max_values`: Some(1), `max_size`: Some(65550), added: 66045, mode: `MaxEncodedLen`)
/// Storage: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
/// Proof: UNKNOWN KEY `0x583359fe0e84d953a9dd84e8addb08a5` (r:1 w:0)
fn force_onboard_mbms() -> Weight {
// Proof Size summary in bytes:
// Measured: `147`
// Estimated: `67035`
// Minimum execution time: 6_470_000 picoseconds.
Weight::from_parts(6_760_000, 0)
.saturating_add(Weight::from_parts(0, 67035))
.saturating_add(T::DbWeight::get().reads(2))
}
/// Storage: `MultiBlockMigrations::Historic` (r:256 w:256)
/// Proof: `MultiBlockMigrations::Historic` (`max_values`: None, `max_size`: Some(266), added: 2741, mode: `MaxEncodedLen`)
/// The range of component `n` is `[0, 256]`.
fn clear_historic(n: u32, ) -> Weight {
// Proof Size summary in bytes:
// Measured: `1022 + n * (271 ±0)`
// Estimated: `3834 + n * (2740 ±0)`
// Minimum execution time: 15_864_000 picoseconds.
Weight::from_parts(24_535_162, 0)
.saturating_add(Weight::from_parts(0, 3834))
// Standard Error: 8_688
.saturating_add(Weight::from_parts(1_530_542, 0).saturating_mul(n.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().reads((1_u64).saturating_mul(n.into())))
.saturating_add(T::DbWeight::get().writes((1_u64).saturating_mul(n.into())))
.saturating_add(Weight::from_parts(0, 2740).saturating_mul(n.into()))
}
/// Storage: `Skipped::Metadata` (r:0 w:0)
/// Proof: `Skipped::Metadata` (`max_values`: None, `max_size`: None, mode: `Measured`)
/// The range of component `n` is `[0, 2048]`.
fn reset_pallet_migration(n: u32, ) -> Weight {
// Proof Size summary in bytes:
// Measured: `1680 + n * (38 ±0)`
// Estimated: `758 + n * (39 ±0)`
// Minimum execution time: 2_168_000 picoseconds.
Weight::from_parts(2_226_000, 0)
.saturating_add(Weight::from_parts(0, 758))
// Standard Error: 2_841
.saturating_add(Weight::from_parts(935_438, 0).saturating_mul(n.into()))
.saturating_add(T::DbWeight::get().reads((1_u64).saturating_mul(n.into())))
.saturating_add(T::DbWeight::get().writes((1_u64).saturating_mul(n.into())))
.saturating_add(Weight::from_parts(0, 39).saturating_mul(n.into()))
}
}