
Potential incorrect index update in revived gauge under specific conditions #4

Open
c4-bot-1 opened this issue Sep 24, 2024 · 3 comments

@c4-bot-1 (Contributor)

Lines of code

https://github.com/code-423n4/2024-09-fenix-finance/blob/main/contracts/core/VoterUpgradeableV2.sol#L250-L256

Vulnerability details

Impact

Under specific circumstances, a revived gauge can claim rewards accrued for the period it was killed, rewards that should have been routed to the minter, leading to unfair reward distribution at the expense of other gauges.

Description

The reviveGauge function does not update the gauge's index to the current global index when reviving a previously killed gauge. In most scenarios this is masked by the distributeAll function, which brings every gauge's index up to the global index each epoch, but under specific conditions the stale index is still exploitable.

Relevant code snippet:

function reviveGauge(address gauge_) external onlyRole(_GOVERNANCE_ROLE) {
    if (gaugesState[gauge_].isAlive) {
        revert GaugeNotKilled();
    }
    gaugesState[gauge_].isAlive = true;
    emit GaugeRevived(gauge_);
}

function _distribute(address gauge_) internal {
    GaugeState memory state = gaugesState[gauge_];
    uint256 currentTimestamp = _epochTimestamp();
    if (state.lastDistributionTimestamp < currentTimestamp) {
        uint256 totalVotesWeight = weightsPerEpoch[currentTimestamp - _WEEK][state.pool];
        if (totalVotesWeight > 0) {
            uint256 delta = index - state.index; // @contest-info outdated index can cause problem here
            if (delta > 0) {
                uint256 amount = (totalVotesWeight * delta) / 1e18;
                if (state.isAlive) {
                    gaugesState[gauge_].claimable += amount;
                } else {
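                    // rewards accrued while the gauge was killed are routed back to the minter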
                    IERC20Upgradeable(token).safeTransfer(minter, amount);
                }
            }
        }
        gaugesState[gauge_].index = index;
        uint256 claimable = gaugesState[gauge_].claimable;
        if (claimable > 0 && state.isAlive) {
            gaugesState[gauge_].claimable = 0;
            gaugesState[gauge_].lastDistributionTimestamp = currentTimestamp;
            IGauge(gauge_).notifyRewardAmount(token, claimable);
            emit DistributeReward(_msgSender(), gauge_, claimable);
        }
    }
}

The vulnerability arises in scenarios where:

  1. There's a large number of gauges in the protocol.
  2. Due to gas limitations, distributeAll cannot update all gauges in a single transaction.
  3. Manual iteration through gauges is required.
  4. A killed gauge might not be updated before it is revived, since there is no incentive to call the distribute function for a killed gauge.

In this specific scenario, a revived gauge could retain an outdated index, leading to incorrect reward calculations.
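
As a minimal illustration of points 2 and 3, a hypothetical sketch of a naive global distribution pass (this is not the Fenix implementation; the gauges array and function shape are assumed):

// Hypothetical sketch, assuming a storage array of all gauge addresses.
// A single O(n) pass over every gauge can exceed the block gas limit once
// n is large, so callers must fall back to batched distribute() calls,
// and nothing forces a killed gauge into any batch before it is revived.
address[] internal gauges;

function distributeAll() external {
    for (uint256 i; i < gauges.length; i++) {
        _distribute(gauges[i]);
    }
}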

Example scenario

Epoch x:

  • Gauge A is active with an index of 100
  • Global index is 100

Epoch x+1:

  • Gauge A is killed, its index stays at 100
  • Global index updates to 150
  • distributeAll fails to update all gauges due to gas limitations

Epoch x+2:

  • Before manual updates reach Gauge A, it is revived with index still at 100
  • Global index updates to 200

When claiming rewards:

  • Gauge B (updated correctly) gets (200 - 150) * weight_B
  • Gauge A incorrectly gets (200 - 100) * weight_A

Gauge A claims excess rewards for the period it was killed. Because the Voter contract holds only the correctly allotted amount, this excess comes at the expense of other gauges; as the PoC shows, distributing both gauges can even revert for insufficient balance. The scenario is rare, but it breaks fair reward distribution.
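
To make the excess concrete, a short worked calculation using the scenario's index values (weight_A is the pool's 1e18-scaled vote weight for the epoch; its concrete value is immaterial):

correct_delta_A = 200 - 150 = 50   // index growth since Gauge A should last have been synced
actual_delta_A  = 200 - 100 = 100  // index growth measured from the stale snapshot
excess_A        = (100 - 50) * weight_A / 1e18
                = 50 * weight_A / 1e18

Per the else branch of _distribute, this excess corresponds to the epoch in which Gauge A was killed and should have been transferred to the minter instead.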

Rationale on severity

High impact: leads to loss of funds for other gauges.
Low likelihood: occurs only under specific circumstances.
Hence, Medium severity.

Proof-of-Concept

The following test demonstrates the described scenario, in which gaugeA is killed and, due to the specific circumstances above, is not updated before being revived.

Steps

  1. Create a new test file, reviveGaugeBug.ts in test/core/VoterV2/
  2. Run npx hardhat test test/core/VoterV2/reviveGaugeBug.ts --grep "reviveGaugeDoesNotUpdateToGlobalIndex" --trace
  3. Observe that gaugeA gets more reward than gaugeB
import { loadFixture, time } from '@nomicfoundation/hardhat-toolbox/network-helpers';
import { expect } from 'chai';
import { ethers } from 'hardhat';
import {
  ERC20Mock,
  MinterUpgradeable,
  VoterUpgradeableV2,
  VotingEscrowUpgradeableV2,
} from '../../../typechain-types';
import completeFixture, { CoreFixtureDeployed, deployERC20MockToken, SignersList } from '../../utils/coreFixture';

describe('VotingEscrow_V2', function () {
  let VotingEscrow: VotingEscrowUpgradeableV2;
  let Voter: VoterUpgradeableV2;
  let Minter: MinterUpgradeable;
  let signers: SignersList;
  let token: ERC20Mock;
  let token2: ERC20Mock;
  let deployed: CoreFixtureDeployed;

  beforeEach(async () => {
    deployed = await loadFixture(completeFixture);
    VotingEscrow = deployed.votingEscrow;
    Voter = deployed.voter;
    Minter = deployed.minter;
    signers = deployed.signers;
    token = await deployERC20MockToken(signers.deployer, 'MOK', 'MOK', 18);
    token2 = await deployERC20MockToken(signers.deployer, 'MOK2', 'MOK2', 18);
  });

  async function getNextEpochTime() {
    return (BigInt(await time.latest()) / 604800n) * 604800n + 604800n;
  }

  describe('reviveGaugeDoesNotUpdateToGlobalIndex', () => {
    beforeEach(async () => {
      await deployed.fenix.transfer(signers.otherUser1.address, ethers.parseEther('2'));
      await deployed.fenix.connect(signers.otherUser1).approve(VotingEscrow.target, ethers.parseEther('100'));
      await VotingEscrow.connect(signers.otherUser1).create_lock_for_without_boost(ethers.parseEther('1'), 15724800, signers.otherUser1);
    });

    it('reviveGaugeDoesNotUpdateToGlobalIndex', async () => {
      // update Minter
      await Voter.updateAddress('minter', deployed.minter);

      // Deploy two new pools and create new gauges, gaugeA and gaugeB
      let poolA = await deployed.v2PairFactory.createPair.staticCall(deployed.fenix.target, token.target, false);
      await deployed.v2PairFactory.createPair(deployed.fenix.target, token.target, false);
      let res = await Voter.createV2Gauge.staticCall(poolA);
      let gaugeA = res[0];
      let tx = await Voter.createV2Gauge(poolA);

      let poolB = await deployed.v2PairFactory.createPair.staticCall(deployed.fenix.target, token2.target, false);
      await deployed.v2PairFactory.createPair(deployed.fenix.target, token2.target, false);
      res = await Voter.createV2Gauge.staticCall(poolB);
      let gaugeB = res[0];
      tx = await Voter.createV2Gauge(poolB);

      // Epoch X
      // Vote for gaugeA and gaugeB (equally)
      await Voter.connect(signers.otherUser1).vote(1, [poolA, poolB], [100n, 100n]);
      expect(await Voter.totalWeightsPerEpoch(await Minter.active_period())).to.be.greaterThan(0n);

      // Advance to the next epoch and call distributeAll
      // Epoch X+1
      let nextEpoch = await getNextEpochTime();
      await time.increaseTo(nextEpoch + 3601n);
      await Voter.distributeAll();

      // Kill gaugeA in this epoch
      await Voter.killGauge(gaugeA);
      // Vote only for gaugeB in this epoch
      await Voter.connect(signers.otherUser1).vote(1, [poolB], [100n]);
      expect(await Voter.totalWeightsPerEpoch(await Minter.active_period())).to.be.greaterThan(0n);

      // Advance to the next epoch
      // Epoch X+2
      // Suppose there is a large number of gauges, so gaugeA does not get updated via the distribute function
      nextEpoch = await getNextEpochTime();
      await time.increaseTo(nextEpoch + 3601n);
      await Voter.distribute([gaugeB]);

      // gaugeA is revived
      await Voter.reviveGauge(gaugeA);
      // gaugeA's index is outdated after being revived
      expect((await Voter.gaugesState(gaugeA))[6]).to.be.lessThan(await Voter.index());

      // Vote for gaugeA and gaugeB (equally)
      await Voter.connect(signers.otherUser1).vote(1, [poolA, poolB], [100n, 100n]);

      // Advance to the next epoch
      // Epoch X+3
      nextEpoch = await getNextEpochTime();
      await time.increaseTo(nextEpoch + 3601n);

      /**
        Try each of gaugeA and gaugeB with --trace.
        Notice that gaugeA gets more reward despite receiving the same vote weight.

        If you distribute both gauges, the call fails due to insufficient balance in the Voter contract.
      */
      // await Voter.distribute([gaugeB]);
      await Voter.distribute([gaugeA]);
    });
  });
});

Recommended Mitigations

function reviveGauge(address gauge_) external onlyRole(_GOVERNANCE_ROLE) {
    if (gaugesState[gauge_].isAlive) {
        revert GaugeNotKilled();
    }
    gaugesState[gauge_].isAlive = true;
    gaugesState[gauge_].index = index; // <-- update to global index
    emit GaugeRevived(gauge_);
}
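
A second option, sketched below under the assumption that _distribute is safe to call from reviveGauge, is to settle the gauge through the existing _distribute path before flipping isAlive. While the gauge is still marked killed, _distribute both transfers the outstanding delta to the minter and syncs the gauge's index, whereas the index-only fix above leaves those tokens sitting in the Voter contract:

function reviveGauge(address gauge_) external onlyRole(_GOVERNANCE_ROLE) {
    if (gaugesState[gauge_].isAlive) {
        revert GaugeNotKilled();
    }
    // While isAlive is still false, _distribute routes the stale delta to
    // the minter and sets the gauge's index to the current global index.
    _distribute(gauge_);
    gaugesState[gauge_].isAlive = true;
    emit GaugeRevived(gauge_);
}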

Assessed type

Context

@b-hrytsak added the sponsor acknowledged label Sep 30, 2024
@b-hrytsak

Thank you for your submission.

Similar to #16

Still needs some specific conditions, although this is technically a valid submission

Thank you for your participation

@c4-judge

alcueca marked the issue as satisfactory

@c4-judge added the satisfactory and selected for report labels Sep 30, 2024
@c4-judge

alcueca marked the issue as selected for report

@C4-Staff added the M-05 and M-06 labels and removed M-05 Oct 4, 2024