Implement P256 verification via RIP-7212 precompile with Solidity fallback #4881
base: master
Conversation
🦋 Changeset detected. Latest commit: 2fe4a16. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Left a couple of questions and notes while reviewing. I've been delaying this review for a while, but at first sight it looks really well implemented; I mostly need to familiarize myself with it and check the math is correct.
Co-authored-by: Ernesto García <[email protected]>
Relevant source for discussion: https://www.hyperelliptic.org/EFD/g1p/auto-shortw-jacobian.html
if (pos > 0) {
    (x, y, z) = _jAdd(x, y, z, points[pos].x, points[pos].y, points[pos].z);
}
So here the if is optional. If we remove the if, we are going to load points[0], which is (0,0,0), and _jAdd will skip it as the "neutral element". The if here has a cost: 15/16 of the time we pay it for no real reason (and we still pay the check in _jAdd); 1/16 of the time the if avoids the overhead of a function call.
I'm going to benchmark which one is better and comment it here so we don't go back and forth.
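For intuition, the neutral-element short-circuit being discussed can be sketched in Python (a hedged sketch over P-256 field constants, not the library's Solidity code; this version only handles the identity plus distinct points, no doubling):

```python
# Sketch: Jacobian-coordinate point addition with a neutral-element
# short-circuit, mirroring the _jAdd discussion. P-256 field constants.
# Only valid for the identity (z == 0) and distinct points (no doubling).
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3  # secp256r1 uses a = -3 mod p

def j_add(p1, p2):
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    if z1 == 0:  # (0, 0, 0) is the neutral element: return the other point
        return p2
    if z2 == 0:
        return p1
    zz1, zz2 = z1 * z1 % P, z2 * z2 % P
    u1, u2 = x1 * zz2 % P, x2 * zz1 % P            # U1 = X1*Z2^2, U2 = X2*Z1^2
    s1, s2 = y1 * zz2 * z2 % P, y2 * zz1 * z1 % P  # S1 = Y1*Z2^3, S2 = Y2*Z1^3
    h, r = (u2 - u1) % P, (s2 - s1) % P
    hh, hhh = h * h % P, h * h * h % P
    x3 = (r * r - hhh - 2 * u1 * hh) % P
    y3 = (r * (u1 * hh - x3) - s1 * hhh) % P
    z3 = h * z1 * z2 % P
    return (x3, y3, z3)

def to_affine(pt):
    x, y, z = pt
    zinv = pow(z, -1, P)  # modular inverse (Python 3.8+)
    return x * zinv * zinv % P, y * zinv * zinv * zinv % P
```

With this short-circuit, unconditionally adding points[0] = (0, 0, 0) is a no-op, which is exactly why the `if` guard above is optional and purely a gas trade-off.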
I checked: skipping the mloads in 1/16 of cases is a bigger gain than the cost of the if in the other 15/16 of cases. Keeping the if is the more effective solution here.
I managed to make the implementation compile and pass tests without requiring via-ir to be enabled. Although I agree users should be encouraged to use it given the gas costs, I think it's better not to force them.
LGTM again
I understand the logic for 9b24014, but we surely can do better. EDIT: actually, every time we do a _jAdd, one of the points is already in memory, so we can just pass the memory object and load it at the last moment. See ecd3aa2. IMO that is WAY cleaner (and it doesn't need storing to the scratch space). Also, I re-applied the changes to hardhat.config.js, which are IMO a significant improvement to the clarity of the config, and which do not enable IR by default.
}

(bool success, bytes memory returndata) = address(0x100).staticcall(abi.encode(h, r, s, qx, qy));
return (success && returndata.length == 0x20) ? (abi.decode(returndata, (bool)), true) : (false, false);
What if Ethereum puts another precompile at 0x100 in the future that also returns 0x20-length return data? Might be an edge case, but the EIP and the RIP might not align in the long run! Or another example: an L2 may already have put a different precompile at 0x100 that also returns 0x20-length data (haven't done research on this tbh).
We were told that EIP-7212 is not up to date, that RIP-7212 is the one to follow, and that precompile address allocation would be organised at the RIP level to avoid conflicts.
If the community can't figure that out, wtf are we (library devs) expected to do?
"would be organised at the RIP level to avoid conflicts." -> talk is cheap ;) so idk, I just don't trust this really. The reason why I haven't provided a native way to call the P256 precompile in my snekmate (Vyper) library is that I leave this decision to the devs deploying it. I.e. they plan to deploy on 5 chains, 3 of them have the precompile and 2 don't, so they write their own wrapper around it using e.g. 0x100 as target since it's ensured that it exists there.
If you don't want to risk it, just use verifySolidity
you know how people are: "hey let's gas optimise everything and native looks sweet in our tests since we run it on a P256-supported fork." I think a good compromise is to put a warning in the docs about the current situation?
but not with the idea that we should not provide features unless they work on every chain.
I never said this. My main point here is that the precompile check is not future proof as the return data of length 32 bytes could also come from other precompiles in the future on that address. I'm not saying this will happen, but I have seen too much shit over the last years to be paranoid enough about this edge case.
Thanks for pointing this out @pcaversaccio. I agree with the concern, and now that I think more about it I also agree we should explain the situation at the very minimum. We raised a similar concern for Math.modexp that ended up in an explicit documentation mention, so I would consider adding a note here.
Regardless, both RIP-7212 and EIP-7212 agree that the precompile "returns 1 in 32 bytes format". The current implementation reverts if the returndata can't be decoded as a boolean. In my opinion, it's unlikely that a precompile at address(0x100) will randomly return true, but I recognize the possibility. Would you agree the decoding constrains the possibility?
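The guard under discussion can be modeled in Python (a hedged sketch; `interpret` is a hypothetical helper mirroring the `(verified, supported)` tuple returned by the Solidity snippet, not part of the library):

```python
def interpret(success: bool, returndata: bytes):
    """Model of the Solidity return-data guard: returns (verified, supported).

    Only a successful call returning exactly 32 bytes counts as "the
    precompile exists"; anything else is treated as "not supported".
    """
    if not (success and len(returndata) == 0x20):
        return (False, False)  # precompile absent or returning unexpected data
    value = int.from_bytes(returndata, "big")
    # abi.decode(returndata, (bool)) reverts unless the 32-byte word is 0 or 1
    if value not in (0, 1):
        raise ValueError("invalid boolean encoding")
    return (bool(value), True)
```

This is why the decoding constrains (but does not eliminate) the risk: an unrelated precompile would have to return exactly a 32-byte big-endian 1 to be misread as a successful verification.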
Would you agree the decoding constrains the possibility?
Overall yes, but I want to raise that this might result in unexpected behaviour, since you usually don't expect a revert from a precompile (but rather empty return data to indicate an error).
I agree it's unexpected behavior, but it's a manageable risk.
I think a good compromise is to put a warning in the docs about the current situation?
I hear you with this and think an "IMPORTANT" block should be enough
sgtm!
Co-authored-by: sudo rm -rf --no-preserve-root / <[email protected]>
Re: "Transaction reverted: trying to deploy a contract whose code is too large". I think if a non-exposed contract is too large it should trigger the compiler warning and fail CI anyway: openzeppelin-contracts/hardhat.config.js Lines 91 to 97 in c3f8b76
I'd like to hear why this is a problem if using the scratch space is safe. Regardless of how hacky it is, I do think it's worse that we didn't discuss the config and it is again there. I would appreciate it if you communicated your decisions better. It's pretty much impossible to follow your preferences here!
Moved that part to a separate PR OpenZeppelin#5099
function verifySolidity(bytes32 h, bytes32 r, bytes32 s, bytes32 qx, bytes32 qy) internal view returns (bool) {
    if (r == 0 || uint256(r) >= N || s == 0 || uint256(s) > HALF_N || !isOnCurve(qx, qy)) {
        return false;
    }
for a public key to be valid, it should:

- not be the point at infinity,
- have coordinates $(x, y)$ satisfying $0 \le x, y < p$, and
- satisfy the equation $y^2 = x^3 + ax + b$ modulo $p$.

on property one: the public key is used in _preComputeJacobianPoints by points[0x01] = JPoint(px, py, 1); // 1,0 (p), which would never be the point at infinity. _affineFromJacobian uses a convention under which _preComputeJacobianPoints (L278) would be incorrect, but given that the point at infinity is not a valid public key anyway, it's not an issue as we instead reject the public key in isOnCurve.

on property two: the x and y coordinates of the public key get reduced modulo p in calculations, so missing a check for property two means that the verify function will check the signature against the public key with reduced coordinates.

on property three: we are checking that it lies on the curve via the Weierstrass equation (isOnCurve).

i think properties 1 and 3 are ok, but 2 is missing.
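The three properties can be sketched in a few lines of Python (a hedged illustration using P-256 constants; `is_valid_public_key` is a hypothetical helper, not the library's API):

```python
# P-256 (secp256r1) domain parameters
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def is_valid_public_key(x: int, y: int) -> bool:
    # Property 2: coordinates must already be reduced, i.e. in [0, p)
    if not (0 <= x < P and 0 <= y < P):
        return False
    # Property 3: Weierstrass equation y^2 = x^3 + ax + b (mod p).
    # Property 1 comes for free: since b != 0, (0, 0) never satisfies it.
    return (y * y - (x * x * x + A * x + B)) % P == 0
```

Note how the range check and the curve-equation check together cover all three properties, which is the shape of the fix discussed below.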
- Because B != 0 in Secp256r1, (0, 0) is not on the curve ... so that should be ok.
- So should we add uint256(qx) >= P || uint256(qy) >= P? Also, is it P or N?
yeah that should be good. it's P because we are using that as the modulus in calculations like isOnCurve
function verifySolidity(bytes32 h, bytes32 r, bytes32 s, bytes32 qx, bytes32 qy) internal view returns (bool) {
    if (r == 0 || uint256(r) >= N || s == 0 || uint256(s) > HALF_N || !isOnCurve(qx, qy)) {
        return false;
    }

Suggested change:

function verifySolidity(bytes32 h, bytes32 r, bytes32 s, bytes32 qx, bytes32 qy) internal view returns (bool) {
    if (r == 0 || uint256(r) >= N || s == 0 || uint256(s) > HALF_N || uint256(qx) >= P || uint256(qy) >= P || !isOnCurve(qx, qy)) {
        return false;
    }
tests pass with this change, but i think it's useful to add a unit test for this property specifically, i can add it
i think it also makes sense to have this in _tryVerifyNative
I added the check directly in isOnCurve
Note: the public key is unlikely to be arbitrary user input. It will be recorded/whitelisted onchain by some admin, so this attack vector is not really critical. But it's still good to add the (cheap) check in isOnCurve
This would allow for public keys $(x,y)$, where $x < 2^{256} - p$ or $y < 2^{256} - p$, for which there would be another pair $(x^\prime,y^\prime)$ — such as $(x+p,y)$ — that can be used as a public key and for which signatures made for $(x,y)$ would also verify.

Although this is true, it's unclear to me whether you can create a signature for public keys that are not on the curve
if it's not on the curve
: the problem is that you can find
Got it, so it's the other way around. Still it's relevant that public keys are unlikely to be arbitrary, but let's keep the check regardless.
}

/**
 * @dev Same as {verify}, but it will revert if the required precompile is not available.
I guess this would address general concerns about the availability of the precompile
Suggested change:

 * @dev Same as {verify}, but it will revert if the required precompile is not available.
 *
 * IMPORTANT: The use of this function relies on the RIP-7212 precompile to be available at `address(0x100)`.
 * Make sure the specified address contains the expected precompile, otherwise the returned value may be
 * misinterpreted as a positive boolean.
nit
/**
 * @dev Checks if (x, y) are valid coordinates of a point on the curve.
 * In particular this function checks that x <= P and y <= P.
 */
add note on why (x,y) are not valid coordinates
Suggested change:

/**
 * @dev Checks if (x, y) are valid coordinates of a point on the curve.
 * In particular this function checks that x <= P and y <= P. There is no need to
 * explicitly check that (x, y) is not (0, 0), because of the presence of the 'b'
 * coefficient in the Weierstrass equation.
 */
Fixes LIB-1225
Why do we care about secp256r1?
Most security applications use secp256r1 (also known as P-256). This led hardware manufacturers to implement it and leave other "exotic" curves on the side. Today, billions of people around the world own devices with special security hardware that supports secp256r1. If that were the curve used by Ethereum, all these people would basically already own a hardware wallet ... but unfortunately that is not the case.
If we cannot easily modify the curves supported by major smartphone manufacturers, we can provide tools to verify secp256r1 signatures onchain. This would allow control of ERC-4337 smart wallets (among others) through a device designed to handle security keys (something users are notoriously bad at).
What @openzeppelin/contracts could provide
Existing wallets provide mechanisms to produce secp256k1 signatures, both for transactions and messages. Solidity provides a precompile that, given a hash and a signature, will recover the address of the signer (using secp256k1). No such precompile exists for secp256r1.
There exist Solidity implementations of the secp256r1 "verification" workflow. There is also a proposal to provide that verification through a precompile. Even if the precompile is implemented, it is likely that many chains will not upgrade soon. A Solidity implementation would remain useful for users on these chains.
In some cases, users may want to follow the "recovery" flow that they are familiar with. There is also no proposal for a precompile that would do that operation. A Solidity implementation would possibly be useful to many users, and remain uncontested in the near future.
Notes
Stack too deep
The current proposed implementation works well when optimization is turned on. However, compilation fails with "stack too deep" if optimizations are NOT turned on. This PR enables optimizations for all tests to circumvent this issue. Also, users will have to enable optimizations if they want to use this library, which they should definitely do given the gas costs.
I also added details: { yul: true }, to the optimizer settings. This change in the optimization setup may affect the accuracy of gas reporting in this PR (the reference doesn't use the same settings).

Benchmarking
This repo provides benchmarking of this implementation against other existing ones.
(Screenshot: benchmark comparison, 2024-02-07)
PR Checklist
Changeset entry (run npx changeset add)