
Refactor/v2 #112

Open · wants to merge 30 commits into main
Conversation

@Keyrxng (Member) commented Jun 5, 2024

Resolves #109, #106

A complete overhaul was required, as mentioned here.

  • It is now a far more readable and maintainable codebase
  • It validates using multiple on-chain sources
  • It produces cleaner and more human-friendly output for manual verification
  • It is now geared towards seeding the DB with verifiable permits which will be used to build a leaderboard

I do not have private repo access, so I cannot confirm whether those repos would be tallied if the script is run by someone who does, but I expect they would be.

More than half of the found data has been verified and connected with its on-chain counterpart, although there are still ~350 permits without a txhash, which is a staggering amount. It's still unclear to me whether that's user error on my part or something else, but I cannot get the number any lower than that. With that said, I am confident in my efforts to make the data as verifiable as possible given the challenges involved.

This discrepancy is clearer when comparing the claimed and unclaimed leaderboards. I have previously suggested bundling these unclaimed transactions and invalidating all of them, then creating either one permit for each user's invalidated total or multiple permits depending on the total value. The output of this tool would make that simple.


Actually pushing the output to my DB is near impossible given the RLS constraints. I did try to seed my local DB with the prod data but had no luck, which makes testing that aspect difficult; still, if the output data matches the schema, it should go right through.

Some edge cases that I have tried to either resolve or sidestep completely are:

  • The same nonce was found on more than one occasion across multiple repos. I group these by repo for validation and attempt to use whichever permit has the highest value and a valid txhash (see the sketch after this list). /production, for instance, I know is mostly a test repo with fake permits, but this has happened in 15 repos.
  • Permits exist throughout the org for users who do not exist in the database; it's unclear why that would be the case.
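
For illustration, the selection rule for the duplicate-nonce case looks roughly like the sketch below; the Permit shape and field names are illustrative assumptions, not the tool's actual types:

    interface Permit {
      repo: string;
      nonce: string;
      amount: bigint; // permit value in wei
      txHash?: string; // present only when the claim was found on-chain
    }

    // Keep one permit per nonce: prefer a valid txhash, then the highest value.
    function dedupeByNonce(permits: Permit[]): Permit[] {
      const byNonce = new Map<string, Permit>();
      for (const permit of permits) {
        const existing = byNonce.get(permit.nonce);
        const candidateWins =
          !existing ||
          (!!permit.txHash && !existing.txHash) ||
          (!!permit.txHash === !!existing.txHash && permit.amount > existing.amount);
        if (candidateWins) byNonce.set(permit.nonce, permit);
      }
      return [...byNonce.values()];
    }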

@Keyrxng (Member, Author) commented Jun 5, 2024

I didn't think committing all of the output files was the best way to do things

Claimed Leaderboard:
[screenshot]

Unclaimed Leaderboard:
[screenshot]

@rndquu self-requested a review · June 26, 2024 07:27
@rndquu (Member) left a comment

@Keyrxng

  1. I tried running yarn dune, yarn issue and yarn userTx for my own address only for the ubiquity/ubiquity-dollar repo, which generated the necessary json files (as expected). Then I ran yarn all, which keeps throwing an error: this should not happen. There's no such string/error in the codebase, so it seems this error is produced by some package.
  2. Pls create your own supabase instance, run these migrations, uncomment these lines, and make sure the permits are saved in the DB as expected.

@Keyrxng (Member, Author) commented Jun 26, 2024

> @Keyrxng
>
> 1. I tried running yarn dune, yarn issue and yarn userTx for my own address only for the ubiquity/ubiquity-dollar repo, which generated the necessary json files (as expected). Then I ran yarn all, which keeps throwing an error: this should not happen. There's no such string/error in the codebase, so it seems this error is produced by some package.
> 2. Pls create your own supabase instance, run these migrations, uncomment these lines and make sure the permits are saved in a DB as expected.
  1. This stumped me for a bit too, but it's inside the WebSocketProvider class. Why exactly it's happening has eluded me and it's difficult to catch, but I will spend more time pinpointing the cause; it does appear innocent from what I've seen:
this.websocket.onmessage = (messageEvent: { data: string }) => {
    const data = messageEvent.data;
    const result = JSON.parse(data);
    if (result.id != null) {
        // Handle the response to a JSON-RPC request we sent...
        ...
        } else {
            // No result came back, so synthesize an error.
            let error: Error | null = null;
            if (result.error) {
                // (error is constructed from result.error here - elided)
            } else {
                error = new Error("unknown error");
            }
        }
    } else if (result.method === "eth_subscription") {
        // Subscription...
    } else {
        // Message is neither a request response nor a subscription event:
        console.warn("this should not happen");
    }
};
  2. I will give that a try and let you know; I've already set up the first, but I don't recall running the second migration.

@Keyrxng (Member, Author) commented Jun 28, 2024

@rndquu here's what should be done (a sketch of step 1 follows the list):

  • add DAI into the tokens table with id 2
  • update constants.ts with key and url
  • run the three parsers
  • run yarn all
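
For step 1, something like this should work; this is a hypothetical sketch using supabase-js, and the tokens column names and the mainnet DAI address are my assumptions, not the confirmed schema:

    import { createClient } from "@supabase/supabase-js";

    const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

    // Insert DAI as token id 2 (column names assumed).
    async function addDaiToken() {
      const { error } = await supabase.from("tokens").insert({
        id: 2,
        address: "0x6B175474E89094C44Da98b954EedeAC495271d02", // DAI on Ethereum mainnet
      });
      if (error) throw error;
    }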

Because it first fetches and filters against the current DB, it can be re-run any number of times and will populate the DB only with newly found permits, so it would be an effective, gas-free solution for ubiquity/audit.ubq.fi#12. There is also far more data available to improve things further, like pinning permits to issues. Thankfully, Actions can support the length of time it takes to run this collection of scripts, so this could either be a standalone cron workflow or be converted into a plugin.
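
The re-run safety boils down to a fetch-then-filter step like the sketch below (reusing the supabase client from the sketch above; table and column names are assumptions):

    // Insert only permits whose nonce is not already stored.
    async function seedNewPermits(found: { nonce: string }[]) {
      const { data: existing, error } = await supabase.from("permits").select("nonce");
      if (error) throw error;
      const known = new Set((existing ?? []).map((row) => row.nonce));
      const fresh = found.filter((permit) => !known.has(permit.nonce));
      if (fresh.length > 0) await supabase.from("permits").insert(fresh);
    }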

I had to set the row IDs explicitly because, without defining them, inserts were being assigned single-digit IDs even though the current highest is 111. I'm not sure whether that will be a problem in other codebases.
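
The workaround amounts to reading the current max id and numbering new rows from there, roughly like this (again with assumed table/column names):

    // Fetch the highest existing id so new rows can be numbered explicitly.
    async function nextPermitId(): Promise<number> {
      const { data, error } = await supabase
        .from("permits")
        .select("id")
        .order("id", { ascending: false })
        .limit(1);
      if (error) throw error;
      return (data?.[0]?.id ?? 0) + 1;
    }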



On claimed vs. unclaimed: I didn't want to jump to conclusions, but we do need to consider that multiple permit comments on one issue are not uncommon, and neither is a permit comment being quoted (especially by pav); both of these would produce duplicate counts. Then there are the permits buried out there that are genuinely unspent.
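
For the quoted-comment case specifically, skipping quoted lines when scanning comment bodies would avoid the double count. A minimal sketch, assuming permits appear as pay.ubq.fi claim URLs:

    // Collect claim URLs, ignoring markdown-quoted ("> ") lines so a quoted
    // permit comment is not counted a second time.
    function extractPermitUrls(commentBody: string): string[] {
      return commentBody
        .split("\n")
        .filter((line) => !line.trimStart().startsWith(">"))
        .flatMap((line) => line.match(/https:\/\/pay\.ubq\.fi\/?\?claim=[\w=+\/.-]+/g) ?? []);
    }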

I went through unspent-permits.json for my own address and it found two for me (likely both participation rewards, judging by the amounts), which I have since claimed. Until we deal with all of those unspent permits, I think this is as accurate as it gets.

@Keyrxng (Member, Author) commented Oct 21, 2024

> [quoting the Jun 28, 2024 comment above in full]

This is where I had left things. The script just needs to be run by someone with access to the core DB; as of when I last touched this, they also need to add one more token.

Would you prefer that I "re-open" this and spend a day or so trying to improve it further, or are we happy to go ahead as-is? What exactly should we do with this PR?

Labels: None yet
Projects: None yet
Development: successfully merging this pull request may close "Backfill DB with legacy permits"
2 participants