Those tables have token IDs, which you can recover from a few standard ERC721 APIs on the smart contract. For example, with the ERC721Enumerable extension you can iterate tokenOfOwnerByIndex to get the list of table token IDs you own on the current network.
From that list of IDs, e.g. [5, 15, 24], you should then be able to select data from the tables. For example, if you got those IDs from the Mumbai network, you might call SELECT * FROM 80001_5. But that isn't going to work, because you don't know the table prefix.
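To make the gap concrete, here is a minimal TypeScript sketch of what you can construct from on-chain data alone. The chainId_tokenId name format is taken from the examples above; the function name is illustrative, not part of any Tableland API.

```typescript
// Sketch: given token IDs recovered from the registry (e.g. via an
// ERC721Enumerable call such as tokenOfOwnerByIndex), the only table
// names you can build without an extra lookup omit the prefix.
function prefixlessNames(chainId: number, tokenIds: number[]): string[] {
  return tokenIds.map((id) => `${chainId}_${id}`);
}

// Token IDs [5, 15, 24] on Mumbai (chain ID 80001):
// prefixlessNames(80001, [5, 15, 24]) -> ["80001_5", "80001_15", "80001_24"]
```

These prefix-less names are exactly what a read query would need to accept for the IDs alone to be sufficient.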
Ideal scenario
When we first discussed the idea of adding prefixes to tables, the motivation was developer experience: being able to look at the tables you've created and use the prefix to quickly remember what each one is for is very helpful. If the table name were just networkId_tokenId, tables would be very difficult to work with.
However, we didn't expect that the prefix would be required. Ideally, the following would be equivalent:
SELECT * FROM mynewtable_80001_1
SELECT * FROM 80001_1
Likewise, a join could be
SELECT * FROM mynewtable_80001_1 WHERE id IN (SELECT id FROM 80001_1), mixing and matching the two forms. The prefix doesn't matter.
In this way, the prefix is available when you want it, but isn't required when you don't have it.
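One way to picture the "prefix optional" behavior is a resolver that maps either form of a name to the same canonical chainId_tokenId pair. This is only a sketch of the idea; the function name and the regex grammar (an optional prefix starting with a letter, followed by two numeric segments) are assumptions, not Tableland's actual parser.

```typescript
// Resolve either `prefix_chainId_tokenId` or `chainId_tokenId` to the
// same canonical pair, so `mynewtable_80001_1` and `80001_1` refer to
// one table. The prefix may itself contain underscores and digits; the
// greedy match keeps the two trailing numeric segments as IDs.
function canonicalTableId(name: string): { chainId: number; tokenId: number } {
  const m = /^(?:([A-Za-z]\w*)_)?(\d+)_(\d+)$/.exec(name);
  if (!m) throw new Error(`not a table name: ${name}`);
  return { chainId: Number(m[2]), tokenId: Number(m[3]) };
}
```

Under a scheme like this, a query layer could normalize both spellings before lookup, which is all the equivalence above requires.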
Side effects
This actually makes using the registry a smidge easier from smart contracts, since you don't even think about prefixes; you just track the token IDs the custom smart contract owns. Any contract using the Tableland registry would never need to store its own owned-table info, since it could fetch it from the registry dynamically if needed (I suspect most would still just store the IDs).
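The bookkeeping that leaves a contract (or client) with is small. A rough TypeScript sketch, with illustrative class and method names, of storing only token IDs and deriving queryable names on demand:

```typescript
// Track only owned token IDs per chain; prefix-free names are derivable
// from the IDs alone, so no prefix storage is needed anywhere.
class OwnedTables {
  private ids = new Set<number>();
  constructor(private chainId: number) {}

  // Record a table's token ID when it is created or transferred in.
  record(tokenId: number): void {
    this.ids.add(tokenId);
  }

  // Derive the queryable names lazily from chain ID + token IDs.
  names(): string[] {
    return [...this.ids].map((id) => `${this.chainId}_${id}`);
  }
}
```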
Just to build off this ticket a bit more: at one point we discussed allowing the "prefix" of a table name to be a human-readable nice-to-have that could be ignored in queries. It might be nice to support this in our query APIs so that users can query tables without knowing the full prefix ahead of time.
brunocalza changed the title from "Table prefixes are required for network reads but aren't recoverable on-chain or easily." to "[GOT-48] Table prefixes are required for network reads but aren't recoverable on-chain or easily." on Mar 23, 2023
Table prefixes aren't easy to recover on chain.
When you look at the registry on any chain, it contains all the tables as NFTs, e.g. https://testnets.opensea.io/collection/tableland-tables-mumbai