OTR will need to standardize on an approach to maintaining historical metadata for sequential NFTs.
Background
Generally, every part of a registry should be "curated from the present" – identities can contain historical information in past snapshots, but that historical information cannot be expected to have remained completely unchanged since it was originally published. This is an unavoidable consequence of registries being published in the present – even if a registry claims to accurately represent historical information related to a particular identity, you're still trusting the present publisher of the registry to have accurately maintained that historical information.
In practice, the only way to verify "primary source" historical metadata registry information is to demonstrate that the information was published at that time, by a source trusted at that time, in a way that cannot have been corrupted in the interim. For example, if a previously-trusted identity published a version of the registry in a Metadata Registry Publication Output, we can verify metadata published at that time given its provable existence in the blockchain (regardless of whether or not the entity remains trusted in the present).
Additionally, publishers commonly need to make simple corrections to published information – correcting typos, fixing links, clarifying descriptions, etc. – which do not substantially change the underlying information. These minor corrections would add unnecessary noise to an interface showing users the history of a particular token. Instead, historical information is more useful when curated from the present: outdated information uses the past tense, irrelevant trivia and now-broken links are excluded, links to new resources describing the past behavior or migration process are included, clarifications are added given information learned in the interim, etc.
The BCMR CHIP includes examples demonstrating this, e.g. `fungible-token.json` documents some historical background for the `XAMPL` token, including the timestamp at which it was rebranded and re-denominated from `EXAMPLE`, the company's blog post about the historical asset (https://blog.example.com/example-asset-is-now-XAMPL), and a `migrate` URI. Notably excluded are, for example, old social media profile URIs which are no longer correct and may even be misleading if included (e.g. a compromised social media profile).
Given this background, there's little reason for metadata registries to attempt to preserve historical identity snapshots without any sort of modification.
Preserving historical sequential NFT metadata
The open "policy" question for OTR is: should OTR attempt to preserve the former parsing behavior of token categories as part of the historical information curated by OTR?
A specific case: @bchguru is releasing a "second wave" of 2,000 NFTs within the same category as the 2,000 that are already listed in OTR (and they've published that they plan to ultimately release a total of 10,000 NFTs).
To be useful in BCMR clients, all `nft.types` must be present in the latest identity snapshot (otherwise, NFTs from the "first wave" would be interpreted as having an unknown type given the latest snapshot, regardless of whether they're included in historical snapshots). So all 4,000 NFTs must be listed in the latest snapshot. The question is: should the first set of 2,000 NFTs continue to be listed in the now-outdated snapshot, such that the information is duplicated across the two snapshots?
By including the "first wave" NFTs in the older snapshot too, we're technically preserving some additional historical information for those particular NFTs: those "NFT types" already existed in that snapshot, before this latest release. However, that historical information is already available via the blockchain – NFTs created in either "wave" can already be traced to their creation transaction, so we already have a more definitive "primary source" for that information than snapshots in a present-day metadata registry.
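To make the "latest snapshot" constraint concrete, here's a minimal sketch of how a client might resolve a sequential NFT's metadata. The shapes and helper names (`latestSnapshot`, `resolveNftType`) are hypothetical simplifications, not the full BCMR schema:

```typescript
// Simplified snapshot shapes (assumed; not the full BCMR schema).
type NftType = { name: string; description?: string };
type Snapshot = {
  name: string;
  token?: { nfts?: { parse?: { types?: Record<string, NftType> } } };
};
// An identity's history, keyed by ISO 8601 timestamp.
type IdentityHistory = Record<string, Snapshot>;

const latestSnapshot = (history: IdentityHistory): Snapshot => {
  // ISO 8601 timestamps sort chronologically as plain strings.
  const latestKey = Object.keys(history).sort().reverse()[0];
  return history[latestKey];
};

// For sequential NFTs, the hex-encoded commitment is used directly as the
// type key; a commitment resolves only if the *latest* snapshot lists it.
const resolveNftType = (
  history: IdentityHistory,
  commitmentHex: string
): NftType | undefined =>
  latestSnapshot(history).token?.nfts?.parse?.types?.[commitmentHex];
```

Note that a "first wave" NFT whose type appears only in an older snapshot resolves to `undefined` here — which is exactly why every type must be carried forward into the latest snapshot.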
Considerations for parsable NFTs
I think it's also clarifying to consider the question for "parsable NFTs", e.g. NFTs used in decentralized applications to represent pledges, receipts, orders, etc.
In most cases, I expect that changes in such an application's "token API" will happen via a migration to another token category – it's generally a security concern if previously issued tokens (following a different issuance scheme) are floating around simultaneously with the new scheme (at minimum requiring extra bytes in contracts to identify and exclude them from being used in place of certain tokens from the new scheme). But it's also reasonable to expect that some applications will design upgrade strategies prior to their first release, allowing them to e.g. deploy a new version of a sidechain after agreement by 80% of the bridging token holders. In these cases, it's even more important that historical snapshots are not used by clients when rendering information about tokens: a token representing some sort of privilege in the v1 system might be rendered worthless following the migration to a v2 system; if the token is still displayed with the old information (e.g. "Liquidity Provider Share"), clients could be misled into believing it continues to have value in the v2 system. Here again, the only snapshot that can safely be used without warning the user is the latest one; previous snapshots can only be useful for explaining historical details to a more advanced end user.
So: it's critical that all `nft.types` are included in the latest snapshot (including now-outdated types – even if those types are worthless in the modern system).
Additionally, it seems like the "right place" to indicate relevant historical information about a particular NFT type is actually in that type's listing within the latest snapshot (e.g. "These shares used to entitle the owner to withdraw their share of the liquidity pool in Example Application v1, but as part of the upgrade to v2, these shares could be exchanged for [...]. Remaining holders can choose to [...]"). The information must be in the latest snapshot, and while it could also be duplicated in older snapshots, we'd really only get additional value if multiple past snapshots required different historical information (and BCMR clients were advanced enough to essentially show a timeline of that past information for each possible interpretation of a held NFT). That also quickly gets out of hand if a listing has many different historical "parsing bytecode" values – the client would have to parse all NFTs of that category using every historical parsing bytecode, then assemble the timeline for each token.
In practice, it makes much more sense for clients to only parse once, using the latest parsing bytecode (or none, for sequential NFTs), allowing the client to store only one interpretation of every NFT, and if applicable, a summary of the relevant historical details for that NFT should be part of the current snapshot.
A policy for sequential NFTs
I think this post has clarified my thoughts on the `nft.types` field within historical (non-current) snapshots – I think it's reasonable for OTR (and likely most registries) to exclude it completely. Curating historical "NFT parsing information" could be interesting for some niche development/educational purposes, but it's not practically applicable for most of the wallets, block explorers, and indexers targeted by OTR.
So I think the best policy for OTR here is: always move the `nft.types` field to the latest snapshot and exclude the field from all previous snapshots. Previous snapshots can (and should!) still include a description indicating what happened, since UIs might e.g. display a timeline of the evolving names, descriptions, and URIs for a particular identity.
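The policy itself is mechanical enough to sketch as a small transformation over an identity's history. This is a hypothetical illustration (simplified shapes, assumed helper name `applyHistoryPolicy`; the full BCMR schema has more fields), not an OTR tool:

```typescript
// Simplified snapshot shapes (assumed; not the full BCMR schema).
type Snapshot = {
  name: string;
  description?: string;
  token?: { category?: string; symbol?: string; nfts?: unknown };
};
// An identity's history, keyed by ISO 8601 timestamp.
type IdentityHistory = Record<string, Snapshot>;

// Keep NFT type listings only in the latest snapshot; older snapshots retain
// their names, descriptions, and URIs but drop the `nfts` field entirely.
const applyHistoryPolicy = (history: IdentityHistory): IdentityHistory => {
  // ISO 8601 timestamps sort chronologically as plain strings.
  const keys = Object.keys(history).sort();
  const latestKey = keys[keys.length - 1];
  const curated: IdentityHistory = {};
  for (const key of keys) {
    // Deep-copy so the input registry is left unmodified.
    const snapshot: Snapshot = JSON.parse(JSON.stringify(history[key]));
    if (key !== latestKey && snapshot.token !== undefined) {
      delete snapshot.token.nfts;
    }
    curated[key] = snapshot;
  }
  return curated;
};
```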
Modifying the `identities` portion of the `art-collection.json` example, here's that in practice:

```json
{
  "89cad9e3e34280eb1e8bc420542c00a7fcc01002b663dbf7f38bceddf80e680c": {
    "2023-01-13T00:00:00.000Z": {
      "name": "Example NFT Collection",
      "description": "This is a short description of the collection; in most interfaces, it will be hidden beyond 140 characters or the first newline character.\n\nThis sentence should be hidden in user interfaces with limited space.\n\nThis collection defines metadata for 3 sequential NFTs, Example #0 (XAMPLZ-0), Example #1 (XAMPLZ-1), and Example #2 (XAMPLZ-2). Note that the 'icon' for each NFT is published via IPFS, so clients may download each icon by querying IPFS or by using an IPFS HTTP Gateway.",
      "token": {
        "category": "89cad9e3e34280eb1e8bc420542c00a7fcc01002b663dbf7f38bceddf80e680c",
        "symbol": "XAMPLZ",
        "nfts": {
          "parse": {
            "types": {
              "": {
                "name": "Example #0",
                "description": "An NFT of this category with a zero-length on-chain commitment (VM number 0). Where appropriate, user interfaces may display the ticker symbol of NFTs matching this type as XAMPLZ-0.\n\nIn this example, the art represented by this NFT has a square aspect ratio and uses the SVG format, so the same URI can be used for both the 'icon' and 'image' URIs. For NFTs that represent art in raster formats or other aspect ratios, the 'icon' URI should point to a 400px by 400px image or an SVG icon representing the NFT.",
                "uris": {
                  "icon": "ipfs://bafybeihnmh5bkbaspp3xfdanje74pekhsklhobzzraeyywq6gcpb3iuvey/0.svg",
                  "image": "ipfs://bafybeihnmh5bkbaspp3xfdanje74pekhsklhobzzraeyywq6gcpb3iuvey/0.svg",
                  "web": "https://example.com/xamplz/0/details"
                }
              },
              "01": {
                "name": "Example #1",
                "description": "An NFT of this category with an on-chain commitment of 0x01 (VM number 1). Where appropriate, user interfaces may display the ticker symbol of NFTs matching this type as XAMPLZ-1.",
                "uris": {
                  "icon": "ipfs://bafybeihnmh5bkbaspp3xfdanje74pekhsklhobzzraeyywq6gcpb3iuvey/1.svg",
                  "image": "ipfs://bafybeihnmh5bkbaspp3xfdanje74pekhsklhobzzraeyywq6gcpb3iuvey/1.svg",
                  "web": "https://example.com/xamplz/1/details",
                  "custom-uri-identifier": "protocol://data-for-some-protocol"
                }
              },
              "02": {
                "name": "Example #2",
                "description": "An NFT of this category with an on-chain commitment of 0x02 (VM number 2). Where appropriate, user interfaces may display the ticker symbol of NFTs matching this type as XAMPLZ-2.",
                "uris": {
                  "icon": "ipfs://bafybeihnmh5bkbaspp3xfdanje74pekhsklhobzzraeyywq6gcpb3iuvey/2.svg",
                  "image": "ipfs://bafybeihnmh5bkbaspp3xfdanje74pekhsklhobzzraeyywq6gcpb3iuvey/2.svg",
                  "web": "https://example.com/another/path",
                  "another-uri-identifier": "protocol://data-for-that-protocol"
                }
              }
            }
          }
        }
      },
      "uris": {
        "icon": "https://example.com/xamplz-icon.svg",
        "web": "https://example.com/about-xamplz-nfts",
        "blog": "https://blog.example.com/",
        "chat": "https://chat.example.com/",
        "forum": "https://forum.example.com/",
        "registry": "https://example.com/.well-known/bitcoin-cash-metadata-registry.json",
        "support": "https://support.example.com/",
        "custom-uri-identifier": "protocol://connection-info-for-some-protocol"
      }
    },
    "2023-01-03T00:00:00.000Z": {
      "name": "Example NFT Collection (First Wave)",
      "description": "The first wave of 2,000 Example NFTs was released on ... More description info here.",
      "token": {
        "category": "89cad9e3e34280eb1e8bc420542c00a7fcc01002b663dbf7f38bceddf80e680c",
        "symbol": "XAMPLZ"
      },
      "uris": {
        "icon": "https://example.com/first-wave-icon.png",
        "web": "https://blog.example.com/first-wave"
      }
    }
  }
}
```
Note that the older snapshot includes no `nft.types` field, but it may still be useful for rendering a sort of historical timeline of the Example NFT Collection in a wallet or block explorer.
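Such a timeline falls directly out of the snapshot history. A minimal sketch, assuming simplified snapshot shapes and a hypothetical `identityTimeline` helper (not part of any BCMR client library):

```typescript
// Only the fields a timeline view needs (assumed; not the full BCMR schema).
type SnapshotSummary = { name: string; description?: string };
type TimelineEntry = { at: string; name: string; description?: string };

// Build a display timeline from an identity's history (keyed by ISO 8601
// timestamp), oldest entry first.
const identityTimeline = (
  history: Record<string, SnapshotSummary>
): TimelineEntry[] =>
  Object.keys(history)
    .sort() // ISO 8601 timestamps sort chronologically as plain strings
    .map((at) => ({ at, ...history[at] }));
```

A wallet could render each entry as "as of <timestamp>: <name> – <description>", which is exactly the kind of UI that benefits from older snapshots keeping a curated description of what happened.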
@bchguru what do you think about this approach/reasoning?
Thanks for writing up the considerations and working out this policy for OpenTokenRegistry for our use-case!
Looks like a well thought out solution to us! Appreciate that you provided an example for a sequential NFT collection.
We will make a PR following this policy to update our metadata as soon as our wave2 sale has concluded.