Using art from Nouns contracts

I am working on an art project that I would like to integrate closely with the Nouns contracts.

I’ve written the contracts for new generative on-chain pixel SVG graphics and am trying to swap out the head trait for my custom SVG. I am looking to connect with a smart contract developer on the Nouns team who can help me understand how to replace the head traits with my own pieces.


Some links that will help you

Cool thanks!

Just to outline what I am thinking in terms of approach.

I am thinking I will implement the generateSVGImage and _getPartsForSeed functions from the NounsDescriptor in my own contract. These functions will be called when someone looks up the tokenURI.

For _getPartsForSeed in my contract (which is called by generateSVGImage), I would replace the heads part (_part[2]) with the head part that I created. I would have to encode this part using the custom Nouns RLE.
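To make the part-swapping idea concrete, here is a minimal sketch of the shape of that lookup, written in TypeScript rather than Solidity just for illustration. Everything here (the trait arrays, `customHead`) is a hypothetical stand-in for the RLE-encoded bytes stored on-chain:

```typescript
type Seed = { body: number; accessory: number; head: number; glasses: number };

// Hypothetical stand-ins for the on-chain trait arrays; the real data
// lives in the NounsDescriptor contract as RLE-encoded bytes.
const bodies = ["bodyRLE0"];
const accessories = ["accessoryRLE0"];
const heads = ["headRLE0"];
const glassesParts = ["glassesRLE0"];

const customHead = "myCustomHeadRLE"; // my own Nouns-RLE-encoded head

// Mirrors the shape of _getPartsForSeed, but index 2 (the head)
// is swapped for the custom piece.
function getPartsForSeed(seed: Seed): string[] {
  return [
    bodies[seed.body],
    accessories[seed.accessory],
    customHead, // would be heads[seed.head] in the original descriptor
    glassesParts[seed.glasses],
  ];
}
```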

Does this seem like a good approach? Is there a more efficient way to do this?

One of the drawbacks of this approach is that I will need to replicate the NounsDescriptor traits (backgrounds, bodies, accessories, and glasses) as well as the color palette (stored in palettes) in my own contract. This is redundant and would be expensive, but as far as I can tell it is necessary.

@verb-e Is the NounsDescriptorV2 contract live? I noticed you mentioned that it’s going live soon here.

If so, the contracts README should be updated. And what is the address?

I think this would help me a lot with creating my project.

yes, there’s a descriptorV2 version deployed. you can see the code in the repo, and you can go on etherscan to the Nouns token and find the descriptor it’s using.

will also update the README soon 🙂


Awesome, thanks for the tip!

Now I need to figure out how to encode my image data into the Nouns RLE format. I can encode arbitrary images into this format, correct?

I see that they are encoded in this format:

Palette Index, Bounds [Top (Y), Right (X), Bottom (Y), Left (X)] (4 Bytes), [Pixel Length (1 Byte), Color Index (1 Byte)][]

but I am having some trouble figuring out exactly how this works.
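For anyone else puzzling over this, here is my rough TypeScript sketch of an encoder for that layout. It assumes color index 0 means transparent, that right/bottom bounds are stored exclusive, and that runs never cross row boundaries; the real encoder may differ in those conventions:

```typescript
type Image = number[][]; // rows of palette color indices; 0 = transparent

// Sketch of the RLE layout described above (assumes at least one
// non-transparent pixel): returns
// [paletteIndex, top, right, bottom, left, (runLength, colorIndex)...]
function encodeRLE(image: Image, paletteIndex: number): number[] {
  // bounding box of the non-transparent pixels
  let top = image.length, bottom = -1, left = image[0].length, right = -1;
  image.forEach((row, y) =>
    row.forEach((color, x) => {
      if (color !== 0) {
        top = Math.min(top, y);
        bottom = Math.max(bottom, y);
        left = Math.min(left, x);
        right = Math.max(right, x);
      }
    })
  );
  const bytes = [paletteIndex, top, right + 1, bottom + 1, left];
  // run-length encode each row inside the bounds
  for (let y = top; y <= bottom; y++) {
    let runColor = image[y][left];
    let runLength = 0;
    for (let x = left; x <= right + 1; x++) {
      const color = x <= right ? image[y][x] : -1; // -1 flushes the last run
      if (color === runColor) {
        runLength++;
      } else {
        bytes.push(runLength, runColor);
        runColor = color;
        runLength = 1;
      }
    }
  }
  return bytes;
}
```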

Edit: disregard, I figured it out 😄

I’m trying to figure it out too, can you help me?

This page explains it well.

Okay, thanks for responding. I’ll check it out and get back to you.

Wow, that was really helpful. Now what I need to know is how “14|17 14|17 14|17 14|17 02|17 01|00 11|17 02|17 01|00 11|17 02|17 01|00 11|17 02|17 01|00 11|17 02|17 01|00 11|17 02|17 01|00 11|17 02|17 01|00 11|17” is turned into a string and then hex-encoded.
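In case it helps, here is my understanding of that last step as a small TypeScript sketch (assuming the pair values above are decimal): each length|colorIndex value becomes one byte, written as two lowercase hex characters, and all pairs are concatenated into one string.

```typescript
// Sketch: turn "length|colorIndex" run pairs (decimal values) into the
// concatenated hex byte string. Assumes every value fits in one byte.
function runsToHex(runs: string): string {
  return runs
    .trim()
    .split(/\s+/)
    .map((pair) =>
      pair
        .split("|")
        .map((value) => Number(value).toString(16).padStart(2, "0"))
        .join("")
    )
    .join("");
}
// e.g. "14|17" becomes "0e11" (14 = 0x0e, 17 = 0x11)
```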

Hey, this is the script that does the entire process from PNG assets to the encoded and compressed assets:

Thank you very much. Works like a charm.