Mo Amin Archive

I worked on a quick project for Google Arts & Culture in partnership with the Mo Amin Foundation.
the foundation opened up their archive, allowing everyone to access Mohamed (‘Mo’) Amin’s legacy.
to be honest, I had no clue who he was, so I was pretty surprised to discover his life and work.
he documented Africa from the inside, without that western patronizing filter I’m so used to (and bothered by).
so I’d like to encourage you to visit the dedicated page on GA&C and learn more about the man and his work.

the experiment itself lives here:

the tech bit

the experiment boils down to displaying a wall of many images (as of today: 5893).
the project had already been started and was using PixiJS, a (mostly) 2D WebGL engine.
I had never worked with PixiJS, but once I understood the canonical way to use it, the work got much easier. Pixi is well documented and all the features I needed were there, out of the box.

now, to tackle such a high number of images, you should avoid the scene graph as much as possible: it’s inefficient, and each time someone calls the addChild() method, a kitten dies.
enter instanced geometry. if you’re a loyal reader, you may have read a previous article about instanced geometries in three.js; this is the same idea, minus three.js, plus Pixi.
as the name suggests, instanced geometry lets you use a geometry and instantiate it many times.
the biggest benefit is that no matter how many instances you create, they’ll all be rendered in a single draw call, which is a good thing.

the Pixi doc has a simple example; the somewhat tricky part is building the buffer and the attributes.

more specifically, this method:

geometry.addAttribute(id, buffer, size, normalized, type, stride, start, instance)

can be a bit intimidating. roughly: id names the attribute, buffer holds the raw data, size is the number of components per element, normalized and type describe the data format, stride and start locate the data within the buffer, and instance tells Pixi to advance the attribute once per instance instead of once per vertex.

hope it makes sense.

Atlas generation

now, before playing with our instances, we’ll need an atlas. an atlas is a texture that contains many smaller pictures; it’s memory efficient, and PixiJS uses atlases extensively under the hood.
a GPU is a state machine that needs to be set up before rendering any object. with many objects and many textures, this setup can slow down execution a lot; with a single object and a single texture, it’s much faster (the instanced geometry above is processed as a single object).

there are tools to create such atlases, the most famous being Texture Packer; it has a lot of nice features (tight packing, metadata generation, texture compression, etc.). in my case, I only needed very coarse thumbnails to display while the larger thumbnails preload.
I usually have a Python workflow to pre-process the data and don’t need much artistic control, so I use OpenCV.
the script lists the images of a folder, creates a grid from a tile size, and sticks the images into a big square texture.

running the python atlas script produces a color image with 64px RGB thumbnails laid out next to each other.
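as a sketch of what such a script can look like, here is a numpy-only version (the real one uses OpenCV to list, read and resize the files; `make_atlas` and its arguments are illustrative names, not the actual script):

```python
import math
import numpy as np

def make_atlas(images, tile=64):
    """Pack same-size RGB thumbnails into one square-ish grid texture.

    `images` is a list of (tile, tile, 3) uint8 arrays; in a real
    pipeline they would come from reading and resizing files on disk.
    """
    cols = math.ceil(math.sqrt(len(images)))   # grid side, in tiles
    rows = math.ceil(len(images) / cols)
    atlas = np.zeros((rows * tile, cols * tile, 3), dtype=np.uint8)
    for i, img in enumerate(images):
        y, x = divmod(i, cols)                 # row-major placement
        atlas[y * tile:(y + 1) * tile, x * tile:(x + 1) * tile] = img
    return atlas

# 10 dummy 64px thumbnails -> a 4x3-tile atlas (256x192 px)
thumbs = [np.full((64, 64, 3), i, dtype=np.uint8) for i in range(10)]
print(make_atlas(thumbs).shape)  # (192, 256, 3)
```

the instances can then recover a tile’s UV rectangle from its index and the grid size alone, which is why a plain grid is enough here.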

setting the ‘pack’ flag to true converts the images to greyscale and packs them into the R,G,B channels
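the packing step itself is tiny; here is a sketch of the idea, assuming three greyscale atlases of the same size have already been built (`pack_channels` is a made-up name):

```python
import numpy as np

def pack_channels(grey_a, grey_b, grey_c):
    """Store three greyscale atlases in the R, G and B planes of one image.

    Each input is an (h, w) uint8 array; the shader later samples a
    single channel back, so one RGB texture carries 3x the tiles.
    """
    return np.stack([grey_a, grey_b, grey_c], axis=-1)

a = np.full((4, 4), 10, dtype=np.uint8)
b = np.full((4, 4), 20, dtype=np.uint8)
c = np.full((4, 4), 30, dtype=np.uint8)
packed = pack_channels(a, b, c)
print(packed.shape, packed[0, 0].tolist())  # (4, 4, 3) [10, 20, 30]
```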

the Mo Amin collection contains a lot of black and white pictures, and this packing trick drastically reduced the atlas size (16 MB → 2.2 MB).
the downside is that the JPG compression uses all 3 channels, and each channel creates very visible artifacts on the others.
in this case it was aligned with the art direction of the experiment: the artifacts look like dust and scratches, and the larger thumbnail that loads afterwards restores the quality.
but it may be something you want to avoid (saving the image as PNG solves the issue but increases the file size…).
I wrote a very basic python script to determine whether a picture is color or greyscale, leaving it below for posterity:
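a minimal sketch of such a check, assuming the pixels are already loaded as a numpy array (the names are illustrative): a picture counts as greyscale when its three channels are nearly identical, with a small tolerance for compression noise.

```python
import numpy as np

def is_greyscale(img, tol=8):
    """Return True when the R, G and B planes are (almost) identical.

    `img` is an (h, w, 3) uint8 array; `tol` absorbs the JPG noise
    that makes channels differ slightly even on B&W scans.
    """
    planes = img.astype(np.int16)  # avoid uint8 wrap-around when comparing
    spread = planes.max(axis=-1) - planes.min(axis=-1)
    return bool((spread <= tol).all())

grey = np.dstack([np.full((8, 8), 100, np.uint8)] * 3)
color = grey.copy()
color[..., 0] += 80  # strong red cast
print(is_greyscale(grey), is_greyscale(color))  # True False
```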

so now we have everything we need to display the wall of images!

you should add a “sampler” and a “channel” attribute to the instances so that, in the fragment shader, each one knows which atlas texture and which channel to sample.

Lens effect

there was a (rather gimmicky) lens effect to implement, I was surprised how concise the code was with Pixi (three.js is a bit more verbose when it comes to postprocessing).

instantiating it is then just as short.

the surprisingly hard part was making it work with various pixel ratios; I may have missed something, but my shader is uselessly convoluted and almost longer than the Lens class.

this is by no means a canonical way of doing things, but it works on most devices.

and that’s about it for this nice little project.
