FBO particles

update 210525:
Mario Carrillo was kind enough to port the code samples to ES6, something I’d been meaning to do for years.
so check out his repo: https://github.com/marioecg/gpu-party/ (and check out his work while you’re at it)

particles are awesome.

I can’t count how many particle engines I’ve written over the past 15 years, but I’d say a lot. one reason is that they’re easy to implement and quickly give good looking / complex results.

in august 2014, I started a year-long project that never shipped (which I playfully codenamed “the silent failure”); the first thing they asked for was a particle engine to simulate a shitload of particles.

the way to go in this case is a GPGPU approach, a.k.a. FBO particles. it is a fairly well documented technique and there were already working examples of FBO particles running in THREE.js, especially this one by Joshua Koo & Ricardo Cabello.

in a nutshell, 2 passes are required:

  • simulation: uses a Data Texture as an input, updates the particles’ positions and writes them back to a RenderTarget
  • render: uses the RenderTarget to distribute the particles in space and renders the particles to screen

the first pass requires a bi-unit square, an orthographic camera and the ability to render to a texture. the second pass is your regular particle rendering routine, with a twist on how to retrieve the particles’ positions.

the FBO class goes like:
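(what follows is a minimal sketch rather than the repo’s exact code; it assumes a fairly recent THREE.js, takes the two ShaderMaterials, simulation & render, as arguments, and expects the render material to expose a “positions” texture uniform, which is my naming)

    var FBO = {

        init: function ( width, height, renderer, simulationMaterial, renderMaterial ) {

            // 0 sanity checks:
            // float textures are an extension in WebGL1 (they're core in WebGL2)
            var gl = renderer.getContext();
            if ( gl instanceof WebGLRenderingContext && !gl.getExtension( 'OES_texture_float' ) ) {
                throw new Error( 'float textures are not supported' );
            }
            // the render pass needs to read a texture from the vertex shader
            if ( gl.getParameter( gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS ) === 0 ) {
                throw new Error( 'vertex shader cannot read textures' );
            }

            // 1 scene & bi-unit orthographic camera for the simulation pass
            //   (near & far only need to keep the quad, sitting at z = 0, inside the frustum)
            this.scene = new THREE.Scene();
            this.orthoCamera = new THREE.OrthographicCamera( -1, 1, 1, -1, -1, 1 );

            // 2 the RenderTarget that carries the positions from the simulation to the render pass
            this.rtt = new THREE.WebGLRenderTarget( width, height, {
                minFilter: THREE.NearestFilter, // crispy pixels, no interpolation
                magFilter: THREE.NearestFilter,
                format: THREE.RGBAFormat,       // xyz + a spare channel
                type: THREE.FloatType
            } );

            // 3 bi-unit quad carrying the simulation material
            var quad = new THREE.BufferGeometry();
            quad.setAttribute( 'position', new THREE.BufferAttribute( new Float32Array( [
                -1, -1, 0,   1, -1, 0,   1, 1, 0,
                -1, -1, 0,   1,  1, 0,  -1, 1, 0
            ] ), 3 ) );
            quad.setAttribute( 'uv', new THREE.BufferAttribute( new Float32Array( [
                0, 0,   1, 0,   1, 1,
                0, 0,   1, 1,   0, 1
            ] ), 2 ) );
            this.scene.add( new THREE.Mesh( quad, simulationMaterial ) );

            // 4 the particles: one vertex per pixel of the texture; each vertex stores,
            //   in its xy, the (normalized) coordinates of "its" pixel in the texture
            var count = width * height;
            var vertices = new Float32Array( count * 3 );
            for ( var i = 0; i < count; i++ ) {
                vertices[ i * 3 ]     = ( ( i % width ) + 0.5 ) / width;             // texel center, x
                vertices[ i * 3 + 1 ] = ( Math.floor( i / width ) + 0.5 ) / height;  // texel center, y
            }
            var geometry = new THREE.BufferGeometry();
            geometry.setAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );
            this.particles = new THREE.Points( geometry, renderMaterial );

            this.renderer = renderer;
        },

        update: function () {
            // 1 render the simulation into the render target
            this.renderer.setRenderTarget( this.rtt );
            this.renderer.render( this.scene, this.orthoCamera );
            this.renderer.setRenderTarget( null );
            // 2 feed the result to the render material
            this.particles.material.uniforms.positions.value = this.rtt.texture;
        }
    };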

the comments should make it easy to follow. step by step, it unrolls as follows:

  1. we need to determine if the hardware is capable of rendering the shaders. for the simulation pass, we’ll need float textures; if the hardware doesn’t support them, throw an error.
  2. for the render pass, we’ll have to access the textures in the vertex shader, which isn’t always supported by the hardware; if unsupported, bail out & throw an error.
  3. create a scene and a bi-unit orthographic camera (bi-unit = left: -1, right: 1, top: 1, bottom: -1); near and far are not relevant as there is no depth, so to speak, in the simulation.
  4. create the RenderTarget that will allow the data transfer between the simulation and the render shaders. as this is not a “regular” texture, it’s important to set the filtering to NearestFilter (crispy pixels). also the format can be either RGB (to store the XYZ coordinates) or RGBA if you need to store an extra value for each particle.
  5. straightforward: we create a bi-unit square geometry & mesh and associate the simulation shader to it; it will be rendered with the orthographic camera.
  6. we create the render mesh; this time we need as many vertices as there are pixels in the float texture (width * height) and, to make things easy, we normalize the vertices’ coordinates. then we initialize a Points object (a.k.a. Particles, a.k.a. PointCloud, depending on which version of THREE.js you use)
  7. initialization is over now the update loop does 2 things:
    1 render the simulation into the renderTarget
    2 pass the result to the renderMaterial (assigned to the particles object)

that’s all well and good; now a basic instantiation would look like this (I’ll skip the scene setup, you can find it here):
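(a sketch, not the repo’s code verbatim; getRandomData() and the shader sources are shown a bit further down)

    var width = 256, height = 256;

    // a random float texture used as the particles' initial positions (4 floats per particle)
    var data = getRandomData( width, height, 256 );
    var positions = new THREE.DataTexture( data, width, height, THREE.RGBAFormat, THREE.FloatType );
    positions.needsUpdate = true;

    // the simulation material: reads the data texture, writes the positions to the render target
    var simulationMaterial = new THREE.ShaderMaterial( {
        uniforms: { positions: { value: positions } },
        vertexShader: simulationVertexShader,
        fragmentShader: simulationFragmentShader
    } );

    // the render material: reads the render target's texture and draws the points
    var renderMaterial = new THREE.ShaderMaterial( {
        uniforms: {
            positions: { value: null },   // set by FBO.update() every frame
            pointSize: { value: 2 }
        },
        vertexShader: renderVertexShader,
        fragmentShader: renderFragmentShader
    } );

    FBO.init( width, height, renderer, simulationMaterial, renderMaterial );
    scene.add( FBO.particles );

    function animate() {
        requestAnimationFrame( animate );
        FBO.update();                     // 1: simulation pass, 2: update the render material
        renderer.render( scene, camera );
    }
    animate();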

now, you may be wondering what the shaders look like; here they are:
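(in their simplest form; a sketch, matching the uniform names used above)

    // simulation, vertex shader: pass the bi-unit quad and its UVs through
    varying vec2 vUv;
    void main() {
        vUv = uv;
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }

    // simulation, fragment shader: read a position from the data texture and write it back out
    // (this is where the positions will be updated later on)
    uniform sampler2D positions;
    varying vec2 vUv;
    void main() {
        vec3 pos = texture2D( positions, vUv ).xyz;
        gl_FragColor = vec4( pos, 1.0 );
    }

    // render, vertex shader: the vertex "position" attribute is actually a UV,
    // used to fetch the real position from the simulation's output texture
    uniform sampler2D positions;
    uniform float pointSize;
    void main() {
        vec3 pos = texture2D( positions, position.xy ).xyz;
        gl_Position = projectionMatrix * modelViewMatrix * vec4( pos, 1.0 );
        gl_PointSize = pointSize;
    }

    // render, fragment shader: flat colored particles
    void main() {
        gl_FragColor = vec4( vec3( 0.5 ), 1.0 );
    }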

ok, this was a long explanation, time to do something with it; the above will look something like this (click for the live version):

basic

which is a bit dry I’ll admit, but at least it works :)

the benefit of this system is its ability to support lots of particles, and I mean lots of them, while preserving a rather light memory footprint. the above uses a 256^2 texture, or 65536 particles; 512^2 = 262144, 1024^2 = 1048576, etc. and as many vertices, which is often more than what is needed to display… well, anything (imagine a mesh with 1+ million vertices).

on the other hand, particles often cause overdraw, which can slow down the render a lot, for instance if you render many particles at the same location.

it’s trivial to create random or geometric position buffers, it’s straightforward to display an image (vertex position = normalized pixel position + elevation), it’s easy to create a buffer describing a 3D object as we don’t need the connectivity information (the faces) and, as we shall see, it’s also easy to animate this massive amount of particles.

let’s start with a random buffer:
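(a sketch; 4 floats per particle so it can feed an RGBA float DataTexture)

    // returns a Float32Array of random positions inside a cube of the given size
    function getRandomData( width, height, size ) {
        var len = width * height * 4;
        var data = new Float32Array( len );
        for ( var i = 0; i < len; i += 4 ) {
            data[ i     ] = ( Math.random() * 2 - 1 ) * size; // x
            data[ i + 1 ] = ( Math.random() * 2 - 1 ) * size; // y
            data[ i + 2 ] = ( Math.random() * 2 - 1 ) * size; // z
            data[ i + 3 ] = 1;                                // spare channel
        }
        return data;
    }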

this is what I used for the first demo; not very useful apart from debugging. an image is a touch more interesting, how would we do that?

an image is a grid of values (pixels), so given an image, its width, height and an arbitrary elevation, the buffer creation goes as follows:
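(a sketch; the weights in the greyscale line are the usual luma coefficients)

    // one particle per pixel: x/y come from the pixel's position (centered),
    // z from the pixel's greyscale value scaled by `elevation`
    function getImageData( img, width, height, elevation ) {

        // draw the image on a canvas to be able to read its pixels
        var canvas = document.createElement( 'canvas' );
        canvas.width = width;
        canvas.height = height;
        var ctx = canvas.getContext( '2d' );
        ctx.drawImage( img, 0, 0, width, height );
        var pixels = ctx.getImageData( 0, 0, width, height ).data;

        var len = width * height * 4;
        var data = new Float32Array( len );
        for ( var i = 0; i < len; i += 4 ) {
            var p = i / 4;
            data[ i     ] = ( p % width ) - width * 0.5;            // x: pixel column, centered
            data[ i + 1 ] = height * 0.5 - Math.floor( p / width ); // y: pixel row, centered & flipped
            // the loooooong line: the pixel's greyscale value, scaled by the elevation
            data[ i + 2 ] = ( pixels[ i ] * 0.299 + pixels[ i + 1 ] * 0.587 + pixels[ i + 2 ] * 0.114 ) / 255 * elevation;
            data[ i + 3 ] = 1;
        }
        return data;
    }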

the getContext() method creates a 2D context to access the image’s pixel values, and the loooooong line computes the greyscale value for each pixel.

which should give this (I originally couldn’t put it online because of CORS restrictions, it’s on the repo though):
thanks to @makc for pointing out in the comments that images need a crossOrigin; in this case, img.crossOrigin = "anonymous"; solved the problem, so enjoy the live demo :)
image

it uses this 256 * 256 greyscale image:

noise

loading a mesh is even easier:
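(a sketch, assuming the loaded mesh uses a BufferGeometry)

    // turns a geometry's vertices into the data of a square float texture;
    // the texture's side is the square root of the vertex count (rounded up)
    function getMeshData( geometry ) {
        var vertices = geometry.attributes.position.array;
        var count = vertices.length / 3;
        var size = Math.ceil( Math.sqrt( count ) );
        var data = new Float32Array( size * size * 4 );
        for ( var i = 0; i < count; i++ ) {
            data[ i * 4     ] = vertices[ i * 3 ];
            data[ i * 4 + 1 ] = vertices[ i * 3 + 1 ];
            data[ i * 4 + 2 ] = vertices[ i * 3 + 2 ];
            data[ i * 4 + 3 ] = 1;
        }
        return { data: data, size: size };
    }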

the method takes the geometry of the loaded mesh; the trick here is to determine the size of the texture from the number of vertices: the texture’s side is simply the square root of the vertex count (rounded up if needed).

click the picture for a live version

mesh

in the render shader, I compute the depth and index the particles’ size on it, which gives the illusion of faces, but these are only particles (47516 of them, fewer than in the first example).

what about animation?

say we want to morph a cube into a sphere, first we need a sphere:
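(a sketch of the rejection approach described just below)

    // random points inside a sphere, using the "discard" (rejection) approach:
    // draw a random point in the [-1,1] cube and throw it away if it falls outside the unit sphere
    function getSphere( count, size ) {
        var data = new Float32Array( count * 4 );
        var i = 0;
        while ( i < count ) {
            var x = Math.random() * 2 - 1;
            var y = Math.random() * 2 - 1;
            var z = Math.random() * 2 - 1;
            if ( x * x + y * y + z * z > 1 ) continue; // outside the sphere: discard & retry
            data[ i * 4     ] = x * size;
            data[ i * 4 + 1 ] = y * size;
            data[ i * 4 + 2 ] = z * size;
            data[ i * 4 + 3 ] = 1;
            i++;
        }
        return data;
    }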

note that it uses the “discard” approach; a point is generated randomly in the range [-1,1] and, if its length is greater than 1, it’s discarded. it is quite inefficient but prevents points from being biased towards the corners of the cube. there’s also an exact way to prevent this problem that goes:
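(a sketch of the idea)

    // the "exact" way: pick a uniformly random direction,
    // then a radius distributed as the cube root of a uniform value
    function getRandomPointInSphere( size ) {
        var theta = Math.random() * Math.PI * 2;      // longitude
        var phi = Math.acos( Math.random() * 2 - 1 ); // latitude (acos avoids pole clustering)
        var r = Math.cbrt( Math.random() ) * size;    // cube root keeps the density uniform
        return [
            r * Math.sin( phi ) * Math.cos( theta ),
            r * Math.sin( phi ) * Math.sin( theta ),
            r * Math.cos( phi )
        ];
    }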

it needs a cube root and involves more computations, so all in all the discard method is not that bad and it’s easier to understand (for people like me at least).

now back to morphing, the way to go is to create 2 DataTextures, one for the cube, one for the sphere and pass them to the simulation shader. the simulation shader will perform the animation between the 2 and render the result to the RenderTarget. then the render target will be used to draw the particles.

the simulation’s fragment shader is a bit more complex:
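(roughly; the uniform names are the ones discussed below)

    // simulation fragment shader: blend between the two data textures
    uniform sampler2D textureA; // the cube
    uniform sampler2D textureB; // the sphere
    uniform float timer;        // 0 = cube, 1 = sphere
    varying vec2 vUv;
    void main() {
        vec3 origin = texture2D( textureA, vUv ).xyz;
        vec3 destination = texture2D( textureB, vUv ).xyz;
        vec3 pos = mix( origin, destination, timer );
        gl_FragColor = vec4( pos, 1.0 );
    }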

that wasn’t too scary, right? we have our 2 DataTextures (the cube and the sphere) passed to the simulation material, along with a timer value which is a float in the range [0,1]. we sample the coordinates of the first model and store them in origin, sample the coordinates of the second model and store them in destination, then use mix( a, b, t ) to blend between the two.

here’s a preview of the timer value being set at 0, 0.5 and 1:

morph_0 morph_1 morph_2

of course we can create more sophisticated animations, the idea is the same, anything that should alter the particles’ positions will happen in the simulation shader.

for instance this uses a curl noise to move particles around:
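the gist of the simulation shader is something like this (a sketch; curlNoise() stands for a GLSL curl noise implementation that I won’t paste here, and the scale / amplitude values are picked by eye):

    // simulation fragment shader: displace each particle along a curl noise field
    // (curlNoise() is assumed to be a GLSL curl noise implementation included above)
    uniform sampler2D positions; // the particles' rest positions (the data texture)
    uniform float timer;
    varying vec2 vUv;
    void main() {
        vec3 pos = texture2D( positions, vUv ).xyz;
        pos += curlNoise( pos * 0.02 + timer ) * 10.0;
        gl_FragColor = vec4( pos, 1.0 );
    }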

animation

the size is set like this:
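(in the render vertex shader; uv is the particle’s lookup coordinate in the texture, and small / big are size uniforms, my naming)

    gl_PointSize = uv.x < 0.998046875 ? small : big;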

it reads: if the current uv.x is lower than 0.998046875, it’s a small particle, otherwise it’s a big one. if you’re interested in the simulation shader, it’s here; I don’t think it’s an example of good practices though.

to wrap it up, this technique makes it easy to control insane amounts of particles, it is well supported (compared to when I started using it at least) and – when using smaller texture sizes – it performs relatively well on most platforms; GPUs usually optimize nearby texture sampling, which is what the FBO approach relies on.

I’ll leave you here, all the sources are on github.
enjoy

31 Comments

  1. > (can’t view online because of CORS restrictions, it’s on the repo though)

    rawgit serves Access-Control-Allow-Origin:* it is probably that you do not specify crossOrigin in the code.

    • nico

      well hello there :) that was it !!
      I’ve specified img.crossOrigin = “anonymous” and it works, good to know :)
      updated the article, thanks a bunch!

  2. but more easily you could just push gh-pages branch and it will get official github.io url where you don’t have to deal with cross-domain stuff.

  3. Matthias

    Thank you for your clear explanations.
    However I can’t run your (and mr doob’s and blurspline’s) examples. I am running on an Intel Mobile graphics card and it is likely that FBO is not supported there, although the WebGL checks should speak for it.
    I am running into those errors:
    [GroupMarkerNotSet(crbug.com/242999)!:E85E60BA1C240000]GL ERROR :GL_INVALID_FRAMEBUFFER_OPERATION : glClear: framebuffer incomplete (clear)
    noise.html:1 [GroupMarkerNotSet(crbug.com/242999)!:E85E60BA1C240000]GL ERROR :GL_INVALID_FRAMEBUFFER_OPERATION : glDrawArrays: framebuffer incomplete (clear)

    Maybe you know anything about it.

    Thx again for explaining things!

    • nico

      I believe this is where WebGL falls short; the hardware specifics… I wouldn’t recommend using this on mobile devices anyway; gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS may be 0 (vertex texture fetch forbidden), or worse, float textures may not be supported.
      for your problem I don’t know, it seems to come from the RTT, maybe having a look at how to get a render to texture to work might help.

      • Matthias

        ok i fixed the problem :)
        the THREE.DataTexture needs to be defined as THREE.RGBAFormat (not THREE.RGBFormat), so that it works on Intel Mobile Chips. I found out because mrdoobs FBO examples worked where these did not. So I compared the code and tada.

        now I can play with FBOs on my cheap intel gpu as well :) beautiful!

    • Hi,

      We see the phrase “framebuffer incomplete” in the error message. For this code, it indicates that you are unable to write floating point values into the frame buffer. The OES_texture_float extension, which the code checks, only indicates that you can create, and that the shaders can read floating point textures, not that the shaders can write to those textures.

      To determine if the texture is writable, you have to setup the rendering pipeline, and do a CheckFramebufferStatus check. Perhaps more details than you want, but I have some notes on working around issues with floating point textures: http://www.vizitsolutions.com/portfolio/webgl/gpgpu/speedBumps.html

      Also, for OpenGL ES, which WebGL is built on, reading textures in the vertex shader is optional, so this will trip up a lot of mobile graphics cards.

  4. tlecoz

    Hello Nico !
    Glad to see you finally have some fun with shaders ;)
    You should consider creating the triangles directly inside the vertex shader, based on 3 constants located in the shader (defined once and for all) representing the 3 vertices of a triangle.
    Then you only need the position of your particle (the center of your “shader-triangle”) in your VBO and then you can put much more data in it ! :)

    I posted an example on the Processing forum showing how I usually do it.
    The GLSL code should work in WebGL without any modification.

    https://forum.processing.org/two/discussion/8855/how-to-draw-millions-of-points-efficiently-in-processing

    • nico

      hey, thanks for passing by :)
      this uses GL.POINTS instead of GL.TRIANGLES. if we use triangles, we’ll need to triple the vertex count (like in your example).
      it’s a trade-off; GL.POINTS will perform faster with a small point size (less data to process, less overdraw) but will slow down terribly when the point size gets bigger.
      if the idea is to work with bigger surfaces, then your approach is the way to go :)

      on a side note, using .5’s gives an isosceles triangle; an equilateral triangle (of circumradius 1) is defined like: ( 0, -1 ), ( 0.8660254, 0.5 ), ( -0.8660254, 0.5 )

    • nico

      hey, thanks for your kind words :)

      short answer: no, at least not directly. There are different primitives to draw with in WebGL, one of which is POINTS and will draw particles (what we do here); to draw lines, you’ll need to use other primitives like LINES, LINE_STRIP & LINE_LOOP (see https://www.khronos.org/files/webgl/webgl-reference-card-1_0.pdf).

      the idea would be to draw a set of lines then draw the particles on top of it. looks easy enough, but it requires a piece of information you don’t have in a particle system: connectivity; indeed, to draw lines, you need to know which points to connect. Moreover, the picture you linked looks like a “node garden” (or a Force Directed Graph), an emergent structure computed from the relative distances of the nodes; if they’re “close enough”, draw a line, otherwise don’t. to compute this, you need to know where the nodes are before knowing if 2 nodes should be linked. this operation cannot be performed easily on the GPU (it would require a specific data structure and extra GPGPU steps) and, given the amount of particles, computing it on the CPU would require a lot of resources (and time).

      this being said, you can use a grid or any mesh rendered with lines and use the same FBO technique to compute the vertices’ positions. I never did anything like this though, just a wild guess :)

  5. esteban

    Hi! This is one of the best articles on shaders i’ve read so far. Thanks for sharing, it’s immensely helpful and clear!!

    I am trying to build an animation to morph meshes of any number of vertices. I have some extra vertices on one mesh that i need to hide. So, in the DataTexture i am using a THREE.RGBAFormat, instead of THREE.RGBFormat. Vertices are defined by a THREE.Vector4.
    Now all vertices that are not needed have xyz values set to random and the alpha value set to 0 (hidden).

    Everything works fine. But i cannot get that alpha value in the shaders.
    If i add ‘transparent: true’ in the simulation shader, I get a strange behaviour: all the particles with alpha zero get scaled to position 0! And their alpha is still 1.

    This is my simulation fragment shader:

    void main() {
    vec4 origin = texture2D( textureA, vUv ).xyzw;
    vec4 destination = texture2D( textureB, vUv ).xyzw;
    vec4 pos = mix( origin, destination, timer );

    gl_FragColor = vec4( pos.xyz, pos.w );
    }

    And in the render shaders i added a varying to pass the alpha value to the fragment shader. Any idea why it does not work and it scales the positions?

    (ignore this if it’s too confusing, sorry :)

    • esteban

      Found the solution already and works great.
      Just use RGBAFormat (instead of RGBFormat) in the FBO class: in the options of the rtt (WebGLRenderTarget).

      As simple as that :)

      • riss

        I’m trying to do the same, but my code is not working and I can’t pinpoint the issue. Do you still have all the code for this by any chance? It would be immensely helpful. I am trying to store an additional value for the color of each particle.

        Or do you remember what else you had to do for this to work? Does the simulation VS need changes too? I changed the render vertices to be a BufferAttribute(vertices, 4), but does the simulation position need to be a size 4 vector?

        Thank you so much

  6. Hey Nico,

    Really cool project. I have this bookmarked for a few months now because i was trying to create something similar but because i am new to THREE and WEBGL i was not able to follow your instructions.

    In the meantime i have found this example http://www.pshkvsky.com/gif2code/animation-13/ which i was able to follow. But it doesn’t work as smoothly as yours does. I guess the ‘raycasting’ method is too inefficient for defining points on a mesh.

    I am looking for a better way to form shapes (loaded 3d object) out of particles in THREE.js. Do you have some time to explain that in more details?

    You can find my current progress here http://www.sander-wilbrink.com/ but again, i am quite new to webgl and most of this code comes from http://www.pshkvsky.com (credits where credits are due of course)

    PS: have you seen this awesome example? https://xmas.astral.de/#

    Kind regards
    Sander

    • nico

      hi,
      sorry I didn’t see this comment earlier :)
      your question boils down to knowing where to place particles on a mesh.
      as far as the astral website is concerned, it’s very nicely done! :) they probably used a depth sensor (kinect, leap motion, 3D camera…) to scan the faces and assign the size of the dots depending on how close each particle is to the ‘camera’. it’s possible to use photogrammetry too but it would be somewhat overkill and less efficient in this case :)

      if you have a mesh, the raycasting – though very slow as you mentioned – could be the way to go ; shoot X random rays at the mesh, store the intersections, rotate the mesh a bit, repeat. this is basically what the kinect (or any LIDAR device) does. then you obtain a series of points on the surface of the mesh.

      if you have a mesh and only need points on the surface, you can use the geometry’s ‘faces’ and randomly distribute points on each face. this is fairly trivial; the trick here is to use a ratio based on the area of the triangles to set the amount of random particles to create for each face.
      hope this will help you go on, thanks for passing by :)

  7. Hi I’m Johan!

    Your particle effect is really cool !!!!
    But when I try to view it on my phone, three.js throws a warning that it does not support “EXT_frag_depth”. And the effect of the particles does not appear.
    But I did not see “gl_FragDepthEXT” in your glsl code.
    I am very confused about this.

    ps: The performance of particles on the phone is very important and I really want to use your technique on the phone.

  8. Hi! Admiring this article! Good explanation!!
    I tried to follow your steps and what you did and have one question.
    Am i right that what is done in the article could actually be done without the FBO approach?
    We can just compute that same curl noise in vertex shader in one pass. Based on some geometry positions attribute (your sphere) and time uniform?
    I mean, of course, FBO can be extended by adding velocities texture and etc. But does this exact animation in article benefit from it?
    Again, thank you for this article, it has been my first step into this type of animations!

  9. riss

    This was really useful for learning how to deal with a lot of particles, thank you so much!

    What would be the most efficient way to handle dynamic color changes? E.g. constantly changing colors based on their previous colour, evolving into new colours all the time. Would it need another pass to store and update the RGB values of the particles or use the same pass?

    Cheers!

    • nico

      hey,
      thanks :)
      I guess you found out by yourself but for the record, unless the color is based on the position, you’ll need a second texture to encode the color state. now if it’s a color cycle, maybe using a time uniform + a delta (stored as an attribute) can be enough. not sure what was best for your use case…

  10. Max

    Hi Youpi

    I really found this work awesome. I was trying to incorporate it into a modern Next.js project but kept hitting some stumbling blocks. Is there anything like this with react-three-fiber to modernise your work? I did try but it’s quite difficult on my own :) Would love to hear back from you on your thoughts?

    • nico

      hey,
      this means that either your device does not support float textures ( the Samsung S series is famous for that ) or the extension test fails.
      if your device does not support float textures, this won’t work and unfortunately there’s not much you can do.

  11. leon

    Hi youpi, thank you, your article really helps me to understand this. I wonder if we can have multiple models in it, if we are allowed to have multiple models in it, how do I morph them in the simulation fragment shader? thank you

    • nico

      hey,
      well if you managed to morph between 2 objects (the cube and sphere in the article), adding new models follows the same logic; you’ll need to convert a model into a datatexture, pass it to the shader and blend between datatextures with uniforms.
      the trick is that all the datatextures must have the same size, so your models should ideally have the same vertex count, or you must find a way to add vertices to the less detailed models so that they all have the same count.
      not sure if it’s clear but I hope it helps.

  12. Pratyugna

    I cloned the repo and it works totally fine. I tried integrating the noise.html file with react js. I did all I could. Installed three js using npm. Copied the fbo script, copied the noise.html script.

    The issue I faced was float gl oes texture not supported. However if that was the case how did it work when i cloned the repo for the first time…

    All suggestions welcomed

  13. Pratyugna

    If you are using RGBA instead of RGB then there need to be 4 fields instead of 3. The change needed to accommodate the RGBA is len = size * 4 instead of size * 3. Also change the texture loop accordingly, setting the ‘a’ values to 1.
