a month ago, for no specific reason, I decided to implement a 2D top-down visibility algorithm. I always found the output beautiful, both graphically & conceptually, yet, as I don’t make games, I had never tackled it.

visibility algorithms are often used to simulate lights ; to determine which areas of a space are lit and which lie in shadow. they can also be used to power an AI ; to make an agent aware of its surroundings.

if you’re after a robust solution or are just curious, I’d suggest you open one of these links:

also, if you only need the graphic data, you might be interested in GPU solutions ; this article explains the shader-based approach. and if you’re more into the game-making thing, the beamcasting approach seems very efficient, especially as its triangular structure offers many advantages for handling the game’s logic.

I know you’ll find something that suits your needs along with valuable insights.

the algorithm

as mentioned above, I’m only interested in the algorithm itself so this article gathers a couple of thoughts on the topic, some sample animations and some JavaScript code.

I wanted to keep the algorithm topology-agnostic ; it should work on any series of segments (edges) & make as few assumptions about the data structure as possible. in real life, one would probably adapt what follows to perform faster ; for instance, re-using the edges instead of rebuilding them each time might be a good start.

I intentionally kept the examples unoptimized here ; a naive pseudo-code goes as follows:

  1. create the edges
  2. store them in an array
  3. compute & store each edge’s start & end angles
  4. sort the angles in ascending order
  5. for each angle of the angles’ list
  6.     cast a ray from the light source at this angle
  7.     for each of the edges
  8.         if the edge contains the angle
  9.             compute & store the closest intersection of this ray with the edge
  10.    push the closest intersection point to the solution

which, translated into (somewhat obfuscated) code, gives:
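the snippet isn’t reproduced in this copy ; as a stand-in, here’s a self-contained sketch of the same loop. the names (computeVisibility, intersectRayEdge) are mine, & the “edge contains the angle” test from step 8 is folded into the intersection routine as a range check on u:

```javascript
// hypothetical sketch of the naive loop: collect & sort the edges'
// start/end angles, then cast one ray per angle & keep the closest hit.
// edges: [{ p0: {x, y}, p1: {x, y}, a0, a1 }, …], light: { x, y }

// ray/segment intersection: o = origin, d = unit direction, e = edge
function intersectRayEdge(o, d, e) {
  const ex = e.p1.x - e.p0.x, ey = e.p1.y - e.p0.y;
  const det = d.x * ey - d.y * ex;
  if (Math.abs(det) < 1e-9) return null;              // ray parallel to edge
  const ox = e.p0.x - o.x, oy = e.p0.y - o.y;
  const t = (ox * ey - oy * ex) / det;                // distance along the ray
  const u = (ox * d.y - oy * d.x) / det;              // position along the edge
  // small tolerance so rays aimed exactly at an endpoint still hit
  if (t < 0 || u < -1e-9 || u > 1 + 1e-9) return null;
  return { x: o.x + d.x * t, y: o.y + d.y * t, dist: t };
}

function computeVisibility(light, edges) {
  const angles = [];
  for (const e of edges) angles.push(e.a0, e.a1);     // steps 1-3 assumed done
  angles.sort((a, b) => a - b);                       // step 4

  const solution = [];
  for (const angle of angles) {                       // step 5
    const ray = { x: Math.cos(angle), y: Math.sin(angle) }; // step 6
    let closest = null;
    for (const e of edges) {                          // steps 7-9
      const hit = intersectRayEdge(light, ray, e);
      if (hit && (closest === null || hit.dist < closest.dist)) closest = hit;
    }
    if (closest) solution.push(closest);              // step 10
  }
  return solution;
}
```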

now if you suspect some trickery here, you’re right. this snippet alone won’t do much ; it needs 2 data-holders (Point & Edge) along with some methods (angle, distance, project) to work.

the Point object simply stores X, Y coordinates & a distance value ; the Edge object is slightly trickier as it needs to ensure its points are properly oriented (CW).

the orientation is important because the angles are sorted in ascending order, so the scanline rotates clockwise ; if an edge is oriented counterclockwise (CCW), the computation will fail.
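for reference, a minimal sketch of those two data-holders ; the orient method & its swap test are my interpretation of the CW requirement (the scan sweeps angles in ascending order, so the arc from p0 to p1, as seen from the light, must stay under π):

```javascript
// the simple Point: coordinates + a distance slot filled during the scan
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.distance = 0; // (squared) distance to the light source
  }
}

// the simple Edge: two points, swapped if the winding is wrong
class Edge {
  constructor(p0, p1) {
    this.p0 = p0;
    this.p1 = p1;
  }
  // ensures the scanline (ascending angles) meets p0 before p1
  orient(light) {
    const a0 = Math.atan2(this.p0.y - light.y, this.p0.x - light.x);
    const a1 = Math.atan2(this.p1.y - light.y, this.p1.x - light.x);
    // normalize the sweep from a0 to a1 into [0, 2π);
    // more than π means the edge is seen the wrong way around
    if ((a1 - a0 + Math.PI * 2) % (Math.PI * 2) > Math.PI) {
      [this.p0, this.p1] = [this.p1, this.p0];
    }
    return this;
  }
}
```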

angle & distance methods are fairly easy to find, so here is the projection method:
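the method isn’t included in this copy, so here’s a hedged reconstruction using the A·x + B·y = C line form (A, B, C for the ray, D, E, F for the edge, as explained right below), plus the squared-distance helper:

```javascript
// coefficients of the line through (x0, y0) & (x1, y1): A·x + B·y = C
function lineCoefficients(x0, y0, x1, y1) {
  const A = y1 - y0;
  const B = x0 - x1;
  const C = A * x0 + B * y0;
  return [A, B, C];
}

// intersection of A·x + B·y = C with D·x + E·y = F, or null if parallel
function project(A, B, C, D, E, F) {
  const det = A * E - D * B;
  if (Math.abs(det) < 1e-9) return null;
  return {
    x: (E * C - B * F) / det,
    y: (A * F - D * C) / det,
  };
}

// squared distance: enough to compare candidates, no Math.sqrt needed
function squareDistance(x0, y0, x1, y1) {
  const dx = x1 - x0, dy = y1 - y0;
  return dx * dx + dy * dy;
}
```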

I used the projection method found here & precomputed most of the values: A, B, C represent the ray & are set at step 6 of the algorithm ; D, E, F represent the edge & are computed during the edge’s instantiation (see the Edge object). note that we can use the squared distance (faster) instead of the actual distance, as we only need to know whether one value is smaller than another. below is a little demo staging a caribou:

naive approach
it works nicely when the light is inside the shape, yet some major flaws appear:

  1. all the edges are computed, which is not a good idea
  2. the algorithm does not make a difference between the inside & the outside
  3. from “outside”, the visibility polygon is not closed properly
  4. some mid-edge projections are lost as the scanline moves forward

in other words, it sucks.

which edges?

the first step is to determine which edges to process ; we only need to process an edge if it is potentially visible from the light source. this is achieved by computing the dot product of the edge’s normal vector & the vector going from the light source towards one of the edge’s vertices. if the result is negative, the vectors point in opposite directions, meaning the edge is “facing” the light source & needs to be processed.

this is the code you need:
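the snippet being absent here, this is a minimal version of that test ; note that which way the normal points depends on the edges’ winding (hence the CW requirement):

```javascript
// back-face culling for edges: keep an edge only if it "faces" the light
function isFacing(light, edge) {
  // normal of the p0 → p1 segment (which side it points to depends on winding)
  const nx = edge.p1.y - edge.p0.y;
  const ny = edge.p0.x - edge.p1.x;
  // vector going from the light source towards one of the edge's vertices
  const vx = edge.p0.x - light.x;
  const vy = edge.p0.y - light.y;
  // negative dot product: the vectors point in opposite directions
  return nx * vx + ny * vy < 0;
}
```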

statistically, this removes 50% of the computations. note that I said as few assumptions as possible concerning the edges, but this is a strong one ; I assume that edges are opaque on both sides. by using a more sophisticated edge data structure, we could specify which side is visible or have one-sided walls (like a one-way mirror).

a nice (& necessary) addition is the visibility range ; beyond a given distance from the light source, we don’t have to compute anything.
by adding a simple distance check to the previous test, we can decimate the potentially visible edges’ list further.
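such a check could look like the sketch below ; I use the point-to-segment distance rather than the vertices’ distances, so a long wall grazing the light isn’t wrongly culled, & squared values to avoid the square root (the names are mine):

```javascript
// squared distance from the light to the closest point of the edge
function squareDistanceToEdge(light, edge) {
  const ex = edge.p1.x - edge.p0.x, ey = edge.p1.y - edge.p0.y;
  const lx = light.x - edge.p0.x, ly = light.y - edge.p0.y;
  const len2 = ex * ex + ey * ey;
  // projection parameter, clamped so the closest point stays on the segment
  const t = len2 === 0 ? 0 : Math.max(0, Math.min(1, (lx * ex + ly * ey) / len2));
  const dx = lx - t * ex, dy = ly - t * ey;
  return dx * dx + dy * dy;
}

// the extra decimation test: skip edges beyond the visibility range
function isInRange(light, edge, range) {
  return squareDistanceToEdge(light, edge) <= range * range;
}
```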

furthermore, if we consider the edges as walls & we know that we can have closed shapes, an interesting test can be performed before actually processing anything: determine whether or not a polygon contains the light source &, if it does, stop the computation there. in my previous article, I’ve shown a method that does this.

here’s a live demo:

potential visibility
move around to see the potentially visible edges (blue) ; click & drag to use the visibility range. when the shape turns grey, it means that the shape contains the light, so nothing is computed.

when clicked, the pink edges are out of range & don’t need to be processed, which leaves us with only the blue edges. I’ve displayed the decimation ratio, which indicates how much of the original edge set is valid & needs to be processed.

closing the visibility polygon

the visibility range is a nice addition, yet it raises a bigger problem ; how to close the visibility polygon.

the simple solution is to enclose all the polygons inside a bounding polygon (usually a quadrilateral) & use infinite rays ; this way we are certain that the rays will hit something… eventually.

in the next example, I’ve added a series of 4 (yellow), 8 (red) & 60 (blue) edges forming a circle around the light.
I also perform the tests described above, so when running, only the visible edges within the light range are processed. this should run much faster than the naive approach (roughly twice as fast).

here’s the result:

educated approach

before starting the animation, you should see the outlines of the visible polygons.

even though it works, this is not a very elegant solution ; we have to create extra geometry or we lose the benefit of a visibility range. in the case of AI, the agents would have an infinite line of sight ; in the case of graphics, the light would flood offscreen, which is not the expected result. furthermore, this approach doesn’t solve the mid-edge projection issue & the precision of the boundary depends on the number of edges added (the edge count is discrete, as opposed to a continuous arc).

I had the intuition that the problem boils down to determining when a ray switches between distances from the light source. indeed, if we convert all the cartesian coordinates to polar coordinates, we get the 2 values we need to solve the algorithm:

  • the angles at which we should project rays
  • the distances from each point to the light source

as it’s not a trivial concept to grasp, here’s a demo of the polarization:

polar
move your mouse around ; you’ll see the arcs described by the edges’ start and end angles, projected on the visibility range around the mouse.

the important bit here is what happens at the bottom, where all the angles are represented as vertical lines and all the edges as segments.
before going further, note that computing the edges’ projections against the visibility circle allows for a nifty trick ; instead of computing the visible polygon, what if we computed the invisible polygon?

instead of casting light, why not cast shadows?

here’s how the code would go:
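the snippet isn’t reproduced here ; the idea, sketched with my own names, is to project each visible edge’s ends away from the light onto the range circle & fill the resulting quads with shadow:

```javascript
// builds, for one edge, the polygon to fill with shadow:
// p0 → p1 → p1 pushed to the range circle → p0 pushed to the range circle
function shadowQuad(light, edge, range) {
  const push = (p) => {
    const a = Math.atan2(p.y - light.y, p.x - light.x);
    return { x: light.x + Math.cos(a) * range, y: light.y + Math.sin(a) * range };
  };
  return [edge.p0, edge.p1, push(edge.p1), push(edge.p0)];
}
```

filling those quads (one canvas path each) darkens everything the edges hide ; the chord-vs-arc issue described below comes from the two pushed points being joined by a straight line.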

& this would be the result (click, hold & drag):

shadow casting

note: you’ll need a big range for it to work, as it is supposed to overflow the screen.
if the light source is too close to an edge, it will fail ; as the edge’s ends are projected on a huge circle, the closer you get to a wall, the closer the arc’s chord gets to the back of the wall. it’s still a fast and easy solution for rendering shadows.

back to the polar view : we saw it’s possible to describe the edges’ projections as angle & distance rather than X, Y coordinates (polar vs cartesian coordinates).

to solve the problem in polar space, we have to change the approach a bit ; the scanline now moves from left to right & the light casts a ray from the top towards the bottom. the picture below sums up the cases we need to solve.

we’re after the thick blue line.

the code remains the same as before but instead of systematically picking the closest intersection, we’ll have to perform some extra tests depending on the nature of the point being processed & for this we need…

a less stupid data structure

a less stupid Point

now the point is aware of the edge it belongs to, stores its angle and knows whether it belongs to a chain or is the end of an edge. in addition, it computes the values (A, B, C) required to perform the projection test & can be flagged as an arc (see the render method below).
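a sketch of what this could look like ; the field names are mine, not necessarily the original ones:

```javascript
// the "less stupid" point: knows its edge, its polar data & its role
class EdgePoint {
  constructor(x, y, edge) {
    this.x = x;
    this.y = y;
    this.edge = edge;       // back-reference to the edge it belongs to
    this.angle = 0;         // polar angle around the light
    this.distance = 0;      // squared distance to the light
    this.isEnd = false;     // true if the point ends an open chain
    this.isArc = false;     // true if the solution should arc from here
    this.A = this.B = this.C = 0; // ray coefficients for the projection test
  }
  // computes the polar values & the A·x + B·y = C line through the light
  polarize(light) {
    const dx = this.x - light.x, dy = this.y - light.y;
    this.angle = Math.atan2(dy, dx);
    this.distance = dx * dx + dy * dy;
    this.A = dy;
    this.B = -dx;
    this.C = this.A * light.x + this.B * light.y;
    return this;
  }
}
```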

a less stupid edge

it stores the two points, the two angles & precomputes the projection values D, E, F.
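& the matching edge, sketched the same way (again, the names are mine) ; D, E, F describe the segment’s support line in the same D·x + E·y = F form:

```javascript
// the "less stupid" edge: two points, two angles & precomputed D, E, F
class VisEdge {
  constructor(p0, p1) {
    this.p0 = p0;                 // EdgePoint-like objects ({ x, y, … })
    this.p1 = p1;
    p0.edge = p1.edge = this;     // wire the back-references
    this.a0 = this.a1 = 0;        // start & end angles, set by polarize()
    // support line of the segment: D·x + E·y = F
    this.D = p1.y - p0.y;
    this.E = p0.x - p1.x;
    this.F = this.D * p0.x + this.E * p0.y;
  }
  polarize(light) {
    this.a0 = Math.atan2(this.p0.y - light.y, this.p0.x - light.x);
    this.a1 = Math.atan2(this.p1.y - light.y, this.p1.x - light.x);
    return this;
  }
}
```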

another thing we have to determine is whether or not a point is occluded ; indeed, if a point is completely hidden behind an edge, it cannot belong to the solution.
during the projection loop, I’ve added an occluded boolean that is set to false for each point & switches to true if the point’s projection is closer to the light source than the point itself (those are the red crosses in the picture above).

with this less stupid data structure, we can refine the tests after finding the closest point:

when a point belongs to the visibility range, we need to create a point using the polarToCartesian() method. no black magic here either:
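indeed, assuming the light position & the range are passed in, it’s a one-liner (the real method may also flag the resulting point with isArc = true):

```javascript
// converts an angle on the visibility circle back to X, Y coordinates
function polarToCartesian(light, angle, range) {
  return {
    x: light.x + Math.cos(angle) * range,
    y: light.y + Math.sin(angle) * range,
  };
}
```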

which gives us this glorious demo:

behold YE glorious
here’s the source code ; it is not the cleanest you can dream of but at least it is standalone.

another thing : to properly close the visibility polygon, we need to draw arcs. that’s where EdgePoint.isArc comes to the rescue.
the arcs are always drawn between 2 points of the solution, so the rendering method only needs to know when 2 consecutive points of the solution have their .isArc flag set to true & draw a sector (a.k.a. pie slice) ; otherwise it draws a triangle.

this takes advantage of the way the Canvas API works : beginPath() opens a buffer of mixed instructions until closePath() or a draw method is called. as our solution is guaranteed to be convex & to have a CW winding, we can iterate over the solution’s points linearly.
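a hedged sketch of that rendering pass ; ctx is a Canvas 2D context (or anything exposing the same methods), & the solution is assumed convex with a CW winding as stated above:

```javascript
// walks the solution linearly, emitting a line segment per point
// or an arc when two consecutive points are both flagged isArc
function render(ctx, light, range, solution) {
  if (!solution.length) return;
  ctx.beginPath();
  ctx.moveTo(solution[0].x, solution[0].y);
  for (let i = 1; i <= solution.length; i++) {
    const prev = solution[i - 1];
    const p = solution[i % solution.length]; // wrap back to close the loop
    if (prev.isArc && p.isArc) {
      // both points sit on the visibility circle: draw the arc between them
      const a0 = Math.atan2(prev.y - light.y, prev.x - light.x);
      const a1 = Math.atan2(p.y - light.y, p.x - light.x);
      ctx.arc(light.x, light.y, range, a0, a1);
    } else {
      ctx.lineTo(p.x, p.y);
    }
  }
  ctx.closePath();
  ctx.fill();
}
```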

it’s pretty similar to the circular frame created around the light earlier, with the notable difference that it doesn’t create extra geometry. unfortunately, there are still some cases where it breaks ; the arcs are not always drawn properly, so the safest option is still to use a bounding polygon.


I tried to implement a penumbra (soft shadows) as described in this article but failed miserably.
the plan was simple : as we know when we render an arc, we should be able to draw an area that simulates the light fading away. to do this – as the shading is not linear – we need a texture like the one below:

(in photoshop, it’s a transparent “angle gradient” with a very small (<2%) opaque black span at both ends). so whenever we hit an arc, we translate the canvas to the location of the point before the arc point, then rotate the canvas to orient it along this edge. finally, we set the texture as the fill pattern & draw a 45° arc (= PI / 4). the code:
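the snippet is missing from this copy ; here’s roughly how I read that description (drawPenumbra & its arguments are my naming, pattern being the angle-gradient texture set up with createPattern()):

```javascript
// draws one soft-shadow sector at an arc point:
// point = the point before the arc point, arcPoint = the arc point itself
function drawPenumbra(ctx, pattern, point, arcPoint, range) {
  const angle = Math.atan2(arcPoint.y - point.y, arcPoint.x - point.x);
  ctx.save();
  ctx.translate(point.x, point.y);        // move to the point before the arc
  ctx.rotate(angle);                      // orient the X axis along the edge
  ctx.fillStyle = pattern;                // the angle-gradient texture
  ctx.beginPath();
  ctx.moveTo(0, 0);
  ctx.arc(0, 0, range, 0, Math.PI / 4);   // a 45° fading sector
  ctx.closePath();
  ctx.fill();
  ctx.restore();
}
```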

& here’s the result:


there are flaws here too ; the self-projections are not taken into account, as handling them would mean knowing which side fades in which direction. also, since we use a texture, it can’t be faded along the edge’s axis the way a radial gradient would, so the length of the soft shadow depends on the texture itself.


this basically relies on the way the edges are built & oriented ; the same algorithm works with “compound” objects.
here’s a compound object for instance ; the eyes and the nose have a different orientation than the rest.

20 light sources are computed on click (it might take some time…) ; sometimes a light lies within a “hole” & doesn’t spread around, or is blocked by the shape containing the “hole”.
I used 2 color palettes : canary & Hibiki. note that the shapes are not drawn at all ; the image is made only of light.


along the way, I thought of a couple of extensions that could improve the thing further:

  • various light shapes: I used only point lights, but why not use a direction & an angle span to create a spot light? or a line that would emit light along its normals?
  • translucency: if the ends of an edge know how much light they let through, they can affect the visibility range, thus creating a cheap “translucency”
  • caustics: as with translucency, caustics would bounce some of the light they receive either outwards (reflection) or inwards (refraction)
  • curves: I’m only using line/line intersections here, but with a more clever data format & the appropriate intersection methods, it is possible to use curves (for instance using Kevin Lindsey’s intersection routines)
  • handling primitives such as circles or regular polygons separately ; there are “shortcuts” to perform the projections on those shapes

… the hidebehind?

I like all kinds of insects, chimeras & monsters (dragons seem trendy these days) & I often envision algorithms as creatures with an autonomous life. I first read about the hidebehind in the Book of Imaginary Beings by Jorge Luis Borges. here’s a description of this creature (wikipedia source):

[…] As its name suggests, the hidebehind is noted for its ability to conceal itself. When an observer attempts to look directly at it, the creature hides again behind an object or the observer and therefore can’t be directly seen […]

the hidebehind was a good match for what I was after: a creature that can’t be seen.


  1. nico

    hey Mark thanks for your comment :)
    I missed this article of yours, nice and clever, thanks!
    I was after the geometric algorithm here – not only the graphics – in which case I think I would have used the “GPU approach” and done something in the way of what you describe in your article

    as far as the inverted shape goes, I think drawing a circle of radius “range” counterclockwise at the end of the rendering loop should do the trick (wild guess here, not tried)

    • nico

      ( thanks for passing by! :) ) this is what I described as “4 – some mid-edge projections are lost as the scanline moves forward” in the naive approach ; only the closest candidate is being picked. in that case, the point is closer to the light than its projection on the (foot’s) segment. this is a typical case where a point is the “end” of an edge and should be able to cast a projection located further from the light than the point itself. in the naive approach, this case is not handled ; only the closest projection is kept, which causes this bug.
