LambdaCube 3D
Purely Functional Rendering Engine
Ambient Occlusion Fields
May 15, 2016
Recently I created a new example that we added to the online editor: a simple showcase using ambient occlusion fields. This is a lightweight method to approximate ambient occlusion in real time using a 3D lookup table.
There is no single best method for calculating ambient occlusion, because various approaches shine under different conditions. For instance, screen-space methods are more likely to perform better when shading (very) small features, while working at the scale of objects or rooms requires solutions that work in world space, unless the artistic intent calls for a deliberately unrealistic look. VR applications especially favour world-space effects due to the increased need for temporal and spatial coherence.
I was interested in finding a reasonable approximation of ambient occlusion by big moving objects without unpleasant temporal artifacts, so it was clear from the beginning that a screen-space postprocessing effect was out of the question. I also ruled out approaches based on raymarching and raytracing, because I wanted it to be lightweight enough for mobile and other low-performance devices, and support any possible occluder shape defined as a triangle mesh. Being physically accurate was less of a concern for me as long as the result looked convincing.
First of all, I did a little research on world-space methods. I quickly found two solutions that are the most widely cited:
- Ambient Occlusion Fields by Kontkanen and Laine, which uses a cube map to encode the occlusion by a single object. Each entry of the map contains coefficients for an approximation function that returns the occlusion term given the distance from the centre of the object in the direction corresponding to the entry. They also describe a way to combine the occlusion terms originating from several objects by exploiting blending.
- Ambient Occlusion Fields and Decals in Infamous 2, which is a more direct approach that stores occlusion information (amount and general direction) in a 3D texture fitted around the casting object. This allows a more accurate reconstruction of occlusion, especially close to or within the convex hull of the object, at the cost of higher memory requirements.
I thought the latter approach was promising and created a prototype implementation. However, I was unhappy with the results exactly where I expected this method to shine: inside and near the object, and especially when it should have been self-shadowing.
After exhausting my patience for hacks, I had a different idea: instead of storing the general direction and strength of the occlusion at every sampling point, I’d directly store the strength in each of the six principal (axis-aligned) directions. The results surpassed all my expectations! The shading effect was very well-behaved and robust in general, and all the issues with missing occlusion went away instantly. While this meant increasing the number of terms from 4 to 6 for each sample, thanks to the improved behaviour the sampling resolution could be reduced enough to more than make up for it – consider that decreasing resolution by only 20% is enough to nearly halve the volume.
The real beef of this method is in the preprocessing step to generate the field, so let’s go through the process step by step. First of all, we take the bounding box of the object and add some padding to capture the domain where we want the approximation to work:
Next, we sample occlusion at every point by rendering the object onto an 8×8 cube map as seen from that point. We just want a black and white image where the occluded parts are white. There is no real need for higher resolution or antialiasing, as we’ll have more than 256 samples affecting each of the final terms. Here’s what the cube maps look like (using 10×10×10 sampling points for the illustration):
Now we need a way to reduce each cube map to just six occlusion terms, one for each of the principal directions. The obvious thing to do is to define them as averages over half cubes. E.g. the up term is an average of the upper half of the cube, the right term is derived from the right half etc. For better accuracy, it might help to weight the samples of the cube map based on the relative directions they represent, but I chose not to do this because I was satisfied with the outcome even with simple averaging, and the difference is unlikely to be significant. Your mileage may vary.
The resulting terms can be stored in two RGB texels per sample, either in a 3D texture or a 2D one if your target platform has no support for the former (looking at you, WebGL).
To recap, here’s the whole field generation process in pseudocode:
principal_directions = {left, down, back, right, up, forward}

for each sample_index in (1, 1, 1) to (x_res, y_res, z_res)
    pos = position of the grid point at sample_index
    sample = black and white 8x8 cube map capture of object at pos
    for each dir_index in 1 to 6
        dir = principal_directions[dir_index]
        hemisphere = all texels of sample in the directions at acute angle with dir
        terms[dir_index] = average(hemisphere)
    field_negative[sample_index] = (r: terms[1], g: terms[2], b: terms[3])
    field_positive[sample_index] = (r: terms[4], g: terms[5], b: terms[6])
This is what it looks like when sampling at a resolution of 32x32x32 (negative XYZ terms on top, positive XYZ terms on bottom):
The resulting image is basically a voxelised representation of the object. Given this data, it is very easy to extract the final occlusion term during rendering. The key equation is the following:
occlusion = dot(minField(p), max(-n, 0)) + dot(maxField(p), max(n, 0)), where
- p = the position of the sampling point in field space (this is normalised, i.e. (0,0,0) corresponds to one corner of the original bounding box used to generate the samples, and (1,1,1) covers the opposite corner)
- n = the normal of the surface in occluder local space
- minField = function to sample the minimum/negative terms (a single texture lookup if we have a 3D texture, two lookups and a lerp if we have a 2D texture)
- maxField = function to sample the maximum/positive terms
All we’re doing here is computing a weighted sum of the occlusion terms, where the weights are the clamped dot products of n with the six principal directions. These weights happen to be the same as the individual components of the normal, so instead of doing six dot products, we can get them by zeroing out the negative terms of n and -n.
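To make this concrete, here is a minimal CPU-side sketch of the reconstruction in plain Haskell; the two field samples are assumed to have been fetched already, and all names are illustrative rather than part of the actual implementation.

occlusionAt :: (Float, Float, Float)   -- surface normal n in occluder local space
            -> (Float, Float, Float)   -- negative-direction terms, minField(p)
            -> (Float, Float, Float)   -- positive-direction terms, maxField(p)
            -> Float
occlusionAt (nx, ny, nz) negTerms posTerms =
    dot negTerms (clamp0 (-nx), clamp0 (-ny), clamp0 (-nz))   -- dot(minField(p), max(-n, 0))
  + dot posTerms (clamp0 nx, clamp0 ny, clamp0 nz)            -- dot(maxField(p), max(n, 0))
  where
    clamp0 = max 0
    dot (a1, a2, a3) (b1, b2, b3) = a1*b1 + a2*b2 + a3*b3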
Putting aside the details of obtaining p and n for a moment, let’s look at the result. Not very surprisingly, the ambient term computed from the above field suffers from aliasing, which is especially visible when moving the object. Blurring the field with an appropriate kernel before use can completely eliminate this artifact. I settled with the following 3x3x3 kernel:
z = -1:         z = 0:          z = +1:
1  4  1         4  9  4         1  4  1
4  9  4         9 16  9         4  9  4
1  4  1         4  9  4         1  4  1
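For reference, here is a minimal CPU-side sketch (plain Haskell, illustrative names) of applying this kernel to one occlusion term of one grid point; the weights above are just (4 - d)^2, where d is the Manhattan distance of the offset, and the result is normalised by the sum of the weights.

blurField :: ((Int, Int, Int) -> Float)   -- field lookup, one occlusion term per grid point
          -> (Int, Int, Int)              -- grid coordinates of the sample to blur
          -> Float
blurField field (x, y, z) = weightedSum / totalWeight
  where
    offsets = [(dx, dy, dz) | dx <- [-1, 0, 1], dy <- [-1, 0, 1], dz <- [-1, 0, 1]]
    -- kernel entry for an offset: (4 - Manhattan distance)^2, matching the table above
    weight (dx, dy, dz) = fromIntegral ((4 - abs dx - abs dy - abs dz) ^ (2 :: Int))
    weightedSum = sum [weight o * field (x + dx, y + dy, z + dz) | o@(dx, dy, dz) <- offsets]
    totalWeight = sum (map weight offsets)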
Also, since the field is finite in size, I decided to simply fade out the terms to zero near the edge to improve the transition at the boundary. In the Infamous 2 implementation they opted for remapping the samples so the highest boundary value would be zero, but this means that every object has a different mapping that needs to be fixed with other magic coefficients later on. Here’s a comparison of the original (left) and the blurred (right) fields:
Back to the problem of sampling. Most of the work is just transforming points and vectors between coordinate systems, so it can be done in the vertex shader. Let’s define a few transformations:
- F – occluder local to (normalised) field space, i.e. map the bounding box in the occluder’s local space to the range (0,0,0)-(1,1,1); this matrix is tied to the baked field, therefore it’s constant
- O – occluder local to world space, i.e. occluder model transformation
- R – receiver local to world space, i.e. receiver model transformation
I’ll use the f, o, and r subscripts to denote that a point or vector is in field, occluder local, or receiver local space, e.g. p_f is the field space position, and n_r is the receiver’s local surface normal. When rendering a receiver, our constant inputs are F, O, and R, and the per-vertex inputs are p_r and n_r. Given this data, we can now derive the values of p and n needed for sampling:
n = n_o = normalize(O⁻¹ * R * n_r)
p = p_f + n * bias = F * O⁻¹ * R * p_r + n * bias
The bias factor is inversely proportional to the field’s resolution (I’m using 1/32 in the example, but it could also be a non-uniform scale if the field is not cube shaped), and its role is to prevent surfaces from shadowing themselves. Note that we’re not transforming the normal into field space, since that would alter its direction.
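For illustration, here is a CPU-side sketch of this per-vertex math in plain Haskell using the linear package (the real thing lives in the vertex shader); the function name and argument order are made up for the example.

import Linear (M44, V3 (..), V4 (..), (!*!), (!*), (^*), inv44, normalize, point, vector, normalizePoint)

-- F, O and R follow the notation above; posR and normalR are the receiver-local inputs.
fieldCoords :: M44 Float -> M44 Float -> M44 Float -> V3 Float -> V3 Float -> Float -> (V3 Float, V3 Float)
fieldCoords fieldMat occluderModel receiverModel posR normalR bias = (p, n)
  where
    recvToOcc = inv44 occluderModel !*! receiverModel        -- receiver local -> occluder local
    V4 nx ny nz _ = recvToOcc !* vector normalR              -- transform the normal as a direction (w = 0)
    n = normalize (V3 nx ny nz)                              -- n = normalize(O^-1 * R * n_r)
    p = normalizePoint (fieldMat !*! recvToOcc !* point posR) + n ^* bias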
And that’s pretty much it! So far I’m very pleased with the results. One improvement I believe might be worth looking into is reducing the number of terms from 6 to 4 per sample, so we can halve the amount of texture space and lookups needed. To do this, I’d pick the normals of a regular tetrahedron instead of the six principal directions for reference, and compute 4 dot products in the vertex shader (the 4 reference normals could be packed in a 4×3 matrix N) to determine the contribution of each term:
weights = N * no = (dot(no, n1), dot(no, n2), dot(no, n3), dot(no, n4))
occlusion = dot(field(p), max(weights, 0))
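A rough sketch of what that four-term variant could look like in plain Haskell with the linear package; the particular tetrahedron orientation is an arbitrary choice for the example.

import Linear (V3 (..), dot, normalize)

-- The four reference directions: vertex normals of a regular tetrahedron.
tetraDirections :: [V3 Float]
tetraDirections = map normalize
  [ V3   1    1    1
  , V3   1  (-1) (-1)
  , V3 (-1)   1  (-1)
  , V3 (-1) (-1)   1
  ]

-- weights = N * n_o, clamped to zero
tetraWeights :: V3 Float -> [Float]
tetraWeights n = [max 0 (d `dot` n) | d <- tetraDirections]

-- occlusion = dot(field(p), max(weights, 0)), with four terms stored per sample
occlusion4 :: [Float] -> V3 Float -> Float
occlusion4 fieldTerms n = sum (zipWith (*) fieldTerms (tetraWeights n))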
As soon as LambdaCube is in a good shape again, I’m planning to augment our beloved Stunts example with this effect.
Version 0.5 released
February 29, 2016
The time has come to release a new version of LambdaCube 3D on Hackage, which brings lots of improvements. Also, the previous release has been available on Stackage since the 2016-02-19 nightly. Just as last time, this post will only scratch the surface and give a high-level overview of what the past weeks brought.
The most visible change in the language is in the handling of tuples. Instead of defining them as distinct product types, the underlying machinery was changed to heterogeneous lists. As a consequence, the language is more complete, as we don’t have to explicitly define tuples of different arities in the compiler any more, and this also allowed us to simplify the codebase quite a bit. There is one gotcha though: unary tuples are a thing now, and they must be explicitly marked where they can occur (e.g. across shader boundaries). With the current syntax, a unary tuple is formed by surrounding any expression with double parentheses. You can read more about it in the language specification.
Another important change is that functions in the source program appear as functions in the generated code, i.e. they aren’t automatically inlined. Since modern GPU drivers often perform CSE during shader compilation, aggressively inlining everything puts unnecessary burden on them, so it’s probably for the better not to do it in most cases. This change also makes it much easier to read the generated code.
As for the internals of the compiler, many things were changed and improved since the last release. We’d like to highlight just one of these developments: we switched from parsec to megaparsec, which brought some performance improvements.
The online editor has a new time control feature: you can both pause the time and set it with a slider. We removed most of the custom uniforms, and now every example calculates everything using only the time as the input. As an added bonus, the LambdaCube 3D logo texture can be used in the editor as showcased by the Texturing example.
On the community side, the most important new thing is the lambdacube3d-discuss mailing list. We also added a new community page to the website so it’s easier to find all the places for LambdaCube related discussions. As for the website, both the API docs and the language specs pages received some love, plus we added a package overview page to dispel some confusion. Finally, this being the second release of the new system, we’re also introducing a changelog for the compiler.
Our short term plan is to take a little break from the compiler itself and improve the Quake 3 example. It serves both as a benchmark/testbed as well as a reality check that shows us how it feels to develop against our API. On a bit longer term, we intend to separate the compiler frontend as a self-contained component that could be used for making Haskell-like languages independently of the LambdaCube 3D project.
DSL in the wild
February 13, 2016
After a few months of radio silence, the first public version of the new LambdaCube 3D DSL is finally available on Hackage. We have also updated our website at the same time, so if you want to get your hands dirty, you can head over to our little Getting Started Guide right away. The rest of this post will provide some context for this release.
The summer tour was a fun but exhausting experience, and we needed a few weeks of rest afterwards. This paid off nicely, as development continued with renewed energy in the autumn, and we’ve managed to keep up the same pace ever since. The past few months have been quite eventful!
First of all, our team has a new member: Andor Pénzes. Andor took it upon himself to improve the infrastructure of the project, which was sorely needed as there was no manpower left to do it before. In particular, this means that we finally have continuous integration set up with Travis, and LambdaCube 3D can also be built into a Docker image.
It is also worth noting that this release is actually the second version of the DSL. The sole purpose of the first version was to explore the design space and learn about the trade-offs of various approaches in implementing a Haskell-like language from scratch given our special requirements. It would be impossible to list all the changes we made, but there are a few highlights we’d like to point out:
- The speed of reduction is greatly improved.
- Reduction is based on partial evaluation.
- We have a much more expressive type system with a faster inference algorithm.
- Pattern match compilation is based on new research.
We had an all-team meeting in December and after some discussion we came up with a detailed roadmap (disclaimer: this is a living internal document) for the first half of 2016. Without the gory details, this is what you should expect in the coming months:
- A new release is planned for every 2-3 weeks. In the current roadmap, every release would bring improvements across several areas, e.g. compiler internals, language features, editor usability, backend performance, new target platforms.
- We have explicitly left some time for improving documentation (guides and references) and keeping it up-to-date with the releases.
- As a feature milestone, we’d like to get to a point where it’s possible to write a small game for a mobile platform by the summer (we already have a working iOS example, but it’s far from production ready).
Everything said, this is an early release intended for a limited audience. If you happen to be an adventurous Haskell programmer interested in computer graphics – especially the realtime kind – and its applications, this might be a good time for you to try LambdaCube 3D. Everyone else is welcome, of course, but you’re on your own for the time being. In any case, we’re happy to receive any kind of feedback.
Playing around with distance field font rendering
November 12, 2014
While waiting for LambdaCube to reach a stable state I started thinking about displaying text with it. I wanted to build a system that doesn’t limit the user to a pre-determined set of characters, so all the static atlas based methods were out of the question. From experience I also know that a dynamic atlas can fill very quickly as the same characters need to be rendered to it at all the different sizes we need to display them in. But I also wanted to see nice sharp edges, so simply zooming the texture was out of the question.
What’s out there?
Short of converting the characters to triangle meshes and rendering them directly, the most popular solution to this problem is to represent them as signed distance fields (SDF). The most commonly cited source for this approach is a well-known white paper from Valve. Unfortunately, the problem with SDF based rendering is that it generally cannot preserve the sharpness of corners beyond the resolution of the texture:
The above letter was rendered at 64 pixels per em size. Displaying SDF shapes is very simple: just sample the texture with bilinear interpolation and apply a step function. By choosing the right step function adapted to the scale, it’s possible to get cheap anti-aliasing. Even simple alpha testing can do the trick if that’s not needed, in which case rendering can be done without shaders. As an added bonus, by choosing different step functions we can render outlines, cheap blurs, and various other effects. In practice, the distance field does an excellent job when it comes to curves. When zoomed in, the piecewise bilinear nature of the curves becomes apparent – it actually tends to look piecewise linear –, but at that point we could just swap in the real geometry if the application needs higher fidelity.
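As a tiny illustration of the step function mentioned above, here is one possible mapping from a sampled distance to a coverage value in plain Haskell; the scale-dependent filter width is left to the caller, and the names are illustrative.

-- d is the bilinearly sampled distance (positive inside the glyph), width is the
-- filter width in the same units, adapted to the current zoom level; the result
-- is an alpha value in [0,1], giving cheap anti-aliasing around the edge.
sdfCoverage :: Float -> Float -> Float
sdfCoverage width d = min 1 (max 0 (d / width + 0.5))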
Of course, all of this is irrelevant if we cannot afford to lose the sharp corners. There are some existing approaches to solve this problem, but only two come to mind as serious attempts: Loop and Blinn’s method (low-poly mesh with extra information on the curved parts) and GLyphy (SDF represented as a combination of arc spline approximations). Both of these solutions are a bit too heavy for my purposes, and they also happen to be patented, which makes them not so desirable to integrate in one’s own system.
The Valve paper also points towards a possible solution: corners can be reconstructed from a two-channel distance field, one for each incident edge. They claimed not to pursue that direction because they ‘like the rounded style of this text’ (yeah, right!). Interestingly, I haven’t been able to find any follow-up on this remark, so I set out to create a solution along these lines. The end result is part of the LambdaCube Font Engine, or Lafonten for short.
Doubling the channels
Valve went for the brute-force option when generating the distance fields: take a high-resolution binary image of the shape and for each pixel measure the distance to the nearest pixel of the opposite colour. I couldn’t use this method as a starting point, because in order to properly reconstruct the shapes of letters I really needed to work with the geometry. Thankfully, there’s a handy library for processing TrueType fonts called FontyFruity that can extract the Bézier control points for the character outlines (note: at the time of this writing, the version required by Lafonten is not available on Hackage yet).
The obvious first step is to take the outline of the character, and slice it up along the corners that need to be kept sharp. These curve sections need to be extruded separately and they need to alternate between the two channels. If there’s an odd number of sections, one segment – preferably the longest one to minimise artifacts – can gradually transition from one channel to the other:
The extrusion is performed in both directions, so the real outline of the character is the median line of the extruded sections. Afterwards, the internal part needs to be filled somehow. It turns out that simply using the maximum value in both channels yields good results:
The shape used for filling is created from the inner edges of the extruded curve sections. The convex corners are derived as intersections between the consecutive curves, while the concave corners need to be covered more aggressively to make sure there are no dark holes within the contour (not to mention that those curves don’t necessarily intersect if the angle is sharp enough).
Another transformation that turned out to be useful to avoid some undesired interaction between unrelated curves is an extra cutting step applied at obtuse convex angles. For instance, the bottom left part of the 3 above actually looks like this:
It might be the case that this cutting step is redundant, as it was originally introduced while I was experimenting with a different, more brittle channel selection scheme.
Unfortunately, this two-channel distance field doesn’t contain enough information to reconstruct the original shape. The problem is that depending on whether a corner is convex or concave we need to combine the fields with a different function to get an approximation. Convex corners are defined as the minimum of the two values, while concave ones are the maximum. When we render the above image on a low-res texture and sample it linearly, we cannot really determine which case we’re looking at without additional data.
Quadrupling the channels
After some experimentation, I came to the conclusion that I needed two additional channels to serve as masks. The final formula to calculate the approximate distance is the following:
max(max(d1 * m1, d2 * m2), min(d1, d2)).
The values of d1 and d2 come from the channels shown above, while m1 and m2 depend on what colour section we are in. If we’re in a section approximated by d1, then m1 = 1 and m2 = 0. For d2, it’s the opposite. The values of m1 and m2 can be obtained by thresholding the two extra channels.
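In plain Haskell the reconstruction is a one-liner (a sketch; d1 and d2 are the sampled distance channels, m1 and m2 the thresholded masks, each 0 or 1):

reconstruct :: Float -> Float -> Float -> Float -> Float
reconstruct d1 d2 m1 m2 = max (max (d1 * m1) (d2 * m2)) (min d1 d2)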
The new channels can be thought of as weights, and they explicitly mark which distance channel is relevant in a given position. This is what they look like for the same character:
The trickiest part is around the corners: we need to make sure that the weights kick in before the distance from the other channel disappears, but not too soon, otherwise the current channel would interfere with the other side of the corner. There’s a lot of hand tuning and magic constants involved, because I couldn’t find the time yet to derive a proper formula in a principled manner, but it handles most situations well enough already. It’s also obvious that the weights are erroneously low in the concave corners, but somehow this bug never seems to lead to visible errors so I haven’t fixed it yet.
One way to visualise the interaction of the four channels is thresholding them and adding the corresponding weights and distances:
The left hand side shows the result of thresholding and adding the channels. The right hand side is the same image, but the green channel is filled with the result of evaluating the reconstruction formula. This visualisation also shows how delicate the balance is around convex curvatures when generating the distance fields with this approach. Fortunately it does the trick!
One last step towards happiness
Unfortunately, this is not the end of the story. When the above geometry is rendered at a small resolution, we get some new artifacts partly from rasterisation, partly from linear interpolation in certain cases. The first problem is caused by the fact that the corner vertices of the fill geometry are not shared with the outlines, just derived with a formula, so there’s no guarantee that the rasteriser will fill all the pixels. The second issue can happen when inner edges of the opposite channels touch. When this happens, interpolation is performed between opposite values that both represent inner areas, but their average is so low that it’s interpreted as an outer area:
As it turns out, both issues can be resolved in a single post-processing pass that detects these specific situations and patches up the texture accordingly. All-maximum pixels are spread to adjacent all-minimum pixels, and directly facing opposite channels are also unified with the max function.
The bottom line
So was the final outcome worth the trouble? I believe so. The resolution necessary to correctly represent different characters highly depends on the font. Fonts with small details (e.g. small serifs) obviously require a finer resolution to reproduce with high fidelity. For instance, Times New Roman needs about 128 pixels per em, which is probably too much for most purposes. On the other hand, Droid Sans is fine with just 64 pixels per em, and Droid Serif with 72. As an extreme example, Ubuntu Regular renders nicely from just 40 pixels per em. In any case, the improvement over the simple distance field of the same resolution is quite spectacular:
In the end, I achieved the original goal. The characters are added to the atlas dynamically on demand, and they can be rendered at arbitrary sizes without getting blurry or blocky in appearance. At the moment the baking step is not very optimised, so it’s not possible to generate many characters without introducing frame drops. But it’s fast enough to be used on a loading screen, for instance.
What next?
I’m surprised how well this method works in practice despite the ad hoc approach I took in generating the distance field. However, I’m missing the robustness of the simple distance field, which doesn’t break down in surprising ways no matter what the input shape is. One alternative I’d like to explore involves a very different way of filling the channels. Instead of rendering the geometry directly as described above, I’m considering the use of diffusion curves or generalised Voronoi diagrams. Since the resolution needed to reproduce individual characters is not terribly high, it could be okay to do the baking on the CPU with a dumb brute-force algorithm at first. This would also make it possible to generate characters in a background thread, which would be ideal for games.
LambdaCube on Hackage at last
January 4, 2014
We noticed a while ago that due to the easy accessibility of Hackage packages many people still equate LambdaCube with its former incarnation that intended to replicate Ogre3D in Haskell. Today we got around to finally uploading the real thing. To make everything clear, we also marked the old packages as deprecated. The packages to use at the moment are the following:
- lambdacube-core: the library itself, which still depends on OpenGLRaw;
- lambdacube-samples: sample code for the new library.
Note that this update is somewhat of a hasty response to deprecate the old engine. The current structure is by no means final; firstly, we’re working on separating the OpenGL specific parts from the core and splitting the package in two, which would later allow us or others to create different back-ends for LambdaCube. Also, we’re going to upload the Stunts and Quake 3 demos built on the new engine in the near future.
Introducing the LambdaCube Intermediate Representation
October 12, 2013
A few months ago we gave a quick overview of our long-term plans with LambdaCube. One of the central elements in this vision is an intermediate representation (IR) that allows us to split the LambdaCube compiler, separating the front-end and back-end functionalities. Currently we’re reorganising the implementation into two packages: one to compile the EDSL to the IR, and the other to execute the IR over native OpenGL. Today we’ll have a quick glance at the present shape of the IR that makes this possible.
Before jumping to the meat of the topic, we need a little context. The figure in the linked post provides a good overview, so let’s have a good look at it:
The front-end compiler produces the IR, which is just a plain data structure. The back-end provides a library-style API that allows applications to manipulate the inputs of the pipeline (uniforms, textures, and geometry), and execute it at will. A typical LambdaCube back-end API is expected to contain functions along these lines:
compilePipeline :: IR -> IO Pipeline
setUniform :: Pipeline -> UniformName -> GpuPrimitive -> IO ()
setTexture :: Pipeline -> TextureName -> Texture -> IO ()
addGeometry :: Pipeline -> SlotName -> Geometry -> IO GeometryRef
removeGeometry :: Pipeline -> GeometryRef -> IO ()
executePipeline :: Pipeline -> IO ()
Or if the back-end is e.g. a Java library, it could define a Pipeline class:
class Pipeline {
    Pipeline(IR description) { ... }
    void setUniform(UniformName name, GpuPrimitive value) { ... }
    void setTexture(TextureName name, Texture texture) { ... }
    GeometryRef addGeometry(SlotName slot, Geometry geometry) { ... }
    void removeGeometry(GeometryRef geometry) { ... }
    void execute() { ... }
}
In typical usage, compilePipeline is invoked once in the initialisation phase to build a run-time representation of the pipeline. Afterwards, rendering a frame consists of setting up the inputs as desired, then executing the pipeline. Note that geometry doesn’t necessarily mean just vertex attributes, it can also include uniform settings. Our OpenGL back-end also allows per-object uniforms.
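To give a feel for it, here is a hypothetical sketch of such driver code; only compilePipeline, setUniform, addGeometry and executePipeline are taken from the API outline above, while loadPipelineDescription, someGeometry and floatUniform are stand-ins for application code.

main :: IO ()
main = do
    ir <- loadPipelineDescription "pipeline.json"    -- stand-in for obtaining the IR
    pipeline <- compilePipeline ir                   -- done once, during initialisation
    _obj <- addGeometry pipeline "objects" someGeometry
    mapM_ (renderFrame pipeline) [0, 1 / 60 ..]      -- one iteration per frame
  where
    renderFrame pipeline t = do
        setUniform pipeline "time" (floatUniform t)  -- set up the inputs as desired...
        executePipeline pipeline                     -- ...then execute the pipeline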
Now we can have a closer look at the pipeline! The top level definition of the pipeline IR is captured by the following data structure:
data Pipeline = Pipeline
    { textures :: Vector TextureDescriptor
    , samplers :: Vector SamplerDescriptor
    , targets  :: Vector RenderTarget
    , programs :: Vector Program
    , slots    :: Vector Slot
    , commands :: [Command]
    }
    deriving Show
A pipeline is represented as a collection of various top-level constants followed by a series of rendering commands. The constants are sorted into five different categories:
- Texture descriptions: specify the size and shape of textures, and also contain references to samplers.
- Sampler descriptions: specify the sampler parameters like wrap logic or filter settings.
- Render targets: each target is a list of images (either a texture or the output) with semantics (colour, depth, or stencil) specified.
- Shader programs: isolated fragments of the original pipeline that can be compiled separately, each corresponding to a rendering pass; they also specify the structure of their inputs and outputs (names and types).
- Input slots: each slot is a reference to some storage space for geometry that will be fed to the rendering passes. Slot descriptions define the structure of the input (vertex attributes and uniforms coming from the geometry), and they also enumerate references to all the passes that use them as a convenience feature.
Given all the data above, the commands describe the steps needed to execute the pipeline.
data Command
    = SetRasterContext RasterContext
    | SetAccumulationContext AccumulationContext
    | SetRenderTarget RenderTargetName
    | SetProgram ProgramName
    | SetSamplerUniform UniformName TextureUnit
    | SetTexture TextureUnit TextureName
    | SetSampler TextureUnit (Maybe SamplerName)
    | RenderSlot SlotName
    | ClearRenderTarget [(ImageSemantic, Value)]
    | GenerateMipMap TextureUnit
    | SaveImage FrameBufferComponent ImageRef
    | LoadImage ImageRef FrameBufferComponent
As we can see, this instruction set doesn’t resemble that of a conventional assembly language. There are no control structures, and we don’t deal with values at this level, only data dependencies. In essence, the pipeline program defines a suitable traversal order for the original pipeline definition. The job of the front-end compiler is to figure out this order and the necessary allocation of resources (mainly the texture units).
Most of the instructions – the ones whose name starts with Set – just specify the GPU state required to render a given pass. When everything is set up, RenderSlot is used to perform the pass. It can be optionally preceded by a ClearRenderTarget call. We also need some extra machinery to keep the results of passes around for further processing if the dependency graph refers to them several times. For the time being LoadImage and SaveImage are supposed to serve this purpose, but this part of the IR is still in flux.
The nice thing about this scheme is its clear separation of static structure and data. We basically take the standard OpenGL API and replace direct loads of data with named references. The rendering API is used to assign data during run-time using these names. This doesn’t necessarily have to involve a hash lookup, since the API can provide additional functionality to retrieve direct references to use in time critical code. The language maps closely to existing graphics interfaces, so it’s easy to create a lightweight interpreter or even a native code generator for it. Finally, there is a straightforward way to extend it with features we don’t support yet, like instancing or transform feedback, if the need arises.
A few thoughts on geometry shaders
June 21, 2013
We just added a new example to the LambdaCube repository, which shows off cube map based reflections. Reflections are rendered by sampling a cube map, which is created by rendering the world from the centre of the reflecting object in six directions. This is done in a single pass, using a geometry shader to replicate every incoming triangle six times. Here is the final result:
While the main focus of this blog is language and API design, we need to describe the pipeline structure of the example to put the rest of the discussion into context. The high-level structure corresponds to the following data-flow graph:
The most important observation is that several pieces of this graph are reused multiple times. For instance, all geometry goes through the model-view transformation, but sometimes this is performed in a vertex shader (VS), sometimes in a geometry shader (GS). Also, the same lighting equation is used when creating the reflection map as well as the non-reflective parts of the final rendering, so the corresponding fragment shader (FS) is shared.
The Good
For us, the most important result of writing this example was that we could express all the above mentioned instances of shared logic in a straightforward way. The high-level graph structure is captured by the top declarations in sceneRender’s definition:
sceneRender = Accumulate accCtx PassAll reflectFrag (Rasterize rastCtx reflectPrims) directRender
  where
    directRender = Accumulate accCtx PassAll frag (Rasterize rastCtx directPrims) clearBuf
    cubeMapRender = Accumulate accCtx PassAll frag (Rasterize rastCtx cubePrims) clearBuf6

    accCtx = AccumulationContext Nothing (DepthOp Less True :. ColorOp NoBlending (one' :: V4B) :. ZT)
    rastCtx = triangleCtx { ctxCullMode = CullFront CCW }

    clearBuf = FrameBuffer (DepthImage n1 1000 :. ColorImage n1 (V4 0.1 0.2 0.6 1) :. ZT)
    clearBuf6 = FrameBuffer (DepthImage n6 1000 :. ColorImage n6 (V4 0.05 0.1 0.3 1) :. ZT)

    worldInput = Fetch "geometrySlot" Triangles (IV3F "position", IV3F "normal")
    reflectInput = Fetch "reflectSlot" Triangles (IV3F "position", IV3F "normal")

    directPrims = Transform directVert worldInput
    cubePrims = Reassemble geom (Transform cubeMapVert worldInput)
    reflectPrims = Transform directVert reflectInput
The top-level definition describes the last pass, which draws the reflective capsule – whose geometry is carried by the primitive stream reflectPrims – on top of the image emitted by a previous pass called directRender. The two preceding passes render the scene without the capsule (worldInput) on a screen-sized framebuffer as well as the cube map. We can see that the pipeline section generating the cube map has a reassemble phase, which corresponds to the geometry shader. Note that these two passes have no data dependencies between each other, so they can be executed in any order by the back-end.
It’s easy to see how the same fragment shader is used in the first two passes. The more interesting story is finding a way to express the model-view transformation in one place and use it both in directVert and geom. As it turns out, we can simply extract the common functionality and give it a name. The function we get this way is frequency agnostic, which is reflected in its type:
transformGeometry :: Exp f V4F -> Exp f V3F -> Exp f M44F -> (Exp f V4F, Exp f V4F, Exp f V3F)
transformGeometry localPos localNormal viewMatrix = (viewPos, worldPos, worldNormal)
  where
    worldPos = modelMatrix @*. localPos
    viewPos = viewMatrix @*. worldPos
    worldNormal = normalize' (v4v3 (modelMatrix @*. n3v4 localNormal))
The simpler use case is directVert, which simply wraps the above functionality in a vertex shader:
directVert :: Exp V (V3F, V3F) -> VertexOut () (V3F, V3F, V3F)
directVert attr = VertexOut viewPos (floatV 1) ZT (Smooth (v4v3 worldPos) :. Smooth worldNormal :. Flat viewCameraPosition :. ZT)
  where
    (localPos, localNormal) = untup2 attr
    (viewPos, worldPos, worldNormal) = transformGeometry (v3v4 localPos) localNormal viewCameraMatrix
As for the geometry shader…
The Bad
… we already mentioned in the introduction of our functional pipeline model that we aren’t happy with the current way of expressing geometry shaders. The current approach is a very direct mapping of two nested for loops, expressed as an initialisation function plus two state transformers – essentially unfold kernels. The outer loop is responsible for one primitive per iteration, while the inner loop emits the individual vertices. Without further ado, here’s the geometry shader needed by the example:
geom :: GeometryShader Triangle Triangle () () 6 V3F (V3F, V3F, V3F)
geom = GeometryShader n6 TrianglesOutput 18 init prim vert
  where
    init attr = tup2 (primInit, intG 6)
      where
        primInit = tup2 (intG 0, attr)

    prim primState = tup5 (layer, layer, primState', vertInit, intG 3)
      where
        (layer, attr) = untup2 primState
        primState' = tup2 (layer @+ intG 1, attr)
        vertInit = tup3 (intG 0, viewMatrix, attr)
        viewMatrix = indexG (map cubeCameraMatrix [1..6]) layer

    vert vertState = GeometryOut vertState' viewPos pointSize ZT (Smooth (v4v3 worldPos) :. Smooth worldNormal :. Flat cubeCameraPosition :. ZT)
      where
        (index, viewMatrix, attr) = untup3 vertState
        vertState' = tup3 (index @+ intG 1, viewMatrix, attr)
        (attr0, attr1, attr2) = untup3 attr
        (localPos, pointSize, _, localNormal) = untup4 (indexG [attr0, attr1, attr2] index)
        (viewPos, worldPos, worldNormal) = transformGeometry localPos localNormal viewMatrix
The init function’s sole job is to define the initial state and iteration count of the outer loop. The initial state is just a loop counter set to zero plus the input of the shader in a single tuple called attr, while the iteration count is 6. The prim function takes care of increasing this counter, specifying the layer for the primitive (equal to the counter), and picking the appropriate view matrix from one of six uniforms. It defines the iteration count (3, since we’re drawing triangles) and the initial state of the inner loop, which contains another counter set at zero, the chosen view matrix, and the attribute tuple. Finally, the vert function calculates the output attributes using transformGeometry, and also its next state, which only differs from the current one in having the counter incremented.
On one hand, we had success in reusing the common logic between different shader stages by simply extracting it as a pure function. On the other, it is obvious at this point that directly mapping imperative loops results in really awkward code. At least it does the job!
The Next Step?
We’ve been thinking about alternative ways to model geometry shaders that would allow a more convenient and ‘natural’ manner of expressing our intent. One option we’ve considered lately would be to have the shader yield a list of lists. This would allow us to use scoping to access attributes in the inner loop instead of having to pass them around explicitly, not to mention doing away with explicit loop counters altogether. We could use existing techniques to generate imperative code, e.g. stream fusion. However, it is an open question how we could introduce lists or some similar structure in the language without disrupting other parts, keeping the use of the new feature appropriately constrained. One thing is clear: there has to be a better way.
Designing a custom kind system for rendering pipeline descriptions
April 28, 2013
We are in the process of creating the next iteration of LambdaCube, where we finally depart from the Haskell EDSL approach and turn the language into a proper DSL. The reasons behind this move were outlined in an earlier post. However, we still use the EDSL as a testing ground while designing the type system, since GHC comes with a rich set of features available for immediate use. The topic of today’s instalment is our recent experiment to make illegal types unrepresentable through a custom kind system. This is made possible by the fact that GHC recently introduced support for promoting datatypes to kinds.
Even though the relevant DataKinds extension is over a year old (although it’s been officially supported only since GHC 7.6), we couldn’t find any example to use it for modelling a real-life domain. Our first limited impression is that this is a direction worth pursuing.
It might sound surprising that this idea was already brought up in the context of computer graphics in the Spark project. Tim Foley’s dissertation briefly discusses the type theory behind Spark (see Chapter 5 for details). The basic idea is that we can introduce a separate kind called Frequency (Spark refers to this concept as RecordType), and constrain the Exp type constructor (@ in Spark) to take a type of this kind as its first argument.
To define the new kind, all we need to do is write a plain data declaration, as opposed to the four empty data declarations we used to have:
data Frequency = Obj | V | G | F
As we enable the DataKinds extension, this definition automatically creates a kind and four types of this kind. Now we can change the definition of Exp to take advantage of it:
data Exp :: Frequency -> * -> * where ...
For the time being, we diverge from the Spark model. The difference is that the resulting type has kind * in our case, while in the context of Spark it would be labelled as RateQualifiedType. Unfortunately, Haskell doesn’t allow using non-* return kinds in data declarations, so we can’t test this idea with the current version. Since having a separate Exp universe is potentially useful, we might adopt the notion for the DSL proper.
We don’t have to stop here. There are a few more areas where we can sensibly constrain the possible types. For instance, primitive and fragment streams as well as framebuffers in LambdaCube have a type parameter that identifies the layer count, i.e. the number of framebuffer layers we can emit primitives to using geometry shaders. Instead of rolling a custom solution, now we can use type level naturals with support for numeric literals. Among others, the definition of stream types and images reflects the new structure:
data VertexStream prim t
data PrimitiveStream prim (layerCount :: Nat) (stage :: Frequency) t
data FragmentStream (layerCount :: Nat) t
data Image (layerCount :: Nat) t where ...
Playing with kinds led to a little surprise when we looked into the texture subsystem. We had marker types called DIM1, DIM2, and DIM3, which were used for two purposes: to denote the dimension of primitive vectors and also the shape of equilateral textures. Both structures have distinct fourth options: 4-dimension vectors and rectangle-shaped textures. While they are related – e.g. the texture shape implies the dimension of vectors used to address texels –, these are different concepts, and we consider it an abuse of the type system to let them share some cases. Now vector dimensions are represented as type-level naturals, and the TextureShape kind is used to classify the phantom types denoting the different options for texture shapes. It’s exactly like moving from an untyped language to a typed one.
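A small, self-contained sketch of the idea with simplified, hypothetical types (not the actual LambdaCube definitions):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

import GHC.TypeLits (Nat)

-- Texture shapes get their own closed kind...
data TextureShape = Tex1D | Tex2D | Tex3D | TexRect

-- ...while vector dimensions are plain type-level naturals, so the two
-- concepts can no longer be mixed up by accident.
data Vec (dim :: Nat) a = Vec [a]

data Texture (shape :: TextureShape) t where
  Texture1D   :: Int               -> Texture 'Tex1D t
  Texture2D   :: Int -> Int        -> Texture 'Tex2D t
  Texture3D   :: Int -> Int -> Int -> Texture 'Tex3D t
  TextureRect :: Int -> Int        -> Texture 'TexRect t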
But what did we achieve in the end? It looks like we could express the very same constraints with good old type classes. One crucial difference is that kinds defined as promoted data types are closed. Since LambdaCube tries to model a closed domain, we see this as an advantage. It also feels conceptually and notationally cleaner to express simple membership constraints with kinds than by shoving them in the context. However, the final decision about whether to use this approach has to wait until we have the DSL working.
Optimising convolution filters
April 11, 2013
We recently added a new example to the repository that shows one way of implementing convolution filters. Our variance shadow mapping example uses a Gaussian blur as a component, but it’s implemented in a rather naive, straightforward way. This time we’re going to have a closer look at it and improve its performance while trying to preserve the output as much as possible. This little experiment was primarily inspired by a post from Daniel Rákos. While the optimisation is not specific to the Gaussian blur, we’ll stick with it as an example.
First of all, we’ll define a function to calculate the weights of a blur for a given filter width. As Daniel points out, taking the binomial coefficients, i.e. the appropriate row of Pascal’s triangle, and normalising it gives us the desired values. First let’s define the weights without normalisation:
binomialCoefficients :: Int -> [Float]
binomialCoefficients n = iterate next [1] !! (n-1)
  where
    next xs = [x+y | x <- xs ++ [0] | y <- 0:xs]
A convolution filter takes samples from the neighbourhood of a given texel and computes a weighted sum from them, therefore we need to assign offsets to the weights before we can use them. We will measure the offsets in texels. The following function assigns offsets to any list of weights with the assumption that the middle element of the list corresponds to the centre of the filter (from now on, we’ll assume the filter width is an odd integer):
withOffsets :: [Float] -> [(Float, Float)]
withOffsets cs = [(o, c) | c <- cs | o <- [-lim..lim]]
  where
    lim = fromIntegral (length cs `quot` 2)
Finally, we want to normalise the weights, so their sum is 1. We define our normalisation step to work on the weights with offsets:
normalise :: [(Float, Float)] -> [(Float, Float)]
normalise ocs = [(o, c/s) | (o, c) <- ocs]
  where
    s = sum [c | (_, c) <- ocs]
At this point, we can define a function to calculate weighted samples for a Gaussian blur:
gaussianSamples :: Int -> [(Float, Float)]
gaussianSamples = normalise . withOffsets . binomialCoefficients
For instance, gaussianSamples 5 equals [(-2.0,0.0625),(-1.0,0.25),(0.0,0.375),(1.0,0.25),(2.0,0.0625)].
To improve the performance of the filter, we should make it as small as possible. One obvious thing we can do is to simply remove terms that don’t contribute to the results significantly. Since our primary use case is real-time graphics, a bit of inaccuracy most likely won’t make any perceivable difference. Our solution is to introduce a thresholding operation that throws away terms that are much smaller than the biggest term:
threshold :: Float -> [(Float, Float)] -> [(Float, Float)]
threshold t ocs = [oc | oc@(_, c) <- ocs, c*t >= m]
  where
    m = maximum [c | (_, c) <- ocs]
This simple method can be surprisingly efficient. When we calculate the weights for wide Gaussian blurs, it turns out that a large portion of the outer areas hardly contribute to the final value. For instance, threshold 1000 (withOffsets (binomialCoefficients 101)) leaves only 37 terms.
Another, more clever trick we can apply is specific to texture sampling. Since the GPU provides linear interpolation for free, we can get a weighted sum of two adjacent samples with a single query: f(x) = (1 – frac(x)) * f(floor(x)) + frac(x) * f(ceil(x)). All we need to do is merge adjacent samples by calculating the necessary sampling position. The following function assumes that all samples are adjacent and separated by the unit distance; both criteria are met if we provide the output of withOffsets:
offsetWeight :: [(Float, Float)] -> [(Float, Float)]
offsetWeight [] = []
offsetWeight [ow] = [ow]
offsetWeight ((o1,w1):(o2,w2):ows) = (o1+w2/w', w') : offsetWeight ows
  where
    w' = w1+w2
The resulting list is half as long as the input. We can also combine this optimisation with thresholding, but only with care: we need to do the sample merging first, since offsetWeight assumes consecutive samples. It’s best to wrap it all up in a single function:
gaussianSamples :: Float -> Int -> [(Float, Float)]
gaussianSamples tolerance = normalise . threshold tolerance . offsetWeight . withOffsets . binomialCoefficients
To stick with the previous example, length (gaussianSamples 1000 101) equals 19. This graph shows how these two optimisations affect the number of samples needed to reproduce a Gaussian of a given width:
So how shall we take these weights into use? First of all, we need some input to apply the filter to. We went with a simple pattern generated in a fragment shader; the geometry rendered is just a screen-sized quad:
originalImage :: Exp Obj (FrameBuffer N1 V4F)
originalImage = Accumulate accCtx PassAll frag (Rasterize triangleCtx prims) clearBuf
  where
    accCtx = AccumulationContext Nothing (ColorOp NoBlending (one' :: V4B) :. ZT)
    clearBuf = FrameBuffer (ColorImage n1 (V4 0 0 0 1) :. ZT)
    prims = Transform vert (Fetch "geometrySlot" Triangle (IV2F "position"))

    vert :: Exp V V2F -> VertexOut ()
    vert pos = VertexOut pos' (floatV 1) (ZT)
      where
        V2 x y = unpack' pos
        pos' = pack' (V4 x y (floatV 0) (floatV 1))

    frag :: Exp F () -> FragmentOut (Color V4F :+: ZZ)
    frag _ = FragmentOut (col :. ZT)
      where
        V4 x y _ _ = unpack' fragCoord'
        x' = sqrt' x @* floatF 16
        y' = sqrt' y @* floatF 16
        r = Cond ((x' @+ y') @% (floatF 50) @< (floatF 25)) (floatF 0) (floatF 1)
        g = floatF 0
        b = Cond ((x' @- y') @% (floatF 50) @< (floatF 25)) (floatF 0) (floatF 1)
        col = pack' (V4 r g b (floatF 1))
This is a complete single-pass pipeline description. The vertex shader does nothing beyond simply extending the position vector with the missing Z and W coordinates. The fragment shader uses the fragment coordinate as its only input, so we don’t need to pass any data between the two shader phases. The resulting image is the following:
Next, we’ll define a one-dimensional convolution pass that takes a direction vector, a list of weights with offsets (which are used to scale the aforementioned vector), and an input image. The list is used to generate an unrolled loop in the fragment shader, while the vertex shader performs just an elementary coordinate transformation.
convolve :: V2F -> [(Float, Float)] -> Exp Obj (Image N1 V4F) -> Exp Obj (FrameBuffer N1 V4F)
convolve (V2 dx dy) weights img = Accumulate accCtx PassAll frag (Rasterize triangleCtx prims) clearBuf
  where
    resX = windowWidth
    resY = windowHeight

    dir' :: Exp F V2F
    dir' = Const (V2 (dx / fromIntegral resX) (dy / fromIntegral resY))

    accCtx = AccumulationContext Nothing (ColorOp NoBlending (one' :: V4B) :. ZT)
    clearBuf = FrameBuffer (ColorImage n1 (V4 0 0 0 1) :. ZT)
    prims = Transform vert (Fetch "postSlot" Triangle (IV2F "position"))

    vert :: Exp V V2F -> VertexOut V2F
    vert uv = VertexOut pos (Const 1) (NoPerspective uv' :. ZT)
      where
        uv' = uv @* floatV 0.5 @+ floatV 0.5
        pos = pack' (V4 u v (floatV 1) (floatV 1))
        V2 u v = unpack' uv

    frag :: Exp F V2F -> FragmentOut (Color V4F :+: ZZ)
    frag uv = FragmentOut (sample :. ZT)
      where
        sample = foldr1 (@+) [ texture' smp (uv @+ dir' @* floatF ofs) @* floatF coeff
                             | (ofs, coeff) <- weights ]
        smp = Sampler LinearFilter Clamp tex
        tex = Texture (Texture2D (Float RGBA) n1) (V2 resX resY) NoMip [img]
Incorporating the convolution pass is as simple as using function composition:
weights = gaussianSamples 1000 101
dirH = V2 1 0
dirV = V2 0 1

finalImage :: Exp Obj (FrameBuffer N1 V4F)
finalImage = filterPass dirV (filterPass dirH originalImage)
  where
    filterPass dir = convolve dir weights . projectBuffer
    projectBuffer = PrjFrameBuffer "" tix0
Using our original definition for gaussianSamples, i.e. not using thresholding or merging, we get the following rendering:
When we add both optimisations, the resulting image is indistinguishable from the previous one:
Despite the striking similarity, the two images are not identical. If we calculate the difference and enhance the result, we get the following pattern:
With 8-bit RGB components, the maximum difference anywhere on the image is just 1, so no wonder it cannot be detected by the naked eye. This is a really nice result, because the run-time increases steeply with the number of samples. The width of the filter is also a factor, but it’s difficult to predict its effect due to the complexity of the GPU hardware. On the hardware we tested (nVidia GeForce 9650M GT) we got the following timings:
At first glance, it looks like the time per frame is more or less directly proportional to the number of samples used in the filtering passes after we deduct the time needed to run the no-op 1×1 filter. However, in this particular case, the variant with both optimisations turned on seems to have a smaller time to sample count ratio than the naive version, about 0.2 vs. 0.25 ms, so the gains are even better than expected. On other hardware we might see different relations depending on the peculiarities of memory access costs.
At this point we ran into the limits of the current LambdaCube implementation. One more potential optimisation worth trying would be to calculate the sample coordinates in the vertex shader, making the texture reads non-dependent. Currently this is not possible due to the lack of support for array types in the shaders. It’s also evident that the EDSL approach without explicit let bindings is not viable, because compilation time is exponential with respect to the complexity of the shader. While there have been some positive developments in this area, we intend to switch to the DSL approach in the next version anyway, which should solve this problem. Finally, if you run the example, you’ll notice that you cannot resize the window. The reason for this is rather mundane: currently texture sizes are pipeline compile-time constants. In the future we’ll change their type from V2I to Exp Obj V2I, which will make it possible to modify the values during run-time.
Using Bullet physics with an FRP approach (part 3)
December 20, 2012
After a long hiatus, we shall conclude our series of posts on the FRP physics example. The previous post discussed the falling brick whose motion state is updated upon collision events, while this one will focus on the mouse draggable rag doll. In fact, the Bullet portion of this feature turned out to be less interesting than some of the observations we made while implementing the interaction in Elerea.
Defining a rag doll
The rag doll itself is straightforward business: it consists of a few capsules (one for each joint), and the constraints that hold them together. These objects are instantiated in the complexBody function given a description. The general idea in Bullet is that constraints are configured in world space. E.g. if we want to connect two bricks with a spring, we have to provide the world-space coordinates of the pivot points given the current position and orientation of the bricks, and the system calculates the necessary parameters based on this input. Afterwards, everything is handled by the physics world, and we don’t have to worry about it at all.
Originally, we wanted to extend the attribute system to be able to describe all the parameters of the constraints in a convenient way. Unfortunately, it turned out that the Bullet API is rather inconsistent in this area, and it would have required too much up front work to create a cleaner façade in front of it for the sake of the example. However, we intend to revisit this project in the future, when LambdaCube itself is in a better shape.
Reactive mouse picking logic
To make things interesting, we allow the user to pick up objects one at a time and drag them around. This is achieved by temporarily establishing a so-called point-to-point constraint while the mouse button is pressed. This constraint simply makes sure that two points always coincide in space without imposing any limits on orientation.
The logic we want to implement is the following:
- When the mouse button is pressed, check what the pointer is hovering over by shooting a ray into the world.
- If the ray hits a dynamic (finite-mass and not player controlled) object first, instantiate a constraint whose pivot point is the exact same spot where the ray hit.
- While the button is pressed, ensure that the other pivot point of the constraint is kept equal to a spatial position that corresponds to the current mouse position and is at the same distance from the camera as the original grabbing position.
- When the button is released, destroy the constraint.
The high-level process is described by the pickConstraint function:
pickConstraint :: BtDynamicsWorldClass bc => bc -> Signal Vec2 -> Signal CameraInfo -> Signal Bool -> Signal Vec2 -> SignalGen (Signal ())
pickConstraint dynamicsWorld windowSize cameraInfo mouseButton mousePos = do
    press <- edge mouseButton
    release <- edge (not <$> mouseButton)
    pick <- generator $ makePick <$> press <*> windowSize <*> cameraInfo <*> mousePos
    releaseInfo <- do
        rec sig <- delay Nothing $ do
                released <- release
                newPick <- pick
                currentPick <- sig
                case (released, newPick, currentPick) of
                    (True, _, _) -> return Nothing
                    (_, Just (constraintSignal, body), _) -> do
                        constraint <- constraintSignal
                        return $ Just (constraint, body, constraintSignal)
                    (_, _, Just (_, body, constraintSignal)) -> do
                        constraint <- constraintSignal
                        return $ Just (constraint, body, constraintSignal)
                    _ -> return Nothing
        return sig
    effectful2 stopPicking release releaseInfo
First, we define press and release events by detecting rising and falling edges of the mouseButton signal. The derived signals yield True only at the moment when the value of mouseButton changes in the appropriate direction. Afterwards, we define the pick signal, which has the type Maybe (Signal BtPoint2PointConstraint, BtRigidBody). When the user presses the button while hovering over a dynamic body, pick carries a signal that corresponds to the freshly instantiated constraint plus a reference to the body in question, otherwise it yields Nothing.
The releaseInfo signal is defined recursively through a delay, which is the most basic way of defining a stateful stream transformer in Elerea. In fact, the stateful and transfer combinators provided by the library are defined in a similar manner. The reason why we can’t use them in this case is the fact that the state contains signals that we need to sample to calculate the next state. This flattening is made possible thanks to Signal being a Monad instance.
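As an aside, this is roughly how stateful can be built on top of delay (a sketch assuming Elerea's Simple variant and recursive do-notation):

{-# LANGUAGE RecursiveDo #-}

import FRP.Elerea.Simple (Signal, SignalGen, delay)

-- The next state is a pure function of the delayed output, fed back recursively.
statefulViaDelay :: a -> (a -> a) -> SignalGen (Signal a)
statefulViaDelay x0 step = mdo
    sig <- delay x0 (step <$> sig)
    return sig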
The type of the state is Maybe (BtPoint2PointConstraint, BtRigidBody, Signal BtPoint2PointConstraint). The elements of the triple are: the current sample of the constraint, the body being dragged, and the time-changing signal that represents the constraint. The transformation rules described through pattern matching are the following:
- if the mouse button is released, the state is cleared to Nothing;
- if a pick was generated, it is stored along with the current sample of the constraint signal;
- if there is an ongoing pick, the constraint signal is sampled;
- in any other case, the state is cleared.
In the end, releaseInfo will carry a triple wrapped in Just between a successful pick and a release event, and Nothing at any other moment. This signal, along with release itself, forms the input of stopPicking, which just invokes the appropriate Bullet functions to destroy the constraint at the right moment.
The missing piece of the puzzle is makePick, which is responsible for creating the constraint signal:
makePick :: Bool -> Vec2 -> CameraInfo -> Vec2 -> SignalGen (Maybe (Signal BtPoint2PointConstraint, BtRigidBody))
makePick press windowSizeCur cameraInfoCur mousePosCur = case press of
    False -> return Nothing
    True -> do
        pickInfo <- execute $ pickBody dynamicsWorld windowSizeCur cameraInfoCur mousePosCur
        case pickInfo of
            Nothing -> return Nothing
            Just (body, hitPosition, distance) -> do
                constraint <- createPick dynamicsWorld body hitPosition distance windowSize cameraInfo mousePos
                return $ Just (constraint, body)
This is a straightforward signal generator, and passing it into generator in the definition of pick ensures that it is invoked in every frame. The pickBody function is an ordinary IO operation that was already mentioned in the first post of this series. Most of the work is done in createPick when an appropriate body is found:
createPick :: (BtDynamicsWorldClass bc, BtRigidBodyClass b) => bc -> b -> Vec3 -> Float -> Signal Vec2 -> Signal CameraInfo -> Signal Vec2 -> SignalGen (Signal BtPoint2PointConstraint)
createPick dynamicsWorld body hitPosition distance windowSize cameraInfo mousePos = do
    make' (createPickConstraint dynamicsWorld body hitPosition)
        [ setting :!~ flip set [impulseClamp := 30, tau := 0.001]
        , pivotB :< pivotPosition <$> windowSize <*> cameraInfo <*> mousePos
        ]
  where
    createPickConstraint dynamicsWorld body hitPosition = do
        bodyProj <- transformToProj4 <$> btRigidBody_getCenterOfMassTransform body
        let localPivot = trim ((extendWith 1 hitPosition :: Vec4) .* fromProjective (inverse bodyProj))
        pickConstraint <- btPoint2PointConstraint1 body localPivot
        btDynamicsWorld_addConstraint dynamicsWorld pickConstraint True
        return pickConstraint

    pivotPosition windowSize cameraInfo mousePos = Just (rayFrom &+ (normalize (rayTo &- rayFrom) &* distance))
      where
        rayFrom = cameraPosition cameraInfo
        rayTo = rayTarget windowSize cameraInfo mousePos
The actual constraint is instantiated in createPickConstraint, which is just a series of Bullet API calls. We define the second pivot point as a signal attribute; the signal is a stateless function of the starting distance, the mouse position, and the view projection parameters. Such signals can be defined by lifting a pure function (in this case pivotPosition) using the applicative combinators. Since pivotPosition never yields Nothing, the pivot point is updated in every frame.
The final take-away
The most interesting outcome of this experiment, at least in our opinion, is the realisation how FRP can make it easier to deal with mutable state in a disciplined way. In particular, it provides a nice solution in the situation when a mutable variable needs to be modified by several entities. Since all the future edits are available as a signal, it is straightforward to resolve edit conflicts with a state machine. In fact, the FRP approach practically forces us to do so.
Dealing with the interdependencies of several time-varying values can also be tricky. Again, with FRP we have no choice but to clearly define what happens in all the possible constellations. One example for this in the above code is the definition of releaseInfo, where we used pattern matching to account for all the possibilities. It is an open question how this method scales as the program grows in complexity, and we’ll see that better in our future experiments.