Purely Functional Rendering Engine
After a long hiatus, we shall conclude our series of posts on the FRP physics example. The previous post discussed the falling brick whose motion state is updated upon collision events, while this one focuses on the mouse-draggable rag doll. In fact, the Bullet portion of this feature turned out to be less interesting than some of the observations we made while implementing the interaction in Elerea.
The rag doll itself is straightforward business: it consists of a few capsules (one for each joint), and the constraints that hold them together. These objects are instantiated in the complexBody function given a description. The general idea in Bullet is that constraints are configured in world space. For example, if we want to connect two bricks with a spring, we have to provide the world-space coordinates of the pivot points given the current position and orientation of the bricks, and the system calculates the necessary parameters from this input. Afterwards, everything is handled by the physics world, and we don’t have to worry about it at all.
Originally, we wanted to extend the attribute system to be able to describe all the parameters of the constraints in a convenient way. Unfortunately, it turned out that the Bullet API is rather inconsistent in this area, and it would have required too much up-front work to create a cleaner façade in front of it for the sake of the example. However, we intend to revisit this project in the future, when LambdaCube itself is in better shape.
To make things interesting, we allow the user to pick up objects one at a time and drag them around. This is achieved by temporarily establishing a so-called point-to-point constraint while the mouse button is pressed. This constraint simply makes sure that two points always coincide in space without imposing any limits on orientation.
The logic we want to implement is the following: when the mouse button is pressed while the cursor is over a dynamic body, we create a point-to-point constraint between that body and the point where the picking ray hits it; while the button is held, we keep updating the constraint’s pivot so the body follows the mouse; and when the button is released, we remove the constraint from the world.
The high-level process is described by the pickConstraint function:
pickConstraint :: BtDynamicsWorldClass bc => bc -> Signal Vec2 -> Signal CameraInfo
               -> Signal Bool -> Signal Vec2 -> SignalGen (Signal ())
pickConstraint dynamicsWorld windowSize cameraInfo mouseButton mousePos = do
  press <- edge mouseButton
  release <- edge (not <$> mouseButton)
  pick <- generator $ makePick <$> press <*> windowSize <*> cameraInfo <*> mousePos
  releaseInfo <- do
    rec sig <- delay Nothing $ do
          released <- release
          newPick <- pick
          currentPick <- sig
          case (released, newPick, currentPick) of
            (True, _, _) -> return Nothing
            (_, Just (constraintSignal, body), _) -> do
              constraint <- constraintSignal
              return $ Just (constraint, body, constraintSignal)
            (_, _, Just (_, body, constraintSignal)) -> do
              constraint <- constraintSignal
              return $ Just (constraint, body, constraintSignal)
            _ -> return Nothing
    return sig
  effectful2 stopPicking release releaseInfo
First, we define press and release events by detecting rising and falling edges of the mouseButton signal. The derived signals yield True only at the moment when the value of mouseButton changes in the appropriate direction. Afterwards, we define the pick signal, whose samples have the type Maybe (Signal BtPoint2PointConstraint, BtRigidBody). When the user presses the button while hovering over a dynamic body, pick carries a signal that corresponds to the freshly instantiated constraint plus a reference to the body in question; otherwise it yields Nothing.
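The behaviour of edge is easy to picture with a list model of discrete-time signals, where each element is one frame’s sample. The following sketch is our own illustration, not Elerea’s implementation; edgeL and the assumption that the sample before the first frame defaults to False are ours:

```haskell
-- List model of discrete-time signals: one element per frame.
-- edgeL yields True exactly on a False -> True transition, assuming the
-- sample preceding the first frame defaults to False.
edgeL :: [Bool] -> [Bool]
edgeL xs = zipWith (\prev cur -> cur && not prev) (False : xs) xs
```

In this model, press corresponds to edgeL applied to the button samples, and release to edgeL applied to their negation, so each is a one-frame pulse rather than a level.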
The releaseInfo signal is defined recursively through a delay, which is the most basic way of defining a stateful stream transformer in Elerea. In fact, the stateful and transfer combinators provided by the library are defined in a similar manner. The reason why we can’t use them in this case is the fact that the state contains signals that we need to sample to calculate the next state. This flattening is made possible thanks to Signal being a Monad instance.
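To see why delay is the basic building block, consider a toy model where a signal is an infinite list of samples. This is our own sketch, not Elerea’s code (delayL and statefulL are hypothetical names), but it shows how a stateful combinator falls out of delay plus recursion:

```haskell
-- Toy model: a signal is an infinite list of per-frame samples.
type SignalL a = [a]

-- delayL emits the initial value first, then the input one frame late.
delayL :: a -> SignalL a -> SignalL a
delayL x0 s = x0 : s

-- statefulL is definable purely in terms of delayL and recursion:
-- the output stream feeds back into itself through a one-frame delay.
statefulL :: a -> (a -> a) -> SignalL a
statefulL x0 f = out
  where out = delayL x0 (map f out)
```

The real stateful and transfer combinators follow the same feedback pattern; releaseInfo cannot use them directly only because its state contains signals that must be sampled on the way to the next state.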
The type of the state is Maybe (BtPoint2PointConstraint, BtRigidBody, Signal BtPoint2PointConstraint). The elements of the triple are: the current sample of the constraint, the body being dragged, and the time-changing signal that represents the constraint. The transformation rules described through pattern matching are the following: a release event clears the state to Nothing; a new pick causes its constraint signal to be sampled, and the sample is stored along with the body and the signal itself; while a pick is already active, the stored constraint signal is re-sampled every frame; in every other case the state stays Nothing.
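Stripped of sampling and IO, the transformation rules amount to a pure step function over Maybe-shaped state. This is a simplified model with hypothetical names (step, and a pick reduced to a plain pair; in the real code the pick also carries a signal that must be sampled):

```haskell
-- Simplified model of the releaseInfo state machine: c stands for the
-- constraint, b for the picked body. A release always wins over a new
-- or ongoing pick, so the patterns are tried in priority order.
step :: Bool          -- did a release event fire this frame?
     -> Maybe (c, b)  -- new pick this frame, if any
     -> Maybe (c, b)  -- state carried over from the previous frame
     -> Maybe (c, b)
step True _ _               = Nothing          -- button released: drop the pick
step _ (Just newPick) _     = Just newPick     -- fresh pick replaces the state
step _ _ (Just currentPick) = Just currentPick -- keep dragging
step _ _ _                  = Nothing          -- idle
```

Folding step over the per-frame inputs reproduces the state stream that delay threads through pickConstraint.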
In the end, releaseInfo will carry a triple wrapped in Just between a successful pick and a release event, and Nothing at any other moment. This signal, along with release itself, forms the input of stopPicking, which just invokes the appropriate Bullet functions to destroy the constraint at the right moment.
The missing piece of the puzzle is makePick, which is responsible for creating the constraint signal:
makePick :: Bool -> Vec2 -> CameraInfo -> Vec2
         -> SignalGen (Maybe (Signal BtPoint2PointConstraint, BtRigidBody))
makePick press windowSizeCur cameraInfoCur mousePosCur = case press of
  False -> return Nothing
  True -> do
    pickInfo <- execute $ pickBody dynamicsWorld windowSizeCur cameraInfoCur mousePosCur
    case pickInfo of
      Nothing -> return Nothing
      Just (body, hitPosition, distance) -> do
        constraint <- createPick dynamicsWorld body hitPosition distance windowSize cameraInfo mousePos
        return $ Just (constraint, body)
This is a straightforward signal generator, and passing it into generator in the definition of pick ensures that it is invoked in every frame. Note that makePick is defined locally within pickConstraint, which is how dynamicsWorld and the input signals come into scope. The pickBody function is an ordinary IO operation that was already mentioned in the first post of this series. Most of the work is done in createPick when an appropriate body is found:
createPick :: (BtDynamicsWorldClass bc, BtRigidBodyClass b)
           => bc -> b -> Vec3 -> Float -> Signal Vec2 -> Signal CameraInfo -> Signal Vec2
           -> SignalGen (Signal BtPoint2PointConstraint)
createPick dynamicsWorld body hitPosition distance windowSize cameraInfo mousePos = do
  make' (createPickConstraint dynamicsWorld body hitPosition)
    [ setting :!~ flip set [impulseClamp := 30, tau := 0.001]
    , pivotB :< pivotPosition <$> windowSize <*> cameraInfo <*> mousePos
    ]
  where
    createPickConstraint dynamicsWorld body hitPosition = do
      bodyProj <- transformToProj4 <$> btRigidBody_getCenterOfMassTransform body
      let localPivot = trim ((extendWith 1 hitPosition :: Vec4) .* fromProjective (inverse bodyProj))
      pickConstraint <- btPoint2PointConstraint1 body localPivot
      btDynamicsWorld_addConstraint dynamicsWorld pickConstraint True
      return pickConstraint
    pivotPosition windowSize cameraInfo mousePos =
      Just (rayFrom &+ (normalize (rayTo &- rayFrom) &* distance))
      where
        rayFrom = cameraPosition cameraInfo
        rayTo = rayTarget windowSize cameraInfo mousePos
The actual constraint is instantiated in createPickConstraint, which is just a series of Bullet API calls. We define the second pivot point as a signal attribute; the signal is a stateless function of the starting distance, the mouse position, and the view projection parameters. Such signals can be defined by lifting a pure function (in this case pivotPosition) using the applicative combinators. Since pivotPosition never yields Nothing, the pivot point is updated in every frame.
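The pivot arithmetic itself is plain vector algebra: the dragged pivot sits on the picking ray, at the same distance from the camera as the original hit point. A self-contained sketch, with our own tuple-based V3 and helper names standing in for the vect library’s types and operators:

```haskell
-- Minimal 3D vector helpers (our own stand-ins for the vect library).
type V3 = (Double, Double, Double)

addV, subV :: V3 -> V3 -> V3
addV (a,b,c) (x,y,z) = (a+x, b+y, c+z)
subV (a,b,c) (x,y,z) = (a-x, b-y, c-z)

scaleV :: V3 -> Double -> V3
scaleV (a,b,c) k = (a*k, b*k, c*k)

normalizeV :: V3 -> V3
normalizeV v@(a,b,c) = scaleV v (1 / sqrt (a*a + b*b + c*c))

-- Point on the ray from the camera through the mouse cursor, at the
-- distance recorded when the body was first picked.
pivotOnRay :: V3 -> V3 -> Double -> V3
pivotOnRay rayFrom rayTo dist =
  rayFrom `addV` (normalizeV (rayTo `subV` rayFrom) `scaleV` dist)
```

Because the distance is fixed at pick time, dragging the mouse slides the body along a sphere around the camera rather than towards or away from it.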
The most interesting outcome of this experiment, at least in our opinion, is the realisation of how FRP can make it easier to deal with mutable state in a disciplined way. In particular, it provides a nice solution when a mutable variable needs to be modified by several entities. Since all the future edits are available as a signal, it is straightforward to resolve edit conflicts with a state machine. In fact, the FRP approach practically forces us to do so.
Dealing with the interdependencies of several time-varying values can also be tricky. Again, with FRP we have no choice but to define clearly what happens in every possible configuration. One example of this in the above code is the definition of releaseInfo, where we used pattern matching to account for all the possibilities. It is an open question how well this method scales as the program grows in complexity; our future experiments should give us a better idea.