Grandmother Cells and Black Swans

When I was first learning about deep learning, the teacher brought up an issue with image classifiers and black swans. I would call this the black swan problem, but it turns out that has a related but different meaning, so let’s go with swan classifier generalization problem. It goes like this:

If you train a classifier to identify objects in images, and one of those categories is swans, that classifier will tend to be training exclusively or predominantly on white swans. At deployment it is likely to misclassify images of Australian black swans.

This isn’t all that surprising. Generalization in deep learning is hard.

What is surprising is:

Most humans familiar with swans would be able to recognize an Australian black swan as a swan.

In other words, humans are good at generalization, and are able to do it better than many of our traditional deep learning tools — at least in comparison to older deep image classifiers trained on datasets like ImageNet.

This post is an exploration into deep learning, classification, and generalization, and about why I think feature embeddings go a long way toward building a better knowledge representation. None of this is particularly new or insightful – I just find it useful to work it out and tie it all together.

Grandmother Cells

A deep image classifier takes as input an image and produces as output a softmax distribution over a discrete set of categories:

Conventional wisdom is that deeper layers reason about more sophisticated, higher-level features. Very early convolutional layers typically only have access to small neighborhoods of pixels, so they learn local features like edges and textures. Deeper layers might piece together larger structures like a bird beak or ripples in a lake. Finally, all of this information is brought together to produce the final classification.

Under such an approach, you literally have one single neuron at the end responsible for firing when it thinks the image has a swan. The higher its output value, the more confident the swan prediction.
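To make that concrete, here is a minimal sketch of a classifier head: logits go in, a softmax distribution over categories comes out. The category names and logit values are made up for illustration.

```julia
# A minimal sketch of a classifier head (illustrative categories and logits).
function softmax(z::Vector{Float64})
    e = exp.(z .- maximum(z))  # subtract the max for numerical stability
    return e ./ sum(e)
end

categories = ["swan", "goose", "duck", "crow"]
logits = [4.0, 1.0, 0.5, -1.0]  # a confident "swan" prediction
probs = softmax(logits)         # probs[1] dominates: the "swan" neuron fires
```

The single "swan" output here is exactly the grandmother-cell picture: one slot in the output vector for one high-level concept.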

There have historically been two opposing views on the relationship between brains and behavior — the localist view that specific brain regions are responsible for specific behaviors, and the holistic view that neural activity is spread out throughout the nervous system. A critic of the localist view might think it absurd that there is a single neuron somewhere in your head that fires over the concept of “Grandmother”.

A grandmother cell is just that — a neuron exclusively dedicated to one high-level but specific concept. (Funny enough, there was a lot of hubbub in 2005 about recordings that suggested that a single neuron had been found that triggers only for Jennifer Aniston.)

By and large, researchers do not believe that the best way to represent knowledge is through a 1:1 representation such as grandmother cells. As such, it may not come as a surprise when a traditional convolutional deep image classifier trained on ImageNet struggles to classify black swans.

Sparse Encodings

A traditional image classifier is structured to prefer sparse outputs. A confident prediction should produce a high value in the appropriate category and very low values elsewhere:

If the model is uncertain, it is forced to make a trade-off, assigning some probability mass to the other potential categories. That might mean assigning some likelihood to other black birds:

This sort of representation might be convenient for the output of a classifier, but it isn’t all that useful for reasoning. If I am trying to think about what it means for an object to be a swan, I don’t want to simply know it is a swan, I want to think about where I might find it (e.g. on a lake), what it might do (e.g. honk at me), and sure, what it looks like (e.g. tends to be white-feathered).

A reasoning network that receives the fact that there is a swan would thus have to unpack this discrete bit of knowledge into these myriad facts:

Worse yet, if you have an uncertain input, you have to unpack all contributors and figure out how to combine them:

Working directly with the discrete bit of knowledge is fragile. If \(P(\text{swan})\) is low, then the network simply can't associate the object with a swan. However, if we're working with the distributed properties and associations of a swan, it's a whole lot easier to get to swan if one property (color) is unusual.

A third issue is that sparse representations don't use the state space efficiently. If my reasoning network receives a \(128\)-dimensional vector, and we're working with one-hot encodings where everything but one dimension is zero, then we can only represent 128 different concepts. In contrast, if we're willing to use the whole state space, we can represent more or less any number of concepts.
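A toy illustration of that capacity difference: random unit vectors in 128 dimensions are nearly orthogonal to one another, so a continuous space can hold far more than 128 quasi-distinct concept directions. The vector count and seed below are arbitrary.

```julia
using LinearAlgebra, Random

# One-hot: 128 dimensions buy exactly 128 concepts.
# Continuous: 1000 random unit vectors in 128 dimensions still barely overlap.
Random.seed!(0)
n = 128
vecs = [normalize(randn(n)) for _ in 1:1000]

# Worst-case cosine similarity over all pairs stays well below 1
max_sim = maximum(abs(dot(vecs[i], vecs[j])) for i in 1:999 for j in i+1:1000)
```

Even the most-aligned pair of these 1000 "concepts" is far from parallel, which is what lets dense embeddings pack many more meanings into the same dimensionality.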

A 2D embedding for the MNIST digits. The left-side shows how digits are mapped to the embedding, and the right shows how samples from the space produce digit images. Images from Algorithms for Decision Making.

Discrete representations are thus hard to work with, fragile, and wasteful.

Transformer Embeddings

You might think that transformers suffer from this same problem of discrete reasoning, since they operate on sequences of one-hot tokens. However, these discrete tokens are immediately mapped to rich embedding vectors:

The encoder literally has a separate high-dimensional embedding vector for each discrete token. If we have a vocabulary of \(m\) unique tokens and an \(n\)-dimensional embedding space, then our embeddings are given by an \(n \times m\) matrix. Multiplying by the one-hot token extracts the embedding vector:

\[\boldsymbol{e}^{(i)} = \begin{bmatrix}\boldsymbol{e}^{(1)}, \boldsymbol{e}^{(2)}, \cdots \boldsymbol{e}^{(i)}, \cdots, \boldsymbol{e}^{(m)} \end{bmatrix} \begin{bmatrix}0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0\end{bmatrix}\]
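A quick sketch of that lookup, with toy sizes (the dimensions here are made up):

```julia
# Multiplying the embedding matrix by a one-hot vector selects a column.
n, m = 4, 6                  # n = embedding dims, m = vocabulary size (toy values)
E = randn(n, m)              # one n-dimensional embedding per token
i = 3                        # token index
onehot = zeros(m); onehot[i] = 1.0
e_i = E * onehot             # identical to E[:, i], just written as a product
```

In practice no real implementation materializes the matrix product; it just indexes the column, which is the same thing.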

Transformers learn what values to assign to these embedding vectors. As such, they can pack a lot of meaning into those \(n\) dimensions, far more than would be used if it was stuck operating on \(m\) discrete categories.

I talk about transformers in Transformers, How and Why They Work, but gloss over what these embedding vectors really give us. Transformer layers are best thought of as taking the input embedding vectors, which each point in some direction, and incrementally rotating them to point in other directions. (There’s a great video of this by 3Blue1Brown.)

Keep in mind that these are actually very high-dimensional feature spaces.

The initial embedding value is the one the raw token is associated with. The final embedding value is the output of all of the transformer layers, right before a final set of affine layers to go from an \(n\)-dimensional embedding to an \(m\)-dimensional set of logits for the next token.

That means:

  • the final embedding should hold all of the information needed to predict the next token
  • the initial embedding holds the superposition of all meanings of the token in general

Those are two very different things – hence the need for all those transformer layers and incremental updates.

A final embedding that predicts the next word in “the cat sat on the” needs to capture all the things that a cat might sit on (e.g. laps, mats), as well as adjectives of places cats might sit (e.g. warm laps), and who knows what else people append to that sentence. There is no way that we could capture that superposition of meaning with only \(m\) discrete options.

Interestingly, an initial embedding value also needs to represent a superposition of concepts. The embedding for “mat” for example, might mean a nice place for a cat to sit in one context, but could also be a large concrete slab, or a thick wad of hair.

That means the initial embedding for “mat” should lie in a similar direction as those other concepts. At the very least, it would likely have subcomponents of its \(n\)-dimensional feature space that lie in similar directions. A part of “mat” and “slab” will align for the meaning they share, and likely a different part of “mat” and “matted hair” would align for the meaning those two words share.
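Here is a toy sketch of that sub-direction sharing, with entirely made-up 4-dimensional vectors where the first two dimensions carry one shared meaning and the last two carry another:

```julia
using LinearAlgebra

# Entirely made-up vectors: dims 1-2 carry a "flat object on the ground"
# meaning, dims 3-4 carry a "tangled fiber" meaning.
mat         = normalize([1.0, 1.0, 1.0, 1.0])  # shares both meanings
slab        = normalize([1.0, 1.0, 0.0, 0.0])
matted_hair = normalize([0.0, 0.0, 1.0, 1.0])

cos_sim(a, b) = dot(a, b)  # unit vectors, so the dot product is the cosine
```

"mat" ends up similar to both "slab" and "matted hair" through different subspaces, while those two share nothing with each other — the kind of partial alignment a real \(n\)-dimensional embedding can express.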

This all shows how transformers are able to assign superpositions of meanings using embedding vectors. They can do this in part because the continuous feature space allows for cramming a lot of meanings into the same number of dimensions, allows for smoothly interpolating between meanings, and because a single direction can be made up of sub-directions that have their own meanings. That is exactly what we're looking for when we want to predict a black swan when we know we need a word for a feathered creature with a big beak on a lake that honks and happens to be colored black.

Why do Transformers learn all this? Because they have to in order to predict the next token accurately. Language is incredibly rich and carries all of these layered meanings.

More Holistic than Local?

One of the big takeaways for me is that knowledge representation using transformer-like dense embeddings is able to pack in and superimpose many concepts, and ends up being less fragile as a result. If enough of the concepts point to a swan, we can still deduce that we need a swan.

I am not a neuroscientist, but I find it highly likely that true grandmother neurons are quite rare. Instead, meaning is more likely to be found packed into subspaces represented by groups of neural firing patterns.

Similarly, I am reminded of writing software for robotics applications. If you’re writing code yourself, then you’re likely basing your reasoning on a comparatively small set of concepts. Take a sidewalk delivery robot for example. A reasonable programmer might try to enforce that the sidewalk delivery robot never cross a light on red. However, said reasonable programmer might get sad when they run into a case where an intersection is under construction and the light is red, but the path is open to pedestrians and delivery robots:

The world is a complicated place, and we quickly find that the number of cases our coded heuristics can handle is quite small. We can easily set ourselves up for failure if we code like grandmother cells. At the very least, code that makes declarative statements needs to be very careful to make declarative statements about what it really is judging — whether the contextual scene is appropriate for crossing or not rather than whether the light is red. As we’ve already learned – getting hung up on color is what motivated this blog post in the first place.

Can we rewrite robotics logic to use embedding vectors? Perhaps. Unfortunately, transformer embeddings are fairly inscrutable to anyone other than the transformer. Interpretable machine learning is still a nascent field.

I think transformers are incredible, but I also think some fundamental properties are missing. They don’t really understand the world yet, not really. AI fails in embarrassing ways. We as humans can write code to reason about \(m\) discrete things. We have a much harder time reasoning about all of the overlapping subtleties of the real world.

Conclusion

This post doesn’t really have a decisive answer. In fact, I hope it serves as food for thought.

I think it is worth pondering these questions as we move forward in this bold new world of Software 2.0, where everything is a transformer and we can do more than we ever could before but don’t really understand how it works or just how far it can take us.

I think this is an incredibly exciting time. It seems like we are very close to cracking the nut of “how we think”. Heck, even John Carmack started working on AI because he feels similarly.

There is likely still something to be learned from how the human brain works. It is the reference model that we have, the irrefutable evidence that there are systems that can reliably learn and then reliably perform well on real-world tasks. Intelligences that know that crossing on red is fine when the road is blocked to cars. Intelligences that more-or-less do the right thing, with very few training examples. Those intelligences may not have figured it all out just yet, but perhaps by searching more within, they can finally get to the bottom of things.

Coding your own Tools

We’re working on finalizing the 2nd Edition of Algorithms for Optimization. We originally set out to add three new chapters, but ended up overhauling most of the book in addition. It’s pretty exciting!

These projects are so big, and so long, that you’ll inevitably run into all sorts of new challenges because time has passed, some dependencies are no longer supported, you try to do things better / a different way, or the MIT Press requirements have changed. That last one is the inspiration for this blog post.

I ended up writing some new tooling to support alttext. Short for alternative text for images, alttext is textual content that can be associated with an image and read out to someone using a screen reader. It wasn’t part of the submission materials when we wrote Algorithms for Optimization v1 and Algorithms for Decision Making, but this time around, MIT Press asked that we supply alttext for every figure in a big spreadsheet. New challenge!

Mykel and I are somewhat different when it comes to being textbook authors. Most authors submit large Word documents with disparate images and let the MIT Press team handle the final text layout. Not us. We provide the final printable PDF.

Our setup is quite nice. It is all under source control, we have a ton of control over how everything looks, and we have everything for the book in one place.

When I saw the ask to supply this additional spreadsheet, I instantly became worried that having a separate sheet could cause problems. That sheet needs to be kept in-sync with the textbook — if any figures are added or removed, we want to make sure they are also added or removed from the sheet. The sheet is also a somewhat inconvenient place to write the alttext. Ideally it would be defined in the LaTeX documents, alongside the figure that it describes. Most importantly, we need to know if we’re missing any alttext.

Storing Tests by Code

We already have some nice technology in our textbook-writing workflow that lets us use the algorithms that we present to the reader to both generate figures and author + execute unit tests.

We present our algorithms using Pythontex and algorithm environments:

\begin{algorithm}
\begin{juliaverbatim}
diff_forward(f, x; h=1e-9) = (f(x+h) - f(x))/h
diff_central(f, x; h=1e-9) = (f(x+h/2) - f(x-h/2))/h
diff_backward(f, x; h=1e-9) = (f(x) - f(x-h))/h
\end{juliaverbatim}
\caption{...}
\end{algorithm}

The juliaverbatim blocks get typeset, but they aren’t executed.

We have a script that parses our source files for algorithm blocks and exports the juliaverbatim contents into a big Julia source file belonging to an Alg4Opt.jl Julia package. We can then load this package when executing Pythontex blocks that do execute, for generating our figures.
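A minimal sketch of what that extraction step might look like (the function name and structure here are mine, not the actual script):

```julia
# Collect the contents of every \begin{envname}...\end{envname} block.
# A hypothetical simplification of the real export script.
function extract_blocks(lines::Vector{String}, envname::String)
    blocks = Vector{Vector{String}}()
    current = nothing
    for line in lines
        if startswith(line, "\\begin{$envname}")
            current = String[]               # start collecting a new block
        elseif startswith(line, "\\end{$envname}")
            current !== nothing && push!(blocks, current)
            current = nothing
        elseif current !== nothing
            push!(current, line)             # inside a block: keep the line
        end
    end
    return blocks
end
```

The same scan works for any environment name, which is why the juliatest blocks below can reuse it.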

We have had unit testing since the beginning. When we first wrote Algorithms for Optimization, we kept the unit tests in a separate directory, written in the test files of the Alg4Opt.jl Julia package we exported to. That worked, but the tests lived in an entirely different place than the methods. Sound like storing alttext somewhere other than the figures?

We ended up defining a no-op LaTeX environment:

\excludecomment{juliatest}

and then added one after every algorithm block:

\begin{juliatest}
let
for (f,x,∂) in [(x->x, 0.0, 1.0),
(x->x, 1.0, 1.0),
(x->x, 1.0, 1.0),
(x->x^2, 0.0, 0.0),
(x->x^2, 1.0, 2.0),
(x->x^2, -1.0,-2.0)]
@test isapprox(diff_forward(f, x), ∂, atol=1e-6)
@test isapprox(diff_central(f, x), ∂, atol=1e-6)
@test isapprox(diff_backward(f, x), ∂, atol=1e-6)
end
end
\end{juliatest}

We then parse the LaTeX source files in the same way we do for the algorithm blocks, and export the contents of any juliatest block as unit tests.

Storing the tests next to the algorithms makes things a lot nicer.

Pulling Alttext

I decided that I could do something very similar for alttext.

I defined a dummy command that, like juliatest, does nothing when compiling the book, but lets us put the alttext content into it:

\newcommand{\alttext}[1]{}

We can then use it in the source code to define the alttext alongside the figure:

\caption{
A one-dimensional optimization problem.
Note that the minimum is merely the best in the feasible set---lower points may exist outside the feasible region.
\label{fig:one-d-opt-prob}
\alttext{A line chart with a single undulating curve and an interval
containing a local minimum identified as the feasible set.}
}

I then wrote a script that runs through our source files, finds all figure and marginfigure blocks, and searches for such a command. If it finds it — great, we can pull out the alttext content and export it to that spreadsheet we need. If not, we can print out a warning that the figure (whose \label ID we also extract) is missing alttext. A nice, simple scripted solution.

Here is the full script:

using Printf

mutable struct FigureEntry
    file_index::Int    # Index into chapter files
    line_index_lo::Int # Index into the chapter's lines at which the \begin resides
    line_index_hi::Int # Index into the chapter's lines at which the \end resides
    label::String      # Figure label, as defined by \label command (or empty)
                       # Figures may not have labels if they are in solutions or examples.
    alttext::String    # Alt text, as given by an \alttext command (or empty)
                       # Every figure is expected to have alttext for the final deliverable.
end

function is_start_of_block(str, block)
    return startswith(str, "\\begin{$block}")
end

function is_end_of_block(str, block)
    return startswith(str, "\\end{$block}")
end

function get_files(; chapter_regex::Regex = r"include\{chapter")
    retval = String[]
    for line in readlines("optimization-chapter.tex")
        if occursin(chapter_regex, line)
            m = match(r"chapter/\S*(?=\})", line)
            @assert isa(m, RegexMatch)
            push!(retval, m.match*".tex")
        end
    end
    return retval
end

function find_matching_paren(str::String, starting_index::Int=something(findfirst(isequal('('), str), 0))
    @assert str[starting_index] == '('
    nopen = 1
    i = starting_index
    n = lastindex(str)
    while nopen > 0 && i < n
        i = nextind(str,i)
        nopen += str[i] == '('
        nopen -= str[i] == ')'
    end
    return nopen == 0 ? i : -1
end

"""
Find the text for a label, such as "fig:gradient_descent_rosenbrock" from
\\label{fig:gradient_descent_rosenbrock}

There should only ever be one \\label entry. In the event that there are multiple,
this method returns the first one.
If no label is found, this method returns an empty string.
"""
function find_label(lines, line_index_lo::Int, line_index_hi::Int)::String
    for line in lines[line_index_lo:line_index_hi]
        m = match(r"\\label\{([a-zA-Z0-9_:\\-]+)\}", line)
        if isa(m, RegexMatch)
            return m[1]
        end
    end
    return ""
end

"""
Find the alttext for a figure, which is contained inside an \\alttext{} command.
There should only ever be one \\alttext entry per figure. In the event that there are multiple,
this method returns the first one.
If no alttext is found, this method returns an empty string.
"""
function find_alttext(lines, line_index_lo::Int, line_index_hi::Int)::String
    for line in lines[line_index_lo:line_index_hi]
        m = match(r"\\alttext\{([^}]+)\}", line)
        if isa(m, RegexMatch)
            return m[1]
        end
    end
    return ""
end

function pull_figures()
    is_start_of_ignore = str -> is_start_of_block(str, "ignore")
    is_start_of_figure = str -> is_start_of_block(str, "figure")
    is_start_of_marginfigure = str -> is_start_of_block(str, "marginfigure")
    is_start_of_relevant_block = str -> is_start_of_figure(str) || is_start_of_marginfigure(str) || is_start_of_ignore(str)

    figures = FigureEntry[]
    for (file_index, filepath) in enumerate(get_files())
        filename = splitext(splitdir(filepath)[2])[1]

        println("\treading ", filename)
        lines = [replace(line, "\n"=>"") for line in open(readlines, filepath, "r")]

        counter = 0
        i = something(findfirst(is_start_of_relevant_block, lines), 0)
        while i != 0
            block = is_start_of_ignore(lines[i]) ? "ignore" :
                    is_start_of_figure(lines[i]) ? "figure" :
                                                   "marginfigure"
            j = findnext(str -> is_end_of_block(str, block), lines, i+1)

            if block != "ignore"
                label = find_label(lines, i, j)
                alttext = find_alttext(lines, i, j)
                push!(figures, FigureEntry(file_index, i, j, label, alttext))
            end

            i = something(findnext(is_start_of_relevant_block, lines, j), 0)
        end
    end
    return figures
end

# Find all figure and marginfigure blocks
println("Pulling all figures")
figures = pull_figures()
for (i_figure, figure) in enumerate(figures)
    @printf "%3d %2d [%04d:%04d] %s\n" i_figure figure.file_index figure.line_index_lo figure.line_index_hi figure.label
    println("      $(figure.alttext)")
end

n_figures_missing_alttext = sum(fig.alttext == "" for fig in figures)
if n_figures_missing_alttext > 0
    println("MISSING ALT TEXT!")
    files = get_files()
    for (i_figure, figure) in enumerate(figures)
        if figure.alttext == ""
            label_text = figure.label == "" ? "UNLABELED" : figure.label
            @printf "%2d %s in %s\n" i_figure label_text files[figure.file_index]
        end
    end
end

println("")
println("$(length(figures) - n_figures_missing_alttext) / $(length(figures)) figures have alttext")
println("Good job!")

The Joy of Coding your own Tools

That’s what this blog post is really about. The fact that you can dig in and code your own solution. We spend so much time coding for big company projects, that it is easy to forget that we can code small, useful things for ourselves.

The coding we did here is not particularly clever, nor particularly difficult, nor particularly large. That isn’t the point. The point is that we had a problem, and we were able to solve it ourselves with software. Our tools of the trade were brought to bear on our own problem.

I don’t often use coding to solve my own problems, but it does happen every so often. I used coding to create placecards for my wedding, for example, and to create the wedding website. I’ve written code to generate .svg files for CNC laser cutters, in order to craft a loved one a nice birthday present. In high school, I wrote a basic notecard program for practicing my French vocab. That one was super useful.

I am a big fan of Casey Muratori of Handmade Hero (and Computer, Enhance!), which gave rise to the handmade movement. The ideas there are very similar — there is joy to be had from building things yourself, and you are smart enough to dive into something and learn how it works.

Anyhow, I think its nice to be reminded of all this from time to time. Happy coding!

Emscripten is Neat

I spent most of December on vacation, so no big post. However, I did learn about Emscripten, a C++ compiler that produces an executable in WebAssembly. I tried it out and have to say, it was remarkably easy to get something up and running.

I made a very simple crossword game. You can try it out here.

Best on desktop. While it does load in the browser, it is looking for key events, which is really cumbersome on mobile.

This was written with SDL2 and Dear ImGui.

Compilation is pretty straightforward. Instead of running gcc <srcs> <libs> you run emcc <srcs> <libs>. The compiler does some magic to reinterpret use of OpenGL with WebGL, and spits out a .wasm binary blob, a .js script that can execute it, and a .html page that runs it all. My game link above simply points to that .html page.

To test it locally without uploading it to a web page, you can simply start a Python server in your dev directory:

python -m http.server

and then open your .html file in the browser:

http://localhost:8000/your_thing.html

I spend a lot of time tinkering with games and other programs in C++, but I’m developing on a Linux machine. As a result, I can’t really share my games with my friends. Emscripten might be a nice way to achieve that.

Happy 2025 everyone. Hope it’s a good one.

Rollouts on the GPU

Last month I wrote about moving the Sokoban policy training code from CPU to GPU, yielding massive speedups. That significantly shortened both training time and the time it takes to compute basic validation metrics. It has not, unfortunately, significantly changed how long it takes to run rollouts, and relatedly, how long it takes to run beam search.

The Bottleneck

The training that I’ve done so far has all been with teacher forcing, which allows all inputs to be passed to the net at once:

When we do a rollout, we can’t pass everything in at once. We start with our initial state and use the policy to discover where we end up:

The problem is that the left-side of that image, the policy call, is happening on the GPU, but the right side, the state advancement, is happening on the CPU. If a rollout involves 62 player steps, then instead of one data transfer step like we have for training, we’re doing 61 transfers! Our bottleneck is all that back-and-forth communication:

Let’s move everything to the GPU.

CPU Code

So what is currently happening on the CPU?

At every state, we are:

  1. Sampling an action for each board from the action logits
  2. Applying that action to each board to advance the state

Sampling from the actions is pretty straightforward to run on the GPU. That’s the bread and butter of transformers and RL in general.

# policy_logits are [a×s×b] (a=actions, s=sequence length, b = batch size)
policy_logits, nsteps_logits = policy(inputs)

# Sample from the logits using the Gumbel-max trick
sampled_actions = argmax(policy_logits .+ gumbel_noise, dims=1)

where we use the Gumbel-max trick, and the Gumbel noise is sampled in advance and passed to the GPU like the other inputs:

using Distributions
gumbel_noise = rand(Gumbel(0, 1), (a, s, b))
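As a sanity check, here is a self-contained sketch showing that the Gumbel-max trick really does sample from the softmax distribution. It draws Gumbel(0, 1) noise via the inverse CDF, \(g = -\log(-\log u)\), so Distributions isn't needed for this toy version:

```julia
using Random

# argmax(logits .+ g), with g ~ Gumbel(0,1), samples exactly from softmax(logits).
Random.seed!(0)
logits = [1.0, 0.0, -1.0]
softmax(z) = exp.(z .- maximum(z)) ./ sum(exp.(z .- maximum(z)))

n = 100_000
counts = zeros(Int, 3)
for _ in 1:n
    g = -log.(-log.(rand(3)))          # Gumbel(0,1) via inverse CDF
    counts[argmax(logits .+ g)] += 1
end
freqs = counts ./ n                     # empirical frequencies ≈ softmax(logits)
```

The payoff on the GPU is that sampling reduces to an elementwise add plus an argmax — no cumulative sums or branching required.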

Advancing the board states is more complicated. Here is the CPU method for a single state:

function maybe_move!(board::Board, dir::Direction)::Bool
    □_player::TileValue=find_player_tile(board)
    step_fore = get_step_fore(board, dir)

    □ = □_player # where the player starts
    ▩ = □ + step_fore # where the player potentially ends up

    if is_set(board[▩], WALL)
        return false # We would be walking into a wall
    end

    if is_set(board[▩], BOX)
        # We would be walking into a box.
        # This is only a legal move if we can push the box.
        ◰ = ▩ + step_fore # where box ends up
        if is_set(board[◰],  WALL + BOX)
            return false # We would be pushing the box into a box or wall
        end

        # Move the box
        board[▩] &= ~BOX # Clear the box
        board[◰] |= BOX # Add the box
    end

    # At this point we have established this as a legal move.
    # Finish by moving the player
    board[□] &= ~PLAYER # Clear the player
    board[▩] |= PLAYER # Add the player

    return true
end

There are many ways to represent board states. This representation is a simple Matrix{UInt8}, so an 8×8 board is just an 8×8 matrix. Each tile is a bitfield with components that can be set for whether that tile has or is a wall, box, floor, or player.
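For concreteness, here is a sketch of such a bitfield scheme. The actual bit assignments below are my guess, not necessarily the ones used in the real code:

```julia
# Hypothetical bit assignments for the UInt8 tile bitfield.
const WALL   = 0x01
const BOX    = 0x02
const PLAYER = 0x04
const FLOOR  = 0x08

# True if any of the given flag bits are set, which is why the CPU code can
# pass WALL + BOX to check for either.
is_set(tile::UInt8, flag) = (tile & flag) != 0

tile = FLOOR | BOX           # a floor tile with a box on it
tile_cleared = tile & ~BOX   # the box gets pushed away: clear the BOX bit
```

The `&= ~BOX` / `|= BOX` operations in maybe_move! are exactly this clear/set pattern applied in place.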

Moving the player has 3 possible paths:

  • successful step: the destination tile is empty and we just move the player to it
  • successful push: the destination tile has a box, and the next one over is empty, so we move both the player and the box
  • failed move: otherwise, this is an illegal move and the player stays where they are

Moving this logic to the GPU has to preserve this flow, use the GPU’s representation of the board state, and handle a tensor’s worth of board states at a time.

GPU Representation

The input to the policy is a tensor of size \([h \times w \times f \times s \times b]\), where 1 board is encoded as a sparse \((h = \text{height}) \times (w=\text{width}) \times (f = \text{num features} = 5)\) tensor:

and we have board sequences of length \(s\), with \(b\) such sequences per batch:

I purposely chose 4-step boards here, but sequences can generally be much longer and of different lengths, and the first state in each sequence is the goal state.

Our actions will be the \([4\times s \times b]\) actions tensor — one up/down/left/right action per board state.

Shifting Tensors

The first fundamental operation we’re going to need is to be able to check tile neighbors. That is, instead of doing this:

□ = □_player # where the player starts
▩ = □ + step_fore # where the player potentially ends up

we’ll be shifting all tiles over and checking that instead:

is_player_dests = shift_tensor(is_players, 0, 1, false)

The shift_tensor method takes in a tensor and shifts it by the given number of rows and columns, padding in new values:

We pass in the number of rows or columns to shift, figure out what that means in terms of padding, and then leverage NNlib’s pad_constant method to give us a new tensor that we clamp to a new range:

function shift_tensor(
    tensor::AbstractArray,
    d_row::Integer,
    d_col::Integer,
    pad_value)

    pad_up    = max( d_row, 0)
    pad_down  = max(-d_row, 0)
    pad_left  = max( d_col, 0)
    pad_right = max(-d_col, 0)

    tensor_padded = NNlib.pad_constant(
        tensor,
        (pad_up, pad_down, pad_left, pad_right, 
            (0 for i in 1:2*(ndims(tensor)-2))...),
        pad_value)

    dims = size(tensor_padded)
    row_lo = 1 + pad_down
    row_hi = dims[1] - pad_up
    col_lo = 1 + pad_right
    col_hi = dims[2] - pad_left

    return tensor_padded[row_lo:row_hi, col_lo:col_hi,
                         (Colon() for d in dims[3:end])...]
end

This method works on tensors with varying numbers of dimensions, and always operates on the first two dimensions as the row and column dimensions.
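To make the convention concrete, here is a pure-Julia 2-D sketch of the same semantics without NNlib: positive d_row moves content toward higher row indices, and vacated cells are filled with the pad value. This is an illustrative reimplementation, not the production code.

```julia
# 2-D shift-with-padding: destination (i, j) reads from source (i-d_row, j-d_col).
function shift_matrix(A::AbstractMatrix, d_row::Int, d_col::Int, pad_value)
    B = fill(pad_value, size(A))
    h, w = size(A)
    for j in 1:w, i in 1:h
        si, sj = i - d_row, j - d_col     # source cell for destination (i, j)
        if 1 <= si <= h && 1 <= sj <= w
            B[i, j] = A[si, sj]
        end
    end
    return B
end

A = [1 2; 3 4]
shift_matrix(A, 0, 1, 0)   # shift right by one column → [0 1; 0 3]
```

The NNlib version achieves the same thing by padding on one side and slicing off the other, which keeps everything as one GPU-friendly tensor operation instead of a scalar loop.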

Taking Actions

If we know the player move, we can use the appropriate shift direction to get the “next tile over”. Our player moves can be reflected by the following row and column shift values:

UP = (d_row=-1, d_col= 0)
LEFT = (d_row= 0, d_col=-1)
DOWN = (d_row=+1, d_col= 0)
RIGHT = (d_row= 0, d_col=+1)

This lets us convert the CPU-movement code into a bunch of Boolean tensor operations:

function advance_boards(
    inputs::AbstractArray{Bool}, # [h,w,f,s,b]
    d_row::Integer,
    d_col::Integer)

    boxes  = inputs[:,:,DIM_BOX,   :,:]
    player = inputs[:,:,DIM_PLAYER,:,:]
    walls  = inputs[:,:,DIM_WALL,  :,:]

    player_shifted = shift_tensor(player, d_row, d_col, false)
    player_2_shift = shift_tensor(player_shifted, d_row, d_col, false)

    # A move is valid if the player destination is empty
    # or if its a box and the next space over is empty
    not_box_or_wall = .!(boxes .| walls)

    # 1 if it is a valid player destination tile for a basic player move
    move_space_empty = player_shifted .& not_box_or_wall

    # 1 if the tile is a player destination tile containing a box
    move_space_isbox = player_shifted .& boxes

    # 1 if the tile is a player destination tile whose next one over
    # is a valid box push receptor
    push_space_empty = player_shifted .& shift_tensor(not_box_or_wall, -d_row, -d_col, false)

    # 1 if it is a valid player move destination
    move_mask = move_space_empty

    # 1 if it is a valid player push destination
    # (which also means it currently has a box)
    push_mask = move_space_isbox .& push_space_empty

    # new player location
    mask = move_mask .| push_mask
    player_new = mask .| (player .* shift_tensor(.!mask, -d_row, -d_col, false))

    # new box location
    box_destinations = shift_tensor(boxes .* push_mask, d_row, d_col, false)
    boxes_new = (boxes .* (.!push_mask)) .| box_destinations

    return player_new, boxes_new
end

The method appropriately moves any player tile that has an open space in the neighboring tile, or any player tile that has a neighboring pushable box. We create both a new player tensor and a new box tensor.

This may seem extremely computationally expensive — we’re operating on all tiles rather than on just the ones we care about. But GPUs are really good at exactly this, and it is much cheaper to let the GPU churn through that than wait for the transfer to/from the CPU.
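The masking logic can be sketched for a single 2D board in NumPy (a simplified, single-board sketch of the same idea; the `shift` helper mirrors `shift_tensor`, and I assume walls border the board so players never shift off the edge):

```python
import numpy as np

def shift(a, dr, dc):
    """Shift a 2D boolean array by (dr, dc), filling vacated cells with False."""
    h, w = a.shape
    out = np.zeros_like(a)
    out[max(0, dr):h + min(0, dr), max(0, dc):w + min(0, dc)] = \
        a[max(0, -dr):h + min(0, -dr), max(0, -dc):w + min(0, -dc)]
    return out

def advance(player, boxes, walls, dr, dc):
    """Apply one move to boolean player/box/wall masks."""
    empty = ~(boxes | walls)
    dest = shift(player, dr, dc)                       # tiles the player would step onto
    move_mask = dest & empty                           # destination empty: plain move
    push_mask = dest & boxes & shift(empty, -dr, -dc)  # destination is a box with an empty tile beyond
    mask = move_mask | push_mask                       # tiles the player will occupy
    player_new = mask | (player & shift(~mask, -dr, -dc))  # moved players plus blocked players
    boxes_new = (boxes & ~push_mask) | shift(boxes & push_mask, dr, dc)
    return player_new, boxes_new

# A 3x5 board: walls all around, player at (1,1), box at (1,2), open tile at (1,3)
walls = np.ones((3, 5), dtype=bool)
walls[1, 1:4] = False
player = np.zeros_like(walls); player[1, 1] = True
boxes = np.zeros_like(walls); boxes[1, 2] = True

p2, b2 = advance(player, boxes, walls, 0, 1)  # push the box right
p3, b3 = advance(p2, b2, walls, 0, 1)         # box now against the wall: move is blocked
```

After the first move the player is at (1,2) and the box at (1,3); the second move is rejected by the masks and nothing changes.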

The main complication here is that we’re using the same action across all boards. In a given instance, there are \(s\times b\) boards in our tensor. We don’t want to be using the same action in all of them.

Instead of sharding different actions to different boards, we’ll compute the results of all 4 actions and then index into the resulting state that we need:

Working with GPUs sure makes you think differently about things.

function advance_boards(
    inputs::AbstractArray{Bool}, # [h,w,f,s,b]
    actions::AbstractArray{Int}) #       [s,b]

    succ_u = advance_boards(inputs, -1,  0) # [h,w,s,b], [h,w,s,b]
    succ_l = advance_boards(inputs,  0, -1)
    succ_d = advance_boards(inputs,  1,  0)
    succ_r = advance_boards(inputs,  0,  1)

    size_u = size(succ_u[1])
    target_dims = (size_u[1], size_u[2], 1, size_u[3:end]...)
    player_news = cat(
        reshape(succ_u[1], target_dims),
        reshape(succ_l[1], target_dims),
        reshape(succ_d[1], target_dims),
        reshape(succ_r[1], target_dims), dims=3) # [h,w,a,s,b]
    box_news = cat(
        reshape(succ_u[2], target_dims),
        reshape(succ_l[2], target_dims),
        reshape(succ_d[2], target_dims),
        reshape(succ_r[2], target_dims), dims=3) # [h,w,a,s,b]

    actions_onehot = onehotbatch(actions, 1:4) # [a,s,b]
    actions_onehot = reshape(actions_onehot, (1,1,size(actions_onehot)...)) # [1,1,a,s,b]

    boxes_new = any(actions_onehot .& box_news, dims=3)
    player_new = any(actions_onehot .& player_news, dims=3)

    return cat(inputs[:,:,1:3,:,:], boxes_new, player_new, dims=3)
end
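The one-hot gather over the action dimension can be sketched in NumPy (my own illustration with made-up shapes; `np.eye(...)[actions]` plays the role of `onehotbatch`, and the `any` reduction selects the chosen successor):

```python
import numpy as np

rng = np.random.default_rng(0)
num_actions, num_features, num_boards = 4, 5, 3

# All four successors, stacked along a leading action dimension
succ = rng.random((num_actions, num_features, num_boards)) > 0.5
actions = np.array([2, 0, 3])  # chosen action per board (0-indexed)

# One-hot over the action dimension, broadcast against succ, then reduce
onehot = np.eye(num_actions, dtype=bool)[actions].T        # [action, board]
selected = (succ & onehot[:, None, :]).any(axis=0)         # [feature, board]
```

The reduction is equivalent to directly indexing each board's chosen action, but it stays a dense elementwise tensor operation, which is exactly what the GPU wants.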

We’re almost there. This computes successor boards, but leaves each result at the sequence index it came from. To get the new inputs tensor, we want to shift our boards in the sequence dimension, propagating successor boards to the next sequence index. However, we can’t just shift the entire tensor. We want to keep the goals and the initial states:

The code for this amounts to a cat operation and some indexing:

function advance_board_inputs(
    inputs::AbstractArray{Bool}, # [h,w,f,s,b]
    actions::AbstractArray{Int}) #       [s,b]

    inputs_new = advance_boards(inputs, actions)

    # Right shift and keep the goal and starting state
    return cat(inputs[:, :, :, 1:2, :],
               inputs_new[:, :, :, 2:end-1, :], dims=4) # [h,w,f,s,b]
end

And with that, we’re processing actions across entire batches!

Rollouts on the GPU

We can leverage this new propagation code to propagate our inputs tensor during a rollout. The policy and the inputs have to be on the GPU, which in Flux.jl can be done with gpu(policy). Note that this requires a CUDA-compatible GPU.

A single iteration is then:

# Run the model
# policy_logits are [4 × s × b]
# nsteps_logits are [7 × s × b]
policy_logits_gpu, nsteps_logits_gpu = policy0(inputs_gpu)

# Sample from the action logits using the Gumbel-max trick
actions_gpu = argmax(policy_logits_gpu .+ gumbel_noise_gpu, dims=1)
actions_gpu = getindex.(actions_gpu, 1) # Int64[1 × s × b]
actions_gpu = dropdims(actions_gpu, dims=1) # Int64[s × b]

# Apply the actions
inputs_gpu = advance_board_inputs(inputs_gpu, actions_gpu)
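The Gumbel-max trick works because adding i.i.d. Gumbel noise to the logits and taking the argmax yields exact samples from the softmax distribution, without ever normalizing. A quick NumPy demonstration (my own sketch, not the post's code):

```python
import numpy as np

rng = np.random.default_rng(1)
logits = np.array([1.0, 0.5, -0.2, 0.1])

def sample_gumbel_max(logits, rng):
    # Gumbel noise: -log(-log(U)) for U ~ Uniform(0, 1)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return int(np.argmax(logits + gumbel))

samples = [sample_gumbel_max(logits, rng) for _ in range(100_000)]
freqs = np.bincount(samples, minlength=4) / len(samples)
softmax = np.exp(logits) / np.exp(logits).sum()
# freqs should closely match softmax
```

This is handy on the GPU because argmax over pre-generated noise is cheap and fully parallel across the batch.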

The overall rollout code just throws this into a loop and does some setup:

function rollouts!(
    inputs::Array{Bool, 5},      # [h×w×f×s×b]
    gumbel_noise::Array{Float32, 3}, # [4×s×b]
    policy0::SokobanPolicyLevel0,
    s_starts::Vector{Board}, # [b]
    s_goals::Vector{Board}) # [b]

    policy0 = gpu(policy0)

    h, w, f, s, b = size(inputs)

    @assert length(s_starts) == b
    @assert length(s_goals) == b

    # Fill the goals into the first sequence channel
    for (bi, s_goal) in enumerate(s_goals)
        set_board_input!(inputs, s_goal, 1, bi)
    end

    # Fill the start states in the second sequence channel
    for (bi, s_start) in enumerate(s_starts)
        set_board_input!(inputs, s_start, 2, bi)
    end

    inputs_gpu = gpu(inputs)
    gumbel_noise_gpu = gpu(gumbel_noise)

    for si in 2:s-1

        # Run the model
        # policy_logits are [4 × s × b]
        # nsteps_logits are [7 × s × b]
        policy_logits_gpu, nsteps_logits_gpu = policy0(inputs_gpu)

        # Sample from the action logits using the Gumbel-max trick
        actions_gpu = argmax(policy_logits_gpu .+ gumbel_noise_gpu, dims=1)
        actions_gpu = getindex.(actions_gpu, 1) # Int64[1 × s × b]
        actions_gpu = dropdims(actions_gpu, dims=1) # Int64[s × b]

        # Apply the actions
        inputs_gpu = advance_board_inputs(inputs_gpu, actions_gpu)
    end

    return cpu(inputs_gpu)
end

There are several differences:

  1. The code is simpler. We only have a single loop, over the sequence length (number of steps to take). The content of that loop is pretty compact.
  2. The code does more work. We’re processing more stuff, but because it happens in parallel on the GPU, it’s okay. We’re also propagating all the way to the end of the sequence whether we need to or not. (The CPU code would check whether all boards had finished already).

If we time how long it takes to do a batch’s worth of rollouts before and after moving to the GPU, we get about a \(60\times\) speedup. Our efforts have been worth it!

Beam Search on the GPU

Rollouts aren’t the only thing we want to speed up. I want to use beam search to explore the space using the policy and try to find solutions. Rollouts might happen to find solutions, but beam search should be a lot better.

The code ends up being basically the same, except a single goal and board is used to seed the entire batch (giving us a number of beams equal to the batch size), and we have to do some work to score the beams and then select which ones to keep:

function beam_search!(
    inputs::Array{Bool, 5},      # [h×w×f×s×b]
    policy0::SokobanPolicyLevel0,
    s_start::Board,
    s_goal::Board)

    policy0 = gpu(policy0)

    h, w, f, s, b = size(inputs)

    # Fill the goals and starting states into the first sequence channel
    for bi in 1:b
        set_board_input!(inputs, s_goal, 1, bi)
        set_board_input!(inputs, s_start, 2, bi)
    end

    # The scores all start at zero
    beam_scores = zeros(Float32, 1, b) |> gpu # [1, b]

    # Keep track of the actual actions
    actions = ones(Int, s, b) |> gpu # [s, b]

    inputs_gpu = gpu(inputs)

    # Advance the games in parallel
    for si in 2:s-1

        # Run the model
        # policy_logits are [4 × s × b]
        # nsteps_logits are [7 × s × b]
        policy_logits, nsteps_logits = policy0(inputs_gpu)

        # Compute the probabilities
        action_probs = softmax(policy_logits, dims=1) # [4 × s × b]
        action_logls = log.(action_probs) # [4 × s × b]

        # The beam scores are the running log likelihoods
        action_logls_si = action_logls[:, si, :]  # [4, b]
        candidate_beam_scores = action_logls_si .+ beam_scores # [4, b]
        candidate_beam_scores_flat = vec(candidate_beam_scores) # [4b]

        # Get the top 'b' beams
        topk_indices = partialsortperm(candidate_beam_scores_flat, 1:b; rev=true)

        # Convert flat indices back to action and beam indices
        # Julia's vec flattens column-major, so flat index = action + 4*(beam - 1)
        selected_actions = (topk_indices .- 1) .% 4 .+ 1  # [b] action indices (1 to 4)
        selected_beams   = (topk_indices .- 1) .÷ 4 .+ 1  # [b] beam indices (1 to b)
        selected_scores  = candidate_beam_scores_flat[topk_indices]  # [b]
        beam_scores = reshape(selected_scores, 1, b)  # carry the scores forward
        inputs_gpu = inputs_gpu[:,:,:,:,selected_beams]

        # Reorder the action history to match the surviving beams
        actions = actions[:, selected_beams]
        actions[si,:] = selected_actions

        # Apply the actions to the selected beams
        inputs_gpu = advance_board_inputs(inputs_gpu, actions)
    end

    return (cpu(inputs_gpu), cpu(actions))
end

This again results in what looks like way simpler code. The beam scoring and such is all done on tensors, rather than a bunch of additional for loops. It all happens on the GPU, and it is way faster (\(23\times\)).
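One subtlety worth calling out is the flat-index arithmetic, which depends on memory layout. Here is a single beam-selection step sketched in NumPy (my own illustration; NumPy flattens row-major, so the action is the quotient here, whereas Julia's column-major `vec` makes the action the remainder):

```python
import numpy as np

b = 3  # number of beams
scores = np.array([[0.1, 0.7, 0.2],   # [4 actions x b beams]
                   [0.4, 0.3, 0.9],
                   [0.5, 0.6, 0.8],
                   [0.2, 0.1, 0.0]])

flat = scores.ravel()                  # row-major: flat index = action * b + beam
topk = np.argsort(flat)[::-1][:b]      # top-b flat indices
sel_actions = topk // b                # recover the action index
sel_beams = topk % b                   # recover the surviving beam index
```

Getting this quotient/remainder pairing wrong silently scrambles which beam each action is applied to, so it is worth unit-testing in whichever framework you use.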

Conclusion

The previous blog post was about leveraging the GPU during training. This blog post was about leveraging the GPU during inference. We had to avoid expensive data transfers between the CPU and the GPU, and to achieve that had to convert non-trivial player movement code to computations amenable to the GPU. Going about that meant thinking about and structuring our code very differently, working across tensors and creating more work that the GPU could nonetheless complete faster.

This post was a great example of how code changes based on the scale you’re operating at. Peter van Hardenberg gives a great talk about similar concepts in Why Can’t We Make Simple Software?. How you think about a problem changes a lot based on problem scale and hardware. Now that we’re graduating from the CPU to processing many many boards, we have to think about the problem differently.

Our inference code has been GPU-ized, so we can leverage it to speed up validation and solution search. It was taking me 20 min to train a network but 30 min to run beam search on all boards in my validation set. This change avoids that sorry state of affairs.

Tuning a Sokoban Policy Net

In a previous post, I had covered a project in which I was trying to get a neural network to learn to play Sokoban. I trained a transformer that operated on encoded board positions and predicted both the next action and how many steps were left until a solution would be reached:

My training was all done with Flux.jl and my own transformer implementation, not because it was more efficient or anything, but because I like to learn by doing it myself.

This training all happened on my little personal laptop. I love my laptop, but it is not particularly beefy, and it does not have much of a GPU to speak of. Training a fairly simple transformer was taking a long time — around 8 hours — which is pretty long. That doesn’t leave much room for experimentation. I knew that all this training was happening on the CPU, and the best way to make it faster would be to move to the GPU.

Flux makes moving to the GPU incredibly easy:

using CUDA
policy = TransformerPolicy(
    batch_size = BATCH_SIZE,
    max_seq_len = MAX_SEQ_LEN,
    encoding_dim = ENCODING_DIM,
    dropout_prob=dropout_prob,
    no_past_info=no_past_info) |> gpu

That’s it – you just use the CUDA package and pipe your model to the GPU.

I tried this, and… it didn’t work. Unfortunately my little laptop does not have a CUDA-capable GPU.

After going through a saga of trying to get access to GPUs in AWS, I put the project to the side. There it sat, waiting for me to eventually pick it back up again whenever I ultimately decided to get a machine with a proper GPU or try to wrangle AWS again.

Then, one fateful day, I happened to be doing some cleaning and noticed an older laptop that I no longer used. Said laptop is bigger, and, just perhaps, it had a CUDA-capable GPU. I booted it up, and lo and behold, it did. Not a particularly fancy graphics card, but CUDA-capable nonetheless.

This post is about how I used said GPU to train my Sokoban models, and how I then set up a task system in order to run a variety of parameterizations.

Model Training with a GPU

I switched my training code to move as much stuff as possible over to the GPU. After some elbow grease, I kicked off the same training run as before and found that it ran in about 15 min – a 32× speed increase.

The next thing I tried was increasing the encoding from a length of 16 to 32. On the CPU, such an increase would at least double the training time. On the GPU, the training time remained the same.

How could that be?

Simply put, the GPU is really fast at crunching numbers, and the training runtime is dominated by using the CPU to unpack our training examples, sending data to and from the GPU, and running the gradient update on the CPU. In this case there seems to be a free lunch!

Here is a simple depiction of what was happening before (top timeline) vs. what is happening now:

We pay the upfront cost of sending the model to the GPU, but then can more efficiently shovel data into it to get results faster than computing them ourselves. There is nothing magical happening here, just literally passing data there, having it crunch it, and receiving it once it’s done.

Parameter Tuning

Now that we have faster training, it is nice to look at how various training and model parameters affect our metrics. Good parameters can easily make or break a deep learning project.

Our model parameters are:

  • board dimension – always \(8 \times 8\)
  • encoding dimension – the size of the transformer embedding
  • maximum sequence length – how long of a sequence the transformer can handle (always 64)
  • number of transformer layers

Our training parameters are:

  • learning rate – affects the size of the steps that our AdamW optimizer takes
  • AdamW 1st and 2nd momentum decay
  • AdamW weight decay
  • batch size – the number of training samples per optimization step
  • number of training batches – the number of optimization steps
  • number of training entries – the size of our training set
  • dropout probability – certain layers in the transformer have a chance of randomly dropping outputs for robustness purposes
  • gradient threshold – the gradient is clipped to this value to improve stability

That’s a lot of parameters. How are we going to go about figuring out the best settings?

The tried and true method that happens in practice is try-and-see, where humans just try things they think are reasonable and see how the model performs. That’s what I was doing originally, when each training run took 8 hours. While that’s okay, it’d be nice to do better.

The next simplest approach is grid search. Here we discretize all parameters and train a model on every possible parameter combination. In 2 dimensions this ends up looking like a grid, hence the name:

We have about 10 parameters. Even if we only consider 2 values for each of them, doing a grid search over them all would require training \(2^{10} = 1024\) models, which at 15 min per model is ~10.7 days. That’s both too long and pretty useless – we want higher granularity than that.

With grid search out, the next alternative is to conduct local searches for specific parameters. We already have a training parameterization that works pretty well, the one from the last blog post, and we can vary a single parameter and see how that affects training. That’s much cheaper – just the cost of the number of training points we’d like to evaluate per parameter. If we want to evaluate 5 values per parameter, that’s just \(5 \cdot 10 = 50\) models, or ~12.5 hours of training time. I could kick that off and come back to look at it the next day.
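The budget arithmetic above, spelled out as a quick check:

```python
minutes_per_run = 15

# Grid search: 2 values for each of 10 parameters, all combinations
grid_runs = 2 ** 10
grid_days = grid_runs * minutes_per_run / 60 / 24

# One-at-a-time sweep: 5 values for each of 10 parameters
sweep_runs = 5 * 10
sweep_hours = sweep_runs * minutes_per_run / 60

print(grid_runs, round(grid_days, 1))   # 1024 runs, ~10.7 days
print(sweep_runs, sweep_hours)          # 50 runs, 12.5 hours
```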

What I just proposed is very similar to cyclic coordinate search or coordinate descent, which is an optimization approach that optimizes one input at a time. It is quite simple, and actually works quite well. In fact, Sebastian Thrun himself has expressed his enthusiasm for this method to me.

There’s a whole realm of more complicated sampling strategies that could be followed. I rather like uniform projection plans and quasi-random space-filling sets like the Halton sequence. They don’t take that much effort to set up and do their best to fill the search space with a limited number of samples.

The method I ultimately leaned on was a mixture of coordinate search and random sampling. Random sampling is what Andrej Karpathy recommended in CS231n, because it lets you evaluate far more independent values for specific parameters than something like grid search:

Here we see about 1/3 as many samples as grid search covering way more unique values for param 1.

Okay, so we want a way to run random search and perhaps some other, more targeted search approaches. How would we go about doing that?

Tasks

In the next phase of my project I want to expand from just training a transformer policy to predict player up/down/left/right moves to more complicated models that may interact with this simpler model. I also want to use my models to discover solutions to random problems, and perhaps refine previously discovered solutions with better models. I thus don’t want to merely support training this one model, I want to be able to run more general tasks.

function run_tasks()

    done = false
    while !done
        
        task_file = get_next_task()
        if !isa(task_file, String)
            println("No tasks left!")
            done = true
            break
        end

        task_filepath = joinpath(FOLDER_TODO, task_file::String)
        res = run_task(task_filepath)
        dest_folder = res.succeeded ? FOLDER_DONE : FOLDER_TRIED

        # name the task according to the time
        task_dst_name = Dates.format(Dates.now(), "yyyymmdd_HHMMss") * ".task"
        mv(task_filepath, joinpath(dest_folder, task_dst_name))
        write_out_result(res, task_dst_name, dest_folder)
        println(res.succeeded ? "SUCCEEDED" : "FAILED")
    end
end

The task runner simply looks for its next task in the TODO folder, executes it, and when it is done either moves it to the DONE folder or the TRIED folder. It then writes out any task output that it captured (which can contain errors that are useful for debugging failed tasks).

The task runner is its own Julia process, and it spawns a new Julia process for every task. This helps ensure that issues in a task don’t pollute other tasks. I don’t want an early segfault to waste an entire night’s worth of training time.

The task files are simply Julia files that I load and prepend with a common header:

function run_task(task_filepath::AbstractString)
    res = TaskResult(false, "", time(), NaN)

    try
        content = read(task_filepath, String)

        temp_file, temp_io = mktemp()
        write(temp_io, TASK_HEADER)
        write(temp_io, "\n")
        write(temp_io, content)
        close(temp_io)

        output = read(`julia -q $(temp_file)`, String)

        res.succeeded = true
        res.message = output
    catch e
        # We failed to run the task; record the error
        res.succeeded = false
        res.message = string(e)
    end

    res.t_elapsed = time() - res.t_start

    return res
end

This setup is quite nice. I can drop new task files in and the task runner will just dutifully run them as soon as it’s done with whatever came before. I can inspect the TRIED folder for failed tasks and look at the output for what went wrong.

Results

I ran a bunch of training runs and then loaded and plotted the results to get some insight. Let’s take a look and see if we learn anything.

We’ve got a bunch of metrics, but I’m primarily concerned with the top-2 policy accuracy and the top-2 nsteps accuracy. Both of these measure how often the policy had the correct action (policy accuracy) or number of steps remaining (nsteps accuracy) in its top-2 most likely predictions. The bigger these numbers are the better, with optimal performance being 1.0.

First let’s look at the learning rate, the main parameter that everyone typically has to futz with. First, the top-2 policy accuracy:

We immediately see a clear division between training runs with terrible accuracies (50%) and training runs with reasonable performance. This tells us that some of our model training runs did pretty poorly. That’s good to know – the parameters very much matter and can totally derail training.

Let’s zoom in on the good results:

We don’t see a super-clear trend in learning rate. The best policy accuracy was obtained with a learning rate around 5e-4, but that one sample is somewhat of an outlier.

The nsteps accuracy also shows the bad models, so we’ll jump straight to the zoomed version:

Interestingly, the same learning rate of 5e-4 produces the best nsteps accuracy as well, which is nice for us. Also, the overall spread here tends to prefer larger learning rates, with values down at 1e-4 trending markedly worse.

Next let’s look at the dropout probability. Large enough values are sure to tank training performance, but when does that start happening?

We don’t really have great coverage on the upper end, but based on the samples here it seems that a dropout probability of about 0.01 (or 1%) performs best. The nsteps accuracy shows a similar result.

Next let’s look at the weight decay.

We’ve found our performance-tanking culprit! Weight decay values even moderately larger than zero appear to be highly correlated with terrible performance. It seems to drag the model down and prevent learning. Very small weight decay values appear to be fine, so we’ll have to be careful to just search those.

This is an important aspect of parameter tuning – parameters like the learning rate or weight decay can take on rather small values like 1e-4. It’s often more about finding the right exponent than finding the right decimal value.
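One way to act on that is to sample such parameters log-uniformly, i.e., uniformly in the exponent, so each decade gets equal coverage. A small sketch (the range endpoints are illustrative, not the post's actual search bounds):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_log_uniform(lo, hi, n, rng):
    """Sample uniformly in the exponent between lo and hi."""
    return 10.0 ** rng.uniform(np.log10(lo), np.log10(hi), size=n)

learning_rates = sample_log_uniform(1e-5, 1e-2, 1000, rng)
# Roughly a third of the samples land in each of the three decades
```

A plain uniform sample over [1e-5, 1e-2] would put ~90% of the samples above 1e-3 and barely probe the small values at all.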

With those learning parameters out of the way, let’s look at some model parameters. First, the encoding dimension:

Naively we would expect that bigger encoding dimensions would be better given infinite training examples and compute, but those are finite. We didn’t exhaustively evaluate larger encoding dimensions, but find that the nsteps prediction doesn’t benefit all that much from going from 32 to 64 entries, whereas the policy does.

We can also look at the number of transformer layers:

Having more layers means we have a bigger model, with more room to perform operations on our token embeddings as they pass through the model. Bigger is often better, but is ultimately constrained by our training data and compute.

In this case the nsteps predictor can achieve peak performance across a variety of depths, whereas the policy seems to favor larger layer counts (but can still do pretty well even with a single layer).

The next question we might ask is whether the model size overall is predictive of performance. We can plot the total number of trainable parameters:

In terms of policy accuracy, we are seeing the best performance with the largest models, but the nsteps predictor doesn’t seem to need it as much. That is consistent with what we’ve already observed.

Let’s now identify the best model. How would we do that?

I’m going to consider the best model to be the one with the highest value of top-2 policy accuracy + top-2 nsteps accuracy. That’s the same as asking for the model most top-right in the following plot:

The two accuracies are highly correlated (which makes sense – it’s hard to predict how many steps it will take to reach the goal without also being a good Sokoban policy). The model that does the best has an encoding dim of 64 with 5 transformer layers, uses a learning rate of 0.0005, has weight decay = 0.0002, and a dropout probability of 0.01.

Conclusion

In this post I got excited about our ability to use the GPU to train our networks, and then I tried to capitalize on it by running a generic task runner. I did some sampling and collected a bunch of metrics in order to try to learn a thing or two about how best to parameterize my model and select the training hyperparameters.

Overall, I would say that the main takeaways are:

  • It’s worth spending a little time to help the computer do more work for you
  • It’s important to build insight into how the various parameters affect training

That’s all folks. Happy coding.

3D 2-Bone Inverse Kinematics

I am having fun figuring out how to write a 2D sidescroller from scratch, with 3D graphics. I’ve covered how posing and animating those skeletal meshes works. This post extends that work with inverse kinematics, which allows for the automatic placement of hands or feet at specific locations.

This wonderful low-poly knight comes from Davmid at SketchFab.

Some Recap

A character’s pose is defined by the transforms of their skeleton. A skeleton is made up of bones, where each bone is translated, rotated, and scaled with respect to a parent bone.

The \(i\)th bone’s transform is given by:

\[\begin{aligned}\boldsymbol{T}_i &= \text{trans}(\boldsymbol{\Delta}_i) \cdot \text{rot}(\boldsymbol{q}_i) \cdot \text{scale}(\boldsymbol{s}) \\ &= \begin{bmatrix}1 & 0 & 0 & \Delta_x \\ 0 & 1 & 0 & \Delta_y \\ 0 & 0 & 1 & \Delta_z \\ 0 & 0 & 0 & 1\end{bmatrix} \cdot \begin{bmatrix} 1 - 2(q_y^2+q_z^2) & 2(q_x q_y - q_w q_z) & 2(q_x q_z + q_w q_y) & 0 \\ 2(q_x q_y + q_w q_z) & 1 - 2(q_x^2 + q_z^2) & 2(q_y q_z - q_w q_x) & 0 \\ 2(q_x q_z - q_w q_y) & 2(q_y q_z + q_w q_x) & 1 - 2(q_x^2 + q_y^2) & 0 \\ 0 & 0 & 0 & 1\end{bmatrix} \cdot \begin{bmatrix}s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}\end{aligned}\]

where \(\boldsymbol{\Delta}_i\) is its translation vector, \(\boldsymbol{q}_i\) is its orientation quaternion, and \(\boldsymbol{s}\) is its scaling vector. The transforms are \(4\times 4\) matrices because we’re using homogeneous coordinates such that we can support translations. A 3D position is recovered by dividing each of the first three entries by the fourth entry.
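That composition can be written out directly. Here is a NumPy sketch of the formula above (the quaternion is ordered (w, x, y, z) and assumed unit-length; the function name is mine):

```python
import numpy as np

def bone_transform(delta, q, s):
    """4x4 homogeneous bone transform: translation @ rotation @ scale."""
    qw, qx, qy, qz = q
    T = np.eye(4)
    T[:3, 3] = delta
    R = np.eye(4)
    R[:3, :3] = [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)],
    ]
    S = np.diag([s[0], s[1], s[2], 1.0])
    return T @ R @ S

# A 90-degree rotation about z plus a translation of (1, 2, 3)
half = np.sqrt(0.5)
M = bone_transform([1.0, 2.0, 3.0], [half, 0.0, 0.0, half], [1.0, 1.0, 1.0])
p = M @ np.array([1.0, 0.0, 0.0, 1.0])
```

The rotation maps (1, 0, 0) to (0, 1, 0), and the translation then carries the point to (1, 3, 3).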

The resulting transform converts homogeneous locations in the bone’s local coordinate frame into the coordinate frame of its parent:

\[\boldsymbol{p}_{\text{parent}(i)} = \boldsymbol{T}_i \boldsymbol{p}_i\]

We can keep on applying these transforms all the way to the root bone to transform the bone into the root coordinate frame:

\[\boldsymbol{p}_{\text{root}} = \boldsymbol{T}_{\text{root}} \cdots \boldsymbol{T}_{\text{parent}(\text{parent}(i))} \boldsymbol{T}_{\text{parent}(i)} \boldsymbol{T}_i \boldsymbol{p}_i = \mathbb{T}_i \boldsymbol{p}_i\]

This aggregate transform \(\mathbb{T}_i\) ends up being very useful. If we have a mesh vertex \(\boldsymbol{v}\) defined in mesh space (with respect to the root bone), we can transform it into the \(i\)th bone’s frame via \(\mathbb{S}^{-1}_i \boldsymbol{v}\), where \(\mathbb{S}_i\) is the aggregate transform of that bone in the original skeleton pose, and then transform it back into mesh space via \(\mathbb{T}_i \mathbb{S}^{-1}_i \boldsymbol{v}\). That gives us the vertex’s new location according to the current pose.
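A quick sanity check on the skinning formula: when the current pose equals the original pose, \(\mathbb{T}_i = \mathbb{S}_i\) and every vertex stays put. Sketched with arbitrary stand-in transforms (my own illustration; the chain contents are random invertible matrices, not real bone transforms):

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate(chain):
    """Compose local bone transforms from the root down to a given bone."""
    M = np.eye(4)
    for T in chain:
        M = M @ T
    return M

# Arbitrary near-identity (hence invertible) locals standing in for a 3-bone chain
chain = [np.eye(4) + 0.1 * rng.random((4, 4)) for _ in range(3)]

S = aggregate(chain)   # bind-pose aggregate transform
T = aggregate(chain)   # current pose equals the bind pose here
v = np.array([0.3, -0.2, 0.5, 1.0])

v_new = T @ np.linalg.inv(S) @ v  # should reproduce v exactly
```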

The Problem

We can define keyframes and interpolate between them to get some nice animations. However, the keyframes are defined in mesh space, and have no way to react to the world around them. If we have a run animation, the player’s foot will always be placed in the same location, regardless of the ground height. If the animation has the player’s arms extend to grab a bar, it can be very easy for the player’s position to be off and the hands not quite be where the edge is.

Here one foot is hovering in the air and the other is sticking into the ground.

The animation system we have thus far produces hand and foot positions with forward kinematics — the positions are based on the successive application of transforms based on bones in the skeleton hierarchy. We want to do the opposite — find the transforms needed such that an end effector like a hand or a foot ends up where we want it to be.

There are several common ways to formulate this problem. I am going to avoid heuristic approaches like cyclic coordinate descent and FABRIK. This method will be closed-form rather than iterative.

Instead of supporting any number of bones, I will stick to 2-bone systems. For example, placing a wrist by moving an upper and lower arm, or placing a foot by moving an upper and lower leg.

Finally, as is typical, I will only allow rotations. We don’t want our bones to stretch or displace in order to reach their targets.

Our inverse kinematic problem thus reasons about four 3D vectors:

  • a fixed hip position \(\boldsymbol{a}\)
  • an initial knee position \(\boldsymbol{b}\)
  • an initial foot position \(\boldsymbol{c}\)
  • a target foot position \(\boldsymbol{t}\)

Our job is to adjust the rotation transform of the hip and knee bones such that the foot ends up as close as possible to the target position.

The image makes it easy to forget that this is all happening in 3D space.

Working it Out

We are going to break this down into two parts. First we will find the final knee and foot positions \(\boldsymbol{b}'\) and \(\boldsymbol{c}'\). Then we’ll adjust the bone orientations to achieve those positions.

Finding the New Positions

There are three cases:

  • we can reach the target location and \(\boldsymbol{c}' = \boldsymbol{t}\)
  • the bone segments aren’t long enough to reach the target location
  • the bone segments are too long to reach the target location

Let’s denote the segment lengths as \(\ell_{ab}\) and \(\ell_{bc}\), and additionally compute the distance between \(\boldsymbol{a}\) and \(\boldsymbol{t}\):

The target is too close if \(\ell_{at} < |\ell_{ab} - \ell_{bc}|\). When that happens, we extend the longer segment toward the target and the other segment away.

The target is too far if \(\ell_{at} > \ell_{ab} + \ell_{bc}\). When that happens, we extend the segments directly toward the target, to maximum extension.

The remaining, more interesting case is when the target can be reached exactly, and we know \(\boldsymbol{c}' = \boldsymbol{t}\). In that case we still have to figure out where the knee \(\boldsymbol{b}'\) ends up.

The final knee position will lie on a circle that is the intersection of two spheres: the sphere of radius \(\ell_{ab}\) centered at \(\boldsymbol{a}\) and the sphere of radius \(\ell_{bc}\) centered at \(\boldsymbol{t}\):

We can calculate \(\theta\) from the law of cosines:

\[\cos \theta = \frac{\ell_{bc}^2 - \ell_{ab}^2 - \ell_{at}^2}{-2 \ell_{ab} \ell_{at}} \]

The radius of the circle that \(\boldsymbol{b}'\) lies on is:

\[r = \ell_{ab} \sin \theta = \ell_{ab} \sqrt{1 - \cos^2 \theta}\]

The center of the circle is then

\[\boldsymbol{m} = \boldsymbol{a} - \ell_{ab} \cos \theta \> \hat{\boldsymbol{n}}\]

where \(\hat{\boldsymbol{n}}\) is the unit vector from \(\boldsymbol{t}\) to \(\boldsymbol{a}\).

We can place the knee anywhere we want on this circle using some unit vector \(\hat{\boldsymbol{u}}\) perpendicular to \(\hat{\boldsymbol{n}}\):

\[\boldsymbol{b}' = \boldsymbol{m} + r \hat{\boldsymbol{u}}\]

For the inverse kinematics to look nice, I want to move the knee as little as possible. As such, let’s try to keep \(\hat{\boldsymbol{u}}\) close to the current knee position:

\[\begin{aligned}\boldsymbol{v} &= \boldsymbol{b} - \boldsymbol{m} \\ \boldsymbol{u} &= \boldsymbol{v} - (\boldsymbol{v} \cdot \hat{\boldsymbol{n}}) \hat{\boldsymbol{n}} \\ \hat{\boldsymbol{u}} &= \text{normalize}(\boldsymbol{u}) \end{aligned}\]

We can implement this in code as follows:

f32 len_ab = glm::length(b - a);
f32 len_bc = glm::length(c - b);
f32 len_at = glm::length(t - a);

// Unit vector from t to a.
// Guard against division by zero when t and a are (nearly) coincident.
glm::vec3 n = glm::vec3(1.0f, 0.0f, 0.0f);
if (len_at >= 1e-3f) {
    n = (a - t) / len_at;
}

bool reached_target = false;
glm::vec3 b_final, c_final;  // The final positions of the knee and foot.
if (len_at < std::abs(len_ab - len_bc)) {
    // The target is too close.
    // Extend the longer appendage toward it and the other away.
    int sign = (len_ab > len_bc) ? 1 : -1;
    b_final = a - sign * len_ab * n;
    c_final = b_final + sign * len_bc * n;

} else if (len_at <= len_ab + len_bc) {
    // The target is reachable
    reached_target = true;
    c_final = t;

    // The final knee position b_final will lie on a circle that is the intersection of two
    // spheres: the sphere of radius len_ab centered at a and the sphere of radius len_bc
    // centered at t.

    // Cosine of the angle t - a - b_final
    f32 cos_theta =
        (len_bc * len_bc - len_ab * len_ab - len_at * len_at) / (-2 * len_ab * len_at);
    // Clamp to guard against floating-point error pushing the argument below zero.
    f32 sin_theta = std::sqrt(std::max(0.0f, 1.0f - cos_theta * cos_theta));

    // Radius of the circle that b_final must lie on.
    f32 r = len_ab * sin_theta;

    // The center of the circle that b_final must lie on.
    glm::vec3 m = a - len_ab * cos_theta * n;

    // Unit vector perpendicular to n and pointing at b from m.
    // If b and m are coincident, we default to any old vector perpendicular to n.
    glm::vec3 u = GetPerpendicularUnitVector(n, b - m);

    // Compute the new knee position.
    b_final = m + r * u;

} else {
    // The target is too far.
    // Reach out toward it to max extension.
    b_final = a - len_ab * n;
    c_final = b_final - len_bc * n;
}

Adjusting the Orientations

Now that we have the updated knee and foot positions, we need to adjust the bone transforms to rotate the segments to match them.

The bone transforms are all defined relative to their parents. Let our end effector (the hand or foot) be associated with bone \(i\), its parent (the elbow or knee) be associated with bone \(j\), and its parent (the hip or shoulder) be associated with bone \(k\). We should already have the aggregate transforms for these bones computed for the existing pose.

First we’ll rotate bone \(k\) to point toward \(\boldsymbol{b}’\) instead of \(\boldsymbol{b}\). We need to calculate both positions in the bone’s local frame:

\[\begin{aligned}\boldsymbol{b}_\text{local} &= \mathbb{T}_k^{-1} \> \boldsymbol{b} \\ \boldsymbol{b}’_\text{local} &= \mathbb{T}_k^{-1} \> \boldsymbol{b}’ \end{aligned}\]

We then normalize these vectors and find the rotation between them. A minimum rotation between two unit vectors \(\hat{\boldsymbol{u}}\) and \(\hat{\boldsymbol{v}}\) has an axis of rotation:

\[\boldsymbol{a} = \hat{\boldsymbol{u}} \times \hat{\boldsymbol{v}}\]

and a rotation angle:

\[\theta = \cos^{-1}\left(\hat{\boldsymbol{u}} \cdot \hat{\boldsymbol{v}}\right)\]

// Find the minimum rotation between the two given unit vectors.
// If the vectors are sufficiently parallel, return the identity quaternion.
glm::quat GetRotationBetween(const glm::vec3& u, const glm::vec3& v, const f32 eps = 1e-3f) {
    glm::vec3 axis = glm::cross(u, v);
    f32 len_axis = glm::length(axis);
    if (len_axis < eps) {
        return glm::quat(1.0f, 0.0f, 0.0f, 0.0f);  // identity (glm order: w, x, y, z)
    }

    axis /= len_axis;
    // Clamp the dot product to avoid acos of a value slightly outside [-1, 1].
    f32 angle = std::acos(glm::clamp(glm::dot(u, v), -1.0f, 1.0f));
    return glm::angleAxis(angle, axis);
}

We then adjust the orientation of bone \(k\) using this axis and angle to get an updated transform \(\mathbb{T}_k’\), and propagate that effect down to the child transforms.

We do the same thing with bone \(j\), using the updated transform. The code for this is:

glm::mat4 iTTk = glm::inverse(TTk);
glm::vec3 local_currt = glm::normalize(To3d(iTTk * glm::vec4(b, 1.0f)));
glm::vec3 local_final = glm::normalize(To3d(iTTk * glm::vec4(b_final, 1.0f)));
frame->orientations[grandparent_bone] =
    frame->orientations[grandparent_bone] * GetRotationBetween(local_currt, local_final);

(*bone_transforms)[grandparent_bone] =
    (*bone_transforms)[grandgrandparent_bone] * CalcTransform(*frame, grandparent_bone);
(*bone_transforms)[parent_bone] =
    (*bone_transforms)[grandparent_bone] * CalcTransform(*frame, parent_bone);

// Do the same with the next bone.
glm::mat4 TTj_new = (*bone_transforms)[parent_bone];
local_currt = glm::normalize(To3d(glm::inverse(TTj) * glm::vec4(c, 1.0f)));
local_final = glm::normalize(To3d(glm::inverse(TTj_new) * glm::vec4(c_final, 1.0f)));
frame->orientations[parent_bone] =
    frame->orientations[parent_bone] * GetRotationBetween(local_currt, local_final);

Conclusion

This approach to inverse kinematics is efficient and effective. I’ve been using it to refine various action states, such as placing hands and feet onto ladder rungs:

It works pretty well when the hand or foot doesn’t have to be adjusted all that much. For larger deltas it can end up picking some weird rotations because it does not have any notion of maximum joint angles. I could add that, of course, at the expense of complexity.

It’s really nice to be able to figure something like this out from the basic math foundations. There is something quite satisfying about being able to lay out the problem formulation and then systematically work through it.

Cheers!

Impulse Responses

This post expands on the previous post, which looked at collision detection and resolution for 2d polygons. By the end we were able to apply impulses to colliding bodies to properly get them to bounce and spin off of one another. Not only will we take it further with friction impulses and joints, but we’ll pay special attention to how these impulses are derived to emphasize that this line of reasoning is a tool that can be extended to new applications.

There is more math in this post than normal because I want to both provide a resource that I myself could have benefited from and show that this stuff can all be derived.

An impulse \(\boldsymbol{j}\) is an instantaneous change in momentum. It is very similar to a force, which influences velocity over time:

\[\boldsymbol{f} = m \boldsymbol{a} \quad \Rightarrow \quad \boldsymbol{v}’ \approx \boldsymbol{v} + \frac{1}{m} \boldsymbol{f} \Delta t\]

An impulse applied to a rigid body at a point \(\boldsymbol{r}\) relative to the body’s center of mass similarly affects its velocity, but without the need for the timestep:

\[\boldsymbol{v}’ = \boldsymbol{v} + \frac{1}{m} \boldsymbol{j}\]

This impulse also affects the angular velocity:

\[\boldsymbol{\omega}’ = \boldsymbol{\omega} + \boldsymbol{I}^{-1} \boldsymbol{r} \times \boldsymbol{j}\]

This update models the angular velocity in three dimensions, treating it as a vector. We can work out the equivalent 2d update where we just keep track of a scalar \(\omega\):

\[\boldsymbol{r} \times \boldsymbol{j} = \begin{bmatrix} r_x \\ r_y \\ 0\end{bmatrix} \times \begin{bmatrix}j_x \\ j_y \\ 0\end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ r_x j_y - r_y j_x \end{bmatrix}\]

If we assume our inertia tensor is diagonal, and just pull out the z entry \(I_{zz}\), we get:

\[\omega’ = \omega + \frac{1}{I_{zz}} \left(r_x j_y – r_y j_x\right)\]

From here on out, if you see a scalar \(I\), it refers to \(I_{zz}\).
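In code, the 2D velocity updates are only a few lines. Here is a minimal sketch; the `Body2D` struct and its field names are illustrative, not the engine's actual types:

```cpp
#include <cassert>
#include <cmath>

using f32 = float;

struct Vec2f { f32 x, y; };

// Illustrative 2D rigid-body state. Inverse mass and inverse moment of
// inertia are stored directly so static bodies can use zero.
struct Body2D {
    Vec2f v;    // linear velocity
    f32 w;      // angular velocity (scalar in 2D)
    f32 inv_m;  // 1 / m
    f32 inv_I;  // 1 / I_zz
};

// Apply an impulse j at a point r relative to the body's center of mass.
void ApplyImpulse(Body2D* body, Vec2f j, Vec2f r) {
    body->v.x += body->inv_m * j.x;
    body->v.y += body->inv_m * j.y;
    body->w += body->inv_I * (r.x * j.y - r.y * j.x);  // 2D scalar cross product
}
```

Note there is no timestep anywhere: the impulse changes the velocities instantaneously.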

The Normal Impulse

In the last blog post, we used an impulse along the impact normal to cause two colliding rigid bodies to bounce off each other. We’re going to cover the same thing, but derive it instead of just taking the result for granted.

We have two bodies, A and B, and a contact point located at \(\boldsymbol{r}_A\) relative to body A’s center and at \(\boldsymbol{r}_B\) relative to body B’s center:

The velocity of the contact points on each body as they enter the collision is:

\[\begin{align}\boldsymbol{v}_{cA} & = \boldsymbol{v}_A + \boldsymbol{\omega}_A \times \boldsymbol{r}_A \\ \boldsymbol{v}_{cB} &= \boldsymbol{v}_B + \boldsymbol{\omega}_B \times \boldsymbol{r}_B \end{align}\]

In 2D that amounts to:

\[\boldsymbol{v}_{cA} = \begin{bmatrix}v_{A,x} \\ v_{A,y} \\ 0\end{bmatrix} + \begin{bmatrix}0 \\ 0 \\ \omega_A\end{bmatrix} \times \begin{bmatrix} r_{A,x} \\ r_{A,y} \\ 0\end{bmatrix} = \begin{bmatrix}v_{A,x} \\ v_{A,y} \\ 0\end{bmatrix} + \begin{bmatrix}-\omega_A r_{A,y} \\ \hphantom{-}\omega_A r_{A,x} \\ 0\end{bmatrix} \]

One can do this stuff in 3D and then drop or ignore the \(z\)-axis, but in my code I just stick with 2D:

Vec2f r; // contact point relative to the center of mass
Vec2f v; // linear velocity
f32 w;   // angular velocity
Vec2f v_c = v + w * Rotr(r);

The contact velocity — how fast the contact points collide — is the relative speed of the contact points:

\[\boldsymbol{v}_\text{rel} = \boldsymbol{v}_{cB} \> – \boldsymbol{v}_{cA}\]

To derive the impulse, we stipulate two things. First, that the impulse lies in the normal direction:

\[\boldsymbol{j} = j \hat{\boldsymbol{n}}\]

Second, that the relative velocity along the normal direction reverses sign and changes by a factor \(e\):

\[\boldsymbol{v}_\text{rel}’ \cdot \hat{\boldsymbol{n}} = \> – e \> \boldsymbol{v}_\text{rel} \cdot \hat{\boldsymbol{n}}\]

This factor \(e\) is called the restitution. Setting it to zero produces a perfectly inelastic collision (because the resulting relative velocity in the normal direction is zero), whereas setting it to 1 produces a perfectly elastic collision that preserves the kinetic energy. Most objects use a value somewhere in-between.

We now have everything we need to derive the impulse. Let’s expand the left-hand side of the restitution equation:

\[\begin{align}\boldsymbol{v}_\text{rel}’ \cdot \hat{\boldsymbol{n}} \\ \left(\boldsymbol{v}_{cB}’ \> – \boldsymbol{v}_{cA}’\right) \cdot \hat{\boldsymbol{n}} \\ \left[\left(\boldsymbol{v}_B’ + \boldsymbol{\omega}_B’ \times \boldsymbol{r}_B\right) – \left(\boldsymbol{v}_A’ + \boldsymbol{\omega}_A’ \times \boldsymbol{r}_A\right)\right] \cdot \hat{\boldsymbol{n}} \\ \left[\left(\left[\boldsymbol{v}_B + \frac{1}{m_B}\boldsymbol{j}\right] + \left[\boldsymbol{\omega}_B + \boldsymbol{I}_B^{-1} \boldsymbol{r}_B \times \boldsymbol{j}\right] \times \boldsymbol{r}_B\right) – \left(\left[\boldsymbol{v}_A – \frac{1}{m_A}\boldsymbol{j}\right] + \left[\boldsymbol{\omega}_A – \boldsymbol{I}_A^{-1} \boldsymbol{r}_A \times \boldsymbol{j}\right] \times \boldsymbol{r}_A\right)\right] \cdot \hat{\boldsymbol{n}} \\ \left[\left(\left[\boldsymbol{v}_B + j \frac{1}{m_B}\hat{\boldsymbol{n}}\right] + \left[\boldsymbol{\omega}_B + j \boldsymbol{I}_B^{-1} \boldsymbol{r}_B \times \hat{\boldsymbol{n}}\right] \times \boldsymbol{r}_B\right) – \left(\left[\boldsymbol{v}_A – j \frac{1}{m_A}\hat{\boldsymbol{n}}\right] + \left[\boldsymbol{\omega}_A – j \boldsymbol{I}_A^{-1} \boldsymbol{r}_A \times \hat{\boldsymbol{n}}\right] \times \boldsymbol{r}_A\right)\right] \cdot \hat{\boldsymbol{n}} \\ \left[\boldsymbol{v}_\text{rel} + j \left(\frac{1}{m_B}\hat{\boldsymbol{n}} + \left[ \boldsymbol{I}_B^{-1} \boldsymbol{r}_B \times \hat{\boldsymbol{n}}\right] \times \boldsymbol{r}_B + \frac{1}{m_A}\hat{\boldsymbol{n}} + \left[ \boldsymbol{I}_A^{-1} \boldsymbol{r}_A \times \hat{\boldsymbol{n}}\right] \times \boldsymbol{r}_A\right)\right] \cdot \hat{\boldsymbol{n}} \\ \boldsymbol{v}_\text{rel} \cdot \hat{\boldsymbol{n}} + j \left(\frac{1}{m_A}\hat{\boldsymbol{n}} + \frac{1}{m_B}\hat{\boldsymbol{n}} + \left[ \boldsymbol{I}_A^{-1} \boldsymbol{r}_A \times \hat{\boldsymbol{n}}\right] \times \boldsymbol{r}_A + \left[ \boldsymbol{I}_B^{-1} \boldsymbol{r}_B \times \hat{\boldsymbol{n}}\right] \times 
\boldsymbol{r}_B\right) \cdot \hat{\boldsymbol{n}} \\ \boldsymbol{v}_\text{rel} \cdot \hat{\boldsymbol{n}} + j \left(\frac{1}{m_A}\hat{\boldsymbol{n}}\cdot \hat{\boldsymbol{n}} + \frac{1}{m_B}\hat{\boldsymbol{n}}\cdot \hat{\boldsymbol{n}} + \left[ \boldsymbol{I}_A^{-1} \boldsymbol{r}_A \times \hat{\boldsymbol{n}}\right] \times \boldsymbol{r}_A \cdot \hat{\boldsymbol{n}} + \left[ \boldsymbol{I}_B^{-1} \boldsymbol{r}_B \times \hat{\boldsymbol{n}}\right] \times \boldsymbol{r}_B \cdot \hat{\boldsymbol{n}}\right) \\ \boldsymbol{v}_\text{rel} \cdot \hat{\boldsymbol{n}} + j \left(\frac{1}{m_A} + \frac{1}{m_B} + \left[ \boldsymbol{I}_A^{-1} \boldsymbol{r}_A \times \hat{\boldsymbol{n}}\right] \times \boldsymbol{r}_A \cdot \hat{\boldsymbol{n}} + \left[ \boldsymbol{I}_B^{-1} \boldsymbol{r}_B \times \hat{\boldsymbol{n}}\right] \times \boldsymbol{r}_B \cdot \hat{\boldsymbol{n}}\right) \end{align}\]

We then equate this to \( – e \> \boldsymbol{v}_\text{rel} \cdot \hat{\boldsymbol{n}}\) and solve for the impulse scalar \(j\), yielding:

\[j = \frac{-(1+e) \boldsymbol{v}_\text{rel} \cdot \hat{\boldsymbol{n}} }{ \frac{1}{m_A} + \frac{1}{m_B} + \left(\left[ \boldsymbol{I}_A^{-1} \boldsymbol{r}_A \times \hat{\boldsymbol{n}}\right] \times \boldsymbol{r}_A + \left[ \boldsymbol{I}_B^{-1} \boldsymbol{r}_B \times \hat{\boldsymbol{n}}\right] \times \boldsymbol{r}_B\right) \cdot \hat{\boldsymbol{n}} }\]

This is the equation that Wikipedia gives us.

The equation I gave in the last blog post looks a little different:

\[j = \frac{-(1+e)\boldsymbol{v}_\text{rel} \cdot \hat{\boldsymbol{n}}}{\frac{1}{m_A} + \frac{1}{m_B} + \frac{(\boldsymbol{r}_A \times \hat{\boldsymbol{n}})^2}{I_A} + \frac{(\boldsymbol{r}_B \times \hat{\boldsymbol{n}})^2}{I_B}}\]

It only differs in the terms involving the moment of inertia.

Let’s expand the Wikipedia version:

\[\begin{align}\left[\left(\boldsymbol{I}^{-1} \boldsymbol{r} \times \hat{\boldsymbol{n}}\right) \times \boldsymbol{r}\right] \cdot \hat{\boldsymbol{n}} &= \left(\left(\frac{1}{I}\begin{bmatrix}r_x \\ r_y \\ 0\end{bmatrix} \times \begin{bmatrix} \hat{n}_x \\ \hat{n}_y \\ 0 \end{bmatrix}\right) \times \begin{bmatrix}r_x \\ r_y \\ 0\end{bmatrix}\right) \cdot \begin{bmatrix}\hat{n}_x \\ \hat{n}_y \\ 0\end{bmatrix} \\ &= \left(\frac{1}{I}\begin{bmatrix}0 \\ 0 \\ r_x \hat{n}_y - r_y \hat{n}_x\end{bmatrix} \times \begin{bmatrix}r_x \\ r_y \\ 0\end{bmatrix}\right) \cdot \begin{bmatrix}\hat{n}_x \\ \hat{n}_y \\ 0\end{bmatrix} \\ &= \frac{1}{I}\begin{bmatrix}-r_x r_y \hat{n}_y + r_y^2 \hat{n}_x \\ r_x^2 \hat{n}_y - r_x r_y \hat{n}_x \\ 0 \end{bmatrix} \cdot \begin{bmatrix}\hat{n}_x \\ \hat{n}_y \\ 0\end{bmatrix} \\ &= \frac{1}{I}\left( r_x^2 \hat{n}_y^2 - 2 r_x r_y \hat{n}_x \hat{n}_y + r_y^2 \hat{n}_x^2 \right) \end{align}\]

We get the same thing if we expand the other version:

\[\begin{align}\frac{1}{I}\left(\boldsymbol{r} \times \hat{\boldsymbol{n}}\right)^2 &= \frac{1}{I}\left(r_x \hat{n}_y - r_y \hat{n}_x\right)^2 \\ &= \frac{1}{I}\left( r_x^2 \hat{n}_y^2 - 2 r_x r_y \hat{n}_x \hat{n}_y + r_y^2 \hat{n}_x^2 \right)\end{align}\]

The two formulations are equal, but assume that we’re working in 2D and that the moment of inertia tensor is diagonal with every entry equal to \(I\). The Wikipedia version is more general, and can handle 3D collisions and general inertia tensors.

The normal impulse \(j\) in 2D is thus

\[j = \> -(1+e) \> m_\text{eff} \> \boldsymbol{v}_\text{rel} \cdot \hat{\boldsymbol{n}} \]

where the effective mass is

\[m_\text{eff} = \left(\frac{1}{m_A} + \frac{1}{m_B} + \frac{(\boldsymbol{r}_A \times \hat{\boldsymbol{n}})^2}{I_A} + \frac{(\boldsymbol{r}_B \times \hat{\boldsymbol{n}})^2}{I_B}\right)^{-1}\]

After calculating it, we apply an impulse \(j \hat{\boldsymbol{n}}\) to body B and the negated impulse to body A. Note that we only apply an impulse if the relative velocity is causing the bodies to move into contact (\(\boldsymbol{v}_\text{rel} \cdot \hat{\boldsymbol{n}} < 0\)), otherwise the contact point already has a separating velocity.
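Putting the pieces together, here is a sketch of the 2D normal-impulse computation. The `Body` struct and helper names are illustrative (the engine's actual types aren't shown here):

```cpp
#include <cassert>
#include <cmath>

using f32 = float;
struct Vec2f { f32 x, y; };

f32 Dot(Vec2f a, Vec2f b) { return a.x * b.x + a.y * b.y; }
f32 Cross(Vec2f a, Vec2f b) { return a.x * b.y - a.y * b.x; }  // 2D scalar cross

// Illustrative body state.
struct Body {
    Vec2f v;    // linear velocity
    f32 w;      // angular velocity
    f32 inv_m;  // inverse mass
    f32 inv_I;  // inverse moment of inertia (I_zz)
};

// Velocity of the contact point: v + w x r, with w along +z.
Vec2f ContactVelocity(const Body& b, Vec2f r) {
    return Vec2f{b.v.x - b.w * r.y, b.v.y + b.w * r.x};
}

// Scalar normal impulse for a contact with unit normal n pointing from A to B.
// Returns 0 if the contact points are already separating along n.
f32 NormalImpulse(const Body& A, const Body& B, Vec2f rA, Vec2f rB, Vec2f n, f32 e) {
    Vec2f vcA = ContactVelocity(A, rA);
    Vec2f vcB = ContactVelocity(B, rB);
    Vec2f v_rel{vcB.x - vcA.x, vcB.y - vcA.y};
    f32 v_rel_n = Dot(v_rel, n);
    if (v_rel_n >= 0.0f) return 0.0f;  // already separating

    f32 rAxn = Cross(rA, n);
    f32 rBxn = Cross(rB, n);
    f32 inv_m_eff = A.inv_m + B.inv_m + A.inv_I * rAxn * rAxn + B.inv_I * rBxn * rBxn;
    return -(1.0f + e) * v_rel_n / inv_m_eff;
}
```

The returned scalar is then applied as \(j \hat{\boldsymbol{n}}\) to body B and \(-j \hat{\boldsymbol{n}}\) to body A.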

The restitution \(e\) will depend on the colliding bodies. One convenient way to get it is to assign a restitution to each rigid body, and then use the maximum of the two for the collision. That allows bounciness to win out:

\[e = \max\left(e_A, e_B\right)\]

Friction Impulse

The normal impulse takes care of the relative penetration speed, but does nothing in the tangent direction along the contact line. We typically want shapes that slide along each other to have friction — that is what the friction impulse, or tangent impulse, is for.

Wikipedia and other websites will at this point show you a fancy diagram for Coulomb friction depicting the friction cone. I don’t find that to be particularly illuminating.

Instead, I simply like to think about Coulomb friction as:

  • compute a tangent impulse that causes the relative tangent velocity to be zero (\(e=0\))
  • if the required tangent impulse is sufficiently small, apply it (static friction)
  • if the required tangent impulse is too large, cap it out and apply that instead (dynamic friction)

Let’s derive the update ourselves again.

The friction impulse acts in the tangent direction \(\hat{\boldsymbol{t}}\), which we can get by rotating the normal. We once again have a scalar impulse:

\[\boldsymbol{j} = j \hat{\boldsymbol{t}}\]

We compute the tangent impulse that causes a zero relative tangent velocity (\(e = 0\)).

These two conditions are structurally the same as the normal impulse, except \(e\) is fixed. We can directly apply our previous equations to get:

\[j = \> – m_\text{eff} \> \boldsymbol{v}_\text{rel} \cdot \hat{\boldsymbol{t}}\]

where the effective mass is now in the tangent direction:

\[m_\text{eff} = \left(\frac{1}{m_A} + \frac{1}{m_B} + \frac{(\boldsymbol{r}_A \times \hat{\boldsymbol{t}})^2}{I_A} + \frac{(\boldsymbol{r}_B \times \hat{\boldsymbol{t}})^2}{I_B}\right)^{-1}\]

As long as this impulse is small enough (in magnitude), we apply it in order to prevent any tangential sliding. If it is too large, we switch from static friction to dynamic friction, capping the magnitude while preserving the sign:

\[j = \begin{cases} j & \text{if } |j| \leq j_\text{static} \\ \operatorname{sign}(j) \> j_\text{dynamic} & \text{otherwise} \end{cases}\]

Easy peasy.

The static and dynamic friction values are typically multiples of the normal impulse: we multiply the normal impulse by \(\mu_\text{static}\) to get the static friction impulse threshold and by \(\mu_\text{dynamic}\) to get the dynamic friction impulse threshold. Like the restitution, these are properties of each body, and we can obtain values for a pair of bodies by combining their individual values. A common recommendation is the square root of the product, which lets extremely low-friction (slippery) values dominate:

\[\mu_\text{static} = \sqrt{\vphantom{P}\mu_{A,\text{static}} \cdot \mu_{B,\text{static}}}\]
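The clamping step itself is tiny. A sketch, assuming the unconstrained tangent impulse `jt` and the normal impulse magnitude `jn` have already been computed (function and parameter names are my own):

```cpp
#include <cassert>
#include <cmath>

using f32 = float;

// Clamp an unconstrained tangent impulse jt using Coulomb friction.
// jn is the magnitude of the normal impulse for this contact.
f32 ClampFrictionImpulse(f32 jt, f32 jn, f32 mu_static, f32 mu_dynamic) {
    if (std::abs(jt) <= mu_static * jn) {
        return jt;  // static friction: cancel tangential sliding entirely
    }
    // Dynamic friction: cap the magnitude, preserving the sign.
    return (jt > 0.0f ? 1.0f : -1.0f) * mu_dynamic * jn;
}
```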

Revolute Joints

There are other things we can do with impulses. One of the main ones is to model attachment points between rigid bodies. These revolute joints allow rotation, but force the bodies to maintain a constant distance vector to the joint:

Revolute joints are useful for all sorts of things, including swinging:

I didn’t have a lot of luck finding information on how to enforce revolute joints between rigid bodies in 2D. Thankfully, we can derive the necessary impulse ourselves.

The property we want to enforce is that the contact velocity at the joint after the impulse be zero:

\[\boldsymbol{v}_\text{rel}’ = \boldsymbol{0}\]

We can expand that out:

\[\begin{align}\boldsymbol{v}_\text{rel}’ &= \boldsymbol{0} \\ \boldsymbol{v}_{cB}’ \> – \boldsymbol{v}_{cA}’ &= \boldsymbol{0} \\ \left(\boldsymbol{v}_B’ + \boldsymbol{\omega}_B’ \times \boldsymbol{r}_B\right) – \left(\boldsymbol{v}_A’ + \boldsymbol{\omega}_A’ \times \boldsymbol{r}_A\right) &= \boldsymbol{0} \\ \left(\left[\boldsymbol{v}_B + \frac{1}{m_B}\boldsymbol{j}\right] + \left[\boldsymbol{\omega}_B + \boldsymbol{I}_B^{-1} \boldsymbol{r}_B \times \boldsymbol{j}\right] \times \boldsymbol{r}_B\right) – \left(\left[\boldsymbol{v}_A – \frac{1}{m_A}\boldsymbol{j}\right] + \left[\boldsymbol{\omega}_A – \boldsymbol{I}_A^{-1} \boldsymbol{r}_A \times \boldsymbol{j}\right] \times \boldsymbol{r}_A\right) &= \boldsymbol{0} \\ \boldsymbol{v}_\text{rel} + \left(\frac{1}{m_B}\boldsymbol{j} + \left[\boldsymbol{I}_B^{-1} \boldsymbol{r}_B \times \boldsymbol{j}\right] \times \boldsymbol{r}_B\right) + \left(\frac{1}{m_A}\boldsymbol{j} + \left[\boldsymbol{I}_A^{-1} \boldsymbol{r}_A \times \boldsymbol{j}\right] \times \boldsymbol{r}_A\right) &= \boldsymbol{0} \\ \boldsymbol{v}_\text{rel} + \left(\frac{1}{m_A} + \frac{1}{m_B}\right)\boldsymbol{j} + \left(\boldsymbol{I}_A^{-1} \boldsymbol{r}_A \times \boldsymbol{j}\right) \times \boldsymbol{r}_A + \left(\boldsymbol{I}_B^{-1} \boldsymbol{r}_B \times \boldsymbol{j}\right) \times \boldsymbol{r}_B &= \boldsymbol{0}\end{align}\]

If we work out the cross products, then we end up with the following system of equations:

\[\begin{bmatrix}\frac{1}{m_A} + \frac{1}{m_B} + \frac{1}{I_A}r_{Ay}^2 + \frac{1}{I_B}r_{By}^2 & -\left(\frac{1}{I_A}r_{Ax}r_{Ay} + \frac{1}{I_B}r_{Bx}r_{By}\right) \\ -\left(\frac{1}{I_A}r_{Ax}r_{Ay} + \frac{1}{I_B}r_{Bx}r_{By}\right) & \frac{1}{m_A} + \frac{1}{m_B} + \frac{1}{I_A}r_{Ax}^2 + \frac{1}{I_B}r_{Bx}^2\end{bmatrix}\begin{bmatrix} j_x \\ j_y\end{bmatrix} = \begin{bmatrix}-v_{\text{rel},x} \\ -v_{\text{rel},y}\end{bmatrix}\]

We can write a simple function that can solve such a \(2 \times 2\) linear system:

// Solve a 2x2 system of linear equations
// [a b][x1] = [e]
// [c d][x2] = [f]
// This method assumes that it will succeed, which
// requires that the matrix be invertible.
template <typename T>
void Solve(T a, T b, T c, T d, T e, T f, T& x1, T& x2) {
    T det = d * a - c * b;
    x2 = (a * f - c * e) / det;
    x1 = (e - b * x2) / a;
}

and then use that to solve for our impulse.

// Compute the effective mass components.
f32 m_eff_11 = inv_m1 + inv_m2 + inv_I1 * r1.y * r1.y + inv_I2 * r2.y * r2.y;
f32 m_eff_12 = -(inv_I1 * r1.x * r1.y + inv_I2 * r2.x * r2.y);
f32 m_eff_22 = inv_m1 + inv_m2 + inv_I1 * r1.x * r1.x + inv_I2 * r2.x * r2.x;

// Solve for the impulse:
//  [m_eff_11   m_eff_12][j.x] = [-v_rel.x]
//  [m_eff_12   m_eff_22][j.y] = [-v_rel.y]
Vec2f j;
Solve(m_eff_11, m_eff_12, m_eff_12, m_eff_22, -v_rel.x, -v_rel.y, j.x, j.y);

We can then apply the resulting impulse to both bodies and perform a positional correction. I weight the correction such that the target joint location is biased toward the heavier object (that way joints on infinite-mass objects never move).
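That mass-weighted positional correction might look like the following sketch, where inverse masses are used so that an infinite-mass body (with `inv_m == 0`) gets weight zero and never moves. The names here are illustrative:

```cpp
#include <cassert>
#include <cmath>

using f32 = float;
struct Vec2f { f32 x, y; };

// Split a positional error between two bodies in proportion to their
// inverse masses. An infinite-mass body (inv_m == 0) receives no correction.
void SplitCorrection(f32 inv_mA, f32 inv_mB, Vec2f error,
                     Vec2f* deltaA, Vec2f* deltaB) {
    f32 total = inv_mA + inv_mB;
    f32 wA = (total > 0.0f) ? inv_mA / total : 0.0f;
    f32 wB = (total > 0.0f) ? inv_mB / total : 0.0f;
    // Move the bodies in opposite directions to close the error.
    *deltaA = Vec2f{-wA * error.x, -wA * error.y};
    *deltaB = Vec2f{wB * error.x, wB * error.y};
}
```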

Conclusion

This post set the mathematical foundation for impulses, then derived the normal, friction, and revolute joint impulses. In each case, we looked at what was being preserved, formulated that mathematically, and massaged the equations to back out the impulse. While the equations were long at times, they did not lead us astray and we were able to find our way to some solid physics.

2D Collision Detection and Resolution

This post is an update on the sidescroller project. In short, I still find the project interesting, and as long as that is true, I want to work toward my original goal of a 2D sidescroller with arbitrary level geometry (i.e., not based on tiles or AABBs). I’ve achieved my stated goal of learning how to use the GPU and roll my own skeletal animation system, but I obviously haven’t built a game. A full game was, and still is, out of the question, but it would be cool to have something with a game loop and perhaps a complete level and a boss.

Fleshing out the game further requires fleshing out more of the game-play design. As such, physics must happen.

This blog post covers how I’m going about collision detection and resolution for convex polygons.

The Problem

We wish to simulate objects moving around in a 2D space. While we might eventually have fancy objects like ropes, let’s start by restricting ourselves to convex polygons:

A crude rendition of a 2D platformer scene (left) and the convex polygons in it (right).

As we can see, nonconvex polygons can be represented by multiple convex polygons, so this isn’t all that restrictive.

Most of the polygons will be static. The floors and ceilings, for the most part, do not move. Entities like the player, monsters, eventual ropes, crates, etc. will need to move. That movement needs to obey the laws of physics, which mainly means integrating accelerations into velocities, velocities into positions, and handling collisions.

More formally, we have a scene comprised of \(m\) rigid bodies, some of which are static and some of which are dynamic. Each rigid body is defined by a convex polygon and a 2D state. Each convex polygon is represented as a list of vertices in counter-clockwise (right-hand) order:

More serious implementations might include additional information, like precomputed edge normals or an enclosing axis-aligned box for broad-phase collision detection, but let’s leave those optimizations out for now. And yes, I could have used an std::vector but I like having trivially copyable types since I’m serializing and deserializing my levels to disk and I don’t want to think too hard when doing that.

I made the simplification of keeping track of static bodies separately from dynamic rigid bodies. For the static bodies, the polygon vertices are directly defined in the world frame. For the dynamic bodies, the polygon vertices are defined in the body frame of the entity, and are then transformed according to the entity’s state:

A vertex \(\boldsymbol{v}\) in the body frame of a polygon can be transformed into the world frame according to:

\[\begin{bmatrix}v_x \\ v_y\end{bmatrix}_\text{world} = \begin{bmatrix}\cos \theta & -\sin \theta \\ \sin \theta & \hphantom{-}\cos \theta\end{bmatrix}\begin{bmatrix}v_x \\ v_y\end{bmatrix}_\text{body} + \begin{bmatrix}p_x \\ p_y\end{bmatrix}\]
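In code, that body-to-world transform is a rotation followed by a translation. A sketch with plain structs (the engine's actual `Vec2f` type is assumed to look like this):

```cpp
#include <cassert>
#include <cmath>

using f32 = float;
struct Vec2f { f32 x, y; };

// Transform a vertex from a body's local frame into the world frame,
// given the body's position p and orientation theta.
Vec2f BodyToWorld(Vec2f v, Vec2f p, f32 theta) {
    const f32 c = std::cos(theta);
    const f32 s = std::sin(theta);
    return Vec2f{c * v.x - s * v.y + p.x,
                 s * v.x + c * v.y + p.y};
}
```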

As the physics evolve, these states change, but the polygons do not. In every frame, we’ll accumulate forces and torques on our bodies and use them to update the state. We’ll also have to perform collision detection and handle any resulting forces and torques.

State Propagation

We need to simulate the laws of motion. That is, solve \(F = m \> a\) and compute

\[\begin{matrix}v(t + \Delta t) = v(t) + \int a(t) \> dt \\ p(t + \Delta t) = p(t) + \int v(t) \> dt \end{matrix}\]

except in 2 dimensions with torques and rotations.

We can’t do exact integration, so we’ll have to rely on approximations. Games tend to run at pretty high frame rates, so it’s reasonable to approximate the update across a single frame using the Euler method:

\[\begin{align} u(t + \Delta t) &= u(t) + a_x(t) \Delta t \\ v(t + \Delta t) &= v(t) + a_y(t) \Delta t \\ \omega(t + \Delta t) &= \omega(t) + \alpha(t) \Delta t \\ x(t + \Delta t) &= x(t) + u(t) \Delta t \\ y(t + \Delta t) &= y(t) + v(t) \Delta t \\ \theta(t + \Delta t) &= \theta(t) + \omega(t) \Delta t\end{align} \]

or written a bit more compactly:

\[\begin{align} u’ &= u + a_x \Delta t \\ v’ &= v + a_y \Delta t \\ \omega’ &= \omega + \alpha \Delta t \\ x’ &= x + u \Delta t \\ y’ &= y + v \Delta t \\ \theta’ &= \theta + \omega \Delta t\end{align} \]

Apparently, doing this would work okay, but using symplectic Euler integration is a bit more stable. It uses the updated velocities for the position update:

\[\begin{align} u’ &= u + a_x \Delta t \\ v’ &= v + a_y \Delta t \\ \omega’ &= \omega + \alpha \Delta t \\ x’ &= x + u’ \Delta t \\ y’ &= y + v’ \Delta t \\ \theta’ &= \theta + \omega’ \Delta t\end{align} \]
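A symplectic Euler step is a direct transcription of those equations: update the velocities first, then integrate the positions with the new velocities. A sketch (the state layout here is my guess, not the engine's actual struct):

```cpp
#include <cassert>
#include <cmath>

using f32 = float;

// Illustrative 2D rigid-body state.
struct State2D {
    f32 x, y, theta;    // position and orientation
    f32 u, v, w;        // linear and angular velocity
    f32 ax, ay, alpha;  // linear and angular acceleration
};

// Symplectic Euler: velocities first, then positions using the
// *updated* velocities.
void SymplecticEulerStep(State2D* s, f32 dt) {
    s->u += s->ax * dt;
    s->v += s->ay * dt;
    s->w += s->alpha * dt;
    s->x += s->u * dt;
    s->y += s->v * dt;
    s->theta += s->w * dt;
}
```

Switching to plain Euler would just mean integrating the positions before the velocity updates; the symplectic variant costs nothing extra and tends to be more stable.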

Our physics system will perform this update for every dynamic body in every game tick. After doing so, it needs to handle collisions.

Properly handling collisions is a whole big field and there is potentially a lot to talk about. In an ideal sense, if we were to identify a collision, we’d back up in time to where the collision first occurred, and handle it there. That’s more accurate, but also complicated. In a scene with a bunch of objects and a bunch of collisions, we don’t want to be sorting by a bunch of collision times.

Instead, I’ll be taking inspiration from Verlet integration and enforce constraints via iteration. A single tick of the physics system becomes:

void PhysicsEngine::Tick(f32 dt) {
    // Euler integration
    for (auto& rigid_body : rigid_bodies_) {
        EulerIntegrationStep(&rigid_body.state, dt);
    }

    for (int i = 0; i < 4; i++) {
        EnforceConstraints();
    }
}

We’re running Euler integration once, then running several iterations of constraint enforcement! This iterative approach helps things shake out because oftentimes, when you resolve one constraint, you start violating another. It’s a pretty messy way to do things, but it’s so simple to implement and can be surprisingly robust.

What constraints will we enforce? For now we’re only going to prevent polygons from interpenetrating. In general we can enforce a bunch of other types of constraints, most notably joints between bodies.

Collision Detection

Alright, all that pretext to get to the meat and potatoes of this post.

In order to prevent our rigid bodies from interpenetrating, we need to detect when they collide:

We can do this in a pairwise manner, only considering the world-space polygons of two rigid bodies at once.

We start by observing that two polygons are in collision if their interiors overlap. While that may be technically true, it doesn’t really tell us how to efficiently compute intersections.

A more useful observation is that two polygons A and B are in collision if

  • A contains any vertex of B, or
  • B contains any vertex of A, or
  • any edge in A intersects any edge in B.

This definition actually works for nonconvex as well as convex polygons (but don’t ask me about degenerate polygons that self-intersect and such). Unfortunately, it’s a bit inefficient. If A has \(n\) vertices and B has \(m\) vertices, then we’d be doing \(n+m\) polygon–point collision checks and then \(nm\) edge intersection checks. If we keep our polygons simple then it isn’t all that bad, but there are better methods.
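As an aside, the polygon–point check used above is itself simple for convex polygons: a point is inside if it lies to the left of (or on) every counter-clockwise edge. A sketch with plain structs rather than the engine's types:

```cpp
#include <cassert>

using f32 = float;
struct Vec2f { f32 x, y; };

// 2D cross product of (b - a) and (p - a); positive if p is to the
// left of the directed edge a -> b.
f32 EdgeSide(Vec2f a, Vec2f b, Vec2f p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// A point is inside a convex polygon with counter-clockwise vertices
// if it is on the left side of (or on) every edge.
bool ContainsPoint(const Vec2f* pts, int n, Vec2f p) {
    for (int i = 0; i < n; i++) {
        Vec2f a = pts[i];
        Vec2f b = pts[(i + 1) % n];
        if (EdgeSide(a, b, p) < 0.0f) {
            return false;
        }
    }
    return true;
}
```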

One go-to method for 2D polygon collision checking is the separating axis theorem, which states that two convex shapes are not in collision if and only if you can draw a separating line between them. (Sometimes these things seem kind of obvious in retrospect…):

We can draw a separating line between the two polygons on the left, but cannot for the colliding ones on the right.

One direct consequence of the separating axis theorem is that if you can draw a separating line, then the projections of the two shapes onto the axis perpendicular to that line will also have a gap:

Similarly, if you project the two shapes in any direction that they cannot be separated in, then their projections will not have a gap:

For convex polygons that are not colliding, a separating axis can always be constructed along one of the polygons’ edges. That is great: it means we only need to consider \(m + n\) possible separating axes.

Our collision detection algorithm is thus:

  • project both polygons onto all edge normals
  • if there are any gaps, then they do not intersect
  • otherwise, they intersect

Projecting a polygon along an axis is pretty easy:

struct PolygonProjection {
    f32 lo;
    f32 hi;
};

PolygonProjection ProjectPolygonAlongDirection(
    const ConvexPolygon& polygon,
    const Vec2f normal)
{
    PolygonProjection proj;
    proj.lo = std::numeric_limits<f32>::infinity();
    proj.hi = -std::numeric_limits<f32>::infinity();
    for (u8 i = 0; i < polygon.n_pts; i++) {
        const f32 d = Dot(normal, polygon.pts[i]);
        proj.lo = std::min(d, proj.lo);
        proj.hi = std::max(d, proj.hi);
    }
    return proj;
}

Notice that the direction we pass in is the edge normal, not the edge tangent. This is because the normal is used to measure the distance away from the separating axis.

Two projections have a gap if proj_a.hi <= proj_b.lo || proj_b.hi <= proj_a.lo. Otherwise, they overlap.

Our collision detection algorithm is thus:

bool NoGapAlongAxesOfA(
    const ConvexPolygon& A,
    const ConvexPolygon& B)
{
    Vec2f a1 = A.pts[A.n_pts-1];
    for (u8 i = 0; i < A.n_pts; i++) {
        const Vec2f& a2 = A.pts[i];
        const Vec2f tangent = Normalize(a2 - a1);
        const Vec2f normal = Rotr(tangent);

        PolygonProjection proj_a =
            ProjectPolygonAlongDirection(A, normal);
        PolygonProjection proj_b =
            ProjectPolygonAlongDirection(B, normal);
        if (proj_a.hi <= proj_b.lo || proj_b.hi <= proj_a.lo) {
            return false; // a gap, thus a separating axis
        }
        a1 = a2;
    }
    return true; // no gap along any of A's axes
}

bool Collides(
    const ConvexPolygon& A,
    const ConvexPolygon& B)
{
   return NoGapAlongAxesOfA(A, B) && NoGapAlongAxesOfA(B, A);
}

Penetration Vectors

Once we have determined that two polygons are colliding, we need to separate them. In order to do that, we need to find the penetration vector, or the shortest vector that can be added to one of the polygons such that they no longer overlap:

Rather than figuring this out from scratch, let’s modify our collision-checking routine to also figure this out.

When we project the two polygons onto a direction, the overlap of their projections is the distance we would need to move them along that direction to separate them:

We can thus find the penetration vector by keeping track of the smallest overlap among all of the projections we do!

struct CollisionResult {
    bool collides;   // whether there is a collision
    f32 penetration; // the length of the penetration vector
    Vec2f dir;       // the unit penetration direction
};

// A helper function that projects along all of the edge normals of A.
// Sets retval->collides to false if a separating axis is found, and
// otherwise tracks the smallest penetration seen so far.
void CollisionHelper(
    const ConvexPolygon& A,
    const ConvexPolygon& B,
    CollisionResult* retval)
{
    Vec2f a1 = A.pts[A.n_pts-1];
    for (u8 i = 0; i < A.n_pts; i++) {
        const Vec2f a2 = A.pts[i];
        const Vec2f tangent = Normalize(a2 - a1);
        const Vec2f normal = Rotr(tangent);

        const PolygonProjection proj_a =
            ProjectPolygonAlongDirection(A, normal);
        const PolygonProjection proj_b =
            ProjectPolygonAlongDirection(B, normal);
        if (proj_a.hi <= proj_b.lo || proj_b.hi <= proj_a.lo) {
            retval->collides = false;
            return; // found a gap, thus no collision
        }

        // Keep track of the smallest penetration
        const f32 penetration = std::min(
            std::abs(proj_a.lo - proj_b.hi),
            std::abs(proj_a.hi - proj_b.lo));
        if (penetration < retval->penetration) {
            retval->penetration = penetration;
            retval->dir = normal;
        }

        a1 = a2;
    }
}

CollisionResult Collides(
    const ConvexPolygon& A,
    const ConvexPolygon& B)
{
   CollisionResult retval = {};
   retval.collides = true;
   retval.penetration = std::numeric_limits<f32>::infinity();
   CollisionHelper(A, B, &retval);
   if (!retval.collides) return retval;
   CollisionHelper(B, A, &retval);
   return retval;
}

Collision Manifolds

Our collision resolution logic can separate the polygons, but that isn’t enough to get realistic physics. We want to affect more than just their positions, but also their velocities and angular states. A collision should result in an impulse force that changes each body’s velocity and angular acceleration.

In 2D, the torque is the cross product of the lever arm (the vector from the rigid body's center of mass to the point where the force is applied) and the force. That means we need that lever arm, which means knowing where the force is applied. So far we've only calculated the penetration normal, not where the penetration occurred.

Properly determining a collision response requires computing the contact point or contact manifold. Erin Catto has a nice GDC 2007 slide deck on this, but I wasn't able to track down the actual talk that goes along with it, so it leaves something to be desired.

The contact manifold represents the points at which two objects come into contact. Ideally that would be the very first points that touch as two objects collide, but because of our discrete time steps, we have to work with partial overlaps:

In some cases, a contact manifold should be an entire edge, but we’ll be approximating it using at most two points.

The contact manifold can be constructed using either polygon as a reference. We’ll be moving the polygons out of collision afterwards, so the contact manifold will be roughly the same independent of which polygon we choose:

The reference polygon will be the one with the edge that the penetration vector is normal to. We’ll call that edge the reference edge. The other polygon is called the incident polygon, and we’ll need to identify an incident edge that crosses the reference edge.

We will start by running our latest collision resolution algorithm, which gives us a penetration unit vector and magnitude. We have to slightly modify that code to keep track of which polygon it came from, so that we know which polygon is the reference polygon.

If it determines that there is no collision, we are done. Otherwise, we then identify the incident edge, which shall be the edge in the incident polygon whose outward normal is most opposed to the reference edge normal:

int inc_edge_index = 0;
int inc_edge_index_succ = 0;
f32 min_dot = std::numeric_limits<f32>::infinity();

int i_prev = I->n_pts - 1;
for (u8 i = 0; i < I->n_pts; i++) {
    const Vec2f normal_i = Rotl(Normalize(I->pts[i] - I->pts[i_prev]));
    const f32 dot = Dot(normal, normal_i);
    if (dot < min_dot) {
        min_dot = dot;
        inc_edge_index = i_prev;
        inc_edge_index_succ = i;
    }
    i_prev = i;
}

We then clip the incident edge to lie between the end caps of the reference edge:

This clipping is done in two steps — first by clipping the incident edge (which is a line segment) to lie on the negative side of the left halfspace and then again to lie on the negative side of the right halfspace. We can clip a line segment to a halfspace using the dot product of the endpoints with the halfspace normal vector (which is tangent to the reference edge) and comparing that to the dot product of the reference edge endpoint.

struct ClipResult {
   int n_pts;
   Vec2f pts[2];
};
ClipResult ClipSegmentToHalfspace(
    const Vec2f a,
    const Vec2f b,
    const Vec2f plane_normal,
    const f32 plane_offset,
    const int ref_edge_index) {

    ClipResult retval = {};

    // Compute the distances of the end points to the plane
    f32 dist_a = Dot(plane_normal, a) - plane_offset;
    f32 dist_b = Dot(plane_normal, b) - plane_offset;

    // If the points are on the negative side of the plane, keep them
    if (dist_a <= 0.0f) {
        retval.pts[retval.n_pts] = a;
        retval.n_pts += 1;
    }
    if (dist_b <= 0.0f) {
        retval.pts[retval.n_pts] = b;
        retval.n_pts += 1;
    }

    // If the points are on different sides of the plane, 
    // then we need to find the intersection point. 
    // (We don't have to do anything in the case that the
    // points are both on the positive side of the plane.)
    if (dist_a * dist_b < 0.0f) {
        // Keep the intersection point
        f32 interp = dist_a / (dist_a - dist_b);
        retval.pts[retval.n_pts] = a + interp * (b - a);
        retval.n_pts += 1;
    }

    return retval;
}

If we fully clip the segment away (which shouldn’t happen), then we don’t have a real intersection.

The collision manifold will consist of all endpoints in the clipped incident edge that penetrate across the reference edge. This explains how we can end up with either a 1- or a 2-point collision manifold. We can once again use a dot product and compare it to a base offset:

f32 face_offset = Dot(reference_normal, reference_a);
for (int i = 0; i < 2; i++) {
    // Get the distance of the ith endpoint along the normal from the
    // reference face. A negative value indicates penetration into the
    // reference polygon.
    f32 separation = Dot(reference_normal, clip.pts[i]) - face_offset;
    if (separation <= kEps) {
        manifold.points[manifold.n_pts] = clip.pts[i];
        manifold.n_pts += 1;
    }
}

I coded up a visualization of this computation to help with debugging. The red vector is the unit normal along the reference edge, the little aligned blue vector is the same vector scaled by the penetration, the little red square is the 1 point in this collision manifold, and the little green square is the other endpoint of the clipped incident edge, which in this case was not accepted into the collision manifold:

We can set up a case where there are two points in our collision manifold:

I can interact with my visualization by translating or rotating one square around:

Collision Resolution

Now that we can detect collisions, calculate penetration vectors, and calculate a collision manifold, we can finally do the work of resolving a collision. Wikipedia has a good overview of this, but I’ll write it out here for good measure.

If we have a collision, then our contact manifold will contain one or two contact points. For starters let’s work with the case of a single contact point \(\boldsymbol{c}\). We’ll assume that body A is the reference polygon.

We can simulate the effect of a collision by imparting opposed impulses to the colliding bodies. This impulse is the integral of the force acting on each body over that collision timestep.

The impulse force depends on the relative velocity between the two objects. Let \(\boldsymbol{r}_A\) and \(\boldsymbol{r}_B\) be the vectors from each body’s center of mass (usually its centroid) to the contact point. Then we can calculate the relative velocity at the contact point:

\[\boldsymbol{v}_\text{rel} = (\boldsymbol{v}_B + \omega_B \times \boldsymbol{r}_B) - (\boldsymbol{v}_A + \omega_A \times \boldsymbol{r}_A)\]

The impulse is zero if this relative velocity along the normal is positive (the bodies are already separating).

The normal impulse for the objects is:

\[j = \frac{-(1+e)\boldsymbol{v}_\text{rel} \cdot \boldsymbol{n}}{\frac{1}{m_A} + \frac{1}{m_B} + \frac{(\boldsymbol{r}_A \times \boldsymbol{n})^2}{I_A} + \frac{(\boldsymbol{r}_B \times \boldsymbol{n})^2}{I_B}}\]

where \(e \in [0,1]\) is the coefficient of restitution that determines how elastic the collision is, \(m_A\) and \(m_B\) are the polygon masses, and \(I_A\) and \(I_B\) are their moments of inertia.

We then update each polygon’s linear and angular velocity according to:

\[\begin{align} \boldsymbol{v}_A' &= \boldsymbol{v}_A - \frac{j}{m_A} \boldsymbol{n} \\ \boldsymbol{v}_B' &= \boldsymbol{v}_B + \frac{j}{m_B} \boldsymbol{n} \\ \omega_A' &= \omega_A - \frac{j}{I_A} \left(\boldsymbol{r}_A \times \boldsymbol{n}\right) \\ \omega_B' &= \omega_B + \frac{j}{I_B} \left(\boldsymbol{r}_B \times \boldsymbol{n} \right)\end{align}\]

When there are two points in the contact manifold, one might be tempted to approximate the collision with their mean. I found that that doesn’t quite work correctly in all cases. For example, in the situation below, the rotation causes the estimated contact point to move right, which imparts a torque in the wrong direction:

In order to fix this, I process both contact points, but weight the impulse in proportion to the penetration depth.

Positional correction is similar. To resolve the collision, we simply need to separate the two bodies along the penetration unit normal by a distance equal to the penetration \(\delta\). We could split the difference and shift the incident polygon by \(\delta / 2\) and the reference polygon by \(-\delta / 2\) along the normal, but we also want to account for their relative masses.

To account for the masses, we'd want to move the incident polygon by

\[\delta \frac{m_A}{m_A + m_B}\]

and the reference polygon by

\[-\delta \frac{m_B}{m_A + m_B}\]

where \(m_B\) is the mass of the incident polygon and \(m_A\) is the mass of the reference polygon. The two movements together sum to \(\delta\), and the heavier polygon is moved less.

However, as stated earlier, rather than fully resolve the collision, we will perform multiple, smaller resolution steps. If our penetration has a magnitude \(\delta\), we only apply \(\beta \delta\) where \(\beta\) is between 0 and 1, typically something like 0.4. We move the incident polygon in the penetration normal direction by that amount, and the reference polygon by the opposite amount. This will help jostle multiple conflicting collisions in a nice way such that they hopefully find a stable, valid solution.

For a nice code reference for all this, I recommend Randy Paul’s Impulse Engine.

Conclusion

And that’s enough to get us some pretty solid rigid-body physics!

We actually did quite a lot in this post. We set off to do polygon collision resolution, started with polygon intersections, realized we also needed to figure out the penetration vector, did that, and then realized that to get appropriate impulses we needed to compute approximate collision manifolds. We then used that information to compute impulses, apply them, and to physically separate any colliding bodies.

We don’t have any friction. It’s actually pretty easy to add, but I figured this post was already long enough.

We haven’t yet talked about any sort of spatial partitioning or other broad-phase collision tactics. No special memory allocation strategies, stable contact points, or joints. As always, there is a whole world of potential complexity out there that can be tackled if and when it’s needed and we actually want to tackle it.

I highly recommend the Box2D repo as a helpful reference. It is a bit more mature, which can lead to some excessive complexity, but on the whole it is actually very cleanly written and extremely well documented.

Happy coding.

A Transformer Sokoban Policy

I’ve posted about Sokoban before. It is a problem I keep coming back to.

Sokoban is very simple to specify, but its difficulty can be readily scaled up. It is easy to accidentally push blocks into an inescapable kerfuffle, and, more interestingly, it isn’t always obvious when you’ve done it.

Moreover, I think people solving Sokoban often beautifully exhibit hierarchical reasoning. The base game is played with up/down/left/right actions, but players very quickly start thinking about sequences of block pushes. We learn to “see” deadlock configurations and reason about rooms and packing orders. This ability to naturally develop higher-order thinking and execute on it is something I want to understand better. So I keep coming back to it.

In this blog post, I’ll recount training a Sokoban policy using deep learning, and more specifically, using the transformer architecture. I started pretty small, developing something that I hope to flesh out more in the future.

The Problem

This blog post is about policy optimization. A policy \(\pi(s)\) maps states to actions. These states and actions must exist in some sort of problem — i.e. a search problem or Markov decision process. It is often a good idea to define the problem before talking about how to structure the policy.

Our problem, of course, is Sokoban. The state space is the space of Sokoban boards. Each board is a grid, and each tile in the grid is either a wall or a floor; each floor tile may be a goal and may contain either a box or the player. For this post I’m working with 8×8 boards.

\[\text{state space}\enspace\mathcal{S} = \text{Sokoban boards}\]

Normal Sokoban has an action space of up/down/left/right. At least, that is what we’re told. I think that isn’t quite true. In normal Sokoban, the player can get stuck in a deadlock. When that happens, they typically reset the level or hit undo. So I’m going to use an action space of up/down/left/right/undo. Trajectories with undos are of course never optimal, but you nevertheless might need to use them when you’re exploring.

\[\text{action space}\enspace\mathcal{A} = \left\{\text{up}, \text{down}, \text{left}, \text{right}, \text{undo}\right\}\]

Notice that we’re working directly with player movements, not with box pushes. Pretty much no-one ever works directly with player movements, because the search space gets so large so quickly. That being said, hierarchical reasoning is the point of my investigations, so I’m starting at the bottom of the reasoning hierarchy.

Our goal is to solve Sokoban problems. For now, let’s just say that we get a reward of 1 whenever we solve a board (all boxes are on goals), and zero reward otherwise. Given two action sequences that solve a board, we prefer the shorter one.

\[R(s,a) = \begin{cases}1 & \text{if } \texttt{issolved}(s) \\ 0 & \text{otherwise}\end{cases}\]

We won’t directly make use of this reward function in this blog post, but it will come into play later.

The Policy

A typical policy would take the current state and try to find the best action that leads toward having all boxes on goals. Its architecture would broadly look like:

The output is a probability distribution over the actions. There are 5 actions (up,down,left,right, and undo), so this is just a categorical distribution over 5 values. That’s the bread and butter of ML and we can easily produce that with a softmax output layer.

The input is a board state. I decided to represent boards as \(8\times 8\times 5\) tensors, where the first two dimensions are the board length and width, and then we have 5 channels that indicate whether a given tile is a wall space, a floor space, has a goal, has a player, and has a box. Altogether that is 320 scalars, which is somewhat long and sparse. However, it contains a lot of structured spatial information that we need to make use of.

The first architectural decision is to run the board state through an encoder head to produce a latent vector. It’s basically a 5-channel image, so I’m using convolutional layers, just like you would for an image input. These can learn consistent features that get applied in parallel across the input.

This could work. We’d just stick some feedforward layers with maybe some residual connections into that trunk, and we’d be able to train a policy to place all boxes on goals. I didn’t quite want to go that route though, because I wanted the network to be slightly more configurable.

My eventual goal is to do hierarchical reasoning, so I want to be able to have another, higher-level policy use this lower-level policy as a tool. As such, I want an input that it can provide in order to steer the policy. To this end, I’m providing an additional goal input that tells the neural network what we want to happen:

In Sokoban it’s pretty important for the player to know when a board just isn’t solvable. It’s really easy to end up in a deadlock state:

We want the model to be able to learn when this happens, and to be able to report that it thinks this is the case.

I thus include an additional output, which I’m calling the nsteps output. It predicts the number of steps left in the solution of the puzzle. If the network thinks there are an infinite number of steps left, then the board is not solvable.

In Stop Regressing, the authors suggest avoiding (continuous) regression and to train over discrete targets instead. As such, my nsteps output will actually output a discrete value:

bucket = Int(clamp(round(log(nsteps+1)), 0, 5)) + 1

which can take on up to 6 values:

and a 7th bin for unsolvable.
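For illustration, here is the same bucketing written out in C++ (natural log, matching the Julia one-liner above; the function name is mine):

```cpp
#include <cmath>

// Map a remaining-step count to one of 6 discrete buckets (1..6).
// A 7th bucket is reserved for unsolvable boards.
int NStepsBucket(int nsteps) {
    double v = std::round(std::log(nsteps + 1.0));
    if (v < 0.0) v = 0.0;
    if (v > 5.0) v = 5.0;  // clamp to the last solvable bucket
    return static_cast<int>(v) + 1;
}
```

So a solved board (0 steps left) lands in bucket 1, short solutions land in the low buckets, and very long solutions saturate at bucket 6.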

That leaves our network as (with the softmaxes implied):

This architecture is a perfectly reasonable policy network. In fact, we’ll compare against this network later on. However, I was also very interested in working with transformers, so I built a transformer version of this network:

This architecture is roughly the same, except you have the massively parallel transformer that:

  • can train with supervised learning more efficiently, by receiving an entire sequence at once
  • can use past information to inform future decisions
  • can easily be scaled up or down based on just a few parameters
  • lends itself nicely to beam search for exploration

I am initially working with relatively small models consisting of 3-layer transformers with 16-dimensional embedding spaces, a maximum sequence length of 32, and 8-headed attention.

Training

This initial model is trained with supervised learning. In order to do that, we need supervised data.

Typical supervised learning would provide a starting state, the goal state, and an optimal set of actions to reach it. My training examples are basically that, but they also include examples that cannot be solved, as well as examples with bad moves plus a training mask that lets me mask those moves out during training. (This theoretically lets the network learn undos and how to recover from bad moves.)

start:
########
#      #
#      #
# . bp #
#      #
#      #
#      #
########

goal:
########
#      #
#      #
# B    #
#      #
#      #
#      #
########

solved
moves: LL
isbad: --

solved
moves: xLL
isbad: x--


solved
moves: UxLL
isbad: xx--

I have a few handcrafted training entries, but most of them are randomly generated problems solved with A*. I generate 1-box problems by starting with an 8×8 grid of all-walls, producing two random rectangular floor sections (rooms), randomly spawning the goal, box, and the player, and then speckling in some random walls. That produces a nice diversity that has structure, but not too much structure.

Most of these boards aren’t solvable, so when I generate training examples, I set a target ratio of solved to unsolved boards and keep generating until I have enough of each type. I have 4000 generated problems with solutions and 4000 generated problems without solutions, for 8,000 generated training examples overall.

I was able to cheaply boost the size of the training set by recognizing that a problem can be rotated 4 ways and can be transposed. Incorporating these transforms (and applying the necessary adjustments to the solution paths) automatically increases the number of training examples by 8x.
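As a sketch of what one of those transforms involves, here is a hypothetical 90° rotation of a single board channel together with the matching action remap (the board and action encodings here are made up for illustration, not my actual data format):

```cpp
#include <array>

constexpr int N = 8;
using Channel = std::array<std::array<int, N>, N>;  // one channel of the 8x8 grid

enum Action { Up, Down, Left, Right, Undo };

// Rotate one channel of the board 90 degrees clockwise.
Channel RotateCW(const Channel& in) {
    Channel out{};
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            out[c][N - 1 - r] = in[r][c];
    return out;
}

// Remap a move to match the rotated board. Undo is unaffected.
Action RotateActionCW(Action a) {
    switch (a) {
        case Up:    return Right;
        case Right: return Down;
        case Down:  return Left;
        case Left:  return Up;
        default:    return Undo;
    }
}
```

Applying RotateCW to every channel and RotateActionCW to every move in the solution path yields one of the 8 symmetric copies; composing rotations and a transpose gives the rest.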

I train with a standard Flux.jl loop using the default Adam optimizer. I shuffle the training set with each epoch, and train with a batch size of 32. I’m using 1% dropout.

The training loop is set up to create a new timestamped directory with each run and save a .json file with basic metrics. I compute both the policy and nsteps loss on each batch and log these for later printing. This gives me nice training curves, effectively using the code from the Flux quick start guide:

This plot uses a log-scale x-axis, which gives it some distortion. Training curves are often plotted this way because a lot of stuff happens early on and many more updates have to be made later on to get similar movement.

I’ve been consistently seeing the plots play out like they do above. At first the loss drops very quickly, only to plateau between 100 and 1000 batches, and then to suddenly start reducing again. The model seems to suddenly understand something around the 1000-batch mark. Based on how flat the policy loss is, it seems to be related to that. Maybe it learns some reliable heuristics around spatial reasoning and walking to boxes.

The curves suggest that I could continue to train for longer and maybe get more improvement. Maybe that’s true, and maybe I’ll look into that in the future.

Validation

I have two types of validation – direct supervised metrics and validation metrics based on using the policy to attempt to solve problems. Both of these are run on a separate validation dataset of 4000 randomly generated problems (with a 50-50 split of solvable and not solvable).

The supervised metrics are:

  • accuracy at predicting whether the root position is solvable
  • top-1 policy accuracy – whether the highest-likelihood action matches the supervised example
  • top-2 policy accuracy – whether the supervised example action is in the top two choices
  • top-1 nsteps accuracy – whether the highest-likelihood nsteps bucket matches the example
  • top-2 nsteps accuracy – ditto, but top two choices

Note that a Sokoban problem can have multiple solutions, so we shouldn’t expect a perfect policy accuracy. (i.e. going up and then left is often the same as going left and then up).

I then use the model to solve each problem in the validation dataset using beam search. This lets me actually test efficacy. Beam search was run with a beam width of 32, which means I’m always going to find the solution if it takes at most 2 steps (there are only \(5^2 = 25\) two-step action sequences). The search keeps the top candidates based on the running sum of the policy likelihood. (I can try other formulations, like scoring candidates based on the predicted nsteps values).
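For concreteness, here is roughly what that beam search looks like (a hypothetical sketch; the policy and goal-test callbacks stand in for the real model and simulator):

```cpp
#include <algorithm>
#include <array>
#include <functional>
#include <vector>

struct Candidate {
    std::vector<int> actions;  // action indices taken so far
    double log_prob = 0.0;     // running sum of action log-likelihoods
};

// Beam search over action sequences. `policy` returns log-likelihoods
// for the 5 actions given a candidate's action history; `is_solved`
// tests whether a sequence solves the board.
std::vector<int> BeamSearch(
    const std::function<std::array<double, 5>(const std::vector<int>&)>& policy,
    const std::function<bool(const std::vector<int>&)>& is_solved,
    int beam_width, int max_depth)
{
    std::vector<Candidate> beam{Candidate{}};
    for (int depth = 0; depth < max_depth; depth++) {
        std::vector<Candidate> expanded;
        for (const Candidate& cand : beam) {
            const std::array<double, 5> logits = policy(cand.actions);
            for (int a = 0; a < 5; a++) {
                Candidate next = cand;
                next.actions.push_back(a);
                next.log_prob += logits[a];
                if (is_solved(next.actions)) return next.actions;
                expanded.push_back(std::move(next));
            }
        }
        // Keep only the top beam_width candidates by total log-likelihood.
        std::sort(expanded.begin(), expanded.end(),
                  [](const Candidate& a, const Candidate& b) {
                      return a.log_prob > b.log_prob;
                  });
        if ((int)expanded.size() > beam_width) expanded.resize(beam_width);
        beam = std::move(expanded);
    }
    return {};  // no solution found within max_depth
}
```

Scoring by the sum of log-likelihoods is what makes a width-32 beam exhaustive over the 25 possible two-step sequences.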

Results

I trained two models – a transformer model and a second model that does not have access to past data. The second model is also a transformer, but with a mask that hides everything except the current and goal states.

The metrics are:

                         Transformer   Baseline
Solvability Accuracy     0.977         0.977
Top-1 Policy Accuracy    0.867         0.831
Top-2 Policy Accuracy    0.982         0.969
Top-1 Nsteps Accuracy    0.900         0.898
Top-2 Nsteps Accuracy    0.990         0.990
Solve Rate               0.921         0.879
Mean Solution Length     6.983         6.597

Overall, that’s pretty even. The baseline policy that does not have access to past information is ever so slightly worse with respect to top-1 accuracy. Our takeaway is that for this problem, so far, the past doesn’t matter all that much and we can be pretty Markovian.

A 97.7% solvability accuracy is pretty darn good. I am happy with that.

The top-k policy and nsteps accuracies are pretty good too. The full transformer outperforms the baseline a bit here, but only by a bit.

The primary difference comes out when we look at the solve rate, which we obtain by actually using the policies to solve problems. The full transformer succeeds 92.1% of the time, whereas when there is no history it only solves 87.9% of the time. That’s only a 4.2% spread, but it is noticeable. I can only hypothesize, but I suspect that past information is useful for longer traversals, and that simply being able to pull in more information gives the model more space to compute with.

Differential Evolution

Algorithms for Optimization covers Differential Evolution in its chapter on population methods. It is fairly easy to specify what it does, but the combination of things it does is somewhat strange, and it isn’t all that obvious why it would work, or work particularly well. That being said, there is plenty of support for Differential Evolution in the literature, so I figured there probably is a method to this method’s madness. We’re in the process of updating the textbook for a 2nd edition, so in this post I’ll spend some time trying to build more intuition behind Differential Evolution to better present it in the text.

The Algorithm

Differential Evolution is a derivative-free population method. This means that instead of iteratively improving a single design, it iteratively improves a bunch of designs, and doesn’t need the gradient to do so. The individual updates make use of other individuals in the population. An effective population method will tend to converge its individuals on the (best) local minima of the objective function.

Differential Evolution applied to the Ackley function, proceeding left to right and top to bottom. Image from Algorithms for Optimization published by the MIT Press.

In each iteration, a new candidate individual \(\boldsymbol{x}’\) is created for every individual \(\boldsymbol{x}\) of the population. If the new candidate individual has a lower objective function value, it will replace the original individual in the next generation.

We first construct an interim individual \(\boldsymbol{z}\) using three randomly sampled and distinct other population members:

\[\boldsymbol{z} = \boldsymbol{a} + w \cdot (\boldsymbol{b} - \boldsymbol{c})\]

where \(w\) is a scalar weight.

The candidate design \(\boldsymbol{x}’\) is created via uniform crossover between \(\boldsymbol{z}\) and \(\boldsymbol{x}\), which just means that each component has a probability \(p\) of coming from \(\boldsymbol{z}\) and otherwise comes from \(\boldsymbol{x}\).

This is really easy to code up:

using StatsBase: sample

function diff_evo_step!(
    population, # vector of designs
    f,          # objective function
    p,          # crossover probability
    w)          # differential weight

    n = length(population[1]) # number of design components
    for x in population
        a, b, c = sample(population, 3, replace=false)
        z = a + w*(b-c)
        x′ = x + (rand(n) .< p).*(z - x)
        if f(x′) < f(x)
            x[:] = x′
        end
    end
    return population
end

What I presented above is a slight simplification to the normal way Differential Evolution is typically shown. First, while my implementation samples three distinct members of the population for \(\boldsymbol{a}\), \(\boldsymbol{b}\), and \(\boldsymbol{c}\), it does not guarantee that they are distinct from \(\boldsymbol{x}\). They will pretty much always be distinct from \(\boldsymbol{x}\) for sufficiently large population sizes, and if not it isn’t all that detrimental.

Second, the standard way to define Differential Evolution always forces at least one component to mutate. This isn’t important if the crossover probability is large enough in comparison to the number of components, but in practice users have found that a very low, or even zero, crossover probability with one guaranteed mutation works best for many objective functions.

What is Going On

Differential Evolution is weird. Or at the very least, it’s unintuitive. Let’s break it down.

The algorithm proceeds by replacing individuals with new candidate individuals if they are better. So far so good.

The candidate individuals are formed by three things:

  • a baseline individual \(\boldsymbol{a}\)
  • a perturbation formed by the difference \(w\cdot(\boldsymbol{b} - \boldsymbol{c})\)
  • uniform crossover

Differential Evolution gets its name from the second bullet – the difference between individuals. This is its secret sauce. It’s nice in that it provides a natural way to take larger steps at the start, when the population is really spread out and hasn’t found local minima, and smaller steps later on when the population is concentrated around a few specific regions:

We can run a version of Differential Evolution that only has this differential, using \(\boldsymbol{x}' = \boldsymbol{x} + w \cdot (\boldsymbol{b} - \boldsymbol{c})\). If we do that, we’ll get design improvements, but they’ll be local to the current individual. Including the baseline individual is going to do better:

The mean best objective function value and 1 standard deviation intervals using Differential Evolution and pure differential on the Ackley function, over 1000 trials. I’m using \(w = 0.2\) to exaggerate the effect.

Local perturbations alone are often insufficient to get individuals out of local minima and into better ones. This is where the baseline individual \(\boldsymbol{a}\) comes in. Perturbing off of a different individual than \(\boldsymbol{x}\) means we could be substituting \(\boldsymbol{x}\) with a perturbed individual in a better local minimum. When that happens, we’re basically warping \(\boldsymbol{x}\) to that better minimum:

So that’s where \(\boldsymbol{a} + w \cdot (\boldsymbol{b} – \boldsymbol{c})\) comes from. What about uniform crossover? Let’s run an experiment.

Below I show the average and \(\pm 1\) standard deviation on the best objective function value vs. iteration for 1000 trials on a bunch of different algorithm variations on the Ackley function:

  • no perturbation (p=0.5, w=0.0)
  • no baseline individual (p=0.5, w=0.5, a=x)
  • baseline Differential Evolution (p=0.5, w=0.5)
  • high mutation rate (p=0.9, w=0.5)
  • guaranteed crossover (p=1.0, w=0.5)

Without perturbations (w=0.0) we can’t really explore the space. We’re only looking at coordinate values that exist in our initial population. It’s no surprise that that one does poorly.

Leaving out the baseline individual (using a=x) is the next-worst, because that variant can’t as effectively jump individuals to new regions.

The top three performers are all Differential Evolution with varying mutation rates. We find that higher mutation rates in uniform crossover perform best. In fact, our best performance doesn’t use crossover at all, but just directly uses \(\boldsymbol{x}’ = \boldsymbol{z}\). Do we even need uniform crossover?

This table from Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces shows the parameters they used on a fairly large test suite of objective functions:

The CR column gives the crossover probability. It’s basically always zero! Recall that in official implementations, crossover in Differential Evolution is guaranteed to mutate at least one component.

According to Differential Evolution: A Practical Guide to Global Optimization, this paper somewhat confused the community, because functions tended to either be solved best with \(p \approx 0\) or with \(p \approx 1\). It wasn’t until later that people made the connection that the functions that were solved with \(p \approx 0\) were decomposable, and the functions with \(p \approx 1\) were not.

A function is decomposable if it can be written as a sum of functions using distinct subsets of the inputs, such as:

\[f(\boldsymbol{x}) = \sum_i f^{(i)}(x_i)\]

When this is the case, we can improve individual components in isolation without having to be in-sync with the other components. Small mutation rates are reliable, because we can just reliably adjust a few components at a time and step our way to a minimum. By changing fewer components we have a smaller chance of messing something else up.
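As a tiny illustration of decomposability (a made-up quadratic, not one of the benchmark functions), improving one coordinate toward its own minimum always improves the total, no matter what the other coordinates hold:

```cpp
#include <array>

// A separable objective: a sum of independent per-coordinate terms.
double f_separable(const std::array<double, 3>& x) {
    double total = 0.0;
    for (double xi : x) total += (xi - 1.0) * (xi - 1.0);
    return total;
}
```

Mutating a single component of a design toward 1 strictly lowers its term of the sum, and that improvement can then be copied to other individuals via crossover.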

Most of the interesting objective functions have a high degree of coupling, and we can’t get away with optimizing some variables in isolation. For example, a neural network mixes its inputs in a convoluted manner. In order to make progress we often have to step in multiple component directions at the same time, which for Differential Evolution requires a higher crossover probability.
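Contrast that with a toy coupled function (again my own illustration, not one of the benchmark functions): the best value of one coordinate now depends on the others, so components cannot be tuned in isolation:

```julia
# a coupled function: the cross term (x1 + x2)^2 ties the coordinates together
g(x1, x2) = (x1 + x2)^2 + x1^2

# minimizing over x1 with x2 held fixed: dg/dx1 = 2(x1 + x2) + 2x1 = 0,
# so the best x1 is a function of x2
best_x1(x2) = -x2 / 2
```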

Optimizing Decomposable and Non-decomposable Functions

Let’s run another experiment. We’ll again do 1000 trials each (population size 20) with a bunch of versions of Differential Evolution. In the first experiment, we’ll optimize a decomposable function: 10 parallel copies of the 2D Rosenbrock function (making this a 20-dimensional optimization problem):

n_copies = 10  # 10 independent two-variable copies, so 20 dimensions total
# each copy i is a 2D Rosenbrock function over (x[2i-1], x[2i])
f = x -> sum(100*(x[2i-1]^2 - x[2i])^2 + (1 - x[2i-1])^2 for i in 1:n_copies)

We’re going to vary the mutation rate. If our results match with the literature, we expect very low mutation rates to do better.

We do indeed find that low mutation rates do better. In fact, a mutation rate of zero with one guaranteed mutation is the best, closely followed by \(p = 0.1\). This does well because a small step along just one or two axes more reliably leads to an improvement than moving along many axes at once, and the “knowledge” gained by one individual can easily be copied to other individuals to improve them too.

We can do another experiment with a non-decomposable function, in this case the xor problem. Let’s optimize some basic 2-layer feedforward neural networks. These networks have 21 parameters, so this is a 21-dimensional optimization problem:

using Statistics: mean

relu(x) = max(x, 0.0)
sigmoid(x) = 1/(1 + exp(-x))

function binary_cross_entropy(y_true, y_pred; ϵ=1e-15)
    y_pred = clamp.(y_pred, ϵ, 1-ϵ)
    return -mean(y_true .* log.(y_pred) .+ (1 .- y_true) .* log.(1 .- y_pred))
end

# xor problem: classify whether the two inputs have differing signs
f = x -> begin
    inputs = randn(2, 10)  # a fresh random minibatch on every evaluation
    W1 = reshape(x[1:10], (5, 2))
    b1 = x[11:15]
    W2 = reshape(x[16:20], (1, 5))
    b2 = x[21]

    hidden = relu.(W1*inputs .+ b1)
    y_pred = vec(sigmoid.(W2*hidden .+ b2))  # flatten the 1×10 output to a length-10 vector

    truth_values = inputs .> 0.0
    y_true = truth_values[1, :] .!= truth_values[2, :]  # xor of the input signs

    return binary_cross_entropy(y_true, y_pred)
end

This optimization problem is not decomposable. Taking steps in only one axis at a time is less likely to lead to significant improvements.

Again, that is what we see! Here the single guaranteed mutation is the worst. Interestingly, full crossover with \(p=1.0\) is initially pretty good but then tapers off, so there does appear to be some advantage to some crossover randomness. The lowest loss is most consistently achieved with \(p=0.5\), but \(p=0.9\) does nearly as well.

Why is full crossover not as good? I suspect that this version concentrates its population into a local region more quickly, whereas having some stochasticity keeps the population more spread out. Let’s plot the mean pairwise distance in the population for this experiment:
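The diversity metric here is just the average Euclidean distance over all pairs of individuals. A straightforward way to compute it, assuming the population is stored as a vector of coordinate vectors:

```julia
using LinearAlgebra: norm

# mean Euclidean distance over all pairs of individuals in the population
function mean_pairwise_distance(pop)
    m = length(pop)
    total = sum(norm(pop[i] - pop[j]) for i in 1:m-1 for j in i+1:m)
    return total / (m*(m-1)/2)
end
```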

Yup! That’s exactly what we see. With full crossover, our population prematurely converges in about 20 iterations. With \(p=0.9\) it also converges pretty quickly, but it apparently has enough time to find a better solution. The other crossover rates are able to avoid premature convergence.

Conclusion

So now we know a thing or two about what makes Differential Evolution work. To recap, it:

  • uses the differential between two individuals to provide perturbations, which let us explore
  • automatically produces smaller perturbations as the population converges, since the differentials shrink with it
  • applies the perturbation to a baseline individual, which can automatically warp designs toward more favorable regions
  • uses uniform crossover, where
    • \(p \approx 0\) forces steps in a small set of coordinates, which is useful for decomposable functions
    • \(p \geq 0.5\) allows steps in many coordinates while preventing premature convergence

So all of the ingredients have their place! Now it’s time to condense these learnings and add them to the next edition of the textbook.

Cheers.