Category Archives: Devlogs

Nooks and Crannies

The Witness is a really good game and I can’t stop playing it and I think I might have to cancel the Gunmetal Arcadias so I can spend all my time playing The Witness. :V

January was pretty chaotic, as I tried to wedge in every last little feature I needed for vertical slice on top of the content creation tasks I already had scheduled. This month, as I return to a normal development groove, I’m trying to be better about accounting for those sorts of unexpected tasks and bug fixes so that maybe I don’t have to spend every evening and weekend working.

GunPreq 2016-02-05 17-06-57-064

I’ve begun grayboxing a few more levels (“Basil” and “Cilantro,” in keeping with my herbs and spices motif). Having previously finished the “Cardamom” level for vertical slice, I have a better idea of what works and what doesn’t, and I’m looking for ways I can play to my strengths. One of my goals (as I discussed previously in this episode, around the 14-minute mark) has been to develop a sort of verse/chorus dynamic throughout each level, not only in the pacing of tension and release, but also in the direction of travel and the aesthetic design of each location. You can see this in Cardamom, where long horizontal stretches are divided by vertical rooms. There’s a return to the surface following an underground sequence, and this stretch is once again concluded by a downward traversal before the final boss. I don’t necessarily think Cardamom is the best representation of this concept, but hopefully the ideas are coming through in some fashion, and I’m hoping to improve this sort of dynamic with each level I make.

As I’ll discuss more in Wednesday’s video, I’m curious to find a format that works well for the sorts of content creation tasks I’m going to be doing over the next four or five months. Since these are generally going to be the same sorts of tasks week after week and not really conducive to the sort of problem-solving blogs I often write, I may start taking a more in-depth look at very specific areas. I’m also going to start posting my actual schedules for the week here.

Upcoming tasks for the week of February 8, 2016:

  • Monday: Graybox “Mint”
  • Tuesday: Graybox “Fennel”
  • Wednesday: Graybox “Rosemary”
  • Thursday: Implement two new enemy types
  • Friday: Record Let’s Make Gunmetal Arcadia, Episode 26, write next week’s blog, additional features and bug fixes as time allows

“Rosemary” should be an interesting level, as I had already put some time into it prior to vertical slice to get a feel for what grayboxing would actually entail. Barring any surprises, it will be the last level I graybox for Gunmetal Arcadia Zero, and I’m sort of curious to see how it will shape up and where it will fit in the narrative. (As I’ve mentioned before, I’m trying to develop these graybox maps in such a way that they could potentially be arranged in any order. In practice, I usually have at least a vague sense of what I’m building, but “Rosemary” is legitimately a wildcard at this point.)

At some point in there, I’m also going to have to figure out how to transition from one level to the next, possibly with the option for Ninja Gaiden-style cutscenes in between and after the final level. That should hopefully be the last obstacle standing in the way of a (very rough) complete playthrough of Gunmetal Arcadia Zero, and I can start to get a sense of the scope of the entire thing, which is going to be crucial as I move into populating and balancing levels.

Oh, in other news, my GDC talk “Building a Better Jump” (a part of the Math for Game Programmers tutorial) was approved this week, so yay that! If you’re attending GDC this year, stop by and watch me try to cram 35 minutes of content into a 25-minute talk! :V

Verticality

I’ve spent the last two weeks crunching on a vertical slice milestone, and it’s getting close to being finished. My deadline is Wednesday; Thursday we’ll be headed down to San Antonio for PAX South, and I’ll try to have a playable build ready in time for next week’s update.

GunPreq 2016-01-21 11-17-33-395-CCC

The goal of this milestone (besides having a nice build that I can point to and go, “See that? That’s the game I’m making!”) has been to prove that my schedule is viable. I’m aiming for a June release, and in order to complete the game by that time, I’ll need to produce content at this same rate for the next four or five months. Maybe.

I haven’t met all my goals for this milestone, and it’s unlikely I will. I’m happy with how the vertical slice level looks, but I had intended to do more — much more — with environment art and decorations. And I just haven’t had the time to complete everything.

GunPreq 2016-01-21 11-17-33-395-AAA

I knew when I was scheduling tasks that there would be a lot of work I hadn’t accounted for. In fact, I’d estimate about half my time these last couple of weeks has been spent on these unscheduled tasks. These included getting saved games working, handling death and game over states, wrangling random number generators to behave correctly in all cases, and various tweaks, polish, and bug fixes too minor and numerous to mention here.

Some of those tasks were one-time-only. Most of them, probably. Getting saved games working, for example: barring minor polish or bug fixes, that’s not something I’ll have to touch again before shipping. But I know between now and June, there will be other tasks I haven’t allotted time for, and I can’t guarantee I’ll have any more of an opportunity to work on things like environment art than I’ve had this milestone.

GunPreq 2016-01-21 16-00-15-064

So, considering the triangle of resources, scope, and schedule, what are my options? My resources are fixed. My schedule is pretty much fixed; there might be a little wiggle room in when exactly I launch, but for now, I’m disregarding that. So, scope. Cut scope. Obviously.

Maybe I don’t need as much decorative environment art as I thought. This level is already looking good (and comparatively miles ahead of where Super Win was at any point in its development). So maybe that’s good enough. Another possibility is that I lean harder on content reuse. I’ve been scheduling time for drawing a completely unique tileset for each level, and that might not be necessary. For one thing, I already have a number of tilesets in progress, dating back to nearly a full year ago, when I replaced tiles stolen from Faxanadu with an original set. For another, I suspect I’ll be able to get a lot of mileage out of swapping the palettes of my existing sets. It’s possible that a core set of tiles consisting of common elements like bricks, drawn in a variety of colors and accented with level-specific decorations, might be enough.
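To illustrate the kind of reuse I mean, here’s a tiny sketch of the palette-swap idea in C++. None of these names come from my engine, and my actual pipeline is more involved, but the core concept is just that tiles store palette indices rather than final colors, so one brick tileset can be recolored per level by swapping out a small lookup table.

```cpp
#include <array>
#include <cstdint>

// Hypothetical tile layout: each pixel is an index into a small palette, not a final color.
struct Tile
{
    static const int kSize = 16;
    std::array<std::uint8_t, kSize * kSize> Indices;   // values 0-3, NES-style four-color tiles
};

// A palette is just a lookup table of packed RGBA colors.
typedef std::array<std::uint32_t, 4> Palette;

// Resolving a pixel at draw time: same tile data, different palette per level.
inline std::uint32_t ResolvePixel(const Tile& tile, const Palette& palette, int x, int y)
{
    return palette[tile.Indices[y * Tile::kSize + x]];
}

// e.g., the same brick tile drawn with a warm surface palette in one level and a cool
// underground palette in another, with no new tile art required.
```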

GunPreq 2016-01-21 11-17-33-395-BBB

I’ll be figuring all this scheduling stuff out as I wrap this vertical slice milestone and move on to the remainder of production. I don’t expect things will slow down at all; I said a couple weeks ago it would be a mad dash to the finish line, and it feels like that’s still going to be the case, but I have some great forward momentum. It’s exciting seeing all the bits and pieces I’ve been working on in isolation for so long finally coalesce into something resembling an actual game.

Inertia

Gather ’round, kids, it’s time for yet another blog about serialization!

If it feels like I’ve been leaning hard on that topic in these blogs, that’s only because it’s proportionate to the amount of time and effort I’ve invested in these systems. Serialization in Gunmetal Arcadia is shaping up to be the crowning technical achievement of these games, in the same way the first implementation of the related entity-component system was in You Have to Win the Game and the editor and its XML-powered instantiation of entity components was in Super Win.

Recently, I’ve been tackling saved games in Gunmetal Arcadia Zero. I talked a little bit about the introduction of a three-slot save file system a la Zelda in last week’s video, but I haven’t really dug into how it works under the hood.

Last March (has it really been that long already?), I drew up this possibly-syntactically-accurate UML diagram.

entity_construction

And if you can follow that, it’s still pretty much the state of things. If not, it can be simplified to, “When entities are destroyed, they can write their current state into deltas in memory, and those deltas can in turn be written to disk. And then that whole process can be performed in reverse.” The part I’ve been focusing on recently is that link in the very bottom right, where the serializer meets the disk. In Super Win the Game, and in Gunmetal Arcadia Zero until just recently, this was a trivial process. The serializer would dump its content to a predetermined location on disk or replace its contents wholesale with contents loaded from that same location.
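Concretely, the Super Win-era version of that bottom-right link amounted to something like the following. This is a hedged sketch with invented names, not my actual serializer interface, but the behavior is the same: dump the whole buffer to one fixed path, or replace it wholesale from that path.

```cpp
#include <cstdio>
#include <cstdint>
#include <utility>
#include <vector>

// Sketch of a Super Win-style serializer (names invented): all serialized
// deltas live in one in-memory buffer that gets dumped to a single fixed path.
class Serializer
{
public:
    void SetBuffer(std::vector<std::uint8_t> buffer) { m_Buffer = std::move(buffer); }
    const std::vector<std::uint8_t>& GetBuffer() const { return m_Buffer; }

    // Dump the whole buffer to a predetermined location on disk...
    bool SaveToDisk(const char* path) const
    {
        FILE* file = std::fopen(path, "wb");
        if (!file) { return false; }
        size_t written = std::fwrite(m_Buffer.data(), 1, m_Buffer.size(), file);
        std::fclose(file);
        return written == m_Buffer.size();
    }

    // ...or replace its contents wholesale from that same location.
    bool LoadFromDisk(const char* path)
    {
        FILE* file = std::fopen(path, "rb");
        if (!file) { return false; }
        std::fseek(file, 0, SEEK_END);
        long size = std::ftell(file);
        std::fseek(file, 0, SEEK_SET);
        m_Buffer.resize(size > 0 ? static_cast<size_t>(size) : 0);
        size_t read = std::fread(m_Buffer.data(), 1, m_Buffer.size(), file);
        std::fclose(file);
        return read == m_Buffer.size();
    }

private:
    std::vector<std::uint8_t> m_Buffer;
};
```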

That was good enough for Super Win, but I’ve known for a while I’d need a more versatile solution for Gunmetal Arcadia and its eventual persistent state that exists beyond a single run, so it made sense to broach some of these topics for Zero. I’ve been imagining the interface for Gunmetal Arcadia would mirror The Binding of Isaac‘s three-slot system (which in turn was lifted from Zelda and countless other classic games), but that didn’t necessarily have to be the same for Zero. So I put it to a Twitter poll:

Aaand I guess I’m doing a three-slot system for Zero, too! That’s convenient. (In fact, I’m also tentatively supporting quicksave, barring any unresolvable issues it might raise.) So I threw together a quick prototype and an interface, and it looks like this.

threeslot

And then I had to solve all the problems this created.

The first thing to understand was that each of these save slots needs to encompass multiple saves on disk. One of these would be your last checkpointed location, the place you’d respawn on death. Another would be an immediate autosave location for exiting and resuming without losing data. Then there would be a manual quicksave that could be created at any point. These could all coexist without stomping over each other, but they’d all conceptually represent the same session, the same save slot.

So I broke these out into “profiles,” each of which could contain an arbitrary number of “saves.” (In this example, a “profile” is one of the three slots, and a “save” is a single snapshot of the game at any given time.) Zooming in on the corner of my old UML diagram, it would now look something like this:

more_uml

The implementation is about as straightforward as I can imagine. Each profile maps to a separate folder within a base saves location, each save maps to a separate folder within its respective profile, and inside that folder, I can put whatever data is associated with that save. In practice, this is currently a single file containing all serialized data. I had previously done some work to support saving and loading arbitrary binary data from additional external files, but a recent change to give each entity its own random number generator created too much file bloat as each one wrote a separate file to disk, so I’ve temporarily deprecated that path.
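In code, the path construction is about as simple as it sounds. Again, this is a sketch with invented names rather than my actual implementation, but the mapping is the same: base saves folder, then a profile folder, then a save folder, then a single file of serialized data.

```cpp
#include <string>

// Hypothetical helper: each profile is a folder under the base saves location,
// each save is a folder inside its profile, and the serialized data is one file.
std::string GetSaveFilePath(const std::string& baseSavesDir,
                            int profileIndex,             // one of the three slots
                            const std::string& saveName)  // e.g., "checkpoint", "autosave", "quicksave"
{
    return baseSavesDir +
           "/profile" + std::to_string(profileIndex) +
           "/" + saveName +
           "/data.sav";
}

// GetSaveFilePath("saves", 1, "quicksave") -> "saves/profile1/quicksave/data.sav"
```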

Where this got a little more interesting was in deciding how to handle death and game over states. I was starting with a few assumptions:

  • I want a finite number of lives in Gunmetal Arcadia Zero. My concern is that death becomes inconsequential with unlimited retries.
  • Running out of lives should not completely reset the game. That would be too punishing and fundamentally at odds with having a saved game.
  • The experience of running out of lives needs to be different in some way from the experience of dying once. That is to say, these can’t both simply send you back to the last checkpoint. There has to be something on the line to discourage failure.

This has led me to split the game’s checkpoints into two separate but occasionally overlapping concepts. There’s an initial checkpoint when you first enter a level, and then there may be additional midlevel checkpoints throughout. When you die, you return to the most recent checkpoint you reached, whatever that was. (Besides the start of the level, these will typically also occur immediately after boss fights.) When you run out of lives, you’ll get dumped back to the title screen, but you can choose to continue your game from the beginning of the last level you reached. I’m imagining this case will also incur some further costs, such as losing money or gear, but whatever the price, it won’t be a complete reset.
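Here’s a rough sketch of that respawn decision, with invented names and none of the surrounding machinery. It isn’t my actual implementation, but it captures the split described above: a single death reloads the most recent checkpoint save, while running out of lives kicks you to the title screen with a continue option.

```cpp
// Hypothetical sketch of the death/game-over split (not actual game code).
enum class RespawnAction
{
    ReloadLastCheckpoint,        // single death: back to the most recent checkpoint save,
                                 // whether that's the level start or a midlevel (post-boss) one
    ReturnToTitleWithContinue    // out of lives: title screen, continue from the start of
                                 // the last level reached, possibly at some further cost
};

struct SessionState
{
    int LivesRemaining;
};

RespawnAction OnPlayerDeath(SessionState& session)
{
    --session.LivesRemaining;
    return (session.LivesRemaining > 0) ? RespawnAction::ReloadLastCheckpoint
                                        : RespawnAction::ReturnToTitleWithContinue;
}
```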

I’ve been racing to get these features online in time for the vertical slice coming up at the end of next week, and it’s just about there. I’ll still have a few game features left to implement after that, including figuring out how to transition from one level to another after boss fights (or from the final level to an ending sequence and credits), but I’m quickly reaching a point where I can focus almost entirely on content creation.

What is Gunmetal Arcadia again?

Hi 2016! Hi regular updates! This week marks my return to full-time development and my former weekly blogging schedule. Thanks for your patience during the interim, and a special thanks to my Patreon backers for standing by as my progress has slowed. I’m back!

I’m currently focused on knocking out the last few remaining core gameplay features I’ll need for Gunmetal Arcadia Zero before I move on to content production for that game in anticipation of an early summer release. I’ll dive into those topics more throughout the coming weeks, but I wanted to kick off the year with a top to bottom recap of what this thing called Gunmetal Arcadia is that I’ve spent the last fifteen or so months of my life developing.


Gunmetal Arcadia is an action-platformer roguelike…”

Gunmetal Arcadia is basically a roguelike Zelda II…”

I don’t know, I’m bad at elevator pitches.

Gunmetal Arcadia looks like this:

GunPreq 2015-09-12 10-20-43-881
One of these days I’ll stop leaning so hard on this GIF, but today is not that day.

…or maybe like this, thanks to my hopefully-someday-to-be-award-winning CRT simulation.

tiles_wip_6
Samesies.

Wait, let’s back up.

In September 2014, as I was nearing completion of Super Win the Game, I began taking notes on what my next game could be. I liked the idea of reusing the tech I had developed for Super Win, and that necessitated — or at least strongly encouraged — making another retro NES-themed game. It felt like a good opportunity to tackle some longstanding goals of mine. I’ve wanted to make a game in the style of Zelda II practically since I first played Zelda II. The idea of melding roguelike elements with other genres is fairly commonplace now, but the idea to randomly generate an unlimited number of Zelda II-like dungeons first came to me somewhere in the vicinity of ten years ago, and it’s been bouncing around my head in some form or another ever since, just waiting to be implemented.

But that history is well-documented. In the context of a post-Super Win world, Gunmetal Arcadia has come to feel like a bit of a redo, a chance to make up for the flaws of my last game, set my sights higher across the board, and produce something for which I don’t have to make excuses.

From the outset, I’ve striven to avoid cutting corners in Gunmetal Arcadia, forcing myself to solve the hard problems that I glossed over in Super Win. That behavior was less a product of laziness than of an overly aggressive tendency to cut early, a mentality carried over from Super Win’s predecessor; I ignored that game’s status as a freeware side project and chose to remain stubbornly beholden to unnecessary principles. “Melee combat would impose a lot of new challenges and draw out the schedule and this game would never ship, so the player won’t have a weapon. Now that’s canon, so I have to respect it in the sequel.” Those were the sorts of design decisions that ended up watering Super Win down into “just a platformer” (albeit one with impeccable game feel, but how do you sell that?) and, in conjunction with a handful of other regrettable decisions (cough the name cough), damaged its marketability.

By contrast, Gunmetal Arcadia is the project where I ravenously tear into every challenge regardless of perceived difficulty, genuinely consider every wouldn’t-it-be-cool-if, and to the best of my ability remain open and honest about the entire development process. And this has produced a host of exciting new features that are going to define the look and feel and function of this game, to wit: melee combat, palette swaps, tools to facilitate faster creation of game entities with more expressive animation, massive improvements to the expressiveness of the audio synthesizer, and much, much more.

I feel like I should emphasize that these are games from scratch. That’s not a term that gets thrown around much in the games industry, but it’s perhaps the most apropos in this case. I build all my own tech: my own game engine, originally developed for Windows and painstakingly ported to Mac and Linux; my own editor, built for Super Win and leveled up for Gunmetal Arcadia; all original code and content, all requiring an engineer’s attention to detail. Nothing comes easy, and nothing comes cheap, but the level of freedom and control this affords me is worth it. The results speak for themselves.

Before looking at the future, I have to dive into the past once more. Last summer, I made the decision to expand the scope of Gunmetal Arcadia to encompass a second game. Gunmetal Arcadia Zero was conceived as a prequel, smaller in scope and without the roguelike trappings of the flagship title, but with a more discernible path to completion and an earlier launch date. As Zero has become my primary focus in the last few months, I’ll continue my recap there and conclude with some discussion of my high-level goals for the second game, the roguelike action-platformer called simply Gunmetal Arcadia.


Gunmetal Arcadia Zero was born of an uncertain schedule, but it has quickly transformed in my mind into something really exciting and really special. I’ve been thinking of Zero in terms of the original NES Castlevania — six levels, each with an end boss, linear and tightly focused. I think it’s fair to say that isn’t necessarily a game that I would have ever chosen as my next project under normal circumstances. It feels like too much of a known entity. It’s — and I say this with tongue planted squarely in cheek — too easy. I tend to favor the experimental concepts, the ones that necessitate the sort of grueling problem solving on which I thrive. Zero offers none of those. But the way it’s grown organically out of the development of the roguelike Gunmetal Arcadia, the way it can serve as a prequel narratively and also as an appetizer functionally, that’s exciting. And by design, it shouldn’t require solving hard problems. The entire point of this game from the very start has been to take the wealth of technology I’ve already built, produce something great with that, and continue building on those successes for the flagship game.

So right now, my life is Zero. I’ll be honest, most of it still exists only in my head. Bits and pieces are beginning to appear: a tileset here, a graybox level there, but it’s not a playable game yet. That process begins this week. My goal is to have a vertical slice level ready in time for PAX South. (I won’t be demoing anything there, but I will be attending, and it’s a good place to set a milestone.) That will validate my schedule, which has me shipping in June.

It’s going to be a mad dash to the finish line, and I can’t wait. I’ve been champing at the bit for months waiting to get back to full-time development. There are still lots of aspects of this game that need to be designed and implemented or cut, and I expect I’ll be making hard cuts all the way to the end, but right now, I’m full of adrenaline and ready to make an amazing video game.


Gunmetal Arcadia — and now I’m speaking specifically of the flagship title, the roguelike, the one that doesn’t have a subtitle to disambiguate it from the series as a whole — is the most ambitious project I’ve ever undertaken. Whether it has the opportunity to grow into that ambition depends on many factors, not the least of which are the reception Zero receives and its own ability to sustain interest as a “hobby” game, one that players want to return to day after day. To that end, I’m planning to support daily challenges in the style of Spelunky (and more recently, The Binding of Isaac), and I hope to be able to sustain a rapid cycle of post-release content production.

But I’m getting ahead of myself here.

The structure of a session of Gunmetal Arcadia is going to be similar to the whole of Gunmetal Arcadia Zero. “Six levels, each with an end boss,” or something to that effect. But these levels will be constructed pseudorandomly from prefabs, and you’ll only have one life. If you fail, you start over from the beginning with a different set of levels. But here’s the hook: the decisions you made and the actions you took in your previous session will have immediate and obvious consequences in your next session. In most cases, these will be positive effects, and in that way, I hope to mitigate some of the frustration of permadeath without introducing grinding. This will also necessarily introduce the concept of deliberately making decisions in your current session with the intent to alter the shape of your next session, and perhaps of sessions beyond that. (As per Episode 13 of my YouTube series “Let’s Make Gunmetal Arcadia,” these effects will be formatted as strict action-reaction pairs, but that doesn’t necessarily preclude these reactions from setting up chains of events across multiple future sessions. Whether I take advantage of that fact is still up in the air.)
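To make that a little more concrete, here’s a hypothetical sketch of an action-reaction pair. The real format will be data-driven, and nothing here reflects actual Gunmetal Arcadia code, but the shape is: record tagged facts about the session that just ended, then match them against a reaction table when the next session begins.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the action-reaction idea: facts recorded at the end of
// one session trigger effects at the start of the next.
struct RecordedAction
{
    std::string Tag;   // e.g., "rescued_merchant" (tag names invented for illustration)
};

struct Reaction
{
    std::string ActionTag;   // which recorded action triggers this reaction
    std::string EffectTag;   // e.g., "merchant_opens_shop"
};

// At the start of a new session, apply every reaction whose action occurred last time.
std::vector<std::string> ResolveReactions(const std::vector<RecordedAction>& lastSession,
                                          const std::vector<Reaction>& reactionTable)
{
    std::vector<std::string> effects;
    for (const Reaction& reaction : reactionTable)
    {
        for (const RecordedAction& action : lastSession)
        {
            if (action.Tag == reaction.ActionTag)
            {
                effects.push_back(reaction.EffectTag);
            }
        }
    }
    return effects;
}
```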

This structure provides a perfect opportunity to grow Gunmetal Arcadia not only as a singular experience but as a platform for expansion. It’s no secret that The Binding of Isaac has been hugely influential to this project, and in particular, the way it has been expanded over the years serves as a model for Gunmetal Arcadia. In a perfect world, a world in which the base game had the groundswell to support a long-term plan like this, I would continue adding new playable characters, tilesets, prefabs, enemies, upgrades, bosses, action-reaction pairs, and more into the foreseeable future. I can’t predict whether it will ever reach that state. To do so, I would need to make a fantastic base game that could compel a majority of players to return on a regular basis, and I would need to market it well enough to reach every potential player. And that’s a tall order. I can’t guarantee I’ll get there. But that’s the dream.

There’s a weird chicken-and-egg problem that comes up from time to time in game development. Simplified, it goes like this:

Developer: “I’m not going to support Feature X because my player base isn’t large enough to justify the cost.”

Consumer: “You’d have a larger player base if only you supported Feature X.”

Feasibly sustaining development of new content for a shipped game requires, at a minimum, sales consistent enough to offset the cost of that development. But I can’t sell the game on the strength of features that might someday exist. Deciding which goals to pursue requires a leap of faith — an educated one, perhaps, but a leap nonetheless.

This ties back into my earlier comment about not cutting corners. The base game alone has to be an experience worth playing — and paying for — in its own right, regardless of any plans for future content. (Gunmetal Arcadia Zero carries a similar weight, being the first game in the series players will have the opportunity to experience.) To that end, whatever cuts I do make need to be extremely discriminating. And I need to hope my efforts towards marketing this game (still a notoriously unsolved problem among indie developers) are more effective than past attempts.

So that’s my goal: make Gunmetal Arcadia an infini-project that I can realistically continue supporting with new content for a long time. We’ll see how it goes. Worst case scenario, I’ll let it be whatever it is and go make that sci-fi horror game I’ve been sitting on for years.


If you’ve read this far, congratulations! You’ve survived what should hopefully be my last overly long and self-serving attempt to recap this game and its development for the foreseeable future (Wednesday’s upcoming video notwithstanding). I promise next week I’ll be back to hardcore development details, so stay tuned!

You can support my efforts to document the development of these games on Patreon.

Interim: Peppermint Mocha Edition

It’s the week of Christmas, I’m one more month into my mandatory vacation time, and it’s looking like it’ll still be a little while longer before I’m back to full speed. But there is finally a light at the end of the tunnel. I’ll get to that in a bit, but first, let’s take a look at what I’ve been up to since my last update…


GunPreq 2015-12-09 17-18-05-754

Inspired by this NESDev forum post on forests in NES games, I started pixeling a tileset for the forest levels in Gunmetal Arcadia. I’ve known for a while there would be forest levels in these games — it’s where you’ll meet one of the main NPCs in Zero who will later become a playable character in Gunmetal Arcadia — and it was a personal goal of mine to ensure they looked better than the ones in Super Win. I’m happy to have finally pulled that off. I still have a bit of work left to do on the leaves, as they still look a little too flat and repetitive, but I’m happy with where it’s going.


I accidentally fell down a rabbit hole of renderer refactoring last week. It started when I was testing Gunmetal Arcadia Zero in the “WinNFML” configuration, a Windows build which uses OpenGL and SDL in place of DirectX. That configuration serves as a quick check to make sure the Mac and Linux builds aren’t going to be very obviously broken before I take the time to switch to those platforms, update, and rebuild.

So I run the game, and…something is very obviously wrong. Pixels in the game scene seem to be bleeding into adjacent pixels in ways they shouldn’t. I poke around in the options a bit and realize the “Pixel Sharpness” setting isn’t doing what it’s supposed to. Doesn’t take me long to realize it’s doing linear interpolation instead of nearest neighbor / point sampling. But why? I go look at the shader source, and…well, there you go. I have two samplers, both referencing the same texture; one does linear interpolation, the other nearest neighbor. And I immediately understand the problem.

OpenGL and GLSL don’t make the distinction between textures and samplers the way DirectX effects do. In GL-land, there are only samplers, each corresponding to a texture unit, and they can be updated in native code to point to whatever image data is desired. DirectX effects extend this concept by automatically associating a sampler and its state (interpolation and indexing methods) with a texture, which represents the image data. Any number of samplers may point to the same texture and may sample it in different ways. This allows the C++ code to care only about the texture and the HLSL code to care only about the sampler. The link between them is defined once upfront and never has to change.

DirectX effects provide many of these sorts of helpful features, and I’ve had to recreate them in my OpenGL implementation. It turns out that when I initially wrote my HLSL-to-GLSL converter two years ago, I made the assumption that textures and samplers would always be one-to-one, as that was how I had historically always used them. It wasn’t until just recently that I’d had cause to decouple them, and my GL implementation broke as a result.

Once I understood the nature of the problem, the fix was fairly straightforward. Thankfully, as it took place entirely in my own wrapper code designed to make OpenGL and GLSL behave more like Direct3D, it didn’t involve any actual painful GL coding, just some rewiring of metadata and refactoring of code that I had written myself and knew completely.
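For reference, core OpenGL (3.3 and later) does offer a mechanism that decouples sampling state from textures the way DirectX effects do: sampler objects. My wrapper doesn’t rely on them, since the fix above lived entirely in my own metadata, but a sketch of the raw-GL analog would look something like this.

```cpp
// Requires a GL 3.3+ function loader (GLEW, glad, etc.) for these entry points.
#include <GL/gl.h>

// Create a sampler object with the given filtering mode (GL_LINEAR or GL_NEAREST).
GLuint CreateSampler(GLenum filter)
{
    GLuint sampler = 0;
    glGenSamplers(1, &sampler);
    glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, filter);
    glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, filter);
    return sampler;
}

// Bind one texture to two texture units with two different sampling states,
// which is effectively what the DirectX effects model gives you for free.
void BindTextureWithTwoSamplers(GLuint texture, GLuint linearSampler, GLuint pointSampler)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBindSampler(0, linearSampler);   // unit 0 samples with linear interpolation

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBindSampler(1, pointSampler);    // unit 1 samples the same texture with nearest neighbor
}
```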


refl_0_valk refl_1_xan

It’s one of those minor things that I forced myself to ignore for a long time, but the reflections on the CRT screen in Super Win the Game just never sat well with me. They were too skewed out in odd directions, too far from what I see when I look at my reference CRT screen. So I finally made an attempt to fix that in Gunmetal Arcadia. There’s still a little bit of skewing due to the FOV of the perspective camera, but I’ve significantly flattened these out so they should appear more natural and correct.


Having just had my hands in rendering code, I thought it would be a good time to attempt the longstanding task of breaking down my primary render loop into a number of smaller, more digestible functions.

Going back years and years (over seven years now, which seems like an impossibly long time), I designed my renderer to automatically look for places where it could minimize how often shaders and render states needed to change. This was done by defining my entire render path as multiple “passes” (admittedly a poor choice of naming considering its use in shader languages), within which a number of “groups” could be selected for drawing. A pass might be, for instance, “draw all in-world objects,” and its corresponding groups would be “in-world objects with/without alpha.” I define these upfront when the game launches, and each frame, I spin through them, in order, and render each item in each group. Within each group, ordering is not guaranteed, as the group gets an opportunity to sort itself for optimal rendering by arranging items that share a shader next to each other so that the renderer doesn’t have to naively set the shader each time. I take this a step further by minimizing how frequently shader parameters change as well, given some metadata specifying whether the parameter needs to change on a per-object or per-material basis. Things like world-view-projection matrices typically must be set for each individual object, while things like textures can be set once for the material and reused for each object with that material.
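For a rough picture of that structure, here’s a simplified sketch. These names are invented for illustration and omit everything interesting, but they show the shape: passes contain groups, each group sorts its items to keep shader and material changes to a minimum, and the draw loop only commits state when it actually changes.

```cpp
#include <algorithm>
#include <vector>

// Simplified sketch of the pass/group structure (names invented for illustration).
struct RenderItem
{
    int ShaderId;     // items sharing a shader can be drawn back to back
    int MaterialId;   // per-material parameters (e.g., textures) are set once per material
    int ObjectId;     // per-object parameters (e.g., world matrices) are set for every item
};

struct RenderGroup
{
    std::vector<RenderItem> Items;   // ordering within a group is not guaranteed...

    // ...so the group is free to sort itself to minimize shader and material changes.
    void SortForMinimalStateChanges()
    {
        std::sort(Items.begin(), Items.end(),
                  [](const RenderItem& a, const RenderItem& b)
                  {
                      if (a.ShaderId != b.ShaderId) { return a.ShaderId < b.ShaderId; }
                      return a.MaterialId < b.MaterialId;
                  });
    }
};

struct RenderPass
{
    std::vector<RenderGroup> Groups;   // e.g., "in-world objects with/without alpha"
};

// Draw loop: only touch shader and material state when it actually changes.
void DrawGroup(RenderGroup& group)
{
    group.SortForMinimalStateChanges();
    int currentShader = -1;
    int currentMaterial = -1;
    for (const RenderItem& item : group.Items)
    {
        if (item.ShaderId != currentShader)     { /* SetShader(item.ShaderId); */    currentShader = item.ShaderId; }
        if (item.MaterialId != currentMaterial) { /* SetMaterialParams(item); */     currentMaterial = item.MaterialId; }
        /* SetPerObjectParams(item); DrawItem(item); */
    }
}
```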

Anyway, long recap over, the point is that the code that iterated through these passes and groups and objects and materials was one enormous six-hundred-line function, with bits and pieces ifdef’d out for Direct3D and OpenGL builds. Bad in theory, bad in practice, just a mess any time I had to touch it. My goal was to break out the inner parts of each loop into its own function for clarity and ease of maintenance. Seemed like a pretty easy task, and it was, right up until I decided not to stop there and instead also totally redesign how the render path is defined.

I mentioned that I define passes and groups upfront when the game launches. To be clear, I mean that for the last few years, for every game I’ve made, I’ve hardcoded these passes and groups into the native C++ code. As my rendering paths have grown in complexity, and as I’ve started to move towards defining more and more aspects of my games in data, this has become a problem. What I really want, what I’ve wanted for a while, is to be able to define some markup (XML, as is my preference) and translate this into the render path. And ultimately, that is exactly what I’ve ended up with, but it took me a while to get there.

render_path

One of the fundamental shifts in design was to move away from a strictly linear sequence of passes and groups to a nested tree-like format corresponding to a state stack maintained within the renderer. In the past, each pass would necessarily have to define which render target it wanted to draw to. Now, a pass may optionally choose to specify a render target, and any nested subpasses maintain that state unless they specify their own. The same is done for viewports. Whenever the renderer is finished iterating through the members of a pass that altered the render target or viewport, it pops that data off its stack, restores the previous state, and continues. (In practice, I also reduce redundant render target setting by only committing these changes whenever I’m about to draw something or clear the current target.)
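In sketch form, the nested version looks something like this. The names are made up, and the real renderer does considerably more bookkeeping, but the idea is that each pass optionally overrides the render target or viewport, children inherit whatever their parent left set, and finishing a pass pops the stack back to the previous state.

```cpp
#include <vector>

// Sketch of the nested pass tree and the renderer's state stack (names invented).
struct RenderTargetDesc { int Id; };
struct ViewportDesc     { int X, Y, Width, Height; };

struct RenderPassNode
{
    const RenderTargetDesc*     OptionalTarget   = nullptr;  // inherit parent's if null
    const ViewportDesc*         OptionalViewport = nullptr;  // inherit parent's if null
    std::vector<RenderPassNode> Children;
    // ...groups to draw would live here too...
};

struct RenderState
{
    const RenderTargetDesc* Target   = nullptr;
    const ViewportDesc*     Viewport = nullptr;
};

// The caller seeds the stack with a single default RenderState before the root pass.
void RenderPassRecursive(const RenderPassNode& pass, std::vector<RenderState>& stack)
{
    // Push a copy of the current state, then apply any overrides this pass declares.
    RenderState state = stack.back();
    if (pass.OptionalTarget)   { state.Target   = pass.OptionalTarget; }
    if (pass.OptionalViewport) { state.Viewport = pass.OptionalViewport; }
    stack.push_back(state);

    // (Target/viewport changes would be committed lazily, just before a draw or clear.)
    // DrawGroupsForPass(pass, state);

    for (const RenderPassNode& child : pass.Children)
    {
        RenderPassRecursive(child, stack);
    }

    // Pop: restore whatever state the parent had before this pass ran.
    stack.pop_back();
}
```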

Finally (yes, there’s more), I also killed off a number of old features that had sat dormant for years and were threatening to complicate this change. This included support for multiple render targets (used once in an old deferred rendering demo, never ported to OpenGL) as well as a few separate options for preserving the state of render targets across device resets.


Ever since Super Win, David and I have been in an escalating arms race to develop a cooler Minor Key Games intro splash animation. Here’s the new one I’ve cooked up for Gunmetal Arcadia. This draws from my interest in ’80s laser grids and also hearkens back to the art style of my 2008 indie debut Arc Aether Anomalies.


I mentioned that I’ve been moving towards defining more content in markup, and the render path refactoring is only the first of many steps I hope to take as I bring my engine up to speed with this methodology. My game initialization code is also fairly heavy with definitions for things like configurable variables, controls and default bindings, console commands, and renderable assets, including things like quads used for drawing fullscreen effects.

I took a first step towards generalizing configurable variables over the weekend by refactoring my config system to maintain the state of all variables internally. In the past, the registration process has involved associating a named config variable (e.g., “VSync”) with a pointer to a piece of data in some object somewhere (bool bVSync). These variables would typically live in the core game object, which persists throughout the entire duration of the game, so the pointers would never risk being invalidated.

With this change, I’ve removed all those member variables. These are now maintained internal to the config system and are strictly accessed through that system. This wasn’t a particularly difficult change, but it did involve touching a lot of code, as I have over one hundred config variables at this point.
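As a sketch (with invented names, and only showing bools), the change looks roughly like this: instead of registering a pointer to a member variable somewhere, the config system owns the value and hands it out by name.

```cpp
#include <map>
#include <string>

// Sketch of the refactor described above: the config system owns all values
// internally instead of pointing at member variables scattered across the game.
class ConfigSystem
{
public:
    // Old style (for contrast): associate a name with a pointer to external data.
    // void Register(const std::string& name, bool* pValue);

    // New style: the value lives inside the config system and is accessed by name.
    void RegisterBool(const std::string& name, bool defaultValue)
    {
        m_Bools[name] = defaultValue;
    }

    bool GetBool(const std::string& name) const
    {
        std::map<std::string, bool>::const_iterator it = m_Bools.find(name);
        return it != m_Bools.end() ? it->second : false;
    }

    void SetBool(const std::string& name, bool value)
    {
        m_Bools[name] = value;
        // ...change callbacks would fire here...
    }

private:
    std::map<std::string, bool> m_Bools;   // plus similar maps for ints, floats, strings
};

// e.g., config.RegisterBool("VSync", true); if (config.GetBool("VSync")) { /* ... */ }
```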

The next step will be to define these variables and their default values in markup. In order to do this, I’ll have to solve the problem of how to represent function pointers in markup (used for callbacks in response to config variables changing) and then match these up to functions at runtime. I’m not yet sure how this will work, short of a simple name-to-function-pointer mapping, as I’ve done in the past for automating the process of creating entities and components from markup. I’d like to find a better, more generalized way to do this, something more along the lines of tagging functions with a keyword or wrapping them in a macro that automatically associates them with a name, but I’m not yet exactly sure what that would look like. If I did have a system like that, it would also be useful in generalizing a number of other data-definable assets, particularly console commands.
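I don’t know yet what the final version will look like, but one hypothetical shape for the macro approach is a static registrar object that adds a named function to a global map during static initialization, so markup can refer to callbacks by name. This is purely speculative, not code from my engine:

```cpp
#include <map>
#include <string>

// Hypothetical sketch of name-to-function registration via a macro.
typedef void (*ConfigCallback)(void);

inline std::map<std::string, ConfigCallback>& GetCallbackRegistry()
{
    static std::map<std::string, ConfigCallback> s_Registry;
    return s_Registry;
}

struct CallbackRegistrar
{
    CallbackRegistrar(const char* name, ConfigCallback fn)
    {
        GetCallbackRegistry()[name] = fn;
    }
};

// Tag a function with this macro and it becomes addressable by name from markup.
#define REGISTER_CONFIG_CALLBACK(fn) \
    static CallbackRegistrar s_Registrar_##fn(#fn, &fn)

// Example usage:
void OnVSyncChanged() { /* recreate swap chain, etc. */ }
REGISTER_CONFIG_CALLBACK(OnVSyncChanged);

// At load time, markup like callback="OnVSyncChanged" would resolve via the registry:
// ConfigCallback cb = GetCallbackRegistry()["OnVSyncChanged"];
```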


I’ve been working on new tunes whenever I have the time, as I do. I’m pretty happy with how these two have turned out, and I’m excited that I’m starting to develop a coherent sound for the Gunmetal Arcadia ‘verse. Listen to these and watch that GIF up above of Vireo walking through the forest and you’ll get a sense of the gameplay experience…minus the gameplay.


In testing my recent engine changes on Mac and Linux, I ran into an edge case in my build process that I hadn’t seen before. I always rebuild my content packages in Windows (as that’s where my tools live), commit the output to my SVN repository, and sync it on Mac and Linux.

The bug presented itself as: I launch the game on Linux, and nothing draws. The whole screen is black. I can hear sounds, so I know it’s running, but I don’t see anything. After a little bit of debugging, it appears that my auto-generated GLSL code isn’t in the correct format; it doesn’t have the recent texture/sampler decoupling features I’d added. I’m thinking maybe it’s a problem with timestamps. The tool should only rebuild content if the source file has been changed more recently than the destination, so maybe it skipped these, or maybe I simply forgot to rebuild and version content packages before testing this on Linux. So it’s back to Windows, rebuild and commit content, back to Linux, still doesn’t work. Debug a bit more, still coming up with the same thing.

Finally it occurs to me to validate the GLSL output being produced by my converter in Windows before it’s packaged up, and sure enough, it’s not right.  But if a full clean and rebuild didn’t produce the correct output…

Double facepalm, instantly know what’s wrong. Yes, I was cleaning and rebuilding the GLSL shaders, but I was using an outdated version of my converter tool to do so because I had made changes to the tool on my desktop and now I was on my laptop, and my build process doesn’t automatically pull down new versions of my command line tools.

In retrospect, it’s such a simple and obvious problem that I’m amazed I hadn’t run into it before, but I guess historically I just haven’t done much content rebuilding on my laptop, so it had never become an issue.

In any case, this prompted me to refactor my build process a bit to help ensure that the most up-to-date versions of all tools are always available and are only maintained in one location. Previously, I had had various versions of a few command line tools checked in at different locations throughout my repository due to legacy usage, which was another source of occasional trouble.

Anyway, the Mac and Linux builds are working fine now, so yay.


super_win_large

Looking to the immediate future, Super Win the Game will be on sale again imminently, so now would be a great time to pick it up if you haven’t already!

I’m planning — fingers crossed — to return to a normal weekly blogging and video schedule the week of January 11. That’s three weeks from today. I’ll be kicking off 2016 with a top-to-bottom refresher of what these games are and my goals for each. It’ll have been fifteen months since I initially announced Gunmetal Arcadia and five since I announced Gunmetal Arcadia Zero, so I’m probably due for a bit of a recap.

Anyway, I’m gonna go watch Hans Gruber fall off Nakatomi Plaza now. Happy holidays! See y’all next year!

Interim

Happy Cyber Monday! I love that that’s a thing that makes sense to say in 2015. Feels like we’re living in some tacky mid-’90s vision of the future.

It’s been five weeks since my last update, and it’s looking like it’ll be a while longer before I can resume any sort of regular development, but I’ve done enough bits and pieces of actual work in between changing diapers, pounding energy drinks, and binge-watching The X-Files to warrant an interim update.

I’m dying to get back to real full-time development, but this break has forced some breathing room into my habits, and I’ve been spending far more time than usual just thinking about development and development philosophies, especially as they pertain to my most immediate Gunmetal Arcadia Zero tasks. I’m trying to be more conscious of and consistent about asking deliberate questions of all my design choices, questions like “How does this serve the narrative?” and “How does this serve the gameplay?”

In trying to answer these, at least on paper, I’ve begun putting together some cheat sheets to keep me on the right path throughout the process of developing art for tilesets, constructing coherent scenes from these tiles, and constructing spaces that are conducive to the core gameplay of the Gunmetal Arcadia games.


I’ve always done doodles on paper for every game I’ve ever made, so it shouldn’t be surprising that having some time off has led to a number of sketches of ideas for enemies and environments.

sketches

These in turn fed into the design for some new tilesets I’ve been stubbing out. (If you’ve been following my Twitter feed, these probably won’t be new to you, but it’s the first time they’ve appeared on the blog, so hey.)

tiles_wip_5

Although my intent is still to build my levels first as grayboxes abstract of any actual environment art, I also want to be proactive in ensuring that my tilesets will lend themselves well to composing coherent scenes and identifying setpieces, and to that end, I’ve been doing some mockups, showing how these tiles might look once they’ve been palettized and assembled.

tiles_wip_7

To go off on a short tangent, when I posted this on Twitter, I got a reply suggesting I blur the reflection on the sides of the screen. I had considered this before but had rejected it for being too expensive, as I imagined it would involve an additional render target or two and would function along the same lines as the fullscreen bloom blurring pass. On this particular day, however, an idea for a cheaper implementation came to me. I prototyped it quickly, then realized I could dress it up even further by scaling the blur radius by distance.

refl_detail

These changes are visible in this detail of the scene above (brightened several times to improve visibility). On the left is the reflection as it existed before (and roughly comparable to what shipped in Super Win the Game), and on the right is the reflection with blurring applied. You can see the blur radius gets larger on the right side of the image, to the point that Vireo’s reflection is little more than a dark orange smudge. It’s a subtle effect, especially at its true brightness, but it looks fantastic in motion, and it’s actually no more expensive than the old version, thanks to some discriminating alterations. Notably, the new version no longer applies the shadow mask overlay to the reflected image. This change helps increase the perception of blurriness and frees up enough instructions for the additional texture samples required for the blur.


I’m finding music to be the easiest thing to fit into my current schedule. I previewed a new tune in one of my last few blogs before I went on break, and since then, I’ve begun working on eighteen more ideas. These aren’t all fully realized pieces, and most of them never will be, but I’ll clearly have no shortage of material to choose from when it comes time to finalize the soundtrack for Zero. Here are a few samples:

This one was an experiment in overlapping two identical pulse waves to create a subtle reverb effect. The lead melody that enters at 0:19 is sort of a melange of a few classic tunes.

One of my goals in writing music for Gunmetal Arcadia is to get out of my comfort zone and try weird things that I wouldn’t necessarily expect to work. This is one of those, and I’m not yet sure whether it works or not.


I’ve made a handful of other changes to Gunmetal Arcadia over the last few weeks, things like improving audio perf, adding an on-screen stopwatch, and adjusting my editor layout to fit my laptop, but I’ve already covered the fun stuff. I’ve also been poking at a few dormant side projects as I’ve had the chance, so I’ll rapid fire those real quick.

1. Way way back around Christmas 2007, I started working on a scripting language for my engine, roughly informed by the one I had written for a Guildhall programming class. It never really got off the ground, mostly because I never had a strong use case for it. Recently, though, I’ve been thinking about trying to replace some of my wordy, error-prone XML “scripting” with a real C-like language. This is definitely low priority, “wouldn’t it be cool if” sort of stuff, but since I had some time, I took another look at it and realized it was closer to a functional demo than I had remembered. After dinking around with it for a bit, I got a demo with recursive function calls working to calculate and output the Fibonacci sequence. That’s still a long way from being useful for any real-world scenario, but it’s something.

2. I spent a day or two trying to implement my own regex solver on a lark. I didn’t finish it. Womp womp.

3. A few years back, I took a stab at rolling my own task tracking app when I couldn’t find one that fit my needs. I got a prototype off the ground, and though it was functional from a technical perspective, it didn’t feel usable for a number of reasons, and I eventually abandoned it. But the idea stuck around, as did the need. For the last year or so, I’ve done most of my task and bug tracking in emails to myself or in Google Docs, and recently, I’ve started to feel like Gunmetal has outgrown those. It needs a real tracking solution, but I still don’t like most of what’s out there. I’m familiar with JIRA from my Gearbox days, and I considered running my own JIRA server, but there were some hurdles in getting it set up that I didn’t have the patience to resolve, and it kind of feels like overkill for one user.

runrabbit

So of course, I had to roll my own again. But this time, I was starting with the assumption that this would be a locally-hosted service that only had to function correctly on my browser of choice and would have no concept of multiple users, authentication, permissions, and so on. This would be a tool for one user to track their own solo project work and nothing else. I also knew I would need to prioritize look and feel issues if I wanted it to be something I’d actually feel compelled to use, so I’ve been drawing inspiration from the sites I tend to frequent. I like the immediacy of Google Docs; there’s no formal concept of committing a change to a document; you just type on the page and it gets saved under the hood. So I copied that. I knew I’d need to prioritize issues, but I didn’t want to restrict myself to an arbitrary set of priorities (e.g., P1 – P5), so I chose to use an upvote/downvote interface similar to what you’d see on social networking sites. I’ve spent maybe two weeks on this, off and on, and it’s come together into a pretty nice little demo. You can create issues, edit their text, prioritize them, close and open them, and retrieve a sorted list of issues, loading asynchronously to fill the page as you scroll down.


I’m hoping to be back to a regular development + blogging + vlogging schedule by mid-January, but if work done in the meantime warrants another interim update, I’ll try to get that done when I have the chance. Stay tuned!

10 Things I Learned In One Year Of Keeping A Devlog

This is my last week of regular updates before I’m off on paternity leave. I have a short recap video ready to go on Wednesday, but beyond that, I can’t guarantee what my schedule will look like. I already wrote a long blog about being one year into development on Gunmetal Arcadia and one year out from the launch of Super Win the Game, but since I’ll be taking a short break from documentation, I thought it would be interesting to talk a little bit more about the documentation process itself, and some of the things I’ve learned in the process. Plus I couldn’t resist the clickbait headline.

I really enjoy writing. I guess that shouldn’t be too surprising; I’ve maintained a blog in some form or another almost as long as I’ve been on the internet, but it had been a while since I’d kept a development-oriented blog that I updated on any sort of a regular basis, and I had some concerns that I might not be motivated to keep writing every week. As it’s turned out, I have more than enough material to pull from, and though the typical week’s update is usually an overview of the previous week’s worth of development tasks, I have the flexibility to cover anything at all related to development here, from deep dives into my engine, tools, or processes to tips for first-time exhibitors.

And the writing itself is fun. That’s the important (and sometimes surprising) part. Sometimes I think I like writing even more than making games. It scratches a different creative itch, one that’s more immediate. There’s a thrill in publishing content on a regular basis that you just don’t get when you ship a game once a year (or two, or whenever). Launching a game is exciting, but it’s also stressful and the rewards are somewhat muted by comparison. Games, even free games, have a higher barrier to entry than a blog or a video, and you don’t get that thrill of knowing that your work is just a click away. Publishing a blog or a video is instant gratification.

Keeping a devlog forces me to schedule my time better. Very often, especially when I’ve had events and associated items to prepare and contract work encroaching on my schedule, I’ve had to honestly ask myself, “Am I going to have anything to write about this week?” This has become doubly critical since I launched the video series, as my goal has been to minimize overlap between the blog and video each week. So when I find myself having to ask whether or not I have at least two substantial new pieces of development, it reinforces just how precious my time is. I’ve had many weeks in which I dive into a task that I’ve been putting off simply because I know it will give me something to write about in a day or two.

In some cases, I haven’t had new development work to write about, but I’ve been able to turn to other sources, detailing bits of how my engine works, high-level designs for the future, and other things beyond a checklist of recent tasks. Sometimes this feels like a cop-out, and it hammers home the guilt of not having new development to discuss, but I also tend to think those can be fascinating topics, and I might not ever broach those were I not desperately in need of material.

Animated GIFs are the best. I started recording animated GIFs of my recent work back in March, and it’s caused a shift in how I think about new features. I find myself prioritizing tasks that can be conveyed visually, or looking for ways to convey elements that might otherwise be obscured. There was a recent article on Gamasutra built around a conversation between Rami Ismail and Adam Saltsman in which they discussed the benefits of developing games for spectators and how these same principles can benefit players as well. I’m finding the same is true of showing games in development; if an animated GIF can’t provide enough visual cues to communicate the experience, it’s highly probable the player won’t have enough feedback, either. Building for animated GIFs means building a better game in general.

My progress is more apparent to myself. Super Win the Game was developed in eight months, You Have to Win the Game in five. I’m now a year into Gunmetal Arcadia, and I don’t have a clear finish line in sight. And yes, some of that time has been spent on other things, and yes, it’s a big game, and yes, I’ve already made a lot of progress, but I still often find myself feeling like things are moving too slowly. That can be demoralizing. That’s where having a week-by-week chronicle of exactly what I was doing over the last year is nice. It’s easier to see how things are stacking up and easier to be proud of my accomplishments to date rather than critical of myself for not working faster.

I still can’t estimate tasks. I tried once, on David’s insistence, to estimate a projected release date for Gunmetal Arcadia by making a chart detailing all my tasks from the current day to launch, with time estimates for each. I had zero faith in it at the time, and that was before spending two months on free DLC for Super Win the Game and taking on contract work and launching a video series and having a kid. I bet it would look pretty silly now.

The point is, I can’t estimate tasks, and I won’t. And that in turn means I can’t gauge a release date. But I feel like I have a pretty good gut instinct for scope, and more importantly, as a solo developer, I’m in a position to be flexible and adapt to scope and schedule changes. It’s exactly that flexibility that led to the creation of Gunmetal Arcadia Zero. That put me on a path where it made sense to prioritize content production, and I feel like that’s turned out pretty well, as the last few weeks or months of enemy development have shown.

It feels like I’m trending in the right direction. I’ve probably talked about it before, but the primary reason I started this blog was the lack of response I got to the announcement and launch of Super Win the Game. With that game, I tried to manufacture awareness with a handful of press releases every few months, and I failed. So with Gunmetal Arcadia, I made a decision to grow awareness more naturally with a constant stream of information. It feels like that’s working. Will that be The Thing that solves my low sales? I don’t know. There probably isn’t Just One Thing. But as I mentioned in my one-year postmortem, it’s not as critical that I have an Eldritch-scale hit as I once thought. It’s still too early to say for sure whether this is going to achieve my goals, but the trends are encouraging.

I wear more hats than ever. So many hats. Game developer, blogger, video producer, contract worker, occasional email replier-to, family man, not complete social recluse…that’s a lot of hats. I’m not always good at wearing that many hats. Drink every time I say “hats.”

It’s a weird combination of adrenaline and self-pity that fuels me when the weight of the hats starts crushing down, a bleary-eyed, caffeine-propelled fixation on overcoming the odds. I don’t know if that’s healthy. I mean, I know it’s not healthy. But it gets things done.

I still have further aspirations. My ambitions are limitless. The more I do in this vein, the more I want to do. I’ve done a couple of dev streams and I’m currently dangling those as a carrot to fund my Patreon, but more recently, I’ve been doing some game streams just for fun, and I’m really enjoying those. In fact, I’m playing more games in general than I have in a long time. I’m finding myself itching to write or speak about these experiences from a perspective of fan and critic and designer. I’ve talked about this a few times now, and it’s hard to say if anything will come of it because it feels impractical for a number of reasons (too much on my plate already, too similar to what others are already doing, etc.), but it’s been stuck at the back of my mind for a while and I figured it was worth mentioning again.

This blog tends to be more a recounting of personal experiences than practical knowledge and education, but I’ve been wanting to branch out in that direction for some time. I mentioned my plans for a GDC talk recently, and as the deadline for the first draft is quickly approaching, I’ve been doing a little work on that this week. I’m excited about where it’s going, and hopefully it’ll turn out to be a good resource for future indie developers.

I have no idea what I’m doing. Feedback is an interesting beast. You want it until you have it, and then you do your best to ignore it. I tend to rely on gut instinct most of the time, for better or for worse. There are many other ways I could structure and push this blog. Should I be more high-level? More deep dives? Should I enable comments? Should I link old posts more often on social media? Should I blog more? Less? Change up my schedule? More art? More in-dev builds? More videos? More streams?

I know what sort of content I like to see on others’ blogs, and I try to aim for that, but it can be difficult to look at my own work with a critical eye. I’m not always sure my own perceptions or expectations align with everyone else’s. I can’t read my own work without knowing what went into both the development and the writing. I can’t read without knowing what comes next. Sometimes I’m not sure who I’m writing for. Gamers? Developers? Both? Someone else entirely?

This game is still exciting to me. Losing motivation is one of the more widely discussed aspects of solo game development, and I’m not going to pretend I don’t have days where I have to drag myself out of bed and force myself to work, but a year out from its inception, the promise of what Gunmetal Arcadia will be still gets my blood pumping. On more than one occasion, I’ve caught myself thinking, “Wouldn’t it be awesome if there were an NES game that played like Zelda II but with roguelike elements — oh yeah, I’m making that game.”

But even beyond the core promise, the day-to-day implementation work is actually fun, and I think the devlog is a large part of that. Development can sometimes feel thankless, programming in particular, especially when it’s on systems that the average player would never observe. Knowing that I can write about those and detail exactly what goes into the apparently mundane aspects of development makes it a little more palatable.

This blog and the accompanying video series are also a much wider outlet for communicating my ongoing development than the occasional tweets I posted for Super Win, especially once you account for forums and such where I can repost the content. It’s sometimes hard to gauge exactly how many readers and viewers I’m reaching, but it certainly doesn’t feel like I’m just talking into the void, and that’s the important thing.

Anyway, on that note…

::Disappears into the night::

Grab Bag 6

Having spent essentially my entire Thursday writing code, I felt like doing something completely different on Friday. I had a mental image of a cartoony rendition of Vireo jumping and striking a slime, and I wanted to see whether I could capture it.

new_key_art_sketch_smaller

I started with a quick notepad doodle and went through a few iterations of scanning, printing a light image I could trace over, refining, and repeating. Once I was happy with the shapes, I scanned it one last time and traced over it with vector shapes. I made a few more changes to the shapes once it was in that form, replacing the pointy edges of the sword with rounded ones and shifting arms and legs around to look a little more natural.

The design and colors were based roughly on the painting I did earlier this year for the promotional flyers, with a few superficial differences. As I talked about in a video a couple weeks back, I don’t consider myself an artist, and part of that is that I don’t really do character concepts in advance, preferring to make things up as I go. So it’s nice to have a few (mostly consistent) references for my lead character now.

new_key_mockup_smaller

Having already been through the whole process of sketching, refining, converting to vector, and coloring for Vireo, the slime monster went substantially faster.

I don’t really know what I’ll do with these characters yet. I’m probably not happy enough with this piece to make it my cover art, but I might paint in a backdrop at some point, and maybe it can be a wallpaper for the Steam trading cards system or something.


I made some extensive changes to my synth tools back in June, but I hadn’t had much of an opportunity to put them to use until this week, when I finally wrote my first complete piece of music using these new voices. I’m super happy with how it turned out. Take a listen:

On a subjective level, I think this piece has a really strong hook, but even from a strictly technical perspective, there’s so much more going on here than what I was capable of doing when I wrote the music for Super Win the Game. I had no support for different instruments or voices in that game; I could set the duty cycle of the pulse waves per channel, but that was about the extent of it.

For Gunmetal Arcadia, I’ve leveled up my tools to allow more expression while still staying roughly within the limitations of the 2A03 sound chip. I’m still limited to the usual four channels, but I can author a variety of different instruments that may be played on each channel. (I’m ignoring the DPCM channel for now, but that may change at some point if I decide I really need some good drum samples.)

In total, there are seven different instruments in this piece. The lead voice was my attempt to recreate the sound of Zelda II’s temple theme. It is a pulse wave with a 12.5% duty cycle and some pronounced vibrato.

The descending toms are a fun one. These are played on the triangle wave channel and are created by doing a very quick pitch shift from a full octave above the target note. These are typically played over the same noise channel bursts that I use to approximate kick and snare sounds. Because these occupy the triangle wave, they are mutually exclusive with the bassline, but that didn’t really create any problems on this particular piece.

I experimented a little with the bassline in this tune as well. The volume of the triangle wave can’t change, which limits its versatility compared to the pulse waves, but I tried to give each note a bit of a punchier sound by again doing a quick pitch shift. This one is much faster than the toms’, and it creates a sort of hard blat on the attack. It’s a digital, synthy sound, not terribly natural, but it’s at least a little more interesting than a vanilla triangle.
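If you’re curious what those quick pitch shifts look like in code, here’s a minimal sketch. This isn’t my actual synth tool code; the function name and parameter values are made up for illustration, but the idea is the same for both the toms and the bass blat: start some number of semitones above the target note and slide down over a short attack.

```cpp
#include <cmath>

// Hypothetical sketch of a quick attack pitch slide: start some number of
// semitones above the target note and settle on it over slideSeconds.
// Names and values are illustrative, not the real tool's API.
float PitchSlideHz(float targetHz, float startSemitonesAbove, float slideSeconds, float tSeconds)
{
    if (tSeconds >= slideSeconds)
        return targetHz;  // slide finished; hold the target pitch

    // Interpolate the offset in semitones, then convert to a frequency ratio
    // (12 equal-tempered semitones per octave).
    float remaining = 1.0f - (tSeconds / slideSeconds);
    float semitones = startSemitonesAbove * remaining;
    return targetHz * std::pow(2.0f, semitones / 12.0f);
}

// Illustrative usage: the "tom" starts a full octave (12 semitones) above the
// target and slides down quickly; the bass "blat" uses a much shorter slide.
// float tomHz  = PitchSlideHz(110.0f, 12.0f, 0.05f, t);
// float bassHz = PitchSlideHz(110.0f, 12.0f, 0.01f, t);
```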

The sort of creepy-sounding swells in the first part of the tune (the “verse,” as I tend to think of these things as following a pop song verse-chorus-bridge structure) use pulse width modulation, dynamically altering the duty cycle of the pulse wave to create a sort of “swirling” sound.
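For completeness, the swells boil down to something like the duty cycle LFO below. Again, this is a hedged sketch with made-up names and values rather than my real tool code.

```cpp
#include <cmath>

// Hypothetical sketch of pulse width modulation for the "swell" voice:
// sweep the pulse wave's duty cycle with a slow LFO. Values are illustrative;
// e.g. baseDuty = 0.25 and depth = 0.20 drift between 5% and 45% duty,
// which produces the swirling timbre described above.
float PwmDutyCycle(float baseDuty, float depth, float lfoHz, float tSeconds)
{
    const float twoPi = 6.28318530718f;
    return baseDuty + depth * std::sin(twoPi * lfoHz * tSeconds);
}
```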


Moving on to game work, I’ll start with a quick thing from a little over a week ago that didn’t make it into last week’s blog.

I have no plans to include the sort of fully submerged swimming bits like in Super Win the Game, but I did think it would be useful to have some swampy shallows like in Zelda II, something that can impede movement and create tension.

GunPreq 2015-10-07 21-28-31-580

When I first set up the Gunmetal Arcadia codebase, I nuked all the swimming code from Super Win, so I ended up selectively bringing back little bits of that.

Super Win defined two different types of fluids: water and “toxic” (a catch-all for acid, lava, or anything else requiring an additional powerup to enter safely). I haven’t yet brought over the code related to toxic swimming, as I’m not sure whether this will apply to Gunmetal, and I’d almost certainly want to represent it differently if it did. I can imagine a scenario where I might want Metroid-like lava that inflicts damage over time. That feels like a more useful alternative to the instant death lava pits of Zelda II, as I already have a few instant death opportunities in Gunmetal.


I was hoping to create two types of flying enemies this week, as I tweeted in advance of the work, but ultimately I only had time to do the first. (Whenever I get around to it, the second will be something akin to the bats in Zelda II, something that hangs on the ceiling, only dropping to attack when the player gets near.)

GunPreq 2015-10-15 18-34-14-629

For this one, tentatively called the “hoverfly” and using a placeholder asset that I previously used for my Medusa heads, I disabled physics (gravity) and decoupled horizontal and vertical movement. Vertical motion follows a sine wave, not unlike the Medusas, while horizontal motion is based on the distance in x-coordinates between the hoverfly and the player. It accelerates faster when it is far away, but its maximum velocity is capped to prevent it from reaching ludicrous speeds, and it is only allowed to turn around once it is a certain distance from the player, so it can never hover directly overhead. This forces it to strafe back and forth, and in conjunction with its bullets (fired only when it is facing the player), it makes for a good enemy that can create tension alongside other, more immediate threats.
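For anyone who wants to see how that breaks down, here’s a simplified sketch of the hoverfly update. This isn’t my actual entity/component code; the struct, constants, and function below are stand-ins for illustration.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch of the hoverfly's decoupled movement. Assumes a simple
// per-frame update with positions in pixels; names and numbers are illustrative.
struct Hoverfly
{
    float x = 0.0f, y = 0.0f;   // position
    float vx = 0.0f;            // horizontal velocity
    float baseY = 0.0f;         // center of the vertical bob
    float t = 0.0f;             // accumulated time for the sine wave
    int facing = 1;             // +1 = right, -1 = left
};

void UpdateHoverfly(Hoverfly& fly, float playerX, float dt)
{
    // Vertical motion: a pure sine bob, ignoring gravity entirely.
    const float bobAmplitude = 8.0f;
    const float bobRateHz = 0.5f;
    fly.t += dt;
    fly.y = fly.baseY + bobAmplitude * std::sin(6.2831853f * bobRateHz * fly.t);

    // Horizontal motion: accelerate toward the player, scaling with distance,
    // but cap the speed so it never becomes ludicrous.
    const float accelPerPixel = 2.0f;
    const float maxSpeed = 60.0f;
    const float minTurnDistance = 48.0f;

    float dx = playerX - fly.x;
    float newVx = std::clamp(fly.vx + accelPerPixel * dx * dt, -maxSpeed, maxSpeed);

    // Only allowed to reverse direction once it is far enough from the player,
    // so it can never hover directly overhead; it strafes back and forth instead.
    bool canTurn = std::abs(dx) > minTurnDistance;
    bool wouldReverse = (newVx < 0.0f) != (fly.vx < 0.0f);
    if (canTurn || !wouldReverse)
        fly.vx = newVx;

    fly.x += fly.vx * dt;
    fly.facing = (fly.vx < 0.0f) ? -1 : 1;  // elsewhere, bullets fire only while facing the player
}
```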

This was my first case of aiming shots at the player that aren’t affected by gravity. I discussed the case of shots that are affected by gravity in an earlier blog and mentioned this case in passing. It’s a simpler problem, but one that I hadn’t yet had cause to implement.
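The gravity-free version really is as simple as it sounds: normalize the direction to the target and scale by the projectile speed. A minimal sketch, with made-up types rather than my actual projectile code:

```cpp
#include <cmath>

// Hypothetical sketch: a gravity-free shot aimed at the player. With no
// gravity, the velocity is just a normalized direction times the speed.
struct Vec2 { float x, y; };

Vec2 AimShot(Vec2 muzzle, Vec2 target, float speed)
{
    float dx = target.x - muzzle.x;
    float dy = target.y - muzzle.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len < 0.0001f)
        return Vec2{ 0.0f, 0.0f };  // degenerate case: target is on top of the muzzle

    // The shot travels in a straight line toward wherever the player was at
    // the moment of firing.
    return Vec2{ speed * dx / len, speed * dy / len };
}
```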


I already have ladders for facilitating vertical movement (and have had them for nearly the entire development of this game, going back to last December), but there have been a couple instances in my test maps where I’ve really wanted elevators for one reason or another.

Fundamentally, this didn’t seem like a difficult problem. I already had moving platforms in Super Win; the big difference here would be that I’d need to alter their movement based on the player’s posture: crouch to move down, look up to move up.

GunPreq 2015-10-13 03-00-51-531

Where this got a little trickier is in how my collision system handles collision response.

When you take an elevator up, I have an invisible blocker sitting at the top of the shaft that only the elevator itself can collide against. It hits this thing and stops and the player disembarks. The problem I was encountering was that, even though the elevator would appear to be flush with the adjacent floor, the player would get stuck on the corner and would have to jump to leave the elevator.

I knew exactly why this was happening as soon as I saw it. My collision system defines a small value as the thickness of all collidable surfaces. (To provide a sense of scale, this is 0.015 pixels in Gunmetal Arcadia.) In practice, this means that in normal cases, colliders should always stop 0.015 pixels away from each other, but they are able to correctly handle colliding anywhere within this buffer space. Once they are actually physically intersecting each other, no collision result will be produced, and the objects will phase through each other. This helps to account for floating point imprecision, but in this case, it was also creating a bug. The elevator would stop 0.015 pixels below the adjacent floor, and the player would catch on that tiny edge.

My fix for this ended up being a bit of a one-off hack, but given how unlikely it is to encounter another bug of this sort, I didn’t feel like a more general solution would have been appropriate. The elevator gets a callback when it collides with something, and in response, it rounds its own position to the nearest integer (which in this case also means the nearest pixel). This makes it flush with the adjacent floor so the player can walk safely between the two. It also puts it flush with the invisible collider, as close as it can possibly be without phasing through.
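In sketch form, the fix amounts to something like this. The names are hypothetical; the real version lives in my collision response callback, but the rounding is the whole trick.

```cpp
#include <cmath>

// Hypothetical sketch of the elevator's collision callback: on hitting the
// invisible end-of-shaft blocker, snap the vertical position to the nearest
// integer (i.e., the nearest pixel) so the platform sits flush with the
// adjacent floor instead of resting a surface-thickness (0.015 px) below it.
struct Elevator
{
    float y = 0.0f;

    void OnCollide()
    {
        y = std::round(y);  // flush with both the floor and the invisible blocker
    }
};
```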

Since I’m dealing with axis-aligned (and frequently integer-aligned) bounding boxes, some of these systems are perhaps a little aggressive for what this game needs, but it’s never been my goal to support 2D platformer games exclusively, so it’s important to build with other use cases in mind.


Finally, I checked off a longstanding TODO and brought over some screen shake stuff from Super Win. My biggest use case was for bombs, but I’ll probably also use this for boss death effects and maybe one or two other things. I’m not really a huge fan of the proliferation of overblown screen shake among indie games recently (yes, juice it or lose it, but maybe show a little restraint?), so I’ve tried to keep this fairly quick and conservative. It’s a little difficult to tell in the animated GIF below, but the shake is also quantized in time, only updating every two or three frames. This gives it an interesting sort of “chunky” feel that, to my eye, is a little more authentic to NES games and feels more stable than wild rubberbandy screen shake.

GunPreq 2015-10-15 00-26-06-041
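If the GIF doesn’t quite sell it, here’s roughly what I mean by quantizing the shake in time, as a hedged sketch with made-up frame counts and amplitudes rather than my actual camera code.

```cpp
#include <cstdlib>

// Hypothetical sketch of time-quantized screen shake: a new random offset is
// only chosen every few frames, giving the "chunky," NES-ish feel described
// above. Numbers are illustrative.
struct ScreenShake
{
    float offsetX = 0.0f, offsetY = 0.0f;
    int framesRemaining = 0;
    int framesUntilNextOffset = 0;

    void Start(int durationFrames) { framesRemaining = durationFrames; }

    void Tick()
    {
        if (framesRemaining <= 0)
        {
            offsetX = offsetY = 0.0f;
            return;
        }
        --framesRemaining;

        if (--framesUntilNextOffset <= 0)
        {
            framesUntilNextOffset = 3;       // only re-roll the offset every third frame
            const float amplitude = 2.0f;    // pixels
            offsetX = amplitude * static_cast<float>(std::rand() % 3 - 1);  // -2, 0, or +2
            offsetY = amplitude * static_cast<float>(std::rand() % 3 - 1);
        }
    }
};
```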


A couple last things. I’m planning to take some time off for paternity leave starting in another week or two. I may still be working on the game when I’m able (I mean, that’s kind of what I do for fun), but I can’t make any guarantees as to what my blogging and video production schedule will look like. If nothing else, my Twitter should stay active.

Sometime in the extremely near future, I’m also going to be putting together a first draft of materials for a GDC talk next year, so I may devote a video log or two to dry runs of this material to get comfortable with it and solicit feedback. Stay tuned!

Control Binding

It’s been a while since I’ve done a deep dive into some aspect of my engine, so I thought I’d talk a little bit about how control binding works, some of the concessions I’ve made to support various gameplay requirements, and some general thoughts on input handling best practices.

If I go back to the earliest origins of input handling in my engine, it began with polling a DirectInput device for complete mouse and keyboard state each frame. By comparing the current state to a cached state from the previous frame, I could catch key-up and key-down events in addition to knowing whether a key was pressed (or a mouse button, or whether a mouse axis was moving). This pattern remains the basis for all my input code: poll, compare values against the previous frame’s, and react.
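In sketch form, that poll-and-compare pattern looks something like the code below. The key count and structures are illustrative stand-ins, not my actual DirectInput wrapper.

```cpp
#include <array>
#include <cstdint>

// Hypothetical sketch of deriving pressed/released edges by diffing the
// current poll against last frame's cached state.
constexpr int kNumKeys = 256;

struct KeyboardState
{
    std::array<uint8_t, kNumKeys> down{};  // 1 = key held this frame
};

struct KeyEvents
{
    std::array<uint8_t, kNumKeys> pressed{};   // down this frame, up last frame
    std::array<uint8_t, kNumKeys> released{};  // up this frame, down last frame
};

KeyEvents DiffKeyboard(const KeyboardState& prev, const KeyboardState& curr)
{
    KeyEvents events;
    for (int i = 0; i < kNumKeys; ++i)
    {
        events.pressed[i]  = static_cast<uint8_t>( curr.down[i] && !prev.down[i]);
        events.released[i] = static_cast<uint8_t>(!curr.down[i] &&  prev.down[i]);
    }
    return events;
}
```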

My earliest games and game demos had hardcoded controls simply because I had not taken the time to develop a complete control binding system, but it was always a goal of mine. As I continued developing my engine, it eventually became a top priority. The earliest manifestations of this system appeared in August 2008, as I was nearing completion on Arc Aether Anomalies. At the time, I had only gotten as far as abstracting device-specific inputs (keys, buttons, and axes) from high-level game-facing controls (move, shoot, etc.). The specific inputs were still hardcoded, but after an initial setup pass, they were safely abstracted away and never had to be referenced directly again. Instead of asking, “Is the player clicking the left mouse button and/or pressing the right trigger?” I could ask, “Is the player pressing the fire button?” and it would mean the same thing. This simplified my game code by allowing me to test a single value automatically aggregated from any number of sources under the hood, but it would be another two years before this would become a user-facing system that could support arbitrary control bindings defined by the player.

So let’s talk implementation. My engine defines an “input” as any single element of a device which can be polled to produce a value in the range [0, 1]. These may be keys on a keyboard, mouse buttons and axes, gamepad buttons and triggers, analog stick and joystick axes, wheels, sliders, directional pads and POV hats, whatever. The polling methods and return values for each of these vary somewhat from API to API, so my first line of attack in normalizing this data is to bring everything into the [0, 1] range. Sometimes this is trivial; keys are either up (0) or down (1). Most devices report axes independently, so for instance, analog stick axes on gamepads are reported as separate X and Y values in some range, often as signed shorts [-32768, +32767]. This can be scaled down into the [-1, +1] range and then separated into positive and negative axes, each in [0, 1]. An exception to this is POV hats (a category which usually also includes directional pads); these are reported in terms of hundredths of a degree clockwise from the top, or the special value 0xFFFF (65535) to indicate no movement. (As a personal aside, if anyone’s ever encountered a POV hat that took advantage of this precision, I’d be fascinated to learn more. I’ve never seen one that didn’t clamp to eight-way directions.)
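As a rough illustration of that normalization step, splitting a signed gamepad axis into two separate [0, 1] inputs might look like this. Hypothetical function, assuming a signed 16-bit axis report:

```cpp
#include <algorithm>

// Hypothetical sketch: normalize a signed axis into two [0, 1] inputs,
// one for each direction.
void SplitAxis(short raw, float& outPositive, float& outNegative)
{
    // Scale [-32768, +32767] down to roughly [-1, +1]...
    float value = std::clamp(static_cast<float>(raw) / 32767.0f, -1.0f, 1.0f);

    // ...then separate into a positive and a negative input, each in [0, 1].
    outPositive = std::max(value, 0.0f);
    outNegative = std::max(-value, 0.0f);
}
```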

My engine defines a “control” as a list of inputs with a name indicating its purpose in the game (“walk left,” “look up,” “jump,” “attack,” and so on). Controls may also optionally specify rules regarding non-linear scaling and acceleration over time, which is desirable for certain cases like camera rotation in a first-person game. The state of a control may be queried, and the result will be the sum of all its inputs, clamped to [0, 1], with these additional rules applied.

input-control
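To make the input/control relationship concrete, here’s a stripped-down sketch. My real implementation deals in device and element identifiers rather than polling callbacks, so treat this as illustrative only.

```cpp
#include <algorithm>
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch of a "control": a named list of inputs whose polled
// values are summed and clamped to [0, 1]. Each input is represented here as
// a polling function returning a value in [0, 1] for brevity.
struct Control
{
    std::string name;                             // e.g. "jump", "walk left"
    std::vector<std::function<float()>> inputs;   // each returns a value in [0, 1]

    float Query() const
    {
        float sum = 0.0f;
        for (const auto& poll : inputs)
            sum += poll();

        // Optional rules (non-linear scaling, acceleration over time) would be
        // applied here before returning.
        return std::clamp(sum, 0.0f, 1.0f);
    }
};
```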

The systems described above were sufficient for shipping Arc Aether Anomalies, but since then, I’ve added support for user-defined control bindings. As you could probably guess, this involves changing the inputs associated with a control at runtime based on player input. This is simple in principle, but there are a number of tricky issues that crop up when putting it into practice.

One of the first “gotchas” I encountered in implementing this was dealing with making input bindings unique. That is to say, if the left mouse button is currently bound to “shoot” and I attempt to bind it to “use,” it should be removed from the “shoot” control. This raises a number of questions about the nature of how controls should or should not overlap. For instance, let’s assume I am using this same control binding system for driving input to menus. It would not be reasonable to assume that the left mouse button could only be associated with shooting in the game or activating buttons in the menu but not both. To address this, I introduced the notion of “control groups.” Each group deals with a different mode or context, and this is transparently displayed within the control binding menus, as seen in this screenshot from Super Win the Game.

superwin-bindings

Each input is considered unique only within the context of the current group. Assigning the left mouse button to a control in the “gameplay” group has no effect on its bindings in the “menu” group.

Unfortunately, even this solution has proven to be less than effective as the number of actions the player may take has grown with each game I’ve made. In the original You Have to Win the Game, the player’s actions were limited to walking, jumping, and jumping down through shallow platforms. There was essentially no potential for overlap or conflict here. In Super Win, I added some new abilities, including entering doors. The natural default for this control was up: the up arrow key on the keyboard (or W), up on the d-pad, up on the analog stick, whatever device the player might be using, up. This created a conflict in that I was also already using the up arrow key for jumping when playing with the keyboard. Sure, I could have changed that one to Z, X, C, or the spacebar, but it felt wrong not to support both at once. To solve this, I added another system wherein specific controls within the same group can optionally allow the same input to be bound to each. In this way, the up arrow key can be both “jump” and “enter doors” in Super Win.

I’m not totally happy with that solution. It’s an odd one-off backpedaling of another system’s effects, and it’s not made clear to the end user in any way other than trial and error. It would be nice if I could scope my controls better such that these cases simply could not occur, but this doesn’t feel like a viable solution either. Consider the case described above; if “jump” and “enter doors” were merged into a single control, then the A button on the gamepad would activate doors by default, almost certainly never the player’s intent. On the bright side, the fact that this workaround is implemented at the control level rather than the input level means that it’s at least somewhat easy to figure out where these conflicts might arise and suppress them.


I have a whole bunch of input-related notes remaining and no good way to tie them into a larger narrative, so I’m just gonna rapid fire some thoughts here.

Mouse input is an oddity. Unlike most other devices, the values it reports when polled effectively have a delta time premultiplied into them and are unbounded. An analog stick may only be pushed as far as its physical bounds will allow, but a mouse may be pushed arbitrarily far in a single frame. This breaks some of the assumptions I make regarding the [0, 1] range of inputs and requires some workarounds of its own. While a control is actively receiving mouse inputs, it does not do any acceleration or non-linear scaling. This has the desired result on both fronts. Our input value may be well outside the [0, 1] range, which our acceleration and non-linear scaling rules are not equipped to handle, and from the player’s perspective, mouse input should not be subject to these rules anyway, as they could create unnatural, non-1:1 interpretations of the mouse’s movement.

Two-dimensional inputs such as analog sticks require a little bit of special attention. As I mentioned above, I decompose these inputs not only into the separate X and Y axes as they are typically reported by the API, but also into separate positive and negative regions. As I’ve noted, this is advantageous because it allows me to treat all input as being within the [0, 1] range, although it is perhaps a little odd from a user experience standpoint, since all four points of the compass have to be bound separately. (Counterpoint: this is arguably the ultimate expression of Y axis inversion.) But regardless of how the axes are represented under the hood, it’s important to recognize that the input exists as a single physical thing in meatspace (the real world) and handle it accordingly, specifically with regard to dead zones and non-linear scaling.

If you were to draw a picture, what would the dead zones on an analog stick look like? There’s no wrong answer, but it’s important to consider every option so you can make an informed decision. Perhaps the easiest sort of dead zone to implement is a cross- or plus-shaped one. This can be done by clamping the input from each axis separately. There are times this may be desirable, for instance if you want to suppress errant side-to-side motion while holding up/forward in a first-person game. But I tend to prefer circular dead zones, and to accomplish that, we must look at both axes together. We can represent the axes as a two-dimensional vector and then apply dead zone clamping to its magnitude. Then we can decompose the vector into separate X and Y elements and output the result of each.

sample-dead-zones

I tend to be fairly aggressive with my dead zones, on account of frustrations I’ve had in the past in which movement persisted even when the stick was released (because it was physically sticking a little bit to one side) and in which I could not reach full speed pushing in certain directions. The latter case is where outer dead zones are important. I model both inner and outer dead zones as circles, clamping the magnitude of the 2D vector to within this range and rescaling the result to [0, 1].
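Here’s a minimal sketch of that radial dead zone, with illustrative radii rather than my actual tuned values.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch of a circular dead zone: treat the stick as a 2D vector,
// clamp and rescale its magnitude between an inner and outer radius, then
// decompose back into X and Y.
void ApplyRadialDeadZone(float& x, float& y,
                         float innerRadius = 0.25f, float outerRadius = 0.9f)
{
    float magnitude = std::sqrt(x * x + y * y);
    if (magnitude < innerRadius)
    {
        x = y = 0.0f;  // inside the inner dead zone: no input at all
        return;
    }

    // Rescale so innerRadius maps to 0 and outerRadius maps to 1, ensuring
    // full deflection is reachable in every direction.
    float scaled = (std::min(magnitude, outerRadius) - innerRadius)
                 / (outerRadius - innerRadius);
    x = x / magnitude * scaled;
    y = y / magnitude * scaled;
}
```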

Non-linear scaling is an interesting one if only because so many games get it wrong — and I’m not talking about more easily forgiven hobbyist and indie games, I’m talking about huge multi-million-dollar AAA franchise games. A surprising number of high-profile titles have made the mistake of applying non-linear scaling to each axis independently. It’s perhaps a minor quibble in the big picture, but it’s a crucial part of maintaining a fluid, natural game feel, and I hope by now I’ve made it clear just how important game feel is to me.

So once again, the solution is to combine your X and Y axes into a single 2D vector, apply non-linear scaling to its magnitude as a whole rather than its separate X and Y values alone, then decompose it into its components and return the one you need.

non-linear

To illustrate why this is important (and possibly ruin a number of otherwise great first-person shooters for you once you notice it), consider the case of pushing the analog stick 45 degrees up and to the right. Assuming a circular gate and perfectly accurate hardware, this should give us X and Y values of 0.707 each (which is ½√2 because the Pythagorean theorem blah blah). I’m ignoring sign here; it’s possible that up on the stick would be negative Y but whatever. Let’s say then that we’re doing some non-linear scaling with a power of two. If we did this on each axis separately, we’d get results of 0.5 for both the X and Y axes. This gives us a resulting magnitude of only 0.707. If we used this method, we’d only be getting 70% of the intended input (70% movement, 70% rotation, etc.) despite the fact that the stick is pushed as far as it’s physically able! If instead, we formed a 2D vector first, its magnitude would be 1.0. Then non-linear scaling with a power of two would have no effect on it, and the resulting magnitude would still be 1.0, correctly representing the player’s intent.
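In code, the correct order of operations is simply “curve the magnitude, not the axes.” Here’s a hedged sketch, assuming the dead zone has already been applied and the magnitude is in [0, 1]; the function name and default power are made up for illustration.

```cpp
#include <cmath>

// Hypothetical sketch of applying non-linear scaling to the stick's magnitude
// rather than to each axis independently. With x = y = 0.707 and a power of 2,
// the magnitude is 1.0 and is left unchanged, preserving the player's full-tilt
// intent; curving the axes separately would have lost roughly 30% of it.
void ApplyResponseCurve(float& x, float& y, float power = 2.0f)
{
    float magnitude = std::sqrt(x * x + y * y);
    if (magnitude <= 0.0f)
        return;

    float curved = std::pow(magnitude, power);  // assumes magnitude is already in [0, 1]
    x = x / magnitude * curved;
    y = y / magnitude * curved;
}
```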


There are more details I could talk about with regard to my input system, from dealing with a multitude of APIs across each platform, to interpreting manufacturer and device GUIDs to make an educated guess as to how to display button prompts, to ideas for the future like moving input to a separate thread that can run at full speed regardless of the game’s state. But this is the nuts and bolts of my control binding system specifically, and I’m pretty happy with how it’s turned out, and hopefully it’s beneficial to players to have the flexibility to define their own control schemes.

Grab Bag 5

It’s been another week of various gameplay tasks, bug fixes, and editor work. I don’t have a strong thematic hook to tie all these bits together this week, so…it’s time for another Grab Bag!

GunPreq 2015-10-05 03-25-49-333

I’m getting close to wrapping development on the biped enemy. I still need to modify his behavior when he starts pursuing the player, to increase foot speed and disregard the edges of platforms, but otherwise, I’ve knocked out a lot of the big issues here. When this guy initially spawns and is unaware of the player, he’ll patrol back and forth, as I showed in last week’s video. More recently, I’ve added a line-of-sight test to the player character. Once he has eyes on the player, the biped will begin pursuing, close to melee distance, and attack. I’m pretty happy with how this feels already, but I’ll probably want to continue tuning it to strike the right balance between fairness and difficulty. It doesn’t feel right when the enemy can immediately recover from damage and knockback and return fire, but combat becomes trivially easy if enemies can be caught in a loop of taking damage and being stunned. I don’t yet have a good solution for that one, but I’ll be taking a look at similar games soon to see how this sort of thing is typically handled.


I’ve been tackling a few bugs recently. One I tweeted about earlier this week after finally discovering a reliable repro.

GunPreq 2015-10-02 20-23-01-500

It was immediately obvious what was happening once I discovered this repro, but this bug had been lingering for a week or two in that “seen once, beginning to doubt my own eyes” state. The issue here stems from reusing technology originally built for keeping the player character attached to moving platforms to attach weapons to their owners’ hands. In the event that a viable attachment base (a moving platform, or, in this case, the player character) sweeps upward into an entity that can be attached to others, it parents itself to that thing, no questions asked.

This could also happen when the biped enemy jumped into the player’s sword, and I had seen this happen once or twice during normal gameplay testing as well.

GunPreq 2015-10-02 20-23-37-817

The quick fix was to disallow entities from changing bases if they’re already hard-attached to something else. As I mentioned in a previous video, a “hard attachment” is something which is treated as an extension of the thing it’s attached to, rigidly moving where its parent moves with no regard for collision. By comparison, when the player stands on a moving platform, they are “soft-attached” and will try to move with the platform, but will stop if they collide with a wall.


I had encountered another bug a few times in the last couple of weeks that had me stumped. Infrequently, during normal gameplay, I would get a crash with this message:

purevfc

I had run across this one a few times while testing from the editor (i.e., running a release build with no debugger attached), but I hadn’t been able to get a repro in a debug build, nor could I find any reliable repro steps.

Eventually, I did hit this while debugging. It was a memory access violation, incurred when attempting to do an initial zero-time tick on an entity that had been created in the previous frame but which had been immediately destroyed by some other force.

That information was sufficient to theorize what was going on, and from there, I was able to narrow down a repro. What was happening (and this explains why the bug was so rare and also why it was difficult to test even once I understood the problem) was that, if the player swung their sword at exactly the same moment they took damage, the game would crash. When the player performs a melee attack, the weapon entity is created in response to playing an animation sequence that wants to spawn an attachment. As such, the sword entity is spawned in the middle of doing normal game loop delta-time entity ticking. Entities spawned mid-tick are held in a list to perform an initial zero-time tick once all other entities have been ticked, in order to ensure they are set up properly. So a pointer to the newly created sword entity is sitting in a list somewhere, and then, as we continue ticking entities this frame, the player takes damage. In response to taking damage, we switch to a hurt/stunned animation sequence, one that does not have an attachment point for a melee weapon. This destroys the sword attachment we just created, but the pointer we had saved off earlier remains. At the end of the tick, we go to do a zero-time tick on it and crash reading bad memory.

In many cases, I can avoid these sorts of issues by using handles instead of pointers, as a handle to a destroyed entity would be correctly recognized as invalid and not dereferenced. In this case, however, I’m dealing with pointers to “tickables,” an abstract class that exists outside of my entity/component hierarchy and is therefore unable to be referenced by a handle. (I should stress this limitation is specific to my engine architecture and could be considered motivation for a more generic refactoring of handles, but I’m not quite there yet.)
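For reference, the sort of generation-checked handle that would have caught this looks something like the sketch below. This is not my engine’s actual handle implementation, just an illustration of the idea: a handle stores a slot index plus a generation number, and resolving it fails safely once the slot has been invalidated.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of generation-checked handles. Remove() bumps the slot's
// generation, so any outstanding handle to that slot resolves to nullptr
// instead of a dangling pointer.
struct Handle
{
    uint32_t index = 0;
    uint32_t generation = 0;
};

template <typename T>
class HandlePool
{
public:
    Handle Add(T* object)
    {
        m_objects.push_back(object);
        m_generations.push_back(0);
        return Handle{ static_cast<uint32_t>(m_objects.size() - 1), 0 };
    }

    void Remove(Handle h)
    {
        if (h.index < m_objects.size() && m_generations[h.index] == h.generation)
        {
            m_objects[h.index] = nullptr;
            ++m_generations[h.index];  // invalidates any outstanding handles
        }
    }

    // Returns nullptr instead of a dangling pointer if the object is gone.
    T* Resolve(Handle h) const
    {
        if (h.index >= m_objects.size() || m_generations[h.index] != h.generation)
            return nullptr;
        return m_objects[h.index];
    }

private:
    std::vector<T*> m_objects;
    std::vector<uint32_t> m_generations;
};
```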


One of my development machines is a laptop I purchased in early 2011 for doing some networked game testing. It’s become my primary Linux dev machine by necessity, so I find myself testing my games on it fairly regularly. This laptop has a GeForce G210M video card and is capable of running some 3D games fairly well, so in theory, a 2D game should present no problem. What I’ve found, however, is that more so than any of my other test machines, this laptop gets very pixel-bound very quickly. This has become a problem over time as I’ve continued adding fullscreen passes here and there throughout my game framework to support features like gamma adjustment or blurring out the game scene underneath the menus.

Recently, it occurred to me that if I could bypass some of these default features, I could support an optional high-performance mode. This mode would exclude any CRT simulation rendering, but it would also avoid as many fullscreen draws as possible. It would prioritize speed over visual quality. My goal was to get as close as possible to rendering the game scene directly to the backbuffer. Since I’m already rendering the game scene to a pixel-perfect 256×240 render target texture, I chose not to alter that path, but once the scene has been composed, I render a single fullscreen quad to the backbuffer, and that’s it. Under these conditions, I can maintain a constant 60 frames per second while running in fullscreen on my laptop, so I’m pretty happy with that. I haven’t yet added this option to the menu, so it’s not really a fully supported feature yet, but it can be enabled from the console or config file, and at the very least, I plan to ship that implementation.


I did a little bit of editor work a few days ago in the interest of providing myself with a more comfortable and familiar environment for writing game scripting markup. As I’ve mentioned in a couple recent videos, much of my gameplay — enemy AI, in particular — is authored with XML markup in an editor window. This isn’t necessarily the best environment, and I’ve talked in the past about how large, wordy blocks of markup are usually a good indication that a feature needs to be broken out into a separate visual interface, as I’ve done in the past with animation and collision. However, it seems unlikely that I’ll move entirely away from writing markup by hand anytime soon, certainly not in time to ship the Gunmetal Arcadia titles, so I figured it would be worth my time to make some improvements to the interface as long as I’m going to continue using it for the foreseeable future.

markup_better

I’ve been using a plain vanilla C# / .NET RichTextBox control for authoring markup within the editor. (For external definitions, I use Notepad++, which handles XML nicely, highlighting paired tags and so on.) RichTextBoxes are a decent place to start, but they don’t do everything exactly as I’d like. In particular, when I’m dealing with blocks of markup, the ability to select multiple lines and tab or shift-tab to indent and unindent is critical. RichTextBoxes don’t do this out of the box; by default, selecting a region and hitting Tab will overwrite the selected text with a single tab. I wound up implementing this myself, checking for lines in the selected region and inserting or removing tabs at the start of each line as appropriate. I also handle the case of hitting Shift+Tab when no text is selected to unindent the current line if able.

The next feature I wanted was Ctrl+F and F3 (and Shift+F3) to search for text in the markup window. As it turns out, C# makes this remarkably simple, thanks to a pair of string searching functions IndexOf() and LastIndexOf(). Between the two of these, it’s easy to look forward and backward from the cursor position to find the previous or next instance of the search phrase. Currently, I don’t support the usual “match case” or “whole word only” criteria, but it’s conceivable I might want these at some point in the future.

Finally, in the interest of legibility (as some of my AI markup has grown to as many as fifteen nested tags in places), I’ve reduced the size of tabs from their default 48 pixels to 25, allowing deeply nested tags to remain more visible.


I’ve been reevaluating my schedule in light of recent Real Life Events encroaching on my work hours. I haven’t committed to any changes yet, but I’m considering shifting videos from Wednesday to Friday. We’ll see. I’ll probably be taking some time off from my regular blogging and video production schedule for paternity leave in early November or thereabouts. What exactly that will look like and how long I’ll be away are still up in the air.