Even with two shipped games under its belt, my entity-component system (ECS) is still one of the youngest features of my engine, and it should come as no surprise that I’ve discussed changes to it many times throughout the development of the Gunmetal Arcadia titles.

A few weeks back, just before I resumed my normal weekly blogging schedule, I tweeted about some ECS refactoring I was doing to better support dynamically modifying enemies and other entities at creation time. Then just last week, I made yet another change to my entity serialization, so I’ll be covering both of those today.

Entity creation from multiple definitions

Last August, I added a feature to my editor to allow me to split entity definition markup across multiple hierarchical templates, giving birth to a paradigm that I compared, perhaps inadvisably, to multiple inheritance.


This solved a problem of having a large amount of copied and pasted markup across entities that were similar in some ways but which couldn’t be easily generalized to a single common ancestor.

This proved to be a huge help in shipping Gunmetal Arcadia Zero, but I also relied on the ability to add markup to specific instances of entities which had been placed in levels. I often used this to flag certain enemies as being counted towards challenge room locking and unlocking, or to spawn rewards when bosses died, or other situation-dependent events. It would not have been appropriate to create a new entity template for each of these since each one would only be instantiated once; keeping that markup associated with the instance that needed it was and is the most logical option.


Unfortunately, the procedurally generated nature of Gunmetal Arcadia precludes the use of instance-specific markup. With a small number of exceptions, entities are not placed in the editor at all, but are instead spawned at runtime at known-safe locations given information about spawn “opportunities.” As a result, I was forced to find an alternative that would play nice with procedural generation.

My solution was to extend the concept of aggregating multiple data definition sources from the editor to the game. Historically, every entity had been spawned from a single definition file containing all the markup it would ever need. The notions of an entity hierarchy and instance-specific markup were strictly editor conceits; all this data would eventually get concatenated together into a single file for each and every entity.

As of a recent change, I can now specify an arbitrarily large list of data definition files from which to build an entity, rather than just one. This is conceptually similar to concatenating markup into a single file in the editor, but rather than iterating through markup that was assembled offline, I spin through all the markup in one file, then proceed to the next, and continue until all files have been exhausted. The end result is identical, provided the source is otherwise the same aside from being split across multiple files.
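The aggregation step can be sketched roughly like this. This is a minimal illustration, not the engine's actual API; the file names, tag strings, and the `BuildEntityMarkup` function are all invented for the example:

```cpp
#include <map>
#include <string>
#include <vector>

using Markup = std::vector<std::string>; // one tag per markup element

// Stand-in for the editor-exported definition files on disk.
std::map<std::string, Markup> g_definitions = {
    { "skeleton.xml",        { "sprite", "health", "ai" } },
    { "challenge_mixin.xml", { "challenge_counter" } },
};

// Build an entity's component markup from an ordered list of files.
// Spinning through each file in turn produces a result identical to
// concatenating the same markup into one file offline.
Markup BuildEntityMarkup(const std::vector<std::string>& files)
{
    Markup result;
    for (const auto& file : files)
    {
        const Markup& markup = g_definitions.at(file);
        result.insert(result.end(), markup.begin(), markup.end());
    }
    return result;
}
```

Augmenting the list at spawn time, say appending `challenge_mixin.xml` when spawning an enemy inside a challenge room, then produces the "mixin" behavior described below without any new enemy-specific templates.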

In this way, I’ve been able to solve the problem of marking up entities based on situation, with my first use case being to count the number of living enemies for locking and unlocking challenge rooms, as I mentioned briefly in Episode 40. I took the same instance-specific markup that I used in Gunmetal Arcadia Zero for flagging enemies as counted for challenge rooms, and I moved it to a new “mixin” entity template. Then I simply need to augment the list of entity definitions with this one when spawning enemies within challenge rooms, and they automatically get counted without the need to create challenge-room-specific variations of every single enemy type.

Since solving that first use case, I've also found a use for this when working on spawning doors between the level hub and subrooms. Historically, doors have been assigned a GUID by virtue of being placed in the editor. Entities require a GUID to be serialized (saved and loaded to disk, but also saved to memory to be recalled during the same session), so if we want to preserve the state of a locked or unlocked door, we need one. Now that doors are spawned dynamically, they require a dynamically assigned GUID. I could have edited the primary door template in the editor and added some markup to accomplish this, on the assumption that all doors in this game will be dynamically spawned and will all require a dynamically assigned GUID.


However, in light of this new tool, it feels more correct to split the markup for dynamically assigning a GUID out into its own “mixin” template that can be applied at runtime. In this way, I can preserve the way doors have worked in the past in case I ever need to rely on that behavior again. I like this too because it feels truer; there’s nothing intrinsically “dynamically spawned” about the concept of a door, so putting that markup in the door template feels kludgey. Splitting it out into its own thing that can augment a door, effectively turning it into a different kind of door, makes more sense.
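As a sketch, such a mixin template could be as small as a single tag. The tag name here is hypothetical, not the engine's actual markup:

```xml
<!-- Mixin template applied at spawn time to any entity that needs a
     dynamically assigned GUID; "dynamic-guid" is an invented name. -->
<dynamic-guid />
```

Appending a file like this to a door's definition list at runtime would be equivalent to having pasted the tag into the door template itself, without baking the assumption into the template.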

Additional component serialization

Continuing with the development of doors, I ran into a couple of unusual problems (not pictured below) in which entering the same door multiple times could, depending on the circumstance,

  • Drop the player in front of a different door in the same room
  • Drop the player in the middle of the screen rather than a door
  • Crash the game

At least one of these turned out to be a fairly boring bug caused by failure to correctly account for the position of the door within a large scrolling room. But the more interesting issue, and the one I’m going to talk about here, dealt with some assumptions I’ve made in the past regarding data definition markup and serialized entity-component state.


In the past, I’ve typically assumed that the data definition markup I provide when constructing an entity represents its whole. Regardless of whether that markup is contained within a single file or split across multiple, it has always represented the complete, fully constructed state of the entity and all its components. Recently, I broke that assumption.

The situation was this: when placing usable doors on either side of a hub/subroom transition, each needs to know about the other in order to specify a destination. In the past, this has been handled by the editor. I mark one door as “Side A,” the other as “Side B,” and the editor automatically adds some markup to each letting it know the other exists and identifying itself to the other. But once again, I can’t rely on editor tools, as these are now spawned dynamically at runtime.

Now, I might not have the actual doors available to me in the editor, but what I do have, as I’ve mentioned before, are entities representing known-safe spawn points. I have a set of these for enemies, and I have another set for doors. And my solution to this particular problem was to add a new component to the door spawn points called a “door proxy.” This component is responsible for making the link between the two doors on either side of the transition; by virtue of the fact that it doesn’t get constructed until the level has been procedurally generated, it can trust that that information will be available. In fact, it is this proxy component that is responsible for spawning the door itself, which may be an invisible bounding box that prompts for the “up” input, or may be a visible locked door requiring a key to open.

So the proxy spawns the actual door, and it also maintains the information on how the two sides of the door are connected. But there’s a missing piece, and that’s how the door knows that it needs to utilize this information. I’ve accomplished that with the use of yet another new component, a “teleport helper.” This component exists on the door itself and looks for incoming information from a door proxy. If found, it does essentially exactly what the editor tool did in Zero: it assigns a unique name to this door, and it sets up information for teleporting to the other door, which is assumed to have been assigned a name in the same fashion and can therefore be located when necessary.

That’s all well and good, and entering a door constructed in this manner works correctly. The trouble starts when we go back the other way. Rather than returning through the door we came from, we spawn in the middle of the room (default fallback behavior when no appropriate spawn point can be found), and if we enter it a second time, the game crashes.

The problem stems from what the teleport helper component does. It assigns a name to the entity (via a name component) and provides information about its destination (via a teleport component, which is different from a teleport helper component). That in itself is fine, except that the data definition markup for the door does not explicitly specify either of these components; they are created at runtime as they are needed.

I mentioned at the start of this latter wall of text that I’ve historically assumed the data definition markup represented the whole of an entity, and as I’ll explain, that meant specifically that no additional components were expected to be added after the entity was constructed from its definition.

My ECS code has no qualms about adding components post-factory-build. In fact, in both You Have to Win the Game and Super Win the Game, I frequently added components by hand in code, as the notion of a complete data definition for the player entity in particular simply didn’t exist at the time. So that wasn’t a problem. And my serialization code didn’t care if new components had been added when it went to save the entity to disk; it would spin through whatever components were present and write out all their data.

The problem arose when loading this serialized data back up, as in the case of returning to the first room. After spawning the door, the spawn point with the proxy component deletes itself, as it assumes the door is capable of sustaining itself after creation (and with the assigned GUID discussed in the first half of this post, it should be). But the way serialization works in my engine is, the entity first gets rebuilt from its data definition file(s), and then the serialized data (which is assumed to be a delta from this default state) gets applied a component at a time. And here’s the catch that’s taken me far too many paragraphs to reach: if a particular component is not specified in the data definition file(s), it will not be constructed; if it’s not constructed, it will not attempt to load previously serialized data.

Functional doors require a name and a teleport destination, but specify neither component in their data definition markup, relying instead on the teleport helper component to construct these. Therefore, when reloading a door that was previously serialized, it will be missing these components, and the serial data for them — which does exist — is simply ignored.

My solution was simple enough: when rebuilding an entity from serial data, I now spin through that data, constructing any missing components for which serial data exists. In this manner, the name component and teleport component for the door will be constructed when their previously saved data is found, despite not being explicitly specified in the data definition markup for a door.
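The fix can be sketched like so. Component state is reduced to a string keyed by type name purely for illustration; the real components and serial format are richer than this:

```cpp
#include <map>
#include <string>

// Hypothetical sketch: after an entity has been rebuilt from its
// definition files, apply the serialized delta, constructing any
// component that has saved data but was absent from the definitions.
struct Entity
{
    std::map<std::string, std::string> components; // type -> state
};

void ApplySerialData(Entity& entity,
                     const std::map<std::string, std::string>& serialData)
{
    for (const auto& [type, state] : serialData)
    {
        // Previously, components absent from the definition markup
        // were never constructed here, so their saved state was
        // silently ignored on load.
        if (entity.components.find(type) == entity.components.end())
        {
            entity.components[type] = ""; // construct with defaults
        }
        entity.components[type] = state; // then load the saved delta
    }
}
```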

In truth, there was a much simpler fix for this issue, which would have been to stub out placeholder data definition markup for these two components to ensure their creation. This would have been all of two XML tags:

<name />
<teleport />

With that, the entity factory would know to construct those components. It wouldn’t set them up with any defaults, leaving that to the teleport helper, but it would guarantee they existed prior to loading serial data.

So why didn’t I do that? Mostly it came down to the fact that my code has always allowed for components to be added after factory construction, and even though I almost never do that, and a workaround did exist in this case, it felt better to continue to support that path.

This does raise an interesting question though. If I support adding new components at runtime, should I also support removing them? Historically, I’ve never removed components from an entity, ever, and I can’t even imagine what sort of use case would necessitate that behavior, but there is a sort of irritating asymmetry in supporting one but not the other.

Oh well.


The hardware limitations of the NES necessitated certain aspects of its games’ visual aesthetics. Sprites and backgrounds were both composed of 8×8 tiles; in action platformers, these were nearly universally grouped into 16×16 blocks. In conjunction with the console’s limited color palette, this led to a recognizable uniformity across its library. Additionally, certain artistic conventions arose throughout the NES’s life cycle, among them the notion of placing drop shadows beneath platforms in action games to provide clarity of spatial positioning and a greater sense of depth.



As I’ve noted before, Faxanadu was one of my primary points of reference in developing the look of Gunmetal Arcadia, and drop shadows were an important part of this. The exact nature of these shadows varies from tile set to tile set, but going back to the earliest implementation of the catacombs set, shadows have been an integral part of the look.


In Gunmetal Arcadia Zero, these shadows were all placed by hand. For the roguelike Gunmetal Arcadia, that is not an option. The quarter-scale prefabs that compose the world don’t (and shouldn’t) have knowledge of their neighbors, so drawing shadows that extend correctly across prefab boundaries cannot be done reliably without introducing severe restrictions to how prefabs can be drawn.

My solution has been to procedurally generate background tiles based on the collision type (solid, walkable, ladder, etc.) of adjacent tiles. After generating a level using the “card-laying” algorithm I discussed previously, I go back through each room and, on a tile-by-tile basis, see whether I need to substitute in a replacement.

In my editor, tiles that need this substitution are implemented as “custom tiles,” which is a sort of catch-all feature that allows me to associate some XML metadata with a tile to provide clues to the game as to how it should function. Currently these contain only stubs flagging them as walkable background tiles or solid collidable walls and floors, but in the future, they will likely be extended to include specific replacement rules.


In code, I look at the collision type of adjacent tiles in certain positions relative to the current one, and based on these, I choose substitutions. Since shadows drop down and to the right, I have to search up and to the left for occluding tiles. The exact substitution I choose depends on which tiles are occluders and how far they are from the current tile.
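A rough sketch of that lookup, with the tile map reduced to a solidity callback; the reach distance, tile IDs, and the exact neighborhood checked are all invented for illustration:

```cpp
#include <functional>

// Hypothetical drop-shadow substitution pass. Shadows fall down and
// to the right, so we search up and to the left for solid (occluding)
// tiles, nearest first; the match distance picks the replacement.
enum ShadowTile { kNoShadow, kShadowNear, kShadowFar };

ShadowTile ChooseShadow(int x, int y,
                        const std::function<bool(int, int)>& isSolid)
{
    const int kMaxReach = 2; // how far a shadow extends, in tiles
    for (int d = 1; d <= kMaxReach; ++d)
    {
        // Check left, above, and the up-left diagonal at distance d.
        if (isSolid(x - d, y) || isSolid(x, y - d) || isSolid(x - d, y - d))
        {
            return (d == 1) ? kShadowNear : kShadowFar;
        }
    }
    return kNoShadow;
}
```

Because the pass runs after level generation and only consults collision data, it works identically across prefab boundaries, which is the whole point.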


While prototyping this scheme for background tiles, it became apparent it could be useful to apply a similar approach to solid tiles as well. I’ve typically assembled patterns like these overlapping bricks from smaller sets of general purpose tiles, as shown below. Just like the shadows, it would be impossible to match these up perfectly at the edges of prefabs if I were to draw these by hand, so what I’ve done instead is to choose substitutes in this overlapping brick pattern based on location, and to apply the appropriate edges when adjacent tiles are non-solid, so bricks don’t get cut off halfway.


These prototypes are strictly tied to this particular tile set, but understanding how to solve these problems for one set will put me in a good place to understand how to generalize these sorts of rules to any set. In particular, I’m thinking about the tiles for the third and fifth levels of Gunmetal Arcadia Zero and how they have a top layer of grass. These rules will undoubtedly be similar to the ones I’m using for detecting edges of bricks in the catacombs set, but I expect there will be some additional edge cases to consider based on my experiences with handpainting these tiles.

I talked a little bit in Tuesday’s video about some input remapping changes I’ve been making recently in order to better support a wider range of gamepads. This feature grew out of a two-hour development stream I did last week and consumed the majority of my week, but it also clued me in to some existing problems in my input system that I otherwise wouldn’t have found.

This solves a problem that I’ve known about for years but had never attempted to address before: there are no consistent standards or conventions for the assignment of buttons and axes on DirectInput devices. I’ve historically followed Logitech’s conventions, as their gamepads seem to be among the most popular non-XInput devices, but not all devices adhere to these same rules. In particular, I’ve had trouble with NES- and Super NES-like gamepads, as well as DualShock 2 gamepads running through USB adapters. The face buttons are often assigned differently; for instance, the bottom face button, the one that would be labeled “A” on an Xbox controller or “X” on a PlayStation one, will be identified as “Button 2” on some devices and “Button 3” on others.

This creates two problems. The first is in assigning default control bindings. Let’s say I assign Button 2 to the “Jump” control. This will work correctly on Logitech devices and some others, requiring the player to press the bottom face button to jump, but on other devices, “Button 2” might be the right face button (B or Circle by Xbox or PlayStation standards), so the player would have to press that button to jump by default. Of course, I support arbitrary control bindings, so this can be corrected, but that’s only part of the issue.

The second problem is that the button glyphs displayed by the game are coupled with the input (Button 2, Button 3, etc.), so even if you rebind the correct button in terms of physical placement, the game might display the wrong glyph for that button. (For instance, if the player were using a device for which the bottom face button were Button 3, the game would show “B” or “Circle” as the glyph for that button, when it would be appropriate to show “A” or “X.”)
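One way to decouple raw button indices from logical buttons (and thus glyphs) is a per-device remap table keyed by USB vendor/product ID. This is a sketch of the idea only; the IDs, indices, and fallback convention below are invented for illustration and are not my engine's actual tables:

```cpp
#include <cstdint>
#include <map>

// Logical face buttons, independent of any device's raw numbering.
enum LogicalButton { kFaceBottom, kFaceRight, kFaceLeft, kFaceTop };

struct DeviceId
{
    uint16_t vendor, product;
    bool operator<(const DeviceId& other) const
    {
        return vendor != other.vendor ? vendor < other.vendor
                                      : product < other.product;
    }
};

// Raw DirectInput button index -> logical button, per known device.
// Entries here are illustrative, not real product mappings.
std::map<DeviceId, std::map<int, LogicalButton>> g_remapTables = {
    { { 0x046D, 0xC216 }, { { 2, kFaceBottom }, { 3, kFaceRight } } },
    { { 0x0810, 0x0001 }, { { 3, kFaceBottom }, { 2, kFaceRight } } },
};

LogicalButton Remap(const DeviceId& id, int rawButton)
{
    auto table = g_remapTables.find(id);
    if (table != g_remapTables.end())
    {
        auto entry = table->second.find(rawButton);
        if (entry != table->second.end())
            return entry->second;
    }
    // Unknown device: fall back to the Logitech-style convention,
    // where the bottom face button reports as Button 2.
    return static_cast<LogicalButton>(rawButton - 2);
}
```

With bindings and glyphs both expressed in logical buttons, a default of “jump = bottom face button” lands on the physically correct button regardless of how the device numbers it.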

This problem is easiest to understand in terms of face buttons, but it extends to axes and POV hats, as well. The naming of “POV hats” dates back to their origins on flight sticks, but nearly every gamepad I’ve ever tested reports the d-pad as a POV hat, so that is how they are most commonly used today. Axis naming is even more bizarre; DirectInput supports 24 axes, named for X, Y, and Z; position, velocity, acceleration, and force; translation and rotation. Typically the left analog stick of a gamepad will be mapped to XY/position/translation, but the right analog stick and sometimes triggers (if reported as axes rather than buttons) are less standardized.

The DualShock 4 is my favorite controller of the current generation, and I want to support it as well as possible, but it brings a number of additional concerns to the table. Chief among these is that its triggers are reported as both buttons and axes. This creates problems when awaiting user input to bind a control. My control binding code watches for changes to any input on any device; when it sees a change, it assigns that input to the specified control. Because the triggers activate two inputs at once, it will only catch the first one it sees, which will often, but not always, be the axis rather than the button, because it checks for the axis first. However, if the trigger is only slightly depressed, such that it is still within the dead zone, the axis will be ignored and the button will be selected instead. My solution for this, against my better judgement, was to introduce a device-specific hack to simply mute the button inputs associated with the DS4 triggers when that device is used (as detected by vendor and product ID).
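The hack reduces to something like the following. The Sony vendor/product IDs are the real first-generation DS4 values, but the button indices here are invented for illustration:

```cpp
#include <cstdint>

// Hypothetical sketch of the DualShock 4 trigger hack: when a DS4 is
// detected by vendor/product ID, mute its trigger *buttons* so that
// control binding always latches onto the trigger *axes* instead.
const uint16_t kSonyVendorId = 0x054C;
const uint16_t kDualShock4ProductId = 0x05C4; // first-generation DS4

bool ShouldMuteButton(uint16_t vendor, uint16_t product, int button)
{
    const int kL2Button = 6, kR2Button = 7; // illustrative indices
    if (vendor == kSonyVendorId && product == kDualShock4ProductId)
    {
        return button == kL2Button || button == kR2Button;
    }
    return false;
}
```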

Additionally, the DualShock 4’s trigger axes rest at the minimum position (-32768) rather than the conventional center (zero). On an ordinary axis, I can watch for deviations from zero in either direction to see whether the input is being pushed, and in which direction. That code fails for these triggers, because changes across the zero boundary will happen both as they’re being pressed and released.

It’s conceivable that my code is simply naive and should account for an arbitrary resting position. However, I would argue that axes are intended to rest at zero and that this is an unusual outlier, as evidenced by the fact that DirectInput has no mechanism for reporting axes’ resting values, and in fact, this exact issue led Microsoft to combine the Xbox 360 controller’s triggers into a single axis that does rest at zero (with the unfortunate and surely intended side effect that the triggers cannot be reliably polled with DirectInput at all, and XInput must be used instead).

My initial solution was to cache off the first values reported by polling the device immediately after it was initialized or selected within the game menu. This value (clamped to the nearest of zero, -32768, or 32767) would be treated as the resting value. This would introduce problems of its own when using flight sticks with dials that don’t automatically center themselves, but until I make a flight sim, I think that’s a risk I can take.
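That clamping step, and the deviation test built on top of it, can be sketched like this; the dead zone value is an invented placeholder:

```cpp
#include <cstdlib>

// Hypothetical sketch: treat an axis's first polled value as its
// resting position, clamped to the nearest of the three canonical
// values. The DS4's triggers rest at -32768 instead of zero, so
// deviation must be measured from the cached resting value rather
// than from zero.
int ClampRestingValue(int firstPolledValue)
{
    const int kCandidates[] = { -32768, 0, 32767 };
    int best = kCandidates[0];
    for (int candidate : kCandidates)
    {
        if (std::abs(firstPolledValue - candidate) <
            std::abs(firstPolledValue - best))
        {
            best = candidate;
        }
    }
    return best;
}

// Whether the axis is currently "pushed," relative to its rest value.
bool IsAxisActive(int value, int restingValue, int deadZone)
{
    return std::abs(value - restingValue) > deadZone;
}
```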

However, that plan fell apart on account of getting unpredictable results back when polling a device immediately after it was acquired by DirectInput. Sometimes it would return the expected values; other times it would return a completely zeroed-out state. As this state could conceivably be a valid result for some devices, I couldn’t assume it had failed and needed to be retried on the next tick, and with no error or warning result I could catch, this plan seemed to have hit a wall. So, I added another hardcoded hack, forcing the resting values of these axes to the expected values when the DualShock 4 vendor and product IDs were seen.

These sorts of major changes to established systems always make me a little skittish, but so far the results have been promising. I have a little more testing to do on Mac and Linux builds, but both my DirectX and OpenGL/SDL Windows builds are behaving correctly for all my dozen or so test devices, so that’s reassuring.


Hello again!

Five weeks ago, I paused this blog and the accompanying video series on account of not being able to openly discuss the work I was doing. I’ve wrapped up those tasks, and after spending a little while on a demo build of Gunmetal Arcadia Zero for events, I’m back to full-time development on the roguelike Gunmetal Arcadia. (In fact, if memory serves, this is the first time since last April that I’m not splitting my time among multiple projects, so that’s cool.) I’ve covered a decent amount of ground in the last few weeks, so let’s get going.

That console thing

The console port work I’ve been awkwardly not talking about was investigative in nature. My intent was not to have a shippable game at this time, but to prove that it would be feasible to port my engine to a home console at some point in the future. As of the time I wrapped development on this task, I had a functional version of Gunmetal Arcadia Zero running on the console, with rendering, audio, input and more all working correctly. This implementation has a number of known memory leaks, particularly in the new rendering code, but nothing that couldn’t be solved in a day or two. That’s not to say this build was anywhere near cert-compliant, and I would estimate another month or two of work to bring it from this first playable state to a shipping build, but it’s definitely within the realm of possibility, and I consider this investigation a success.

Recent work

I’ve been making some incremental improvements to the procedural level generation scheme I documented back in May. The levels it builds are now somewhat more interesting and plausible as gameplay spaces and offer more room for tweaking and tuning as I continue making progress. Fundamentally the concept is the same: construct a path through a region of a predetermined size and then choose prefabs that meet the appropriate conditions; the difference is in how I construct that path. I now differentiate between one-way and two-way vertical traversals, an important concept in a 2D platformer, as it turns out. It’s also more tolerant of empty spaces, resulting in fewer narrow, twisty paths. I expect it will still require a great deal more work before shipping, but it’s getting better all the time.

Besides constructing the physical space of levels, I’ve been doing some work on specifying the intent of individual rooms within a level, which in turn will notify other systems of how they are to operate. For instance, rooms can be flagged as generic combat spaces, treasure rooms, shops, and so forth. This affects the types of entities that can be spawned in those rooms, so for instance, enemies can spawn in combat spaces but not in shops.

I’ve also been stubbing out a framework to support one of the tentpole features of Gunmetal Arcadia, the notion that each session affects the next in clear ways. This has involved tearing apart the saved game code from Zero, which is fine; Zero exists in a separate source control branch so its code is safe, and the saved game code I wrote for it was never intended to be general-purpose enough to encompass my needs for the roguelike game as well.

Oh yeah, I finally started writing some new music for the first time since finalizing the Zero soundtrack in early March. It’s not much yet, but with some better instrumentation, this could be something.

A new schedule

I’ll have a new video up on Wednesday as per my typical schedule, but starting next week, I’m shaking things up a bit. Videos will go up on Tuesday, with the written blog following on Thursday. This should fit my schedule a little better and hopefully eliminate some of the awkwardness of filming the video before the blog and having to guess at what I’ll be covering in each.

Speaking of schedule, it’s probably worth mentioning that my pace has necessarily slowed a bit. I can’t really work evenings and weekends anymore, so I’m on a rigid 40-hour work week now.

That being said, I still feel like I can believably hit a Q1 2017 launch window. I had originally hoped to have both of the Gunmetal Arcadia games out by the end of 2016, but launching in the holiday season against the AAA heavy hitters feels optimistic at best and willfully irresponsible at worst. Pushing it back to early 2017 makes more sense and still falls within my budget.

Something new

If you follow me on Twitter, you may have seen me talking about “the next game after Gunmetal Arcadia.” This project, codenamed “Banon” and tentatively titled Oaks, is the “other idea” I mentioned a couple months back. I’ve poked around at some concepts for rendering tech for that game over the last few weeks, but I don’t intend to start doing any serious development on that one until Gunmetal Arcadia is done. My time is too limited at the moment to split among multiple projects. But it’s an exciting concept, and I’m taking lots of notes and assembling reference materials, and some of that will undoubtedly spill over onto Twitter.

I don’t have any more events this summer, at least until October and Retropalooza, so hopefully the next few months will be a return to the sort of productive schedule I’ve maintained in the past, or at least a reasonable facsimile thereof in the more limited time I have.