Synthesis

As of this last weekend, I’ve largely finished bringing over changes from my standalone synth tool into the game. It’s not totally done yet; there’s still a lot of optimization left to be done, but it’s stable enough that I can move on to other tasks for a time.

Porting diffs between these two environments is always a little bit more involved than I expect due to some considerations necessary for supporting a latency-free gameplay experience. The standalone tool only plays one “tune” at a time. A tune may be a looping piece of background music or a sound effect; in either case, it’s a single thing that may use all four available channels (two pulse waves, a triangle wave, and a random noise channel). In this environment, I never have to worry about how to handle multiple overlapping tunes, but more importantly, I don’t have to worry about inserting data into the stream just ahead of the write cursor, which greatly simplifies the implementation. This is nice for prototyping behavior in the standalone tool, but these differences do need to be reconciled when bringing these changes into the game.

Let’s take a step back and review how audio buffers work. An audio buffer is a block of memory that contains raw waveform data. For one-shot sound effects or very short loops, the data can often be loaded in its entirety, thrown over to the audio device, and never touched again. For longer pieces that would be impractical to keep in memory all at once, we have to stream the data. In this case, we allocate an audio buffer of a fixed size and dynamically write and rewrite its data as the audio device is playing it. If our source is a large WAV file, we may be able to copy the data directly; if it’s in MP3 or Ogg Vorbis format, we’ll have to decode the data first before copying it into the buffer. In the case of Super Win and Gunmetal, the source is in a minimal MIDI-like event timeline format, and the waveform data must be synthesized in real time before being copied into the buffer.

That’s not too bad. Each frame, we can query the audio device to find the cursor positions in the buffer. We get two cursor positions back: a play cursor and a write cursor. The play cursor indicates the point from which the audio device is currently reading and playing data. The write cursor will be a short distance ahead of the play cursor. This indicates the point at which we can safely write new data into the buffer. The region between the two cursors, which the audio device will be playing imminently, is considered unsafe for writing. Knowing these, we can write some portion of data starting from the write cursor and then sleep for a while, waking in time to continue writing before the device catches up to us.
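The cursor arithmetic above can be sketched as a small helper. This is purely illustrative (the function name and byte-offset convention are my own, modeled on a DirectSound-style circular buffer, not the actual engine code): given the two cursors, everything from the write cursor forward, wrapping around until just before the play cursor, is safe to fill.

```cpp
#include <cassert>
#include <cstdint>

// How many bytes we may safely write, starting at writeCursor and ending
// just before playCursor. The region [playCursor, writeCursor) is the
// unsafe region the device is about to play; everything else is writable.
std::uint32_t WritableBytes(std::uint32_t playCursor,
                            std::uint32_t writeCursor,
                            std::uint32_t bufferSize)
{
    if (writeCursor >= playCursor)
    {
        // Safe region wraps around the end of the circular buffer.
        return bufferSize - (writeCursor - playCursor);
    }
    else
    {
        // Write cursor has already wrapped; safe region is contiguous.
        return playCursor - writeCursor;
    }
}
```

In practice we'd write some fraction of this amount each frame, sized so we can sleep and still wake before the play cursor catches up.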

[Figure: audio buffer with play and write cursors]

For a single piece of background music, that’s pretty much the end of the story. Where this gets a little tricky is in dealing with sound effects that must play quickly in reaction to gameplay events. In order to minimize the perception of latency, we need to insert these into the stream as close to the play cursor as possible. As we know, the closest we can get to the play cursor is the point designated by the write cursor, so we want to insert the new data there. We will almost certainly have already written data from another tune to this region, though, so we need to make sure to mix these together correctly and not obliterate existing data.
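Mixing into an already-written region boils down to summing samples and clamping, rather than copying over the top. A minimal sketch for 16-bit samples (the function name is hypothetical; the engine's actual mixer presumably handles channel volumes and more):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Sum new sound-effect samples into a region that already contains music
// data, saturating to the 16-bit range instead of overwriting.
void MixInto(int16_t* dest, const int16_t* src, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
    {
        // Widen to 32 bits so the sum can't overflow before clamping.
        int32_t sum = int32_t(dest[i]) + int32_t(src[i]);
        dest[i] = int16_t(std::clamp(sum, -32768, 32767));
    }
}
```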

Where this gets really tricky is in applying realtime effects like reverb and filters to the entire mix. Because we may wish to insert new data into the stream in response to gameplay events, we can’t know for sure that a region of the buffer is completely finalized and won’t be changing again until the write cursor has passed it and it has entered the unsafe region between the play and write cursors. At this point, it will be too late to apply effects, as we can no longer alter that portion of the buffer. Instead, what I do is save off the state of each effect starting from wherever the write cursor was on the previous frame. From there, I can step forward through the data in the buffer, in the unsafe region, up to the current write cursor, and save off the state of the effect at that point. These saved states give me a “known-good” position from which I can advance the effect forward to the end of whatever data I’m writing. If I end up inserting new data into the stream before the write cursor has advanced, I can apply the effect again starting from the same point and trust that the output will be correct. Only once the write cursor has advanced and I know that the data in the unsafe region is final do I advance the saved state of the effect up to the new write cursor and begin the process again.
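The checkpoint-and-re-render idea above can be sketched with a toy effect whose entire state fits in one value; a real reverb would checkpoint its ring buffers the same way. All names here are illustrative, not engine code. The key property is that rendering from a saved state never disturbs the checkpoint itself, so the same region can be re-rendered later with identical results if new data gets mixed in.

```cpp
#include <cassert>
#include <vector>

// Toy effect: one-tap feedback. Its whole "saved state" is one float.
struct EffectState { float feedback = 0.0f; };

float ProcessSample(EffectState& s, float in)
{
    float out = in + 0.5f * s.feedback;
    s.feedback = out;
    return out;
}

// Render a region starting from a known-good checkpoint. The state is
// taken BY VALUE: the checkpoint survives, so we can re-render this same
// region later (e.g., after a sound effect is mixed in) and trust that
// the output will match.
std::vector<float> RenderRegion(EffectState state, const std::vector<float>& dry)
{
    std::vector<float> wet;
    wet.reserve(dry.size());
    for (float x : dry)
        wet.push_back(ProcessSample(state, x));
    return wet;
}
```

Only once the region has entered the unsafe zone and is final would the checkpoint itself be advanced to the new write cursor.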

The exact nature of these saved states depends on the effect, but they generally involve saving off waveform data or something calculated from waveform data. For temporal effects like reverb, I maintain a series of ring buffers of decreasing sizes that can be used to produce a dense array of echoes. For first-order low-pass and high-pass filters, I only need to save off the input and output values for the previous waveform sample. (For Nth-order filters, the previous N values could be recorded instead.) For dynamic DC bias, I maintain a running average of the wave values for a window and shift the output by this mean to keep it roughly centered on the line.
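For the filter case, the saved state really is tiny. A first-order low-pass filter needs only the previous output sample to resume from a checkpoint; a sketch, using the textbook RC coefficient form (names and constants are illustrative, not the engine's actual filter):

```cpp
#include <cassert>

// A first-order low-pass filter's entire resumable state: the previous
// output sample. (A high-pass variant would also track the previous input.)
struct LowPassState { float prevOut = 0.0f; };

float LowPass(LowPassState& s, float in, float cutoffHz, float sampleRate)
{
    // Standard RC smoothing: alpha = dt / (RC + dt), RC = 1 / (2*pi*fc).
    float rc = 1.0f / (2.0f * 3.14159265f * cutoffHz);
    float dt = 1.0f / sampleRate;
    float alpha = dt / (rc + dt);
    s.prevOut += alpha * (in - s.prevOut);
    return s.prevOut;
}
```

Because the computation is deterministic, restoring the one-sample state and reprocessing the same input reproduces the output exactly, which is what makes the known-good checkpoint scheme workable.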

The downside to this method is that I often end up synthesizing data or applying effects more than once, and those costs begin to add up. It’s possible in the future I may move away from realtime synthesis entirely in favor of converting these synth tunes into WAV, MP3, or Ogg Vorbis formats upfront. There are advantages and disadvantages to each option. Improving runtime performance is a very compelling argument for this conversion, as perf is becoming increasingly important as the synthesis grows more complex and CPU-intensive. On the other hand, the current implementation offers a drastically reduced file size of shipping binaries, which is nice not only for decreasing download times but also for minimizing the size of my source control repositories, as binaries generally can’t be diffed and must be stored in their entirety for even small changes.

With any luck, this should be the last devlog for a while that’s devoid of screenshots. I know I’ve been leaning hard on the walls of text recently, and now that this work is wrapping up, I should hopefully be able to get back to producing new content again. Stay tuned!

In-Dev Build #2

Changes since previous version:

  • Brought over recent changes from synth tool
    • Better approximation of 2A03 sounds
      • Implemented looped noise mode
      • Restricted noise to the correct 16 frequencies
      • Quantized changes to 60Hz updates
      • Low- and high-pass filters at 14kHz and 90Hz
      • Non-linear mixing
    • Better support for dynamic instrument voices
      • ADSR envelopes
      • Pitch envelopes
      • Vibrato
      • Tremolo
      • Pulse width modulation
    • Added oversampling option to reduce high frequency aliasing
    • Dynamic DC bias
  • Added “audiothread” console command to report on thread health

Windows: GunArc.zip
Mac OS X: GunArc.dmg
Linux: GunArc.tar.gz

Twisty Little Passages

Hey, wow, it’s been a whole week already since that thing I did! Following last week’s introduction to my new development documentation plan, I put up another video on Wednesday and another in-dev build on Saturday. I feel like this is a schedule I should be able to maintain. It’ll be crunchy, and I can’t promise I won’t miss a week here or there, but I feel like I’ve streamlined the process well enough that I won’t be spending all my time preparing this content and leaving no room for development.

Still pretty much the state of the game.

This week was the first time since early April that I was really able to dig into some Gunmetal Arcadia development tasks. Whatever momentum I had had on content production has been cut short, and in the interim, I’ve decided to focus on some core systems that don’t exist yet. I’m not talking about moment-to-moment gameplay systems; by and large, those have been finished since late March, barring whatever one-off work is required for weapons, items, abilities, enemies, bosses, NPCs, or environments that I haven’t designed or built yet. No, I’m talking about systems that define how a single session of Gunmetal Arcadia is structured, how persistent data is carried over across multiple sessions, how daily challenges will be implemented, and so on.

I’ve started by working towards making random number generation higher quality and easier to wrangle. I’ve usually just assumed that the built-in C++ rand() function will be good enough for my purposes, but in light of my efforts to serialize the complete game state, I need a better solution, and that means rolling my own RNGs. Well, not my own own; I’m using a bog standard Mersenne Twister implementation (MT19937). This guarantees consistent behavior on all platforms, facilitates the use of multiple RNG streams, and allows me to save and restore the state of RNGs across multiple runs. That last one is what I’ve been pursuing this week. The state of a Mersenne Twister can be stored as 625 integers (624 for the internal state and one index into this state array), which is a relatively small amount of data but too heavy to fit nicely into my existing serialization system. My saved game data consists of nested key-value pairs, all represented as strings. This is convenient for a number of reasons (easy conversion to and from XML, etc.), but for anything more than a small handful of variables, it’s too unwieldy. I ran into this limitation previously when I was implementing speedrun ghosts for Super Win the Game. For these, I ended up doing a one-off implementation in which a speedrun component managed its own file I/O. That was sufficient for a quick post-launch feature, but for Gunmetal Arcadia, I need to build with the future in mind, and that means spinning this into a real system.
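The save/restore property described above can be demonstrated with the standard library's MT19937 (the engine uses its own implementation, but `std::mt19937` exposes the same 624-word state plus index through its stream operators, serialized as whitespace-separated integers):

```cpp
#include <cassert>
#include <cstdint>
#include <random>
#include <sstream>

// Snapshot an MT19937's full state, advance the original, then rebuild a
// second generator from the snapshot and confirm it resumes identically.
bool RoundTripMatches()
{
    std::mt19937 rng(12345);

    std::stringstream saved;
    saved << rng;                 // serialize the 625-integer state

    std::uint32_t next = rng();   // advance the original generator

    std::mt19937 restored;
    saved >> restored;            // rebuild from the snapshot
    return restored() == next;    // picks up exactly where we left off
}
```

Seeding each stream independently also means, e.g., level generation and cosmetic effects can each consume random numbers without perturbing the other — important for reproducible daily challenges.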

I’ve already written extensively about my entity serialization in past entries, but the high-level overview is:

  • When entities’ components are destroyed, they write out any data necessary to restore them to their current state. This data is saved in the serializer, keyed by the entity’s GUID. When the entity is rebuilt, this data is retrieved and applied to the newly constructed components.
  • Upon request, the serializer may write out all its stored data to disk. This data represents the entirety of the player’s saved game. Once loaded back into memory, this data can be retrieved and used to rebuild entities as normal.

In general, I think of this pattern as going: [live data] ↔ [serial data in memory] ↔ [serial data on disk]. In extending this system to account for arbitrary binary data, I had to decide whether or not to follow this same pattern exactly. What I’ve done for now is, rather than storing potentially large amounts of binary data for inactive entities in memory all the time, I’m writing the binary data to disk immediately but in a temporary location. Only once the serializer is prompted to save the player’s game is this file copied to the normal saved game folder. In this way, the behavior mirrors that of the ordinary nested key-value data path, but without the need to keep this data in memory. This is still a very young system, and it feels like it’ll take a few more revisions before I’m totally happy with it, but at least in this first test case of saving and restoring the states of random number generators, it’s proved successful.
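The two-stage path above can be sketched very roughly as follows. Everything here is hypothetical (paths, names, and the copy-on-commit policy), just illustrating the idea that binary payloads go straight to a scratch file rather than sitting in memory, and only an explicit save promotes them to the real saved-game location.

```cpp
#include <cassert>
#include <fstream>
#include <string>
#include <vector>

// Stage 1: binary data is written to a temporary location immediately,
// so it never has to live in the in-memory serializer alongside the
// nested key-value data.
void WriteScratch(const std::string& tempPath, const std::vector<char>& blob)
{
    std::ofstream out(tempPath, std::ios::binary);
    out.write(blob.data(), static_cast<std::streamsize>(blob.size()));
}

// Stage 2: only when the player's game is actually saved is the scratch
// file mirrored into the saved-game folder. Copying (rather than moving)
// leaves the scratch copy intact for continued play.
bool CommitSave(const std::string& tempPath, const std::string& savePath)
{
    std::ifstream in(tempPath, std::ios::binary);
    std::ofstream out(savePath, std::ios::binary);
    out << in.rdbuf();
    return bool(in) && bool(out);
}
```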

In-Dev Build #1

Changes since previous version:

  • Added a placeholder character selection screen
  • Added a component for managing random number generators
  • Added a framework for associating arbitrary binary data on disk with serialized data for saved games
  • Fixed a reference counting bug in file streams
  • Updated in-dev build deployment scripts
    • Windows binaries are now built with SDL and OpenGL to eliminate the need for an installer
    • Fixed some naming conventions in Linux builds

Windows: GunArc.zip
Mac OS X: GunArc.dmg
Linux: GunArc.tar.gz

Scry, To

This is going to be a long one, so if you’re looking for a tl;dr, here it is:

  • I’m starting a video series to accompany this devlog. The pilot episode is here.
  • There’s a new in-dev build available on all platforms here.

Last November, a little over a month after Super Win had launched and well after its initial sales spike had fallen off, I wrote a blog on Gamasutra describing its launch as a failure for which I theorized a number of possible causes. That was months before SteamSpy would emerge, and my closest point of reference was Eldritch, released a year earlier and under somewhat different circumstances. Knowing what I know now, I would say Super Win’s reception was disappointing, certainly, but not necessarily worse than I could reasonably expect, having seen estimated data for other similar titles. It’s getting harder for any game to thrive in the current market, much less a one-developer pixel art platformer.

Of course, launch isn’t the end of the story; I’ve continued to support Super Win with new content since its release, and it’s had some moderate success during Steam sales. But it hasn’t yet recouped its costs, and it’s unclear whether it will. And that raises the question of how I can help ensure that Gunmetal Arcadia can and will. I’ve pointed to marketing as my biggest failing on Super Win before, and to some extent, I think that’s accurate, but it also wasn’t a game that lent itself well to promotion. It was a good game, and I’m proud of it, but I’d be hard pressed to say why it works in a sentence or two. That’s one of the problems I’m hoping to address with Gunmetal Arcadia. It has a very clear — and hopefully very strong — promise: “Zelda II as a roguelike.” I can dress it up a little more, describe the narrative conceits of conflicting factions uniting (or not) in the face of war, or elaborate on exactly how many “-likes” separate the design from the Berlin Interpretation, but the crux of it, the thing that’s always been exciting to me, is just that: “Zelda II as a roguelike.”

So that’s one vector along which I hope I’ve improved and can continue to improve. Another is visibility. I was needlessly secretive during the development of Super Win. A few #screenshotsaturdays here and there, some playtest sessions that only a small number of regional gamers could even have had the opportunity to attend, and then I was surprised when the reaction was muted? That’s what prompted me to start maintaining this devlog — that and the fact that I’ve always enjoyed writing about what I do. But I can do more. When I think about what visibility means in this era, it’s about having a wealth of content across a variety of formats, something for everyone to discover and share. So here’s what I’ve been thinking.


Development streaming feels like a decent option for improving visibility; it’s live, it’s personal, it’s sort of interactive, and more than anything I could write on this blog, more than any videos or GIFs I could post, it’s the clearest window into what game development actually entails on a day-to-day basis. The downside, and the reason I’ve been reluctant to stream more often, is that I feel like I need to have at least a bit of an outline of a plan for what I’d be streaming, and many days, I simply don’t. I think with a little time and effort, I could make this a part of my normal schedule, but I’m not there yet.

So streaming may have to wait, but something I’m hoping to start doing very soon is producing a weekly video log to accompany or augment this weekly devlog. Sometimes it’s difficult to put into words (or screenshots, or even animated GIFs) some parts of the development process, and I think a video format could be a perfect accompaniment to the written word. Exactly what format these would take remains to be seen, but I would imagine it would fall somewhere on the spectrum between talking at my webcam and a slick documentary-style format. I’ve put together a sample of what this might look like here:

This first pilot episode is mostly focused on the introduction of the video log as an alternate form of development documentation and elaborates on some of my thoughts on where it could go in the future, as well as some of the other concepts I’ve detailed in this blog. I have more video content in the works, but I’d love to hear your feedback on this format and what sort of content you’d like to see. I’ll also be trying to answer questions in the video log, so if there’s anything you’d like to know about Gunmetal Arcadia, Super Win the Game, Minor Key Games, or anything else, please leave a comment on YouTube or get in touch on Twitter or ask.fm!

I’ve been thinking about the “Game Feel” test build I released back in March and how I haven’t done another one since. That void, coupled with the improvements I’ve been making to my Mac and Linux build process, got me thinking about possibly getting into the rhythm of publishing weekly in-dev builds. These wouldn’t necessarily be polished, consumer-ready pieces of software; they would literally be whatever state the game were in at the time, and it’s easy to imagine that week-on-week changes might be minuscule at best, but these builds could be another facet — importantly, a fully interactive one — of this transparent, documented development process that I’m envisioning.

So on that note, here’s the current build of Gunmetal Arcadia:

Weekly builds, June 15, 2015:
Windows: GunArc_Install.exe
Mac OS X: GunArc.dmg
Linux: GunArc.tar.gz

You may notice the game hasn’t changed appreciably since the last build I uploaded. You may also notice there are some obvious bugs — enemies not moving, getting stuck when falling down bottomless pits, and so on. That’s normal. That’s going to be the shape of these builds. These aren’t polished, tested, consumer products; they’re weekly snapshots of the development process, warts and all.

Community involvement in the development process is something that’s probably further off, once I have a more fully realized version of Gunmetal Arcadia and can start soliciting feedback on the complete game experience rather than specific aspects. From very early in this game’s lifecycle, I’ve been thinking that if there were ever a game I made that seemed perfectly suited for Early Access, it’s this one. As I’ve mentioned before, if I were to go down that road, it would only be once the game were believably at a place where it could be shipped — playable start to finish, feature- and content-complete for the current scope, and free of known bugs. That would be the ideal point at which to start involving the community in the process of tuning the game, adding, removing, or altering content as necessary, and shaping it into something better than it could otherwise be.

It’s an exciting vision, for sure. It also might be unfeasible. I already lose about half a day each week to this devlog. Once I start factoring in time spent on video editing, building deliverables on all platforms, planning out work ahead of time that I can cover in a stream, and so on, that’s sounding like a fairly sizable chunk of my week. I’m not sure I can afford to do that, or at least not all of it, or at least not all at once. But I think the benefits could vastly outweigh the costs. So my idea is to start with what I can do. Test the waters. Make a video log. Publish an in-dev build. Do a development stream. See whether I can accomplish any of these in a reasonable amount of time and to a reasonable level of quality, and whether they’re effective in raising awareness. See whether this idea is worth pursuing at all. And if it is…well, there’s options.


Don’t worry, this blog doesn’t end with, “…and that’s why I have a Patreon now, so go pledge your support!” But, yeah. That’s one idea.

In the past, I’ve avoided the notion of using Kickstarter to fund the development of either Super Win or Gunmetal Arcadia, and I don’t have any intent or desire to go that way. I’m not in a position where I need those funds, and to be totally honest, I have some doubts as to whether I could run a successful campaign, regardless of whether I asked for the actual cost of remaining development. (And as we’ve seen in recent articles, that rarely happens, thanks to a perpetuating cycle of misleadingly low estimates lowering consumer expectations of costs and vice-versa.)

But a Patreon campaign, not to fund the development of the game itself, but to help mitigate the recurring costs of sustaining what I want to believe could be an unprecedented degree of openness and transparency about the development of the game? That’s something I might be able to get behind. It’s easy to imagine what rewards for a campaign like that could be. Your name in the credits (that’s one I always like as both a player and developer), a free copy of the game when it launches, and so on.

I’m not saying any of this will definitely happen, but these are a few possibilities I’ve been kicking around for where Gunmetal might go in the future. I’m curious to hear your thoughts.

Support Structures

I had a totally different blog drafted and ready to go this week — one that I’m very excited about, so watch for that next time! — but I thought it would be nice to bookend the last two months of side project work by doing a bit of a recap here instead.

Two months.

Two months to build five new levels, implement a mapping system to be displayed in two different modes, automate the process of calculating the number of collectables in each level, define what a level even is in the context of this mapping feature and having built the game worlds in totally incongruous ways, implement a speedrun mode with powerups that are maintained separately from the ones you can find in the normal campaign, add a speed boost powerup, track position data over time and visualize it as a player “ghost,” implement Steam leaderboard support and a leaderboard UI, write some new dialog, append some new music to the soundtrack, find and fix bugs, and deploy six builds across three storefronts.

That’s a lot to get done in two months, and it’s been a bit of a rough landing, requiring several days of hurried bug fixing. But it’s all wrapped up now, fingers crossed, and Super Win the Game can go back to being a thing that I support from a distance rather than with hands-on development.

My plan with this update was to effectively “relaunch” the game and give it another shot at visibility and viability. Even though a few of my ideas fell through (for instance, a relaunch discount was forced off the table due to the timing of other sales), some unexpected opportunities popped up to take their place, and ultimately it does feel like I’ve been successful in drumming up renewed interest. It’s difficult to say whether it will be enough to justify the costs, but it’s off to a pretty good start, and I’m hopeful about the future.

It’s coincidental that this update/relaunch landed so close to the release of David’s NEON STRUCT, and I’m not entirely sure what that means for messaging. One of our goals when we formed Minor Key Games was to benefit from the promotion and visibility of each other’s titles. It’s possible this is a case for that. On the other hand, it feels like maybe I’m diluting a message that should be taking precedence. I don’t have enough knowledge, experience, or data to support either side at this time, but it’s something I’m thinking about.

I can’t wait to get back to Gunmetal Arcadia, and I’m super excited about some of the new developments I have in the pipeline. But it’s been nice to revisit Super Win and right some of its wrongs. As I said two months ago when I first announced this update, I had had a lot of bad feelings about this game for a while after its launch, and I think I’ve finally managed to put those behind me. If you haven’t played it yet, I’d encourage you to give it a try. The minimap really does make a world of difference; everyone who asked for it was right, and I’m glad I finally reached a point where I wasn’t too proud or too burnt out to consider it seriously.

Super Win the Game is available for Windows, Mac OS X, and Linux for $7.99 (standard) or $9.99 (soundtrack edition). You can find it on Steam, Humble, and itch.io. The soundtrack is also available for purchase on BandCamp.

Remaining Cross-Platform

Last February, I wrote on my personal blog about the difficulties I had encountered in porting my engine from Windows to Mac and Linux. Since then, I’ve continued to iterate on these versions, and I thought I’d share some of the experiences I’ve had in that time.

When I completed the first iteration of this work last year, I was only building a 64-bit Linux binary. The reason for this was simple: I had installed the recommended Ubuntu distro for my PC. This was a 64-bit distro, which meant that by default, I was building 64-bit applications. Based on what limited data I had available from, e.g., Steam and Unity surveys, this seemed like a safe bet, but I very quickly heard from 32-bit Linux users who were unhappy with this decision. At the time, the only cross-platform game I had shipped was freeware, so I wasn’t too concerned about it, but I knew that I would probably have to cross that bridge before shipping Super Win the Game.

In June, after four or five months of Windows-only development, I began compiling Super Win on Mac and Linux. Right off the bat, I ran into some bugs in my OpenGL implementation that had never appeared in either my Windows OpenGL build or in any version of YHtWtG. Both of these were my own fault, and one was easily fixed (I was calculating the size of vertex elements incorrectly in certain cases), but the other was a little more sinister. I had some shader code containing a parameter which was intended to be a float but had been written as a float4. Outside of its declaration, all the code both within the shader and in the game’s code treated it as a one-component float. Despite being technically incorrect, this worked just fine on Windows under both DirectX and OpenGL. It was only when I switched to Linux and ran the game that the GLSL compiler on that platform threw a fit about it. In retrospect, it’s possible that testing with debug D3D and higher warning levels might have caught this far earlier, but I rarely remember to test in those conditions unless I have an obvious rendering bug in Direct3D. The scary part is that, even having found and fixed this one particular bug, I don’t feel like I can be entirely certain that my auto-generated GLSL code will compile on all users’ machines, and in fact, evidence has suggested that it will not, for reasons that aren’t yet clear to me.

It was around this time that I made the first of three attempts to build 32-bit binaries on Linux. Each of these deserves its own paragraph.

My first attempt involved building a 32-bit binary on my existing 64-bit distro. It sounded simple enough: add the “-m32” flag when compiling. What could possibly go wrong? Of course, this also meant recompiling 32-bit versions of all the libraries I was linking (SDL, Steam, GLEW), and of course each of these had their own dependencies. This led to what I described on Twitter as a Mandelbrot fractal of dependency issues. A zillion sudo apt-get installs later, I reached a point where (1) I still didn’t have all the dependencies I would need to compile 32-bit binaries, and (2) Code::Blocks would no longer open because some of the dependencies I installed had broken compatibility. With no clear path to revert these changes, I began manually removing these dependencies and reinstalling the 64-bit versions where I could. I finally reached a point where Code::Blocks would open again and I could compile the code, but it would immediately crash. Then I restarted my PC, and Ubuntu simply would not boot up. I had killed my first Linux distro.

This was June 2014. I had first installed Ubuntu on my PC eight months earlier, and I didn’t really remember how the process had worked. Did I use Wubi? Did I create a partition? There seemed to be several ways this could work, and none of them wanted to play nice, for reasons I couldn’t fathom. Sometimes my PC just wouldn’t boot to Ubuntu. At least once, the Ubuntu install process just spun indefinitely until I had to power my machine down. I think I restarted the entire process at least five or six times that night. Eventually, I ended up getting a 32-bit distro installed, in the hopes that I could build a 32-bit binary natively and it would just work on 64-bit distros, too. I was able to produce a 32-bit binary, but it didn’t run on a 64-bit distro right off the bat. I started installing dependencies to see whether I could make it work, and…yep. I killed that distro, too. Two Linux distros dead in two days.

Now here’s the really fun part. At some point when I was trying to get any version of Ubuntu working again, I ended up with my PC booting to GNU GRUB by default rather than the Windows Boot Manager that I was familiar with. And now that my Linux install was dead, GRUB wasn’t working, either. My PC wouldn’t boot at all, not even to Windows. For all intents and purposes, it appeared I had just bricked my PC trying to install compatibility packages on Linux. In an era before smartphones, that probably would have been true. Fortunately, I found a StackExchange post that described the exact problem I was having and the solution. I had a brief moment of panic when I wasn’t sure if I even still had my Windows discs, but I did eventually find them and was able to repair the damage. I had lost three days and could still only build 64-bit Linux binaries with any reliability, but I didn’t lose my PC completely. I chose to ignore the problem and get back to game dev work for a while.

A month later, barely a day after wrapping up three weekends in a row presenting the game at various expos and conventions, I returned to the wonderful world of Linux development with a new plan of attack. Since my desktop PC was too scarred from previous failures and I didn’t trust it to stand up to a fresh install, I decided to turn my laptop into my primary Linux development environment. This time, I went with a Wubi install, despite recommendations to the contrary — it was, I had eventually learned, how I had installed Ubuntu the first time, when I had had the least trouble with stability. I installed a 32-bit distro, rebuilt all my external libraries, compiled the game, and tested it on my 64-bit desktop install, and…this time it worked.

I can’t explain that. I don’t know why my results were different this time. To my recollection, I had followed the exact same steps once before with different results. I tested it on my Steam Machine and it worked there, too. To date, I don’t believe I’ve had any complaints about 32-bit/64-bit compatibility since I started building in this environment. I’m happy it finally worked, and it was strange how smoothly the process went this time compared to all my previous attempts, but I can’t explain it. In all likelihood, if this environment ever fails me for any reason, it will be the death knell for my support of Linux.


I’ve been using various versions of Visual Studio as my primary IDE since 2005. It took me months, if not years, to feel totally comfortable with it, so it’s not surprising that working in other environments would present difficulties.

I use Code::Blocks on Linux and Xcode on Mac. Code::Blocks seems like it was designed to be familiar to VS users. It has some oddities, but there are usually pretty clear analogues between the two environments. Importing projects and configurations from VS was fairly uneventful; it’s been the little unexpected things that I’ve had to learn to deal with in Code::Blocks. For instance, the order in which library dependencies are specified matters in C::B where it doesn’t in VS. This gave me endless linker errors, and for many months before I understood the source of the problem, I just worked around it by making throwaway calls to various library functions in game code so that the linker would understand it needed to link those libraries. I had another issue for a while where Code::Blocks would eventually just sort of choke and slow to a crawl after compiling the game once or twice. This turned out to be related to the inordinate number of warnings being displayed in the output window. Simply lowering the warning level fixed this problem. (Yes, a pedant might insist that the real solution here would be to address the warnings, but I always compile with /W4 in Visual Studio and produce no warnings at all, so I say gcc is just too persnickety.) The Code::Blocks debugger has been fairly hit-or-miss for me, too, often ignoring breakpoints entirely and requiring me to break at other places and step through code to get to where I want to be. I suspect, though I haven’t proven it, that this may be related to spaces in filename paths, which is one of those fun “I’ll know better the next time I write an engine from scratch” issues that crops up from time to time.

Xcode is a totally different ball of wax. It wants to work its own way with no regard for what others are doing. In my experience, it’s also a little too eager to happily build and run projects that really aren’t configured correctly, which has made it difficult to know for sure whether the applications I’m shipping will actually work on other users’ machines. It also likes to put intermediate and product files in bizarrely named folders buried somewhere inaccessible, and the options to relocate these paths to something more intuitive aren’t readily apparent. Shipping a product on Mac is a strange thing, too; Xcode really, really wants every application to be built for and deployed to the App Store, and that’s completely orthogonal to what I’m doing. As a result, the builds I produce outside of Steam are shipped in compressed disk images, which seems to be de rigueur despite the fact that OS X will spit out warnings about downloadable apps shipped this way. Apple seems to want every app to go through the App Store, with no regard for the cases in which the App Store simply doesn’t apply.


As I’ve been wrapping up the upcoming Super Win update, I’ve found myself in a familiar scenario of having to make new builds fairly frequently as I find and fix bugs. On Windows, deploying a new build is fast: I have a single script that rebuilds all game content, compiles Steam and non-Steam builds, builds an installer, and automatically commits all changes to source control. Historically, rebuilding on Mac and Linux has always been a little more involved. I haven’t had comparable scripts there; instead, I’ve had to manually open each IDE, choose the appropriate configurations or schemes, build the game twice (once for Steam and once for non-Steam), and manually package the executable and content into a .tar.gz or .dmg.

Yesterday, after repeating this process two or three times in as many days, I finally decided it was time to automate it. As it turns out, both Code::Blocks and Xcode have fairly simple command line support, as do tar and hdiutil for assembling the compressed packages. A few hours’ work later, I have scripts that will quickly and easily produce iterative builds on each platform. There’s still a little more work I could do here to achieve parity with my Windows build script (e.g., automatically committing to source control), but it’s a good start that should save me some time in the long run.
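For the curious, the Linux half of such a script might look roughly like the sketch below. The project, target, and file names here are all hypothetical, not my actual project layout, but the tools are the real command-line entry points: codeblocks accepts --rebuild and --target flags, and on the Mac side xcodebuild and hdiutil play the equivalent roles. Since those tools won't exist everywhere, only the tar packaging step is exercised here, against a stand-in build directory:

```shell
#!/bin/sh
# Sketch of an automated Linux build-and-package step. In the real script,
# the build phase would be something like:
#   codeblocks --rebuild --target="Release" SuperWin.cbp
#   codeblocks --rebuild --target="ReleaseSteam" SuperWin.cbp
# and the Mac equivalent would be:
#   xcodebuild -project SuperWin.xcodeproj -configuration Release build
#   hdiutil create -volname "Super Win" -srcfolder build/Release/SuperWin.app \
#       -ov -format UDZO SuperWin.dmg
# Below, we fake the build output and run only the packaging step.
set -e
stage=$(mktemp -d)
mkdir -p "$stage/SuperWinTheGame"
echo "stand-in executable" > "$stage/SuperWinTheGame/SuperWin"
echo "stand-in content"    > "$stage/SuperWinTheGame/Content.pak"
# Package the staged directory into the distributable archive.
tar -C "$stage" -czf "$stage/SuperWin-linux.tar.gz" SuperWinTheGame
# List the archive contents as a sanity check.
tar -tzf "$stage/SuperWin-linux.tar.gz"
```

Running the Steam and non-Steam targets back to back from one script is what collapses the old multi-step IDE dance into a single command.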

What would be really cool, and I’ll have to give this one some thought to work out the details, would be if I could automatically update the Code::Blocks and Xcode projects whenever my Visual Studio projects change. Most of the time, this happens when I’ve added, removed, moved, or renamed files. Assuming C::B and XC both have support for modifying projects from the command line (and that is admittedly a big assumption to make without researching it at all), I can imagine I could write a script to watch for changes to the VS project and make the corresponding changes in the other IDEs.
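The "detect what changed" half of that idea is fairly tractable even from a shell script, because both project formats are plain XML: a .vcxproj lists sources and headers as ClCompile and ClInclude items, and a .cbp lists them as Unit elements. The sketch below diffs the two file lists; every file name in it is made up for illustration, and the genuinely hard part, writing the changes back into the .cbp or .xcodeproj, is the part that would need real research and isn't shown.

```shell
#!/bin/sh
# Compare the file list of a (hypothetical) .vcxproj against a .cbp and
# report files present in VS but missing from Code::Blocks.
set -e
dir=$(mktemp -d)
cat > "$dir/Game.vcxproj" <<'EOF'
<Project>
  <ItemGroup>
    <ClCompile Include="Main.cpp" />
    <ClCompile Include="Synth.cpp" />
    <ClInclude Include="Synth.h" />
  </ItemGroup>
</Project>
EOF
cat > "$dir/Game.cbp" <<'EOF'
<Project>
  <Unit filename="Main.cpp" />
  <Unit filename="Synth.h" />
</Project>
EOF
# Extract and sort each project's file list.
grep -o 'Include="[^"]*"' "$dir/Game.vcxproj" | sed 's/Include=//; s/"//g' | sort > "$dir/vs.txt"
grep -o 'filename="[^"]*"' "$dir/Game.cbp"    | sed 's/filename=//; s/"//g' | sort > "$dir/cb.txt"
# Lines unique to the VS list are files the .cbp doesn't know about yet.
comm -23 "$dir/vs.txt" "$dir/cb.txt"
```

Here the missing file reported would be Synth.cpp. A real version would want a proper XML parser rather than grep, and a file watcher to trigger it, but the comparison itself is this simple.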

At the moment, I only build on Mac and Linux every once in a while during core development, so keeping the projects in sync isn’t a huge hassle to do by hand. One of my potential goals for the future, however, is to start producing more frequent in-dev builds on all platforms, and at that point, this might become a bigger deal.