Wednesday, September 13, 2017

Faster builds for C++ projects

AAA titles in the video game industry really push boundaries in tech. Game engines and their content requirements lead to massive projects -- almost exclusively in C++. In spite of ever faster mass storage and CPUs, build times get worse with each generation of titles.

C++ programmers at times look enviously to Python, JavaScript and other languages for their comparatively fast iteration loops, but know that leaving the speed, power and type safety of a compiled systems language like C++ is a train wreck waiting to happen. Sure, there are a few newcomers on the scene, but none compare to a language that has decades of stability and game industry support for blazing fast physics, rendering and game logic.

Yet, we end up in a self-defeating cycle. Production teams demand ever more content. Iteration times nose-dive, making it harder to turn around new features quickly -- build times rise past the 20 or 40 minute mark (and beyond). Producers then want to hire more programmers to get more out of their tech teams. This act of desperation looses dozens or more engineers on the code base, adding more code, more bugs and even worse iteration times.

What is a tech team to do? Inevitably, more coders are going to pour more code into the project, and I have yet to meet a producer who understands that "adding more people to the project just makes the project later." Well, the next best approach is to speed up build times.

There are some powerful build acceleration tools at our disposal as programmers. Some distribute builds across the developer's network, ensuring no single engineer has to spend all of their workstation's resources on a task that is too large. distcc and Incredibuild do a great job distributing compilation workloads.

Others cache workloads. ccache has long been available on Linux/UNIX systems. Unfortunately, that mostly helps server and back-end platforms -- most game and engine programmers still need to deal with Windows as the preferred PC gaming platform, and Microsoft compiler tools dominate console game development. Mobile platforms are locked in with Xcode, which does not support the use of ccache (though it can be crow-barred in by doing some unsavory things to an OS X system).

Around the turn of the century, I liked to use the ccache+distcc combination for my Linux platform MMO backends. A system would look for a cache hit and return immediately. On a miss, it would distribute each compilation pass across the network. It was beautiful. It was configuration hell. It was well worth all of the extra effort to administer and maintain.

So, let's say you are a AAA game developer (or otherwise locked in to using Microsoft's tools with C++). What can you do?

Distributed Compilation

Incredibuild and FastBuild both offer distributed compilation. Incredibuild focuses on just working out of the box, but is a commercial offering. FastBuild is an up-and-coming OSS project that is free, but requires more effort from developers to support.



Incredibuild is a no-brainer for any team larger than a few programmers that also has to develop and maintain a complex C++ code base. The time savings are a huge pay-off (from a business point of view) and provide better quality of life (from the perspective of a human being having to write code).

Configuring and maintaining it is straightforward. Install a coordinator somewhere. Install a bunch of agents on the network, give them the IP address of the coordinator, and you are off to the races.

Incredibuild has some pretty amazing tech behind its work distribution solution. Unlike distcc, which requires each node to have the right compiler installed, Incredibuild just shuttles your version of CL.EXE, LIB.EXE or other tools to an agent. The agent intercepts many system calls that access files, so it LOOKS like (on the agent) that CL.EXE is actually running on the initiating node.
Incredibuild's Build Monitor showing progress distributed across several machines.

It "just works" for just about every scenario.

I've used Incredibuild for at least 10 years (maybe longer) and insist on having it anywhere I work where it takes more than 20 minutes to get a decent build after switching branches or syncing someone's awful global header changes.

My experience with FastBuild is only tangential, but it is written by game developers at Ubisoft -- one of the few big publishers that takes engineering problems of scale seriously enough to encourage smart guys to take time to develop really cool solutions. 

FastBuild is a bit more invasive, requiring developers to adjust their build systems to accommodate the tool. Like ccache and CLCache, this can be employed to great effect, really reducing build times, especially for git-like workflows that have devs bouncing between feature development on their own branch and back to bug fixes on another.
FastBuild's integrated build monitor, very similar to Incredibuild's visualizer. 

FastBuild provides some distributed compilation, like distcc. It also tries to do some caching, like ccache. It appears to provide good performance for complex projects, with the cost being the requirement to re-tool projects and maintain a completely new build description language.

What about Caching?

For the longest time, if you were on a Microsoft compiler suite, you could not get a compiler cache. There *are* some options now, however! CLCache comes to mind on the Free Software side, as well as the commercial offering Stashed, which is designed to work with distributed build systems like Incredibuild or FastBuild. FastBuild also sports some compiler-cache features (see above).

CLCache

CLCache aims to be a free ccache-like implementation for Microsoft C/C++ compilers. To work most generally (without having to rewrite a project's build system), the programmer compiles the CLCache Python script, renames Microsoft's CL.EXE to another name that CLCache knows to invoke on a cache miss, and puts CLCache in its place. In principle it works a lot like ccache. In practice, it also comes with a lot of the headaches involved in trying to get Xcode/clang on Mac OS X working with ccache.

On small enough teams, the administrative headaches are easily worth the trouble. It does pretty much what it says on the tin. Best of all, it is free and open-source, so will likely improve with time.

There are some things it cannot do (yet) that might be a deal breaker for many programmers. These are the same reasons it has taken almost 20 years just to get a minimalist ccache equivalent for MSVC.

Working compiler caches that support CL.EXE are hard to make. Microsoft does some really wacky things at build time with its compiler driver (CL.EXE). Managing the program database is one thing that has so far been an obstacle for compiler cache developers.

So, using /Zi or /ZI with CLCache is still somewhere over the horizon. Projects are out of luck if they use PDBs for debugging, which are especially nice to keep around so you can ship release versions of a game while collecting minidumps from the top crashes in the wild and debug them. Unlike older COFF debug formats, PDBs move most of the debug symbols to a separate file that can be excluded from a shipping build, but still be used to analyze crashes.

Stashed

Stashed is a recently available, commercial compiler cache. It works on similar principles to ccache, but with more smarts. It does not require modifying an existing build system or really any configuration at all to work. The setup process is to download a 10MB installer, run it, verify an email address and just get back to work. It starts with a free month trial -- no credit card required.

One of the nice things about Stashed is that it works with PDBs (/Zi) just fine. What is super stellar is that it also plays very nicely with FastBuild and Incredibuild. It is, so far, the only game in town that scratches three of those itches:

  1. no crazy configuration or build system changes needed
  2. accelerates even when PDBs are being generated
  3. pairs up nicely with Incredibuild or FastBuild to distribute computation when a cache miss happens, and REALLY speeds them up when the cache is hot (sometimes by 5-10x faster than either of them alone!)


Here's a nice video showing Stashed + Incredibuild building an UnrealEngine project, as well as Stashed running standalone to build LLVM:


Conclusion?

Commercial projects with an existing code base should be using Incredibuild + Stashed, hands-down. No changes required to project files, no configuration overhead, can be easily installed and administered on lots of machines.

Small teams just starting and hobby/home users should check out all of the freebies available. ccache can be made to work on OSX and Linux. CLCache, with some effort, can be made to work seamlessly with MSVC.


Full disclosure:
I am a Stashed developer and have been using Incredibuild for a very long time, so my experiences may be biased. Please comment with corrections!

Saturday, August 1, 2015

"T"-shaped developers

What is a T-Shaped Developer?

The "T"-shaped developer is an agile term for a maker who has very broad knowledge and can help a team get stuff done regardless of the task, yet has very deep knowledge in a particular discipline. Non-agile projects and companies tend to pigeon-hole programmers and reward them on the depth of their knowledge of a particular subject.

This creates fragility in an organization. When "that one programmer that knows that one thing and performs that one function" is no longer available, the project is stuck and cannot move forward. An agile company is filled with lots of T-shaped generalists that can work anywhere to get stuff done.

This is not "Jack of all trades, master of none"; it is "Jack of all trades and master of one".

I was recently hired into a team as the "back-end guy" to do the "easy" work of building a business and service platform for a game. As a T-shaped developer, I had the opportunity to build an audio engine and provide music and sound for the game heading into green-light. I also took on UI and systems work for the game. There were not enough UI and audio experts available to finish features, so I was quite happy to step in and do the work. If I could only contribute as "the back-end guy" on the team, I would have been twiddling my thumbs or, worse, writing a ton of code apart from the game as busy work that might not fit the vision for the game.

The Generalist Programmer role does not mesh well with production and administrative management, however. A generalist does not slot into a hiring plan or career model. They contribute everywhere, and it is hard to judge their progress because they span such a broad spectrum of the work required to ship a game. This is a shame, because promoting the generalist, T-shaped developer unblocks a project by removing blocking tasks.

As a Technical Director, engineering advocate and hiring manager, I desperately sought and promoted the generalist, much to the consternation of the rest of the organization. Programmers who wanted to switch and expand their roles were often treated as risks -- because in the short term, for a schedule, they might be less efficient and lack the experience and wisdom to get it right the first time. Taking the long view, moving programmers around meant the organization as a whole became much stronger. Those risks are easily mitigated by pairing neophytes with the wizened grey-beards and running training courses. Sure, you lose an hour or two each week in training, but the project gains another expert who can help the business adapt to constantly changing requirements.

The T-Shaped developer on a team provides more value than can be quantified. They might do artwork, game AI, networking, build shaders, optimize a rendering pipeline or any number of specialized tasks that simply would never be completed when human beings are treated as specialists that can only perform a single function.

When building a Scrum team, or a new business, go for the T-shaped developers first. They can solve the surprise changes and complexity of creating something new when requirements drive developers out of their "expert" zone.

Friday, March 7, 2014

I Don't Have Time to Write Tests!

On more than one occasion during conversations around automating tests, I have heard a programmer say "I don't have time to write tests!"

Why?

1) Any programmer that submits code without compiling the source file, please leave a comment.


2) Okay, of the programmers that at least compile their source, do they build the entire project locally? If not, leave a comment.
It is ok. You aren't part of the team until you break the build for everyone...

3) If you are building for more than one platform, do you compile and link on all platforms before submitting a change? If not, leave a comment.
Screw those guys doing the PS3 port. They are jerks!

4) Of the programmers that compile, link and build for all platforms, how many run their changes under a debugger to prove (at least to themselves) it works? (Leave a comment)

5) Of the programmers that compile, link, build for all platforms and test locally, how many have found that their features have been wrecked by changes from other developers (artists, designers, programmers)? Comments should be full of this.

To illustrate the point, every project I have worked on has seen violations of all 5 scenarios. I will confess that I am also guilty of some or all of them. It happens and we learn :)

How much time, as the complexity of the project increases each iteration, is spent dealing with feature regression, technical debt and system fragility?

When a live service, earning millions of dollars each iteration, goes tits-up over an untested feature, leading to a panicked rush to fix it now now now!!! and then a cascade of fail, is it time to consider, as my angry sage and friend Jonathan Tanner said, "slowing down to move faster"?

Most programmers that have been in the business for a while suffer under a completely different regime: they know what to do, but pointy-haired bosses make doing the right thing impossible! 

This majority of engineers should take this little bit of wisdom to heart: producers do not produce code. Business managers have no business to sell without your work. They regard you as freaking wizards. Be honest with them about the costs of doing a bad job.

Do you have time to write tests? Can you make time to help your fellow developers ensure they do not accidentally break what you have created?

For a taste of a solid test framework, check this link: http://logicle-cplusplus.blogspot.sg/2014/03/writing-great-user-stories.html

Wednesday, March 5, 2014

Writing Great User Stories


A few years ago, I was introduced to Behavior-Driven Development (BDD), RSpec and shortly afterward, Cucumber. Having advocated for Test-Driven Development (TDD), I was curious and cautiously optimistic.

Working with Cucumber (http://cukes.info) has been great. It is feature specification, documentation and testing wrapped up in a single workflow. It starts with a feature file written in plain language that drives the rest of the test framework. It is a close cousin to TDD, in that the tests are written first, then the implementation follows until tests pass. The difference between BDD and TDD: BDD tools like cucumber use behavioral/business specifications as the starting point, not an engineering authored suite of tests.

Feature files usually start with what reads like a typical scrum user story, followed by several scenarios that clearly define the feature's behavior. The scenarios can be used as acceptance criteria in addition to documentation and driving tests. Gherkin (the DSL that drives Cucumber), like any language or technology, has established current best practices. Gherkin doesn't really care how a feature is described, but scrum-like user stories are typical.

One of the guidelines for writing great feature descriptions is to apply an adapted version of the Six Sigma 5-Whys method.

There is a great example of this approach on the cucumber wiki.
[5:08pm] Luis_Byclosure: I'm having problems applying the "5 Why" rule, to the feature 
                         "login" (imagine an application like youtube)
[5:08pm] Luis_Byclosure: how do you explain the business value of the feature "login"?
[5:09pm] Luis_Byclosure: In order to be recognized among other people, I want to login 
                         in the application (?)
[5:09pm] Luis_Byclosure: why do I want to be recognized among other people?
[5:11pm] aslakhellesoy:  Why do people have to log in?
[5:12pm] Luis_Byclosure: I dunno... why? 
[5:12pm] aslakhellesoy:  I'm asking you
[5:13pm] aslakhellesoy:  Why have you decided login is needed?
[5:13pm] Luis_Byclosure: identify users
[5:14pm] aslakhellesoy:  Why do you have to identify users?
[5:14pm] Luis_Byclosure: maybe because people like to know who is 
                         publishing what
[5:15pm] aslakhellesoy:  Why would anyone want to know who's publishing what?
[5:17pm] Luis_Byclosure: because if people feel that that content belongs 
                         to someone, then the content is trustworthy
[5:17pm] aslakhellesoy:  Why does content have to appear trustworthy?
[5:20pm] Luis_Byclosure: Trustworthy makes people interested in the content and 
                         consequently in the website
[5:20pm] Luis_Byclosure: Why do I want to get people interested in the website?
[5:20pm] aslakhellesoy:  :-)
[5:21pm] aslakhellesoy:  Are you selling something there? Or is it just for fun?
[5:21pm] Luis_Byclosure: Because more traffic means more money in ads
[5:21pm] aslakhellesoy:  There you go!
[5:22pm] Luis_Byclosure: Why do I want to get more money in ads? Because I want to increase 
                         de revenues.
[5:22pm] Luis_Byclosure: And this is the end, right?
[5:23pm] aslakhellesoy:  In order to drive more people to the website and earn more admoney, 
                         authors should have to login, 
                         so that the content can be displayed with the author and appear 
                         more trustworthy.
[5:23pm] aslakhellesoy:  Does that make any sense?
[5:25pm] Luis_Byclosure: Yes, I think so
[5:26pm] aslakhellesoy:  It's easier when you have someone clueless (like me) to ask the 
                         stupid why questions
[5:26pm] aslakhellesoy:  Now I know why you want login 
[5:26pm] Luis_Byclosure: but it is difficult to find the reason for everything
[5:26pm] aslakhellesoy:  And if I was the customer I am in better shape to prioritise this 
                         feature among others
[5:29pm] Luis_Byclosure: true!

This is equally valuable for user stories in a typical product backlog. A well-groomed backlog should have stories that communicate the value of each item for easy prioritization.

On some Scrum teams, as the product owner, I have been pairing user stories with scenarios as acceptance criteria. Each story is groomed a bit with the team to include scenarios, so the criteria for the features are clearly defined, testable, can be automated and can be communicated with business managers, production teams, artists, designers, testers and other engineers. The output of product backlog grooming (for upcoming items) is a set of cucumber feature files.

As stated previously in this blog, the "business value" for game developers is making fun. It is all too easy to forget that when talking about monetization, operating expenses, development budgets, etc... Yes, we are in business to make money, but we do so by offering a value proposition that is unique -- we sell fun and delight to customers. Everything else we do is pointless if we have nothing of value to offer.


Applying 5-Whys to user stories should focus on whether the feature satisfies Minimum Marketable Features (MMF). One addition to the criteria of problems we need to address with a feature is "will this make the game more fun?"

Friday, January 22, 2010

A Brief Refresher on Member Initialization

So, I forgot something that is maybe a little important, especially when it comes to multithreaded code.
#include <thread>

struct MakeThreads
{
    MakeThreads() :
    _thread(&MakeThreads::threadFunc, this)
    , _done(false)
    {
    }
    ~MakeThreads()
    {
        _done = true;
        _thread.join();
    }

    void threadFunc()
    {
        while(!_done)
        {
            // do stuff
        }
    }

    std::thread _thread;
    bool        _done;
};

MakeThreads letsScrewUp;

Now, my old, single-threaded, C self wants to put the smaller members later in the struct/class declaration to ensure data packing is efficient. My new, multi-threaded C++ self hasn't formed proper habits to prevent this sort of bug. How I wish I had lint here to continue to rap my knuckles and improve my coding habits!

Just a reminder: the member initializer list in the constructor does NOT dictate the order in which members are initialized. As Scott Meyers (and g++ with the proper warning settings) reminds us, members are initialized in the order they are declared in the class, so the member initializer list should always match the order of member declaration!

I made this mistake recently. For those that haven't already caught it, _done is initialized AFTER the thread is initialized. The thread queries _done (which may or may not be initialized at this point.)

This is one of those scenarios where the code does what you expect most of the time but rarely all of the time!
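A minimal sketch of the fix, assuming C++11's std::thread and std::atomic: declare _done before the thread member, so the flag is fully initialized before the thread can observe it.

```cpp
#include <atomic>
#include <thread>

struct MakeThreads
{
    MakeThreads()
        : _done(false)                              // initialized first...
        , _thread(&MakeThreads::threadFunc, this)   // ...then the thread starts
    {
    }

    ~MakeThreads()
    {
        _done = true;
        _thread.join();
    }

    void threadFunc()
    {
        while (!_done)
        {
            // do stuff
        }
    }

    // _done is declared BEFORE _thread: members initialize in declaration
    // order, so the flag is valid by the time the thread reads it.
    std::atomic<bool> _done;
    std::thread       _thread;
};
```

Using std::atomic<bool> also fixes a second, subtler problem: a plain bool written by one thread and read by another is a data race.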

Like any language, some of the finer points of C++ are easy to forget unless you find opportunities to practice them often. For those itches that aren't scratched on your day job, I highly recommend busting your own chops in your spare time for remedial work :)

Notes on Scaling a Multiplayer Game Server

Well, let's start with some basics. The Client/Server model (for video games) consists of a game server, which is authoritative for some or all of the game state. The clients produce messages from player input, and consume state updates as dictated by the server. The client side, as far as the network is concerned, is pretty straightforward:

  • Connect to the server
  • Handle incoming messages to update the world
  • Send local player input to the server
  • Clean up when the server disconnects

There are a few things that can be done to streamline the client network code, such as putting Network I/O in a separate thread. This isn't done to make the game render faster, but to prevent the network from timing out when the client is busy loading some massive data set. The game client will never spend significant memory or CPU time dealing with the network and its messages. Programmers can play it pretty fast and loose in this regard. If the client can pass packets around, the job pretty much consists of keeping the game state consistent with what the server is telling the client.

For servers that don't have to scale beyond a handful of connections, the same paradigms hold true. Pumping the network and tossing packets isn't much work compared to the heavy lifting the game simulation has to do. There is one problem that frequently plagues the server: O(n^2) (order 'n' squared, for those not familiar with Big-O notation) operations. Every FPS I've worked on was inherently O(n^2) in its messaging. Basically, this happens when every client causes some update that generates traffic to every other client connected to the server.

In the great tradition of examples contrived and simplified to illustrate a point, let's assume that game clients are authoritative for player position. To further complicate the problem (and the math), we'll throw the update rate into the mix. So, further assume that they update the network every 50ms (20Hz, twice as fast as the original Quake).

  • Each client sends 20 position updates each second.
  • A server with 2 clients will send 40 updates each second. 20 updates from Client 1, sends 20 update messages to client 2. Client 2 sends 20 update messages to client 1.
  • A server with 3 clients will send 120 updates each second. 20 updates from Client 1 will trigger a total of 40 update messages to client 2 and 3. 20 Updates from client 2 will trigger 40 update messages to clients 1 and 3. Client 3 will trigger 40 updates to clients 1 and 2.
  • A server with 4 clients will send 240 updates each second. 20 updates from client 1 will trigger 60 update messages to clients 2, 3 and 4. (etc...)

There seems to be a pattern here. With only 4 clients, running at 20Hz, the server needs to toss a packet roughly every 4ms. This is something most hardware can handle, but game servers need to handle dozens, hundreds or thousands of players. Oh, and it needs to also run a game simulation. This model doesn't hold up well in the face of those numbers.
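The pattern above works out to a simple formula: messages per second = rate × clients × (clients − 1). A quick sketch (the function name is mine, not from any engine):

```cpp
// Total server-to-client messages per second when every client's
// position update is relayed to every other client: rate * n * (n - 1).
long updatesPerSecond(long clients, long ratePerClient)
{
    return ratePerClient * clients * (clients - 1);
}
```

At 20Hz this gives 40, 120 and 240 messages per second for 2, 3 and 4 clients -- and a flashpoint with 100 players is already 198,000 messages per second.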

Most game programmers are painfully aware of these scaling issues. They employ a number of techniques to rein in the traffic requirements. Reducing the update frequency to other clients based on distance is one (of many) ways to affect the scalability equation. MMOs spread the load across dozens of servers in a cluster that makes up a single game "shard". Since this article is about scaling, and about the performance of network code, it won't focus on those techniques, but instead look at how scale affects overall performance of net code.

Scale comes in too many different flavors. Web servers need to deal with thousands of concurrent, isolated, short-lived connections. Chat servers handle thousands of concurrent, long-lived point-to-point connections. MMO servers must support hundreds or thousands of long-lived, point-to-multipoint connections. FPS servers deal with dozens of long-lived, point-to-all-point connections. The first common-sense reaction to scaling issues for a new server programmer is "well, Google manages to handle millions of clients with no problem, we'll use Web technology since this is already a solved problem."

The problems for game servers are primarily matters of pushing state updates at a rate that is proportional to the number of players that cause the updates, and the number of players that must receive those updates. Game servers are nothing like web servers, unless the game is designed to treat players as disconnected entities that have no effect on the state of the world the rest of the players participate in.

Consider Ironforge in World of Warcraft. At any point in time, there are hundreds of players nearby. It is one of the worst performing scenarios in multiplayer gaming. Everyone is running around in close quarters. In MMO parlance, it's a flashpoint. What is the server network code doing?

  1. The server receives a position update from a player.
  2. The server determines that 165 players in the immediate vicinity need to receive that update.
  3. The server sends 165 net messages. (Those 165 players are ALSO running around, creating messages. Do the math again: thousands and thousands of messages are required to keep this state consistent for the game clients!)

Ok, code time:

void Connection::send(void * data, size_t length)
{
    // blocks while the OS copies the data to a net buffer. 
    // If the kernel buffer is full, blocks the entire time the 
    // remote is acknowledging it accepted data from the other 
    // 164 messages it was sent
    _socket->send(data, length); 
}

That's what client code on the server might do. An evolution of this model may want to avoid the potential blocking the kernel may do while its send buffers are full.

void Connection::send(void * data, size_t length)
{
    // don't block the ENTIRE game sim for one slow-assed client
    // that isn't emptying the kernel socket buffer fast enough
    // assume _sendQueue is thread safe
    _sendQueue.push_back(new Message(data, length));
}

// in write thread, grab messages off the queue
void Connection::networkThread()
{
    while(_connected)
    {
        Message * m = _sendQueue.pop_front(); // locks queue; blocks until a message is available, then removes the front element
        _socket->send(m->data(), m->length());
        delete m;
    }
}

That's an improvement. Many client programmers will tell you that after they have optimized the snot out of their bleeding edge rendering system they had to start looking at allocations and moving memory around.

Allocating memory isn't doing work, it's making room to do work. Moving memory around isn't doing work, it's putting it someplace convenient to do work later. In our scenario, tossing 30,000 packets per second means there is a lot of work to do. Making 30,000 allocations per second and 30,000 deep copies per second will soon show up on the profile of an active server (though it will NEVER show up on the profile of a pre-production server that never has more than 10 or 20 people connected). Lesson to learn here: Play-testing for months with a few dozen users will never prepare code for what happens when thousands of users start beating the snot out of it.

One more word about allocations: the biggest risk for a server that needs to scale is memory fragmentation. Allocating and freeing tens of thousands of variable length buffers each second wreaks havoc on a server. It's not uncommon for a server to fall on its face because it cannot allocate memory. This can happen before it stalls trying to send/receive/move packets. I wouldn't bet on that race, but it is something that kept me awake some nights while trying to find a good solution. An allocation failure can happen when the process has 1GB of free memory, but doesn't have 1KB of free CONTIGUOUS memory to give the application when it needs it!
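One common mitigation is a fixed-size buffer pool carved out of a single up-front allocation, so steady-state message traffic never touches the heap. A minimal sketch (the class and sizes are illustrative, not from any particular engine):

```cpp
#include <cstddef>
#include <vector>

// A trivial fixed-size block pool: one up-front allocation, O(1)
// acquire/release, no steady-state heap traffic and no fragmentation.
class BufferPool
{
public:
    BufferPool(std::size_t blockSize, std::size_t blockCount)
        : _storage(blockSize * blockCount)
    {
        _free.reserve(blockCount);
        for (std::size_t i = 0; i < blockCount; ++i)
            _free.push_back(&_storage[i * blockSize]);
    }

    char * acquire()              // returns nullptr when exhausted -- caller must cope
    {
        if (_free.empty()) return nullptr;
        char * block = _free.back();
        _free.pop_back();
        return block;
    }

    void release(char * block)    { _free.push_back(block); }

private:
    std::vector<char>   _storage; // single contiguous slab, allocated once
    std::vector<char *> _free;    // LIFO free list of fixed-size blocks
};
```

A real server would add thread safety and probably several pools for different message size classes, but the principle is the same: allocate once, recycle forever.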

Reducing the allocations (that don't do work) and deep memory copies (that also don't do work) provides the greatest improvement for network code. Reducing the number of messages sent is the job of game code and game design. Network code can't fix an insane design, but it can try to accommodate a reasonable one so it can scale well.

void Game::updatePlayersNearby()
{
    MessageBuffer & buffer = _connectionManager.getBuffer();
    buffer << _newStateDeltas;
    _outstandingMessages.insert(&buffer);
    foreach(Connection * player, _nearbyPlayers)
    {
        player->send(buffer);
    }
}

void Game::updateComplete(MessageBuffer & buffer)
{
    if(std::find(_outstandingMessages.begin(), _outstandingMessages.end(), &buffer) != _outstandingMessages.end())
    {
        _outstandingMessages.erase(&buffer);
        _connectionManager.freeBuffer(buffer);
    }
}

There are a LOT of assumptions made in that code that I hope are obvious. The principle point is that a server should multicast or broadcast to multiple connections without allocating new memory and without copying that memory for each connection. A single, pre-allocated message buffer is requested. That same buffer is shared for sending to ALL connections that need the data, and that buffer is pegged until they are all finished. The load on memory and the CPU for non-work is eliminated.
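With C++11, std::shared_ptr can express the same idea with less hand-rolled bookkeeping: the buffer is built once, every connection's send queue holds a reference, and the buffer frees itself when the last send completes. A hedged sketch (the types and names are mine, not from the code above):

```cpp
#include <memory>
#include <vector>

struct Message
{
    std::vector<char> payload;           // built once, never deep-copied per recipient
};

using MessagePtr = std::shared_ptr<const Message>;

struct Connection
{
    std::vector<MessagePtr> sendQueue;   // stand-in for the real per-connection I/O queue

    void send(const MessagePtr & msg)
    {
        sendQueue.push_back(msg);        // bumps the refcount; no allocation, no copy
    }
};

// Broadcast one pre-built buffer to every nearby player. The Message is
// destroyed automatically when the last Connection finishes with it.
void broadcast(const MessagePtr & msg, std::vector<Connection *> & nearby)
{
    for (Connection * player : nearby)
        player->send(msg);
}
```

The refcount increment is cheap compared to a per-connection allocation plus deep copy, and the "peg the buffer until everyone is done" rule falls out of the pointer's lifetime for free.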

This isn't a panacea for scaling multiplayer game servers. There are MANY other issues involved with scaling a server well. In my personal experience, these are the issues most relevant to scaling the low level network code itself. This helps to address some of the problems that are most often experienced with game server technology. Always consider the traffic characteristics:

  • How many simultaneous connections will there be?
  • What are network side-effects of a connection sending a message to the server? (Send 1 or send N packets?)
  • How long will the connections last?

Take those few questions into consideration, and also think about how much non-work the server should sacrifice in the interests of performance. Sometimes non-work is a time sink that contributes to lag and limits the overall scalability of the server.

Lastly, not all multiplayer games are games that need to scale. There's no need to go overboard with Overlapped I/O, shared buffers and other paradigms that complicate game code if the game code doesn't need to accommodate the scaling techniques these technologies are designed to solve.

Until next time ...

Have Fun!

Monday, December 7, 2009

Nails

I picked up the new Motorola "Droid" phone. I'm a big fan of Linux and this has to be one of the most useful devices I've purchased in recent memory. Of course, my first thought was how I might write some code for the Android platform.

Google's SDK is very Java-oriented. I've done some development in Java, long ago, and didn't care for the experience. My last gig at Microsoft required a lot of C# work, which brought back the bad-old-days of Java programming. Fortunately, Google also provides an NDK for native development. I guess C++ is a programmer comfort zone to me.

After a bit of hackery, I was running a simple OpenGL ES app. It was cute. The process was relatively painless, and I was thrilled to have my bits shuffled by my Droid's ARM-7. After stewing on it for a while, I began to wonder if maybe I was being too harsh on Java/C# and other languages for game development. Most of the work I've done over the past decade has been on large, expensive productions -- not something you build on today's mobile platforms. I was guilty of using the C++ hammer in my toolbox to the exclusion of everything else.

So, I've decided to put together a Flash-based game with an eye to porting it to Java and C# for other platforms (Android/XNA, perhaps). All of the targets support blasting bits around to a display device or bitmap, so I'm going old-school. I forgot how much fun it is to watch a simple game evolve over the course of a few hours and a few hundred lines of code.

If the game reaches a playable state and would otherwise be destined to rot in my bit-locker, maybe I'll post a walkthrough here on the blog. It may not be C++, but it is game development, and not every nail is a multi-million dollar production requiring a C++ hammer.