My Automated Testing Trail and My Executable Use Cases Approach

I’ve been using automated testing as a development acceleration tool since 1989, when it saved me a lot of grief as a programmer in the super-computing world. I moved my first team to my form of test-driven development in 1990. Since then I’ve tried to advance my testing strategies with each new software project, and as part of my self-development plan, I’ve given two new automated testing lectures each year for ten years.

My focus is not just lowering the cost of quality for the user via automated testing, but also improving development quality through better infrastructure, less waste, and faster iteration loops.

I’ve been one of the major leaders in bringing automated testing and metrics-driven development into the gaming industry, and not just from a quality perspective. I follow the Lean school of thought: if you attack the quality improvement problem by improving the production processes, you end up with both higher quality and faster development times.

  • A summary presentation of my automated testing approach in games
  • Automated metrics collection and aggregation are an under-served part of the automated testing problem
  • I co-authored an MMO engineering textbook, writing the chapters on automated testing and metrics aggregation for online games
  • Overall, I’ve given a dozen industry lectures on accelerating production via automated testing, metrics & architecture
  • As part of my personal growth process, I’ve given at least one talk each year on a new aspect of automated testing, for over a decade
  • At EA, I revolutionized the testing process for The Sims franchise and helped kickstart testing projects in other studios. We created one of the first fully automated build/deploy/test/measure pipelines in the game industry (2001). My approach changed the game’s architecture to support easy automated testing, which let us run load testing, regression testing and CI/engineering tests via a single test system and, for some games, via a single test client
  • My auto-test approach differs from most: I test and measure at the player-experience level, and I modify the code architecture to be more testable. This radically lowers the cost of testing, increases malleability as the product shifts over time, and supports the huge amount of iterative development required in interactive systems
  • Before games, I was also responsible for some of the earliest advances in automated testing, and I’ve iteratively improved my techniques with every project since 1989. Specifically, I designed and built testing tools for engineering speed, performance testing in super-computing, and functional/compatibility testing across ranges of super-computing and clustered-computing options. In 1990, I created one of the first test-driven development approaches: every engineer on the team wrote tests (in my custom harness) before writing their code, all code had to pass before check-in, and we ran one of the earliest nightly build systems, executing unit tests, full system tests and performance tests each night. I also designed the load testing system for the HLA RTI 2.0 (the military-standard networking engine for distributed virtual worlds used in training simulations) while I was a DARPA contractor in Advanced Distributed Simulation and tightly-coupled clustered computing.
  • My long-term goal is to increase innovation by taking cost, risk and time out of building interactive systems.
  • This is a test plan (and a simple functional-testing code sample) I did for Blizzard. They described it as the best test plan they had ever seen.
  • My current work in Lean Game Factories is based heavily on my custom automated testing approach for interactive systems. We’ve built a continuous deployment pipeline that does the usual unit/functional testing, but also performance testing, on devices and at load, for every code check-in. By exercising the system under test in different ways, we’ve managed to support every part of the game team:
    • Game designers and monetization teams: a decision-aid tool for early analysis (player bots that play through all the content, every night, with automated metrics aggregation on balancing data)
    • Engineering: performance testing (client and server)
    • Upper Management: prediction of progress
    • Daily Management: automated collection of Kaizen-style Waste and Friction metrics (essentially automated Production Efficiency Metrics, including heatmaps of defects and change rates per code module, trended over time, as well as common failures or slow tools that interfere with production)
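To make the player-bot idea above concrete, here is a minimal, hypothetical sketch of a player-experience-level test: a scripted bot plays through content via the client interface, and the harness aggregates balancing metrics for designers. This is not the actual EA or Lean Game Factories code; `GameClient`, `play_level`, and the canned level data are all invented for illustration, standing in for a real test client wrapping the shipping game.

```python
# Hypothetical sketch: a player bot drives the game through a client
# interface, and the harness aggregates per-level balancing metrics.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class GameClient:
    """Stand-in stub for a real test client; returns canned outcomes."""
    # level_id -> (seconds_to_clear, gold_earned) -- invented data
    levels: dict = field(default_factory=lambda: {
        "tutorial": (45.0, 100),
        "level_1": (120.0, 250),
        "level_2": (300.0, 400),
    })

    def play_level(self, level_id: str) -> dict:
        seconds, gold = self.levels[level_id]
        return {"level": level_id, "seconds": seconds, "gold": gold}

def run_bot_playthrough(client: GameClient) -> list:
    """The bot plays every level, exactly as a player would."""
    return [client.play_level(level_id) for level_id in client.levels]

def aggregate_balance_metrics(runs: list) -> dict:
    """Nightly aggregation: averages designers read as a decision aid."""
    return {
        "levels_cleared": len(runs),
        "avg_seconds_per_level": mean(r["seconds"] for r in runs),
        "avg_gold_per_level": mean(r["gold"] for r in runs),
    }

if __name__ == "__main__":
    runs = run_bot_playthrough(GameClient())
    print(aggregate_balance_metrics(runs))
```

Because the bot exercises the game at the player-experience level, the same harness can back regression, load, and balancing runs just by swapping what is measured.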
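The waste-and-friction metrics above reduce to a simple aggregation step. Here is a hypothetical sketch (not the actual pipeline code) of rolling raw defect and change events up into per-module counts, the kind of data that would feed a heatmap trended over time; the event format and module names are invented for illustration.

```python
# Hypothetical sketch: aggregate defect/change events per code module,
# the raw input for a Kaizen-style friction heatmap.
from collections import defaultdict

def aggregate_friction(events: list) -> dict:
    """Roll raw tracker/VCS events into per-module counts."""
    counts = defaultdict(lambda: {"defects": 0, "changes": 0})
    for event in events:
        if event["kind"] == "defect":
            counts[event["module"]]["defects"] += 1
        elif event["kind"] == "change":
            counts[event["module"]]["changes"] += 1
    return dict(counts)

if __name__ == "__main__":
    # Canned events standing in for bug-tracker and VCS feeds.
    events = [
        {"module": "renderer", "kind": "defect"},
        {"module": "renderer", "kind": "change"},
        {"module": "renderer", "kind": "change"},
        {"module": "netcode", "kind": "defect"},
        {"module": "netcode", "kind": "defect"},
    ]
    print(aggregate_friction(events))
```

Bucketing the same events by day or sprint turns these counts into the trend lines management reads for progress and friction.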

I can (and do) talk all day about how to improve automated testing and expand the use cases into all aspects of production. But I’ll stop here for now 😉

