TDD project summaries (KIXEYE & EA)

I recently spent two years at KIXEYE building a test engineering group and integrating automated testing into every game team. Significant improvements in development time, scalability and operational cost/quality were directly attributed to the load-testing work I led, driven by metrics tailored to the engineering, QA, producer and executive audiences.
After KIXEYE’s initial wave of success, operational costs skyrocketed and server failures during peak revenue hours became commonplace; the infrastructure and development processes simply weren’t scaling. The executive team issued a priority-one mandate to implement load-testing tools and test-driven processes for all titles.
To staff the testing team, we set up intern programs with several schools and hired the best interns from each batch into our QE group, where we trained them in load testing, application metrics and working with the game teams. We then embedded these QE engineers in each game team, working with each team to establish priorities while continuing to run daily scrums for mentoring and fine-tuning direction.

This approach overcame each game team’s strong initial resistance to load testing: engineering and launch decisions became largely driven by results from the automated tests and the embedded performance-metrics probes. We then extended the work into performance testing for the mobile and web clients.
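The probes and drivers themselves were nothing exotic. As a flavor of the pattern only, here is a minimal load-driver sketch in Python; the endpoint, client counts and timeout are hypothetical stand-ins for illustration, not our production tooling:

```python
import concurrent.futures
import statistics
import time
import urllib.request

# Illustrative stand-ins, not the real stack.
TARGET_URL = "http://localhost:8080/login"   # hypothetical game-server endpoint
CLIENTS = 50                                 # simulated concurrent players
REQUESTS_PER_CLIENT = 20

def simulated_client(_client_id):
    """One simulated player: hit the endpoint repeatedly, recording latencies."""
    latencies = []
    for _ in range(REQUESTS_PER_CLIENT):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(TARGET_URL, timeout=5).read()
        except OSError:
            continue  # failed requests are dropped; a real tool would count them
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    # Fan the simulated players out on a thread pool and pool their samples.
    with concurrent.futures.ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        samples = sorted(
            latency
            for client in pool.map(simulated_client, range(CLIENTS))
            for latency in client
        )
    if samples:
        p95 = samples[int(0.95 * (len(samples) - 1))]
        print(f"completed requests: {len(samples)}")
        print(f"median latency: {statistics.median(samples) * 1000:.1f} ms")
        print(f"p95 latency:    {p95 * 1000:.1f} ms")
```

The value of even a toy driver like this is that it moves the conversation from opinions to numbers: a launch gate can be stated as a latency percentile at a given concurrency rather than a feeling.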
A particularly interesting project for me was Robbie, a friendly front end to the Maxis automated testing systems. After the success of automated testing on The Sims Online, building a second-generation system, integrated into all teams, became a top studio priority. We tackled build stability first, then performance, and finally content regression for the studio’s next major release: The Sims 2.0.

By embedding myself in the Sims 2.0 team, we found tremendous opportunities to turn test-and-measure tools into task accelerators for the content team, and by aggregating performance statistics into an executive-level dashboard we were able to help focus engineering resources and provide clear cross-team communication. For example, the studio head was able to start a non-confrontational, very directed conversation about performance, as shown in this simple email: “I see on the dashboard that your frame rate is not where it should be at this point in your project. How are you planning to address this risk in the remaining time before launch?”
