My Automated Testing Trail and My Executable Use Cases Approach

I’ve been using automated testing as a development acceleration tool since 1989, when it saved me a lot of grief as a programmer in the super-computing world. I moved my first team to my form of test-driven development in 1990. Since then I’ve tried to advance my testing strategies with each new software project, and as part of my self-development plan, I’ve done two new automated testing lectures each year for ten years.

My focus is not just lowering the cost of quality for the user via automated testing, but also improving developer effectiveness through better infrastructure, less waste and faster iteration loops.

I’ve been one of the major leaders in bringing automated testing and metrics-driven development into the gaming industry, but not just from a quality perspective. I follow the Lean school of thought: if you attack the quality improvement problem by improving the production processes, you end up with both higher quality -and- faster development times.

  • A summary presentation of my automated testing approach in games
  • Automated metrics collection and aggregation is an under-served portion of the automated testing problem: http://maggotranch.com/MMO_Metrics.pdf
  • I co-authored an MMO Engineering textbook, writing the chapters on automated testing and metrics aggregation for online games
  • Overall, I’ve done a dozen industry lectures on accelerating production via automated testing, metrics & architecture
  • As part of my personal growth process, I’ve done at least one talk on a new aspect of automated testing every year for over a decade
  • At EA, I revolutionized the testing process for The Sims franchise and helped kickstart testing projects in other studios. We created one of the first fully automated build/deploy/test/measure pipelines in the game industry (2001). My approach changed the game’s architecture to support easy automated testing, which allowed us to support load testing, regression testing and CI/engineering tests via a single test system, and for some games, via a single test client
  • My auto-test approach differs from most: I test and measure at the player experience level, and modify the code architecture to be more testable. This radically lowers the cost of testing, increases malleability as the product shifts over time, and supports the huge amount of iterative development required in interactive systems
  • Before games, I was also responsible for some of the earliest advances in automated testing, and I’ve iteratively improved my techniques with every project since 1989. Specifically, I’ve designed and built testing tools for engineering speed, performance testing in super-computing, and functional/compatibility testing across ranges of super-computing and clustered computing options. In 1990, I created one of the first test-driven development approaches: I had all engineers on the team writing tests (in my custom harness) before writing their code; all code had to pass before check-in, and we also had one of the earliest nightly build systems, which ran unit tests, full system tests and performance tests each night. I also designed the load testing system for the HLA RTI 2.0 (the military-standard networking engine for distributed virtual worlds used in training simulations) when I was a DARPA contractor in Advanced Distributed Simulation and tightly-coupled clustered computing.
  • My long-term goal is to increase innovation by taking cost, risk and time out of building interactive systems. http://maggotranch.com/Innovation_Factory.docx
  • This is a test plan (and simplistic functional testing code sample) I did for Blizzard. They described it as the best test plan they had ever seen. https://github.com/griddletoks/wow-test
  • My current work in Lean Game Factories is based heavily on my custom automated testing approach for interactive systems. We’ve built a continual deployment pipeline that does the usual unit/functional testing, but also performance testing, on devices and at load, for each code checkin. By tickling the system under test in different ways, we’ve managed to support every part of the game team:
    • Game designers and monetization teams: a decision aid tool in early analysis (player bots that play through all the content, every night, with automated metrics aggregation on balancing data)
    • Engineering: performance testing (client and server)
    • Upper Management: prediction of progress
    • Daily Management: automated collection of Kaizen-style Waste and Friction metrics (essentially automated Production Efficiency Metrics, including heatmaps of defects and change rates per code module, trended over time, as well as common failures or slow tools that interfere with production)

I can (and do) talk all day about how to improve automated testing and expand the use cases into all aspects of production. But I’ll stop here for now 😉

Top five metrics mistakes in games

Here are the top five mistakes I’ve observed when a project tries to implement a metrics program. These are generalities extracted from multiple observations, and as such, are intended to provide rule-of-thumb guidance, not rules chiseled in stone. On-the-ground conditions in any given project may require a metrics solution tailored to its specific needs.
Note that some very important metrics usually get up and running without much risk. For example, channeling user behavior metrics into the game design group is such an obviously mission-critical task that it will usually happen even if the game designer has to buy an SQL textbook. Thus few user behavior metrics are represented in the five most common mistakes with metrics.

Top five mistakes in metrics
One: No application of metrics in task assignments and project management.
Two: Failing to measure factors that affect team efficiency and delivery schedules.
Three: Raw, singleton data and manual distribution don’t work. You must automate the entire collection, aggregation and distribution cycle.
Four: Not having senior engineers involved in the architectural analysis, implementation design and growth of your metrics system will either cripple or kill your metrics project.
Five: Not using metrics generated via repeatable automated tests at the front end of the production pipeline to prevent defects from moving further down the production line.

ONE: No application of metrics in task assignments and project management.
a)    Without a measurable goal, it is unclear when a particular task is considered done, or rather, done well enough. The developer has little incentive to do more than get the task done in the minimal amount of time: people respond to the way in which their performance is measured. The level of completeness, stability, performance, scalability and other critical factors tends not to be addressed unless those factors are part of the Measures of Success for a given task, or until they become a serious problem. This can result in a very high go-back cost: the time spent fixing defects in a module or in other, connected modules. To paraphrase one senior MMO engineer: “Using metrics in my task allowed me to significantly improve performance and remove some bottlenecks. But my question is, why would I ever use metrics again, unless it is out of the goodness of my heart? My manager did not specify anything beyond getting the feature to work; not how well it worked, or how stable it needed to be. So if I spend time improving my module via metrics, I have, in my manager’s eyes, achieved less work that week: I could’ve left my first task alone and gotten other tasks done instead of improving it.”
b)    Metrics also help to accurately focus staff on real problems, not perceived problems. For example, if a system is failing to scale, there are two paths to follow. The common approach is to gather the senior engineers together, have them argue for a while about what might be causing the problem, and then implement one of the educated guesses, hoping to get lucky. The other path is to place some metrics probes in the failing system and run a test. With the resultant metrics, it is usually much easier to find where the problem is, implement a solution, and rerun the tests to see if the scalability numbers have improved.
c)    Before we implemented an effective metrics system on TSO, engineers were tasked mostly by educated guessing: we had no way to observe what was going on inside our game and were thus trying to debug a large-scale, nondeterministic black box, with very little time remaining. Once we had effective metrics, server engineers were tasked mostly via metrics coming out of automated scale testing. Our production rate soared.
d)    Aggregated data also provides an easy, excellent focusing tool. A Crash Aggregator can pull crash frequencies and locations per build to provide the number of crashes at specific code-file and line-number locations. Prioritization then becomes quite simple. If you know that bug 33 crashed 88 times in build 99, you know that it is a more critical fix than bug 1 that crashed once in build 99.
e)    Lack of metrics-driven task assignment is particularly deadly in the iterative, highly agile world of game production, which has some pretty deep behaviors burned into how things are done. Further, agile development is sometimes the pretext for programmers to continually change what they want to build, on the fly. Gold-plated toilets in a two-story outhouse are often the result… Customer-driven metrics, task-driven metrics and team efficiency metrics are good antidotes for keeping teams focused.
f)    The risk of building necessarily partial implementations on the fly is that “something” is working by the due date, but it has only a fuzzy possibility of being correct. Further, the go-back costs are not accounted for in the schedule and thus become “found work” that adds unexpected time. Of course many features are experimental in nature: they may shift radically or may not make it into the final game, so it makes sense to build as little as possible until the system requirements are understood. This is still very addressable via metrics: as part of such experimental tasks, simply define the Measures of Success for the initial task as “implement core functionality only” and address the rest later.
g)    Example: when building an inventory system, you need to deliver enough of that system so that some play testing can be done, but you don’t need to cover all edge conditions up front. Instead, you define and build only the core functionality and deal with edge conditions later, when you will actually have firmer knowledge of how the system is used and what it is expected to do. Using Inventory as the example, the core functionality is simply <add item; remove item; show current items>. Completions of such features are easily tested and measured, and are thus easy to keep stable in the build and in gameplay. Similarly, once the final inventory requirements are known, the measurable conditions of “ready for alpha” or “ready for launch” are easy to define. In this case, the final acceptance metrics would be something like: with 30 items allowed in the inventory, delete one item, then test and measure that the inventory count goes down by one; verify that the item has actually been removed (from the user’s perspective); verify that all other items are still in the inventory; verify that adding a 31st item does not damage the existing inventory items and that an appropriate error message is given; verify that deleting a nonexistent item returns failure, with all existing items still intact; verify that deleting a nonexistent item from an empty inventory returns failure, and that adding a real item to that potentially corrupted empty inventory still works; and so on.
h)    Metrics allow tying production and operation actions to the big three business metrics: cost of customer acquisition, cost of customer support and cost of customer retention. And if you can quantify an improvement you want to make in the game and track how it affects the big three business metrics, you can do what you need to do: no fuss, no muss.
i)    Finally, without project management using task completion metrics, identifying the current state of game completion and projecting long-term milestones are at best exercises in wishful thinking. This tends to result in projects that inch closer and closer to their launch date, with little actual idea of what will happen then, or even if the game will be complete by then. With early, accurate measures of completion, actions can be taken early enough to improve projects at risk: adding staff, cutting features or pushing back the release date. Without early, accurate measures, by the time the problem is detected it is too late to do anything about it.
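The inventory acceptance checks in point g) above translate almost directly into an automated test. Here is a minimal sketch in Python, assuming a hypothetical `Inventory` class with a 30-item cap; all names and the API shape are illustrative, not a real game's code:

```python
class Inventory:
    """Illustrative core functionality only: add item, remove item, show items."""
    CAP = 30

    def __init__(self):
        self._items = []

    def add(self, item):
        if len(self._items) >= self.CAP:
            return False  # reject the overflow item, leave existing items untouched
        self._items.append(item)
        return True

    def remove(self, item):
        if item not in self._items:
            return False  # deleting a nonexistent item reports failure
        self._items.remove(item)
        return True

    def items(self):
        return list(self._items)


def test_inventory_acceptance():
    inv = Inventory()
    for i in range(30):
        assert inv.add(f"item-{i}")        # fill to the 30-item limit
    assert not inv.add("item-31")          # the 31st item is rejected...
    assert len(inv.items()) == 30          # ...without damaging existing items
    assert inv.remove("item-7")            # delete one item
    assert len(inv.items()) == 29          # count goes down by one
    assert "item-7" not in inv.items()     # the item is actually gone
    assert not inv.remove("item-7")        # nonexistent delete returns failure
    assert len(inv.items()) == 29          # all other items still intact
    empty = Inventory()
    assert not empty.remove("anything")    # delete from empty inventory fails
    assert empty.add("first")              # and adding to it still works
```

Because each check is a plain assertion against observable behavior, the same test runs unchanged in CI, in nightly builds, and as a pre-checkin gate.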

TWO: Failing to measure factors that affect team efficiency and delivery schedules.
a)    Large teams building large, complex systems can be crippled by small problems, brittle code, non-scalable tools and lack of development stability. Even if individual developer efficiency drops by only 10%, a 100-person team takes a serious hit in the amount of work done, each and every week.
b)    Some such factors are build failure rate, build download time, build completion time, game load time, components with a high go-back cost, time from build start until the build reaches QA, time until pass/fail data reaches production, server downtime, and so on. These and other critical-path tasks not only slow production; they are also mission-critical problems in operations.
c)    Measuring bottlenecks in your content production pipeline can point to places where automation could be added to speed up production; if server instability, database incompatibility or broken builds are recurring bottlenecks, the engineering team then has an actionable task that will measurably improve content production. In TSO, we found that such bottlenecks, despite being widely known as a problem, were not tagged as priority problems to solve! The management team was under tremendous pressure to build features and add content, and assigning resources to fix a fuzzily defined artist-tool issue instead of putting more pixels on the screen is a hard sell. So the problems were always dismissed: “oh, the build probably doesn’t fail often anyway”, “it probably doesn’t affect the team very much when it does”, or “oh, we probably won’t have another Perforce failure, we must’ve found them all by now”. But when we quantified the number of build failures in a week, multiplied by the size of the team and how long it took people to resume forward motion, stabilizing the build became a top-priority problem. Lost team efficiency via a poor production environment is one of my favorite metrics. It has always resulted in tool improvements and a faster, more stable production cycle, one that makes it easier to project delivery times for large-scale systems. In a TSO postmortem, the senior development director stated that “[stabilizing the build] saved us.”
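The build-failure arithmetic in point c) is simple enough to sketch. The numbers below are invented for illustration (they are not TSO's figures), but the shape of the calculation is what made the argument land:

```python
def weekly_build_failure_cost(failures_per_week, team_size, recovery_hours):
    """Person-hours lost per week when a broken build stalls the whole team."""
    return failures_per_week * team_size * recovery_hours

# e.g. 3 broken builds a week, a 100-person team,
# and 1.5 hours on average for each person to resume forward motion
lost = weekly_build_failure_cost(3, 100, 1.5)
print(lost)  # → 450.0 person-hours lost, every week
```

Framed that way, "stabilize the build" stops being a fuzzy tooling complaint and becomes a quantified, comparable line item against feature work.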

THREE: Raw data and manual lookups don’t work. You must automate the entire collection, aggregation and distribution cycle.
a)    Building a series of one-off metric systems that do not support the entire metrics collection/aggregation/distribution cycle is a path to duplicative dooms. One-off systems quickly rot, which is why you can find so many dead ones littering your code base; people hack together what they need for the moment and then they are done with it. And when you next need a number, you’re back at square one: the old hacks are dead so you hack in a new metrics ‘system’.
b)    One-off tools do not generally support correlation and aggregation across multiple databases, nor do they generally have team-wide distribution or sophisticated visualization built in.
c)    One-off systems generate only a specific type of report and must be run by hand whenever the data is needed, with delivery to others on a whim or by e-mail. In other words, the data is not actionable. To be actionable, a metrics report must contain specific data points before a given task is started and report any changes in those data points after the task has been completed. Such reports are “breadcrumbs” that quickly lead the developer to the problem and show when the problem is solved.
d)    A team-wide Metrics Dashboard helps improve the efficiency of developers by supplying real-time views into the most common and most critical reports. It also helps improve the efficiency of your build masters and senior engineers, who are continually distracted by questions such as “where’s my stuff in the build pipeline?” or “why is this <thingy> broken?”
e)    Lack of automation in a metrics system means somebody has to continually do a lot of data aggregation and communication by hand. This generally leads to people working with what they know: a simple, one-of-a-kind spreadsheet built and then discarded, or some incredibly complex spreadsheet into which someone enters massive amounts of data by hand before e-mailing the results. Sounds very real-time and accurate; a tool people would love to use, right?
f)    Your metrics system also needs to support calibration: you use the results in critical business decisions, so you need to know that the numbers are accurate. That means removing nondeterministic factors by, for example, aggregating hundreds of test runs, eliminating the outliers and averaging the middle third of the results. This is a typical function that the report builder tool needs to support.
g)    Using metrics in multiple areas also helps prevent code rot: the system is always in use, and therefore will be kept up to date. Further, a stable, feature-rich metrics system reduces the incentive for engineers to create one-off metric systems, and thus prevents duplicative, wasted work. Finally, if the metrics system is on the production/operations critical path, not only will it remain active, it will be continually grown by the people using it in day-to-day tasks.
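The calibration step described in point f) is mechanically simple. A minimal sketch of the "drop the outliers, average the middle third" idea, with invented frame-time samples; a real report builder would apply this across hundreds of runs per metric:

```python
def middle_third_average(samples):
    """Calibrate a noisy metric: sort the runs, discard the outer
    thirds (the outliers), and average the middle third."""
    if len(samples) < 3:
        raise ValueError("need at least 3 runs to trim the outer thirds")
    ordered = sorted(samples)
    third = len(ordered) // 3
    middle = ordered[third:len(ordered) - third]
    return sum(middle) / len(middle)

# nine frame-time samples (ms) from repeated runs of the same automated test;
# the nondeterministic spikes (3 and 90) are discarded before averaging
print(middle_third_average([16, 17, 16, 90, 15, 16, 3, 17, 16]))  # → 16.0
```

This is essentially a trimmed mean; the point is that the trimming must live in the shared report builder, not be re-derived ad hoc every time someone needs a trustworthy number.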

FOUR: Not having senior engineers involved in architectural analysis, implementation design and growth of your metrics system will either cripple or kill your metrics project.
a)    A metrics system capable of supporting a large-scale online game is a complex system in and of itself. A poor metrics tool will be a hard sell to a production team that has gotten along without metrics before, or it may be integrated into a project and then crack at the seams as the software and customer base scale up. Examples of tasks that are beyond junior engineers to complete without guidance: tailoring the system to meet on-the-fly priority requests, deciding which key metrics to capture, making the system flexible enough for easy addition of new metrics and rapid aggregation/calibration of new reports, making the system scalable, and building an easy user interface for the complex aggregation/calibration/new-report functions.
b)    In other words, the design and implementation of a mission-critical tool usually falls down the programmer pecking order to the people least likely to make the correct decisions: to correctly implement a real-time report creation tool or a real-time report viewing tool, to build a massively scaled metrics database that imports data from multiple external sources, or to correctly aggregate data from multiple, radically different databases.
c)    Example: correlating game data with data from CS or social networks can produce a profit/trouble ratio for customers, or detect bots and hackers. One could correlate game features to network costs and suggest game changes to lower network costs, or network changes to strengthen gameplay. One could easily find the players that generate the highest revenue with the lowest hassle, or expand the detection of a single hacker into finding the other hackers associated with them, or even different hackers using the same basic patterns. One could also identify which game features create more and stronger social building blocks, and thus broader social networks. If you know most of your friends from playing an online game, that game is what you have in common, strengthening the customer retention factor.
d)    Finally, failing to collect “metrics on metrics” means you cannot see how the team is using your metrics system: what features are popular, who uses which features, and what the response time is for users creating or viewing a report.

FIVE: Not using metrics generated via repeatable automated tests at the front end of the production pipeline to prevent defects from moving further down the production line.
a)    The earlier you detect a defect, the better.
b)    The further you let a bug travel down the production pipeline, the more expensive and time-consuming it becomes. Bug verification, bug assignment, bug replication, bug tracking, bug fixing and fix verification generate expensive noise that hinders already busy people.
c)    The more defects you allow into your build, the more you affect the productivity of the entire team! A half hour of junior-engineer work can drop in a build bomb that freezes your team for the hours it takes to find the problem, fix it and create a new build.
d)    Even worse, tracking down hard problems often requires your top technical people, who could otherwise be building useful systems! I measured a few such bugs on TSO; one little problem in the build consumed about 30 hours from five of the most expensive people on the team.
e)    Using metrics generated by repeatable automated tests before buggy code is checked in prevents those bugs from burning team-wide time.
f)    Many of the most valuable metrics in an online game can only be accurately produced via repeatable automated tests. Failure to integrate your metrics system with an automated testing system will, at worst, kill your project and, at best, cost you time and money that you might not have.
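One way to wire such test-generated metrics into the front of the pipeline is a pre-checkin gate that compares each repeatable test's measurements against stored baselines and blocks the checkin on regression. A minimal sketch; the metric names, baseline values and 10% tolerance are all invented for illustration:

```python
def gate(measurements, baselines, tolerance=0.10):
    """Return a list of regressions: metrics (lower is better) that exceed
    their recorded baseline by more than `tolerance` (10% by default)."""
    failures = []
    for name, value in measurements.items():
        base = baselines[name]
        if value > base * (1 + tolerance):
            failures.append(f"{name}: {value} vs baseline {base}")
    return failures

# baselines recorded from the last known-good build (lower is better)
baseline = {"load_time_s": 20.0, "frame_ms": 16.0}
# measurements from this checkin's repeatable automated test run
run = {"load_time_s": 27.5, "frame_ms": 16.2}

for failure in gate(run, baseline):
    print("BLOCK CHECKIN:", failure)
```

An empty result lets the checkin through; any entry stops the defect before it reaches the shared build, which is exactly where catching it is cheapest.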

Innovation Factories

Innovation requires a tremendous amount of iterative experimentation to capitalize on infrequent glimpses into a new, potential future. Doing this at scale is prohibitively slow and expensive, leading to a vicious cycle where cost and risk greatly limit our ability to expand into new fields.

Video games are a perfect case study for this problem. True innovation occurs rarely, while most of the industry flounders, squabbling over ever-smaller shares of the market with ever-larger numbers of competitors. Then, when a new niche is opened up via new gameplay models, new business models or new technology, the clone wars immediately start and profits and growth decline. Innovation keeps you ahead of the curve, but it is harder to schedule and harder to fund.

An Innovation Factory approach is proposed, where automation is heavily utilized to lower both the cost and the risk of innovation. Further, said automation is capable of taking innovative prototypes directly into the marketplace, bypassing the productization phase without risking scaling and quality-control issues. Finally, an Innovation Factory exists not just to get new products to market, but also to allow rapid, iterative adjustment to true market conditions and scaling to meet market demand.

Online games and Virtual Reality applications are excellent candidates for Innovation Factories. Much of the production and operations work is highly susceptible to reusable automation techniques, lowering the cost and risk of development and also lowering the recurring costs of running highly complex distributed systems with high quality-control requirements, high content-refresh-rate requirements and low operating-cost requirements. The scale of the development team, the scale of the potential user base and the scale of the application complexity all greatly limit innovation opportunities for most content creators: there is a direct relationship between iterative innovation at scale and the cost/risk/time of new products.

AAA products require AAA production techniques, currently only available to dominant market players, such as EA and Ubisoft.

  • Architectural support for iteration and Automation of the content creation, live operations and testing processes are useful in and of themselves.
  • When coupled with real-time Analytics of Players, Production and Performance, the whole becomes greater than the sum of the parts.
  • The result is a transformative leap in effective creativity, coupled with the ability to take rapid prototypes directly into the marketplace, all at radically reduced cost/risk/schedule factors.

Why is this not done already?

The market opportunity here is deceptively simple. The death rate of projects and studios limits the retention of painfully learned lessons on how to scale development without crippling innovation. Every project starts from scratch, with a new group of people, and it is always three months late, from day one. Senior engineers who learn these lessons get frustrated and leave the industry, to be replaced by young programmers who have dreamed only of making games, not scalable software and processes, and thus the vicious cycle repeats itself. Couple that with the lack of corporate memory caused by the churn rate of projects and studios. Add the fact that to make this work, you need a deep understanding of every aspect of the experimentation process and mindset: how to construct complex code and content that must shift direction almost daily, how to test rapidly and cheaply, at scale, how to cheaply field live operations of brittle prototypes, and how to modify them on the fly, quickly enough to react to shifts in the ecosystem. Factor in that most of the people who have the background and talent to pull off these challenges are only in games to work on gameplay features, not infrastructure. Top it off with the credibility and business/communication skills required to convince funding agents that investing in invisible infrastructure from the start is more valuable than pure feature work, and it becomes clear why this is not done every day…

Summary

  • Quality and speed, at scale
  • Innovation at scale
  • Accelerating the experimentation rate provides both the innovation of play mechanics and the highly iterative polish of the user experience that is so essential to success
  • Take Innovation directly to the market: put your rapid prototypes directly into the live market, iterating on design with one hand and on cost/scale/stability with the other

We need a massive shift in the mindset of how we build complex, interactive systems: a shift capable of fostering innovation at both the grass-roots level and the large-corporation level. But it is not enough to simply come up with new ideas; we also need a way to take new concepts directly, and with quality, to the market, and then quickly and iteratively improve them against real-world conditions.

What has worked before that we can copy from? Darwin-Driven Development!

  • Evolution rocks! It always finds a good solution to the current problem, and a way to adapt to shifts in the market/eco-system. But who has millions of years to get to market?
  • So we accelerate the huge random search called evolution
    • Automation to speed each step
    • Embedded metrics to help prune branches early and react to on-the-ground conditions
    • More automation, performance-test centric, to allow a rapid change rate without killing the code/product
  • Guided evolution, not random evolution, accelerated via automation!

 

 

Metrical Mistake: Andre Iguodala gets robbed of the NBA 6th Man Award

Metrical Mistake: Andre Iguodala just got robbed of the NBA 6th Man Award. Voters clearly looked only at the easy-to-measure, sexy but weak metric: points scored. I would argue that by putting these bad metrics in publications, we give kids the wrong idea of what skills to focus on in their training.
 
1) Metrics are like fire: a powerful servant, but a dangerous master.
 
2) Singleton metrics aren’t worth spit. Use groups of metrics, ones that tie directly to the core goal: in this case, winning games.

Jamal Crawford, of the Los Angeles Clippers, took the award for the third time, which you might think meant he was quite valuable.

By scoring a lot, in bursts, he is ranked highly. But his defense is so bad, his team is actually better off without him on the floor. Yes, Crawford scores a lot of points, but he lets the other team score even more!

Iggy was Finals MVP last year for the 2015 NBA champions, the Golden State Warriors. That’s a pretty solid indicator of value. The award is decided by a panel of nine high-end media members, after watching and commenting on every game in the series in detail. Experts with solid data: the qualitative view. Another qualitative view: Dre is a key part of the so-called Death Squad, the top-ranked lineup in the NBA. When these five guys are on the floor, they completely dominate. Everyone. Like, at a shocking level. The coaches put these guys on the floor together any time the game is on the line, and they deliver at the best rate in the conference. Jamal gets on the floor, and his team gets worse! So why did Crawford win? He’s a chucker: he physically can’t touch the ball without shooting it.

Let’s add a quantitative view to help out here: measured player actions and their impact on the overall game score. Then we compare a few different data sets and check for noise. For instance, if a player you just know isn’t that great comes up rated in the top ten in some composite metric, you know there is a flaw somewhere.

Example: one willowy center was getting killer PER numbers, one of the best aggregated stats. He was an anomaly: excellent numbers, but only in very specific scenarios and with very specific people around him. You knew he was an anomaly because he’s a bench player who gets limited burn; way less burn than any sane coach would give a normal player with such a high PER. Always validate your hard data with qualitative views.

So let’s look at Iggy and Jamal with several advanced stats, drawn from http://www.basketball-reference.com/play-index/pcm_finder.cgi?request=1&sum=0&y1=2016&p1=crawfja01&y2=2016&p2=iguodan01&y3=2016&p3=goberru01&y4=2016&p4=bogutan01&y5=2016&p5=greendr01&y6=2016&p6=leonaka01

Wow. Iggy outranks Jamal in almost every advanced stat! And in the regular stats, Jamal is ahead only on scoring-related statistics. Not shooting accuracy, not rebounding, not assists, not turnovers. Chuckers like Jamal are very overvalued as a result of people only seeing the easy metrics.

A final sample: Jamal’s Defensive Real Plus-Minus is the lowest of anyone on his team, and is ranked 454th in the league. Andre’s rank in the league: 6th. And let’s not get into hustle stats 😉

More details: https://www.numberfire.com/nba/news/4881/is-jamal-crawford-actually-a-liability-for-the-los-angeles-clippers

[update] Whee! Somebody did a similar analysis; let’s shift the voting patterns for awards! http://www.theroar.com.au/2016/04/21/nba-living-past-crawfords-6th-man-award/ and http://www.basketballinsiders.com/a-closer-look-iguodala-as-sixth-man-of-the-year/

 

A writer’s retreat weekend in Amsterdam

A very good writer’s retreat weekend in Amsterdam! The train ride is 5 hours, but it is such a smooth user experience that I got a lot done on the way here, and I’m more willing to travel via train than air. Then a nice evening of lingering over garlic mushrooms and Irish coffee, making notes on some interesting software development problems at the office and some preliminary notes for a new talk on rapid iteration.

You have no idea how hard it is to find a coffee shop in Amsterdam that actually sells, you know, actual coffee. With caffeine. But by a spectacular coincidence, they’re playing two of my favorite writing albums: I’m jamming with Miles Davis on Kind of Blue and bopping with Bob Marley’s best! So I guess I have to stay for a bit, right?

Three out of three hits on the random, walking-around food scene in Amsterdam! Wrapping up the weekend with a droolingly good Argentinian ribeye steak: a fantastic flavor experience after a few months in Germany, where the beef just brings the wrong type of tears to the eye. I am writing up more notes on scalability and rapid iteration in game production. This, I am beginning to feel, will be a most interesting talk: the 25-minute limit is making me really think about the core messages and the delivery framework, which is producing some interesting, clarifying moments.

I (Heart) Books

When I was young, I used to be addicted to virtual worlds, spending every waking moment in worlds populated by Orcs, Elves and Aliens. Except when I grew up, virtual worlds were called books.

A good day was reading two books; a great day was reading three. And a perfect day was avoiding conversations with anyone 😉

Good books are good friends, expanding your mind and providing new vistas and memories, time after time. You can geek out with old friends and a favorite turn of phrase when revisiting an older story, or take a short holiday in a new book; be a stranger in a strange land.

Even when times were tough, my mom would take me down to the used bookstore each month, where I was allowed to fill a single brown grocery bag with tattered but beguiling goodies. Optimizing for books in terms of size and shape resulted in a pretty odd tangle of reading material! But once I discovered The Hobbit, Tom Swift and Heinlein, it was all over: the wonders of science fiction won out over the size of the books, with a slight seasoning of fantasy to fill up the corners.

You don’t get out much when you live on a maggot ranch, so reading books and riding horses were pretty much my only entertainment there. But university changed everything for me. It taught me that girls existed, computers could be taught to play games, and that there was more to music than Disco. I’d drag in my big speakers from home and we’d crank code on the late shift, powered by chocolate-covered coffee beans, chai tea chats and classic albums from my fellow coders. They’d taken pity on my sad lack of musical knowledge and taught me to love music: Celtic to Classical, with Miles Davis, Pink Floyd, Cowboy Junkies and Gregorian Chants in between.

But even with all this new entertainment available, books remain a big part of my life. Enough so that I am finally trying to write a few of my own! I co-authored a textbook on building online video games a few years ago, and have a second textbook underway. Most of my free writing time is going into alternate history, science fiction type books; the research and plot development have been thrilling to do, but dialog and character development have proven quite tricky so far. If this were a work project, I’d just look up the answer, but figuring stuff out is half the fun for a personal project, so I’ve been re-reading dozens and dozens of books — good and bad — trying to distill what makes a book good. Bit by bit, I am getting there. Most of the stories on this blog are writing exercises of one form or another. It was quite difficult to start, as I am a very — probably excessively — private person and I don’t like exposing much of what I think. But over the years I’ve gotten less bad at it. Doing conference lectures was a big help: they make me dig deep into a topic and figure out how to communicate the essence without overwhelming detail.

Fishing with dad: a story I wrote for Father’s Day.

Kootenay is a massive lake with, back then, mostly deserted shores for camping, complete with spectacular skylines!
Kootenay has many different types of stunning views, and the height of the mountains around the lake brings snow into the summer view!

Dad and I would go out to Kootenay Lake three or four times a year; a six hour drive to reach the deep valley lake. It was over fifty miles long, with very cold, very deep water, set amid the beautiful Rocky Mountains of British Columbia. And best of all, perhaps, most of it was completely isolated. We would throw enough food into the boat for a week in the summer, and camp on isolated sandy beaches tucked in amongst the rocky coves. We got skunked five years running in terms of catching a fish (we were only after the big ones that lurk in the deep waters), but the user experience was amazing! We would build a nice fire out of driftwood and toss spuds jacketed in tinfoil into the embers. Using a giant granite boulder as a heat reflector, we’d grill some steaks broiled in butter and chill with a bottle of orange liqueur from his still. It was dark enough and high enough that you could see the satellites moving overhead, and the Milky Way became exactly that: so many stars that it looked like a solid stream of light. And the occasional Aurora Borealis as well. Dad would say “there are people that would pay a million dollars for this experience”, and then we would both chorus “so please introduce me to one, so that I can retire!”
Kootenay Lake is big enough that it doesn’t freeze over in the winter, so we hooked the boat up as a cabin cruiser. We’d head out a little after the crack of dawn, dad with a case of beer and me with a case of books. Then we would set out Velveeta cheese and crackers, with our favorite Ukrainian sausage, on the table in the middle. And we would pass the day in companionable silence while the stacks of beers and books went steadily down.

The fishing war was finally won one rainy winter day. I had cream of mushroom soup and hot dogs broiling in front of the propane heater we’d use to keep warm while winter fishing. One of the rods we had out began clanking in its holder, a complete change from the established practice. Dad and I looked at each other, puzzled, as the clanking continued and the reel began to buzz. After an appalling number of seconds, we came to the same realization: the first fish strike in five years!
We both leaped for the rod to get the situation under control, but we had delayed too long. It was an old-style reel that had managed to tangle itself somewhat as the fish jerked the line back and forth. Finally dad grabbed the reel and started to unsnag it while I frantically stripped line in by hand to see if the fish was still there, and ideally, keep the hook tight. There were a few jerks back and forth as the fish tried to shake the hook, so it was clear we still had a chance. Dad finally gave up on the reel and started stripping line himself, but we lost the fish. Dad began cursing up a storm as he shifted into high gear, restarted the boat and zoomed back in the direction from which we had just come. He was convinced the fish was still there, and with our down-rigger setup, we could keep the same depth and hopefully troll right back through his hunting zone. Given the gigantic size of the lake overall, and the hundreds of feet of depth that the fish could choose from, plus the fact that the fish now had a hook in the side of its mouth to make it a little wary, it seemed to me our chances were pretty low. But dad was right!

Less than ten minutes in we had another big strike, and this time, we were ready for it. We ended up pulling in a 12 pound Dolly Varden, a member of the salmon family. We had that sucker filleted and broiled in butter 30 minutes later, steak style. And that was one damn fine dinner!
Ice fishing was similar in concept but different in practice. Normally, you drive your car out onto the frozen lake’s surface, drill several feet through the ice and drop a worm on a hook into the hole. Then you either huddle miserably on a stool out on the ice, hoping to get lucky, or you jump back in the car and huddle miserably there, again just hoping to get lucky. Dad wanted to take advantage of the window in the winter fishing season when the ice was too thin to support a car, as well as improve the odds of actually catching something. So he came up with the Super-Duper Snoopy Shelter, a collapsible shed that formed a sled when collapsed, making it incredibly easy to hand-tow across the ice. Once you reached a good spot, you just popped it up! It used black plastic sidewalls to keep the ambient light out and the heat in, with a couple of holes cut through the floor ready for ice fishing, and room for a propane heater in the corner. Once we had augered through the ice, we would pack snow hard around the sled’s edges and the open spots of the floor. Dad wanted to be able to see down into the water to see how the fish were receiving our different offerings, and tune as required!

Other great fishing holes of ours included Buck Lake, set in some beautiful wooded hills. It was so quiet from the no-motorboats rule that I didn’t mind rowing for hours at a time to troll the damn hooks! We should have spent more time at the Kananaskis Upper Lakes: sheer cliffs line the shores, with mile after mile of deserted horizon views; simply stunning. The road in was a little dicey for the family sedan. We used the car more like a truck, hauling boats, often on mining truck back roads, so the city suspension and body width were sometimes problematic! But the one I remember most is White Swan Lake; very isolated and deep in the mountains. Similar cliffs to Kananaskis, but smaller and more wooded. We’d stay in old log cabins at the lake, cooking our catch over wood burning stoves, or in Skookumchuck, where dad had an old Chinese chum who taught us how to say the equivalent of “I’ve got a fish on the line!” in Mandarin.

Upper Kananaskis Lake: I love being above the tree line!
White Swan Lake still had the treeline look I love, but with softer wooded areas and a more cozy feeling than the Upper Kananaskis.
Natural Hot Springs by White Swan Lake! The old ones were in a dilapidated mining era shack, accessible only by this crazy steep and narrow path down a cliff face.
No motorboats at Buck Lake meant a lot of rowing for me, dad’s portable backup motor!
Kootenay Lake has some amazing night skies!

Guerrilla Gardening

One of my best guerrilla gardening projects, on a major bike & walking trail.

The upkeep time, a critical metric in guerrilla gardening, was minimal, as these are California Poppies: a native, drought-tolerant wildflower that only needs a bit of help to compete against imported grasses and weeds. This patch started as just a few scattered flowers. I invested a few 5-to-10-minute sessions, in winter and spring, in cleaning out the weeds. Once the poppies bloom, a few minutes of dead-heading each week keeps the bed looking great for several weeks. And by folding the work into a stretching break on a bicycle ride, the overall time hit was small, and working with flowers is a nice break on a ride! People walking by really liked it. Not enough to stop and help right then, but I decided it would make a great community project: getting people to adopt small parts of their favorite trails.

Ultimate in Hawaii


Sometimes people get bored on the sideline of an Ultimate Frisbee game and need to do something with their hands 😉

This was at the Kaimana Klassik, a high-level invitational tourney in Hawaii. Simply incredible: beautiful, fun atmosphere and great players! The fields are in a partially collapsed caldera, mere steps from the beach: a state park that you camp in for the Klassik. I had worked hard on my throws for months in advance: you just don’t want to shank a shot in front of the world-caliber throwers who come to play and party at the Klassik! I almost goobered it, though: taking a few hours of surfing lessons a few days before the tourney began seemed like a good idea at the time. But the constant paddling put an odd strain on my shoulder: my throwing shoulder! I had to bail out of the surfing lesson after a couple of hours to make sure I could still use the shoulder. My instructor got a little ticked off at my wimping out, but when I explained the situation, he got it and very kindly offered to take me out again after the tourney.
I was on the Spirit team: random players who come without a team. The Spirit team usually gets hammered by the tough competition, so I got to gamble more on long throws than normal to get the disc past the well-designed and well-executed team defenses. I made my only two called shots: two full field hucks, right off the pull, end zone to end zone. One was easy: catching the pull and throwing in the same motion. We caught them napping and my receiver was past their last man with a few steps to spare. Sometimes, just when the disc leaves your hand, you can feel that it’s perfect. This one came out as a real frozen rope, just zipping down the field, flat as a pancake, bang on target. A sweet, sweet feeling, and an easy catch.
For the second called shot, we knew we couldn’t pull the same wool over their eyes, so I took the pull and threw a slow-and-steady pass up to another handler at the twenty yard mark. We had a couple of curl cuts to suck in the defense a few steps, and after one faked upfield throw, the handler zipped a fast one back to me. Our fastest guy took off, sprinting down the right sideline. To give him a few steps advantage over the defense, I had to throw it before he made his cut, so he needed to know the where and when in advance. I told him I’d send an inside-out backhand down the left sideline, pulling the defense further out away from him, and then curve it deep into their end zone, targeting the middle to give both of us a bit of wiggle room.
The defense forced me to release a little early: I was worried I might have overshot him and anxiously redid the intersection math over and over as the receiver motored down the right sideline while the disc zoomed down the left sideline. But he had great wheels and pulled a sharp curve into the middle once he reached the end zone. And zap! The disc curved in from the other side and hit him in the belly, at a dead sprint, 75 yards away. I kinda had to grin at that one: it’s a low probability shot that you’d miss more often than you’d make. But when a world class thrower mutters “good shot” as we walked back to set up again, you can’t help but feel good 😉

All the practice time was paying off. When we picked up a few new players for the last day, I overheard our captain tell them that when Larry had the disc, they should just cut to any open space: he could hit them anywhere on the field. Then the captain started to walk away, paused thoughtfully for a moment, walked back to the new players and emphasized “and I mean anywhere”; a great feel-good moment for me!

If only I hadn’t broken Michelle’s toe, just before the first game of the first day, it would have been a great weekend!

But she crammed her foot into her cleats before the toe swelled up, played hard all day and then danced hard all night. She was truly radiant with her happy, happy smile and exuberant love of life! She was also smoking hot in a bared-back, peacock-themed top I had picked up for her: “knuckle-bitingly hot” as a friend described it. Several times. 😉  And so the weekend turned out to be pretty nice after all!

Our nifty SF Mission Loft

We’ve always wanted to try a loft place. Michelle found this ground-floor, New York loft-style building — with a killer garden space — in the Mission district. It is a converted photography studio with lots of natural light and high, bright ceilings! But because it is scheduled to be torn down next year, Michelle got an amazing deal on the rent, giving us a low-cost way to explore this part of San Francisco and decide if the area is worth the nose-bleed pricing 😉 Update: the city has ruled that this pre-Earthquake building is historic! Turns out it was a gymnasium for an acrobat group, which partially burned down in the Great Fire, then the remaining part became a German community center. So our landlord has to keep the building intact, but he can add a second floor.
Michelle’s keystone requirements were: an easier, rail-based commute to her land conservation gig in Palo Alto; more accessible closet space; and the biggest one, a chance to do something new. She obsessed over the planning like me over a GDC talk 😉 Our policy is whoever has the hot hand in design, and/or is doing the heavy lifting, gets to drive, so the inside work is all hers. I got to obsess over maximizing the usability of an SF-sized garden, which was in sad, sad shape. My requirements were simpler: I wanted to check out the livability of the area and have bicycle-commute distances to new jobs and rail centers.
   entry way
Pictures and descriptions are on Facebook: Mission Loft