JIT or AOT Learning?

Colin Jones

February 12, 2013

Our jobs as creators and maintainers of software systems are heavily wrapped up in our ability to learn. Generally we need a base level of knowledge before we start - this is why companies have interview processes for hiring. Depending on the company and job to be done, this level varies widely, but there's almost always something you need to know up front. On the other hand, writing software is not an assembly line job. We're generally building something we've never built before. This is one reason that estimating accurately is so very difficult. Languages, frameworks, and APIs change as the years go on, as do the people using them. We make personal discoveries every day about performance, security, concurrency, networking, and more.

There are choices to be made, though, in our personal learning styles, and that's going to be the focus here. How should we spend our 20 hours a week of learning outside of work? [1] Should we learn broadly or deeply? Should we learn about technology needed for work tomorrow, or what might be needed over the coming years? I'll try not to paint with too broad of a brush here: certainly there are continua along a number of axes. But people do have preferences, and I find it interesting to think about why our learning preferences exist, and what implications those preferences have.

To talk about one of these axes, I'll borrow a couple of terms from the compiler world: just-in-time (JIT) and ahead-of-time (AOT). Ahead-of-time compilation generates code before a given program ever runs, while just-in-time compilation does its work while the program is running. One advantage of JIT is that it can make performance optimizations on the fly, after learning at runtime where things like expensive tight loops actually live. Call sites can be cached to speed up dynamic language method calls, and hot loops can be compiled down into more highly optimized machine code. The tradeoff is that the first time a given piece of code executes, it runs relatively slowly. [2]
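
To make the call-site caching idea a bit more concrete, here's a toy sketch in Python. It isn't a real JIT - the InlineCache class and the animal classes are invented purely for illustration - but it shows the shape of the trick: pay for the slow dynamic lookup the first time through, then reuse the cached result as long as the receiver type stays the same.

    # Toy sketch of a monomorphic inline cache, the kind of call-site caching
    # a JIT might do for dynamic method calls. Purely illustrative.

    class InlineCache:
        """Remembers the method found for the last receiver type seen at this call site."""
        def __init__(self, method_name):
            self.method_name = method_name
            self.cached_type = None
            self.cached_method = None

        def call(self, receiver, *args):
            receiver_type = type(receiver)
            if receiver_type is not self.cached_type:
                # Cache miss: do the (relatively slow) dynamic lookup, then remember it.
                self.cached_type = receiver_type
                self.cached_method = getattr(receiver_type, self.method_name)
            return self.cached_method(receiver, *args)

    class Dog:
        def speak(self):
            return "woof"

    class Cat:
        def speak(self):
            return "meow"

    speak_site = InlineCache("speak")
    print(speak_site.call(Dog()))  # miss: looks up Dog.speak, then caches it
    print(speak_site.call(Dog()))  # hit: reuses the cached method
    print(speak_site.call(Cat()))  # miss: the receiver type changed, so look up again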

An advantage of ahead-of-time compilation is that the program can start running at the machine level straight away, and system resources go toward running the program rather than compiling it. The tradeoff is that the compiler has to make things fast without ever running the program, which in practice is very hard, and the compile step itself certainly adds up-front time.

Now, our brains are some pretty smart compilers. We are learning all the time, which suggests that we do just-in-time compilation to improve our knowledge. But if we consider ourselves at discrete points in time - on a particular day, month, or year - we might be more inclined to think of ourselves as different ahead-of-time-compiled programs at different times.

When we do mainly just-in-time learning, we acknowledge that we may need some on-the-job time to learn what we need to know in order to do our work. But by doing so, we avoid the expense of learning things we may never use. Our time is valuable, so putting in weeks or months learning a language we'll never use may be wasteful. On the flip side, spending work time learning things that seem obvious to a colleague who learned them ahead of time might make us look as though we don't know everything, and if something takes a long time to learn, our bosses or clients might balk at the cost of our time.

On the other hand, if we try to front-load our learning, AOT-style, we incur the risk that our time spent learning turns out to be a waste, and that in the end we'll need to fall back to just-in-time learning anyway. The payoff, when we learn the right thing ahead of time, is that the job itself doesn't take as long.

Thus far, these tradeoffs have been strictly in terms of the time cost of up-front work. What about the costs of long-term maintenance? Will there be fewer bugs when my knowledge is brand new, or when my learning has been going on for quite a while? Bugs come in many varieties, including ones that only occur in production under certain traffic profiles, security vulnerabilities that are difficult to exploit, and your run-of-the-mill unexpected-input handling. A compiler has the benefit that its job is only to translate source code into machine code that behaves according to the language specification. So it's either fast or slow: incorrectness is not allowed in a production compiler. But people's knowledge is a trickier thing, where we may not even know we've done something incorrect until someone with more knowledge in the area points it out. [3]

For my part, I'd certainly prefer to have learned ahead of time (over the last month) the knowledge I need (today and for the next few weeks), because I believe I'm less likely to create bugs that way. If such time travel were possible, I could spend my at-work cycles thinking about edge cases and the performance and security characteristics of my code. I have no scientific studies to support this belief, only anecdotes about writing bad code before I really grokked the language/platform/paradigm.

But unfortunately, time travel isn't possible, or at least not accessible to us, so how can we ahead-of-time compile/learn the information we'll need in the future? Well, what if we had a compiler that could predict what code would be executed weeks, months, or years from now? It could spend its spare cycles compiling the areas of knowledge most likely to eventually pay off. On a small timescale this already exists in just-in-time compilers, of course, but humans know more about the outside world and can make larger predictions, based on longer-term trends in our industry, company, and team. So if we like, we can make predictions, with varying degrees of accuracy, about what we'll be doing next month, what knowledge gaps we have, and where those items intersect.

Assuming we wanted to spend our valuable time making predictions like this, how would we manage risk? Would we be willing to invest 40 hours of study in an area that's only 10% likely to be helpful? Perhaps not. But 40 hours in an area that is 80% likely to be useful? Maybe so. Would mistakes be costly to the business or our personal reputation? That information might tip the scale in one direction or the other.
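
One rough way to frame that risk is as an expected-value calculation. The sketch below uses made-up numbers - the 60 hours saved is an assumption for illustration, not data - but it shows how the probabilities above can tip the decision:

    # Back-of-the-envelope expected value of studying something ahead of time.
    # All figures are hypothetical, for illustration only.

    def expected_payoff(study_hours, probability_useful, hours_saved_if_useful):
        """Expected net hours gained (or lost) by studying ahead of time."""
        return probability_useful * hours_saved_if_useful - study_hours

    # 40 hours of study, 10% likely to be useful, saving 60 hours if it pays off:
    print(expected_payoff(40, 0.10, 60))  # -34.0 hours: perhaps not
    # The same 40 hours in an area that's 80% likely to be useful:
    print(expected_payoff(40, 0.80, 60))  # 8.0 hours: maybe so

Of course, a real decision also weighs the cost of mistakes, which is exactly why the reputation and business-risk questions above matter.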

Just-in-time learning isn't ever going away - we certainly can't foresee everything we'll need to know. Likewise, we learn for other reasons besides direct business needs. Getting better at the practice of learning itself can accelerate our learning in the future. And learning for its own sake can also be a valuable goal - whether for entertainment purposes or just plain curiosity. So I'm not here with a strong prescription to do only one or the other.

One thing is clear to me, though: if we always delay learning until we need it, progress will be slower, and mistakes can happen without us even knowing it. If we're willing to put in a small amount of time predicting what we think is most likely to be useful down the road, and to spend our learning efforts in those areas, we may set ourselves up to avoid costly mistakes. But most importantly, let's spend our learning time wisely by making sure we're aware of the tradeoffs we make and the risks we take, whether we tend in one direction or another.

[1] Uncle Bob Martin recommends 20 hrs/week of learning/practice for yourself, not your employer, in The Clean Coder. It's a lot of time, for sure. But it does help.

[2] Relative to how quickly a machine code program would run.

[3] See the Dunning-Kruger effect and the poetry of Donald Rumsfeld.