Coders at Work: Reflections on the craft of programming

Brendan Eich

Creator of JavaScript, perhaps the most widely used and most reviled programming language on the modern Web, Brendan Eich is now CTO of the Mozilla Corporation, the subsidiary of the Mozilla Foundation responsible for continuing development of the Firefox browser.

With an appreciation of both elegant theory and good pragmatic engineering, Eich spent the early days of his career hacking network and kernel code at Silicon Graphics and MicroUnity. After MicroUnity, he moved to Netscape, where he worked on the Netscape browser and, under intense time pressure, invented JavaScript.

In 1998, along with Jamie Zawinski, he was one of the leaders of the effort to convince Netscape to open-source its browser, leading to the formation of mozilla.org, where he was chief architect.

In recent years Eich has been involved in both high-level direction setting for the Mozilla platform and in low-level hacking on a new JITing JavaScript virtual machine called TraceMonkey. And, as he explains in this interview, he has also been trying to find ways for the Mozilla project to move the research needle, bringing practical-minded academics into the Mozilla fold in order to bridge the gap between academic theory and industrial practice.

Other topics we touched on include why JavaScript had to look somewhat like Java but not too much, why JavaScript does still need to grow as a language despite the failure of the ECMAScript 4 project, and the need for more kinds of static code analysis.

Seibel: When did you learn to program?

Eich: I was a physics major as an undergraduate at Santa Clara in the late '70s, early '80s. We used to go over to Stanford and hack into LOTS-A and LOTS-B, which were the two big timesharing DEC TOPS-20 systems, and Santa Clara had a TOPS-20 system: a nice 36-bit processor from DEC, great OS, wonderful macro assembler. C is portable assembler but the macro processing is terrible, whereas back then you had real macros for assembly and you could do a lot of fairly structured programming if you were disciplined about it. No type system, but C doesn't have much of one to speak of. And a rich set of system calls, system services, memory-mapped I/O, all the stuff that didn't make it into Unix originally.

I was doing physics but I was starting to program more and I was liking the math and computer-science classes I was taking, dealing with automata theory and formal languages. At that point there was a research race to develop the best bottom-up parser generator, what yacc was doing and others would do. It was easy to see the formal purity translate into fairly clean code, which has almost always been the case with the front end of compiler construction. The back end back then was a mess of lore and heuristics, but I really liked the formal language theory and the theory of regular languages and so on.

Seibel: And what languages and environments were you programming in? Presumably in physics you were doing Fortran?

Eich: That's the funny thing. I was so pure physics, I wasn't taking the engineering classes that would've had me carrying the deck and risking spilling it and using the collator. I actually bypassed Fortran. Pascal was big then and we were getting into C. And assembly. So I was doing low-level coding, writing assembly hash tables and stuff like that. Which was good. You get better appreciation for different trade-offs. You can tell the programmers who've actually been down to bit-banging level versus the ones who've been shielded all their lives.

I also was interested in C and Unix but we were only just getting into it with this old DEC iron. We had the Portable C Compiler yacc-based mess and we were just starting to generate code and play around with Unix utility porting. Since physics wasn't getting me summer jobs and I was doing a lot of hacking and being a lab assistant, I ended up switching in my senior year to math/computer science, and that's what I got my undergraduate degree in.

Seibel: What was the first interesting program that you remember writing?

Eich: This is going to be embarrassing. There was a terrible graphics terminal DEC made; it was like an evolution of the VT100 because it understood the escape sequences, but it also had some pathetic color depth and some sort of early-'80s resolution. So I started writing game knockoffs for it: Pac-Man, Donkey Kong. I wrote those games in Pascal and they emitted escape sequences. But it was kind of hobby programming that grew and grew. I think that was the first nontrivial programming I did where I had to think about modularity and protecting myself.

This was while I was a physics major, probably third year. Fourth year I became math/computer science and I was starting to study formal languages and write parser generators. So those were the kinds of programs I wrote: either games or serious nerd parser-generator type programs. Then I started thinking about compilers and writing macro-processor knockoffs, like m4 knockoffs or CPP knockoffs. I remember when we got some version of the Unix source and reading some of the really crufty C code. The John Reiser C preprocessor (probably the original) was just an amazing mess. It was very efficient; it was using a global buffer and it was a huge pointer stew, and it would try to avoid copying. I thought, "There has to be a better way to do this."

So that's how I ended up getting out of physics and into computer science and programming. I wasn't really programming before that; it was all math and science. My parents didn't let me get an Apple II. I tried once. I didn't beg but I was saying, "I could learn a foreign language with this," which was a complete smoke screen. "No. You'll probably waste time playing games." And they were right. So they saved me from that fate.

Seibel: Other than providing more summer employment than physics, what about programming drew you in?

Eich: The connection between theory and practice, especially at the front end of the compiler-construction process, was attractive to me. Numerical methods I didn't get into too much. They're less attractive because you end up dealing with all sorts of crazy trade-offs in representing real numbers as finite-precision floating-point numbers and that's just hellish. It still bites JavaScript users because we chose this hardware standard from the '80s and it's not always operating the way people expect.

Seibel: Because, like the Spanish Inquisition, no one really expects floating point.

Eich: No one expects the rounding errors you get: powers of five aren't representable well. They round badly in base two. So dollars and cents, sums and differences, will get you strange long zeros with a nine at the end in JavaScript. There was a blog about this that blamed Safari and Mac for doing math wrong, and it's IEEE double; it's in everything, Java and C.
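
A minimal illustration of the rounding Eich describes, in JavaScript (any engine with IEEE-754 double Numbers behaves the same way):

    // 0.1 and 0.2 are negative powers of ten, so they have no exact
    // representation as base-two doubles; the error shows up in the sum.
    console.log(0.1 + 0.2);           // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3);   // false
    // Keeping money in integer cents sidesteps the problem:
    console.log((10 + 20) / 100);     // 0.3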

Physics was also less satisfying to me because it has kind of stalled. There's something not quite right when you have these big inductive theories where people are polishing corners and inventing stuff like dark energy, which is basically unfalsifiable. I was gravitating toward something that was more practical but still had some theoretical strength based in mathematics and logic.

Then I went to University of Illinois Champaign-Urbana to get a master's degree, at least. I was thinking of going all the way but I got stuck in a project that was basically shanghaied by IBM. They had a strange 68020 machine they had acquired from a company in Danbury, Connecticut, and they ported Xenix to it. It was so buggy they co-opted our research project and had us become like a QA group. Every Monday we'd have the blue suit come out and give us a pep talk. My professors were kind of supine about it. I should've probably found somebody new, but I also heard Jim Clark speak on campus and I pretty much decided I wanted to go work at Silicon Graphics.

Seibel: What were you working on at SGI?

Eich: Kernel and networking code mostly. The amount of language background that I used there grew over time because we ended up writing our own network-management and packet-sniffing layer and I wrote the expression language for matching fields and packets, and I wrote the translator that would reduce and optimize that to a short number of mask-and-match filters over the front 36 bytes of the packet.

And I ended up writing another language implementation, a compiler that would generate C code given a protocol description. Somebody wanted us to support AppleTalk in this packet sniffer. It was a huge, complex grab bag of protocol syntax for sequences and fields of various sizes and dependent types of mostly arrays, things like that. It was fun and challenging to write. I ended up using some of the old Dragon book (Aho and Ullman) compiler skills. But that was it. I think I did a unifdef clone. Dave Yost had done one and it didn't handle #if expressions and it didn't do expression minimization based on some of the terms being pound-defined or undefined, so I did that. And that's still out there. I think it may have made its way into Linux.

I was at SGI from '85 to '92. In '92 somebody I knew at SGI had gone to MicroUnity and I was tired of SGI bloating up and acquiring companies and being overrun with politicians. So I jumped, and it was to MicroUnity, which George Gilder wrote about in the '90s in Forbes ASAP as if it was going to be the next big thing. Then down the memory hole; it turned into a $200 million crater in North Sunnyvale. It was a very good learning experience. I did some work on GCC there, so I got some compiler-language hacking. I did a little editor language for MPEG2 video where you could write this crufty pseudospec language like the ISO spec or the IEC spec, and actually generate test bit streams that have all the right syntax.

Seibel: And then after MicroUnity you ended up at Netscape and the rest is history. Looking back, is there anything you wish you had done differently as far as learning to program?

Eich: I was doing a lot of physics until I switched to math and computer science. I was doing enough math that I was getting some programming but I had already studied some things on my own, so when I was in the classes I was already sitting in the back kind of moving ahead or being bored or doing something else. That was not good for personal self-discipline and I probably missed some things that I could've studied.

I've talked to people who've gone through a PhD curriculum and they obviously have studied certain areas to a greater depth than I have. I feel like that was the chance that I had then. Can't really go back and do it now. You can study anything on the Internet but do you really get the time with the right professor and the right coursework, do you get the right opportunities to really learn it? But I've not had too many regrets about that.

As far as programming goes, I was doing, like I said, low-level coding. I'm not an object-oriented, design-patterns guy. I never bought the Gamma book. Some people at Netscape did, some of Jamie Zawinski's and my nemeses from another acquisition; they waved it around like the Bible and they were kind of insufferable because they weren't the best programmers.

I've been more low-level than I should've been. I think what I've learned with Mozilla and Firefox has been about more test-driven development, which I think is valuable. And other things like fuzz testing, which we do a lot of. We have many source languages and big deep rendering pipelines and other kinds of evaluation pipelines that have lots of opportunity for memory-safety bugs. So we have found fuzz testing to be more productive than almost any other kind of testing.
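
A toy sketch of the fuzz-testing idea in JavaScript; Mozilla's real fuzzers are far more grammar-aware, but the principle is the same: generate random near-valid input and flag any failure that is not the clean error you expected.

    var tokens = ["(", ")", "{", "}", "function f()", "var x", "+", "1", ";", "[]"];

    function randomProgram(maxTokens) {
      var src = "";
      var n = Math.floor(Math.random() * maxTokens);
      for (var i = 0; i < n; i++) {
        src += tokens[Math.floor(Math.random() * tokens.length)] + " ";
      }
      return src;
    }

    for (var run = 0; run < 10000; run++) {
      var src = randomProgram(20);
      try {
        new Function(src);  // exercise the parser only; a SyntaxError is expected
      } catch (e) {
        if (!(e instanceof SyntaxError)) {
          console.log("interesting failure:", e, "for input:", src);
        }
      }
    }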

I've also pushed us to invest in static analysis and that's been profitable, though it's fairly esoteric. We have some people we hired who are strong enough to use it.

Seibel: What kind of static analysis?

Eich: Static analysis of C++, which is difficult. Normally in static analysis you're doing some kind of whole-program analysis and you like to do things like prove facts about memory. So you have to disambiguate memory to find all the aliases, which is an exponential problem, which is generally infeasible in any significant program. But the big breakthrough has been that you don't really need to worry about memory. If you can build a complete control-flow graph and connect all the virtual methods to their possible implementations, you can do a process of partial evaluation over the code without actually running it. You can find dead code and you can find redundant tests and you can find missing null tests.

And you can actually do more if you go to higher levels of discourse where we all operate, where there's a proof system in our head about the program we're writing. But we don't have a type system in the common languages to express the terms of the proof. That's a real problem. The Curry-Howard correspondence says there's a correspondence between logic systems and type systems, and types are terms and programs are proofs, and you should be able to write down these higher-level models that you're trying to enforce. Like, this array should have some constraint on its length, at least in this early phase, and then after that it maybe has a different or no constraint. Part of the trick is you go through these nursery phases or other phases where you have different rules. Or you're inside your own abstraction's firewall and you violate your own invariants for efficiency but you know what you're doing and from the outside it's still safe. That's very hard to implement in a fully type-checked fashion.

When you write Haskell programs you're forced to decide your proof system in advance of knowing what it is you're doing. Dynamic languages became popular because people can actually rapidly prototype and keep this latent type system in their head. Then maybe later on, if they have a language that can support it, or if they're recoding in a static language, they can write down the types. That was one of the reasons why in JavaScript we were interested in optional typing and we still are, though it's controversial in the committee. There's still a strong chance we'll get some kind of hybrid type system into a future version of JavaScript.

So we would like to annotate our C++ with annotations that conservative static analysis could look at. And it would be conservative so it wouldn't fall into the halting-problem black hole and take forever trying to go exponential. It would help us to prove things about garbage-collector safety or partitioning of functions into which control can flow from a script, functions from which control can flow back out to the script, and things to do with when you have to rematerialize your interpreter stack in order to make security judgments. It would give us some safety properties we can prove. A lot of them are higher-level properties. They aren't just memory safety. So we're going to have to keep fighting that battle.

Seibel: So that's a very high-level view of programming. How close to the metal do you think programmers today need to be able to go? If someone is going to be writing most of their applications in JavaScript, is it still important that they grok assembly?

Eich: I know a lot of JavaScript programmers who are clever programmers, and the best ones have a good grasp of the economics. They benchmark and they test as they go and they write tight JavaScript. They don't have to know about how it maps to machine instructions.

A lot of them are interested in that when they hear about these JITing, tracing VMs that we're building. And we're getting more and more people who are pushing pixels. If you give people enough programming-language performance and enough pixel-pushing power I think JavaScript programmers will start using JavaScript at a lower level. And machine economics or the virtual-machine economics: what really matters? Maybe it's the virtual-machine economics.

Abstraction is powerful. What I'm really allergic to, and what I had a bad reaction to in the '90s, was all the CORBA, COM, DCOM, object-oriented nonsense. Every startup of the day had some crazy thing that would take 200,000 method calls to start up and print "hello, world." That's a travesty; you don't want to be a programmer associated with that sort of thing. At SGI, the kernel, of course, was where the real programmers with chest hair went, and there you couldn't screw around. Kernel malloc was a new thing; we still used fixed-sized tables, and we panicked when we filled them up.

Staying close to the metal was my way of keeping honest and avoiding the bullshit, but now, you know, with time and better, faster hardware and an evolutionary winnowing process of good abstractions versus bad, I think people can operate above that level and not know assembly and still be good programmers and write tight code.

Seibel: Do you think, conversely, that the people who, back in the day, could write intricate, puzzle-box assembly code, would be just as great programmers in today's world doing high-level programming? Or does that kind of programming require different skills?

Eich: I would say for certain aspects of programming there is a correspondence between the two. There's a difference between raw pointers and this happy, fun JavaScript world. That kind of still separates the chest-hair (gender-independent) programmers from those who don't quite have it.

Keeping it all in your head is important. Obviously people have different-sized heads. Somebody with a big head could keep track of higher-level invariants in a memory-safe architecture and not have to worry about pointers. But there's something still that bothers me if over time we lose the ability to write to the metal. Somebody's doing it; the compiler is generating the code. The compiler writers have to be doing a better job over time.

Seibel: So there will always be a place for that kind of programming. But are there people who can be successful programmers now who just couldn't when all programming was low-level hacking? Or is there one fixed population of people who have the right kind of brain for programming and now they're split with some people doing low-level stuff and some people doing high-level stuff?

Eich: I haven't hacked kernel code in a long time, so I would have to go for there's some ability to migrate. There's more code to write. And sound abstractions give you leverage over problems you couldn't address before.

Seibel: Let's go back to those ten days when you implemented the original JavaScript. I know that at some point someone had turned you on to Abelson and Sussman and your original idea was to put Scheme in the browser.

Eich: The immediate concern at Netscape was it must look like Java. People have done Algol-like syntaxes for Lisp but I didn't have time to take a Scheme core, so I ended up doing it all directly and that meant I could make the same mistakes that others made.

I didn't have total dynamic scope, like Stallman insisted was somehow important for Emacs and infested Elisp with. JavaScript has mostly lexical scope with some oddness to it; there are a few loopholes that are pretty much dynamic: the global object, the with statement, eval. But it's not like dollar variables in Perl before my, or upvar and uplevel in Tcl. The '90s was full of that; it was trendy.
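
The loopholes he lists are easy to demonstrate; a quick sketch in non-strict JavaScript (strict mode later forbade with):

    var x = "global";   // in a browser this also becomes window.x: the global-object loophole

    function lexical() {
      var x = "local";
      return x;                // always "local": resolved from the source text
    }

    function dynamic(obj) {
      with (obj) {
        return x;              // obj.x if present, else the outer x: decided at run time
      }
    }

    console.log(lexical());                   // "local"
    console.log(dynamic({}));                 // "global"
    console.log(dynamic({ x: "shadow" }));    // "shadow"

    eval("var y = 1");   // direct eval injects a new binding into the calling scope
    console.log(y);      // 1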

But I didn't stick to Scheme and it was because of the rushing. I had too little time to actually think through some of the consequences of things I was doing. I was economizing on the number of objects that I was going to have to implement in the browser. So I made the global object be the window object, which is a source of unknown new name bindings and makes it impossible to make static judgments about free variables. So that was regrettable. Doug Crockford and other object-capabilities devotees are upset about the unwanted source of authority you get through the global object. That's a different way of saying the same thing. JavaScript has memory-safe references so we're close to where we want to be, but there are these big blunders, these loopholes.

Making those variables in the top level actually become mutable properties of an object that you can alias and mess around with behind the back of somebody: that's no good. It should've been lexical bindings. Because if you go down from there into the functions and nested functions, then it is much more Scheme-like. You don't quite have the rich binding forms, the fluid lets or whatever; you have more like set-bang. But the initial binding you create with a local variable is a lexical variable.

Seibel: So basically now people make a top-level function to get a namespace.

Eich: Yeah. You see people have a function and they call it right away. It gives them a safe environment to bind in, private variables. Doug's a big champion of this. It was not totally unknown to the Schemers and Lispers, but a lot of JavaScript people had to learn it, and Doug and others have done a valuable service by teaching them. It's not like you're getting everybody to be high-quality Scheme programmers, unfortunately, but to some extent they've succeeded, so people now do understand more functional idioms at some patterny level, not necessarily at a deep level.
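
A minimal sketch of the pattern in question: call a function expression immediately so its locals act as private state instead of global-object properties.

    var counter = (function () {
      var count = 0;    // private: a lexical variable, not a property of the global object

      return {
        increment: function () { return ++count; },
        current: function () { return count; }
      };
    })();

    counter.increment();
    counter.increment();
    console.log(counter.current());   // 2
    console.log(typeof count);        // "undefined": nothing leaked into the global scope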

Seibel: So that's the JavaScript that's been out there for over a decade. And now there's this big renaissance due to Ajax. So folks say, "OK, we really need to take another look at this." You recently went through the drama of the ECMAScript 4 proposal and the competing ECMAScript 3.1 proposal, and now things seem to have settled down with the Harmony plan for unifying the two. Was the ES4 proposal your chance to show the world that, "Look, I'm a really smart guy and JavaScript is really a good language"?

Eich: No, I don't think so. I know Doug may think that. I don't think Doug knows me that well, but the thing is, I'm not really looking for respect, especially from the Java-heads or the trailing edge.

Seibel: Was ES4 your brainchild? Was it your take on, knowing all that you know now, what you want JavaScript to be?

Eich: No. It was definitely a collaborative effort and in some ways a compromise because we were working with Adobe, who had done a derivative language called ActionScript. Their version three was the one that was influencing the fourth-edition proposals. And that was based on Waldemar Horwat's work on the original JavaScript 2/ECMAScript fourth-edition proposals in the late '90s, which got mothballed in 2003 when Netscape mostly got laid off and the Mozilla Foundation was set up.

Waldemar did a good job; I gave him the keys to the kingdom in late '97 when I went off to found mozilla.org with Jamie. Waldemar is a huge brain; I think he won the Putnam in '87. MIT PhD. He did try and keep the dynamic flavor of the language, but he struggled to add certain programming-in-the-large facilities to it, like namespaces.

There's a contrary school, which is more pedantic: "We should have just a few primitives we can de-sugar our spec to; we can lambda code everything. That's how people should write anyway because that's how I think of things," or, "That's the best way to think of things." It's very reductionistic and it's not for everybody. Obviously one way to do your own mental proof system is to reduce things, to subset languages. Subsetting is powerful. But to say everyone has to program in this sort of minuscule subset, that's not usable.

Seibel: In some of the discussion about ES4, you cited Guy Steele's paper, "Growing a Language." Speaking as a Lisper, to me the take-away from that paper was, step one, put a macro system in your language. Then all of this special sugar goes away.

Eich: There are two big problems, obviously. C syntax means that you have a much harder time than with s-expressions, so you have to define your ASTs and we're going to have to standardize them and that's going to be a pain. Then there's the other problem, which is hygiene is still not quite understood. Dave Herman, who's working with us, is doing his thesis (or was, last I checked) on a kind of logic for proving soundness for hygiene, which is, I hope, beneficial. Because we will get to macros.

I said this to Doug Crockford a couple years ago when he had me speak at Yahoo! I started talking about the sugar that I was enthusiastic about. He said, "Gee, maybe we should do a macro system first," and I said, "No, because then we'll take nine years." At the time there was a real risk politically that Microsoft was just not going to cooperate. They came back into ECMA after being asleep and coasting. The new guy, who was from Hyderabad, was very enthusiastic and said, "Yes, we will put the CLR into IE8 and JScript.net will be our new implementation of web JavaScript." But I think his enthusiasm went upstairs and then he got told, "No, that's not what we're doing." So it led to the great revolt and splitting the committee.

So we were concerned that if we went off to do macros we were doing research, and if we were doing research we were not going to have Microsoft engaged and we were not going to be putting competitive pressure on them. So macros have had to wait. I'm fine with that so long as we do the right automated grammar checks and we do make sure we can recast all of the sugar as macros when we have macros. But in the meantime there's no reason to starve the users for sugar. It doesn't rot their teeth and it helps them avoid mistakes.

Seibel: Back in 1995, what other languages influenced your original design of JavaScript?

Eich: Self was big, mainly because of the papers that Dave Ungar had just written. I never played with any Self code, but I was just inspired by them. I like Smalltalk, and here was somebody taking one idea applied to Smalltalk, which was prototype-based delegation (multiple prototypes, unlike JavaScript) and just running with it as hard as they could. That was inspiring to me because there was both good compiler, VM-level engineering and, I thought, good language design.

Because, like Crock and others, I think you do want to simplify, and I do like the language designers who take fewer primitives and see how far they can go. I think there's been kind of a Stockholm syndrome with JavaScript: "Oh, it only does what it does because Microsoft stopped letting it improve, so why should we want better syntax; it's actually a virtue to go lambda-code everything." But that Stockholm syndrome aside, and Microsoft stagnating the Web aside, language design can do well to take a kernel idea or two and push them hard.

Seibel: Were you aware of NewtonScript at all?

Eich: Only after the fact did someone point it out to me and I realized, "Hey, they've got something like our scope chain in their parent link and our single prototype." I think it was convergent evolution based on Self. And the DOM event handlers: part of the influence there was HyperTalk and Atkinson's HyperCard. So I was looking not only at Self and Scheme, but there were these onFoo event handlers in HyperTalk, and that is what I did for the DOM onClick and so on.
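
For illustration, the onFoo style survives in DOM code to this day; a sketch assuming a page with a hypothetical element whose id is "save":

    // HyperTalk had handlers like "on mouseUp"; the DOM kept the naming scheme.
    var button = document.getElementById("save");
    button.onclick = function (event) {
      console.log("clicked at", event.clientX, event.clientY);
    };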

One more positive influence, and this is kind of embarrassing, was awk. I mean, I was an old Unix hacker and Perl was out, but I was still using awk for various chores. And I could've called these first-class functions anything, but I called them function mainly because of awk. An eight-letter keyword; it's kind of heavy, but there it is.

Seibel: At least it wasn't lambda; JavaScript would've been doomed from the start. Were there any languages that negatively influenced JavaScript, in the sense of, "I don't want to do that"?

Eich: It was such a rush job that I wasn't, like, worried about, "Oh, I can't make it into Ada or Common Lisp." Java was in some ways a negative influence. I both had to make it look like Java and not let in those crazy things like having a distinction between primitive types and objects. Also, I didn't want to have anything classy. So I swerved from that and it caused me to look at Self and do prototypes.

Seibel: Did you ever consider making a language more closely related to Java: take Java and make some kind of simple subset; get rid of the primitive types and other needless complexities?

Eich: There was some pressure from management to make the syntax look like Java. There was also some pressure to make it not too big, because after all, people should use Java if they're doing any real programming; this is just Java's dumb little brother.

Seibel: So you wanted to be like Java, but not too much.

Eich: Not too much. If I put classes in, I'd be in big trouble. Not that I really had time to, but that would've been a no-no.

Seibel: Coming back to the present, ES4 has been officially abandoned and everyone is now working toward ES-Harmony, which will somehow combine ES3.1 with ideas from ES4? Do you think that's ultimately a good decision?

Eich: Doug was a little triumphalist in his first blog post: "We've won. The devil has been vanquished." I had a joke slide I gave in London a year ago about Doug being Gandalf on the bridge at Khazad-dûm facing down the ES4 Balrog. He liked that a lot. It was the first time I poked fun at him, because he's a little serious sometimes when he gets on this topic, and he liked it a lot. He can be the hero; ES4 wasn't quite the monster.

ES4 looks, in retrospect, too big. But we need to be practical about standards. We can't just say all you need are lambdas (Alonzo Church proved it), so we're not going to add any more to the language. That's the sort of impoverished approach that tries to make everybody into an expert and it will not work on the large number of programmers out there who have been mistrained in these Java schools. JavaScript will fall someday but we can keep evolving it and keep it competitive in both the theoretical and the practical sense if we don't try to hold back the sugar for purity's sake.

It needs to evolve to solve the problems that programmers face. Programmers can solve some of them by writing their own library abstractions. But the ability to write abstractions in the language is limited without the extensions: you can't write getters and setters. You can't make objects look native, have properties turn into code, things like that. And you can't solve some of these security problems in an implicit or automated way.
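
Getters and setters did eventually arrive, in ECMAScript 5 (after this interview); a sketch of the "properties turn into code" ability he means:

    var thermostat = { _celsius: 20 };

    // Plain property syntax on the outside, but reads and writes run code.
    Object.defineProperty(thermostat, "fahrenheit", {
      get: function () { return this._celsius * 9 / 5 + 32; },
      set: function (f) { this._celsius = (f - 32) * 5 / 9; }
    });

    console.log(thermostat.fahrenheit);   // 68
    thermostat.fahrenheit = 212;
    console.log(thermostat._celsius);     // 100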

Seibel: In general do you feel like languages are getting better over time?

Eich: I think so, yeah. Maybe we're entering the second golden age; there's more interest in languages and more language creation. We talk about programming: we need to keep practicing the craft; it's like writing or music. But the language that you use (the tonal system) matters too. Language matters. So we should be evolving programming languages; we shouldn't be sitting still. Because the Web demands compatibility, JavaScript may have to sit still too much. But we shouldn't get stuck by that; we should either make a better JavaScript, even if it doesn't replace the one on the Web, or we should move beyond that.

You see stuff like Ruby, which took influences from Ada and Smalltalk. That's great. I don't mind eclecticism. Though Ruby does seem kind of overhyped. Nothing bad about it, just sometimes the fan boys make it sound like the second coming and it's going to solve all your problems, and that's not the case. We should have new languages but they should not be overhyped. Like the C++ hype, the whole "design patterns will save us." Though maybe they were reacting to the conservatism of the Unix C world of the '80s.

But at some point we have to have better languages. And the reason is to have proof assistants or proof systems, to have some kind of automatic verification of some claims you're making in your code. You won't get all of them, right? And the dynamic tools like Valgrind and its race detectors, that's great too. There's no silver bullet, as Brooks said, but there are better languages and we should migrate to them as we can.

Seibel: To what extent should programming languages be designed to prevent programmers from making mistakes?

Eich: So a blue-collar language like Java shouldn't have a crazy generic system, because blue-collar people can't figure out what the hell the syntax means with covariant, contravariant type constraints. Certainly I've experienced some toe loss due to C and C++'s foot guns. Part of programming is engineering; part of engineering is working out various safety properties, which matter. Doing a browser they matter. They matter more if you're doing the Therac-25. Though that was more a thread-scheduling problem, as I recall. But even then, you talk about better languages for writing concurrent programs or exploiting hardware parallelism. We shouldn't all be using synchronized blocks; we certainly shouldn't be using mutexes or spin locks. So the kind of leverage you can get through languages may involve trade-offs where you say, "I'm going, for safety, to sacrifice some expressiveness."

With JavaScript I think we held to this, against the wild, woolly Frenchmen superhackers who want to use JavaScript as a sort of a lambda x86 language. We're not going to add call/cc; there's no reason to. Besides the burden on implementers (let's say that wasn't a problem), people would definitely go astray with it. Not necessarily the majority, but enough people who wanted to be like the superhackers. There's sort of a programming ziggurat, the Right Stuff, you know. People are climbing towards the top, even though some of the tops sometimes fall off or lose a toe.

You can only borrow trouble so many different ways in JavaScript. There are first-class functions. There are prototypes, which are a little confusing to people still, because they're not the standard classical OOP.
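
A minimal sketch of the prototype model, for readers coming from classical OOP:

    function Point(x, y) {
      this.x = x;
      this.y = y;
    }

    // One shared method; every instance delegates to it through its prototype.
    Point.prototype.norm = function () {
      return Math.sqrt(this.x * this.x + this.y * this.y);
    };

    var p = new Point(3, 4);
    console.log(p.norm());                   // 5
    console.log(p.hasOwnProperty("norm"));   // false: found on the prototype, not the object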

That's almost enough. I'm not a minimalist who says, "That's it; we should freeze the language." That's convenient cover for Microsoft, and it kind of outrages me, because I see people wasting a lot of time and still having bugs. You know, you can still have lots of hard-to-find bugs with lambda coding.

Doug has taught people different patterns, but I do agree with Peter Norvig: those patterns show some kind of defect in the language. These patterns are not free. There's no free lunch. So we should be looking for evolution in the language that adds the right bits. Adding optional types probably will happen. They might even be more like PLT contracts.

Seibel: A lot of the stuff you're dealing with, from static analysis of your C++ to the tracing JITs and new features for JavaScript, seems like you're trying to keep up with some pretty cutting-edge computer-science research.

Eich: So we're fighting the good fight but we're trying to be smart about it. We're also trying to move the research needle because (this is something else that was obvious to me, even back when I was in school, and I think it's still a problem) there are a lot of problems with academic research. It's widely separated from industry.

So there's something wrong that we'd like to fix. We've been working with academics who are practically minded. That's been great. We don't have much money so we're going to have to use leverage; partly it's just getting people to talk and network together.

You lose something when the academics are all off chasing NSF grants every year. The other thing is, you see the rise of dynamic languages. You see crazy, idiotic statements about how dynamic languages are going to totally unseat Java and static languages, which is nonsense. But the academics are out there convinced static type systems are the ultimate end and they're researching particular kinds of static type systems, like the ML, Hindley-Milner type inference, and it's completely divorced from industry.

Seibel: Why is that? Because it's not solving any real problems or because it's only a partial solution?

Eich: We did some work with SML New Jersey to self-host the reference implementation of JavaScript, fourth edition, which is now defunct. We were trying to make a definitional interpreter. We weren't even using Hindley-Milner. We would annotate types and arguments to avoid these crazy, notorious error messages you get when it can't unify types and picks some random source code to blame, and it's usually the wrong one. So there's a quality-of-implementation issue there. Maybe there's a type-theoretic problem there too, because it is difficult, when unification fails, to have useful blame.

Now you could do more research and try to develop some higher-level model of cognitive errors that programmers make and get better blame coordinates. Maybe I'm just picking on one minor issue here, but it does seem like that's a big one.

Academia has not been helpful in leading people toward a better model. I think academia has been kind of derelict. Maybe it's not their fault. The economics that they subsist on aren't good. But we all knew we were headed toward this massively parallel future. Nobody has solved it. Now they're all big about transactional memory. That's not going to solve it. You're not going to have nested transactions rolling back and contending across a large number of processors. It's not going to be efficient. It's not going to actually work correctly in some cases. You can't map all your concurrent or parallel programming algorithms onto it. And you shouldn't try.

People like Joe Armstrong have done really good work with the shared-nothing approach. You see that a lot in custom systems in browser implementations. Chrome is big on it. We do it our own way in our JavaScript implementation. And shared nothing is not even interesting to academics, I think. Transactional memory is more interesting, especially with the sort of computer-architecture types, because they can figure out ways to make good instructions and hardware support for it. But it's not going to solve all the problems we face.

I think there will be progress and it should involve programming languages. That's why I do think the talk about the second golden age isn't wrong. It's just that we haven't connected the users of the languages with the would-be developers with the academics who might research a really breakthrough language.

Seibel: You got a master's but not a PhD. Would you generally recommend that people who want to be programmers should go get a PhD in computer science? Or should only certain kinds of people do that?

Eich: I think only certain kinds of people. It takes certain skills to do a PhD, and sometimes you wonder if it's ultimately given just because you endured. But then you get the three letters to put after your name if you want to. And that helps you open certain doors. But my experience in the Valley, in this inflationist boom of 20 years or so that we've been living through (though that may be coming to an end), was certainly that it wasn't a good economic trade-off. So I don't have regrets about that.

The ability to go study something in a systematic, and maybe even leisurely, way is attractive. The go-to-market, ride Moore's law, compete and deal with fast product cycles and sometimes throwaway software: it seems like a shame if that's all everybody does. So there's a role for people who want to get PhDs, who have the skills for it. And there is interesting research to do. One of the things that we're pushing at Mozilla is in between what's respected in academic research circles and what's already practice in the industry. That's compilers and VM stuff, debuggers even (things like Valgrind), profiling tools. Underinvested-in and not sexy for researchers, maybe not novel enough, too much engineering, but there's room for breakthroughs. We're working with Andreas Gal and he gets these papers rejected because they're too practical.

Of course, we need researchers who are inclined that way, but we also need programmers who do research. We need to have the programming discipline not be just this sort of blue-collar thing thats cut off from the people in the ivory towers.

Seibel: How do you feel about proofs?

Eich: Proofs are hard. Most people are lazy. Larry Wall is right: laziness should be a virtue. So that's why I prefer automation. Proofs are something that academics love and most programmers hate. Writing assertions can be useful. In spite of bad assertions that should've been warnings, we've had more good assertions over time in Mozilla. From that we've had some illumination on what the invariants are that you'd like to express in some dream type system.

I think thinking about assertions as proof points helps. But not requiring anything that pretends to be a complete proof; there are enough proofs published in academic papers that are full of holes.

Seibel: On a completely different topic, whats the worst bug you ever had to track down?

Eich: Oh, man. The worst bugs are the multithreaded ones. The work I did at Silicon Graphics involved the Unix kernel. The kernel originally started out, like all Unix kernels of the day, as a monolithic monitor that ran to completion once you entered the kernel through a system call. Except for interrupts, you could be sure you could run to completion, so no locks for your own data structure. That was cool. Pretty straightforward.

But at SGI the bright young things from HP came in. They sold symmetric multiprocessing to SGI. And they really rocked the old kernel group. They came in with some of their new guys and they did it. They stepped right up and they kept swinging until they knocked the ball pretty far out of the field. But they didn't do it with anything better than C and semaphores and spin locks and maybe monitors, condition variables. All hand-coded. So there were tons of bugs. It was a real nightmare.

I got a free trip to Australia and New Zealand that I blogged about. We actually fixed the bug in the field, but it was hellish to find and fix because it was one of these bugs where we'd taken some single-threaded kernel code and put it in this symmetric multiprocessing multithreaded kernel and we hadn't worried about a particular race condition. So first of all we had to produce a test case to find it, and that was hard enough. Then, under time pressure, because the customer wanted the fix while we were in the field, we had to actually come up with a fix.

Diagnosing it was hard because it was timing-sensitive. It had to do with these machines being abused by terminal concentrators. People were hooking up a bunch of PTYs to real terminals. Students in a lab or a bunch of people in a mining software company in Brisbane, Australia, in this sort of '70s sea of cubes with a glass wall at the end, behind which was a bunch of machines including the SGI two-processor machine. That was hard and I'm glad we found it.

These bugs generally don't linger for years but they are really hard to find. And you have to sort of suspend your life and think about them all the time and dream about them and so on. You end up doing very basic stuff, though. It's like a lot of other bugs. You end up bisecting; you know, wolf fence. You try to figure out, by monitoring execution and the state of memory, how to bound the extent of the bug and the control flow and the data that can be addressed. If it's a wild pointer store then you're kinda screwed and you have to really start looking at harder-to-use tools, which have only come to the fore recently, thanks to those gigahertz processors, like Valgrind and Purify.

Instrumenting and having a checked model of the entire memory hierarchy is big. Robert O'Callahan, our big brain in New Zealand, did his own debugger based on the Valgrind framework, which efficiently logs every instruction so he can re-create the entire program state at any point. It's not just a time-traveling debugger. It's a full database, so you see a data structure and there's a field with a scrogged value and you can say, "Who wrote to that last?" and you get the full stack. You can reason from effects back to causes. Which is the whole game in debugging. So it's very slow. It's like a hundred times slower than real time, but there's hope.

Or you can use one of these faster recording VMs; they checkpoint only at system-call and I/O boundaries. They can re-create corrupt program states at any boundary, but to go in between those is harder. But if you use that you can probably close in quickly at near real time, and then once you get to that stage you can transfer it into Rob's Chronomancer and run it much slower and get all the program states and find the bug.

Debugging technology has been sadly underresearched. That's another example where there's a big gulf between industry and academia: the academics are doing proofs, sometimes by hand, more and more mechanized thanks to the POPLmark challenge and things like that. But in the real world we're all in debuggers and they're pieces of shit from the '70s like GDB.

Seibel: In the real world one big split is between people who use symbolic debuggers and people who use print statements.

Eich: Yeah. So I use GDB, and I'm glad GDB, at least on the Mac, has a watchpoint facility that mostly works. So I can watch an address and I can catch it changing from good bits to bad bits. That's pretty helpful. Otherwise I'm using printfs to bisect. Once I get close enough usually I can just try things inside GDB or use some amount of command scripting. But it's incredibly weak. The scripting language itself is weak. I think Van Jacobson added loops and I don't even know if those made it into the real GDB, past the FSF hall monitors.

But there's so much more debugging can do for you, and these attempts, like Chronomancer and Replay, are good. They certainly changed the game for me recently. But I don't know about multithreading. There's Helgrind and there are other sorts of dynamic race detectors that we're using. Those are producing some false positives we have to weed through, trying to train the tools or to fix our code not to trigger them. The jury is still out on those.

The multithreaded stuff, frankly, scares me, because before I was married and had kids it took a lot of my life. And not everybody was ready to think about concurrency and all the possible combinations of orders that are out there for even small scenarios. Once you combine code with other people's code it just gets out of control. You can't possibly model the state space in your head. Most people aren't up to it. I could be like one of these chest-thumpers on Slashdot; when I blogged about "Threads suck" someone was saying, "Oh, he doesn't know anything. He's not a real man." Come on, you idiot. I got a trip to New Zealand and Australia. I got some perks. But it was definitely painful and it takes too long. As Oscar Wilde said of socialism, "It takes too many evenings."

Seibel: How do you design code?

Eich: A lot of prototyping. I used to do sort of high-level pseudocode, and then I'd start filling in bottom-up. I do less of the high-level pseudocode because I can usually hold it in my head and just do bottom-up until it joins. Often I'm working with existing pieces of code, adding some new subsystem or something on the side, and I can almost do it bottom-up. When I get in trouble in the middle I do still write pseudocode and just start working bottom-up until I can complete it. I try not to let that take too long because you've got to be able to test it; you've got to be able to see it run and step through it and make sure it's doing what it's supposed to be doing.

Before that level of design, there may be some entity relationships or gross modularization. There's probably an algorithm or three that we're thinking of, where you're reasoning about the complexity of it: is it linear? Is it constant? Every time I've written some kind of linear search that's going to compound quadratically and unleashed it on the Web, web developers have found it to be a problem. They've written enough stuff that it stresses it. So we tend to do a lot of data structures that are constant time. And even then, constant can be not one; it can be big enough that you care.
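
A sketch of the trap he's describing, using a hypothetical de-duplication helper (Set is modern JavaScript, added well after this interview):

    // Linear search inside a loop: quadratic overall. Fine in testing,
    // painful once web developers feed it thousands of items.
    function uniqueNaive(items) {
      var out = [];
      for (var i = 0; i < items.length; i++) {
        if (out.indexOf(items[i]) === -1) {   // O(n) scan on every iteration
          out.push(items[i]);
        }
      }
      return out;
    }

    // Constant-time membership checks keep the whole pass linear.
    function uniqueFast(items) {
      var seen = new Set();
      var out = [];
      for (var i = 0; i < items.length; i++) {
        if (!seen.has(items[i])) {
          seen.add(items[i]);
          out.push(items[i]);
        }
      }
      return out;
    }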

So we do lots of prototyping, we do lots of bottom-up and top-down, and they meet in the middle. And I think actually we, at Mozilla, don't do enough rewriting. We're very conservative. We are open source, so we have a community we try to build and bring new people into. We certainly have value that users benefit from, and we don't want to take a three-year break rewriting, which is what would happen if we tried too much.

But if you really are trying to move a needle and you don't know exactly what you're doing, rewrite. It's going to take several tries to know what the hell you're doing. And then when you have a design more firm you'll stick with it and you'll start patching it more, and you'll get to this mature state where we creak with patches. It's kind of an evolutionary dead end for code. You know, maybe it's a good sunk cost and you can stand on it for years. Maybe it's this thing that's crying out for replacement. Maybe in the open-source world some better standard library has emerged.

And that gets back to the craft of programming, I think. You don't just write code based on some old design. You want to keep practicing, and the practicing involves thinking about design and feeding back your experience in coding to the design process.

I have this big allergy to ivory-tower design and design patterns. Peter Norvig, when he was at Harlequin, did this paper about how design patterns are really just flaws in your programming language. Get a better programming language. He's absolutely right. Worshipping patterns and thinking, "Oh, I'll use the X pattern."

Seibel: So newer experiences can show you better ways going forward. But what about when writing the code shows you big flaws in your existing design?

Eich: That does happen. It happens a lot. Sometimes it's difficult to throw out and go back to square one. You've already made commitments, and you get into this trap. I did this with JavaScript. In a great big hurry, I wrote a byte-code interpreter. Even at the time I knew I was going to regret some of the things I'd done. But it was a design that was understandable to other people and I could hope to get other people helping me work on it. So I question design all the time. I just realize that we don't always get the luxury of revisiting our deepest design decisions. And that is where we then attempt to do a big rewrite, because you really would have a hard time incrementally rewriting to change deep design decisions.

Seibel: How do you decide when it's right to do a big rewrite? Thanks to Joel Spolsky, Netscape is in some ways the poster child for the dangers of the big rewrite.

Eich: There was an imperative from Netscape to make the acquisition that waved the Design Patterns book around feel like they were winners by using their new rendering engine, which was like My First Object-Oriented Rendering Engine. From a high level it sounded good; it used C++ and design patterns. But it had a lot of problems.

But the second reason we did the big rewrite: I was in mozilla.org and I really was kind of pissed at Netscape, like Jamie, who was getting ready to quit. I thought, you know, we need to open up homesteading space to new contributors. We can't do it with this old hairball of student code from 1994. Or my fine Unix kernel-style interpreter code.

We needed to do a fairly big reset. Yeah, we were going to be four years from shipping. At that point I don't think we were telling upper management that, because they didn't want to hear it, so we were optimizing to them. And that cost some of the management their heads. Though they all made out fabulously on the options, much better than I did. But for Mozilla that was the right trade.

We were lucky in hindsight, because we could have had a more rapid evolution of the Web. Microsoft was (some people claim this was due to the antitrust case more than their nature) inclined to sit on the Web and stagnate it. So that gave us time to wave the standards flag (which is two-edged and half bullshit) and go rewrite. Like Joel, I'm skeptical of rewrites. I think it's rare to find that kind of an alignment of interests and get enough funding to live through it and not miss the market. The exceptions are very rare.

The rewrites I was speaking of earlier, though, were when you're prototyping. That's critical and smaller-scale. It may be a cross-cutting change to a big pile of code, so it's small in lines but it's big in reach and all the invariants you have to satisfy. Or maybe it's a new JIT or whatever, and that you can get away with.

Seibel: Have you ever done any literate programming, à la Knuth?

Eich: I followed the original stuff. It was very neat. I liked it. It was word retrieval. He had some kind of a hash-trie data structure and it was all literately programmed. Then Doug McIlroy came along and did it all with a pipeline.

Our programs are heavily commented but we don't have any way of extracting the prose and somehow making it be checked by humans, if not automatically, against the code. Python people have done some more interesting work there. I have not done anything more than heavily comment. I do go back and maintain comments; it's a real pain and sometimes I don't do it and then I regret it because somebody gets a bum steer.

I actually like McIlroy's rejoinder. It wasn't a rebuttal of literate programming, but it was kind of. You don't want to write too many words, prose or code. In some ways the code should speak for itself at the small level. It's at the bigger level, the big monster function or the module boundary, that you need docs. So doc comments or things like them: doc strings. Embedding the test in the comment. I guess that's the big Python thing. That's good.

There is something to literate programming, especially these integrated tests and doc strings. I'd like to see more of that supported by languages. We tried to add doc comments of some sort to ES4, with first-class metadata hooks or reflection hooks, and it was just impossible to get everybody to agree.

Seibel: Do you read code you're not working on?

Eich: I do it as part of my job. Code review is a mandatory pre-check-in step, mostly to compensate for Netscape's bad hiring, but we kept it and still use it for integration review. We have a separate super review for when you're touching a lot of modules and you don't know all the hidden invariants that Joe Schmoe, who no longer works on Mozilla, knew in his head. Somebody else may have figured them out, so you have somebody experienced to look at the big picture. Sometimes you can bypass it if you know what you're doing and you're in the sort of Jedi council, but we're not trying to cheat on it too much.

We don't have design reviews, so sometimes this causes a delayed design review to happen. They say, "Oh, back to the drawing board. You wrote too much code. You should have designed it this other way." That's the exception. We aren't going to impose any kind of waterfall, design-then-implementation process. That was the big thing when I was getting into the industry in the early '80s and it was a nightmare, frankly. You spend all this time writing documents and then you go to write the code and often you realize that it's really stupid and you totally change the code and put the documents down the memory hole.

Seibel: So that's code that is going into Mozilla; do you ever read other people's code, outside Mozilla, just for edification?

Eich: Open source is great. I love looking at other people's code that's in some other part of the world. I don't spend enough time on it but I do look at server frameworks or I look at things like Python and Ruby.

Seibel: The implementations of those things?

Eich: Implementations and also library code. I look at the Ajax libraries, and it's heartening to see how clever people can be and how this small set of tools (closures, prototypes, and objects) can be used to create reasonable, convenient, sometimes very convenient abstractions. They're not always hardened or safe but they're awfully convenient.

Seibel: When you read a big piece of code, how do you get into it?

Eich: I used to start top-down. If it's big enough you get function pointers and control flow gets opaque. I sometimes drive it in the debugger and play around with it that way. Also, I look for bottom-up patterns that I recognize. If it's a language processor or if it's got something that makes system calls that I understand, I can start looking at how those primitives are used. How does that get used by higher levels in the system? And that helps me get around. But really understanding it is this gestalt process that involves looking at different angles of top and bottom and different views of it, playing in the debugger, stepping through in the debugger, incredibly tedious though that can be.

If you can understand what's going on a little bit in the heap (chase pointers, walk through cons cells, whatever), that can be worth the trouble, though it gets tedious. That, to me, is as important as reading source. You can get a long way reading source; you can also get stuck and get bored and convince yourself you understand something that you don't.

When I did JavaScript's regular expressions I was looking at Perl 4. I did step through it in the debugger, as well as read the code. And that gave me ideas; the implementation I did was similar. In this case the recursive backtracking nature of them was a little novel, so I had to wrap my head around it. It did help to just debug simple regular expressions, just to trace the execution. I know other programmers talk about this: you should step through code, you should understand what the dynamic state of the program looks like in various quick bird's-eye views or sanity checks, and I agree with that.
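
For flavor, here is a toy recursive backtracking matcher in Python. It is not the Perl 4 code or any real JavaScript engine, just a sketch of the control structure being described, supporting only literal characters, '.', and a trailing '*':

    def match_here(pattern, text):
        # Match `pattern` against the beginning of `text`.
        if not pattern:
            return True
        if len(pattern) >= 2 and pattern[1] == "*":
            # Backtracking: let the starred atom consume 0, 1, 2, ...
            # characters, retrying the rest of the pattern each time.
            i = 0
            while True:
                if match_here(pattern[2:], text[i:]):
                    return True
                if i < len(text) and pattern[0] in (".", text[i]):
                    i += 1
                else:
                    return False
        if text and pattern[0] in (".", text[0]):
            return match_here(pattern[1:], text[1:])
        return False

    def match(pattern, text):
        # Unanchored match: try every starting offset in the text.
        return any(match_here(pattern, text[i:]) for i in range(len(text) + 1))

    assert match("a*b", "aaab")
    assert match("a.c", "xabcx")
    assert not match("a*b", "aac")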

Seibel: Do you do that with your own code, even when you're not tracking down a bug?

Eich: Absolutely, just sanity checks. I have plenty of assertions, so if those botch then I'll be in the debugger for sure. But sometimes you write code and you've got some clever bookkeeping scheme or other. And you test it and it seems to work until you step through it in the debugger. Particularly if there's a bit of cleverness that only kicks in when the stars and the moon align. Then you want to use a conditional breakpoint or even a watchpoint, a data breakpoint, and then you can actually catch it in the act and check that, yes, the planets are all aligned the way they should be, and maybe test that you weren't living in optimistic pony land. You can actually look in the debugger, whereas in the source you're still in pony land. So that seems important; I still do it.
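
A cheap way to get that effect in code, sketched in Python (the loop and the invariant are invented for illustration): guard a programmatic breakpoint with the rare condition, so the debugger only opens at the moment the stars actually align.

    balance = 0
    for delta in (50, -30, -40):
        balance += delta
        # Moral equivalent of a conditional breakpoint: stop only
        # when the suspicious state shows up, not on every iteration.
        if balance < 0:
            breakpoint()  # drops into pdb with the live state on hand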

Seibel: Is the way you discover a problem that you're stepping through looking at the source with your mental model of what's about to happen, and then you see it not happen?

Eich: You see it not happen, or (and this is my problem) I was in pony land. I'm getting older and more skeptical and I'm doing better, but there's still something that I was optimistic about. In the back of my mind this Jiminy Cricket is whispering, "You probably have a bug because you forgot about something." That kind of problem happens to me still.

And sometimes I know about it, I swear; somewhere in there I know I'm wrong. I have this sort of itch in my hind-brain. Well, not in my hind-brain; I don't know where it is; the microtubules. Anyway, I kind of feel like there's something that I should watch out for, and being in the debugger helps me watch out for it and it helps me force the issue, or see that the test vector, though it covered the code in some sense, didn't cover all the combinations, because it's a huge, huge hyperspace. And if you just changed this one value you'd go into a bad place.

Seibel: In addition to reading code, lots of programmers read books about programming. Are there any books that you would recommend?

Eich: I should be a better student of the literature. But I think it's sort of like music, in that you have to practice it. And you can learn a lot reading other people's code. I did like Brian Kernighan's books; I thought they were neat, because they would build up a small amount of code, and start reusing it as you go, and modularizing. And Knuth's The Art of Computer Programming, Volumes 1-3, especially the seminumerical stuff. Double hashing; I love those parts. The lemma about the golden ratio, with the proof left as an exercise.
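
The double hashing he mentions (Knuth covers it in Volume 3) is easy to sketch. Here is a hypothetical open-addressed insert in Python: a second hash supplies the probe step, and a prime table size guarantees the probe sequence visits every slot.

    def insert(table, key):
        # Open addressing with double hashing. len(table) should be
        # prime so every step size is coprime to the table size and
        # the probe sequence covers all slots before repeating.
        m = len(table)
        h1 = hash(key) % m
        h2 = 1 + (hash(key) % (m - 1))  # step in 1..m-1, never zero
        for i in range(m):
            slot = (h1 + i * h2) % m
            if table[slot] is None:
                table[slot] = key
                return slot
        raise RuntimeError("table full")

    table = [None] * 13  # 13 is prime
    for word in ("moe", "larry", "curly"):
        insert(table, word)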

But I'm a little skeptical of book learning for programming. Programming is partly engineering; there's maybe some math on a good day. And then there's a lot of practical stuff that doesn't even rise to the level of engineering in the sense of civil engineering and mechanical engineering. Maybe it'll be formalized more over time.

There's definitely a good corpus of knowledge. Computer science is a science. I remember somebody on Usenet 20 years ago said, "Science lite: one-third the rigor." There's still a lot of stuff that doesn't look like it really holds up over time; there are these publish-or-perish, ten-page, ten-point-font papers that often have holes in them. The journal publications are better because you get to interact with the referee; it's not just a truth-or-dare. And they get reviewed more carefully. The area of mechanized proofs is getting impressive. But it's still not reaching programmers. So there's something a little bit missing in computer science, in my view, that makes me skeptical of book learning. I should probably not go so far on this Luddite track. But there it is.

There is science there, and there are important things to learn. You could spend a lot of time studying them, too. I know a lot of people on the theoretical side of it from work on JavaScript language development, and a lot of them are hackers, too, which is good. Some of them don't program.

They're not really practical people. They have amazing insights, which can sometimes be very productive, but when you have to actually write programs and ship them to users and have them be usable and have them win some sort of market contest, you're far removed from theory. But I am interested in theory, and it does help improve our lives.

Seibel: There are other kinds of books too. There are books that introduce you to the craft of programming, without a lot of theory.

Eich: And that's the kind of book I like. We talked about Knuth's literate-programming paper. And there was a whole area of programming as craft that I like. I like the Smalltalk books. Now that I think about it, those were pretty influential. The Adele Goldberg book. And before that, the Byte issue.

Seibel: With the hot-air balloon on the cover?

Eich: Yeah. That turned me around in a big way. That was big. That was like '80 or so, so I wasn't really doing a lot of programming then. I was thinking about it and I was reading about it and I was playing around on the old iron at my undergraduate university. The purity of the Smalltalk environment, the fact that it was so highly bootstrapped: all that really hit me in a way that made me want to be involved in programming languages and virtual machines. Going into Unix was physical machines and operating systems, and that was where the action was. But even then I was reading; there was a Springer-Verlag book that had a bunch of papers, and people back then were fantasizing about universal object-file formats and Java bytecode, essentially, before its time. But yes, Smalltalk was huge. I didn't actually get to use it until years later at U of I, when they finally released something that ran on the Suns of the time, and it was slow.

Seibel: On another topic, how do you recognize programming talent?

Eich: We hired somebody a while ago; he was a friend of one of the superbrains we'd hired. But this was a guy who was, I think, just an undergrad, or had a bachelor's degree; I'm not sure if he even finished. He met a guy who was working for us and they're both OCaml hackers, and he was doing his own OCaml hacking on the side. And he was thinking about problems that we were seeing in my static analysis. When we interviewed him, I knew he was young, but you couldn't tell. Some people thought, "Oh, yeah, he hasn't done much. You know, we should only hire rock stars; what are we talking to him for?"

I said, "No, you guys are looking at this wrong. This is like one of our bright interns. Get them while they're young. He's done a bunch by himself; he's gotten into OCaml; he knows not just the source language, but the runtime; and he's hacked native methods and he was writing an OCaml operating system, a toy operating system. But this guy is good." And it wasn't that I gave him any particular programming test; it was just that I'd listened to him talk about what he'd done and why he'd done it. He wasn't just repeating pabulum about C++ patterns. We have kids like that, unfortunately. Nice people and adequate programmers, for what they were doing, Java Enterprise stuff. But we needed somebody different, and this guy was different.

So in the interview the main problem was overcoming people misreading his age or thinking he wasn't accomplished enough. But we hired him and he's just been a superstar. He's done a bunch of static-analysis tooling, originally on this open-source Berkeley Oink framework, and then on GCC as plugins, working with the GCC guys. Now he's kicking our mobile effort into high gear, just doing poor man's profiling and printf of timestamps and finding out where the costs are and whacking them.
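
For what it's worth, "poor man's profiling" is nothing fancier than this kind of thing, sketched here in Python (the label and the workload are invented): print monotonic timestamps around suspect regions and eyeball the deltas.

    import time

    def stamp(label):
        # Wall-clock marker; diff successive stamps to find the cost.
        print(f"{time.monotonic():12.6f}  {label}")

    stamp("start")
    total = sum(i * i for i in range(1_000_000))  # stand-in for real work
    stamp("after hot loop")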

So when I interviewed him I knew there was talent. That he came recommended from somebody bright was good, because, you know, bright people like each other and can judge each other; generally there's not a dysfunctional "hire my friend, who's really not bright." They want to work with bright people. Maybe this sounds like I'm cheating, but that's one way I recognize talent. And I think that's why we're hiring superhackers. I think we're hiring up all the Valgrind hackers. Some of those guys can do anything; they don't fuck around.

Seibel: So is that something you often do in interviews: get them to talk about their own projects?

Eich: I do. I don't give people puzzles to solve. We have people who do that here. To the extent that we have to do that and we're using it to filter applicants, I worry.

Seibel: Is that even a good first-pass filter?

Eich: I'm skeptical. Google does that in spades, and they hire a bunch of very bright puzzle-solvers. But with some of them, the street smarts and the mature judgment are not necessarily there. So I'm skeptical of it. I think we have to do it to some extent, because you can end up getting someone who talks well but actually isn't effective at programming, and so you want to see them think on their feet, you want to see if they've solved a problem before. So we give them fairly practical problems. Not esoteric puzzles or mathy things, but more like programming problems.

We check their C++ knowledge, because C++ is hairy. So it's sort of a sanity check, not enough to say, "Let's hire him." But if they pass it, that's good; if they don't, we worry. To say, "Let's hire them," we have to see something else, and that's the spark that involves particulars, like what they've done and their approach and what languages they've used.

Maybe I'm also sympathetic to the odd duck. I don't mind people who are a little different. I don't want to hire somebody who's hard to work with, but we need talent. We need people who think differently.

When I was an undergrad I was really affected by Pirsig's Zen and the Art of Motorcycle Maintenance. And I had been going through Plato and the early philosophers. I was, at that point, inclined more towards idealism in some philosophical sense. I thought little-endian byte order was superior to big-endian, because after all, the least significant digits are in the lowest address; there was some kind of harmony or geometry in that. But try reading a hex dump. Practical things matter; particulars matter. The famous School of Athens painting, with Aristotle pointing down and Plato pointing up: I'm more on the pointing-down side now. As I get older I get more and more skeptical and more and more interested in what works.
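
The hex-dump point is easy to see in a few lines of Python: pack the same integer both ways and look at the bytes in address order. Little-endian has the "harmonious" property (byte significance grows with address), but big-endian is the one you can read straight off the dump.

    import struct

    value = 0x0A0B0C0D
    little = struct.pack("<I", value)  # least significant byte at lowest address
    big = struct.pack(">I", value)     # most significant byte at lowest address

    print(little.hex(" "))  # 0d 0c 0b 0a: geometrically tidy, hard to read
    print(big.hex(" "))     # 0a 0b 0c 0d: reads like the literal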

When I'm interviewing people, when I'm looking for talent, it's very hard for me not to stick with particulars and practicalities. OK, so this guy knew OCaml; it meant he was smart, but should we hire him? Well, no, but he also did things on his own and he thought on his feet when I talked to him, and he was already thinking about compilation or analysis problems that we were going to hire him to work on. But maybe the important thing there, the real story, was the network we went through: he was the friend of the guy we'd hired.

Seibel: Do you still enjoy programming?

Eich: Yeah. It's a bit like an addiction; it's a little problematic. It's not just the programming part of getting the code to run; to me now it's more and more finding the right idea that has the New Jersey philosophy of a 90/10 trade-off: a sweet, sound theoretical core that isn't going to solve all your problems, but when you fall on the 10 percent that loses, you don't go to hell. You can actually win this way, and the code stays small enough and simple enough, and there's some dance between theory and implementation. I like that. That still appeals to me; it still is fun; it keeps me up at night thinking about it when I should be sleeping.

Seibel: Are there parts of it that you don't enjoy as much anymore?

Eich: I don't know. C++. We're able to use most of its features; there are too many of them. It's probably got a better type system than Java. But we're still screwing around with '70s debuggers and linkers, and it's stupid. I don't know why we put up with it.

Impatience and hatred of primitive tools drives me to try to be a better programmer. Our code is riddled with assertions now, and they are fatal. That's important to us. But this is something that has helped me, especially when I'm doing one of these allegedly sound, 90/10, sweet-trade-off moves on the code that doesn't quite satisfy all the invariants. I forget something; an assertion will botch; and then it's like, bing, I know what to fix.
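
The pattern is simple enough to sketch in Python (the doubly linked list and its invariants are invented for illustration); assert dies with a traceback, which is the fatal behavior he wants: the bug announces itself the moment an invariant breaks, not three phases later.

    class Node:
        def __init__(self, value):
            self.value = value
            self.prev = self.next = None

    def link(a, b):
        a.next, b.prev = b, a

    def unlink(node):
        # Fatal invariant checks: if the bookkeeping has drifted,
        # die right here instead of silently corrupting the list.
        assert node.prev and node.next, "node not fully linked"
        assert node.prev.next is node, "prev link broken"
        assert node.next.prev is node, "next link broken"
        node.prev.next, node.next.prev = node.next, node.prev
        node.prev = node.next = None

    a, b, c = Node(1), Node(2), Node(3)
    link(a, b); link(b, c); link(c, a)  # a small ring
    unlink(b)                           # passes the checks
    assert a.next is c and c.prev is a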

Also, I'm even now learning about my own weaknesses, where I'll optimize something too much. I'll have made some kind of happy pony land in my head, where I forgot some important problem. That's always a challenge, because programmers have to be optimists. We're supposed to be paranoid, neurotic, Woody Allen types who are always worried about things, but really you wouldn't get anywhere in programming if you were truly paranoid.

Seibel: Do you feel at all that programming is a young person's game?

Eich: I think young people have enormous advantages, just physiological advantages to do with the brain. What they don't have is the wisdom! You get crustier and maybe you get slower, but you do learn some painful lessons that you try to pass on to the next generation. I see them ignoring me and learning them the hard way, and I shake my fist!

But, apart from that, if you stay well-read and keep at it, your output doesn't necessarily have to be voluminous. While producing a lot of code is still important, what has interested me (and this is something we talked about at Netscape when we talked about their track for principal engineer) is somebody who isn't management but still has enough leadership or influence to cause other programmers to write code like they would write it, without having to do it themselves, because you don't have enough hours in the day or fingers.

Having that ability to spread your approach and whatever you've learned about programming, and have that go through some kind of community and produce a corpus of code that's bigger than you could write yourself: that's as satisfying to me as being the one who stays up all night writing too much code.

I'm still working too much, plus I've got small children. My wife is a good sport, but I don't think she likes me traveling so much. But I'm doing some of that too. That's not programming, yet it somehow has become important. In the case of JavaScript we have to figure out how to move the language forward, and that requires some amount of not just evangelism, but getting people to think about what would happen if the language did move: how would you like it to move, where should it go? And then dealing with the cacophony of responses.

Not all programmers will say this, since a lot of them are solitary, in the corner, but one of the things I realized at Netscape was that I liked interacting with people who actually use my code. And I would miss that if I went back into a corner. I want to be grounded about this. I'm secure enough to think I could go do something that was a fine sky castle for myself, but I'm realist enough to know that it would be only for myself and probably not fine for other people. And what's the point? If I'm only for myself, you know, Hillel the Elder, what am I?

I am not JavaScript. In the early days, it was such a rush job and it was buggy, and then there was some Usenet post that Jamie Zawinski forwarded me. He said, "They're calling your baby ugly." I have real kids now; I don't have to worry about that.

