Coders at Work: Reflections on the craft of programming

L Peter Deutsch

A prodigy, L Peter Deutsch started programming in the late 50s, at age 11, when his father brought home a memo about the programming of design calculations for the Cambridge Electron Accelerator at Harvard. He was soon hanging out at MIT, implementing Lisp on a PDP-1, and hacking on and improving code written by MIT hackers nearly twice his age.

As a sophomore at UC Berkeley, he got involved with Project Genie, one of the first minicomputer-based timesharing systems, writing most of the operating system's kernel. (Ken Thompson, inventor of Unix and the subject of Chapter 12, would also work on the project while a grad student at Berkeley, influencing his later work on Unix.) After participating in a failed attempt to commercialize the Project Genie system, Deutsch moved to Xerox PARC, where he worked on the Interlisp system and on the Smalltalk virtual machine, helping to invent the technique of just-in-time compilation.

He served as Chief Scientist at the PARC spin-off, ParcPlace, and was a Fellow at Sun Microsystems, where he put to paper the now famous Seven Fallacies of Distributed Computing. He is also the author of Ghostscript, the PostScript viewer. In 1992 he was part of the group that received the Association for Computing Machinery Software System Award for their work on Interlisp, and in 1994 he was elected a Fellow of the ACM.

In 2002 Deutsch quit work on Ghostscript in order to study musical composition. Today he is more likely to be working on a new musical composition than on a new program, but still can't resist the urge to hack every now and then, mostly on a musical score editor of his own devising.

Among the topics we covered in our conversation were the deep problems he sees with any computer language that includes the notion of a pointer or a reference, why software should be treated as a capital asset rather than an expense, and why he ultimately retired from professional programming.

Seibel: How did you start programming?

Deutsch: I started programming by accident when I was 11. My dad brought home some memo from the Cambridge Electron Accelerator, which was being built at the time. There was a group that did design computations and some memo of theirs accidentally found its way to him. I saw it lying around his office and it had some computer code in it and there was something about it that caught my imagination.

It turned out the memo was actually an addendum to another memo, so I asked him if he could lay his hands on the original memo. He brought that home and I said, "Gee, this stuff is really interesting." I think I might actually have asked him if I could meet the guy who had written the memos. We met. I don't really remember the details any more; this was 50 years ago. Somehow I got to write a little bit of code for one of the design calculations for the Cambridge Electron Accelerator. That's how I got started.

Seibel: So that was when you were eleven. By 14 or 15 you were playing on the PDP-1s at MIT, where your dad was a professor.

Deutsch: When I was 14, I found my way to the TX-0 and to the PDP-1 shortly thereafter. I remember getting a hold of a copy of the Lisp 1.5 programmer's manual. I don't remember how. It was a very early version (it was actually mimeographed, the old purple ink). There was something about Lisp that caught my imagination. I've always had a kind of mathematical bent, and Lisp just seemed sort of cool. I wanted to have one to play with and I couldn't get my hands on the Building 26 mainframe. So I did my Lisp implementation on the PDP-1.

Seibel: Do you remember at all how you designed your PDP-1 Lisp?

Deutsch: I'm smiling because the program was so small. Have you seen the listing? It's only a few hundred lines of assembler.

Seibel: I've seen it; I didn't try to understand it. Was it just a matter of transliterating the thing in the 1.5 manual into assembly?

Deutsch: No, no, no. All that was in the 1.5 manual was the interpreter. I had to write a reader and tokenizer and I had to design the data structures and all that stuff. My recollection was that I did that the way I actually have done most of my programming, which is to do the data structures first. When I was young enough that my intuition would point me in the right direction (I won't say infallibly, but close enough), that was a really good approach.

In the last couple of years I've noticed that I've gotten rusty; my intuition doesn't work so well anymore. I've been doing a substantial project off and on for several years now, a good open-source music score editor, and I find that letting my intuition steer me to the right data structures and then just writing everything out from there just doesn't work anymore.

Seibel: Do you think your intuition is actually worse, or did you use to have more stamina for making it work even if your intuition was actually a bit off?

Deutsch: I think it's some of each, but I think it's more the former. I think what intuition is, really, is an unconscious process for synthesizing a solution out of a large amount of data. And as I've gotten further and further away from being immersed in the stuff of software, the data that goes into that synthesis has become less and less accessible.

I've heard it said that to be a master at something you have to have, pretty much at your command, something like 20,000 specific cases. What's happened is the 20,000 specific cases of software design that passed in front of my face in my 45 years in the industry are just gradually becoming less and less accessible, the way memories do. I think that's pretty much what's going on.

Seibel: Do you remember what it was about programming that drew you in?

Deutsch: With the benefit of 50 years of hindsight, I can see that I have always been drawn to systems of denotative symbols: languages. Not only languages of discourse (human languages) but languages in which utterances have effects. Programming languages certainly fall right into that category.

I think that also has something to do with why the career that I've switched into is musical composition. Music is a language, or a family of languages, but what you say in those languages not only has denotation, maybe, but also has effects on people. Music is kind of interesting because on the spectrum of formality it falls between natural languages and computer languages. It's more formal and structured than a natural language, but it doesn't have nearly the structure or formality of a computer language. That, I think, may have to do with why I went into music and not poetry. I think poetry is not structured enough for me.

But the short answer really is, I just gravitated to this stuff right away.

Seibel: Do you remember the first interesting program you wrote?

Deutsch: The first program that I wrote because the content interested me was actually the second program that I wrote. The first program I wrote was some piece of calculation having to do with the Cambridge Electron Accelerator. The second program was a floating-point-output-formatting program.

Seibel: Which is a pretty hairy problem.

Deutsch: Well, it is on a binary machine. It's not a hairy problem on a decimal machine, and I was working on a decimal machine. You just slide the string around and decide where to put the decimal point. You have to decide whether to use the E or F format. But in those days everything was a lot harder (I was writing in assembly language on a batch-processing machine), so this was not a trivial problem. It wasn't a hard problem but it wasn't a trivial one. That was the first program that I wrote because I wanted to.
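
On a decimal machine the job really is mostly string-sliding. As a rough illustration of the idea he describes (the function name, layout, and E/F cutoff here are invented, not his actual code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: format a number already held as decimal
 * digits (as on a decimal machine), given a power-of-ten exponent.
 * The work is deciding where the point goes and whether to fall back
 * to E (scientific) format, just as described above. */
void format_decimal(const char *digits, int exp10, char *out, size_t n)
{
    int ndig  = (int)strlen(digits);
    int point = ndig + exp10;   /* digits to the left of the point */

    if (point >= 1 && point < ndig)
        /* F format: slide the point into the digit string */
        snprintf(out, n, "%.*s.%s", point, digits, digits + point);
    else
        /* E format: leading digit, the rest, and a decimal exponent */
        snprintf(out, n, "%c.%sE%+d", digits[0], digits + 1, point - 1);
}
```

For example, the digit string `31415` with exponent -4 slides the point to give `3.1415`, while exponent -7 falls outside the digit string and comes out in E format instead.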

Seibel: So you were hanging out at MIT during high school and then you went to Berkeley for college. Did you want to escape the East Coast?

Deutsch: Sort of. I realized that it would be really good for me to go to someplace that was far away from my parents. The three places that I considered seriously were, I think, University of Rochester, University of Chicago, and Berkeley. That was a no-brainer: only one of the three has reasonable weather. So that's how I ended up at Berkeley. And it was one of the best things that ever happened in my life.

I was at Berkeley and I found Project Genie pretty quickly after I arrived and stayed with that project until, well, there was Project Genie and then there was Berkeley Computer Corporation and then there was Xerox.

Seibel: Presumably at Berkeley you started working on bigger projects than your PDP-1 Lisp implementation.

Deutsch: Oh, yeah. Considerably larger projects at Project Genie. To begin with I wrote pretty much the whole operating-system kernel. The kernel was probably pushing 10,000 lines.

Seibel: How did that change (an order-of-magnitude difference in size) change your design process?

Deutsch: I'm trying to remember what was in the kernel. It was still a small enough program that I could approach it as a whole. There were obviously functional divisions. I know I had a clear mental model of which sections of the program were allowed to interact with which of the key data structures. But in fact there weren't very damn many data structures. There was a process table; there were ready lists. There were I/O buffers and there was some stuff for keeping track of virtual memory. And then there was an open file table, per process. But the descriptions of all the system data structures, in terms of C struct definitions, probably could have fit on two pages. So we're not talking about a complicated system.
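
For flavor, a two-page C description of the kind he's talking about might look something like this. All names, fields, and sizes are invented for illustration; they are not the actual 940 structures:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of the tables listed above: a process table,
 * a ready list, I/O buffers, virtual-memory bookkeeping, and a
 * per-process open file table. Invented names and sizes. */
enum pstate { P_READY, P_RUNNING, P_BLOCKED };

struct proc {
    int          pid;            /* 0 = slot free */
    enum pstate  state;
    unsigned     page_map[8];    /* virtual-memory bookkeeping */
    int          open_files[16]; /* per-process open file table */
    struct proc *next_ready;     /* link on the ready list */
};

struct iobuf { int busy; char data[256]; };

static struct proc  ptab[64];    /* the process table */
static struct proc *ready_head;  /* head of the ready list */
static struct iobuf bufs[16];    /* pooled I/O buffers */

/* Allocate a free process-table slot. */
static struct proc *alloc_proc(void)
{
    for (size_t i = 0; i < sizeof ptab / sizeof ptab[0]; i++)
        if (ptab[i].pid == 0) {
            ptab[i].pid   = (int)i + 1;
            ptab[i].state = P_READY;
            return &ptab[i];
        }
    return NULL;
}
```

The point is how little there is: a handful of fixed-size tables and lists, small enough to hold the whole design in your head at once.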

Seibel: What's the biggest system that you've worked on that you remember how it was designed?

Deutsch: I've been the primary mover on three large systems. Ghostscript (not counting the device drivers, most of which I didn't write) was probably somewhere between 50,000 and 100,000 lines of C.

On the ParcPlace Smalltalk virtual machine I worked only on the just-in-time compiler, which was probably 20 percent of it, and that was probably in the low single-digit thousands of lines of C. Maybe 3,000, 5,000, something like that.

And the Interlisp implementation, as much of it as I was concerned with, was probably a couple thousand lines of microcode, and maybe (I'm guessing now) another 5,000 lines of Lisp. So Ghostscript is probably the largest single system I've ever been involved with.

Seibel: And other than the device drivers written by other people, you basically wrote that by yourself.

Deutsch: Up to the end of 1999, I basically wrote every line of code. At the beginning I made a few architectural decisions. The first one was to completely separate the language interpreter from the graphics.

Seibel: The language being PostScript?

Deutsch: Right. So the language interpreter knew nothing about the data structures being used to do the graphics. It talked to a graphics library that had an API.

The second decision I made was to structure the graphics library using a driver interface. So the graphics library understood about pixels and it understood about curve rendering and it understood about text rendering but it knew as little as I could manage about how pixels were encoded for a particular device, how pixels were transmitted to a particular device.

The third decision was that the drivers would actually implement the basic drawing commands. Which, at the beginning, were basically draw-pixmap and fill-rectangle.

So the rendering library passed rectangles and pixel arrays to the driver. And the driver could either put together a full-page image if it wanted to or, for display, it could pass them through directly to Xlib or GDI or whatever. So those were the three big architectural decisions that I made up front and they were all good ones. And they were pretty much motherhood. I guess the principle that I was following was: if you have a situation where you have something that's operating in multiple domains and the domains don't inherently have a lot of coupling or interaction with each other, that's a good place to put a pretty strong software boundary.

So language interpretation and graphics don't inherently interact with each other very much. Graphics rendering and pixmap representation interact more, but that seemed like a good place to put an abstraction boundary as well.
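
The shape of that driver boundary can be sketched in C. The names here are invented for illustration and do not match Ghostscript's actual interface:

```c
#include <assert.h>
#include <string.h>

/* Sketch of a rendering-library/driver boundary in the style described:
 * the library hands the driver only rectangles and pixel arrays; the
 * driver decides how pixels are encoded and where they go. Invented
 * names, not Ghostscript's real API. */
typedef struct driver driver_t;

struct driver_ops {
    int (*fill_rectangle)(driver_t *dev, int x, int y, int w, int h,
                          unsigned color);
    int (*draw_pixmap)(driver_t *dev, const unsigned char *bits,
                       int x, int y, int w, int h);
};

struct driver {
    const struct driver_ops *ops; /* device-specific entry points */
    int   width, height;
    void *state;                  /* page buffer, X display, GDI DC, ... */
};

/* One possible driver: accumulate a full-page 8-bit image in memory,
 * the "put together a full-page image" case mentioned above. */
static int mem_fill(driver_t *dev, int x, int y, int w, int h, unsigned color)
{
    unsigned char *page = dev->state;
    for (int j = y; j < y + h; j++)
        memset(page + (size_t)j * dev->width + x, (int)color, (size_t)w);
    return 0;
}

static const struct driver_ops mem_ops = { mem_fill, 0 };
```

A display driver could supply a different `ops` table that forwards the same rectangles straight to the window system, which is exactly what makes the boundary strong: the rendering library never needs to know which kind it is talking to.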

In fact I wrote a Level 1 PostScript interpreter with no graphics before I wrote the first line of graphics code. If you open the manual and basically go through all the operators that don't make any reference to graphics, I implemented all of those before I even started designing the graphics. I had to design the tokenizer; I had to decide on the representation of all the PostScript data types and the things that the PostScript manual says the interpreter has to provide. I had to go back and redo a lot of them when we got to PostScript Level 2, which has garbage collection. But that's where I started.

Then I just let my experience with language interpreters carry me into the design of the data structures for the interpreter. Between the time that I started and the time that I was able to type in "3 4 add equals" and have it come back with 7 was about three weeks. That was very easy. And by the way, the environment in which I was working: MS-DOS. MS-DOS with a stripped-down Emacs and somebody's C compiler; I don't remember whose.
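
What "3 4 add" exercises is an operand stack of tagged objects. A toy version of that machinery might look like this; the representation is invented for illustration and is not Ghostscript's actual one:

```c
#include <assert.h>

/* Toy operand stack in the spirit of the interpreter described: just
 * enough machinery for "3 4 add" to leave 7 on the stack. */
enum ps_type { PS_INT, PS_REAL };

typedef struct {
    enum ps_type type;
    union { long i; double r; } u;
} ps_obj;

static ps_obj ostack[500];  /* a fixed operand stack */
static int    osp;          /* operand stack pointer */

static void push_int(long v)
{
    ostack[osp].type = PS_INT;
    ostack[osp++].u.i = v;
}

static ps_obj pop_obj(void) { return ostack[--osp]; }

/* The "add" operator: pop two operands, push their sum
 * (integer case only, for brevity; real PostScript add also
 * handles reals and overflow to real). */
static void op_add(void)
{
    ps_obj b = pop_obj(), a = pop_obj();
    push_int(a.u.i + b.u.i);
}
```

The tokenizer's job is then just to turn `3`, `4`, and `add` into pushes and an operator dispatch against this stack.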

Seibel: This was a task that you had done many times before, namely implementing an interpreter for a language. Did you just start in writing C code? Or fill up a notebook with data-structure diagrams?

Deutsch: The task seemed simple enough to me that I didn't bother to write out diagrams. My recollection was that first I sort of soaked myself in the PostScript manual. Then I might have made some notes on paper, but probably I just started writing C header files. Because, as I say, I like to design starting with the data.

Then I had some idea that, well, there'd have to be a file with a main interpreter loop. There'd have to be some initialization. There'd have to be a tokenizer. There'd have to be a memory manager. There'd have to be something to manage the PostScript notion of files. There'd have to be implementations of the individual PostScript operators. So I divided those up into a bunch of files, sort of by functionality.

When I took the trouble of registering the copyright in the Ghostscript code, I had to send them a complete listing of the earliest Ghostscript implementation. At that point it was like 10 years later; it was interesting to me to look at the original code and the original structure and the original names of various things and to note that probably 70 or 80 percent of the structure and naming conventions were still there, 10 years and 2 major PostScript language revisions later.

So basically that's what I did: data structures first. Rough division into modules. My belief is still, if you get the data structures and their invariants right, most of the code will just kind of write itself.

Seibel: So when you say you write a header file, is that to get the function signatures or the structs, or both?

Deutsch: The structs. This was 1988, before ANSI C; there weren't function signatures. Once ANSI C compilers had pretty much become the standard, I took two months and went through and I made function signatures for every function in Ghostscript.
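
The before-and-after he describes looks roughly like this; `scale` is an invented example function, not anything from Ghostscript:

```c
#include <assert.h>

/* Pre-ANSI (K&R) C: a declaration carries no parameter information,
 * so the compiler cannot check calls against it. */
int scale();          /* K&R-style declaration: arguments unchecked */

int scale(v, n)       /* K&R-style definition */
    int v;
    int n;
{
    return v * n;
}

/* The ANSI prototype retrofit: the full signature is declared, so
 * every call site is checked, which is what made a two-month
 * conversion pass worthwhile on a codebase that size. */
int scale_ansi(int v, int n)
{
    return v * n;
}
```

Note that modern compilers still accept the K&R form (until C23 removed it), but only the prototyped form catches a call like `scale_ansi("3", 4)` at compile time.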

Seibel: How has your way of thinking about programming, or your practice of programming, changed from those earliest days to now?

Deutsch: It's changed enormously because the kinds of programs that I find interesting have changed enormously. I think it's fair to say that the programs that I wrote for the first few years were just little pieces of code.

Over time I've considered the issues of how do you take a program that does something larger and interesting and structure it and think about it, and how do you think about the languages that you use for expressing it in a way that manages to accomplish your goals of utility, reliability, efficiency, transparency.

Now I'm aware of a much larger set of criteria for evaluating software. And I think about those criteria in the context of much larger and more complex programs, programs where the architectural or system-level issues are where all the hard work is. Not to say that there isn't still hard work to be done in individual algorithms, but that's not what seems most interesting to me any more; it hasn't for a long time.

Seibel: Should all programmers grow up to work at that level?

Deutsch: No. In fact, I was just reading that an old friend of mine from Xerox PARC, Leo Guibas, just received some fairly high award in the field. He has never really been a systems guy in the sense that I've been; he's been an algorithms guy, a brilliant one. He's found a way to think about certain classes of analysis or optimization algorithms in a way that's made them applicable to a lot of different problems, and that has yielded new tools for working with those problems. So, it's wonderful work. Programmers should be able to grow up to be Leo Guibas, too.

There's a parallel between architectural principles and the kinds of algorithmic design principles that Leo and people like him use to address these hard optimization and analysis problems. The difference is that the principles for dealing with algorithmic problems are based a lot more directly on 5,000 or 10,000 years' worth of history in mathematics. How we go about programming now, we don't have anything like that foundation to build on. Which is one of the reasons why so much software is crap: we don't really know what we're doing yet.

Seibel: So is it OK for people who don't have a talent for systems-level thinking to work on smaller parts of software? Can you split the programmers and the architects? Or do you really want everyone who's working on systems-style software, since it is sort of fractal, to be able to think in terms of systems?

Deutsch: I don't think software is fractal. It might be nice if it were, but I don't think it is, because I don't think we have very good tools for dealing with the things that happen as systems get large. I think the things that happen when systems get large are qualitatively different from the things that happen as systems go from being small to medium size.

But in terms of who should do software, I don't have a good flat answer to that. I do know that the further down in the plumbing the software is, the more important it is that it be built by really good people. That's an elitist point of view, and I'm happy to hold it.

Part of what's going on today is that the boundary between what is software and what isn't software is getting blurred. If you have someone who's designing a web site, if that web site has any kind of even moderately complex behavior in terms of interacting with the user or tracking state, you have tools for building web sites like that. And working with those tools (as I understand it, not having used them) accomplishes some of the same ends as programming, but the means don't look very much like writing programs.

So one of the answers to your question might be that over time a larger and larger fraction of what people used to think of as requiring programming won't be programming any more and pretty much anybody will be able to do it and do a reasonable job of it.

You know the old story about the telephone and the telephone operators? The story is, sometime fairly early in the adoption of the telephone, when it was clear that use of the telephone was just expanding at an incredible rate, more and more people were having to be hired to work as operators because we didn't have dial telephones. Someone extrapolated the growth rate and said, "My God. By 20 or 30 years from now, every single person will have to be a telephone operator." Well, that's what happened. I think something like that may be happening in some big areas of programming, as well.

Seibel: Can programmers be replaced that way?

Deutsch: Depends on what you want to program. One of the things that I've been thinking about off and on over the last five-plus years is, "Why is programming so hard?"

You have the algorithmic side of programming and that's close enough to mathematics that you can use mathematics as the basic model, if you will, for what goes on in it. You can use mathematical methods and mathematical ways of thinking. That doesn't make it easy, but nobody thinks mathematics is easy. So there's a pretty good match between the material you're working with and our understanding of that material and our understanding of the skill level that's required to work with it.

I think part of the problem with the other kind of programming is that the world of basically all programming languages that we have is so different, in such deep ways, from the physical world that our senses and our brains and our society have coevolved to deal with, that it is loony to expect people to do well with it. There has to be something a little wrong with you for you to be a really good programmer. Maybe "wrong with you" is a little too strong, but the qualities that make somebody a well-functioning human being and the qualities that make somebody a really good programmer: they overlap, but they don't overlap a whole heck of a lot. And I'm speaking as someone who was a very good programmer.

The world of von Neumann computation and Algol-family languages has such different requirements than the physical world that, to me, it's actually quite surprising that we manage to build large systems at all that work even as poorly as they do.

Perhaps it shouldn't be any more surprising than the fact that we can build jet airliners, but jet airliners are working in the physical world and we have thousands of years of mechanical engineering to draw on. For software, we have this weird world with these weird, really bizarre fundamental properties. The physical world's properties are rooted in subatomic physics, but you've got these layers: you've got subatomic physics, you've got atomic physics, you've got chemistry. You've got tons of emergent properties that come out of that and we have all of this apparatus for functioning well in that world.

I don't look around and see anything that looks like an address or a pointer. We have objects; we don't have these weird things that computer scientists misname "objects."

Seibel: To say nothing of the scale. Two to the 64th of anything is a lot, and things happening billions of times a second is fast.

Deutsch: But that doesn't bother us here in the real world. You know Avogadro's number, right? Ten to the 23rd? So, we're looking around here at a world that has incredible numbers of little things all clumped together and happening at the same time. It doesn't bother us because the world is such that you don't have to understand this table at a subatomic level.

The physical properties of matter are such that 99.9 percent of the time you can understand it in aggregate. And everything you have to know about it, you can understand from dealing with it in aggregate. To a great extent, that is not true in the world of software.

People keep trying to do modularization structures for software. And the state of that art has been improving over time, but it's still, in my opinion, very far away from the ease with which we look around and see things that have, whatever it is, 10 to the 23rd atoms in them, and it doesn't even faze us.

Software is a discipline of detail, and that is a deep, horrendous fundamental problem with software. Until we understand how to conceptualize and organize software in a way that we don't have to think about how every little piece interacts with every other piece, things are not going to get a whole lot better. And we're very far from being there.

Seibel: Are the technical reasons things that could be changed, or is it just the nature of the beast?

Deutsch: You'd have to start over. You'd have to throw out all languages that have the concept of a pointer to begin with, because there is no such thing as a pointer in the real world. You'd have to come to grips with the fact that information takes space and exists over time and is located at a particular place.

Seibel: As you made the progression from writing small pieces of code to building big systems, did you still write small pieces of code the same way and just add a new perspective about bigger systems, or did it actually change the way you did the whole thing?

Deutsch: It changed the way I did the whole thing. The first significant programs I wrote would be the ones on the UNIVAC at Harvard. The next little cluster would be the work that I did on the PDP-1 at MIT. There were really three different programs or systems that I think of dating from that era, in the early-1960s timeframe, around when I was in high school.

There was a Lisp interpreter that I built for a stock PDP-1. I did some work on the operating system for Jack Dennis's weird modified PDP-1. And I wrote a text editor for Dennis's PDP-1.

Those three systems I still wrote basically monolithically. The difference from my old programs on the UNIVAC was I had to start doing data-structure design. So that was the first big shift in what kind of programming I was doing.

I was starting to be aware of what I would call functional segmentation, but I didn't think of it as having any particular significance. I was aware that you could write certain parts of the program and not have to think about other parts of the program while you were doing it. But the issues about interfaces, which really become paramount as programs get big, I don't recall those being of concern.

That transition happened with the next big cluster of work, which was the work that I did at Berkeley, mostly as an undergraduate, on Project Genie: the 940 timesharing system and the QED text editor. And I wrote an assembly-language debugger, but I don't remember much of anything about it.

The piece that had the most system flavor to it was the operating system. It's not fair to say that I wrote the whole operating system; I didn't. But I essentially wrote the whole kernel. This was done in assembly language. We're talking about programs that are getting to be a little larger here, probably on the order of 10,000 lines of assembler. It had a process scheduler. It had virtual memory. It had a file system. In fact, it had several file systems.

There, there were more complex questions of data-structure design. The one that I remember is there was an active process table. And there was a question as to how to design that and how the operating system should decide when a process was runnable or not, that kind of thing. There were structures for keeping track of the virtual memory system. But some issues of interface started to emerge. Not within the operating system itself, because the operating system was so small that the kernel was designed essentially as a single piece, as a monolith.

But there were two important areas where issues of software interface started to emerge. One of them was simply the interface between user programs and the kernel. What should the system calls be? How should the parameters be laid out? I know that in the first few versions of the 940 TSS, the basic operations for reading and writing files were the equivalent of the Unix read and write calls, where you basically gave a base address and a count. Well, that was all very well and good, but a lot of the time that wasn't what you wanted. You basically wanted a stream interface. And in those days, we didn't have the concept that you could take an operating-system facility and then wrap user-level code around it to give you some nicer interface, the way getc and putc get built on top of read and write. So what we actually did was, in later versions of the operating system, we added operating-system calls that were the equivalent of getc and putc.
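
The user-level wrapping he says they lacked is how stdio later did it. A minimal sketch, assuming POSIX read and with invented names:

```c
#include <assert.h>
#include <unistd.h>

/* Sketch of a stream interface built in user code on top of the raw
 * base-address-and-count read call, the way getc is built on read.
 * Names are invented for illustration; real stdio is more elaborate. */
typedef struct {
    int  fd;        /* underlying descriptor */
    int  pos, len;  /* cursor into the buffer and bytes held */
    char buf[512];
} stream_t;

/* Return the next byte, refilling the buffer with one read() call per
 * 512 bytes instead of one system call per character; -1 on EOF/error. */
int my_getc(stream_t *s)
{
    if (s->pos >= s->len) {
        ssize_t n = read(s->fd, s->buf, sizeof s->buf);
        if (n <= 0)
            return -1;
        s->len = (int)n;
        s->pos = 0;
    }
    return (unsigned char)s->buf[s->pos++];
}
```

Once you have this layering, the kernel never needs a getc-equivalent call at all, which is exactly the insight that came later.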

The other place where issues of interface started to show up was, again based on the MULTICS model, that from the beginning we had a strong distinction between the kernel and what today we would call the shell. This was early enough in the development of operating systems that we didn't realize that you could, in fact, build a shell with essentially no special privileges. The shell was a user-mode program, but it had a lot of special privileges. But there were some issues about what facilities the kernel had to give the shell: what things the shell should be able to do directly and what things it should have to make kernel calls for.

We saw interface-design choices just emerging from the task. That was the point in my career at which I dimly started to become aware that interfaces between entities really needed to be designed separately, that the interfaces between them were also an important design issue.

So responding to your question about whether the way I programmed in the small changed as I started to work with larger systems, the answer is yes. As I built larger and larger systems, I found that when sitting down to write any piece of code, more and more the question I would ask myself first was, "OK, what's the interface between this and everything around it going to look like? What gets passed in? What gets passed out? How much of a task should be on which side of an interface?" Those kinds of questions started to become a larger and larger part of what I was dealing with. And they did affect the way that I wrote individual smaller chunks of code, I think.

Seibel: And this was a natural consequence of working on bigger systems: eventually the systems get big enough that you just have to find some way to break them apart.

Deutsch: That's right. In that sense I agree that software is fractal, in that decomposition is an issue that happens at many different levels. I was going to say I don't think that the kinds of decomposition at bigger grains are qualitatively the same as the kinds of decomposition that happen at smaller grains. I'm not sure. When you're doing decomposition at a smaller grain you may not be thinking, for example, about resource allocation; when you're doing decomposition at a larger grain you have to.

Seibel: Have you worked with people who, in your opinion, were very good programmers, yet who could only work at a certain scale? Like they could work well on problems up to a certain size, but beyond that they just didn't have the mentality for breaking systems apart and thinking about them that way?

Deutsch: I've worked with programmers who were very smart but who didn't have experience at larger-scale programming. And for Ghostscript, I did have some pretty serious disagreements with two of the engineers who were brought into the team as it was getting handed over to the company that I started. Both very smart and hardworking guys, both experienced. I thought they were very good programmers, good designers. But they were not system thinkers. Not only were they not used to thinking in terms of the impact or ramifications of changes; to some extent they didn't even realize that that was an essential question to ask. To me the distinction is between people who understand what the questions are that you have to ask in doing larger-scale design and the people who, for whatever reason, don't see those questions as well.

Seibel: But you think those people, when they're not trying to be the architect of a whole system, do good work?

Deutsch: Yeah. The two engineers that I'm thinking of both did really great work for the company. One of them was working on something that was large and rather thankless but important commercially. And the other one redid some substantial chunks of my graphics code, and his code produces better-looking results. So these are good, smart, experienced guys. They just kind of don't see that part of the picture; at least that's my take on it.

Seibel: Are there particular skills that you feel made you a good programmer?

Deutsch: I'm going to give you a very new-age answer here. I'm not a new-age kind of guy generally speaking, although I went through my period of really long hair. When I was at what I would consider the peak of my abilities, I had extremely trustworthy intuition. I would do things and they would just turn out right. Some of that was luck. Some of it, I'm sure, was experience that had simply gotten internalized so far down that I didn't have conscious access to the process. But I think I just had a knack for it. I know that's not a very satisfying answer, but I truly believe that some of what made me good at what I did was something that was just there.

Seibel: In your days as a precocious teenager hanging out at MIT, did you ever have a chance to observe, "Wow, this guy's really smart but he doesn't know how to do this thing that I know how to do"?

Deutsch: No, I didn't. Well, OK, I do remember when I started rewriting the text editor on Dennis's PDP-1; I must have been 15 or 16. The original code had been written by one or two of the guys from the Tech Model Railroad Club group. Those were smart guys. And I looked at the code and I thought a lot of it was awful.

I would not say it was a difference between me and the people I was working around. It was a difference between the way I thought code should be and the code that I saw in front of me. I would hesitate to generalize that into a statement about the people.

I've always been really comfortable in what I would call the symbolic world, the world of symbols. Symbols and their patterns have always just been the stuff I eat for lunch. And for a lot of people that's not the case. I see it even with my partner. We're both musicians. We're both composers. We're both vocal performers. But I come at music primarily from a symbolic point of view. I do a lot of my composition just with a pencil and a pad of paper. The notes are in there but I'm not picking them out on the piano. I hear them and I have a plan.

Whereas he does most of his composition on his guitar. He plays stuff and fools around with it, maybe plunks around at the piano a little bit, runs through it again. And he never writes anything down. He'll write down the chord sequences, maybe, if pressed, and I guess at some point he writes down the words. But he doesn't come at composition from the symbol-based mindset at all.

So some people go that way, some people don't. If I were going to draw lessons from it, well, again, I'm kind of an elitist: I would say that the people who should be programming are the people who feel comfortable in the world of symbols. If you don't feel really pretty comfortable swimming around in that world, maybe programming isn't what you should be doing.

Seibel: Did you have any important mentors?

Deutsch: There were two people. One of them is someone who's no longer around; his name was Calvin Mooers. He was an early pioneer in information systems. I believe he is credited with actually coining the term information retrieval. His background was originally in library science. I met him when I was, I think, high-school or college age. He had started to design a programming language that he thought would be usable directly by just people. But he didn't know anything about programming languages. And at that point, I did, because I had built this Lisp system and I'd studied some other programming languages.

So we got together and the language that he eventually wound up making was one that I think it's fair to say he and I kind of co-designed. It was called TRAC. He was just a real supportive guy at that point for me.

The other person that I've always thought of as something of a mentor is Danny Bobrow. He and I have been friends for a very long time. And I've always thought of him as kind of a mentor in the process of my career.

But in terms of actually how to program, how to do software, there wasn't anybody at MIT. There wasn't really anybody at Berkeley. At PARC, only one person really affected the way that I did software, and he wasn't even a programmer. That was Jerry Elkind, who was the manager of the Computer Science Lab at PARC for a while.

The thing that he said that had a profound effect on me was how important it is to measure things; that there'll be times, maybe more times than you think, when your beliefs or intuitions just won't be right, so measure things. Even sometimes measure things you don't think you need to measure. That had a profound effect on me.

When I want to do something that's going to involve a significant amount of computation or a significant amount of data, one of the things that I always do now is measure. And that's been the case since I was at PARC, which was starting about 35 years ago.

Seibel: You were the only person I contacted about this book who had a really strong reaction to the word coder in the title. How would you prefer to describe yourself?

Deutsch: I have to say at this point in my life I have even a mildly negative reaction to the word programmer. If you look at the process of creating software that actually works, that does something useful, there are a lot of different roles and a lot of different processes and skills that go into achieving that end. Someone can call themselves a programmer and that doesn't tell you very much about what set of skills they actually bring to bear on that process.

But at least the word programmer is pretty well established as covering a pretty wide range. Coder is strongly associated with the smallest and most narrowly focused part of that whole endeavor. I think of coder, in relation to the process of producing software that actually works and does something useful, as being maybe slightly above bricklayer in the process of constructing buildings that actually work.

There's nothing wrong with being a coder. There's nothing wrong with being a bricklayer, either. There's a lot of skill that goes into doing it well. But it represents such a small corner of the whole process.

Seibel: What is an encompassing term that would work for you? Software developer? Computer scientist?

Deutsch: I have a little bit of a rant about computer science also. I could make a pretty strong case that the word science should not be applied to computing. I think essentially all of what's called computer science is some combination of engineering and applied mathematics. I think very little of it is science in terms of the scientific process, where what you're doing is developing better descriptions of observed phenomena.

I guess if I were to pick a short snappy phrase I would probably say software developer. That covers pretty much everything from architecture to coding. It doesn't cover some of the other things that have to happen in order to produce software that actually works and is useful, but it covers pretty much all of what I've done.

Seibel: What doesn't it cover?

Deutsch: It doesn't cover the process of understanding the problem domain and eliciting and understanding the requirements. It doesn't cover the process, at least not all of the process, of the kind of feedback loops from testing to what happens after the software has been released. Basically software developer refers to the world within the boundaries of the organization that's developing the software. It says very little about the connections between that organization and its customers or the rest of the world, which, after all, are what justifies the creation of software in the first place.

Seibel: Do you think that's changing? These days there are people advocating trying to connect earlier with the customer or user and really making that part of the software developer's job.

Deutsch: Yes, XP certainly does that. I'm not a big fan of XP, and it's for two reasons. XP advocates very tight coupling with the customer during the development process on, I guess, two grounds. One is that this results in the customer's needs being understood and met better. That may well be true. I don't have firsthand knowledge of it but I'm a little wary of that because the customer doesn't always know what the customer's needs are.

The other reason that XP, I think, advocates this tight coupling with the customer is to avoid premature generalization or overdesign. I think that's a two-edged sword, because I've seen that process go astray in both directions: both premature generalization and premature specialization.

So I have some questions about XP in that respect. What happens after the project is done? Is it maintainable? Is it supportable? Is it evolvable? What happens when the original developers have left? Because XP is so documentation-phobic, I have very grave concerns about that.

That's an issue I've had with a number of people who are very much into rapid prototyping or any form of software development that doesn't treat it as engineering. I seriously question how well software that is not built from an engineering point of view lasts.

Seibel: Can you give an example of when you've seen generalization or specialization go awry?

Deutsch: When I was in the peak years of my career, one of the things that I did extremely well, and I can't claim that I did it in a completely systematic way, was to pick the right level of generality to cover several years' worth of future evolution in directions that might not have been completely obvious.

But I think in retrospect the one example of premature specialization was the decision in Ghostscript, at an architectural level, to use pixel-oriented rather than plane-oriented representation of color maps. To use bitmaps and to require the representation of a pixel to fit in a machine long.

The fact that it used a chunky rather than planar representation meant that it turned out to be very awkward to deal with spot color, where you have printers that may, for specific jobs, require colors that are not the standard CMYK inks: for example, silver, gold, or special tints that have to be matched exactly.

If you look at a pixelized color image there are more or less two ways of representing that in memory. You can represent it in memory as an array of pixels where each pixel contains RGB or CMYK data for the spot on the image. That's typically the way display controllers work, for example.

The other way, which is more common in the printing industry, is to have an array that contains the amount of red for each pixel, then another that contains the amount of green for each pixel, then another that contains the amount of blue, etc., etc. If you're processing things on a pixel-by-pixel basis, this is less convenient. On the other hand, it doesn't impose any a priori constraint on how many different inks or how many different plates can go into the production of a given image.

Seibel: So if you have a printer that can use gold ink, you just add a plane.

Deutsch: Right. This certainly is not common in consumer-grade printers or even typically in office printers. But in offset printing it is relatively common to have special layers. So that was one area of insufficient generalization.
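The two memory layouts under discussion can be sketched in a few lines of Python (a hypothetical illustration; Ghostscript's actual data structures are in C and far more involved):

```python
WIDTH = 4  # a tiny one-row "image"

# Chunky (pixel-oriented): one array of pixels; each pixel packs all of its
# ink values together, so the number of inks is baked into the pixel format.
chunky = [(0, 0, 0, 255)] * WIDTH  # a row of solid black in CMYK

# Planar (plane-oriented): one array per ink, as in the printing industry.
planar = {
    "cyan":    [0] * WIDTH,
    "magenta": [0] * WIDTH,
    "yellow":  [0] * WIDTH,
    "black":   [255] * WIDTH,
}

# Adding a spot color such as gold is just one more plane; the chunky layout
# would instead need a wider pixel format and changes to everything that
# touches a pixel.
planar["gold"] = [128] * WIDTH
```

The planar layout trades pixel-at-a-time convenience for an open-ended ink count, which is exactly the trade-off described above.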

So that's an example where even with a great deal of thought and skill I missed the boat. It doesn't illustrate my point well; in a sense it undermines my point because, in this case, even careful foresight resulted in insufficient generalization. And I can tell you exactly where that insufficient foresight came from: it came from the fact that Ghostscript was basically done by one very smart guy who had no acquaintance with the printing industry.

Seibel: Meaning you.

Deutsch: Right. Ghostscript started out as strictly a screen previewer for PostScript files, because there wasn't one and PDF didn't exist yet. If I were going to draw a moral from that story, it's that requirements always change; they are always going to at least attempt to change in directions you didn't think of.

There are two schools of thought as to how you prepare yourself for that. One school of thought, which I think is probably pretty close to the way XP looks at it, basically says that because requirements are going to change all the time, you shouldn't expect software to last. If the requirements change, you build something new. There is, I think, a certain amount of wisdom in that.

The problem being the old saying in the business: fast, cheap, good; pick any two. If you build things fast and you have some way of building them inexpensively, it's very unlikely that they're going to be good. But this school of thought says you shouldn't expect software to last.

I think behind this perhaps is a mindset of software as expense vs. software as capital asset. I'm very much in the software-as-capital-asset school. When I was working at ParcPlace and Adele Goldberg was out there evangelizing object-oriented design, part of the way we talked about objects and advocated object-oriented languages and design to our customers and potential customers was to say, "Look, you should treat software as a capital asset."

And there is no such thing as a capital asset that doesn't require ongoing maintenance and investment. You should expect that there's going to be some cost associated with maintaining a growing library of reusable software. And that is going to complicate your accounting, because it means you can't charge the cost of building a piece of software only to the project or the customer that's motivating the creation of that software at this time. You have to think of it the way you would think of a capital asset.

Seibel: Like building a new factory.

Deutsch: Right. A large part of the sell for objects was that well-designed objects are reusable, so the investment that you put into the design pays itself back in less effort going down the road.

I still believe that, but probably not quite as strongly as I did. The things that I see getting reused these days are either very large or very small. The scale of reuse that we were talking about when we were promoting objects was classes and groups of classes. Except in situations where you have a collection of classes that embody some kind of real domain knowledge, I don't see that happening much.

What I see getting reused is either very small things (individual icons, individual web page designs) or very big things like entire languages or large applications with extension architectures like Apache or Mozilla.

Seibel: So you don't believe the original object-reuse pitch quite as strongly now. Was there something wrong with the theory, or has it just not worked out for historical reasons?

Deutsch: Well, part of the reason that I don't call myself a computer scientist any more is that I've seen software practice over a period of just about 50 years and it basically hasn't improved tremendously in about the last 30 years.

If you look at programming languages I would make a strong case that programming languages have not improved qualitatively in the last 40 years. There is no programming language in use today that is qualitatively better than Simula-67. I know that sounds kind of funny, but I really mean it. Java is not that much better than Simula-67.

Seibel: Smalltalk?

Deutsch: Smalltalk is somewhat better than Simula-67. But Smalltalk as it exists today essentially existed in 1976. I'm not saying that today's languages aren't better than the languages that existed 30 years ago. The language that I do all of my programming in today, Python, is, I think, a lot better than anything that was available 30 years ago. I like it better than Smalltalk.

I use the word qualitatively very deliberately. Every programming language today that I can think of, that's in substantial use, has the concept of a pointer. I don't know of any way to make software built using that fundamental concept qualitatively better.

Seibel: And you're counting Python- and Java-style references as pointers?

Deutsch: Absolutely. Yes. Programs built in Python and Java, once you get past a certain fairly small scale, have all the same problems, except for storage corruption, that you have in C or C++.

The essence of the problem is that there is no linguistic mechanism for understanding or stating or controlling or reasoning about patterns of information sharing and information access in the system. Passing a pointer and storing a pointer are localized operations, but their consequences are to implicitly create this graph. I'm not even going to talk about multithreaded applications; even in single-threaded applications you have data that's flowing between different parts of the program. You have references that are being propagated to different parts of the program. And even in the best-designed programs, you have these two or three or four different complex patterns of things that are going on and no way to describe or reason about or characterize large units in a way that actually constrains what happens in the small. People have taken runs at this problem. But I don't think there have been any breakthroughs and I don't think there have been any sort of widely accepted or widely used solutions.
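A minimal Python illustration of the kind of implicit sharing graph being described (the function names here are invented for the example):

```python
def configure_logging(settings):
    # A locally reasonable mutation through a received reference.
    settings["level"] = "DEBUG"

def read_log_level(settings):
    # Silently coupled to whatever else holds a reference to this dict.
    return settings["level"]

defaults = {"level": "INFO"}

# Passing the reference is a localized operation, but it creates an edge in
# an invisible sharing graph: both functions now alias the same object.
configure_logging(defaults)
assert read_log_level(defaults) == "DEBUG"  # not the "INFO" it started with

# Nothing in the language records or constrains that coupling; the only
# defense is a deliberate copy, remembered at every call site.
import copy
isolated = copy.deepcopy(defaults)
```

Nothing in the program text marks `defaults` as shared; the coupling exists only in the runtime object graph.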

Seibel: They aren't, perhaps, widely used, but what about pure functional languages?

Deutsch: Yes, pure functional languages have a different set of problems, but they certainly cut through that Gordian knot.

Every now and then I feel a temptation to design a programming language but then I just lie down until it goes away. But if I were to give in to that temptation, it would have a pretty fundamental cleavage between a functional part that talked only about values and had no concept of pointer and a different sphere of some kind that talked about patterns of sharing and reference and control.

Being a compiler and interpreter guy, I can think of lots of ways of implementing a language like that that don't involve copying around big arrays all the time. But the functional people are way ahead of me on that. There are a lot of smart people who've been working on Haskell and similar languages.

Seibel: Wouldn't the Haskell guys come back and say, "Yes, that's our monads, and the way that it's clearly differentiated is in the type system"?

Deutsch: You know, I have never understood Haskell monads. I think I stopped tracking functional languages at ML.

If you look at E (this is not a language that anyone knows about, to speak of), it's a language based on a very strong notion of capability. It's related to Hewitt's actor languages and it's related to capability-based operating systems. It has ports, or communication channels, as the fundamental connection between two objects, the idea being that neither end of the connection knows what the other end of the connection is. So this is very different from a pointer, which is unidirectional and where the entity that holds the pointer has a pretty strong idea what's at the other end of it. It's based on very strong opacity.

My sort of fuzzy first-order idea is that you have a language in which you have functional computations and you do not have sharing of objects. What you have is some form of serialized ports. Whenever you want to talk to something that you only know by reference, it's part of the basic nature of the language that you are aware that whatever that thing out there is, it's something that's going to be dealing with multiple sources of communication and therefore has to be expected to serialize or arbitrate or something. There's no concept of attribute access and certainly no concept of storing into an attribute.

There are languages in which you have opaque APIs so the implementations can maintain invariants; that still doesn't tell you anything about the larger patterns of communication. For example, one common pattern is: you have an object, you hand it off to some third party, you tell that third party to do certain things to it, and then at some point you ask for that object back. That's a pattern of sharing. You, the caller, may never have actually given up all pointers to the object that you handed off. But you agree with yourself not to make any references through that pointer until that third party has done whatever you asked them to.

This is a very simple example of a pattern of structuring a program that, if there were a way to express it linguistically, would help people ensure that their code was conformant with their intentions.
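There is no way to state that agreement in mainstream languages, but it can at least be checked at runtime. Here is a small hypothetical Python sketch of the hand-off pattern, with the "don't touch it until you ask for it back" rule enforced by a wrapper (all names invented for illustration):

```python
class Loaned:
    """Hold an object; while it is lent out, any access through this
    handle raises instead of silently aliasing the borrower's work."""

    def __init__(self, obj):
        self._obj = obj
        self._on_loan = False

    def lend(self):
        self._on_loan = True
        return self._obj          # the third party gets the real object

    def reclaim(self):
        self._on_loan = False

    def get(self):
        if self._on_loan:
            raise RuntimeError("object is on loan; sharing pattern violated")
        return self._obj

handle = Loaned([1, 2, 3])
borrowed = handle.lend()   # hand the object off to a third party
borrowed.append(4)         # the third party does its work
handle.reclaim()           # ask for the object back
assert handle.get() == [1, 2, 3, 4]
```

This is runtime checking rather than the linguistic, statically checkable expression being asked for; a static version of roughly this discipline is what Rust's borrow checker later built into a language.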

Maybe the biggest reason why I haven't actually undertaken this effort to design a language is that I don't think I have enough insight to know how to describe patterns of sharing and patterns of communication at a high enough level, and in a composable enough way, to pull it off. But I think that is why constructing software today is so little better than it was 30 years ago.

My PhD thesis was about proofs of program correctness. I don't use that term anymore. What I say is that you want to have your development system do as much work as possible toward giving you confidence that the code does what you intend it to do.

The old idea of program correctness was that there were these assertions that were your expressions of what you intended the code to do, in a way that was mechanically checkable against the code itself. There were lots of problems with that approach. I now think that the path to software that's more likely to do what we intend lies not through assertions, or inductive assertions, but through better, more powerful, deeper declarative notations.

Jim Morris, who's one of my favorite originators of computer epigrams, once said that a type checker is just a Neanderthal correctness-prover. If there's going to be a breakthrough, that's where I see it coming from: from more powerful ways of talking declaratively about how our programs are intended to be structured and what our programs are intended to do.

Seibel: So, for instance, you could somehow express the notion, "I'm passing a reference to this object over to this other subsystem, which is going to frob it for a while, and I'm not going to do anything with it until I get it back."

Deutsch: Yes. There was some experimental work being done at Sun when I was there in the early '90s on a language that had a concept similar to that in it. And there was a bunch of research done at MIT by Dave Gifford on a language called FX that also tried to be more explicit about the distinction between functional and nonfunctional parts of a computation and to be more explicit about what it meant when a pointer went from somewhere to somewhere.

But I feel like all of this is looking at the issue from a fairly low level. If there are going to be breakthroughs that make it either impossible or unnecessary to build catastrophes like Windows Vista, we will just need new ways of thinking about what programs are and how to put them together.

Seibel: So, despite it not being qualitatively better than Smalltalk, you still like Python better.

Deutsch: I do. There are several reasons. With Python there's a very clear story of what is a program and what it means to run a program and what it means to be part of a program. There's a concept of module, and modules declare basically what information they need from other modules. So it's possible to develop a module or a group of modules and share them with other people, and those other people can come along and look at those modules and know pretty much exactly what they depend on and know what their boundaries are.

In Smalltalk it is awkward to do this: if you develop in Smalltalk in the image mode, there never is such a thing as the program as an entity in itself. VisualWorks, which is the ParcPlace Smalltalk, has three or four different concepts of how to make things larger than a single class, and they've changed over time and they're not supported all that well by the development tools, at least not in a very visual way. There's little machinery for making it clear and explicit and machine-processable what depends on what. So if you're developing in the image mode, you can't share anything with anybody other than the whole image.

If you do what's called filing out, writing out the program in a textual form, you have absolutely no way of knowing whether you can read that program back in again and have it come back and do the same thing that it does, because the state of the image is not necessarily one that was produced, or that can be produced, by reading in a set of source code. You might have done arbitrary things in a workspace; you might have static variables whose values have been modified over time. You just don't know. You can't reliably draw lines around anything.

I'm on the VisualWorks developers' list and the stuff that I see coming up over and over again is stuff that cannot happen in languages that don't use the image concept. The image concept is like a number of other things in the rapid-prototyping, rapid-development world. It's wonderful for single-person projects that never go outside that person's hands. It's awful if you want software to become an asset, if you want to share software with other people. So I think that's the real weakness of the Smalltalk development approach, and a serious one.

The second reason I like Python is that, and maybe this is just the way my brain has changed over the years, I can't keep as much stuff in my head as I used to. It's more important for me to have stuff in front of my face. So the fact that in Smalltalk you effectively cannot put more than one method on the screen at a time drives me nuts. As far as I'm concerned, the fact that I edit Python programs with Emacs is an advantage because I can see more than ten lines' worth at a time.

I've talked with the few of my buddies that are still working at VisualWorks about open-sourcing the object engine, the just-in-time code generator, which, even though I wrote it, I still think is better than a lot of what's out there. Gosh, here we have Smalltalk, which has this really great code-generation machinery, which is now very mature; it's about 20 years old and it's extremely reliable. It's a relatively simple, relatively retargetable, quite efficient just-in-time code generator that's designed to work really well with non-type-declared languages. On the other hand, here's Python, which is this wonderful language with these wonderful libraries and a slow-as-mud implementation. Wouldn't it be nice if we could bring the two together?

Seibel: Wasn't that sort of the idea behind your pycore project, to reimplement Python in Smalltalk?

Deutsch: It was. I got it to the point where I realized it would be a lot more work than I thought to actually make it work. The mismatches between the Python object model and the Smalltalk object model were bad enough that there were things that could not be simply mapped one-for-one but had to be done through extra levels of method calls and this, that, and the other.

Even at that, Smalltalk with just-in-time code generation was, for code that was just written in Python, still in the same range as the C-coded interpreter. So the idea that I had in mind was that if it had been possible to open-source the Smalltalk code generator, taking that code generator and adapting it to work well with the Python object model and the Python data representation would not have been a huge deal.

But it can't be done. Eliot Miranda, who's probably the most radical of my buddies associated with VisualWorks, tried, and Cincom said, "Nope, it's a strategic asset, we can't open-source it."

Seibel: Well, you're the guy who says software should be treated as a capital asset.

Deutsch: But that doesn't necessarily mean that it's always your best strategy to prevent other people from using it.

Seibel: So in addition to being a Smalltalker from way back, you were also an early Lisp hacker. But you're not using it anymore either.

Deutsch: My PhD thesis was a 600-page Lisp program. I'm a very heavy-duty Lisp hacker, from PDP-1 Lisp, Alto Lisp, Byte Lisp, and Interlisp. The reason I don't program in Lisp anymore: I can't stand the syntax. It's just a fact of life that syntax matters.

Language systems stand on a tripod. There's the language, there's the libraries, and there are the tools. And how successful a language is depends on a complex interaction between those three things. Python has a great language, great libraries, and hardly any tools.

Seibel: Where tools includes the actual implementation of the language?

Deutsch: Sure, let's put them there. Lisp as a language has fabulous properties of flexibility but really poor user values in terms of its readability. I don't know what the status of Common Lisp libraries is these days, but I think syntax matters a lot.

Seibel: Some people love Lisp syntax and some can't stand it. Why is that?

Deutsch: Well, I can't speak for anyone else. But I can tell you why I don't want to work with Lisp syntax anymore. There are two reasons. Number one, and I alluded to this earlier, is that the older I've gotten, the more important it is to me that the density of information per square inch in front of my face is high. The density of information per square inch in infix languages is higher than in Lisp.

Seibel: But almost all languages are, in fact, prefix, except for a small handful of arithmetic operators.

Deutsch: That's not actually true. In Python, for example, it's not true for list, tuple, and dictionary construction. That's done with bracketing. String formatting is done infix.

Seibel: As it is in Common Lisp with FORMAT.

Deutsch: OK, right. But the things that aren't done infix, the common ones being loops and conditionals, are not done prefix either. They're done by alternating keywords and what it is they apply to. In that respect they are actually more verbose than Lisp. But that brings me to the other half, the other reason why I like Python syntax better, which is that Lisp is lexically pretty monotonous.

Seibel: I think Larry Wall described it as a bowl of oatmeal with fingernail clippings in it.

Deutsch: Well, my description of Perl is something that looks like it came out of the wrong end of a dog. I think Larry Wall has a lot of nerve talking about language design; Perl is an abomination as a language. But let's not go there.

If you look at a piece of Lisp code, in order to extract its meaning there are two things that you have to do that you don't have to do in a language like Python.

First you have to filter out all those damn parentheses. It's not intellectual work, but your brain does understanding at multiple levels, and I think the first thing it does is symbol recognition. So it's going to recognize all those parenthesis symbols and then you have to filter them out at a higher level. So you're making the brain's symbol-recognition mechanism do extra work.

These days it may be that the arithmetic functions in Lisp are actually spelled with their common names, I mean, you write plus sign and multiply sign and so forth.

Seibel: Yes.

Deutsch: All right, so the second thing I was going to say you have to do, you don't actually have to do anymore, which is understanding those things using token recognition rather than symbol recognition, which also happens at a higher level in your brain.

Then there's a third thing, which may seem like a small thing but I don't think it is: in an infix world, every operator is next to both of its operands. In a prefix world it isn't. You have to do more work to see the other operand. You know, these all sound like small things. But to me the biggest one is the density of information per square inch.
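The density and operand-adjacency points are easy to see side by side; here is the same small function in Python's infix form, with the prefix Lisp equivalent shown in a comment:

```python
# Python (infix): each operator sits between its two operands.
def discriminant(a, b, c):
    return b * b - 4 * a * c

# The same thing in Common Lisp is prefix throughout:
#
#   (defun discriminant (a b c)
#     (- (* b b) (* 4 a c)))
#
# Every operator stands to the left of both of its operands, and the
# grouping parentheses have to be filtered out to recover the structure.

assert discriminant(1, -3, 2) == 1  # 9 - 8
```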

Seibel: But the fact that Lisp's basic syntax, the lexical syntax, is pretty close to the abstract syntax tree of the program does permit the language to support macros. And macros allow you to create syntactic abstraction, which is the best way to compress what you're looking at.

Deutsch: Yes, it is.

Seibel: In my Lisp book I wrote a chapter about parsing binary files, using ID3 tags in MP3 files as an example. And the nice thing about that is you can use this style of programming where you take the specification, in this case the ID3 spec, put parentheses around it, and then make that be the code you want.

Deutsch: Right.

Seibel: So my description of how to parse an ID3 header is essentially exactly as many tokens as the specification for an ID3 header.

Deutsch: Well, the interesting thing is I did almost exactly the same thing in Python. I had a situation where I had to parse really quite a complex file format. It was one of the more complex music file formats. So in Python I wrote a set of classes that provided both parsing and pretty printing.

The correspondence between the class construction and the method name is all done in a common superclass. So this is all done object-oriented; you don't need a macro facility. It doesn't look quite as nice as some other way you might do it, but what you get is something that is approximately as readable as the corresponding Lisp macros. There are some things that you can do in a cleaner and more general way in Lisp. I don't disagree with that.
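A sketch of what such a scheme can look like (this is the general technique, not the actual code from the interview: a common superclass drives both parsing and pretty-printing from a per-class field declaration):

```python
import struct

class Record:
    """Subclasses declare FIELDS as (name, struct-format) pairs and
    inherit parsing and pretty-printing from this common superclass."""
    FIELDS = []

    @classmethod
    def parse(cls, data):
        self = cls()
        offset = 0
        for name, fmt in cls.FIELDS:
            (value,) = struct.unpack_from(fmt, data, offset)
            offset += struct.calcsize(fmt)
            setattr(self, name, value)
        return self

    def pretty(self):
        return ", ".join(f"{name}={getattr(self, name)!r}"
                         for name, _ in self.FIELDS)

# The subclass reads almost like the file-format specification itself.
class ChunkHeader(Record):
    FIELDS = [("tag", "4s"), ("length", ">I")]

header = ChunkHeader.parse(b"RIFF\x00\x00\x00\x10")
assert header.tag == b"RIFF" and header.length == 16
```

The declarative part, `FIELDS`, plays the role the parenthesized specification plays in the Lisp-macro version; the superclass supplies the interpretation.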

If you look at the code for Ghostscript, Ghostscript is all written in C. But it's C augmented with hundreds of preprocessor macros. So in effect, in order to write code that's going to become part of Ghostscript, you have to learn not only C, but you have to learn what amounts to an extended language. So you can do things like that in C; you do them when you have to. It happens in every language.

In Python I have my own what amount to little extensions to Python. They're not syntactic extensions; they're classes, they're mixins; many of them are mixins that augment what most people think of as the semantics of the language. You get one set of facilities for doing that in Python, you get a different set in Lisp. Some people like one better, some people like the other better.
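A minimal sketch of the kind of mixin this may mean (the names `ComparableMixin`, `_key`, and `Version` are hypothetical, not Deutsch's actual code): a small class that, mixed into another, supplies behavior that feels like an extension of the language's semantics, here deriving the comparison operators from a single `_key` method.

```python
class ComparableMixin:
    """Mixin: subclasses define only _key(); the mixin supplies
    the comparison operators in terms of it."""
    def _key(self):
        raise NotImplementedError

    def __eq__(self, other):
        return self._key() == other._key()

    def __lt__(self, other):
        return self._key() < other._key()

    def __le__(self, other):
        return self._key() <= other._key()

class Version(ComparableMixin):
    """Gets ==, <, and <= for free by declaring its sort key."""
    def __init__(self, major, minor):
        self.major, self.minor = major, minor

    def _key(self):
        return (self.major, self.minor)
```

With this, `Version(1, 2) < Version(1, 10)` is true because the mixin compares the tuples `(1, 2)` and `(1, 10)`, not the objects themselves.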

Seibel: What was it that made you move from programming to composing?

Deutsch: I basically burned out on Ghostscript. Ghostscript was one of my primary technical interests starting in 1986 and it was pretty much my only major technical project starting somewhere around 1992-93. By 1998, roughly, I was starting to feel burned out because I was not only doing all the technical work; I was also doing all the support, all the administration. I was a one-person business, and it had gotten to be too much. I hired someone to basically build up a business, and he started hiring engineers.

Then it took another two years to find the right person to replace me. And then it took another two years after that to get everything really handed over. By 2002, I had had it. I didn't want to ever see Ghostscript again.

So I said, OK, I'll take six months to decompress and look around for what I want to do next. At that point I was 55; I didn't feel particularly old. I figured I had one more major project left in me, if I wanted to do one. So, I started looking around.

The one project that kind of interested me involved an old buddy of mine from the Xerox days, J. Strother Moore II, who is, or was, the head of the computer science department at the University of Texas at Austin. His great career achievement was that he and a guy at SRI named Bob Boyer built a really kick-ass theorem prover. And he built up a whole group around this piece of software and built up these big libraries of theorems and lemmas about particular domain areas.

So they had this thriving little group doing theorem proving, which was the subject of my PhD thesis and which had always interested me. And they had this amazing result on the arithmetic unit of the AMD CPU. So I thought, Hey, this is a group that has a lot of right characteristics: they're doing something that I've always been interested in; they're run by a guy that I know and like; their technology is Lisp-based. It'll be really congenial to me.

So, I went down there and gave a talk about whether, and how, theorem proving could have helped improve the reliability of Ghostscript. By that time, we had a big history in the bug tracker for Ghostscript. So I picked 20 bugs, more or less at random, and I looked at each one and I said, OK, for theorem-proving technology to have been helpful in finding or preventing this problem, what would have had to happen? What else would have had to be in place?

The conclusion I came to is that theorem-proving technology probably wouldn't have helped a whole lot because in the few places where it could have, formalizing what it was that the software was supposed to do would've been a Herculean job.

That's the reason why theorem-proving technology basically has, in my opinion, failed as a practical technology for improving software reliability. It's just too damn hard to formalize the properties that you want to establish.

So I gave this talk and it was pretty well received. I talked with a couple of the graduate students, talked with J. a little bit, and then I went away. I thought to myself, The checklist items all look pretty good. But I'm just not excited about this.

I was kind of flailing around. I've sung in a chorus for years. In the summer of 2003 we were on a tour where we actually sang six concerts in old churches in Italy. My partner was with me on that trip and we decided to stay in Europe for two or three weeks afterwards.

We went to Vienna and did the things you do in Vienna. The old Hapsburg Palace has now been divided (part of it) into ten different little specialized museums. I saw in the guidebook that there was a Museum of Old Musical Instruments.

I went to this museum, and it's in this long hall of high-ceilinged old salons. And it starts with, I don't know whether they're Neolithic, but very old musical instruments, and it progresses through. Of course, most of their musical instruments are from the last couple of centuries in Western Europe. I didn't actually make it all the way through; I was like one or two salons from the end and I was standing there and here was a piano that had belonged to Leopold Mozart. And the piano that Brahms used for practicing. And the piano that Haydn had in his house.

And I had this little epiphany that the reason that I was having trouble finding another software project to get excited about was not that I was having trouble finding a project. It was that I wasn't excited about software anymore. As crazy as it may seem now, a lot of my motivation for going into software in the first place was that I thought you could actually make the world a better place by doing it. I don't believe that anymore. Not really. Not in the same way.

This little lightning flash happened and all of a sudden I had the feeling that the way (well, not the way to change the world for the better, but the way to contribute something to the world that might last more than a few years) was to do music. That was the moment when I decided that I was going to take a deep breath and walk away from what I'd been doing for 50 years.

Seibel: But you do still program.

Deutsch: I can't help myself; I can't keep myself from wanting to do things in ways that I think are fun and interesting. I've done a bunch of little software projects of one sort or another over time, but only two that I've paid ongoing attention to over the last several years.

One has been spam-filtering technology for my mail server. I wouldn't say it was fun but it had a certain amount of interest to it. Based on the logs that I look at every now and then, the filter is actually picking up (depending on who's ahead in the arms race at any given moment) somewhere between 80 and 95 percent of the incoming spam.

The other substantial piece of software that I keep coming back to is a musical-score editor. And the reason that I do that is that I have done a fair amount of investigation of what's available out there. I used Finale at a friend's house a few times. It sucks. The quality of that system is so bad I can't even tell you. I got a copy of Sibelius. I actually got a Mac laptop primarily so that I could run Sibelius. And discovered that the way that they did the user interface, it is the next thing to unusable if you don't have a Num Lock key. Mac laptops do not have a Num Lock key. And there were some other things about the user interface that I didn't like. So, I decided to roll my own.

I've been through four different architectures and I finally got one that I like pretty well. But it's been kind of an interesting learning experience. That's an interactive application that's large and complex enough that these system issues do come up, these issues of interfaces.

After having gone through four different architectures I wound up with an architecture for the rendering side of the program (which I think is by far the hardest part) based on equational programming. You define variable values in terms of equations and then you let the implementation decide when to evaluate them. Turns out that's not too hard to do in Python. It's been done at least two other times that I know of. I like the way I do it because it has the least boilerplate.
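A rough sketch of what equational programming in Python can look like, assuming that "let the implementation decide when to evaluate" means lazy, cached evaluation (the `equation` descriptor and the score-layout names are hypothetical, not Deutsch's code; `functools.cached_property` in the standard library does essentially the same job): the class body states *what* each value equals, and the descriptor decides *when* to compute it.

```python
class equation:
    """Descriptor: a value defined by an equation over other fields.
    Evaluated lazily on first access, then cached in the instance
    dict, which shadows the descriptor on later lookups."""
    def __init__(self, fn):
        self.fn = fn
        self.name = fn.__name__

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        value = self.fn(obj)
        obj.__dict__[self.name] = value  # cache the result
        return value

class StaffLayout:
    """Hypothetical score-layout values, each defined equationally."""
    def __init__(self, staff_height, n_lines):
        self.staff_height = staff_height
        self.n_lines = n_lines

    @equation
    def line_spacing(self):
        return self.staff_height / (self.n_lines - 1)

    @equation
    def note_head_size(self):
        # Defined in terms of another equation; the dependency
        # chain is evaluated on demand.
        return 1.2 * self.line_spacing
```

Reading `StaffLayout(40, 5).note_head_size` pulls `line_spacing` in on demand; nothing is computed until something downstream asks for it, which keeps the boilerplate down to a single decorator per equation.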

So yeah, so I still do a moderate amount of programming and I still have fun with it. But it's not for anybody, and if I don't do programming for weeks at a time, that's OK. When that was what I did professionally, I always wanted to be in the middle of a project. Now, what I want to always be in the middle of is at least one or two compositions.

Seibel: You said before that you thought you could make the world a better place with software. How did you think that was going to happen?

Deutsch: Part of it had nothing to do with software per se; it's just that seeing anything around me that's being done badly has always offended me mightily, so I thought I could do better. That's the way kids think. It all seems rather dreamlike now.

Certainly at the time that I started programming, and even up into the 1980s, computer technology was really associated with the corporate world. And my personal politics were quite anticorporate. The kind of computing that I've always worked on has been what today we would call personal computing, interactive computing. I think part of my motivation was the thought that if you could get computer power into the hands of a lot of people, that it might provide some counterweight to corporate power.

I never in my wildest dreams would have predicted the evolution of the Internet. And I never would've predicted the degree to which corporate influence over the Internet has changed its character over time. I would've thought that the Internet was inherently uncontrollable, and I no longer think that. China shows that you can do it pretty effectively.

And I think there's a good chance that if Microsoft plays its cards right, they will have a lock on the Internet. I'm sure they would love to, but I think they might be smart enough to see the path from where they are to having what effectively is control of essentially all the software that gets used on the Internet.

So I'm not really much of an optimist about the future of computing. To be perfectly honest, that's one of the reasons why it wasn't hard for me to get out. I mean, I saw a world that was completely dominated by an unethical monopolist, and I didn't see much of a place for me in it.

