Actors/monads need stderr

I’ve been watching a number of videos about monads. They’re trendy right now. Essentially, monads are a system for safely chaining commands together. If you’ve used jQuery, you’ve used monads. People talk a lot about the “actor model”: chaining commands together, where each command is run on a service provider, often on a different computer. In other words, distributed monads.

What people don’t seem to realize is the grandfather of all monadic systems outside the ivory tower is Unix. The pipeline that Unix users love is a system for safely chaining commands together, i.e. monads. And as the saying goes: “those who fail to learn from Unix are doomed to re-implement it poorly.”

So what does Unix have that actors don’t? A system for debugging. Namely an out-of-band logging channel known as “standard error”, or “stderr”.

Why is this important? At the end of this video, the presenter is asked for advice on debugging actors, since when something goes wrong across multiple computers it can be hideous to follow the chain of events. I’ve seen this myself. The presenter’s response is simply that actors/monads are easier to debug than threads. That’s like saying that keeping pools of gasoline in your basement next to the furnace is safer than storing a nuclear bomb. Not helpful.

So here’s my plea: anyone who does functional programming should include a “stderr” in their monads.

Imagine if jQuery objects had a “history” function, which would point out where you wrote “foo” instead of “.foo” and suddenly got zero results. Imagine if Akka had a log which showed you the chain of events for a particular query, across computers, rather than logging unrelated events on each computer.
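
Here’s a rough sketch, in plain JavaScript, of what I mean. Chain, step, and stderr are names I made up for illustration; this is not a feature of jQuery or any real library:

// A chainable wrapper that records every step on an out-of-band log.
function Chain(value, history) {
    this.value = value;
    this.history = history || [];
}

// Run one command, and note its name and result on the log.
Chain.prototype.step = function (name, fn) {
    var next = fn(this.value);
    return new Chain(next,
        this.history.concat(name + " -> " + JSON.stringify(next)));
};

// The out-of-band channel: the full history of the chain.
Chain.prototype.stderr = function () {
    return this.history;
};

var result = new Chain([1, 2, 3])
    .step("double", function (xs) { return xs.map(function (x) { return x * 2; }); })
    .step("odds", function (xs) { return xs.filter(function (x) { return x % 2 === 1; }); });

console.log(result.stderr()); // ["double -> [2,4,6]", "odds -> []"]

Instead of staring at an empty result, you read the log and see exactly which step emptied the chain.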

Programmers have gotten used to using a stack trace for debugging. But with actors, the stack tells you nothing, because the action is horizontal, across chains of messages, rather than vertical. The fix is really simple, and it’s something Unix got right.

Introducing Ozone

Here’s a demo of an awesome project I’ve been working on: Ozone, an OLAP database for data visualization. If you’re a web developer and you like the demo, check out the source code and see if you find it handy. This is a first cut: future versions will load faster, consume less memory, and have more features, while not compromising ease of use.

Note that the boxes that zoom when clicked are from a demo of the popular D3 graphics library. Ozone adds the ability to quickly filter and partition the underlying data.

This is a research project at Vocalabs. I’ve spent a little over ten days total on it, and I plan to keep working on it. My ultimate goal is a high-performance tool that seamlessly bridges Big Data (databases that require a supercomputing cluster) and real-time graphics visualization in the browser.

Kotlin vs. Ceylon

My life improved considerably when I started writing servlets in Scala instead of straight Java. But Scala is a complex language. The definitive book, Programming in Scala, is as dense as K&R’s classic 300-page book on C, but is 850 pages long. I’ve been programming in Scala for over a year now, and there are still “basic” features I have to look up. Plus, compiling Scala in my IDE takes 10 seconds, versus 1 second for Java. That’s the difference between being able to try out every little change, and getting distracted when it’s time to test.

Two new languages offer most of the best features of Scala, but are intended to be simpler: easier to read, write, and compile. Ceylon is developed by Red Hat as an “Enterprise” Java replacement, and version 1.0 was just released. The other is Kotlin, developed by JetBrains, makers of IDEs such as IntelliJ IDEA and WebStorm. Its primary purpose is to replace Java for developing new features of their IDEs. It’s at Milestone 6, which is to say still under initial development, but is complete enough that the latest version of WebStorm includes a major new feature written in Kotlin.

If you look at a checklist, both languages have most of the same features. If you look at code samples, they look remarkably similar. Both will compile to both Java Virtual Machine (JVM) code and JavaScript. Perhaps this is good; both are intended to be pragmatic languages, and it looks like there is some consensus on what new language features are valuable.

For myself and Vocalabs, I’ve been looking for a “glue” language that compiles to both Java and JavaScript. Last time I tried Kotlin (about 6 months ago) it looked good but was too buggy to get working. So yesterday I took another look at Ceylon. And by “look” I mean “read some documentation”, not “wrote some code.” If you want that sort of review, look here.

Instead, here are the main differences I see which will drive use of one over the other.

  • Ceylon is currently more mature. It’s at 1.0, with JavaScript and JVM compilers. There will be some minor changes, but it’s ready to be used. Kotlin is maturing, but JetBrains still reserves the right to make significant changes, and its JavaScript compiler is still very much an experimental feature.
  • Ceylon is designed for new projects, using its tool chain. The compiler outputs packages (compressed directories) instead of JVM class files. Kotlin is designed for Java interoperability within existing Java projects, and the IDE lets you sprinkle Kotlin files willy-nilly alongside your Java files.
  • Ceylon requires Java 7. This isn’t a big deal for most users (Android recently began to support Java 7), but it’s a deal killer in some cases. At Vocalabs, we use Azul Zing, a low-latency, big-memory JVM that is still on Java 6.

So for my purposes, those last two points are deal killers against Ceylon. But I’m still not ready to jump to Kotlin until it matures a bit more. Right now I’m actively writing code in four languages: JavaScript, TypeScript, Scala, and Java. Kotlin wouldn’t let me drop any of these languages, but it might replace both TypeScript and Java over time. It might even replace Scala, although that’s less likely since Scala lets me drop HTML directly into my code.

A gentle introduction to functional programming

Functional programming is hot right now, and I love it. This XKCD comic promotes its old (and not undeserved) reputation as complicated and esoteric. Lisp, the original functional language, was developed in the 1960s as perhaps the first computer language for non-gearheads: it was designed for mathematicians instead. And it does a good job of presenting things in a way that makes sense to mathematicians, hence tail recursion. (Which I’m not going to explain; suffice it to say that it is catnip to people who love proofs by induction.)

So for 40 years mathematicians (and their friends, Artificial Intelligence researchers) have been trying to explain why functional programming is the Next Big Thing because it’s so easy and makes so much sense. And people who don’t dig mathematical proofs have seen every argument for functional programming as an argument against it. That is, until Google got involved in a big way.

Functional programming is writing your programs the way you write algebra problems. Sounds scary, if you forget that elementary school kids learn algebra. The name comes from the fact that you’re working with mathematical-style functions. (These days, all programming languages support functions, but in the 1960s this was novel.) To show how this differs from other programming, consider a typical imperative (non-functional) way to build one list from another list. This should look at least somewhat familiar if you’ve ever taken a programming class in your life.

function personalizeList( yourList ) {
    var myList = [];
    for (var index = 0; index < yourList.length; index = index + 1) {
        var yourItem = yourList[index];
        myList[index] = "My " + yourItem;
    }
    return myList;
}

You have a variable (index) which starts at 0 and increases until yourList has been traversed. If yourList contains ["apple", "banana", "cranberry"], then at the end myList contains ["My apple", "My banana", "My cranberry"]. This all seems perfectly normal to anyone who has learned to program this way, but there's one very counterintuitive piece, especially when you come from an algebra background:

In mathematics, variables never change once they've been assigned a value.

Imagine how confusing D = Ax² + Bxy + Cy² would be if x and y could change within the equation. So in functional programming, you avoid changing the values of variables. This makes loops impossible, hence tail recursion. But there are other ways to avoid loops. Here's how you'd write personalizeList in a typical functional language:

function personalizeList( yourList ) {
    // map() builds a new list by applying the given function to each item
    return yourList.map( function(item) { return "My " + item; } );
}

The "map" function applies a function you provide to each item in the list. Functional languages provide a bevy of functions for working with lists without having to explicitly iterate through all the elements. And they can be stacked on top of each other, like this:

yourList.without("cranberry").append("cherry").count()

This takes yourList, creates a similar list with "cranberry" removed, creates another similar list with "cherry" at the end, and then counts the number of items in the list. This follows the no-variable-modification rule: at each step, a new list is created, and each list cannot be modified. Here's an example using jQuery, a popular JavaScript library, to take every paragraph ("p") on a web page, turn its text green, and then count the number of paragraphs:

var numberOfParagraphs = $("p").css("color", "green").length

If you pull up a JavaScript console on most web pages (at least the ones that use jQuery), this will work. A nerdy party trick. jQuery is functional because avoiding explicit loops lets you do a lot in a single line of code, and when you're making web pages more interactive you often don't want more than a line of code.

Back to Google. Since at least the 1980s, supercomputer designers have been trying to figure out ways to automatically convert C code that was written for single processors to work on multi-processor (and multi-core) computers. The theory was that writing explicitly parallel programs is too hard. The for-loop in the first example tells the computer to first handle the first item in the list, then the second item in the list, and so on. Meanwhile you have a variable (index) that is changing all the time. What you want on a 4-core computer is to simultaneously have each core process a quarter of the list, thus making the program up to 4 times faster. But as written, each processor would have to wait to see how the previous processor modified index.

The functional version, meanwhile, makes no guarantees about which elements in the list are processed first, and if you follow the no-variable-modification rule, you don't have to worry about coordinating variable changes between processors. Google called its programming system MapReduce. ("Map" you've seen, "reduce" is a generic term for functions like "count()" or "sum()" which reduce a list to something smaller.) The fundamental idea behind MapReduce is this: the programmer writes in terms of map and reduce instead of loops, and Google's system will figure out how to make it run. It works with anywhere from one computer to hundreds of thousands, and Google even built in redundancy so that if a computer fails during the calculation, another computer will take on the work. (When you have hundreds of thousands of computers, one of them is likely to go down at any moment.)
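
In JavaScript terms, using the standard map and reduce array methods, the same style looks like this:

// Map: apply a function to every item, producing a new list.
var prices = [2, 3, 5];
var withTax = prices.map(function (p) { return p * 1.07; });

// Reduce: boil the list down to a single value (here, a sum).
var total = withTax.reduce(function (sum, p) { return sum + p; }, 0);

Nothing is modified along the way, so the items could be processed in any order, or split across thousands of machines, without changing the answer.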

One of the hardest things about parallel programming is keeping data synchronized. The network is often the slowest component. The no-variable-modification rule can be inefficient on a single-processor machine, but on a computer network it speeds things up enormously.

But that's not why I prefer functional programming. For me, the no-variable-modification rule means no-tracking-down-how-that-variable-changed-in-my-buggy-code. And that saves me a ton of time.

The Purpose of Life

This is the last part of a series on the Purpose of Life.

When I was in high school, I briefly thought I had figured out the Meaning of Life. I was wrong, and later decided that the notion itself is silly. Life is full of many noble pursuits, different ones at different times for different people. Finding one meaning for life is as idiotically reductionist as finding a single meaning in a great novel.

And yet…

And yet here I am, with an idea. One that ties together human life into a common purpose extending across centuries. That can be used as a measure of a person’s life or of a society. It seems awfully arrogant to presume such an idea could have real merit. Perhaps I will see the folly of it one day, but it seems to fit.

It started when I was reading one of Theodore Parker’s sermons. One famously cited by Martin Luther King Jr. The one where Parker says:

Look at the facts of the world. You see a continual and progressive triumph of the right. I do not pretend to understand the moral universe, the arc is a long one, my eye reaches but little ways. I cannot calculate the curve and complete the figure by the experience of sight; I can divine it by conscience. But from what I see I am sure it bends towards justice.

I had recently finished reading two thought-provoking books: Richard Dawkins’s The Selfish Gene and Bruce Schneier’s Liars and Outliers. I started to wonder if we can see the arc of the moral universe– and from a purely Humanist perspective.

Let me start at the beginning.

About 2.4 billion years ago, cyanobacteria transformed the Earth. They dumped a highly caustic chemical into the ocean and atmosphere. The surfaces of all the world’s exposed rocks eroded. The methane in the atmosphere burned away. Without methane (a greenhouse gas) the oceans froze over. And yet the chemical continued to accumulate in the atmosphere until it became our familiar oxygen-rich sky.

Thanks to that corrosive oxygen, half of the chemical battery that fuels us comes directly from the air. The cyanobacteria’s waste transformed a relatively static world into one of constant change. Unlike the moon, where a footprint could last a millennium, everything on Earth that does not adapt will quickly rust, decay, or erode. Cyanobacteria, through evolution, remade the Earth in the image of evolution.

We often look to nature for inspiration. But nature is built from evolution, where anything that promotes survival, no matter how brutal, is encouraged. We will not find moral direction there. Human nature is built from genes that promote every behavior that allows the genes to survive: eating, breathing, sex, cooperation, violent outbursts, friendship, and countless others. Those who think humans are fundamentally evil will find ample evidence. Those who think humans are fundamentally good will find equally ample evidence.

While you will find plenty of brutality in religion, every major religion emphasizes love, kindness, and forgiveness. The Golden Rule is central to all of them.

Our sense of ethics is not a human invention. We find similar attitudes in other primates. We are social, tribal creatures. Our children take years to mature, and we raise only a handful in a lifetime. With so much riding on so few offspring, it really does take a village to nurture and protect them.

Contrast this with spiders, which come together only briefly to mate, and then produce thousands of eggs. A spider’s Golden Rule would be much different from ours. It might be similar to the popular notion of survival of the fittest: win and breed, or die trying. Evolution is a constant cycle of cruel experiments. When we look out our window at nature, we see a mommy duck followed by a dozen little ducklings. We imagine our own ethics of parenting, and rarely stop to think that either ten of those ducklings will die before adulthood, or the population will grow sixfold until starvation sets in. And yet this cruel waste is genetic evolution’s primary mechanism of adaptation.

Although we are the product of genetic evolution, and our bodies and minds are sculpted to follow its precepts, we do not need to obey its mindless purpose. The ethics of the major religions are not the natural progression of a primate mind; they are the products of civilizations which bind together thousands or millions of people. They adapt a moral instinct that evolved in small tribes to fit the needs of large multicultural cities. Small tribes of humans and apes constantly war with their neighbors. Big communities require kindness to strangers and tolerance toward culturally diverse neighbors. They require stories like the Good Samaritan. They require the Golden Rule.

Indeed, the arc of history seems to be following this rule. The world is getting less violent. But it’s not simply due to the Golden Rule. Treating your neighbors kindly is one moral lesson, but it’s big and vague and often not appropriate to the circumstances– especially when a neighbor is ruining your neighborhood. Cooperation requires trust, and trust requires trustworthiness. Perhaps the measure of ethical behavior is not the increase of happiness or even love, but of mutual trust.

And a large, well-functioning society needs more than kindness and trust. To live in large communities we needed to invent systems to manage water, food, waste disposal, traffic, and communications. And we’ll need more inventions if we are to provide for a large global population amid depleting resources. Invention requires freedom to experiment and room for inventions to disrupt the status quo. And that freedom needs to belong to everyone, not just a few geniuses. If you look at big inventions– such as Newtonian physics, the airplane, or the light bulb– there may be a few names that are remembered, but there were dozens or hundreds of others whose ideas and experiments were necessary for the ultimate success. In my experience, a well-motivated person with middling intelligence and a fresh perspective will beat an uninterested genius every time.

We are not nature’s only inventors. Many creatures, from crows to octopi, show great creativity. But our power to dominate the landscape comes from pairing our capacity to invent with our unparalleled skill at communicating. An octopus’s inventions must be re-invented with each generation, but our inventions can last millennia. Numbers, the 24-hour day, money, beer, bread, brick houses. All are several thousand years old, and we’re still tinkering with each of them. This capacity to invent, communicate, and cooperate with strangers puts us in a unique position to impose our ethics upon the world.

We have gotten into a lot of trouble by imposing our will upon our surroundings. Extinctions follow in our wake. Indeed, only Africa still has diverse herds of giant animals; the other continents lost them around the same time humans arrived. Apparently only by co-evolving with humans did African megafauna adapt. And like the cyanobacteria, we are dumping our waste into the atmosphere, triggering global temperature changes.

But unlike the cyanobacteria, we can learn from our mistakes within our lifetime, adapt, and carry out multi-generational plans. When we apply our ethics, and when we seek broad collaboration in constructive ways, we can improve on nature.

The Golden Rule has a universality that transcends humanity. If we ever find another intelligent, technological species in the universe, it will most likely have ethics similar to ours. It will need rules that encourage mutual cooperation between dissimilar individuals. It will be tolerant of eccentricities and experiments that lead to technological breakthroughs that transform societies. And its society must be open to transformation, allowing powerful individuals to be dethroned in the name of progress. And it will work to increase happiness and reduce suffering; otherwise it would be content to let mindless evolution run its course.

Here, then, is the Purpose of Life, roughly hewn by the needs of genetic evolution, and polished by the needs of cultural evolution:

To re-make the world, through intelligence and wisdom, to reduce suffering and promote justice, kindness and cooperation.

I started this series by questioning whether there is such a thing as “progress.” I consider myself a progressive, and this is my notion of progress.

Dystopia

This is the second part of a series on the Purpose of Life.

There is a lot of science fiction that portrays a dystopian world in which people are controlled through mind-control drugs and psychological manipulation. Brave New World comes to mind. Often the average person is superficially happy, but so drugged up or deceived as to not experience what the author considers true happiness.

But if we consider traditional progress to be the sum of all joy or happiness, divided by the sum of all suffering– or we use a similar but more sophisticated formula– such a pacified society doesn’t look so bad. (This is the perspective of Utilitarian philosophy; Buddhism takes a similar view.)

Science is getting better all the time at understanding people’s motivations. Advertisers understand what motivates children (and adults) better than all but the most educated parents. One might argue that we are sliding willingly into a dystopian world. I don’t think so, and not simply because humans have been evolving to manipulate other humans for millennia.

There are three things missing in a dystopian world that aren’t captured by Utilitarianism: truth, freedom, and aspiration. We recoil from happiness that is disconnected from reality, and we would rather suffer in pursuit of something we choose than enjoy superficial pleasures. There are still plenty of people willing and able to go against the grain.

In a free society, people are free to invent. While any particular technical pursuit can flourish under a regime that decides to support it, disruptive technologies and ideas can only come from societies that allow– or even encourage– disruption. That’s why even as China adds more scientists and obtains more patents, the next world-changing technology (like the Internet or cell phone) won’t come from there.

So progress includes happiness and the elimination of suffering. It also requires freedom to invent and thereby disrupt the status quo. But can we come up with a definition of progress that’s complete enough to be a modern-day Meaning of Life? I’ll explore that idea later.


The Church of Progress

This is the first part of a series on the Purpose of Life.

A lot of liberals worship at the Church of Progress.  The tenets of this church are as follows:

  • Humanity is getting smarter.  As scientific and practical knowledge grows, so does human wisdom.
  • People will choose the truth over a falsehood.
  • Once debunked, a falsehood fades away.
  • As humanity gets wiser, so does the average person.
  • Therefore new and scientific things are better than old, unscientific things.
  • And all we must do to make the world better is to keep corruption and tradition from fouling the march of progress.

I have to admit that most of the time, I am a progressive.  But my faith has been shaken by the realization that things aren’t getting better on their own and, as I become wiser, I find that the world hasn’t learned the same lessons I have.  (Of course, it’s always possible that I’m the one being misled, and the rest of the world is smarter than I think.)

For example, feminism.  When I was a computer science grad student in the 1990s, the department was mostly men. I assumed that there would be a gradual increase in the number of women over time.  Maybe someday, but it’s been 20 years, and if anything the numbers have gone in the opposite direction. (And don’t get me started on the number of African Americans.)

Another example.  Fifty years ago, the nation was able to do long-term planning.  Colleges and universities were affordable, or even free, due to government subsidies.  Because a well-educated populace is good for the economy and democracy.  That support has slowly eroded over time.  Not to mention road maintenance and other basic government services.

Another example.  A hundred years ago people learned that solitary confinement causes permanent psychological damage.  A judge declared it cruel and unusual punishment.  But it’s been gradually on the rise, and now it’s scary just how commonplace it is.

Perhaps I’ve just cherry-picked examples where humanity (or at least the United States) is either not progressing or is backsliding. There are certainly examples of things that have improved over time.  And perhaps the progressive faith is correct, just over time frames of hundreds of years.

But some of these articles of faith are clearly wrong.  “Once debunked, a falsehood fades away” is one of them. Some bad ideas either never die or are re-invented on a regular basis.

Richard Dawkins invented the term “meme” to describe an idea that spreads like a gene.  That is, ideas spread and mutate through selective pressure, much like Darwinian evolution. It’s a rather fancy way of saying “catchy ideas spread, unmemorable ideas don’t.” I used to dismiss the notion of a meme as just a tautology.  Of course catchy ideas spread; otherwise they wouldn’t be catchy!  But in the absence of the tautological “meme” meme, other memes fill the void, such as “every lie contains a kernel of truth” and “truth wins over falsehoods.” Plus Dawkins introduced the idea at the end of his 1976 book The Selfish Gene, after having described how genes spread, how they interact, and so on.  The idea of a meme implies that an idea may be catchy even if it has no basis in truth.  And it may spawn ever catchier variants.  A meme may be as small as a catchy tune or as big as a religion.

I still believe in progress, but I don’t see it as inevitable.  Even though a ball has a tendency to roll down a hill, there’s no guarantee it will make it to the bottom.  And if something throws it back up the hill, there’s no guarantee it will find a path back down.  Progress is a tendency, not an inevitability.

But what, exactly, is progress?  I’ll explore that question more in the rest of this series.

Another blog change

Hopefully you won’t notice this one, except that my old posts are finally back.  Last year I switched my blogging software to Drupal because my site got hacked and I realized that WordPress is too insecure by default, and I didn’t have the time or interest to keep it up to date.  I’ve been frustrated with Drupal, which is a content management system (CMS) with a blogging plug-in; it too was too much work.  It’s more secure by default, but there are still security updates, and it’s still a pain to use.  So my current solution is to have a private WordPress site, which I export as static HTML.  It will be more of a pain to push updates, and there will be no comments.  But I shouldn’t ever have to worry about security updates.  And the old (Drupal) URLs should still work.

High performance web applications made easy

Originally posted on Wed, 07/03/2013

Lately I’ve been looking at Node.js, a high-performance web server built in JavaScript, and at Kotlin, a language designed to make programming large applications easy. I haven’t actually used Node.js, but that doesn’t keep me from having opinions. It seems to me that combining the two could make it easy to write big, complicated web applications– if it’s done right.

Node.js is designed to serve tens of thousands of users at a time– something that cheap computer hardware is theoretically capable of, but which is limited by the way that operating systems handle threads and processes. Node.js (and others, like nginx) does this by strictly using non-blocking I/O. That is, a single Node.js thread handles many simultaneous requests, and that thread never waits for I/O, such as disk or database access. Consider how one would tell it to serve a file:

  1. Node.js is configured so that the URL https://www.my-node.example.com/index.html invokes the JavaScript function writeMyFile("index.html", response) where response is how you send data back to the web browser that’s making the request.
  2. writeMyFile(filename, response) tells the filesystem object to read the file and gives it a function to call once the file has been read. It might be written as:

    // fs is Node's built-in filesystem module: var fs = require('fs');
    fs.readFile(filename, function (error, data) {
        if (error) throw error;   // a failed read becomes an exception
        response.write(data);     // send the file contents to the browser
        response.end();           // finish the HTTP response
    });

  3. The filesystem object requests the data in the file from the computer’s file system, and rather than waiting for the file to get read off a disk, the next request is handled.
  4. At some point in the future, Node.js notices that the file has been read, and invokes the callback function listed above. The file data is sent to the web browser, and the request is done.

The important point here is that when I/O is required, all work on that request shifts from the current function to a call-back function. For simple tasks, such as reading a single file, this is not too difficult. But consider a typical web page generated by Ruby on Rails, PHP, or other web frameworks. Such as this blog page. Lots of disparate data are loaded from at least one database (user account information, the text of a blog entry, listings of other blog entries, comments, etc.) And to make programming easy, the database calls can be done at any time, in any order.

A PHP script (such as what generates this blog page) looks like plain HTML, with PHP code interspersed where plain HTML won’t do. As the PHP script runs, it prints out the HTML until it gets to a PHP segment, and then it does the PHP computation– such as reading a value from a database– and then goes back to writing HTML. Most web frameworks work this way: you have an HTML template with code mixed in, and the program goes back and forth between writing the HTML and running the embedded code. If it takes 10 milliseconds to do a database query, the thread just stops and waits. But stopping and waiting isn’t allowed in Node.js. (Actually it is, but you would instantly go from having one of the fastest web servers to the slowest.)

Simply put, a Node.js program with one I/O request can be read from top to bottom. A Node.js program with multiple I/O requests is a tangle of call-back functions.
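
For example, here’s a sketch of a page that needs just two dependent database reads before it can respond. The db.getUser, db.getPosts, render, and fail functions are hypothetical stand-ins for whatever your application uses:

// Each I/O call continues in a callback, so the logic nests one
// level deeper with every dependent request.
db.getUser(userId, function (error, user) {
    if (error) return fail(error);
    db.getPosts(user.id, function (error, posts) {
        if (error) return fail(error);
        response.write(render(user, posts));
        response.end();
    });
});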

Which brings us to Kotlin. Kotlin has a number of language features which could (in theory) be used to get the best of both worlds. That is, the code would be structured like HTML, but the web framework would extract all the parts that require I/O, and call them with a call-back function that feeds back into a function that is called once all the I/O is done. That final function would stitch together the template and the data from the I/O into the final HTML result.

Central to this is the ability to take a program that is structured like HTML, extract the I/O portions, and also extract the HTML-generating code. The key enabler is Kotlin’s Groovy-style builders. This is a way to construct objects with a syntax that is structured like HTML. (It doesn’t look like HTML, but it’s at least as compact, and similarly easy to read.) The end result is an HTML object that can generate HTML, but it isn’t just a representation of HTML; it’s a full-fledged Kotlin object. It has methods that can be run, it can be subclassed, it can be analyzed and traversed. So in theory you could traverse an HTML builder and extract I/O-dependent objects before generating HTML text.

Here’s the rub: builders typically just contain text and other builders. I’m particularly familiar with Scala, where you can embed XML-within-code-within-XML-within-code all day long. The problem is, the embedded code is run when the XML object is constructed, so you don’t have a chance to pull out embedded, non-XML objects. (In Scala, this is considered an advantage, as the XML is guaranteed not to change every time you read it.)

I suspect Kotlin works much the same way, but there is a silver lining. Kotlin also supports delegation, whereby one object can be obtained via another– such as where the actual thing you want is in a database, and you want to avoid calling the database until absolutely necessary. Delegation is done in a controlled manner, and you can’t simply give something a database proxy when it requires a string. Again, this is usually an advantage, since you know that you’re getting an unchanging string when you expect a string.

So what you need is a special-purpose Builder (perhaps a builder builder) which understands both HTML and I/O-dependent HTML generators. The latter would be designed so they could be extracted and given a call-back function which generates the HTML when all the I/O is done. Note that each I/O object would be responsible for only one I/O call, although they could be chained together for something more complex (e.g. look up data from a database, and then write something back to the database.) At worst, this would bring us back to the Node.js tangle of call-backs, but slightly less tangled. At best, this enables all sorts of places for automatic optimization: identical I/O objects could be merged, the I/O requests could be done in any order, and similar database queries could be combined into a single query.
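
Here’s a rough JavaScript sketch of the shape I have in mind; html, ioNode, collectIO, and db.getUserName are all hypothetical names, and the real thing would be a Kotlin builder:

// An I/O placeholder: holds a function that will do the I/O later.
function ioNode(runIO) {
    return { io: runIO };
}

// A builder node: describes HTML without generating it yet.
function html(tag, children) {
    return {
        tag: tag,
        children: children,
        // Pull out every deferred I/O placeholder among the children.
        collectIO: function () {
            return children.filter(function (c) { return c && c.io; });
        }
    };
}

// The page is described up front; no I/O has run yet.
var page = html("p", [
    "Hello, ",
    ioNode(function (done) { db.getUserName(userId, done); })
]);

// The framework extracts the I/O, runs it (merging or reordering the
// requests as it likes), and only then generates the final HTML.
var pending = page.collectIO();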

This is the sort of thing that would be fun to work on, if I had the time. Perhaps someone else should do it. Or perhaps someone else already has, but I haven’t discovered it yet. The core idea here is lazy builders where you can extract the I/O, so that code written with I/O interspersed within it can be called in a non-blocking manner.

Reading list

Originally posted on Tue, 06/11/2013

Today I discussed my personal theology with my spiritual director. Here are several things that inform my thinking. The first one has an obvious and direct spiritual aspect. The others provide a landscape of ideas which inform my perspective. If you have time for just one book, I recommend Liars and Outliers. If you have one hour, watch the video.

A Google Tech Talk on mindfulness that informs my notion of the mind. I highly recommend it. It’s largely an exploration of the notion of mental health as being something more than an absence of mental illness, just as physical health is more than just a lack of physical illness. It is presented by Daniel J. Siegel, a clinical professor of psychiatry at the UCLA School of Medicine and Executive Director of the Mindsight Institute.

Liars and Outliers: Enabling the Trust that Society Needs to Thrive is the latest and best book by Bruce Schneier, cryptographer and security expert. It reframes security from a sociological perspective, with game theory, economics, psychology, and evolutionary biology mixed in. It’s really an overview of a new interdisciplinary field. I read it feeling like I knew everything in it already, and yet I keep coming back to it.

The Selfish Gene by Richard Dawkins. The book that coined the term “meme.” The science holds up really well, despite being about as old as me. In fact, I kept thinking of recent discoveries that confirm hypotheses in the book.

The Beginning of Infinity: Explanations That Transform the World by David Deutsch. A book of philosophy written by a physicist with a chip on his shoulder, with all the good (thoughtfulness, insight) and bad (arrogance, a desire to beat the crap out of straw man arguments) that entails.