Monthly Archives: September 2011

IQ and genetics

Just watched an interesting lecture on IQ and genomics that makes a number of notable claims. First, that IQ is about as heritable as height, which is to say that genes account for roughly 70% of the variation. Second, that it appears to be the result of many different genes, each of which contributes a little bit. (With height, they’ve found around 200 such genes.) And third, that although no IQ genes have been discovered yet, we’ll likely have found many of them within the next decade.

This raises many interesting questions, but from an evolutionary standpoint I see one big one. If IQ is mostly genetic, and it is a major influence on life success (both of which are claimed), why isn’t everyone high-IQ? Is there an evolutionary advantage to a lower IQ (e.g. lower caloric requirements, and thus a better chance of surviving a famine), or does a higher IQ come with a lower birthrate?

This also raises some interesting quandaries. It’s standard practice to do genetic testing on fetuses to screen for particularly nasty disorders, and it’s now possible to sequence the entire fetal genome. By the time my kids are grown up, there’s a good chance prenatal IQ testing will be available. It won’t be as accurate as a real IQ test, but it will have decent predictive power.

Re-encoding media files

At work, we’re converting old audio files (uncompressed WAV) to more modern audio formats. It takes about a second per recording, and we have several million recordings, so the conversion will take over a month. The files are big, and the output lives on a distributed filesystem: each converted file gets copied to many of our servers. So we don’t want to speed the process up so much that it strains our critical servers.
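For a sense of scale, here’s a minimal sketch of a throttled batch conversion. It assumes ffmpeg as the encoder, FLAC as the target format, and made-up paths and throttle values; our actual pipeline is internal tooling, so treat this as illustrative only.

    # Rough sketch only: paths, target format (FLAC), and throttle value are assumptions.
    import pathlib
    import subprocess
    import time

    SOURCE_DIR = pathlib.Path("/data/recordings")  # hypothetical location of the WAV files
    THROTTLE_SECONDS = 0.5                         # pause between files so replication keeps up

    # Back-of-envelope: at ~1 s per recording, 3 million recordings is
    # 3,000,000 s / 86,400 s per day, or roughly 35 days -- hence "over a month".
    for wav in sorted(SOURCE_DIR.rglob("*.wav")):
        out = wav.with_suffix(".flac")
        if out.exists():
            continue  # lets the job be restarted without redoing finished files
        subprocess.run(["ffmpeg", "-loglevel", "error", "-i", str(wav), str(out)], check=True)
        time.sleep(THROTTLE_SECONDS)  # deliberate throttle to avoid straining critical servers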

This gives me some insight into YouTube’s reluctance to embrace additional media formats, even Google’s pet project WebM. Like us, they are sensitive to hardware constraints. Even with virtually unlimited money and resources, it takes time to move gigabytes from one disk to another. Google normally gets around this by splitting data across many servers: rather than touching a gigabyte on a single disk, you move a few megabytes each across several servers. But when all of the data needs to be updated, every server has to move gigabytes, and Google has no advantage over an average Joe trying to restore his hard disk from backup. Even if Google could build a new datacenter just for video processing, they’d still have to move all the data out of the existing datacenters into the new one and back.
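A rough back-of-envelope with made-up numbers (neither the corpus size nor the server count is a real YouTube figure) illustrates why sharding doesn’t help here: a full re-encode touches every byte, so each server still faces hours of disk I/O, roughly what a home user faces restoring a disk from backup.

    # Illustrative numbers only -- not actual YouTube figures.
    total_corpus_tb = 5_000        # pretend the video corpus is 5 PB
    servers = 10_000               # pretend it is spread evenly over 10,000 servers
    disk_mb_per_s = 100            # a typical 2011-era disk

    per_server_gb = total_corpus_tb * 1_000_000 / servers / 1_000   # 500 GB per server
    hours_to_read = per_server_gb * 1_000 / disk_mb_per_s / 3_600   # ~1.4 hours just to read it
    print(f"{per_server_gb:.0f} GB per server, ~{hours_to_read:.1f} hours of read I/O before any transcoding")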

My guess is that YouTube is using its existing video servers to create WebM versions of its library while continuing to serve users from those same machines. But it will take months, if not years, before they can offer WebM versions of everything.

RIP Michael S. Hart, E-books creator

Michael Hart is not exactly a celebrity. But as founder of Project Gutenberg, he’s an inspiration. He had a simple idea: digitizing and disseminating public-domain books. Perhaps it was inevitable that paper books would end up digitized, just as hand-copied ancient manuscripts were made into printed books. But he was the first. Often the march of progress seems inevitable, especially in retrospect; just as often that appearance is misleading. When it comes to books online, the world doesn’t begin and end with Google and Amazon. Typically their source is Project Gutenberg, and it can be your source too. Thanks to Michael Hart.