Friday, November 20, 2009
Written From Chrome OS
If I am anything, I am a software fanboy. I love the hot new shit, whether it be programming languages or frameworks or operating systems or mathematical proofs. I'm always running the bleeding-edge Ubuntu, it's always breaking, and Lina is always razzing me about it. When y'all got your Google Wave accounts, I was crying into my pillow wishing I had one, not realizing that there's really nothing to do with it.
Well, today I decided I couldn't wait any longer to try Chrome OS. I installed VirtualBox, downloaded a torrent of a Chrome OS build, and booted it on my ancient ThinkPad. On top of Kubuntu, of course. My first impression (not, obviously, from running it on a netbook as intended): they haven't rethought the operating system; they've thrown it out. Chrome OS is basically a browser running on bare (virtualized, in my case) metal.
I have to imagine this kind of thing is going to become more common. They're not the first company to try it, but with virtualization being as prevalent as it is, why should both the host and the guest have a "real" OS? I can imagine this browser appliance paired with a database appliance making me quite happy in some ways. I can also imagine using it on a netbook and not needing anything else.
In the meantime, here's what it looks like:
Looks recursive to me. Anyone else have thoughts about it?
Thursday, November 19, 2009
Ducci Sequence in Python
A while back, I wrote this code in Python:
def myfunc(n):
    # start with a boolean vector of length n: one True followed by Falses
    cur = [True] + [False for i in range(n-1)]
    hist = list()
    # apply the map until we hit a vector we've already seen
    while cur not in hist:
        hist.append(cur)
        cur = [cur[i-1] ^ cur[i] for i in range(len(cur))]
    # the cycle length is the distance back to the first occurrence
    return (len(hist)-hist.index(cur))

It's kind of a long story how I came to be writing that particular snippet of code, but I did, and it outputs a sequence if you do this: print [myfunc(i) for i in range(1,20)]:

[1, 1, 3, 1, 15, 6, 7, 1, 63, 30, 341, 12, 819, 14, 15, 1, 255, 126, 9709]

I was wondering what kind of a sequence this might be, and in so wondering I became reacquainted with the On-Line Encyclopedia of Integer Sequences. I put my sequence into the search box, et voilà. I got the "Length of longest cycle for vectors of length n under the Ducci map."
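In case it helps anyone who lands here, below is a tiny sketch of the textbook Ducci map, the one that works on integer tuples by taking absolute differences of neighbors. It's my own illustration, not part of the original snippet; for vectors of zeros and ones, absolute difference and XOR are the same operation (the snippet above pairs each entry with its previous neighbor instead of its next one, which doesn't change the cycle lengths).

def ducci_step(v):
    # absolute differences of neighboring entries, wrapping around at the end
    return tuple(abs(v[i] - v[(i + 1) % len(v)]) for i in range(len(v)))

# the classic example: (0, 1, 2, 3) collapses to the all-zero vector in a few steps
v = (0, 1, 2, 3)
for step in range(8):
    print v
    v = ducci_step(v)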
As such, if you've come here for a quick and dirty way to calculate the longest cycle for vectors of length n under the Ducci map in Python, you're in the right place. Unfortunately, I don't have an easy way to obscure this from the rest of you, but we'll be back to our irregularly occurring geekery some time arbitrarily far in the future.
Tuesday, November 17, 2009
Memento and Persistence
Following a long conversation with a coworker, I wrote down some thoughts about persistent identification schemes (including ARK, DOI, Handle, and PURL). I had the post in the can, ready to go, when it was rudely interrupted by a really interesting presentation, which completely changed my thinking. I should recap that ill-fated blog post in one sentence before moving on: adding a layer of identifiers doesn't make an existing identifier more persistent; it makes it less so.
Now, that being said, there's a real problem. If I want to point at something as it exists on a certain date, it's often quite unwieldy to do so. Maybe I can cache it locally, maybe I can use one of the persistent identifiers mentioned in that first paragraph, or maybe I'm just out of luck. I point to someone's Geocities site, and it's just fricken gone. You see, the problem isn't that information moves to a different location, it's that the information at a given location changes. Or that it disappears entirely. That's a use case I care about.
Enter Memento. Herbert Van de Sompel and Michael Nelson gave a presentation about it at the Library of Congress yesterday, and I'm convinced it's a better way to think about persistence. The basic gist is that you specify a date along with a URI, and a combination of clients, servers, proxies, and services tries to give you back the thing you were pointing at, rather than the thing that's there now. I don't love all their terminology or even their implementations, but those are details. Memento is still a work in progress, and I like the approach.
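To make that concrete for myself, here's the rough shape of what a client-side request could look like, as I understood it from the talk. This is my own sketch, not code from the presenters, and both the header name and the example URL are placeholders; the spec is still being worked out.

import urllib2

# ask for a page as it existed around a given date (example.org is a placeholder)
req = urllib2.Request("http://example.org/some/page")
req.add_header("Accept-Datetime", "Tue, 17 Nov 2009 00:00:00 GMT")  # assumed header name
resp = urllib2.urlopen(req)
print resp.geturl()  # ideally, the address of an archived copy from around that date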
A Brief History of Computing
[note: edited to fix Blogger.com's stupid formatting]
http://www.zefrank.com/theshow/archives/2006/07/071106.html

This, Internet, is my brain crack. Been thinking about this for a long time, hoping to write something long. Well, here's something short, instead.
Claude Shannon Invented Computers
Claude Shannon invented computers. Not all at once, and not all by himself, but when you think about the guy who had the "Aha!" moment, it's him. In 1937 (there were, it seems, a whole lot of "Aha!" moments in the thirties), he published his master's thesis, which showed how Boolean logic and electrical circuitry are the same. One can (and Shannon did) translate back and forth between electrical circuitry and Boolean logic.
To me, that's the "Aha!" moment. He's the guy who made it all possible. In his paper, he designed several electronic circuits, wrote them up in Boolean operations, and became the first guy who "programmed" electronics by manipulating logic. And before the comments start flowing, yeah, I can think of quite a few precursors. Heck, Euclid did the same for manipulating physical space, right? But Shannon's most immediate precursor was George Boole, whose logic system he wrote about.
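To give a flavor of what that translation buys you, here's a toy example of my own (not one of Shannon's circuits): a half adder written as Boolean operations, where each operator corresponds to a gate you could wire up out of relays or switches.

def half_adder(a, b):
    total = a ^ b   # XOR gate: the sum bit
    carry = a & b   # AND gate: the carry bit
    return total, carry

print [half_adder(a, b) for a in (0, 1) for b in (0, 1)]
# [(0, 0), (1, 0), (1, 0), (0, 1)]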
George Boole Invented Binary Logic
George Boole reduced logic down to True and False. Every statement in his system could always be reduced to either True (a statement that follows from the accepted axioms) or False (one that doesn't). If you took a philosophy class in college and had to make truth tables, it's really Boole's fault.
Boole wasn't the first to take an interest in binary systems: that's an old, old idea. Pingala used a kind of binary notation for describing poetry. The Eye of Horus fractions are a base-two system, just like binary. Knuth even points out in The Art of Computer Programming that English wine merchants have used a binary system for buying and selling wine for hundreds of years. But George Boole was the one who invented binary logic, and that was one of the necessary pieces for Shannon's thesis.
Leibniz Invented Binary Arithmetic
A century or so before Boole, Leibniz got fascinated with how the I Ching seemed to be organized around a binary mathematical pattern: if you make a broken line a zero and a solid line a one, the hexagrams can be placed in sequential binary order. Gottfried Wilhelm von Leibniz went further than that, though. In his Explication de l'Arithmétique Binaire, Leibniz showed how decimal numbers have a one-to-one correspondence with binary. He went on to advocate the use of binary, not for the ability to do anything particularly new, but for what it shows about numbers.
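Just to make the correspondence concrete (my example, obviously not Leibniz's notation): every nonnegative decimal number has exactly one binary expansion, and you can go back and forth without losing anything.

for n in range(6):
    print n, bin(n), int(bin(n), 2)
# 0 0b0 0
# 1 0b1 1
# 2 0b10 2
# 3 0b11 3
# 4 0b100 4
# 5 0b101 5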
Anyone who's reading this should know, I've read Glaser's History of Binary and Other Nondecimal Numeration, so I know he's got a whole chapter called "Before Leibniz." Well, none of it convinced me that I've gotten the wrong guy. Leibniz had probably read Lobkowitz, and probably some of the others who had published, but Leibniz "got" that binary is important, and foundational, and not just another kind of enumeration- it's the one that uses the fewest digits.
Francis Bacon Invented Binary Encoding
I'm going out on a limb here. Francis Bacon was probably not the first guy, but he did devise an encoding system for English text that used only two characters. He used it as a cipher, but it's not all that different from the ASCII we use now, and for that reason alone he deserves mention here.
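For the curious, here's roughly what that kind of two-character encoding looks like. This is a simplification of my own (Bacon's actual cipher merged a few letters, like I/J and U/V), but it shows the trick: a fixed-length string from a two-letter alphabet per character, which is essentially what ASCII does with bits.

def baconish(letter):
    n = ord(letter.upper()) - ord('A')
    bits = format(n, '05b')  # five binary digits per letter
    return bits.replace('0', 'a').replace('1', 'b')

print [baconish(c) for c in "BACON"]
# ['aaaab', 'aaaaa', 'aaaba', 'abbba', 'abbab']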
Bibbity Bobbity Boo
Put it together, and what have you got? A way to use electricity to do math, encode text, and perform logic operations. Church, Turing, Kleene, and even Gödel all came up with complicated logical models for manipulating symbols, but Shannon came up with a simple one, and showed us how to use it with electrical circuits we already had.
Sure, Zuse, Von Neumann, Mauchly, Eckert, they all built machines. But people had been building machines for a long time. Heck, Leibniz had one that could add, subtract, multiply, divide. Ada Lovelace and Charles Babbage had one. Jacquard programmed his fricken looms. But Claude Shannon showed us, hey, just use electricity.
So, if you're here, and you're trying to answer the question, "who invented computers?" my answer is, we're all still doing it. But Claude Shannon, the guy who had that "Aha!" moment in 1937, gets my vote for Parent-of-Digital-Computing.
Tuesday, November 10, 2009
Hope for America: Justice Breyer Understands Software
The Supreme Court of the US just heard oral arguments in an interesting case about business process patents. Bilski v. Kappos (warning: it's a PDF) was argued in such a way as to exclude the patentability of software, though it has been described as relevant to that conversation. I read the oral arguments this morning. I'm not a follower of the court, in general, but I care about this topic, and I had a spare half hour.
The thing that struck me most profoundly about the proceedings was that the justices are smart. Really smart. The most impressive sound bite was from Justice Breyer, who offered the following hypothetical argument (top of page 45 in the transcript):
"...this is not a machine. The machine there is a computer. This is a program that changes switches, and that is a different process for the use of the machine."That is, perhaps, the best description of software, and the best argument against software patents I have ever encountered. I'll be interested to see where the court lands on this in the future.
As for the Bilski case, it's too bad both sides can't lose. Either way, though, the court gets a big thumbs up from me.
Tuesday, November 3, 2009
Design Patterns From Evolution
Possibly the most useful lesson to be taken from evolution is natural selection itself: inheritance, variation, competition for scarce resources.
In terms of software development, this is a design pattern not used often enough. After all, software already has plenty of variation. There are scarce physical resources (e.g. CPU, memory, bandwidth), and strong selection pressures (e.g. human beings only have so much time, attention, and patience). Putting inheritance of some kind to work ought not to be that heavy a lift.
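Just to be concrete about what that loop looks like, here's a toy sketch of my own (not any real system): configurations inherit from a parent with a small variation, and the ones that score best against a scarce resource survive to the next round.

import random

def mutate(config):
    # inherit the parent's settings, with one small variation
    child = dict(config)
    key = random.choice(list(child))
    child[key] = max(1, child[key] + random.choice([-1, 1]))
    return child

def fitness(config):
    # pretend the scarce resource is memory: a smaller cache scores better
    return -config["cache_mb"]

population = [{"cache_mb": 64} for _ in range(8)]
for generation in range(20):
    children = [mutate(random.choice(population)) for _ in range(8)]
    children.sort(key=fitness, reverse=True)
    population = children[:4]  # selection: only half survive

print population[0]  # drifts toward a smaller cache over the generations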
At some level, these forces are already at play. Software does inherit memes, and companies come and go. But I haven't seen a compelling implementation of natural selection within a system or tool I've used. Leaving aside the obvious design pattern of evolution itself, however, I think programmers would do well to use other evolutionary design patterns more frequently. Some examples I can think of:
Spandrels
In biology, a spandrel is a trait that is a by-product of some other adaptation, and has not necessarily been selected for. Spandrels are inherited, but don't provide an evolutionary advantage. The hitch is, they don't provide a disadvantage either. So they get passed along, and some time down the road, they might prove useful after all.
Software developers should pay attention. When features can be added in such a way that they don't cost much of the user's attention or much of the machine's CPU, bandwidth, and memory, they should be considered. I think the Semantic Web could easily develop as a spandrel- RDFa gets added into web pages and ignored until there is an ecology in which it can flourish. There are probably a thousand other possibilities for this.
The difficulty of course is that attention from programmers is one of the scarcest resources of all, so this kind of spandrel can only work if it arrives as a kind of Trojan on some other adaptation that is selected for (e.g. if Blogger.com starts sticking RDFa in blogs, and people keep using Blogger.com, RDFa becomes a spandrel).
Ontogeny
Organisms don't stay the same. Sometimes, as in the case of a tadpole/frog or a caterpillar/butterfly, they're really very different at different developmental stages. In software development, I feel like we get hung up in thinking software should work the same all the time. And there are advantages to that behavior (for example, then people can learn how to use it). But, it's profoundly limiting.
Games are one place where ontogenetic design patterns get used. Players can "unlock" new features and play the game in a profoundly different way as the game progresses. Other kinds of software should learn from games. I think if people felt like they had to "earn" the advanced features, they'd appreciate them more. Plus, by the time they'd gotten to using them, they'd have a use for them (hopefully).
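Here's a toy sketch of my own of what that might look like outside of games (all the names are made up): a program that keeps its advanced commands hidden until the user has done enough with the basics to plausibly want them.

class Editor(object):
    def __init__(self):
        self.actions = 0
        self.commands = set(["open", "save", "undo"])

    def record_action(self):
        self.actions += 1
        if self.actions >= 50:
            # the program "metamorphoses": macros unlock after fifty edits
            self.commands.add("macro")

    def available(self):
        return sorted(self.commands)

ed = Editor()
for _ in range(50):
    ed.record_action()
print ed.available()  # ['macro', 'open', 'save', 'undo']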
Speciation
Speciation is one of the most widely understood evolutionary design patterns- in fact, Chuck Darwin wrote a great book about it a hundred and fifty or so years back. For the purposes of biology, two organisms belong to distinct species if they are biologically incapable of producing fertile offspring together. For example, a donkey and a horse might get it on, but the result will be a mule, and the mule can't make more baby mules.
But that's really a side issue. Software developers need to be ready to take advantage of speciation: when a system becomes useful for two distinct purposes, let it split into two different pieces of software, each developed for its own purpose.
Convergence
The flipside of speciation is convergence. Bats and birds. They can't breed, but from the standpoint of an ecology, they occupy a very similar space- compete for the same resources. Evolution would totally be awesomer if convergence resulted in bat/bird hybrids. But it doesn't.
It doesn't really work in software either. But going back to the bat and bird example, bats sort of took over the night by being able to get around in the dark, and birds sort of took over during the day by being able to get around when it's light. But they're functionally very similar. As such, when it's clear that an ecological software niche is emerging, programmers need to accept they can still be bats to the birds (Omniweb, for example, was kind of a crappy browser, but it was the only one that ran on NeXT- no innovation there, except they could see at night). It's not a bad way to go.
Any other thoughts from the biologists and programmers among us?