Friday, February 29, 2008
Perl 6, Compiling, and Even More Smartness of Others
In the previous post, I talked about why I like Perl:
- Perl fudged the boundary between scripting and application development
- Perl gave smart people a way to share their smartness: CPAN
Perl 6 is a new language, which is much like Perl in some ways. Unless there is some secret Perl Cabal working on a project they're not talking about, Perl 6 is still a ways away. But there are a number of interesting documents that describe what Perl 6 will do when it comes, which bear thinking about, particularly in the context of Perl.
Of course, there is new syntax- new nouns and verbs and even punctuation that some people will love and some people will hate but most people will just use (or not use) without thinking about it a great deal. From what I've seen of it, I like the new grammar in Perl 6, and I especially like that it is a grammar (rather than an implementation of a grammar, like Perl). But syntax aside, there are two points worth noting about Perl 6 that mirror Perl's original contributions in many ways:
- Perl 6 is fudging the boundary between language and compiler
- Perl 6 is making it easier to use the smartness of other people
On Boundaries
Perl blurred the traditional boundary between scripting and application development. Using Perl, application developers write code quickly that is platform independent and easily modified. These three variables (speed, portability, and agility) are not altogether unrelated, but that is a topic for another day. With the caveats spelled out in the previous post, Ruby and Python erase these boundaries even more completely.
Perl 6 is aiming for a different boundary. Traditionally, there is a huge gulf between a programming language and its compiler. This gulf is filled with Compiler Compilers like yacc and bison, which make grammars into parsers, which parse languages, which create an abstract syntax tree, which is ultimately compiled down to machine instructions (either virtual, or actual, physical machine instructions). It's a lot of steps.
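All those steps can be collapsed into a few dozen lines when the grammar is small. Here's a toy sketch in Python (not Perl 6, and no compiler-compiler in sight): a hand-written recursive-descent parser turns tokens into an abstract syntax tree, and a tree-walker stands in for the final compilation step. Everything here (the grammar, the names) is illustrative, not anyone's real implementation.

```python
# A toy version of the grammar -> parser -> AST -> execution pipeline,
# collapsed into one file. Real compiler-compilers like yacc generate
# the parser from a separate grammar description instead.

import re

def tokenize(src):
    # The grammar's terminals: integers, +, *, and parentheses.
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):
    # Recursive descent for: expr -> term ('+' term)*, term -> atom ('*' atom)*
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == "+":
            right, i = term(i + 1)
            node = ("+", node, right)
        return node, i

    def term(i):
        node, i = atom(i)
        while i < len(tokens) and tokens[i] == "*":
            right, i = atom(i + 1)
            node = ("*", node, right)
        return node, i

    def atom(i):
        if tokens[i] == "(":
            node, i = expr(i + 1)
            return node, i + 1  # skip the closing ')'
        return ("num", int(tokens[i])), i + 1

    node, _ = expr(0)
    return node  # the abstract syntax tree, as nested tuples

def evaluate(ast):
    # Stand-in for the "compile down to machine instructions" step.
    op = ast[0]
    if op == "num":
        return ast[1]
    left, right = evaluate(ast[1]), evaluate(ast[2])
    return left + right if op == "+" else left * right

ast = parse(tokenize("2 + 3 * (4 + 1)"))
result = evaluate(ast)  # 17, since * binds tighter than +
```

Perl 6's grammars promise to make the `parse` part a first-class, overridable piece of the language itself, rather than a separate tool.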
Perl 6 is not the first language to aim for this boundary, by a long shot. But Perl 6 does take aim with style and flair. If the approach is successful, it may signal the end of learning a new syntax for writing parsers with compiler-compilers. Programmers themselves will be able to experiment with creating new programming paradigms by rewriting the rules as they go along.
Is this a good idea?
The heck if I know. I imagine it will not be used by a great horde of programmers, but then most programming language features aren't.
There are a number of other features that take aim at this boundary, but they set their phasers on stun rather than kill. The design of Perl 6 has a great deal of introspection built in- giving programmers the ability not only to inspect the contents of a container, but also to inspect the container itself, and even make changes to the container.
Once again, I don't really know if this is a good idea. Python and Ruby have already got it in the form of duckpunching or monkeypatching or whatever they're calling it these days. We're going to call it Perl 6, and rather than hating it, we're going to love it. Since, after all, there is more than one way to do it.
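For the curious, here's what that container-level inspection and modification looks like in Python today. The class and method names are made up for illustration; the point is that the container (the class) can be examined and patched while the program runs:

```python
# A sketch of the kind of container introspection discussed above,
# shown in Python, where patching a live class is the "monkeypatching"
# mentioned in the post.

class Jar:
    def __init__(self, contents):
        self.contents = contents

jar = Jar(["fireflies"])

# Inspect the contents of the container...
contents = jar.contents

# ...inspect the container itself...
container_type = type(jar).__name__       # 'Jar'
container_attrs = list(vars(jar))         # ['contents']

# ...and even change the container: attach a new method to the class
# at runtime. Every existing Jar instantly grows the new behavior.
def describe(self):
    return f"a {type(self).__name__} holding {len(self.contents)} thing(s)"

Jar.describe = describe                   # the monkeypatch itself
description = jar.describe()
```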
On Leveraging the Smartness of Others
Perl programmers believe that other Perl programmers are real smart. It's why CPAN works so well. But there are problems with this belief. Sometimes other programmers get smarter faster than I do, which means they change how their code works in a new version. CPAN made it a little hard to grab a specific instance of someone else's smartness. There might also be more than one smart person working on a particular problem, who both call their code by the same name (since they're both real smart). CPAN made it a little hard to pick one or to not care or to do whatever else I want to do in that instance.
On top of all this, there are times when I may even think someone else is smart, who isn't even a Perl (or Perl 6) programmer. I know it sounds far-fetched, but it does happen sometimes.
Once again, Perl 6 is not the first language to aim for this ability. Perl gave us interfaces to the C programming language, and others eventually, but they are real hard to use unless someone else had already done it (which is often the case). Perhaps I should say real hard to write.
Perl 6 provides enough optional type declaration that writing interfaces to other languages becomes much easier. And in the case that Perl is running on a (possibly virtual) machine that supports other languages (approximately 100% of the time in the current implementation efforts), it will (hopefully) be a no-brainer, such that using the smartness of others will be the same kind of reflex it was with Perl.
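The role type declarations play at a language boundary is easiest to see in an existing foreign-function interface. Here's a small Python sketch using ctypes to call the C library's strlen- the explicit type declarations are what make the call safe. This assumes a POSIX system where loading the running process's symbols reaches libc; it's an analogy for the Perl 6 plan, not a piece of it.

```python
# Using the smartness of others across a language boundary: calling
# C's strlen from Python. The declared argument and return types are
# doing the same job Perl 6's optional type declarations would do.

import ctypes

# On POSIX systems, CDLL(None) opens the running process itself,
# whose symbols include the C library.
libc = ctypes.CDLL(None)

# Declare the C types on both sides of the call...
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# ...and then just call it, as if it were native.
length = libc.strlen(b"the smartness of others")
```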
And as for CPAN- Perl 6 comes packaged with a nice abstraction layer that allows a great deal of information to be specified about a given class or grammar or package. Authors, versions, and namespaces are built in.
So, I imagine in the future world of Perl 6, I'm going to care even less about who wrote the code I'm running (especially if that person is not-me), but I'll be able to trust more reliably that the code won't change out from under me. I may even care less about what language it's written in, which brings us full-circle to the earlier point.
The code I'm running very well may look like it's written in Ruby or Python, but simply be using a grammar from one of those languages on top of Perl 6. Wouldn't that be strange? Talk about blurring boundaries.
Conclusion
It's hard to conclude anything from this exploration. But it is worth noting that after thinking about it for a while, I do think taking down the language/compiler barrier is a great long-term goal, and I do think (re-)using the smartness of other people is nice. And I admit that it's possible Perl programmers eventually lost their way on the latter some time in the past twenty years. It happens to the best of us. Twenty more years, and we'll be ready for version seven, which will, according to Larry, be the perfect language.
Labels: compilers, language design, perl, perl6, programming, python, ruby
Perl, Scripting, and the Smartness of Others
Perl is the name of a programming language I like.
Meaning, I like the name, and I like the language. Perl opened an ecological niche for programming languages that may have since been filled with other, better languages, but Perl was first. The other languages even have fun names. Python and Ruby come to mind in particular.
The niche is hard to describe- people call them "scripting" languages, but that term only has meaning when they're used for writing scripts (programs that control other programs, basically). Which is not, for the most part, what people do with these languages any more.
The thing that distinguishes the best of these languages is the relationship between development time and execution time. The languages allow people to write code quickly, which may or may not run quickly. They are best used in scenarios where the difference between one second and two seconds probably doesn't matter. For the most part.
Real scripting languages are still used mostly for scripting (e.g. sed, awk, bash). But Perl expanded this niche beyond scripting, and that's one of the things I like it for. People have used Perl for development of all kinds of applications over the past couple decades. Yes, perl is ancient.
When I was thinking about this post, I re-read Steve Yegge's essay about ancient Perl. It's full of his usual blah-blah-blah, but one point in particular amused me:
You see, someday I will start my own company, and I'll decide my own hiring bar. I'll of course be my own company's chief technical officer (wouldn't you?), so I'll decide how I expect people to engineer their software. And there will be no Perl. So there's no need for me to get worked up about its use at Amazon. Whew. I feel so much better.
I did start my own company. And we did use Perl. Steve Yegge probably makes as much money in a month at Google as I did by selling that company, but it still gives me a very different perspective. We used Perl because it blurred the boundary I talked about earlier, and because of CPAN. The Comprehensive Perl Archive Network. What a beautiful thing. For all they've done, none of those other, better languages have replicated CPAN.
CPAN is institutional knowledge of a bunch of badass Perl hackers dumped into a central place and mirrored around the world for anyone to use. It is as schizophrenic and disorganized as you might expect from this kind of a resource. But it's also the answer to a big load of common problems that Perl hackers have had. And man, do Perl hackers have problems.
For all of the better-ness of other languages (and I really do like the syntax of those languages better in some ways), they still lack a CPAN. Some of them probably consider this a feature more than it is a bug, which may be the biggest bug of all.
Python and Ruby and their many friends fixed lots of problems with Perl syntax, and maybe even with Perl culture. But while they were busy throwing away the bad syntax Perl inherited from the real scripting languages, they left out the one feature that made Perl more than a novelty, which was not what we could do with it, but what others had already done.
Labels: perl, programming, python, ruby
Thursday, February 21, 2008
Green Goo in Processing 0135
I'm immensely satisfied with Processing's ability to make pictures. Today's installation is something I like to call Green Goo (tm). It's much the same as previous applets in that it uses local rules to update each pixel. In this one, we start with a randomly dispersed field of green stuff on a gray background. Each pixel looks at its neighboring pixels, and asks each of them, "how many of your neighboring pixels have green stuff in them?" The pixel then tallies the results, and if the tally is between eight and eighteen, it stays (or turns) green itself.
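For anyone who wants to play along without Processing, here's a straight Python reading of that rule on a small wrapped grid. The grid size, seed, and green density are arbitrary, and none of the applet's shortcuts are reproduced- this is just the rule as stated:

```python
# The Green Goo rule, without shortcuts: every cell polls its eight
# neighbors, each neighbor reports how many of ITS eight neighbors are
# green, and the cell goes (or stays) green if the tally lands in 8..18.

import random

SIZE = 16  # arbitrary small grid for the sketch

def neighbors(x, y):
    # The eight surrounding cells, wrapping at the edges.
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def green_count(grid, x, y):
    # How many of this cell's neighbors are green (1 = green stuff).
    return sum(grid[ny][nx] for nx, ny in neighbors(x, y))

def step(grid):
    new = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            # Ask each neighbor about ITS neighbors, and tally.
            tally = sum(green_count(grid, nx, ny) for nx, ny in neighbors(x, y))
            new[y][x] = 1 if 8 <= tally <= 18 else 0
    return new

# A randomly dispersed field of green stuff to start from.
random.seed(1)
grid = [[1 if random.random() < 0.3 else 0 for _ in range(SIZE)] for _ in range(SIZE)]
goo = step(grid)
```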
The code takes some shortcuts, so the rule is less obvious than I would like, but I'm also running it on a slow laptop, so, alas, I needed a few shortcuts. The results of my noodling around can be seen here.
I'm well on my way toward having quite a gallery of these applets, with zebra stripes, coastlines, cow spots, ant tunnels, and now goo. If anyone would be interested in collaborating on making these more interesting or prettier, please drop me an email: dbrunton@gmail.com. Ha! Google catches spam so well that I can publish that here without even worrying.
--
UPDATE: Click the pic to randomize, press space to start the goo gooing.
Labels: goo, pretty, processing, programming
Sunday, February 17, 2008
Garbage Collecting Thingyness
Hopefully most people who read the title for this post will immediately think to themselves, "Thingyness is a made-up word for a made-up idea. Thingyness doesn't even exist!" If you fall into that camp, you can stop reading after this first paragraph, since that's pretty much all I'm trying to say. Thinking of a particular slice of the spacetime continuum as a "thing" may be a useful convention for everyday life, but it is hardly a rigorous scientific concept.
This is not a new idea I'm proposing. One afternoon not long ago, I found myself at the end of a long trail of clicks in Wikipedia, staring at the Sorites paradox, thinking "gosh, the Greek word for heap is the same as the last name of George Soros." The Sorites paradox, for those who were able to resist clicking through to Wikipedia, is part of a long and glorious tradition of identity paradoxes. If you have a heap of sand, and take away the grains of sand one by one, when does it stop being a heap? Imagine you pile them all into a new location- when does the new location begin to be a heap?
Another such paradox is called the ship of Theseus. Guy Theseus gets on a boat for a long trip. Boat decays (as boats and other things are wont to do), and Theseus replaces various planks, oars, masts, keels, etc., until none of the original parts are left. Is it still the same boat?
Programmers, as it turns out, lack such a paradox. It's not that programmers are so smart, but rather that identity is a more rigorously defined concept. In a software program, every value can be represented by a string of bits. Strings of bits can be compared to other strings of bits for equality. If each bit in the string is the same as each bit in another string, they are equal. If a single bit differs, they are not equal.
Pretty simple.
Programmers are also accustomed to defining a very specific number of bits at a very specific location for analysis, and subsequently re-using that same location for an entirely different purpose (understanding that the re-use must be done quite explicitly). Thus, in the case of the Sorites paradox, a programmer might say "This memory location is for a heap. In each part of the heap, there may be either sand or air. When there is no longer a need for the heap, re-allocate this memory location to be used for storing something else."
The three variables the programmer needs to track are the memory location, memory size, and memory scope. Scope is basically a programmer's way of defining when to start and when to stop using the memory at a particular location. By this point it should be clear even to non-programmers why programmers are untroubled by identity paradoxes.
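The programmer's resolution of the ship of Theseus can be run as an experiment. In Python terms (an illustrative sketch- the variable names are mine), equality asks whether the bits match, identity asks whether it's the same location, and those are separate questions:

```python
# Equality vs. identity: two values are equal when their bits match;
# whether they live at the same location is a separate question.

heap = ["sand", "sand", "sand"]
copy = ["sand", "sand", "sand"]

equal_bits = (heap == copy)    # True: the contents match exactly
same_thing = (heap is copy)    # False: two separate locations

# Now the Theseus move: keep a second name for the same location,
# then replace every grain one at a time.
alias = heap
for i in range(len(heap)):
    heap[i] = "fresh sand"

survives = (alias is heap)     # True: identity outlived total replacement

# And when the heap is no longer needed, release the name so the
# runtime may re-allocate that location for something else.
del copy
```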
There are, however, some strange nuances to this lack of paradox.
For instance, a particular location in memory may be used to store a letter, which in turn gets used as a number or a color or a pointer to a different location in memory, depending upon the context. As an example, the letter "A" is often stored as the binary value 01000001. That binary value also corresponds to the number 41 in hexadecimal (65 in decimal). If you increase the value 41 to 42 (viewing the value as a hexadecimal number) it will then have the binary representation 01000010, which in turn corresponds to the letter B.
That's not too counter-intuitive- the letter B does, after all, follow the letter A.
But what about this: in the same encoding (known as "ASCII"), if you take the hexadecimal number 42 and add twenty (hexadecimal again), you get the number 62, which can be encoded in binary as 01100010. That corresponds with the letter b, i.e. the lower-case version of B.
Adding 20 to a number seems like a weird operation to make it lowercase, but that's the kind of trick that programmers do all the time. It would also be totally normal to view the letter B as a color (or at least a grayscale value) in a different context. The Processing language that I've discussed elsewhere on this site does so.
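Here's that whole chain of readings in a few lines of Python. The grayscale reading at the end is my own illustration of the Processing convention (0 is black, 255 is white), not anything the original applets do:

```python
# One bit pattern, several readings: number, letter, shade of gray.
# ASCII puts "A" at hexadecimal 41 (binary 01000001), and the 0x20
# bit is all that separates upper case from lower case.

a = ord("A")                 # 0x41, which is 65 in decimal
b = a + 1                    # the next code...
letter_b = chr(b)            # ...is the next letter: 'B'
lower_b = chr(b + 0x20)      # adding hex 20 sets the lowercase bit: 'b'

# The very same value read as a grayscale shade, Processing-style:
# 0 is black, 255 is white, so 66 out of 255 is a dark gray.
shade = b / 255
```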
What does it all mean?
Well, the answer is, I don't really know what it all means. However, I do suspect that computer programming provides us with a reasonably sound philosophical tool that may assist us in untangling identity paradoxes and other possibly thorny issues that arise with our everyday mode of non-rigorous thinking. Maybe paradoxes like these are best handled by asking some of the questions programmers ask all the time:
- What location in memory (or in spacetime) are we talking about?
- How is the location in memory being used in the program (or subjective reality)?
- When is it appropriate to re-allocate that location in memory for other uses?
Even if it doesn't help untangle paradoxes any better, it will make us all bigger nerds, which is always a good thing.
Monday, February 11, 2008
Meta
While others are refusing to say anything, I am shouting as loudly as I can into a broken microphone, courtesy of my hosting provider, who turns out to kind of suck. Nonetheless, I feel compelled to respond to aboyko, who listed all the wrong reasons for writing in an online journal. My reasons are as follows:
- I write in this journal so that I can be wrong publicly. Being wrong all alone is a good way to keep being wrong. Being wrong publicly is a good way to be mocked and cajoled into refining one's positions.
- I write in this journal to practice saying things, such that I may eventually achieve a higher degree of skill in said activity, regardless of the medium.
- I write in this journal to communicate in high latency. People can read and digest at leisure, theoretically even years from now.
- I write in this journal to impress girls.
Labels: meta
Friday, February 8, 2008
Bits are a Platonic Ideal
Taking some time out from inventing the FPGA, I read dchud's post about linking data, and it made me think, in turn about a Perlmonks post that I read last week (specifically the part in the beginning about expressions versus values). That, in turn, made me think about a paper by Claude Shannon, which in turn made me think this linking thing is much like my brain.
I digress.
Which is both true, and is how linking is like my brain.
What I started out to say when I wrote the title for this post, is that an expression has a Platonic ideal: it is the representation of that expression in a string of bits, which may, in turn, be compared to a different string of bits.
In practice, there are a lot of other bits besides the value that get taken into consideration. The machine that is holding the bits needs to know what to do with them. We call those bits meta data, and an obvious candidate for inclusion in this sentence is character encoding. The bits in this post would be different bits (for the "same" expression) if it were encoded in MARC instead of ASCII. These bits are, nonetheless, a value. A Platonic ideal. They can be compared quite directly to other strings of bits, and if all that meta data lines up, the comparison might be meaningful.
Which is where Claude Shannon entered my thinking.
His whole shtick was that when you transmit a bit over a channel, it's an answer to a specific question that both sides know in advance. An encoding is also a way of doing that, except instead of asking "Is it by land? Is it by sea?" you're asking "Is it an a? Is it a c?" This is done in the exact same way for every bit sequence. This string of questions is the so-called meta-data.
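To make that concrete, here's a small Python sketch of Shannon's framing: both sides agree in advance on an alphabet and on the questions ("is it in the upper half of the remaining letters?"), so a letter travels as nothing but a string of yes/no answers. The alphabet and question scheme are my own toy choices:

```python
# Each transmitted bit answers one prearranged yes/no question.
# Here the shared questions binary-search a shared alphabet, so a
# symbol crosses the channel as nothing but a list of 0/1 answers.

import math
import string

ALPHABET = string.ascii_lowercase + " "      # 27 symbols, agreed in advance

def encode(ch):
    # Answer the shared questions until one symbol remains.
    lo, hi, bits = 0, len(ALPHABET), []
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if ALPHABET.index(ch) >= mid:        # "is it in the upper half?"
            bits.append(1)
            lo = mid
        else:
            bits.append(0)
            hi = mid
    return bits

def decode(bits):
    # Ask the same questions, narrowing the same alphabet.
    lo, hi = 0, len(ALPHABET)
    for bit in bits:
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if bit else (lo, mid)
    return ALPHABET[lo]

word = "by sea"
sent = [encode(c) for c in word]             # the channel carries only bits
received = "".join(decode(bits) for bits in sent)

# 27 symbols cost at most ceil(log2(27)) = 5 questions apiece.
bits_per_symbol = math.ceil(math.log2(len(ALPHABET)))
```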
Knowledge, and representations of knowledge, is much harder, in part because the list of questions is so much longer. The meta data is shared experience. Possibly, the list of questions is exactly one lifetime's worth for the originator of the knowledge (and ostensibly the representation of the knowledge). Can we hope to encode this context in the same way we encode twenty-six or so letters?
No.
However, that's not the end of the story. Imagine German text has been encoded into a format that is unknown to the recipient. It might be an Enigma initially, but even a few simple hints can be enough to back into the encoding. And I mean a very, very few hints.
Transmitting knowledge is more like that than it is like a character encoding. Take it down into a representation (like this one you're reading), and transmit it over a channel (like this tube I'm sending it over). A few hints may be all the human brain requires in order to derive more interesting and possibly useful information from the bits sent over the tube.
Heck, the recipient brain may even derive some information from the representation that the transmitting brain did not intend to include.