Wednesday, January 30, 2008
A Submission to the High Level CPU Challenge
This post is a response to the "high level CPU" challenge found here. If you have comments for the public, post them to your own blog. If you have comments for me, send me an email: dbrunton@gmail.com.
It's hard to know where to start with yosefk's post; perhaps with a different post entirely. His definition of "high-level" is as good a place as any. The adjective "high-level" has meaning in exactly one context: if the behavior of some system differs significantly from the behavior of that system's components, the behavior of the system is high-level, and the behavior of the components is low-level.
The NAND gate is a great example of this. A NAND gate by itself is of limited usefulness: it has two inputs and returns 0 if both inputs are 1, and 1 otherwise. Not hugely smart. However, it is functionally complete: string together enough of them and you've got yourself a CPU. In other posts I've called such behavior "emergence," though it's a dubious designation in the case of something as deliberately engineered as such a CPU would need to be. Conway's Game of Life, Wolfram's Rule 30 (maybe), NOR gates, C-NOT gates: there are lots of these. Toffoli's CAM-8 was at least one attempt at fabbing automata theory into hardware.
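To make functional completeness concrete, here is a minimal sketch in Python (my illustration, not anything from yosefk's post): NOT, AND, OR, and XOR, each built from nothing but NAND.

# A NAND gate: 0 only when both inputs are 1.
def nand(a, b):
    return 0 if a == 1 and b == 1 else 1

# Everything else falls out of NAND.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Sanity check: XOR built purely from NANDs.
assert [xor(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]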
However, yosefk's high-level CPU challenge included these two sentences: "I have images. I must process those images and find stuff in them." None of the machines in the previous paragraph are going to do that better than your current run-of-the-mill silicon wafer. But yosefk, if you "process" those images by using your eyeballs to look at them, any semantic meaning you derive from them is "high-level": it cannot be predicted from the behavior of any one pixel in the image or any one receptor in your retina, and yet it's there.
Humanity's failure to replicate this kind of behavior in silicon does not in any way invalidate the architecture, and it may in fact be due in part to the bargain you're striking. Millimeters of silicon? Hell, maybe you shouldn't even be using silicon!
I digress.
Let us go back to the problem of designing high-level hardware. Presumably, one of the reasons I would undergo such an exercise is the pure joy of it. But I also think there could be some benefit to the architecture: high-level hardware might excel at solving problems for which low-level hardware is ill-suited, or it might be easier for humans to interact with. Perhaps high-level hardware will do things we haven't thought of yet (this last said with the understanding that if we haven't thought of them, they may not be useful). Perhaps high-level hardware will be much like us: good at some tasks, bad at others, but profoundly interesting in both cases.
I digress again.
Here is the outline of my proposal (a rough code sketch of a low-level component follows the list):
- High-level hardware is made from identical low-level components.
- Low-level components:
- communicate with up to eight physically proximate neighbors.
- implement a truth table of up to eight bits.
- have a small, fixed memory, set at run time (ROM).
- can remember state over a fixed duration of cycles (RAM).
- can compare RAM and ROM bitwise for equality.
- can signal halt.
- High-level language defines:
- the truth table currently in use.
- a "topology" for low-level components.
- state for low-level components.
- the halting state for low-level components.
- An outside actor (e.g. Von Neumann computer) might watch what's going on.
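Before the sample program, here is a rough Python model of one low-level component. This is my own reading of the outline: the field names, the synchronous compute/commit split, and the choice to include the cell's own state among the truth-table inputs are all my assumptions.

from dataclasses import dataclass, field

# One low-level component, modeled loosely. ROM is the fixed pattern
# loaded with the program; RAM is a sliding window of recent state bits.
@dataclass
class Cell:
    rom: int = 0                 # fixed pattern, set at load time
    rom_bits: int = 10           # width of the ROM/RAM comparison
    ram: int = 0                 # recent state bits, newest in bit 0
    state: int = 0
    neighbors: list = field(default_factory=list)   # up to eight cells
    _next: int = 0

    def compute(self, table):
        # Every cell reads *current* states first...
        key = (self.state,) + tuple(n.state for n in self.neighbors)
        self._next = table[key]

    def commit(self):
        # ...then all cells update at once (a synchronous step), and the
        # new state bit shifts into RAM.
        self.state = self._next
        self.ram = ((self.ram << 1) | self.state) & ((1 << self.rom_bits) - 1)

    def halted(self):
        # The halt signal: bitwise ROM/RAM equality.
        return self.ram == self.rom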
Here is a sample program, which I translate loosely into English-ese below:
truth_table(bits: 3, endian: big, packed: 30)
topology(max_cells: 40, neighbors: 2, unique: true)
state(default: 0, singularity: 1)
halting(0b1010000110)
run
The truth table takes three bits of input. Arrange all possible three-bit inputs in decreasing sequence, and unpack the integer 30 from the big end to get the output bit for each input. That is Wolfram's rule 30.

Now, I have described a single instance of such a machine, and I have just riffed the details out of my big shiny brain. Other people have actually built such machines, and they are perfectly capable of performing particular tasks with faster clock time than good ol' x86 and friends.
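To make the unpacking concrete, here is a sketch in Python; the function name and the dictionary representation of the truth table are mine.

# Unpack the truth table exactly as truth_table(bits: 3, endian: big,
# packed: 30) describes: list the three-bit inputs in decreasing order
# (111, 110, ..., 000) and peel bits off the big end of 30.
def unpack_rule(packed, bits=3):
    table = {}
    entries = 2 ** bits                  # eight input patterns
    for pattern in range(entries - 1, -1, -1):
        inputs = tuple((pattern >> b) & 1 for b in reversed(range(bits)))
        table[inputs] = (packed >> pattern) & 1
    return table

RULE_30 = unpack_rule(30)
# Matches Wolfram's rule 30: e.g. (1, 0, 0) -> 1 and (1, 1, 1) -> 0.
assert RULE_30[(1, 0, 0)] == 1 and RULE_30[(1, 1, 1)] == 0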
Each cell acts as an input to two other cells and receives input from two other cells; the two must be unique in both cases. Hopefully you can see that this makes a "ring" of cells. The ring can be at most 40 cells.
The initial state is all zeros and a single 1. In this case it doesn't matter where the 1 is, but there is only one (it is a singularity, duh).
If the most recent ten bits of state on any given cell are exactly the halting state, that cell signals halt. It notifies the high-level language of completion, and hopefully the high-level language remembers where the completion occurred.
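Putting it all together, here is a toy simulation of the whole machine, re-deriving the rule table inline so the sketch stands alone. The (left, self, right) input order, the zero-initialized history, and the step limit are my guesses at unspecified details, and nothing guarantees this particular run ever signals halt.

# A 40-cell ring running rule 30 from a single 1, halting when any
# cell's last ten state bits equal 0b1010000110.
RULE_30 = {tuple((n >> b) & 1 for b in (2, 1, 0)): (30 >> n) & 1
           for n in range(8)}

def run(cells=40, halting=0b1010000110, window=10, max_steps=10_000):
    state = [0] * cells
    state[0] = 1                         # the singularity
    history = [0] * cells                # each cell's last `window` bits
    mask = (1 << window) - 1
    for step in range(1, max_steps + 1):
        # Every cell reads (left, self, right) and updates at once.
        state = [RULE_30[(state[i - 1], state[i], state[(i + 1) % cells])]
                 for i in range(cells)]
        history = [((h << 1) | s) & mask for h, s in zip(history, state)]
        for i, h in enumerate(history):
            if h == halting:
                return step, i           # when and where halt fired
    return None                          # the watcher gives up

print(run())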
I certainly haven't gotten it right, but I also firmly agree with all the huffing and puffing by the "C is for Von Neumann Architecture" crowd. Future programming languages are going to run on high-level hardware; they are going to be extraordinarily compact; they are going to perform very specific tasks for which Von Neumann computers are ill-suited; and they're probably going to have some Von Neumann architecture watching them closely, since one of the things they'll be bad at is knowing if and when they got the right answer.
I guess they really are like us.
Labels: cellular automata, cpu, response