I am watching a George Boole bio on Prime but still don't get it.

I started watching the first few minutes of the "Genius of George Boole" on Amazon Prime, and it was garbage. It's the typical content that's been dumbed down so much that anything useful has been removed. It's the typical sort of hero-worshipping biography that credits its subject with everything it plausibly can.
Boole was a mathematician who tried to apply the concepts of math to statements of "true" and "false", rather than numbers like 1, 2, 3, 4, ... He also did a lot of other mathematical work, but it's this work that continues to bear his name ("boolean logic" or "boolean algebra").
But what we know today as "boolean algebra" was really developed by others. They named it after him, but most of the important stuff came later. Moreover, the "1" and "0" of binary computers aren't precisely the same thing as the "true" and "false" of boolean algebra, though there is considerable overlap.
Computers are built from things called "transistors", which act as tiny switches, able to turn "on" or "off". Thus, we have the same two-valued system as "true" and "false", or "1" and "0".
Computers represent any number using "base two" instead of the "base ten" we are accustomed to. The "base" of a number representation is the number of distinct digits available. Which base we use is purely arbitrary: the Babylonians had a base 60 system, computers use base 2, but the math we humans use is base 10, probably because we have 10 fingers.
We use a "positional" system. When we run out of digits, we put a '1' on the left side and start over again. Thus, "10" always represents the base itself. In base 8, once you run out of the eight digits 01234567, you wrap around and start again with "10", which is the value of eight in base 8.
This is in contrast to something like the non-positional Roman numerals, which had symbols for ten (X), hundred (C), and thousand (M).
A binary number is a string of 1s and 0s in base two. The number fifty-three, in binary, is 110101.
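To make the positional idea concrete, here's a minimal sketch (Python is just my choice for illustration, and to_base is a made-up helper name) that builds the digit string by repeatedly dividing by the base and keeping the remainders:

def to_base(n, base):
    """Convert a non-negative integer to a digit string in the given base
    (digits 0-9 only, which is enough for the bases discussed here)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))  # next digit, rightmost first
        n //= base
    return "".join(reversed(digits))

print(to_base(8, 8))    # "10" -- eight wraps around in base 8
print(to_base(53, 2))   # "110101" -- fifty-three in binary
print(to_base(53, 10))  # "53" -- the familiar base ten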
Computers can perform normal arithmetic computations on these numbers, like addition (+), subtraction (−), multiplication (×), and division (÷).
But there are also binary arithmetic operations we can do on them, like not (¬), or (∨), xor (⊕), and (∧), shift-left (<<), and shift-right (>>). That's what we refer to when we say "boolean" arithmetic.
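Python happens to spell these operators differently (~ for not, | for or, ^ for xor, & for and, << and >> for the shifts), so a quick sketch can show the shifts and not in action, since we won't revisit them below:

x = 0b110101           # fifty-three, written in Python's binary notation

print(bin(x << 1))     # 0b1101010 -- shift-left: every bit moves left, value doubles to 106
print(bin(x >> 1))     # 0b11010   -- shift-right: every bit moves right, value halves to 26
print(bin(~x & 0xFF))  # 0b11001010 -- not: every bit flipped (masked to 8 bits)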
Let's take a look at the and operation. The and operator means that if both the left "and" right bits are 1, then the result is 1, but 0 otherwise. In other words:
0 ∧ 0 = 0
0 ∧ 1 = 0
1 ∧ 0 = 0
1 ∧ 1 = 1
There are similar "truth tables" for the other operators.
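If you'd rather not work those tables out by hand, a few lines of Python (its &, |, and ^ operators, applied to the single bits 0 and 1) will print them:

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} = {a & b}   "
              f"{a} OR {b} = {a | b}   "
              f"{a} XOR {b} = {a ^ b}")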
While the simplest form of such operators works on individual bits, they are more often applied to larger numbers containing many bits. For example, we might have two 8-bit numbers and apply the and operator:
01011100
∧
11001101
=
01001100
The result is obtained by applying and to each set of matching bits in both numbers. Both numbers have a '1' as the second bit from the left, so the final result has a '1' in that position.
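Here's the same example in Python, with the result printed padded out to all 8 bits:

a = 0b01011100
b = 0b11001101

result = a & b                # and applied to each pair of matching bits
print(format(result, '08b'))  # prints 01001100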
Normal arithmetic computations are built from these boolean operations. You can show how a sequence of and, or, and xor operations combine to add two numbers. The entire computer chip is built from sequences of these binary operations -- billions and billions of them.
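As a sketch of that idea, here's the textbook "full adder" construction, one per bit position, built from nothing but and, or, and xor (again in Python, with made-up helper names):

def full_adder(a, b, carry_in):
    """Add two bits plus a carry, using only and/or/xor."""
    s = a ^ b ^ carry_in                        # the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # the carry into the next position
    return s, carry_out

def add(x, y, bits=8):
    """Add two numbers by chaining full adders, one per bit position."""
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(0b01011100, 0b11001101) == (0b01011100 + 0b11001101) & 0xFF)  # True

The final carry out of the top bit is simply discarded, which is why 8-bit computer arithmetic wraps around at 256.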
Conclusion
Modern computers are based on binary logic. This is often called "boolean logic" after George Boole, who did some work in this area, but it's foolish to give him more credit than he deserves. The above Amazon Prime documentary is typical mass-market fodder that gives its subject a truly astounding amount of credit for everything it could plausibly tie to him.
Comments:
Good article! But the Babylonians were human too, so you might want to be more specific, like "moderns" or "Westerners".