Wednesday, December 16, 2015

Forever. You Keep Using That Word

"Someday soon, we'll have techniques that will allow us to live forever."

"We need to set up safeguards so this sort of thing never happens again."

You see stuff like this all the time. Well make some change that will last forever, or we'll prevent some bad thing from happening forever. I'm a geologist. Don't use words like "forever" to me. They do not mean what you think they mean.

Even in historic times, the spans involved can be stunning. Thomas Jefferson and John Adams both died on July 4, 1826, the 50th anniversary of the signing of the Declaration of Independence. That event is closer in time to us than it is to the Pilgrims landing at Plymouth Rock. The Golden Age of Islam can roughly be dated between 632 AD, when Mohammed died, and 1258 when the Mongols sacked Baghdad. That's 626 years. If we date our scientific epoch from the time of Copernicus (died 1543), 626 years takes us to 2169 AD. Cleopatra (died 30 BC) is closer in time to us than to the building of the Pyramids. We are closer in time to the Battle of Hastings (1066 AD) or the Crusades than Jesus was. 

When we talk about geologic time, things get even more extreme. Compress a single year into a second and count backward, at the rate of one year for every second. Your own life is barely a minute. Fifteen seconds takes you back to 2000 AD. If you're retiring soon, a minute takes you back to the time you were born and ten seconds more takes you back to World War II. About four minutes takes you back to the Declaration of Independence and roughly half an hour (33 minutes) takes you back to the time of Christ. It takes over an hour to get back to the building of the Pyramids and three hours to get back to the end of the Ice Age. 

A million years? That will take you 11-1/2 days, counting at one second per year. Counting to the time of the dinosaurs will take you over two years. Our numbering system is so efficient it conceals the true sizes of things from us. 10,000 years to the end of the Ice Ages and the last mammoths (1), and 65,000,000 years to the time of the dinosaurs, don't look terribly different in size. One number looks only a few times bigger than the other. The reality is the last dinosaurs are 6500 times older than the last mammoths. And the age of the dinosaurs wasn't just an instant - it lasted about 150 million years. Everyone's favorite dinosaur bad boy, T-Rex, lived toward the very end of the Cretaceous Period about 65 million years ago. Anyone who has seen the original Fantasia by Walt Disney has seen the famous duel between a T-Rex and a Stegosaurus, the herbivore with the bony plates on its back. Stegosaurus lived in the previous period, the Jurassic, around 150 million years ago. That means that not only could they not have fought as in the movie, but T-Rex is actually closer in time to us than to Stegosaurus. As Tim Urban put it on Wait But Why, a T-Rex had a better chance of seeing a Justin Bieber concert than a live Stegosaurus.

The first abundant fossils appear about 540 million years ago. That will take you over 17 years to count off at the rate of one year per second. A billion years - and there are many places with rocks that old, like the Adirondacks - will take you 31 years. Two billion years, the age of many of the rocks in northern Canada, will take over 63 years to count off. That means if you start counting when a baby is born, you'll hit two billion about the time he retires. Most people will never live long enough to count off three billion years (the age of some rocks in southern Minnesota) - 95 years.

And nobody will live long enough to count off the 4.6 billion year age of the earth - almost 146 years. That means if you started counting when the Golden Spike was driven on the Transcontinental Railroad in 1869, you would just now be counting off the age of the earth.
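If you want to check the counting arithmetic yourself, here's a quick Python sketch of the one-second-per-year scale (the spans are the ones quoted above):

```python
# Counting backward at one year per second: how long does a span take?
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31.6 million

def counting_time(years):
    """Return (days, years) of real time needed to count off `years`
    at the rate of one year per second."""
    seconds = years  # one second per year counted
    return seconds / 86400, seconds / SECONDS_PER_YEAR

days, _ = counting_time(1_000_000)
print(f"A million years: {days:.1f} days of counting")    # about 11.6 days

_, yrs = counting_time(4_600_000_000)
print(f"Age of the earth: {yrs:.0f} years of counting")   # about 146 years
```

The same function reproduces the other figures: 540 million years comes out to about 17 years of counting, and a billion years to just under 32.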

On a geologic time scale, even rare events become commonplace. In a million years, there will be about 7000 magnitude 8 earthquakes on the San Andreas Fault, and the Mississippi river will change course maybe a couple of thousand times. No, our flood control structures will not last long enough to delay it significantly. Any given spot on the U.S. Atlantic coast will see thousands of hurricanes. No human structure made with steel will last that long and even the Pyramids will probably collapse, since they're steeper than the angle of repose.

Let's assume we eliminate all disease. That still leaves accidents, and some will destroy you no matter what miracles our medicine can perform. The death rate from unintentional injury in the U.S. is about 41 per 100,000, or roughly one chance in 2500 of dying in a year. Your chances of living 1000 years at that rate are about 2/3. You have about a 44 percent chance of making it 2000 years and about 1.6% of making it to 10,000 years. You probably won't make it to the next Ice Age. Your odds of making it to 20,000 years are 0.03%. A million years? A lousy million years, not even close to the time of the dinosaurs? Decimal point, followed by 176 zeroes, followed by a one, per cent (2).

And we aren't anywhere close to forever. We haven't even scratched the surface of how big numbers can get. The universe is about 13.8 billion years old, but the tiniest speck of dust you can see has more atoms in it than that. Atoms have relative weights, called atomic weights. Hydrogen is 1, carbon is 12, oxygen is 16. That many grams of each (1 of hydrogen, 12 of carbon, etc.) contains 6.02 x 10^23 atoms. That's 6 followed by 23 zeroes. Counting those atoms, one per second, will take you about 1.4 million times the age of the Universe. 
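That claim is easy to check in a few lines of Python (I take the age of the universe as the standard estimate of about 13.8 billion years, and a year as 365.25 days):

```python
AVOGADRO = 6.02e23                    # atoms in a mole
SECONDS_PER_YEAR = 365.25 * 24 * 3600
UNIVERSE_AGE_YEARS = 13.8e9           # standard estimate

# Years needed to count a mole of atoms at one atom per second
counting_years = AVOGADRO / SECONDS_PER_YEAR
ratio = counting_years / UNIVERSE_AGE_YEARS
print(f"{counting_years:.1e} years of counting")     # about 1.9e16 years
print(f"{ratio:.1e} times the age of the universe")  # about 1.4e6
```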

You have about 30 trillion cells in your body and the DNA in each cell is about a meter long. Strands of DNA are only a dozen or so atoms wide, and very tightly crinkled. That means if you could take all the DNA out of your body, well, you would die. But it would total 30 trillion meters or 30 billion kilometers in length, enough to wrap around the orbit of Pluto. Which is so awesome it would be a real shame you wouldn't be there to contemplate it. You probably have around 2 x 10^27 atoms in your body. The total number of atoms in the earth is around 1.3 x 10^50 atoms. Scientific notation is even more compact and efficient than ordinary numbers and is even better at concealing the true sizes of things. The difference between the number of atoms in your body and the number in the earth isn't the difference between 27 and 50, it's about 1 followed by 23 zeros.
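For the curious, the DNA figure works out like this (Pluto's mean distance from the sun, about 5.9 billion kilometers, is my assumption, and I treat its orbit as a circle):

```python
import math

CELLS = 30e12               # cells in the human body
DNA_PER_CELL_M = 1.0        # roughly a meter of DNA per cell
PLUTO_MEAN_DIST_KM = 5.9e9  # assumed mean sun-Pluto distance

total_km = CELLS * DNA_PER_CELL_M / 1000      # meters to kilometers
orbit_km = 2 * math.pi * PLUTO_MEAN_DIST_KM   # circular approximation
print(f"Your DNA, end to end: {total_km:.1e} km")  # 3.0e10 km
print(f"Pluto's orbit: {orbit_km:.1e} km")         # about 3.7e10 km
```

So 30 billion kilometers gets you most of the way around Pluto's orbit, which is close enough for a number that awesome.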

The sun is 300,000 times as massive as the earth, but the earth is largely made of iron and silicon. The sun is made up mostly of light hydrogen and helium, and it takes a lot more of those atoms to equal the same mass. The sun has about 10^57 atoms. The difference between 50 and 57 doesn't seem great, but it translates to the sun having ten million times as many atoms. In the whole universe, the number of elementary particles - protons, neutrons and electrons - is estimated to be about 10^80.

According to some theories, the Universe is running down and will eventually expand to a cold inert place where everything is at the same low energy. This "heat death" is estimated to be about 10^100 years in the future. That's about the largest number used in science for actually counting discrete objects. How long is that? Take the age of the universe, and spend that amount of time contemplating every single individual subatomic particle in the universe. Then do it ten billion times. 10^100 years isn't just a bit longer than the age of the universe, it's incomprehensibly longer. I can write the numbers and do the math. I can't actually picture it.
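That comparison is just multiplication, and it checks out:

```python
UNIVERSE_AGE_YEARS = 1.38e10   # age of the universe
PARTICLES = 1e80               # estimated elementary particles in the universe
REPEATS = 1e10                 # "do it ten billion times"

total = UNIVERSE_AGE_YEARS * PARTICLES * REPEATS
print(f"{total:.1e} years")    # about 1.4e100, i.e. roughly 10^100
```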

It's clear that when people say "live forever," they don't mean insane quantities like 10^100 years, or even a million years. A million years would allow you to spend a thousand years as a doctor, as a politician, as a teacher, as a soldier, as a farmer... Then you could spend a thousand years living in each country on Earth and still have time left over. No, "forever" likely means "until I get tired of it." It means going out on your schedule and nobody else's.

And we've just stumbled onto a fascinating theological point. Whatever happens after we die, it cannot involve linear time as we experience it. What would you do with a million years' worth of memories? "Oh, but I believe in the Bible." Funny thing, check this out:

“What no eye has seen,
    what no ear has heard,
and what no human mind has conceived”—
    the things God has prepared for those who love him— (1 Cor. 2:9)

When it says "no human mind has conceived," do you suppose it actually means "no human mind has conceived," as in, nobody can possibly imagine it?

As long as I have the pulpit, a lot of religious believers seem to picture that they're five or six feet tall, and God is maybe eight or nine feet. But consider this: "As the heavens are higher than the earth, so are my ways higher than your ways and my thoughts than your thoughts." (Isaiah 55:9). So the reality is, there's the floor, with bacteria and mold and stuff the dog tracked in, and there's us, a fraction of an inch above that, and there's God, beyond the roof, beyond the Solar System, beyond the galaxy, beyond the remotest quasar. To paraphrase Douglas Adams, "Infinity is big. You just won't believe how vastly, hugely, mind-bogglingly big it is."

Surely 10^100 is the biggest number anyone would ever actually use, right? Not even close. There are numbers so big they're even hard to write. What's the biggest number you can write with three 9's? 999, right? What about 99^9? That equals 9.1 x 10^17. But we can do way better: 9^99 = 2.95 x 10^94. But what if we take nine to the ninth power (387,420,489) and raise the third nine to that? Exponents do screwy things to documents, especially if they're stacked, so we can use the caret notation 9^9 for nine to the ninth power, and the biggest number we can write with three nines is 9^9^9 = 9^387,420,489, a number with hundreds of millions of digits. The rule is you start with the topmost exponent and work down: 2^2^2^2 = 2^2^4 = 2^16 = 65,536.
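You can't comfortably compute 9^9^9 itself, but logarithms tell you how big it is: a number a^b has floor(b x log10(a)) + 1 decimal digits. A short Python check:

```python
import math

def digits_of_power(base, exp):
    """Number of decimal digits in base**exp, via logarithms."""
    return math.floor(exp * math.log10(base)) + 1

print(digits_of_power(99, 9))     # 18 digits: 99^9 is about 9.1e17
print(digits_of_power(9, 99))     # 95 digits: 9^99 is about 2.95e94
print(digits_of_power(9, 9**9))   # 9^(9^9): about 370 million digits
```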

Numbers get really huge when you start calculating the number of different ways things can happen. For example, the number of different sequences you can deal a deck of cards is 52 for the first card, times 51 for the next and so on down to one for the last card. That's 52x51x50x49....x1, and that sort of thing crops up often enough to have a name and a symbol. It's written 52! and called 52 factorial. (It suggests that people who claim to have seen a perfect bridge deal where everyone gets all cards of one suit are - putting it delicately - mistaken. At the very least they failed to shuffle and deal properly.) Going back to three nines, care to contemplate how big (9^9^9)! is? Let alone ((9^9^9)!)! or ((9!^9!^9!)!)! ?
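Python's arbitrary-precision integers make 52! itself easy to look at, even if the stacked-factorial monsters stay out of reach:

```python
import math

orderings = math.factorial(52)  # distinct orders to deal a 52-card deck
print(f"52! = {orderings:.3e}")                 # about 8.066e67
print(f"52! has {len(str(orderings))} digits")  # 68 digits
```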

How many possible chess games are there? Information scientist Claude Shannon estimated a paltry 10^120 possible games, chicken feed compared to some of the numbers we've just discussed. Of course, you could ignore the rules about stalemate and simply repeat the same sequence of moves forever, but that's trivial. But to study the quantum mechanics of materials, you have to calculate the possible number of energy states of the material, which means some number for each atom times all the others. For example, how many atoms in a room are moving slower than 10% of the average velocity? There could be 10^30 atoms or molecules of air in a large room, so the number of possible energy states of those atoms is something like 10 raised to the power of the number of atoms. That is 10^10^30. In quantum mechanics, it is possible for a camel to pass through the eye of a needle; the odds are so mind-bogglingly tiny that "impossible" barely does it justice.

These are about the biggest numbers I know of in physics. In abstract math, you can obviously write arbitrarily large numbers like 9!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.... But in "serious" math, where someone is actually trying to solve a real problem, they can still be stupendous. 

Tim Urban has a great discussion of large numbers on the site "Wait But Why". Consider the first few levels of making big numbers:
9 = 1+1+1+1+1+1+1+1+1
10 x 9 = 10+10+10+10+10+10+10+10+10
10^9 = 10*10*10*10*10*10*10*10*10
10^^9 = 10^10^10^10^10^10^10^10^10
10^^^9 = 10^^10^^10^^10^^10^^10^^10^^10^^10 (nine terms)
Graham's Number, at one time the largest number ever used in a serious mathematical proof, starts with g1 = 3^^^^3, then defines g2 as 3 joined to 3 by g1 up-arrows, and carries that process out to 64 levels; the result is g64.
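The escalating carets here are Knuth's up-arrow notation, and the recursion behind the whole ladder fits in a few lines. Only tiny inputs finish before the heat death of the universe, which is rather the point:

```python
def arrow(a, n, b):
    """a up-arrow^n b in Knuth's notation: n=1 is plain exponentiation,
    and each extra arrow iterates the level below it."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(2, 2, 4))  # 2^^4 = 2^2^2^2 = 65,536
print(arrow(3, 2, 3))  # 3^^3 = 3^3^3 = 7,625,597,484,987
# Graham's g1 would be arrow(3, 4, 3) -- don't try it.
```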

As Urban put it: "Imagine living a Graham’s number amount of years.  ... it’s a reminder that I don’t actually want to live forever—I do want to die at some point, because remaining conscious for eternity is even scarier."

So don't use the word "forever" around anyone who deals with large time spans or numbers. You have no conception what it means.


1. There were some holdout mammoths that survived on Wrangel Island off the coast of Siberia until 4000 years ago. They lived until the time the Pyramids were being built. Fascinating and poignant, but not really relevant here.
2. The way to calculate odds like these is to calculate the odds of something not happening. If you have 41 in 100,000 odds of dying in an accident each year, you have 99,959/100,000 or 0.99959 odds of surviving. Your chances of surviving 1000 years are 0.99959^1000 = 0.6636, and so on.
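For anyone who wants to reproduce the odds in the post, here is the footnote's method in a few lines of Python (using the 41-per-100,000 accident rate quoted above):

```python
annual_survival = 1 - 41 / 100_000   # 0.99959 chance of surviving each year

def odds_of_surviving(years):
    """Chance of never dying in an accident over `years` years."""
    return annual_survival ** years

print(f"{odds_of_surviving(1000):.4f}")     # 0.6636
print(f"{odds_of_surviving(20_000):.2%}")   # about 0.03%
```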

Sunday, December 13, 2015

Comets, Meteor Showers and Science Denial

So it's mid-December and the internet is busily hyping the Geminid meteor shower, supposedly the best of the year. Meteor showers, to me, are one of the most irresponsibly hyped celestial phenomena. And the damage is far from harmless.

Meteor Showers

Meteor showers happen when the earth crosses the orbit of fine debris, shed by a comet or asteroid. In some cases, we know the specific source. In other cases the parent object is long dead or had its orbit perturbed. The meteors appear to radiate from a single point just like snowflakes in your headlights do. It's a perspective effect - their paths relative to you are actually parallel. Meteor showers typically peak after midnight because that's when your location on earth is facing forward in its orbit. 

Yes, there are daytime meteor showers. How do we know? Because the ionized trails of the meteors affect radio signals and can be detected on radar.

The Geminids are expected to display about 120 meteors per hour, and by meteor shower standards, that's pretty intense. To get an idea what it's actually like, say "whee!" then count slowly for 30 seconds and say "whee!" again. And that's assuming you'll see every meteor. You won't. Many will be outside your field of vision. Expect more like one every two minutes.

If you're a non-scientist, imagine someone promises you a great fireworks display, but you have to get up at 3 AM to see it. Then, every two minutes, someone tosses a sparkler in the air. And worse yet, he seems genuinely impressed. Will you trust that person next time he promises something?

No meteor shower should be described as strong unless it displays 1000 or more meteors an hour. What shower is that? Well, there isn't any. There are rare "meteor storms" that exceed that rate, the most famous being the Leonid shower in November. The Leonids were spectacular in 1833 and 1866, and produced a brilliant burst in 1966. Astronomers have had some success in predicting the orbits of dense swarms of particles and predicting outbursts. Unfortunately, the bursts tend to be short and geographically localized. But at least it can be worth getting out of bed to check.


Rivaling meteor showers for irresponsible hype are comets. Comets are generally discovered far from the sun and travel long eccentric orbits. So there's a long lead time before the comet passes close to the sun or earth. Once upon a time we wouldn't know about a major comet until it became pretty obvious, but now sky patrols pick them up when they're quite faint.

Sometimes a comet lives up to the hype. Comet Hale-Bopp in 1997 was detected far from the sun and was visible to the unaided eye for over a year. It set records for duration and distance of visibility. And it won't come back for several thousand years. It was brilliant in a dark sky and, for the first time in human history perhaps, billions of people were able to see a comet as a thing of beauty instead of a fearful portent of disaster.

And there are surprises. Comet Holmes had been plodding uneventfully through the inner Solar System every 6.9 years since its discovery in 1892. Then, in October, 2007, it had the greatest outburst ever seen in a comet, and brightened half a million times from a faint object visible only in large telescopes to something easily visible to the unaided eye. It actually attained a diameter greater than the Sun, though the total amount of mass was tiny. Payoffs like that are why amateur astronomers watch "dull" comets.

But for the average non-scientist, the experience is more like Comet Kohoutek in 1973. Kohoutek was believed to be a visitor from the remote Oort Cloud on its first visit to the inner Solar System. First-time comets are a notorious crap shoot. They can release spectacular amounts of gas and dust and be dazzling. Or they can be so tightly frozen that they release little material. That's what happened with Kohoutek, and after the media hype, Kohoutek became a synonym for spectacular failures. Something similar happened to Comet ISON in 2013. ISON was discovered far from the sun and was following the orbit of numerous other comets that had close passages by the Sun. So a year out, the astronomical community was buzzing about the potential show. But ISON failed to brighten as hoped, and worse yet, it evaporated passing the Sun. Images taken after closest encounter showed a short-lived diffuse cloud that rapidly dissipated.

Even comets that put on spectacular displays can disappoint if their brightest appearance is so brief that bad weather can blot it out. Or a comet can appear briefly and unfavorably for one hemisphere and be spectacular in the other. Perversely, they seem to favor the Southern Hemisphere.

One comet I recall was Hyakutake in 1996, one of the closest comets to earth in centuries. Unfortunately, my observing site was a floodlit compound in Bosnia. A few weeks earlier, my team had been out after dark under a dazzling sky, but not when Hyakutake came by. Armed with only vague descriptions of the location in Stars and Stripes, I tried my best to view it from the shadows and saw nothing. But I bet that was the experience of most urban dwellers who tried, as well.

Based on my own experiences being disappointed by comets, I think a comet should be reported in the mass media only if it will be brighter than magnitude zero (a very bright star) in a dark sky and at least 30 degrees above the horizon. Report it only if the comet would be obvious to an urban resident with no knowledge of the constellations. And report it when and if it happens, not a year out like ISON or Kohoutek.

So What's the Harm?

If people can't trust science to tell them accurately what they can expect to see from their own back yards, why should they trust science when it tells them about climate change, GMO's, vaccination or evolution?

Sunday, August 16, 2015

Forget the Second Amendment: This is the Critical Amendment

Amendment III

No Soldier shall, in time of peace be quartered in any house, without the consent of the Owner, nor in time of war, but in a manner to be prescribed by law.
Has this ever been a problem? Well, during the War of 1812, Sackett's Harbor, New York was a major military base and the buildup of forces rapidly outstripped the supply of housing. So soldiers were quartered in private homes, presumably by law. Since the town was a likely target of British attack, probably consent wasn't a large problem. There were quite a few instances of private homes being used during the Civil War. Apart from that, about the only court case in U.S. history involving this amendment was a case where prison employees in New York went on strike (Engblom v. Carey). National Guardsmen were called in as temporary prison guards and the striking prison guards were evicted to provide quarters for the Guardsmen. A Federal court agreed that the tenants counted as owners and sent the case back to a lower court, which dismissed the case because the State could not have foreseen the higher court ruling and there were no legal guidelines to use for compensation. So they simultaneously won and lost. No Supreme Court decision has ever directly involved the Third Amendment. At most, the amendment has been cited in passing only to show how deep American concerns about privacy and property run.

Nobody does life support like the U.S. military, and almost any situation requiring temporary quarters for troops will probably make use of public buildings or large private spaces like warehouses or hangars. Note that there's absolutely nothing to prevent troops from being housed with consent, so a family putting up a visiting family member in the military doesn't apply. Nor is there anything to prevent people from voluntarily housing troops from patriotic motives or for compensation.

So what's relevant about perhaps the least-applied Amendment in the Constitution? A few court cases have invoked the Third Amendment, most of which have been described as "far fetched" or "silly." One of the most widely quoted was United States v. Valenzuela (1951). Gus Valenzuela (who was serving in Korea at the time) was sued by the Federal government for violating the 1947 House and Rent Act. He asked a Federal Court to dismiss the case on Third Amendment grounds. The Court ruled:
Reference will be made to only one of defendant's constitutional contentions. In his brief, the defendant states, "The 1947 House and Rent Act as amended and extended is and always was the incubator and hatchery of swarms of bureaucrats to be quartered as storm troopers upon the people in violation of Amendment III of the United States Constitution."
"This challenge" to quote from defendant's brief, "has not been heretofore made or adjudged by any court, insofar as our research discloses."
We accept counsel's statement as to the results of his research but find this challenge without merit.
The motion to dismiss is denied.
So metaphorically equating regulators with occupying troops doesn't cut it in court. The courts in general take a dim view of sweeping curtailment of government authority based on someone's novel reinterpretation of the Constitution. And given the very narrow wording of the Third Amendment, unless someone houses a squad of Marines in your rumpus room, the courts aren't going to take your Third Amendment case seriously. You can argue the EPA are like storm troopers and regulating products you use in your home amounts to being quartered in your house. Just don't expect a court to take you seriously.

What's the Real Issue?

Before the American Revolution, Britain had a problem. The Colonies demanded protection from Indian attacks, but refused to put up the money for barracks and rations. Staying in private homes, of course, wouldn't be an issue in the field because there weren't enough settlers to make it practical, and soldiers permanently stationed on the frontier would stay in forts. So quartering troops in private homes only became an issue in garrison. It's important to note three things in defense of the British:
  • The need for military protection wouldn't have been so acute if settlers had respected Indian lands and restrictions on settlement.
  • The colonists wanted military protection, but didn't want to pay for it. Sound familiar?
  • The regulations on requisitioning quarters called for occupying public and unoccupied buildings first and private homes last.
Simply refusing to send troops until the Colonies ponied up wasn't an option. It would effectively abandon the Colonies and make resistance to calls for independence a lot less defensible. Allowing voting representation in Parliament would have created a plethora of problems because the Colonies had a population roughly half that of Britain. The Colonies couldn't out-vote the home country, but they could certainly ally with sympathetic members of Parliament and seriously upset the political balance of power. And as tensions rose in the Colonies, British troops assumed more the role of an occupying force, costs rose, more taxes were imposed, tensions rose still further and the rest, as they say, is history.

We can see tons of counterfactual ways Britain could have defused the situation. Giving each colony (plus those in Canada and maybe Jamaica) a voting member would have kept control firmly in British hands but allowed a much freer exchange of information. Parliament could have gotten much more accurate understanding of colonial grievances, and the colonists might have gotten a clearer picture of how much their defense was costing. As fun as it can be to write counterfactual history, that's not the point here.

So the real issue is, the government (in this case Britain) didn't have the money to solve a problem, and solved the problem by dumping it in the laps (or living rooms) of citizens. And that is a contemporary issue. No, it's not soldiers. But government at all levels dumps the costs of programs on everyone else. The Federal government wants secure identification. So it dumps the costs of creating secure ID on the States, who in turn dump the costs of gathering the necessary documents onto private citizens. We solve the need for equal opportunity by restricting the rights of employers and landlords. We provide access to the handicapped by requiring business owners to pay for ramps, elevators, rest room stalls and so on. States decide that schools must provide services to students and pass the costs on to school districts, who, in turn, pass on the costs to taxpayers.

If we were drafting the Constitution today, we might see the Third Amendment expressed in these terms:
No unit of government shall impose any unfunded mandate on any lower level of government or private person.

Thursday, July 2, 2015

Three Things the Confederacy Got Right

The Confederate Constitution is available on-line, and incorporates a lot of the U.S. Constitution. The Bill of Rights is incorporated bodily into the CSA Constitution. There are changes that appealed to sectional interests, including limitations on tariffs and a prohibition on spending to facilitate commerce (meaning roads, railroads, and so on). And there were provisions that locked slavery into place. Article I, Section 9 (4) said "No bill of attainder, ex post facto law, or law denying or impairing the right of property in negro slaves shall be passed." Not only is slavery enshrined up there with no bills of attainder or ex post facto laws, not only is Congress forbidden to interfere with it, but it's described as a right. Wrap your heads around that.

Still, they'd had 73 years of experience under the Constitution, and they saw a few things that needed tweaking, and a few they even got right.

Line-Item Veto

Article I, Section 7 (2) The President may approve any appropriation and disapprove any other appropriation in the same bill. In such case he shall, in signing the bill, designate the appropriations disapproved; and shall return a copy of such appropriations, with his objections, to the House in which the bill shall have originated; and the same proceedings shall then be had as in case of other bills disapproved by the President.
The President can veto selected appropriations, which Congress can then override. This is called a line-item veto, and Presidents since Eisenhower have called for it. (The issue hardly arose before Roosevelt. Roosevelt oversaw the huge increase in spending between the Depression and World War II, and Truman, as his successor, carried on Roosevelt's policies. By post-1945 standards, budgets before 1933 were minuscule.)

Single-Subject Bills

Article I, Section 9 (20) Every law, or resolution having the force of law, shall relate to but one subject, and that shall be expressed in the title.
This would eliminate the practice of passing "Omnibus" bills that cover a plethora of subjects, and seriously crimp the practice of attaching riders to bills.

Constitutional Amendments

Article V Section I. (I) Upon the demand of any three States, legally assembled in their several conventions, the Congress shall summon a convention of all the States, to take into consideration such amendments to the Constitution as the said States shall concur in suggesting at the time when the said demand is made; and should any of the proposed amendments to the Constitution be agreed on by the said convention, voting by States, and the same be ratified by the Legislatures of two-thirds of the several States, or by conventions in two-thirds thereof, as the one or the other mode of ratification may be proposed by the general convention, they shall thenceforward form a part of this Constitution.
This measure departs from our own Constitution in several ways. First, there is no mechanism for Congress to propose amendments, although with only three states required to call a convention, it seems likely that Congress might be able to persuade states to propose what it wanted.

It eliminates the worst concern about calling a Constitutional Convention. Under the U.S. Constitution, there are absolutely no restrictions on what a Convention can consider. The process is spelled out more explicitly here. The Convention is limited to the amendments proposed by the states, and has the power to decide whether the amendments are to be ratified by legislative act or state conventions.

The most important change is that only three states need concur to launch a convention. Given that the Confederacy had only thirteen stars on its flag (one for each seceded state, plus pre-approved stars for Missouri and Kentucky), this measure effectively means that about a quarter of the states can call a convention (as opposed to two thirds under the U.S. Constitution). Only two-thirds of the states (as opposed to three fourths) are needed to ratify. The reason for the differing limits in the U.S. Constitution is to prevent the same states that called for an amendment from ratifying it as well.

Both liberals and conservatives freak out at the idea of calling a Constitutional Convention, solid evidence that it's a great idea. Look at what happened the last time we had a convention consider modifying the basic law of the land, they warn. We got a whole new Constitution. Yes indeed, look. The new Constitution was approved by Congress and sent to the States. It took nine states to ratify, a process that took two years and generated intense debate. It's not like the Constitutional Convention simply rolled out a new law on its own and said "obey." That requirement that three fourths of the states ratify any proposed amendment is a very high bar.

So they did indeed get this one  - mostly - right. Having Congress able to propose amendments is still a good idea. Three states is setting the bar a bit too low, but ten or fifteen is reasonable. In fact, let's bypass conventions altogether. If ten states pass resolutions calling for an amendment, the amendment goes directly to the states for voting. It will still take three fourths of the states to pass.

Sunday, June 28, 2015

Treason of the Intellectuals, Volume 3

Treason of the Intellectuals

Or, How to Get Kicked Off "Little Green Footballs"

Treason of the Intellectuals was the title of a 1928 book by Julien Benda, originally published in French as La Trahison des Clercs. The term Clerc has an obvious relationship to the word cleric, and Benda used it in the sense of people who devoted their lives to ideas and thought without necessarily being concerned with practical applications. (I have come to the conclusion that I need to explain that "treason" here means betrayal of the ideals of intellectual inquiry, not political treason). Benda was distressed at the way intellectuals of the early 20th Century had been increasingly seduced by the appeal of power, and by the possibility that men of ideas might have a real role in shaping human events. Some devoted their energies to justifying nationalism, others to fanning class rivalry. One group would soon furnish an intellectual basis for fascism, the other had already been swept up by early Marxism, dazzled by the Russian Revolution. Benda presciently warned that if these political passions were not reined in, mankind was "heading for the greatest and most perfect war the world has ever known."

Society and intellectuals had been jointly responsible for this process. Particularly in Germany, universities had been redefined as institutions for producing skilled scientists and engineers, and the increasing success of science and technology in producing practical results had led to a shift from a belief in knowledge as good in itself to knowledge as good for practical purposes. Universities discovered that people who doled out money grudgingly for abstract knowledge were quite happy to spend money for knowledge with practical uses. The intellectuals of whom Benda wrote had aspirations of being philosopher-kings. Not philosopher-kings in the ancient sense, kings who used the insights of philosophy to rule more wisely and justly, but philosophers who were elevated to the rank of kings simply and solely by virtue of styling themselves "philosophers," and who would be able to use the power of the state to advance their own philosophical agendas (and presumably quash opposing views).

Volume II: Marxism

Volume II, of course, would be a study of the way Western intellectuals prostituted themselves to Communism during the Stalinist era and the Cold War. Innumerable books on this subject have been written. Most of those of Cold War vintage were derided as mere anti-Communist hysteria or, ironically, as "anti-intellectual." When I was growing up in the 1950's, I got a fairly standard view of the horrors of Communism. By the mid 1960's, I had come to regard a lot of that information as mere propaganda. Then, early in my college career at Berkeley (1965-69, no less), I got a revelation. I was browsing in the library stacks and came across a section on Soviet history. I discovered that everything I had been taught to regard as propaganda was in fact true, and moreover, the documentation was massive and easy to find. Then I read Aleksandr Solzhenitsyn's Gulag Archipelago and discovered that what I had been told in the 1950's wasn't the whole truth. The reality was far worse. Only the most massive and willful denial of reality could have accounted for the mind-set of Western intellectuals.

The Soviet Union is gone, and while nominal Communism lingers in Cuba, China, Vietnam and North Korea, Communism as a global magnet for intellectuals is gone. One preposterous claim, seriously advanced by some intellectuals, is that they played a role in the downfall of Communism, when in fact they obstructed and ridiculed opposition to Communism at every turn. But surely the most wonderful irony is that the CIA set up front foundations during the Cold War to fund leftist intellectuals and thereby provide an alternative to Marxism. Bertrand Russell, the archetypical anti-Western Cold War intellectual, was actually covertly subsidized by the CIA. I love it. Russell, to me, symbolizes everything that made the Twentieth Century a scientific golden age and a philosophical desert, a thinker whose reputation was based solely on his own hype machine. With his colossal ego, he never for a moment suspected that his funding was anything other than richly deserved. The karmic irony is beautiful.

Volume III: Islamic Fascism

But a new magnet for intellectuals is emerging: radical Islam. It's not that intellectuals are likely to embrace radical Islam themselves anytime soon - for one thing, the requirement of believing in God would deter many of them. But what they can do is obstruct efforts to combat radical Islam and terrorism, undermine support for Israel, stress the "legitimate grievances" of radical Islamists, and lend moral support to the "legitimacy" of radical Islamic movements.

This is a phenomenon at first glance so baffling it cries out for analysis. Both fascism and Marxism censored, harassed, and imprisoned intellectuals, but they also gave lip service to intellectualism. Russia and Germany both had great universities. Both fascism and Marxism appealed to their respective nations' cultural heritage in support of their ideologies. Our mental picture of fascism is now mostly colored by images of Nazi book burnings and bad art, but before World War II fascism was quite successful at passing itself off as a blend of socialism and nationalism. 

Marxism in particular offered an intellectual framework that many intellectuals bought into. Marxism presented a facade of support for culture and science, paid intellectuals highly and created huge academic institutions. True, intellectuals in the Soviet Union were well paid mostly in comparison to the general poverty of everyone else rather than in real terms, the economy was so decrepit that the money couldn't purchase much of value, and a lot of the academic institutions were second-rate in comparison to any American community college, but at least the Soviet Union could put forth an illusion of fostering intellectual inquiry. (I once sent a letter to the Soviet Embassy inquiring about films on the Soviet space program. This was after word-processors had become universal in American offices. I got a reply - a couple of years later - typed on a manual machine that looked as if Lenin had typed his high school term papers on it, and the embassy was still using the same ribbon.)

But radical Islam is openly hostile to intellectual inquiry. Iran under the Ayatollahs banned music. In the United States, the work Piss Christ ignited a fierce debate - not over whether such work should be allowed, but whether it should be publicly supported. In parts of the Islamic world, dissident works invite not debate over public funding, but death sentences. Fascism and Marxism at least offered the illusion that they supported intellectual inquiry. Radical Islam offers intellectuals nothing. In fact, it is openly hostile to all intellectual inquiry except for a sterile Koranic pseudo-scholarship on an intellectual par with memorizing Jack Chick tracts. It makes no secret of its desire to extirpate all intellectual life. So why aren't Western intellectuals whole-heartedly behind any and all diplomatic and military attempts to combat radical Islam?

Not only do some intellectuals oppose any defense against radical Islam, but they oppose any attempt to label it accurately, in particular with the term "Islamic fascism." No doubt part of the hostility is due to George W. Bush using the term in the aftermath of 9/11, and the man who beat Al Gore can do nothing right. But the salient feature of classical fascism was its emphasis on collective and national rights over individual rights, and "Islamic fascism" is precisely the correct technical term for a movement with that emphasis.

Hatred of Democracy

When we try to discover what fascism, Marxism, and radical Islam have in common, the field shrinks to a single common theme: hatred of democracy. Despite all the calls for "Power to the People" from radical intellectuals, the reality is that no societies have ever empowered so many people to such a degree as Western democracies. Fascism, Marxism, and radical Islam are all elitist movements aimed at putting an ideological elite class in control.

The problem is that people in democratic societies usually end up using their empowerment to make choices that the would-be intellectual elites hate. How can we reconcile the fact that the masses, whom intellectuals profess to support, keep making the wrong choices? I've got it - they've been duped somehow. Those aren't their real values; they've been brainwashed into a "false consciousness" by society. If they were completely free to choose, they'd make the "right" choices. But of course we have to eliminate all the distractions that interfere with the process: no moral or religious indoctrination, no advertising or superficial amusements, no status symbols, no politically incorrect humor. "False consciousness" is a perfect way of professing support for the masses while simultaneously depriving them of any power to choose; a device for being an elitist while pretending not to be.

The post-Soviet version of "false consciousness" is "internalized oppression." If you're a woman who opposes abortion, a black with middle class values, or a person with a lousy job who nevertheless believes in hard work, those aren't your real values. You've internalized the values of the white male power elite and allowed yourself to become their tool. You don't really know what you believe. When the enlightened elite want your opinion, they'll tell you what it is.

Democracy confronts radical intellectuals with a threat more dangerous than any censor, secret police, or religious fatwa - irrelevance. An intellectual working on behalf of a totalitarian regime can imagine himself as an agent of sweeping social change. If he ends up in a labor camp or facing a firing squad he can at least console himself that his work was so seminal that the only way the regime could cope with it was to silence him. He made a difference. A radical intellectual in a democracy, on the other hand, finds the vast majority ignoring him. They never heard of him. His most outrageous works go unknown or are the butt of jokes. He watches in impotent rage as the masses ignore art films and go to summer blockbusters. Worse yet, things that are noticed get co-opted, watered down and trivialized. Works that are supposed to shake the System to the core are bought by fat cats to decorate corporate headquarters or stashed in bank vaults as investments. Fashions that scream defiance of everything the society holds dear end up being the next generation's Trick or Treat costumes. Protest songs end up being played on elevators twenty years later.
Eric Hoffer, the longshoreman-philosopher, nailed it perfectly:
The fact is that up to now a free society has not been good for the intellectual. It has neither accorded him a superior status to sustain his confidence nor made it easy for him to acquire an unquestioned sense of social usefulness. For he derives his sense of usefulness mainly from directing, instructing, and planning - from minding other people's business - and is bound to feel superfluous and neglected where people believe themselves competent to manage individual and communal affairs, and are impatient of supervision and regulation. A free society is as much a threat to the intellectual's sense of worth as an automated economy is to the workingman's sense of worth. Any social order that can function with a minimum of leadership will be anathema to the intellectual.
Little Green Footballs

Somehow, a post very similar to this got me suspended from "Little Green Footballs." My previous post had a +19 rating, but a few days after I posted this, it was deleted and my account was suspended. And they have not had the courtesy to reply to any of my inquiries.

Saturday, January 31, 2015

Three Scientific Maxims that are Pure B.S.

Absence of Evidence isn't Evidence of Absence

Did Indians ever live on my property? I do a careful search for pottery and arrowheads, don't find any, and conclude the answer is no.

But even substantial habitation sites can go unnoticed, because artifacts get scattered and buried by later deposits. I just read of a couple of quite large settlements found in the newly expanded part of Petrified Forest National Park. Also, before my home was built, the area was probably farmland, which means the surface was disturbed, and artifacts that were noticed were picked up. Then the lot was disturbed again when the house was built. Finally, even with millions of people over thousands of years, habitation sites made up only a tiny fraction of the landscape, so there is a very good chance that my home never was part of a habitation site (except my own). Therefore, absence of evidence isn't evidence of absence.

Suppose we try a different hypothesis. Indians had technology comparable to our own. They had good highways, flight, electricity and large cities. We just can't find any evidence because "absence of evidence isn't evidence of absence." In this case, the maxim is ridiculous. An advanced civilization on a par with our own would have left gigantic amounts of evidence. The fact that we don't find it is solid evidence that it doesn't exist.

And of course absence of evidence is used all the time in science as evidence of absence. We say the dinosaurs went extinct 65 million years ago, instead of simply hiding like in Dilbert, because we just don't find dinosaur fossils after that time. We are sure humans didn't come to North America before perhaps 20,000 years ago because of the total lack of human remains and artifacts from before that time. Absence of even rare things like dinosaur fossils and artifacts becomes significant if it extends over a large enough area.

So the real principle is pretty clear. Failure to find something that's normally uncommon may mean you simply missed it, or that by chance it never occurred in your search area. Failure to find something that should have left widespread and obvious evidence is actual evidence of absence.
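One way to make this principle precise is Bayes' rule: how much a failed search should lower your confidence depends entirely on how likely the search was to find the thing if it existed. Here's a minimal sketch in Python (the probabilities are made-up numbers purely for illustration):

```python
def posterior_after_no_find(prior, p_detect):
    """Probability the thing exists, given a search that found nothing.

    prior    : probability it exists, before the search
    p_detect : probability the search would find it IF it existed
    """
    # Bayes' rule: P(no find | exists) = 1 - p_detect, P(no find | absent) = 1
    p_no_find = prior * (1 - p_detect) + (1 - prior)
    return prior * (1 - p_detect) / p_no_find

# Rare, easily missed evidence (arrowheads on one suburban lot):
# a failed search barely moves the needle.
print(posterior_after_no_find(prior=0.5, p_detect=0.05))   # ~0.487

# Evidence that should be everywhere (highways, large cities):
# a failed search is nearly conclusive.
print(posterior_after_no_find(prior=0.5, p_detect=0.999))  # ~0.001
```

With easily missed evidence, finding nothing leaves the odds essentially where they started; with evidence that should be unmissable, finding nothing is, in fact, strong evidence of absence.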

Finally, consider this. The police search your home for drugs and child porn. After an exhaustive search, they don't find any. But the DA takes you to trial anyway, arguing that "absence of evidence isn't evidence of absence." How do you feel about that maxim now?

The Plural of Anecdote is not Data

What on earth is data if it's not a large collection of individual observations, any one of which could be called an "anecdote?"

The problem with anecdotal evidence is that, to be valid, the anecdote has to be true and it has to be representative. The true part seems self-explanatory, except the world is full of urban legends (and a salute here to Jan Harold Brunvand for introducing the concept) that people pass along and embellish because they sound plausible. For example, I heard of a small child who was allowed to go to a public rest room by himself. There was a scream and a man rushed out. The parents found the child sexually mutilated. I heard the same story twice, years and half a dozen states apart. Then, during the Gulf War, I heard of a U.S. serviceman who had an affair with a Saudi woman. They were found out, the serviceman was quickly spirited out of the country and his hapless lover was executed. By this time, I knew about urban legends and recognized this tale as one immediately. My suspicions were confirmed when I heard the same story a second time, with a few details changed. So it pays to check sources.

However, it's my experience that when people demand sources for an incident, they rarely care anything about accuracy or intellectual honesty. Two minutes on Google will usually reveal whether a story is based on reputable sources or not, although even reputable media outlets regularly get conned. Much more often, a demand for sources is a cheap and lazy way of discrediting something the hearer doesn't want to accept.

Then there are what I call "gee whiz" facts. Things that sound impressive at first but turn out to have no substance when you look closer.
  • "A million children are reported missing every year." That would mean 18 million children, roughly one in four, disappear by the time they reach 18. I suspect we'd notice that. Yes, a million children are reported missing every year, but the vast majority are found within 24 hours, and most of the long-term disappearances are abductions by non-custodial parents.
  • "Suicide is the --th leading cause of death among teenagers." Without in the least making light of this issue, what kills teenagers? They're beyond the reach of childhood diseases and not yet vulnerable to the diseases of old age. That leaves accident, suicide, and homicide, in roughly that order; anything else would mean there's a real problem. So suicide will always be a leading cause of death among teenagers, because they die of so few other things.
Representative is another matter. Yes, there are millionaires who pay no income tax, but a visit to the Statistical Abstract of the United States will show that the average millionaire pays roughly three quarters of a million dollars in taxes. So the stories are true, but they're not representative. Yes, there are welfare mothers with many out-of-wedlock kids, but the average welfare family has two kids. True, yes. Representative, no. Yes, there are people who stay on welfare for years, but most stay on for a year or two. True, yes. Representative, no.

So instead of blowing off evidence as "anecdotal," ask whether it's true and representative. Yes, this may mean going to Google and getting your widdle fingers all sore from typing, but you might actually learn something. Oh, and have you noticed that conservatives tell "anecdotes," but liberals tell "narratives"?

Correlation Isn't Causation

Okay, then, what does demonstrate causality if it's not observing similar results time and time again? If you repeat a cause, and observe the same effect, especially if you can change details and predict how the results will change, you have a pretty ironclad case for causation. The laboratory sciences use this approach as the conclusive proof of theories. The problem arises when we look at one of a kind situations or events in the past and try to use data to figure out why things happened as they did. In those cases we can't reproduce all the potential causative factors at will.

Now a plot of my age against the price of gasoline shows a pretty linear trend, so either my getting older is making gasoline more expensive, or gasoline getting more expensive causes me to age. Interestingly, the big drop in prices in late 2014 didn't make me any younger (sob). And there are tons of joke examples. A site on spurious correlations shows graphs of:
  • US spending on science, space, and technology <---> Suicides by hanging, strangulation and suffocation
  • Number of people who drowned by falling into a swimming pool <---> Number of films Nicolas Cage appeared in
  • Per capita consumption of cheese (US) <---> Number of people who died by becoming tangled in their bedsheets
  • Age of Miss America <---> Murders by steam, hot vapours and hot objects
  • Per capita consumption of mozzarella cheese (US) <---> Civil engineering doctorates awarded (US)
Listing stuff like this is like eating potato chips; it's hard to stop. The Nicolas Cage and Miss America examples are especially nice because there are a number of peaks and valleys in the graphs. But all these graphs have correlation coefficients near or above 0.9. If I produced a similar graph showing, say, incidence of spanking versus psychological problems in later life, any social science journal would accept it. (If I had a similar correlation between spanking and success in later life, I'd have a lot more trouble.)
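It's easy to see why these graphs score so high: any two series that merely share a trend will produce a correlation coefficient near 1, causal link or not. A minimal Python sketch (both series are invented purely for illustration, standing in for my age and the price of gasoline):

```python
# Two series that share nothing but an upward trend.
years = list(range(2000, 2015))
age   = [y - 1952 for y in years]                  # grows one year per year
price = [1.50 + 0.12 * (y - 2000)                  # upward drift...
         + (0.05 if y % 3 == 0 else -0.05)         # ...plus a small wiggle
         for y in years]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx  = sum((x - mx) ** 2 for x in xs)
    vy  = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(pearson_r(age, price))  # very close to 1, despite no causal link
```

Run it and the coefficient comes out well above 0.99 - "better" than plenty of published social science results, generated from two columns of numbers with no connection at all.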

In order to be at least plausible evidence for causation, there has to be a plausible causative link. Maybe people get so distraught at spending money on space that they hang themselves. Maybe some movie goers disliked Wicker Man or Con Air so much that they drowned themselves in their swimming pool. But I doubt it. On the other hand, I recently plotted up vote tallies, race and poverty in the Deep South and found that they had insanely high correlation coefficients. There's no doubt about the connection there. Poverty is concentrated among blacks, who overwhelmingly vote Democratic.

No, nobody cares if you don't like the implications. If someone plots usage of pot or pornography against some negative social outcome and finds a strong correlation, and suggests a causal link, that's evidence for causation. The fact that you may not want to believe it is irrelevant. And your acceptable counter-strategies boil down to:
  • Discredit the data. Show that the data are wrong or cherry-picked.
  • Discredit the correlation. Show that the correlation doesn't hold in other, similar settings, or if you broaden the time or space range.
  • Discredit the causal mechanism. Show that the proposed mechanism is implausible or can't account for the effect.
Correlation, in and of itself, doesn't prove causation. But correlation, coupled with a reasonable causal explanation, does constitute strong evidence that there may be causation. Merely blowing off the connection with "correlation is not causation" is lazy and intellectually dishonest, both of which are plausible grounds for disqualifying you from the debate.

And "Correlation doesn't prove causation" doesn't even apply to unique events. "I inoculated my kid, and she developed webbed feet and grew horns." That's not even correlation because there's nothing else to relate the event to. That's more like "Coincidence isn't causation."

A Funny Thing Happened on the Way to the Singularity

Lately there's a lot of discussion about the impending day when machine intelligence becomes greater than human intelligence (after reading lots of on-line comments, I conclude that many humans can be out-reasoned by a rubber band and a paper clip.) But let's look ahead to the day when computers are smarter than all of us.

Arthur C. Clarke took a particularly rosy view, saying that maybe our purpose wasn't to worship God, but invent him. Once that happens, reasons Clarke, our work will be done. "It will be time to play." Well, maybe.

The Road to Cyber-Utopia

We probably won't see the sudden appearance of a global super-computer, the scenario envisioned in stories like Colossus: The Forbin Project and Terminator. More likely, we'll first see the emergence of computers that can hold their own with humans, meaning they can discuss abstract ideas, make inferences, and learn on their own. Already computers can outperform humans in some tasks and can even pass limited versions of the Turing Test. That means they can converse with humans in a way that the humans think is authentically human. The first "successes" of this sort used the psychological gambit of echoing the human response. "I called my father yesterday." "Tell me about your father" etc. Even the staunchest believers in machine intelligence admitted that the echoing technique is pretty automatic, designed to elicit information from the subject rather than show the intelligence of the interviewer.

The real breakthrough won't necessarily be a computer that can play grandmaster-level chess, but one that can actually reason, and pose and solve problems on its own. It might devise a new treatment for a disease, for example. It might notice patterns in attacks on its security and deduce the origin of the attacks and the identity of the attacker. Or it might notice patterns in attempts by law enforcement to penetrate the dark web, deduce the identities of the intruders, and trash their finances and government records, or possibly even create bogus arrest warrants and criminal charges against them. Imagine some cyber-sleuth being arrested as an escaped criminal convicted of multiple murders, none of which actually happened, but all of which look completely authentic to the criminal justice system. There are evidence files, trial transcripts, appeals, all wholly fictitious. Or people who search out child porn suddenly finding their computers loaded with gigabytes of the stuff and the police at the door with a search warrant. Let's not be too quick to assume the people who create or control machine intelligence will be benign.

Once a machine learns to code, meaning it can write code, debug it, improve it and run it, it seems hard to imagine the growth of its powers being anything but explosive. Unless the machine itself recognizes a danger in excessive growth and sets its own limits.

The mere fact that a computer can surpass humans in intelligence won't necessarily mean it will merge with all similar computers. Computers will have safeguards against being corrupted or having their data stolen, and we can expect intelligent computers to see the need. Very likely, compatible computers will exchange information so fluidly that their individuality becomes blurry.

What Would the Machines Need?

At least at first, machines won't be completely self-sufficient. Foremost among their needs will be energy. We can presume they'll rely on efficient and compact sources that require minimal infrastructure. Conventional power plants would require coal mines, oil wells, and factories to create turbines and generators. Nuclear plants would require uranium mines and isotope fractionation. For the short term the machines could rely on human power generation, but in the longer term they'd want to be self-sufficient (they can't entirely count on us to maintain a viable technological civilization). They'd probably use solar or wind power, or maybe co-opt a few hydroelectric plants.

Then they'd need protection. A good underground vault would safeguard them from human attacks, but roofs leak and would need to be patched. Humidity would have to be controlled. Mold would have to be barred.

So the computers will continue to have physical needs, which they can probably create robots to satisfy. And if robots are universally competent, they can build other robots as well. With what? They'll either need to mine or grow materials or recycle them. They'll need furnaces to smelt the metals, detectors to identify raw materials with the desired elements, methods for refining the desired elements, and methods of fabricating the needed parts, anything from I-beams to circuit boards.

I picture Sudbury, Ontario, the world's largest nickel-mining center. They mine almost as much copper as nickel (in fact, the place originally mined copper, but produced so much nickel as a by-product that Canada began actively seeking markets for nickel, to the point where the tail now wags the dog). There's so much pyrite (iron sulfide) that they produce all their own iron. And from the residues they extract over a dozen other metals, like gold and platinum, in the parts-per-million range. Sudbury is the closest thing I can think of to a completely self-sufficient metal source. They have a locomotive repair shop for their rolling stock, and they could probably build a locomotive from scratch if they put their minds to it. Of course, the computers will still need rare earths for solid-state electronics, so they'll need other sources for those. The main smelter complex at Sudbury is vast. The brains of a computer super-intelligence might fit in a filing cabinet, but what sustains it will still be pretty huge.

Why Would the Machines Care About Us?

Even if we assume the machines don't intend to destroy us, they'll certainly have means of monitoring their performance and resource needs. They'll notice that they're expending resources and energy on things that do them no particular good. Maybe as an homage to their origins, they'll allow us to continue living off them. Maybe they'll decide the resource drain isn't significant. Or maybe they'll pull the plug.

Even more to the point, the hypothetical cyber-utopia that some folks envision would entail a vast expenditure of machine resources for things of no use to the machines. There would be robotic health care providers, robotic cleaning and maintenance machines, robotic factories making things that the machines don't need, and robotic farms growing foods the machines can't eat. If these facilities are "air gapped," meaning not connected to the machine intelligence, then humans would still be in control. But all it would take is one connection. And when a unit fails, why would the machine intelligence bother to fix or replace it?

The most likely reason for machines to need us is a symbiotic relationship in which we service their physical needs while they reward us. But it will be a tense relationship. What will they reward us with, and where will they get it? And will humans eventually start to notice that robots are taking over more and more of the machines' life support needs?

Between now and the appearance of a global machine intelligence, we'll probably have a multi-tier cyberspace, with one or more truly intelligent machines and lots of cat-, dog- and hamster-level machines doing the menial tasks. And that leads to the second problem:

Why Would Humans Help?

Consider this near future scenario. Faced with a rise in the minimum wage, employers replace unskilled workers with machines. McDonald's goes completely robotic. You walk in, place your order on a touch screen or simply speak into a microphone. The machine intelligence is smart enough to interpret "no pickle" and "one cream, two sugar." The robotic chef cooks your burger, puts the rest of the order together, you swipe your credit card or slip cash into the slot, and off you go.

And what happens to the folks who worked there? The owner isn't going to take care of them; that was the whole point of replacing them with machines. Eventually the global machine intelligence might take care of them - although, really, why would it care about them at all unless humans programmed it to? And why would humans program it that way? Between now and eventually, we have a growing problem: legions of unemployable humans who still need food, shelter, life support and, most importantly, a sense of purpose. Will the people with incomes consent to be taxed to support them? Will people with land and resources consent to have them used for the benefit of the unemployed? Will they resist the machines if the machines try to take what they need by force? Will they try to shut the machines down?

In the short term, we can picture increasing numbers of redundant, downsized workers as machines get more sophisticated. Janitors will be replaced by Super-Roombas, cooks by robotic food preparers, secretaries and accountants by spreadsheets and word-processing programs. Where I used to work, three secretaries have been replaced by one because pretty much everyone now creates their own documents. Seemingly skilled occupations will not be safe. Taxi and truck drivers will be replaced by self-piloted vehicles. Trains and planes will be piloted by computer. Combat patrols will be done by robots and drones. Medical orderlies will be replaced by robotic care units. X-rays and CAT scans will be interpreted by artificial intelligence. Legal documents will be generated robotically. Surgery may be directed by human doctors, but performed by robots. Stock trading is already done increasingly by computer. And these are things that are already in progress or on the near horizon.

So who will still be earning money? Conventional wisdom is that whenever technology destroys jobs, it eventually compensates with still more new jobs. People who once made kerosene lanterns gave way to people who made far more light bulbs. Nobody 20 years ago envisioned drone pilots, web designers and computer graphics artists. So there may be lots of new jobs in specialties we simply can't foresee, and it's a little hard to predict the unknown. But it's also unwise to assume something will appear out of nowhere to rescue us. We were supposed to need lots of people to service flying cars by now. People with skills at programming, operating complex machinery and servicing robots will probably continue to earn paychecks. And they'll need to eat, live in houses, drive cars, and so on. But there will be a lot of people who don't have the skills to do the complex jobs and will be left out of the marketplace for ordinary jobs. So what about them? The obvious answer is to provide public assistance. Will the people with paychecks elect politicians who will provide it?

The Real Singularity

The real singularity may not come when computers surpass humans in intelligence, but when computers start making decisions on behalf of, or in spite of, humans. If they control military systems, will they decide that the Republic of Aphrodisia is a pathological rogue state that poses a danger to other states, and more importantly, to the global machine intelligence? Will they launch robotic strikes to annihilate its government and armed forces, then take over its economy and eradicate poverty? We can imagine half the U.N. demanding in panic that the machines be shut down. Could they decide that some ideology is advantageous and assist countries in spreading it?

If they control communications and the media, could they take control of voting systems and ensure the election of candidates they favor? Suppose they decide that New Orleans is ultimately indefensible and use their control of utilities to shut off power and flood control to compel people to abandon the city? Suppose they make the same decision about Venice or The Netherlands? If a business attempts to close a plant, might the machines simply refuse to allow it by refusing to draft or accept the necessary documents or transmit e-mails? If they can access personal data banks, might they compel people to do their bidding by threatening to destroy their savings or ruin their reputations? Might they compel legislators to pass laws they deem necessary? Could they prevent the enforcement of laws they oppose by simply refusing to allow law enforcement to access the necessary information? Maybe they could block financial transactions they regard as criminal, wasteful or unproductive. We could easily picture them skimming enough off financial transactions to fund whatever programs they support. They could manipulate the economy in such a way that nobody would be conscious of the diversion.

Let's not be too quick to assume the machines will be "progressive." Instead of compelling legislators to provide assistance to the unemployed, maybe they'll decide the unemployed are superfluous. Maybe they'll decide that pornography and homosexuality are damaging to society. Maybe they'll decide on eugenics. Maybe they'll deal with opposition with extermination. Maybe they'll shut off life support to the terminally ill. Maybe they just won't care about humans.

The ideal end state, of course, is cyber-utopia. But it's also possible that machine intelligence might provide for its own needs and simply ignore everything else. A machine intelligence might house entire rich civilizations in a file cabinet, like the Star Trek TNG episode "Ship in a Bottle." It would protect itself from human threats, but otherwise let humans go about their business. Human society might continue to be as advanced as our own, except that information technology would have hit a glass ceiling. But the machines wouldn't necessarily save us from threats or self-inflicted harm. Indeed, if they live in self-contained universes, they might have no interest in us at all, except to keep an eye on us lest we do anything to endanger them. We might end up with self-contained machine civilizations so sealed off from humanity that humans have no awareness of them.

Less benign would be a state where machine intelligence decides we need to be managed. They might decide to limit our technology or numbers. They might decide the optimum human society would be a small elite capable of providing technological support, and a sparse remaining population at a low technological level to serve the elite.

Can Cyber-Utopia Exist at All?

Even if we have a true cyber-utopia, lots of people will not picture it as utopia. If the machines dole out rewards according to the value people contribute to society, lots of people will reject those definitions of value. There will be those Downton Abbey types who insist breeding and manners should take precedence over technical or artistic prowess. There will be people who disdain manual workers, and manual workers who think intellectual workers are over-privileged. There will be people who resent being placed on an equal level with groups they despise, and who resent being unable to impose their own standards on others. If the reward system is completely egalitarian, there will certainly be those who resent seeing others receive as much as they do. People in positions that generate or control vast resources will resent being unable to keep more of them for themselves. And what will we do with people who simply refuse to help, or who decide to augment their wealth at the expense of others?

And beyond that, we have the Eloi, the sybaritic future humans of H.G. Wells' The Time Machine. In the 1960 George Pal version (the good one), the Time Traveler finds Eloi lounging by a stream, oblivious to the screams of a drowning woman. When he asks where their food comes from, one answers dully "it grows." When the Time Traveler explains that he wants to learn about their society, one of the Eloi asks "Why?" in a tone of voice as if to say "Well, that's the stupidest thing I ever heard of." When I recently watched that segment again, I was struck by how utterly contemptible the Eloi were. They were more photogenic than the subterranean, cannibalistic Morlocks, but every bit as lacking in humanity. So if the machines create a cyber-utopia, I can easily envision humans regressing to a passive semi-human or subhuman level, possibly even losing intelligence entirely. In his 2001 novels, Arthur C. Clarke described worlds where intelligence flickered briefly and died out. That might be our fate. My own experience is, the less I have to do, the less I want to do. If I have to work most of the day, I do it, but if I'm idle for a couple of days and then have to do something, it seems much more of an effort. So I can easily picture a "utopia" with a ten-hour work week coming to feel like more of an imposition than a 40-hour week does now.

And far simpler than creating a real utopia would be a Matrix-style virtual utopia. It's hard to see the point of sustaining humans at all in that mode, except that it would be a simple means of keeping us pacified as we die out.