Wednesday, March 29, 2006

Back to the Future! (Part 4)

THE CALF PATH -- Sam Walter Foss (1858 - 1911)

One day, through the primeval wood,
A calf walked home, as good calves should;
But made a trail all bent askew,
A crooked trail, as all calves do.

Since then three hundred years have fled,
And, I infer, the calf is dead.
But still he left behind his trail,
And thereby hangs my moral tale.

The trail was taken up next day
By a lone dog that passed that way;
And then a wise bellwether sheep
Pursued the trail o’er vale and steep,
And drew the flock behind him, too,
As good bellwethers always do.

And from that day, o’er hill and glade,
Through those old woods a path was made,
And many men wound in and out,
And dodged and turned and bent about,
And uttered words of righteous wrath
Because ’twas such a crooked path;
But still they followed — do not laugh —
The first migrations of that calf,
And through this winding wood-way stalked
Because he wobbled when he walked.

This forest path became a lane,
That bent, and turned, and turned again.
This crooked lane became a road,
Where many a poor horse with his load
Toiled on beneath the burning sun,
And traveled some three miles in one.
And thus a century and a half
They trod the footsteps of that calf.

The years passed on in swiftness fleet.
The road became a village street,
And this, before men were aware,
A city’s crowded thoroughfare,
And soon the central street was this
Of a renowned metropolis;
And men two centuries and a half
Trod in the footsteps of that calf.

Each day a hundred thousand rout
Followed that zigzag calf about,
And o’er his crooked journey went
The traffic of a continent.
A hundred thousand men were led
By one calf near three centuries dead.
They follow still his crooked way,
And lose one hundred years a day,
For thus such reverence is lent
To well-established precedent.

A moral lesson this might teach
Were I ordained and called to preach;
For men are prone to go it blind
Along the calf-paths of the mind,
And work away from sun to sun
To do what other men have done.
They follow in the beaten track,
And out and in, and forth and back,
And still their devious course pursue,
To keep the path that others do.

They keep the path a sacred groove,
Along which all their lives they move;
But how the wise old wood-gods laugh,
Who saw the first primeval calf!
Ah, many things this tale might teach —
But I am not ordained to preach.


This is the final installment in what has become a four-part series(!) The first three installments can be read here:

Part 1
Part 2
Part 3


The Communication Age

In my last post I talked about the Communication Age and how the internet represents the current evolution in a long string of technological advances in the realm of communications.

In less than two decades, the internet has grown into a technology that we now take for granted. But is it obvious exactly what the internet is? How would I explain the internet to 11-year-old Jim?

It might pose a challenge. At first glance, it seems simple enough: most technologies are defined by how we use them ... or, more to the point, what we use them for. The problem is, we do so many things with the internet today that such a generic definition is hard to come up with! We might call it an information tool, or a connectivity tool. But these terms are vague and really not very useful.

Of course, you can define the technologies that the internet is made up of. The term internet technically refers to the physical interconnection of many thousands of individual networks around the world. People often use the term World Wide Web interchangeably when referring to the internet; but they are not the same. The WWW refers specifically to the global collection of interconnected documents; i.e. "information", which can be accessed via the internet. But custom sees us continually using the term Internet or just "the Net" to refer to the entire pantheon of technologies, tools, and concepts associated with today's modern digital connectivity culture. To avoid any confusion, I will continue using the word in this vague way.

In reality, the internet revolution was made possible by the confluence of three factors: affordable computers, affordable high-speed connectivity, and fast, affordable data storage. These elements didn't all come together at once, but without them, the internet as we know it, in all of its versatility, would not be possible. It could easily have remained an elite, restricted online environment, usable only by large corporations, government organizations, and universities. But it was its accessibility to the general public that allowed it to explode into a multi-faceted communications tool, driven as much by the creativity of the wide world of webusers in general as by the immense profit potential the medium promises to those enterprising companies of every industry equipped to capitalize on it.

Defining the term "internet" technically doesn't tell you much about what it can actually do for you though. It is somewhat akin to saying that a "cake" is made of flour, eggs, and sugar: it doesn't tell you anything about what a cake really is; how it tastes ... and when and why we eat it.

The "Net" is difficult to define exactly because it unites a variety of media in a single conduit: perhaps it can be best described as a mixture of the telephone, television, radio, the magazine, and the catalog all-in-one (and all of which are currently under threat of eventual replacement by the internet!). And that's just the start of it. It can be (and generally is!) exploited commercially, but the key to the internet's power (which may also be its greatest weakness ... I'll come to this point in a minute) is that anyone can share whatever he or she wants to. We can share our knowledge, our opinions, our beliefs... we can share our art, our interests, our thoughts. That doesn't mean that everyone has to be a generator of content -- but the internet is not a passive medium, and it is that aspect that differentiates it from the one-way "network television" model of communication.


The Internet and Tribal Culture


I know that "the internet" is a huge subject ... and I could go on all day about it. But there is one aspect in particular which interests me greatly, and which is related to the topic that I started in the previous posts.

I have belaboured the point about the internet being a communication channel for a specific reason: how we think of this phenomenon shapes our expectations of what the technology can actually do for us, and those expectations will determine the directions it may take our society. Throughout history, new technologies have often led to dramatic cultural transformations over time: and this is particularly true of communication technologies. The internet has tremendous transformative potential because it is simultaneously a "personal" communication device and the most effective and democratic mass communications tool ever developed. Not every cultural transformation is painless, however.

In spite of its apparent success, the internet has raised some new fears that spring from its darker side: a whole slew of technological parasites have arisen that may threaten the technology and its utility: viruses, spam, spyware, trojans, phishers... the internet has become a threat to our privacy. As we come to depend on our computers to store everything we know (rather than boxes in the basement), we must fear the consequences of that dependency -- the possibility of losing that data, or, worse, of someone stealing it with malicious intent. As we come to depend on the internet for everything, including access to our bank accounts, the security of this medium is paramount. Our credit card numbers can travel the world over: can we trust a particular internet merchant just because he has a flashy, official-looking website?

Besides this aspect of security, another spectre haunts the internet: all of the effort we invest to protect our privacy from the mal-intentioned may also serve to protect those who seek to prey on the innocent: the child molesters, terrorists, hate-mongers, extremists. Online, people can assume any identity they wish: mild-mannered office worker by day, aggressive sexual predator in the virtual world. Although much of this behavior may be just harmless fantasy role-playing, the possibility of encountering a large number of people who share "interests" normally shunned by society may provide a false sense of legitimacy that ends up reinforcing and strengthening destructive behavior.

This is a negative consequence of what is apparently one of the principal characteristics of today's internet. The connected personal computer is not really an information tool: it is a communication tool. Culture itself is built on communications between people; the sharing of ideas, technologies, language, experiences. In the past, geographical, linguistic, and political isolation led groups of people to develop their own cultures, which sometimes turned out very different from a similar group of people just across the river. The internet strips away those borders, and provides the potential for a global culture that transcends physical boundaries.

To a large extent, we see this happening: but there is also another very strong trend which can be cast in a positive or negative light, depending on one's point of view: the formation and propagation of internet sub-cultures. The internet is a powerful tool for communication: never before has it been so easy to spread a message to thousands or even millions of people. And it has never been so easy to find other people who think like you do, who share your beliefs, or who can come to share them. People who agree with each other tend to gravitate towards one another in the virtual world, often reinforcing their own set of beliefs while spreading them to others. These virtual special interest groups can be referred to as "internet tribes".

This trend explains the plethora of "social networking" tools that have sprouted like weeds across the internet: what began decades ago with "usenet" has branched into dozens of new ways to meet and connect with people. Nowhere is this more evident than with today's teenagers, who arguably live as much in a virtual world as in the real one: internet chat rooms, forums, instant messengers, blogs, flogs, MySpace, Friendster, Orkut, webcams, MMORPGs ... it's not unusual for a teen these days to count the members of their social circles in the dozens or even hundreds, many of whom they may never even have met face to face.

Social networking tools can be incredibly useful, allowing us to find and build relationships with people who share our interests, and to create our own "tribes", without regard to where we may be located in the world. But of course, this also raises serious questions about the nature of the relationships formed, when the people with whom you are socializing may not even be who they claim to be. Or, even if they are, you don't know their background, their history, who their family is ... reasonable items of worry for parents who hope to see their children safely through to adulthood.


Another more subtle item of concern is the effect that the formation of tribal sub-cultures can have on our belief systems. This may be an innocuous occurrence when a social group forms around, for example, "Java Programming" (C# and .NET suck!); but many groups are based on political, religious, or social ideologies. Since it is usually in our nature to seek out people who think like ourselves, we may end up encouraged to "adopt-an-ideology", polarizing our own views, or taking on "labels" which may encompass ideas and beliefs we did not initially possess. Since we can, in general, pick and choose what we read and who we communicate with, we have a natural tendency to read things written by people we already agree with, excluding those points of view which we tend to reject. When we affiliate ourselves with a particular group, we often aggregate to our own the related ideas or opinions held by those in the group (this is essentially the definition of "ideology"). This closed-loop system may end up reinforcing our beliefs to the point of radicalization. So, incredibly, even though we supposedly have more access to "information" than ever before in history, we still find ourselves dividing up into polarized groups, drawing lines in the sands of the internet, and squaring off against each other just like we always did. Except today, the "enemy" isn't necessarily on the other side of the river: he may live on the other side of your street!

There is nothing new about this behavior ... "culture-forming" is apparently part of human nature, and dividing ourselves into groups is part of this process. Usually this is a positive thing -- an adaptation to some reality which affects a group of people; but it is basically the same process that created Nazi Germany, for example. In the past, charismatic or influential leaders, large numbers of missionaries, or control over the media were necessary to reach a vast audience. The internet simply makes this ancient behavior much easier, and far more accessible to "normal" people.

I am not saying that there is anything wrong with the existence of opposing viewpoints ... actually, debate is healthy and desirable. But often the very existence of a given tribe is founded on some fundamental ideology which cannot be called into question without resulting in the dissolution of the group itself! Members of a group called "I hate Microsoft" might debate the pros and cons of their respective operating systems with a group called "Microsoft Rules". Some individual members might actually be convinced to switch sides; but like a living organism, the instinct for self-preservation dictates that the group itself must defend its own ideology, even against a logical argument, or suffer death. Through natural selection, the moderate members are weeded out, until only the radicals are left to toe the party line. Like an audio feedback loop within an amplifier, the group's ideas are fed back into themselves and amplified repeatedly until nothing comes out but a high-pitched whine!

The internet is a new deal for us, so it may take a while for us, as a society, to get used to it. False information, lies, and distortion can travel just as quickly over the internet as truth can; in fact, it often travels faster because radicals and extremists use inflammatory language, designed to provoke strong emotions and appeal to our basest instincts. What I am contending here is that one of the unexpected consequences of the internet is the propagation or amplification of new and existing extremist ideologies. If I were going to do a doctoral dissertation in anthropology now, this would probably be my subject.


Freedom and the Internet

Abuse of the internet is a complex problem, but use of the internet at all is a threat to oppressive governments around the world. In such societies, where free public access to information can be a menace to the established social order, those in charge face the question of how to bar access to the internet and still build a technologically modern culture. This may seem an unlikely danger to those of us living in western democracies; but even here debate still rages about how free our access to information should be, and just how much privacy individual citizens should be allowed. It may not take much more than a few high-profile incidents to provide our governments with the public backing they need to further invade our privacy and restrict our access to knowledge, in the name of our own security. As we become ever more reliant on this technology, government control of the internet or its progeny may make George Orwell's vision of the world entirely feasible.

It is in this war that one of the major battles is currently being waged, as large corporations, entire industries, and even private individuals seek to maintain control over their intellectual property. The internet encourages and facilitates the digital interchange and sharing of data, and that includes art, music, film, books, software .... Many forward-thinking content producers are concluding that the old ways are doomed to failure, and are experimenting with alternative models of commercialization in order to be fairly compensated for their efforts. But battle lines are drawn, and, one way or another, things may get nastier before they get better.


Fear and the Future


It's not surprising, then, that the internet occasionally inspires a certain degree of fear. It may even become tempting, at some point, to begin to think in terms of Daedalus & Icarus: the internet is our Godzilla, or our Frankenstein's monster. The internet has grown far beyond its original conception, has evolved creatively and dynamically into -- something new, and -- just maybe -- sometimes, something dangerous. Can this be the price we pay for our desire for omnipotence?

If there is one thing that hasn't changed when it comes to foreseeing the future, it is fear of the future. Both change and the unknown generally inspire fear, and the future is the ultimate unknown, no matter how hard we try to predict it. Today, as in the past, we can find any number of potential reasons to fear the future. In fact, when I was 11 years old, there was always one possible scenario that would prevent me from living to see robots and our expansion through space in the year 2000: and that was the seemingly inevitable destruction of the human race in a global thermonuclear war. If you are under 25-30 years old, you probably won't even know what I'm talking about. But the fear was genuine, and the anticipation of Mutually Assured Destruction, despite its obvious madness, fostered nightmares throughout my childhood.

Today the phantom of nuclear destruction seems more distant, but other anxieties have taken its place: global warming, genetic engineering, cancer causing substances in virtually everything we eat ... terrorists with access to Weapons of Mass Destruction.

Most people haven't yet come to fear the internet enough to reject its use; probably because it is too incredibly useful. The vague potential of "danger" isn't enough to outweigh that utility. Cars are dangerous too, but they are so useful for getting around that very few people would consider giving them up. The changes to our culture wrought by the internet will likely be as profound and long-lasting as those brought on by the automobile itself, or television. Maybe more so.

The internet and the home computer, along with the mobile phone, the iPod, and a thousand other wearable gizmos, have insidiously infiltrated and installed themselves into our very culture. This seems to be an accelerating trend: and there is no stopping it. It is changing the way we buy and sell. It is changing the way we think about "intellectual property". It is changing the way we socialize, and the way we search for "information". But, perhaps even more significant, it is changing the way we define "information".


The Nature of Information

According to John Brockman, of The Edge (a non-profit online magazine that promotes "inquiry into and discussion of intellectual, philosophical, artistic, and literary issues" as well as working "for the intellectual and social achievement of society"):

"We are in the age of 'searchculture', in which Google and other search engines are leading us into a future rich with an abundance of correct answers along with an accompanying naïve sense of certainty. In the future, we will be able to answer the question, but will we be bright enough to ask it? "

He is voicing what seems to be a valid fear: just as calculators and computers have virtually eliminated our need (and our capacity!) to work out complex mathematical calculations longhand, might not instantaneous access to answers to virtually any question, without our even having to think, eventually erode our capacity for rational thinking, logic, and reasoning? If this is a legitimate concern, then I say that the problem is even more serious than Brockman alludes to: at least we can trust the answers that a calculator gives us, since they are the result of computation; unless the software or the electronic circuit fails, it will never make a mistake. Google gives us no such guarantees about the nature or quality of the information it returns to us. Even its much talked about "page rank" algorithm has nothing to do with the trustworthiness of the information; pages are supposedly ranked more by popularity than any other factor.
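To make that "popularity" point concrete, here is a toy sketch of the idea behind link-based ranking. Google's real system is far more elaborate, and the four-page "web" below is entirely made up for illustration; but the core of the original PageRank idea is just this: a page scores highly when many pages (especially other high-scoring pages) link to it. Nothing in the arithmetic ever examines whether a page is true.

```python
# A toy version of link-based ranking (the idea behind PageRank):
# a page's score depends on how many pages link to it and how
# important those linking pages are -- popularity, not truthfulness.
# The link graph below is invented for illustration.
links = {
    "A": ["B", "C"],   # page A links to pages B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with equal scores
damping = 0.85  # the damping factor used in the original PageRank paper

for _ in range(50):  # power iteration: repeat until the scores settle
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new[target] += share
    rank = new

# C collects the most (and best-connected) inbound links, so it wins --
# but the computation says nothing about whether C's content is reliable.
print(max(rank, key=rank.get))  # → C
```

The scores always sum to 1, so they behave like a probability of where a random surfer ends up; "most visited" is simply not the same thing as "most trustworthy", which is exactly the worry in the quote above.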

The internet wasn't designed from the ground up to be all that it is today. Indeed, in theory, it is something that should never work ... it only works in practice! What use is an "information tool" that can bring you wrong information just as easily as right?

The key to answering to that question is the recognition that there is no such thing as "true" information!

"Information" ... as a concept ... can encompass every kind of knowledge, including opinions, beliefs, and ideas, along with a nebulous concept called "facts". Now I am going to wax philosophical and make what may seem to some a controversial affirmation:

All that can really exist in our consciousness -- all that we classify as "information" -- is the result of two kinds of actions: observation and communication. All that we actually know -- what we classify as "knowledge" -- is the result of our interpretation of this information: the deductive or inductive reasoning we apply to that information to arrive at our own conclusions about its meaning.

"Observations" are those phenomena which we experience with our own senses; but we would do well to remember that even our own senses can be tricked, and how we understand all that we perceive still depends mostly on learned information, transmitted to us through communication. "Communication", then, is the dissemination of "knowledge" between people, which in turn means the transmission of our observations, and that which has been "communicated" to us, and of our interpretations of these things.

Never before has it been more obvious than with the internet that all "knowledge" is relative: a student researching a given subject for a term paper may easily encounter a wide variety of different points of view, or beliefs, about that subject. He may also encounter contradictory "facts": after all, anyone can make (up) a web page. You can try to limit your research to "trustworthy" sources, but occasional contradictions are inevitable, and the access that we now have to so many sources of "information" should make it obvious that, no matter where we find our information, we are simply, only, "communicating" with other people (even if my source is someone long dead!).

But there is a silver lining to all of this, which is the point that I hoped to eventually arrive at with this rambling discourse. Throughout almost all of human history, mankind has had to rely on a very small number of sources of information for all of our knowledge; usually this information was passed on verbally within the tribe or community in which we lived. Usually this set of "knowledge" was adapted to the particular reality of each group of people, and worked well for them. Occasionally, upheavals would occur, when new observations of nature or contact with other groups called our accepted beliefs into question.

Then, suddenly, someone came up with a way to "record" our "knowledge" symbolically, first by drawing pictures and then, just a little later, by scratching abstract symbols which represent the sounds we make. If all of human history were compressed into a year, then this "revolution" would have occurred only a week or so ago. Suddenly it was possible to truly "accumulate" knowledge, pass it on to future generations and even, eventually, share it cross culturally.

In this cosmological blink-of-an-eye, humans changed all the rules: new technologies, new ways to raise and grow food, new machines of warfare. But for the first few thousand years of written history, this fundamental, important ability of our species was restricted to an elite few: even the capacity to read was not widespread. Most people were condemned to be followers: when your sources of information are limited, you are at the mercy of those leaders who tell you what you should or must believe. Those who could read were still lucky to have access to one or two books (the Bible or the Koran probably being one of them ...). Still, it was enough to get us where we are today.

On this one-year timeline of human history, the invention of the printing press was just over a day ago. The television, 2.5 hours ago, and the internet: just about an hour ago. Each of these revolutions brought more and more sources of information into contact with the average person, until, finally, with the internet, we reach the point where we are today.
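The one-year timeline above is easy to reproduce. Assuming roughly 200,000 years of human history (an assumption chosen for illustration; the post doesn't state its baseline), along with approximate dates for writing (~5,000 years ago), the printing press (~556 years before 2006), television (~56 years), and the internet (~23 years), a few lines of arithmetic recover the post's figures:

```python
# Compress an assumed ~200,000 years of human history into one calendar
# year, and see how recently each communication technology appeared.
# Both the total span and the "years ago" values are rough assumptions.
HISTORY_YEARS = 200_000
HOURS_PER_YEAR = 365 * 24

def hours_ago(years_ago):
    """Map 'years before today' onto hours on the compressed timeline."""
    return years_ago / HISTORY_YEARS * HOURS_PER_YEAR

for name, years in [("writing", 5_000), ("printing press", 556),
                    ("television", 56), ("internet", 23)]:
    h = hours_ago(years)
    if h >= 24:
        print(f"{name}: about {h / 24:.1f} days ago")
    else:
        print(f"{name}: about {h:.1f} hours ago")
```

With these assumptions, writing lands about nine days back, the printing press about a day, television about 2.5 hours, and the internet about an hour -- in line with the figures in the paragraphs above.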

In the poem presented at the beginning of this post ("The Calf Path"), Sam Foss creates a little parable which decries mankind's tendency to follow in the footsteps of those who have gone before, rather than stepping off the beaten path and maybe finding a better way to do something. A copy of this poem was first given to me by Dad when I was in high school: it exemplified one of his "pet philosophies", and it (and he) has had a tremendous impact on the way I think. The idea of the calf path became something of a family philosophy. It can be summed up as: don't take anything for granted. Don't believe everything you hear (or read). Always look for a better way to solve a problem. Dad's skepticism was often confused for stubbornness (or maybe that was the other way around!), but nobody questioned his capacity for creativity and innovation.

My take on the calf path is more to the point of this blog post: in the poem, Sam Foss says:

"... For Men are prone to go it blind, along the calf paths of the mind" .

To me, this means that it is easier to just follow a path than to understand why the path was made the way it was.

Of course, what makes human beings the most successful animal on the face of this earth is that we excel both at innovation, and at transmitting the results of that innovation to each other ... as well as down to successive generations. History has been built both by leaders and by followers; but the great leaps forward have been taken by those who dared to step off the path. I am not arguing in favor of blind skepticism, which is even worse than blindly following others; humanity wouldn't have made it very far at all if every step forward started at the beginning. Rather, I am simply presenting the case for my belief that the next generation of intellectuals and innovators will be those who seek to understand why a path was made the way it was, as opposed to just choosing a path and following blindly. Usually, when you are on a given path, you can't really see if the route you are following is the correct one. You need to step off the path and take a look at it from afar so you can evaluate it, possibly compare it with other potential paths. Note that this is not the same thing as saying that everyone should make their own paths!

This brings me back to the first post of this sequence, and the picture of the supposed "home computer" of 1954. This "future" in which we now live is one in which we have at our disposal the potential to communicate with a large number of people, and the possibility of sharing more and more "knowledge" and information than ever before in history. Much of this information will seemingly be less accurate (technically speaking, or scientifically speaking) than what we could only have found in the library or the newspaper just a few years ago: and some of it will be outright lies. Still, many people whom I know (and many of them I have "met" in online tribes!) have adapted with ease to this paradigm of internet knowledge. Rather than locking themselves into radical extremist "tribes", they recognize the importance of "critical thinking" with respect to virtually everything we observe or receive via any communication, whether the source be the internet, the local TV station, the newspaper, or a library book.

The optimist in me believes that it is not specifically the access to other people's "knowledge" -- chewed up, processed and spit back out at us -- which drives our capacity for innovation (this is just the calf path of the mind). Rather, it is the free exchange of ideas which stimulates creativity, produces change, and makes great things happen. My contention is that we find ourselves on the edge of a critical moment in human history, that possibly will make all of the great leaps of the past look like trivial baby steps in comparison. If history itself is any indication, the next few decades will see more radical and deeper changes in human culture and technology than we have ever seen in all of human history. People may fear and resist some of this change, but "information" is an inexorable tide which, once accessible to all, no levee will be able to hold back.

Hmmm ... now I wonder what the future will be like when my kids are 40 years old?!

Wednesday, March 22, 2006

Back to the Future! (Part 3)



This is the third installment in what should have been a two-part series! The first two installments can be read here:

Part 1
Part 2

It turns out this will be a four-part series! I promise! No more!

****

It has become rather common to refer to our era as the "Information Age". However, careful consideration of the technological and cultural trends that make up our society leads me to conclude that a more appropriate label would be the "Communication Age". I'm certainly not the first to suggest this: the word information implies somehow that we are more informed, or that we have more knowledge than in previous ages. I'm not sure that is true. We do, however, have faster and more agile access to the collective "knowledge" (ideas, beliefs, opinions) of the world's population, and that is the key point that I want to discuss.

Instead of robots, we have the home computer. One could reasonably claim that it is exactly this technology that characterizes our "age": the possession of personal computers by individuals, whether sitting on our desktops or carried about with us like "notebooks". I suppose it is one of the most visibly apparent aspects of modern life. Actually, one could argue that a home computer is basically just a physically handicapped robot. "Personal" computers are our very own electronic assistants, except that today's computers don't get around much on their own, and, although very powerful when compared with those of even a few years ago, offer little in the way of artificial intelligence.

At least computers don't fill up a whole room any more; in fact, they are getting smaller and smaller. Those who are more technologically savvy will realize that the embedding of computers in virtually everything is a far more significant trend than just having access to our own computers. Let's face it: computers are not our servants, like we once imagined robots would become. They don't actually do anything by themselves. They are really nothing more than sophisticated tools (or, in some cases, toys), much like our televisions, domestic appliances, telephones, cars, and the uncountable gadgets which we have come to depend upon on a daily basis -- and most of which are powered by some kind of embedded microprocessor control. In most cases, the computer is not an end in itself; the device in which the computer is embedded is the end.

Mobile phones are a good example: this is a technology that has been long predicted, and the reality perhaps surpasses the imagination of most science fiction writers. I'm sure 11-year-old Jim would be impressed with the "Star Trek" communicators that virtually everyone carries around all the time. We are in touch constantly, whether we want to be or not. You can build a cellphone without necessarily having to embed a computer in it; but these computers are what makes it possible for today's cellphones to be chock-full of cool features: they're beginning to look more like McCoy's tricorder than a mere communicator, with built-in cameras, video, GPS, and web browsing. Pretty soon they will be monitoring our blood sugar and heart rate, and, who knows what else?

But back to computers. Computers depend on software to do anything. In general, software still has to be written by someone: we don't yet really have computers that can be taught, or can figure out how to do things on their own. Of course, well designed software can provide the illusion that the computer is doing these things. Certainly a computer can be very good at solving problems if provided with an algorithm and data: it is here where the computer excels, having the speed and patience to iterate through millions of possible solutions until it finds the one that fits. They are also great for storing information.

Although we often speak of computers in the same breath as "information technology", computers don't actually have information either: at least, not unless someone has digitized and stored that information. With the exponentially increasing size of hard drives, computers can now store vast amounts of data, as well as provide mechanisms for quick and easy retrieval of that data. Much of the growth of the computer industry during the 1970's and 80's springs directly from the dependency of modern businesses on this aspect of computers: information storage and retrieval.

But, beyond businesses, most of us don't have that much information to store; at least, not that much more than we had a few decades ago, although the information format has changed. These days, we may fill our computer's storage medium with images, music, videos ... but when I go down in my grandmother's basement, I find boxes of old pictures, videos, super 8 films, LP's, cassette tapes (OK ... I admit it ... I still have boxes of that stuff in my garage!). It's all stored in a far bulkier form than the digitized info on my hard drive, but it's still basically the same kind of stuff that I want to keep for my kids and grandkids. I wonder how many new technologies will arise ... how many times I will have to copy and transform the digital data that I want to store forever before my grandchildren get around to looking at it. Will they be able to read my CD's? My DVD's? My jpg's, mpeg's, divx, mp3's, avi's, bmp's, gif's, htm's, and the rest? Or will I have to copy and convert it all to the "new formats" and "new media" every couple of years? How long until CD's and DVD's go the way of floppy disks? (How many people still have 5 1/4" floppies stored somewhere???)

This is a serious question, although it has nothing to do with "the point" of this essay (if it can be said to have one point!). Years after I am dead, if my great-grandkids dig a box of old CD's out of my attic, will they still be readable? One of the things on my "to do" list is to transform all of my "legacy data" -- my 8mm and VHS videos -- into DVD format, which ought to buy me a couple of years. But I guess maybe I am going to have to print out all of my digital pictures after all ...
:-(

So why do we call this the Information Age? Well ... it's probably because of the technology which has become so ubiquitous that it is practically taken for granted these days -- but has also come to be considered indispensable for a large segment of the world's population: the Internet. The internet has placed an immense volume of "information" at our fingertips. With the click of a mouse, I can read the newspaper of virtually any city in the world, or find information about practically any subject I wish to. The internet has been called the information superhighway. Political interests aside, it's not a bad analogy: we sometimes think of the internet as a vast data bank, but it's not a repository of information -- it's a communications channel.

The internet, more than any other factor, is what put a computer on the desktop of the great majority of today's computer users in the world: sure, a lot of people need a word processor and a spreadsheet, but not everyone. Like electricity and the telephone, to participate in today's global technoculture, everyone needs the internet. At least we think we do.

It's ironic that, in science fiction, computers and robots have usually been foreseen as entities which do things for us. The reality has been somewhat different: like the mobile phone, computers are mostly just tools for communication -- sharing, formatting, and storing information and ideas.

I have a WiFi access point in my home that covers my entire yard. I carry my Sony VAIO around, generally looking for a peaceful spot to work (somehow, kids and a dog seem to gravitate to wherever it is I happen to be located at the moment!). Almost all of what I am doing with the computer at any given moment can be classified as communication: either as a passive recipient of information that someone has produced for my benefit, or the benefit of others; or as an active participant in conversation with one or more other people (e-mail, instant messaging, Skype, etc.). At other times (like right now) I am an active producer of content, seeking to communicate whatever crackpot thought I have at the moment, to any and all who possess the patience to read it!

I've mentioned the concept of the "communication age" -- but I don't mean to imply that this is something new. Actually, the internet is the latest manifestation of a process that began in the 19th century, with the first "teles": the telegraph -- then the telephone -- radio (ok, that's not a tele!), and finally the television. The Internet is just the first step in the natural evolution of all of these trends.


****

(to be continued)

Thursday, March 16, 2006

Back to the Future! (Part 2)



This is Part 2 of what will (hopefully) be no more than a 3 part series.

Part 1 can be read here.

****

So here we are, in the year 2006. And I'm no astronaut. I try to imagine what the 11-year-old boy who used to lie awake at night, nose stuck in a paperback, dreaming of the future, would think about the world today. I also wonder what he would think about the life I have made for him as an adult... neither astronaut, nor astrophysicist, nor even scientist of any kind, really. I'm sure he would be devastated... but that's too big a subject for this blog entry, so I won't go there right now. The question I want to ask myself is this: What if I could roll back 30 years of my life, and see the world today through the eyes of an imaginative young introvert of the mid-1970's? What would 11-year-old Jim think? Has the future lived up to the expectations created by generations of science fiction authors?

After the surprise that people still wear blue jeans (what about those funky shiny uniforms that everyone should be using???), his first disappointment would surely be the absence of flying cars (think Jetsons). In fact, although the technology that drives automobiles underneath the hood has evolved considerably, outwardly, and to the layperson, cars are basically the same as they always were. The technology of transportation, in general, is virtually unchanged: you can't really get anywhere faster or even in more comfort than you could have 30 years ago. And, although you may use less fuel, you will pay a lot more for it. Think about it: even air travel hasn't changed in any fundamental way, except perhaps to have traded meals for an inflight entertainment center. Thirty years ago, supersonic passenger flight was in its infancy. Surely 30 years would have been enough to consecrate this technology! Bigger and faster Concordes should have become commonplace by the turn of the century. Actually, if I could take my kids back a couple of decades and show them a Concorde, they would probably think it was quite futuristic!

Unfortunately, any major advance in the area of travel will naturally depend on the discovery of a practical alternative source of vast amounts of energy. As long as we are dependent on fossil fuels, I'm afraid our cars are grounded!

Then there is spaceflight. Forget interstellar travel: I don't think that even as a kid I had any illusions about boldly going where no man has gone before in my lifetime. Still, in 1975, Skylab was in operation and the space shuttle was on the drawing boards. Man had walked on the moon several times. By the year 2000, spacecraft should have been flitting back and forth between our orbital space stations and lunar and Martian outposts almost daily. Intrepid explorers would already be heading out to explore the outskirts of the solar system: the moons of Jupiter, the rings of Saturn, and beyond. We do have the International Space Station; and the ISS is certainly bigger than Skylab, but it hardly matches the gigantic rotating ring stations with artificial gravity that so often appeared in science fiction (such as Kubrick's "2001: A Space Odyssey") as well as my highly coveted Man and Space book from the Time-Life Science Library series. I think, even if I had become an astronaut, the eleven-year-old me would be disappointed with the stagnation of the space program in the 21st century.

Again, this is just a question of economics. Until space travel can offer financial returns that far outweigh the costs and risks involved, it will remain basically a government activity. And as long as it's a government activity, funding for space travel will depend on the whims of politicians and fickle public opinion. Not that I necessarily think that the latest private endeavours are a hopeless case -- space flight will truly only begin to become efficient when it becomes technologically feasible for private enterprise to take it over. I just don't think that that's around the corner: at least not unless someone comes up with a cheap, powerful energy source.


But there is one thing that almost every science fiction story predicted, when speculating about the future: robots! One thing I never managed to get out of my head was the Lost in Space Robot B-9: forget stupid-looking bipedal humanoid or "android" devices: a cool robot had to run on tracks and have accordion-tube arms. A bubble head and flashing lights on its front panel were also reasonable attributes. Not until the appearance of R2-D2 did I modify my criteria for cool robot technology: a tiny tin can on wheels can be pretty neat too!

So-- we are now smack in the middle of the so-called "information age" -- but where are the robots ?!?

****

Of course, we do have robots; they are even commonplace, on the factory floors of probably thousands of companies around the world -- welding, cutting, painting, building our cars, and performing innumerable repetitive tasks at speeds and levels of precision that would be impossible for human workers. But they don't look or act much like the robots in science fiction stories and movies.

In science fiction, robots are typically portrayed as being intelligent servants of mankind: repositories of information; cold -- emotionless-- logical. Occasionally they are depicted as being the manual laborers of the future -- the Jetsons' maid, for example. Generally the end result of having robot laborers is the catapulting of mankind onto a higher aristocratic plane: no longer is it necessary for us to perform the blue-collar tasks. Mechanical assistants wait on us hand and foot. Like the English gentry of the 19th century, we can occupy our minds and bodies with more noble pursuits.

Traditionally, robots have provided a somewhat ambiguous counterpoint to the common theme of man vs. machine, born of the industrial revolution of the 19th century. In this motif, the machinery of capitalism turned without regard to the human lives sacrificed for the sake of profit. The artisans and craftsmen were lost, their workmanship and attention to detail no longer necessary. Their skills, passed down through generations, honed from father to son and regulated by guilds and trade unions, had become obsolete: their wares were now mass-produced by the thousands in the engines and foundries of vast factories, whose billowing smokestacks blackened the countryside and whose waste products turned the streams and rivers into sludge. Robots, being a "higher form" of machine, generally did not fit into this dark vision of the world. Robots were shiny and metallic; clean and intelligent.

Which doesn't mean that robots weren't frequently the antagonists of the stories: a common theme of science fiction is mankind oppressed by our own creation, which echoes to some degree the older industrial age fears. In this storyline, the robots have evolved beyond the programming of their former masters: biological life forms have become unnecessary and expendable. Of course, this is a modern veneer on a very classic theme if you look below the surface: the ancient Greeks told stories of men who attempt to reach beyond their human limitations, and, in so doing, invoke the ire of the gods. What could be more pretentious than to believe, in our audacity, that we are capable of creating a form of electronic "life"? A "creature" that surpasses in some way (usually strength and/or intelligence) those attributes endowed us by our own creator? This is the very definition of hubris! Zeus must be charging up his lightning bolts!

To a certain degree, this is the theme of what some believe to be one of the first "true" science fiction novels: Mary Shelley's Frankenstein, published in 1818. Although it is more often classified as a gothic horror story these days, Mary Shelley's intention was clear: Victor Frankenstein was a scientist, who applied technology (as it was understood in the 19th century) to animate the inanimate, imbuing it with life. For Shelley, the story was about the scientist, and not his creation: like Daedalus, who made wings for his son and then watched him die when he flew too close to the sun -- or like Prometheus, the novel's namesake, creator of mankind, who was punished by Zeus -- he was a tragic victim of his own arrogance.

In later science fiction, variations of this theme were commonplace: besides robots, punishment for overreaching our limits continued to arrive in the form of biological monsters. Godzilla, an unstoppable force born of nuclear radiation, is one of the most prominent, and among the first, in a long string of films and stories that play on the shock and awe created by atomic weapons.

The thing is, although robots are becoming constantly more sophisticated, and it is even possible today to buy some pretty sophisticated robot "toys", it will still take some tremendous technological advances before we have any reason to begin worrying about "robot overlords" * in our future. The problem isn't really mechanical or electronic, although there are still some hurdles to be overcome in the area of mobility. The big problem is intelligence. The truth is, we are nowhere near developing hardware and software capable of processing even a fraction of the sensory data that the most primitive vertebrate animal can process in the blink of an eye.

The question remains, then -- what is it that characterizes the present (11-year-old me's hypothetical future)? We're not the Jetsons ... we're not Lost in Space ... what are we?

(to be continued)

****

* That's right ... click here to listen to Jonathan Coulton's hilarious Chiron Beta Prime. This song was the inspiration for some of the ideas in this post ... I'm one of his newest fans. Then check out The Future Soon and Skullcrusher Mountain. If you have any slightly geeky tendencies at all, you will become a fan too. Once you are convinced, go and buy his albums online and download the rest of his music!

Wednesday, March 08, 2006

It's a girl!



Not one to be left out of the wave of birth announcements, I have one to make! For this, I have to interrupt my "previously scheduled" programming (Back to the Future - Part II)... which is just as well, since (as is normal for me) the text has grown beyond my original intentions (I'm going to have to break it into at least two other posts).

Please welcome to the world Isabela, b. March 7, 2006, in New Jersey! Congratulations to her proud parents -- Mom Cleia (Cristina's sister) and Dad Mark! Pleased grandparents Clea and Celio are on hand, ready to dote, once Cleia arrives home with her little bundle of joy. They have also been efficient at relaying news and pictures to us here in the southern hemisphere, nearly as far from the action as possible.

This isn't quite enough to assuage the rampant pining among the female family members down here (there's something genetic about the effect of maternity on the new mother's sisters!); certainly Isabela will be a few months old before anyone down here gets to meet her in person. We are all, however (male and female alike), anxious to make her acquaintance as soon as possible.

For those of you who are interested in stats, Isabela weighed in at 3.7 kg (8.2 lbs) and measured 51 cm (20.1 in).

As I already mentioned, the epidemic of human reproduction sweeping the world these last few months (possibly in consequence of last May's launch of "Star Wars III - Revenge of the Sith" -- that's my theory) is somewhat intriguing. Will the Force be strong with these younglings??? That remains to be seen. I, for one, will be paying close attention over the next few years!

Next Jedi birth to watch: my brother-in-law Celio's wife Tatiana is expecting twin boys (!) due in another month or so. Stay tuned for my next birth announcement!

****

One last comment, on an unrelated note: once again our family is being assailed by an unrelenting viral nemesis. Currently, my oldest son James and I are the victims -- viral conjunctivitis, better known as "pink eye". In my case, it's the least unpleasant symptom of an overall state that includes a throat infection, nasal congestion, and a high fever (2 Advil every 6 hours since Sunday). Tomorrow, James and I will go in for some blood work to see if there isn't also a bacterial element to our afflictions.

Meanwhile, Kevin's eye is starting to look reddish; so this may be only the beginning of our ordeal. :-(

Oh ... and one final final comment: I've finally added an RSS link to the right sidebar, so Gramlingville can now be subscribed to (to receive automatic updates, for example). If you don't know what an RSS feed is, you probably aren't going to use it anyway, so I'm not going to waste any more time explaining it!

Thursday, March 02, 2006

Back to the Future! (Part 1)



Tonight I'm going to try something completely different. Please bear with me ... if this is not your cup of tea, then take comfort in the fact that I'm breaking this into two posts. (Hopefully I will finish this and post the rest tomorrow!)

I ran across the picture above on the internet, supposedly taken from a 1954 issue of Popular Mechanics. The caption reads:

Scientists from RAND Corporation have created this model to illustrate how a "home computer" could look in the year 2004. However the needed technology will not be economically feasible for the average home. Also the scientists readily admit that the computer will require not yet invented technology to actually work, but 50 years from now scientific progress is expected to solve these problems. With teletype interface and the Fortran language, the computer will be easy to use.

Note the input devices: a steering wheel (instead of a mouse!) and a teletype as a keyboard! Take a closer look at the picture before reading on.

Pretty funny! At least it would be, if there were any chance of it being genuine. First off, I have always been under the impression that the term "home computer" wasn't coined until the late 70's. That in itself would be enough to make me skeptical. Second, I always assume that anything I read on the internet is a fake until proven otherwise. It took me all of about 30 seconds and a visit to Snopes to confirm that the photo is truly a fake: in reality, it's a modified image of a nuclear submarine maneuvering room, with an old color tv and teletype/printer superimposed over the more modern display and control console.

The point is, you can't believe anything you see on the internet. Really, it may seem like a cliché, but it's not so far from the truth. Just about any information you find must be examined with a healthy dose of skepticism, especially if it sounds fantastic or implausible. (Although, these days, many of the internet-propagated lies and hoaxes are more plausible-sounding than a lot of truths.) I'm amazed at how much junk I receive in my inbox or read on blogs that people just pass along without stopping to consider its source. I'll come back to this point in a minute.

The home computer picture above was originally created as a joke, then circulated as if it were true. But the point of the joke lies in the fact that it seems as if it could be true: it is not entirely implausible that 50 years ago someone would have imagined a room-sized computer affordable enough for home use. It was this detail that got me thinking (a dangerous habit!): about the past, the present, and the future. The question that popped into my head was this: how incredible would today's technology (personal computers, notebooks, iPods, internet, cell phones, etc.) actually appear to someone living 50 years ago?

To really answer that question, it's not enough to just compare the current state of technology with the technology of that time period; you have to look at the expectations that people had back then with respect to their future. Of course, the answer will be very subjective: I could speculate as to what people imagined the future would be like 50 years ago, or I could ask them (my own parents, or even my grandmother!). But I don't really need to, because I know what my own expectations were 30 years ago. This wouldn't be much of a thermometer, if it wasn't for the fact that practically all of western culture's expectations with respect to the future of technology are founded on the same source: the explosion of speculative science fiction that started in the 1920's, and became truly popular during and following WW II.

The fact is, the idea of associating scientific and technological progress with a vision of the "future" is a relatively modern concept: a concept that was basically born with modern industrial civilization. Speculative science fiction as a form of literature arose for all practical purposes only at the end of the 19th century, with authors such as Jules Verne, H.G. Wells, and Edgar Rice Burroughs. Before that, the notion that scientific progress would result in a future radically different from the present was not widespread. Throughout most of human history, small hunter-gatherer and agricultural communities were far more concerned with the present and the past than the future (usually the mythical past, or the knowledge and belief system passed verbally from generation to generation). The future was typically regarded as little more than a continuation of the present. Those who lived under the memory of the great civilizations of history (Greece and Rome in medieval Europe, for example) generally regarded the past as having been a "golden age" ... and the present and future as a decadent shadow of ancient greatness. The great civilizations themselves, in their hubris, tended to regard their own culture and technology as being the pinnacle of history. The future meant nothing more than a direction for their own manifest destiny, usually to expand and conquer.

Only once mankind could look back on his past and perceive the changes that had occurred was it possible to extend that vision into the future. To imagine a future different from the present, you must be willing to question your world-view; to admit that you do not possess all the truths. And that, in itself, is a radical concept: but it's a concept that transformed the world and made possible that which could never even have been dreamed of just a few short centuries ago.

****
For as long as I can remember, I've loved science fiction, as a genre, both in literature and film. Science fiction fed and drove my vision of the future. I suppose it started with Lost in Space as a kid; the idea of the Robinson family winging around the galaxy in a flying saucer-shaped RV with their own personal robot appealed to me on every level. I also kind of had a thing for Penny. (That's right: not Judy! Penny!). Since I was a very timid child, the idea of leaving Earth and its teeming throngs behind and exploring deserted planets with just my family probably had a lot to do with the tv series' appeal (especially if I had Penny and my own Robot along with us!).

My children often ask me what kind of games I liked to play when I was a kid. Looking back, solitary adventures would have to be the predominant theme in my fantasy world throughout much of my preteen years. Our basement was an entire universe that could be transformed at will into the fantasy setting of our choosing: a spacecraft, to explore the outer edges of the galaxy; a submarine, diving to the uncharted depths of the sea; the heart of the Amazon rainforest; deep in some cavern beneath the earth; into the past, the age of dinosaurs. Wherever I'd go, it was somewhere remote, far from the crowds. Just me, and maybe my brothers and sister, cousins, and occasionally a few select friends.

I also loved the old monster movies: I watched Godzilla and the whole slew of related rubber-suited behemoths stomp Tokyo into rubble time after time. But it was space that captivated me more than anything. Besides Lost in Space, I was brought up watching the Apollo moon landings, old 1950's sci-fi movies, Land of the Giants and, later, Star Trek. I remember watching Star Trek when I was far too young to understand it; in fact, I was convinced that Dad actually was Captain Kirk (when he left for work, he would beam up to the Enterprise, I guess!). Later, I discovered books... the ultimate fantasy escape. Reading was a truly solitary adventure, and science fiction could take you literally to any place or any time in the universe.

So my destiny was always pretty clear to me; other than a brief interval around the fifth grade in which I was pretty sure I wanted to be a marine biologist (this temporary detour was corrected with the release of Star Wars in 1977), my destination was the stars. I would be an astronaut. There was no question about it. And I had it all figured out: by the year 2000, I would be thirty-six years old! The year 2000! Nothing sounded as futuristic as the year 2000 -- the 21st century, and the future meant space travel. Astronauts would be in great demand, and I would reach prime adulthood just in time for the space odyssey!

The Final Frontier, here I come!!

****
(to be continued)