Information Technology

Information: a history, a theory, a flood

Review of The Information: A History, a Theory, a Flood by James Gleick.

James Gleick’s book The Information is more than a simple history of information theory or a simple history of the computer. It is actually a sweeping philosophical dissertation on what he calls The Information. And he is not talking about mere information, but The Information, which is a much more complex and difficult and ambiguous thing. So, this book is about The Information.

The Information doesn’t begin with Claude Shannon’s revolutionary information theory or Alan Turing’s early theories of computing, as most such books do. Instead, Gleick’s definition of The Information is much wider. In fact, he goes all the way back to the human discovery of speech and then the invention of writing, which he says led to philosophy and mathematics and all the rest of mankind’s most impressive achievements. For Gleick, speaking and writing are just other forms of The Information.

The heart of Gleick’s book is his contention that the central dogma of Information Theory is that “Meaning Is Irrelevant.” This means that any kind of information, and especially the information of computer technology, can be seen as independent of the meaning that it expresses and of the language that is used to express it. And that looking at information in this way leads to huge and surprising developments.

Gleick is not saying that Information doesn’t carry meaning. He is saying that if you forget about the meaning and look just at the information itself, if you abstract Information away from the meaning it carries, then Information can become much, much more powerful. If you can deal with Information itself in terms of mathematics or some other symbolic system, then the power of information can be magnified immensely.

And there is more to this idea, a whole lot more. Gleick tells us that the information which humans have always tried to communicate, since they first learned to talk and to write, has become more and more abstract over the course of human history. What does this mean? It means that information, including writing, has come, more and more, to be seen as a thing in itself. It means that information has gradually come to be more than just the meaning that it carries or the code it is communicated in. And, he says, if we look at the history of human communication, this idea becomes more and more clear. So he shows us, in detail, a whole lot of the history of information, in which communication has become more and more abstract.

The idea of separating information from its meaning is not originally Gleick’s idea. It originally comes from Claude Shannon, generally considered the father of Information Theory. We’ll get to Shannon in due time. Or sort of; Shannon is pretty difficult.

Gleick’s book, The Information, begins with a chapter about the talking drums of Africa.  The talking drums disappeared around the middle of the twentieth century, as they were replaced by modern communication technologies.  But  fifty years ago drummers all over Africa spread the local news and gossip of their villages very rapidly and efficiently over hundreds of miles every night.

Gleick uses the way the talking drums worked to illustrate his central point about information technology. In the Democratic Republic of the Congo, one of the local languages is Kele. Kele, like many African languages, is a tonal language: meaning is conveyed not only by vowels and consonants but also by the tone of each syllable, which is either a distinct high or a distinct low sound. When experienced drummers rapped out the nightly news, the vowels and consonants of the Kele language got lost and only the high and low tonalities of the syllables remained. In the process, a lot of the message was also lost. Even simple statements became very ambiguous; a short drum passage could mean any one of twenty or so different things. To solve this problem, African drummers added a lot of extra redundancy to their sentences. A drummer would try to say everything in six or eight different ways to make the message clear.
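
To make the tone problem concrete, here is a tiny sketch in Python. The words, tone patterns, and stock phrase below are placeholders invented for illustration, not real Kele data; the point is only that a two-tone code maps many different words onto the same drummed signal, and that a longer, redundant phrase is what removes the ambiguity.

from collections import defaultdict

# Hypothetical words mapped to high/low tone patterns (all values invented).
tone_of = {
    "songe": "LH",   # "moon" in this toy example
    "koko": "LH",    # "fowl" in this toy example
    "liala": "HLL",  # "fiancee" in this toy example
}

# Group words by the signal a drummer would actually send.
drummed = defaultdict(list)
for word, tones in tone_of.items():
    drummed[tones].append(word)

for tones, words in drummed.items():
    if len(words) > 1:
        print("Tone pattern", tones, "could be any of:", words)

# A drummer resolves the ambiguity with redundancy, sending a stock phrase
# like "the moon looks down at the earth" instead of the bare word.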

John Carrington was a missionary in Africa who was fascinated by the talking drums, who learned to use them himself, and who finally wrote a book explaining how they worked. He provides an example of how redundancy had to be introduced into the drum language to make it understandable. When Carrington was out working in the jungle and it got to be lunchtime, his wife, who also spoke drum language, would call him for lunch on the drums. She would drum out something like this: “White man spirit in forest come come to house of shingles high up above of white man spirit in forest. Woman with yams awaits. Come come.”

It turns out that about eight words of drum language are needed to transmit one word of human language unambiguously. This redundancy compensates for the loss of information when the vowels and consonants are dropped. The amount of redundancy needed to make drum talk understandable can even be expressed mathematically. Gleick says that

“After publishing his book, John Carrington came across a mathematical way to understand this point. A paper by a Bell Labs telephone engineer, Ralph Hartley, even had a relevant-looking formula: H = n log s, where H is the amount of information, n is the number of symbols in the message, and s is the number of symbols available in the language. Hartley’s younger colleague Claude Shannon later pursued this lead, and one of his touchstone projects became a precise, mathematical measurement of the redundancy in English.”
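
To see what Hartley’s formula says in practice, here is a minimal sketch, using logarithms base 2 so that H comes out in bits. The figure of 256 distinct spoken syllables is an assumption chosen for illustration, not a fact about Kele.

import math

def hartley_information(n, s):
    """Hartley's H = n log s: n symbols drawn from an alphabet of s symbols."""
    return n * math.log2(s)

# Two drum tones versus a hypothetical spoken alphabet of 256 distinct syllables.
print(hartley_information(10, 2))    # 10.0 bits in ten drumbeats
print(hartley_information(10, 256))  # 80.0 bits in ten spoken syllables

# To carry those 80 bits with only two tones you need 80 beats, eight times as
# many symbols, which is roughly the redundancy Carrington observed in practice.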

The point of Gleick’s chapter on drum language is that drum talk illustrates the central idea of information theory, that meaning is irrelevant. Said another way, information can be seen as independent of the meaning that it carries and independent of the language or code used to express it. This is to say that information is an abstract concept that can be embodied equally well in human speech or in writing or in drumbeats or in digital code. All that is needed to transfer information from one language to another is a coding system. A coding system may be simple or complicated. If the code is simple, a given amount of information requires a longer message. If the code is complicated, as it is for a spoken language, the same amount of information can be conveyed in a much shorter message. And the messages consist of abstractions within abstractions; they can be expressed in any code that represents the message.
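
Here is a minimal sketch of that tradeoff, assuming a crude fixed-length code of my own invention: five binary symbols per letter, since 2 to the 5th power is 32, enough to cover 26 letters. It is not how any real drum or telegraph code works, but it shows why a two-symbol code needs a much longer message.

import string

# Assign each letter A-Z a fixed five-symbol codeword over the alphabet {0, 1}.
letter_code = {c: format(i, "05b") for i, c in enumerate(string.ascii_uppercase)}

def encode(message):
    """Transfer the message into the two-symbol code, letter by letter."""
    return "".join(letter_code[c] for c in message if c in letter_code)

msg = "COMECOME"
coded = encode(msg)
print(len(msg), "symbols in the rich 26-letter code")  # 8
print(len(coded), "symbols in the two-symbol code")    # 40

# Same information either way; the simpler the code, the longer the message.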

Once information is turned into an abstraction that can be expressed mathematically, it can be dealt with in much the same way that physicists turn reams of physical observations into abstract mathematical formulas. This simple process of turning physical experience into mathematical abstractions that can then be manipulated by the rules of mathematics transforms data into abstract information.

Abstracting information away from its physical expression is an enormously powerful tool, one of the most important tools used by science. It is very like what physicists do when they abstract friction away from inclined planes and rolling balls to turn motion into abstract equations. It is exactly what Newton did to state his famous laws of motion and gravity as equations.

Gleick makes this point again in his second chapter, where he talks about how writing, written symbols, transformed the telling of stories and the relating of experience into abstract words that could be manipulated in many different ways. He says this is what was responsible for the creation of philosophy in Greece over 2000 years ago.

Writing enables the transmission of knowledge over time and space. Writing is basically the substitution of signs for things. Writing, Gleick says, is twice removed from language. First, speech is transposed into an alphabet that represents human sounds, and then the alphabet is assembled into written words, symbols that represent human ideas and experiences.

When the old oral stories of Homer were written down, sometime around 700-800 BC, Gleick says this changed everything. A new level of abstraction was introduced. A writer-reader could structure knowledge and could now have knowledge about knowledge. This abstraction went a step further with the Greek philosophers Plato and Aristotle.

Philosophy, says Gleick, was born when we learned to write. Before writing there was only “experience”. Then we began to structure experience with written language. The Greek philosophers invented “categories,” or “essences” as Aristotle called them. They used categories to classify animals and natural objects into types. (With animals we now call these categories or types genus and family and species and so on.) Plato did the same thing when he created Forms for things like “chairness” or “tableness” and then went on to talk of even more abstract Forms like Beauty and Truth and the Good.

Gleick talks about how writers began to embrace the discipline of abstraction. Before writing, humans told stories orally; they were narrations of events. Writers changed this: they went from the prose of narrative to the prose of ideas. They went from organizing experience in terms of events to categorizing and abstracting ideas. Gleick says that out of writing came abstract thinking and the true beginning of consciousness. He says that writing predates thinking.

Aristotle, says Gleick, took this a step further by inventing logic, which is a formalized and even more abstract form of thinking. Syllogisms, he says, came only after writing; speech alone is too fleeting for analysis. Logic is a way of manipulating abstractions, a way of discovering truth from words alone; it is different from mere talking. Gleick says there are no syllogisms in Homer, that Homer arranges things by events, not categories. Aristotle manipulates words and abstractions to attain knowledge and truth. Preliterate people don’t think in either categories or abstract ideas.

Gleick says that the human journey is from the spoken word, to writing, to categories, to logic and metaphor. But he points out that there are always flaws and gaps in abstractions and in logic. As soon as Western culture invented highly abstract ideas and logic, paradoxes began to appear. This happens because words and abstractions are not the same as the things they name. As others have pointed out, “The Map should not be mistaken for the Territory.” Maps are a metaphor for abstract ideas, and the territory is a metaphor for the incredibly complex physical stuff that is actually out there. Maps are simple and very helpful, but they are never identical with the territory; maps are necessary and powerful, but they are always wrong in some way or another. Just as the ultimate abstraction, mathematics, is incredibly powerful and always wrong in some way or another.

So the question has arisen: are categories part of reality, or do they exist only in languages, or maybe only in men’s minds? Are these paradoxes only in language, or are they also in the real world? At this point men thought maybe they could eliminate impure language and use only unambiguous and pure symbols like those of mathematics, an even more extreme form of abstraction. By the early 20th century it seemed like mathematics could make logic work, and the ideas of Logical Positivism came to the fore. But it wasn’t long before the dream of finding truth via language expressed in exact mathematical symbols was found to be illusory.

At this point, allow me to put Gleick’s argument aside for a moment to look at a slight disagreement some have had with his ideas about the increasing abstraction of human communication.  In a recent book review of The Information in The New York Times, Geoffrey Nunberg says that actually such ideas of talking leading to writing leading to abstract ideas leading eventually to human consciousness are now partly discredited.  Nunberg goes on to say that this somewhat diminishes the effectiveness of Gleick’s argument.  Here is how Nunberg puts it…

“This is all engagingly told, though Gleick’s focus on information systems occasionally leads him to exaggerate the effects technologies like printing and the telegraph could have all by themselves. For example, he repeats the largely discredited argument, made by the classicist Eric Havelock in the 1970s, that it was the introduction of the alphabet that led to the development of science, philosophy and “the true beginning of consciousness.”

Such errors are mostly minor. But Gleick’s tendency to neglect the social context casts a deeper shadow over the book’s final chapters, where he turns from explicating information as a scientific concept to considering it as an everyday concern, switching roles from science writer to seer.”

I suspect the sequence of ascending knowledge may not be exactly as Gleick describes it. But the idea of increasing levels of abstraction in man’s use of language and writing and symbols and categories, ascending finally into the philosophical writings of Plato and Aristotle, is very attractive to me. Exactly how it happened seems less important than the fact that it did happen.

In Chapter Two, Gleick discusses the invention of the alphabet and its almost immediate use in writing as an important step in the increasing abstraction of information.

In Chapter Three he goes on to discuss the use of alphabetized lists of words in dictionaries. Again, he sees this as another abstraction of information.

Even though alphabetized lists did appear as early as 250 BC, they didn’t really come into their own until the early 1600s, with Robert Cawdrey’s alphabetized dictionary. Gleick calls this dictionary a milestone in the history of information. Gleick says alphabetization forces the user to divorce information from meaning, “to focus abstractly on the configuration of the words themselves.” It is not necessary to know the meaning of words to look them up in an alphabetized list. In fact, readers used such lists to figure out the meanings of words.

Nowadays we think of alphabetization as a very simple tool, but it is actually amazingly powerful. It allows one to find one word among millions and millions of words easily, without knowing anything about the meaning of any of the words. Finding words in alphabetized lists is a completely mechanical task.

There are lists that organize words by meaning or topic, but these don’t work nearly so well as alphabetizing. Lists organized by topic are hopeless to use; it takes forever to find anything. On the other hand, anyone who knows the alphabet can find any word in the hugest dictionary in seconds, or at least in a few minutes.
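
A short sketch of why the mechanical procedure is so powerful: binary search over a sorted list compares spellings only, never meanings, and for a million-word dictionary it needs only about twenty comparisons, since the search space is halved at every step. The tiny word list here is just a stand-in for a real dictionary.

import bisect

# A toy stand-in for a dictionary; the lookup below never consults meanings.
words = sorted(["abacus", "drum", "information", "meaning", "zebra"])

def mechanical_lookup(word, sorted_words):
    """Binary search: compare spellings and halve the search space each step."""
    i = bisect.bisect_left(sorted_words, word)
    return i < len(sorted_words) and sorted_words[i] == word

print(mechanical_lookup("information", words))  # True
print(mechanical_lookup("wisdom", words))       # False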

At this point in Gleick’s book I began to realize something about information and computers and artificial intelligence that I don’t think I had ever fully realized before. This is the idea that information theory and computers work so well because they are completely mechanical, just like alphabetized lists. That is, just as the meaning of alphabetized words is irrelevant to the process of sorting words alphabetically, the information that computers process is, for the computer itself, completely divorced from any kind of meaning. Computers don’t need to understand anything about the information they are processing to work as miraculously and powerfully as they do.

I’m not entirely sure that this is exactly what Shannon was saying when he insisted that information has to be divorced from meaning in information technology. In fact, I’m pretty sure he was talking about something much more complex. John Naughton, in a book review of The Information in The Guardian, also seems to find Shannon’s idea pretty deep. Here is what Naughton has to say about Shannon’s insistence that Information needs to be separated from meaning in IT:

“James Gleick is an accomplished stalker of mysterious ideas. His first book, Chaos (1987), provided a compelling introduction to a new science of disorder, unpredictability and complex systems. His new book, The Information, is in the same tradition. It’s a learned, discursive, sometimes wayward exploration of a very complicated subject.

The second part [of The Information] centres on the work of Claude Shannon, the American mathematical genius who in 1948 proposed a general theory of information. Shannon was the guy who coined the term “bit” for the primary unit of information, and provided a secure theoretical underpinning for electronic communications (so in a way he’s the godfather of the modern world). The trouble was that Shannon’s conceptual clarity depended on divorcing information from meaning, a proposition that to this day baffles everyone who is not an engineer

But the most startling insights in the book come when Gleick moves to explore the role of information in biology and particle physics. From the moment when James Watson and Francis Crick cracked the structure of DNA, molecular biology effectively became a branch of computer science. For the replication of DNA is the copying of information and the manufacture of proteins is a transfer of information – the sending of a message.

And then there’s quantum mechanics, the most incomprehensible part of physics, some of whose most eminent practitioners – such as the late John Archibald Wheeler – have begun to wonder if their field might not be, after all, just about information. “It from bit” was Wheeler’s way of putting it. “Every it – every particle, every field of force, even the space-time continuum itself – derives its function, its meaning, its very existence… from bits.”

So, I’m pretty sure that my concept of how computers divorce meaning from information isn’t exactly what Shannon meant.  But allow me to run with my idea for a bit.

When I first had the idea that computers work so well because they are completely mechanical, just like alphabetized lists, my first thought was of the popular notion that computers are becoming more and more like humans, that they are developing ways of thinking that are very human, and that because computers can learn so much faster and better than humans, real people might be in danger of being replaced by computers. After reading Gleick, this sounded not quite so reasonable.

Computers and AIs are by definition non-human, mechanical manipulators of information. They are completely separated from any kind of human understanding of meaning; in fact, the information is deliberately separated from meaning. It was this realization by Claude Shannon in 1948, that information can and should be separated from meaning, that was responsible for the subsequent explosion of computer power in the modern world. There were all kinds of mechanical and electronic computers in existence by the end of the 1940s, before Shannon, but they were not all that important until Shannon realized that the secret to the power of computation was to disconnect the information from the meaning of the information. Then and only then could information be processed in a powerful and mechanical way.

It doesn’t seem to me that computers as they are now will ever be able to think like humans.  At least from my amateur perspective it doesn’t seem very possible.  Computers are defined by the powerful manipulation of data and they do this by excluding the meaning of the data.  On the other hand, human consciousness is practically defined as the search for meaning.  Humans are saturated with the search for  meaning.

This is not to say that computers cannot communicate meaningful messages, or that they can’t be used in the search for meaningful knowledge, or that they cannot crunch data that can be used very meaningfully. It’s just that this is not how computers work. They do their work in a mechanical way: matching lists mechanically, ordering them by mechanical rules, and crunching information in all kinds of mathematically complex ways. But they do all this without accessing or using or understanding the meaning of the information. That job is for the humans.

Here is an example of a problem that I, and most of the rest of us, deal with daily. Like everyone else, I have advanced and automatic spell checkers on my phone and in all my computers. They work extremely well, they keep improving all the time, and they undoubtedly will become much better. I wouldn’t want to live without them. Right now, as I write rapidly and sloppily and misspell every other word, my spell checker is correcting everything pretty much automatically and usually quite accurately.

But all of my spell checkers are extremely stupid. They have no idea what the words they are correcting mean, or whether they make sense in the context of the sentence they are part of. They absolutely insist on inserting totally ridiculous words into my sentences that no third grader would ever take seriously. We have a joke in my family. Whenever the spellchecker injects a stupid word into an email or text and the text gets sent out with a batch of other incredibly funny mistakes in it, we don’t even try to correct it. We just quickly add, “Smart phone not so smart,” in the next paragraph, and everyone understands that the computer, not the author, was responsible for the mistake.

Yes, smart phones are going to learn to be much better spellers, but not through an increased understanding of the sense of the words or of the sentence. Spellcheckers are going to get better by comparing each sentence with other similar sentences, by drawing on longer lists of likely words, or by some other mechanical improvement. The computer is unlikely ever to understand what the meaning of my sentence is and add words that it knows make sense in that sentence. At least not unless the whole concept of information and computing changes radically. Which may very well happen, perhaps sooner rather than later.
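
As a rough sketch of what such a mechanical improvement looks like, the corrector below picks whichever word on a toy list is closest in spelling, using the classic edit-distance trick; spelling is all it ever looks at. This is an illustration of the general idea, not how any particular phone’s spellchecker actually works.

# Toy word list; a real spellchecker would use a far larger one plus statistics.
WORDS = ["information", "meaning", "drum", "message", "theory"]

def edit_distance(a, b):
    """Levenshtein distance: the fewest single-letter edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete a letter
                           cur[j - 1] + 1,             # insert a letter
                           prev[j - 1] + (ca != cb)))  # substitute a letter
        prev = cur
    return prev[-1]

def correct(word):
    # Choose the listed word with the closest spelling; meaning never enters in.
    return min(WORDS, key=lambda w: edit_distance(word, w))

print(correct("informaton"))  # information
print(correct("meening"))     # meaning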

Computers don’t deal in meaning, they deal in mechanical computation only.  That’s why they deal with math so well.  Math and logic don’t really deal with meaning either.  They seem to me to be  mechanical formulas and algorithms.  They follow mathematical rules and they look for contradictions and other logical errors.  Math and logic seem to me to be tools that can be used by humans to clarify thinking and also to generate huge pools of knowledge.  But, so far, only the human mind can actually deal with meaning and with making sense.

This is why mathematics and logic can be exactly right in a mechanical kind of way but totally wrong in terms of understanding in a larger way.

My Kindle can be extremely smart in manipulating the symbols that make up a book, but it cannot understand the meaning of the book, or even of individual sentences, in any real way. Computers can even write books nowadays, as long as the books follow some kind of simple formula. They can write simple love stories or thrillers or war stories or articles about last night’s football game, but they really don’t understand what they are doing. They are just following time-tested rules for creating narratives. Unfortunately, none of these narratives can ever be really original or surprising or creative. At least I don’t think they can.

And I think this is true in spite of the new learning algorithms that are now part of the new AIs. Algorithms are now learning to recognize patterns like faces, or voices, or words when exposed to mountains of Big Data. They are learning to write new algorithms and to fix flawed algorithms, and often they do this better than humans. These advances are being produced by computers based on the same principles as the interlayered, interconnected neurons of the human brain. But I don’t think these computers will ever understand the meaning of what they are doing the way humans do. It is just more mechanical retrieval and computation and manipulation of more data.

However at the rate AIs are growing and improving maybe this isn’t true.  Maybe a day will come when computers will truly understand meaning as humans do.  I’m doubtful though.

However, in spite of the fact that computers are mechanical and are divorced from the meaning of the data they crunch, they can be and are still very, very powerful tools.  And they are tools that humans use in lots of ways to huge advantage.

It seems to me that even science, especially physics, is somewhat like computers in that it is also deliberately divorced from meaning, deliberately abstracted from the physical world. Physics particularly and deliberately divorces itself from the physical world of time and space to gain its power. I have recently been reading half a dozen books by notable and famous physicists who explain that physics deliberately divorces itself from time in its search for truth. Theoretical physicists such as Brian Greene, Sean Carroll, and Lee Smolin, along with science writers John Gribbin and James Gleick, who wrote this book, and even Newton himself, can be included in this list of people who seriously claim that time is a very pernicious illusion. All of these guys say science always and necessarily and deliberately excludes time and the physical world. Smolin is the only one on this list who says this has always been a bad idea and should be changed.

Mathematics exists outside of time and outside of the physical world in ways that are easier to understand. Numbers are not physical things of this world; they are abstractions. Lines in geometry are not real lines; they do not have width or mass or weight; they are ideas; they are abstractions. And 2+2=4 is true forever and ever, not just today or tomorrow; it is something that exists outside of time. The truths of mathematics are outside of time. And mathematics has always been about manipulating symbols according to abstract, true, mechanical rules.

A quick aside here. Not all mathematicians agree that math exists only in the world of ideas and forms. There are some, who are called Neoplatonists, who deeply believe that the rules of math and science are not abstract ideas that we invent but part of the real world, part of actual physical nature. Neoplatonists believe that the rules of math are out there waiting to be discovered. The opposite school believes that mathematicians invent the rules of math, that the rules of math live only in the minds of men. And, this school believes, sometimes these rules are wrong and sometimes they are right. This argument can go on and on, and who knows who is right. But enough of that for now.

Don’t get me wrong, I’m not saying that math and science and information are worthless because they are divorced from meaning. The truth is just the opposite. I am a big fan of science and mathematics and computation. They are about the only things that even come close to figuring out what is true and false in our universe. And they have been almost totally responsible for everything that is good and progressive in our world. Certainly they are almost single-handedly responsible for the luxury and prosperity and safety and happiness that most people in the developed world now take for granted. (As opposed to the horrors and violence and poverty and even starvation of much of the past.)

Well, here I am at the end of this essay, and yet I don’t feel like I have explained the book at all. And I’m sure there are some significant parts of the book that I don’t understand. As I have been saying, the real mystery of the book seems to center around Gleick’s and Shannon’s insistence that Information and Meaning are separate things and that this is what gives information its power. I can’t say that I completely understand this enigma. However, I seem to be in good company. A number of pretty impressive minds have written reviews of The Information and have also been at least somewhat mystified. Below are some quick opinions from some of these other reviews. It looks to me like The Information is much, much more complicated and important than we knew.

John Naughton of the Guardian discussed The Information with Gleick on April 9, 2011.  He too was puzzled by the separation of Information and Meaning.  What Shannon had to say about this seems to be more complex than I had at first imagined.  I quoted this part of his article earlier in this review, but I think it is good enough to quote again.   Naughton says that…

“The second part [of the book] centres on the work of Claude Shannon, the American mathematical genius who in 1948 proposed a general theory of information. Shannon was the guy who coined the term “bit” for the primary unit of information, and provided a secure theoretical underpinning for electronic communications (so in a way he’s the godfather of the modern world). The trouble was that Shannon’s conceptual clarity depended on divorcing information from meaning, a proposition that to this day baffles everyone who is not an engineer

But the most startling insights in the book come when Gleick moves to explore the role of information in biology and particle physics. From the moment when James Watson and Francis Crick cracked the structure of DNA, molecular biology effectively became a branch of computer science. For the replication of DNA is the copying of information and the manufacture of proteins is a transfer of information – the sending of a message.

And then there’s quantum mechanics, the most incomprehensible part of physics, some of whose most eminent practitioners – such as the late John Archibald Wheeler – have begun to wonder if their field might not be, after all, just about information. “It from bit” was Wheeler’s way of putting it. ‘Every it – every particle, every field of force, even the space-time continuum itself – derives its function, its meaning, its very existence… from bits.’ “

In a book review of The Information in The New York Times of March 18, 2011, Geoffrey Nunberg also takes on the conundrum of Shannon’s separation of The Information from The Meaning. Here is his comment:

“In an epilogue called “The Return of Meaning,” Gleick argues that to understand how information gives rise to belief and knowledge, we have to renounce Shannon’s “ruthless sacrifice of meaning,” which required jettisoning “the very quality that gives information its value.” But Shannon wasn’t sacrificing meaning so much as ignoring it, in the same way that a traffic engineer doesn’t care what, if anything, the trucks on the highway are carrying. Once you start to think of information as something meaningful, you have to untether it from its mathematical definition, which leaves you with nothing to go on but the word itself. And in its ordinary usage, “information” is a hard word to get a handle on (even after a recent revision, the Oxford English Dictionary still makes a hash of its history). It’s one of those words, like “objectivity” and “literacy,” that enable us to slip from one meaning to the next without letting on, even to ourselves, that we’ve changed the subject.”

I think the very best review of The Information is an article called “How We Know” by Freeman Dyson in the New York Review of Books, published on March 10, 2011.  Dyson touches briefly on the paradox of information and meaning.  After discussing Chapter One of Gleick’s book, “Drums That Talk”,  Dyson says…

“The story of the drum language illustrates the central dogma of information theory. The central dogma says, “Meaning is irrelevant.” Information is independent of the meaning that it expresses, and of the language used to express it. Information is an abstract concept, which can be embodied equally well in human speech or in writing or in drumbeats. All that is needed to transfer information from one language to another is a coding system. A coding system may be simple or complicated. If the code is simple, as it is for the drum language with its two tones, a given amount of information requires a longer message. If the code is complicated, as it is for spoken language, the same amount of information can be conveyed in a shorter message.”

Dyson goes on to relate Gleick’s ideas to his own vast fund of information on theoretical physics, molecular biology and even literature.  Dyson ends his essay by returning to the problem of Shannon’s separation of information from meaning and with a reference to Jorge Luis Borges.

“Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information.”

The Information is neither a simple nor a short book (it’s 526 pages long). And, I have to admit, I still have not finished reading it completely. After I finish the book, and surely re-read parts of it, and think about all of it some more, maybe I’ll have some better ideas. I suspect this is not my last essay on this book.

The Information: A History, a Theory, a Flood,
James Gleick
Pantheon Books, New York
Copyright 2011

 

Mesa Arch in Canyonlands National Park, Utah. Picture by Hanselmann Photography.
