Swedenborg and the Great American Experiment

My mother is one of those women who relate through self-effacement. For example, after being complimented on a wonderful Sunday dinner, she is likely to respond:

“I always serve peas with this meal; I don’t know why I served beans this time. Peas would have been much better, don’t you think? Why didn’t I serve peas!”

One of the things she regrets is that she passed up the opportunity to name me William Stark Stinson. In my family, William Stark is the name traditionally given to the firstborn male of each generation.

“Why didn’t I name you William Stark? I’m sure you would have appreciated it more than your cousin.”

Why William Stark?

Every New Hampshire school child learns the story of how, on April 28, 1752, John Stark was captured by Indians while on a hunting trip near what is now Rumney, NH. He was used as bait to lure his brother, William, and William’s brother-in-law, David Stinson, into a trap. At the last minute, John broke free and knocked the Indians’ guns into the air just as they fired, saving his brother’s life.

But not, alas, Stinson’s.

Now some of you may be wondering how this event could result in a family name handed down through the generations, much less being taught to every school child.

How many of you know who John Stark is?

General John Stark, hero of the Battle of Bennington?

Still not ringing a bell?

Let me try another tack. How many of you know the NH state motto? It’s on all the license plates.

Yes, “Live Free or Die” was uttered by none other than John Stark!

(Actually he said “Live free or die: Death is not the worst of evils”.)

David did get a brook, a lake, and a mountain named after him. I am proud to share my surname with a mountain. A mountain memorializing my relative, the one famed for not being saved by John Stark, famous Revolutionary War hero!

Fortunately, there were other Stinsons around at the time, and one John Stinson became a local and state leader, one of NH’s first supporters of Thomas Jefferson and, according to the History of the Town of Dunbarton, a “strenuous advocate for religious freedom.”

The light of the “Founding Fathers” — Franklin, Adams, Madison, Jefferson — shines so bright it is difficult to remember they couldn’t have accomplished what they did without the support of men and women like John Stinson in all the towns and wilderness settlements of the 13 Colonies. By reading State Constitutions one can obtain an insight into the thinking of those lesser lights on a variety of topics, including religious freedom.

New Hampshire has the distinction of adopting an interim constitution on January 5, 1776, making NH the first of the Colonies to declare independence from Great Britain. They then argued for another 7 years (mostly about how to create a government without giving it any actual power) before adopting a final constitution in October 1783. You may think 7 years is a pretty long argument, but remember it was 8 more years before the Bill of Rights was adopted.

The NH constitution contains some remarkable statements. For example, Article 10 states that, should the government become tyrannical, the people have not just the right but the duty to take up arms and overthrow the government. Beat that, Berkeley City Council!

Of more interest are Articles 1, 2 and 3. These declare that all people are born free and independent, and have inherent rights … not privileges granted by kings – or by the Constitution, but rights that are an intrinsic ingredient of being human. People may voluntarily surrender some of those rights to society in return for society’s protection of other rights, an idea known as the Social Contract, most famously explicated by John Locke. 

But the epiphany comes in Article 4. Article 4 asserts that some rights cannot be surrendered to society, because there is nothing of equal value that society can return. The one such right listed is the Right of Conscience.

Think about that for a moment. The framers of this constitution are stating that there is NOTHING you can be given that is as valuable as your right to develop and hold personal beliefs of what is right and what is wrong. There is nothing as important as the freedom to pursue in your own way the answers to the ultimate questions of life and meaning. Not food, not shelter, not life itself. Nothing. 

This is the meaning of “live free or die”.

— + —

It is so difficult to understand what life was like back then, looking as we do through the lens of modern culture. Imagine: we started as a society where it was a fact, as obvious as the sun rising in the east, that God intended us to believe what our betters told us to believe. Where it was accepted fact that the best way to support religion was to put the might of the state behind it.

Just a few years later we believed that an individual’s right to approach God as he or she saw fit was sacred. That this was one place where the State should not tread.

And that true religion would flourish as a result.

This is the Great American Experiment. Not democracy… that had been done before. But the United States may be the first society built on the principle of religious freedom.

How did this come about?

We tell our children the colonists came to the New World seeking religious freedom. Nothing could be further from the truth. They may have come here to escape religious persecution, but religious freedom was not on their minds. Before boot hit beach, theocracy was established.

The Puritans were the most notorious. Only members of their Church were allowed to vote. Church elders decided if you were pious enough to join. Catholics were driven from Massachusetts with a whip to their backs. Quakers were hanged by the neck from the trees of Boston Common.

The Puritans were not unique. Every colony had an established religion in one form or another. Churches were supported by tax dollars. Dissenters experienced intense discrimination if not outright persecution. The Pope was the anti-Christ.

What caused the change? 

The religious fervor known as “the Great Awakening” resulted in the creation of many new denominations. Witnessing the abuse meted out to these groups began to disgust a growing number of people, James Madison among them. Some of these new sects saw Jesus’ teachings that “my kingdom is not of this world” and “render unto Caesar that which is Caesar’s” as a Biblical injunction against the mingling of Church and State. As a result, the Baptists and other forerunners of today’s Evangelical Christians were among the most fervent supporters of the strictest separation between Church and State.

Some factors were purely practical. The Continental Congress discovered it had almost as many religions represented as there were delegates. They could either “hang together or hang separately”. Washington, fearing invasion from the north, sought to minimize British support there. Consequently, he vigorously and successfully fought to quell anti-Catholic sentiments throughout the States. Needing a truly national force, he rigorously enforced religious tolerance within the army. 

For Madison, even the radical idea of religious tolerance missed the mark. He believed that true faith must flow from a free mind. Holding to a different faith was not a privilege to be tolerated by a monarch or even by an enlightened majority; it was an intrinsic right. “Religious tolerance” must be replaced by “religious freedom”. “If this freedom is abused,” he wrote, “it is an offense against God, not against man.”

Both Jefferson and Madison looked at history and concluded Christianity was most vibrant in its early years but became moribund and corrupt once adopted as the state religion by Constantine. They convincingly argued that truth and good eventually win in a free marketplace of ideas, and government power only props up falsehood and weakness.

— + —

Religious freedom was precious to these men in part because they were each on their own spiritual journeys.

Washington credited the intervention of God for his victories and blamed moral weakness for his defeats. He forbade swearing in the Continental Army. Yet he never took communion. During his presidency, his minister gave a sermon with a thinly veiled rebuke of this practice. Washington stopped going to church on communion Sunday.

In his youth Franklin thought it beneath the dignity of God to meddle in the affairs of men and concluded that God must have created deputy gods to look after each of the many planets that exist throughout the universe.

Jefferson believed the clergy purposely obfuscated religious teachings and struck alliances with tyrants to protect their own privileges. He called Jesus “the greatest moral teacher the world had ever seen”, but denied his divinity. Jefferson assembled his own Bible, cutting the words of Jesus from several translations and pasting them into a notebook. He was convinced that if it were not for corruptions introduced by priests, starting with Paul, all the world would have long ago accepted Jesus’ teachings. It is interesting to note that the New Testament used by the General Church branch of the followers of the writings of Emanuel Swedenborg contains only Matthew, Mark, Luke, John and Revelation, excluding all the writings of Paul.

Jefferson’s Library contained at least two volumes of Swedenborg’s works, including “Apocalypse Revealed”. He invited John Hargrove, the first Swedenborgian minister ordained in the United States, to give public lectures in the halls of Congress on at least two occasions.

Adams, while a Unitarian, had a hard time shaking his Puritan roots. Throughout his presidency he believed government support of religion was necessary, although such aid should be distributed without favoring one sect over another. He frequently invoked the Almighty’s aid in the battles of his administration. This resulted in a bitter struggle between Adams and Jefferson, with Adams’ operatives calling Jefferson an “Infidel” and declaring “a vote for Jefferson is a vote against God”. However, this backfired, and Adams later wrote that his position “alarmed and alienated … Quakers, Anabaptists, Mennonists, Moravians, Swedenborgians, Methodists, Catholicks, Episcopalians, Arians, Socinians, Armenians, & & &, Atheists and Deist.”

The Founding Fathers used their religious freedom to study, contemplate, debate, and most significantly, to change. Adams, perhaps, changed the most. In his 80s, Adams worked to remove state support of religion from the Massachusetts state constitution. He studied all the world’s religions and wrote “men ought (after they have examined with unbiased judgments every system of religion, and chosen one system, on their own authority, for themselves) to avow their opinions and defend them with boldness.”

The Founding Fathers confirmed, as we would say in Swedenborgian terms, that each individual is a “church specific” pursuing a path of spiritual growth.

Interestingly, the Founding Fathers also converged on a definition of the Church Universal. Jefferson succinctly laid out this common creed in 1822: “That to love God with all thy heart and thy neighbor as thyself, is the sum of religion.”

They saw that the “one path” to enlightenment was the freedom to pursue “many paths”. And that these many paths/one path lead to a single mountain top, expressible in three words:

“Love thy neighbor”.

— + —

During the Revolutionary War, exhortations that “God is on our side” were routinely used to rally the troops. There was a common belief that to enlist in the Continental Army was to enlist in Christ’s Army. Many thought God had singled out the nation for a special role in history, and with some reason. Washington escaped without harm from so many perilous situations that he once observed with amazement, “by the all powerful dispensations of Providence, I have been protected beyond all human probability or expectation.”

Their early jingoistic rhetoric may be a reflection of the irrationality of a rag-tag collection of colonies challenging one of the world’s great and rising powers. In later years, the Founding Fathers repudiated their belief that the United States held a special place in God’s eye. Adams wrote in 1812 “there is no special Providence for us. We are not a chosen people that I know of”.

While they denounced chauvinism, I believe there was an essential truth hidden in their early belief that God had a special role for the nation.

In one of his visions, Emanuel Swedenborg saw a crystal-walled church building that represented the New Jerusalem, or New Church, prophesied in Revelation. A church made possible by the understanding of the Word he was bringing to the world. Above the door was the Latin phrase “Nunc licet” which he knew to mean “Now it is permitted to enter with understanding into the mysteries of faith”. “Nunc Licet” is the quintessential Swedenborg sound bite. It expresses the knowledge that we now have the tools to safely dispense with the prejudice, dogmatism and brutality of blind belief and to study, question and reason our way to true faith.

But knowledge, or discernment, isn’t sufficient. Discernment must direct intentionality, or will, to give form and substance in the natural world to this revelation. Swedenborg needed a Jefferson to take the celestial “it is permitted” and transform it into the words now carved in the white Georgia marble of the Jefferson Memorial: “I have sworn upon the altar of God, eternal hostility against every form of tyranny over the mind of man”.

This then, is the miracle of the American Experiment. Because they had to struggle through inauspicious beginnings, full of bigotry, dogmatism, hate and fear, a society was created dedicated to nurturing each and every person on their spiritual journey. The fledgling American nation demonstrated to the world that, against all belief, the best way to support those journeys, to encourage religion to flourish, is to accept that God meant it when he gave us free will, and no earthly power should attempt to thwart it in matters of conscience.

In this way, the United States became part of the cycle of repentance, reformation and regeneration creating the New Church. For according to Swedenborg, the second coming of Christ is not so much an event as a journey. The Founding Fathers and thousands of others throughout the 13 states carried us forward a giant’s stride. But, as they well understood, they did not complete the task. It is a task we, today, inherit. It is a task we should face with humility from the recognition that regeneration is a gift from God, but also with enthusiasm and joy from the beauty and majesty of what we are creating.

— + —

Once bitter political enemies, John Adams and Thomas Jefferson became friends again in old age. Through correspondence they explored many issues, in particular the blessings of religious freedom. They firmly believed their friendship would continue in the next life. In a letter to Adams on September 4, 1823, Jefferson painted an image of the two of them standing at the windows of heaven. “You and I shall look down from another world on these glorious achievements to man”, Jefferson wrote, “which will add to the joys even of heaven.”

Three years later, on the 50th anniversary of the Declaration of Independence, July 4, 1826, both men entered that blissful state.


The Age of Creation

I’m reading “Now: The Physics of Time” by Richard Muller. It reminded me that by “the expansion of the universe” we don’t mean that the galaxies are moving away from each other, but that more space is being created between them. When I reminded a friend of this, the person responded, “I know this is not a ‘proper’ question, but where does that new space come from?”

Which made me think: God did not create the universe 6000, or even 14 billion, years ago. Creation is happening NOW, and we are witness to it.

[Image: NGC 2818, Hubble Space Telescope]


GDP Blindness

Everyone seems to agree that Europe is a basket case compared to the US. Left-leaning economists say it is because of “austerity”; right-leaning economists say it is because of the “burdensome welfare state”. Wandering around Vienna, Austria recently, I counted exactly 3 people begging and no one sleeping in the streets. There was a vibrant cafe scene. Compare that to what I see every day in the US: beggars on almost every street corner, not only in the city, but in affluent suburbs. I trip over people sleeping on the sidewalk in San Francisco. Admittedly, I am one person spending one week in one European city, but could it be that our relentless focus on GDP growth blinds us to a bigger picture?


Robust Design and the Downside of Efficiency

I just read the article The Downside of Efficiency by . I realized that there is an analogy between what is discussed in that article and changes in the way we design products.

It was formerly the case that engineers were taught to design for maximum performance. The problem was that this often resulted in products that only achieved that performance under optimum conditions; in real life the products were finicky, had high failure rates, and seldom lived up to their potential.

Today the dominant paradigm is to design products to be “insensitive to variation”. Some peak performance may be sacrificed, but the performance that is achieved is maintained under all expected conditions, through end-of-life, etc. The product is robust, and customers end up experiencing better performance than with the “optimized” design.
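The trade-off can be illustrated with a toy simulation (the performance functions below are invented for illustration, not taken from any real product): an “optimized” design with a higher peak but steep sensitivity to variation, versus a robust design with a lower peak but a flat response to varying conditions.

```python
import random

random.seed(1)

def optimized_design(x):
    # Hypothetical "optimized" design: peak performance of 100 at the
    # nominal operating condition (x = 0), but performance falls off
    # steeply as conditions vary.
    return 100 - 50 * x ** 2

def robust_design(x):
    # Hypothetical robust design: lower peak (90), but far less
    # sensitive to variation in operating conditions.
    return 90 - 5 * x ** 2

# Real-world operating conditions scatter around the nominal value.
conditions = [random.gauss(0, 1) for _ in range(10_000)]

avg_optimized = sum(optimized_design(x) for x in conditions) / len(conditions)
avg_robust = sum(robust_design(x) for x in conditions) / len(conditions)

print(f"optimized: peak {optimized_design(0):.0f}, average {avg_optimized:.1f}")
print(f"robust:    peak {robust_design(0):.0f}, average {avg_robust:.1f}")
```

On the spec sheet the “optimized” design wins (its peak is higher), but once realistic variation is included, the robust design delivers substantially better average performance.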

This learning has not generally carried over into our production systems. “Performance” is analogous to “efficiency”. As we strive for greater and greater efficiency, our production systems become more and more “sensitive to variation”. Overly efficient supply chains are susceptible to a single dock-worker strike; monoculture crops are susceptible to new diseases. GMO crops produce infertile seeds by design — what happens if there is a disruption in Monsanto’s production system?

As a society we need to learn the lesson designers learned long ago: design for robustness, not efficiency.


Of Myth and the Gift Economy

Many consider “The Gift” by Lewis Hyde, first published in 1983, “a brilliantly orchestrated defense of the value of creativity and of its importance in a culture increasingly governed by money and overrun with commodities”. Hyde claims that creativity is “a gift” which must be shared to be of value. The shared products of creativity form a Gift Economy.

I see Hyde struggling to draw a nuanced line between a “Gift Economy” and a market economy. In his chapter on usury, this line is substantiated as a boundary between the neighbor and the stranger. I believe he struggles because the line is not sharp, and its lack of sharpness underscores the importance of the gift economy even in today’s world of “market triumphalism”.

Becoming a member of the Tribe

Most business-to-business transactions are identical to consumer transactions: one business has something to sell and another buys it. However, when one company desires to purchase an item that is a critical part of its product from another company, the “consumer” process is often deemed inadequate. In these cases a Supply Agreement is negotiated between the two companies. The agreement attempts to spell out the rights and responsibilities of each party. This will include detailed definitions of the product and its quality, how that quality will be demonstrated, volume of product committed to be purchased, upper and lower bounds for the volume of periodic purchases, when ownership of the product transfers from one party to the other, price of the product and mechanism for adjusting the price, liability, indemnification against intellectual property disputes and more. The Supply Agreement will attempt to anticipate everything that could possibly go wrong and each party’s course of action if any of those scenarios should occur.

It can take many months to negotiate such an agreement.

When done, the agreement is filed away. And the lawyers will tell you, if you ever feel the need to take the agreement out of the drawer, your relationship with the other party is already dead.

Lower level functionaries at the two companies handle the actual transactions between the buyer and the seller. It almost never goes the way it was spelled out in the agreement. The world is just too messy for that. Instead, the person at one company is always communicating with his or her counterpart at the other, explaining their current problem or need and working out a solution. Sales are down (or up), can you postpone a delivery (or bring one forward)? One of our machines broke down. Can we send a partial shipment now and make it up next month? And so on. With a successful business partner, these issues get worked out and are often never seen by senior management. The less visible they are, the more successful the relationship.

This is the gift economy working within the market economy. Every time a buyer helps solve a supplier’s problem they are giving them a gift. There is no quid pro quo, just a belief that the gift will be returned – in a different form – sometime in the future. These transactions are personal. The purchasing agent does not call “the supplier,” they call a particular person at the supplier. They don’t ask that a favor be done for their company, they say “can you help me out here?” As these gifts are exchanged, the relationship is built. The partner that successfully negotiates through these difficulties is often perceived as a “better” supplier/customer than the one who flawlessly delivers on the agreement. This makes sense: if a company has delivered flawlessly to-date, you have no idea how they will behave when the inevitable crisis occurs.

What of that carefully negotiated Supply Agreement gathering dust? It is simply a relic of the ritual that converted a stranger into a tribe member.

The myth of Intellectual Property and the ambiguity of the modern Tribe

Hyde’s mention of the patent system is an opening to a deeper conversation on the role of the gift economy in contemporary society. Hyde identifies the limited duration of the monopoly granted by a patent as an appropriate compromise between a gift economy and a market economy. More important, in my opinion, is the fact that the granting of a patent is contingent on the inventor describing the invention in sufficient detail that others can practice it. This allows others to apply the new knowledge embedded in the invention to other areas, and to begin improving on the invention long before they can exploit those improvements in the marketplace. The inventor may be given a monopoly on the profit from an invention for a limited time, but the knowledge itself immediately becomes humanity’s common heritage.

The fact that patents and copyright have the lofty status of being mentioned in the Constitution points to their significance, as does the unique way they are presented there. The Constitution of the United States generally reflects the philosophy of Jean-Jacques Rousseau and John Locke where “rights” reside in the individual and some of these rights are voluntarily surrendered to government in return for the government protecting other rights. The Constitution seeks to enumerate the rights surrendered to the government and orders the government to not infringe other rights. The Bill of Rights starts “Congress shall make no law…” reflecting the general tone of the document.

By contrast, Article 1, Section 8 states “The Congress shall have Power … To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries”. Here there is no hint of a belief that there exist abstract, natural “Intellectual Property Rights” to be protected, as we often discuss them today. Rather we see a concrete, artificial market right that can be arbitrarily created (or removed) by government.

This reflects the ongoing ambiguity we feel about the rights of the individual vs. their obligation to society. Humans are fragile compared to the rest of nature, and we survive only through a shared culture – which requires shared knowledge. In a small village I share my knowledge for making better arrowheads and reap the benefits of a more successful hunting party. If Procter & Gamble downloads my photograph from the Internet and uses it to sell soap, how will I ever benefit? This becomes theft by the stranger rather than sharing within my tribe.

In modern society, who is my tribe? I may belong to many. Copyright law attempts to thread this needle through the “Fair Use” clause. The reviewer, professor or journalist who uses my work builds on it and passes the gift along. Eventually I share the often intangible benefit. They are part of my tribe, even if I never know them. The person who sells my work for profit, or uses it to sell something else, consumes my gift and owes me compensation.

There is perhaps no better modern-day example of a gift economy than the Open Source Movement. It is the height of irony that almost all of the commerce on the Internet flows through the backbone of the Apache web server – software developed by a cadre of independent volunteers working solely for the satisfaction of creating something appreciated by others – appreciated both by users for its usefulness and by their peers for its elegance. Luckily for all of us, these volunteers have a day job.

The price of the death of Myth

In his book Collaborative Circles, Michael Farrell reports that in 1913 C. S. Lewis “felt particularly skeptical of Christianity, which he viewed as a religion built on recycled mythology”. During a long discussion starting the evening of September 19, 1931, and lasting well into the morning, J.R.R. Tolkien “agreed that the New Testament story was a myth” but that “the historical events had unfolded in the form of a familiar myth so as to create a story that would penetrate human consciousness”. So compelling did Lewis find Tolkien’s argument that the “urbane Oxford scholar” became the best-known Christian apologist.

To Tolkien, a devout Roman Catholic, God is not a myth, but requires myth[1] to achieve his ends. An alternative view is represented by the theology professor who, when asked, “Is God dead?”, replied, “Go to any museum. They are filled with dead gods”. Her point being that there is a universal spirit that must be reinterpreted as “God” for each age. In this view “God” is a myth and only the “Holy Spirit” has an ultimate reality.

Economies of either the gift or market variety exist within, and are, mythologies, if we accept mythologies as world-views that inform our actions. Adam Smith in The Wealth of Nations spoke of us being guided toward the common good “as if by an invisible hand” as we pursue individual gain. But he perceived this as working only within a framework of laws and common mores. Among many it is now fashionable to believe the “invisible hand” does not work within a framework of mores, but is superior to them. In this belief system “anything goes” in the marketplace because the invisible hand of the market will produce the optimum result. The invisible hand becomes the myth, but a myth that replaces the market-as-human-endeavor with market-as-force-of-nature. This is a market we cannot control, and therefore we are not accountable for its consequences. We have ample evidence the invisible hand does not always produce optimum results. If it did, economic bubbles would not exist. Please, bring back the mythology of the staid, conservative community banker!

We compensate for a lack of guiding mythologies by enacting laws. But the nature of laws is that they invite lawyering, that is, parsing the law, cutting it to bits until it has lost all meaning. We end with the absurdity of a jealous wife who tries to poison her husband’s mistress being charged with violating a chemical weapons treaty.

Myths, on the other hand, are not subject to such lawyerly interpretation. Their ability to restrict individual actions comes only from a holistic interpretation and integration into the psyche. A respected banker finds selling worthless mortgage-backed derivatives inconceivable. He may not be able to explain why, except to say “it’s just not done!”

Unfortunately, that respected banker also finds it inconceivable that women should vote. However, in this society, the bohemian, the artist-provocateur, is busy building an alternative mythology. He or she is shunned by society, shunning being the primary enforcement mechanism in a mythology-regulated society. The artist, however, lives within his/her own Gift Economy with a value system that is less dependent on the admiration of greater society. Not even a recommendation from Emerson could get Whitman a job, yet he remained happy in his garret. In a law-dominated society the artist is sent to the Gulag.

(As an aside, in this metaphor Science can be viewed as one of the biggest, baddest Bohemians as it disrupts the social order through the creation of new paradigms.)

In a myth-free society, what is the place of the artist? To create pretty pictures for commercial consumption? Or can the artist go beyond propagating the old mythologies and build new ones? Mythologies that reinterpret the universal nature of man, the “Holy Spirit,” and create a new God for the current age.


[1] At least, myth is required if free will is to be maintained.



The Fragility of Freedom

In high school I was an idealistic and active participant in the local Methodist Youth Fellowship in my New Hampshire town of 6,000 souls. Concerned with growing drug use among teens, our Fellowship’s adult leader and his wife opened the door of their home 24/7 to any troubled youth who felt the need of conversation or just neutral territory. To the upstanding citizens of our community, the only reason he could be willing to let “those people” into his home was that he must be a drug dealer. Our own minister convinced parents not to let their children attend Methodist Youth Fellowship meetings. The sergeant of our town’s police force told me that if I went to this leader’s house, “some day I would hear a crash and there would be a police officer coming through every window.”

This was the ’60s, and adults were threatened by anyone wearing long hair or beads. They complained to the Board of Supervisors that they didn’t feel safe going downtown because of “undesirable” youth hanging out. The Supervisors passed an ordinance banning all people under 18 from the town common and any group of three or more from downtown. In protest, a group of high school students went to the common after school. In a scene out of “Alice’s Restaurant,” the police appeared, dressed in full riot gear recently purchased with a Federal grant made available because of all the threats the nation faced from anti-war protesters. The police got on their bullhorn and ordered the protesters to disperse. The scared teenagers scattered to the winds.

The last protester was hunted down some three miles from downtown and hauled to jail. The charge was “failure to disperse”.

It was about this time that the National Guard was being called out against people saying what most people now believe to be true: the Vietnam War was a mistake. Four dead in Ohio.

I wonder what our government would justify doing if there were a real threat, say from terrorists, and it had unlimited access to our private lives.

One day my high school social studies teacher gave us an unusual lesson. The class was divided into teams. One team represented the leaders of a newly independent country. The remaining teams were to represent different forms of government. After giving us a week to research the hypothetical country and the different theories of government, a debate was held at which each team tried to convince the “leaders” to adopt their form of government.

I convinced them to adopt fascism.

It was easy.

At one point the team representing democracy got so frustrated with the obvious success of my arguments, they shouted out “but what if the leaders become corrupt?” I waved my hand at the leaders and responded, “Are you accusing these people of being corrupt?” The deal was sealed.

In fact, the team representing democracy had the most difficult argument to make. Democracy is messy, provides no tangible benefits, and offers protection against only hypothetical harms. No practical leader would choose democracy. Belief in democracy requires idealism and a willingness to stick with those ideals even in situations where it seems against your best interest.

In 1776, fifty-six people had the courage to sign a confession of treason in support of those ideals. A confession which ends “for the support of this Declaration, with a firm reliance on the protection of divine Providence, we mutually pledge to each other our Lives, our Fortunes and our sacred Honor.” I’m talking, of course, of the Declaration of Independence. Today we are almost unimaginably more powerful and safer than were those 56 gentlemen. Yet, how much risk are we willing to take for those ideals?


The King is Dead. Long Live the King

Particularly with the proliferation of camera phones, there is a growing belief that we are inundated with images and, in particular, that the “soft,” that is, electronic, nature of these images is creating fundamental changes in how we view photography, and perhaps in the culture in general. Whatever the actual extent of these changes, they are deeply rooted in human nature.

The belief that the world is saturated with images did not begin with electronic cameras. In The Story of Kodak, published in 1990, author Douglas Collins writes, “By the late 1980s pictures had become a common coin, bright, plentiful, almost too ubiquitous even to notice.”[1] In her 1973 essay “In Plato’s Cave,” Susan Sontag seems to lament:

“… there are a great many more images around, claiming our attention. The inventory started in 1839 and since then just about everything has been photographed, or so it seems.”[2]

When Sontag wrote her essay, it is estimated that around 10 billion photographs were taken each year. Today that number is closer to 400 billion.[3]

But there is no reason to start with the invention of photography. Creating representational images may be the defining trait of human beings. More common definitions – tool-making, verbal language – have been found in “lower” animals, but image use seems uniquely human.

One particularly fine example of image creation by the earliest humans is “Wounded Bison,” a Paleolithic cave painting in Altamira, Spain. In the classic History of Art, H.W. Janson describes it thus:

“We are amazed not only by the keen observation, the assured, vigorous outlines, the subtly controlled shading that lends bulk and roundness to the forms, but even more perhaps by the power and dignity of this creature in its final agony.”[4]


Figure 1. Left: Wounded Bison, unknown, ~15,000 BCE. Right: Bull -Plate 1, Pablo Picasso, 1945.

Perhaps the earliest examples of human image making are the paintings in the Chauvet-Pont-d’Arc Cave, some dated to more than 30,000 years ago. One thing of note: there is no evidence of human habitation in this cave: no fire pits, no bone scraps, no tool chips.[5] The site appears to have been used solely for displaying images. It may be the earliest example of an art gallery.

Sontag claims “To photograph is to appropriate the thing photographed.” But this explains neither Paleolithic art, nor the meteoric rise in the number of images created in the 21st century. If one considered the image a way of capturing the object imaged, one would keep it near hearth and home, not in a cave, nor in the cloud.

A more likely explanation of the desire to photograph, or more generally, to create images, is our need to be part of a community created by shared experiences. For a species so visually oriented, photography satisfies that need to share.

That photography was a way of sharing was a belief within the Eastman Kodak Company, at least towards the end of the 20th century. Several executives became convinced that the “new” way of sharing images was to view them on your television.[6] This belief led to the 1992 introduction of the PhotoCD system. The concept was that customers would receive a CD containing scanned versions of their photos in addition to their processed negatives and prints. A special player would display these on their TV while the entire family gathered around to watch. However, at that time people were conditioned to watching video on their TVs, so the general reaction to the displayed images was “what, they don’t move?” It would take people becoming accustomed to seeing still images on their computer screens before seeing them on their TV could become popular.


Figure 2. Photo CD player and disc

PhotoCD could have played a role in the digital sharing of images; Kodak convinced all manufacturers of CD drives to make them PhotoCD compatible. However, a fundamental objective of the PhotoCD system was to protect Kodak’s film business from encroachment by electronic image capture. This resulted in complex ‘protections’ being built into the system that often made it difficult or impossible for consumers to use the product as they desired, for example, to make a copy of the disc.[7] More importantly, with the 1990 introduction of the first commercially available digital camera, the Logitech Fotoman,[8] to share images the consumer no longer needed Kodak to “do the rest” after they “pushed the button”.

So identified is photography with sharing that the first camera phone is generally attributed to Philippe Kahn, best known as the founder of Borland Software. However, what Kahn actually did was to be the first person to share an image over the Internet using a cellular phone. He accomplished this in 1997 by taking a picture of his newborn daughter with a Casio QV-10 digital camera, transferring the image to a Toshiba laptop, and then hot-wiring the laptop to his Motorola StarTAC cellular phone. His personal experience, as well as the reaction of the people with whom he shared the image, prompted Kahn to start a company that worked with Japanese cell phone makers to create the camera phone.[9]


Figure 3. First photo shared by cell phone

The first commercial camera phone, the Sharp J-SH04, was introduced about three years later, in November 2000.[10] This segment of photography grew rapidly, and in 2008 Nokia became the largest camera manufacturer, selling more camera phones than Kodak sold film cameras.[11]

Photographs taken with cell phones are seldom printed; they are shared with family and friends by text message, Instagram or other electronic means. The website with the most photos stored is Facebook, with over 90 billion images in January 2011.[12] On Facebook, storage of images is essentially an unintended consequence of sharing them. But is this really any different from the way people treated ‘traditional’ photographs? There was a flurry of excitement when the pictures came back from the photofinisher, but once shared they were put in a shoebox to gather dust.

In the early 1980’s, Leo ‘Jack’ Thomas, Senior Vice President of Research for Eastman Kodak from 1977 – 1985, reflecting on the growing excitement around electronic imaging, commented that if chemical-based photography were being invented today, it would be considered a marvel.[13] His statement continues to be true today. The amount of scientific knowledge and technology condensed into a few microns of emulsion, and the image quality that results, verges on the magical. It’s just that, compared to digital imaging, it didn’t deliver what people want: an easy way to share their experiences.


[1] Douglas Collins, The Story of Kodak, Harry N. Abrams, Inc. (1990) p368

[2] New York Review of Books, October 18, 1973. Available at http://www.nybooks.com/articles/archives/1973/oct/18/photography

[4] H.W. Janson, History of Art, Harry N. Abrams, Inc. (1963) p19

[5] Cave of Forgotten Dreams, a documentary film by Werner Herzog.

[6] One might conclude that these same executives also believed that sharing convenience trumped image quality. If so, they were adept at keeping that belief under wraps. Kodak executives almost universally expressed the opinion that film’s “inherently” superior image quality would keep electronic image capture at bay.

[7] The discussion of PhotoCD is based on the author’s personal experience while employed by Eastman Kodak Company

[12] Justin Mitchell, self-identified ‘Facebook Photos engineer’, in http://www.quora.com/How-many-photos-are-uploaded-to-Facebook-each-day/all_comments/Justin-Mitchell. Accessed 2/24/2013

[13] Personal recollection of the author


4×5 Kodachromes of WWII

A friend clued me into this blog: 4×5 Kodachromes. The lighting and the Kodachrome colors in the 4×5 format make these dramatic images. As was pointed out in the comments, the photos of women factory workers must have been staged – these are essentially propaganda photos for home consumption. They tell a glamorized yet true story of life at home during WWII. It is good to see women other than “Rosie the Riveter”!


My new Photo Blog

I’ve decided to make all my photography-related posts on a new blog, Douglas G. Stinson Photography. I will use “Unexpected Connections” for my more philosophical musings. To keep this distinction clear, I have deleted my purely photography-related postings from this blog.

I hope those of you who have enjoyed my photography will “follow” me at the new site!

Thanks for reading!


Creating One’s Self

In his blog on technology for the writer’s group The Loft, my friend Don reinterprets the Ouroboros, conventionally “he who eats his tail,” as “that which creates itself by speaking itself.”

I know I’m supposed to think the Ouroboros paradoxical. The snake who constantly consumes itself, but is never consumed. But that is not how I see it. I see the circle grow ever tighter until it becomes a point, and then disappears.

However, if the snake is speaking itself into existence, the paradox seems unavoidable. I see it happening, so I must believe it, but it seems impossible because how did it start?

Don writes a bit of PHP code that “eats” a string of characters and regurgitates it in reverse order as a symbolic representation of the transformation from Ouroboros “one who eats his tail” to Soroboruo “that which creates itself by speaking itself”. Comments poetically explain the code’s function, as if holding the code up to a mirror, reflecting the analogy.
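Don’s code is his own, and I won’t reproduce it here. But as a rough sketch only – in JavaScript rather than PHP, with a function name of my invention – the “eating” and “speaking back” he describes might look like this:

```javascript
// "Eat" a string one character at a time from the tail,
// speaking each eaten character back out: Ouroboros -> soroboruO.
function soroboruo(word) {
  let spoken = "";
  while (word.length > 0) {
    spoken += word[word.length - 1]; // speak the last character...
    word = word.slice(0, -1);        // ...as the tail is consumed
  }
  return spoken;
}

console.log(soroboruo("Ouroboros")); // prints "soroboruO"
```

The snake is consumed entirely, yet emerges whole – reversed – from its own mouth.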

I think a better programming analogy might be to the concept of recursion. “Ordinary” functions F(x) take a value x and transform it into a new number, for example F(x) = x² + 1. In code, this might look like function fnF(x){return x*x+1;}. This is easy to understand. Hand the function an “x” and the function will return a “y” equal to x² + 1.

A recursive function, for example F(n) = F(n-1) + F(n-2), creates itself from itself. This might look like function fnF(n){return fnF(n-1)+fnF(n-2);}. This is not so easy to understand. Hand the function an “n”, and it asks itself, ‘what are the values for n-1 and n-2?’ To answer that, it asks itself, ‘what are the values for n-2 and n-3?’ And so forth.

This happens to be the definition of a Fibonacci series. From n=-5 to n=5 one such series is … 5, -3, 2, -1, 1, 0, 1, 1, 2, 3, 5 … . The definition of the function explains from where the next number comes, but from where did the series come? No matter how far back one goes, there are always two earlier elements that needed to be calculated from yet earlier elements. I see it so I must believe it, but it seems impossible because how did it start?

Of course, historically the Fibonacci series was created by assigning F1=1 and F2=1 and calculating forward. Later it was extended to negative “n”. This is rather like freezing the Ouroboros in time, which is exactly what the drawing at the top of this essay does. It is only in our minds that we imagine what the Ouroboros must have looked like before, and before that, and before that, creating the symbolism and the paradox. But didn’t the Fibonacci recursion relationship and the series itself always exist, independent of time, from -∞ to +∞, without us having started it?
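The “freezing” can be made concrete in code – a sketch under the assumptions above, not any canonical implementation. Asserting F(1) = F(2) = 1 gives the recursion a place to start, and rearranging the same relation as F(n-2) = F(n) - F(n-1) lets it run backward to negative n:

```javascript
// F(n) = F(n-1) + F(n-2), "frozen" by asserting the base values F(1) = F(2) = 1.
function fib(n) {
  if (n === 1 || n === 2) return 1;          // the frozen starting point
  if (n > 2) return fib(n - 1) + fib(n - 2); // the relation, run forward
  return fib(n + 2) - fib(n + 1);            // the same relation, run backward
}

const series = [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5].map(fib);
console.log(series); // → [5, -3, 2, -1, 1, 0, 1, 1, 2, 3, 5]
```

Without those two asserted values the function would call itself forever, never producing a number – the paradox, executed.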

If the Ouroboros is “speaking itself into existence”, it is more than recursive, it is self-referential, i.e., talking about itself. Self-referential statements are even more difficult to deal with. The most famous such statement was made by the Cretan Epimenides, as quoted by the Apostle Paul: “Cretans are always liars” (Titus 1:12).

Is that statement true or is it false?

Self-referential statements are so problematic that one is tempted to ban them from logic. But isn’t the ability to examine one’s self, talk about one’s self and modify one’s self the very definition of possessing consciousness?

This is the basis for Douglas Hofstadter’s assertion that self-referential algorithms, or “strange loops” as he calls them, are critical to artificial and natural intelligence.

In his blog, Don calls attention to an analogy between his reinterpretation of the Ouroboros and the Gospel of John:

In the beginning was the Word, and the Word was with God and the Word was God … All things came into being by Him…

When we read this in conjunction with Genesis

then God said, “Let there be light”; and there was light

we see God simultaneously speaking the words that bring the universe into existence and being “the Word”.

In the original Greek, what was written was λόγος (logos), which means not only “word” but “an expectation” and “reason”. I particularly like the thought of the universe existing as “an expectation”, full of potentialities, where we create it as we go by the choices we make.

Emanuel Swedenborg saw the human as a microcosm of the universe and the creation story of Genesis as a symbolic description for individual human development. While he proposed detailed correspondences for each of the seven days, they can be generalized into three steps (1) recognizing the need to improve [Repentance], (2) acting “as if” you were improved, i.e., practicing [Reformation], and finally, incorporating the “new you” into your inner nature [Regeneration]. In a real sense, we are “speaking our new self into existence”. While Swedenborg has a particular way of expressing these concepts, you see similar principles espoused in practically every religion and every secular “self-help” group.
