Tag Archives: creativity

Austin and Asking, 2

I’m re-reading Austin’s How to Do Things With Words, trying to come to terms with these lectures and what perspectives they offer on the broad theme of conversation and collaboration I’ve been exploring in a series of posts on the power of asking.

On my first reading, which I discussed here, I must have nodded midway through Lecture VI, or maybe I just wasn’t in the right frame of mind to appreciate the historical argument Austin advances in that lecture about the “evolution of language” (focusing specifically on the development of the explicit from the primary performative).

…historically, from the point of view of the evolution of language, the explicit performative must be a later development than certain more primary utterances, many of which are at least already implicit performatives, which are included in many or most explicit performatives as parts of a whole. For example ‘I will…’ is earlier than ‘I promise that I will…’. The plausible view (I do not know exactly how it would be established) would be that in primitive languages it would not yet be clear, it would not yet be possible to distinguish, which of various things that (using later distinctions) we might be doing we were in fact doing. For example, Bull or Thunder in a primitive language of one-word utterances could be a warning, information, a prediction, &c. It is also a plausible view that explicitly distinguishing the different forces this utterance might have is a later achievement of language, and a considerable one; primitive or primary forms of utterance will preserve the ‘ambiguity’ or ‘equivocation’ or ‘vagueness’ of primitive language in this respect; they will not make explicit the precise force of the utterance. This may have its uses, but sophistication and development of social forms and procedures will necessitate clarification. But note that this clarification is as much a creative act as a discovery or description! It is as much a matter of making clear distinctions as of making already existent distinctions clear.

One thing, however, that it will be most dangerous to do, and that we are very prone to do, is to take it that we somehow know that the primary or primitive use of sentences must be, because it ought to be, statemental or constative, in the philosophers’ preferred sense of simply uttering something whose sole pretension is to be true or false and which is not liable to criticism in any other dimension. We certainly do not know that this is so, any more, for example, than, to take an alternative, that all utterances must have first begun as imperatives (as some argue) or as swear-words — and it seems much more likely that the ‘pure’ statement is a goal, an ideal, towards which the gradual development of science has given the impetus, as it has likewise also towards the goal of precision. Language as such and in its primitive stages is not precise, and it is also not, in our sense, explicit: precision in language makes it clearer what is being said — its meaning: explicitness, in our sense, makes clearer the force of the utterances, or ‘how…it is to be taken’.

What Austin says here about how human beings came to mark and remark the forces of utterances and took language from a primitive to a sophisticated state can apply to asking as well. In this view, the explicit use of the performative ask (“I ask…” or “I ask that…”) would constitute a step forward in the evolution of language, “a later achievement…and a considerable one.” Austin calls it a “creative act” of “clarification.”

Historically, one thing that act might have helped to clarify — Austin’s caveat about the presumed historical priority of imperatives notwithstanding — is the difference between asking and command, and, therefore, the terms on which interlocutors meet, or the “social forms and procedures” that govern their relationships and necessitate this clarification or distinction.

This puts us in murky territory, and Austin readily admits it. The historical argument here seems “plausible,” as Austin says, but ultimately it may not stand up (though it’s hard to see how it could be decisively knocked down).

This much seems clear: the creative act of explicitly asking will always help clarify the force of asking; and the articulation of that force — that power of asking — essentially creates a new charter for conversation with a second person, an interlocutor or interlocutors whose standing to address us we recognize and whose replies we await and then take into account.

That said, let’s also admit that the explicit performative “I ask…” or “I ask that…” is not (nowadays) so widely used, but is reserved, it seems, for certain kinds of serious inquiry and formal address. (Austin’s own lectures furnish numerous examples of this reserved use, as I suggested in my earlier post; but they were given in 1955, and both words and things have changed, at Harvard and everywhere else, since then.)

Still, making asking explicit can help render the conversation serious, not just because it makes language more precise, but also because it clarifies the relationship between interlocutors and the power they have to reckon with, and share.

Serious Conversations, 7

In these notes on serious conversations, I keep circling back, it seems, to two ideas: first, that what makes a conversation serious is not its subject matter or tone, but the stance of its participants toward each other; and, second, that the conversational stance requires that we confer a certain authority on our interlocutors, or (to put it another way) recognize that they have standing to address us.

While other kinds of authority — title, rank, role — are of secondary importance, and can sometimes even get in the way, this moral authority or standing is fundamental. It does not have to be earned, proven or ratified by reference to some person, written instrument or record of accomplishment outside the conversation, or by institutional setup. It is constituted and realized in the relationship you and I have — or, if that is just too clunky, let’s say it is the relationship you and I have; and it is sufficient authority for a serious conversation because it makes us mutually accountable to each other.

Where this equal human stature (or dignity) is respected (and appreciated), it can be a source of power: not just the power of one over another, but the power to make claims or demands of each other, or to ask and answer, and this power of asking is essential if we are going to deliberate in earnest about our situation or collaborate on something new.

The conversational stance allows for genuine co-creation, because it’s not founded on subordination or one person ordering the other about. And the capacity for co-creation, the creative power that we share, only increases as we include more people in the circle of the conversation. (Of course there are limits: the research on group size and social complexity Dunbar summarizes suggests the circle probably should not widen beyond 150 people.)

I’ve tried to capture this thought in a simple rule: the power of asking will always be greater than the power of command.

That’s the basic position.

Another way to put the same thought might be in terms of the mechanics of ordering versus asking: whereas in the former we have one person directing the will of another, as we might address a short-order cook, in the latter we direct each other’s wills, so that we are, to stick with the metaphor, chefs in our own kitchen.

The usual caveat about too many cooks spoiling the broth applies here, I guess, but let’s also remember that people have different talents, training and competencies, and we can worry about how to order and organize ourselves when it comes to the actual cooking. Right now we’re just having a conversation.

Let’s also acknowledge, while we’re at it, that short-order cooks are models of industrial-era efficiency (but no longer efficient enough for the post-industrial fast food kitchen); gains in co-creativity can and probably will translate to losses in short-term efficiency.

Some concessions on one side or the other will probably have to be made, but too often the proponents of efficiency win without any argument, and people start giving orders or setting out plans for what’s to be done before the conversation even has a chance to get started. That’s when all the real power goes out of the room.

On Making Yourself Useful, Or Not

The Effective Altruists have persuaded Rhys Southan that his screenplay-writing is of no social value and ethically idiotic. They may be right, but he’s going to keep doing it anyway.

Good for him, I suppose. Keep trying, expect failure and look for unexpected outcomes. Take some time to think about why you want to write a screenplay or make a film or pursue a project. (This post by Jay Webb is a good place to start.) But don’t bother with people who tell you to make yourself useful.

Southan is bothered by them, I gather, because he seems to be confused about what art is and the work that artists do — a theme I touched on the other day in a post about the misappropriation of a sentence from Aquinas’ Summa, and a couple of weeks ago in my post on the word “sullen,” where I discussed Ingmar Bergman’s disciplined solitude.

He seems to understand his screenplay-writing, and for that matter all art, as “self-expression”; and then he asks that art improve society. On the one hand, he reduces art to vanity — not a disciplined encounter with humanity of the kind Bergman describes, but an elaborate selfie. On the other, he subordinates art to half-baked social engineering schemes and encourages didacticism or morally uplifting platitudes of the sort Alain de Botton has foisted onto the collection at the Rijksmuseum.

To ask whether art is useful is to ask the wrong question of it — or at least to invite a Thomistic quibble that restricts the meaning of the term and helps move the conversation away from confused Romantic ideas: as an operative virtue, art is useful to the artist. “The craftsman needs art, not that he may live well, but that he may produce a good work of art, and have it in good keeping.”

What’s more, to live merely by calculations of utility of the kind the Effective Altruists urge is not to lead much of a life at all: you may set out to do others some good but you probably won’t have a very good life. May I enjoy a fresh fig or a cigar, split town and head for the coast, putter around in my garden, consider an idea, make love or make a friend without submitting to a utility calculation?

Of course I can and should and will, and this isn’t just a matter of opting for pleasure over other considerations of utility.

A person who becomes my friend, or professes to love me, based on calculations of utility would have to be a sociopath or a monster of some kind. A person who tells me how I might make myself useful — appealing to moral criticism in order to advance a social improvement scheme — would be equally suspect.

To question the altruism of Effective Altruism may ultimately be an altruistic thing to do.

Rickaby’s Doublet — Doing the Work Philosophy Bots Won’t Do

The other day a Twitterbot called @AquinasQuotes tweeted a line attributed to Aquinas, to the effect that to live well is to work well, or to show a good activity.

While others retweeted it and favorited it and seemed to identify with it, I thought the translation sounded ungainly and struggled to make sense of it.

As I’ve noted before, most philosophy bots seem to operate without editorial (let alone philosophical) oversight; so it’s no surprise to find misattributions, awkward translations, sentences taken out of context and once coherent thoughts rendered nonsensical. There’s often not much editorial discernment on the other end of the communication, either; if it sounds vaguely encouraging and uplifting, it will find an audience.

The quotable items the bots serve up usually appear without any link or citation that would allow them to be tracked down and read in context, and in most cases they aren’t even lifted from a work of philosophy. Instead, they’ve been pulled from some existing compilation of quotations — which was made, in the majority of cases, from some other compilation. We are almost always at several removes from the original text.

In this case, I tracked down the quotation about living well and working well to the Summa Theologiae, 1ae-2ae Question LVII Article 5. Here Aquinas takes up the question: Is Prudence a Virtue Necessary to Man? The full argument runs as follows in the translation by the English Dominican fathers.

Prudence is a virtue most necessary for human life. For a good life consists in good deeds (bene enim vivere consistit in bene operari). Now in order to do good deeds, it matters not only what a man does, but also how he does it; to wit, that he do it from right choice and not merely from impulse or passion. And, since choice is about things in reference to the end, rectitude of choice requires two things: namely, the due end, and something suitably ordained to that due end. Now man is suitably directed to his due end by a virtue which perfects the soul in the appetitive part, the object of which is the good and the end. And to that which is suitably ordained to the due end man needs to be rightly disposed by a habit in his reason, because counsel and choice, which are about things ordained to the end, are acts of the reason. Consequently an intellectual virtue is needed in the reason, to perfect the reason, and make it suitably affected towards things ordained to the end; and this virtue is prudence. Consequently prudence is a virtue necessary to lead a good life.

I understand the impulse to get away from “a good life consists in good deeds” or “good works,” but the translation of bene operari as “to work well, to show a good activity” doesn’t really help. First, it tries too hard to articulate the Latin verb, so that instead of a simple construction (“to work well”), we have to grapple with an unnatural sounding doublet. The English Dominicans seem to have understood that it’s not really all that necessary to fuss over the verb operor here, since Aquinas spends the rest of the article breaking down what he means by it: not only what we do but how we do it, from right choice rather than merely from passion or impulse, and so on.  And if we try to parse “show a good activity” we might run into other problems, since it could easily be confused with hypocritical display.

The trouble seems to have started with the publication of Father Joseph Rickaby’s Aquinas Ethicus in 1896, where the Stonyhurst philosopher offered “to live well is to work well, or display a good activity”. I’m still not sure what Rickaby was trying to accomplish with this doubling of the verb (why “display”? why “a” good activity?) and by what contortions he managed to get the adjective “good” for the second half of his doublet from the adverb bene. I take it that with “display a good activity” he’s reaching for something like Aristotle’s “activity of the soul in conformity with excellence or virtue” and that in bene vivere consistit in bene operari the Jesuit hears Aquinas hearkening back to Aristotle’s definition of eudaimonia or happiness as eu zen (living well) and eu prattein (doing well).

It’s unfortunate that Rickaby did not consult with his friend Gerard Manley Hopkins for a more felicitous phrase. The thing might at least have had some rhythm to it.

In any case, it was only a matter of time before someone tried to make things a little more natural-sounding and came upon the word “show.” (I haven’t yet tracked him down, but I should.) That’s how we find Rickaby’s doublet reproduced (without comment) by creativity guru Julia Cameron in her book Walking in This World, in Forbes magazine’s “Thoughts on the Business of Life” feature, and on a whole batch of sites offering inspiring quotations to live by.

I wonder how Forbes readers or Cameron’s readers make sense of this sentence from the Summa, without the benefit of Aquinas’ explication. Do they find in it something like Garrison Keillor’s exhortation at the end of Writer’s Almanac to “do good work”? Or do meaningful work? Or do the work to which one is called? (Can we still talk meaningfully about vocation?) I wonder, too, whether it genuinely clarifies things for them, or why they might wish to identify with the statement and pretend to themselves and others that it clarifies things or inspires them.

This isn’t just a matter of being fussy or snobby about the misreading of Aquinas or deploring the degeneration of philosophy into a meme, though I do have that reflex, I confess. I’m noticing something else happening here, and it has to do with the confusion that Rickaby’s doublet causes, or at least fails to resolve, for modern readers around the English word “work.”

Consider just for a moment the appearance of this sentence from the Summa in Cameron’s book on creativity. It hangs there in the margin on page 105, as a gloss on the following passage: “When we start saying ‘Can’t, because I’m working,’ our life starts to work again. We start to feel our artist begin to trust us again and to ante up more ideas.” We have to make room for “our artist,” who retreats when we are busy and over-scheduled, to come out and play. Then and only then will our life “work” again. That’s Cameron’s word, not mine; she’s saying that when we cordon off time for artistic work, our life “works” — makes sense or becomes meaningful again.

This idea of a life that works should bring us back into the territory of eudaimonia as human flourishing, or happy activity; the life of the working artist flows, not because she acts in accordance with virtue, but because she takes measures to care for the self and allows “her artist,” or what used to be called her genius, to come forward without fear or interference. “We forget that we actually need a self for self expression,” Cameron continues, and that is why we have to say “no” to invitations and other demands on our time: “Instead of being coaxed into one more overextension of our energies in the name of helping others, we can help ourselves by coaxing our artist out with the promise of some protected time to be listened to, talked with, and interacted with.”

The notion of an artist abiding within us who needs to be drawn out and cared for and listened to would be entirely foreign to Aquinas and the Aristotelian ethics on which the Summa draws. That aside, I’m sympathetic to the argument Cameron is making here. Just recently I wrote admiringly of Ingmar Bergman’s “disciplined solitude,” and I know firsthand how hard and how critical it is to secure protected time in order to do one’s work. There’s that word again: work. Maybe it’s always been a confusing word, and maybe that’s why in the 19th century Rickaby felt he had to render it with that doublet. But I have to point out that the “work” of artists, writers, craftsmen, creative people — the work Cameron wants us to put aside time for so that our lives will start to work again — isn’t at all the work Aquinas is talking about at this juncture in the Summa.

In fact, Aquinas takes great pains in this part of the Summa to draw a sharp distinction between the work of the artist and the performance of action: following Aristotle, he distinguishes the artist’s making (facere) from doing (agere); and with this distinction in mind he defines art as “right reason about things to be made” and prudence as “right reason about things to be done.” So the considerations that apply to “working well” or prudent action do not apply to the artist’s work. “The good of an art is to be found, not in the craftsman, but in the product of the art.”

Consequently art does not require of the craftsman that his act be a good act, but that his work be good (ad artem non requiritur quod artifex bene operetur, sed quod bonum opus faciat)….the craftsman needs art, not that he may live well, but that he may produce a good work of art, and have it in good keeping: whereas prudence is necessary to man, that he may lead a good life (bene vivendum) and not merely that he may be a good man.

By the time we’ve gotten to Cameron’s book and its ideas about creativity, the quotation from the Summa has lost all connection to Aristotelian ideas about “work” as virtuous action and the other-directed performance of duties (or what Aquinas calls the “due end” of action). Instead, the focus has shifted here entirely to the self and the demands of “self-expression.” What Father Rickaby called “the display of a good activity” is now sounding more like self-display. Through accidents of translation and misreading, the idea of work that Father Rickaby tried to capture in his doublet has drifted from an activity of the soul in conformity with excellence — or virtue — to what might amount to nothing more than the production of an elaborate selfie.

On Being Sullen

Having been told that I often seem sullen, I decided to look up the word and find out a little more about it. It’s a derivative of the Latin solus, as are English words like “solitary,” “solitude,” “solo” and “sole.”

Sullen doesn’t always just mean morose, though that is the sense in which we most often use it these days; and I am pretty certain that was the sense in which the word was directed at me. It’s associated with mourning — “sullen black,” in the words of a remorseful Bolingbroke at the end of Richard II, just after he announces the gloomy fate of Exton: “With Cain go wander through shades of night,/ And never show thy head by day nor light.” The sullen mood takes us well to the east of sunlit Eden, and seems often to arise from a sense of having been wronged, or at least a sense that things have gone terribly wrong.

So in some twentieth-century English translations of the Book of Kings, Ahab retires in chapter 21 to his palace in Samaria and, “sullen and angry,” takes to his bed and refuses to eat: Ahab had tried to negotiate a land swap, but “Naboth the Jezreelite had said, ‘I will not give you the inheritance of my ancestors.’” (Ahab’s wife Jezebel will soon fix that.)

Wycliffe makes Ahab dyspeptic (“having indignation, and gnashing on the word which Naboth of Jezreel had spoken to him”), but if we are not going with “sullen” we should prefer the King James rendering of the Hebrew adjective (sr or sar) here as “heavy,” and remember that “sullen” can connote heaviness. The sullen person carries a weight, or is likely to sink or feel weighed down, like the bride in Lars von Trier’s Melancholia. “The sullen passage of thy weary steps” is the apt phrase. (That’s Richard II again.) He or she can be obstinate, stubborn and unyielding as well. So can sullen animals. Daniel Defoe writes of a bull that is “sullen, untractable [sic], and outrageous,” and in the 17th century we find a horse described as “sullen” and in need of the spur.

But the story about “sullen” that interests me most begins in the 14th century. That’s when we first find “sullen” applied to those who deliberately keep to themselves – “a soleyn by hymself” (as a line in Piers Plowman has it) — because they are averse to society or disinclined to be social. This sullen character is a melancholic, the predecessor of the early modern misanthrope — and maybe a remote ancestor of Henry David Thoreau or a great-great-great uncle of Nietzsche’s Zarathustra. I’m not going to try to put the whole family tree together here, but I think it’s fascinating to consider the emergence of this solitary figure and follow his adventures in the modern period.

It’s also worth noting that here we have the most radical use of the word “sullen,” in the sense that it connects us with the word’s roots in the condition of solitude or going solo.

This sullen one separates himself from the madding crowd, or withdraws far into himself. Even in a crowd, he can sink into his sulk. Of course, deliberately keeping to oneself and withdrawing into solitude can carry a cost. The 19th century critic George Bancroft wrote disapprovingly of Byron and other romantic poets who through “sullen misanthropy” had divorced themselves from “the haunts of man” and squandered their gifts; and in another post I’ve written about the pain that Zarathustra feels as he takes leave of his friends and returns to his mountain haunt at the behest of his mistress, The Stillest Hour. The recognition that a sullen disposition can be painful or damaging is hardly unique to the 19th century: even in the medieval poem Richard the Redeless we find logic splitters that are “so soleyne and sad of her wittis” that they can’t reach conclusions.

Sullen withdrawal and the solitude it can bring is not, however, just a way of absenting oneself, and it’s not always confounding. Maybe that’s obvious, but how often do we appreciate the illuminations that gloom can bring? A sullen turn of mind is a special kind of about-face, away from sociability and cheery outward show — to face oneself. The sullen figure (at least the one who interests me most) takes his solitary way, not just out of Eden or the haunts of men, but into himself and into the human interior.

To be sullen in this sense is not just to play solo, but to play with solitude itself. So Dr. Johnson thought the epithet “sullen” could not be applied to the trumpet, but he never heard Chet Baker. Ingmar Bergman writes in his autobiography that as a child he was “considered sullen and too sensitive”; but in his mature years, as Dorthe Nors notes in a recent essay, he became a master of disciplined solitude. “In my solitude,” Bergman writes, “I have the feeling that I contain too much humanity” — and for Nors that excess, that overflowing of humanity, is the wellspring of artistic creativity. It is not just self-imposed exile, but an encounter.

The New Collaboration

[Chart: frequency of the word “collaboration” over time]

“Collaboration” enters the English language in the latter half of the nineteenth century, from French. The OED notes that the word applies especially to literary, artistic and scientific work.

The spike in usage during and immediately after the Second World War comes as no surprise. In a second French import, the words “collaborate,” “collaborator,” and “collaboration” figure prominently in accounts of quislingism or collusion with the authorities and occupying forces.

But what explains the surge in usage from the mid-1980s on? Nothing very French at all. Instead, during this period, the American business world rehabilitates the word.

The word “collaboration” has now shed many of its sinister associations, and it’s become so commonplace that we no longer consider it pretentious or even wrongheaded to elevate the doings of the workplace to a level of human achievement and excellence formerly reserved for intellectual and artistic endeavor.

In the process, we have lost sight of just how rare, intellectually trying and emotionally fraught truly collaborative work can be.

The special working relationships forged by composers and librettists, scientists and illustrators, dancers and musicians, writers and photographers, etc., are usually not made to last; but while they last, they offer collaborators a chance to accomplish something that they could never accomplish if left to themselves.

Now, however, we are regularly asked to believe that collaboration can be something people do every day, on the job. How is that going to happen?

You obviously can’t mandate collaboration: “’lets force people to collaborate.’ Sounds really dumb, doesn’t it?” business consultant Daniel Mezick asked just the other day. Dumb — or downright totalitarian. It’s equally senseless to expect collaborative behavior where people are getting bossed around, or promote collaboration while leaving powers of command and organizational hierarchy intact.

Misguided efforts to institutionalize collaboration can also crush creative resistance and penalize rule-breaking — the very essence of successful collaboration — or at least rein in and stifle creative individuals who excel when they disregard protocol and go it alone.

Denning and the Death of Hierarchies

Steve Denning, the “radical management” and leadership guru, published a post at Forbes.com yesterday about the shift taking place within many organizations, away from hierarchical models of command and toward more fluid, flexible and agile setups. Drawing on Fairtlough’s The Three Ways of Getting Things Done — which argues that the only “effective” organizational models are hierarchy, heterarchy and responsible autonomy — Denning argues that hierarchies “must sign their own death warrants to survive” in what he likes to call the Creative Economy.

In this post, Denning’s interested in why business leaders cling to hierarchy even in the face of evidence that it’s no longer the most effective way of getting stuff done (if it ever was), and in the paradox that in all the examples he can find, “it’s the hierarchical management itself that has led the shift away from hierarchy. The shift didn’t occur as a kind of bottom-up movement. It was the top that saw that there was a better way to make decisions and went for it.” Flatter organizations tend to cleave to the status quo and work within established frameworks, he observes.

Of course plenty of other people within an organization might see that there is a better way. Those atop the organizational hierarchy are the ones permitted or entitled to say it aloud or do something about it. Hierarchy isn’t just a way to get things done; it’s also a way of distributing power, and the power relations hierarchy maintains are a daily fact of life for subordinates. They usually don’t have a place at the table when the organizational models are being drawn up or redrawn. In order to effect change within a hierarchy, those at the bottom – and the middle – would need to be enlisted as stakeholders, entrusted with real power and respected as equals (which would itself require some undoing of the organizational hierarchy).

I am a little puzzled that Denning doesn’t present a more considered and nuanced view here of the way power actually works within organizations – and of the way concentrated power can hamper performance and kill ideas, or even the motivation to offer ideas about how to do things better.

That aside, and no matter how or why or by whom “the shift away from hierarchy” is brought about, Denning’s article is a good place to start talking about what this shift will really entail and require of people at every level of a hierarchical organization. It seems fair to say that as organizations get flatter and try to operate with more creativity and agility, the way things are coordinated – the way we use language to order the world, get things done and coordinate action — will itself have to undergo a radical change. The way I’d put it is that coordination will have to shift from the power of command to the power of asking.

Indeed, how we use language – how we make claims and demands on others, how we talk and listen to others about what to do — can itself help effect a shift from hierarchical command structures to the more fluid structure associated with the give and take of serious conversation (the rough equivalent, to my mind, of what philosopher T.M. Scanlon calls “co-deliberation”). I’ll have more to say about what constitutes a serious conversation in a future post.

People In the Way

It’s good to see that Jane Catherine Lotter’s obituary in the Seattle Times has gone “viral” — whatever that is supposed to mean anymore. I suppose if something in the culture — a meme, a song, a fad or a bit of slang — manages to reach me, it must have pretty wide circulation: I don’t keep up.

Lotter wrote it herself, as she was slowly dying, noting that “one of the few advantages of having Grade 3, Stage IIIC endometrial cancer, recurrent and metastasized to the liver and abdomen, is that you have time to write your own obituary.” She faced death with humor, courage and grace.

I was especially struck by what Lotter had to say to her children, Tessa and Riley. “May you, every day, connect with the brilliancy of your own spirit. And may you always remember that obstacles in the path are not obstacles, they ARE the path.” I’ve certainly come across the thought before; I suppose we all have. De mortuis nihil nisi bonum, and don’t come to obituaries looking for poetic or philosophical originality. Besides, it’s more interesting to reflect on the reasons why the thought has stayed with me over the past few days.

First, because I have been trying to get a big new project together, and I always struggle when starting a new project to take the little steps that will get me to the big place I see in the distance. When I am struck by an idea, excited by a project, or even when the first words of a piece of writing come to me, I can easily forget that eureka is just the start of the journey. I am impatient and I want to rush ahead; I look for shortcuts, end up taking detours and don’t take in the sights because I am so focused on where I think I am heading. And since I never end up exactly where I first intend to go, I would learn a lot more if I would allow myself to experience the trip.

It gets worse than that. Every difficulty I encounter seems like some kind of grand injustice the universe, or some evil deceiver, has visited upon me. Every time I stumble or fail to make progress — which is more often than I care to admit — I risk falling into the trap of blaming myself, thinking I have betrayed myself, or just feeling sorry for myself because I am up against insurmountable odds. When others don’t see things my way or express doubts, or don’t sufficiently rally to the idea in which I have fully invested my ego and imagination, or simply say they don’t get it, whatever it may be, they can become my persecutors and enemies, even though their intentions may have been friendly.

I am exaggerating (a little) to make a point: the emotion that takes over at such moments is powerful and undeniable. At root, I suspect, these feelings stem from a sense of vulnerability: new ideas, new plans, new projects — all make you newly vulnerable, because they are disorienting and will more likely than not fail.

The pursuit of an idea, a plan or a path entails great moral risk, especially when we come up against others. Just consider how often you hear, or how often you think, that people are in the way. It’s hard not to feel this way, at some point, if you live in New York City. I’m heading down the stairs to the subway platform, and someone in front of me is moving slowly, lumbering, limping, tired, breathing heavily, grunting, dragging a granny cart or leading a toddler down the stairs, cute little step by adorable little step by sweet little step. I can hear the train coming into the station. Not the train: my train. Get out of my way! On the sidewalk, badly dressed, slow-witted tourists, sweating and bloated with their deep-fried lunch, walk four and five across, gawking and without any sense of direction. Single file! Don’t know how to merge at the Holland Tunnel? Honk! People line up six, twelve, twenty-four deep at checkouts, taxi stands, restaurants — nearly everywhere you go. End of the line.

So in our rush, in our huff, when we are inspired, wired and just plain tired, we reduce people to inanimate objects or obstacles in our way. That puts us in the same moral ballpark as seeing people as means to our ends, instruments of our will — the outrage is that they are not mere extensions of our will — but it’s a little more sociopathic and depraved. People in the way need to be shoved aside, eliminated or made to disappear. They are not human beings but mere blocks; they might as well be sawhorses, sandbags or Jersey barriers — and it’s all the more irritating that they are not cast from concrete and set down by government order; they are alive, with all the appearances and behaviors of intelligent humanity, and yet they are very much in the way.

Sometimes we say that people are in the way when they are not even there, in front of us; they are in the way because they are obstinate, or don’t see things our way, or because they are creating difficulties of one kind or another. This is the more interesting case, and it involves risk of a different magnitude. For starters, it’s a strange abuse of language to talk about these people being “in the way” when there is no way apparent — no road, no staircase, no sidewalk or path. We speak as if there is a single orientation in the world — as if there were a way, the way, my way, as if the right way for all people were established by one person’s willing it. My way or the highway. Why doesn’t she get with the program? “The way” even has a whiff of providence about it, as if it reflected some higher order, and echoes of messianic religious vocabulary.

It also suggests we know where we are going — which of course we do not. And this is perhaps the greatest risk we run: to think that we know the path before we have traveled it, and that we have secured our ends simply because we have set out toward them. Stephen Covey advised highly effective people to start with the outcome they want to achieve, but the more important lesson is that you are most likely to achieve something other than what you set out to do. That’s a basic truth about human action, and a pretty good reason to set your sights on something other than being highly effective. This is especially so if you think of yourself as a leader. The leader who cannot or will not admit his vulnerability and uncertainty about the best way forward will probably just end up getting in everybody’s way.

Creativity or Command?

John Hagel and John Seely Brown have a new piece on CNN Money called “Welcome to the Hardware Revolution” that nicely highlights an issue I touched on yesterday: the limits that institutionalized power — or the ways we institutionalize power — can place on learning and innovation.

I suggested, in passing, that business organizations rely on hierarchical models of command-obedience in order to achieve efficiency; but that doesn’t always work to their advantage (and the performance advantage to be gained from scalable efficiency, Hagel and Seely Brown have argued elsewhere, isn’t anywhere near what it used to be). What these companies may gain in efficiency they lose in creativity, learning and innovation.

Here is what Hagel and Seely Brown have to say in their “Hardware Revolution” piece:

many of the executives we speak with list talent development and innovation as top priorities, but for all they push, progress remains a struggle. Part of the problem is that most businesses’ institutional structures, hierarchies, and cultures actually limit the connecting, exploration, tinkering, and improvisation that make learning and innovation possible.

Increasingly, the sharing of ideas and new developments is taking place outside big companies or officially sanctioned workflows and processes, in what Hagel and Seely Brown call “creation spaces.” These are spaces — communities, networks and cultures — conducive to what Illich would call conviviality: places real or virtual, open and decentralized, where people congregate to share tools and experiment together, learn from one another, try new things, and be part of a community of people with shared interests.

The article goes on to recommend some basic questions business executives can ask themselves in order to improve their companies and move them toward creativity, or at least get their bearings. But there’s another question looming behind those: the question of how (in a large and established organization) you go about institutionalizing the kind of practices you find in creation spaces. Eventually, something’s got to give; and if it comes down to a choice between preserving organizational hierarchies and legacy models of how stuff gets done, on the one hand, and opening the doors to creativity, on the other, how many will choose the latter?

Maybe the choice is not as stark as my title makes it out to be. Let’s just say there are better ways to spur creativity than to command it.

Collecting My Thoughts, Collecting Myself

I’m starting to think of this blog as INSEAD professor Gianpiero Petriglieri has taught me to think of Twitter: as “a public notebook” — a place to collect, reflect and share. I’ve kept notebooks for years, but they’ve always been closely kept, private affairs. Those notebooks can be scattered and go off in all sorts of directions, but I realize that the public aspect of this notebook carries some obligation to put the pieces together every once in a while, or at least to reflect on where things appear to be heading.

This morning’s post about Sister Mary Lou Wirtz and non-coercive leadership brought some new clarity and gave me a chance to gather my thoughts.

As anyone who reads this blog regularly knows, I’ve been looking at models of non-coercive power, dialogue as an alternative to coercive power, asking as an alternative to command-obedience. It’s a big topic, which I try to describe with the rubric The Power of Asking, and it branches out in many different directions.

I’ve written about the use and abuse of the word “ask” and the practice of asking, as well as some literary and historical examples that illustrate the subject. On a related front, I’ve looked at human rights frameworks, including the Ruggie Guiding Principles on Business and Human Rights and the concept of “free, prior and informed consent” enshrined in the UN’s Declaration on the Rights of Indigenous Peoples, and tried to appreciate how questions of autonomy, respect and consensus-building figure into them. In posts on the role of business in society and on issues in corporate governance, I’ve explored some alternatives to organizational models of command and control (which institutionalize coercive power or command-obedience, usually in the name of “efficiency” and often to the detriment of creativity, learning and innovation); and I’ve tried to outline some of the rules of “engagement” that could inform a framework for shareholder dialogues to make companies more responsive and responsible.

There are other models to consider as well — call them conceptual models. My reading list would have to include Illich on conviviality, Arendt’s discussion of “initiative” in The Human Condition (no, the whole book, from start to finish, but especially the section on Action) as well as, I suppose, Habermas’s reflections on non-coercive dialogue, and some of the work on ethics that came out of Bernard Williams’s seminal paper on “Internal and External Reasons.” And then there are anthropological texts like Clastres’s Society Against the State, Richard White’s The Middle Ground and James C. Scott’s book on Zomia, which ground some of these philosophical considerations in social realities. These are touchstones for me, readings that have shaped and continue to shape my thinking about non-coercive power and help me ask questions about language, power and the possibility of dialogue. They’ve also helped me to think about what happens to all these questions when one attempts to move from theory to practice; more often than not, as I’ve tried to make clear, things fall apart, good intentions go bad, and people resort to coercion, displays of power, issuing commands, demanding obedience and asserting authority.

With this framework in mind, I am currently developing two projects. The first is a writing project that I’m calling The Power of Asking, which will develop the theme of non-coercive power and try to articulate something like a model of non-coercive leadership. The second is a documentary film tentatively titled Prosperity, which is set against the backdrop of the mining boom around Lake Superior in Michigan’s Upper Peninsula; there, issues of free, prior and informed consent as well as critical issues of corporate responsibility and environmental ethics are at stake (along with the future of Lake Superior itself).

As I continue to write on the theme of asking and develop and raise funds for my new documentary project, I will probably break out each of these projects into subpages. (And by the way, if you know anyone who is super-talented when it comes to WordPress and can help me make this whole effort a little less pedestrian-looking, please send me a message — @lvgaldieri — on Twitter). In the meantime, rest assured there is method in my madness here, even if I myself often don’t realize it.