There was an op-ed in the Wall Street Journal last week on artificial intelligence. AI is the process of making machines (software and hardware) intelligent so that they can make decisions independently of specific human direction.
So I will reprint this blog entry, which is relevant to everyone, regardless of where they work, what they do, or how they are involved:
Helping Government Learn, March 23, 2009
Michael Skapinker of the Financial Times, in the column "Managers must listen before disaster strikes," calls our attention to the report from the UK National Audit Office, "Helping Government Learn." Also see the Executive summary (PDF - 238KB), Press notice (HTML), and Literature Review (PDF - 606KB).
From the column:
In 1999, newspapers were full of pictures of would-be British holidaymakers queuing in the rain because a new processing system at the UK Passport Agency had failed to produce their passports.
By contrast, the 2006 roll-out of “e-passports” – passports with electronic chips – proceeded without a bump. Why the difference? The leadership of the agency adopted new strategies. They talked about possible risks at the beginning of every meeting rather than at the end. Managers had a commitment to “knowledge transfer” – learning from others and teaching them – written into their contracts. The e-passports team also made regular contact with passport agencies involved in similar programmes in five other countries.
People at the top began with an assumption that they did not know everything – that front-line staff and customers probably knew more. Several of the report’s examples of listening managers were in departments that had, like the Passports Agency, previously suffered catastrophic setbacks.
The real challenge will be finding leaders who listen before disaster strikes. That will be harder. It is self-confidence that propels people to the top, not openness to criticism.
His piece is about organizations generally, and he starts by discussing the financial crash. He asks "Will employees who dissent always be ignored or harassed?"
I think yes. Too often, people see analysis as criticism, and as personally directed. I find two past articles from the Harvard Business Review particularly relevant to this issue. The first, "Performing a Project Premortem," is summarized as:
In a premortem, team members assume that the project they are planning has just failed--as so many do--and then generate plausible reasons for its demise. Those with reservations may speak freely at the outset, so that the project can be improved rather than autopsied.
and the second, "The Experience Trap," which is pretty damning. Rather than analyzing issues and problems, most managers simply expect that problems will occur. They take this for granted, dooming themselves to repetitive failure rather than developing robust systems designed to capture learning and apply it to increase the likelihood of future success. From the article:
Why Learning Breaks Down
When anyone makes a decision, he or she draws on a preexisting stock of knowledge called a mental model. It consists largely of assumptions about cause-and-effect relationships in the environment. As people observe what happens as a result of their decisions, they learn new facts and make new discoveries about environmental relationships. Discoveries that people feel can be generalized to other situations are fed back, or “appropriated,” into their mental models. On the face of it, the process seems quite scientific—people form a hypothesis about a relationship between a cause and an effect, act accordingly, and then interpret the results from their actions to confirm or revise the hypothesis. The problem is that the approach seems to be effective only in relatively simple environments, where cause-and-effect relationships are straightforward and easily discovered. In more complex environments, such as software projects, the learning cycle frequently breaks down. In the experiments we carried out with our study participants, we identified three types of real-world complications that were associated with the cycle’s breakdown.
What I try to do through the writings in this blog is to capture and disseminate knowledge, to suss out the reasons for success or failure, and figure out how to capture that learning into the development of robust change programs:
1. Indicate -- identify the particulars of the processes and structures of success and failure.
2. Duplicate -- figure out how to duplicate (repeat) success.
3. Replicate -- develop the systems, structures, frameworks to apply programs to different situations and communicate them throughout innovation networks.
4. Accelerate -- figure out how to speed up successful innovation and programs.
Transformation, innovation, and creative problem solving are the goal.
But most people aren't very interested in dealing with challenges to their world-view, especially higher-up officials and, often, people in "think tanks."