Complexity threatens to overwhelm our workplaces and our society. The reality in which we are living at the beginning of the third millennium is characterized by a drastic rise in complexity, which has caused rapid changes in corporate and human behaviour. Complexity is an inherently subjective concept; what is considered complex depends on the point of view of a given individual or organization. When we term something complex, we are using everyday language to express a feeling or impression that we dignify with the label complex.

Every time a decision is made to significantly change an aspect of your business, you are starting a new battle between complexity and simplicity. This battle is fought on many fronts, and most of the time complexity wins. The first front is the analytical one, of which financial justification is the major part. Complexity usually sneaks across this front unseen. The cost of complexity is hard to quantify, so it is generally not considered.

“Clarity” is one of those rare words in the English language that is so basic, so fundamental, that it virtually defines itself. We know when we have clarity, but it can be very elusive. More importantly, it is something we need—in our lives and our businesses—if we wish to move forward. Questions need to be asked routinely in business, such as “Is there a simpler way?” and “What will the impact of this decision be on the business in terms of added complexity?” Simplicity won’t always win the war against complexity. However, it has a much better chance of doing so if you take the first step and make the commitment to simplicity as a value. One small step for you, one giant step for your business.

There are numerous examples in our public, organizational, and private lives that illustrate the urgent need to create awareness of this critical problem and to seek clarity and, ultimately, simpler processes.


  • In the Netherlands, an elderly woman spent a week in a shopping mall. She could not find the exit. She bought food during the day and slept on a bench at night. She could not find anyone to ask where the exit was.

  • In France farmers rioted because they could not understand the new laws they were supposed to obey. They blocked the roads for days with tractors and farm equipment, almost paralyzing the country. The laws were too complicated.

  • Industry research suggests that, unless they have an adolescent at home, over 90 percent of consumers do not use 95 percent of the features of their video recorders because they are too difficult to use.

  • When we shop at a large mall, we frequently forget where the car was parked. Mall owners employ staff to help customers find their lost cars.

  • Each year, millions of Americans wrestle with their income tax returns. The laws are so complicated that one in five of the nation’s taxpayers waits until the final week to file by the deadline. In 2002, 27.1 million taxpayers—more than 20 percent—waited until the last minute to submit their income tax returns. Generating maximum tax refunds is now a national pastime.

  • Computer manuals are written by the programmers and technicians who are responsible for creating the software. Knowing the computer systems as well as they do, they cannot understand the problems facing users who do not know the systems. The result is mass confusion due to complexity.

  • After NASA first started sending astronauts into space, they quickly discovered that ballpoint pens wouldn’t work in zero gravity. To combat the problem, NASA scientists spent a decade and several million dollars developing a pen that writes in zero gravity, upside down, underwater, on almost any surface, including glass, and at temperatures ranging from below freezing to 300°C. Cost to the taxpayer? Don’t ask.

The Russians were faced with the same problem; they used a pencil.


We should admire the profession of cartooning. Cartoonists are daily faced with the challenge of making all sorts of readers smile and laugh by delivering an image and text in a simple, direct fashion. I suggest that they do this by using a simplified approach.

  1. Cartoonists use simple language and images. Through its use of parable, a cartoon must speak to a wide audience.

  2. Cartoonists focus on the basics. A cartoon strips the topic down to its bare essentials without clouding them in detail. The philosophy is that while the details may be important, they can always come later by reading about it elsewhere. Nonetheless, they won’t be worth anything if the fundamentals are not understood first.

  3. Cartoonists do not think for us. A cartoon should encourage us to interpret its message for ourselves. In this way, our conclusions are much more powerful and much more likely to stay with us.

Perhaps most powerful of all is that cartoonists’ results are concepts that can be easily shared. Readers of “cartoons” become members of an informal “club.” They might share a new “language” and can readily compare each other’s individual approaches to change using the simple cartoons. Cartoonists have to make the complex simple and direct. There are no second chances.


There is a long history of software problems that have led to serious disasters. The cost has been astronomical. Below is a list of some of the better-known failures. The figures in brackets are the estimated costs of the project. As you read through this list, a recurring theme is that these large projects were extremely complex—to design, test, construct, and implement, as well as manage.

  • 1960 – The first successful US Corona spy satellite mission was launched after 12 previous failures due to software problems. (Cost too large to calculate)

  • 1962 – The United States launched Ranger 3 to land scientific instruments on the Moon, but the probe missed its target by some 22,000 miles due to software problems. ($14 million)

  • 1962 – Mariner I was launched toward Venus, veered off course within seconds, and was ordered destroyed. It was later found that a single hyphen was missing from the computer launch code. ($16 million)

  • 1981 – US Air Force Communications & Control Software exceeded estimated development costs by a factor of 10. ($3.2 million)

  • 1987–1993 – An attempt to build an integrated car and license software management system in California failed. ($44 million)

  • 1992 – The London Ambulance dispatch system had to be scrapped because it had not been sufficiently tested before introduction. Lost emergency calls and duplicate dispatches were just a few of the problems encountered. ($50 million)

  • 1993 – Integration of SABRE reservation system with other online systems failed. ($162 million)

  • 1995 – The opening of the new Denver International Airport was delayed for over nine months due to software problems in its baggage-handling system. (Cost not disclosed)

  • 1997 – All development on California’s SACSS system was stopped after exceeding budgets. Eight alternative solutions were later considered. ($312 million)

  • 1996 – The Ariane 5 rocket—launched by the European Space Agency—was destroyed shortly after takeoff. The cause? A failure in the Ada flight software. ($500 million)

  • 1999 – The Mars Climate Orbiter was lost due to a metric-unit conversion error in its software, and its companion, the Mars Polar Lander, crashed into the surface of Mars after a software fault. ($165 million)

It is estimated that software failures cost industry over $100 billion in the year 2000. Some observers say that figure is conservative and that the actual cost is much higher.

Software used in critical life-supporting equipment should always be rigorously tested before release. On occasion, failures can have dire consequences. Take the case of the Panamanian x-ray disaster in 2001. An x-ray machine was incorrectly computing dosage rates and exposure for patients. Twenty-eight people were overexposed, and three died. The remaining survivors are likely to develop “serious complications, which in some cases may ultimately prove fatal,”1 according to the FDA.

There have also been several near misses. For example, in March 1997, the three-man Soyuz TM-24 barely evaded two potentially catastrophic software flaws during its return to Earth. First, after separating from its propulsion module, the command module was nearly rammed by the jettisoned unit when its control computer fired the wrong set of pointing rockets. Moments later, the command module’s autopilot lined it up for atmospheric entry—but in precisely the wrong direction, nose first rather than heat shield first. Manual intervention fixed that problem—but even at the height of the shuttle-Mir US-Russian space partnership, there is no indication that the Russians shared news of either of these flaws with NASA.

The rest of the descent appeared to go as planned, and the parachutes and soft-landing engines did their job. As in about half of all Soyuz landings, the landing module wound up on its side, probably pulled over by a gust of wind in its parachute just at touchdown.
The three men, who knew they were far off course, were able to open the hatch themselves and get out, as it’s a much easier drop to the ground when the capsule is on its side. They then waited two hours to be spotted by a search plane, and several hours more for the arrival of the first helicopter. This is not what you would call a smooth and predictable landing.

The complexity of large software systems cannot be overemphasized. We simply do not have the rigorous testing and deployment mechanisms that can manage this complex environment. More efficient automated tools are needed to break down the complexity and manage it. Where automated tools do not exist, we need to manage the complexity with self-managing systems.

One would hope that we would have learned our lessons by now, but it appears that we have not. Until we do, software disasters will continue to haunt us.


There are numerous definitions, interpretations, and academic theories associated with the study of organizational complexity. In this book and for simplicity’s sake, we shall define complexity as arising from the inter-relationship, interaction, and interconnectivity of elements or processes within an organization or system and between a system and its environment.

Complexity is the opposite of simplicity; it is simplicity that has failed. For most things there is a simpler way of proceeding, if there is the desire and motivation to look for it, but simplicity rarely happens on its own. There is always the possibility that no simpler way exists; even so, it is worth the effort to find out. The simpler method is not easy to find: it requires creative thinking, effort, and analysis. But when organizations provide the tools to conduct business in a simpler fashion, there is always recognition that a substantial event has occurred.

Below is a set of eight general rules that define a process for reducing complexity in business.

  1. Management must support the initiative

This is an obvious rule, but senior management must support and provide adequate resources, personnel, and time to make the project a success. Complexity reduction is a new process that may be unfamiliar to management. Therefore it will need a senior person as its champion.

  2. Determination to succeed

The project team and the individuals on that team must be motivated to succeed. Clear guidelines for expectations, roles, milestones, and recommendations must be identified and agreed upon.

  3. Understanding and knowledge

Team members need to be selected on the basis of their knowledge of and experience in the areas or processes under review for complexity reduction. If a process is to undergo complexity reduction, the team must know that process well to be successful. It must also have the motivation to make things simpler.

  4. Flexibility, options, and design

The keyword for this rule is design. In order to make simpler and more effective business processes, analysis of the options and alternatives is required to reduce the stated complexity. Teams should present several options that provide flexibility in the solution.

  5. Challenge everything

Everything needs to be challenged. Teams need to dig deep into the pile and evaluate the need for every process or business area. This can often be difficult, as other managers will offer significant resistance to any challenge, and turf battles often result from the quest for simplicity. Systems have a tendency to grow increasingly complicated unless a deliberate effort is made to simplify them.

  6. Decomposition

Complexity can be effectively understood, and often reduced, when processes are broken down into smaller, more manageable segments. This is the process of decomposition. Decomposition also clarifies thinking and will expose the complexity already in place.
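To make the idea of decomposition concrete, here is a small, purely hypothetical sketch in Python; the order-handling process and all its names are invented for illustration. A single opaque routine is replaced by small, separately testable steps.

```python
# A hypothetical "order handling" process, decomposed into small,
# separately testable steps rather than one opaque routine.

def validate_order(order: dict) -> None:
    # Step 1: each segment does one thing and can be reviewed in isolation.
    if order.get("quantity", 0) <= 0:
        raise ValueError("quantity must be positive")

def price_order(order: dict, unit_price: float) -> float:
    # Step 2: pricing is separated from validation.
    return order["quantity"] * unit_price

def handle_order(order: dict, unit_price: float) -> float:
    # The top-level process is now just the composition of its parts.
    validate_order(order)
    return price_order(order, unit_price)

print(handle_order({"quantity": 3}, 10.0))  # 30.0
```

Each step can now be reviewed, tested, and simplified on its own, which is exactly what decomposition buys in a business process as well.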

  7. Best-of-breed solution

Teams need to recommend the best and simplest solution that will work for the business process. In presenting such a solution, the process by which it was arrived at must be documented, and a business case must be created, explaining potential reductions in costs, reduced timescales, and the like. The solution needs to be “defensible,” as many managers will wish to see it fail. Determination and preparation for this event will help it to succeed through to implementation.

  8. Continuous effort in reducing complexity

Reducing business complexity is not just a one-time effort. There is a defined need to make it a continuous program with the resources and necessary funding to take on future projects. If this does not happen, complexity will creep back into the enterprise, and soon everything will be back to square one. Future new projects should undergo a simplicity review and have representatives that can assist other project teams to be successful. Spread the word on simplicity, and make complexity reduction a part of corporate culture.

Consider Figure 2.1 below. This structure can be described as follows:

  • Simple work management and coordination: Business processes, project tasks, and technology are not new to the organization. Decision maker is known and has authority. Work is mainly coordinating resources and communicating progress.

  • Complex work management and coordination: Business process design is nontrivial, or new project management methods or technology are in use. Project requires multidisciplined resources with multiple work assignments. Secondary levels of managers are needed to oversee complex business process designs, technology platforms, or workgroups.

  • Complex work and relationship management: Project requires coordination and relationship management (negotiation of common understandings, agendas, and decisions among business process owners or unit managers). This style is also required when contractors do complex work.

  • All frontier projects: Work is targeted at a new business domain or a new technology area where the methodologies are unknown and unproven. Project management must be developed, monitored, and adjusted as the project is being conducted.

Figure 2.1 A diagram to illustrate the different work domains and their associated complexity levels.


Consider IBM’s experience with complexity when it hired Lou Gerstner as CEO in 1993. IBM’s market share and profitability were eroding due to its difficulty in adjusting to the growth of new technology, such as PCs. IBM chose Gerstner, its first CEO from outside the company and outside the computer industry, to right the ship.

Why did IBM select Gerstner? The company recognized that, above all, it needed a strategist to determine what customers really wanted and a change agent to transform a complex engineering-oriented culture to a customer-driven culture. In this light, Gerstner’s track record as a McKinsey strategy consultant, a turnaround specialist at Nabisco, and a builder of financial-service product lines at American Express was a great fit.

Gerstner was, by most accounts, highly successful. He changed IBM’s culture to that of an eclectic technology services company. IBM’s market value rose from about $29 billion, when he took over in 1993, to $181 billion when he resigned as CEO in March 2002.

Gerstner said shortly after joining the company in 1993 that, first and foremost, IBM was losing its way with regard to the customer; and, secondly, that IBM needed to integrate as one company, since this was the biggest value it could bring to its customers. If the first point isn’t the definition of complexity, the second point certainly is. IBM had too much and too many of just about everything—data centers, commerce engines, network providers, client desktop configurations, and so on.

When Gerstner started at IBM, it had 24 different business units, and they did not share services across the company. Every unit had its full complement of everything—services, purchasing, and manufacturing, to name just a few. But after an internal effort to reduce the complexity, IBM went from 55 data centers down to 12. It had 31 different network providers; now it has one. IBM had over 100 CIOs; today there is one. On the product lines, for example, IBM had over 100 different desktop configurations; today it has four. The effect on the company was dramatic and can be seen in its performance—for example, at the 2001 shareholders meeting IBM presented the following results:

  • IBM had record earnings and revenue.

  • IBM made significant investments to strengthen its portfolio: $5.6 billion invested in research and development, another $5.6 billion in capital expenditures, and half a billion dollars on strategic acquisitions.

  • The IBM cash position was still strong enough to allow it to buy back $6.7 billion of common stock.

  • On the technology front, IBM had the most U.S. patent awards for the eighth straight year.

  • IBM had $85 billion in services backlog, which is basically future revenue already under contract.

Complexity was an issue at IBM as far back as the 1960s. Frederick P. Brooks Jr., an IBM employee at the time, one of the architects of the IBM System/360 mainframe, and a noted author, observed, “Complexity is the business we are in, and complexity is what limits us.”2 That is changing at IBM with its autonomic computing initiative, and so it will for IBM customers who embrace it.


IBM was on the verge of breakup: Bureaucracy, complexity, and silos were slowing IBM down—costing money and keeping the organization too opaque to function decisively. The stock price was at a 20-year low, and the company had posted an $8.1 billion loss.

IBM drove common processes across lines of business: IBM began by breaking down barriers between lines of business, implementing enterprise-wide standards for five core processes:

  • Market planning

  • Product development

  • Procurement

  • CRM

  • Fulfillment

IBM simplified infrastructure and governance: the number of CIOs, host data centers, and Web hosting centers were each sharply reduced.

Results: IT spending was reduced by 31 percent over the past decade—even as the IT infrastructure grew to support new applications and processes, higher volume, and enhanced functionality.

  • Total savings: more than $9 billion

  • Time to market: 75 percent faster

  • Customer satisfaction: up 5.5 percent

Simplification was not enough: Standardized processes stopped the tide of red ink. But IBM was still not fully leveraging its size, scope, and power to reach customers and differentiate itself in the marketplace.

Integrated across the value net: IBM reorganized to deliver unified processes across the value net—from suppliers to partners to employees to customers. This further increased efficiency, especially for cross-business initiatives like SCM and CRM.



  • $26.4B in 2002, up 4% YTY

  • $11.6B from, up 3%


  • Cost avoidance from e-support in 2002: ~$600M, up 17% YTY

  • 60% of phone contacts result in sales


  • Applications reduced by 42%

  • 70% of PC orders “touchless”


  • 90% of orders “hands-free”

  • Cost avoidance from e-procurement in 2002: ~$450M, up 8% YTY

Our e-business transformation efforts to date have realized:

$16.5 billion in benefits from $5.6 billion of investment

To stay competitive as the business environment grows faster, less predictable, and increasingly customer-driven, e-business on demand will allow IBM to respond quickly to changes and market opportunities.

Six key initiatives: IBM is still in the early stages of implementing e-business on demand. There are six key areas where it feels it is getting closest:

  1. An integrated supply chain

  2. New semiconductor manufacturing facility

  3. Implementation of the on demand workplace at IBM

  4. Grid computing

  5. Building the next generation of infrastructure

  6. E-business worldwide centers

IBM came back from the brink. To reach this stage, IBM has transformed its business processes, technology, and most importantly, its culture. This has involved a great deal of planning and effort, but the story is proof that it can be done—and that the rewards are enormous.

The organizations that move first will have an enormous competitive advantage over those that are slow to adapt. The difficult part is changing business thinking. On demand business challenges long-held notions about organizations and hierarchy—but “silo” thinking and obsession with control are obstacles on the path to the future.


One of the most difficult challenges facing IT organizations today is ensuring alignment with business objectives in terms of quality, flexibility, initial cost, and time to market. In computing, opportunity breeds complexity. Moreover, complexity begets systems that can be unreliable and difficult to manage. In most medium- to large-sized IT organizations, there are now numerous applications and environments that weigh in at tens of millions of lines of code, and they require highly skilled IT professionals to install, configure, tune, debug, upgrade, and generally maintain them. This means that the difficulty of managing today’s computing systems goes well beyond the administration of individual software environments. There is a need to integrate several different environments into corporate-wide computing systems and make them work. This goes beyond company boundaries into the Internet and introduces new levels of complexity. Computing systems’ complexity appears to be approaching the limits of human capability, yet the relentless march toward increased interconnectivity and integration rushes ahead unabated. New technologies, such as wireless, will increase the complexity even more.

We do not see a slowdown in the progression of Moore’s law. Rather, it is the IT industry’s exploitation of technological growth inherent in Moore’s law that leads us to the verge of a complexity crisis. Software companies now have massive computing power, which can produce ever more complex applications that run on even more complex IT infrastructures. Add to this mix network and communications technology and the complexity increases by several orders of magnitude. The domino effect applies here. Software packaged applications aren’t providing much, if any, relief from the traditional organizational need to customize enterprise applications. They pose system and network management challenges that also aren’t getting any simpler to handle. The result is more complexity. More complexity means more time and more resources to manage the complexity; so the costs begin to rise.

Many surveys have been conducted that ask CIOs and their staff the following basic question: “What IT applications or technologies are becoming too complicated to manage?”

The responses and results have been predictable:

  1. Integrating the Web and standard legacy-based systems.

  2. Implementing custom software packages.

  3. Integrating new Java™-based software.

  4. Constructing object-based and distributed architectures.

  5. Data warehousing.

  6. Implementing new e-business systems.


Most CEOs would cringe at the idea that IT infrastructure—the way architectures and associated technology resources are organized—will determine the agility with which companies can carry out good strategy. Yet the difficulty and cost of modifying today’s rigid IT architectures—dominated by big enterprise applications, such as Enterprise Resource Planning (ERP), and large application suites, such as Customer Relationship Management (CRM) and many others—can be so high that some companies would rather abandon new strategic initiatives than make a single change to the applications they already have in place.

Businesses need to reduce costs and sprawling networks as well as respond more quickly to changing business environments. In today’s typical IT environment, there may be three to four tiers of computing, with caching and security servers on the front end, application servers as a middle layer, and transaction and data processing servers on the end. Add to this another layer of complexity with Internet servers, Web services, and portals, and it gets even worse.

By reducing the tiers of computing, customers may gain cost savings and create an environment where they can deploy new technologies, such as Linux and grid solutions, to create a dynamic operating environment capable of responding more flexibly to changing business or customer requirements.

Businesses reducing the tiers of computing are using scale-out and scale-up approaches. Scale-out systems, such as blade servers, allow customers to add compute capacity processor by processor, while scale-up systems—mainframe and mainframe-class servers—grow like building blocks. Buy when you need it. Install it, and use it.

Good news is on the horizon in the form of service-oriented architectures, which promise to reduce, if not remove, the current obstacles to less complexity.


IT must start taking positive steps forward if it wants to enable growth rather than hinder it. Business units are demanding a higher level of service from IT, and CIOs are taking a hard look at how they run their operations, spend their money, and plan for tomorrow. In the long run, CIOs can implement new systems that are expected to radically change IT operations and reduce staff and costs.

Here are some of the symptoms of complexity in IT:

  1. Frequent and recurring software crashes of critical applications due to incompatibilities in data, file formats, or network protocols.

  2. Longer timeframes for IT staff to solve the problems in item 1.

  3. A significant increase in IT budgets, including hardware, software, human capital costs, training, and support.

  4. Increase in the level of application outsourcing—if there is a problem, it is better to let some other vendor deal with it.

  5. High turnover of critical IT staff due to frustration, long hours, and burnout.

  6. Unexpected surprises in new technology, new languages, or applications leading to increased time in understanding and managing projects that use them.

  7. Longer timeframes to satisfactorily test and install new applications or software packages.

  8. Growth of expensive hardware and software in IT architectures—the “silent sales” syndrome.

  9. Incompatibility between competing vendor software packages—i.e., file structures, databases, transmission protocols, and parameters—due to lack of standards.

  10. Frequent but necessary software upgrades of packages, operating systems, and application development languages, resulting in yet another round of errors and incompatibility problems.

  11. Incessant requests for new business systems to be developed and installed within what appears to be unreasonable timeframes.


To rid themselves of unnecessary complexity, corporations will need a comprehensive self-assessment to create a plan for transforming their systems and making them simpler. Corporations can take immediate steps to untangle most of their unwanted IT complexity by focusing on the following six specific activities, which, taken together, will help them transform the way they use and manage IT, making IT organizations leaner and companies better prepared for the end of the downturn:

  • Understand and target the root causes of complexity.

  • Install self-managing systems, such as autonomic computing.

  • Consider consolidation of hardware and software.

  • Regenerate the company’s IT architecture.

  • Plan for outsourcing of certain applications.

  • Develop a management culture in IT.

By reducing this kind of IT complexity, corporations position themselves to benefit as growth continues. They will then need to add systems, but they will be able to do so more quickly and at far less expense by pruning complexity now. Adding an application to a streamlined, integrated IT platform, for example, requires less systems development and integration work.


Technology should obviously make it possible to streamline processes and reduce costs. Often, it does. But during the 1990s, companies added wave after wave of novel technologies, often starting new projects before completing earlier ones, with disappointing results. By not fully changing business processes to reap the value of the new systems, these companies made processes more complex—not more streamlined—and increased costs, particularly in IT.

Rapid mergers and globalization added to the complexity. Integrating and rationalizing systems after a merger can be a massive job that takes months, sometimes years, to complete. For example, Bank One, the nation’s fifth-largest bank, has been acquiring smaller banks over the last ten years and is continuously integrating new accounts into its systems. In the interim, companies may have to grapple with operating systems and business applications that don’t mesh fully or even partially; getting data to flow between two recently merged companies is a straightforward aim but may turn out to be frustratingly difficult. Meanwhile, as companies expanded into new markets around the globe, they added systems to support new supply chains, local human resources and legal requirements, new financial structures, and the flow of information in a variety of languages—all of which increased costs due to added complexity. As almost every computer user can confirm, the “hidden costs” of computing are huge. After paying for the hardware and software, considerable time is required to address software installation, upgrades, maintenance, enhancements, configuration, tuning and optimization, problem detection and resolution, and security considerations.

A few companies have decided that IT cost-cutting provides a great opportunity to untangle their systems and projects. Rather than taking the short-term view—“Can we live without this piece of IT now?”—these companies are looking to the longer run, transforming their business activities and IT processes in ways that will strengthen their systems and, at the same time, eliminate the deeper causes of bloated IT spending. In a sense, the companies are finishing a job they didn’t have time to complete during the bubble years.

IT costs soared in the 1990s as companies adopted systems and applications to support new channels and products, expansion into new markets, and tighter coordination with suppliers. The rapid pace of competition often meant that such companies implemented these systems quickly—in “Internet time”—without fully integrating them with existing systems (and retiring older ones) or making the business changes needed to exploit technology’s potential for helping to automate and streamline business activities.

Indeed, companies can make short-term cuts and save while, at the same time, addressing the costly longer-term roots of IT complexity, but only if senior IT and business leaders commit themselves to keeping these goals in mind. Companies are reaping big savings by rethinking the way they manage multiple channels—for example by closing a failed Web site or keeping the Internet channel and outsourcing a call center. The same treatment is being accorded to product portfolios supported by disparate and often uncoordinated IT systems: Banks and telecommunications companies, for example, are dropping some older offerings. Companies are also consolidating their database-management systems and other infrastructure technologies and redrawing their IT architectures—the blueprint for the IT structure supporting the business. They are giving themselves the ability to take advantage of new outsourcing arrangements that will ease overall complexity.

In summary, there are many advances that can be made to lessen existing IT complexity. Here are some suggestions:

  1. To establish the level of complexity, seek and secure senior IT management authorization to review the entire corporation’s IT infrastructure, resources, applications, processes, and operating procedures. Write a report with metrics and prioritized recommendations—see the next section on Corporate Assessment.

  2. Establish what can be done in the short term:

  • Server consolidation – Determine what can be consolidated to reduce costs and lessen complexity.

  • Outsourcing – Review all applications to establish what, if any, applications can be outsourced.

  • System Management – Review new automated system management software to determine whether automated handling of new software releases, patches, and updates is needed.

  • Application Development – Review all application development software, and consolidate where possible.
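
As a purely hypothetical illustration of the server consolidation review above, a short script can flag under-utilized machines as consolidation candidates. The inventory fields and the 20% CPU / 30% memory thresholds are illustrative assumptions, not figures from the text or from any standard tool.

```python
# Hypothetical sketch: flag under-utilized servers as consolidation candidates.
# Field names and thresholds are illustrative assumptions.

CPU_THRESHOLD = 0.20  # average CPU utilization below which a server is a candidate
MEM_THRESHOLD = 0.30  # average memory utilization threshold

def consolidation_candidates(inventory):
    """Return names of servers whose average CPU and memory use both fall
    below the thresholds, sorted by CPU utilization (lowest first)."""
    candidates = [
        s for s in inventory
        if s["avg_cpu"] < CPU_THRESHOLD and s["avg_mem"] < MEM_THRESHOLD
    ]
    return [s["name"] for s in sorted(candidates, key=lambda s: s["avg_cpu"])]

servers = [
    {"name": "erp-db-01",  "avg_cpu": 0.62, "avg_mem": 0.71},
    {"name": "legacy-app", "avg_cpu": 0.04, "avg_mem": 0.12},
    {"name": "mail-02",    "avg_cpu": 0.11, "avg_mem": 0.25},
]

print(consolidation_candidates(servers))  # ['legacy-app', 'mail-02']
```

The output is only a starting point for the review: a flagged server may still be kept for availability, licensing, or regulatory reasons, which is why the human review step remains in the list above.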

The impact of complexity on IT systems thinking is fundamental. Instead of basing our strategies and actions on prediction, with the development and implementation of a plan designed to take us from “here and now” to “there and then,” we need to adopt a more creative approach. This implies more frequent monitoring and reassessment, with an awareness and capacity to change targets and goals, to make use of what is working and to cut back on what is not. This is an approach that recognizes the constant need to learn about what is happening and to try to make sense of it as fast as possible—before the complexity increases to unmanageable proportions.


The first step in identifying corporate complexity should be to perform an assessment. Results of the complexity assessment are used to define the most cost-effective and appropriate complexity reduction implementation plan to meet corporate goals.

The complexity assessment is performed to measure a corporation’s potential for practicing complexity reduction, to determine whether the corporation is ready to embark on a complexity reduction program, and to define where to focus its efforts to gain the maximum benefit. The emphasis is on a business viewpoint, looking at the reasons why complexity has evolved, how a reduction policy and the introduction of simplicity can help, and the expected business value to be gained from complexity reduction or elimination. The results of the complexity assessment can be used as the basis for defining corporate complexity goals, complexity reduction adoption strategies, the domains in which to practice simplicity instead of complexity, and the complexity reduction program implementation plan.


The complexity assessment is performed to help successfully introduce complexity reduction into a corporation. The purposes of the complexity assessment are as follows:

  1. Evaluate a corporation’s current complexity strategy and the implementation of that strategy in current software projects and various systems groups.

  2. Use the results of the assessment to determine a corporation’s complexity goals, elements of a complexity program to achieve those goals, and domains in which to focus complexity efforts.

  3. Recommend actions for the corporation to take to implement its complexity strategy.

Instituting the practice of complexity reduction across a corporation is a large, complex task in itself, especially if the ultimate goal is to apply complexity reduction practices above the project level—that is, across teams, across product lines, and across software groups/organizations. Success requires careful planning, cooperation, and good management practices. To ensure success, a corporation needs to determine how ready, willing, and able it is to practice a complexity-reduction-driven development approach and what actions it needs to take to prepare itself to accomplish its complexity objectives and goals.
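
To make the “ready, willing, and able” question concrete, the assessment results can be rolled up into a single readiness figure. The dimensions, weights, and 1–5 scoring scale below are illustrative assumptions, not a published methodology.

```python
# Hypothetical sketch: aggregate complexity-assessment scores into one
# readiness figure. Dimensions, weights, and the 1-5 scale are assumptions.

WEIGHTS = {
    "management_commitment": 0.30,
    "technical_practices":   0.25,
    "training":              0.20,
    "measurement":           0.25,
}

def readiness_score(scores):
    """Weighted average of per-dimension scores (each on a 1-5 scale)."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

assessment = {
    "management_commitment": 4,
    "technical_practices":   2,
    "training":              3,
    "measurement":           2,
}

print(round(readiness_score(assessment), 2))  # 2.8
```

A low aggregate score suggests preparatory actions (training, measurement infrastructure) before launching a corporation-wide program; the weights themselves should be debated and agreed by the senior IT and business leaders the text refers to.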

The assessment will investigate both technical and management/organizational complexity issues. On the technical side, some important issues include:

  • Identifying and defining core business objects and other kinds of components.

  • Defining guidelines and standards for core business objects (once they exist) and for creating or re-engineering core business objects.

  • Defining the organizational structure and classification scheme for the complexity library or libraries.

On the management/organizational side, issues include:

  • Defining personnel support for core business objects/components.

  • Establishing complexity training programs.

  • Establishing the complexity measurement infrastructure (i.e., complexity metrics and measurements, corporate complexity policy, complexity incentives).


Conducting an IT infrastructure management assessment would help corporate IT users and senior management to prepare their IT environments to incorporate emerging capabilities, such as automated provisioning, autonomic computing, and business systems monitoring.

Infrastructure management assessment services are key components of complexity management offerings. The services provide ongoing assessments of corporate IT systems with special focus on the following areas:

  • Systems Management: Ongoing operational health checks using proven methods for monitoring system availability across multi-platform IT hardware, software, and network resources.

  • Asset Management: Systematic, multi-vendor asset management through consolidated assessment and tracking of hardware and software assets, to manage even complex monitoring and usage environments.

  • Resource Management: Health checks on what resources are being used where and at what cost.

  • Problem Management: An assessment of problem management determinations and how problems and issues are tracked, resolved, and documented.

  • Change Management: Processes that help decrease the risk of IT system downtime.

  • Service Management: Interlocking of test, migration, and production schedules for smooth integration and transition of new technologies into an existing data center environment.

  • Security Management: Assessment of security strategies, policies, and protocols for deploying security techniques.

By providing clear assessments of the IT environment, these services help users define the technical and business requirements needed to improve server, storage, and network utilization; standardize their computing environment; and evaluate their security needs using automation software and processes.
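
The Systems Management item above—ongoing operational health checks of system availability—can be sketched as follows. The 99.5% availability target and the up/down probe format are illustrative assumptions, not part of any vendor offering.

```python
# Hypothetical sketch of an operational health check: compute availability
# per system from boolean up/down probe samples and flag anything below a
# target. The 99.5% target and probe format are illustrative assumptions.

TARGET_AVAILABILITY = 0.995

def availability(samples):
    """Fraction of probes in which the system was up."""
    return sum(samples) / len(samples)

def health_report(probe_log):
    """Map each system name to (availability, meets_target)."""
    return {
        name: (availability(samples),
               availability(samples) >= TARGET_AVAILABILITY)
        for name, samples in probe_log.items()
    }

probes = {
    "billing":  [True] * 999 + [False],       # 99.9% of probes up
    "intranet": [True] * 980 + [False] * 20,  # 98.0% of probes up
}

for name, (avail, ok) in health_report(probes).items():
    print(f"{name}: {avail:.1%} {'OK' if ok else 'BELOW TARGET'}")
```

In practice such checks would run continuously across multi-platform hardware, software, and network resources, as the bullet describes, with the flagged systems feeding into problem and change management.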

Customers that manage their own data centers have a huge opportunity to reclaim money spent manually directing the hundreds or thousands of computer systems in their enterprise. By designing an IT framework based on industry standards and proven business processes, customers can substantially increase the efficiency and utilization of their existing IT resources.

Just as enterprise resource planning and Six Sigma made company manufacturing and supply chains more efficient and flexible, applying more discipline to a corporate data center can bring similar benefits to a company’s overall IT operations.


Complexity is becoming a major issue for all IT departments, whether acknowledged or not, and many CIOs are still in denial. This is not industry hype; it is reality. Complexity is not just an academic theory; it has taken hold in the IT world, where it increases costs and erodes productivity. This is a threat to progress and future success that must be addressed: there is no future in the status quo, and letting IT infrastructures and architectures become increasingly complex with no action taken is unacceptable and irresponsible. If the eventual solution is simply to throw more skilled programmers and others at the problem, chaos will be the order of the day. The reliability and performance of critical corporate applications will be called into question, and confidence in the IT department, already battered in the past, will be the next issue on senior management’s checklist. Until IT vendors solve the problem of complexity, the same problems will be repeated and continue to plague the industry. These complexities cannot be managed through skilled staff alone, even if such staff are available.

Corporations and forward-thinking CIOs should start with a complexity reduction management policy. This policy will set out an approach for dealing with complexity and present solutions, such as autonomic computing. The management, and ultimate reduction, of complexity and a move to simpler solutions will not be easily achieved. Consider this quote from IBM’s Alan Ganek:


In the IT industry, we are operating in one of the most difficult and complex business environments that any of us have participated in during our business careers, and it is vital that we address complexity NOW.


  1. Simplicity, Edward de Bono, New York: Penguin Putnam, 1998.

  2. Simplicity: The New Competitive Advantage, Bill Jensen, Cambridge, MA: Perseus Books, 2000.

  3. The Clock of the Long Now, Stewart Brand, Weidenfeld and Nicolson, 2000.

  4. Information, Entropy, and Progress: A New Evolutionary Paradigm, R. U. Ayres, Woodbury, NY: American Institute of Physics, 1999.

  5. The Alchemy of Growth: Kickstarting and Sustaining Growth in Your Company, M. Baghai et al., London: Orion Business, 1999.

Note to readers: Regrettably, there are no IT books on this subject.

1 see

2 F. P. Brooks, The Mythical Man-Month, Addison-Wesley, 1995.