Chapter Reflection


Project Management Processes, Methodologies, and Economics

Third Edition

Avraham Shtub

Faculty of Industrial Engineering and Management

The Technion–Israel Institute of Technology

Moshe Rosenwein

Department of Industrial Engineering and Operations Research

Columbia University

Boston Columbus San Francisco New York Hoboken Indianapolis London Toronto Sydney Singapore Tokyo Montreal Dubai Madrid Hong Kong Mexico City Munich Paris Amsterdam Cape Town

Vice President and Editorial Director, Engineering and Computer Science: Marcia J. Horton

Editor in Chief: Julian Partridge

Executive Editor: Holly Stark

Editorial Assistant: Amanda Brands

Field Marketing Manager: Demetrius Hall

Marketing Assistant: Jon Bryant

Managing Producer: Scott Disanno

Content Producer: Erin Ault

Operations Specialist: Maura Zaldivar-Garcia

Manager, Rights and Permissions: Ben Ferrini

Cover Designer: Black Horse Designs

Cover Photo: Vladimir Liverts/Fotolia

Printer/Binder: RRD/Crawfordsville

Cover Printer: Phoenix Color/Hagerstown

Full-Service Project Management: SPi Global

Composition: SPi Global

Typeface: Times Ten LT Std Roman 10/12

Copyright © 2017, 2005, 1994 Pearson Education, Inc., Hoboken, NJ 07030. All rights reserved. Manufactured in the United States of America. This publication is protected by copyright, and permission should be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise. For information regarding permissions, request forms, and the appropriate contacts within the Pearson Education Global Rights & Permissions department, please visit www.pearsoned.com/permissions/.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed in initial caps or all caps.

The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

Library of Congress Cataloging-in-Publication Data

Names: Shtub, Avraham, author. | Rosenwein, Moshe, author.
Title: Project management : processes, methodologies, and economics / Avraham Shtub, Faculty of Industrial Engineering and Management, The Technion-Israel Institute of Technology, Moshe Rosenwein, Department of Industrial Engineering and Operations Research, Columbia University.
Other titles: Project management (Boston, Mass.)
Description: 3E. | Pearson | Includes bibliographical references and index.
Identifiers: LCCN 2016030485 | ISBN 9780134478661 (pbk.)
Subjects: LCSH: Engineering—Management. | Project management.
Classification: LCC TA190 .S583 2017 | DDC 658.4/04—dc23
LC record available at https://lccn.loc.gov/2016030485

10 9 8 7 6 5 4 3 2 1

ISBN-10: 0-13-447866-5

ISBN-13: 978-0-13-447866-1

This book is dedicated to my grandchildren Zoey, Danielle, Adam, and Noam Shtub.

This book is dedicated to my wife, Debbie; my three children, David, Hannah, and Benjamin; my late parents, Zvi and Blanche Rosenwein; and my in-laws, Dr. Herman and Irma Kaplan.

Contents

Nomenclature xv

Preface xvii

What’s New in this Edition xxi

About the Authors xxiii

1 Introduction 1

1.1 Nature of Project Management 1

1.2 Relationship Between Projects and Other Production Systems 2

1.3 Characteristics of Projects 4

1.3.1 Definitions and Issues 5

1.3.2 Risk and Uncertainty 7

1.3.3 Phases of a Project 9

1.3.4 Organizing for a Project 11

1.4 Project Manager 14

1.4.1 Basic Functions 15

1.4.2 Characteristics of Effective Project Managers 16

1.5 Components, Concepts, and Terminology 16

1.6 Movement to Project-Based Work 24

1.7 Life Cycle of a Project: Strategic and Tactical Issues 26

1.8 Factors that Affect the Success of a Project 29

1.9 About the Book: Purpose and Structure 31

Team Project 35

Discussion Questions 38

Exercises 39

Bibliography 41

Appendix 1A: Engineering Versus Management 43

1A.1 Nature of Management 43

1A.2 Differences between Engineering and Management 43

1A.3 Transition from Engineer to Manager 45

Additional References 45

2 Process Approach to Project Management 47

2.1 Introduction 47

2.1.1 Life-Cycle Models 48

2.1.2 Example of a Project Life Cycle 51

2.1.3 Application of the Waterfall Model for Software Development 51

2.2 Project Management Processes 53

2.2.1 Process Design 53

2.2.2 PMBOK and Processes in the Project Life Cycle 54

2.3 Project Integration Management 54

2.3.1 Accompanying Processes 54

2.3.2 Description 56

2.4 Project Scope Management 60

2.4.1 Accompanying Processes 60

2.4.2 Description 60

2.5 Project Time Management 61

2.5.1 Accompanying Processes 61

2.5.2 Description 62

2.6 Project Cost Management 63

2.6.1 Accompanying Processes 63

2.6.2 Description 64

2.7 Project Quality Management 64

2.7.1 Accompanying Processes 64

2.7.2 Description 65

2.8 Project Human Resource Management 66

2.8.1 Accompanying Processes 66

2.8.2 Description 66

2.9 Project Communications Management 67

2.9.1 Accompanying Processes 67

2.9.2 Description 68

2.10 Project Risk Management 69

2.10.1 Accompanying Processes 69

2.10.2 Description 70

2.11 Project Procurement Management 71

2.11.1 Accompanying Processes 71

2.11.2 Description 72

2.12 Project Stakeholders Management 74

2.12.1 Accompanying Processes 74

2.12.2 Description 75

2.13 The Learning Organization and Continuous Improvement 76

2.13.1 Individual and Organizational Learning 76

2.13.2 Workflow and Process Design as the Basis of Learning 76

Team Project 77

Discussion Questions 77

Exercises 78

Bibliography 78

3 Engineering Economic Analysis 81

3.1 Introduction 81

3.1.1 Need for Economic Analysis 82

3.1.2 Time Value of Money 82

3.1.3 Discount Rate, Interest Rate, and Minimum Acceptable Rate of Return 83

3.2 Compound Interest Formulas 84

3.2.1 Present Worth, Future Worth, Uniform Series, and Gradient Series 86

3.2.2 Nominal and Effective Interest Rates 89

3.2.3 Inflation 90

3.2.4 Treatment of Risk 92

3.3 Comparison of Alternatives 92

3.3.1 Defining Investment Alternatives 94

3.3.2 Steps in the Analysis 96

3.4 Equivalent Worth Methods 97

3.4.1 Present Worth Method 97

3.4.2 Annual Worth Method 98

3.4.3 Future Worth Method 99

3.4.4 Discussion of Present Worth, Annual Worth, and Future Worth Methods 101

3.4.5 Internal Rate of Return Method 102

3.4.6 Payback Period Method 109

3.5 Sensitivity and Breakeven Analysis 111

3.6 Effect of Tax and Depreciation on Investment Decisions 114

3.6.1 Capital Expansion Decision 116

3.6.2 Replacement Decision 118

3.6.3 Make-or-Buy Decision 123

3.6.4 Lease-or-Buy Decision 124

3.7 Utility Theory 125

3.7.1 Expected Utility Maximization 126

3.7.2 Bernoulli’s Principle 128

3.7.3 Constructing the Utility Function 129

3.7.4 Evaluating Alternatives 133

3.7.5 Characteristics of the Utility Function 135

Team Project 137

Discussion Questions 141

Exercises 142

Bibliography 152

4 Life-Cycle Costing 155

4.1 Need for Life-Cycle Cost Analysis 155

4.2 Uncertainties in Life-Cycle Cost Models 158

4.3 Classification of Cost Components 161

4.4 Developing the LCC Model 168

4.5 Using the Life-Cycle Cost Model 175

Team Project 176

Discussion Questions 176

Exercises 177

Bibliography 179

5 Portfolio Management—Project Screening and Selection 181

5.1 Components of the Evaluation Process 181

5.2 Dynamics of Project Selection 183

5.3 Checklists and Scoring Models 184

5.4 Benefit-Cost Analysis 187

5.4.1 Step-By-Step Approach 193

5.4.2 Using the Methodology 193

5.4.3 Classes of Benefits and Costs 193

5.4.4 Shortcomings of the Benefit-Cost Methodology 194

5.5 Cost-Effectiveness Analysis 195

5.6 Issues Related to Risk 198

5.6.1 Accepting and Managing Risk 200

5.6.2 Coping with Uncertainty 201

5.6.3 Non-Probabilistic Evaluation Methods when Uncertainty Is Present 202

5.6.4 Risk-Benefit Analysis 207

5.6.5 Limits of Risk Analysis 210

5.7 Decision Trees 210

5.7.1 Decision Tree Steps 217

5.7.2 Basic Principles of Diagramming 218

5.7.3 Use of Statistics to Determine the Value of More Information 219

5.7.4 Discussion and Assessment 222

5.8 Real Options 223

5.8.1 Drivers of Value 223

5.8.2 Relationship to Portfolio Management 224

Team Project 225

Discussion Questions 228

Exercises 229

Bibliography 237

Appendix 5A: Bayes’ Theorem for Discrete Outcomes 239

6 Multiple-Criteria Methods for Evaluation and Group Decision Making 241

6.1 Introduction 241

6.2 Framework for Evaluation and Selection 242

6.2.1 Objectives and Attributes 242

6.2.2 Aggregating Objectives Into a Value Model 244

6.3 Multiattribute Utility Theory 244

6.3.1 Violations of Multiattribute Utility Theory 249

6.4 Analytic Hierarchy Process 254

6.4.1 Determining Local Priorities 255

6.4.2 Checking for Consistency 260

6.4.3 Determining Global Priorities 261

6.5 Group Decision Making 262

6.5.1 Group Composition 263

6.5.2 Running the Decision-Making Session 264

6.5.3 Implementing the Results 265

6.5.4 Group Decision Support Systems 265

Team Project 267

Discussion Questions 267

Exercises 268

Bibliography 271

Appendix 6A: Comparison of Multiattribute Utility Theory with the AHP: Case Study 275

6A.1 Introduction and Background 275

6A.2 The Cargo Handling Problem 276

6A.2.1 System Objectives 276

6A.2.2 Possibility of Commercial Procurement 277

6A.2.3 Alternative Approaches 277

6A.3 Analytic Hierarchy Process 279

6A.3.1 Definition of Attributes 280

6A.3.2 Analytic Hierarchy Process Computations 281

6A.3.3 Data Collection and Results for AHP 283

6A.3.4 Discussion of Analytic Hierarchy Process and Results 284

6A.4 Multiattribute Utility Theory 286

6A.4.1 Data Collection and Results for Multiattribute Utility Theory 286

6A.4.2 Discussion of Multiattribute Utility Theory and Results 290

6A.5 Additional Observations 290

6A.6 Conclusions for the Case Study 291

References 291

7 Scope and Organizational Structure of a Project 293

7.1 Introduction 293

7.2 Organizational Structures 294

7.2.1 Functional Organization 295

7.2.2 Project Organization 297

7.2.3 Product Organization 298

7.2.4 Customer Organization 298

7.2.5 Territorial Organization 299

7.2.6 The Matrix Organization 299

7.2.7 Criteria for Selecting an Organizational Structure 302

7.3 Organizational Breakdown Structure of Projects 303

7.3.1 Factors in Selecting a Structure 304

7.3.2 The Project Manager 305

7.3.3 Project Office 309

7.4 Project Scope 312

7.4.1 Work Breakdown Structure 313

7.4.2 Work Package Design 320

7.5 Combining the Organizational and Work Breakdown Structures 322

7.5.1 Linear Responsibility Chart 323

7.6 Management of Human Resources 324

7.6.1 Developing and Managing the Team 325

7.6.2 Encouraging Creativity and Innovation 329

7.6.3 Leadership, Authority, and Responsibility 331

7.6.4 Ethical and Legal Aspects of Project Management 334

Team Project 335

Discussion Questions 336

Exercises 336

Bibliography 338

8 Management of Product, Process, and Support Design 341

8.1 Design of Products, Services, and Systems 341

8.1.1 Principles of Good Design 342

8.1.2 Management of Technology and Design in Projects 344

8.2 Project Manager’s Role 345

8.3 Importance of Time and the Use of Teams 346

8.3.1 Concurrent Engineering and Time-Based Competition 347

8.3.2 Time Management 349

8.3.3 Guideposts for Success 352

8.3.4 Industrial Experience 354

8.3.5 Unresolved Issues 355

8.4 Supporting Tools 355

8.4.1 Quality Function Deployment 355

8.4.2 Configuration Selection 358

8.4.3 Configuration Management 361

8.4.4 Risk Management 365

8.5 Quality Management 370

8.5.1 Philosophy and Methods 371

8.5.2 Importance of Quality in Design 382

8.5.3 Quality Planning 383

8.5.4 Quality Assurance 383

8.5.5 Quality Control 384

8.5.6 Cost of Quality 385

Team Project 387

Discussion Questions 388

Exercises 389

Bibliography 389

9 Project Scheduling 395

9.1 Introduction 395

9.1.1 Key Milestones 398

9.1.2 Network Techniques 399

9.2 Estimating the Duration of Project Activities 401

9.2.1 Stochastic Approach 402

9.2.2 Deterministic Approach 406

9.2.3 Modular Technique 406

9.2.4 Benchmark Job Technique 407

9.2.5 Parametric Technique 407

9.3 Effect of Learning 412

9.4 Precedence Relations Among Activities 414

9.5 Gantt Chart 416

9.6 Activity-On-Arrow Network Approach for CPM Analysis 420

9.6.1 Calculating Event Times and Critical Path 428

9.6.2 Calculating Activity Start and Finish Times 431

9.6.3 Calculating Slacks 432

9.7 Activity-On-Node Network Approach for CPM Analysis 433

9.7.1 Calculating Early Start and Early Finish Times of Activities 434

9.7.2 Calculating Late Start and Late Finish Times of Activities 434

9.8 Precedence Diagramming with Lead–Lag Relationships 436

9.9 Linear Programming Approach for CPM Analysis 442

9.10 Aggregating Activities in the Network 443

9.10.1 Hammock Activities 443

9.10.2 Milestones 444

9.11 Dealing with Uncertainty 445

9.11.1 Simulation Approach 445

9.11.2 PERT and Extensions 447

9.12 Critique of PERT and CPM Assumptions 454

9.13 Critical Chain Process 455

9.14 Scheduling Conflicts 457

Team Project 458

Discussion Questions 459

Exercises 460

Bibliography 467

Appendix 9A: Least-Squares Regression Analysis 471

Appendix 9B: Learning Curve Tables 473

Appendix 9C: Normal Distribution Function 476

10 Resource Management 477

10.1 Effect of Resources on Project Planning 477

10.2 Classification of Resources Used in Projects 478

10.3 Resource Leveling Subject to Project Due-Date Constraints 481

10.4 Resource Allocation Subject to Resource Availability Constraints 487

10.5 Priority Rules for Resource Allocation 491

10.6 Critical Chain: Project Management by Constraints 496

10.7 Mathematical Models for Resource Allocation 496

10.8 Projects Performed in Parallel 499

Team Project 500

Discussion Questions 500

Exercises 501

Bibliography 506

11 Project Budget 509

11.1 Introduction 509

11.2 Project Budget and Organizational Goals 511

11.3 Preparing the Budget 513

11.3.1 Top-Down Budgeting 514

11.3.2 Bottom-Up Budgeting 514

11.3.3 Iterative Budgeting 515

11.4 Techniques for Managing the Project Budget 516

11.4.1 Slack Management 516

11.4.2 Crashing 520

11.5 Presenting the Budget 527

11.6 Project Execution: Consuming the Budget 529

11.7 The Budgeting Process: Concluding Remarks 530

Team Project 531

Discussion Questions 531

Exercises 532

Bibliography 537

Appendix 11A: Time–Cost Tradeoff with Excel 539

12 Project Control 545

12.1 Introduction 545

12.2 Common Forms of Project Control 548

12.3 Integrating the OBS and WBS with Cost and Schedule Control 551

12.3.1 Hierarchical Structures 552

12.3.2 Earned Value Approach 556

12.4 Reporting Progress 565

12.5 Updating Cost and Schedule Estimates 566

12.6 Technological Control: Quality and Configuration 569

12.7 Line of Balance 569

12.8 Overhead Control 574

Team Project 576

Discussion Questions 577

Exercises 577

Bibliography 580

Appendix 12A: Example of a Work Breakdown Structure 581

Appendix 12B: Department of Energy Cost/Schedule Control Systems Criteria 583

13 Research and Development Projects 587

13.1 Introduction 587

13.2 New Product Development 589

13.2.1 Evaluation and Assessment of Innovations 589

13.2.2 Changing Expectations 593

13.2.3 Technology Leapfrogging 593

13.2.4 Standards 594

13.2.5 Cost and Time Overruns 595

13.3 Managing Technology 595

13.3.1 Classification of Technologies 596

13.3.2 Exploiting Mature Technologies 597

13.3.3 Relationship Between Technology and Projects 598

13.4 Strategic R&D Planning 600

13.4.1 Role of R&D Manager 600

13.4.2 Planning Team 601

13.5 Parallel Funding: Dealing with Uncertainty 603

13.5.1 Categorizing Strategies 604

13.5.2 Analytic Framework 605

13.5.3 Q-GERT 606

13.6 Managing the R&D Portfolio 607

13.6.1 Evaluating an Ongoing Project 609

13.6.2 Analytic Methodology 612

Team Project 617

Discussion Questions 618

Exercises 619

Bibliography 619

Appendix 13A: Portfolio Management Case Study 622

14 Computer Support for Project Management 627

14.1 Introduction 627

14.2 Use of Computers in Project Management 628

14.2.1 Supporting the Project Management Process Approach 629

14.2.2 Tools and Techniques for Project Management 629

14.3 Criteria for Software Selection 643

14.4 Software Selection Process 648

14.5 Software Implementation 650

14.6 Project Management Software Vendors 656

Team Project 657

Discussion Questions 657

Exercises 658

Bibliography 659

Appendix 14A: PMI Software Evaluation Checklist 660

14A.1 Category 1: Suites 660

14A.2 Category 2: Process Management 660

14A.3 Category 3: Schedule Management 661

14A.4 Category 4: Cost Management 661

14A.5 Category 5: Resource Management 661

14A.6 Category 6: Communications Management 661

14A.7 Category 7: Risk Management 662

14A.8 General (Common) Criteria 662

14A.9 Category-Specific Criteria: Category 1: Suites 663

14A.10 Category 2: Process Management 663

14A.11 Category 3: Schedule Management 664

14A.12 Category 4: Cost Management 665

14A.13 Category 5: Resource Management 666

14A.14 Category 6: Communications Management 666

14A.15 Category 7: Risk Management 668

15 Project Termination 671

15.1 Introduction 671

15.2 When to Terminate a Project 672

15.3 Planning for Project Termination 677

15.4 Implementing Project Termination 681

15.5 Final Report 682

Team Project 683

Discussion Questions 683

Exercises 684

Bibliography 685

16 New Frontiers in Teaching Project Management in MBA and Engineering Programs 687

16.1 Introduction 687

16.2 Motivation for Simulation-Based Training 687

16.3 Specific Example—The Project Team Builder (PTB) 691

16.4 The Global Network for Advanced Management (GNAM) MBA New Product Development (NPD) Course 692

16.5 Project Management for Engineers at Columbia University 693

16.6 Experiments and Results 694

16.7 The Use of Simulation-Based Training for Teaching Project Management in Europe 695

16.8 Summary 696

Bibliography 697

Index 699

Nomenclature

AC annual cost

ACWP actual cost of work performed

AHP analytic hierarchy process

AOA activity on arrow

AON activity on node

AW annual worth

BAC budget at completion

B/C benefit/cost

BCWP budgeted cost of work performed

BCWS budgeted cost of work scheduled

CBS cost breakdown structure

CCB change control board

CCBM critical chain buffer management

CDR critical design review

CE certainty equivalent, concurrent engineering

C-E cost-effectiveness

CER cost estimating relationship

CI cost index; consistency index; criticality index

CM configuration management

COO chief operating officer

CPIF cost plus incentive fee

CPM critical path method

CR capital recovery, consistency ratio

C/SCSC cost/schedule control systems criteria

CV cost variance

DOD Department of Defense

DOE Department of Energy

DOH direct overhead costs

DSS decision support system

EAC estimate at completion

ECO engineering change order

ECR engineering change request

EMV expected monetary value

EOM end of month

EOY end of year

ERP enterprise resource planning

ETC estimate to complete

ETMS early termination monitoring system

EUAC equivalent uniform annual cost

EV earned value

EVPI expected value of perfect information

EVSI expected value of sample information

FFP firm fixed price

FMS flexible manufacturing system

FPIF fixed price incentive fee

FW future worth

GAO General Accounting Office

GDSS group decision support system

GERT graphical evaluation and review technique

HR human resources

IPT integrated product team

IRR internal rate of return

IRS Internal Revenue Service

ISO International Organization for Standardization

IT information technology

LCC life-cycle cost

LOB line of balance

LOE level of effort

LP linear program

LRC linear responsibility chart

MACRS modified accelerated cost recovery system

MARR minimum acceptable (attractive) rate of return

MAUT multiattribute utility theory

MBO management by objectives

MIS management information system

MIT Massachusetts Institute of Technology

MPS master production schedule

MTBF mean time between failures

MTTR mean time to repair

NAC net annual cost

NASA National Aeronautics and Space Administration

NBC nuclear, biological, chemical

NPV net present value

OBS organizational breakdown structure

O&M operations and maintenance

PDMS product data management system

PDR preliminary design review

PERT program evaluation and review technique

PMBOK project management body of knowledge

PMI Project Management Institute

PMP project management professional

PO project office

PT project team

PV planned value

PW present worth

QA quality assurance

QFD quality function deployment

RAM reliability, availability, and maintainability; random access memory

R&D research and development

RDT&E research, development, testing, and evaluation

RFP request for proposal

ROR rate of return

SI schedule index

SOW statement of work

SOYD sum-of-the-years digits

SV schedule variance

TQM total quality management

WBS work breakdown structure

WP work package

WR work remaining

Preface

We all deal with projects in our daily lives. In most cases, organization and management simply amount to constructing a list of tasks and executing them in sequence, but when information is limited or imprecise and cause-and-effect relationships are uncertain, a more considered approach is called for. This is especially true when the stakes are high and time is pressing. Getting the job done right the first time is essential. This means doing the upfront work thoroughly, even at the cost of lengthening the initial phases of the project. Shaving expenses in the early stages with the intent of leaving time and money for revisions later might seem like a good idea but can have painful consequences. Seasoned managers will tell you that it is more cost-effective in the long run to add five extra engineers at the beginning of a project than to have to add 50 toward the end.

The quality revolution in manufacturing has brought this point home. Companies in all areas of technology have come to learn that quality cannot be inspected into a product; it must be built in. Recalling the 1980s, the global competitive battles of that time were won by companies that could achieve cost and quality advantages in existing, well-defined markets. In the 1990s, these battles were won by companies that could build and dominate new markets. Today, the emphasis is partnering and better coordination of the supply chain. Planning is a critical component of this process and is the foundation of project management.

Projects may involve dozens of firms and hundreds of people who need to be managed and coordinated. They need to know what has to be done, who is to do it, when it should be done, how it will be done, and what resources will be used. Proper planning is the first step in communicating these intentions. The problem is made difficult by what can be characterized as an atmosphere of uncertainty, chaos, and conflicting goals. To ensure teamwork, all major participants and stakeholders should be involved at each stage of the process.

How is this achieved efficiently, within budget, and on schedule? The primary objective in writing our first book was to answer this question from the perspective of the project manager. We did this by identifying the components of modern project management and showing how they relate to the basic phases of a project, starting with conceptual design and advanced development, and continuing through detailed design, production, and termination. Taking a practical approach, we drew on our collective experience in the electronics, information services, and aerospace industries. The purpose of the second edition was to update the developments in the field over the last 10 years and to expand on some of the concerns that are foremost in the minds of practitioners. In doing so, we have incorporated new material in many of the chapters specifically related to the Project Management Body of Knowledge (PMBOK) published by the Project Management Institute. This material reflects the tools, techniques, and processes that have gained widespread acceptance by the profession because of their proven value and usefulness.

Over the years, numerous books have been written with similar objectives in mind. We acknowledge their contribution and have endeavored to build on their strengths. As such, in the third edition we have focused on integrative concepts rather than isolated methodologies. We have relied on simple models to convey ideas and have intentionally avoided detailed mathematical formulations and solution algorithms, aspects of the field better left to other parts of the curriculum. Nevertheless, we do present some models of a more technical nature and provide references for readers who wish to gain a deeper understanding of their use. The availability of powerful commercial codes brings model solutions within reach of the project team.

To ensure that project participants work toward the same end and hold the same expectations, short- and long-term goals must be identified and communicated continually. The project plan is the vehicle by which this is accomplished and, once approved, becomes the basis for monitoring, controlling, and evaluating progress at each phase of the project’s life cycle. To help the project manager in this effort, various software packages have been developed; the most common run interactively on microcomputers and have full functional and report-generating capabilities. In our experience, even the most timid users are able to take advantage of their main features after only a few hours of hands-on instruction.

A second objective in writing this book has been to fill a void between texts aimed at low- to mid-level managers and those aimed at technical personnel with strong analytic skills but little training in or exposure to organizational issues. Those who teach engineering or business students at both the late undergraduate and early graduate levels should find it suitable. In addition, the book is intended to serve as a reference for the practitioner who is new to the field or who would like to gain a surer footing in project management concepts and techniques.

The core material, including most of the underlying theory, can be covered in a one-semester course. At the end of Chapter 1, we outline the book’s contents. Chapter 3 deals with economic issues, such as cash flow, time value of money, and depreciation, as they relate to projects. With this material and some supplementary notes, coupled with the evaluation methods and multiple criteria decision-making techniques discussed in Chapters 5 and 6, respectively, it should be possible to teach a combined course in project management and engineering economy. This is the direction in which many undergraduate engineering programs are now headed after many years of industry prodding. Young engineers are often thrust into leadership roles without adequate preparation or training in project management skills.
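To give a flavor of the time-value-of-money material covered in Chapter 3: the present worth of a project discounts each period's cash flow back to time zero at the chosen interest rate. The sketch below is our own illustration (the function name and the cash-flow figures are hypothetical, not taken from the text):

```python
def present_worth(cash_flows, rate):
    """Discount a list of end-of-period cash flows (index 0 = time zero)
    back to the present at the given per-period interest rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A hypothetical project: invest 1,000 now and recover 500 at the end of
# each of the next three years, evaluated at a 10% discount rate.
pw = present_worth([-1000, 500, 500, 500], 0.10)
print(round(pw, 2))  # a positive present worth means the project earns more than 10%
```

A positive result indicates the project's return exceeds the discount rate, which is the acceptance criterion developed formally in Chapter 3.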

Among the enhancements in the third edition are a section on Lean project management, discussed in Chapter 8, and a new Chapter 16 on simulation-based training for project management.

Lean project management is a quality management initiative that focuses on maximizing the value a project generates for its stakeholders while minimizing waste. It is based on the Toyota production system philosophy, originally developed for a repetitive environment and adapted to a nonrepetitive one, and supports project managers and project teams in launching, planning, executing, and terminating projects. Above all, Lean project management is about people: selecting the right project team members, teaching them the art and science of project management, and developing a highly motivated team that works together to achieve project goals.

Simulation-based training is a powerful tool for training project team members and for team development. Chapter 16 discusses the principles of simulation-based training and its application to project management. The chapter reports on the authors’ experience using simulation-based training in leading business schools, such as the members of the Global Network for Advanced Management (GNAM), and in leading engineering schools, such as the Columbia University School of Engineering and the Technion. The authors also incorporated feedback from European universities, such as the Technische Universität München (TUM) School of Management and Katholieke Universiteit Leuven, that have used the Project Team Builder (PTB) simulation-based training environment. Adopters of this book are encouraged to try the PTB—it is available from http://www.sandboxmodel.com/—and to integrate it into their courses.

Writing a textbook is a collaborative effort involving many people whose names do not always appear on the cover. In particular, we thank all faculty who adopted the first and second editions of the book and provided us with their constructive and informative comments over the years. With regard to production, much appreciation goes to Lillian Bluestein for her thorough job in proofreading and editing the manuscript. We would also like to thank Chen Gretz-Shmueli for her contribution to the discussion in the human resources section. Finally, we are forever grateful to the phalanx of students who have studied project management at our universities and who have made the painstaking efforts of gathering and writing new material all worthwhile.

Avraham Shtub

Moshe Rosenwein

What’s New in this Edition

The purpose of the new, third edition of this book is to update developments in the project management field over the last 10 years and to more broadly address some of the concerns that have increased in prominence in the minds of practitioners. We incorporated new material in many of the chapters specifically related to the Project Management Body of Knowledge (PMBOK) published by the Project Management Institute. This material reflects the tools, techniques, and processes that have gained widespread acceptance by the profession because of their proven value and usefulness.

Noteworthy enhancements in the third edition include:

An expanded section regarding Lean project management in Chapter 8;

A new chapter, Chapter 16, discussing the use of simulation and the Project Team Builder software;

A detailed discussion on activity splitting and its advantages and disadvantages in project management;

Descriptions, with examples, of resource-scheduling heuristics such as the longest-duration first heuristic and the Activity Time (ACTIM) algorithm;

Examples that demonstrate the use of Excel Solver to model project management problems such as the time–cost tradeoff;

A description of project management courses at Columbia University and the Global Network for Advanced Management.

About the Authors

Professor Avraham Shtub holds the Stephen and Sharon Seiden Chair in Project Management. He has a B.Sc. in Electrical Engineering from the Technion–Israel Institute of Technology (1974), an MBA from Tel Aviv University (1978), and a Ph.D. in Management Science and Industrial Engineering from the University of Washington (1982).

He is a certified Project Management Professional (PMP) and a member of the Project Management Institute (PMI-USA). He is the recipient of the Institute of Industrial Engineering 1995 Book of the Year Award for his book Project Management: Engineering, Technology, and Implementation (coauthored with Jonathan Bard and Shlomo Globerson), Prentice Hall, 1994. He is the recipient of the Production Operations Management Society Wick Skinner Teaching Innovation Achievements Award for his book Enterprise Resource Planning (ERP): The Dynamics of Operations Management. His books on Project Management were published in English, Hebrew, Greek, and Chinese.

He is the recipient of the 2008 Project Management Institute Professional Development Product of the Year Award for the training simulator “Project Team Builder – PTB.”

Professor Shtub was a Department Editor for IIE Transactions and served on the Editorial Boards of the Project Management Journal, The International Journal of Project Management, IIE Transactions, and the International Journal of Production Research. He was a faculty member of the Department of Industrial Engineering at Tel Aviv University from 1984 to 1998, where he also served as chairman of the department (1993–1996). He joined the Technion in 1998 and served as the Associate Dean and head of the MBA program.

He has been a consultant to industry in the areas of project management, training by simulators, and the design of production-operations systems. He was invited to speak at special seminars on Project Management and Operations in Europe, the Far East, North America, South America, and Australia.

Professor Shtub visited and taught at Vanderbilt University, The University of Pennsylvania, Korean Institute of Technology, Bilkent University in Turkey, Otago University in New Zealand, Yale University, Universitat Politècnica de València, and the University of Bergamo in Italy.

Dr. Moshe Rosenwein has a B.S.E. from Princeton University and a Ph.D. in Decision Sciences from the University of Pennsylvania. He has worked in industry throughout his professional career, applying management science modeling and methodologies to business problems in supply chain optimization, network design, customer relationship management, and scheduling. He has served as an adjunct professor at Columbia University on multiple occasions over the past 20 years and developed a project management course for the School of Engineering that has been taught since 2009. He has also taught at Seton Hall University and Rutgers University. Dr. Rosenwein has published over 20 refereed papers and has delivered numerous talks at universities and conferences. In 2001, he led an industry team that was a semi-finalist in the Franz Edelman Award competition for the practice of management science.

Chapter 1 Introduction

1.1 Nature of Project Management

Many of the most difficult engineering and business challenges of recent decades have been to design, develop, and implement new systems of a type and complexity never before attempted. Examples include the construction of oil drilling platforms in the North Sea off the coast of Great Britain, the development of the manned space program in both the United States and the former Soviet Union, and the worldwide installation of fiber optic lines for broadband telecommunications. The creation of these systems with performance capabilities not previously available and within acceptable schedules and budgets has required the development of new methods of planning, organizing, and controlling events. This is the essence of project management.

A project is an organized endeavor aimed at accomplishing a specific nonroutine or low-volume task. Although projects are not repetitive, they may take significant amounts of time and, for our purposes, are sufficiently large or complex to be recognized and managed as separate undertakings. Teams have emerged as the way of supplying the needed talents. The use of teams complicates the flow of information and places additional burdens on management to communicate with and coordinate the activities of the participants.

The amount of time in which an individual or an organizational unit is involved in a project may vary considerably. Someone in operations may work only with other operations personnel on a project or may work with a team composed of specialists from various functional areas to study and solve a specific problem or to perform a secondary task.

Management of a project differs in several ways from management of a typical organization. The objective of a project team is to accomplish its prescribed mission and disband. Few firms are in business to perform just one job and then disappear. Because a project is intended to have a finite life, employees are seldom hired with the intent of building a career with the project. Instead, a team is pulled together on an ad-hoc basis from among people who normally have assignments in other parts of the organization. They may be asked to work full time on the project until its completion, or they may be asked to work only part time, such as two days a week, and spend the rest of the time at their usual assignments. A project may involve a short-term task that lasts only a matter of days, or it may run for years. After completion, the team normally disperses and its members return to their original jobs.

The need to manage large, complex projects, constrained by tight schedules and budgets, motivated the development of methodologies different from those used to manage a typical enterprise. The increasingly complex task of managing large-scale, enterprise-wide projects has led to the rise in importance of the project management function and the role of the project manager or project management office. Project management is increasingly viewed in both industry and government as a critical role on a project team and has led to the development of project management as a profession (much like finance, marketing, or information technology, for example). The Project Management Institute (PMI), a nonprofit organization, is in the forefront of developing project management methodologies and of providing educational services in the form of workshops, training, and professional literature.

1.2 Relationship Between Projects and Other Production Systems

Operations and production management contains three major classes of systems: (1) those designed for mass production, (2) those designed for batch (or lot) production, and (3) those designed for undertaking nonrepetitive projects common to construction and new product development. Each of these classes may be found in both the manufacturing and service sectors.

Mass production systems are typically designed around the specific processes used to assemble a product or perform a service. Their orientation is fixed and their applications are limited. Resources and facilities are composed of special-purpose equipment designed to perform the operations required by the product or the service in an efficient way. By laying out the equipment to parallel the natural routings, material handling and information processing are greatly simplified. Frequently, material handling is automated and the use of conveyors and monorails is extensive. The resulting system is capital intensive and very efficient in the processing of large quantities of specific products or services for which relatively little management and control are necessary. However, these systems are very difficult to alter should a need arise to produce new or modified products or to provide new services. As a result, they are most appropriate for operations that experience a high rate of demand (e.g., several hundred thousand units annually) as well as high aggregate demand (e.g., several million units throughout the life cycle of the system).

Batch-oriented systems are used when several products or services are processed in the same facility. When the demand rate is not high enough or when long-run expectations do not justify the investment in special-purpose equipment, an effort is made to design a more flexible system on which a variety of products or services can be processed. Because the resources used in such systems have to be adjusted (set up) when production switches from one product to another, jobs are typically scheduled in batches to save setup time. Flexibility is achieved by using general-purpose resources that can be adjusted to handle different processes. The complexity of operations planning, scheduling, and control is greater than in mass production systems as each product has its own routing (sequence of operations). To simplify planning, resources are frequently grouped together based on the type of processes that they perform. Thus, batch-oriented systems contain organizational units that specialize in a function or a process, as opposed to the product lines that are found in mass production systems. Departments such as metal cutting, painting, testing, and packaging/shipping are typical examples from the batch-oriented manufacturing sector, whereas word processing centers and diagnostic laboratories are examples from the service sector.

In the batch-oriented system, it is particularly important to pay attention to material handling needs because each product has its specific set of operations and routings. Material handling equipment, such as forklifts, is used to move in-process inventory between departments and work centers. The flexibility of batch-oriented systems makes them attractive for many organizations.

In recent years, flexible manufacturing systems have been quick to gain acceptance in some industrial settings. With the help of microelectronics and computer technology, these systems are designed to achieve mass production efficiencies in low-demand environments. They work by reducing setup times and automating material handling operations but are extremely capital intensive. Hence they cannot always be justified when product demand is low or when labor costs are minimal. Another approach is to take advantage of local economies of scale. Group technology cells, which are based on clustering similar products or components into families processed by dedicated resources of the facility, are one way to implement this approach. Higher utilization rates and greater throughput can be achieved by processing similar components on dedicated machines.

By way of contrast, systems that are subject to very low demand (no more than a few units) are substantially different from the first two mentioned. Because of the nonrepetitive nature of these systems, past experience may be of limited value so little learning takes place. In this environment, extensive management effort is required to plan, monitor, and control the activities of the organization. Project management is a direct outgrowth of these efforts.

It is possible to classify organizations based on their production orientation as a function of volume and batch size. This is illustrated in Figure 1.1.

Figure 1.1 Classification of production systems.


The borderlines between mass production, batch-oriented, and project-oriented systems are hard to define. In some organizations where the project approach has been adopted, several units of the same product (a batch) are produced, whereas other organizations use a batch-oriented system that produces small lots (the just-in-time approach) of very large volumes of products. To better understand the transition between the three types of systems, consider an electronics firm that assembles printed circuit boards in small batches in a job shop. As demand for the boards picks up, a decision is made to develop a flow line for assembly. The design and implementation of this new line is a project.

1.3 Characteristics of Projects

Although the Manhattan Project (the development of the first atomic bomb) is considered by many to be the first instance when modern project management techniques were used, ancient history is replete with examples. Some of the better known ones include the construction of the Egyptian pyramids, the conquest of the Persian Empire by Alexander the Great, and the building of the Temple in Jerusalem. In the 1960s, formal project management methods received their greatest impetus with the Apollo program and a cluster of large, formidable construction projects.

Today, activities such as the transport of American forces in operations in Iraq and Afghanistan, the pursuit of new treatments for AIDS and Ebola, and the development of the joint U.S.–Russian space station and the manned space mission to Mars are examples of projects with which most of us are familiar. Additional examples of a more routine nature include:

Selecting a software package

Developing a new office plan or layout

Implementing a new decision support system

Introducing a new product to the market

Designing an airplane, supercomputer, or work center

Opening a new store

Constructing a bridge, dam, highway, or building

Relocating an office or a factory

Performing major maintenance or repair

Starting up a new manufacturing or service facility

Producing and directing a movie

1.3.1 Definitions and Issues

As the list above suggests, a project may be viewed or defined in several different ways: for example, as “the entire process required to produce a new product, new plant, new system, or other specified results” (Archibald 2003) or as “a narrowly defined activity which is planned for a finite duration with a specific goal to be achieved” (General Electric Corporation 1983). Generally speaking, project management occurs when emphasis and special attention are given to the performance of nonrepetitive activities for the purpose of meeting a single set of goals, typically under constraints such as time and budget.

By implication, project management deals with a one-time effort to achieve a focused objective. How progress and outcomes are measured, though, depends on a number of critical factors. Typical among these are technology (specifications, performance, quality), time (due dates, milestones), and cost (total investment, required cash flow), as well as profits, resource utilization, market share, and market acceptance.

These factors and their relative importance are major issues in project management. They are based on the needs and expectations of the stakeholders: the individuals and parties interested in the problem the project is designed to solve or in the solution selected. With a well-defined set of goals, it is possible to develop appropriate performance measures and to select the right technology, the organizational structure, required resources, and people who will team up to achieve these goals. Figure 1.2 summarizes the underlying processes. As illustrated, most projects are initiated by a need. A new need may be identified by stakeholders such as a customer, the marketing department, or any member of an organization. When management is convinced that the need is genuine, goals may be defined, and the first steps may be taken toward putting together a project team. Most projects have several goals covering such aspects as technical and operational requirements, delivery dates, and cost. A set of potential projects to undertake should be ranked by stakeholders based on the relative importance of the goals and the perceived probability that each potential project will achieve each of the individual goals.
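The ranking step just described amounts to a weighted scoring computation: each candidate project's score is the sum, over all goals, of the goal's weight times the perceived probability of achieving it. The sketch below illustrates this; the goal names, weights, and probabilities are hypothetical assumptions, not values from the text.

```python
def rank_projects(goal_weights, candidates):
    """Rank projects best-first by the weighted probability of meeting each goal.

    goal_weights: {goal: relative importance}, summing to 1.
    candidates:   {project: {goal: perceived probability of achieving it}}.
    """
    scores = {
        name: sum(goal_weights[goal] * p for goal, p in probs.items())
        for name, probs in candidates.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical stakeholder weights over three goals
goal_weights = {"performance": 0.5, "schedule": 0.3, "cost": 0.2}

# Hypothetical assessed probabilities of each project achieving each goal
candidates = {
    "Project A": {"performance": 0.9, "schedule": 0.6, "cost": 0.7},
    "Project B": {"performance": 0.7, "schedule": 0.9, "cost": 0.9},
}

ranking = rank_projects(goal_weights, candidates)
```

Here Project B scores 0.80 against Project A's 0.77: its stronger schedule and cost prospects outweigh A's performance edge under these particular weights.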

Figure 1.2 Major processes in project management.


On the basis of these rankings and a derived set of performance measures for each goal, the technological alternatives are evaluated and a concept (or initial design) is developed along with a schedule and a budget for the project. This early phase of the project life cycle is known as the initiation phase, the front end of the project, or the conceptual phase. The next step is to integrate the design, the schedule, and the budget into a project plan specifying what should be done, by whom, at what cost, and when. As the plan is implemented, the actual accomplishments are monitored and recorded. Adjustments, aimed at keeping the project on track, are made when deviations or overruns appear. When the project terminates, its success is evaluated based on the predetermined goals and performance measures. Figure 1.3 compares two projects with these points in mind. In project 1, a “design to cost” approach is taken. Here, the budget is fixed and the technological goals are clearly specified. Cost, performance, and schedule are all given equal weight. In project 2, the technological goals are paramount and must be achieved, even if it means compromising the schedule and the budget in the process.

Figure 1.3 Relative importance of goals.


The first situation is typical of standard construction and manufacturing projects, whereby a contractor agrees to supply a system or a product in accordance with a given schedule and budget. The second situation is typical of “cost plus fixed fee” projects where the technological uncertainties argue against a contractor’s committing to a fixed cost and schedule. This arrangement is most common in a research and development (R&D) environment.

A well-designed organizational structure is required to handle projects as a result of their uniqueness, variety, and limited life span. In addition, special skills are required to manage them successfully. Taken together, these skills and organizational structures have been the catalyst for the development of the project management discipline. Some of the accompanying tools and techniques, though, are equally applicable in the manufacturing and service sectors.

Because projects are characterized by a “one-time only” effort, learning is limited and most operations never become routine. This results in a need for extensive management involvement throughout the life cycle of the project. In addition, the lack of continuity leads to a high degree of uncertainty.

1.3.2 Risk and Uncertainty

In project management, it is common to refer to very high levels of uncertainty as sources of risk. Risk is present in most projects, especially in the R&D environment. Without trying to sound too pessimistic, it is prudent to assume that what can go wrong will go wrong. Principal sources of uncertainty include random variations in component and subsystem performance, inaccurate or inadequate data, and the inability to forecast satisfactorily as a result of lack of experience. Specifically, there may be:

1. Uncertainty in scheduling. Changes in the environment that are impossible to forecast accurately at the outset of a project are likely to have a critical impact on the length of certain activities. For example, subcontractor performance or the time it takes to obtain a long-term loan is bound to influence the length of various subtasks. The availability of scarce resources may also add to uncertainty in scheduling. Methods are needed to deal with problematic or unstable time estimates. Probability theory and simulation both have been used successfully for this purpose, as discussed in Chapter 9.

2. Uncertainty in cost. Limited information on the duration of activities makes it difficult to predict the amount of resources needed to complete them on schedule. This translates directly into an uncertainty in cost. In addition, the expected hourly rate of resources and the cost of materials used to carry out project tasks may possess a high degree of variability.

3. Technological uncertainty. This form of uncertainty is typically present in R&D projects in which new (not thoroughly tested and approved) technologies, methods, equipment, and systems are developed or used. Technological uncertainty may affect the schedule, the cost, and the ultimate success of the project. The integration of familiar technologies into one system or product may cause technological uncertainty as well. The same applies to the development of software and its integration with hardware.
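The scheduling uncertainty described in item 1 can be explored with simulation, as Chapter 9 discusses in detail. The sketch below is a minimal Monte Carlo example: each activity duration is drawn from a triangular distribution built from three-point (optimistic, most likely, pessimistic) estimates, and the run yields a mean completion time and a 90th-percentile figure. The three activities and their estimates are hypothetical.

```python
import random

def simulate_completion(activities, n_runs=10_000, seed=1):
    """Monte Carlo estimate of total duration for activities done in sequence.

    Each activity is (optimistic, most likely, pessimistic) in weeks,
    modeled with a triangular distribution.
    """
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(low, high, mode) for low, mode, high in activities)
        for _ in range(n_runs)
    )
    mean = sum(totals) / n_runs
    p90 = totals[int(0.9 * n_runs)]  # duration achievable with ~90% confidence
    return mean, p90

# Hypothetical three-activity chain with unstable time estimates
activities = [(2, 4, 8), (3, 5, 10), (1, 2, 4)]
mean, p90 = simulate_completion(activities)
```

The gap between the mean and the 90th percentile is one concrete way to quantify the schedule risk that deterministic plans hide.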

There are other sources of uncertainty, including those of an organizational and political nature. New regulations might affect the market for a project, whereas the turnover of personnel and changes in the policies of one or more of the participating organizations may disrupt the flow of work.

To gain a better understanding of the effects of uncertainty, consider the three projects mentioned earlier. The transport of American armed forces in Operation Iraqi Freedom faced extreme political and logistical uncertainties. In the initial stages, none of the planners had a clear idea of how many troops would be needed or how much time was available to put the troops in place. Also, it was unknown whether permission would be granted to use NATO air bases or even to fly over European and Middle Eastern countries, or how much tactical support would be forthcoming from U.S. allies.

The development of a treatment for AIDS is an ongoing project fraught with technological uncertainty. Hundreds of millions of dollars have already been spent with little progress toward a cure. As expected, researchers have taken many false steps, and many promising paths have turned out to be dead ends. Lengthy trial procedures and duplicative efforts have produced additional frustration. If success finally comes, it is unlikely that the original plans or schemes will have predicted its form.

The design of the U.S.–Russian space station is an example in which virtually every form of uncertainty is present. Politicians continue to play havoc with the budget, while other stakeholders like special interest groups (both friendly and hostile) push their individual agendas; schedules get altered and

rearranged; software fails to perform correctly; and the needed resources never seem to be available in adequate supply. Inflation, high turnover rates, and scaled-down expectations take their toll on the internal workforce, as well as on the legion of subcontractors.

The American Production and Inventory Control Society has, tongue-in-cheek, fashioned the following laws in an attempt to explain the consequences of uncertainty on project management.

Laws of Project Management

1. No major project is ever installed on time, within budget, or with the same staff that started it. Yours will not be the first.

2. Projects progress quickly until they become 90% complete, then they remain at 90% complete forever.

3. One advantage of fuzzy project objectives is that they let you avoid the embarrassment of estimating the corresponding costs.

4. When things are going well, something will go wrong.

When things just cannot get any worse, they will.

When things seem to be going better, you have overlooked something.

5. If project content is allowed to change freely, then the rate of change will exceed the rate of progress.

6. No system is ever completely debugged. Attempts to debug a system inevitably introduce new bugs that are even harder to find.

7. A carelessly planned project will take three times longer to complete than expected; a carefully planned project will take only twice as long.

8. Project teams detest progress reporting because it vividly manifests their lack of progress.

1.3.3 Phases of a Project

A project passes through a life cycle that may vary with size and complexity and with the style established by the organization. The names of the various phases may differ but typically include those shown in Figure 1.4. To begin, there is an initiation or conceptual design phase during which the organization realizes that a project may be needed or receives a request from a customer to propose a plan to perform a project; in this phase, alternative technologies and operational solutions are evaluated, and the most promising are selected based on performance, cost, risk, and schedule considerations. Next there is an advanced development or preliminary system design phase in which the project manager (and perhaps a staff if the project is complex) plans the project to a level of detail sufficient for initial scheduling and budgeting. If the project is approved, it then enters a more detailed design phase, a production phase, and a termination phase.

Figure 1.4 Relationship between project life cycle and cost.


In Figure 1.4, the five phases in the life cycle of a project are presented as a function of time. The cost during each phase depends on the specifics, but usually the majority of the budget is spent during the production phase. However, most of this budget is committed during the advanced development phase and the detailed design phase, before the actual work takes place. Management plays a vital role during the conceptual design phase, the advanced development phase, and the detailed design phase. The importance of this involvement in defining goals, selecting performance measures, evaluating alternatives (including the no-go alternative of not undertaking the project), selecting the most promising alternative, and planning the project cannot be overemphasized. Pressures to start the “real work” on the project, that is, to begin the production (or execution) phase as early as possible, may lead to the selection of the wrong technological or operational alternatives and consequently to high cost and schedule risks as a result of committing resources without adequate planning.

In most cases, a work breakdown structure (WBS) is developed during the conceptual design phase. The WBS is a document that divides the project work into major hardware, software, data, and service elements. These elements are further divided and a list is produced identifying all tasks that must be accomplished to complete the project. The WBS helps to define the work to be performed and provides a framework for planning, budgeting, monitoring, and control. Therefore, as the project advances, schedule and cost performance can be compared with plans and budgets. Table 1.1 shows an abbreviated WBS for an orbital space laboratory vehicle.

TABLE 1.1 Partial WBS for Space Laboratory

Index      Work element
1.0        Command module
2.0        Laboratory module
3.0        Main propulsion system
3.1        Fuel supply system
3.1.1      Fuel tank assembly
3.1.1.1    Fuel tank casing
3.1.1.2    Fuel tank insulation
4.0        Guidance system
5.0        Habitat module
6.0        Training system
7.0        Logistic support system
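A WBS like Table 1.1 can be held in software as a simple index-to-description mapping, because the dotted index already encodes the hierarchy: the parent of each element, and hence the leaf tasks that are actually scheduled and budgeted, can be recovered from the indices alone. The helper functions below are an illustrative sketch (reproducing only a subset of Table 1.1), not part of the text.

```python
# Subset of Table 1.1, held as index -> description
wbs = {
    "1.0": "Command module",
    "3.0": "Main propulsion system",
    "3.1": "Fuel supply system",
    "3.1.1": "Fuel tank assembly",
    "3.1.1.1": "Fuel tank casing",
    "3.1.1.2": "Fuel tank insulation",
}

def parent(index):
    """Return the parent index of a WBS element, or None for a top-level one.

    Top-level elements use the "n.0" convention, so "3.1" rolls up to "3.0".
    """
    parts = index.split(".")
    if len(parts) == 2:
        return None if parts[1] == "0" else parts[0] + ".0"
    return ".".join(parts[:-1])

def leaf_tasks(wbs):
    """Elements with no children: the tasks that are scheduled and costed."""
    parents = {parent(index) for index in wbs}
    return sorted(index for index in wbs if index not in parents)
```

With this subset, `leaf_tasks(wbs)` returns the command module plus the two fuel tank elements, since every other entry has at least one child.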

The detailed project definition, as reflected in the WBS, is examined during the advanced development phase to determine the skills necessary to achieve the project’s goals. Depending on the planning horizon, personnel from other parts of the organization may be used temporarily to accomplish the project. However, previous commitments may limit the availability of these resources. Other strategies might include hiring new personnel or subcontracting various work elements, as well as leasing equipment and facilities.

1.3.4 Organizing for a Project

A variety of structures are used by organizations to perform project work. The actual arrangement may depend on the proportion of the company’s business that is project oriented, the scope and duration of the underlying tasks, the capabilities of the available personnel, preferences of the decision makers, and so on. The following five possibilities range from no special structure to a totally separate project organization.

1. Functional organization. Many companies are organized as a hierarchy with functional departments that specialize in a particular type of work, such as engineering or sales (see Figure 1.5). These departments are often broken down into smaller units that focus on special areas within the function. Upper management may divide a project into work tasks and assign them to the appropriate functional units. The project is then budgeted and managed through the normal management hierarchy.

Figure 1.5 Portion of a typical functional organization.


2. Project coordinator. A project may be handled through the organization as described above, but with a special appointee to coordinate it. The project is still funded through the normal channels and the functional managers retain responsibility and authority for their portion of the work. The coordinator meets with the functional managers and provides direction and impetus for the project and may report its status to higher management.

3. Matrix organization. In a matrix organization, a project manager is responsible for completion of the project and is often assigned a budget. The project manager essentially contracts with the functional managers for completion of specific tasks and coordinates project efforts across the functional units. The functional managers assign work to employees and coordinate work within their areas. These arrangements are depicted schematically in Figure 1.6.

4. Project team. A particularly significant project (development of a new product or business venture) that will have a long duration and requires the full-time efforts of a group may be supervised by a project team. Full-time personnel are assigned to the project and are physically located with other team members. The project has its own management structure and budget as though it were a separate division of the company.

5. Projectized organization. When the project is of strategic importance, extremely complex and of long duration, and involves a number of disparate organizations, it is advisable to give one person complete control of all the elements necessary to accomplish the stated goals. For example, when Rockwell International was awarded two multimillion-dollar contracts (the Apollo command and service modules, and the second stage of the Saturn launch vehicle) by NASA, two separate programs were set up in different locations of the organization. Each program was under a division vice president and had its own manufacturing plant and staff of specialists. Such an arrangement takes the idea of a self-sufficient project team to an extreme and is known as a projectized organization.

Table 1.2 enumerates some advantages and disadvantages of the two extremes: the functional and projectized organizations. Companies that are frequently involved in a series of projects and occasionally shift personnel around often elect to use a matrix organization. This type of organization provides the flexibility to assign employees to one or more projects. In this arrangement, project personnel maintain a permanent reporting relationship that connects vertically to a supervisor in a functional area, who directs the scope of their work. At the same time, each person is assigned to one or more projects and has a horizontal reporting relationship to the manager of a particular project, who coordinates his or her participation in that project. Pay and career advancement are developed within a particular discipline even though a person may be assigned to different projects. At times, this dual reporting relationship can give rise to a host of personnel problems and create conflicts.

Figure 1.6 Typical matrix organization.

TABLE 1.2 Advantages and Disadvantages of Two Organizational Structures

Functional organization

Advantages:

Efficient use of technical personnel

Career continuity and growth for technical personnel

Good technology transfer between projects

Good stability, security, and morale

Disadvantages:

Weak customer interface

Weak project authority

Poor horizontal communications

Discipline (technology) oriented rather than program oriented

Slower work flow

Projectized organization

Advantages:

Good project schedule and cost control

Single point for customer contact

Rapid reaction time possible

Simpler project communication

Training ground for general management

Disadvantages:

Uncertain technical direction

Inefficient use of specialists

Insecurity regarding future job assignments

Poor crossfeed of technical information between projects

1.4 Project Manager

The presence of uncertainty coupled with limited experience and hard-to-find data makes project management a combination of art, science, and, most of all, logical thinking. A good project manager must be familiar with a large number of disciplines and techniques. Breadth of knowledge is particularly important because most projects have technical, financial, marketing, and organizational aspects that inevitably conspire to derail the best of plans.

The role of the project manager may start at different points in the life cycle of a project. Some managers are involved from the beginning, helping to select the best technological and operational alternatives for the project, form the team, and negotiate the contracts. Others may begin at a later stage and be asked to execute plans that they did not have a hand in developing. At some point, though, most project managers deal with the basic issues: scheduling, budgeting, resource allocation, resource management, and stakeholder management (e.g., human relations and negotiations).

It is an essential and perhaps the most difficult part of the project manager’s job to pay close attention to the big picture without losing sight of critical details, no matter how slight. In order to efficiently and effectively achieve high-level project goals, project managers must prioritize the concerns of key stakeholders while managing change that inevitably arises during a project’s life cycle. A project manager is an integrator and needs to trade off different aspects of the project each time a decision is called for. Questions such as, “How important is the budget relative to the schedule?” and “Should more resources be acquired to avoid delays at the expense of a budget overrun, or should a slight deviation in performance standards be tolerated as long as the project is kept on schedule and on budget?” are common.

Some skills can be taught, other skills are acquired only with time and experience, and yet other skills are very hard to learn or to acquire, such as the ability to lead a team without formal authority and the ability to deal with high levels of uncertainty without panic. We will not dwell on these but simply point them out, as we define fundamental principles and procedures.

Nevertheless, one of our basic aims is to highlight the practical aspects of project management and to show how modern organizations can function more effectively by adopting them. In so doing, we hope to provide all members of the project team with a comprehensive view of the field.

1.4.1 Basic Functions

The PMI (2012) identifies ten knowledge areas that the discipline must address:

1. Integration management

2. Scope management

3. Time management

4. Cost management

5. Quality management

6. Human resource management

7. Communication management

8. Risk management

9. Procurement management

10. Stakeholder management

Managing a project is a complex and challenging assignment. Because projects are one-of-a-kind endeavors, there is little in the way of experience, normal working relationships, or established procedures to guide participants. A project manager may have to coordinate many diverse efforts and activities to achieve project goals. People from various disciplines and from various parts of the organization who have never worked together may be assigned to a project for different spans of time. Subcontractors who are unfamiliar with the organization may be brought in to carry out major tasks. A project may involve thousands of interrelated activities performed by people who are employed by any one of several different subcontractors or by the sponsoring organization.

Project leaders must have an effective means of identifying and communicating the planned activities and their interrelationships. A computer-based scheduling and monitoring system is usually essential. Network techniques such as CPM (critical path method) and PERT (program evaluation and review technique) are likely to figure prominently in such systems. CPM was developed in 1957 by J. E. Kelley of Remington Rand and M. R. Walker of DuPont to aid in scheduling maintenance shutdowns of chemical plants. PERT was developed in 1958 under the sponsorship of the U.S. Navy Special Projects Office as a management tool for scheduling and controlling the Polaris missile program. Collectively, their value has been demonstrated time and again during both the planning and the execution phases of projects.
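To illustrate the mechanics behind such network techniques, the following sketch computes a critical path with a CPM-style forward and backward pass. The activities, durations, and precedence relations are hypothetical, and durations are in arbitrary time units.

```python
# Minimal CPM sketch: forward and backward pass over a small activity
# network. Activities, durations, and precedences are hypothetical.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest finish time of each activity.
early_finish = {}
for act in ["A", "B", "C", "D"]:  # topological order
    early_start = max((early_finish[p] for p in predecessors[act]), default=0)
    early_finish[act] = early_start + durations[act]

project_duration = max(early_finish.values())

# Backward pass: latest finish times; zero-slack activities are critical.
late_finish = {}
for act in ["D", "C", "B", "A"]:  # reverse topological order
    successors = [s for s, preds in predecessors.items() if act in preds]
    late_finish[act] = min((late_finish[s] - durations[s] for s in successors),
                           default=project_duration)

slack = {a: late_finish[a] - early_finish[a] for a in durations}
critical_path = [a for a in ["A", "B", "C", "D"] if slack[a] == 0]

print(project_duration)  # 9
print(critical_path)     # ['A', 'C', 'D']
```

Activity B has two units of slack, so it could be delayed without affecting the nine-unit project duration; A, C, and D have none and form the critical path.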

1.4.2 Characteristics of Effective Project Managers

The project manager is responsible for ensuring that tasks are completed on time and within budget, but often has no formal authority over those who actually perform the work. He or she, therefore, must have a firm understanding of the overall job and rely on negotiation and persuasion skills to influence the array of contractors, functionaries, and specialists assigned to the project. The skills that a typical project manager needs are summarized in Figure 1.7; the complexity of the situation is depicted in Figure 1.8, which shows the interactions between some of the stakeholders: client, subcontractor, and top management.

The project manager is a lightning rod, frequently under a storm of pressure and stress. He or she must deal effectively with the changing priorities of the client, the anxieties of his or her own management, ever fearful of cost and schedule overruns or technological failures, and the divided loyalties of the personnel assigned to the team. The ability to trade off conflicting goals and to find the optimal balance between conflicting positions is probably the most important skill of the job.

In general, project managers require enthusiasm, stamina, and an appetite for hard work to withstand the onslaught of technical and political concerns. Where possible, they should have seniority and position in the organization commensurate with that of the functional managers with whom they must deal. Regardless of whether they are coordinators within a functional structure or managers in a matrix structure, they will frequently find their formal authority incomplete. Therefore, they must have the blend of technical, administrative, and interpersonal skills as illustrated in Figure 1.7 to furnish effective leadership.

1.5 Components, Concepts, and Terminology

Although each project has a unique set of goals, there is enough commonality at a generic level to permit the development of a unified framework for planning and control. Project management techniques are designed to handle the common processes and problems that arise during a project’s life cycle. This does not mean, however, that one versed in such techniques will be a successful manager. Experts are needed to collect and interpret data, negotiate contracts, arrange for resources, manage stakeholders, and deal with a wide range of technical and organizational issues that impinge on both the cost and the schedule.

The following list contains the major components of a “typical” project.

Project initiation, selection, and definition

Identification of needs

Mapping of stakeholders (who they are, what their needs and expectations are, how much influence and power they have, and to what extent they will be engaged and involved in the project)

Figure 1.7 Important skills for the project manager.


Figure 1.8 Major interactions of project stakeholders.

Development of (technological and operational) alternatives

Evaluation of alternatives based on performances, cost, duration, and risk

Selection of the “most promising” alternatives

Estimation of the life cycle cost (LCC) of the promising alternatives

Assessment of risk of the promising alternatives

Development of a configuration baseline

“Selling” the configuration and getting approval

Project organization

Selection of participating organizations

Structuring the work content of the project into smaller work packages using a WBS

Allocation of WBS elements to participating organizations and assigning managers to the work packages

Development of the project organizational structure and associated communication and reporting facilities

Analysis of activities

Definition of the project’s major tasks

Development of a list of activities required to complete the project’s tasks

Development of precedence relations among activities

Development of a network model

Development of higher level network elements (hammock activities, subnetworks)

Selection of milestones

Updating the network and its elements

Project scheduling

Development of a calendar

Assignment of resources to activities and estimation of activity durations

Estimation of activity performance dates

Monitoring actual progress and milestones

Updating the schedule

Resource management

Definition of resource requirements

Acquisition of resources

Allocation of resources among projects/activities

Monitoring resource use and cost

Technological management

Development of a configuration management plan

Identification of technological risks

Configuration control

Risk management and control

Total quality management (TQM)

Project budgeting

Estimation of direct and indirect costs

Development of a cash flow forecast

Development of a budget

Monitoring actual cost

Project execution and control

Development of data collection systems

Development of data analysis systems

Execution of activities

Data collection and analysis

Detection of deviations in cost, configuration, schedule, and quality

Development of corrective plans

Implementation of corrective plans

Forecasting of project cost at completion

Project termination

Evaluation of project success

Recommendation for improvements in project management practices

Analysis and storage of information on actual cost, actual duration, actual performance, and configuration

Each of these activities is discussed in detail in subsequent chapters. Here, we give an overview with the intention of introducing important concepts and the relationships among them. We also mention some of the tools developed to support the management of each activity.

1. Project initiation, selection, and definition. This process starts with identifying a need for a new service, product, or system. The trigger can come from any number of sources, including a current client, line personnel, or a proposed request from an outside organization. The trigger can come from one or more stakeholders who may have similar or conflicting needs and expectations. If the need is considered important and feasible solutions exist, then the need is translated into technical specifications. Next, a study of alternative solution approaches is initiated. Each alternative is evaluated based on a predetermined set of performance measures, and the most promising compose the “efficient frontier” of possible solutions. An effort is made to estimate the performances, duration, costs, and risks associated with the efficient alternatives. Cost estimates for development, production (or purchasing), maintenance, and operations form the basis of a life cycle cost (LCC) model used for selecting the “optimal” alternative.

Because of uncertainty, most of the estimates are likely to be problematic. A risk assessment may be required if high levels of uncertainty are present. The risk associated with an unfavorable outcome is defined as the probability of that outcome multiplied by the cost associated with it. A proactive risk management approach means that major risk drivers should be identified early in the process, and contingency plans should be prepared to handle unfavorable events if and when they occur.
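The definition above can be turned into a short computation. In this sketch the outcomes, probabilities, and costs are hypothetical; the point is that ranking outcomes by probability times cost identifies the major risk drivers that deserve contingency plans.

```python
# Expected risk of each unfavorable outcome: probability times its cost.
# Outcomes, probabilities, and costs are illustrative assumptions.
outcomes = [
    ("supplier delay", 0.30, 50_000),
    ("test failure",   0.10, 200_000),
    ("key staff loss", 0.05, 120_000),
]

risks = {name: p * cost for name, p, cost in outcomes}
total_exposure = sum(risks.values())

# Rank risk drivers so contingency planning targets the largest first.
ranked = sorted(risks, key=risks.get, reverse=True)

print(ranked[0])  # test failure
```

Note that the low-probability "test failure" outcome dominates the ranking because of its high cost, which is exactly why risk is measured as the product rather than by probability alone.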

Once an alternative is chosen, design details are fleshed out during the concept formulation and definition phase of the project. Preliminary design efforts end with a configuration baseline. This configuration (the principal alternative) has to satisfy the needs and expectations of the most important stakeholders and be accepted and approved by management. A well-structured selection and evaluation process, in which all relevant parties are involved, increases the probability of management approval. A generic flow diagram for the processes of project initiation, selection, and definition is presented in Figure 1.9.

Figure 1.9 Major activities in the conceptual design phase.


2. Project organization. Many stakeholders, ranging from private firms and research laboratories to public utilities and government agencies, may participate in a particular project. In the advanced development phase, it is common to define the work content [statement of work (SOW)] as a set of tasks and to array them hierarchically in a treelike form known as the WBS. The relationship between participating organizations, known as the organizational breakdown structure (OBS), is similarly represented.

In the OBS, the lines of communication between and within organizations are defined, and procedures for work authorization and report preparation and distribution are established. Finally, lower-level WBS elements are assigned to lower-level OBS elements to form work packages and a responsibility matrix is constructed, indicating which organizational unit is responsible for which WBS element.
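The assignment of WBS elements to OBS units can be sketched as a small responsibility matrix. All WBS element and OBS unit names below are hypothetical.

```python
# Responsibility-matrix sketch: map lower-level WBS elements to the OBS
# units responsible for them. Element and unit names are hypothetical.
work_packages = {
    ("1.1 structural design", "Engineering"),
    ("1.2 software module",   "Software group"),
    ("2.1 prototype build",   "Manufacturing"),
    ("2.2 acceptance tests",  "Quality assurance"),
}

# Rows are WBS elements, columns are OBS units; "R" marks responsibility.
wbs = sorted({w for w, _ in work_packages})
obs = sorted({o for _, o in work_packages})
matrix = {w: {o: ("R" if (w, o) in work_packages else "")
              for o in obs} for w in wbs}

print(matrix["1.1 structural design"]["Engineering"])  # R
```

Each row having exactly one "R" mirrors the idea that every work package is owned by a single organizational unit, even though many units may contribute to it.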

At the end of the advanced development phase, a more detailed cost estimate and a long-range budget proposal are prepared and submitted for management approval. A positive response signals the go-ahead for detailed planning and organizational design. This includes the next five functions.

3. Analysis of activities. To assess the need for resources and to prepare a detailed schedule, it is necessary to develop a detailed list of activities that are to be performed. These activities should be aimed at accomplishing the WBS tasks in a logical, economically sound, and technically feasible manner. Each task defined in the initial planning phase may consist of one or more activities. Feasibility is ensured by introducing precedence relations among activities. These relations can be represented graphically in the form of a network model.

Completion of an important activity may define a milestone and is represented in the network model. Milestones provide feedback in support of project control and form the basis for budgeting, scheduling, and resource management. As progress is made, the model has to be updated to account for the inclusion of new activities in the WBS, the successful completion of tasks, and any changes in design, organization, and schedule as a result of uncertainty, new needs, or new technological and political developments.

4. Project scheduling. The expected execution dates of activities are important from both a financial (acquisition of the required funds) and an operational (acquisition of the required resources) point of view. Scheduling of project activities starts with the definition of a calendar specifying the working hours per day, working days per week, holidays, and so on. The expected duration of each activity is estimated, and a project schedule is developed based on the calendar, precedence relations among activities, and the expected duration of each activity. The schedule specifies the starting and ending dates of each activity and the accompanying slack or leeway. This information is used in budgeting and resource management. The schedule is used as a basis for work authorization and as a baseline against which actual progress is measured. It is updated throughout the life cycle of the project to reflect actual progress.
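The role of the calendar can be illustrated by translating an activity's duration in working days into calendar dates. This sketch assumes a five-day working week with no holidays; the start date and duration are hypothetical.

```python
from datetime import date, timedelta

# Calendar sketch: translate a duration in working days into a finish
# date, skipping weekends. Holidays would be handled by extending the
# skip test. Start date and duration are hypothetical.
def add_working_days(start: date, working_days: int) -> date:
    d = start
    remaining = working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

project_start = date(2017, 3, 6)  # a Monday
activity_duration = 7             # working days

finish = add_working_days(project_start, activity_duration)
print(finish)  # 2017-03-15
```

Seven working days starting on a Monday span a weekend, so the calendar finish falls nine calendar days later, which is precisely the gap a working-day calendar is meant to capture.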

5. Resource management. Activities are performed by resources so that before any concrete steps can be taken, requirements have to be identified. This means defining one or more alternatives for meeting the estimated needs of each activity (the duration of an activity may be a function of the resources assigned to perform it). Based on the results, and in light of the project schedule, total resource requirements are estimated. These requirements are the basis of resource management and resource acquisition planning.

When requirements exceed expected availability, schedule delays may occur unless the difference is made up by acquiring additional resources or by subcontracting. Alternatively, it may be possible to reschedule activities (especially those with slack) so as not to exceed expected resource availability. Other considerations, such as minimizing fluctuations in resource usage and maximizing resource utilization, may be applicable as well.
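Detecting the periods in which requirements exceed expected availability is a simple aggregation over the schedule. In this sketch, the activities, their scheduled periods, and the availability figure are hypothetical; overloaded periods are the candidates for rescheduling activities with slack or acquiring additional resources.

```python
# Resource-loading sketch: sum the per-period requirements of scheduled
# activities and flag periods that exceed expected availability.
# Activity data and the availability figure are hypothetical.
activities = [
    {"name": "design", "start": 0, "end": 3, "engineers": 2},
    {"name": "build",  "start": 2, "end": 6, "engineers": 3},
    {"name": "test",   "start": 5, "end": 8, "engineers": 2},
]
available = 4  # engineers available in any period

horizon = max(a["end"] for a in activities)
load = [0] * horizon
for a in activities:
    for t in range(a["start"], a["end"]):
        load[t] += a["engineers"]

overloaded = [t for t, demand in enumerate(load) if demand > available]
print(load)        # [2, 2, 5, 3, 3, 5, 2, 2]
print(overloaded)  # [2, 5]
```

Here the overloads occur only where activities overlap (periods 2 and 5); shifting one of the overlapping activities within its slack would level the profile without delaying the project.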

During the execution phase, resources are allocated periodically to projects and activities in accordance with a predetermined timetable. However, because actual and planned use may differ, it is important to monitor and compare progress to plans. Low utilization as well as higher-than-planned costs or consumption rates indicate problems and should be brought to the immediate attention of management. Large discrepancies may call for significant alterations in the schedule.

6. Technological management. Once the technological alternatives are evaluated and a consensus forms, the approved configuration is adopted as a baseline. From the baseline, plans for project execution are developed, tests to validate operational and technical requirements are designed, and contingency plans for risky areas are formulated. Changes in needs or in the environment may trigger modifications to the configuration. Technological management deals with execution of the project to achieve the approved baseline. Principal functions include the evaluation of proposed changes, the introduction of approved changes into the configuration baseline, and development of a total quality management (TQM) program. TQM involves the continuous effort to prevent defects, to improve processes, and to guarantee a final result that fits the specifications of the project and the expectations of the client.

7. Project budgeting. Money is the most common resource used in a project. Equipment and labor have to be acquired, and suppliers have to be paid. Overhead costs have to be assigned, and subcontractors have to be put on the payroll. Preparation of a budget is an important management activity that results in a time-phased plan summarizing expected expenditures, income, and milestones.

The budget is derived by estimating the cost of activities and resources. Because the schedule of the project relates activities and resource use to the calendar, the budget is also related to the same calendar. With this information, a cash flow analysis can be performed, and the feasibility of the predicted outlays can be tested. If the resulting cash flow or the resulting budget is not acceptable, then the schedule should be modified. This is frequently done by delaying activities that have slack.
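The feasibility test of the predicted outlays can be sketched as a comparison of cumulative planned cost against cumulative available funds. All figures below are hypothetical.

```python
# Cash-flow sketch: cumulative planned expenditures per period checked
# against the funds expected to be available. Figures are hypothetical
# ($K); periods with a shortfall call for rescheduling slack activities.
planned_cost = [40, 80, 80, 40, 30]         # outlays per period
funds_available = [50, 110, 180, 250, 300]  # cumulative funding

cumulative = []
total = 0
for c in planned_cost:
    total += c
    cumulative.append(total)

infeasible = [t for t, (need, have) in
              enumerate(zip(cumulative, funds_available)) if need > have]

print(cumulative)  # [40, 120, 200, 240, 270]
print(infeasible)  # [1, 2]
```

Periods 1 and 2 outrun the available funds, so some of the spending planned for them would be pushed later, typically by delaying activities that have slack, until the cumulative cost curve stays under the funding curve.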

Once an acceptable budget is developed, it serves as the basic financial tool for the project. Credit lines and loans can be arranged, and the cost of financing the project can be assessed. As work progresses, information on actual cost is accumulated and compared with the budget. This comparison forms the basis for controlling costs. The sequence of activities performed during the detailed design phase is summarized in Figure 1.10.

Figure 1.10 Major activities in the detailed design phase.


8. Project execution and control. The activities described so far compose the necessary steps in initializing and preparing a project for execution. A feasible schedule that integrates task deadlines, budget considerations, resource availability, and technological requirements, while satisfying the precedence relations among activities, provides a good starting point for a project.

It is important, however, to remember that successful implementation of the initial schedule is subject to unexpected or random effects that are difficult (or impossible) to predict. Even in situations in which all resources are under the direct control of management and activated according to plan, unexpected circumstances or events may sharply divert progress from the original plan. For resources that are not under complete management control, much higher levels of uncertainty may exist, for example, a downturn in the economy, labor unrest, technology breakthroughs or failures, and new environmental regulations.

Project control systems are designed with three purposes in mind: (1) to detect current deviations and to forecast future deviations between actual progress and the project plans; (2) to trace the source of these deviations; and (3) to support management decisions aimed at putting the project back on the desired course.

Project control is based on the collection and analysis of the most recent performance data. Actual progress, actual cost, resource use, and technological achievements should be monitored continually. The information gleaned from this process is compared with updated plans across all aspects of the project. Deviations in one area (e.g., schedule overrun) may affect the performance and deviations in other areas (e.g., cost overrun).

In general, all operational data collected by the control system are analyzed, and, if deviations are detected, a scheme is devised to put the project back on course. The existing plan is modified accordingly, and steps are taken to monitor its implementation.

During the life cycle of the project, a continuous effort is made to update original estimates of completion dates and costs. These updates are used by management to evaluate the progress of the project and the efficiency of the participating organizations. These evaluations form the basis of management forecasts regarding the expected success of the project at each stage of its life cycle.
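One common way to quantify such forecasts is earned value analysis, which compares the budgeted value of the work actually performed with the value planned to date and with the actual cost. The figures in this sketch are hypothetical; the formulas (schedule and cost variances, performance indices, and estimate at completion) are the standard earned-value ones.

```python
# Earned-value sketch with the standard quantities: PV (planned value),
# EV (earned value), AC (actual cost), BAC (budget at completion).
# All figures are hypothetical ($K).
BAC = 1000.0  # total budget
PV = 400.0    # value of work scheduled to date
EV = 350.0    # value of work actually performed
AC = 420.0    # actual cost of that work

schedule_variance = EV - PV  # negative: behind schedule
cost_variance = EV - AC      # negative: over budget
spi = EV / PV                # schedule performance index
cpi = EV / AC                # cost performance index
eac = BAC / cpi              # forecast of cost at completion

print(schedule_variance)  # -50.0
print(round(eac))         # 1200
```

With a cost performance index below one, the simple projection BAC / CPI forecasts a completion cost of about $1,200K against a $1,000K budget, an early warning that would trigger corrective plans.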

Schedule deviations might have implications for a project’s finances or profit and loss (P&L) if payments are based on actual progress. If a schedule overrun occurs and payments are delayed, then cash flow difficulties might result. Schedule overruns might also cause excess load on resources as a result of the accumulation of work content. A well-designed control system in the hands of a well-trained project manager is the best way to counteract the negative effects of uncertainty.

9. Project termination. A project does not necessarily terminate as soon as its technical objectives are met. Management should strive to learn from past experience to improve the handling of future projects. A detailed analysis of the original plan, the modifications made over time, the actual progress, and the relative success of the project should be conducted. The underlying goal is to identify procedures and techniques that were not effective and to recommend ways to improve operations. An effort aimed at identifying missing or redundant managerial tools should also be initiated; new techniques for project management should be adopted when necessary, and obsolete procedures and tools should be discarded.

Information on the actual cost and duration of activities and the cost and utilization of resources should be stored in well-organized databases to support the planning effort in future projects. Only by striving for continuous improvement and organizational learning through programs based on past experience is competitiveness likely to persist in an organization. Policies, procedures, and tools must be updated on a regular basis.

1.6 Movement to Project-Based Work

Increased reliance on the use of project management techniques, especially for research and development, stems from the changing circumstances in which modern businesses must compete. Pinto (2002) pointed out that among the most important influences promoting a project orientation in recent years have been the following:

1. Shortened product life cycles. Products become obsolete at an increasingly rapid rate, requiring companies to invest ever-higher amounts in R&D and new product development.

2. Narrow product launch windows. When a delay of months or even weeks can cost a firm its competitive advantage, new products are often scheduled for launch within a narrow time band.

3. Huge influx of global markets. New global opportunities raise new global challenges, such as the increasing difficulty of being first to market with superior products.

4. Increasingly complex and technical problems. As technical advances are diffused into organizations and technical complexity grows, the challenge of R&D becomes increasingly difficult.

5. Low inflation. Corporate profits must now come less from raising prices year after year and more from streamlining internal operations to become ever more efficient.

Durney and Donnelly (2013) investigated the effects of rapid technological change on complex information technology projects. The impact of these and other economic factors has created conditions under which companies that use project management are flourishing. Their success has encouraged more and more organizations to give the discipline a serious look as they contemplate how to become “project savvy.” At the same time, they recognize that, for all the interest in developing a project-based outlook, there is a severe shortage of trained project managers needed to convert market opportunities into profits. Historically, lack of training, poor career ladders, strong political resistance from line managers, unclear reward structures, and almost nonexistent documentation and operating protocols made the decision to become a project manager a risky move at best and downright career suicide at worst. Increasingly, however, management writers such as Tom Peters and insightful corporate executives such as Jack Welch have become strong advocates of the project management role. Between their sponsorship and the business pressures for enhancing the project management function, there is no doubt that we are witnessing a groundswell of support that is likely to continue into the foreseeable future.

Recent Trends in Project Management

Like any robust field, project management is continuously growing and reorienting itself. Some of the more pronounced shifts and advances can be classified as follows:

1. Risk management. Developing more sophisticated up-front methodologies to better assess risk before significant commitment to the project.

2. Scheduling. New approaches to project scheduling, such as critical chain project management, that offer some visible improvements over traditional techniques.

3. Structure. Two important movements in organizational structure are the rise of the heavyweight project organization and the increasing use of project management offices.

4. Project team coordination. Two powerful advances in the area of project team development are the emphasis on cross-functional cooperation and the model of punctuated equilibrium as it pertains to intra-team dynamics. Punctuated equilibrium proposes that rather than evolution occurring gradually in small steps, real change comes about through long periods of status quo interrupted by some seismic event.

5. Control. Important new methods for tracking project costs relative to performance are best exemplified by earned value analysis. Although the technique has been around for some time, its wider diffusion and use are growing.

6. Impact of new technologies. Internet and web technologies have given rise to greater use of distributed and virtual project teams, groups that may never physically interact but must work in close collaboration for project success.

7. Lean project management. The work of teams of experts from academia and industry led to the development of the guide to lean enablers for managing engineering programs (2012). The list of these enablers and the way they should be implemented is an important step in the development and application of lean project management methodologies.

8. Process-based project management. The PMBOK (PMI Standards Committee 2012) views project management as a combination of the ten knowledge areas listed in Section 1.4.1. Each area is composed of a set of processes whose proper execution defines the essence of the field.

1.7 Life Cycle of a Project: Strategic and Tactical Issues

Because of the degree to which projects differ in their principal attributes, such as duration, cost, type of technology used, and sources of uncertainty, it is difficult to generalize the operational and technical issues they each face. It is possible, however, to discuss some strategic and tactical issues that are relevant to many types of projects. The framework for the discussion is the project life cycle or the major phases through which a “typical” project progresses. An outline of these phases is depicted in Figure 1.11 and elaborated on by Cleland and Ireland (2006), who identify the long-range (strategic) and medium-range (tactical) issues that management must consider. A synopsis follows.

Figure 1.11 Project life cycle.

1. Conceptual design phase. In this phase, a stakeholder (client, contractor, or subcontractor) initiates the project and evaluates potential alternatives. A client organization may start by identifying a need or a deficiency in existing operations and issuing a request for proposal (RFP).

The selection of projects at the conceptual design phase is a strategic decision based on the established goals of the organization, needs, ongoing projects, and long-term commitments and objectives. In this phase, expected benefits from alternative projects, assessment of cost and risks, and estimates of required resources are some of the factors weighed. Important action items include the initial “go/no go” decision for the entire project and “make or buy” decisions for components and equipment, development of contingency plans for high-risk areas, and the preliminary selection of subcontractors and other team members who will participate in the project.

In addition, upper management must consider the technological aspects, such as availability and maturity of the required technology, its performance, and expected usage in subsequent projects. Environmental factors related to government regulations, potential markets, and competition also must be analyzed.

The selection of projects is based on a variety of goals and performance measures, including expected cost, profitability, risk, and potential for follow-on assignments. Once a project is selected and its conceptual design is approved, work begins on the second phase where many of the details are ironed out.

2. Advanced development phase. In this phase, the organizational structure of the project is formed by weighing the tactical advantages and disadvantages of each possible arrangement mentioned in Section 1.3.4. Once a decision is made, lines of communication and procedures for work authorization and performance reporting are established. This leads to the framework in which the project is executed.

3. Detailed design phase. This is the phase in a project’s life cycle in which comprehensive plans are prepared. These plans consist of:

Product and process design

Final performance requirements

Detailed breakdown of the work structure

Scheduling information

Blueprints for cost and resource management

Detailed contingency plans for high-risk activities

Budgets

Expected cash flows

In addition—and most importantly—procedures and tools for executing, controlling, and correcting the project are developed. When this phase is completed, implementation can begin since the various plans should cover all aspects of the project in sufficient detail to support work authorization and execution.

The success of a project is highly correlated with the quality and the depth of the plans prepared during this phase. A detailed design review of each plan and each aspect of the project is, therefore, conducted before approval. A sensitivity analysis of environmental factors that contribute to uncertainty also may be needed. This analysis is typically performed as part of “what-if” studies using expert opinions and simulation as supporting mechanisms.

In most situations, the resources committed to the project are defined during the initial phases of its life cycle. Although these resources are used later, the strategic issues of how much to spend and at what rate are addressed here.

4. Production or execution phase. The fourth life-cycle phase involves the execution of plans and in most projects dominates the others in effort and duration. The critical strategic issue here relates to maintaining top management support, while the critical tactical issues center on the flow of communications within and among the participating organizations. At this level, the focus is on actual performance and changes in the original plans. Modifications can take different forms—in the extreme case, a project may be canceled. More likely, though, the scope of work, schedule, and budget will be adjusted as the situation dictates. Throughout this phase, management’s task is to assign work to the participating parties and to monitor actual progress against the baseline plans. The establishment and operation of a well-designed communications and control system are therefore necessary.

Support of the product or system throughout its entire life (logistic support) requires management attention in most engineering projects for which an operational phase is scheduled to follow implementation. The preparation for logistic support includes documentation, personnel training, maintenance, and initial acquisition of spare parts. Neglecting this activity or giving it only cursory attention can doom an otherwise successful venture.

5. Termination phase. In this phase, management’s goal is to consolidate what it has learned and translate this knowledge into ongoing improvements in the process. Current lessons and experience serve as the basis for improved practice. Although successful projects can provide valuable insights, failures can teach us even more. Databases that store and support the retrieval of project management information related to project cost, schedules, resource utilization, and so on are assets of an organization. Readily available, accurate information is a key factor in the success of future projects.

6. Operational phase. The operational phase is frequently outside the scope of a project and may be carried out by organizations other than those involved in the earlier life-cycle stages. If, for example, the project is to design and build an assembly line for a new model of automobile, then the operation of the line (i.e., the production of the new cars) will not be part of the project because running a mass production system requires a different type of management approach. Alternatively, consider the design and testing of a prototype electric vehicle. Here, the operational phase, which involves operating and testing the prototype, will be part of the project because it is a one-time effort aimed at a very specific goal. In any case, from the project manager’s point of view, the operational phase is the most crucial because it is here that a judgment is made as to whether the project has achieved its technical and operational goals.

Strategic issues such as long-term relationships with customers, as well as customer service and satisfaction, have a strong influence on upper management’s attitudes and decisions. Therefore, the project manager should be particularly aware of the need to open and maintain lines of communication between all parties, especially during this phase.

Other life-cycle models are also used, including the Spiral model (Boehm, 1986), which emphasizes prototyping, and Agile Project Management (2001), which emphasizes collaboration and communication, with particular application to software development.

1.8 Factors that Affect the Success of a Project

A study by Pinto and Slevin (1987) sought to find those factors that contribute most to a project’s success and to measure their significance over the life cycle. They found the following ten factors to be of primary importance. Additional insights are provided by Balachandra and Friar (1997) regarding new product development and by the Standish Group, which has focused on Information Technology (IT) projects since 1994 (the CHAOS reports, 1995–2013).

1. Project mission and goals. A well-defined, intelligible understanding of the project goals is the basis of planning and executing the project. Understanding the goals and the performance measures used in the evaluation is important for good coordination of efforts and for building organizational support. Therefore, starting at project initiation or the conceptual design phase of the project life cycle, the overall mission should be defined and explained to team members, contractors, and other participants.

2. Top management support. The competition for resources, coupled with the high levels of uncertainty typically found in the project environment, often leads to conflict and crisis. The continuous involvement of top management throughout the life cycle of the project increases their understanding of its mission and importance. This awareness, if translated into support, may prove invaluable in resolving problems when crises and conflicts arise or when uncertainty strikes. Therefore, continued, solid communication between the project manager and top management is a catalyst for the project to be a success.

3. Project planning. The translation of the project mission, goals, and performance measures into a workable (feasible) plan is the link between the initiation phase and the execution or production phase. A detailed plan that covers all aspects of the project—technical, financial, organizational, scheduling, communication, and control—is the basis of implementation. Planning does not end when execution starts because deviations from the original plans during implementation may call for replanning and updating from one period to the next. Thus, planning is a dynamic and continuous process that links changing goals and performance to the final results.

4. Client consultation. The ultimate user of the project is the final judge of its success. A project that was completed on time, according to the technical specifications, and within budget but was never (or rarely) used can certainly be classified as a failure. In the conceptual design phase of the project life cycle, client input is the basis for setting the mission and establishing goals. In subsequent phases, continual consultation with the client can help in correcting errors previously made in translating goals into performance measures. In many projects, the client is a group of project stakeholders, each with its own needs and expectations of the project. However, as a result of changing needs and conditions, a mission statement that accurately represented the client’s needs in the conceptual design phase may no longer be valid in the planning or implementation phases. As discussed in Chapter 6, the configuration management system provides the link between existing plans and change requests issued by the client, as well as by the project team.

5. Personnel issues. Satisfactory achievement of technical goals without violating schedule and budgetary constraints does not necessarily constitute a complete success, even if the stakeholders are satisfied. If relations among team members, between team members and the client, or between team members and other personnel in the organization are poor and morale problems are frequent, then project success is doubtful. Well-motivated teams with a sufficient level of commitment to the project and a good relationship with the other stakeholders are the key determinants of project success.

6. Technical issues. Understanding the technical aspects of the project and ensuring that members of the project team possess the necessary skills are important responsibilities of the project manager. Inappropriate technologies or technical incompatibility may affect all aspects of the project, including cost, schedule, system performance, and morale.

7. Client acceptance. Ongoing client consultation (as well as consultation with other important stakeholders) during the project life cycle increases the probability of success regarding user acceptance. In the final stages of implementation, the stakeholders evaluate the resulting project and decide whether it is acceptable. A project that is rejected at this point must be viewed as a failure.

8. Project control. The continuous flow of information regarding actual progress is a feedback mechanism that allows the project manager to cope with uncertainty. By comparing actual progress with current plans, the project manager can identify deviations, anticipate problems, and initiate corrective actions. Lower-than-planned achievements in technical areas as well as schedule and cost deviations detected early in the life cycle can help the project manager focus on the important issues. Plans can be updated or partially adjusted to keep the project on schedule, on budget, and on target with respect to its mission.

9. Communication. The successful transition between the phases of a project’s life cycle and good coordination among participants during each phase require a continuous exchange of information. In general, communication within the project team, with other parts of the organization, and between the project manager and the client is made easier when lines of authority are well defined. The organizational structure of the project should specify the communication channels and the information that should flow through each one. In addition, it should specify the frequency at which this information should be generated and transmitted. Formal communication lines and a positive working environment that enhances informal communication within the project team contribute to the success of a project.

10. Troubleshooting. The control system is designed to identify problem areas and, if possible, to trace their source through the organization. Because uncertainty is always a likely culprit, the development of contingency plans is a valuable preventive step. The availability of prepared plans and procedures for handling problems can reduce the effort required for dealing with them should they actually occur.

1.9 About the Book: Purpose and Structure

This book is designed to bridge the gap between theory and practice by presenting the tools and techniques most suited for modern project management. A principal goal is to give managers, engineers, and technology experts a greater appreciation of their roles by defining a common terminology and by explaining the interfaces between the underlying disciplines.

Theoretical aspects are covered at a level appropriate for a senior undergraduate course or a first-year graduate course in either an Engineering or an MBA program. Special attention is paid to the use and evaluation of specific tools with respect to their real-world applicability. Whether the book is adopted for a course or is read by practitioners who want to learn the “tools of the trade,” we tried to present the subject matter in a concise and fully integrated manner.

A simulation tool, called the Project Team Builder (PTB), can be used to integrate the different aspects of project management and to provide hands-on experience in using these tools in a dynamic, uncertain environment. The PTB software is available from Sandboxmodel (http://www.sandboxmodel.com/).

The book is structured along functional lines and offers an in-depth treatment of basic processes, the economic aspects of project selection and evaluation, the technological aspects of configuration management, and the various issues related to budgeting, scheduling, and control. By examining these functions and their organizational links, a comprehensive picture emerges of the relationship that exists between project planning and implementation.

The end of each chapter contains a series of discussion questions and exercises designed to stimulate thought and to test the readers’ grasp of the material. In some cases, the intent is to explore supplementary issues in a more open-ended manner. Also included at the end of each chapter is a team project centering on the design and construction of a solid waste disposal facility known as a thermal transfer plant. As the readers go from one chapter to the next, they are asked to address a particular aspect of project management as it relates to the planning of this facility.

Each of the remaining chapters deals with a specific component of project management or a specific phase in the project life cycle. A short description of Chapters 2 through 16 follows.

Chapter 2 focuses on process-based project management; it begins with a discussion of life-cycle models and their importance in planning, coordination, and control. We then introduce the concept of a process, which is a group of activities designed to transform a set of inputs consisting of data, technology, and resources into the desired outputs. The remainder of the chapter is devoted to the processes underlying the ten project management knowledge areas contained in the PMBOK. As we explain, these processes, along with an appropriate information system, constitute the cornerstones of process-based project management.

In Chapter 3, we address the economic aspects of projects and the quantitative techniques developed for analyzing a specific alternative. The long-term perspective is presented first by focusing on the time value of money. Investment evaluation criteria based on net present value, internal rate of return, and the payback period are discussed. Next, the short-term perspective is given by considering the role that cash flow analysis plays in evaluating projects and comparing alternatives. Ideas surrounding risk and uncertainty are introduced, followed by some concepts common to decision making, such as expected monetary value, utility theory, breakeven analysis, and diminishing returns. Specific decisions such as buy, make, rent, or lease are also elaborated.
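
The investment criteria named above can be sketched in a few lines of Python. This is a minimal illustration, not material from the text: the cash flows and the 10% discount rate below are invented values, and the payback period shown is the simple undiscounted variant.

```python
# Hedged sketch of two Chapter 3 investment criteria: net present value
# and the (undiscounted) payback period. All figures are hypothetical.

def npv(rate, cash_flows):
    """Net present value, where cash_flows[t] occurs at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """First year in which the cumulative cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None  # the initial investment is never recovered

flows = [-1000, 400, 400, 400, 400]  # initial outlay, then four annual returns
print(round(npv(0.10, flows), 2))    # NPV at a 10% discount rate
print(payback_period(flows))         # cumulative flow turns positive in year 3
```

A positive NPV at the chosen discount rate suggests the investment is worthwhile; the payback period ignores the time value of money, which is why the chapter treats it as a secondary criterion.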

The integration of LCC analysis into the project management system is covered in Chapter 4. LCC concepts and the treatment of uncertainty in the analysis are discussed, as well as classification schemes for cost components. The steps required in building LCC models are outlined and explained to facilitate their implementation. The idea that the cost of new product development is only a fraction of the total cost of ownership is a central theme of the chapter. The total LCC is determined largely in the early phases

of a project when decisions regarding product design and process selection are being made. Some of the issues discussed in this context include cost estimation and risk evaluation. The concept of the cost breakdown structure and how it is used in planning is also presented.

The selection of a project from a list of available candidates and the selection of a particular configuration for a specific project are two key management decisions. The purpose of Chapter 5 is to present several basic techniques that can be used to support this process. Checklists and scoring models are the simplest and the first to be introduced. This is followed by a presentation of the formal aspects of benefit-cost and cost-effectiveness analysis. Issues related to risk, and how to deal with them, tie all the material together. The chapter closes with a comprehensive treatment of decision trees. The strengths and weaknesses of each methodology are highlighted, and examples are given to demonstrate the computations.
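
The decision-tree rollback idea mentioned above reduces, at a single decision node, to choosing the branch with the best expected monetary value (EMV). The following sketch illustrates this with invented probabilities and payoffs; it is not an example from the book.

```python
# Hedged sketch of EMV-based choice at one decision node of a decision tree.
# The two alternatives and all figures ($1,000s) are hypothetical.

def emv(outcomes):
    """Expected monetary value of a chance node: list of (probability, payoff)."""
    return sum(p * v for p, v in outcomes)

in_house = emv([(0.6, 500), (0.4, -200)])    # success vs. cost overrun
subcontract = emv([(0.9, 150), (0.1, -50)])  # smaller but more predictable payoff

best = max(("in-house", in_house), ("subcontract", subcontract),
           key=lambda x: x[1])
print(best)
```

A full decision tree applies this rollback recursively from the leaves toward the root; the chapter also discusses why EMV alone can be misleading when attitudes toward risk matter, which motivates utility theory.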

It is rare that any decision is made on the basis of one criterion alone. To deal more thoroughly with situations in which many objectives, often in conflict with one another, must be juggled simultaneously, a value model that goes beyond simple checklists is needed. In Chapter 6, we introduce two of the most popular such models for combining multiple, possibly conflicting objectives into a single measure of performance. Multiattribute utility theory (MAUT) is the first presented. Basic theory is discussed along with the guiding axioms. Next, the concepts and assumptions behind the analytical hierarchy process (AHP) are detailed. A case study contained in the appendix documents the results of a project aimed at comparing the two approaches and points out the relative advantages of each.
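
As a small illustration of how AHP turns pairwise comparisons into priority weights, the sketch below uses the common row-geometric-mean approximation rather than the exact principal-eigenvector method described in Chapter 6. The 3×3 comparison matrix (cost vs. schedule vs. quality) is hypothetical.

```python
# Hedged sketch of AHP priority weights via the row-geometric-mean
# approximation. The comparison matrix is invented for illustration.
import math

def ahp_weights(matrix):
    """Approximate AHP priorities: normalized geometric means of the rows."""
    gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# comparisons[i][j] = how much more important criterion i is than criterion j
# (criteria: cost, schedule, quality); note the reciprocal structure.
comparisons = [
    [1,   3,   5],
    [1/3, 1,   2],
    [1/5, 1/2, 1],
]
print([round(w, 3) for w in ahp_weights(comparisons)])
```

The weights sum to one and preserve the ordering implied by the judgments; a full AHP treatment would also check the consistency ratio of the matrix.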

The OBS and the WBS are introduced in Chapter 7. The former combines several organizational units that reside in one or more organizations by defining communication channels for work authorization, performance reports, and assigning general responsibility for tasks. Questions related to the selection of the most appropriate organizational structure are addressed, and the advantages and disadvantages of each are presented. Next, the WBS of projects is discussed. This structure combines hardware, software, data, and services performed in a project into a hierarchical framework. It further facilitates identification of the critical relationships that exist among various project components. Subsequently, the combined OBS-WBS matrix is introduced, whereby each element in the lowest WBS level is assigned to an organizational unit at the lowest level of the OBS. This type of integration is the basis for detailed planning and control, as explained in subsequent chapters. We close with a discussion of human resources, focusing on a project manager’s responsibilities in this area.

In Chapter 8, the process by which the technological configuration of projects is developed and maintained is discussed. The first topic relates to the importance of time-based competition, the use of teams, and the role of QFD in engineering. We then show how tools such as benefit-cost analysis and MAUT can be used to select the best technological alternative from a set of potential candidates. Procedures used to handle engineering change requests via configuration management and configuration control are presented. Finally, the integration of quality management into the project and its relationship to configuration test and audit are highlighted.

Network analysis has played an important role in project scheduling over the past 50 years. In Chapter 9, we introduce the notions of activities, precedence relations, and task times, and show how they can be combined in an analytic framework to provide a mechanism for planning and control. The idea of a calendar and the relationship between activities and time are presented, first by Gantt charts and then by network models of the activity-on-arrow/activity-on-node type. This is followed by a discussion of precedence relations, feasibility issues, and the concepts of milestones, hammock activities, and subnetworks. Finally, uncertainty is introduced along with the PERT approach to estimating the critical path and the use of Monte Carlo simulation to gain a deeper understanding of a project’s dynamics.
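
The critical-path calculation at the heart of these network models can be sketched as a forward and a backward pass over the activity network. The four-activity network below, with its durations and precedence relations, is a made-up example, and the code assumes the activities are listed in topological order.

```python
# Hedged sketch of critical-path computation (forward/backward pass).
# Activities, durations, and precedence relations are hypothetical;
# dictionary keys are assumed to be in topological order.

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest finish of each activity.
early_finish = {}
for act in durations:
    es = max((early_finish[p] for p in predecessors[act]), default=0)
    early_finish[act] = es + durations[act]

project_length = max(early_finish.values())

# Backward pass: latest finish; activities with zero slack are critical.
late_finish = {}
for act in reversed(list(durations)):
    successors = [s for s in durations if act in predecessors[s]]
    late_finish[act] = min((late_finish[s] - durations[s] for s in successors),
                           default=project_length)

critical = [a for a in durations if late_finish[a] == early_finish[a]]
print(project_length, critical)  # 9 ['A', 'C', 'D']
```

PERT replaces the fixed durations with three-point estimates, and Monte Carlo simulation repeats this calculation over many sampled duration vectors.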

Chapter 10 opens with a discussion of the types of resources used in projects. A classification scheme is developed according to resource availability, and performance measures are suggested for assessing efficiency and effectiveness. Some general guidelines are presented as to how resources should be used to achieve better performance levels. The relationship between resources and their cost and the project schedule is analyzed, and mathematical models for resource allocation and leveling are described.

In Chapter 11, we deal with the budget as a tool by which organizational strategies, goals, policies, and constraints are transformed into an executable plan that relates task completions and capital expenditures to time. Techniques commonly used for budget development, presentation, and execution are discussed. Issues also examined are the relationship between the duration and timing of activities and the budget of a project, cash flow constraints and liabilities, and the interrelationship among several projects performed by a single organizational unit.

The execution of a project is frequently subject to unforeseen difficulties that cause deviation from the original plans. The focus of Chapter 12 is on project monitoring and control—a function that depends heavily on early detection of such deviations. The integration of OBS and WBS elements serves as a basis for the control system. Complementary components include a mechanism for tracing the source of each deviation and a forecasting procedure for assessing their implications if no corrective action is taken. Cost and schedule control techniques such as the earned value approach are presented and discussed.
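
The earned value approach mentioned above rests on a handful of standard quantities and ratios. The sketch below computes them for a hypothetical project status; the dollar figures are invented for illustration.

```python
# Hedged sketch of standard earned value calculations. The input figures
# below are hypothetical; negative variances signal trouble.

def earned_value_indices(pv, ev, ac):
    """Return cost/schedule variances and performance indices.

    pv: planned value (budgeted cost of work scheduled)
    ev: earned value  (budgeted cost of work performed)
    ac: actual cost   (actual cost of work performed)
    """
    return {
        "CV": ev - ac,   # cost variance (negative = over budget)
        "SV": ev - pv,   # schedule variance (negative = behind schedule)
        "CPI": ev / ac,  # cost performance index
        "SPI": ev / pv,  # schedule performance index
    }

status = earned_value_indices(pv=200_000, ev=180_000, ac=240_000)
print(status)  # this project is both over budget and behind schedule
```

Indices below 1.0 flag deviations early enough for corrective action, which is exactly the role the control system described in Chapter 12 is designed to play.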

Engineering projects where new technologies are developed and implemented are subject to high levels of uncertainty. In Chapter 13, we define R&D projects and highlight their unique characteristics. The typical goals of such projects are discussed, and measures of success are suggested. Techniques for handling risk, including the idea of parallel funding, are presented. The need for rework or repetition of some activities is discussed, and techniques for scheduling R&D projects are outlined. The idea of a portfolio is introduced, and tools used for portfolio management are discussed. A case study that involves screening criteria, project selection and termination criteria, and the allocation of limited resources is contained in the appendix.

A wide variety of software has been developed to assist the project manager. In Chapter 14, we discuss the basic functions and range of capabilities associated with these packages. A classification system is devised, and a process by which the most appropriate package can be selected for a project or an organization is outlined.

In Chapter 15, the need to terminate a project in a planned, orderly manner is discussed. The process by which information gathered in past projects can be stored, retrieved, and analyzed is presented. Post-mortem analysis is suggested as a vehicle by which continuous improvement can be achieved in an organization. The goal is to show how projects can be terminated so that the collective experience and knowledge can be transferred to future endeavors.

In Chapter 16, we present new developments in teaching project management in MBA and Engineering programs. First, we discuss the need to improve the way project management is taught. Next, we introduce Simulation-Based Training (SBT) as a way to gain “hands-on” experience in a controlled, safe environment in which the cost of errors is minimized and learning by doing is implemented. The Project Team Builder (PTB) simulator is then described, with a focus on the main features of this SBT tool. This is followed by two specific examples based on our experience using SBT and the PTB in the Global Network for Advanced Management (GNAM) New Product Development (NPD) course and in a project management course at the Columbia University School of Engineering.

It goes without saying that the huge body of knowledge in the area of project management cannot be condensed into a single book. Over the past 25 years alone, much has been written on the subject in technical journals, textbooks, company reports, and trade magazines. In an effort to cover some of this material, a bibliography of important works is provided at the end of each chapter. The interested reader can further his or her understanding of a particular topic by consulting these references.

TEAM PROJECT*

Thermal Transfer Plant

* The authors thank Warren Sharp and Ian St. Maurice for their help in writing this case study.

Introduction

To exercise the techniques used for project planning and control, the reader is encouraged to work out each aspect of the thermal transfer plant case study. At the end of each chapter, a short description of the relevant components of the thermal transfer plant is provided along with an assignment. If possible, the assignment should be done in groups of three or four to develop the interpersonal and organizational skills necessary for teamwork.

Not all of the information required for each assignment is given. Before proceeding, it may be necessary for the group to research a particular topic and to make some logical assumptions. Accordingly, there is no “correct solution” against which to compare recommendations and conclusions. Each assignment should be judged with respect to the availability of information and the strength of the underlying assumptions.

Total Manufacturing Solutions, Inc.

Total Manufacturing Solutions, Inc. (TMS) designs and integrates manufacturing and assembly plants. Its line of products and services includes the selection of manufacturing and assembly processes for new or existing products, the design and selection of manufacturing equipment, facilities design and layout, the integration of manufacturing and assembly systems, and the training of personnel and startup management teams. The broad range of services that TMS provides to its customers makes it a unique and successful organization. Its headquarters are in Nashville, Tennessee, with branches in New York and Los Angeles.

TMS began operations in 1980 as a consulting firm in the areas of industrial engineering and operations management. In the late 1990s, the company started its design and integration business. Recently it began promoting just-in-time systems and group technology-based manufacturing facilities. The organization structure of TMS is depicted in Figure 1.12; financial data are presented in Tables 1.3 and 1.4.

Figure 1.12 Simplified organization chart.


TABLE 1.3 TMS Financial Data: Income Statement

Income Statement ($1,000)

Year ending December 31, 2004

Net sales                       $47,350
Cost of goods sold:
  Direct labor                   26,600
  Overhead                        6,000
Total cost of goods sold         32,600
Gross profit                     14,750
General and administrative        5,350
Marketing                         4,900
Total operating expenses         10,250
Profit before taxes               4,500
Income tax (32%)                  1,440
Net profit                       $3,060

TABLE 1.4 TMS Financial Data: Balance Sheet

Balance Sheet ($1,000)

Year ending December 31, 2004

Assets
Current assets:
  Cash                           $1,100
  Accounts receivable             1,500
  Inventory                          12
  Other                               3
Total current assets              2,615
Net fixed assets                    325
Total assets                      2,940

Liabilities
Current liabilities:
  Notes payable                      35
  Accounts payable                  137
  Accruals                           90
Total current liabilities           262
Long-term debt                       50
Capital stock and surplus         1,300
Earned surplus                    1,328
Net worth                         2,628
Total liabilities                $2,940

TMS employs approximately 500 people, 300 of whom are in the Nashville area, 100 in New York, and 100 in Los Angeles. Approximately 50% of these are industrial, mechanical, and electrical engineers, and approximately 10% also have MBA degrees, mostly with operations management concentrations. The other employees are technicians, support personnel, and managers. Some information on labor costs follows.

Engineers       $50,000/year
Technicians     $25/hour
Administrators  $35,000/year
Other           $10/hour

These rates do not include fringe benefits or overhead. Moreover, bear in mind that individual salaries are a function of experience, position, and seniority within the company.

In the past 10 years, TMS averaged 20 major projects annually. Each project consisted of the design of a new manufacturing facility, the selection, installation, and integration of equipment, and the supervision of startup activities. In addition, TMS experts are consultants to more than 100 clients, many of whom own TMS-designed facilities.

The broad technical basis of TMS in the areas of mechanical, electrical, and industrial engineering and its wide-ranging experience are its most important assets. Management believes that the company is an industry leader in automatic assembly, material handling, industrial robots, command and control, and computer-integrated manufacturing. TMS uses subcontractors mainly for software development and, when necessary, for fabrication, because it does not have any shops or manufacturing facilities.

Recently, management has decided to expand its line of operations and services into the area of recycling and waste management. New regulations in many states are forcing the designers of manufacturing plants to analyze and solve problems related to waste generation and disposal.

Your team has been selected by TMS-Nashville to work on this new line of business. Your first assignment is to analyze the needs and opportunities in your geographical area. On the basis of a literature search and conversations with local manufacturers, environmentalists, and politicians, and making whatever assumptions you believe are necessary, write a report and prepare a presentation that answers the following questions:

1. How well does this new line of business fit into TMS operations? What are the existing or potential opportunities?

2. How should a waste management project be integrated into TMS’s current organizational structure?

3. What are the problems that TMS might encounter should it embark on this project? How might these problems affect the project? How might they affect TMS’s other business activities?

4. If a project is approved in waste management, then what would its major life-cycle phases be?

Any assumptions regarding TMS’s financial position and borrowing power, personnel, previous experience, and technological capabilities relating, for example, to computer-aided design, should be stated explicitly.

Discussion Questions

1. Explain the difference between a project and a batch-oriented production system.

2. Describe three projects, one whose emphasis is on technology, one with emphasis on cost, and one with emphasis on scheduling.

3. Identify a project that is “risk free.” Explain why this project is not subject to risk (low probability of undesired results, low cost of undesired results, or both).

4. In the text, it is stated that a project manager needs a blend of technical, administrative, and interpersonal skills. What attributes do you believe are desirable in an engineering specialist working on a project in a matrix organization?

5. Write a job description for a project manager.

6. Identify a project with which you are familiar, and describe its life-cycle phases and between 5 and 10 of the most important activities in each phase of its life cycle.

7. Find a recent news article on an ongoing project, evaluate the management’s performance, and explain how the project could be better organized and managed.

8. Analyze the factors that affect the success of projects as a function of the project’s life cycle. Explain in which phase of the life cycle each factor is most important, and why.

9. In a matrix management structure, the person responsible for a specific activity on a specific project has two bosses. What considerations in a well-run matrix organization reduce the resulting potential for conflict?

10. Outline a strategy for effective communication between project personnel and the customer (client).

11. Select a project and discuss what you think are the interfaces between the engineers and managers assigned to the project.

12. The project plan is the basis for monitoring, controlling, and evaluating the project’s success once it has started. List the principal components or contents of a project plan.

Exercises

1. 1.1 What type of production system would be associated with the following processes?

1. A production line for window assemblies

2. A special order of 150 window assemblies

3. Supplying 1,000 window assemblies per month throughout the year

2. 1.2 You have decided to open a self-service restaurant. Identify the stages of this project and the type of production system involved in each stage, from startup until the restaurant is running well enough to sell.

3. 1.3 Select two products and two services and describe the needs that generated them. Give examples of other products and services that could satisfy those needs equally well.

4. 1.4 You have placed an emergency order for materials from a company that is located 2,000 miles away. You were told that it will be shipped by truck and will arrive within 48 hours, the time at which the materials are needed. Discuss the issues surrounding the probability that the shipment will reach you within the 48 hours. How would things change if shipment were by rail?

5. 1.5 Your plumber recommends that you replace your cast iron pipes with copper pipes. He claims that although the price for the job is $7,000, he has to add $2,000 for unforeseen expenses. Discuss his proposal.

6. 1.6 In statistical analysis, the coefficient of variation is considered to be a measure of uncertainty. It is defined as the ratio of the standard deviation to the mean. Select an activity, say driving from your home to school, generate a frequency distribution for that activity, and calculate its mean and the standard deviation. Analyze the uncertainty.
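
A minimal sketch of the calculation this exercise asks for, using hypothetical commute times (the data values are illustrative assumptions, not measurements from the text):

```python
import statistics

# Hypothetical commute times in minutes over ten trips (illustrative data only).
times = [32, 28, 35, 41, 30, 29, 38, 33, 27, 36]

mean = statistics.mean(times)
stdev = statistics.stdev(times)  # sample standard deviation
cv = stdev / mean                # coefficient of variation: stdev / mean

print(f"mean={mean:.1f} min, stdev={stdev:.1f} min, CV={cv:.3f}")
```

A larger coefficient of variation indicates greater relative uncertainty in the activity's duration, independent of the units in which it is measured.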

7. 1.7 Specify the type of uncertainties involved in completing each of the following activities successfully.

1. Writing a term paper on a subject that does not fall within your field of study

2. Undertaking an anthropological expedition in an unknown area

3. Driving to the airport to pick up a friend

4. Buying an item at an auction

8. 1.8 Your professor told you that the different departments in the school of business are organized in a matrix structure. Functional areas include organizational behavior, mathematics (operations research and statistics), and computer science. Develop an organization chart that depicts these functions along with the management, marketing, accounting, and finance departments. What is the product of a business school? Who is the customer?

9. 1.9 Provide an organizational structure for a school of business administration that reflects either a functional orientation or a product orientation.

10. 1.10 Assume that a recreational park is to be built in your community and that the city council has given you the responsibility of selecting a project manager to lead the effort. Write a job description for the position. Generate a list of relevant criteria that can be used in the selection process, and evaluate three fictitious candidates (think about three of your friends).

11. 1.11 Write an RFP soliciting proposals for preparing your master’s thesis. The RFP should take into account the need for tables, figures, and multiple revisions. Make sure that it adequately describes the nature of the work and what you expect so that there will be no surprises once a contract is signed.

12. 1.12 Explain how you would select the best proposal submitted in Exercise 1.11. That is, what measures would you use, and how would you evaluate and aggregate them with respect to each proposal?

13. 1.13 The following list of activities is relevant to almost any project. Identify the phase in which each is typically performed, and order them in the correct sequence.

1. Developing the network

2. Selecting participating organizations

3. Developing a calendar

4. Developing corrective plans

5. Executing activities

6. Developing a budget

7. Designing a project

8. Recommending improvement steps

9. Monitoring actual performance

10. Managing the configuration

11. Allocating resources to activities

12. Developing the WBS

13. Estimating the LCC

14. Getting the customer’s approval for the design

15. Establishing milestones

16. Estimating the activity duration

14. 1.14 Drawing from your personal experience, give two examples for each of the following situations.

1. The original idea was attractive but not sufficiently important to invest in.

2. The idea was compelling but was not technically feasible.

3. The idea got past the selection process but was too expensive to implement.

4. The idea was successfully transformed into a completed project.

15. 1.15 List two projects with which either you or your organization is involved that are currently in each of the various life-cycle phases.

16. 1.16 Select three national, state, or local projects (e.g., construction of a new airport) that were completed successfully and identify the factors that affect their success. Discuss the attending risks, uncertainty, schedule, cost, technology, and resources usage.

17. 1.17 Identify three projects that have failed, and discuss the reasons for their failure.

Bibliography

Elements of Project Management

Balachandra, R. and J. H. Friar, “Factors for Success in R&D Projects and New Product Development: A Contextual Framework,” IEEE Transactions on Engineering Management, Vol. 44, No. 3, pp. 276–287, 1997.

Boehm, B., “A Spiral Model of Software Development and Enhancement,” ACM SIGSOFT Software Engineering Notes, Vol. 11, No. 4, pp. 14–24, August 1986.

Durney, C. P. and R. G. Donnelly, “Managing the Effects of Rapid Technological Change on Complex Information Technology Projects,” Journal of the Knowledge Economy, pp. 1–24, 2013.

Fleming, Q. W. and J. M. Koppelman, “The Essence of Evolution of Earned Value,” Cost Engineering, Vol. 36, No. 11, pp. 21–27, 1994.

General Electric Corporation, “Guidelines for Use of Program/Project Management in Major Appliance Business Group,” in D.J. Cleland and W.R. King (Editors), System Analysis and Project Management, McGraw-Hill, New York, 1983.

Keller R. T., “Cross-Functional Project Groups in Research and New Product Development: Diversity, Communications, Job Stress, and Outcomes,” Academy of Management Journal, Vol. 44, pp. 547–555, 2001.

Pinto, J. K., “Project Management 2002,” Research Technology Management, Vol. 45, No. 2, pp. 22–37, 2002.

Pinto, J. K. and D. P. Slevin, “Critical Factors in Successful Project

Implementation,” IEEE Transactions on Engineering Management, Vol. EM-34, No. 1, pp. 22–27, 1987.

Schmitt, T., T. D. Klastorin, and A. Shtub, “Production Classification System: Concepts, Models and Strategies,” International Journal of Production Research, Vol. 23, No. 3, pp. 563–578, 1985.

Standish Group, The CHAOS Reports, 1995–2013.

Books on Project Management

Archibald, R. D., Managing High-Technology Programs and Projects, Third Edition, John Wiley & Sons, New York, 2003.

Badiru, A. B., Project Management in Manufacturing and High Technology Operations, Second Edition, John Wiley & Sons, New York, 1996.

Cleland, D. I., Guide to the Project Management Body of Knowledge (PMBOK Guide), Project Management Institute, Newtown Square, PA, 2002.

Cleland, D. I. and L. R. Ireland, Project Management: Strategic Design and Implementation, Fifth Edition, McGraw-Hill, New York, 2006.

Kerzner, H., Project Management: A Systems Approach to Planning, Scheduling and Control, Seventh Edition, John Wiley & Sons, New York, 2000.

Kezsbom, D. S. and K. A. Edward, The New Dynamic Project Management: Winning Through the Competitive Advantage, John Wiley & Sons, New York, 2001.

Meredith, J. R. and S. J. Mantel, Jr., Project Management: A Managerial Approach, Fourth Edition, John Wiley & Sons, New York, 1999.

Oehmen, J., Oppenheim, B. W., Secor, D., Norman, E., Rebentisch, E., Sopko, J. A., . . . and Driessnack, J., The Guide to Lean Enablers for Managing Engineering Programs, 2012.

PMI Standards Committee, A Guide to the Project Management Body of Knowledge (PMBOK), Project Management Institute, Newtown Square, PA, 2012 (http://www.PMI.org).

Randolph, W. A. and B. Z. Posner, Checkered Flag Projects: Ten Rules for Creating and Managing Projects that Win!, Second Edition, Prentice Hall, Upper Saddle River, NJ, 2002.

Appendix 1A Engineering Versus Management

1A.1 Nature of Management

Practically everyone has some conception of the meaning of the word management and to some extent understands that it requires talents that are distinct from those needed to perform the work being managed. Thus, a person may be a first-class engineer but unable to manage a high-tech company successfully. Similarly, a superior journeyman may make an inferior foreman. We all have read about cases in which an enterprise failed not because the owner did not know the field, but because he was a poor manager. To cite just one example, Thomas Edison was perhaps the foremost inventor of the last century, but he lost control of the many businesses that grew from his inventions because of his inability to plan and to direct and supervise others.

So what exactly is management, and what does a good manager have to know? Although there is no simple answer to this question, there is general agreement that, to a large extent, management is an art grounded in application, judgment, and common sense. To be more precise, it is the art of getting things done through other people. To work effectively through others, a manager must be able to perform competently the seven functions listed in Table 1A.1. Of those, planning, organizing, staffing, directing, and controlling are fundamental. If any of these five functions is lacking, then the management process will not be effective. Note that these are necessary but not sufficient functions for success. Getting things done through people requires the manager also to be effective at motivating and leading others.

The relative importance of the seven functions listed in Table 1A.1 may vary with the level of management. Top management success requires an emphasis on planning, organizing, and controlling. Middle-level management activities are more often concerned with staffing, directing, and leading.

Lower-level managers must excel at motivating and leading others.

1A.2 Differences between Engineering and Management

Many people start out as engineers and, over time, work their way up the management ladder. As Table 1A.2 shows, the skills required by a manager are very different from those normally associated with engineering (Badawy and Trystram 1995, Eisner 2002).

TABLE 1A.1 Functions of Management

Planning. The manager first must decide what must be done. This means setting short- and long-term goals for the organization and determining how they will be met. Planning is a process of anticipating problems, analyzing them, estimating their likely impacts, and determining actions that will lead to the desired outcomes, objectives, or goals.

Organizing. Establishing interrelationships between people and things in such a way that human and material resources are effectively focused toward achieving the goals of the enterprise. Organizing involves grouping activities and people, defining jobs, delegating the appropriate authority to each job, specifying the reporting structure and interrelationships between these jobs, and providing the policies or other means for coordinating these jobs with each other. In organizing, the manager establishes positions and decides which duties and responsibilities properly belong to each.

Staffing. Staffing involves appraising and selecting candidates, setting the compensation and reward structure for each job, training personnel, conducting performance appraisals, and performing salary administration. Turnover in the workforce and changes in the organization make it an ongoing function.

Directing. Because no one can predict with certainty the problems or opportunities that will arise, duties must naturally be expressed in general terms. Managers must guide and direct subordinates and resources toward the goals of the enterprise. This involves explaining, providing instructions, pointing out proper directions for the future, clarifying assignments, orienting personnel in the most effective directions, and channeling resources.

Motivating. A principal function of lower management is to instill in the workforce a commitment and enthusiasm for pursuing the goals of the organization. Motivating refers to the interpersonal skills to encourage outstanding human performance in others and to instill in them an inner drive and a zeal to pursue the goals and objectives of the various tasks that may be assigned to them.

Leading. This means encouraging others to follow the example set for them, with great commitment and conviction. Leading involves setting examples for others, establishing a sense of group pride and spirit, and instilling allegiance.

Controlling. Actual performance will normally differ from the original plan, so checking for deviations and taking corrective actions is a continuing responsibility of management. Controlling involves monitoring achievements and progress against the plans, measuring the degree of compliance with the plans, deciding when a deviation is significant, and taking actions to realign operations with the plans.

TABLE 1A.2 Engineering Versus Management

What engineers do: Minimize risks, emphasize accuracy and mathematical precision.
What managers do: Take calculated risks, rely heavily on intuition, take educated guesses, and try to be “about right.”

What engineers do: Exercise care in applying sound scientific methods, on the basis of reproducible data.
What managers do: Exercise leadership in making decisions under widely varying conditions, based on sketchy information.

What engineers do: Solve technical problems on the basis of their own individual skills.
What managers do: Solve techno-people problems on the basis of skills in integrating the talents and behaviors of others.

What engineers do: Work largely through their own abilities to get things done.
What managers do: Work through others to get things done.

Engineering involves hands-on contact with the work. Managers are always one or more steps removed from the shop floor and can influence output and performance only through others. An engineer can derive personal satisfaction and gratification in his or her own physical creations, and from the work itself. Managers must learn to be fulfilled through the achievements of those whom they supervise. Engineering is a science. It is characterized by precision, reproducibility, proven theories, and experimentally verifiable results. Management is an art. It is characterized by intuition, studied judgments, unique events, and one-time occurrences. Engineering is a world of things; management is a world of people. People have feelings, sentiments, and motives that may cause them to behave in unpredictable or unanticipated ways. Engineering is based on physical laws, so that most events occur in an orderly, predictable manner.

1A.3 Transition from Engineer to Manager

Engineers are often propelled into management out of economic considerations or a desire to take on more responsibility. Some organizations have a dual career ladder that permits good technical people to remain in the laboratory and receive the same financial rewards that attend supervisory promotions. This type of program has been most successful in research-intensive environments such as those found at the IBM Research Center in Yorktown Heights and the Department of Energy research laboratories around the United States.

Nevertheless, when an engineer enters management, new perspectives must be acquired and new motivations must be found. He or she must learn to enjoy leadership challenges, detailed planning, helping others, taking risks, making decisions, working through others, and using the organization. In contrast to the engineer, the manager achieves satisfaction from directing the work of others (not things), exercising authority (not technical knowledge), and conceptualizing new ways to do things (not doing them). Nevertheless, experience indicates that the following three critical skills are the ones that engineers find most troublesome to acquire: (1) learning to trust others, (2) learning how to work through others, and (3) learning how to take satisfaction in the work of others.

The step from engineering to management is a big one. To become successful managers, engineers usually must develop new talents, acquire new values, and broaden their point of view. This takes time, on-the-job and off-the-job training, and careful planning. In short, engineers can become good managers only through effective career planning.

Additional References

Badawy, M. K. and D. Trystram, Developing Managerial Skills in Engineers and Scientists, John Wiley & Sons, New York, 1995.

Eisner, H., Essentials of Project and Systems Engineering Management, Second Edition, John Wiley & Sons, New York, 2002.

Jones, G. R. and J. M. George, Essentials of Contemporary Management, McGraw-Hill, New York, 2003.

Moore, D. C. and D. S. Davies, “The Dual Ladder: Establishing and Operating It,” Research Management, Vol. 20, No. 4, pp. 21–27, 1977.

Chapter 2 Process Approach to Project Management

2.1 Introduction

A project is an organized set of activities aimed at accomplishing a specific, non-routine, or low-volume task such as designing an e-commerce website or building a hypersonic transport. Projects are aimed at meeting the objectives and expectations of their stakeholders. Because of the need for specialization, as well as the number of hours usually required, most projects are undertaken by multidisciplinary teams. In some cases, the team members belong to the same organization, but often, at least a portion of the work is assigned to subcontractors, consultants, or partner firms. Leading the effort is the project manager, who is responsible for the successful completion of all activities.

Coordination between the individuals and organizations involved in a project is a complex task and a major component of the project manager’s job. To ensure success, integration of deliverables produced at different geographical locations, at different times, by different people, in different organizations is required.

Projects are typically performed under time pressure, limited budgets, tight cash flows, and uncertainty using shared resources. The triple constraint of time, cost, and scope (i.e., project deliverables that are required by the end-customers or end-users) requires the project manager to repeatedly make tradeoffs between these factors with the implicit goal of balancing risks and benefits. Moreover, disagreements among stakeholders on the best course of action to follow can lead to conflicting direction and poor resource allocation decisions. Thus, a methodology is required to support the management of projects. Early efforts in developing such a methodology focused on specific tools for different aspects of the problem. Tools for project scheduling, such as the Gantt chart and the critical path method, were developed along with tools for resource allocation, project budgeting, and project control. Each is covered in considerable detail in the chapters that follow.

Nevertheless, although it is important to gain an appreciation of these tools, each is limited in the view that it provides the project manager. For example, tools for scheduling rarely address problems related to configuration management, and tools for budgeting typically do not address problems associated with quality. The integration of these tools in a way that supports decision making at each stage in a project’s life cycle is essential for understanding the dynamics of the project environment. This chapter identifies the relevant management processes and outlines a framework for applying them to both single and multiple projects.

A project management process is a collection of tools and techniques that are used on a predefined set of inputs to produce a predefined set of outputs. The processes are interconnected and interdependent. The full collection forms a methodology that supports all of the aspects of project management throughout a project’s life cycle—from the initiation of a new project to its (successful) completion and termination.

The framework that we propose to organize and study the relevant processes is based on the ten knowledge areas identified by the Project Management Institute (PMI) and published as the Project Management Body of Knowledge (PMBOK). PMI also conducts a certification program based on the PMBOK. A Project Management Professional certificate can be earned by passing an exam and accumulating relevant experience in the project management discipline.

The benefit gained from implementing the full set of project management processes has been evident in many organizations. Although each project is a one-time effort, process-oriented management promotes learning and teamwork through the use of a common set of tools and techniques. A detailed description of their use is provided in the remainder of the book. Each chapter deals with a specific knowledge area and highlights the tools and techniques in the form of mathematical models, templates, charts, and checklists used in the processes developed for that area.

2.1.1 Life-Cycle Models

Because a project is a transitory effort designed to achieve a specific set of goals, it is convenient to identify the phases that accompany the transformation of an idea or a concept into a product or system. The collection of such phases is defined as the project life cycle.

A life-cycle model is a set of stages or phases through which a family of projects goes, in which each phase may be performed sequentially or concurrently. The project life cycle defines the steps required to achieve the project goals as well as the contents of each step. The end of each phase often serves as a checkpoint or milestone for assessing progress, as the actual status of the project is compared with the original plan in an effort to identify deviations in cost, schedule, and performance so that any necessary corrective action can be taken.

For software projects, the spiral life-cycle model proposed by Boehm (1988) and further refined by Muench (1994) has gained widespread popularity. The model, shown in Figure 2.1, is very useful for repetitive development in which a project goes through the same phases several times, each time becoming more complete; that is, closer to the final product. It has two main distinguishing features. The first is a cyclic approach for incrementally expanding a system’s definition and degree of implementation while decreasing its level of risk. The other is a set of anchor point milestones for ensuring stakeholder commitment to feasible and mutually satisfactory solutions. The general idea is to ensure that the riskier aspects of the project are completed first to avoid failures in an advanced phase.

Figure 2.1 Spiral life-cycle model.


Construction projects also have their own set of life-cycle models, such as the one proposed by Morris (1988). In this model, a project is divided into four stages to be performed in sequence.

Stage I (Feasibility). This stage terminates with a go/no go decision for the project. It includes a statement of goals and objectives, conceptual design, high-level feasibility studies, the formulation of strategy, and the approval of both the design and the strategy by upper management.

Stage II (Planning and Design). This stage terminates with the awarding of major contracts. It includes detailed design, cost and schedule planning, contract definitions, and the development of the road map for execution.

Stage III (Production). This stage terminates with the completion of the facility. It includes construction, installation of utilities, equipment acquisition and setup, landscaping, roadwork, interior appointments, and operational testing.

Stage IV (Turnover and Startup). This stage terminates with full operation of the facility. It includes final testing and the development of a maintenance plan.

Clearly this model does not fit research and development (R&D) projects or software projects because of the sequential nature in which the work is performed. In R&D projects, for example, it is often necessary to undertake several activities in parallel with the hope that at least one will turn out to meet technological and cost objectives.

Other life-cycle models include:

Waterfall model. Each phase is completed before the initiation of the following phase. This model is most relevant for information technology projects.

Incremental release model. In the early phases, an imperfect version of the product is developed with the goal of maximizing market share. Toward the later phases, a final version of the product emerges. This is a special case of the spiral model.

Prototype model. In the early phases, the rudimentary functions associated with the user interface are developed before the product itself is finalized. This model is most appropriate for information technology projects.

By integrating the ideas of project processes and the project life cycle, a methodology for project management emerges. The methodology is a collection of processes whereby each process is associated with a phase of the project life cycle. The project manager is responsible for identifying individuals who have the necessary skills and experience and for assigning them to the appropriate processes. A project’s likelihood of success increases when the definition of inputs and outputs of each process is clear and when team members are clear about the lines of authority, individual responsibilities, and overall project objectives. Clear communication of the overall project’s objectives as well as clear delineation of major work streams is necessary to ensure a well-coordinated flow of information and good communications between project participants.

Life-cycle models are indispensable project management tools. They provide a simple, yet effective, means of monitoring and controlling a project at each stage of its development. As each phase comes to an end, all results are documented and all deliverables are certified with respect to quality and performance standards.

2.1.2 Example of a Project Life Cycle

The DOD uses a simple life-cycle model for systems acquisition (US DOD 5000.2 1993). Its components are shown in Figure 2.2. The project starts only after the determination of mission needs and approval is given. At the end of stage IV the system is taken out of service. This is the end of the life cycle.

Figure 2.2 DOD life-cycle model.


2.1.3 Application of the Waterfall Model for Software Development

A waterfall model captures the relevant phases of a software development effort through a series of stages. There are specific objectives to be accomplished in each stage, and each activity must be deemed successful for work to proceed to the subsequent phase. The process is usually considered non-iterative. Each phase requires the delivery of particular documentation (contract data requirements list). In addition, many of the phases require successful completion of a government review process. Critics of the waterfall model, in fact, find that the model is geared to recognize documents as a measure of progress rather than actual results.

The nine major activities are as follows:

1. Systems concept/system requirements analysis

2. Software requirements analysis

3. Software parametric cost estimating

4. Preliminary design

5. Detailed design

6. Coding and computer software unit testing

7. Computer software component integration and testing

8. Computer software configuration item testing

9. System integration and operational testing
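
As a minimal sketch of this gating discipline, the nine activities above can be run in strict sequence, halting at the first phase that fails its review (the function and the review callback are hypothetical illustrations, not part of any standard):

```python
# Phase names follow the nine waterfall activities listed above.
PHASES = [
    "Systems concept/system requirements analysis",
    "Software requirements analysis",
    "Software parametric cost estimating",
    "Preliminary design",
    "Detailed design",
    "Coding and computer software unit testing",
    "Computer software component integration and testing",
    "Computer software configuration item testing",
    "System integration and operational testing",
]

def run_waterfall(phases, passes_review):
    """Run phases in order; stop at the first phase that fails its review gate."""
    completed = []
    for phase in phases:
        if not passes_review(phase):
            return completed, phase  # rework needed before proceeding
        completed.append(phase)
    return completed, None           # all phases passed

done, stuck = run_waterfall(PHASES, lambda p: True)
```

A review failure at any phase halts all downstream work, which reflects the model's non-iterative character; the spiral and incremental models described earlier relax exactly this constraint.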

A schematic of the process, representing concurrent hardware and software development, is given in Figure 2.3.

Figure 2.3 Waterfall model.


An alternative approach to software development involves the use of incremental builds. With this approach, software development begins with the design of certain core functions to meet critical requirements. Each successive software build (iteration on product development) provides additional functions or enhances performance. Once system requirements are defined and preliminary system design is complete, each build may follow the waterfall pattern for subsequent development phases. Each successive build will usually have to be integrated with previous builds.

2.2 Project Management Processes

A process is a group of activities designed to transform a set of inputs into the desired outputs. The transformation consists of the following three elements:

1. Data and information

2. Decision making

3. Implementation and action

A well-defined set of processes, supported by an appropriate information system (composed of a database and a model base) and implemented by a team trained in performing the processes, is a cornerstone in modern project management.

The following discussion is based on the work of Shtub (2001).

2.2.1 Process Design

The design of a process must address the following issues.

1. Data required to support decisions, including:

data sources

how the data should be collected

how the data should be stored

how the data should be retrieved

how the data should be presented as information to decision makers

2. Models required to support decisions. A model is a simplified representation of reality that is used in part to transform data into useful information. When a problem is too complicated to solve or some information is missing, simplifying assumptions are made and a model is developed. There are many types of models including mathematical, physical, and statistical. The model (the simplified representation of reality) is analyzed and a solution is obtained. Sensitivity analysis is then used to evaluate the applicability of the solution found to the real problem and its sensitivity to the simplifying assumptions. Consider, for example, a simple way of estimating the time required to travel a given distance. Assuming a constant speed and movement in a straight line, one possibility would be: time = distance/speed. This simple algebraic model is frequently used, although most vehicles do not travel at a constant speed or in a straight line. In a similar way, a variety of models are used in project management, including:

models that support routine decisions

models that support ad-hoc decisions

models used for project control

Their value depends on how useful their estimates are in practice.

3. Data and models integration:

How data from the database are analyzed by the models

How information generated by the models is transferred and presented to decision makers
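
The time = distance/speed model and the kind of sensitivity analysis described above can be sketched as follows; the distances and speeds are illustrative assumptions, not values from the text:

```python
def travel_time(distance_km: float, speed_kmh: float) -> float:
    """Simplified model: time = distance / speed (constant speed, straight line)."""
    return distance_km / speed_kmh

# Nominal estimate under the model's assumptions.
nominal = travel_time(120, 60)        # 2.0 hours

# Sensitivity analysis: vary the assumed speed by +/-20% and observe the estimate.
slow = travel_time(120, 60 * 0.8)     # 2.5 hours
fast = travel_time(120, 60 * 1.2)     # about 1.67 hours

print(f"nominal={nominal:.2f} h, range=[{fast:.2f}, {slow:.2f}] h")
```

The spread between the low and high estimates indicates how sensitive the model's output is to the constant-speed assumption, which is exactly the question sensitivity analysis is meant to answer before the model is trusted in practice.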

2.2.2 PMBOK and Processes in the Project Life Cycle

A well-defined set of processes that apply to a large number of projects is discussed in the PMBOK published by the PMI. Although some of the PMBOK processes may not apply to all projects, and others may need to be modified before they can be applied, the PMBOK is a widely accepted, widely known source of information. The processes are classified in two ways.

1. By project phase:

initiating processes

planning processes

executing processes

monitoring and controlling processes

closing processes

2. By knowledge area or management function. The ten knowledge areas are:

Project Integration Management

Project Scope Management

Project Time Management

Project Cost Management

Project Quality Management

Project Human Resource Management

Project Communications Management

Project Risk Management

Project Procurement Management

Project Stakeholder Management

2.3 Project Integration Management

2.3.1 Accompanying Processes

Project integration management involves six processes:

1. Project charter development—This process involves some form of cost-benefit analysis that leads to a go/no-go decision regarding a proposed project. A project charter is created at the conclusion of this phase, and a project manager is selected. The charter defines the business or societal need that the project addresses, the project timeline, and the budget. Considerations such as the fit of the proposed project with the organization’s strategy, stakeholders’ needs and expectations, competition, and technological and economic feasibility are important in this process.

2. Project plan development—gathering the results of the various planning processes and integrating them into an acceptable plan.

3. Managing and directing project execution—implementation of the project plan during project execution.

4. Monitoring and controlling the project work during execution—an effort to identify deviations from the project plan in order to take corrective actions when needed.

5. Integrated change control—coordination of changes in scope, schedule, budget, and other parts of the plan for the entire project.

6. Project closing—the last process in the project life cycle, ensuring that the project work was completed, that the deliverables are accepted, and that all contracts with the various stakeholders are closed out.

The purpose of these processes is to ensure coordination across the various work streams of the project.

Integration management is concerned with the identification, monitoring, and control of all interfaces between the various components of a project, including:

1. Human interface—the personnel associated with the various aspects of the project such as the project team members, subcontractors, consultants, stakeholders, and customers.

2. Scope interface—if the scope is not defined properly, then some required work may not be performed or work that is not required may be done.

3. Time interface—adequate resources must be provided to avoid delays and late deliverables.

4. Communication interface—transfer of the right information to the right stakeholders at the right time is critical to project success.

5. Technological interface—since in most projects the work content is divided among project participants, the interfaces between the deliverables supplied by the participants must be managed throughout the project to ensure smooth integration of the parts into the final deliverables as specified.

Proper integration management requires effective communication among the project’s stakeholders; indeed, one of the knowledge areas is communications management. The life-cycle model plays an important role here: the project plan is developed in the early phases of the project, whereas execution of the plan and change control occur during the later phases.

2.3.2 Description

Project charter development

Many alternative project proposals may exist. On the basis of an appropriate set of evaluation criteria and a selection methodology, the best alternative is chosen, a project charter is issued, and a project manager is selected.

Projects are initiated in response to a need typically arising at a level in the organization that is responsible for setting strategic goals. Research has shown that the most important criterion guiding organizations in choosing projects is financial. Projects are selected for implementation when they support clear business goals and have an attractive rate of return or net present value. A second factor that is likely to trigger a new project is an advance in technology. In the electronics industry, for example, the steady reduction in cost and increase in performance of integrated circuits and memory chips has forced firms to offer new products on a semiannual basis, just to remain competitive.
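The financial criterion mentioned above is commonly operationalized as net present value: discount each future cash flow back to the present and sum. The cash flows and discount rate below are hypothetical:

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs now (t = 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: invest 100,000 now, receive 40,000/year for 4 years.
flows = [-100_000, 40_000, 40_000, 40_000, 40_000]
print(round(npv(0.10, flows)))  # positive at a 10% discount rate
```

A proposal whose NPV is positive at the organization’s discount rate supports the financial selection criterion; competing proposals can be ranked on the same basis.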

In summary, projects are initiated when:

1. a defined need arises

2. there is strategic support and a willingness to undertake the project

3. the technology is compelling

4. there are available resources

Potential projects can be classified in several ways:

1. External versus internal projects; that is, projects performed for customers outside the organization versus customers within the organization

2. Projects that are initiated to:

1. address a business opportunity

2. solve a problem

3. follow a directed order

3. Due date and completion time

4. Organizational priority

The project plan

The project plan and its execution are the major outputs of this process. The plan is based on inputs from other processes such as scope planning, schedule development, resource planning, and cost estimating, along with historical information and organizational policies. It is updated continuously on the basis of corrective actions triggered by approved change requests and analysis of performance measures. As a tool for coordination, the documents that define the plan must address:

1. The time dimension—when each stage is performed

2. The scope dimension—what should be achieved

3. The human dimension—who does what

4. The risk dimension—how to deal with uncertainty

5. The resource dimension—the plan must ensure availability of resources

6. The information and communication dimension—the way data is collected, analyzed, stored, and communicated to stakeholders must be addressed as part of the project plan

The primary purpose of the plan is to guide the execution of the project. It assists the project manager in leading the project team and in achieving the project’s goals. Critical characteristics are fluidity and flexibility, allowing changes to be incorporated easily as they occur. The corresponding document typically consists of the following parts:

1. Preface, including a general review, goals, outputs, scope of work to be done, and technical specifications

2. Project organization description—interfaces, organizational structure, responsibilities

3. Management processes—for example, procurement, reporting, and monitoring

4. Technical processes—for example, design and verification

5. Execution—the way work will be done, scheduling (i.e., timeline) and budget information, resource allocation, and information flow

A project plan should reflect the needs and expectations of stakeholders. Therefore, a project manager should perform an analysis, prior to formally proposing a project idea, to determine stakeholders’ principal concerns and perspectives and understand the organization’s underlying unmet needs.

This information can be used to develop guidelines for managing the relationship between project personnel and the stakeholders. The level of influence and the needs and expectations of any particular stakeholder may have a significant impact on the success or failure of the project. Moving in a direction that is at cross-purposes with an influential stakeholder can doom the project.

Execution of the plan

Execution of the project plan produces the deliverables. For integration management to be successful, a project manager must be skilled in the three areas listed below. Some of these skills are innate, whereas others can be learned.

1. The technology that is used by the project is referred to as the product scope. Often the project manager can delegate responsibilities for technological issues to a team member with detailed expertise. Most of the effort of the project manager, then, is related to integration—seeing that the pieces come together properly.

2. The organizational factor—the project manager must understand the nature of the organization, the human interrelations, the common types of interactions, and so on. Organizational understanding can be expressed as follows.

1. Human resources (HR) framework—the focus is on creating harmony among the organizational needs, needs of the project participants, and the project requirements.

2. Cultural framework—the focus is on understanding the organizational culture; that is, the values of the organization.

3. Symbolic framework—the focus is on positions and responsibilities, coordination, and monitoring. The organizational breakdown structure (OBS) and the work breakdown structure (WBS) aid in defining this framework.

The project manager’s authority is vested not only through the WBS but also through the political, HR, and cultural frameworks.

4. Political framework—begins with the assumption that the project organization is a coalition of different stakeholders. Key points to bear in mind are internal power struggles and the distribution of governing power. Because of the transitory nature of a project, the project manager must use the stakeholders’ power to advance project goals. Stakeholders can typically be divided into two groups.

1. Stakeholders with an interest in the failure of the project

2. Stakeholders with an interest in the success of the project

The project manager must identify all of the stakeholders and their political influence, their objectives, and their ability to affect the project. Once again, a project manager should spend some time to determine the significant needs and requirements of the chief stakeholders.

3. The business factor—the project manager must understand all aspects of the business associated with the project.

In terms of personal characteristics, the most successful project managers are:

1. Efficient

2. Decisive

3. Supportive of team members’ decisions

4. Confident

5. Articulate communicators

6. Highly motivated

7. Technologically oriented

8. Able to deal with high levels of uncertainty

Project execution involves the management and administration of the work described in the project plan. Usually most of the budget, time, and resources are spent during the execution phase. When the focus of the project is on new product development, success is often determined by the depth and details of the plan. As the saying goes, “measure twice, cut once.” Vital tools and techniques for project implementation are as follows:

1. Authorization management system—enables the project manager to verify that an authorized team member is performing a specific task at the correct point in time.

2. Status review meetings—prescribed meetings for information exchange regarding the project.

3. Project management software—decision support software (including a database and a model base) to help the project manager plan, implement, and control all aspects of the project, including budgets, personnel, schedule, and other resources.

4. Monitoring system—software, spreadsheets, or other mechanisms for comparing budget outlays, work performed, and resources consumed over time with the original plan.
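One common way to implement the plan-versus-actual comparison performed by such a monitoring system is earned value analysis; the function name and status figures below are a hypothetical sketch, not the text’s prescribed method:

```python
def earned_value_report(bac, pct_planned, pct_complete, actual_cost):
    """Basic earned-value measures. bac is the budget at completion;
    pct_planned / pct_complete are fractions in [0, 1]."""
    pv = bac * pct_planned   # planned value: budgeted cost of work scheduled
    ev = bac * pct_complete  # earned value: budgeted cost of work performed
    return {"PV": pv, "EV": ev,
            "SV": ev - pv,           # schedule variance (< 0: behind schedule)
            "CV": ev - actual_cost}  # cost variance (< 0: over budget)

# Hypothetical status: half the work scheduled, a quarter done, 60k spent.
print(earned_value_report(bac=200_000, pct_planned=0.50,
                          pct_complete=0.25, actual_cost=60_000))
```

Negative variances flag the deviations from the plan that trigger the corrective actions discussed above.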

Integrated change control

Once a project launches, changes to the original project plan are inevitable. A procedure must be put in place to identify, quantify, and manage the changes throughout the project life cycle. The main targets of change control are:

1. Evaluation of the change requests to determine whether the benefits of the change will be sufficient to justify the corresponding disruption and expense;

2. Determining that a change has occurred;

3. Managing the actual changes when and as they occur.

The original project scope must be maintained by continuously managing changes to the baseline. This is accomplished either by rejecting new change proposals or by approving changes and incorporating them into a revised project baseline.

As described in greater detail in Chapter 8, change control makes use of the following modules in the configuration management system.

1. Configuration identification. Conceptually, each configuration item should be coded in a way that facilitates reference to its accompanying documents. Any changes approved in the configuration item should trigger a corresponding change in the documents, thus ensuring the correct description of the element.

2. Change management. A change is initiated via an engineering change request (ECR). The ECR contains the basis of the change along with a statement of the effect that it will have on activity times, schedules, and resource usage, as well as any new risks that may result.

To guarantee that each type of change is handled by the proper authority, a change classification system should be put in place. The most important changes are handled by the change control board (CCB), which represents all of the stakeholders. After a review, a change request can be accepted or rejected by the board. Once a request is accepted, an engineering change order (ECO) is issued. The ECO contains all relevant information, such as the nature of the change, the party responsible for its execution, and the time when the change is to take place.

2.4 Project Scope Management

2.4.1 Accompanying Processes

Project scope management consists of the following six processes:

1. Plan Scope Management. This process focuses on the preparation of the scope management plan and the requirements management plan, both of which are part of the project plan.

2. Requirements Gathering. The driving force of any project is the needs and expectations of the stakeholders, which are translated into requirements.

3. Define Scope. The project scope is the work content of the project. This work content and the way it should be performed are described in a document that defines a project’s scope.

4. Create WBS. The work content is broken into work packages. Each work package is assigned to a work package manager who can provide information on the time and effort required to perform the work for planning purposes and is also responsible for the execution of the work.

5. Validate Scope. To ensure that the project work was performed as required and the deliverables satisfy the requirements, inspection and testing are conducted as part of the validation process.

6. Control Scope. The actual work performed and the project deliverables are monitored throughout the project life cycle to ensure stakeholders’ satisfaction. When needed, corrective actions are taken to update the project plan or the requirements.

The purpose of these processes is to ensure that the project includes all of the work (and only the work) required for its successful completion. Scope management relates to:

the product scope—the features and functions to be included in the product or service, which translate into specific project work

the project scope—the work, including the project management processes, that must be done in order to deliver the product scope

Management of a project’s scope is similar for many projects, although the product scope is context-specific.

2.4.2 Description

Scope management encompasses the effort required to perform the work associated with a project, as well as the processes required to produce the intended products or services.

The scope management processes produce two key documents: the statement of work (SOW) and the work breakdown structure (WBS). An outline of what is included in each follows.

SOW. The SOW gives information on:

1. Scope of work—what work should be completed and how;

2. Location of work—where the work will take place (the physical location);

3. Duration of execution—initial schedule along with milestones for every product;

4. Applicable standards;

5. Product allocation;

6. Acceptance criteria;

7. Additional requirements—transportation needs, special documentation, insurance requirements, safety and security.

WBS. The WBS decomposes the project into subprojects. Each subproject should be described with full detail of owner, schedule, activities, how each is to be performed and when, and so on. It is advisable to have a WBS template, especially for organizations with many similar projects. The template specifies how to divide the project into the work packages.
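A WBS decomposition can be represented as a simple tree whose leaves are the work packages; the construction-project names below are hypothetical:

```python
# Hypothetical WBS: nested dicts; empty dicts mark work packages (leaves).
wbs = {
    "1 House": {
        "1.1 Structure": {"1.1.1 Foundation": {}, "1.1.2 Framing": {}},
        "1.2 Systems": {"1.2.1 Electrical": {}, "1.2.2 Plumbing": {}},
    }
}

def work_packages(node):
    """Yield the leaves of a WBS subtree: the work packages that are
    assigned to work package managers."""
    for name, children in node.items():
        if children:
            yield from work_packages(children)
        else:
            yield name

print(list(work_packages(wbs)))  # leaves of the tree, left to right
```

Walking the tree this way is a quick check that every piece of work content ends up in exactly one work package.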

A disconcerting issue related to scope management is “scope creep,” in which new features and performance requirements are added to the project without a proper change management process. By adhering to the management processes described in this chapter, scope creep can be minimized.

2.5 Project Time Management

2.5.1 Accompanying Processes

Time management establishes the schedule for tasks and activities defined in the work packages. The following seven processes are included:

1. Plan Schedule Management. This process, which is part of the project plan, focuses on the preparation of a schedule management plan.

2. Define Activities. This process focuses on the preparation of a list of the activities required to complete the project, along with the attributes of each activity and, when applicable, specific dates or milestones of the project.

3. Sequence Activities. This process focuses on the precedence relationship among activities, including technological precedence relationships and managerial precedence relationships. In some cases, a lead or a lag is part of the precedence relationship.

4. Estimate Activity Resources. This process focuses on the resources required to perform the project activities, including human resources, materials, machines, and equipment.

5. Estimate Activity Durations. This process deals with the estimate of the duration of the activities. In many projects, activity duration is a function of the resources assigned to perform the activity, and it is possible to reduce the duration of some activities by adding resources (a process known as activity crashing).

6. Develop Schedule. Various tools and techniques are used to integrate the information on activities, their durations, precedence relations, and resources into a schedule that specifies when resources will perform each activity. Network-based models, including the critical path method and the critical chain method, are widely used to perform this process.

7. Control Schedule. The actual duration of activities as well as their start and finish dates are monitored throughout the project life cycle to ensure timely completion of the project and its milestones. When needed, corrective actions are taken to update the project plan or the schedule.
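The forward pass of the critical path method mentioned under Develop Schedule can be sketched as follows; the activities and durations are hypothetical:

```python
# Hypothetical activity list: name -> (duration, predecessors),
# given in a valid precedence (topological) order.
activities = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

def forward_pass(acts):
    """Earliest start (es) and earliest finish (ef) of each activity."""
    es, ef = {}, {}
    for name, (dur, preds) in acts.items():
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    return es, ef

es, ef = forward_pass(activities)
print(max(ef.values()))  # earliest possible project completion time
```

A symmetric backward pass yields the latest start and finish times; activities with zero slack between the two passes form the critical path.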

The purpose of time management is to ensure the timely completion of the project. The schedule defines what is to be done, when it is to be done, and by what resources. The schedule is used throughout the project to synchronize people, resources, and organizations involved and as a basis for control. When activities slip beyond their due dates, at least two major problems may arise:

1. Time and money are often interchangeable. As projects are pushed beyond their due date, time-related costs are incurred.

2. Most contracts specify rigid due dates, possibly with penalties for late deliveries.

Alternatively, early deliveries may have incentives associated with them.

Scheduling issues can create conflicts in some organizations, especially during the implementation phase and specifically in organizations that have a matrix structure. By implementing proper processes for project management, conflicts can be minimized.

2.5.2 Description

Project work content is defined in the SOW and then translated into the WBS. Each work package in the WBS is decomposed into a set of activities that reflect its predefined scope. Estimating the duration of each activity is a major issue in time management. Activity durations are rarely known with certainty and are estimated by either point estimates or probability distributions. The work package manager is the best source of these estimates because he or she knows the technology. Sometimes an estimate can be derived from a database of similar activities. A problem is created when organizations do not maintain time-related records or do not associate parameters with an activity. The absence of parameterized data often precludes its use in deriving time estimates.
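When only optimistic, most likely, and pessimistic guesses are available, a standard probability-based approach (not specific to this text) is the PERT three-point estimate, where the mean duration is (a + 4m + b)/6:

```python
def pert_estimate(a, m, b):
    """Beta-distribution (PERT) approximation of an activity duration:
    a = optimistic, m = most likely, b = pessimistic time estimate."""
    mean = (a + 4 * m + b) / 6
    std_dev = (b - a) / 6
    return mean, std_dev

# Hypothetical activity: 4 days optimistic, 6 most likely, 14 pessimistic.
print(pert_estimate(4, 6, 14))
```

The standard deviation gives the work package manager a way to express confidence in the estimate rather than committing to a single number.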

In developing the schedule, precedence relations among activities are defined, and a model, such as a Gantt chart or network, is constructed. Both technological and managerial precedence relations may be present. The former are drawn from the physical attributes of the product or system being developed. The latter emerge from procedures dictated by the organization; for example, issuing a purchase order usually requires that a low-ranking manager give his or her approval before the senior officer signs the final forms. Whereas managerial precedence relations can be sidestepped in some instances, say, if the project is late, technological precedence relations are invariant.

An initial schedule is the basis for estimating costs and resource requirements. After a blueprint is developed, constraints imposed by due dates, cash flows, resource availability, and resource requirements of other projects can be added. Further tuning of the schedule may be possible by changing the combination of resources (these combinations are known as modes) assigned to activities. In constructing a graph of cost versus duration, the modes correspond to the data points. Such graphs have two endpoints: (1) minimum cost (at maximum duration) and (2) maximum cost (at minimum duration). Implicit in this statement is the rule that the shorter the activity duration, the higher the cost.
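The cost-versus-duration data points described above can be generated by enumerating mode combinations; the two serial activities and their modes below are hypothetical:

```python
from itertools import product

# Hypothetical modes per activity: (duration, cost) alternatives.
modes = {
    "design": [(6, 10_000), (4, 14_000)],  # crashing design costs 4,000 more
    "build":  [(8, 20_000), (5, 27_000)],  # crashing build costs 7,000 more
}

# For two activities in series, every mode combination yields one
# (total duration, total cost) point on the cost-duration graph.
points = sorted(
    (d1 + d2, c1 + c2)
    for (d1, c1), (d2, c2) in product(modes["design"], modes["build"])
)
print(points)
```

The first sorted point is the maximum cost at minimum duration and the last is the minimum cost at maximum duration, matching the two endpoints of the graph described above.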

As a first cut, the project manager normally uses the minimum cost–maximum duration point for each activity to determine the earliest finish time of the project. If the result is not satisfactory, then different modes for one or more activities may be examined. If the result still is not satisfactory, then more sophisticated methods can be applied to determine the optimal combination of costs and resources for each activity. Fast-tracking some activities is also possible by repositioning them in parallel or overlapping them to a certain degree. In any case, the schedule is implemented by performing the activities in accordance with their precedence relations. Uncertainty, though, calls for a control mechanism to detect deviations and to decide how to react to change requests. The schedule control system is based on performance measures such as actual completion of deliverables (milestones), actual starting times of activities, and actual finishing times of activities. Changes to the baseline schedule are required whenever a change in the project scope is implemented.

2.6 Project Cost Management

2.6.1 Accompanying Processes

Project cost management involves four processes:

1. Plan Cost Management. The cost management plan is part of the project plan. This process focuses on the preparation of a cost management plan.

2. Estimate Costs. This process requires information about activities, the project schedule, and resources assigned to perform project activities to estimate the cost of the project.

3. Determine Budget. Funding for the estimated costs is crucial. This process is based on aggregation of costs of individual activities and work packages into a cost baseline and matching the available funds to the estimated costs based on the policies of the organization and its ability to provide the needed funds.

4. Control Costs. The actual cost of activities as well as the project and product scope may change during the life cycle of the project and, therefore, they are monitored to ensure that the project budget is realistic and satisfies stakeholders’ needs and expectations. When needed, corrective actions are taken to update the project plan or the budget.

These processes are designed to provide an estimate of the cost required to complete the project scope, to develop a budget based on availability of funds, management policies, and strategy, and to ensure that the project is completed within the approved budget and approved changes to the budget.

2.6.2 Description

To complete the project activities, different resources are required depending on whether the work is to be done internally or by outside contractors. Labor, equipment, and information, for example, are required for in-house activities, whereas money is required for outsourcing. The work packages derived from the SOW contain plans for using resources and suggest different operational modes for each activity.

There are various methods of estimating activity costs, ranging from detailed accounting procedures to educated guesswork. Formal accounting procedures can be tedious and time consuming, and may prove a wasted effort if the project is ultimately discarded. Thus, early in the project life cycle, rough order-of-magnitude estimates are best, although they are not likely to be accurate.

Estimates of the amount of resources required for each activity, as well as the timing of their use, are based on the activity list and the schedule. Resource allocation is performed at the lowest level of the WBS—the work package level—and requirements are rolled up to the project level and then to the organizational level. A comparison of resource requirements and resource availability along with corporate strategies and priorities forms the basis of the allocation decisions at the organizational level. Resource planning results in a detailed plan specifying which resources are required for each work package. By applying the resource cost rates to the resource plan and adding overhead and outsourcing expenses, a cost estimate of the project is developed. This provides a basis for budgeting. As determined by the schedule, cost estimates are time-phased to allow for cash flow analysis. Additional allocations may also be made in the form of, say, a management reserve, to buffer against uncertainty. The resulting budget is the baseline for project cost control.
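The rollup from resource plans to a project cost estimate can be sketched as follows; the rates, hours, and overhead fraction are hypothetical:

```python
# Hypothetical resource cost rates ($/hour).
rates = {"engineer": 80, "technician": 50}

# Hypothetical resource plan: hours per resource for each work package.
work_packages = {
    "WP1": {"engineer": 120, "technician": 40},
    "WP2": {"engineer": 60, "technician": 200},
}

OVERHEAD = 0.25  # overhead applied as a fraction of direct labor cost

def project_cost_estimate(wps, rates, overhead):
    """Apply rates to the resource plan and roll costs up to project level."""
    direct = sum(hours * rates[res]
                 for plan in wps.values()
                 for res, hours in plan.items())
    return direct * (1 + overhead)

print(project_cost_estimate(work_packages, rates, OVERHEAD))
```

Time-phasing the same work package costs against the schedule yields the cash flow profile used for budgeting and control.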

Because of uncertainty, cost control is required to detect deviations and to decide how to react to get the project back on track and within budget. Change requests require a similar response. The cost control system is based on performance measures, such as actual cost of activities or deliverables (milestones), and actual cash flows. Changes to the baseline budget are required whenever a change in the project scope is implemented.

2.7 Project Quality Management

2.7.1 Accompanying Processes

Project quality management consists of three processes:

1. Plan Quality Management. The quality management plan is part of the project plan. This process focuses on the preparation of a quality management plan.

2. Perform Quality Assurance. This process focuses on analyzing the quality requirements and building the processes, tools, and techniques that ensure that the project and its deliverables will satisfy those requirements.

3. Control Quality. This process compares quality requirements with the results of tests and audits to verify that the requirements are met and recommends corrective actions when testing shows substandard results.

The purpose of these processes is to ensure that the finished product satisfies the needs for which the project was undertaken. Garvin (1987) suggested the following eight dimensions for measuring quality.

1. Performance. This dimension refers to the product or service’s primary characteristics, such as the acceleration, cruising speed, and comfort of an automobile or the sound and the picture clarity of a TV set. Understanding of the stakeholder’s performance requirements and the design of the product or service to meet those requirements are key factors in quality-based competition.

2. Features. This is a secondary aspect of performance that supplements the basic functions of the product or service. Features could be considered "bells and whistles." The flexibility afforded a customer to select desired options from a long list of possibilities contributes to the quality of the product or service.

3. Reliability. This performance measure reflects the probability of a product’s malfunctioning or failing within a specified period of time. It affects both the cost of maintenance and downtime of the product.

4. Conformance. This is the degree to which the design and operating characteristics of the product or service meet established standards.

5. Durability. This is a measure of the economic and technical service life of a product. It relates to the amount of use that one can get from a product before it has to be replaced due to technical or economic considerations.

6. Serviceability. This measure reflects the competence and courtesy of the agent performing the repair work, as well as the speed and ease with which it is done. The reliability of a product and its serviceability complement each other. A product that rarely fails and—on those occasions when it does—can be repaired quickly and inexpensively has a lower downtime and better serves its owner.

7. Aesthetics. This is a subjective performance measure related to how the product feels, tastes, looks, or smells and reflects individual preferences.

8. Perceived quality. This is another subjective measure related to the reputation of the product or service. Reputation may be based on past experience and partial information, but, in many cases, the customers’ opinions are based on perceived quality as a result of the lack of accurate information on the other performance measures.

2.7.2 Description

Until the mid-1980s, quality was defined as meeting or exceeding a specific set of performance measures. Since then, the need to understand user requirements and application requirements has been on the rise. Quality starts with understanding stakeholders’ requirements. Different stakeholders may require products of different grades; quality is the degree to which the product’s specific characteristics match the desired requirements at the expected grade.

Quality management starts with the definition of standards or performance levels for each dimension of quality. On the basis of the scope of the project, quality policy, standards, and regulations, a quality management plan is developed. The plan describes the organizational structure, responsibilities, procedures, processes, and resources needed to implement quality management; that is, how the project management team will implement its quality policy to achieve the required quality levels. Checklists and metrics or operational definitions are also developed for each performance measure so that actual results and performance can be evaluated against stated requirements.

To provide confidence that the project will achieve the required quality level, a quality assurance process is implemented. By continuously reviewing (or auditing) the actual implementation of the plan developed, quality assurance systematically seeks to increase the effectiveness and efficiency of the project and its results. Actual results are monitored and controlled. The quality control process forms the basis of acceptance (or rejection) decisions at various stages of development.

2.8 Project Human Resource Management

2.8.1 Accompanying Processes

HR management during the life cycle of a project is primarily concerned with the following four processes:

1. Plan Human Resource Management. The human resource management plan is part of the project plan. This process focuses on the preparation of a human resource management plan.

2. Acquire Project Team. The process of obtaining the project team members from inside or outside the performing organization.

3. Develop Project Team. The process of developing shared understanding among project team members regarding project goals and the way to achieve those goals together.

4. Manage Project Team. The process of leading the project team during the project life cycle to achieve project goals by working together, resolving conflicts, and creating synergy among team members.

Collectively, these processes are aimed at making the most effective use of the people associated with the project. The temporary nature of the project structure and organization, the frequent need for multidisciplinary teams, and the participation of people from different organizations translate into a need for team building, motivation, and leadership if goals are to be met successfully.

2.8.2 Description

The work content of the project is allocated among the performing organizations by integrating the project’s WBS with its OBS (Organizational Breakdown Structure). As mentioned, work packages—specific work content assigned to specific organizational units—are defined at the lowest level of these two hierarchical structures. Each work package is a building block; that is, an elementary project with a specific scope, schedule, budget, and quality objectives. Organizational planning activities are required to ensure that the total work content of the project is assigned and performed at the work package level, and that the integration of the deliverables produced by the work packages into the final product is possible according to the project plan. The organizational plan defines roles and responsibilities, as well as staffing requirements and the OBS of the project.
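The WBS–OBS integration described above can be sketched as a simple cross-reference in which a work package exists only at the intersection of a WBS element and an organizational unit. All element names and figures below are hypothetical, chosen purely to illustrate the idea:

```python
# Sketch of integrating a WBS with an OBS to define work packages.
# All names and numbers are invented for illustration.

wbs = ["1.1 Foundation", "1.2 Structure", "1.3 Electrical"]
obs = ["Civil Dept.", "Electrical Dept."]

# A work package is specific work content assigned to a specific
# organizational unit, with its own scope, budget, and schedule.
work_packages = {
    ("1.1 Foundation", "Civil Dept."):      {"budget": 50_000,  "weeks": 6},
    ("1.2 Structure", "Civil Dept."):       {"budget": 120_000, "weeks": 10},
    ("1.3 Electrical", "Electrical Dept."): {"budget": 30_000,  "weeks": 4},
}

def unassigned_wbs_elements(wbs, work_packages):
    """Return WBS elements with no responsible organizational unit --
    the gaps that organizational planning must close."""
    assigned = {w for (w, _) in work_packages}
    return [w for w in wbs if w not in assigned]

print(unassigned_wbs_elements(wbs, work_packages))  # [] -> all work assigned
```

Checking for unassigned elements mirrors the organizational-planning requirement that the total work content of the project be assigned and performed at the work package level.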

On the basis of the organizational plan, manpower assessments are made along with staff assignments. The availability of staff is compared with project requirements, and gaps are identified. These gaps are filled by the project manager working in conjunction with the HR department of the firm or agency. The assignment of available staff to the project and the acquisition of new staff result in a project team that may combine full-time employees assigned to the project full time, full-timers assigned part time, and part-timers. Subcontractors, consultants, and other outside resources may also be part of the team.

The assignment of staff to the project is the first step in the team development process. To succeed in achieving project goals, teamwork and a team spirit are essential ingredients. The transformation of disparate individuals who are assigned to a project into a high-performance team requires leadership, communication skills, and negotiation skills, as well as the ability to motivate people, to coach and to mentor them, and to deal with conflicts in a professional, yet effective manner.

2.9 Project Communications Management

2.9.1 Accompanying Processes The three processes associated with project communications management are:

1. Plan Communication Management. The Communication Management plan is part of the project plan. This process focuses on the preparation of a communication management plan to satisfy the needs of stakeholders for information.

2. Manage Communication. The process of collecting, storing, retrieving, and processing data to create useful information that is distributed according to the Communication Management plan.

3. Control Communication. The process of monitoring the information distributed to stakeholders throughout the project life cycle and comparing it to the needs for information of the stakeholders to identify gaps and to take corrective actions when needed.

These processes are required to ensure “timely and appropriate generation, collection, dissemination, storage, and ultimate disposition of project information” (PMBOK). Each is tightly linked with organizational planning. Communication between team members, with stakeholders, and with external parties and systems can take many forms. For example, it can be formal or informal, written or verbal, and planned or ad hoc. The decisions regarding communication channels, the information that should be distributed, and the best form of communication for each type of information are crucial in supporting teamwork and coordination.

2.9.2 Description Communications planning is the process of selecting the communication channels, the modes of communication, and the contents of the communication between project participants, stakeholders, and the environment. Taking into account information needs, available technology, and constraints on the availability and distribution of information, the communications management plan specifies the frequency and methods by which information is collected, stored, retrieved, transmitted, and presented to the parties involved in the project. On the basis of the plan, data collection as well as data storage and retrieval systems can be implemented and used throughout the project life cycle. The project communication system that supports the transmission and presentation of information should be designed and established early to facilitate the transfer of information.

Information distribution is based on the communication management plan and occurs throughout the project life cycle. As one can imagine, documentation of ongoing performance with respect to costs, schedule, and resource usage is important for several reasons. In general, performance reporting provides stakeholders with information on the actual status of the project, current accomplishments, and forecasts of future project status and progress. It is also essential for project control because deviations between plans and actual progress trigger corrective actions. In addition to the timely distribution of information, historical records are kept to enable post-project analysis in support of organizational and individual learning.

To facilitate an orderly closure of each phase of the project, information on actual performance levels of all activities is collected and compared with the project plan. If a product is the end result, then performance information is similarly collected and compared with the product specifications at each phase of the project. This verification process ensures an ordered, formal acceptance of the project’s deliverables by the stakeholders and provides a means for record keeping that supports organizational learning.

Communications planning should answer the following questions:

1. What information is to be provided?

2. Who will be the correspondent?

3. When and in what form is the information to be provided?

4. What templates are to be used?

5. What are the methods for gathering the information to be provided?

6. With what frequency will the information be passed?

7. What form will the communication take—formal, informal, handwritten, oral, hard copy, email?

Information distribution is the implementation of the communication program. If the program lacks appropriate definition, then it is possible to create a situation of information overload in which too much irrelevant information is passed to project participants at too great a frequency. When this happens, essential information may be overlooked, ignored, or lost. To be more precise regarding the appropriateness of various communication channels, we have:

Informal communication. This is the result of an immediate need for information that was not addressed by the communication plan.

Verbal communication. This is vital in a project setting. The project manager must make sure that team meetings are held on a scheduled basis.

Performance reporting is an important part of communication. It enables the project manager to compare the actual status of each activity with the baseline. This provides the foundations for the change control process and allows for the collection and aggregation of knowledge.

2.10 Project Risk Management

2.10.1 Accompanying Processes Risk is an unwelcome but inevitable part of any project or new undertaking. Risk management includes six processes:

1. Plan Risk Management. The risk management plan is part of the project plan. This process focuses on the preparation of the risk management plan.

2. Identify Risks. The process of determining risk events that might affect project success.

3. Perform Qualitative Risk Analysis. The process of assessing the likelihood and impact of identified risk events in order to prioritize and focus on the most significant risks.

4. Perform Quantitative Risk Analysis. The process of estimating the probability and impact of identified risk events and applying numerical analysis in order to assess overall project risk.

5. Plan Risk Responses. The process of selecting risk events for mitigation and deciding the best way to mitigate those risks, as well as developing contingency and risk response plans and setting reserves for residual risks and risks that are not mitigated.

6. Control Risks. The process of monitoring identified risks and identifying new risks throughout the project life cycle, serving as a trigger for activating contingency plans and as a basis for corrective actions and changes.

These processes are designed to identify and evaluate possible events that could have a negative impact on the project. Tactics are developed to handle each type of disruption identified, as well as any uncertainty that could affect project planning, monitoring, and control.

2.10.2 Description All projects have some inherent risk as a result of the uncertainty that accompanies any new nonrepetitive endeavor. In many industries, the riskier the project, the higher the payoff. Thus, risk is at times beneficial because it has the potential to increase profits (i.e., “upside”). Risk management is not risk avoidance, but a method to control risks so that, in the long run, projects provide a net benefit to the organization.

A decision maker’s attitude toward risk may be described as risk averse, risk prone, or risk neutral. For different circumstances and payoffs, the same decision maker can fall into any of these categories. In Chapter 3, we discuss how to construct individual utility functions that capture risk attitudes in specific situations. In project management, these utility functions should reflect the inclination of the organization. Risks can affect the scope, quality, schedule, cost, and other goals of the project, such as client satisfaction.

Major risks should be handled by performing a Pareto analysis to assess their magnitude. As a historical footnote, Vilfredo Pareto studied the distribution of wealth in late nineteenth-century Italy and found that roughly 20% of families controlled approximately 80% of the wealth. His findings proved to be more general than the initial purpose of his study. In many populations, it turns out that a small percentage of the population (say, 15%–25%) accounts for a significant portion of a measured factor (say, 75%–85%). This phenomenon is known as the Pareto rule. Using this rule, it is possible to focus one’s attention on the most important items in a population. In risk management, by focusing on the 10%–20% of the risks with the highest magnitude, it is possible to address approximately 80% of the total risk impact on the project.

In a Pareto analysis, events that might have the most severe effect on the project are identified first, for example, by examining the history of similar projects. A risk checklist is then created with the help of team members and outside experts. Next, the magnitude of each item on the list is assessed in terms of impact and probability. Multiplying these two terms gives the expected loss for that risk. When probability estimates are not readily available, methods such as simulation and expert judgment can be used.
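The expected-loss calculation and Pareto ranking just described can be sketched as follows; the risk events and all probability and impact estimates are hypothetical, invented purely for illustration:

```python
# Pareto analysis of risk events: magnitude = probability x impact,
# ranked to find the few risks that dominate total exposure.
# All risk names and estimates below are hypothetical.

risks = {
    "supplier delay":      {"probability": 0.30, "impact": 100_000},
    "key staff turnover":  {"probability": 0.10, "impact":  80_000},
    "new regulation":      {"probability": 0.05, "impact":  40_000},
    "integration failure": {"probability": 0.20, "impact": 250_000},
}

def pareto_rank(risks):
    """Rank risk events by expected loss (probability x impact), descending."""
    scored = {name: r["probability"] * r["impact"] for name, r in risks.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

ranked = pareto_rank(risks)
total = sum(score for _, score in ranked)

# Cumulative share shows how quickly the top risks cover total exposure.
cum = 0.0
for name, score in ranked:
    cum += score
    print(f"{name}: {score:,.0f} ({cum / total:.0%} cumulative)")
```

In this invented example, the top two of the four risks account for roughly 89% of the total expected loss, which is exactly why attention concentrates on the highest-magnitude items.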

A risk event is a discrete random occurrence that cannot be factored into the project plan explicitly. Risk events are identified on the basis of the potential difficulty that they impose on (1) achieving the project’s objectives (the characteristics of the product or service), (2) meeting the schedule and budget, and (3) satisfying resource requirements. The environment in which the project is performed is also a potential source of risk. Historical information is an important input in the risk identification process. In high-tech projects, for example, knowledge gaps are a common source of risk. Efforts to develop, use, or integrate new technologies necessarily involve uncertainty and, hence, risk. External sources of risk include new laws, transportation delays, raw material shortages, and labor union problems. Internal difficulties or disagreements may also generate risks.

The probability of risk events and their magnitude and effect on project success are estimated during the risk quantification process. The goal of this process is to rank risks in order of the probability of occurrence and the level of impact on the project. Thus, a high risk is an event that is highly probable and may cause substantial damage. On the basis of the magnitude of risk associated with each risk event, a risk response is developed. Several responses are used in project management, including:

Risk elimination—in some projects it is possible to eliminate some risks altogether by using, for example, a different technology or a different supplier.

Risk reduction—if risk elimination is too expensive or impossible, then it may be possible to reduce the probability of a risk event, its impact, or both. A typical example is redundancy in R&D projects, in which two alternative technologies are developed in parallel to reduce the risk that a development failure will harm the project. Although only one of the technologies will ultimately be used, the parallel effort reduces the probability of failure.

Risk sharing—it is possible in some projects to share risks (and benefits) with some stakeholders such as suppliers, subcontractors, partners, or even the client. Buying insurance is another form of risk sharing.

Risk absorption—if a decision is made to absorb the risk, then buffers in the form of management reserve or extra time in the schedule can be used. In addition, it may be appropriate to develop contingency plans to help cope with the consequences of any disruptions.

Because information is collected throughout the life cycle of a project, new information is used to update the risk management plan continuously. A continuous effort is required to identify new sources of risk, to update the estimates regarding probabilities and impacts of risk events, and to activate the risk management plan when needed. By constantly monitoring progress and updating the risk management plan, the impact of uncertainty can be reduced and the probability of project success can be increased. Being on the lookout for symptoms of risk is the first step in warding off trouble before it begins. One way to do this is to formulate a list of the most prominent risks to be checked periodically. Because risks change with time, the list must be updated continuously and new estimates of their impact and probability of occurrence must be derived.

2.11 Project Procurement Management

2.11.1 Accompanying Processes Procurement management for projects consists of the following four processes:

1. Plan Procurement Management. The procurement management plan is part of the project plan. This process focuses on the preparation of the procurement management plan.

2. Conduct Procurement. The process of selecting the sellers and signing contracts with them.

3. Control Procurement. The process of managing the relationship with the seller throughout the procurement process after the contract is signed. It includes the management of changes and the monitoring of contract performance.

4. Close Procurement. The process of completing the procurement process.

These processes accompany the acquisition of goods and services from outside sources, such as consultants, subcontractors, and third-party suppliers. The decision to procure goods and services from the outside (the “make or buy” decision) has a short-term or tactical-level (project-related) impact as well as a long-term or strategic-level (organization-related) impact. At the strategic level, core competencies should rarely be outsourced, even when such action can reduce the project cost, shorten its duration, reduce its risk, or improve quality. At the tactical level, outsourcing can alleviate resource shortages, help in closing knowledge gaps, off-load certain financial risks, and increase the probability of project success. Management of the outsourcing process, from supplier selection to contract closeout, is another important part of the project manager’s job.

2.11.2 Description The decision on which parts of a project to purchase from outside sources, and how and when to do it, is critical to the success of most projects. This is because significant parts of many projects are candidates for outsourcing, and the level of uncertainty and consequent risk is different from the corresponding measures associated with activities performed in-house. To gain a competitive advantage from outsourcing, the planning, execution, and control of outsourcing procedures must be well-defined and supported by data and models.

The first step in the process is to consider which parts of the project scope and product scope to outsource. This decision is related to capacity and know-how and can be crucial in achieving project goals; however, a conflict may exist between project goals and the goals of the stakeholders. For example, subcontracting may help a firm in a related industry develop the skills and capabilities that would give it a competitive advantage at some future time. This was the case with IBM, which outsourced the development of the Disk Operating System to Microsoft and the development of the central processing unit to Intel. The underlying analysis should take into account the cost, quality, speed, risk, and flexibility of in-house development versus the use of subcontractors or suppliers to deliver the same goods and services. The decisions should also take into account the long-term or strategic factors discussed earlier. Some additional considerations are:

the prospect of ultimately producing a less-expensive product with higher quality

the lack of in-house skills or qualifications as defined by prevailing laws and regulations

the ability to shift risks to the supplier

Once the decision to outsource is made, the following questions must be addressed:

Should the purchase be made from a single supplier, or should a bid be issued?

Should the purchase be for a single project or for a group of projects?

Should finished products or parts be purchased, or should only labor be purchased and the work performed in-house?

How much should be purchased if, for example, quantity discounts are available?

When should the purchase be made? There is a tradeoff between the time at which a spending commitment is made and the risk associated with delaying the purchase.

Should the idea of shared purchases be considered whereby joint orders are placed with (competing) organizations to receive quantity discounts or better contractual terms?
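The quantity-discount question above can be explored with a simple cost comparison. The all-units discount schedule, prices, and quantities below are hypothetical, chosen only to show the mechanics:

```python
# Hypothetical all-units quantity discount schedule: the unit price
# depends on the order size. Comparing total cost at the required
# quantity and at each higher price break shows whether buying extra
# units to reach a break is worthwhile.

price_breaks = [(0, 10.00), (100, 9.00), (500, 8.00)]  # (min qty, unit price)

def unit_price(qty):
    """All-units discount: the best (lowest) price among breaks reached.
    Assumes prices are non-increasing in quantity."""
    return min(p for (m, p) in price_breaks if qty >= m)

def total_cost(qty):
    return qty * unit_price(qty)

required = 450
# Candidate order sizes: the requirement itself plus any higher break point.
candidates = [required] + [m for (m, _) in price_breaks if m > required]
best = min(candidates, key=total_cost)
print(best, total_cost(best))  # ordering 500 costs 4000.0 vs. 4050.0 for 450
```

Here buying 50 units beyond the requirement reaches the next price break and lowers the total outlay, a tradeoff the procurement plan should evaluate explicitly.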

Once a decision is made to outsource, the solicitation process begins. This step requires an exact definition of the goods or services to be purchased, the development of due dates and cost estimates, and the preparation of a list of potential sources. Various types of models can be used to support the process by arraying the alternatives and their attributes against one another and allowing the decision maker to input preferences for each attribute. The use of simple scoring models, such as those described in Chapter 5, or more sophisticated methods, such as those described in Chapter 6, can help stakeholders reach a consensus by making the selection process more objective.
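A minimal weighted scoring model of the kind mentioned above might look as follows; the criteria, weights, and vendor scores are hypothetical, for illustration only:

```python
# Simple weighted scoring model for vendor selection. Each proposal is
# scored on each criterion (here on a 1-10 scale); the weighted sum
# ranks the candidates. Criteria, weights, and scores are invented.

weights = {"cost": 0.4, "quality": 0.3, "delivery": 0.2, "risk": 0.1}

proposals = {
    "Vendor A": {"cost": 7, "quality": 9, "delivery": 6, "risk": 8},
    "Vendor B": {"cost": 9, "quality": 6, "delivery": 8, "risk": 7},
}

def weighted_score(scores, weights):
    """Weighted sum of criterion scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(proposals,
                 key=lambda v: weighted_score(proposals[v], weights),
                 reverse=True)
print(ranking)
```

Making the weights explicit and agreed upon before proposals arrive is what lends the selection process the objectivity the text refers to, and it helps stakeholders reach consensus.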

In conjunction with selecting a vendor, a contractual agreement is drawn up that is based on the following items:

1. Memorandum of understanding. This is a preliminary, non-binding document that provides the foundation for the contract.

2. Statement of work (SOW)—a description of the work to be purchased. The SOW gives the vendor a better understanding of the customer’s expectations.

3. Product technical specifications.

4. Acceptance test procedure.

5. Terms and conditions—defines the contractual terms.

The contract is a legally binding document that should specify the following:

1. What—scope of work (deliverables)

2. Where—location of work

3. When—period of performance

4. Schedule for deliverables

5. Applicable standards

6. Acceptance criteria—the criteria that must be satisfied for the project to be accepted

7. Special requirements related to testing, documentation, standards, safety, and so on

Solicitation can take many forms. One extreme is a request for proposal (RFP) advertised and open to all potential sources; a direct approach to a single preferred (or only) source is another extreme. There are many options in between, such as requests for letters of inquiry, qualification statements, and pre-proposals. The main output of the solicitation process is to generate one or more proposals—from the outside—for the goods or services required.

A well-planned solicitation planning process followed by a well-managed solicitation process is required for the next step—source selection—to be successful. Source selection is required whenever more than one acceptable vendor is available. If a proper selection model is developed during the solicitation planning process and all the data required for the model are collected from the potential vendors during the solicitation process, the rest is easy. On the basis of the evaluation criteria and organizational policies, proposals are evaluated and ranked to identify the top candidates. Negotiations with a handful of them follow to obtain their best and final offers. The process terminates when a contract is signed. If, however, solicitation planning and the solicitation process do not yield a clear set of criteria and a manageable selection model, then source selection may become a difficult and time-consuming process; it may not end with the best vendor selected or the best possible contract signed. It is difficult to compare proposals that are not structured according to clear RFP requirements; in many cases, important information may be missing.

Throughout the life cycle of a project, contracts are managed as part of the execution and change control efforts. Deliverables, such as test results, prototype models, subassemblies, documentation, hardware, and software, are submitted and evaluated; payments are made; and, when necessary, change requests are issued. When these are approved, changes are made to the contract. Contract management is equivalent to the management of a work package performed in-house; therefore, similar tools are required during the contract administration process.

Contract closeout is the final process that signals formal acceptance and closure. On the basis of the original contract and all of the approved changes, the goods or services provided are evaluated and, if accepted, payment is made and the contract is closed. Information collected during this process is important for future projects and vendor selection.

2.12 Project Stakeholders Management

2.12.1 Accompanying Processes Stakeholders management for projects consists of the following four processes:

1. Identify Stakeholders. This process identifies and maps the individuals and parties that may affect the project or may be affected by it. The needs and interests of important and influential stakeholders, identified early in the project life cycle, are the basis for setting project objectives, goals, and constraints.

2. Plan Stakeholders Management. Based on the analysis of needs and interests of important and influential stakeholders, a stakeholders management plan is developed specifying how each stakeholder should be engaged throughout the project life cycle.

3. Manage Stakeholders Engagement. Throughout the life cycle of the project, the stakeholders management plan is executed by communicating and working with the stakeholders according to the plan. Information is distributed to the stakeholders and collected from them; their concerns, needs, and expectations are analyzed; and appropriate actions are taken.

4. Control Stakeholders Engagement. Due to uncertainty, stakeholders’ needs and expectations may change, as may their interests and level of influence on the project. Throughout the project life cycle, important stakeholders are monitored, and the stakeholders management plan is updated and adjusted as new information becomes available.

These processes are key to setting project objectives, goals, and constraints early in the project life cycle and to developing and updating project plans to achieve them. Stakeholders may be part of the performing organization or may come from outside it; they may support the project, or they may oppose it and try to stop it or limit its success. Therefore, specific attention to developing plans to manage the stakeholders is crucial to improving the probability of project success.

2.12.2 Description Projects are performed to satisfy the needs and expectations of some stakeholders. Stakeholders management is therefore an important and, yet, a very difficult task. Frequently, the needs and expectations of different stakeholders are in conflict and, sometimes, satisfying one group of stakeholders means that another group will not be satisfied or, even worse, will oppose the project.

Mapping of stakeholders is the first step—an effort to understand who they are; what their needs, expectations, and interests are; their power to influence the project; and their desired level of engagement in the project. Based on this mapping, a strategy for managing each stakeholder is developed. Some influential stakeholders who are very interested in the project may become partners and take part in the decision-making process, whereas others will be satisfied if they receive specific information during the project life cycle, thereby guaranteeing their support. The stakeholders management plan should translate this strategy into specific actions, such as setting regular meetings with some stakeholders and providing specific information by email or phone at specific points in time to others.
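One common way to operationalize such a mapping is a power/interest grid. This particular technique, and all the stakeholder names and ratings below, are illustrative assumptions rather than the book's prescribed method:

```python
# Hypothetical power/interest grid: each stakeholder is rated (say,
# 1-10) on power to influence the project and interest in it, and the
# quadrant determines the engagement strategy in the plan.

stakeholders = {
    "Sponsor":        {"power": 9, "interest": 9},
    "Regulator":      {"power": 8, "interest": 3},
    "End users":      {"power": 3, "interest": 8},
    "General public": {"power": 2, "interest": 2},
}

def engagement_strategy(power, interest, threshold=5):
    if power >= threshold and interest >= threshold:
        return "manage closely"   # e.g., partner in decision making
    if power >= threshold:
        return "keep satisfied"   # meet their needs, limited detail
    if interest >= threshold:
        return "keep informed"    # e.g., periodic email updates
    return "monitor"              # minimal, periodic attention

plan = {name: engagement_strategy(s["power"], s["interest"])
        for name, s in stakeholders.items()}
print(plan)
```

Each quadrant then maps to concrete actions in the stakeholders management plan, such as regular meetings for those managed closely and scheduled status emails for those kept informed.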

The stakeholders management plan is an important part of the project plan, and it should specify who is responsible for the ongoing relationship with each of the stakeholders, what should be done, and when.

An important aspect of a stakeholders management plan is the ongoing effort to monitor and control the stakeholders already identified and to update the list when new stakeholders are identified. This activity is required because the needs and expectations of stakeholders may change throughout the project life cycle, as may their level of interest in the project and their ability to influence it. Changes in the market, in the economic and political environment, and in technology may all introduce new stakeholders to the project. The earlier these new players are identified and managed, the better.

2.13 The Learning Organization and Continuous Improvement

2.13.1 Individual and Organizational Learning To excel as a project manager, an individual must have expertise in a number of arenas—planning, initiation, execution, supervision—and an ability to recognize when each phase of a project has been completed successfully and the next phase is ready to begin. If such an individual has facility with all aspects of the managerial process, then he or she will be in a prime position to educate, challenge, stimulate, direct, and inspire those whose work he or she is overseeing. A good project manager will be able to serve as a powerfully effective role model and as a source of knowledge and inspiration for those less experienced. In essence, organizational growth and development can be enhanced by way of this “trickle-down” effect from a project manager who enjoys his or her work and takes pride in doing it well; is reliable, committed, and disciplined; can foster development of a strong work ethic and a sense of prideful accomplishment in those whom he or she is managing; and is a font of knowledge, a master strategist, and a visionary who never loses sight of the long-term goal.

The ability of groups to improve performance by learning parallels the same abilities found in individuals. Katzenbach and Smith (1993) explained how to combine individual learning with team building, a key component of any collective endeavor. Just as it is important for each person to learn and master his or her assignment in a project, it is equally important for the group to learn how to work as a team. By establishing clear processes with well-defined inputs and outputs and by ensuring that those responsible for each process master the tools and techniques necessary to produce the desired output, excellence in project management can be achieved.

2.13.2 Workflow and Process Design as the Basis of Learning The one-time, nonrepetitive nature of projects implies that uncertainty is a major factor that affects a project’s success. In addition, the ability to learn by repetition is limited because of the uniqueness of most projects. A key to project management success is the exploitation of the repetitive parts of the project scope. By identifying repetitive processes (both within and between projects) and by building an environment that supports learning and data collection, limited resources can be more effectively allocated. Reuse of products and procedures is also a key to project success. For example, in software projects, the reuse of modules and subroutines reduces development time and cost.

A valuable step in the creation of an environment that supports learning is the design and implementation of a workflow management system—a system that embodies the decision-making processes associated with each aspect of the project. Each process, discussed in this chapter, should be studied, defined, and implemented within a workflow management system. Definitional elements include the trigger or initiation mechanism of the process, inputs and outputs, skills and resource requirements, activities performed, data required, models used, relative order of execution, termination conditions, and, finally, an enumeration of results or deliverables. The workflow management system uses a workflow enactment system or workflow process engine that can create, manage, and execute multiple process instances.
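The definitional elements listed above might be captured roughly as follows; the process name, steps, and the tiny "enactment engine" are hypothetical simplifications of what a real workflow product provides:

```python
# Sketch of a workflow process definition capturing trigger, inputs,
# outputs, and ordered activities, plus a minimal enactment function
# that runs one process instance. All names are hypothetical.

change_control = {
    "name": "change-control",
    "trigger": "change request submitted",
    "inputs": ["change request", "baseline plan"],
    "outputs": ["approve/reject decision", "updated baseline"],
    "steps": ["log request", "assess impact",
              "review board decision", "update baseline"],
}

def enact(process, instance_id):
    """Execute one instance of a process: steps run in their defined
    order, and an audit trail is returned for organizational learning."""
    return [f"{process['name']}#{instance_id}: {step}"
            for step in process["steps"]]

for line in enact(change_control, 1):
    print(line)
```

A workflow enactment system generalizes this idea: it can create, manage, and track many concurrent instances of many such process definitions, which is what makes the repetitive parts of project work learnable and auditable.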

By identifying processes that are common to more than one project within an organization, it is possible to implement a workflow system that supports and even automates those processes. Automation means that the routing of each process is defined along with the input information, processing tools and techniques, and output information. Although the product scope may vary substantially from project to project, when the execution of the project scope is supported by an automatic workflow system, the benefits are twofold: (1) the level of uncertainty is reduced because processes are clearly defined and the flow of information required to support those processes is automatic, and (2) learning is enabled. In general, a well-structured process can be taught easily to new employees or learned by repetition. For the organization that deals with many similar projects, efficiency is greatly enhanced when the same processes are repeated, the same formats are used to present information, and the same models are used to support decision making. The workflow management system provides the structure for realizing this efficiency.

TEAM PROJECT Thermal Transfer Plant Develop two project life-cycle models for the plant. Focus on the phases in the model and answer the following questions.

1. What should be done in each phase?

2. What are the deliverables?

3. How should the output of each phase be verified?

Discuss the pros and cons of each life-cycle model and select the one that you believe is best. Explain your choice.

Discussion Questions 1. Explain what a project life cycle is.

2. Draw a diagram showing the spiral life-cycle model for a particular project.

3. Draw a diagram showing the waterfall life-cycle model for a particular project.

4. Discuss the pros and cons of the spiral project life-cycle model and the waterfall project life-cycle model.

5. How are the processes in the PMBOK related to each other? Give a specific example.

6. How are the processes in the PMBOK related to the project life cycle? Give a specific example.

7. If time to market is the most important competitive advantage for an organization, then what life-cycle model should it use for its projects? Explain.

8. What are the main deliverables of project integration?

9. What are the relationships between a learning organization and the project management processes?

10. What are the characteristics of a good project manager?

Exercises

2.1 Find an article describing a national project in detail. On the basis of the article and on your understanding of the project, answer the questions below. State any assumptions that you feel are necessary to provide answers.

1. Who were the stakeholders?

2. Was it an internal or external project?

3. What were the most important resources used in the project? Explain.

4. What were the needs and expectations of each stakeholder?

5. What are the alternative approaches for this project?

6. Was the approach selected for the project the best, in your opinion? Explain.

7. What were the risks in the project?

8. Rank the risks according to severity.

9. What was done or could have been done to mitigate those risks?

10. Was the project a success? Why?

11. Was there enough outsourcing in the project? Explain.

12. What lessons can be learned from this project?

2.2 Find an article that discusses workflow management systems (e.g., Stohr and Zhao 2001) and explain the following:

1. What are the advantages of workflow systems?

2. Under what conditions is a workflow system useful in a project environment?

3. Which of the processes described in the PMBOK are most suitable for workflow systems?

4. What are the disadvantages of using a workflow system in a project environment?

2.3 On the basis of the material in this chapter and any outside sources you can find, answer the following.

1. Define what is meant by a “learning organization.”

2. What are the building blocks of a learning organization?

3. What are the advantages of a learning organization?

4. What should be done to promote a learning organization in the project environment?

Bibliography

Adler, P. S., A. Mandelbaum, V. Nguyen, and E. Schwerer, “From Project to Process Management: An Empirically-Based Framework for Analyzing Product Development Time,” Management Science, Vol. 41, No. 3, pp. 458–484, 1995.

Boehm, B., “A Spiral Model of Software Development and Enhancement,” IEEE Computer, Vol. 21, No. 5, pp. 61–72, 1988.

Franco, C. A., “Learning Organizations: A Key for Innovation and Competitiveness,” 1997 Portland International Conference on Management of Engineering and Technology, pp. 345–348, July 27–31, 1997.

Fricke, S. E. and A. J. Shenhar, “Managing Multiple Engineering Projects in a Manufacturing Support Environment,” IEEE Transactions on Engineering Management, Vol. 47, No. 2, pp. 258–268, 2000.

Garvin, D. A., “Competing on the Eight Dimensions of Quality,” Harvard Business Review, Vol. 65, No. 6, pp. 101–110, November– December 1987.

ISO 9000 Revisions Progress to FDIS Status, press release ref. 779, International Organization for Standardization, Geneva, Switzerland, 2000.

Katzenbach, R. J. and K. D. Smith, The Wisdom of Teams, Harvard Business School Press, Boston, MA, 1993.

Keil, M., A. Rai, J. E. C. Mann, and G. P. Zhang, “Why Software Projects Escalate: The Importance of Project Management Constructs,” IEEE Transactions on Engineering Management, Vol. 50, No. 3, pp. 251–261, 2003.

Morris, P. W. G., “Managing Project Interfaces: Key Points for Project Success,” in D. I. Cleland and W. R. King (Editors), Project Management Handbook, Second Edition, Prentice Hall, Englewood Cliffs, NJ, 1988.

Muench, D., The Sybase Development Framework, Sybase, Oakland, CA, 1994.

PMI Standards Committee, A Guide to the Project Management Body of Knowledge (PMBOK), Fifth Edition, Project Management Institute, Newtown Square, PA, 2013 (http://www.PMI.org).

PMI, Organizational Project Management Maturity Model, Project Management Institute, Newtown Square, PA, 2003.

Shtub, A., “Project Management Cycle—Process Used to Manage Projects (Steps to go Through),” in G. Salvendy (Editor), Handbook of Industrial Engineering: Technology and Operations Management, Third Edition, Chapter 45, pp. 1246–1251, John Wiley & Sons, New York, 2001.

Shtub, A., J. F. Bard, and S. Globerson, Project Management: Engineering, Technology, and Implementation, Prentice Hall, Englewood Cliffs, NJ, 1994.

Stevenson, T. H. and F. C. Barnes, “Fourteen Years of ISO 9000: Impact, Criticisms, Costs and Benefits,” Business Horizons, Vol. 44, No. 3, pp. 45–51, 2001.

Stohr, E. A. and J. L. Zhao, “Workflow Automation: Overview and Research Issues,” Information Systems Frontiers, Vol. 3, No. 3, pp. 281–296, 2001.

U.S. Department of Defense Directive 5000.2 (1993).

U.S. Department of Defense, “Parametric Software Cost Estimating,” in Parametric Estimating Handbook, Second Edition, Chapter 5, International Society of Parametric Analysts (ISPA), 1999 (http://www.jsc.nasa.gov/bu2/PCEHHTML/pceh.htm).

Wyrick, D. A., “Understanding Learning Styles to Be a More Effective Team Leader and Engineering Manager,” Engineering Management Journal, Vol. 15, No. 1, pp. 27–33, 2003.

Chapter 3 Engineering Economic Analysis

3.1 Introduction

The design of a system represents a decision about how resources will be transformed to achieve a given set of objectives. The final design is a choice of a particular combination of resources and a blueprint for using them; it is selected from among other combinations that would accomplish the same objectives but perhaps with different cost and performance consequences. For example, the design of a commercial aircraft represents a choice of structural materials, size and location of engines, spacing of seats, and so on; the same result could be achieved in any number of ways.

A design must satisfy a host of technical considerations and constraints because only some things are possible. In general, it must conform to the laws of natural science. To continue with the aircraft example, there are limits to the strength of metal alloys or composites and to the thrust attainable from jet engines. The creation of a good design for a system requires solid technical knowledge and competence. Engineers may take this to be self-evident, but it often needs to be stressed to upper management and political leaders, who may be motivated by what a proposed system might accomplish rather than by costs and the limitations of technology.

Economics and value must also be taken into account in the choice of design; the best configuration cannot be determined from technical qualities alone. Moreover, value per dollar spent tends to dominate the final choice of a system. As a general rule, the engineer must pick from among many possible configurations, each of which may seem equally effective from a technical point of view. The selection of the best configuration is determined by comparing the costs and relative values associated with each. The choice between constructing an aircraft of aluminum or titanium is generally a question of cost, as both can meet the required standards. For more complex systems, political or other values may be more important than costs. In planning an airport for a city, for instance, it is usually the case that several sites will be judged suitable. The final choice hinges on societal decisions regarding the relative importance of accessibility, congestion, and other environmental and political impacts, in addition to cost.

As engineers have become increasingly involved with interoperability and integration of systems, they must deal with new issues and incorporate new methods into their analyses. Traditionally, engineering education and practice have been concerned with detailed design. At that level, technical problems dominate, with economics taking a back seat. In designing an engine, for example, the immediate task––and the trademark of the engineer––is to make the device work properly. At the systems level, however, economic considerations are likely to be critical. Thus the design of a transportation system generally assumes that engines to power vehicles will be available and focuses attention on such issues as whether service can be provided at a price low enough to generate sufficient traffic to make the enterprise worthwhile.

3.1.1 Need for Economic Analysis

The purpose of an economic evaluation is to determine whether any project or investment is financially desirable. Specifically, an evaluation addresses two sorts of questions:

Is an individual project worthwhile? That is, does it meet our minimum standards?

Given a list of projects, which is the best? How does each project rank or compare with the others on the list?

This chapter shows how both of these questions should be answered when dealing strictly with cash flows. Chapters 5 and 6 add qualitative considerations to the discussion.

In practice, economic evaluations are difficult to perform correctly, in large part because those responsible for carrying out the analyses––middle-level managers or staff––necessarily have a limited view of their organization’s activities and cannot realistically take into account all potential opportunities and risks. As a result, most evaluations are done on the basis of incomplete and/or inaccurate information, leading to erroneous assumptions.

Project proposals are evaluated using financial criteria such as net present value (NPV), rate of return (ROR), and payback period. Each method is discussed in detail and then compared with the others. Each criterion requires assumptions on the part of decision makers that can lead to biases in evaluating project proposals. The chapter concludes with a discussion of utility theory that can be used to explain how decision makers deal with uncertain outcomes.

3.1.2 Time Value of Money

Many projects, particularly large systems, evolve over long periods. Costs incurred in one period may generate benefits for many years to come. The evaluation of whether these projects are worthwhile therefore must compare benefits and costs that occur at quite different times.

The essential problem in evaluating projects over time is that money has a time value. A dollar now is worth more than a dollar a year from now. The money represents the same nominal quantity, to be sure, but a dollar later does not have the same usefulness or buying power that a dollar has today. The problem is one of comparability. Because of this value differential, we cannot estimate total benefits (or costs) simply by adding dollar amounts that are realized in different periods. To make a valid comparison, we need to translate all cash flows into comparable quantities.

From a mathematical point of view, the solution to the evaluation problem is simple. It consists of using a handful of formulas that depend on only two parameters: the duration, or “life,” of the project, n, and the discount rate, i. These formulas are built into many pocket calculators and are routinely embedded in spreadsheet programs available on personal computers. In the next three sections, we present these essential formulas and examine their use.

From a practical point of view, the analytic solutions are delicate and must be interpreted with care. Values generated by the formulas are sensitive to their two parameters, which are rarely known with certainty. Results, therefore, are somewhat arbitrary, implying that the problem of evaluating projects over time is a mixture of art and science.

3.1.3 Discount Rate, Interest Rate, and Minimum Acceptable Rate of Return

A dollar today is worth more than a dollar in the future because it can be used productively between now and then. For example, you can place money in a savings account and get a greater amount back after some period. In the economy at large, businesses and governments can use money to build plants, manufacture products, grow food, educate people, and undertake other worthwhile activities.

Moreover, any given amount of money now is typically worth more than the same amount in the future because of inflation. As prices go up as a result of inflation, the current buying power of the dollar erodes. The discount rate is one way of translating cash flows in the future to the present. It is used to determine by how much any future receipt or expenditure is discounted; that is, reduced to make it correspond to an equivalent amount today. The discount rate thus is the key factor in the evaluation of projects over time. It is the parameter that permits us to compare costs and benefits incurred at different instances in time.

The discount rate is generally expressed as an annual percentage. Normally, this percentage is assumed to be constant for any particular evaluation. Because we usually have no reason to believe that it would change in any known way, we take it to be constant over time when looking at any project.

It may, however, be different for various individuals, companies, or governments, and may also vary among people or groups as circumstances change. Baumol (1968) discussed the effect of the discount rate on social choice, and De Neufville (1990) indicated how to select an appropriate value for both public- and private-sector investments.

The discount rate is similar to what we think of as the prevailing interest rate but is actually a different concept. It is similar in that both can be stated as a percentage per period, and both can indicate a connection between money now and money later. The difference is that the discount rate represents real change in value to a person or a group, as determined by their possibilities for productive use of the money and the effects of inflation. By contrast, the interest rate narrowly defines a contractual arrangement between a borrower and a lender. This distinction implies a general rule: discount rate > interest rate. Indeed, if people were not getting more value from the money that they borrow than the interest that they pay for it, then they would be silly to go to the trouble of incurring the debt.

When an organization launches a project, it is inherently taking on some risk. As we know from real-world applications, certain projects will fail altogether while others will under-deliver and/or be delayed. In order to protect itself against risk, an organization will seek a financial return on a project that is greater than the prevailing interest rate that can be obtained in a bank. The discount rate that an organization uses to assess project opportunities can reflect some of the inherent risk associated with proposed projects. Different projects may use different discount factors, depending on their respective level of risk.

It is common in the engineering economic literature to use the terms discount rate and interest rate interchangeably. A third term, minimum acceptable rate of return (MARR), also has the same meaning. In the remainder of the book, we follow convention and take all three terms to be synonymous unless otherwise indicated.

3.2 Compound Interest Formulas

Whenever the interest charge for any period is based on the remaining principal to be repaid plus any accumulated interest charges up to the beginning of that period, the interest is said to be compound. Basic compound interest formulas and factors that assume discrete (lump-sum) payments and discrete interest periods are discussed in this section. The notation used to present the concepts is summarized below:

i = interest rate per interest period, sometimes referred to as the discount rate or MARR; given as a decimal number in the formulas below (e.g., 12% is equivalent to 0.12)

n = number of compounding periods

P = present sum of money (equivalent worth of one or more cash flows at a point in time called the present)

F = future sum of money (equivalent worth of one or more cash flows at a point in time called the future)

A_n = discrete payment or receipt occurring at the end of interest period n

A = end-of-period cash flow (or equivalent end-of-period value) in a uniform series continuing for n periods (sometimes called an “annuity”); the special case in which A_1 = A_2 = … = A_n = A

G = gradient, or amount by which end-of-period cash flows increase or decrease linearly (arithmetic gradient); A_n = A_1 + (n − 1)G

g = gradient, or rate at which end-of-period cash flows increase or decrease geometrically; A_n = A_1(1 + g)^(n−1)

The compound interest formulas follow:

Single payment compound amount factor

$(F/P, i, n) = (1+i)^n$

Single payment present worth factor

$(P/F, i, n) = \frac{1}{(1+i)^n} = \frac{1}{(F/P, i, n)}$

Uniform series compound amount factor

$(F/A, i, n) = \frac{(1+i)^n - 1}{i}$

Uniform series sinking fund factor

$(A/F, i, n) = \frac{i}{(1+i)^n - 1} = \frac{1}{(F/A, i, n)}$

Uniform series present worth factor

$(P/A, i, n) = \frac{(1+i)^n - 1}{i(1+i)^n}$

Uniform series capital recovery factor

$(A/P, i, n) = \frac{i(1+i)^n}{(1+i)^n - 1} = \frac{1}{(P/A, i, n)}$

Arithmetic gradient present worth factor

$(P/G, i, n) = \frac{(1+i)^n - in - 1}{i^2(1+i)^n}$

Arithmetic gradient uniform series factor

$(A/G, i, n) = \frac{(1+i)^n - in - 1}{i(1+i)^n - i}$

Geometric gradient present worth factor

$(P/A_1, g, i, n) = \frac{1 - (1+g)^n(1+i)^{-n}}{i - g}$ for $i \neq g$;  $(P/A_1, g, i, n) = \frac{n}{1+i}$ for $i = g$

Limiting cases:

As $n \to \infty$: $(F/P, i, n) \to \infty$, $(P/F, i, n) \to 0$, $(P/A, i, n) \to 1/i$, $(A/P, i, n) \to i$, $(F/A, i, n) \to \infty$, $(A/F, i, n) \to 0$, $(P/G, i, n) \to 1/i^2$, $(A/G, i, n) \to 1/i$.

For $i = 0$: $(F/P, i, n) = 1$, $(P/F, i, n) = 1$, $(P/A, i, n) = n$, $(A/P, i, n) = 1/n$, $(F/A, i, n) = n$, $(A/F, i, n) = 1/n$, $(P/G, i, n) = n(n-1)/2$, $(A/G, i, n) = (n-1)/2$.

In using the compound interest formulas to solve a problem, it is useful to note that the chain rule is applicable. For example, if you want to find P given F, instead of calculating P with the expression P = F(P/F, i, n), you can make use of the relationship P = F(A/F, i, n)(P/A, i, n) should it be more convenient to do so.
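The factor formulas above, including the chain-rule identity, are easy to check numerically. The sketch below is illustrative; the function names (fp, pa, and so on) are our own shorthand for the standard factor notation (F/P, i, n), (P/A, i, n), etc.

```python
# Discrete compound interest factors from Section 3.2.
def fp(i, n):  # single payment compound amount (F/P, i, n)
    return (1 + i) ** n

def pf(i, n):  # single payment present worth (P/F, i, n)
    return 1 / fp(i, n)

def fa(i, n):  # uniform series compound amount (F/A, i, n)
    return ((1 + i) ** n - 1) / i

def af(i, n):  # uniform series sinking fund (A/F, i, n)
    return 1 / fa(i, n)

def pa(i, n):  # uniform series present worth (P/A, i, n)
    return ((1 + i) ** n - 1) / (i * (1 + i) ** n)

def ap(i, n):  # uniform series capital recovery (A/P, i, n)
    return 1 / pa(i, n)

def pg(i, n):  # arithmetic gradient present worth (P/G, i, n)
    return ((1 + i) ** n - i * n - 1) / (i ** 2 * (1 + i) ** n)

def ag(i, n):  # arithmetic gradient uniform series (A/G, i, n)
    return ((1 + i) ** n - i * n - 1) / (i * (1 + i) ** n - i)

def pa_geo(g, i, n):  # geometric gradient present worth (P/A1, g, i, n)
    if i == g:
        return n / (1 + i)
    return (1 - (1 + g) ** n * (1 + i) ** -n) / (i - g)

# Chain rule: P = F(P/F, i, n) equals P = F(A/F, i, n)(P/A, i, n).
F, i, n = 1000.0, 0.15, 5
direct = F * pf(i, n)
chained = F * af(i, n) * pa(i, n)
```

The two routes to P agree because (A/F, i, n)(P/A, i, n) reduces algebraically to (P/F, i, n).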

3.2.1 Present Worth, Future Worth, Uniform Series, and Gradient Series

Figure 3.1 is a diagram that shows typical placements of P, F, A, and G over time for n periods with interest at i% per period. Upward pointing arrows usually indicate payments or disbursements, and downward pointing arrows indicate receipts or savings. As depicted in the figure, the following conventions apply in using the discrete compound interest formulas and corresponding tables:

Figure 3.1 Standard cash flow diagram indicating points in time for P, F, A, and G.


1. A occurs at the end of the interest period.

2. P occurs one interest period before the first A.

3. F occurs at the same point in time as the last A, and n periods after P.

4. There is no G cash flow at the end of period 1; hence, the total gradient cash flow at the end of period n is ( n−1 )G.

Most economic analyses involve conversion of estimated or given cash flows to some point or points in time, such as the present, per annum, or the future. The specific calculations are best illustrated with the help of examples.

Example 3-1

Suppose that a $20,000 piece of equipment is expected to last 5 years and then result in a $4,000 salvage value; that is, it can be sold for $4,000. If the minimum acceptable rate of return (interest rate) is 15%, what are the following values?

1. Annual equivalent (cost)

2. Present equivalent (cost)

Solution

Figure 3.2 shows all the cash flows.

Figure 3.2 Cash flow diagram for Example 3-1.


1. A = −$20,000(A/P, 15%, 5) + $4,000(A/F, 15%, 5) = −$20,000(0.2983) + $4,000(0.1483) = −$5,373

[Note: $5,373 is sometimes called the annual cost (AC) or equivalent uniform annual cost (EUAC).]

2. P = −$20,000 + $4,000(P/F, 15%, 5) = −$20,000 + $4,000(0.4972) = −$18,011

Alternatively, it is possible to solve part (b) by exploiting the results obtained from part (a) as follows:

P = A(P/A, 15%, 5) = −$5,373(3.3522) = −$18,011
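Example 3-1 can also be verified by computing the factors directly rather than reading them from tables; a minimal sketch:

```python
i, n = 0.15, 5
ap = i * (1 + i) ** n / ((1 + i) ** n - 1)    # (A/P, 15%, 5) ≈ 0.2983
af = i / ((1 + i) ** n - 1)                   # (A/F, 15%, 5) ≈ 0.1483
pf = (1 + i) ** -n                            # (P/F, 15%, 5) ≈ 0.4972
pa = ((1 + i) ** n - 1) / (i * (1 + i) ** n)  # (P/A, 15%, 5) ≈ 3.3522

A = -20000 * ap + 4000 * af   # annual equivalent cost, part (a)
P = -20000 + 4000 * pf        # present equivalent cost, part (b)
P_alt = A * pa                # part (b) recovered from part (a)
```

The two values of P agree exactly because (A/F)(P/A) = (P/F) and (A/P)(P/A) = 1.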

Example 3-2 (Deferred Uniform Series and Gradient Series)

Suppose that a certain savings is expected to be $10M at the end of year 3 and to increase $1M each year until the end of year 7. If the MARR is 20%, then what are the following values?

1. Present equivalent (at beginning of year 1)

2. Future equivalent (at end of year 7)

Solution

Once again, the first step is to draw the cash flow diagram. Figure 3.3 shows the gradient beginning at the end of year 3 and the unknowns to be calculated (dashed arrows). In the solution, subscripts are used to indicate a point or points in time.

Figure 3.3 Cash flow diagram for Example 3-2 showing deferred uniform and gradient series.


1. A_3–7 = $10M + $1M(A/G, 20%, 5) = $10M + $1M(1.6405) = $11.64M

P_2 = A_3–7(P/A, 20%, 5) = $11.64M(2.9906) = $34.81M

P_0 = F_2(P/F, 20%, 2) = $34.81M(0.6944) = $24.17M

Notice that in the last calculation, the value of P_2 is substituted for F_2.

2. (Skipping intermediate calculations):

F_7 = [$10M + $1M(A/G, 20%, 5)](F/A, 20%, 5) = [$10M + $1M(1.6405)](7.4416) = $86.62M

Alternatively, one can use part (a) results to obtain F 7 as follows:

F_7 = P_0(F/P, 20%, 7) = $24.17M(3.5832) = $86.62M
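A quick numerical check of Example 3-2, with values in $M and the factor expressions from the beginning of Section 3.2:

```python
i = 0.20
ag5 = ((1 + i) ** 5 - 5 * i - 1) / (i * (1 + i) ** 5 - i)  # (A/G, 20%, 5) ≈ 1.6405
pa5 = ((1 + i) ** 5 - 1) / (i * (1 + i) ** 5)              # (P/A, 20%, 5) ≈ 2.9906

A_3_7 = 10 + 1 * ag5        # uniform equivalent of years 3-7, in $M
P2 = A_3_7 * pa5            # worth at end of year 2
P0 = P2 * (1 + i) ** -2     # present equivalent (beginning of year 1)
F7 = P0 * (1 + i) ** 7      # future equivalent at end of year 7
```

Small differences in the last digit relative to the worked example come from the four-decimal rounding of the tabulated factors.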

Example 3-3 (Repeating Cycle of Payments)

Suppose that the equipment in Example 3-1 is expected to be replaced three times with identical equipment, making four life cycles of 5 years each. To compare this investment correctly with another alternative that can serve 20 years, what are the following values when MARR=15%?

1. Annual equivalent (cost)

2. Present equivalent (cost)

Solution

Figure 3.4 shows the costs involved. The key to this type of problem is to recognize that if the cash flows repeat each cycle, then the annual equivalent for one cycle will be the same for all other cycles.

Figure 3.4 Cash flow diagram for Example 3-3.

1. We demonstrate a slightly different way to get the same answer as in Example 3-1.

A = [−$20,000 + $4,000(P/F, 15%, 5)](A/P, 15%, 5) = [−$20,000 + $4,000(0.4972)](0.2983) = −$5,373

2. P = −$5,373(P/A, 15%, 20) = −$5,373(6.2593) = −$33,629
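Example 3-3 can likewise be checked by computing the factors directly (amounts in dollars, as in Example 3-1):

```python
i = 0.15
pf5 = (1 + i) ** -5                               # (P/F, 15%, 5)
ap5 = i * (1 + i) ** 5 / ((1 + i) ** 5 - 1)       # (A/P, 15%, 5)
pa20 = ((1 + i) ** 20 - 1) / (i * (1 + i) ** 20)  # (P/A, 15%, 20)

A = (-20000 + 4000 * pf5) * ap5  # annual equivalent, same as Example 3-1
P = A * pa20                     # present equivalent over four 5-year cycles
```

Because the cycles are identical, the one-cycle annual equivalent applies over all 20 years, which is what justifies discounting A with (P/A, 15%, 20).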

3.2.2 Nominal and Effective Interest Rates

Interest rates are often quoted in many different ways. In standard terminology, we have

Nominal interest rate, r, is the annual interest rate without considering the effects of compounding.

Effective interest rate, i_eff, is the annual interest rate taking into account the effects of compounding during the year.

To work with these rates, it is necessary to know the number of compounding periods per year, denoted by p. The nominal interest rate is typically stated as a percentage compounded p times per year.

Example 3-4

The nominal rate is 16%/year compounded quarterly. What is the effective rate?

Solution

r = 16%/year divided by 4 = 4%/quarter. On an annual basis, this is equivalent to 16.99%/year. The general formula is

$i_{eff} = (1 + r/p)^p - 1$

$i_{eff} = (1 + 0.16/4)^4 - 1 = (1.04)^4 - 1 = 1.1699 - 1 = 0.1699 \to 16.99\%$

Example 3-5 (Nominal vs. Effective Rates)

A credit card company advertises a nominal rate of 16% on unpaid balances compounded daily. What is the effective interest rate per year being charged?

Solution

r = 16%/year, p = 365 days/year

$i_{eff} = (1 + 0.16/365)^{365} - 1 = 0.1735 \to 17.35\%$

At the beginning of this section, i was defined simply as the interest rate per interest period. A more precise definition, we now know, is that i is the effective interest rate per interest period. When compounding is continuous, we have the special case in which $i_{eff} = e^r - 1$.
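The nominal-to-effective conversions of Examples 3-4 and 3-5, together with the continuous-compounding limit, can be computed in a few lines (a short illustrative sketch):

```python
import math

def effective_rate(r, p):
    """Effective annual rate for nominal rate r compounded p times per year."""
    return (1 + r / p) ** p - 1

quarterly = effective_rate(0.16, 4)     # Example 3-4: quarterly compounding
daily = effective_rate(0.16, 365)       # Example 3-5: daily compounding
continuous = math.exp(0.16) - 1         # limiting case i_eff = e^r - 1
```

As p grows, the effective rate increases toward the continuous-compounding limit, which is why the daily and continuous rates agree to four decimal places here.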

3.2.3 Inflation

Inflation is a condition in the economy characterized by rising prices for goods and services. An inflationary trend makes future dollars have less purchasing power than current dollars. This helps long-term borrowers at the expense of lenders because a loan negotiated today will be repaid in the future with dollars of lesser value.

In an economic analysis, one approach used to compensate for inflation is first to convert all cash flows from year-n, or actual, dollars into year-0, or real, dollars. If the inflation rate is, say, f, then this can be done by discounting or deflating future dollars to the present as follows:

year-0 dollars = $(1+f)^{-n}$ × (year-n dollars)

We would now proceed as before with the analysis. Alternatively, one may compute an interest rate i′ that incorporates inflation,

$i' = i + f + i \times f$

and use it in conjunction with the present worth factors to compute the present value of future cash flows. Either approach should give the same results. The important thing to remember is that all cash flows must be expressed in the same units.
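That the two approaches agree follows from the identity 1 + i′ = (1 + i)(1 + f). A small numerical check, using a hypothetical $1,000 flow at the end of year 3:

```python
f, i = 0.05, 0.04
i_prime = i + f + i * f   # MARR with inflation; note 1 + i' = (1 + i)(1 + f)

cash_actual = 1000.0                     # hypothetical year-3 flow in actual $
cash_real = cash_actual * (1 + f) ** -3  # same flow deflated to year-0 (real) $

pw_from_actual = cash_actual * (1 + i_prime) ** -3  # discount actual $ at i'
pw_from_real = cash_real * (1 + i) ** -3            # discount real $ at i
```

Discounting actual dollars at i′ and real dollars at i yields the same present worth, which is the "same units" requirement stated above.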

Example 3-6

1. Tuition at Big State University is $2,500 today. We expect college costs to increase at a 6% annual rate. What will tuition be in 10 years?

Future tuition = $2,500(1 + 0.06)^10 = $4,477

2. If the cost of a hamburger is $3 today, then what did it cost 40 years ago? Assume the average rate of inflation during that time was 5%.

Former price of hamburger = $3/(1 + 0.05)^40 = $0.43
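Both parts of Example 3-6 are one-line computations (using the $2,500 tuition base consistent with the stated $4,477 result):

```python
future_tuition = 2500 * (1 + 0.06) ** 10  # part 1: tuition inflated 10 years at 6%
former_price = 3 / (1 + 0.05) ** 40       # part 2: $3 deflated 40 years at 5%
```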

When all receipts and expenses escalate at the same rate as inflation, we can ignore inflation and do the analysis in real dollars using i. In practice, however, cash flows may be given in both real and actual dollars so we must select a constant frame of reference in which to perform the analysis.

Example 3-7

You are considering a $10,000 investment that has a life of 10 years and no salvage value. On the basis of today’s economic environment, it is estimated that

operating costs will be $500 per year and revenue $2,000 per year

the general inflation rate will be 5% ( f=0.05 )

operating costs will escalate at the same rate as general inflation

revenues will not increase with time

For a 4% MARR without inflation ( i=0.04 ), what is the NPV of the investment?

Solution

The components of the cash flow increase at rates different from general inflation, so we must either convert all of them to actual dollars and use the MARR with inflation (i′) or convert all of them to real dollars and use the MARR without inflation (i). The analysis for both approaches is presented.

1. Analysis in terms of actual dollars: We first must find the appropriate interest rate.

i′=0.04+0.05−0.04×0.05=0.092 or 9.2%

The revenues are already expressed in actual dollars, so it is necessary only to convert the costs to actual dollars. The data in the last column of the first table below represent the present worth (PW) of the cash flow at the end of year n using an MARR of 9.2%.

Time   Costs (actual $)   Revenues (actual $)   Net cash flow (actual $)   PW(9.2%) (actual $)
 0         10,000                  —                  −10,000                  −10,000
 1            525                2,000                  1,475                    1,351
 2            551                2,000                  1,449                    1,215
 3            579                2,000                  1,421                    1,091
 4            608                2,000                  1,392                      979
 5            638                2,000                  1,362                      877
 6            670                2,000                  1,330                      784
 7            704                2,000                  1,296                      700
 8            739                2,000                  1,261                      624
 9            776                2,000                  1,224                      554
10            814                2,000                  1,186                      492
                                                                          NPV = −1,332

2. Analysis in terms of real dollars: For this case, we use i=0.04 to compute PW. To get the net cash flows in each year, it is first necessary to convert revenues to real dollars using the formula

Revenue in real $ (in year n) = $2,000/(1.05)^n

Time   Costs (real $)   Revenues (real $)   Net cash flow (real $)
 0        10,000                —                −10,000
 1           500              1,905                1,405
 2           500              1,814                1,314
 3           500              1,728                1,228
 4           500              1,645                1,145
 5           500              1,567                1,067
 6           500              1,492                  992
 7           500              1,421                  921
 8           500              1,354                  854
 9           500              1,289                  789
10           500              1,228                  728

As expected, both sets of computations give the same NPV of −$1,332.
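The two analyses of Example 3-7 can be reproduced in a few lines; both sums should agree to within rounding:

```python
i, f = 0.04, 0.05
i_prime = i + f + i * f   # MARR with inflation = 0.092

# Actual-dollar analysis: costs escalate at f, revenues stay at $2,000.
npv_actual = -10000 + sum(
    (2000 - 500 * (1 + f) ** n) / (1 + i_prime) ** n for n in range(1, 11)
)

# Real-dollar analysis: revenues are deflated, costs stay at $500.
npv_real = -10000 + sum(
    (2000 / (1 + f) ** n - 500) / (1 + i) ** n for n in range(1, 11)
)
```

Each year's term is algebraically identical in the two sums (divide numerator and denominator of the actual-dollar term by (1 + f)^n), which is why the NPVs match exactly rather than approximately.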

3.2.4 Treatment of Risk

Risk comes in many forms. If a new product is being developed, then the probability of commercial success is a major consideration. If a new technology is being pursued, then we must constantly reevaluate the probability of technical success and the availability of critical personnel and resources. Once a product is ready for the market, such factors as financing, contractual obligations, reliability of suppliers, and strength of competition must be brought into the equation.

In the private sector, projects that are riskier than others are forced to pay higher interest rates to attract capital. A speculative new company will have to pay the banks several percentage points more for its borrowing than will established, prime customers. Private companies, which always run the risk of bankruptcy, have to pay more than the government. This extra amount of interest is known as the risk premium and, as a practical matter, is already included in the discount rate.

When a particular project faces uncommon technical or commercial risks, the evaluation process should address each directly. Decision analysis (Chapter 5), coupled with the use of multiple-criteria methodologies (Chapter 6), is the preferred way to appraise projects with a high component of risk.

3.3 Comparison of Alternatives

The essence of all economic evaluation is a discounted cash flow analysis. The first step in every situation is to lay out the estimated cash flows, that is, the sequence of benefits (returns) and costs (payments) over time. These are then discounted back to the present, using the methods shown in the previous section, either directly or indirectly in the case of the rate-of-return and payback period methods.

The relative merits of the available alternatives are determined by comparing the discounted cash flows of benefits and costs. In general, a project is considered to be worthwhile when its benefits exceed its costs. The relative ranking of the projects is then determined by one of several evaluation criteria. The methods of evaluation differ from each other principally in the way in which they handle the results of the discounted cash flow analysis. The present value method focuses on the difference between the discounted benefits and costs, the ratio methods involve various comparisons of these quantities, and the internal rate-of-return method tries to equalize them. The question of what one does with the results of the discounted cash flows is the central problem of economic evaluation.

Most methods presume that the discount rate to be used in the cash flow analysis is known. This is often a reasonable assumption, because many companies or agencies require that a specific rate be used for all of their economic evaluations. In many instances, however, the discount rate must be determined.

In carrying out an evaluation, estimation of the discount rate may be crucial. Its choice can easily change the ranking of projects, making one or another seem best depending on the rate used. This is because lower rates make long-term projects, with benefits in the distant future, seem much more attractive relative to short-term projects with immediate benefits than they would be if a higher rate were used.

To see this, suppose that your organization has the choice of two storage and retrieval systems, one that requires a human operator and one that is fully automated. Both will last for 10 years. The human-assisted system costs $10,000 and requires $4,200 per year of labor. The automated system has an initial cost of $18,000 and consumes an additional $3,000 per year in power. The decision is a question of whether the benefits of the annual savings ($4,200 − $3,000 = $1,200 a year) justify the additional initial cost of $8,000. Is the NPV of the upgrade to the more expensive alternative positive?

If the discount rate were zero, implying that future benefits are not discounted, then the upgrade is clearly worthwhile.

NPV(i = 0%) = ($1,200/yr)(10 years) − $8,000 = $4,000

Conversely, if the discount rate were large, then future benefits would be heavily discounted. For infinite i,

NPV( i=∞ )=$1,200( 0 )−$8,000= −$8,000

so the project is not worthwhile.

The variation of the NPV with the discount rate is summarized as follows:

i (%)           0        5       10       15        ∞
NPV(i)     $4,000   $1,264    −$632  −$1,976  −$8,000

The critical value of i, below which the more expensive system is preferred, is approximately 8.3%, as determined by linear interpolation between 5% and 10%.
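The calculation above can be sketched in code. The following is a minimal illustration (not from the text) that evaluates the NPV of the $8,000 upgrade at several discount rates and then locates the break-even rate by bisection rather than hand interpolation:

```python
# Sketch: NPV of paying $8,000 now for $1,200/yr of savings over 10 years,
# evaluated at several discount rates, plus a bisection search for the
# break-even rate at which NPV = 0.

def npv_upgrade(i, savings=1200.0, cost=8000.0, n=10):
    """NPV of `cost` paid now in exchange for `savings` at the end of each of n years."""
    if i == 0:
        return savings * n - cost
    pa = (1 - (1 + i) ** -n) / i      # (P/A, i, n) factor
    return savings * pa - cost

for rate in (0.0, 0.05, 0.10, 0.15):
    print(f"NPV({rate:.0%}) = {npv_upgrade(rate):,.0f}")

# Bisection between two rates that bracket the root (NPV > 0 at 5%, < 0 at 10%).
lo, hi = 0.05, 0.10
for _ in range(60):
    mid = (lo + hi) / 2
    if npv_upgrade(mid) > 0:
        lo = mid
    else:
        hi = mid
print(f"break-even rate ≈ {lo:.2%}")
```

Because NPV decreases monotonically in the discount rate for this cash flow, bisection converges to the unique break-even rate (about 8.1%; linear interpolation from tabulated values overstates it slightly).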

As this example shows, the choice of the discount rate can steer an analysis in one direction or another. Powerful economic and political forces allied with a particular technology may encourage this. When the U.S. Federal Highway Administration promulgated a regulation in the early 1970s that the discount rate for all federally funded highways would be zero, this was widely interpreted as a victory for the cement industry over asphalt interests. Roads that are made of concrete cost significantly more than those that are made of asphalt but require less maintenance and less frequent replacement.

3.3.1 Defining Investment Alternatives Every evaluation deals with two distinct sets of projects or alternatives: the explicit and the implicit. The explicit set consists of the opportunities that are to be considered in detail; they are the focus of the analysis. The implicit set, which can only be defined imprecisely, is important because it provides the frame of reference for the evaluation and defines the minimum standards.

Explicit set of alternatives This is a limited list of the potential projects that could actually be chosen. The list is usually defined by a manager who is concerned with a particular issue; for example,

an official of the department of highways who is responsible for maintenance and construction of roads

a manager of a computer center, proposing to acquire new equipment

an investment officer for a bank, presenting a menu of opportunities for construction loans

The projects suggested by each of the preceding situations illustrate two characteristics typical of the choices considered in an evaluation. The explicit set is:

1. Limited in scope, in that it includes only a portion of the projects that might be in front of the organization as a whole. Thus, the manager of the computer center is competent in, and considers, only the various ways to improve the information systems; whether money should be spent on developing a new product or replacing the central heating is literally not his or her department.

2. Limited in number, being only a fraction of all of the projects that could be defined over the next several years. Usually, the explicit list deals only with the immediate choices, not the ones that could arise during the next budget or decision period.

Since the sets of projects that we consider explicitly are limited, any procedure that analyzes separate sets of projects independently can easily lead to a list of recommended choices that are not the best ones for the organization as a whole. For example, consider a company with an information systems department, a research laboratory, and a manufacturing plant: If we evaluate the projects proposed by each group, we can determine the best software, the best instrument, and the best machine tool to buy, but this plan may not be in the best interests of the company. It is possible that the second-best machine tool is a better investment than the best instrument or that none of the software is worthwhile financially.

The issue is: How does an organization ensure that the projects selected by its components are best for the organization as a whole? In addressing this question, we must recognize that the obvious answer, considering all possible projects simultaneously, is neither practical nor even possible. A large number of analyses could be done, but the level of computation is not the real obstacle.

An analysis of all alternatives at once is not practical because it would be extremely difficult for any group in an organization to be sufficiently knowledgeable both to generate the possible projects for all departments and to estimate their benefits and costs. They simply would not have sufficient knowledge of the topic, region, or clients. Furthermore, the analysis of all alternatives at once is not even conceptually feasible because we are unable to predict which options will be available in the future. We therefore can never be sure that the projects that we select from a current list, however comprehensive it may be, will include all of the opportunities that will be available over the life of the projects and that might otherwise be selected. Some degree of sub-optimization is unavoidable.

To reduce the likelihood of sub-optimization, it is necessary to create some means of evaluating any set of explicit alternatives that does not critically depend on future developments. This can be done by creating a substitute for the universe of possibilities. The implicit alternatives fill this role.

Implicit set of alternatives This set is intended to represent all projects that were available in the past and that might be available in the near future. Because it refers in part to unknown prospects, it can never be described in detail. It thus indicates inexactly what could be done instead of what can be done by opting for one of the explicit alternatives.

The implicit set of alternatives is of interest because it establishes minimum standards for deciding whether any explicit project is worthwhile. To illustrate, consider the situation in which a person has consistently been able to choose investments that provide yearly profits of 12% or more and has rejected all others with smaller returns. Faced now with the problem of evaluating an explicit set of specific proposals, this person will naturally turn to past experience for guidance. If the investment possibilities have not changed fundamentally, then the person may assume that there are continued possibilities—the implicit set of alternatives—for earning at least 12% as before and should correctly conclude that any explicit choice can be worthwhile only if its profitability equals or exceeds the 12% implicitly available elsewhere.

The minimum standards suggested by the implicit alternatives can be stated in several ways. An obvious and common way is to stipulate a minimum acceptable rate of return. Minimum standards of profitability can also be expressed differently, however. In business, they are typically stated in terms of the highest number of periods that will be required for the benefits to equal the initial investment (the maximum payback period, see Section 3.4.6). Minimum standards can also be defined in terms of minimum ratios of benefits to costs (Section 5.4).

Organizations use minimum standards for the economic acceptability of projects, as they force each department or group to take into account the global picture. They cannot, for example, choose projects unless they are at least as good as others available elsewhere in the organization.

3.3.2 Steps in the Analysis A systematic procedure for comparing investment alternatives can be outlined as follows:

1. Define the alternatives.

2. Determine the study period.

3. Provide estimates of the cash flows for each alternative.

4. Specify the interest rate (MARR).

5. Select the measure(s) of effectiveness (i.e., the criteria for judging success).

6. Compare the alternatives.

7. Perform sensitivity analyses.

8. Select the preferred alternative(s).

The study period defines the planning horizon over which the analysis is to be performed. It may or may not be the same as the useful lives of the equipment, facility, or project involved. In general, if the study period is less than the useful life of an asset, then an estimate of its salvage value should be provided in the final period; if the study period is longer than the useful life, then estimates of cash flows are needed for subsequent replacements of the asset.

Whenever alternatives that have different lives are to be compared, the study period is usually one of the following:

1. The organization’s traditional planning horizon

2. The life of the shortest-lived alternative

3. The life of the longest-lived alternative

4. The lowest common multiple of the lives of the alternatives

When the study period for the alternatives is forced to be the same by using choice 1, 2, or 3 above, or for any other reason, the so-called co-terminated assumption is said to apply, and whatever cash flows are thought appropriate are considered within that study period. When the study period is chosen by choice 4 above, the alternatives normally are assumed to satisfy the following so-called repeatability assumptions.

1. The period of needed service is either indefinitely long or a common multiple of the lives.

2. What is estimated to happen in the first life cycle will happen in all succeeding life cycles, if any, for each alternative.

In the upcoming subsections that illustrate the various analytic methods, when alternatives have different lives and nothing is indicated to the contrary, the repeatability assumptions are used. These assumptions are commonly adopted for computational convenience. The decision maker must decide whether they are reasonable for the situation.

3.4 Equivalent Worth Methods For purposes of analysis, equivalent worth methods convert all relevant cash flows into equivalent (present, annual, or future) amounts using the MARR. If a single project is under consideration, then it is acceptable (earns at least the MARR) if its equivalent worth is greater than or equal to zero; otherwise, it is not acceptable. These methods all assume that recovered funds (net cash inflows) can be reinvested at the MARR.

If two or more mutually exclusive alternatives are being compared and receipts or savings (cash inflows) as well as costs (cash outflows) are known, then the project that has the highest net equivalent worth should be chosen, as long as that equivalent worth is greater than or equal to zero. If only costs are known or considered (assuming that all alternatives have the same benefits), then the project that has the lowest total equivalent of those costs should be chosen. Because all three equivalent worth methods give completely consistent results, the choice of which to use is a matter of computational convenience and preference for the form in which the results are expressed.

3.4.1 Present Worth Method PW denotes a lump-sum amount at some early point in time (often the present) that is equivalent to a particular schedule of receipts and/or disbursements under consideration. If receipts and disbursements are included in the analysis, PW can best be expressed as the difference between the present worth of benefits and the present value of costs, otherwise known as NPV.

Example 3-8 Consider the following two mutually exclusive alternatives and recommend which one (if either) should be implemented.

Machine                              A           B

Initial cost                   $20,000     $30,000
Life                           5 years    10 years
Salvage value                   $4,000           0
Annual receipts                $10,000     $14,000
Annual disbursements            $4,400      $8,600

Minimum acceptable rate of return = 15%
Assume 10-year study period and repeatability

Solution (using PW method)

Machine                                              A           B

Annual receipts:
  $10,000(P/A, 15%, 10)                        $50,188
  $14,000(P/A, 15%, 10)                                     $70,263
Salvage value at end of year 10:
  $4,000(P/F, 15%, 10)                            $989
Total PW of cash inflow                        $51,177     $70,263

Annual disbursements:
  $4,400(P/A, 15%, 10)                        −$22,083
  $8,600(P/A, 15%, 10)                                    −$43,162
Initial cost                                  −$20,000    −$30,000
Replacement: ($20,000 − $4,000)(P/F, 15%, 5)   −$7,955
Total PW of cash outflow                      −$50,038    −$73,162

Net PW (NPV)                                    $1,139     −$2,899

Thus project A has the higher NPV and represents the better economic choice. Since the NPV of project B is negative, a firm would never select project B in any case.
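The PW comparison above is easy to reproduce. The sketch below (an illustration, not the text's notation) implements the two standard factor formulas and recomputes the NPVs of machines A and B for the 10-year study period at MARR = 15%:

```python
# Sketch reproducing the PW comparison of Example 3-8 (10-year study
# period, repeatability, MARR = 15%). The factor functions are the
# standard compound-interest formulas.

def pf(i, n):
    """(P/F, i, n): present worth of a single amount n periods hence."""
    return (1 + i) ** -n

def pa(i, n):
    """(P/A, i, n): present worth of a uniform end-of-period series."""
    return (1 - (1 + i) ** -n) / i

i = 0.15

# Machine A: $20,000 first cost; replaced at year 5 for $20,000 − $4,000
# salvage; $4,000 salvage at year 10; net receipts $10,000 − $4,400/yr.
npv_a = (-20_000
         - (20_000 - 4_000) * pf(i, 5)     # replacement at end of year 5
         + (10_000 - 4_400) * pa(i, 10)    # net annual receipts
         + 4_000 * pf(i, 10))              # salvage at end of year 10

# Machine B: $30,000 first cost, 10-year life, no salvage,
# net receipts $14,000 − $8,600 per year.
npv_b = -30_000 + (14_000 - 8_600) * pa(i, 10)

print(f"NPV(A) = {npv_a:,.0f}")   # ≈ $1,139, matching the tabulated result
print(f"NPV(B) = {npv_b:,.0f}")   # ≈ −$2,899
```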

3.4.2 Annual Worth Method Annual worth (AW) is merely an “annualized” measure for assessing the financial desirability of a proposed undertaking. It is a uniform series of money over a certain period of time that is equivalent in amount to a particular schedule of receipts and/or disbursements under consideration. Any “period” can be used in the analysis, such as a month or a week. The word “annual” is used to represent a generic time period. If only disbursements are included, then the term is usually expressed as annual cost (AC) or equivalent uniform annual cost (EUAC). The examples in this section include both cash inflows and outflows.

Calculation of capital recovery cost The capital recovery (CR) cost for a project is the equivalent uniform annual cost of the capital that is invested. It is an annual amount that covers the following two items.

1. Depreciation (loss in value of the asset)

2. Interest (MARR) on invested capital

Consider an alternative requiring a lump-sum investment P and a salvage

value S at the end of n years. At interest rate i per year, the annual equivalent cost can be calculated as

CR=P( A/P, i, n )−S( A/F, i, n )

There are several other formulas for calculating the CR cost. Probably the most common is

CR=( P−S )( A/P, i, n )+Si
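The two CR formulas are algebraically identical, because (A/P, i, n) = (A/F, i, n) + i. A short numerical check (a sketch, reusing machine A's figures for illustration):

```python
# Check that P(A/P,i,n) − S(A/F,i,n) equals (P − S)(A/P,i,n) + S·i,
# which follows from the identity (A/P,i,n) = (A/F,i,n) + i.

def ap(i, n):
    """(A/P, i, n): capital recovery factor."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def af(i, n):
    """(A/F, i, n): sinking fund factor."""
    return i / ((1 + i) ** n - 1)

P, S, i, n = 20_000, 4_000, 0.15, 5   # machine A's data, for illustration

cr1 = P * ap(i, n) - S * af(i, n)
cr2 = (P - S) * ap(i, n) + S * i
print(round(cr1, 2), round(cr2, 2))   # both about 5,373
```

Either form therefore gives the same annual capital recovery cost; the second is often preferred because it separates the depreciable amount (P − S) from the interest carried on the salvage value.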

One might want to reverse signs so that a cost is negative, as is done in the following example, which includes CR costs in an AW comparison.

Example 3-9 Given the same machines A and B as used to demonstrate the net PW method in Example 3-8, we now compare them by the net AW method.

Machine                              A           B

Initial cost                   $20,000     $30,000
Life                           5 years    10 years
Salvage value                   $4,000           0
Annual receipts                $10,000     $14,000
Annual disbursements            $4,400      $8,600

Minimum acceptable rate of return = 15%
Assume repeatability

Solution (using AW method)

Machine                              A          B

Annual receipts                $10,000    $14,000
Annual disbursements           −$4,400    −$8,600
CR amounts:
  −$20,000(A/P, 15%, 5)        −$5,966
  +$4,000(A/F, 15%, 5)           +$593
  −$30,000(A/P, 15%, 10)                   −$5,978

Net AW                            $227      −$578

Thus project A, having the higher net annual worth (which is also greater than zero), is the better economic choice. A shortcut for calculating the net AWs, given the net PWs calculated in the preceding section, is

AW(A) = $1,139(A/P, 15%, 10) = $227
AW(B) = −$2,899(A/P, 15%, 10) = −$578

One significant computational shortcut when comparing alternatives with different lives by the PW method and assuming repeatability is first to calculate AWs as above and then calculate the PWs for the lowest common multiple-of-lives study period. Thus,

PW(A) = $227(P/A, 15%, 10) = $1,139
PW(B) = −$578(P/A, 15%, 10) = −$2,899

3.4.3 Future Worth Method

The future worth (FW) measure of merit is a lump-sum amount at the end of the study period which is equivalent to the cash flows under consideration.

Example 3-10 Given the same machines A and B (Examples 3-8 and 3-9), determine which is better on the basis of FW at the end of the 10-year study period.

Solution (using FW method) Rather than calculating FWs of all the types of cash flows involved (as was done for the PW solution above), shown below are shortcut solutions based on (a) PWs and (b) AWs calculated previously:

(a) FW(A) = $1,139(F/P, 15%, 10) = $4,608
    FW(B) = −$2,899(F/P, 15%, 10) = −$11,728

(b) FW(A) = $227(F/A, 15%, 10) = $4,608
    FW(B) = −$578(F/A, 15%, 10) = −$11,735

Not surprisingly, we have once again found that alternative A is preferred. The ratios of the numbers produced by each of the equivalent worth methods will always be the same. For machines A and B, FW( A )/FW( B )=PW( A )/PW( B )=AW( A )/AW( B )=−0.393.
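The shortcut conversions among the three equivalent worth measures can be sketched as follows (an illustration with machine A's figures; the factor formulas are standard, not a library API):

```python
# Once one equivalent worth is known, the others follow by a single
# factor multiplication. Values for machine A, MARR = 15%, 10-year
# study period.

i, n = 0.15, 10

pa = (1 - (1 + i) ** -n) / i          # (P/A, i, n)
ap = 1 / pa                           # (A/P, i, n)
fp = (1 + i) ** n                     # (F/P, i, n)
fa = ((1 + i) ** n - 1) / i           # (F/A, i, n)

pw = 1139.0                           # net PW of machine A from Example 3-8
aw = pw * ap                          # annualize the PW
fw1 = pw * fp                         # compound the PW forward 10 years
fw2 = aw * fa                         # or compound the AW forward

print(f"AW = {aw:.0f}, FW = {fw1:.0f} = {fw2:.0f}")
```

The two FW routes agree exactly because (F/A, i, n)/(P/A, i, n) = (F/P, i, n); this is why the three methods always rank alternatives identically.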

Example 3-11  

(Different Useful Lives: Fixed-Length Study Period)

Suppose that two measurement instruments are being considered for a certain industrial laboratory. Following are the principal cost data for one life cycle of each alternative:

Instrument                          M1         M2

Investment                     $15,000    $25,000
Life                           3 years    5 years
Salvage value                        0          0
Annual disbursements            $8,000     $5,000

Minimum acceptable rate of return = 20%
Assume no repeatability

Which instrument is preferred?

Solution The calculations will be done using the PW method and MARR=20% for the following two cases:

1. If the study period is taken to be 3 years, then we need a salvage value for alternative M2 at the end of the third year. Assuming it to be, say, $6,000, the following results are obtained:

Instrument                              M1         M2

Investment                         $15,000    $25,000
Annual disbursements:
  $8,000(P/A, 20%, 3)              $16,852
  $5,000(P/A, 20%, 3)                         $10,533
Salvage: −$6,000(P/F, 20%, 3)                 −$3,472

Net PW (NPV)                       $31,852    $32,061

Thus the first alternative is slightly better. Note that "+" is used for costs.

2. If the study period is taken to be 5 years, then we need estimates of what will happen after the first life cycle of alternative M1. Let us assume that it can be replaced at the beginning of the fourth year for $18,000 and that the annual disbursements will be $9,000 for years 4 and 5. Furthermore, it will have a $7,000 salvage value at the end of year 5. In this case, we obtain

Instrument                                          M1         M2

Investment                                     $15,000    $25,000
Annual disbursements:
  $8,000(P/A, 20%, 3)                          $16,852
  $9,000(P/A, 20%, 2)(P/F, 20%, 3)              $7,957
  $5,000(P/A, 20%, 5)                                     $14,953
Additional investment: $18,000(P/F, 20%, 3)    $10,417
Salvage: −$7,000(P/F, 20%, 5)                  −$2,813

Net PW (NPV)                                   $47,413    $39,953

Thus, alternative M2 has a slightly lower net PW and hence is better with the new assumption.

3.4.4 Discussion of Present Worth, Annual Worth, and Future Worth Methods Some academics and accountants assert that the net PW methods, and in particular the NPV criterion, should be used in all economic analyses. This prescription should be resisted. NPV (and its equivalents) provides a good comparison between projects only when they are strictly comparable in terms of level of investment or total budget. This condition is rarely met in the real world. The practical consequence is that NPVs are used primarily for the analysis of investments of specific sums of money, rather than for the evaluation of projects, which come in many different sizes.

The advantage of the net PW criteria is that they focus attention on quantity of money, which is what the evaluation is ultimately concerned with. Net PW, AW, and FW differ in this respect from the other criteria of evaluation, which rank projects by ratios and hence do not directly address the bottom- line question of maximizing profit.

One disadvantage of NPV is that its precise meaning is difficult to explain. NPV does not measure profit in any usual sense of the term. In ordinary language, profit is the difference between what we receive and what we pay out. As an example, consider an investment now for a lump sum of revenue later. In crude terms,

profit = money received−money invested

More precisely, if we had to borrow money to make the original investment, then the profit would be net of interest paid for n periods:

profit = money received−( money invested )( F/P, i, n )

where i is the interest rate. This profit can also be placed in present value terms using the appropriate MARR for the organization concerned. Note that it is now important to make the distinction between the MARR and the interest rate.

present value of profit = (money received)(P/F, MARR, n) − (money invested)(F/P, i, n)(P/F, MARR, n)

In the last calculation, it turns out that because the MARR is not, in general, equal to the interest rate, NPV≠present value of profit. Thus even when NPV equals zero, a project may be profitable, as understood in common language. A project with NPV=0 is simply not advantageous compared with other alternatives available to the organization. NPV thus indicates “extra profitability” beyond the minimum.
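A small numerical sketch (hypothetical figures, chosen only to illustrate the distinction drawn above) shows how NPV and the present value of profit diverge whenever the borrowing rate differs from the MARR:

```python
# Hypothetical illustration: borrow `invested` at rate i, receive
# `received` after n years, discount at the MARR. NPV and the present
# value of profit differ because i != MARR.

invested, received = 1000.0, 1500.0   # assumed amounts
n = 3
i = 0.08          # borrowing interest rate (assumed)
marr = 0.15       # organization's MARR (assumed)

npv = received * (1 + marr) ** -n - invested
pv_profit = (received - invested * (1 + i) ** n) * (1 + marr) ** -n

print(f"NPV          = {npv:8.2f}")        # slightly negative
print(f"PV of profit = {pv_profit:8.2f}")  # positive
```

Here the project is profitable in the ordinary sense (it repays the loan with money to spare), yet its NPV is negative: it falls short of the 15% implicitly available elsewhere.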

Another difficulty with the net PW criteria is that they give no indication of the scale of effort required to achieve the result. To see this, consider the problem of evaluating projects P1 and P2 below.

Project        Benefit          Cost
P1          $2,002,000    $2,000,000
P2              $2,000        $1,000

If one considers only NPV, then project P1 seems better. Most investors would consider that an absurd choice, however, because of the difference in scale between the projects. Taking scale into account, P2 presumably gives a much better return than P1: the money saved by investing in the former rather than the latter can be invested elsewhere for a return greater than that offered by P1. In any case, NPV by itself is not a good criterion for ranking projects.

Formally, the essential conditions for net worth to be an appropriate criterion for the evaluation and ranking of projects are that:

we have a fixed budget to invest

projects require the same investment

These conditions do not hold with any regularity. On the contrary, it is most often the case that the list of projects consists of a variety of possibilities with varying costs. A central problem in the evaluation and choice of systems is to delimit their size and budget. Analysis of net worth is not particularly helpful in those contexts.

3.4.5 Internal Rate of Return Method The internal rate of return (IRR) method involves the calculation of an interest rate that is compared against a minimum threshold (i.e., the MARR). As we will see, the IRR is the interest rate for which the NPV of a project is zero. Conceptually, the IRR expresses the real rate of return earned on an investment. For evaluation, the idea is that projects should be ranked from the highest IRR down.

The IRR is now used increasingly by sophisticated business analysts. The advantage of this criterion is that it overcomes two difficulties inherent in the calculation of both NPV and benefit-cost ratios. That is:

1. It eliminates the need to determine the appropriate MARR.

2. Its rankings cannot be manipulated by the choice of a MARR.

It also focuses attention directly on the rate of return of each project, an attribute that cannot be understood from either the net present value or the benefit-cost ratio.

The IRR is known by other names, such as investor’s rate of return, discounted cash flow return, and so on. We will demonstrate its use for a single project and then for the comparison of mutually exclusive projects.

IRR method for single project The most common method of calculation of the IRR for a single project involves finding the interest rate, i, at which the PW of the cash inflow (receipts or cash savings) equals the PW of the cash outflow (disbursements or cash savings foregone). That is, one finds the interest rate at which PW of cash inflow equals PW of cash outflow; or at which PW of cash inflow minus PW of cash outflow equals 0; or at which PW of net cash flow equals 0. The IRR could also be calculated by using the same procedures applied to either AW or FW.

The calculations normally involve trial and error until the correct interest rate is found or can be interpolated. Closed-form solutions are not available because the equivalent worth factors are a nonlinear function of the interest rate. The procedure is described below for several situations. (When both cash inflows and outflows are involved, the convention of using a "+" sign for inflows and a "−" sign for outflows will be followed.)

Example 3-12 Given the same machine A as in Section 3.4.1, find the IRR and compare it with a MARR of 15%.

Machine A

Initial cost              $20,000
Life                      5 years
Salvage value              $4,000
Annual receipts           $10,000
Annual disbursements       $4,400

Solution

Expressing the NPV of cash flow and setting it equal to zero results in the following:

NPV( i )=−$20,000+( $10,000−$4,400 )( P/A, i, 5 )+$4,000( P/F, i, 5 )=0

Try i=10%

NPV( 10% )=−$20,000+$5,600( P/A, 10%, 5 )+$4,000( P/F, 10%, 5 ) =$3,713>0

Try i=15%

NPV( 15% )=−$20,000+$5,600( P/A, 15%, 5 )+$4,000( P/F, 15%, 5 ) =$730>0

Try i=20%

NPV( 20% )=−$20,000+$5,600( P/A, 20%, 5 )+$4,000( P/F, 20%, 5 ) = −$1,196<0

Because we have both a positive and a negative NPV, the desired answer is bracketed. Linear interpolation can be used to approximate the unknown interest rate, i, as follows:

(i − 15%)/(20% − 15%) = ($730 − 0)/[$730 − (−$1,196)]

so

i = 15% + [$730/($730 + $1,196)](20% − 15%)

Solving gives i = 16.9%.¹ Because 16.9% is greater than the MARR of 15%, the project is justified. A plot of NPV versus interest rate is given in Figure 3.5.

¹ A more exact calculation gives i = 16.47%, but we use 16.9% for the remainder of the chapter.

Figure 3.5 Relationship between NPV and IRR for Example 3-12.

Because the P/A and P/F factors are nonlinear functions of the interest rate, the linear interpolation (above) causes an error, but the error is usually inconsequential in economic analyses. The narrower the range of rates over which the interpolation is done, the more accurate are the results. Finally note that as the trial interest rate is increased, the corresponding NPV decreases.
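The trial-and-error search of Example 3-12 is easily automated. The sketch below (an illustration, not the text's procedure) brackets the root with two trial rates and then refines it by bisection, which avoids the interpolation error just described:

```python
# IRR search for machine A of Example 3-12 by bisection.

def npv(i):
    """Machine A: −$20,000 now, $5,600/yr net for 5 years, $4,000 salvage."""
    pa = (1 - (1 + i) ** -5) / i      # (P/A, i, 5)
    pf = (1 + i) ** -5                # (P/F, i, 5)
    return -20_000 + 5_600 * pa + 4_000 * pf

# Trial rates that bracket the root: NPV(10%) > 0, NPV(20%) < 0.
lo, hi = 0.10, 0.20
for _ in range(60):
    mid = (lo + hi) / 2
    if npv(mid) > 0:
        lo = mid
    else:
        hi = mid
print(f"IRR ≈ {lo:.2%}")   # about 16.5%, vs. 16.9% by coarse interpolation
```

Narrowing the bracket repeatedly is simply a systematic version of interpolating over ever-smaller ranges of rates.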

IRR Method for Comparing Mutually Exclusive Alternatives When comparing alternatives by any rate of return (ROR) method in a situation where at most one alternative will be chosen, there are three main principles to keep in mind:

1. Any alternative whose IRR is less than the MARR can be discarded immediately.

2. Each increment of investment capital must justify itself (by sufficient ROR on that increment).

3. Compare a higher investment alternative against a lower investment alternative only if that lower investment alternative is justified.

The usual approach when using a ROR method is to choose the alternative that requires the highest investment for which each increment of investment capital is justified. This choice assumes that the organization wants to invest any capital needed as long as the capital is justified by earning a sufficient ROR on each increment of capital. In general, a sufficient ROR is any value greater than or equal to the MARR. The IRR on the incremental investment for any two alternatives can be found by:

1. finding the rate at which the PW (or AW or FW) of the net cash flow for the difference between the two alternatives is equal to zero or

2. finding the rate at which the PWs (or AWs or FWs) of the two alternatives are equal.

Example 3-13 Suppose that we have the same machines, A and B, as considered in Section 3.4.1. In addition, machines C and D are mutually exclusive alternatives also to be included in the comparison by the IRR method. Relevant data and the solution are presented below. Repeatability of the alternatives is assumed.

Machine                                    A           B           C           D

Initial cost                         $20,000     $30,000     $35,000     $43,000
Life                                 5 years    10 years     5 years     5 years
Salvage value                         $4,000           0      $4,000      $5,000
Annual receipts                      $10,000     $14,000     $20,000           —
Annual disbursements                  $4,400      $8,600      $9,390           —
Net annual receipts − disbursements   $5,600      $5,400     $10,610     $12,750
IRR                                    16.9%       12.4%       17.9%           —

Solution As a first step, it is best to arrange the alternatives in order of increasing initial investment because this is the order in which the increments will be considered. The symbol Δ means “increment,” and A→B means “the increment in going from alternative A to alternative B.” Recall that an increment of investment is justified if the IRR on that increment (i.e., ΔIRR ) is ≥15%. The least expensive alternative is always compared with the “do nothing” option.

Increment                                    A        A→B†       A→C       C→D

ΔInvestment                            $20,000     $10,000   $15,000    $8,000
ΔSalvage                                $4,000     −$4,000        $0    $1,000
Δ(annual receipts − disbursements)      $5,600       −$200    $5,010    $2,140
ΔIRR                                     16.9%          0%       20%     13.3%
Is ΔInvestment justified?                  Yes          No       Yes        No

†Analysis must include $16,000 replacement cost for alternative A at end of year 5.

The analysis indicates that alternative C would be chosen because it is associated with the largest investment for which each increment of investment capital is justified. The analysis was performed without considering the IRR on the total investment for each alternative. However, when we look at the individual IRRs, we see that IRR(B) = 12.4% < 15% = MARR, so alternative B could have been discarded at the outset.

In choosing alternative C, each increment of investment was justified as follows:

Increment            Incremental investment    IRR on increment, ΔIRR (%)
A                                   $20,000                          16.9
A→C                                 $15,000                          20.0
Total investment                    $35,000

Coincidentally, alternative C had the largest IRR, which seems intuitive but is not always the case. If the MARR were, say, 12%, then alternative D would have been selected. As a general rule, if the most expensive alternative has the highest IRR (and that IRR is at least the MARR), it will turn out to be preferred.

In Example 3-13, because the useful lives of A and B are different and repeatability is assumed, one should closely examine the cash flows for A→B (B minus A) over the lowest common multiple of lives. For the 10-year period, Σ(positive cash flows) = $16,000 (the avoided replacement cost of A at the end of year 5), while Σ(negative cash flows) = $10,000 + $4,000 + 10($200) = $16,000. Because the inflows exactly equal the outflows, ΔIRR = 0%; any i > 0 would produce a negative NPV.
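The incremental cash flow just described can be checked numerically. A minimal sketch of the B-minus-A increments over the 10-year period:

```python
# A→B (B minus A) increments over the 10-year common-multiple period:
# extra $10,000 invested now, $200/yr lower net receipts for 10 years,
# A's year-5 replacement ($16,000) avoided, A's year-10 salvage
# ($4,000) foregone.

flows = {0: -10_000, 5: +16_000, 10: -4_000}
for t in range(1, 11):
    flows[t] = flows.get(t, 0) - 200   # −$200 each of years 1..10

def npv(i):
    return sum(cf * (1 + i) ** -t for t, cf in flows.items())

print(npv(0.0))    # 0: inflows equal outflows, so the incremental IRR is 0%
print(npv(0.05))   # negative, as it is for any i > 0
```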

Occasionally, situations arise in which a single positive interest rate cannot be determined from the cash flow; that is, solving for NPV=0 yields more than one solution. Descartes’s rule of signs indicates that multiple solutions can occur whenever the cash flow series reverses sign (from net outflow to net inflow, or vice versa) more than once over the study period. This is demonstrated in the following example.

Example 3-14

(No Single IRR Solution)

The Converse Aircraft Company has an opportunity to supply a wide-body airplane to Banzai Airlines. Banzai will pay $19 million when the contract is signed and $10 million one year later. Converse estimates net cash outflows of $50 million in each of years 2 and 3 during production. Banzai will take delivery of the plane during year 4 and agrees to pay $20 million at the end of that year and the $60 million balance at the end of year 5. Compute the ROR on this project.

Solution Computation of NPV at various interest rates uses single-payment PW factors; for example, for year 2 at i = 10%, PW = −50(P/F, 10%, 2) = −50(0.826) = −41.3. All amounts are in $ millions.

Year   Cash flow       0%       10%       20%       40%       50%
0            +19      +19       +19       +19       +19       +19
1            +10      +10      +9.1      +8.3      +7.1      +6.7
2            −50      −50     −41.3     −34.7     −25.5     −22.2
3            −50      −50     −37.6     −28.9     −18.2     −14.8
4            +20      +20     +13.7      +9.6      +5.2      +4.0
5            +60      +60     +37.3     +24.1     +11.2      +7.9

NPV                    +9      +0.2      −2.6      −1.2      +0.6

The NPV plot for these data is depicted in Figure 3.6. We see that the cash flow produces two points at which NPV=0; one at approximately 10.1% and the other at approximately 47%. Whenever multiple answers such as these exist, it is likely that neither is correct.

Figure 3.6 NPV plot for more than one change in sign.

An effective way to overcome this difficulty and obtain a “correct” answer is to manipulate cash flows as little as necessary so that there is only one sign reversal in the net cash flow stream. This can be done by using an appropriate interest rate to move lump sums either forward or backward, and then solve in the usual manner. To demonstrate, let us assume that all money held outside the project earns 6%. (This value could be considered the external interest rate that Converse faces. If it had to borrow money, the interest rate might be different.) At both year 0 and year 1, there is an inflow of cash resulting from the advance payments by Banzai. The money will be needed later to help pay the production costs. Given an external interest rate of 6%, the $19 million will be invested for 2 years and the $10 million for 1 year. Their compounded amount at the end of year 2 will be

FW at end of year 2 = 19(F/P, 6%, 2) + 10(F/P, 6%, 1) = 19(1.124) + 10(1.06) = 32

When this amount is returned to the project, the net cash flow for year 2 becomes −50+32=−18. The resulting cash flow for the 5 years is:

Year   Cash flow      0%       8%      10%
  0        0           0        0        0
  1        0           0        0        0
  2      −18         −18    −15.4    −14.9
  3      −50         −50    −39.7    −37.6
  4      +20         +20    +14.7    +13.7
  5      +60         +60    +40.8    +37.3
                    ----   ------   ------
            NPV =    +12     +0.4     −1.5

This cash flow stream has one sign change, indicating that there is either zero or one positive interest rate. By interpolation, we can find the point where NPV=0:

i = 8% + 2%[0.4/(1.5 + 0.4)] = 8% + 2%(0.21) = 8.42%

Thus, assuming an external interest rate of 6%, the internal rate of return for the Banzai plane contract is 8.42%.
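The procedure can be sketched in a few lines of Python. The 6% external rate is the example's assumption; because the code uses exact factors rather than three-digit table values, the bisection converges to roughly 8.4%, a shade below the text's interpolated 8.42%:

```python
# Move the year-0 and year-1 receipts to the end of year 2 at an
# assumed 6% external rate, leaving one sign change, then bisect
# for the rate at which NPV = 0 (values in $M).
def npv(rate, flows):
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

external = 0.06
moved = 19 * (1 + external) ** 2 + 10 * (1 + external)   # about 31.9

flows = [0, 0, -50 + moved, -50, 20, 60]   # single sign change

lo, hi = 0.0, 1.0                 # NPV is positive at lo, negative at hi
for _ in range(60):
    mid = (lo + hi) / 2
    if npv(mid, flows) > 0:
        lo = mid
    else:
        hi = mid
print(f"IRR = {mid:.2%}")
```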

In many situations, we are asked to compare and rank independent investment opportunities rather than a set of mutually exclusive alternatives designed to meet the same need. Portfolio analysis is such an example in which the firm is considering a number of different R&D projects and must evaluate the costs and benefits of each. Here, the IRR method will always give results that are consistent (regarding project acceptance or rejection) with those obtained from the PW, AW, or FW method. However, the IRR method may give a different ranking regarding the order of desirability when comparing independent investment opportunities.

As an example, consider Figure 3.7, depicting the relation of IRR to NPV for two projects, X and Y. The IRR for each project is the interest rate at which the NPV for that project is zero. This is shown for a nominal MARR. For the

hypothetical but quite feasible relationship shown in Figure 3.7, project Y has the higher IRR, whereas project X has the higher NPV at all interest rates below the rate at which the two net present values are equal, including the MARR shown. This illustrates the case in which the IRR method results in a different ranking of alternatives than the PW (AW or FW) method. Nevertheless, because both projects have an NPV greater than zero, the IRR of each is greater than the MARR, so both methods consistently indicate that both projects should be accepted. It should be noted that if X and Y had been mutually exclusive alternatives, then there would have been no inconsistency regarding which to choose, provided that an incremental IRR analysis was performed.

Figure 3.7 Relationship between NPV and IRR for independent investments.

3.4.6 Payback Period Method

In its simplest form, the payback period is the number of periods, usually measured in years, required for the accruing net undiscounted benefits from an investment to equal its cost. If we assume that the benefits are equal in each future year and that depreciation and income taxes are not included in the calculations, the formula is

payback period = (initial investment)/(annual net undiscounted benefits)

When the benefits differ from year to year, it is necessary to find the smallest value of n such that

∑_{j=1}^{n} B_j ≥ P

where P is the initial investment and B_j is the annual net benefit in year j.

Example 3-15 The cash flows for two alternatives are as follows:

                                 Year
Alternative      0        1        2        3        4        5
    A        −$2,700   +1,200   +1,200   +1,200   +1,200   +1,200
    B        −$1,000     +200     +200   +1,200   +1,200   +1,200

On the basis of the payback period, which alternative is best?

Solution Alternative A: Because the annual benefits are uniform, the payback period can be computed from the first formula in this section; that is,

$2,700 / ($1,200/yr) = 2.25 years

Alternative B: The payback period is the length of time required for profits or other benefits of an investment to equal the cost of the investment. In the first 2 years, only $400 of the $1,000 cost is recovered. The remaining $600 is recovered in the first half of the third year. Thus the answer is 2.5 years.

Therefore, to minimize the payback period, choose alternative A.
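For uneven benefit streams, the payback computation with interpolation inside the recovery year can be sketched as follows (a minimal illustration of Example 3-15, not a library routine):

```python
# Payback period with linear interpolation inside the year in which
# the cumulative benefits first cover the investment.
def payback(investment, benefits):
    recovered = 0.0
    for year, b in enumerate(benefits, start=1):
        if recovered + b >= investment:
            # fraction of this year needed to recover the remainder
            return year - 1 + (investment - recovered) / b
        recovered += b
    return None   # investment never recovered

print(payback(2700, [1200] * 5))                     # alternative A: 2.25
print(payback(1000, [200, 200, 1200, 1200, 1200]))   # alternative B: 2.5
```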

The great advantage of the payback period is that it is simple. It thus is an excellent mechanism for allowing middle managers and technical staff to choose among proposals without going through a detailed analysis or to sort through many possibilities before resorting to a more sophisticated approach.

Situations that are suitable for the use of the payback period are often found in industry. These are projects in which a constant benefit is expected to accrue for an extended period as a result of a particular investment. A typical case would be the purchase of a new robot that would reduce operating expenses each year by a fixed amount, or some insulation or control that would regularly save on energy bills.

The weakness of this criterion is that it is crude; it does not clearly distinguish between projects with different useful lives. For any projects with identical useful lives, for which the capital recovery factor will be identical, the payback period gives as good a measure of economic desirability as the NPV or IRR. When the useful lives of projects are different, the capital recovery factors are not the same and the results can be highly misleading, as the following analysis shows:

                         P1          P2
Investment             $2,000      $2,000
Useful life           3 years     6 years
Annual receipts        $1,000        $800
Payback period        2 years   2.5 years
NPV at 10%               $487      $1,484
IRR                     23.4%       32.7%

In this example, project P1 has a shorter payback period than the alternative P2 and would seem better by this criterion, yet project P2 is, in fact, more economically desirable for a wide range of discount rates. This is because P2 provides substantial benefits over a much longer period. Thus over a 6-year cycle, P1 would have to be repeated twice for a total cost of $4,000 and benefits of $6,000, whereas P2 would cost only $2,000 and yield returns of $4,800: greater net benefits and a higher NPV at any reasonable discount rate.

3.5 Sensitivity and Breakeven Analysis

Much of the data collected in solving a business or engineering problem represent projections of future consequences and hence may possess a high degree of uncertainty. Because the desired result of the analysis is a decision, an appropriate question is: “To what extent do variations in the data affect the decision?” When small variations in a particular estimate would change the alternative selected, the decision is said to be sensitive to that estimate. To better evaluate the impact of any parameter, one should determine the amount of variation necessary in it to effect a change in outcome. This is called sensitivity analysis.

This type of analysis highlights the important and significant aspects of a problem. For example, one might be concerned that the estimates for annual maintenance and future salvage value in a facility modernization project vary substantially, depending on the assumptions used. Sensitivity analysis might indicate, however, that the decision is insensitive to the salvage value estimates over the full range of possibilities. At the same time it might show that small changes in annual maintenance expenditures strongly influence the choice of equipment. Under these circumstances, one should place greater emphasis on pinning down the true maintenance costs than on worrying about salvage value estimates.

Succinctly, sensitivity analysis describes the relative magnitude of a particular variation in one or more elements of a problem that is sufficient to alter a particular decision. Closely related is breakeven analysis, which determines the conditions under which two alternatives are equivalent. These two evaluation techniques are frequently useful in the class of engineering problems called stage construction: should a facility be constructed now to meet its future full-scale requirements, or should it be constructed in stages as the need for increased capacity arises? Three examples of this situation are as follows:

Should we install a cable with 400 circuits now, or a 200-circuit cable now and another 200-circuit cable later?

A 10-cm water main is needed to serve a new area of homes. Should the 10-cm main be installed now, or should a 15-cm main be installed to provide an adequate water supply later for adjoining areas when other homes are built?

An industrial firm currently needs a 10,000-m² warehouse and estimates that it will need an additional 10,000 m² in 4 years. The firm could have a warehouse built now and later enlarged, or have a 20,000-m² warehouse built today.

Examples 3-16 and 3-17, adapted from Newnan et al. (2000), illustrate the principles and calculations behind sensitivity and breakeven analysis.

Example 3-16 Consider the following situation in which a project may be constructed to full capacity now or may be undertaken in two stages.

Construction costs
  Two-stage construction
    Construct first stage now                  $100,000
    Construct second stage n years from now    $120,000
  Full-capacity construction                   $140,000

Other factors

1. All facilities will last until 40 years from now regardless of when they are installed; at that time, they will have zero salvage value.

2. The annual cost of operation and maintenance is the same for both alternatives.

3. Assume that the MARR is 8%.

Plot a graph showing “age when second stage is constructed” versus “costs for both alternatives.” Mark the breakeven point. What is the sensitivity of the decision to second-stage construction 16 or more years in the future?

Solution Because we are dealing with a common analysis period, the calculations may be either AC or PW. PW calculations seem simpler and are used here:

Construct full capacity now: PW of cost = $140,000

Two-stage construction In this alternative, the first stage is constructed now with the second stage to be constructed n years hence. To begin, compute the PW of cost for several values of n (years).

PW of cost = $100,000 + $120,000(P/F, 8%, n)

n = 5:   PW = $100,000 + $120,000(0.6806) = $181,700
n = 10:  PW = $100,000 + $120,000(0.4632) = $155,600
n = 20:  PW = $100,000 + $120,000(0.2145) = $125,700
n = 30:  PW = $100,000 + $120,000(0.0994) = $111,900

These data are plotted in Figure 3.8 in the form of a breakeven chart. The horizontal axis is the time when the second stage is constructed; the vertical axis represents PW. We see that the PW of cost for two-stage construction naturally decreases as the second stage is deferred. The one-stage construction (full capacity now) option is unaffected by the time variable and hence appears as a horizontal line on the graph.

Figure 3.8 Breakeven chart diagram for Example 3-16.


The breakeven point on the graph is the point at which both alternatives have equivalent costs. We see that if the second stage of two-stage construction is deferred for 15 years, the PW of that alternative (approximately $137,800) is essentially equal to the PW of full-capacity construction. Thus, year 15 is the breakeven point.

The plot also shows that if the second stage were needed before year 15, then one-stage construction, with its smaller PW of cost, would be preferred. If the second stage were not needed until after year 15, then the opposite is true.

The decision as to how to construct a project is sensitive to the age at which the second stage is needed only if the range of estimates includes 15 years. For example, if one estimated that the second-stage capacity would be needed sometime over the next 5 to 10 years, then the decision is insensitive to that estimate. The more economical thing to do is to build the full capacity now, but if demand for the second-stage capacity were between, say, years 12 and 18, then the decision would depend on the estimate of when full capacity would actually be needed.

One question posed by Example 3-16 is how sensitive the decision is to the need for the second stage at or beyond 16 years. The graph shows that the decision is insensitive: in all cases in which construction occurs in year 16 or later, two-stage construction has the lower PW of cost.
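The breakeven search itself is a one-liner once the PW of two-stage construction is expressed as a function of the deferral year n (MARR of 8%, as assumed in the example):

```python
# PW of two-stage construction as a function of the deferral year n,
# compared against the $140,000 one-stage cost (MARR = 8%).
def pw_two_stage(n, i=0.08):
    return 100_000 + 120_000 / (1 + i) ** n

breakeven = next(n for n in range(1, 41) if pw_two_stage(n) <= 140_000)
print(breakeven)   # first deferral year at which two-stage is cheaper
```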

Example 3-17 In this example, we have three mutually exclusive alternatives, each with a 20-year life and no salvage value. Assume that the MARR is 6% and that the cash flows are:

                            A        B        C
Initial cost             $2,000   $4,000   $5,000
Uniform annual benefit     $410     $639     $700

Calculating the NPV of each alternative gives

NPV = (uniform annual benefit)(P/A, 6%, 20) − initial cost
NPV(A) = $410(11.470) − $2,000 = $2,703
NPV(B) = $639(11.470) − $4,000 = $3,329
NPV(C) = $700(11.470) − $5,000 = $3,029

so alternative B is preferred. Now we would like to know how sensitive the decision is to the estimate of the initial cost of B. If B is preferred at an initial cost of $4,000, then it will continue to be preferred at any smaller value; but how much higher than $4,000 can the initial cost rise with B still the preferred alternative?

Solution  

The computations may be performed in several different ways. The first thing to note is that for the three alternatives, B will maximize NPV only as long as its NPV is greater than $3,029. Let X=initial cost of B. Thus, we have

NPV(B) = $639(11.470) − X > $3,029

or

X < $7,329 − $3,029 = $4,300

implying that B is the best alternative if its initial cost does not exceed $4,300. The breakeven chart for the problem is displayed in Figure 3.9. Because we are maximizing NPV, we see that B is preferred if its initial cost is less than $4,300. At an initial cost above this value, C is preferred. At the breakeven point, B and C are equally desirable. For the data given, alternative A is always inferior to alternative C.

Figure 3.9 Breakeven chart diagram for Example 3-17.
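The $4,300 threshold can be reproduced directly from the formulas above, using the text's tabulated factor (P/A, 6%, 20) = 11.470:

```python
# Highest initial cost at which B still has the largest NPV,
# using the text's factor (P/A, 6%, 20) = 11.470.
pa = 11.470
npv_a = 410 * pa - 2000          # alternative A
npv_c = 700 * pa - 5000          # alternative C, the runner-up
# B stays preferred while 639 * pa - X exceeds the best competitor
x_max = 639 * pa - max(npv_a, npv_c)
print(round(x_max))              # breakeven initial cost for B
```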


Sensitivity analysis and breakeven point calculations can be very useful in identifying how different estimates affect the decision. It must be recognized, however, that these calculations assume that all parameters except one are held constant, and that it is the sensitivity of the decision to that one parameter that is being evaluated.

3.6 Effect of Tax and Depreciation on Investment Decisions

The discussion thus far has treated investment earnings as cash flows implicitly net of tax consequences, because only the actual cash flow produced by an investment is relevant to the decision process. Earnings before depreciation and taxes do not represent the actual benefits realized by a firm. Consequently, the expected income from an investment must be adjusted to represent the true cash inflow before ranking can take place. Note that depreciation can be viewed as an expense and thus reduces gross income for tax purposes. The procedures and schedules used to compute depreciation in any year are promulgated by the Internal Revenue Service (IRS).

Assume that a machine that costs $10,000 has a useful life of 5 years and is expected to produce gross earnings of $4,000 each year. With straight-line depreciation [ amount per year=( initial cost−salvage value )/( useful life ) ], no salvage value, and a 40% tax rate, the annual cash flow in each of the 5 years will be

A. Gross earnings                 $4,000
B. Depreciation expense           $2,000
C. Taxable income (A − B)         $2,000
D. Taxes (40% of C)                 $800
E. Cash flow (A − D)              $3,200

Now, if the MARR for the firm is 10%, then the NPV of the investment is

$3,200(P/A, 10%, 5) − $10,000 = $3,200(3.791) − $10,000 = $2,131

which makes it worthwhile.
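The same after-tax calculation, written out step by step with exact discount factors (the result matches the text's $2,131 to within rounding):

```python
# After-tax cash flow and NPV for the $10,000 machine: straight-line
# depreciation, no salvage, 40% tax, MARR = 10%.
cost, life, salvage = 10_000, 5, 0
gross, tax_rate, marr = 4_000, 0.40, 0.10

dep = (cost - salvage) / life            # depreciation charge per year
taxable = gross - dep                    # taxable income
cash_flow = gross - tax_rate * taxable   # after-tax cash flow

npv = sum(cash_flow / (1 + marr) ** t for t in range(1, life + 1)) - cost
print(f"annual cash flow = {cash_flow:,.0f}   NPV = {npv:,.0f}")
```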

Income tax rates are specified differently for individuals and corporations, and depend on the level of income. Most countries have what is called a progressive tax system, in which the more money you make, the higher the tax rate on the additional income. In such a system, income brackets and corresponding tax rates are defined. Each dollar earned within a bracket, after accounting for deductions, is taxed at the corresponding rate. In 2004, in the United States, all individual income over $297,374 was taxed at the rate of 39.1%, the highest bracket. For corporations, the situation is a bit more complicated, but all income over $15 million was taxed at 38%.

The rationale for a progressive tax system is based on what economists call the marginal utility of the last dollar earned. If someone is poor and struggling to pay for basic necessities such as food and housing, then an extra dollar or an extra $100 probably means a lot to him or her. For a wealthy person, an extra $100 might be the equivalent of pocket change. Therefore, “removing” $39 of the $100 from someone who makes $300,000 per year should have much less of an impact on that person than on someone who makes only $25,000 per year. In fact, one could argue, as do the proponents of the system, that the amount that should be removed from the lower wage earner to achieve an equivalent impact is roughly $15, or 15%, the current tax bracket for $25,000. As the argument goes, the wealthier you are, the less you should miss the additional dollars earned, so taxing them at a progressively higher rate is reasonable. At some point, though, this argument breaks down because the system becomes confiscatory. This was realized in the U.S. in the mid-1960s, when the highest marginal rate peaked at 90%. Since then, the U.S. Congress has been steadily lowering all brackets for both economic and political reasons.

It should be mentioned that profits realized on the sale of assets such as stocks, homes, antiques, businesses, and equipment are not taxed as income but as capital gains. The capital gains tax rate is flat, so everyone pays the same percentage on their net profits. Losses can be balanced against gains in any given year, so only the net amount counts in computing your taxes.

When determining a depreciation allowance on an asset, it is necessary to use the method prescribed by the IRS. In the past, straight-line, sum-of-the-years-digits (SOYD), and declining balance were the common methods. For all assets put into productive service in recent years, the modified accelerated cost recovery system (MACRS) must be used. This system assigns all property to a handful of classes distinguished by their tax life. For example, computers are given a 3-year life, whereas nonresidential real property is given a 31.5-year life. Depreciation is calculated as a percentage of the initial cost. The MACRS percentages for the 3-year class are 33.33%, 44.45%, 14.81%, and 7.41%; that is, a 3-year asset is actually depreciated over four years according to this schedule. For the 5-year class, the percentages are 20%, 32%, 19.2%, 11.52%*, 11.52%, and 5.76%. The 3-, 5-, 7-, and 10-year classes are based on double declining balance depreciation with conversion to the straight-line method in the year marked (*) to maximize the deduction.
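Because MACRS deductions are simple percentages of the initial basis, a schedule is easy to generate. The sketch below applies the 5-year-class percentages quoted above to a hypothetical $10,000 asset:

```python
# Yearly MACRS deductions for a hypothetical $10,000 asset in the
# 5-year class, using the percentages quoted in the text.
macrs_5yr = [0.20, 0.32, 0.192, 0.1152, 0.1152, 0.0576]

basis = 10_000
schedule = [round(basis * p, 2) for p in macrs_5yr]
print(schedule)   # the six deductions sum to the full basis
```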

3.6.1 Capital Expansion Decision

Example 3-18 The Leeds Corporation leases plant facilities in which expendable thermocouples are manufactured. Because of rising demand, Leeds could increase sales by investing in new equipment to expand output. The selling price of $10 per thermocouple will remain unchanged if output and sales increase. On the basis of engineering and cost estimates, the accounting department provides management with the following cost estimates based on an annual increased output of 100,000 units.

Cost of new equipment having an expected life of 5 years    $500,000
Equipment installation cost                                  $20,000
Expected salvage value                                             0
New operation’s share of annual lease expense                $10,000
Annual increase in utility expenses                          $40,000
Annual increase in labor costs                              $160,000
Annual additional cost for raw materials                    $400,000

The SOYD method of depreciation will be used, and taxes are paid at a rate of 40%. Mr. Leeds’s policy is not to invest capital in projects that earn less than a 20% ROR. Should the proposed expansion be undertaken?

Solution Compute cost of investment:

Acquisition cost of equipment    $500,000
Equipment installation costs      $20,000
Total cost of investment         $520,000

Determine yearly cash flows throughout the life of the investment. The lease expense is a sunk cost. It will be incurred regardless of whether the investment is made and therefore is irrelevant to the decision and should be disregarded. Annual production expenses to be considered are utility, labor, and raw materials. These total $600,000 per year. Annual sales revenue is $10×100,000 units of output, or $1,000,000. Yearly income before depreciation and taxes thus is $1,000,000 gross revenue less $600,000 expenses, or $400,000.

Determine the depreciation charges to be deducted from the $400,000 income each year using the SOYD method (∑ = 1 + 2 + 3 + 4 + 5 = 15). With SOYD, the depreciation in year j is (initial cost − salvage value) × (N − j + 1)/∑ for j = 1, …, N.

Year   Proportion of $500,000 to be depreciated   Depreciation charge
 1               5/15 × $500,000                     = $166,667
 2               4/15 × $500,000                     = $133,333
 3               3/15 × $500,000                     = $100,000
 4               2/15 × $500,000                     =  $66,667
 5               1/15 × $500,000                     =  $33,333
          Accumulated depreciation                   = $500,000

Find each year’s cash flow when taxes are 40%. The cash flow for the first year is illustrated:

Earnings before depreciation and taxes    $400,000
Depreciation expense                      $166,667
Taxable income                            $233,333
Taxes (0.4 × $233,333)                    −$93,333
Cash flow (first year)                    $306,667

Determine present value of the cash flows. Because Leeds demands at least a 20% ROR on investments, multiply the cash flows by the 20% present value factor (P/F, 20%, j) for each year j.

Year   Present-value factor        Cash flow        Present value
 1           0.833          ×      $306,667    =      $255,454
 2           0.694          ×      $293,333    =      $203,573
 3           0.579          ×      $280,000    =      $162,120
 4           0.482          ×      $266,667    =      $128,533
 5           0.402          ×      $253,334    =      $101,840

Total present value of cash flows (discounted at 20%) = $851,520

Find whether NPV is positive or negative:

Total present value of cash flows    $851,520
Total cost of investment             $520,000
NPV                                  $331,520

Decision Net present value is positive when returns are discounted at 20%. Therefore, the proposed expansion should be undertaken.
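Example 3-18 can be reproduced compactly. As in the text, only the $500,000 equipment cost is depreciated; exact 20% discount factors are used, so the NPV differs from the tabulated $331,520 only by factor rounding:

```python
# Example 3-18: SOYD depreciation on the $500,000 equipment (the
# $20,000 installation is not depreciated here, as in the text),
# 40% tax, cash flows discounted at the 20% hurdle rate.
equip, install, life = 500_000, 20_000, 5
income, tax_rate, marr = 400_000, 0.40, 0.20

soyd = life * (life + 1) // 2                # 1 + 2 + ... + 5 = 15
pv = 0.0
for j in range(1, life + 1):
    dep = equip * (life - j + 1) / soyd      # SOYD charge in year j
    cash = income - tax_rate * (income - dep)
    pv += cash / (1 + marr) ** j

print(f"NPV = {pv - (equip + install):,.0f}")
```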

3.6.2 Replacement Decision

We now consider fixed assets, such as equipment or buildings, and ask whether they should be replaced. The normal means of monitoring expenditures, in industry as well as in government, is the annual budget. One important factor in budgeting is the allocation of money for new capital expenditures, either new facilities or the replacement and upgrading of current facilities. Existing assets are replaced for many reasons, including deterioration, reduced performance, new requirements, increasing operations and maintenance (O&M) costs, reduced reliability, obsolescence, and more attractive leasing options. In each of these cases, the ability of the current asset to produce a desired output at the lowest cost is challenged. This adversarial situation has given rise to the terms defender (the existing asset) and challenger (the potential replacement).

Example 3-19 For 5 years Emetic Pharmaceuticals has been using a machine that attaches labels to bottles. The machine was purchased for $4,000 and is being depreciated over 10 years to a zero salvage value using the straight-line method. The machine can be sold now for $2,000. Emetic can buy a new labeling machine for $6,000 that will have a useful life of five years and cut labor costs by $1,200 annually. The old machine will require a major overhaul in the next few months. The cost of the overhaul is expected to be $300. If purchased, the new machine will be depreciated over five years to a $500 salvage value using the straight-line method. The company will invest in any project that earns more than the 12% cost of capital. Its tax rate is 40%. Should Emetic invest in the new machine?

Solution Determine the cost of investment:

Price of the new machine             $6,000
Less: Sale of old machine            $2,000
      Avoidable overhaul costs         $300
Total deductions                    −$2,300
Effective cost of investment         $3,700

Determine the increase in cash flow resulting from investment in the new machine:

Yearly cost savings =$1,200.

Differential depreciation:

Annual depreciation on old machine:

(cost − salvage)/(useful life) = ($4,000 − $0)/10 = $400

Annual depreciation on new machine:

(cost − salvage)/(useful life) = ($6,000 − $500)/5 = $1,100

Differential depreciation=$1,100−$400=$700

Yearly net increase in cash flow into the firm:

Cost savings                                              $1,200
Deduct: Taxes at 40%                                        $480
Add: Advantage of increase in depreciation (0.4 × $700)     $280
Net deductions                                             −$200
Yearly increase in cash flow                              $1,000

Determine the total present value of the investment:

The 5-year cash flow of $1,000 per year is an annuity.

Discounted at 12%, the cost of capital, the present value is $1,000×3.605=$3,605.

The present value of the new machine, if sold at its salvage value of $500 at the end of the fifth year, is $500 × 0.567 = $284.

Total present value of the expected cash flows: $3,605+$284=$3,889

Determine whether the NPV is positive:

Total present value    $3,889
Cost of investment     $3,700
NPV                      $189

Decision Emetic Pharmaceuticals should make the purchase because the investment will return slightly more than the cost of capital.

Note The importance of depreciation has been shown in this example. The present value of the yearly cash flow resulting from operations is only

(cost savings − taxes)(PV factor) = ($1,200 − $480) × 3.605 = $2,596

This figure is $1,104 less than the $3,700 cost of the investment. Only a very large depreciation advantage makes this investment worthwhile. The total present value of the advantage is $1,009; that is,

(tax rate × differential depreciation)(PV factor) = (0.4 × $700) × 3.605 = $1,009

In this problem, we did a 5-year analysis based on the useful life of the new asset. In most situations, it is more appropriate first to determine the “life” of the competing assets. Types of asset lives include:

1. The physical life is the period until the asset is salvaged, scrapped, or torn down.

2. The accounting life or tax life is the time over which the asset is depreciated. It may or may not reflect the physical life.

3. Useful life is the time over which the asset will provide useful service.

4. Economic life is the number of years at which the equivalent uniform annual cost (EUAC) or net annual cost (NAC) of ownership is minimized.

It is often the case that the economic life is shorter than the physical or useful life of an asset as a result of increasing O&M costs in the later years of ownership. In a traditional replacement analysis, the economic lives of the defender and challenger along with the accompanying costs are used to make the decision. To conduct an analysis, let

N=useful life

P=investment at time 0

A(n) = net cost in year n (i.e., O&M cost less revenue)

S( n )=salvage value in year n

PA( n )=present worth of annual costs for n years

NAC( n )=net annual cost for n years

where

PA(n) = A(1)(P/F, i, 1) + A(2)(P/F, i, 2) + … + A(n)(P/F, i, n)
NAC(n) = [P + PA(n)](A/P, i, n) − S(n)(A/F, i, n)

The economic life is then argmin { NAC( n ) : n=1,…, N }; that is, the value of n that minimizes NAC(n).

Example 3-20

You have purchased a router for $15,000. The machine has a useful life of 10 years and will be depreciated to zero using the SOYD method over that period of time. Assume that the salvage in year n is equal to the book value. In the first year, operating costs are expected to be $500, increasing by 40% in each subsequent year. If your MARR is 18%, then what is the economic life of the router?

Solution The following table lists the relevant data for n = 1, …, 10. Computing the equivalent annual cost NAC(n) of keeping the router for n years, the minimum occurs at year 6, which is therefore the economic life.

Time, n   Operating cost, A(n)   Salvage value, S(n)
   0                                   $15,000
   1            $500                   $12,273
   2            $700                    $9,818
   3            $980                    $7,636
   4          $1,372                    $5,727
   5          $1,921                    $4,091
   6          $2,689                    $2,727
   7          $3,765                    $1,636
   8          $5,271                      $818
   9          $7,379                      $273
  10         $10,331                         0
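A small script confirms the economic life. It rebuilds the SOYD book values as salvage estimates, accumulates the discounted operating costs, and minimizes NAC(n) over n = 1, …, 10, using the standard factor formulas and the given 18% MARR:

```python
# Example 3-20: find the economic life of the $15,000 router.
# Salvage = SOYD book value, operating costs grow 40%/yr, MARR = 18%.
P, life, i = 15_000, 10, 0.18
soyd = life * (life + 1) // 2                 # 55

def a_p(i, n):
    """Capital recovery factor (A/P, i, n)."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def nac(n):
    # present worth of operating costs for years 1..n
    pa = sum(500 * 1.4 ** (j - 1) / (1 + i) ** j for j in range(1, n + 1))
    # book value (salvage) after n years of SOYD depreciation
    salvage = P * sum(life - k + 1 for k in range(n + 1, life + 1)) / soyd
    a_f = a_p(i, n) - i                       # sinking fund factor (A/F, i, n)
    return (P + pa) * a_p(i, n) - salvage * a_f

best = min(range(1, life + 1), key=nac)
print(best)   # year with minimum net annual cost
```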

Although in this example the initial investment cost at time 0 was given, it is usually not so straightforward to figure out what value should be used for an existing asset. In general, the investment cost that should be used for the defender is the money that you give up by not disposing of it; that is, the opportunity cost. You must also add any costs at time 0 to make it equivalent to the challenger. In summary, the investment consists of the:

current market value for the defender,

less costs necessary for its disposal,

less taxes on the capital gain (when taxes are considered),

plus any real costs at time 0 necessary to keep it.

Example 3-21 Eight years ago you bought a used car for $4,500 that has a trade-in value of $500. You are now considering a replacement for $8,250 and want to know whether to go ahead with the deal. Your minimum acceptable rate of return is 12%. Some additional cost data are given below.

Old car (defender) Maintenance costs next year will be $800 and are expected to go up by $400 a year in the coming years ($800, $1,200, $1,600, . . .). The car is now a death trap, not worth more than $250. To trade it in, you would have to clean it up at a cost of $50. At any time in the future, the net salvage value is also expected to be $200.

New car (challenger) This car is supposed to last for 10 years with a trade-in value of $750 at the end of that time. If you sell it before the end of its useful life, you expect the trade-in value to be the same as the book value computed with straight-line depreciation. Maintenance will be $100 per year for the first three years and $300 per year thereafter.

Solution

For the defender, we have

Investment: P_D = $200

Operating cost: A_D(n) = $800 + $400(n − 1)

Salvage value: S_D(n) = $200

NAC_D(n) = P_D(A/P, i, n) + $800 + $400(A/G, i, n) − $200(A/F, i, n)

For the challenger,

Investment: P_C = $8,250

Operating cost: A_C(n) = $100 for n = 1, 2, 3 and $300 per year thereafter

Salvage value: S_C(n) = $8,250 − $750n

NAC_C(n) = P_C(A/P, i, n) + 100 − S_C(n)(A/F, i, n)   for n = 1, 2, 3
NAC_C(n) = P_C(A/P, i, n) + 300 − 200(P/A, i, 3)(A/P, i, n) − S_C(n)(A/F, i, n)   for n = 4, …, 10

The investment cost specified for the defender is simply the opportunity cost of not trading it in: the $250 market value less the $50 cleanup cost that a trade-in would require. It is not the book value or the nominal trade-in value. The investment cost specified for the challenger is its purchase cost, which does not include the trade-in. Generally speaking, we do not use any challenger characteristics to compute the defender investment, costs, or salvage, and vice versa.

The following table lists the data used in the analysis. As can be seen, the economic life is 1 year for the defender, with a corresponding cost of $824, whereas the economic life of the challenger is 10 years, with an annual cost of $1,632. Thus, it is optimal to keep the defender for at least one more year.

                        Defender
Age, n     A_D(n)     S_D(n)     NAC_D(n)
  0                    $200
  1         $800       $200         $824
  2       $1,200       $200       $1,013
  3       $1,600       $200       $1,194
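The defender column of the table can be reproduced with the standard factor formulas, including the arithmetic-gradient factor (A/G, i, n) for the $400-per-year cost growth:

```python
# Defender NAC for Example 3-21: P_D = $200, costs $800 + $400(n-1),
# constant $200 salvage, i = 12%.
i = 0.12

def a_p(i, n):
    """Capital recovery factor (A/P, i, n)."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def a_g(i, n):
    """Arithmetic-gradient conversion factor (A/G, i, n)."""
    return 1 / i - n / ((1 + i) ** n - 1)

def nac_defender(n):
    a_f = a_p(i, n) - i          # sinking fund factor (A/F, i, n)
    return 200 * a_p(i, n) + 800 + 400 * a_g(i, n) - 200 * a_f

for n in (1, 2, 3):
    print(n, round(nac_defender(n)))   # 824, 1013, 1194 as in the table
```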

This example illustrates a common situation: namely, that the economic life of the defender is often one year, and the economic life of the challenger is often its useful life. Finally, we mention that when considering taxes, an after-tax cash flow analysis should be used. In such cases, there will be a tax consequence if the book value of the defender does not equal its net market value.

3.6.3 Make-or-Buy Decision

Example 3-22 The GIGO Corporation manufactures and sells computers. It makes some of the parts and purchases others. The engineering department believes that it might be possible to cut costs by manufacturing one of the parts that is currently being purchased for $8.25 each. The firm uses 100,000 of these parts each year, and the accounting department compiles the following list of annual costs based on engineering estimates:

Fixed costs will increase by $50,000.

Labor costs will increase by $125,000.

Factory overhead, currently running $500,000 per year, is expected to increase 12%.

Raw materials used to make the part will cost $600,000.

Given the estimates above, should GIGO make the part or continue to buy it?

Solution Find the total cost per year incurred if the part were manufactured:

Additional fixed costs                          $50,000
Additional labor costs                          $125,000
Raw materials cost                              $600,000
Additional overhead costs = 0.12 × $500,000     $60,000
Total cost to manufacture                       $835,000

Find cost per unit to manufacture:

$835,000 / 100,000 = $8.35 per unit

Decision  

GIGO should continue to buy the part. Manufacturing costs exceed the present cost to purchase by $0.10 per unit.
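The make-or-buy arithmetic above can be restated as a short script, with the figures taken directly from the example:

```python
annual_volume = 100_000          # parts used per year
buy_price = 8.25                 # current purchase price per part

# incremental annual costs of making the part (from the example)
added_fixed = 50_000
added_labor = 125_000
raw_materials = 600_000
added_overhead = 0.12 * 500_000  # 12% increase on the $500,000 base

make_total = added_fixed + added_labor + raw_materials + added_overhead
make_unit = make_total / annual_volume

print(f"cost to make: ${make_unit:.2f}/unit")               # $8.35/unit
print("decision:", "buy" if make_unit > buy_price else "make")  # buy
```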

Perspective  

The decision to make or buy is arguably the most fundamental component of manufacturing strategy. Should a firm be highly integrated, such as Henry Ford's River Rouge plant, with raw iron ore and coal flowing in one end and a finished Model A rolling out the other? Or should it simply purchase components from capable suppliers and then perform an assembly role, much like today's PC manufacturers such as Compaq and Dell?

Henry Ford’s model of vertical integration slipped from favor in the early 1960s, when outsourcing became increasingly attractive. Businesses found that outsourcing had certain advantages, potentially allowing them to:

Convert fixed costs to variable costs, thereby providing flexibility in an economic downturn

Balance workforce requirements

Reduce capital investment requirements

Reduce costs via suppliers’ economies of scale and lower wage structures

Accelerate new product development

Gain access to invention and innovation from suppliers

Focus resources on high-value-added activities

Nevertheless, recent studies have shown that many make-or-buy decisions have historically been made with disproportionate weight placed on unit cost and insufficient regard for strategic or technical issues (e.g., see Dertouzos et al. 1989). This cost-focused approach has led to competitive disaster for many firms and, indeed, for entire industries in the United States. The list of those affected by this phenomenon is well known; some of the most notable are consumer electronics, machine tools, semiconductors, and office equipment. As recently as 2004, General Motors reported more than 8,000 suppliers for direct material alone.

3.6.4 Lease-or-Buy Decision

Example 3-23 Jeremy Sitzer is a small businessman who needs a pickup truck in his everyday work. He is considering buying a used truck for $3,000. If he goes ahead, he believes that he will be able to sell it for $1,000 at the end of 4 years, so he will depreciate $2,000 of the truck's value on a straight-line basis. Sitzer can borrow $3,000 from the bank and repay it in four equal annual installments at 6% interest. However, a friend advises him that he may be better off leasing a truck if he can get the same terms from the leasing company that he receives at the bank. Assuming that this is so, should Sitzer buy or lease the truck? He is in the 40% tax rate bracket.

Solution Find the cost to buy: The bank loan is an installment loan at 6% interest, so the payments constitute a 4-year annuity. Divide the amount of the loan by the present value factor for a 4-year annuity at 6% [ ( P/A, 6%, 4 )=3.465 ] to find the annual payment. Multiply the annual payments by 4 to find the total payment.

$3,000 / 3.465 = $866 annual payment
4 × $866 = $3,464 total payment

Next, find the present value of the cost of the loan:

(1) Year   (2) Yearly payment   (3) Interest at 6%   (4) Payment on principal   (6) Depreciation   (7) Tax-deductible expense (3)+(6)   (8) Tax saving 0.4×(7)   (9) Cost of owning (2)−(8)

1   $866   $180   $686   $500   $680   $272   $594
2   $866   $139   $727   $500   $639   $256   $610
3   $866   $95    $771   $500   $595   $238   $628
4   $866   $50    $816   $500   $550   $220   $646

Total present value of payments = $2,127

Present value of salvage =$1,000×0.792=$792

Present value of cost of loan =$2,127−$792=$1,335

Find the cost to lease:

(1) Year   (2) Lease payment   (3) Tax saving 0.4×$866   (4) Lease cost after taxes (2)−(3)   (5) Present value factor at 6%   (6) Present value (4)×(5)

1   $866   $346   $520   0.943   $490
2   $866   $346   $520   0.890   $463
3   $866   $346   $520   0.840   $437
4   $866   $346   $520   0.792   $411

Total present value of lease payments = $1,801

Compare present values of cost to buy and cost to lease:

Present value of cost to lease   $1,801
Present value of cost to buy     $1,335
Advantage of buying              $466

Decision  

Mr. Sitzer should buy the truck.
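The full comparison can be sketched in a few lines of code. The calculation below mirrors the two tables above; because the text rounds its discount factors to three decimals, the results differ from the printed figures by a few dollars, but the ranking is unchanged.

```python
i, tax, years = 0.06, 0.40, 4
loan = 3000.0
salvage = 1000.0
depreciation = (loan - salvage) / years        # $500/yr, straight line

pv = lambda amount, t: amount / (1 + i) ** t   # present-value helper

annuity = sum(pv(1, t) for t in range(1, years + 1))  # (P/A, 6%, 4) ≈ 3.465
payment = loan / annuity                               # ≈ $866 per year

# Cost to buy: after-tax cost of owning each year, discounted, less salvage.
balance, pv_buy = loan, 0.0
for t in range(1, years + 1):
    interest = i * balance                 # column (3)
    balance -= payment - interest          # amortize the principal
    deductible = interest + depreciation   # column (7)
    cost_of_owning = payment - tax * deductible  # column (9)
    pv_buy += pv(cost_of_owning, t)
pv_buy -= pv(salvage, years)

# Cost to lease: after-tax lease payments, discounted.
pv_lease = sum(pv(payment * (1 - tax), t) for t in range(1, years + 1))

print(round(pv_buy), round(pv_lease))   # buying is cheaper
```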

Note  

Again, the importance of depreciation should be mentioned. When Sitzer purchases the truck, he gains the accompanying tax advantages of ownership. If the truck were leased, then the lessor would depreciate it and thereby gain the advantage. Sitzer was also aided by being able to reduce the cost of buying by the present value of the salvage (or disposal) value of the truck. In general, depreciation and salvage value reduce the cost of buying. Nevertheless, if an asset is subject to rapid obsolescence, then it may be less expensive to lease.

3.7 Utility Theory

Decision theory is concerned with giving structure and rationale to the various conditions under which decisions are made. In general, one must choose from among an array of alternatives. These are referred to as actions (or strategies), and each results in a payoff or outcome. If decision makers knew the payoff associated with each action, then they would be able to choose the action with the largest payoff. Most situations, however, are characterized by incomplete information, so for a given action, it is necessary to enumerate all probable outcomes together with their consequences and probabilities. The degree of information and understanding that the decision maker has about a particular situation determines how the underlying problem can be approached.

Two people, faced with the same set of alternatives and conditions, are likely to arrive at very different decisions regarding the most appropriate course of action for them. What is optimal for one may not even be an attractive alternative for the other. Judgment, risk, and experience work together to influence attitudes and choices.

Implicit in any decision-making process is the need to construct, either formally or informally, a preference order so that alternatives can be ranked and a final choice is made. For some problems this may be easy to accomplish, as we saw in the preceding sections, where the decision was based on a profit-maximization or cost-minimization rule. There, the preference order is adequately represented by the natural order of real numbers. In more complex situations, where factors other than profit maximization or cost minimization apply, it may be desirable to explore the decision maker’s preference structure in an explicit manner and to attempt to construct a preference ordering directly. An important class of techniques that works by eliciting preference information from the decision maker is predicated on what is known as utility theory. This, in turn, is based on the premise that the preference structure can be represented by a real-valued function called a utility function.2 Once such a function is constructed, selection of the final alternative should be relatively simple. In the absence of

uncertainty, an alternative with the highest utility would represent the preferred solution. For the case in which outcomes are subject to uncertainty, the appropriate choice would correspond to the one that attains the highest expected utility. Thus, the decision maker is faced with two basic problems involving judgment:

2 Technically speaking, the term utility function is reserved for the case in which uncertainty is present. When each alternative has only one possible outcome, the term value function is used. In either case, the construction procedure is the same.

1. How to quantify (or measure) utility for various payoffs

2. How to quantify judgments concerning the probability of the occurrence of each possible outcome or event

In this section, we focus on the first question––of quantifying and exploiting personal preference; the second, subjective probability estimation, falls more appropriately in the realm of elementary statistics and so is not treated here.

3.7.1 Expected Utility Maximization

Assuming the presence of uncertainty, when a decision maker is repeatedly faced with the same problem, experience often leads to a strategy that provides, on average, the best results over the long run. In technical terms, such a strategy is one that maximizes expected monetary value (EMV). Let A be a particular action with possible outcomes j = 1, …, n. Also, let pj be the probability of realizing outcome j with corresponding payoff or return xj. The expected monetary value of A is calculated as follows:

EMV(A) = Σ_{j=1}^{n} pj xj   (3.1)
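Eq. (3.1) translates directly into code; the coin-flip gambles a2 and b2 from the situations below serve as a check:

```python
def emv(prospect):
    """Expected monetary value of a prospect given as
    (probability, payoff) pairs -- Eq. (3.1)."""
    return sum(p * x for p, x in prospect)

a2 = [(0.5, 10.0), (0.5, -1.0)]       # fair coin: +$10 heads, -$1 tails
b2 = [(0.5, 1000.0), (0.5, -100.0)]   # fair coin: +$1,000 heads, -$100 tails
print(emv(a2), emv(b2))               # 4.5 450.0
```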

For the case in which the decision maker is faced with a unique problem, using the EMV criterion might not be such a good idea. In fact, a large body of empirical evidence suggests that it is rarely the criterion selected. To see this, assume that you must select one of the two alternatives in each of the following five situations:

Situation 1: a1: the certainty of receiving $1; or a2: on the flip of a fair coin, $10 if it comes up heads or −$1 if it comes up tails.

Situation 2: b1: the certainty of receiving $100; or b2: on the flip of a fair coin, $1,000 if it comes up heads or −$100 if it comes up tails.

Situation 3: c1: the certainty of receiving $1,000; or c2: on the flip of a fair coin, $10,000 if it comes up heads or −$1,000 if it comes up tails.

Situation 4: d1: the certainty of receiving $10,000; or d2: on the flip of a fair coin, $100,000 if it comes up heads or −$10,000 if it comes up tails.

Situation 5: e1: the certainty of receiving $10,000; or e2: a payment of $2^n, where n is the number of times that a fair coin is flipped until heads comes up. If heads appears on the first toss, you receive $2; if the coin shows tails on the first toss and heads on the second, then you receive $4, and so forth. However, you are given only one chance; the game stops with the first showing of heads.

Most people would probably choose a2, b2, c1, d1, and e1. The choices a2 and b2 are the ones an EMV-maximization criterion would produce, because EMV(a2) = (1/2)($10) + (1/2)(−$1) = $4.50 is greater than the $1 return from the certain choice a1, and EMV(b2) = $450 is greater than $100. Nevertheless, in situations 3 and 4, c1 would probably be preferred to c2, even though EMV(c2) = $4,500 is greater than $1,000, and d1 would be preferred to d2 even though EMV(d2) = $45,000 is greater than $10,000. In situation 5, the EMV of e2 is infinite; that is,

EMV(e2) = (1/2)($2) + (1/4)($4) + (1/8)($8) + … = $1 + $1 + $1 + … = ∞

yet e1 would be preferred to e2 by practically everyone.
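The divergence is easy to verify numerically: every term of the series contributes exactly $1, so the EMV of the gamble truncated at n flips is exactly $n and grows without bound.

```python
def st_petersburg_emv(max_flips):
    """EMV of the St. Petersburg gamble truncated at max_flips tosses.

    The k-th term is (1/2**k) * (2**k) = 1 dollar, so the truncated
    EMV equals max_flips dollars.
    """
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))

print(st_petersburg_emv(10), st_petersburg_emv(100))   # 10.0 100.0
```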

In the first four situations, most people would tend to change their decision criterion away from maximizing EMV as soon as the thought of losing a large sum of money (say, $1,000) became too painful, despite the pleasure to be gained from possibly obtaining a large sum (say, $10,000). At this point, the person faced with such a choice would not be considering EMV but would instead be thinking solely of utility. In this sense, utility refers to the pleasure (utility) or displeasure (disutility) that one would derive from certain outcomes. In essence, we are saying that the person's displeasure from losing $1,000 is greater than the pleasure of winning many times that amount. In situation 5, no prudent person would choose the gamble e2 over the certainty of the relatively modest amount offered by e1. This problem, known as the St. Petersburg paradox, led Daniel Bernoulli to the first investigations of utility, rather than EMV, as the basis of decision making.

3.7.2 Bernoulli’s Principle

Logic, observed behavior, and introspection all indicate that any adequate procedure for handling choice under uncertainty must involve two components: personal valuation of consequences and personal strengths of belief about the occurrence of uncertain events. Bernoulli's principle, as refined by von Neumann and Morgenstern (1947), has the normative justification of being a logical deduction from a small number of axioms that most people find reasonable. The relevant axioms differ slightly depending on whether the decision maker (a) has a single goal, (b) has multiple goals between which he or she can establish acceptable trade-off relations, or (c) has multiple goals that are not substitutable. The first two cases lead to a one-dimensional utility measure (i.e., a real number) for each alternative action; the last leads to a lexicographically ordered utility vector.3 We consider only the single-goal case here; multiple goals are taken up in subsequent chapters.

3 Given two n-dimensional vectors x and y, if xi = yi for i = 1, …, r − 1, and xr > yr, then x is said to be lexicographically greater than y.

Axioms:

1. Ordering. For the two alternatives A1 and A2, one of the following must be true: the person either prefers A1 to A2, prefers A2 to A1, or is indifferent between them.

2. Transitivity. The person's evaluation of alternatives is transitive: if he or she prefers A1 to A2, and A2 to A3, then he or she prefers A1 to A3.

3. Continuity. If A1 is preferred to A2, and A2 to A3, then there exists a unique probability p, 0 < p < 1, such that the person is indifferent between receiving outcome A2 with certainty and receiving A1 with probability p and A3 with probability (1 − p). In other words, there exists a certainty equivalent to any gamble.

4. Independence. If A1 is preferred to A2, and A3 is some other prospect, then a gamble with A1 and A3 as outcomes will be preferred to a gamble with A2 and A3 as outcomes if the probability of A1 and A2 occurring is the same in both cases.

These axioms relate to choices among both certain and uncertain outcomes. That is, if a person conforms to the four axioms, then a utility function that expresses his or her preferences for both certain outcomes (more precisely, we should say value function in this case) and the choices in a risky situation can be derived. In essence, they are equivalent to assuming that the decision maker is rational and consistent in his or her preferences and imply Bernoulli’s principle, or as it is also known, the expected utility theorem.

Expected Utility Theorem Given a decision maker whose preferences satisfy the four axioms, there exists a function U, called a utility function, that associates a single real number or utility index with all risky prospects faced by the decision maker. This function has the following properties:

1. If the risky prospect A 1 is preferred to A 2 (written A 1 > A 2 ), then the utility index of A 1 will be greater than that of A 2 [i.e., U( A 1 )>U( A 2 ) ]. Conversely, U( A 1 )>U( A 2 ) implies that A 1 is preferred to A 2 .

2. If A is the risky prospect with a set of outcomes { θ } distributed according to the probability density function p( θ ), then the utility of A is equal to the statistically expected utility of A; that is,

U( A )=E[ U( A ) ] (3.2)

If p( θ ) is discrete,

E[ U( A ) ]= ∑ θ U( θ )p( θ ) (3.3a)

and if p( θ ) is continuous,

E[ U( A ) ]= ∫ −∞ ∞ U( θ )p( θ )dθ (3.3b)

As these equations indicate, only the first moment (i.e., the mean or expected value) of utility is relevant to the choice. For a person who accepts the axioms underlying Bernoulli's principle, the variance and other higher moments of utility are irrelevant; the expected value takes full account of all of the moments (mean, variance, skewness, etc.) of the probability distribution p(θ) of outcomes.

3. Uniqueness. The function is defined only up to a positive linear transformation. Given a utility function U, any other function U* such that

U* = aU + b,  a > 0 (3.4)

for scalars a and b, will serve as well as the original function. Thus, utility is measured on an arbitrary scale and is a relative measure analogous, for example, to the various scales used for measuring temperature. Because there is no absolute scale for utility and because a person’s utility function reflects his or her own personal valuations, it is not possible to compare one person’s utility indices with another’s (for further discussion of numbers and scales, see Gass 2001).
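Property 3 can be checked directly: rescaling by U* = aU + b with a > 0 never changes the ranking of prospects. The utility numbers below are illustrative, not taken from the text.

```python
prospects = {"A": 0.84, "B": 0.81, "C": 0.82}   # illustrative utility indices

def rescale(utils, a, b):
    """Apply the positive linear transformation U* = a*U + b -- Eq. (3.4)."""
    assert a > 0, "the transformation must be order-preserving"
    return {k: a * u + b for k, u in utils.items()}

ranking = sorted(prospects, key=prospects.get, reverse=True)
for a, b in [(200.0, -100.0), (0.01, 5.0)]:     # arbitrary rescalings
    rescaled = rescale(prospects, a, b)
    assert sorted(rescaled, key=rescaled.get, reverse=True) == ranking

print(ranking)   # ['A', 'C', 'B'] under every rescaling
```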

Bernoulli’s principle thus provides a mechanism for ranking risky prospects in order of preference, the most preferred prospect being the one with the highest utility. Hence, Bernoullian or statistical decision theory implies the maximization of utility, which, by the expected utility theorem, is equivalent to maximization of expected utility. Equations (3.3a) and (3.3b) provide the empirical basis of application of the theory. Two concepts are involved: degree of preference (or utility) and degree of belief (or probability).

3.7.3 Constructing the Utility Function

Utility functions must be assessed separately for each decision maker. To be of use, utility values (i.e., subjective preferences) must be assigned to all possible outcomes for the problem at hand. Usually, we define a frame of reference whose lower and upper bounds represent the worst and best possible outcomes, respectively. In many circumstances, outcomes are nonmonetary in nature. For example, in selecting a portable computer, one weighs such factors as speed, memory, display quality, and weight. It is possible to assign utility values to these outcomes; however, in most business-related problems, a monetary consequence is of major importance. Hence, we illustrate how to evaluate one's utility function for money, although the same procedure applies to nonmonetary outcomes.

The assessment of a person’s utility function involves pinning down, in quantitative terms, subjective feelings that may not have been thought of before in such a precise way. At least four approaches for doing this have been distinguished (Keeney and Raiffa 1993): (1) direct measurement; (2) the von Neumann-Morgenstern (NM) method or standard reference contract; (3) the modified NM method; and (4) the Ramsey method.

The first approach involves asking a series of questions of the type: “Suppose that I were to give you an outright gift of $100. How much money would you need to make you twice as happy as the $100 would make you feel?” The answers to a sequence of such questions enable the plotting of a utility curve against whatever arbitrarily chosen utility (value) scale is desired. The drawbacks of this approach are that it is not concerned with uncertainty, and for many people, it cannot be expected to be as precise as the other methods.

The other three approaches deal with the question of risk attitude directly and ask the decision maker to compare certain gambles to sure sums of money, or gambles to gambles. For example, in a new product development problem, a question might be to have the project manager choose between receiving $200,000 for certain versus a gamble (lottery), with equal chances of winning $1,000,000 and losing $500,000. Such a situation might arise if the project manager were faced with selecting one of two technologies: the first being a sure thing, the second being much more risky. Through this type of questioning, one can find some riskless value that would make the project manager indifferent (Axiom 3). This value is called the certainty equivalent (CE) of the gamble. When the CE is less than the expected monetary value ( CE<EMV ), we say that the decision maker is risk averse. The measurement procedure is continued with different gambles until enough data points are available to plot the utility curve.

In this subsection, we discuss the modified NM method, which in our experience is the most easily understood. The first step in deriving the utility function is to designate two monetary outcomes as reference points. For convenience, we look at the most favorable and least favorable outcomes and then select two values greater than or equal to and less than or equal to these outcomes. The utilities of these extreme points may be selected arbitrarily; however, convention usually assigns them values of 1 and 0, respectively. For example, in the new product development problem given below, the monetary outcomes range from −$267,000 to $750,000. For expediency, we thus might choose extreme values of −$500,000 and $1,000,000, assigning a utility of 0 to the first and a utility of 1 to the second. That is,

U( −$0.5M )=0 and U( $1M )=1 (3.5)

Once again, the choice of the scale 0 to 1 is arbitrary and just as well could have been −100 to 100.

The standard reference contract or NM method is based on the concept of certainty equivalence. If outcome x 1 is preferred to x 2 , and x 2 is preferred to x 3 , then by continuity there exists a probability p such that

pU( x 1 )+( 1−p )U( x 3 )=U( x 2 ) (3.6)

For specified values of x 1 , x 2 and x 3 , the utility of x 2 can be determined by questioning to find the value of p at which x 2 is the CE of the gamble involving x 1 and x 3 (i.e., what value of p will make you indifferent to the gamble of receiving x 2 for certain?), U( x 1 ) and U( x 3 ) being given values on an arbitrary scale. For example, if U( x 1 ) is set at 1 and U( x 3 ) at 0, then U( x 2 )=p [i.e., p( 1 )+( 1−p )( 0 )=U( x 2 ) ]. By defining the values of p corresponding to an array of values of x 2 between x 1 and x 3 , the utility curve may be plotted for values of x in this range.

The difficulty that arises in applying Eq. (3.6) is that most people have no experience in specifying probabilities and consequently become extremely frustrated with the questioning. This is especially true when the appropriate value of p is small, say less than 0.1. To overcome the biases that result, the modified NM method uses neutral probabilities of p = 0.5 = 1 − p. Questions are posed to determine the CE x2 for a 50-50 lottery of x1 and x3. Thus, we have

0.5U(x1) + 0.5U(x3) = U(x2). (3.7)

If U( x 1 ) is set at 1 and U( x 3 ) at 0, then U( x 2 )=0.5. In a similar manner, the CE may be established for the 50-50 lottery of x 1 and x 2 , say x 4 , which will have a utility of

U( x 4 )=0.5U( x 1 )+0.5U( x 2 )=0.75

and for the 50-50 lottery of x 2 and x 3 , say x 5 , which will have a utility of

U( x 5 )=0.5U( x 2 )+0.5U( x 3 )=0.25.

By such further linked questions, additional points on the utility curve may be established. Now, using Eq. (3.5), let’s see how we can find the project manager’s utility function. To do this, we formulate the following two alternatives: (1) a gamble that offers a 50-50 chance of winning $1,000,000 and losing $500,000 and (2) one that offers a sure amount of money.

Suppose that you have the choice of the gamble (call this scenario B) versus the sure thing (call this A). How much money would the sure thing have to be such that you were indifferent between A and B (i.e., the two alternatives were equally attractive)? Suppose that you said −$250,000. Because you are indifferent to these two options, they must have the same utility, or more properly, the same expected utility. Recall that the expected utility of any set of mutually exclusive outcomes resulting from a decision is the sum of the products of the utility of each outcome and its probability of occurrence. The expected utility of the gamble B is

U( B )=0.5U( $1,000,000 )+0.5U( −$500,000 ) =0.5( 1 )+0.5( 0 )=0.5

implying that U( B )=U( A )=U( −$250,000 )=0.5. The basic concept is depicted in Figure 3.10.

Figure 3.10 Diagram for utility assessment.

We now have three points on the project manager's utility curve. Additional evaluations may be made in a similar manner to obtain a more precise picture. For example, pose an alternative that offers a 50-50 chance of gaining $1,000,000 and losing $250,000, and find the amount that must be offered with certainty to make the project manager indifferent to the gamble. Suppose that the answer is $75,000. Then

U( $75,000 )=0.5U( $1,000,000 )+0.5U( −$250,000 ) =0.5( 1 )+0.5( 0.5 )=0.75

Next pose the alternative involving a 50-50 chance of losing $250,000 or $500,000. The project manager would clearly consider this gamble unfavorable and would surely be willing to pay some amount to be relieved of the choice (in the same way that one buys insurance to be relieved of risk). Suppose that he or she were indifferent between the gamble and paying a fixed amount of $420,000. Then

U( −$420,000 )=0.5U( −$250,000 )+0.5U( −$500,000 ) =0.5( 0.5 )+0.5( 0 )=0.25

We now have five points on his or her utility function, as given in Table 3.1. These can be connected by a smooth curve to approximate the “true” utility function over the entire range from −$500,000 to $1,000,000 (see Figure 3.11).

TABLE 3.1 Assessed Utilities for Project Manager

Monetary outcome, x    Utility, U(x)
−$500,000              0.00
−$420,000              0.25
−$250,000              0.50
$75,000                0.75
$1,000,000             1.00
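A piecewise-linear interpolation through the five assessed points is a convenient stand-in for the smooth curve of Figure 3.11. The sketch below uses it to reproduce the consistency check on gambles C and D described in the text.

```python
# assessed (outcome, utility) points from Table 3.1
points = [(-500_000, 0.00), (-420_000, 0.25), (-250_000, 0.50),
          (75_000, 0.75), (1_000_000, 1.00)]

def utility(x):
    """Piecewise-linear interpolation of the assessed utility points."""
    for (x0, u0), (x1, u1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return u0 + (u1 - u0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside the assessed range")

# consistency check: gambles C and D should have equal expected utility
U_C = 0.5 * utility(1_000_000) + 0.5 * utility(-500_000)
U_D = 0.5 * utility(75_000) + 0.5 * utility(-420_000)
print(U_C, U_D)   # 0.5 0.5
```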

Figure 3.11 Utility function obtained from data in Table 3.1.


Note that to be consistent, the project manager should, for example, be indifferent between a gamble C, which offered an equal chance of winning $1,000,000 or losing $500,000, and a second gamble D, which offered an equal chance of winning $75,000 or losing $420,000. That is,

U( C )=0.5U( $1,000,000 )+0.5U( −$500,000 )=0.5( 1 )+0.5( 0 )=0.5 U( D )=0.5U( $75,000 )+0.5U( −$420,000 )=0.5( 0.75 )+0.5( 0.25 )=0.5

If this is not true, then the manager’s assessments are inconsistent and should be revised. Similar checks should be performed to gain confidence in the decision maker’s responses.

To facilitate the analysis, a number of commercial products are available. These can be used to guide in the construction of the utility function, assess subjective probabilities, check for inconsistencies in judgment, and rank the alternatives.

3.7.4 Evaluating Alternatives

In the general case, we are given a set of m alternatives A = {A1, …, Am}, where each alternative may result in one of n outcomes or "states of nature." Call these θ1, …, θn, and denote xij as the consequence realized if θj results when alternative i is selected. Also, let pj(θj) be the probability that the state of nature θj occurs. Then, from Eq. (3.3a), we can compute the expected utility of alternative Ai as follows:

U(Ai) = Σ_{j=1}^{n} pj(θj) U(xij),  i = 1, …, m (3.8)

where xij = xij(θj) is an implicit function of θj. For the deterministic case in which n = 1, implying that only one outcome is possible, Eq. (3.8) reduces to U(Ai) = U(xi).

Example 3-24  

(Selection of New Product Development Strategy)

As project manager of a research and development group, you have been assigned the responsibility for coming up with a new switching circuit as a modular component for a laser device. You are given a budget of $300,000 and 3 months to complete the project. Two technical approaches have been identified, one using a circuit incorporating conventional transistors and another designed around a single integrated chip.

You estimate that a successful conventional circuit design would be worth $478,300 to the company. In contrast, use of a single integrated chip would offer a simpler, more reliable circuit that would be considerably easier to manufacture. Moreover, it would yield an additional cost savings of $150,000 and would be worth an additional $121,700 to the firm over and above any cost savings, for the quantity expected.

You are sure that either of the two approaches could be developed to satisfy the project’s specifications given enough time and money. However, within the allotted time and budget, you estimate that there is a 30% chance that the conventional circuit would not meet specifications and a 50% chance that the integrated chip would also fail.

The end result of the project is to be a prototype built in the manufacturing shop from the drawings furnished by you. To work out the design details of the circuit and to identify and resolve unanticipated problems, you plan to design and build a breadboard model. This would take 3 months and cost (in labor, materials, and equipment) $60,000 for the conventional design and $100,000 for the integrated chip. The critical decision with which you are confronted is the choice of which design to pursue in construction of a breadboard.

Because you would be within budget, you have the additional option of pursuing the two technical approaches simultaneously. Nevertheless, if you undertake both in parallel, you will incur an additional $107,000 in expenses. What is the best course of action for conducting the development project?

Solution Let A 1 be the alternative associated with the conventional design, A 2 the alternative associated with the integrated chip, and A 3 the parallel strategy. Note that if the last is pursued and both breadboards are built, then the cost will be $267,000.

The data for this problem are displayed in Table 3.2 in the form of a payoff matrix. For each alternative, there are four possible states of nature (n = 4), depending on whether the respective breadboard is a success (S) or a failure (F). These outcomes, θj (j = 1, …, 4), are indicated in the first row of the table. The probabilities pj(θj) are computed by taking the product of the probabilities of the two constituent outcomes, S or F. For example, p1(θ1) = Prob(A1 is a success) × Prob(A2 is a success) = 0.7 × 0.5 = 0.35. The monetary consequences of each action for each state of nature are determined by subtracting the costs from the returns. For example, x33 represents the payoff for which both designs are pursued but only the second succeeds. The cost would be $60,000 for the conventional option + $100,000 for the integrated chip + $107,000 for the duplication of effort = $267,000. The returns to the firm are $478,300 + $121,700 + $150,000 for ease of manufacturability = $750,000. Thus x33 = $750,000 − $267,000 = $483,000.

TABLE 3.2  Payoff Matrix for New Product Development Example

State of nature, θj:   θ1      θ2      θ3      θ4
A1 outcome:            S       S       F       F
A2 outcome:            S       F       S       F
pj(θj):                0.35    0.35    0.15    0.15

Payoffs ($1,000):                                      EMV ($1,000)
A1                     418.3   418.3   −60.6   −60.6   275
A2                     650.0  −100.0   650.0  −100.0   275
A3                     483.0   211.3   483.0  −267.0   275

The last column of Table 3.2 lists the EMV of each alternative. These values were obtained by repeated application of Eq. (3.1) and all are equal to $275,000. This suggests that one should be indifferent to all three alternatives, but can this really be the case? You, as the decision maker, might not be willing to tolerate the prospect of losing $100,000 or more (e.g., such a loss might cost you your job or might put the company into a difficult financial position), but you might be willing and able to bear the strain of a $60,000 loss. Hence, you would choose A1 over the other options in that no more than $60,000 could be lost with A1 whereas $100,000 and $267,000 could be lost with A2 and A3, respectively.

If we now approach this problem from a utility theory point of view, whereby our attitude toward risk is implicitly taken into account in the construction of the utility function, then the analysis is more informative. To proceed, the first step is to convert monetary outcomes to “utiles” by using the curve in Figure 3.11. The results are displayed in Table 3.3, where now we see that A1 is preferred to A3, which is preferred to A2, although only slightly. Evidently, the increased prospect for success with alternative 3 is not sufficiently high to balance the risk of losing $267,000 should both projects fail. Similarly, the $650,000 payoff associated with A2 is not large enough for this risk-averse decision maker to compensate for the 50% chance of losing $100,000. Nevertheless, because the expected utilities for the three alternatives are so close, additional effort should go into refining the probability, cost, and return estimates.

TABLE 3.3  Utility Matrix for New Product Development Example

State of nature    θ1      θ2      θ3      θ4
A1:                S       S       F       F
A2:                S       F       S       F
pj(θj):            0.35    0.35    0.15    0.15    Expected utility
A1                 0.90    0.90    0.70    0.70    0.84
A2                 0.95    0.67    0.95    0.67    0.81
A3                 0.92    0.83    0.92    0.49    0.82
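The expected utilities in the last column follow from the same weighted sum as the EMV, applied to utiles instead of dollars. A minimal sketch:

```python
# Sketch: expected utilities from the utiles of Table 3.3
# (utiles read off the decision maker's curve in Figure 3.11).
probs = [0.35, 0.35, 0.15, 0.15]
utiles = {
    "A1": [0.90, 0.90, 0.70, 0.70],
    "A2": [0.95, 0.67, 0.95, 0.67],
    "A3": [0.92, 0.83, 0.92, 0.49],
}
eu = {a: sum(p * u for p, u in zip(probs, us)) for a, us in utiles.items()}
print(eu)   # A1 comes out highest, then A3, then A2
```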

3.7.5 Characteristics of the Utility Function

The curve derived in Figure 3.11 increases monotonically from the lower left to the upper right. In other words, it has a positive slope throughout. This is generally the characteristic of utility functions. It simply implies that people ordinarily attach greater utility to larger amounts of money than to smaller amounts (i.e., more is preferred to less). Economists refer to such a psychological trait as a positive marginal utility for money.

Three general types of utility functions are depicted in Figure 3.12. Of course, actual shapes may vary, and the particular application will determine the scale on the horizontal axis. Any number of combinations of the three are possible. The concave-downward shape is illustrative of a person who has a diminishing marginal utility for money, although the marginal utility is always positive (the slope is positive but decreasing as the dollar amount increases; the rate of change of the slope is negative). This type of utility function is indicative of a risk avoider, or someone who is risk averse. The decreasing slope implies that the utility of a given amount of gain is less than the disutility of an equal amount of loss; also, as the dollar gain increases, it becomes less valuable. A person characterized by such a utility function would prefer a small but certain monetary gain to a gamble whose EMV is greater but may involve a larger but unlikely gain, or a large and not unlikely loss.

Figure 3.12 Three general types of utility functions.

Figure 3.12 Full Alternative Text

The linear function in Figure 3.12 depicts the behavior of a person who is neutral or indifferent to risk. For such a person, every increment of, say, $1,000 has an associated constant increment of utility (the slope of the utility curve is positive and constant). That is, he or she values an additional dollar of income just as highly regardless of whether it is the first dollar or the 100,000th dollar gained. This type of person would use the EMV criterion in making decisions because by so doing he or she would also maximize expected utility. Government decision making usually proceeds from a risk-neutral viewpoint. Referring to Figure 3.12, the expected utility of each alternative in Example 3-24 is 0.51.

The third curve in Figure 3.12, which has a convex shape, is that of a risk seeker, someone who is risk prone. Note that the slope of the utility function increases as the dollar amount increases. This implies that the utility of a given gain is greater than the disutility of an equivalent loss. A risk-seeking person subjectively values each dollar of gain more highly. This type of person willingly accepts gambles that have a smaller EMV than an alternative payoff received with certainty. He or she will also take an "unfair" bet in the sense that he or she will choose an action whose EMV is negative. In the case of such a person, the attractiveness of a possibly large payoff in the gamble tends to outweigh the fact that the probability of such a payoff may indeed be exceedingly small. People who persistently buy lottery tickets fall into this category. When the risk-prone curve in Figure 3.12 is used in Example 3-24, the expected utilities for the three alternatives are 0.155, 0.195, and 0.157, thus reversing the order of preference: now A2 > A3 > A1.
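The curves of Figure 3.12 are not reproduced here, so the exact expected utilities above cannot be recomputed from the text alone. The following sketch instead uses simple stand-in curves (a square root for the concave case, a square for the convex case, after rescaling payoffs to [0, 1]); it is meant only to show that curvature alone can reverse the ranking of the Table 3.2 alternatives, not to reproduce the book's numbers:

```python
# Illustrative only: hypothetical stand-in utility curves (not the actual
# curves of Figure 3.12) applied to the Table 3.2 payoffs (in $1,000).
probs = [0.35, 0.35, 0.15, 0.15]
payoffs = {
    "A1": [418.3, 418.3, -60.6, -60.6],
    "A2": [650.0, -100.0, 650.0, -100.0],
    "A3": [483.0, 211.3, 483.0, -267.0],
}
lo, hi = -267.0, 650.0                      # worst and best payoffs in the table
norm = lambda x: (x - lo) / (hi - lo)       # rescale payoffs to [0, 1]

def expected_utility(u):
    return {a: sum(p * u(x) for p, x in zip(probs, xs)) for a, xs in payoffs.items()}

risk_averse  = expected_utility(lambda x: norm(x) ** 0.5)   # concave curve
risk_neutral = expected_utility(norm)                       # linear (EMV ordering)
risk_seeking = expected_utility(lambda x: norm(x) ** 2)     # convex curve

print(max(risk_averse,  key=risk_averse.get))   # A1 tops the risk-averse ranking
print(max(risk_seeking, key=risk_seeking.get))  # A2 tops the risk-seeking ranking
```

With equal EMVs, the linear (risk-neutral) utilities come out essentially tied, while the concave curve favors A1 and the convex curve favors A2, mirroring the reversal described in the text.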

Most people have utility functions whose slopes do not change very much for small changes in money, suggesting risk-neutral attitudes. In considering courses of action, however, in which one of the consequences is very adverse or in which one of the payoffs is very favorable, people can be expected to depart from the maximization of EMV criterion. In fact, most people are risk seekers for small gains and losses and risk avoiders when the stakes are high, in either direction. This explains why most of us buy insurance and stay with secure but often unexciting jobs rather than seek out risky opportunities that have some possibility of making us wildly rich.

For many business decisions, for which the monetary consequences may represent only a small fraction of the total assets of the organization, maximization of EMV constitutes a reasonable approximation to the decision-making criterion of maximization of expected utility. In such cases, the utility function may be considered linear over the range of possible monetary outcomes. Moreover, Schoemaker (1982) summarizes many experiments that have shown decision makers making "irrational" decisions, violating the axioms of utility theory. In general, utility curves are challenging to construct, as they are not intuitive to business decision makers. In addition, many decision makers have difficulty in assessing probabilities and attaching outcomes to a particular probability. For these reasons, utility theory is not generally used in practice. Decision making, with respect to project selection, for example, is based on either EMV (straightforward and easy to apply) or heuristic judgment. Nevertheless, it is useful to understand utility theory, as it is a normative approach to decision making. Moreover, gaining insight into a decision maker's attitude toward risk (averse, neutral, seeking) is important for a project manager in positioning project proposals in front of senior management decision makers.

TEAM PROJECT: Thermal Transfer Plant

On the basis of your excellent report and presentation, Total Manufacturing Solutions (TMS) has decided to approve a prototype project in the area of waste management and recycling. Because there is a need for a rotary combustor in one of the company's new plant designs, a decision was made to select this project as a prototype.

Rotary combustors are designed to burn a variety of solid combustible wastes, including municipal, commercial, industrial, and agricultural wastes. The basic component of the combustor is the rotating barrel, made out of alternating carbon steel water tubes and perforated steel bars (Figure 3.13). The barrel assembly is set at a slope of −6° and is rotated slowly [approximately 10 revolutions per hour (rph)]. Solid waste is charged from the higher end of the barrel, and the combustion air comes into the barrel through the perforated holes. As the material burns, it tumbles through the barrel and eventually comes out of the lower end as residue (Figure 3.14). In the process, heated forced air promotes drying and burning.

Figure 3.13 Arrangement of rotary combustor.

Figure 3.13 Full Alternative Text

Figure 3.14 Details of rotary combustor.

Figure 3.14 Full Alternative Text

Hot gases created inside the barrel convert the boiler water into steam, which is used in the generation of electricity. High thermal efficiency of up to 80% provides maximum energy recovery through heat transfer from all hot surfaces of the combustor/boiler. Simplified moving parts assure ease of operation, maintenance, and servicing, as well as minimal repair costs.

The combustor capacity is targeted for 14 tons/day. Estimated costs are as follows:

Cost of material:

  Combustor barrel                                          $10,000
  Tires and trunnions                                       $50,000
  Chutes                                                    $10,000
  Drive gears and chain                                     $20,000
  Pushers (2)                                               $2,000
  Enclosure and insulation                                  $20,000
  Rotary water joint                                        $5,000
  Hydraulic drive system (includes power unit, cylinders,
    and combustor drive components)                         $90,000
  Welding materials                                         $30,000

Cost of labor:

  Combustor barrel fabrication: 10 workers, 8 weeks at $50/hr   $160,000
  Tire and trunnion installation: 5 workers, 1 week             $10,000
  Chute fabrication: 4 workers, 2 weeks                         $16,000
  Gear installation: 2 workers, 2 days                          $1,600
  Pusher fabrication: 2 workers, 2 weeks                        $8,000
  Enclosure fabrication: 10 workers, 4 weeks                    $80,000
  Water joint installation: 2 workers, 1 day                    $800

Other:

  Design                                                    $15,000
  Instrumentation                                           $13,000
  Pressure testing                                          $2,500
  Preassembly                                               $7,000
  Break down and loading for shipment                       $3,000
  Overhead                                                  25%
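The line items above can be rolled up directly. A small sketch, with the assumption (not stated in the text) that the 25% overhead applies to the total direct cost of material, labor, and other items:

```python
# Sketch: rolling up the combustor cost estimate from the text.
# Assumption: the 25% overhead is applied to total direct cost;
# the text does not specify its base.
material = {
    "Combustor barrel": 10_000, "Tires and trunnions": 50_000,
    "Chutes": 10_000, "Drive gears and chain": 20_000, "Pushers (2)": 2_000,
    "Enclosure and insulation": 20_000, "Rotary water joint": 5_000,
    "Hydraulic drive system": 90_000, "Welding materials": 30_000,
}
labor = {   # e.g., barrel fabrication: 10 workers x 8 weeks x 40 hr x $50/hr
    "Barrel fabrication": 160_000, "Tire and trunnion installation": 10_000,
    "Chute fabrication": 16_000, "Gear installation": 1_600,
    "Pusher fabrication": 8_000, "Enclosure fabrication": 80_000,
    "Water joint installation": 800,
}
other = {"Design": 15_000, "Instrumentation": 13_000, "Pressure testing": 2_500,
         "Preassembly": 7_000, "Break down and loading": 3_000}

direct = sum(material.values()) + sum(labor.values()) + sum(other.values())
total = direct * 1.25        # add 25% overhead (assumed base)
print(direct, total)         # 553,900 direct; 692,375 with overhead
```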

The following factors contribute to the risk of the project:

Schedule risks

NIMBY (not in my back yard)—the construction of thermal transfer plants may prove to be a long, drawn-out affair. In addition to Environmental Protection Agency requirements, local opposition must be considered.

One option for the combustor drive is to use a single large hydraulic motor. There are two manufacturers of this type of motor, one in Sweden and one in Germany.

Cost risks

Costs due to delays (see first item above).

Price increases—an entire plant is being built—estimated duration is 2 years.

Design time—difficult to estimate.

Fabrication time.

Overhead is very difficult to estimate and control.

Technological risks

Hazardous location—a rotary combustor is a furnace, and because of fire hazards, mineral-based hydraulic fluid cannot be used. Fire-resistant fluids are an alternative but require that certain hydraulic components be down-rated.

Speed control—the accuracy and degree of variability of speed control for the rams are yet to be determined.

No satisfactory design exists yet for a rotary water joint.

Satisfactory seals around the combustor are yet to be developed.

Hydraulic leaks at other installations have been a problem, particularly at the rams and at the power unit.

Instrumentation—there is disagreement as to the sophistication of instrumentation required, particularly on the rams.

TMS’s main business is design and consulting; however, it is believed that this new area of operation may present an opportunity for the company to develop manufacturing capabilities. Management has three alternatives under consideration:

1. Design the rotary combustor at TMS based on customer needs, but subcontract all manufacturing and assembly.

2. Design the rotary combustor at TMS, subcontracting all manufacturing of parts but assembling the system at TMS facilities.

3. Design, manufacture, and assemble the combustor at TMS.

Your assignment is to compare the economic aspects, including risks.

1. For each alternative, list the risks involved and their associated costs.

2. Analyze TMS’s overall financial position under each alternative.

3. Include projected differences in total expenditures and investments.

In evaluating these alternatives, you can make any assumptions necessary. Each should be stated explicitly.

Discussion Questions

1. What are the shortcomings of engineering economic analysis? What difficulties and uncertainties might one face when performing such an analysis?

2. American businesses have often been criticized for short-term thinking that places too much emphasis on payback period and ROR. When Honda started making cars in the early 1970s, for example, the chief executive officer stated that the firm would be “willing to accept an ROR no greater than 2% or 3% for as long as it took to be recognized as the best car maker in the world.” In light of the success of many Japanese firms, is the criticism of American business justified?

3. If a firm is short of capital, then what action might it take to conserve the capital it has and to obtain more?

4. Explain why the marginal cost for borrowing money increases. Why might the cost also be high for borrowing small amounts?

5. Are there any reasons for using present value analysis rather than future value analysis?

6. Why might a decision maker like to see the payback analysis as well as the ROR and the NPV?

7. In the 1960s, the top marginal tax rate for individuals in the United States was 90%; that is, for each dollar that a person earned above roughly $100,000, he or she had to pay 90 cents in taxes. It was argued by many economists at the time that this rate was much too high. What do you think are the negative economic and social consequences of such “confiscatory” tax rates?

8. Discuss why the comparison of alternative investment decisions is especially difficult when the investment choices have different useful lives.

9. Breakeven analysis is typically simplified by using constant-unit variable costs and revenues. What would you expect realistic costs and revenues to be, and what would a corresponding breakeven chart look like?

10. Identify a situation and set of alternatives whose outcomes are not measured on a monetary scale. Assess your utility function for this situation.

11. Give some examples for which the axioms underlying Bernoulli’s principle are violated.

12. Most countries have a progressive income tax system whereby each dollar earned in incrementally higher tax brackets is taxed at an increasingly higher rate. Do you think that a flat tax system would be more fair? How about a proportional tax system? Explain your answer.

13. If you just assessed a corporate executive’s utility function for a problem concerning the purchase of a supercomputer, then could you use the same utility function for a problem of buying an automobile? A personal computer? Explain.

14. It has been argued that comparable interpersonal utility scales may be established on the basis of equating people’s best conceivable situations at the top end and their worst conceivable situations at the bottom end. What’s wrong, if anything, with this approach?

15. In situations in which wealthy employers bargain over wages and benefits with needy employees on an individual basis, the employer usually gives away much less than he actually might have been pressured into or could have afforded. Can you explain this consequence in terms of utility theory?

Exercises

1. 3.1 Construct a diagram illustrating the cash flows involved in the following transactions from the borrower's viewpoint. The amount borrowed is $2,000 at 10% for 5 years.

1. Year-end payment of interest only; repayment of principal at the end of the 5 years

2. Year-end repayment of one fifth of the principal ($400) plus interest on the unpaid balance

3. Lump-sum repayment at the end of year 5 of principal plus accrued interest compounded annually

4. Year-end payments of equal-sized installments, as in a standard installment loan contract
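For checking the diagrams, the payment streams of the four plans can be generated numerically. A sketch using only the stated principal, rate, and term:

```python
# Sketch of the four repayment plans in Exercise 3.1
# ($2,000 borrowed at 10% for 5 years).
P, i, n = 2_000.0, 0.10, 5

# (a) interest only each year, principal repaid at the end
plan_a = [P * i] * (n - 1) + [P * i + P]

# (b) one fifth of the principal plus interest on the unpaid balance
plan_b = [P / n + (P - t * P / n) * i for t in range(n)]

# (c) single lump sum of principal plus interest compounded annually
lump = P * (1 + i) ** n

# (d) equal installments (capital recovery factor)
A = P * i * (1 + i) ** n / ((1 + i) ** n - 1)

print(plan_a, plan_b, round(lump, 2), round(A, 2))
```

Plan (b) starts at $600 and declines by $40 each year; the lump sum in (c) is about $3,221 and the installment in (d) about $528.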

2. 3.2 A firm wants to lease some land from you for 20 years and build a warehouse on it. As your payment for the lease, you will own the warehouse at the end of the 20 years, estimated to be worth $20,000 at that time.

1. If i=8%, then what is the PW of the deal to you?

2. If i=2% per quarter, then what is the PW of the deal to you?

3. 3.3 In payment for engineering services, a client offers you a choice between (1) $10,000 now and (2) a share in the project, which you are fairly certain you can cash in for $15,000 five years from now. With i=10%, which is the more profitable choice?

4. 3.4 Assume that a medium-size town now has a peak electrical demand of 105 megawatts, increasing at an annually compounded rate of 15%. The current generating capacity is 240 megawatts.

1. How soon will additional generating capacity be needed on-line?

2. If the new generator is designed to take care of needs 5 years past the on-line date, then what size should it be? Assume that the present generators continue in service.
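A quick way to sanity-check part (a) is to solve 105(1.15)^n = 240 for n:

```python
import math

# Sketch for Exercise 3.4(a): years until peak demand of 105 MW,
# growing at 15% compounded annually, reaches the 240-MW capacity.
t = math.log(240 / 105) / math.log(1.15)
print(t)   # just under 6, so new capacity is needed in about 6 years
```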

5. 3.5 A local government agency has asked you to consult regarding acquisition of land for recreation needs for the urban area. The following data are provided:

Urban population 10 years ago                               49,050
Urban area population now                                   89,920
Desired ratio of recreation land in acres
  per 1,000 population                                      10 acres/1,000
Actual acres of land now held by local government
  for recreational purposes                                 803 acres

1. Find the annual growth rate in the urban area by assuming that the population grew at a compounded annual rate over the past 10 years.

2. How many years ago was the desired ratio of recreation land per 1,000 population exceeded if no more land was acquired and the population continued to grow at the indicated rate?

3. The local government is planning to purchase more land to supply the recreational needs for 10 years past the point in time found in part (b). How many acres of land should they purchase to maintain the desired ratio, assuming that the population growth continues at the same rate?

6. 3.6 A young engineer decides to save $240 per year toward retirement in 40 years.

1. If he invests this sum at the end of every year at 9%, then how much will be accumulated by retirement time?

2. If by astute investing the interest rate could be raised to 12%, then what sum could be saved?

3. If he deposits one fourth of this annual amount each quarter ($60 per quarter) in an interest bearing account earning a nominal annual interest rate of 12%, compounded quarterly, how much could be saved by retirement time?

4. In part (c), what annual effective interest rate is being earned?

7. 3.7 A lump sum of $100,000 is borrowed now to be repaid in one lump sum at end of month (EOM) 120. The loan bears a nominal interest rate of 12% compounded monthly. No partial repayments will be accepted on the loan. To accumulate the repayment lump sum due, monthly deposits are made into an interest-bearing account that bears interest at 0.75% per month from EOM 1 until EOM 48. From EOM 48 until EOM 120 the interest rate changes to 0.5%. Monthly deposits of amount A begin with the first deposit at EOM 1 and continue until EOM 48. Beginning with EOM 49, the deposits are doubled at amount 2A and continued at this level until the final deposit at EOM 120. Draw the cash flow diagram and find the initial monthly deposit amount A.

8. 3.8 A backhoe is purchased for $20,000. The terms are 10% down and 2% per month on the unpaid balance for 60 months.

1. How much are the monthly payments?

2. What annual effective interest rate is being charged?

9. 3.9 Your firm owns a large earth-moving machine and has contracts to move earth for $1 per cubic yard. For $100,000, this machine may be modified to increase its production output by an extra 10 yd3 per hour, with no increase in operating costs. The earth-moving machine is expected to last another 8 years, with zero salvage value at the end of that time. Determine whether this investment meets the company objective of earning at least 15% return. Assume that the equipment works 2,000 hours per year.

10. 3.10 Your firm wants to purchase a $50,000 computer, no money down. The $50,000 will be paid off in 10 equal end-of-year payments with interest at 8% on the unpaid balance.

1. What are the annual end-of-year payments?

2. What hourly charge should be included to pay off the computer, assuming 2,000 hours of work per year, credited at the end of the year?

3. Assume that 5 years from now you would like to trade in the computer and purchase a new one. You expect a 5% increase in price each year. What would the new computer cost at the end of year 5?

4. What is the unpaid balance on the current computer after 5 years?

11. 3.11 A transportation authority asks you to check on the feasibility of financing for a toll bridge that will cost $2,000,000. The authority can borrow this amount now and repay it from tolls. It will take 2 years to construct and be open for traffic at end of year (EOY) 2. Tolls will be accumulated throughout the third year and will be available for the initial annual repayment at EOY 3. In subsequent years, the tolls are deposited at the end of the year. Draw the cash flow diagram assuming a flow rate of 10,000 cars/day. How much must be charged to each car to repay the borrowed funds in 20 equal annual installments (first installment due at EOY 3), with 8% compound interest on the unpaid balance?
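One way to set up this calculation, assuming 365 toll-collection days per year and treating the 20 installments as an ordinary annuity whose present worth is taken at EOY 2 (first payment at EOY 3):

```python
# Sketch of the toll-bridge setup in Exercise 3.11: the $2,000,000 debt
# accrues 8% interest over the 2 construction years and is then repaid
# in 20 equal annual installments starting at EOY 3.
# Assumption: 365 toll-collection days per year at 10,000 cars/day.
i, n = 0.08, 20
debt_eoy2 = 2_000_000 * (1 + i) ** 2            # balance when traffic starts
crf = i * (1 + i) ** n / ((1 + i) ** n - 1)     # capital recovery factor (A/P, 8%, 20)
installment = debt_eoy2 * crf                   # required annual repayment
toll = installment / (10_000 * 365)             # per-car charge
print(round(installment), round(toll, 3))       # a toll of roughly 6.5 cents
```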

12. 3.12 A firm invested $15,000 in a project that seemed to have excellent potential. Unfortunately, a lengthy labor dispute in year 3 resulted in costs that exceeded benefits by $8,000. The cash flow for the project is as follows:

Year           0        1        2       3       4       5       6
Cash flow ($)  −15,000  +10,000  +6,000  −8,000  +4,000  +4,000  +4,000

Compute the ROR for the project. Assume a 12% interest rate on external investments for purposes of moving money from one period to another.

13. 3.13 An oil company plans to purchase for $70,000 a piece of vacant land on the corner of two busy streets. The company has four different types of businesses that it installs on properties of this type.

Plan  Cost of improvements†  Description
A     $75,000                Conventional gas station with service facilities for lubrication, oil changes, etc.
B     $230,000               Automatic car wash facility with gasoline pump island in front
C     $30,000                Discount gas station (no service bays)
D     $130,000               Gas station with low-cost, quick-car-wash facility

†Cost of improvements does not include the $70,000 cost of land.

In each case, the estimated useful life of the improvements is 15 years. The salvage value for each is estimated to be the $70,000 cost of the land. The net annual income, after paying all operating expenses, is projected as follows:

Plan  Net annual income
A     $23,300
B     $44,300
C     $10,000
D     $27,500

If the oil company expects a 10% ROR on its investments, then which plan (if any) should be selected?

14. 3.14 A firm is considering three mutually exclusive alternatives as part of a production improvement program. The relevant data are:

                        A        B        C
Installation cost       $10,000  $15,000  $20,000
Uniform annual benefit  $1,625   $1,625   $1,890
Useful life (years)     10       20       20

For each alternative, the salvage value at the end of useful life is zero. At the end of 10 years, alternative A could be replaced by a copy of itself that has identical cost and benefits. The MARR is 6%. If the analysis period is 20 years, then which alternative should be selected?

15. 3.15 Consider four mutually exclusive alternatives that each has an 8-year useful life. The costs and benefits of each are given in the following table.

                        A       B     C     D
Initial cost            $1,000  $800  $600  $500
Uniform annual benefit  $122    $120  $97   $122
Salvage value           $750    $500  $500  $0

If the minimum acceptable ROR is 8%, then which alternative should be selected?

16. 3.16 A project has the following costs and benefits. What is the payback period?

Year  Costs   Benefits
0     $1,400
1     $500
2     $300    $400
3–10          $300 per year

17. 3.17 A motor with a 200-horsepower output is needed for intermittent use in a factory. A Teledyne motor costs $7,000 and has an electrical efficiency of 89%. An Allison motor costs $6,000 and has an 85% efficiency. Neither motor would have any salvage value after 20 years of use because the cost to remove them would equal their scrap value. The maintenance cost for either motor is estimated at $300 per year. Electric power costs $0.072/kilowatt-hour (1 hp = 0.746 kW). If a 10% annual interest rate is used in the calculations, then what is the minimum number of hours that the higher-initial-cost Teledyne motor must be used each year to justify its purchase? Use a 20-year planning horizon.
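The breakeven comparison can be sketched as follows; since maintenance is the same for both motors it cancels, leaving the annualized $1,000 price premium against the hourly energy saving of the more efficient motor:

```python
# Sketch for Exercise 3.17: annualize the Teledyne's extra first cost
# over 20 years at 10% and divide by its hourly energy-cost saving.
hp_out_kw = 200 * 0.746                          # 149.2 kW delivered at the shaft
kw_saved = hp_out_kw * (1 / 0.85 - 1 / 0.89)     # lower electrical input needed
saving_per_hour = kw_saved * 0.072               # $/hr at $0.072 per kWh
crf = 0.10 * 1.10 ** 20 / (1.10 ** 20 - 1)       # (A/P, 10%, 20)
annual_capital_premium = 1_000 * crf             # annualized extra first cost
breakeven_hours = annual_capital_premium / saving_per_hour
print(round(breakeven_hours))                    # on the order of 200 hours/year
```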

18. 3.18 Lu Hodler planned to buy a rental property as an investment. After looking for several months, she found an attractive duplex that could be purchased for $93,000 cash. The total expected income from renting out both sides of the duplex would be $800 per month. The total annual expenses for property taxes, repairs, gardening, and so on are estimated at $600 per year. For tax purposes, Lu plans to depreciate the building by the SOYD method, assuming that the building has a 20-year remaining life and no salvage value. Of the total $93,000 cost of the property, $84,000 represents the value of the building and $9,000 is the value of the lot (only the former can be depreciated). Assume that Lu is in the 38% incremental income tax bracket (combined state and federal taxes) throughout the 20 years.

In this analysis Lu estimates that the income and expenses will remain constant at their present levels. If she buys and holds the property for 20 years, then what after-tax ROR can she expect to receive on her investment, using the assumptions noted below?

1. The building and lot can be sold at the end of 20 years for the $9,000 estimated value of the lot.

2. A more optimistic estimate of the future value of the property is that it can be sold for $100,000 at the end of the 20 years.

19. 3.19 The effective combined tax rate in an owner-managed corporation is 40%. An outlay of $20,000 for certain new assets is under consideration. It is estimated that for the next 8 years, these assets will be responsible for annual receipts of $9,000 and annual disbursements (other than for income taxes) of $4,000. After this time, they will be used only for standby purposes, and no future excess of receipts over disbursements is expected.

1. What is the prospective ROR before income taxes?

2. What is the prospective ROR after taxes if these assets can be written off for tax purposes in 8 years using straight-line depreciation?

3. What is the prospective ROR after taxes if it is assumed that these assets must be written off over the next 20 years using straight-line depreciation?

20. 3.20 The Coma Chemical Company needs a large insulated stainless steel tank for the expansion of its plant. Coma has located one at a brewery that has just been closed. The brewery offers to sell the tank for $15,000, including delivery. The price is so low that Coma believes that it can sell the tank at any future time and recover its $15,000 investment. The outside of the tank is lined with heavy insulation that requires considerable maintenance. Estimated costs are as follows:

Year 0 1 2 3 4 5 Maintenance cost $2,000 $500 $1,000 $1,500 $2,000 $2,500

1. On the basis of a 15% before-tax MARR, what is the economic life of the insulated tank; that is, how long should it be kept?

2. Is it likely that the insulated tank will be replaced by another tank at the end of its computed economic life? Explain.

21. 3.21 The Gonzo Manufacturing Company is considering the replacement of one of its machine fixtures with a more flexible variety. The new fixture would cost $3,700, have a 4-year useful and depreciable life, and have no salvage value. For tax purposes, SOYD depreciation would be used. The existing fixture was purchased 4 years ago at a cost of $4,000 and has been depreciated by straight-line depreciation assuming an 8-year life and no salvage value. It could be sold now to a used equipment dealer for $1,000 or be kept in service for another 4 years. It would then have no salvage value. The new fixture would save approximately $900 per year in operating costs compared with the existing one. Assume a 40% combined state and federal tax rate and that capital gains (and losses) are taxed at 40% as well.

Hint: For the existing fixture, the “investment” cost is the opportunity cost of not selling it.

1. Compute the before-tax ROR on the replacement proposal of installing the new fixture rather than keeping the old one.

2. Compute the after-tax ROR on the proposal.

22. 3.22 The following estimates have been made for two mutually exclusive alternatives; one must be chosen. The before-tax ROR required is 20%.

                        A         B
Installed cost          $120,000  $150,000
Estimated useful life   10 years  10 years
Salvage at retirement   $20,000   $30,000
Annual operating costs  $20,000   $15,000

Try to minimize your computations as you determine which course of action to recommend.

23. 3.23 The following cost estimates apply to independent equipment alternatives A and B. The before-tax ROR required is 20%.

                              A                             B
Installed cost                $100,000                      $40,000
Operating costs               $5,000 at the end of year 1,  $10,000 at the end of year 1,
                              increasing by $1,000 per      increasing by $2,000 per
                              year for 20 years             year for 10 years
Overhaul costs every 5 years  $10,000                       None required
Economic life                 20 years                      10 years
Salvage value at end of life  $20,000                       $10,000
  (just overhauled)

1. Compare the NPV of each using a study period of 20 years.

2. Compare the annual equivalent costs.

24. 3.24 An investor requires a MARR of 12% before inflation (not considering the effect of inflation on future costs and benefits).

1. If an inflation rate of 8% is expected, then what MARR should the investor require for an analysis that includes the effect of inflation?

2. If the labor cost is $15 an hour today and the inflation rate is 6%, then how much would you expect the labor cost to be in three years?

25. 3.25 Martha is considering the purchase of the piece of land and some new equipment adjacent to her day care center to use as a play area. Maintenance costs (e.g., mowing the lawn, repairs) are expected to be $500 a year for every year of the project. She expects that the additional lure of the play area will bring in extra business, increasing her income by $1,000 in the first year, and then by an additional $600/year thereafter ($1,600 in year 2, and so on). She plans to keep the land for five years, then donate it to the town (meaning no salvage value). All of these costs and revenues are estimated in today’s dollars. The cash flows are expected to inflate by 7% per year. This is the same as the general rate of inflation.

How much should she pay for the land to get a 12% ROR? The 12% includes the effect of the 7% inflation rate.
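One consistent way to set this up is to discount the constant (today's-dollar) cash flows at the real rate implied by the 12% market MARR and the 7% inflation rate. A sketch:

```python
# Sketch for Exercise 3.25: since all cash flows are stated in today's
# dollars and inflate at the general 7% rate, they can be discounted at
# the real (inflation-free) rate implied by the 12% MARR.
d_real = 1.12 / 1.07 - 1                        # real rate, about 4.67%
maintenance = 500
incomes = [1_000 + 600 * t for t in range(5)]   # 1,000; 1,600; ...; 3,400
net = [inc - maintenance for inc in incomes]    # 500; 1,100; ...; 2,900
max_price = sum(x / (1 + d_real) ** (t + 1) for t, x in enumerate(net))
print(round(max_price))                         # roughly $7,200
```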

26. 3.26 An investment of $2,000 results in the cash flow below. The amounts are expressed in constant dollars.

1. The general rate of inflation is 6%, and future cash flows are expected to increase with inflation. Show the amounts in actual (year-n) dollars in the following table.

Year       0   1   2   3   4   5
Cash flow

2. Your minimum acceptable ROR without considering inflation is 10%. Should you accept this investment opportunity? Show your work.

27. 3.27 For the cash flow given in the figure of the previous exercise, say that you must pay taxes on the incomes shown. The investment for the project is to be depreciated with the SOYD method. The future incomes are expected to increase with an inflation rate of 6%. The general rate of inflation is also 6%. The tax rate is 40%, the tax life is 5 years, and salvage is zero.

Show in the table below the after-tax cash flows for the 5 years associated with this project. Also show the interest rate appropriate for discounting these cash flows. The after-tax MARR without considering inflation is 10%.

Year  0   1   2   3   4   5
ATCF

Should you accept or reject this project? Show your work.

28. 3.28 Your brother needs a $5,000 loan to go to college. Because of his poverty, he will pay nothing for the next four years. Five years from today he will begin paying you $2,500 a year for the next 4 years. The first payment occurs 5 years from today, and the total of the four payments will be $10,000.

1. If your minimum ROR is 8%, then is this an acceptable investment? Explain.

2. For the same payment schedule but with a 5% rate of inflation, is this an acceptable investment? Note that your brother pays you $2,500 a year regardless of the inflation rate. Provide quantitative justification for your decision.

29. 3.29 You are to do an analysis of an investment with and without taxes and with and without considering inflation. The initial investment (at time 0) is $10,000. The projected benefits of the investment are $1,000 per year. After 5 years the project will be sold for $8,000. All of these amounts are estimated in real (year-0) dollars. The MARR for the project is 20% and does not include an allowance for inflation. This MARR is to be used for both the before-tax and after-tax analyses. In each case, you are to write the formula for the NPV of the investment. Be sure to show the appropriate interest rate. It is not necessary to evaluate the formula.

1. Consider the investment without taxes and without inflation. Write the formula for the NPV of the investment.

2. Consider the investment with taxes but without inflation. Write the formula for the NPV of the investment. Use straight-line depreciation with a salvage of 0. All income and capital gains are

taxed at 40%.

3. Consider the investment without taxes but with inflation. The original information given about the problem was in real dollars. The inflation rate is 10% per year for the benefits. The salvage value is also expected to be affected by inflation, growing at a rate of 10% per year. The general inflation rate is also 10% per year.

4. Consider both taxes and inflation in this part. The general inflation rate is 10% per year, affecting both the annual benefits and the salvage value. Use straight-line depreciation with a salvage value of 0. Assume that all income and capital gains are taxed at 40%. Find the NPV of the investment.

3.30 The tables below show the operating cost and salvage value for a machine that was purchased for $50,000 and has a useful life of 3 years. Find its economic life using an MARR of 10%.

1.

Year   Operating cost   Salvage value
1      $10,000          0
2      $40,000          0
3      $70,000          0

2.

Year   Operating cost   Salvage value
1      $10,000          $30,000
2      $10,000          $20,000
3      $10,000          0

3.31 Your company purchased a machine for $14,000 with a 6-year tax life. The SOYD method is used for depreciation, and the tax salvage value is zero.

1. After the third year of use, the machine is sold for $10,000. How much does the company get from the sale after taxes, assuming that the tax rate on capital gains is 40%?

2. Neglect taxes in this part. After the third year of life, the company is thinking about replacing the machine with a new one. It can be sold now for $10,000. Next year it will be worth only $6,000 and in two years, only $4,000. Three years from now the machine will have no resale value. The operating cost of the machine is expected to be constant for the next three years at $1,000 per annum. The new machine has a life of 10 years with a NAC of $5,000. Should the old machine be replaced with the new one if the company’s MARR is 10%? Explain.

3.32 A milling machine (machine A) in your company’s shop has a current market value of $30,000. It was bought nine years ago for $54,000 and has since been depreciated by the straight-line method assuming a 12-year tax life. If the decision is made to keep the machine at this point in time, then it can be expected to last another 12 years (measured from today). At the end of the 12 years, it will be worthless. The operating costs of this machine are $7,500 per year and are not expected to change for its remaining life.

Alternatively, machine A can be replaced by a smaller machine B, which costs $42,000 and is expected to last 12 years. Its operating costs are $5,000 per year and would be depreciated by the straight-line method over the 12-year period with no salvage value expected.

Both income and capital gains are taxed at 40%. Compare the after-tax EUACs of the two machines and decide whether machine A should be retained or replaced by machine B. Use a 10% after-tax MARR in your calculations.

3.33 What is the argument for using assessment procedures based on 50-50 gambles as opposed to assessment procedures based on reference gambles?

3.34 Explain why identification of special attitudes toward risk can simplify the utility assessment process.

3.35 Given the following information, plot four points on the person’s preference curve. The maximum payoff is $1,000. The minimum payoff is $0. The CE for a 50-50 gamble between $1,000 and $0 is $400. The CE for a 50-50 gamble between $400 and $0 is $100.

3.36 As part of a decision analysis, Archie Leach provided the following information:

He was indifferent between a 50-50 chance at +$10 million and −$10 million, and −$5 million for certain.

His CE for a lottery offering a 0.5 chance at −$5 million and a 0.5 chance at +$10 million was $0.

He was indifferent between a lottery with a 0.7 chance at +$10 million and a 0.3 chance at $0, and +$5 million for certain.

Sketch a preference curve for Leach on the basis of this information.

3.37 Refer to Figure 3.15.

Figure 3.15 Preference curve for risk-averse decision maker.

1. Specify a reference gamble that is equivalent (based on this curve) to the certain amount $30,000.

2. Specify a 50-50 gamble that is equivalent (based on this curve) to the certain amount $30,000.

3.38 Beverly Silverman had long been promised a graduation present of $10,000 by her father, to be received on graduation day 3 months hence. Her father had recently offered an alternative gift of 1,000 shares of stock in Opera Systems, Inc., a consulting firm with which Beverly was slightly acquainted. He requested that she choose between the two gifts by the following day. On the day she was trying to decide, the stock was selling for $12 per share. Thus it looked like it would be wise to take the stock because its present value was $12,000. She recognized, however, that she would not receive the stock until graduation day and that the stock price 3 months in the future was uncertain. She also recognized that her utility for money was not linear and that her risk aversion would play a major role in her decision. With these facts in mind, Beverly reached the following conclusions:

1. She believed that the stock price was more likely to rise than to fall in the intervening 3 months, and that it was as likely to be above $14 per share as below that figure when she was to receive the stock.

2. She believed that there was only 1 chance in 100 that the stock price would drop to less than $6 per share and an equal chance that the price would be more than twice its current value on graduation day.

3. She also thought that there was only 1 chance in 5 that the price would be below $10 and that there was 1 chance in 4 that it would be above $16 when she received it.

In considering her preferences, Beverly decided the following:

1. That her CE for a lottery offering a 50-50 chance at $0 and $25,000 was $9,000.

2. That her CE for a lottery offering a 0.2 chance at $25,000 and a 0.8 chance at $0 was $3,000.

3. That her CE for a lottery offering a 50-50 chance at $3,000 or $25,000 was $12,000.

4. That her CE for a lottery offering a 50-50 chance at $12,000 or $25,000 was $17,000.

Determine the cumulative probability distribution that Ms. Silverman has assigned to the stock price. Calculate her CE for the gift of the stock.

3.39 A manager expresses indifference between a certain profit of $5,000 and a venture with a 70% chance of making $10,000 and a 30% chance of making nothing. If the manager’s utility scale is set at 1 utile for $0, and 100 utiles for $10,000, then what is the utility index for $5,000?

3.40 The manager in Exercise 3.39 is indifferent between a venture that has a 60% chance of making $10,000 and a 40% chance of making $1,000, and a sure investment that yields $5,000. Find the value of $1,000 in utiles for this manager.

3.41 Below are the results of a preference test given to an executive:

1. She is indifferent between an investment that will yield a certain $10,000 and a risky venture with a 50% chance of $30,000 profit and a 50% chance of a loss of $1,000.

2. Her utility function for money has the following shape:

Money ($)   −1,000   0   5,000   20,000   30,000
Utility     −2       0   10      20       30

A new risky venture is proposed. The possible payoffs are either $0 or $20,000. The probabilities of the gain cannot be determined. Find the probability combination of $0 and $20,000 that would make the executive indifferent to the certain $10,000.

3.42 Frances Gumm has an opportunity to invest $3,000 in a venture that has a 0.2 chance of making nothing, a 0.3 chance of making $2,000, a 0.2 chance of making $4,000, and a 0.3 chance of making $6,000. Her utilities for each of the outcomes are 0 for $2,000, 35 for $4,000, and 40 for making $6,000. Draw Frances’s utility curve and advise her on making the investment.

3.43 A plant manager has a utility of 10 for $20,000, 6 for $11,000, 0 for $0, and −10 for a loss of $5,000.

1. The plant manager is indifferent between receiving $11,000 for certain and a lottery with a 0.6 chance of winning $5,000 and a 0.4 chance of winning $20,000. What is the utility of $5,000 for the manager? Construct the manager’s utility curve.

2. Using this curve, find the CE for the following gamble (i.e., the amount of cash that will make the manager indifferent to the gamble):

Payoff     Probability
−$2,000    0.2
0          0.3
$3,000     0.4
$10,000    0.1

3. What probability combination of $0 and $20,000 would make the manager indifferent to the certain $11,000? Show your work.

4. The manager is facing a decision about buying a new production machine that can bring a net profit of $15,000 (80% chance) or a loss of $1,000 (20% chance); alternatively, the manager can use the old machine and make a $10,000 profit. Use the utility curve to find which alternative the manager should select. Specify all necessary assumptions.
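The mechanics shared by Exercises 3.39 through 3.43, computing an expected utility and then inverting the utility curve to recover a certainty equivalent, can be sketched in a few lines. The utility table below uses the plant manager's stated points from Exercise 3.43; linear interpolation between tabulated points is an assumption made here for illustration, not part of the exercise statement.

```python
# Utility table from Exercise 3.43 (interpolation between points is assumed).
MONEY = [-5_000, 0, 11_000, 20_000]
UTILS = [-10,    0, 6,      10]

def utility(x):
    """Piecewise-linear interpolation of the utility table."""
    if x <= MONEY[0]:
        return UTILS[0]
    if x >= MONEY[-1]:
        return UTILS[-1]
    for (x0, u0), (x1, u1) in zip(zip(MONEY, UTILS), zip(MONEY[1:], UTILS[1:])):
        if x0 <= x <= x1:
            return u0 + (u1 - u0) * (x - x0) / (x1 - x0)

def inverse_utility(u):
    """Money amount whose utility equals u (the certainty equivalent)."""
    for (x0, u0), (x1, u1) in zip(zip(MONEY, UTILS), zip(MONEY[1:], UTILS[1:])):
        if u0 <= u <= u1:
            return x0 + (x1 - x0) * (u - u0) / (u1 - u0)

def certainty_equivalent(gamble):
    """gamble: list of (payoff, probability) pairs."""
    eu = sum(p * utility(x) for x, p in gamble)
    return inverse_utility(eu)

# The gamble from part 2 of Exercise 3.43:
ce = certainty_equivalent([(-2_000, 0.2), (0, 0.3), (3_000, 0.4), (10_000, 0.1)])
```

Under the linear-interpolation assumption this yields a CE of roughly $733; a curve sketched through the same points by hand would give a somewhat different value.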

Bibliography

Baumol, W. J., “On the Social Rate of Discount,” American Economic Review, Vol. 57, No. 4, pp. 778–802, 1968.

Blank, L. T. and A. Tarquin, Engineering Economy, McGraw-Hill, New York, 2011.

Bowman, M. S., Applied Economic Analysis for Technologists, Engineers and Managers, Second Edition, Prentice Hall, Upper Saddle River, NJ, 2003.

Canada, J. R. and W. G. Sullivan, Economic and Multiattribute Evaluation of Advanced Manufacturing Systems, Prentice Hall, Englewood Cliffs, NJ, 1989.

Collier, A. C. and C. R. Glagola, Engineering Economic and Cost Analysis, Third Edition, Prentice Hall, Upper Saddle River, NJ, 1999.

De Neufville, R., Applied Systems Analysis: Engineering Planning and Technology Management, McGraw-Hill, New York, 1990.

Dertouzos, M., R. Lester, and R. Solow (Editors), Made in America: Regaining the Productive Edge, MIT Press, Cambridge, MA, 1989.

English, J. M., Project Evaluation: A Unified Approach for the Analysis of Capital Investments, Macmillan, New York, 1984.

Finnerty, J. D., Project Financing: Asset-Based Financial Engineering, John Wiley & Sons, 2013.

Gass, S. I., “Model World: When is a Number a Number?” Interfaces, Vol. 31, No. 1, pp. 93–103, 2001.

Humphreys, K. K., Jelen’s Cost and Optimization Engineering, Third Edition, McGraw-Hill, New York, 1991.

Keeney, R. L. and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, Cambridge University Press, Second Edition, Cambridge, 1993.

Martino, J. P., R&D Project Selection, John Wiley & Sons, New York, 1995.

Miller, C. and A. P. Sage, “A Methodology for the Evaluation of Research and Development of Projects and Associated Resource Allocation.” Computers & Electrical Engineering, Vol. 8, No. 2, pp. 123–152, 1981.

Newnan, D. G., J. P. Lavelle and T. G. Eschenbach, Engineering Economic Analysis, Eighth Edition, Engineering Press, Austin, TX, 2000.

Park, C. S., Contemporary Engineering Economics, Third Edition, Prentice Hall, Upper Saddle River, NJ, 2002.

von Neumann, J. and O. Morgenstern, Theory of Games and Economic Behavior, Second Edition, Princeton University Press, Princeton, NJ, 1947.

White, J. A., K. E. Case, D. B. Pratt and M. H. Agee, Principles of Engineering Economic Analysis, Fourth Edition, John Wiley & Sons, New York, 1997.

Chapter 4 Life-Cycle Costing

4.1 Need for Life-Cycle Cost Analysis

The total cost of ownership of a product, structure, or system over its useful life defines its life-cycle cost (LCC). For products purchased off the shelf, the major factors are the cost of acquisition, operations, service, and disposal. For products or systems that are not available for immediate purchase, it may be necessary to include the costs associated with conceptual analysis, feasibility studies, development and design, logistics support analysis, manufacturing, and testing.

In discussing the LCC of a system or a product versus a project, a distinction is often made between the various phases of the two. The main difference is that the project usually terminates when the system or product enters its operational life. The life cycle of the system or product, however, continues far beyond that point. In Chapter 1, we introduced the five life-cycle phases of a project. Here we introduce the five life-cycle phases of a system or product:

1. Conceptual design phase

2. Advanced development and detailed design phase

3. Production phase

4. System operations and maintenance phase

5. System divestment/disposal phase

The need for life-cycle costing arises because decisions made during the early phases of a project inevitably have an impact on future outlays as the design evolves and the product matures. This need was recognized in the mid-1960s by the Logistics Management Institute, which issued a report stating that “the use of predicted logistics costs, despite their uncertainty, is preferable to the traditional practice of ignoring logistics’ costs because the absolute accuracy of their quantitative values cannot be assured in advance.”

An LCC analysis is intended to help managers identify and evaluate the economic consequences of their decisions. In 1978, the Massachusetts Institute of Technology (MIT) Center for Policy Alternatives published one of the first studies on LCC estimates. The focus was on appliances; some of the estimates are summarized in Table 4.1. As can be seen, the cost of acquisition was between 40.9% and 60.2%; the rest was spent after the acquisition on operations, maintenance, and disposal. Nevertheless, the decisions made at the acquisition stage affect 100% of the LCC. Because the product’s design dictates its LCC, it is of utmost importance to consider different options and their overall impact. A design that increases the production costs may be justified if it reduces the system’s operational costs over its useful life.

TABLE 4.1 LCC Estimates for Appliances

Cost element   Air conditioners   Refrigerators
Useful life    10 years           15 years
Acquisition    $204 (58.7%)       $295 (40.9%)    (60.2%)
Operations     $131 (37.8%)       $392 (54.3%)    (26.8%)
Service        $4 (1.2%)          $19 (2.6%)      (11.9%)
Disposal       $8 (2.3%)          $16 (2.2%)      (1.1%)
Total          $347 (100%)        $722 (100%)     (100%)

The MIT research demonstrated the importance of considering costs that are incurred during the operational stage of a system or product. This led the principal investigators to propose the establishment of consumer LCC data banks. Today, information on the operational costs of appliances such as energy consumption of refrigerators is posted on the units in the retail outlets. Similarly, the Environmental Protection Agency makes data on gasoline mileage of passenger cars readily available to the public.

A parallel situation exists for purchased commodities, as well as for research, development, and construction projects, in which decisions made in the early stages have a significant impact on the entire LCC. Engineering projects in which a new system or product is being designed, developed, manufactured, and tested may span years, as in the case of a new automobile, or decades in the case of a nuclear power plant. New product development takes anywhere from several months to several years. In lengthy processes of this type, decisions made at the outset may have substantial, long-term effects that are frequently difficult to analyze. The tradeoff between current objectives and long-term consequences of each decision is therefore a strategic aspect of project management that should be integrated into the project management system.

A typical example of a decision that has a long-term effect deals with the selection of components and parts for a new system at the advanced development and detailed design phase. Often, manufacturing costs can be reduced by selecting less expensive components and parts at the expense of a higher probability of failures during the operational life of the system. Another example is the decision regarding inspection and testing of components and subassemblies. Time and money can be saved at the early stages of a project by minimizing these efforts, but design errors and faulty components that surface during the operational phase may have severe cost consequences.

A third example relates to the need for logistics support. In this regard, consider the maintenance costs during the operational phase of a system. These costs can be reduced by including in the design built-in test equipment that identifies problems, locates their source, and recommends a corrective course of action. Systems of this type that combine sensors with automated checklists and expert systems logic are expensive to develop, but in the long run decrease maintenance costs and increase availability.

LCC models track the costs of development, design, manufacturing, operations, maintenance, and disposal of a system over its useful life. They relate estimates of these cost components to independent (or explanatory) decision variables. By developing a functional representation [known as a cost estimating relationship (CER)] of the cost components in terms of the decision variables, the expected effect of changing any of the decision variables on one or more of the cost components can be analyzed.

A typical example of a CER is the effect of work design on the cost of labor. One aspect of this effect is the learning phenomenon discussed in Chapter 9. Because the slope of the learning curve depends on the type of manufacturing technology used, a CER can help the design engineers select the most appropriate technology. This situation is depicted in Figure 4.1, where two manufacturing technologies are considered. Technology 1 requires lower labor cost for the first unit produced but has a slower learning rate than that of technology 2. The decision to adopt either technology depends on the number of units required and the cost of capital (assuming that everything else is equal). For a small number of units, technology 1 is better, as labor costs are lower in the early stages of the corresponding learning curve. Also, if the cost of money is high, then technology 1 might be preferred because it displaces a substantial portion of the labor cost into the future. Finally, for a large number of units, technology 2 is preferred. In Figure 4.1, the point where the two technologies yield the same total cost is called the breakeven point.
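The breakeven reasoning above can be made concrete with a short sketch. The log-linear learning-curve form, the learning rates, and the first-unit costs below are hypothetical figures chosen only to illustrate how the breakeven quantity between two technologies might be located:

```python
import math

def unit_cost(first_unit_cost, learning_rate, n):
    """Cost of the n-th unit under a log-linear learning curve.
    learning_rate is the cost multiplier per doubling of output (e.g., 0.85)."""
    b = math.log(learning_rate) / math.log(2)
    return first_unit_cost * n ** b

def total_cost(first_unit_cost, learning_rate, units):
    """Cumulative labor cost of producing the given number of units."""
    return sum(unit_cost(first_unit_cost, learning_rate, n)
               for n in range(1, units + 1))

def breakeven_units(c1, r1, c2, r2, max_units=10_000):
    """Smallest quantity at which technology 2's cumulative cost drops
    below technology 1's, or None if it never does within max_units."""
    t1 = t2 = 0.0
    for n in range(1, max_units + 1):
        t1 += unit_cost(c1, r1, n)
        t2 += unit_cost(c2, r2, n)
        if t2 < t1:
            return n
    return None

# Technology 1: cheaper first unit, slower learning (95% curve).
# Technology 2: dearer first unit, faster learning (80% curve).
q = breakeven_units(100.0, 0.95, 150.0, 0.80)
```

For these assumed figures the crossover occurs in the low teens of units; below it technology 1 is cheaper, above it technology 2 wins, mirroring Figure 4.1.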

Figure 4.1 Learning curves for two technologies.

In this example (as in many others) the importance of the LCC model increases when the proportion of manufacturing, operations, and maintenance costs is greater than the proportion of design and development costs over the lifetime of the product or system.

The development and widespread use of LCC models is particularly justified when a number of alternatives exist in the early stages of a project’s life cycle and the selection of an alternative has a noticeable influence on the total LCC. At the outset of a project, they provide a means of evaluating alternative designs; as work progresses, they may be called on to evaluate proposed engineering changes. These models are also used in logistics planning, where it is necessary, for example, to compare different maintenance concepts, training approaches, and replenishment policies. At a higher level, model results support decisions regarding logistic and configuration issues, the selection of manufacturing processes, and the formulation of maintenance procedures. By proper use, engineers and managers can choose alternatives so that the LCC is minimized while the required system effectiveness is maintained. The development and application of LCC models therefore is an essential part of most engineering projects.

As another example, let us consider a project involving the construction of an office building in which the windows can be either single- or double-pane glass. Material and installation costs make the initial investment in the second option greater than in the first; however, if an LCC analysis is conducted, then the cash flow over the useful life of the windows should be evaluated. The aim would be to consider not only the initial investment but also the intermittent and recurrent costs resulting from the decision, such as the loss of energy as a result of differences in insulating ability. Taking qualitative factors into account, though, presents a problem. Although double-pane windows have technical advantages, such as better noise insulation, it is difficult if not impossible to translate these types of advantages into monetary terms. If this is the case, then the multi-criteria methods for project evaluation discussed in Chapters 5 and 6 should be used.
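The quantitative part of the window comparison can be sketched as a present-value calculation. All figures below (purchase prices, annual energy costs, useful life, and discount rate) are invented assumptions, not data from the text:

```python
def npv(rate, cash_flows):
    """Present value of a cost stream; cash_flows[t] is the cost in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

LIFE = 25     # assumed useful life of the windows, in years
RATE = 0.08   # assumed MARR

# Year 0 is the purchase; later years are annual energy losses.
single_pane = [10_000] + [1_400] * LIFE   # lower purchase, higher energy cost
double_pane = [16_000] + [600] * LIFE     # higher purchase, lower energy cost

lcc_single = npv(RATE, single_pane)
lcc_double = npv(RATE, double_pane)
```

With these assumptions the double-pane option has the lower LCC despite its higher initial investment, which is exactly the tradeoff the text describes; with a shorter life or a higher discount rate the ranking could reverse.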

4.2 Uncertainties in Life-Cycle Cost Models

In the conceptual design phase where LCC models are usually developed, little may be known about the system, the activities required to design and manufacture it, its modes of operation, and the maintenance policies to be employed. Consequently, LCC models are subject to the highest degree of uncertainty at the beginning of a project. This uncertainty declines as progress is made and additional information becomes available.

Because decisions taken in the early stages of a project’s life cycle have the potential to affect the overall costs more than decisions taken later, the project team faces a situation in which the most critical decisions are made when uncertainty is highest. This is illustrated in Figures 4.2 and 4.3, where the potential effect of decisions on cost and the corresponding level of uncertainty are plotted as a function of time. From these graphs, the importance of a good LCC model in the early phases of a system’s life cycle is evident.

There are two principal types of uncertainty that LCC model builders should consider: (1) uncertainty regarding the cost-generating activities during the system’s life cycle, and (2) uncertainty regarding the expected cost of each of these activities. The first type of uncertainty is typically present when a new system is being developed and few historical data points exist. The equipment used on board several of the early earth-orbiting satellites and the first space shuttle, Columbia, falls into this category. There was a high level of uncertainty with respect to maintenance requirements for this equipment as well as the procedures for operating and maintaining the launch vehicles and supporting facilities. Maintenance practices were finalized only after sufficient operational experience was accumulated. The reliability and dependability of these systems were studied carefully to determine the required frequency of scheduled maintenance.

Figure 4.2 Percentage of the budget affected by decisions made in each life-cycle phase of a system.

Figure 4.3 Cost estimate errors over time.

Nevertheless, the accuracy of LCC models in which this type of uncertainty is present is relatively low, implying that their benefits may be somewhat limited to providing a framework for enumerating all possible cost drivers and promoting consistent data collection efforts throughout the life of the system. But even if this were the only use of the model, benefits would accrue from the available data when the time came to upgrade or build a second-generation system.

The second type of uncertainty, estimating the magnitude of a specific cost-generating activity, is common to all LCC models. There are multiple sources of this type of uncertainty, such as future inflation rates, the expected efficiency and utilization of resources, and the failure rate of system components. Each affects the accuracy of the cost estimates. To obtain better results, sophisticated forecasting techniques are often used, fueled by a wide array of data sources. Analysts who build LCC models should always trade off the desired level of accuracy with the cost of achieving that level. Most engineering projects are associated with improving current systems or developing new generations of existing systems. For such projects it is frequently possible to increase the accuracy of cost estimates by investing more effort in collecting and analyzing the underlying data. Therefore, it is important to determine when the point of diminishing returns has been reached. More sophisticated models may pose an increasingly problematic challenge to their intended users and may become more expensive or complicated than the quality of the input data can justify.

The accuracy of cost estimates changes over the life cycle of the system. During the conceptual design phase, a tolerance of −30% to +50% may be acceptable for some factors. By the end of the advanced development and detailed design phase, more reliable estimates are expected to be available. Further improvement is realized during the production and system operations phases when field data are collected.

4.3 Classification of Cost Components

The selection of a specific design alternative, the adoption of a maintenance or training policy, or the analysis of the impact of a proposed engineering change is based on the tradeoff between the expected costs and the expected benefits of each candidate. To ensure that the economic analysis is complete, the LCC model should include all significant costs that are likely to arise over the system’s life cycle. In this effort it is essential for the model builder to consider the type of system being developed. On the basis of the logical design of the project, common management concerns, and supporting data requirements, the cost classifications and structures can be defined.

Many ways of classifying costs are possible in an LCC analysis. Some are generic, whereas others are tailored to meet individual circumstances. In the following discussion, we present several commonly used schemes. Each can be modified to fit a specific situation, but a particular application may require a unique approach.

One way to classify costs is by the five life-cycle phases:

1. Cost of the conceptual design phase. This category highlights the costs associated with early efforts in the life cycle. These efforts include feasibility studies, configuration analysis and selection, systems engineering, initial logistic analysis, and initial design.

The cost of the conceptual design phase usually increases with the degree of innovation involved. In projects aimed at developing new technologies, this phase tends to be long and expensive. For example, consider the development of a new drug for AIDS or the development of a permanently manned lunar base. In such projects, high levels of uncertainty motivate in-depth feasibility studies, including the development of models, laboratory tests, and detailed analyses of alternatives. When a modification or improvement of an existing system is being weighed, the level of uncertainty is lower, and consequently, the cost associated with the conceptual design phase is lower. This is the case, for example, with many construction projects in which the use of new techniques or technologies is not the main issue.

The LCC model can be used in this phase to support benefit-cost analyses. One must proceed with caution, however, because initial LCC estimates may be subject to large errors. A comparison of alternatives is appropriate only when the cost difference between them is measurably larger than the estimation errors and hence can be detected by the LCC models.

2. Cost of the advanced development and detailed design phases. Here the cost of planning and detailed design is presented. This includes product and process design; preparation of final performance requirements; preparation of the work breakdown structure, schedule, budget, and resource management plans; and the definition of procedures and management tools to be used throughout the life cycle of the project.

These phases are labor intensive. Engineers and managers design the product and plan the project for smooth execution. Attempts to save time and money by starting implementation prior to a satisfactory completion of these phases can lead to future failures. The development of a good product design and a comprehensive project plan are preconditions for successful implementation. In the advanced development and detailed design phase of the LCC analysis, accurate estimates of cost components are required. These estimates are used, in part, to support decisions regarding the selection of alternative technologies and the logistic support system for the product.

3. Cost of the production phase. This category consists of the costs associated with the execution of the design, including the construction of new facilities or the remodeling of existing facilities for assembly, testing, production, and repair. Also included are the actual costs of equipment, labor, and material required for operations, as well as blueprint reproduction costs for engineering drawings and the costs associated with documenting production, assembly, and testing procedures.

In many projects and systems this is the highest cost phase. The quality of the requirements and design decisions made earlier in the project determine the actual cost of production. By accumulating and storing the actual costs in appropriate databases, LCC analysis can be improved for similar future projects. The LCC model in this phase becomes increasingly accurate, making detailed cost analysis of alternative operations and maintenance policies possible.

4. Cost of operating and maintaining the system. This category identifies the costs surrounding the activities performed during the operational life of the system. These include the cost of personnel required for operations and maintenance together with the cost of energy, spare parts, facilities, transportation, and inventory management. Design changes and system upgrade costs also fall into this category.

5. Cost of divestment/disposal phase. When the end of the useful life of a system has been reached, it must be phased out. Parts and subassemblies must be inventoried, sold for scrap, or discarded. In some cases, it is necessary to take the system apart and dispose of its components safely. The phasing out or disposal of a system might have a negative cost (i.e., produce revenue) when it is sold at the end of its useful life, or it might have a positive cost (often high), as in the case of a nuclear reactor that has to be carefully dismantled and its radioactive components safely discarded.
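The phase-based scheme above can be illustrated with a toy rollup: each cost record below is tagged with its life-cycle phase, and the share of the total LCC per phase is computed. All amounts are invented (in $1,000):

```python
# Hypothetical cost records tagged by life-cycle phase (amounts in $1,000).
costs = [
    ("conceptual design", 120),
    ("development and detailed design", 380),
    ("production", 900),
    ("operations and maintenance", 750),
    ("divestment/disposal", 50),
]

total = sum(amount for _, amount in costs)
share = {phase: round(100 * amount / total, 1) for phase, amount in costs}
```

A rollup like this is what underlies a phase-by-phase comparison of the kind shown in Figure 4.4.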

The relative importance of each phase in the total LCC model is system specific. Figure 4.4 presents a comparison for two generic systems by life-cycle phase. In general, when alternative projects are being considered, the relative magnitude and timing of the different cost components figure prominently in the analysis. In Figure 4.4, system A requires substantial research and development efforts. The conceptual design phase and the advanced development phase account for 50% of the LCC. In system B, these two phases account for only 30% of the total cost. Thus, system B can be thought of more as a production/implementation project, whereas system A represents more of a design/development project.

Figure 4.4 Cost comparison of two projects by life-cycle phase.

A second classification scheme has its origins in manufacturing and is based on cost type; that is, direct labor versus indirect labor, subcontracting, overhead allocations, and material (direct and indirect), as illustrated in Figure 4.5. These categories parallel those traditionally found in cost accounting, so data should be readily available for many applications.

Figure 4.5 Cost classification for manufacturing.

A third means of classification is based on the time period in which each cost component is realized. To make this scheme operational, it is necessary to define a minimum time period, such as 1 month or 1 quarter, in the system’s life cycle. All costs that are incurred in this predetermined time period are grouped together. This is illustrated in Figure 4.6, where the graphs provide a 12-month history of costs. This type of classification scheme is important when cash flow constraints are considered. Two projects with the same total cost may have a different cost distribution over time. In this case, because of cash flow considerations (the time value of money), the project for which cost outlays are delayed may be preferred.
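The effect of timing alone can be seen in a small sketch. Both hypothetical projects below cost 1,000 in total, but discounting at an assumed 10% MARR favors the one whose outlays come later:

```python
def npv(rate, cash_flows):
    """Present value of a cost stream; cash_flows[t] is the cost in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

project_early = [600, 300, 100]   # front-loaded outlays, total 1,000
project_late  = [100, 300, 600]   # delayed outlays, total 1,000

cost_early = npv(0.10, project_early)
cost_late = npv(0.10, project_late)
```

Even though the undiscounted totals are identical, the delayed stream has the lower present cost, which is why the time-period classification matters for cash flow analysis.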

Figure 4.6 LCC as a function of time.


A fourth classification scheme is by work breakdown structure (WBS). In this approach, the cost of each element is estimated at the lowest level of the WBS. If more detail is desired, each element can be disaggregated further by life-cycle phase (first classification), cost type (second classification), or time period (third classification).

As the situation dictates, other schemes, perhaps based on the bill of material, the product structure, or the organization breakdown structure (OBS), might be used. In particular, classification based on the OBS has proved useful as a bridge between the LCC model and the project budget, which traditionally is prepared along organizational lines.

It goes without saying that the scheme chosen should directly support the kinds of analyses to be undertaken. Thus, if future cash flow analyses are required, then the timing of each cost component is important. If, however, a system is developed by one organization (a contractor) for use by another (the client), and the customer is scheduled to deliver some of the subsystems, as in the case of government-furnished equipment in government contracts, then classification of cost based on the organization responsible for each cost component might be appropriate.

Sophisticated LCC models apply several classification schemes in the cost breakdown structure (CBS) so that each cost component can be categorized by the life-cycle phase and time period in which it arises, the WBS element in which it appears, and the class type from an accounting point of view. The cost of developing and maintaining such models depends on the desired resolution (number of subcategories in each classification scheme) and accuracy of the cost estimates, the updating frequency, and the number of classification schemes used. LCC model builders should strive to balance development costs with maintenance and data collection requirements.

An example of an LCC model for a hypothetical system in which a simple three-dimensional cost structure is used is given next. In this classification scheme, costs are broken down by (1) the life-cycle phase, (2) the quarter in which they occur, and (3) labor and material. The data are presented in Table 4.2.

In the example we assume that three different models of the same system are being developed during the first two years (eight quarters). Production starts on the first model before the detailed design of the other two is finalized. Thus, during quarters 6 through 8, advanced development and detailed design costs are present alongside production costs. Similarly, the first model becomes operational before the production phase of the other models is complete, implying overlapping costs in these categories in quarters 9 and 10. The three models are phased out in quarters 14, 15, and 17, as indicated by divestment costs and reduced operations and maintenance costs in those periods.

TABLE 4.2 Example of an LCC Model ($1,000)

System life-cycle phase

          Conceptual      Advanced dev. &
          design          detailed design    Production      Operations &     Divestment/
                                                             maintenance      disposal
Quarter   Labor   Mat'l   Labor   Mat'l      Labor   Mat'l   Labor   Mat'l    Labor
   1        2
   2        3
   3        3
   4        1               3
   5                        4       1
   6                        5       1          10      3
   7                        5       1          12      4
   8                        3       1          15      6
   9                                           10      5       3       1
  10                                            7      3       4       2
  11                                                           5       3
  12                                                           5       3
  13                                                           5       3
  14                                                           5       3        1
  15                                                           4       2        1
  16                                                           4       2
  17                                                           3       1        1
  18
Total       9       –      20       4          54     21      38      20        3

The LCC data in Table 4.2 can be used to produce several views, each giving a different perspective and highlighting different aspects of the project. For example, in Figure 4.7 we plot the cumulative LCC of the system over time, as well as the cost that is incurred in each quarter. The LCC can also be presented by life-cycle phase. This is illustrated in Figure 4.8. A third possibility is labor cost versus material cost, as shown in Figure 4.9. Although the periodic and total LCCs are the same in Figures 4.8 and 4.9, the breakdown of these costs is different and can serve different purposes, as discussed in the next section.
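The cumulative view of Figure 4.7 can be reproduced directly from Table 4.2. Below is a minimal Python sketch; the quarterly totals (labor plus material, in $1,000) are my transcription of the table and sum to 169, matching the table's Total row.

```python
from itertools import accumulate

# Quarterly totals (labor + material, $1,000) transcribed from Table 4.2
quarterly = [2, 3, 3, 4, 5, 19, 22, 25, 19, 16, 8, 8, 8, 9, 7, 6, 5, 0]

cumulative = list(accumulate(quarterly))
print(cumulative[-1])  # total LCC: 169, matching the table's Total row
```

Plotting `quarterly` and `cumulative` against the quarter index yields the two curves shown in Figure 4.7.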

Figure 4.7 Total LCC of the system.

Figure 4.8 LCC by phase.


Figure 4.9 Cost breakdown by labor and material.


In the example, a fourth classification (or dimension) might correspond to the WBS and a fifth to the OBS. By using a five-dimensional grid, questions such as, “What is the expected cost of software development by the main contractor for the real-time control system during the third quarter of the project?” can be answered. The types of questions and scenarios for which the LCC model is to be exercised are the principal consideration in its design.

4.4 Developing the LCC Model

The first step in the design of an LCC model is to identify the types of analyses that it is intended to support. The following is a list of several common applications.

Strategic or long-range budgeting. Because the LCC model covers the entire life cycle of a system, it can be used to coordinate investment expenditures over the system’s useful life or to adjust the requirement for capital for one system or project with capital needed or generated by other systems or projects. Such long-range budget planning is important for strategic investment decisions.

Strategic or long-range technical decisions. Strategic decision making as it relates to such issues as the redesign of a system or the early termination of a research and development (R&D) project is difficult to support. The LCC model can be used to monitor changes in cost estimates as the project evolves. Revised estimates of production, operations, or maintenance costs that are substantially higher than the baseline figures may serve as a trigger for unscheduled design reviews, major changes in system engineering, or even a complete shutdown of the project. Because LCC estimates improve over time, rough projections made in the early phases of a project’s life cycle may be updated later and provide managers with more accurate data to support the technical decision making process.

Data analysis and processing. LCC models routinely serve as a framework for the collection, storage, and retrieval of cost data. By using an appropriate data structure (e.g., LCC breakdown structure), the cost components of current or retired systems can be analyzed simultaneously to yield better estimates for future systems.

Logistic support analysis. Logistics is generally concerned with transportation, inventory and spare parts management, database systems, maintenance, and training. Questions such as which maintenance operations should be performed and at what frequency, how much to invest in spare parts, how to package and ship systems and parts, which training facilities are required, and which types of courses should be offered to operators and maintenance personnel are examples of decisions supported by LCC analyses.

Once agreement is reached on the types of analyses that will be conducted, LCC model development can proceed. The following steps should be carried out:

1. Classification. In this step the classification schemes are developed. Major activities that generate cost are listed and major cost categories (labor, material, etc.) are identified. For example, the LCC data presented in Table 4.2 can be classified by the organizational unit responsible for each cost component and the activities performed by that unit.

2. CBS. Next, a coding system is selected to keep track of each cost component. To gain further insights, the latter may be organized in a multidimensional hierarchical structure based on the system chosen in step 1. Each component at each level of the hierarchy is assigned an identification number. The CBS enables the cost components to be aggregated based on the classification scheme. Thus, with the proper scheme the labor cost of a specific activity in a given period or the cost of a specific subsystem during its operational phase can be determined. The CBS links cost components to organizational units, to WBS elements, and to the system’s bill of material.

As an example, consider the CBS of a project aimed at developing a new radar system. The system is composed of a transmitter, receiver, antenna, and computer. The plan is to subcontract the computer design and its software as well as part of the antenna servo, while developing the rest of the components in-house. The coding scheme for the CBS is as shown in Table 4.3.

TABLE 4.3 Coding and Classification Scheme for LCC

Digit   Classification           Code assignment
1       Who performs the work    Performed in-house         1
                                 Subcontracted              2
2       System part              Transmitter                1
                                 Receiver                   2
                                 Antenna                    3
                                 Computer                   4
3       Life-cycle phase         Conceptual                 1
                                 Detailed design            2
                                 Production                 3
                                 Operations & maintenance   4
                                 Divestment                 5
4       Type of cost             Direct labor               1
                                 Direct material            2
                                 Overhead                   3

Using this simple four-digit code, a question such as, “What is the direct cost of material to be used during the production phase of the receiver?” can be answered by retrieving all cost components with the following LCC codes:

first digit: 1 or 2
second digit: 2
third digit: 3
fourth digit: 2

Thus, we would search for the LCC codes 1232 and 2232. The corresponding cost components might represent the cost at different months of the project, assuming that cost is estimated on a monthly basis. Other situations are possible.
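The retrieval described above is easy to prototype. In the sketch below, the four-digit coding scheme comes from Table 4.3, but the cost records and dollar values are hypothetical:

```python
# Hypothetical cost records ($1,000) keyed by the four-digit CBS code of
# Table 4.3 (digit 1: performer, 2: system part, 3: phase, 4: cost type)
cost_components = {
    "1232": 40.0,  # in-house, receiver, production, direct material
    "2232": 15.0,  # subcontracted, receiver, production, direct material
    "1231": 55.0,  # in-house, receiver, production, direct labor
    "1432": 30.0,  # in-house, computer, production, direct material
}

def total_cost(components, part, phase, cost_type):
    """Sum components matching digits 2-4, regardless of performer (digit 1)."""
    return sum(cost for code, cost in components.items()
               if code[1:] == part + phase + cost_type)

# Direct material cost of the receiver during production (codes 1232 + 2232)
print(total_cost(cost_components, "2", "3", "2"))  # 55.0
```

The same dictionary supports aggregation along any other digit by changing which positions of the code are matched.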

3. Cost estimates. After the various cost components are identified and organized within the chosen classification scheme, the final step is to estimate each cost component. The American Association of Cost Engineers (AACE 1986) has proposed three classifications for this purpose:

Order of magnitude: accuracy of −30% to +50%. An estimate that is made without any detailed engineering data.

Budget: accuracy of −15% to +30%. This estimate is based on preliminary layout design and equipment details, and is performed by the client to establish a budget for a new project (at the request for proposal (RFP) stage).

Definitive: accuracy of −5% to +15%. This cost estimate is based on well-defined engineering data and a complete set of specifications.

The work involved in preparing cost estimates is a function of the required accuracy and the size and cost of the project. In the process industries, the typical costs of preparing estimates were reported by Pikulik and Diaz (1977):

Order-of-magnitude estimates

Project cost ($ million)   Cost of estimate ($ thousand)
Up to 1                    7.5 to 20
1 to 5                     17.5 to 45
5 to 50                    30 to 60

Budget estimates

Project cost ($ million)   Cost of estimate ($ thousand)
Up to 1                    20 to 50
1 to 5                     45 to 85
5 to 50                    70 to 130

Definitive estimates

Project cost ($ million)   Cost of estimate ($ thousand)
Up to 1                    35 to 85
1 to 5                     85 to 175
5 to 50                    150 to 330

A variety of estimation procedures are used in industry, all of which are based on the assumption that past experience is a valid predictor of future performance. Estimation procedures fall into one of two categories: (1) causal, whereby the aim is to derive cost estimating relationships (CERs); and (2) noncausal, or direct. Causal estimates follow from an assumed functional relationship between the cost component and one or more explanatory variables. For example, the cost of fuel required during the operational life of a car might be estimated as a function of the distance driven, the weight of the car, the car's engine size, and the expected road conditions. An equation relating the cost of fuel to the explanatory variables can be developed by using regression analysis or any other curve fitting technique (see Section 9.2.5). With the use of CERs, the expected effect of changing any explanatory variable on the LCC can be analyzed. To develop CERs, past data on the values of the cost component under investigation and the explanatory variables are required.

As an example, consider the equipment CER proposed by Fabrycky and Blanchard (1991),

C = C_r × (Q_c / Q_r)^β      (4.1)

where

C = cost of the new design of size Q_c

C_r = cost of the existing reference design of size Q_r

Q_c = design size of the new design

Q_r = design size of the existing reference design

β = correlation parameter, 0 < β ≤ 1

Taking the logarithm of both sides of Eq. (4.1) gives the CER

log C − log C_r = β (log Q_c − log Q_r)      (4.2)

where β is to be determined from a regression analysis.

Suppose that a cost estimate for a new 750-gallon water desalination system is required and that information on the actual cost of five systems is available. These data are presented below.

System   Cost      Size (gallons)
1        $14,000   200
2        $18,000   300
3        $21,500   400
4        $25,000   500
5        $28,000   600

A pairwise comparison between the five systems yields the following data in the form needed for Eq. (4.2).

C_r      C        Q_r   Q_c   log C − log C_r   log Q_c − log Q_r
14,000   18,000   200   300   0.109             0.176
18,000   21,500   300   400   0.077             0.125
21,500   25,000   400   500   0.066             0.096
25,000   28,000   500   600   0.049             0.079

A regression analysis using the four pairwise comparisons yields the CER

log C − log C_r = 0.628 (log Q_c − log Q_r)

with R² = 0.983. Now, using the fourth system as a reference (Q_r), the estimated cost for a new 750-gallon (Q_c) system of the same type is

C = $25,000 × (750/500)^0.628 = $32,249

This type of CER is useful for a company that has to estimate the cost of new systems that differ from existing systems mainly by size.
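Because Eq. (4.2) is a regression through the origin, β is simply the ratio of sums of products over the pairwise log differences. The following standard-library sketch fits all four comparisons and reproduces the quoted β = 0.628 and the $32,249 estimate to within a dollar:

```python
import math

# Actual costs and tank sizes of the five reference systems from the text
costs = [14000, 18000, 21500, 25000, 28000]
sizes = [200, 300, 400, 500, 600]

# Log differences for each consecutive pair, per Eq. (4.2)
y = [math.log10(costs[i + 1] / costs[i]) for i in range(4)]
x = [math.log10(sizes[i + 1] / sizes[i]) for i in range(4)]

# Least-squares slope through the origin over the four pairwise comparisons
beta = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# New 750-gallon system, using the fourth (500-gallon) system as reference
estimate = 25000 * (750 / 500) ** beta
print(round(beta, 3), round(estimate))  # beta rounds to 0.628
```

Swapping in a new cost/size history is all that is needed to recalibrate the CER for a different product family.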

Cost estimates can alternatively be derived using noncausal methods, such as:

Judgment and experience, rules of thumb, or the use of organizational standards for similar activities. These techniques are informal, inexpensive, and therefore appropriate when formal LCC models and cost estimates with high levels of accuracy are not essential.

Analogy to a similar system or component and an appropriate adjustment of cost components according to the difference between the systems.

Technical estimation based on drawings, specifications, time standards, and values of parameters such as mean time to failure and mean time to repair.

Value of contracts for similar systems, such as office cleaning contracts and maintenance contracts. It is also possible to estimate costs on the basis of bids from contractors who respond to RFPs.

Each technique requires a combination of resources, such as time, data, equipment, and software, and may call on the expertise and experience of people within or external to the organization. From the data and resources available, the required accuracy, and the cost of using each cost estimating technique, the most suitable approach for each application can be selected. For each cost component, one or more cost estimating techniques might be appropriate. In the early stages of the life cycle, technical estimation is usually not feasible as drawings and other information are not available. For new systems, analogy might not be feasible if similar systems have not been developed or previously deployed.

Let us demonstrate the derivation of a CER for a project related to the development of a training course. Stark Awareness, Inc. is a company that specializes in developing such courses for its customers and wishes to estimate the labor hours required for putting together a new course. The deliverable is a packet of materials that will include all of the documents and slides required for conducting the class. Dr. Stark, the chief statistician for the company, decided to develop a CER based on expert judgment, in this case a team of instructors who have wide experience in this type of project. The experts identified the relevant parameters and the labor hours associated with each. For example, for the parameter “number of lecture hours” for the course, it was agreed that for every new lecture hour there is a need to spend 15 labor hours on activities such as reading new material, summarizing the main points, and preparing PowerPoint slides.

The above process led to the following equation:

LH = 15L + 4E + 20T + 10P

where

LH = number of labor hours required to develop the new course

L = number of lecture hours for the new course

E = number of exercises that students will be assigned

T = number of tests to be given

P = number of course projects

For example, if there is a need to develop a training program that consists of 12 lecture hours, 3 exercises, and one project, then the estimated number of labor hours required to organize the class is:

LH = 15 × 12 + 4 × 3 + 10 × 1 = 202 hours
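The expert-judgment CER above is just a linear function of four drivers, so it can be captured in a one-line helper (the function name is my own):

```python
def course_labor_hours(lectures, exercises, tests, projects):
    """Expert-judgment CER: 15 h per lecture hour, 4 h per exercise,
    20 h per test, 10 h per project, as agreed by the instructor panel."""
    return 15 * lectures + 4 * exercises + 20 * tests + 10 * projects

# The text's example: 12 lecture hours, 3 exercises, no tests, 1 project
print(course_labor_hours(12, 3, 0, 1))  # 202
```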

LCC models are relatively mature in the areas of software development and maintenance planning. Several models exist for estimating labor requirements for different tasks as a function of system characteristics and the experience level of the project team. One such model, called COCOMO II, is based on the analysis of data collected from approximately 160 projects (Boehm et al. 2000). To estimate the resource requirements (the dependent variable) for a software project, the authors proposed using the following parameters (independent variables):

Project size, expressed as the number of old and new lines of code

Technical complexity of the new system

Risk level

Size of the databases required for the system

Experience of the project team

Complexity of communication channels

Previous experience of the organization on projects of similar nature

Organizational ability in the application of project management methodology

Availability of advanced programming tools

Organizational turnover

Obviously, it is impractical to use the same model for every project; however, it is not uncommon for an organization to use similar estimation techniques and models for similar projects.

The selection of a cost estimating procedure depends on data availability, required accuracy, and cost. The analyst should consider all three aspects in the process of model design and application. To demonstrate further the process of developing an LCC model, consider the problem of estimating energy costs in residential buildings. It is possible to reduce the cost of energy by proper design, the use of insulation and improved ventilation, and the selection of efficient heating and cooling devices. The following is an example of a basic LCC model for such a project. The model has only two classifications: the first centers on the activities that generate cost, and the second is based on time. Table 4.4 depicts levels 1 and 2 of the CBS for the cost-generating activities.

TABLE 4.4 Partial CBS for Residential Building Example

1. Cost of engineering
   1.1 Structural design
   1.2 Interior design
   1.3 Drawing preparation
   1.4 Supervision
   1.5 Management

2. Cost of construction
   2.1 Equipment
   2.2 Contractors
   2.3 Material
   2.4 Labor
   2.5 Energy
   2.6 Inspection
   2.7 Management

3. Cost of operations
   3.1 Energy
   3.2 Maintenance
   3.3 Consumables
   3.4 Subcontractors

A time dimension is added to the model by introducing the timing of each cost component. For example, the structural design (1.1) may take 3 months. Assuming that the cost of the first month is $500, the cost of the second month is $1,100, and the cost of the last month is $400, the total cost of structural design is $500 + $1,100 + $400 = $2,000 over a 3-month period. By assigning the cost of each cost component in Table 4.4 to a specific month, the time aspect of this LCC model is introduced.

If more detail is needed, then the model can be expanded to three or four levels. For example, consider item 2.1, equipment, which can be broken down further by air-conditioning system, heater, and so on. Once the lowest level is identified and the data elements are defined, the model can be used to estimate the cost of each component for each design alternative on a periodic basis, if necessary. Alternatives might differ in their total LCC, in the allocation of costs over the life cycle, and in the allocation of costs among different system components. As discussed in Chapters 5 and 6, the selection of the best alternative depends on the evaluation criteria specified. System reliability, maintenance requirements, and safety are common criteria, but LCC usually plays a predominant role. In particular, if minimum net present cost is the chosen criterion, then of two design alternatives with the same total LCC, the one that delays monetary outlays the longest would be preferred. In the above example, it should not be surprising that this might lead to an energy-inefficient house: one that is less expensive to build but more expensive to maintain.
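The time-value argument can be made concrete with a short discounted-cost comparison; the two cost streams and the 2% per-period discount rate below are invented for illustration:

```python
def present_cost(outlays, rate):
    """Net present cost of a stream of period-by-period outlays ($1,000)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(outlays))

# Two alternatives with the same $200K total LCC (hypothetical figures)
early = [100, 50, 50]  # large outlay up front
late = [50, 50, 100]   # large outlay delayed

pc_early = present_cost(early, rate=0.02)
pc_late = present_cost(late, rate=0.02)
print(pc_late < pc_early)  # True: delaying outlays lowers the present cost
```

With equal totals, the stream that pushes the large outlay later always has the lower net present cost for any positive discount rate.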

A possible CER for the example might be a linear equation relating the cost of heating to the insulation used and the difference between the desired temperature inside the house and the ambient temperature outside. Additional explanatory variables that might be included are the area of windows and the type of glass used.

The CBS can be as detailed as required to capture the impacts of decisions on overall cost and performance. Continuing with item 2.1, equipment can be broken down further to the level of components used in the air-conditioning system if it were thought that the selection of these components would measurably affect the LCC.

4.5 Using the Life-Cycle Cost Model

The integration of the CBS with estimates of each component produces the aggregate LCC model for the system. This model (distributed over time) is the basis for several types of analyses and decision making.

1. Design evaluations. In the planning stages of a project, alternative designs for the entire system or its components have to be evaluated. The LCC model combined with a measure of system effectiveness produce a basis for cost-effectiveness analysis during various stages of the development cycle. Methodological details are provided in Chapters 5 and 6, where issues related to risk, benefit estimation, and criteria selection are discussed.

2. Evaluation of engineering change requests (ECRs). As explained in Chapter 8, the process of ECR approval or rejection is based on estimates of cost and effectiveness with and without the proposed change. The LCC model provides the foundation for conducting the analysis.

3. Sensitivity analysis and risk assessment. In the development of CERs, parameters that affect the LCC of the system are used as the explanatory variables. A sensitivity analysis should always be conducted to see how the LCC changes as each parameter is varied over its feasible range. Depending on the nature of the project and the time horizon, some typical explanatory variables might be the rate of inflation, the cost of energy, and the minimum acceptable rate of return.

4. Logistic support analysis. The evaluation of policies for maintenance, training, stocking of spare parts, inventory management, shipping, and packaging is supported by appropriate LCC models. By estimating the cost of different alternatives for logistic support, decision makers can trade off the cost and benefits of each scenario under consideration.

5. Pareto, or ABC, analysis. This analysis is used to identify the most important cost components of a project. The first step is to sort the components by cost and then to place each into one of the following three groups:

Group A: a small percentage of the top cost components (10% to 15%), which together account for roughly 60% or more of the total cost

Group B: all cost components that are not members of group A or C

Group C: a large percentage of the bottom cost components (about 50%), which account for 10% or less of the total cost

In the sorted list, the first 10 to 15% of the cost components are members of group A and the last 50% are members of group C. The remaining components in the middle range of the list are assigned to group B. This clustering scheme is the basis for management control. The strategy is to monitor closely those items that account for the largest percentage of the total LCC (group A components). Conversely, group C components, which represent a relatively large number of items but account for a relatively small portion of the total cost, require the least amount of attention.
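The grouping rule just described can be sketched in a few lines; the cutoff fractions follow the percentages quoted in the text, and the component names and costs are hypothetical:

```python
def abc_classify(costs, a_frac=0.15, c_frac=0.50):
    """Sort components by cost (descending); top a_frac -> A,
    bottom c_frac -> C, everything in between -> B."""
    ranked = sorted(costs.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    n_a = max(1, round(a_frac * n))
    n_c = round(c_frac * n)
    groups = {}
    for i, (name, _) in enumerate(ranked):
        if i < n_a:
            groups[name] = "A"
        elif i >= n - n_c:
            groups[name] = "C"
        else:
            groups[name] = "B"
    return groups

# Ten hypothetical cost components ($1,000)
costs = {"c1": 120, "c2": 60, "c3": 15, "c4": 12, "c5": 10,
         "c6": 8, "c7": 4, "c8": 3, "c9": 2, "c10": 1}
print(abc_classify(costs))
```

In this example the two group-A items carry about 77% of the total cost and the five group-C items under 8%, consistent with the thresholds above.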

6. Budget and cash flow analysis. Here the concern is staying within budget and cash flow constraints and estimating future capital investment needs. By combining the LCC models of all projects in an organization, the net cash flow for each future period can be forecast. The results then may be used to support feasibility analyses, decisions regarding the acceptance of new projects, and recommendations for rescheduling or abandoning ongoing projects.

The LCC model is an important project management tool for strategic financial planning, logistics analysis, and technology-related decision making. Properly designed and maintained LCC models help the project manager in both planning and control by linking together the cost and technological aspects of a project. By using CERs, the impact that different alternatives have on the system’s LCC can be analyzed and used as a basis for technology evaluation and selection, resource acquisition, and configuration management.

TEAM PROJECT: Thermal Transfer Plant

Your plans for the prototype rotary combustor project have been approved. Total Manufacturing Solutions (TMS) management is now weighing the possibility of investing in a plant for manufacturing the combustors. There is a feeling, however, that the degree of subcontracting associated with producing the prototype may not be appropriate for the repetitive manufacturing environment of the new plant.

Your team has been requested to perform an LCC analysis to help determine which parts and components of the rotary combustor to manufacture in-house and which to buy or subcontract. Design your models to answer these “make or buy” questions, keeping in mind that the expected life of a rotary combustor is approximately 25 years and TMS would like to support these units throughout their life cycle. State any assumptions that you believe are necessary to estimate costs and risks. Discuss the sensitivity of your results, assumed parameter values, timing of costs, levels of risk, and so on.

Discussion Questions

1. Estimate the LCC for a passenger car. In so doing, select an appropriate CBS and explain your cost estimates.

2. Explain how the design of a car affects its LCC.

3. Compare the cost of ownership of a new car with that of a used car of similar type.

4. Explain the design factors that affect the LCC of an elevator in a New York City office building.

5. What are the sources of uncertainty in Question 4?

6. What do you think are the principal cost drivers in designing a permanently manned lunar base? What noncost factors would you want to consider?

7. Identify a potential consumer product that is not yet on the market, such as video telephones, and list the major costs in each phase of its life cycle. How might these costs be estimated?

8. Pick an R&D project of national scope, such as mapping all of the genes on a human chromosome (the human genome project). First, sketch a potential OBS for the project and identify the tasks that might fall within each organizational unit. Then develop a CBS and relate it to the OBS.

9. Develop an LCC model to assist you in selecting the best heating system for your house. Discuss the alternatives and explain the cost structure that you have selected.

10. Discuss the effect of taxes on the LCC of passenger cars. Compare domestic and imported cars.

11. Discuss the effect of LCC on the decision to locate a new warehouse.

12. Discuss a project in which the first phase of the life cycle accounts for more than 50% of the LCC.

13. Discuss a project in which the detailed design phase accounts for more than 50% of the LCC.

Exercises

1. 4.1 The cost of a used car is highly correlated with the following variables:

t = age of the car, 1 ≤ t ≤ 5 (years)
V = volume of engine, 1,000 ≤ V ≤ 2,500 (cubic centimeters)
D = number of doors, D = 2, 3, 4, 5
A = accessories and style, A = 1, 2, 3, 4, 5, 6 (qualitative)

Using regression analysis, the following relationship between the cost of a car and the four independent variables was found:

Purchase cost = (1 + 1/t) × V × (D/2 + A)

1. Plot the purchase cost as a function of the four variables.

2. Which variable has the greatest effect on cost?

3. You have a total of $5,000. List the different types of cars (combinations of the parameters) that you can afford.

4. Develop a model by which you select the best car for your needs.

5. Operations and maintenance costs for the car are estimated as follows:

annual maintenance cost = (t/2) × V × (s/1,000)
annual operating cost = (D × t + V/1,000) × (s/250)

where s is the number of miles driven annually. What is the best car (combination of parameters) for a person who drives 12,000 miles every year?

2. 4.2 A construction project consists of 10 identical units. The cost of the first unit is $25,000, and a learning curve of 90% applies to the cost and the duration of consecutive units. Assume that the first unit takes 6 months to finish and that the project is financed by a loan taken at the beginning of the project at an annual interest rate of 10%.

1. Should the units be constructed in sequence (to maximize learning) or in parallel (to minimize the cost of the loan)?

2. Find the schedule for the 10 units that minimizes the total cost of the project.

3. 4.3 Develop three cost classifications for the LCC of an office building.

4. 4.4 Develop a cost breakdown structure for the cost of an office building. Estimate the cost of each component.

5. 4.5 Show a cash flow analysis for the LCC of an office building.

6. 4.6 Perform a Pareto (ABC) analysis on the data of the LCC of an office building.

7. 4.7 Develop an estimate for the cost of a 3-week vacation in Europe.

8. 4.8 Develop an LCC model to support the decision to buy or rent a car.

9. 4.9 Natasha Gurdin is debating which of two possible models of a car to buy (A or B), being indifferent with regard to their technical performance. She has been told that the average monthly cost of owning model A, based on an LCC analysis, is $500.

1. Using the following data for model B, calculate its LCC and determine which model is the better choice for Natasha:

Purchase price                 $23,000
Life expectancy                4 years
Resale value                   $13,000
Maintenance                    $1,100 per year
Operational cost (gas, etc.)   $90 per month
Car insurance                  $1,400 per year
Mean time between failures     14 months
Repair cost per failure        $650

2. Develop a general model that can be used to calculate the LCC for a car.

10. 4.10 Your company has just taken over an old apartment building and is renovating it. You have been appointed manager and must decide which brand of refrigerator to install in each apartment unit. Your analysis should consider expenses such as purchase price, delivery charges, operational costs, insurance for service, and selling price after 6 years of use. Identify two brands of 18-cubic-foot refrigerators and compare them.

11. 4.11 You have been told that even warehouse location decisions should be based, at least in part, on the results of an LCC analysis. Discuss this issue.

12. 4.12 Maurice Micklewhite has decided to replant his garden. Show him what the cost is of making an erroneous decision at various stages of the project, starting with conceptual design and ending with the ongoing maintenance of the garden.

13. 4.13 The relative cost of each stage in the project life cycle is a function of the nature of the project or product. Generate a list of possible projects and group them by the similarities in their relative cost profile.

14. 4.14 Different organizations and customers look at different aspects of the LCC data. Select five projects and identify the relevant LCC aspects for each organization and customer involved.

15. 4.15 Develop a list of cost components for two projects and estimate their values. Identify the components that represent approximately 80% of the projects’ costs and discuss possible alternatives to reduce the LCC of one particular component. What might be the expected impact of the suggested alternatives?

Bibliography

Life-Cycle Cost

Blanchard, B. S., Design and Manage to Life Cycle Cost, Matrix Press, Chesterland, OH, 1978.

Cabeza, L. F., et al. “Life cycle assessment (LCA) and life cycle energy analysis (LCEA) of buildings and the building sector: a review.” Renewable and Sustainable Energy Reviews, Vol. 29, pp. 394–416, 2014.

Dhillon, B. S., Life Cycle Costing: Techniques, Models and Applications, Gordon and Breach Science Publishers, New York, 1989.

Earls, U. E., Factors, Formulas and Structures for Life Cycle Costing, Second Edition, Eddins-Earles, Concord, MA, 1981.

Emblemsvag, J., Life-Cycle Costing: Using Activity-Based Costing and Monte Carlo Methods to Manage Future Costs and Risks, John Wiley & Sons, New York, 2003.

Fabrycky, W. J. and B. S. Blanchard, Life Cycle Cost and Economic Analysis, Prentice Hall, Englewood Cliffs, NJ, 1991.

Nugent, D.L. and K. S. Benjamin, “Assessing the lifecycle greenhouse gas emissions from solar PV and wind energy: A critical meta-survey.” Energy Policy, Vol. 65, pp. 229–244, 2014.

Perera, H., N. Nagarur, and M. Tabucanon, “Component Part Standardization: A Way to Reduce the Life-Cycle Costs of Products,” International Journal of Production Economics, Vol. 60–61, pp. 109–117, 1999.

Riggs, J. L. and D. Jones, “Flowgraph Representation of Life-Cycle Cost Methodology: A New Perspective for Project Managers,” IEEE Transactions on Engineering Management, Vol. 37, No. 2, pp. 147–152, 1990.

Spence, G., “Designing for Total Life Cycle Costs,” Printed Circuit Design, Vol. 6, No. 8, pp. 14–17, 1989.

Yao, J., “A multi-objective (energy, economic and environmental performance) life cycle analysis for better building design,” Sustainability, Vol. 6, No. 2, pp. 602–614, 2014.

Cost Estimation

AACE, Standard Cost Engineering Terminology, American Association of Cost Engineers, Morgantown, WV, 1986.

Augustine, N. R., Augustine’s Laws, Viking Penguin, New York, 1997.

Bledsoe, J. D., Successful Estimating Methods: From Concept to Bid, RSMeans, Kingston, MA, 1991.

Boehm, B. W., E. Horowitz, R. Madachy, D. Reifer, B. K. Clark, B. Steece, A. W. Brown, S. Chulani, and C. Abts, Software Cost Estimation with COCOMO II, Prentice Hall, Upper Saddle River, NJ, 2000.

Coombs, P., IT Project Estimation: A Practical Guide to the Costing of Software, Cambridge University Press, Cambridge, England, 2003.

Emblemsvag, J., Life Cycle Costing, John Wiley & Sons, New York, 2003.

Neil, J. M. (Editor), Skills and Knowledge of Cost Engineering, Second Edition, American Association of Cost Engineers, Morgantown, WV, 1988.

Ostwald, P., Construction Cost Analysis and Estimating, Prentice Hall, Upper Saddle River, NJ, 2000.

Pikulik, A. and H. E. Diaz, “Cost Estimating for Major Process Equipment,” Chemical Engineering, Vol. 84, p. 106, 1977.

Peurifoy, R., Estimating Construction Costs, Fifth Edition, McGraw-Hill, New York, 2001.

Stewart, R. D. and R. M. Wyskida, Cost Estimator’s Reference Manual, John Wiley & Sons, New York, 1987.

Chapter 5 Portfolio Management— Project Screening and Selection

5.1 Components of the Evaluation Process

Every new project starts with an idea. Typically, new ideas arrive continuously from a variety of sources, such as customers, suppliers, upper management, and shop floor personnel. Details of the steps involved in processing these ideas and the related analyses are highlighted in Figure 5.1.

Depending on the scope and estimated costs, management may simply be interested in determining the merit of the idea or it may want to determine how best to allocate a budget among a portfolio of projects. If the organization is a consulting firm or an outside contractor, then it may want to decide on the most advantageous strategy for responding to requests for proposals (RFPs).

Of course, there are many different types of projects, so the evaluation criteria and accompanying methodology should reflect the particular characteristics of the sponsoring or responding organization. The usual divisions are public sector versus private sector, research and development (R&D) versus operations, and internal customer versus external customer. Project size, expected duration, underlying risks, and required resources are some of the factors that must weigh on the decision.

Regardless of the source or nature of the customer, screening is usually the first step. A proposed project is analyzed in a preliminary manner in light of the most prominent criteria or prevailing conditions. This should be a quick and inexpensive exercise. The results may suggest, for example, that no further effort is warranted as a result of uncertainty in the technology or the lack of a well-defined market. If some promise exists, then the project may be temporarily backlogged in deference to more attractive contenders. At some time in the future when conditions are more favorable, it may be desirable to revisit the go/no-go decision, or the project may be deemed so urgent or beneficial to the organization that it is placed at the top of the priority list. Alternatively, results of the project screening process may indicate that the proposed project possesses some merit and deserves further investigation.

Figure 5.1 Project evaluation and selection process.

If a project passes the organization screening process for evaluating new project ideas, then a more in-depth analysis should be performed with the goal of narrowing uncertainties associated with the project’s costs, benefits, and risks. In contrast to the screening process, the evaluation process usually involves extensive and in-depth data collection, the solicitation of expert opinion, sample computations, and perhaps technological forecasting. As with the screening process, several courses of action might be suggested. The proposal may be rejected or abandoned for lack of merit, it may be backlogged for later retrieval and analysis, or it may be found to be acceptable and placed on a candidate list for a comparative analysis. In some cases, it may be initiated immediately.

When the results of the evaluation process indicate that a proposal passes an acceptance threshold but that it is not clearly superior to other candidates, each proposal should be assessed and ranked competitively. The relative strengths and weaknesses of each candidate project are examined carefully, and a weighted ranking is obtained. Ideally, the ranking would indicate not only the most preferred project but also the degree to which it is preferred over the other contenders. A number of assessment methodologies are presented in Sections 5.3 through 5.7 and Chapter 6.

If the ranking of a particular proposal is high enough, then resources may tentatively be assigned. However, the decision to fund and initiate work on a proposal involves the full consideration of the available human and financial resources within the organization. The level of available funds and personnel skill types and the commitments to the current portfolio of activities must be factored into the decision. It may be that the new idea is so meritorious that it should replace one or more ongoing projects. If this is the case, then some ongoing project(s) will be terminated or halted temporarily so that resources can be freed up for the new project. Portfolio models have been developed to aid in making these decisions. A portfolio model determines the best way to allocate available resources among competing alternatives, including new candidates and ongoing projects. An example of such a model is presented in Chapter 13.

Portfolio models are used only when multiple projects compete for the same resources. In the remainder of this chapter, we discuss methods for screening and prioritizing alternatives when resources limit the size of the portfolio.

5.2 Dynamics of Project Selection

As Figure 5.1 suggests, project selection can be a very dynamic process. Screening, evaluation, prioritizing, and portfolio analysis decisions may be made at various points, and new ideas may not even go through these steps in sequence. An idea may be shelved or abandoned at any point in time. New information and changed circumstances may reverse a previous decision to reject or abandon a project. For example, efforts to develop lightweight portable computers were given a new impetus with the dramatic improvement in flat-screen display technology. Alternatively, new information or changed circumstances may cause a previously backlogged project to be rejected. The drastic reduction in the price of imported oil in the early 1980s dealt a death blow to some exotic alternative energy projects, such as coal gasification and shale oil reclamation.

The available budget or labor skills within an organization may constrain the project selection process. A meritorious project may be delayed if insufficient budget is available to fund it. Alternatively, a project may be phased, and certain portions initiated while others are postponed until the financial situation becomes more favorable. Customer complaints, competitive threats, or unique opportunities may occasion an urgent need to pursue a particular idea. Depending on the urgency, the project may receive only a cursory screening and evaluation and may go directly into the portfolio.

Screening, evaluation, prioritizing, and portfolio decisions may be repeated several times over the life cycle of a project in response to emerging technologies and changing environmental, financial, or commercial circumstances. The advent of a new RFP, a change in competitive pressures, and the appearance of a new technology are some factors that may cause management to reevaluate an ongoing project. Moreover, with each advance that is recorded, new technical information that may influence other efforts and proposed ideas will be forthcoming. As current projects near completion, key personnel and equipment may be released so that they can be used on another project, perhaps one that was previously backlogged for lack of appropriate resources.

In general, evaluation and selection of new product ideas and project proposals is a complex process, consisting of many interrelated decisions. The complexities involve the variety of data that must be collected and the difficulty of unequivocally measuring and assessing candidate projects on the basis of information derived from these data. Much of the resultant information is subjective and uncertain in nature. Many ideas and proposals exist only as embryonic thoughts and are propelled forward by the sheer force of the sponsor’s enthusiasm. The presence of various organizational and behavioral factors tends to politicize the decision-making process. In many cases, the potential costs and benefits of a project play only a small role in the final decision. For example, an extensive two-year analysis of LANDSAT, an earth-orbiting satellite with advanced resource monitoring capabilities, concluded that the benefits to the user community would fall significantly short of the expected costs associated with operating and maintaining the system over its 10-year lifetime, even under the most optimistic of scenarios (Bard 1984). Nevertheless, pressure from the National Aeronautics and Space Administration (NASA) and its congressional allies, who saw LANDSAT as a high-profile, nonmilitary application of space technology that might actually return some benefits, persuaded the U.S. Department of the Interior to provide funding.

The more sophisticated analytical and behavioral tools that have been developed to aid managers in evaluating projects vary in their approach for handling nonquantitative aspects of the decision.

5.3 Checklists and Scoring Models

The idea-generation stage of a project, when done properly, will often lead to more proposals than can realistically be pursued. Thus, a screening procedure designed to eliminate those proposals that are clearly infeasible or without merit must be established. Compatibility with the organization’s objectives, existing product and service lines, and resources is a primary concern. It is also important to keep in mind that when comparing alternatives early on, a wide range of criteria should be introduced in the analysis. The fact that these criteria are often measured on differing scales makes the screening and evaluation much more difficult.

Of the several techniques available to aid in the screening process, perhaps the most commonly used are rating checklists. They are appropriate for eliminating the most undesirable proposals from further consideration. Because they require a relatively small amount of information, they can be used when the available data are limited or when only rough estimates have been obtained. Such methods should be viewed as expedient; they do not provide a great deal of depth and should be used with this caveat in mind.

Table 5.1 presents an illustration of a checklist. In constructing a checklist, it is necessary to identify the criteria or set of requirements that will be used in making the decision. In the next step, an (arbitrary) scoring scale is developed to measure how well a project does with respect to each criterion. Words such as “excellent” and “good” may be associated with the numerical values [see Gass (2001) for a more complete discussion of several issues related to the choice of scales and their effect on rankings].

TABLE 5.1 An Example of a Checklist for Screening Projects

Criteria: profitability, time to market, development risks, and commercial success. Each project is rated 3, 2, or 1 on each criterion; the total scores are Project A = 10, Project B = 6, and Project C = 8.

In the example displayed in Table 5.1, the criteria include profitability, time to market, development risks, and commercial success. Each candidate is evaluated subjectively and scored using a 3-point scale. The built-in assumption is that each criterion is weighted equally. Typically, a cutoff point or threshold is specified below which the project is abandoned. Of those that exceed the threshold, the top contenders are held for further analysis, whereas the remainder are backlogged or shelved temporarily. Here, if 7 is specified as the threshold total score, then only projects A and C would be pursued.
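The checklist logic can be sketched in a few lines. The per-criterion ratings below are assumptions chosen only so that the totals match Table 5.1; any set of ratings with the same totals yields the same screening decision:

```python
# Equal-weight checklist screening: each candidate is rated on a small
# ordinal scale per criterion, totals are compared against a threshold,
# and survivors are ranked for further analysis.

# Ratings (3 = best, 1 = worst) are illustrative; totals match Table 5.1.
ratings = {
    "A": {"profitability": 3, "time to market": 2,
          "development risks": 3, "commercial success": 2},
    "B": {"profitability": 1, "time to market": 2,
          "development risks": 1, "commercial success": 2},
    "C": {"profitability": 2, "time to market": 2,
          "development risks": 2, "commercial success": 2},
}

def screen(ratings, threshold):
    """Abandon projects whose total falls below the threshold."""
    totals = {p: sum(r.values()) for p, r in ratings.items()}
    survivors = [p for p, t in totals.items() if t >= threshold]
    return totals, sorted(survivors, key=totals.get, reverse=True)

totals, survivors = screen(ratings, threshold=7)
print(totals)     # {'A': 10, 'B': 6, 'C': 8}
print(survivors)  # ['A', 'C']
```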

An alternative means of displaying the information in Table 5.1 is a multidimensional diagram known as a polar graph (Canada et al. 1996), shown in Figure 5.2. In one sense, this type of representation is more efficient than a table because it allows the analyst quickly to ascertain the presence of dominance. For example, by noting that the performance measure surface of project B is completely within that of project A, we can conclude that B is no better than A on any dimension and thus can be discarded or backlogged.

Figure 5.2 Multidimensional diagram for checklist example.


Scoring models extend the logic of checklists by assigning a weight to each criterion that signifies the relative importance of one to the other (Baker 1974, Hobbs 1980, Souder and Mandakovic 1986). A weighted score is then computed for each candidate. In deriving the weights, a team approach should be used to head off disagreement after the assessment. One way of accomplishing this is to list all criteria in descending order of importance. Next, assign the least important (last-listed) criterion a value of 10, and assign a numerical weight to each criterion on the basis of how important it is relative to this one. A criterion considered to be twice as important as the least important criterion would be assigned a weight of 20. If team members cannot agree on specific values, then sensitivity analysis should be performed.

An example of the use of a scoring model for screening projects associated with the development of new products is shown in Table 5.2. Here eight criteria are to be rated on a numerical scale of 0 to 30, where 0 means poor and 30 means excellent. Because this scale is arbitrary, no significance should be placed on relative values. For convenience, the weights are scaled between 0 and 1. In general, the factor score for project j, call it T_j, is obtained by multiplying the relative weight w_i for criterion i by the rating s_ij and summing. That is,

T_j = Σ_i w_i s_ij  (5.1)

TABLE 5.2 An Example of a Scoring Model for Screening Projects

Ratings: Excellent = 30, Good = 20, Fair = 10, Poor = 0.

Criteria                  Relative weight   Rating   Factor score
Marketability                  0.20           30          6
Risk                           0.20           20          4
Competition                    0.15           20          3
Value added                    0.15            0          0
Technical opportunities        0.10           30          3
Material availability          0.10           10          1
Patent protection              0.05            0          0
Current products               0.05           20          1
Total                          1.00                      18

In this example, the project under consideration received a factor score of 18.
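Applying Equation (5.1) to the Table 5.2 weights and ratings reproduces the factor score of 18; a minimal sketch:

```python
# Weighted factor score T_j = sum_i w_i * s_ij (Equation 5.1),
# using the weights and ratings of Table 5.2.

weights = {  # relative weights, scaled to sum to 1.0
    "marketability": 0.20, "risk": 0.20, "competition": 0.15,
    "value added": 0.15, "technical opportunities": 0.10,
    "material availability": 0.10, "patent protection": 0.05,
    "current products": 0.05,
}
ratings = {  # 30 = excellent, 20 = good, 10 = fair, 0 = poor
    "marketability": 30, "risk": 20, "competition": 20,
    "value added": 0, "technical opportunities": 30,
    "material availability": 10, "patent protection": 0,
    "current products": 20,
}

factor_score = sum(w * ratings[c] for c, w in weights.items())
print(round(factor_score, 2))  # 18.0
```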

A variety of other formulas have been proposed for deriving the relative weights. Three of the simplest are presented below. More elaborate schemes are discussed in the next chapter.

1. Uniform or equal weights. Given N criteria, the weight for each is

w_i = 1/N

2. Rank sum weights. If R_i is the rank position of criterion i (with 1 as the highest rank) and there are N criteria, then rank sum weights for each criterion may be calculated as

w_i = (N − R_i + 1) / Σ_{k=1}^{N} (N − R_k + 1)

where the denominator is the sum of the first N integers; that is, N(N + 1)/2.

3. Rank reciprocal weights. These weights may be calculated as

w_i = (1/R_i) / Σ_{k=1}^{N} (1/R_k)
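The three weighting formulas can be coded directly; the four-criterion ranking below is purely illustrative:

```python
# Three simple schemes for deriving criterion weights from a ranking,
# where criteria are ranked 1 (most important) through N (least).

def uniform_weights(n):
    return [1 / n] * n

def rank_sum_weights(ranks):
    n = len(ranks)
    denom = n * (n + 1) / 2          # sum of the first N integers
    return [(n - r + 1) / denom for r in ranks]

def rank_reciprocal_weights(ranks):
    denom = sum(1 / r for r in ranks)
    return [(1 / r) / denom for r in ranks]

ranks = [1, 2, 3, 4]  # four criteria, listed in rank order
print(rank_sum_weights(ranks))  # [0.4, 0.3, 0.2, 0.1]
print([round(w, 3) for w in rank_reciprocal_weights(ranks)])
# [0.48, 0.24, 0.16, 0.12]
```

Note how the rank reciprocal scheme concentrates more weight on the top-ranked criterion than the rank sum scheme does.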

The advantage of a scoring model is that it takes into account the tradeoffs among the criteria, as defined by the relative weights. The disadvantage is that it lacks precision and relies on an arbitrary scoring system.

An environmental scoring form developed by Niagara Mohawk, a New York utility, is depicted in Table 5.3. Note that the procedure for assigning points is specified.

5.4 Benefit-Cost Analysis

Evaluation of the merits of alternative investment opportunities begins with technical feasibility. The next step involves a comparison at some minimum attractive rate of return (MARR) of the estimated stream of costs and benefits over the expected economic life of each project. Engineering studies must be undertaken to establish the fundamental data. The estimated benefits and costs are then compared, usually on a present value basis, using a predetermined discount rate.

TABLE 5.3 Environmental Scoring Form Used by Niagara Mohawk

Environmental attribute (weight W); points P run from 0 (leftmost band) to 4 (rightmost band). Some rows define fewer than five point levels.

Air emissions
  Sulfur oxides (lb/MWh) (W = 7): >6 | 4.0–6.0 | 2.5–3.9 | 1.5–2.4 | 0.5–1.4
  Nitrogen oxides (lb/MWh) (W = 16): >6 | 4.0–6.0 | 2.5–3.9 | 1.5–2.4 | 0.5–1.4
  Carbon dioxide (lb/MWh) (W = 3): >1500 | 1050–1500 | 650–1049 | 250–649 | 100–249
  Particulates (lb/MWh) (W = 1): >0.3 | 0.2–0.3 | 0.1–0.199 | 0.05–0.099 | 0.01–0.049

Water effects
  Cooling water flow (annual intake as % of lake volume) (W = 1): 80–100 | 60–79 | 40–59 | 20–39 | 5–19
  Fish protection (W = 1): none | operational restrictions | fish protection
  NY State water quality classification of receiving water (W = 1): A or better | B | C+ | C+ | D

Land effects
  Acreage required (acres/MW) (W = 1): 0.3–0.5 | 0.2–0.29 | 0.1–0.19 | 0.05–0.09 | 0.01–0.05
  Terrestrial (W = 1): unique ecological or historical value | rural or low-density suburban | industrial area
  Visual aesthetics (W = 1): highly visible | within existing developed area | not visible from public roads
  Transmission (W = 2): new OH >5 miles | new OH 1–5 miles | new UG >5 miles | new UG 1–5 miles | use existing facilities
  Noise (L_eq − background L_90) (W = 2): 5–10 | 0–4.9
  Solid waste disposal (lb/MWh) (W = 2): >300 | 200–300 | 100–199 | 50–99 | 10–49
  Solid waste as fuel (% of total Btu) (W = 1): 0 | 1–30 | 31–50 | 51–80 | 81–90
  Fuel delivery method (W = 1): new RR spur | truck and existing RR | new pipeline | barge | use existing pipeline
  Distance from receptor area (km) (W = 1): <10 | 10–39 | 40–69 | 70–100 | >100

Total score

In the private sector, the firm generally pays all of the costs and receives all of the benefits, both quantitative and qualitative. Replacing an outdated piece of equipment is an example in which the returns are measurable, whereas constructing a new company cafeteria illustrates the opposite case. Where the activities of government are concerned, however, a different situation arises. Revenues are received through various forms of taxation and are supposed to be spent “in the public interest.” Thus, the government pays but receives very few, if any, benefits. This can present all sorts of problems. For one, it means that the intended beneficiaries of a federal project will be very anxious to get the project approved and funded. Such situations may induce otherwise virtuous people to redefine the standards of acceptable ethical behavior. A second problem concerns the measurement of benefits, which are often widely dispersed. Other difficulties include the selection of an interest rate and choosing the correct viewpoint from which the analysis should be made. Finally, in the benefit-cost (B/C) analysis, where the B/C ratio is used to rank competing projects, there may be legitimate ambiguity in deciding what goes in the numerator and what goes in the denominator of the ratio.

At first glance, it would seem to be a simple matter of sorting out the consequences into benefits (for the numerator) or costs (for the denominator). This works satisfactorily when applied to projects for a firm or a person. In government projects it may be considerably more difficult to classify the various consequences, as shown in Example 5-1.

Example 5-1

On a proposed government project, the following consequences have been identified:

Initial cost of project to be paid by government is $100K.

Present worth (PW) of future maintenance to be paid by government is $40K.

PW of benefits to the public is $300K.

PW of additional public users costs is $60K.

Show the various ways of computing the B/C ratio.

Solution

Putting the benefits in the numerator and all costs in the denominator gives

B/C ratio = all benefits / all costs = 300 / (100 + 40 + 60) = 300/200 = 1.5

An alternative computation is to consider user costs as disbenefits and to subtract them in the numerator rather than add them in the denominator:

B/C ratio = (public benefits − public costs) / government costs = (300 − 60) / (100 + 40) = 240/140 = 1.7

Still another variation would be to consider maintenance costs as disbenefits:

B/C ratio = (300 − 60 − 40) / 100 = 200/100 = 2.0

It should be noted that although three different B/C ratios may be computed, the net present value (NPV) does not change:

NPV = PW of benefits − PW of costs = 300 − 60 − 40 − 100 = 100.
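The arithmetic of Example 5-1 can be checked directly; a quick sketch (all amounts in $K, present-worth terms):

```python
# The three B/C formulations of Example 5-1 and the invariant NPV.

first_cost, maintenance = 100, 40   # paid by the government
benefits, user_costs = 300, 60      # accruing to the public

# 1. All benefits over all costs.
bc_all = benefits / (first_cost + maintenance + user_costs)

# 2. User costs treated as disbenefits (subtracted in the numerator).
bc_disbenefit = (benefits - user_costs) / (first_cost + maintenance)

# 3. Maintenance also treated as a disbenefit.
bc_maint = (benefits - user_costs - maintenance) / first_cost

# NPV is the same no matter how the ratio is formed.
npv = benefits - user_costs - maintenance - first_cost

print(round(bc_all, 2), round(bc_disbenefit, 2), round(bc_maint, 2), npv)
# 1.5 1.71 2.0 100
```

The ratio depends on an accounting convention; the NPV does not, which is why scale-sensitive decisions should not rest on the ratio alone.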

There is no inherently correct way to compute the B/C ratio. Using the notation of Chapter 3, two commonly used formulations are given below:

1. Conventional B/C

B/C = PW of benefits to user / PW of total costs to supplier = PW[B] / PW[CR + (O + M)]  (5.2a)

or

B/C = annual worth (AW) of benefits to user / AW of total costs to supplier = B / [CR + (O + M)]  (5.2b)

where

B = AW of benefits to user
CR = capital recovery cost (equivalent annual cost of the initial investment, considering any salvage value)
O = equivalent uniform annual operating cost
M = equivalent uniform maintenance cost

2. Modified B/C

B/C = PW[B − (O + M)] / PW[CR]  or  B/C = [B − (O + M)] / CR

The modified method has become more popular with governmental agencies and departments over the last decade. Although both methods yield the same recommendation when comparing mutually exclusive alternatives, they may yield different rankings for independent investment opportunities. In either case, using PW, AW, or future worth (FW) should always provide the same results.

Example 5-2  (Single-Project Analysis)

An individual investment opportunity is deemed to be worthwhile if its B/C ratio is greater than or equal to 1. Consider the project of installing a new inventory control system with the following data:

Initial cost: $20,000
Project life: 5 years
Salvage value: $4,000
Annual savings: $10,000
Operating and maintenance disbursements: $4,400 per year
MARR: 15%

By interpreting annual savings as benefits, the conventional and modified B/C ratios based on annual equivalents are computed as follows:

CR = $20,000(A/P, 15%, 5) − $4,000(A/F, 15%, 5) = 20,000(0.2983) − 4,000(0.1483) = $5,373

conventional B/C = B / [CR + (O + M)] = $10,000 / ($5,373 + $4,400) = 1.02

modified B/C = [B − (O + M)] / CR = ($10,000 − $4,400) / $5,373 = 1.04

Because either B/C is greater than 1, the investment is worthwhile. Nevertheless, there is an opportunity cost associated with the investment that may preclude other possibilities. The fact that the B/C of a project is greater than 1 does not necessarily mean that it should be pursued.
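The interest factors and both ratios can be reproduced without tables by computing the capital-recovery and sinking-fund factors from their closed forms; a sketch:

```python
# Conventional and modified B/C ratios for Example 5-2, with the
# (A/P) and (A/F) factors computed from first principles.

def a_over_p(i, n):  # capital recovery factor (A/P, i, n)
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def a_over_f(i, n):  # sinking fund factor (A/F, i, n)
    return i / ((1 + i) ** n - 1)

i, n = 0.15, 5
initial, salvage = 20_000, 4_000
benefits, o_and_m = 10_000, 4_400   # annual amounts

cr = initial * a_over_p(i, n) - salvage * a_over_f(i, n)
conventional = benefits / (cr + o_and_m)
modified = (benefits - o_and_m) / cr

print(f"CR = ${cr:,.0f}")                        # CR = $5,373
print(f"conventional B/C = {conventional:.2f}")  # 1.02
print(f"modified B/C = {modified:.2f}")          # 1.04
```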

Example 5-3 (Comparing Mutually Exclusive Alternatives)

As was true for rate of return (ROR) calculations, when comparing a set of mutually exclusive alternatives by any B/C method, an incremental approach is preferred. The principles and criterion of choice as explained in Chapter 3 apply equally to B/C methods, the only difference being that each increment of cost (the denominator) must be justified by a B/C ratio ≥ 1.

Consider the data in Table 5.4a associated with the four alternative projects used in Example 3.9 to demonstrate the internal rate of return (IRR) method. Each is listed in increasing order of investment. The symbol Δ( B/C ) means that the B/C ratio is being computed on the incremental cost. Once again, a MARR of 15% is used.

TABLE 5.4 Input Data and Results for Incremental Analysis

(a) Input data
Project                                 A         B         C         D
Initial cost                      $20,000   $30,000   $35,000   $43,000
Useful life                       5 years  10 years   5 years   5 years
Salvage value                      $4,000         0    $4,000    $5,000
Annual receipts                   $10,000   $14,000   $20,000   $18,000
Annual disbursements               $4,400    $8,600    $9,390    $5,250
Net annual receipts − disbursements  $5,600   $5,400   $10,610   $12,750

(b) Results
Increment                               A       A→B       A→C       C→D
ΔInvestment                       $20,000   $10,000   $15,000    $8,000
ΔSalvage                            4,000    −4,000         0     1,000
ΔCR = ΔC                            5,373       605     4,477     2,386
Δ(annual receipts − disbursements) = ΔB  5,600   −200     5,010     2,140
Δ(B/C) = ΔB/ΔC                       1.04     −0.33      1.12      0.90
Is Δinvestment justified?             Yes        No       Yes        No

The output data in Table 5.4b confirm the results previously found using the IRR method. Alternative C would be chosen given that it is the most expensive project for which each increment of cost is justified (by B/C ratio≥1 ).
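The defender–challenger walk through Table 5.4 can be sketched as follows. The factors are computed from first principles rather than read from interest tables, so intermediate values may differ slightly from the table's rounded entries; the selection logic (accept an increment only when ΔB/ΔC ≥ 1) is the point:

```python
# Incremental B/C analysis of the four mutually exclusive projects
# in Table 5.4, at MARR = 15%.

def a_over_p(i, n):  # capital recovery factor
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def a_over_f(i, n):  # sinking fund factor
    return i / ((1 + i) ** n - 1)

MARR = 0.15
# (initial cost, life in years, salvage, net annual receipts - disbursements)
projects = {
    "A": (20_000, 5, 4_000, 5_600),
    "B": (30_000, 10, 0, 5_400),
    "C": (35_000, 5, 4_000, 10_610),
    "D": (43_000, 5, 5_000, 12_750),
}

def cr(name):
    cost, life, salvage, _ = projects[name]
    return cost * a_over_p(MARR, life) - salvage * a_over_f(MARR, life)

# Walk the projects in increasing order of investment; the current
# "defender" is replaced only when the incremental B/C ratio is >= 1.
defender = None
for challenger in sorted(projects, key=lambda p: projects[p][0]):
    if defender is None:
        ratio = projects[challenger][3] / cr(challenger)  # vs. do nothing
    else:
        delta_b = projects[challenger][3] - projects[defender][3]
        delta_c = cr(challenger) - cr(defender)
        ratio = delta_b / delta_c
    if ratio >= 1:
        defender = challenger

print(defender)  # C
```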

B/C studies within the public sector in particular may be approached from several points of view. The perspective taken may have a significant impact on the outcome of the analysis. Possible viewpoints include

1. That of the governmental agency conducting the study

2. That of the local area (e.g., town, municipality)

3. That of the nation as a whole

4. That of the targeted industry

Thus, it is essential that the analyst have clearly in mind which group is being represented before proceeding with the study. If the objective is to promote the general welfare of the public, then it is necessary to consider the impact of alternative policies on the entire population, not merely on the income and expenditures of a selected group. Practically speaking, however, without regulations, the best that can be hoped for is that the broader interests of the community will be taken into account. Most would agree, for example, that without environmental and health regulations and the attendant threat of prosecution, there would be little incentive for firms to treat their waste products before discharging them into local waterways.

The national viewpoint would seem to be the correct one for all federally funded public works projects; however, most such projects provide benefits only to a local area, making it difficult, if not impossible, to trace and evaluate quantitatively the national effects. The following example parallels an actual case history.

Example 5-4

The government wants to decide whether to give a $5,000,000 subsidy to a chemical manufacturer who is interested in opening a new factory in a depressed area. The factory is expected to generate jobs for 200 people and further stimulate the local economy through commercial ventures and tourist trade. The benefits as a result of jobs created and improved trade in the area are estimated at $1,000,000 per year. Six percent is considered to be a fair discount rate. The study period is 20 years. Calculate the B/C ratio to determine whether the project is worthwhile.

Solution

PW of benefits = $1,000,000(P/A, 6%, 20) = $11,470,000

B/C ratio = $11,470,000 / $5,000,000 = 2.3
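The present worth of the 20-year benefit stream can be verified by computing the uniform-series present-worth factor (P/A, i, n) directly:

```python
# Present worth of Example 5-4's benefit stream and the resulting
# B/C ratio, using (P/A, i, n) = (1 - (1 + i)^-n) / i.

def p_over_a(i, n):
    return (1 - (1 + i) ** -n) / i

subsidy = 5_000_000
annual_benefit = 1_000_000
pw_benefits = annual_benefit * p_over_a(0.06, 20)

print(f"PW of benefits = ${pw_benefits:,.0f}")  # PW of benefits = $11,469,921
print(f"B/C = {pw_benefits / subsidy:.1f}")     # B/C = 2.3
```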

Outcome

The plant was funded on the basis of the foregoing study, but pollution control equipment was not installed. During operations, raw by-products were dumped into the river, causing major environmental problems downstream. Virtually all of the fish died, and the river became a local health hazard. The retrofitting of pollution control equipment sometime later made the entire project uneconomical, and the plant eventually closed.

Conclusion

Because the full costs of the project were not taken into account originally, the results were overly optimistic and misleading. Had the proper viewpoint been established at the outset and all of the factors considered, the outcome might not have been so unfortunate.

5.4.1 Step-by-Step Approach

To conduct a benefit-cost (B/C) analysis for an investment project, it is important to complete the following steps:

1. Identify the problem clearly.

2. Explicitly define the set of objectives to be accomplished.

3. Generate alternatives that satisfy the stated objectives.

4. Identify clearly the constraints (e.g., technological, political, legal, social, financial) that exist within the project environment. This step will help narrow the alternatives generated.

5. Determine and list the benefits and costs associated with each alternative. Specify each in monetary terms. If this cannot be done for all factors, then this should be stated clearly in the final report.

6. Calculate the B/C ratios and other indicators (e.g., present value, ROR, initial investment required, payback period) for each alternative.

7. Prepare the final report comparing the results of the evaluation of each alternative examined.

5.4.2 Using the Methodology

As with any decision-making process, the first two steps above are to define the problem and related goals. This may involve identifying a particular problem to be solved (e.g., pollution) or agreeing on a specific program, such as landing an astronaut on the moon. Once this is done, it is necessary to devise a solution that is feasible, not only technically and economically but also politically.

Implicit in these steps is a twofold selection process: a macro-selection process whereby we choose from among competing opportunities or programs (should more federal funds be expended on space research or pollution cleanup and control?) and a micro-selection process whereby we strive to find the best of several alternatives (should we build a nuclear- or coal-fired plant?).

5.4.3 Classes of Benefits and Costs

Once a set of alternatives has been established, the detailed analysis can begin. The benefits and costs may be broken down into four classes: primary, secondary, external, and intangible. Primary refers to benefits and costs that are a direct result of a particular project. If a corporation manufactures videocassette recorders, then the primary costs are in production, and the primary benefits are in profits. In building a canal, the construction costs and the revenues generated from water charges are the primary elements.

“Secondary” benefits and costs are the marginal benefits and costs that accrue when an imperfect market mechanism is at work. In such instances, the market prices of a project’s final goods and services do not reflect the “true” prices. The use of government funds to build and maintain airports is a good example. There is a hidden cost to society as well as a hidden benefit to the airlines and their more frequent customers. Increased noise pollution and traffic congestion around the airport are illustrative of the costs; benefits can be measured by lower airfares.

External benefits and costs are those that arise when a project produces a spillover effect on someone other than the intended group. Thus, a government subsidy to airports produces external benefits by indirectly boosting the local economy. Massive government spending on space has yielded extensive benefits to medical science and the microelectronics industry. Similarly, there are spillover effects of pollution that produce disutilities in the form of health costs and the loss of recreational facilities.

Intangible benefits and costs are those that are difficult, if not impossible, to measure on a monetary scale. Examples of intangible benefits include trademarks and goodwill, whereas examples of intangible costs include costs associated with increased urban congestion. If intangibles dominate the decision process, the value of multiple-criteria methods such as multi- attribute utility theory and the analytic hierarchy process, discussed in Chapter 6, increases.

After categorizing the benefits and costs in this manner, they should be allocated to the various stages in a project in which they are expected to occur. A typical project includes stages such as planning, implementation, operation, and closeout. This distinction is necessary for proper quantitative evaluation. For example, the costs associated with noise, traffic disruption, and hazards of subway construction may occur only in the implementation stage and must be discounted accordingly.

5.4.4 Shortcomings of the Benefit-Cost Methodology

Upon completion of the quantitative assessment of the various costs and benefits, the actual desirability of the project can be determined. Use of the B/C ratio to rank the best alternative can be deceptive, however, because it disguises the problem of scale. Two projects may have the same ratio yet involve benefits and costs that differ by millions of dollars, or one project may have a lower ratio than another and still possess greater benefits. Sometimes, therefore, projects will be selected simply on the basis of whether their benefits exceed their costs; yet again, scale must be considered, for two projects obviously can have the same net benefit, but one may be far more costly than the other.

As mentioned, another way to evaluate projects is to compare the expected ROR on investment with the interest rate on an alternative use of the funds. This criterion is implicit in most private-sector decisions but generally is neglected in the public sector, where tangible financial returns are not the sole criterion for investment allocations. Moreover, there is rarely a consensus on which discount rate should be used. Economists invariably dispute the choice, some arguing for the social rate of time preference, whereas others lean toward the prevailing interest rate. Except when a particular rate is specified by the decision maker, the NPV calculations should be repeated using several values to ascertain sensitivity effects.

The difficulty in agreeing on a discount rate is usually secondary to the problem of determining future cost and benefit streams. Uncertainties in long-term consequences may be large for extended time horizons of more than a few years, although frequently, all alternatives will suffer a similar fate. Investigating questions of inter-temporal equity and methods for dealing with uncertain outcomes are central problems of research, and their logic must be pursued relentlessly. Moreover, all forms of decision making must resolve these questions, regardless of whether they are dealt with explicitly.

In practice, it is rare that any one criterion will suffice for making a sound decision. Several criteria, as well as their many variations, must be examined in the analysis. The important point, however, is that even if all relevant factors are addressed, the analysis will still possess a high degree of subjectivity, leaving room for both conscious and unacknowledged bias. This leads to the two major shortcomings of B/C analysis.

The first is the need and general failure to evaluate those items that are unquantifiable in monetary terms. The type of question that continually gets raised is, “How do you measure the value of harmony between labor and management?” or “What is the value of a pollution-free environment?” The development of indicators other than those that reflect dollar values explicitly presents a considerable challenge to analysts. They must depart from the familiar criteria of economic efficiency as a prime mechanism of evaluation and venture into the unknown areas of social and environmental concerns. Interestingly enough, the nonquantifiable elements bear equally on the governmental, business, and consumer sectors of the economy. In short, these “unmeasurable” elements may be of utmost significance, as system indicators must be developed to evaluate their impact on the program. It is here that judgment and subjectivity come into play.

The second weakness in the practice of B/C analysis arises from the “judge and jury” characteristic. Invariably, the same organization (either in a private company or a government agency) that proposes and sponsors a particular project undertakes the analysis. Whether this is done internally or by a subcontractor is not important. Rather, the organization and its contractors will usually display similar attitudes and biases in their approach to a problem. Independent, unbiased assessments are needed if the process is to work correctly and produce believable results.

5.5 Cost-Effectiveness Analysis

When comparing two projects that have the same B/C ratio, the one that costs more will provide greater returns. In some situations, though, there may be a fixed or upper limit on the budget, so a project that is technically feasible may not be economically feasible even if it has a high B/C ratio. Economic barriers to entry are common in many fields, such as automotive or semiconductor manufacturing, where the required initial investment may be as high as $1 billion.

In the case in which the budget is the limiting factor, a cost-effectiveness (C-E) study is often performed to maximize the value of an organization’s investment. In a C-E study, the focus is the performance of the proposed system (i.e., project) as measured by a composite index that is necessarily subjective in nature. This is because incommensurable and qualitative factors such as development risk, maintainability, and ease of use all must be evaluated collectively.

In general, system effectiveness can be thought of as a measure of the extent to which a system may be expected to achieve a set of specific mission requirements. It is often denoted as a function of the system availability, dependability, and capability.

Availability is defined as a measure of the system condition at the start of a mission. It is a function of the relationship among hardware, personnel, and procedures.

Dependability is defined as a measure of the system condition at one or more points during mission operations.

Capability accounts specifically for the performance spectrum of the system.

The term effectiveness can be difficult to define precisely. For a product or service, one definition would be the ability to deliver what is called for in the technical specification. Among the terms that are related to (or have been substituted for) effectiveness are value, worth, benefit, utility, gain, and performance. Unlike cost, which can be measured in dollars, effectiveness does not possess an intrinsic measure by which it can be uniquely expressed.
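Although the text gives no formula, one common sketch treats overall effectiveness as the product of the three factors above, each scaled to the interval [0, 1]. The function name and example values below are illustrative assumptions, not the book's model:

```python
# Hypothetical sketch: combine availability (A), dependability (D), and
# capability (C) multiplicatively, each expressed as a value in [0, 1].
def system_effectiveness(availability: float, dependability: float,
                         capability: float) -> float:
    """Composite effectiveness as the product A * D * C (illustrative)."""
    for x in (availability, dependability, capability):
        if not 0.0 <= x <= 1.0:
            raise ValueError("each factor must lie in [0, 1]")
    return availability * dependability * capability

# Example: a system up 90% of the time, healthy through 95% of missions,
# delivering 80% of the specified performance.
e = system_effectiveness(0.90, 0.95, 0.80)
```

The multiplicative form captures the idea that a shortfall in any one factor degrades the whole: a highly capable system that is rarely available still scores poorly.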

Government agencies, in particular, the U.S. Department of Defense, have been prominent users of C-E analyses. The following eight steps represent a common blueprint for conducting a C-E study:

1. Define the desired goals.

2. Identify the mission requirements.

3. Develop alternative systems.

4. Establish system evaluation criteria.

5. Determine capabilities of alternative systems.

6. Analyze the merits of each.

7. Perform sensitivity analysis.

8. Document results and make recommendations.

A critical step in the procedure is in deciding how the merits of each alternative will be judged. After the evaluation criteria or attributes are established, a mechanism is needed to construct a single measure of performance. Scoring models, such as those described in Section 5.3, are commonly used. Here, we assess the relative importance of each system attribute and assign a weight to each. Next, a numerical value, say between 0 and 100, is assigned to represent the effectiveness of each attribute for each system. Once again, these values are subjective ratings but may actually be based on simple mathematical calculations of objective measures, subjective opinion, or engineering judgments. Where an appropriate physical scale exists, the maximum and minimum values can be noted and a straight line between those boundaries can be used to translate outcomes to a scale of 0 to 100. The analyst must ensure that the actual value of the attribute corresponds to the subjective description; for example, 100 ≥ excellent ≥ 80; 80 > good ≥ 60.

In many cases it is useful to compare relative attribute values graphically to determine whether any obvious errors exist in data entry or logic. Figure 5.3 provides a visual comparison of the ratings of each of five attributes for four systems. The corresponding data are displayed in Table 5.5.

Figure 5.3 Relative effectiveness of systems.

TABLE 5.5  Data for C-E Analysis

                             System 1      System 2      System 3      System 4
Attribute          Weight    EFF    WT     EFF    WT     EFF    WT     EFF    WT
A. Efficiency       0.32      85   27.2     80   25.6     75   24.0     60   19.2
B. Speed            0.24      85   20.4     60   14.4     80   19.2     95   22.8
C. User Friendly    0.24      85   20.4     50   12.0     70   16.8     90   21.6
D. Reliability      0.12      50    6.0     80    9.6     80    9.6     99   11.9
E. Expandability    0.08      85    6.8     90    7.2     70    5.6     50    4.0
Total effectiveness          80.8          68.8          75.2          79.5
Costs                        $450K         $250K         $300K         $350K

At this point in the analysis, two sets of numbers have been developed for each attribute i: the normalized weights, w_i, and the perceived effectiveness assigned to each system j for each attribute i, s_ij. To arrive at a composite measure of effectiveness, T_j, for each system j, we could use Eq. (5.1). The highest value of T_j would indicate the system with the best overall performance.
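The composite calculation can be reproduced directly from the Table 5.5 data. Only the weights and ratings below come from the table; the script itself is an illustrative sketch:

```python
# Weighted scoring model: T_j = sum_i w_i * s_ij, using the Table 5.5 data.
weights = [0.32, 0.24, 0.24, 0.12, 0.08]   # attributes A..E (sum to 1.0)

# Effectiveness ratings s_ij on a 0-100 scale, one row per system.
scores = {
    "System 1": [85, 85, 85, 50, 85],
    "System 2": [80, 60, 50, 80, 90],
    "System 3": [75, 80, 70, 80, 70],
    "System 4": [60, 95, 90, 99, 50],
}

# Composite effectiveness T_j for each system.
totals = {name: sum(w * s for w, s in zip(weights, row))
          for name, row in scores.items()}

for name, t in totals.items():
    print(f"{name}: total effectiveness = {t:.1f}")
```

The printed totals reproduce the table's bottom row (80.8, 68.8, 75.2, and 79.5), confirming that System 1 has the best overall performance before costs are considered.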

If this system were within budget and none of its attribute values were below a predetermined threshold, then it would represent the likely choice. Nevertheless, effectiveness alone does not tell the entire story, and, whenever possible, the analysis should be extended to include costs as well. In a similar manner, cost factors can be combined into a single measure to compare with effectiveness. Typically, procurement, installation, and maintenance costs are considered. When the planning horizon extends beyond one year, the effects of time should be included through appropriate discounting. Table 5.5 contains this information.

The final step of the C-E methodology compares system effectiveness and costs. A graphical representation may be helpful in this regard. Figure 5.4 plots the two variables for each system (the unlabeled points represent systems not contained in Table 5.5). The outer envelope denotes the efficient frontier. Any system that is not on this curve is dominated by one or a combination of two or more systems, implying that it is inferior from both a cost and an effectiveness point of view. Systems that fall below the dashed line (predetermined threshold) are arbitrarily deemed unacceptable. Finally, note the relationship between systems 1 and 4. Although system 1 has the highest effectiveness rating, it is only marginally better than system 4. The fact that it is almost 30% more expensive, however, makes its selection problematic, as an incremental analysis would indicate.

Figure 5.4 Relationship between system effectiveness and cost.


5.6 Issues Related to Risk

In designing, building, and operating large systems, engineers must address such questions as, “What can go wrong, and how likely is it to happen?” “What range of consequences might there be, when, and how could they be averted or mitigated?” “How much risk should be tolerated or accepted during normal operations, and how can it be measured, reduced, and managed?”

Formal risk analysis attempts to quantify answers to these questions (Bell 1989, Kaplan and Garrick 1981). In new systems, it is coming to be accepted as a way of comparing the risks inherent in alternative designs, spotlighting the high-risk portion of a system, and pointing up techniques for attenuating those risks. For older systems, risk analyses conducted after systems have been built and operated have often revealed crucial design faults. One such fault cost the lives of 167 workers on the British oil production platform Piper Alpha in the North Sea several years ago. A simple gas leak in the $3 billion rig led to a devastating explosion. The platform had a vertical structure, and risk analysis was not done on the design. Workers’ accommodations were on top, above the lower compartments, which housed equipment for separating oil from natural gas. The accommodations were thought to be immune to mishap, but as a post-accident computer simulation revealed, the energy from the explosion in the lower level coupled to the platform’s frame. Stress waves were dissipated effectively into the water below, but in short order, reflections at the steel–air interface at the upper levels expanded, weakened, and shattered the structure. In contrast, Norwegian platforms, which are designed using government-mandated risk analysis, are long and horizontal like aircraft carriers, with workers’ accommodations at the opposite end of the structure from the processing facilities and insulated from them by steel doors.

Analysts define risk as a combination of the probability of an undesirable event and the magnitude of every foreseeable consequence (e.g., damage to property, loss of money, and delay in implementation). The consequences considered can range in seriousness from mild setback to catastrophic. Some related definitions are given in Table 5.6.

TABLE 5.6  Some Definitions Related to Risk

Term               Definition
Failure            Inability of a product or system to perform its required function.
Quality Assurance  Probability that a product or system will perform its intended function when tested.
Reliability        Probability that a product or system will perform its intended function for a specified time duration (assuming normal operating conditions).
Risk               A blend of the probability of failure and the monetary outcome (or equivalent) associated with failure.
Risk Assessment    Processes and procedures for identifying and quantifying risks.
Risk Management    Techniques used to minimize risk, either by reducing the probability of a failure or by reducing the impact of a failure.
Uncertainty        A measure of the limits of knowledge in a technical area; for example, uncertainty may be expressed by a statistical confidence interval (a measure of sampling accuracy).

The first step in risk analysis is to tabulate the various stages or phases of a system’s mission and list the risk sensitivities in each phase, including technical, human, and economic risks. The time at which a failure occurs may mitigate its consequences. For example, a failure in an air traffic control system at a major airport would disrupt local air traffic far more at weeknight rush hour than on a Sunday morning. Similarly, a failure in a chemical processing plant would be more dangerous if it interfered with an intermediate reaction that produced a toxic chemical than if it occurred at a stage when the by-products were more benign.

Next, for each phase of the mission, the system’s operation should be diagrammed and the logical relationships of the components and subsystems during that phase determined. The most useful techniques for the job are failure modes and effects analysis (FMEA), event tree analysis, and fault tree analysis (Kumamoto and Henley 2001). The three complement one another, and when taken together, help engineers identify the hazards of a system and the range of potential consequences. The interactions are particularly important because one piece of equipment might be caused to fail by another’s failure to, say, supply fuel or current.

For engineers and managers, the chief purpose of risk analysis—defining the stages of a mission, examining the relationships between system parts, and quantifying failure probabilities—is to highlight any weakness in a design and identify those that contribute most heavily to delays or losses. The process may even suggest ways of minimizing or mitigating risk.

A case in point is the probabilistic risk analysis on the U.S. space shuttle’s auxiliary power units, completed for NASA in December 1987 by the engineering consulting firm Pickard, Lowe & Garrick. The auxiliary power units, among other tasks, throttle the orbiter’s main engines and operate its wing ailerons. NASA engineers and managers, using qualitative techniques, had formerly judged fuel leaks in the three auxiliary power units “unlikely” and the risks acceptable, without fully understanding the magnitude of the risks that they accepted, even though a worst-case consequence could be the loss of the vehicle. One of the problems with qualitative assessment is that subjective interpretation of words such as “likely” and “unlikely” allows opportunity for errors in judgment about risk. For example, NASA had applied the word “unlikely” to risks that ranged from 1:250 to 1:20,000.

The probabilistic risk analysis revealed that although the probability of individual leaks was low, there were so many places where leaks could occur that five occurred in the first 24 shuttle missions. Moreover, in the ninth mission on November 28, 1983, the escaping fuel self-ignited while the orbiter was hurtling back to earth and exploded after it had landed.

The probabilistic analysis pinpointed the fact that an explosion was more likely to occur during landing than during launch, when the auxiliary power units are purged with nitrogen to remove combustible atmospheric oxygen. It also suggested several ways of reducing the risk, such as changing the fuels or placing fire barriers between the power units.

5.6.1 Accepting and Managing Risk

Once the risks are determined, managers must decide what levels are acceptable on the basis of economic, political, and technological judgments. The decision can be controversial because it necessarily involves subjective judgments about costs and benefits of the project, the well-being of the organization, and the potential damage or liability.

Naturally, risk is tolerated at a higher level if the payoffs are high or critical to the organization. In the microcomputer industry, for example, where product lifetimes may be no greater than 1 or 2 years and new products and upgrades are being introduced continually, companies must keep pace with the competition or forfeit market share. Whatever the level of risk finally judged acceptable, it should be compared with and, if necessary, used to adjust the risks calculated to be inherent in the project. The probability of failure may be reduced further by redundant or standby subsystems or by parallel efforts during development. Also, managers should prepare to counter the consequences of failure or setbacks by devising contingency plans or emergency procedures.
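To see why redundant or standby subsystems reduce the probability of failure, consider a hypothetical subsystem with n independent, identical units in parallel; the failure probabilities below are illustrative only:

```python
# Illustrative sketch: with n independent parallel units, each failing with
# probability p, the system fails only if all n units fail: P(fail) = p**n.
def parallel_failure_prob(p: float, n: int) -> float:
    """Failure probability of n independent parallel units (illustrative)."""
    if not 0.0 <= p <= 1.0 or n < 1:
        raise ValueError("need 0 <= p <= 1 and n >= 1")
    return p ** n

# A single unit with a 10% failure probability vs. a duplexed pair.
single = parallel_failure_prob(0.10, 1)   # 0.10
duplex = parallel_failure_prob(0.10, 2)   # 0.01 under independence
```

The independence assumption is the catch in practice: common-cause failures (shared power, shared software, shared environment) can make the real improvement far smaller than p squared.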

5.6.2 Coping with Uncertainty

Two sources of uncertainty still need to be considered: one intrinsic in probability theory and the other born of all-too-human error. First, the laws of chance exclude the prediction of when and where a particular failure may occur. That remains true even when enough statistical information about the system’s operation exists for a reliable estimate of how likely it is to fail. The probability of failure, itself, is surrounded by a band of uncertainty that expands or shrinks depending on how much data are available and how well the system is understood. This statistical level of confidence is usually expressed as a standard deviation about the mean or a related measure. Finally, if the system is so new that few or no data have been recorded for it and analogous data from similar systems must be used to get a handle on potential risks, then there is uncertainty over how well the estimate resembles the actual case.

At the human interface, the challenge is to design a system so that it will not only operate as it should, but also leave the operator little room for erroneous judgment. Additional risk can be introduced if a designer cannot anticipate which information an operator may need to digest and interpret under the daily pressures of the job, especially when an emergency starts to develop.

From an operational point of view, poor design can introduce greater risk, sometimes with tragic consequences. After the U.S.S. Vincennes on July 3, 1988, mistook Iran Air Flight 655 for an enemy F-14 and shot down the airliner over international waters in the Persian Gulf, Rear Admiral Eugene La Roque blamed the calamity on the bewildering complexity of the Aegis radar system. He is quoted as saying that “we have scientists and engineers capable of devising complicated equipment without any thought of how it will be integrated into a combat situation or that it might be too complex to operate. These machines produce too much information and don’t sort the important from the unimportant. There’s a disconnection between technical effort and combat use.”

All told, human behavior is not nearly as predictable as that of an engineered system. Today, there are many techniques for quantifying with fair reliability the probability of slips, lapses, and misperceptions. Still, remaining uncertainty in the prediction of individual behavior contributes to residual risk in all systems and projects.

5.6.3 Non-probabilistic Evaluation Methods when Uncertainty Is Present

When considering a capital investment, there are four major sources of uncertainty that are nearly always present in engineering economic studies:

1. Inaccuracy of the cash flow estimates, especially benefits related to new products or technology.

2. Relationship between type of business and future health of the company. Certain lines of business are inherently unstable, such as oil drilling, entertainment, and luxury goods.

3. Type of physical plant and equipment involved. Some structures have definite economic lives and market values, whereas others are unpredictable. The cost of specialized plants and equipment is often difficult to estimate, especially for first-time projects.

4. Length of the project and study period. As the length increases, so does the variability in the estimates of operations and maintenance costs, as well as presumed benefits.

As discussed in Chapter 3, breakeven analysis and sensitivity analysis are two simple ways of addressing uncertainty. Other approaches include scenario analysis, risk-adjusted MARR, and reduction of useful life. Breakeven analysis is commonly used when the selection process is dependent on a single factor, such as capacity, sales, or ROR, and only two alternatives are being considered. In this case, we identify the one whose marginal benefit is greater and solve for the value of the factor that makes the two alternatives equally attractive. Above the breakeven point, the alternative with the greater marginal benefit is preferable.
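As a sketch of single-factor breakeven analysis with two alternatives, suppose each alternative's total cost is linear in the deciding factor (here, annual production volume). The fixed and variable costs below are hypothetical, chosen only to illustrate the mechanics:

```python
# Hypothetical two-alternative breakeven: total cost = fixed + variable * x,
# where x is the single deciding factor (e.g., annual production volume).
fixed_a, var_a = 10_000.0, 5.0   # alternative A: high fixed, low variable cost
fixed_b, var_b = 4_000.0, 8.0    # alternative B: low fixed, high variable cost

# Setting fixed_a + var_a * x = fixed_b + var_b * x and solving for x:
x_star = (fixed_a - fixed_b) / (var_b - var_a)

# Above x_star units per year, alternative A (the lower marginal cost) wins;
# below it, alternative B is preferable.
print(f"breakeven volume: {x_star:.0f} units")
```

With these numbers the breakeven point is 2,000 units per year; the alternative with the greater marginal benefit (lower variable cost) is preferred above that volume, matching the rule stated in the text.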

Sensitivity analysis is aimed at assessing the relative magnitude of a change in the measure of interest, such as NPV, caused by one or more changes in estimated factors, such as interest rate and useful life. The results can often be visualized graphically, as shown in the following example.

Example 5-5 (Sensitivity Analysis)

Your office is considering the acquisition of a new workstation, but there is some uncertainty about which model to buy and the expected cash flows. Before making the investment, your supervisor has asked you to investigate the NPV of a generic system over a range of ±40% with respect to (a) capital investment, (b) annual net cash flow, (c) salvage value, and (d) useful life. The following data characterize the investment:

Capital investment        −$11,500
Annual revenues            $5,000
Annual expenses           −$2,000
Estimated salvage value    $1,000
Useful life                6 years
MARR                       10%

Solution The first step is to compute the NPV for the given data.

Baseline NPV = −$11,500 + $3,000(P/A, 10%, 6) + $1,000(P/F, 10%, 6) = $2,130

1. When the initial investment varies by ±p%,

NPV(p) = −(1 + p/100)($11,500) + $3,000(P/A, 10%, 6) + $1,000(P/F, 10%, 6)

2. When the annual net cash flow varies by ±p%,

NPV(p) = −$11,500 + (1 + p/100)($3,000)(P/A, 10%, 6) + $1,000(P/F, 10%, 6)

3. When the salvage value varies by ±p%,

NPV(p) = −$11,500 + $3,000(P/A, 10%, 6) + (1 + p/100)($1,000)(P/F, 10%, 6)

4. When the useful life varies by ±p%,

NPV(p) = −$11,500 + $3,000[P/A, 10%, 6(1 + p/100)] + $1,000[P/F, 10%, 6(1 + p/100)]

Plotting the functions NPV(p) for −40% ≤ p ≤ +40% gives rise to what is known as a spider chart, as shown in Figure 5.5. A frame of reference is provided by the baseline result.

Figure 5.5 Spider chart for sensitivity analysis.

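The spider-chart data for Example 5-5 can be generated with a short script from the standard discount factors (P/A, i, n) = (1 − (1 + i)^−n)/i and (P/F, i, n) = (1 + i)^−n. The code below is an illustrative sketch using the example's estimates:

```python
# Sensitivity (spider chart) sketch for Example 5-5.
def pa(i, n):   # (P/A, i, n): present worth of a uniform series
    return (1 - (1 + i) ** -n) / i

def pf(i, n):   # (P/F, i, n): present worth of a single future sum
    return (1 + i) ** -n

def npv(invest=11_500, cash=3_000, salvage=1_000, life=6, marr=0.10):
    """NPV with each factor held at its estimate unless overridden."""
    return -invest + cash * pa(marr, life) + salvage * pf(marr, life)

baseline = npv()   # about $2,130

# Vary one factor at a time by p percent, holding the others fixed.
for p in (-40, -20, 0, 20, 40):
    f = 1 + p / 100
    row = (npv(invest=11_500 * f), npv(cash=3_000 * f),
           npv(salvage=1_000 * f), npv(life=6 * f))
    print(f"p = {p:+d}%  " + "  ".join(f"{v:8.0f}" for v in row))
```

Each column of printed values traces one leg of the spider chart; the steepest legs (investment and annual net cash flow) are the factors to which the decision is most sensitive.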

Scenario analysis, or optimistic-pessimistic estimation, is used to establish a range of values for the measure of interest. Typically, the optimistic estimate is defined to have only a 5% chance of being exceeded by the actual outcome, whereas the pessimistic estimate is defined so that it is exceeded approximately 95% of the time.

Example 5-6 (Scenario Analysis)

An ultrasound inspection device for which optimistic, most likely, and pessimistic estimates are given below is being considered for purchase. If the MARR is 8%, then what course of action would you recommend? Base your answer on net annual worth (NAW).

Measure              Optimistic (O)   Most likely (M)   Pessimistic (P)
Capital investment     −$150,000        −$150,000         −$150,000
Annual revenues         $110,000          $70,000           $50,000
Annual costs            −$20,000         −$43,000          −$57,000
Salvage value                 $0               $0                $0
Useful life             18 years         10 years            8 years
NAW                      $73,995           $4,650          −$33,100

Solution Whether to accept or reject the purchase is somewhat arbitrary and would depend strongly on the decision maker’s attitude toward risk. A conservative approach would be to

accept the investment if NAW(P) > 0;
reject the investment if NAW(O) < 0;
otherwise, do more analysis.

Applying this rule tells us that more information is needed. One possible approach at this point is to evaluate all combinations of outcomes and see how many are above some threshold, say $50,000, and how many are below, say, $0. Following this idea, we note that annual revenues, annual costs, and the useful life are the independent inputs that vary from one scenario to another. This means that there are 3^3 = 27 possible outcomes. The NAW of each is listed in the table below, rounded to the nearest $1,000. For example, the first block of 9 data entries represents the results when the annual revenues and useful life are varied over the three scenarios, whereas the annual costs are held fixed at the optimistic estimate.

                                 Annual costs
                      O                 M                 P
                 Useful life       Useful life       Useful life
Annual revenues   O    M    P       O    M    P       O    M    P
O                74   68   64      51   45   41      37   31   27
M                34   28   24      11    5    1      −3   −9  −13
P                14    8    4      −9  −15  −19     −23  −29  −33

The computations indicate that the NAW> $50,000 in 4 of 27 scenarios and NAW< $0 in 9 out of 27. Coupled with the results for the strictly optimistic, most likely, and pessimistic scenarios, this might not be sufficient for a positive decision.
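The 27 scenario outcomes of Example 5-6 can be reproduced by enumeration, using the capital recovery factor (A/P, i, n) = i/(1 − (1 + i)^−n) to annualize the investment. Only the input data come from the example; the script itself is an illustrative sketch:

```python
from itertools import product

def ap(i, n):   # (A/P, i, n): capital recovery factor
    return i / (1 - (1 + i) ** -n)

# Estimates from Example 5-6; annual costs stored as positive magnitudes.
revenues = {"O": 110_000, "M": 70_000, "P": 50_000}
costs    = {"O":  20_000, "M": 43_000, "P": 57_000}
lives    = {"O": 18,      "M": 10,     "P": 8}
invest, marr = 150_000, 0.08

# NAW for every (revenue, cost, life) scenario combination.
naw = {}
for r, c, n in product("OMP", repeat=3):
    naw[(r, c, n)] = revenues[r] - costs[c] - invest * ap(marr, lives[n])

above = sum(v > 50_000 for v in naw.values())
below = sum(v < 0 for v in naw.values())
print(above, below)   # 4 scenarios above $50,000, 9 below $0
```

The counts match the text (NAW > $50,000 in 4 of 27 scenarios, NAW < $0 in 9 of 27), and the individual values reproduce the table above after rounding to the nearest $1,000.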

The risk-adjusted MARR method involves the use of higher discount rates for those alternatives that have a relatively high degree of uncertainty and lower discount rates for projects that are at the other end of the spectrum. A higher-than-usual MARR implies that distant cash flows are less important than current or near-term cash flows. This approach is widely used in practice but contains many pitfalls, the most serious being that the uncertainty is not made explicit. As a consequence, the analyst should first try other methods.

Example 5-7 (Risk-Adjusted MARRs)

As an analyst for an investment firm, you are considering two alternatives that have the same initial cost and economic life but different cash flows, as indicated in the table below. Both are affected by uncertainty to some degree; however, alternative P is thought to be more uncertain than alternative Q. If the firm’s risk-free MARR is 10%, then which is the better investment?

End-of-year, k    Alternative P    Alternative Q
0                  −$160,000        −$160,000
1                   $120,000          $20,827
2                    $60,000          $60,000
3                        $0          $120,000
4                    $60,000          $60,000

Solution At the risk-free MARR of 10%, both alternatives have the same NPV= $39,659. All else being equal, alternative Q should be chosen because it is less uncertain. To take into account the degree of uncertainty, we now use a prescribed risk-adjusted MARR of 20% for P and 17% for Q. Performing the same computations, we get

NPV_P(20%) = −$160,000 + $120,000(P/F, 20%, 1) + $60,000(P/F, 20%, 2) + $60,000(P/F, 20%, 4) = $10,602

NPV_Q(17%) = −$160,000 + $20,827(P/F, 17%, 1) + $60,000(P/F, 17%, 2) + $120,000(P/F, 17%, 3) + $60,000(P/F, 17%, 4) = $8,575

implying that alternative P is preferable. This is a reversal of the first result.

Figure 5.6 plots the NPV of the two alternatives as a function of the MARR. The breakeven point is 10%. For MARRs beyond 10%, P is always the better choice.

Figure 5.6 NPV comparisons for risk-adjusted MARRs.

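The risk-adjusted comparison amounts to discounting each alternative at its own MARR. A minimal sketch, using the cash flows from the example:

```python
def npv(cash_flows, rate):
    """NPV of end-of-year cash flows indexed from year 0."""
    return sum(cf * (1 + rate) ** -k for k, cf in enumerate(cash_flows))

# Cash flows for alternatives P and Q (years 0 through 4).
p = [-160_000, 120_000, 60_000, 0, 60_000]
q = [-160_000, 20_827, 60_000, 120_000, 60_000]

print(npv(p, 0.10), npv(q, 0.10))   # both about $39,659 at the risk-free MARR
print(npv(p, 0.20))                 # about $10,602 at the 20% adjusted MARR
print(npv(q, 0.17))                 # about $8,575 at the 17% adjusted MARR
```

Sweeping a common rate from 0% to 25% with this function reproduces Figure 5.6: the two curves cross at 10%, and P dominates beyond it because its cash flows arrive earlier.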

Another technique used to compensate for uncertainty is based on truncating the project life to something less than its estimated useful life. By dropping from consideration those revenues and costs that may occur after the reduced study period, heavy emphasis is placed on rapid recovery of investment capital in the early years. Consequently, this method is closely related to the payback technique discussed in Chapter 3.

Implementation can be carried out in one of two ways. The first is to reduce the project life by some percentage and discard all subsequent cash flows. The NPVs of the alternatives are then compared for the shortened life. The second is to determine the minimal life of the project that will produce an acceptable ROR. If this life is within the expectations of the decision maker, say, in terms of the maximum payback period, then the project is viewed as acceptable.

Example 5-8 (Reduction of Useful Life)

A proposed new product line requires $2,000,000 in capital over a 2-year period. Estimated revenues and expenses over the product’s anticipated 8-year commercial life are shown in Table 5.7. The company’s maximum payback period is 4 years (after taxes), and its effective tax rate is 40%. The investment will be depreciated by the modified accelerated cost recovery system (MACRS) using a 5-year class life.

TABLE 5.7  Data and Results for Reduction of Useful Life Example

                                End of year ($M)
Cash flows            −1     0     1     2     3      4      5      6      7      8
Initial investment   −0.9  −1.1    0     0     0      0      0      0      0      0
Annual revenues        0     0    1.8   2.0   2.1    1.9    1.8    1.8    1.7    1.5
Annual expenses        0     0   −0.8  −0.9  −0.9   −0.9   −0.8   −0.8   −0.8   −0.7
ATCF                 −0.9  −1.1  0.76  0.92  0.88   0.70   0.70   0.65   0.54   0.48
ROR                    —     —     —     —   10.3%  18.6%  23.6%  26.6%  28.3%  29.4%

The company’s management is concerned about the financial attractiveness of this venture should unforeseen circumstances arise (e.g., loss of market or technological breakthroughs by the competition). They are very leery of investing a large amount of capital in this product because competition is fierce and companies that wait to enter the market may be able to purchase improved technology. You have been given the task of assessing the downside profitability of the product when the primary concern is its staying power (life) in the marketplace. If the after-tax MARR is 15%, then what do you recommend? State any necessary assumptions.

Solution

The first step is to compute the after-tax cash flow (ATCF). To do this, we assume that the salvage value of the investment is zero, that the MACRS deductions are unaffected by the useful life of the product, and that they begin in the first year of commercial operations (year 1). The results are given in Table 5.7.

Next we compute the ROR of the investment as a function of the product’s presumed life. For the first 2 years, the undiscounted ATCF is negative so there is no ROR. In year 3, the ROR is 10.3% and climbs to 29.4% if the full commercial life is realized. A plot of the after-tax ROR versus the actual life of the product line is shown in Figure 5.7. To make at least 15% per year after taxes, the product line must last 4 or more years. It can be quickly determined from the data in the table that the simple payback period is 3 years. Consequently, this venture would seem to be worthwhile as long as its actual life is at least 4 years.
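The parametric analysis behind Figure 5.7 can be reproduced with a short script. The sketch below takes the rounded ATCF values from Table 5.7 as given (rather than re-deriving them from the MACRS schedule) and finds the rate of return by bisection for each assumed product life; because the inputs are rounded, the computed RORs match the table only to within a few tenths of a percent.

```python
def npv(rate, flows):
    """Discount a cash-flow list (flows[0] occurs now) at the given rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, hi=10.0, tol=1e-9):
    """Bisection search for the positive rate that drives NPV to zero.
    Returns None when the undiscounted ATCF is not positive (no positive ROR).
    Assumes the investment-then-returns pattern of this example, so NPV is
    positive at rate 0 and negative at a sufficiently high rate."""
    if npv(0.0, flows) <= 0:
        return None
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(lo, flows) * npv(mid, flows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# ATCF from Table 5.7 ($M): years -1, 0, then commercial years 1..8
atcf = [-0.9, -1.1, 0.76, 0.92, 0.88, 0.70, 0.70, 0.65, 0.54, 0.48]

# ROR as a function of the product's assumed life (years of operation)
for life in range(1, 9):
    r = irr(atcf[:2 + life])
    print(life, "none" if r is None else f"{100 * r:.1f}%")
```

Running the loop shows no positive ROR for the first two operating years and a rate that climbs with each additional year of life, crossing the 15% MARR at a life of 4 years, consistent with the recommendation below.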

Figure 5.7 After-tax parametric analysis for product.

5.6.4 Risk-Benefit Analysis

Risk-benefit analysis is a generic term for techniques that encompass risk assessment and the inclusive evaluation of risk, costs, and benefits of alternative projects or policies. Like other quantitative methods, the steps in risk-benefit analysis include specifying objectives and goals for the project options, identifying constraints, defining the scope and limits for the study itself, and developing measures of effectiveness of feasible alternatives. Ideally, these steps should be completed in conjunction with a responsible decision maker, but, in many cases, this is not possible. It is therefore incumbent upon the analyst to take exceptional care in stating assumptions and limitations, especially because risk-benefit analysis is frequently controversial.

The principal task of this methodology is to express numerically, insofar as possible, the risks and benefits that are likely to result from project outcomes. Calculating these outcomes may require scientific procedures or simulation models to estimate the likelihood of an accident or mishap, and its probable consequences. Finally, a composite assessment that aggregates the disparate measures associated with each alternative is carried out. The conclusions should incorporate the results of a sensitivity analysis in which each significant assumption or parameter is varied in turn to judge its effect on the aggregated risks, costs, and benefits.

One approach to risk assessment is based on the three primary steps of systems engineering, as shown in Figure 5.8 (Sage and White 1980). These involve the formulation, analysis, and interpretation of the impacts of alternatives on the needs, and the institutional and value perspectives of the organization. In risk formulation, we determine or identify the types and scope of the anticipated risks. A variety of systemic approaches, such as the nominal group technique, brainstorming, and the Delphi method, are especially useful at this stage (Makridakis et al. 1997). It is important to identify not only the risk elements but also the elements that represent needs, constraints, and alternatives associated with possible risk reduction with and without technological innovation. This can be done only in accordance with a value system.

Figure 5.8 Systems engineering approach to risk assessment.

In the analysis step, we forecast the failures, mishaps, and other consequences that might accompany the development and implementation of the project. This will include estimation of the probabilities of outcomes and the associated magnitudes. Many methods, such as cross-impact analysis, interpretive structural modeling, economic modeling, and mathematical programming, are potentially useful at this step. The inputs are those elements determined during problem formulation.

In the final step, we attempt to give an organizational or political interpretation to the risk impacts. This includes specification of individual and group utilities for the final evaluation. Decision making follows. The economic methods of B/C analysis are most commonly used at this point. Extension to include the results of the risk assessment, however, is not trivial. A principal problem is that risks and benefits may be measured in different units and therefore may not be strictly additive. Rather than trying to convert everything into a single measure, it may be better simply to present the risks and net benefits in their respective units or categories.

To aid in interpreting the results, risk-return graphs, similar to the C-E graph displayed in Figure 5.4, can be drawn to highlight the efficient frontier. Risk profiles may also be useful. Figure 5.9 illustrates a perspective provided by a risk analysis profile. Projects 1 and 2 are most likely to yield lifetime profits of $100,000 and $200,000, respectively. So, for some decision makers, project 2 might be considered superior if the B/C ratio were favorable. Nevertheless, it is worth probing the data a bit more. Project 2 has a finite probability of returning a loss but a higher expected profit than project 1. The probability that project 2 will yield lower profits than project 1 is known as the downside risk and can be found by a breakeven analysis. Given these data, a risk-averse person would be inclined to select project 1, which has a big chance (0.50) of realizing a moderate profit of at least $100K, with little chance of anything much less or much greater; that is, project 1 has a small variance. A gambler would lean toward project 2, which has a small chance at a very large profit.

Figure 5.9 Illustration of risk profile.

The types of risk profiles contained in Figure 5.9 make the consequences of outcomes more visible and enable a decision maker to behave in a manner consistent with his or her attitude toward risk, be it conservative or freewheeling. Generally speaking, the amount of data needed to construct a graph such as Figure 5.9 is small and relatively easy to obtain if a historical database exists. It can be solicited from the engineers and marketing personnel who are familiar with an organization’s previous projects. If no collective experience can be found within the organization, then more subjective or arbitrary procedures would be required. A number of software packages are available to help with the construction effort.

5.6.5 Limits of Risk Analysis

The ultimate responsibility for project selection and implementation goes beyond any risk assessment and rests squarely on the shoulders of top management. Although formal analysis can reveal unexpected vulnerabilities in large complex projects, it remains an academic exercise unless the managers take the results seriously and ensure that the project is managed conscientiously. Safety must be designed into a system from the beginning, and good operating practice is essential to the success of any continuing program of risk management. Controversy still rages, for example, over whether the vent-gas scrubber—a key element in the safety system of the Union Carbide pesticide plant in Bhopal, India that exploded in 1984, killing more than 3,000 people—was designed adequately to handle a true emergency. But even if it had been, neither it nor a host of other safety features were maintained in working order.

For risks to be ascertained at all, project managers must agree on the value of assessing them in engineering design. It has often been said that you can degrade the performance of a system by poor quality control, but you cannot enhance a poor design by good quality control. At the point at which project managers are responsible for crucial decisions, risk assessment is one more tool that can help them weigh alternatives so that their choices are informed and deliberate rather than isolated or, worse, repetitions of past mistakes.

5.7 Decision Trees

Decision trees, also known as decision flow networks and decision diagrams, may depict and facilitate analysis of problems that involve sequential decisions and variable outcomes over time. They make it possible to look at a large, complicated problem in terms of a series of smaller, simpler problems while explicitly considering risk and future consequences.

A decision tree is a graphical method of expressing, in chronological order, the alternative actions that are available to a decision maker and the outcomes determined by chance. In general, they are composed of the following two elements, as shown in Figure 5.10.

Figure 5.10 Structure of decision tree.

1. Decision nodes. At a decision node, usually designated by a square, the decision maker must select one alternative course of action from a finite set of possibilities. Each possible course of action is drawn as a branch emanating from the right side of the square. When there is a cost associated with an alternative, it is written along the branch. Each alternative branch may result in a payoff, another decision node, or a chance node.

2. Chance nodes. A chance node, designated as a circle, indicates that a random event is expected at this point in the process; that is, one of a finite number of states of nature may occur. The states of nature are shown on the tree as branches to the right of the chance nodes. The corresponding probabilities are similarly written above the branches. The states of nature may be followed by payoffs, decision nodes, or more chance nodes.

Constructing a Tree

A tree is started on the left of the page with one or more decision nodes. From these, all possible alternatives are drawn branching out to the right. Then, a chance node or second decision node, associated with either subsequent events or decisions, respectively, is added. Each time a chance node is added, the appropriate states of nature with their corresponding probabilities emanate rightward from it. The tree continues to branch from left to right until the final payoffs are reached. The tree shown in Figure 5.10 represents a single decision with two alternatives, each leading to a chance node with three possible states of nature.

Finding a Solution

To solve a tree, it is customary to divide it into two segments: (1) chance nodes with all their emerging states of nature (Figure 5.11a) and (2) decision nodes with all their alternatives (Figure 5.11b). The solution process starts with those segments that end in the final payoffs, at the right side of the tree, and continues to the left, segment by segment, in the reverse order from which it was drawn.

Figure 5.11 Segments of tree.

1. Chance node segments. The expected monetary value (EMV) of all of the states of nature that emerge from a chance node must be computed (multiply payoffs by probabilities and sum the results). The EMV is then written above the node inside a rectangle (labeled a “position value” in Figure 5.10). These expected values are considered as payoffs for the branch to the immediate left.

2. Decision node segments. At a decision point, the payoffs given (or computed) for each alternative are compared and the best one is selected. All others are discarded. The corresponding branch of a discarded alternative is marked by the symbol ∥ to indicate that the path is suboptimal.

This procedure is based on principles of dynamic programming and is commonly referred to as the “rollback” step. It starts at the endpoints of the tree where the expected value at each chance node and the optimal value at each decision node are computed. Suboptimal choices at each decision node are dropped, with the rollback continuing until the first node of the tree is reached. The optimal policy is recovered by identifying the choices made at each decision node that maximize the value of the objective function from that point onward.
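The rollback step can be sketched in a few lines of code. The numbers below are hypothetical (they do not come from the text) and illustrate the structure of Figure 5.10: one decision node whose two alternatives each lead to a chance node with three states of nature.

```python
# Rollback for a one-stage decision tree: each alternative leads to a
# chance node; pick the alternative with the highest EMV net of its cost.
# All payoffs, probabilities, and costs below are hypothetical.

def emv(states):
    """Expected monetary value of a chance node: sum of probability * payoff."""
    assert abs(sum(p for p, _ in states) - 1.0) < 1e-9  # probabilities sum to 1
    return sum(p * payoff for p, payoff in states)

alternatives = {
    # name: (branch cost, [(probability, payoff), ...])
    "A1": (10.0, [(0.3, 100.0), (0.5, 50.0), (0.2, -20.0)]),
    "A2": (0.0,  [(0.2, 60.0), (0.5, 40.0), (0.3, 0.0)]),
}

# Position value of each chance node, net of the branch cost to reach it
position_values = {name: emv(states) - cost
                   for name, (cost, states) in alternatives.items()}
best = max(position_values, key=position_values.get)
print(best, position_values)
```

With these hypothetical figures, alternative A1 has a position value of 51 − 10 = 41 against 32 for A2, so the rollback selects A1 and marks the A2 branch as suboptimal.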

Example 5-8 (Deterministic Replacement Problem)

The most basic form of a decision tree occurs when each alternative results in a single outcome; that is, when certainty is assumed. The replacement problem defined in Figure 5.12 for a 9-year planning horizon illustrates this situation. The numbers above the branches represent the returns per year for the specified period should the replacement be made at the corresponding decision point. The numbers below the branches are the costs associated with that decision. For example, at node 3, keeping the machine results in a return of $3K per year for 3 years, and a total cost of $2K.

Figure 5.12 Deterministic replacement problem.

As can be seen, the decision as to whether to replace the old machine with the new machine does not occur just once, but recurs periodically. In other words, if the decision is made to keep the old machine at decision point 1, then later, at decision point 2, a choice again has to be made. Similarly, if the old machine is chosen at decision point 2, then a choice has to be made at decision point 3. For each alternative, the cash inflow and duration of the project is shown above the branch, and the cash investment opportunity cost is shown below the branch. At decision point 2, for example, if a new machine is purchased for the remaining 6 years, then the net benefits from that point on are (6 yr)($6.5K/yr) returns − $17.0K opportunity cost = $22.0K net benefits. Alternatively, if the old machine is kept at decision point 2, then we have ($3.5K/yr)(3 yr) returns − $1.0K opportunity cost + $7.0K net benefits associated with decision point 3 = $16.5K net benefits.

For this problem, one is concerned initially with which alternative to choose at decision point 1, but an intelligent choice here should take into account the later alternatives and decisions that stem from it. Hence, the correct procedure in analyzing this type of problem is to start at the most distant decision point, determine the best alternative and quantitative result of that alternative, and then roll back to each successive decision point, repeating the procedure until finally the choice at the initial or present decision point is determined. By this procedure, one can make a present decision that directly takes into account the alternatives and expected decisions of the future.

For simplicity in this example, timing of the monetary outcomes first will be neglected, which means that a dollar has the same value regardless of the year in which it occurs. Table 5.8 displays the necessary computations and implied decisions. Note that the monetary outcome of the best alternative at decision point 3 ($7.0K for the “old”) becomes part of the outcome for the old alternative at decision point 2. That is, if the decision at node 2 is to continue to use the current machine rather than replace it, then the monetary value associated with this decision equals the EMV at node 3 ($7K) plus the transition benefit from node 2 to 3 ( $3.5/yr×3 yr−$1K=$9.5K ), or $16.5K. Similarly, the best alternative at decision point 2 ($22.0K for the “new”) becomes part of the outcome for the “old” alternative at decision point 1.

TABLE 5.8  Computational Results for Replacement Problem in Figure 5.12

Decision point  Alternative  Monetary outcome                            Choice
3               Old          ($3K/yr)(3 yr) − $2K = $7.0K                Old
                New          ($6.5K/yr)(3 yr) − $18K = $1.5K
2               Old          $7K + ($3.5K/yr)(3 yr) − $1K = $16.5K
                New          ($6.5K/yr)(6 yr) − $17K = $22.0K            New
1               Old          $22.0K + ($4K/yr)(3 yr) − $0.8K = $33.2K    Old
                New          ($5K/yr)(9 yr) − $15K = $30.0K

By following the computations in Table 5.8, one can see that the answer is to keep the old machine now and plan to replace it with a new machine at the end of 3 years (at decision point 2). In practice, an organization would re-evaluate the decision on a rolling, annual basis and may, in fact, replace the machine prior to three years or may delay machine replacement beyond three years.
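The undiscounted rollback for this replacement problem can be verified with a few lines of code (a sketch mirroring the computations above; all amounts in $K).

```python
# Undiscounted rollback for the replacement problem of Figure 5.12.
# Each alternative is worth: annual return * years - opportunity cost,
# plus the value of the best alternative at the next decision point.

def best(alternatives):
    """Return (value, name) of the best alternative at a decision node."""
    return max((v, name) for name, v in alternatives.items())

# Decision point 3 (last 3 years)
v3, c3 = best({"old": 3 * 3 - 2, "new": 6.5 * 3 - 18})
# Decision point 2: keeping the old machine leads on to decision point 3
v2, c2 = best({"old": 3.5 * 3 - 1 + v3, "new": 6.5 * 6 - 17})
# Decision point 1: keeping the old machine leads on to decision point 2
v1, c1 = best({"old": 4 * 3 - 0.8 + v2, "new": 5 * 9 - 15})
print(c1, v1)
```

The rollback reproduces Table 5.8: keep the old machine at decision point 1 ($33.2K), plan on the new machine at decision point 2 ($22.0K), and keep the old machine at decision point 3 ($7.0K).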

Example 5-9 (Timing Considerations)

For decision tree analyses, which involve working from the most distant decision point to the nearest decision point, the easiest way to take into account the timing of money is to use the present value approach and thus discount all monetary outcomes to the decision points in question. To demonstrate, Table 5.9 gives the computations for the same replacement problem of Figure 5.12 using an interest rate of 12% per year.

TABLE 5.9  Computations for Replacement Problem with 12% Interest Rate

Decision point  Alternative  Monetary outcome                                            Choice
3               Old          $3K(P/A,12%,3) − $2K = $3K(2.402) − $2K = $5.21K            Old
                New          $6.5K(P/A,12%,3) − $18K = $6.5K(2.402) − $18K = −$2.39K
2               Old          $3.5K(P/A,12%,3) − $1K + $5.21K(P/F,12%,3)
                             = $3.5K(2.402) − $1K + $5.21K(0.7118) = $11.11K             Old
                New          $6.5K(P/A,12%,6) − $17K = $6.5K(4.111) − $17K = $9.72K
1               Old          $4K(P/A,12%,3) − $0.8K + $11.11K(P/F,12%,3)
                             = $4K(2.402) − $0.8K + $11.11K(0.7118) = $16.71K            Old
                New          $5.0K(P/A,12%,9) − $15K = $5.0K(5.328) − $15K = $11.64K

Note from Table 5.9 that when taking into account the effect of timing by calculating PWs at each decision point, the indicated choice is not only to keep the old at decision point 1, but also to keep the old at decision points 2 and 3. This result is not surprising because the high interest rate tends to favor the alternatives with lower initial investments, and it also tends to place less weight on long-run returns. When the interest rate drops to 10%, the solution is the same as that for Example 5-8.
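The 12% computations follow the same rollback pattern once the (P/A) and (P/F) factors are computed from first principles. The sketch below uses exact factors rather than the four-digit table values, so the results agree with Table 5.9 to within rounding (amounts in $K).

```python
# Discounted rollback at i = 12% for the replacement problem.

def pa(i, n):
    """(P/A, i, n): present worth of $1 per year for n years."""
    return (1 - (1 + i) ** -n) / i

def pf(i, n):
    """(P/F, i, n): present worth of $1 received n years hence."""
    return (1 + i) ** -n

i = 0.12
# Decision point 3
v3 = max(3 * pa(i, 3) - 2, 6.5 * pa(i, 3) - 18)
# Decision point 2: the "old" branch carries node 3's value back 3 years
v2 = max(3.5 * pa(i, 3) - 1 + v3 * pf(i, 3), 6.5 * pa(i, 6) - 17)
# Decision point 1: the "old" branch carries node 2's value back 3 years
v1 = max(4 * pa(i, 3) - 0.8 + v2 * pf(i, 3), 5 * pa(i, 9) - 15)
print(round(v3, 2), round(v2, 2), round(v1, 2))
```

At each decision point the "old" branch dominates, confirming the keep-the-old-machine policy at 12%.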

Example 5-10 (Automation Decision Problem with Random Outcomes)

In this problem, the decision maker must decide whether to automate a given process. Depending on the technological success of the automation project, the results will turn out to be poor, fair, or excellent. The net payoffs for these outcomes (expressed in NPVs and including the cost of the equipment) are −$90K, $40K, and $300K, respectively. The initially estimated probabilities that each outcome will occur are 0.5, 0.3, and 0.2. Figure 5.13 is a decision tree depicting this simple situation. The calculations for the two alternatives are

Automate: −$90K( 0.5 )+$40K( 0.3 )+$300K( 0.2 )=$27K

Don’t automate: $0

Figure 5.13 Automation problem before consideration of technology study.

These calculations show that the best choice for the firm is to automate on the basis of an expected NPV of $27K versus $0 if it does nothing. Nevertheless, this may not be a clear-cut decision because of the possibility of a $90K loss. Depending on the decision maker’s attitude toward risk and confidence in the given data, he or she might want to gather more information.

Suppose that it is possible for a decision maker to conduct a technology study for a cost of $10K. The study will disclose that the enabling technology is “shaky,” “promising,” or “solid” corresponding to ultimate outcomes of “poor,” “fair,” and “excellent,” respectively. Let us assume that the probabilities of the various outcomes, given the technology study findings, are as shown in Figure 5.14, which is a decision tree for the entire problem. This diagram shows expected future events (outcomes), along with their respective cash flows and probabilities of occurrence. The calculation of these probabilities requires the use of Bayes’ theorem given in Appendix 5A at the end of this chapter and discussed in a later subsection. To use Bayes’ theorem, it is necessary to know all conditional probabilities of the form P( study outcome|state ); e.g., P( shaky|poor ) or P( excellent|promising ).

Figure 5.14 Automation problem with technology study taken into account.

The rectangular blocks represent (decision) points in time at which the decision maker must elect to take one and only one of the paths (alternatives) available. These decisions are normally based on a quantifiable measure, such as money, which has been determined to be the predominant “cost” or “reward” for comparing alternatives. As mentioned, the general approach is to find the action or alternative that will maximize the expected NPV equivalent of future cash flows at each decision point, starting with the furthest decision point(s) and then rolling back until the initial decision point is reached.

Once again, the chance (circular) nodes represent points at which uncertain events (outcomes) occur. At a chance node, the expected value of all paths that lead (from the right) into the node can be calculated as the sum of the anticipated value of each path multiplied by its respective probability. (The probabilities of all paths that lead into a node must sum to 1.) As the project progresses through time, the chance nodes are automatically reduced to a single outcome on the basis of the “state of nature” that occurs at that time.

The solution to the problem in Figure 5.14 is given in Table 5.10. It can be noted that the alternative “technology study” is shown to be best with an expected NPV of $34.62K. (To check the solution in Table 5.10, perform the rollback procedure on Figure 5.14, indicating which branches should be eliminated.)

TABLE 5.10  Expected NPV Calculations for the Automation Problem

Decision point  Alternative       Expected monetary outcome                                   Choice
2A              Automate          −$90K(0.73) + $40K(0.22) + $300K(0.05) = −$41.9K
                Don’t automate    $0                                                          Don’t automate
2B              Automate          −$90K(0.43) + $40K(0.34) + $300K(0.23) = $43.9K             Automate
                Don’t automate    $0
2C              Automate          −$90K(0.21) + $40K(0.37) + $300K(0.42) = $121.9K            Automate
                Don’t automate    $0
1               Automate          (see calculations above) = $27K
                Don’t automate    $0
                Technology study  $0(0.41) + $43.9K(0.35) + $121.9K(0.24) − $10K = $34.62K    Technology study
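The two-stage rollback for the automation problem can be checked with a short script (a sketch; the payoffs and the revised probabilities are those shown in Figure 5.14, all amounts in $K).

```python
# Rollback for the automation problem. Payoffs correspond to the
# poor / fair / excellent outcomes; "don't automate" is always worth $0.

payoffs = [-90, 40, 300]  # poor, fair, excellent ($K, NPV incl. equipment)

def automate_emv(probs):
    """Best second-stage decision: automate (EMV) or don't ($0)."""
    return max(sum(p * x for p, x in zip(probs, payoffs)), 0.0)

# Second-stage decisions after each possible study finding
shaky     = automate_emv([0.73, 0.22, 0.05])   # node 2A -> don't automate
promising = automate_emv([0.43, 0.34, 0.23])   # node 2B -> automate
solid     = automate_emv([0.21, 0.37, 0.42])   # node 2C -> automate

# First-stage alternatives
no_study = automate_emv([0.5, 0.3, 0.2])                          # automate now
study = 0.41 * shaky + 0.35 * promising + 0.24 * solid - 10       # study costs $10K
print(round(study, 2), round(no_study, 2))
```

The study branch is worth $34.62K against $27K for automating immediately, reproducing the conclusion of Table 5.10.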

5.7.1 Decision Tree Steps

Now that decision trees (diagrams) have been introduced and the mechanics of using them to arrive at an initial decision have been illustrated, the steps involved can be summarized as follows:

1. Identify the points of decision and alternatives available at each point.

2. Identify the points of uncertainty and the type or range of possible outcomes at each point (layout of decision flow network).

3. Estimate the values needed to conduct the analysis, especially the probabilities of different outcomes and the costs/returns for various outcomes and alternative actions.

4. Remove all dominated branches.

5. Analyze the alternatives, starting with the most distant decision point(s) and working back, to choose the best initial decision.

In Example 5-10, we used the expected NPV as the decision criterion. However, if outcomes can be expressed in terms of utility units, then it may be appropriate to use the expected utility as the criterion. Alternatively, the decision maker may be willing to express his or her certain monetary equivalent for each chance outcome node and use that as the decision criterion.

Because a decision tree can quickly become unmanageably large, it is often best to start out by considering only major alternatives and outcomes in the structure to get an initial understanding or feeling for the issues. Secondary alternatives and outcomes can then be added if they are significant enough to affect the final decision. Incremental embellishments can also be added if time and resources are available.

5.7.2 Basic Principles of Diagramming

The proper diagramming of a decision problem is, in itself, very useful to the understanding of the problem, as well as being essential to performing the analysis correctly. The placement of decision points and chance nodes from the initial decision point to any subsequent decision point should give an accurate representation of the information that will and will not be available when the decision maker actually has to make the choice associated with the decision point in question. The tree should show the following:

1. All initial or immediate alternatives among which the decision maker wishes to choose.

2. All uncertain outcomes and future alternatives that the decision maker wishes to consider because they may directly affect the consequences of initial alternatives.

3. All uncertain outcomes that the decision maker wishes to consider because they may provide information that can affect his or her future choices among alternatives and hence, indirectly affect the consequences of initial alternatives.

It should also be noted that the alternatives at any decision point and the outcomes at any chance node must be:

1. Mutually exclusive; that is, no more than one can possibly be chosen.

2. Collectively exhaustive; that is, when a decision point or chance node is reached, some course of action must be taken or some outcome must occur.

In Figure 5.14, decision nodes 2A, 2B, and 2C are each reached only after one of the mutually exclusive results of the technology study is known; and each decision node reflects all alternatives to be considered at that point. Furthermore, all possible outcomes to be considered are shown, as the probabilities sum to 1.0 for each chance node.

5.7.3 Use of Statistics to Determine the Value of More Information

An alternative that frequently exists in an investment decision is to conduct further research before making a commitment. This may involve such action as gathering more information about the underlying technology, updating an existing analysis of market demand, or investigating anew the future operating costs of particular alternatives.

Once this additional information is collected, the concepts of Bayesian statistics provide a means of modifying estimates of probabilities of future outcomes, as well as a means of estimating the economic value of the further investigation itself. To illustrate, consider the one-stage decision situation depicted in Figure 5.15, in which each alternative has two possible chance outcomes: “high” or “low” demand. It is estimated that each outcome is equally likely to occur, and that the monetary result expressed as PW is shown above the arrow for each outcome. Again, the amount of investment for each alternative is written below the respective lines. On the basis of these amounts, the calculation of the expected monetary outcomes minus the investment costs (giving expected NPV) is as follows:

Old system: E[NPV] = $45M(0.5) + $27.5M(0.5) − $10M = $26.25M

New FMS: E[NPV] = $80M(0.5) + $48M(0.5) − $35M = $29.00M

which indicates that the new flexible manufacturing system (FMS) should be selected.

Figure 5.15 One-stage FMS replacement problem.

To demonstrate the use of Bayesian statistics, suppose that one is considering the advisability of undertaking a fresh intensive investigation before deciding on the “old system” versus the “new FMS.” Suppose also that this new study would cost $2.0M and will predict whether the demand will be high (h) or low (ℓ). To use the Bayesian approach, it is necessary to assess the conditional probabilities that the investigation (technology study) will yield certain results. These probabilities reflect explicit measures of management’s confidence in the ability of the investigation to predict the outcome. Sample assessments are

P(h|H) = 0.70, P(ℓ|H) = 0.30, P(h|L) = 0.20, and P(ℓ|L) = 0.80

where H and L denote high and low actual demand as opposed to predicted demand. As an explanation, P(h|H) means the probability that the predicted demand is high (h), given that the actual demand will turn out to be high (H).

A formal statement of Bayes’ theorem is given in Appendix 5A along with a tabular format for ease of computation. Tables 5.11 and 5.12 use this format for revision of probabilities based on the assessment data above, and the prior probabilities of 0.5 that the demand will be high and 0.5 that the demand will be low [i.e., P( H )=P( L )=0.5 ]. These probabilities are now used to assess the technology study alternative. Figure 5.16 depicts the complete decision tree. Note that demand probabilities are entered on the branches according to whether the investigation indicates high or low demand.

TABLE 5.11  Computation of Posterior Probabilities Given That Investigation-Predicted Demand Is High (h)

(1)              (2)           (3)             (4) = (2)×(3)  (5) = (4)/Σ(4)
State (actual    Prior         Confidence      Joint          Posterior
demand)          probability,  assessment,     probability    probability,
                 P(state)      P(h|state)                     P(state|h)
H                0.5           0.70            0.35           0.78
L                0.5           0.20            0.10           0.22
                                               Σ(4) = 0.45

TABLE 5.12  Computation of Posterior Probabilities Given That Investigation-Predicted Demand Is Low (ℓ)

(1)              (2)           (3)             (4) = (2)×(3)  (5) = (4)/Σ(4)
State (actual    Prior         Confidence      Joint          Posterior
demand)          probability,  assessment,     probability    probability,
                 P(state)      P(ℓ|state)                     P(state|ℓ)
H                0.5           0.30            0.15           0.27
L                0.5           0.80            0.40           0.73
                                               Σ(4) = 0.55
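The Bayesian revision in Tables 5.11 and 5.12 can be reproduced directly from Bayes' theorem. In the sketch below, the priors and confidence assessments are those given in the text, and the "low" prediction is written as the letter l in code.

```python
# Bayesian revision for the FMS technology study.
# Priors P(H) = P(L) = 0.5; confidence assessments P(prediction | state).

priors = {"H": 0.5, "L": 0.5}
confidence = {("h", "H"): 0.70, ("l", "H"): 0.30,
              ("h", "L"): 0.20, ("l", "L"): 0.80}

def revise(prediction):
    """Return (P(prediction), {state: P(state | prediction)})."""
    joint = {s: priors[s] * confidence[(prediction, s)] for s in priors}
    marginal = sum(joint.values())                         # column (4) total
    return marginal, {s: joint[s] / marginal for s in joint}  # column (5)

p_h, post_h = revise("h")   # Table 5.11: study predicts high demand
p_l, post_l = revise("l")   # Table 5.12: study predicts low demand
print(p_h, post_h)
print(p_l, post_l)
```

The marginals 0.45 and 0.55 are the probabilities that the study predicts high or low demand, and the posteriors match the rounded values 0.78/0.22 and 0.27/0.73 used on the tree branches.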

Figure 5.16 Replacement problem with alternative of technology study.

The next step is to calculate the expected outcome for the technology study alternative. This is done by the standard decision tree rollback principle, as shown in Table 5.13. Note that the 0.45 and 0.55 probabilities that the investigation-predicted demand will be high and low, respectively, are obtained from the totals in column (4) of the Bayesian revision calculations depicted in Tables 5.11 and 5.12.

Thus, from Table 5.13, it can be seen that the “new FMS” alternative with an expected NPV of $29.0M is the best course of action by a slight margin. (As an exercise, perform the calculations on Figure 5.16 and indicate the optimal path.) Although the figures used here do not reflect any advantages to this technology study, the benefit of gathering additional information can potentially be great.

TABLE 5.13  Expected NPV Calculations for Replacement Problem in Figure 5.16

Decision point  Alternative       Expected monetary outcome                         Choice
2A              Old system        $45M(0.78) + $27.5M(0.22) − $10M = $31.15M
                New FMS           $80M(0.78) + $48M(0.22) − $35M = $37.96M          New FMS
2B              Old system        $45M(0.27) + $27.5M(0.73) − $10M = $22.23M        Old system
                New FMS           $80M(0.27) + $48M(0.73) − $35M = $21.64M
1               Old system        (see calculations above) = $26.25M
                New FMS           (see calculations above) = $29.00M                New FMS
                Technology study  $37.96M(0.45) + $22.23M(0.55) − $2M = $27.31M
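Putting the pieces together, the rollback for this tree can be verified in code (a sketch; the posterior probabilities 0.78/0.22 and 0.27/0.73 and the study marginals 0.45/0.55 come from the Bayesian revision above, all amounts in $M).

```python
# Rollback for the FMS replacement tree. PW payoffs: the old system earns
# 45 (high demand) or 27.5 (low) for a 10 investment; the new FMS earns
# 80 or 48 for a 35 investment. The technology study costs 2.

def best_choice(p_high):
    """Value of the best second-stage alternative given P(high demand)."""
    old = 45 * p_high + 27.5 * (1 - p_high) - 10
    new = 80 * p_high + 48 * (1 - p_high) - 35
    return max(old, new)

node_2a = best_choice(0.78)        # study predicts high demand -> new FMS
node_2b = best_choice(0.27)        # study predicts low demand  -> old system
no_info = best_choice(0.5)         # skip the study; priors are 0.5/0.5
study = 0.45 * node_2a + 0.55 * node_2b - 2
print(round(study, 2), round(no_info, 2))
```

The study path is worth about $27.31M against $29.00M for proceeding directly with the new FMS, so with these particular assessments the additional information is not worth its cost.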

In practice, firms will conduct market research or spot-market tests before launching a new product to a larger market audience. The research—with a representative sample of customers—will enable the firm to refine its probabilities of successfully launching a new product. The firm may learn, for instance, that a proposed, new product is not well-received by the research panel. In this case, the firm may abandon its broader “go to market” strategy for the new product and save itself from a more catastrophic financial loss. The decisions of (1) whether to conduct a spot-market test and (2) whether to go to market using a broad, national campaign can be modeled with decision trees, assuming that a finite set of possible outcomes, associated with each decision, can be stated and probabilities associated with each of the possible outcomes can be estimated.

5.7.4 Discussion and Assessment

One unique feature of decision trees is that they allow management to view the logical order of a sequence of decisions. They afford a clear graphical representation of the various courses of action and their possible consequences. By using decision trees, management can also examine the impact of a series of decisions (over many periods) on the goals of the organization. Such models reduce abstract thinking to a rational, visual pattern of cause and effect. When costs and benefits are associated with each branch and probabilities are estimated for each possible outcome, analysis of the tree can clarify choices and risks.

On the downside, the methodology has several weaknesses that should not be overlooked. A basic limitation of its representational properties is that only small and relatively simple decision models can be shown at the level of detail that makes trees so descriptive. Every variable added expands the tree’s size multiplicatively. Although this problem can be overcome to some extent by generalizing the diagram, significant information may be lost in doing so. This loss is particularly acute if the problem structure is highly dependent or asymmetric.

Regarding the computational properties of trees, for simple problems in which the endpoints are pre-calculated or assessed directly, the rollback procedure is very efficient. However, for problems that require a roll-forward procedure, the classic tree-based algorithm has a fundamental drawback: it is essentially an enumeration technique. That is, every path through the tree is traversed to solve the problem and generate the full range of outputs. This feature raises the “curse of dimensionality” common to many stochastic models: for every variable added, the computational requirements increase multiplicatively. This implies that the number of chance variables that can be included in the model tends to be small. There is also a strong incentive to simplify the value model, because it is recalculated at the end of each path through the tree.

Nevertheless, the enumeration burden of tree-based algorithms can, in theory, be reduced dramatically by taking advantage of certain structural properties of a problem. Two such properties are referred to as “asymmetry” and “coalescence.” For further discussion and some practical aspects of implementation, consult Call and Miller (1990).

5.8 Real Options

NPV has been criticized for not properly accounting for uncertainty and flexibility—that is, multistage development funding and abandonment options. Decision trees more accurately capture the multistage nature of development by using probability-based EMVs, but can be time consuming and overly complex when all potential courses of action are included. An alternative to decision trees is real options, a technique that applies financial options theory to nonfinancial assets and encourages managers to consider the value of strategic investments in terms of risks that can be held, hedged, or transferred.

Seen through a real options lens, NPV systematically undervalues potential projects, often by several hundred percent, because the value of embedded flexibility is never negative. Real-options analysis offers the flexibility to expand, extend, contract, abandon, or defer a project in response to unforeseen events that drive the value of a project up or down through time. It is good practice to consider these options at the outset of an investment analysis rather than only when trouble arises.

Recall that the NPV of a project is estimated by forecasting its annual cash flows during its expected life, discounting them back to the present at a risk-adjusted weighted average cost of capital, then subtracting the initial start-up capital expenditure. There’s nothing in this calculation that captures the value of flexibility to make future decisions that resolve uncertainty.
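That calculation can be written out in a few lines. The cash flows, horizon, and 12% rate in the sketch below are illustrative assumptions, not figures from the text:

```python
# Minimal NPV sketch matching the calculation described above.
# Cash flows, rate, and outlay are hypothetical.

def npv(rate, initial_outlay, cash_flows):
    """Discount each year's cash flow to the present and subtract the outlay."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return pv - initial_outlay

# Example: $1M start-up cost, five years of $300K, 12% risk-adjusted WACC.
project_npv = npv(0.12, 1_000_000, [300_000] * 5)
print(f"NPV = ${project_npv:,.2f}")
```

Note that nothing in this function rewards the option to expand or abandon midway; that gap is exactly what the real-options view addresses.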

Financial managers often overrule NPV by accepting projects with negative NPVs for “strategic reasons.” Their intuition tells them that they cannot afford to miss the opportunity. In essence, they’re intuiting value that the NPV calculation has not captured.

5.8.1 Drivers of Value

Like options on securities, real options are the right but not the obligation to take an action in the future at a predetermined price (the exercise or striking price) for a predetermined time (the life of the option). When you exercise a real option, you capture the difference between the value of the asset and the exercise price of the option. If a project is more successful than expected, then management can pay an “exercise price” to expand the project by making an additional capital expenditure. Management can also extend the life of a project by paying an exercise price. If the project does worse than expected, then it can be scaled back or abandoned. In addition, the initial investment does not have to be made today—it can be deferred.

The value of a real option is influenced by the following six variables:

1. Value of the underlying project. The option to expand a project (a call), for example, increases the scale of operations and therefore the value of the project at the cost of additional investment (the exercise price). Thus, the value of the project (without flexibility) is the value of what, in real-options language, is called the underlying risky asset. If we have flexibility to expand the project—in other words, an option to buy more of the project at a fixed price—then the value of the option to expand goes up when the value of the underlying project goes up.

2. Exercise price/investment cost. The exercise price is the amount of investment required to expand. The value of the option to expand goes up as the cost of expansion is reduced.

3. Volatility of the underlying project’s value. Because the decision to expand is voluntary, you will expand only when the value of expansion exceeds the cost. When the value is less than the cost and there is no variability in the value, the option is worthless, but if the value is volatile, then there’s a chance that the value can rise and exceed the cost, making the option valuable. Therefore, the value of flexibility goes up when uncertainty of future outcomes increases.

4. Time to maturity. The value of flexibility increases as the time to maturity lengthens because there’s a greater chance that the value of expansion will rise the longer you wait.

5. Risk-free interest rate. As the risk-free rate of interest goes up, the present value of the option also goes up because the exercise price is paid in the future, and therefore, as the discount rate increases, the present value of the exercise price decreases.

6. Dividends. The sixth variable is the dividends, or the cash flows, paid out by the project. When dividends are paid, they decrease the value of the project and therefore the value of the option on the project.
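One way to see all six drivers at work is the Black-Scholes formula for a European call, which the real-options literature often borrows as a pricing sketch. Treating a project as such an option, and every numeric input below, is an illustrative assumption rather than a method prescribed by the text:

```python
# Black-Scholes value of a call option, used here only to show how the six
# drivers listed above move the value of an option to expand.
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_value(S, K, r, sigma, T, payout_yield=0.0):
    """S: value of the underlying project, K: exercise price (investment cost),
    r: risk-free rate, sigma: volatility of project value, T: time to maturity."""
    S_adj = S * math.exp(-payout_yield * T)  # driver 6: cash payouts reduce S
    d1 = (math.log(S_adj / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S_adj * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

base = call_value(S=100, K=110, r=0.05, sigma=0.30, T=2.0)
# Each driver moves the option value in the direction the text describes:
assert call_value(120, 110, 0.05, 0.30, 2.0) > base        # 1: higher project value
assert call_value(100, 100, 0.05, 0.30, 2.0) > base        # 2: lower exercise price
assert call_value(100, 110, 0.05, 0.40, 2.0) > base        # 3: higher volatility
assert call_value(100, 110, 0.05, 0.30, 3.0) > base        # 4: longer maturity
assert call_value(100, 110, 0.08, 0.30, 2.0) > base        # 5: higher risk-free rate
assert call_value(100, 110, 0.05, 0.30, 2.0, 0.04) < base  # 6: dividends/payouts
```

The assertions are simply the comparative statics of a call option restated in code; they hold for any reasonable parameter values, not just these.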

5.8.2 Relationship to Portfolio Management

The flexible decision structure of options is valid in an R&D context. After an initial investment, management can gather more information about the status of a project and market characteristics and, on the basis of this information, change its course of action. The real option value of this managerial flexibility enhances the R&D project value, whereas a pure NPV analysis understates it. Five basic sources of flexibility have been identified (e.g., Trigeorgis 1997). A defer option refers to the possibility of waiting until more information has become available. An abandonment option offers the possibility to make the investment in stages, deciding at each stage, on the basis of the newest information, whether to proceed further or to stop (this is applied by venture capitalists). An expansion or contraction option represents the possibility to adjust the scale of the investment (e.g., a production facility) depending on whether market conditions turn out favorably or not. Finally, a switching option allows changing the mode of operation of an asset, depending on factor prices (e.g., switching the energy source of a power plant, switching raw material suppliers).

One key insight generated by the real options approach to R&D investment is that higher uncertainty in the payoffs of the investment increases the value of managerial flexibility, or the value of the real option. The intuition is clear—with higher payoff uncertainty, flexibility has a higher potential of enhancing the upside while limiting the downside. An important managerial implication of this insight is that the more uncertain the project payoff is, the more effort should be made to delay commitments and maintain the flexibility to change the course of action. This intuition is appealing. Nevertheless, there is hardly any evidence of real options pricing of R&D projects in practice despite reports that Merck uses the method. Moreover, there is recent evidence that more uncertainty may reduce the option value if an alternative “safe” project is available.

This evidence represents a gap between the financial payoff variability, as addressed by the real options pricing literature, and the operational uncertainty that pervades R&D. For example, R&D project managers encounter uncertainty about budgets, schedules, product performance, or market requirements, in addition to financial payoffs. The relationship between such operational uncertainty and the value of managerial flexibility (the option value of the project) is not clear. For example, should the manager respond to increased uncertainty about product performance in the same way as to uncertainty about project payoffs, by delaying commitments? Questions such as this must be addressed on a case-by-case basis in full view of the scope and consequences of the attendant risks.

TEAM PROJECT

Thermal Transfer Plant

On the basis of the evaluation of alternatives, Total Manufacturing Solutions, Inc. (TMS) management has adopted a plan by which the design and assembly of the rotary combustor will be done at TMS. Most of the manufacturing activity will be subcontracted except for the hydraulic power unit, which TMS decided to build “in-house.”

There are three functions involved in charging and rotating the combustor. Two of them, the charging rams and the resistance door, naturally lend themselves to hydraulics. The third, turning the combustor, can be done either electromechanically (by an electric motor and a gearbox) or hydraulically. If the hydraulic method is chosen, then there are two alternatives: (1) use a large hydraulic motor as a direct drive or (2) use a small hydraulic motor with a gearbox. Figure 5.17 contains a schematic.

Figure 5.17 Hydraulic power unit.

TMS engineering has produced the following specifications for the hydraulic power unit:

Applicable documents, codes, standards, and requirements

National Electric Manufacturers Association (NEMA)

American National Standards Institute (ANSI)

Pressure Vessels Code, American Society of Mechanical Engineers (ASME) Section VIII

Hydraulic rams

Two hydraulic cylinders will be provided for the rams. The cylinders will be 8 in. bore × 96 in. stroke. They will operate at 1,500 psi, and will have an adjustable extension rate of 2 to 6 ft/min. They will retract in 15 seconds, will operate 180° out of phase, and will retract in the event of a power failure.

Combustor barrel drive

A single-direction, variable-speed drive will be provided for the combustor. The output of this drive will deliver up to 1.6 rpm and 7,500 ft-lb of torque.

Resistance door cylinder

This cylinder will be 6 in. bore × 48 in. stroke and will operate with a constant pressure of 200 psi.

Hydraulic power unit

The hydraulic power unit will be skid mounted and ready for hookup to interfacing equipment. Mounting and lifting brackets will be manufactured as well.

Hydraulic pumps will be redundant so that in the event of the failure of one, another can be started to take over its function. Accumulators will be added to retract the rams and close the resistance door in the event of a power failure.

The hydraulic fluid is to be E. F. Houghton’s Cosmolubric or equivalent. Although system operating pressure is to be 1,500 psi, the plumbing will be designed to withstand 3,000 psi. Water-to-oil heat exchangers shall be provided to limit reservoir temperature to 130°F.

A method of controlling ram extension speed and combustor rpm within the specifications stated above will be provided. Control concepts may be analog (5 to 20 milliamperes) or digital.

Electrical

Electric motors will be of sufficient horsepower to drive the hydraulic pumps. Motors shall operate at 1,200 rpm, 220/440 volts, 3 phase, 60 hertz.

Solenoids and controls

Solenoids are to be 120 volt, 60 hertz and will have manual overrides. Any analog control function is to respond to a 5- to 20-milliampere signal.

Combustor drive

A single-direction, variable-speed drive will be provided for the combustor. The output of this drive will deliver up to 1.6 rpm and 7,500 ft-lb of torque. Three potential alternatives for the combustor drive are

Electric motor and gearbox

Hydraulic motor with gearbox (hydraulic power supplied by hydraulic power unit)

Hydraulic motor with direct drive (hydraulic power supplied by hydraulic power unit)

Your team assignment is to select the most appropriate drive from these candidates. To do so, develop a scoring model or a decision tree and evaluate each alternative accordingly. State your assumptions clearly regarding technological, economic, and other aspects, and explain the methodology used to support your analysis.

Initial cost estimates available to your team are:

Ram cylinders (two required)     $ 5,948 each
Resistance door cylinder         $ 1,505
Hydraulic power unit             $50,000
Low-speed, high-torque motor     $22,780
High-speed motor with gearbox    $ 7,000

Discussion Questions

1. Where would ideas for new projects and products probably originate in a manufacturing company? What would be the most likely source in an R&D organization such as AT&T Laboratories or IBM’s Watson Center?

2. Assume that you work in the design department of an aerospace firm and you are given the responsibility of selecting a workstation that will be used by each group in the department. How would you find out which systems are available? What basic information would you try to collect on these systems?

3. How can you extend a polar graph, similar to the one shown in Figure 5.2, to the case in which the criteria are individually weighted?

4. Identify a project that you are planning to pursue either at home or at work. List all of the components, decision points, and chance events. What is the measure of success for the project? Assuming that there is more than one measure, how can you reconcile them?

5. If you were evaluating a proposal to upgrade the computer-aided design system used by your organization, what type of information would you be looking for in detail? How would your answer change if you were buying only one or two systems as opposed to a few dozen?

6. Which factors in an organization do you think would affect the decision to go ahead with a project, such as automating a production line, other than the B/C ratio?

7. For years before beginning the project to build a tunnel under the English Channel, Great Britain and France debated the pros and cons. Speculate on the critical issues that were raised.

8. The project to construct a subway in Washington, D.C. began in the early 1970s with the expectation that it would be fully operational by 1980. A portion of the system opened in 1977, but as of 2004, approximately 5% remained unfinished. What do you think were the costs, benefits, and risks involved in the original planning? How important was the interest rate used in those calculations? Speculate on who or what was to blame for the lengthy delay in completion.

9. Where does quality fit into the B/C equation? Identify some companies or products that compete primarily on the basis of quality rather than price.

10. A software company is undecided on whether it should expand its capacity by using part-time programmers or by hiring more full-time employees. Future demand is the critical factor, which is not known with certainty but can be estimated only as low, medium, or high. Draw a decision tree for the company’s problem. What data are needed?

11. How could B/C analysis be used to help determine the level of subsidy to be paid to the operator of public transportation services in a congested urban area?

12. Why has the U.S. Department of Defense been the major exponent of C-E analysis? Give your interpretation of what is meant by “diminishing returns,” and indicate how it might affect a decision on procuring a military system versus an office automation system.

13. In which type of projects does risk play a predominant role? What can be done to mitigate the attendant risks? Pick a specific project and discuss.

Exercises

1. 5.1 Consider an important decision with which you will be faced in the near future. Construct a scoring model detailing your major criteria and assign weights to each. Indicate which data are known for sure and which are uncertain. What can be done to reduce the uncertainty?

2. 5.2 Use a checklist and a scoring model to select the best car for a married graduate student with one child. State your assumptions clearly.

3. 5.3 Assume that you have just entered the university and wish to select an area of study.

1. Using B/C analysis only, what would your decision be?

2. How would your decision change if you used C-E analysis? Provide the details of your analysis.

4. 5.4 You have just received a job offer in a city 1,000 miles away and must relocate. List all possible ways of moving your household. Use two different analytic techniques for selecting the best approach, and compare the results.

5. 5.5 Three new-product ideas have been suggested. These ideas have been rated as shown in Table 5.14 .

TABLE 5.14

Criteria                      A    B    C    Weight (%)
Development cost              P    F    VG    10
Sales prospects               VG   E    G     15
Producibility                 P    F    G     10
Competitive advantage         E    VG   F     15
Technical risk                P    F    VG    20
Patent protection             F    F    VG    10
Compatibility with strategy   VG   F    F     20
                                             100

P = poor, F = fair, G = good, VG = very good, E = excellent

1. Using an equal point spread for all five ratings (i.e., P=1, F=2, G=3, VG=4, E=5), determine a weighted score for each product idea. What is the ranking of the three products?

2. Rank the criteria, compute the rank-sum weights, and determine the score for each alternative. Do the same using the rank reciprocal weights.

3. What are some of the advantages and disadvantages of this method of product selection?
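For part 1, a short script (or a spreadsheet laid out the same way) can organize the weighted-score computation from Table 5.14; the only assumption beyond the table itself is the equal point spread P=1 through E=5 given in the exercise:

```python
# Weighted scoring model for Exercise 5.5, part 1 (equal point spread).
points = {"P": 1, "F": 2, "G": 3, "VG": 4, "E": 5}
weights = {  # criterion: (weight, rating for A, rating for B, rating for C)
    "Development cost":            (0.10, "P",  "F",  "VG"),
    "Sales prospects":             (0.15, "VG", "E",  "G"),
    "Producibility":               (0.10, "P",  "F",  "G"),
    "Competitive advantage":       (0.15, "E",  "VG", "F"),
    "Technical risk":              (0.20, "P",  "F",  "VG"),
    "Patent protection":           (0.10, "F",  "F",  "VG"),
    "Compatibility with strategy": (0.20, "VG", "F",  "F"),
}

# Weighted score = sum over criteria of (weight x rating points).
scores = {prod: sum(w * points[ratings[i]] for w, *ratings in weights.values())
          for i, prod in enumerate("ABC")}
print(scores)
```

Running this shows product C leading, with A and B tied, which is a useful reminder that scoring models can produce ties that the rank-based weights of part 2 may or may not break.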

6. 5.6 Suppose that the products from Exercise 5.5 have been rated further as shown in Table 5.15 .

TABLE 5.15  Product

                                    A        B        C
Probability of technical success    0.9      0.8      0.7
Probability of commercial success   0.6      0.8      0.9
Annual volume (units)               10,000   8,000    6,000
Profit contribution per unit        $2.64    $3.91    $5.96
Lifetime of product (years)         10       6        12
Total development cost              $50,000  $70,000  $100,000

1. Compute the expected return on investment over the lifetime of each product.

2. Does this computation change your ranking of the products over that obtained in Exercise 5.5 ?

7. 5.7 The federal government proposes to construct a multipurpose water project. This project will provide water for irrigation and for municipal uses. In addition, there will be flood control benefits and recreation benefits. The estimated project benefits computed for 10-year periods for the next 50 years are given in Table 5.16 .

TABLE 5.16

Purpose        First decade  Second decade  Third decade  Fourth decade  Fifth decade
Municipal      $ 40,000      $ 50,000       $ 60,000      $ 70,000       $110,000
Irrigation     $350,000      $370,000       $370,000      $360,000       $350,000
Flood Control  $150,000      $150,000       $150,000      $150,000       $150,000
Recreation     $ 60,000      $ 70,000       $ 80,000      $ 80,000       $ 90,000
Totals         $600,000      $640,000       $660,000      $660,000       $700,000

The annual benefits may be assumed to be one tenth of the decade benefits. The O&M cost of the project is estimated to be $15,000 per year. Assume a 50-year analysis period with no net project salvage value.

1. If an interest rate of 5% is used and a B/C ratio of unity is required, then what capital expenditure can be justified to build the water project now?

2. If the interest rate is changed to 8%, then how does this change the justified capital expenditure?
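The present-worth arithmetic behind both parts can be sketched as follows, taking the annual benefits as one tenth of each decade's total, O&M at $15,000 per year, and a 50-year horizon, all as stated in the exercise:

```python
# Present-worth sketch for Exercise 5.7: justified capital expenditure at B/C = 1
# equals PW(benefits) minus PW(O&M costs).

def pa_factor(i, n):
    """Uniform-series present-worth factor (P/A, i, n)."""
    return (1 - (1 + i) ** -n) / i

def justified_expenditure(i):
    # Annual benefits per decade = one tenth of the decade totals in Table 5.16.
    decade_annual_benefits = [60_000, 64_000, 66_000, 66_000, 70_000]
    # Each decade is a 10-year annuity, discounted back to year 0.
    pw_benefits = sum(b * pa_factor(i, 10) * (1 + i) ** (-10 * d)
                      for d, b in enumerate(decade_annual_benefits))
    pw_om = 15_000 * pa_factor(i, 50)
    return pw_benefits - pw_om

print(f"At 5%: ${justified_expenditure(0.05):,.0f}")
print(f"At 8%: ${justified_expenditure(0.08):,.0f}")
```

As expected, the higher interest rate shrinks the justified expenditure, since the benefits arrive over fifty years while the expenditure is immediate.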

8. 5.8 The state is considering the elimination of a railroad grade crossing by building an overpass. The new structure, together with the needed land, would cost $1,800,000. The analysis period is assumed to be 30 years on the theory that either the railroad or the highway above it will be relocated by then. Salvage value of the bridge (actually, the net value of the land on either side of the railroad tracks) 30 years hence is estimated to be $100,000. A 6% interest rate is to be used.

At present, approximately 1,000 vehicles per day are delayed as a result of trains at the grade crossing. Trucks represent 40%, and 60% are other vehicles. Time for truck drivers is valued at $18 per hour and for other drivers at $5 per hour. Average time saving per vehicle will be 2 minutes if the overpass is built. No time saving occurs for the railroad.

The installation will save the railroad an annual expense of $48,000 now spent for crossing guards. During the preceding 10-year period, the railroad has paid out $600,000 in settling lawsuits and accident cases related to the grade crossing. The proposed project will entirely eliminate both of these expenses. The state estimates that the new overpass will save it approximately $6,000 per year in expenses attributed directly to the accidents. The overpass, if built, will belong to the state.

Perform a benefit-cost analysis to answer the question of whether the overpass should be built. If the overpass is built, how much should the railroad be asked to contribute to the state as its share of the $1,800,000 construction cost?

9. 5.9 An existing 2-lane highway between two cities is to be converted to a 4-lane divided freeway. The distance between them is 10 miles. The average daily traffic on the new freeway is forecast to average 20,000 vehicles per day over the next 20 years. Trucks represent 5% of the total traffic. Annual maintenance on the existing highway is $1,500 per lane-mile. The existing accident rate is 4.58 per million vehicle miles (MVM). Three alternative plans of improvement are now under consideration.

Plan A: Add 2 lanes adjacent to the existing lanes at a cost of $450,000 per mile. It is estimated that this plan would reduce auto travel time by 2 minutes and truck travel time by 1 minute when compared with the existing highway. The estimated accident rate is 2.50 per MVM, and the annual maintenance is expected to be $1,250 per lane-mile for all 4 lanes.

Plan B: Improve along the existing alignment with grade improvements at a cost of $650,000 per mile, and add 2 lanes. It is estimated that this would reduce auto and truck travel time by 3 minutes each compared with current travel times. The accident rate on the improved road is estimated to be 2.40 per MVM, and annual maintenance is expected to be $1,000 per lane-mile for all 4 lanes.

Plan C: Construct a new 4-lane freeway on new alignment at a cost of $800,000 per mile. It is estimated that this plan would reduce auto travel time by 5 minutes and truck travel time by 4 minutes compared with current conditions. The new freeway would be 0.3 miles longer than the improved counterparts discussed in plans A and B. The estimated accident rate for plan C is 2.30 per MVM, and annual maintenance is expected to be $1,030 per lane-mile for all 4 lanes. If plan C is adopted, then the existing highway will be abandoned with no salvage value.

Useful data:

Incremental operating cost: autos, 6 cents/mile; trucks, 18 cents/mile
Time saving: autos, 3 cents/minute; trucks, 15 cents/minute
Average accident cost: $1,200

If a 5% interest rate is used, then which of the three proposed plans should be adopted? Base your answer on the individual B/C ratios of each alternative. When calculating these values, treat any annual incremental operating costs due to extra distance as a user disbenefit rather than a cost.

10. 5.10 A 50-meter tunnel must be constructed as part of a new aqueduct system for a city. Two alternatives are being considered. One is to build a full-capacity tunnel now for $500,000. The other alternative is to build a half-capacity tunnel now for $300,000 and then to build a second parallel half-capacity tunnel 20 years hence for $400,000. The cost of repair of the tunnel lining at the end of every 10 years is estimated to be $20,000 for the full-capacity tunnel and $16,000 for each half-capacity tunnel.

Determine whether the full-capacity tunnel or the half-capacity tunnel should be constructed now. Solve the problem by B/C ratio analysis using a 5% interest rate and a 50-year analysis period. There will be no tunnel lining repair at the end of the 50 years.

11. 5.11 Consider the following typical noise levels in decibels (dBA):


1. Assume that you are responsible for designing a machine shop. How would you determine an acceptable level of noise? What costs and risks should you weigh?

2. What would your answer be for the design of a commercial aircraft?

12. 5.12 Epidemiological data indicate that only a handful of patients have contracted the AIDS (acquired immune deficiency syndrome) virus from health care workers. Many, though, have called for the periodic testing of all health care workers in an effort to protect or at least reduce the risks to the public. Identify the costs and benefits associated with such a program. Develop an implementation plan for nationwide testing. How would you go about measuring the costs of the plan? What are the costs and risks of not testing?

13. 5.13 As chief industrial engineer in a manufacturing facility, you are contemplating the replacement of the spreadsheet procedures that you are now using for production scheduling and inventory control with a material requirements planning system. A number of options are available. You can do it all at once and throw out the old system; you can phase in the new system over time; you can run both systems simultaneously; and so on. Identify the costs, benefits, and risks of each approach. Construct a decision tree for the problem. Assume that the benefits of any option depend on the future state of the economy, which may be “good” or “bad” with probabilities 0.7 and 0.3, respectively.

14. 5.14 The daily demand for a particular type of printed circuit board in an assembly shop can assume one of the following values: 100, 120, or 130 with probabilities 0.2, 0.3, and 0.5, respectively. The manager of the shop thus is limiting her alternatives to stocking one of the three levels indicated. If she prepares more boards than are needed in the same day, then she must reprocess those remaining at a cost of 55 cents/board. Assuming that it costs 60 cents to prepare a board for assembly and that each board produces $1.05 in revenue, find the optimal stocking level by using a decision tree model.
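A minimal sketch of the expected-value rollback for this exercise, with profit at each stocking level computed as revenue on boards sold, minus preparation cost, minus reprocessing of leftovers:

```python
# Decision-tree (expected monetary value) sketch for Exercise 5.14.
demand_probs = {100: 0.2, 120: 0.3, 130: 0.5}
PREP_COST, REVENUE, REWORK = 0.60, 1.05, 0.55   # dollars per board

def expected_profit(stock):
    """EMV of stocking 'stock' boards, averaged over the demand outcomes."""
    ev = 0.0
    for demand, p in demand_probs.items():
        sold = min(stock, demand)
        leftover = stock - sold
        ev += p * (REVENUE * sold - PREP_COST * stock - REWORK * leftover)
    return ev

best = max(demand_probs, key=expected_profit)
print(best, round(expected_profit(best), 2))   # prints: 120 47.6
```

Here stocking 120 boards maximizes EMV at $47.60 per day; stocking 100 guarantees $45.00 but forgoes upside, while stocking 130 is penalized by the reprocessing cost on low-demand days.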

15. 5.15 In Exercise 5.14, suppose that the manager wishes to consider her decision problem over a 2-day period. Her alternatives for the second day are determined as follows. If the demand in day 1 is equal to the amount stocked, then she will continue to order the same quantity on the second day. Otherwise, if the demand exceeds the amount stocked, then she will have the option to order any of the higher levels of stock on the second day. Finally, if day 1’s demand is less than the amount stocked, then she will have the option to order any of the lower levels of stock for the second day. Express the problem as a decision tree, and find the optimal solution using the cost data given in Exercise 5.14.

16. 5.16 Zingtronics Corp. has completed the design of a new graphic-display unit for computer systems and is about to decide on whether it should produce one of the major components internally or subcontract it to another local firm. The advisability of which action to take depends on how the market will respond to the new product. If demand is high, then it is worthwhile to make the extra investment for special facilities and equipment needed to produce the component internally. For low demand it is preferable to subcontract. The analyst assigned to study the problem has produced the following information on costs (in thousands of dollars) and probability estimates of future demand for the next 5-year period:

             Future demand
Action       Low    Average  High
Produce      $140   $120     $ 90
Subcontract  $100   $110     $160
Probability  0.10   0.60     0.30

1. Prepare a decision tree that describes the structure of this problem.

2. Select the best action on the basis of the initial probability estimates for future demand.

3. Determine the expected cost with perfect information (i.e., knowing future demand exactly).
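The prior analysis in parts 2 and 3 can be checked with a few lines (costs in thousands of dollars, straight from the table):

```python
# Expected-cost and perfect-information sketch for Exercise 5.16.
probs = {"Low": 0.10, "Average": 0.60, "High": 0.30}
cost = {
    "Produce":     {"Low": 140, "Average": 120, "High": 90},
    "Subcontract": {"Low": 100, "Average": 110, "High": 160},
}

# Part 2: expected cost of each action under the prior probabilities.
expected = {a: sum(probs[s] * c[s] for s in probs) for a, c in cost.items()}
best_action = min(expected, key=expected.get)

# Part 3: with perfect information, pick the cheaper action in each state first.
cost_perfect = sum(probs[s] * min(c[s] for c in cost.values()) for s in probs)
evpi = expected[best_action] - cost_perfect
print(best_action, expected[best_action], cost_perfect, evpi)
```

The gap between the best prior expected cost and the perfect-information expected cost is the expected value of perfect information, the most one should ever pay for a flawless forecast.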

17. 5.17 Refer to Exercise 5.16 . The management of Zingtronics is planning to hire Dr. Lalith deSilva, an economist and head of a local consulting firm, to prepare an economic forecast for the computer industry. The reliability of her forecasts based on previous assignments is provided by the following table of conditional probabilities.

                    Future demand
Economic forecast   Low   Average   High
Optimistic          0.1   0.1       0.5
Normal              0.3   0.7       0.4
Pessimistic         0.6   0.2       0.1
                    1.0   1.0       1.0

1. Select the best action for Zingtronics if Dr. deSilva submits a pessimistic forecast for the computer industry.

2. Prepare a decision tree diagram for the problem with the use of Dr. deSilva’s forecasts.

3. What is the Bayes’ strategy for this problem?

4. Determine the maximum fee that should be paid for the use of Dr. deSilva’s services.
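Part 1 reduces to a Bayes'-rule revision of the demand probabilities, followed by a re-evaluation of each action's expected cost under the revised distribution:

```python
# Bayes'-rule sketch for part 1 of Exercise 5.17: revise the demand
# probabilities after a pessimistic forecast, then re-price each action.
prior = {"Low": 0.10, "Average": 0.60, "High": 0.30}
p_pessimistic = {"Low": 0.6, "Average": 0.2, "High": 0.1}  # P(forecast | demand)
cost = {  # costs in thousands of dollars, from Exercise 5.16
    "Produce":     {"Low": 140, "Average": 120, "High": 90},
    "Subcontract": {"Low": 100, "Average": 110, "High": 160},
}

joint = {s: p_pessimistic[s] * prior[s] for s in prior}
evidence = sum(joint.values())                        # P(pessimistic forecast)
posterior = {s: joint[s] / evidence for s in prior}   # P(demand | pessimistic)

expected = {a: sum(posterior[s] * c[s] for s in posterior)
            for a, c in cost.items()}
print(min(expected, key=expected.get))   # prints: Subcontract
```

A pessimistic forecast shifts weight toward low demand, which flips the preferred action from producing internally to subcontracting; the same revision, repeated for the other two forecasts, yields the Bayes' strategy of part 3.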

18. 5.18 Allen Konigsberg is an expert in decision support systems and has been hired by a small software engineering firm to help plan their R&D strategy for the next 6 to 12 months. The company wishes to devote up to 3 person-years, or roughly $200,000, to R&D projects. Show how Konigsberg can use a decision tree to structure his analysis. State all of your assumptions.

19. 5.19 The management of Dream Cruises, Ltd., operating in the Caribbean, has established the need for expanding its fleet capacity and is considering what the best plan for the next 8-year planning period will be. One strategy is to buy a larger 40,000-ton cruise ship now, which would be most profitable if demand is high. Another strategy would be to start with a small 15,000-ton ship now and consider buying another medium 25,000-ton ship 3 years later. The planning department has estimated the probabilities for high and low demand for each period to be 0.6 and 0.4, respectively. If the company buys the large ship, then the annual profit after taxes for the next 8 years is estimated to be $800,000 if demand is high and $100,000 if it is low. If the company buys the small ship, then the annual profits each year will be $300,000 if demand is high and $150,000 if it is low.

After 3 years with the small vessel, a decision for new capacity will be reviewed. At this time, the firm may decide to expand by adding a 25,000-ton ship or by continuing with the small one. The annual profit after expansion will be $700,000 if demand is high and $120,000 if it is low.

1. Prepare a decision tree that shows the actions available, the states of nature, and the annual profits.

2. Calculate the total expected profit for each branch in the decision tree covering 8 years of operation.

3. Determine the optimum fleet-expansion strategy for Dream Cruises, Ltd.

20. 5.20 Referring to Exercise 5.19 , determine the optimal fleet-expansion strategy if projected annual profits are discounted at the rate of 12%.

21. 5.21 Pipeline Construction Model. This exercise is a variation of the classical “machine setup” problem. The installation of an oil pipeline that runs from an oil field to a refinery requires the welding of 1,000 seams. Two alternatives have been specified for performing the welding: (1) use a team of ordinary and apprentice welders (B-team) only, or (2) use a team of master welders (A-team) who check and rework (as necessary) the welds of the B-team. If the first alternative is chosen, then it is estimated from past experience that 5% of the seams will be defective with probability 0.30, 10% will be defective with probability 0.50, or 20% will be defective with probability 0.20. However, if the B-team is followed by the A-team, then a defective rate of 1% is almost certain.

Material and labor costs are estimated at $400,000 when the B-team is used alone, whereas these costs rise to $530,000 when the A-team is also brought in. Defective seams result in leaks that must be reworked at a cost of $1,200 per seam, which includes the cost of labor and spilled oil but ignores the cost of environmental damage.

1. Determine the optimal decision and its expected cost. How might environmental damage be taken into account?

2. A worker on the pipeline with a Bayesian inclination (from long years of wagering on the ponies) has proposed that management consider x-ray inspections of five randomly selected seams following the work of the B-team. Such an inspection would identify defective seams, which would provide management with more information for the decision on whether to bring in the A-team. It costs $5,000 to inspect the five seams. Financially, is it worthwhile to carry out the inspection? If so, then what decision should be made for each possible result of the inspection?
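The Bayesian revision at the heart of part 2 can be sketched with a binomial likelihood for the number of defective seams found among the five inspected; the full EVSI comparison against the $5,000 inspection cost is left to the complete tree:

```python
# Bayesian update sketch for part 2 of Exercise 5.21: x-raying five seams
# and counting defects revises the beliefs about the B-team's defect rate.
from math import comb

prior = {0.05: 0.30, 0.10: 0.50, 0.20: 0.20}   # P(defect rate), from the exercise

def posterior(defects_found, n=5):
    """P(defect rate | defects_found among n inspected seams), via Bayes' rule."""
    like = {q: comb(n, defects_found) * q**defects_found * (1 - q)**(n - defects_found)
            for q in prior}
    joint = {q: like[q] * prior[q] for q in prior}
    total = sum(joint.values())
    return {q: joint[q] / total for q in prior}

# Finding no defects shifts belief toward the 5% rate; two defects shift it
# toward the 20% rate, which would favor bringing in the A-team.
print(posterior(0))
print(posterior(2))
```

Each possible inspection outcome (0 through 5 defects) produces its own posterior, and the optimal follow-on decision (bring in the A-team or not) is taken against that posterior, outcome by outcome.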

5.22 A decision is to be made as to whether to perform a complete audit of an accounts receivable file. Substantial errors in the file can result in a loss of revenue to the company; however, conducting a complete audit is expensive. The average cost of auditing one account is estimated at $6. A complete audit, though, reduces the true but unknown proportion p of accounts in error, and so may reduce the loss of revenue significantly.

Andrew Garland, the audit manager, has the option of first conducting a partial audit before his decision on the complete audit. Using the prior probability distribution and payoffs (costs) given in the table below, develop a single auditing plan based on a partial audit of three accounts. Work with opportunity losses.

Proportion of accounts   Prior probability   Conditional cost
in error, p              of p, P(p)          Do not audit   Complete audit
0.05                     0.2                 $1,000         $10,000
0.50                     0.7                 $10,000        $10,000
0.95                     0.1                 $29,000        $10,000

1. Develop the opportunity loss matrix—the matrix derived from the payoff matrix (state of nature versus cost) by subtracting from each entry the smallest entry in its row.

2. Structure the problem in the form of a decision tree. Specify all actions, sample outcomes, and events. Indicate opportunity losses and probabilities at all points on the tree. Show all calculations.

3. Develop the conditional probability matrix, P(X|p).

4. Develop the joint probability matrix.

5. Is the single auditing plan better than not conducting a partial audit?

1. What is the expected opportunity loss with no partial auditing?

2. What is the expected value of perfect information (EVPI)? Note that EVPI is the difference between the optimal EMV under perfect information and the optimal EMV under the current uncertainty (before collecting more data).

3. What is the expected value of sample information (EVSI), where EVSI=EVPI−EMV? The evaluation of EMV should take into account the results of the partial audit.

4. State how you would determine the optimal number of partial audits in a sampling plan.

5.23 A trucking company has decided to replace its existing truck fleet. Supplier A will provide the needed trucks at a cost of $700,000. Supplier B will charge $500,000, but its vehicles may require more maintenance and repair than those from supplier A. The trucking company is also considering modernizing its maintenance and repair facility, either by renovation or by renovation and expansion. Although expansion is generally more expensive than renovation alone, it enables greater repair efficiency and therefore reduced annual operating costs for the facility. The estimated costs of renovation alone and of renovation and expansion, as well as the ensuing operating costs, depend on the quality of the trucks purchased and the extent of the maintenance that they require. The trucking company therefore has decided on the following strategy: purchase the trucks now; observe their maintenance requirements for 1 year; then decide whether to renovate or to renovate and expand. During the 1-year observation period, the company will gain additional information about expected maintenance requirements during years 2 through 5.

If the trucks are purchased from supplier A, then first-year maintenance costs are expected to be low ($30,000) with a probability of 0.7 or moderate ($40,000) with a probability of 0.3. If they are purchased from supplier B, then maintenance costs will be low ($30,000) with a probability of 0.3, moderate ($40,000) with a probability of 0.6, or high ($50,000) with a probability of 0.1. The costs of renovation, shown here, depend on the first year’s maintenance experience.

One-year maintenance   Renovation   Renovation and
requirements           costs        expansion costs
Low                    $150,000     $300,000
Moderate               $200,000     $500,000
High                   $300,000     $700,000

Expected maintenance costs for years 2 through 5 can best be estimated after observing the maintenance requirements for the first year (Table 5.17). Probabilities of the various maintenance levels in years 2 through 5 depend on the type of trucks selected and the maintenance experience during year 1 (Table 5.18).

TABLE 5.17

Supplier A (maintenance level in years 2–5 is either Low or Moderate):

First-year maintenance   Renovate (Low / Moderate)    Renovate and expand (Low / Moderate)
Low                      $100,000 / $150,000          $40,000 / $60,000
Moderate                 $100,000 / $150,000          $40,000 / $60,000

Supplier B (maintenance level in years 2–5 is either Moderate or High):

First-year maintenance   Renovate (Moderate / High)   Renovate and expand (Moderate / High)
Low                      $150,000 / $200,000          $50,000 / $90,000
Moderate                 $150,000 / $200,000          $50,000 / $90,000
High                     $250,000 / $300,000          $70,000 / $100,000

TABLE 5.18

                                    Maintenance level, years 2–5
Supplier   First-year maintenance   Low    Moderate   High
A          Low                      0.7    0.3        —
A          Moderate                 0.4    0.6        —
B          Low                      —      0.5        0.5
B          Moderate                 —      0.4        0.6
B          High                     —      0.3        0.7

Use decision tree analysis to determine the strategy that minimizes expected costs.
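Rolling back a tree of this kind reduces to taking expectations at chance nodes and minima at decision nodes. The node encoding below is a hypothetical sketch, illustrated on a tiny insurance-style example rather than the full truck problem:

```python
# Minimal decision-tree rollback for expected-cost minimization.
# Node encoding (our convention):
#   ("decision", {action_name: (immediate_cost, child), ...})
#   ("chance",   [(probability, child), ...])
#   a bare number is a terminal cost.

def rollback(node):
    if isinstance(node, (int, float)):
        return node, []                       # leaf: cost, empty plan
    kind, body = node
    if kind == "chance":
        total, plan = 0.0, []
        for prob, child in body:              # expected value over branches
            value, sub = rollback(child)
            total += prob * value
            plan += sub
        return total, plan
    # Decision node: pick the action minimizing immediate + downstream cost.
    best_name, best_val, best_plan = None, float("inf"), []
    for name, (cost, child) in body.items():
        value, sub = rollback(child)
        if cost + value < best_val:
            best_name, best_val, best_plan = name, cost + value, sub
    return best_val, [best_name] + best_plan

# Tiny illustration: pay 10 for a certain outcome, or gamble on 0 / 100.
tree = ("decision", {"insure": (10, 0),
                     "gamble": (0, ("chance", [(0.5, 0), (0.5, 100)]))})
value, plan = rollback(tree)
print(value, plan)
```

For Exercise 5.23, the root decision node would hold the two suppliers, followed by a chance node on first-year maintenance, the renovate-versus-expand decision, and a final chance node on years 2–5 maintenance built from Tables 5.17 and 5.18.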


Appendix 5A Bayes’ Theorem for Discrete Outcomes

For a given problem, let there be n mutually exclusive, collectively exhaustive possible outcomes S1, …, Si, …, Sn whose prior probabilities P(Si) have been established. The laws of probability require

∑i=1…n P(Si) = 1,   0 ≤ P(Si) ≤ 1,   i = 1, …, n

If the results of additional study, such as sampling or further investigation, are designated as X, where X is discrete and P(X)>0, Bayes’ theorem can be written as

P(Si|X) = P(X|Si)P(Si) / ∑j=1…n P(X|Sj)P(Sj) (5A.1)

The posterior probability P(Si|X) is the probability of outcome Si given that additional study resulted in X. The probability of X and Si occurring, P(X|Si)P(Si), is the “joint” probability of X and Si or P(X, Si). The sum of all of the joint probabilities is equal to the probability of X. Therefore, Eq. (5A.1) can be written

P(Si|X) = P(X|Si)P(Si) / P(X) (5A.2)

A format for application is presented in Table 5A.1. The columns are as follows.

TABLE 5A.1  Format for Applying Bayes’ Theorem

(1)      (2)            (3)                    (4) = (2) × (3)          (5) = (4)/∑(4)
State    Prior          Probability of         Joint                    Posterior
         probability    sample outcome, X      probability              probability, P(Si|X)
S1       P(S1)          P(X|S1)                P(X|S1)P(S1)             P(X|S1)P(S1)/P(X)
S2       P(S2)          P(X|S2)                P(X|S2)P(S2)             P(X|S2)P(S2)/P(X)
·        ·              ·                      ·                        ·
Si       P(Si)          P(X|Si)                P(X|Si)P(Si)             P(X|Si)P(Si)/P(X)
·        ·              ·                      ·                        ·
Sn       P(Sn)          P(X|Sn)                P(X|Sn)P(Sn)             P(X|Sn)P(Sn)/P(X)
Sum      ∑P(Si) = 1                            ∑P(X|Si)P(Si) = P(X)     ∑P(Si|X) = 1

1. Si: potential states of nature.

2. P(Si): estimated prior probability of Si. (Note: This column sums to one.)

3. P(X|Si): the conditional probability of getting sample or added study results X, given that Si is the true state (assumed to be known).

4. P(X|Si)P(Si): joint probability of getting X and Si; the summation of this column is P(X), which is the probability that the sample or added study results in outcome X.

5. P(Si|X): posterior probability of Si given that sample outcome resulted in X; numerically, the ith entry is equal to the ith entry of column (4) divided by the sum of the values in column (4). (Note: Column (5) sums to unity.)
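Columns (2) through (5) of Table 5A.1 can be sketched in a few lines (the function name and the example numbers are ours):

```python
# Sketch of the Table 5A.1 layout: given priors P(S_i) (column 2) and
# likelihoods P(X | S_i) (column 3), compute columns (4) and (5).

def bayes_table(priors, likelihoods):
    joints = [p * l for p, l in zip(priors, likelihoods)]   # column (4)
    p_x = sum(joints)                                       # sum of column (4) = P(X)
    posteriors = [j / p_x for j in joints]                  # column (5)
    return joints, p_x, posteriors

# Illustrative example: two states with priors 0.6/0.4, likelihoods 0.2/0.9.
joints, p_x, post = bayes_table([0.6, 0.4], [0.2, 0.9])
print(p_x)    # P(X), about 0.48
print(post)   # posteriors, summing to 1
```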

Chapter 6 Multiple-Criteria Methods for Evaluation and Group Decision Making

6.1 Introduction

It is often the case, particularly in the public sector, that goods and services are either of a collective nature, such as those for defense and space exploration, or subsidized so that their prevailing market price is an unrealistic measure of the actual cost to the community. In these circumstances, an attempt must be made to find a suitable undistorted “price.”

When the analysis turns to such intangible considerations as safety, health, and the quality of life, it is rarely possible to find a single variable whose direct measurement will provide a valid indicator. Often a surrogate is used. For example, a city’s environmental character could be evaluated by means of an index composed of air pollution levels, noise levels, traffic flow rates, and pedestrian densities. Another index might include crime, fire alarms, and suicide rates. At the national level, it is common to cite unemployment percentages, the consumer and producer price indices, the level of the Dow Jones industrial stocks, and the amount of manufacturer inventories as indicators of general economic well-being. In fact, each of these measures is a composite of a multitude of elements, weighted and summed together in what many would view as an arbitrary manner. A variety of procedures for doing this were presented in Chapter 5. For evaluating large, complex projects, more systematic and rational procedures are required. In this chapter, we focus on methods that have been developed to bring greater rigor to the evaluation and selection process.

6.2 Framework for Evaluation and Selection

The success of a project depends on a host of factors, the foremost being its ability to meet critical performance requirements. Success also depends on the likelihood that the project will remain within the planned schedule and budget, the technological opportunities that it offers beyond the immediate application, and the user’s perception of its ability to satisfy long-term organizational goals. To balance these factors, a value model is needed. Such a model offers the decision maker a framework for making the underlying tradeoffs.

A paradigm for any decision analysis is depicted in Figure 6.1. In the context of project management, a decision maker must pick the most “preferred” alternative from a finite set of candidates. Here, the system model may be as simple as a spreadsheet or as elaborate as a dynamic mathematical simulation. Consideration should be given to the full range of economic, technological, and political aspects of the project. Each alternative, together with the prevailing uncertainties, is fed into the system model, and a particular outcome is reported.

Figure 6.1 Decision analysis paradigm.

If the uncertainties are minimal and the data are reliable, then the outcomes will be fairly accurate. When uncertainty dominates, it may not be possible to develop a valid system model. The problems for which decision analysis is most effective lie somewhere between these two extremes. For example, if an advanced energy system is to be developed, then certain engineering principles and experience with prototypes should give a good indication of performance. However, some uncertainties will still exist, such as the cost of the system in mass production or its reliability in commercial operation.

In the decision analysis paradigm, the outcomes of the system model provide the input to the value model. The output of the latter is a statement of the decision maker’s preferences in terms of a rank ordering of the outcomes or as numerical values that indicate strength of preference as well as rank.

6.2.1 Objectives and Attributes¹

¹The word attribute describes what is important in a decision problem and is often used interchangeably with objective and criterion. A finer distinction can be made as follows: an objective represents a direction of improvement or preference for one or more attributes, whereas a criterion is a standard or rule that guides decision making.

For many projects, there are multiple, and at times competing, objectives or goals. They are stated in terms of properties, either desirable or undesirable, that determine a decision maker’s preferences for the outcomes. For the design of an automobile, for example, several objectives might be to (1) minimize production costs, (2) minimize fuel consumption, (3) minimize air pollution, and (4) maximize safety. The purpose of the value model is to take the outcomes of the system model, determine the degree to which they satisfy each of the objectives, and then make the necessary tradeoffs to arrive at a ranking of the alternatives that correctly expresses the preferences of the decision maker.

The value model is developed in terms of a hierarchy of objectives, as shown in Figure 6.2 for an automobile design project. To quantify the model, a unit of measurement must be assigned to the lowest members of the hierarchy. These members are called attributes and may be scaled in any number of ways depending on the evaluation technique used. In Figure 6.2, eight attributes are used to quantify the value model. They may be represented by an 8-component vector: x = (x1, x2, x3, x4, x5, x6, x7, x8). A specific occurrence of an attribute is called a state. An attribute state for the objective “minimize fuel consumption” might be x3 = 35 miles per gallon.

Figure 6.2 Hierarchy of objectives for advanced vehicle systems.


Both theory and practice have shown that the set of attributes should satisfy the following requirements for the value model to be a valid and useful representation of the decision maker’s preference structure.

1. Completeness. The set of attributes should characterize all of the factors to be considered in the decision-making process.

2. Importance. Each attribute should represent a significant criterion in the decision-making process, in the sense that it has the potential for affecting the preference ordering of the alternatives under consideration.

3. Measurability. Each attribute should be capable of being objectively or subjectively quantified. Technically, this requires that it be possible to establish a utility function (see Chapter 3 for a discussion of utility functions) for the attribute.

4. Familiarity. Each attribute should be understandable to the decision maker in the sense that he should be able to identify preferences for different states.

5. Uniqueness. No two attributes should measure the same criterion, a situation that would result in double counting.

6. Independence. The value model should be structured so that changes within certain limits in the state of one attribute should not affect the preference ordering for states of another attribute or the preference ordering for gambles over the states of another attribute (more will be said about this later).

If an attribute does not meet these conditions, then it should either be redefined by, say, dividing its range into smaller intervals and introducing “sub-attributes” corresponding to these intervals or be combined with other attributes.

6.2.2 Aggregating Objectives into a Value Model

Once attributes have been assigned to all of the objectives and attribute states have been determined for all possible outcomes, it is necessary to aggregate the states by constructing a single unit of measurement that accurately represents the decision maker’s preference ordering for the outcomes. This was achieved somewhat arbitrarily in Chapter 5 by specifying weights for each attribute or criterion. A more rigorous and defensible method is the “willingness to pay” or “pricing out” technique (Keeney and Raiffa 1976). One attribute is singled out as the reference, preferably an attribute measured in dollars, and rates of substitution are determined for the others.

Two procedures for operationalizing this concept will now be presented. Complementary techniques have been developed by Graves et al. (1992), Lewandowski and Wirezbicki (1989), and Lotfi et al. (1992), just to name a few.

6.3 Multiattribute Utility Theory

If the set of attributes satisfies the requirements listed above, then it is possible to formulate a mathematical function, called a multiattribute utility function, that assigns numbers, called outcome utilities, to each outcome state. In general, the utility U(x) = U(x1, x2, …, xN) of any combination of outcomes (x1, x2, …, xN) for N attributes can be expressed as either (1) an additive or (2) a multiplicative function of the individual attribute utility functions U1(x1), U2(x2), …, UN(xN), provided that each pair of attributes is:

1. Preferentially independent of its complement; that is, the preference order of consequences for any pair of attributes does not depend on the levels at which the other attributes are held.

2. Utility independent of its complement; that is, the conditional preference for lotteries (probabilistic tradeoffs) involving only changes in the levels of any pair of attributes does not depend on the levels at which the other attributes are held.

To illustrate condition 1, suppose that four attributes for a given project are profitability, time to market, technical risk, and commercial success. Preferential independence means that if we judge technical risk, for example, to be more important than profitability, then this relationship holds regardless of whether the level of profitability is high, low, or somewhere in between, and regardless of the values of the other attributes.

The second condition, utility independence, means that if we are deciding on the preference ordering (ranking) for probabilistic tradeoffs between, say, technical risk and time to market, then this can be done regardless of the value of profitability. As an example of such a preference ordering, a 25% chance of very low risk and a 70% chance of quick time to market might be preferred to a 15% chance of very low risk and a 90% chance of quick time to market.

Before proceeding, it is necessary to verify that these two conditions are valid or, more correctly, to test and identify the bounds of their validity. A procedure for doing this is provided by Keeney (1977). The mathematical notation used to describe the model is given below:

xi — state of the ith attribute

xi^0 — least preferred state to be considered for the ith attribute

xi^* — most preferred state to be considered for the ith attribute

x — vector (x1, x2, …, xN) of attribute states characterizing a specific outcome

x^0 — outcome constructed from the least preferred states of all attributes; x^0 = (x1^0, …, xN^0)

x^* — outcome constructed from the most preferred states of all attributes; x^* = (x1^*, …, xN^*)

(xi, x̄i^0) — outcome in which all attributes except the ith are at their least preferred states

Ui(xi) — utility function associated with the ith attribute

U(x) — utility function associated with the outcome x

ki — scaling constant for the ith attribute; ki = U(xi^*, x̄i^0)

k — master scaling constant

Now, if the two independence conditions hold, then U(x) assumes the following multiplicative form:

U(x) = (1/k){ ∏i=1…N [1 + k ki Ui(xi)] − 1 } (6.1a)

where the master scaling constant k is determined from the equation 1 + k = ∏i=1…N (1 + k ki). If ∑i ki > 1, then −1 < k < 0; if ∑i ki < 1, then k > 0; if ∑i ki = 1, then k = 0 and Eq. (6.1a) reduces to the additive form:

U(x) = ∑i=1…N ki Ui(xi) (6.1b)
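As a sketch of how Eqs. (6.1a) and (6.1b) might be evaluated in practice, the master constant k can be found by bisecting its defining equation 1 + k = ∏(1 + k ki). The bisection bounds, tolerances, and example scaling constants below are our own choices, not from the text:

```python
from math import prod

def master_constant(ks, tol=1e-12):
    """Solve 1 + k = prod_i(1 + k*k_i) for the nonzero root k by bisection."""
    s = sum(ks)
    if abs(s - 1.0) < 1e-9:
        return 0.0                                   # additive case, Eq. (6.1b)
    f = lambda k: 1.0 + k - prod(1.0 + k * ki for ki in ks)
    # sum < 1 implies k > 0; sum > 1 implies -1 < k < 0.
    lo, hi = (1e-9, 1e9) if s < 1 else (-1 + 1e-9, -1e-9)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:                      # sign change in [lo, mid]
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

def U(us, ks):
    """Multiattribute utility of attribute utilities us with constants ks."""
    k = master_constant(ks)
    if k == 0.0:                                     # Eq. (6.1b)
        return sum(ki * ui for ki, ui in zip(ks, us))
    return (prod(1.0 + k * ki * ui for ki, ui in zip(ks, us)) - 1.0) / k

ks = [0.4, 0.3, 0.1]        # sum < 1, so the multiplicative form applies
print(U([1, 1, 1], ks))     # best outcome scores 1 by construction
print(U([0, 0, 0], ks))     # worst outcome scores 0
```

Note that the boundary conditions U(x^*) = 1 and U(x^0) = 0 follow directly from the defining equation for k, which gives a useful sanity check on any implementation.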

Because utility is a relative measure, the underlying theory permits the arbitrary assignments Ui(xi^0) = 0 and Ui(xi^*) = 1; that is, the worst outcome for each attribute is given a utility value of 0 and the best outcome a utility value of 1. The shape of the utility function depends on the decision maker’s subjective judgment of the relative desirability of possible outcomes. A pointwise approximation of this function can be obtained by asking a series of lottery-type questions such as the following: For attribute i, what certain outcome xi would be equally desirable as realizing the highest outcome with probability p and the lowest outcome with probability (1 − p)? This can be expressed in utility terms using the extreme values xi^* and xi^0 as

Ui(xi = ?) = p Ui(xi^*) + (1 − p) Ui(xi^0) = p

To construct the curve, p can be varied in fixed increments until either a continuous function can be approximated or enough discrete points have been assessed to give an accurate picture. Alternatively, one could specify the certain outcome xi over a range of values and ask questions such as, “At what p is the certain outcome xi equally desirable as the lottery p Ui(xi^*) + (1 − p) Ui(xi^0)?” Graphically, the assessment of p can be represented as the lottery shown in Figure 6.3.

Figure 6.3 Graphical assessment of indifference probability.

Example 6-1

Suppose that we want to estimate a utility function for the relative fuel economy of an automobile under development (attribute 3 in Figure 6.2). The best achievable value might be 80 mpg and the worst 20 mpg. These outcomes would be assigned utility values of 1 and 0, respectively. For p = 0.5 (the 50-50 lottery), the question would be, “How many miles per gallon as a ‘sure thing’ would be equivalent to a gamble offering a 50% chance of realizing 80 mpg and a 50% chance of realizing 20 mpg?” If the answer is, say, 60 mpg, then the new utility value would be calculated as

U(x = 60) = 0.5 U(x = 80) + 0.5 U(x = 20) = 0.5(1) + 0.5(0) = 0.5

Note that the utility of the certain outcome equals the probability of the best outcome. Figure 6.4 depicts the interview process. A typical utility curve that resulted from the questioning of a representative of a consumer’s group is shown in Figure 6.5 (Feinberg et al. 1985).

Figure 6.4 Sample interview question for relative fuel economy.

Figure 6.5 Example of utility curve for representative consumer.

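The lottery answers gathered in an interview like the one in Example 6-1 yield a pointwise utility curve that can be interpolated between assessed points. The points below are illustrative, with only the 60-mpg answer taken from the example:

```python
# Hypothetical assessed (mpg, utility) points from 50-50 lottery questions.
assessed = [(20, 0.0), (40, 0.25), (60, 0.5), (72, 0.75), (80, 1.0)]

def utility(x):
    """Piecewise-linear interpolation of the assessed utility points."""
    pts = sorted(assessed)
    if x <= pts[0][0]:
        return pts[0][1]                     # clamp below the worst state
    if x >= pts[-1][0]:
        return pts[-1][1]                    # clamp above the best state
    for (x0, u0), (x1, u1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return u0 + (u1 - u0) * (x - x0) / (x1 - x0)

print(utility(60))   # the 50-50 lottery answer from Example 6-1
```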

Once utility functions for all attributes have been determined, the next step is to assess the scaling constants ki. For both the multiplicative model, Eq. (6.1a), and the additive model, Eq. (6.1b), ki = U(xi^*, x̄i^0), where 0 ≤ ki ≤ 1. That is, ki is the utility value associated with the outcome in which attribute i is at its best value, xi^*, and all other attributes are at their worst values, x̄i^0. In assessing the ki's, the following type of question is usually asked:

For what probability p are you indifferent between:

1. The lottery giving a p chance at x^* ≡ (x1^*, …, xN^*) and a (1 − p) chance at x^0 ≡ (x1^0, …, xN^0), versus

2. The consequence (x1^0, …, x(i−1)^0, xi^*, x(i+1)^0, …, xN^0).

The interview sheet used for determining the scaling constant associated with relative fuel economy is shown in Figure 6.6 (the responses to the last two questions give an indication of the degree to which the independence conditions hold). The result of the assessment is that, in general, ki = p. Good practice suggests that before the scaling constants are assessed, the attributes be ranked in ascending order of importance as they progress from their worst to their best states. Figure 6.7 displays the question sheet that was used for this purpose.

Figure 6.6 Sample interview question used to determine scaling constant for the relative fuel economy attribute.

Figure 6.7 Sample interview question used to determine order of importance of attributes.

Attribute               Best state            Worst state         Order of importance
Relative fuel economy   80 mpg equivalent     20 mpg equivalent
Initial cost            $5,000                $25,000
Life-cycle cost/mile    $0.20/mile            $1.00/mile
Maintainability         10                    0
Safety                  10                    0
Refuel time             0.17 hours (10 min)   8.0 hours
Unrefueled range        250 miles             50 miles

(The “Order of importance” column is left blank for the respondent to fill in.)

The last step in the evaluation and selection process is to rank the alternatives. This is done by using the multiattribute utility function to calculate outcome utilities for each alternative under consideration. If two or more alternatives seem to be close in rank, then their sensitivity to both the scaling constants and the utility functions should be examined. Appendix 6A contains a more detailed example of the evaluation process.

A final point to make about multiattribute utility theory (MAUT) concerns the possibility that the state of an attribute may be uncertain. “Completion time of a task,” “reliability of a subassembly,” and “useful life of the system” are some examples of attributes whose states may take on different values with known (or, more distressingly, with unknown) probability. In these cases, x i is really a random variable, so it is more appropriate to compute the expected utility of a particular outcome. For the additive model, this can be done with the following equation:

E[U(x)] = ∑i=1…N ki ∫−∞^∞ Ui(xi) fi(xi) dxi (6.2)

where fi(xi) is the probability density function associated with attribute i and E[·] is the expectation operator (Keeney and von Winterfeldt 1991). Commercial software is available to help in the assessment of fi, as well as of the scaling constants ki and the individual utility functions Ui.
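For attributes with discrete uncertainty, the integral in Eq. (6.2) reduces to a probability-weighted sum. The sketch below uses that discrete form; the scaling constants, utility functions, and probability mass functions are made-up illustrations:

```python
# Expected additive utility, Eq. (6.2), with discrete attribute distributions.

def expected_additive_utility(ks, utils, pmfs):
    """ks[i]: scaling constant; utils[i]: utility function;
    pmfs[i]: {state: probability} for attribute i."""
    total = 0.0
    for k, u, pmf in zip(ks, utils, pmfs):
        total += k * sum(prob * u(x) for x, prob in pmf.items())
    return total

# Two attributes: task completion time (weeks, lower is better) and reliability.
u_time = lambda t: (30 - t) / 20          # maps 30..10 weeks onto 0..1
u_rel = lambda r: r                       # reliability already on a 0-1 scale
pmf_time = {15: 0.5, 20: 0.3, 25: 0.2}
pmf_rel = {0.9: 0.6, 0.8: 0.4}

print(expected_additive_utility([0.6, 0.4], [u_time, u_rel], [pmf_time, pmf_rel]))
```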

6.3.1 Violations of Multiattribute Utility Theory

In practice, as pointed out by Schoemaker (1982), among others, MAUT is rarely used. Human decision makers do not structure decision problems as holistically and comprehensively as expected utility theory requires. Nor do they process information, particularly probabilities associated with uncertain outcomes, with the rigor and consistency that the theory demands. Instead, they tend to use heuristic rules (otherwise referred to as “intuition” or “gut feel”) in processing information and making decisions. Ultimately, human decision makers, even with the aid of advanced computing, satisfice rather than optimize.

Schoemaker (1982) surveys a number of controlled experiments showing that human decision makers consistently violate some of the key axioms and assumptions of MAUT. Coombs (1975) conducted an experiment in which decision makers were asked to rank three gambles A, B, and C in order of attractiveness, where C was a probability mixture of A and B. For example, if A offers a 50-50 chance at $3 or $0, and B offers a 50-50 chance at $5 or $0, then a 40-60 mixture of A and B (i.e., gamble C) offers outcomes of $5, $3, and $0 with probabilities 0.3, 0.2, and 0.5, respectively. According to utility theory, gamble C should be ranked between A and B in attractiveness. However, in the Coombs experiment, 46% of participants ranked the gambles CAB, CBA, ABC, or BAC, that is, with C first or last.

Kahneman and Tversky (1979) described the Allais Paradox. In Situation A, decision makers must choose between:

(1a) a certain loss of $45 or

(2a) a 0.5 probability of losing $100 and a 0.5 probability of losing $0.

In Situation B, decision makers must choose between:

(1b) a 0.1 probability of losing $45 and a 0.9 probability of losing $0 or

(2b) a 0.05 probability of losing $100 and a 0.95 probability of losing $0.

Decision makers preferred alternative (2a) to alternative (1a) and alternative (1b) to alternative (2b). If (2a) is preferred to (1a), then, from utility theory,

U(−45) < 0.5 U(−100) + 0.5 U(0).

If (1b) is preferred to (2b), then

0.1 U(−45) + 0.9 U(0) > 0.05 U(−100) + 0.95 U(0), which simplifies to U(−45) > 0.5 U(−100) + 0.5 U(0).

The Allais Paradox demonstrates that decision makers are not always consistent with respect to their utility function.

Bar Hillel (1973) conducted an experiment which demonstrated decision makers’ difficulties with assessing probability (a key tenet of utility theory). In Bar Hillel’s experiment, participants were asked to consider three alternatives:

Simple event: drawing a red marble from a bag containing 50% red and 50% white marbles

Conjunctive event: drawing a red marble seven times in succession, with replacement, from a bag containing 90% red and 10% white marbles

Disjunctive event: drawing a red marble at least once in seven successive tries, with replacement, from a bag containing 10% red and 90% white marbles

The probabilities of the three events are 0.5, 0.48, and 0.52, respectively. However, the majority of participants preferred alternative 2 to alternative 1 and alternative 1 to alternative 3. Bar Hillel found that decision makers tend to overestimate the probability of conjunctive events and underestimate the probability of disjunctive events. This bias may be explained by anchoring: the stated probability of the elementary event (0.9 for the conjunctive bag, 0.1 for the disjunctive one) provides a natural starting point from which decision makers make an insufficient adjustment to arrive at a correct ordering of the events.
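The three probabilities follow directly from independent draws with replacement; a quick check:

```python
# Probabilities of the three Bar Hillel (1973) events (draws with replacement).
p_simple = 0.5                 # one red draw from a 50/50 bag
p_conjunctive = 0.9 ** 7       # seven reds in a row from a 90/10 bag
p_disjunctive = 1 - 0.9 ** 7   # at least one red in seven tries from a 10/90 bag

print(round(p_simple, 2), round(p_conjunctive, 2), round(p_disjunctive, 2))
# 0.5 0.48 0.52 -- the disjunctive event is actually the most likely of the three
```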

Several studies, for example, Hershey and Schoemaker (1980), found that decision makers are not uniformly risk averse, a central premise of utility theory. For example, fewer than 40% of decision makers were willing to pay $100 to protect themselves from a 1% chance of losing $10K. Although this insurance was actuarially fair (the $100 premium exactly equals the expected loss of 0.01 × $10,000), decision makers behaved as if they were risk-seeking. Hershey and Schoemaker concluded that decision makers have difficulty in processing information that deals with low-probability, high-loss events.
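The arithmetic behind "actuarially fair" is simply that the premium equals the expected loss, and any concave (risk-averse) utility of final wealth then prefers the insured position. A sketch (the $20,000 wealth level and square-root utility are illustrative assumptions, not from the text):

```python
import math

# An actuarially fair premium equals the expected loss it insures against.
p, loss, premium, wealth = 0.01, 10_000, 100, 20_000
assert premium == p * loss  # actuarially fair

u = math.sqrt  # any concave (risk-averse) utility of final wealth
eu_insured = u(wealth - premium)
eu_uninsured = p * u(wealth - loss) + (1 - p) * u(wealth)
print(eu_insured > eu_uninsured)  # True: a risk averter should buy the policy
```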

Katona (1965) discussed the role that psychological factors play in economic behavior. Unlike utility theory, which assumes that human decision makers are fully rational and can optimally assess probabilities of uncertain events and outcomes, Katona demonstrated that human decision making is often driven by emotional and psychological factors. He compared private savings of workers who received a private pension from an employer ("forced savings") with private savings of workers who did not receive such a benefit. Utility theory suggests that workers with forced savings would reduce their own, private savings (the forced savings, in effect, "substitute" for savings that a worker would personally contribute in order to reach a savings goal). However, in Katona's study, workers with forced savings actually increased their private savings. Katona attributed this counterintuitive result to aspiration-level adjustments and goal-gradient effects. That is, as workers got closer to their ultimate, overall savings goals, they tended to accelerate their personal savings (to complement their forced-savings employment benefit).

Ronen (1973) found that decision makers are sensitive to a problem's presentation. For example, interchanging two stages of a multi-stage lottery can affect preferences. Ronen found that a 70% chance of getting a 30% chance of receiving $100 was more attractive than a 30% chance of getting a 70% chance of receiving $100. According to utility theory, the two alternatives are identical (both yield a 0.7 × 0.3 = 21% chance of receiving $100).

Related to Ronen’s work, Schoemaker and Kunreuther (1979) discovered a context effect whereby the wording of decision alternatives can affect preferences. For example, in an experiment with decision makers, they posed a gamble formulation:

(1a) a sure loss of $10

(1b) a 1% chance of losing $1,000.

In contrast, they also posed an insurance formulation:

(2a) pay an insurance premium of $10

(2b) remain exposed to a hazard of losing $1,000 with a 1% chance.

Utility theory suggests that these two formulations are identical. However, 56% of decision makers preferred (1a) to (1b), whereas 81% preferred (2a) to (2b). Factors outside utility theory, such as regret, may have influenced some decision makers to switch from choosing the gamble (1b) to purchasing insurance, alternative (2a).

Tversky and Kahneman (1981) provided a second example of context effects influencing preferences. Subjects were first asked to choose between two alternatives for combating a disease that was expected to kill 600 people.

(1a) if program A is adopted, exactly 200 people will be saved

(1b) if program B is adopted, there is a 33% probability that 600 people will be saved and a 67% probability that no one will be saved.

76% preferred program A.

A second group of decision makers was given the same choice but in slightly altered form.

(2a) if program A is adopted, exactly 400 people will die

(2b) if program B is adopted, there is a 33% probability that nobody will die and a 67% probability that 600 people will die.

13% preferred program A.

The switch from preferring program A in the first case to preferring program B in the second can be explained by changes in wording that shift the reference point decision makers use to evaluate outcomes. Utility theory would insist that decision makers state consistent preferences in the two cases.

MAUT assumes that decision makers make holistic choices based on consideration of all relevant information involved in a decision. However, numerous studies have shown that decisions are made in a decomposed fashion using relative comparisons ("divide and conquer"): human beings find it easier to compare alternatives in a piecemeal, rather than a holistic, fashion. Decision makers often use conjunctive or disjunctive approaches. In a conjunctive decision process, all attributes must satisfy certain minimum thresholds, whereas in a disjunctive decision process, at least one critical criterion must be satisfied. A lexicographic decision model is also common, in which the decision process follows an elimination-by-aspects approach. For example, in choosing a restaurant for dinner, a decision maker can first rule out all restaurants that are more than 10 miles away, then rule out all restaurants where the average entrée cost exceeds $25, and so on. In general, a decision process will vary depending on the task complexity (e.g., the number of reasonable alternatives and the number of critical considerations).
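The restaurant example can be sketched as a sequence of elimination-by-aspects filters; the restaurant data below are hypothetical:

```python
# Elimination-by-aspects screen for the restaurant example in the text.
# The candidate restaurants are hypothetical illustrations.
restaurants = [
    {"name": "Bistro", "miles": 4, "entree": 22},
    {"name": "Steakhouse", "miles": 8, "entree": 38},
    {"name": "Trattoria", "miles": 14, "entree": 18},
    {"name": "Diner", "miles": 2, "entree": 12},
]

# Aspect 1: rule out restaurants more than 10 miles away.
survivors = [r for r in restaurants if r["miles"] <= 10]
# Aspect 2: rule out restaurants whose average entree cost exceeds $25.
survivors = [r for r in survivors if r["entree"] <= 25]

print([r["name"] for r in survivors])  # ['Bistro', 'Diner']
```

Each aspect prunes the set without ever scoring alternatives holistically, which is precisely how this heuristic departs from MAUT.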

According to utility theory, decision making requires a portfolio perspective. Tversky and Kahneman (1981), however, demonstrated an “isolation effect” whereby decisions are made within a narrow, myopic context. For example, subjects were asked to consider two scenarios:

Scenario A: If you purchase a $20 theater ticket which you lose while waiting in the lobby—would you buy a new ticket?

Scenario B: If you discover that $20 is missing when you open your wallet to purchase a theater ticket—would you buy a ticket?

The $20 loss seemed less relevant in Scenario B, although from a portfolio or total wealth perspective, both scenarios are identical.

Tversky and Kahneman (1981) suggested that reference points are often utilized by decision makers, in contradiction to utility theory. They postulated two scenarios.

Scenario A: Suppose you are about to purchase an item for $25; you then learn that you can purchase the same item for $20 at another, nearby store.

Scenario B: Same scenario as Scenario A, except now the item is priced $500 originally and is available for $495 at a nearby store.

Would a decision maker leave the original store and purchase the item at a nearby store? The 20% savings in Scenario A seems more attractive than the 1% savings in Scenario B. Most people’s reference dimension is percent savings. However, utility theory suggests that a decision maker should consider the final asset position in both scenarios. That is, in both scenarios, the decision maker is exactly $5 ahead by switching stores (i.e., the two scenarios are identical).

Thaler (1980) identified a sunk-cost fallacy that can influence decision making. For example, consider a decision maker who bought a case of good wine for $5 per bottle. A few years later, the decision maker's wine merchant offered to buy the wine back for $100 per bottle. The decision maker refused to sell, even though he had never paid more than $35 for a bottle of wine. By keeping wine he could sell for $100 a bottle, he incurred an opportunity cost well above his own maximum willingness to pay; the refusal reflects a failure to properly weigh opportunity costs.

Researchers have found that decision makers often employ subjective probabilities in evaluating uncertainty and making judgments. For example, wishful thinking influences decision makers to inflate probabilities of desirable outcomes. Overconfidence leads decision makers to construct confidence intervals that are too tight. Kahneman and Tversky (1972) hypothesized the representativeness heuristic, characterized by the following example: a doctor diagnoses a patient as having disease A, rather than disease B, based on the similarity of the patient's symptoms to textbook stereotypes, ignoring possible differences in the a priori probabilities of someone having each of these diseases. Tversky and Kahneman also hypothesized the availability heuristic. For example, in judging the chances of dying from a car accident versus lung cancer, people may base their estimates solely on the frequencies with which they hear of both events. Finally, Fischhoff (1975) discussed hindsight bias, which leads decision makers to distort probabilities: events that happen appear, in retrospect, more likely than they did before the outcome was known.

Another blind spot that decision makers have relative to assessing probabilities is that new information is often underweighted in the revision of opinions. Decision makers, at times, are conservative and anchor onto old information with insufficient assimilation of new information.

Finally, Bar Hillel (1980) found that decision makers can be led astray by perceptions regarding causal connections between pieces of information. For example, decision makers were told that only 10% of taxi cabs in a city are blue and asked: was the taxi cab involved in a particular traffic accident green or blue? According to an eyewitness, the taxi cab was blue. Decision makers focused entirely on the reliability of the eyewitness and did not consider the prior probability of a blue taxi cab being involved in an accident. In contrast, a second group of decision makers was told that, although there are equal numbers of blue and green cabs, historically only 10% of taxi cabs involved in traffic accidents were blue. Emphasizing the causal connection of the prior probabilities to the event markedly improved the decision makers' posterior probability judgments.
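Bayes' rule makes the base-rate effect concrete. In the sketch below, the 80% witness reliability is an assumed figure for illustration (the text does not state one); even so reliable a witness yields a posterior of only about 31% that the cab was blue:

```python
# Bayesian posterior for the cab problem.
# The 80% witness reliability is an ASSUMPTION for illustration only.
prior_blue = 0.10      # only 10% of the city's cabs are blue
reliability = 0.80     # P(says "blue" | blue) = P(says "green" | green), assumed

# Total probability the witness says "blue" (correct ID of a blue cab,
# plus mistaken ID of a green cab), then Bayes' rule.
p_says_blue = reliability * prior_blue + (1 - reliability) * (1 - prior_blue)
posterior_blue = reliability * prior_blue / p_says_blue
print(round(posterior_blue, 2))  # 0.31: far below the witness's 80% reliability
```

Ignoring the 10% base rate and answering "80%" is exactly the error Bar Hillel's first group made.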

All of these heuristic and sub-optimal rules that decision makers regularly employ in everyday decision processes represent violations of utility theory and demonstrate a consistent pattern of deviation from normative decision making. Human decision makers cannot and do not structure problems as holistically, and as comprehensively, as utility theory suggests. Moreover, decision makers cannot process information, in particular assess probabilities, according to expected utility theory. Human decision makers ultimately satisfice rather than optimize; that is, they make decisions that are "good enough" and not necessarily optimal across the full range of alternatives.

6.4 Analytic Hierarchy Process The analytic hierarchy process (AHP) was developed by Thomas Saaty to provide a simple, but theoretically sound, multiple-criteria methodology for evaluating alternatives (Saaty and Vargas 2000). Applications can be found in such diverse fields as portfolio selection, transportation planning, manufacturing systems design, and artificial intelligence. The strength of the AHP lies in its ability to structure a complex, multiperson, multiattribute problem hierarchically and then to investigate each level of the hierarchy separately, combining the results as the analysis progresses. Pairwise comparisons of the factors (which, depending on the context, may be alternatives, attributes, or criteria) are undertaken using a scale that indicates the strength with which one factor dominates another with respect to a higher level factor. This scaling process can then be translated into priority weights or scores for ranking the alternatives.

The AHP starts with a hierarchy of objectives. The top of the hierarchy provides the analytic focus in terms of a problem statement. At the next level, the major considerations are defined in broad terms. This is usually followed by a listing of the criteria for each of the foregoing considerations. Depending on how much detail is called for in the model, each criterion may then be broken down into individual parameters whose values are either estimated or determined by measurement or experimentation. The bottom level of the hierarchy contains the alternatives or scenarios underlying the problem.

Figure 6.8 shows a three-level hierarchy developed for evaluating five different approaches to assembling the U.S. space station while in orbit. The focus of the problem is “selecting an in-orbit assembly system,” and the four major criteria are human productivity, economics, design, and operations. The five alternatives are an astronaut with tools outside the spacecraft, a dexterous manipulator under human control, a dedicated manipulator under computer control, a teleoperator maneuvering system with a manipulator kit, and a computer-controlled dexterous manipulator with vision and force feedback.

Figure 6.8 Summary three-level hierarchy for selection problem.

In the actual analysis, each of the criteria at level 2 was significantly expanded to capture the detail necessary to make accurate comparisons (Bard 1986). For example, the criterion, human productivity, was expanded to include factors such as workload, support requirements, crew acceptability, and issues surrounding human-machine interfaces. Figure 6.9 depicts the full portion of the hierarchy used for this criterion.

Figure 6.9 Human productivity objective hierarchy.


6.4.1 Determining Local Priorities Once the hierarchy has been structured, local priorities must be established for each factor on a given level with respect to each factor on the level immediately above it. This step is carried out by using pairwise comparisons between the factors to develop the relative weights or priorities. The weight of the ith factor is denoted by w_i. Because the approach is basically qualitative, it is arguably less burdensome to implement, from both a data requirement and a validation point of view, than the multiattribute utility approach of Keeney and Raiffa. For example, MAUT's independence conditions do not need to be verified, and utility preference functions do not need to be derived. Nevertheless, the AHP requires that the following assumptions, stated in terms of axioms, hold if the methodology is to be valid (Golden et al. 1989):

Axiom 1. Given any two alternatives (or sub-criteria) i and j from the set of alternatives 𝒜, the decision maker is able to provide a pairwise comparison a_ij of these alternatives under criterion c from the set of criteria X on a reciprocal ratio scale; that is,

a_ji = 1/a_ij for all i, j ∈ 𝒜

Axiom 2. When comparing any two alternatives i, j ∈ 𝒜, the decision maker never judges one to be infinitely better than another under any criterion c ∈ X; that is, a_ij ≠ ∞ for all i, j ∈ 𝒜.

Axiom 3. The decision problem can be formulated as a hierarchy.

Axiom 4. All criteria and alternatives that have an impact on the given decision problem are represented in the hierarchy. That is, all of the decision maker’s intuition must be represented (or excluded) in the structure in terms of criteria or alternatives.

These axioms can be used to describe the two basic tasks in the AHP: formulating and solving the problem as a hierarchy (Axioms 3 and 4) and eliciting judgments in the form of pairwise comparisons (Axioms 1 and 2). Such judgments represent an articulation of the tradeoffs among the conflicting criteria and are often highly subjective in nature. Saaty suggested that a 1 to 9 ratio scale be used to quantify the decision maker's strength of feeling between any two alternatives with respect to a given criterion. The pairwise comparisons give rise to the elements a_ij, which are viewed as the ratio of the weights for factors i and j. In the ideal case, we have a_ij = w_i/w_j. When n alternatives are being compared, it is easy to see that

a_i1 w_1 + a_i2 w_2 + … + a_in w_n = n w_i,  i = 1, …, n (6.3)

In matrix form, Eq. (6.3) is written as Aw = nw. These equations provide the basis for deriving the weights w = (w_1, w_2, …, w_n).
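For a perfectly consistent decision maker, a_ij = w_i/w_j exactly, and Eq. (6.3) can be verified directly; the weights in this sketch are illustrative:

```python
# For a perfectly consistent matrix a_ij = w_i / w_j, Eq. (6.3) holds exactly:
# each component of Aw equals n * w_i. The weights here are illustrative.
w = [0.5, 0.3, 0.2]
n = len(w)
A = [[w[i] / w[j] for j in range(n)] for i in range(n)]

Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
print(all(abs(Aw[i] - n * w[i]) < 1e-9 for i in range(n)))  # True
```

With real, imperfect judgments this identity fails, which is exactly why Eq. (6.4) replaces n with λ_max.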

An explanation of the 9-point scale is presented in Table 6.1. Depending on the context, the word factors means alternatives, attributes, or criteria. We also note that because a ratio scale is being used, the derived weights can be interpreted as the degree to which one alternative is preferred to another.

TABLE 6.1 Scale Used for Pairwise Comparisons

Value       Definition                               Explanation
1           Equal importance                         Both factors contribute equally to the objective or criterion.
3           Weak importance of one over another      Experience and judgment slightly favor one factor over another.
5           Essential or strong importance           Experience and judgment strongly favor one factor over another.
7           Very strong or demonstrated importance   A factor is favored very strongly over another; its dominance is demonstrated in practice.
9           Absolute importance over another         The evidence favoring one factor is unquestionable.
2, 4, 6, 8  Intermediate values                      Used when a compromise is needed.
0           No relationship                          The factor does not contribute to the objective.

Example 6-2

To illustrate the nature of the calculations, observe the three-level hierarchy in Figure 6.8. Table 6.2 contains the input and output data for level 2.

When n factors are being compared, n(n−1)/2 questions are necessary to fill in the matrix A ≡ (a_ij). The elements in the lower triangle are simply the reciprocals of those lying above the diagonal (i.e., a_ji = 1/a_ij, in accordance with Axiom 1) and need not be assessed. In this instance, the entries in the matrix at the center of Table 6.2 are the responses to the six (n = 4) pairwise questions that were asked. For example, in comparing “human productivity” with “economic” considerations (element a_12 of the matrix), it was judged that the first “weakly” dominates the second. Note that if the elicited value for this element were 1/3 instead of 3, the opposite would have been true. Similarly, the value 7 for element a_34 means that design considerations “very strongly” dominate those associated with operations.■

In general, when comparing two factors, the analyst first discerns which factor is more important and then ascertains by how much by asking the decision maker to select a value from the 9-point scale. After the decision maker supplies all of the data for the matrix, the following equation is solved to obtain the rankings denoted by w:

Aw = λ_max w (6.4)

where w is the n-dimensional eigenvector associated with the largest eigenvalue λ_max of the comparison matrix A. The n components of w are then scaled so that they sum to 1. The only difference between Eq. (6.3) and Eq. (6.4) is that n has been replaced by λ_max on the right-hand side to allow for some inconsistency on the part of the decision maker.

In practice, the priority vector w = (w_1, w_2, …, w_n) is obtained by raising the matrix A to an arbitrarily large power (16 or greater is usually sufficient). Each element in a given row i converges to the same value, call it v_i. The weights are then computed as follows:

w_i = v_i / Σ_{k=1}^n v_k,  i = 1, …, n

The value of λ_max can be found by solving each row of Eq. (6.4) for λ and averaging; that is, let λ_i be the solution to A_i w = λ_i w_i, where A_i is the ith row of A. Then λ_max = (1/n) Σ_{i=1}^n λ_i. It should be noted that this procedure works only for the class of positive reciprocal matrices, to which A belongs.
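This power-method recipe can be sketched in a few lines of Python for the Example 6-2 matrix of Table 6.2; repeated multiplication and normalization converge to the priority vector, and the row ratios give λ_max:

```python
# Power-method sketch for the priority vector of the Table 6.2 matrix:
# repeatedly multiply by A and renormalize until the vector stabilizes.
A = [[1,   3,   3,   7],
     [1/3, 1,   1,   5],
     [1/3, 1,   1,   7],
     [1/7, 1/5, 1/7, 1]]
n = len(A)

w = [1.0 / n] * n
for _ in range(50):
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    s = sum(Aw)
    w = [x / s for x in Aw]          # rescale so the components sum to 1

# Once converged, (Aw)_i / w_i is the same for every row and equals lambda_max.
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam = sum(Aw[i] / w[i] for i in range(n)) / n

print([round(x, 3) for x in w], round(lam, 3))
# w is close to Table 6.2's (0.521, 0.205, 0.227, 0.047), and lam is about 4.121
```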

A second but less accurate way of deriving the weights is based on the geometric mean of the row elements of A. First, we compute

v_i = (∏_{j=1}^n a_ij)^{1/n} = (a_i1 a_i2 … a_in)^{1/n},  i = 1, …, n

and then we normalize to get w_i = v_i / (v_1 + v_2 + … + v_n) for each row i. For the example in Table 6.2,

TABLE 6.2 Priority Vector for Major Criteria

Criteria                1      2      3      4      Priority   Output parameters
1. Human productivity   1      3      3      7      0.521      λ_max = 4.121
2. Economics            0.333  1      1      5      0.205      CI = 0.040
3. Design               0.333  1      1      7      0.227      CR = 0.045
4. Operations           0.143  0.2    0.143  1      0.047

A = ( 1     3     3     7
      1/3   1     1     5
      1/3   1     1     7
      1/7   1/5   1/7   1 )

Row 1: v_1 = [(1)(3)(3)(7)]^{1/4} = 63^{1/4} = 2.82

Row 2: v_2 = [(1/3)(1)(1)(5)]^{1/4} = (5/3)^{1/4} = 1.14

Row 3: v_3 = [(1/3)(1)(1)(7)]^{1/4} = (7/3)^{1/4} = 1.24

Row 4: v_4 = [(1/7)(1/5)(1/7)(1)]^{1/4} = (1/245)^{1/4} = 0.25

Normalizing gives the weights

w_1 = 2.82/(2.82 + 1.14 + 1.24 + 0.25) = 2.82/5.45 = 0.52
w_2 = 1.14/5.45 = 0.21
w_3 = 1.24/5.45 = 0.23
w_4 = 0.25/5.45 = 0.04

To find λ_max, we solve the following equation for λ_i for each row i = 1, …, n:

A_i w = λ_i w_i (where A_i is the ith row of the A matrix)

or a_i1 w_1 + a_i2 w_2 + … + a_in w_n = λ_i w_i.

For the example we have n=4:

Row 1: λ_1 = 2.120/0.52 = 4.077

Row 2: λ_2 = 0.813/0.21 = 3.871

Row 3: λ_3 = 0.893/0.23 = 3.883

Row 4: λ_4 = 0.189/0.04 = 4.725

Ideally, these values all should be the same but because this is an approximate method, some variation is inevitable. Setting λ max to the average of these values is a good compromise:

λ_max ≅ (1/n)(λ_1 + λ_2 + … + λ_n) = (1/4)(4.077 + 3.871 + 3.883 + 4.725) = 4.139

The true value is λ_max = 4.121.
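The geometric-mean approximation can likewise be scripted. Note that carrying full precision, rather than the two-decimal weights used in the hand calculation above, gives an estimate much closer to the true λ_max:

```python
# Geometric-mean approximation for the Table 6.2 matrix.
A = [[1,   3,   3,   7],
     [1/3, 1,   1,   5],
     [1/3, 1,   1,   7],
     [1/7, 1/5, 1/7, 1]]
n = len(A)

# v_i = nth root of the product of row i's entries
v = []
for row in A:
    prod = 1.0
    for a in row:
        prod *= a
    v.append(prod ** (1 / n))

w = [vi / sum(v) for vi in v]  # normalize so the weights sum to 1

# lambda_i = (A_i . w) / w_i for each row, then average
lams = [sum(A[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)]
lam_max = sum(lams) / n

print([round(x, 2) for x in w], round(lam_max, 2))
# weights near (0.52, 0.21, 0.23, 0.05); lam_max about 4.12, so the 4.139
# obtained by hand appears to reflect the two-decimal rounding of the weights
```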

6.4.2 Checking for Consistency Consistency of response or transitivity of preference is checked by ascertaining whether

a_ij = a_ik a_kj, for all i, j, k (6.5)

In practice, the decision maker is only estimating the “true” elements of A by assigning them values from Table 6.1, so the perfectly consistent case represented by Eq. (6.5) is not likely to occur.

Therefore, as an approximation, the elements of A can be thought to satisfy the relationship a_ij = w_i/w_j + ε_ij, where ε_ij is an error term representing the decision maker's inconsistency in judgment when comparing factor i with factor j. As such, we would no longer expect a_ij to equal a_ik a_kj throughout. Carrying the analysis one step further, it can be shown that the largest eigenvalue, λ_max, of the matrix A satisfies λ_max ≥ n, where equality holds for perfect consistency only. This leads to the definition of a consistency index

CI = (λ_max − n)/(n − 1)

which can be used to evaluate the quality of the matrix A. To add perspective, we compare the CI to the index derived from a completely arbitrary matrix whose entries are randomly chosen. Through simulation, Saaty has obtained the following results:

n    1     2     3     4     5     6     7     8     9     10
RI   0.00  0.00  0.58  0.90  1.12  1.24  1.32  1.41  1.45  1.49

where n represents the dimension of the particular matrix and RI denotes the random index computed from the average of the CI for a large sample of random matrices. It is now possible to define the consistency ratio (CR) as

CR = CI/RI

Experience suggests that the CR should be less than 0.1 if one is to be fully confident of the results. (There is a certain amount of subjectivity in this assertion much like that associated with interpreting the coefficient of determination in regression analysis.) Fortunately, though, as the number of factors in the model increases, the results become less and less sensitive to the values in any one matrix.
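Putting the pieces together for Example 6-2 (n = 4, λ_max = 4.121):

```python
# Consistency check for the Table 6.2 matrix.
RI = {1: 0.00, 2: 0.00, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

n, lam_max = 4, 4.121
CI = (lam_max - n) / (n - 1)   # (4.121 - 4)/3 = 0.040
CR = CI / RI[n]                # 0.040/0.90 = 0.045

print(round(CI, 3), round(CR, 3))  # 0.04 0.045 -> comfortably below 0.1
```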

Returning to Table 6.2, the priorities derived for the major considerations were 0.521 for human productivity, 0.205 for economics, 0.227 for design, and 0.047 for operations. These values tend to emphasize the first criterion over the others, probably because of the implicit mandate that the U.S. space station must eventually pay for itself. Finally, note that CR=0.045, which is well within the acceptable range.

6.4.3 Determining Global Priorities The next step in the analysis is to develop the priorities for the factors on the third level with respect to those on the second. In our case, we compare the five alternatives previously mentioned with each of the major criteria. For the moment, assume that the appropriate data have been elicited and that the calculations for each of the four comparison matrices have been performed, with the results displayed in Table 6.3 (note that each column sums to 1). The first four columns of data represent the local priorities derived from the inputs supplied by the decision maker. The global priorities are obtained by weighting each of these values by the local priorities given in Table 6.2 (repeated at the top of Table 6.3 for convenience) and summing. The calculations for alternative 1 are as follows: (0.066)(0.521) + (0.415)(0.205) + (0.122)(0.227) + (0.389)(0.047) = 0.165. To see how the calculations are performed in general, let

n_l = number of factors at level l

w_i^l = global weight at level l for factor i

w_ij^l = local weight at level l for factor i with respect to factor j at level l − 1

TABLE 6.3 Local and Global Priorities for the Problem of Selecting an In-Orbit Assembly System

                      Local priorities
Alternative*   Human productivity   Economics   Design    Operations   Global priorities
               (0.521)              (0.205)     (0.227)   (0.047)
1              0.066                0.415       0.122     0.389        0.165
2              0.212                0.309       0.224     0.151        0.232
3              0.309                0.059       0.206     0.178        0.228
4              0.170                0.111       0.197     0.105        0.161
5              0.243                0.106       0.251     0.177        0.214

*1. Astronaut with tools outside the spacecraft;

2. Dexterous manipulator under human control;

3. Dedicated manipulator under computer control;

4. Teleoperator with manipulator kit;

5. Dexterous manipulator with sensory feedback.

The global priorities at level l are obtained from the following equation:

w_i^l = Σ_{j=1}^{n_{l−1}} w_ij^l w_j^{l−1}

Continuing with the example, because there are no more levels left to evaluate, the values shown in the last column of Table 6.3 represent the final priorities for the problem. Thus, according to the judgments expressed by this decision maker, alternative 2 turns out to be most preferred.
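The full roll-up can be reproduced in a few lines; the local priorities and criterion weights below are taken from Tables 6.3 and 6.2:

```python
# Global priorities from Table 6.3: weight each alternative's local priority
# by the criterion weight from Table 6.2 and sum across criteria.
criteria_w = [0.521, 0.205, 0.227, 0.047]
local = [  # rows: alternatives 1-5; cols: productivity, economics, design, ops
    [0.066, 0.415, 0.122, 0.389],
    [0.212, 0.309, 0.224, 0.151],
    [0.309, 0.059, 0.206, 0.178],
    [0.170, 0.111, 0.197, 0.105],
    [0.243, 0.106, 0.251, 0.177],
]

glob = [round(sum(l * w for l, w in zip(row, criteria_w)), 3) for row in local]
print(glob)                        # [0.165, 0.232, 0.228, 0.161, 0.214]
print(glob.index(max(glob)) + 1)   # 2: alternative 2 is most preferred
```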

To complete the analysis, it would be desirable to see how sensitive the results are to changes in judgment and criteria values; that is, to determine how changes in the A matrix would affect intra-level, overall priorities, and consistencies. This feature is built into Expert Choice (Forman et al. 2004), the most popular commercial code for conducting an AHP analysis, and so can be done with little effort. HIPRE 3+ (Hamalainen and Mustajoki 2001) also provided this capability. When uncertainty exists in factor values, additional attributes can be defined to account for this randomness (Bard 1992).

In summary, the commonly claimed benefits of the AHP are that:

1. It is simple to understand and use.

2. The construction of the objective hierarchy of criteria, attributes, and alternatives facilitates communication of the problem and solution recommendations.

3. It provides a unique means of quantifying judgment and measuring consistency.

6.5 Group Decision Making When more than one person is responsible for making decisions, the issues surrounding group dynamics and consensus building become paramount. Rational procedures must be developed for structuring the problem, soliciting opinions, and making use of the information collected. In general, there are two modes of operation: live sessions and some form of correspondence. In the former, the group takes time to structure its problem, usually weighing all factors and considering all inputs. Still there is a need to trim the structure and eliminate redundancies so that the major effort can be brought to bear on the essential parts of the problem. With regard to judgments, behaviorists point out that there are four kinds of situations:

1. People are completely antagonistic to the process and do not wish to participate in a constructive way. In particular, they may believe that the outcome would dilute their own influence.

2. The participants wish to cooperate to arrive at a rational decision and in so doing wish to determine every judgment by agreement and consensus.

3. The group members are willing to have their individual judgments synthesized after some debate.

4. The group consists of experts each of whom knows his or her mind exactly and does not wish to interact. They are willing to accept an outcome but are not willing to compromise on their judgments.

After the session in which the substance is hammered out, the group members may be willing to revise their structure and judgments by conducting additional sessions or by correspondence using questionnaires.

The second alternative is to do the entire process by correspondence without organized meetings. The question here is how to solicit opinions and interact most effectively. The Delphi method is one particular approach for doing this that has gained strong adherents.

Several researchers have pointed to the following trends in decision making:

1. Organizational decisions are much more technically and politically complex and require frequent meetings attended by a wide range of individuals.

2. Decisions must be reached quickly, usually with greater participation of low-level or staff personnel than in the past.

3. There is an increasing focus on the development of computer-based systems that support the formulation and solution of unstructured decision problems by a group [i.e., a group decision support system (GDSS)].

In what follows, we highlight some of the important considerations in the group decision-making process.

6.5.1 Group Composition The inherent complexity and uncertainty surrounding an organization’s major activities usually necessitate the participation of many people in the decision-making process. In some cases, the composition of the group is fixed (e.g., the board of directors advising the chief executive officer of a corporation), whereas in others, it is necessary to select a mix of members (e.g., choosing a panel to investigate the Columbia disaster). The latter selection process requires specifying the number of experts, nonexperts, staff personnel, and upper-level managers to participate, as well as choosing the appropriate people.

This process can be difficult and time consuming for many reasons. First, participants who are considered “experts” are likely to be troublesome. They may have strong ideas on the appropriate course of action and may not be easily swayed in their assessments. Second, decision makers who are considered “powerful” members of the organization might refuse to participate. These members are aware that their level of control and influence might be diminished in a group setting. They fear that the social and interactive nature of the group process might dilute their power and ability to direct policy within the organization (Saaty 1989). However, if powerful people actively participate, then they are likely to dominate the process. In contrast, results generated by a group that consists solely of “low-level” managers with little power may not be useful. The danger in all of this is that powerful managers will implement their preferred solutions without taking into account the opinions and observations of others.

One way of dealing with the “power differential” problem is to assemble a group of participants who have equal responsibility and stature within the organization. Collectively, these people can be treated as a decision-making “subgroup” that could help formulate and solve a part of the problem with which they are most knowledgeable. They could also contribute to discussions that involve higher or lower levels of management. This can be viewed as a sort of “shared” decision-making responsibility in which high- level management cooperates with subordinates. In practice, high-level management often depends on low-level employees to gather the appropriate information on which to base their decisions.

6.5.2 Running the Decision-Making Session

After the group has been chosen, the members should begin preparing for the decision-making session by formalizing their agenda, structuring the allowable interactions between participants, and clearly defining the purpose of the session in advance. They can seek answers to several questions (e.g., the ones listed below) that are designed to establish the operating ground rules:

Is the purpose of the session simply to improve the group’s understanding of the problem, or is the purpose to reach a final solution?

Are the participants committed to generating and implementing a final solution?

What is the best way to combine the judgments of the participants on various issues to produce a united course of action?

Often we model decision problems as if the people with whom we are dealing know their own minds and can give answers inspired by a clear or telling experience. This is seldom the case. People have a habitual domain: they are conditioned and biased but also learning and adaptive. Rather than being cajoled or coerced prematurely, they must be given the opportunity to learn and solidify their ideas. After much experimentation and trial and error, something useful may emerge. If you hurry, then all you get is a hurried answer, no matter how scientific you try to be. People must be given an adequate chance to understand their own minds before they can be expected to commit themselves. People with different assumptions and backgrounds, though, may never be on the same wavelength and will change their minds later if they are forced to agree. Moreover, interpersonal comparisons should be undertaken only with the utmost care. Peer pressure, concealed and distorted preferences, and the inequalities of power all conspire to prejudice the group decision-making process.

6.5.3 Implementing the Results

After the final results have been generated, the group should evaluate the effort and cost of implementing the highest-priority outcome. It must be determined whether the participants and their constituencies are likely to cooperate in the implementation phase of the effort. To be useful, the decision-making process must be acceptable to the participants, and the participants must be willing to abide by the outcome. Finally, it is important for the group to view whichever GDSS was used not as a tool for isolated, one-time applications but rather as a process that has ongoing validity and usefulness to an organization.

6.5.4 Group Decision Support Systems

A GDSS aims to improve the process of group decision making by removing common communications barriers, providing techniques for structuring decision analysis, and systematically directing the pattern, timing, and content of the discussion. The more sophisticated the GDSS technology, the more dramatic the intervention into the group’s natural (unsupported) environment. Of course, more dramatic intervention does not necessarily lead to better decisions, but appropriate design and use can produce the desired results.

Communications technologies available within a GDSS include electronic messaging, local- and wide-area networks, teleconferencing, and store-and-forward facilities. Computer technologies include multiuser operating systems, fourth-generation languages, databases, data analysis methodologies, and so on. Decision support technologies include agenda setting, decision modeling methods (e.g., decision trees, risk assessment, forecasting techniques, the AHP, MAUT), and rules for directing discussion.

Concerning the information-exchange aspect of group decision making, DeSanctis and Gallupe (1987) proposed three levels of support. Level 1 GDSSs provide technical features aimed at removing communications barriers, such as large screens for instantaneous display of ideas, voting solicitation and compilation, anonymous input of ideas and preferences, and electronic message exchange between members. Level 1 features are found in meeting rooms normally referred to as “computer-supported conference rooms” or “electronic boardrooms.”

Level 2 GDSSs provide decision modeling and group decision techniques that are designed to reduce the uncertainty and “noise” that occur in the group decision process. The result is an enhanced GDSS, as opposed to a level 1 system, which is a communications medium only. A level 2 GDSS might provide automated planning tools or other aids found in individual DSSs for group members to work on and view simultaneously, again using a large, common screen. Modeling tools to support analyses that ordinarily are performed in a qualitative manner, such as social judgment formation, risk assessment, and multiattribute utility methods, can be introduced to the group via a level 2 GDSS. In addition, group structuring techniques found in the organizational development literature can be administered efficiently.

Level 3 GDSSs are characterized by machine-induced group communication patterns and can include expert advice in selecting and arranging the rules to be applied during a meeting. As an example, Hiltz and Turoff (1985) experimented with automating the Delphi method and the nominal group technique, but to date, very little research has been done with such high-level systems.

In summary, the objective of GDSSs is to discover and present new possibilities and approaches to problems. They do this by facilitating the exchange of information among the group. Message transfer can be hastened and smoothed by removing barriers (level 1); systematic techniques can be used in the decision process (level 2); and rules for controlling pattern, timing, and content of information exchange can be imposed on the group (level 3). The higher the level of the GDSS, the more sophisticated the technology and the more dramatic the intervention compared with the natural decision process. Table 6.4 highlights the major tasks of a decision-related meeting, the main activities, the corresponding level of GDSS, and the possible support features.
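The voting-solicitation features of a level 1 GDSS can be as simple as collecting each member’s ranking anonymously and combining the ballots with a positional rule. The following is a minimal Python sketch of one such scheme, the Borda count; the ballot data are invented for illustration:

```python
from collections import defaultdict

def borda(rankings):
    """Aggregate anonymous rankings with the Borda count.

    Each ballot lists alternatives best-first; an alternative ranked
    k-th on a ballot of n alternatives earns n - 1 - k points.
    Returns (alternative, score) pairs sorted by score, highest first.
    """
    scores = defaultdict(int)
    for ballot in rankings:
        n = len(ballot)
        for k, alt in enumerate(ballot):
            scores[alt] += n - 1 - k
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Three anonymous ballots over three options (invented data)
ballots = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
result = borda(ballots)  # "A" leads with 5 points
```

Because the ballots are pooled before display, no member’s individual preferences need be revealed, which is precisely the communications-barrier removal that level 1 support targets.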

TABLE 6.4 Example GDSS Features to Support Six Task Types

Task purpose: General

Task type: Planning
  Level 1: Large-screen display, graphical aids
  Level 2: Planning tools (e.g., PERT); risk assessment, subjective probability estimation for alternative plans

Task type: Creativity
  Level 1: Anonymous input of ideas; pooling and display of ideas; search facilities to identify common ideas, eliminate duplicates
  Level 2: Brainstorming; nominal group technique

Task purpose: Choose

Task type: Objective
  Level 1: Data access and display; synthesis and display of rationales for choices
  Level 2: Aids to finding the correct answer (e.g., forecasting models, multiattribute utility models)
  Level 3: Rule-based discussion emphasizing thorough explanation of logic

Task type: Preference
  Level 1: Preference weighting and ranking with various schemes for determining the most favored alternative; voting schemes
  Level 2: Social judgment models; automated Delphi method
  Level 3: Rule-based discussion emphasizing equal time to present opinions

Task purpose: Negotiate

Task type: Cognitive conflict
  Level 1: Summary and display of members’ opinions
  Level 2: Using social judgment analysis, each member’s judgments are analyzed by the system and then used as feedback to the individual member or the group
  Level 3: Automatic mediation; automated Robert’s Rules of Order

Task type: Mixed motive
  Level 1: Voting solicitation and summary
  Level 2: Stakeholder analysis
  Level 3: Rule base for controlling opinion expression; automatic mediation; automated parliamentary procedure

TEAM PROJECT

Thermal Transfer Plant

Total Manufacturing Solutions (TMS) management is considering the following aspects in selecting a hydraulic power unit for the rotary combustor:

Size

Weight

Power consumption

Required maintenance

Noise

Cost

Reliability

The power unit provides power to operate three components of the system: feed rams, resistance door, and combustor. Three design alternatives are available:

1. Electric motor on a gearbox

2. Low-speed, high-torque hydraulic motor with direct drive

3. High-speed, low torque hydraulic motor on a gearbox

Initial data include the following:

                     Electromechanical   Low speed, high torque   High speed, low torque
Delivery             90–120 days         1–6 weeks                90–120 days
Overall efficiency   96%                 94%                      88%
Useful life          20 years            25 years                 25 years
Noise level          85 dB               78 dB                    100 dB

Using the criteria above as guidance, develop an MAUT and an AHP model for evaluating the three alternatives. It will be necessary to collect data or make assumptions about the values of all of the attributes. For one of the models, perform the analysis with the help of a computer program, and give your recommendation. Be sure to justify and document your results, basing part of your recommendation on a sensitivity analysis.

Discussion Questions

1. How might you measure the benefits associated with space exploration or a superconducting supercollider for investigating subatomic particles? Can you put a dollar value on these benefits? What are the real costs and opportunity costs of these types of projects?

2. Identify an advanced technology project that you believe should be undertaken, such as bio-electronic computing or coal gasification. Who should be responsible for funding the project? The government? Industry? A consortium? What are the major attributes or criteria associated with the project?

3. What type of technical background, if any, do you think is needed to understand MAUT? The AHP?

4. You have just completed an MAUT evaluation of a number of data communications systems under consideration by your company. How would you present the results to upper management? Assuming that they know nothing about the technique, how much background would you give them? How would your answer differ if the AHP were used instead?

5. What do you think are the strengths and weaknesses of the AHP and MAUT?

6. How would you go about constructing an objective hierarchy? Who should be consulted? Identify a project from your personal experience or observations, and construct such a hierarchy.

7. When performing an evaluation using any multiple-criteria method, from whose perspective should the analysis be undertaken? Would the answer differ if it were a public rather than private project?

8. What experiences have you had with group decision making? What difficulties do you see arising when trying to perform a multiple-criteria analysis with many interested parties involved? How might these difficulties be overcome, or at least mitigated?

9. Are benefit-cost analysis and multiple-criteria analysis mutually exclusive techniques? In which circumstances is either most appropriate?

10. You just inherited a large sum of money and would like to develop a strategy to invest it. Use the AHP to fashion such a strategy. Construct an objective hierarchy listing all criteria and subcriteria, and principal alternatives. What data are needed to perform the evaluation? How would you go about obtaining the data?

11. From a practical point of view, how would you verify the independence assumptions associated with MAUT?

12. Are the axioms underlying the AHP reasonable and unambiguous? In which circumstances do you think one or more of them could be relaxed?

13. Both the AHP and MAUT are value models that facilitate making tradeoffs between incommensurable criteria. Come up with your own value model or procedure for doing this.

14. In conducting a group study using a multiple-criteria method, you reach a point at which two of the participants cannot agree on a particular response. What course of action would you take to placate the parties and avoid further delay?

15. For which type of projects or problems might MAUT be more amenable than the AHP? Similarly, when is the AHP more appropriate than MAUT?

Exercises

1. 6.1 Assume that you work for a company that designs and fabricates VLSI chips. You have been given the job of selecting a new computer-aided design software package for the engineering group.

1. Develop an MAUT model to assist in the selection process.

2. Develop an AHP model to assist in the selection process.

In both cases, begin by enumerating the major criteria and the associated subcriteria. Explain your assumptions. Who are the possible decision makers? How do you think the outcome of the analysis would change with each of these decision makers?

2. 6.2 Develop a flow chart detailing input, output, and processes for a software package that supports:

1. MAUT applications

2. AHP applications

3. 6.3 Using MAUT and the AHP, perform an analysis to select a graduate program. Explain your assumptions and indicate which technique you believe is most appropriate for this application.

4. 6.4 You are the vice president of planning for Zingtronics, a small-scale manufacturer of IBM-compatible personal computers and peripherals based in Silicon Valley. Business is growing, and the company would like to open a second facility. Three options are being considered: (1) a second plant in Silicon Valley, (2) a new plant in Mexico as a Maquiladora, and (3) a new plant in Singapore. Most of the workforce will be low-skilled assembly and machine operators but training in the use of computers and information systems will be required. It is also desirable to set up a small design group of engineers for new product and process development.

Of course, each option has its pros and cons. For example, Silicon Valley has a high-skill labor pool but is a very expensive place to do business. Singapore offers the same level of worker skills at lower cost but is distant from the market and headquarters. Mexico is the least expensive place to set up a business, as a result of favorable tax laws and cheap labor, but has a less educated workforce.

Develop two objective hierarchies, one for costs and one for benefits, that can be used to investigate the location problem. Use the AHP to rank the three alternatives on both hierarchies, and then compute the benefit/cost ratios of each. According to your analysis, which alternative is best?
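The last step of this exercise, combining the two AHP rankings into benefit/cost ratios, can be sketched as follows. The priority vectors are invented placeholders standing in for the outputs of the two hierarchies, not an answer to the exercise:

```python
# Hypothetical AHP priority vectors over the three sites; each vector
# sums to 1 across the alternatives (placeholder values, not results).
benefits = {"Silicon Valley": 0.45, "Mexico": 0.25, "Singapore": 0.30}
costs = {"Silicon Valley": 0.50, "Mexico": 0.20, "Singapore": 0.30}

# Benefit/cost ratio for each site; the largest ratio is preferred
ratios = {site: benefits[site] / costs[site] for site in benefits}
best = max(ratios, key=ratios.get)
```

With these placeholder priorities, the Mexico option would win on ratio even though it ranks last on benefits alone; the interplay between the two hierarchies is exactly what the exercise is meant to expose.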

5. 6.5 Referring to Exercise 6.4, combine the two hierarchies into one so that there are no more than eight subobjectives at the bottom level. Define either a quantitative or a qualitative scale for each of these subobjectives, and construct a utility function for each. Use MAUT to evaluate and rank the three alternatives.

6. 6.6 Use the criteria below to construct a two-level objective hierarchy (major criteria with one set of subcriteria under each) to help evaluate political candidates. Consider as alternatives the major candidates running in the last U.S. presidential election, and use the AHP to make your choice.

Criteria for choosing a national political candidate:

Charisma: Personal leadership qualities, inspiring enthusiasm and support

Glamor: Charm, allure, personal attractiveness; associations with other attractive people

Experience: Past office holding relevant to the position sought; preparation for the position

Economic policy: Coherence and clarity of a national economic policy

Ability to manage international relations: Coherence and clarity of foreign policy plus ability to deal with foreign leaders

Personal integrity: Quality of moral standards, trustworthiness

Past performance: Quality of role fulfillment—independent of what the role was—in previous public offices; public record

Honesty: Lawfulness in public life, law-abidingness
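One way to start the AHP analysis is to reduce a pairwise-comparison matrix over a few of these criteria to a priority vector and check its consistency. The sketch below uses the row geometric-mean approximation of the principal eigenvector; the judgment values are illustrative assumptions only, not elicited data:

```python
import math

# Saaty's random consistency index for matrices of order 1..9
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(A):
    """Row geometric-mean approximation of the AHP priority vector,
    plus the consistency ratio CR = CI / RI."""
    n = len(A)
    gm = [math.prod(row) ** (1.0 / n) for row in A]
    w = [g / sum(gm) for g in gm]
    # lambda_max estimated by averaging (A w)_i / w_i over the rows
    lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return w, (ci / RI[n] if RI[n] > 0 else 0.0)

# Illustrative reciprocal judgments (1-9 scale) over four criteria:
# experience, integrity, economic policy, past performance
A = [[1,     3,     2,     4],
     [1 / 3, 1,     1 / 2, 2],
     [1 / 2, 2,     1,     3],
     [1 / 4, 1 / 2, 1 / 3, 1]]
w, cr = ahp_priorities(A)  # "experience" carries the largest weight
```

A consistency ratio below about 0.10 is conventionally taken as acceptable; a larger CR signals that the judgments should be revisited before the priorities are used.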

7. 6.7 Louise Ciccone, head of industrial engineering for a medium-sized metalworking shop, wants to move the CNC machines from their present location to a new area. Three distinct alternatives are under consideration. After inspecting each alternative and determining which factors reflect significant differences among the three, Louise has decided on five independent attributes to evaluate the candidates. In descending order of importance, they are:

1. Distance traveled from one machine to the next (more distance is worse)

2. Stability of foundation [strong (excellent) to weak (poor)]

3. Access to loading and unloading [close (excellent) to far (poor)]

4. Cost of moving the machines

5. Storage capacity

(Note: Once the machines have been moved, operational costs are independent of the area chosen and hence are the same for each area.) The data associated with these factors for the three alternatives are in Table 6.5.

TABLE 6.5

Attribute   Area I       Area II      Area III     Ideal        Standard     Worst
A           500 ft       300 ft       75 ft        0 ft         300 ft       1,000 ft
B           Good         Very good    Good         Excellent    Good         Poor
C           Excellent    Very good    Good         Excellent    Good         Poor
D           $7,500       $3,000       $8,500       $0           $5,000       $10,000
E           60,000 ft²   85,000 ft²   25,000 ft²   10,000 ft²   25,000 ft²   150,000 ft²

Using the multiattribute utility methodology, determine which alternative is best. For at least one attribute, state all of the probabilistic tradeoff (lottery-type) questions that must be asked, together with answers, to obtain at least four utility values between the “best” and “worst” outcomes so that the preference curve can be plotted. For the other attributes, you may make shortcut approximations by determining whether each is concave or convex, upward or downward, and then sketching an appropriate graph for each. Next, ask questions to determine the scaling constants ki, and compute the scores for the three alternatives. [Note: If you follow the recommended procedure for deriving the scaling constants, probably Σi ki ≠ 1, so you should use the multiplicative model of Eq. (6.1a). After comparing alternatives by that model, “normalize” the scaling constants so that Σi ki = 1, and then compare the alternatives using the additive model of Eq. (6.1b). (It is not theoretically correct to normalize the ki values to enable use of the additive model.) How much difference does use of the “correct” model make?]
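The comparison between the two model forms can be sketched numerically: the multiplicative model requires a master constant K solving 1 + K = Π(1 + K ki). The scaling constants and single-attribute utilities below are invented placeholders, not answers to the exercise, and the sketch assumes Σ ki > 1 so that K lies in (−1, 0):

```python
from math import prod

def solve_master_constant(ks, iters=200):
    """Bisection for the nontrivial root of 1 + K = prod(1 + K*k_i).
    Assumes sum(ks) > 1, which places the root in (-1, 0)."""
    f = lambda K: prod(1 + K * k for k in ks) - (1 + K)
    lo, hi = -1 + 1e-9, -1e-9   # f(lo) > 0 and f(hi) < 0 when sum(ks) > 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def multiplicative(ks, us, K):
    """Multiplicative MAUT score: U = (prod(1 + K*k_i*u_i) - 1) / K."""
    return (prod(1 + K * k * u for k, u in zip(ks, us)) - 1) / K

def additive(ks, us):
    """Additive score after normalizing the k_i to sum to 1."""
    s = sum(ks)
    return sum(k / s * u for k, u in zip(ks, us))

ks = [0.40, 0.30, 0.25, 0.20]   # placeholder scaling constants, sum = 1.15
us = [0.8, 0.6, 0.9, 0.5]       # placeholder single-attribute utilities
K = solve_master_constant(ks)
u_mult, u_add = multiplicative(ks, us, K), additive(ks, us)
```

Running both models on the same alternatives makes the “how much difference does it make?” question in the note directly observable.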

8. 6.8 Starting with the environmental scoring model in Table 5.3, construct an objectives hierarchy that can be used to evaluate capital development and expansion projects being considered by an electric utility company.

9. 6.9 The six major objectives listed below are used by the British Columbia Hydro and Power Authority to evaluate new projects. Use this list to construct an objectives hierarchy by providing subobjectives and their respective attributes where appropriate. Also, estimate the “worst” and “best” levels for all of the factors at the lowest level of the hierarchy.

1. Maximize the contribution to economic development

2. Act consistently with the public’s environmental values

3. Minimize detrimental health and safety impacts

4. Promote equitable business arrangements

5. Maximize quality of service

6. Be recognized as public service oriented

10. 6.10

1. Use the three weighting techniques in Section 5.3 to select one of the three used automobiles for which some data are given in Table 6.6. State your assumptions regarding miles driven each year, life of the automobile (how long you would keep it), market (resale) value at end of life, interest cost, price of fuel, cost of annual maintenance, attribute weights, and other subjectively based determinations.

TABLE 6.6

Attribute             Domestic      European      Japanese
Price                 $8,100        $12,600       $10,300
Gas mileage           25 mpg        30 mpg        35 mpg
Type of fuel          Gasoline      Diesel        Gasoline
Aesthetic appeal      5 out of 10   7 out of 10   9 out of 10
Passengers            4             6             4
Performance on road   Fair          Very good     Very good
Ease of servicing     Excellent     Very good     Good
Stereo system         Poor          Good          Excellent
Headroom              Excellent     Very good     Poor
Storage space         Very good     Excellent     Poor

2. Repeat the analysis using MAUT; that is, construct utility functions and scaling functions for each attribute, and determine the overall utility of each alternative. Does your answer agree with the one obtained in part (a)? Explain why they should (or should not) agree.

11. 6.11 An aspiration level for a criterion or attribute is a level at which the decision maker is satisfied. For example, we all would like our investment portfolio to provide an annual rate of return of 30% or higher, but most of us would happily settle for a return of 5% above the Dow Jones. Develop an interactive multicriteria methodology that is based on aspiration levels of the criteria. Construct a flow chart for the logic and computations. Use your methodology to select one of the alternatives in Exercise 6.10.
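One possible skeleton for such an aspiration-level procedure: screen the alternatives against the current aspiration levels and, whenever none survives, relax the least important aspiration first. The attribute subset comes from Table 6.6 (gas mileage and aesthetic appeal); the aspiration levels, priority order, and relaxation step are assumptions for illustration:

```python
def screen(alternatives, aspirations):
    """Alternatives meeting every aspiration level (higher assumed better)."""
    return [name for name, attrs in alternatives.items()
            if all(attrs[a] >= level for a, level in aspirations.items())]

def interactive_select(alternatives, aspirations, priority, step=0.9):
    """Relax aspirations, least important criterion first, until at
    least one alternative survives the screen."""
    aspirations = dict(aspirations)
    while not screen(alternatives, aspirations):
        for crit in reversed(priority):      # least important relaxed first
            aspirations[crit] *= step
            if screen(alternatives, aspirations):
                break
    return screen(alternatives, aspirations), aspirations

# Two attributes taken from Table 6.6; aspiration levels are invented
cars = {
    "Domestic": {"mpg": 25, "appeal": 5},
    "European": {"mpg": 30, "appeal": 7},
    "Japanese": {"mpg": 35, "appeal": 9},
}
aspirations = {"mpg": 32, "appeal": 8}
priority = ["mpg", "appeal"]                 # mpg is the more important
chosen, final_levels = interactive_select(cars, aspirations, priority)
```

In a full methodology the decision maker, not a fixed multiplier, would choose which aspiration to relax and by how much at each iteration; the loop above simply makes the flow-chart logic concrete.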

Bibliography

Multiattribute Utility Theory

Bard, J. F. and A. Feinberg, “A Two-Phase Methodology for Technology Selection and System Design,” IEEE Transactions on Engineering Management, Vol. EM-36, No. 1, pp. 28–36, 1989.

Bell, D. E., R. L. Keeney, and H. Raiffa (Editors), Conflicting Objectives in Decisions, John Wiley & Sons, New York, 1977.

Bar-Hillel, M., “On the Subjective Probability of Compound Events,” Organizational Behavior and Human Performance, Vol. 9, No. 3, pp. 396–406, 1973.

Bar-Hillel, M., “The Base-Rate Fallacy in Probability Judgments,” Acta Psychologica, Vol. 44, pp. 211–233, 1980.

Coombs, C. H., “Portfolio Theory and the Measurement of Risk,” in Human Judgment and Decision Processes, edited by M. F. Kaplan and S. Schwartz, Academic Press, New York, pp. 63–86, 1975.

Dyer, J. S. and R. F. Miles, Jr., “An Actual Application of Collective Choice Theory to the Selection of Trajectories for the Mariner Jupiter/Saturn 1977 Project,” Operations Research, Vol. 24, pp. 220–244, 1976.

Feinberg, A., R. F. Miles, Jr., and J. H. Smith, Advanced Vehicle Preference Analysis for Five-Passenger Vehicles with Unrefueled Ranges of 100, 150, and 250 Miles, JPL D-2225, Jet Propulsion Laboratory, Pasadena, CA, March 1985.

Fischhoff, B., “Hindsight Is Not Equal to Foresight: The Effect of Outcome Knowledge on Judgment Under Uncertainty,” Journal of Experimental Psychology: Human Perception and Performance, Vol. 1, No. 3, pp. 288–299, 1975.

Hershey, J. C. and P. J. H. Schoemaker, “Risk-Taking and Problem Context in the Domain of Losses – An Expected Utility Analysis,” Journal of Risk and Insurance, Vol. 47, No. 1, pp. 111–132, 1980.

Kahneman, D. and A. Tversky, “Subjective Probability: A Judgment of Representativeness,” Cognitive Psychology, Vol. 3, No. 3, pp. 430–454, 1972.

Kahneman, D. and A. Tversky, “Prospect Theory: An Analysis of Decision Under Risk,” Econometrica, Vol. 47, No. 2, pp. 263–291, 1979.

Katona, G., Private Pensions and Individual Savings, Monograph No. 40, Survey Research Center, Institute for Social Research, The University of Michigan, 1965.

Keefer, D. L., “Allocation Planning for R&D with Uncertainty and Multiple Objectives,” IEEE Transactions on Engineering Management, Vol. EM-25, No. 1, pp. 8–14, 1978.

Keeney, R. L., “The Art of Assessing Multiattribute Utility Functions,” Organizational Behavior and Human Performance, Vol. 19, pp. 267–310, 1977.

Keeney, R. L. and H. Raiffa, Decisions with Multiple Objectives: Preference and Value Tradeoffs, John Wiley & Sons, New York, 1976.

Keeney, R. L. and D. von Winterfeldt, “Eliciting Probabilities from Experts in Complex Technical Problems,” IEEE Transactions on Engineering Management, Vol. 38, No. 3, pp. 191–201, 1991.

Ronen, J., “Effects of Some Probability Displays on Choices,” Organizational Behavior, Vol. 9, No. 1, pp. 1–15, 1973.

Schoemaker, P. J. H., “The Expected Utility Model: Its Variants, Purposes, Evidence, and Limitations,” Journal of Economic Literature, Vol. 20, No. 2, pp. 529–563, 1982.

Schoemaker, P. J. H. and C. C. Waid, “An Experimental Comparison of Different Approaches to Determining Weights in Additive Utility Models,” Management Science, Vol. 28, No. 2, pp. 182–196, 1982.

Schoemaker, P. J. H. and H. C. Kunreuther, “An Experimental Study of Insurance Decisions,” Journal of Risk and Insurance, Vol. 46, No. 4, pp. 603–618, 1979.

Thaler, R., “Toward a Positive Theory of Consumer Choice,” Journal of Economic Behavior and Organization, Vol. 1, No. 1, pp. 39–60, 1980.

Tversky, A. and D. Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science, Vol. 211, pp. 453–458, 1981.

Vincke, P., Multicriteria Decision-Aid, John Wiley & Sons, New York, 2002.

Analytic Hierarchy Process

Bard, J. F., “Evaluating Space Station Applications of Automation and Robotics,” IEEE Transactions on Engineering Management, Vol. EM-33, No. 2, pp. 102–111, 1986.

Bard, J. F. and S. F. Sousk, “A Tradeoff Analysis for Rough Terrain Cargo Handlers Using the AHP: An Example of Group Decision Making,” IEEE Transactions on Engineering Management, Vol. 37, No. 3, pp. 222–227, 1990.

Forman, E. H., T. L. Saaty, M. A. Selly, and R. Waldron, Expert Choice, Decision Support Software, McLean, VA, 2004 (http://www.expertchoice.com).

Finan, J. S. and W. J. Hurley, “The Analytic Hierarchy Process: Can Wash Criteria be Ignored?” Computers & Operations Research, Vol. 29, No. 8, pp. 1025–1030, 2002.

Golden, B. L., E. A. Wasil, and P. T. Harker (Editors), The Analytic Hierarchy Process: Applications and Studies, Springer-Verlag, Berlin, 1989.

Hamalainen, R. P. and J. Mustajoki, HIPRE 3+ Decision Support Software, Systems Analysis Laboratory, Helsinki University of Technology, Helsinki, Finland, 2001 (http://www.hipre.hut.fi).

Liberatore, M. J., “An Extension of the Analytic Hierarchy Process for Industrial R&D Project Selection and Resource Allocation,” IEEE Transactions on Engineering Management, Vol. EM-34, No. 1, pp. 12–18, 1987.

Saaty, T. L., “Axiomatic Foundations of the Analytic Hierarchy Process,” Management Science, Vol. 32, No. 7, pp. 841–855, 1986.

Saaty, T. L. and L. G. Vargas, Models, Methods, Concepts & Applications of the Analytic Hierarchy Process, International Series in Operations Research and Management Science, Volume 34, Kluwer, Boston, 2000.

Shtub, A. and E. M. Dar-El, “A Methodology for the Selection of Assembly Systems,” International Journal of Production Research, Vol. 27, No. 1, pp. 175–186, 1989.

Wasil, E. A. and B. L. Golden (Editors), “Focused Issue: Analytic Hierarchy Process,” Computers & Operations Research, Vol. 30, No. 10, 2003.

Group Decision Making

Aczel, J. and C. Alsina, “Synthesizing Judgements: A Functional Equation Approach,” Mathematical Modelling, Vol. 9, pp. 311–320, 1987.

DeSanctis, G. and Gallupe, R. B., “A Foundation for the Study of Group Decision Support Systems,” Management Science, Vol. 33, No. 5, pp. 589–609, 1987.

Franz, L. S., G. R. Reeves, and J. J. Gonzalez, “Group Decision Processes: MOLP Procedures Facilitating Group and Individual Decision Orientations,” Computers & Operations Research, Vol. 19, No. 7, pp. 695–706, 1992.

Greenberg, J. and R.A. Baron, Behavior in Organizations: Understanding and Managing the Human Side of Work, Eighth Edition, Prentice Hall, Upper Saddle River, NJ, 2003.

Hiltz, S. R. and M. Turoff, “Structuring Computer-Mediated Communication Systems to Avoid Information Overload,” Communications of the ACM, Vol. 28, No. 7, pp. 680–689, 1985.

Poole, M. S., M. Holmes, and G. Desanctis, “Conflict Management in a Computer-Supported Meeting Environment,” Management Science, Vol. 37, No. 8, pp. 926–953, 1991.

Saaty, T. L., “Group Decision Making and the AHP,” in B. L. Golden, E. A. Wasil, and P. T. Harker (Editors), The Analytic Hierarchy Process: Applications and Studies, Springer-Verlag, Berlin, pp. 59–67, 1989.

Tavana, M., “CROSS: A Multicriteria Group-Decision-Making Model for Evaluating and Prioritizing Advanced-Technology Projects at NASA,” Interfaces, Vol. 33, No. 3, pp. 40–56, 2003.

Comparison of Methods

Bard, J. F., “A Comparison of the Analytic Hierarchy Process with Multiattribute Utility Theory: A Case Study,” IIE Transactions, Vol. 24, No. 5, pp. 111–121, 1992.

Belton, V., “A Comparison of the Analytic Hierarchy Process and a Simple Multi-attribute Value Function,” European Journal of Operational Research, Vol. 26, pp. 7–21, 1986.

Kamenetzky, R. D., “The Relationship between the AHP and the Additive Value Function,” Decision Sciences, Vol. 13, pp. 702–713, 1982.

Additional MCDM Techniques

Belton, V. and T. J. Stewart, Multiple Criteria Decision Analysis: An Integrated Approach, Kluwer Academic, Dordrecht, The Netherlands, 2001.

Graves, S. B., J. L. Ringuest, and J. F. Bard, “Recent Developments in Screening Methods for Nondominated Solutions in Multiobjective Optimization,” Computers & Operations Research, Vol. 19, No. 7, pp. 683–694, 1992.

Lewandowski, A. and A. P. Wierzbicki (Editors), Aspiration Based Decision Support Systems, Vol. 331, Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, Berlin, 1989.

Lotfi, V., T. J. Stewart, and S. Zionts, “An Aspiration-Level Interactive Model for Multiple Criteria Decision Making,” Computers & Operations Research, Vol. 19, No. 7, pp. 671–681, 1992.

Appendix 6A: Comparison of Multiattribute Utility Theory with the Analytic Hierarchy Process: Case Study2

2The material presented in this appendix has been excerpted from Bard (1992).

In this appendix, we present a case study in which the AHP and MAUT are used to evaluate and select the next generation of rough terrain cargo handlers for the U.S. Army. Three alternatives are identified and ultimately ranked using the two methodologies. A major purpose of this study is to demonstrate the strengths and weaknesses of each methodology and to characterize the conditions under which one might be more appropriate than the other.

The evaluation team consisted of five program managers and engineers from the Belvoir Research, Development & Engineering Center. The objective hierarchy used for both techniques contained 12 attributes. In general, the AHP was found to be more accessible and conducive to consensus building. Once the attributes were defined, the decision makers had little difficulty in furnishing the necessary data and discussing the intermediate results. The same could not be said for the MAUT analysis. The need to juggle 12 attributes at a time produced a considerable amount of frustration among the participants. In addition, the lottery questions posed during the data collection phase had an unsettling effect that was never satisfactorily resolved.

6A.1 Introduction and Background

In an ongoing effort to reduce risk and to boost the productivity of material-handling crews, the Army is investigating the use of robotics to perform many of the dangerous and labor-intensive functions normally undertaken by enlisted personnel. To this end, a number of programs are currently under way at several government facilities. These include the development of a universal self-deployable cargo handler (USDCH) at Belvoir Research, Development & Engineering Center (Belvoir 1987b), the testing of a field material handling robot (FMR) at the Human Engineering Laboratory, and the prototyping of an advanced robotic manipulator system (ARMS) at the Defense Advanced Research Projects Agency (more details are given in Sievers and Gordon 1986, Sousk et al. 1988).

In each of these efforts, technological risk, time, and cost ultimately intervene to limit the scope and performance of the final product, but to what extent and in what manner? To answer these questions, a model that is capable of explicitly addressing the conflicts that arise among system and organizational goals is needed. Such a model must also be able to deal with the subjective nature of the decision-making process. The two approaches examined, the AHP and MAUT, each offer an analytic framework in which the decision maker can conduct tradeoffs among incommensurate criteria without having to rely on a single measure of performance.

6A.2 The Cargo Handling Problem

Although the Army is generally viewed as a fighting force, the bulk of its activity involves the movement of massive amounts of material and supplies in the field. This is achieved with a large secondary labor force whose risk exposure is comparable to that of personnel engaged in direct combat.

From an operational point of view, cargo must be handled in all types of climates, regions, and environments. At the time of the study, this was accomplished by three different-sized rough-terrain forklifts with maximum lifting capacities of 4,000, 6,000, and 10,000 lbs each. These vehicles are similar in design and performance to those used by industry and, at best, can reach speeds of 20 mph. For the most part, this means that the fleet is not self-deployable (i.e., it cannot keep pace with the convoy on most surfaces). As a consequence, additional transportation resources are required for relocation between job sites. This restriction severely limits the unit’s maneuverability and hence its survivability on the battlefield.

A second problem relates to the safety of the crew. Although protective gear is available for the operator, his or her effectiveness is severely hampered by its use. Heat exhaustion, vision impairment, and the requirement for frequent changes are the problems cited most commonly. Logistics units thus lack the ability to provide continuous support in extreme conditions.

6A.2.1 System Objectives

To overcome these deficiencies as well as to improve crew productivity, a heavy-duty cargo-handling forklift is needed. This vehicle should be capable of operating in rough terrain and of traveling over paved roads at speeds in excess of 40 mph. To permit operations in extreme conditions, internal cooling (microcooling) should be provided for the protective gear worn by the operator. As technology progresses, it is desirable that the basic functions be executable without human intervention, implying some degree of autonomy.

At a minimum, then, the vehicle should be:

Able to substitute for the existing 4,000-, 6,000-, and 10,000-lb (4K, 6K, 10K) forklifts while maintaining current material handling capabilities

Capable of unaided movement (self-deployability) between job sites at convoy speeds in excess of 40 mph

Capable of determining whether cargo is contaminated by nuclear, biological, or chemical agents

Capable of handling cargo in all climates and under all contamination conditions

Transportable by C-130 and C-141B aircraft

Operable in the near term as a human-machine system expandable to full autonomy

Capable of robotic cargo engagement

Operable remotely from up to 1 mile away

6A.2.2 Possibility of Commercial Procurement

A market survey of commercial forklift manufacturers, including those currently under contract for the 4K-, 6K-, and 10K-lb vehicles, indicates little opportunity for a suitable off-the-shelf buy. With Army needs constituting less than 15% of the overall market, lengthy procurement cycles and uneven demand dampen any corporate interest. In the commercial environment, the use of rough-terrain forklifts is limited to construction and logging operations; highway travel and teleoperation have no real applications. Therefore, few, if any, incentives exist for industry to undertake the research and development (R&D) effort implied by the design requirements to build a prototype vehicle.

6A.2.3 Alternative Approaches

To satisfy the system objectives, then, the existing fleet must either be replaced outright or be substantially overhauled. However, given the low priority of logistics relative to combat needs, a full-scale R&D program is not a realistic option. A more likely approach involves an improvement in the existing system, a modification of a commercial system, or the adaptation of available technology to meet specific requirements. Each of these approaches entails a different level of risk, cost, and performance that must be evaluated and compared before a final decision can be made. This is the subject of the remainder of the appendix, but first, the leading alternatives are defined.

Taking into account mission objectives and the fact that the Army has functioned with the existing system up until now, the following alternatives have been identified. This set represents a consensus of the program managers and engineers at Belvoir and the customer at the Quartermaster School:

1. Baseline: the existing system comprising the 4K-, 6K-, and 10K-lb rough-terrain forklifts augmented with the new 6K-lb variable-reach vehicle

2. Upgraded system: baseline upgraded to be self-deployable

3. USDCH: teleoperable, robotic-assisted USDCH with microcooling for the protective gear, and the potential for full autonomy

The new 6K-lb variable-reach (telescoping boom) forklift was scheduled to be introduced into the fleet in early 1990. Its performance characteristics, along with those of the USDCH, have been discussed in several reports (Belvoir 1987a, 1987b). Figure 6A.1 depicts a schematic of the robotic-assisted cargo handler. Note that the field material handling robot and the advanced robotic manipulator system have been omitted from the list above. At this juncture, the primary interest in these systems centers on their robotic capabilities rather than on their virtues as cargo handlers. In fact, almost none of the operational deficiencies mentioned previously would be overcome by either the FMR or the ARMS. Consequently, each was dismissed from further consideration.

Figure 6A.1 Universal self-deployable cargo handler.

6A.3 Analytic Hierarchy Process

The first step in any multiobjective methodology is to identify the principal criteria to be used in the evaluation. These should be expressed in fairly general terms and be well understood by the study participants. For our problem, the following four criteria were identified: performance, risk, cost, and program objectives. The next step is to add definition by associating a subset of attributes (subcriteria) with each of the above. Figure 6A.2 depicts the resultant objective hierarchy. Risk, for example, has been assigned the following attributes: system integration, technical performance, cost overrun, and schedule overrun. The alternatives are arrayed at the bottom level of the diagram. The connecting lines indicate points of comparison.

Figure 6A.2 Objective hierarchy for next-generation cargo handler.


In constructing the objective hierarchy, consideration must be given to the level of detail appropriate for the analysis. This is often dictated by the present stage of the development cycle, the amount of data available on each alternative, and the relative importance of criteria and attributes. For example, if human productivity were a major concern, as it is in the space program, then a fifth criterion might have been included at the second level.

The inclusion or exclusion of a particular attribute depends on the degree to which its value differs among the alternatives. Although transportability and survivability are important design considerations, all candidates for the cargo-handling mission are expected to satisfy basic requirements with respect to these attributes equally. Consequently, it is not necessary to incorporate them in the model.

To avoid too much detail, aggregation is recommended. This permits overly specific factors to be taken into account implicitly by including them in the attribute definitions. For example, “life-cycle cost” (LCC) could have been further decomposed into unit purchase price, operations and maintenance costs, spare parts, personnel and training, and so on, but at the expense of overtaxing the current database and cost accounting system. As a result, these factors were left undifferentiated. Similar reasoning applies to the attribute “reliability/availability/maintainability” (RAM).

6A.3.1 Definition of Attributes

Each of the attributes displayed at level 3 in Figure 6A.2 is described in more detail below. These descriptions, in the form of instructions, were used by the analyst to elicit responses from the decision makers during the data collection phase of the study.

Performance

1. Mission objectives. Compare the alternatives on the basis of how close they come to satisfying mission objectives and requirements. Consideration should be given to such factors as lifting capacity, deployability, productivity improvement, and operation in a nuclear, biological, chemical (NBC) environment.

2. RAM. Using military standards for RAM, compare the alternatives relative to the likelihood that each will meet these standards. If possible, take into account mean-time-between-failures, mean-time-to-repair, and the most probable failure modes.

3. Safety. Compare the alternatives on the basis of how well they protect the crew in all climatic conditions and in an NBC environment. Consider the probable degree of hazard exposure, the vehicle response under various driving conditions, and the ability of the crew to work effectively for extended periods.

Risk

4. System integration. Compare the effort required to achieve full system integration for the alternatives, taking into account the degree of upgrading and reengineering associated with each.

5. Technical performance. Considering the performance goals of each system, evaluate the relative likelihood that these goals will be met within the current constraints of the program. Take into account the Army’s experience with similar systems and the state of commercially available technologies.

6. Cost overrun. Based on the maturity of the technology and the funding histories of similar programs, compare the alternatives as to whether one is more likely to go over budget than the other.

7. Schedule overrun. Based on the maturity of the technology and the development histories of similar programs, compare the alternatives as to whether one is more likely than the other to result in a schedule overrun.

Cost

8. Research, development, testing, and evaluation (RDT&E). Compare the alternatives from the standpoint of which is likely to have the least cost impact during its development cycle. Consideration should be given to each phase of the program before implementation.

9. LCC. Compare the total cost of buying, operating, maintaining, and supporting each alternative over its expected lifetime. Exclude RDT&E, but take into account personnel needs, training, and the degree of standardization achieved by each system.

Program Objectives

10. Implementation timetable. Compare the alternatives with respect to their individual schedules for implementation. Consider the effect that the respective timetables will have on military readiness.

11. Technological opportunities. Compare the alternatives on the basis of what new technologies might result from their development, as well as the likelihood that new applications will be found in other areas. Consideration should be given to the prospect of spinoffs, potential benefits, and the development of long-term knowledge.

12. Customer acceptability. Compare the alternatives from both the user representative’s and operator’s points of view. Take into account the degree to which each alternative satisfies basic objectives, as well as the potential for growth, risk reduction, and the adaptation of new technologies. Also consider secondary or potential uses, operator comfort, and program politics.

6A.3.2 Analytic Hierarchy Process Computations

To illustrate the nature of the calculations, observe Figure 6A.3, which depicts a three-level hierarchy—an abbreviated version of Figure 6A.2 used in the analysis. Table 6A.1 contains the input and output data for level 2.

Recall that when n factors are being compared, n(n−1)/2 questions are necessary to fill in the matrix. The elements in the lower triangle (omitted here) are simply the reciprocals of those lying above the diagonal; that is, a_ji = 1/a_ij. The entries in the matrix at the center of Table 6A.1 are the responses to the six (n = 4) pairwise questions that were asked. These responses were drawn from the 9-point scale shown in Table 6.1. For example, in comparing “performance” with “risk” (element a_12 of the matrix), it was judged that the first “strongly” dominated the second. Note that if the elicited value for this element were 1/5 instead of 5, then the opposite would have been true.

From Table 6A.1, it can be seen that the priorities derived for the major criteria were 0.517 for performance, 0.059 for risk, 0.306 for cost, and 0.118 for program objectives. Also note that the consistency ratio (0.097) is a bit high but still within the acceptable range.

TABLE 6A.1 Priority Vector for Major Criteria

Criteria                 1    2    3     4     Priority weights   Output parameters
1. Performance           1    5    3     4     0.517              λ_max = 4.262
2. Risk                       1    1/6   1/3   0.059
3. Cost                            1     4     0.306              CR = 0.097
4. Program objectives                    1     0.118
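The computations behind Table 6A.1 can be reproduced numerically. The sketch below is a minimal illustration (not the software used in the study): it rebuilds the reciprocal comparison matrix from the upper-triangle judgments, takes the principal eigenvector as the priority vector, and computes the consistency ratio using Saaty's random index of 0.90 for a 4x4 matrix.

```python
import numpy as np

# Upper-triangle judgments from Table 6A.1 (row criterion vs. column criterion)
A = np.array([
    [1.0, 5.0, 3.0,   4.0],   # 1. performance
    [0.0, 1.0, 1 / 6, 1 / 3], # 2. risk
    [0.0, 0.0, 1.0,   4.0],   # 3. cost
    [0.0, 0.0, 0.0,   1.0],   # 4. program objectives
])
# Fill the lower triangle with reciprocals: a_ji = 1 / a_ij
for i in range(4):
    for j in range(i):
        A[i, j] = 1.0 / A[j, i]

# The priority vector is the normalized principal eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency index and ratio (random index RI = 0.90 for n = 4)
CI = (lam_max - 4) / (4 - 1)
CR = CI / 0.90

print(np.round(w, 3))   # approx. [0.517, 0.059, 0.306, 0.118]
print(round(CR, 3))     # approx. 0.097, "a bit high but still acceptable"
```

The recovered weights and consistency ratio match the values reported in Table 6A.1 to three decimals.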

The next step in the analysis is to develop the priorities for the factors on the third level with respect to those on the second. In our case, we compare the three alternatives against the major criteria. For the moment, assume that the appropriate data have been elicited and that the calculations have been performed for each of the four comparison matrices, giving the results displayed in Table 6A.2. The first four columns of data are the local priorities derived from the inputs supplied by the decision maker; note that each column sums to 1. The global priorities are found by respectively multiplying these values by the higher-level local priorities given in Table 6A.1 (and repeated at the top of Table 6A.2 for convenience) and then summing. Because there are no more levels left to evaluate, the values contained in the last column of Table 6A.2 represent the final priorities for the problem. Thus, according to the judgments expressed by this decision maker, alternative 3 turns out to be most preferred. Finally, it should be noted that other schemes are available for determining attribute weights.

TABLE 6A.2 Local and Global Priorities

                         Local priorities
Alternatives   Performance  Risk     Cost     Program obj.  Global
               (0.517)      (0.059)  (0.306)  (0.118)       priorities
Baseline       0.142        0.704    0.384    0.133         0.248
Upgrade        0.167        0.229    0.317    0.162         0.216
USDCH          0.691        0.067    0.299    0.705         0.536
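The synthesis step just described (multiplying each column of local priorities by its criterion weight and summing across criteria) reduces to a matrix-vector product. A minimal sketch using the data of Tables 6A.1 and 6A.2:

```python
import numpy as np

# Criterion weights from Table 6A.1
criteria_w = np.array([0.517, 0.059, 0.306, 0.118])

# Local priorities of the alternatives under each criterion (Table 6A.2);
# rows: baseline, upgrade, USDCH; columns: performance, risk, cost, program obj.
local = np.array([
    [0.142, 0.704, 0.384, 0.133],
    [0.167, 0.229, 0.317, 0.162],
    [0.691, 0.067, 0.299, 0.705],
])

# Global priority of each alternative = weighted sum of its local priorities
global_p = local @ criteria_w
print(np.round(global_p, 3))  # [0.248 0.216 0.536] -> USDCH is most preferred
```

The result reproduces the last column of Table 6A.2, with alternative 3 (the USDCH) emerging as the clear leader.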

Figure 6A.3 Abbreviated version of the objective hierarchy.

6A.3.3 Data Collection and Results for AHP

In the formative stages of the study, two questions quickly arose: (1) Who should provide the responses? (2) Whose point of view should be represented? With regard to the first, it was believed that the credibility of the results depended on having a broad spectrum of opinion and expertise as input. Accordingly, five people from Belvoir’s Logistics Equipment Directorate with an average of 15 years’ experience in systems design, R&D program management, and government procurement practices were assembled to form the evaluation team. After some discussion, it was agreed that the responses should reflect the position of the material developer, the U.S. Army Material Command. Other candidates included the Army as a whole, the customer, and the mechanical equipment division at Belvoir.

At the first meeting, the group was introduced to the AHP methodology and examined the objective hierarchy developed previously by the analyst. Eventually, a consensus grew around the attribute definitions, and each member began to assign values to the individual matrix elements. A bottom-up approach was found to work best: the alternatives are first compared with respect to each attribute; next, a comparison is made among the attributes with respect to the criteria; and finally, the four criteria at level 2 are compared among themselves. After the data sheets had been filled out for each criterion, individual responses were read aloud to ascertain the level of agreement. In light of the ensuing discussion, the participants were asked to revise their entries to better reflect their renewed understanding of the issues. This phase of the study took approximately 6 hours and was done in two sessions over a 5-day period.

As with the Delphi procedure, the challenge was to come as close to a consensus as possible without coercing any of the team members. Unfortunately, this proved more difficult than expected as a result of the speculative nature of much of the attribute data. In practice, many researchers have found that uniformity within a group rarely can be achieved without stretching the limits of persuasion (Greenberg and Baron 2003). Biases, insecurities, and stubbornness often develop their own constituencies. Although none of these factors was openly present at the meetings, organizational and program concerns were clearly seen to influence individual judgments.

In the extreme, when there is no possibility of reconciling conflicting perceptions, it is best to stratify responses along party lines. In our case, sufficient agreement emerged to permit the averaging of results without obscuring honest differences of opinion. Table 6A.3 highlights individual preferences for the level 2 criteria and for the problem as a whole. The numbers in parentheses represent the local weights computed for the four criteria: performance, risk, cost, and program objectives. Global weights and rankings are given in the last two columns.

Table 6A.4 summarizes the computations for each decision maker and presents two collective measures of comparison: (1) the arithmetic mean and (2) the geometric mean. (Issues surrounding the synthesis of judgments are discussed by Aczel and Alsina 1987.) The latter is obtained by a geometric averaging of the group’s individual responses at each point of comparison to form a composite matrix, followed by calculation of the eigenvectors in the usual manner. As can be seen, both methods give virtually identical results and rankings. The strongest preference is shown for the USDCH, closely followed by the baseline. The upgraded system is a distant third.
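The geometric averaging of judgments can be sketched as follows. The five judgment values below are hypothetical, chosen only to illustrate the mechanics at a single point of comparison; in the study, every entry of the composite matrix would be formed this way before running the usual eigenvector calculation.

```python
import numpy as np

def composite_entry(judgments):
    """Geometric mean of one pairwise-comparison judgment across the group."""
    j = np.asarray(judgments, dtype=float)
    return float(np.exp(np.log(j).mean()))

# Hypothetical a_12 ("performance" vs. "risk") judgments from five respondents
group = [5, 7, 3, 1, 4]
a12 = composite_entry(group)

# Reciprocal judgments aggregate consistently: gm(1/x) = 1/gm(x),
# so the composite matrix remains a valid reciprocal matrix
a21 = composite_entry([1 / x for x in group])
print(round(a12, 3), round(a12 * a21, 6))
```

This reciprocal-preserving property is one reason the geometric mean, rather than the arithmetic mean, is used to build the composite matrix.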

TABLE 6A.3 Comparison of Responses Using the AHP

                                       Local results
                         Performance     Risk            Cost            Program obj.
Respondent  Alternative  Weight  Rank   Weight  Rank    Weight  Rank    Weight
1                        (0.517)        (0.059)         (0.306)         (0.118)
            Baseline     0.142   3      0.704   1       0.384   1       0.133
            Upgrade      0.167   2      0.229   2       0.317   2       0.162
            USDCH        0.691   1      0.067   3       0.299   3       0.705
2                        (0.553)        (0.218)         (0.147)         (0.082)
            Baseline     0.144   3      0.497   1       0.432   1       0.202
            Upgrade      0.213   2      0.398   2       0.383   2       0.269
            USDCH        0.643   1      0.105   3       0.185   3       0.529
3                        (0.458)        (0.240)         (0.185)         (0.117)
            Baseline     0.252   3      0.677   1       0.467   1       0.350
            Upgrade      0.273   2      0.249   2       0.375   2       0.371
            USDCH        0.474   1      0.074   3       0.158   3       0.280
4                        (0.359)        (0.315)         (0.210)         (0.116)
            Baseline     0.214   3      0.666   1       0.602   1       0.529
            Upgrade      0.263   2      0.266   2       0.313   2       0.313
            USDCH        0.524   1      0.068   3       0.085   3       0.158
5                        (0.469)        (0.252)         (0.194)         (0.085)
            Baseline     0.184   3      0.655   1       0.565   1       0.176
            Upgrade      0.227   2      0.274   2       0.285   2       0.178
            USDCH        0.589   1      0.071   3       0.150   3       0.646

6A.3.4 Discussion of Analytic Hierarchy Process and Results

The output in Tables 6A.3 and 6A.4 represents the final judgments of the participants and was obtained only after holding two additional meetings to discuss intermediate results. All participants were given the opportunity to examine the priority weights calculated from their initial responses and to assess the reasonableness of the rankings. When their results seemed counterintuitive, they were encouraged to reevaluate their input data, determine the source of the inconsistency, and make the appropriate changes. The debate that took place during these sessions proved to be extremely helpful in clarifying attribute definitions and surfacing misunderstandings. In a few instances, well-reasoned arguments persuaded some people to reverse their position completely on a particular issue. This was more apt to occur when the advocate was viewed as an expert and was able to furnish the supporting data. Ordinarily, one- or two-point revisions were the rule and had no noticeable effect on the outcome.

TABLE 6A.4 Summary of Results for the AHP Analysis

                       Respondent
              1              2              3              4
Alternative   Weight  Rank   Weight  Rank   Weight  Rank   Weight  Rank
Baseline      0.248   2      0.268   3      0.405   1      0.474   1
Upgrade       0.216   3      0.282   2      0.298   2      0.280   2
USDCH         0.536   1      0.450   1      0.297   3      0.246   3

Looking at the data in Table 6A.3, a great deal of consistency can be seen across the group. In all but one instance, performance is given the highest priority, followed by risk, cost, and program objectives. For the first three criteria, each alternative has the same ordinal ranking; the only differences arise in the case of program objectives. Nevertheless, the real conflict is reflected in the magnitude of the weights. Although some variation is inevitable, it is frustrating to observe the results for “cost.” In particular, there is little agreement concerning the extent to which the personnel and transportation resource reductions that accompany the USDCH will be offset by increased operations and maintenance expenses, or how these factors will affect the LCC. The third and fourth decision makers were more skeptical than the first two and hence showed a greater preference for the baseline.

The results for “risk” also reveal a divergence of opinion. Respondent 1 was most forthright in acknowledging its presence in the USDCH program by assigning it an extremely low weight (0.067) relative to the baseline (0.704). The effect of this assignment was minimal, though, because he judged risk to be considerably less important than the other three criteria. Compare his corresponding weight (0.059) with those derived for respondents 2 through 5 (0.218, 0.240, 0.315, and 0.252). From the data in Table 6A.3, it can be seen that the last four decision makers all viewed risk as the second most important criterion. This observation was corroborated indirectly in the utility analysis.

6A.4 Multiattribute Utility Theory

MAUT is a methodology for providing information to the decision maker for comparing and selecting among complex alternatives when uncertainty is present. It similarly calls for the construction of an objective hierarchy as depicted in Figure 6A.2 but addresses only the bottom two levels.

6A.4.1 Data Collection and Results for Multiattribute Utility Theory

After agreeing on the attributes, the next step in model development is to determine the scaling constants, k_i, and the attribute utility functions, U_i. This is done through a series of questions designed to probe each decision maker’s risk attitude over the range of permissible outcomes. Before the interviews can be conducted, though, upper and lower bounds on attribute values must be specified. Table 6A.5 lists the values elicited from respondent 1 for the 12 attributes. Notice that seven of these are measured on a qualitative (ordinal) scale, the meanings of which were made precise at the first group session. Table 6A.6 defines the range of scores for the “mission objectives” attribute and is typical of the 10-point scales used in the analysis.

TABLE 6A.5 Attribute Data for Decision Maker 1

                                  Value*
No.  Attribute        Scale    A1    A2    A3    Range     Order of      Scaling
                                                           importance†   constant

Performance
1    Mission obj.     Ordinal  4     4     8     4–8       1             0.176
2    RAM              Ordinal  6     4     3     3–6       11            0.044
3    Safety           Ordinal  4     4     10    4–10      2             0.162

Risk
4    System integ.    Ordinal  9     7     3     3–9       8             0.059
5    Tech. perf.      Ordinal  9     7     3     3–9       9             0.059
6    Cost overrun     $M       0     1     5     0–5       12            0.044
7    Sched. overrun   Years    0     2     4     0–4       7             0.059

Cost
8    RDT&E            $M       0     6     13    0–13      6             0.059
9    LCC              $B       3.0   2.8   2.5   2.5–3.0   4             0.088

Program objectives
10   Timetable        Years    2     6     8     2–8       10            0.044
11   Tech. opport.    Ordinal  1     2     7     1–7       5             0.074
12   Acceptability    Ordinal  1     3     9     1–9       3             0.132

* A1 = baseline, A2 = upgraded system, A3 = USDCH.

† Order of importance for the given range of attribute values.

TABLE 6A.6 Scale Used for “Mission Objectives” Attribute

Value  Explanation

10     All mission objectives are satisfied or exceeded, and some additional capabilities are provided. The design is expected to lead to significant improvements in human productivity and military readiness.

8      All basic mission objectives are met, and some improvement in productivity is expected. The design readily permits the incorporation of new technologies when they become available.

6      Minor shortcomings in system performance are evident, but the overall mission objectives will not be compromised. Some improvement in operator efficiency is expected.

4      Not all performance levels are high enough to meet basic mission objectives. However, no more than one major objective (e.g., self-deployability, microcooling) is compromised, and no threat exists to military readiness.

2      An inability to meet one or more major mission objectives exists. With the current design, it is not economically feasible to bring overall performance up to standards.

0      Significant shortcomings exist with respect to the mission objectives. Implementation or continued use could seriously jeopardize military readiness.

To determine the scaling constants, the decision maker must specify an indifference probability, p, relating the best (x*) and worst (x0) values of the attribute states. The following scenario is posed:

1. Let attribute i be at its best value and the remaining attributes be at their worst values. Call this situation the “reference.”

2. Assume that a “gamble” is available such that the “best outcome” occurs with probability p, and the “worst outcome” occurs with probability 1−p. If you can achieve the “reference” for sure, then for what value of p are you indifferent between the “sure thing” and the “gamble”?

The resultant scaling constants for each of the five decision makers are displayed in Table 6A.7 along with the corresponding AHP weights. The former have been normalized to sum to 1 to facilitate the comparison and to permit the use of the additive model of Eq. (6.1b). At a superficial level, the group showed a remarkable degree of consistency from one set of responses to the next. (Theoretically speaking, the AHP weights and the MAUT scaling constants measure different phenomena and hence cannot be given the same interpretation.) In almost all cases, mission objectives, safety, technical performance, and life-cycle cost emerged as the dominant concerns. A look at individual values shows some discrepancies, but rankings and orders of magnitude are similar.
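Under the additive model of Eq. (6.1b), an alternative's overall utility is the k_i-weighted sum of its single-attribute utilities. The sketch below uses respondent 1's normalized scaling constants from Table 6A.5; the single-attribute utility values are hypothetical, supplied only to show the arithmetic, since the elicited utility functions themselves are not tabulated here.

```python
# Respondent 1's normalized scaling constants (Table 6A.5); they sum to 1
k = [0.176, 0.044, 0.162, 0.059, 0.059, 0.044,
     0.059, 0.059, 0.088, 0.044, 0.074, 0.132]

def additive_utility(k, u):
    """Additive MAUT model, Eq. (6.1b): U(x) = sum_i k_i * U_i(x_i)."""
    assert abs(sum(k) - 1.0) < 1e-9, "scaling constants must be normalized"
    return sum(ki * ui for ki, ui in zip(k, u))

# Hypothetical single-attribute utilities (each scaled to [0, 1]) for one alternative
u = [0.9, 0.3, 1.0, 0.2, 0.2, 0.1, 0.3, 0.2, 1.0, 0.8, 0.9, 1.0]
U = additive_utility(k, u)
print(round(U, 3))
```

Repeating the evaluation for each alternative and ranking the resulting U values yields comparisons of the kind summarized in Table 6A.8.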

The procedure used to assess the utility functions is nearly identical to that used for the scaling constants. Not surprisingly, the respondents evidenced a slight risk aversion for the attribute ranges considered. Further explanation of the methodology is given by Bard and Feinberg (1989).
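The slight risk aversion the respondents evidenced is commonly modeled with a concave utility over the attribute range. The sketch below is illustrative only: the exponential form and the risk-tolerance parameter rho are assumptions, with only the LCC bounds taken from Table 6A.5.

```python
import math

def exp_utility(x, lo, hi, rho):
    """Concave (risk-averse) exponential utility scaled so U(lo) = 0, U(hi) = 1.
    rho > 0 is a risk-tolerance parameter; smaller rho means more risk averse."""
    z = (x - lo) / (hi - lo)
    return (1 - math.exp(-z / rho)) / (1 - math.exp(-1 / rho))

# LCC range for decision maker 1 (Table 6A.5): $2.5B (best) to $3.0B (worst).
# Cost decreases utility, so evaluate on savings relative to the worst value.
lo, hi = 2.5, 3.0
for lcc in (3.0, 2.8, 2.5):
    u = exp_utility(hi - lcc, 0.0, hi - lo, rho=1.0)
    print(lcc, round(u, 3))
```

A risk-averse curve of this kind lies above the diagonal, so intermediate outcomes receive more than their proportional share of utility.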

TABLE 6A.7 Comparison of AHP Weights and MAUT Scaling Constants for the Five Decision Makers

                                      Respondent
                          1             2             3             4
No.  Attribute            AHP    MAUT   AHP    MAUT   AHP    MAUT   AHP    MAUT

Performance
1    Mission objectives   0.324  0.176  0.341  0.287  0.245  0.199  0.215  0.171
2    RAM                  0.048  0.044  0.047  0.031  0.092  0.081  0.072  0.105
3    Safety               0.145  0.162  0.164  0.144  0.092  0.103  0.072  0.075

Risk
4    System integration   0.006  0.059  0.080  0.061  0.061  0.016  0.021  0.013
5    Technical perf.      0.018  0.059  0.080  0.085  0.141  0.093  0.203  0.225
6    Cost overrun         0.018  0.044  0.037  0.074  0.025  0.097  0.058  0.076
7    Schedule overrun     0.018  0.059  0.023  0.023  0.013  0.016  0.033  0.047

Cost
8    RDT&E                0.038  0.059  0.018  0.025  0.023  0.038  0.023  0.013
9    Life-cycle cost      0.268  0.088  0.129  0.111  0.162  0.191  0.187  0.170

Program objectives
10   Timetable            0.012  0.044  0.027  0.025  0.066  0.094  0.079  0.032
11   Tech. opportunity    0.030  0.074  0.027  0.057  0.017  0.021  0.010  0.044
12   Acceptability        0.075  0.132  0.027  0.077  0.033  0.051  0.027  0.029

The computational results for the utility analysis are displayed in Table 6A.8 and are seen to parallel closely those for the AHP. Only decision makers 3 and 5 partially reversed themselves but without consequence; the others maintained the same ordinal rankings. Note again that it would be inappropriate to compare the final AHP priority weights with the final utility values obtained for each alternative (see Belton 1986). The former are measured on a ratio scale and have relative meaning; the latter simply indicate the order of preference.

An examination of the last four columns of Tables 6A.4 and 6A.8 shows that the two methods give the same general results. Here the geometric mean, also known as the Nash bargaining rule, is computed from the five entries in the table. In making comparisons, only the rankings (and not their relative values) should be taken into account.

TABLE 6A.8 Summary of Results for MAUT Analysis

                       Respondent
              1              2              3              4
Alternative   Weight  Rank   Weight  Rank   Weight  Rank   Weight  Rank
Baseline      0.302   2      0.299   3      0.481   1      0.539   1
Upgrade       0.273   3      0.328   2      0.261   3      0.426   2
USDCH         0.595   1      0.567   1      0.337   2      0.273   3

6A.4.2 Discussion of Multiattribute Utility Theory and Results

The interview sessions in which the scaling constants and utility functions were assessed took approximately 30 minutes each and were conducted individually while the analyst and decision maker were seated at a terminal. Three difficulties arose immediately. The first related to the probabilistic nature of the questions. None of the respondents could make sense of the relationship between the posed lotteries and the overall evaluation process. Repeated coaxing was necessary to get them to concentrate on the gambles and to give a deliberate response.

In this regard, it might have been possible to develop more perspective by using a probabilistic rather than a deterministic utility model. This would have required the attribute outcomes to be treated as random variables (which, in fact, they are) and for probability distributions to be elicited for each. It was believed, however, that this additional burden would have strained the patience and understanding of the group without producing credible results. It was difficult enough to collect the basic attribute data on each alternative without having to estimate probability distributions.

The second issue centered on the assessment of the scaling constants. Here the decision makers were asked to balance best and worst outcomes for 12 attributes at a time. This turned out to be nearly impossible to do with any degree of accuracy and created a considerable amount of tension. The problem was compounded by the fact that in most instances, the group believed that a low score on any one of the principal attributes, such as mission objectives or safety, would kill the program. This produced an unflagging reluctance to accept the sure thing unless the gamble was extremely unfavorable. Because most people are unable to deal intelligently with low probability events, this called into question, at least in our minds, the validity of the accompanying results.

The third concern relates to the use of ordinal scales to gauge attribute outcomes. Although time and cost have a common frame of reference, ordinal scales generally defy intuition. This was the case here. None of the respondents felt comfortable with this part of the interview, even when they were willing to accept the overall methodology.

6A.5 Additional Observations

The level of abstraction surrounding the use of MAUT strongly suggests that the AHP is more acceptable to decision makers who lack familiarity with either method. For problems characterized by a large number of attributes, most of whose outcomes can be measured only on a subjective scale, the AHP once again seems best. When the data are more quantifiable, the major attributes are few, and the alternatives are well understood, MAUT may be the better choice.

This is not to say that the AHP does not have its drawbacks. The most serious relates to the definition and use of the 9-point ratio scale. At some point in the analysis, each of the decision makers found it difficult to accept that by expressing a “weak” preference for one alternative over another, they were saying that they preferred it by a factor of 3:1. Although this might have seemed reasonable in some instances, in others, they believed that a score of 2 was equivalent to showing a “strong” preference. Perhaps this problem could be alleviated by the use of a logarithmic scale.

From the standpoint of consensus building, the AHP methodology provides an accessible data format and a logical means of synthesizing judgment. The consequences of individual responses are easily traced through the computations and can quickly be revised when the situation warrants. In contrast, the MAUT methodology hides the implications of the input data until the final calculations. This makes intermediate discussions difficult because no single point of focus exists. Sensitivity analysis offers a partial solution to this problem but in a backward manner that undercuts its theoretical rigor.

As a final observation, we note that the enthusiasm and degree of urgency that the participants brought to the study varied directly with their involvement in the program. Those with vested interests were eager to grasp the methodologies and were quick to respond to requests for data. The remainder viewed each new request as a frustrating and unnecessary ordeal that was best dealt with through passive resistance.

6A.6 Conclusions for the Case Study

The collective results of the analysis indicated that the group had a modest preference for the USDCH over the baseline. The tradeoff between risk and performance for the upgraded system did not seem favorable enough to make it a serious contender for the cargo-handling mission. We therefore recommended that work continue on the development of the basic USDCH technologies, including self-deployability and robotic cargo engagement, to demonstrate the underlying principles. If more supportive data are needed, then the place to start would be with a full-scale investigation of life-cycle costs (LCCs) and some of the more quantifiable performance measures, such as reliability. The effort required to gather these statistics would be considerable, though, and does not seem justified in light of the overall findings.

In summary, the group believed that the idea of imposing new technologies on an existing system would probably increase its LCC without achieving the desired capabilities. The extensive improvements in performance ultimately sought could best be realized through a structured R&D program that fully exploited technological advances and innovative thinking in design. Such an approach would significantly reduce risk while permitting full systems integration. In fact, this is the approach now being pursued.


Chapter 7 Scope and Organizational Structure of a Project

7.1 Introduction

Project management deals with one-time efforts to achieve a specific goal within a given set of resource and budget constraints. It is essential to use a project organization when the work content is too large to be accomplished by a single person. The fundamentals of project management involve the identification of all work required to be performed, the allocation of work to the participating units at the planning stage, the continuous integration of output through the execution stage, and the introduction of required changes throughout the project life cycle. How the efforts of the participants are coordinated to accomplish their assigned tasks and how the final assembly of their work is achieved on time and within budget are as much an art as they are a science. Adequate technical skills and the availability of resources are necessary but rarely sufficient to guarantee project success. There is a need for coordinated teamwork and leadership—the essence of sound project management.

Three types of “structures” are involved in the overall process. Each is derived from the project scope. They include (1) the work breakdown structure (WBS), which defines the way the work content is divided into small, manageable work packages that can be allocated to the participating units; (2) the organizational structure of each unit participating in the project (the client, the prime contractor, subcontractors, and perhaps one or more government agencies); and (3) the organizational breakdown structure (OBS) of the project itself, which specifies the relationship between the organizations and people doing the work.

Organizations set up management structures to facilitate the achievement of their overall mission as defined in both strategic and tactical terms. In so doing, compromise is needed to balance short-term objectives with long-term goals. As a practical matter, the project manager has very little say in the final design of the organization or in any restructuring that might occur from time to time. Organizations may be involved in many activities and cannot be expected to reorient themselves with each new project. Nevertheless, both the project OBS and the WBS should be designed to achieve the project’s objectives and therefore should be directly under project management control. The thoughtful design and implementation of these structures are critical because of their effect on project success.

The design of a project organizational structure is among the early tasks of the project manager. In performing this task, issues of authority, responsibility, and communications should be addressed. The project organizational structure should fit the nature of the project, the nature of the participating organizations, and the environment in which the project will be performed. For example, the transport of U.S. forces to remove Saddam Hussein from Iraq in 2003 required a project organization that was capable of coordinating logistical activities across three continents (North America, Europe, and the Arabian Peninsula). The authority to decide which forces to transport, when and by what means, as well as the channels through which such decisions were communicated, had to be defined by the project organizational structure. The participating parties were many, including all branches of the U.S. armed services and countries such as England, Australia, and Turkey. To facilitate coordination among these parties, a well-structured project organization with clear definitions of authority, responsibility, and communication channels was needed.

The issue of scope underlies the execution of every project. Scope management includes the processes required to ensure that the project includes all of the work required, and only the work required, to complete the project successfully. It is the project manager’s responsibility to define and update the scope at each stage of a project, starting with the initiation phase, continuing with the introduction of change requests, and ending with the acceptance of the final deliverables. The work content of the project, referred to in shorthand as the WBS, can usually be structured in a variety of ways. For example, if the project is aimed at developing a new commercial aircraft, then the WBS can be structured around the main systems, including the body, wings, engines, avionics, and controls. Alternatively, it can be broken down according to the life-cycle phases of the project; that is, design, procurement, execution, testing, and so on. The first critical step after a project is approved is the design of the WBS by the project manager. The “best” WBS structure is a function of the work content and the organizational structure used to perform the required tasks. To reach an optimal design, the project manager needs to know what types of structures are common, their strengths and weaknesses, and under what conditions each structure is most effective. These issues are taken up in the remainder of the chapter.
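To make the two decompositions concrete, here is a minimal sketch; the element names follow the aircraft example, but the nested-dictionary representation and the outline-numbering scheme are illustrative assumptions, not a prescription from the text:

```python
# Both trees describe the same aircraft project's work content, decomposed two
# different ways: by major system and by life-cycle phase.
wbs_by_system = {
    "Aircraft": {
        "Body": {}, "Wings": {}, "Engines": {}, "Avionics": {}, "Controls": {},
    }
}
wbs_by_phase = {
    "Aircraft": {
        "Design": {}, "Procurement": {}, "Execution": {}, "Testing": {},
    }
}

def wbs_codes(tree, prefix=""):
    """Walk a WBS tree, yielding (code, element) pairs in outline order."""
    for i, (name, children) in enumerate(tree.items(), start=1):
        code = f"{prefix}.{i}" if prefix else str(i)
        yield code, name
        yield from wbs_codes(children, code)

for code, name in wbs_codes(wbs_by_system):
    print(code, name)   # 1 Aircraft, 1.1 Body, 1.2 Wings, ...
```

A real WBS would attach cost accounts, responsible units, and deliverables to each element; the tree walk above only shows how the same total work content yields different work packages under different decompositions.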

7.2 Organizational Structures

Projects are performed by organizations using human, capital, and other resources to achieve a specific goal. Many projects cut across organizational lines. In order to understand the organizational structure of a project, it is first necessary to understand the general nature of organizations.

Theorists have devised various ways of partitioning an organization into subunits to improve efficiency and to decentralize authority, responsibility, and accountability. The mechanism through which this is accomplished is called departmentalization. In all cases, the objective is to arrive at an orderly arrangement of the interdependent components. Departmentalization is integral to the delegation process. Examples include:

1. Functional. The organizational units are based on distinct common specialties, such as manufacturing, engineering, and finance.

2. Product. Distinct units are organized around and given responsibility for a major product or product line.

3. Customer. Organizational units are formed to deal explicitly with a single customer group, such as the Department of Defense.

4. Territorial. Management and staff are located in units defined along geographical lines, such as a southern U.S. sales zone.

5. Process. Human and other resources are organized around the flow of work, such as in an oil refinery.

Thus, organizations may be structured in different ways based on functional similarity, types of processes used, product characteristics, customers served, and territorial considerations.

7.2.1 Functional Organization

Perhaps the most widespread organizational structure found in industry is designed around the technical and business functions performed by the organization. This structure derives from the assumption that each unit should specialize in a specific functional area and perform all of the tasks that require its expertise. Common functional organizational units are engineering, manufacturing, information systems, finance, and marketing. The engineering department is responsible, for example, for such activities as product and process design. The division of labor is based on the function performed, not on the specific process or product. Figure 7.1 depicts a typical functional structure.

Figure 7.1 Portion of a typical functional organization.

When the similarity of processes is used as a basis for the organizational structure, departments such as metal cutting, painting, and assembly are common in manufacturing, and departments such as new policy development, claims processing, and information systems are common in the service sector. When similar processes are performed by the same organizational elements, capital investment is minimized and expertise is built through repetition within the particular group.

In a functional organization structure, no strong central authority is responsible for integration of the various, detailed aspects of each particular project. Major decisions relating to resource allocation and budgets are seldom based on what is best for a particular project but rather on how they affect the strongest functional unit. In addition, considerable time is spent in evaluating alternative courses of action, because each project decision requires coordination and approval of all functional groups, in addition to upper management. Finally, there is no single point of contact for the customer.

Despite these limitations, the functional organization structure offers the clearest and most stable arrangement for large organizations. Advantages and disadvantages are as follows:

Advantages

Efficient use of collective experience and facilities

Institutional framework for planning and control

All activities receive benefits from the most advanced technology

Allocates resources in anticipation of future business

Effective use of production elements

Career continuity and growth for personnel

Well-suited for mass production of items

Disadvantages

No central project authority

Little or no project planning and reporting

Weak interface with customer

Poor horizontal communications across functions

Difficult to integrate multidisciplinary tasks

Tendency of decisions to favor strongest functional group

7.2.2 Project Organization

In this type of structure, each project is assigned to a single organizational unit and the various functions, such as engineering and finance, are performed by personnel within the unit. This results in a significant duplication of resources. Because similar activities and processes are performed by different organizational elements on any particular project, there could be a widespread disparity in methods and results. Another disadvantage can be attributed to the limited life span of projects. Since work assignments and reporting hierarchies are subject to continuous change, workers’ career paths and professional growth may be negatively impacted.

Figure 7.2 depicts a project-oriented organizational structure. As can be seen, functional units are duplicated across projects. These units are coordinated indirectly by the corresponding central functional unit, but the degree of coordination may vary sharply. The higher the level of coordination, the closer the organizational structure is to a pure functionally oriented structure. Low levels of coordination represent organizational structures closer to the project-oriented structure. For example, consider an organization that has to select a new CAD/CAM (computer-aided design/computer-aided manufacturing) system. In a functional organization, the engineering department might have the responsibility of selecting the most appropriate system. In a project-oriented organization, each engineering group will select the system that fits its needs best. If, however, it is desirable to achieve commonality and have all engineering groups use the same system, then the central engineering department will have to solicit input from the various groups and, on the basis of this input, make a decision that balances the concerns of each. The characteristics of an organization geared to optimize project performance, as opposed to developing functional skill sets, are highlighted below.

Figure 7.2 Project-oriented organizational structure.

Advantages

Strong control by a single project authority

Rapid reaction time

Encourages performance, schedule, and cost tradeoffs

Personnel loyal to a single project

Interfaces well with outside units

Good interface with customer

Disadvantages

Inefficient use of resources

Does not develop technology with an eye on the future

Does not prepare for future business

Less opportunity for technical interchange among projects

Minimal career continuity for project personnel

Difficulty in balancing workloads, as projects phase in and out

In addition to the functional organization and project organization, the following structures are also common.

7.2.3 Product Organization

In a mass-production environment where large volumes are the norm, such as in consumer electronics or chemical processing, the organizational structure may be based on the similarity among products. An organization specializing in domestic appliances, for example, may have a refrigerator division, washing machine division, and small appliances division. This structure facilitates the use of common resources, marketing channels, and subassemblies for similar products. By exploiting commonality, it is possible for mixed model lines and group technology cells, handling a family of similar products, to achieve performance that rivals the efficiency of dedicated facilities designed for a unique product.

7.2.4 Customer Organization

Some organizations have a few large customers. This is frequently the case in the defense industry, where contractors deal primarily with one branch of the service. By structuring the contractor’s organization around its principal client, it is much easier to establish good working relationships. In many such organizations, as exemplified by consulting firms and architecture and engineering firms, there is a tendency to hire veteran employees from the customer’s organization to smooth communications and exploit personal friendships.

7.2.5 Territorial Organization

Organizational structures can be based on territorial considerations, too. Service organizations that have to be located close to the customer tend to be structured along geographical lines. With the push toward reduced inventories and just-in-time delivery, large manufacturers are encouraging their suppliers to set up plants, or warehouses, in the neighborhood of the main facility. The same rationale applies to advertising agencies that need to be in close contact with specific market segments, although this need continues to shrink with the widespread use of both the Internet and video conferencing.

7.2.6 The Matrix Organization

A hybrid structure known as the matrix organization provides a sound basis for balancing the use of human resources and skills, as workers are shifted from one project to another. The matrix organization can be viewed as a project organization superimposed on a functional organization, with well-defined interfaces between project teams and functional elements. In the matrix organization, duplication of functional units is eliminated by assigning specific resources of each functional unit to each project. Figure 7.3 depicts an organization that is performing several projects concurrently. Each project has a manager who must secure the required skills and resources from the functional groups. Technical support, for example, is obtained from the engineering department, and the marketing department provides sales estimates. The project manager’s request for support is handled by the appropriate functional manager, who assigns resources on the basis of their availability, the project’s need, and the project’s priority as compared with other projects. Project managers and functional managers must act as partners to coordinate operations and the use of resources. It is the project manager, though, who is ultimately responsible for the success or failure of the project. Important advantages of the matrix organization are:

Figure 7.3 Typical matrix structure.


1. Better utilization of resources. Because the functional manager assigns resources to all projects, he or she can allocate resources in the most efficient manner. The limited life span of projects does not reduce utilization of resources, because they can be reassigned to other projects and tasks as the need arises.

2. State-of-the-art technology. The knowledge gained from various projects is accumulated at the functional level. The most sophisticated projects are sources of new technology and skills that can be transferred to other projects and activities performed by the organization. Therefore, the functional departments become knowledge centers.

3. Adaptation to changing environment. The matrix organization can adapt to changing conditions, including the arrival of new competition in the market, the termination of existing projects, and the realignment of suppliers and subcontractors. The functional skeleton is not affected by such changes, and resources can be reallocated and rescheduled as needed. No loss of knowledge is experienced when projects terminate, because the experts are kept within the functional units.

The matrix organization benefits from having focused effort in both the functional and the project dimensions. However, this advantage may be offset by several potential difficulties.

1. Authority. Although personnel resources are under the control of the functional manager in the long run, they are accountable, day-to-day, to the project manager. In a matrix organization, this can lead to a conflict of interest and to a “dual boss” phenomenon.

2. Technical knowledge. The project manager is not an expert in all technical aspects of a project. He or she has to rely on functional experts and functional managers for their inputs. But, once again, the project manager is responsible for the overall outcome.

3. Communications. Workers have to report to their functional manager and to the project manager for whom they perform specific tasks. Double reporting and simultaneous horizontal/vertical communication channels are difficult to develop, manage, and maintain.

4. Goals. The project manager tends to see the short-term objectives of the project most clearly, whereas the functional manager typically focuses on the longer-term goals, such as accumulation of knowledge and the acquisition and efficient use of resources. These different perspectives frequently conflict and create friction within an organization.

The design and operation of a matrix organization are complicated, time-consuming tasks. A well-conceived and well-managed structure is necessary if the impact of the problems listed above is to be minimized.

In general, each project and functional unit has a set of objectives that must be balanced against a set of mutually agreed-on performance measures. This balance depends on the weight given to each objective and is an important determinant in selecting the organizational structure. For example, if the successful completion of projects on time and within budget is considered most important, the matrix organization will be more project oriented. In the case in which functional goals are emphasized, then the matrix organization can be designed to be functionally oriented.

The orientation of a matrix organization can be measured to some degree by the percentage of workers who are fully committed to single projects. If this number is 100%, then the organization has a purely project-oriented structure. If none are fully committed, then the organization has a functional structure. A range of matrix organizations can be defined between these two extremes, as depicted in Figure 7.4. In this figure, functional organizations are located on the left-hand side, and project-oriented organizations are on the right. Those in between are hybrids of varying degree. An organizational structure that is based on one part-time person managing each project while everyone else is a member of a functional unit represents a very weak matrix structure with a strong functional orientation. Conversely, if the common arrangement is project teams with only a few shared experts among them, then the matrix organization has a strong project orientation, sometimes called a “strong matrix” structure.
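The commitment measure just described can be sketched as a simple classifier. Note that the text defines only the two endpoints of the spectrum; the 50% cutoff separating weak from strong matrix forms below is an invented illustration:

```python
def matrix_orientation(fully_committed, total_workers):
    """Classify a structure by the share of workers dedicated to one project."""
    share = fully_committed / total_workers
    if share == 0.0:
        return "functional"
    if share == 1.0:
        return "project-oriented"
    # Assumed cutoff: the spectrum between the endpoints is a judgment call.
    return "strong matrix" if share >= 0.5 else "weak matrix"

print(matrix_orientation(0, 40))   # -> functional
print(matrix_orientation(35, 40))  # -> strong matrix
print(matrix_orientation(40, 40))  # -> project-oriented
```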

Figure 7.4 Level of employee commitment as a function of organizational structure.


In summary, the principal advantages and disadvantages of the matrix organization are:

Advantages

Effective accumulation of know-how

Effective use of resources

Good interface with outside contacts

Ability to use multidisciplinary teams

Career continuity and professional growth

Perpetuates technology

Disadvantages

Dual accountability of personnel

Conflicts between project and functional managers

Profit-and-loss accountability difficult

7.2.7 Criteria for Selecting an Organizational Structure

The decision to adopt a specific organizational structure is based on several criteria, as discussed below.

1. Technology. A functional organization and a process-oriented organization have one focal point for each type of technology. The knowledge gained in all operations, projects, and products is accumulated at that focal point and is available to the entire organization. Furthermore, experts in different areas can be used efficiently, because they, too, are a resource available to the whole organization.

2. Finance and accounting. These functions are easier to perform in a functional organization, where the budgeting process is controlled by one organizational element that is capable of understanding the “whole picture.” Such an entity is in the best position to develop a budget that integrates the organization’s goals with individual project objectives.

3. Communications. The functional organization has clear lines of communication that follow the organizational structure. Instructions flow from the top down, whereas progress reports are directed over the same channels from the bottom up. The functional organization provides a clear definition of responsibility and authority and thus minimizes ambiguity in communications.

Product-, process-, or project-oriented structures have vertical as well as horizontal lines of communication. In many cases, communication between units that are responsible for the same function on different projects, processes, or product lines might not be well defined. The organizational structure itself is subject to frequent changes as new projects or products are introduced, existing projects are terminated, or obsolete lines are discontinued. These changes affect the flow of information and cause communications problems.

4. Responsibility to a project/product. The product- or project-oriented organization removes any ambiguity over who has responsibility for each product manufactured or project performed. The project manager has complete control over all resources allocated to the project, along with the authority to use those resources as he or she sees fit. The one-to-one relationship between an organizational element and a project or product eliminates the need for coordination of effort and communication across organizational units and thus makes management easier and more efficient.

5. Coordination. As mentioned, the project/product-oriented structure reduces the need for coordination of activities related to the project or product; however, more coordination is required between organizational units that perform the same function on different products.

6. Customer relations. The project/product-oriented organization provides the customer with a single point of contact. Any need for service, documentation, or support can be handled by the same organizational unit. Accordingly, this structure supports better communications and frequently better service for the customer, compared with the functional structure. Its performance closely approximates that of a pure customer-oriented organizational structure.

This partial list demonstrates that there is no single structure that is optimal for all organizations in all situations. Therefore, each organization must analyze its own operations and select the structure that best fits its needs, be it functional, process oriented, customer oriented, project/product oriented, or a combination thereof.

7.3 Organizational Breakdown Structure of Projects

The OBS should be designed as early as possible in the project’s life cycle. An unambiguous definition of communication channels, responsibilities, and the authority of each participating unit is a key element that affects project success. The most appropriate structure depends on the nature of the project, on the environment in which the work is performed, and on the structure of the participating organizations. For example, if a computer company believes that the development of a lighter laptop is crucial to maintaining its market share, then it is likely that either a project structure or a strong matrix structure would be used for this purpose. In these structures, team members report directly to the project manager and, as a result, are able to maintain a strong identification with the project, thus increasing the probability that the project will be completed successfully.

In most projects, it is not enough to adopt the organizational structure of the prime contractor. At a minimum, both the client and the contractor organizations must be considered. The client organization usually initiates the project by defining its specific needs, whereas the contractor is responsible for developing the plan to satisfy those needs. The two may be elements of the same organization (e.g., an engineering department that develops a new product “for” the marketing department), or they may be unrelated (e.g., a contractor for the National Aeronautics and Space Administration). In either case, the relationship between these organizations is defined by the project organizational structure. This definition should specify the responsibility of each party, the client’s responsibility to supply information or components for the project, such as government-furnished equipment, and the contractor’s responsibility to perform certain tasks, to provide progress reports, to consult periodically with the client, and so on.

7.3.1 Factors in Selecting a Structure

The primary factors that should be taken into consideration when selecting an organizational structure for managing projects are as follows.

1. Number of projects and their relative importance. Most organizations are involved in projects. Common examples are the installation of a new enterprise resource planning system, the integration of a new acquisition into the company structure, or the cultivation of a new market. If an organization is dealing with projects only infrequently, then a functional structure supported by ad hoc project coordinators may be best. As the number of projects increases and their relative importance (measured by the budget of all projects as a percentage of the organizational budget, or any other method) increases, the organizational structure should adapt by moving to a matrix structure with a stronger project orientation.

2. Level of uncertainty in projects. Projects may be subject to different levels of uncertainty that affect cost, schedule, and performance. To handle uncertainty, a feedback control system is used to detect deviations from original plans and to detect trends that might lead to future deviations. It is easier to achieve tight control and to react faster to the effects of uncertainty when each project manager controls all of the resources used in the project and gets all the information regarding actual performance directly from those who are actively involved. Therefore, a project-oriented structure is preferred when high levels of uncertainty are present.

3. Type of technology used. When a project is based on a number of different technologies and the effort required in each area does not justify a continuous effort throughout the project life cycle, the matrix organization is preferred. When projects are based on several technologies and the work content in each area is sufficient to employ at least one full-time person, then a strong matrix or a project-oriented structure is preferred.

Research and development projects in which new technologies or processes are developed are subject to high levels of uncertainty. The uncertainty is expressed through parameters such as task completion times, the likelihood of a contemplated breakthrough, or simply the chances that the project’s components can be integrated successfully. Therefore, to cope with this high uncertainty, a stronger commitment to the project is needed, calling for the use of a project-oriented structure.

4. Project complexity. High complexity that requires very good coordination among the project team is best handled in a project-oriented structure. Here communication is most rapid and unobstructed. Low-complexity projects can be handled effectively in a functional organization or a matrix arrangement with a functional orientation.

5. Duration of projects. Short projects do not justify a dedicated project organization and are best handled within a functional structure or a matrix organization. For certain shorter projects, a functional manager (for example, the manager of a function that has a key role on the project) may assume project manager responsibilities. Long projects that span many months or years justify a project-oriented structure.

6. Resources used by projects. When common resources are shared by two or more projects, the matrix arrangement with a functional orientation tends to be best. This is the case when expensive resources are used or when each project does not need a fully devoted unit of a resource. If the number of common resources among projects is small, then the project-oriented structure is preferred.

7. Overhead cost. By sharing facilities and services among projects, the overhead cost of each project is reduced. A matrix organization should be preferred when an effort to reduce overhead cost is required.

8. Data requirements. If many projects have to share the same databases and it is desirable to make the information generated by a set of projects available as quickly as possible to other elements in the organization not directly involved in those projects, then a weak matrix structure is preferred.

In addition to the above factors, the organizational structures of the client and the contractor must be taken into account. If both have a functional orientation, then direct communication between similar functions in the two organizations might be best. If both are project/product oriented, then an arrangement that supports direct communication links between project managers in their respective organizations would be most efficient.

The situation is complicated when the contractor and the client do not have similar organizational structures or when there are several participating units. If the organizational structure of the contractor is functionally oriented, then the client project manager may have to deal simultaneously with many departments as well as a host of subcontractors, government agencies, and private consultants.

7.3.2 The Project Manager The success of a project is highly correlated with the qualities and skills of the project manager. In particular, a project manager must be capable of dealing with a wide range of issues that include refining and promoting project objectives, translating those objectives into plans, and obtaining the required resources to execute each phase of the project. On a day-to-day basis, a project manager copes with issues related to budgeting, scheduling, and procurement. He or she must also be able to respond to the needs and expectations of key stakeholders, including customers, subcontractors, and government agencies. It is often the case that the project manager has most of the responsibilities of a general manager but almost none of the authority.

In Section 1.4.2, we highlighted some of the important attributes that a project manager should have if he or she is to grapple successfully with the above issues. These attributes are now discussed in detail.

Leadership The most essential attribute of a project manager is leadership. The project manager has to lead the project team through each phase of its life cycle, dealing swiftly and conclusively with problems as they arise along the way. This is made all the more difficult given that the project manager usually lacks full control and authority over the participants. The ability to guide the project team smoothly from one stage to the next depends on the project manager's stature, temperament, powers of persuasion, commitment, self-confidence, and technical knowledge. A manager who possesses these characteristics, in some measure, is more likely to be successful even when his or her formal authority is limited.

Interpersonal skills The project manager (as any manager) has to achieve a given set of goals through other people. The manager must deal with senior management, members of the project team, functional managers, and perhaps an array of clients. In addition, a project manager frequently must interact with representatives from other organizations, including subcontractors, laboratories, and government agencies. To achieve the goals of the project, the ability to develop and maintain good personal relationships with all parties is crucial.

Communication skills The interaction between the groups involved in a project and the project manager takes place through a combination of verbal and written communications. The project manager must be kept abreast of progress and be able to transmit directions in a succinct and unambiguous manner. By building reliable communication channels and by using the best channel for each application, the project manager can achieve a fast, accurate response from the team with some degree of confidence that directions will be carried out correctly. The more up to date and comprehensive the information, the smoother the implementation will be.

Decision-making skills The project manager has to establish procedures for documenting and dealing with problems as they arise. Once the source and the nature of a problem are identified, the manager must evaluate alternative solutions, select the best corrective action, and ensure that it is implemented. These are the fundamental steps in project control.

In some instances, the project manager gets involved early enough to participate in discussions regarding the organizational structure of the project and the choice of technology to be used. An understanding of the basic technical issues gives the project manager the credibility needed to influence resource allocation, budget, and schedule decisions before they are finalized. A project manager’s input on these matters in the initial stages increases the probability that the project will get started in the right direction.

Negotiation and conflict resolution Many of the problems that the project manager faces do not have a "best solution"; for example, a conflict of interest may exist between the project manager and the client over a contract clause that is open to various interpretations. There are many sources of conflict, including:

Scheduling

Disagreements that develop around the timing, sequencing, and duration of project-related tasks or activities and the feasibility of their schedules.

Managerial and administrative procedures

Disagreements that develop over how the project will be managed: the definition of reporting relationships and responsibilities, interface relationships, project scope, work design, plans of execution, negotiated work agreements with other groups, and procedures for administrative support.

Communication

Disagreements resulting from poor information flow among staff or between senior management and technical staff, including such topics as misunderstanding of project-related goals, the strategic mission of the organization, and the flow of communication from technical staff to senior management.

Goal or priority

Disagreements arising from lack of goals or poorly defined project goals, including disagreements regarding the project mission and related tasks, differing views of project participants over the importance of activities and tasks, or the shifting of priorities by superiors/customers.

Resource allocation

Disagreements resulting from the competition for resources (e.g., personnel, materials, facilities, equipment) among project members or across teams or from lack of resources or downsizing of organizations.

Reward structure/performance appraisal

Disagreements that originate from differences in understanding the reward structure or from a mismatch between the project team approach and the performance appraisal system.

Personality and interpersonal relations

Disagreements that focus on interpersonal differences rather than on "technical" issues, including conflicts that are ego-centered, personality clashes, and conflicts caused by prejudice or stereotyping.

Costs

Disagreements that arise from the lack of cost control authority within the project office or within a functional group, and disagreements related to the allocation of funds.

Technical opinion

Disagreements that arise, particularly in technology-oriented projects, over technical issues, performance specifications, technical tradeoffs, and the means to achieve performance.

Politics

Disagreements that center on issues of territorial power (not-invented-here attitudes) or hidden agendas.

Poor input or direction from leaders

Disagreements that arise from a need for clarification from upper management on project-related goals and the strategic mission of the organization.

Ambiguous roles/structure

Disagreements, especially in the matrix structure, in which two or more people or sections have related or overlapping assignments or roles.

Tradeoff analysis skills Because most projects have multidimensional goals (e.g., performance, schedule, budget), the project manager often has to perform tradeoff analyses to reach a compromise solution. Questions such as, “Should the project be delayed if extra time is required to achieve the performance levels specified?” or, “Should more resources be acquired at the risk of a cost overrun to reduce a schedule delay?” are common and must be resolved by trading off one objective for another.
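A tradeoff question such as "Should more resources be acquired at the risk of a cost overrun to reduce a schedule delay?" can be framed as simple expected-cost arithmetic. The sketch below uses entirely hypothetical figures (the delay penalty and crashing cost are assumptions, not values from the text) to compare accepting a delay against paying for extra resources:

```python
# Hypothetical schedule-versus-cost tradeoff. All figures are illustrative
# assumptions, not data from the text.

delay_weeks = 4
penalty_per_week = 10_000    # assumed contractual penalty per week of delay
crash_cost_per_week = 7_500  # assumed cost of extra resources per week saved

cost_of_delay = delay_weeks * penalty_per_week        # 40,000: finish late
cost_of_crashing = delay_weeks * crash_cost_per_week  # 30,000: add resources

# Choose the cheaper option; in practice, performance risk would also be weighed.
best = "add resources" if cost_of_crashing < cost_of_delay else "accept delay"
print(cost_of_delay, cost_of_crashing, best)
```

Real tradeoff analyses weigh more than money (performance levels, stakeholder expectations), but reducing each alternative to a comparable cost is a common first step.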

In addition to these skills and attributes, a successful project manager will embody good organizational skills, the ability to manage time effectively, a degree of open mindedness, and loyalty to his or her charge. The correct selection of the project manager and the project organizational structure are two important decisions that are made early in the life cycle of a project and have a lasting impact.

A major difficulty that a project manager faces in a matrix structure (which is the most common one) is related to the nature of the relationship with the functional managers. To understand the sources of the difficulties, let us compare the roles of the two in the following four domains: responsibility, authority, time horizon, and communication.

Responsibility The project manager is responsible for ensuring that the project is completed successfully, as measured by time, cost, system or product performance, and stakeholder satisfaction. The functional manager is responsible for running a department so that all the department’s customers are served efficiently and effectively. To be successful, the functional manager must continuously upgrade the technical ability of the department and take care of staff needs.

Inherent in these responsibilities is the following conflict: Assume that a project manager needs a certain job done by one of the functional departments in the organization. The project manager would like a specific individual to do the work. However, the functional manager plans to assign another person to do the job because the preferred employee is needed elsewhere. In these situations, the functional manager is inclined to do what’s best for the department, and not necessarily what’s best for a particular project.

Authority Authority is measured by the amount of resources that a manager can allocate without the need to get higher-level approval. Whenever external contractors are used, the project manager is the one who approves payment in accordance with the terms of the contract. This is not the case when the work is performed by a functional department within the organization, particularly in a matrix environment, because payment is little more than an accounting entry. This means that if the functional department is late with a deliverable, then the project manager cannot withhold payment, implying that he or she has little leverage over the functional counterpart. In situations such as this, in which unresolved internal conflicts hurt the chances of the project being completed on time, the project manager should seek resolution with higher-level management. In contrast, the functional manager has authority over all of the resources that belong to his or her department, including material, equipment, and employees.

Time horizon Because projects have a limited time horizon, the project manager is necessarily short-term oriented and is interested in immediate impacts. A functional manager has an ongoing department to run whose mission remains in effect beyond the project’s lifetime. A functional manager receives work orders that have to be executed for different customers and may not have the vision to view the full scope or importance of different, individual projects. A project can be viewed as a small business within a larger enterprise whose ultimate goal is to go out of business when all tasks are completed. At the same time, functional departments should be viewed as permanent entities striving to maximize the benefits that they provide to the organization.

Communication In allocating work, a project manager has to interact with many individuals, often from different companies. With some individuals, such as a contractor or consultant, he or she has a formal relationship established through a signed, legally binding contract. With others, such as functional managers within the organization undertaking the project, he or she does not have a formal contract, although there is generally an explicit agreement on the work to be performed. In most cases, specific tasks are carried out not by the person who negotiated the scope of work, but by his or her subordinates. Depending on the established lines of communication, the project manager may not be able to communicate directly with those charged with the work; however, in many cases, there is a continuing need for communication and coordination between two individuals who belong to two different organizational units. Using formal communication channels, the project manager would have to approach those individuals through their managers. Unfortunately, this process may complicate the communications and increase the response time to unacceptable levels. To circumvent this difficulty, the execution of projects in a matrix environment often requires that the project manager communicate informally with those who are working on his or her project.

Projects are essentially horizontal, whereas the functional organization, as exemplified by the traditional organization chart, is vertical. The basic dichotomy between the two can be better understood by comparing the types of questions that project and functional managers ask. Table 7.1 highlights the differences.

TABLE 7.1 Concerns of Project and Functional Managers

Project manager:

What is to be done?

When will the task be done?

What is the importance of the task?

How much money is available to do the task?

How well has the total project been done?

Functional manager:

How will the task be done?

Where will the task be done?

Who will do the task?

How well has the functional input been integrated into the project?

7.3.3 Project Office The project office is a functional department that specializes in the development and implementation of project management methodologies and processes. This department offers its services to all other units in the organization in the same manner as any other functional department. It may be directly under the general manager or may be a subunit in, say, the research and development (R&D) department or the information systems department. These two departments are the ones typically involved in most projects, especially in technology-oriented companies.

The following is a list of tasks that fall within the scope of the project office:

Support in data entry, presentation, and analysis

Development and introduction of project management body of knowledge (PMBOK)-related methods, tools, and techniques

Training project and functional managers

Supplying professional project managers to the organization

Multi-project management support

Maintaining the company’s project management know-how

Coordination between organizational strategy and project portfolio

Contract management

Developing infrastructure required for effective project management

The increased reliance on project offices within large organizations over the last decade can be traced to the need to overcome the following problems:

High failure rate of project completion with respect to budget and schedule

Constant complaints of overwork by project teams

Inconsistent project management practices across departments of the same organization, which complicates the integration of interdepartmental projects

Insufficient correlation between organizational strategy and the project portfolio

Lack of a standardized way to perform projects

A major concern of many organizations is the process by which data and information are collected and stored. If this process is handled diligently, then its output can be used as a vehicle for improving future project planning and execution. An enterprise-wide information warehouse, operated and maintained by the IT organization, is typically established to standardize data processing and information procedures across all departments. The development of a project office is not a straightforward job and should be treated as a project in and of itself. The following may serve as guidelines for such a project:

The project office should be developed in stages, beginning with the most painful problems faced by the organization. Long-term objectives can be deferred until a structure is in place, a manager and staff are chosen, and operational procedures are established.

In the early stages, the project office may offer support on issues such as report design, tracking progress, budgeting, methods for analyzing performance, and standardizing processes by developing templates.

There is a need to meet with different stakeholders, such as project managers and functional managers, and identify their immediate needs.

A list of current projects along with their status should be developed to help determine the most pressing organizational needs.

A respected officer in the organization who believes in the need for a project office should be recruited to champion its development.

A project office is typically called on to support one or more of the following activities:

1. Developing a performance measure and control system. Monitoring the use of resources such as money, labor hours, and material is a basic need of any project.

2. Developing project managers. It is common for a technically competent person to be nominated as a project manager without any training or experience in management. A technical perspective is likely to be quite different from the perspective needed to plan, schedule, monitor, and control the various aspects of a project. One of the primary functions of a project office is to offer training programs for inexperienced project managers.

3. Formulating project management processes. Training effectiveness depends highly on the organizational commitment to implement standard methods for managing projects. Therefore, the organization should first make a decision on which project management processes it wishes to adopt. If the project is to be managed with the help of software, for example, then it will be necessary to plan for the acquisition, installation, training, and maintenance of the selected product.

4. Developing technological infrastructure. As with any process, project management processes require a technological infrastructure for their implementation. For example, an intranet (an internal organizational network based on Internet technologies) is an infrastructure that facilitates the integration of information and effort across all projects within an organization.

5. Developing processes used to manage contractors. Managing work performed by contractors is different from managing work performed by internal units. Because many organizations outsource a significant portion of a project, there is a need to develop a standard process for contract management that will be used by all projects.

6. Continuous improvement. To compete effectively in open markets, there is an ongoing need to improve product performance and quality. This translates into a continuing need for an organization to learn and improve the way it initiates, manages, and administers projects. The development of systematic procedures for incorporating the experience and knowledge gained at the project level and accumulated over time falls within the domain of the project office.

The specific unit within an organization that carries out the above functions may be called by one of several names rather than the “project office.” The name chosen may better characterize its responsibilities. Table 7.2 presents a list of names and their common meaning.

TABLE 7.2 Similar Organizational Units that Perform Project Management Related Tasks Level Organizational unit Major activity

1 Project Support Office Administrative support for projects

2 Project Tool Support Office Support for tools and techniques

3 Project Office Overall project management support

4 Project Management Office Overall project management support

5 Program Office Program and project management support

6 Master Program Office Same as above but with more authority

7 Enterprise Project Management Office Project and portfolio management

— Virtual Project Management Office Project management via the Internet

The first column in the table specifies the sophistication level of the departmental activities: level 1 means that the project management department performs very basic tasks, whereas level 7 is associated with the most complicated tasks. Project management departments that belong to levels 1 to 4 focus mostly on managing single projects, whereas departments that belong to levels 5 to 7 deal not only with single projects but also with the coordination and integration of project activities with organizational strategy. No level is specified for the virtual project management office because it may be anywhere from 1 to 7, depending on the organization. This type of office is becoming more and more common as “virtual companies” set up shop in a single office and do all of their business with subcontractors over the Internet and with telecommuting employees. Projects are typically managed with templates that are used by all participants.

It is difficult to quantify the benefit that a project office offers in monetary terms. Therefore, without the sponsorship and ongoing support of upper management, the chances of establishing and maintaining an effective project office are slim. In reality, though, an increasing number of large corporations are establishing project management offices or setting up functional departments devoted to project management. Furthermore, companies are increasingly viewing project management as a desirable skillset in recruitment of new employees for functional areas such as marketing and engineering.

7.4 Project Scope This section highlights issues and concepts associated with the project scope. We begin with the following definitions.

Project scope. The work that must be done to deliver a product that is able to perform a specified set of functions and incorporates a predetermined set of features. If all of the required work is not delineated, then some of the deliverables may be excluded. If more than the required work is delineated, then unplanned and unbudgeted items will be delivered. This will have a negative impact on the cost and schedule of the project and may lead to excessive delays.

Project scope management. The processes required to ensure that the project includes all of the work required, and only the work required, for successful completion. The scope plays a role at each stage of a project, starting with initiation, continuing with change orders, and terminating with the approval of the deliverables. The following is an outline of the scope-related concepts that arise throughout the project life cycle.

Scope in the initiation stage. When a need for a project is identified, possible technical alternatives are explored, their feasibility is evaluated, and a “go/no go” decision is made. At this point, the work required to design, build, and implement a system that responds to the defined need has to be estimated. The end result of the initiation stage is a project charter that provides a summary description of the project content, the project sponsor, and the management approach that should be used. A project charter for an internal project is similar to but not necessarily as detailed as a contract signed with an outside vendor.

Scope planning. This process includes a short description of the project scope, called a scope statement, which is used as the basis for future project decisions and for establishing an understanding between the project team and the customer. The primary components of the scope statement are:

1. Justification for the project

2. Project objectives

3. Sponsor of project

4. Major stakeholders

5. Project manager

6. Major project deliverables

7. Success criteria

The seventh component is used to determine whether each major phase, as well as the project as a whole, has been completed successfully. If a request for proposal (RFP) has already been issued, then it may serve as the basis for the scope statement document, because it includes most of the required information. An example of a scope statement is given in Figure 7.5.
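As a minimal sketch, the seven components of the scope statement can be captured in a simple record type. The field names below are paraphrases of the list above, not a standard schema, and the sample values are drawn loosely from the management-school example of Figure 7.5:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ScopeStatement:
    """Minimal record covering the seven scope-statement components listed above."""
    justification: str
    objectives: List[str]
    sponsor: str
    stakeholders: List[str]
    project_manager: str
    deliverables: List[str]
    success_criteria: List[str]  # used to judge each phase and the project overall

# Sample values based on the management-school project (Figure 7.5)
scope = ScopeStatement(
    justification="Lack of qualified managers has stagnated regional growth",
    objectives=["Open a top management school within a year"],
    sponsor="The local mayor",
    stakeholders=["Big State University", "Regional Management Association"],
    project_manager="Seymour Smyles",
    deliverables=["Recognized MBA program", "Published catalog"],
    success_criteria=["On-time completion within budget"],
)
print(scope.project_manager)
```

Keeping the statement in a structured form like this makes it easy to check, at each phase, that every major deliverable traces back to a stated objective and success criterion.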

Composing the scope statement starts during the final phase of project initiation and ends before the start of any significant planning efforts.

Scope definition. Although key stakeholders—with input from the project manager—typically define the scope of a project, the project manager has full responsibility for implementation. The major output of this process is the WBS, which is developed right after scope planning. Details are given in the next section.

Scope verification. This process consists of comparing the planned scope with the actual outputs. Deliverables are accepted, rejected, or modified as required, based on comparisons with the original project scope definition. Verification and user acceptance of project deliverables are performed throughout the life cycle of a project.

Scope change. Because no project of any consequence is completed as originally planned, there is a need for a mechanism that will govern the way scope changes are introduced and implemented throughout the project life cycle. A large component of a project manager's day-to-day responsibility involves change management. If every module of a project ran according to plan (for example, the project was on schedule, under budget, and all resources were functioning at 100% efficiency), then the role of a project manager would be greatly simplified.

Figure 7.5 Scope statement for a project.

Project justification: The lack of qualified managers within the region is one of the principal reasons that economic growth has stagnated over the past decade. After evaluating a variety of alternatives, community leaders decided that the best way to respond to this problem was to create a college.

Project objectives: 1. To open a top management school within a year, equipped with advanced computer systems and high-tech teaching facilities.

2. The school will run two major programs: (a) an MBA program and (b) focused seminars that will serve managers who wish to improve their leadership and communications skills.

3. The school will use an existing building that will be renovated to fit its needs.

Sponsor of the project: The local mayor is the chief supporter and fundraiser.

Major stakeholders: 1. The mayor.

2. Big State University, an internationally renowned institution situated in the region that will help structure the program. Dr. Knowly has been nominated to be the coordinator on behalf of the university.

3. Regional Management Association, which will be involved in identifying the region's management needs and in helping to promote the program. Ms. Simpson has been nominated to be the coordinator on behalf of the association.

4. Regional industry—organizations that wish to upgrade the managerial skills of their current and future employees. There is an emerging high-tech concentration in the area on which to draw students.

The project manager: The mayor has nominated Seymour Smyles as the project manager. Dr. Smyles has 10 years of project experience in the telecommunications industry and has recently earned an MBA.

Major project deliverables: 1. Recognized MBA program

2. Published catalog with courses and instructors

3. Web presence

4. Registered students for the first year

5. High-tech classroom facilities

6. Administrative staff

7. Faculty offices and teaching resources

Success criteria: 1. On-time completion within budget

2. Number of students registered for the first year of the program

3. Number of advanced seminars offered the first year

4. Operating costs for the first year

7.4.1 Work Breakdown Structure The scope definition process involves subdividing the major project deliverables into smaller, manageable components called work packages, which can be assigned to organizational units that are then responsible for their execution. As stated in the beginning of the chapter, the division of the work content into lower level components is called the WBS. According to the PMBOK, the WBS is a deliverable-oriented grouping of project elements that organizes and defines the total scope of the project. Each descending level represents an increasingly detailed definition of project components.

The notion of a WBS was initiated by the U.S. Department of Defense (1975), which also has published guidelines relating to the design of military systems: "A work breakdown structure is a product-oriented family tree composed of hardware, services and data which result from project engineering efforts during the development and production of a defense material item, and which completely defines the project/program. A WBS displays and defines the product(s) to be developed or produced and relates the elements of work to be accomplished to each other and to the end product."

The concept of a “WBS dictionary” is widely used as well and consists of a set of documents that includes the WBS and a detailed description of each work package. The conscientious and meticulous development, maintenance, and use of the WBS contribute significantly to the probability that a project will be completed successfully.

The WBS provides a common language for describing the work content of a project. This language centers on the work package definitions and a hierarchical coding scheme for representing each WBS element. It enables all stakeholders, such as customers, suppliers, and contractors, to communicate effectively throughout a project.

The resources required for a project can be determined by summing the resources required to execute each work package and the level-of-effort (LOE) resources used to maintain the project infrastructure. Typical LOE resources are project management, quality assurance personnel, and information systems.
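That rollup can be sketched as a short calculation. The work-package figures and LOE line items below are illustrative assumptions, not values from the text:

```python
# Total project resource requirement = sum over work packages
# + level-of-effort (LOE) items that maintain the project infrastructure.
# All hour figures are hypothetical.

work_package_hours = {
    "1.1 Introduction to Finance": 120,
    "1.2 Introduction to Operations": 150,
    "1.3 Fundamentals of Accounting": 130,
}

loe_hours = {  # typical LOE resources named in the text
    "project management": 200,
    "quality assurance": 80,
    "information systems": 60,
}

total_hours = sum(work_package_hours.values()) + sum(loe_hours.values())
print(total_hours)  # 740
```

The same two-part sum applies to any resource dimension (labor hours, money, equipment time): work packages carry the direct effort, while LOE items scale with project duration rather than with any single package.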

The first level of the WBS hierarchy represents the entire project. Subsequent levels reflect the decomposition of the project according to a number of possible criteria, such as product components, organizational functions, or life-cycle stages. Different WBSs are obtained by applying the criteria at different levels of the hierarchy.

The division of the work content into work packages should reflect the way in which the project will be executed. If, for example, a university initiates a project to create an executive MBA program, then the development of a specific course for the program can be defined as a task, and the organizational unit responsible for that course (a professor) can be associated with the task to form a work package. There are, however, different ways to decompose the work content of this project. One way is to divide the entire project directly into work packages. If there are 30 courses required in the program and each course is developed by one professor, then there will be 30 work packages in the WBS. This is illustrated in Figure 7.6.

Figure 7.6 Two-level WBS for curriculum development project.

The following coding scheme can also be used:

1. Development of an MBA program curriculum

1.1 Introduction to Finance

1.2 Introduction to Operations

...

1.30 Corporate Accounting

Alternatively, the project manager may decide to disaggregate the project work content by functional area and have each such area divide the work content further into specific courses assigned to professors. This situation is illustrated in Figure 7.7. Using an expanded coding scheme, the WBS in this case might take the following form:

1. Development of an MBA program curriculum

1.1 Development of courses in Finance

1.1.1 Introduction to Finance

1.1.2 Financial Management

...

1.2 Development of courses in Operations

1.2.1 Introduction to Operations

1.2.2 Practice of Operations Management

...

1.6 Development of courses in Accounting

1.6.1 Fundamentals of Accounting

...

1.6.4 Corporate Accounting

A third option that the project manager might consider is to divide the work content according to the year in the program in which the course is taught and then divide it again by functional areas. This WBS is illustrated in Figure 7.8 and might take the following form:

1. Development of an MBA curriculum

1.1 First-year courses

1.1.1 Development of courses in Finance

1.1.1.1 Introduction to Finance

...

Figure 7.7 Three-level WBS for curriculum development project.


Figure 7.8 Four-level WBS for curriculum development project.


1.2 Second-year courses

1.2.1 Development of courses in Finance

1.2.1.1 Financial Management

...

1.2.6 Development of courses in Accounting

1.2.6.1 Management Information Systems in Accounting

...

1.2.6.4 Corporate Accounting

For all three WBSs, the same 30 tasks are performed at the lowest level by the same professors. However, each WBS represents a different approach to organizing the project. The first structure is “flat”: there are only two levels, and from the organizational point of view, all of the professors report directly to the project manager, who must handle the integration of all 30 work packages. The second WBS consists of three levels and introduces one intermediate level, the functional committee; each committee is responsible for integrating the work packages directly under it. The third WBS has four levels, that is, two intermediate levels that deal with integration.
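The hierarchical coding schemes above can be generated mechanically from any nested decomposition. The sketch below, using a subset of the chapter's MBA example, derives the dotted codes by walking the tree; the function name and data layout are illustrative, not a standard:

```python
# Sketch: derive hierarchical WBS codes (1.1, 1.1.1, ...) from a nested
# structure. Each node is a (name, children) pair.

def assign_codes(nodes, prefix="1"):
    """Yield (code, name) for every WBS element below the root, depth-first."""
    for i, (name, children) in enumerate(nodes, start=1):
        code = f"{prefix}.{i}"
        yield code, name
        yield from assign_codes(children, code)

# Three-level WBS in the style of Figure 7.7: functional areas, then courses.
wbs = [
    ("Development of courses in Finance", [
        ("Introduction to Finance", []),
        ("Financial Management", []),
    ]),
    ("Development of courses in Operations", [
        ("Introduction to Operations", []),
    ]),
]

for code, name in assign_codes(wbs):
    print(code, name)
# 1.1 Development of courses in Finance
# 1.1.1 Introduction to Finance
# 1.1.2 Financial Management
# 1.2 Development of courses in Operations
# 1.2.1 Introduction to Operations
```

Reorganizing the nesting (flat, by function, or by year) changes the codes but leaves the same leaves at the bottom, which is exactly the point made about the three alternative WBSs.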

As a second example, let us consider the construction of a new assembly line for an existing product. To capitalize on experience and minimize risk, the design may be identical to that of the existing facilities; alternatively, a new design that exploits more advanced technology may be sought. In the latter case, the WBS might include automated material handling equipment, an updated process design, and the development of production planning and control systems. One possible WBS follows:

1. New assembly line

1.1 Process design

1.1.1 Develop a list of assembly operations

1.1.2 Estimate assembly time for each operation

1.1.3 Assignment of operations to workstations

1.1.4 Design of equipment required at each station

1.2 Capacity planning

1.2.1 Forecast of future demand

1.2.2 Estimates of required assembly rates

1.2.3 Design of equipment required at each station

1.2.4 Estimate of labor requirements

1.3 Material handling

1.3.1 Design of line layout

1.3.2 Selection of material handling equipment

1.3.3 Integration design for the material handling system

1.4 Facilities planning

1.4.1 Determination of space requirements

1.4.2 Analysis of energy requirements

1.4.3 Temperature and humidity analysis

1.4.4 Facility and integration design for the whole line

1.5 Purchasing

1.5.1 Equipment

1.5.2 Material handling system

1.5.3 Assembly machines

1.6 Development of training programs

1.6.1 For assembly-line operators

1.6.2 For quality control personnel

1.6.3 For foremen and managers

1.7 Actual training

1.7.1 Assembly-line operators

1.7.2 Quality control

1.7.3 Foremen, managers

1.8 Installation and integration

1.8.1 Shipment of equipment and machines

1.8.2 Installations

1.8.3 Testing of components

1.8.4 Integration and testing of line

1.8.5 Operations

1.9 Management of project

1.9.1 Design and planning

1.9.2 Implementation monitoring and control

The decision on how to disaggregate the work content of a project is related to the decision on how to structure the project organization. In making these decisions, the project manager not only establishes how the work content will be decomposed and then later integrated, but also lays the foundation for project planning and control systems.

The WBS of a project can be defined in several ways. The choice depends on a number of factors, such as the complexity of the project, its duration, its work content, risk levels, the organizational structure, resource availability, and management style. There is no one “correct” way. Nevertheless, the WBS selected should be complete in the sense that it captures all of the work to be performed during the project; it should be detailed in the sense that, at its lowest level, it specifies executable work packages with specific objectives, resources, budgets, and durations; and it should be accurate in the sense that it represents the way management envisions first decomposing the work content and then integrating the completed tasks into a unified whole.

The following general guidelines may be used when considering a WBS:

The WBS represents work content and not an execution sequence.

The second level of the WBS may be organized by components, functions, or geographical locations.

Managerial philosophy often influences the structure.

The WBS and its derived work packages should be compatible with organizational working procedures.

The WBS should be generic in nature so that it may be used in the future for similar projects.

The WBS is not a product structure tree or bill of materials, both of which refer to a hierarchy of components that are physically assembled into a product.

7.4.2 Work Package Design

Each work package (WP) requires a certain amount of planning, reporting, and control. As described by Raz and Globerson (1998), organizations use general guidelines to size WPs. These guidelines are typically expressed in terms of effort (e.g., person-days, dollar value) or in terms of elapsed time (e.g., days, weeks). One possible principle is that a WP should last no more than four weeks.

Ideally, the project manager should ensure that each WP is assigned to a single person or organizational unit and that this unit has the capabilities required to execute it. Smaller WPs mean more frequent deliveries to the customer and earlier payments, reducing finance charges to the contractor and increasing them for the customer.
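A sizing guideline such as the four-week rule can be checked mechanically against a WP list. The names and durations below are hypothetical:

```python
# Sketch: flag work packages that violate a sizing guideline (here, the
# "no more than four weeks" rule discussed above). Data is illustrative.

MAX_WEEKS = 4

wp_durations = {                 # elapsed time per work package, in weeks
    "Course development": 3,
    "Textbook selection": 1,
    "Accreditation review": 7,   # too large: a candidate for further splitting
}

oversized = [name for name, weeks in wp_durations.items() if weeks > MAX_WEEKS]
print(oversized)  # ['Accreditation review']
```

An oversized WP would normally be split into smaller packages, trading finer control and earlier deliveries against more planning and reporting overhead, as the surrounding text notes.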

The definition of a WP—the lowest level of the WBS—should include the following elements:

Objectives. A statement of what is to be achieved by performing this WP. The objectives may include tangible accomplishments, such as the successful production of a part or a successful integration of a system. Nontangible objectives are also possible, such as learning a new computer language.

Deliverables. Every WP has deliverables, which may consist of hardware components, software modules, reports, economic analyses, or a recommendation made after evaluating different alternatives.

Responsibility. The organizational unit that is responsible for proper completion of each WP has to be defined. This unit may be a component of the organization or be an outside contractor.

Required inputs. These include data, documents, and other material needed for the execution of the WP. They are provided by various sources, such as the stakeholders, company records, contractors, and marketing studies. The information derived from these inputs is used by the project manager to establish the order in which all of the WPs will be executed.

Resources. The unit that is responsible for executing the WP should estimate resources that are required for the task (e.g., labor hours, material, and equipment).

Duration. After estimating the resources required for each WP, the responsible party should estimate the duration required for its completion. Resource availability and possible delays must be taken into account.

Budget. A time-phased budget should be prepared for each WP. The budget is a function of the resources allocated to the WP and the duration that each will be used.

Performance measures. Whether a WP has been completed successfully is determined by a predefined set of performance measures and standards. These elements are used during project execution to compare actual versus planned performance and to establish project control.
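The eight elements above can be captured in a simple record. The sketch below is one possible layout, not a standard schema; all field names and values are illustrative:

```python
# Sketch: a work-package description record holding the eight elements
# discussed above. Field names and sample data are hypothetical.
from dataclasses import dataclass, field

@dataclass
class WorkPackage:
    objectives: str
    deliverables: list           # e.g., reports, software modules, hardware
    responsibility: str          # owning organizational unit or contractor
    required_inputs: list        # data, documents, material
    resources: dict              # e.g., {"labor_hours": 200}
    duration_weeks: float
    budget: list = field(default_factory=list)          # time-phased, per period
    performance_measures: list = field(default_factory=list)

wp = WorkPackage(
    objectives="Develop the Introduction to Operations course",
    deliverables=["syllabus", "lecture notes", "exam bank"],
    responsibility="Operations management instructor",
    required_inputs=["syllabi from peer institutions"],
    resources={"labor_hours": 200},
    duration_weeks=12,
    budget=[5000, 5000, 4000],
    performance_measures=["course approved by functional committee"],
)
print(sum(wp.budget))  # total budget: 14000
```

A record like this is essentially a machine-readable version of the WP description form shown in Figure 7.9.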

Because a WP is the smallest manageable unit of a project, the success of the project depends to a large extent on the ability of the project manager to deal properly with each WP. A powerful tool for this purpose is the WP description form, which contains a description of all relevant WP attributes. It is also used as the basis for a contract, either formal or informal, between the project manager and the supplier of the WP. Figure 7.9 depicts a sample form for the MBA project. The form is generic and may be used for different WPs. The nature of the required resources, for example, will obviously change from one WP to another.

Figure 7.9 Work package definition form.


Points to remember when defining a WP:

A WP is the lowest level in the WBS.

A WP always has a deliverable associated with it.

A WP should have one responsible party, called the WP owner.

A WP may be considered by the WP owner as a project in itself.

A WP may include several milestones.

A WP should fit organizational procedures and culture.

Many projects, for a particular company or organization, are likely to be similar in nature. In such cases, developing a generic approach to defining WPs and constructing WBSs can prove extremely advantageous. Although no two projects are identical, many will have enough similarities to allow the same WBS template to be used as a starting point with the necessary modifications made as the requirements unfold. Using this approach will enable a company to improve its performance and perhaps gain a competitive edge.

7.5 Combining the Organizational and Work Breakdown Structures

The two structures, the OBS and the WBS, form the basis for project planning, execution, and control. Building blocks, called work packages, are formed at the intersection of the lowest levels of these structures. A specific organizational unit is assigned a specific WP that includes tasks residing at the lowest level of the WBS. The WP is further divided by the organizational unit into specific activities, each defined by its work content, expected output, required resources, timetable, and budget. The hierarchical nature of these structures provides a roll-up mechanism wherein the information gathered and processed at any level can be aggregated and rolled up to the level above.
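The roll-up mechanism is a simple recursive aggregation over the WBS tree. The sketch below rolls up cost reported at the WP level; the structure and figures are hypothetical:

```python
# Sketch of the WBS roll-up mechanism: cost reported at the work-package
# level is aggregated recursively up the tree. Data is illustrative.

def roll_up(node):
    """Total cost of a node: its own WP cost plus that of all descendants."""
    return node.get("cost", 0) + sum(roll_up(c) for c in node.get("children", []))

project = {
    "name": "MBA curriculum",
    "children": [
        {"name": "Finance courses", "children": [
            {"name": "Introduction to Finance", "cost": 12000},
            {"name": "Financial Management", "cost": 9000},
        ]},
        {"name": "Operations courses", "children": [
            {"name": "Introduction to Operations", "cost": 11000},
        ]},
    ],
}

print(roll_up(project))  # 32000
```

The same recursion works for any quantity gathered at the WP level (labor hours, earned value, schedule variance), which is why the hierarchy is so useful for reporting at different management levels.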

In operational terms, the WP is the smallest unit used by the project manager for planning and control, although internal milestones may be defined to allow for better visibility of progress. Further disaggregation of a WP is undertaken by the person who is charged with getting the work done (e.g., a group leader) and converts the WP into a set of basic tasks and activities. For example, “Introduction to Operations” is a WP in the project outlined in Figure 7.6. Let’s assume that the corresponding execution responsibilities have been assigned to an operations management instructor. To complete the assignment properly, the instructor must divide the WP into tasks and activities. These might include collecting syllabi from institutions that offer a similar course, establishing a list of possible topics, deciding what material to cover on each topic, developing a detailed bibliography, evaluating case studies, generating exercises and discussion questions, and so on.

The person who is responsible for a WP is responsible for detailed resource planning, budgeting, and scheduling of its constituent tasks. The development of the OBS–WBS relationship is a major step in the responsibility assignment task faced by the project manager. By planning, controlling, and managing the execution of a project at the WP level, lines of responsibility are clarified, and the effect of each decision on each element of the project can be traced to any level of the OBS or the WBS.

7.5.1 Linear Responsibility Chart

An important tool for the design and implementation of the project’s work content is the linear responsibility chart (LRC). The LRC, also known as the matrix responsibility chart or responsibility interface matrix, summarizes the relationships between project stakeholders and their responsibilities for each project element. An element can be a specific activity, an authorization to perform an activity, a decision, or a report. The columns of the LRC represent project stakeholders; the rows represent project elements performed by the organization. Each cell corresponds to an activity and the organizational unit to which it is assigned. The level of participation of the organizational unit is also specified.

By reading down a column of the LRC, one gets a picture of the nature of involvement of each stakeholder; reading across a row gives an indication of which organizational unit is responsible for that element, as well as the nature of involvement of other stakeholders with that element. An example of an LRC is shown in Table 7.3. The notation used in the table is defined as follows:

TABLE 7.3 Example of an LRC

Activity              Engineering  Manufacturing  Contracts  Project manager  Marketing
Respond to RFP        I            I              O, A       P                B
Negotiating contract  I, N         I, N           I, R       P                –
Preliminary design    P            A              R          O, B             –
Detailed design       P            A              R          O                –
Execution             R            P              –          O, B             –
Testing               I            I              –          O, B             –
Delivery              N            N              P          A                N

A Approval. Approves the WP or the element.

P Prime responsibility. Indicates who is responsible for accomplishing the WP.

R Review. Reviews output of the work package. For example, the legal department reviews a proposal of a bid submitted by the team leader.

N Notification. Notified of the output of the WP. As a result of this notification, the person makes a judgment as to whether any action should be taken.

O Output. Receives the output of the work package and integrates it into the work being accomplished; in other words, the user of that package. For example, the contract administrator receives a copy of the engineering change orders so that the effects of changes on the terms and conditions of the contract can be determined.

I Input. Provides input to the WP. For example, a “bid/no bid” decision on a contract cannot be made by a company unless inputs are received from the manufacturing manager, financial manager, contract administrator, and the marketing manager.

B Initiation. Initiates the WP. For example, new product development is the responsibility of the R&D manager, but the process generally is initiated with a request from the marketing manager.

If A, R, and B are not separately identified, then P is assumed to include them. The LRC in Table 7.3 corresponds to a single project. Similar charts can be constructed for each project in the portfolio, as well as for each WP in a project.
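The LRC lends itself to a simple nested-mapping representation in which rows and columns can be read programmatically. The sketch below encodes two rows of Table 7.3; the helper function names are illustrative:

```python
# Sketch: two rows of Table 7.3 as a nested mapping, so the LRC can be read
# by row (activity) or by column (stakeholder). Codes follow the notation
# defined above (P, A, R, N, O, I, B).

lrc = {
    "Respond to RFP":     {"Engineering": "I", "Manufacturing": "I",
                           "Contracts": "O,A", "Project manager": "P",
                           "Marketing": "B"},
    "Preliminary design": {"Engineering": "P", "Manufacturing": "A",
                           "Contracts": "R", "Project manager": "O,B"},
}

def responsible_unit(activity):
    """Read across a row: the column holding prime responsibility (P)."""
    for unit, codes in lrc[activity].items():
        if "P" in codes.split(","):
            return unit

def involvement(unit):
    """Read down a column: every activity in which a unit participates."""
    return {act: codes[unit] for act, codes in lrc.items() if unit in codes}

print(responsible_unit("Preliminary design"))  # Engineering
print(involvement("Engineering"))
# {'Respond to RFP': 'I', 'Preliminary design': 'P'}
```

Filtering the mapping by column is also how the sorted, per-unit view of the LRC described below can be produced.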

The LRC clarifies authority, responsibility, and communication channels among project stakeholders. Taken as a whole, it is a blueprint of the activity and information flows that occur at the interfaces of an organization. Once the LRC for a project is developed, it can be sorted for each organizational unit by the nature of its involvement. When a manager reviews the sorted WPs associated with his unit, he can identify those activities for which he has direct responsibility and others in which he plays a supportive role.

The LRC conveys information on job descriptions and organizational procedures. It provides a means for all stakeholders in a project to view their responsibilities and agree upon their assignments. It shows the extent or type of authority exercised by each participant in performing an activity in which two or more parties have overlapping involvement, and it clarifies supervisory relationships that may otherwise be ambiguous when people share work.

To generate the LRC, the OBS should be complete, detailed, and accurate: complete in the sense that it should depict all of the stakeholders and organizational units that will participate in the project; detailed in the sense that each organizational unit is represented down to the level where the work is actually being performed; and accurate in the sense that it reflects the true lines of authority, responsibility, and communication. The LRC integrates the two structures by assigning bottom-level WBS elements to bottom-level OBS elements. This can be done only when the WBS and the OBS are accurate and comprehensive.

Although both the LRC and WPs are formed from elements at the lowest levels of the WBS and the OBS, they take different forms and serve different purposes. The LRC defines the nature of the organizational interaction associated with each major WP. For example, it identifies the responsible stakeholders who have to be consulted with regard to each WP and indicates who should be notified when the WP is completed. Each row in the LRC represents the decision-making process for the specific WP, and each column represents the job description of a specific organizational unit/stakeholder with regard to the project.

The integration of the WBS, the OBS, and the LRC forms the cornerstone of project management and provides the framework for developing and integrating tools needed for scheduling, budgeting, management, and control. It also aids in defining the relationship among the project manager, client representatives, functional managers, and other stakeholders.

7.6 Management of Human Resources

Of the many types of resources used in projects (people, equipment, machinery, data, capital), human resources are the most difficult to manage. Unlike other resources, human beings seek motivation, satisfaction, and security, and need an appropriate climate and culture to achieve high performance. The problem becomes even more complicated in a project environment because the successful completion of the project depends primarily on team effort. Working groups, or teams, are the common organizational units within which individual efforts are coordinated to achieve a common goal. A team is well integrated when information flows smoothly, trust exists among its members, each person knows his or her role in the project, morale is high, and there is a desire for a high level of achievement.

7.6.1 Developing and Managing the Team

In a project environment where workers from many disciplines join to perform multifunctional tasks, the importance of teamwork is paramount. The issues center on how to build a team, how to manage it, and what kind of leadership is most appropriate for a project team. The objective of team building is to transform a collection of individuals with different objectives and experiences into a well-integrated group in which the objectives of each person promote the goals of the group. The limited life of projects and the frequent need to cross functional organizational lines make team building a complicated task.

Members of a new project team may come from a variety of organizational units or may be new employees. To build an efficient team, organizational uncertainty and ambiguity must be reduced to a minimum. This is done by clearly defining, as early as possible, the project, its goals, its organizational structure (organizational chart), and the procedures and policies that will be followed during execution.

Each person who joins the project must be given a job description that defines reporting relationships, responsibilities, and duties. Task responsibilities must also be defined. The LRC is a useful tool for defining individual tasks and responsibilities. Once the roles of all team members have been established, they should be introduced to each other properly and their functions explained. Continuous efforts on the part of the project manager are required to keep the team organized and highly motivated. An ongoing effort is also required to detect any problems and to ensure that appropriate correction measures are taken.

The roles of team members tend to change over time as the project evolves. Because confusion and uncertainty cause conflict and inefficiency, the project manager should frequently update team members regarding their roles. Furthermore, the manager should detect any morale problems as early as possible in an effort to identify and eliminate the cause of such problems. For example, the appearance of cliques or isolated members should serve as a signal that the team is not being managed properly.

The project manager should also help in reducing anxieties and uncertainty related to “life after the project.” When a project reaches its final stages, the project manager, together with relevant functional managers, should discuss the future role in the organization of each team member and prepare a plan that ensures a smooth transition to that new role. By providing a stable environment and a clear project goal, team members can focus on the job at hand.

A recommended practice for management is to conduct regular team meetings throughout the life cycle of the project but more frequently in the early phases, when uncertainty is highest. In a team meeting, plans, problems, operating procedures, and policies should be discussed and explained. By anticipating potential sources of “issues” and preparing an agreed-on plan, the probability of success is increased and the probability of conflicts is reduced or eliminated altogether.

Despite the pragmatic guidelines specified above, if the team is not properly developed, there is a high probability that it will not perform its functions effectively. If, for the moment, we liken the processes of a project to an iceberg, then we might see something similar to the relationships depicted in Figure 7.10.

Figure 7.10 Iceberg model of project processes.


The tip of the iceberg, the part first seen and supported by the submerged structure, represents the project deliverables. The middle of the iceberg, still above water (and supporting the tip), contains all of the supporting project management tools and processes. Finally, below the surface lie all of the human processes. These are hidden from the eye in the sense that we can see their results but not their essence; that is, we can see the product of a committed team or an unmotivated team, but we cannot see the commitment or the lack of motivation itself. As with the base of an iceberg, any movement below the surface will affect the entire structure. The stability of the iceberg as a whole is only as strong as the stability of its base; yet although the human processes are of critical importance, they are often left relatively unattended, at least until they rumble and threaten to undermine the project.

One of the paradoxes of project management is that a project manager may be chosen for technical/professional expertise, rather than for leadership skills, but is then given the task of leading a group of people to achieve collaboratively what may be a set of unfamiliar and conflicting goals. The following paragraphs outline typical team development stages. By recognizing these stages, the project manager will be in a better position to bring out the full potential of the team.

When individuals get together to form a team, they are concerned with four issues:

1. Identity: Who will they be in the team? What role will they play? Will their role be meaningful?

2. Power: How much power and influence will they have in the team? Will their voice be heard? Will they be able to change the course of events and influence team decisions?

3. Interface (conflict or overlap) between their needs as individuals and the needs of the team: Will they benefit from working in this team (materially, professionally)? What will they have to give up to stay in line with the team?

4. Acceptance: Will they be accepted and liked? Will they fit in? Will they belong?

At any given point, individuals may be concerned with one or more of these issues although it is unlikely that they will formulate and express them precisely. A project manager will be better able to respond to a dissatisfied team member by understanding that, often, behind complaints related to, say, scheduling/workload/role definitions, lie concerns of identity/acceptance/power and so on.

A team, as a collective, tends to go through the following four stages: forming, storming, norming, and performing. These stages give rise to what is known as a performance model. As the team moves from one stage to the next, its competence in performing its task grows. More precisely, we have the following:

Forming

task performance at a lower level

lack of clarity regarding roles and expectations

lack of norms governing team interactions

relatively low commitment to both team and task

low trust

high dependence on project manager

high curiosity, expectations

boundaries begin to form (who is/is not a part of the team)

Storming

roles and responsibilities understood (accepted or challenged)

open confrontations and power struggles

open expression of disagreement

high competition

“subgroups” formed

little or no team spirit

lots of testing of authority

feeling of being “stuck”

low motivation

Norming

roles and responsibilities accepted

purpose clear

agreement on working procedures

trust built

confidence rises

openness to give and receive feedback

conflict resolution strategies formed

task orientation

feeling of belonging

very strong norms may suffocate individual expression and creativity

Performing

cooperation and coordination

strong sense of team identity

high commitment to task

mutual support

high confidence in team ability

high task performance

networks created with other teams/parts of the organization

leadership role moves informally between members

high motivation (with occasional dips)

How can this model benefit the project manager? First, many project managers find familiarity with this model helpful in that it can predict and explain some of the phenomena that they may be observing in their team. Most salient is the storming stage, which project managers often view with distress, concluding that “something is wrong with the team” or “we’ll never be able to work together,” rather than viewing it as an integral, even necessary, part of team development.

Second, there are operational implications associated with the model; that is, the project manager can, to a certain extent, manage the process of team development. With this in mind, his or her role becomes one of leading the team through the first three stages as smoothly as possible so that they all arrive at the performing stage at the earliest possible time.

In the ambiguity of the forming stage, the project manager may facilitate the team process by being directive and ensuring clarity; that is, by setting a clear mission and set of objectives for the team, by establishing clear roles and reporting procedures, by defining human resource processes, and, in general, by serving as the authority who resolves the team’s uncertainty and questions.

In the storming stage, the project manager’s role calls for a more supportive and flexible attitude: supporting members, facilitating and reconciling differences, setting boundaries through persuasion, spending time building trust between team members, and constantly reminding the team of their superordinate goals and mission—which tend to get lost in the day-to-day struggles.

In the norming stage, the project manager must constantly be aware of the team norms being created regarding, for example, planning and schedules, feedback loops, meetings, communication (quantity and quality), expressing disagreement, and changing priorities. At this stage, the team forms its own particular style of working, or, in other words, its own culture, which can sometimes be effective and sometimes serve as a real obstacle to effectiveness. (An example of an ineffective norm might concern meetings: “We have far too many meetings, people come unprepared for the most part, and the first 15 minutes are spent on socializing; no wonder people are no longer coming as frequently.”) It is important for the project manager to remember that it is much easier to set a desired norm than it is to change an undesirable one.

Finally, in the performing stage, the project manager is called on to become more of a coach: delegating responsibilities as team members become more proficient at taking them on, giving feedback on performance and advice on problems, generating team spirit and motivation, and generally directing and supporting the team’s work.

A revision of the model added a fifth stage, “adjourning,” which is especially relevant in project management because the team is, a priori, a temporary one. Although this is not really a stage like the others, it is sometimes characterized by lowered motivation, by people moving on to the next project (in their minds, if not in reality), and by a scattering of focus and attention. The project manager needs to be aware when he or she sees these things happening and to take steps in two directions. The first is to encourage people to “run the last leg,” mainly through motivational techniques and encouragement. The second is to make sure that the project ends on a positive note, both in the sense of a joint celebration and in a process of “lessons learned.” This is particularly important in organizations that are based on project structures, because the end of each project leaves all involved either with a positive experience and an enthusiasm to go on to the next project or, on the contrary, with a negative experience that saps the energy and will to commit to the next project.

7.6.2 Encouraging Creativity and Innovation

The one-time nature of projects requires solutions to problems that have not been dealt with in the past. The ability to apply past solutions to present problems may be limited. The project manager needs to stimulate the human ability to innovate and create new ideas.

In order for creativity and innovation to flourish, a project manager, with support from senior management, must create an appropriate climate and culture. The various ways and means by which management has tried to establish the proper conditions have been well documented in the literature and include quality circles, suggestion boxes, and rewards for new ideas that are implemented. Sherman (1984) interviewed key executives in eight leading U.S. companies to study the techniques used to encourage innovation. Following are some of his findings:

Organizational level

The search for new ideas is part of the organizational strategy. Continuous effort is encouraged and supported at all levels.

Innovation is seen as a means for long-term survival.

Small teams of people from different functions are used frequently.

New organizational models such as quality circles, product development teams, and decentralized management are tested frequently.

Individual level

Creative and innovative team members are rewarded.

Fear that the status quo will lead to disaster is a common motivator for individual innovation.

The importance of product quality, market leadership, and innovation is stressed repeatedly and thus is well known to employees.

To put it more succinctly, innovation and creativity should be encouraged and properly managed. To enhance innovation, a systematic process that starts by analyzing the sources of new opportunities in the market is required; namely, users’ needs and expectations. Techniques such as quality function deployment and the house of quality have proved to be very effective in this regard (Cohen 1995, Hauser and Clausing 1988).

Once a need is identified, a focused effort is required to fulfill it. Such an effort is based on knowledge, ingenuity, free communication, and well-coordinated hard work. The entire process should aim at a solution that will set the standard and the trend for its industry. Techniques that support individual creativity and innovation are usually designed to organize the process of thinking and include:

1. A list of questions regarding the problem, or the status quo.

2. Influence diagrams that relate elements of a problem to each other.

3. Models that represent a real problem in a simplified way, such as physical models, mathematical programs, and simulation models.

A project manager can enhance innovation by selecting team members who are experts in their technical fields with a good record as problem solvers and innovators in past projects. The potential of individuals to innovate is further enhanced by teamwork and the application of proper techniques, such as brainstorming and the Delphi method.

Brainstorming is used as a tool for developing ideas by groups of individuals headed by a session chairman. The session starts with the chairman presenting a clear definition of the problem at hand. Group members are invited to present ideas, subscribing to the following rules:

Criticism of an idea is barred absolutely.

Modification of an idea, or its combination with another idea, is encouraged.

Quantity of ideas is sought.

Unusual, remote, or wild ideas are encouraged.

A major function of the chairman is to stimulate the session with new ideas or direction. A typical session lasts up to an hour and is brought to an end at the onset of fatigue.

The Delphi technique is used to structure intuitive thinking. It was developed by the Rand Corporation as a tool for the systematic collection of informed opinions from a group of experts. Unlike brainstorming, the members of the group need not be in the same physical location. Each member gets a description of the problem and submits a response. These responses are collected and fed back anonymously to the group members. Each person then considers whether he or she wants to modify earlier views or contribute more information. Iterations continue until there is convergence to some form of consensus.
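The iterate-until-consensus loop just described can be sketched as a toy simulation. This is purely illustrative: the assumption that each expert revises halfway toward the group mean each round, and the use of a spread threshold as a proxy for consensus, are hypothetical simplifications, not part of the Delphi method itself.

```python
from statistics import mean, stdev

def delphi_round(estimates):
    """One anonymous feedback round: each expert sees the group mean
    and (in this toy model) revises halfway toward it."""
    m = mean(estimates)
    return [e + 0.5 * (m - e) for e in estimates]

def run_delphi(estimates, tolerance=0.5, max_rounds=20):
    """Iterate until the spread of opinions (standard deviation) falls
    below `tolerance`, our stand-in for convergence to consensus."""
    for round_no in range(1, max_rounds + 1):
        if stdev(estimates) < tolerance:
            break
        estimates = delphi_round(estimates)
    return mean(estimates), round_no

# Five experts' initial duration estimates (in weeks) for some task
consensus, rounds = run_delphi([8.0, 10.0, 12.0, 9.0, 15.0])
```

Because every revision preserves the group mean while shrinking the spread, the loop converges to the initial mean; in a real Delphi study the feedback may shift the consensus as experts contribute new information.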

In addition to these two approaches, a number of other techniques are available to support creativity and innovation by groups. For a comprehensive review, see Warfield et al. (1975). As a final example, we mention the nominal group technique, which works as follows:

1. A problem or topic is given and each team member is asked to prepare a list of ideas that might lead to a solution.

2. Participants present their ideas to the group, one at a time, taking turns. The team leader records the ideas until all lists are exhausted.

3. The ideas are presented for clarification. Team members can comment on or clarify each of the ideas.

4. Participants are asked to rank the ideas.

5. The group discusses the ranked ideas and ways to expand or implement them.
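The ranking in step 4 can be combined into a single group ordering. The technique itself does not prescribe an aggregation rule; summing rank positions is one common simple choice, sketched below with purely hypothetical idea names.

```python
def aggregate_rankings(rankings):
    """Combine individual rankings (best idea listed first) into a
    group order by summing each idea's rank positions; a lower
    total means higher group priority."""
    totals = {}
    for ranking in rankings:
        for position, idea in enumerate(ranking, start=1):
            totals[idea] = totals.get(idea, 0) + position
    return sorted(totals, key=totals.get)

# Three team members each rank four (hypothetical) ideas
votes = [
    ["reuse design", "new tool", "outsource", "retrain team"],
    ["new tool", "reuse design", "retrain team", "outsource"],
    ["reuse design", "retrain team", "new tool", "outsource"],
]
group_order = aggregate_rankings(votes)
# "reuse design" has the lowest rank total and heads the group order
```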

7.6.3 Leadership, Authority, and Responsibility

Because of the cross-functional nature of most project teams, organizations tend to be matrix oriented. This means that at any given moment, each team member may have two bosses: the project manager and his or her functional manager. Often, a person is also part of two or more project teams and may face conflicting priorities and demands. Similarly, the project manager may be constrained by the limited options available for managing the team (e.g., lack of control over compensation and other types of rewards). In the absence of full authority, managing teams becomes both more complex and more challenging. Often the only way a project manager can achieve outstanding results is to motivate the team through a sense of pride, belonging, and commitment. Whereas in areas such as scheduling and budgeting a project manager is able to manage, in the “people management” area a project manager is expected to “lead” rather than to “manage.” Indeed, one definition of “leadership” is precisely the ability to motivate people to achieve a goal through informal motivational techniques rather than those associated with formal authority.

One way of differentiating between management and leadership would be to consider the sources or bases of power that a project manager has. In general, we tend to speak of five main power bases:

1. Formal/position: the power a manager has over subordinates as given by the organization—to hire and fire, to compensate, to promote, and so on.

2. Reward/coercive: the power to use the “carrot and stick” method. Although there is a large overlap with the first power base, the two are not identical. People have the ability to punish and reward others even when they are not formally responsible for them, for example, by withholding valuable information or resources.

3. Professional expertise: the power to influence people or events through in-depth knowledge, skills, and experience in a certain discipline.

4. Interpersonal skills: the power to create and maintain relationships, which includes the ability to listen, to empathize, and to resolve conflicts.

5. Ability to create identification/commitment: the power to create a sense of meaningfulness for people through a connecting of their own wishes, desires, and ambitions to the task in question.

In a matrix environment, a project manager rarely has the first source of power (formal). The second (reward/coercive) is one that a project manager can exercise to a certain extent, but with limits. Coercion, whether implicit or explicit, creates a “transactional” relationship whereby a subordinate performs according to a perception of the value of the reward that will be received for successful results or, conversely, according to a fear of possible punishment for not performing well (e.g., not being assigned to a desired project in the future). The obvious problem is that team members will be cooperative only as long as the promise of significant reward or punishment holds; when neither is present, motivation disappears.

Professional expertise is and has always been a prime power base used by project managers. This is frequently the reason they are chosen for the role in the first place, and it is in using their expertise that they usually feel the most comfortable, seeing themselves and being seen by others as adding value. Although this is both a necessary and an effective power base, it is most often not sufficient by itself. It enables the project manager to manage and control task processes but not necessarily people.

Interpersonal skills are also a critical power base at the disposal of the project manager. One common misperception concerning this power base is that it is inborn, that is, either you have it or you don’t. Although some people may have a head start in interpersonal skills, anyone can acquire a good understanding of them through focus and attention, training, practice, and the intelligent use of several commercial methodologies.

It is, however, the ability to create identification/commitment that differentiates between a good project manager and an outstanding one. This is where “intangible” motivational abilities come into play: first, to bring out team members’ inner need to excel and to be part of a team that is doing something meaningful and, second, to generate the commitment that can lead people to perform above and beyond their normal levels. These abilities include:

Giving meaning to the tasks by linking them to the project and to the larger organizational picture. This involves generating an ongoing dialogue concerning the “what,” the “how,” and especially the “why” of the project.

Setting an example: being a role model is one of the most difficult but most effective ways in which a project manager can motivate his or her team. A project manager must set standards of behavior, integrity, commitment, and sensitivity to others, and abide by those standards and guiding principles. There is probably nothing as demotivating as a manager who does not “walk his or her talk.”

Creating trust: this relates to the fact, consistently upheld by research, that mutual trust is the primary condition under which people will commit themselves (their knowledge, skills, and spirit) to a team project. When trust does not exist, an inordinate amount of energy is diverted from task-related issues to political or power issues or toward self-justification and protection from criticism.

Creating intellectual and emotional stimuli: both of these relate to the question that each team member asks him- or herself at the beginning of the project: “What’s in it for me?” The answer lies not on the material level but rather in terms of challenge, professional growth, experience, and development in more generic project management areas as well as in a member’s specific professional field. If a project manager can create an environment in which team members can both contribute to and learn from others and can take on meaningful responsibilities, and in which each individual’s unique voice will be heard and heeded, then he or she will have gone a long way towards ensuring the project’s success, for his or her team will give it the best they have.

Leading a team to the successful completion of a project is no simple task. Whereas prediction and control have always been the staples of effective management, they are not easy to implement in today’s turbulent and constantly changing environment. The “grand paradox” of management, according to management theorist Peter Vaill (1990), is that being a manager in our complex reality means taking responsibility for what is less and less stable and controllable. In the same vein, project managers are expected to work within a paradoxical framework: they need to predict and control the many variables that affect their project while at the same time planning for the inevitable changes and surprises that cannot be predicted and controlled.

This becomes very clear in the team leadership role of a project manager. He or she needs to understand that effective teamwork does not “just happen” automatically. It requires attention to and engagement in human processes that are often “messy,” emotional, and sometimes irrational. It requires knowledge of group processes and individual preferences and tendencies, together with the understanding that there is no model that can completely capture the complexity of thought processes, behavior, and interaction. It requires an understanding that people are motivated to do their best only when their heart and spirit are involved in the project, rather than only their professional and technical expertise.

Finally, perhaps the biggest paradox of all lies in the fact that although project managers need to be adept in the theory and practice of “people management,” “it is the ability to meet each situation armed not with a battery of techniques but with openness that permits a genuine response. The better managers transcend technique. Having acquired many techniques in their development as professionals, they succeed precisely by leaving technique behind.” (Farson 1996).

The responsibility of a project manager is typically to execute the project so that the pre-specified deliverables are ready within the planned time and budget. This responsibility must come with the proper level of legal authority, implying that leadership and authority are related. A manager cannot be a leader unless he or she has authority. Authority is the power to command or direct other people, and it has two sources: legal authority and voluntarily accepted authority. Legal authority is based on the organizational structure and a person’s organizational position. It is delegated from the owners of the organization to the various managerial levels and is usually contained in a document. Voluntarily accepted authority is based on personal knowledge, interpersonal skills, or experience that enables a person to exercise influence over and above his or her legal authority. The project manager should have well-defined legal authority in the organization and over the project. However, a good project manager will also seek voluntarily accepted authority from the team members and organizations involved in the project on the basis of his or her personal skills.

The importance of legal authority is most pronounced in a matrix organization in which the need to work with functional managers and to utilize resources that “belong” to functional units can trigger conflicts. Reduction of these conflicts depends on the formal authority definition, as well as on the ability of both the project manager and the functional manager to be flexible.

7.6.4 Ethical and Legal Aspects of Project Management

The legal authority of a project manager and his or her role as a leader require a proper understanding of the legal and ethical aspects of project management. The Project Management Institute (PMI)1 has developed a code of ethics.

1 PMI Member Ethical Standards, Project Management Institute Inc., 2000. Copyright and all rights reserved. Material from this publication has been reproduced with the permission of PMI.

A project manager’s legal responsibilities are set by the organization sponsoring the project and depend, in part, on any contracts involving the project and on the laws of the country where the project is performed. The following legal aspects are common to most projects:

Contractual issues regarding clients, suppliers, and subcontractors

Government laws and regulations

Labor relations legislation

As a rule of thumb, whenever the project manager is not sure of the legal aspects of a decision or a situation, he or she should consult the legal staff of the organization.

Legalities are very important when an organization contracts to carry out a project or parts of a project for a customer or when an organization uses subcontractors. A large variety of contract types exist, commonly classified into fixed-cost and cost-reimbursable contracts, and each requires a different legal orientation. Among the first class, two major subclasses can be identified: (1) firm fixed price (FFP) contracts and (2) fixed price incentive fee (FPIF) contracts. Under FFP contracts, the contractor assumes full responsibility for cost, schedule, and technical aspects of the project. This type of contract is suitable when the levels of uncertainty are low, technical specifications are well defined, and schedule and cost estimates are subject to minimal errors. The FPIF contract is designed to encourage performance above a preset target level. Thus, if a project is completed ahead of schedule or under cost, then an incentive is paid to the contractor. In some FPIF contracts, a penalty is also specified in case of cost overruns or late deliveries. By specifying a target that can be achieved with high probability, the risk that the contractor takes is minimized, while the incentive motivates the contractor to try to do better than the specified target.

Cost-reimbursable contracts are also classified into two major types: (1) cost plus fixed fee and (2) cost plus incentive fee (CPIF) contracts. The former are designed for projects in which most of the risk associated with cost overrun is borne by the customer. This type of contract is appropriate when it is impossible to estimate costs accurately, as, for example, in R&D projects. On top of the actual cost of performing the work, an agreed-on fee is paid to the contractor. CPIF contracts are designed to guarantee a minimum profit to the contractor while motivating the contractor to achieve superior cost, schedule, and technical performance. This is done by paying an incentive for performance higher than expected and tying the level of incentive to the performance level.
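The payment rules of the four contract types can be summarized numerically. The formulas below follow the common textbook structure (target cost or fee plus a share of any underrun or overrun); the specific share ratios, ceiling, and fee bounds are illustrative parameters, not values fixed by the contract types themselves.

```python
def ffp_payment(price):
    """Firm fixed price: the contractor is paid the agreed price
    regardless of actual cost, so it bears all cost risk."""
    return price

def fpif_payment(target_cost, target_profit, share, actual_cost, ceiling):
    """Fixed price incentive fee: the contractor's profit grows (shrinks)
    by its share of any cost underrun (overrun), capped by a ceiling price."""
    profit = target_profit + share * (target_cost - actual_cost)
    return min(actual_cost + profit, ceiling)

def cpff_payment(actual_cost, fixed_fee):
    """Cost plus fixed fee: the customer reimburses actual cost and pays
    an agreed fee, so the customer bears the cost-overrun risk."""
    return actual_cost + fixed_fee

def cpif_payment(target_cost, target_fee, share, actual_cost, fee_min, fee_max):
    """Cost plus incentive fee: the fee varies with cost performance but
    is bounded, guaranteeing the contractor a minimum profit."""
    fee = target_fee + share * (target_cost - actual_cost)
    return actual_cost + max(fee_min, min(fee_max, fee))

# Finishing $10,000 under a $100,000 target cost raises the FPIF
# profit by the contractor's 25% share of the underrun: 102500.0
underrun_price = fpif_payment(100_000, 10_000, 0.25, 90_000, 120_000)
```

Note how the risk allocation described in the text shows up directly in the formulas: under FFP the payment ignores actual cost entirely, while under CPFF it tracks actual cost one-for-one.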

Within the four types of contracts, there are many variations. The proper contract for a specific project depends on the levels of risk involved, the ability of each party to assume part of the risk, and the relative negotiating power of the participants. Although the legal staff is usually responsible for contractual arrangements, the project manager has to execute the contract, so his or her ability to establish good working relationships with the client, suppliers, and subcontractors within the framework of the contract is extremely important.

In addition to contracts, the project manager should be familiar with government laws and regulations in areas such as labor relations, safety, environmental issues, patents, and trade regulations. Whenever a question arises, the project manager should consult the legal staff.

Each country has its own labor relations legislation, and managers of international projects must not assume that these regulations are the same or even similar from one country to the next. Typically, these regulations have to do with minimum wages, benefits, work conditions, equal employment opportunity, employment of individuals with disabilities, and occupational safety and health.

To summarize, management of human resources is probably the most difficult aspect of project management. It requires the ability to build a project team, to manage it, to encourage creativity and innovation without feeling threatened by them, and to deal with human resources inside and outside the organization. The project manager can learn some of these skills, but most come only with experience, common sense, and inherent leadership qualities.

TEAM PROJECT

Thermal Transfer Plant

At the last Total Manufacturing Solutions, Inc. (TMS) board meeting, approval was given to develop a new area of business: recycling and waste management. Because your supporting analysis was the determining factor, your team has been asked to develop for TMS an organizational structure that will integrate this new area with its current business. You are also required to develop a detailed OBS and WBS for a project aimed at designing and assembling a prototype rotary combustor for which only the power unit will be manufactured in-house; other parts will be purchased or subcontracted. In developing the OBS and WBS for the project, clearly identify the corresponding hierarchies and show who has responsibility at each level.

In your report explain your objectives and the criteria used in reaching a decision. Show why the selected structure is superior to the alternatives considered, and explain how this structure relates to the TMS organization as a whole. Your report will be submitted to TMS management for review. Be prepared to present the major points to your management and to defend your recommendations.

Discussion Questions

1. Describe the organizational structure of your school or company. What difficulties have you encountered working within this structure?

2. Explain how a matrix organization can perform a project for a functional organization. What are the difficulties, contact points, and communication channels?

3. In the matrix management structure, the functional expert on a project has two bosses. What considerations in a well-run organization reduce the potential for conflict?

4. Write a job description for a project manager in a matrix organization. Assume that only the project manager is employed full time by the project.

5. How does the WBS affect the selection of the OBS of a project?

6. Under what conditions can a functional manager act as a project manager?

7. Develop a list of advantages and disadvantages of the following structures:

1. Product organization

2. Customer organization

3. Territorial organization

8. Which kind of OBS is used in the company or organization to which you now or used to belong? What are the limitations that you have perceived?

9. What are the activities and steps involved in developing an LRC?

10. Describe the “team building” inherent in the development of an LRC. How is team building accomplished on large projects? How does this relate to development of the LRC?

11. Discuss the applicability of the nominal group technique, the Delphi method, and brainstorming to the process of scheduling and budgeting a project.

12. Compare the advantages and disadvantages of the four types of contracts discussed in this chapter.

13. Of the types of leadership discussed, which is most appropriate for a high-risk project?

Exercises

1. 7.1 Develop an organizational structure for a project performed in your school (e.g., the development of a new degree program). Explain your assumptions and objectives.

2. 7.2 You are in charge of designing and building a new solar heater. Develop the OBS and the WBS. Explain the relationship between the two.

3. 7.3 Develop an OBS for an emergency health care unit in a hospital. How should this unit be related to the other departments in the hospital?

4. 7.4 Develop a WBS for a construction project.

5. 7.5 Consider the development of a new electric car by an auto manufacturer and a manufacturer of high-capacity batteries.

1. Develop an appropriate four-level WBS.

2. Develop the OBS.

3. Define several WPs to relate the WBS elements to the OBS.

6. 7.6 Suggest three approaches (OBS–WBS combinations) for the development of a new undergraduate program in electrical engineering.

7. 7.7 Develop an LRC for a project done for a client who has a functional organization by a contractor who has a customer-oriented organization.

1. Describe the project and its WBS.

2. Describe the OBS of the client and the contractor.

8. 7.8 You are the president of a startup company that specializes in computer peripherals such as optical backup units, tape drives, signature verification systems, and data transfer devices. Construct two OBSs, and discuss the advantages and disadvantages of each.

9. 7.9 List two activities that you have recently performed with two or more other people. Explain the role of each participant using an OBS, a WBS, and an LRC.

10. 7.10 Give an example of an organization with an ineffective or cumbersome structure. Explain the problems with the current structure and how these problems could be solved.

11. 7.11 You have been awarded the contract to set up a new restaurant in an existing building at a local university (i.e., there is no need for external construction). The WBS for the project, as developed by the planning team, is presented in Figure 7.11. Using this WBS, carry out the following exercises:

Figure 7.11

WBS for new restaurant.


1. Develop a coding system for the project.

2. Identify other types of projects that could use this coding system. For which types of projects would it be inappropriate? Explain.

3. If you wish to use a more general coding system that deals with construction, what would be the differences between the latter and the more specific coding system developed in part (a)?

12. 7.12 You have been offered a contract to undertake the restaurant project in Exercise 7.11 at several campuses that belong to the same university.

1. Suggest an OBS for these projects.

2. Generate three WPs and assign them to the appropriate organizations.

3. Identify some areas that will require coordination among the organizations included in the OBS to ensure that the three WPs will be completed properly.

4. Construct an LRC for coordinating the work among the various functions that are to be carried out.

13. 7.13 For the restaurant project in Exercise 7.11:

1. Develop another WBS, making sure that it includes the same WPs that are shown in the original WBS in Figure 7.11.

2. Generate additional WPs for the project and add them to the new WBS.

14. 7.14 You have been assigned the task of developing a network representation of the project in Exercise 7.11 (network construction is taken up in much greater detail in Chapter 9).

1. Design the network for the WBS in Figure 7.11 . In so doing, each WP in the WBS should correspond to a node in the network, and each arc should indicate a precedence relation. Include in your diagram a dummy start node and a dummy end node.

2. Extend your network by including several activities for each WP.

15. 7.15 Prepare a Delphi session for selecting the best project manager for a given project.

16. 7.16 Develop a set of guidelines for project managers in international projects that deal with legal and ethical issues.

17. 7.17 Generate an example of a project management-related ethical issue, and discuss possible ways to resolve it.

18. 7.18 Generate a WP template and test it on a selected WP.

Bibliography

Organizational Structures

Anderson, C. C. and M. M. K. Fleming, “Management Control in an Engineering Matrix Organization: A Project Engineer’s Perspective,” Industrial Management, Vol. 32, No. 2, pp. 8–13, 1990.

Chambers, G. J., “The Individual in a Matrix Organization,” Project Management Journal, Vol. 20, No. 4, pp. 37–42, 1989.

DiMarco, N., J. R. Goodson, and H. F. Houser, “Situational Leadership in a Project/Matrix Environment,” Project Management Journal, Vol. 20, No. 1, pp. 11–18, 1989.

Kerzner, H. and D. I. Cleland, Project/Matrix Management Policy and Strategy: Cases and Situations, Van Nostrand Reinhold, New York, 1997.

McCollum, J. K. and J. D. Sherman, “The Effects of Matrix Organization Size and Number of Project Assignments on Performance,” IEEE Transactions on Engineering Management, Vol. 38, No. 1, pp. 75–78, 1991.

Nadler, D. and M. Gerstein, Organizational Architecture: Designs for Changing Organizations, Jossey-Bass, San Francisco, 1992.

Takahashi, N., “Sequential Analysis of Organization Design: A Model and a Case of Japanese Firms,” European Journal of Operational Research, Vol. 36, No. 3, pp. 297–310, 1988.

Project Organization

Ashly, P. and T. Edwards, Introduction to Human Resource Management, Oxford University Press, New York, 2000.

Carmel, E., Global Software Teams, Prentice Hall, Upper Saddle River, NJ, 1999.

Craig, S. and J. Hadi, People and Project Management for IT, McGraw-Hill, Boston, 1999.

Globerson, S. and A. Korman, “The Use of Just-In-Time Training in a Project Environment,” International Journal of Project Management, Vol. 19, pp. 279–285, 2001.

Hallows, J., Project Management Office Toolkit, Amacom, London, 2001.

Haywood, M., Managing Virtual Teams: Practical Techniques for High-Technology Project Managers, Artech House, Norwood, MA, 1998.

Humphrey, W. S., Managing Technical People: Innovation, Teamwork, and the Software Process, Addison-Wesley, Reading, MA, 1996.

Meredith, J. R. and S. J. Mantel, Jr., Project Management: A Managerial Approach, Fifth Edition, John Wiley & Sons, New York, 2003.

O’Conell, F., How to Run a Successful High-Tech Project Based Organization, Artech House, Norwood, MA, 2002.

Peters, L., C. R. Greer, and S. A. Youngblood (Editors), The Blackwell Encyclopedic Dictionary of Human Resource Management, Blackwell Publishers, Malden, MA, 1997.

Pinto, J. (Editor), Project Leadership: From Theory to Practice, Project Management Institute, Newtown Square, PA, 1998.

Shapira, A., A. Laufer, and A. Shenhar, “Anatomy of Decision Making in Project Teams,” The International Journal of Project Management, Vol. 12, No. 3, pp. 172–182, 1994.

Williams, J., Team Development for High-Tech Project Managers, Artech House, Norwood, MA, 2002.

Work Breakdown Structure

Boehm, B. W., E. Horowitz, R. Madachy, D. Reifer, B. K. Clark, B. Steece, A. W. Brown, S. Chulani, and C. Abts, Software Cost Estimation with COCOMO II, Prentice Hall, Upper Saddle River, NJ, 2000.

Globerson, S., “Impact of Various Work Breakdown Structures on Project Conceptualization,” International Journal of Project Management, Vol. 12, No. 3, pp. 165–171, 1994.

Globerson, S., and A. Shtub, “Estimating the Progress of Projects,” Engineering Management Journal, Vol. 7, No. 3, pp. 39–44, 1995.

Globerson, S., “Scope Management,” in J. Knutson (Editor), Project Management for Business Professionals, Chapter 4, pp. 49–62, John Wiley & Sons, New York, 2001.

Haugan, G., Effective Work Breakdown Structures, Management Concepts, Vienna, VA, 2001.

ISO 10007, “Quality Management – Guidelines for Configuration Management,” International Organization for Standardization, Geneva, 1995.

Luby, R. E., D. Peel, and W. Swahl, “Component Based Work Breakdown Structure,” Project Management Journal, Vol. 26, No. 4, pp. 38–43, 1995.

Luon, D., Practical CM: Best Configuration Management Practices for the 21st Century, Fourth Edition, Raven Publishing, Pittsfield, MA, 2003.

MIL-STD-881, A Work Breakdown Structure for Defense Materiel Items, U.S. Department of Defense, Washington, DC, 1975.

PMI Standards Committee, A Guide to the Project Management Body of Knowledge (PMBOK), Project Management Institute, Newtown Square, PA, 2000 (http://www.PMI.org).

Rad, P., Project Estimation and Cost Management, Management Concepts, Vienna, VA, 2002.

Raz, T., “An Iterative Screening Methodology for Selecting Project Alternatives,” Project Management Journal, Vol. 28, No. 4, pp. 34–39, 1997.

Raz, T. and S. Globerson, “Effective Sizing and Content Definition of Work Packages,” Project Management Journal, Vol. 29, No. 4, pp. 17– 23, 1998.

Shtub, A. and T. Raz, “Optimal Segmentation of Projects – Schedule and Cost Considerations,” European Journal of Operational Research, Vol. 95, No. 2, pp. 278–283, 1996.

Human Resources

Adams, J., Conceptual Blockbusting: A Guide to Better Ideas, Perseus Publishing, New York, 2001.

Carmel, E., Global Software Teams, Prentice Hall, Upper Saddle River, NJ, 1999.

Cohen, L., Quality Function Deployment, Prentice Hall, Upper Saddle River, NJ, 1995.

Farson, R., Management of the Absurd, Simon & Schuster, New York, 1996.

Flannes, S. and G. Levin, People Skills for Project Managers, Management Concepts, Vienna, VA, 2001.

Hackman, R., Leading Teams, Harvard Business School Press, Boston, 2002.

Hauser, J. R. and D. Clausing, “The House of Quality,” Harvard Business Review, Vol. 66, No. 3, pp. 62–73, 1988.

Kotter, J., Leading Change, Harvard Business School Press, Boston, 1996.

Rahim, A., Managing Conflict in Organizations, Third Edition, Greenwood Publishing/Quorum Books, Westport, CT, 2001.

Sherman, P. S., “Eight Big Masters of Innovation,” Fortune, pp. 66–81, October 15, 1984.

Vaill, P., Managing as a Performing Art, Jossey-Bass, San Francisco, 1990.

Verma, V., Managing the Project Team, Project Management Institute, Newtown Square, PA, 1997.

Warfield, J. N., H. Geschka, and R. Hamilton, Methods of Idea Management, Battelle Institute and Academy of Contemporary Problems, Columbus, OH, 1975.

Chapter 8 Management of Product, Process, and Support Design

8.1 Design of Products, Services, and Systems

Design is the conversion of an idea or a need into information from which a new service, product, or system can be developed. It is the “transformation from vague concepts to defined objects, from abstract thoughts to the solution of detailed problems” (Hales 1993). Design is an important part of the life cycle of any product or system. It is also part of any project, either as a phase in the project life cycle or as a process used to introduce changes in existing designs as a result of new information and changes in the environment. Design has an impact on the deliverables of the project as well as on its cost, schedule, and risk. Furthermore, the satisfaction of project stakeholders depends to a large extent on management of the design process and its results.

The project manager should not assume that good engineers are guaranteed to produce good designs. It is the project manager’s responsibility to implement an appropriate design process and to manage the design effort throughout the life cycle of the project to maximize the project’s technological competitive edge.

A good design starts with the selection of the right technology, where “right” connotes two primary benefits: first, it provides a market advantage through differentiation of value added; second, it provides a cost advantage through improved overall system economies. To use technology effectively, an organization must address four elementary questions: (1) What is the basis of competition in our industry? (2) To compete, which technologies must we master? (3) How competitive are we in these areas? (4) What is our technology strategy? In embryonic and growth industries, technology frequently drives the strategy, whereas in more mature fields, technology must be an enabling resource for manufacturing, marketing, and customer service. The United States excels at technology-driven innovation that creates whole new enterprises. By contrast, Japan excels at incremental advances in existing products and processes.

In the following sections, general-purpose tools and techniques for managing the design process are presented. Specific applications, such as CASE (computer-aided software engineering) tools for software design, though interesting in their own right, fall outside the scope of the text and will not be discussed.

8.1.1 Principles of Good Design

The success of products, services, and systems is heavily dependent on the quality of the design process. Most product or service characteristics and corresponding performance measures are determined in the design phase, including:

1. Operational or functional capability. This is a measure of the system’s ability to perform tasks and satisfy the market’s or customer’s needs. For example, the range of an electric passenger vehicle, its payload, and its speed are possible measures of operational or functional capabilities. In software selection, the ability to perform all required functions within acceptable time standards is an operational performance measure.

2. Timeliness. This measure relates to the time at which the system is available to perform its mission (i.e., the successful completion of acceptance tests and the start of regular operations).

3. Quality. Quality measures the system’s design with respect to market or customer needs and with respect to its design specifications. Therefore, the quality of an alternative design refers to the system’s components, the integration of those components, and the compatibility of the proposed system with the environment in which it will operate. Quality is defined in specific terms for systems such as planes, boats, buildings, and computers, where a host of national and international standards exist. The Institute of Electrical and Electronics Engineers is at the forefront of setting standards for electrical equipment and devices. If adequate standards are not available, then desired quality levels should be specified for both the operational (functional) and the technical (design and workmanship) aspects of the system. The Software Engineering Institute, based at Carnegie Mellon University, has taken the lead in setting standards for software quality and reliability.

4. Reliability. This measure relates to the probability that a product, system, or service will operate properly for a specified period of time under specified conditions without failure. In its simplest form, two factors—the mean time between failures (MTBF) and the mean time to repair the system (MTTR)—can be combined to calculate the proportion of time that the system is available.

Reliability = [MTBF / (MTBF + MTTR)] × 100%

There is a correlation between reliability and quality, as a high quality of design, workmanship, and integration usually leads to a high level of reliability. However, reliability also depends on the type of technology used and the operating environment.
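The availability calculation implied by this formula is easy to sketch in code; the MTBF and MTTR figures below are illustrative values, not data from the text.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Percentage of time the system is operational:
    MTBF / (MTBF + MTTR) x 100%."""
    return mtbf_hours / (mtbf_hours + mttr_hours) * 100.0

# Illustrative values: 500 hours between failures, 10 hours to repair.
print(round(availability(500.0, 10.0), 2))  # 98.04
```

Note that improving either factor raises availability: halving MTTR to 5 hours lifts the result above 99%.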

5. Compatibility. This measure corresponds to the system’s ability to operate in harmony with existing or planned systems. For example, a new management information system has a higher degree of compatibility if it can use existing databases. Electronic systems are said to be compatible when they can operate without interference from the electromagnetic radiation put out by other systems in the same vicinity. A new software package is compatible when it has the ability to import and export data from other information systems and databases. Organizations seek to minimize disruption and costs associated with implementing changes.

6. Adaptability. This measure evaluates a system’s ability to operate in conditions other than those initially specified. For example, a communication system that is designed for ground use would be considered a highly adaptable system if it could be used in high-altitude supersonic aircraft without losing any of its functionality. Systems with high adaptability are preferred when future operating conditions are difficult to forecast. A highly adaptable software package is one that can run on different computer types under a variety of operating systems in addition to the computer and operating system specified.

7. Life span. This measure has a direct impact on both cost and effectiveness. Because of learning and efforts at continued process improvement, systems with a longer life span tend to improve over time. A long life span eliminates the need for frequent capital investments and hence reduces the total life-cycle cost (LCC).

8. Simplicity. The process of learning a new system while it is being introduced into an organization depends on its simplicity. A system that is easy to maintain and operate is usually accepted faster and creates fewer difficulties for the user. Furthermore, complicated systems may not be maintained and exercised adequately, especially during startup or periods of change when there is high turnover in the organization. A software package that is simple to operate and maintain is one that is developed according to software engineering standards regarding modularity, documentation, and so on.

9. Safety. The methods by which a system will be operated and maintained should be considered in the advanced development phase. Safety precautions should be introduced and evaluated to minimize the risk of accidents. As with quality, designing a safe system from the start can provide significant benefits over the long run.

10. Commonality. A high level of commonality with other systems either used by or produced by the organization should be a driving force in the design. Commonality has many facets, such as common parts and subsystems, input sources, communication channels, databases, and equipment for troubleshooting and maintenance. Many airlines insist that all aircraft that they buy within a particular class, regardless of manufacturer, have the same engines. Some airlines have taken this one step further and buy only one type of aircraft. In a similar vein, the U.S. Department of Defense developed the computer language Ada in the late 1970s and for many years required that all programs commissioned by any of its branches be written in Ada.

11. Maintainability. Providing adequate maintenance for a system is essential. The loss in operational time due to preventive maintenance must be weighed against the probability of system failure and the need for unscheduled maintenance, which, in turn, reduces the system’s overall effectiveness. Higher levels of maintainability lead to better labor utilization and lower personnel training costs. Part of maintainability is testability—the ability to detect a system failure and pinpoint its source in a timely manner. Higher levels of maintainability and testability contribute to the effectiveness of a system. In software design, well-documented source code and clearly defined interfaces between modules of a software package help in the detection and correction of bugs.

12. Friendliness. This performance measure quantifies the effort and time required to learn how to operate and maintain a system. A friendly system requires less time and skill to learn and hence reduces both direct and indirect labor costs. In software, the use of menus, on-line help, and pointing devices such as a mouse can increase the friendliness of the software package.
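Because these measures often pull in different directions, competing designs are typically compared with a weighted score, in the spirit of the multiple-criteria methods of Chapter 6. The weights and 1–10 scores below are hypothetical, chosen only to illustrate the mechanics.

```python
# Hypothetical weights over four of the measures above (weights sum to 1.0).
weights = {"reliability": 0.4, "maintainability": 0.3,
           "friendliness": 0.2, "life span": 0.1}

# Assumed 1-10 scores for two candidate designs.
design_a = {"reliability": 8, "maintainability": 6, "friendliness": 9, "life span": 7}
design_b = {"reliability": 9, "maintainability": 8, "friendliness": 5, "life span": 6}

def weighted_score(scores: dict, weights: dict) -> float:
    """Sum of weight * score over all measures."""
    return sum(weights[m] * scores[m] for m in weights)

print(round(weighted_score(design_a, weights), 2))  # 7.5
print(round(weighted_score(design_b, weights), 2))  # 7.6
```

Here design B edges out design A on the weighted total even though A is friendlier, because reliability and maintainability carry more weight; changing the weights can reverse the ranking, which is why weight selection deserves as much scrutiny as the scores themselves.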

8.1.2 Management of Technology and Design in Projects

Although some projects do not have a design phase in their life cycle (these are known as built-to-print projects), almost every project must have a mechanism for addressing design changes. Configuration management systems that deal with design changes will be discussed later in the chapter. Design changes are common in all projects because new information that was not available during the design phase may call for a reassessment of the original assumptions and decisions.

Design activities begin with “the voice of the customer,” an analysis of the client’s or organization’s needs, which are translated into technical factors and operation and maintenance plans. A common tool for this process is quality function deployment or the house of quality (see Section 8.4). Once approved by the client or upper management, these requirements are transformed into functional and technical specifications. The last link in the chain is detailed product, process, and support design. Product design centers on the structure and shape of the product. Performance, cost, and quality goals all must be defined. Process design deals with the preparation of a series of plans for manufacture, integration, testing, and quality control. In the case of an item to be manufactured, this means selecting the processes and equipment to be used during production, setting up the part routings, defining the information flows, and ensuring that adequate testing procedures are put into place.

Support design is responsible for selecting the hardware and software that will be used to track and monitor performance once the system becomes operational. This means developing databases, defining report formats, and specifying communication protocols for the exchange of data. A second support function concerns the preparation of manuals for operators and maintenance personnel. Related issues center on the design of maintenance facilities and equipment and development of policies for inventory management. Both process design and support design include the design of training for those who manufacture, test, operate, and maintain the system.

Design efforts are also relevant to many non-engineering projects. Such efforts are required to transform needs into the blueprint of the final product. For example, consider the design of a new insurance policy or a change in the structure of an organization. In the first case, new needs may be detected by the marketing department; for example, a need to provide insurance for pilots of ultra-light airplanes. The designer of the new policy should consider the various risks involved in flying ultra-lights and the cost and probability of occurrence associated with each risk. In addition to the risk to the pilot as a result of accidents, damage to the ultra-light or to a third party must be considered. The designer of the new policy has to decide which options should be available to the customer and how the different options should be combined.

Changes in the business environment and new technologies may generate a need to restructure an organization. For example, if a new product is very successful in a traditional organization and the business associated with this product becomes critical to the financial well-being of the organization, then a special division may be needed to manufacture, market, and support this product. The designer of the new organizational structure should consider questions related to the size of the new division and its mission and relationship with the existing parts of the organization.

In some projects, the design effort represents the most important component of the work. Examples are an architect who is designing a new building and a team of communication experts who are designing a satellite relay network. Usually, design is the basis for production or implementation, depending on the context. In many situations, the design effort may consume only a small portion of the assigned budget and resources. Nevertheless, decisions made in the conceptual design and advanced development phases are likely to have a significant effect on the total budget, schedule, resource requirements, performance, and overall success of the project.

Management of the design effort, from identifying a specific need to implementation of the end product, is the core of the technological aspect of project management. That design takes place in the early stages of most projects does not imply that technological management efforts cease once the blueprints are drawn. Changes in design are notoriously common throughout the life cycle of a project and have to be managed carefully.

8.2 Project Manager’s Role

The project manager is responsible for assigning the total work content specified in the statement of work (SOW) to the participating units. In Chapter 7, we explained how work packages are constructed from the work breakdown structure (WBS) and assigned to the lowest level units in the organizational breakdown structure (OBS). Design efforts are part of the SOW and are similarly allocated to members of the performing organization or outsourced. In either case, it is the responsibility of the project manager to oversee both the design process and the change process throughout the project life cycle. In doing so, five major factors must be considered: quality, cost, time, risk, and performance, the last being measured by the functional attributes of the system. The tools for assessing each of these factors in the initial stages of a project were discussed in Chapter 3, Engineering Economic Analysis; Chapter 5, Project Screening and Selection; and Chapter 6, Multiple-Criteria Methods for Evaluation. In Chapter 4, we discussed life-cycle costing and showed how (design) decisions made early in the project affect the total LCC. To underscore the importance of a good design, a National Science Foundation study showed that more than 70% of the LCC of a product is defined at the conceptual and preliminary design stages. Information and decision support systems play a dynamic role in these stages by focusing management’s efforts on technology and providing feedback to the design team in the form of assessment data.

Techniques discussed previously can be used throughout the life cycle of a project to manage its design processes and thus its technological aspects. Frequently, the design is subject to change as a result of newly identified needs, changing business conditions, and the evolution of the underlying technology. Therefore, management of the design (or technological management) is a continuous process. Manufacturer warranties and an insistent desire for product improvement in some markets may keep a project alive well after delivery of the product(s).

8.3 Importance of Time and the Use of Teams

In the global market, successful companies will be those that learn to make and deliver goods and services faster than their competitors. “Turbo marketers,” a term coined by Kotler and Stonich (1991), have a distinct advantage in markets where customers highly value time compression and are willing to pay a premium or to increase purchases. Moreover, in certain high-tech areas, such as semiconductor manufacturing and telecommunications, where performance is increasing and price is decreasing, survival depends on the rapid introduction of new technologies.

Once a company has examined the demand for its product, it can begin to reduce cycle time. Although the implementation effort and cost required to reduce cycle time will be substantial, the payoff can be great. To create a sustainable advantage, companies must couple the so-called “soft” aspects of management with programs aimed at achieving measurable time-based results.

A trend in technology management is to perform all major components of design concurrently. This approach, aptly known as concurrent engineering, is based on the concept that the parallel execution of the major design components will shorten project life cycles and thus reduce the time to market for new products. In an era of time-based competition when the shelf life of some high-tech items may be as short as six months, this can make the difference between mere survival and material profits.

Studies by the consulting firm McKinsey & Co. have shown repeatedly that being a few months late to market is even worse than having a 30% development cost overrun. Figure 8.1 points out the difference in revenue when a product is on time or late. The model underlying the graph assumes that there are three phases in the product’s commercial life: a growth phase (when sales increase at a fixed rate regardless of entry time), a stagnation phase (when sales level off), and a decline phase (when sales decrease to zero). Figure 8.1 shows that a delay causes a significant decline in revenue. Suppose that a market has a six-month growth period followed by a year of stagnation and a decline to zero sales in the succeeding eight months. Then, being late to market by three months reduces revenues by 36%. Thus, a delay of roughly one-eighth of the product lifetime reduces income by more than one-third. Such a loss can be especially severe because the largest profits are usually realized during the growth phase.

Figure 8.1 Lost revenue as a result of delay in reaching market.

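The three-phase model can be sketched as a trapezoidal sales curve. The sketch below uses the parameters from the example in the text (6-month growth, 12-month stagnation, 8-month decline) but adds a simplifying assumption of its own: the market window closes on a fixed date and a late entrant ramps up at the same rate, so each month of delay forfeits roughly one month of peak sales. The exact percentage lost depends on how the delay is assumed to reshape the curve, so the number below is illustrative rather than a reproduction of Figure 8.1.

```python
def revenue(growth: float, stagnation: float, decline: float,
            delay: float = 0.0, peak: float = 1.0) -> float:
    """Area under a trapezoidal sales curve, in peak-months.

    Assumption (not from the text): the market window is fixed in
    calendar time and a late entrant ramps at the same rate, so a
    delay of d months shortens the flat (peak-sales) phase by d.
    """
    on_time = peak * (growth / 2 + stagnation + decline / 2)
    return on_time - peak * delay

on_time = revenue(6, 12, 8)           # 19.0 peak-months
late = revenue(6, 12, 8, delay=3)     # 16.0 peak-months
print(f"revenue lost to a 3-month delay: {(on_time - late) / on_time:.0%}")
```

Under this deliberately optimistic assumption the loss is about 16% of lifetime revenue; the figure’s more pessimistic model, in which the late entrant also captures a lower sales peak, produces the larger loss quoted in the text.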

The application of concurrent engineering principles to technology management requires thoughtful planning and oversight. There is a clear need to inform the product engineers, process engineers, and support specialists of the current status of the design and to keep them updated on all change requests. This is accomplished by the configuration management systems discussed later in the chapter.

In the following sections, we explore concurrent engineering and configuration management and describe the risk and quality aspects of technological management.

8.3.1 Concurrent Engineering and Time-Based Competition

The ability to design and produce high-quality products that satisfy a real need at a competitive price was, for many years, almost a sure guarantee of commercial success. With the explosion of electronic and information technology, a new factor—time—has become a critical element in the equation. The ability to reduce the time required to develop new products and bring them to market is considered by many the next industrial battleground. For example, the Boeing 777 transport design took a year and a half less than that of its predecessor, the 767, permitting the company to introduce it in time to stave off much of the competition from the European Airbus. Similarly, John Deere’s success in trimming development time for new products by 60% has enabled it to maintain its position as world leader in farm equipment in the face of a growing challenge from the Japanese. This was done using the concurrent engineering (CE) approach to support time-based competition. CE’s major advantage is in creating designs that are more easily manufactured (Fleischer and Liker 1997).

CE uses project scheduling and resource management techniques in the design process. These techniques, discussed in Chapters 9 and 10, have always been common to the production phase but are now recognized as vital to all life-cycle phases of a project from start to finish. In a CE environment, teams of experts from different disciplines work together to ensure that the design progresses smoothly and that all of the participants share the same, most recent information.

The CE approach replaces the conventional sequential engineering approach, in which new product development starts with one organizational unit (e.g., marketing) laying out product specifications based on customer needs. These specifications are used by engineers to come up with a product design, which in turn serves as the basis for manufacturing engineering to develop the production processes and flows. Only when this last step is approved does support design begin.

Sequential engineering takes longer because all of the design activities are strictly ordered. Furthermore, the design process may be cyclic. For example, if product specifications prepared by marketing cannot be met by available technology, then marketing may have to modify its specifications. Similarly, manufacturing engineering may not be able to translate product design into process design, as a result of technological difficulties or the absence of adequate support (e.g., it may not be economically practical to develop test equipment for a product that has not been designed with testing in mind). In each of these examples, primary activities have to be repeated, increasing time and cost associated with the design process.

CE depends on designing, developing, testing, and building prototype parts and subsystems concurrently, not serially, while designing and developing the equipment to fabricate the new product or system. This does not necessarily mean that all tasks are performed in parallel but rather that the team members from the various departments make their contribution in parallel. A prime objective of CE is to shorten the time from conception to market (or deployment, in the case of government or military systems), so as to be more competitive or responsive to evolving needs.

The basis of CE is teamwork, parallel operations, information sharing, and constant communication among team members. In recent years, the terms integrated product team (IPT) and integrated product development have been used to describe a team that is responsible for the whole design and support process. The IPT concept is discussed in more detail in the next subsection. To be most effective, the team should be multidisciplinary, composed of one or more representatives from each functional area of the organization. The watchword is cooperation. After a century of labor–management confrontation and sequestering employees in job categories, hierarchies, and functional departments, many manufacturers are now seeking teamwork, dialogue, and barrier bashing. By performing product, process, and support design in parallel, there is a much greater likelihood that misunderstandings and problems of incompatibility will be averted over the project’s life cycle. By reducing the length of the design process, overhead and management costs are reduced proportionally, while the elimination of design cycles reduces direct costs as well. These cost-related issues are discussed in detail in Chapter 11. From a marketing point of view, a shorter design process results in the ability to introduce new models more frequently and to target specific models to specific groups of customers. This strategy leads to a higher market share.

Implementation of CE is based on shared databases, good management of design information (this is the subject of configuration management), and computerized design tools such as CAD/CAM (computer-aided design/computer-aided manufacturing) and CASE (computer-aided software engineering). CE is risky and, without proper technological and risk management, results can be calamitous. The two most prominent risks are:

1. Organizational risks. The attempt to cross the lines of functional organizations and to introduce changes into the design process is often met with resistance. One way to overcome this resistance is to form IPTs that are made up of people from the various functional areas. In addition, an educational effort aimed at teaching the advantages and the logic of CE can create a positive atmosphere for this new approach.

2. Technological risks. The simultaneous effort of product, process, and support design should be well coordinated. Configuration management systems are the key to ensuring that the information used by all of the designers is current and correct. The risks associated with a failure to manage this design information in the CE environment are much higher than in sequential engineering, where it is possible to freeze product design once process design starts and to freeze process design once support design starts.

Companies that are considering the introduction of CE techniques should consider projects that have the following characteristics:

1. The project can be classified as developmental (novel applications of known technology) or applied (routine applications of known technology).

2. The team has experience with the technology.

3. The team has received training in quality management and has had the opportunity to apply the concepts in its work.

4. The scale of the project falls somewhere in the range of 5 to 35 full-time staff members for a period of 3 to 30 months.

5. The goal is a product or family of products with clearly defined features and functions.

6. Success is not dependent on invention or significant innovation.

8.3.2 Time Management

One of the goals of CE is to reduce the time that it takes to develop and market new products. Before we can say that a reduction has been achieved, we must have some idea of what the current standards are and what controls them. This is not as clear-cut as it sounds, because few projects proceed smoothly without interference from outside forces. Also, most companies modify their goals as work progresses, making it that much more difficult to measure project length.

Every industry and its constituent firms are in continuous flux, but they all are limited in their flexibility to achieve change. A number of inhibiting factors combine to create a rhythm or tempo in a company that is very difficult to break. Table 8.1 lists some of these factors for manufacturing companies, although each may not be universally applicable at all times. Thoughtful engineering managers develop a feeling for the important factors in their business and how these affect their operations. If possible, they quantify them. This provides a baseline against which improvement can be measured. It is clear that many time-sensitive decisions have an impact on the successful operation of a business and that focusing on only one or two factors to the exclusion of the others is rarely optimal. CE is a business activity, not just an engineering activity. Market success is a function of a firm’s ability to improve all of its key tempo factors by integrating current engineering decisions with business decisions. Important issues are:

TABLE 8.1 Factors that Affect the Tempo of Manufacturing Firms

Technology life
Market forces
Product lifetime costs
Product life
Product development cycle
Process development cycle
Market development cycle
Economic cycle
Workforce hiring/training
Capital/loan acquisition
Long-lead items
Access to limited resources
Manufacturing capacity planning
Competitive product introductions

Integrated Product Team. Many people have written about time management for individuals. CE requires time management for organizations. The principles are the same, but their implementations are somewhat different. Two notorious time wasters are senior people doing junior work and everyone repeating the same tasks. These are both addressed by the IPT approach—forming a multifunctional project team from the appropriate departments and carefully assigning responsibilities to the members. Not everyone is needed full-time on every team, but the organizing plan should indicate where to get resources when needed on a part-time basis. All team members, whether active or not, should be kept informed of progress so that they do not have to waste time catching up when called into play. Examples of people who fall into this category are patent attorneys, illustrators, and technical specialists needed for tricky problems.

The participation of staff from all major functions—marketing, development, manufacturing, finance, and so on—from the first day of the project makes a direct contribution to the reduction of duplicate effort. The marketing person can immediately comment on the desirability of some feature before the development person has spent time on it. Similarly, the development staff can get immediate feedback from manufacturing on the feasibility of a particular design.

Tools. The team organization will lose effectiveness if its members are not provided with appropriate tools. Today, this usually means access to applications software and system support for CAD, CAE, CIM, CASE, and other computer-aided disciplines. Team members must also be trained in the effective use of the tools.

Team empowerment. The IPT organization will also lose effectiveness if there are unnecessary delays in decision making. An empowerment approach enables a team to make the majority of the decisions. The initial program plan should include some major review milestones, called design reviews, when upper management and peer evaluation can influence the course of the project. These meetings should not be determined by the calendar but rather by progress. The same principle is true of meetings among team members. Setting them up every Tuesday at 8:00 a.m. usually leads people to spend all day Monday preparing for Tuesday and all day Wednesday responding to Tuesday. Have frequent team meetings, but schedule them at short notice to deal with issues as they arise. To use the project scheduling terminology, team meetings are activities, not events. Many companies have difficulty implementing the empowerment requirement because it encroaches on established lines of authority. This is one area where CE can actually increase risks.

If there is an important role for upper management to play during the course of day-to-day activities, then it is in assigning access to limited resources. If two or more teams need access to a special piece of equipment, say, for production trials, then there has to be a responsive mechanism in place to set priorities. Again, the initial project plan must cover this situation.

Use of design authorities. Another approach to facilitate decision making is to appoint design authorities in various areas. For example, there could be a technology design authority, a product design authority, a process design authority, and an equipment design authority. The authorities must be legitimate experts in their fields. They do not necessarily do the design work and may not, in fact, be full-time members of the team. Their role is to help the project manager make the final decision when two or more conflicting approaches have been recommended and to provide peer evaluation and review when needed. The design authority should not be called in until the competing approaches have been documented in equivalent detail. He or she is a last resort to help resolve sticky issues. By having design authorities available and identified in the plan, with their roles clearly spelled out, it is possible to facilitate decision making even in complex situations. Nevertheless, the ultimate decision maker is the project manager. The design authorities are consultants who are called on only to evaluate competing solutions and offer their expertise.

Quality. A major time waster is repeating work because of poor quality. Developing procedures that focus on delivering satisfaction to customers, both internal and external, goes a long way in reducing the need to correct or redo work. Obviously, careful selection of team members also goes a long way in ensuring high-quality results. Here is where the best interests of a CE team can conflict with the best interests of individuals. Unless the company implementing the procedures takes special steps to prevent it, working on a CE team can limit growth opportunities for individuals and even eliminate career paths. The project manager wants to be assured of high-quality work in all areas and will tend to select people who have already demonstrated their ability to deliver. The problem can be especially acute for junior staff members who have demonstrated their skills in one area but are not given a chance to expand into other areas because they are continually asked to work on projects that require their known skills.

Bureaucracy. The final time waster of note is lengthy administrative and bureaucratic procedures. Eberhardt Rechtin, a former vice president of engineering at Hewlett-Packard, once said that an approval takes 2^n days, where n is the number of levels of approval needed. The obvious solution to this problem is to empower the project team in advance with all of the necessary approval authority. Again, this means that the initial project plan must be prepared very carefully. Another approach to shortening the time required for administration is to provide the team leader with the authority to eliminate competitive bidding procedures on certain development items involving known vendors. Other bureaucratic red tape should also be eliminated, although this makes sense even in the absence of CE. Many companies assign a full-time administrator/facilitator to CE teams to assist the project manager.

External participation. The best users of CE also extend the concept of the project team to involve key vendors and customers. The customers can help minimize the time required to define and specify the product, facilitate product acceptance procedures, and reduce project risk by either ordering early or at least indicating through a letter of intent what their purchases may be. Vendors can be extraordinarily helpful members of the team by providing technical support for the application of their products and materials and by providing preferential access to scarce resources. In return, they get some indication of likely sales. If a company uses formal vendor certification procedures, they should extend them to “certifying” selected key vendors as participants in CE development programs.

Toyota example. To cut the length of the design cycle and to improve the quality of the design, Toyota implemented a design process in which the IPT plays a major role. Each IPT is headed by a shusa, or "big boss," whose name becomes synonymous with the project. Members are assigned to the project for its life but retain ties with the functional area (continuity) from which they were drawn. Team member performance is evaluated by the shusa and is used to determine subsequent assignments. Team members sign pledges to do exactly what everyone has agreed on as a group and try to resolve critical design tradeoffs early. The number of team members is highest at the outset of a project. As development proceeds, the number dwindles as certain specialties (e.g., market assessment) are no longer needed.

8.3.3 Guideposts for Success

Tom Peters (1991), a well-known management consultant, postulated the following guideposts to help organizations implement the team concept:

1. Set goals, deadlines, or key subsystem tests. Successful project teams are characterized by a clear goal, although the exact path is left unclear to induce creativity. Also, three to six strict due dates for subsystem technical and market tests/experiments are set and adhered to religiously.

2. Insist on 100% assignment to the team. Key function members must be assigned full time for the project’s duration.

3. Place key functions on-board from the outset. Members from sales, distribution, marketing, finance, purchasing, operations/manufacturing, and design/engineering should be part of the project team from day 1. Legal, personnel, and others should provide full-time members for part of the project.

4. Give members authority to commit to their function. With few exceptions, each member should be able to commit resources from his or her function to project goals and deadlines without second-guessing from higher-ups. Top management must establish and enforce this rule from the start.

5. Keep team-member destiny in the hands of the project leader. For consulting firms such as Booz, Allen & Hamilton and McKinsey & Co., life is a series of projects. The team leader might be from San Francisco or Sydney, Australia; either way, his or her evaluation of team members’ performance will make or break a career. In general, then, the project boss rather than the functional boss should evaluate team members. Otherwise, the project concept falls flat.

6. Make careers a string of projects. A career in a “project-minded company” is viewed as a string of multifunction tasks.

7. Live together. Project teams should be sequestered from headquarters as much as possible. Team camaraderie and commitment depend to a surprising extent on “hanging out” together, isolated from one’s normal set of functional colleagues.

8. Remember the social element. Spirit is important: "We're in it together." "Mission impossible." High spirits are not accidental. The challenge of the task per se is central. Beyond that, the successful team leader facilitates what psychologists call "bonding." This can take the form of "signing up" ceremonies upon joining the team, frequent (at least monthly) milestone celebrations, and humorous awards for successes and setbacks alike.

9. Allow outsiders in. The product development team notion is incomplete unless outsiders participate. Principal vendors, distributors, and “lead” (future test-site) customers should be full-time members. Outsiders not only contribute directly but also add authenticity and enhance the sense of distinctiveness and task commitment.

10. Construct self-contained systems. At the risk of duplicating equipment and support, the engaged team should have its own workstations, local area network, database, and so on. This is necessary to foster an "it's-up-to-us-and-we've-got-the-wherewithal" environment. However, the need for self-sufficiency must be balanced against the additional risk created by too much isolation. Problems may arise when it comes time to integrate the project with the rest of the firm.

11. Permit the teams to pick their own leader. A champion blessed by management gets things under way, but successful project teams usually select and alter their own leaders as circumstances warrant. It is expected that leadership will shift over the course of the project, as one role and then another dominates a particular stage (engineering first, then manufacturing, and distribution later).

12. Honor project leadership skills. Nothing less than a wholesale reorientation of the firm is called for: away from the "vertical" organization, in which functional specialists dominate, and toward the "horizontal," in which cross-functional teams are the norm. In this environment, horizontal project leadership becomes the most cherished skill in the firm, rewarded by dollars and promotions. For junior members, good team skills are also valued and rewarded.

8.3.4 Industrial Experience

Consider a few of the real-world success stories of CE implementation that have been documented and reported at professional conferences.

For Cadillac, a winner of the Malcolm Baldrige National Quality Award, CE involved a new culture and a new way of designing and building its extraordinarily complex product: luxury cars. Engineers, designers, and assemblers are now members of vehicle, vehicle-systems, and product (parts) teams that work in close coordination rather than belonging to separate, isolated functional areas as before. Assembly line workers, dealers, repair shop managers, and customers provide insight to engineers involved in all stages of design. To inspire cultural change, Cadillac created the position of champion of simultaneous engineering (a role that combines keeping the process on track, preaching to the believers, and motivating the recalcitrant) and sent 1,400 employees to seminars on quality management. It also established an "Assembly Line Effectiveness Center," where production workers rub shoulders with engineers, critiquing prototypes for manufacturability.

John Deere’s Industrial Equipment Division in Moline, Illinois, has had two CE efforts. The first, begun in 1984, failed because management retained the traditional manufacturing departments. Designers and process engineers who were assigned to task groups remained loyal to the interests of their disciplines rather than to the overall enterprise. In 1988, the division reorganized. Staff members now report to product teams and answer to team leaders, not functional department heads. Early in the design stage, teams create a product definition document that describes the product precisely, sets deadlines, and lays out the manufacturing plan. Products no longer change as departments work on them. The result has been gradual improvements in manufacturing processes. There are now fewer experimental designs, and it is possible to produce prototypes in the production environment. The advantage of this is that in addition to checking for flaws in the prototypes themselves, engineers can simultaneously perfect the manufacturing process.

A third example of a successful CE implementation is Federal-Mogul, a precision parts manufacturer in Southfield, Michigan. The first Federal-Mogul unit to adopt CE was its troubled oil-seal business. Other units quickly followed. Success in the oil-seal business, in which products are simple but must meet exacting standards, requires rapid turnaround on bids and prototypes and strong customer service. By providing estimates to customers in minutes instead of weeks and producing sample seals in 20 working days instead of 20 weeks, Federal-Mogul saw its market share soar. It accomplished this by adopting a cross-functional product team approach to manufacturing, encouraging consensus building and empowerment, and introducing new information technologies. Key applications include networks that allow all plants to share CAD drawings and machine tools, a scheduling system that automatically notifies appropriate team members when a new order comes in, an engineering data management system, and an online database of past orders.

8.3.5 Unresolved Issues

From a technical point of view, recent advances in hardware and software, database systems, electronic communications, and the various components of computer-integrated manufacturing have facilitated the implementation of CE. At the first International Workshop on CE Design, sponsored by the National Science Foundation (Hsu et al. 1991), four themes emerged from the discussions: models, tools, training, and culture. Participants identified measurement issues and tradeoffs that will inform future models of new product development. They concluded that tools must focus on expanded CAD/CAM/CAPP capabilities with strong interfaces. Training is needed for multiple job stations, in the impact of design on downstream tasks, and in teamwork and individual responsibility. Corporate culture, and how to change it, must be better understood. Important aspects of culture to be clarified include incentives and performance, myths that inhibit an organization's progress, and the management of change.

One of the primary roles of CE is identifying the interdependencies and constraints that exist over the life cycle of a product and ensuring that the design team is aware of them. Nevertheless, care must be taken in the early stages to avoid overwhelming the design team with constraints and stifling their creativity for the sake of simplicity. A truly creative design that satisfies customer requirements in a superior manner may justify the expense of relaxing some of the development and process guidelines.

Although a basic tenet of CE is that input to the design process should come from all life cycle stages, there is much ambiguity about how to achieve this. At exactly what point in the CE process should discussion of assembly sequences, tolerances, and support requirements be introduced? Also, tradeoffs abound. For example, consolidation of parts is desirable, yet too much consolidation implies costly and inefficient procurement and inventorying. A balance must be struck among meeting the customer's specifications, designing for manufacturability, and minimizing life-cycle cost (LCC). This means that cost information should be available to the design teams throughout a project.

8.4 Supporting Tools

8.4.1 Quality Function Deployment

A quality product is one that meets or exceeds stakeholders' needs and expectations. Thus, the design quality is the degree to which product, process, and support design meets or exceeds stakeholders' needs and expectations, and the quality of conformance is the degree to which the product, service, or system delivered meets the design specifications.

Clearly, a quality design is the translation of needs and expectations into the blueprints of the product, process, and support system. An important technique that accompanies quality design and CE is quality function deployment (QFD), introduced by Yoji Akao. QFD is built around interdisciplinary teams whose members study the market (customers) to determine the required characteristics of the product or system. These characteristics are classified into customer attributes and are listed in order of their relative importance to the customer.

The ranked attributes, also called the “What’s,” are input to a second step in which team members translate the attributes into technical specifications, or “How’s.” Thus, an attribute such as “a tape recorder that is easy to carry around” can be translated into physical dimensions and weight that can be used to guide product development. This example, of course, led to Sony’s Walkman. The joint effort by the team members promotes CE while ensuring better communication and easier integration of the basic functions.
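The translation from weighted "What's" to prioritized "How's" can be sketched in a few lines of code. The attributes, design characteristics, importance weights, and correlation values below are illustrative assumptions (loosely echoing the portable tape-recorder example), and the 9-3-1 correlation scale is one common QFD convention, not a value taken from the text:

```python
# Minimal sketch of a QFD quality chart computation.
# All data here are hypothetical, for illustration only.

# Customer attributes ("What's") with relative importance weights.
whats = {"easy to carry": 5, "long battery life": 3, "good sound": 4}

# Design characteristics ("How's").
hows = ["weight (g)", "volume (cm^3)", "power draw (mW)", "speaker quality"]

# Correlation matrix: rows follow whats, columns follow hows.
# Conventional scale: 9 = strong, 3 = moderate, 1 = weak, 0 = none.
corr = {
    "easy to carry":     [9, 9, 0, 0],
    "long battery life": [1, 0, 9, 3],
    "good sound":        [0, 1, 3, 9],
}

# Weighted importance of each design characteristic:
# sum over attributes of (attribute weight * correlation).
scores = [sum(whats[w] * corr[w][j] for w in whats) for j in range(len(hows))]

# Rank the "How's" so the team knows where design effort matters most.
for how, score in sorted(zip(hows, scores), key=lambda p: -p[1]):
    print(f"{how}: {score}")
```

With these assumed numbers, volume and weight score highest, telling the team that miniaturization dominates the design agenda, which is the kind of inference the quality chart is meant to support.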

A matrix called the quality chart is used in the QFD process. The rows of the quality chart list in hierarchical order the attributes (the “What’s”); the design characteristics (the “How’s”) are similarly listed across the columns. Each cell in the resulting matrix corresponds to a lower level attribute intersection with a lower level design characteristic. Entries indicate the correlation between the corresponding attribute and design characteristic. From the matrix, team members can infer the relative importance of the attributes along