Details

Trustworthy Systems Through Quantitative Software Engineering


Quantitative Software Engineering Series, Volume 1, 1st edition

by: Lawrence Bernstein, C. M. Yuhas

141,99 €

Publisher: Wiley
Format: PDF
Published: October 3, 2005
ISBN/EAN: 9780471750321
Language: English
Number of pages: 464

DRM-protected eBook; reading it requires software such as Adobe Digital Editions and an Adobe ID.

Description

A benchmark text on software development and quantitative software engineering<br /> <br /> "We all trust software. All too frequently, this trust is misplaced. Larry Bernstein has created and applied quantitative techniques to develop trustworthy software systems. He and C. M. Yuhas have organized this quantitative experience into a book of great value to make software trustworthy for all of us."<br /> -Barry Boehm<br /> <br /> Trustworthy Systems Through Quantitative Software Engineering proposes a novel, reliability-driven software engineering approach, and discusses human factors in software engineering and how these affect team dynamics. This practical approach gives software engineering students and professionals a solid foundation in problem analysis, allowing them to meet customers' changing needs by tailoring their projects to meet specific challenges, and complete projects on schedule and within budget.<br /> <br /> Specifically, it helps developers identify customer requirements, develop software designs, manage a software development team, and evaluate software products to customer specifications. Students learn "magic numbers of software engineering," rules of thumb that show how to simplify architecture, design, and implementation.<br /> <br /> Case histories and exercises clearly present successful software engineers' experiences and illustrate potential problems, results, and trade-offs. Also featuring an accompanying Web site with additional and related material, Trustworthy Systems Through Quantitative Software Engineering is a hands-on, project-oriented resource for upper-level software and computer science students, engineers, professional developers, managers, and professionals involved in software engineering projects. <p>An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.<br /> <br /> An Instructor Support FTP site is also available.</p>
<p>Preface xvii</p> <p>Acknowledgment xxv</p> <p><b>Part 1 Getting Started 1</b></p> <p><b>1. Think Like an Engineer—Especially for Software 3</b></p> <p>1.1 Making a Judgment 4</p> <p>1.2 The Software Engineer’s Responsibilities 6</p> <p>1.3 Ethics 6</p> <p>1.4 Software Development Processes 11</p> <p>1.5 Choosing a Process 12</p> <p>1.5.1 No-Method “Code and Fix” Approach 15</p> <p>1.5.2 Waterfall Model 16</p> <p>1.5.3 Planned Incremental Development Process 18</p> <p>1.5.4 Spiral Model: Planned Risk Assessment-Driven Process 18</p> <p>1.5.5 Development Plan Approach 23</p> <p>1.5.6 Agile Process: an Apparent Oxymoron 25</p> <p>1.6 Reemergence of Model-Based Software Development 26</p> <p>1.7 Process Evolution 27</p> <p>1.8 Organization Structure 29</p> <p>1.9 Principles of Sound Organizations 31</p> <p>1.10 Short Projects—4 to 6 Weeks 33</p> <p>1.10.1 Project 1: Automating Library Overdue Book Notices 33</p> <p>1.10.2 Project 2: Ajax Transporters, Inc. Maintenance Project 34</p> <p>1.11 Problems 35</p> <p><b>2. People, Product, Process, Project—The Big Four 39</b></p> <p>2.1 People: Cultivate the Guru and Support the Majority 40</p> <p>2.1.1 How to Recognize a Guru 41</p> <p>2.1.2 How to Attract a Guru to Your Project 42</p> <p>2.1.3 How to Keep Your Gurus Working 43</p> <p>2.1.4 How to Support the Majority 43</p> <p>2.2 Product: “Buy Me!” 45</p> <p>2.2.1 Reliable Software Products 46</p> <p>2.2.2 Useful Software Products 47</p> <p>2.2.3 Good User Experience 48</p> <p>2.3 Process: “OK, How Will We Build This?” 49</p> <p>2.3.1 Agile Processes 49</p> <p>2.3.2 Object-Oriented Opportunities 53</p> <p>2.3.3 Meaningful Metrics 60</p> <p>2.4 Project: Making It Work 61</p> <p>2.5 Problems 65</p> <p>2.6 Additional Problems Based on Case Studies 67</p> <p><b>Part 2 Ethics and Professionalism 73</b></p> <p><b>3. 
Software Requirements 75</b></p> <p>3.1 What Can Go Wrong With Requirements 75</p> <p>3.2 The Formal Processes 76</p> <p>3.3 Robust Requirements 81</p> <p>3.4 Requirements Synthesis 84</p> <p>3.5 Requirements Specification 86</p> <p>3.6 Quantitative Software Engineering Gates 87</p> <p>3.7 sQFD 88</p> <p>3.8 ICED-T Metrics 91</p> <p>3.8.1 ICED-T Insights 92</p> <p>3.8.2 Using the ICED-T Model 94</p> <p>3.9 Development Sizing and Scheduling With Function Points 95</p> <p>3.9.1 Function Point Analysis Experience 95</p> <p>3.9.2 NCSLOC vs Function Points 96</p> <p>3.9.3 Computing Simplified Function Points (sFP) 97</p> <p>3.10 Case Study: The Case of the Emergency No-Show Service 98</p> <p>3.11 Problems 103</p> <p><b>4. Prototyping 107</b></p> <p>4.1 Make It Work; Then Make It Work Right 107</p> <p>4.1.1 How to Get at the Governing Requirements 108</p> <p>4.1.2 Rapid Application Prototype 108</p> <p>4.1.3 What’s Soft Is Hard 110</p> <p>4.2 So What Happens Monday Morning? 111</p> <p>4.2.1 What Needs to Be Prototyped? 111</p> <p>4.2.2 How Do You Build a Prototype? 112</p> <p>4.2.3 How Is the Prototype Used? 112</p> <p>4.2.4 What Happens to the Prototype? 114</p> <p>4.3 It Works, But Will It Continue to Work? 116</p> <p>4.4 Case Study: The Case of the Driven Development 116</p> <p>4.4.1 Significant Results 119</p> <p>4.4.2 Lessons Learned 122</p> <p>4.4.3 Additional Business Histories 123</p> <p>4.5 Why Is Prototyping So Important? 128</p> <p>4.6 Prototyping Deficiencies 130</p> <p>4.7 Iterative Prototyping 130</p> <p>4.8 Case Study: The Case of the Famished Fish 131</p> <p>4.9 Problems 133</p> <p><b>5. 
Architecture 137</b></p> <p>5.1 Architecture Is a System’s DNA 137</p> <p>5.2 Pity the Poor System Administrator 139</p> <p>5.3 Software Architecture Experience 141</p> <p>5.4 Process and Model 142</p> <p>5.5 Components 144</p> <p>5.5.1 Components as COTS 144</p> <p>5.5.2 Encapsulation and Abstraction 145</p> <p>5.5.3 Ready or Not, Objects Are Here 146</p> <p>5.6 UNIX 148</p> <p>5.7 TL1 149</p> <p>5.7.1 Mission 150</p> <p>5.7.2 Comparative Analysis 151</p> <p>5.7.3 Message Formatting 152</p> <p>5.7.4 TL1 Message Formulation 152</p> <p>5.7.5 Industry Support of TL1 152</p> <p>5.8 Documenting the Architecture 153</p> <p>5.8.1 Debriefing Report 154</p> <p>5.8.2 Lessons Learned 154</p> <p>5.8.3 Users of Architecture Documentation 154</p> <p>5.9 Architecture Reviews 155</p> <p>5.10 Middleware 156</p> <p>5.11 How Many Times Before We Learn? 158</p> <p>5.11.1 Comair Cancels 1100 Flights on Christmas 2004 158</p> <p>5.11.2 Air Traffic Shutdown in September 2004 159</p> <p>5.11.3 NASA Crashes into Mars, 2004 159</p> <p>5.11.4 Case Study: The Case of the Preempted Priorities 160</p> <p>5.12 Financial Systems Architecture 163</p> <p>5.12.1 Typical Business Processes 163</p> <p>5.12.2 Product-Related Layer in the Architecture 164</p> <p>5.12.3 Finding Simple Components 165</p> <p>5.13 Design and Architectural Process 166</p> <p>5.14 Problems 170</p> <p><b>6. 
Estimation, Planning, and Investment 173</b></p> <p>6.1 Software Size Estimation 174</p> <p>6.1.1 Pitfalls and Pratfalls 174</p> <p>6.1.2 Software Size Metrics 175</p> <p>6.2 Function Points 176</p> <p>6.2.1 Fundamentals of FPA 176</p> <p>6.2.2 Brief History 176</p> <p>6.2.3 Objectives of FPA 177</p> <p>6.2.4 Characteristics of Quality FPA 177</p> <p>6.3 Five Major Elements of Function Point Counting 177</p> <p>6.3.1 EI 177</p> <p>6.3.2 EO 178</p> <p>6.3.3 EQ 178</p> <p>6.3.4 ILF 178</p> <p>6.3.5 EIF 179</p> <p>6.4 Each Element Can Be Simple, Average, or Complex 179</p> <p>6.5 Sizing an Automation Project With FPA 182</p> <p>6.5.1 Advantages of Function Point Measurement 183</p> <p>6.5.2 Disadvantages of Function Point Measurement 184</p> <p>6.5.3 Results Common to FPA 184</p> <p>6.5.4 FPA Accuracy 185</p> <p>6.6 NCSLOC Metric 186</p> <p>6.6.1 Company Statistics 187</p> <p>6.6.2 Reuse 187</p> <p>6.6.3 Wideband Delphi 189</p> <p>6.6.4 Disadvantages of SLOC 190</p> <p>6.7 Production Planning 192</p> <p>6.7.1 Productivity 192</p> <p>6.7.2 Mediating Culture 192</p> <p>6.7.3 Customer Relations 193</p> <p>6.7.4 Centralized Support Functions 193</p> <p>6.8 Investment 195</p> <p>6.8.1 Cost Estimation Models 195</p> <p>6.8.2 COCOMO 197</p> <p>6.8.3 Scheduling Tools—PERT, Gantt 205</p> <p>6.8.4 Project Manager’s Job 207</p> <p>6.9 Example: Apply the Process to a Problem 208</p> <p>6.9.1 Prospectus 208</p> <p>6.9.2 Measurable Operational Value (MOV) 209</p> <p>6.9.3 Requirements Specification 209</p> <p>6.9.4 Schedule, Resources, Features—What to Change? 214</p> <p>6.10 Additional Problems 216</p> <p><b>7. 
Design for Trustworthiness 223</b></p> <p>7.1 Why Trustworthiness Matters 224</p> <p>7.2 Software Reliability Overview 225</p> <p>7.3 Design Reviews 228</p> <p>7.3.1 Topics for Design Reviews 229</p> <p>7.3.2 Modules, Interfaces, and Components 230</p> <p>7.3.3 Interfaces 234</p> <p>7.3.4 Software Structure Influences Reliability 236</p> <p>7.3.5 Components 238</p> <p>7.3.6 Open-Closed Principle 238</p> <p>7.3.7 The Liskov Substitution Principle 239</p> <p>7.3.8 Comparing Object-Oriented Programming With Componentry 240</p> <p>7.3.9 Politics of Reuse 240</p> <p>7.4 Design Principles 243</p> <p>7.4.1 Strong Cohesion 243</p> <p>7.4.2 Weak Coupling 243</p> <p>7.4.3 Information Hiding 244</p> <p>7.4.4 Inheritance 244</p> <p>7.4.5 Generalization/Abstraction 244</p> <p>7.4.6 Separation of Concerns 245</p> <p>7.4.7 Removal of Context 245</p> <p>7.5 Documentation 246</p> <p>7.6 Design Constraints That Make Software Trustworthy 248</p> <p>7.6.1 Simplify the Design 248</p> <p>7.6.2 Software Fault Tolerance 249</p> <p>7.6.3 Software Rejuvenation 251</p> <p>7.6.4 Hire Good People and Keep Them 254</p> <p>7.6.5 Limit the Language Features Used 254</p> <p>7.6.6 Limit Module Size and Initialize Memory 255</p> <p>7.6.7 Check the Design Stability 255</p> <p>7.6.8 Bound the Execution Domain 259</p> <p>7.6.9 Engineer to Performance Budgets 260</p> <p>7.6.10 Reduce Algorithm Complexity 263</p> <p>7.6.11 Factor and Refactor 266</p> <p>7.7 Problems 268</p> <p><b>Part 3 Taking the Measure of the System 275</b></p> <p><b>8. 
Identifying and Managing Risk 277</b></p> <p>8.1 Risk Potential 278</p> <p>8.2 Risk Management Paradigm 279</p> <p>8.3 Functions of Risk Management 279</p> <p>8.4 Risk Analysis 280</p> <p>8.5 Calculating Risk 282</p> <p>8.6 Using Risk Assessment in Project Development: The Spiral Model 286</p> <p>8.7 Containing Risks 289</p> <p>8.7.1 Incomplete and Fuzzy Requirements 289</p> <p>8.7.2 Schedule Too Short 290</p> <p>8.7.3 Not Enough Staff 291</p> <p>8.7.4 Morale of Key Staff Is Poor 292</p> <p>8.7.5 Stakeholders Are Losing Interest 295</p> <p>8.7.6 Untrustworthy Design 295</p> <p>8.7.7 Feature Set Is Not Economically Viable 296</p> <p>8.7.8 Feature Set Is Too Large 296</p> <p>8.7.9 Technology Is Immature 296</p> <p>8.7.10 Late Planned Deliveries of Hardware and Operating System 298</p> <p>8.8 Manage the Cost Risk to Avoid Outsourcing 299</p> <p>8.8.1 Technology Selection 300</p> <p>8.8.2 Tools 300</p> <p>8.8.3 Software Manufacturing 300</p> <p>8.8.4 Integration, Reliability, and Stress Testing 301</p> <p>8.8.5 Computer Facilities 301</p> <p>8.8.6 Human Interaction Design and Documentation 301</p> <p>8.9 Software Project Management Audits 303</p> <p>8.10 Running an Audit 304</p> <p>8.11 Risks with Risk Management 304</p> <p>8.12 Problems 305</p> <p><b>9. Human Factors in Software Engineering 309</b></p> <p>9.1 A Click in the Right Direction 309</p> <p>9.2 Managing Things, Managing People 312</p> <p>9.2.1 Knowledge Workers 313</p> <p>9.2.2 Collaborative Management 313</p> <p>9.3 FAA Rationale for Human Factors Design 316</p> <p>9.4 Reach Out and Touch Something 319</p> <p>9.4.1 Maddening Counterintuitive Cues 319</p> <p>9.4.2 GUI 319</p> <p>9.4.3 Customer Care and Web Agents 319</p> <p>9.5 System Effectiveness in Human Factors Terms 320</p> <p>9.5.1 What to Look for in COTS 320</p> <p>9.5.2 Simple Guidelines for Managing Development 322</p> <p>9.6 How Much Should the System Do? 
323</p> <p>9.6.1 Screen Icon Design 324</p> <p>9.6.2 Short- and Long-Term Memory 326</p> <p>9.7 Emerging Technology 327</p> <p>9.8 Applying the Principles to Developers 334</p> <p>9.9 The Bell Laboratories Philosophy 336</p> <p>9.10 So You Want to Be a Manager 338</p> <p>9.11 Problems 338</p> <p><b>10. Implementation Details 344</b></p> <p>10.1 Structured Programming 345</p> <p>10.2 Rational Unified Process and Unified Modeling Language 346</p> <p>10.3 Measuring Complexity 353</p> <p>10.4 Coding Styles 360</p> <p>10.4.1 Data Structures 360</p> <p>10.4.2 Team Coding 363</p> <p>10.4.3 Code Reading 364</p> <p>10.4.4 Code Review 364</p> <p>10.4.5 Code Inspections 364</p> <p>10.5 A Must Read for Trustworthy Software Engineers 365</p> <p>10.6 Coding for Parallelism 366</p> <p>10.7 Threats 366</p> <p>10.8 Open-Source Software 368</p> <p>10.9 Problems 369</p> <p><b>11. Testing and Configuration Management 372</b></p> <p>11.1 The Price of Quality 373</p> <p>11.1.1 Unit Testing 373</p> <p>11.1.2 Integration Testing 373</p> <p>11.1.3 System Testing 373</p> <p>11.1.4 Reliability Testing 374</p> <p>11.1.5 Stress Testing 374</p> <p>11.2 Robust Testing 374</p> <p>11.2.1 Robust Design 374</p> <p>11.2.2 Prototypes 375</p> <p>11.2.3 Identify Expected Results 375</p> <p>11.2.4 Orthogonal Array Test Sets (OATS) 376</p> <p>11.3 Testing Techniques 376</p> <p>11.3.1 One-Factor-at-a-Time 377</p> <p>11.3.2 Exhaustive 377</p> <p>11.3.3 Deductive Analytical Method 377</p> <p>11.3.4 Random/Intuitive Method 377</p> <p>11.3.5 Orthogonal Array-Based Method 377</p> <p>11.3.6 Defect Analysis 378</p> <p>11.4 Case Study: The Case of the Impossible Overtime 379</p> <p>11.5 Cooperative Testing 380</p> <p>11.6 Graphic Footprint 382</p> <p>11.7 Testing Strategy 384</p> <p>11.7.1 Test Incrementally 384</p> <p>11.7.2 Test Under No-Load 384</p> <p>11.7.3 Test Under Expected-Load 384</p> <p>11.7.4 Test Under Heavy-Load 384</p> <p>11.7.5 Test Under Overload 385</p> <p>11.7.6 Reject Insufficiently Tested Code 
385</p> <p>11.7.7 Diabolic Testing 385</p> <p>11.7.8 Reliability Tests 385</p> <p>11.7.9 Footprint 385</p> <p>11.7.10 Regression Tests 385</p> <p>11.8 Software Hot Spots 386</p> <p>11.9 Software Manufacturing Defined 392</p> <p>11.10 Configuration Management 393</p> <p>11.11 Outsourcing 398</p> <p>11.11.1 Test Models 398</p> <p>11.11.2 Faster Iteration 400</p> <p>11.11.3 Meaningful Test Process Metrics 400</p> <p>11.12 Problems 400</p> <p><b>12. The Final Project: By Students, For Students 404</b></p> <p>12.1 How to Make the Course Work for You 404</p> <p>12.2 Sample Call for Projects 405</p> <p>12.3 A Real Student Project 407</p> <p>12.4 The Rest of the Story 428</p> <p>12.5 Our Hope 428</p> <p>Index 429</p>
"In a study, the book was found to be successful at significantly increasing the students' willingness and competency in using good software engineering processes." (<i>Computing Reviews.com</i>, May 10, 2006) <p>"…the book is an excellent and very readable guide to the development of reliable software, augmented with humor, case studies, useful tidbits…highly recommended for all software engineers." (<i>CHOICE</i>, March 2006)</p>
LAWRENCE BERNSTEIN is the Series Editor for the Quantitative Software Engineering Series, published by Wiley. Professor Bernstein is currently Industry Research Professor at the Stevens Institute of Technology. He previously pursued a distinguished executive career at Bell Laboratories. He is a Fellow of IEEE and ACM. <p>C. M. YUHAS is a freelance writer who has published articles on network management in the IEEE Journal on Selected Areas in Communication and IEEE Network. She has a BA in English from Douglass College and an MA in communications from New York University.</p>

You may also be interested in these products:

The CISO Evolution
by: Matthew K. Sharp, Kyriakos Lambros
PDF ebook
33,99 €
Data Mining and Machine Learning Applications
by: Rohit Raja, Kapil Kumar Nagwanshi, Sandeep Kumar, K. Ramya Laxmi
EPUB ebook
190,99 €