Principles of Conventional Software Engineering

There are many descriptions of "old-school" software engineering. Over its years of software development, the industry has learned many lessons and formulated numerous principles. This section introduces the fundamental ideas explored throughout the rest of the book by describing one perspective on today's software engineering principles. The benchmark I have chosen is a brief paper titled "Fifteen Principles of Software Engineering" [Davis, 1994], which was later expanded into a book [Davis, 1995] that enumerates 201 principles. Despite its title, the paper describes the top 30 principles, and it is as good a summary as any of the software industry's conventional wisdom. While I agree with most of it, I feel some of it is outdated. Davis's top 30 principles are listed next. For each principle, I discuss whether the viewpoint presented later in this book would support or contradict it. I make a few claims here that are not fully supported until subsequent chapters.

  • Prioritize quality. It must be quantified and measures put in place to encourage its attainment.

    Defining a notion of quality that is appropriate for the project at hand is critical, but it is difficult to do at the start of a project. As a result, a contemporary process framework aims to understand the trade-offs among features, quality, cost, and schedule as early in the life cycle as possible. Until that understanding is achieved, it is impossible to specify, let alone control, the attainment of quality.
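
    As a loose illustration only (not something from the text), "quantify quality" can be as simple as a scripted quality gate that checks a few agreed metrics on every build; the metric names and thresholds below are invented for the example.

        # Hypothetical quality gate: metric names and thresholds are illustrative only.
        MAX_DEFECT_DENSITY = 1.0   # defects per KLOC the project has agreed to tolerate
        MIN_TEST_COVERAGE = 0.80   # minimum fraction of statements covered by tests

        def quality_gate(defect_density: float, coverage: float) -> bool:
            """Return True when the build meets the agreed quality targets."""
            return defect_density <= MAX_DEFECT_DENSITY and coverage >= MIN_TEST_COVERAGE

        # Example: a build with 0.8 defects/KLOC and 86% test coverage passes the gate.
        print("pass" if quality_gate(0.8, 0.86) else "fail")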

  • It is feasible to create high-quality software. Involving the customer, prototyping, simplifying design, performing inspections, and recruiting the best people are all proven methods for improving quality.

    This principle largely overlaps with the others.

  • Give products to customers as early as possible. No matter how hard you try to learn what users need during the requirements phase, the only way to find out what they really want is to give them a product and let them play with it.

    This is a critical aspect of a contemporary process framework, which must include several mechanisms for engaging the customer throughout the life cycle. Depending on the domain, these mechanisms may include demonstrable prototypes, demonstration-based milestones, and alpha/beta releases.

  • Figure out what the problem is before defining the requirements. When confronted with what they believe is a problem, most engineers rush to propose a solution. Before you try to solve a problem, make sure you have explored all the alternatives and are not blinded by the obvious answer.

    This principle identifies the core problem with the traditional requirements specification approach: the parameters of a problem become more tangible as a solution evolves. A contemporary process framework evolves the problem and the solution together until the problem is understood well enough to commit to full production.

  • Evaluate design alternatives. After the requirements are agreed upon, you must examine a variety of designs and methods. You should not settle on an "architecture" merely because it was mentioned in the requirements specification.

    This idea appears rooted in the waterfall mentality in two ways: (1) the requirements come first and the architecture only afterward, and (2) the architecture is embedded in the requirements specification. While a contemporary process encourages the evaluation of design alternatives, these activities occur concurrently with requirements definition, and the notations and artifacts for requirements and architecture are explicitly decoupled.

  • Use an appropriate process model. Each project must select the process that makes the most sense for it, on the basis of corporate culture, willingness to take risks, application area, volatility of requirements, and the extent to which requirements are well understood.

    It is true that there is no universal process. That is why I use the term process framework to describe a flexible class of processes rather than a single rigid instance.

  • Use different languages for different phases. Owing to our industry's insatiable appetite for simple answers to complex problems, many have claimed that the best development process is one that uses the same notation throughout the life cycle. Why should software engineers use Ada for requirements, design, and code unless Ada were the best language for all of those phases?

    This is an important principle to keep in mind. Chapter 6 describes the fundamental artifacts of the process, a suitable organization for them, and recommended languages and notations.

  • Keep intellectual distance to a minimum. The structure of the software should be as close as possible to the structure of the real-world problem, so that the mental leap between the two stays small.

    This notion has motivated the development of object-oriented techniques, component-based development, and visual modeling.
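
    As a minimal sketch of what "minimizing intellectual distance" can look like in code, the types and operations below mirror the real-world concepts they model; the Book/Library names are invented for illustration, not taken from the text.

        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Book:                      # mirrors the real-world concept directly
            title: str
            isbn: str

        @dataclass
        class Library:                   # the software structure follows the domain structure
            books: List[Book] = field(default_factory=list)

            def lend(self, isbn: str) -> Optional[Book]:
                """Remove and return the first copy with a matching ISBN, if any."""
                for book in self.books:
                    if book.isbn == isbn:
                        self.books.remove(book)
                        return book
                return None

        shelf = Library([Book("Refactoring", "978-0134757599")])
        print(shelf.lend("978-0134757599"))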

  • Prioritize techniques over tools. An undisciplined software engineer with a tool becomes a dangerous, undisciplined software engineer.

    While this principle is sound, it overlooks two crucial points: (1) a disciplined software engineer with good tools will outperform a disciplined engineer who lacks them, and (2) automation is one of the most effective ways to promote, standardize, and deliver better techniques.

  • Get it right before you try to speed it up. Making a functional program run faster is significantly easier than making a fast program work. During the early coding, don't be concerned about optimization.

    This is an excellent statement that several software gurus have misrepresented as: "Early performance problems in a software system are a sure sign of downstream risk." Performance concerns arose early in the life cycle of every successful, nontrivial software project I am aware of, and I would argue that practically all immature architectures (particularly large-scale ones) exhibit performance problems in their first executable iterations. Understanding the performance trade-offs requires having something running early; it is simply too difficult to gain that understanding by analysis alone.
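
    The following sketch illustrates the spirit of getting it right first and measuring before speeding it up; the duplicate-detection example and timing calls are my own illustration, not the author's.

        import timeit

        def has_duplicates(items):
            """First version: obviously correct and easy to verify, but O(n^2)."""
            return any(items[i] == items[j]
                       for i in range(len(items))
                       for j in range(i + 1, len(items)))

        def has_duplicates_fast(items):
            """O(n) rewrite, introduced only after measurement shows the slow spot matters."""
            return len(set(items)) != len(items)

        data = list(range(1000))
        print(timeit.timeit(lambda: has_duplicates(data), number=3))       # slow but correct
        print(timeit.timeit(lambda: has_duplicates_fast(data), number=3))  # fast, same answer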

  • Inspect the code. Inspecting the detailed design and code is a far better way to detect flaws than testing.

    For all but the simplest software systems, the importance of this idea is exaggerated. Today's hardware resources, programming languages, and automated environments allow analysis and testing to be automated and run efficiently throughout the life cycle, and in today's iterative development, continuous and automated life-cycle testing is a necessity. Architectural flaws and global design trade-offs are rarely discovered through general, undirected inspections (as opposed to inspections focused on known concerns). That is not to say that all inspections are worthless: when used wisely and focused on a known issue, inspections are extremely effective at resolving problems. However, given that the industry's default approach is to over-inspect, this guideline should not be among the top 15.
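
    As a small sketch of the kind of automated, repeatable test that can run on every build, consider the unit test below; the apply_discount function is a made-up example, not something from the text.

        import unittest

        def apply_discount(price: float, percent: float) -> float:
            """Return the price after a percentage discount (illustrative function only)."""
            if not 0 <= percent <= 100:
                raise ValueError("percent must be between 0 and 100")
            return round(price * (1 - percent / 100), 2)

        class ApplyDiscountTest(unittest.TestCase):
            def test_typical_discount(self):
                self.assertEqual(apply_discount(200.0, 25), 150.0)

            def test_rejects_invalid_percent(self):
                with self.assertRaises(ValueError):
                    apply_discount(100.0, 150)

        if __name__ == "__main__":
            unittest.main()   # e.g., run automatically in a nightly or per-commit build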

  • Good management is more important than good technology. The best technology will not compensate for poor management, while a good manager can achieve great results even with meager resources. Good management motivates people to do their best, yet there are no universally accepted "correct" management styles.

    With a small budget and a tight schedule, a strong, well-managed team can do remarkable things. On the other hand, excellent management of a low-quality team is almost a contradiction in terms, because a good manager will attract, build, and retain a high-quality team.

  • People are the most important factor in achieving success. The importance of highly skilled people with the right experience, talent, and training cannot be overstated. The right people with insufficient tools, languages, and processes will succeed; the wrong people with good tools, languages, and processes will probably fail.

    This principle is placed much too low on the priority list.

  • Proceed with caution. Just because everyone else is doing it doesn't mean it's appropriate for you. It might be correct, but you must carefully consider its appropriateness in your situation. Object orientation, measurement, reuse, process optimization, CASE, prototyping—all of these techniques have the potential to enhance quality, save costs, and boost user happiness. The promise of such procedures is frequently exaggerated, and the advantages are far from certain or universal.

    This is sound advice, especially in a fast-paced field where it is difficult to tell technology fads from genuine advances. The latest technology is not necessarily the best option for balancing features, cost, and schedule.

  • Take responsibility. When a bridge collapses, we ask, "What did the engineers do wrong?" Even when software fails, we rarely ask this. The truth is that in any engineering discipline, the best methods can be used to produce dreadful designs, and the most primitive methods can produce elegant ones.

    This is a good follow-up to the previous principle. Success takes more than good techniques, tools, and components; it also requires good people, good management, and a culture of learning that values forward progress despite the frequent and unavoidable intermediate setbacks.

  • Understand the customer's priorities. It is possible the customer would tolerate 90 percent of the functionality being delivered late if they could have 10 percent of it on time.

    It is crucial to understand the customer's priorities, but only in the context of the other stakeholders' interests. The belief that "the customer is always right" has probably squandered more money than any other. The customer is frequently mistaken, especially in government contracting, but more broadly whenever a customer contracts with a system integrator.

  • The more they see, the more they need. The more functionality (or performance) you supply, the more functionality (or performance) the user wants.

    This observation is true, but taken as guidance it implies that you should never show users anything. It should instead read: "The more users see, the better they understand." Not every stakeholder is motivated purely by getting more; most are aware of their limited resources and the constraints developers face.

    In a contemporary process, demonstrating intermediate results is a high-visibility activity needed to synchronize stakeholder expectations. It gives the software project manager objective evidence with which to rationalize the inevitable change requests and maintain a balance of cost, features, and risk.

  • Plan to throw one away. One of the most important critical success factors is whether or not a product is entirely new. Brand-new applications, systems, interfaces, or algorithms rarely work the first time they are used.

    You should not plan to throw one away; rather, you should plan to evolve a product from an early prototype into a fully functional baseline. If you end up throwing it away, that is acceptable, but don't plan on it from the start. This may have been sound advice in the past, when projects involved 100 percent custom, cutting-edge software development. Much of the componentry in today's software systems (at least the operating system, DBMS, GUI, network, and middleware) already exists, and much of what is produced in the first pass can be reused.

  • Create a flexible design. The architectures, components, and specification techniques you use must accommodate change.

    This is a simple statement that has proven quite difficult to live up to. In essence, it says that we must predict the future and construct a framework that can accommodate change that is not yet well defined. Nonetheless, I fully support this principle because it is critical to success. Although the future cannot be predicted with precision, attempting to anticipate the kinds of changes likely to occur over a system's life cycle is a useful risk management exercise and a common theme of successful software projects.
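
    One common way to design for change is to depend on a small abstraction so that a component can be replaced later without touching its clients. The sketch below is a generic illustration of that idea; the MessageStore, InMemoryStore, and Logger names are invented, not from the text.

        from abc import ABC, abstractmethod
        from typing import List

        class MessageStore(ABC):
            """The stable abstraction that clients depend on."""
            @abstractmethod
            def save(self, message: str) -> None: ...

        class InMemoryStore(MessageStore):
            """Today's simple implementation; a database-backed store could replace it later."""
            def __init__(self) -> None:
                self.messages: List[str] = []
            def save(self, message: str) -> None:
                self.messages.append(message)

        class Logger:
            """Depends only on the MessageStore interface, not on any concrete store."""
            def __init__(self, store: MessageStore) -> None:
                self.store = store
            def log(self, message: str) -> None:
                self.store.save(message)

        logger = Logger(InMemoryStore())   # swapping in a different store later
        logger.log("system started")       # requires no change to Logger itself
        print(logger.store.messages)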

  • Design without documentation is not design. I have often heard software developers say: "I have finished the design. All that is left is the documentation."

    This notion is also rooted in the traditional document-driven approach, in which the documentation was kept separate from the software itself. With visual modeling and higher-level programming languages, maintaining separate documents whose purpose is to describe the software design is usually counterproductive. High-level architecture documents can be extremely useful if they are kept clear and simple, but the design notations, source code, and test baselines are the primary artifacts the engineering team uses. To exploit today's technology advances, I would restate this principle as: "Most software artifacts should be self-documenting."
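
    A small example of what a "self-documenting" artifact might look like: the type hints and docstring carry the design intent alongside the code, so no separate design document is needed for this routine. The loan-payment function is my own illustration, not from the text.

        def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
            """Return the fixed monthly payment for an amortized loan.

            The signature, type hints, and this docstring document the design
            intent in the artifact itself rather than in a separate paper.
            """
            if annual_rate == 0:
                return principal / months
            r = annual_rate / 12
            return principal * r / (1 - (1 + r) ** -months)

        print(round(monthly_payment(10_000, 0.06, 36), 2))   # about 304.22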
