Basic Principles of a Good Software Engineering Approach


Software engineering is the application of engineering principles to the creation of software products and applications. It is an engineering discipline concerned with analyzing user requirements and with software design, development, testing, and maintenance.

The following are some fundamental principles of excellent software engineering −

  • Thorough requirement analysis is fundamental to software engineering because it provides a comprehensive picture of the project. A clear grasp of user requirements ultimately delivers value to customers in the form of a high-quality software solution that satisfies their needs.

  • All designs and implementations should be as simple as possible, in keeping with the KISS (Keep It Simple, Stupid) principle. Simple code is far easier to debug and maintain.

  • Maintaining the project's vision throughout the development process is one of the most important factors in its success. A clear, shared vision keeps development focused and coherent.

  • Software projects contain many features, and all functionality should be designed with a modular approach to make development faster and simpler. Modularity keeps functions and system components self-contained.

  • Abstraction is a specialization of separation of concerns: it hides complexity and presents simplicity to the customer or user, exposing only what they need and concealing the rest.

  • "Think before you act" is an essential habit in software engineering: before implementing functionality, consider the application architecture, since proper planning of the project's development flow yields better results.

  • Developers often add features that later turn out not to be needed. Adhering to the "never add extra" approach is therefore important: implement only what is truly necessary, which saves time and effort.

  • Developers working on someone else's code should not be surprised by it or have to spend time figuring out what is going on. Maintaining good documentation at critical stages is therefore an excellent way to improve a software project.

  • The Law of Demeter should be followed: an object should call methods only on its immediate collaborators rather than reaching through them, which keeps classes separated by responsibility and reduces coupling (the connections and interdependence between classes). A small sketch of this appears after this list.

  • Developers should design the project so that it satisfies the principle of generality: it should not be limited to a specific set of cases or functions, but should be free of unnatural restrictions and able to serve customers with both specific and general needs.

  • The principle of consistency matters in both coding style and GUI (Graphical User Interface) design: a consistent coding style makes code easier to read, and a consistent GUI makes the interface and the program easier for users to learn.

  • Never spend time building something that is needed but already available; rather than solving the problem again in your own way, reuse existing open-source solutions.

  • Continuous validation ensures that a software system satisfies its requirements and serves its intended function, resulting in improved software quality control.

  • To keep up with the current technology industry, it is critical to use modern programming approaches so that users' needs are addressed in the most up-to-date and sophisticated manner possible.

  • Scalability should be maintained so that the software can grow and handle rising demand for the application.

  • Separation of Concerns - Separation of concerns recognizes that humans must work within a limited context. According to G. A. Miller [Miller56], the human mind can cope with only about seven units of information at a time, where a unit is an idea or concept that a person has learned to deal with as a whole. Although humans seem to have an unlimited capacity for abstraction, it takes time and repeated use for an abstraction to become a useful mental tool, that is, to function as a unit.

    When specifying the behavior of a data structure component, there are usually two concerns to consider: basic functionality and support for data integrity. A data structure component is often easier to use when these two concerns are separated as far as possible into different sets of client methods (see the stack sketch after this list). Clients benefit when the documentation treats the two concerns separately, and implementation documentation and algorithm descriptions also benefit when core algorithms are considered separately from the modifications needed for data integrity and exception handling.

    Separation of concerns is important for another reason as well. Software developers must balance complex, competing values when trying to improve the quality of a product. The study of algorithmic complexity offers a useful lesson here: although there are effective techniques for optimizing a single measurable quantity, problems that require optimizing several quantities at once are almost always NP-complete. Although it has not been proven, most experts in complexity theory believe that NP-complete problems cannot be solved by polynomial-time algorithms.

    In light of this, it seems reasonable to separate the handling of different values. This can be done by dealing with different values at different times in the software development process, or by structuring the design so that different components are responsible for achieving different values.

    Run-time efficiency is one value that frequently conflicts with other software values. Most software engineers therefore recommend treating efficiency as a separate concern: after the program has been developed to meet its other requirements, its run time can be measured and analyzed to discover where the time is actually spent, and, if necessary, the parts of the code that consume most of the run time can be modified to run faster. This idea is developed in detail in the paper "Lazy optimization: patterns for efficient Smalltalk programming" by Ken Auer and Kent Beck in [VCK96, pages 19-42]. A brief profiling sketch appears after this list.

  • Modularity - The principle of modularity is a specialization of the principle of separation of concerns. Following it means dividing software into components according to functionality and responsibility (see the modularity sketch after this list). Parnas [Parnas72] wrote one of the earliest papers discussing the considerations involved in modularization, and [WWW90] is a more recent treatment of a responsibility-driven methodology for modularization in an object-oriented context.

  • Abstraction - The principle of abstraction is another specialization of the principle of separation of concerns. It requires separating the behavior of software components from their implementation, and it requires learning to look at software and software components from two points of view: what they do and how they do it.

    Failure to separate behavior from implementation is a common cause of unnecessary coupling. For example, in recursive algorithms it is common to introduce extra parameters to make the recursion work. The recursion should then be invoked through a non-recursive shell that supplies the proper initial values for the extra parameters. Otherwise, the caller must deal with a more complicated behavior that requires specifying the extra parameters, and if the implementation is later changed to a non-recursive one, the client code will have to change as well. A sketch of such a shell appears after this list.

    Design by contract is an important methodology for dealing with abstraction. The basic ideas of design by contract are outlined by Fowler and Scott [FS97], and Meyer [Meyer92a] provides the most complete description of the methodology. A minimal contract sketch appears after this list.

  • Change Anticipation - A computer program is a computer-assisted solution to a problem. The problem arises in some context, or domain, that is familiar to the software's users, and the domain defines the types of data the users need to work with as well as the relationships among them.

    Software engineers, on the other hand, are familiar with the technology of data abstraction. They work with structures and algorithms without regard for the meaning or importance of the data: a developer can think in terms of graphs and graph algorithms without attaching any specific meaning to vertices and edges.

    Working out an automated solution to a problem is therefore a learning experience for both software engineers and their clients. Software developers are learning the domain the clients work in, and they are also learning the clients' values: which style of data presentation is most useful to them, and which types of data are critical and require special protection.

    Clients, in turn, are learning about the range of solutions that software technology can provide, and they are learning to evaluate possible solutions in terms of how effectively they meet their needs.

    If the problem to be solved is complex, it is not reasonable to expect the best solution to be found in a short amount of time. The clients, however, want a timely response; most of the time they are unwilling to wait until the ideal solution is found, preferring an acceptable solution soon while perfection waits. To provide a timely solution, software engineers need to pin down the requirements, that is, how the software should behave. The principle of change anticipation recognizes the complexity of this learning process for both software engineers and their clients: preliminary requirements need to be formulated early, but changes to those requirements should be allowed as learning progresses.

    Coupling is a major obstacle to change: if two components are tightly coupled, changing one will almost certainly require changing the other.

    Cohesion, by contrast, has a positive effect on the ease of change. Cohesive components are easier to reuse when requirements change; a component that bundles several unrelated tasks into a single package will very likely need to be split apart when changes are made.

  • Generality - The principle of generality is closely related to the principle of change anticipation. It is important to design software that is free of unnatural restrictions and limitations. A prime example of an unnatural restriction is the use of two-digit year numbers, which led to the "year 2000" problem: software that confuses record-keeping at the turn of the century. Although the two-digit restriction seemed reasonable at the time, good software frequently outlives its expected lifetime.

    As another illustration of the principle of generality, consider a client who is converting business practices into automated software. They are often trying to satisfy general needs, but they understand and express those needs in terms of their current practices. As they learn more about the capabilities of automated solutions, they begin to see what they actually need rather than what they are currently doing to satisfy those needs. This distinction is similar to the one made under abstraction, but its consequences appear earlier in the software development process.

  • Development in Small Steps - Fowler and Scott [FS97] provide a succinct but thorough overview of the incremental software development method. This method involves building software in tiny steps, such as adding one use case at a time.

    An incremental development approach simplifies verification. If software is built in small increments, verification only has to deal with the newly added functionality, and any errors that are detected are already partially isolated, which makes them much easier to correct.

    A carefully planned incremental development can also ease the handling of changes in requirements. To achieve this, the planning must identify the use cases that are most likely to change and schedule them toward the end of the development process.

  • Consistency - The principle of consistency recognizes that it is easier to do things in a familiar context. Coding style, for example, is a consistent way of laying out code text. This serves two purposes: it makes the code easier to read, and it allows programmers to automate some of the skills involved in entering code so that they can concentrate on more important issues. At a higher level, consistency involves developing idioms for dealing with common programming problems.
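
The short sketches below illustrate a few of the principles above. They are simplified Python examples with hypothetical class and function names, not prescriptions for any particular codebase. The first sketch shows the Law of Demeter: the charge() function talks only to its immediate collaborator (the customer) instead of reaching through it into the wallet, so it stays decoupled from Wallet's internals.

    # Law of Demeter sketch (hypothetical classes): callers talk only to their
    # immediate collaborators instead of chaining through them.

    class Wallet:
        def __init__(self, balance):
            self.balance = balance

        def deduct(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    class Customer:
        def __init__(self, wallet):
            self._wallet = wallet

        # The customer exposes the operation callers actually need, so they
        # never have to dig into the wallet object itself.
        def pay(self, amount):
            self._wallet.deduct(amount)

    def charge(customer, amount):
        customer.pay(amount)               # one call on an immediate collaborator
        # A violating version would reach through the customer:
        # customer._wallet.deduct(amount)  # couples charge() to Wallet's internals

    charge(Customer(Wallet(100)), 30)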
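
The next sketch relates to separation of concerns in a data structure component: the integrity queries (is_empty, is_full) are kept in a separate set of client methods from the core operations (push, pop), so each concern can be documented and used on its own. The BoundedStack class is hypothetical.

    # Separation of concerns sketch: integrity queries are separated from the
    # core operations, which assume the client has already checked applicability.

    class BoundedStack:
        def __init__(self, capacity):
            self._items = []
            self._capacity = capacity

        # --- data-integrity / applicability queries ---
        def is_empty(self):
            return not self._items

        def is_full(self):
            return len(self._items) >= self._capacity

        # --- core functionality ---
        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

    stack = BoundedStack(capacity=2)
    if not stack.is_full():        # integrity concern handled explicitly
        stack.push("a")            # core functionality
    if not stack.is_empty():
        print(stack.pop())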
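
The point about run-time efficiency can be made concrete with the standard-library profiler: the code is written for clarity first, and cProfile is used afterwards to see where the time actually goes before anything is rewritten. The slow_sum function is just a stand-in workload for this sketch.

    # Efficiency as a separate concern: measure where the time is spent before
    # optimizing anything.

    import cProfile

    def slow_sum(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    # cProfile prints a per-function timing report, pointing at the code that
    # actually consumes the run time.
    cProfile.run("slow_sum(1_000_000)")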
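
For modularity, a small, hypothetical report pipeline is sketched below: parsing, validation, and formatting are separate functions with single responsibilities, so any one of them can be replaced without touching the others.

    # Modularity sketch: each function is self-contained and owns one responsibility.

    def parse_record(line):
        # Turn one line of raw text into a (name, amount) pair.
        name, amount = line.split(",")
        return name.strip(), float(amount)

    def validate_record(record):
        # One concern only: amounts must be non-negative.
        return record[1] >= 0

    def format_record(record):
        # Presentation is kept out of parsing and validation.
        name, amount = record
        return f"{name}: {amount:.2f}"

    def build_report(lines):
        # Coordinator that composes the self-contained modules.
        records = (parse_record(line) for line in lines)
        return [format_record(r) for r in records if validate_record(r)]

    print(build_report(["alice, 10", "bob, -3", "carol, 7.5"]))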
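
The recursion example under abstraction looks like this in practice: the recursive helper needs extra lo/hi parameters that exist only to make the recursion work, and a non-recursive shell hides them so callers see only the behavior, not the implementation.

    # Abstraction sketch: a non-recursive shell supplies the extra parameters
    # required by the recursive helper.

    def _search(items, x, lo, hi):
        # Recursive helper with extra bookkeeping parameters (lo, hi).
        if lo > hi:
            return False
        mid = (lo + hi) // 2
        if items[mid] == x:
            return True
        if items[mid] < x:
            return _search(items, x, mid + 1, hi)
        return _search(items, x, lo, mid - 1)

    def contains(sorted_items, x):
        # Callers never see lo/hi; switching to an iterative implementation
        # later would not affect them.
        return _search(sorted_items, x, 0, len(sorted_items) - 1)

    print(contains([1, 3, 5, 7, 9], 7))   # True
    print(contains([1, 3, 5, 7, 9], 4))   # False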
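
Finally, a minimal design-by-contract sketch, using plain assertions rather than the language-level contracts described by Meyer: the preconditions state the caller's obligations and the postcondition states what the method promises in return. The Account class is hypothetical.

    # Design-by-contract sketch using assertions (a simplification of the
    # precondition/postcondition idea).

    class Account:
        def __init__(self, balance=0):
            self.balance = balance

        def withdraw(self, amount):
            # Preconditions: the caller's obligations.
            assert amount > 0, "amount must be positive"
            assert amount <= self.balance, "amount must not exceed the balance"
            old_balance = self.balance
            self.balance -= amount
            # Postcondition: the method's promise to the caller.
            assert self.balance == old_balance - amount
            return self.balance

    print(Account(100).withdraw(40))   # 60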
