Cost Estimation Models in Software Engineering


A software cost estimation model is an indirect measure that software professionals use to predict the cost of a project. Such models serve several purposes −

  • Budgeting − The most desired capability is an accurate overall estimate, so the primary use is setting the software product's budget.

  • Tradeoff and risk analysis − A significant additional capability is exposing the cost and schedule sensitivity of software project decisions (scoping, staffing, tools, reuse, and so on).

  • Project planning and control − A further capability is breaking down costs and schedules by component, stage, and activity.

  • Software improvement investment analysis − Estimates also support analysis of investments in tools, reuse, and process maturity, all of which benefit the software development process.

Models of Cost Estimation

Greater efficiency in software development would alleviate the present challenges of software production, which have resulted in cost overruns and even project cancellations. Software engineering cost models, like those of any other discipline, have had their share of difficulties: the fast-changing nature of software development makes it very hard to build parametric models that deliver high accuracy across all its disciplines. Software development costs are rising at an extraordinary pace, and practitioners constantly lament their inability to forecast those costs accurately. Cost models help describe the development life cycle and predict the cost of building a software product. Many software estimation models have emerged over the past two decades from the pioneering work of researchers. Because most models are proprietary, they cannot be compared and contrasted in terms of model structure; the functional form of these models is determined by theory or experiment. The major models are the following −

COCOMO 81

(a) Basic COCOMO

The term COCOMO stands for COnstructive COst MOdel. Barry Boehm first published it in his 1981 book Software Engineering Economics. Because the model is simple and transparent, it gives an order-of-magnitude estimate of a project's cost. With only a small number of cost drivers, it is designed for small projects and is most helpful when the team is small.

It is useful for a quick, early, rough estimate of software costs, but its accuracy is limited because it has no factors to account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and other project attributes known to have a significant impact on software costs.

EFFORT = a * (KDSI)^b

The constants a and b take different values depending on the project type. KDSI (thousands of delivered source instructions, essentially the same measure as KLOC) is the projected size of the delivered code.

COCOMO 81 offers the following three modes (a short sketch evaluating all three follows the list) −

  • Organic Mode − A relatively small, simple software project worked on by a small team with good experience in the application. Effort E (in person-months) and development time D (in months) are given by −

    E = 2.4 * (KLOC)^1.05

    D = 2.5 * (E)^0.38

  • Semi-detached Mode − An intermediate software project in which teams with varying levels of expertise collaborate.

    E = 3.0 * (KLOC)^1.12

    D = 2.5 * (E)^0.35

  • Embedded Mode − A software project that must be built under strict hardware, software, and operational limitations.

    E = 3.6 * (KLOC)^1.20

    D = 2.5 * (E)^0.32
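
To make the three modes concrete, here is a minimal Python sketch that evaluates the equations above. The basic_cocomo helper and the 32 KLOC example project are illustrative, not part of any standard tool.

# Basic COCOMO 81: E = a * KLOC^b (person-months), D = c * E^d (months).
# The (a, b, c, d) constants are the Basic COCOMO values quoted above.
MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, duration in months)."""
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

effort, duration = basic_cocomo(32, "organic")  # a hypothetical 32 KLOC project
print(f"effort = {effort:.1f} person-months, duration = {duration:.1f} months")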

(b) Intermediate COCOMO

It assesses software development effort as a function of program size and a set of cost drivers that include subjective ratings of product, hardware, personnel, and project attributes.

It is appropriate for medium-sized projects; its level of cost-driver detail places it between basic and detailed COCOMO. The cost drivers cover attributes such as required product reliability, database size, and execution-time and storage constraints. Team size is moderate. The intermediate COCOMO model looks like this −

EFFORT = a * (KLOC)^b * EAF

Here, effort is measured in person-months, KLOC is the project's projected number of delivered lines of code (in thousands), and the effort adjustment factor EAF is the product of the multipliers assigned to the cost drivers.
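
A minimal sketch of this calculation follows; the driver names and multiplier values below are illustrative placeholders rather than Boehm's published tables.

# Intermediate COCOMO: EFFORT = a * KLOC^b * EAF, where EAF is the product
# of the cost-driver multipliers chosen for the project.
def intermediate_cocomo(kloc, a, b, multipliers):
    eaf = 1.0
    for value in multipliers.values():
        eaf *= value
    return a * kloc ** b * eaf

# Hypothetical ratings: high required reliability, an experienced team,
# tight storage constraints.
effort = intermediate_cocomo(
    kloc=50, a=3.0, b=1.12,
    multipliers={"RELY": 1.15, "AEXP": 0.91, "STOR": 1.21},
)
print(f"adjusted effort = {effort:.1f} person-months")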

(c) Detailed COCOMO

It is designed for large-scale projects. The cost drivers depend on requirements, analysis, design, testing, and maintenance, and the team is comparatively large. The detailed COCOMO model incorporates all characteristics of the intermediate version and, in addition, assesses the cost drivers' impact on each phase of the software engineering process (analysis, design, and so on).
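
One way to picture the phase-sensitive step is the sketch below; the phase shares and the multiplier are hypothetical illustrations, not the tables from Boehm's book.

# Detailed COCOMO refines a project-level estimate by applying cost-driver
# multipliers per phase. The phase shares here are made up for illustration.
PHASE_SHARES = {
    "plans and requirements": 0.06,
    "product design":         0.16,
    "programming":            0.58,
    "integration and test":   0.20,
}

def phase_effort(total_effort, phase_multipliers):
    """Split a total effort estimate across phases, applying a
    phase-specific multiplier where one is given."""
    return {phase: total_effort * share * phase_multipliers.get(phase, 1.0)
            for phase, share in PHASE_SHARES.items()}

# E.g. a hypothetical driver that penalizes only the test phase by 10%.
print(phase_effort(100.0, {"integration and test": 1.10}))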

COCOMO-II

COCOMO II grew out of a research effort that began in 1994 at USC. It places strong emphasis on non-sequential and rapid development process models, reengineering, reuse-driven approaches, object-oriented approaches, and so on. It is the combination of three models: application composition, early design, and post-architecture.

  • The Application Composition model is used to estimate effort and schedule on projects that employ Integrated Computer-Aided Software Engineering (ICASE) tools for rapid application development. It is based on the concept of Object Points (a tally of the screens, reports, and 3GL modules developed in the application).

  • The Early Design Model entails looking at alternative system designs and operational approaches.

  • The Post-Architecture Model is used when the top-level design is complete and detailed information about the project is available; as the name implies, the software architecture is well defined and known. It is a comprehensive extension of the Early Design model, covering the whole development life cycle. A simplified form of the model, roughly at the level of intermediate COCOMO, is −

EFFORT = 2.9 * (KLOC)^1.10
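
The quoted relation is straightforward to evaluate, as in the sketch below; the helper name is hypothetical, and the full post-architecture model adds scale factors and many effort multipliers beyond this simplified form.

# Simplified COCOMO II relation as quoted above: EFFORT = 2.9 * KLOC^1.10.
def cocomo2_effort(kloc):
    return 2.9 * kloc ** 1.10

print(f"{cocomo2_effort(100):.0f} person-months for a 100 KLOC system")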

PUTNAM MODEL (SLIM)

The Software Life Cycle Model (SLIM) is based on Putnam's analysis of the Rayleigh distribution of project staffing level versus time. It accommodates the majority of common size estimation approaches, such as ballpark techniques, source instructions, and function points. It estimates the project's effort, schedule, and defect rate. Data from previously completed projects is collected and used to calibrate the model; if such data is not available, a series of questions can be answered to obtain values for the manpower buildup index (MBI) and the productivity factor (PF) from the database. P stands for productivity, defined as the ratio of software product size S to development effort E −

P = S / E
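
A minimal sketch of how this relation is used follows: calibrate P from a past project, then invert the relation to forecast effort for a new one. The figures are invented.

# Putnam/SLIM productivity relation from the text: P = S / E.
def productivity(size_sloc, effort_pm):
    return size_sloc / effort_pm

def forecast_effort(size_sloc, calibrated_p):
    # Invert P = S / E to get E = S / P for a new project.
    return size_sloc / calibrated_p

p = productivity(48_000, 120)      # a completed project: 48 KSLOC in 120 PM
print(forecast_effort(60_000, p))  # effort forecast for a 60 KSLOC project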

ESTIMACS

ESTIMACS is a proprietary system developed by Howard Rubin and commercialized by Management and Computer Services (MACS). It emphasizes approaching the estimating task in business terms. Rubin identified six critical estimation dimensions and a map depicting their interactions, from what he refers to as "gross business terms" to their influence on the developer's long-term predicted portfolio mix. The key estimation dimensions are −

  • effort hours,

  • the number and deployment of staff,

  • cost,

  • hardware resource requirements,

  • risk,

  • the effect on the portfolio.

SEER-SEM

SEER-SEM, which stands for System Evaluation and Estimation of Resources, is sold by Galorath, Inc. of El Segundo, California. The model has been on the market for some 15 years and is based on the original Jensen model [Jensen1983]. It has a wide range of applications: it spans the whole project life cycle, from early specification through design, development, delivery, and maintenance. It makes sensitivity and trade-off evaluations of model input parameters easier, breaks project elements down into work breakdown structures for better planning and control, and displays project cost drivers. The model allows interactive scheduling of project elements on Gantt charts. Estimates are based on a large database of past projects.

Techniques for Cost Estimation

A. Algorithmic Methodologies

In an algorithmic technique, the software cost estimate is calculated from a formula. The formula is derived from models that combine the underlying cost factors, and statistical methods are used to build the model. The algorithmic approach aims to provide a set of mathematical equations for software estimation. These equations are grounded in research and historical data and consider factors such as the number of source lines of code (SLOC), the number of functions to perform, and other cost drivers such as language, design methodology, skill levels, risk assessments, and so on. Many models, such as the COCOMO models, the Putnam model, and function-point based models, have been built with algorithmic approaches and studied extensively. The sketch below shows one way the constants of such a formula can be calibrated from historical data.
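
As an illustration of the statistical calibration step, a power-law model EFFORT = a * SIZE^b can be fitted by least squares on log-transformed historical data. The sample (size, effort) pairs below are invented.

import math

# Invented historical record: (KLOC, person-months) per completed project.
history = [(10, 26), (25, 70), (50, 160), (90, 310)]

xs = [math.log(size) for size, _ in history]
ys = [math.log(effort) for _, effort in history]
n = len(history)
mean_x, mean_y = sum(xs) / n, sum(ys) / n

# Ordinary least squares on the log-log data: the slope is b and the
# intercept gives log(a), since log(E) = log(a) + b * log(SIZE).
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = math.exp(mean_y - b * mean_x)

print(f"fitted model: EFFORT = {a:.2f} * KLOC^{b:.2f}")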

Function Point Analysis

Function Point Analysis is another way of measuring the size and complexity of a software system, in terms of the functions it delivers to the user. Proprietary cost estimation methods such as ESTIMACS and SPQR/20, for example, are based on a function point approach. Albrecht [8] first proposed this metric, which is based on the program's functionality. The total number of function points depends on the counts of distinct types (distinct in format or processing logic).

Counting function points involves two steps (a worked sketch follows the list) −

  • Compiling the raw function counts − The raw counts are a linear combination of five basic software components: external inputs, external outputs, external inquiries, logical internal files, and external interfaces, each classified into three levels of complexity: simple, average, and complex. The function count (FC) is the sum of these numbers, weighted by complexity level.

  • Accounting for environmental processing complexity − The final function points are obtained by multiplying FC by an adjustment factor derived from 14 aspects of processing complexity.
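
The sketch below follows those two steps. The component weights are the commonly quoted simple/average/complex values from Albrecht-style counting, the adjustment factor uses the standard 0.65 + 0.01 × (sum of ratings) form, and the counts and ratings themselves are invented.

# Function point calculation: weighted component counts, then adjustment.
WEIGHTS = {  # component: (simple, average, complex) weights
    "external inputs":     (3, 4, 6),
    "external outputs":    (4, 5, 7),
    "external inquiries":  (3, 4, 6),
    "internal files":      (7, 10, 15),
    "external interfaces": (5, 7, 10),
}

def function_points(counts, gsc_ratings):
    # Step 1: unadjusted function count FC, a weighted sum over components.
    fc = sum(c * w for comp, triple in counts.items()
             for c, w in zip(triple, WEIGHTS[comp]))
    # Step 2: value adjustment factor from the 14 ratings (each 0..5).
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    return fc * vaf

counts = {"external inputs": (5, 10, 2), "external outputs": (4, 6, 1),
          "external inquiries": (3, 5, 0), "internal files": (2, 4, 1),
          "external interfaces": (1, 2, 0)}
print(function_points(counts, [3] * 14))  # all 14 factors rated average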

B. Non-Algorithmic Techniques

In non-algorithmic techniques, the software cost estimate is not calculated from a formula.

  • Top-Down Method of Estimation

    Top-down estimation is also called the macro model. In the top-down approach, the overall cost estimate for the project is derived from the global attributes of the software project, and the project is then partitioned into various low-level mechanisms or components. The Putnam model is the most popular technique that employs this strategy. The approach is better suited to early cost estimation, when only global attributes are known; it is particularly valuable then, because little detailed information is available in the early stages of software development.

  • Bottom-Up Method of Estimation

    In the bottom-up approach, the cost of each software component is estimated and the results are then combined to arrive at an estimate for the whole project. Its goal is to build a system estimate from the information gathered about small software components and their interactions. COCOMO's detailed model is the most popular technique that employs this methodology. A minimal sketch of the aggregation step appears after this list.

  • Analogy-Based Estimation

    Estimating by analogy compares the proposed project with similar completed projects for which development information is available. Actual data from the completed projects is extrapolated to estimate the proposed project. This strategy can be used at both the system level and the component level. The stages of estimating with this method are as follows (a sketch appears after the list) −

    • Establish the proposed project's characteristics.

    • Select the most similar completed projects from the historical database on the basis of those characteristics.

    • Derive the estimate for the proposed project by analogy from the cost of the most similar completed project.
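
First, the bottom-up aggregation step referenced above. The component names and figures are hypothetical; the point is only that component-level estimates, including the integration work between them, are summed into a project total.

# Bottom-up aggregation: per-component estimates (person-months) summed up.
components = {
    "auth service": 4.0,
    "reporting":    6.5,
    "data layer":   3.0,
    "integration":  2.5,  # the glue work between components is costed too
}

total = sum(components.values())
print(f"bottom-up total = {total} person-months")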
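
And the analogy-based procedure: the sketch below picks the most similar completed project by attribute distance and scales its actual effort by relative size. The historical records, attributes, and the linear size scaling are all illustrative choices.

import math

# Invented historical database: (name, size KLOC, team size, actual effort PM).
history = [
    ("billing-v2", 40, 6, 110),
    ("portal",     90, 12, 300),
    ("etl-suite",  25, 4, 60),
]

def estimate_by_analogy(size_kloc, team_size):
    def distance(record):
        _, s, t, _ = record
        return math.hypot(size_kloc - s, team_size - t)
    # Stage 2: pick the closest completed project by attribute distance.
    _name, analog_size, _team, analog_effort = min(history, key=distance)
    # Stage 3: scale the analog's actual effort linearly by the size ratio
    # (a simple, crude adjustment).
    return analog_effort * size_kloc / analog_size

print(estimate_by_analogy(50, 7))  # nearest analog is "billing-v2"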
