# Average Returns and Standard Deviation of Securities

In mathematics, the mean, or average, of a set of values is the sum of all the values divided by the number of values. The average return of a security is computed the same way: add up the periodic returns and divide by the number of periods.

Standard deviation (SD), on the other hand, is a measure of the dispersion of the data points from the mean; it shows how far the values are spread out from the average. SD measures the absolute variability of the data distribution.
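As a minimal sketch, both measures can be computed with Python's standard library. The return figures below are hypothetical, used only to illustrate the calculation:

```python
import statistics

# Hypothetical monthly returns (%) for a security
returns = [2.0, -1.5, 3.0, 0.5, 1.0]

# Mean (average) return: sum of the values divided by their count
mean_return = statistics.mean(returns)  # (2.0 - 1.5 + 3.0 + 0.5 + 1.0) / 5 = 1.0

# Sample standard deviation: dispersion of the returns around the mean
sd = statistics.stdev(returns)

print(f"Average return: {mean_return:.2f}%")
print(f"Standard deviation: {sd:.2f}%")
```

A higher `sd` for the same average return would indicate a more volatile, and therefore riskier, security.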

**Note** − SD is the most popular measure of variability and is used often to determine the volatility of financial instruments and returns on investment (ROI).

## Standard Deviation Versus Average Deviation

Standard Deviation is used to measure the volatility of returns from an investment fund or strategy. Higher volatility means a higher risk of losses, so investors expect higher returns from funds that exhibit higher volatility. For example, a stock index fund should have a relatively low SD compared with a growth fund.

**Note** − Standard Deviation is the most appropriate measure of variation when working with a sample from a population, where the mean is the best measure of center and the data are normally distributed.

The average deviation, or mean absolute deviation (MAD), is the closest alternative to Standard Deviation. It is also used to measure volatility in markets and financial instruments, but it is used less frequently than Standard Deviation.

Generally, for normally distributed data, that is, when there aren't many outliers, Standard Deviation is the preferable measure of variability. But when large outliers are present, Standard Deviation registers higher levels of dispersion than mean absolute deviation, because squaring the deviations amplifies the effect of extreme values; in such cases, the average deviation can be the more appropriate measure.
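The effect of an outlier on the two measures can be sketched with a small hypothetical sample. Because SD squares each deviation before averaging, a single extreme value inflates it far more than it inflates MAD (Python's `statistics` module has no MAD function, so one is defined here):

```python
import statistics

def mean_absolute_deviation(data):
    """Average of the absolute deviations from the mean (MAD)."""
    m = statistics.mean(data)
    return sum(abs(x - m) for x in data) / len(data)

# Hypothetical returns (%) containing one large outlier
with_outlier = [1.0, 2.0, 3.0, 4.0, 25.0]

sd = statistics.pstdev(with_outlier)          # population SD: squares deviations
mad = mean_absolute_deviation(with_outlier)   # MAD: uses absolute deviations

print(f"SD:  {sd:.2f}")   # noticeably larger than MAD
print(f"MAD: {mad:.2f}")
```

Here the mean is 7, so the outlier contributes a deviation of 18: MAD averages |18| along with the other deviations, while SD averages 18² = 324, which dominates the result.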

**Note** − Average deviation, or mean absolute deviation, is sometimes argued to be a better measure of variability when there are distant outliers or when the data is not distributed well enough.