Concurrency Problems in DBMS Transactions


A database management system (DBMS) allows several transactions to run simultaneously against a shared database. Concurrent execution offers a number of benefits, including higher system throughput and faster response times, but it also raises issues that must be resolved to ensure accurate and dependable database operation. In this post, we discuss concurrent execution in DBMSs and the problems it can cause.

Concurrent execution in a DBMS refers to the ability to run multiple transactions at the same time against a shared database. A transaction is a collection of database operations, such as inserting, updating, or deleting data, that is carried out as a single unit of work. Concurrent execution allows many transactions to access the same data at once, which can yield benefits such as higher system throughput and better response time.

In a DBMS, concurrent execution can introduce a number of issues, known as Concurrency Problems, that need to be resolved in order to guarantee accurate and dependable database operation. Some of the Concurrency Problems in DBMS include the following −

Lost Update

A lost update occurs when two or more transactions read the same data item and then update it at about the same time, so the final value depends on the order in which the transactions execute. Because each transaction is unaware of the other's change, the last write overwrites the earlier one and that update is lost. Lost updates can leave the data inconsistent and produce incorrect results.
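
The following sketch is a minimal plain-Python simulation (not a real DBMS) of the read-modify-write pattern behind a lost update: two threads each read a shared balance, compute a new value, and write it back, so one deposit silently overwrites the other. The variable names and amounts are made up for illustration.

import threading
import time

# Simulated database record (a single "balance" value), assumed for illustration.
balance = 100

def deposit(amount):
    """Read-modify-write without any locking -- the pattern that causes lost updates."""
    global balance
    local = balance           # the transaction reads the current value
    time.sleep(0.1)           # the other transaction interleaves here
    balance = local + amount  # write back, overwriting any concurrent update

t1 = threading.Thread(target=deposit, args=(50,))
t2 = threading.Thread(target=deposit, args=(30,))
t1.start(); t2.start()
t1.join(); t2.join()

# Expected 180 if the updates were serialized; here one deposit is typically lost,
# leaving 130 or 150 depending on which write happens last.
print("final balance:", balance)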

Dirty Read

A dirty read occurs when a transaction reads data that another transaction has modified but not yet committed. If the modifying transaction later rolls back, the value read by the first transaction never becomes part of the database, so any work based on it is invalid. Dirty reads can lead to data inconsistencies and incorrect results.
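
The sketch below simulates a dirty read in plain Python (not a real DBMS): the "committed" dictionary stands for durable data, the "working" copy stands for an uncommitted change, and the reading transaction sees a value that is later rolled back. All names are hypothetical.

# Minimal simulation of a dirty read.
committed = {"price": 100}      # durable, committed state
working = dict(committed)       # T1's private, uncommitted view

# T1 updates the price but has not committed yet.
working["price"] = 80

# T2 reads the uncommitted value (what READ UNCOMMITTED would allow).
seen_by_t2 = working["price"]
print("T2 saw:", seen_by_t2)    # 80 -- a value that may never be committed

# T1 rolls back: the working copy is discarded.
working = dict(committed)

print("committed price:", committed["price"])  # 100 -- T2 acted on invalid data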

Non-Repeatable Read

A non-repeatable read occurs when a transaction reads the same data item twice and another transaction updates that item between the two reads, so the second read returns a different value than the first. This can lead to inconsistent data and results within a single transaction.
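
Below is a small plain-Python simulation (not a real DBMS) of a non-repeatable read: transaction T1 reads the same item twice, and a concurrent committed update between the two reads makes the second read differ from the first. The "stock" column is an assumed example.

# Minimal simulation of a non-repeatable read.
database = {"stock": 10}

# T1: first read of the item.
first_read = database["stock"]

# T2 updates the same item and commits between T1's two reads.
database["stock"] = 7

# T1: second read of the same item returns a different value.
second_read = database["stock"]

print(first_read, second_read)  # 10 7 -- the read was not repeatable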

Phantom Read

A phantom read occurs when a transaction reads a set of rows that satisfy a given condition and another transaction then inserts or deletes rows that satisfy the same condition. When the first transaction reads the same set again, it finds rows that were not present the first time (or finds rows missing). This can lead to inconsistent results and data.
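
The following plain-Python sketch simulates a phantom read: T1 runs the same condition-based query twice, and a row inserted by another transaction in between makes a "phantom" row appear in the second result. The orders table and the amount > 100 condition are made up for illustration.

# Minimal simulation of a phantom read.
orders = [
    {"id": 1, "amount": 250},
    {"id": 2, "amount": 90},
]

def query_large_orders():
    """T1's query: all orders with amount greater than 100."""
    return [o for o in orders if o["amount"] > 100]

first_result = query_large_orders()    # 1 matching row

# T2 inserts a new row that satisfies T1's condition and commits.
orders.append({"id": 3, "amount": 500})

second_result = query_large_orders()   # 2 rows -- a phantom row appeared

print(len(first_result), len(second_result))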

Deadlock

In a DBMS, a deadlock occurs when two or more transactions are blocked, each waiting for another to release a resource it holds. Deadlocks typically arise when transactions acquire resources in different orders or fail to release them properly. They degrade system performance and, unless the DBMS detects them and aborts one of the transactions, can leave the affected transactions blocked indefinitely.
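
The sketch below reproduces the classic two-resource deadlock with Python threads (a simulation, not a real DBMS): each transaction locks one resource and then waits for the one held by the other. The acquire timeouts stand in for the deadlock detection a DBMS would perform; the lock and transaction names are hypothetical.

import threading
import time

# Two resources (e.g., two rows) protected by locks.
lock_a = threading.Lock()
lock_b = threading.Lock()

def transaction_1():
    with lock_a:                          # T1 locks resource A
        time.sleep(0.1)                   # T2 locks B in the meantime
        # T1 now needs B, which T2 holds; time out instead of waiting forever.
        if not lock_b.acquire(timeout=1):
            print("T1: deadlock detected, rolling back")
            return
        lock_b.release()

def transaction_2():
    with lock_b:                          # T2 locks resource B
        time.sleep(0.1)
        if not lock_a.acquire(timeout=1):
            print("T2: deadlock detected, rolling back")
            return
        lock_a.release()

t1 = threading.Thread(target=transaction_1)
t2 = threading.Thread(target=transaction_2)
t1.start(); t2.start()
t1.join(); t2.join()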

Starvation

In a DBMS, starvation occurs when a transaction is perpetually prevented from acquiring a resource or finishing its work because the resource keeps being granted to other transactions. Starvation can result when resources are not distributed fairly among transactions or when priorities are not managed correctly.
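
The toy scheduler below (an assumed illustration, not a real DBMS lock manager) grants locks strictly by priority. Because a new high-priority transaction keeps arriving before each grant, the low-priority transaction T_low never gets the lock, which is exactly the starvation scenario described above.

import heapq

# Waiting transactions as (priority, name); lower number = higher priority.
waiting = [(1, "T_high_1"), (5, "T_low")]
heapq.heapify(waiting)

for step in range(5):
    priority, txn = heapq.heappop(waiting)
    print(f"step {step}: granting lock to {txn}")
    # Another high-priority transaction arrives before the next grant,
    # so T_low (priority 5) is starved under strict priority scheduling.
    heapq.heappush(waiting, (1, f"T_high_{step + 2}"))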

Conclusion

In conclusion, concurrent execution in a DBMS can offer a number of advantages, including faster response times and higher system throughput, but it also introduces problems that must be resolved to ensure accurate and dependable database operation. Lost updates, dirty reads, non-repeatable reads, phantom reads, deadlocks, and starvation are the main Concurrency Problems that arise from concurrent execution in a DBMS. Concurrency control techniques such as locking, timestamp ordering, and optimistic concurrency control are used to avoid these issues, and the best choice depends on the specific needs of the DBMS and the application it supports. Concurrency Problems must be managed properly to guarantee a DBMS's accuracy and dependability.
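
As a simple illustration of lock-based concurrency control, the lost-update sketch from earlier can be repaired by serializing the read-modify-write with a lock, which here stands in for the row- or item-level locking a DBMS would apply. The names and amounts are the same hypothetical ones used above.

import threading
import time

balance = 100
row_lock = threading.Lock()   # stands in for a DBMS lock on the record

def deposit(amount):
    global balance
    with row_lock:            # serialize the read-modify-write
        local = balance
        time.sleep(0.1)
        balance = local + amount

threads = [threading.Thread(target=deposit, args=(a,)) for a in (50, 30)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final balance:", balance)  # always 180 -- no update is lost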
