Theory Behind NoSQL

1.1 CAP

The CAP theorem was first presented by Eric Brewer in 2000 and is the basic theorem describing any distributed system (Brewer, 2000). Figure 1 explains what CAP stands for.

 


Figure 1. CAP stands for Consistency, Availability and Tolerance to network Partitions.

  • Consistency – Every node in the system holds the same data as the other nodes.
  • Availability – Every request receives a response, even if a node goes down.
  • Partition Tolerance – The system continues to operate even if there is a partition, i.e. a communication break between nodes (Brewer, 2000).

In practice it is impossible to meet all three guarantees at once. If consistency is prioritized, a write operation may fail because part of the system is unavailable. If availability is prioritized, a read operation may not return the latest written value. The focus of the system determines the strategy, and normally a combination of two guarantees is chosen.

1.2 Selection of CAP guarantees


Figure 2. Databases pick two guarantees from CAP.

Figure 2 shows how the databases discussed in this paper make their choice. Note that although each database picks two guarantees, this does not mean it loses everything in the third.

The possible selections are explained below; a toy sketch of the CP versus AP behaviour follows the list.

  • CA – Data on all nodes is the same (consistency), and read or write operations can be served by any node (availability). However, if a communication break occurs in the system, the data will not be synchronized, and will not be re-synchronized even when the partition is resolved.
  • CP – Consistency and partition tolerance are guaranteed. If a partition occurs, some nodes are treated as if they were down in order to preserve consistency (losing availability).
  • AP – Nodes can still be reached even when there is a communication failure, and the data is re-synchronized once the communication break is resolved.
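
To make the CP/AP distinction concrete, here is a toy sketch in Python (all class and method names are invented for illustration, and real systems are far more involved): a node following a CP policy rejects writes during a partition, while an AP node accepts them locally and re-synchronizes once the partition heals.

```python
# Toy sketch (not a real database): how a node might behave during a network
# partition under a CP policy versus an AP policy. All names are made up.
class Unavailable(Exception):
    pass

class Node:
    def __init__(self, policy):
        self.policy = policy          # "CP" or "AP"
        self.data = {}                # local copy of the data
        self.pending = []             # writes accepted while partitioned (AP only)
        self.partitioned = False      # True while the node cannot reach its peers

    def write(self, key, value):
        if self.partitioned and self.policy == "CP":
            # CP: refuse the request rather than risk divergent replicas.
            raise Unavailable("partition in progress, write rejected")
        self.data[key] = value
        if self.partitioned:
            # AP: accept the write locally and remember it for later re-sync.
            self.pending.append((key, value))

    def heal(self, peers):
        # Once the partition is resolved, an AP node replays the writes it
        # accepted locally to the peers it could not reach (last write wins here).
        for key, value in self.pending:
            for peer in peers:
                peer.data[key] = value
        self.pending.clear()
        self.partitioned = False

cp, ap, peer = Node("CP"), Node("AP"), Node("AP")
cp.partitioned = ap.partitioned = True
ap.write("x", 1)                      # accepted, re-synced later
try:
    cp.write("x", 1)                  # rejected: consistency chosen over availability
except Unavailable as err:
    print(err)
ap.heal([peer])
print(peer.data)                      # {'x': 1}
```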

1.3 ACID and BASE

To guarantee database transactions, a set of properties known as ACID is applied.

ACID stands for:

  • Atomicity – Either everything in a transaction succeeds, or the whole transaction fails.
  • Consistency – A transaction moves the database from one consistent state to another; a consistent state is one in which the data meets all integrity constraints.
  • Isolation – Transactions are isolated from one another, so concurrent transactions do not affect each other.
  • Durability – Once data is committed, it keeps its state even in the event of errors, crashes or power loss (Gray & Reuter, 1993).

In order to handle transactions and keep data safe, traditional RDBMSs follow the ACID principles. NoSQL databases, on the other hand, need to guarantee availability and scalability so that much larger volumes of data can be stored on a distributed set of servers working together. ACID cannot deliver these properties, which is where BASE comes in.
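
As a small illustration of atomicity and consistency, the following hedged sketch uses Python's built-in sqlite3 module; the accounts table and the insufficient-funds check are invented for the example. The failed transfer is rolled back as a whole, leaving the database in its previous consistent state.

```python
# Minimal sketch of ACID atomicity with Python's built-in sqlite3 module:
# both updates of a money transfer commit together, or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # opens a transaction, commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'")
        # A violated constraint (simulated here with a manual check) aborts the
        # whole transaction, so the first UPDATE is undone as well.
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE name = 'bob'")
except ValueError as err:
    print("transaction rolled back:", err)

print(dict(conn.execute("SELECT name, balance FROM accounts")))  # balances unchanged
```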

BASE stands for Basically Available, Soft State and Eventual Consistency (Mapanga & Kadebu, 2013).

  • Basically Available – The system gives up the strict availability of the CAP theorem, which means a request may fail to be answered while the data is in an inconsistent or changing state; but availability is basically guaranteed.
  • Soft State – The state of the system can change over time because of the eventual consistency property. Partition tolerance is supported, but replicas may be out of sync for some time.


Figure 3. Eventual consistency model

  • Eventual Consistency – In figure 3, suppose DataN is written to Node A. The system guarantees that any read operation eventually obtains the latest value DataN, provided no other write reaches Node A, B or C before the reads on Node B and C complete. However, temporary inconsistency can occur because of network delay, system load or the number of replicas; a toy sketch follows.
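
Below is a minimal sketch of eventual consistency in plain Python; the node names and the replication queue are invented, and real systems use replication logs, quorums or gossip instead. The write to Node A returns immediately, and Node B serves stale data until replication catches up.

```python
# Toy model of eventual consistency: a write to node A is replicated to B and
# C asynchronously, so a read on B may briefly return stale data.
class Node:
    def __init__(self, name):
        self.name, self.value = name, None

a, b, c = Node("A"), Node("B"), Node("C")
replication_queue = []

def write(node, value):
    node.value = value
    # Replication is deferred: the other nodes are updated "eventually".
    replication_queue.extend((peer, value) for peer in (a, b, c) if peer is not node)

def flush_replication():
    while replication_queue:
        peer, value = replication_queue.pop(0)
        peer.value = value

write(a, "DataN")
print(b.value)            # None: B has not received the update yet (stale read)
flush_replication()       # the replication lag ends
print(b.value, c.value)   # DataN DataN: all replicas have converged
```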


Figure 4. Strong consistency model

Strong consistency, by contrast, behaves quite differently.

  • Strong consistency – In figure 4, if DataN is written to Node A, the latest value DataN is always returned by Node B and C. However, in order to ensure consistency, read operations are blocked until all replicas have been updated; a contrasting sketch follows.
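
For contrast, here is an equally minimal sketch of strong consistency under the same invented names: the write does not return until every replica has applied it, so no read can observe a stale value. Real systems achieve this with consensus protocols or blocking replication, which this toy omits.

```python
# Toy model of strong consistency: the write to node A does not return until
# every replica has applied it, so any later read sees the latest value.
class Node:
    def __init__(self, name):
        self.name, self.value = name, None

    def apply(self, value):
        self.value = value            # in a real system, an acknowledged RPC

a, b, c = Node("A"), Node("B"), Node("C")

def write(node, value, replicas):
    node.value = value
    for peer in replicas:             # synchronous replication: block until all
        peer.apply(value)             # replicas have acknowledged the update

write(a, "DataN", [b, c])
print(b.value, c.value)               # DataN DataN: no window for a stale read
```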

The BASE model stands in contrast to the ACID model, as it sacrifices strong consistency to achieve availability and reliability. Relational databases are designed around a strong consistency guarantee and, to some extent, give up scalability and performance for it. Modern companies such as Google, Amazon and Twitter care more about scalability, availability and performance, and as a result most NoSQL databases follow the BASE model.

1.4 Concurrency control

Suppose a data record is accessed by two users. There is no problem when they read the record at the same time, but what happens if they update it simultaneously? A collision occurs. Concurrency control deals with this issue and allows multiple users to access the same data safely at the same time.

Concurrency control in databases normally works together with transactions. A transaction is a series of data operations that has to follow the ACID guarantees. Most relational database management systems, such as MySQL, support transactions, since RDBMS users consider consistency and integrity of data a high priority. However, not all modern databases support transactions; MongoDB is one example. Operations on a single document are always atomic in MongoDB, but operations spanning multiple documents are not, so multi-document transactions cannot be executed. Fortunately, MongoDB can apply two-phase commits to offer transaction-like semantics (MongoDB online manual – Perform Two Phase Commits). The other NoSQL databases discussed in this paper, Redis, HBase and Neo4j, do support transactions.
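
The sketch below follows the two-phase-commit pattern described in the MongoDB manual, written with the pymongo client; the collection names, account ids and amount are invented, and the recovery logic the manual describes for interrupted transfers is omitted for brevity.

```python
# Hedged sketch of the two-phase-commit pattern from the MongoDB manual,
# using the pymongo client (assumes a local mongod instance).
from pymongo import MongoClient

db = MongoClient()["bank"]

def transfer(source, dest, amount):
    # Phase 0: record the intended transfer in a transactions collection.
    txn_id = db.transactions.insert_one(
        {"source": source, "dest": dest, "amount": amount, "state": "initial"}
    ).inserted_id

    # Phase 1: apply the transfer to both accounts, marking each account with
    # the pending transaction so the operation cannot be applied twice.
    db.transactions.update_one({"_id": txn_id}, {"$set": {"state": "pending"}})
    db.accounts.update_one(
        {"_id": source, "pendingTransactions": {"$ne": txn_id}},
        {"$inc": {"balance": -amount}, "$push": {"pendingTransactions": txn_id}})
    db.accounts.update_one(
        {"_id": dest, "pendingTransactions": {"$ne": txn_id}},
        {"$inc": {"balance": amount}, "$push": {"pendingTransactions": txn_id}})

    # Phase 2: commit, clear the pending markers, then mark the transfer done.
    db.transactions.update_one({"_id": txn_id}, {"$set": {"state": "committed"}})
    for account in (source, dest):
        db.accounts.update_one({"_id": account},
                               {"$pull": {"pendingTransactions": txn_id}})
    db.transactions.update_one({"_id": txn_id}, {"$set": {"state": "done"}})

transfer("A", "B", 100)
```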

If interleaved operations are allowed without proper concurrency control, problems such as the lost update problem, the dirty read problem and the incorrect summary problem can occur.

  • The lost update problem – in figure 5, transaction C updates the value to 3, but because a concurrent transaction overwrites that update, a subsequent read returns 8; the update is lost. A small threaded demonstration appears after this list.


Figure 5. The lost update problem

  • The dirty read problem – transaction B reads a value written by transaction A, but transaction A then aborts and rolls that value back, so transaction B has read a value that should never have been visible.
  • The incorrect summary problem – transaction A holds data items X (10) and Y (20) and adds 10 to each. At the same time, a second transaction is reading those items. After transaction A finishes, the items are X (20) and Y (30), but the reading transaction may see X (20) and Y (20), even though Y should be 30.
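
The threaded demonstration below reproduces the lost update problem in plain Python: two threads perform an unprotected read-modify-write on a shared counter, so some increments may be overwritten; adding a lock (a pessimistic approach) serializes the critical section and preserves every update.

```python
# Small demonstration of the lost update problem with two threads.
import threading

counter = 0
lock = threading.Lock()

def add(iterations, use_lock):
    global counter
    for _ in range(iterations):
        if use_lock:
            with lock:                # pessimistic: read-modify-write is serialized
                counter += 1
        else:
            value = counter           # read ...
            counter = value + 1       # ... then write: another thread may have
                                      # updated counter in between (lost update)

def run(use_lock):
    global counter
    counter = 0
    threads = [threading.Thread(target=add, args=(100_000, use_lock)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without lock:", run(False))    # often below 200000 when the race manifests
print("with lock:   ", run(True))     # always 200000
```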

Concurrency control mechanisms fall into two broad categories (Bernstein, Hadzilacos & Goodman, 1987):

  • Pessimistic locking blocks every operation that could violate data integrity. It assumes that the probability of another user trying to read or change the object you are working on is high, so it locks the object until you have committed your update. Pessimistic locking certainly strengthens operational integrity, but the drawback is that locks may be held for a long time, restricting access by other users.
  • Optimistic locking only checks whether data integrity has been violated at commit time. Contrary to pessimistic locking, it assumes that the probability of another user trying to read or change the same object is low, so the object stays unlocked while you read or modify it and is only locked when you are ready to commit the change. Concurrent performance improves, but the dirty read problem is not solved. If a second user changes the object after the first user read it, the database detects the conflict when the first user tries to commit, and the first user has to re-read the object and redo the change.

The choice between the two categories depends on the application requirements. If conflicts are rare but dirty reads are unacceptable, pessimistic locking is preferred. If there is a great deal of concurrent access, pessimistic locking can degrade performance significantly, and optimistic locking is preferred.
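
A common way to implement optimistic locking is a version column that is checked at commit time. The hedged sketch below uses Python's sqlite3 module with an invented items table: the conditional UPDATE only succeeds if the version read earlier is still current, otherwise the caller re-reads and retries.

```python
# Hedged sketch of optimistic locking with a version column, using sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT, version INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 'initial', 1)")
conn.commit()

def read_version():
    return conn.execute("SELECT version FROM items WHERE id = 1").fetchone()[0]

def try_update(new_value, version_seen):
    # The WHERE clause is the integrity check done at commit time: it matches
    # only if no other writer bumped the version since version_seen was read.
    cur = conn.execute(
        "UPDATE items SET value = ?, version = version + 1 "
        "WHERE id = 1 AND version = ?", (new_value, version_seen))
    conn.commit()
    return cur.rowcount == 1          # False: conflict, caller re-reads and retries

v = read_version()
# A concurrent writer sneaks in between our read and our write:
conn.execute("UPDATE items SET value = 'other writer', version = version + 1 WHERE id = 1")
conn.commit()
print(try_update("our change", v))                  # False: we lost the race
print(try_update("our change", read_version()))     # True on the retry
```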

There are many specific concurrency control methods (Bernstein, Hadzilacos & Goodman, 1987).

  • Locking methods

Read lock – If one user holds a read lock on an object, other users can also read that object, but any write operation on it is blocked. Figure 6 illustrates how a read lock works.


Figure 6. A flow chart to explain how read lock is performed.

Write lock – If one user holds a write lock, other users can neither read nor write the object. Figure 7 illustrates how a write lock works; a minimal readers-writer lock sketch follows the figure.


Figure 7. A flow chart to explain how write lock is performed.
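
The sketch below hand-rolls a minimal readers-writer lock in Python (the standard threading module only provides a plain Lock, so this class is written for illustration): any number of readers may hold the lock together, while a writer needs exclusive access.

```python
# Minimal readers-writer lock sketch: many readers at once, writers exclusive.
import threading

class ReadWriteLock:
    def __init__(self):
        self._readers = 0
        self._readers_lock = threading.Lock()   # protects the reader counter
        self._write_lock = threading.Lock()     # held by a writer, or by the readers

    def acquire_read(self):
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:               # the first reader blocks writers
                self._write_lock.acquire()

    def release_read(self):
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:               # the last reader lets writers in
                self._write_lock.release()

    def acquire_write(self):
        self._write_lock.acquire()               # excludes readers and other writers

    def release_write(self):
        self._write_lock.release()

rw = ReadWriteLock()
rw.acquire_read(); rw.acquire_read()   # two concurrent readers are fine
rw.release_read(); rw.release_read()
rw.acquire_write()                     # exclusive: no readers, no other writers
rw.release_write()
```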

  • Multiversion concurrency control (MVCC) generates a new version of a database object every time the object is written, and reads return the most recent version visible to the reader, based on timestamp order or an increasing transaction ID; a toy MVCC sketch follows figure 9.

Consider the two concurrent writes shown in figure 8.


Figure 8. Two writes to the same row

How MVCC handles these writes, together with a row-level lock, is illustrated in figure 9.


Figure 9. A flow chart to explain how MVCC is performed.
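
The toy store below sketches the MVCC idea in Python: writes append new versions tagged with an increasing transaction ID, and reads pick the newest version not newer than the reader's snapshot. All names are invented, and real engines add commit visibility rules, garbage collection and locking on top of this.

```python
# Toy MVCC store: writers never overwrite, readers are never blocked.
import itertools

class MVCCStore:
    def __init__(self):
        self._versions = {}                     # key -> list of (txid, value)
        self._txid = itertools.count(1)

    def begin(self):
        return next(self._txid)                 # snapshot = current transaction id

    def write(self, txid, key, value):
        # Writers append a new version of the row instead of overwriting it.
        self._versions.setdefault(key, []).append((txid, value))

    def read(self, txid, key):
        # Readers pick the newest version whose transaction id is not newer
        # than their own snapshot, so they never wait for writers.
        candidates = [(t, v) for t, v in self._versions.get(key, []) if t <= txid]
        return max(candidates)[1] if candidates else None

store = MVCCStore()
t1 = store.begin(); store.write(t1, "row", "v1")
t2 = store.begin()                              # snapshot taken before t3 writes
t3 = store.begin(); store.write(t3, "row", "v2")
print(store.read(t2, "row"))                    # v1: t2 does not see t3's write
print(store.read(t3, "row"))                    # v2: t3 sees its own write
```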

Given the different features of the databases, their concurrency control methods vary.

  • MySQL supports row-level locking and MVCC in the InnoDB storage engine, and whole-table locking in the MyISAM storage engine. The two lock granularities are the table lock (all rows in the locked table are locked) and the row lock (only some rows of a table are locked; the others remain available).
  • As Redis runs in single-process, single-thread mode, it queues commands so that concurrent access becomes serial access; there is no locking concept in Redis (a check-and-set example using the redis-py client follows this list).
  • MongoDB uses read locks and write locks for concurrency control. However, read and write operations can yield their locks; MongoDB uses adaptive algorithms based on predicted disk access patterns to decide when to do so (MongoDB FAQ).
  • HBase provides highly efficient concurrency control through row-level locks and MVCC.
  • Neo4j applies a default locking behaviour: write locks are taken when properties, nodes or relationships are changed (Neo4j Manual).
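
Although Redis executes commands serially, the redis-py client exposes the WATCH/MULTI/EXEC check-and-set pattern, which behaves like optimistic locking for read-modify-write sequences. The sketch below assumes a locally running Redis server and the redis-py package; the key name is invented.

```python
# Check-and-set (optimistic) read-modify-write with redis-py.
import redis

r = redis.Redis()

def safe_increment(key):
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)                 # abort EXEC if the key changes
                current = int(pipe.get(key) or 0)
                pipe.multi()                    # queue the write atomically
                pipe.set(key, current + 1)
                pipe.execute()                  # raises WatchError on conflict
                return current + 1
            except redis.WatchError:
                continue                        # another client won the race: retry

print(safe_increment("counter"))
```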

References

Bernstein, P. A., Hadzilacos, V., & Goodman, N. (1987). Concurrency control and recovery in database systems (Vol. 370). New York: Addison-Wesley.

Brewer, E. A. (2000, July). Towards robust distributed systems. In PODC (p. 7).

Gray, J., & Reuter, A. (1993). Transaction processing: Concepts and techniques. San Francisco: Morgan Kaufmann.

Mapanga, I., & Kadebu, P. (2013). Database Management Systems: A NoSQL Analysis. International journal of Modern Communication Technologies and Research, 1(7).
