Replicated data are employed in distributed databases to enhance data availability. However, this benefit is realized only at the cost of elaborate algorithms that hide the underlying complexity of maintaining multiple copies of a single data item. The difficulty lies in keeping the copies consistent in the face of system failures while simultaneously maximizing data availability. The algorithms that address these problems are called replica control algorithms. Although replica control has been the subject of extensive research for some time, it has yet to fulfil its promise in practical applications. A major reason for this lack of acceptance is that the performance impact of these protocols cannot be easily quantified, since few performance figures are available for commercial database systems that support copies. A natural question arises: is replication worth it? This question is the focus of our investigation in this paper.