**End-to-end recovery times of 1-2 seconds for 35 GB.**
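A quick back-of-the-envelope on what that figure implies (a minimal sketch; the number of recovery masters below is my own illustrative assumption, not a figure from these notes): recovering 35 GB in roughly 1.5 s requires tens of GB/s of aggregate bandwidth, which is only plausible if the work is spread across many recovery masters and backups in parallel.

```python
# Rough sanity check of what "35 GB in 1-2 seconds" implies.
# The recovery_masters value is an assumption for illustration only.
data_gb = 35
recovery_s = 1.5                 # midpoint of the 1-2 second figure
recovery_masters = 60            # assumed number of machines sharing the work

aggregate_gbps = data_gb / recovery_s
per_master_mbps = aggregate_gbps / recovery_masters * 1024

print(f"aggregate recovery bandwidth: ~{aggregate_gbps:.0f} GB/s")
print(f"per recovery master:          ~{per_master_mbps:.0f} MB/s")
```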
# Interesting notes
- Scattered backups increase the possibility of data loss compared to keeping all of a segment's backups on the same replicas: with 2 backups on a fixed pair of machines, data is lost only if that exact set of three machines (the master plus both backups) fails together, whereas with scattering, almost any three simultaneous failures will take out some segment's full replica set (see the probability sketch after this list).
- Availability comes from fast recovery rather than from weakening consistency.
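A small sketch of that probability argument under my own simplified assumptions (cluster size, segment count, and the 3-copy placement model below are illustrative, not numbers from the paper): with concentrated placement, only one specific 3-machine failure loses data; with random scattering across many segments, a given 3-machine failure is far more likely to cover some segment's replicas.

```python
# Back-of-the-envelope comparison of concentrated vs. scattered replica
# placement, assuming each segment has a master plus 2 backups (3 copies)
# and exactly 3 machines fail simultaneously. N and S are assumptions.
from math import comb

N = 100          # cluster size (assumed)
S = 10_000       # number of segments when replicas are scattered (assumed)

triples = comb(N, 3)            # number of possible 3-machine failure sets

# Concentrated: every segment's 3 copies live on one fixed triple of
# machines, so data is lost only if that exact triple fails.
p_concentrated = 1 / triples

# Scattered: each segment's 3 copies land on an independent random triple,
# so data is lost if the failed triple matches *any* segment's triple.
p_scattered = 1 - (1 - 1 / triples) ** S

print(f"possible failure triples: {triples}")
print(f"P(data loss | 3 failures), concentrated: {p_concentrated:.2e}")
print(f"P(data loss | 3 failures), scattered:    {p_scattered:.2%}")
```

With these assumed numbers the concentrated case loses data with probability on the order of 10^-6 per triple failure, while the scattered case is in the several-percent range, which is the intuition the bullet above is pointing at.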
## Comments
- I didn't see the paper touch on many limitations of their approach. While it seems good, are there any less obvious drawbacks?
RPC latency is one, but can they really control that if they're using TCP?
- Are there any security risks with this approach? Could ORAM be used to make it more secure?
- If we wanted to construct a worst-case recovery scenario for their DRAM-based approach, what would the absolute worst case be?
Would it be if all the data were stored on a single disk and that disk failed?
Or that same single-disk failure, but with the data partitioned across multiple recovery leaders?
# Implementing Linearizability at Large Scale and Low Latency