- **low-latency reads:** for geo-replicated clients
Complaint: it's impossible to have everything, everywhere, all at once:
- strict serializability
- low latency writes
- high throughput
...*for workloads with physical region locality in data access*
"order of magnitude reduction of latency relative to state of the art"
- *what are they giving up...*
## Definitions
- strict serializability
- one-copy serializable
- "if X starts after Y completes, then X follows Y in serial order"
- real-time ordering
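A tiny sketch of the real-time ordering constraint (class and timestamps are hypothetical, for illustration only):

```python
# Sketch: checking the real-time ordering component of strict
# serializability.  Txn and its timestamps are hypothetical.
from dataclasses import dataclass

@dataclass
class Txn:
    id: str
    start: float   # real time at which the txn began
    commit: float  # real time at which it completed

def respects_real_time(serial_order: list) -> bool:
    """If X starts after Y completes, X must follow Y in the serial order."""
    pos = {t.id: i for i, t in enumerate(serial_order)}
    return not any(
        x.start > y.commit and pos[x.id] < pos[y.id]
        for y in serial_order for x in serial_order
    )

# Y completes (t=1) before X starts (t=2): X must come after Y.
y, x = Txn("Y", 0.0, 1.0), Txn("X", 2.0, 3.0)
assert respects_real_time([y, x]) and not respects_real_time([x, y])
```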
Give up strict serializability:
- Dynamo [26], Cassandra [47], and Riak [3] use eventual consistency
- PNUTS [20] supports timeline consistency
- COPS [51], Eiger [52], and Contrarian [27] support a variation of causal consistency
- Lynx (Transaction Chains) [95] supports non-strict serializability with read-your-writes consistency [81]
- Walter [77], Jessy [8], and Blotter [56] support variations of snapshot consistency
Give up low latency writes:
- Calvin, Spanner, CockroachDB, MDCC, Cosmos DB: At least one cross-region trip to commit a transaction.
Give up high throughput:
- "one region declared master for a data item" - no need for synchronous replication
- NuoDB, G-Store re-master on the fly to concentrate a transaction's data at a single site
# SLOG
- does not require re-mastering to process multi-home transactions (unlike NuoDB/G-Store)
- uses deterministic execution
- but:
  - requires the entire transaction in advance
  - including the *data* to be accessed
  - discovered via static analysis and speculative execution
SLOG is implemented on top of Calvin and therefore inherits these requirements.
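Why the full access set is needed up front: in Calvin-style deterministic locking, locks are requested strictly in log order, so the scheduler must know every key a transaction will touch before it runs. A minimal sketch, with names of my own choosing rather than Calvin's actual API:

```python
# Sketch: Calvin-style deterministic lock scheduling (simplified).
# Lock requests are enqueued strictly in input-log order, so the
# scheduler needs each txn's full read/write set *before* execution --
# hence static analysis / speculative execution to discover it.
from collections import defaultdict, deque

class DeterministicLockManager:
    def __init__(self):
        self.queues = defaultdict(deque)  # key -> FIFO of waiting txn ids

    def request_all(self, txn_id, keys):
        """Called once per txn, in log order, with its FULL access set."""
        for key in sorted(keys):
            self.queues[key].append(txn_id)

    def runnable(self, txn_id, keys):
        """A txn may execute once it heads every queue it waits in."""
        return all(self.queues[key][0] == txn_id for key in keys)

lm = DeterministicLockManager()
lm.request_all("T1", {"a", "b"})   # T1 precedes T2 in the log
lm.request_all("T2", {"b"})
assert lm.runnable("T1", {"a", "b"}) and not lm.runnable("T2", {"b"})
```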
Low-latency requires either:
- async replication, or
- sync replication, but only in nearby regions
SLOG doesn't require re-mastering to process multi-home transactions, but:
- assumes the system dynamically re-masters data to match access characteristics
# Overview
- each region has local **input log** for transactions that modify data mastered at that region.
- local logs are replicated to other regions
- each region is expected to be able to reconstruct input log from other regions
- each region has a *view* of global log
- within a region execution is straight Calvin
data replication across regions is *not strictly necessary*, since:
- master region oversees all writes and linearizable reads
however, this replication supports:
- other regions can support local snapshot reads at any point in version history
- dynamic re-mastering less painful
If all transactions are single-home, then:
- transactions in different input logs (regions) cannot conflict
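A toy sketch of the global log view (structure and merge policy are assumptions; the paper's actual interleaving policy may differ). Any merge that preserves each local log's internal order is a valid view, because single-home xtions in different local logs access disjoint, differently-mastered data:

```python
# Sketch: a region's view of the global log (hypothetical structure).
import itertools

local_logs = {
    "region0": ["T1", "T2"],  # xtions homed at region 0, in log order
    "region1": ["T3", "T4"],  # xtions homed at region 1, in log order
}

def merge_round_robin(logs):
    """One valid interleaving among many: round-robin across local logs."""
    merged = []
    for batch in itertools.zip_longest(*logs.values()):
        merged.extend(t for t in batch if t is not None)
    return merged

print(merge_round_robin(local_logs))  # ['T1', 'T3', 'T2', 'T4']
```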
Assumptions:
- a region can determine the precise data accessed by a transaction before processing it

## Multi-homed
- all multi-home transactions must be globally ordered
- approaches include:
1. single ordering server
1. global paxos
1. **sending all multi-home transactions to the same region to be ordered**
   - how is this different from (1)?
- the multi-home xtion is inserted into a region
- if local region is home for any granules, create **LockOnlyTxn** and put into local log (no execution code, just locks)
- a single-home xtion appears only once in the global region log
- a multi-home xtion appears several times in the global region log:
once for its code, plus one LockOnlyTxn from each region that homes
data for that xtion
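A rough sketch of the split (representation and names are my assumption, not the paper's code):

```python
# Sketch: splitting a multi-home xtion into LockOnlyTxns.  The full
# xtion, with code, is logged once; each home region also logs a
# code-free LockOnlyTxn covering just the granules it masters.
from dataclasses import dataclass

@dataclass(frozen=True)
class LockOnlyTxn:
    txn_id: str
    home_region: str
    keys: frozenset  # granules mastered at home_region

def split_multi_home(txn_id, access_set, master_of):
    """Group the access set by home region, one LockOnlyTxn per region."""
    by_home = {}
    for key in access_set:
        by_home.setdefault(master_of[key], set()).add(key)
    return [LockOnlyTxn(txn_id, region, frozenset(keys))
            for region, keys in sorted(by_home.items())]

master_of = {"a": "region0", "b": "region1"}
for lot in split_multi_home("T2", {"a", "b"}, master_of):
    print(lot)  # T2 contributes one LockOnlyTxn to each home's local log
```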

figure:
- T2<sub>0</sub> after T1 in region 0
- between T3 and T4 in region 1
- T3, T2<sub>1</sub>, T4, T2<sub>0</sub>
- doesn't matter if LockOnlyTxns for a single xtion are ordered
differently in different regions, as *they access disjoint data*
and therefore *commute*
*regions do not diverge because order at each region obeys partial
ordering of all other local logs*
ordering of single-home xtions (w/ different homes) also commutes: T1,
T3 above
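A toy check of the commutativity claim (illustrative only):

```python
# Toy check: the LockOnlyTxn pieces of one xtion touch disjoint keys, so
# applying them in either order yields the same state -- regions that
# interleave them differently do not diverge.
def acquire(locks, keys, txn_id):
    """Record txn_id as holder of each key (disjoint sets: no clashes)."""
    return {**locks, **{k: txn_id for k in keys}}

r0_keys, r1_keys = {"a"}, {"b"}  # T2's granules, homed at different regions
one_way = acquire(acquire({}, r0_keys, "T2"), r1_keys, "T2")
other_way = acquire(acquire({}, r1_keys, "T2"), r0_keys, "T2")
assert one_way == other_way  # the two orders commute
```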
**1-copy-serializability**: a schedule is 1-copy-serializable if it is
conflict-equivalent to a serial ordering in a system w/ a single server
## Re-mastering
complicated: how/when to re-master are local decisions, but in-flight
xtions may have observed stale mastership...
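A simplified sketch of (what I take to be) the counter-based fix: each granule's metadata carries a mastership counter that re-mastering increments, and an in-flight xtion annotated with a stale counter is aborted and restarted. Details assumed; the full protocol (e.g. what happens when the xtion's counter is *ahead* of the region's view) is omitted:

```python
# Sketch: counter-based validation of mastership for in-flight xtions.
granule_meta = {"a": ("region0", 3)}  # key -> (master, counter)

def validate(annotations):
    """Abort xtions whose mastership metadata predates a re-mastering."""
    for key, (_master, counter) in annotations.items():
        if counter < granule_meta[key][1]:
            return "abort-and-restart"
    return "proceed"

print(validate({"a": ("region0", 3)}))  # proceed
granule_meta["a"] = ("region1", 4)      # "a" re-mastered to region 1
print(validate({"a": ("region0", 3)}))  # abort-and-restart
```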
So SLOG achieves:
- strict serializability
- low latency writes (for some transactions)
- high throughput, even w/ high contention