Curiosity is bliss

Julien Couvreur's programming blog and more

Git Internals - branches and remotes

My previous article on Git Internals described the object model for a single repository. But how do distributed repositories work together?
As I’ll try to explain, immutability is the foremost key.

DAG of commits

The core design of Git revolves around building a graph of commits where each commit points to its parent(s) commit(s) and to a tree of objects (representing files and folders). Commits and tree objects are immutable; they can be added, but never modified.
This immutability (and the fact that all those objects have globally unique content-based identifiers) makes it safe for people across the world to party on this graph.
Each contributor is just adding new commits to the graph, each with a new object tree. The new commits can reference existing commits and the new object trees can reference existing tree objects. All those new objects can then be safely shared with others without conflicts. At the same time, no single Git instance has the complete view of the graph that is getting built.

References

Not everything in Git is immutable though. Branch references, which are also simply called branches, are updateable references to commits.
The key to avoiding distributed conflicts is clear ownership: a repository can only modify branches it owns, and receive updates for other branches from their owners.
Branch names are namespaced, so you can tell which ones each remote repository owns and which ones your local instance owns. If your repository is linked to “remote1” and “remote2”, their branches will be named “remote1/blah” and “remote2/foo”, while your local branches will simply be named “bar”.
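
For instance, with the two hypothetical remotes above configured, listing branches shows this namespacing directly:

    git branch      # local branches you own, e.g. bar
    git branch -r   # remote-tracking branches, e.g. remote1/blah, remote2/foo
    git branch -a   # both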

Fetch, merge, rebase, push and pull

We’ll now look at some operations and how they affect the commit graph and the branch references.

Fetch gets updates from a remote repository. You will get updated branch references and all the objects necessary to complete their history.
This does not update your own repository’s branches and therefore is conflict-free.
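
For example, assuming a single remote named origin, a fetch only moves the origin/* references:

    git fetch origin               # download new objects and update the origin/* references
    git log master..origin/master  # commits now in origin/master but not yet in your local master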

On the other hand, merge and rebase will update one of your repository’s own branches. Both merge and rebase are designed to handle divergence between two branches. Those could be two local branches, but I’ll explain the case where your local branch added commits and its corresponding remote branch added other commits.

Merge will create a new commit with two parents: the commit referenced by the remote branch and the one referenced by your local branch. It is generated by considering all changes since their common ancestor commit, and may require manual intervention to resolve conflicts. Your local branch is then updated to reference this new commit once it is created.
The degenerate case where your branch had no new commits is simpler: your local branch was the common ancestor, so it is simply updated to match the remote branch without creating a new commit. This is called a fast-forward.
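
A minimal sketch of both cases, assuming your local master corresponds to origin/master:

    git checkout master
    git merge origin/master   # creates a merge commit if both sides added commits,
                              # or simply fast-forwards master if it added none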

A pull operation simply combines a fetch and a merge.
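
In command form (with the same assumed origin remote and master branch), it is roughly:

    git pull origin master
    # which is approximately equivalent to:
    git fetch origin
    git merge origin/master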

Rebase will create a chain of new commits which descend from the commit referenced by the remote branch and then update your branch to reference the last commit in that new chain.
Those new commits replay the changes you had in your local branch (since the common ancestor commit). The chain that is generated could be interactively tweaked during rebase, for instance to combine or split the original commits in some way.
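
Sketched with the same assumed origin/master remote branch:

    git checkout master
    git rebase origin/master      # replay your local commits on top of origin/master
    git rebase -i origin/master   # same, but lets you reorder, squash or edit those commits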

Both merge and rebase will only update one of the branches (the working branch) and leave the other(s) unchanged.

Push sends some new commits of yours to the remote repository and asks it to update one of its own branch references. The normal case (no forcing) is restricted to a fast-forward.
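
For example:

    git push origin master          # accepted only if it fast-forwards the remote's master
    git push --force origin master  # non-fast-forward update that rewrites the remote branch; use with care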

Example

Let’s look at an example of divergence, merging and rebasing, using illustrations borrowed from Pro Git. The first figure shows two local branches (master and experiment) that diverged by adding one commit each (C3 and C4).
[Figure git-divergence.png: master (C3) and experiment (C4) diverging from their common ancestor C2]

Merging is one way to handle this divergence. It adds a new merge commit (C5) which has two parents and updates one of the branch references (master in this instance).
[Figure git-merged-divergence.png: the merge commit C5, with parents C3 and C4, now referenced by master]

Another way to handle this same situation is to use rebase. Instead of creating a merge commit with two parents, it adds a new chain of commits to one side. Those new commits (C4') replay the changes on the other side of the divergence (C4) since the common ancestor (C2). Then it updates the other branch reference (experiment).
Some commits may be left hanging with no reference, such as C4 here.
[Figure git-rebased-divergence.png: C4 replayed as C4' on top of C3; experiment now points to C4']

After this rebase, if we try to update the master branch with a merge of the experiment branch, this will be a fast-forward merge. It simply updates the master reference and does not require creating any new commit.
[Figure git-fast-forward.png: master fast-forwarded to C4']
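
The figures can be reproduced with a few commands (a sketch; C3 and C4 stand for whatever changes you commit on each branch):

    # set up the divergence (first figure)
    git checkout -b experiment master   # branch off at C2
    # ...commit C4 on experiment, then commit C3 on master

    # option 1: merge (second figure)
    git checkout master
    git merge experiment                # creates the merge commit C5

    # option 2: rebase, then fast-forward (third and fourth figures)
    git checkout experiment
    git rebase master                   # C4 is replayed on top of C3 as C4'
    git checkout master
    git merge experiment                # fast-forward: master now points to C4'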

This example used two local branch names, but the operations work exactly the same with one remote branch, which is read-only to you, and one local branch, which will be updated.

Summary

To recap, there are a few keys that illuminate Git’s design:

  1. Commits and object trees are immutable.
  2. Commits and objects have globally unique identifiers.
  3. Branches are mutable references to commits, but are namespaced by repository and have clear ownership rules.

Although a couple of people have identified immutability in particular as a key to Git’s design (Scott Chacon in his excellent Getting Git talk, and Philip Nilsson), I’m surprised that this is not more commonly emphasized. With those keys in mind, the simplicity and elegance of the design become much easier to appreciate.

PS: The GitHub team put together a kind of git simulation/visualization tool (more details). Type git commands and see the commit tree get updated. (via).

Along those lines, there is also an interactive git tutorial which not only has a visualization but is structured into mini-games of different levels.

ZooKeeper overview

Some notes on my weekend reading: ZooKeeper: Distributed Process Coordination.

There are two ways to look at any software framework: what abstractions/interface it provides and how it works internally.
The abstraction presented by ZooKeeper is a tree of nodes (called znodes), which is similar to files and directories, and a number of operations with useful guarantees (durability, ordering).
Nodes can be persistent or ephemeral (they get deleted when the session of the client who created the node is terminated or expires). The path of a node is set by the client, but ZooKeeper can optionally generate a sequence number (for example: /tasks/task-<increment>).
So a node has a path, some typically small data (less than 1MB), a mode (persistent or ephemeral), a version (increasing number) and some ACLs.
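
As a rough sketch using ZooKeeper’s bundled command-line client (zkCli.sh; the exact syntax varies a little across versions), the different modes look like this:

    create /tasks ""                  # persistent znode
    create -s /tasks/task- "payload"  # sequential: the server appends a counter, e.g. /tasks/task-0000000001
    create /workers ""
    create -e /workers/worker1 ""     # ephemeral: deleted when this client's session ends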

The operations are to create a node, delete a node, check for existence of a path, read a node, replace the data with newer data, and enumerate the children of a path. There is also the multi operation, which groups several operations so that they succeed or fail atomically. Although the book doesn’t cover it, ZooKeeper now supports dynamic reconfiguration.
Changes to a node are versioned (but the version gets reset whenever the node is deleted) and some operations can be executed conditionally on an expected version.
Rather than polling to watch for changes, a client can set ‘watches’. Those provide a one-time notification (the watch needs to be reset every time it fires) when the monitored node is created, changes, or is deleted. A watch is set as an option on read operations.
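
Continuing the zkCli.sh sketch (version and watch arguments roughly as in the 3.4-era releases the book describes):

    get /tasks/task-0000000001           # read the data and metadata, including the version
    set /tasks/task-0000000001 "done" 0  # conditional write: fails unless the version is still 0
    ls /tasks true                       # list children and set a one-time watch on them
    stat /workers/worker1 true           # set a watch for creation/deletion of this path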

ZooKeeper offers some important ordering guarantees. Write operations are globally ordered, and they will be observed in that order by any one client. This includes notifications: a watch notification is guaranteed to be delivered to its watchers before they can observe any subsequent change.

ZooKeeper’s official documentation offers a great overview.

In terms of deployment, ZooKeeper can be run as a standalone instance or as an ensemble (making decisions by quorum). It is accessed with a client library which handles the connections and re-connections. The client will connect to any of the configured instances, with an order of priority. There is also a CLI client and higher-level libraries (such as Curator) to encapsulate common recipes and usage patterns. The book illustrates various primitives with a practical master-worker example.
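
As an illustration, a minimal ensemble configuration might look like the following (hypothetical hostnames; each server additionally stores its own number in a myid file under dataDir):

    # zoo.cfg, shared by the three servers
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/var/lib/zookeeper
    clientPort=2181
    server.1=zk1.example.com:2888:3888   # second port: quorum traffic, third: leader election
    server.2=zk2.example.com:2888:3888
    server.3=zk3.example.com:2888:3888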

Although the abstraction seems powerful, distributed systems necessarily have tricky cases. The book does a good job at warning ZooKeeper users of such cases. For instance:

  • what are the classes of recoverable and non-recoverable errors,
  • how a client should check the freshness of a server (last version seen) upon re-connection,
  • the danger of watching for the creation of an ephemeral node (you might miss it in case of disconnects),
  • the difficulty of knowing whether a sequential node was successfully created in the event of a disconnection,
  • the danger of backchannel communication between ZooKeeper clients (two clients may be connected to two servers of different freshness).

Going into the internal design and ZooKeeper’s consensus protocol, I felt the book didn’t explain the success scenario clearly enough before jumping to various error cases.
At a very high level, there are two types of ZooKeeper servers: core servers (which form a quorum and record transactions) and observers (used for scalability). Core servers can be in one of the following states: Looking, Leading or Following. Those in Looking state (at startup or after losing heartbeat from the previous Leader) will elect a Leader. Each time a new Leader is elected, a new epoch starts and the epoch number is incremented (it figures in transaction identifiers).
The Leader acts as sequencer for all write requests, passing them on as transaction proposals to Followers. A quorum of Followers (three out of five, for instance) is required for a proposal to get accepted and committed.
All servers can handle read requests locally, as each server keeps a transaction log and a snapshot of the latest tree (that it knows about).
I’m still working my way down the rabbit hole of distributed consensus protocols, which are notoriously subtle. Here are some pointers and summaries for important consensus papers.


How round is your circle?

I found How round is your circle? a delightful blend of geometrical and engineering problems.
What I most appreciated is the historical perspective. It shows how problems arose from practical needs and curiosity, how their solutions were incremental, competed against one another, and were in many cases quite recent (less than a few centuries old), and finally how those solutions enabled further discoveries and innovations. As a result of the specialization of knowledge and the accumulation of advanced tools, we can easily forget how seemingly simple things are actually not trivial at all.
Some highlights of the topics covered:

  • How to build a linkage that will draw a straight line?
  • How to mark graduations on the “first” ruler or sector (requires difficult angle divisions such as trisection)? A later chapter goes into the history and significance of slide rules with logarithmic and alternative scales.
  • How to draw an arc of a large circle (for instance to make a railroad curve)?
  • How to measure with precision (Vernier scales, magnifiers and fine-threaded screws)?
  • How to measure surface areas, and how planimeters work (tools to measure surfaces by drawing contours, surprising but effective)?
  • How to make shapes and volumes of constant width, and conversely how to measure departure from roundness (a hard problem)?
