mirror of https://github.com/gomods/athens (synced 2026-02-03 11:00:32 +00:00)

**Moving design docs from the wiki (#235)**

* Adding the design doc for the proxy, taken from the wiki page and modified slightly
* Moving the wiki page for the registry to a doc, modified slightly to remove the "Olympus" nomenclature, since it's confusing
# The Athens Proxy

The Athens project has two components: the [central registry](./REGISTRY.md) and edge proxies. This document details the latter.
We intend proxies to be deployed primarily inside enterprises to:

- Host private modules
- Exclude access to public modules
- Cache public modules

Importantly, a proxy is not intended to be a complete _mirror_ of an upstream registry. For public modules, its role is to cache and provide access control.
# Proxy Details

First and foremost, a proxy exposes the same vgo download protocol as the registry. Because it doesn't have the registry's multi-cloud requirements, it can support simpler backend data storage mechanisms. We plan to release a proxy with several backends, including:
- In-memory
- Disk
- RDBMS
- Cloud blob storage

Users who want to target a proxy configure their `vgo` CLI to point to the proxy, then execute commands as normal.
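For example, a developer might point their toolchain at the proxy before fetching. The address below is a placeholder for illustration, not a real Athens endpoint:

```shell
# Hypothetical proxy address; substitute your own deployment's URL.
export GOPROXY=http://athens.internal.example.com
# From now on, module fetches (e.g. `go get github.com/a/b`) go through
# the proxy instead of the VCS.
echo "$GOPROXY"
```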
# Cache Misses

When a user requests a module `MxV1` from a proxy and the proxy doesn't have `MxV1` in its cache, the proxy first determines whether `MxV1` is private.

If it's private, the proxy immediately does a cache-fill operation from the internal VCS.

If it's not private, the proxy consults its exclude list for public modules (see below). If `MxV1` is on the exclude list, the proxy returns 404 and does nothing else. If `MxV1` is not on the exclude list, the proxy executes the following algorithm:
```
registryDetails := lookupOnRegistry(MxV1)
if registryDetails == nil {
    return 404 // if the registry doesn't have the module, just bail out
}
return registryDetails.baseURL
```
The important part of this algorithm is `lookupOnRegistry`. That function queries an endpoint on the registry that either:

- Returns 404 if the registry does not have `MxV1`
- Returns the base URL for `MxV1` if the registry does have it
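To make the flow concrete, here is a sketch of the lookup-and-redirect step in Go. All names (`registryDetails`, `lookupOnRegistry`, `resolve`) and the in-memory registry map are illustrative assumptions, not Athens APIs:

```go
package main

import (
	"errors"
	"fmt"
)

// registryDetails is a hypothetical record returned by the registry's
// lookup endpoint; only baseURL matters for the redirect.
type registryDetails struct {
	baseURL string
}

// fakeRegistry stands in for the registry's lookup endpoint.
var fakeRegistry = map[string]registryDetails{
	"github.com/a/b@v1": {baseURL: "https://mycdn.example.com/github.com/a/b"},
}

var errNotFound = errors.New("not found")

// lookupOnRegistry queries the registry for a module version. It returns
// errNotFound when the registry does not know the module, mirroring the
// 404 case in the algorithm above.
func lookupOnRegistry(module string) (registryDetails, error) {
	d, ok := fakeRegistry[module]
	if !ok {
		return registryDetails{}, errNotFound
	}
	return d, nil
}

// resolve returns the HTTP status and redirect URL a proxy would serve
// on a cache miss for a public, non-excluded module.
func resolve(module string) (int, string) {
	d, err := lookupOnRegistry(module)
	if err != nil {
		return 404, "" // registry doesn't have the module: bail out
	}
	return 302, d.baseURL
}

func main() {
	status, url := resolve("github.com/a/b@v1")
	fmt.Println(status, url)
	status, _ = resolve("github.com/c/d@v1")
	fmt.Println(status)
}
```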
Finally, if `MxV1` is fetched from a registry server, a background job will be created to periodically check `MxV1` for deletions and/or deprecations. In the event that one happens, the proxy will delete the module from its cache.

_In a later version of the project, we may implement an event stream on the registry that the proxy can subscribe to, listening for deletions/deprecations of modules it cares about._
# Exclude Lists and Private Module Filters

To accommodate private (i.e. enterprise) deployments, the proxy maintains two important access control mechanisms:

- Private module filters
- Exclude lists for public modules
## Private Module Filters

Private module filters are string globs that tell the proxy which modules are private. For example, the string `github.internal.com/**` tells the proxy:
- To never make requests to the public internet (i.e. to the registry) regarding this module
- To download module code (in its cache-filling mechanism) from the VCS at `github.internal.com`
## Exclude Lists for Public Modules

Exclude lists for public modules are also globs; they tell the proxy which modules it should never download from the registry. For example, the string `github.com/arschles/**` tells the proxy to always return `404 Not Found` to clients for any module under `github.com/arschles`.
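A minimal sketch of how such glob filters might be evaluated, handling only the trailing `/**` form shown above as a path-prefix match. The matcher and its names are illustrative, not Athens code:

```go
package main

import (
	"fmt"
	"strings"
)

// matchesGlob reports whether a module path matches a filter glob.
// Deliberately minimal: only the trailing "/**" form is supported,
// treated as a path-prefix match. A real implementation would support
// richer glob syntax.
func matchesGlob(glob, module string) bool {
	if prefix, ok := strings.CutSuffix(glob, "/**"); ok {
		return module == prefix || strings.HasPrefix(module, prefix+"/")
	}
	return glob == module
}

// isExcluded checks a module against an exclude list.
func isExcluded(excludes []string, module string) bool {
	for _, g := range excludes {
		if matchesGlob(g, module) {
			return true // proxy should answer 404 Not Found
		}
	}
	return false
}

func main() {
	excludes := []string{"github.com/arschles/**"}
	fmt.Println(isExcluded(excludes, "github.com/arschles/foo"))   // true
	fmt.Println(isExcluded(excludes, "github.com/gomods/athens")) // false
}
```

The same matcher could serve the private module filter, with a "match" meaning "fill from the internal VCS" instead of "return 404".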
_Note: this document is out of date._

# The Athens Registry

Written by:
- [Aaron Schlesinger](https://github.com/arschles)
- [Michal Pristas](https://github.com/michalpristas)
- [Brian Ketelsen](https://github.com/bketelsen)
The Athens registry is a Go package registry service hosted globally across multiple cloud providers. The **global deployment** will have a DNS name (e.g. `registry.golang.org`) that round-robins across the **cloud deployments**. For _example only_, this document uses the following **cloud deployments**:

- Microsoft Azure (hosted at `microsoft.registry.golang.org`)
- Google Cloud (hosted at `google.registry.golang.org`)
- Amazon AWS (hosted at `amazon.registry.golang.org`)
Regardless of which **cloud deployment** a request is routed to, the **global deployment** must provide up-to-date (precise definition below) module metadata & code.

We intend to create a foundation (the TBD foundation) that manages **global deployment** logistics and governs how each **cloud deployment** participates.
# Glossary

In this document, we use the following keywords and symbols:

- `OA` - the registry **cloud deployment** hosted on Amazon AWS
- `OG` - the registry **cloud deployment** hosted on Google Cloud
- `OM` - the registry **cloud deployment** hosted on Microsoft Azure
- `MxVy` - the module `x` at version `y`
# Properties of the Registry

The registry should obey the following invariants:

- No existing module or version should ever be deleted or modified
  - Except in exceptional cases, like a DMCA takedown (more below)
- Module metadata & code may be eventually consistent across **cloud deployments**

These properties are important both for designing the **global deployment** and for ensuring repeatable builds in the Go community, as much as is possible.
# Technical Challenges

A registry **cloud deployment** has two major concerns:

- Sharing module metadata & code
- Staying current with which other registry **cloud deployments** are available

For the rest of this document, we'll refer to these concerns as **data exchange** and **membership**, respectively.

Registries use separate protocols for **data exchange** and **membership**.
# Data Exchange

The overall design of the **global deployment** should ensure the following:

- Module metadata and code is fetched from the appropriate source (i.e. a VCS)
- Module metadata and code is replicated across all **cloud deployments**. As previously stated, replication may be eventually consistent.

Each **cloud deployment** holds:

- A module metadata database
- A log of actions it has taken on the database (used to version the module database)
- The actual module source code and metadata
  - This is what vgo requests
  - Likely stored in a CDN

The module database holds metadata and code for all modules that the **cloud deployment** is aware of, and the log records all the operations the **cloud deployment** has done in its lifetime.
# The Module Database

The module database is made up of two components:

- A blob storage system (usually a CDN) that holds module metadata and source code
  - This is called the module CDN
- A key/value store that indicates whether and where a module `MxV1` exists in the **cloud deployment**'s blob storage
  - This is called the module metadata database, or key/value storage

If a **cloud deployment** OM holds modules `MxV1`, `MxV2` and `MyV1`, its module metadata database would look like the following:
```
Mx: {baseLocation: mycdn.com/Mx}
My: {baseLocation: mycdn.com/My}
```
Note that `baseLocation` is intended for use in the `<meta>` redirect response passed to vgo. As a result, it may point to another **cloud deployment**'s blob storage system. More information on that is in the syncing sections below.
# The Log

The log is an append-only record of actions that a **cloud deployment** OM has taken on its module database. The log exists only to facilitate module replication between **cloud deployments** (more on how replication works below).

Below is an example event log:
```
ADD MxV1 ID1
ADD MxV2 ID2
ADD MyV1.5 ID3
```
This log corresponds to a database that looks like the following:

```
Mx: {baseLocation: mycdn.com/Mx}
My: {baseLocation: mycdn.com/My}
```
And blob storage that holds versions 1 and 2 of `Mx` and version 1.5 of `My`.
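The log-to-database correspondence above can be sketched with a small Go program that replays `ADD` entries into a module database. The `replay` helper and the location scheme are assumptions for illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

// replay folds an event log into a module database, mapping each module
// name to a baseLocation. Only ADD events appear in the example log above.
func replay(log []string) map[string]string {
	db := map[string]string{}
	for _, line := range log {
		fields := strings.Fields(line) // e.g. "ADD MxV1 ID1"
		action, moduleVersion := fields[0], fields[1]
		// Strip the version suffix to get the module name, e.g. "MxV1" -> "Mx".
		// Assumes every entry contains a "V" separator, as in the example.
		module := moduleVersion[:strings.IndexAny(moduleVersion, "V")]
		if action == "ADD" {
			db[module] = "mycdn.com/" + module
		}
	}
	return db
}

func main() {
	log := []string{"ADD MxV1 ID1", "ADD MxV2 ID2", "ADD MyV1.5 ID3"}
	fmt.Println(replay(log))
}
```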
## Log IDs

Note that each event log line holds ID data (`ID1`, `ID2`, etc...). These IDs are used by other **cloud deployments** as database versions. Details on how these IDs are used are below, in the pull sync section.
# Cache Misses

If an individual **cloud deployment** OM gets a request for a module `MxV1` that is not in its database, it returns a "not found" (i.e. HTTP 404) response to vgo. Then, the following happens:

- OM starts a background cache-fill operation to look for `MxV1` on OA and OG
  - If OA and OG both report a miss, OM does a cache-fill operation from the VCS and does a push synchronization (see below)
- vgo downloads code directly from the VCS on the client's machine
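The peer-first fill order can be sketched as follows; the `deployment` type, `fillFrom` helper, and peer contents are invented for illustration:

```go
package main

import "fmt"

// deployment models one cloud deployment and the modules it holds.
type deployment struct {
	name    string
	modules map[string]bool
}

// fillFrom models the background cache fill: OM asks its peers first and
// falls back to the VCS only when every peer reports a miss.
func fillFrom(peers []deployment, module string) string {
	for _, p := range peers {
		if p.modules[module] {
			return p.name // fill from this peer
		}
	}
	return "VCS" // no peer has it: fill from the VCS and push-sync
}

func main() {
	peers := []deployment{
		{"OA", map[string]bool{"MxV1": true}},
		{"OG", map[string]bool{}},
	}
	fmt.Println(fillFrom(peers, "MxV1")) // OA
	fmt.Println(fillFrom(peers, "MzV9")) // VCS
}
```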
# Pull Sync

Each **cloud deployment** actively syncs its database with the others. Every timer tick `T`, a **cloud deployment** OM queries another **cloud deployment** OA for all the modules that changed or were added since the last time OM synced with OA.
## Query Mechanism

The query relies on OA being able to provide deltas of its database over logical time. Logical time is communicated between OM and OA with log IDs (described above). The query algorithm is approximately:
```
lastID := getLastQueriedID(OA)
newDB, newID := query(OA, lastID) // get the new operations that happened on OA's database since lastID
mergeDB(newDB)                    // merge newDB into my own DB
storeLastQueriedID(OA, newID)     // after this, getLastQueriedID(OA) will return newID
```
The two most important parts of this algorithm are the `newDB` response and the `mergeDB` function.
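The `lastID` cursor mechanics can be sketched like this; the `op` type and `queryLog` helper are assumptions, not Athens code:

```go
package main

import "fmt"

// op is one entry in a cloud deployment's append-only log.
type op struct {
	id     int    // log ID, used as a logical-time cursor
	action string // "ADD", "DELETE", or "DEPRECATE"
	module string
}

// queryLog returns all operations with an ID greater than lastID, plus
// the newest ID seen, emulating the query(OA, lastID) call above.
func queryLog(log []op, lastID int) ([]op, int) {
	var out []op
	newID := lastID
	for _, o := range log {
		if o.id > lastID {
			out = append(out, o)
			if o.id > newID {
				newID = o.id
			}
		}
	}
	return out, newID
}

func main() {
	log := []op{{1, "ADD", "MxV1"}, {2, "ADD", "MxV2"}, {3, "ADD", "MyV1.5"}}
	ops, newID := queryLog(log, 1)
	fmt.Println(len(ops), newID) // operations 2 and 3; cursor advances to 3
}
```

Storing `newID` after a successful merge is what makes the next pull sync incremental.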
### Database Diffs

OA uses its database log to construct a database diff starting from the `lastID` value it receives from OM. It then sends the diff to OM as JSON that looks like the following:
```json
{
  "added": ["MxV1", "MxV2", "MyV1"],
  "deleted": ["MaV1", "MbV2"],
  "deprecated": ["MdV1"]
}
```

Explicitly, this structure indicates that:

- `MxV1`, `MxV2` and `MyV1` were added since `lastID`
- `MaV1` and `MbV2` were deleted since `lastID`
- `MdV1` was deprecated since `lastID`
### Database Merging

The `mergeDB` algorithm above receives a database diff and merges the new entries into its own database. It follows a few rules:

- Deletes insert a tombstone into the database
  - If a module `MdV1` is tombstoned, all future operations that come via database diffs are sent to `/dev/null`
- If a module `MdV2` is deprecated, future add or deprecation diffs for `MdV2` are sent to `/dev/null`. Future delete operations can still tombstone it

The approximate algorithm for `mergeDB` is this:
```
func mergeDB(newDB) {
    for added in newDB.added {
        fromDB := lookup(added)
        if fromDB != nil {
            continue // the module already exists (it may be deprecated or tombstoned); skip it
        }
        addToDB(added)         // adds the module to the module DB's key/value store, but points baseLocation to the other cloud deployment's blob storage
        go downloadCode(added) // downloads the module to local blob storage, then updates the key/value store's baseLocation accordingly
    }
    for deprecated in newDB.deprecated {
        fromDB := lookup(deprecated)
        if fromDB.deleted() {
            continue // can't deprecate something that's already deleted
        }
        deprecateInDB(deprecated) // importantly, this inserts a deprecation record into the DB even if the module wasn't already present!
    }
    for deleted in newDB.deleted {
        deleteInDB(deleted) // importantly, this inserts a tombstone into the DB even if the module wasn't already present!
    }
}
```
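The tombstone and deprecation rules can be exercised with a small runnable sketch; the `state` enum and the map-based database are illustrative assumptions, not the Athens data model:

```go
package main

import "fmt"

// state is a possible status for a module entry; names are illustrative.
type state int

const (
	present state = iota
	deprecatedState
	tombstoned
)

// diff mirrors the added/deleted/deprecated JSON structure above.
type diff struct {
	added, deleted, deprecated []string
}

// mergeDB applies a database diff, following the rules above: tombstones
// absorb all later operations, and deprecations block later adds and
// deprecations but can still be tombstoned.
func mergeDB(db map[string]state, d diff) {
	for _, m := range d.added {
		if _, ok := db[m]; ok {
			continue // already known (possibly deprecated or tombstoned); skip
		}
		db[m] = present
	}
	for _, m := range d.deprecated {
		if s, ok := db[m]; ok && s == tombstoned {
			continue // can't deprecate something that's already deleted
		}
		db[m] = deprecatedState // inserted even if the module wasn't present
	}
	for _, m := range d.deleted {
		db[m] = tombstoned // inserted even if the module wasn't present
	}
}

func main() {
	db := map[string]state{}
	mergeDB(db, diff{added: []string{"MxV1"}, deleted: []string{"MaV1"}})
	mergeDB(db, diff{added: []string{"MaV1"}}) // ignored: MaV1 is tombstoned
	fmt.Println(db["MxV1"] == present, db["MaV1"] == tombstoned)
}
```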
# Push Sync

If a **cloud deployment** OM has a cache miss on a module `MxV1`, does a cache-fill operation, and discovers that no other **cloud deployment** (OG or OA) has `MxV1`, it fills from the VCS. After it finishes the fill operation, it saves the module code and metadata to its module database and adds a log entry for it. The algorithm looks like the following:
```
newCode := fillFromVCS(MxV1)
storeInDB(newCode)
storeInLog(newCode)
pushTo(OA, newCode) // retry, and give up after N failures
pushTo(OG, newCode) // retry, and give up after N failures
```
The `pushTo` function is the most important part of this algorithm. It sends _only_ the existence of a new module, with no event log metadata (i.e. no `lastID`):
```
func pushTo(OA, newCode) {
    http.POST(OA, newCode.moduleName, newCode.moduleVersion, "https://OM.com/fetch")
}
```
The endpoint in OA that receives the HTTP `POST` request in turn does the following:
```
func receive(moduleName, moduleVersion, fetchURL) {
    addToDB(moduleName, moduleVersion, OM)     // stores moduleName and moduleVersion in the key/value store, with baseLocation pointing to OM
    go downloadCode(moduleName, moduleVersion) // downloads the module to local blob storage, then updates the key/value store's baseLocation accordingly
}
```
Note again that `lastID` is not sent. Future pull syncs that OA does from OM will receive `moduleName`/`moduleVersion` in the `added` section, and OA will correctly do nothing, because it already has that module.