Merge weaveworks/go-1.5 (PR #1386)
Make Weave build & work with Go 1.5
dpw committed Sep 9, 2015
2 parents 5aa58e6 + aa61cbd commit 2163b39
Showing 28 changed files with 1,928 additions and 1,264 deletions.
33 changes: 33 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,36 @@
## Release v1.1.0

**Highlights**:

- `weave launch` now launches all weave components, simplifying
startup.
- `weave status` has been completely revamped, with a much improved
presentation of the information, and the option to select and output
data in JSON.
- weaveDNS has been rewritten and embedded in the router. The new
implementation simplifies configuration, improves performance, and
provides fault resilience for services.
- the weave Docker API proxy now provides an even more seamless user
experience, and enables easier integration of weave with other
systems such as kubernetes.
- many usability improvements
- a few minor bug fixes, including fixes for a couple of security
vulnerabilities

More details in the
[change log](https://github.com/weaveworks/weave/issues?q=milestone%3A1.1.0).

## Release 1.0.3

This release contains a weaveDNS feature enhancement as well as minor fixes for
improved stability and robustness.

More details in the
[change log](https://github.com/weaveworks/weave/issues?q=milestone%3A1.0.3).

The release is fully compatible with other 1.0.x versions, so existing
clusters can be upgraded incrementally.

## Release 1.0.2

This release fixes a number of bugs, including some security
28 changes: 12 additions & 16 deletions README.md
@@ -67,8 +67,8 @@ two containers, one on each host.

On $HOST1 we run:

host1$ weave launch && weave launch-dns && weave launch-proxy
host1$ eval $(weave proxy-env)
host1$ weave launch
host1$ eval $(weave env)
host1$ docker run --name a1 -ti ubuntu

> NB: If the first command results in an error like
@@ -81,21 +81,18 @@ On $HOST1 we run:
> `sudo`, since some commands modify environment entries and hence
> they all need to be executed from the same shell.
The first line runs the weave router, DNS and Docker API proxy, each
in their own container. The second line sets the `DOCKER_HOST`
environment variable to point to the proxy, so that containers
launched via the docker command line are automatically attached to the
weave network. Finally, we run our application container; this happens
via the proxy so it is automatically allocated an IP address and
registered in DNS.
The first line runs weave. The second line configures our environment
so that containers launched via the docker command line are
automatically attached to the weave network. Finally, we run our
application container.

That's it! If our application consists of more than one container on
this host we simply launch them with `docker run` as appropriate.

Next we repeat similar steps on `$HOST2`...

host2$ weave launch $HOST1 && weave launch-dns && weave launch-proxy
host2$ eval $(weave proxy-env)
host2$ weave launch $HOST1
host2$ eval $(weave env)
host2$ docker run --name a2 -ti ubuntu

The only difference, apart from the name of the application container,
@@ -113,11 +110,10 @@ available. Also, we can tell weave to connect to multiple peers by
supplying multiple addresses, separated by spaces. And we can
[add peers dynamically](http://docs.weave.works/weave/latest_release/features.html#dynamic-topologies).

The router, DNS and Docker API proxy need to be started once per
host. The relevant container images are pulled down on demand, but if
you wish you can preload them by running `weave setup` - this is
particularly useful for automated deployments, and ensures that there
are no delays during later operations.
Weave must be started once per host. The relevant container images are
pulled down on demand, but if you wish you can preload them by running
`weave setup` - this is particularly useful for automated deployments,
and ensures that there are no delays during later operations.

Now that we've got everything set up, let's see whether our containers
can talk to each other...
2 changes: 1 addition & 1 deletion bin/release
@@ -242,7 +242,7 @@ usage() {
echo "Usage:"
echo -e "\t./bin/release build"
echo "-- Build artefacts for the latest version tag"
echo -e "\t./bin/release draft
echo -e "\t./bin/release draft"
echo "-- Create draft release with artefacts in GitHub"
echo -e "\t./bin/release publish"
echo "-- Publish the GitHub release and update DockerHub"
26 changes: 13 additions & 13 deletions docs/architecture.txt
@@ -66,23 +66,23 @@ Router:
simply responds to traffic received from the remote peer via TCP
5. We register this connection with the local peer
6a. If we initiated this connection then we now start sending fast
heartbeats to the remote peer so that the remote peer can determine what
address/port it should use to send UDP back to us. To do this, we
spawn off two further threads which are the forwarder
loops. These receive frames which are to be sent to the remote
peer. One of them sends frames without the DF flag set. To do this
it just sends the packets out of the UDP Listener socket. The
other needs to send frames with the DF flag set and needs its own
socket so that it can do PMTU discovery easily. To do this, it
uses a Raw IP socket (IP has no ports, so there's no collision
issue with the UDP Listener socket) and so it must add UDP headers
itself.
6b. If we did not initiate this connection then the UDP Listener
heartbeats to the remote peer so that the remote peer can
determine what address/port it should use to send UDP back to
    us. To do this, we spawn off a "forwarder" thread to send
    heartbeats, monitor incoming heartbeats, and perform some other
    auxiliary duties. It also consumes frames to be encapsulated and
    sent via UDP from two channels, for the DF and non-DF cases. In the non-DF
case, it can just send the packets out of the UDP Listener
socket. In the DF case, it needs its own socket so that it can do
PMTU discovery easily. To do this, it uses a Raw IP socket (IP has
no ports, so there's no collision issue with the UDP Listener
socket) and so it must add UDP headers itself.
6b. If we did not initiate this connection then the UDP Listener
should start receiving fast heartbeats from the remote peer. From
those it should be able to identify the local connection via the
local peer. It will tell the local connection (communicating to
the actor thread) about the UDP address of the remote peer. The
local connection will then start its forwarder threads as
local connection will then start its forwarder thread as
described in 6a, and start sending fast heartbeats. We send to the
remote peer via TCP a ConnectionEstablished message. The remote
peer receives this (on the TCP receiver process), tells the
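To make the forwarder design in 6a concrete, a minimal Go sketch follows; the type and field names are invented for illustration and are not weave's actual code. A single goroutine multiplexes heartbeats and the two frame channels (DF and non-DF) in one select loop:

    package main

    import (
        "fmt"
        "time"
    )

    type frame []byte

    // forwarder is a stand-in for the per-connection forwarder thread:
    // one channel carries frames that must go out with DF set, the other
    // carries frames that may be fragmented.
    type forwarder struct {
        dfFrames    chan frame
        plainFrames chan frame
        stop        chan struct{}
    }

    func (fwd *forwarder) run(heartbeatEvery time.Duration) {
        heartbeat := time.NewTicker(heartbeatEvery)
        defer heartbeat.Stop()
        for {
            select {
            case <-heartbeat.C:
                // Fast heartbeats let the remote peer learn which
                // address/port to use for UDP back to us.
                fmt.Println("send heartbeat")
            case f := <-fwd.dfFrames:
                // DF case: would be written through a raw IP socket so
                // PMTU discovery can be done per connection.
                fmt.Printf("send %d bytes with DF via raw socket\n", len(f))
            case f := <-fwd.plainFrames:
                // Non-DF case: would go out of the shared UDP listener socket.
                fmt.Printf("send %d bytes via UDP listener socket\n", len(f))
            case <-fwd.stop:
                return
            }
        }
    }

    func main() {
        fwd := &forwarder{
            dfFrames:    make(chan frame),
            plainFrames: make(chan frame),
            stop:        make(chan struct{}),
        }
        go fwd.run(500 * time.Millisecond)
        fwd.plainFrames <- frame{0x01, 0x02}
        fwd.dfFrames <- frame{0x03}
        time.Sleep(time.Second)
        close(fwd.stop)
    }

In the real router the DF branch writes through a raw IP socket and adds the UDP headers itself, while the non-DF branch reuses the UDP listener socket, as described in 6a.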
2 changes: 1 addition & 1 deletion prog/weaveexec/Dockerfile
@@ -1,4 +1,4 @@
FROM gliderlabs/alpine
FROM alpine

MAINTAINER Weaveworks Inc <[email protected]>
LABEL works.weave.role=system
89 changes: 49 additions & 40 deletions prog/weaver/main.go
@@ -1,7 +1,6 @@
package main

import (
"crypto/sha256"
"fmt"
"net"
"net/http"
@@ -36,31 +35,31 @@ func main() {
runtime.GOMAXPROCS(procs)

var (
config weave.Config
justVersion bool
protocolMinVersion int
ifaceName string
routerName string
nickName string
password string
pktdebug bool
logLevel string
prof string
bufSzMB int
noDiscovery bool
httpAddr string
iprangeCIDR string
ipsubnetCIDR string
peerCount int
apiPath string
peers []string

config weave.Config
justVersion bool
protocolMinVersion int
ifaceName string
routerName string
nickName string
password string
pktdebug bool
logLevel string
prof string
bufSzMB int
noDiscovery bool
httpAddr string
iprangeCIDR string
ipsubnetCIDR string
peerCount int
apiPath string
peers []string
noDNS bool
dnsDomain string
dnsListenAddress string
dnsTTL int
dnsClientTimeout time.Duration
dnsEffectiveListenAddress string
iface *net.Interface
)

mflag.BoolVar(&justVersion, []string{"#version", "-version"}, false, "print version and exit")
@@ -116,18 +115,25 @@ func main() {
var err error

if ifaceName != "" {
config.Iface, err = weavenet.EnsureInterface(ifaceName)
iface, err := weavenet.EnsureInterface(ifaceName)
if err != nil {
Log.Fatal(err)
}

// bufsz flag is in MB
config.Bridge, err = weave.NewPcap(iface, bufSzMB*1024*1024)
if err != nil {
Log.Fatal(err)
}
}

if routerName == "" {
if config.Iface == nil {
if iface == nil {
Log.Fatal("Either an interface must be specified with --iface or a name with -name")
}
routerName = config.Iface.HardwareAddr.String()
routerName = iface.HardwareAddr.String()
}

name, err := weave.PeerNameFromUserInput(routerName)
if err != nil {
Log.Fatal(err)
@@ -157,9 +163,14 @@ func main() {
defer profile.Start(&p).Stop()
}

config.BufSz = bufSzMB * 1024 * 1024
config.LogFrame = logFrameFunc(pktdebug)
config.PeerDiscovery = !noDiscovery
config.Overlay = weave.NewSleeveOverlay(config.Port)

if pktdebug {
config.PacketLogging = packetLogging{}
} else {
config.PacketLogging = nopPacketLogging{}
}

router := weave.NewRouter(config, name, nickName)
Log.Println("Our name is", router.Ourself)
@@ -248,24 +259,22 @@ func canonicalName(f *mflag.Flag) string {
return ""
}

func logFrameFunc(debug bool) weave.LogFrameFunc {
if !debug {
return func(prefix string, frame []byte, dec *weave.EthernetDecoder) {}
}
return func(prefix string, frame []byte, dec *weave.EthernetDecoder) {
h := fmt.Sprintf("%x", sha256.Sum256(frame))
parts := []interface{}{prefix, len(frame), "bytes (", h, ")"}
type packetLogging struct{}

if dec != nil {
parts = append(parts, dec.Eth.SrcMAC, "->", dec.Eth.DstMAC)
func (packetLogging) LogPacket(msg string, key weave.PacketKey) {
Log.Println(msg, key.SrcMAC, "->", key.DstMAC)
}

if dec.DF() {
parts = append(parts, "(DF)")
}
}
func (packetLogging) LogForwardPacket(msg string, key weave.ForwardPacketKey) {
Log.Println(msg, key.SrcPeer, key.SrcMAC, "->", key.DstPeer, key.DstMAC)
}

Log.Println(parts...)
}
type nopPacketLogging struct{}

func (nopPacketLogging) LogPacket(string, weave.PacketKey) {
}

func (nopPacketLogging) LogForwardPacket(string, weave.ForwardPacketKey) {
}

func parseAndCheckCIDR(cidrStr string) address.CIDR {
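The packet-logging change above replaces the old logFrameFunc closure with two types, packetLogging and nopPacketLogging, selected once at startup by the pktdebug flag. The interface they satisfy is not part of this diff, so the self-contained sketch below only assumes its shape, and PacketKey is reduced to a stand-in struct:

    package main

    import "log"

    // Stand-in for weave.PacketKey; the real type lives in the router package.
    type PacketKey struct {
        SrcMAC, DstMAC string
    }

    // Assumed shape of the interface that packetLogging and nopPacketLogging
    // implement in the real code (only one method shown here).
    type PacketLogging interface {
        LogPacket(msg string, key PacketKey)
    }

    type packetLogging struct{}

    func (packetLogging) LogPacket(msg string, key PacketKey) {
        log.Println(msg, key.SrcMAC, "->", key.DstMAC)
    }

    type nopPacketLogging struct{}

    func (nopPacketLogging) LogPacket(string, PacketKey) {}

    func main() {
        pktdebug := true // would come from the -pktdebug flag

        var pl PacketLogging
        if pktdebug {
            pl = packetLogging{}
        } else {
            pl = nopPacketLogging{}
        }
        pl.LogPacket("captured", PacketKey{SrcMAC: "aa:bb:cc:dd:ee:01", DstMAC: "aa:bb:cc:dd:ee:02"})
    }

Selecting a no-op implementation up front keeps the debug-flag check out of the per-packet path; the hot path pays only an interface call.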
35 changes: 35 additions & 0 deletions router/bridge.go
@@ -0,0 +1,35 @@
package router

// Interface to packet handling on the local virtual bridge
type Bridge interface {
// Inject a packet to be delivered locally
InjectPacket(PacketKey) FlowOp

// Start consuming packets from the bridge. Injected packets
// should not be included.
StartConsumingPackets(BridgeConsumer) error

String() string
Stats() map[string]int
}

// A function that determines how to handle locally captured packets.
type BridgeConsumer func(PacketKey) FlowOp

type NullBridge struct{}

func (NullBridge) InjectPacket(PacketKey) FlowOp {
return nil
}

func (NullBridge) StartConsumingPackets(BridgeConsumer) error {
return nil
}

func (NullBridge) String() string {
return "<no bridge networking>"
}

func (NullBridge) Stats() map[string]int {
return nil
}
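A hypothetical usage sketch for the new Bridge interface follows. PacketKey and FlowOp are defined elsewhere in the router package, so the stand-in definitions below are assumptions; the consumer simply logs each captured packet and drops it, and NullBridge behaves exactly as in the file above:

    package main

    import "fmt"

    // Stand-ins for types defined elsewhere in the router package.
    type PacketKey struct{ SrcMAC, DstMAC string }
    type FlowOp interface{}

    type BridgeConsumer func(PacketKey) FlowOp

    type Bridge interface {
        InjectPacket(PacketKey) FlowOp
        StartConsumingPackets(BridgeConsumer) error
        String() string
        Stats() map[string]int
    }

    // NullBridge mirrors the implementation above: every operation is a no-op.
    type NullBridge struct{}

    func (NullBridge) InjectPacket(PacketKey) FlowOp              { return nil }
    func (NullBridge) StartConsumingPackets(BridgeConsumer) error { return nil }
    func (NullBridge) String() string                             { return "<no bridge networking>" }
    func (NullBridge) Stats() map[string]int                      { return nil }

    func main() {
        var bridge Bridge = NullBridge{}

        // A consumer that logs each captured packet and returns a nil FlowOp,
        // i.e. the packet is dropped.
        drop := func(key PacketKey) FlowOp {
            fmt.Println("captured", key.SrcMAC, "->", key.DstMAC)
            return nil
        }

        if err := bridge.StartConsumingPackets(drop); err != nil {
            fmt.Println("cannot consume packets:", err)
        }

        if op := bridge.InjectPacket(PacketKey{SrcMAC: "aa:01", DstMAC: "aa:02"}); op == nil {
            fmt.Println("nothing to do for injected packet")
        }
        fmt.Println("using", bridge)
    }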