# Configuring a Fabric deployment
To configure the ordering nodes, frontends, peers, and clients, you can use the `prepare_<TYPE>_defaults.sh` scripts contained in the `./docker_images/` folder to obtain the default configuration files for each type of principal (where `<TYPE>` is either `orderingnode`, `frontend`, `peer`, or `cli`). Each script creates a new folder named `./<TYPE>_material` where you can review and edit the configuration files before incorporating them into containers. Once the configuration files are ready, they can be supplied to their respective containers by mounting them as volumes. The following steps describe how to create a new Fabric deployment with multiple organizations using our ordering service. It is assumed that you have entry-level knowledge of both Docker and Hyperledger Fabric and that you have already gone through the quick start guide.
The first step is to create the crypto material for the organizations and the genesis block for the system channel. You will need the official `cryptogen` tool and the `configtxgen` tool provided by us. You can access these tools from within a `bftsmart/fabric-tools` container (or, alternatively, by copying them from the container to your machine). You should create organizations both for the ordering service and for the peers that will run chaincodes. In the case of the ordering service, one organization should be created for the ordering nodes and another for the frontends. An example configuration file for such a setup is:
```yaml
OrdererOrgs:
  - Name: OrderingNodes
    Domain: node.bft
    Template:
      Count: 4
      Hostname: "{{.Index}}"
  - Name: Frontends
    Domain: frontend.bft
    Specs:
      - Hostname: 1000
      - Hostname: 2000
PeerOrgs:
  - Name: LaSIGE
    Domain: lasige.bft
    Template:
      Count: 2
      Hostname: "{{.Index}}.peer"
    Users:
      Count: 1
  - Name: IBM
    Domain: ibm.bft
    Template:
      Count: 2
      Hostname: "{{.Index}}.peer"
    Users:
      Count: 1
```
Using the example above, the `cryptogen` tool will generate certificates and private keys for the organizations OrderingNodes (comprising 4 ordering nodes), Frontends (comprising 2 frontends), LaSIGE, and IBM (each comprising 2 peers and a single client besides the administrator). The generated crypto material will look as follows:
```
$ ls crypto-config/ordererOrganizations/node.bft/orderers/
0.node.bft  1.node.bft  2.node.bft  3.node.bft
$ ls crypto-config/ordererOrganizations/frontend.bft/orderers/
1000.frontend.bft  2000.frontend.bft
$ ls crypto-config/peerOrganizations/lasige.bft/peers/
0.peer.lasige.bft  1.peer.lasige.bft
$ ls crypto-config/peerOrganizations/lasige.bft/users/
Admin@lasige.bft  User1@lasige.bft
$ ls crypto-config/peerOrganizations/ibm.bft/peers/
0.peer.ibm.bft  1.peer.ibm.bft
$ ls crypto-config/peerOrganizations/ibm.bft/users/
Admin@ibm.bft  User1@ibm.bft
```
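The directory names above follow directly from the `Template` and `Specs` entries in the cryptogen file. As a quick illustration (this is not cryptogen itself, just a sketch of its `{{.Index}}` expansion):

```python
def expand_hosts(pattern, count, domain):
    # Mimics cryptogen's Template expansion: {{.Index}} takes the values
    # 0..Count-1, and the organization's Domain is appended to each hostname.
    return [pattern.replace("{{.Index}}", str(i)) + "." + domain
            for i in range(count)]

print(expand_hosts("{{.Index}}", 4, "node.bft"))
# ['0.node.bft', '1.node.bft', '2.node.bft', '3.node.bft']
print(expand_hosts("{{.Index}}.peer", 2, "lasige.bft"))
# ['0.peer.lasige.bft', '1.peer.lasige.bft']
```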
The corresponding `configtx.yaml` file used to generate the genesis block should be:
```yaml
Profiles:
  BFTGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrderingNodes
        - *Frontends
    Consortiums:
      BFTConsortium:
        Organizations:
          - *LaSIGE
          - *IBM
  BFTChannel:
    Consortium: BFTConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *LaSIGE
        - *IBM

Organizations:
  - &OrderingNodes
    Name: OrderingNodes
    ID: NodesMSP
    MSPDir: crypto-config/ordererOrganizations/node.bft/msp
    AdminPrincipal: Role.MEMBER
  - &Frontends
    Name: Frontends
    ID: FrontendsMSP
    MSPDir: crypto-config/ordererOrganizations/frontend.bft/msp
    AdminPrincipal: Role.MEMBER
  - &LaSIGE
    Name: LaSIGE
    ID: LaSIGEMSP
    MSPDir: crypto-config/peerOrganizations/lasige.bft/msp
    AdminPrincipal: Role.MEMBER
    AnchorPeers:
      - Host: 0.peer.lasige.bft
        Port: 7051
  - &IBM
    Name: IBM
    ID: IBMMSP
    MSPDir: crypto-config/peerOrganizations/ibm.bft/msp
    AdminPrincipal: Role.MEMBER
    AnchorPeers:
      - Host: 0.peer.ibm.bft
        Port: 7051

Orderer: &OrdererDefaults
  OrdererType: bftsmart
  Addresses:
    - 1000.frontend.bft:7050
    - 2000.frontend.bft:7050
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 98 MB
    PreferredMaxBytes: 512 KB
  Organizations:

Application: &ApplicationDefaults
  Organizations:
```
As already mentioned in the quick start guide, the genesis block must be created using the `configtxgen` tool provided by the `bftsmart/fabric-tools` image. Another important detail is that the `Orderer->Addresses` parameter must list the frontend addresses, not the ordering nodes. The ordering nodes stand behind the frontends and do not interact directly with any other Fabric component.
Edit the `./<TYPE>_material/config/hosts.config` file with the IDs, IP addresses, and ports of each host running an ordering node. This file must be the same across all ordering nodes, and should look like this:
```
#server id, address and port (the ids from 0 to n-1 are the service replicas)
0 0.node.bft 11000
1 1.node.bft 11000
2 2.node.bft 11000
3 3.node.bft 11000
```
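Each non-comment line simply maps a replica ID to an address and port. An illustrative parser for this format (a sketch, not BFT-SMaRt's own loader):

```python
def parse_hosts_config(text):
    """Parse BFT-SMaRt hosts.config lines of the form '<id> <address> <port>'."""
    hosts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        replica_id, address, port = line.split()
        hosts[int(replica_id)] = (address, int(port))
    return hosts

example = """#server id, address and port
0 0.node.bft 11000
1 1.node.bft 11000
"""
print(parse_hosts_config(example))
# {0: ('0.node.bft', 11000), 1: ('1.node.bft', 11000)}
```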
Edit the GENESIS and MSPID parameters in the `./<TYPE>_material/config/node.config` file. GENESIS should be the path to the genesis block within the container. MSPID should be set to "OrderingNodes" on the ordering nodes and to "Frontends" on the frontends.
Place the private key for the frontend/ordering node, as well as all certificates for both ordering nodes and frontends, in `./<TYPE>_material/config/keys`. Rename each certificate generated with `cryptogen` to `cert<ID>.pem`, and the private key to `keystore.pem`. The contents of the directory should look as follows:
```
$ ls orderingnode_material/config/keys/
cert0.pem  cert1000.pem  cert1.pem  cert2000.pem  cert2.pem  cert3.pem  keystore.pem
$ ls frontend_material/config/keys/
cert0.pem  cert1000.pem  cert1.pem  cert2000.pem  cert2.pem  cert3.pem  keystore.pem
```
If you are familiar with the BFT-SMaRt library, you may have noticed that we are placing the keys associated with the Fabric deployment in the same place as the keys used by the library. This is because the ordering service is designed so that the library and the application share the same set of keys, instead of requiring developers to manage two independent sets of keys.
To set the number of ordering nodes present in the system and the number of Byzantine faults to withstand, edit the `system.servers.num`, `system.servers.f`, and `system.initial.view` parameters in the `./<TYPE>_material/config/system.config` file. For instance, to tolerate a single Byzantine fault (which requires a total of 4 nodes), the aforementioned parameters would look as follows:
```
system.servers.num = 4
system.servers.f = 1
system.initial.view = 0,1,2,3
```
On the other hand, if you wish to withstand up to 3 Byzantine faults, the parameters would look as follows:
```
system.servers.num = 10
system.servers.f = 3
system.initial.view = 0,1,2,3,4,5,6,7,8,9
```
In addition, you can configure the ordering service to tolerate either Byzantine faults or classic crash faults by setting the `system.bft` parameter to `true` or `false`, respectively.
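These parameters are tied together by the usual replication bounds: Byzantine fault tolerance requires n ≥ 3f + 1 replicas, while crash-only replication requires n ≥ 2f + 1, and `system.initial.view` lists all replica IDs from 0 to n−1. A quick sanity check for a planned configuration (a sketch; BFT-SMaRt does not ship this helper):

```python
def min_replicas(f, bft=True):
    # n >= 3f + 1 to tolerate f Byzantine faults; n >= 2f + 1 for crash faults only
    return 3 * f + 1 if bft else 2 * f + 1

def initial_view(n):
    # system.initial.view enumerates every replica ID, 0..n-1
    return ",".join(str(i) for i in range(n))

print(min_replicas(1), initial_view(4))    # 4 0,1,2,3
print(min_replicas(3), initial_view(10))   # 10 0,1,2,3,4,5,6,7,8,9
```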
Once this is done, you can supply these configuration files to the respective containers, using volumes to incorporate them into the file system and environment variables to force the container to adopt that configuration. Assuming that each configuration is located at `/home/jcs/<TYPE>_material` across all hosts and the chosen folder to place them in the containers is `/bft-config/`:
```
#At the hosts for the ordering nodes
$ docker run -i -t --rm --network=bft_network --name=<NODE ID>.node.bft -e NODE_CONFIG_DIR=/bft-config/config/ -v /home/jcs/orderingnode_material/:/bft-config/ bftsmart/fabric-orderingnode:x86_64-1.1.1 <NODE ID>

#At the hosts for the frontends
$ docker run -i -t --rm --network=bft_network --name=<FRONTEND ID>.frontend.bft -e FRONTEND_CONFIG_DIR=/bft-config/config/ -e FABRIC_CFG_PATH=/bft-config/fabric/ -v /home/jcs/frontend_material/:/bft-config/ bftsmart/fabric-frontend:x86_64-1.1.1 <FRONTEND ID>
```
The containers for the peers are still configured the same way as in standard Fabric deployments: either by editing the `core.yaml` file or by defining environment variables matching the structure of that file. Nonetheless, it is still necessary to mount the crypto material in the container. In the case of this example configuration, the parameters to be modified are `peer->gossip->bootstrap`, `peer->gossip->endpoint`, `peer->mspConfigPath`, and `peer->localMspId`. This means that for each peer `<ID>` at organization `<ORG>`, the values of these parameters should be:
```yaml
peer:
  gossip:
    bootstrap: <ID>.peer.<ORG>.bft:7051
    endpoint: <ID>.peer.<ORG>.bft:7051
  mspConfigPath: /bft-config/fabric/msp
  localMspId: <ORG>MSP
```
The peers can then be deployed as follows:
```
$ docker create -i -t --rm --network=bridge --name=<ID>.peer.<ORG>.bft -e FABRIC_CFG_PATH=/bft-config/fabric/ -v /home/jcs/peer_material/:/bft-config/ -v /var/run/:/var/run/ hyperledger/fabric-peer:x86_64-1.1.1
$ docker network connect bft_network <ID>.peer.<ORG>.bft
$ docker start -a <ID>.peer.<ORG>.bft
```
If you prefer, you can also use environment variables to define the above parameters instead of editing `core.yaml`. Furthermore, make sure that Docker can be correctly accessed from inside the container by checking the `vm->endpoint` parameter in `core.yaml` (or by correctly setting `$CORE_VM_ENDPOINT`). If the peers are supposed to access Docker using UNIX sockets, make sure the host machine is creating the socket file at `/var/run/docker.sock` and that the value of the parameter is set to `unix:///var/run/docker.sock`.
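When overriding `core.yaml` entries through the environment, Fabric derives each variable name from the key path: `CORE_` plus the uppercased path components joined by underscores. A sketch of that mapping:

```python
def fabric_env_var(path):
    # core.yaml key path (dot-separated) -> Fabric override variable,
    # e.g. peer.gossip.bootstrap -> CORE_PEER_GOSSIP_BOOTSTRAP
    return "CORE_" + "_".join(part.upper() for part in path.split("."))

for key in ["peer.gossip.bootstrap", "peer.localMspId", "vm.endpoint"]:
    print(key, "->", fabric_env_var(key))
# peer.gossip.bootstrap -> CORE_PEER_GOSSIP_BOOTSTRAP
# peer.localMspId -> CORE_PEER_LOCALMSPID
# vm.endpoint -> CORE_VM_ENDPOINT
```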
In the case of the clients, the parameters of importance are `peer->address`, `peer->mspConfigPath`, and `peer->localMspId`:
```yaml
peer:
  mspConfigPath: /bft-config/fabric/msp
  localMspId: <ORG>MSP
```
The deployment looks like:
```
$ docker run -i -t --rm --network=bft_network -e FABRIC_CFG_PATH=/bft-config/fabric/ -v /home/jcs/cli_material/:/bft-config/ -e CORE_PEER_ADDRESS=<ID>.peer.<ORG>.bft:7051 bftsmart/fabric-tools:x86_64-1.1.1
```
If you expect to create the channel creation transaction and the anchor peer transaction with the container, you should also include a `configtx.yaml` file like the one described earlier. However, remember to update the `MSPDir` parameter to a valid path within the container, such as `/bft-config/fabric/<ORG>/msp`:
```yaml
  - &LaSIGE
    Name: LaSIGE
    ID: LaSIGEMSP
    MSPDir: /bft-config/fabric/LaSIGE/msp
    AdminPrincipal: Role.MEMBER
    AnchorPeers:
      - Host: 0.peer.lasige.bft
        Port: 7051
  - &IBM
    Name: IBM
    ID: IBMMSP
    MSPDir: /bft-config/fabric/IBM/msp
    AdminPrincipal: Role.MEMBER
    AnchorPeers:
      - Host: 0.peer.ibm.bft
        Port: 7051
```