I have set up a small Hyperledger Fabric network with a single channel, a single organization, a few peers, and an ordering service.
After following the usual steps to generate my crypto material, my genesis block, and my channel.tx file, I tried to create my channel from a cli container with the following command:
peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
This failed with the following error:
Error: got unexpected status: FORBIDDEN -- Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
Part of the log output from the orderer container:
2019-02-15 20:14:57.323 UTC [orderer/common/server] Start -> INFO 0ab Beginning to serve requests
2019-02-15 20:15:00.063 UTC [orderer/common/server] Deliver -> DEBU 0ac Starting new Deliver handler
2019-02-15 20:15:00.064 UTC [common/deliver] Handle -> DEBU 0ad Starting new deliver loop for 192.168.176.6:38938
2019-02-15 20:15:00.064 UTC [common/deliver] Handle -> DEBU 0ae Attempting to read seek info message from 192.168.176.6:38938
2019-02-15 20:15:00.068 UTC [orderer/common/server] Broadcast -> DEBU 0af Starting new Broadcast handler
2019-02-15 20:15:00.068 UTC [orderer/common/broadcast] Handle -> DEBU 0b0 Starting new broadcast loop for 192.168.176.6:38940
2019-02-15 20:15:00.068 UTC [orderer/common/broadcast] Handle -> DEBU 0b1 [channel: mychannel] Broadcast is processing config update message from 192.168.176.6:38940
2019-02-15 20:15:00.068 UTC [orderer/common/msgprocessor] ProcessConfigUpdateMsg -> DEBU 0b2 Processing config update tx with system channel message processor for channel ID mychannel
2019-02-15 20:15:00.068 UTC [orderer/common/msgprocessor] ProcessConfigUpdateMsg -> DEBU 0b3 Processing config update message for channel mychannel
2019-02-15 20:15:00.069 UTC [policies] Evaluate -> DEBU 0b4 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Writers ==
2019-02-15 20:15:00.069 UTC [policies] Evaluate -> DEBU 0b5 This is an implicit meta policy, it will trigger other policy evaluations, whose failures may be benign
2019-02-15 20:15:00.069 UTC [policies] Evaluate -> DEBU 0b6 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/Writers ==
2019-02-15 20:15:00.069 UTC [policies] Evaluate -> DEBU 0b7 This is an implicit meta policy, it will trigger other policy evaluations, whose failures may be benign
2019-02-15 20:15:00.069 UTC [policies] Evaluate -> DEBU 0b8 == Evaluating *cauthdsl.policy Policy /Channel/Orderer/OrdererOrg/Writers ==
2019-02-15 20:15:00.069 UTC [msp] DeserializeIdentity -> DEBU 0b9 Obtaining identity
2019-02-15 20:15:00.069 UTC [msp/identity] newIdentity -> DEBU 0ba Creating identity instance for cert -----BEGIN CERTIFICATE-----
MIICEzCCAbmgAwIBAgIQSNAnza0BnDG0ZBvOSPenpDAKBggqhkjOPQQDAjBvMQsw
(LONG TEXTS)9XYOAcEPDg==
-----END CERTIFICATE-----
2019-02-15 20:15:00.069 UTC [cauthdsl] func1 -> DEBU 0bb 0xc42016e118 gate 1550261700069869014 evaluation starts
2019-02-15 20:15:00.069 UTC [cauthdsl] func2 -> DEBU 0bc 0xc42016e118 signed by 0 principal evaluation starts (used [false])
2019-02-15 20:15:00.069 UTC [cauthdsl] func2 -> DEBU 0bd 0xc42016e118 processing identity 0 with bytes of 0a05646c4d5350128e062d2d2d2d2d424547494e2043455254494649434154452d2d2d2d2d0a4d494943457a434341626d674177494241674951534e416e7a6130426e4447305a42764f5350656e7044414b42676771686b6a4f50515144416a42764d5173770a435159445651514745774a56557a45544d4245474131554543424d4b5132467361575a76636d3570595445574d4251474131554542784d4e5532467549455a790a5957356a61584e6a627a45584d4255474131554543684d4f5a4777755a586868625842735a53356a62323078476a415942674e5642414d5445574e684c6d52730a4c6d56345957317762475575593239744d423458445445354d4449784e5449774d5441774d466f58445449354d4449784d6a49774d5441774d466f775754454c0a4d416b474131554542684d4356564d78457a415242674e5642416754436b4e6862476c6d62334a7561574578466a415542674e564241635444564e68626942470a636d467559326c7a593238784854416242674e5642414d4d4645466b62576c75514752734c6d56345957317762475575593239744d466b77457759484b6f5a490a7a6a3043415159494b6f5a497a6a304441516344516741456431357541374a6d464176472f78416c786f567977765459595a4f717677592b36307a37526d59650a743775617553467879777965724456453249497669374f2b626e56463163564d4a544b2b374775434531706c56614e4e4d45737744675944565230504151482f0a42415144416765414d41774741315564457745422f7751434d4141774b7759445652306a42435177496f41673370722f4f6c702f4443314d477a5266525855360a4b4c75576d7353346a347959616f48633158366e306e3077436759494b6f5a497a6a3045417749445341417752514968414e46647870675247546f73545734460a734243794e35474857465539637a61354c467765613243334d6143534169414954462f54366d515872557639644c4c7231426d3342316a697470526b745579650a3958594f4163455044673d3d0a2d2d2d2d2d454e442043455254494649434154452d2d2d2d2d0a
2019-02-15 20:15:00.070 UTC [cauthdsl] func2 -> DEBU 0be 0xc42016e118 identity 0 does not satisfy principal: the identity is a member of a different MSP (expected OrdererMSP, got dlMSP)
2019-02-15 20:15:00.070 UTC [cauthdsl] func2 -> DEBU 0bf 0xc42016e118 principal evaluation fails
2019-02-15 20:15:00.070 UTC [cauthdsl] func1 -> DEBU 0c0 0xc42016e118 gate 1550261700069869014 evaluation fails
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c1 Signature set did not satisfy policy /Channel/Orderer/OrdererOrg/Writers
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c2 == Done Evaluating *cauthdsl.policy Policy /Channel/Orderer/OrdererOrg/Writers
2019-02-15 20:15:00.070 UTC [policies] func1 -> DEBU 0c3 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ OrdererOrg.Writers ]
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c4 Signature set did not satisfy policy /Channel/Orderer/Writers
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c5 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/Writers
2019-02-15 20:15:00.070 UTC [policies] func1 -> DEBU 0c6 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ Orderer.Writers Consortiums.Writers ]
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c7 Signature set did not satisfy policy /Channel/Writers
2019-02-15 20:15:00.070 UTC [policies] Evaluate -> DEBU 0c8 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Writers
2019-02-15 20:15:00.070 UTC [orderer/common/broadcast] Handle -> WARN 0c9 [channel: mychannel] Rejecting broadcast of config message from 192.168.176.6:38940 because of error: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
2019-02-15 20:15:00.070 UTC [orderer/common/server] func1 -> DEBU 0ca Closing Broadcast stream
2019-02-15 20:15:00.072 UTC [grpc] warningf -> DEBU 0cb transport: http2Server.HandleStreams failed to read frame: read tcp 192.168.176.4:7050->192.168.176.6:38940: read: connection reset by peer
2019-02-15 20:15:00.072 UTC [grpc] infof -> DEBU 0cc transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-02-15 20:15:00.073 UTC [common/deliver] Handle -> WARN 0cd Error reading from 192.168.176.6:38938: rpc error: code = Canceled desc = context canceled
2019-02-15 20:15:00.073 UTC [orderer/common/server] func1 -> DEBU 0cf Closing Deliver stream
2019-02-15 20:15:00.073 UTC [grpc] infof -> DEBU 0ce transport: loopyWriter.run returning. connection error: desc = "transport is closing"
The configtx.yaml file:
Organizations:

    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/example.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('OrdererMSP.admin')"

    - &dl
        Name: dlMSP
        ID: dlMSP
        MSPDir: crypto-config/peerOrganizations/dl.example.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('dlMSP.admin', 'dlMSP.peer', 'dlMSP.client')"
            Writers:
                Type: Signature
                Rule: "OR('dlMSP.admin', 'dlMSP.client')"
            Admins:
                Type: Signature
                Rule: "OR('dlMSP.admin')"

Capabilities:
    Channel: &ChannelCapabilities
        V1_3: true
    Orderer: &OrdererCapabilities
        V1_1: true
    Application: &ApplicationCapabilities
        V1_3: true
        V1_2: false
        V1_1: false

Application: &ApplicationDefaults
    Organizations:
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"

Orderer: &OrdererDefaults
    OrdererType: solo
    Addresses:
        - orderer.example.com:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers:
            - 127.0.0.1:9092
    Organizations:
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
        BlockValidation:
            Type: ImplicitMeta
            Rule: "ANY Writers"

Channel: &ChannelDefaults
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
    Capabilities:
        <<: *ChannelCapabilities

Profiles:
    SingleOrgOrdererGenesis:
        <<: *ChannelDefaults
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
            Capabilities:
                <<: *OrdererCapabilities
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *dl
    SingleOrgChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *dl
            Capabilities:
                <<: *ApplicationCapabilities
The crypto-config.yaml file:
OrdererOrgs:
  - Name: orderer
    Domain: example.com
    Specs:
      - Hostname: orderer

PeerOrgs:
  - Name: dl
    Domain: dl.example.com
    EnableNodeOUs: true
    Template:
      Count: 3 # NUMBER OF PEERS
    Users:
      Count: 2 # NUMBER OF USERS APART FROM THE ADMIN
The docker-compose-cli.yaml file:
version: '2'

volumes:
  orderer.example.com:
  peer0.dl.example.com:
  peer1.dl.example.com:
  peer2.dl.example.com:

networks:
  v1:

services:

  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    networks:
      - v1

  peer0.dl.example.com:
    container_name: peer0.dl.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.dl.example.com
    networks:
      - v1

  peer1.dl.example.com:
    container_name: peer1.dl.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.dl.example.com
    networks:
      - v1

  peer2.dl.example.com:
    container_name: peer2.dl.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer2.dl.example.com
    networks:
      - v1

  cli:
    container_name: cli
    image: hyperledger/fabric-tools:$IMAGE_TAG
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///Host/var/run/docker.sock
      #- CORE_LOGGING_LEVEL=DEBUG
      - CORE_LOGGING_LEVEL=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.dl.example.com:7051
      - CORE_PEER_LOCALMSPID=dlMSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dl.example.com/peers/peer0.dl.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dl.example.com/peers/peer0.dl.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dl.example.com/peers/peer0.dl.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/dl.example.com/users/Admin@dl.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/Host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - orderer.example.com
      - peer0.dl.example.com
      - peer1.dl.example.com
      - peer2.dl.example.com
    networks:
      - v1
The docker-compose-base.yaml file:
version: '2'

services:

  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer:$IMAGE_TAG
    environment:
      #- ORDERER_GENERAL_LOGLEVEL=INFO
      - ORDERER_GENERAL_LOGLEVEL=DEBUG
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
      - orderer.example.com:/var/hyperledger/production/orderer
    ports:
      - 7050:7050

  peer0.dl.example.com:
    container_name: peer0.dl.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.dl.example.com
      - CORE_PEER_ADDRESS=peer0.dl.example.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.dl.example.com:7051
      # - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dl.example.com:7051
      - CORE_PEER_LOCALMSPID=dlMSP
    volumes:
      - /var/run/:/Host/var/run/
      - ../crypto-config/peerOrganizations/dl.example.com/peers/peer0.dl.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/dl.example.com/peers/peer0.dl.example.com/tls:/etc/hyperledger/fabric/tls
      - peer0.dl.example.com:/var/hyperledger/production
    ports:
      - 7051:7051
      - 7053:7053

  peer1.dl.example.com:
    container_name: peer1.dl.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer1.dl.example.com
      - CORE_PEER_ADDRESS=peer1.dl.example.com:7051
      # - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.dl.example.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.dl.example.com:7051
      - CORE_PEER_LOCALMSPID=dlMSP
    volumes:
      - /var/run/:/Host/var/run/
      - ../crypto-config/peerOrganizations/dl.example.com/peers/peer1.dl.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/dl.example.com/peers/peer1.dl.example.com/tls:/etc/hyperledger/fabric/tls
      - peer1.dl.example.com:/var/hyperledger/production
    ports:
      - 8051:7051
      - 8053:7053

  peer2.dl.example.com:
    container_name: peer2.dl.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer2.dl.example.com
      - CORE_PEER_ADDRESS=peer2.dl.example.com:7051
      # - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.dl.example.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.dl.example.com:7051
      - CORE_PEER_LOCALMSPID=dlMSP
    volumes:
      - /var/run/:/Host/var/run/
      - ../crypto-config/peerOrganizations/dl.example.com/peers/peer2.dl.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/dl.example.com/peers/peer2.dl.example.com/tls:/etc/hyperledger/fabric/tls
      - peer2.dl.example.com:/var/hyperledger/production
    ports:
      - 9051:7051
      - 9053:7053
Link to my code: https://mega.nz/#F!vJIUWKgZ!hx1geJ916PH0LrKKe5Q0RA!LQRBmITR
In your ./byfn.sh script, in the genesis block generation step, you have this command:
echo "##########################################################"
echo "######### Generating Orderer Genesis block ##############"
echo "##########################################################"
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block -channelID $CHANNEL_NAME
Remove -channelID $CHANNEL_NAME and ignore this warning:
2019-02-24 23:34:25.334 IST [common/tools/configtxgen] main -> WARN 001 Omitting the channel ID for configtxgen for output operations is deprecated. Explicitly passing the channel ID will be required in the future, defaulting to 'testchainid'
It should work now. It did on my system.
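For reference, a minimal regeneration sequence with that fix applied could look like the following. The profile names come from the configtx.yaml above; the artifact paths are assumptions based on your compose files, and the old artifacts (and Docker volumes) must be cleaned out first so the orderer does not boot from the stale genesis block:
rm -f ./channel-artifacts/genesis.block ./channel-artifacts/channel.tx
# System channel genesis block: no -channelID, so configtxgen falls back to the default 'testchainid'
configtxgen -profile SingleOrgOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
# Channel creation transaction for the application channel 'mychannel'
configtxgen -profile SingleOrgChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel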
It looks like the channel has already been created and you are trying to submit a proto file (channel.tx) with the same channel ID.
If you are simply trying to create a new channel, change the channel name, recreate channel.tx, and submit the updated configuration with the cli command (see the sketch below).
If you are trying to update the channel configuration, refer to this document and follow it to fetch the latest config block and make the required changes to the MSP ID, if needed.
Keep in mind that once the channel has been created, only a channel config update envelope is accepted to update the channel, not the channel configuration file.
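A minimal sketch of both paths, assuming the same orderer endpoint, TLS CA file, and profile names as in the question (the new channel name mychannel2 and the output file names are placeholders):
# Create a brand-new channel under a different name
configtxgen -profile SingleOrgChannel -outputCreateChannelTx ./channel-artifacts/mychannel2.tx -channelID mychannel2
peer channel create -o orderer.example.com:7050 -c mychannel2 -f ./channel-artifacts/mychannel2.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
# Or, to update an existing channel, start from its latest config block
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem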
I had the same error:
Error: got unexpected status: FORBIDDEN -- implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
After removing everything in Docker with the commands below, the error was resolved.
docker stop $(docker ps -a -q) ; docker rm -f $(docker ps -aq) ; docker system prune -a ; docker volume prune ; docker ps -a ; docker images -a ; docker volume ls
The 'docker volume prune' command is particularly important.
    - &dl
        Name: dlMSP
        ID: dlMSP
        MSPDir: crypto-config/peerOrganizations/dl.example.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('dlMSP.admin', 'dlMSP.peer', 'dlMSP.client')"
            Writers:
                Type: Signature
                Rule: "OR('dlMSP.admin', 'dlMSP.client')"
            Admins:
                Type: Signature
                Rule: "OR('dlMSP.admin')"
This means that only 'dlMSP.admin' members are allowed to create a channel.
Make sure your certificate carries the required attrs, for example:
fabric-ca-client register --id.name admin2 --id.affiliation org1.department1 --id.attrs 'hf.Revoker=true,admin=true:ecert'
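If you go that route, attributes registered without the :ecert suffix only end up in the certificate when they are also requested at enrollment. A minimal sketch, assuming a local fabric-ca server on port 7054 and a placeholder password and MSP directory:
fabric-ca-client enroll -u https://admin2:admin2pw@localhost:7054 --enrollment.attrs "hf.Revoker,admin" -M ./admin2/msp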
Either the channel has already been created, or you are not authorized to access it, in which case you need to change the permissions.
The simplest solution is to remove all containers and images and start over. The commands are as follows:
docker stop $(docker ps -a -q)   # stop all containers first
docker rm -f $(docker ps -aq)    # remove all of them
docker system prune -a           # remove all stopped containers, unused networks and images
docker volume prune              # remove all volumes
Now start the network again, for example:
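Assuming the compose files from the question and that IMAGE_TAG is set in the environment (the value below is a placeholder), a restart could look like this:
export IMAGE_TAG=latest
docker-compose -f docker-compose-cli.yaml up -d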