Emulated flud Networks

flud Network Emulation Tools

To start an emulated flud network of N nodes, do:

$ start-fludnodes N
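
For example, to bring up a ten-node network and do a quick sanity check that each daemon actually started (just a sketch, relying on the fact noted further below that each instance writes its pid to ~/.fludX/twistd.pid):

$ start-fludnodes 10
$ for i in $(seq 1 10); do test -f $HOME/.flud$i/twistd.pid && echo "node $i up (pid $(cat $HOME/.flud$i/twistd.pid))"; done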

To view storage consumed by flud nodes in an emulated flud network, do:

$ gauges-fludnodes ~/.flud 1-N

(note that you can stop and start nodes interactively with the gauges panel)
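
For example, with the ten-node network started above:

$ gauges-fludnodes ~/.flud 1-10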

To stop the emulated flud network of N nodes, do:

$ stop-fludnodes N

To clean out data from all emulated flud nodes, do:

$ clean-fludnodes
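
The script is the preferred way to do this, but if you want to clean up by hand, something along these lines should have roughly the same effect. This is only a sketch of the idea, not the script's actual contents, and it assumes each node's data lives in the dht/, meta/, and store/ subdirectories mentioned in the recovery walkthrough below:

$ stop-fludnodes N
$ rm -rf $HOME/.flud[0-9]*/dht/* $HOME/.flud[0-9]*/meta/* $HOME/.flud[0-9]*/store/*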

Testing for Massive Failure (of nodes storing my data)

The following demonstrates the persistence of backed-up data even when a large portion of the flud backup network has failed. We do this by starting a local flud group of 75 nodes, failing a third of them, and then operating normally with only the remaining two-thirds. This should always work, except in extremely unlikely circumstances [insert statistical analysis here]. In fact, you should be able to recover data completely even in cases where more than a third of the nodes fail.

Method 1: start 75 flud nodes on a single host

We can start the 75 nodes at once:

$ start-fludnodes 75

This will invoke 75 flud daemons, each with its own .fludX directory in $HOME. After running this command, you should have ~/.flud1 through ~/.flud75.

Now, with 75 nodes running, you could try storing some data, then kill some of the nodes and see whether you can still recover it. stop-fludnodes makes this easy:

$ stop-fludnodes 20

This will kill the first 20 instances started with start-fludnodes above.
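
For example, a full store/kill/recover pass might look like this, using the FludLocalClient commands described in the recovery walkthrough further below (node 30 and /tmp/testfile are arbitrary choices; node 30 is picked because it survives the kill of nodes 1-20):

$ FLUDHOME=~/.flud30 fludlocalclient
> putf /tmp/testfile
> putm
> exit
$ stop-fludnodes 20
$ FLUDHOME=~/.flud30 fludlocalclient
> getf /tmp/testfile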


If you wanted to start some additional nodes later, you could do:

$ FLUDSTART=76 start-fludnodes 25 localhost 8084

(The FLUDSTART environment variable just tells start-fludnodes to use ports 76 spots higher than the default, so that we don't try to reuse ports already occupied by the first invocation. The last two arguments give the host and port where the first new node should try to bootstrap, so that the two pools of nodes can talk to each other.)

The same syntax can be used with stop-fludnodes to stop nodes in any range.
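
For example, assuming stop-fludnodes honors FLUDSTART in the same way, this would stop the 25 nodes started in the second pool above:

$ FLUDSTART=76 stop-fludnodes 25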

Method 2: start 50 nodes on one host, 25 on another

Start nodes as above, but split the nodes between two machines. The start-fludnodes invocation on the second machine should give one of the nodes on the first machine as the gateway, so that the two pools can see each other.
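
Concretely, this might look like the following; the hostname and port are placeholders (use the first host's actual name, and whatever port the chosen gateway node there is listening on):

hostA$ start-fludnodes 50
hostB$ start-fludnodes 25 hostA 8080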

Of course, the start-fludnodes and stop-fludnodes scripts are just a convenience. You can examine them to see how they start and stop nodes if you'd rather do this manually. For now, note that the pid for each instance is stored in ~/.fludX/twistd.pid, and the twistd log is similarly stored as ~/.fludX/twistd.log.
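
So, for example, you could stop node 5 by hand and peek at its log like this:

$ kill $(cat ~/.flud5/twistd.pid)
$ tail ~/.flud5/twistd.log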

Testing for Single Catastrophic Failure (of my node)

Suppose we lose our own node (the hard drive crashes, or the computer gets destroyed or stolen). This test case emulates such a failure and its recovery.

XXX: should really do the deed; send off credentials, remove the entire client .fludX dir, restore, etc.

  1. Use start-fludnodes to bring up N nodes (where N > 20)
  2. Start up FludLocalClient for one of the nodes:
    $ FLUDHOME=~/.flud5 fludlocalclient
  3. Store a file (or several):
    > putf filename[s]
  4. Store all metadata:
    > putm
  5. Destroy the node (exit the client, then wipe its data from the shell):
    > exit

    $ rm ~/.flud5/dht/*
    $ rm ~/.flud5/meta/*
    $ rm ~/.flud5/store/*
  6. Start up the FludLocalClient once again:
    $ FLUDHOME=~/.flud5 fludlocalclient
  7. See that the master metadata is gone:
    > list
  8. Recover the master metadata:
    > getm
  9. See that it worked:
    > list
  10. Recover the file(s); a checksum-based way to verify the recovery is sketched after this list:
    > getf filename[s]
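
To verify the recovery byte-for-byte, you can checksum the file before destroying anything and remove the local copy so the recovery is a real one (in the spirit of the XXX note above). This is only a sketch; it assumes getf restores the file to its original path. Before step 5, while the local copy still exists:

$ sha1sum filename > /tmp/filename.sha1
$ rm filename

Then, after step 10:

$ sha1sum -c /tmp/filename.sha1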