The Multi-Minion Machine

August 9, 2013

A Function of Scale

Draft 1, Ernest Prabhakar, 2013-08-08

The Sequel to “The Minion Machine”

The Premise

Real systems aren’t linear: they have characteristic scales below which cost is roughly fixed and above which it becomes astronomical.

The Goal

Extend/Restrict the Minion Machine to capture what it means to operate at “optimal scale”.

The Concept

Define a Multi-Minion Machine as a Minion Machine with the following changes (a minimal code sketch follows the list):

  1. There is one minion for each bin (and thus each object) (M = N)
  2. Minions never move; they just shoot objects to other minions.
  3. The N objects are arranged in a ring of radius R, so “1” is next to “N”.
  4. The objects travel on independent tracks of size r << R, so they don’t collide, but take effectively the same distance to a given bin.
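
To keep myself honest, here is a minimal Python sketch of that setup. The class and method names are my own invention; the only substantive content is the ring geometry, where distance is measured along the shorter arc between bins.

```python
import math
from dataclasses import dataclass

@dataclass
class MultiMinionMachine:
    """N stationary minions on a ring, one bin and one object per minion."""
    N: int          # number of bins = number of minions = number of objects
    R: float = 1.0  # ring radius; the per-object tracks (r << R) are ignored here

    def ring_distance(self, b_i: int, b_f: int) -> float:
        """Arc length of the shorter path from bin b_i to bin b_f."""
        steps = min((b_f - b_i) % self.N, (b_i - b_f) % self.N)
        return steps * (2 * math.pi * self.R / self.N)

# With N = 8, bin 1 and bin 7 are only two steps apart thanks to the wrap-around.
m = MultiMinionMachine(N=8)
print(m.ring_distance(1, 7))
```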

The Model

Assume the minions are smart enough to figure out the optimal route from one bin to another. Instead of specifying a distance, we can thus just specify a destination (and not have to worry about ‘overflow’ or ‘underflow’).

Our primitive commands only need to specify the initial (b_i) and final (b_f) bins, giving a size of:

S1 = 2 log(N) := 2 k

All other quantities are the same, except that the average distance d will be less (half?) due to the ring topology.
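
To make the command size (and the ring’s effect on distance) concrete, here is a back-of-the-envelope check; the function names are mine, and I’m reading S1 = 2 log(N) as 2k bits, k for the source bin and k for the destination.

```python
import math
from itertools import product

def command_size_bits(N: int) -> int:
    """S1 = 2*k, where k = log2(N) bits are needed to name one of N bins."""
    k = math.ceil(math.log2(N))
    return 2 * k

def encode_command(b_i: int, b_f: int, N: int) -> int:
    """Pack (b_i, b_f) into a single 2k-bit command word."""
    k = math.ceil(math.log2(N))
    return (b_i << k) | b_f

def mean_ring_steps(N: int) -> float:
    """Average shortest-path steps between two distinct bins on the ring."""
    total = sum(min((b - a) % N, (a - b) % N)
                for a, b in product(range(N), repeat=2) if a != b)
    return total / (N * (N - 1))

N = 16
print(command_size_bits(N))              # 8 bits, i.e. a two-hex-digit command
print(hex(encode_command(0x1, 0xf, N)))  # 0x1f: move the object in bin 1 to bin f
print(mean_ring_steps(N))                # about N/4 steps on average
```

Whether that ring average really is half the original Minion Machine’s depends on that machine’s geometry, which I’m not re-deriving here.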

Metrics

Let us use bold characters to represent an action tuple (E, t) whose norm is E times t. For example, operation L has the action A_L = (E_L, t_L). The action of our system can be decomposed into C for the communicator and M for movement.

If solving the puzzle requires n commands of size S1 and average distance d, we can write our action as:

A0 = n S1 C + n M(d) 
[Errata: parallel operations could complete in a time proportional to max(d), independent of n. There is a complex dependency on the relative values of C_t and M_t, which I overlooked].
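
Read literally, the bold symbols behave like tuples that can be scaled and added, with the norm E times t taken at the end. Here is a minimal sketch of that reading in Python; the names are mine, and the errata’s point (parallel moves finish in max(t), not the sum) is only flagged in a comment, not modeled.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """An action tuple (E, t) whose norm is E times t."""
    E: float  # energy-like cost
    t: float  # time-like cost

    def __add__(self, other):
        # One possible convention: energies and times both add.
        # (Per the errata, parallel moves really finish in max(t), not the sum.)
        return Action(self.E + other.E, self.t + other.t)

    def scale(self, c):
        return Action(c * self.E, c * self.t)

    @property
    def norm(self):
        return self.E * self.t

def A0(n, S1, C: Action, M_d: Action) -> Action:
    """A0 = n*S1*C + n*M(d), read as scaled tuple sums."""
    return C.scale(n * S1) + M_d.scale(n)

# Toy numbers, purely illustrative:
C = Action(E=1.0, t=0.1)    # per-bit communication cost
M_d = Action(E=4.0, t=2.0)  # cost of moving one object the average distance d
print(A0(n=10, S1=8, C=C, M_d=M_d).norm)
```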

Now we can ask: would higher order commands reduce the action?

To start, let us introduce a program with per-command cost T that interprets a command as a transposition instead of a move. For example, if N = 16 (one hex digit per bin), the command 0x1f is split into the two moves 0x1f and 0xf1, which are executed in parallel.

For a set of disjoint transpositions that would normally take n moves to solve, the action is now:

A1 = n/2 S1 C + n/2 M(d) + n/2 T 

For this case, it is a net win when (substituting k = log(N) = S1 / 2):

T < 2 k C + M(d) 

which is satisfied for sufficiently large k, since the right-hand side grows with k.
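
Under the same simplification (treating C, M(d), and T as plain per-command numbers, as the inequality above implicitly does), the A0-versus-A1 comparison and the break-even condition look roughly like this; the numbers are purely illustrative.

```python
def a0(n, k, C, M_d):
    """A0 = n*(2k)*C + n*M(d): n single-move commands of size S1 = 2k."""
    return n * (2 * k) * C + n * M_d

def a1(n, k, C, M_d, T):
    """A1 = (n/2)*(2k)*C + (n/2)*M(d) + (n/2)*T: each command is a transposition."""
    return (n / 2) * (2 * k) * C + (n / 2) * M_d + (n / 2) * T

def transposition_wins(k, C, M_d, T):
    """The break-even condition above: a net win iff T < 2*k*C + M(d)."""
    return T < 2 * k * C + M_d

# Illustrative numbers: N = 256 bins, so k = 8 bits per bin address.
k, C, M_d, T = 8, 1.0, 5.0, 12.0
print(transposition_wins(k, C, M_d, T))            # True: 12 < 2*8*1 + 5 = 21
print(a1(100, k, C, M_d, T) < a0(100, k, C, M_d))  # True, for the same reason
```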

However, that advantage only holds for disjoint transpositions. Conjoined permutations (e.g., cycles) take the same number of steps as before, but must now pay the penalty T.

To solve that, we could replace T with a program L that describes loops (cycles) rather than mere transpositions. This gives us, for all (?) permutations:

A2 = n/2 S1 C + n/2 M(d) + n/2 L 

with a similar constraint:

L < 2 k C + M(d) 
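
The reason loops can hope to cover “all (?) permutations” is that any permutation factors into disjoint cycles, so a cycle-describing command could name each cycle in turn. A small sketch of that decomposition, with my own convention that perm[i] is the destination of the object in bin i:

```python
def cycles(perm):
    """Split a permutation (perm[i] = destination of the object in bin i)
    into its disjoint cycles, ignoring fixed points."""
    seen, out = set(), []
    for start in range(len(perm)):
        if start in seen or perm[start] == start:
            continue
        cyc, i = [], start
        while i not in seen:
            seen.add(i)
            cyc.append(i)
            i = perm[i]
        out.append(cyc)
    return out

# A 3-cycle plus a transposition: (0 2 5) and (1 4), with bin 3 untouched.
print(cycles([2, 4, 5, 3, 1, 0]))  # [[0, 2, 5], [1, 4]]
```

How the C, M, and L terms actually scale for long cycles is exactly what the “(?)” above leaves open.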

Interpretation

A particular command/program specification can be interpreted as a “strategy”.

For example [as Christy suggested], imagine two players Satan and God.

  1. Each of them is given a Multi-Minion box for which they devise a fixed strategy behind closed doors.
  2. When the curtain comes up, Satan & God get to see each other’s strategies.
  3. Satan secretly feeds commands into his box to entangle a set of balls.
  4. Those balls are teleported into God’s box, where he must dis-entangle them.

Every command costs some number of “action points” (great name, Christy :-). The winner is the player who spends the fewest action points.
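
To make the scoring concrete, here is one possible (and entirely invented) harness for that game; the cost function and the deliberately dumb strategy are mine, not part of the rules above.

```python
import random

def score(commands, k, C, M_d, program_cost=0.0):
    """Action points: each command pays 2k*C to communicate, M(d) to execute,
    plus any per-command program cost (T, L, ...)."""
    return len(commands) * (2 * k * C + M_d + program_cost)

def naive_strategy(current, target):
    """One single-move command per misplaced object (ordering and collisions
    between moves are ignored; a real strategy would have to handle them)."""
    return [(i, target.index(obj))
            for i, obj in enumerate(current) if target[i] != obj]

# Satan scrambles N = 8 balls; God undoes the scramble with the same strategy.
N, k, C, M_d = 8, 3, 1.0, 2.0
identity = list(range(N))
scrambled = identity[:]
random.shuffle(scrambled)
satan_cmds = naive_strategy(identity, scrambled)  # commands producing the scramble
god_cmds = naive_strategy(scrambled, identity)    # commands undoing it
print(score(satan_cmds, k, C, M_d), score(god_cmds, k, C, M_d))
```

With this particular strategy both scores come out equal; whether smarter instruction sets break that symmetry is essentially the intrinsic-advantage question below.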

This leads to a number of interesting questions:

Questions

  • Are there optimal strategies for God and Satan? Is the optimal strategy the same for both players? Is there a meta-strategy for which commands Satan should use, after finding out God’s strategy?
  • Does one player have an intrinsic advantage in this case? What about the case where the entanglement isn’t simple permutations, but some NP-complete problem?
  • How should we calculate the per-command cost P for the program used to implement the strategy? Naively, L ought to be bigger than T, but by how much? Can we break all possible strategies down into a “basis” of simpler components, allowing cost comparisons between them?
  • Do any of these results change in interesting ways if we add baseline costs for any of the elements?

Conclusion

I’m not sure if we learned anything about scale, but we did develop a useful concept of strategy. It also implies that the action (which is perhaps closer to “difficulty” than to mere “complexity”) depends on interactions between the chosen instruction set and the details of the input vectors.

Then again, maybe that is why we have different scales: to allow optimal instruction sets for different levels of representing a problem…
