Jini is designed to work in a network that supports broadcast. The announcement protocol used by Jini Lookup Services to announce their presence to the whole community relies on a broadcast primitive offered by the underlying hardware and operating system. But on the Internet there is no way to ensure that a message will be received by every connected host. Even if we could, every broadcast message would flood the whole Internet, which is a waste of bandwidth. A different approach is needed.
Instead, the FADA uses the DNS infrastructure, a well-known domain name, and the ability of FADA nodes to know their neighborhood (the FADA nodes connected to them) to provide a way to discover new FADA nodes and their location. By location we mean the IP address and the port on which a node is listening for requests, so we know on which host it is actually running.
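As an illustration, the sketch below shows how a client might use the DNS infrastructure to bootstrap into the federation. The domain name fada.example.org, the port, and the class name are hypothetical placeholders, not the actual well-known name or API of the FADA Toolkit.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

/**
 * Minimal sketch of DNS-based bootstrap discovery. The well-known name and
 * default port below are illustrative placeholders only.
 */
public class FadaBootstrap {

    // Hypothetical well-known name; a real deployment would publish its own.
    private static final String WELL_KNOWN_NAME = "fada.example.org";
    private static final int DEFAULT_PORT = 8080; // assumed listening port

    public static void main(String[] args) throws UnknownHostException {
        // Resolve every address registered under the well-known name;
        // each one is a candidate entry point into the FADA federation.
        InetAddress[] candidates = InetAddress.getAllByName(WELL_KNOWN_NAME);
        for (InetAddress addr : candidates) {
            System.out.println("Candidate FADA node: "
                    + addr.getHostAddress() + ":" + DEFAULT_PORT);
            // From here, an HTTP request to the node could return its
            // neighbor list and confirm that it is alive.
        }
    }
}
```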
Compare this with the Multicast Discovery Protocol, in which a Jini Lookup Service is found by issuing a multicast request from any host on the same LAN. Every Jini Lookup Service running on a host connected to that LAN will respond. But a LAN offers ways to perform broadcast: in a tree or bus topology the medium is already a broadcast medium; in a token ring topology the message can be relayed by every repeater at low cost (only the ring latency, which is an already assumed cost); in a star topology a broadcast is also easy to perform.
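For comparison, a minimal sketch of Jini multicast discovery using the standard net.jini.discovery package is shown below; it assumes a Jini 1.x/2.x class library on the classpath and simply waits briefly for lookup services on the local LAN to respond.

```java
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.discovery.DiscoveryEvent;
import net.jini.discovery.DiscoveryListener;
import net.jini.discovery.LookupDiscovery;

/** Sketch of multicast discovery of Jini Lookup Services on the local LAN. */
public class MulticastDiscoveryExample {
    public static void main(String[] args) throws Exception {
        // Discover lookup services in all groups reachable by multicast.
        LookupDiscovery discovery = new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
        discovery.addDiscoveryListener(new DiscoveryListener() {
            public void discovered(DiscoveryEvent event) {
                for (ServiceRegistrar registrar : event.getRegistrars()) {
                    System.out.println("Found lookup service: "
                            + registrar.getServiceID());
                }
            }
            public void discarded(DiscoveryEvent event) {
                // A previously discovered lookup service is no longer reachable.
            }
        });
        Thread.sleep(10000); // wait for multicast announcements and responses
        discovery.terminate();
    }
}
```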
FADA builds a network of collaborating FADA nodes which use direct HTTP connections to other nodes. This means that a given FADA node knows the addresses and ports of some other existing FADA nodes (called first-degree neighbors, or simply neighbors).
Several topologies for the FADA network have been thoroughly considered. The first one (which had already been implemented and tested) was a tree. This arrangement offers the advantage that there are no cycles, so it is easy to traverse the whole tree while performing searches. But it also has some drawbacks:
FADA nodes can crash, thus isolating portions of the tree. A solution could be for every node to keep a list of alternative nodes to adopt as parents, so the structure would stay connected. But this leads to another problem: the root of the tree is a single point of failure. To avoid this, one of the root's children could be elected as the new root, adopting its siblings as children. But this tends to overload the root, and a contention protocol to choose one of the children as root must also be defined.
Also, there must be a way to avoid cycles in the structure, since a given FADA node can connect to any other node. If a node is added to another as a child, and then takes its parent's parent as a child, a cycle is created. This condition could be checked beforehand, but cycles can also arise by connecting to a grandparent, a great-grandparent, and so on. It would be necessary to check the whole hierarchy before connecting, and this would take a long time (think of a huge hierarchy with hundreds of nodes whose links run over TCP/IP). Furthermore, the hierarchy is not static, so while this condition is being checked another node could connect already checked nodes, and the cycle would appear anyway.
In short, too many problems arose, and they seemed very hard, if not impossible, to overcome, so the decision was made not to force any structure on the FADA network. It is still possible to manage the structure of the architecture by using the administration tools provided with the FADA Toolkit.
Any node can connect to any other node, possibly creating transitive cycles. Direct cycles are avoided, as there is no reason to create one (relationships are already bidirectional). The topology of the FADA nodes is neither fixed nor known in advance. This gives the federation greater flexibility: any node can join the architecture at any moment and at any point, and it automatically becomes reachable by any client in the network that wishes to find it.
Also, a node can leave the federation at any moment: nobody will miss it (except, perhaps, the service providers that registered their services there; but they are free to register them anywhere else: FADA nodes are idempotent). As a desired side effect, any given FADA node can crash, and the rest will continue to work without ever noticing it. Relationships between nodes take into account the possibility of failure, and non-working references are dropped without causing failures on the nodes. In short, administration has been kept to a minimum.
The fact that the FADA topology is neither known nor can be fixed beforehand leads to a certain degree of performance degradation. But it is more important to ensure flexibility and reliability than to sacrifice both for the sake of performance. Performance will degrade anyway because of Internet congestion. In other words: if the bottleneck is not in our hands, why keep a design (and implementation) whose efficiency will be hidden by the inefficiency of the network, instead of a design (and implementation) whose efficiency is not very high but which works under bad conditions?
Several algorithms for efficient propagation of lookup requests (which are, for all practical purposes, broadcasts from one node to all the rest) have been considered.
The first one was flooding, which consists of forwarding every incoming message on all outgoing connections except the one it came from, and storing an identifier for each request in order to discard duplicates. It is conceptually simple, but not very efficient, as every message can arrive at a given node several times.
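To make the idea concrete, the following is a minimal sketch of flooding with duplicate suppression. The Neighbor and Message types, the send() method, and the request identifiers are illustrative assumptions, not the actual FADA node implementation.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Minimal sketch of flooding with duplicate suppression (illustrative only). */
public class FloodingNode {

    /** A neighbor known by its address and listening port. */
    record Neighbor(String host, int port) {
        void send(Message m) {
            // A real node would issue an HTTP request to host:port here.
        }
    }

    /** A lookup request carrying a globally unique identifier. */
    record Message(String requestId, String payload) {}

    private final List<Neighbor> neighbors;
    private final Set<String> seenRequests = new HashSet<>();

    FloodingNode(List<Neighbor> neighbors) {
        this.neighbors = neighbors;
    }

    /** Handle a message arriving from a neighbor (null if locally originated). */
    void onMessage(Message m, Neighbor from) {
        // Drop duplicates: the same request may reach this node over several paths.
        if (!seenRequests.add(m.requestId())) {
            return;
        }
        // Forward to every neighbor except the one the message came from.
        for (Neighbor n : neighbors) {
            if (!n.equals(from)) {
                n.send(m);
            }
        }
    }

    public static void main(String[] args) {
        FloodingNode node = new FloodingNode(List.of(
                new Neighbor("fada1.example.org", 8080),
                new Neighbor("fada2.example.org", 8080)));
        // A locally originated lookup request is forwarded to every neighbor.
        node.onMessage(new Message("req-42", "lookup: printer"), null);
        // A duplicate arriving later over another path is silently dropped.
        node.onMessage(new Message("req-42", "lookup: printer"), null);
    }
}
```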
In [1] an efficient approach is considered. It states that, in an (n,k)-arrangement graph, a message can be broadcast in at most O(k log n) steps. It also guarantees that the message is received by every node exactly once. But it relies on the properties of arrangement graphs, a special kind of graph whose topology is known in advance, and we cannot guarantee that. Even if the FADA rearranged its structure to become an arrangement graph, the algorithm is based on reliable connections, as a given message is never sent along two separate paths towards the same destination. But we have seen that our connections are not reliable, and the behavior of the FADA should not depend on that circumstance.
[2] approaches the same problem in a different way. It considers the graph to be a star or pancake network, which are special cases of Cayley graphs. These graphs are hierarchical, which forces a structure, so this is not applicable to our case.
[3] does not force any structure, but relies on an already existing broadcast medium. It only serves as an optimization of the operating system's broadcast primitives.
In [4] no structure is forced in advance, the network can be dynamically reconfigured, and it need not be reliable. Unfortunately, it requires nodes to know their immediate neighborhood, assuming that this information can be obtained easily, cheaply and quickly. In our case, gathering that information (neighbors up to two hops away) is much slower and gets in the way of doing an efficient job: to perform an efficient broadcast, an inefficient information-gathering step must be carried out first, so no gain is obtained.
Finally, in [5] we find the same objections we already considered in [1] and [2].
In other words, efficient message broadcasting requires avoiding duplicate copies of a message arriving at the same destination. To ensure that, it is necessary to know which messages will arrive at which destinations; in short, it is necessary to know (at least part of) the structure. But the high-availability design of a non-hierarchical, unmaintained structure precludes any such prior knowledge.