Used generically, zeroconf refers to the idea that an information/communications technology system can detect and decide certain network configuration matters on its own, by probing its internal state or its environment.
Why you might want to use it
difficulty of authoritative network knowledge
Traditional network configuration assumes a relatively static model of the network. A centralized network authority was expected to know the existence and state of most nodes on the network so that they could be identified. The number of devices was relatively low (one or two per human user at most), and changes in device state were infrequent.
However, modern networks are experiencing a massive proliferation of devices. Many people carry multiple personal devices, each of which can act as an independent host on a network, in addition to the static resources they make use of. Given the portability and quantity of these devices, an authoritative centralized record probably covers only half the network at best.
If the devices could configure and announce themselves in a somewhat robust way, outdated and incomplete centralized authorities could be abandoned.
centralization is bad
Politically and technically, centralization has serious problems. It is vulnerable to severe failure modes: when a centralized resource (like a nameserver) fails, it impedes the work of every device that depends on it. Centralized authorities are also subject to compromise, which can cascade into a compromise of all the dependent nodes.
flexibility and peer relationships are good
Having centralized, static authorities in a network pushes the network towards stasis and penalizes dynamic nodes. The difficulty of finding services offered from a dynamic node relegates these nodes to consumers of network services, instead of providers. It would be nice to encourage every node in the network to participate as a peer, both providing and consuming services.
Why you might not want to use it
flakiness of information
Statically configured, central servers tend to be more rigorously maintained, and to publish more relevant data, than dynamic nodes. Dynamic nodes can publish all sorts of garbage. Do you really want to see all the garbage that any node on the local network segment decides to publish?
susceptibility to spoofing
It's much easier to insert a malicious dynamic host onto a network than to subvert a well-maintained centralized network resource. If network peers are evenly trusted, deliberate spoofing is a serious risk.
denial of service
In addition to spoofing, it is simple for a malicious dynamic node to flood the network with bogus information. If there is a significant amount of bogus data, all information gleaned from the network becomes suspect. This amounts to a DoS, because it renders legitimate information unusable.
There are different (competing but not mutually-exclusive) approaches you can use for zeroconf. Here are some promising ones:
- DNS-SD -- DNS Service Discovery: discovery of named service instances on the local network or within a domain
You would use DNS-SD to discover specific services (not just host names) on the local network or within the local domain. It is layered on top of DNS, and works best combined with mDNS on the local link segment.
- mDNS -- Multicast DNS: automatic, decentralized name service for the local network segment (i.e. not routable); specified in a draft RFC
You would use mDNS to cooperatively enumerate hostnames on a local link segment. It is layered on top of traditional DNS, but uses non-routable multicast packets instead of traditional routable unicast.
- avahi -- a free mDNS + DNS-SD implementation
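To make the mDNS/DNS-SD layering concrete, here is a minimal sketch in plain Python (standard library only; the function names are illustrative, not from any particular library) of the packet a DNS-SD browse produces. It is an ordinary DNS query asking for PTR records of a service type, which mDNS would then carry over link-local multicast instead of routable unicast:

```python
import struct

MDNS_GROUP = "224.0.0.251"  # link-local multicast address used by mDNS
MDNS_PORT = 5353

def encode_qname(name: str) -> bytes:
    """Encode a dotted DNS name as length-prefixed labels."""
    out = b""
    for label in name.split("."):
        encoded = label.encode("ascii")
        out += struct.pack("!B", len(encoded)) + encoded
    return out + b"\x00"  # zero-length root label terminates the name

def build_mdns_query(service_type: str) -> bytes:
    """Build a one-question mDNS query packet.

    DNS-SD browses for services by asking for PTR records of a
    service type (e.g. "_http._tcp.local"); mDNS carries that same
    query over multicast instead of sending it to a unicast resolver.
    """
    # Header: ID=0 (the mDNS convention), no flags set, 1 question,
    # 0 answers, 0 authority records, 0 additional records.
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # Question: QNAME, QTYPE=12 (PTR), QCLASS=1 (IN)
    question = encode_qname(service_type) + struct.pack("!HH", 12, 1)
    return header + question

packet = build_mdns_query("_http._tcp.local")
```

Actually transmitting this would mean sending it via UDP to 224.0.0.251 port 5353 (and joining that multicast group to hear responses); a full implementation such as avahi additionally handles response parsing, caching, and name-conflict detection.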