Hivemind DHT

This is a Distributed Hash Table optimized for rapidly accessing a lot of lightweight metadata. Hivemind DHT is based on Kademlia [1] with added support for improved bulk store/get operations and caching.

The code is organized as follows:

  • class DHT (__init__.py) - high-level class for model training. Runs DHTNode in a background process.

  • class DHTNode (node.py) - an asyncio implementation of dht server, stores AND gets keys.

  • class DHTProtocol (protocol.py) - an RPC protocol to request data from dht nodes.

  • async def traverse_dht (traverse.py) - a search algorithm that crawls DHT peers.

  • [1] Maymounkov P., Mazieres D. (2002) Kademlia: A Peer-to-Peer Information System Based on the XOR Metric.

  • [2] https://github.com/bmuller/kademlia , Brian, if you’re reading this: THANK YOU! you’re awesome :)

Here’s a high level scheme of how these components interact with one another:

[figure: _images/dht.png]

DHT and DHTNode

class dht.DHT(listen_on: str = '0.0.0.0:*', initial_peers: Sequence[str] = (), *, start: bool, daemon: bool = True, max_workers: Optional[int] = None, parallel_rpc: Optional[int] = None, receiver_threads: int = 1, expiration: float = 300, **kwargs)[source]

High-level interface to a DHT node that is designed to allow RemoteMixtureOfExperts to select the best experts.

Parameters
  • initial_peers – one or multiple endpoints pointing to active DHT peers. Similar format to listen_on.

  • listen_on – an interface for incoming connections, e.g. “127.0.0.1:*”, “0.0.0.0:1234” or “ipv6:[::]:*” (“*” means pick any free port)

  • start – if True, automatically starts the background process on creation. Otherwise, start it manually via run or run_in_background

  • daemon – if True, the background process is marked as daemon and automatically terminated when the main process exits

  • max_workers – declare_experts and get_experts will use up to this many parallel workers (but no more than one per key)

  • expiration – experts declared from this node expire after this many seconds (default = 5 minutes)

  • receiver_threads – uses this many threads to await on the input pipe. The default of 1 should be enough in most cases

  • kwargs – any other params will be forwarded to DHTNode upon creation

Each expert has an identifier in the form of {prefix}.{i}.{j}.{…}, e.g. “ffn_expert.98.76.54.32.10”. An expert identifier consists of:

  • optional prefix that determines expert role, experiment name, etc.

  • one or more integers that determine that expert’s position in an N-dimensional grid

A dht.Server can call DHT.declare_experts(expert_uids: List[str]) to make its experts visible to everyone. When declaring experts, DHT will store each expert’s uid and all of its prefixes until :expiration: (specified at init). For instance, declaring “ffn_expert.98.76.54.32.10” will store the following keys in the DHT: "ffn_expert", "ffn_expert.98", "ffn_expert.98.76", ..., "ffn_expert.98.76.54.32.10"

RemoteMixtureOfExperts can use these prefixes to find the top-k most suitable experts with a left-to-right beam search. For instance, consider RemoteMixtureOfExperts with prefix “ffn_expert” and grid size [100, 100, 100, 100, 100]. This MoE can query all experts with that prefix and arbitrary indices in 0…99 along each dimension. However, not every expert in such a 100^5 grid is alive at any given moment of time (the grid size is redundant). In order to find the k best “alive” experts, MoE first ranks indices along the first dimension with its gating function. It can then check which of those indices correspond to “alive” experts by querying keys such as “ffn_expert.98”. This is done using the DHT.first_k_active function. After selecting the k best indices along the first dimension, MoE moves to the second dimension. It can find top-k pairs of indices (e.g. “expert.98.76”) that start with one of the k indices from the previous step. Finally, MoE will use DHT.get_experts(uids: List[str]) to search for specific experts. This beam search explores one additional dimension per step and finds the k best experts from across the DHT in O(k / s * log(N)) average time, where s is the grid sparsity rate and N is the total number of experts.
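For illustration, here is a minimal usage sketch of the methods documented on this page (declare_experts, get_experts, first_k_active). The import path, endpoint and expert uids are assumptions; adjust them to your installation and setup.

    from dht import DHT  # assumed import path; in the hivemind package this may differ (e.g. hivemind.DHT)

    # start a DHT peer in a background process, listening on any free port
    dht_node = DHT(listen_on="0.0.0.0:*", initial_peers=[], start=True)

    # a Server at a hypothetical endpoint declares its experts; this also stores
    # every uid prefix ("ffn_expert", "ffn_expert.98", ...) until :expiration:
    dht_node.declare_experts(["ffn_expert.98.76", "ffn_expert.12.34"], endpoint="127.0.0.1:1337")

    # fetch specific experts by uid: returns RemoteExpert for found uids, None otherwise
    experts = dht_node.get_experts(["ffn_expert.98.76", "ffn_expert.0.0"])

    # beam-search helper: out of the candidate prefixes (highest priority first),
    # return at most k prefixes that have at least one active expert
    active_prefixes = dht_node.first_k_active(["ffn_expert.98", "ffn_expert.12", "ffn_expert.55"], k=2)

    dht_node.shutdown()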

run() → None[source]

Serve DHT forever. This function will not return until DHT node is shut down

run_in_background(await_ready=True, timeout=None)[source]

Starts DHT in a background process. If await_ready, this method will wait until the background DHT is ready to process incoming requests, or for at most :timeout: seconds.

shutdown() → None[source]

Shuts down the dht process

get_experts(uids: List[str], expiration_time: Optional[float] = None, wait=True) → List[Optional[dht.RemoteExpert]][source]
Parameters
  • uids – find experts with these ids from across the DHT

  • expiration_time – if specified, return experts that expire no sooner than this (based on get_dht_time)

  • wait – if True (default), block until the experts are retrieved. Otherwise return a Future.

Returns

a list of [RemoteExpert if found else None]

declare_experts(uids: List[str], endpoint: str, wait=True, timeout=None) → Optional[List[bool]][source]

Make experts visible to all DHT peers; update timestamps if declared previously.

Parameters
  • uids – a list of expert ids to update

  • endpoint – endpoint that serves these experts, usually your server endpoint (e.g. “201.111.222.333:1337”)

  • wait – if True, awaits for declaration to finish, otherwise runs in background

  • timeout – wait for the procedure to finish for at most this many seconds; None means wait indefinitely

Returns

if wait, returns a list of booleans, (True = store succeeded, False = store rejected)

first_k_active(uid_prefixes: List[str], k: int, max_prefetch: int = 1, chunk_size: Optional[int] = None)[source]

Find k prefixes with active experts; may return fewer if there aren’t enough; used for DMoE beam search

Parameters
  • uid_prefixes – a list of uid prefixes ordered from highest to lowest priority

  • k – return at most this many active prefixes

  • max_prefetch – pre-dispatch up to this many tasks (each for chunk_size experts)

  • chunk_size – dispatch this many requests in one task

Returns

a list of at most :k: prefixes that have at least one active expert each;

class dht.DHTNode(*, _initialized_with_create=False)[source]

A low-level class that represents a DHT participant. Please see DHTNode.create for parameters. Each DHTNode has an identifier, a local storage and access to other nodes via DHTProtocol.

Note

Hivemind DHT is optimized to store a lot of temporary metadata that is regularly updated, for example, an expert’s alive timestamp emitted by the Server responsible for that expert. Such metadata does not require regular maintenance by peers or persistence on shutdown. Instead, DHTNode is designed to rapidly send bulk data and resolve conflicts.

Every (key, value) pair in this DHT has an expiration time: a float computed as get_dht_time() (UnixTime by default). DHT nodes always prefer values with a higher expiration time and may delete any value past its expiration.

Compared to the Kademlia RPC protocol, Hivemind DHT has 3 RPCs:

  • ping - request peer’s identifier and update routing table (same as Kademlia PING RPC)

  • store - send several (key, value, expiration_time) pairs to the same peer (like Kademlia STORE, but in bulk)

  • find - request one or several keys, get values & expiration (if the peer finds them locally) and :bucket_size: of nearest peers from the recipient’s routing table (ordered nearest-to-farthest, not including the recipient itself). This RPC is a mixture of Kademlia FIND_NODE and FIND_VALUE with multiple keys per call.

Formally, DHTNode follows the following contract:

  • when asked to get(key), a node must find and return the value with the highest expiration time that it found across the DHT, IF that time has not come yet. If the expiration time is smaller than the current get_dht_time(), the node may return None;

  • when requested to store(key: value, expiration_time), a node must store (key => value) until the expiration time or until DHTNode gets the same key with a greater expiration time. If a node is asked to store a key but it already has the same key with a newer expiration, the older value will not be stored. Return True if stored, False if refused;

  • when requested to store(key: value, expiration_time, in_cache=True), stores (key => value) in a separate “cache”. Cache operates the same as regular storage, but it has a limited size and evicts least-recently-used entries when full;
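The contract above can be illustrated with a short sketch that uses DHTNode.create, store and get (all documented below). The import path and the concrete keys, values and timings are assumptions; since expiration times are get_dht_time() values (UnixTime by default), time.time() is used here.

    import asyncio
    import time  # expiration times are UnixTime by default (see the class description)

    from dht import DHTNode  # assumed import path; may differ in your installation

    async def contract_demo():
        node = await DHTNode.create(listen_on="0.0.0.0:*")
        now = time.time()

        ok_v1 = await node.store("expert.9", b"v1", expiration_time=now + 30)  # expected True: stored
        ok_v2 = await node.store("expert.9", b"v2", expiration_time=now + 60)  # expected True: newer expiration wins
        ok_v0 = await node.store("expert.9", b"v0", expiration_time=now + 10)  # expected False: older expiration is refused

        value, expiration = await node.get("expert.9", latest=True)
        print(value, expiration)  # expected b"v2" with expiration close to now + 60

        await node.shutdown()

    asyncio.run(contract_demo())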

async classmethod create(node_id: Optional[dht.routing.DHTID] = None, initial_peers: List[str] = (), bucket_size: int = 20, num_replicas: int = 5, depth_modulo: int = 5, parallel_rpc: Optional[int] = None, wait_timeout: float = 5, refresh_timeout: Optional[float] = None, bootstrap_timeout: Optional[float] = None, num_workers: int = 1, cache_locally: bool = True, cache_nearest: int = 1, cache_size=None, listen: bool = True, listen_on: str = '0.0.0.0:*', **kwargs) → dht.node.DHTNode[source]
Parameters
  • node_id – current node’s identifier, determines which keys it will store locally, defaults to random id

  • initial_peers – connects to these peers to populate routing table, defaults to no peers

  • bucket_size – max number of nodes in one k-bucket (k). Trying to add the (k+1)-st node will cause a bucket to either split in two buckets along the midpoint or reject the new node (but still save it as a replacement). Recommended value: k is chosen such that any given k nodes are very unlikely to all fail after staleness_timeout

  • num_replicas – number of nearest nodes that will be asked to store a given key, default = bucket_size (≈k)

  • depth_modulo – split full k-bucket if it contains root OR up to the nearest multiple of this value (≈b)

  • parallel_rpc – maximum number of concurrent outgoing RPC requests emitted by DHTProtocol Reduce this value if your RPC requests register no response despite the peer sending the response.

  • wait_timeout – a Kademlia RPC request is deemed lost if we did not receive a reply within this many seconds

  • refresh_timeout – refresh buckets if no node from that bucket was updated in this many seconds; if refresh_timeout is None, DHTNode will not refresh stale buckets (which is usually okay)

  • bootstrap_timeout – after one of peers responds, await other peers for at most this many seconds

  • num_workers – concurrent workers in traverse_dht (see traverse_dht num_workers param)

  • cache_locally – if True, caches all values (stored or found) in a node-local cache

  • cache_nearest – whenever DHTNode finds a value, it will also store (cache) this value on this many of the nearest nodes visited by the search algorithm. Prefers nodes that are nearest to :key: but have no value yet

  • cache_size – if specified, local cache will store up to this many records (as in LRU cache)

  • listen – if True (default), this node will accept incoming requests and otherwise be a full DHT “citizen”; if False, this node will refuse any incoming requests, effectively acting only as a “client”

  • listen_on – network interface, e.g. “0.0.0.0:1337” or “localhost:*” (“*” means pick any port) or “[::]:7654”

  • channel_options – options for grpc.aio.insecure_channel, e.g. [(‘grpc.enable_retries’, 0)] see https://grpc.github.io/grpc/core/group__grpc__arg__keys.html for a list of all options

  • kwargs – extra parameters used in grpc.aio.server
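For illustration, here is a sketch of bootstrapping a tiny two-node swarm with create(). The import path, port numbers and parameter values are assumptions; only parameters listed above are used.

    import asyncio
    import time

    from dht import DHTNode  # assumed import path

    async def two_node_swarm():
        # the first node listens on a fixed (hypothetical) port so the second node can reach it
        alice = await DHTNode.create(listen_on="0.0.0.0:31337")
        # the second node bootstraps its routing table from alice and limits its LRU cache
        bob = await DHTNode.create(initial_peers=["127.0.0.1:31337"], cache_size=1000)

        stored = await bob.store("expert.42", b"alive", expiration_time=time.time() + 300)
        print(stored, await alice.get("expert.42"))  # the value should become visible to both peers

        await asyncio.gather(alice.shutdown(), bob.shutdown())

    asyncio.run(two_node_swarm())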

async shutdown(timeout=None)[source]

Process existing requests, close all connections and stop the server

async find_nearest_nodes(queries: Collection[dht.routing.DHTID], k_nearest: Optional[int] = None, beam_size: Optional[int] = None, num_workers: Optional[int] = None, node_to_endpoint: Optional[Dict[dht.routing.DHTID, str]] = None, exclude_self: bool = False, **kwargs) → Dict[dht.routing.DHTID, Dict[dht.routing.DHTID, str]][source]
Parameters
  • queries – find k nearest nodes for each of these DHTIDs

  • k_nearest – return this many nearest nodes for every query (if there are enough nodes)

  • beam_size – replacement for self.beam_size, see traverse_dht beam_size param

  • num_workers – replacement for self.num_workers, see traverse_dht num_workers param

  • node_to_endpoint – if specified, uses this dict[node_id => endpoint] as initial peers

  • exclude_self – if True, nearest nodes will not contain self.node_id (default = use local peers)

  • kwargs – additional params passed to traverse_dht

Returns

for every query, return nearest peers ordered dict[peer DHTID -> network Endpoint], nearest-first
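A minimal sketch of calling find_nearest_nodes from user code, assuming an already-created DHTNode (see create() above) and run inside an event loop; the query key and import paths are assumptions.

    from dht import DHTNode            # assumed import paths
    from dht.routing import DHTID

    async def print_nearest_peers(node: DHTNode) -> None:
        # hash an application-level key into the DHTID key space
        query = DHTID.generate(source="ffn_expert.42")
        nearest = await node.find_nearest_nodes([query], k_nearest=5)
        # the result maps each query to {peer DHTID -> endpoint}, ordered nearest-first
        for peer_id, endpoint in nearest[query].items():
            print(peer_id, endpoint)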

async store(key: Any, value: Any, expiration_time: float, **kwargs) → bool[source]

Find num_replicas best nodes to store (key, value) and store it there at least until expiration time.

Note

store is a simplified interface to store_many; all kwargs are forwarded there

Returns

True if store succeeds, False if it fails (due to no response or newer value)

async store_many(keys: List[Any], values: List[Any], expiration_time: Union[float, List[float]], exclude_self: bool = False, await_all_replicas=True, **kwargs) → Dict[Any, bool][source]

Traverse the DHT to find the best nodes to store multiple (key, value, expiration_time) pairs (up to num_replicas nodes per key).

Parameters
  • keys – arbitrary serializable keys associated with each value

  • values – serializable “payload” for each key

  • expiration_time – either one expiration time for all keys or individual expiration times (see class doc)

  • kwargs – any additional parameters passed to traverse_dht function (e.g. num workers)

  • exclude_self – if True, never store value locally even if you are one of the nearest nodes

  • await_all_replicas – if False, this function returns after the first store_ok and proceeds in the background; if True, the function will wait for num_replicas successful stores or until it runs out of beam_size nodes

Note

if exclude_self is True and self.cache_locally == True, value will still be __cached__ locally

Returns

for each key: True if store succeeds, False if it fails (due to no response or newer value)

async get(key: Any, latest=False, **kwargs) → Tuple[Optional[Any], Optional[float]][source]

Search for a key across the DHT and return either the first or the latest entry.

Parameters
  • key – same key as in node.store(…)

  • latest – if True, finds the latest value, otherwise finds any non-expired value (which is much faster)

  • kwargs – parameters forwarded to get_many

Returns

(value, expiration time); if the value was not found, returns (None, None)

async get_many(keys: Collection[Any], sufficient_expiration_time: Optional[float] = None, num_workers: Optional[int] = None, beam_size: Optional[int] = None) → Dict[Any, Tuple[Optional[Any], Optional[float]]][source]
Parameters
  • keys – traverse the DHT and find the value for each of these keys (or (None, None) if a key is not found)

  • sufficient_expiration_time – if the search finds a value that expires after this time, it can return that value right away. Default = time of call, i.e. find any value that has not expired by the time of call. If set to float(‘inf’), this method will find the value with the _latest_ expiration

  • beam_size – maintains up to this many nearest nodes when crawling dht, default beam_size = bucket_size

  • num_workers – override for default num_workers, see traverse_dht num_workers param

Returns

for each key: value and its expiration time. If nothing is found, returns (None, None) for that key

Note

in order to check whether get found a value, check whether expiration_time is None (None means the key was not found)
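A sketch of the bulk interface (store_many / get_many) and of the not-found check from the note above; the import path, keys and expiration values are assumptions.

    import asyncio
    import time

    from dht import DHTNode  # assumed import path

    async def bulk_demo():
        node = await DHTNode.create(listen_on="0.0.0.0:*")
        expiration = time.time() + 300  # one shared expiration time for every key

        store_ok = await node.store_many(
            keys=["expert.1", "expert.2"], values=[b"alive", b"alive"], expiration_time=expiration)
        print(store_ok)  # {"expert.1": True, "expert.2": True} if all stores succeeded

        results = await node.get_many(["expert.1", "expert.404"])
        for key, (value, expiration_time) in results.items():
            if expiration_time is None:   # per the note above, this is how "not found" is detected
                print(key, "-> not found")
            else:
                print(key, "->", value, expiration_time)

        await node.shutdown()

    asyncio.run(bulk_demo())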

DHT communication protocol

RPC protocol that provides nodes a way to communicate with each other. Based on gRPC.AIO.

class dht.protocol.DHTProtocol(*, _initialized_with_create=False)[source]
async classmethod create(node_id: dht.routing.DHTID, bucket_size: int, depth_modulo: int, num_replicas: int, wait_timeout: float, parallel_rpc: Optional[int] = None, cache_size: Optional[int] = None, listen=True, listen_on='0.0.0.0:*', channel_options: Optional[Sequence[Tuple[str, Any]]] = None, **kwargs) → dht.protocol.DHTProtocol[source]

A protocol that allows DHT nodes to request keys/neighbors from other DHT nodes. As a side-effect, DHTProtocol also maintains a routing table as described in https://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia-lncs.pdf

See DHTNode (node.py) for a more detailed description.

Note

the rpc_* methods defined in this class will be automatically exposed to other DHT nodes; for instance, def rpc_ping can be called as protocol.call_ping(endpoint, dht_id) from a remote machine. Only the call_* methods are meant to be called publicly, e.g. from DHTNode. Read more: https://github.com/bmuller/rpcudp/tree/master/rpcudp

async shutdown(timeout=None)[source]

Process existing requests, close all connections and stop the server

async call_ping(peer: str) → Optional[dht.routing.DHTID][source]

Get peer’s node id and add it to the routing table. If the peer doesn’t respond, return None

Parameters
  • peer – string network address, e.g. 123.123.123.123:1337 or [2a21:6c8:b192:2105]:8888

Note

if DHTProtocol was created with listen=True, also request the peer to add you to its routing table

Returns

the peer’s DHTID, if the peer responded and decided to send its node_id

async rpc_ping(peer_info: grpc_1jj59xj2_pb2.NodeInfo, context: grpc.ServicerContext)[source]

Some node wants us to add it to our routing table.

async call_store(peer: str, keys: Sequence[dht.routing.DHTID], values: Sequence[bytes], expiration_time: Union[float, Sequence[float]], in_cache: Optional[Union[bool, Sequence[bool]]] = None) → Sequence[bool][source]

Ask a recipient to store several (key, value, expiration_time) items or update their older values

Parameters
  • peer – request this peer to store the data

  • keys – a list of N keys digested by DHTID.generate(source=some_dict_key)

  • values – a list of N serialized values (bytes) for each respective key

  • expiration_time – a list of N expiration timestamps for each respective key-value pair (see get_dht_time())

  • in_cache – a list of booleans: True = store the i-th key in cache, False = store the i-th key in regular local storage

Note

the difference between storing normally and in cache is that normal storage is kept until its expiration time (best effort), whereas cached storage can be evicted early due to the limited cache size

Returns

a list of [True / False]: True = stored, False = failed (found newer value or no response); if the peer did not respond at all (e.g. due to timeout or congestion), returns None

async rpc_store(request: grpc_1jj59xj2_pb2.StoreRequest, context: grpc.ServicerContext) → grpc_1jj59xj2_pb2.StoreResponse[source]

Some node wants us to store this (key, value) pair

async call_find(peer: str, keys: Collection[dht.routing.DHTID]) → Optional[Dict[dht.routing.DHTID, Tuple[Optional[bytes], Optional[float], Dict[dht.routing.DHTID, str]]]][source]
Request keys from a peer. For each key, look for its (value, expiration time) locally and for k additional peers that are most likely to have this key (ranked by XOR distance)

Returns

a dict: key => Tuple[optional value, optional expiration time, nearest neighbors], where value is the value stored by the recipient under that key (None if the peer doesn’t have this value), expiration time is the expiration time of the returned value (None if no value was found), and neighbors is a dictionary[node_id : endpoint] of nearest neighbors from the peer’s routing table. If the peer didn’t respond, returns None
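A sketch of the protocol-level RPCs (call_ping, call_store, call_find) against a known peer. The import paths, peer endpoint and parameter values are assumptions; values are passed as serialized bytes, as required by call_store, and the coroutine must be awaited inside an event loop with a reachable peer.

    import time

    from dht import DHTProtocol        # assumed import paths; the docs refer to
    from dht.routing import DHTID      # dht.protocol.DHTProtocol and dht.routing.DHTID

    async def protocol_demo(peer_endpoint: str):  # e.g. "127.0.0.1:31337" (hypothetical)
        # client-only protocol instance: listen=False means we refuse incoming requests
        protocol = await DHTProtocol.create(
            node_id=DHTID.generate(), bucket_size=20, depth_modulo=5,
            num_replicas=3, wait_timeout=5, listen=False)

        peer_id = await protocol.call_ping(peer_endpoint)  # None if the peer did not respond
        print("peer id:", peer_id)

        key = DHTID.generate(source="expert.1")
        store_ok = await protocol.call_store(
            peer_endpoint, keys=[key], values=[b"alive"], expiration_time=[time.time() + 60])
        print("stored:", store_ok)  # e.g. [True], or None if the peer did not respond

        found = await protocol.call_find(peer_endpoint, [key])
        if found is not None:
            value, expiration, neighbors = found[key]
            print(value, expiration, len(neighbors), "suggested neighbors")

        await protocol.shutdown()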

async rpc_find(request: grpc_1jj59xj2_pb2.FindRequest, context: grpc.ServicerContext) → grpc_1jj59xj2_pb2.FindResponse[source]

Someone wants to find keys in the DHT. For all keys that we have locally, return value and expiration. Also return :bucket_size: nearest neighbors from our routing table for each key (whether or not we found a value)

async update_routing_table(node_id: Optional[dht.routing.DHTID], peer_endpoint: str, responded=True)[source]

This method is called on every incoming AND outgoing request to update the routing table

Parameters
  • peer_endpoint – sender endpoint for incoming requests, recipient endpoint for outgoing requests

  • node_id – sender node id for incoming requests, recipient node id for outgoing requests

  • responded – for outgoing requests, this indicates whether the recipient responded. For incoming requests, this should always be True

class dht.routing.RoutingTable(node_id: dht.routing.DHTID, bucket_size: int, depth_modulo: int)[source]

A data structure that contains DHT peers bucketed according to their distance to node_id. Follows Kademlia routing table as described in https://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia-lncs.pdf

Parameters
  • node_id – node id used to measure distance

  • bucket_size – parameter $k$ from Kademlia paper Section 2.2

  • depth_modulo – parameter $b$ from Kademlia paper Section 2.2.

Note

you can find a more detailed description of parameters in DHTNode, see node.py

get_bucket_index(node_id: dht.routing.DHTID) → int[source]

Get the index of the bucket that the given node would fall into.

add_or_update_node(node_id: dht.routing.DHTID, endpoint: str) → Optional[Tuple[dht.routing.DHTID, str]][source]

Update routing table after an incoming request from :endpoint: or outgoing request to :endpoint:

Returns

If we cannot add node_id to the routing table, return the least-recently-updated node (Section 2.2)

Note

DHTProtocol calls this method for every incoming and outgoing request if there was a response. If this method returns a node to be pinged, the protocol will ping it to check whether it is still alive and either move it to the start of the table or remove it and replace it with the new node.

split_bucket(index: int) → None[source]

Split bucket range in two equal parts and reassign nodes to the appropriate half

get(*, node_id: Optional[dht.routing.DHTID] = None, endpoint: Optional[str] = None, default=None)[source]

Find endpoint for a given DHTID or vice versa

get_nearest_neighbors(query_id: dht.routing.DHTID, k: int, exclude: Optional[dht.routing.DHTID] = None) → List[Tuple[dht.routing.DHTID, str]][source]

Find k nearest neighbors from routing table according to XOR distance, does NOT include self.node_id

Parameters
  • query_id – find neighbors of this node

  • k – find this many neighbors. If there aren’t enough nodes in the table, returns all nodes

  • exclude – if specified, the results will not contain this node id even if it is in the table

Returns

a list of tuples (node_id, endpoint) for up to k neighbors sorted from nearest to farthest
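For illustration, a sketch of using RoutingTable directly (normally it is managed by DHTProtocol). The import path and endpoints are hypothetical.

    from dht.routing import DHTID, RoutingTable  # assumed import path

    me = DHTID.generate()
    table = RoutingTable(node_id=me, bucket_size=20, depth_modulo=5)

    # register a few peers; a non-None return value is the least-recently-updated node
    # that should be pinged before evicting it in favor of the new one
    for port in range(1234, 1244):
        displaced = table.add_or_update_node(DHTID.generate(), f"127.0.0.1:{port}")
        if displaced is not None:
            print("should ping before evicting:", displaced)

    query = DHTID.generate(source="expert.7")
    print(table.get_bucket_index(query))            # index of the bucket the query falls into
    print(table.get_nearest_neighbors(query, k=3))  # [(DHTID, endpoint), ...], nearest-first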

class dht.routing.KBucket(lower: int, upper: int, size: int, depth: int = 0)[source]

A bucket containing up to :size: DHTIDs in the [lower, upper) semi-interval. Maps DHT node ids to their endpoints

has_in_range(node_id: dht.routing.DHTID)[source]

Check if node_id is between this bucket’s lower and upper bounds

add_or_update_node(node_id: dht.routing.DHTID, endpoint: str) → bool[source]

Add node to KBucket or update existing node, return True if successful, False if the bucket is full. If the bucket is full, keep track of node in a replacement list, per section 4.1 of the paper.

Parameters
  • node_id – dht node identifier that should be added or moved to the front of bucket

  • endpoint – network address associated with that node id

Note

this function has a side-effect of resetting KBucket.last_updated time

request_ping_node() → Optional[Tuple[dht.routing.DHTID, str]][source]
Returns

least-recently updated node that isn’t already being pinged right now – if such node exists

split() → Tuple[dht.routing.KBucket, dht.routing.KBucket][source]

Split the bucket over its midpoint (rounded down) and assign nodes to the two halves according to their ids
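A toy sketch of KBucket behavior with a deliberately tiny size. The id-space bound assumes the 20-byte (160-bit) DHTIDs implied by DHTID.to_bytes; the import path and endpoints are hypothetical.

    from dht.routing import DHTID, KBucket  # assumed import path

    # one bucket covering the whole 160-bit id space, holding at most 2 nodes
    bucket = KBucket(lower=0, upper=2 ** 160, size=2)

    ids = [DHTID.generate() for _ in range(3)]
    print(bucket.add_or_update_node(ids[0], "127.0.0.1:1234"))  # True: added
    print(bucket.add_or_update_node(ids[1], "127.0.0.1:1235"))  # True: added
    print(bucket.add_or_update_node(ids[2], "127.0.0.1:1236"))  # False: full, kept in the replacement list

    print(bucket.request_ping_node())  # least-recently-updated node to ping before evicting it
    left, right = bucket.split()       # two buckets over [lower, midpoint) and [midpoint, upper)
    print(left.has_in_range(ids[0]), right.has_in_range(ids[0]))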

class dht.routing.DHTID(value: int)[source]
classmethod generate(source: Optional[Any] = None, nbits: int = 255)[source]

Generates random uid based on SHA1

Parameters

source – if provided, converts this value to bytes and uses it as input for hashing function; by default, generates a random dhtid from :nbits: random bits

xor_distance(other: Union[dht.routing.DHTID, Sequence[dht.routing.DHTID]]) → Union[int, List[int]][source]
Parameters

other – one or multiple DHTIDs. If given multiple DHTIDs as other, this function will compute distance from self to each of DHTIDs in other.

Returns

a number or a list of numbers whose binary representations equal bitwise xor between DHTIDs.

to_bytes(length=20, byteorder='big', *, signed=False) → bytes[source]

A standard way to serialize DHTID into bytes

classmethod from_bytes(raw: bytes, byteorder='big', *, signed=False) → dht.routing.DHTID[source]

Reverse of to_bytes
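A short sketch of DHTID usage (generate, xor_distance and the to_bytes / from_bytes round trip); the import path is an assumption.

    from dht.routing import DHTID  # assumed import path

    key_id = DHTID.generate(source="ffn_expert.98.76")  # deterministic: derived from the given key
    rand_id = DHTID.generate()                          # random id from :nbits: random bits

    print(key_id.xor_distance(rand_id))                      # one integer distance
    print(key_id.xor_distance([rand_id, DHTID.generate()]))  # a list of distances

    raw = key_id.to_bytes()                             # 20-byte big-endian serialization
    print(DHTID.from_bytes(raw).to_bytes() == raw)      # True: from_bytes reverses to_bytes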

Traverse (crawl) DHT

Utility functions for crawling DHT nodes, used to get and store keys in a DHT

async dht.traverse.simple_traverse_dht(query_id: dht.routing.DHTID, initial_nodes: Collection[dht.routing.DHTID], beam_size: int, get_neighbors: Callable[[dht.routing.DHTID], Awaitable[Tuple[Collection[dht.routing.DHTID], bool]]], visited_nodes: Collection[dht.routing.DHTID] = ()) → Tuple[List[dht.routing.DHTID], Set[dht.routing.DHTID]][source]

Traverse the DHT graph using get_neighbors function, find :beam_size: nearest nodes according to DHTID.xor_distance.

Note

This is a simplified (but working) algorithm provided for documentation purposes. The actual DHTNode uses traverse_dht, a generalization of this algorithm that allows multiple queries and concurrent workers.

Parameters
  • query_id – search query, find k_nearest neighbors of this DHTID

  • initial_nodes – nodes used to pre-populate beam search heap, e.g. [my_own_DHTID, …maybe_some_peers]

  • beam_size – beam search will not give up until it exhausts this many nearest nodes (to query_id) from the heap. Recommended value: a beam size of k_nearest * (2-5) will yield near-perfect results.

  • get_neighbors – A function that returns neighbors of a given node and controls the beam search stopping criteria. async def get_neighbors(node: DHTID) -> neighbors_of_that_node: List[DHTID], should_continue: bool. If should_continue is False, beam search will halt and return k_nearest of whatever it found by then.

  • visited_nodes – beam search will neither call get_neighbors on these nodes, nor return them as nearest

Returns

a list of k nearest nodes (nearest to farthest), and a set of all visited nodes (including visited_nodes)
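A self-contained toy example of simple_traverse_dht over an in-memory “network” of random DHTIDs; the import path is an assumption, and get_neighbors here simply reads a local dict instead of issuing RPCs.

    import asyncio
    import random

    from dht.routing import DHTID
    from dht.traverse import simple_traverse_dht  # assumed import path

    async def toy_crawl():
        # 50 random node ids; each node "knows" 5 random peers
        nodes = [DHTID.generate() for _ in range(50)]
        known_peers = {node: random.sample(nodes, 5) for node in nodes}

        async def get_neighbors(node: DHTID):
            # return (neighbors_of_that_node, should_continue); never stop early here
            return known_peers[node], True

        query = DHTID.generate(source="some key")
        nearest, visited = await simple_traverse_dht(
            query_id=query, initial_nodes=nodes[:3], beam_size=8, get_neighbors=get_neighbors)

        print([query.xor_distance(node) for node in nearest])  # distances, nearest-first
        print(len(visited), "nodes visited")

    asyncio.run(toy_crawl())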

async dht.traverse.traverse_dht(queries: Collection[dht.routing.DHTID], initial_nodes: List[dht.routing.DHTID], beam_size: int, num_workers: int, queries_per_call: int, get_neighbors: Callable[[dht.routing.DHTID, Collection[dht.routing.DHTID]], Awaitable[Dict[dht.routing.DHTID, Tuple[List[dht.routing.DHTID], bool]]]], found_callback: Optional[Callable[[dht.routing.DHTID, List[dht.routing.DHTID], Set[dht.routing.DHTID]], Awaitable[Any]]] = None, await_all_tasks: bool = True, visited_nodes: Optional[Dict[dht.routing.DHTID, Set[dht.routing.DHTID]]] = ()) → Tuple[Dict[dht.routing.DHTID, List[dht.routing.DHTID]], Dict[dht.routing.DHTID, Set[dht.routing.DHTID]]][source]

Search the DHT for nearest neighbors to :queries: (based on DHTID.xor_distance). Use get_neighbors to request peers. The algorithm can reuse intermediate results from each query to speed up search for other (similar) queries.

Parameters
  • queries – a list of search queries, find beam_size neighbors for these DHTIDs

  • initial_nodes – nodes used to pre-populate beam search heap, e.g. [my_own_DHTID, …maybe_some_peers]

  • beam_size – beam search will not give up until it visits this many nearest nodes (to query_id) from the heap

  • num_workers – run up to this many concurrent get_neighbors requests, each querying one peer for neighbors. When selecting a peer to request neighbors from, workers try to balance concurrent exploration across queries. A worker will expand the nearest candidate to a query with least concurrent requests from other workers. If several queries have the same number of concurrent requests, prefer the one with nearest XOR distance.

  • queries_per_call – workers can pack up to this many queries in one get_neighbors call. These queries contain the primary query (see num_workers above) and up to queries_per_call - 1 nearest unfinished queries.

  • get_neighbors – A function that requests a given peer to find nearest neighbors for multiple queries async def get_neighbors(peer, queries) -> {query1: ([nearest1, nearest2, …], False), query2: ([…], True)} For each query in queries, return nearest neighbors (known to a given peer) and a boolean “should_stop” flag If should_stop is True, traverse_dht will no longer search for this query or request it from other peers. The search terminates iff each query is either stopped via should_stop or finds beam_size nearest nodes.

  • found_callback – if specified, call this callback for each finished query the moment it finishes or is stopped. More specifically, run asyncio.create_task(found_callback(query, nearest_to_query, visited_for_query)). Using this callback allows one to process results before traverse_dht finishes for all queries.

  • await_all_tasks – if True, wait for all tasks to finish before returning; otherwise, return after finding the nearest neighbors and finish the remaining tasks (callbacks and queries to known-but-unvisited nodes) in the background

  • visited_nodes – for each query, do not call get_neighbors on these nodes, nor return them among nearest.

Note

The source code of this function can get tricky to read. Take a look at the simple_traverse_dht function for reference; it implements a special case of traverse_dht with a single query and one worker.

Returns

a dict of nearest nodes and a dict of visited nodes:

nearest nodes: { query -> a list of up to beam_size nearest nodes, ordered nearest-first }

visited nodes: { query -> a set of all nodes that received requests for a given query }
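The same toy setup extended to traverse_dht with several concurrent queries and workers; again, the import path is an assumption and get_neighbors reads an in-memory dict rather than calling real peers.

    import asyncio
    import random

    from dht.routing import DHTID
    from dht.traverse import traverse_dht  # assumed import path

    async def toy_multi_query_crawl():
        nodes = [DHTID.generate() for _ in range(100)]
        known_peers = {node: random.sample(nodes, 5) for node in nodes}

        async def get_neighbors(peer: DHTID, queries):
            # for every query, return this peer's neighbors and should_stop=False
            return {query: (known_peers[peer], False) for query in queries}

        queries = [DHTID.generate() for _ in range(4)]
        nearest, visited = await traverse_dht(
            queries, initial_nodes=nodes[:3], beam_size=8,
            num_workers=2, queries_per_call=2, get_neighbors=get_neighbors)

        for query in queries:
            print(len(nearest[query]), "nearest,", len(visited[query]), "visited")

    asyncio.run(toy_multi_query_crawl())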