Commit: lint fixes, added spec docs

erhant committed Aug 27, 2024
1 parent 87c1ddb commit 0405b22
Showing 9 changed files with 20 additions and 10 deletions.
12 changes: 11 additions & 1 deletion README.md
@@ -40,7 +40,7 @@ Compute nodes can technically do any arbitrary task, from computing the square r

## Requirements

- The compute node is a very lightweight process, with a few MBs of memory usage and an image size under ~65 MB. If you are using Ollama, you will need enough memory to run large models locally, which depends on the size of the models you are willing to run.
+ ### Software

You need the following applications to run the compute node:

@@ -56,6 +56,16 @@ You need the following applications to run the compute node:
> which docker
> ```
+ ### Hardware
+ **For overall specifications about required CPU and RAM, please refer to [dkn-node-specs](https://github.com/firstbatchxyz/dkn-node-specs).**
+ In general, if you are using Ollama you will need enough memory to run large models locally, which depends on the size of the models you are willing to run. If you are in a memory-constrained environment, you can opt to use OpenAI models instead.
+ > [!NOTE]
+ >
+ > The compute node is a lightweight process, but you may see increased memory & CPU usage during the initial testing phases, due to various protocol-level operations as the network grows.
## Setup
To be able to run a node, we need to make a few simple preparations. Follow the steps below one by one.
2 changes: 1 addition & 1 deletion src/config/mod.rs
@@ -157,7 +157,7 @@ impl DriaComputeNodeConfig {

// update good models
if good_models.is_empty() {
return Err("No good models found, please check logs for errors.".into());
Err("No good models found, please check logs for errors.".into())
} else {
self.model_config.models = good_models;
Ok(())
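The change above is clippy's `needless_return` lint: in Rust, the last expression of a block is its value, so a trailing `return expr;` can be written as just `expr`. A minimal standalone sketch of the pattern (hypothetical `check_models` helper, not the node's actual code):

```rust
fn check_models(good_models: Vec<String>) -> Result<Vec<String>, String> {
    if good_models.is_empty() {
        // Tail expression: no `return` keyword or trailing semicolon needed,
        // since this is the value the whole `if`/`else` evaluates to.
        Err("No good models found, please check logs for errors.".into())
    } else {
        Ok(good_models)
    }
}

fn main() {
    assert!(check_models(vec![]).is_err());
    assert!(check_models(vec!["some-model".into()]).is_ok());
}
```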
4 changes: 2 additions & 2 deletions src/config/ollama.rs
@@ -141,7 +141,7 @@ impl OllamaConfig {
// otherwise, give error
log::error!("Please download missing model with: ollama pull {}", model);
log::error!("Or, set OLLAMA_AUTO_PULL=true to pull automatically.");
return Err("Required model not pulled in Ollama.".into());
Err("Required model not pulled in Ollama.".into())
}
}

@@ -219,6 +219,6 @@ impl OllamaConfig {
}
};

- return false;
+ false
}
}
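Same `needless_return` lint here, twice: the `Err(...)` after the error logs and the final `false` are both tail expressions. Early returns from the middle of a function still need the `return` keyword; only the final expression drops it. A sketch using the `OLLAMA_AUTO_PULL` variable mentioned in the logs above (hypothetical helper, not the node's code):

```rust
fn is_auto_pull_enabled() -> bool {
    if let Ok(val) = std::env::var("OLLAMA_AUTO_PULL") {
        if val == "true" {
            return true; // early return: `return` is still required here
        }
    }

    // Tail expression: `return false;` simplifies to just `false`.
    false
}

fn main() {
    println!("auto-pull enabled: {}", is_auto_pull_enabled());
}
```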
1 change: 1 addition & 0 deletions src/config/openai.rs
@@ -105,6 +105,7 @@ mod tests {
use super::*;

#[tokio::test]
#[ignore = "requires OpenAI API key"]
async fn test_openai_check() {
let config = OpenAIConfig::new();
let res = config.check(vec![]).await;
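The added `#[ignore = "..."]` attribute (the reason string has been accepted since Rust 1.61) excludes the test from a plain `cargo test` run while keeping it compilable and runnable on demand. A minimal sketch:

```rust
#[cfg(test)]
mod tests {
    #[test]
    #[ignore = "requires external API access"]
    fn test_external_service() {
        // Runs only with `cargo test -- --ignored`
        // (or `cargo test -- --include-ignored` to run all tests).
    }
}
```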
2 changes: 1 addition & 1 deletion src/lib.rs
@@ -10,7 +10,7 @@ pub(crate) mod utils;

/// Crate version of the compute node.
/// This value is attached within the published messages.
- pub const DRIA_COMPUTE_NODE_VERSION: &'static str = env!("CARGO_PKG_VERSION");
+ pub const DRIA_COMPUTE_NODE_VERSION: &str = env!("CARGO_PKG_VERSION");

pub use config::DriaComputeNodeConfig;
pub use node::DriaComputeNode;
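This is clippy's `redundant_static_lifetimes` lint: references in `const` and `static` items always have the `'static` lifetime, so writing it out is noise. A standalone sketch:

```rust
// `&str` here is implicitly `&'static str`; the explicit lifetime was redundant.
pub const APP_VERSION: &str = env!("CARGO_PKG_VERSION");

fn main() {
    // The constant can still be bound as a `&'static str`.
    let v: &'static str = APP_VERSION;
    println!("version: {}", v);
}
```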
1 change: 0 additions & 1 deletion src/main.rs
@@ -42,7 +42,6 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
tokio::select! {
_ = service_check_token.cancelled() => {
log::info!("Service check cancelled.");
- return;
}
result = config_clone.check_services() => {
if let Err(err) = result {
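The deleted `return;` is another `needless_return`: the `select!` arm ends right after the log call, so control leaves the arm anyway. A runnable sketch of the pattern, assuming the `tokio` (with time and macro features) and `tokio-util` (for `CancellationToken`) crates; the task body is hypothetical, not the node's service-check logic:

```rust
use std::time::Duration;
use tokio_util::sync::CancellationToken;

#[tokio::main]
async fn main() {
    let token = CancellationToken::new();
    let token_clone = token.clone();

    let task = tokio::spawn(async move {
        tokio::select! {
            _ = token_clone.cancelled() => {
                println!("Service check cancelled.");
                // No trailing `return;` needed: the arm simply ends here.
            }
            _ = tokio::time::sleep(Duration::from_secs(60)) => {
                println!("Service check finished.");
            }
        }
    });

    token.cancel();
    task.await.unwrap();
}
```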
2 changes: 1 addition & 1 deletion src/node.rs
@@ -134,7 +134,7 @@ impl DriaComputeNode {
// handle message w.r.t topic
if std::matches!(topic_str, PINGPONG_LISTEN_TOPIC | WORKFLOW_LISTEN_TOPIC) {
// ensure that the message is from a valid source (origin)
- let source_peer_id = match message.source.clone() {
+ let source_peer_id = match message.source {
Some(peer) => peer,
None => {
log::warn!("Received {} message from {} without source.", topic_str, peer_id);
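Dropping the `.clone()` works because matching on the owned `Option` moves the value out directly; cloning first would build a copy only to discard the original. A generic sketch with a stand-in type (the real field is an `Option` of a libp2p peer ID):

```rust
struct Message {
    source: Option<String>, // stand-in for the peer ID type
}

fn handle(message: Message) {
    // Match on the owned Option: `peer` is moved out, no clone required.
    let source_peer_id = match message.source {
        Some(peer) => peer,
        None => {
            eprintln!("received message without source");
            return;
        }
    };
    println!("message from {}", source_peer_id);
}

fn main() {
    handle(Message { source: Some("peer-123".into()) });
    handle(Message { source: None });
}
```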
5 changes: 2 additions & 3 deletions src/p2p/client.rs
@@ -123,7 +123,7 @@ impl P2PClient {

Ok(Self {
swarm,
- version: Version::parse(&DRIA_COMPUTE_NODE_VERSION).unwrap(),
+ version: Version::parse(DRIA_COMPUTE_NODE_VERSION).unwrap(),
peer_count: (0, 0),
peer_last_refreshed: Instant::now(),
})
@@ -272,8 +272,7 @@ impl P2PClient {
.iter()
.find(|p| p.to_string().starts_with("/dria/kad/"))
{
- let protocol_ok =
-     self.check_version_with_prefix(&kad_protocol.to_string(), "/dria/kad/");
+ let protocol_ok = self.check_version_with_prefix(kad_protocol.as_ref(), "/dria/kad/");

// if it matches our protocol, add it to the Kademlia routing table
if protocol_ok {
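Two related cleanups in this file: `&DRIA_COMPUTE_NODE_VERSION` borrowed a constant that is already a `&str` (clippy's `needless_borrow`), and `kad_protocol.as_ref()` lends the existing string data instead of allocating a temporary `String` via `.to_string()`. A sketch of both, with hypothetical names and values:

```rust
const NODE_VERSION: &str = "0.1.0"; // already a &str, so no extra `&` needed

fn check_version_with_prefix(protocol: &str, prefix: &str) -> bool {
    protocol.strip_prefix(prefix).is_some()
}

fn main() {
    // needless_borrow: pass the constant directly instead of `&NODE_VERSION`;
    // the extra reference would only be deref-coerced away again.
    assert!(!NODE_VERSION.is_empty());

    // Borrow via `as_ref()` rather than allocating with `.to_string()`.
    let kad_protocol = String::from("/dria/kad/0.1.0");
    assert!(check_version_with_prefix(kad_protocol.as_ref(), "/dria/kad/"));
}
```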
1 change: 1 addition & 0 deletions src/utils/crypto.rs
@@ -120,6 +120,7 @@ mod tests {
}

#[test]
#[ignore = "run only with profiler if wanted"]
fn test_memory_usage() {
let secret_key =
SecretKey::parse_slice(DUMMY_KEY).expect("Should parse private key slice.");
