
Merge pull request #3586 from KBVE/beta
h0lybyte authored Dec 19, 2024
2 parents 6ad1dbd + a2bae33 commit 4f808e5
Showing 23 changed files with 1,870 additions and 406 deletions.
68 changes: 57 additions & 11 deletions apps/fudster/plugin/json/KBVEPlayerGPS.java
@@ -3,22 +3,68 @@
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;
import net.runelite.api.coords.WorldPoint;
import net.runelite.client.plugins.microbot.util.walker.Rs2Walker;

@Getter
@Setter
@ToString
public class KBVEPlayerGPS {
    private String command;
    private String username;
    private int x;
    private int y;
    private int z;

    public KBVEPlayerGPS(String command, String username, int x, int y, int z) {
        this.command = command;
        this.username = username;
        this.x = x;
        this.y = y;
        this.z = z;
    }

    /**
     * Processes the current GPS command.
     *
     * @return true if the command was successfully executed, false otherwise.
     */
    public boolean processCommand() {
        if (command == null || command.isEmpty()) {
            System.out.println("[KBVEPlayerGPS]: Invalid command.");
            return false;
        }

        switch (command.toUpperCase()) {
            case "WALK":
                return executeWalk();
            case "STOP":
                System.out.println("[KBVEPlayerGPS]: Stop command received (No-op).");
                return true;
            default:
                System.out.println("[KBVEPlayerGPS]: Unknown command: " + command);
                return false;
        }
    }

    /**
     * Executes the WALK command by invoking Rs2Walker.walkTo().
     *
     * @return true if the walk command was successfully initiated, false otherwise.
     */
    private boolean executeWalk() {
        try {
            WorldPoint target = new WorldPoint(x, y, z);
            System.out.println("[KBVEPlayerGPS]: Walking to " + target);
            return Rs2Walker.walkTo(target);
        } catch (Exception e) {
            System.out.println(
                "[KBVEPlayerGPS]: Error while executing WALK command - " + e.getMessage()
            );
            e.printStackTrace();
            return false;
        }
    }
}
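
For context, a minimal caller sketch (hypothetical wrapper class and made-up coordinates; this only does anything inside a running Microbot client, since `Rs2Walker` needs a live session):

```java
// Hypothetical dispatch example, not part of this diff: shows how a
// deserialized GPS payload would flow through processCommand().
public class KBVEPlayerGPSExample {
    public static void main(String[] args) {
        // Made-up coordinates (roughly Lumbridge); plane 0 is ground level.
        KBVEPlayerGPS gps = new KBVEPlayerGPS("WALK", "h0lybyte", 3222, 3218, 0);

        // WALK routes to Rs2Walker.walkTo(); the return value reports
        // whether the walk was initiated.
        boolean ok = gps.processCommand();
        System.out.println("Walk initiated: " + ok);
    }
}
```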
59 changes: 46 additions & 13 deletions apps/kbve.com/src/content/docs/application/longhorn.mdx
@@ -12,12 +12,12 @@ tags:
---

import {
Aside,
Steps,
Card,
CardGrid,
Code,
FileTree,
} from '@astrojs/starlight/components';

import { Giscus, Adsense } from '@kbve/astropad';
@@ -33,7 +33,6 @@

- Requirements for Longhorn v1.3

---

## NFS
@@ -51,19 +50,53 @@ import { Giscus, Adsense } from '@kbve/astropad';
- ```shell
sudo apt-get install nfs-common nfs-kernel-server -y
```
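
To confirm the server side came up, a quick sanity check (a sketch; the service name below assumes a Debian/Ubuntu install):

```shell
# Verify the NFS kernel server is running.
sudo systemctl status nfs-kernel-server

# List the exports this host currently offers.
sudo exportfs -v
```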

---

## Namespace

- Creating a custom namespace to hold the storage.

Kubectl command to create the namespace:

- ```shell
  kubectl create namespace storage
  ```

- std out: namespace/storage created

- This namespace will be where we store our production data.
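
To double-check the namespace, a quick lookup (a minimal sketch):

```shell
# Confirm the namespace exists and reports an Active status.
kubectl get namespace storage
```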

---

## Longhorn Uno

Under the storage class setup, we will be creating the `longhorn-uno` class and then deploying under it.

```yaml
annotations:
  longhorn.io/last-applied-configmap: |
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: longhorn-uno
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    reclaimPolicy: "Delete"
    volumeBindingMode: Immediate
    parameters:
      numberOfReplicas: "1"
      staleReplicaTimeout: "30"
      fromBackup: ""
      fsType: "ext4"
      dataLocality: "best-effort"
      unmapMarkSnapChainRemoved: "ignored"
      disableRevisionCounter: "true"
      dataEngine: "v1"
  storageclass.beta.kubernetes.io/is-default-class: 'false'
  storageclass.kubernetes.io/is-default-class: 'false'
```
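
Once the fragment above is folded into a complete StorageClass manifest (saved here as `longhorn-uno.yaml`, a hypothetical filename), a sketch for applying and verifying it:

```shell
# Apply the StorageClass and confirm Longhorn registered it.
kubectl apply -f longhorn-uno.yaml
kubectl get storageclass longhorn-uno -o wide

# Inspect the parameters Longhorn will use for new volumes.
kubectl describe storageclass longhorn-uno
```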
24 changes: 24 additions & 0 deletions apps/kbve.com/src/content/journal/12-13.mdx
@@ -0,0 +1,24 @@
---
title: 'December: 13th'
category: Daily
date: 2024-12-13 12:00:00
client: Self
unsplash: 1511512578047-dfb367046420
img: https://images.unsplash.com/photo-1511512578047-dfb367046420?crop=entropy&cs=srgb&fm=jpg&ixid=MnwzNjM5Nzd8MHwxfHJhbmRvbXx8fHx8fHx8fDE2ODE3NDg2ODY&ixlib=rb-4.0.3&q=85
description: December 13th.
tags:
- daily
---

import { Adsense, Tasks } from '@kbve/astropad';

## 2024

- 09:30AM

**Friday**

The Tesla stock needs to relax; jesus, I am going to be rich, but damn, it's too soon.
The issue is that I wanted more before it becomes way too popular and expensive to hold.
I can expect Tesla to also stock split next year, so I need to prepare for that.

23 changes: 23 additions & 0 deletions apps/kbve.com/src/content/journal/12-14.mdx
@@ -0,0 +1,23 @@
---
title: 'December: 14th'
category: Daily
date: 2024-12-14 12:00:00
client: Self
unsplash: 1511512578047-dfb367046420
img: https://images.unsplash.com/photo-1511512578047-dfb367046420?crop=entropy&cs=srgb&fm=jpg&ixid=MnwzNjM5Nzd8MHwxfHJhbmRvbXx8fHx8fHx8fDE2ODE3NDg2ODY&ixlib=rb-4.0.3&q=85
description: December 14th.
tags:
- daily
---

import { Adsense, Tasks } from '@kbve/astropad';

## 2024

- 08:00PM

**Unity**

The Unity session came to an end, but I have learned so much!


27 changes: 27 additions & 0 deletions apps/kbve.com/src/content/journal/12-15.mdx
@@ -0,0 +1,27 @@
---
title: 'December: 15th'
category: Daily
date: 2024-12-15 12:00:00
client: Self
unsplash: 1511512578047-dfb367046420
img: https://images.unsplash.com/photo-1511512578047-dfb367046420?crop=entropy&cs=srgb&fm=jpg&ixid=MnwzNjM5Nzd8MHwxfHJhbmRvbXx8fHx8fHx8fDE2ODE3NDg2ODY&ixlib=rb-4.0.3&q=85
description: December 15th.
tags:
- daily
---

import { Adsense, Tasks } from '@kbve/astropad';

## 2024

- 08:45PM

The weather is finally getting better; it's been so fucking cold.
I have a bunch of notes that I need to sync across the past three days.



- 11:00PM

Taco Bell run is required; need to stock up on the processed plastic food!
Afterwards, going to shift back to that Unity flow.
50 changes: 50 additions & 0 deletions apps/kbve.com/src/content/journal/12-16.mdx
@@ -0,0 +1,50 @@
---
title: 'December: 16th'
category: Daily
date: 2024-12-16 12:00:00
client: Self
unsplash: 1511512578047-dfb367046420
img: https://images.unsplash.com/photo-1511512578047-dfb367046420?crop=entropy&cs=srgb&fm=jpg&ixid=MnwzNjM5Nzd8MHwxfHJhbmRvbXx8fHx8fHx8fDE2ODE3NDg2ODY&ixlib=rb-4.0.3&q=85
description: December 16th.
tags:
- daily
---

import { Adsense, Tasks } from '@kbve/astropad';

## 2024

- 03:52AM

**Sleep**

What is sleep? That is the real question for tonight.

- 07:48PM

**AStar**

Doing more research on the Unity plugin for the A* pathfinder, but I also have concerns about performance.
Will the algorithm update fast enough on WebGL? Hmm, doing more research on that later tonight.

- 08:08PM

**Fudster**

For the fudster project, making sure that a different instance is spun up and limiting control to only a specific user?
This might be via UUID, or I might find a way to handle that initially.

- 08:30PM

**Proxmox**

Going through the painful cycle of backing up each container and making sure that everything will run smoothly when performing the update.
My bigger concern is that we might not have enough RAM, so we need to make those adjustments.

- 09:40PM

**Kubes**

We just spent a bunch of time trying to get the kubes upgraded from version `v1.30.7` to `v1.31.3`!
This included some issues with the backups, in which VM Worker 4 was just way too heavy or large to back up.
Past that specific issue, I already know that we need a new machine to migrate our backups to, but that requires me to go to microcity to get that arranged.
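
For reference, a sketch of the usual upgrade flow for that version jump, assuming kubeadm-managed nodes (the notes don't name the tooling, and `worker-4` is a made-up node name):

```shell
# Control plane: pick up the new kubeadm, review the plan, then apply.
sudo apt-get update && sudo apt-get install -y kubeadm='1.31.3-*'
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.31.3

# Each node: drain, upgrade kubelet/kubectl, restart, uncordon.
kubectl drain worker-4 --ignore-daemonsets --delete-emptydir-data
sudo apt-get install -y kubelet='1.31.3-*' kubectl='1.31.3-*'
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl uncordon worker-4
```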
94 changes: 94 additions & 0 deletions apps/kbve.com/src/content/journal/12-17.mdx
@@ -0,0 +1,94 @@
---
title: 'December: 17th'
category: Daily
date: 2024-12-17 12:00:00
client: Self
unsplash: 1511512578047-dfb367046420
img: https://images.unsplash.com/photo-1511512578047-dfb367046420?crop=entropy&cs=srgb&fm=jpg&ixid=MnwzNjM5Nzd8MHwxfHJhbmRvbXx8fHx8fHx8fDE2ODE3NDg2ODY&ixlib=rb-4.0.3&q=85
description: December 17th.
tags:
- daily
---

import { Adsense, Tasks } from '@kbve/astropad';

## 2024

- 02:26AM

**Kilobase**

During the migration process, we made a small mistake with the longhorn and now have to redeploy the database.
I am going to take that time to see if there are any updates that we need to do as well.

- 03:20AM

**PvC**

We got a couple of persistent volume claims that are stuck in a weird loop, where we are not too sure if they can be found or not.
To help debug this problem, we will be running these two commands:

```shell
kubectl describe pvc pvc-1696b3cc-da05-4753-872c-f88f13e20d0a
kubectl get pv | grep pvc-1696b3cc-da05-4753-872c-f88f13e20d0a
```

Then it returns this:

```
Error from server (NotFound): persistentvolumeclaims "pvc-1696b3cc-da05-4753-872c-f88f13e20d0a" not found
8Gi RWO Delete Terminating armada/redis-data-redis-master-0 longhorn <unset> 86d
```

To grab the location for it, we went ahead and did this:

```shell
kubectl edit pv pvc-1696b3cc-da05-4753-872c-f88f13e20d0a
```

This will allow us to make a couple of quick edits to the `finalizers` that hold that PVC.
In this first case, we want to just remove the Redis data block, so we will strip out the finalizers holding it.
Then do this:

```shell
kubectl delete pv pvc-1696b3cc-da05-4753-872c-f88f13e20d0a --grace-period=0 --force
```
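
As an aside, the same finalizer strip can be done non-interactively instead of via `kubectl edit` (a sketch against the same PV):

```shell
# Clear all finalizers on the stuck PV so the delete can complete.
kubectl patch pv pvc-1696b3cc-da05-4753-872c-f88f13e20d0a \
  -p '{"metadata":{"finalizers":null}}'
```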

We then need to locate the orphaned data inside of our Longhorn.

```
orphan-ee68325bd461018c0d1103776ad999b60263d56395c9f9364bc5bd7d2b08a844
orphan-7bc0707b775aae99dc1927e76ac1677839a2ef7eacbb6509014aafe5cc81c77e
```
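
If these are Longhorn's orphaned replica directories, Longhorn v1.3+ tracks them as an `orphan` CRD, so a sketch for listing and clearing them:

```shell
# List orphaned replica data tracked by Longhorn.
kubectl -n longhorn-system get orphans

# Delete a specific orphan once it is confirmed safe to remove.
kubectl -n longhorn-system delete orphan orphan-ee68325bd461018c0d1103776ad999b60263d56395c9f9364bc5bd7d2b08a844
```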

The next option might just be to delete the deployment and start again?

Okay, we know the Redis sits in the `armada-naval`, so we will go ahead and rebuild it from there.
The first step will be the name, which is going to be `armada-naval`, and it will deploy to the `armada` namespace.
The repository URL will be `https://github.com/KBVE/kbve.git` and the branch will be `dev`.
Afterwards, we want to enable the self-healing and set the path to `/migrations/kube/charts/armada`; a rough manifest sketch follows below. Hmm, there is another option to keep resources, but we can let Longhorn handle that for us.
While that gets pushed up and prepares to build itself out, we can shift over to AWS and prepare for the Kilobase launch.
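
Assuming this rebuild goes through Argo CD (an inference from the self-heal, repo, branch, and path fields; the notes describe a UI flow, and the `argocd` namespace below is the conventional default), the equivalent Application manifest would look roughly like:

```shell
# Hypothetical Argo CD Application mirroring the values described above.
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: armada-naval
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/KBVE/kbve.git
    targetRevision: dev
    path: migrations/kube/charts/armada
  destination:
    server: https://kubernetes.default.svc
    namespace: armada
  syncPolicy:
    automated:
      selfHeal: true
EOF
```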

- 03:49AM

**Kilobase**

Time to bring the Kilobase back around, but with more updates and minor fixes as well.
We need to pull the S3 backup and do the cluster auto-migration / repair, but I did want to deploy a new instance of Kilobase, hmm.
I will hold off on that Rust adaptation for now; too many problems can be an issue for us.
Same as before, but we will swap out the path for `/migrations/kube/charts/kilobase`.

- 05:03PM

**Realtime**

Now I want to focus on getting the realtime Supabase to work, hopefully without too many errors in this process.

- 11:24PM

**Redis**

After getting the Longhorn and size situation resolved, we will be moving forward with the Redis deployment.
The goal of the Redis deployment is to just get a better understanding of how it will work in our ecosystem.
Let me push up phase zero of the deployment and see where it goes from there.
Under Armada, I am going to have to add another deployment change.
