Commit
Merge pull request #66 from shounakmongodb/with-graph-rag-code
After adding Graph RAG code
RichmondAlake authored Jan 16, 2025
2 parents d16cd3d + ee16519 commit 8b37f25
Showing 22 changed files with 3,306 additions and 0 deletions.
Empty file added apps/graph_rag_demo/.env
Empty file.
19 changes: 19 additions & 0 deletions apps/graph_rag_demo/.gitignore
@@ -0,0 +1,19 @@
# Ignore Mac system files
.DS_Store

# Ignore node_modules folder
node_modules

# Ignore all text files
*.txt

# Ignore files related to API keys
./.env
./.gitignore

# Ignore SASS config files
.sass-cache
# Ignore PDF files
PDF_KG
.sass-cache

100 changes: 100 additions & 0 deletions apps/graph_rag_demo/README.md
@@ -0,0 +1,100 @@
# kg_graph__rag_mongo
This guide explores how to use MongoDB to create and query graph representations with both Python and Node.js, demonstrating MongoDB's versatility across development environments. It walks through the essential steps of Graph RAG: representing and analyzing a knowledge graph alongside RAG data in MongoDB to surface insights into the complex relationships and interactions within your documents.


## Prep Steps
1. Set Up Your MongoDB Database:

Set up [MongoDB Atlas, a cloud-based MongoDB instance](https://www.mongodb.com/docs/atlas/tutorial/create-new-cluster/).

2. Install Required Libraries:
For Python:
Install Anaconda for your operating system by following the [installation guide](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html).
Once conda is installed, open a shell with the conda CLI, change to this directory, and create a new environment with the packages listed in environment.yaml by running:

```bash
conda env create -f environment.yaml
conda activate kg-demo
```
For Node.js:
The Node.js files ship with a `package.json`, so `npm install` will install all required packages (see the command below). The code was tested with Node v20.15.0 and npm 10.7.0.
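For example, from this project directory (assuming Node.js and npm are already installed):

```bash
npm install
```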

3. Open the .env file in this directory and populate the following three variables (note that the .env file may be empty when you clone this repository):
OPENAI_API_KEY1=
ATLAS_CONNECTION_STRING=
PDF=

4. Create the MongoDB Atlas database:
At this point, ATLAS_CONNECTION_STRING and OPENAI_API_KEY1 should already be set in the environment file.
Create a new MongoDB Atlas database called <code>langchain_db</code>.
Create two collections named <code>knowledge_graph</code> and <code>nodes_relationships</code>. A scripted alternative is sketched below.
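If you prefer to create the collections from a script instead of the Atlas UI, here is a minimal pymongo sketch (not part of this repo), assuming `ATLAS_CONNECTION_STRING` is already populated in `.env`:

```python
import os

from dotenv import load_dotenv
from pymongo import MongoClient

load_dotenv()
client = MongoClient(os.getenv("ATLAS_CONNECTION_STRING"))
db = client["langchain_db"]

# Explicitly create the two collections used by this demo.
for name in ("knowledge_graph", "nodes_relationships"):
    if name not in db.list_collection_names():
        db.create_collection(name)

client.close()
```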

5. Download and install **MongoDB Compass** for your platform by following the steps [here](https://www.mongodb.com/docs/compass/current/install/). Ensure you install Compass version 1.40.0 or higher. Once installed, connect to your Atlas cluster by following [these instructions](https://www.mongodb.com/docs/compass/current/connect/).

6. In the Compass UI, create a vector search index on the <code>embedding</code> field of the <code>knowledge_graph</code> collection. [Please refer to this document](https://www.mongodb.com/docs/compass/current/indexes/create-vector-search-index/). You can use the following JSON document for the index definition. Name the vector index <code>vector_index</code>.
Sample:
```json
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1536,
      "similarity": "euclidean"
    }
  ]
}
```
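Alternatively, the vector index can be created from a script. The sketch below is an assumption, not part of this repo: it requires a recent pymongo (4.7 or later) for `SearchIndexModel` with `type="vectorSearch"`, and an Atlas cluster tier that allows programmatic search index creation.

```python
import os

from dotenv import load_dotenv
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

load_dotenv()
client = MongoClient(os.getenv("ATLAS_CONNECTION_STRING"))
collection = client["langchain_db"]["knowledge_graph"]

# Same definition as the JSON sample above, submitted via the driver.
vector_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",
                "numDimensions": 1536,
                "similarity": "euclidean",
            }
        ]
    },
    name="vector_index",
    type="vectorSearch",
)
collection.create_search_index(model=vector_model)
client.close()
```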
After this, create an Atlas Search index on the <code>knowledge_graph</code> collection. [Please refer to this document](https://www.mongodb.com/docs/compass/current/indexes/create-search-index/). Name the Atlas Search index <code>default</code>; a minimal definition is shown below.
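For the <code>default</code> index, a minimal definition with dynamic field mapping (which automatically indexes string fields such as <code>chunks</code>) should be sufficient:

```json
{
  "mappings": {
    "dynamic": true
  }
}
```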




## Running the application

### If running for the first time
First, set up the data in the collections by performing the following steps:

1. Open an Anaconda Shell and navigate to the project directory.
2. Activate the kg-demo environment by running the following command:
```bash
conda activate kg-demo
```
3. Run the <code>data_insert.py</code> file by issuing the following command:
```bash
python data_insert.py
```
4. Now open an OS shell that has `node` installed. Navigate to the project directory and issue the following commands in this order:
```bash
node addEmbeddings.js
```
**Please note: if the above command does not exit after 10-15 seconds, terminate it with Ctrl+C.**
Then run:
```bash
node addTags.js
```
5. Return to the Anaconda shell opened in Step 1 (you should already be in the project directory with the kg-demo environment active) and run the following command:
```bash
python driver_code.py
```
The script will prompt you for a question, for example **How is social support related to aging?**
It will then ask for the spanning-tree depth; a value of 2 or 3 gives optimal performance.
Enjoy!

### On Subsequent Runs
The data is already prepared, so you only need to ask questions. Perform the following steps:

1. Open an Anaconda Shell and navigate to the project directory.
2. Activate the kg-demo environment by running the following command:
```bash
conda activate kg-demo
```
3. Run the following command:
```bash
python driver_code.py
```
The script will prompt you for a question, for example **How is social support related to aging?**
It will then ask for the spanning-tree depth; a value of 2 or 3 gives optimal performance.

The answer to this question should look something like the following:

**Social Support:Social factor Stress:Condition Diet:Lifestyle factor Physical Health:Condition Work Environment:Environment Job Satisfaction:Emotional state Aging:Condition Job Satisfaction:Condition Productivity:Condition Social Support:Activity Immune System:Biological system Cortisol:Hormone Cognitive Function:Function Immune System:System Diet:Activity Burnout:Condition Work Performance:Condition Heart Disease:Disease Cognitive Function:Condition Sleep Quality:Condition Inflammation:Biological process Social Support:Condition Productivity:Outcome Employee Turnover:Outcome Work Environment:Factor Diabetes:Disease Genetics:Biological factor Physical Activity:Activity Diet:Condition Obesity:Condition Heart Disease:Condition Social Support:Factor Social Relationships:Condition Sleep Quality:Health aspect Inflammation:Condition Diet:Factor Memory:Condition Blood Pressure:Condition Exercise:Activity Depression:Condition Anxiety:Condition Mental Health:Aspect Learning:Condition Sleep Quality:Aspect Stress:Concept Diet:Behavior Physical Health:Health aspect Anxiety:Emotional state Anxiety:Mental condition Depression:Mental condition Depression:Emotional state Physical Activity:Behavior Job Satisfaction:Psychological factor Mental Health:Health aspect Mental Health:Condition -----------
Social support is related to aging through its impact on stress. Social support reduces stress, and since stress accelerates aging, having social support can indirectly slow down the aging process by reducing the level of stress experienced by individuals.**
Binary files (6) not shown.
56 changes: 56 additions & 0 deletions apps/graph_rag_demo/addEmbeddings.js
@@ -0,0 +1,56 @@
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { OpenAIEmbeddings } from "@langchain/openai";
import { MongoClient } from "mongodb";
import { PDFLoader } from "langchain/document_loaders/fs/pdf";
import dotenv from "dotenv";
dotenv.config();

const client = new MongoClient(process.env.ATLAS_CONNECTION_STRING, { appName: "devrel.showcase.apps.graph_rag_demo" });
// OpenAIEmbeddings reads OPENAI_API_KEY, so map it from OPENAI_API_KEY1 in .env.
process.env.OPENAI_API_KEY = process.env.OPENAI_API_KEY1;

async function run() {
  try {
    await client.connect();
    console.log('Connected to MongoDB successfully');
    const database = client.db("langchain_db");
    const collection = database.collection("knowledge_graph");
    const dbConfig = {
      collection: collection,
      indexName: "vector_index", // The name of the Atlas Vector Search index to use.
      textKey: "chunks", // Field name for the raw text content. Defaults to "text".
      embeddingKey: "embedding", // Field name for the vector embeddings. Defaults to "embedding".
    };
    // Ensure that the collection is empty
    await collection.deleteMany({});
    const pdfArray = [
      './PDF_KG/Diet, Stress and Mental Healt.pdf',
      './PDF_KG/Effect of Stress Management Interventions on Job Stress among nurses working in critical care units.pdf',
      './PDF_KG/Factors contributing to stress among parents of children with autism.pdf',
      './PDF_KG/Level of physical activity, well-being, stress and self rated health in persons with migraine and co existing tension-type headache and neck pain.pdf',
      './PDF_KG/Stress and Blood Pr ess and Blood Pressure During Pr e During Pregnancy Racial Diff egnancy Racial Differences.pdf',
      './PDF_KG/Stress and Headache.pdf',
      './PDF_KG/THE IMPACT OF STRESSFUL LIFE EVENTS ON RELAPSE OF GENERALIZED ANXIETY DISORDER.pdf',
      './PDF_KG/where are we at with stress and headache.pdf'
    ];

    // Load, split, embed, and store each PDF. Note: forEach does not await the
    // async callbacks, so the PDFs are processed concurrently and the client is
    // never closed, which is why the README says to stop this script with Ctrl+C.
    pdfArray.forEach(async (pdfname) => {
      console.log("Starting sync...", pdfname);
      const loader = new PDFLoader(pdfname);
      const data = await loader.load();
      const textSplitter = new RecursiveCharacterTextSplitter({
        chunkSize: 1000,
        chunkOverlap: 200,
      });
      const docs = await textSplitter.splitDocuments(data);
      await MongoDBAtlasVectorSearch.fromDocuments(docs, new OpenAIEmbeddings(), dbConfig);
      console.log("Ending sync...", pdfname);
    });
  } catch (error) {
    console.log(error);
  }
}
run().catch(console.dir);


69 changes: 69 additions & 0 deletions apps/graph_rag_demo/addTags.js
@@ -0,0 +1,69 @@
import { MongoClient } from "mongodb";
import dotenv from "dotenv";
dotenv.config();

async function findAllDocuments(dbName, collectionName1, collectionName2) {
  const client = new MongoClient(process.env.ATLAS_CONNECTION_STRING, { appName: "devrel.showcase.apps.graph_rag_demo" });
  try {
    await client.connect();
    console.log('Connected to MongoDB');

    // Access the database
    const db = client.db(dbName);

    // Access the source and destination collections
    const sourceCollection = db.collection(collectionName1);
    const destinationCollection = db.collection(collectionName2);

    // Find all documents in the source collection
    const documents = await sourceCollection.find({}, { projection: { _id: 1 } }).toArray();
    console.log(`Found ${documents.length} documents in source collection`);

    // For each source _id (e.g. "Stress:Condition"), run a text search over the
    // destination collection and merge a tag with its search score into the
    // matching chunk documents.
    if (documents.length > 0) {
      for (let i = 0; i < documents.length; i++) {
        const idReplace = documents[i]._id.replace(":", ' as a ');
        console.log(idReplace);
        const agg = [
          {
            $search: {
              index: "default",
              text: {
                path: "chunks",
                query: idReplace,
              },
            },
          },
          {
            $addFields: {
              tags: {
                $cond: {
                  if: { $isArray: "$tags" },
                  then: { $concatArrays: [ "$tags", [ { tagName: documents[i]._id, score: { $meta: "searchScore" } } ] ] },
                  else: [{ tagName: documents[i]._id, score: { $meta: "searchScore" } }]
                }
              }
            }
          },
          {
            $merge: {
              into: collectionName2,
              whenMatched: "merge",
              whenNotMatched: "discard"
            }
          }
        ];
        await destinationCollection.aggregate(agg).toArray();
        console.log('Tags merged into destination collection for', documents[i]._id);
      }
    } else {
      console.log('No documents found in the source collection');
    }
  } catch (err) {
    console.error('Error connecting to the database or finding documents:', err);
  } finally {
    await client.close();
  }
}

findAllDocuments('langchain_db', 'nodes_relationships', 'knowledge_graph').catch(console.dir);
31 changes: 31 additions & 0 deletions apps/graph_rag_demo/build_graph.py
@@ -0,0 +1,31 @@
from collections import defaultdict


def addEdge(graph, u, v):
    """Add a directed edge u -> v to the adjacency-list graph."""
    graph[u].append(v)


def generate_edges(graph):
    """Return the list of (node, neighbour) edge tuples in the graph."""
    edges = []
    # for each node in the graph
    for node in graph:
        # for each neighbour of that node, record the edge
        for neighbour in graph[node]:
            edges.append((node, neighbour))
    return edges


def build_graph(level_dict):
    """Build an adjacency-list graph plus a map of relationship names per edge
    from a nested dict of the form {level: {source: {key: {target: [relationship, ...]}}}}."""
    graph = defaultdict(list)
    relationship_names = defaultdict(set)
    for level in level_dict.keys():
        for source, targets in level_dict[level].items():
            for _, relationships in targets.items():
                for target_node, edges in relationships.items():
                    addEdge(graph, source, target_node)
                    relationship_key = (source, target_node)
                    for edge in edges:
                        relationship_names[relationship_key].add(edge)
    return graph, relationship_names
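A hypothetical usage sketch for `build_graph` (not part of the repo), assuming `level_dict` has the nested shape implied by the loops above, i.e. `{level: {source_id: {<any key>: {target_id: [relationship_name, ...]}}}}`; the node names below are illustrative only:

```python
from build_graph import build_graph

# Illustrative input only; the real level_dict is produced elsewhere in the app.
level_dict = {
    0: {
        "Stress:Condition": {
            "targets": {
                "Aging:Condition": ["accelerates"],
                "Sleep Quality:Condition": ["reduces"],
            }
        }
    }
}

graph, relationship_names = build_graph(level_dict)
print(dict(graph))
# {'Stress:Condition': ['Aging:Condition', 'Sleep Quality:Condition']}
print(relationship_names[("Stress:Condition", "Aging:Condition")])
# {'accelerates'}
```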
65 changes: 65 additions & 0 deletions apps/graph_rag_demo/data_insert.py
@@ -0,0 +1,65 @@
from langchain_core.documents import Document
from langchain_community.graphs.graph_document import GraphDocument, Node, Relationship
from pymongo import MongoClient
from dotenv import load_dotenv
from pprint import pprint
import os
import json
from nodes_relationships import nodes, links


def build_lookup_map():
    """Map each source node key ("id:type") to the list of its outgoing relationships."""
    quick_lookup = {}
    for key in links.keys():
        relationship = links[key]
        source_node = relationship.source
        lookup_key = str(source_node.id) + ":" + str(source_node.type)
        lookup_content = quick_lookup.get(lookup_key, "empty")
        if lookup_content != "empty":
            quick_lookup[lookup_key].append(relationship)
        else:
            quick_lookup[lookup_key] = [relationship]
    return quick_lookup


def create_mongo_documents():
    """Build one document per node, recording its relationship targets and edge types."""
    mongo_documents = []
    quick_lookup = build_lookup_map()
    for key in nodes.keys():
        node = nodes[key]
        id = str(node.id) + ":" + str(node.type)
        type = node.type
        rel = quick_lookup.get(id, None)
        relationships = set()
        targets = {}
        if rel is not None:
            for relationship in rel:
                target_id = str(relationship.target.id) + ":" + str(relationship.target.type)
                relationships.add(target_id)
                target_type = targets.get(target_id, None)
                if target_type is not None:
                    targets[target_id].append(relationship.type)
                else:
                    targets[target_id] = [relationship.type]
            mongo_documents.append({"_id": id, "type": type, "relationships": list(relationships), "targets": targets})
        else:
            mongo_documents.append({"_id": id, "type": type, "relationships": [], "targets": {}})
    return mongo_documents


def mongo_insert():
    mongo_documents = create_mongo_documents()
    try:
        uri = os.getenv("ATLAS_CONNECTION_STRING")
        print(uri)
        client = MongoClient(uri)
        database = client["langchain_db"]
        collection = database["nodes_relationships"]
        for doc in mongo_documents:
            collection.insert_one(doc)
    except Exception as e:
        print(e)
    finally:
        client.close()


if __name__ == "__main__":
    load_dotenv()
    print("Inserting Documents")
    mongo_insert()
    print("Successfully Inserted Documents")
