sync: debug with debug-sync-b784 #1521

Merged: 89 commits, May 31, 2024

Commits
a0f6f59  Merge pull request #1462 from VirtualFlyBrain/alpha (Robbie1977, Feb 19, 2024)
83a4ec0  swapping to filter by filter query rather than query (Robbie1977, Feb 20, 2024)
5ddc1eb  allowing up to 999999 result rows (Robbie1977, Feb 20, 2024)
1aad5c8  Merge pull request #1463 from VirtualFlyBrain/development (Robbie1977, Feb 20, 2024)
d33022e  testing optimised cache processing step (Robbie1977, Feb 21, 2024)
b14584e  Merge pull request #1464 from VirtualFlyBrain/alpha (Robbie1977, Mar 15, 2024)
3c46e4f  Merge pull request #1465 from VirtualFlyBrain/development (Robbie1977, Mar 15, 2024)
b3c8957  correcting icons (Robbie1977, Mar 15, 2024)
c683f0a  hiding NBLAST upload options (Robbie1977, Mar 15, 2024)
2585d52  fix for id passing (Robbie1977, Mar 18, 2024)
11bcd8e  Create process_xmi.yml (Robbie1977, Mar 19, 2024)
715ce3f  Create process_xmi.py (Robbie1977, Mar 19, 2024)
289eb3f  moving script location (Robbie1977, Mar 19, 2024)
0e1168f  adding manual run (Robbie1977, Mar 19, 2024)
29e7009  removing Aberowl datasource ref (Robbie1977, Mar 19, 2024)
969b685  run on processing updates (Robbie1977, Mar 19, 2024)
c812902  Ensure the 'xsi' prefix is included in your namespaces dictionary (Robbie1977, Mar 19, 2024)
6cb655a  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
010c87b  swapping to ID: Description titles (Robbie1977, Mar 19, 2024)
31a866e  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
014e304  Update process_xmi.py (Robbie1977, Mar 19, 2024)
f1c1b2e  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
b8cfee9  syntax fix (Robbie1977, Mar 19, 2024)
7c4f989  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
e0c8f0e  Create extractqueries.py (Robbie1977, Mar 19, 2024)
8e3f039  adding extract queries (Robbie1977, Mar 19, 2024)
caa04c3  Update process_xmi.yml (Robbie1977, Mar 19, 2024)
4391af6  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
e9d1873  Update extractqueries.py (Robbie1977, Mar 19, 2024)
1c92015  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
7db33e5  Update extractqueries.py (Robbie1977, Mar 19, 2024)
589241f  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
946ff83  forcing text formatting (Robbie1977, Mar 19, 2024)
ec4aee0  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
3b0077c  pulling all (Robbie1977, Mar 19, 2024)
0e48c22  expanding to compound and simple queries (Robbie1977, Mar 19, 2024)
37407f5  loc fixes (Robbie1977, Mar 19, 2024)
3cf25af  import fix (Robbie1977, Mar 19, 2024)
2759996  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
5fc9ec6  Update extractqueries.py (Robbie1977, Mar 19, 2024)
d3963b7  ignoring fetch variable (Robbie1977, Mar 19, 2024)
50a1994  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
a0f4b93  format fix (Robbie1977, Mar 19, 2024)
85b239f  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
7456457  fix for indenting (Robbie1977, Mar 19, 2024)
eb8e1b2  reverting as not needed (Robbie1977, Mar 19, 2024)
57069cc  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
d24e051  fix for hierarchy (Robbie1977, Mar 19, 2024)
78edd87  not needed (Robbie1977, Mar 19, 2024)
015d4f9  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
0af4a9f  adding children (Robbie1977, Mar 19, 2024)
86c8907  processing child queries (Robbie1977, Mar 19, 2024)
3b084e5  cleaning (Robbie1977, Mar 19, 2024)
3d631c0  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
59475bb  fixing children (Robbie1977, Mar 19, 2024)
c1a5c81  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
bd0092c  simplify (Robbie1977, Mar 19, 2024)
0fe5934  Update README with XMI structure and query breakdowns (actions-user, Mar 19, 2024)
0e909d3  Create querySpeedTest.py (Robbie1977, Mar 20, 2024)
5003bab  adding notebook for speed tests (Robbie1977, Mar 20, 2024)
bbac1a5  adding dependancies (Robbie1977, Mar 20, 2024)
5b03af5  html not needed (Robbie1977, Mar 20, 2024)
c931fb8  html not needed (Robbie1977, Mar 20, 2024)
9806d08  typo fix (Robbie1977, Mar 20, 2024)
0be4f87  import fix (Robbie1977, Mar 20, 2024)
24f041f  adding queries_execution_notebook.ipynb (Robbie1977, Mar 20, 2024)
e46cf03  Update README with XMI structure and query breakdowns (actions-user, Mar 20, 2024)
d34a923  Merge pull request #1483 from VirtualFlyBrain/development (Robbie1977, Mar 20, 2024)
1d6dac3  Update README with XMI structure and query breakdowns (actions-user, Mar 20, 2024)
d992813  correcting index as Aberowl DS removed (Robbie1977, Mar 20, 2024)
5e42a34  Merge branch 'development' of https://github.com/VirtualFlyBrain/gepp… (Robbie1977, Mar 20, 2024)
e91816c  Merge pull request #1484 from VirtualFlyBrain/development (Robbie1977, Mar 20, 2024)
29265ec  hot fix for exp queries (Robbie1977, Mar 20, 2024)
b3e77d4  Update README with XMI structure and query breakdowns (actions-user, Mar 20, 2024)
7bd954b  Revert "Update README with XMI structure and query breakdowns" (Robbie1977, Mar 20, 2024)
8ac0aef  Merge pull request #1485 from VirtualFlyBrain/development (Robbie1977, Mar 20, 2024)
218f0db  Update README with XMI structure and query breakdowns (actions-user, Mar 20, 2024)
520978f  Fix for term info data sources (Robbie1977, Mar 21, 2024)
7cea111  Merge branch 'alpha' of https://github.com/VirtualFlyBrain/geppetto-v… (Robbie1977, Mar 21, 2024)
b7b5a1f  Merge pull request #1486 from VirtualFlyBrain/alpha (Robbie1977, Mar 21, 2024)
0b6c903  Merge pull request #1487 from VirtualFlyBrain/development (Robbie1977, Mar 22, 2024)
39f9839  Update README with XMI structure and query breakdowns (actions-user, Mar 22, 2024)
aea80b4  Update Dockerfile (Robbie1977, Apr 3, 2024)
3a34c27  If IOExecption occurs in websocket then the session is lost so page n… (Robbie1977, Apr 3, 2024)
5428f82  Updating default GA ref (Robbie1977, Apr 3, 2024)
b3536e2  Update client to GA4 compatible version (Robbie1977, Apr 3, 2024)
03d1f63  reducing the retries (Robbie1977, Apr 4, 2024)
91d2f6c  Merge branch 'master' of https://github.com/VirtualFlyBrain/geppetto-vfb (Robbie1977, Apr 4, 2024)
64a1d3a  Revert "If IOExecption occurs in websocket then the session is lost s… (Robbie1977, Apr 4, 2024)
84 changes: 84 additions & 0 deletions .github/scripts/extractqueries.py
@@ -0,0 +1,84 @@
import sys
import html
from lxml import etree

def parse_xmi(file_path):
    with open(file_path, 'rb') as file:
        tree = etree.parse(file)
    root = tree.getroot()
    return root

def process_queries(element, indent, queries_info, namespaces):
    for query in element.findall('.//queries', namespaces=namespaces):
        query_id = query.get('id')
        query_name = query.get('name')
        query_description = query.get('description', 'No description provided')
        query_type = query.get('{http://www.w3.org/2001/XMLSchema-instance}type')

        # Initialize query entry
        query_entry = {
            'indent': indent,
            'id': query_id,
            'name': query_name,
            'description': query_description,
            'type': query_type,
            'query': '',
            'childQueries': []
        }

        if query_type == "gep_2:CompoundQuery":
            # Process each queryChain within the compound query
            for query_chain in query.findall('.//queryChain', namespaces=namespaces):
                process_query_chain(query_chain, indent + " ", query_entry['childQueries'], namespaces)
        else:
            # Simple or process queries directly within <queries> tag
            query_content = query.get('query', query.get('queryProcessorId', 'No query provided'))
            query_entry['query'] = html.unescape(query_content)

        queries_info.append(query_entry)

def process_query_chain(query_chain, indent, child_queries_info, namespaces):
    chain_id = query_chain.get('id')
    chain_name = query_chain.get('name')
    chain_description = query_chain.get('description', 'No description provided')
    chain_type = query_chain.get('{http://www.w3.org/2001/XMLSchema-instance}type')
    chain_query = query_chain.get('query', query_chain.get('queryProcessorId', 'No query provided'))

    child_queries_info.append({
        'indent': indent,
        'id': chain_id,
        'name': chain_name,
        'description': chain_description,
        'type': chain_type,
        'query': html.unescape(chain_query)
    })

def generate_markdown_for_all_queries(queries_info):
    markdown_content = "# Queries and Chains Across Data Sources\n\n"
    for info in queries_info:
        markdown_content += generate_markdown_for_query(info)
    return markdown_content

def generate_markdown_for_query(info):
    markdown = f"{info['indent']}## Query Name: {info['name']}\n"
    markdown += f"{info['indent']}ID: {info['id']}\n"
    markdown += f"{info['indent']}Description: {info['description']}\n"
    markdown += f"{info['indent']}Type: {info['type']}\n"
    markdown += f"{info['indent']}Query: ```\n{info['indent']}{info['query']}\n```\n\n"
    for child_query in info.get('childQueries', []):
        markdown += generate_markdown_for_query(child_query)
    return markdown

def main(xmi_file_path, output_markdown_path):
    namespaces = {'xsi': 'http://www.w3.org/2001/XMLSchema-instance'}
    root = parse_xmi(xmi_file_path)
    queries_info = []
    process_queries(root, "", queries_info, namespaces)
    markdown_content = generate_markdown_for_all_queries(queries_info)
    with open(output_markdown_path, 'w', encoding='utf-8') as file:
        file.write(markdown_content)

if __name__ == "__main__":
    xmi_file_path = sys.argv[1]  # ecore xmi file
    output_markdown_path = sys.argv[2]  # Where the markdown file will be saved
    main(xmi_file_path, output_markdown_path)
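
Reviewer note: to sanity-check the extractor locally, a minimal sketch of driving it from Python rather than via the workflow. This assumes you run it from `.github/scripts` so the module imports, and that `model/vfb.xmi` exists at the path the workflow below uses; neither path is part of this diff.

```python
# Minimal local run of extractqueries.py's pipeline; paths are assumptions
# taken from the workflow below, not part of this PR.
from extractqueries import parse_xmi, process_queries, generate_markdown_for_all_queries

namespaces = {'xsi': 'http://www.w3.org/2001/XMLSchema-instance'}
root = parse_xmi('../../model/vfb.xmi')               # parse the Geppetto model XMI
queries_info = []
process_queries(root, "", queries_info, namespaces)   # collects simple, processor and compound queries
markdown = generate_markdown_for_all_queries(queries_info)
print(markdown[:500])                                 # preview of what lands in model/query.md
```
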
95 changes: 95 additions & 0 deletions .github/scripts/process_xmi.py
@@ -0,0 +1,95 @@
import sys
from lxml import etree

def parse_xmi(file_path):
    with open(file_path, 'rb') as file:
        tree = etree.parse(file)
    root = tree.getroot()
    namespaces = {
        'xmi': 'http://www.omg.org/XMI',
        'ecore': 'http://www.eclipse.org/emf/2002/Ecore',
        'xsi': 'http://www.w3.org/2001/XMLSchema-instance'  # Adding the 'xsi' namespace
    }
    return root, namespaces

def list_queries_under_data_sources(root, namespaces):
    data_sources = root.findall('.//dataSources', namespaces=namespaces)
    all_data_sources_with_queries = []

    for i, ds in enumerate(data_sources):
        data_source_info = {'index': i, 'name': ds.get('name'), 'queries': []}
        queries = ds.findall('.//queries', namespaces=namespaces)
        for qi, query in enumerate(queries):
            query_info = {'index': qi, 'name': query.get('name')}
            data_source_info['queries'].append(query_info)
        all_data_sources_with_queries.append(data_source_info)

    return all_data_sources_with_queries

def corrected_debug_list_high_level_queries_with_indices(root, namespaces):
    # Find all high-level queries specified by the 'gep_2:CompoundRefQuery' type.
    high_level_queries = root.findall(".//*[@xsi:type='gep_2:CompoundRefQuery']", namespaces=namespaces)
    corrected_queries_info = []

    for query in high_level_queries:
        # Directly extracting the 'id', 'name', and 'description' attributes of each high-level query.
        query_id = query.get('id')  # Accessing 'id' directly without namespace
        query_name = query.get('name')
        query_description = query.get('description', 'No description provided')  # Providing a default if missing

        query_info = {
            'id': query_id,
            'queryName': query_name,
            'description': query_description,
            'queryChainRefs': []
        }

        # Extracting and parsing the 'queryChain' attribute to identify dataSource and query indices.
        query_chain_refs = query.get('queryChain', '').split()
        for ref in query_chain_refs:
            ref = ref.replace('//', '').replace('@', '')
            parts = ref.split('/')
            if len(parts) >= 2:
                dataSource, dataSourceIndex = parts[0].split('.')
                query, queryIndex = parts[1].split('.')
                query_info['queryChainRefs'].append({
                    'dataSourceIndex': dataSourceIndex,
                    'queryIndex': queryIndex
                })

        corrected_queries_info.append(query_info)

    return corrected_queries_info

def create_markdown_with_named_query_chains(high_level_queries, data_sources_with_queries):
    markdown_content = "# High-Level Queries with Named Query Chain Steps\n\n"
    for query in high_level_queries:
        query_id = query.get('id', 'No ID')  # Fallback to 'No ID' if not present
        query_description = query.get('description', 'No description provided')  # Fallback to default description
        markdown_content += f"## {query_id}: {query_description}\n"  # Using ID and description

        for chain_ref in query['queryChainRefs']:
            ds_index = int(chain_ref['dataSourceIndex'])
            q_index = int(chain_ref['queryIndex'])
            # Extract the name of the step for readability in the markdown.
            step_name = [q['name'] for q in data_sources_with_queries[ds_index]['queries'] if int(q['index']) == q_index][0]
            markdown_content += f"- Step: {step_name} (DataSource Index: {ds_index}, Query Index: {q_index})\n"
        markdown_content += "\n"
    return markdown_content

def main(xmi_path, readme_path):
    root, namespaces = parse_xmi(xmi_path)
    data_sources_with_queries = list_queries_under_data_sources(root, namespaces)
    high_level_queries_info = corrected_debug_list_high_level_queries_with_indices(root, namespaces)
    markdown_content = create_markdown_with_named_query_chains(high_level_queries_info, data_sources_with_queries)

    with open(readme_path, 'w') as markdown_file:
        markdown_file.write(markdown_content)

if __name__ == '__main__':
    if len(sys.argv) != 3:
        print("Usage: python process_xmi.py <path to vfb.xmi> <output path for README.md>")
        sys.exit(1)
    xmi_path = sys.argv[1]
    readme_path = sys.argv[2]
    main(xmi_path, readme_path)
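
Reviewer note: the least obvious part of process_xmi.py is how a queryChain attribute is turned into data-source and query indices. A worked example follows; the reference string is hypothetical, written in the EMF-style path syntax the script assumes, and is not taken from vfb.xmi.

```python
# How corrected_debug_list_high_level_queries_with_indices resolves one reference.
ref = "//@dataSources.3/@queries.5"                  # hypothetical queryChain entry
ref = ref.replace('//', '').replace('@', '')         # -> "dataSources.3/queries.5"
parts = ref.split('/')                               # -> ["dataSources.3", "queries.5"]
dataSource, dataSourceIndex = parts[0].split('.')    # -> ("dataSources", "3")
query, queryIndex = parts[1].split('.')              # -> ("queries", "5")
print(dataSourceIndex, queryIndex)                   # prints: 3 5
```

create_markdown_with_named_query_chains then casts both indices to int and looks them up in the output of list_queries_under_data_sources to print each step's name in the generated README.
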
110 changes: 110 additions & 0 deletions .github/scripts/querySpeedTest.py
@@ -0,0 +1,110 @@
import json
import sys
import time
from lxml import etree
import html
import nbformat as nbf
from nbformat.v4 import new_notebook, new_markdown_cell, new_code_cell

def parse_xmi(file_path):
    with open(file_path, 'rb') as file:
        tree = etree.parse(file)
    root = tree.getroot()
    return root, {'xsi': 'http://www.w3.org/2001/XMLSchema-instance'}

def extract_queries_and_data_source(root, namespaces):
    queries_info = []
    for data_source in root.findall('.//dataSources', namespaces=namespaces):
        ds_url = data_source.get('url', '')
        ds_type = data_source.get('dataSourceService', '')
        query_elements = data_source.findall('.//queryChain', namespaces=namespaces)
        query_elements.extend(data_source.findall('.//queries', namespaces=namespaces))
        for query in query_elements:
            query_name = query.get('name')
            query_desc = query.get('description')
            query_content = query.get('query')
            query_content_decoded = html.unescape(query_content) if query_content else None
            queries_info.append({
                'name': query_name,
                'description': query_desc,
                'query': query_content_decoded,
                'data_source_url': ds_url,
                'data_source_type': ds_type
            })
    return queries_info

def create_notebook(queries_info, notebook_file_path):
    nb = new_notebook()
    nb.cells.append(new_markdown_cell("# Query Execution Notebook"))

    # Add code cell for installing dependencies
    dependencies = ["requests", "lxml", "nbformat"]
    install_dependencies_code = f"%pip install {' '.join(dependencies)}"
    nb.cells.append(new_code_cell(install_dependencies_code))

    for query in queries_info:
        if query['query'] is None:
            continue

        md_content = f"## {query['name']}\nDescription: {query['description']}"
        nb.cells.append(new_markdown_cell(md_content))

        if query['data_source_type'] == 'neo4jDataSource':
            exec_code = generate_neo4j_code(query)
        elif query['data_source_type'].endswith('DataSource'):
            exec_code = generate_get_request_code(query)
        else:
            exec_code = "# Unknown data source type"

        nb.cells.append(new_code_cell(exec_code))

    with open(notebook_file_path, 'w', encoding='utf-8') as f:
        nbf.write(nb, f)

def generate_neo4j_code(query):
    return f"""
# Insert test IDs here
id = 'YOUR_TEST_ID'
ids = ['ID1', 'ID2']

# Query
query = {"{" + query['query'].replace("$ID", "id").replace("$ARRAY_ID_RESULTS", "ids") + "}"}
query_template = {query}

# Execute the query (example for Neo4j)
import requests
import time

start_time = time.time()
response = requests.post("{query['data_source_url']}", json={{'statements': [query]}})
end_time = time.time()
print('Status Code:', response.status_code)
print('Response:', response.json())
print('Time taken:', end_time - start_time)
"""

def generate_get_request_code(query):
    encoded_query = html.escape(query['query']).replace("{", "{{").replace("}", "}}").replace("$ID", "{id}").replace("$ARRAY_ID_RESULTS", "{ids}")
    return f"""
# Query execution for Solr or Owlery
import requests
import time

id = 'YOUR_SINGLE_ID_HERE'  # Replace with an actual ID
ids = ['ID1', 'ID2']  # Replace with actual IDs
query_url = "{query['data_source_url']}?" + "{encoded_query}".format(id=id, ids=','.join(ids))

start_time = time.time()
response = requests.get(query_url)
end_time = time.time()
print('Status Code:', response.status_code)
print('Response:', response.text)
print('Time taken:', end_time - start_time)
"""

if __name__ == "__main__":
    xmi_path = sys.argv[1]  # ecore xmi file
    notebook_path = sys.argv[2]  # Where the markdown file will be saved
    root, namespaces = parse_xmi(xmi_path)
    queries_info = extract_queries_and_data_source(root, namespaces)
    create_notebook(queries_info, notebook_path)
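
Reviewer note: a quick smoke test of the notebook generator using a fabricated query entry, purely to exercise the GET-request branch. The URL, query string and field values below are placeholders, not real VFB data sources, and the import assumes the script is run from `.github/scripts`.

```python
# Assumes querySpeedTest.py is importable (e.g. run from .github/scripts).
from querySpeedTest import create_notebook

fake_query = {
    'name': 'Example Solr query',
    'description': 'Placeholder entry for testing',
    'query': 'q=id:$ID&rows=10',
    'data_source_url': 'https://example.org/solr/select',
    'data_source_type': 'solrDataSource'  # ends with 'DataSource', so the GET branch is used
}
create_notebook([fake_query], 'example_notebook.ipynb')
# example_notebook.ipynb now holds a markdown cell describing the query and a code
# cell that substitutes test IDs for $ID and times a requests.get() call.
```
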
50 changes: 50 additions & 0 deletions .github/workflows/process_xmi.yml
@@ -0,0 +1,50 @@
name: Process XMI and Update README in /model

on:
  push:
    paths:
      - 'model/*.xmi'
      - '.github/workflows/process_xmi.yml'
      - '.github/scripts/process_xmi.py'
      - '.github/scripts/extractqueries.py'
      - '.github/scripts/querySpeedTest.py'
  workflow_dispatch:

jobs:
  update-readme:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install lxml requests nbformat

      - name: Process XMI File and Update README
        run: |
          python ./.github/scripts/process_xmi.py ./model/vfb.xmi ./model/README.md
      - name: Process XMI File and Update query
        run: |
          python ./.github/scripts/extractqueries.py ./model/vfb.xmi ./model/query.md

      - name: Process XMI File and Update query speed check notebook
        run: |
          python ./.github/scripts/querySpeedTest.py ./model/vfb.xmi ./model/queries_execution_notebook.ipynb

      - name: Commit README.md
        run: |
          git config --local user.email "[email protected]"
          git config --local user.name "GitHub Action"
          git add model/*.md
          git add model/queries_execution_notebook.ipynb
          git commit -m "Update README with XMI structure and query breakdowns" || echo "No changes to commit"
          git push
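
Reviewer note: the three processing steps can also be reproduced outside Actions. A rough local equivalent, assuming a checkout of this repository as the working directory and a Python environment with lxml, requests and nbformat installed (the same dependencies the workflow installs):

```python
# Local stand-in for the three "Process XMI" workflow steps above.
import subprocess

jobs = [
    ('.github/scripts/process_xmi.py', 'model/README.md'),
    ('.github/scripts/extractqueries.py', 'model/query.md'),
    ('.github/scripts/querySpeedTest.py', 'model/queries_execution_notebook.ipynb'),
]
for script, output in jobs:
    # Each script takes the model XMI as its first argument and an output path as its second.
    subprocess.run(['python', script, 'model/vfb.xmi', output], check=True)
```
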
6 changes: 3 additions & 3 deletions Dockerfile
@@ -16,8 +16,8 @@ ARG geppettoSimulationRelease=VFBv2.1.0.2
 ARG geppettoDatasourceRelease=passingSOLR
 ARG geppettoModelSwcRelease=v1.0.1
 ARG geppettoFrontendRelease=VFBv2.1.0.3
-ARG geppettoClientRelease=VFBv2.2.3
-ARG ukAcVfbGeppettoRelease=fullcache
+ARG geppettoClientRelease=VFBv2.2.4
+ARG ukAcVfbGeppettoRelease=optimisedCache

 ARG mvnOpt="-Dhttps.protocols=TLSv1.2 -DskipTests --quiet -Pmaster"

@@ -26,7 +26,7 @@ ARG VFB_TREE_PDB_SERVER_ARG=https://pdb.v4.virtualflybrain.org
 ARG VFB_OWL_SERVER_ARG=http://owl.virtualflybrain.org/kbs/vfb/
 ARG VFB_R_SERVER_ARG=http://r.virtualflybrain.org/ocpu/library/vfbr/R/vfb_nblast
 ARG SOLR_SERVER_ARG=https://solr.virtualflybrain.org/solr/ontology/select
-ARG googleAnalyticsSiteCode_ARG=G-8JYPDQDX3B
+ARG googleAnalyticsSiteCode_ARG=G-K7DDZVVXM7
 ENV MAXSIZE=2G
 ARG finalBuild=false
 ENV USESSL=${finalBuild}
6 changes: 3 additions & 3 deletions components/VFBMain.js
@@ -195,7 +195,7 @@ class VFBMain extends React.Component {
     GEPPETTO.SceneController.deselectAll(); // signal something is happening!
     var variables = GEPPETTO.ModelFactory.getTopLevelVariablesById(variableId);
     if (!variables.length > 0) {
-      Model.getDatasources()[4].fetchVariable(variableId, function () {
+      Model.getDatasources()[3].fetchVariable(variableId, function () {
         if (callback != undefined) {
           callback(variableId, label);
         }
@@ -1429,7 +1429,7 @@
     }

     // google analytics vfb specific tracker
-    ga('create', 'UA-18509775-2', 'auto', 'vfb');
+    ga('create', 'G-K7DDZVVXM7', 'auto', 'vfb');
     window.console.stdlog = console.log.bind(console);
     window.console.stderr = console.error.bind(console);
     window.console.logs = [];
@@ -1488,7 +1488,7 @@
       location.replace(`https:${location.href.substring(location.protocol.length)}`);
     }
     if (GEPPETTO.MessageSocket.socketStatus == GEPPETTO.Resources.SocketStatus.CLOSE) {
-      if (GEPPETTO.MessageSocket.attempts < 10) {
+      if (GEPPETTO.MessageSocket.attempts < 2) {
         window.ga('vfb.send', 'event', 'reconnect-attempt:' + GEPPETTO.MessageSocket.attempts, 'websocket-disconnect', (window.location.pathname + window.location.search));
         GEPPETTO.MessageSocket.reconnect();
       } else {