From 343ded1d3afbbb8e1488c07db114272c40c29a36 Mon Sep 17 00:00:00 2001
From: nvBench2
Date: Tue, 4 Mar 2025 09:20:08 +0800
Subject: [PATCH] index

---
 index.html | 40 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 39 insertions(+), 1 deletion(-)

diff --git a/index.html b/index.html
index 7d959c2..2431112 100644
--- a/index.html
+++ b/index.html
@@ -67,6 +67,12 @@
   font-weight: bold;
   margin-top: 0.5rem;
 }
+.figure-description {
+  margin-top: 0.5rem;
+  text-align: justify;
+  font-style: italic;
+  font-size: 0.9rem;
+}
 table {
   width: 100%;
   margin-bottom: 2rem;
 }

@@ -192,7 +198,7 @@

 <!-- nvBench 2.0: A Benchmark for Natural Language to Visualization under Ambiguity -->

@@ -252,6 +258,10 @@

Step-wise Disambiguation

This structured approach enables systematic resolution of ambiguities while preserving multiple valid interpretations of the original query.

Figure 1: Example of reasoning about appropriate visualizations from an ambiguous natural language query

+<div class="figure-description">
+  As shown in Figure 1, a seemingly straightforward query like "Show the gross trend of comedy and action movies by year" contains multiple ambiguities: "gross" could refer to either the World_Gross or the Local_Gross column, "comedy and action" implicitly requires filtering by Genre, "trend" may suggest either a bar chart or a line chart, and "by year" implies temporal binning that is not explicitly defined. The figure illustrates how these ambiguities can be resolved through step-wise reasoning to produce multiple valid visualizations.
+</div>
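
To make the interpretation space concrete, the sketch below enumerates how these choices multiply into distinct chart specifications. It is a minimal illustration in Python: the dictionary encoding and option names are ours, chosen to mirror the example, not the benchmark's actual format.

```python
from itertools import product

# Ambiguity dimensions read off the example query (illustrative encoding,
# not the benchmark's schema). Each dimension lists its valid resolutions.
ambiguities = {
    "measure": ["World_Gross", "Local_Gross"],  # "gross" is underspecified
    "mark":    ["bar", "line"],                 # "trend" permits either chart
}
# Parts of the query that are unambiguous once parsed.
fixed = {"filter": "Genre IN ('Comedy', 'Action')", "x_bin": "year"}

# Every combination of resolutions yields one valid visualization.
for measure, mark in product(*ambiguities.values()):
    spec = {**fixed, "y": measure, "mark": mark}
    print(spec)  # 2 x 2 = 4 valid charts for one NL query
```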

@@ -267,6 +277,9 @@

Ambiguity-Injected NL2VIS Data Synthesizer

An overview of the ambiguity-injected NL2VIS data synthesizer

Figure 2: An overview of the ambiguity-injected NL2VIS data synthesizer.

+<div class="figure-description">
+  We developed an ambiguity-injected NL2VIS data synthesizer that systematically introduces controlled ambiguity into visualization specifications. As shown in Figure 2, our pipeline consists of: (a) Ambiguity-aware VIS Tree Synthesis, which begins with seed visualizations and injects ambiguity nodes to create ambiguity-aware visualization trees; (b) VIS Synthesis, which uses an ASP solver to resolve these trees into multiple valid visualizations; (c) NL Synthesis, which generates ambiguous natural language queries corresponding to those visualizations; and (d) Reasoning Path Synthesis, which produces step-wise reasoning paths documenting how each ambiguity is resolved.
+</div>
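
A skeleton of the four stages, written as Python stubs, may help fix the data flow; the function names and signatures are invented for illustration, and stage (b)'s ASP solving is only stubbed out.

```python
def synthesize_ambiguous_tree(seed_vis: dict, ambiguity_nodes: list) -> dict:
    """(a) Inject ambiguity nodes into a seed visualization tree."""
    return {"seed": seed_vis, "choices": ambiguity_nodes}

def resolve_tree(tree: dict) -> list:
    """(b) Enumerate all valid visualizations (stand-in for the ASP solver)."""
    ...

def synthesize_nl(valid_vis: list) -> str:
    """(c) Generate one ambiguous NL query covering all valid visualizations."""
    ...

def synthesize_reasoning_path(query: str, valid_vis: list) -> list:
    """(d) Record the step-wise reasoning that resolves each ambiguity."""
    ...

def synthesize_sample(seed_vis: dict, ambiguity_nodes: list) -> dict:
    """Chain (a)-(d) into one (NL query, visualizations, reasoning) sample."""
    tree = synthesize_ambiguous_tree(seed_vis, ambiguity_nodes)
    valid = resolve_tree(tree)
    query = synthesize_nl(valid)
    return {"nl": query, "vis": valid,
            "reasoning": synthesize_reasoning_path(query, valid)}
```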

@@ -287,6 +300,10 @@

Ambiguity Injection Process

The process ensures traceability from query to visualization through explicit reasoning paths, enabling systematic evaluation of NL2VIS systems' ability to handle ambiguity.

Figure 3: Injecting ambiguities into a seed visualization

+<div class="figure-description">
+  Figure 3 demonstrates how we inject ambiguities into a seed visualization through a systematic process: (1) starting with a seed chart (e.g., a bar chart showing gross by year); (2) converting it to a seed visualization tree with explicit nodes; (3) injecting ambiguity nodes (e.g., introducing a choice between Local_Gross and World_Gross); (4) resolving the tree into multiple valid visualization specifications; and (5) flattening the trees into concrete visualization queries.
+</div>
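
The five steps can be miniaturized as follows; the dict-based tree and the choice between Local_Gross and World_Gross mirror the figure, while the representation itself is a hypothetical stand-in for the paper's visualization trees.

```python
# Step (2): a seed visualization tree for "a bar chart showing gross by year".
seed_tree = {
    "mark": "bar",
    "x": {"field": "Year"},
    "y": {"field": "Gross", "agg": "sum"},
}

def inject_ambiguity(tree, node, options):
    """Step (3): replace one node with an explicit ambiguity (choice) node."""
    return {**tree, node: {"ambiguous": options}}

def resolve(tree):
    """Steps (4)-(5): expand every choice node, flattening the tree into
    concrete visualization specifications."""
    specs = [{}]
    for key, val in tree.items():
        opts = val["ambiguous"] if isinstance(val, dict) and "ambiguous" in val else [val]
        specs = [{**s, key: o} for s in specs for o in opts]
    return specs

ambiguous_tree = inject_ambiguity(seed_tree, "y", [
    {"field": "Local_Gross", "agg": "sum"},
    {"field": "World_Gross", "agg": "sum"},
])
for spec in resolve(ambiguous_tree):
    print(spec)  # two flattened, concrete visualization queries
```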

@@ -300,6 +317,9 @@

Benchmark Comparison

Comparison of NL2VIS benchmarks

Table 1: Comparison of NL2VIS benchmarks.

+<div class="figure-description">
+  nvBench 2.0 distinguishes itself from existing benchmarks by supporting one-to-many mappings from NL queries to visualizations, explicitly modeling query ambiguity, providing reasoning paths that explain how ambiguity is resolved, and using LLM-based query generation for natural, diverse queries.
+</div>

Benchmark Statistics

@@ -311,6 +331,9 @@

Benchmark Statistics

Distribution of natural language styles across chart types and word count statistics

Table 3: Distribution of natural language styles across chart types and word count statistics.

+<div class="figure-description">
+  The dataset includes diverse query styles (commands, questions, and captions) across various chart types. The average query length is approximately 14 words, and the styles are well balanced across all visualization types.
+</div>

@@ -323,11 +346,17 @@

Benchmark Statistics

Ambiguity count at each reasoning step

Table 4: Ambiguity count at each reasoning step.

+<div class="figure-description">
+  This table shows the distribution of ambiguities across the reasoning steps in the nvBench 2.0 dataset, highlighting which steps in the visualization process are most prone to ambiguity.
+</div>

Statistics of ambiguity patterns

Table 5: Statistics of ambiguity patterns.

+<div class="figure-description">
+  Our dataset contains diverse ambiguity patterns, with Channel Encoding (CE) the most common ambiguity type (present in 88.06% of samples), followed by Data Transformation (DT) ambiguities (46.00%). The percentages sum to more than 100% because many samples contain multiple ambiguity types, highlighting the complexity of real-world visualization requests.
+</div>

@@ -373,16 +402,25 @@

Overall Performance

Overall performance comparison between different models on nvBench 2.0

Table 6: Overall performance comparison between different models on nvBench 2.0.

+<div class="figure-description">
+  Our proposed Step-NL2VIS achieves state-of-the-art performance across most metrics, significantly outperforming both prompting-based and fine-tuning-based baselines. Step-NL2VIS obtains the highest F1@3 (81.50%) and F1@5 (80.88%), demonstrating its superior ability to handle ambiguity in NL2VIS tasks.
+</div>
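
As a reading aid for the metric, here is one plausible definition of F1@k in this one-to-many setting: score the top-k predicted charts against the set of valid charts. The exact-match criterion is our assumption; the benchmark's matcher may be more permissive.

```python
def f1_at_k(predicted: list, valid: set, k: int) -> float:
    """F1 between the top-k predictions and the set of valid visualizations.
    Assumes a prediction is a hit iff it exactly equals some valid spec."""
    top_k = predicted[:k]
    hits = sum(1 for p in top_k if p in valid)
    precision = hits / len(top_k) if top_k else 0.0
    recall = hits / len(valid) if valid else 0.0
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

# Two of three predictions match the two valid charts:
print(f1_at_k(["vis_a", "vis_b", "vis_c"], {"vis_a", "vis_c"}, k=3))  # 0.8
```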

F1 across different models and ambiguity levels

Figure 7: F1 across different models and ambiguity levels.

+<div class="figure-description">
+  The heatmap shows that Step-NL2VIS consistently outperforms other models across most chart types and ambiguity levels. Models incorporating step-wise reasoning generally show better performance than their direct-prompting counterparts, confirming the effectiveness of decomposing complex visualization reasoning into explicit steps.
+</div>

Recall across different models and ambiguity levels

Figure 8: Recall across different models and ambiguity levels.

+<div class="figure-description">
+  Step-NL2VIS demonstrates superior recall across all ambiguity levels examined. At ambiguity level 3 it achieves 83.3% recall, a significant improvement over competing approaches, and its advantage widens as the ambiguity level increases.
+</div>