[Feature Branch][LLM Testing] LLM Testing Suite #1227

Merged

26 commits
- fe3769b Update README.md (dbogunowicz, May 26, 2023)
- fd07e3a Update src/deepsparse/yolov8/README.md (dbogunowicz, May 26, 2023)
- 5a59e60 Merge branch 'main' into dbogunowicz-patch-1 (dbogunowicz, May 30, 2023)
- a1a2dbc Update text_generation.py (dbogunowicz, Aug 24, 2023)
- 9509a11 Merge branch 'dbogunowicz-patch-1' of https://github.com/neuralmagic/… (dbogunowicz, Aug 24, 2023)
- 499f970 Merge branch 'main' into dbogunowicz-patch-1 (dbogunowicz, Aug 24, 2023)
- 635d3fd Merge branch 'dbogunowicz-patch-1' of https://github.com/neuralmagic/… (dbogunowicz, Aug 24, 2023)
- 7f2ac29 quality (dbogunowicz, Aug 24, 2023)
- 353de69 Merge branch 'main' into dbogunowicz-patch-1 (dbogunowicz, Aug 25, 2023)
- d429f6c Merge branch 'main' into dbogunowicz-patch-1 (dbogunowicz, Aug 28, 2023)
- 7596b18 Merge branch 'main' into dbogunowicz-patch-1 (dbogunowicz, Aug 29, 2023)
- 64296f1 readability (dbogunowicz, Aug 29, 2023)
- 68c1b31 Merge branch 'main' into dbogunowicz-patch-1 (dbogunowicz, Aug 31, 2023)
- 0bdfece all tests passing (dbogunowicz, Aug 31, 2023)
- 4293592 added some full kv cache tests (dbogunowicz, Aug 31, 2023)
- 9ca6280 Merge branch 'main' into dbogunowicz-patch-1 (dbogunowicz, Aug 31, 2023)
- 5ff2b7b Merge branch 'main' into dbogunowicz-patch-1 (dbogunowicz, Sep 1, 2023)
- 65a176f Merge branch 'main' of https://github.com/neuralmagic/deepsparse into… (dbogunowicz, Sep 1, 2023)
- fb4b6b0 Merge remote-tracking branch 'origin/dbogunowicz-patch-1' into main (dbogunowicz, Sep 1, 2023)
- 347a5e6 initial commit (dbogunowicz, Sep 1, 2023)
- 8351836 Merge branch 'main' into dbogunowicz-patch-1 (dbogunowicz, Sep 5, 2023)
- f333517 Merge branch 'dbogunowicz-patch-1' into feature/damian/fix_continuous (dbogunowicz, Sep 5, 2023)
- 7449ad3 Merge remote-tracking branch 'origin/feature/damian/fix_continuous' i… (dbogunowicz, Sep 5, 2023)
- afe072b ready for review (dbogunowicz, Sep 6, 2023)
- 486d174 Merge remote-tracking branch 'origin/feature/damian/testing_sources_t… (dbogunowicz, Sep 6, 2023)
- d577347 Delete tests/deepsparse/transformers/pipelines/proposal_text_generati… (dbogunowicz, Sep 6, 2023)
17 changes: 15 additions & 2 deletions src/deepsparse/transformers/pipelines/text_generation.py
@@ -291,6 +291,18 @@ def initialize_engines(
             self.cache_support_enabled and self.enable_multitoken_prefill
         ) or not self.cache_support_enabled:
 
+            # input_ids_length for the multitoken engine is either:
+            # - the prompt_processing_sequence_length if the cache support is enabled
+            #   (the prompt is processed sequentially at predefined processing length)
+            # - the full sequence_length if the cache support is disabled
+            #   (the prompt is processed in a single pass, prompts length is fixed at
+            #   sequence_length)
+            input_ids_length = (
+                self.prompt_processing_sequence_length
+                if self.cache_support_enabled
+                else self.sequence_length
+            )
+
             multitoken_engine = NLDecoderEngine(
                 onnx_file_path=self.onnx_file_path,
                 engine_type=self.engine_type,
@@ -299,7 +311,7 @@ def initialize_engines(
                 sampling_temperature=self.sampling_temperature,
                 deterministic=self.deterministic,
                 sequence_length=self.sequence_length,
-                input_ids_length=self.prompt_processing_sequence_length,
+                input_ids_length=input_ids_length,
                 tokenizer=self.tokenizer,
                 use_deepsparse_cache=self.use_deepsparse_cache,
             )
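For context on the hunks above: the new `input_ids_length` local decouples the multitoken engine's input width from `prompt_processing_sequence_length` when KV-cache support is disabled. A minimal, self-contained sketch of that selection logic, reusing the attribute names from the diff (the standalone function itself is hypothetical, not deepsparse API):

```python
def select_input_ids_length(
    cache_support_enabled: bool,
    prompt_processing_sequence_length: int,
    sequence_length: int,
) -> int:
    """Pick the input width for the multitoken (prefill) engine.

    With KV-cache support, the prompt is consumed sequentially in chunks of
    prompt_processing_sequence_length; without it, the whole prompt must fit
    into a single pass of sequence_length tokens.
    """
    if cache_support_enabled:
        return prompt_processing_sequence_length
    return sequence_length


# Example: cache enabled -> chunked prefill width; cache disabled -> full window.
assert select_input_ids_length(True, 16, 1024) == 16
assert select_input_ids_length(False, 16, 1024) == 1024
```

The rationale follows from the diff's own comment: without a KV cache there is no state carried between prompt chunks, so the only workable single-pass width is the full `sequence_length`.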
@@ -552,10 +564,11 @@ def prompt_inference(
                 num_tokens_processed += self.prompt_processing_sequence_length
                 prompt_logits.append(new_logits)
 
-        self.engine.reset_kv_cache()
         if num_tokens_processed:
             # transfer the cache state from the multi-token engine to the main engine
             self.engine.transfer_cache_state(cache=self.multitoken_engine.kv_cache)
+        else:
+            self.engine.reset_kv_cache()
 
         # prompt size is small, run autoregressive inference to populate kv cache
         run_tokens = [] if num_tokens_processed == 0 else tokens[:num_tokens_processed]
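The hunk above reorders the cache handoff in `prompt_inference`: previously the main engine's KV cache was reset unconditionally before a possible transfer; now it is reset only when the multitoken engine processed no tokens, and otherwise inherits the multitoken engine's populated cache. A minimal sketch of that control flow, using a hypothetical `Engine` stand-in rather than deepsparse's `NLDecoderEngine`:

```python
class Engine:
    """Hypothetical stand-in for an engine that owns a KV cache."""

    def __init__(self) -> None:
        self.kv_cache: list = []

    def transfer_cache_state(self, cache: list) -> None:
        # Adopt the prefill engine's cache instead of rebuilding it token by token.
        self.kv_cache = cache

    def reset_kv_cache(self) -> None:
        self.kv_cache = []


def hand_off(engine: Engine, multitoken_engine: Engine, num_tokens_processed: int) -> None:
    if num_tokens_processed:
        # The multitoken engine already prefilled part of the prompt: keep its cache.
        engine.transfer_cache_state(cache=multitoken_engine.kv_cache)
    else:
        # Nothing was prefilled: start the main engine from a clean cache.
        engine.reset_kv_cache()


# Usage: after a 16-token prefill, the main engine inherits the prefill cache.
prefill, main = Engine(), Engine()
prefill.kv_cache = ["layer0_kv", "layer1_kv"]  # pretend prefill populated it
hand_off(main, prefill, num_tokens_processed=16)
assert main.kv_cache == ["layer0_kv", "layer1_kv"]
```

The two outcomes, inheriting the prefill cache or starting clean, are mutually exclusive, and the if/else makes that explicit where the old unconditional reset obscured it.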

tests/deepsparse/transformers/pipelines/proposal_text_generati… (removed in commit d577347)
This file was deleted.
