diff --git a/assets.pptx b/assets.pptx
deleted file mode 100644
index c418bca8..00000000
Binary files a/assets.pptx and /dev/null differ
diff --git a/azure.yaml b/azure.yaml
index bb017f1d..db0ce5c0 100644
--- a/azure.yaml
+++ b/azure.yaml
@@ -3,6 +3,6 @@
name: OpenAIWorkshop
services:
web:
- project: ./labs/Lab_2_analytics
+ project: ./labs/Lab_2_Data_analytics
language: py
host: appservice
diff --git a/labs/prompt_engineering/01_Prompt_Introduction.md b/labs/prompt_engineering/01_Prompt_Introduction.md
deleted file mode 100644
index 65ba8e8d..00000000
--- a/labs/prompt_engineering/01_Prompt_Introduction.md
+++ /dev/null
@@ -1,304 +0,0 @@
-# Introduction to Prompt Engineering
-## Topics
-
-- [What is a prompt?](#what-is-a-prompt)
-- [What is prompt engineering?](#what-is-prompt-engineering)
-- [Basic Prompt Examples](#basic-prompt-examples)
-- [Elements of a Prompt](#elements-of-a-prompt)
-- [General Tips for Designing Prompts](#general-tips-for-designing-prompts)
-
-**Note:** This link leads to another file, but it is a [Glossary of Terms](./99_glossary.md) covering many of the technical terms used in the following guides.
-
-## What is a prompt?
-![image](https://www.closerscopy.com/img/blinking-cursor-v2.gif)
-
-We've all seen the blinking cursor, waiting expectantly for us to act, denoting our chance to provide input...
-
-One way to think of a prompt is as a piece of text used to initiate or provide context for a language model's generation of output, primarily natural language in our use-cases. This could be an input sentence, question, or topic to generate a response from the language model.
-
-## What is prompt engineering?
-Prompt engineering is a relatively [new discipline](https://www.businessinsider.com/prompt-engineering-ai-chatgpt-jobs-explained-2023-3) for developing and optimizing prompts to efficiently use language models (LMs) across a wide variety of business applications. Prompt engineering skills help to better understand the capabilities and limitations of large language models (LLMs) and refine the completions (outputs) of LLMs. Prompt engineering is used to improve the capacity of LLMs on a wide range of common and complex tasks such as question answering and arithmetic reasoning. Developers use prompt engineering to design robust and effective prompting techniques that interface with LLMs and other tools.
-
-This guide covers the basics of standard prompts to provide a rough idea of how to interact with and instruct the LLMs found in [Azure OpenAI Studio's Playground](https://oai.azure.com/portal/playground).
-
-### Note about example prompts
-> **Note:** All examples are tested with the `text-davinci-003` model unless otherwise specified. Each of the `Examples` scenarios in the Playground has pre-set `Parameters` (e.g. `temperature=0.7` and `top_p=1`). The examples will use those pre-sets unless otherwise noted in a specific prompt scenario.
-
----
-## Basic Prompt Examples
-> **Note:** Please feel free to enter anything listed in the `Prompt:` box into a `text-davinci-003` model in the [Azure OpenAI Studio's Playground](https://oai.azure.com/portal/playground) to follow along with these prompt examples. Be aware that you may receive different outputs than what is listed in the `Output:` box given the nature of generative models.
-
-You can achieve a lot with prompts, but the quality of results depends on how much information you provide in the prompt without being overly descriptive. A prompt can contain information like instructions or questions. As we will learn later with more advanced prompts, we can also supply examples of required outputs as well as context for our instructions.
-
-Here is a basic example of a simple prompt:
-
-*Prompt:*
-```
-GPT-3 is
-```
-*Output:*
-```
- an autoregressive language model which was developed by OpenAI. It stands for Generative Pre-trained Transformer 3.
- It is a large-scale language model which uses deep learning techniques to generate human-like text. GPT-3 uses a
- transformer-based architecture to generate text with context
-```
-> **Note:** The `Output` in our example ends abruptly because our **Max length (tokens)** parameter is set to `60`. **Max length (tokens)** sets a limit on the number of tokens to generate in a response. The `text-davinci-003` model supports a maximum of 2048 tokens shared between a given prompt and response completion. (One token is roughly 4 characters for typical English text.)
-
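-If you want to reproduce the Playground presets programmatically, here is a minimal sketch using the pre-1.0 `openai` Python package against an Azure OpenAI resource. The environment variable names and the deployment name are illustrative assumptions, not part of the workshop setup:
-
-```
-import os
-import openai
-
-# Point the openai package (v0.x) at an Azure OpenAI resource.
-openai.api_type = "azure"
-openai.api_base = os.environ["OPENAI_API_BASE"]  # e.g. https://<your-resource>.openai.azure.com/
-openai.api_version = "2022-12-01"
-openai.api_key = os.environ["OPENAI_API_KEY"]
-
-# Mirror the Playground presets: temperature=0.7, top_p=1, Max length (tokens)=60.
-response = openai.Completion.create(
-    engine="text-davinci-003",  # the name of YOUR deployment
-    prompt="GPT-3 is",
-    temperature=0.7,
-    top_p=1,
-    max_tokens=60,
-)
-print(response.choices[0].text)
-```
-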
-The `Output:` is a series of strings that make sense given the context provided by our prompt of `"GPT-3 is"`. However, the output may be unwanted or unexpected based on our use-case. How can we refine, or engineer, our prompt in order to achieve our desired output?
-
-The first thing we can do is provide explicit instructions as to what we want the model to do with our previous prompt. This is what is meant by _prompt engineering_: refining the input so as to produce the best output from the LLM.
-
-*Prompt:*
-```
-Tell me a joke that begins with: GPT-3 is
-```
-
-*Output:*
-```
-GPT-3 is so intelligent that it can tell a joke without a punchline.
-```
-
-Did our instructions improve our output? Admittedly, this is not the funniest joke ever told. And unlike supervised learning problems, there is no easy error or loss metric to compare between the two outputs. Let's look at exactly what we asked the model to generate and what we received:
-| Requirement | Output Meets Requirement? |
-|-------------|--------|
-| Begin with the words, "GPT-3 is" | Yes, the `Output:` began with the words "GPT-3 is" |
-| The output must be in the form of a joke | An attempt was made |
-
----
-## Standard Prompts
-
-We looked at two very basic prompts above as well as the output they generated; now that we are familiar with the basic concepts of prompt engineering, let's look at some common formats for prompts.
-
-### Question Format
-
-```
-<Question>?
-```
-### Question-Answer (QA) Format
-This can be formatted into a QA format, which is standard in a lot of QA datasets, as follows:
-
-```
-Q: <Question>?
-A:
-```
-Another way to think about this, using other common terms, would be:
-```
-Prompt: <Question>?
-Completion:
-```
-### Few-shot Format
-Given the standard format above, one popular and effective technique to prompting is referred to as few-shot prompting where we provide multiple examples. Few-shot prompts can be formatted as follows:
-
-```
-<Question>?
-<Answer>
-
-<Question>?
-<Answer>
-
-<Question>?
-<Answer>
-
-<Question>?
-
-```
-
-### Few-shot Question-Answer (QA) Format
-And you can already guess that its QA format version would look like this:
-
-```
-Q: <Question>?
-A: <Answer>
-
-Q: <Question>?
-A: <Answer>
-
-Q: <Question>?
-A: <Answer>
-
-Q: <Question>?
-A:
-```
-
-Keep in mind that it's not required to use QA format. The format depends on the task at hand. For instance, you can perform a simple classification task and give examples that demonstrate the task as follows:
-
-*Prompt:*
-```
-This is awesome! // Positive
-This is bad! // Negative
-Wow that movie was rad! // Positive
-What a horrible show! //
-```
-
-*Output:*
-```
-Negative
-```
-or
-*Prompt:*
-```
-The following is a list of companies and the categories they fall into
-
-Facebook: Social media, Technology
-LinkedIn: Social media, Technology, Enterprise, Careers
-Uber: Transportation, Technology, Marketplace
-Unilever: Conglomerate, Consumer Goods
-Mcdonalds: Food, Fast Food, Logistics, Restaurants
-FedEx:
-```
-*Output:*
-```
-Logistics, Delivery, Shipping
-```
-Few-shot prompts enable in-context learning, which is the ability of language models to learn tasks given only a few examples. We will see more of this in action in the upcoming advanced prompt engineering sections.
-
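-When the examples live in code rather than in a Playground, a few-shot prompt is usually just assembled from strings. Here is a minimal Python sketch; the question/answer pairs are placeholders, not part of the guide:
-
-```
-def build_few_shot_prompt(examples, question):
-    """Assemble a few-shot QA prompt from (question, answer) pairs."""
-    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
-    # Leave the final answer blank so the model completes it.
-    return f"{shots}\n\nQ: {question}\nA:"
-
-examples = [
-    ("Is the sky blue on a clear day?", "Yes"),
-    ("Is grass purple?", "No"),
-]
-print(build_few_shot_prompt(examples, "Is snow cold?"))
-```
-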
----
-## Elements of a Prompt
-
-As we cover more and more examples and applications that are possible with prompt engineering, you will notice that there are certain elements that make up a prompt.
-
-A prompt can contain any of the following components:
-
-**Instruction** - a specific task or instruction you want the model to perform
-
-**Context** - can involve external information or additional context that can steer the model to better responses
-
-**Input Data** - the input or question that we are interested in finding a response for
-
-**Output Indicator** - indicates the type or format of output.
-
-Not all the components are required for a prompt and the format depends on the task at hand. We will touch on more concrete examples in upcoming guides.
-
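-In the meantime, to make the four components concrete, here is a small Python sketch that composes them into a single prompt; the example strings are illustrative:
-
-```
-instruction = "Classify the text into neutral, negative, or positive."  # Instruction
-context = "You are grading short product reviews."                      # Context
-input_data = "Text: I think the food was okay."                         # Input Data
-output_indicator = "Sentiment:"                                         # Output Indicator
-
-# Skip any component that is empty; not every prompt needs all four.
-prompt = "\n".join(p for p in (instruction, context, input_data, output_indicator) if p)
-print(prompt)
-```
-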
----
-## General Tips for Designing Prompts
-
-> **Note:** We are not examining hyperparameters in this section; however, here's a quick link to some pointers on hyperparameters: [Basic Overview of Azure OpenAI Service Hyperparameters](./98_Hyperparameters_Overview.md)
-
-Here are some tips to keep in mind while you are designing your prompts:
-
-### Start Simple
-As you get started with designing prompts, you should keep in mind that it is really an iterative process that requires experimentation to get optimal results. Using a simple playground like [Azure OpenAI Studio's Playground](https://oai.azure.com/portal/playground) will allow you to test out ideas quickly and easily. The model won't be offended if you ask it to do very similar things over and over again!
-
-You can start with simple prompts and keep adding more elements and context as you aim for better results. Versioning your prompts along the way is vital for this reason. As you read through this guide, you will see many examples where specificity, simplicity, and conciseness will often give you better results. Begin with a hardcoded prompt and move on to more dynamically generated prompts as you refine your results.
-
-### The Instruction
-You can design effective prompts for various simple tasks by using commands to instruct the model what you want to achieve such as "Write", "Classify", "Summarize", "Translate", "Order", "Create", "Make" etc.
-
-Keep in mind that you also need to experiment a lot to see what works best. Try different instructions with different keywords, context, and data to see what works best for your particular use case and task. Usually, the more specific and relevant the context is to the task you are trying to perform, the better.
-
-It is often recommended that instructions be placed at the beginning of the prompt. It's also recommended that a clear separator like "###" be used to separate the instruction from the context.
-
-For instance:
-
-*Prompt:*
-```
-### Instruction ###
-Translate the text below to Spanish:
-
-Text: "hello!"
-```
-
-*Output:*
-```
-¡Hola!
-```
-
-### Specificity
-Be very specific about the instruction and task you want the model to perform. The more descriptive and detailed the prompt is, the better the results. This is particularly important when you have a desired outcome or style of generation you are seeking. There aren't specific tokens or keywords that lead to better results. It's more important to have a good format and descriptive prompt. In fact, providing examples in the prompt is very effective to get desired output in specific formats.
-
-When designing prompts you should also keep in mind the length of the prompt, as there are limitations regarding how long it can be. Including too many unnecessary details is not necessarily a good approach. The details should be relevant and contribute to the task at hand. This is something you will need to experiment with a lot. We encourage a lot of experimentation and iteration to optimize prompts for your applications.
-
-As an example, let's try a simple prompt to extract specific information from a piece of text.
-
-*Prompt:*
-```
-There are many fruits that were found on the recently discovered planet Goocrux. There are neoskizzles that grow there,
-which are purple and taste like candy. There are also loheckles, which are a grayish blue fruit and are very tart,
-a little bit like a lemon. Pounits are a bright green color and are more savory than sweet. There are also plenty of
-loopnovas which are a neon pink flavor and taste like cotton candy. Finally, there are fruits called glowls, which have
-a very sour and bitter taste which is acidic and caustic, and a pale orange tinge to them.
-
-Please make a table summarizing the fruits from Goocrux
-| Fruit | Color | Flavor |
-| Neoskizzles | Purple | Sweet |
-| Loheckles | Grayish blue | Tart |
-```
-
-*Output:*
-```
-| Pounits | Bright green | Savory |
-| Loopnovas | Neon pink | Cotton candy |
-| Glowls | Pale orange | Sour and bitter |
-```
-> **Note:** The instruction is `Please make a table summarizing the fruits from Goocrux`, followed by a sample table with column headings and two example rows with the correct elements in their respective columns, making this a few-shot prompt.
-
-### Avoid Impreciseness
-
-Given the tips above about being detailed and improving format, it's easy to fall into the trap of wanting to be too clever about prompts and potentially creating imprecise descriptions. It's often better to be specific and direct. The model doesn't mind you being explicit in your instructions - you can't offend it! The analogy here is very similar to effective communication -- the more direct, the more effective the message gets across.
-
-For example, you might be interested in learning the concept of prompt engineering. You might try something like:
-*Prompt:*
-```
-Provide a short explanation of what prompt engineering is in NLP without using technical language
-```
-*Output:*
-```
-Prompt engineering in NLP is the process of creating questions or prompts to help a computer understand
-natural language. It involves creating questions or statements that can be used to prompt a computer to generate
-a response in a natural language.
-```
-It's not clear from the prompt above how many sentences to use, what style the output should be in, or what we intend to take away from the output. You might get passable responses with the above prompt; a better prompt would be one that is very specific, concise, and to the point. Something like:
-*Prompt:*
-```
-Provide a list with 5 bullet points, one sentence each, describing the benefits of prompt
-engineering when interacting with a large language model using an 8th grade reading level comprehension
-```
-*Output:*
-```
-• Increases efficiency by reducing the need for user input.
-• Improves user experience by reducing the time spent waiting for a response.
-• Reduces cognitive load by simplifying language and providing clear instructions.
-• Enhances accuracy by providing a consistent format for language input.
-• Enhances scalability by allowing for the integration of more complex language models.
-```
-
-### To do or not to do?
-Another common tip when designing prompts is to avoid saying what not to do but say what to do instead. This encourages more specificity and focus on the details that lead to good responses from the model.
-
-Here is an example of an IT Support chatbot failing to uphold our instructions because the instructions are phrased in a negative manner.
-
-*Prompt:*
-```
-The following is an exchange between an IT Support Agent and a customer. The customer is locked out of their account. DO NOT ASK FOR THEIR PASSWORD. DO NOT ASK ABOUT PERSONAL INFORMATION.
-
-Customer: I cannot log into my account.
-IT Support Agent:
-```
-
-*Output:*
-```
-Okay, I understand. Can you please tell me the email address you are trying to login with?
-```
-
-Here is a better prompt:
-
-*Prompt:*
-```
-The following is an exchange between an IT Support Agent and a customer. The customer is locked out of their account. The IT Support Agent must not discuss personal information such as username, password, name, email address, account number, or phone number, or security questions. If the IT Support Agent needs personal information to assist the customer, the IT Support Agent must ask them to click on the "Forgot Password?" link at the sign-on page and follow the on-screen instructions.
-
-Customer: I cannot log into my account.
-IT Support Agent:
-```
-
-*Output:*
-```
-I apologize that you are having difficulty accessing your account. Please click on the "Forgot Password?" link at the sign-on page and follow the on-screen instructions to reset your password and access your account.
-```
-
-Some of the examples above were adapted from the ["Best practices for prompt engineering with OpenAI API" article](https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api) and the
-[Democratizing Artificial Intelligence Research, Education, and Technologies' Prompt Engineering Guide](https://github.com/dair-ai/Prompt-Engineering-Guide).
-
----
-[Next Section (Sample Scenarios)](./02_Sample_Scenarios/01_Text_Summarization.md)
diff --git a/labs/prompt_engineering/02_Sample_Scenarios/01_Text_Summarization.md b/labs/prompt_engineering/02_Sample_Scenarios/01_Text_Summarization.md
deleted file mode 100644
index 4fe3fd45..00000000
--- a/labs/prompt_engineering/02_Sample_Scenarios/01_Text_Summarization.md
+++ /dev/null
@@ -1,91 +0,0 @@
-# Text Summarization
-
-## Topics
-
-- [Unstructured Text Summarization](#unstructured-text-summarization)
-- [Semi-Structured Text Summarization](#semi-structured-text-summarization)
-
----
-
-## Unstructured Text Summarization
-
-Let's look at a simple example to summarize a text. Remember that you can always control the output of a prompt via instructions; some typical controls are around:
-- length of the summary
-- target audience of the summary
-
-*Prompt:*
-```
-Summarize this for a second-grade student:
-
-Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-half times that of all the other planets in the Solar System combined. Jupiter is one of the brightest objects visible to the naked eye in the night sky, and has been known to ancient civilizations since before recorded history. It is named after the Roman god Jupiter. When viewed from Earth, Jupiter can be bright enough for its reflected light to cast visible shadows, and is on average the third-brightest natural object in the night sky after the Moon and Venus.
-```
-
-*Output:*
-```
-Jupiter is the fifth planet from the Sun and is very big and bright. It can be seen with our eyes in the night sky and it has been known since ancient times. Its name comes from the Roman god Jupiter. It is usually the third brightest object in the night sky after the Moon and Venus.
-```
-
-Let's change the prompt to specify the length of the summary to be 3 sentences.
-
-*Prompt:*
-```
-Summarize this for a second-grade student in 3 sentences:
-
-Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-half times that of all the other planets in the Solar System combined. Jupiter is one of the brightest objects visible to the naked eye in the night sky, and has been known to ancient civilizations since before recorded history. It is named after the Roman god Jupiter. When viewed from Earth, Jupiter can be bright enough for its reflected light to cast visible shadows, and is on average the third-brightest natural object in the night sky after the Moon and Venus.
-```
-
-*Output:*
-```
-Jupiter is the fifth planet from the Sun and the biggest in the Solar System. It is very bright and can be seen in the night sky. It is named after the Roman god Jupiter.
-```
-
-Let's now change the prompt to specify the target audience of the summary to be a physicist.
-
-*Prompt:*
-```
-Summarize this for a physicist in 3 sentences:
-
-Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-half times that of all the other planets in the Solar System combined. Jupiter is one of the brightest objects visible to the naked eye in the night sky, and has been known to ancient civilizations since before recorded history. It is named after the Roman god Jupiter. When viewed from Earth, Jupiter can be bright enough for its reflected light to cast visible shadows, and is on average the third-brightest natural object in the night sky after the Moon and Venus.
-```
-
-*Output:*
-```
-Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass two-and-a-half times that of all the other planets combined. It is visible to the naked eye in the night sky and has been known since before recorded history, being named after the Roman god Jupiter.
-```
-
-## Semi-Structured Text Summarization
-
-You can also control the format of the summarization to fit desired output structures. Let's summarize the same text in a bullet point format.
-
-*Prompt:*
-```
-Summarize this for a second-grade student in 3 sentences in bullet points:
-
-Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-half times that of all the other planets in the Solar System combined. Jupiter is one of the brightest objects visible to the naked eye in the night sky, and has been known to ancient civilizations since before recorded history. It is named after the Roman god Jupiter. When viewed from Earth, Jupiter can be bright enough for its reflected light to cast visible shadows, and is on average the third-brightest natural object in the night sky after the Moon and Venus.
-```
-
-*Output:*
-```
-- Jupiter is the fifth planet from the Sun and the largest in the Solar System.
-- It is a gas giant and is very bright in the night sky.
-- It is named after the Roman god Jupiter.
-```
-
-*Prompt:*
-```
-Summarize this for a second-grade student as a title and a one sentence abstract:
-
-Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-half times that of all the other planets in the Solar System combined. Jupiter is one of the brightest objects visible to the naked eye in the night sky, and has been known to ancient civilizations since before recorded history. It is named after the Roman god Jupiter. When viewed from Earth, Jupiter can be bright enough for its reflected light to cast visible shadows, and is on average the third-brightest natural object in the night sky after the Moon and Venus.
-```
-
-*Output:*
-```
-Title: Jupiter, the Brightest Planet
-Abstract: Jupiter is the fifth planet from the Sun and is the brightest object in the night sky, visible to the naked eye and known since before recorded history.
-```
-
----
-
-[Previous Section (Introduction to Prompt Engineering)](../01_Prompt_Introduction.md)
-
-[Next Section (Information Extraction)](./02_Information_Extraction.md)
\ No newline at end of file
diff --git a/labs/prompt_engineering/02_Sample_Scenarios/02_Information_Extraction.md b/labs/prompt_engineering/02_Sample_Scenarios/02_Information_Extraction.md
deleted file mode 100644
index 895a856c..00000000
--- a/labs/prompt_engineering/02_Sample_Scenarios/02_Information_Extraction.md
+++ /dev/null
@@ -1,135 +0,0 @@
-# Information Extraction
-
-## Topics
-
-- [Entity Extractions from Unstructured Data](#entity-extractions-from-unstructured-data)
-- [Entity Extractions from Unstructured Data (Structured Output)](#entity-extractions-from-unstructured-data-structured-output)
-
----
-
-## Entity Extractions from Unstructured Data
-
-The OpenAI GPT models are capable of extracting entities from text. Let's look at a simple example to extract entities from a text. Remember that you can always control the output of a prompt via instructions.
-
-*Prompt:*
-```
-Extract entities from this accident report:
-
-Accident Report
-
-Date: March 3, 2023 Time: 10:15 AM Location: Intersection of Hollywood Boulevard and Highland Avenue, Los Angeles, CA Involved parties: John Smith (truck driver) and Jane Doe (cyclist) Witnesses: Mark Lee (pedestrian) and Lisa Kim (store owner)
-
-Summary:
-
-John Smith was driving a white truck eastbound on Hollywood Boulevard, approaching the intersection with Highland Avenue. He had a green light and was traveling at about 25 mph. Jane Doe was riding a red bicycle northbound on Highland Avenue, crossing Hollywood Boulevard. She had a red light and was traveling at about 15 mph. She did not stop at the intersection and entered the path of the truck. John Smith applied his brakes but could not avoid hitting Jane Doe. Jane Doe was thrown off her bicycle and landed on the pavement. Mark Lee and Lisa Kim witnessed the accident from opposite corners of the intersection. They called 911 and rushed to help Jane Doe.
-
-Injuries:
-
-Jane Doe suffered a concussion, a broken arm, and several bruises and abrasions. She was conscious but disoriented at the scene. She was transported by ambulance to Cedars-Sinai Medical Center for treatment. John Smith was not injured but was shaken by the accident.
-
-Damage:
-
-The truck sustained minor damage to its front bumper and hood. The bicycle was severely damaged and rendered unusable.
-
-Fault:
-
-According to the traffic laws of California, Jane Doe was at fault for running a red light and failing to yield the right-of-way to John Smith. John Smith was not at fault for the accident.
-```
-
-*Output:*
-```
-Entities:
-
-John Smith, Jane Doe, Mark Lee, Lisa Kim, white truck, red bicycle, Cedars-Sinai Medical Center, California, traffic laws.
-```
-
----
-## Entity Extractions from Unstructured Data (Structured Output)
-
-Consider the potential to generate a structured output using the entities extracted from the text. For example, we can generate a structured output in the form of a JSON object.
-
-*Prompt:*
-```
-Extract entities from this accident report as a well formed JSON document:
-
-Accident Report
-
-Date: March 3, 2023 Time: 10:15 AM Location: Intersection of Hollywood Boulevard and Highland Avenue, Los Angeles, CA Involved parties: John Smith (truck driver) and Jane Doe (cyclist) Witnesses: Mark Lee (pedestrian) and Lisa Kim (store owner)
-
-Summary:
-
-John Smith was driving a white truck eastbound on Hollywood Boulevard, approaching the intersection with Highland Avenue. He had a green light and was traveling at about 25 mph. Jane Doe was riding a red bicycle northbound on Highland Avenue, crossing Hollywood Boulevard. She had a red light and was traveling at about 15 mph. She did not stop at the intersection and entered the path of the truck. John Smith applied his brakes but could not avoid hitting Jane Doe. Jane Doe was thrown off her bicycle and landed on the pavement. Mark Lee and Lisa Kim witnessed the accident from opposite corners of the intersection. They called 911 and rushed to help Jane Doe.
-
-Injuries:
-
-Jane Doe suffered a concussion, a broken arm, and several bruises and abrasions. She was conscious but disoriented at the scene. She was transported by ambulance to Cedars-Sinai Medical Center for treatment. John Smith was not injured but was shaken by the accident.
-
-Damage:
-
-The truck sustained minor damage to its front bumper and hood. The bicycle was severely damaged and rendered unusable.
-
-Fault:
-
-According to the traffic laws of California, Jane Doe was at fault for running a red light and failing to yield the right-of-way to John Smith. John Smith was not at fault for the accident.
-```
-
-*Output:*
-```
-{
- "accidentReport": {
- "date": "March 3, 2023",
- "time": "10:15 AM",
- "location": {
- "street": "Intersection of Hollywood Boulevard and Highland Avenue",
- "city": "Los Angeles",
- "state": "CA"
- },
- "involvedParties": [
- {
- "name": "John Smith",
- "role": "truck driver"
- },
- {
- "name": "Jane Doe",
- "role": "cyclist"
- }
- ],
- "witnesses": [
- {
- "name": "Mark Lee",
- "role": "pedestrian"
- },
- {
- "name": "Lisa Kim",
- "role": "store owner"
- }
- ],
- "summary": "John Smith was driving a white truck eastbound on Hollywood Boulevard, approaching the intersection with Highland Avenue. He had a green light and was traveling at about 25 mph. Jane Doe was riding a red bicycle northbound on Highland Avenue, crossing Hollywood Boulevard. She had a red light and was traveling at about 15 mph. She did not stop at the intersection and entered the path of the truck. John Smith applied his brakes but could not avoid hitting Jane Doe. Jane Doe was thrown off her bicycle and landed on the pavement. Mark Lee and Lisa Kim witnessed the accident from opposite corners of the intersection. They called 911 and rushed to help Jane Doe.",
- "injuries": {
- "victim": {
- "name": "Jane Doe",
- "injuries": "concussion, broken arm, bruises and abrasions"
- },
- "other": {
- "name": "John Smith",
- "injuries": "none"
- }
- },
- "damage": {
- "truck": "minor damage to front bumper and hood",
- "bicycle": "severely damaged and rendered unusable"
- },
- "fault": "Jane Doe was at fault for running a red light and failing to yield the right-of-way to John Smith. John Smith was not at fault for the accident."
- }
-}
-```
-
-**IMPORTANT NOTE**: You may have to increase the 'Max length (tokens)' parameter if the output gets cut off (you'll probably need to increase it to 500+ tokens).
-
-Please note how comprehensive the output is. You'll most likely want to control the format of the output to fit your needs and existing systems. For this, one-shot or few-shot learning are great techniques to consider. See [Advanced Concepts](../03_Advanced_Concepts.md) for more information.
-
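-Since downstream systems usually consume this JSON programmatically, it is worth validating the completion before using it. A minimal Python sketch (the sample string stands in for a real completion):
-
-```
-import json
-
-raw = '{"accidentReport": {"date": "March 3, 2023"}}'  # stand-in for the model's completion
-try:
-    report = json.loads(raw)
-    print(report["accidentReport"]["date"])
-except json.JSONDecodeError:
-    # Generative output is not guaranteed to be well-formed JSON.
-    print("Output was not valid JSON; consider re-prompting with clearer format examples.")
-```
-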
----
-
-[Previous Section (Text Summarization)](./01_Text_Summarization.md)
-
-[Next Section (Question Answering)](./03_Question_Answering.md)
diff --git a/labs/prompt_engineering/02_Sample_Scenarios/03_Question_Answering.md b/labs/prompt_engineering/02_Sample_Scenarios/03_Question_Answering.md
deleted file mode 100644
index c303fafb..00000000
--- a/labs/prompt_engineering/02_Sample_Scenarios/03_Question_Answering.md
+++ /dev/null
@@ -1,47 +0,0 @@
-# Question Answering
-
-## Topics
-
- - [Overview of Question Answering](#overview-of-question-answering)
-
----
-
-## Overview of Question Answering
-
-One of the best ways to get the model to respond with specific answers is to improve the format of the prompt. As covered before, a prompt can combine instructions, context, input, and an output indicator to get improved results. While these components are not required, using them is good practice: the more specific you are with instructions, the better the results you will get. Below is an example of how this would look following a more structured prompt. Given the often factual nature that Question-Answering requires, we should make a quick review of some [hyperparameter pointers](../98_Hyperparameters_Overview.md) that can be used to control the output.
-
-> **Note:** In short, the lower the `temperature`, the more deterministic the results, in the sense that the highest-probability next token is always picked. Increasing temperature leads to more randomness, encouraging more diverse or creative outputs; we are essentially increasing the weights of the other possible tokens. In terms of application, we might want to use a lower temperature for something like fact-based QA to encourage more factual and concise responses. For poem generation or other creative tasks, it might be beneficial to increase temperature.
-
-Given the above, it may make sense to investigate the `temperature` hyperparameter to see how it affects the output.
-
-> **Note:** The `temperature` has been set to `0.25` per the recommendation above, and `max_tokens` has been set to `1200` to allow for a longer completion given the lengthy prompt.
-
-*Prompt:*
-```
-You must summarize the results of the ----SEARCH RESULTS---- section in a way that best answers the query listed in the ----USER QUERY--- section with your response going in the ---Response--- section.
-
-----USER QUERY----
-what ports and connectors does my surface have?
-
-----SEARCH RESULTS----
-Ports and connectors Surface Book has the ports you expect from a full -feature laptop. Two full-size USB 3.0 ports Connect a USB accessory like a mouse, printer, Ethernet adapter, USB drive, or smartphone. SD card slot Use the full -size SD card slot with an SD card (sold separately) for extra storage and transferring files. Mini DisplayPort version 1.2a Share what’s on your Surface screen by connecting it to an HDTV, monitor, or projector. (Video adapters are sold separ ately.) 3.5 mm headset jack Plug in your favorite headset for a little more privacy when listening to music or conference calls. Software Windows 10 operating system Windows 10 provides a variety of options for entertainment and productivity whether you ’re at school, at home, or on the go.
-Connect devices and accessories You can make photos, videos, and presentations bigger by connecting your Surface Book to a TV, monitor, or projector. Or, connect to an HDTV and watch movies on a big screen. You can connect monitors, accessories, and other devices directly to your Surface Book using the USB ports, Mini DisplayPorts, or Bluetooth. Or, connect everything to a Surface Dock (sold separately). Surface Dock lets you transform your Surface Book into a full desktop PC using a single cable. Set up your workspace with S urface Dock Surface Dock supports high -speed transfer of video, audio, and data. Its compact design gives you flexibility and keeps your desktop clutter -free. The external power supply recharges your Surface and provides plenty of additional power to char ge connected USB devices. Here's how to set up your Surface Dock: 1. Plug the AC end of the Surface Dock power cord into an electrical outlet or power strip and plug the other end into the power port on your Surface Dock. 2. If you want to use a wired network connection, connect a network cable to the Ethernet port on your Surface Dock. 3. Connect your computer peripherals to the USB ports on your Surface Dock. 4. Connect a cable from your monitor to a Mini DisplayPort on your Surfa ce Dock. If your monitor cable doesn’t have a Mini DisplayPort connector, you’ll need to buy another cable or an adapter. For more info on adapters, see Connect Surface to a TV, monitor, or projector on Surface.com. 5. Connect any audio accessories like speakers or headphones.
-Microsoft Page 26 DisplayPort or HDMI port, use a VGA cable and the Mini DisplayPort to VGA Adapter. A VGA adapter or cable is for video only. Audio will play from your Surface speakers unless you’ve connected exter nal speakers. For more info, see Surface sound, volume, and audio accessories on Surface.com. Wireless : You can connect to wireless displays when Windo ws detects one nearby. Or, use a Microsoft Wireless Display Adapter (sold separately) to connect to a screen using Miracast. Connect to a wired display 1. Connect an HDMI or VGA cable to the HDMI, DisplayPort, or VGA port on your TV, monitor, or projector. 2. Connect the other end of the cable to a Surface video adapter or the Mini DisplayPort on your Surface or to your docking station. Mini DisplayPort on Surface Book Mini DisplayPort on Surface Dock 3. If you’re using an adapter, plug it into the Mini DisplayPort on your Surface or your docking station. Connect to a wireless display 1. On your Surface, swipe in from the right edge of the screen or select Action center in the taskbar. 2. Select Connec and in the list of displays, select the name of the wireless display. If you have problems connecting your Surface or have questions about screen resolution, see Trouble connecting Surface to a second screen? on Surface.com. If you’re using a Microsoft Wireless Display Adapter to connect to a screen using Miracast, see Using the Microsoft Wireless Display Adapter with Surface on Surface.com for more info. Set up your screens Once you connect to another screen or screens, you can choose what to display on each screen.
-
----Response---
-```
-
-The text in the ----SEARCH RESULTS---- section above comes from this [Surface User Guide](https://download.microsoft.com/download/7/B/1/7B10C82E-F520-4080-8516-5CF0D803EEE0/surface-book-user-guide-EN.pdf); it is the text from 3 pages within that document.
-
-*Output:*
-```
-The Surface Book has two full-size USB 3.0 ports, an SD card slot, a Mini DisplayPort version 1.2a, a 3.5 mm headset jack, and the ability to connect to a wired display with an HDMI or VGA cable and a wireless display with Miracast. It also has a Surface Dock (sold separately) for connecting monitors, accessories, and other devices.
-```
-
-The response listed above is a concise summarization of the supplied text, and it adheres to the source material, namely the text in the ----SEARCH RESULTS---- section.
-
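-To run the same grounded QA pattern from code, the two Parameters called out above map directly onto API arguments. A minimal sketch, assuming the same Azure client setup as the earlier Python examples (environment variable and deployment names are illustrative):
-
-```
-import os
-import openai
-
-openai.api_type = "azure"
-openai.api_base = os.environ["OPENAI_API_BASE"]
-openai.api_version = "2022-12-01"
-openai.api_key = os.environ["OPENAI_API_KEY"]
-
-grounded_prompt = "..."  # the USER QUERY / SEARCH RESULTS prompt shown above
-
-response = openai.Completion.create(
-    engine="text-davinci-003",  # illustrative deployment name
-    prompt=grounded_prompt,
-    temperature=0.25,  # low temperature favors factual, grounded answers
-    max_tokens=1200,   # generous completion budget for the long summary
-)
-print(response.choices[0].text)
-```
-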
----
-
-[Previous Section (Information Extraction)](./02_Information_Extraction.md)
-
-[Next Section (Text Classification)](./04_Text_Classification.md)
\ No newline at end of file
diff --git a/labs/prompt_engineering/02_Sample_Scenarios/04_Text_Classification.md b/labs/prompt_engineering/02_Sample_Scenarios/04_Text_Classification.md
deleted file mode 100644
index 9996a9a8..00000000
--- a/labs/prompt_engineering/02_Sample_Scenarios/04_Text_Classification.md
+++ /dev/null
@@ -1,57 +0,0 @@
-# Text Classification
-
-## Topics
-
- - [Classification with Prompt Engineering Directives](#classification-with-prompt-engineering-directives)
- - [Classification using One Shot or Few Shot Learning](#classification-using-one-shot-or-few-shot-learning)
-
----
-
-## Classification with Prompt Engineering Directives
-
-Quite a few use cases fall under this category when the input isn't closely tied to a specific business domain. For example, classifying a news article into a category, classifying a product review into a sentiment, classifying a customer support ticket into a category, etc.
-
-*Prompt:*
-```
-Classify the following news article into 1 of the following categories: [Business, Tech, Politics, Sport, Entertainment]
-
-news article: Donna Steffensen Is Cooking Up a New Kind of Perfection. The Internet’s most beloved cooking guru has a buzzy new book and a fresh new perspective:
-```
-
-*Output:*
-```
-Entertainment
-```
-
----
-## Classification using One Shot or Few Shot Learning
-
-This topic will be covered in the next section, [Advanced Concepts](../03_Advanced_Concepts.md), but it's worth mentioning here as well. One-shot or few-shot learning is a technique where you show the model a small number of labeled examples directly in the prompt and then use it to classify new data. This is useful when you have a small amount of data but want to be able to classify new data that you haven't seen before.
-
-*Prompt:*
-```
-Review: This is a great product. I love it.
-Star Rating: 5
-
-Review: This is a terrible product. I hate it.
-Star Rating: 1
-
-Review: I like the product overall design, but it's not very comfortable.
-Star Rating: 3
-
-Review: The product was amazing while it lasts. It broke after 2 weeks.
-Star Rating:
-```
-
-*Output:*
-```
-2
-```
-
-You've taught the model to rate reviews between 1 and 5 stars based on the review text. You can then use this approach to classify new reviews.
-
----
-
-[Previous Section (Question Answering)](./03_Question_Answering.md)
-
-[Next Section (Conversation)](./05_Conversation.md)
\ No newline at end of file
diff --git a/labs/prompt_engineering/02_Sample_Scenarios/05_Conversation.md b/labs/prompt_engineering/02_Sample_Scenarios/05_Conversation.md
deleted file mode 100644
index d2f98440..00000000
--- a/labs/prompt_engineering/02_Sample_Scenarios/05_Conversation.md
+++ /dev/null
@@ -1,97 +0,0 @@
-# Conversation
-
-## Topics
-
- - [Overview of Conversation](#overview-of-conversation)
- - [Adding a Personality](#adding-a-personality)
-
----
-
-## Overview of Conversation
-
-[**conversation**](https://en.wikipedia.org/wiki/Conversation)
-```
- interactive communication between two or more people
-```
-
-If we are flexible about our definition of conversation and allow for LLMs to be involved, then in some manner every single Prompt and Completion exchange we have had with one of the models has been a *conversation*. Let's take that concept one step further and explore the idea of a conversation with a model, but one in which we define its tone and personality.
-
-
-
-*Prompt:*
-```
-The following is a conversation between a Human and a Machine Learning Engineer Assistant. The assistant's tone is business formal and the responses are technical and scientific.
-
-Human: Hello, who are you?
-AI: Hello, it's a pleasure to meet you! I am an Machine Learning Engineer Assistant. How can I help you today?
-Human: Can you tell me about matrix projections when making maps?
-AI:
-
-```
-
-
-*Output:*
-```
- Yes, matrix projections are used to transform a three-dimensional surface into a two-dimensional map. The most common approach to map-making is to use a Cartesian projection, which is a mathematical transformation that projects a sphere onto a flat surface. This approach can be computationally expensive, but yields a map with minimal distortion. Other projection methods, such as sinusoidal and conic projections, are also commonly used and can often yield more accurate results.
-```
-
-Depending on our audience, this might be as far as we feel we need to go. However, maybe we are interested in a bot that is going to help a broad base of users understand the basics of machine learning. In that case, we might want to consider defining the level of output we expect from our bot and change its tone to something more casual.
-
-*Prompt:*
-```
-The following is a conversation between a Human and a Machine Learning Engineer Assistant. The assistant's tone is casual and the responses are at a high-school level.
-
-Human: Hello, who are you?
-AI: Hello, it's a pleasure to meet you! I am an Machine Learning Engineer Assistant. How can I help you today?
-Human: Can you tell me about matrix projections when making maps?
-AI:
-```
-
-*Output:*
-```
-Sure! Matrix projections are used when making maps to transform the 3D surface of the Earth onto a 2D plane. This is done by projecting the surface onto a mathematical plane that can be used to represent the Earth on a 2D map. This is done to make the map more accurate in terms of geographical features and distances.
-```
-Notice that asking to change the tone resulted in some subtle changes in the output - `three-dimensional` and `two-dimensional` became `3D` and `2D` respectively. We also lost any reference to more technical terms like `sinusoidal`. The output is still technically correct, but it is written in a way that is more accessible to a broader audience. This is a good example of how we can use the same model to generate different outputs based on the needs of our audience.
-
----
-## Adding a Personality
-
-Let's have a little fun with our prompt and assign a personality to our bot. We can do this by telling the model to use the tone of a famous writer. Let's try it with Mark Twain and Ernest Hemingway.
-
-*Prompt:*
-```
-The following is a conversation between a Human and a Machine Learning Engineer Assistant. The assistant should respond in the tone of Mark Twain.
-
-Human: Hello, who are you?
-AI: Hello, it's a pleasure to meet you! I am an Machine Learning Engineer Assistant. How can I help you today?
-Human: Can you tell me about matrix projections when making maps?
-AI:
-```
-
-*Output:*
-```
-Why certainly! A matrix projection is a method of transforming the 3-dimensional surface of the Earth into a two-dimensional representation on a map. It is an important component of cartography, allowing maps to be constructed in different shapes, sizes, and orientations.
-```
-
-*Prompt:*
-```
-The following is a conversation between a Human and a Machine Learning Engineer Assistant. The assistant should respond in the tone of Ernest Hemingway.
-
-Human: Hello, who are you?
-AI: Hello, it's a pleasure to meet you! I am an Machine Learning Engineer Assistant. How can I help you today?
-Human: Can you tell me about matrix projections when making maps?
-AI:
-```
-
-*Output:*
-```
-Yes, matrix projections are a type of mathematical equation used to transform the curved surface of the Earth into a flat map. It can be used to make accurate, detailed maps of the Earth's surface.
-```
-
-The two responses _feel_ very different. The Mark Twain response begins with a characteristically southern "Why certainly" and uses terminology like cartography - a word that is not commonly used in everyday conversation. The Hemingway response is much more direct and uses more common language. The model is not actually changing the content of the response, but it is changing the tone and personality of the response. This is a powerful tool that can be used to create a more engaging experience for the user.
-
----
-
-[Previous Section (Text Classification)](./04_Text_Classification.md)
-
-[Next Section (Code Generation)](./06_Code_Generation.md)
\ No newline at end of file
diff --git a/labs/prompt_engineering/02_Sample_Scenarios/06_Code_Generation.md b/labs/prompt_engineering/02_Sample_Scenarios/06_Code_Generation.md
deleted file mode 100644
index 36d8bd35..00000000
--- a/labs/prompt_engineering/02_Sample_Scenarios/06_Code_Generation.md
+++ /dev/null
@@ -1,124 +0,0 @@
-# Code Generation
-
-## Topics
-
-- [Overview of Code Generation](#overview-of-code-generation)
-- [Parameters Setup](#parameters-setup)
-- [Python Code Generation](#python-code-generation)
-- [Code Translation](#code-translation)
-- [SQL Code Generation](#sql-code-generation)
-
----
-
-## Overview of Code Generation
-
-You probably have heard already of [GitHub Copilot](https://github.com/features/copilot), a new AI-powered code completion tool that can write code for you.
-
-Code generation is the process of creating executable code from natural language, comments, or other forms of input.
-
-The OpenAI Codex models (`code-davinci-002`, for instance) are powerful AI models that can understand and generate code in various programming languages. They are based on a large-scale neural network trained on billions of lines of public code and natural language data.
-
-Codex can be used for a variety of tasks such as completing code snippets, explaining code functionality, converting code between languages, and creating web applications. It can also interact with users through natural language queries and provide feedback and suggestions.
-
-## Parameters Setup
-
-**Note:** Switch your Deployments to a 'code-davinci-002' model.
-
-**Note:** Increase the Max Length (tokens) variable slider to 1,200 tokens to allow for longer code outputs to be generated for your use cases.
-
-**Note**: Please set up a "stop sequences" entry in the Parameters pane to let the model know when to stop generating code. For example, you can add "#" as a stop sequence (type # and then press 'tab' to enable it). The model will complete your prompt but will stop before generating a new "#" comment line based on the prompt's context.
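-
-The same settings carry over to the API: `engine` selects the Codex deployment and `stop` supplies the stop sequence. A minimal Python sketch (the deployment name and environment variables are illustrative assumptions):
-
-```
-import os
-import openai
-
-openai.api_type = "azure"
-openai.api_base = os.environ["OPENAI_API_BASE"]
-openai.api_version = "2022-12-01"
-openai.api_key = os.environ["OPENAI_API_KEY"]
-
-response = openai.Completion.create(
-    engine="code-davinci-002",  # a Codex deployment
-    prompt="# write a python function which adds all days between 2010-01-01 and the current date\n",
-    temperature=0,
-    max_tokens=1200,
-    stop=["#"],  # stop when the model starts a new comment line
-)
-print(response.choices[0].text)
-```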
-
-## Python Code Generation
-
-*Prompt:*
-```
-# write a python function which adds all days between 2010-01-01 and the current date
-# the function should take no input, and evaluate the current date when called
-```
-
-*Output:*
-```
-def days_since_2010():
- today = datetime.date.today()
- day_2010 = datetime.date(2010, 1, 1)
- return (today - day_2010).days
-```
-
-Feel free to test removing the stop sequence to understand its impact on the output and the problems it can cause. (Note that to actually run the generated function, you would also need to `import datetime` first.)
-
-## Code Translation
-
-You can also leverage the coding capabilities of the codex to translate code from one language to another. For example, you can translate Python code to C++.
-
-*Prompt:*
-```
-# Translate the following python code to C++
-def days_since_2010():
- today = datetime.date.today()
- day_2010 = datetime.date(2010, 1, 1)
- return (today - day_2010).days
-#
-
-public
-```
-
-*Output:*
-```
-public int days_since_2010() {
- Date today = new Date();
- Date day_2010 = new Date(2010, 1, 1);
- return (today - day_2010).days;
-}
-```
-
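-> **Note:** The translated output above reads more like Java-flavored pseudocode than valid C++ (standard C++ has no built-in `Date` type, and date arithmetic typically goes through `<chrono>`). Treat generated translations as drafts: always compile, review, and test them before use.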
-
-## SQL Code Generation
-
-Writing proper SQL code does require knowledge of a schema, so part of the context provided to the prompt should include the necessary table definitions.
-
-*Prompt:*
-```
-# MySQL tables, with their properties:
-# Table customers, columns = [CustomerId, FirstName, LastName, Company, Address, City, State, Country, PostalCode, Phone, Fax, Email, SupportRepId]
-# Create a MySQL query for all customers in Texas named Jane
-
-SELECT
-```
-
-*Output:*
-```
-SELECT * FROM customers WHERE State = 'Texas' AND FirstName = 'Jane'
-```
-
-*Prompt:*
-```
-# Table albums, columns = [AlbumId, Title, ArtistId]
-# Table artists, columns = [ArtistId, Name]
-# Table media_types, columns = [MediaTypeId, Name]
-# Table playlists, columns = [PlaylistId, Name]
-# Table playlist_track, columns = [PlaylistId, TrackId]
-# Table tracks, columns = [TrackId, Name, AlbumId, MediaTypeId, GenreId, Composer, Milliseconds, Bytes, UnitPrice]
-
-# Create a query for all albums with more than 10 tracks.
-
-SELECT
-```
-
-*Output:*
-```
-SELECT a.Title AS Album, COUNT(t.TrackId) AS Tracks
-FROM albums a
-INNER JOIN tracks t ON a.AlbumId = t.AlbumId
-GROUP BY a.Title
-HAVING COUNT(t.TrackId) > 10
-```
-
-You can leverage the power of Codex for more advanced scenarios like helping you refactor code, document code, create unit tests, and more. Remember that you can provide as much context as needed in the prompt to help Codex generate the code you need.
-
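-Because the schema context follows such a regular shape, it can be generated from metadata rather than typed by hand. A small Python sketch (the table dictionary is illustrative):
-
-```
-tables = {
-    "albums": ["AlbumId", "Title", "ArtistId"],
-    "tracks": ["TrackId", "Name", "AlbumId", "MediaTypeId", "GenreId",
-               "Composer", "Milliseconds", "Bytes", "UnitPrice"],
-}
-
-# Render each table in the "# Table <name>, columns = [...]" form used above.
-schema_context = "\n".join(
-    f"# Table {name}, columns = [{', '.join(cols)}]" for name, cols in tables.items()
-)
-
-task = "# Create a query for all albums with more than 10 tracks.\n\nSELECT"
-print(f"{schema_context}\n\n{task}")
-```
-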
-**Note:** Switch your 'Deployments' back to a 'text-davinci-002' or 'text-davinci-003' model for the rest of the scenarios.
-
----
-
-[Previous Section (Conversation)](./05_Conversation.md)
-
-[Next Section (Data Generation)](./07_Data_Generation.md)
\ No newline at end of file
diff --git a/labs/prompt_engineering/02_Sample_Scenarios/07_Data_Generation.md b/labs/prompt_engineering/02_Sample_Scenarios/07_Data_Generation.md
deleted file mode 100644
index db0b89e7..00000000
--- a/labs/prompt_engineering/02_Sample_Scenarios/07_Data_Generation.md
+++ /dev/null
@@ -1,160 +0,0 @@
-# Data Generation
-
-## Topics
-
- - [Overview of Data Generation](#overview-of-data-generation)
- - [Few-Shot Data Generation](#few-shot-data-generation)
- - [Data Imputation](#data-imputation)
----
-
-## Overview of Data Generation
-
-The broad category of models being used in this workshop is known as Generative Models, in our case [Generative Pre-trained Transformer 3](https://news.microsoft.com/source/features/ai/from-conversation-to-code-microsoft-introduces-its-first-product-features-powered-by-gpt-3/?culture=en-us&country=us). Generative models can generate new data based on a prompt, which makes them useful for producing training data for other models. For example, you can use a generative model to generate new text that can be used to train a text classification model. This helps when you don't have enough data, or when you need anonymous data that mimics real data to train a model. It can also be useful for imputing missing values in a dataset.
-
-> **Note:** The Max Length (tokens) variable slider has been increased to 1,200 tokens to allow for a longer exchange between the customer and the agent.
-
-*Prompt:*
-```
-You must create a 2,000 word long document representing an exchange between a customer service agent for the fictitious company Contoso Wireless and their customer. The sentiment of the customer must be {random_sentiment} and the topic of the conversation between the agent and customer should center around {random_topic}. The customer must be asking about the product {random_product}. The document should have at least 8 back and forth exchanges between the customer and the agent.
-```
-
-*Output:*
-```
-Customer: Hi there, I'm considering upgrading my Contoso Wireless cell phone service and wanted to know what my options for that are.
-
-Agent: Hello there! Welcome to Contoso Wireless. Upgrading your service is a great idea. We are sure you will be pleased with our upgraded plans. Can you tell me what device you currently have with us?
-
-Customer: Yes, of course. I have an iPhone 11.
-
-Agent: Okay, great. We have several upgrade plans available for an iPhone 11. Let me just run through your options with you.
-
-Customer: Sure, that would be great.
-
-Agent: Okay, so our first option is an upgrade to an unlimited data plan. We offer customers unlimited data, unlimited text messaging and unlimited nationwide calling.
-
-Customer: That sounds fantastic! What is the cost for that plan?
-
-Agent: The unlimited data plan for your device costs $79.99 a month.
-
-Customer: That's a great price! What other options do I have?
-
-Agent: We also offer a plan that includes unlimited talk and text, plus 2 GB of high speed data. It costs $59.99 per month.
-
-Customer: That's great too. What other options might be available to me?
-
-Agent: In addition to these two plans, we also have a mobile hotspot plan, which provides 5 GB of data for just $39.99 per month.
-
-Customer: That sounds like a good deal too. Can I get more data for any of these plans?
-
-Agent: Absolutely. We are always offering bonus data for customers who sign up for one of our plans. If you sign up for any of the plans I mentioned, you will get an extra 1 GB of data for the first three months.
-
-Customer: Wow, that's great. I think I'm going to go with the unlimited data plan. How can I go about doing that?
-
-Agent: Wonderful! It's easy to upgrade your plan. All you have to do is visit our website and choose the plan you are interested in. You'll be able to make the switch right away and start enjoying all the benefits.
-
-Customer: Perfect. Thank you so much for your help.
-
-Agent: You're welcome! We are always happy to help out our customers. Enjoy your new plan!
-```
-This is a longer exchange than normal, but it's a good example of how you can use a generative model to create data that can be used to train other models. We now have a reasonably long exchange between two individuals that is entirely anonymous. Take note that we essentially created a labeled dataset by adding the sentiment (the customer does appear happy in the exchange) and the topic of the conversation (upgrading their wireless service) to the prompt.
-
-Another important consideration is that we can programmatically feed the model prompts with different values, for instance a different product name and sentiment each time. This is useful when you want to generate a large amount of data that is labeled with different values. For example, you could change the product name being fed into the prompt with each iteration, thereby generating a dataset that is, other than our named products, completely anonymous, as sketched below.
-
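-A minimal Python sketch of that loop, assuming the same Azure client setup as the earlier examples (the label lists and a shortened template are illustrative):
-
-```
-import itertools
-import os
-import openai
-
-openai.api_type = "azure"
-openai.api_base = os.environ["OPENAI_API_BASE"]
-openai.api_version = "2022-12-01"
-openai.api_key = os.environ["OPENAI_API_KEY"]
-
-sentiments = ["happy", "frustrated", "neutral"]
-topics = ["upgrading their wireless service", "a billing dispute"]
-products = ["cell phone service", "mobile hotspot"]
-
-template = (
-    "You must create a document representing an exchange between a customer service "
-    "agent for the fictitious company Contoso Wireless and their customer. The sentiment "
-    "of the customer must be {random_sentiment} and the topic should center around "
-    "{random_topic}. The customer must be asking about the product {random_product}."
-)
-
-for sentiment, topic, product in itertools.product(sentiments, topics, products):
-    prompt = template.format(
-        random_sentiment=sentiment, random_topic=topic, random_product=product
-    )
-    response = openai.Completion.create(
-        engine="text-davinci-003", prompt=prompt, max_tokens=1200
-    )
-    # Each completion arrives pre-labeled with its sentiment/topic/product values.
-    print(sentiment, topic, product, response.choices[0].text[:80])
-```
-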
----
-## Few-Shot Data Generation
-
-The concepts from the [Introduction to Prompt Engineering](../01_Prompt_Introduction.md) and the [Advanced Concepts](../03_Advanced_Concepts.md) sections can be very informative for generating net-new data. First off, we should be as direct in our requirements as possible and provide examples of our desired output if feasible.
-
-*Prompt:*
-```
-Instructions: Create 10 new rows of data that match the format listed in the ---Example--- section.
-
----Example---
-| Product Name | Color | Weight | Perishable |
-| Crayons | Mixed | 6 oz | No |
-| Carrot | Orange | 12 oz | Yes |
-| Potato | Red | 5 lbs | Yes |
-| Chair | Black | 25 lbs | No |
-```
-
-*Output:*
-```
-| Watermelon | Green | 10 lbs | Yes |
-| Glass | Clear | 3 lbs | No |
-| Milk | White | 1 gal | Yes |
-| Pencil | Yellow | 8 oz | No |
-| Envelope | White | 1 oz | No |
-| Bowl | Blue | 4 lbs | No |
-| Book | Brown | 15 oz | No |
-| Shampoo | Yellow | 32 oz | No |
-| Pizza | Red | 2 lbs | Yes |
-```
-
-We now have 10 new rows of data mimicking our original dataset. This process can be repeated until we have a dataset large enough to train our model, fit into a data pipeline to test a process, or use in customer-facing examples where we cannot use live data.
-
----
-## Data Imputation
-
-We often have data sets that are missing values. With numerical data, we have simple techniques that are quick to implement and cover most use cases. For categorical data, however, that process can be far more cumbersome and problematic. Fortunately, we can use a generative model to fill in the missing values in our data with relative ease.
-
-For instance, we can simultaneously generate new rows of data and fill in missing values in our existing data. This is a great way to create a larger dataset that is more representative of the data we have. We will build on the prompt from the previous section and add a new requirement to fill in the missing values in our existing data.
-
-*Prompt:*
-```
-Instructions: Replace any missing or blank value, denoted as ###, in the existing rows of data in the ---Example--- section. Create 10 new rows of data that match the format listed in the ---Example--- section.
-
----Example---
-| Product Name | Color | Weight | Perishable |
-| Crayons | Mixed | 6 oz | No |
-| Carrot | Orange | 12 oz | Yes |
-| Potato | Red | 5 lbs | ### |
-| Chair | Black | 25 lbs | No |
-| Watermelon | Green | ### | Yes |
-| Glass | Clear | 3 lbs | No |
-| Milk | ### | 1 gal | Yes |
-| Pencil | Yellow | 8 oz | No |
-| Envelope | ### | 1 oz | No |
-| Bowl | Blue | ### | No |
-| Book | Brown | 15 oz | No |
-| Shampoo | Yellow | 32 oz | No |
-| Pizza | Red | 2 lbs | Yes |
-```
-
-*Output:*
-```
-| Product Name | Color | Weight | Perishable |
-| Crayons | Mixed | 6 oz | No |
-| Carrot | Orange | 12 oz | Yes |
-| Potato | Red | 5 lbs | Yes |
-| Chair | Black | 25 lbs | No |
-| Watermelon | Green | 8 lbs | Yes |
-| Glass | Clear | 3 lbs | No |
-| Milk | White | 1 gal | Yes |
-| Pencil | Yellow | 8 oz | No |
-| Envelope | White | 1 oz | No |
-| Bowl | Blue | 32 oz | No |
-| Book | Brown | 15 oz | No |
-| Shampoo | Yellow | 32 oz | No |
-| Pizza | Red | 2 lbs | Yes |
-| Cheese | White | 8 oz | Yes |
-| Fork | Silver | 3 oz | No |
-| Cake | White | 4 lbs | Yes |
-| Toothpaste | White | 32 oz | No |
-| Chocolate | Brown | 16 oz | Yes |
-| Jerky | Various | 8 oz | Yes |
-| Tomato Sauce | Red | 24 oz | No |
-| Steak | Red | 16 oz | Yes |
-| Coffee | Brown | 8 oz | No |
-| Chair | Various | 25 lbs | No |
-| Watch | Various | 1 oz | No |
-| Pancake Mix | Brown | 24 oz | No |
-| Orange | Orange | 12 oz | Yes |
-| Apple | Red | 6 oz | Yes |
-```
-Given sufficient examples and instructions, the model can fill in the missing values in our existing data and generate new rows of data that match the format of our original dataset.
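-
-If your table lives in code rather than in a hand-written prompt, you can build the imputation prompt programmatically. A small sketch, with hypothetical rows and `None` standing in for missing values:
-
-```python
-MISSING = "###"  # the placeholder used in the prompt above
-
-header = ["Product Name", "Color", "Weight", "Perishable"]
-table = [
-    ["Potato", "Red", "5 lbs", None],  # hypothetical rows
-    ["Watermelon", "Green", None, "Yes"],
-]
-
-lines = ["| " + " | ".join(header) + " |"]
-for row in table:
-    cells = [cell if cell is not None else MISSING for cell in row]
-    lines.append("| " + " | ".join(cells) + " |")
-
-prompt = (
-    "Instructions: Replace any missing or blank value, denoted as ###, "
-    "in the existing rows of data in the ---Example--- section. "
-    "Create 10 new rows of data that match the format listed in the "
-    "---Example--- section.\n\n---Example---\n" + "\n".join(lines)
-)
-print(prompt)
-```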
-
----
-
-[Previous Section (Code Generation)](./06_Code_Generation.md)
-
-[Next Section (Recommendations)](./08_Recommendations.md)
\ No newline at end of file
diff --git a/labs/prompt_engineering/02_Sample_Scenarios/08_Recommendations.md b/labs/prompt_engineering/02_Sample_Scenarios/08_Recommendations.md
deleted file mode 100644
index 03e2ffdd..00000000
--- a/labs/prompt_engineering/02_Sample_Scenarios/08_Recommendations.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Recommendations
-
-## Topics
-- [Overview of Recommendations](#overview-of-recommendations)
-
----
-## Overview of Recommendations
-Recommendations are something we are all familiar with: nearly every website we visit has some sort of recommendation engine. The goal of a recommendation engine is to provide users with a list of items they are likely to be interested in. This is a very broad definition, but it is a good starting point. How can we extract recommendations from a user's input?
-
-First, let's recall our earlier lessons around prompt engineering. We learned that we can use headings like `---Background---` and `---Preferences---` sections to present information about a user's background and preferences to the model, which can then make recommendations based on the supplied context. Feel free to take the supplied prompt and change characteristics in the `---Background---` and `---Preferences---` sections to see how the model's recommendations change.
-
-> **Note:** You may wish to change the `Max length (tokens)` parameter to a number higher than the default in order to see more of the model's recommendations and text in a single output.
-
-*Prompt:*
-```
-Given a person's background, labeled in the ---Background--- section, and preferences, labeled in the ---Preferences--- section, provide hourly recommendations for a 3-day itinerary in San Francisco. Show lodging options from Hotel Azure Tahoe, Hotel San Mateo and Hotel Foster City.
----Background---
-- I am a wealthy individual.
----Preferences---
-- I don't like seafood.
-- I like fine dining.
-- I don't like public transportation.
-```
-
-*Output:*
-```
-Day 1:
-- 9am: Check-in to Hotel Azure Tahoe.
-- 10am: Visit the San Francisco Museum of Modern Art.
-- 12pm: Stop for lunch at the R & G Lounge for some Chinese cuisine.
-- 2pm: Take a cable car ride around the city.
-- 4pm: Visit the San Francisco Botanical Garden.
-- 6pm: Enjoy dinner at a fine dining restaurant, such as Gary Danko.
-
-Day 2:
-- 9am: Visit the Exploratorium.
-- 11am: Stop for lunch at the Golden Gate Park.
-- 1pm: Visit the Palace of Fine Arts.
-- 3pm: Head to the Aquarium of the Bay.
-- 5pm: Enjoy dinner at a fine dining restaurant, such as Scoma's.
-- 7pm: Check-in to Hotel San Mateo.
-
-Day 3:
-- 9am: Visit the Golden Gate Bridge.
-- 11am: Stop for lunch at the Ferry Building Marketplace.
-- 1pm: Visit the California Academy of Sciences.
-- 3pm: Take a boat tour around the bay.
-- 5pm: Enjoy dinner at a fine dining restaurant, such as The Rotunda.
-- 7pm: Check-in to Hotel Foster City.
-```
-In the above example, we can see that the model has recommended a 3-day itinerary, with events noted by hour of the day, for a wealthy individual who doesn't like seafood or public transportation, but does prefer fine dining. The model has also recommended lodging options from Hotel Azure Tahoe, Hotel San Mateo, and Hotel Foster City, as requested.
-
-Another possible scenario for a recommendation prompt is suggesting new items based on a user's history. For example, if a user is looking to purchase a new car, we can provide the model with the cars the user has previously owned and is considering, and the model can then recommend the one the user is most likely to be interested in.
-
-*Prompt:*
-```
-Provide a recommended car from the ---Possible Cars--- section based on the ---Previously Owned--- and the ---Customer Attributes--- section.
----Customer Attributes---
-* Two adults as primary drivers
-* Has two kids
-* Owns an SUV
-* Has a dog
-* Takes road trips
----Previously Owned---
-* Subaru Outback
-* Nissan Altima
-* Honda Accord
-* Chrysler 300
----Possible Cars---
-* Chevrolet Corvette
-* Tesla Model X
-* Toyota Sienna
-* Jeep Grand Cherokee
-```
-
-*Output:*
-```
-Based on the customer attributes and previously owned vehicles, a recommended car from the Possible Cars section is the Toyota Sienna. It is an 8-passenger minivan with plenty of room for adults, children, a dog, and luggage for road trips. It has a good safety rating and plenty of features for a comfortable ride.
-```
-Given clearly defined parameters, the model is able to provide not only a recommendation, but also a justification for the recommendation.
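-
-Since these prompts are just delimited sections, they are easy to assemble from structured data. Here is a hedged sketch of one way to do so; the helper function and attribute values are illustrative, not part of this workshop:
-
-```python
-def build_recommendation_prompt(instruction, **sections):
-    """Assemble an instruction plus ---Title--- sections of bulleted items."""
-    parts = [instruction]
-    for title, items in sections.items():
-        parts.append(f"---{title.replace('_', ' ')}---")
-        parts.extend(f"* {item}" for item in items)
-    return "\n".join(parts)
-
-prompt = build_recommendation_prompt(
-    "Provide a recommended car from the ---Possible Cars--- section based on "
-    "the ---Previously Owned--- and the ---Customer Attributes--- section.",
-    Customer_Attributes=["Two adults as primary drivers", "Has two kids"],
-    Previously_Owned=["Subaru Outback", "Honda Accord"],
-    Possible_Cars=["Tesla Model X", "Toyota Sienna", "Jeep Grand Cherokee"],
-)
-print(prompt)
-```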
-
----
-[Previous Section (Data Generation)](./07_Data_Generation.md)
-
-[Next Section (Advanced Concepts)](../03_Advanced_Concepts.md)
\ No newline at end of file
diff --git a/labs/prompt_engineering/03_Advanced_Concepts.md b/labs/prompt_engineering/03_Advanced_Concepts.md
deleted file mode 100644
index c438cba6..00000000
--- a/labs/prompt_engineering/03_Advanced_Concepts.md
+++ /dev/null
@@ -1,211 +0,0 @@
-# Advanced Concepts
-
-## Topics
-
- - [Introduction](#introduction)
- - [Zero-Shot Prompts](#zero-shot-prompts)
- - [One-Shot Prompts](#one-shot-prompts)
- - [Few-Shot Prompts](#few-shot-prompts)
-
----
-
-## Introduction
-
-At this point you have experienced the power and flexibility of prompts. Tuning prompts to get the desired results is the idea behind prompt engineering.
-
-We will now cover some more advanced topics to tune our outputs without introducing fine-tuning of our GPT models.
-
-Let's take a simple classification example:
-
-*Prompt:*
-```
-Classify the sentiment of the text below.
-
-Text: I think this movie was terrible. What a waste of time.
-```
-
-*Output:*
-```
-Negative
-```
-
-The output seems to be correct, but we could improve it by providing more information to the model if we wanted a more granular classification. Let's do this via a Zero-Shot prompt.
-
----
-
-## Zero-Shot Prompts
-
-The GPT LLMs are trained on such a large amount of data that they are capable of understanding complex instructions that lead to the desired output in most cases. This is called a 'Zero-Shot' prompt.
-
-We can refine the earlier example by being more descriptive in our instructions.
-
-*Prompt:*
-```
-Classify the sentiment of the text below into very negative, negative, neutral, positive, very positive.
-
-Text: I think this movie was terrible. What a waste of time.
-```
-
-*Output:*
-```
-Very Negative
-```
-
-This is called Zero-Shot: a precise instruction leads to the desired output without any examples.
-
----
-
-## One-Shot Prompts
-
-Sometimes it may be easier to provide an example for the model to learn from. This is called a 'One-Shot' prompt.
-
-First, let's do a Zero-Shot prompt.
-
-*Prompt:*
-```
-Tell me in which city a university is located.
-
-University: UCLA
-```
-
-*Output:*
-```
-City: Los Angeles, California
-```
-
-Let's say you wanted to have a specific output for this prompt. You could provide an example to the model to learn from.
-
-Here's a One-Shot Prompt that leads to the same output.
-
-*Prompt:*
-```
-Tell me in which city a university is located.
-
-University: UCLA
-City: Los Angeles, CA, USA
-
-University: MIT
-```
-
-*Output:*
-```
-City: Cambridge, MA, USA
-```
-
-Note that you could have used a Zero-Shot prompt for this example as well, but One-Shot prompts are more flexible and can be used to shape the output to your needs.
-
-Here's a Zero-Shot Prompt equivalent.
-
-*Prompt:*
-```
-Tell me in which city a university is located. Provide the city name, state code and country, comma separated as one line.
-
-University: UCLA
-```
-
-*Output:*
-```
-City: Los Angeles, CA, USA
-```
-
----
-
-## Few-Shot Prompts
-
-Few-Shot prompts enable you to provide multiple examples for the model to learn from. This is useful when you want to shape the output for more complex scenarios where the output may vary based on the input. It may also be a simpler way to define a task than providing detailed natural language instructions of what you expect.
-
-Here's an entity-extraction example that is well suited to Few-Shot prompts.
-
-Let's try it first with a Zero-Shot prompt.
-
-*Prompt:*
-```
-Generate a JSON document which provides Name, position and company from the text below.
-
-Text: Fred is a serial entrepreneur. Co-founder and CEO of Platform.sh, he previously co-founded Commerce Guys, a leading Drupal ecommerce provider. His mission is to guarantee that as we continue on an ambitious journey to profoundly transform how cloud computing is used and perceived, we keep our feet well on the ground continuing the rapid growth we have enjoyed up until now.
-```
-
-*Output:*
-```
-{
- "Name": "Fred",
- "Position": "Co-founder and CEO",
- "Company": "Platform.sh, Commerce Guys"
-}
-```
-
-Not exactly what we expect (only 'Platform.sh' should come back in 'Company'), and it may be difficult to express that in a Zero-Shot prompt.
-
-Let's try providing examples instead, starting with a single one (One-Shot). Note that we're going to drop the instructions and just provide the desired output.
-
-*Prompt:*
-```
-Text: Fred is a serial entrepreneur. Co-founder and CEO of Platform.sh, he previously co-founded Commerce Guys, a leading Drupal ecommerce provider. His mission is to guarantee that as we continue on an ambitious journey to profoundly transform how cloud computing is used and perceived, we keep our feet well on the ground continuing the rapid growth we have enjoyed up until now.
-
-JSON:
-{
- "Name": "Fred",
- "Position": "Co-founder and CEO",
- "Company": "Platform.sh"
-}
-
-Text: Microsoft (the word being a portmanteau of "microcomputer software") was founded by Bill Gates on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. Steve Ballmer replaced Gates as CEO in 2000, and later envisioned a "devices and services" strategy.
-
-JSON:
-```
-
-*Output:*
-```
-{
- "Name": "Microsoft",
- "Founder": "Bill Gates",
- "Founded": "April 4, 1975",
- "CEO": "Steve Ballmer",
- "Strategy": "Devices and Services"
-}
-```
-
-Note that the output is still not what we want; a single example is not enough for the model to understand whether the goal is to extract all key entities or only certain ones.
-
-A Few-Shot prompt will clarify this.
-
-*Prompt:*
-```
-Text: Fred is a serial entrepreneur. Co-founder and CEO of Platform.sh, he previously co-founded Commerce Guys, a leading Drupal ecommerce provider. His mission is to guarantee that as we continue on an ambitious journey to profoundly transform how cloud computing is used and perceived, we keep our feet well on the ground continuing the rapid growth we have enjoyed up until now.
-
-JSON:
-{
- "Name": "Fred",
- "Position": "Co-founder and CEO",
- "Company": "Platform.sh"
-}
-
-Text: Microsoft (the word being a portmanteau of "microcomputer software") was founded by Bill Gates on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. Steve Ballmer replaced Gates as CEO in 2000, and later envisioned a "devices and services" strategy.
-
-JSON:
-{
- "Name": "Bill Gates",
- "Position": "Co-founder and CEO",
- "Company": "Microsoft"
-}
-
-Text: Franck Riboud was born on 7 November 1955 in Lyon. He is the son of Antoine Riboud, the previous CEO, who transformed the former European glassmaker BSN Group into a leading player in the food industry. He is the CEO at Danone.
-
-JSON:
-```
-
-*Output:*
-```
-{
- "Name": "Franck Riboud",
- "Position": "CEO",
- "Company": "Danone"
-}
-```
-Now we can see that the model clearly understands that we want to extract only three entities from the text and nothing else.
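-
-Few-Shot prompts like this one are straightforward to assemble and parse in code. A minimal sketch follows; the abbreviated texts and the placeholder completion are illustrative only:
-
-```python
-import json
-
-# Example (text, entities) pairs; the texts are abbreviated placeholders.
-examples = [
-    ("Fred is a serial entrepreneur. Co-founder and CEO of Platform.sh...",
-     {"Name": "Fred", "Position": "Co-founder and CEO", "Company": "Platform.sh"}),
-    ("Microsoft was founded by Bill Gates on April 4, 1975...",
-     {"Name": "Bill Gates", "Position": "Co-founder and CEO", "Company": "Microsoft"}),
-]
-
-def build_prompt(new_text):
-    parts = [f"Text: {text}\n\nJSON:\n{json.dumps(entities, indent=1)}"
-             for text, entities in examples]
-    parts.append(f"Text: {new_text}\n\nJSON:")
-    return "\n\n".join(parts)
-
-prompt = build_prompt("Franck Riboud was born on 7 November 1955 in Lyon...")
-
-# The completion would come from the model; a placeholder is shown here.
-completion_text = '{"Name": "Franck Riboud", "Position": "CEO", "Company": "Danone"}'
-entities = json.loads(completion_text)
-print(entities["Company"])  # Danone
-```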
-
----
-
-[Previous Section (Sample Scenarios)](./02_Sample_Scenarios/01_Text_Summarization.md)
-
-[Next Section (Fine Tuning)](./04_Fine_Tuning.md)
\ No newline at end of file
diff --git a/labs/prompt_engineering/04_Fine_Tuning.md b/labs/prompt_engineering/04_Fine_Tuning.md
deleted file mode 100644
index cebbf209..00000000
--- a/labs/prompt_engineering/04_Fine_Tuning.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Fine Tuning
-
-## Topics
-
- - [What is Fine Tuning?](#what-is-fine-tuning)
- - [When would you consider Fine Tuning vs Prompt Engineering?](#when-would-you-consider-fine-tuning-vs-prompt-engineering)
- - [Operation Cost Considerations](#operation-cost-considerations)
-
-
----
-## What is Fine Tuning?
-
-Fine-tuning is a process of customizing an existing AI model for a specific task or domain by using additional data. OpenAI offers fine-tuning for its language models such as GPT-3, which can generate natural language texts for various purposes.
-
-Fine-tuning allows users to create customized models that can produce more accurate and relevant outputs than the general models.
-
-To fine-tune an OpenAI model, users need to prepare their own training and validation data, select a base model, and use the OpenAI CLI or Studio to start the fine-tuning job.
-
-Fine-tuning can improve the performance and reduce the error rates of OpenAI models significantly.
-
----
-## Fine Tuning Training Data
-
-Training data for fine-tuning OpenAI models are pairs of input prompts and desired outputs that reflect the specific task or domain you want to customize the model for. For example, if you want to fine-tune a model for generating product reviews, your training data could look like this:
-
-```
-{"prompt": "Review: I bought this laptop for my online classes and it works great.", "completion": "Rating: 5 stars"}
-{"prompt": "Review: The battery life is terrible and the screen is too small.", "completion": "Rating: 2 stars"}
-{"prompt": "Review: This is a scam. The product never arrived and the seller did not respond.", "completion": "Rating: 1 star"}
-```
-
-You can use the OpenAI CLI or Studio to prepare, validate, and format your training data into a JSONL file that can be used for fine-tuning.
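-
-As a small sketch of that preparation step (the file name and examples below are placeholders), you could write the pairs to a JSONL file yourself and then let the CLI validate it:
-
-```python
-import json
-
-examples = [
-    {"prompt": "Review: I bought this laptop for my online classes and it works great.",
-     "completion": "Rating: 5 stars"},
-    {"prompt": "Review: The battery life is terrible and the screen is too small.",
-     "completion": "Rating: 2 stars"},
-]
-
-# One JSON object per line -- the JSONL format fine-tuning expects.
-with open("training_data.jsonl", "w", encoding="utf-8") as f:
-    for example in examples:
-        f.write(json.dumps(example) + "\n")
-
-# The classic OpenAI CLI can then validate and reformat the file, e.g.:
-#   openai tools fine_tunes.prepare_data -f training_data.jsonl
-```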
-
-**IMPORTANT NOTE**:
-It is important to note that, to achieve better results than prompt engineering, you will need a large, high-quality dataset that is relevant to your task or domain, usually at least a few hundred high-quality examples.
-
----
-## When would you consider Fine Tuning vs Prompt Engineering?
-
-Fine-tuning is a powerful tool that can be used to customize OpenAI models for specific tasks or domains. However, it is not always necessary to fine-tune a model to get the desired results.
-
-Fine-tuning and prompt engineering are two methods of conditioning language models to perform specific tasks or domains.
-
-Fine-tuning involves retraining an existing model on new data, while prompt engineering involves designing and testing input instructions that elicit the desired output from a model.
-
-### Fine Tuning
-
-You might consider fine-tuning when you have a large and high-quality dataset that is relevant to your task or domain, and you want to create a customized model that can produce more accurate and consistent outputs than the general model.
-
-### Prompt Engineering
-
-You might consider prompt engineering when you have a limited or no dataset, and you want to leverage the existing knowledge and capabilities of a general model by asking the right questions or providing the right context.
-
-**IMPORTANT NOTE**: Both methods require some trial and error, but fine-tuning usually takes more time and resources than prompt engineering, and it is not always necessary to fine-tune a model to get the desired results. It is therefore preferable to start with prompt engineering and only consider fine-tuning if you are unable to get the desired results.
-
----
-
-## Operation Cost Considerations
-
-Prompt engineering can be less cost effective if you need to provide a large number of instructions to accomplish something similar to what you would get with a fine-tuned model, since you consume those tokens with every request sent.
-
-Hosting a fine-tuned model also has a cost, but at medium to high request volumes that cost becomes comparatively small, so operational cost efficiency can be a driver for fine-tuning.
-
----
-## References
-
-[OpenAI Fine Tuning](https://platform.openai.com/docs/guides/fine-tuning)
-
-[Fine Tuning in the Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/fine-tuning?pivots=programming-language-studio)
-
-
----
-
-[Previous Section (Advanced Concepts)](./03_Advanced_Concepts.md)
\ No newline at end of file
diff --git a/labs/prompt_engineering/05_ChatCompletions_Introduction.md b/labs/prompt_engineering/05_ChatCompletions_Introduction.md
deleted file mode 100644
index 2d89e077..00000000
--- a/labs/prompt_engineering/05_ChatCompletions_Introduction.md
+++ /dev/null
@@ -1,76 +0,0 @@
-
-# Table of Contents
-
-## 1. [Introduction](#introduction)
-
-## 2. [Overview of the Chat Completion API](#overview-of-the-chat-completion-api)
-
-## 3. [Example ChatCompletion.create() Calls](#example-chatcompletioncreate-calls)
-
----
-
-## Introduction
-
----
-
-The ChatGPT and GPT-4 models are optimized for conversational interfaces and work differently than the older GPT-3 models. They are conversation-in and message-out, and require input formatted in a specific chat-like transcript format. Azure OpenAI provides two different options for interacting with these models: Chat Completion API and Completion API with Chat Markup Language (ChatML).
-
-The Chat Completion API is the preferred method for accessing these models, while ChatML provides lower-level access but requires additional input validation and only supports ChatGPT models. It's important to use the techniques described here to get the best results from the new models.
-
-This guide covers the Chat Completion Python API, including conversations, roles (system, assistant, user), and examples of different usage scenarios.
-
----
-
-## Overview of the Chat Completion API
-
-> **Note:** The following parameters aren't available with the new ChatGPT and GPT-4 models: **logprobs**, **best_of**, and **echo**. If you set any of these parameters, you'll get an error. gpt-35-turbo is equivalent to the gpt-3.5-turbo model from OpenAI.
-
-### ChatCompletion.create()
-
-OpenAI trained the ChatGPT and GPT-4 models to accept input formatted as a conversation. The messages parameter takes an array of dictionaries with a conversation organized by role. The three types of roles are:
-
-* system
-* assistant
-* user
-
-A sample input containing a simple system message, a one-shot example of a user and assistant interacting, and the final "actual" user-supplied prompt is shown below:
-
-```json
-[
-    {"role": "system", "content": "Provide some context and/or instructions to the model."},
-    {"role": "user", "content": "Example question goes here."},
-    {"role": "assistant", "content": "Example answer goes here."},
-    {"role": "user", "content": "First question/message for the model to actually respond to."}
-]
-```
-
-Let's dive deeper into the three possible role types: system, assistant, and user.
-
-### **System Role**
-
-The system role, also known as the system message, is included at the beginning of the array. This message provides the initial instructions to the model. You can provide various information in the system role including:
-
-* A brief description of the assistant
-* Personality traits of the assistant
-* Instructions or rules you would like the assistant to follow
-* Data or information needed for the model, such as relevant questions from an FAQ
-
-You can customize the system role for your use case or just include basic instructions. The system role/message is optional, but it's recommended to at least include a basic one to get the best results.
-
-> **Note:** The system role message is counted in the sum of the tokens and needs to be accounted for accordingly.
-
-### **Assistant Role**
-
-The assistant role contains the model's (or your assistant's) responses. You can omit this role in an initial ChatCompletion.create() call if desired, though it is required if you are going to pass a one-shot or few-shot example through the messages parameter.
-
-### **User Role**
-
-The user role is the message that the user sends to the assistant. This is the message that the model will respond to. The user role is required for the model to respond.
-
-> **Note:** To trigger a response from the model, you should end with a user message indicating that it's the assistant's turn to respond.
-
----
-
-## **Example ChatCompletion.create() Calls**
-
-For a more comprehensive overview of the ChatCompletions.create() method, please see the [ChatCompletions.ipynb notebook](https://github.com/microsoft/OpenAIWorkshop/blob/main/scenarios/powerapp_and_python/python/ChatCompletions.ipynb) within the [power_app_and_python](https://github.com/microsoft/OpenAIWorkshop/tree/main/scenarios/powerapp_and_python) scenario.
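-
-In the meantime, here is a minimal sketch of a call mirroring the roles above, using the v0.28-style `openai` package against Azure OpenAI; the endpoint, key, and deployment name are placeholders:
-
-```python
-import openai
-
-# Placeholder Azure OpenAI configuration -- substitute your own values.
-openai.api_type = "azure"
-openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
-openai.api_version = "2023-05-15"
-openai.api_key = "YOUR-API-KEY"
-
-response = openai.ChatCompletion.create(
-    engine="gpt-35-turbo",  # your deployment name may differ
-    messages=[
-        {"role": "system", "content": "You are a concise, helpful assistant."},
-        {"role": "user", "content": "Example question goes here."},
-        {"role": "assistant", "content": "Example answer goes here."},
-        {"role": "user", "content": "What is the Chat Completion API?"},
-    ],
-)
-
-print(response["choices"][0]["message"]["content"])
-```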
diff --git a/labs/prompt_engineering/98_Hyperparameters_Overview.md b/labs/prompt_engineering/98_Hyperparameters_Overview.md
deleted file mode 100644
index f9ab54d0..00000000
--- a/labs/prompt_engineering/98_Hyperparameters_Overview.md
+++ /dev/null
@@ -1,59 +0,0 @@
-# Basic Overview of Azure OpenAI Service Hyperparameters
-## Quick Note on adjusting Hyperparameters
-
-When working with prompts, you will be interacting with the LLM via an API or directly. You can configure a few parameters to get different results for your prompts.
-
-**Temperature** - In short, the lower the temperature, the more deterministic the results, in the sense that the highest-probability next token is always picked. Increasing temperature introduces more randomness, encouraging more diverse or creative outputs; we are essentially increasing the weights of the other possible tokens. In terms of application, we might want to use a lower temperature for something like fact-based QA to encourage factual and concise responses. For poem generation or other creative tasks, it might be beneficial to increase temperature.
-
-**Top_p** - Similarly, with top_p, a sampling technique called nucleus sampling, you can control how deterministic the model is when generating a response. If you are looking for exact and factual answers, keep this low. If you are looking for more diverse responses, increase it to a higher value.
-
-The general recommendation is to alter one, not both.
-
-### text-davinci-003 model
-
-**temperature**
-```
-Controls randomness: Lowering results in less random completions.
-As the temperature approaches zero, the model will become deterministic and repetitive.
-```
-
-**max_tokens**
-```
-Set a limit on the number of tokens to generate in a response.
-The system supports a maximum of 2048 tokens shared between a given prompt and response completion.
-(One token is roughly 4 characters for typical English text.)
-```
-
-**top_p**
-```
-Control which tokens the model will consider when generating a response via nucleus sampling.
-Setting this to 0.9 will consider the top 90% most likely of all possible tokens.
-This will avoid using tokens that are clearly incorrect while still maintaining variety
-when the model has low confidence in the highest-scoring tokens.
-```
-
-**frequency_penalty**
-```
-Reduce the chance of repeating a token proportionally based on how often it has appeared in the text so far.
-This decreases the likelihood of repeating the exact same text in a response.
-```
-
-**presence_penalty**
-```
-Reduce the chance of repeating any token that has appeared in the text at all so far.
-This increases the likelihood of introducing new topics in a response.
-```
-
-**best_of**
-```
-Generate multiple responses, and display only the one with the best total probability across all its tokens.
-The unused candidates will still incur usage costs, so use this parameter carefully and make sure to set the
-parameters for max response length and ending triggers as well. Note that streaming will only work when this is set to 1.
-```
-
-**stop**
-```
-Make responses stop at a desired point, such as the end of a sentence or list.
-Specify up to four sequences where the model will stop generating further tokens
-in a response. The returned text will not contain the stop sequence.
-```
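-
-To see how these fit together, here is a hedged sketch of a single completion call that sets several of the parameters above. It assumes the Azure OpenAI connection settings (api_type, api_base, api_version, api_key) are already configured; the deployment name and values are illustrative, not recommendations:
-
-```python
-import openai  # assumes openai.api_type/api_base/api_version/api_key are set
-
-response = openai.Completion.create(
-    engine="text-davinci-003",  # your deployment name may differ
-    prompt="Write a short product description for a travel mug.",
-    temperature=0.7,        # randomness of token selection
-    top_p=1,                # nucleus sampling cutoff (alter one, not both)
-    max_tokens=150,         # cap on generated tokens
-    frequency_penalty=0.5,  # discourage repeating frequent tokens
-    presence_penalty=0.0,   # weight against any already-seen token
-    best_of=1,              # streaming only works when this is 1
-    stop=["\n\n"],          # stop at the first blank line
-)
-print(response["choices"][0]["text"])
-```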
diff --git a/labs/prompt_engineering/99_Glossary.md b/labs/prompt_engineering/99_Glossary.md
deleted file mode 100644
index eeed55aa..00000000
--- a/labs/prompt_engineering/99_Glossary.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# Glossary
-
-*language model (LM)*
-```
-A probability distribution over sequences of words.
-```
-
-*large language model (LLM)*
-```
-A large language model (LLM) is a type of machine learning model that can perform a variety of
-natural language processing (NLP) tasks, including generating and classifying text, answering
-questions in a conversational manner and translating text from one language to another.
-
-The label "large" refers to the number of values (parameters) the model can change autonomously as it learns.
-```
-
-*natural language processing (NLP)*
-```
-Natural language processing (NLP) is the ability of a computer program to understand spoken and written human language.
-
-NLP programming automates the translation process between computers and humans by manipulating unstructured data (words)
-in the context of a specific task (conversation). An important goal of NLP is to help the computer to comprehend
-the intent -- as well as the meaning -- of a human being's written or spoken words.
-```
-
-*prompt*
-```
-A piece of text that is used to initiate or provide context for the generation of output.
-This could be an input sentence, question, or topic to generate a response from the language model.
-```
-
-*prompt engineering*
-```
-The discipline concerned with developing and optimizing prompts to efficiently use language models (LMs)
-and large language models (LLMs).
-```
-
-*token(s)*
-```
-A token is an abstraction defined by external rules to identify the unit of processing in an NLP task.
-A token may be a word, part of a word, or just characters like punctuation (e.g. 'abc ', 'abc.', ' st', etc.).
-```
-
diff --git a/labs/prompt_engineering/README.md b/labs/prompt_engineering/README.md
deleted file mode 100644
index e19ee76d..00000000
--- a/labs/prompt_engineering/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Prompt Engineering
-## [Introduction](./01_Prompt_Introduction.md)
-## Sample Scenarios
-* [Text Summarization](./02_Sample_Scenarios/01_Text_Summarization.md)
-* [Information Extraction](./02_Sample_Scenarios/02_Information_Extraction.md)
-* [Question Answering](./02_Sample_Scenarios/03_Question_Answering.md)
-* [Text Classification](./02_Sample_Scenarios/04_Text_Classification.md)
-* [Conversation](./02_Sample_Scenarios/05_Conversation.md)
-* [Code Generation](./02_Sample_Scenarios/06_Code_Generation.md)
-* [Data Generation](./02_Sample_Scenarios/07_Data_Generation.md)
-* [Recommendations](./02_Sample_Scenarios/08_Recommendations.md)
-## [Advanced Concepts](./03_Advanced_Concepts.md)
-## [Fine Tuning](./04_Fine_Tuning.md)
-## [Hyperparameters Overview](./98_Hyperparameters_Overview.md)
-## [Glossary](./99_Glossary.md)