Refactoring & Documenting the project #13

Merged 3 commits on Aug 17, 2024
67 changes: 67 additions & 0 deletions .github/ISSUE_TEMPLATE/BUG-REPORT.yml
@@ -0,0 +1,67 @@
name: "🐛 Bug Report"
description: Create a new ticket for a bug in the MetaCall Test Center.
title: "🐛 [BUG] - <title>"
labels: ["bug"]
body:
- type: textarea
id: description
attributes:
label: "Description"
description: Provide a clear and detailed description of the issue you encountered.
placeholder: Briefly describe the issue...
validations:
required: true
- type: textarea
id: reprod
attributes:
label: "Reproduction Steps"
description: Provide a step-by-step description to reproduce the issue.
value: |
1. Clone the repository: `git clone <REPO_URL>`
2. Navigate to the project directory: `cd <PROJECT_DIRECTORY>`
3. Run the following command: `python3 ./testing.py -f <test-suite-file> -V -e <environment>`
4. Observe the error.
render: bash
validations:
required: true
- type: textarea
id: screenshot
attributes:
label: "Screenshots"
description: Attach screenshots that help illustrate the problem, if applicable.
value: |
![DESCRIPTION](LINK.png)
render: bash
validations:
required: false
- type: textarea
id: logs
attributes:
label: "Logs"
description: Paste any relevant log output here. The logs will be automatically formatted as code.
render: bash
validations:
required: false
- type: dropdown
id: environments
attributes:
label: "Environments"
description: Which environments are impacted by the issue?
multiple: true
options:
- FaaS
- CLI
validations:
required: true
- type: dropdown
id: os
attributes:
label: "Operating Systems"
description: Which operating systems are affected by the issue?
multiple: true
options:
- Windows
- Linux
- macOS
validations:
required: true
85 changes: 85 additions & 0 deletions .github/ISSUE_TEMPLATE/FEATURE-REQUEST.yml
@@ -0,0 +1,85 @@
name: "💡 Feature Request"
description: Suggest a new feature or enhancement for the MetaCall Test Center.
title: "💡 [REQUEST] - <title>"
labels: ["enhancement", "discussion"]
body:
- type: textarea
id: summary
attributes:
label: "Summary"
description: Provide a brief overview of the proposed feature or enhancement.
placeholder: Describe your feature request in a few lines...
validations:
required: true
- type: textarea
id: motivation
attributes:
label: "Motivation"
description: Explain the problem that this feature would solve and why it is important.
placeholder: Describe the problem your feature request will address...
validations:
required: true
- type: textarea
id: proposed_solution
attributes:
label: "Proposed Solution"
description: Describe how you propose to implement this feature.
placeholder: Outline your solution in detail...
validations:
required: true
- type: textarea
id: basic_example
attributes:
label: "Basic Example"
description: Provide a basic example or use case that demonstrates how the feature would work.
placeholder: Provide code snippets or examples...
validations:
required: true
- type: textarea
id: drawbacks
attributes:
label: "Drawbacks"
description: What are the potential drawbacks or challenges associated with this feature?
placeholder: Identify possible negative impacts or challenges...
validations:
required: true
- type: textarea
id: unresolved_questions
attributes:
label: "Unresolved Questions"
description: List any unresolved questions or concerns related to this feature.
placeholder: Mention any uncertainties or areas needing further discussion...
validations:
required: false
- type: textarea
id: alternatives
attributes:
label: "Alternatives Considered"
description: Describe any alternative approaches or solutions you considered.
placeholder: Discuss other ways this problem could be addressed...
validations:
required: false
- type: input
id: start_date
attributes:
label: "Proposed Start Date"
description: When do you plan to start working on this feature?
placeholder: "MM/DD/YYYY"
validations:
required: false
- type: textarea
id: reference_issues
attributes:
label: "Related Issues"
description: List any related issues or discussions.
placeholder: Reference existing issues or discussions (#Issue IDs)...
validations:
required: false
- type: textarea
id: implementation_pr
attributes:
label: "Implementation PR"
description: Link to the pull request where this feature is being implemented, if applicable.
placeholder: "#Pull Request ID"
validations:
required: false
58 changes: 58 additions & 0 deletions .github/ISSUE_TEMPLATE/TEST_CASE_SUBMISSION.yml
@@ -0,0 +1,58 @@
name: "🧪 New Test Case Submission"
description: Use this template to submit a new test case for a project.
title: "🧪 [TEST] - <Project Name>"
labels: ["test", "enhancement"]
body:
- type: input
id: project_name
attributes:
label: "Project Name"
description: "Enter the name of the project for which the test case is being submitted."
placeholder: "e.g., time-app-web"
validations:
required: true

- type: input
id: project_url
attributes:
label: "Project Repository URL"
description: "Provide the URL of the repository where the project is hosted."
placeholder: "https://github.com/USERNAME/REPO-NAME"
validations:
required: true

- type: textarea
id: description
attributes:
label: "Description"
description: "Provide a brief description of the test case, including what it aims to validate."
placeholder: "Short and explicit description of your test case..."
validations:
required: true

- type: textarea
id: test_cases
attributes:
label: "Test Cases"
description: |
Provide the test cases in the correct format as shown below. You can add multiple test cases by following the same structure.
placeholder: |
```yaml
code-files:
- path: <path-to-code-file>
test-cases:
- name: <test-case-name>
function-call: <function-call>
expected-pattern: '<expected-output-pattern>'
```
validations:
required: true

- type: textarea
id: additional_notes
attributes:
label: "Additional Notes"
description: "Include any additional information or context that might be useful for reviewing or understanding the test case."
placeholder: "Any additional information or context..."
validations:
required: false
31 changes: 0 additions & 31 deletions NOTES.md

This file was deleted.

128 changes: 110 additions & 18 deletions README.md
@@ -1,11 +1,90 @@
# MetaCall Test Center

## Overview

MetaCall Test Center is a testing framework for MetaCall projects and examples. It provides a structured, efficient way to define, run, and manage test cases across different environments. The primary script, `testing.py`, integrates into CI/CD pipelines and also supports local testing. The project follows SOLID principles and established design patterns to keep the codebase maintainable, scalable, and easy to contribute to.

## Project Structure

The project is organized as follows:

```bash
.
├── README.md
├── LICENSE
├── requirements.txt
├── testing
│ ├── __init__.py
│ ├── deploy_manager.py
│ ├── logger.py
│ ├── repo_manager.py
│ ├── runner
│ │ ├── cli_interface.py
│ │ ├── faas_interface.py
│ │ ├── interface_factory.py
│ │ └── runner_interface.py
│ ├── test_runner.py
│ └── test_suites_extractor.py
├── testing.py
└── test-suites
└── test<example name>.yaml
```

### Components

- **`testing.py`**: The main script that orchestrates the testing process by interacting with various components.

- **`deploy_manager.py`**: Manages the deployment of MetaCall projects locally or remotely, ensuring the necessary environments are set up for testing.

- **`logger.py`**: Provides a centralized logging mechanism with configurable verbosity levels, helping to debug and monitor the testing process.

- **`repo_manager.py`**: Handles cloning and managing the code repositories required for testing, ensuring that the latest code is always used.

- **`test_runner.py`**: The core component responsible for executing test cases across different environments by leveraging the strategy pattern for flexibility.

- **`test_suites_extractor.py`**: Extracts test cases from YAML files, ensuring that the test cases are correctly parsed and ready for execution.

- **`runner`**: Contains specific implementations for running tests in different environments:
- **`runner_interface.py`**: Defines the interface for all runner implementations, adhering to the Dependency Inversion principle.
- **`cli_interface.py`**: Implements the interface for running tests in a CLI environment.
- **`faas_interface.py`**: Implements the interface for running tests in a Function-as-a-Service (FaaS) environment.
- **`interface_factory.py`**: A factory class that creates instances of the appropriate runner interface based on the environment.
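
How the runner abstraction, its environment-specific implementations, and the factory could fit together is sketched below. The class and method names are inferred from the component descriptions above, not taken from the actual source, and the run bodies are stand-ins for real CLI/FaaS calls:

```python
from abc import ABC, abstractmethod


class RunnerInterface(ABC):
    """Abstract base that every runner implements (Dependency Inversion)."""

    @abstractmethod
    def run_test(self, function_call: str) -> str:
        """Execute a function call in this environment and return its raw output."""


class CLIInterface(RunnerInterface):
    def run_test(self, function_call: str) -> str:
        # A real implementation would shell out to the metacall CLI here.
        return f"cli output for {function_call}"


class FaaSInterface(RunnerInterface):
    def run_test(self, function_call: str) -> str:
        # A real implementation would call the deployed FaaS endpoint here.
        return f"faas output for {function_call}"


class InterfaceFactory:
    """Maps an environment name to the matching runner instance."""

    _registry = {"cli": CLIInterface, "faas": FaaSInterface}

    @classmethod
    def create(cls, env: str) -> RunnerInterface:
        try:
            return cls._registry[env]()
        except KeyError:
            raise ValueError(f"Unknown environment: {env}")
```

Because `TestRunner` only depends on `RunnerInterface`, a new environment can be supported by registering one more class in the factory, without touching existing code.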

## How It Works

1. **Test Suite Definition**: Test cases are defined in YAML format within the `test-suites` directory. Each test suite specifies the project, repository URL, code files, and individual test cases.

2. **Test Execution**: The `testing.py` script is executed with various command-line arguments. The script then:
- Parses the command-line arguments.
- Extracts test cases from the specified YAML file.
- Clones the repository if not already present.
- Deploys the project as a local FaaS (if required).
- Runs the test cases across the specified environments (CLI, FaaS, etc.).

3. **Output and Logging**: The results of the test cases are logged based on the specified verbosity level, and any errors encountered during the process are reported.
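
The flow above can be sketched in a few lines. The function names, the suite dictionary shape, and the per-environment lambdas are illustrative assumptions mirroring the steps, not the project's actual API:

```python
def make_runner(env):
    # Environment selection, as in step 2: each environment gets its own strategy.
    runners = {
        "cli": lambda call: f"cli ran {call}",
        "faas": lambda call: f"faas ran {call}",
    }
    return runners[env]


def run_suite(suite, envs):
    # Run every test case in every requested environment and collect results.
    results = {}
    for env in envs:
        runner = make_runner(env)
        for case in suite["test-cases"]:
            results[(env, case["name"])] = runner(case["function-call"])
    return results


suite = {"test-cases": [{"name": "t1", "function-call": "generate(8)"}]}
out = run_suite(suite, ["cli", "faas"])
```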

## Design Choices and Principles

This project adheres to several key design principles and patterns:

- **SOLID Principles**:
- **Single Responsibility Principle**: Each class has a single responsibility, making the code easier to understand and maintain.
- **Open/Closed Principle**: The code is open for extension but closed for modification. New runner environments can be added without modifying existing code.
- **Liskov Substitution Principle**: Subtypes (`CLIInterface`, `FaaSInterface`) can be used interchangeably with their base type (`RunnerInterface`) without affecting the correctness of the program.
- **Interface Segregation Principle**: The `RunnerInterface` provides a minimal set of methods required by all runner types, preventing unnecessary dependencies.
- **Dependency Inversion Principle**: High-level modules (e.g., `TestRunner`) do not depend on low-level modules (`CLIInterface`, `FaaSInterface`), but both depend on abstractions (`RunnerInterface`).

- **Design Patterns**:
- **Factory Pattern**: The `InterfaceFactory` class encapsulates the creation of runner interfaces, promoting flexibility and adherence to the Open/Closed Principle.
- **Singleton Pattern**: The `DeployManager` and `RepoManager` classes are implemented as singletons to ensure that only one instance exists throughout the application, avoiding redundant deployments or repository clones.
- **Strategy Pattern**: The `TestRunner` uses different strategies (`CLIInterface`, `FaaSInterface`) to run tests in various environments, making the code flexible and easy to extend.
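
The singleton behavior described for `DeployManager` and `RepoManager` can be illustrated with a minimal sketch; the `clone` method body is a placeholder assumption, since the real class would invoke `git clone`:

```python
class RepoManager:
    """Singleton: one shared instance tracks every repository clone."""

    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.cloned = set()
        return cls._instance

    def clone(self, repo_url: str) -> bool:
        # Returns True only on the first request for a repo, so the same
        # repository is never cloned twice during one run.
        if repo_url in self.cloned:
            return False
        self.cloned.add(repo_url)
        return True
```

Every call to `RepoManager()` returns the same object, so redundant clones are avoided no matter where in the codebase the manager is instantiated.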

## Usage

### Test Suite Format

Test suites are written in YAML format. Below is an example for the [random-password-generator-example](https://github.com/metacall/random-password-generator-example):
```yaml
project: random-password-generator-example
repo-url: https://github.com/metacall/random-password-generator-example
# … (lines collapsed in the diff view) …
code-files:
expected-pattern: 'missing 1 required positional argument'
```
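
Each `expected-pattern` is a pattern matched against the output of the corresponding `function-call`. A minimal check might look like this; the field names follow the YAML format above, while the matching logic (a regex search) is an illustrative assumption:

```python
import re


def check_output(output: str, expected_pattern: str) -> bool:
    """Return True if a runner's output matches the test case's expected pattern."""
    return re.search(expected_pattern, output) is not None


# The error pattern from the example suite above, matched against
# a plausible Python error message:
matched = check_output(
    "TypeError: generate() missing 1 required positional argument: 'length'",
    "missing 1 required positional argument",
)
```

Using a search rather than a full-string comparison lets a test case pin down only the relevant fragment of potentially noisy output.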

### Running Tests

To run the tests, use the following command:

```bash
python3 ./testing.py -f <test-suite-file> -V -e <environments>
```

- `-f`, `--file`: Specifies the test suite file name.
- `-V`, `--verbose`: Increases output verbosity.
- `-e`, `--envs`: Specifies the environments to run the tests on (e.g., `cli`, `faas`); defaults to `cli`.

Example:

```bash
python3 ./testing.py -f test-suites/test-time-app-web.yaml -V -e cli faas
```

## Contributing

We welcome contributions to the MetaCall Test Center! Here are a few ways you can help improve the project:

- **Enhance Test Coverage**: Add new test cases or improve existing ones to cover more scenarios.
- **Optimize Code**: Refactor and optimize the codebase to improve performance and readability.
- **Extend Functionality**: Implement support for additional environments or enhance existing ones.
- **Documentation**: Improve and expand the documentation to help new users and contributors.

### Guidelines

- Follow the existing code style and structure.
- Ensure that all tests pass before submitting a pull request.
- Provide clear and concise commit messages.
- Open an issue to discuss potential changes before submitting significant modifications.