This page answers some of the frequently asked questions about Schemathesis.
Schemathesis generates random test data that conforms to the given API schema, as well as non-conforming data, depending on the supplied data generation method config value. This data covers all possible data types from the JSON Schema specification in various combinations and at different nesting levels.
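For example, both positive and negative data generation can be enabled on a schema loader (a minimal sketch; the schema URL is a placeholder for your own API):

import schemathesis
from schemathesis import DataGenerationMethod

# Generate both schema-conforming and deliberately non-conforming data;
# the schema URL is an assumed placeholder
schema = schemathesis.from_uri(
    "http://localhost:8000/openapi.json",
    data_generation_methods=[DataGenerationMethod.positive, DataGenerationMethod.negative],
)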
We can't guarantee that the generated data will always be accepted by the application under test, since there could be validation rules not covered by the API schema. If you find that Schemathesis generated something that doesn't fit the API schema, consider reporting a bug.
The two main groups of problems that Schemathesis targets are server-side errors and nonconformance to the behavior described in the API schema.
It depends. The test data that Schemathesis generates is random; input validation is therefore exercised more often than other parts of the application.
Since Schemathesis generates data that fits the application's API schema, it can reach the app's business logic, though how deep it gets depends on the architecture of each particular application.
As a first step, you can use schema generators like flasgger for Python, grape-swagger for Ruby, or Swashbuckle for ASP.NET. Then, running Schemathesis against the generated API schema will help you refine its definitions.
Schemathesis focuses on finding inputs that crash the application, but it shares with Dredd the goal of keeping the API documentation up to date. Both tools generate requests against the API under test, but they approach it differently.
Schemathesis uses property-based testing to generate all input values and treats examples defined in the API schema as separate test cases. Dredd uses examples described in the API schema as the primary source of inputs (and requires them in order to work) and generates data only in some situations.
By using Hypothesis as the underlying testing framework, Schemathesis benefits from all of its features, such as test case reduction and stateful testing. Dredd effectively requires you to write a form of example-based tests, whereas Schemathesis requires only a valid API schema and generates the tests for you.
There are a lot of features that Dredd has and Schemathesis does not (e.g., API Blueprint support, its powerful hook system, and many more), and probably vice versa. Schemathesis can definitely learn a lot from Dredd, and if you miss a feature that exists in Dredd but not in Schemathesis, let us know.
There are two main ways to run it: as part of a Python test suite, or as a command-line tool.
If you wrote a Python application and want to utilize the features of an existing test suite, the in-code option will suit your needs best.
If you wrote your application in a language other than Python, use the built-in CLI. Keep in mind that you will need a running application to run Schemathesis against.
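A minimal sketch of both entry points (the schema URL is a placeholder for your own API). In CLI:

schemathesis run http://localhost:8000/openapi.json

And in Python tests:

import schemathesis

# The schema URL is an assumed placeholder
schema = schemathesis.from_uri("http://localhost:8000/openapi.json")

@schema.parametrize()
def test_api(case):
    # Send the generated request to the running app and validate the response
    case.call_and_validate()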
Only in some workflows! In CLI, you can test your AioHTTP / ASGI / WSGI apps with the --app CLI option.
For the pytest integration, there is the schemathesis.from_pytest_fixture loader, which lets you postpone API schema loading and start the test application as part of your test setup. See more information in the :doc:`../python` section.
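A minimal sketch of that loader (the fixture body and the schema URL are assumptions about your setup):

import pytest
import schemathesis

@pytest.fixture
def api_schema():
    # Start your application here, then load its schema;
    # the URL is an assumed placeholder
    return schemathesis.from_uri("http://localhost:8000/openapi.json")

schema = schemathesis.from_pytest_fixture("api_schema")

@schema.parametrize()
def test_api(case):
    case.call_and_validate()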
It depends on many factors, including the complexity of the API under test, the network connection speed, and the Schemathesis configuration. Usually it takes from a few seconds to a few minutes to run all the tests, though there are exceptions where it might take an hour or more.
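If the run takes too long, you can cap the number of generated examples per tested operation, for example (the limit value here is arbitrary):

schemathesis run --hypothesis-max-examples=50 http://localhost:8000/openapi.json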
Yes. The Schemathesis hook mechanism allows you to adapt its behavior and generate data that better fits your use case.
Also, if your application fails on some input early in the code, it's often a good idea to exclude this input from the next test run so you can explore deeper parts of your codebase.
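For example, a filter hook can skip known-problematic inputs during generation (a minimal sketch; the parameter name and value are hypothetical):

import schemathesis

@schemathesis.hook
def filter_query(context, query):
    # Skip generated query strings where "user_id" is 0; both the parameter
    # name and the problematic value are hypothetical examples
    return not query or query.get("user_id") != 0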
The case object that is injected into each test can be modified. Assuming your URL template is /api/users/{user_id}, in tests it can be done like this:
schema = ...  # Load the API schema here

@schema.parametrize()
def test_api(case):
    case.path_parameters["user_id"] = 42
Because FastAPI uses JSON Schema Draft 7 under the hood (via pydantic), which is not compatible with the JSON Schema drafts used by the Open API 2 / 3.0.x versions. It is a known issue on the FastAPI side. Schemathesis is more strict in schema handling by default, but we provide optional fixups for this case:
import schemathesis
# will install all available compatibility fixups.
schemathesis.fixups.install()
# You can also provide a list of fixup names as the first argument
# schemathesis.fixups.install(["fast_api"])
For more information, take a look at the "Compatibility" section.
There might be multiple reasons for that, but usually, this behavior occurs when the API schema is complex or deeply nested.
Please refer to the Data generation section in the documentation for more info. If you think that it is not the case, feel free to open an issue.
These CLI parameters both limit the duration of a certain part of a single test, but each of them has a different scope.
--hypothesis-deadline covers a single test case execution as a whole, including waiting for the API response and running all checks and relevant hooks for that test case. --request-timeout is only relevant for waiting for the API response; if this duration is exceeded, the test is marked as a "Timeout".
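For example (both values are in milliseconds and chosen arbitrarily here):

schemathesis run --hypothesis-deadline=500 --request-timeout=2000 http://localhost:8000/openapi.json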
When Schemathesis finds a failure, it tries to verify it by re-running the test. If the same failure is not reproduced, Schemathesis concludes that the test is "Flaky".
This situation usually happens when the tested application's state is not reset between tests. Imagine an API where the user can create "orders"; then the "Flaky" situation might look like this:
- Create order "A" -> 201, with a payload that does not conform to the definition in the API schema;
- Create order "A" again to verify the failure -> 409, with a conformant payload.
With Python tests, you may want to write a context manager that cleans the application state between test runs, as suggested by the Hypothesis docs.
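A sketch of that approach (reset_app_state and cleanup_database are hypothetical helpers for your application):

from contextlib import contextmanager

schema = ...  # Load the API schema here

@contextmanager
def reset_app_state():
    yield
    # Hypothetical helper that restores the application state
    # (e.g., truncates database tables) after each generated test case
    cleanup_database()

@schema.parametrize()
def test_api(case):
    with reset_app_state():
        case.call_and_validate()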
CLI reports flaky failures as regular failures with a special note about their flakiness. Cleaning the application state could be done via the :ref:`before_call <hooks_before_call>` hook.
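A minimal sketch of such a hook (the reset endpoint is an assumption about your application):

import requests
import schemathesis

@schemathesis.hook
def before_call(context, case):
    # Reset the application state before each API call;
    # the endpoint is hypothetical
    requests.post("http://localhost:8000/reset")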
The discriminator field does not affect data generation; Schemathesis works directly with the underlying schemas. Usually the problem comes from using the oneOf keyword with very permissive sub-schemas.
For example:
discriminator:
  propertyName: objectType
oneOf:
  - type: object
    required:
      - objectType
    properties:
      objectType:
        type: string
      foo:
        type: string
  - type: object
    required:
      - objectType
    properties:
      objectType:
        type: string
      bar:
        type: string
Here both sub-schemas do not restrict their additional properties, and for this reason any object that is valid against the first sub-schema is also valid against the second one. This contradicts the oneOf keyword's behavior, where the value should be valid against exactly one sub-schema. To solve this problem, you can use anyOf or make your sub-schemas less permissive, as shown below.
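For example, requiring the distinguishing properties and forbidding any others makes the sub-schemas above mutually exclusive (a sketch of the second approach):

oneOf:
  - type: object
    required: [objectType, foo]
    properties:
      objectType:
        type: string
      foo:
        type: string
    additionalProperties: false
  - type: object
    required: [objectType, bar]
    properties:
      objectType:
        type: string
      bar:
        type: string
    additionalProperties: false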
The oneOf keyword is a tricky one, and the validation results might look counterintuitive at first glance. Let's take a look at an example:
paths:
  /pets:
    patch:
      requestBody:
        content:
          application/json:
            schema:
              oneOf:
                - $ref: '#/components/schemas/Cat'
                - $ref: '#/components/schemas/Dog'
      responses:
        '200':
          description: Updated
components:
  schemas:
    Dog:
      type: object
      properties:
        bark:
          type: boolean
        breed:
          type: string
          enum: [Dingo, Husky, Retriever, Shepherd]
    Cat:
      type: object
      properties:
        hunts:
          type: boolean
        age:
          type: integer
Here we have two possible payload options - Dog and Cat. The following JSON object is valid against the Dog schema:
{
  "bark": true,
  "breed": "Dingo"
}
However, oneOf requires that the input be valid against exactly one sub-schema! At first glance, that looks like the case here, but it is actually not, because the Cat schema does not restrict which properties must be present and which must not. Even if the input object has neither the hunts nor the age property, it is still validated as a Cat instance.
To prevent this situation, you can use the required and additionalProperties keywords:
components:
  schemas:
    Dog:
      type: object
      properties:
        bark:
          type: boolean
        breed:
          type: string
          enum: [Dingo, Husky, Retriever, Shepherd]
      required: [bark, breed]  # List all the required properties
      additionalProperties: false  # And forbid any others
    Cat:
      type: object
      properties:
        hunts:
          type: boolean
        age:
          type: integer
      required: [hunts, age]  # List all the required properties
      additionalProperties: false  # And forbid any others
By adding these keywords, any Cat instance will always require the hunts and age properties to be present. As an alternative, you could use the anyOf keyword instead.
Open API 2.0 / 3.0 do not declare the uuid format as built-in; hence it is available as an extension:
from schemathesis.contrib.openapi import formats
formats.uuid.install()
You need to add additionalProperties: false to the relevant object definition. However, there is a caveat when inheritance is emulated in Open API via allOf: adding additionalProperties: false to the combined sub-schemas will prevent valid data from passing validation. In this case, it is better to use YAML anchors to share schema parts instead.
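A sketch of the anchor-based approach (the schema names and properties are illustrative; YAML merge keys are a YAML 1.1 feature supported by most parsers):

components:
  schemas:
    Dog:
      type: object
      # Shared properties are defined once under a YAML anchor
      properties: &pet-properties
        name:
          type: string
      additionalProperties: false
    LoudDog:
      type: object
      properties:
        # Reuse the shared properties via a merge key instead of allOf
        <<: *pet-properties
        bark:
          type: boolean
      additionalProperties: false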