Python test execution simple #21053

Merged Apr 18, 2023 (79 commits; diff shown from 73 commits)

Commits
f39f792
switch to using tcp for comm with server
eleanorjboyd Apr 3, 2023
3e1648f
mock test changes
eleanorjboyd Apr 4, 2023
32d6837
fix msgs
eleanorjboyd Apr 4, 2023
6aafdc2
still working and comments in progress
eleanorjboyd Feb 8, 2023
a034f7b
changing to different typing
eleanorjboyd Feb 9, 2023
74e3e25
fix path and docstring
eleanorjboyd Feb 10, 2023
7ecb9c3
functioning test infrastructure
eleanorjboyd Mar 3, 2023
e867b91
switch to single expected output file
eleanorjboyd Mar 3, 2023
7c5eab3
try for unittests
eleanorjboyd Mar 3, 2023
9645add
refactoring and adding more .data files
eleanorjboyd Mar 6, 2023
9202f3e
double and dual tests
eleanorjboyd Mar 6, 2023
c0e8119
cleanup
eleanorjboyd Mar 6, 2023
2ea5d35
inital take on error handling
eleanorjboyd Mar 7, 2023
23c3f93
parametrize
eleanorjboyd Mar 7, 2023
80601d9
parametrize the tests
eleanorjboyd Mar 7, 2023
a74bf17
add commenting
eleanorjboyd Mar 8, 2023
7757aa1
fixing paths for non local testing
eleanorjboyd Mar 8, 2023
de03dda
unneeded files
eleanorjboyd Mar 8, 2023
e28effd
attempt on http server/client
eleanorjboyd Mar 8, 2023
5bd4a4a
reverting
eleanorjboyd Mar 9, 2023
6f00050
add commenting, tests are passing
eleanorjboyd Mar 9, 2023
7113924
fix paths to ensure all use const
eleanorjboyd Mar 9, 2023
b444bf1
remove local references
eleanorjboyd Mar 9, 2023
cf0dc3f
fixing typing, add error handling
eleanorjboyd Mar 9, 2023
9af985c
comments, settings fix
eleanorjboyd Mar 9, 2023
4298bb0
add doc string
eleanorjboyd Mar 9, 2023
0d8ab7c
fixing formatting
eleanorjboyd Mar 9, 2023
825b6b3
black formatting
eleanorjboyd Mar 9, 2023
19c0280
comment noting line num
eleanorjboyd Mar 9, 2023
3a69860
change to txt file
eleanorjboyd Mar 9, 2023
1dc8bfc
create find line no function
eleanorjboyd Mar 9, 2023
d862002
line no cont
eleanorjboyd Mar 9, 2023
fa87708
Apply suggestions from code review from Brett
eleanorjboyd Mar 10, 2023
2368f65
switch to any types to avoid private module
eleanorjboyd Mar 11, 2023
026247c
current changes- broken with add for UUID
eleanorjboyd Mar 30, 2023
37a4c15
pytest y, unittest n
eleanorjboyd Mar 30, 2023
db1467d
both unit and pytest ts working
eleanorjboyd Mar 31, 2023
0f8e396
current state
eleanorjboyd Mar 31, 2023
020286c
remove debug
eleanorjboyd Apr 4, 2023
259102f
add jsonRPCProcessor test
eleanorjboyd Apr 4, 2023
59f8592
round 2 changes
eleanorjboyd Apr 4, 2023
8067b9b
round 3 edits
eleanorjboyd Apr 5, 2023
c5f0632
fix uneeded edits
eleanorjboyd Apr 5, 2023
4b71764
small fixes
eleanorjboyd Apr 6, 2023
9031162
comments and typing
eleanorjboyd Apr 6, 2023
40684b6
Fix UUID unittest (#20996)
eleanorjboyd Apr 5, 2023
f2fe0ee
switch to using tcp for comm with server (#20981)
eleanorjboyd Apr 6, 2023
d759725
mock test changes
eleanorjboyd Apr 4, 2023
d1fd47f
add jsonRPCProcessor test
eleanorjboyd Apr 4, 2023
3fc7fa0
comment out new code
eleanorjboyd Apr 6, 2023
2f54b7b
removing unneeded edit
eleanorjboyd Apr 6, 2023
5515b57
last few
eleanorjboyd Apr 6, 2023
2546e28
remove pytest adapter tests from running rn
eleanorjboyd Apr 6, 2023
218d9dc
fix pyright
eleanorjboyd Apr 7, 2023
4b5ef0f
pyright
eleanorjboyd Apr 7, 2023
caa8586
updated a few types
eleanorjboyd Apr 11, 2023
92f055a
beginning changes
eleanorjboyd Apr 11, 2023
d3196f8
functioning test
eleanorjboyd Apr 12, 2023
7b3b088
working with 10 tests
eleanorjboyd Apr 13, 2023
13207e3
fix assert errors and add tests execution
eleanorjboyd Apr 13, 2023
f4c0903
fix pyright
eleanorjboyd Apr 13, 2023
810cb74
fix package calling
eleanorjboyd Apr 13, 2023
a5ee8b1
Attempt to fix the connection error issue.
karthiknadig Apr 13, 2023
5551f5d
Try winsock reset
karthiknadig Apr 14, 2023
b158853
Revert "Try winsock reset"
karthiknadig Apr 14, 2023
e56a02f
Try with shell execute
karthiknadig Apr 14, 2023
0e90726
Revert "Try with shell execute"
karthiknadig Apr 14, 2023
952d3f3
Fix pathing for windows
karthiknadig Apr 14, 2023
8692100
Try running all python tests without deps
karthiknadig Apr 14, 2023
015eec1
Ensure proper env use.
karthiknadig Apr 14, 2023
dc856be
fix launch path for pytest debug
eleanorjboyd Apr 17, 2023
9ec5990
fix error
eleanorjboyd Apr 17, 2023
4533999
fix duplicate
eleanorjboyd Apr 17, 2023
246e98b
remove dup code
eleanorjboyd Apr 17, 2023
132f54a
dup test
eleanorjboyd Apr 17, 2023
9d20000
incorrect doc string
eleanorjboyd Apr 17, 2023
b1464ae
fix failing tests
eleanorjboyd Apr 17, 2023
8ee468b
fix pyright errors
eleanorjboyd Apr 17, 2023
a58a637
Update pythonFiles/tests/pytestadapter/.data/unittest_folder/test_sub…
eleanorjboyd Apr 18, 2023
unittest_folder/test_subtract.py
@@ -22,4 +22,4 @@ def test_subtract_negative_numbers( # test_marker--test_subtract_negative_numbers
self,
):
      result = subtract(-2, -3)
-     self.assertEqual(result, 1)
+     self.assertEqual(result, 100000)
@@ -3,6 +3,8 @@

from .helpers import TEST_DATA_PATH, find_test_line_number

# This file contains the expected output dictionaries for test discovery and is used in test_discovery.py.

# This is the expected output for the empty_discovery.py file.
# └──
TEST_DATA_PATH_STR = os.fspath(TEST_DATA_PATH)
328 changes: 328 additions & 0 deletions pythonFiles/tests/pytestadapter/expected_execution_test_output.py
@@ -0,0 +1,328 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.

TEST_SUBTRACT_FUNCTION = "unittest_folder/test_subtract.py::TestSubtractFunction::"
TEST_ADD_FUNCTION = "unittest_folder/test_add.py::TestAddFunction::"
SUCCESS = "success"
FAILURE = "failure"
TEST_SUBTRACT_FUNCTION_NEGATIVE_NUMBERS_ERROR = "self = <test_subtract.TestSubtractFunction testMethod=test_subtract_negative_numbers>\n\n def test_subtract_negative_numbers( # test_marker--test_subtract_negative_numbers\n self,\n ):\n result = subtract(-2, -3)\n> self.assertEqual(result, 100000)\nE AssertionError: 1 != 100000\n\nunittest_folder/test_subtract.py:25: AssertionError"

# This is the expected output for executing all tests in the unittest_folder.
# └── unittest_folder
# ├── test_add.py
# │ └── TestAddFunction
# │ ├── test_add_negative_numbers: success
# │ └── test_add_positive_numbers: success
# └── test_subtract.py
# └── TestSubtractFunction
# ├── test_subtract_negative_numbers: failure
# └── test_subtract_positive_numbers: success
uf_execution_expected_output = {
f"{TEST_ADD_FUNCTION}test_add_negative_numbers": {
"test": f"{TEST_ADD_FUNCTION}test_add_negative_numbers",
"outcome": SUCCESS,
"message": None,
"traceback": None,
"subtest": None,
},
f"{TEST_ADD_FUNCTION}test_add_positive_numbers": {
"test": f"{TEST_ADD_FUNCTION}test_add_positive_numbers",
"outcome": SUCCESS,
"message": None,
"traceback": None,
"subtest": None,
},
f"{TEST_SUBTRACT_FUNCTION}test_subtract_negative_numbers": {
"test": f"{TEST_SUBTRACT_FUNCTION}test_subtract_negative_numbers",
"outcome": FAILURE,
"message": TEST_SUBTRACT_FUNCTION_NEGATIVE_NUMBERS_ERROR,
"traceback": None,
"subtest": None,
},
f"{TEST_SUBTRACT_FUNCTION}test_subtract_positive_numbers": {
"test": f"{TEST_SUBTRACT_FUNCTION}test_subtract_positive_numbers",
"outcome": SUCCESS,
"message": None,
"traceback": None,
"subtest": None,
},
}
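Each dictionary in this file maps a test ID to a result record with the keys `test`, `outcome`, `message`, `traceback`, and `subtest`. A check of actual adapter output against one of these expected dictionaries might be sketched as follows; the helper name `assert_matches_expected` and the comparison strategy are assumptions for illustration, not part of this PR:

```python
from typing import Any, Dict

def assert_matches_expected(
    actual: Dict[str, Dict[str, Any]],
    expected: Dict[str, Dict[str, Any]],
) -> None:
    # The run must report exactly the expected set of test IDs.
    assert set(actual) == set(expected)
    for test_id, expected_result in expected.items():
        actual_result = actual[test_id]
        # Outcomes must match; failures must carry a message.
        assert actual_result["outcome"] == expected_result["outcome"]
        if expected_result["outcome"] == "failure":
            assert actual_result["message"] is not None

# One entry in the shape used throughout this file.
example = {
    "unittest_folder/test_add.py::TestAddFunction::test_add_negative_numbers": {
        "test": "unittest_folder/test_add.py::TestAddFunction::test_add_negative_numbers",
        "outcome": "success",
        "message": None,
        "traceback": None,
        "subtest": None,
    }
}
assert_matches_expected(example, example)
```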


# This is the expected output for executing only the test_add.py tests in the unittest_folder.
# └── unittest_folder
# ├── test_add.py
# │ └── TestAddFunction
# │ ├── test_add_negative_numbers: success
# │ └── test_add_positive_numbers: success
uf_single_file_expected_output = {
f"{TEST_ADD_FUNCTION}test_add_negative_numbers": {
"test": f"{TEST_ADD_FUNCTION}test_add_negative_numbers",
"outcome": SUCCESS,
"message": None,
"traceback": None,
"subtest": None,
},
f"{TEST_ADD_FUNCTION}test_add_positive_numbers": {
"test": f"{TEST_ADD_FUNCTION}test_add_positive_numbers",
"outcome": SUCCESS,
"message": None,
"traceback": None,
"subtest": None,
},
}

# This is the expected output for executing only a single method in the unittest_folder.
# └── unittest_folder
# ├── test_add.py
# │ └── TestAddFunction
# │ └── test_add_positive_numbers: success
uf_single_method_execution_expected_output = {
f"{TEST_ADD_FUNCTION}test_add_positive_numbers": {
"test": f"{TEST_ADD_FUNCTION}test_add_positive_numbers",
"outcome": SUCCESS,
"message": None,
"traceback": None,
"subtest": None,
}
}

# This is the expected output for a unittest_folder run where the two tests
# executed are in different files.
# └── unittest_folder
# ├── test_add.py
# │ └── TestAddFunction
# │ └── test_add_positive_numbers: success
# └── test_subtract.py
# └── TestSubtractFunction
# └── test_subtract_positive_numbers: success
uf_non_adjacent_tests_execution_expected_output = {
TEST_SUBTRACT_FUNCTION
+ "test_subtract_positive_numbers": {
"test": TEST_SUBTRACT_FUNCTION + "test_subtract_positive_numbers",
"outcome": SUCCESS,
"message": None,
"traceback": None,
"subtest": None,
},
TEST_ADD_FUNCTION
+ "test_add_positive_numbers": {
"test": TEST_ADD_FUNCTION + "test_add_positive_numbers",
"outcome": SUCCESS,
"message": None,
"traceback": None,
"subtest": None,
},
}

# This is the expected output for the simple_pytest.py file.
# └── simple_pytest.py
# └── test_function: success
simple_execution_pytest_expected_output = {
"simple_pytest.py::test_function": {
"test": "simple_pytest.py::test_function",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
}
}

# This is the expected output for the unittest_pytest_same_file.py file.
# ├── unittest_pytest_same_file.py
# ├── TestExample
# │ └── test_true_unittest: success
# └── test_true_pytest: success
unit_pytest_same_file_execution_expected_output = {
"unittest_pytest_same_file.py::TestExample::test_true_unittest": {
"test": "unittest_pytest_same_file.py::TestExample::test_true_unittest",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
"unittest_pytest_same_file.py::test_true_pytest": {
"test": "unittest_pytest_same_file.py::test_true_pytest",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
}

# This is the expected output for the dual_level_nested_folder tests.
# └── dual_level_nested_folder
# └── test_top_folder.py
# └── test_top_function_t: success
# └── test_top_function_f: failure
# └── nested_folder_one
# └── test_bottom_folder.py
# └── test_bottom_function_t: success
# └── test_bottom_function_f: failure
dual_level_nested_folder_execution_expected_output = {
"dual_level_nested_folder/test_top_folder.py::test_top_function_t": {
"test": "dual_level_nested_folder/test_top_folder.py::test_top_function_t",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
"dual_level_nested_folder/test_top_folder.py::test_top_function_f": {
"test": "dual_level_nested_folder/test_top_folder.py::test_top_function_f",
"outcome": "failure",
"message": "def test_top_function_f(): # test_marker--test_top_function_f\n> assert False\nE assert False\n\ndual_level_nested_folder/test_top_folder.py:14: AssertionError",
"traceback": None,
"subtest": None,
},
"dual_level_nested_folder/nested_folder_one/test_bottom_folder.py::test_bottom_function_t": {
"test": "dual_level_nested_folder/nested_folder_one/test_bottom_folder.py::test_bottom_function_t",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
"dual_level_nested_folder/nested_folder_one/test_bottom_folder.py::test_bottom_function_f": {
"test": "dual_level_nested_folder/nested_folder_one/test_bottom_folder.py::test_bottom_function_f",
"outcome": "failure",
"message": "def test_bottom_function_f(): # test_marker--test_bottom_function_f\n> assert False\nE assert False\n\ndual_level_nested_folder/nested_folder_one/test_bottom_folder.py:14: AssertionError",
"traceback": None,
"subtest": None,
},
}

# This is the expected output for the double_nested_folder tests.
# └── nested_folder_one
# └── nested_folder_two
# └── test_nest.py
# └── test_function: success
double_nested_folder_expected_execution_output = {
"double_nested_folder/nested_folder_one/nested_folder_two/test_nest.py::test_function": {
"test": "double_nested_folder/nested_folder_one/nested_folder_two/test_nest.py::test_function",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
}
}

# This is the expected output for the parametrize_tests.py tests.
# └── parametrize_tests.py
# └── test_adding[3+5-8]: success
# └── test_adding[2+4-6]: success
# └── test_adding[6+9-16]: failure
parametrize_tests_expected_execution_output = {
"parametrize_tests.py::test_adding[3+5-8]": {
"test": "parametrize_tests.py::test_adding[3+5-8]",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
"parametrize_tests.py::test_adding[2+4-6]": {
"test": "parametrize_tests.py::test_adding[2+4-6]",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
"parametrize_tests.py::test_adding[6+9-16]": {
"test": "parametrize_tests.py::test_adding[6+9-16]",
"outcome": "failure",
"message": 'actual = \'6+9\', expected = 16\n\n @pytest.mark.parametrize( # test_marker--test_adding\n "actual, expected", [("3+5", 8), ("2+4", 6), ("6+9", 16)]\n )\n def test_adding(actual, expected):\n> assert eval(actual) == expected\nE AssertionError: assert 15 == 16\nE + where 15 = eval(\'6+9\')\n\nparametrize_tests.py:10: AssertionError',
"traceback": None,
"subtest": None,
},
}
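The failure message above contains the parametrized test's source: `@pytest.mark.parametrize("actual, expected", [("3+5", 8), ("2+4", 6), ("6+9", 16)])` over `def test_adding`. pytest derives IDs like `test_adding[3+5-8]` by stringifying each parameter value and joining the values with `-`. The comprehension below mimics that naming scheme for these parameters (an illustration, not pytest's actual implementation):

```python
# Parameters taken from the failure message in the expected output above.
params = [("3+5", 8), ("2+4", 6), ("6+9", 16)]

# pytest-style IDs: values stringified and joined with "-" inside brackets.
ids = [
    f"parametrize_tests.py::test_adding[{actual}-{expected}]"
    for actual, expected in params
]
# ids[0] is "parametrize_tests.py::test_adding[3+5-8]", matching the keys above.

# The third case fails because eval("6+9") is 15, not the expected 16.
assert eval("6+9") == 15
```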

# This is the expected output for running a single parametrized test.
# └── parametrize_tests.py
# └── test_adding[3+5-8]: success
single_parametrize_tests_expected_execution_output = {
"parametrize_tests.py::test_adding[3+5-8]": {
"test": "parametrize_tests.py::test_adding[3+5-8]",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
}

# This is the expected output for the doctest in text_docstring.txt.
# └── text_docstring.txt
# └── text_docstring: success
doctest_pytest_expected_execution_output = {
"text_docstring.txt::text_docstring.txt": {
"test": "text_docstring.txt::text_docstring.txt",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
}
}
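The `text_docstring.txt::text_docstring.txt` ID comes from pytest collecting a doctest out of a plain text file (pytest does this when configured with an option such as `--doctest-glob="*.txt"`). The actual content of text_docstring.txt is not shown in this diff; the snippet below runs a doctest of the same kind through the standard library to show what a passing text-file doctest looks like:

```python
import doctest

# A hypothetical stand-in for the contents of text_docstring.txt.
doc = """
>>> 1 + 1
2
"""

# Parse and run the example the same way a doctest collector would.
parser = doctest.DocTestParser()
test = parser.get_doctest(doc, {}, "text_docstring", "text_docstring.txt", 0)
results = doctest.DocTestRunner().run(test)
# results is a TestResults(failed, attempted) tuple.
```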

# Will run all tests in the cwd that fit the test file naming pattern.
no_test_ids_pytest_execution_expected_output = {
"double_nested_folder/nested_folder_one/nested_folder_two/test_nest.py::test_function": {
"test": "double_nested_folder/nested_folder_one/nested_folder_two/test_nest.py::test_function",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
"dual_level_nested_folder/test_top_folder.py::test_top_function_t": {
"test": "dual_level_nested_folder/test_top_folder.py::test_top_function_t",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
"dual_level_nested_folder/test_top_folder.py::test_top_function_f": {
"test": "dual_level_nested_folder/test_top_folder.py::test_top_function_f",
"outcome": "failure",
"message": "def test_top_function_f(): # test_marker--test_top_function_f\n> assert False\nE assert False\n\ndual_level_nested_folder/test_top_folder.py:14: AssertionError",
"traceback": None,
"subtest": None,
},
"dual_level_nested_folder/nested_folder_one/test_bottom_folder.py::test_bottom_function_t": {
"test": "dual_level_nested_folder/nested_folder_one/test_bottom_folder.py::test_bottom_function_t",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
"dual_level_nested_folder/nested_folder_one/test_bottom_folder.py::test_bottom_function_f": {
"test": "dual_level_nested_folder/nested_folder_one/test_bottom_folder.py::test_bottom_function_f",
"outcome": "failure",
"message": "def test_bottom_function_f(): # test_marker--test_bottom_function_f\n> assert False\nE assert False\n\ndual_level_nested_folder/nested_folder_one/test_bottom_folder.py:14: AssertionError",
"traceback": None,
"subtest": None,
},
"unittest_folder/test_add.py::TestAddFunction::test_add_negative_numbers": {
"test": "unittest_folder/test_add.py::TestAddFunction::test_add_negative_numbers",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
"unittest_folder/test_add.py::TestAddFunction::test_add_positive_numbers": {
"test": "unittest_folder/test_add.py::TestAddFunction::test_add_positive_numbers",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
"unittest_folder/test_subtract.py::TestSubtractFunction::test_subtract_negative_numbers": {
"test": "unittest_folder/test_subtract.py::TestSubtractFunction::test_subtract_negative_numbers",
"outcome": "failure",
"message": "self = <test_subtract.TestSubtractFunction testMethod=test_subtract_negative_numbers>\n\n def test_subtract_negative_numbers( # test_marker--test_subtract_negative_numbers\n self,\n ):\n result = subtract(-2, -3)\n> self.assertEqual(result, 100000)\nE AssertionError: 1 != 100000\n\nunittest_folder/test_subtract.py:25: AssertionError",
"traceback": None,
"subtest": None,
},
"unittest_folder/test_subtract.py::TestSubtractFunction::test_subtract_positive_numbers": {
"test": "unittest_folder/test_subtract.py::TestSubtractFunction::test_subtract_positive_numbers",
"outcome": "success",
"message": None,
"traceback": None,
"subtest": None,
},
}
16 changes: 16 additions & 0 deletions pythonFiles/tests/pytestadapter/helpers.py
@@ -143,6 +143,22 @@ def runner(args: List[str]) -> Union[Dict[str, str], None]:
return process_rpc_json(output_path.read_text(encoding="utf-8"))


def listen_on_socket(listener: socket.socket, result: List[str]):
    sock, (other_host, other_port) = listener.accept()
    all_data: list = []
    while True:
        data: bytes = sock.recv(1024 * 1024)
        if not data:
            break
        all_data.append(data.decode("utf-8"))
    result.append("".join(all_data))


def find_test_line_number(test_name: str, test_file_path) -> str:
    """Function which finds the correct line number for a test by looking for the "test_marker--[test_name]" string.

    The test_name is split on the "[" character to remove the parameterization information.
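The `listen_on_socket` helper added to helpers.py can be exercised roughly as follows: run it on a background thread, connect as a client, send data, and close the connection to signal EOF. The listener/client setup here (host, port choice, payload) is an illustration, not code from the PR:

```python
import socket
import threading
from typing import List

def listen_on_socket(listener: socket.socket, result: List[str]) -> None:
    # Accept one connection and read until the peer closes the stream.
    sock, (other_host, other_port) = listener.accept()
    all_data: list = []
    while True:
        data: bytes = sock.recv(1024 * 1024)
        if not data:
            break
        all_data.append(data.decode("utf-8"))
    result.append("".join(all_data))

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("localhost", 0))  # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

result: List[str] = []
t = threading.Thread(target=listen_on_socket, args=(listener, result))
t.start()

client = socket.create_connection(("localhost", port))
client.sendall(b"hello from the test run")
client.close()  # closing the socket signals EOF so the listener loop exits
t.join()
listener.close()
```

After `t.join()` returns, `result` holds the complete decoded payload as a single string.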
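The `find_test_line_number` docstring above describes the algorithm: scan the test file for the `test_marker--<test_name>` string (dropping any parameterization suffix) and return that line number as a string. A hedged reconstruction, not the PR's exact implementation, might look like:

```python
def find_test_line_number(test_name: str, test_file_path) -> str:
    """Find the line number of a test by locating its "test_marker--" comment.

    Sketch based on the docstring in this PR; details are assumptions.
    """
    # Drop parameterization info: "test_adding[3+5-8]" -> "test_adding".
    marker = "test_marker--" + test_name.split("[")[0]
    with open(test_file_path, encoding="utf-8") as f:
        for line_number, line in enumerate(f, start=1):
            if marker in line:
                return str(line_number)
    raise ValueError(f"Marker for test {test_name!r} not found in {test_file_path}")
```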