diff --git a/.gitignore b/.gitignore new file mode 100644 index 000000000..461957161 --- /dev/null +++ b/.gitignore @@ -0,0 +1,5 @@ +# build +bin/* + +# tmp files +tmp/* diff --git a/LICENSE.md b/LICENSE.md new file mode 100644 index 000000000..f288702d2 --- /dev/null +++ b/LICENSE.md @@ -0,0 +1,674 @@ + GNU GENERAL PUBLIC LICENSE + Version 3, 29 June 2007 + + Copyright (C) 2007 Free Software Foundation, Inc. + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The GNU General Public License is a free, copyleft license for +software and other kinds of works. + + The licenses for most software and other practical works are designed +to take away your freedom to share and change the works. By contrast, +the GNU General Public License is intended to guarantee your freedom to +share and change all versions of a program--to make sure it remains free +software for all its users. We, the Free Software Foundation, use the +GNU General Public License for most of our software; it applies also to +any other work released this way by its authors. You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +them if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs, and that you know you can do these things. + + To protect your rights, we need to prevent others from denying you +these rights or asking you to surrender the rights. Therefore, you have +certain responsibilities if you distribute copies of the software, or if +you modify it: responsibilities to respect the freedom of others. 
+ + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must pass on to the recipients the same +freedoms that you received. You must make sure that they, too, receive +or can get the source code. And you must show them these terms so they +know their rights. + + Developers that use the GNU GPL protect your rights with two steps: +(1) assert copyright on the software, and (2) offer you this License +giving you legal permission to copy, distribute and/or modify it. + + For the developers' and authors' protection, the GPL clearly explains +that there is no warranty for this free software. For both users' and +authors' sake, the GPL requires that modified versions be marked as +changed, so that their problems will not be attributed erroneously to +authors of previous versions. + + Some devices are designed to deny users access to install or run +modified versions of the software inside them, although the manufacturer +can do so. This is fundamentally incompatible with the aim of +protecting users' freedom to change the software. The systematic +pattern of such abuse occurs in the area of products for individuals to +use, which is precisely where it is most unacceptable. Therefore, we +have designed this version of the GPL to prohibit the practice for those +products. If such problems arise substantially in other domains, we +stand ready to extend this provision to those domains in future versions +of the GPL, as needed to protect the freedom of users. + + Finally, every program is threatened constantly by software patents. +States should not allow patents to restrict development and use of +software on general-purpose computers, but in those that do, we wish to +avoid the special danger that patents applied to a free program could +make it effectively proprietary. To prevent this, the GPL assures that +patents cannot be used to render the program non-free. 
+ + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. 
If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. 
For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. 
Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. + + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. 
This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. 
+ + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. 
+ + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. 
+ + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. 
If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. + + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. 
If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. 
+ + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. 
For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. 
+ + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. 
You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Use with the GNU Affero General Public License. + + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU Affero General Public License into a single +combined work, and to convey the resulting work. 
The terms of this +License will continue to apply to the part which is the covered work, +but the special requirements of the GNU Affero General Public License, +section 13, concerning interaction through a network will apply to the +combination as such. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. 
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. + + IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS +THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE +USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF +DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD +PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), +EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF +SUCH DAMAGES. + + 17. Interpretation of Sections 15 and 16. + + If the disclaimer of warranty and limitation of liability provided +above cannot be given local legal effect according to their terms, +reviewing courts shall apply local law that most closely approximates +an absolute waiver of all civil liability in connection with the +Program, unless a warranty or assumption of liability accompanies a +copy of the Program in return for a fee. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +state the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. 
+ + + Copyright (C) + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . + +Also add information on how to contact you by electronic and paper mail. + + If the program does terminal interaction, make it output a short +notice like this when it starts in an interactive mode: + + Copyright (C) + This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. Of course, your program's commands +might be different; for a GUI interface, you would use an "about box". + + You should also get your employer (if you work as a programmer) or school, +if any, to sign a "copyright disclaimer" for the program, if necessary. +For more information on this, and how to apply and follow the GNU GPL, see +. + + The GNU General Public License does not permit incorporating your program +into proprietary programs. If your program is a subroutine library, you +may consider it more useful to permit linking proprietary applications with +the library. If this is what you want to do, use the GNU Lesser General +Public License instead of this License. But first, please read +. 
diff --git a/Makefile b/Makefile new file mode 100644 index 000000000..34bdb92c3 --- /dev/null +++ b/Makefile @@ -0,0 +1,19 @@ +.PHONY: test +test: + go test ./... + +.PHONY: test-clean +test-clean: + go clean -testcache + go test ./... + +bin: + mkdir bin + +.PHONY: build +build: bin + go build -o ./bin/jobctl ./cmd/ + +.PHONY: server +server: build + go run ./cmd/ server diff --git a/README.md b/README.md new file mode 100644 index 000000000..399685b55 --- /dev/null +++ b/README.md @@ -0,0 +1,180 @@ +# jobctl + +**A dead simple tool to run & manage DAGs** + +jobctl is a single command that generates and executes a [DAG (Directed acyclic graph)](https://en.wikipedia.org/wiki/Directed_acyclic_graph) from a simple YAML definition. jobctl also comes with a convenient web UI. It aims to be one of the easiest options to manage DAGs executed by cron. + +## Contents + +- [jobctl](#jobctl) + - [Contents](#contents) + - [Features](#features) + - [Use cases](#use-cases) + - [Getting started](#getting-started) + - [Installation](#installation) + - [Usage](#usage) + - [Configuration](#configuration) + - [Environment variables](#environment-variables) + - [Web UI configuration](#web-ui-configuration) + - [Global configuration](#global-configuration) + - [Job configuration](#job-configuration) + - [Simple example](#simple-example) + - [Complex example](#complex-example) + + +## Features + +- Simple command interface (See [Usage](#usage)) +- Simple YAML configuration format (See [Simple example](#simple-example)) +- Simple architecture (no DBMS or agent process is required) +- Web UI to visualize and manage jobs and watch logs +- Parameterization +- Conditions +- Automatic retry +- Cancellation +- Manual retry +- Parallelism limits +- Environment variables +- Repeat jobs +- Basic Authentication +- E-mail notifications +- REST API interface + +## Use cases +- ETL Pipeline +- Batches +- Machine Learning +- Data Processing +- Automation + +## Getting started +### Installation + +Place a `jobctl`
executable somewhere on your system. + +### Usage + +- `jobctl start [--params=] ` - run a job +- `jobctl status ` - display the current status of a job +- `jobctl retry --req= ` - retry a failed/canceled job +- `jobctl stop ` - cancel a job +- `jobctl dry [--params=] ` - dry-run a job +- `jobctl server` - start the web server for the web UI + +## Configuration + +### Environment variables +- `JOBCTL__DATA` - path to the directory used internally by jobctl (default: `~/.jobctl/data`) +- `JOBCTL__LOGS` - path to the directory for logging (default: `~/.jobctl/logs`) + +### Web UI configuration + +Please create `~/.jobctl/admin.yaml`. + +```yaml +# required +host: +port: +jobs: +command: + +# optional +isBasicAuth: +basicAuthUsername: +basicAuthPassword: +``` + +### Global configuration + +Please create `~/.jobctl/config.yaml`. All settings can be overridden by individual job configurations. + +```yaml +logDir: # log directory to write standard output from the job steps +histRetentionDays: 3 # job history retention days (not for log files) + +# E-mail server config (optional) +smtp: + host: + port: +errorMail: + from: + to: + prefix: +infoMail: + from: + to: + prefix: +``` + +## Job configuration + +### Simple example + +A simple example is as follows: +```yaml +name: simple job +steps: + - name: step 1 + command: python some_batch_1.py + dir: ${HOME}/jobs/ # working directory for the step (optional) + - name: step 2 + command: python some_batch_2.py + dir: ${HOME}/jobs/ + depends: + - step 1 +``` + +### Complex example + +A more complex example is as follows: +```yaml +name: complex job +description: run python jobs + +# Define environment variables +env: + LOG_DIR: ${HOME}/jobs/logs + PATH: /usr/local/bin:${PATH} + +logDir: ${LOG_DIR} # log directory to write standard output from the job steps +histRetentionDays: 3 # job history retention days (not for log files) +delaySec: 1 # interval seconds between job steps +maxActiveRuns: 1 # max number of steps to run in parallel + +# 
Define parameters +params: param1 param2 # they can be referenced by each step as $1, $2, and so on + +# Define preconditions for whether or not the job is allowed to run +preconditions: + - condition: "`printf 1`" # This condition is evaluated at each execution of the job + expected: "1" # If the evaluation result does not match, the job is canceled + +# Mail notification configs +mailOnError: true # send a mail when a job fails +mailOnFinish: true # send a mail when a job finishes + +# Job steps +steps: + - name: step 1 + description: step 1 description + dir: ${HOME}/jobs + command: python some_batch_1.py $1 + mailOnError: false # do not send mail on error + continueOn: + failed: true # continue to the next step regardless of an error in this step + canceled: true # continue to the next step regardless of the evaluation result of its preconditions + retryPolicy: + limit: 2 # retry up to 2 times when the step fails + # Define preconditions for whether or not the step is allowed to run + preconditions: + - condition: "`printf 1`" + expected: "1" + - name: step 2 + description: step 2 description + dir: ${HOME}/jobs + command: python some_batch_2.py $1 + depends: + - step 1 +``` + +The global config file `~/.jobctl/config.yaml` is useful for gathering common settings such as mail server configs or the log directory. 
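+ +With the configuration above saved as, say, `complex.yaml` (the filename here is only an illustration), the commands from the [Usage](#usage) section compose into a typical session: + +```sh +# Check the execution plan without actually running anything +jobctl dry --params="foo bar" complex.yaml + +# Run the job with two positional parameters ($1 and $2) +jobctl start --params="foo bar" complex.yaml + +# Inspect the current status, or cancel a running job +jobctl status complex.yaml +jobctl stop complex.yaml +``` + +A failed run can then be retried with `jobctl retry --req=<request-id> complex.yaml`, passing the request ID recorded for that run. 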
diff --git a/cmd/dry.go b/cmd/dry.go new file mode 100644 index 000000000..0951ca2c2 --- /dev/null +++ b/cmd/dry.go @@ -0,0 +1,57 @@ +package main + +import ( + "errors" + "jobctl/internal/agent" + "jobctl/internal/config" + "log" + "os" + + "github.com/urfave/cli/v2" +) + +func newDryCommand() *cli.Command { + cl := config.NewConfigLoader() + return &cli.Command{ + Name: "dry", + Usage: "jobctl dry [--params=\"\"] ", + Flags: []cli.Flag{ + &cli.StringFlag{ + Name: "params", + Usage: "parameters", + Value: "", + Required: false, + }, + }, + Action: func(c *cli.Context) error { + if c.NArg() == 0 { + return errors.New("config file must be specified.") + } + if c.NArg() != 1 { + return errors.New("too many parameters.") + } + config_file_path := c.Args().Get(0) + cfg, err := cl.Load(config_file_path, c.String("params")) + if err != nil { + return err + } + return dryRun(cfg) + }, + } +} + +func dryRun(cfg *config.Config) error { + a := &agent.Agent{Config: &agent.Config{ + Job: cfg, + Dry: true, + }} + listenSignals(func(sig os.Signal) { + a.Signal(sig) + }) + + err := a.Run() + if err != nil { + log.Printf("[DRY] job failed %v", err) + } + return nil +} diff --git a/cmd/dry_test.go b/cmd/dry_test.go new file mode 100644 index 000000000..9ef9dbbe9 --- /dev/null +++ b/cmd/dry_test.go @@ -0,0 +1,19 @@ +package main + +import ( + "testing" +) + +func Test_dryCommand(t *testing.T) { + tests := []appTest{ + { + args: []string{"", "dry", testConfig("basic_success.yaml")}, errored: false, + output: []string{"Starting DRY-RUN"}, + }, + } + + for _, v := range tests { + app := makeApp() + runAppTestOutput(app, v, t) + } +} diff --git a/cmd/jobctl.go b/cmd/jobctl.go new file mode 100644 index 000000000..c444799eb --- /dev/null +++ b/cmd/jobctl.go @@ -0,0 +1,53 @@ +package main + +import ( + "io" + "log" + "os" + "os/signal" + "syscall" + + "github.com/urfave/cli/v2" +) + +var stdin io.ReadCloser + +func main() { + err := run() + if err != nil { + log.Fatalf("%v", err) + 
} +} + +func listenSignals(abortFunc func(sig os.Signal)) { + sigs := make(chan os.Signal, 1) + signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM) + go func() { + for sig := range sigs { + log.Printf("\nSignal: %v", sig) + abortFunc(sig) + } + }() +} + +func run() error { + stdin = os.Stdin + app := makeApp() + return app.Run(os.Args) +} + +func makeApp() *cli.App { + return &cli.App{ + Name: "jobctl", + Usage: "Simple command to run a group of jobs", + UsageText: "jobctl [options] [args]", + Commands: []*cli.Command{ + newStartCommand(), + newStatusCommand(), + newStopCommand(), + newRetryCommand(), + newDryCommand(), + newServerCommand(), + }, + } +} diff --git a/cmd/jobctl_test.go b/cmd/jobctl_test.go new file mode 100644 index 000000000..656d36ffe --- /dev/null +++ b/cmd/jobctl_test.go @@ -0,0 +1,93 @@ +package main + +import ( + "bytes" + "io" + "jobctl/internal/settings" + "jobctl/internal/utils" + "log" + "os" + "path" + "testing" + + "github.com/stretchr/testify/require" + "github.com/urfave/cli/v2" +) + +type appTest struct { + args []string + errored bool + output []string + exactOutput string + stdin io.ReadCloser +} + +var testsDir = path.Join(utils.MustGetwd(), "../tests/testdata") + +func TestMain(m *testing.M) { + tempDir := utils.MustTempDir("jobctl_test") + settings.InitTest(tempDir) + code := m.Run() + os.RemoveAll(tempDir) + os.Exit(code) +} + +func testConfig(name string) string { + return path.Join(testsDir, name) +} + +func runAppTestOutput(app *cli.App, test appTest, t *testing.T) { + t.Helper() + + origStdout := os.Stdout + r, w, err := os.Pipe() + require.NoError(t, err) + os.Stdout = w + log.SetOutput(w) + + defer func() { + os.Stdout = origStdout + log.SetOutput(origStdout) + }() + + if test.stdin != nil { + origStdin := stdin + stdin = test.stdin + defer func() { + stdin = origStdin + }() + } + + err = app.Run(test.args) + os.Stdout = origStdout + w.Close() + + if err != nil && !test.errored { + t.Fatalf("job failed unexpectedly 
%v", err) + return + } + + var buf bytes.Buffer + _, err = io.Copy(&buf, r) + require.NoError(t, err) + + s := buf.String() + if len(test.output) > 0 { + for _, v := range test.output { + require.Contains(t, s, v) + } + } + + if test.exactOutput != "" { + require.Equal(t, test.exactOutput, s) + } +} + +func runAppTest(app *cli.App, test appTest, t *testing.T) { + err := app.Run(test.args) + + if err != nil && !test.errored { + t.Fatalf("job failed unexpectedly %v", err) + return + } +} diff --git a/cmd/retry.go b/cmd/retry.go new file mode 100644 index 000000000..f09dc4217 --- /dev/null +++ b/cmd/retry.go @@ -0,0 +1,75 @@ +package main + +import ( + "errors" + "jobctl/internal/agent" + "jobctl/internal/config" + "jobctl/internal/database" + "jobctl/internal/models" + "log" + "os" + "path/filepath" + + "github.com/urfave/cli/v2" +) + +func newRetryCommand() *cli.Command { + cl := config.NewConfigLoader() + return &cli.Command{ + Name: "retry", + Usage: "jobctl retry --req= ", + Flags: []cli.Flag{ + &cli.StringFlag{ + Name: "req", + Usage: "request-id", + Value: "", + Required: true, + }, + }, + Action: func(c *cli.Context) error { + if c.NArg() == 0 { + return errors.New("config file must be specified.") + } + if c.NArg() != 1 { + return errors.New("too many parameters.") + } + config_file_path, err := filepath.Abs(c.Args().Get(0)) + if err != nil { + return err + } + requestId := c.String("req") + db := database.New(database.DefaultConfig()) + status, err := db.FindByRequestId(config_file_path, requestId) + if err != nil { + return err + } + cfg, err := cl.Load(config_file_path, status.Status.Params) + if err != nil { + return err + } + return retryJob(cfg, status.Status) + }, + } +} + +func retryJob(cfg *config.Config, status *models.Status) error { + a := &agent.Agent{ + Config: &agent.Config{ + Job: cfg, + Dry: false, + }, + RetryConfig: &agent.RetryConfig{ + Status: status, + }, + } + + listenSignals(func(sig os.Signal) { + a.Signal(sig) + }) + + err := 
a.Run() + if err != nil { + log.Printf("running job failed. %v", err) + } + return nil +} diff --git a/cmd/retry_test.go b/cmd/retry_test.go new file mode 100644 index 000000000..2f958d555 --- /dev/null +++ b/cmd/retry_test.go @@ -0,0 +1,49 @@ +package main + +import ( + "fmt" + "jobctl/internal/controller" + "jobctl/internal/database" + "jobctl/internal/scheduler" + "testing" + + "github.com/stretchr/testify/require" +) + +func Test_retryCommand(t *testing.T) { + app := makeApp() + configPath := testConfig("cmd_retry.yaml") + runAppTestOutput(app, appTest{ + args: []string{"", "start", "--params=x", configPath}, errored: true, + output: []string{}, + }, t) + + job, err := controller.FromConfig(configPath) + require.NoError(t, err) + require.Equal(t, job.Status.Status, scheduler.SchedulerStatus_Error) + + db := database.New(database.DefaultConfig()) + status, err := db.FindByRequestId(configPath, job.Status.RequestId) + require.NoError(t, err) + dw, err := db.NewWriterFor(configPath, status.File) + require.NoError(t, err) + err = dw.Open() + require.NoError(t, err) + + for _, n := range status.Status.Nodes { + n.Command = "true" + } + err = dw.Write(status.Status) + require.NoError(t, err) + + app = makeApp() + runAppTestOutput(app, appTest{ + args: []string{"", "retry", fmt.Sprintf("--req=%s", + job.Status.RequestId), testConfig("cmd_retry.yaml")}, errored: false, + output: []string{"parameter is x"}, + }, t) + + job, err = controller.FromConfig(testConfig("cmd_retry.yaml")) + require.NoError(t, err) + require.Equal(t, job.Status.Status, scheduler.SchedulerStatus_Success) +} diff --git a/cmd/server.go b/cmd/server.go new file mode 100644 index 000000000..b15962ec4 --- /dev/null +++ b/cmd/server.go @@ -0,0 +1,36 @@ +package main + +import ( + "jobctl/internal/admin" + "os" + + "github.com/urfave/cli/v2" +) + +func newServerCommand() *cli.Command { + cl := admin.NewConfigLoader() + return &cli.Command{ + Name: "server", + Usage: "jobctl server", + Action: 
func(c *cli.Context) error { + cfg, err := cl.LoadAdminConfig("") + if err == admin.ErrConfigNotFound { + cfg, err = admin.DefaultConfig() + } + if err != nil { + return err + } + return startServer(cfg) + }, + } +} + +func startServer(cfg *admin.Config) error { + server := admin.NewServer(cfg) + + listenSignals(func(sig os.Signal) { + server.Shutdown() + }) + + return server.Serve() +} diff --git a/cmd/start.go b/cmd/start.go new file mode 100644 index 000000000..d185897b8 --- /dev/null +++ b/cmd/start.go @@ -0,0 +1,58 @@ +package main + +import ( + "errors" + "jobctl/internal/agent" + "jobctl/internal/config" + "log" + "os" + + "github.com/urfave/cli/v2" +) + +func newStartCommand() *cli.Command { + cl := config.NewConfigLoader() + return &cli.Command{ + Name: "start", + Usage: "jobctl start [--params=\"\"] ", + Flags: []cli.Flag{ + &cli.StringFlag{ + Name: "params", + Usage: "parameters", + Value: "", + Required: false, + }, + }, + Action: func(c *cli.Context) error { + if c.NArg() == 0 { + return errors.New("config file must be specified.") + } + if c.NArg() != 1 { + return errors.New("too many parameters.") + } + config_file_path := c.Args().Get(0) + cfg, err := cl.Load(config_file_path, c.String("params")) + if err != nil { + return err + } + return startJob(cfg) + }, + } +} + +func startJob(cfg *config.Config) error { + a := &agent.Agent{Config: &agent.Config{ + Job: cfg, + Dry: false, + }} + + listenSignals(func(sig os.Signal) { + a.Signal(sig) + }) + + err := a.Run() + if err != nil { + log.Printf("running job failed. 
%v", err) + } + return nil +} diff --git a/cmd/start_test.go b/cmd/start_test.go new file mode 100644 index 000000000..9cacb4952 --- /dev/null +++ b/cmd/start_test.go @@ -0,0 +1,31 @@ +package main + +import ( + "testing" +) + +func Test_startCommand(t *testing.T) { + tests := []appTest{ + { + args: []string{"", "start", testConfig("multiple_steps.yaml")}, errored: false, + output: []string{"1 finished", "2 finished"}, + }, + { + args: []string{"", "start", testConfig("basic_failure.yaml")}, errored: true, + output: []string{"1 failed"}, + }, + { + args: []string{"", "start", testConfig("with_params.yaml")}, errored: false, + output: []string{"params is param-value"}, + }, + { + args: []string{"", "start", "--params=x y", testConfig("with_params_2.yaml")}, errored: false, + output: []string{"params are x and y"}, + }, + } + + for _, v := range tests { + app := makeApp() + runAppTestOutput(app, v, t) + } +} diff --git a/cmd/status.go b/cmd/status.go new file mode 100644 index 000000000..04f293159 --- /dev/null +++ b/cmd/status.go @@ -0,0 +1,42 @@ +package main + +import ( + "errors" + "jobctl/internal/config" + "jobctl/internal/controller" + "jobctl/internal/models" + "log" + + "github.com/urfave/cli/v2" +) + +func newStatusCommand() *cli.Command { + cl := config.NewConfigLoader() + return &cli.Command{ + Name: "status", + Usage: "jobctl status ", + Action: func(c *cli.Context) error { + if c.NArg() == 0 { + return errors.New("config file must be specified.") + } + config_file_path := c.Args().Get(0) + cfg, err := cl.Load(config_file_path, "") + if err != nil { + return err + } + return queryStatus(cfg) + }, + } +} + +func queryStatus(cfg *config.Config) error { + status, err := controller.New(cfg).GetStatus() + if err != nil { + return err + } + res := &models.StatusResponse{ + Status: status, + } + log.Printf("Pid=%d Status=%s", res.Status.Pid, res.Status.Status) + return nil +} diff --git a/cmd/status_test.go b/cmd/status_test.go new file mode 100644 index 
000000000..bc72bc0dc --- /dev/null +++ b/cmd/status_test.go @@ -0,0 +1,32 @@ +package main + +import ( + "testing" + "time" +) + +func Test_statusCommand(t *testing.T) { + tests := []appTest{ + { + args: []string{"", "start", testConfig("basic_sleep.yaml")}, errored: false, + }, + } + + for _, v := range tests { + app := makeApp() + app2 := makeApp() + + done := make(chan bool) + go func() { + time.Sleep(time.Millisecond * 50) + runAppTestOutput(app2, appTest{ + args: []string{"", "status", v.args[2]}, errored: false, + output: []string{"Status=running"}, + }, t) + done <- true + }() + + runAppTest(app, v, t) + <-done + } +} diff --git a/cmd/stop.go b/cmd/stop.go new file mode 100644 index 000000000..a104d4d7c --- /dev/null +++ b/cmd/stop.go @@ -0,0 +1,60 @@ +package main + +import ( + "errors" + "jobctl/internal/config" + "jobctl/internal/controller" + "jobctl/internal/scheduler" + "log" + "syscall" + "time" + + "github.com/urfave/cli/v2" +) + +func newStopCommand() *cli.Command { + cl := config.NewConfigLoader() + return &cli.Command{ + Name: "stop", + Usage: "jobctl stop ", + Action: func(c *cli.Context) error { + if c.NArg() == 0 { + return errors.New("config file must be specified.") + } + config_file_path := c.Args().Get(0) + cfg, err := cl.Load(config_file_path, "") + if err != nil { + return err + } + return stopJob(cfg) + }, + } +} + +func stopJob(cfg *config.Config) error { + status, err := controller.New(cfg).GetStatus() + if err != nil { + return err + } + + if status.Status != scheduler.SchedulerStatus_Running || + !status.Pid.IsRunning() { + log.Printf("job is not running.") + return nil + } + syscall.Kill(int(status.Pid), syscall.SIGINT) + for { + time.Sleep(time.Second * 3) + s, err := controller.New(cfg).GetStatus() + if err != nil { + return err + } + if s.Pid == status.Pid && s.Status == + scheduler.SchedulerStatus_Running { + continue + } + break + } + log.Printf("job is stopped.") + return nil +} diff --git a/cmd/stop_test.go 
b/cmd/stop_test.go new file mode 100644 index 000000000..32b8d9370 --- /dev/null +++ b/cmd/stop_test.go @@ -0,0 +1,45 @@ +package main + +import ( + "jobctl/internal/config" + "jobctl/internal/database" + "jobctl/internal/scheduler" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func Test_stopCommand(t *testing.T) { + c := testConfig("basic_sleep_long.yaml") + test := appTest{ + args: []string{"", "start", c}, errored: false, + } + + app := makeApp() + stopper := makeApp() + done := make(chan bool) + + go func() { + time.Sleep(time.Millisecond * 50) + runAppTestOutput(stopper, appTest{ + args: []string{"", "stop", test.args[2]}, errored: false, + output: []string{"stopped"}, + }, t) + done <- true + }() + + runAppTest(app, test, t) + + <-done + + db := database.New(database.DefaultConfig()) + cfg := &config.Config{ + ConfigPath: c, + } + s, err := db.ReadStatusHist(cfg.ConfigPath, 1) + require.NoError(t, err) + require.Equal(t, 1, len(s)) + assert.Equal(t, scheduler.SchedulerStatus_Cancel, s[0].Status.Status) +} diff --git a/go.mod b/go.mod new file mode 100644 index 000000000..700adbedb --- /dev/null +++ b/go.mod @@ -0,0 +1,25 @@ +module jobctl + +go 1.17 + +require ( + github.com/imdario/mergo v0.3.12 + github.com/mitchellh/mapstructure v1.4.3 + github.com/stretchr/testify v1.7.1 + github.com/urfave/cli/v2 v2.3.0 + golang.org/x/text v0.3.7 + gopkg.in/yaml.v2 v2.4.0 +) + +require ( + github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d // indirect + github.com/davecgh/go-spew v1.1.0 // indirect + github.com/google/uuid v1.3.0 // indirect + github.com/kr/pretty v0.3.0 // indirect + github.com/pmezard/go-difflib v1.0.0 // indirect + github.com/rogpeppe/go-internal v1.8.1-0.20210923151022-86f73c517451 // indirect + github.com/russross/blackfriday/v2 v2.0.1 // indirect + github.com/shurcooL/sanitized_anchor_name v1.0.0 // indirect + gopkg.in/check.v1 
v1.0.0-20180628173108-788fd7840127 // indirect + gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c // indirect +) diff --git a/go.sum b/go.sum new file mode 100644 index 000000000..9c5464d97 --- /dev/null +++ b/go.sum @@ -0,0 +1,47 @@ +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d h1:U+s90UTSYgptZMwQh2aRr3LuazLJIa+Pg3Kc1ylSYVY= +github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= +github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= +github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I= +github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU= +github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0= +github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk= +github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/mitchellh/mapstructure v1.4.3 h1:OVowDSCllw/YjdLkam3/sm7wEtOy59d8ndGgCcyj8cs= +github.com/mitchellh/mapstructure v1.4.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= +github.com/pkg/diff 
v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc= +github.com/rogpeppe/go-internal v1.8.1-0.20210923151022-86f73c517451 h1:d1PiN4RxzIFXCJTvRkvSkKqwtRAl5ZV4lATKtQI0B7I= +github.com/rogpeppe/go-internal v1.8.1-0.20210923151022-86f73c517451/go.mod h1:JeRgkft04UBgHMgCIwADu4Pn6Mtm5d4nPKWu0nJ5d+o= +github.com/russross/blackfriday/v2 v2.0.1 h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q= +github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo= +github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/urfave/cli/v2 v2.3.0 h1:qph92Y649prgesehzOrQjdWyxFOp/QVM+6imKHad91M= +github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI= +golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk= +golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= +gopkg.in/check.v1 
v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/internal/admin/basicauth.go b/internal/admin/basicauth.go new file mode 100644 index 000000000..052cec361 --- /dev/null +++ b/internal/admin/basicauth.go @@ -0,0 +1,29 @@ +package admin + +import ( + "crypto/sha256" + "crypto/subtle" + "net/http" +) + +func basicAuth(next http.Handler, expectedUsername, expectedPassword string) http.Handler { + return http.HandlerFunc( + func(w http.ResponseWriter, r *http.Request) { + // Reference: https://www.alexedwards.net/blog/basic-authentication-in-go + username, password, ok := r.BasicAuth() + if ok { + usernameHash := sha256.Sum256([]byte(username)) + passwordHash := sha256.Sum256([]byte(password)) + expectedUsernameHash := sha256.Sum256([]byte(expectedUsername)) + expectedPasswordHash := sha256.Sum256([]byte(expectedPassword)) + usernameMatch := (subtle.ConstantTimeCompare(usernameHash[:], expectedUsernameHash[:]) == 1) + passwordMatch := (subtle.ConstantTimeCompare(passwordHash[:], expectedPasswordHash[:]) == 1) + if usernameMatch && passwordMatch { + next.ServeHTTP(w, r) + return + } + } + w.Header().Set("WWW-Authenticate", `Basic realm="restricted", charset="UTF-8"`) + http.Error(w, "Unauthorized", http.StatusUnauthorized) + }) +} diff --git a/internal/admin/config.go b/internal/admin/config.go new file mode 100644 index 
000000000..06cb4ff7d --- /dev/null +++ b/internal/admin/config.go @@ -0,0 +1,176 @@ +package admin + +import ( + "fmt" + "jobctl/internal/utils" + "os" + "os/exec" + "path/filepath" + "regexp" + "strconv" + "strings" +) + +var tickerMatcher *regexp.Regexp + +func init() { + tickerMatcher = regexp.MustCompile("`[^`]+`") +} + +type Config struct { + Host string + Port string + Env []string + Jobs string + Command string + WorkDir string + IsBasicAuth bool + BasicAuthUsername string + BasicAuthPassword string + LogEncodingCharset string +} + +func (c *Config) Init() { + if c.Env == nil { + c.Env = []string{} + } +} + +func (c *Config) setup() error { + if c.Command == "" { + c.Command = "jobctl" + } + if c.Jobs == "" { + wd, err := os.Getwd() + if err != nil { + return err + } + c.Jobs = wd + } + if c.Host == "" { + h, err := os.Hostname() + if err != nil { + return err + } + c.Host = h + } + if c.Port == "" { + c.Port = "8000" + } + if len(c.Env) == 0 { + env := utils.DefaultEnv() + env, err := loadVariables(env) + if err != nil { + return err + } + c.Env = buildConfigEnv(env) + } + return nil +} + +func buildFromDefinition(def *configDefinition) (c *Config, err error) { + c = &Config{} + c.Init() + + env, err := loadVariables(def.Env) + if err != nil { + return nil, err + } + c.Env = buildConfigEnv(env) + + c.Host, err = parseVariable(def.Host) + if err != nil { + return nil, err + } + c.Port = strconv.Itoa(def.Port) + + jd, err := parseVariable(def.Jobs) + if err != nil { + return nil, err + } + if !filepath.IsAbs(jd) { + return nil, fmt.Errorf("jobs directory should be absolute path. 
was %s", jd) + } + c.Jobs, err = filepath.Abs(jd) + if err != nil { + return nil, err + } + c.Command, err = parseVariable(def.Command) + if err != nil { + return nil, err + } + c.WorkDir, err = parseVariable(def.WorkDir) + if err != nil { + return nil, err + } + if c.WorkDir == "" { + c.WorkDir, err = os.Getwd() + if err != nil { + return nil, err + } + } + c.IsBasicAuth = def.IsBasicAuth + c.BasicAuthUsername, err = parseVariable(def.BasicAuthUsername) + if err != nil { + return nil, err + } + c.BasicAuthPassword, err = parseVariable(def.BasicAuthPassword) + if err != nil { + return nil, err + } + c.LogEncodingCharset, err = parseVariable(def.LogEncodingCharset) + if err != nil { + return nil, err + } + return c, nil +} + +func buildConfigEnv(vars map[string]string) []string { + ret := []string{} + for k, v := range vars { + ret = append(ret, fmt.Sprintf("%s=%s", k, v)) + } + return ret +} + +func loadVariables(strVariables map[string]string) (map[string]string, error) { + vars := map[string]string{} + for k, v := range strVariables { + parsed, err := parseVariable(v) + if err != nil { + return nil, err + } + vars[k] = parsed + err = os.Setenv(k, parsed) + if err != nil { + return nil, err + } + } + return vars, nil +} + +func parseVariable(value string) (string, error) { + val, err := parseCommand(os.ExpandEnv(value)) + if err != nil { + return "", err + } + return val, nil +} + +func parseCommand(value string) (string, error) { + matches := tickerMatcher.FindAllString(strings.TrimSpace(value), -1) + if matches == nil { + return value, nil + } + ret := value + for i := 0; i < len(matches); i++ { + command := matches[i] + out, err := exec.Command(strings.ReplaceAll(command, "`", "")).Output() + if err != nil { + return "", err + } + ret = strings.ReplaceAll(ret, command, strings.TrimSpace(string(out[:]))) + + } + return ret, nil +} diff --git a/internal/admin/definition.go b/internal/admin/definition.go new file mode 100644 index 000000000..8204509f1 --- 
/dev/null +++ b/internal/admin/definition.go @@ -0,0 +1,14 @@ +package admin + +type configDefinition struct { + Host string + Port int + Env map[string]string + Jobs string + Command string + WorkDir string + IsBasicAuth bool + BasicAuthUsername string + BasicAuthPassword string + LogEncodingCharset string +} diff --git a/internal/admin/errors.go b/internal/admin/errors.go new file mode 100644 index 000000000..1d79bb72e --- /dev/null +++ b/internal/admin/errors.go @@ -0,0 +1,17 @@ +package admin + +import ( + "errors" + "net/http" +) + +var errNotFound = errors.New("not found") + +func encodeError(w http.ResponseWriter, err error) { + switch err { + case errNotFound: + http.Error(w, err.Error(), http.StatusNotFound) + default: + http.Error(w, err.Error(), http.StatusInternalServerError) + } +} diff --git a/internal/admin/handler.go b/internal/admin/handler.go new file mode 100644 index 000000000..d00c38a11 --- /dev/null +++ b/internal/admin/handler.go @@ -0,0 +1,45 @@ +package admin + +import ( + "net/http" + "regexp" +) + +type adminHandler struct { + config *Config + routes map[string]map[*regexp.Regexp]http.HandlerFunc +} + +func newAdminHandler(cfg *Config, routes []*route) *adminHandler { + hdl := &adminHandler{ + config: cfg, + routes: map[string]map[*regexp.Regexp]http.HandlerFunc{}, + } + hdl.configure(routes) + return hdl +} + +func (hdl *adminHandler) configure(routes []*route) { + for _, route := range routes { + hdl.addRoute(route.method, route.pattern, route.handler) + } +} + +func (hdl *adminHandler) addRoute(method, pattern string, handler http.HandlerFunc) { + if _, ok := hdl.routes[method]; !ok { + hdl.routes[method] = map[*regexp.Regexp]http.HandlerFunc{} + } + hdl.routes[method][regexp.MustCompile(pattern)] = handler +} + +func (hdl *adminHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) { + if patterns, ok := hdl.routes[r.Method]; ok { + for re, handler := range patterns { + if re.MatchString(r.URL.Path) { + handler(w, r) + return + } 
+ } + } + encodeError(w, errNotFound) +} diff --git a/internal/admin/handlers/errors.go b/internal/admin/handlers/errors.go new file mode 100644 index 000000000..b7199b606 --- /dev/null +++ b/internal/admin/handlers/errors.go @@ -0,0 +1,30 @@ +package handlers + +import ( + "errors" + "fmt" + "jobctl/internal/config" + "net/http" +) + +var ( + errInvalidArgs = errors.New("invalid argument") + errNotFound = errors.New("not found") +) + +func formatError(err error) string { + return fmt.Sprintf("[Error] %s", err.Error()) +} + +func encodeError(w http.ResponseWriter, err error) { + switch err { + case config.ErrConfigNotFound: + http.Error(w, formatError(err), http.StatusNotFound) + case errInvalidArgs: + http.Error(w, formatError(err), http.StatusBadRequest) + case errNotFound: + http.Error(w, formatError(err), http.StatusNotFound) + default: + http.Error(w, formatError(err), http.StatusInternalServerError) + } +} diff --git a/internal/admin/handlers/html.go b/internal/admin/handlers/html.go new file mode 100644 index 000000000..c726b1c97 --- /dev/null +++ b/internal/admin/handlers/html.go @@ -0,0 +1,57 @@ +package handlers + +import ( + "bytes" + "embed" + "io" + "log" + "net/http" + "path" + "text/template" +) + +var defaultFuncs = template.FuncMap{ + "defTitle": func(ip interface{}) string { + v, ok := ip.(string) + if !ok || (ok && v == "") { + return "Jobctl Admin" + } + return v + }, +} + +//go:embed web/templates/* +var assets embed.FS +var templatePath = "web/templates/" +var defaultConfig = &struct { +}{} + +func useTemplate(layout string, name string) func(http.ResponseWriter, interface{}) { + files := append(baseTemplates(), path.Join(templatePath, layout)) + tmpl, err := template.New(name).Funcs(defaultFuncs).ParseFS(assets, files...) 
+ if err != nil { + panic(err) + } + + return func(w http.ResponseWriter, data interface{}) { + var buf bytes.Buffer + if err := tmpl.ExecuteTemplate(&buf, "base", data); err != nil { + log.Printf("ERR: %v\n", err) + http.Error(w, err.Error(), http.StatusInternalServerError) + return + } + w.WriteHeader(http.StatusOK) + io.Copy(w, &buf) + } +} + +func baseTemplates() []string { + var templateFiles = []string{ + "base.gohtml", + } + ret := []string{} + for _, t := range templateFiles { + ret = append(ret, path.Join(templatePath, t)) + } + return ret +} diff --git a/internal/admin/handlers/job.go b/internal/admin/handlers/job.go new file mode 100644 index 000000000..874095320 --- /dev/null +++ b/internal/admin/handlers/job.go @@ -0,0 +1,407 @@ +package handlers + +import ( + "fmt" + "io/ioutil" + "jobctl/internal/config" + "jobctl/internal/constants" + "jobctl/internal/controller" + "jobctl/internal/database" + "jobctl/internal/models" + "jobctl/internal/scheduler" + "net/http" + "os" + "path" + "path/filepath" + "regexp" + "sort" + "strconv" + "strings" + + "golang.org/x/text/encoding" + "golang.org/x/text/encoding/japanese" + "golang.org/x/text/transform" +) + +type jobStatus struct { + Name string + Vals []scheduler.NodeStatus +} + +type Log struct { + GridData []*jobStatus + Logs []*models.StatusFile +} + +type jobResponse struct { + Title string + Charset string + Job *controller.Job + Tab jobTabType + Graph string + Definition string + LogData *Log + LogUrl string + Group string + StepLog *stepLog + ScLog *schedulerLog +} + +type schedulerLog struct { + LogFile string + Content string +} + +type stepLog struct { + Step *models.Node + LogFile string + Content string +} + +type jobTabType int + +const ( + JobTabType_Status jobTabType = iota + JobTabType_Config + JobTabType_History + JobTabType_StepLog + JobTabType_ScLog + JobTabType_None +) + +type jobParameter struct { + Tab jobTabType + Group string + File string + Step string +} + +func newJobResponse(cfg 
string, job *controller.Job, tab jobTabType, + group string) *jobResponse { + return &jobResponse{ + Title: cfg, + Job: job, + Tab: tab, + Definition: "", + LogData: nil, + Group: group, + } +} + +type JobHandlerConfig struct { + JobsDir string + LogEncodingCharset string +} + +func HandleGetJob(hc *JobHandlerConfig) http.HandlerFunc { + renderFunc := useTemplate("job.gohtml", "job") + + return func(w http.ResponseWriter, r *http.Request) { + cfg, err := getPathParameter(r) + if err != nil { + encodeError(w, err) + return + } + + params := getJobParameter(r) + job, err := controller.FromConfig(filepath.Join(hc.JobsDir, params.Group, cfg)) + if err != nil { + encodeError(w, err) + return + } + c := controller.New(job.Config) + data := newJobResponse(cfg, job, params.Tab, params.Group) + + switch params.Tab { + case JobTabType_Status: + data.Graph = models.StepGraph(job.Status.Nodes, params.Tab != JobTabType_Config) + case JobTabType_Config: + steps := models.FromSteps(job.Config.Steps) + data.Graph = models.StepGraph(steps, params.Tab != JobTabType_Config) + data.Definition, _ = config.ReadConfig(path.Join(hc.JobsDir, params.Group, cfg)) + case JobTabType_History: + logs, err := controller.New(job.Config).GetStatusHist(30) + if err != nil { + encodeError(w, err) + return + } + data.LogData = buildLog(logs) + case JobTabType_StepLog: + if isJsonRequest(r) { + data.StepLog, err = readStepLog(c, params.File, params.Step, hc.LogEncodingCharset) + if err != nil { + encodeError(w, err) + return + } + } + case JobTabType_ScLog: + if isJsonRequest(r) { + data.ScLog, err = readSchedulerLog(c, params.File) + if err != nil { + encodeError(w, err) + return + } + } + default: + } + + if isJsonRequest(r) { + renderJson(w, data) + } else { + renderFunc(w, data) + } + } +} + +func isJsonRequest(r *http.Request) bool { + return r.Header.Get("Accept") == "application/json" +} + +type PostJobHandlerConfig struct { + JobsDir string + Bin string + WkDir string +} + +func 
HandlePostJobAction(hc *PostJobHandlerConfig) http.HandlerFunc { + + return func(w http.ResponseWriter, r *http.Request) { + action := r.FormValue("action") + group := r.FormValue("group") + reqId := r.FormValue("request-id") + + cfg, err := getPathParameter(r) + if err != nil { + encodeError(w, err) + return + } + + file := filepath.Join(hc.JobsDir, group, cfg) + job, err := controller.FromConfig(file) + if err != nil { + encodeError(w, err) + return + } + c := controller.New(job.Config) + + switch action { + case "start": + if job.Status.Status == scheduler.SchedulerStatus_Running { + w.WriteHeader(http.StatusBadRequest) + w.Write([]byte("job is already running.")) + return + } + err = c.StartJob(hc.Bin, hc.WkDir, "") + if err != nil { + w.WriteHeader(http.StatusInternalServerError) + w.Write([]byte(err.Error())) + return + } + case "stop": + if job.Status.Status != scheduler.SchedulerStatus_Running { + w.WriteHeader(http.StatusBadRequest) + w.Write([]byte("job is not running.")) + return + } + err = c.StopJob() + if err != nil { + w.WriteHeader(http.StatusNotFound) + w.Write([]byte(err.Error())) + return + } + case "retry": + if reqId == "" { + w.WriteHeader(http.StatusBadRequest) + w.Write([]byte("request-id is required.")) + return + } + err = c.RetryJob(hc.Bin, hc.WkDir, reqId) + if err != nil { + w.WriteHeader(http.StatusInternalServerError) + w.Write([]byte(err.Error())) + return + } + default: + encodeError(w, errInvalidArgs) + return + } + + http.Redirect(w, r, job.File, http.StatusSeeOther) + } +} + +func readSchedulerLog(c controller.Controller, file string) (*schedulerLog, error) { + logFile := "" + if file == "" { + s, err := c.GetLastStatus() + if err != nil { + return nil, fmt.Errorf("failed to read status") + } + logFile = s.Log + } else { + s, err := database.ParseFile(file) + if err != nil { + return nil, fmt.Errorf("failed to read status file %s", file) + } + logFile = s.Status.Log + } + b, err := os.ReadFile(logFile) + if err != nil { + return 
nil, fmt.Errorf("failed to read file %s", logFile) + } + return &schedulerLog{ + LogFile: file, + Content: string(b), + }, nil +} + +func readStepLog(c controller.Controller, file, stepName, enc string) (*stepLog, error) { + var steps []*models.Node = nil + var stepm = map[string]*models.Node{ + constants.OnSuccess: nil, + constants.OnFailure: nil, + constants.OnCancel: nil, + constants.OnExit: nil, + } + if file == "" { + s, err := c.GetLastStatus() + if err != nil { + return nil, fmt.Errorf("failed to read status") + } + steps = s.Nodes + stepm[constants.OnSuccess] = s.OnSuccess + stepm[constants.OnFailure] = s.OnFailure + stepm[constants.OnCancel] = s.OnCancel + stepm[constants.OnExit] = s.OnExit + } else { + s, err := database.ParseFile(file) + if err != nil { + return nil, fmt.Errorf("failed to read status file %s", file) + } + steps = s.Status.Nodes + stepm[constants.OnSuccess] = s.Status.OnSuccess + stepm[constants.OnFailure] = s.Status.OnFailure + stepm[constants.OnCancel] = s.Status.OnCancel + stepm[constants.OnExit] = s.Status.OnExit + } + var step *models.Node = nil + for _, s := range steps { + if s.Name == stepName { + step = s + break + } + } + if v, ok := stepm[stepName]; ok { + step = v + } + if step == nil { + return nil, fmt.Errorf("step was not found %s", stepName) + } + var b []byte = nil + var err error = nil + if strings.ToLower(enc) == "euc-jp" { + b, err = readFile(step.Log, japanese.EUCJP.NewDecoder()) + } else { + b, err = os.ReadFile(step.Log) + } + if err != nil { + return nil, fmt.Errorf("failed to read file %s", step.Log) + } + return &stepLog{ + LogFile: file, + Step: step, + Content: string(b), + }, nil +} + +func readFile(f string, decorder *encoding.Decoder) ([]byte, error) { + r, err := os.Open(f) + if err != nil { + return nil, fmt.Errorf("failed to read file %s", f) + } + defer r.Close() + tr := transform.NewReader(r, decorder) + ret, err := ioutil.ReadAll(tr) + return ret, err +} + +func buildLog(logs []*models.StatusFile) *Log 
{ + ret := &Log{ + GridData: []*jobStatus{}, + Logs: logs, + } + tmp := map[string][]scheduler.NodeStatus{} + add := func(step *models.Node, i int) { + n := step.Name + if _, ok := tmp[n]; !ok { + tmp[n] = make([]scheduler.NodeStatus, len(logs)) + } + tmp[n][i] = step.Status + } + for i, l := range logs { + for _, s := range l.Status.Nodes { + add(s, i) + } + } + for k, v := range tmp { + ret.GridData = append(ret.GridData, &jobStatus{Name: k, Vals: v}) + } + sort.Slice(ret.GridData, func(i, c int) bool { + return strings.Compare(ret.GridData[i].Name, ret.GridData[c].Name) <= 0 + }) + tmp = map[string][]scheduler.NodeStatus{} + for i, l := range logs { + if l.Status.OnSuccess != nil { + add(l.Status.OnSuccess, i) + } + if l.Status.OnFailure != nil { + add(l.Status.OnFailure, i) + } + if l.Status.OnCancel != nil { + add(l.Status.OnCancel, i) + } + if l.Status.OnExit != nil { + add(l.Status.OnExit, i) + } + } + for _, h := range []string{constants.OnSuccess, constants.OnFailure, constants.OnCancel, constants.OnExit} { + if v, ok := tmp[h]; ok { + ret.GridData = append(ret.GridData, &jobStatus{Name: h, Vals: v}) + } + } + return ret +} + +func getPathParameter(r *http.Request) (string, error) { + re := regexp.MustCompile("/([^/\\?]+)/?$") + m := re.FindStringSubmatch(r.URL.Path) + if len(m) < 2 { + return "", fmt.Errorf("invalid URL") + } + return m[1], nil +} + +func getJobParameter(r *http.Request) *jobParameter { + p := &jobParameter{ + Tab: JobTabType_Status, + Group: "", + } + if tab, ok := r.URL.Query()["t"]; ok { + i, err := strconv.Atoi(tab[0]) + if err != nil || i >= int(JobTabType_None) { + p.Tab = JobTabType_Status + } else { + p.Tab = jobTabType(i) + } + } + if group, ok := r.URL.Query()["group"]; ok { + p.Group = group[0] + } + if file, ok := r.URL.Query()["file"]; ok { + p.File = file[0] + } + if step, ok := r.URL.Query()["step"]; ok { + p.Step = step[0] + } + return p +} diff --git a/internal/admin/handlers/json.go b/internal/admin/handlers/json.go new 
file mode 100644 index 000000000..6ad48370b --- /dev/null +++ b/internal/admin/handlers/json.go @@ -0,0 +1,16 @@ +package handlers + +import ( + "encoding/json" + "log" + "net/http" +) + +func renderJson(w http.ResponseWriter, data interface{}) { + w.Header().Set("Content-Type", "application/json; charset=utf-8") + w.WriteHeader(http.StatusOK) + err := json.NewEncoder(w).Encode(data) + if err != nil { + log.Printf("%v", err) + } +} diff --git a/internal/admin/handlers/list.go b/internal/admin/handlers/list.go new file mode 100644 index 000000000..bec2e3a40 --- /dev/null +++ b/internal/admin/handlers/list.go @@ -0,0 +1,102 @@ +package handlers + +import ( + "io/ioutil" + "jobctl/internal/controller" + "log" + "net/http" + "path/filepath" +) + +type jobListResponse struct { + Title string + Charset string + Jobs []*controller.Job + Groups []*group + Group string + HasError bool +} + +type jobListParameter struct { + Group string +} + +type group struct { + Name string + Dir string +} + +type JobListHandlerConfig struct { + JobsDir string +} + +func HandleGetList(hc *JobListHandlerConfig) http.HandlerFunc { + renderFunc := useTemplate("index.gohtml", "index") + return func(w http.ResponseWriter, r *http.Request) { + params := getGetListParameter(r) + dir := filepath.Join(hc.JobsDir, params.Group) + jobs, err := controller.GetJobList(dir) + if err != nil { + encodeError(w, err) + return + } + + groups := []*group{} + if params.Group == "" { + groups, err = listGroups(dir) + if err != nil { + encodeError(w, err) + return + } + } + + hasErr := false + for _, j := range jobs { + if j.Error != nil { + hasErr = true + break + } + } + + data := &jobListResponse{ + Title: "JobList", + Jobs: jobs, + Groups: groups, + Group: params.Group, + HasError: hasErr, + } + if r.Header.Get("Accept") == "application/json" { + renderJson(w, data) + } else { + renderFunc(w, data) + } + } +} + +func getGetListParameter(r *http.Request) *jobListParameter { + p := &jobListParameter{ + Group: 
"", + } + if group, ok := r.URL.Query()["group"]; ok { + p.Group = group[0] + } + return p +} + +func listGroups(dir string) ([]*group, error) { + ret := []*group{} + + fis, err := ioutil.ReadDir(dir) + if err != nil || fis == nil { + log.Printf("%v", err) + } + for _, fi := range fis { + if !fi.IsDir() { + continue + } + ret = append(ret, &group{ + fi.Name(), filepath.Join(dir, fi.Name()), + }) + } + return ret, nil +} diff --git a/internal/admin/handlers/web/templates/base.gohtml b/internal/admin/handlers/web/templates/base.gohtml new file mode 100644 index 000000000..d0b37a316 --- /dev/null +++ b/internal/admin/handlers/web/templates/base.gohtml @@ -0,0 +1,90 @@ +{{define "base"}} + + + + + + + + + + + + + Jobctl Admin + + + + +
+ + {{template "content" .}} +
+
+ + +{{ end }} diff --git a/internal/admin/handlers/web/templates/index.gohtml b/internal/admin/handlers/web/templates/index.gohtml new file mode 100644 index 000000000..0d5d70e56 --- /dev/null +++ b/internal/admin/handlers/web/templates/index.gohtml @@ -0,0 +1,167 @@ +{{define "content"}} +
+ +{{end}} \ No newline at end of file diff --git a/internal/admin/handlers/web/templates/job.gohtml b/internal/admin/handlers/web/templates/job.gohtml new file mode 100644 index 000000000..c594d72d6 --- /dev/null +++ b/internal/admin/handlers/web/templates/job.gohtml @@ -0,0 +1,699 @@ +{{define "content"}} +
+ + +{{end}} \ No newline at end of file diff --git a/internal/admin/http.go b/internal/admin/http.go new file mode 100644 index 000000000..b8acbd788 --- /dev/null +++ b/internal/admin/http.go @@ -0,0 +1,83 @@ +package admin + +import ( + "context" + "log" + "net" + "net/http" + "time" +) + +type server struct { + config *Config + addr string + server *http.Server + admin *adminHandler + idleConnsClosed chan struct{} +} + +func NewServer(cfg *Config) *server { + return &server{ + addr: net.JoinHostPort(cfg.Host, cfg.Port), + config: cfg, + admin: newAdminHandler(cfg, defaultRoutes(cfg)), + idleConnsClosed: nil, + } +} + +func (svr *server) Shutdown() { + err := svr.server.Shutdown(context.Background()) + if err != nil { + log.Printf("server shutdown: %v", err) + } + close(svr.idleConnsClosed) +} + +func (svr *server) Serve() (err error) { + svr.setupServer() + svr.setupHandler() + + svr.idleConnsClosed = make(chan struct{}) + + log.Printf("admin server is running at \"http://%s\"\n", svr.addr) + + err = svr.server.ListenAndServe() + if err == http.ErrServerClosed { + err = nil + } + if err != nil { + return err + } + + <-svr.idleConnsClosed + + log.Printf("server closed") + + return +} + +func (svr *server) setupServer() { + svr.server = &http.Server{ + Addr: svr.addr, + } +} + +func (svr *server) setupHandler() { + svr.admin.addRoute(http.MethodPost, `^/shutdown$`, svr.handleShutdown) + handler := requestLogger(svr.admin) + if svr.config.IsBasicAuth { + handler = basicAuth(handler, + svr.config.BasicAuthUsername, + svr.config.BasicAuthPassword) + } + svr.server.Handler = handler +} + +func (svr *server) handleShutdown(w http.ResponseWriter, r *http.Request) { + log.Println("received shutdown request") + w.Write([]byte("shutting down the jobctl server...\n")) + go func() { + time.Sleep(time.Millisecond * 3000) + svr.Shutdown() + }() +} diff --git a/internal/admin/loader.go b/internal/admin/loader.go new file mode 100644 index 000000000..31fb33d7d --- /dev/null +++
b/internal/admin/loader.go @@ -0,0 +1,109 @@ +package admin + +import ( + "bytes" + "fmt" + "io/ioutil" + "path" + + "jobctl/internal/utils" + + "github.com/mitchellh/mapstructure" + + "gopkg.in/yaml.v2" +) + +type Loader struct{} + +func NewConfigLoader() *Loader { + return &Loader{} +} + +func DefaultConfig() (*Config, error) { + c := &Config{} + c.Init() + err := c.setup() + if err != nil { + return nil, err + } + return c, nil +} + +func (cl *Loader) LoadAdminConfig(file string) (*Config, error) { + + if file == "" { + homeDir := utils.MustGetUserHomeDir() + file = path.Join(homeDir, ".jobctl", "admin.yaml") + } + + if !utils.FileExists(file) { + return nil, ErrConfigNotFound + } + + raw, err := cl.load(file) + if err != nil { + return nil, err + } + + def, err := cl.decode(raw) + if err != nil { + return nil, err + } + + if def.Env == nil { + def.Env = map[string]string{} + } + for k, v := range utils.DefaultEnv() { + if _, ok := def.Env[k]; !ok { + def.Env[k] = v + } + } + + c, err := buildFromDefinition(def) + if err != nil { + return nil, err + } + + return c, nil +} + +func (cl *Loader) load(file string) (config map[string]interface{}, err error) { + if !utils.FileExists(file) { + return config, fmt.Errorf("file not found: %s", file) + } + return cl.readFile(file) +} + +func (cl *Loader) readFile(file string) (config map[string]interface{}, err error) { + data, err := ioutil.ReadFile(file) + if err != nil { + return nil, fmt.Errorf("%s: %v", file, err) + } + return cl.unmarshalData(data) +} + +func (cl *Loader) unmarshalData(data []byte) (map[string]interface{}, error) { + var cm map[string]interface{} + + err := yaml.NewDecoder(bytes.NewReader(data)).Decode(&cm) + if err != nil { + return nil, err + } + return cm, nil +} + +func (cl *Loader) decode(cm map[string]interface{}) (*configDefinition, error) { + c := &configDefinition{} + md, _ := mapstructure.NewDecoder(&mapstructure.DecoderConfig{ + ErrorUnused: true, + Result: c, + TagName: "", + }) + err
:= md.Decode(cm) + if err != nil { + return nil, err + } + return c, nil +} + +var ErrConfigNotFound = fmt.Errorf("admin.yaml file not found") diff --git a/internal/admin/loader_test.go b/internal/admin/loader_test.go new file mode 100644 index 000000000..e2742ac25 --- /dev/null +++ b/internal/admin/loader_test.go @@ -0,0 +1,87 @@ +package admin_test + +import ( + "jobctl/internal/admin" + "jobctl/internal/settings" + "jobctl/internal/utils" + "os" + "path" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +var ( + testsDir = path.Join(utils.MustGetwd(), "../../tests/admin/") + testsConfig = path.Join(testsDir, "admin.yaml") +) + +func TestMain(m *testing.M) { + os.Setenv("HOST", "localhost") + settings.InitTest(testsDir) + code := m.Run() + os.Exit(code) +} + +func TestDefaultConfig(t *testing.T) { + cfg, err := admin.DefaultConfig() + require.NoError(t, err) + + wd, err := os.Getwd() + require.NoError(t, err) + + h, err := os.Hostname() + require.NoError(t, err) + testConfig(t, cfg, &testWant{ + Host: h, + Port: "8000", + Jobs: path.Join(wd), + Command: "jobctl", + }) +} + +func TestHomeAdminConfig(t *testing.T) { + loader := admin.NewConfigLoader() + cfg, err := loader.LoadAdminConfig("") + require.NoError(t, err) + + testConfig(t, cfg, &testWant{ + Host: "localhost", + Port: "8081", + Jobs: path.Join(testsDir, "/jobctl/jobs"), + Command: path.Join(testsDir, "/jobctl/bin/jobctl"), + WorkDir: path.Join(testsDir, "/jobctl/jobs"), + }) +} + +func TestLoadAdminConfig(t *testing.T) { + loader := admin.NewConfigLoader() + cfg, err := loader.LoadAdminConfig(testsConfig) + require.NoError(t, err) + + testConfig(t, cfg, &testWant{ + Host: "localhost", + Port: "8082", + Jobs: path.Join(testsDir, "/jobctl/jobs"), + Command: path.Join(testsDir, "/jobctl/bin/jobctl"), + WorkDir: path.Join(testsDir, "/jobctl/jobs"), + }) +} + +func testConfig(t *testing.T, cfg *admin.Config, want *testWant) { + t.Helper() + assert.Equal(t,
want.Host, cfg.Host) + assert.Equal(t, want.Port, cfg.Port) + assert.Equal(t, want.Jobs, cfg.Jobs) + assert.Equal(t, want.WorkDir, cfg.WorkDir) + assert.Equal(t, want.Command, cfg.Command) +} + +type testWant struct { + Host string + Port string + Jobs string + Command string + WorkDir string +} diff --git a/internal/admin/logger.go b/internal/admin/logger.go new file mode 100644 index 000000000..19ee28534 --- /dev/null +++ b/internal/admin/logger.go @@ -0,0 +1,15 @@ +package admin + +import ( + "log" + "net/http" +) + +func requestLogger(next http.Handler) http.Handler { + return http.HandlerFunc( + func(w http.ResponseWriter, r *http.Request) { + log.Printf("Request received: %v %s %s", + r.RemoteAddr, r.Method, r.URL.Path) + next.ServeHTTP(w, r) + }) +} diff --git a/internal/admin/routes.go b/internal/admin/routes.go new file mode 100644 index 000000000..c4e83b593 --- /dev/null +++ b/internal/admin/routes.go @@ -0,0 +1,35 @@ +package admin + +import ( + "jobctl/internal/admin/handlers" + "net/http" +) + +type route struct { + method string + pattern string + handler http.HandlerFunc +} + +func defaultRoutes(cfg *Config) []*route { + return []*route{ + {http.MethodGet, `^/?$`, handlers.HandleGetList( + &handlers.JobListHandlerConfig{ + JobsDir: cfg.Jobs, + }, + )}, + {http.MethodGet, `^/([^/]+)$`, handlers.HandleGetJob( + &handlers.JobHandlerConfig{ + JobsDir: cfg.Jobs, + LogEncodingCharset: cfg.LogEncodingCharset, + }, + )}, + {http.MethodPost, `^/([^/]+)$`, handlers.HandlePostJobAction( + &handlers.PostJobHandlerConfig{ + JobsDir: cfg.Jobs, + Bin: cfg.Command, + WkDir: cfg.WorkDir, + }, + )}, + } +} diff --git a/internal/agent/agent.go b/internal/agent/agent.go new file mode 100644 index 000000000..060c967f6 --- /dev/null +++ b/internal/agent/agent.go @@ -0,0 +1,363 @@ +package agent + +import ( + "errors" + "fmt" + "jobctl/internal/config" + "jobctl/internal/constants" + "jobctl/internal/controller" + "jobctl/internal/database" + "jobctl/internal/mail" + 
"jobctl/internal/models" + "jobctl/internal/reporter" + "jobctl/internal/scheduler" + "jobctl/internal/sock" + "jobctl/internal/utils" + "log" + "net/http" + "os" + "path" + "path/filepath" + "regexp" + "syscall" + "time" + + "github.com/google/uuid" +) + +type Agent struct { + *Config + *RetryConfig + scheduler *scheduler.Scheduler + graph *scheduler.ExecutionGraph + logFilename string + reporter *reporter.Reporter + database *database.Database + dbWriter *database.Writer + socketServer *sock.Server + requestId string +} + +type Config struct { + Job *config.Config + Dry bool +} + +type RetryConfig struct { + Status *models.Status +} + +func (a *Agent) Run() error { + a.init() + if err := a.setupGraph(); err != nil { + return err + } + if err := a.checkPreconditions(); err != nil { + return err + } + if a.Dry { + return a.dryRun() + } + setup := []func() error{ + a.checkIsRunning, + a.setupRequestId, + a.setupDatabase, + a.setupSocketServer, + } + for _, fn := range setup { + err := fn() + if err != nil { + return err + } + } + return a.run() +} + +func (a *Agent) Status() *models.Status { + status := models.NewStatus( + a.Job, + a.graph.Nodes(), + a.scheduler.Status(a.graph), + os.Getpid(), + &a.graph.StartedAt, + &a.graph.FinishedAt, + ) + status.RequestId = a.requestId + status.Log = a.logFilename + if node := a.scheduler.HanderNode(constants.OnExit); node != nil { + status.OnExit = models.FromNode(node) + } + if node := a.scheduler.HanderNode(constants.OnSuccess); node != nil { + status.OnSuccess = models.FromNode(node) + } + if node := a.scheduler.HanderNode(constants.OnFailure); node != nil { + status.OnFailure = models.FromNode(node) + } + if node := a.scheduler.HanderNode(constants.OnCancel); node != nil { + status.OnCancel = models.FromNode(node) + } + return status +} + +// Signal sends the signal to the processes running +// if processes do not terminate for 60 seconds, +// cancel all processes which will send signal -1 to the processes. 
+func (a *Agent) Signal(sig os.Signal) { + log.Printf("Sending %s signal to running child processes.", sig) + done := make(chan bool) + go func() { + a.scheduler.Signal(a.graph, sig, done) + }() + select { + case <-done: + log.Printf("All child processes have been terminated.") + case <-time.After(time.Second * 60): + a.Cancel(sig) + default: + log.Printf("Waiting for child processes to exit...") + time.Sleep(time.Second * 1) + } +} + +// Cancel sends signal -1 to all child processes, +// then waits another 20 seconds before terminating the +// parent process. +func (a *Agent) Cancel(sig os.Signal) { + log.Printf("Sending -1 signal to running child processes.") + done := make(chan bool) + go func() { + a.scheduler.Cancel(a.graph, done) + }() + select { + case <-done: + log.Printf("All child processes have been terminated.") + case <-time.After(time.Second * 20): + log.Printf("Terminating the controller process.") + a.Kill(done) + default: + log.Printf("Waiting for child processes to exit...") + time.Sleep(time.Second * 1) + } +} + +// Kill sends signal SIGKILL to all child processes.
+func (a *Agent) Kill(done chan bool) { + if a.scheduler == nil { + panic("Invalid state") + } + a.scheduler.Signal(a.graph, syscall.SIGKILL, done) +} + +func (a *Agent) init() { + a.scheduler = scheduler.New( + &scheduler.Config{ + LogDir: path.Join(a.Job.LogDir, utils.ValidFilename(a.Job.Name, "_")), + MaxActiveRuns: a.Job.MaxActiveRuns, + DelaySec: a.Job.DelaySec, + Dry: a.Dry, + OnExit: a.Job.HandlerOn.Exit, + OnSuccess: a.Job.HandlerOn.Success, + OnFailure: a.Job.HandlerOn.Failure, + OnCancel: a.Job.HandlerOn.Cancel, + }) + a.reporter = reporter.New(&reporter.Config{ + Mailer: mail.New( + &mail.Config{ + Host: a.Job.Smtp.Host, + Port: a.Job.Smtp.Port, + }), + }) + a.logFilename = filepath.Join( + a.Job.LogDir, fmt.Sprintf("%s.%s.log", + utils.ValidFilename(a.Job.Name, "_"), + time.Now().Format("20060102.15:04:05"), + )) +} + +func (a *Agent) setupGraph() (err error) { + if a.RetryConfig != nil && a.RetryConfig.Status != nil { + log.Printf("setup for retry") + return a.setupRetry() + } + a.graph, err = scheduler.NewExecutionGraph(a.Job.Steps...) + return +} + +func (a *Agent) setupRetry() (err error) { + nodes := []*scheduler.Node{} + for _, n := range a.RetryConfig.Status.Nodes { + nodes = append(nodes, n.ToNode()) + } + a.graph, err = scheduler.RetryExecutionGraph(nodes...) 
+ return +} + +func (a *Agent) setupRequestId() error { + id, err := uuid.NewRandom() + if err != nil { + return err + } + a.requestId = id.String() + return nil +} + +func (a *Agent) setupDatabase() (err error) { + a.database = database.New(database.DefaultConfig()) + a.dbWriter, _, err = a.database.NewWriter(a.Job.ConfigPath, time.Now()) + return +} + +func (a *Agent) setupSocketServer() (err error) { + a.socketServer, err = sock.NewServer( + &sock.Config{ + Addr: sock.GetSockAddr(a.Job.ConfigPath), + HandlerFunc: a.handleHTTP, + }) + return +} + +func (a *Agent) checkPreconditions() error { + if len(a.Job.Preconditions) > 0 { + log.Printf("checking pre conditions for \"%s\"", a.Job.Name) + if err := config.EvalConditions(a.Job.Preconditions); err != nil { + done := make(chan bool) + go a.scheduler.Cancel(a.graph, done) + <-done + return err + } + } + return nil +} + +func (a *Agent) run() error { + tl := &teeLogger{ + filename: a.logFilename, + } + if err := tl.Open(); err != nil { + return err + } + defer tl.Close() + + err := a.dbWriter.Open() + if err != nil { + return err + } + defer a.dbWriter.Close() + + a.dbWriter.Write(a.Status()) + + listen := make(chan error) + go func() { + err := a.socketServer.Serve(listen) + if err != nil && err != sock.ErrServerRequestedShutdown { + log.Printf("failed to start socket server %v", err) + } + }() + defer func() { + a.socketServer.Shutdown() + }() + + select { + case err := <-listen: + if err != nil { + return fmt.Errorf("failed to start the socket server.") + } + } + + done := make(chan *scheduler.Node) + defer close(done) + go func() { + for node := range done { + a.dbWriter.Write(a.Status()) + a.reporter.ReportStep(a.scheduler, a.graph, a.Job, node) + } + }() + + lastErr := a.scheduler.Schedule(a.graph, done) + status := a.scheduler.Status(a.graph) + + log.Println("schedule finished.") + if err := a.dbWriter.Write(a.Status()); err != nil { + log.Printf("failed to write status. 
%s", err) + } + + a.reporter.Report(status, a.graph.Nodes(), lastErr) + if err := a.reporter.ReportMail(status, a.graph, lastErr, a.Job); err != nil { + log.Printf("failed to send mail. %s", err) + } + + return lastErr +} + +func (a *Agent) dryRun() error { + done := make(chan *scheduler.Node) + defer close(done) + go func() { + for node := range done { + a.reporter.ReportStep(a.scheduler, a.graph, a.Job, node) + } + }() + + log.Printf("***** Starting DRY-RUN *****") + + lastErr := a.scheduler.Schedule(a.graph, done) + status := a.scheduler.Status(a.graph) + a.reporter.Report(status, a.graph.Nodes(), lastErr) + + log.Printf("***** Finished DRY-RUN *****") + + return lastErr +} + +func (a *Agent) checkIsRunning() error { + status, err := controller.New(a.Job).GetStatus() + if err != nil { + return err + } + if status.Status != scheduler.SchedulerStatus_None { + return fmt.Errorf("The job is already running. socket=%s", + sock.GetSockAddr(a.Job.ConfigPath)) + } + return nil +} + +var ( + statusRe = regexp.MustCompile(`^/status[/]?$`) + stopRe = regexp.MustCompile(`^/stop[/]?$`) +) + +func (a *Agent) handleHTTP(w http.ResponseWriter, r *http.Request) { + w.Header().Set("content-type", "application/json") + switch { + case r.Method == http.MethodGet && statusRe.MatchString(r.URL.Path): + status := a.Status() + b, err := status.ToJson() + if err != nil { + encodeError(w, err) + return + } + w.WriteHeader(http.StatusOK) + w.Write(b) + case r.Method == http.MethodPost && stopRe.MatchString(r.URL.Path): + encodeResult(w, true) + a.Signal(syscall.SIGINT) + default: + encodeError(w, ErrNotFound) + } +} + +func encodeResult(w http.ResponseWriter, result bool) { + w.WriteHeader(http.StatusOK) + w.Write([]byte("OK")) +} + +var ErrNotFound = errors.New("not found") + +func encodeError(w http.ResponseWriter, err error) { + switch err { + case ErrNotFound: + http.Error(w, err.Error(), http.StatusNotFound) + default: + http.Error(w, err.Error(), http.StatusInternalServerError) + } 
+} diff --git a/internal/agent/agent_test.go b/internal/agent/agent_test.go new file mode 100644 index 000000000..8bf261309 --- /dev/null +++ b/internal/agent/agent_test.go @@ -0,0 +1,170 @@ +package agent_test + +import ( + "jobctl/internal/agent" + "jobctl/internal/config" + "jobctl/internal/controller" + "jobctl/internal/models" + "jobctl/internal/scheduler" + "jobctl/internal/settings" + "jobctl/internal/utils" + "os" + "path" + "syscall" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +var testsDir = path.Join(utils.MustGetwd(), "../../tests/testdata") + +func TestMain(m *testing.M) { + tempDir := utils.MustTempDir("agent_test") + settings.InitTest(tempDir) + code := m.Run() + os.RemoveAll(tempDir) + os.Exit(code) +} + +func TestRunJob(t *testing.T) { + job, err := controller.FromConfig(testConfig("basic_success.yaml")) + require.NoError(t, err) + + status, err := testJob(t, job) + require.NoError(t, err) + + assert.Equal(t, scheduler.SchedulerStatus_Success, status.Status) +} + +func TestCancelJob(t *testing.T) { + for _, abort := range []func(*agent.Agent){ + func(a *agent.Agent) { a.Signal(syscall.SIGTERM) }, + func(a *agent.Agent) { a.Cancel(syscall.SIGTERM) }, + func(a *agent.Agent) { a.Kill(nil) }, + } { + a, job := testJobAsync(t, testConfig("basic_sleep_long.yaml")) + time.Sleep(time.Millisecond * 100) + abort(a) + time.Sleep(time.Millisecond * 500) + status, err := controller.New(job.Config).GetLastStatus() + require.NoError(t, err) + assert.Equal(t, scheduler.SchedulerStatus_Cancel, status.Status) + } +} + +func TestPreConditionInvalid(t *testing.T) { + job, err := controller.FromConfig(testConfig("multiple_steps.yaml")) + require.NoError(t, err) + + job.Config.Preconditions = []*config.Condition{ + { + Condition: "`echo 1`", + Expected: "0", + }, + } + + status, err := testJob(t, job) + require.Error(t, err) + + assert.Equal(t, scheduler.SchedulerStatus_Cancel, status.Status) + for _, 
s := range status.Nodes { + assert.Equal(t, scheduler.NodeStatus_Cancel, s.Status) + } +} + +func TestPreConditionValid(t *testing.T) { + job, err := controller.FromConfig(testConfig("with_params.yaml")) + require.NoError(t, err) + + job.Config.Preconditions = []*config.Condition{ + { + Condition: "`echo 1`", + Expected: "1", + }, + } + status, err := testJob(t, job) + require.NoError(t, err) + + assert.Equal(t, scheduler.SchedulerStatus_Success, status.Status) + for _, s := range status.Nodes { + assert.Equal(t, scheduler.NodeStatus_Success, s.Status) + } +} + +func TestOnExit(t *testing.T) { + job, err := controller.FromConfig(testConfig("with_teardown.yaml")) + require.NoError(t, err) + status, err := testJob(t, job) + require.NoError(t, err) + + assert.Equal(t, scheduler.SchedulerStatus_Success, status.Status) + for _, s := range status.Nodes { + assert.Equal(t, scheduler.NodeStatus_Success, s.Status) + } + assert.Equal(t, scheduler.NodeStatus_Success, status.OnExit.Status) +} + +func TestRetry(t *testing.T) { + cfg := testConfig("agent_retry.yaml") + job, err := controller.FromConfig(cfg) + require.NoError(t, err) + + status, err := testJob(t, job) + require.Error(t, err) + assert.Equal(t, scheduler.SchedulerStatus_Error, status.Status) + + for _, n := range status.Nodes { + n.Command = "true" + } + a := &agent.Agent{ + Config: &agent.Config{ + Job: job.Config, + }, + RetryConfig: &agent.RetryConfig{ + Status: status, + }, + } + err = a.Run() + status = a.Status() + require.NoError(t, err) + assert.Equal(t, scheduler.SchedulerStatus_Success, status.Status) + + for _, n := range status.Nodes { + if n.Status != scheduler.NodeStatus_Success && + n.Status != scheduler.NodeStatus_Skipped { + t.Errorf("invalid status: %s", n.Status.String()) + } + } +} + +func testJob(t *testing.T, job *controller.Job) (*models.Status, error) { + t.Helper() + a := &agent.Agent{Config: &agent.Config{ + Job: job.Config, + }} + err := a.Run() + return a.Status(), err +} + +func 
testConfig(name string) string { + return path.Join(testsDir, name) +} + +func testJobAsync(t *testing.T, file string) (*agent.Agent, *controller.Job) { + t.Helper() + + job, err := controller.FromConfig(file) + require.NoError(t, err) + + a := &agent.Agent{Config: &agent.Config{ + Job: job.Config, + }} + + go func() { + a.Run() + }() + + return a, job +} diff --git a/internal/agent/logger.go b/internal/agent/logger.go new file mode 100644 index 000000000..fbe69756c --- /dev/null +++ b/internal/agent/logger.go @@ -0,0 +1,40 @@ +package agent + +import ( + "io" + "jobctl/internal/utils" + "log" + "os" + "path" +) + +type teeLogger struct { + filename string + file *os.File +} + +func (l *teeLogger) Open() error { + dir := path.Dir(l.filename) + if err := os.MkdirAll(dir, 0755); err != nil { + return err + } + var err error + l.file, err = utils.OpenOrCreateFile(l.filename) + if err != nil { + return err + } + mw := io.MultiWriter(os.Stdout, l.file) + log.SetOutput(mw) + return nil +} + +func (l *teeLogger) Close() error { + var lastErr error = nil + if l.file != nil { + if err := l.file.Close(); err != nil { + lastErr = err + } + } + log.SetOutput(os.Stdout) + return lastErr +} diff --git a/internal/config/condition.go b/internal/config/condition.go new file mode 100644 index 000000000..88f7520ad --- /dev/null +++ b/internal/config/condition.go @@ -0,0 +1,54 @@ +package config + +import ( + "fmt" + "jobctl/internal/utils" +) + +type Condition struct { + Condition string + Expected string +} + +type ConditionResult struct { + Condition string + Expected string + Actual string +} + +func (c *Condition) Eval() (*ConditionResult, error) { + ret, err := utils.ParseVariable(c.Condition) + if err != nil { + return nil, err + } + return &ConditionResult{ + Condition: c.Condition, + Expected: c.Expected, + Actual: ret, + }, nil +} + +func EvalCondition(c *Condition) error { + r, err := c.Eval() + if err != nil { + return fmt.Errorf( + "failed to evaluate condition. 
Condition=%s Error=%v", + c.Condition, err) + } + if r.Expected != r.Actual { + return fmt.Errorf( + "condition was not met. Condition=%s Expected=%s Actual=%s", + r.Condition, r.Expected, r.Actual) + } + return err +} + +func EvalConditions(cond []*Condition) error { + for _, c := range cond { + err := EvalCondition(c) + if err != nil { + return err + } + } + return nil +} diff --git a/internal/config/condition_test.go b/internal/config/condition_test.go new file mode 100644 index 000000000..5e87369b7 --- /dev/null +++ b/internal/config/condition_test.go @@ -0,0 +1,79 @@ +package config_test + +import ( + "jobctl/internal/config" + "os" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestCondition(t *testing.T) { + { + c := &config.Condition{ + Condition: "`echo 1`", + Expected: "1", + } + ret, err := c.Eval() + require.NoError(t, err) + assert.Equal(t, ret.Condition, c.Condition) + assert.Equal(t, ret.Expected, c.Expected) + assert.Equal(t, ret.Actual, c.Expected) + } + { + os.Setenv("TEST_CONDITION", "100") + c := &config.Condition{ + Condition: "${TEST_CONDITION}", + Expected: "100", + } + ret, err := c.Eval() + require.NoError(t, err) + assert.Equal(t, ret.Condition, c.Condition) + assert.Equal(t, ret.Expected, c.Expected) + assert.Equal(t, ret.Actual, c.Expected) + } +} + +func TestEvalConditions(t *testing.T) { + for scenario, test := range map[string]struct { + Conditions []*config.Condition + Want bool + }{ + "no error conditions": { + []*config.Condition{ + { + Condition: "`echo 1`", + Expected: "1", + }, + { + Condition: "`echo 100`", + Expected: "100", + }, + }, + true, + }, + "fail conditions": { + []*config.Condition{ + { + Condition: "`echo 1`", + Expected: "1", + }, + { + Condition: "`echo 100`", + Expected: "0", + }, + }, + false, + }, + } { + t.Run(scenario, func(t *testing.T) { + err := config.EvalConditions(test.Conditions) + if test.Want { + require.NoError(t, err) + } else { + 
require.Error(t, err) + } + }) + } +} diff --git a/internal/config/config.go b/internal/config/config.go new file mode 100644 index 000000000..f6498ee1c --- /dev/null +++ b/internal/config/config.go @@ -0,0 +1,370 @@ +package config + +import ( + "encoding/csv" + "fmt" + "jobctl/internal/constants" + "jobctl/internal/settings" + "jobctl/internal/utils" + "os" + "path" + "strconv" + "strings" + "time" +) + +type Config struct { + ConfigPath string + Name string + Description string + Env []string + LogDir string + HandlerOn HandlerOn + Steps []*Step + MailOn MailOn + ErrorMail *MailConfig + InfoMail *MailConfig + Smtp *SmtpConfig + DelaySec time.Duration + HistRetentionDays int + Preconditions []*Condition + MaxActiveRuns int + Params []string + DefaultParams string +} + +type HandlerOn struct { + Failure *Step + Success *Step + Cancel *Step + Exit *Step +} + +type MailOn struct { + Failure bool + Success bool +} + +func ReadConfig(file string) (string, error) { + b, err := os.ReadFile(file) + if err != nil { + return "", err + } + return string(b), nil +} + +func (c *Config) Init() { + if c.Env == nil { + c.Env = []string{} + } + if c.Steps == nil { + c.Steps = []*Step{} + } + if c.Params == nil { + c.Params = []string{} + } + if c.Preconditions == nil { + c.Preconditions = []*Condition{} + } +} + +func (c *Config) setup(file string) { + c.ConfigPath = file + if c.LogDir == "" { + c.LogDir = path.Join( + settings.MustGet(settings.CONFIG__LOGS_DIR), + utils.ValidFilename(c.Name, "_")) + } + if c.HistRetentionDays == 0 { + c.HistRetentionDays = 7 + } + dir := path.Dir(file) + for _, step := range c.Steps { + c.setupStep(step, dir) + } + if c.HandlerOn.Exit != nil { + c.setupStep(c.HandlerOn.Exit, dir) + } + if c.HandlerOn.Success != nil { + c.setupStep(c.HandlerOn.Success, dir) + } + if c.HandlerOn.Failure != nil { + c.setupStep(c.HandlerOn.Failure, dir) + } + if c.HandlerOn.Cancel != nil { + c.setupStep(c.HandlerOn.Cancel, dir) + } +} + +func (c *Config) 
setupStep(step *Step, defaultDir string) { + if step.Dir == "" { + step.Dir = path.Dir(c.ConfigPath) + } +} + +func (c *Config) Clone() *Config { + ret := *c + return &ret +} + +func (c *Config) String() string { + ret := fmt.Sprintf("{\n") + ret = fmt.Sprintf("%s\tName: %s\n", ret, c.Name) + ret = fmt.Sprintf("%s\tDescription: %s\n", ret, strings.TrimSpace(c.Description)) + ret = fmt.Sprintf("%s\tEnv: %v\n", ret, strings.Join(c.Env, ", ")) + ret = fmt.Sprintf("%s\tLogDir: %v\n", ret, c.LogDir) + for i, s := range c.Steps { + ret = fmt.Sprintf("%s\tStep%d: %v\n", ret, i, s) + } + ret = fmt.Sprintf("%s}\n", ret) + return ret +} + +type BuildConfigOptions struct { + headOnly bool + parameters string +} + +func buildFromDefinition(def *configDefinition, file string, globalConfig *Config, + opts *BuildConfigOptions) (c *Config, err error) { + c = &Config{} + c.Init() + + c.Name = def.Name + c.Description = def.Description + c.MailOn.Failure = def.MailOn.Failure + c.MailOn.Success = def.MailOn.Success + c.DelaySec = time.Second * time.Duration(def.DelaySec) + + if opts != nil && opts.headOnly { + return c, nil + } + + env, err := loadVariables(def.Env) + if err != nil { + return nil, err + } + + c.Env = buildConfigEnv(env) + if globalConfig != nil { + for _, e := range globalConfig.Env { + key := strings.SplitN(e, "=", 2)[0] + if _, ok := env[key]; !ok { + c.Env = append(c.Env, e) + } + } + } + + logDir, err := utils.ParseVariable(def.LogDir) + if err != nil { + return nil, err + } + c.LogDir = logDir + if def.HistRetentionDays != nil { + c.HistRetentionDays = *def.HistRetentionDays + } + + c.DefaultParams = def.Params + if opts.parameters != "" { + c.Params, err = parseParameters(opts.parameters, false) + if err != nil { + return nil, err + } + } else { + c.Params, err = parseParameters(c.DefaultParams, true) + if err != nil { + return nil, err + } + } + + c.Steps, err = buildStepsFromDefinition(c.Env, def.Steps) + if err != nil { + return nil, err + } + + if 
def.HandlerOn.Exit != nil { + def.HandlerOn.Exit.Name = constants.OnExit + c.HandlerOn.Exit, err = buildStep(c.Env, def.HandlerOn.Exit) + if err != nil { + return nil, err + } + } + + if def.HandlerOn.Success != nil { + def.HandlerOn.Success.Name = constants.OnSuccess + c.HandlerOn.Success, err = buildStep(c.Env, def.HandlerOn.Success) + if err != nil { + return nil, err + } + } + + if def.HandlerOn.Failure != nil { + def.HandlerOn.Failure.Name = constants.OnFailure + c.HandlerOn.Failure, err = buildStep(c.Env, def.HandlerOn.Failure) + if err != nil { + return nil, err + } + } + + if def.HandlerOn.Cancel != nil { + def.HandlerOn.Cancel.Name = constants.OnCancel + c.HandlerOn.Cancel, err = buildStep(c.Env, def.HandlerOn.Cancel) + if err != nil { + return nil, err + } + } + + c.Smtp, err = buildSmtpConfigFromDefinition(def.Smtp) + if err != nil { + return nil, err + } + c.ErrorMail, err = buildMailConfigFromDefinition(def.ErrorMail) + if err != nil { + return nil, err + } + c.InfoMail, err = buildMailConfigFromDefinition(def.InfoMail) + if err != nil { + return nil, err + } + c.Preconditions = loadPreCondition(def.Preconditions) + c.MaxActiveRuns = def.MaxActiveRuns + + return c, nil +} + +func parseParameters(value string, eval bool) ([]string, error) { + params := value + var err error + if eval { + params, err = utils.ParseCommand(os.ExpandEnv(value)) + if err != nil { + return nil, err + } + } + r := csv.NewReader(strings.NewReader(params)) + r.Comma = ' ' + records, err := r.ReadAll() + if err != nil { + return nil, err + } + ret := []string{} + for _, r := range records { + for i, v := range r { + err = os.Setenv(strconv.Itoa(i+1), v) + if err != nil { + return nil, err + } + ret = append(ret, v) + } + } + return ret, nil +} + +func buildSmtpConfigFromDefinition(def smtpConfigDef) (*SmtpConfig, error) { + smtp := &SmtpConfig{} + smtp.Host = def.Host + smtp.Port = def.Port + return smtp, nil +} + +func buildMailConfigFromDefinition(def mailConfigDef) 
(*MailConfig, error) { + c := &MailConfig{} + c.From = def.From + c.To = def.To + c.Prefix = def.Prefix + return c, nil +} + +func buildStepsFromDefinition(variables []string, stepDefs []*stepDef) ([]*Step, error) { + ret := []*Step{} + for _, def := range stepDefs { + step, err := buildStep(variables, def) + if err != nil { + return nil, err + } + ret = append(ret, step) + } + return ret, nil +} + +func buildStep(variables []string, def *stepDef) (*Step, error) { + if err := assertStepDef(def); err != nil { + return nil, err + } + step := &Step{} + step.Name = def.Name + step.Description = def.Description + step.Command, step.Args = utils.SplitCommand(def.Command) + step.Dir = os.ExpandEnv(def.Dir) + step.Variables = variables + step.Depends = def.Depends + if def.ContinueOn != nil { + step.ContinueOn.Skipped = def.ContinueOn.Skipped + step.ContinueOn.Failure = def.ContinueOn.Failure + } + if def.RetryPolicy != nil { + step.RetryPolicy = &RetryPolicy{ + Limit: def.RetryPolicy.Limit, + } + } + step.MailOnError = def.MailOnError + step.Repeat = def.Repeat + step.RepeatInterval = time.Second * time.Duration(def.RepeatIntervalSec) + step.Preconditions = loadPreCondition(def.Preconditions) + return step, nil +} + +func buildConfigEnv(vars map[string]string) []string { + ret := []string{} + for k, v := range vars { + ret = append(ret, fmt.Sprintf("%s=%s", k, v)) + } + return ret +} + +func loadPreCondition(cond []*conditionDef) []*Condition { + ret := []*Condition{} + for _, v := range cond { + ret = append(ret, &Condition{ + Condition: v.Condition, + Expected: v.Expected, + }) + } + return ret +} + +func loadVariables(strVariables map[string]string) (map[string]string, error) { + vars := map[string]string{} + for k, v := range strVariables { + parsed, err := utils.ParseVariable(v) + if err != nil { + return nil, err + } + vars[k] = parsed + err = os.Setenv(k, parsed) + if err != nil { + return nil, err + } + } + return vars, nil +} + +func assertDef(def 
*configDefinition) error { + if def.Name == "" { + return fmt.Errorf("job name must be specified.") + } + if len(def.Steps) == 0 { + return fmt.Errorf("at least one step must be specified.") + } + return nil +} + +func assertStepDef(def *stepDef) error { + if def.Name == "" { + return fmt.Errorf("step name must be specified.") + } + if def.Command == "" { + return fmt.Errorf("step command must be specified.") + } + return nil +} diff --git a/internal/config/config_test.go b/internal/config/config_test.go new file mode 100644 index 000000000..a27949473 --- /dev/null +++ b/internal/config/config_test.go @@ -0,0 +1,58 @@ +package config_test + +import ( + "fmt" + "jobctl/internal/config" + "jobctl/internal/settings" + "jobctl/internal/utils" + "os" + "path" + "testing" + + "github.com/stretchr/testify/require" +) + +var ( + testDir = path.Join(utils.MustGetwd(), "../../tests/testdata") + testHomeDir = path.Join(utils.MustGetwd(), "../../tests/config") + testConfig = path.Join(testDir, "all.yaml") + testEnv = []string{} +) + +func TestMain(m *testing.M) { + settings.InitTest(testHomeDir) + testEnv = []string{ + fmt.Sprintf("LOG_DIR=%s", path.Join(testHomeDir, "/logs")), + fmt.Sprintf("PATH=%s", os.ExpandEnv("${PATH}")), + } + code := m.Run() + os.Exit(code) +} + +func TestAssertDefinition(t *testing.T) { + loader := config.NewConfigLoader() + + _, err := loader.Load(path.Join(testDir, "err_no_name.yaml"), "") + require.Equal(t, err, fmt.Errorf("job name must be specified.")) + + _, err = loader.Load(path.Join(testDir, "err_no_steps.yaml"), "") + require.Equal(t, err, fmt.Errorf("at least one step must be specified.")) +} + +func TestAssertStepDefinition(t *testing.T) { + loader := config.NewConfigLoader() + + _, err := loader.Load(path.Join(testDir, "err_step_no_name.yaml"), "") + require.Equal(t, err, fmt.Errorf("step name must be specified.")) + + _, err = loader.Load(path.Join(testDir, "err_step_no_command.yaml"), "") + require.Equal(t, err, fmt.Errorf("step 
command must be specified.")) +} + +func TestReadConfig(t *testing.T) { + f, err := config.ReadConfig(testConfig) + require.NoError(t, err) + if len(f) == 0 { + t.Error("reading yaml file failed") + } +} diff --git a/internal/config/definition.go b/internal/config/definition.go new file mode 100644 index 000000000..d8a1bf035 --- /dev/null +++ b/internal/config/definition.go @@ -0,0 +1,70 @@ +package config + +type configDefinition struct { + Name string + Description string + LogDir string + Env map[string]string + HandlerOn handerOnDef + Steps []*stepDef + Smtp smtpConfigDef + MailOn mailOnDef + ErrorMail mailConfigDef + InfoMail mailConfigDef + DelaySec int + HistRetentionDays *int + Preconditions []*conditionDef + MaxActiveRuns int + Params string +} + +type conditionDef struct { + Condition string + Expected string +} + +type handerOnDef struct { + Failure *stepDef + Success *stepDef + Cancel *stepDef + Exit *stepDef +} + +type stepDef struct { + Name string + Description string + Dir string + Command string + Depends []string + ContinueOn *continueOnDef + RetryPolicy *retryPolicyDef + MailOnError bool + Repeat bool + RepeatIntervalSec int + Preconditions []*conditionDef +} + +type continueOnDef struct { + Failure bool + Skipped bool +} + +type retryPolicyDef struct { + Limit int +} + +type smtpConfigDef struct { + Host string + Port string +} + +type mailConfigDef struct { + From string + To string + Prefix string +} + +type mailOnDef struct { + Failure bool + Success bool +} diff --git a/internal/config/loader.go b/internal/config/loader.go new file mode 100644 index 000000000..9baebaf05 --- /dev/null +++ b/internal/config/loader.go @@ -0,0 +1,200 @@ +package config + +import ( + "bytes" + "errors" + "fmt" + "io/ioutil" + "path" + "path/filepath" + + "jobctl/internal/utils" + + "github.com/imdario/mergo" + "github.com/mitchellh/mapstructure" + + "gopkg.in/yaml.v2" +) + +var ErrConfigNotFound = errors.New("config file was not found") + +type Loader 
struct { + dir string + homeDir string +} + +func NewConfigLoader() *Loader { + return &Loader{ + homeDir: utils.MustGetUserHomeDir(), + dir: utils.MustGetwd(), + } +} + +func (cl *Loader) Load(f, params string) (*Config, error) { + file, err := filepath.Abs(f) + if err != nil { + return nil, err + } + + dst, err := cl.LoadGlobalConfig() + if err != nil { + return nil, err + } + if dst == nil { + dst = &Config{} + dst.Init() + } + + raw, err := cl.load(file) + if err != nil { + return nil, err + } + + def, err := cl.decode(raw) + if err != nil { + return nil, err + } + + if err := assertDef(def); err != nil { + return nil, err + } + + c, err := buildFromDefinition(def, file, + dst, + &BuildConfigOptions{ + headOnly: false, + parameters: params, + }) + if err != nil { + return nil, err + } + + err = cl.merge(dst, c) + if err != nil { + return nil, err + } + + dst.setup(file) + + return dst, nil +} + +func (cl *Loader) LoadHeadOnly(f string) (*Config, error) { + file, err := filepath.Abs(f) + if err != nil { + return nil, err + } + + raw, err := cl.load(file) + if err != nil { + return nil, err + } + + def, err := cl.decode(raw) + if err != nil { + return nil, err + } + + if err := assertDef(def); err != nil { + return nil, err + } + + c, err := buildFromDefinition(def, file, nil, + &BuildConfigOptions{ + headOnly: true, + }) + if err != nil { + return nil, err + } + + c.setup(file) + + return c, nil +} + +func (cl *Loader) LoadGlobalConfig() (*Config, error) { + if cl.homeDir == "" { + return nil, fmt.Errorf("home directory was not found.") + } + + file := path.Join(cl.homeDir, ".jobctl", "config.yaml") + if !utils.FileExists(file) { + return nil, nil + } + + raw, err := cl.load(file) + if err != nil { + return nil, err + } + + def, err := cl.decode(raw) + if err != nil { + return nil, err + } + + if def.Env == nil { + def.Env = map[string]string{} + } + for k, v := range utils.DefaultEnv() { + if _, ok := def.Env[v]; !ok { + def.Env[k] = v + } + } + + c, err := 
buildFromDefinition( + def, file, nil, + &BuildConfigOptions{headOnly: false}, + ) + + if err != nil { + return nil, err + } + + return c, nil +} + +func (cl *Loader) merge(dst, src *Config) error { + if err := mergo.MergeWithOverwrite(dst, src); err != nil { + return err + } + return nil +} + +func (cl *Loader) load(file string) (config map[string]interface{}, err error) { + if !utils.FileExists(file) { + return config, ErrConfigNotFound + } + return cl.readFile(file) +} + +func (cl *Loader) readFile(file string) (config map[string]interface{}, err error) { + data, err := ioutil.ReadFile(file) + if err != nil { + return nil, fmt.Errorf("%s: %v", file, err) + } + return cl.unmarshalData(data) +} + +func (cl *Loader) unmarshalData(data []byte) (map[string]interface{}, error) { + var cm map[string]interface{} + + err := yaml.NewDecoder(bytes.NewReader(data)).Decode(&cm) + if err != nil { + return nil, err + } + return cm, nil +} + +func (cl *Loader) decode(cm map[string]interface{}) (*configDefinition, error) { + c := &configDefinition{} + md, _ := mapstructure.NewDecoder(&mapstructure.DecoderConfig{ + ErrorUnused: true, + Result: c, + TagName: "", + }) + err := md.Decode(cm) + if err != nil { + return nil, err + } + return c, nil +} diff --git a/internal/config/loader_test.go b/internal/config/loader_test.go new file mode 100644 index 000000000..757e7a22c --- /dev/null +++ b/internal/config/loader_test.go @@ -0,0 +1,161 @@ +package config_test + +import ( + "fmt" + "jobctl/internal/config" + "jobctl/internal/constants" + "path" + "sort" + "strings" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestLoadConfig(t *testing.T) { + loader := config.NewConfigLoader() + cfg, err := loader.Load(testConfig, "") + require.NoError(t, err) + + steps := []*config.Step{ + { + Name: "1", + Dir: testHomeDir, + Command: "true", + Args: []string{}, + Variables: testEnv, + Preconditions: []*config.Condition{ + { + 
Condition: "`echo test`", + Expected: "test", + }, + }, + MailOnError: true, + ContinueOn: config.ContinueOn{ + Failure: true, + Skipped: true, + }, + RetryPolicy: &config.RetryPolicy{ + Limit: 2, + }, + }, + { + Name: "2", + Dir: testDir, + Command: "false", + Args: []string{}, + Variables: testEnv, + Preconditions: []*config.Condition{}, + ContinueOn: config.ContinueOn{ + Failure: true, + Skipped: false, + }, + Depends: []string{ + "1", + }, + }, + } + + makeTestStepFunc := func(name string) *config.Step { + return &config.Step{ + Name: name, + Dir: testDir, + Command: fmt.Sprintf("%s.sh", name), + Args: []string{}, + Variables: testEnv, + Preconditions: []*config.Condition{}, + } + } + + stepm := map[string]*config.Step{} + for _, name := range []string{ + constants.OnExit, + constants.OnSuccess, + constants.OnFailure, + constants.OnCancel, + } { + stepm[name] = makeTestStepFunc(name) + } + + want := &config.Config{ + ConfigPath: testConfig, + Name: "test job", + Description: "this is a test job.", + Env: testEnv, + LogDir: path.Join(testHomeDir, "/logs"), + HistRetentionDays: 3, + MailOn: config.MailOn{ + Failure: true, + Success: true, + }, + DelaySec: time.Second * 1, + MaxActiveRuns: 1, + Params: []string{"param1", "param2"}, + DefaultParams: "param1 param2", + Smtp: &config.SmtpConfig{ + Host: "smtp.host", + Port: "25", + }, + ErrorMail: &config.MailConfig{ + From: "system@mail.com", + To: "error@mail.com", + Prefix: "[ERROR]", + }, + InfoMail: &config.MailConfig{ + From: "system@mail.com", + To: "info@mail.com", + Prefix: "[INFO]", + }, + Preconditions: []*config.Condition{ + { + Condition: "`echo 1`", + Expected: "1", + }, + }, + Steps: steps, + HandlerOn: config.HandlerOn{ + Exit: stepm[constants.OnExit], + Success: stepm[constants.OnSuccess], + Failure: stepm[constants.OnFailure], + Cancel: stepm[constants.OnCancel], + }, + } + assert.Equal(t, cfg, want) +} + +func TestLoadGlobalConfig(t *testing.T) { + loader := config.NewConfigLoader() + cfg, err := 
loader.LoadGlobalConfig() + require.NotNil(t, cfg) + require.NoError(t, err) + + sort.Slice(cfg.Env, func(i, j int) bool { + return strings.Compare(cfg.Env[i], cfg.Env[j]) < 0 + }) + + want := &config.Config{ + Env: testEnv, + LogDir: path.Join(testHomeDir, "/logs"), + HistRetentionDays: 7, + Params: []string{}, + Steps: []*config.Step{}, + Smtp: &config.SmtpConfig{ + Host: "smtp.host", + Port: "25", + }, + ErrorMail: &config.MailConfig{ + From: "system@mail.com", + To: "error@mail.com", + Prefix: "[ERROR]", + }, + InfoMail: &config.MailConfig{ + From: "system@mail.com", + To: "info@mail.com", + Prefix: "[INFO]", + }, + Preconditions: []*config.Condition{}, + } + assert.Equal(t, cfg, want) +} diff --git a/internal/config/mail.go b/internal/config/mail.go new file mode 100644 index 000000000..66115129b --- /dev/null +++ b/internal/config/mail.go @@ -0,0 +1,12 @@ +package config + +type SmtpConfig struct { + Host string + Port string +} + +type MailConfig struct { + From string + To string + Prefix string +} diff --git a/internal/config/step.go b/internal/config/step.go new file mode 100644 index 000000000..c9aeaa043 --- /dev/null +++ b/internal/config/step.go @@ -0,0 +1,42 @@ +package config + +import ( + "fmt" + "strings" + "time" +) + +type Step struct { + Name string + Description string + Variables []string + Dir string + Command string + Args []string + Depends []string + ContinueOn ContinueOn + RetryPolicy *RetryPolicy + MailOnError bool + Repeat bool + RepeatInterval time.Duration + Preconditions []*Condition +} + +type RetryPolicy struct { + Limit int +} + +type ContinueOn struct { + Failure bool + Skipped bool +} + +func (s *Step) String() string { + vals := []string{} + vals = append(vals, fmt.Sprintf("Name: %s", s.Name)) + vals = append(vals, fmt.Sprintf("Dir: %s", s.Dir)) + vals = append(vals, fmt.Sprintf("Command: %s", s.Command)) + vals = append(vals, fmt.Sprintf("Args: %s", s.Args)) + vals = append(vals, fmt.Sprintf("Depends: [%s]", 
strings.Join(s.Depends, ", "))) + return strings.Join(vals, "\t") +} diff --git a/internal/constants/constants.go b/internal/constants/constants.go new file mode 100644 index 000000000..8d71d6ef2 --- /dev/null +++ b/internal/constants/constants.go @@ -0,0 +1,13 @@ +package constants + +const ( + OnSuccess = "onSuccess" + OnFailure = "onFailure" + OnCancel = "onCancel" + OnExit = "onExit" +) + +const ( + TimeFormat = "2006-01-02 15:04:05" + TimeEmpty = "-" +) diff --git a/internal/controller/controller.go b/internal/controller/controller.go new file mode 100644 index 000000000..5616281ab --- /dev/null +++ b/internal/controller/controller.go @@ -0,0 +1,163 @@ +package controller + +import ( + "fmt" + "io/ioutil" + "jobctl/internal/config" + "jobctl/internal/database" + "jobctl/internal/models" + "jobctl/internal/scheduler" + "jobctl/internal/sock" + "log" + "os" + "os/exec" + "path/filepath" + "syscall" + "time" +) + +type Controller interface { + StopJob() error + StartJob(bin string, workDir string, params string) error + RetryJob(bin string, workDir string, reqId string) error + GetStatus() (*models.Status, error) + GetLastStatus() (*models.Status, error) + GetStatusHist(n int) ([]*models.StatusFile, error) +} + +func GetJobList(dir string) ([]*Job, error) { + ret := []*Job{} + fis, err := ioutil.ReadDir(dir) + if err != nil { + log.Printf("%v", err) + } + for _, fi := range fis { + if filepath.Ext(fi.Name()) != ".yaml" { + continue + } + job, err := fromConfig(filepath.Join(dir, fi.Name()), true) + if err != nil { + log.Printf("%v", err) + if job == nil { + continue + } + } + ret = append(ret, job) + } + return ret, nil +} + +var _ Controller = (*controller)(nil) + +type controller struct { + cfg *config.Config +} + +func New(cfg *config.Config) Controller { + return &controller{ + cfg: cfg, + } +} + +func (c *controller) StopJob() error { + unixClient, err := sock.NewUnixClient(sock.GetSockAddr(c.cfg.ConfigPath)) + if err != nil { + return err + } + _, err = 
unixClient.Request("POST", "/stop") + return err +} + +func (c *controller) StartJob(bin string, workDir string, params string) (err error) { + go func() { + args := []string{"start"} + if params != "" { + args = append(args, fmt.Sprintf("--params=\"%s\"", params)) + } + args = append(args, c.cfg.ConfigPath) + cmd := exec.Command(bin, args...) + cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true, Pgid: 0} + cmd.Dir = workDir + cmd.Env = os.Environ() + defer cmd.Wait() + err = cmd.Start() + if err != nil { + log.Printf("failed to start a job: %v", err) + } + }() + time.Sleep(time.Millisecond * 500) + return +} + +func (c *controller) RetryJob(bin string, workDir string, reqId string) (err error) { + log.Printf("retry start: %s, %s, %s, %s", bin, workDir, c.cfg.ConfigPath, reqId) + go func() { + args := []string{"retry"} + args = append(args, fmt.Sprintf("--req=%s", reqId)) + args = append(args, c.cfg.ConfigPath) + cmd := exec.Command(bin, args...) + cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true, Pgid: 0} + cmd.Dir = workDir + cmd.Env = os.Environ() + defer cmd.Wait() + err := cmd.Start() + if err != nil { + log.Printf("failed to retry a job: %v", err) + } + }() + time.Sleep(time.Millisecond * 500) + return +} + +func (s *controller) GetStatus() (*models.Status, error) { + unixClient, err := sock.NewUnixClient(sock.GetSockAddr(s.cfg.ConfigPath)) + if err != nil { + return nil, err + } + ret, err := unixClient.Request("GET", "/status") + if err != nil { + return defaultStatus(s.cfg), nil + } + status, err := models.StatusFromJson(ret) + if err != nil { + return nil, err + } + return status, nil +} + +func (s *controller) GetLastStatus() (*models.Status, error) { + unixClient, err := sock.NewUnixClient(sock.GetSockAddr(s.cfg.ConfigPath)) + if err != nil { + return nil, err + } + ret, err := unixClient.Request("GET", "/status") + if err == nil { + return models.StatusFromJson(ret) + } + db := database.New(database.DefaultConfig()) + status, err := 
db.ReadStatusToday(s.cfg.ConfigPath) + if err != nil { + if err != database.ErrNoDataFile { + fmt.Printf("read status failed : %s", err) + } + return defaultStatus(s.cfg), nil + } + return status, nil +} + +func (s *controller) GetStatusHist(n int) ([]*models.StatusFile, error) { + db := database.New(database.DefaultConfig()) + ret, err := db.ReadStatusHist(s.cfg.ConfigPath, n) + if err != nil { + return []*models.StatusFile{}, nil + } + return ret, nil +} + +func defaultStatus(cfg *config.Config) *models.Status { + return models.NewStatus( + cfg, + nil, + scheduler.SchedulerStatus_None, + int(models.PidNotRunning), nil, nil) +} diff --git a/internal/controller/controller_test.go b/internal/controller/controller_test.go new file mode 100644 index 000000000..c8f7873fc --- /dev/null +++ b/internal/controller/controller_test.go @@ -0,0 +1,86 @@ +package controller_test + +import ( + "jobctl/internal/agent" + "jobctl/internal/controller" + "jobctl/internal/scheduler" + "jobctl/internal/settings" + "jobctl/internal/utils" + "os" + "path" + "path/filepath" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +var ( + testsDir = path.Join(utils.MustGetwd(), "../../tests/testdata") +) + +func TestMain(m *testing.M) { + tempDir := utils.MustTempDir("controller_test") + settings.InitTest(tempDir) + code := m.Run() + os.RemoveAll(tempDir) + os.Exit(code) +} + +func testConfig(name string) string { + return path.Join(testsDir, name) +} + +func TestGetStatus(t *testing.T) { + file := testConfig("basic_success_2.yaml") + job, err := controller.FromConfig(file) + require.NoError(t, err) + + st, err := controller.New(job.Config).GetStatus() + require.NoError(t, err) + assert.Equal(t, scheduler.SchedulerStatus_None, st.Status) +} + +func TestGetStatusRunningAndDone(t *testing.T) { + file := testConfig("basic_sleep.yaml") + + job, err := controller.FromConfig(file) + require.NoError(t, err) + + a := agent.Agent{Config: 
&agent.Config{
+		Job: job.Config,
+	}}
+
+	go func() {
+		// assert (t.Errorf) is safe from a non-test goroutine;
+		// require (t.FailNow) is not.
+		err := a.Run()
+		assert.NoError(t, err)
+	}()
+	time.Sleep(time.Millisecond * 500)
+
+	st, err := controller.New(job.Config).GetStatus()
+	require.NoError(t, err)
+	time.Sleep(time.Millisecond * 50)
+
+	assert.Equal(t, scheduler.SchedulerStatus_Running, st.Status)
+
+	assert.Eventually(t, func() bool {
+		st, _ := controller.New(job.Config).GetLastStatus()
+		return scheduler.SchedulerStatus_Success == st.Status
+	}, time.Millisecond*1500, time.Millisecond*100)
+}
+
+func TestGetJob(t *testing.T) {
+	file := testConfig("basic_success.yaml")
+	job, err := controller.FromConfig(file)
+	require.NoError(t, err)
+	assert.Equal(t, "basic success", job.Config.Name)
+}
+
+func TestGetJobList(t *testing.T) {
+	jobs, err := controller.GetJobList(testsDir)
+	require.NoError(t, err)
+
+	matches, err := filepath.Glob(path.Join(testsDir, "*.yaml"))
+	require.NoError(t, err)
+	assert.Equal(t, len(matches), len(jobs))
+}
diff --git a/internal/controller/job.go b/internal/controller/job.go
new file mode 100644
index 000000000..42bbb0192
--- /dev/null
+++ b/internal/controller/job.go
@@ -0,0 +1,65 @@
+package controller
+
+import (
+	"jobctl/internal/config"
+	"jobctl/internal/models"
+	"jobctl/internal/scheduler"
+	"path/filepath"
+)
+
+type Job struct {
+	File   string
+	Dir    string
+	Config *config.Config
+	Status *models.Status
+	Error  error
+	ErrorT *string
+}
+
+func FromConfig(file string) (*Job, error) {
+	return fromConfig(file, false)
+}
+
+func fromConfig(file string, headOnly bool) (*Job, error) {
+	cl := config.NewConfigLoader()
+	var cfg *config.Config
+	var err error
+	if headOnly {
+		cfg, err = cl.LoadHeadOnly(file)
+	} else {
+		cfg, err = cl.Load(file, "")
+	}
+	if err != nil {
+		if cfg != nil {
+			return newJob(cfg, nil, err), err
+		}
+		cfg := &config.Config{ConfigPath: file}
+		cfg.Init()
+		return newJob(cfg, nil, err), err
+	}
+	status, err := New(cfg).GetLastStatus()
+	if err != nil {
+		return nil, err
+	}
+	if !headOnly {
+		if _, err :=
scheduler.NewExecutionGraph(cfg.Steps...); err != nil { + return newJob(cfg, status, err), err + } + } + return newJob(cfg, status, err), nil +} + +func newJob(cfg *config.Config, s *models.Status, err error) *Job { + ret := &Job{ + File: filepath.Base(cfg.ConfigPath), + Dir: filepath.Dir(cfg.ConfigPath), + Config: cfg, + Status: s, + Error: err, + } + if err != nil { + errT := err.Error() + ret.ErrorT = &errT + } + return ret +} diff --git a/internal/database/database.go b/internal/database/database.go new file mode 100644 index 000000000..63527abd4 --- /dev/null +++ b/internal/database/database.go @@ -0,0 +1,296 @@ +package database + +import ( + "bufio" + "crypto/md5" + "encoding/hex" + "fmt" + "jobctl/internal/models" + "jobctl/internal/settings" + "jobctl/internal/utils" + "log" + "os" + "path" + "path/filepath" + "regexp" + "sort" + "strings" + "time" +) + +type Database struct { + *Config +} + +type Config struct { + Dir string +} + +func New(config *Config) *Database { + return &Database{ + Config: config, + } +} + +func DefaultConfig() *Config { + return &Config{ + Dir: settings.MustGet(settings.CONFIG__DATA_DIR), + } +} + +func ParseFile(file string) (*models.StatusFile, error) { + f, err := os.Open(file) + if err != nil { + log.Printf("failed to open file. err: %v", err) + return nil, err + } + defer f.Close() + l, err := findLastLine(f) + if err != nil { + log.Printf("failed to find last line. err: %v", err) + return nil, err + } + m, err := models.StatusFromJson(l) + if err != nil { + log.Printf("failed to parse json. 
err: %v", err) + return nil, err + } + return &models.StatusFile{File: file, Status: m}, nil +} + +func (db *Database) NewWriter(configPath string, t time.Time) (*Writer, string, error) { + f, err := db.new(configPath, t) + if err != nil { + return nil, "", err + } + w := &Writer{ + filename: f, + } + return w, f, nil +} + +func (db *Database) NewWriterFor(configPath string, file string) (*Writer, error) { + if !utils.FileExists(file) { + return nil, ErrNoDataFile + } + w := &Writer{ + filename: file, + } + return w, nil +} + +func (db *Database) ReadStatusHist(configPath string, n int) ([]*models.StatusFile, error) { + files, err := db.latest(configPath, n) + if err != nil { + return nil, err + } + ret := make([]*models.StatusFile, 0) + for _, file := range files { + status, err := ParseFile(file) + if err != nil { + continue + } + ret = append(ret, status) + } + return ret, nil +} + +func (db *Database) ReadStatusToday(configPath string) (*models.Status, error) { + file, err := db.latestToday(configPath, time.Now()) + if err != nil { + return nil, err + } + + f, err := os.Open(file) + if err != nil { + return nil, err + } + defer f.Close() + + l, err := findLastLine(f) + if err != nil { + return nil, err + } + m, err := models.StatusFromJson(l) + if err != nil { + return nil, err + } + return m, nil +} + +func (db *Database) FindByRequestId(configPath string, requestId string) (*models.StatusFile, error) { + pattern := db.pattern(configPath) + "*.dat" + matches, err := filepath.Glob(pattern) + if err != nil { + return nil, err + } + if len(matches) == 0 { + return nil, fmt.Errorf("%w : %s", ErrNoDataFile, pattern) + } + sort.Slice(matches, func(i, j int) bool { + return strings.Compare(matches[i], matches[j]) >= 0 + }) + for _, f := range matches { + status, err := ParseFile(f) + if err != nil { + log.Printf("parsing failed %s : %s", f, err) + continue + } + if status.Status != nil && status.Status.RequestId == requestId { + return status, nil + } + } + return 
nil, fmt.Errorf("%w : %s", ErrRequestIdNotFound, requestId)
+}
+
+func (db *Database) RemoveAll(configPath string) {
+	db.RemoveOld(configPath, 0)
+}
+
+func (db *Database) RemoveOld(configPath string, retentionDays int) error {
+	if retentionDays < 0 {
+		return nil
+	}
+
+	pattern := db.pattern(configPath) + "*.dat"
+	matches, err := filepath.Glob(pattern)
+	if err != nil {
+		return err
+	}
+
+	// retentionDays counts days, so shift the cutoff by days —
+	// AddDate's first argument is years, not days.
+	ot := time.Now().AddDate(0, 0, -retentionDays)
+	for _, m := range matches {
+		info, err := os.Stat(m)
+		if err != nil {
+			log.Printf("%v", err)
+			continue
+		}
+		if info.ModTime().Before(ot) {
+			err := os.Remove(m)
+			if err != nil {
+				log.Printf("%v", err)
+			}
+		}
+	}
+	return nil
+}
+
+func (db *Database) dir(configPath string, prefix string) string {
+	h := md5.New()
+	h.Write([]byte(configPath))
+	v := hex.EncodeToString(h.Sum(nil))
+	return filepath.Join(db.Dir, fmt.Sprintf("%s-%s", prefix, v))
+}
+
+func (db *Database) new(configPath string, t time.Time) (string, error) {
+	fileName := fmt.Sprintf("%s.%s.dat", db.pattern(configPath), t.Format("20060102.15:04:05"))
+	if err := os.MkdirAll(path.Dir(fileName), 0755); err != nil {
+		return "", err
+	}
+	return fileName, nil
+}
+
+func (db *Database) pattern(configPath string) string {
+	p := prefix(configPath)
+	dir := db.dir(configPath, p)
+	return filepath.Join(dir, p)
+}
+
+func (db *Database) latestToday(configPath string, day time.Time) (string, error) {
+	pattern := fmt.Sprintf("%s.%s*.dat", db.pattern(configPath), day.Format("20060102"))
+	matches, err := filepath.Glob(pattern)
+	if err != nil {
+		return "", err
+	}
+	ret, err := filterLatest(matches, 1)
+	if err != nil {
+		return "", err
+	}
+	return ret[0], err
+}
+
+func (db *Database) latest(configPath string, n int) ([]string, error) {
+	pattern := db.pattern(configPath) + "*.dat"
+	matches, err := filepath.Glob(pattern)
+	if err != nil {
+		return []string{}, err
+	}
+	ret, err := filterLatest(matches, n)
+	return ret, err
+}
+
+var (
+	ErrNoDataFile =
fmt.Errorf("no data file found")
+	ErrRequestIdNotFound = fmt.Errorf("request id not found")
+)
+
+var rTimestamp = regexp.MustCompile(`2\d{7}.\d{2}.\d{2}.\d{2}`)
+
+func filterLatest(files []string, n int) ([]string, error) {
+	if len(files) == 0 {
+		return []string{}, ErrNoDataFile
+	}
+	sort.Slice(files, func(i, j int) bool {
+		t1 := rTimestamp.FindString(files[i])
+		t2 := rTimestamp.FindString(files[j])
+		return t1 > t2
+	})
+	ret := make([]string, 0, n)
+	for i := 0; i < n && i < len(files); i++ {
+		ret = append(ret, files[i])
+	}
+	return ret, nil
+}
+
+func findLastLine(f *os.File) (ret string, err error) {
+	// Seek to the second-to-last byte so the trailing newline is skipped;
+	// for files shorter than two bytes this fails and the error is returned.
+	offset, err := f.Seek(-2, 2)
+	if err != nil {
+		return "", err
+	}
+
+	buf := make([]byte, 1)
+	for {
+		_, err = f.ReadAt(buf, offset)
+		if err != nil {
+			return "", err
+		}
+		// Found a line break ('\n'): the last line starts just after it.
+		if buf[0] == byte('\n') {
+			f.Seek(offset+1, 0)
+			return readLineFrom(f)
+		}
+		// Reached the start of the file: it contains a single line.
+		if offset == 0 {
+			f.Seek(0, 0)
+			str, err := readLineFrom(f)
+			return str, err
+		}
+		offset--
+	}
+}
+
+func readLineFrom(f *os.File) (string, error) {
+	r := bufio.NewReader(f)
+	ret := []byte{}
+	for {
+		b, isPrefix, err := r.ReadLine()
+		if err != nil {
+			return "", err
+		}
+		ret = append(ret, b...)
+ if !isPrefix { + break + } + } + return string(ret), nil + +} + +func prefix(configPath string) string { + return strings.TrimSuffix( + filepath.Base(configPath), + path.Ext(configPath), + ) +} diff --git a/internal/database/database_test.go b/internal/database/database_test.go new file mode 100644 index 000000000..85fd7967a --- /dev/null +++ b/internal/database/database_test.go @@ -0,0 +1,235 @@ +package database + +import ( + "fmt" + "io/ioutil" + "jobctl/internal/config" + "jobctl/internal/models" + "jobctl/internal/scheduler" + "jobctl/internal/utils" + "os" + "path" + "strings" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestDatabase(t *testing.T) { + for scenario, fn := range map[string]func( + t *testing.T, db *Database, + ){ + "create new datafile": testNewDataFile, + "write status to file": testWriteStatusToFile, + "append status to existing file": testWriteStatusToExistingFile, + "write status and find files": testWriteAndFindFiles, + "write status and find by request id": testWriteAndFindByRequestId, + "remove old files": testRemoveOldFiles, + "test read latest status": testReadLatestStatus, + "test read latest n status": testReadStatusN, + } { + t.Run(scenario, func(t *testing.T) { + dir, err := ioutil.TempDir("", "test-database") + db := New(&Config{ + Dir: dir, + }) + require.NoError(t, err) + defer os.RemoveAll(dir) + fn(t, db) + }) + } +} + +func testNewDataFile(t *testing.T, db *Database) { + cfg := &config.Config{ + ConfigPath: "test_new_data_file.yaml", + } + timestamp := time.Date(2022, 1, 1, 0, 0, 0, 0, time.Local) + f, err := db.new(cfg.ConfigPath, timestamp) + require.NoError(t, err) + p := utils.ValidFilename(strings.TrimSuffix( + path.Base(cfg.ConfigPath), path.Ext(cfg.ConfigPath)), "_") + assert.Regexp(t, fmt.Sprintf("%s.*/%s.20220101.00:00:00.dat", p, p), f) +} + +func testWriteAndFindFiles(t *testing.T, db *Database) { + cfg := &config.Config{ + Name: 
"test_read_status_n",
+		ConfigPath: "test_data_files_n.yaml",
+	}
+	defer db.RemoveAll(cfg.ConfigPath)
+
+	for _, data := range []struct {
+		Status    *models.Status
+		Timestamp time.Time
+	}{
+		{
+			models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil),
+			time.Date(2022, 1, 1, 0, 0, 0, 0, time.Local),
+		},
+		{
+			models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil),
+			time.Date(2022, 1, 2, 0, 0, 0, 0, time.Local),
+		},
+		{
+			models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil),
+			time.Date(2022, 1, 3, 0, 0, 0, 0, time.Local),
+		},
+	} {
+		testWriteStatus(t, db, cfg, data.Status, data.Timestamp)
+	}
+
+	files, err := db.latest(cfg.ConfigPath, 2)
+	require.NoError(t, err)
+	require.Equal(t, 2, len(files))
+}
+
+func testWriteAndFindByRequestId(t *testing.T, db *Database) {
+	cfg := &config.Config{
+		Name:       "test_find_by_request_id",
+		ConfigPath: "test_find_by_request_id.yaml",
+	}
+	defer db.RemoveAll(cfg.ConfigPath)
+
+	for _, data := range []struct {
+		Status    *models.Status
+		RequestId string
+		Timestamp time.Time
+	}{
+		{
+			models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil),
+			"request-id-1",
+			time.Date(2022, 1, 1, 0, 0, 0, 0, time.Local),
+		},
+		{
+			models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil),
+			"request-id-2",
+			time.Date(2022, 1, 2, 0, 0, 0, 0, time.Local),
+		},
+		{
+			models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil),
+			"request-id-3",
+			time.Date(2022, 1, 3, 0, 0, 0, 0, time.Local),
+		},
+	} {
+		status := data.Status
+		status.RequestId = data.RequestId
+		testWriteStatus(t, db, cfg, status, data.Timestamp)
+	}
+
+	status, err := db.FindByRequestId(cfg.ConfigPath, "request-id-2")
+	require.NoError(t, err)
+	assert.Equal(t, status.Status.RequestId, "request-id-2")
+
+	status, err = db.FindByRequestId(cfg.ConfigPath, "request-id-10000")
+	assert.Error(t, err)
+	assert.Nil(t,
status) +} + +func testRemoveOldFiles(t *testing.T, db *Database) { + cfg := &config.Config{ + ConfigPath: "test_remove_old.yaml", + } + + for _, data := range []struct { + Status *models.Status + Timestamp time.Time + }{ + { + models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil), + time.Date(2022, 1, 1, 0, 0, 0, 0, time.Local), + }, + { + models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil), + time.Date(2022, 1, 2, 0, 0, 0, 0, time.Local), + }, + { + models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil), + time.Date(2022, 1, 3, 0, 0, 0, 0, time.Local), + }, + } { + testWriteStatus(t, db, cfg, data.Status, data.Timestamp) + } + + files, err := db.latest(cfg.ConfigPath, 3) + require.NoError(t, err) + require.Equal(t, 3, len(files)) + + db.RemoveOld(cfg.ConfigPath, 0) + + files, err = db.latest(cfg.ConfigPath, 3) + require.Equal(t, err, ErrNoDataFile) + require.Equal(t, 0, len(files)) +} + +func testReadLatestStatus(t *testing.T, db *Database) { + cfg := &config.Config{ + ConfigPath: "test_config_status_reader.yaml", + } + dw, _, err := db.NewWriter(cfg.ConfigPath, time.Now()) + require.NoError(t, err) + err = dw.Open() + require.NoError(t, err) + defer dw.Close() + + status := models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil) + dw.Write(status) + + status.Status = scheduler.SchedulerStatus_Running + status.Pid = 20000 + dw.Write(status) + + ret, err := db.ReadStatusToday(cfg.ConfigPath) + + require.NoError(t, err) + require.NotNil(t, ret) + assert.Equal(t, int(status.Pid), int(ret.Pid)) + require.Equal(t, status.Status, ret.Status) +} + +func testReadStatusN(t *testing.T, db *Database) { + cfg := &config.Config{ + Name: "test_read_status_n", + ConfigPath: "test_config_status_reader_hist.yaml", + } + + for _, data := range []struct { + Status *models.Status + Timestamp time.Time + }{ + { + models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil), + 
time.Date(2022, 1, 1, 0, 0, 0, 0, time.Local), + }, + { + models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil), + time.Date(2022, 1, 2, 0, 0, 0, 0, time.Local), + }, + { + models.NewStatus(cfg, nil, scheduler.SchedulerStatus_None, 10000, nil, nil), + time.Date(2022, 1, 3, 0, 0, 0, 0, time.Local), + }, + } { + testWriteStatus(t, db, cfg, data.Status, data.Timestamp) + } + + recordMax := 2 + + ret, err := db.ReadStatusHist(cfg.ConfigPath, recordMax) + + require.NoError(t, err) + require.Equal(t, recordMax, len(ret)) + assert.Equal(t, cfg.Name, ret[0].Status.Name) + assert.Equal(t, cfg.Name, ret[1].Status.Name) +} + +func testWriteStatus(t *testing.T, db *Database, cfg *config.Config, status *models.Status, tm time.Time) { + t.Helper() + dw, _, err := db.NewWriter(cfg.ConfigPath, tm) + require.NoError(t, err) + require.NoError(t, dw.Open()) + defer dw.Close() + require.NoError(t, dw.Write(status)) +} diff --git a/internal/database/writer.go b/internal/database/writer.go new file mode 100644 index 000000000..3adc62d20 --- /dev/null +++ b/internal/database/writer.go @@ -0,0 +1,48 @@ +package database + +import ( + "bufio" + "fmt" + "jobctl/internal/models" + "jobctl/internal/utils" + "os" + "strings" + "sync" +) + +type Writer struct { + filename string + writer *bufio.Writer + file *os.File + mu sync.Mutex +} + +func (w *Writer) Open() error { + var err error + w.file, err = utils.OpenOrCreateFile(w.filename) + if err != nil { + return err + } + w.writer = bufio.NewWriter(w.file) + return nil +} + +func (w *Writer) Write(st *models.Status) error { + w.mu.Lock() + defer w.mu.Unlock() + if w.writer == nil || w.file == nil { + return fmt.Errorf("file was not opened") + } + jsonb, _ := st.ToJson() + str := strings.ReplaceAll(string(jsonb), "\n", " ") + str = strings.ReplaceAll(str, "\r", " ") + _, err := w.writer.WriteString(str + "\n") + if err != nil { + return err + } + return w.writer.Flush() +} + +func (w *Writer) Close() { + w.file.Close() +} 
diff --git a/internal/database/writer_test.go b/internal/database/writer_test.go new file mode 100644 index 000000000..bd590c2f2 --- /dev/null +++ b/internal/database/writer_test.go @@ -0,0 +1,73 @@ +package database + +import ( + "jobctl/internal/config" + "jobctl/internal/models" + "jobctl/internal/scheduler" + "jobctl/internal/utils" + "os" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func testWriteStatusToFile(t *testing.T, db *Database) { + cfg := &config.Config{ + Name: "test_write_status", + ConfigPath: "test_write_status.yaml", + } + dw, file, err := db.NewWriter(cfg.ConfigPath, time.Now()) + require.NoError(t, err) + require.NoError(t, dw.Open()) + defer func() { + dw.Close() + db.RemoveOld(cfg.ConfigPath, 0) + }() + + status := models.NewStatus(cfg, nil, scheduler.SchedulerStatus_Running, 10000, nil, nil) + require.NoError(t, dw.Write(status)) + + utils.AssertPattern(t, "FileName", ".*test_write_status.*", file) + + dat, err := os.ReadFile(file) + require.NoError(t, err) + + r, err := models.StatusFromJson(string(dat)) + require.NoError(t, err) + + assert.Equal(t, cfg.Name, r.Name) +} + +func testWriteStatusToExistingFile(t *testing.T, db *Database) { + cfg := &config.Config{ + Name: "test_append_to_existing", + ConfigPath: "test_append_to_existing.yaml", + } + dw, file, err := db.NewWriter(cfg.ConfigPath, time.Now()) + require.NoError(t, err) + require.NoError(t, dw.Open()) + + status := models.NewStatus(cfg, nil, scheduler.SchedulerStatus_Running, 10000, nil, nil) + status.RequestId = "request-id-test-write-status-to-existing-file" + require.NoError(t, dw.Write(status)) + dw.Close() + + data, err := db.FindByRequestId(cfg.ConfigPath, status.RequestId) + require.NoError(t, err) + assert.Equal(t, data.Status.Status, scheduler.SchedulerStatus_Running) + assert.Equal(t, file, data.File) + + dw, err = db.NewWriterFor(cfg.ConfigPath, file) + require.NoError(t, err) + require.NoError(t, 
dw.Open()) + status.Status = scheduler.SchedulerStatus_Success + require.NoError(t, dw.Write(status)) + dw.Close() + + data, err = db.FindByRequestId(cfg.ConfigPath, status.RequestId) + require.NoError(t, err) + assert.Equal(t, data.Status.Status, scheduler.SchedulerStatus_Success) + assert.Equal(t, file, data.File) +} diff --git a/internal/mail/mailer.go b/internal/mail/mailer.go new file mode 100644 index 000000000..d6c1b7562 --- /dev/null +++ b/internal/mail/mailer.go @@ -0,0 +1,65 @@ +package mail + +import ( + "encoding/base64" + "log" + "net/smtp" + "strings" +) + +type Mailer interface { + SendMail(from string, to []string, subject, body string) error +} + +type mailer struct { + *Config +} + +type Config struct { + Host, Port string +} + +func New(config *Config) Mailer { + return &mailer{ + Config: config, + } +} + +func (m *mailer) SendMail(from string, to []string, subject, body string) error { + log.Printf("Sending an email to %s, subject is \"%s\"", strings.Join(to, ","), subject) + r := strings.NewReplacer("\r\n", "", "\r", "", "\n", "", "%0a", "", "%0d", "") + + c, err := smtp.Dial(m.Host + ":" + m.Port) + if err != nil { + return err + } + defer c.Close() + if err = c.Mail(r.Replace(from)); err != nil { + return err + } + for i := range to { + to[i] = r.Replace(to[i]) + if err = c.Rcpt(to[i]); err != nil { + return err + } + } + wc, err := c.Data() + if err != nil { + return err + } + msg := "To: " + strings.Join(to, ",") + "\r\n" + + "From: " + from + "\r\n" + + "Subject: " + subject + "\r\n" + + "Content-Type: text/html; charset=\"UTF-8\"\r\n" + + "Content-Transfer-Encoding: base64\r\n" + + "\r\n" + base64.StdEncoding.EncodeToString([]byte(body)) + _, err = wc.Write([]byte(msg)) + if err != nil { + return err + } + err = wc.Close() + if err != nil { + return err + } + return c.Quit() +} diff --git a/internal/models/node.go b/internal/models/node.go new file mode 100644 index 000000000..4443ee0bf --- /dev/null +++ b/internal/models/node.go @@ -0,0 
+1,131 @@
+package models
+
+import (
+	"bytes"
+	"fmt"
+	"jobctl/internal/config"
+	"jobctl/internal/scheduler"
+	"jobctl/internal/utils"
+	"strings"
+)
+
+type Node struct {
+	*config.Step `json:"Step"`
+	Log          string               `json:"Log"`
+	StartedAt    string               `json:"StartedAt"`
+	FinishedAt   string               `json:"FinishedAt"`
+	Status       scheduler.NodeStatus `json:"Status"`
+	RetryCount   int                  `json:"RetryCount"`
+	DoneCount    int                  `json:"DoneCount"`
+	Error        string               `json:"Error"`
+	StatusText   string               `json:"StatusText"`
+}
+
+func (n *Node) ToNode() *scheduler.Node {
+	startedAt, _ := utils.ParseTime(n.StartedAt)
+	finishedAt, _ := utils.ParseTime(n.FinishedAt)
+	var err error = nil
+	if n.Error != "" {
+		// Use the stored message verbatim; fmt.Errorf(n.Error) would
+		// misinterpret any '%' in the message as a format verb.
+		err = fmt.Errorf("%s", n.Error)
+	}
+	ret := &scheduler.Node{
+		Step: n.Step,
+		NodeState: scheduler.NodeState{
+			Status:     n.Status,
+			Log:        n.Log,
+			StartedAt:  startedAt,
+			FinishedAt: finishedAt,
+			RetryCount: n.RetryCount,
+			DoneCount:  n.DoneCount,
+			Error:      err,
+		},
+	}
+	return ret
+}
+
+func FromNode(n *scheduler.Node) *Node {
+	node := &Node{
+		Step:       n.Step,
+		Log:        n.Log,
+		StartedAt:  utils.FormatTime(n.StartedAt),
+		FinishedAt: utils.FormatTime(n.FinishedAt),
+		Status:     n.ReadStatus(),
+		StatusText: n.ReadStatus().String(),
+		RetryCount: n.ReadRetryCount(),
+		DoneCount:  n.ReadDoneCount(),
+	}
+	if n.Error != nil {
+		node.Error = n.Error.Error()
+	}
+	return node
+}
+
+func FromNodes(nodes []*scheduler.Node) []*Node {
+	ret := []*Node{}
+	for _, n := range nodes {
+		ret = append(ret, FromNode(n))
+	}
+	return ret
+}
+
+func FromSteps(steps []*config.Step) []*Node {
+	ret := []*Node{}
+	for _, s := range steps {
+		ret = append(ret, fromStepWithDefValues(s))
+	}
+	return ret
+}
+
+func StepGraph(steps []*Node, displayStatus bool) string {
+	var buf bytes.Buffer
+	buf.WriteString("flowchart LR;")
+	for _, s := range steps {
+		buf.WriteString(fmt.Sprintf("%s(%s)", graphNode(s.Name), s.Name))
+		if displayStatus {
+			switch s.Status {
+			case scheduler.NodeStatus_Running:
buf.WriteString(":::running") + case scheduler.NodeStatus_Error: + buf.WriteString(":::error") + case scheduler.NodeStatus_Cancel: + buf.WriteString(":::cancel") + case scheduler.NodeStatus_Success: + buf.WriteString(":::done") + case scheduler.NodeStatus_Skipped: + buf.WriteString(":::skipped") + default: + buf.WriteString(":::none") + } + } else { + buf.WriteString(":::none") + } + buf.WriteString(";") + for _, d := range s.Depends { + buf.WriteString(graphNode(d) + "-->" + graphNode(s.Name) + ";") + } + } + buf.WriteString("classDef none fill:white,stroke:lightblue,stroke-width:2px\n") + buf.WriteString("classDef running fill:white,stroke:lime,stroke-width:2px\n") + buf.WriteString("classDef error fill:white,stroke:red,stroke-width:2px\n") + buf.WriteString("classDef cancel fill:white,stroke:pink,stroke-width:2px\n") + buf.WriteString("classDef done fill:white,stroke:green,stroke-width:2px\n") + buf.WriteString("classDef skipped fill:white,stroke:gray,stroke-width:2px\n") + return buf.String() +} + +func graphNode(val string) string { + return strings.ReplaceAll(val, " ", "_") +} + +func fromStepWithDefValues(s *config.Step) *Node { + step := &Node{ + Step: s, + Log: "", + StartedAt: "-", + FinishedAt: "-", + Status: scheduler.NodeStatus_None, + StatusText: scheduler.NodeStatus_None.String(), + RetryCount: 0, + } + return step +} diff --git a/internal/models/node_test.go b/internal/models/node_test.go new file mode 100644 index 000000000..66a09b039 --- /dev/null +++ b/internal/models/node_test.go @@ -0,0 +1,64 @@ +package models_test + +import ( + "jobctl/internal/config" + "jobctl/internal/models" + "jobctl/internal/scheduler" + "jobctl/internal/utils" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func makeStep(cmd string) *config.Step { + step := &config.Step{ + Name: "test step", + } + step.Command, step.Args = utils.SplitCommand(cmd) + return step +} + +func TestFromNodes(t *testing.T) { + g := 
testRunSteps( + t, + makeStep("true"), + makeStep("false"), + ) + + ret := models.FromNodes(g.Nodes()) + + assert.Equal(t, 2, len(ret)) + assert.NotEqual(t, "", ret[1].Error) +} + +func TestToNode(t *testing.T) { + g := testRunSteps( + t, + makeStep("true"), + makeStep("true"), + ) + orig := g.Nodes() + for _, n := range orig { + require.Equal(t, scheduler.NodeStatus_Success, n.Status) + } + nodes := models.FromNodes(orig) + for i := range nodes { + n := nodes[i].ToNode() + require.Equal(t, n.Step, orig[i].Step) + require.Equal(t, n.NodeState, orig[i].NodeState) + } +} + +func testRunSteps(t *testing.T, steps ...*config.Step) *scheduler.ExecutionGraph { + g, err := scheduler.NewExecutionGraph(steps...) + require.NoError(t, err) + for _, n := range g.Nodes() { + if err := n.Execute(); err != nil { + n.Status = scheduler.NodeStatus_Error + } else { + n.Status = scheduler.NodeStatus_Success + } + } + return g +} diff --git a/internal/models/status.go b/internal/models/status.go new file mode 100644 index 000000000..0e4b086f3 --- /dev/null +++ b/internal/models/status.go @@ -0,0 +1,114 @@ +package models + +import ( + "encoding/json" + "fmt" + "jobctl/internal/config" + "jobctl/internal/scheduler" + "jobctl/internal/utils" + "strings" + "time" +) + +type StatusResponse struct { + Status *Status `json:"status"` +} + +type Pid int + +const PidNotRunning Pid = -1 + +func (p Pid) String() string { + if p == PidNotRunning { + return "" + } + return fmt.Sprintf("%d", p) +} + +func (p Pid) IsRunning() bool { + return p != PidNotRunning +} + +type Status struct { + RequestId string `json:"RequestId"` + Name string `json:"Name"` + Status scheduler.SchedulerStatus `json:"Status"` + StatusText string `json:"StatusText"` + Pid Pid `json:"Pid"` + Nodes []*Node `json:"Nodes"` + OnExit *Node `json:"OnExit"` + OnSuccess *Node `json:"OnSuccess"` + OnFailure *Node `json:"OnFailure"` + OnCancel *Node `json:"OnCancel"` + StartedAt string `json:"StartedAt"` + FinishedAt string 
`json:"FinishedAt"` + Log string `json:"Log"` + Params string `json:"Params"` +} + +type StatusFile struct { + File string + Status *Status +} + +func StatusFromJson(s string) (*Status, error) { + status := &Status{} + err := json.Unmarshal([]byte(s), status) + if err != nil { + return nil, err + } + return status, err +} + +func NewStatus(cfg *config.Config, nodes []*scheduler.Node, status scheduler.SchedulerStatus, + pid int, s, e *time.Time) *Status { + finish, start := time.Time{}, time.Time{} + if s != nil { + start = *s + } + if e != nil { + finish = *e + } + models := []*Node{} + if nodes != nil && len(nodes) != 0 { + models = FromNodes(nodes) + } else { + models = FromSteps(cfg.Steps) + } + var onExit, onSuccess, onFailure, onCancel *Node = nil, nil, nil, nil + if cfg.HandlerOn.Exit != nil { + onExit = fromStepWithDefValues(cfg.HandlerOn.Exit) + } + if cfg.HandlerOn.Success != nil { + onSuccess = fromStepWithDefValues(cfg.HandlerOn.Success) + } + if cfg.HandlerOn.Failure != nil { + onFailure = fromStepWithDefValues(cfg.HandlerOn.Failure) + } + if cfg.HandlerOn.Cancel != nil { + onCancel = fromStepWithDefValues(cfg.HandlerOn.Cancel) + } + return &Status{ + RequestId: "", + Name: cfg.Name, + Status: status, + StatusText: status.String(), + Pid: Pid(pid), + Nodes: models, + OnExit: onExit, + OnSuccess: onSuccess, + OnFailure: onFailure, + OnCancel: onCancel, + StartedAt: utils.FormatTime(start), + FinishedAt: utils.FormatTime(finish), + Params: strings.Join(cfg.Params, " "), + } +} + +func (sts *Status) ToJson() ([]byte, error) { + js, err := json.Marshal(sts) + if err != nil { + return []byte{}, err + } + return js, nil +} diff --git a/internal/models/status_test.go b/internal/models/status_test.go new file mode 100644 index 000000000..dd4085f86 --- /dev/null +++ b/internal/models/status_test.go @@ -0,0 +1,60 @@ +package models_test + +import ( + "jobctl/internal/config" + "jobctl/internal/models" + "jobctl/internal/scheduler" + "testing" + "time" + + 
"github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestPid(t *testing.T) { + if models.PidNotRunning.IsRunning() { + t.Error() + } +} + +func TestStatusSerialization(t *testing.T) { + start, end := time.Now(), time.Now().Add(time.Second*1) + cfg := &config.Config{ + ConfigPath: "", + Name: "", + Description: "", + Env: []string{}, + LogDir: "", + HandlerOn: config.HandlerOn{}, + Steps: []*config.Step{ + { + Name: "1", Description: "", Variables: []string{}, + Dir: "dir", Command: "echo 1", Args: []string{}, + Depends: []string{}, ContinueOn: config.ContinueOn{}, + RetryPolicy: &config.RetryPolicy{}, MailOnError: false, + Repeat: false, RepeatInterval: 0, Preconditions: []*config.Condition{}, + }, + }, + MailOn: config.MailOn{}, + ErrorMail: &config.MailConfig{}, + InfoMail: &config.MailConfig{}, + Smtp: &config.SmtpConfig{}, + DelaySec: 0, + HistRetentionDays: 0, + Preconditions: []*config.Condition{}, + MaxActiveRuns: 0, + Params: []string{}, + DefaultParams: "", + } + st := models.NewStatus(cfg, nil, scheduler.SchedulerStatus_Success, 10000, &start, &end) + + js, err := st.ToJson() + require.NoError(t, err) + + st_, err := models.StatusFromJson(string(js)) + require.NoError(t, err) + + assert.Equal(t, st.Name, st_.Name) + require.Equal(t, 1, len(st_.Nodes)) + assert.Equal(t, cfg.Steps[0].Name, st_.Nodes[0].Name) +} diff --git a/internal/reporter/reporter.go b/internal/reporter/reporter.go new file mode 100644 index 000000000..3d173d574 --- /dev/null +++ b/internal/reporter/reporter.go @@ -0,0 +1,142 @@ +package reporter + +import ( + "bytes" + "fmt" + "jobctl/internal/config" + "jobctl/internal/mail" + "jobctl/internal/scheduler" + "jobctl/internal/utils" + "log" + "strings" +) + +type Reporter struct { + *Config +} + +type Config struct { + Mailer mail.Mailer +} + +func New(config *Config) *Reporter { + return &Reporter{ + Config: config, + } +} + +func (rp *Reporter) ReportStep(sc *scheduler.Scheduler, g 
*scheduler.ExecutionGraph,
+	cfg *config.Config, node *scheduler.Node) error {
+	status := node.ReadStatus()
+	if status != scheduler.NodeStatus_None {
+		log.Printf("%s %s", node.Name, status)
+	}
+	if status == scheduler.NodeStatus_Error && node.MailOnError {
+		return rp.sendError(cfg, sc.Status(g), g.Nodes())
+	}
+	return nil
+}
+
+func (rp *Reporter) Report(status scheduler.SchedulerStatus,
+	nodes []*scheduler.Node, err error) {
+	// log.Print, not log.Printf: the report text is data, not a format string.
+	log.Print(toText(status, nodes, err))
+}
+
+func (rp *Reporter) ReportMail(status scheduler.SchedulerStatus,
+	g *scheduler.ExecutionGraph, err error, cfg *config.Config) error {
+	if err != nil && status != scheduler.SchedulerStatus_Cancel && cfg.MailOn.Failure {
+		return rp.sendError(cfg, status, g.Nodes())
+	} else if cfg.MailOn.Success {
+		return rp.sendInfo(cfg, status, g.Nodes())
+	}
+	return nil
+}
+
+func (rp *Reporter) sendInfo(cfg *config.Config,
+	status scheduler.SchedulerStatus, nodes []*scheduler.Node) error {
+	mailConfig := cfg.InfoMail
+	jobName := cfg.Name
+	subject := fmt.Sprintf("%s %s (%s)", mailConfig.Prefix, jobName, status)
+	body := toHtml(status, nodes)
+
+	return rp.Mailer.SendMail(
+		cfg.InfoMail.From,
+		[]string{cfg.InfoMail.To},
+		subject,
+		body,
+	)
+}
+
+func (rp *Reporter) sendError(cfg *config.Config,
+	status scheduler.SchedulerStatus, nodes []*scheduler.Node) error {
+	mailConfig := cfg.ErrorMail
+	jobName := cfg.Name
+	subject := fmt.Sprintf("%s %s (%s)", mailConfig.Prefix, jobName, status)
+	body := toHtml(status, nodes)
+
+	return rp.Mailer.SendMail(
+		cfg.ErrorMail.From,
+		[]string{cfg.ErrorMail.To},
+		subject,
+		body,
+	)
+}
+
+func toText(status scheduler.SchedulerStatus, nodes []*scheduler.Node, err error) string {
+	vals := []string{}
+	vals = append(vals, "[Result]")
+	for _, n := range nodes {
+		vals = append(vals, fmt.Sprintf("\t%s", n.Report()))
+	}
+	if err != nil {
+		vals = append(vals, fmt.Sprintf("\tLast Error=%s", err.Error()))
+	}
+	return strings.Join(vals, "\n")
+}
+
+func
toHtml(status scheduler.SchedulerStatus, list []*scheduler.Node) string { + var buffer bytes.Buffer + addValFunc := func(val string) { + buffer.WriteString( + fmt.Sprintf("<td>%s</td>", + val)) + } + buffer.WriteString(` + <table border="1" style="border-collapse: collapse;"> + <thead> + <tr> + <th>Name</th> + <th>Started At</th> + <th>Finished At</th> + <th>Status</th> + <th>Error</th> + </tr> + </thead> + <tbody> + `) + addStatusFunc := func(status scheduler.NodeStatus) { + style := "" + switch status { + case scheduler.NodeStatus_Error: + style = "color: #D01117;font-weight:bold;" + } + buffer.WriteString( + fmt.Sprintf("<td style=\"%s\">%s</td>", + style, status)) + } + for _, n := range list { + buffer.WriteString("<tr>") + addValFunc(n.Name) + addValFunc(utils.FormatTime(n.StartedAt)) + addValFunc(utils.FormatTime(n.FinishedAt)) + addStatusFunc(n.ReadStatus()) + if n.Error != nil { + addValFunc(n.Error.Error()) + } else { + addValFunc("-") + } + buffer.WriteString("</tr>") + } + buffer.WriteString("</tbody></table>") + return buffer.String() +} diff --git a/internal/scheduler/graph.go b/internal/scheduler/graph.go new file mode 100644 index 000000000..bf27257ed --- /dev/null +++ b/internal/scheduler/graph.go @@ -0,0 +1,157 @@ +package scheduler + +import ( + "fmt" + "jobctl/internal/config" + "log" + "time" +) + +type ExecutionGraph struct { + dict map[int]*Node + nodes []*Node + from map[int][]int + to map[int][]int + StartedAt, FinishedAt time.Time +} + +func NewExecutionGraph(steps ...*config.Step) (*ExecutionGraph, error) { + graph := &ExecutionGraph{ + dict: make(map[int]*Node), + from: make(map[int][]int), + to: make(map[int][]int), + nodes: []*Node{}, + } + for _, step := range steps { + node := &Node{Step: step} + node.init() + graph.dict[node.id] = node + graph.nodes = append(graph.nodes, node) + } + if err := graph.setup(); err != nil { + return nil, err + } + return graph, nil +} + +func RetryExecutionGraph(nodes ...*Node) (*ExecutionGraph, error) { + graph := &ExecutionGraph{ + dict: make(map[int]*Node), + from: make(map[int][]int), + to: make(map[int][]int), + nodes: []*Node{}, + } + for _, node := range nodes { + node.init() + graph.dict[node.id] = node + graph.nodes = append(graph.nodes, node) + } + if err := graph.setup(); err != nil { + return nil, err + } + if err := graph.setupRetry(); err != nil { + return nil, err + } + return graph, nil +} + +func (g *ExecutionGraph) Duration() time.Duration { + if g.FinishedAt.IsZero() { + return time.Since(g.StartedAt) + } + return g.FinishedAt.Sub(g.StartedAt) +} + +func (g *ExecutionGraph) Nodes() []*Node { + return g.nodes +} + +func (g *ExecutionGraph) From(from int) []int { + return g.from[from] +} + +func (g *ExecutionGraph) To(to int) []int { + return g.to[to] +} + +func (g *ExecutionGraph) Node(id int) *Node { + return g.dict[id] +} + +func (g *ExecutionGraph) setupRetry() error { + dict := map[int]NodeStatus{} + retry := map[int]bool{} + for _, node := range g.nodes { + dict[node.id] = node.Status + 
retry[node.id] = false + } + frontier := []int{} + for _, node := range g.nodes { + if len(node.Depends) == 0 { + frontier = append(frontier, node.id) + } + } + for len(frontier) > 0 { + next := []int{} + for _, u := range frontier { + if retry[u] || dict[u] == NodeStatus_Error || dict[u] == NodeStatus_Cancel { + log.Printf("clear node state: %s", g.dict[u].Name) + g.dict[u].clearState() + retry[u] = true + } + for _, v := range g.from[u] { + if retry[u] { + retry[v] = true + } + next = append(next, v) + } + } + frontier = next + } + return nil +} + +func (g *ExecutionGraph) setup() error { + for _, node := range g.nodes { + for _, dep := range node.Depends { + depStep, err := g.findStep(dep) + if err != nil { + return err + } + err = g.addEdge(depStep, node) + if err != nil { + return err + } + } + } + return nil +} + +func (g *ExecutionGraph) addEdge(from, to *Node) error { + g.from[from.id] = append(g.from[from.id], to.id) + g.to[to.id] = append(g.to[to.id], from.id) + return g.cycleDfs(to.id, make(map[int]bool)) +} + +func (g *ExecutionGraph) cycleDfs(t int, visited map[int]bool) error { + if visited[t] { + return fmt.Errorf("cycle detected") + } + visited[t] = true + for _, next := range g.from[t] { + err := g.cycleDfs(next, visited) + if err != nil { + return err + } + } + return nil +} + +func (g *ExecutionGraph) findStep(name string) (*Node, error) { + for _, n := range g.dict { + if n.Name == name { + return n, nil + } + } + return nil, fmt.Errorf("step not found: %s", name) +} diff --git a/internal/scheduler/graph_test.go b/internal/scheduler/graph_test.go new file mode 100644 index 000000000..c71a0b180 --- /dev/null +++ b/internal/scheduler/graph_test.go @@ -0,0 +1,89 @@ +package scheduler_test + +import ( + "jobctl/internal/config" + "jobctl/internal/scheduler" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestCycleDetection(t *testing.T) { + step1 := &config.Step{} + step1.Name = 
"1" + step1.Depends = []string{"2"} + + step2 := &config.Step{} + step2.Name = "2" + step2.Depends = []string{"1"} + + _, err := scheduler.NewExecutionGraph(step1, step2) + + if err == nil { + t.Fatal("cycle detection should be detected.") + } +} + +func TestRetryExecution(t *testing.T) { + nodes := []*scheduler.Node{ + { + Step: &config.Step{Name: "1", Command: "true"}, + NodeState: scheduler.NodeState{ + Status: scheduler.NodeStatus_Success, + }, + }, + { + Step: &config.Step{Name: "2", Command: "true", Depends: []string{"1"}}, + NodeState: scheduler.NodeState{ + Status: scheduler.NodeStatus_Error, + }, + }, + { + Step: &config.Step{Name: "3", Command: "true", Depends: []string{"2"}}, + NodeState: scheduler.NodeState{ + Status: scheduler.NodeStatus_Cancel, + }, + }, + { + Step: &config.Step{Name: "4", Command: "true", Depends: []string{}}, + NodeState: scheduler.NodeState{ + Status: scheduler.NodeStatus_Skipped, + }, + }, + { + Step: &config.Step{Name: "5", Command: "true", Depends: []string{"4"}}, + NodeState: scheduler.NodeState{ + Status: scheduler.NodeStatus_Error, + }, + }, + { + Step: &config.Step{Name: "6", Command: "true", Depends: []string{"5"}}, + NodeState: scheduler.NodeState{ + Status: scheduler.NodeStatus_Success, + }, + }, + { + Step: &config.Step{Name: "7", Command: "true", Depends: []string{"6"}}, + NodeState: scheduler.NodeState{ + Status: scheduler.NodeStatus_Skipped, + }, + }, + { + Step: &config.Step{Name: "8", Command: "true", Depends: []string{}}, + NodeState: scheduler.NodeState{ + Status: scheduler.NodeStatus_Skipped, + }, + }, + } + _, err := scheduler.RetryExecutionGraph(nodes...) 
+ require.NoError(t, err) + assert.Equal(t, scheduler.NodeStatus_Success, nodes[0].Status) + assert.Equal(t, scheduler.NodeStatus_None, nodes[1].Status) + assert.Equal(t, scheduler.NodeStatus_None, nodes[2].Status) + assert.Equal(t, scheduler.NodeStatus_Skipped, nodes[3].Status) + assert.Equal(t, scheduler.NodeStatus_None, nodes[4].Status) + assert.Equal(t, scheduler.NodeStatus_None, nodes[5].Status) + assert.Equal(t, scheduler.NodeStatus_None, nodes[6].Status) + assert.Equal(t, scheduler.NodeStatus_Skipped, nodes[7].Status) +} diff --git a/internal/scheduler/node.go b/internal/scheduler/node.go new file mode 100644 index 000000000..8df58b202 --- /dev/null +++ b/internal/scheduler/node.go @@ -0,0 +1,220 @@ +package scheduler + +import ( + "bufio" + "context" + "fmt" + "jobctl/internal/config" + "jobctl/internal/utils" + "os" + "os/exec" + "path/filepath" + "strings" + "sync" + "time" +) + +type NodeStatus int + +const ( + NodeStatus_None NodeStatus = iota + NodeStatus_Running + NodeStatus_Error + NodeStatus_Cancel + NodeStatus_Success + NodeStatus_Skipped +) + +func (s NodeStatus) String() string { + switch s { + case NodeStatus_Running: + return "running" + case NodeStatus_Error: + return "failed" + case NodeStatus_Cancel: + return "canceled" + case NodeStatus_Success: + return "finished" + case NodeStatus_Skipped: + return "skipped" + case NodeStatus_None: + fallthrough + default: + return "not started" + } +} + +type Node struct { + *config.Step + NodeState + id int + mu sync.RWMutex + cmd *exec.Cmd + cancelFunc func() + logFile *os.File + logWriter *bufio.Writer +} + +type NodeState struct { + Status NodeStatus + Log string + StartedAt time.Time + FinishedAt time.Time + RetryCount int + DoneCount int + Error error +} + +func (n *Node) Execute() error { + ctx, fn := context.WithCancel(context.Background()) + n.cancelFunc = fn + cmd := exec.CommandContext(ctx, n.Command, n.Args...) 
+ n.cmd = cmd + cmd.Dir = n.Dir + for _, v := range n.Variables { + cmd.Env = append(cmd.Env, v) + } + + if n.logWriter != nil { + cmd.Stdout = n.logWriter + cmd.Stderr = n.logWriter + } else { + cmd.Stdout = os.Stdout + cmd.Stderr = os.Stdout + } + + n.Error = cmd.Run() + return n.Error +} + +func (n *Node) clearState() { + n.NodeState = NodeState{} +} + +func (n *Node) ReadStatus() NodeStatus { + n.mu.RLock() + defer n.mu.RUnlock() + ret := n.Status + return ret +} + +func (n *Node) Report() string { + vals := []string{} + vals = append(vals, fmt.Sprintf("Step: %s", n.Name)) + vals = append(vals, fmt.Sprintf("Status: %s", n.ReadStatus())) + cmd := n.Command + if len(n.Args) > 0 { + cmd += " " + strings.Join(n.Args, " ") + } + vals = append(vals, fmt.Sprintf("Command: %s", cmd)) + if n.Error != nil { + vals = append(vals, fmt.Sprintf("Error: %s", n.Error)) + } + return strings.Join(vals, "\t") +} + +func (n *Node) updateStatus(status NodeStatus) { + n.mu.Lock() + defer n.mu.Unlock() + n.Status = status +} + +func (n *Node) signal(sig os.Signal) { + status := n.ReadStatus() + if status == NodeStatus_Running { + n.updateStatus(NodeStatus_Cancel) + } + if n.cmd != nil { + n.cmd.Process.Signal(sig) + } +} + +func (n *Node) cancel() { + status := n.ReadStatus() + if status == NodeStatus_None { + n.updateStatus(NodeStatus_Cancel) + } else if status == NodeStatus_Running { + n.updateStatus(NodeStatus_Cancel) + } + if n.cancelFunc != nil { + n.cancelFunc() + } +} + +func (n *Node) setupLog(logDir string) { + n.StartedAt = time.Now() + n.Log = filepath.Join(logDir, fmt.Sprintf("%s.%s.log", + utils.ValidFilename(n.Name, "_"), + n.StartedAt.Format("20060102.15:04:05"), + )) +} + +func (n *Node) openLogFile() error { + if n.Log == "" { + return nil + } + var err error + n.logFile, err = utils.OpenOrCreateFile(n.Log) + if err != nil { + n.Error = err + return err + } + n.logWriter = bufio.NewWriter(n.logFile) + return nil +} + +func (n *Node) closeLogFile() error { + var 
lastErr error + if n.logWriter != nil { + lastErr = n.logWriter.Flush() + } + if n.logFile != nil { + if err := n.logFile.Close(); err != nil { + lastErr = err + } + } + return lastErr +} + +func (n *Node) ReadRetryCount() int { + n.mu.RLock() + defer n.mu.RUnlock() + return n.RetryCount +} + +func (n *Node) ReadDoneCount() int { + n.mu.RLock() + defer n.mu.RUnlock() + return n.DoneCount +} + +func (n *Node) incRetryCount() { + n.mu.Lock() + defer n.mu.Unlock() + n.RetryCount++ +} + +func (n *Node) incDoneCount() { + n.mu.Lock() + defer n.mu.Unlock() + n.DoneCount++ +} + +var nextNodeId int = 1 + +func (n *Node) init() { + if n.id != 0 { + return + } + n.id = nextNodeId + nextNodeId++ + if n.Variables == nil { + n.Variables = []string{} + } + if n.Preconditions == nil { + n.Preconditions = []*config.Condition{} + } +} diff --git a/internal/scheduler/node_test.go b/internal/scheduler/node_test.go new file mode 100644 index 000000000..67871f49e --- /dev/null +++ b/internal/scheduler/node_test.go @@ -0,0 +1,29 @@ +package scheduler_test + +import ( + "jobctl/internal/config" + "jobctl/internal/scheduler" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestExecute(t *testing.T) { + n := &scheduler.Node{ + Step: &config.Step{ + Command: "true", + }} + require.NoError(t, n.Execute()) + assert.Nil(t, n.Error) +} + +func TestError(t *testing.T) { + n := &scheduler.Node{ + Step: &config.Step{ + Command: "false", + }} + err := n.Execute() + require.Error(t, err) + assert.Equal(t, n.Error, err) +} diff --git a/internal/scheduler/scheduler.go b/internal/scheduler/scheduler.go new file mode 100644 index 000000000..03eb02eb0 --- /dev/null +++ b/internal/scheduler/scheduler.go @@ -0,0 +1,399 @@ +package scheduler + +import ( + "fmt" + "jobctl/internal/config" + "jobctl/internal/constants" + "jobctl/internal/settings" + "log" + "os" + "sync" + "time" +)
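The `setupRetry` pass in `graph.go` above clears the state of every failed or canceled node and then propagates that decision to all downstream dependents with a frontier (breadth-first) walk, which is what `TestRetryExecution` exercises. Below is a minimal standalone sketch of that propagation, with node IDs and statuses reduced to ints and bools; the function name `retryTargets` and the map-based edge representation are illustrative, not part of the jobctl packages:

```go
package main

import "fmt"

// retryTargets mirrors the idea behind ExecutionGraph.setupRetry: walk the
// graph from its roots and mark every failed node, plus every node downstream
// of a marked node, as needing a fresh run.
// edges maps a node ID to the IDs that depend on it (downstream edges).
func retryTargets(ids []int, failed map[int]bool, edges map[int][]int) map[int]bool {
	cleared := map[int]bool{}
	// Roots are nodes that no edge points to.
	isDownstream := map[int]bool{}
	for _, vs := range edges {
		for _, v := range vs {
			isDownstream[v] = true
		}
	}
	frontier := []int{}
	for _, id := range ids {
		if !isDownstream[id] {
			frontier = append(frontier, id)
		}
	}
	// Frontier walk: once a node is marked, everything downstream is marked too.
	for len(frontier) > 0 {
		next := []int{}
		for _, u := range frontier {
			if failed[u] {
				cleared[u] = true
			}
			for _, v := range edges[u] {
				if cleared[u] {
					cleared[v] = true
				}
				next = append(next, v)
			}
		}
		frontier = next
	}
	return cleared
}

func main() {
	// Chain 1 -> 2 -> 3 where step 2 failed: 2 and 3 rerun, 1 keeps its result.
	got := retryTargets([]int{1, 2, 3}, map[int]bool{2: true}, map[int][]int{1: {2}, 2: {3}})
	fmt.Println(got[1], got[2], got[3]) // prints: false true true
}
```

Only the failed step and its transitive dependents are rescheduled; successful upstream steps keep their recorded results, which matches the expectations asserted in `TestRetryExecution` above.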
+ +type SchedulerStatus int + +const ( + SchedulerStatus_None SchedulerStatus = iota + SchedulerStatus_Running + SchedulerStatus_Error + SchedulerStatus_Cancel + SchedulerStatus_Success + SchedulerStatus_Skipped_Unused +) + +func (s SchedulerStatus) String() string { + switch s { + case SchedulerStatus_Running: + return "running" + case SchedulerStatus_Error: + return "failed" + case SchedulerStatus_Cancel: + return "canceled" + case SchedulerStatus_Success: + return "finished" + case SchedulerStatus_None: + fallthrough + default: + return "not started" + } +} + +type Scheduler struct { + *Config + canceled int32 + mu sync.RWMutex + pause time.Duration + lastError error + handlers map[string]*Node +} + +type Config struct { + LogDir string + MaxActiveRuns int + DelaySec time.Duration + Dry bool + OnExit *config.Step + OnSuccess *config.Step + OnFailure *config.Step + OnCancel *config.Step +} + +func New(config *Config) *Scheduler { + return &Scheduler{ + Config: config, + pause: 100 * time.Millisecond, + } +} + +func (sc *Scheduler) Schedule(g *ExecutionGraph, done chan *Node) error { + if err := sc.setup(); err != nil { + return err + } + g.StartedAt = time.Now() + + defer func() { + g.FinishedAt = time.Now() + }() + + var wg = sync.WaitGroup{} + + for !sc.isFinished(g) { + if sc.IsCanceled() { + break + } + for _, node := range g.Nodes() { + if node.ReadStatus() != NodeStatus_None { + continue + } + if !isReady(g, node) { + continue + } + if sc.IsCanceled() { + break + } + if sc.MaxActiveRuns > 0 && + sc.runningCount(g) >= sc.MaxActiveRuns { + continue + } + if len(node.Preconditions) > 0 { + log.Printf("checking pre conditions for \"%s\"", node.Name) + if err := config.EvalConditions(node.Preconditions); err != nil { + log.Printf("%s", err.Error()) + node.updateStatus(NodeStatus_Skipped) + node.Error = err + continue + } + } + wg.Add(1) + + log.Printf("start running: %s", node.Name) + node.updateStatus(NodeStatus_Running) + go func(node *Node) { + defer func() { 
+ node.FinishedAt = time.Now() + wg.Done() + }() + + if !sc.Dry { + node.setupLog(sc.LogDir) + node.openLogFile() + defer node.closeLogFile() + } + + for !sc.IsCanceled() { + var err error = nil + if !sc.Dry { + err = node.Execute() + } + if err != nil { + handleError(node) + switch node.ReadStatus() { + case NodeStatus_None: + // nothing to do + case NodeStatus_Error: + sc.lastError = err + fallthrough + default: + if done != nil { + done <- node + } + } + return + } + if node.Repeat { + node.incDoneCount() + time.Sleep(node.RepeatInterval) + continue + } + break + } + node.updateStatus(NodeStatus_Success) + if done != nil { + done <- node + } + }(node) + + time.Sleep(sc.DelaySec) + } + + time.Sleep(sc.pause) + } + wg.Wait() + + handlers := []string{} + switch sc.Status(g) { + case SchedulerStatus_Success: + handlers = append(handlers, constants.OnSuccess) + case SchedulerStatus_Error: + handlers = append(handlers, constants.OnFailure) + case SchedulerStatus_Cancel: + handlers = append(handlers, constants.OnCancel) + } + handlers = append(handlers, constants.OnExit) + for _, h := range handlers { + if n := sc.handlers[h]; n != nil { + log.Println(fmt.Sprintf("%s started", n.Name)) + err := sc.runNode(n) + if err != nil { + sc.lastError = err + } + if done != nil { + done <- n + } + } + } + return sc.lastError +} + +func (sc *Scheduler) runNode(node *Node) error { + defer func() { + node.FinishedAt = time.Now() + }() + + node.updateStatus(NodeStatus_Running) + + if !sc.Dry { + node.setupLog(sc.LogDir) + node.openLogFile() + defer node.closeLogFile() + err := node.Execute() + if err != nil { + node.updateStatus(NodeStatus_Error) + } else { + node.updateStatus(NodeStatus_Success) + } + } else { + node.updateStatus(NodeStatus_Success) + } + + return nil +} + +func (sc *Scheduler) setup() (err error) { + if sc.LogDir == "" { + sc.LogDir, err = settings.Get(settings.CONFIG__LOGS_DIR) + if err != nil { + return + } + } + if !sc.Dry { + if err = os.MkdirAll(sc.LogDir, 
0755); err != nil { + return + } + } + sc.handlers = map[string]*Node{} + if sc.OnExit != nil { + sc.handlers[constants.OnExit] = &Node{Step: sc.OnExit} + } + if sc.OnSuccess != nil { + sc.handlers[constants.OnSuccess] = &Node{Step: sc.OnSuccess} + } + if sc.OnFailure != nil { + sc.handlers[constants.OnFailure] = &Node{Step: sc.OnFailure} + } + if sc.OnCancel != nil { + sc.handlers[constants.OnCancel] = &Node{Step: sc.OnCancel} + } + return +} + +func (sc *Scheduler) HanderNode(name string) *Node { + if v, ok := sc.handlers[name]; ok { + return v + } + return nil +} + +func handleError(node *Node) { + status := node.ReadStatus() + if status != NodeStatus_Cancel && status != NodeStatus_Success { + if node.RetryPolicy != nil && node.RetryPolicy.Limit > node.ReadRetryCount() { + log.Printf("%s failed but scheduled for retry", node.Name) + node.incRetryCount() + node.updateStatus(NodeStatus_None) + } else { + node.updateStatus(NodeStatus_Error) + } + } +} + +func (sc *Scheduler) IsCanceled() bool { + sc.mu.RLock() + defer sc.mu.RUnlock() + ret := sc.canceled == 1 + return ret +} + +func (sc *Scheduler) setCanceled() { + sc.mu.Lock() + defer sc.mu.Unlock() + sc.canceled = 1 +} + +func (sc *Scheduler) isRunning(g *ExecutionGraph) bool { + for _, node := range g.Nodes() { + switch node.ReadStatus() { + case NodeStatus_Running: + return true + } + } + return false +} + +func (sc *Scheduler) runningCount(g *ExecutionGraph) (count int) { + count = 0 + for _, node := range g.Nodes() { + switch node.ReadStatus() { + case NodeStatus_Running: + count++ + } + } + return count +} + +func (sc *Scheduler) isFinished(g *ExecutionGraph) bool { + for _, node := range g.Nodes() { + switch node.ReadStatus() { + case NodeStatus_Running, NodeStatus_None: + return false + } + } + return true +} + +func (sc *Scheduler) checkStatus(g *ExecutionGraph, in []NodeStatus) bool { + for _, node := range g.Nodes() { + s := node.ReadStatus() + var f = false + for i := range in { + f = s == in[i] + if 
f { + break + } + } + if !f { + return false + } + } + return true +} + +func (sc *Scheduler) Signal(g *ExecutionGraph, sig os.Signal, done chan bool) { + if !sc.IsCanceled() { + sc.setCanceled() + } + for _, node := range g.Nodes() { + node.signal(sig) + } + if done != nil { + defer func() { + done <- true + }() + for sc.isRunning(g) { + time.Sleep(sc.pause) + } + } +} + +func (sc *Scheduler) Cancel(g *ExecutionGraph, done chan bool) { + sc.setCanceled() + if done != nil { + defer func() { + done <- true + }() + } + for _, node := range g.Nodes() { + node.cancel() + } + for sc.isRunning(g) { + time.Sleep(sc.pause) + } +} + +func (sc *Scheduler) Status(g *ExecutionGraph) SchedulerStatus { + if sc.IsCanceled() && !sc.checkStatus(g, []NodeStatus{ + NodeStatus_Success, NodeStatus_Skipped, + }) { + return SchedulerStatus_Cancel + } + if g.StartedAt.IsZero() { + return SchedulerStatus_None + } + if sc.isRunning(g) { + return SchedulerStatus_Running + } + if sc.lastError != nil { + return SchedulerStatus_Error + } + return SchedulerStatus_Success +} + +func isReady(g *ExecutionGraph, node *Node) (ready bool) { + ready = true + for _, dep := range g.To(node.id) { + n := g.Node(dep) + switch n.ReadStatus() { + case NodeStatus_Success: + continue + case NodeStatus_Error: + if !n.ContinueOn.Failure { + ready = false + node.updateStatus(NodeStatus_Cancel) + node.Error = fmt.Errorf("upstream failed") + } + case NodeStatus_Skipped: + if !n.ContinueOn.Skipped { + ready = false + node.updateStatus(NodeStatus_Skipped) + node.Error = fmt.Errorf("upstream skipped") + } + case NodeStatus_Cancel: + ready = false + node.updateStatus(NodeStatus_Cancel) + default: + ready = false + } + } + return ready +} diff --git a/internal/scheduler/scheduler_test.go b/internal/scheduler/scheduler_test.go new file mode 100644 index 000000000..fc43f5958 --- /dev/null +++ b/internal/scheduler/scheduler_test.go @@ -0,0 +1,431 @@ +package scheduler_test + +import ( + "io/ioutil" + 
"jobctl/internal/config" + "jobctl/internal/constants" + "jobctl/internal/scheduler" + "jobctl/internal/settings" + "jobctl/internal/utils" + "os" + "path" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +var ( + testCommand = "true" + testCommandFail = "false" + testBinDir = path.Join(utils.MustGetwd(), "../../tests/bin") + testDir string +) + +func TestMain(m *testing.M) { + testDir = utils.MustTempDir("scheduler-test") + settings.InitTest(testDir) + code := m.Run() + os.RemoveAll(testDir) + os.Exit(code) +} + +func TestScheduler(t *testing.T) { + g, err := scheduler.NewExecutionGraph( + step("1", testCommand), + step("2", testCommand, "1"), + step("3", testCommandFail, "2"), + step("4", testCommand, "3"), + ) + require.NoError(t, err) + sc := scheduler.New(&scheduler.Config{ + MaxActiveRuns: 1, + }) + + counter := 0 + done := make(chan *scheduler.Node) + go func() { + for range done { + counter += 1 + } + }() + require.Error(t, sc.Schedule(g, done)) + assert.Equal(t, counter, 3) + assert.Equal(t, sc.Status(g), scheduler.SchedulerStatus_Error) + + nodes := g.Nodes() + assert.Equal(t, nodes[2].ReadStatus(), scheduler.NodeStatus_Error) + assert.Equal(t, nodes[3].ReadStatus(), scheduler.NodeStatus_Cancel) +} + +func TestSchedulerParallel(t *testing.T) { + g, sc := newTestSchedule(t, + &scheduler.Config{ + MaxActiveRuns: 1000, + }, + step("1", testCommand), + step("2", testCommand), + step("3", testCommand), + ) + err := sc.Schedule(g, nil) + require.NoError(t, err) + assert.Equal(t, sc.Status(g), scheduler.SchedulerStatus_Success) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[1].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[2].ReadStatus(), scheduler.NodeStatus_Success) +} + +func TestSchedulerFailPartially(t *testing.T) { + g, sc, err := testSchedule(t, + step("1", testCommand), + step("2", testCommandFail), 
+ step("3", testCommand, "1"), + step("4", testCommand, "3"), + ) + require.Error(t, err) + assert.Equal(t, sc.Status(g), scheduler.SchedulerStatus_Error) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[1].ReadStatus(), scheduler.NodeStatus_Error) + assert.Equal(t, nodes[2].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[3].ReadStatus(), scheduler.NodeStatus_Success) +} + +func TestSchedulerContinueOnFailure(t *testing.T) { + g, sc, err := testSchedule(t, + step("1", testCommand), + &config.Step{ + Name: "2", + Command: testCommandFail, + Depends: []string{"1"}, + ContinueOn: config.ContinueOn{ + Failure: true, + }, + }, + step("3", testCommand, "2"), + ) + require.Error(t, err) + assert.Equal(t, sc.Status(g), scheduler.SchedulerStatus_Error) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[1].ReadStatus(), scheduler.NodeStatus_Error) + assert.Equal(t, nodes[2].ReadStatus(), scheduler.NodeStatus_Success) +} + +func TestSchedulerAllowSkipped(t *testing.T) { + g, sc, err := testSchedule(t, + step("1", testCommand), + &config.Step{ + Name: "2", + Command: testCommand, + Depends: []string{"1"}, + Preconditions: []*config.Condition{ + { + Condition: "`echo 1`", + Expected: "0", + }, + }, + ContinueOn: config.ContinueOn{Skipped: true}, + }, + step("3", testCommand, "2"), + ) + require.NoError(t, err) + assert.Equal(t, sc.Status(g), scheduler.SchedulerStatus_Success) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[1].ReadStatus(), scheduler.NodeStatus_Skipped) + assert.Equal(t, nodes[2].ReadStatus(), scheduler.NodeStatus_Success) +} + +func TestSchedulerCancel(t *testing.T) { + + g, _ := scheduler.NewExecutionGraph( + step("1", testCommand), + step("2", "sleep 60", "1"), + step("3", testCommandFail, "2"), + ) + sc := 
scheduler.New(&scheduler.Config{ + MaxActiveRuns: 1, + }) + + done := make(chan bool) + go func() { + <-time.After(time.Millisecond * 1000) + sc.Cancel(g, done) + }() + + _ = sc.Schedule(g, nil) + <-done // Wait for canceling finished + assert.Equal(t, sc.Status(g), scheduler.SchedulerStatus_Cancel) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[1].ReadStatus(), scheduler.NodeStatus_Cancel) + assert.Equal(t, nodes[2].ReadStatus(), scheduler.NodeStatus_Cancel) +} + +func TestSchedulerRetryFail(t *testing.T) { + cmd := path.Join(testBinDir, "testfile.sh") + g, sc, err := testSchedule(t, + &config.Step{ + Name: "1", + Command: cmd, + ContinueOn: config.ContinueOn{Failure: true}, + RetryPolicy: &config.RetryPolicy{Limit: 1}, + }, + &config.Step{ + Name: "2", + Command: cmd, + Args: []string{"flag"}, + ContinueOn: config.ContinueOn{Failure: true}, + RetryPolicy: &config.RetryPolicy{Limit: 1}, + Depends: []string{"1"}, + }, + &config.Step{ + Name: "3", + Command: cmd, + Depends: []string{"2"}, + }, + step("4", cmd, "3"), + ) + assert.True(t, err != nil) + assert.Equal(t, sc.Status(g), scheduler.SchedulerStatus_Error) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Error) + assert.Equal(t, nodes[1].ReadStatus(), scheduler.NodeStatus_Error) + assert.Equal(t, nodes[2].ReadStatus(), scheduler.NodeStatus_Error) + assert.Equal(t, nodes[3].ReadStatus(), scheduler.NodeStatus_Cancel) + + assert.Equal(t, nodes[0].ReadRetryCount(), 1) + assert.Equal(t, nodes[1].ReadRetryCount(), 1) +} + +func TestSchedulerRetrySuccess(t *testing.T) { + cmd := path.Join(testBinDir, "testfile.sh") + tmpDir, err := ioutil.TempDir("", "scheduler_test") + tmpFile := path.Join(tmpDir, "flag") + + require.NoError(t, err) + defer os.Remove(tmpDir) + + go func() { + select { + case <-time.After(time.Millisecond * 300): + f, err := os.Create(tmpFile) + require.NoError(t, err) + f.Close() + } + }() + + 
g, sc, err := testSchedule(t, + step("1", testCommand), + &config.Step{ + Name: "2", + Command: cmd, + Args: []string{tmpFile}, + Depends: []string{"1"}, + RetryPolicy: &config.RetryPolicy{Limit: 10}, + }, + step("3", testCommand, "2"), + ) + assert.NoError(t, err) + assert.Equal(t, sc.Status(g), scheduler.SchedulerStatus_Success) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[1].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[2].ReadStatus(), scheduler.NodeStatus_Success) + + if nodes[1].ReadRetryCount() == 0 { + t.Error("step 2 Should be retried") + } +} + +func TestStepPreCondition(t *testing.T) { + g, sc, err := testSchedule(t, + step("1", testCommand), + &config.Step{ + Name: "2", + Command: testCommand, + Depends: []string{"1"}, + Preconditions: []*config.Condition{ + { + Condition: "`echo 1`", + Expected: "0", + }, + }, + }, + step("3", testCommand, "2"), + &config.Step{ + Name: "4", + Command: testCommand, + Preconditions: []*config.Condition{ + { + Condition: "`echo 1`", + Expected: "1", + }, + }, + }, + step("5", testCommand, "4"), + ) + require.NoError(t, err) + assert.Equal(t, sc.Status(g), scheduler.SchedulerStatus_Success) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[1].ReadStatus(), scheduler.NodeStatus_Skipped) + assert.Equal(t, nodes[2].ReadStatus(), scheduler.NodeStatus_Skipped) + assert.Equal(t, nodes[3].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[4].ReadStatus(), scheduler.NodeStatus_Success) +} + +func TestSchedulerOnExit(t *testing.T) { + g, sc := newTestSchedule(t, + &scheduler.Config{ + OnExit: step("onExit", testCommand), + }, + step("1", testCommand), + step("2", testCommand, "1"), + step("3", testCommand), + ) + + err := sc.Schedule(g, nil) + require.NoError(t, err) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), 
scheduler.NodeStatus_Success) + assert.Equal(t, nodes[1].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[2].ReadStatus(), scheduler.NodeStatus_Success) + + onExit := sc.HanderNode(constants.OnExit) + require.NotNil(t, onExit) + assert.Equal(t, onExit.ReadStatus(), scheduler.NodeStatus_Success) +} + +func TestSchedulerOnExitOnFail(t *testing.T) { + g, sc := newTestSchedule(t, + &scheduler.Config{ + OnExit: step("onExit", testCommand), + }, + step("1", testCommandFail), + step("2", testCommand, "1"), + step("3", testCommand), + ) + + err := sc.Schedule(g, nil) + require.Error(t, err) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Error) + assert.Equal(t, nodes[1].ReadStatus(), scheduler.NodeStatus_Cancel) + assert.Equal(t, nodes[2].ReadStatus(), scheduler.NodeStatus_Success) + + assert.Equal(t, sc.HanderNode(constants.OnExit).ReadStatus(), scheduler.NodeStatus_Success) +} + +func TestSchedulerOnCancel(t *testing.T) { + g, sc := newTestSchedule(t, + &scheduler.Config{ + OnSuccess: step("onSuccess", testCommand), + OnFailure: step("onFailure", testCommand), + OnCancel: step("onCancel", testCommand), + }, + step("1", testCommand), + step("2", "sleep 60", "1"), + ) + + done := make(chan bool) + go func() { + <-time.After(time.Millisecond * 500) + sc.Cancel(g, done) + }() + + err := sc.Schedule(g, nil) + require.NoError(t, err) + <-done // Wait for canceling finished + assert.Equal(t, sc.Status(g), scheduler.SchedulerStatus_Cancel) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, nodes[1].ReadStatus(), scheduler.NodeStatus_Cancel) + assert.Equal(t, sc.HanderNode(constants.OnSuccess).ReadStatus(), scheduler.NodeStatus_None) + assert.Equal(t, sc.HanderNode(constants.OnFailure).ReadStatus(), scheduler.NodeStatus_None) + assert.Equal(t, sc.HanderNode(constants.OnCancel).ReadStatus(), scheduler.NodeStatus_Success) +} + +func TestSchedulerOnSuccess(t 
*testing.T) { + g, sc := newTestSchedule(t, + &scheduler.Config{ + OnExit: step("onExit", testCommand), + OnSuccess: step("onSuccess", testCommand), + OnFailure: step("onFailure", testCommand), + }, + step("1", testCommand), + ) + + err := sc.Schedule(g, nil) + require.NoError(t, err) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, sc.HanderNode(constants.OnExit).ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, sc.HanderNode(constants.OnSuccess).ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, sc.HanderNode(constants.OnFailure).ReadStatus(), scheduler.NodeStatus_None) +} + +func TestSchedulerOnFailure(t *testing.T) { + g, sc := newTestSchedule(t, + &scheduler.Config{ + OnExit: step("onExit", testCommand), + OnSuccess: step("onSuccess", testCommand), + OnFailure: step("onFailure", testCommand), + OnCancel: step("onCancel", testCommand), + }, + step("1", testCommandFail), + ) + + err := sc.Schedule(g, nil) + require.Error(t, err) + + nodes := g.Nodes() + assert.Equal(t, nodes[0].ReadStatus(), scheduler.NodeStatus_Error) + assert.Equal(t, sc.HanderNode(constants.OnExit).ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, sc.HanderNode(constants.OnSuccess).ReadStatus(), scheduler.NodeStatus_None) + assert.Equal(t, sc.HanderNode(constants.OnFailure).ReadStatus(), scheduler.NodeStatus_Success) + assert.Equal(t, sc.HanderNode(constants.OnCancel).ReadStatus(), scheduler.NodeStatus_None) +} + +func testSchedule(t *testing.T, steps ...*config.Step) ( + *scheduler.ExecutionGraph, *scheduler.Scheduler, error, +) { + t.Helper() + g, sc := newTestSchedule(t, + &scheduler.Config{MaxActiveRuns: 2}, steps...) + return g, sc, sc.Schedule(g, nil) +} + +func newTestSchedule(t *testing.T, c *scheduler.Config, steps ...*config.Step) ( + *scheduler.ExecutionGraph, *scheduler.Scheduler, +) { + t.Helper() + g, err := scheduler.NewExecutionGraph(steps...) 
+ require.NoError(t, err) + return g, scheduler.New(c) +} + +func step(name, command string, depends ...string) *config.Step { + cmd, args := utils.SplitCommand(command) + return &config.Step{ + Name: name, + Command: cmd, + Args: args, + Depends: depends, + } +} diff --git a/internal/settings/settings.go b/internal/settings/settings.go new file mode 100644 index 000000000..45ef3b2da --- /dev/null +++ b/internal/settings/settings.go @@ -0,0 +1,61 @@ +package settings + +import ( + "fmt" + "jobctl/internal/utils" + "os" + "path" +) + +var ErrConfigNotFound = fmt.Errorf("config not found") + +var cache map[string]string = nil + +const ( + CONFIG__DATA_DIR = "JOBCTL__DATA" + CONFIG__LOGS_DIR = "JOBCTL__LOGS" +) + +func MustGet(name string) string { + val, err := Get(name) + if err != nil { + panic(fmt.Errorf("failed to get %s : %w", name, err)) + } + return val +} + +func init() { + load() +} + +func Get(name string) (string, error) { + if val, ok := cache[name]; ok { + return val, nil + } + return "", ErrConfigNotFound +} + +func load() { + dir := utils.MustGetUserHomeDir() + + cache = map[string]string{} + cache[CONFIG__DATA_DIR] = config( + CONFIG__DATA_DIR, + path.Join(dir, "/.jobctl/data")) + cache[CONFIG__LOGS_DIR] = config( + CONFIG__LOGS_DIR, + path.Join(dir, "/.jobctl/logs")) +} + +func InitTest(dir string) { + os.Setenv("HOME", dir) + load() +} + +func config(env, def string) string { + val := os.ExpandEnv(fmt.Sprintf("${%s}", env)) + if val == "" { + return def + } + return val +} diff --git a/internal/settings/settings_test.go b/internal/settings/settings_test.go new file mode 100644 index 000000000..2d78fa78d --- /dev/null +++ b/internal/settings/settings_test.go @@ -0,0 +1,63 @@ +package settings + +import ( + "jobctl/internal/utils" + "os" + "path" + "testing" + + "github.com/stretchr/testify/assert" +) + +var testHomeDir string + +func TestMain(m *testing.M) { + testHomeDir = utils.MustTempDir("settings_test") + InitTest(testHomeDir) + 
os.Exit(m.Run()) +} + +func TestReadSetting(t *testing.T) { + load() + + // read default configs + for _, test := range []struct { + Name string + Want string + }{ + { + Name: CONFIG__DATA_DIR, + Want: path.Join(testHomeDir, ".jobctl/data"), + }, + { + Name: CONFIG__LOGS_DIR, + Want: path.Join(testHomeDir, ".jobctl/logs"), + }, + } { + val, err := Get(test.Name) + assert.NoError(t, err) + assert.Equal(t, val, test.Want) + } + + // read from env variables + for _, test := range []struct { + Name string + Want string + }{ + { + Name: CONFIG__DATA_DIR, + Want: "/home/jobctl/data", + }, + { + Name: CONFIG__LOGS_DIR, + Want: "/home/jobctl/logs", + }, + } { + os.Setenv(test.Name, test.Want) + load() + + val, err := Get(test.Name) + assert.NoError(t, err) + assert.Equal(t, val, test.Want) + } +} diff --git a/internal/sock/address.go b/internal/sock/address.go new file mode 100644 index 000000000..bce6473f7 --- /dev/null +++ b/internal/sock/address.go @@ -0,0 +1,19 @@ +package sock + +import ( + "crypto/md5" + "fmt" + "path" + "strings" +) + +const sockDir = "/tmp" + +func GetSockAddr(key string) string { + s := strings.ReplaceAll(key, " ", "_") + name := strings.Replace(path.Base(s), path.Ext(path.Base(s)), "", 1) + h := md5.New() + h.Write([]byte(s)) + bs := h.Sum(nil) + return path.Join(sockDir, fmt.Sprintf("@jobctl-%s-%x", name, bs)) +} diff --git a/internal/sock/client.go b/internal/sock/client.go new file mode 100644 index 000000000..02545f3e3 --- /dev/null +++ b/internal/sock/client.go @@ -0,0 +1,48 @@ +package sock + +import ( + "bufio" + "fmt" + "io" + "log" + "net" + "net/http" +) + +type Client struct { + addr *net.UnixAddr +} + +func NewUnixClient(nw string) (*Client, error) { + addr, err := net.ResolveUnixAddr("unix", nw) + if err != nil { + return nil, err + } + return &Client{ + addr: addr, + }, nil +} + +func (cl *Client) Request(method, url string) (string, error) { + conn, err := net.DialUnix("unix", nil, cl.addr) + if err != nil { + return "", 
fmt.Errorf("the job is not running: %w", err) + } + defer conn.Close() + request, err := http.NewRequest(method, url, nil) + if err != nil { + log.Printf("NewRequest %v", err) + return "", err + } + request.Write(conn) + response, err := http.ReadResponse(bufio.NewReader(conn), request) + if err != nil { + return "", err + } + body, err := io.ReadAll(response.Body) + if err != nil { + log.Printf("ReadAll %v", err) + return "", err + } + return string(body), nil +} diff --git a/internal/sock/server.go b/internal/sock/server.go new file mode 100644 index 000000000..25b74732c --- /dev/null +++ b/internal/sock/server.go @@ -0,0 +1,119 @@ +package sock + +import ( + "bufio" + "errors" + "io/ioutil" + "log" + "net" + "net/http" + "os" + "strings" +) + +type Server struct { + *Config + listener net.Listener + quit bool +} + +type Config struct { + Addr string + HandlerFunc HttpHandlerFunc +} + +type HttpHandlerFunc func(w http.ResponseWriter, r *http.Request) + +func NewServer(c *Config) (*Server, error) { + return &Server{ + Config: c, + quit: false, + }, nil +} + +var ( + ErrServerRequestedShutdown = errors.New("socket server is requested to shutdown") +) + +func (svr *Server) Serve(listen chan error) error { + os.Remove(svr.Addr) + var err error + svr.listener, err = net.Listen("unix", svr.Addr) + if err != nil { + listen <- err + return err + } + listen <- err + log.Printf("server is running at \"%v\"\n", svr.Addr) + defer func() { + svr.listener.Close() + os.Remove(svr.Addr) + }() + for { + conn, err := svr.listener.Accept() + if svr.quit { + return ErrServerRequestedShutdown + } + if err != nil { + return err + } + go func() { + request, err := http.ReadRequest(bufio.NewReader(conn)) + if err != nil { + log.Printf("Failed to read request %v", err) + return + } + svr.HandlerFunc(NewHttpResponseWriter(&conn), request) + conn.Close() + + if svr.quit { + svr.Shutdown() + } + }() + } +} + +func (svr *Server) Shutdown() { + if !svr.quit { + svr.quit = true + if svr.listener != nil
{ + if err := svr.listener.Close(); err != nil { + log.Printf("failed to close listener: %s", err) + } + } + } +} + +type HttpResponseWriter struct { + conn *net.Conn + header http.Header + statusCode int +} + +func NewHttpResponseWriter(conn *net.Conn) http.ResponseWriter { + return &HttpResponseWriter{ + conn: conn, + header: make(http.Header), + statusCode: http.StatusOK, + } +} + +func (w *HttpResponseWriter) Write(data []byte) (int, error) { + response := http.Response{ + StatusCode: w.statusCode, + ProtoMajor: 1, + ProtoMinor: 0, + Body: ioutil.NopCloser(strings.NewReader(string(data))), + Header: w.header, + } + response.Write(*w.conn) + return len(data), nil +} + +func (w *HttpResponseWriter) Header() http.Header { + return w.header +} + +func (w *HttpResponseWriter) WriteHeader(statusCode int) { + w.statusCode = statusCode +} diff --git a/internal/sock/server_test.go b/internal/sock/server_test.go new file mode 100644 index 000000000..411befe9e --- /dev/null +++ b/internal/sock/server_test.go @@ -0,0 +1,65 @@ +package sock_test + +import ( + "io/ioutil" + "jobctl/internal/sock" + "jobctl/internal/utils" + "net/http" + "os" + "path" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +var ( + testsDir = path.Join(utils.MustGetwd(), "../../tests/testdata") +) + +func TestMain(m *testing.M) { + testHomeDir, err := ioutil.TempDir("", "controller_test") + if err != nil { + panic(err) + } + os.Setenv("HOME", testHomeDir) + code := m.Run() + os.RemoveAll(testHomeDir) + os.Exit(code) +} + +func TestStartServer(t *testing.T) { + tmpFile, err := ioutil.TempFile("", "test-server-start") + require.NoError(t, err) + defer os.Remove(tmpFile.Name()) + + unixServer, err := sock.NewServer( + &sock.Config{ + Addr: tmpFile.Name(), + HandlerFunc: func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + w.Write([]byte("OK")) + }, + }) + require.NoError(t, err) + + client, err :=
sock.NewUnixClient(tmpFile.Name()) + require.NoError(t, err) + + listen := make(chan error) + go func() { + for range listen { + } + }() + + go func() { + err = unixServer.Serve(listen) + require.NoError(t, err) + }() + + time.Sleep(time.Second * 1) + + ret, err := client.Request(http.MethodPost, "/") + assert.Equal(t, ret, "OK") +} diff --git a/internal/utils/assert.go b/internal/utils/assert.go new file mode 100644 index 000000000..683549cc6 --- /dev/null +++ b/internal/utils/assert.go @@ -0,0 +1,14 @@ +package utils + +import ( + "regexp" + "testing" +) + +func AssertPattern(t *testing.T, name string, want string, actual string) { + re := regexp.MustCompile(want) + + if !re.Match([]byte(actual)) { + t.Fatalf("%s should match %s, was %s", name, want, actual) + } +} diff --git a/internal/utils/utils.go b/internal/utils/utils.go new file mode 100644 index 000000000..83d650258 --- /dev/null +++ b/internal/utils/utils.go @@ -0,0 +1,152 @@ +package utils + +import ( + "io/ioutil" + "jobctl/internal/constants" + "os" + "os/exec" + "regexp" + "strings" + "time" +) + +func DefaultEnv() map[string]string { + return map[string]string{ + "PATH": "${PATH}", + } +} + +// MustGetUserHomeDir returns the user home directory. +// Panics if os.UserHomeDir() returns an error +func MustGetUserHomeDir() string { + hd, err := os.UserHomeDir() + if err != nil { + panic(err) + } + + return hd +} + +// MustGetwd returns current working directory.
+// Panics if os.Getwd() returns an error +func MustGetwd() string { + wd, err := os.Getwd() + if err != nil { + panic(err) + } + + return wd +} + +func FormatTime(t time.Time) string { + if t.IsZero() { + return constants.TimeEmpty + } else { + return t.Format(constants.TimeFormat) + } +} + +func ParseTime(val string) (time.Time, error) { + if val == constants.TimeEmpty { + return time.Time{}, nil + } + ret, err := time.ParseInLocation(constants.TimeFormat, val, time.Local) + if err != nil { + return time.Time{}, err + } + return ret, nil +} + +func FormatDuration(t time.Duration, defaultVal string) string { + if t == 0 { + return defaultVal + } else { + return t.String() + } +} + +func SplitCommand(cmd string) (program string, args []string) { + vals := strings.SplitN(os.ExpandEnv(cmd), " ", 2) + if len(vals) > 1 { + return vals[0], strings.Split(vals[1], " ") + } + return vals[0], []string{} +} + +func FileExists(file string) bool { + _, err := os.Stat(file) + return !os.IsNotExist(err) +} + +func OpenOrCreateFile(file string) (*os.File, error) { + if FileExists(file) { + return OpenFile(file) + } + return CreateFile(file) +} + +func OpenFile(file string) (*os.File, error) { + outfile, err := os.OpenFile(file, os.O_APPEND|os.O_WRONLY, 0755) + if err != nil { + return nil, err + } + return outfile, nil +} + +func CreateFile(file string) (*os.File, error) { + outfile, err := os.Create(file) + if err != nil { + return nil, err + } + return outfile, nil +} + +// https://github.com/sindresorhus/filename-reserved-regex/blob/master/index.js +var ( + filenameReservedRegex = regexp.MustCompile(`[<>:"/\\|?*\x00-\x1F]`) + filenameReservedWindowsNamesRegex = regexp.MustCompile(`(?i)^(con|prn|aux|nul|com[0-9]|lpt[0-9])$`) +) + +func ValidFilename(str, replacement string) string { + s := filenameReservedRegex.ReplaceAllString(str, replacement) + s = filenameReservedWindowsNamesRegex.ReplaceAllString(s, replacement) + return strings.ReplaceAll(s, " ", replacement) +} + +func
ParseVariable(value string) (string, error) { + val, err := ParseCommand(os.ExpandEnv(value)) + if err != nil { + return "", err + } + return val, nil +} + +var tickerMatcher = regexp.MustCompile("`[^`]+`") + +func ParseCommand(value string) (string, error) { + matches := tickerMatcher.FindAllString(strings.TrimSpace(value), -1) + if matches == nil { + return value, nil + } + ret := value + for i := 0; i < len(matches); i++ { + command := matches[i] + str := strings.ReplaceAll(command, "`", "") + prog, args := SplitCommand(str) + out, err := exec.Command(prog, args...).Output() + if err != nil { + return "", err + } + ret = strings.ReplaceAll(ret, command, strings.TrimSpace(string(out[:]))) + + } + return ret, nil +} + +func MustTempDir(pattern string) string { + t, err := ioutil.TempDir("", pattern) + if err != nil { + panic(err) + } + return t +} diff --git a/internal/utils/utils_test.go b/internal/utils/utils_test.go new file mode 100644 index 000000000..45958b383 --- /dev/null +++ b/internal/utils/utils_test.go @@ -0,0 +1,73 @@ +package utils_test + +import ( + "io/ioutil" + "jobctl/internal/utils" + "os" + "path" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestMustGetUserHomeDir(t *testing.T) { + err := os.Setenv("HOME", "/test") + if err != nil { + t.Fatal(err) + } + hd := utils.MustGetUserHomeDir() + assert.Equal(t, "/test", hd) +} + +func TestMustGetwd(t *testing.T) { + wd, _ := os.Getwd() + assert.Equal(t, utils.MustGetwd(), wd) +} + +func TestFormatTime(t *testing.T) { + tm := time.Date(2022, 2, 1, 2, 2, 2, 0, time.Now().Location()) + formatted := utils.FormatTime(tm) + assert.Equal(t, "2022-02-01 02:02:02", formatted) + + parsed, err := utils.ParseTime(formatted) + require.NoError(t, err) + assert.Equal(t, tm, parsed) + +} + +func TestFormatDuration(t *testing.T) { + dr := time.Second*5 + time.Millisecond*100 + assert.Equal(t, "5.1s", utils.FormatDuration(dr, "")) +} + +func
TestSplitCommand(t *testing.T) { + command := "ls -al test/" + program, args := utils.SplitCommand(command) + assert.Equal(t, "ls", program) + assert.Equal(t, "-al", args[0]) + assert.Equal(t, "test/", args[1]) +} + +func TestFileExists(t *testing.T) { + require.True(t, utils.FileExists("/")) +} + +func TestValidFilename(t *testing.T) { + f := utils.ValidFilename("file\\name", "_") + assert.Equal(t, f, "file_name") +} + +func TestOpenOrCreateFile(t *testing.T) { + tmp, err := ioutil.TempDir("", "utils_test") + require.NoError(t, err) + name := path.Join(tmp, "/file_for_test.txt") + f, err := utils.OpenOrCreateFile(name) + require.NoError(t, err) + defer func() { + f.Close() + os.Remove(name) + }() + require.True(t, utils.FileExists(name)) +} diff --git a/tests/admin/.jobctl/admin.yaml b/tests/admin/.jobctl/admin.yaml new file mode 100644 index 000000000..45d06cad2 --- /dev/null +++ b/tests/admin/.jobctl/admin.yaml @@ -0,0 +1,5 @@ +host: ${HOST} +port: 8081 +jobs: "${HOME}/jobctl/jobs" +command: "${HOME}/jobctl/bin/jobctl" +workDir: "${HOME}/jobctl/jobs" \ No newline at end of file diff --git a/tests/admin/admin.yaml b/tests/admin/admin.yaml new file mode 100644 index 000000000..1ca6fb900 --- /dev/null +++ b/tests/admin/admin.yaml @@ -0,0 +1,6 @@ +host: ${HOST} +port: 8082 +jobs: "${HOME}/jobctl/jobs" +command: "${HOME}/jobctl/bin/jobctl" +workDir: "${HOME}/jobctl/jobs" +logEncodingCharset: euc-jp \ No newline at end of file diff --git a/tests/bin/testfile.sh b/tests/bin/testfile.sh new file mode 100755 index 000000000..20ea3d744 --- /dev/null +++ b/tests/bin/testfile.sh @@ -0,0 +1,8 @@ +#!/bin/sh + +if [ -f "$1" ]; then + echo "file exists" + exit 0 +fi +echo "file not found" +exit 1 \ No newline at end of file diff --git a/tests/config/.jobctl/config.yaml b/tests/config/.jobctl/config.yaml new file mode 100644 index 000000000..a177f9b25 --- /dev/null +++ b/tests/config/.jobctl/config.yaml @@ -0,0 +1,15 @@ +env: + LOG_DIR: "${HOME}/logs" +logDir: "${LOG_DIR}" +smtp:
+ host: "smtp.host" + port: "25" +errorMail: + from: "system@mail.com" + to: "error@mail.com" + prefix: "[ERROR]" +infoMail: + from: "system@mail.com" + to: "info@mail.com" + prefix: "[INFO]" +histRetentionDays: 7 \ No newline at end of file diff --git a/tests/config/err_no_name.yaml b/tests/config/err_no_name.yaml new file mode 100644 index 000000000..fb762334f --- /dev/null +++ b/tests/config/err_no_name.yaml @@ -0,0 +1,3 @@ +name: "" +steps: + - name: step 1 \ No newline at end of file diff --git a/tests/config/err_no_steps.yaml b/tests/config/err_no_steps.yaml new file mode 100644 index 000000000..02c258bb1 --- /dev/null +++ b/tests/config/err_no_steps.yaml @@ -0,0 +1 @@ +name: no_steps \ No newline at end of file diff --git a/tests/config/err_step_no_command.yaml b/tests/config/err_step_no_command.yaml new file mode 100644 index 000000000..c659982cb --- /dev/null +++ b/tests/config/err_step_no_command.yaml @@ -0,0 +1,3 @@ +name: test +steps: + - name: step 1 \ No newline at end of file diff --git a/tests/config/err_step_no_name.yaml b/tests/config/err_step_no_name.yaml new file mode 100644 index 000000000..dfefdc510 --- /dev/null +++ b/tests/config/err_step_no_name.yaml @@ -0,0 +1,3 @@ +name: test +steps: + - command: "printf 1" \ No newline at end of file diff --git a/tests/config/test.yaml b/tests/config/test.yaml new file mode 100644 index 000000000..b36493b81 --- /dev/null +++ b/tests/config/test.yaml @@ -0,0 +1,46 @@ +name: test job +description: this is a test job. 
+env: + LOG_DIR: ${HOME}/logs +logDir: ${LOG_DIR} +histRetentionDays: 3 +mailOn: + Error: true + Success: true +delaySec: 1 +maxActiveRuns: 1 +params: param1 param2 +smtp: + host: smtp.host + port: "25" +errorMail: + from: system@mail.com + to: error@mail.com + prefix: "[ERROR]" +infoMail: + from: system@mail.com + to: info@mail.com + prefix: "[INFO]" +preconditions: + - condition: "`printf 1`" + expected: "1" +steps: + - name: step 1 + dir: ${HOME} + command: "true" + mailOnError: true + continueOn: + failure: true + skipped: true + retryPolicy: + limit: 2 + preconditions: + - condition: "`printf test`" + expected: test + - name: step 2 + dir: ${HOME} + command: "false" + continueOn: + failure: true + depends: + - step 1 \ No newline at end of file diff --git a/tests/testdata/agent_retry.yaml b/tests/testdata/agent_retry.yaml new file mode 100644 index 000000000..4543f8fd4 --- /dev/null +++ b/tests/testdata/agent_retry.yaml @@ -0,0 +1,40 @@ +name: "agent retry" +steps: + - name: "1" + command: "true" + - name: "2" + command: "false" + continueOn: + failure: true + depends: ["1"] + - name: "3" + command: "true" + depends: ["2"] + - name: "4" + command: "true" + preconditions: + - condition: "`echo 0`" + expected: "1" + continueOn: + skipped: true + - name: "5" + command: "false" + depends: ["4"] + - name: "6" + command: "true" + depends: ["5"] + - name: "7" + command: "true" + preconditions: + - condition: "`echo 0`" + expected: "1" + depends: ["6"] + continueOn: + skipped: true + - name: "8" + command: "true" + preconditions: + - condition: "`echo 0`" + expected: "1" + - name: "9" + command: "false" \ No newline at end of file diff --git a/tests/testdata/all.yaml b/tests/testdata/all.yaml new file mode 100644 index 000000000..9e98b5d13 --- /dev/null +++ b/tests/testdata/all.yaml @@ -0,0 +1,55 @@ +name: test job +description: this is a test job. 
+env: + LOG_DIR: ${HOME}/logs +logDir: ${LOG_DIR} +histRetentionDays: 3 +mailOn: + failure: true + success: true +delaySec: 1 +maxActiveRuns: 1 +params: param1 param2 +smtp: + host: smtp.host + port: "25" +errorMail: + from: system@mail.com + to: error@mail.com + prefix: "[ERROR]" +infoMail: + from: system@mail.com + to: info@mail.com + prefix: "[INFO]" +preconditions: + - condition: "`echo 1`" + expected: "1" +handlerOn: + exit: + command: "onExit.sh" + success: + command: "onSuccess.sh" + failure: + command: "onFailure.sh" + cancel: + command: "onCancel.sh" + +steps: + - name: "1" + dir: ${HOME} + command: "true" + mailOnError: true + continueOn: + failure: true + skipped: true + retryPolicy: + limit: 2 + preconditions: + - condition: "`echo test`" + expected: test + - name: "2" + command: "false" + continueOn: + failure: true + depends: + - "1" \ No newline at end of file diff --git a/tests/testdata/basic_failure.yaml b/tests/testdata/basic_failure.yaml new file mode 100644 index 000000000..2d86d7dd2 --- /dev/null +++ b/tests/testdata/basic_failure.yaml @@ -0,0 +1,4 @@ +name: "basic failure" +steps: + - name: "1" + command: "false" \ No newline at end of file diff --git a/tests/testdata/basic_sleep.yaml b/tests/testdata/basic_sleep.yaml new file mode 100644 index 000000000..117b21376 --- /dev/null +++ b/tests/testdata/basic_sleep.yaml @@ -0,0 +1,4 @@ +name: "basic sleep" +steps: + - name: "1" + command: "sleep 1" \ No newline at end of file diff --git a/tests/testdata/basic_sleep_long.yaml b/tests/testdata/basic_sleep_long.yaml new file mode 100644 index 000000000..a33da8315 --- /dev/null +++ b/tests/testdata/basic_sleep_long.yaml @@ -0,0 +1,4 @@ +name: "basic sleep" +steps: + - name: "1" + command: "sleep 100" \ No newline at end of file diff --git a/tests/testdata/basic_success.yaml b/tests/testdata/basic_success.yaml new file mode 100644 index 000000000..fe95602dc --- /dev/null +++ b/tests/testdata/basic_success.yaml @@ -0,0 +1,4 @@ +name: "basic success" 
+steps: + - name: "1" + command: "true" \ No newline at end of file diff --git a/tests/testdata/basic_success_2.yaml b/tests/testdata/basic_success_2.yaml new file mode 100644 index 000000000..5e8b19167 --- /dev/null +++ b/tests/testdata/basic_success_2.yaml @@ -0,0 +1,6 @@ +name: "basic success 2" +steps: + - name: "1" + command: "true" + - name: "2" + command: "true" \ No newline at end of file diff --git a/tests/testdata/cmd_retry.yaml b/tests/testdata/cmd_retry.yaml new file mode 100644 index 000000000..a24d52980 --- /dev/null +++ b/tests/testdata/cmd_retry.yaml @@ -0,0 +1,41 @@ +name: "agent retry" +params: "param-value" +steps: + - name: "1" + command: "true" + - name: "2" + command: "false" + continueOn: + failure: true + depends: ["1"] + - name: "3" + command: "true" + depends: ["2"] + - name: "4" + command: "true" + preconditions: + - condition: "`echo 0`" + expected: "1" + continueOn: + skipped: true + - name: "5" + command: "false" + depends: ["4"] + - name: "6" + command: "echo parameter is $1" + depends: ["5"] + - name: "7" + command: "true" + preconditions: + - condition: "`echo 0`" + expected: "1" + depends: ["6"] + continueOn: + skipped: true + - name: "8" + command: "true" + preconditions: + - condition: "`echo 0`" + expected: "1" + - name: "9" + command: "false" \ No newline at end of file diff --git a/tests/testdata/err_no_name.yaml b/tests/testdata/err_no_name.yaml new file mode 100644 index 000000000..fb762334f --- /dev/null +++ b/tests/testdata/err_no_name.yaml @@ -0,0 +1,3 @@ +name: "" +steps: + - name: step 1 \ No newline at end of file diff --git a/tests/testdata/err_no_steps.yaml b/tests/testdata/err_no_steps.yaml new file mode 100644 index 000000000..02c258bb1 --- /dev/null +++ b/tests/testdata/err_no_steps.yaml @@ -0,0 +1 @@ +name: no_steps \ No newline at end of file diff --git a/tests/testdata/err_step_no_command.yaml b/tests/testdata/err_step_no_command.yaml new file mode 100644 index 000000000..c659982cb --- /dev/null +++ 
b/tests/testdata/err_step_no_command.yaml @@ -0,0 +1,3 @@ +name: test +steps: + - name: step 1 \ No newline at end of file diff --git a/tests/testdata/err_step_no_name.yaml b/tests/testdata/err_step_no_name.yaml new file mode 100644 index 000000000..9ff401f4d --- /dev/null +++ b/tests/testdata/err_step_no_name.yaml @@ -0,0 +1,3 @@ +name: test +steps: + - command: "echo 1" \ No newline at end of file diff --git a/tests/testdata/multiple_steps.yaml b/tests/testdata/multiple_steps.yaml new file mode 100644 index 000000000..20beb210c --- /dev/null +++ b/tests/testdata/multiple_steps.yaml @@ -0,0 +1,8 @@ +name: "multiple steps" +steps: + - name: "1" + command: "true" + - name: "2" + command: "true" + depends: + - "1" \ No newline at end of file diff --git a/tests/testdata/with_params.yaml b/tests/testdata/with_params.yaml new file mode 100644 index 000000000..7686bb955 --- /dev/null +++ b/tests/testdata/with_params.yaml @@ -0,0 +1,5 @@ +name: "with params" +params: "param-value" +steps: + - name: "1" + command: "echo \"params is $1\"" \ No newline at end of file diff --git a/tests/testdata/with_params_2.yaml b/tests/testdata/with_params_2.yaml new file mode 100644 index 000000000..bec982680 --- /dev/null +++ b/tests/testdata/with_params_2.yaml @@ -0,0 +1,5 @@ +name: "with params" +params: param-value1 param_value2 +steps: + - name: "1" + command: "echo \"params are ${1} and ${2}\"" \ No newline at end of file diff --git a/tests/testdata/with_precondition.yaml b/tests/testdata/with_precondition.yaml new file mode 100644 index 000000000..15a2af934 --- /dev/null +++ b/tests/testdata/with_precondition.yaml @@ -0,0 +1,8 @@ +name: "with precondition" +params: "foo" +preconditions: + - condition: ${1} + expected: "foo" +steps: + - name: "1" + command: "echo \"params is $1\"" \ No newline at end of file diff --git a/tests/testdata/with_teardown.yaml b/tests/testdata/with_teardown.yaml new file mode 100644 index 000000000..36f357880 --- /dev/null +++ 
b/tests/testdata/with_teardown.yaml @@ -0,0 +1,11 @@ +name: "with onExit" +handlerOn: + Exit: + command: "true" +steps: + - name: "1" + command: "true" + - name: "2" + command: "true" + depends: + - "1" \ No newline at end of file