This repository has been archived by the owner on Nov 27, 2023. It is now read-only.

Data loss on invalidation of code in the editor #666

Closed
lhein opened this issue Sep 1, 2022 · 18 comments
Labels: bug (Something isn't working) · code editor (Relates to the code editor)

Comments

lhein commented Sep 1, 2022

Describe the bug
If you create a flow in the graphical editor and then switch to the code editor and make one line in the yaml invalid, parts of the code disappear as well as the corresponding figure(s) on the canvas.

To Reproduce
Steps to reproduce the behavior:

  1. See the video

Expected behavior
Allow invalid code while at design time. Just trashing invalid code makes the UX really bad.

Screenshots
(animated GIF: kaoto_deletes_invalid_stuff)

@lhein lhein added the bug Something isn't working label Sep 1, 2022
@kahboom kahboom added the code editor Relates to the code editor label Sep 1, 2022
@kahboom kahboom pinned this issue Sep 1, 2022
lhein commented Sep 5, 2022

might be linked to #673

@Delawen Delawen unpinned this issue Oct 17, 2022
kahboom commented Oct 31, 2022

Prioritizing as 'Medium' instead of 'High' because this requires a bigger discussion about how the frontend should work with the backend to deal with invalid input. It could evolve into a larger discussion about state management if we choose to maintain the code editor's state separately from the visualization.

kahboom commented Nov 2, 2022

So, we have three options here, I think (a rough sketch of how they differ follows the list):

  1. Option 1: YAML is the source of truth, and it can be saved even if broken. Don't render the canvas if the YAML is broken (we can't show a partial view because the UI doesn't parse YAML). Don't replace invalid code, just leave it as is and maybe give the user some support with LSPs (it would have to be limited support since we have multi-DSL support).

  2. Option 2: The canvas is the source of truth, so it doesn't generate invalid YAML. The Kaoto API adds missing metadata to the YAML as it does now, and a user who wants to edit the YAML is a more advanced user who knows they are overriding the rendering. The YAML might be invalid if they override it, but they can't deploy it and only get limited advice on what is wrong with the YAML.

  3. Option 3: Both the YAML and the canvas are sources of truth, depending on which one the user is editing (leave Kaoto as it is right now), and invalid code is removed from the YAML. No in-between state support, but the YAML is always valid and the canvas is always correct.
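
To make the trade-offs concrete, here is a rough TypeScript sketch of what each option implies when the editor holds YAML the backend cannot fully parse. All names here are invented for illustration; this is not Kaoto's actual API:

    // A hypothetical sketch (names invented) of what happens when the editor
    // contains YAML that the backend cannot fully parse, under each option.
    type SourceOfTruth = 'yaml' | 'canvas' | 'both';

    interface InvalidYamlBehaviour {
      keepUserYaml: boolean; // is the user's text preserved verbatim?
      renderCanvas: boolean; // do we still draw something on the canvas?
      allowDeploy: boolean;  // can the integration be deployed in this state?
    }

    function onInvalidYaml(mode: SourceOfTruth): InvalidYamlBehaviour {
      switch (mode) {
        case 'yaml':   // Option 1: YAML wins; the canvas stays blank until it parses again
          return { keepUserYaml: true, renderCanvas: false, allowDeploy: false };
        case 'canvas': // Option 2: advanced users may override, but cannot deploy broken YAML
          return { keepUserYaml: true, renderCanvas: true, allowDeploy: false };
        case 'both':   // Option 3 (current behaviour): invalid code is removed, both views stay valid
          return { keepUserYaml: false, renderCanvas: true, allowDeploy: true };
      }
    }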

Delawen commented Nov 2, 2022

From my point of view:

  • The canvas is constrained and we can trust it, but it is an intermediate step between the user and what gets deployed.
  • The source code is more chaotic and we can only trust it if the backend cleans it, with the advantage that it is directly what gets deployed.

In an ideal world, both canvas and source code always contain the same information, just in a different format.

There are cases in which that doesn't happen.

User is writing something incomplete

This happens when the code can be inferred, but the user hasn't completed it. For example, the user may want to add a twitter-search step:

    ref:
      kind: Kamelet
      name: twitter-search-source

They write that. With that information, we all know, based on the step catalog, what the apiVersion is going to be. But if the user doesn't add it manually, this code is not deployable. And if they do add it manually, they may be using an invalid apiVersion that is not the one deployed on the server.

Kaoto, which knows what is behind the scenes, can automatically add the apiVersion to the source code:

    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: twitter-search-source

Should the source code be auto-fixed by adding the missing apiVersion, or should we wait for the user to complete it themselves?
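
As an illustration of the auto-fix idea, here is a minimal sketch of filling in the apiVersion from a step-catalog lookup. The types and the helper are hypothetical, not Kaoto's real data model:

    // Hypothetical step-catalog entry and auto-fix helper; not Kaoto's actual data model.
    interface CatalogEntry {
      name: string;
      kind: string;
      apiVersion: string; // e.g. "camel.apache.org/v1alpha1" on this particular server
    }

    interface StepRef {
      kind: string;
      name: string;
      apiVersion?: string; // the user may omit this while typing
    }

    // Fill in the apiVersion from the catalog only when the user left it out,
    // so an explicit (possibly wrong) value is never silently overwritten.
    function autoFillApiVersion(ref: StepRef, catalog: CatalogEntry[]): StepRef {
      if (ref.apiVersion) {
        return ref;
      }
      const match = catalog.find((entry) => entry.name === ref.name && entry.kind === ref.kind);
      return match ? { ...ref, apiVersion: match.apiVersion } : ref;
    }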

User is drafting code

The user is half-writing something in the source code, and it is not yet clear what it is going to be.

For example a user editing a KameletBinding may write something like:

    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: 

which hints at a new step being added, but the name is not yet clear. In this case, true, the canvas could visualize some placeholder, but let's not focus on the solution to this specific case because there may be others. Just focus on the user writing something whose final form is not yet clear. It is valid YAML, it just needs some pieces.

    ref:
      step: twitter-something # let's go back to this later

While the user finishes writing, both canvas and source code cannot be synchronized.

Should we leave that there in the source code? It won't be in the canvas; we have no way of adding that information there. But well, it is work in progress, so we shouldn't remove it, right?

User is writing nonsense

And then, the user may be writing something that really doesn't make sense and never will, but to them it is the final form.

    ref:
       name: twitter.call(I'm so smart")
       kind: Kamelet
       apiVersion: unicorns.ahead@v1beta1

What do we do with this? We can't really distinguish it from the previous case, so we must treat both equally: either leave them there or remove those pieces of code.

Delawen commented Nov 2, 2022

So, from my perspective, the questions to solve this dilemma are:

Do we want the canvas, the source code, or both to be the source of truth?

If we decide the canvas is the source of truth:

  • Do we allow auto-fixing of the source code (which includes removing suspicious parts of the source code) to be always in sync with the canvas?
  • If not, how do we tell the user the source code is not in sync with the canvas?

If we decide the source code is the source of truth:

  • Should we disable the canvas until the source code is completely valid and parseable?
    • Do we allow auto-fixing of the source code (which includes removing suspicious parts of the source code) so it is syncable with the canvas?
  • Do we add a save button on the source code editor to avoid intermediate code being shown on the canvas?

If we decide both are sources of truth, same as before:

  • Do we allow auto-fixing of the source code (which includes removing suspicious parts of the source code) to be always in sync with the canvas?
    • If not, how do we tell the user the source code is not in sync with the canvas?
  • Should we disable the canvas until the source code is valid and parseable?
  • Do we add a save button on the source code editor to avoid intermediate code being shown on the canvas?

mmelko commented Nov 2, 2022

I can imagine leaving it as it is... so both are sources of truth, but I'd disable auto-fixing the source code, or have a possibility to force a fix.

I'd focus on how to actually visualise invalid source in a reasonable way:

If the UI sends invalid source to the backend, the backend will return the invalid JSON instead of a cleaned-up one, so for example each step would have some additional state, valid/invalid or something.

We would also visualise invalid code as long as the YAML itself is valid; thanks to the steps having a valid/invalid property, we could actually guide the user to which step is invalid. It could be displayed as a temporary step or a normal step, depending on which part is invalid; for example, if the name is valid but the properties aren't, it could be visualised as a concrete step with an exclamation mark.

We wouldn't allow deploying until the code is valid. Even if we did, Kubernetes would return an error anyway because AFAIK validation happens there too.
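
A rough idea of what such a response could look like, purely as a sketch (this shape is hypothetical, not the current Kaoto API):

    // Hypothetical response shape where the backend flags each step
    // instead of silently dropping the ones it could not resolve.
    interface StepDto {
      name: string;
      valid: boolean;
      errors?: string[]; // e.g. "unknown kamelet", "missing required property"
    }

    interface IntegrationDto {
      steps: StepDto[];
    }

    // The canvas could then render invalid steps as placeholders with a warning badge
    // instead of removing them from both views.
    function badgeFor(step: StepDto): 'ok' | 'warning' {
      return step.valid ? 'ok' : 'warning';
    }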

Delawen commented Nov 2, 2022

If the UI sends invalid source to the backend, the backend will return the invalid JSON instead of a cleaned-up one, so for example each step would have some additional state, valid/invalid or something.

If you edit the canvas with that, the source code is replaced and you lose the invalid code.

igarashitm commented Nov 2, 2022

disable the canvas until the source code is valid and parseable

+1, disable once the source code editor is open

or show a warning dialog saying "You're exiting source editing mode. All incomplete code will be removed. Proceed?" when the user tries to switch to the canvas.

mmelko commented Nov 3, 2022

If the UI sends invalid source to the backend, the backend will return the invalid JSON instead of a cleaned-up one, so for example each step would have some additional state, valid/invalid or something.

If you edit the canvas with that, the source code is replaced and you lose the invalid code.

As I mentioned... I can imagine visualising invalid code as well, to some extent, so it wouldn't necessarily be removed. The only time the code has to be valid and functional is right before deployment/export. So allowing users to fix the invalid code in a visual way could be cool.

Delawen commented Nov 3, 2022

So allowing users to fix the invalid code in a visual way could be cool.

Note that the invalid code will not be on the visual side. If it were there, it wouldn't be invalid :)

@igarashitm

I understand it would be great if we could show the broken code in some way in the canvas as well (maybe with a broken-bin icon), but it would need a custom compiler/parser handling things line by line. I don't think it pays off at this early stage.

apupier commented Nov 4, 2022

Given all the discussions, it seems to me that:

  • the ideal solution is not clear
  • the ideal solution will require a lot of work, which won't fit in the time scope of the first stable release
  • it is a data loss issue, so it must be addressed for the first stable release

Consequently, I see 2 main steps:

  • Implement a solution in a relatively short time which avoids the data loss.
  • Discuss other steps to go to an ideal solution

For the "relatively short time" to implement, I think the solution mentioned about disabling the canvas when there is a not completely parseable and managed yaml file is the one that fit the most.
Even this part can be done in several steps, like as first iteration, remove the content on the canvas and display a warning on it. Then another iteration to find a way to report which part of yaml source code is not parseable. Another iteration showing the message with a non-editable content on the canvas which is doing as best as possible to render the part which is parseable.

Delawen commented Nov 15, 2022

My proposed solution for this issue (a rough sketch of the flow follows the list):

  • Disable auto-saving from the source code editor
  • Add a validate button on the source code editor that:
    • Warns the user that the source code will be cleaned up and may change after saving
    • Shows a diff that the user has to accept
    • Updates the canvas with the content of the source code editor
    • Replaces the content of the source code editor with the valid content coming from the backend
  • Disable the canvas while the source code editor is being edited, until it is saved
    • If the user tries to use the canvas, show a dialog that allows rolling back any changes to the source code in order to continue using the canvas
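
A minimal sketch of that validate-button flow, assuming hypothetical helper functions (none of these names exist in Kaoto):

    // Hypothetical wiring of the proposed validate button; every name here is illustrative.
    interface ValidateDeps {
      confirmCleanup(): Promise<boolean>;                        // "code will be cleaned up" warning
      fetchCleaned(yaml: string): Promise<string>;               // round-trip through the backend
      showDiff(before: string, after: string): Promise<boolean>; // user accepts or rejects the diff
      updateCanvas(yaml: string): void;
      setEditorContent(yaml: string): void;
    }

    async function onValidateClicked(editorYaml: string, deps: ValidateDeps): Promise<void> {
      // 1. Warn that the backend may clean up / rewrite the code.
      if (!(await deps.confirmCleanup())) return;

      // 2. Ask the backend for the cleaned-up version of the current editor content.
      const cleaned = await deps.fetchCleaned(editorYaml);

      // 3. Show a diff that the user has to accept before anything is replaced.
      if (!(await deps.showDiff(editorYaml, cleaned))) return;

      // 4. Only now update both views, so nothing is lost without the user's consent.
      deps.updateCanvas(cleaned);
      deps.setEditorContent(cleaned);
    }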

And in subsequent iterations we will work on improving the source code editor with features like (but not limited to):

  • Invalid step: instead of removing it completely, show the invalid step on the canvas and keep as much as possible on the source code
  • Add more auto-completion on the source code to help the user generate valid code
  • Anything else that can prevent invalid code on the source code editor

mmelko commented Nov 15, 2022

My solution would be similar, but (a rough sketch of how the frontend could use this follows the list):

  • Add a validation status to the integration backend endpoint. Currently we are returning only the validated/modified integration.

    • this would allow updating the canvas only if the code is valid, and displaying information about the invalid code
    • auto-saving would still work, but only with valid code
    • we could have a setting for auto-replacing invalid code
  • The canvas would stay enabled and display only the last valid integration

    • an incomplete but valid integration would still be displayed.
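
A sketch of how the frontend could consume such a status so the canvas keeps showing the last valid integration (names invented for illustration):

    // Hypothetical frontend handling of a validation-aware response; all names are invented.
    interface IntegrationResponse {
      flow: string;   // the (possibly cleaned-up) source code
      valid: boolean; // validation status added by the backend
    }

    declare function renderCanvas(flow: string): void;      // placeholder for the existing canvas update
    declare function showEditorHint(message: string): void; // placeholder for an editor-side notification

    let lastValidFlow: string | undefined;

    function onBackendResponse(res: IntegrationResponse): void {
      if (res.valid) {
        // Valid integrations are rendered even if they are incomplete.
        lastValidFlow = res.flow;
        renderCanvas(res.flow);
      } else if (lastValidFlow !== undefined) {
        // Keep showing the last valid state instead of wiping the canvas or the editor.
        renderCanvas(lastValidFlow);
        showEditorHint('The current code is invalid; the canvas shows the last valid version.');
      }
    }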

Delawen commented Nov 15, 2022

@mmelko

Do we disable the canvas while editing the source code?

What if it is valid but incomplete code?

How are we going to implement that validation? By comparing the input and output of the text blob that comes from the backend?

mmelko commented Nov 15, 2022

Do we disable the canvas while editing the source code?
I would still render the last valid canvas. If the user tried to edit it, there could be a warning that the code will disappear.

What if it is valid but incomplete code?
I would display it. So for example, just a Start step is that case; I want to see what it looks like. (Start + one action)

So I'm expecting validation to be done on the backend side. I'm imagining the response of the integration endpoint looking like:

{
    "flow": ".....",
    "valid": false
}

I wouldn't compare the code, because valid code can still be reformatted or otherwise differ. I don't know how validation is done on the backend side, so if this is something that is hard to do in our current time frame, I'm fine with your solution too.

Delawen commented Nov 15, 2022

So, that's the thing: there is no validation. We just marshal the YAML into Java objects. So if we are going to add some validation status, we will need to implement it.

Plus, the idea of returning

{
    "flow": ".....",
    "valid": false
}

is not appealing to me because we would be mixing different media types there (the source code's media type and JSON).

mmelko commented Nov 30, 2022

PR: #972
