
Suggestion ContinueOnError on DML Requests #304

Closed
SiPurdy opened this issue May 31, 2023 · 4 comments · Fixed by #306

Comments

SiPurdy commented May 31, 2023

There are a few situations where being able to run a DML statement that continues on error, rather than aborting at the first error, would be useful. It's not something for every request, so ideally it would be definable as a hint, e.g.:

update contact set
firstname='Steve'
where statecode=0
OPTION (USE HINT('CONTINUE_ON_ERROR'))

I've had a number of instances where mass record deletions (not using bulk deletion jobs) have failed due to other processes deleting records before Sql4Cds gets round to them.

It would be a bonus if, in the case of deletes, a failure caused by the record no longer being present could be ignored.

MarkMpn (Owner) commented Jun 1, 2023

Thanks for the suggestion. I understand the case where an UPDATE or DELETE fails because the record no longer exists, and that should probably be the default behaviour. I'm not sure about ignoring more general errors, though. What's your use case for this?

SiPurdy (Author) commented Jun 1, 2023

It's more to do with running an update on 100k records where 50 individual records, spread evenly through the set, fail. If there was a CONTINUE_ON_ERROR option, it's nicer to have it fail at the end (reporting all 50 exceptions) rather than having to re-run the query 50 times.

I've had this happen with plugins/real-time workflows deadlocking; it's nicer after the first run through to be able to knock off the remaining 50 records, rather than re-running with 98k records, then 96k records...

MarkMpn (Owner) commented Jun 1, 2023

OK, so you would still want the update to fail with the same error message, but only after it's updated as many records in the batch as possible?

SiPurdy (Author) commented Jun 1, 2023

Yes, if by "batch" we mean the whole set of 100k records, rather than the 10 we're executing in each sub-batch.
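The semantics agreed above (attempt every record, keep going past individual failures, then fail once at the end with all collected errors) can be sketched like this. This is purely illustrative Python, not Sql4Cds code; the function and error names are hypothetical:

```python
# Sketch of continue-on-error semantics: attempt the operation on every
# record, collect failures instead of aborting, and raise one aggregated
# error at the end. Names here are illustrative, not Sql4Cds internals.

def apply_with_continue_on_error(records, operation):
    """Run operation(record) for every record; raise at the end if any failed."""
    errors = []
    for record in records:
        try:
            operation(record)
        except Exception as exc:
            # Remember the failure and move on to the next record.
            errors.append((record, exc))
    if errors:
        details = "; ".join(f"{rec}: {exc}" for rec, exc in errors)
        raise RuntimeError(
            f"{len(errors)} of {len(records)} operations failed: {details}"
        )

# Example: 2 of 5 simulated updates fail, but all 5 are still attempted.
attempted = []

def fake_update(record):
    attempted.append(record)
    if record % 2 == 0:
        raise ValueError("deadlock")

try:
    apply_with_continue_on_error([1, 2, 3, 4, 5], fake_update)
except RuntimeError as err:
    print(err)          # reports both failures at once
print(len(attempted))   # all 5 records were attempted
```

This matches the behaviour discussed: the statement still fails, but only after every record in the whole set has been tried, so a re-run only has the genuinely failed records left to deal with.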

MarkMpn added a commit that referenced this issue Jun 1, 2023
@MarkMpn MarkMpn linked a pull request Jun 9, 2023 that will close this issue