Description and Context

I just found an issue with `downloadHubDbTable` in the `packages/cms-lib/hubdb.js` file of the cms-lib package.

In the API v2 implementation, table rows for large tables were fetched using the `limit` and `offset` parameters. In the new v3 API you must pass `limit` and `after` instead, where `after` is a cursor pointing to the next page of records, not a numeric offset.

This causes a bug in `downloadHubDbTable`: if a table has 1001 records, the downloaded table contains 2000 records, because the first 1000 are downloaded twice.

I'm using the latest version of the cms-cli, 2.1.1-beta.10.
Steps to reproduce
Use the `downloadHubDbTable` method to download a table with more than 1000 rows. Once it has downloaded, open the file: the first 1000 rows are repeated once per page, i.e. as many times as the total row count divided by 1000, rounded up (so a table with 2020 rows yields 3000 rows).
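For context, a minimal sketch of how cursor-based pagination should work against the v3 rows endpoint. `fetchRowsPage` is a hypothetical stand-in for the actual HTTP call in cms-lib, and the response shape (`results`, `paging.next.after`) follows the v3 convention of returning an opaque cursor rather than a numeric offset:

```javascript
// Sketch: paginate with the v3 `after` cursor instead of a v2 numeric offset.
// `fetchRowsPage` is a placeholder for the real API client call; it receives
// { limit, after } and resolves to { results, paging } in the v3 shape.
async function fetchAllRows(fetchRowsPage, limit = 1000) {
  const rows = [];
  let after; // undefined on the first request
  for (;;) {
    const page = await fetchRowsPage({ limit, after });
    rows.push(...page.results);
    // v3 signals more pages via paging.next.after (an opaque cursor).
    // Passing a plain offset here is the bug described above: every page
    // request would return the first `limit` rows again.
    const next = page.paging && page.paging.next && page.paging.next.after;
    if (!next) break;
    after = next;
  }
  return rows;
}
```

The key difference from the v2 loop is that the cursor comes from the previous response, so each iteration advances to a genuinely new page instead of re-requesting the same window.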
Expected behavior
It should download all the rows of the table into the file, with no duplicates.
Who to Notify
@drewjenkins @anthmatic