losing precision in converting TIMESTAMP and INT64 to Javascript Number #6
Thank you for opening this, @c0b! I think we're okay on integer & timestamp precision, since it is available through the raw API response. @callmehiphop what do you think?
From @callmehiphop on October 10, 2016 19:55: Maybe in the docs we could recommend/show examples for preserving precision using an int64 lib? We use node-int64 in Bigtable.
From @lukesneeringer on March 13, 2017 19:41: Based on the discussion in the Node.js standup today, we are deciding this is not release blocking. A user who needs the full precision can get it from the raw API response in the third callback argument.
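That workaround can be sketched with a mocked response (the `rows[].f[].v` layout follows the BigQuery REST tabledata wire format; the response object and value here are illustrative, not pulled from a live API call):

```javascript
// Mocked raw API response: the REST layer transmits INT64 cells as
// strings under rows[].f[].v, so the exact digits survive the wire.
const apiResponse = {
  rows: [{ f: [{ v: '9007199254740993' }] }], // 2^53 + 1
};

const raw = apiResponse.rows[0].f[0].v;
console.log(raw);                    // '9007199254740993' -- exact string
console.log(Number(raw));            // 9007199254740992 -- rounded by float64
console.log(BigInt(raw).toString()); // '9007199254740993' -- exact (Node >= 10.4)
```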
I don't believe there is a good solution for this without introducing complexity.

TIMESTAMP: The docs say that TIMESTAMPs are stored internally with microsecond precision; however, the raw API response seems to return the value in seconds.

INT64: The solution for this would be a bit more complex for the user. Currently, if you read one of these values, you get the native JS Number type. We could instead return a wrapper that keeps the exact string and only coerces to Number on demand:

```js
function Int(value) {
  this.value = value.toString();
}

Int.prototype.valueOf = function() {
  var number = Number(this.value);
  if (number > Number.MAX_SAFE_INTEGER) {
    throw new Error('Integer ' + this.value + ' is out of bounds.');
  }
  return number;
};
```

Reading rows would then look like:

```js
table.read(query, function(err, rows) {
  var row = rows[0];
  // row might look like:
  // [
  //   { name: 'SafeInt', value: { value: '2' } },                       // string
  //   { name: 'OutOfBoundsInt', value: { value: '--out-of-bounds-integer--' } } // string
  // ]

  var safeInt = row[0].value;         // an instance of Spanner.Int
  console.log(safeInt.value);
  // '2' (String)
  console.log(Number(safeInt));
  // 2 (Number, via valueOf)

  var outOfBoundsInt = row[1].value;  // also a Spanner.Int
  console.log(outOfBoundsInt.value);
  // '--out-of-bounds-integer--' (String)
  console.log(Number(outOfBoundsInt));
  // throws 'Integer --out-of-bounds-integer-- is out of bounds.'
});
```

@lukesneeringer how should we determine if the precision is worth the complexity?
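A self-contained version of the wrapper idea above can be run directly; names are illustrative, and `Number.isSafeInteger` is used here instead of a single upper-bound comparison so that negative overflow is caught as well:

```javascript
// Illustrative Int wrapper: keeps the exact string, coerces to Number on
// demand, and throws rather than silently rounding out-of-range values.
function Int(value) {
  this.value = value.toString();
}

Int.prototype.valueOf = function () {
  var number = Number(this.value);
  // isSafeInteger also rejects values below Number.MIN_SAFE_INTEGER.
  if (!Number.isSafeInteger(number)) {
    throw new Error('Integer ' + this.value + ' is out of bounds.');
  }
  return number;
};

var safe = new Int('2');
console.log(safe + 1);     // 3 -- the + operator calls valueOf()
console.log(safe.value);   // '2' -- the exact string is always available

var unsafe = new Int('9007199254740993'); // 2^53 + 1
console.log(unsafe.value); // still the exact string
try {
  var n = unsafe + 1;      // valueOf() throws instead of rounding
} catch (e) {
  console.log(e.message);  // Integer 9007199254740993 is out of bounds.
}
```

The design choice here is the same one the thread converges on: arithmetic either succeeds exactly or fails loudly; it never silently loses digits.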
@alexander-fenster any input?
A pretty neat feature that the
Sounds awesome to me, but please do run by @shollyman and @tswast first :)
@callmehiphop seems like a great use of
Type casting sounds like a good solution for this. It's actually similar to how we provide a
Timestamp precision loss is a known issue with the wire format of BQ tabledata responses. The recently-beta BQ Storage API addresses this and other issues. Would making it possible for tabledata consumers to request a different TIMESTAMP representation in tabledata responses address this (simplest is likely a string-encapsulated int64)? Or would a more complex format still be necessary?
@shollyman sounds like a great option to me.
Opened 128990665 internally as an FR for the BigQuery team.
@callmehiphop, is there anything active for the client side to do after BigQuery fixes this on the server? If not, let's close this issue.
Ping-o @callmehiphop
@stephenplusplus @callmehiphop, could this use a design similar to the work @AVaksman has been doing on datastore?
The main problem is the server doesn't even send the full precision over the wire with the
The internal bug (128990665) was just marked fixed not too long ago, so I'm assigning this to @steffnay to investigate what needs to be done here, if anything.
@steffnay Hey, trying to understand whether your PR #873 affects the We have tables with timestamps in microseconds, and when querying with I think these lines in Is there any way I can get a hold of my precious microseconds? We receive timestamped data in microsecond resolution from IoT units.
Hello everybody! After reading this issue, I tried to query a table with identifiers stored in
Thank you!
(I'm tempted to classify this as a "bug" but it is technically a feature request; asking @alvarowolfx to look into it.)
@sergioregueira, the behavior that you reported is indeed an issue in our library. Basically the
But I opened a PR with a fix for that to support the way that you reported too. See #1191 |
As for the microsecond precision support, another PR was opened to fix that (#1192). It uses @google-cloud/precise-date to parse timestamps arriving as a float64 into timestamp strings with microsecond resolution. Example output:
Parses timestamps from the backend (which arrive as a float64) using the [@google-cloud/precise-date](https://www.npmjs.com/package/@google-cloud/precise-date) lib to support microsecond resolution. Example output:

```js
const bigquery = new BigQuery();
const [rows] = await bigquery.query({
  query: 'SELECT TIMESTAMP("2014-09-27 12:30:00.123456Z")',
});
console.log(JSON.stringify(rows));
// [{"f0_":{"value":"2014-09-27T12:30:00.123456000Z"}}]
```

Fixes #6
From @c0b on October 9, 2016 5:05
googleapis/google-cloud-node#1648 (comment)
The BigQuery TIMESTAMP has up to microsecond precision, but when converted to a JavaScript Date it is truncated to millisecond precision.
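The truncation is easy to reproduce in Node (V8's date parser accepts the extra fractional digits, but a `Date` only stores milliseconds):

```javascript
// The microsecond digits (456) are dropped when stored in a Date.
const ts = new Date('2014-09-27T12:30:00.123456Z');
console.log(ts.toISOString()); // '2014-09-27T12:30:00.123Z'
```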
googleapis/google-cloud-node#1648 (comment)
A JavaScript Number is really only a FLOAT64, there is no real INT64, so during conversion some precision is lost:
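A two-line demonstration of that loss, using values chosen just past 2^53:

```javascript
// Past 2^53, adjacent integers collapse to the same float64 value.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 (2^53 - 1)
console.log(Number('9007199254740992') === Number('9007199254740993')); // true
```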
I don't really have a solution; please suggest one for when an application needs this much precision.
Copied from original issue: googleapis/google-cloud-node#1681