I would like to know if it is possible to configure the Hive Docker image so that it can access CSV files stored in an S3 bucket.
In my case, I have the following example table:
CREATE EXTERNAL TABLE test(
  sequence string, Timestampval timestamp, chargeState string, level string,
  temperature string, x string, y string, z string, designation string,
  isCalibrated string, min string, max string, block string, margin string,
  state string, Child_Type string, Sub_Child_Type string, SerialNumber bigint,
  metertype string, currentfile string, data_date string, hour string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES
  ('separatorChar' = ';')
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  's3://useast1-dataload-prod/file_data/CurrentCondition/2024-04-30_02-04/';
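From what I understand, Hive outside of EMR reaches S3 through the Hadoop S3A connector, so I assume something like the following core-site.xml properties would be needed (a minimal sketch; the key values and endpoint below are placeholders, not working values):

<configuration>
  <!-- Map the s3a:// scheme to the Hadoop S3A filesystem. -->
  <property>
    <name>fs.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
  <!-- Placeholder credentials; an instance profile or credential provider could be used instead. -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
  <!-- Assumed region endpoint; only needed for non-default or region-specific endpoints. -->
  <property>
    <name>fs.s3a.endpoint</name>
    <value>s3.us-east-1.amazonaws.com</value>
  </property>
</configuration>

If that is the right direction, I assume the hadoop-aws and aws-java-sdk-bundle jars would also have to be on Hive's classpath inside the container, and that the table LOCATION would need the s3a:// scheme rather than s3:// (the bare s3:// scheme is normally only mapped on EMR). Please correct me if I am wrong.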
Any help is appreciated.