Choose the DB data source into which you want to import this data.
Open a workflow that has this data source associated with it.
Navigate to the proper schema/table into which you want to import this data.
Use the 'Import a dataset' button to start the import wizard.
Use 'Choose File' to select the file you want to import. You can also choose whether to import all rows or a sample of the rows.
You should see that the preview does not parse correctly: by default, Alpine uses ',' as the delimiter and '"' as the quote character. To change this, click 'Show additional formatting options'.
Use the dropdown menu to select 'Other' as your delimiter type, then type your custom delimiter into the text field next to the dropdown.
Once you enter your custom delimiter, the preview should show your data in its correct format (a small parsing sketch illustrating delimiter and quote settings follows these steps).
When the import finishes, the file will appear in the data menu under the schema/table you selected. You are now free to drag that dataset onto the workspace and use it in your workflows.
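To see why the default settings scramble a file that uses a different delimiter, here is a minimal sketch using Python's csv module. The pipe-delimited sample content and the '|' delimiter are purely hypothetical examples; they are not part of the Alpine import wizard itself.

```python
import csv
import io

# Hypothetical pipe-delimited content, for illustration only.
raw = 'id|name|comment\n1|Alice|"likes pipes | and quotes"\n2|Bob|plain text\n'

# With the default ',' delimiter each line stays one big field,
# which is why the preview looks wrong before you change the options.
default_rows = list(csv.reader(io.StringIO(raw)))
custom_rows = list(csv.reader(io.StringIO(raw), delimiter='|', quotechar='"'))

print(default_rows[1])  # ['1|Alice|"likes pipes | and quotes"']
print(custom_rows[1])   # ['1', 'Alice', 'likes pipes | and quotes']
```

The quote character matters because it lets a field contain the delimiter itself, as in the third column above.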
Hadoop
To customize delimiters for files on HDFS, first upload the file to your cluster. Rather than setting the delimiters at import time, you configure them when you drag the dataset onto the canvas.
Navigate to the file you want to customize in the data explorer within the workflow editor.
Drag it to the workflow canvas.
Double-click the operator to bring up the operator properties. Select 'Hadoop File Structure'.
Here you can define the Escape character, the Quote character, and the Delimiter (a sketch of equivalent settings follows below). Confirm that the columns have the correct names and types, then select 'OK'.
The dataset can now be used in your workflows and connected to other operators.
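As a rough analogue of what the 'Hadoop File Structure' dialog configures, here is a hedged sketch that reads a delimited file from HDFS with Spark. Spark is not part of this workflow, and the HDFS path, delimiter, quote, and escape characters below are assumptions chosen for illustration only.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delimiter-example").getOrCreate()

# The three options mirror the dialog's Delimiter, Quote character,
# and Escape character settings; the path is a hypothetical example.
df = (spark.read
      .option("sep", "|")
      .option("quote", '"')
      .option("escape", "\\")
      .option("header", "true")
      .csv("hdfs:///data/example/customers.txt"))

df.printSchema()  # check column names and types, as the dialog asks you to
df.show(5)
```

The point of the sketch is simply that all three characters must match how the file was written, or the columns will not split correctly, which is the same check the operator properties dialog is asking you to make.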