Problem with Iceberg catalog and SingleStore deployment on OpenShift

Hi everyone,

I'm new to SingleStore, so I don't have much experience with it yet.
I’m using SingleStore v8.9 on OpenShift, and my S3-compatible storage is exposed through both NodePort and Route in OpenShift (in the default namespace, not in the SingleStore namespace).
Here’s what’s happening:

When I use the Route URL to connect to my S3-compatible object storage and run commands like CREATE DATABASE or CREATE PIPELINE AS LOAD DATA S3, everything works without any errors. For example:

CREATE DATABASE test_db
PARTITIONS 4 SUB_PARTITIONS 16
ON S3 'new_bucket/data_ss/test_db'
CONFIG '{"endpoint_url": "https://minio-ceph-rgw.lh.com", "verify_ssl": false}'
CREDENTIALS '{"aws_access_key_id": , "aws_secret_access_key": }';

CREATE PIPELINE temp_pipeline
AS LOAD DATA S3 's3://landingdev/source/parquet/temp/data/*.parquet'
CONFIG '{"endpoint_url": "https://minio-ceph-rgw.lh.com"}'
CREDENTIALS '{"aws_access_key_id": , "aws_secret_access_key": }'
BATCH_INTERVAL 2500 RESOURCE POOL pipeline_pool_1
ENABLE OUT_OF_ORDER OPTIMIZATION ENABLE OFFSETS METADATA GC SKIP DUPLICATE KEY ERRORS
INTO TABLE temp_table
FORMAT Parquet (id <- id, matthc <- matthc);

=> Both commands run successfully with no issues.

However, when I try to create a pipeline to ingest data from Iceberg using the same endpoint_url, I get the following error:
Caused by: java.net.UnknownHostException: rawdev.minio-ceph-rgw.lh.com

Here’s the command I used:
CREATE OR REPLACE PIPELINE addresses_pipe DEBUG
AS LOAD DATA S3 'temp_db.temp_table'
CONFIG '{
  "catalog_type": "HIVE",
  "catalog.uri": "thrift://hive-metastore-svc.cluster.local:9083",
  "catalog.hive.metastore.client.auth.mode": "PLAIN",
  "catalog.hive.metastore.client.plain.username": "dev",
  "catalog.hive.metastore.client.plain.password": ,
  "catalog.metastore.use.SSL": "true",
  "region": "us-east-1",
  "catalog.hive.metastore.truststore.type": "PKCS12",
  "catalog.hive.metastore.truststore.path": "/tmp/truststore.p12",
  "catalog.hive.metastore.truststore.password": "hive123",
  "endpoint_url": "https://minio-ceph-rgw.lh.com",
  "catalog_name": "rawdev"
}'
CREDENTIALS '{
  "aws_access_key_id": ,
  "aws_secret_access_key":
}'
SKIP DUPLICATE KEY ERRORS
INTO TABLE temp_table
FORMAT ICEBERG(...);

Interestingly, when I replace the endpoint_url with the IP:port of the NodePort service, the pipeline works as expected.
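For reference, the only thing I changed in the working version was the endpoint_url; the host and port below are placeholders for my actual NodePort address, and the rest of the pipeline is exactly as shown above:

-- Same pipeline as above; only endpoint_url differs.
-- <node-ip> and <node-port> stand in for my real NodePort values.
CONFIG '{
  ...
  "endpoint_url": "https://<node-ip>:<node-port>",
  "catalog_name": "rawdev"
}'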

My question is: Given that the Route URL works for S3 pipelines but not for Iceberg ingestion, is any additional configuration needed to support Iceberg with the Route URL in SingleStore?

I’d appreciate any advice or insights you can share.
Thanks a lot!

Hello, thank you for posting this detailed comparison.
We will fix this behavior in an upcoming release so that Iceberg pipelines handle the endpoint the same way S3 pipelines do.

In the meantime, can you try adding the "catalog.s3.path-style-access": "true" option to the CONFIG so that the hostname resolves?
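Roughly, the relevant part of your CONFIG would become the following (only the last key is new; everything else stays as you posted it, and this is a sketch rather than a verified fix):

CONFIG '{
  ...
  "endpoint_url": "https://minio-ceph-rgw.lh.com",
  "catalog_name": "rawdev",
  "catalog.s3.path-style-access": "true"
}'

With path-style access the bucket name stays in the URL path instead of being prepended to the hostname, so the client should stop trying to resolve rawdev.minio-ceph-rgw.lh.com through your Route.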