Snowflake JDBC connections fail with the error "Error retrieving metadata: Invalid state: Connection pool shut down." in Spotfire

Article ID: KB0137830

Products: Spotfire
Versions: 14.0.4

Description

Snowflake connections fail frequently with "connection pool shut down" messages. Users are interrupted by the following error while working with Snowflake requests:

Error message: Could not get contents of 'data_source name' from the server.
The data source reported a failure.
InformationModelException at Spotfire.Dxp.Data:
Error retrieving metadata: Invalid state: Connection pool shut down. A potential cause is closing of a connection when a query is still running. (HRESULT: 80131500)
Stack Trace:
 at Spotfire.Dxp.Data.InformationModel.InternalInformationModelManager.TryPromptForCredentials(InformationModelServiceException e, Guid dataSourceId)
 at Spotfire.Dxp.Data.InformationModel.InternalInformationModelManager.ListDataSourceChildren(Guid dataSourceId, String dataSourceName)
 at Spotfire.Dxp.Forms.Data.InformationDesigner.Cache.CachedDataSource.<GetChildren>d__34.MoveNext()
 at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
 at Spotfire.Dxp.Forms.Data.InformationDesigner.Cache.CachedItem.EnsureChildrenLoaded()
 at Spotfire.Dxp.Forms.Data.InformationDesigner.VirtualMultiSelectTreeView.TreeItem.Expand()
InformationModelServiceException at Spotfire.Dxp.Services:
Error retrieving metadata: Invalid state: Connection pool shut down. A potential cause is closing of a connection when a query is still running. (HRESULT: 80131509)

 

Resolution

In the server logs, the following entries associated with the issue were identified:

WARN 2025-06-23T23:33:38,562-0600 [] ws.server.STDOUTERR: Jun 23, 2025 11:33:38 PM net.snowflake.client.jdbc.SnowflakeChunkDownloader getNextChunkToConsume
WARN 2025-06-23T23:33:38,562-0600 [] ws.server.STDOUTERR: SEVERE: downloader encountered error: Max retry reached for the download of #chunk4 (Total chunks: 64) retry=10, error=java.lang.OutOfMemoryError: Java heap space
WARN 2025-06-23T23:33:38,562-0600 [] ws.server.STDOUTERR: 
WARN 2025-06-23T23:33:38,563-0600 [] ws.server.STDOUTERR: Jun 23, 2025 11:33:38 PM net.snowflake.client.jdbc.SnowflakeChunkDownloader logOutOfMemoryError
WARN 2025-06-23T23:33:38,563-0600 [] ws.server.STDOUTERR: SEVERE: Dump some crucial information below:
WARN 2025-06-23T23:33:38,563-0600 [] ws.server.STDOUTERR: Total milliseconds waiting for chunks: 28,905,
WARN 2025-06-23T23:33:38,563-0600 [] ws.server.STDOUTERR: Total memory used: 1,073,741,824, Max heap size: 1,073,741,824, total download time: 6,895 millisec,
WARN 2025-06-23T23:33:38,563-0600 [] ws.server.STDOUTERR: total parsing time: 2,172 milliseconds, total chunks: 64,
WARN 2025-06-23T23:33:38,563-0600 [] ws.server.STDOUTERR: currentMemoryUsage in Byte: -184,192,046, currentMemoryLimit in Bytes: 128,888,833 
WARN 2025-06-23T23:33:38,563-0600 [] ws.server.STDOUTERR: nextChunkToDownload: 6, nextChunkToConsume: 4
WARN 2025-06-23T23:33:38,563-0600 [] ws.server.STDOUTERR: Several suggestions to try to resolve the OOM issue:
WARN 2025-06-23T23:33:38,563-0600 [] ws.server.STDOUTERR: 1. increase the JVM heap size if you have more space; or 
WARN 2025-06-23T23:33:38,563-0600 [] ws.server.STDOUTERR: 2. use CLIENT_MEMORY_LIMIT to reduce the memory usage by the JDBC driver (https://docs.snowflake.net/manuals/sql-reference/parameters.html#client-memory-limit)
WARN 2025-06-23T23:33:38,563-0600 [] ws.server.STDOUTERR: 3. please make sure 2 * CLIENT_PREFETCH_THREADS * CLIENT_RESULT_CHUNK_SIZE < CLIENT_MEMORY_LIMIT. If not, please reduce CLIENT_PREFETCH_THREADS and CLIENT_RESULT_CHUNK_SIZE too.

Note: A Spotfire Server restart resolves the issue temporarily. However, after some time, once users open more connection requests to the Snowflake data sources, the issue starts happening again.

The warnings above indicate that the issue originates on the Snowflake side, specifically in how the JDBC driver manages memory across its connections and result chunks.
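The driver's dump above reports a 1 GiB heap ceiling ("Max heap size: 1,073,741,824" bytes). A quick way to confirm how much heap a given JVM actually has available is to query it at runtime; a minimal sketch (the class name is illustrative):

```java
// Quick diagnostic: print the heap ceiling this JVM reports, to compare
// against the "Max heap size" figure in the Snowflake driver's OOM dump.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap size: %,d bytes (%.1f MiB)%n",
                maxBytes, maxBytes / (1024.0 * 1024.0));
    }
}
```

If the reported ceiling matches the driver dump, the OOM is consistent with the heap simply being too small for the configured chunk downloading.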

This error can also be resolved by increasing the JVM heap size for the application, or by setting 'CLIENT_MEMORY_LIMIT' to a value optimized for the Snowflake connection.
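One way to apply such settings is to pass the Snowflake session parameters as JDBC connection properties, which the Snowflake JDBC driver accepts. A minimal sketch; the class name is illustrative, and the values (which happen to match the driver defaults, in MB) are examples to tune for your own workload, not recommendations:

```java
import java.util.Properties;

// Hedged sketch: the Snowflake JDBC driver accepts session parameters as
// connection properties. Class name and values are illustrative only.
public class SnowflakeTuning {

    public static Properties tunedProperties(String user, String password) {
        Properties props = new Properties();
        props.put("user", user);
        props.put("password", password);
        // All three values are in MB. They must satisfy:
        // 2 * CLIENT_PREFETCH_THREADS * CLIENT_RESULT_CHUNK_SIZE < CLIENT_MEMORY_LIMIT
        props.put("CLIENT_MEMORY_LIMIT", "1536");
        props.put("CLIENT_PREFETCH_THREADS", "4");
        props.put("CLIENT_RESULT_CHUNK_SIZE", "160");
        return props;
    }

    public static void main(String[] args) {
        Properties props = tunedProperties("demo_user", "demo_password");
        int threads = Integer.parseInt(props.getProperty("CLIENT_PREFETCH_THREADS"));
        int chunk   = Integer.parseInt(props.getProperty("CLIENT_RESULT_CHUNK_SIZE"));
        int limit   = Integer.parseInt(props.getProperty("CLIENT_MEMORY_LIMIT"));
        System.out.println(2 * threads * chunk < limit); // prints "true" (1280 < 1536)
        // A real connection would then be opened with, for example:
        // DriverManager.getConnection("jdbc:snowflake://<account>.snowflakecomputing.com/", props);
    }
}
```

The same parameters can alternatively be set at the account or user level in Snowflake itself, so that every connection from the Spotfire data source inherits them.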

Based on the warnings in the server log file, the issue was handled by ensuring the parameters satisfied the condition below:

2 * CLIENT_PREFETCH_THREADS * CLIENT_RESULT_CHUNK_SIZE < CLIENT_MEMORY_LIMIT

If the condition does not hold, reduce CLIENT_PREFETCH_THREADS and CLIENT_RESULT_CHUNK_SIZE as well.


Note: The Snowflake parameters CLIENT_MEMORY_LIMIT, CLIENT_PREFETCH_THREADS, and CLIENT_RESULT_CHUNK_SIZE were tuned within the documented value limits for each parameter, while ensuring they satisfied the condition "2 * CLIENT_PREFETCH_THREADS * CLIENT_RESULT_CHUNK_SIZE < CLIENT_MEMORY_LIMIT".
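The condition can be checked mechanically before applying candidate values. A small sketch (the helper is illustrative, not part of the Snowflake driver; all values in MB):

```java
// Illustrative helper: check candidate values, in MB, against the driver's
// documented constraint before applying them.
public class ClientMemoryCheck {

    public static boolean satisfiesConstraint(int prefetchThreads,
                                              int resultChunkSizeMb,
                                              int memoryLimitMb) {
        // Long arithmetic avoids overflow on large candidate values.
        return 2L * prefetchThreads * resultChunkSizeMb < memoryLimitMb;
    }

    public static void main(String[] args) {
        System.out.println(satisfiesConstraint(4, 160, 1536)); // true:  2*4*160 = 1280 < 1536
        System.out.println(satisfiesConstraint(8, 160, 1536)); // false: 2*8*160 = 2560 >= 1536
    }
}
```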

Additional information:
It was observed that, for a few of the Snowflake data sources, the maximum number of connections configured by the 'max-connections' parameter was set to a small value (5 or 10). It was suggested to increase this to a higher, suitable value so that Snowflake can handle more connection requests simultaneously.

Issue/Introduction

After upgrading to Spotfire LTS version 14, users frequently encountered the "connection pool shut down" error on Snowflake connections.
