How can I clear the Repository database if the database has been corrupted due to an external (i.e. environmental) issue?

Article ID: KB0072313

Products: TIBCO Data Virtualization
Versions: v7.0.8 and higher

Description

On rare occasions, tables in the metadata repository can become corrupted due to an environmental issue. This happens when a disk problem or an OS-level issue in your environment prevents disk write operations from completing.

An example would be the hard disk running out of space while Repository tables are being updated. When the hard disk is full, write operations to temporary files on the disk fail. As a result, the metadata tables are not completely updated and the metadata becomes corrupted. This leads to odd behaviors such as:

(1) The TDV server hangs when it tries to retrieve the metadata or when it tries to use the metadata. Restarting the TDV services does not resolve the issue.

(2) The TDV server dies immediately after you attempt to start it.

(3) If this is a TDV cluster, you observe that:

(i)  None of the nodes are able to join the cluster.

and/or:

(ii) Multiple nodes exhibit similar odd behaviors, such as hanging or shutting themselves down when you restart them. This happens because the node with corrupt metadata has synced its metadata to the other nodes, propagating the same corruption to them.
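
A full disk is a common trigger for this kind of corruption. If you suspect that scenario, a quick check of the free space on the drive that hosts the TDV installation can confirm it. The commands below are generic Windows commands (not TDV utilities) and assume the D:\TIBCO_DV installation path used in the examples later in this article:
--------
cd /d D:\TIBCO_DV
dir | find "bytes free"
--------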

Issue/Introduction

The metadata in the Repository has become corrupted due to an external (i.e. environmental) issue.

Resolution

Since the metadata is unusable, the only resolution is to wipe out the metadata and replace it with a good copy of the metadata. 

VERY IMPORTANT - PLEASE READ BEFORE PROCEEDING
Wiping out the metadata means that you are deleting all of your resources/projects with the intention of restoring them from a Full Server Backup CAR file that you know contains good metadata. Very specifically, wiping out the metadata means that TDV will look and act like a brand new, first-time installation that has absolutely no knowledge of any work that you have done in the past. Therefore, if you do not have a Full Server Backup CAR file, YOU WILL PERMANENTLY LOSE ALL OF YOUR EXISTING WORK (i.e. ANY RESOURCES AND DATA SOURCES THAT YOU CREATED). There is NO WAY to recover the lost data. If you do not have a Full Server Backup CAR file and you are not in a position to permanently lose all of your projects, PLEASE CONTACT TECHNICAL SUPPORT TO DISCUSS BEFORE YOU PROCEED.

For purposes of illustration, let us say that we have taken a full server backup file "FSB_good.car" before the issue occurred and that we intend to use this file to restore the metadata to the last known good state.

If this is a standalone TDV server:

1. If the server is responsive enough to obtain a full server backup, take a backup CAR (Studio >> Administration >> Full Server Backup) as a precaution.
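
If Studio cannot connect but the server is still partially responsive, a full server backup can also be attempted from the command line using the backup_export utility under <TDV_INSTALL>\bin. The option names below are an assumption based on the TDV Administration Guide and may differ slightly between versions, so confirm the exact syntax with "backup_export.bat -help" before running it:
--------
REM Assumed syntax - verify with "backup_export.bat -help" for your TDV version
D:\TIBCO_DV\bin>backup_export.bat -pkgfile D:\backups\FSB_precaution.car -server localhost -port 9400 -user admin -password <admin_password> -domain composite
--------
Here D:\backups\FSB_precaution.car is just an illustrative output path, and 9400 is the default TDV server base port.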

2. Stop the TDV server. Do not stop the repository.

3.  List out all files under <TDV_INSTALL>\conf\repository\postgresql that begin with 'composite_clean'.

Example
--------
cd D:\TIBCO_DV\conf\repository\postgresql
dir composite_clean*
--------

If the list contains files named composite_clean-nnnnnnnnn.sql (where 'nnnnnnnnn' represents a version number), run repo_util.bat with the arguments:

    -dropSchema -schemaDropScript <highest versioned file>

For instance, if the list is:

  •  composite_clean-080200001.sql
  •  composite_clean-080401001.sql
  •  composite_clean.sql

then composite_clean-080401001.sql is the highest versioned file and the command is:
--------
D:\TIBCO_DV\bin>repo_util.bat -dropSchema -schemaDropScript D:\TIBCO_DV\conf\repository\postgresql\composite_clean-080401001.sql
--------

If the list does not contain any versioned files (i.e. only composite_clean.sql is present), the -schemaDropScript argument is not required:
--------
D:\TIBCO_DV\bin>repo_util.bat -dropSchema
--------
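
Because the version numbers are zero-padded, the file names sort naturally, so a plain name-ordered directory listing (a generic Windows command, not a TDV utility) puts the highest versioned file last:
--------
cd D:\TIBCO_DV\conf\repository\postgresql
dir /b /o:n composite_clean-*.sql
--------
The last file in this listing is the one to pass to -schemaDropScript.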

4. Start the TDV server.

5. Connect to the TDV server using Studio, and verify that it looks like a fresh installation.

6. Import the Full Server Backup CAR file "FSB_good.car" containing good metadata.
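
The restore can be performed from Studio or, as a sketch, from the command line with the backup_import utility under <TDV_INSTALL>\bin. As with backup_export above, the option names are an assumption and may vary by version; confirm with "backup_import.bat -help" before running it (D:\backups\ is just an illustrative location for the CAR file):
--------
REM Assumed syntax - verify with "backup_import.bat -help" for your TDV version
D:\TIBCO_DV\bin>backup_import.bat -pkgfile D:\backups\FSB_good.car -server localhost -port 9400 -user admin -password <admin_password> -domain composite
--------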

If this is a cluster:

1. Take all the nodes out of the cluster.

2. If any of the nodes is responsive enough to obtain a full server backup, take a backup CAR (Studio >> Administration >> Full Server Backup) as a precaution.

3. Stop the TDV server on each node. Do not stop the repository.

4. List out all files under <TDV_INSTALL>\conf\repository\postgresql that begin with 'composite_clean'.

Example
--------
cd D:\TIBCO_DV\conf\repository\postgresql
dir composite_clean*
--------

If the list contains files named composite_clean-nnnnnnnnn.sql (where 'nnnnnnnnn' represents a version number), run repo_util.bat with the arguments:

    -dropSchema -schemaDropScript <highest versioned file>

For instance, if the list is:

  •  composite_clean-080200001.sql
  •  composite_clean-080401001.sql
  •  composite_clean.sql

then composite_clean-080401001.sql is the highest versioned file and the command is:
--------
D:\TIBCO_DV\bin>repo_util.bat -dropSchema -schemaDropScript D:\TIBCO_DV\conf\repository\postgresql\composite_clean-080401001.sql
--------

If the list does not contain any versioned files (i.e. only composite_clean.sql is present), the -schemaDropScript argument is not required:
--------
D:\TIBCO_DV\bin>repo_util.bat -dropSchema
--------

5. Start the TDV server on each node. 

6. Connect to each server using Studio, and verify that it looks like a fresh installation.

7. Add just one node to the cluster to begin with. Once the node is in the cluster, import the full server backup "FSB_good.car" containing good metadata into it. Connect to it with Studio, and run a few basic checks to verify that the node is working as expected.

8. Add the other nodes back one at a time. After adding each node, wait a while for it to sync the metadata (i.e. it should copy it from the first node that you added). Then, use Studio to run a few basic checks to verify that the node is working as expected.