Tag Archives: database

Babelfish

You know that tooth-fairy story? Or that other one you heard, about the thing that a friend of a friend of an acquaintance of your aunt's nephew's brother swears he once used and it's great?

So, nice to meet you, this is Babelfish. It's a bit ugly, it has its problems, it acts like a politician (sells more than it delivers), but it's here to help you.

But what is it actually for?

Picture this: by some twist of fate, people discovered that SQL Server licensing is expensive. It's no longer viable to keep the company's databases on SQL Enterprise, and Standard isn't an option (because, after all, it's Standard and nobody likes it... kidding). Whatever the reason may be: sometimes it's just a new manager who holds a grudge against SQL Server (can't use it but wants to play DBA), sometimes it's cost cutting. Pick a reason, it doesn't matter; it's just an excuse for the inevitable "why not?"

As DBAs we have to keep an open mind toward new headaches; after all, our life is way too quiet not to suffer through multi-platform or multi-database setups.

THEORETICALLY, this guy should help you get a smoother transition from SQL Server to PostgreSQL, acting as an intermediary that interprets T-SQL into PL/pgSQL at execution time.

To understand in detail how AWS's Babelfish works, it helps to dig into the technical and communication layers that provide the compatibility between SQL Server and PostgreSQL. Babelfish acts as a bridge between these two worlds, interpreting SQL Server calls into the PostgreSQL format and vice versa, allowing applications written for SQL Server to interact with Aurora PostgreSQL as if they were talking to a native SQL Server database. Let's break down these layers and the communication flow.

Understanding the Babelfish Layers

Babelfish implements two main layers of functionality:

  1. Communication Protocol Layer:
    • SQL Server uses the TDS (Tabular Data Stream) protocol for client-server communication. Babelfish can understand and interpret the TDS protocol, allowing SQL Server clients to connect to Aurora PostgreSQL as if it were a SQL Server instance. This communication layer translates TDS requests into SQL calls that PostgreSQL understands.
  2. SQL Translation Layer:
    • Once communication is established over TDS, the SQL statements, stored procedures, functions, and SQL Server-specific data types are translated into their PostgreSQL equivalents. This layer handles the conversion of query syntax, data types, and execution semantics, ensuring that the operations issued by applications run correctly on Aurora PostgreSQL.
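To make the translation layer concrete, here is a toy sketch in Python. It is purely illustrative and has nothing to do with Babelfish's real implementation; it just shows two well-known T-SQL vs. PostgreSQL dialect differences of the kind that layer has to rewrite:

```python
# Toy illustration of dialect rewriting: NOT how Babelfish does it,
# just the kind of difference its SQL translation layer handles.
import re

def tsql_to_postgres(query: str) -> str:
    """Rewrite a tiny subset of T-SQL into PostgreSQL syntax."""
    out = query
    # T-SQL's GETDATE() becomes PostgreSQL's now()
    out = re.sub(r"GETDATE\(\)", "now()", out, flags=re.IGNORECASE)
    # SELECT TOP n ...  becomes  SELECT ... LIMIT n
    m = re.match(r"(?is)^\s*SELECT\s+TOP\s+(\d+)\s+(.*)$", out)
    if m:
        n, rest = m.groups()
        out = f"SELECT {rest.rstrip().rstrip(';')} LIMIT {n};"
    return out

print(tsql_to_postgres("SELECT TOP 10 id, GETDATE() FROM orders;"))
# -> SELECT id, now() FROM orders LIMIT 10;
```

The real layer, of course, handles far more than this (data types, stored procedures, error semantics), which is exactly why the limitations section below matters.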

How Babelfish Works in Practice

The interaction between a SQL Server application and Aurora PostgreSQL through Babelfish involves several steps:

  1. Connection Establishment:
    • The SQL Server client starts a connection using the TDS protocol, pointed at the Babelfish endpoint on the Aurora PostgreSQL instance.
    • Babelfish accepts the TDS connection, establishing a communication channel between the client and Aurora PostgreSQL.
  2. Query Execution:
    • When the client sends a SQL Server query, Babelfish's communication protocol layer receives the request in TDS format.
    • The SQL translation layer then converts the query from SQL Server syntax into a PostgreSQL-compatible query, including translating data types and stored procedure syntax where needed.
  3. Query Processing:
    • The translated query is processed by Aurora PostgreSQL and the results are produced.
    • The results are then wrapped in a format the TDS protocol understands and sent back to the SQL Server client through Babelfish.
  4. Returning the Results:
    • The SQL Server client receives the results as if it were interacting with a native SQL Server database, completing the operation.
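The point of the whole round trip above is that, from the client's perspective, it is invisible: you keep writing plain T-SQL against the Babelfish endpoint. For example (table and column names here are made up):

```sql
-- Sent over TDS as-is; Babelfish rewrites constructs like TOP and
-- GETDATE() into PostgreSQL equivalents before Aurora executes them.
SELECT TOP 10 OrderID, CustomerName
FROM dbo.Orders
WHERE OrderDate >= DATEADD(DAY, -7, GETDATE())
ORDER BY OrderDate DESC;
```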

Implementation Considerations

  • Compatibility: while Babelfish offers a high degree of compatibility, there are limitations and differences to keep in mind. Not every SQL Server feature and behavior is supported 1:1 on PostgreSQL through Babelfish.
  • Performance Tuning: translating between SQL dialects can introduce performance overhead. It's important to monitor application performance and adjust the queries or the Babelfish configuration as needed.
  • Transaction Management: Babelfish supports transactions, but the differences in transaction handling between SQL Server and PostgreSQL may require special attention to guarantee data integrity.

Installing and Configuring Babelfish Locally

Prerequisites

Before we start, make sure you have the following prerequisites on your system:

  • A compatible operating system (Linux/Windows)
  • PostgreSQL installed and running
  • Administrative access to the system
  • Basic command-line and database administration knowledge

Step 1: Download the Babelfish Binaries

First, you need to get the Babelfish binaries for your platform. AWS makes them available through its official site or GitHub repository. Go to the Babelfish GitHub to find the latest version compatible with your system.

Step 2: Installing Babelfish

After the download, follow the steps for your platform to unpack and install the binaries. On Linux-based systems this usually means extracting the archive's contents and running an install script. For example:

tar -zxvf babelfish-version-linux.tar.gz
cd babelfish-version
./install.sh

Step 3: Configuring PostgreSQL for Babelfish

With Babelfish installed, the next step is configuring your PostgreSQL to work with it. This usually means editing PostgreSQL's postgresql.conf configuration file to include Babelfish-specific parameters, such as:

# Example parameters to add to postgresql.conf
listen_addresses = '*'
babelfishpg_tsql.database_name = 'NomeDoBancoSQLServer'

Don't forget to restart the PostgreSQL service after making these changes.

Step 4: Creating a SQL Server-Compatible Database

You'll need to create a database in PostgreSQL that Babelfish will use to emulate SQL Server behavior. This can be done with psql, PostgreSQL's interactive terminal, using commands like:

CREATE DATABASE NomeDoBancoSQLServer;
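On a local source build, creating the database by itself is usually not enough: the Babelfish extensions have to be created and initialized in it. A rough sketch follows; the user name is a placeholder and the exact steps vary by version, so check the Babelfish documentation for your build:

```sql
-- Run in psql as a superuser; babelfish_user is a placeholder login.
CREATE USER babelfish_user WITH CREATEDB CREATEROLE LOGIN PASSWORD 'secret';
\c NomeDoBancoSQLServer
CREATE EXTENSION IF NOT EXISTS "babelfishpg_tds" CASCADE; -- also pulls in babelfishpg_tsql
GRANT ALL ON SCHEMA sys TO babelfish_user;
-- Initialize the T-SQL system catalogs and bind them to this login:
CALL sys.initialize_babelfish('babelfish_user');
```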

Step 5: Testing the Connection

After the configuration, it's crucial to test the connection to make sure everything works as expected. Use your usual SQL Server tools to connect to the configured PostgreSQL database, using the IP address and port where PostgreSQL is listening.

If everything goes well, you'll be inside SSMS running T-SQL queries against a database on Postgres.
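A quick smoke test once connected (the exact version string varies by release):

```sql
-- Run from SSMS or sqlcmd against the Babelfish TDS port (1433 by default).
-- The result should mention Babelfish/PostgreSQL rather than a real
-- SQL Server build, confirming both layers are alive.
SELECT @@VERSION;
```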

Installing and Configuring Babelfish on AWS

Installing Babelfish on AWS means creating an Amazon Aurora PostgreSQL instance that includes the Babelfish compatibility layer. Here are the steps:

  1. Create an Amazon Aurora PostgreSQL instance:
    • Open the AWS console and select the Amazon RDS service.
    • Click "Create database" and select "Amazon Aurora".
    • Choose the PostgreSQL-compatible edition and a version that includes Babelfish.
    • Configure the instance specs as needed and proceed with the creation.
  2. Enable Babelfish on the instance:
    • After the instance is created, go to the parameters section and enable Babelfish by adding the appropriate setting. This may require creating a new parameter group if you want specific customizations.
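If you prefer the AWS CLI over the console, enabling Babelfish boils down to a cluster parameter group with rds.babelfish_status turned on; a sketch (the group name is a placeholder, and the parameter-group family must match your engine version):

```shell
aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name babelfish-pg \
    --db-parameter-group-family aurora-postgresql15 \
    --description "Cluster parameters with Babelfish enabled"

aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name babelfish-pg \
    --parameters "ParameterName=rds.babelfish_status,ParameterValue=on,ApplyMethod=pending-reboot"
```

Then attach this parameter group to the cluster when creating it.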

Configuring Babelfish

After installation, you need to configure Babelfish to accept SQL Server connections:

  1. Find the SQL Server Endpoint:
    • Locate the Babelfish endpoint in the connectivity section of the Aurora PostgreSQL instance.
    • This endpoint is what SQL Server applications use to connect to Babelfish.
  2. Tune the Parameters:
    • Open the instance's parameter group and adjust the settings to optimize performance and compatibility with your SQL Server applications.

Things to Watch Out For

It may look like the infallible solution to all your problems, but since there's no such thing as a free lunch, it has limitations:

  1. Officially supported Postgres version: for the local install, the most recent supported version at the time of writing is 14.6.
  2. Translation limits: even though it pulls off a few miracles on that grotesque code of yours, it has its limitations; read through the Babelfish site to see what it can actually do.
  3. Resource hungry: as you'd expect, since it performs simultaneous translation of commands right in the channel, the more computing power you have, the faster it translates. I see this as one of the main bottlenecks for adopting the technology, since hardware is money and this can get expensive.

GCP Cloud SQL – Instance Restore

Imagine the following scenario:

In the world of cloud databases, the need to restore a backup can come up for all sorts of reasons: human error, a system failure, or even spinning up a test environment. Whatever the reason, having a tool that simplifies the process is essential.

Although Google Cloud Platform (GCP) offers a robust and scalable solution with Cloud SQL, restoring a backup can be a multi-step process that demands some familiarity with the platform. That's why I built the GCP Restore Tool.

Why Use the GCP Restore Tool?

The tool was designed to make the restore process friendlier and less error-prone. With an intuitive graphical interface, you can pick projects, instances, and the backup you want to restore in just a few clicks.

The biggest advantage? You don't need to be a GCP expert or deal with complex commands. The tool handles everything for you.

How to Use the Tool

  • Install the GCP CLI.
  • Open a terminal and run: gcloud auth application-default login
  • You'll be redirected to the GCP console's authentication screen; authenticate there.
  • Back in the terminal, it will have printed some information, including the location of the "application_default_credentials.json" file.
  • That's the file you load through the "Load GCP Credentials" button.
  • In the panel on the left, select the source project and instance from the corresponding dropdown menus.
  • Click the "Load Backups" button and the tool will list every backup available for the chosen instance.
  • In the panel on the right, select the project and instance where you want the backup restored.
  • After confirming your selection, click the "Restore" button. The tool does all the heavy lifting for you, making sure the selected backup is restored onto the chosen target instance.
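Under the hood, a restore like this boils down to a call to the Cloud SQL Admin API's instances.restoreBackup method. The sketch below (not the tool's actual code; project and instance names are placeholders) shows the shape of that request:

```python
# Sketch of the Cloud SQL Admin API restoreBackup request. The helper
# only builds the request body; the actual API call needs credentials.
def restore_request_body(source_project: str, source_instance: str,
                         backup_run_id: int) -> dict:
    """Body for sqladmin instances.restoreBackup: which backup run to
    restore, and which project/instance it originally came from."""
    return {
        "restoreBackupContext": {
            "backupRunId": backup_run_id,
            "project": source_project,
            "instanceId": source_instance,
        }
    }

# With google-api-python-client installed and ADC loaded (the same
# application_default_credentials.json mentioned above), the call would
# look roughly like this:
#
#   from googleapiclient.discovery import build
#   svc = build("sqladmin", "v1")
#   svc.instances().restoreBackup(
#       project="target-project", instance="target-instance",
#       body=restore_request_body("source-project", "source-instance", 1234),
#   ).execute()
```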

This process only works by overwriting a pre-existing instance, so you either need to create an instance in the target project or understand that this operation will overwrite the target instance. Be very sure you selected the right one.

If everything goes well, you'll get a message saying the request was sent to GCP. The whole process runs in the background on GCP's side, so follow it in the portal to track the status of the operation.

In the end, the restored instance will have the name, IP, etc. of the instance it overwrote, but all the users, passwords, databases, one-off tweaks, etc. of the original source instance.

You can download it and browse the source code on GitHub.

ATTENTION !!!

The tool is still being tested, so run your own tests before executing the operation in production.

Run it at your own risk.

DACPAC Generator, C# Version

It's been a while since the PowerShell version of the DacPac generator... given some bugs and limitations people could run into using it, I decided to build a more adaptable version of the app.

This version was built in Visual Studio 2013 using C#; the source will be available on GitHub (as soon as I manage to upload it) and here, along with the compiled version, an installer, and the source.

This version uses .NET Framework 4 and is as lean as possible (given my level of programming knowledge).

The installer creates a shortcut on the desktop and an entry in "Add/Remove Programs".

  • Known bugs:
  1. It doesn't accept a directory with a space in the name; from what I could gather from the exception, the fault isn't in how I call the app for the extraction but in the extractor itself.
  2. If you double-click the text box holding the SqlPackage location, it shows a default directory where the executable should be; if you don't select the file and click Cancel or OK, the text box gets filled with SqlPackage.exe without the full path, which causes an error when running the extraction.

If anyone is interested in using it, I'd love feedback on the app.

UPDATE !!!

I've added the project to GitHub; anyone who wants to help or change the project can go to https://github.com/bigleka/SQL-Server-Gerador-de-DACPAC


Recovering DML Through the Transaction Log

Everyone knows that when an operation runs against the database, a lot of it goes through the Transaction Log: anywhere from minimally logged to almost absolutely everything. And everyone knows it's possible to restore everything that was done from the backup files.

The script below is the kind of script well worth keeping, with all due care, on that USB stick full of other scripts... It reads the transaction log and generates the operations needed to revert a modification to the table being searched.

OK, I know there are applications that do this, but they're paid, and depending on the scenario you can't install that kind of application on a server.

There's also the option of restoring the database and replaying the transaction log up to a moment just before that operation, but your database may be a few TB, or you simply can't stop the whole company over a few records and compromise every other operation in other parts of the systems.

First things first: want to understand how the script works? Read this post by Paul Randal (b | t) and you'll understand how fn_dblog and fn_dump_dblog work.
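Before diving into the procedure, it's worth a quick peek at the raw material it works with. The undocumented fn_dblog function reads the active log of the current database; these are some of the columns the procedure below leans on:

```sql
-- NULL, NULL = no start/end LSN filter (read the whole active log).
SELECT [Current LSN], [Operation], [Context],
       [Transaction ID], [Transaction Name], [AllocUnitName], [Begin Time]
FROM sys.fn_dblog(NULL, NULL)
WHERE [Transaction Name] IN ('INSERT', 'UPDATE', 'DELETE');
```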

Now, the script…

Originally, the script is located here.

It carries a permissive license allowing free modification/publication/etc.

/******
(c) Walk the Way, 2014 (see accompanying license)
To use this:
1) SQL Server Management Studio must be "Run as administrator" (or using an admin account)
2) The SQL login User must have SQL sysadmin Server Role
For running DBCC PAGE: it appears that the SQL User must have SQL sysadmin Server Role
For sys.fn_dblog: the SQL User must have SELECT permission for "master" in System Databases
For SQL 2008 uncomment line 1150 (DATE not supported in SQL 2005)

The WTW Open License

Copyright (c) 2014 Walk the Way; all rights reserved, except as follows:

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software as follows: the unrestricted right to use, copy, modify, merge,
publish, distribute, sublicense, and to permit persons to whom the Software
is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

how to use:

Recover_Modified_Data_Proc 'Nome_Da_Base', 'dbo.Tabela'

Recover_Modified_Data_Proc 'Nome_Da_Base', 'dbo.Tabela', '2014/07/16', '2014/07/18' --with a date range

******/
IF EXISTS (SELECT *
FROM sys.objects
WHERE object_id = OBJECT_ID(N'[dbo].[Recover_Modified_Data_Proc]')
AND TYPE IN ( N'FN', N'IF', N'TF', N'FS', N'FT', N'P' ))
SET NOEXEC ON

--> Based on the work of Muhammad Imran generously published at:
--> http://raresql.com/2012/02/01/how-to-recover-modified-records-from-sql-server-part-1/
-->
--> Uncomment/Comment block below for testing/development
/****************************************
SET NOEXEC OFF
IF OBJECT_ID (N'[@ResultTable]', N'U') IS NOT NULL DROP TABLE [@ResultTable]
IF OBJECT_ID('tempdb..#T1') IS NOT NULL DROP TABLE #T1
IF OBJECT_ID('tempdb..#temp_Data') IS NOT NULL DROP TABLE #temp_Data
IF OBJECT_ID('tempdb..#CTE') IS NOT NULL DROP TABLE #CTE
IF OBJECT_ID('tempdb..#TransIdUpdateList') IS NOT NULL DROP TABLE #TransIdUpdateList
IF OBJECT_ID('tempdb..#TransIdDeleteList') IS NOT NULL DROP TABLE #TransIdDeleteList
IF OBJECT_ID('tempdb..#TransIdAllList') IS NOT NULL DROP TABLE #TransIdAllList
IF OBJECT_ID('tempdb..#AllocIdList') IS NOT NULL DROP TABLE #AllocIdList
IF OBJECT_ID('tempdb..#SysFnDblog') IS NOT NULL DROP TABLE #SysFnDblog
IF OBJECT_ID('tempdb..#DelRows') IS NOT NULL DROP TABLE #DelRows
DECLARE @Database_Name NVARCHAR(MAX)
DECLARE @SchemaName_n_TableName NVARCHAR(MAX)
DECLARE @Date_From DATETIME
DECLARE @Date_To DATETIME
SET @Database_Name = 'Test'
SET @SchemaName_n_TableName = 'Student'
SET @Date_From = '2014/05/22' --'1900/01/01'
SET @Date_To = '2099/01/01' --'9999/12/31'
print @Date_From
print @Date_To
****************************************/
--> Uncomment/Comment block below to create Stored Procedure
/****************************************/
IF OBJECT_ID('Recover_Modified_Data_Proc', 'P') IS NOT NULL
DROP PROC Recover_Modified_Data_Proc
GO
Create PROCEDURE Recover_Modified_Data_Proc
@Database_Name NVARCHAR(MAX),
@SchemaName_n_TableName NVARCHAR(MAX),
@Date_From DATETIME = '1900/01/01', --> Null equivalent
@Date_To DATETIME = '9999/12/31' --> largest date
AS

--> For SQL 2005; PIVOT requires compatibility_level = 90
IF ((SELECT [compatibility_level] from sys.databases where [name] = @Database_Name) < 90) BEGIN
RETURN -1
END
/****************************************/

DECLARE @Debug INT
SET @Debug = 0
DECLARE @parms NVARCHAR(1024)
DECLARE @Fileid INT
DECLARE @hex_pageid AS VARCHAR(MAX)
DECLARE @Pageid INT
DECLARE @Slotid INT
DECLARE @TransactionName VARCHAR(MAX)
DECLARE @CurrentLSN VARCHAR(MAX)
DECLARE @BeginTime DATETIME
DECLARE @RowLogContents0 VARBINARY(8000)
DECLARE @RowLogContents1 VARBINARY(8000)
DECLARE @RowLogContents2 VARBINARY(8000)
DECLARE @RowLogContents3 VARBINARY(8000)
DECLARE @RowLogContents3_Var VARCHAR(MAX)

DECLARE @RowLogContents4 VARBINARY(8000)
DECLARE @LogRecord VARBINARY(8000)
DECLARE @LogRecord_Var VARCHAR(MAX)

DECLARE @ConsolidatedPageID VARCHAR(MAX)
DECLARE @AllocUnitID BIGINT
DECLARE @TransactionID VARCHAR(MAX)
DECLARE @Operation VARCHAR(MAX)
DECLARE @DatabaseCollation VARCHAR(MAX)

--FOR @Operation ='LOP_MODIFY_COLUMNS'
DECLARE @RowLogData_Var VARCHAR(MAX)
DECLARE @RowLogData_Hex VARBINARY(MAX)

DECLARE @TotalFixedLengthData INT
DECLARE @FixedLength_Offset INT
DECLARE @VariableLength_Offset INT
DECLARE @VariableLength_Offset_Start INT
DECLARE @VariableLengthIncrease INT
DECLARE @FixedLengthIncrease INT
DECLARE @OldFixedLengthStartPosition INT
DECLARE @FixedLength_Loc INT
DECLARE @VariableLength_Loc INT
DECLARE @FixedOldValues VARBINARY(MAX)
DECLARE @FixedNewValues VARBINARY(MAX)
DECLARE @VariableOldValues VARBINARY(MAX)
DECLARE @VariableNewValues VARBINARY(MAX)

/* Pick The actual data
*/
DECLARE @temppagedata TABLE(
[RecordID] INT IDENTITY(1,1)
,[ParentObject] sysname
,[Object] sysname
,[Field] sysname
,[Value] sysname
)

DECLARE @pagedata TABLE(
[Page Index] int
,[DBCCid] int
,[ParentObject] sysname
,[Object] sysname
,[Field] sysname
,[Value] sysname
)

DECLARE @pageindex TABLE(
[PageIndexID] INT IDENTITY(1,1)
,[ConsolidatedPageID] VARCHAR(MAX)
,[Fileid] VARCHAR(MAX)
,[hex_pageid] VARCHAR(MAX)
,[Pageid] VARCHAR(MAX)
)

IF (@Debug > 2) BEGIN
--select * from sys.fn_dblog(NULL, NULL)
--> http://db4breakfast.blogspot.com/2013/03/fndblog-function-documentation.html
--> IMPORTANT: [databasename] > Properties > Options > Recovery Model: Full
--> http://msdn.microsoft.com/en-us/library/ms189275.aspx --> Recovery Models (SQL Server)
--> http://technet.microsoft.com/en-us/library/ms189272.aspx
--> ALTER DATABASE model SET RECOVERY FULL
--> Immediately after switching from the simple recovery model to the full recovery model or bulk-logged recovery model, take a full or differential database backup to start the log chain.
--> The switch to the full or bulk-logged recovery model takes effect only after the first data backup.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'model' ;

select 'Log file usage'
SELECT name AS [File Name],
physical_name AS [Physical Name],
size/128.0 AS [Total Size in MB],
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS [Available Space In MB],
[growth], [file_id]
,B.cntr_value as "LogFullPct"
FROM sys.database_files A
INNER JOIN sys.dm_os_performance_counters B ON RTRIM(B.instance_name)+'_log' = A.name
WHERE type_desc = 'LOG'
AND B.counter_name LIKE 'Percent Log Used%'
AND B.instance_name not in ('_Total', 'mssqlsystemresource')
--SELECT instance_name as [Database],cntr_value as "LogFullPct"
-- FROM sys.dm_os_performance_counters A
-- WHERE counter_name LIKE 'Percent Log Used%'
-- AND instance_name not in ('_Total', 'mssqlsystemresource')
-- AND cntr_value > 0;
END
IF (@Debug > 3) BEGIN
select 'sys.fn_dblog(NULL, NULL)'
--> LSN = Log Sequence Number
select [Current LSN],[Operation],[Context],[TRANSACTION ID],[Transaction Name],[Previous LSN],[AllocUnitId],[AllocUnitName],[Begin Time]
,[RowLog Contents 0],[RowLog Contents 1],[RowLog Contents 2],[RowLog Contents 3],[RowLog Contents 4],SUSER_SNAME([Transaction SID]) AS [Transaction Account]
from sys.fn_dblog(NULL, NULL)
where [TRANSACTION ID] IN (SELECT DISTINCT [TRANSACTION ID] FROM sys.fn_dblog(NULL, NULL) WHERE [Transaction Name] IN ('UPDATE','DELETE','INSERT'))
order by [Current LSN]
--select * from sys.allocation_units
END

--> Get [TRANSACTION ID]s & [Transaction Name]s for INSERTs, DELETEs, UPDATEs in the time range
-- This includes: [Operation]s of ('LOP_MODIFY_ROW','LOP_MODIFY_COLUMNS','LOP_INSERT_ROWS','LOP_DELETE_ROWS')
/*Use this subquery to filter the date*/
SELECT DISTINCT [TRANSACTION ID],[Transaction Name],SUSER_SNAME([Transaction SID]) AS [Transaction Account]
INTO #TransIdAllList
FROM sys.fn_dblog(NULL, NULL)
WHERE [Context] IN ('LCX_NULL') AND [Operation] IN ('LOP_BEGIN_XACT')
AND [Transaction Name] IN ('UPDATE','INSERT','DELETE')
AND CONVERT(NVARCHAR(11),[Begin Time]) BETWEEN @Date_From AND @Date_To

--> Now remove all ('UPDATE','INSERT','DELETE')_transactions that are NOT ('LOP_MODIFY_ROW','LOP_MODIFY_COLUMNS','LOP_INSERT_ROWS','LOP_DELETE_ROWS')_transactions
DELETE FROM #TransIdAllList WHERE [TRANSACTION ID] IN
(SELECT DISTINCT [TRANSACTION ID]
--INTO #TransIdAllList
FROM sys.fn_dblog(NULL, NULL)
WHERE [TRANSACTION ID] IN (SELECT [TRANSACTION ID] FROM #TransIdAllList)
AND [Operation] NOT IN (NULL,'LOP_MODIFY_ROW','LOP_MODIFY_COLUMNS','LOP_INSERT_ROWS','LOP_DELETE_ROWS'))

--> Get [TRANSACTION ID]s for UPDATEs in time range; both 'LOP_MODIFY_ROW' & 'LOP_MODIFY_COLUMNS'
SELECT [TRANSACTION ID],[Transaction Name]
INTO #TransIdUpdateList
FROM #TransIdAllList
WHERE [Transaction Name] IN ('UPDATE')

--> Get [TRANSACTION ID]s for DELETEs in time range
-- WARNING: the @Date_To needs to include the DELETE for deleted UPDATEs
SELECT DISTINCT [TRANSACTION ID]
INTO #TransIdDeleteList
FROM #TransIdAllList
WHERE [Transaction Name] IN ('DELETE')

--> Get appropriate [Allocation_unit_id]s
SELECT [Allocation_unit_id]
INTO #AllocIdList
FROM sys.allocation_units allocunits
INNER JOIN sys.partitions partitions ON
(allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
WHERE object_id=object_ID('' + @SchemaName_n_TableName + '')

--> Get the sys.fn_dblog record info for all transactions of concern
SELECT [Current LSN],[Operation],[Context],[PAGE ID],[AllocUnitId],[Slot ID],[TRANSACTION ID],[Transaction Name],[Begin Time],[RowLog Contents 0],[RowLog Contents 1],[RowLog Contents 2],[RowLog Contents 3],[RowLog Contents 4],[Log Record Length],[Log Record Fixed Length],[Log Record]
INTO #SysFnDblog
FROM sys.fn_dblog(NULL, NULL)
WHERE [TRANSACTION ID] IN (SELECT [TRANSACTION ID] FROM #TransIdAllList)
--select * from #SysFnDblog

--> Get record info for deleted rows for UPDATEs that may have been deleted
SELECT [PAGE ID],[AllocUnitId],[Slot ID],[TRANSACTION ID],MAX([RowLog Contents 0]) AS [RowLog Contents 0] --> removes NULLs
INTO #DelRows
FROM #SysFnDblog
WHERE [Operation] IN ('LOP_DELETE_ROWS')
AND [AllocUnitId] IN (SELECT * FROM #AllocIdList)
AND [TRANSACTION ID] IN (SELECT * FROM #TransIdDeleteList)
GROUP BY [PAGE ID],[AllocUnitId],[Slot ID],[TRANSACTION ID]

IF (@Debug > 1) BEGIN
SELECT '#TransIdAllList'
SELECT * FROM #TransIdAllList
SELECT '#TransIdUpdateList'
SELECT * FROM #TransIdUpdateList
SELECT '#TransIdDeleteList'
SELECT * FROM #TransIdDeleteList
SELECT '#AllocIdList'
SELECT * FROM #AllocIdList
SELECT '#SysFnDblog '
SELECT * FROM #SysFnDblog
print 'select * from #DelRows'
SELECT '#DelRows'
SELECT * FROM #DelRows
END

DECLARE @ModifiedRawData TABLE(
[ID] INT IDENTITY(1,1)
,[PAGE ID] VARCHAR(MAX)
,[Slot ID] INT
,[AllocUnitId] BIGINT
,[Datum] VARCHAR(20)
,[TRANSACTION ID] VARCHAR(MAX)
,[Transaction Name] VARCHAR(MAX)
,[Current LSN] VARCHAR(MAX)
,[EntryID] INT DEFAULT -1
,[PrevEntry] INT DEFAULT -1
,[NextEntry] INT DEFAULT -1
,[ChainLevel] INT DEFAULT -1
,[Process] INT DEFAULT -1
,[Begin Time] DATETIME
,[RowLog Contents 0_var] VARCHAR(MAX)
,[RowLog Contents 0] VARBINARY(8000)
,[Slot Info] VARCHAR(MAX)
)

--> Get UPDATEd records information
INSERT INTO @ModifiedRawData ([PAGE ID],[Slot ID],[AllocUnitId],[Datum],[TRANSACTION ID],[Transaction Name],[Current LSN],[Begin Time],[RowLog Contents 0_var])
SELECT A.[PAGE ID],A.[Slot ID],A.[AllocUnitId],'Reference' AS [Datum],A.[TRANSACTION ID]
,(SELECT [Transaction Name] FROM #TransIdUpdateList WHERE [TRANSACTION ID] = A.[TRANSACTION ID]) AS [Transaction Name] --> always = 'UPDATE' here
,A.[Current LSN]
,(SELECT MAX([Begin Time]) FROM #SysFnDblog WHERE [TRANSACTION ID] = A.[TRANSACTION ID]) AS [Begin Time] --> eliminates NULL values
,NULL AS [Value] --> [RowLog Contents 0_var]
FROM #SysFnDblog A
WHERE A.[AllocUnitId] IN (SELECT * FROM #AllocIdList)
AND [Operation] IN ('LOP_MODIFY_ROW','LOP_MODIFY_COLUMNS')
AND [Context] IN ('LCX_HEAP','LCX_CLUSTERED')
AND [TRANSACTION ID] IN (SELECT [TRANSACTION ID] FROM #TransIdUpdateList)
GROUP BY A.[PAGE ID],A.[Slot ID],A.[AllocUnitId],[Begin Time],[Transaction ID],[Transaction Name],A.[Current LSN]
ORDER BY [Slot ID],[Current LSN] --> [Slot ID] is the record entry number
--SELECT * FROM @ModifiedRawData

--> Get the indexing information for DBCC PAGE
INSERT INTO @pageindex ([ConsolidatedPageID],[Fileid],[hex_pageid],[Pageid])
SELECT DISTINCT [PAGE ID],NULL,NULL,NULL FROM @ModifiedRawData
UPDATE @pageindex
SET [Fileid] = SUBSTRING([ConsolidatedPageID],0,CHARINDEX(':',[ConsolidatedPageID])) -- Separate the File ID from the Page ID
, [hex_pageid] = '0x'+ SUBSTRING([ConsolidatedPageID],CHARINDEX(':',[ConsolidatedPageID])+1,LEN([ConsolidatedPageID])) -- Separate the page ID
UPDATE @pageindex
SET [Pageid] = CONVERT(INT,CAST('' AS XML).value('xs:hexBinary(substring(sql:column("[hex_pageid]"),sql:column("t.pos")) )', 'varbinary(max)')) -- Convert Page ID from hex to integer
FROM (SELECT CASE SUBSTRING([hex_pageid], 1, 2) WHEN '0x' THEN 3 ELSE 0 END FROM @pageindex A WHERE A.[PageIndexID] = [PageIndexID]) AS t(pos)
--SELECT * FROM @pageindex

/*******************CURSOR START*********************/
--> Get DBCC PAGE data for each [ConsolidatedPageID]
DECLARE Page_Data_Cursor CURSOR FOR
SELECT [Fileid],[Pageid]
FROM @pageindex
ORDER BY [Fileid],[Pageid]
/****************************************/
OPEN Page_Data_Cursor

FETCH NEXT FROM Page_Data_Cursor INTO @Fileid,@Pageid
WHILE @@FETCH_STATUS = 0
BEGIN
DELETE @temppagedata
--> the next line requires cursor use due to argument requirements
INSERT INTO @temppagedata EXEC( 'DBCC PAGE(' + @DataBase_Name + ', ' + @Fileid + ', ' + @Pageid + ', 3) with tableresults,no_infomsgs;')
--> Concatenate all DBCC PAGE pages into one file by [ConsolidatedPageID] using index [PageIndexID]
INSERT INTO @pagedata ([Page Index],[DBCCid],[ParentObject],[Object],[Field],[Value])
SELECT B.[PageIndexID],A.[RecordID],[ParentObject],[Object],[Field],[Value]
FROM @temppagedata A
JOIN @pageindex B ON B.[Pageid] = @Pageid AND B.[Fileid] = @Fileid
FETCH NEXT FROM Page_Data_Cursor INTO @Fileid,@Pageid
END

CLOSE Page_Data_Cursor
DEALLOCATE Page_Data_Cursor
/*******************CURSOR END*********************/

--> Add the DBCC PAGE information OR DELETEd info to the UPDATEd records information
-- DBCC PAGE indexing is independent of [AllocUnitId]
UPDATE @ModifiedRawData
SET
[RowLog Contents 0_var] =
COALESCE( --> look for DBCC PAGE record set
UPPER((SELECT REPLACE(STUFF(
(SELECT REPLACE(SUBSTRING([VALUE],CHARINDEX(':',[Value])+1,48),'†','')
FROM @pagedata C
WHERE B.[Page Index] = C.[Page Index] --> EQUIVALENT to [ConsolidatedPageID]
AND A.[Slot ID] = LTRIM(RTRIM(SUBSTRING(C.[ParentObject],5,3)))
AND [Object] LIKE '%Memory Dump%'
GROUP BY [Value] FOR XML PATH('')
),1,1,'') ,' ','')
)) --> next, look for deleted record
,UPPER(CAST('' AS XML).value('xs:hexBinary(sql:column("C.[RowLog Contents 0]") )', 'varchar(max)'))
,'ERROR: '+B.[ParentObject] --> Missing [Slot ID] in "DBCC PAGE", and no corresponding DELETE found
)
,[Slot Info] = COALESCE(B.[ParentObject],'Slot '+CONVERT(VARCHAR,C.[Slot ID])+' deleted')
FROM @ModifiedRawData A
INNER JOIN @pageindex D ON D.[ConsolidatedPageID] = A.[PAGE ID]
LEFT JOIN @pagedata B ON
B.[Page Index] = D.[PageIndexID]
AND A.[Slot ID] = LTRIM(RTRIM(SUBSTRING(B.[ParentObject],5,3)))
AND B.[Object] LIKE '%Memory Dump%'
LEFT JOIN #DelRows C ON
C.[PAGE ID]=[A].[PAGE ID]
AND C.[AllocUnitId] = A.[AllocUnitId]
AND C.[Slot ID] = A.[Slot ID]

--> Convert the old data which is in string format to hex format (required).
UPDATE @ModifiedRawData SET [RowLog Contents 0] = CAST('' AS XML).value('xs:hexBinary(substring(sql:column("[RowLog Contents 0_var]"), 0) )', 'varbinary(max)')
FROM @ModifiedRawData

IF (@Debug > 0) BEGIN
select '@pageindex'
select * from @pageindex
select '@DBCC PAGE'
select * from @temppagedata
select '@pagedata'
select * from @pagedata --ORDER BY [Page ID],[AllocUnitId],[Current LSN],[DBCCid]
print 'select * from @ModifiedRawData1'
select '@ModifiedRawData1'
select * from @ModifiedRawData
END

DECLARE @PreliminaryRawData TABLE(
[ID] INT IDENTITY(1,1)
,[PAGE ID] VARCHAR(MAX)
,[Slot ID] INT
,[AllocUnitId] BIGINT
,[Datum] VARCHAR(20)
,[TRANSACTION ID] VARCHAR(MAX)
,[Transaction Name] VARCHAR(MAX)
,[Current LSN] VARCHAR(MAX)
,[Begin Time] DATETIME
,[RowLog Contents 0] VARBINARY(8000)
,[RowLog Contents 1] VARBINARY(8000)
,[RowLog Contents 2] VARBINARY(8000)
,[RowLog Contents 3] VARBINARY(8000)
,[RowLog Contents 4] VARBINARY(8000)
,[Log Record] VARBINARY(8000)
,[Operation] VARCHAR(MAX)
,[RowLog Contents 0_var] VARCHAR(MAX)
,[RowLog Contents 1_var] VARCHAR(MAX)
)

---Now we have the modified data plus its slot ID, page ID and allocunit as well.
--After that we need to get the old values before modification; these data are in chunks.
INSERT INTO @PreliminaryRawData
SELECT [PAGE ID],[Slot ID],[AllocUnitId],'Restoration' AS [Datum],[TRANSACTION ID]
,(SELECT [Transaction Name] FROM #TransIdAllList WHERE [TRANSACTION ID] = A.[TRANSACTION ID]) AS [Transaction Name]
,[Current LSN]
,(SELECT MAX([Begin Time]) FROM #SysFnDblog WHERE [TRANSACTION ID] = A.[TRANSACTION ID]) AS [Begin Time] --> eliminates NULL values
,[RowLog Contents 0],[RowLog Contents 1],[RowLog Contents 2],[RowLog Contents 3],[RowLog Contents 4]
,SUBSTRING ([Log Record],[Log Record Fixed Length],([Log Record Length]+1)-([Log Record Fixed Length])) as [Log Record]
,[Operation]
,UPPER(CAST('' AS XML).value('xs:hexBinary(sql:column("[RowLog Contents 0]") )', 'varchar(max)')) AS [RowLog Contents 0_var] --> New, added for reference; integrated below
,UPPER(CAST('' AS XML).value('xs:hexBinary(sql:column("[RowLog Contents 1]") )', 'varchar(max)')) AS [RowLog Contents 1_var] --> New, added for reference; integrated below
FROM #SysFnDblog A
WHERE [AllocUnitId] IN (SELECT * FROM #AllocIdList)
AND [Operation] IN ('LOP_MODIFY_ROW','LOP_MODIFY_COLUMNS','LOP_INSERT_ROWS','LOP_DELETE_ROWS')
AND [Context] IN ('LCX_HEAP','LCX_CLUSTERED')
--AND [TRANSACTION ID] IN (SELECT [TRANSACTION ID] FROM #TransIdAllList) --> same filter for #SysFnDblog
ORDER BY [Slot ID],[Transaction ID] DESC
--SELECT * FROM @PreliminaryRawData
/*
If the [Operation] type is 'LOP_MODIFY_ROW', recovering the modified data is straightforward: the old data is in the [RowLog Contents 0] field and the modified data is in the [RowLog Contents 1] field. Simply replace the modified data with the old data to get back the original row.
If the [Operation] type is 'LOP_INSERT_ROWS' or 'LOP_DELETE_ROWS', it is even simpler: the row data is entirely in the [RowLog Contents 0] field.
*/
INSERT INTO @ModifiedRawData ([PAGE ID],[Slot ID],[AllocUnitId],[Datum],[TRANSACTION ID],[Transaction Name],[Current LSN],[Begin Time],[RowLog Contents 0_var],[Slot Info])
SELECT A.[PAGE ID],A.[Slot ID],A.[AllocUnitId],'Restoration' AS [Datum],A.[TRANSACTION ID],A.[Transaction Name],A.[Current LSN],A.[Begin Time]
,CASE WHEN A.[Operation] IN ('LOP_INSERT_ROWS','LOP_DELETE_ROWS') THEN
(SELECT UPPER(CAST('' AS XML).value('xs:hexBinary(sql:column("A.[RowLog Contents 0]") )','varchar(max)'))) --> for INSERTs and/or DELETEs with no UPDATEs (@ModifiedRawData is empty when this update starts)
ELSE --> look for the UPDATE mate from the previous record pull
REPLACE(UPPER(B.[RowLog Contents 0_var]),UPPER(CAST('' AS XML).value('xs:hexBinary(sql:column("A.[RowLog Contents 1]") )','varchar(max)')),UPPER(CAST('' AS XML).value('xs:hexBinary(sql:column("A.[RowLog Contents 0]") )','varchar(max)')))
END AS [RowLog Contents 0_var]
,B.[Slot Info]
FROM @PreliminaryRawData A
LEFT JOIN @ModifiedRawData B ON B.[Current LSN]=A.[Current LSN]
WHERE A.[Operation] IN ('LOP_MODIFY_ROW','LOP_INSERT_ROWS','LOP_DELETE_ROWS')

-- Convert the old data which is in string format to hex format (required).
UPDATE @ModifiedRawData SET [RowLog Contents 0] = CAST('' AS XML).value('xs:hexBinary(substring(sql:column("[RowLog Contents 0_var]"), 0) )', 'varbinary(max)')
FROM @ModifiedRawData
WHERE [RowLog Contents 0] IS NULL

IF (@Debug > 0) BEGIN
select '@PreliminaryRawData'
select * from @PreliminaryRawData
print 'select * from @ModifiedRawData2_row_ops'
select '@ModifiedRawData2_row_ops'
select * from @ModifiedRawData
END
--> remove these records since their processing is complete
--DELETE FROM @PreliminaryRawData WHERE [Operation] IN ('LOP_MODIFY_ROW','LOP_INSERT_ROWS','LOP_DELETE_ROWS')
DELETE FROM @PreliminaryRawData WHERE [Operation] IN ('LOP_INSERT_ROWS','LOP_DELETE_ROWS')
IF (@Debug > 0) BEGIN
print 'select * from @PreliminaryRawData_remaining'
select '@PreliminaryRawData_remaining'
select * from @PreliminaryRawData
END

/*******************CURSOR START*********************/
---Now we have the modified data plus its Slot ID, Page ID and AllocUnitId.
--Next we need to get the old values from before the modification; these data are in chunks.
IF (@Debug = 0) BEGIN
DECLARE Page_Data_Cursor CURSOR FOR
SELECT [PAGE ID],[Slot ID],[AllocUnitId],[TRANSACTION ID],[Transaction Name],[Current LSN],[Begin Time],[RowLog Contents 0],[RowLog Contents 1],[RowLog Contents 2],[RowLog Contents 3],[RowLog Contents 4],[Log Record],[Operation]
FROM @PreliminaryRawData
WHERE [Operation] IN ('LOP_MODIFY_COLUMNS')
ORDER BY [Slot ID],[TRANSACTION ID] DESC
END
ELSE
BEGIN
DECLARE Page_Data_Cursor CURSOR FOR
SELECT [PAGE ID],[Slot ID],[AllocUnitId],[TRANSACTION ID],[Transaction Name],[Current LSN],[Begin Time],[RowLog Contents 0],[RowLog Contents 1],[RowLog Contents 2],[RowLog Contents 3],[RowLog Contents 4],[Log Record],[Operation]
FROM @PreliminaryRawData
ORDER BY [Slot ID],[TRANSACTION ID] DESC
END

/****************************************/

OPEN Page_Data_Cursor

FETCH NEXT FROM Page_Data_Cursor INTO @ConsolidatedPageID,@Slotid,@AllocUnitID,@TransactionID,@TransactionName,@CurrentLSN,@BeginTime,@RowLogContents0,@RowLogContents1,@RowLogContents2,@RowLogContents3,@RowLogContents4,@LogRecord,@Operation
WHILE @@FETCH_STATUS = 0
BEGIN
IF @Operation IN ('LOP_MODIFY_ROW','LOP_INSERT_ROWS','LOP_DELETE_ROWS')
BEGIN
-- To see the debug printout, comment out the "DELETE FROM @PreliminaryRawData..." line above
IF (@Debug > 1) BEGIN
PRINT 'ConsolidatedPageID = '+@ConsolidatedPageID
PRINT 'Slotid = '+CONVERT(nvarchar,@Slotid)
PRINT 'AllocUnitID = '+CONVERT(nvarchar,@AllocUnitID)
PRINT 'TransactionID = '+@TransactionID
PRINT 'BeginTime = '+CONVERT(nvarchar,@BeginTime,121)
PRINT 'TransactionName = '+@TransactionName
DECLARE @RowLogContents0_var VARCHAR(MAX)
DECLARE @RowLogContents1_var VARCHAR(MAX)
SET @RowLogContents0_var=(SELECT UPPER(CAST('' AS XML).value('xs:hexBinary(sql:variable("@RowLogContents0") )', 'varchar(max)')))
SET @RowLogContents1_var=(SELECT UPPER(CAST('' AS XML).value('xs:hexBinary(sql:variable("@RowLogContents1") )', 'varchar(max)')))
PRINT 'RowLogContents0 = '+@RowLogContents0_var
PRINT 'RowLogContents1 = '+@RowLogContents1_var
IF (@TransactionName='UPDATE') BEGIN
DECLARE @RowLogContents0_var1 VARCHAR(MAX)
DECLARE @RowLogContents0_var2 VARCHAR(MAX)
DECLARE @RowLogContents1_var0 VARCHAR(MAX)
DECLARE @RowLogContents0_var0 VARCHAR(MAX)
SET @RowLogContents0_var1=(SELECT [RowLog Contents 0_var]
FROM @ModifiedRawData
WHERE [Current LSN]=@CurrentLSN AND [Datum]='Reference')
SET @RowLogContents1_var0=(SELECT UPPER(CAST('' AS XML).value('xs:hexBinary(sql:variable("@RowLogContents1") )','varchar(max)'))
FROM @ModifiedRawData
WHERE [Current LSN]=@CurrentLSN AND [Datum]='Reference')
SET @RowLogContents0_var0=(SELECT UPPER(CAST('' AS XML).value('xs:hexBinary(sql:variable("@RowLogContents0") )','varchar(max)'))
FROM @ModifiedRawData
WHERE [Current LSN]=@CurrentLSN AND [Datum]='Reference')
SET @RowLogContents0_var2=(SELECT REPLACE(UPPER([RowLog Contents 0_var]),UPPER(CAST('' AS XML).value('xs:hexBinary(sql:variable("@RowLogContents1") )','varchar(max)')),UPPER(CAST('' AS XML).value('xs:hexBinary(sql:variable("@RowLogContents0") )','varchar(max)')))
FROM @ModifiedRawData
WHERE [Current LSN]=@CurrentLSN AND [Datum]='Reference')
PRINT 'RowLogContents0_var1 = '+@RowLogContents0_var1
PRINT 'RowLogContents1_var0 = '+@RowLogContents1_var0
PRINT 'RowLogContents0_var0 = '+@RowLogContents0_var0
PRINT 'RowLogContents0_var2 = '+@RowLogContents0_var2
PRINT 'RowLogContents0_var2a= '+REPLACE(@RowLogContents0_var1,@RowLogContents1_var0,@RowLogContents0_var0)
END
PRINT ''
--DECLARE @LSN VARCHAR(50)
--DECLARE @LSN0 VARCHAR(50)
--DECLARE @LSN1 VARCHAR(50)
--DECLARE @LSN2 VARCHAR(50)
--DECLARE @LSN_int1 INTEGER
--DECLARE @LSN_int2 INTEGER
--SET @LSN=CONVERT(VARCHAR,@CurrentLSN)
--SET @LSN_int1=CHARINDEX(':',@LSN,1)
--SET @LSN_int2=CHARINDEX(':',@LSN,@LSN_int1+1)
--SET @LSN0=SUBSTRING(@LSN,1,@LSN_int1-1)
--SET @LSN1=SUBSTRING(@LSN,@LSN_int1+1,@LSN_int2-@LSN_int1-1)
--SET @LSN2=SUBSTRING(@LSN,@LSN_int2+1,LEN(@LSN))
--PRINT 'CurrentLSN='+@LSN0+':'+@LSN1+':'+@LSN2

--> SQL2005:CAST('' as xml).value('xs:hexBinary(substring(sql:variable("@CurrentLSN"),1,8))','varbinary(max)') == SQL2008:CONVERT(VARBINARY,SUBSTRING(@CurrentLSN,1,8),2)
--DECLARE @str VARCHAR(50)
--SET @str=CONVERT(VARCHAR,CONVERT(INT,CAST('' as xml).value('xs:hexBinary(substring(sql:variable("@CurrentLSN"),1,8))','varbinary(max)')))+':'+
-- CONVERT(VARCHAR,CONVERT(INT,CAST('' as xml).value('xs:hexBinary(substring(sql:variable("@CurrentLSN"),10,8))','varbinary(max)')))+':'+
-- CONVERT(VARCHAR,CONVERT(INT,CAST('' as xml).value('xs:hexBinary(substring(sql:variable("@CurrentLSN"),19,4))','varbinary(max)')))
--PRINT 'CurrentLSN_base10='+@str
--DECLARE @int BIGINT
--SET @int=CONVERT(BIGINT,CONVERT(VARCHAR,CONVERT(INT,CAST('' as xml).value('xs:hexBinary(substring(sql:variable("@CurrentLSN"),1,8))','varbinary(max)')))+
-- RIGHT('0000000000'+CONVERT(VARCHAR,CONVERT(INT,CAST('' as xml).value('xs:hexBinary(substring(sql:variable("@CurrentLSN"),10,8))','varbinary(max)'))),10)+
-- RIGHT('00000'+CONVERT(VARCHAR,CONVERT(INT,CAST('' as xml).value('xs:hexBinary(substring(sql:variable("@CurrentLSN"),19,4))','varbinary(max)'))),5) )
--PRINT 'CurrentLSN_int='+CONVERT(VARCHAR,@int)
END
END

IF @Operation ='LOP_MODIFY_COLUMNS'
BEGIN

/* If the @Operation type is 'LOP_MODIFY_COLUMNS' we need to follow a different procedure to recover the modified data,
because this time the data is also in chunks, but merged into the log record.
*/
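--> Illustrative aside (not executed): the chunk extraction below finds the modified bytes inside
--> [Log Record] by locating the [RowLog Contents 3] bytes and taking everything after them,
--> via CHARINDEX/SUBSTRING on the hex strings. A sketch with made-up values:
/*
DECLARE @LogRec VARCHAR(MAX) = '0000FACE00AABBCCDD' -- hypothetical [Log Record], hex
DECLARE @Marker VARCHAR(MAX) = 'FACE00'             -- hypothetical [RowLog Contents 3], hex
SELECT SUBSTRING(@LogRec, CHARINDEX(@Marker,@LogRec)+LEN(@Marker), LEN(@LogRec)) -- 'AABBCCDD'
*/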
--First we need [RowLog Contents 3], because in the [Log Record] field the modified data appears right after the [RowLog Contents 3] bytes.
SET @RowLogContents3_Var = CAST('' AS XML).value('xs:hexBinary(sql:variable("@RowLogContents3") )', 'varchar(max)')
SET @LogRecord_Var = CAST('' AS XML).value('xs:hexBinary(sql:variable("@LogRecord"))', 'varchar(max)')

---First get the modified data chunks in string format
SET @RowLogData_Var = SUBSTRING(@LogRecord_Var, CHARINDEX(@RowLogContents3_Var,@LogRecord_Var)+LEN(@RowLogContents3_Var) ,LEN(@LogRecord_Var))
--Then convert it into the hex values.
SELECT @RowLogData_Hex = CAST('' AS XML).value('xs:hexBinary( substring(sql:variable("@RowLogData_Var"),0) )', 'varbinary(max)')
FROM (SELECT CASE SUBSTRING(@RowLogData_Var, 1, 2)WHEN '0x' THEN 3 ELSE 0 END) AS t(pos)

-- Before recovering the modified data we need the total fixed-length data size and the start position of the variable-length data

SELECT @TotalFixedLengthData = CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0] , 2 + 1, 2))))
,@VariableLength_Offset_Start = CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0] , 2 + 1, 2))))+5+CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0] , CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0] , 2 + 1, 2)))) + 1, 2))))/8.0))
FROM @ModifiedRawData
WHERE [Current LSN]=@CurrentLSN

SET @FixedLength_Offset = CONVERT(BINARY(2),REVERSE(CONVERT(BINARY(4),(@RowLogContents0))))--)
SET @VariableLength_Offset = CONVERT(INT,CONVERT(BINARY(2),REVERSE(@RowLogContents0)))

/* We already have the modified data chunks in @RowLogData_Hex, but they are in merged format (modified plus actual data).
Here we need the [RowLog Contents 1] field, because it holds the data lengths of both the modified and the actual data;
these lengths let us break the chunk into its original and modified pieces.
*/
SET @FixedLength_Loc = CONVERT(INT,SUBSTRING(@RowLogContents1,1,1))
SET @VariableLength_Loc = CONVERT(INT,SUBSTRING(@RowLogContents1,3,1))

/*First, break out the actual fixed-length data with the help of its data length */
SET @OldFixedLengthStartPosition= CHARINDEX(@RowLogContents4,@RowLogData_Hex)
SET @FixedOldValues = SUBSTRING(@RowLogData_Hex,@OldFixedLengthStartPosition,@FixedLength_Loc)
SET @FixedLengthIncrease = (CASE WHEN (Len(@FixedOldValues)%4) = 0 THEN 1 ELSE (4-(LEN(@FixedOldValues)%4)) END)
/*Then break out the modified fixed-length data with the help of its data length */
SET @FixedNewValues =SUBSTRING(@RowLogData_Hex,@OldFixedLengthStartPosition+@FixedLength_Loc+@FixedLengthIncrease,@FixedLength_Loc)

/*Likewise, break out the variable-length data with the help of its data length*/
SET @VariableOldValues =SUBSTRING(@RowLogData_Hex,@OldFixedLengthStartPosition+@FixedLength_Loc+@FixedLengthIncrease+@FixedLength_Loc+(@FixedLengthIncrease),@VariableLength_Loc)
SET @VariableLengthIncrease = (CASE WHEN (LEN(@VariableOldValues)%4) = 0 THEN 1 ELSE (4-(Len(@VariableOldValues)%4))+1 END)
SET @VariableOldValues = (Case When @VariableLength_Loc = 1 Then @VariableOldValues+0x00 else @VariableOldValues end)

SET @VariableNewValues = SUBSTRING(SUBSTRING(@RowLogData_Hex,@OldFixedLengthStartPosition+@FixedLength_Loc+@FixedLengthIncrease+@FixedLength_Loc+(@FixedLengthIncrease-1)+@VariableLength_Loc+@VariableLengthIncrease,Len(@RowLogData_Hex)+1),1,Len(@RowLogData_Hex)+1) --LEN(@VariableOldValues)

/*Here we replace the fixed-length & variable-length modified data with the original (old) data*/

SELECT @VariableNewValues=CASE
WHEN Charindex(Substring(@VariableNewValues,0,Len(@VariableNewValues)+1),[RowLog Contents 0])<>0 Then Substring(@VariableNewValues,0,Len(@VariableNewValues)+1)
WHEN Charindex(Substring(@VariableNewValues,0,Len(@VariableNewValues) ),[RowLog Contents 0])<>0 Then Substring(@VariableNewValues,0,Len(@VariableNewValues))
WHEN Charindex(Substring(@VariableNewValues,0,Len(@VariableNewValues)-1),[RowLog Contents 0])<>0 Then Substring(@VariableNewValues,0,Len(@VariableNewValues)-1)--3 --Substring(@VariableNewValues,0,Len(@VariableNewValues)-1)
WHEN Charindex(Substring(@VariableNewValues,0,Len(@VariableNewValues)-2),[RowLog Contents 0])<>0 Then Substring(@VariableNewValues,0,Len(@VariableNewValues)-2)
WHEN Charindex(Substring(@VariableNewValues,0,Len(@VariableNewValues)-3),[RowLog Contents 0])<>0 Then Substring(@VariableNewValues,0,Len(@VariableNewValues)-3) --5--Substring(@VariableNewValues,0,Len(@VariableNewValues)-3)
END
FROM @ModifiedRawData
WHERE [Current LSN]=@CurrentLSN

INSERT INTO @ModifiedRawData ([PAGE ID],[Slot ID],[AllocUnitId],[Datum],[TRANSACTION ID],[Transaction Name],[Current LSN],[Begin Time],[RowLog Contents 0_var],[RowLog Contents 0],[Slot Info])
SELECT @ConsolidatedPageID AS [PAGE ID],@Slotid AS [Slot ID],@AllocUnitID AS [AllocUnitId],'Restoration' AS [Datum],@TransactionID AS [TRANSACTION ID],@TransactionName AS [Transaction Name],@CurrentLSN AS [Current LSN],@BeginTime AS [Begin Time]
,NULL --> [RowLog Contents 0_var]
,ISNULL( --> due to chained updates; this only works on the most recent update
CAST(REPLACE(SUBSTRING([RowLog Contents 0],0,@TotalFixedLengthData+1),@FixedNewValues, @FixedOldValues) AS VARBINARY(max))
+ SUBSTRING([RowLog Contents 0], @TotalFixedLengthData + 1, 2)
+ SUBSTRING([RowLog Contents 0], @TotalFixedLengthData + 3, CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], @TotalFixedLengthData + 1, 2))))/8.0)))
+ SUBSTRING([RowLog Contents 0], @TotalFixedLengthData + 3 + CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], @TotalFixedLengthData + 1, 2))))/8.0)), 2)
+ SUBSTRING([RowLog Contents 0],@VariableLength_Offset_Start,(@VariableLength_Offset-(@VariableLength_Offset_Start-1)))
+ CAST(REPLACE(SUBSTRING([RowLog Contents 0],@VariableLength_Offset+1,Len(@VariableNewValues))
,@VariableNewValues,@VariableOldValues) AS VARBINARY)
+ Substring([RowLog Contents 0],@VariableLength_Offset+Len(@VariableNewValues)+1,LEN([RowLog Contents 0]))
,[RowLog Contents 0]) --> [RowLog Contents 0]
,[Slot Info]
FROM @ModifiedRawData
WHERE [Current LSN]=@CurrentLSN

--> Convert the old data which is in hex format to string format (not required).
UPDATE @ModifiedRawData SET [RowLog Contents 0_var] = CAST('' AS XML).value('xs:hexBinary(sql:column("[RowLog Contents 0]") )', 'varchar(max)')
FROM @ModifiedRawData
WHERE [Current LSN]=@CurrentLSN
END

FETCH NEXT FROM Page_Data_Cursor INTO @ConsolidatedPageID,@Slotid,@AllocUnitID,@TransactionID,@TransactionName,@CurrentLSN,@BeginTime,@RowLogContents0,@RowLogContents1,@RowLogContents2,@RowLogContents3,@RowLogContents4,@LogRecord,@Operation
END

CLOSE Page_Data_Cursor
DEALLOCATE Page_Data_Cursor
/*******************CURSOR END*********************/
------------------------------------------------------------------------------------------------------------
DECLARE @ModRawDataLoop INT
--> Add sequence index number(s)
UPDATE A SET [EntryID] = B.[index] FROM @ModifiedRawData A
JOIN (SELECT ROW_NUMBER() OVER (ORDER BY [Current LSN],[Datum] DESC) AS [index], [ID] FROM @ModifiedRawData) B ON
B.[ID] = A.[ID]
--> Add backward reference(s) and forward reference(s)
UPDATE A SET
[PrevEntry] =
ISNULL((SELECT MAX(B.[EntryID]) AS [EntryID] FROM @ModifiedRawData B
WHERE B.[Slot ID] = A.[Slot ID] AND B.[EntryID] < A.[EntryID]),0)
,[NextEntry] =
ISNULL((SELECT MIN(B.[EntryID]) AS [EntryID] FROM @ModifiedRawData B
WHERE B.[Slot ID] = A.[Slot ID] AND B.[EntryID] > A.[EntryID]),0)
FROM @ModifiedRawData A
--> Add chain level(s)
UPDATE A SET [ChainLevel] = B.[index] FROM @ModifiedRawData A
JOIN (SELECT DENSE_RANK() OVER (PARTITION BY [SLOT ID] ORDER BY [Current LSN])-1 AS [index], [ID] FROM @ModifiedRawData) B ON
B.[ID] = A.[ID]
--> Set the process order(s) --> the reverse of the [ChainLevel]
UPDATE A SET
[Process] = CASE WHEN ([ChainLevel] = 0) THEN 0
ELSE (SELECT MAX([ChainLevel]) FROM @ModifiedRawData B WHERE B.[Slot ID] = A.[Slot ID]) - [ChainLevel]
END
FROM @ModifiedRawData A
--select * from @ModifiedRawData order by [EntryID]

--> Get the chained UPDATE records (forward in time) to locate the replaced "Reference" record(s) (chained UPDATEs)
SELECT B.* INTO #T1
FROM @ModifiedRawData A
INNER JOIN @ModifiedRawData B ON B.[Transaction Name] = 'UPDATE'
AND B.[NextEntry] = A.[EntryID]
AND B.[Datum] = 'Reference' AND A.[Datum] = 'Restoration'
--AND B.[Current LSN] <> A.[Current LSN] --> should always be true
SET @ModRawDataLoop = @@ROWCOUNT
--select * from #T1

IF (@ModRawDataLoop > 0) BEGIN
WHILE (@ModRawDataLoop > 0) BEGIN
/*
If the [Operation] type is 'LOP_MODIFY_ROW', recovering the modified data is straightforward: the old data is in [RowLog Contents 0] and the modified data is in [RowLog Contents 1]. Simply replace the modified data with the old data.
*/
--> First, change the byte of [RowLog Contents 0_var] that may differ from the first byte of [RowLog Contents 1_var] (I don't know why these don't match sometimes)
--> Fix the copy before it is restored back into the applicable records (next)
UPDATE #T1 SET
[RowLog Contents 0_var] =
SUBSTRING(B.[RowLog Contents 0_var],1,PATINDEX('%'+SUBSTRING(A.[RowLog Contents 1_var],3,LEN(A.[RowLog Contents 1_var]))+'%',B.[RowLog Contents 0_var])-3)
+SUBSTRING(A.[RowLog Contents 1_var],1,2)
+SUBSTRING(B.[RowLog Contents 0_var],PATINDEX('%'+SUBSTRING(A.[RowLog Contents 1_var],3,LEN(A.[RowLog Contents 1_var]))+'%',B.[RowLog Contents 0_var]),LEN(B.[RowLog Contents 0_var]))
,[RowLog Contents 0] =
SUBSTRING(B.[RowLog Contents 0],1,PATINDEX('%'+SUBSTRING(A.[RowLog Contents 1_var],3,LEN(A.[RowLog Contents 1_var]))+'%',B.[RowLog Contents 0_var])/2-1)
+SUBSTRING(A.[RowLog Contents 1],1,1)
+SUBSTRING(B.[RowLog Contents 0],PATINDEX('%'+SUBSTRING(A.[RowLog Contents 1_var],3,LEN(A.[RowLog Contents 1_var]))+'%',B.[RowLog Contents 0_var])/2+1,LEN(B.[RowLog Contents 0]))
FROM @PreliminaryRawData A
INNER JOIN #T1 C ON C.[Current LSN] = A.[Current LSN]
INNER JOIN @ModifiedRawData B ON B.[EntryID] = C.[NextEntry] --> source location
WHERE A.[Operation] IN ('LOP_MODIFY_ROW') AND C.[Process] = 1
--select * from #T1
--> Restore (one level) the replaced (and therefore lost) "Reference" information (chained UPDATEs)
-- for both the "Reference" and "Restoration" records, and the original source record
UPDATE @ModifiedRawData SET
[RowLog Contents 0_var] = B.[RowLog Contents 0_var]
,[RowLog Contents 0] = B.[RowLog Contents 0]
FROM @ModifiedRawData A
INNER JOIN #T1 B ON A.[EntryID] IN (B.[EntryID],B.[PrevEntry],B.[NextEntry])
WHERE B.[Process] = 1
--select * from @ModifiedRawData order by [EntryID]
--> Now, the substitution will work, so update [RowLog Contents 0_var]
UPDATE @ModifiedRawData SET
[RowLog Contents 0_var] =
--> look for the UPDATE mate from the previous record pull
REPLACE(UPPER(B.[RowLog Contents 0_var]),UPPER(CAST('' AS XML).value('xs:hexBinary(sql:column("A.[RowLog Contents 1]") )','varchar(max)')),UPPER(CAST('' AS XML).value('xs:hexBinary(sql:column("A.[RowLog Contents 0]") )','varchar(max)')))
,[RowLog Contents 0] = NULL
FROM @PreliminaryRawData A
INNER JOIN @ModifiedRawData B ON B.[Current LSN]=A.[Current LSN]
INNER JOIN #T1 C ON C.[PrevEntry] = B.[EntryID]
WHERE A.[Operation] IN ('LOP_MODIFY_ROW') AND B.[Process] = 1

-- Convert the old data which is in string format to hex format (required).
UPDATE @ModifiedRawData SET
[RowLog Contents 0] = CAST('' AS XML).value('xs:hexBinary(substring(sql:column("[RowLog Contents 0_var]"), 0) )', 'varbinary(max)')
FROM @ModifiedRawData
WHERE [RowLog Contents 0] IS NULL
SET @ModRawDataLoop = @@ROWCOUNT
--print '@ModRawDataLoop (LOP_MODIFY_ROW)= '+CONVERT(varchar,@ModRawDataLoop)
--select * from @ModifiedRawData order by [EntryID]

/*******************CURSOR START*********************/
---Now we have the modified data plus its Slot ID, Page ID and AllocUnitId.
--Next we need to get the old values from before the modification; these data are in chunks.
DECLARE Page_Data_Cursor CURSOR FOR
SELECT A.[PAGE ID],A.[Slot ID],A.[AllocUnitId],A.[TRANSACTION ID],A.[Transaction Name],A.[Current LSN],A.[Begin Time],A.[RowLog Contents 0],[RowLog Contents 1],[RowLog Contents 2],[RowLog Contents 3],[RowLog Contents 4],[Log Record],[Operation]
FROM @PreliminaryRawData A
INNER JOIN @ModifiedRawData B ON B.[Current LSN] = A.[Current LSN] --> skip this section if there are no updates
WHERE [Operation] IN ('LOP_MODIFY_COLUMNS') AND B.[Process] = 1 AND B.[Datum] = 'Restoration'
ORDER BY A.[Slot ID],A.[TRANSACTION ID] DESC

SET @ModRawDataLoop = 0
/****************************************/

OPEN Page_Data_Cursor

FETCH NEXT FROM Page_Data_Cursor INTO @ConsolidatedPageID,@Slotid,@AllocUnitID,@TransactionID,@TransactionName,@CurrentLSN,@BeginTime,@RowLogContents0,@RowLogContents1,@RowLogContents2,@RowLogContents3,@RowLogContents4,@LogRecord,@Operation
WHILE @@FETCH_STATUS = 0
BEGIN
--IF @Operation ='LOP_MODIFY_COLUMNS'
BEGIN

/* If the @Operation type is 'LOP_MODIFY_COLUMNS' we need to follow a different procedure to recover the modified data,
because this time the data is also in chunks, but merged into the log record.
*/
--First we need [RowLog Contents 3], because in the [Log Record] field the modified data appears right after the [RowLog Contents 3] bytes.
SET @RowLogContents3_Var = CAST('' AS XML).value('xs:hexBinary(sql:variable("@RowLogContents3") )', 'varchar(max)')
SET @LogRecord_Var = CAST('' AS XML).value('xs:hexBinary(sql:variable("@LogRecord"))', 'varchar(max)')

---First get the modified data chunks in string format
SET @RowLogData_Var = SUBSTRING(@LogRecord_Var, CHARINDEX(@RowLogContents3_Var,@LogRecord_Var)+LEN(@RowLogContents3_Var) ,LEN(@LogRecord_Var))
--Then convert it into the hex values.
SELECT @RowLogData_Hex = CAST('' AS XML).value('xs:hexBinary( substring(sql:variable("@RowLogData_Var"),0) )', 'varbinary(max)')
FROM (SELECT CASE SUBSTRING(@RowLogData_Var, 1, 2)WHEN '0x' THEN 3 ELSE 0 END) AS t(pos)

-- Before recovering the modified data we need the total fixed-length data size and the start position of the variable-length data

SELECT @TotalFixedLengthData = CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0] , 2 + 1, 2))))
,@VariableLength_Offset_Start = CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0] , 2 + 1, 2))))+5+CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0] , CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0] , 2 + 1, 2)))) + 1, 2))))/8.0))
FROM @ModifiedRawData
WHERE [Current LSN]=@CurrentLSN

SET @FixedLength_Offset = CONVERT(BINARY(2),REVERSE(CONVERT(BINARY(4),(@RowLogContents0))))--)
SET @VariableLength_Offset = CONVERT(INT,CONVERT(BINARY(2),REVERSE(@RowLogContents0)))

/* We already have the modified data chunks in @RowLogData_Hex, but they are in merged format (modified plus actual data).
Here we need the [RowLog Contents 1] field, because it holds the data lengths of both the modified and the actual data;
these lengths let us break the chunk into its original and modified pieces.
*/
SET @FixedLength_Loc = CONVERT(INT,SUBSTRING(@RowLogContents1,1,1))
SET @VariableLength_Loc = CONVERT(INT,SUBSTRING(@RowLogContents1,3,1))

/*First, break out the actual fixed-length data with the help of its data length */
SET @OldFixedLengthStartPosition= CHARINDEX(@RowLogContents4,@RowLogData_Hex)
SET @FixedOldValues = SUBSTRING(@RowLogData_Hex,@OldFixedLengthStartPosition,@FixedLength_Loc)
SET @FixedLengthIncrease = (CASE WHEN (Len(@FixedOldValues)%4) = 0 THEN 1 ELSE (4-(LEN(@FixedOldValues)%4)) END)
/*Then break out the modified fixed-length data with the help of its data length */
SET @FixedNewValues =SUBSTRING(@RowLogData_Hex,@OldFixedLengthStartPosition+@FixedLength_Loc+@FixedLengthIncrease,@FixedLength_Loc)

/*Likewise, break out the variable-length data with the help of its data length*/
SET @VariableOldValues =SUBSTRING(@RowLogData_Hex,@OldFixedLengthStartPosition+@FixedLength_Loc+@FixedLengthIncrease+@FixedLength_Loc+(@FixedLengthIncrease),@VariableLength_Loc)
SET @VariableLengthIncrease = (CASE WHEN (LEN(@VariableOldValues)%4) = 0 THEN 1 ELSE (4-(Len(@VariableOldValues)%4))+1 END)
SET @VariableOldValues = (Case When @VariableLength_Loc = 1 Then @VariableOldValues+0x00 else @VariableOldValues end)

SET @VariableNewValues = SUBSTRING(SUBSTRING(@RowLogData_Hex,@OldFixedLengthStartPosition+@FixedLength_Loc+@FixedLengthIncrease+@FixedLength_Loc+(@FixedLengthIncrease-1)+@VariableLength_Loc+@VariableLengthIncrease,Len(@RowLogData_Hex)+1),1,Len(@RowLogData_Hex)+1) --LEN(@VariableOldValues)

/*Here we replace the fixed-length & variable-length modified data with the original (old) data*/

SELECT @VariableNewValues=CASE
WHEN Charindex(Substring(@VariableNewValues,0,Len(@VariableNewValues)+1),[RowLog Contents 0])<>0 Then Substring(@VariableNewValues,0,Len(@VariableNewValues)+1)
WHEN Charindex(Substring(@VariableNewValues,0,Len(@VariableNewValues) ),[RowLog Contents 0])<>0 Then Substring(@VariableNewValues,0,Len(@VariableNewValues))
WHEN Charindex(Substring(@VariableNewValues,0,Len(@VariableNewValues)-1),[RowLog Contents 0])<>0 Then Substring(@VariableNewValues,0,Len(@VariableNewValues)-1)--3 --Substring(@VariableNewValues,0,Len(@VariableNewValues)-1)
WHEN Charindex(Substring(@VariableNewValues,0,Len(@VariableNewValues)-2),[RowLog Contents 0])<>0 Then Substring(@VariableNewValues,0,Len(@VariableNewValues)-2)
WHEN Charindex(Substring(@VariableNewValues,0,Len(@VariableNewValues)-3),[RowLog Contents 0])<>0 Then Substring(@VariableNewValues,0,Len(@VariableNewValues)-3) --5--Substring(@VariableNewValues,0,Len(@VariableNewValues)-3)
END
FROM @ModifiedRawData
WHERE [Current LSN]=@CurrentLSN

UPDATE @ModifiedRawData SET
[RowLog Contents 0] =
CAST(REPLACE(SUBSTRING([RowLog Contents 0],0,@TotalFixedLengthData+1),@FixedNewValues, @FixedOldValues) AS VARBINARY(max))
+ SUBSTRING([RowLog Contents 0], @TotalFixedLengthData + 1, 2)
+ SUBSTRING([RowLog Contents 0], @TotalFixedLengthData + 3, CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], @TotalFixedLengthData + 1, 2))))/8.0)))
+ SUBSTRING([RowLog Contents 0], @TotalFixedLengthData + 3 + CONVERT(INT, ceiling(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], @TotalFixedLengthData + 1, 2))))/8.0)), 2)
+ SUBSTRING([RowLog Contents 0],@VariableLength_Offset_Start,(@VariableLength_Offset-(@VariableLength_Offset_Start-1)))
+ CAST(REPLACE(SUBSTRING([RowLog Contents 0],@VariableLength_Offset+1,Len(@VariableNewValues))
,@VariableNewValues,@VariableOldValues) AS VARBINARY)
+ Substring([RowLog Contents 0],@VariableLength_Offset+Len(@VariableNewValues)+1,LEN([RowLog Contents 0]))
,[RowLog Contents 0_var] = NULL
FROM @ModifiedRawData
WHERE [Current LSN]=@CurrentLSN AND [Process] = 1 AND [Datum] = 'Restoration'

--> Convert the old data which is in hex format to string format (not required).
UPDATE @ModifiedRawData SET [RowLog Contents 0_var] = CAST('' AS XML).value('xs:hexBinary(sql:column("[RowLog Contents 0]") )', 'varchar(max)')
FROM @ModifiedRawData
WHERE [RowLog Contents 0_var] IS NULL
SET @ModRawDataLoop = @ModRawDataLoop + @@ROWCOUNT

END

FETCH NEXT FROM Page_Data_Cursor INTO @ConsolidatedPageID,@Slotid,@AllocUnitID,@TransactionID,@TransactionName,@CurrentLSN,@BeginTime,@RowLogContents0,@RowLogContents1,@RowLogContents2,@RowLogContents3,@RowLogContents4,@LogRecord,@Operation
END
--print '@ModRawDataLoop (LOP_MODIFY_COLUMNS)= '+CONVERT(varchar,@ModRawDataLoop)

CLOSE Page_Data_Cursor
DEALLOCATE Page_Data_Cursor
/*******************CURSOR END*********************/
--select * from @ModifiedRawData order by [EntryID]

--> Update the process order
UPDATE #T1 SET [Process] = [Process] - 1
FROM #T1
WHERE [Process] > 0
SET @ModRawDataLoop = (SELECT COUNT(*) FROM #T1 WHERE [Process] > 0)
--print '@ModRawDataLoop (remaining)= '+CONVERT(varchar,@ModRawDataLoop)

IF (@ModRawDataLoop > 0) BEGIN
--> Update the process order
UPDATE @ModifiedRawData SET [Process] = [Process] - 1
FROM @ModifiedRawData
WHERE [Process] > 0

--> Now look ahead and get (temporarily store) the replaced "Reference" information (chained UPDATEs)
-- this is required for UPDATEs occurring before the most recent UPDATE
UPDATE #T1 SET --> redundant for the first level
[RowLog Contents 0_var] = A.[RowLog Contents 0_var]
,[RowLog Contents 0] = A.[RowLog Contents 0]
FROM @ModifiedRawData A
INNER JOIN #T1 B ON B.[NextEntry] = A.[EntryID]
WHERE B.[Process] = 1
--select * from #T1
END
END
END
------------------------------------------------------------------------------------------------------------
IF (@Debug > 0) BEGIN
print 'select * from @ModifiedRawData2'
select '@ModifiedRawData2'
select * from @ModifiedRawData ORDER BY [Current LSN], [Datum] Desc
END

DECLARE @RowLogContents VARBINARY(8000)
DECLARE @AllocUnitName NVARCHAR(MAX)
DECLARE @SQL NVARCHAR(MAX)

DECLARE @bitTable TABLE(
[ID] INT
,[Bitvalue] INT
)
--Create a lookup table for the bit positions within one byte.

INSERT INTO @bitTable
SELECT 0,2 UNION ALL
SELECT 1,2 UNION ALL
SELECT 2,4 UNION ALL
SELECT 3,8 UNION ALL
SELECT 4,16 UNION ALL
SELECT 5,32 UNION ALL
SELECT 6,64 UNION ALL
SELECT 7,128

--Create table to collect the row data.
DECLARE @DeletedRecords TABLE(
[ID] INT IDENTITY(1,1)
,[RowLogContents] VARBINARY(8000)
,[AllocUnitID] BIGINT
,[TransactionID] NVARCHAR(Max)
,[Slot ID] INT
,[FixedLengthData] SMALLINT
,[TotalNoOfCols] SMALLINT
,[NullBitMapLength] SMALLINT
,[NullBytes] VARBINARY(8000)
,[TotalNoOfVarCols] SMALLINT
,[ColumnOffsetArray] VARBINARY(8000)
,[VarColumnStart] SMALLINT
,[NullBitMap] VARCHAR(MAX)
,[TRANSACTION ID] NVARCHAR(Max)
,[Transaction Name] NVARCHAR(MAX)
,[Datum] VARCHAR(20)
,[Current LSN] VARCHAR(MAX)
,[Begin Time] DATETIME
)
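--> Illustrative aside (not executed): the CTE below decodes each row header in place. The layout
--> it relies on (status bytes, then a 2-byte little-endian offset to the end of the fixed-length
--> data, then the 2-byte column count, then CEILING(cols/8.0) bytes of NULL bitmap) can be
--> sketched on a hypothetical record:
/*
DECLARE @r VARBINARY(8000) = 0x30000800AABBCCDD030000
SELECT CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING(@r, 2 + 1, 2))))  -- 8 = end of fixed-length data
SELECT CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING(@r, 8 + 1, 2))))       -- 3 = total number of columns
SELECT CONVERT(INT, CEILING(3/8.0))                                              -- 1 = NULL bitmap length in bytes
*/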
--Create a common table expression to get all the row data plus how many bytes we have for each row.
;WITH RowData AS
(SELECT
[RowLog Contents 0] AS [RowLogContents]
,[AllocUnitID]
,[ID] AS [TransactionID]
,[Slot ID] as [Slot ID]
--[Fixed Length Data] = Substring (RowLog content 0, Status Bit A+ Status Bit B + 1,2 bytes)
,CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) AS [FixedLengthData] --@FixedLengthData

--[TotalnoOfCols] = Substring (RowLog content 0, [Fixed Length Data] + 1,2 bytes)
,CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2),
REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2)))) as [TotalNoOfCols]

--[NullBitMapLength]=ceiling([Total No of Columns] /8.0)
,CONVERT(INT, CEILING(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2),
REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)) as [NullBitMapLength]

--[Null Bytes] = Substring (RowLog content 0, Status Bit A+ Status Bit B + [Fixed Length Data] +1, [NullBitMapLength] )
,SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 3,
CONVERT(INT, CEILING(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2),
REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0))) as [NullBytes]

--[TotalNoOfVarCols] = Substring (RowLog content 0, Status Bit A+ Status Bit B + [Fixed Length Data] +1, [Null Bitmap length] + 2 )
,(CASE WHEN SUBSTRING([RowLog Contents 0], 1, 1) In (0x30,0x70)
THEN CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0],
CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 3 +
CONVERT(INT, CEILING(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2),
REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)), 2))))
ELSE null END) AS [TotalNoOfVarCols]

--[ColumnOffsetArray]= Substring (RowLog content 0, Status Bit A+ Status Bit B + [Fixed Length Data] +1, [Null Bitmap length] + 2 , [TotalNoOfVarCols]*2 )
,(CASE WHEN SUBSTRING([RowLog Contents 0], 1, 1) In (0x30,0x70)
THEN SUBSTRING([RowLog Contents 0],
CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 3 +
CONVERT(INT, CEILING(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2),
REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)) + 2,
(CASE WHEN SUBSTRING([RowLog Contents 0], 1, 1) In (0x30,0x70)
THEN CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0],
CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 3 +
CONVERT(INT, CEILING(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2),
REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)), 2))))
ELSE null END) * 2)
ELSE null END) AS [ColumnOffsetArray]

-- Variable column Start = Status Bit A+ Status Bit B + [Fixed Length Data] + [Null Bitmap length] + 2+([TotalNoOfVarCols]*2)
,CASE WHEN SUBSTRING([RowLog Contents 0], 1, 1)In (0x30,0x70)
THEN (CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 4 +
CONVERT(INT, CEILING(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2),
REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)) +
((CASE WHEN SUBSTRING([RowLog Contents 0], 1, 1) In (0x30,0x70)
THEN CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0],
CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 3 +
CONVERT(INT, CEILING(CONVERT(INT, CONVERT(BINARY(2), REVERSE(SUBSTRING([RowLog Contents 0], CONVERT(SMALLINT, CONVERT(BINARY(2),
REVERSE(SUBSTRING([RowLog Contents 0], 2 + 1, 2)))) + 1, 2))))/8.0)), 2))))
ELSE null END) * 2))
ELSE NULL END AS [VarColumnStart]
,[TRANSACTION ID]
,[Transaction Name]
,[Datum]
,[Current LSN]
,[Begin Time]
FROM @ModifiedRawData
)

--Tally CTEs: generate a sequence of numbers so each row can be walked byte by byte.
,N1 (n) AS (SELECT 1 UNION ALL SELECT 1)
,N2 (n) AS (SELECT 1 FROM N1 AS X, N1 AS Y)
,N3 (n) AS (SELECT 1 FROM N2 AS X, N2 AS Y)
,N4 (n) AS (SELECT ROW_NUMBER() OVER(ORDER BY X.n) FROM N3 AS X, N3 AS Y)

INSERT INTO @DeletedRecords
SELECT [RowLogContents]
,[AllocUnitID]
,[TransactionID]
,[Slot ID]
,[FixedLengthData]
,[TotalNoOfCols]
,[NullBitMapLength]
,[NullBytes]
,[TotalNoOfVarCols]
,[ColumnOffsetArray]
,[VarColumnStart]
--Get the null flag for each column (1 = null, 0 = not null)
,[NullBitMap]=(REPLACE(STUFF(
(SELECT ',' +
(CASE WHEN [ID]=0 THEN CONVERT(NVARCHAR(1),(SUBSTRING(NullBytes, n, 1) % 2)) ELSE CONVERT(NVARCHAR(1),((SUBSTRING(NullBytes, n, 1) / [Bitvalue]) % 2)) END) --as [nullBitMap]
FROM N4 AS Nums
JOIN RowData AS C ON n <= NullBitMapLength
CROSS JOIN @bitTable WHERE C.[RowLogContents] = D.[RowLogContents]
ORDER BY [RowLogContents],n ASC FOR XML PATH('')
),1,1,''),',',''))
,[TRANSACTION ID]
,[Transaction Name]
,[Datum]
,[Current LSN]
,[Begin Time]
FROM RowData D
IF (@Debug > 0) BEGIN
print 'select * from @DeletedRecords'
select '@DeletedRecords'
select * from @DeletedRecords
END
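/*
Aside (illustrative only; run separately): the N1..N4 chain used above is a
classic tally-table build -- each level cross-joins the previous level with
itself, squaring the row count (2, 4, 16, 256):

    ;WITH N1 (n) AS (SELECT 1 UNION ALL SELECT 1)
    ,N2 (n) AS (SELECT 1 FROM N1 AS X, N1 AS Y)
    ,N3 (n) AS (SELECT 1 FROM N2 AS X, N2 AS Y)
    ,N4 (n) AS (SELECT ROW_NUMBER() OVER(ORDER BY X.n) FROM N3 AS X, N3 AS Y)
    SELECT COUNT(*) FROM N4   -- 256
*/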

CREATE TABLE [#temp_Data](
[FieldName] VARCHAR(MAX) COLLATE database_default NOT NULL
,[FieldValue] VARCHAR(MAX) COLLATE database_default NULL
,[FieldSetValue] VARCHAR(MAX) COLLATE database_default NULL
,[Rowlogcontents] VARBINARY(8000)
,[TransactionID] VARCHAR(MAX) COLLATE database_default NOT NULL
,[Slot ID] INT
,[NonID] INT
,[nullbit] INT
,[TRANSACTION ID] NVARCHAR(MAX)
,[Transaction Name] NVARCHAR(MAX)
,[Datum] VARCHAR(20)
,[Current LSN] VARCHAR(MAX)
,[Begin Time] DATETIME
--,[System_type_id] INT
)
--Create a common table expression and join it with the row data
--to get the details of each column.
;With CTE AS
/*This part is for variable data columns*/
(SELECT
A.[ID]
,[Rowlogcontents]
,[TransactionID]
,[Slot ID]
,[TRANSACTION ID]
,[Transaction Name]
,[Datum]
,[Current LSN]
,[Begin Time]
,[NAME]
,cols.leaf_null_bit AS nullbit
,leaf_offset
,ISNULL(syscolumns.length, cols.max_length) AS [length]
,cols.system_type_id
,cols.leaf_bit_position AS bitpos
,ISNULL(syscolumns.xprec, cols.precision) AS xprec
,ISNULL(syscolumns.xscale, cols.scale) AS xscale
,SUBSTRING([nullBitMap], cols.leaf_null_bit, 1) AS is_null
--Calculate the variable column size from the variable column offset array
,(CASE WHEN leaf_offset<1 AND SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0
THEN CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*(leaf_offset*-1))-1, 2))))
ELSE 0 END
) AS [Column value Size]

--Calculate the column length
,(CASE WHEN leaf_offset<1 AND SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0
THEN CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING ([ColumnOffsetArray], (2*(leaf_offset*-1))-1, 2))))
- ISNULL(NULLIF(CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE (SUBSTRING([ColumnOffsetArray], (2*((leaf_offset*-1)-1))-1, 2)))), 0), [varColumnStart]) --> If Null OR zero
ELSE 0 END
) AS [Column Length]

--Get the hexadecimal value from the RowLogContents.
--Hex value of a variable column = SUBSTRING(RowLogContents, [Column value Size] - [Column Length] + 1, [Column Length])
--This is your column's data, still as raw hex.
,CASE WHEN SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=1
THEN NULL
ELSE SUBSTRING(Rowlogcontents,
(
(CASE WHEN leaf_offset<1 AND SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0
THEN CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*(leaf_offset*-1))-1, 2))))
ELSE 0 END
)
-(
(CASE WHEN leaf_offset<1 AND SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0
THEN CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*(leaf_offset*-1))-1, 2))))
- ISNULL(NULLIF(CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*((leaf_offset*-1)-1))-1, 2)))), 0), [varColumnStart]) --> If Null OR zero
ELSE 0 END
))
)+1,(
(CASE WHEN leaf_offset<1 AND SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0
THEN CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*(leaf_offset*-1))-1, 2))))
- ISNULL(NULLIF(CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*((leaf_offset*-1)-1))-1, 2)))), 0), [varColumnStart]) --> If Null OR zero
ELSE 0 END
))
)
END AS hex_Value

FROM @DeletedRecords A
INNER JOIN sys.allocation_units allocunits ON
A.[AllocUnitId]=allocunits.[Allocation_Unit_Id]
INNER JOIN sys.partitions partitions ON
(allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
INNER JOIN sys.system_internals_partition_columns cols ON cols.partition_id = partitions.partition_id
LEFT OUTER JOIN syscolumns ON syscolumns.id = partitions.object_id AND syscolumns.colid = cols.partition_column_id
WHERE leaf_offset < 0

UNION
/*This part is for fixed data columns*/
SELECT
A.[ID]
,[Rowlogcontents]
,[TransactionID]
,[Slot ID]
,[TRANSACTION ID]
,[Transaction Name]
,[Datum]
,[Current LSN]
,[Begin Time]
,[NAME]
,cols.leaf_null_bit AS [nullbit]
,leaf_offset
,ISNULL(syscolumns.length, cols.max_length) AS [length]
,cols.system_type_id
,cols.leaf_bit_position AS bitpos
,ISNULL(syscolumns.xprec, cols.precision) AS xprec
,ISNULL(syscolumns.xscale, cols.scale) AS xscale
,SUBSTRING([nullBitMap], cols.leaf_null_bit, 1) AS is_null
,(SELECT TOP 1 ISNULL(SUM(CASE WHEN C.leaf_offset >1 THEN max_length ELSE 0 END),0)
FROM sys.system_internals_partition_columns C
WHERE cols.partition_id=C.partition_id AND C.leaf_null_bit<cols.leaf_null_bit
)+5 AS [Column value Size]
,syscolumns.length AS [Column Length]

,CASE WHEN SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=1
THEN NULL
ELSE SUBSTRING(Rowlogcontents,
(SELECT TOP 1 ISNULL(SUM(CASE WHEN C.leaf_offset >1 THEN max_length ELSE 0 END),0)
FROM sys.system_internals_partition_columns C
WHERE cols.partition_id =C.partition_id AND C.leaf_null_bit<cols.leaf_null_bit
)+5
,syscolumns.length)
END AS hex_Value
FROM @DeletedRecords A
INNER JOIN sys.allocation_units allocunits ON
A.[AllocUnitId]=allocunits.[Allocation_Unit_Id]
INNER JOIN sys.partitions partitions ON
(allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
INNER JOIN sys.system_internals_partition_columns cols ON cols.partition_id = partitions.partition_id
LEFT OUTER JOIN syscolumns ON syscolumns.id = partitions.object_id AND syscolumns.colid = cols.partition_column_id
WHERE leaf_offset>0
)
--Convert the data from its hex value back to the original datatype.
--A conversion branch is implemented for each datatype.
--Select * from sys.columns Where [object_id]=object_id('' + @SchemaName_n_TableName + '')
--Select * from CTE
INSERT INTO #temp_Data
SELECT
NAME,
CASE
WHEN system_type_id IN (231, 239) THEN LTRIM(RTRIM(CONVERT(NVARCHAR(max),hex_Value))) --NVARCHAR ,NCHAR
WHEN system_type_id IN (167,175) THEN LTRIM(RTRIM(CONVERT(VARCHAR(max),REPLACE(hex_Value, 0x00, 0x20)))) --VARCHAR,CHAR
WHEN system_type_id = 48 THEN CONVERT(VARCHAR(MAX), CONVERT(TINYINT, CONVERT(BINARY(1), REVERSE (hex_Value)))) --TINY INTEGER
WHEN system_type_id = 52 THEN CONVERT(VARCHAR(MAX), CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE (hex_Value)))) --SMALL INTEGER
WHEN system_type_id = 56 THEN CONVERT(VARCHAR(MAX), CONVERT(INT, CONVERT(BINARY(4), REVERSE(hex_Value)))) -- INTEGER
WHEN system_type_id = 127 THEN CONVERT(VARCHAR(MAX), CONVERT(BIGINT, CONVERT(BINARY(8), REVERSE(hex_Value))))-- BIG INTEGER
WHEN system_type_id = 61 Then CONVERT(VARCHAR(MAX),CONVERT(DATETIME,CONVERT(VARBINARY(8000),REVERSE (hex_Value))),100) --DATETIME
--WHEN system_type_id = 40 THEN CONVERT(VARCHAR(MAX),CONVERT(DATE,CONVERT(VARBINARY(8000),(hex_Value))),100) --DATE: only available from SQL Server 2008 onward
WHEN system_type_id =58 Then CONVERT(VARCHAR(MAX),CONVERT(SMALLDATETIME,CONVERT(VARBINARY(8000),REVERSE(hex_Value))),100) --SMALL DATETIME
WHEN system_type_id = 108 THEN CONVERT(VARCHAR(MAX), CAST(CONVERT(NUMERIC(38,30), CONVERT(VARBINARY,CONVERT(VARBINARY,xprec)+CONVERT(VARBINARY,xscale))+CONVERT(VARBINARY(1),0) + hex_Value) as FLOAT)) --- NUMERIC
WHEN system_type_id In(60,122) THEN CONVERT(VARCHAR(MAX),Convert(MONEY,Convert(VARBINARY(8000),Reverse(hex_Value))),2) --MONEY,SMALLMONEY
WHEN system_type_id =106 THEN CONVERT(VARCHAR(MAX), CAST(CONVERT(Decimal(38,34), CONVERT(VARBINARY,Convert(VARBINARY,xprec)+CONVERT(VARBINARY,xscale))+CONVERT(VARBINARY(1),0) + hex_Value) as FLOAT)) --- DECIMAL
WHEN system_type_id = 104 THEN CONVERT(VARCHAR(MAX),CONVERT (BIT,CONVERT(BINARY(1), hex_Value)%2)) -- BIT
WHEN system_type_id =62 THEN RTRIM(LTRIM(STR(CONVERT(FLOAT,SIGN(CAST(CONVERT(VARBINARY(8000),Reverse(hex_Value)) AS BIGINT)) * (1.0 + (CAST(CONVERT(VARBINARY(8000),Reverse(hex_Value)) AS BIGINT) & 0x000FFFFFFFFFFFFF) * POWER(CAST(2 AS FLOAT), -52)) * POWER(CAST(2 AS FLOAT),((CAST(CONVERT(VARBINARY(8000),Reverse(hex_Value)) AS BIGINT) & 0x7ff0000000000000) / EXP(52 * LOG(2))-1023))),53,LEN(hex_Value)))) --- FLOAT
WHEN system_type_id =59 THEN Left(LTRIM(STR(CAST(SIGN(CAST(Convert(VARBINARY(8000),REVERSE(hex_Value)) AS BIGINT))* (1.0 + (CAST(CONVERT(VARBINARY(8000),Reverse(hex_Value)) AS BIGINT) & 0x007FFFFF) * POWER(CAST(2 AS Real), -23)) * POWER(CAST(2 AS Real),(((CAST(CONVERT(VARBINARY(8000),Reverse(hex_Value)) AS INT) )& 0x7f800000)/ EXP(23 * LOG(2))-127))AS REAL),23,23)),8) --Real
WHEN system_type_id In (165,173) THEN (CASE WHEN CHARINDEX(0x,cast('' AS XML).value('xs:hexBinary(sql:column("hex_Value"))', 'VARBINARY(8000)')) = 0 THEN '0x' ELSE '' END) +cast('' AS XML).value('xs:hexBinary(sql:column("hex_Value"))', 'varchar(max)') -- BINARY,VARBINARY
WHEN system_type_id =36 THEN CONVERT(VARCHAR(MAX),CONVERT(UNIQUEIDENTIFIER,hex_Value)) --UNIQUEIDENTIFIER
END AS [FieldValue]
,NULL AS [FieldSetValue]
,[Rowlogcontents]
,[TransactionID]
,[Slot ID]
,[ID]
,[nullbit]
,[TRANSACTION ID]
,[Transaction Name]
,[Datum]
,[Current LSN]
,[Begin Time]
FROM CTE
ORDER BY [nullbit]
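/*
Aside (illustrative only; run separately): fixed-length numeric types are
stored little-endian in the log record, which is why every conversion above
goes through REVERSE before CONVERT. For example:

    SELECT CONVERT(INT, CONVERT(BINARY(4), REVERSE(0x01000000)))   -- 1
    SELECT CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(0xFF00)))  -- 255
*/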

UPDATE #temp_Data SET [FieldSetValue] = --> New Column data
(CASE
-- VARCHAR; CHAR;
WHEN system_type_id In (167,175,189) THEN ISNULL('''' + [FieldValue] + '''','NULL')
-- NVARCHAR; NCHAR
WHEN system_type_id In (231,239) THEN ISNULL('N''' + [FieldValue] + '''','NULL')
-- SMALLDATETIME; DATE; DATETIME; UNIQUEIDENTIFIER
WHEN system_type_id In (58,40,61,36) THEN ISNULL('''' + [FieldValue] + '''','NULL')
-- TINYINT; SMALLINT; INT; REAL; MONEY; FLOAT; BIT; DECIMAL; NUMERIC; SMALLMONEY; BIGINT
WHEN system_type_id In (48,52,56,59,60,62,104,106,108,122,127) THEN ISNULL([FieldValue],'NULL')
END)
FROM sys.columns [D]
WHERE [object_id] = object_id(''+@SchemaName_n_TableName+'')
AND [Fieldname] = D.[name]
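/*
Aside (illustrative only; run separately): the quoting above relies on NULL
propagation through string concatenation (CONCAT_NULL_YIELDS_NULL, the
default) -- if [FieldValue] is NULL the whole concatenation is NULL and
ISNULL falls back to the literal string 'NULL':

    SELECT ISNULL('''' + 'abc' + '''', 'NULL')   -- the string 'abc', quotes included
    SELECT ISNULL('''' + NULL  + '''', 'NULL')   -- the string NULL
*/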

/*Create the update statement.*/
/*Now that we have both the modified and the original data,*/
/*we can build the statement needed to undo each change in case of recovery.*/

;With CTE AS
(SELECT
QUOTENAME([Name]) + '=' + ISNULL([A].[FieldSetValue],'NULL') + ' , ' AS [Field]
----(CASE
---- -- VARCHAR; CHAR;
---- WHEN system_type_id In (167,175,189) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' ,' + ' '
---- -- NVARCHAR; NCHAR
---- WHEN system_type_id In (231,239) THEN QUOTENAME([Name]) + '=' + ISNULL('N''' + [A].[FieldValue] + '''','NULL') + ' ,' + ''
---- -- SMALLDATETIME; DATE; DATETIME; UNIQUEIDENTIFIER
---- WHEN system_type_id In (58,40,61,36) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' ,' + ' '
---- -- TINYINT; SMALLINT; INT; REAL; MONEY; FLOAT; BIT; DECIMAL; NUMERIC; SMALLMONEY; BIGINT
---- WHEN system_type_id In (48,52,56,59,60,62,104,106,108,122,127) THEN QUOTENAME([Name]) + '=' + ISNULL([A].[FieldValue],'NULL') + ' ,' + ' '
---- END) AS [Field]
,A.[Slot ID]
,A.[TransactionID] AS [TransactionID]
,'D' AS [Type] --> Different
,[A].[Rowlogcontents]
,[A].[NonID]
,[A].[nullbit]
,[A].[TRANSACTION ID]
,[A].[Transaction Name]
,[A].[Datum]
,[A].[Current LSN]
,[A].[Begin Time]
FROM #temp_Data AS [A]
INNER JOIN #temp_Data AS [B] ON
[A].[FieldName]=[B].[FieldName]
AND [A].[Slot ID]=[B].[Slot ID]
--And [A].[TransactionID]=[B].[TransactionID]+1
AND [A].[TRANSACTION ID]=[B].[TRANSACTION ID] --> new condition
AND [A].[TransactionID]<>[B].[TransactionID] --> new condition
----AND [B].[TransactionID] =
---- (SELECT Min(Cast([TransactionID] AS INT)) AS [TransactionID]
---- FROM #temp_Data AS [C]
---- WHERE [A].[Slot ID]=[C].[Slot ID]
---- GROUP BY [Slot ID]
---- )
INNER JOIN sys.columns [D] ON [object_id] = object_id(''+@SchemaName_n_TableName+'')
AND A.[Fieldname] = D.[name]
WHERE ISNULL([A].[FieldValue],'') <> ISNULL([B].[FieldValue],'')

UNION ALL

SELECT
QUOTENAME([Name]) + '=' + ISNULL([A].[FieldSetValue],'NULL') + ' AND' AS [Field]
----(CASE
---- WHEN system_type_id In (167,175,189) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (231,239) THEN QUOTENAME([Name]) + '='+ ISNULL('N''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (58,40,61,36) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (48,52,56,59,60,62,104,106,108,122,127) THEN QUOTENAME([Name]) + '=' + ISNULL([A].[FieldValue],'NULL') + ' AND'
---- END) AS [Field]
,A.[Slot ID]
,A.[TransactionID] AS [TransactionID]
,'S' AS [Type] --> Same
,[A].[Rowlogcontents]
,[A].[NonID]
,[A].[nullbit]
,[A].[TRANSACTION ID]
,[A].[Transaction Name]
,[A].[Datum]
,[A].[Current LSN]
,[A].[Begin Time]
FROM #temp_Data AS [A]
INNER JOIN #temp_Data AS [B] ON [A].[FieldName] = [B].[FieldName]
AND [A].[Slot ID] = [B].[Slot ID]
--AND [A].[TransactionID] = [B].[TransactionID]+1
AND [A].[TRANSACTION ID]=[B].[TRANSACTION ID] --> new condition
AND [A].[TransactionID]<>[B].[TransactionID] --> new condition
----AND [B].[TransactionID] =
---- (SELECT MIN(CAST([TransactionID] AS INT)) AS [TransactionID]
---- FROM #temp_Data AS [C]
---- WHERE [A].[Slot ID]=[C].[Slot ID]
---- GROUP BY [Slot ID]
---- )
INNER JOIN sys.columns [D] ON [object_id] = object_id('' + @SchemaName_n_TableName + '')
AND [A].[Fieldname] = D.[name]
WHERE ISNULL([A].[FieldValue],'') = ISNULL([B].[FieldValue],'')
----AND A.[TransactionID] NOT IN
---- (SELECT MIN(CAST([TransactionID] AS INT)) AS [TransactionID]
---- FROM #temp_Data AS [C]
---- WHERE [A].[Slot ID]=[C].[Slot ID]
---- GROUP BY [Slot ID]
---- )

UNION ALL --> new

SELECT --> new
QUOTENAME([Name]) + '=' + ISNULL([A].[FieldSetValue],'NULL') + ' AND' AS [Field]
----(CASE
---- WHEN system_type_id In (167,175,189) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (231,239) THEN QUOTENAME([Name]) + '=' + ISNULL('N''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (58,40,61,36) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (48,52,56,59,60,62,104,106,108,122,127) THEN QUOTENAME([Name]) + '=' + ISNULL([A].[FieldValue],'NULL') + ' AND'
---- END) AS [Field]
,A.[Slot ID]
,A.[TransactionID] AS [TransactionID]
,CASE WHEN ([A].[Transaction Name] = 'INSERT') THEN 'N' ELSE 'O' END AS [Type] --> New or Old
,[A].[Rowlogcontents]
,[A].[NonID]
,[A].[nullbit]
,[A].[TRANSACTION ID]
,[A].[Transaction Name]
,[A].[Datum]
,[A].[Current LSN]
,[A].[Begin Time]
FROM #temp_Data AS [A]
INNER JOIN #temp_Data AS [B] ON
[A].[FieldName]=[B].[FieldName]
AND [A].[Slot ID]=[B].[Slot ID]
AND [A].[Transaction Name] IN ('INSERT','DELETE')
AND [A].[TransactionID]=[B].[TransactionID] --> self
INNER JOIN sys.columns [D] ON [object_id] = object_id(''+@SchemaName_n_TableName+'')
AND A.[Fieldname] = D.[name]
)

,CTEUpdateQuery AS
(SELECT
CASE A.[Transaction Name]
WHEN 'UPDATE' THEN
'UPDATE [' + @SchemaName_n_TableName + '] SET ' +
LEFT(STUFF(
(SELECT ' ' + ISNULL([Field],'')
FROM CTE B
WHERE A.[Slot ID]=B.[Slot ID] AND A.[TransactionID]=B.[TransactionID] AND B.[Type]='D' FOR XML PATH('')
),1,1,''), LEN(STUFF(
(SELECT ' ' + ISNULL([Field],'')
FROM CTE B
WHERE A.[Slot ID]=B.[Slot ID] AND A.[TransactionID]=B.[TransactionID] AND B.[Type]='D' FOR XML PATH('')
),1,1,'') )-2 --> -2 removes the final ', '
) +

' WHERE ' +

LEFT(STUFF(
(SELECT ' ' + ISNULL([Field],'') + ' '
FROM CTE C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Type]='S' FOR XML PATH('')
),1,1,'') ,LEN(STUFF(
(SELECT ' ' + ISNULL([Field],'') + ' '
FROM CTE C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Type]='S' FOR XML PATH('')
),1,1,'') )-4 --> -4 removes the final ' AND'
)
WHEN 'DELETE' THEN
'INSERT INTO [' + @SchemaName_n_TableName + '] (' +
LEFT(STUFF(
(SELECT ' [' + ISNULL([FieldName] + '],','')
FROM #temp_Data C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Rowlogcontents] IS NOT NULL FOR XML PATH('')
),1,1,'') ,LEN(STUFF(
(SELECT ' [' + ISNULL([FieldName] + '],','')
FROM #temp_Data C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Rowlogcontents] IS NOT NULL FOR XML PATH('')
),1,1,'') )-1 --> -1 removes the final ','
) + ') VALUES (' +
LEFT(STUFF(
(SELECT ' ' + ISNULL([FieldSetValue] + ',','')
FROM #temp_Data C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Rowlogcontents] IS NOT NULL FOR XML PATH('')
),1,1,'') ,LEN(STUFF(
(SELECT ' ' + ISNULL([FieldSetValue] + ',','')
FROM #temp_Data C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Rowlogcontents] IS NOT NULL FOR XML PATH('')
),1,1,'') )-1 --> -1 removes the final ','
) + ')'
WHEN 'INSERT' THEN
'DELETE FROM [' + @SchemaName_n_TableName + '] WHERE ' +
LEFT(STUFF(
(SELECT ' ' + ISNULL([Field],'')
FROM CTE C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Type]='N' FOR XML PATH('')
),1,1,'') ,LEN(STUFF(
(SELECT ' ' + ISNULL([Field],'')
FROM CTE C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Type]='N' FOR XML PATH('')
),1,1,'') )-4 --> -4 removes the final ' AND'
)
END AS [Update Statement]
,[Slot ID]
,[TransactionID]
,[Rowlogcontents]
,[A].[NonID]
,MAX([A].[nullbit])+1 AS [nullbit]
,[A].[TRANSACTION ID]
,[A].[Transaction Name]
,[A].[Datum]
,[A].[Current LSN]
,[A].[Begin Time]
FROM CTE A
GROUP BY [Slot ID]
,[TransactionID]
,[Rowlogcontents]
,[A].[NonID]
,[A].[TRANSACTION ID]
,[A].[Transaction Name]
,[A].[Begin Time]
,[A].[Current LSN]
,[A].[Datum]
)

INSERT INTO #temp_Data
SELECT 'Update Statement',ISNULL([Update Statement],''),NULL,NULL,[TransactionID],[Slot ID],[NonID],[nullbit],[TRANSACTION ID],[Transaction Name],[Datum],[Current LSN],[Begin Time]
FROM CTEUpdateQuery
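/*
Aside (illustrative only; run separately): the STUFF(... ,1,1,'') plus
LEFT(... , LEN(...) - n) pattern used above is the classic pre-STRING_AGG
way to concatenate rows and trim the leading/trailing separators; the v(c)
derived table below is a made-up sample:

    SELECT LEFT(x, LEN(x) - 2)
    FROM (SELECT STUFF((SELECT ' ' + c + ' , '
                        FROM (VALUES ('[a]=1'), ('[b]=2')) v(c)
                        FOR XML PATH('')), 1, 1, '') AS x) t
    -- the SET-list fragments joined, without the trailing ' , '
*/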

/****************************************/
IF (@Debug > 0) BEGIN
print 'select * from CTE1'
select 'CTE1'
select * from (
/*This part is for variable data columns*/
SELECT
A.[ID]
,[Rowlogcontents]
,[TransactionID]
,[Slot ID]
,[TRANSACTION ID]
,[Transaction Name]
,[Datum]
,[Current LSN]
,[Begin Time]
,[NAME]
,cols.leaf_null_bit AS nullbit
,leaf_offset
,ISNULL(syscolumns.length, cols.max_length) AS [length]
,cols.system_type_id
,cols.leaf_bit_position AS bitpos
,ISNULL(syscolumns.xprec, cols.precision) AS xprec
,ISNULL(syscolumns.xscale, cols.scale) AS xscale
,SUBSTRING([nullBitMap], cols.leaf_null_bit, 1) AS is_null
--Calculate the variable column size from the variable column offset array
,(CASE WHEN leaf_offset<1 AND SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0
THEN CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*(leaf_offset*-1))-1, 2))))
ELSE 0 END
) AS [Column value Size]

--Calculate the column length
,(CASE WHEN leaf_offset<1 AND SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0
THEN CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING ([ColumnOffsetArray], (2*(leaf_offset*-1))-1, 2))))
- ISNULL(NULLIF(CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE (SUBSTRING([ColumnOffsetArray], (2*((leaf_offset*-1)-1))-1, 2)))), 0), [varColumnStart]) --> If Null OR zero
ELSE 0 END
) AS [Column Length]

--Get the hexadecimal value from the RowLogContents.
--Hex value of a variable column = SUBSTRING(RowLogContents, [Column value Size] - [Column Length] + 1, [Column Length])
--This is your column's data, still as raw hex.
,CASE WHEN SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=1
THEN NULL
ELSE SUBSTRING(Rowlogcontents,
(
(CASE WHEN leaf_offset<1 AND SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0
THEN CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*(leaf_offset*-1))-1, 2))))
ELSE 0 END
)
-(
(CASE WHEN leaf_offset<1 AND SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0
THEN CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*(leaf_offset*-1))-1, 2))))
- ISNULL(NULLIF(CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*((leaf_offset*-1)-1))-1, 2)))), 0), [varColumnStart]) --> If Null OR zero
ELSE 0 END
))
)+1,(
(CASE WHEN leaf_offset<1 AND SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=0
THEN CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*(leaf_offset*-1))-1, 2))))
- ISNULL(NULLIF(CONVERT(SMALLINT, CONVERT(BINARY(2), REVERSE(SUBSTRING([ColumnOffsetArray], (2*((leaf_offset*-1)-1))-1, 2)))), 0), [varColumnStart]) --> If Null OR zero
ELSE 0 END
))
)
END AS hex_Value

FROM @DeletedRecords A
INNER JOIN sys.allocation_units allocunits ON
A.[AllocUnitId]=allocunits.[Allocation_Unit_Id]
INNER JOIN sys.partitions partitions ON
(allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
INNER JOIN sys.system_internals_partition_columns cols ON cols.partition_id = partitions.partition_id
LEFT OUTER JOIN syscolumns ON syscolumns.id = partitions.object_id AND syscolumns.colid = cols.partition_column_id
WHERE leaf_offset < 0

UNION
/*This part is for fixed data columns*/
SELECT
A.[ID]
,[Rowlogcontents]
,[TransactionID]
,[Slot ID]
,[TRANSACTION ID]
,[Transaction Name]
,[Datum]
,[Current LSN]
,[Begin Time]
,[NAME]
,cols.leaf_null_bit AS [nullbit]
,leaf_offset
,ISNULL(syscolumns.length, cols.max_length) AS [length]
,cols.system_type_id
,cols.leaf_bit_position AS bitpos
,ISNULL(syscolumns.xprec, cols.precision) AS xprec
,ISNULL(syscolumns.xscale, cols.scale) AS xscale
,SUBSTRING([nullBitMap], cols.leaf_null_bit, 1) AS is_null
,(SELECT TOP 1 ISNULL(SUM(CASE WHEN C.leaf_offset >1 THEN max_length ELSE 0 END),0)
FROM sys.system_internals_partition_columns C
WHERE cols.partition_id=C.partition_id AND C.leaf_null_bit<cols.leaf_null_bit
)+5 AS [Column value Size]
,syscolumns.length AS [Column Length]

,CASE WHEN SUBSTRING([nullBitMap], cols.leaf_null_bit, 1)=1
THEN NULL
ELSE SUBSTRING(Rowlogcontents,
(SELECT TOP 1 ISNULL(SUM(CASE WHEN C.leaf_offset >1 THEN max_length ELSE 0 END),0)
FROM sys.system_internals_partition_columns C
WHERE cols.partition_id =C.partition_id AND C.leaf_null_bit<cols.leaf_null_bit
)+5
,syscolumns.length)
END AS hex_Value
FROM @DeletedRecords A
INNER JOIN sys.allocation_units allocunits ON A.[AllocUnitId]=allocunits.[Allocation_Unit_Id]
INNER JOIN sys.partitions partitions ON
(allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
INNER JOIN sys.system_internals_partition_columns cols ON cols.partition_id = partitions.partition_id
LEFT OUTER JOIN syscolumns ON syscolumns.id = partitions.object_id AND syscolumns.colid = cols.partition_column_id
WHERE leaf_offset>0
) AS t1
order by [Begin Time],[ID],[nullbit]

print 'select * from CTE2'
select 'CTE2'
select * into #CTE from (
SELECT
QUOTENAME([Name]) + '=' + ISNULL([A].[FieldSetValue],'NULL') + ' , ' AS [Field]
----(CASE
---- -- VARCHAR; CHAR;
---- WHEN system_type_id In (167,175,189) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' ,' + ' '
---- -- NVARCHAR; NCHAR
---- WHEN system_type_id In (231,239) THEN QUOTENAME([Name]) + '=' + ISNULL('N''' + [A].[FieldValue] + '''','NULL') + ' ,' + ''
---- -- SMALLDATETIME; DATE; DATETIME; UNIQUEIDENTIFIER
---- WHEN system_type_id In (58,40,61,36) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' ,' + ' '
---- -- TINYINT; SMALLINT; INT; REAL; MONEY; FLOAT; BIT; DECIMAL; NUMERIC; SMALLMONEY; BIGINT
---- WHEN system_type_id In (48,52,56,59,60,62,104,106,108,122,127) THEN QUOTENAME([Name]) + '=' + ISNULL([A].[FieldValue],'NULL') + ' ,' + ' '
---- END) AS [Field]
,A.[Slot ID]
,A.[TransactionID] AS [TransactionID]
,'D' AS [Type] --> Different
,[A].[Rowlogcontents]
,[A].[NonID]
,[A].[nullbit]
,[A].[TRANSACTION ID]
,[A].[Transaction Name]
,[A].[Datum]
,[A].[Current LSN]
,[A].[Begin Time]
FROM #temp_Data AS [A]
INNER JOIN #temp_Data AS [B] ON
[A].[FieldName]=[B].[FieldName]
AND [A].[Slot ID]=[B].[Slot ID]
--And [A].[TransactionID]=[B].[TransactionID]+1
AND [A].[TRANSACTION ID]=[B].[TRANSACTION ID] --> new condition
AND [A].[TransactionID]<>[B].[TransactionID] --> new condition
AND [A].[Current LSN]=[B].[Current LSN] --> new condition
----AND [B].[TransactionID] =
---- (SELECT Min(Cast([TransactionID] AS INT)) AS [TransactionID]
---- FROM #temp_Data AS [C]
---- WHERE [A].[Slot ID]=[C].[Slot ID]
---- GROUP BY [Slot ID]
---- )
INNER JOIN sys.columns [D] ON [object_id] = object_id(''+@SchemaName_n_TableName+'')
AND A.[Fieldname] = D.[name]
WHERE ISNULL([A].[FieldValue],'') <> ISNULL([B].[FieldValue],'')

UNION ALL

SELECT
QUOTENAME([Name]) + '=' + ISNULL([A].[FieldSetValue],'NULL') + ' AND' AS [Field]
----(CASE
---- WHEN system_type_id In (167,175,189) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (231,239) THEN QUOTENAME([Name]) + '='+ ISNULL('N''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (58,40,61,36) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (48,52,56,59,60,62,104,106,108,122,127) THEN QUOTENAME([Name]) + '=' + ISNULL([A].[FieldValue],'NULL') + ' AND'
---- END) AS [Field]
,A.[Slot ID]
,A.[TransactionID] AS [TransactionID]
,'S' AS [Type] --> Same
,[A].[Rowlogcontents]
,[A].[NonID]
,[A].[nullbit]
,[A].[TRANSACTION ID]
,[A].[Transaction Name]
,[A].[Datum]
,[A].[Current LSN]
,[A].[Begin Time]
FROM #temp_Data AS [A]
INNER JOIN #temp_Data AS [B] ON [A].[FieldName] = [B].[FieldName]
AND [A].[Slot ID] = [B].[Slot ID]
--AND [A].[TransactionID] = [B].[TransactionID]+1
AND [A].[TRANSACTION ID]=[B].[TRANSACTION ID] --> new condition
AND [A].[TransactionID]<>[B].[TransactionID] --> new condition
AND [A].[Current LSN]=[B].[Current LSN] --> new condition
----AND [B].[TransactionID] =
---- (SELECT MIN(CAST([TransactionID] AS INT)) AS [TransactionID]
---- FROM #temp_Data AS [C]
---- WHERE [A].[Slot ID]=[C].[Slot ID]
---- GROUP BY [Slot ID]
---- )
INNER JOIN sys.columns [D] ON [object_id] = object_id('' + @SchemaName_n_TableName + '')
AND [A].[Fieldname] = D.[name]
WHERE ISNULL([A].[FieldValue],'') = ISNULL([B].[FieldValue],'')
----AND A.[TransactionID] NOT IN
---- (SELECT MIN(CAST([TransactionID] AS INT)) AS [TransactionID]
---- FROM #temp_Data AS [C]
---- WHERE [A].[Slot ID]=[C].[Slot ID]
---- GROUP BY [Slot ID]
---- )

UNION ALL --> new

SELECT --> new
QUOTENAME([Name]) + '=' + ISNULL([A].[FieldSetValue],'NULL') + ' AND' AS [Field]
----(CASE
---- WHEN system_type_id In (167,175,189) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (231,239) THEN QUOTENAME([Name]) + '=' + ISNULL('N''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (58,40,61,36) THEN QUOTENAME([Name]) + '=' + ISNULL('''' + [A].[FieldValue] + '''','NULL') + ' AND'
---- WHEN system_type_id In (48,52,56,59,60,62,104,106,108,122,127) THEN QUOTENAME([Name]) + '=' + ISNULL([A].[FieldValue],'NULL') + ' AND'
---- END) AS [Field]
,A.[Slot ID]
,A.[TransactionID] AS [TransactionID]
,CASE WHEN ([A].[Transaction Name] = 'INSERT') THEN 'N' ELSE 'O' END AS [Type] --> New or Old
,[A].[Rowlogcontents]
,[A].[NonID]
,[A].[nullbit]
,[A].[TRANSACTION ID]
,[A].[Transaction Name]
,[A].[Datum]
,[A].[Current LSN]
,[A].[Begin Time]
FROM #temp_Data AS [A]
INNER JOIN #temp_Data AS [B] ON
[A].[FieldName]=[B].[FieldName]
AND [A].[Slot ID]=[B].[Slot ID]
AND [A].[Transaction Name] IN ('INSERT','DELETE')
AND [A].[TransactionID]=[B].[TransactionID] --> self
INNER JOIN sys.columns [D] ON [object_id] = object_id(''+@SchemaName_n_TableName+'')
AND A.[Fieldname] = D.[name]
) as t2
select * from #CTE
order by [Begin Time],[Slot ID],[NonID],[nullbit]

print 'select * from CTEUpdateQuery'
select 'CTEUpdateQuery'
SELECT
CASE A.[Transaction Name]
WHEN 'UPDATE' THEN
'UPDATE [' + @SchemaName_n_TableName + '] SET ' +
LEFT(STUFF(
(SELECT ' ' + ISNULL([Field],'')
FROM #CTE B
WHERE A.[Slot ID]=B.[Slot ID] AND A.[TransactionID]=B.[TransactionID] AND B.[Type]='D' FOR XML PATH('')
),1,1,''), LEN(STUFF(
(SELECT ' ' + ISNULL([Field],'')
FROM #CTE B
WHERE A.[Slot ID]=B.[Slot ID] AND A.[TransactionID]=B.[TransactionID] AND B.[Type]='D' FOR XML PATH('')
),1,1,'') )-2 --> -2 removes the final ', '
) +

' WHERE ' +

LEFT(STUFF(
(SELECT ' ' + ISNULL([Field],'') + ' '
FROM #CTE C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Type]='S' FOR XML PATH('')
),1,1,'') ,LEN(STUFF(
(SELECT ' ' + ISNULL([Field],'') + ' '
FROM #CTE C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Type]='S' FOR XML PATH('')
),1,1,'') )-4 --> -4 removes the final ' AND'
)
WHEN 'DELETE' THEN
'INSERT INTO [' + @SchemaName_n_TableName + '] (' +
LEFT(STUFF(
(SELECT ' [' + ISNULL([FieldName] + '],','')
FROM #temp_Data C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Rowlogcontents] IS NOT NULL FOR XML PATH('')
),1,1,'') ,LEN(STUFF(
(SELECT ' [' + ISNULL([FieldName] + '],','')
FROM #temp_Data C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Rowlogcontents] IS NOT NULL FOR XML PATH('')
),1,1,'') )-1 --> -1 removes the final ','
) + ') VALUES (' +
LEFT(STUFF(
(SELECT ' ' + ISNULL([FieldSetValue] + ',','')
FROM #temp_Data C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Rowlogcontents] IS NOT NULL FOR XML PATH('')
),1,1,'') ,LEN(STUFF(
(SELECT ' ' + ISNULL([FieldSetValue] + ',','')
FROM #temp_Data C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Rowlogcontents] IS NOT NULL FOR XML PATH('')
),1,1,'') )-1 --> -1 removes the final ','
) + ')'
WHEN 'INSERT' THEN
'DELETE FROM [' + @SchemaName_n_TableName + '] WHERE ' +
LEFT(STUFF(
(SELECT ' ' + ISNULL([Field],'')
FROM #CTE C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Type]='N' FOR XML PATH('')
),1,1,'') ,LEN(STUFF(
(SELECT ' ' + ISNULL([Field],'')
FROM #CTE C
WHERE A.[Slot ID]=C.[Slot ID] AND A.[TransactionID]=C.[TransactionID] AND C.[Type]='N' FOR XML PATH('')
),1,1,'') )-4 --> -4 removes the final ' AND'
)
END AS [Update Statement]
,[Slot ID]
,[TransactionID]
,[Rowlogcontents]
,[A].[NonID]
,MAX([A].[nullbit])+1 AS [nullbit]
,[A].[TRANSACTION ID]
,[A].[Transaction Name]
,[A].[Datum]
,[A].[Current LSN]
,[A].[Begin Time]
FROM #CTE A
GROUP BY [Slot ID]
,[TransactionID]
,[Rowlogcontents]
,[A].[NonID]
,[A].[TRANSACTION ID]
,[A].[Transaction Name]
,[A].[Begin Time]
,[A].[Current LSN]
,[A].[Datum]
drop table #CTE
END

IF (@Debug > 0) BEGIN
print 'select * from temp_Data1'
select '#temp_Data1'
select * from #temp_Data order by [TransactionID],[nullbit]
END

--Create the column name in the same order to do pivot table.
DECLARE @FieldNames VARCHAR(MAX)
SET @FieldNames = STUFF(
(SELECT ','+CAST(QUOTENAME([Name]) AS VARCHAR(MAX))
FROM syscolumns
WHERE id = object_id(''+@SchemaName_n_TableName+'') FOR XML PATH('')
),1,1,'')

--Finally, pivot table and get the change history so one can get the data back.
--The [Update Statement] column will give you the query that you can execute for recovery.
-- NOTE: Need a placeholder type for [Execution Account] (else it will be int)
SET @sql = 'SELECT ROW_NUMBER() OVER (ORDER BY [LSN],[Datum] DESC) AS [RowNo],' + @FieldNames
+ ',[Datum],[Update Statement],[Transaction],-1 AS [Prev RowNo],[Date],[Execution Account],[TRANSACTION ID],[Slot ID],[LSN] '
+ 'INTO [@ResultTable] '
+ 'FROM (SELECT [FieldName],[FieldValue],[Transaction Name] AS [Transaction],[Datum] AS [Datum],[Begin Time] AS [Date]'
+',[TRANSACTION ID] AS [Execution Account],[TRANSACTION ID],[Slot ID],[Current LSN] AS [LSN] FROM #temp_Data) AS src '
+ 'PIVOT (Min([FieldValue]) FOR [FieldName] IN (' + @FieldNames + ',[Update Statement])) AS pvt '
+ 'ORDER BY [LSN],[Datum] DESC'

--> Create a table to hold the dynamic SQL results --> doesn't seem to work
DECLARE @sql1 VARCHAR(MAX)
SET @sql1 = 'Declare @result Table ([RecNo] INT NOT NULL IDENTITY(1,1) PRIMARY KEY,' + REPLACE(@FieldNames,',',' varchar(max),') + 'varchar(max),'
+ '[Datum] varchar(20),[Update Statement] varchar(max),[TRANSACTION ID] varchar(max),[Transaction] varchar(max)'
+',[Slot ID] int,[LSN] varchar(max),[Prev RowNo] int,[Date] varchar(20))'
--EXEC(@sql1)

EXEC sp_executesql @sql
IF OBJECT_ID (N'[@ResultTable]', N'U') IS NOT NULL
BEGIN
UPDATE [@ResultTable] SET [Prev RowNo] =
ISNULL((SELECT MAX(B.[RowNo]) AS [ID] FROM [@ResultTable] B
WHERE B.[Slot ID] = A.[Slot ID] AND B.[RowNo] < A.[RowNo]),0)
FROM [@ResultTable] A
UPDATE [@ResultTable] SET [Execution Account] =
(SELECT [Transaction Account] FROM #TransIdAllList B WHERE B.[Transaction ID] = A.[Transaction ID])
FROM [@ResultTable] A
SELECT * FROM [@ResultTable]
DROP TABLE [@ResultTable]
END

IF (@Debug > 0) BEGIN
print @FieldNames
print @sql1
print ''
print @sql
END

GO --> required for testing/debugging (dumps all local entities from above)
SET NOEXEC OFF

PS: Didn't understand a thing?!? That's fine… this one is not for the faint of heart… 😉

T-SQL Tuesday #26 – Second Chances

Everyone deserves a second chance, either to do something better or simply not to do it at all… so even in SQL we get a second chance…

This time Mr. David Howard (Blog | Twitter) gave us the opportunity to write again about a previous T-SQL Tuesday topic: either improve an old entry, or write one for a month you missed…

I decided to improve my script that sets up a Mirror with SQLCMD. In the previous version (#25), if the database had more than one data file you had to edit the script by hand to account for the extra file… In this version the script reads the backup file list into a temp table and builds the restore command with all the files as variables…

Remember that you need to enable SQLCMD mode in SSMS, and don't forget to change the backup path…

:SETVAR PrincipalServer (local)\INST01
:SETVAR MirrorServer (local)\INST02
:SETVAR server (servername)
:SETVAR db DB_NAME
go

:ON ERROR EXIT
go

:CONNECT $(PrincipalServer)

ALTER DATABASE $(db)
 SET RECOVERY FULL
go

CREATE ENDPOINT Mirroring
 STATE=STARTED
 AS TCP (LISTENER_PORT=5091)
 FOR DATABASE_MIRRORING (ROLE=ALL)
GO

:CONNECT $(MirrorServer)

CREATE ENDPOINT Mirroring
 STATE=STARTED
 AS TCP (LISTENER_PORT=5092)
 FOR DATABASE_MIRRORING (ROLE=ALL)
GO

:CONNECT $(PrincipalServer)

BACKUP DATABASE $(db)
TO DISK = 'D:\DB01\local\$(db).bak'
WITH INIT
GO

:CONNECT $(MirrorServer)

DECLARE @InstanceName sql_variant,
 @InstanceDir sql_variant,
 @SQLDataRoot nvarchar(512),
 @ExecStr nvarchar(max)

SELECT @InstanceName = ISNULL(SERVERPROPERTY('InstanceName'), 'MSSQLServer')

EXECUTE master.dbo.xp_regread 'HKEY_LOCAL_MACHINE',
 'SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL',
 @InstanceName, @InstanceDir OUTPUT

SELECT @ExecStr = 'EXECUTE master.dbo.xp_regread '
 + '''HKEY_LOCAL_MACHINE'', '
 + '''SOFTWARE\Microsoft\Microsoft SQL Server\'
 + convert(varchar, @InstanceDir)
 + '\Setup'', ''SQLDataRoot'', @SQLDataRoot OUTPUT'

EXEC master.dbo.sp_executesql @ExecStr
 , N'@SQLDataRoot nvarchar(512) OUTPUT'
 , @SQLDataRoot OUTPUT

IF @SQLDataRoot IS NULL
BEGIN
 RAISERROR ('Cannot find where to restore the database... stopping...', 16, -1)
END

CREATE TABLE #BackupFileList
( LogicalName sysname NULL
 , PhysicalName sysname NULL
 , [Type] char(1)
 , FileGroupName sysname NULL
 , Size bigint
 , MaxSize bigint
 , FileId smallint
 , CreateLSN numeric(25,0)
 , DropLSN numeric(25,0)
 , UniqueId uniqueidentifier
 , ReadOnlyLSN numeric(25,0)
 , ReadWriteLSN numeric(25,0)
 , BackupSizeInBytes bigint
 , SourceBlockSize bigint
 , FileGroupId smallint
 , LogGroupGUID uniqueidentifier
 , DifferentialBaseLSN numeric(25,0)
 , DifferentialBaseGUID uniqueidentifier
 , IsReadOnly bit
 , IsPresent bit
 , TDEThumbPrint varbinary(32)
)

INSERT #BackupFileList
 EXEC('RESTORE FILELISTONLY FROM DISK = ''D:\DB01\local\$(db).bak''')

UPDATE #BackupFileList
 SET PhysicalName
 = @SQLDataRoot
 + N'\Data\'
 + REVERSE(SUBSTRING(REVERSE(PhysicalName)
 , 1, PATINDEX('%\%', REVERSE(PhysicalName)) -1))

DECLARE @LogicalName sysname
 , @PhysicalName sysname

DECLARE FileListCursor CURSOR FAST_FORWARD FOR
 SELECT LogicalName, PhysicalName
 FROM #BackupFileList

OPEN FileListCursor

FETCH NEXT FROM FileListCursor INTO @LogicalName, @PhysicalName

SELECT @ExecStr = N'RESTORE DATABASE $(db)' +
 N' FROM DISK = ''D:\DB01\local\$(db).bak''' +
 N' WITH MOVE ''' + @LogicalName + N''' TO ''' + @PhysicalName + N''''

FETCH NEXT FROM FileListCursor INTO @LogicalName, @PhysicalName

WHILE @@FETCH_STATUS <> -1
BEGIN
 SELECT @ExecStr = @ExecStr + N', MOVE ''' + @LogicalName
 + ''' TO ''' + @PhysicalName + ''''
 FETCH NEXT FROM FileListCursor INTO @LogicalName, @PhysicalName
END

SELECT @ExecStr = @ExecStr + N' , NORECOVERY, REPLACE'

-- If you get an error because the script could not figure out where to restore,
-- you can uncomment the next line to print the restore command and run it manually...
-- SELECT @ExecStr

EXEC (@ExecStr)

DEALLOCATE FileListCursor
GO

:CONNECT $(PrincipalServer)
SELECT DATABASEPROPERTYEX(N'$(db)', N'Status') AS 'Principal Database State' -- Returns ONLINE
SELECT db_name(sd.[database_id]) AS [Database Name],
 sd.mirroring_guid,
 sd.mirroring_state,
 sd.mirroring_state_desc,
 sd.mirroring_partner_name,
 sd.mirroring_witness_name,
 sd.mirroring_witness_state,
 sd.mirroring_witness_state_desc,
 sd.mirroring_role,
 sd.mirroring_role_desc,
 sd.mirroring_role_sequence,
 sd.mirroring_safety_level,
 sd.mirroring_safety_level_desc,
 sd.mirroring_safety_sequence,
 sd.mirroring_failover_lsn
FROM sys.database_mirroring AS sd
WHERE sd.[database_id] = db_id(N'$(db)')
go

:CONNECT $(MirrorServer)
SELECT DATABASEPROPERTYEX(N'$(db)', N'Status') AS 'Mirror Database State' -- Returns RESTORING
SELECT db_name(sd.[database_id]) AS [Database Name],
 sd.mirroring_guid,
 sd.mirroring_state,
 sd.mirroring_state_desc,
 sd.mirroring_partner_name,
 sd.mirroring_witness_name,
 sd.mirroring_witness_state,
 sd.mirroring_witness_state_desc,
 sd.mirroring_role,
 sd.mirroring_role_desc,
 sd.mirroring_role_sequence,
 sd.mirroring_safety_level,
 sd.mirroring_safety_level_desc,
 sd.mirroring_safety_sequence,
 sd.mirroring_failover_lsn
FROM sys.database_mirroring AS sd
WHERE sd.[database_id] = db_id(N'$(db)')
go

:CONNECT $(PrincipalServer)

BACKUP LOG $(db)
TO DISK = 'D:\DB01\local\$(db)_Log.bak'
WITH INIT
GO

:CONNECT $(MirrorServer)

RESTORE LOG $(db)
FROM DISK = 'D:\DB01\local\$(db)_Log.bak'
WITH NORECOVERY
GO

ALTER DATABASE $(db)
 SET PARTNER = 'TCP://$(server):5091'
GO

:CONNECT $(PrincipalServer)

ALTER DATABASE $(db)
 SET PARTNER = 'TCP://$(server):5092'
GO

SELECT db_name(sd.[database_id]) AS [Database Name],
 sd.mirroring_guid,
 sd.mirroring_state,
 sd.mirroring_state_desc,
 sd.mirroring_partner_name,
 sd.mirroring_witness_name,
 sd.mirroring_witness_state,
 sd.mirroring_witness_state_desc,
 sd.mirroring_role,
 sd.mirroring_role_desc,
 sd.mirroring_role_sequence,
 sd.mirroring_safety_level,
 sd.mirroring_safety_level_desc,
 sd.mirroring_safety_sequence,
 sd.mirroring_failover_lsn
FROM sys.database_mirroring AS sd
WHERE sd.[database_id] = db_id(N'$(db)')
GO
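Once both partners are set, a quick sanity test is a manual failover and fail-back. This is a sketch, not part of the original script: only run it on a test pair, and only after the mirroring_state_desc queries above report SYNCHRONIZED.

```sql
:CONNECT $(PrincipalServer)
-- Swap roles: the mirror becomes the principal
ALTER DATABASE $(db) SET PARTNER FAILOVER
GO
-- After checking the application, fail back from the new principal (the former mirror)
:CONNECT $(MirrorServer)
ALTER DATABASE $(db) SET PARTNER FAILOVER
GO
```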

T-SQL Tuesday #025 – Tricks

This month Mr. Allen White (Blog | Twitter) asked people to share the tricks they use to make their job easier…

We have a lot of PowerShell code that helps improve the day, but, PowerShell… ahh… if you search on Google you will find plenty of people writing good scripts that can help you.

I wrote something nice, but in T-SQL, to set up database mirroring… Yes, I know, you can find a lot of blogs talking about how to build this kind of environment… but I am going to do it using SQLCMD inside Management Studio… why? Just for fun…

I will skip the endpoint creation because you already know how to do it…

The good part is here:


/* Change the principal server name and instance */
:SETVAR principal_server "SERVERNAME\INSTANCE1"
/* Change the mirror server name and instance */
:SETVAR mirror_server "SERVERNAME\INSTANCE2"
/* in this case I use the same server, but you can add one more setvar and add the other server */
:SETVAR server "SERVERNAME"
/* Database name to mirror */
:SETVAR db "DB_NAME"

:connect $(principal_server)

alter database $(db) set recovery full

backup database $(db)
to disk = 'D:\DB01\inst01\$(db).bak'
with compression, stats = 5

backup log $(db)
to disk = 'D:\DB01\inst01\$(db).trn'
with compression, stats = 5

go
:connect $(mirror_server)

restore database $(db)
from disk = 'D:\DB01\inst01\$(db).bak'
with file = 1,
move '$(db)' to 'D:\DB01\local\$(db).mdf',
move '$(db)_log' to 'D:\DB01\local\$(db).ldf',
norecovery, nounload, stats = 5

restore database $(db)
from disk = 'D:\DB01\inst01\$(db).trn'
with norecovery, nounload, stats = 5

alter database $(db)
set partner = 'TCP://$(server):5022'
go

:connect $(principal_server)

/* If you created an extra setvar for a second server, use it here instead */
alter database $(db)
set partner = 'TCP://$(server):5023'

 

Corrupting a SQL Database

Why would you need to create a corrupted database?

What do you need this kind of thing for?

You need to practice…

Even if you know what to do, it is important to train for complicated situations…

It is better to do a lot of trial and error in a controlled environment than in production… right?

So… first of all… I know that you know… but it never hurts to repeat it…

NEVER DO THIS IN PRODUCTION !!!

Well… with that in mind… let's get started…

1. Let's create a database:

CREATE DATABASE [corrompeu]
GO

2. Let's switch the database recovery model:

alter database corrompeu
set recovery full
GO

3. Now let's add a table:

use corrompeu
GO

create table vendas
(
vendasID int identity,
clienteID int default convert(int, 100000 * RAND()),
vendaData datetime default getdate(),
vendaTotal money
)
GO

4. Let's add a clustered index for good measure…

create clustered index vendaCI on vendas (vendasID)
GO

5. Let's load some data…

set nocount ON
GO

declare @conta INT
select @conta = 0
while (@conta < 50000)
begin
    insert into vendas (vendaTotal)
    values (100*RAND())
    select @conta = @conta +1
end
GO

6. And let's take some backups

use master
GO

backup database corrompeu
to disk = 'd:\db01\local\corrompeu_1.bak'
with init
go

backup log corrompeu
to disk = 'd:\db01\local\corrompeu_2.trn'
go

7. Well… with the backups taken… let's look at the pages that were created…

dbcc ind ('corrompeu','vendas',1)
GO

8. Pick a page and put its number in place of XXXXX

DBCC TRACEON (3604)
GO
dbcc page('corrompeu',1,XXXXX,3)

You should see something like this:

The page I picked was 1:493. My vendasID on that page goes from record 24256 to 245000.

Now it starts getting fun…

9. Let's take the database offline:

alter database corrompeu
set offline
GO

10. Now a simple calculation…

select 493*8192
GO

That gives us the decimal offset of the page inside the .mdf file (each page is 8192 bytes).

11. With that number, use a hex editor to find the spot inside the .mdf file.

  • In the hex editor, open the .mdf file (in this case D:\DB01\Corrompeu.mdf).
  • Click "Find" and choose "Go To".
  • Choose the "DEC" option, type or paste the result of the calculation above, then click "HEX"; it will convert the value to hexadecimal.
  • Click "OK".
  • Change the byte where the cursor is blinking to 00 (zero-zero); it will turn red.
  • Save the file.

12. Now, back in SQL, let's bring the database online:

alter database corrompeu
set online
GO

13. Using DBCC CHECKDB, let's check whether the database is really corrupted…
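The check itself is a one-liner; WITH NO_INFOMSGS trims the output down to just the corruption errors:

```sql
DBCC CHECKDB ('corrompeu') WITH NO_INFOMSGS
GO
```

The damaged page should show up as a consistency error pointing at the page you edited.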

Nice… we have a corrupted database…

14. Let's try a select on the table… and there's the error…

Well… nice… we have a corrupted database… and now what?

Now it gets interesting… the goal is to make the database operational again without losing data… imagine this is your production database and that very table happens to be the payroll… how fun is that…

One tip: I tried the restore process on SQL Server Denali CTP1 and could not restore just the page; I had to drop it and reinsert the data from another database. I did the same page-only restore on SQL Server 2008 R2 and it worked without a problem.
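For reference, the page-only restore that worked on 2008 R2 looks roughly like this. It is a sketch based on the backups taken earlier in this post; on a real system you would also back up and restore the tail of the log before recovering:

```sql
USE master
GO
-- Restore only the damaged page (1:493 in my example) from the full backup
RESTORE DATABASE corrompeu PAGE = '1:493'
FROM DISK = 'd:\db01\local\corrompeu_1.bak'
WITH NORECOVERY
GO
-- Roll the page forward with the log backup, then recover
RESTORE LOG corrompeu
FROM DISK = 'd:\db01\local\corrompeu_2.trn'
WITH RECOVERY
GO
```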

If anyone needs help, just leave a comment…

UPDATE:

Here is the SkyDrive link with the database, the backup and the script for this example:

https://skydrive.live.com/?cid=5145b04265f2979d&sc=documents&id=5145B04265F2979D%21171#

TOP 5 – Free Tools

UPDATE !!! – 20/09/2012

This post is about free tools… so it is with great regret that I am removing SSMS Tools Pack from first place: as of version 2.5.0.0 it is no longer free, which goes against the whole point of the post…

I am replacing it with SSMSBoost.

Some people like to do everything the hard way… no help from anything… script upon script… That is really cool, and there are plenty of things that can only be solved that way…

The important thing is to know what is out there when you want "a little help", or to really make the day-to-day easier…

My TOP 5 free tools are:

  1. For those who like working in SSMS, a nice add-on used to be the SSMS Tools Pack developed by Mladen Prajdić, which adds some handy features such as history, snippets and a code generator… a small, neat tool… (No longer free; see the update above.) A very interesting replacement that adds features to SSMS is SSMSBoost: it brings very good resources such as snippets, an object locator, a title-bar customizer, and a really nice connection registry where you can flag production environments… it then warns you, depending on the command, when you need to pay attention before running a truncate table, for example… It is free, but with a catch: you have to reinstall it every 45 days (it is not a trial, just an annoyance the developer added)…
  2. Who has never gotten so angry at the SSMS execution plan viewer that they felt like dropping a database?… If you use SQL Sentry Plan Explorer at least once, you will not want to go without it… it shows, in an easy-to-understand way, which part of the plan is the most expensive for the operation… among other nice things…
  3. You absolutely cannot go without Who is Active, developed by Adam Machanic, and speaking of it, there is an add-on from Schema Solutions that adds a graphical interface for running the procedure.
  4. The first time I saw this tool I did not think it would be that useful, but SQL Trace Analyzer is quite interesting. It analyzes a Profiler capture from a file or a database and generates a consolidated report showing impact, time, CPU, IO, etc. And as a bonus it installs a Locks/Blocks monitor. The problem with this tool is all the paraphernalia it installs along with it, but you can remove the extras and keep only the main program.
  5. And there had to be some way to monitor what happens to the database… for that I found IgniteFree, a tool that is very simple to configure and full of relevant information. Of course the Trial/Full version has more options, but even the free version is a great tool. It is light, does not take much space, puts no pressure on the monitored machine, and on top of that can even monitor a few Oracle instances you may have lost track of in the environment…

Moving Database Files

In this post I will show how to move a database's files from one drive/directory to another.

Then you think:

"What for? I already know how to do that. Nothing new here." or

"Aahh… that's easy, I just use good old backup/restore or detach/attach…"

True, there is no big news in moving the files, but here comes the fun part: why do it the easy way when we can do it the interesting way?

1.- Backup/Restore

If we run the good old backup/restore, at restore time we can change the path where the files will be written; that part is simple.

We just have to run the backup which, depending on the size of the database, can take a few minutes or even hours. Besides that, we need space to store the backup, or have to run it against a backup device.

2.- Detach/Attach

Using detach we save some time: you just move the data and log files somewhere else and attach the database again.

The problem is that if you use Service Broker, or even Mirroring, you will have to redo that setup.

3.- Alter Database (the "interesting method")

The "method" below creates a procedure that moves a database to another location using ALTER DATABASE, and it can be run against several databases at once, as I will show:
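For a single database, the sequence the procedure automates boils down to this (a sketch with example names and paths; the OS-level file move happens while the database is offline):

```sql
-- Disconnect everyone and take the database offline
ALTER DATABASE banco1 SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE banco1 SET OFFLINE
-- ...move the .mdf/.ldf files at the OS level here...
-- Point the catalog at the new locations
ALTER DATABASE banco1 MODIFY FILE (NAME = banco1, FILENAME = 'c:\teste\banco1.mdf')
ALTER DATABASE banco1 MODIFY FILE (NAME = banco1_log, FILENAME = 'd:\teste\banco1_log.ldf')
ALTER DATABASE banco1 SET ONLINE
ALTER DATABASE banco1 SET MULTI_USER
```

The procedure below wraps exactly these steps, uses xp_cmdshell for the OS-level move, and loops over a comma-separated list of databases.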

Let's create 2 databases for testing:

create database banco1

create database banco2

Now run the script below to create the procedure that will move the files:

IF ( OBJECT_ID('dbo.sp_MoveDatabase') IS NOT NULL )
begin
DROP PROCEDURE dbo.sp_MoveDatabase
end
GO

create procedure sp_MoveDatabase

@NewDataFolder nvarchar(1000),
@NewLogFolder nvarchar(1000),
@DbList nvarchar(4000)

as
Begin
declare @DbTable table (lkey int identity (1,1) primary key, dbname nvarchar(100))
declare @FileTable table (lkey int identity (1,1) primary key, [name] nvarchar(100), physical_name nvarchar(1000), [type] int )
declare @sql nvarchar(4000)
declare @count int, @RowNum int
declare @DbName nvarchar(100)
declare @OldPath nvarchar(1000)
declare @Type int
declare @LogicalName nvarchar(100)
declare @ParmDefinition nvarchar(1000)
declare @FileName nvarchar(100)
declare @NewPath nvarchar(1000)
declare @ShowAdvOpt int
declare @XPCMD int

set nocount on;

if right(@DbList,1) = ','

Begin
print 'DbList must NOT end with ","'
return

End
declare @MyString NVARCHAR(100)
declare @Pos INT
declare @NextPos INT
declare @String NVARCHAR(4000)
declare @Delimiter NVARCHAR(1)

set @String = @DbList
set @Delimiter = ','
SET @String = @String + @Delimiter
SET @Pos = charindex(@Delimiter,@String)
WHILE (@pos <> 0)

BEGIN
SET @MyString = substring(@String,1,@Pos - 1)
insert into @DbTable (dbname) values (LTRIM(RTRIM(@MyString)))
SET @String = substring(@String,@pos+1,len(@String))
SET @pos = charindex(@Delimiter,@String)

END
set @ShowAdvOpt = cast(( select [value] from sys.configurations where [name] = 'show advanced options') as int)
set @XPCMD = cast(( select [value] from sys.configurations where [name] = 'xp_cmdshell') as int)
if right(@NewDataFolder,1) <> '\' or right(@NewLogFolder,1) <> '\'

Begin
print 'the new folder paths must end with \'
return
end
EXEC sp_configure 'show advanced options', '1'
RECONFIGURE

exec sp_configure 'xp_cmdshell' , '1'
RECONFIGURE

print 'NewMdfFolder is ' + @NewDataFolder
print 'NewLdfFolder is ' + @NewLogFolder

SET @RowNum = 1
SET @count = (select count(*) from @DbTable)
while @RowNum <= @count

Begin
select @DbName = DBName from @DbTable
where lKey = @RowNum
set @sql = 'select name, physical_name, type from ' + @DbName + '.sys.database_files'
insert into @FileTable
exec sp_executesql @sql

-- Kick out every connection by switching to single user with rollback immediate
set @sql = 'ALTER DATABASE [' + @DbName + '] SET SINGLE_USER WITH ROLLBACK IMMEDIATE'
print ''
print 'Executing line - ' + @sql
exec sp_executesql @sql

-- Take the database offline
set @sql = 'ALTER DATABASE [' + @DbName + '] SET OFFLINE;'
print ''
print 'Executing line - ' + @sql
exec sp_executesql @sql
select * from @FileTable
while @@rowcount > 0

begin
select top 1 @OldPath = physical_name, @Type = [type], @LogicalName = [name] from @FileTable

-- Move the physical files
set @FileName = (SELECT REVERSE(SUBSTRING(REVERSE(@OldPath), 0, CHARINDEX('\', REVERSE(@OldPath), 1))))
if @type = 0

begin
set @NewPath = @NewDataFolder + @FileName
end
else
begin

set @NewPath = @NewLogFolder + @FileName
end
set @Sql = 'EXEC master..xp_cmdshell ''MOVE "' + @OldPath + '" "' + @NewPath + '"'''
print ''
print 'Executing line - ' + @sql
exec sp_executesql @sql

-- Change the file path in the catalog
set @sql = 'ALTER DATABASE [' + @DbName + '] MODIFY FILE (NAME = ''' + @LogicalName + ''', FILENAME = ''' + @NewPath + ''')'
exec sp_executesql @sql
delete from @FileTable where [name] = @LogicalName
select * from @FileTable
end --while

set @sql = 'ALTER DATABASE [' + @DbName + '] SET ONLINE;'
print ''
print 'Executing line - ' + @sql
exec sp_executesql @sql
SET @RowNum = @RowNum + 1

-- Allow multiple users again.
set @sql = 'ALTER DATABASE [' + @DbName + '] SET MULTI_USER'
print ''
print 'Executing line - ' + @sql
exec sp_executesql @sql
end
exec sp_configure 'xp_cmdshell' , @XPCMD

reconfigure
EXEC sp_configure 'show advanced options', @ShowAdvOpt

RECONFIGURE
End --procedure

** WARNING **

The destination directories must already exist. If you run the procedure without creating the destination first, the script will change the database's file paths but will not move the files, and the database will throw an error when it is brought back online.

** WARNING 2 **

This script is of the "fire and forget" kind: when it runs, every connected user is disconnected without mercy and every transaction they are running is rolled back. Connections can only be re-established after the move finishes.

Run the procedure like this:

exec sp_MoveDatabase @NewDataFolder = 'c:\teste\', @NewLogFolder = 'd:\teste\', @DbList = 'banco1, banco2'

If everything goes well, banco1 and banco2 will be at their new destinations.
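If you want to double-check the result, sys.master_files shows the physical path of every file as the server now sees it (just a verification query, not part of the procedure):

```sql
SELECT db_name(database_id) AS [Database],
       name AS [Logical Name],
       physical_name AS [Physical Path]
FROM sys.master_files
WHERE database_id IN (db_id('banco1'), db_id('banco2'))
GO
```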