Analyze LDF Files MS SQL Server 2000
Jul 20, 2005
Does anybody know of a tool to analyze LDF files in MS SQL Server 2000? I mean, a tool that converts an LDF file into a set of SQL transactions (similar to dbtran in Sybase)?
Thanks!
I want to analyze the procedure cache, to find inefficient plans and parameter issues.
I do it through the DMVs, but my DMV queries are very slow and resource-hungry because the procedure cache is several GB in size. I don't actually need online analysis.
Is it possible to take a fast snapshot of the procedure cache?
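One workable approach, assuming SQL Server 2005 and a snapshot table name of our own choosing (dbo.QueryStatsSnapshot): copy the plan statistics into a table once, then analyze the snapshot at leisure instead of re-scanning the multi-gigabyte cache with every query. Note the handles only resolve to text and plans while those plans are still cached.

-- one-time snapshot of cached-plan statistics (cheap compared to joining
-- the text/plan functions against the whole cache)
select getdate() as captured_at, qs.*
into dbo.QueryStatsSnapshot
from sys.dm_exec_query_stats as qs

-- later, resolve text and plan only for the rows worth investigating
select s.execution_count, s.total_logical_reads, st.text, qp.query_plan
from dbo.QueryStatsSnapshot as s
cross apply sys.dm_exec_sql_text(s.sql_handle) as st
cross apply sys.dm_exec_query_plan(s.plan_handle) as qp
where s.total_logical_reads > 100000  -- hypothetical threshold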
Hi. I want to convert .dbf files to SQL Server 2000 tables without using any tools. I need to create a different structure for the SQL Server tables than what the .dbf files contain. For example, a .dbf file may contain only 3 columns, but I need 5 columns, with some calculations to determine the values of some fields before inserting into the SQL Server table. I need to code this using C# in ASP.NET. Can you help me? Thanks in advance, Fraijo
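For what it's worth, SQL Server 2000 can often read .dbf files directly through the Jet OLE DB provider, which keeps the reshaping (3 columns in, 5 columns out) in a single INSERT...SELECT; the calculations could then live here rather than in the C# layer. A rough sketch under stated assumptions: the Jet provider is installed on the server, and the folder path, file name (customers.dbf), column names, and target table dbo.Customers5 are all hypothetical.

-- the connection-string format and 'dBase 5.0' driver name are assumptions
insert into dbo.Customers5 (Col1, Col2, Col3, Col4, Col5)
select d.Col1,
       d.Col2,
       d.Col3,
       d.Col2 * 1.1,                               -- hypothetical calculated field
       case when d.Col3 > 0 then 'Y' else 'N' end  -- another hypothetical derivation
from openrowset('Microsoft.Jet.OLEDB.4.0',
                'dBase 5.0;Database=C:\Data',
                'select * from customers') as d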
Someone:
I have the need to upload a file via a webpage and then save that file into the database. I would also like to retrieve it and show it to the user.
Can someone show me an example of how this should be done? Also, I am concerned about the pros and cons of saving files in the database. Is there a performance hit on the server? Will it make the database unstable? Has anyone had problems doing this?
Any suggestions, samples, and/or help are welcome.
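On the storage side, SQL Server 2000 keeps file contents in an image column; a minimal table sketch (all names here are our own), with the uploaded byte array passed from ASP.NET as a parameter of SqlDbType.Image:

create table dbo.UploadedFiles
(
FileId int identity(1,1) primary key,
FileName nvarchar(260) not null,
ContentType varchar(100) not null,  -- e.g. 'image/gif', needed to serve the file back
FileBytes image not null,
UploadedAt datetime not null default getdate()
)

As for the trade-offs: large blobs inflate the database and its backups and compete for buffer cache, but they stay transactionally consistent with the rest of the data; many designs store only a file path in the table for exactly that reason.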
I created some MDF/LDF files in SQL Server 7.0 containing my DB. I did not detach them from SQL Server 7.0 before physically formatting the hard drive. The SQL Server 7.0 data directory is still on the D drive; I only formatted the C drive. Before that, NT 4.0 and SQL Server 7.0 were running, but now I have installed Windows 2000 Server and SQL Server 2000 on this machine.
Is there any way to access the previous DBs (MDF/LDF) and put them into SQL Server 2000?
Thanking you in anticipation
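Since the data directory on the D drive survived the format, the usual route is sp_attach_db; SQL Server 2000 upgrades a 7.0 database automatically when it is attached. A sketch with a hypothetical database name and file paths (copy the MDF/LDF somewhere safe first, because once upgraded the files cannot go back to 7.0):

exec sp_attach_db
    @dbname = N'MyOldDb',
    @filename1 = N'D:\MSSQL7\Data\MyOldDb_Data.mdf',
    @filename2 = N'D:\MSSQL7\Data\MyOldDb_Log.ldf'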
Hi! I am importing .txt files. How can I check for errors? I have created a log file, but the problem is that I lose some characters. For example, I import a Code column with values like ABC, FZH, JHN from a text file, but sometimes Code can be 4 characters long, and I currently import it as 3 characters. When I add a text file with the same structure but some rows of length 4, it skips the last character, yet I get nothing in the log file.
Please help
xgirl
Hi All,
I am a new user of SQL Server 2000. Please point me to good online sources for becoming familiar with SQL Server 2000. I want to import the primary data file (MDF) and a log file (LDF), which are stored on two different floppy disks, into the local SQL Server 2000. Please direct me where I should start. Your help is greatly appreciated.
Thanks in advance
Lenka.
I just want to store a pic.bmp file in the server using the image datatype, but I don't understand how to do that.
I want to know about both actions: storing into and retrieving from the image datatype.
For example:
create table amit
(
im image null
);
insert into amit (im) values (0x424D0000);  -- a small binary literal implicitly converts to image
select * from amit;  -- returns the bytes as hex

Will it work?
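That works for small literals, but the bytes of an actual pic.bmp normally come from client code (for example ADO.NET with SqlDbType.Image) or the textcopy.exe utility; SQL Server 2000 has no built-in T-SQL way to read a file from disk into an image column. For reading or overwriting a large value in place, the text-pointer commands apply; a sketch against the amit table above, assuming it holds a single row:

declare @ptr varbinary(16), @len int
select @ptr = textptr(im), @len = datalength(im) from amit
readtext amit.im @ptr 0 @len        -- stream the whole value out
writetext amit.im @ptr 0x424D0000   -- overwrite the value from offset 0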
Hi
How do I store Flash files in the SQL Server 2000 database and display them back in an ASP.NET/C# user interface? Thanks
Hi! I have a large project that is due to complete this week. In order to complete it I need SQL Server 2000 installed on a remote server. My disk is corrupt, and ordering another media disk would endanger my deadline. I have the licence and serial key, but now need good install files. I am even ready to buy another retail box, if I can find a supplier that would give me a download site for the media while I wait for the shipment!
Please PLEASE help!
Regards,
Barry
I have a 500-MB full installation CD for SQL Server 2000. All I need is to install the "Client Connectivity" component (about 272K) on a bunch of workstations for users across the nation. How do I reduce the installation file size by eliminating most of the unwanted files?
Thanks.
Hi,
I'm trying to upgrade from SQL Server 2000 to 2005. The problem I am having is that when I try to attach the existing db files I get a message that says "database cannot be upgraded because it is read only or has read only files...."
Thing is... there is no write protection on the files.
Can anyone advise me on how to overcome this problem so that I can attach the db, please?
Thank you
Robert
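"Read-only" in that message usually refers to the OS level rather than SQL Server: the files may carry the read-only attribute (common when they were copied from a CD or restored by another account), or the SQL Server 2005 service account may lack write permission on them, so both are worth checking. If the database attaches but comes up read-only, it can be switched back; a sketch with a hypothetical name and path:

-- clear the attribute from the OS first if needed, e.g.: attrib -r "D:\Data\MyDb*.?df"
alter database MyDb set read_write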
Hello,
Please help me find solutions to these questions:
1. How do I store audio and video files in SQL Server 2000?
2. Is it possible to store and retrieve audio and video files using T-SQL?
3. Which is the most efficient way to store and retrieve audio and video files in SQL Server?
Your suggestions are most valued.
Thanks!
I have been asked to analyze a DB which was under a different team. I have done the following,
1. Identified tables without PKs and clustered indexes
2. Identified FKs without indexes
3. Orphaned users and weak passwords
4. Analyzed SPs for bad code
Is there anything else I can do? Are there any ready-made scripts I can use? (A starting-point sketch for the first two checks appears below.)
------------------------
I think, therefore I am - Rene Descartes
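For the first two checks, the catalog views can generate the lists directly; a starting-point sketch for SQL Server 2000 (unindexed FKs take a fiddlier join of sysforeignkeys against sysindexkeys and are left out here):

-- 1a. tables without a primary key
select t.table_schema, t.table_name
from information_schema.tables as t
where t.table_type = 'BASE TABLE'
  and not exists (select *
                  from information_schema.table_constraints as c
                  where c.constraint_type = 'PRIMARY KEY'
                    and c.table_schema = t.table_schema
                    and c.table_name = t.table_name)

-- 1b. user tables without a clustered index
select name
from sysobjects
where type = 'U'
  and objectproperty(id, 'TableHasClustIndex') = 0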
Hello, friends!
sorry for the stupid question:
I created a new index on a table and I'm looking for a command equivalent to Oracle's "ANALYZE TABLE ... COMPUTE STATISTICS"
to check whether that index is useful.
Thank you very much in advance.
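The closest equivalents are UPDATE STATISTICS (refresh the optimizer's statistics, the role ANALYZE ... COMPUTE STATISTICS plays in Oracle) and DBCC SHOW_STATISTICS (inspect the density and histogram); whether the index is actually useful shows up in the execution plan. A sketch with hypothetical table and index names:

update statistics MyTable                          -- recompute statistics for the table
dbcc show_statistics ('MyTable', 'IX_MyNewIndex')  -- density/histogram for the new index

set showplan_text on
go
select * from MyTable where KeyCol = 42  -- does the plan mention IX_MyNewIndex?
go
set showplan_text off
go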
I will be grateful if you could answer a few more questions around Analyzing Key Influencers
1. When specifying the training data for a Decision Tree, there is a SUGGEST button (Recommend inputs for currently set predictable) which recommends which inputs are related to the predictable attribute. It also gives a 'Score' for each recommended input. What algorithm does the SUGGEST button use? Does it use a simple entropy/correlation-based algorithm or sophisticated feature selection algorithms?
2. Can I access this 'Score' and the recommended inputs programmatically?
3. What feature selection algorithms are used in SQL Server 2005? Can they be invoked programmatically?
5. In the Logistic Regression mining model viewer, we get a chart which clearly shows which attributes favor which state of the predictable attribute. For example, income level < 23000 favors BikeBuyer = 0 (does not buy) with a score of 89.00. What algorithm is used to calculate the 'Score'? Can LR be used as a feature selector in cases where the predicted attribute is binary (select the attributes that favor one state or the other with a score greater than some threshold)?
6. You suggested using Naive Bayes to find AKIs. What if the input attributes are all continuous (predicted attribute binary)? Shouldn't I be going for LR?
Thanks bunches
MA
Excuse the elementary question; I am new to this feature.
No matter what dataset I use, I get the following error:
"The task was not able to detect any key influencers for the 'xxx' column. The values of 'xxx' seem unrelated to values of other columns."
Any ideas on what is happening here?
Here is the sample dataset:
Loan Amount    CLTV     transmitted
1,000          51.0%    1
5,000          52.0%    1
9,000          53.0%    1
13,000         54.0%    1
17,000         55.0%    1
21,000         56.0%    1
25,000         57.0%    1
29,000         58.0%    1
33,000         59.0%    1
37,000         60.0%    0
41,000         61.0%    0
45,000         62.0%    0
49,000         63.0%    0
53,000         64.0%    0
57,000         65.0%    0
61,000         66.0%    0
65,000         67.0%    0
69,000         68.0%    0
73,000         69.0%    0
77,000         70.0%    0
Thanks in advance.......
Hi,
Below is an Oracle query used for cost optimization purpose :
analyze table test estimate statistics sample 2500 Rows;
Is there any equivalent for the above in SQL Server ?
Please advise,
Thanks,
Sam
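The nearest SQL Server equivalent is UPDATE STATISTICS with an explicit sample clause:

update statistics test with sample 2500 rows
-- or, sampling by percentage / scanning everything:
update statistics test with sample 10 percent
update statistics test with fullscan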
Dear All,
I've used the DBCC SHOWCONTIG command against my table TABLE103,
but I don't know how to analyze the fragmentation results. Please give me some explanation or some good links.
the results are:
DBCC SHOWCONTIG scanning 'TABLE103' table...
Table: 'TABLE103' (1899257921); index ID: 1, database ID: 10
TABLE level scan performed.
- Pages Scanned................................: 20
- Extents Scanned..............................: 13
- Extent Switches..............................: 18
- Avg. Pages per Extent........................: 1.5
- Scan Density [Best Count:Actual Count].......: 15.79% [3:19]
- Logical Scan Fragmentation ..................: 90.00%
- Extent Scan Fragmentation ...................: 92.31%
- Avg. Bytes Free per Page.....................: 3281.4
- Avg. Page Density (full).....................: 59.46%
Vinod
Even you learn 1%, Learn it with 100% confidence.
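For reference, the two numbers to watch in that output are Scan Density (15.79% here; the ideal is 100%) and Logical Scan Fragmentation (90% here; the ideal is near 0%); combined with pages that are only 59.46% full, the index is heavily fragmented. The usual SQL Server 2000 fixes, with the IDs taken from the SHOWCONTIG header above:

dbcc dbreindex ('TABLE103', '', 90)   -- offline: rebuild all indexes with a 90% fill factor
-- or, to keep the table available while defragmenting:
dbcc indexdefrag (10, 'TABLE103', 1)  -- database ID 10, index ID 1, per the output above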
Hi All
I'm stuck with a slow-performing query. Could somebody please help me with how to analyze slow-performance queries?
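A common starting point is to measure the query's I/O and CPU and capture its execution plan; a sketch for Query Analyzer:

set statistics io on    -- logical/physical reads per table, in the Messages tab
set statistics time on  -- parse/compile and execution CPU and elapsed times
go
-- run the slow query here; press Ctrl+K beforehand to capture
-- the graphical execution plan as well
go
set statistics io off
set statistics time off
go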
Hi, the other day some data was deleted by mistake. The data we wanted to delete was in just one table, but we also deleted the related data in a couple more tables...
We do full backups every Sunday and a Differential every day, my question is:
Is there any way to analyze the backup file to compare the backed-up data with the data the table has now, and restore just some rows to the table by automatic means, or at least see the data so we can insert it manually?
Thanks!
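There is no supported way to read rows straight out of a .bak file, but the backups can be restored side by side under a new name and compared with ordinary queries. A sketch, assuming hypothetical file names and a single-column key; run RESTORE FILELISTONLY FROM DISK = '...' first to get the real logical file names:

restore database MyDb_Copy
from disk = 'D:\Backup\MyDb_Full.bak'
with move 'MyDb_Data' to 'D:\Data\MyDb_Copy.mdf',
     move 'MyDb_Log' to 'D:\Data\MyDb_Copy_log.ldf',
     norecovery

restore database MyDb_Copy
from disk = 'D:\Backup\MyDb_Diff.bak'
with recovery

-- put back only the rows that are now missing
insert into MyDb.dbo.MyTable
select c.*
from MyDb_Copy.dbo.MyTable as c
where not exists (select * from MyDb.dbo.MyTable as t
                  where t.KeyCol = c.KeyCol)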
I have a 300 MB DB, and Query Analyzer gives me the "... DB larger than configured..." error when I try to connect to it...
What is the work around?
Thanks in advance
JEK
Msg 245, Level 16, State 1, Procedure CompactL1RecordsFromfirstINTRAD, Line 177
Conversion failed when converting the varchar value
'DECLARE rows_cursor CURSOR FOR
SELECT ask, dateTimed FROM iDay_Compr_GOOG
WHERE DAY (dateTimed) = ' to data type int.
I get this message while executing a stored procedure. It works "halfway" but skips this statement and the whole block as a result. In the table @table_name, ask is declared as float and dateTimed as datetime. Similar statements work happily in other stored procedures with no problem. Even in the same stored procedure there are places where I use DAY(dateTimed) and they seem to work, as far as I can see, although it is very branched-out code with many IF ... ELSEs.
SET @SQL = 'DECLARE rows_cursor CURSOR FOR
SELECT ask, dateTimed FROM '+@table_name+'
WHERE DAY (dateTimed) = '+@day+' AND NOT ask = 0'
exec sp_sqlexec @SQL
OPEN rows_cursor
FETCH LAST FROM rows_cursor INTO @askQ, @lastTableDateTime
SET @SQL = 'UPDATE '+@table_name+' SET askSize = askSize + '+@askSize+'
WHERE dateTimed = '+@lastTableDateTime+' AND ask = '+@askQ
exec sp_sqlexec @SQL -- <== this is line 177
CLOSE rows_cursor
DEALLOCATE rows_cursor
Another puzzle for me is the line number. I used Edit --> Go To --> 177 to single the line out, and it seems to point to a different exec sp_sqlexec @SQL statement, the one further down.
I can make neither head nor tail of it. I am sure Jens, Cetin, Andrea, or whoever stumbles on this post will be able to figure it out. Anyone else's help will also be appreciated.
Many thanks.
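The error is very likely the + operator rather than DAY() itself: if @day is declared as int, then in '... WHERE DAY (dateTimed) = ' + @day the int wins the data-type precedence contest, so SQL Server tries to convert the long varchar to int, which is exactly the message shown. The same trap sits in the UPDATE string, where the datetime also needs quoting. A hedged rewrite of the two statements, assuming @day is int and @askSize/@askQ are float:

set @SQL = 'DECLARE rows_cursor CURSOR FOR
    SELECT ask, dateTimed FROM ' + @table_name + '
    WHERE DAY(dateTimed) = ' + convert(varchar(2), @day) + ' AND NOT ask = 0'
exec sp_sqlexec @SQL

set @SQL = 'UPDATE ' + @table_name + '
    SET askSize = askSize + ' + convert(varchar(30), @askSize) + '
    WHERE dateTimed = ''' + convert(varchar(23), @lastTableDateTime, 121) + '''
      AND ask = ' + convert(varchar(30), @askQ)
exec sp_sqlexec @SQL

On the line number: errors in a stored procedure are reported relative to CREATE PROCEDURE, and a failure raised while a string is being built can surface at a nearby statement, so the line Go To finds need not be the statement actually at fault.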
I have installed SP2 on my laptop and I have installed the latest Data Mining Add-Ins.
When I open the sample spreadsheet and select the table in the worksheet called "Table Analysis Tools Sample", I don't get the Analyze ribbon under Table Tools. Is there any reason for this?
I don't get the Data Mining option either! Is there something I have to do before those menus appear in the ribbon?
I have been through "Getting Started" and set the AS connection to the local AS Server.
Thanks
Sutha
Edit 2007-8-9:
Added code to show database file sizes. Not really closely related to table sizes, but many of the people who need this want to know why their database is so large, so it may help to know which files, especially the logs, are large, and whether the files have empty space in them.
-- Script to analyze table space usage using the
-- output from the sp_spaceused stored procedure
-- Works with SQL 7.0, 2000, and 2005
set nocount on
print 'Show Size, Space Used, Unused Space, Type, and Name of all database files'
select
[FileSizeMB]=
convert(numeric(10,2),sum(round(a.size/128.,2))),
[UsedSpaceMB]=
convert(numeric(10,2),sum(round(fileproperty( a.name,'SpaceUsed')/128.,2))) ,
[UnusedSpaceMB]=
convert(numeric(10,2),sum(round((a.size-fileproperty( a.name,'SpaceUsed'))/128.,2))) ,
[Type] =
case when a.groupid is null then '' when a.groupid = 0 then 'Log' else 'Data' end,
[DBFileName]= isnull(a.name,'*** Total for all files ***')
from
sysfiles a
group by
groupid,
a.name
with rollup
having
a.groupid is null or
a.name is not null
order by
case when a.groupid is null then 99 when a.groupid = 0 then 0 else 1 end,
a.groupid,
case when a.name is null then 99 else 0 end,
a.name
create table #TABLE_SPACE_WORK
(
TABLE_NAME sysname not null ,
TABLE_ROWS numeric(18,0) not null ,
RESERVED varchar(50) not null ,
DATA varchar(50) not null ,
INDEX_SIZE varchar(50) not null ,
UNUSED varchar(50) not null
)
create table #TABLE_SPACE_USED
(
Seq int not null
identity(1,1) primary key clustered,
TABLE_NAME sysname not null ,
TABLE_ROWS numeric(18,0) not null ,
RESERVED varchar(50) not null ,
DATA varchar(50) not null ,
INDEX_SIZE varchar(50) not null ,
UNUSED varchar(50) not null
)
create table #TABLE_SPACE
(
Seq int not null
identity(1,1) primary key clustered,
TABLE_NAME sysname not null ,
TABLE_ROWS int not null ,
RESERVED int not null ,
DATA int not null ,
INDEX_SIZE int not null ,
UNUSED int not null ,
USED_MB numeric(18,4) not null,
USED_GB numeric(18,4) not null,
AVERAGE_BYTES_PER_ROW numeric(18,5) null,
AVERAGE_DATA_BYTES_PER_ROW numeric(18,5) null,
AVERAGE_INDEX_BYTES_PER_ROW numeric(18,5) null,
AVERAGE_UNUSED_BYTES_PER_ROW numeric(18,5) null
)
declare @fetch_status int
declare @proc varchar(200)
select @proc = rtrim(db_name()) + '.dbo.sp_spaceused'
declare Cur_Cursor cursor local
for
select
TABLE_NAME=
rtrim(TABLE_SCHEMA)+'.'+rtrim(TABLE_NAME)
from
INFORMATION_SCHEMA.TABLES
where
TABLE_TYPE= 'BASE TABLE'
order by
1
open Cur_Cursor
declare @TABLE_NAME varchar(200)
select @fetch_status = 0
while @fetch_status = 0
begin
fetch next from Cur_Cursor
into
@TABLE_NAME
select @fetch_status = @@fetch_status
if @fetch_status <> 0
begin
continue
end
truncate table #TABLE_SPACE_WORK
insert into #TABLE_SPACE_WORK
(
TABLE_NAME,
TABLE_ROWS,
RESERVED,
DATA,
INDEX_SIZE,
UNUSED
)
exec @proc @objname =
@TABLE_NAME ,@updateusage = 'true'
-- Needed to work with SQL 7
update #TABLE_SPACE_WORK
set
TABLE_NAME = @TABLE_NAME
insert into #TABLE_SPACE_USED
(
TABLE_NAME,
TABLE_ROWS,
RESERVED,
DATA,
INDEX_SIZE,
UNUSED
)
select
TABLE_NAME,
TABLE_ROWS,
RESERVED,
DATA,
INDEX_SIZE,
UNUSED
from
#TABLE_SPACE_WORK
end --While end
close Cur_Cursor
deallocate Cur_Cursor
insert into #TABLE_SPACE
(
TABLE_NAME,
TABLE_ROWS,
RESERVED,
DATA,
INDEX_SIZE,
UNUSED,
USED_MB,
USED_GB,
AVERAGE_BYTES_PER_ROW,
AVERAGE_DATA_BYTES_PER_ROW,
AVERAGE_INDEX_BYTES_PER_ROW,
AVERAGE_UNUSED_BYTES_PER_ROW
)
select
TABLE_NAME,
TABLE_ROWS,
RESERVED,
DATA,
INDEX_SIZE,
UNUSED,
USED_MB=
round(convert(numeric(25,10),RESERVED)/
convert(numeric(25,10),1024),4),
USED_GB=
round(convert(numeric(25,10),RESERVED)/
convert(numeric(25,10),1024*1024),4),
AVERAGE_BYTES_PER_ROW=
case
when TABLE_ROWS <> 0
then round(
(1024.000000*convert(numeric(25,10),RESERVED))/
convert(numeric(25,10),TABLE_ROWS),5)
else null
end,
AVERAGE_DATA_BYTES_PER_ROW=
case
when TABLE_ROWS <> 0
then round(
(1024.000000*convert(numeric(25,10),DATA))/
convert(numeric(25,10),TABLE_ROWS),5)
else null
end,
AVERAGE_INDEX_BYTES_PER_ROW=
case
when TABLE_ROWS <> 0
then round(
(1024.000000*convert(numeric(25,10),INDEX_SIZE))/
convert(numeric(25,10),TABLE_ROWS),5)
else null
end,
AVERAGE_UNUSED_BYTES_PER_ROW=
case
when TABLE_ROWS <> 0
then round(
(1024.000000*convert(numeric(25,10),UNUSED))/
convert(numeric(25,10),TABLE_ROWS),5)
else null
end
from
(
select
TABLE_NAME,
TABLE_ROWS,
RESERVED=
convert(int,rtrim(replace(RESERVED,'KB',''))),
DATA=
convert(int,rtrim(replace(DATA,'KB',''))),
INDEX_SIZE=
convert(int,rtrim(replace(INDEX_SIZE,'KB',''))),
UNUSED=
convert(int,rtrim(replace(UNUSED,'KB','')))
from
#TABLE_SPACE_USED aa
) a
order by
TABLE_NAME
print 'Show results in descending order by size in MB'
select * from #TABLE_SPACE order by USED_MB desc
go
drop table #TABLE_SPACE_WORK
drop table #TABLE_SPACE_USED
drop table #TABLE_SPACE
CODO ERGO SUM
Hi gurus,
I've created a linked server (and set up the corresponding schema.ini file) in order to perform bulk inserts from some CSV text files into SQL tables (from my standpoint the text files are just for reading purposes). The linked server works fine (I can select the data in the files without a problem).
Now the question: is it possible to automatically detect when one or more of those files change, in order to start the import process automatically? Something like having a trigger created on the CSV files? Or is there no easy way to do that, so I have to, say, create a job that periodically checks programmatically whether the files have changed (recording each file's timestamp every time it is imported and comparing the recorded value with the current one, or whatever)?
Thanks a lot in advance!
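SQL Server 2000 has no trigger mechanism for flat files, so a polling job is the usual route. One low-tech sketch, assuming xp_cmdshell is available and a hypothetical folder: capture the dir listing into a table on each run and compare it with the previous capture; any file whose date/time or size line changed gets re-imported.

create table dbo.CsvDirSnapshot (DirLine nvarchar(400) null)

-- in the job step:
truncate table dbo.CsvDirSnapshot
insert into dbo.CsvDirSnapshot (DirLine)
exec master.dbo.xp_cmdshell 'dir /-c /t:w "C:\Import\*.csv"'
-- compare dbo.CsvDirSnapshot against a dbo.CsvDirPrevious copy of the last run;
-- lines that differ identify changed files, then copy the snapshot over the previous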
Here is a script I wrote that analyzes datasets and returns all the minimal composite and unary keys that uniquely identify records. I wrote it because I frequently have to analyze client spreadsheets and non-normalized data tables.
On my desktop server it took about two minutes to analyze 2000 permutations of a table with 50 columns and 5000 records.
Please try it out for me and let me know if it chokes on anything, or if you see any ways it could be improved!
I have installed the Excel DM add-in and am trying to work through the tutorials.
When I run the 'Analyze Key Influencers' tool against the sample data through a remote AS server, I get:
The task was not able to detect any key influencers for the 'Purchased Bike' column. The values of 'Purchased Bike' seem unrelated to values of other columns.
However, when I run it against a local AS server, I get the expected results.
I can see no differences in settings or setup between the AS instances I am trying to use - perhaps a permissions issue?
Thank you
I proposed on a new server that we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid-state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it did not work the same way as it did when you had RAID 5s, etc. I thought that if things were carried out correctly, a SAN administrator would know how to configure for optimal performance.
I have the need to delete old backup files via a T-SQL job. I found this solution online:
PushD "
emoteservershareDIFF" &&(
forfiles -m *DIFF*.sqb -d -1 -c "cmd /c del /q @path"
) & PopD
It works remotely if I run it via the command prompt. But when I add it to a T-SQL job on my remote SQL instance, it runs without deleting anything. What am I missing?
I have several hundred .csv files that have a specific cell that I need to get into a SQL table.
These files are named after the date on which they were created; i.e., 8252005 would be today's date.
I'm looking for a way to import this cell to SQL... the same cell in each file.
Thanks for any help
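One way without extra tools is a job that BULK INSERTs each file into a staging table and picks the cell out. Since a table does not preserve row position, this sketch assumes the target cell's row can be recognized by a value in it (a 'Total' label, say); the three-column layout, file path, marker value, and the dbo.DailyValues target are all hypothetical.

create table #staging
(
col1 varchar(100) null,
col2 varchar(100) null,
col3 varchar(100) null
)

declare @file varchar(260), @sql varchar(1000)
set @file = 'C:\Import\8252005.csv'  -- build the name from the date inside the job

set @sql = 'bulk insert #staging from ''' + @file
         + ''' with (fieldterminator = '','', rowterminator = ''\n'')'
exec (@sql)

insert into dbo.DailyValues (FileDate, CellValue)
select convert(datetime, '8/25/2005'), col2  -- the wanted cell = col2 of the marker row
from #staging
where col1 = 'Total'                         -- hypothetical marker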
I am interested in applying 7.0 log files to a 2000 database that has been previously restored from a 7.0 version database. Assuming the original database is one and the same, is this possible?