Importance Of Sqlctr.h File
Sep 30, 2004
Hi all -- there is a file in the C:\Program Files\Microsoft SQL Server\MSSQL\Binn directory called sqlctr.h which contains a lot of counter parameters. Could anyone tell me what its importance is, and can we change any of the parameters to gain performance?
Thanks in advance..
// This file is generated by the description file processor.
// Please do not edit.
#define BUFMGR_OBJECT 0
#define BUF_RESERVED_PAGE_COUNT 2
#define BUF_CHECKPOINT_WRITES 4
#define BUF_AWE_LOOKUP_MAPS 6
#define BUF_BLOCK_WRITES 8
#define BUF_COMMITTED_PAGE_COUNT 10
#define BUF_AWE_UNMAP_CALLS 12
#define BUF_TARGET_PAGE_COUNT 14
#define BUF_AWE_UNMAP_PAGES 16
#define BUF_CACHE_RATIO_BASE 18
#define BUF_FREELIST_STALLS 20
#define BUF_HASHED_PAGE_COUNT 22
#define BUF_LIFE_EXPECTANCY 24
#define BUF_CACHE_HIT_RATIO 26
#define BUF_AWE_WRITE_MAPS 28
#define BUF_PAGE_REQUESTS 30
#define BUF_STOLEN_PAGE_COUNT 32
#define BUF_BLOCK_READS 34
#define BUF_NUM_FREE_BUFFERS 36
#define BUF_LAZY_WRITES 38
#define BUF_READAHEAD_PAGES 40
#define BUF_AWE_STOLEN_MAPS 42
#define BUF_PROCCACHE_SIZE 44
#define BUFPART_OBJECT 46
#define BUFPART_NUM_FREE_BUFFERS 48
#define BUFPART_FREE_BUFFERS_USED 50
#define BUFPART_FREE_BUFFERS_EMPTY 52
#define GENERAL_OBJECT 54
#define GO_LOGINS 56
#define GO_LOGOUTS 58
#define GO_USER_CONNECTIONS 60
#define LOCKS_OBJECT 62
#define LCK_TOTAL_WAITTIME 64
#define LCK_NUM_WAITS 66
#define LCK_AVERAGE_WAITTIME_BASE 68
#define LCK_NUM_DEADLOCKS 70
#define LCK_NUM_TIMEOUTS 72
#define LCK_NUM_REQUESTS 74
#define LCK_AVERAGE_WAITTIME 76
#define DBMGR_OBJECT 78
#define DB_REPLTRANS 80
#define DB_DBCC_SCANRATE 82
#define DB_REPLCOUNT 84
#define DB_LOG_SIZE 86
#define DB_LOG_TRUNCS 88
#define DB_LOG_USED_PERCENT 90
#define DB_LOG_SHRINKS 92
#define DB_BULK_KILOBYTES 94
#define DB_FLUSH_WAIT_TIME 96
#define DB_ACT_XTRAN 98
#define DB_LOGCACHE_READS 100
#define DB_FLUSH_WAITS 102
#define DB_BCK_DB_THROUGHPUT 104
#define DB_DBCC_MOVERATE 106
#define DB_LOG_GROWTHS 108
#define DB_TOTAL_XTRAN 110
#define DB_LOGCACHE_BASE 112
#define DB_BYTES_FLUSHED 114
#define DB_LOG_USED 116
#define DB_LOGCACHE_RATIO 118
#define DB_DATA_SIZE 120
#define DB_BULK_ROWS 122
#define DB_FLUSHES 124
#define LATCH_OBJECT 126
#define LATCH_TOTAL_WAIT_NP 128
#define LATCH_WAITS_NP 130
#define LATCH_AVG_WAIT_NP 132
#define LATCH_AVG_WAIT_BASE 134
#define ACCESS_METHODS_OBJECT 136
#define AM_EXTENTS_ALLOCATED 138
#define AM_WORKTABLES_CREATED 140
#define AM_GHOSTED_SKIPS 142
#define AM_FULL_SCAN 144
#define AM_PAGES_ALLOCATED 146
#define AM_PAGE_SPLITS 148
#define AM_SINGLE_PAGE_ALLOCS 150
#define AM_EXTENTS_DEALLOCATED 152
#define AM_PROBE_SCAN 154
#define AM_FREESPACE_PAGES 156
#define AM_WORKTABLES_FROM_CACHE_BASE 158
#define AM_LOCKESCALATIONS 160
#define AM_PAGE_DEALLOCS 162
#define AM_WORKTABLES_FROM_CACHE 164
#define AM_INDEX_SEARCHES 166
#define AM_FREESPACE_SCANS 168
#define AM_FORWARDED_RECS 170
#define AM_WORKFILES_CREATED 172
#define AM_SCAN_REPOSITION 174
#define AM_RANGE_SCAN 176
#define SQL_OBJECT 178
#define SQL_AUTOPARAM_REQ 180
#define SQL_BATCH_REQ 182
#define SQL_RECOMPILES 184
#define SQL_AUTOPARAM_UNSAFE 186
#define SQL_COMPILES 188
#define SQL_AUTOPARAM_FAIL 190
#define SQL_AUTOPARAM_SAFE 192
#define CACHE_OBJECT 194
#define CACHE_USE_COUNT 196
#define CACHE_HIT_RATIO_BASE 198
#define CACHE_OBJECT_COUNT 200
#define CACHE_HIT_RATIO 202
#define CACHE_PGS_IN_USE 204
#define MEMORY_OBJECT 206
#define MEMORY_MEMGRANT_MAXIMUM 208
#define MEMORY_CONNECTION_MEMORY 210
#define MEMORY_MEMGRANT_WAITERS 212
#define MEMORY_MEMGRANT_OUTSTANDING 214
#define MEMORY_SQL_CACHE_MEMORY 216
#define MEMORY_OPTIMIZER_MEMORY 218
#define MEMORY_LOCKS 220
#define MEMORY_SERVER_MEMORY 222
#define MEMORY_LOCKOWNERS_ALLOCATED 224
#define MEMORY_LOCK_MEMORY 226
#define MEMORY_LOCKS_ALLOCATED 228
#define MEMORY_SERVER_MEMORY_TARGET 230
#define MEMORY_LOCKOWNERS 232
#define MEMORY_MEMGRANT_ACQUIRES 234
#define USER_QUERY_OBJECT 236
#define QUERY_INSTANCE 238
#define REPLICATION_AGENT_OBJECT 240
#define RUNNING_INSTANCE 242
#define MERGE_AGENT_OBJECT 244
#define MERGE_CONFLICTS_INSTANCE 246
#define UPLOAD_INSTANCE 248
#define DOWNLOAD_INSTANCE 250
#define LOGREADER_AGENT_OBJECT 252
#define LOGREADER_LATENCY_INSTANCE 254
#define LOGREADER_TRANSACTIONS_INSTANCE 256
#define LOGREADER_COMMANDS_INSTANCE 258
#define DISTRIBUTION_AGENT_OBJECT 260
#define DISTRIBUTION_TRANS_INSTANCE 262
#define DISTRIBUTION_LATENCY_INSTANCE 264
#define DISTRIBUTION_COMMANDS_INSTANCE 266
#define SNAPSHOT_AGENT_OBJECT 268
#define SNAPSHOT_TRANSACTIONS_BCPED 270
#define SNAPSHOT_COMMANDS_BCPED 272
#define BACKUP_DEV_OBJECT 274
#define DB_BCK_DEV_THROUGHPUT 276
View 3 Replies
Mar 12, 2001
Hi,
I am using SQL 7.0 with SP2. I just started as a SQL DBA. I have a question here: what is the importance of SIDs? When we map SQL logins to database user IDs, how much attention do we have to pay to SIDs?
Please suggest a good article or share some suggestions...
Thanks!
View 1 Replies
View Related
Apr 27, 2007
Hi, I have an association rule mining model and I want to order the output by importance. Here is the SELECT statement:
SELECT
[RelatedOrder].[Order Line]
From
[RelatedOrder]
NATURAL PREDICTION JOIN
(SELECT (SELECT 888 AS [Product ID]) AS [Order Line]) AS t
thanx
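A hedged sketch of one way to do this, assuming the model is named [RelatedOrder]: each rule's importance is stored in the model content as MSOLAP_NODE_SCORE, so a content query can list the rules ordered by it (NODE_TYPE = 8 marks a rule in Microsoft Association Rules models):
SELECT NODE_DESCRIPTION, NODE_PROBABILITY, MSOLAP_NODE_SCORE
FROM [RelatedOrder].CONTENT
WHERE NODE_TYPE = 8
ORDER BY MSOLAP_NODE_SCORE DESC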
View 3 Replies
View Related
Jul 17, 2007
Hello Experts at Microsoft.
I am thinking of an easy way to explain importance to marketers without going into the math. This is what I came up with so far. Does this sound correct to you guys?
Reasoning:
IMPORTANCE = log10(Improvement)
Improvement = P(X & Y) / (P(X) * P(Y))
Improvement = (probability the 2 products are sold together) / (random chance the 2 products are sold together)
If the probability that the 2 products are sold together equals the random chance that they are sold together, then Improvement = 1, and log10(1) = 0.
IMPORTANCE SCORE    MEANING
-2 to -1            10 to 100 times less likely than random chance
-1 to 0             1 to 10 times less likely than random chance
0 to 1              1 to 10 times more likely than random chance
1 to 2              10 to 100 times more likely than random chance
2 to 3              100 to 1,000 times more likely than random chance
3 to 4              1,000 to 10,000 times more likely than random chance
4 to 5              10,000 to 100,000 times more likely than random chance
5 to 6              100,000 to 1,000,000 times more likely than random chance
6 to 7              1,000,000 to 10,000,000 times more likely than random chance
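A quick worked check under these definitions (numbers invented for illustration): suppose P(X & Y) = 0.2, P(X) = 0.4 and P(Y) = 0.5. Then Improvement = 0.2 / (0.4 * 0.5) = 1, so IMPORTANCE = log10(1) = 0 - the pair sells together exactly as often as random chance predicts. If P(X & Y) doubled to 0.4, Improvement would be 2 and the score about 0.3.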
View 1 Replies
View Related
Feb 14, 2008
I understand Mr. MacLennan's explanation provided at http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=282651&SiteID=1 and appreciate the time he took to explain how importance works. However, like the user with username "sang", I also ran the data in BI 2005 and got the same results listed by the aforementioned user. I did this using the following data:
donut   muffin
y       y
y       y
y       y
y       y
y       y
y       y
y       y
y       y
y       y
y       y
y       y
y       y
y       y
y       y
y       y
n       y
n       y
n       y
n       y
n       y
etc.
The rule muffin -> donut has an importance of -0.105302438, which is not the same as Mr. MacLennan's results. I tried switching the roles of a and b in a -> b and using different bases for the logarithms. I don't get the result of -0.105302438 with any of these. I also tried to calculate importance with a small data set I have, and I can't reproduce the results using Mr. MacLennan's explanation with that data set either. Any thoughts on the discrepancy?
View 5 Replies
View Related
Feb 21, 2008
Why do we use triggers in SQL Server 2005?
What is their importance?
And could someone give me some samples?
Please give me solutions.
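For illustration, a minimal sketch (table and column names invented, and it assumes an audit table dbo.SalaryAudit already exists): triggers run automatically when rows are inserted, updated, or deleted, which makes them useful for enforcing business rules and keeping audit trails. An AFTER UPDATE trigger that audits salary changes might look like:
CREATE TRIGGER trg_Employee_AuditSalary
ON dbo.Employee
AFTER UPDATE
AS
BEGIN
    -- inserted holds the new rows, deleted the old ones;
    -- log every row whose salary actually changed.
    INSERT INTO dbo.SalaryAudit (EmployeeID, OldSalary, NewSalary, ChangedAt)
    SELECT d.EmployeeID, d.Salary, i.Salary, GETDATE()
    FROM inserted AS i
    JOIN deleted AS d ON d.EmployeeID = i.EmployeeID
    WHERE i.Salary <> d.Salary;
END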
View 1 Replies
View Related
Apr 12, 2007
hi,
I have an exercise using association data mining.
My database has 350 records;
I use 90 records for the mining and it produces some rules, which I choose by the highest MSOLAP_NODE_SCORE,
but when I use select statements to check my result, 1 record matches the rule and 5 records do not;
for example:
rule: A=a, B=b -> C=c
select * from <my_table> where A='a' and B='b' and C='c'; ==> 1 record returned
select * from <my_table> where A='a' and B='b' and C<>'c'; ==> 5 records returned
C has 3 values: c1, c2, c;
with the second statement, C includes 2 c1 and 3 c2.
I don't understand how they work.
I want to choose some of the best rules to represent my database.
How can I use importance and probability to get the best rules?
With a database of 90 records and a database of 350 records, which values should I use for MINIMUM_PROBABILITY, MINIMUM_SUPPORT, MINIMUM_IMPORTANCE...?
When I choose rules, should I choose by importance or by probability?
thanks for your help
View 4 Replies
View Related
Jan 31, 2008
Is there a way to explicitly assign 'weights' or 'importance' factors to attributes and have that be considered by the association rules and decision trees algorithms during training? I would like to do so without preprocessing the data (in any case, I can't think of a way to assign weights through preprocessing to boolean attributes like 'smoker').
thanks
View 3 Replies
View Related
Mar 6, 2006
Can anyone tell me how the Business Intelligence Studio calculates the importance of a rule? I can't find the formula. I know some formulas, but the result in SQL Server is completely different.
Thanks!
View 12 Replies
View Related
Oct 19, 2005
Those of you who have installed SQL Server 2005 may have noticed that the installation creates several new Windows groups on the server. Do not underestimate the importance of these groups.
View 3 Replies
View Related
Nov 5, 2003
Does anybody know how to send an email using the xp_sendmail stored procedure with a HIGH importance setting for the message?
Thanks,
Dim
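As far as I know, xp_sendmail itself does not expose an importance parameter; Database Mail's sp_send_dbmail (SQL Server 2005 and later) does. A sketch, with the profile name and address invented:
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MailProfile',     -- hypothetical Database Mail profile
    @recipients = 'dba@example.com',
    @subject = 'Nightly job failed',
    @body = 'Check the job history for details.',
    @importance = 'High';              -- 'Low', 'Normal' or 'High'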
View 5 Replies
View Related
Dec 30, 2004
I am trying to search my data and sort the results by importance.
I'm using a MS Access database and my data (table1) looks like this:
Code:
ID NAME TEXT
1 Apples Good red apples
2 Bananas Fine yellow bananas
3 Yellow apples Great yellow apples
I want to search the data and get a result where the column "NAME" is more important than "TEXT". My SQL looks like this:
Code:
SELECT id,name,text,1 AS searchorder FROM table1 WHERE name LIKE '*yellow*'
UNION
SELECT id,name,text,2 AS searchorder FROM table1 WHERE text LIKE '*yellow*'
ORDER BY searchorder
The output is this:
Code:
ID NAME TEXT SEARCHORDER
3 Yellow apples Great yellow apples 1
2 Bananas Fine yellow bananas 2
3 Yellow apples Great yellow apples 2
So far so good - the ordering by importance works - but I do not get unique rows because of the searchorder column.
Can I fix my SQL so that I get unique rows, where the last line of "Yellow apples" does not appear, or am I lost in space?
Best regards,
Peter from Denmark
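One possible fix, sketched against the tables above: wrap the UNION in a subquery and keep only the best (lowest) searchorder per row. Depending on the Access version, the derived table may need square brackets instead of parentheses:
Code:
SELECT id, name, text, MIN(searchorder) AS best_order
FROM (
    SELECT id, name, text, 1 AS searchorder FROM table1 WHERE name LIKE '*yellow*'
    UNION
    SELECT id, name, text, 2 AS searchorder FROM table1 WHERE text LIKE '*yellow*'
) AS combined
GROUP BY id, name, text
ORDER BY MIN(searchorder);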
View 2 Replies
View Related
Sep 22, 2006
While repeatedly testing a package that deletes/inserts into several tables over the course of several days, my package, which originally took 45 minutes to load 1700 XML files, began to take over 6 hours. It turned out to be an I/O bottleneck: the Avg. Disk Queue Length was around 200 and I was incurring many PAGEIOLATCH_EX waits. My devl machine uses a single local disk, no RAID, so I had no options there, but I ran the maintenance wizard to recreate indexes/statistics and defragged the hard drive, and regained my original 45-minute time. I guess I'll have to put a maintenance plan together to do this nightly.
-Kory
View 1 Replies
View Related
Sep 2, 2015
Currently I have a single hard-coded file path to the SSRS config file, which the query parses to provide the Reporting Services web service URL. My question is: how would I run this same query against hundreds of servers that may or may not share the same file path as the hard-coded one?
Is there a way to query the registry to find the location of the config file on any server? It could be on D, E, F, H, etc.
I know I can string together the address followed by "reports" and the named instance if needed, but some instances may not have used the default virtual directory name (Reports).
Am I going about this the hard way? Is there a location where the web service URL exists in a table? I could not locate anything in the Reporting Services database. Basically, I need to inventory all of my Reporting Services URLs.
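A rough sketch of the registry route using the undocumented xp_regread; the exact key depends on the Reporting Services version and instance name, so treat the one below as an assumption to verify on each server:
DECLARE @rsPath NVARCHAR(512);
EXEC master.dbo.xp_regread
    @rootkey = N'HKEY_LOCAL_MACHINE',
    @key = N'SOFTWARE\Microsoft\Microsoft SQL Server\MSRS13.MSSQLSERVER\Setup',  -- hypothetical key
    @value_name = N'SQLPath',
    @value = @rsPath OUTPUT;
SELECT @rsPath AS ReportingServicesInstallPath;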
View 2 Replies
View Related
May 10, 2007
I have a customer running RAID 5 on a Windows 2000 server, and one of the drives went bad. The customer replaced the drive and the RAID rebuilt it. Everything seemed to be fine, but there is one database file that cannot be attached to SQL. The file is 15 GB, so I know there is information in it; the error states that the file is not a primary file. Any clue on how to fix this?
mdf file size 5,738,944 KB
ldf file size 10,176 KB
View 4 Replies
View Related
Sep 2, 2007
Greetings, I have just arrived back into the country (NZ) and back into ASP.NET.
I am having trouble with the following: "An attempt to attach an auto-named database for file (file location).../Database.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share."
It has only begun since I decided I wanted to use IIS. I realise VWD comes with its own localhost, but since it is only temporary, I wanted a permanent shortcut on my desktop to link to my intranet page.
Anyone have any ideas why I am getting the above error? I have searched many places on the internet and am not getting any closer.
Cheers ~ J
View 3 Replies
View Related
Mar 1, 2006
How can I attach an mdf file with a corrupted/deleted log file?
My log file is deleted; how can I attach the mdf file?
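If the database was shut down cleanly, one approach (SQL Server 2005 and later) is to attach the mdf and let the engine build a fresh log; the database name and path here are invented:
CREATE DATABASE MyDb
ON (FILENAME = N'C:\Data\MyDb.mdf')
FOR ATTACH_REBUILD_LOG;
If the mdf needs the missing log for crash recovery, the attach will fail, and restoring from backup is the safer route.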
View 1 Replies
View Related
Jun 18, 2015
I am testing some maintenance tasks sql commands such as index rebuild, index reorg, update statistics and db integrity check on a SQL Server 2014 Database. This is a new non-production vendor database (DB Size 500 GBs, Log Size 25 GBs) which eventually will be created in production. Currently, it is in full recovery model and without log backups. The database has a whole lot of indexes. I am just trying to rebuild and reorganize all the indexes (that need it), in addition to trying to get an idea of how long these maintenance task will take and the space needed in the log file to complete these tasks/commands. I would like to execute these tasks manually (the first time) to gather the duration and space required information. Eventually, I would probably schedule a weekly job to perform this maintenance.
I ran the index rebuild task on the database and noticed that the log file grew by over 50 GBs. I killed the process and truncated and shrunk the log file back down.
1. Does the index rebuild, index reorg, update statistics and db integrity check commands all use the log file?
2. Does Index Reorg have less impact on the log file than Index Rebuild?
3. Should a truncate log and shrink log file be performed after these maintenance commands?
4. Should a full database backup be performed after these maintenance commands? Or before the maintenance commands?
I have read and understand that shrinking is not good for the database (could lead to more fragmentation and more data file growth when data is added) and I know about rebuilding indexes when fragmentation is GT 30% and reorganizing indexes when fragmentation is GT 5% and LE 30%.
Since this is a non-production database maybe I should set the recovery model to simple, run the maintenance commands and leave the database in simple recovery model unless the vendor needs it in full recovery model for some unknown reason.
5. With the simple recovery model the log file should be reused in a circular manner and not grow during these maintenance tasks. Is this correct?
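For reference, hedged examples of the commands under discussion (object names invented); rebuild and reorganize differ mainly in how much log they generate at once:
ALTER INDEX ALL ON dbo.BigTable REBUILD;      -- one large operation; heavily logged in FULL recovery
ALTER INDEX ALL ON dbo.BigTable REORGANIZE;   -- always online; logged in many small transactions
UPDATE STATISTICS dbo.BigTable;               -- comparatively little log usage
DBCC CHECKDB (N'VendorDb') WITH NO_INFOMSGS;  -- reads via an internal snapshot rather than the user log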
View 3 Replies
View Related
Mar 10, 2008
Hello World,
I'm new to SSIS and would like a little assistance getting started, if possible...
Here is what I want to do:
Check if a file exists (C:\DTS Upgrade\File\xxx.txt) --->
Archive the file (C:\DTS Upgrade\Archive) --->
Check if the file has data (true or false)
AND/OR
If there are any good websites that have good directions, let me know.
Thanks in advance for your help!!!
View 5 Replies
View Related
Jul 31, 2014
I need to write a process to get the file size in KB and the record count of a file. I was planning on writing a C# console app that takes the file path and name as parameters; however, should I use a CLR?
I can't put a script in the SSIS package that brings the file down, because it has been deemed that we only use SSIS for file consumption.
View 1 Replies
View Related
Jul 6, 2015
For a database, we have 4 data files in a particular filegroup, and the file sizes are almost 70 GB each.
Will I run into any performance issues if I create/pre-allocate an additional data file in the same filegroup so that the existing files don't grow too much?
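Pre-allocating the file up front is generally the cheap part; a sketch with invented names and sizes:
ALTER DATABASE MyDb
ADD FILE (
    NAME = N'MyDb_Data5',                  -- logical name (invented)
    FILENAME = N'E:\Data\MyDb_Data5.ndf',  -- physical path (invented)
    SIZE = 70GB,
    FILEGROWTH = 0                         -- fully pre-allocated, no autogrow
) TO FILEGROUP [DataFG];                   -- existing filegroup (invented name)
New rows then spread across the files under SQL Server's proportional-fill algorithm; writes right after adding the file tend to favor the emptiest file, which is the main behavior to watch.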
View 5 Replies
View Related
Jun 23, 2015
I have a package in which there is only one Data Flow Task, and it has only three components: 1) a source, which is a SQL db, 2) a destination, and 3) a flat-file error output on the OLE DB Destination. I want the error file to be created ONLY if there is an error while dumping the data into the destination DB. But the issue is that the error flat file is being created despite there being no error while dumping the data from source to destination.
View 5 Replies
View Related
May 14, 2015
I'm copying files to a folder with the naming convention as follows in the source folder:
CM_ABC_MY_TEST.txt
In the destination folder, this filename needs to appear as:
CM_XYZ_MY_TEST.txt
In my File System Task, I'm pretty sure I'm going to need an expression with a replace, substring, etc., but I am having a hard time nailing down the exact syntax.
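A minimal sketch, assuming the incoming name is held in a variable (User::SourceFileName, invented here, along with User::DestinationFolder): the SSIS expression function REPLACE(string, search, replacement) covers this case without SUBSTRING gymnastics. The destination-path expression could be:
@[User::DestinationFolder] + "\\" + REPLACE(@[User::SourceFileName], "_ABC_", "_XYZ_")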
View 10 Replies
View Related
Jul 24, 2015
I need to know how I can get the dynamic filename created in the Flat File destination, for insert into a package audit table.
Scenario: I have created a package that successfully outputs dynamically named flat files { Format: C:\Test\ + 'Comms_File_' + User::FileNumber + '_' + Date + '.txt'
E.g.: Comms_File_1_20150724.txt, Comms_File_2_20150724.txt etc. } using a Foreach Loop Container:
* Enumerator set to "Foreach ADO Enumerator", with the ADO object source variable selected to identify how many total loop iterations there are, i.e. let's say 4, thus 4 files to be created.
* Variable Mappings: added the User::FileNumber variable, which indicates which file number the current loop iteration is, i.e. 1, 2, 3, 4.
For the Data Flow task I have an OLE DB Source and a Flat File Destination, where the Flat File ConnectionString is set up as:
@[User::Output_Path] + "\\Comms_File_" + @[User::FileNumber] + "_" + REPLACE((DT_WSTR, 10) (DT_DBDATE) GETDATE(), "-", "") + ".txt"
All this successfully creates these 4 files:
Comms_File_1_20150724.txt, Comms_File_2_20150724.txt, Comms_File_3_20150724.txt, Comms_File_4_20150724.txt
Now the QUESTION is: how do I get these filenames, as I need to insert them into a DB audit table? The audit table looks like this:
CREATE TABLE dbo.MMMAudit
(
AuditID INT IDENTITY(1, 1) NOT NULL,
PackageName VARCHAR(100) NULL,
FileName VARCHAR(100) NULL,
LoadTime DATETIME NULL,
NumberofRecords INT NULL
)
To save the filename & how many records are in each file to our audit table, I am using an Execute SQL Task and configuring it like this:
Execute SQL Task
Parameter mapping - mapped the user variable (RecordsInserted) and the system variable (PackageName) to the insert statement as shown below
SQLStatement: INSERT INTO [dbo].[MMMAudit]
(PackageName, NumberofRecords, LoadTime)
VALUES (?, ?, GETDATE())
Again, this all works terrific & populates the dbo.MMMAudit table as shown below, BUT I also need to insert the respective file name - how do I do that?
AuditID PackageName FileName NumberOfRecords
1 MMM NULL 12
2 MMM NULL 23
3 MMM NULL 14
4 MMM NULL 1
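One way to close the gap, as a sketch: assign the same expression used for the flat-file ConnectionString to a string variable inside the loop (say User::FileName, invented here), map it as an extra parameter in the Execute SQL Task, and extend the insert to:
INSERT INTO [dbo].[MMMAudit] (PackageName, FileName, NumberofRecords, LoadTime)
VALUES (?, ?, ?, GETDATE())
with parameter 0 mapped to System::PackageName, parameter 1 to User::FileName, and parameter 2 to User::RecordsInserted.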
View 2 Replies
View Related
Apr 24, 2007
Hi,
I am looking for tutorials about how to create dts et dtsx files.
Thanks for your help.
Arioule.
View 1 Replies
View Related
May 9, 2008
In my script task I have the following code. The task I'm trying to accomplish is:
If the filename on the FTP can be found in the local archive folder on the e: drive, then show the message "FileAlreadyThere" (I will ultimately change it to do nothing); if the filename on the FTP cannot be found in the local archive folder on the e: drive, then transfer the file to the local package folder on the d: drive.
While the script task was executing I watched it closely, and the problem I saw is this:
If some files on the FTP are already in the local archive folder and some are not, then the files which are already in the archive folder are dumped to the package folder; after that, the files which are not in the archive folder are dumped to the package folder. But I only want the new files on the FTP to be transferred to the package folder for further processing.
Then after this finished, I saw all the files in the package folder being refreshed one after another; after the first round of refreshes a second round started, and after the second round finished it stopped. I know they were refreshed because the 'Date Modified' of the files changed. And I saw the script task turn green.
I don't see how the code below produced this result. Is something wrong in the logic of the loop? Does anyone have any idea why it's behaving the way it is now, and how to change the code to accomplish what I want? Thanks a lot!!
--------------------------------------------------------------------------------
Imports System
Imports System.IO
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime
Public Class ScriptMain
    Public Sub Main()
        ' Build the FTP connection manager at run time.
        Dim cm As ConnectionManager = Dts.Connections.Add("FTP")
        cm.Properties("ServerName").SetValue(cm, "ftp2.name.com")
        cm.Properties("ServerUserName").SetValue(cm, "username")
        cm.Properties("ServerPassword").SetValue(cm, "password")
        cm.Properties("ServerPort").SetValue(cm, "21")
        cm.Properties("Timeout").SetValue(cm, "0")
        cm.Properties("ChunkSize").SetValue(cm, "1000") '1000 KB
        cm.Properties("Retries").SetValue(cm, "1")
        Dim ftp As FtpClientConnection = New FtpClientConnection(cm.AcquireConnection(Nothing))
        ftp.Connect()
        ftp.SetWorkingDirectory("/directory")
        Dim fileNames() As String
        Dim folderNames() As String
        ftp.GetListing(folderNames, fileNames)
        If fileNames Is Nothing Then
            MsgBox("NoFileOnFTP")
        Else
            Dim fileName As String
            For Each fileName In fileNames
                If File.Exists("c:\temp\" + fileName) Then
                    MsgBox("FileAlreadyThere")
                Else
                    ' Transfer only the current file. Passing the whole fileNames
                    ' array here re-downloads every file on each loop iteration,
                    ' which causes the repeated refreshes described above.
                    ftp.ReceiveFiles(New String() {fileName}, "c:\temp", True, True)
                End If
            Next
        End If
        ftp.Close()
    End Sub
End Class
View 3 Replies
View Related
Jan 2, 2008
Hey All,
Similar to a previous post (http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=244646&SiteID=1), I am trying to import data into a SQL Table.
I am trying to program a small application that will import product data obtained from suppliers via CD-ROM. One supplier in particular uses fixed-width columns, and the data looks like this:
Example of Data
0124015Apple Crate 32.12
0124016Bananna Box 12.56
0124017Mango Carton 15.98
0124018Seedless Watermelon 42.98
My Table would then have:
ProductID as int
Name as text
Cost as money
How would I go about extracting the data with an XML Format file? I am stumbling over how to tell it where to start picking up data for a specific column.
Is there any way that I could trim the Name column (i.e.: "Mango Carton " --> "Mango Carton")?
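A sketch of an XML format file for this layout, assuming the ID is the first 7 characters, the name the next 20, and the price runs to the end of the line; the LENGTH values are guesses to adjust to the real column widths:
<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="1" xsi:type="CharFixed" LENGTH="7"/>
    <FIELD ID="2" xsi:type="CharFixed" LENGTH="20"/>
    <FIELD ID="3" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="10"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="ProductID" xsi:type="SQLINT"/>
    <COLUMN SOURCE="2" NAME="Name" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="3" NAME="Cost" xsi:type="SQLMONEY"/>
  </ROW>
</BCPFORMAT>
It would be loaded with something like BULK INSERT dbo.Products FROM 'C:\data\products.txt' WITH (FORMATFILE = 'C:\data\products.xml') (table and paths invented). Fixed-width fields keep their trailing spaces, so the Name column can be cleaned afterwards with UPDATE dbo.Products SET Name = RTRIM(Name).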
I don't know if it makes any difference, but I've been calling SQL from my code by doing this:
Code in C# Form
SqlConnection connection = new SqlConnection(global::SQLClients.Properties.Settings.Default.ClientPhonebookConnectionString);
SqlCommand cmd = new SqlCommand();
cmd.CommandType = CommandType.Text;
// Parameterized to avoid SQL injection from the text box values.
cmd.CommandText = "INSERT INTO PhonebookTable(Name, PhoneNumber) VALUES(@name, @phoneNumber)";
cmd.Parameters.AddWithValue("@name", txtName.Text);
cmd.Parameters.AddWithValue("@phoneNumber", txtPhoneNumber.Text);
cmd.Connection = connection;
connection.Open();
cmd.ExecuteNonQuery();
connection.Close();
RefreshData();
I am running Visual Studio C# Express 2005 and SQL Server Express 2005.
Thanks for your time,
Hayden.
View 1 Replies
View Related
Apr 14, 2014
I need to find out the number of columns in a flat file before I process that particular file. I have the file name in the @filename variable and the file path in the @filepath variable, but I do not know how I can check the column names before I process the file.
@filePath = C:DatabaseSourceFilesCAHCVSSourceFiles
And I am using a Foreach Loop container to read the files one by one and put the file name in the @filename variable. My file names look like:
Product_20120607060930.txt
Product_20130708060930.txt
[code]....
Now what I have to do is make sure that ID, Name, City, County, Phone are present in the flat file. If they are not, then I have to send mail to the client saying that the file is not valid. I also need to calculate the size of the flat file.
View 4 Replies
View Related
Jun 16, 2014
The TEMPDB transaction log file keeps growing. The database server is new and the transaction log was pre-sized to 1 GB on installation. After installing a number of databases, the log file grew over a day to 38 GB. Issuing a manual checkpoint was the only way to free some space to allow it to be shrunk back to a usable size. The usage of the file is still going up.
I am struggling to find what process is causing the log to be used so heavily. Looking at the log_reuse_wait_desc for tempdb returns "NOTHING", and tempdb itself isn't being used very much or growing in size.
View 9 Replies
View Related
Apr 27, 2015
On one server we had file growth, and then we had to add a new hard drive and a new data file on it. Now we have a new server with a huge hard drive, but all the files remain. Can I reduce these files to one data file or not?
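A sketch of the usual consolidation route (logical file and database names invented): empty each extra file into the rest of the filegroup, then drop it. The primary (mdf) file cannot be removed this way:
USE MyDb;
DBCC SHRINKFILE (N'MyDb_Data2', EMPTYFILE);   -- migrate this file's data to the other files
ALTER DATABASE MyDb REMOVE FILE MyDb_Data2;   -- drop the now-empty file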
View 3 Replies
View Related
Jul 23, 2005
Hi, I am trying to use BULK INSERT with a format file. All of our data has a few bytes of header in the data file which I would like to skip before doing the BULK INSERT.
Is it possible to write the format file to skip these few bytes of header before doing the BULK INSERT? For example, I have a 1 GB data file with a 1000-byte header. Except for the first 1000 bytes, the rest of the data is good for BULK INSERT.
Thanks in advance. Sorry if it is really a dumb question, as I am new to BULK INSERT and still practicing.
Bob
View 7 Replies
View Related
Jul 20, 2005
I have a production SQL Server 7 SP3 on Windows NT. I had an 8 GB data file of which 5 GB were used and 3 GB were unused. I wanted to take back the unused 3 GB.
So I did the following with the EM GUI:
1. I tried to "truncate free space from the end of the file". It didn't truncate the file. I believe there was no empty space at the end of the file.
2. Next I chose the option to "shrink file to 5 GB". And to my horror, the data file, instead of taking just 5 GB, took the empty spaces also, and the size of the used data file went to 8 GB.
Any idea what's going on?
TIA,
SP
View 2 Replies
View Related
Aug 26, 2015
I have an SSIS package where I need an Excel destination. In the Excel file, I need to have a few rows with some text and then populate data below the text. One of the text rows is like this:
Data as of: 08/25/2015
If the report ran today, then "Data as of" would show yesterday's date. So, if the user opens that Excel file after a week, the user should still see the same "Data as of: 08/25/2015", not today()-day(1).
I was planning to handle it on the Excel side with today()-day(1), but that only works on the day the report is run. If the Excel file is opened a few days later, it might show "Data as of: 08/30/2015", which is not true. It should still say "Data as of:
08/25/2015" on whatever date the Excel file is opened. The SSIS package runs only once.
How do I handle this so that whenever the user opens the file, they will see "Data as of: 08/25/2015"? This is not a column in Excel; it is like a description of the data in the Excel file.
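One way to sketch this: build the literal text once, at package run time, in a string variable and write that value into the sheet, so nothing in the workbook recalculates later. Assuming the "as of" date is yesterday relative to the run, the variable's expression could be:
"Data as of: " + (DT_WSTR, 10) (DT_DBDATE) DATEADD("day", -1, GETDATE())
This yields e.g. "Data as of: 2015-08-25"; the pieces can be reshuffled with SUBSTRING if the mm/dd/yyyy layout is required.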
View 3 Replies
View Related