SQL 2012 :: Log File Viewer - Can See Only 1000 Records
Mar 9, 2013
Using the Log File Viewer in SQL auditing I can see only 1000 records. How can we see more than 1000 records, or earlier data?
I am trying to transfer 90 million records (250-byte row length) from Oracle 8i to SQL Server 2000 using DTS, and it is taking 2 seconds to transfer 1000 records. Is there any way I can transfer the 90 million records faster? At this rate it will take more than 10 hours.
Thanks,
Ranjan
Hi,
Currently we're building a metadata-driven data warehouse in SQL Server 2000. We're investigating the possibility of updating tables with an enormous number of updates and inserts, and the use of checkpoints (for simple recovery, and BACKUP LOG for full recovery). On several websites people speak about the transaction log filling up because its growth can't keep up with the updates. Therefore we want to create a script which flushes the dirty pages to disk. It's not quite clear to me how it works. The questions we have are:
* How does the process of updating, inserting and deleting work in SQL Server 2000 with respect to the log cache, log file, buffer cache, commit, checkpoint, etc.? What happens when?
* As far as I can see now, I'm thinking of creating chunks of 1000 records with a checkpoint after each query. SQL Server has implicit transactions by default, so it will not need a commit. Something like this?
* How do I create chunks of 1000 records automatically without creating an identity field or something? Is there something like SELECT NEXT 1000?
Greetz,
Hennie
I have a table of records containing CustomerID, SportsGoods, Price and purchase datetime. I want to add up each customer's spend, and if it crosses 1000 I have to show the purchase time at which it crossed 1000. I need a query without WHILE loops.
Example:
Customer Name  Goods         Price  DatePurchased
A              Bat           250    1/31/2014
A              Ball          22     1/31/2014
B              Carrom Board  475    2/2/2014
C              Tennis Ball   50     2/1/2014
A              Football      150    2/2/2014
D              Bat           250    1/31/2014
B              Ball          22     1/31/2014
A              Hockey Bat    125    2/4/2014
C              Chess         55     2/4/2014
A              Volley Ball   55     2/4/2014
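A minimal sketch for this, assuming a hypothetical table dbo.Purchases(CustomerName, Goods, Price, DatePurchased) and SQL Server 2012 or later for the windowed running total; because prices are positive the running total only grows, so the first row where it reaches 1000 gives the crossing date:

WITH running AS (
    SELECT CustomerName, Goods, Price, DatePurchased,
           SUM(Price) OVER (PARTITION BY CustomerName
                            ORDER BY DatePurchased
                            ROWS UNBOUNDED PRECEDING) AS TotalSpent
    FROM dbo.Purchases
)
SELECT CustomerName,
       MIN(DatePurchased) AS CrossedOn,      -- first purchase at which the total reached 1000
       MIN(TotalSpent)    AS TotalAtCrossing
FROM running
WHERE TotalSpent >= 1000
GROUP BY CustomerName;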
We just switched from SQL Server 2008 R2 to SQL Server 2012. I am facing one problem with identity columns: whenever I restart my SQL Server, the seed value for each identity column is increased by 1000 (for an int identity column it is 1000, and for a bigint it is 10000).
For example, if the seed value of a table was 3, then after restarting SQL Server it will be 1003; if I restart SQL Server again it will be 2003, and so on.
After searching on Google I found that this is a new behaviour in SQL Server 2012 (I don't know what the use of it is), and there are only two solutions if you want the old identity behaviour:
1. Use a sequence object -
a) I am using the same database in SQL Server 2008 and 2012, so I can't use a sequence in 2008.
b) If I go with sequences I would need to change the stored procedure for each table, which is a bulky task for us.
2. Use trace flag 272 (-T272)
I can go with this solution because it needs no changes in my application. Someone suggested that I add -T272 as a startup parameter, after which the identity columns would work as in previous versions. I did the same, but it is not working.
I don't want to make any changes to my database structure.
How do I use -T272, and why is it not working? I don't want this new identity behaviour; how do I suppress it?
After a SQL Server service restart, a column which is set to auto-increment jumped by 1000. To fix the issue, I had to add the -T272 trace flag to the SQL Server startup parameters. However, I did not see the column being reseeded after the service restart; it is still showing the 1000 jump. Am I doing something wrong?
Below is the log showing the flag being recorded in the error log:
LogDate                  ProcessorInfo  ErrorMSG
2015-06-15 22:29:53.850  Server         Registry startup parameters:
                                        -d E:\DATA\master.mdf
                                        -e E:\log\ERRORLOG
                                        -l E:\DATA\mastlog.ldf
                                        -T 272
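A hedged note for the two -T272 posts above: the trace flag only prevents future identity jumps; it does not reseed columns that have already jumped. A minimal sketch to verify that the flag is actually active and to reseed a column manually (the table dbo.MyTable and the reseed value 100 are hypothetical):

-- Check whether trace flag 272 is enabled globally after the restart.
DBCC TRACESTATUS (272, -1);

-- If a table has already jumped, reseed it explicitly (the next inserted value will be 101).
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 100);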
I am writing a query for the following: I need to collapse the continuity. If the termdate for an ID is one day less than the effdate of the next row (for the same ID), I need to collapse the records. See the example below. How should I write the query which will give me the desired output, i.e., get MIN(effdate) and MAX(termdate) when the termdate is one day less than the effdate of the next record?
ID effdate termdate
556868 1999-01-01 1999-06-30
556868 1999-07-01 1999-10-31
556869 2002-10-01 2004-01-31
556872 1999-02-01 2000-08-31
556872 2000-11-01 2004-01-31
556872 2004-02-01 2004-02-29
The output should be:
ID effdate termdate
556868 1999-01-01 1999-10-31
556869 2002-10-01 2004-01-31
556872 1999-02-01 2000-08-31
556872 2000-11-01 2004-02-29
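A minimal sketch, assuming a hypothetical table dbo.Enrollment(ID, effdate, termdate) and SQL Server 2012 or later for LAG and the windowed SUM. Rows whose effdate is exactly one day after the previous termdate (per ID) get the same island number, and each island is then collapsed with MIN/MAX:

WITH flagged AS (
    SELECT ID, effdate, termdate,
           CASE WHEN DATEDIFF(day,
                              LAG(termdate) OVER (PARTITION BY ID ORDER BY effdate),
                              effdate) = 1
                THEN 0 ELSE 1 END AS new_island
    FROM dbo.Enrollment
),
islands AS (
    SELECT ID, effdate, termdate,
           SUM(new_island) OVER (PARTITION BY ID ORDER BY effdate
                                 ROWS UNBOUNDED PRECEDING) AS island_no
    FROM flagged
)
SELECT ID, MIN(effdate) AS effdate, MAX(termdate) AS termdate
FROM islands
GROUP BY ID, island_no
ORDER BY ID, effdate;

Run against the sample rows above, this collapses 556868 into 1999-01-01 to 1999-10-31 and 556872 into two ranges, matching the desired output.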
I have a WinForms application, part of whose functionality is to show an RDLC report. When I try to launch the application it says that the ReportViewer assembly is missing, which was expected. I downloaded and tried to install the viewer runtime from here: [URL] .....
I received a message that the Microsoft SQL Server System CLR Types are not installed and must be installed first. I downloaded the appropriate installation from [URL] .... and it installed successfully. But when I try to run the viewer runtime installation it still says that the Microsoft SQL Server System CLR Types are not installed. What am I missing?
Is there any way to control this scenario? I know the trick to put 10 rows on each page, but I need to split them unevenly: 10 on the first page and the rest on the second page. Is that possible?
=Ceiling((RowNumber(Nothing)) / 10)
On the SQL Server, the Event Viewer shows the same messages and errors every evening between 22:05:00 and 22:08:00. The following informational messages are shown for every database:
"I/O is frozen on database <database name>. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup."
"I/O was resumed on database <database name>. No user action is required."
"Database backed up. Database: <database name>, creation date(time): 2003/04/08(09:13:36), pages dumped: 306, first LSN: 44:148:37, last LSN: 44:165:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{A79410F7-4AC5-47CE-9E9B-F91660F1072B}4'}). This is an informational message only. No user action is required."
After the 3 messages the following error message is shown for every database:
"BACKUP failed to complete the command BACKUP LOG <database name>. Check the backup application log for detailed messages."
I have added a Maintenance Plan but these jobs run after 02:00:00 at night.
Where can I find the command or setup which backs up all databases and log files at 22:00:00 in the evening?
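A minimal sketch for finding out what is taking those 22:00 backups; the VIRTUAL_DEVICE entries in the messages above usually point at a VSS-based or third-party backup tool (for example a VM or storage snapshot) rather than a SQL Server maintenance plan, and msdb records every backup together with the device it went to:

-- List recent backups with the device that took them.
-- device_type 7 / is_snapshot = 1 typically indicates a VSS or third-party snapshot backup.
SELECT bs.database_name,
       bs.backup_start_date,
       bs.type            AS backup_type,      -- D = full, L = log, I = differential
       bs.is_snapshot,
       bmf.device_type,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
     ON bmf.media_set_id = bs.media_set_id
WHERE bs.backup_start_date >= DATEADD(day, -7, GETDATE())
ORDER BY bs.backup_start_date DESC;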
Hello friends,
I have the following (simplified):
1. Flat File Source
2. Conditional Split, Case Good = !ISNULL(KEY) Case Error = ISNULL(KEY)
3. Case Good -> Writes to Good Flat File (with timestamp in the title)
4. Case Error -> Writes to Error Flat File (with timestamp in the title)
Most job runs have no errors but the error file is created as a zero byte file anyway. If there are no error records I don't want the error file created. How might I accomplish this?
Thanks
I'm on SSMS under Management --> Maintenance Plans --> (my transaction log backup plan)
I right click and choose "View History", at which point the Log File Viewer window opens and it just sits there... with the progress circle running, just saying "Initialize Log #2..."
After roughly 5 to 10 minutes it returns results.
Any suggestions on how I should resolve this or what to troubleshoot?
Thank you.
Does anyone know how to convert the number 1000 to appear as 1,000 in a SQL Statement?
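Two common sketches for this; FORMAT needs SQL Server 2012 or later, while the money/CONVERT style works on older versions but always adds two decimal places:

SELECT FORMAT(1000, 'N0');                               -- returns '1,000'
SELECT CONVERT(varchar(30), CAST(1000 AS money), 1);     -- returns '1,000.00'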
I need to write a process to get the file size in KB and the record count of a file. I was planning on writing a C# console app that takes the file path and name as parameters, but should I use a CLR assembly instead?
I can't put a script in the SSIS package that brings the file down, because it has been deemed that we only use SSIS for file consumption.
I need to find out the number of columns in a flat file before I process that particular file. I have the file name in the @filename variable and the file path in the @filepath variable, but I do not know how to check the column names before I process that file.
@filePath = C:DatabaseSourceFilesCAHCVSSourceFiles
And I am using a For Each Loop container to read the files one by one and put the file name in the @filename variable. My file names look like:
Product_20120607060930.txt
Product_20130708060930.txt
[code]....
Now what I have to do is make sure that ID, Name, City, County and Phone are present in the flat file. If they are not there, I have to send a mail to the client saying that the file is not valid. I also need to calculate the size of the flat file.
The TEMPDB transaction log file keeps growing. The database server is new and the transaction log was presized to 1 GB on installation. After installing a number of databases, the log file grew over a day to 38 GB. Issuing a manual checkpoint was the only way to free some space to allow it to be shrunk back to a usable size. The usage of the file is still going up.
I am struggling to find what process is causing the log to be used so heavily. Looking at log_reuse_wait_desc for tempdb returns "NOTHING", and tempdb itself isn't being used very much or growing in size.
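A minimal sketch for finding the culprit while the load is running: these DMVs show which open transactions are currently holding log space in tempdb (database_id 2), and the session_id can then be traced back to a statement via sys.dm_exec_sessions / sys.dm_exec_requests:

SELECT tdt.transaction_id,
       tst.session_id,
       tdt.database_transaction_begin_time,
       tdt.database_transaction_log_bytes_used,
       tdt.database_transaction_log_bytes_reserved
FROM sys.dm_tran_database_transactions AS tdt
JOIN sys.dm_tran_session_transactions  AS tst
     ON tst.transaction_id = tdt.transaction_id
WHERE tdt.database_id = 2          -- tempdb
ORDER BY tdt.database_transaction_log_bytes_used DESC;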
I have a filetable that contains a binary file. I need to do a selective read of the file stored in the file table. I can write a C# CLR function that will open the file and read n bytes from a starting byte. Or I can write a SQL statement that reads the stream in the filetable into a VARBINARY variable using SUBSTRING, beginning at the starting byte (offset from 1), for the same n bytes.
Both give me the same result. However, the SQL statement takes considerably longer to read. I know there is overhead in reading through SQL (interpreted language), but the difference in performance is substantial, and I can only attribute this degradation to SQL first trying to "load" the entire stream before it identifies the portion of the stream it needs, beginning at the starting byte offset.
I wonder if this is the case or if there is another option to read a stream from a filetable directly through SQL queries that is more efficient.
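For reference, a minimal sketch of the SUBSTRING approach being described; the filetable name dbo.Documents and the file name are hypothetical, and since SUBSTRING is 1-based a 0-based byte offset needs +1:

DECLARE @start bigint = 1048576,   -- starting byte offset (0-based)
        @len   int    = 4096;      -- number of bytes to read

SELECT SUBSTRING(file_stream, @start + 1, @len) AS chunk
FROM dbo.Documents
WHERE name = N'bigfile.bin';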
I have been given a Product table whose columns are all varchar(8000).
One of the columns is Price and another is DecimalPosition. The Price column holds the price without any decimal place, and the value in the DecimalPosition column determines where the decimal should be placed.
So for instance, if the Price column contains '1000' and DecimalPosition contains '2', then the actual price for this product is '10.00' and NOT '1000'. Similarly, if DecimalPosition contains '3', then the actual price is '1.000' and NOT '1000'. My question is: when I am getting the price for a product from this table, how can I get the price in the correct format, e.g. '10.00' and not '1000'? Should I use SQL statements to convert 1000 into 10.00, or should I use some sort of programming logic? Kind regards
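A minimal T-SQL sketch, assuming a hypothetical dbo.Product table with those two varchar columns; it returns the numeric value (10.0000 for '1000' with position '2'), and the exact number of displayed decimal places can then be handled by the application or with FORMAT/STR:

SELECT Price,
       DecimalPosition,
       CAST(Price AS decimal(18, 4))
           / POWER(10.0, CAST(DecimalPosition AS int)) AS ActualPrice
FROM dbo.Product;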
How do I select the first 1000 rows from a table in SQL Server 6.5?
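A minimal sketch: SQL Server 6.5 predates SELECT TOP (introduced in 7.0), so the usual approach is SET ROWCOUNT; the table name is hypothetical:

SET ROWCOUNT 1000
SELECT * FROM dbo.MyTable   -- hypothetical table name
SET ROWCOUNT 0              -- reset so later queries are not limited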
I have started to install a 3rd party web-based product for our clients that uses SQL 2000 as its backend.
However, every time they create a new 'topic' within the web app, it creates a new database with a single table in it!! There could/will be 1000's of these 'topics' created.
I have told the company we buy this from that this is not acceptable - they have asked why!!
Can anyone point me in the direction of (preferably) a Microsoft document that I can send to them, as just saying 'you just don't do it that way' isn't working and I can't find anything easily myself.
Many thanks
Why does DBCC SHRINKFILE with EMPTYFILE not redistribute data evenly across the files in the primary filegroup?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest] size 200MB
2) Create table called TEST on primary
3) Insert 40MB of data into test
4) Create another file called temp in PRIMARY, size 200MB
5) Shrinkfile('FGTest', emptyfile) so that all data is transferred from FGTest into the temp file.
6) Add another 2 files called DATA2 and DATA3. Both are 200MB.
7) We now have 3 empty files that I want data distributed evenly on: FGTest, DATA2 & DATA3.
8) Shrinkfile('temp', emptyfile) to move all the data from temp over the 3 files evenly.
I would expect at this stage to have the following:
FGTest = 13MB,
DATA2 = 13MB,
DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB
DATA1 = 10MB
DATA2 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over
the remaining files in PRIMARY.
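A small sketch that can be run in the test database to see exactly how much data actually landed in each file (assuming SQL Server 2005 or later; size alone can be misleading because the files were pre-sized):

SELECT name,
       size / 128                             AS size_mb,   -- size is stored in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128  AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';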
We have a large 'History' database that is currently about 4.5TB, with most of that in a datafile that is 4.2TB. We wanted to stop growth on the one large data file and have SQL Server allocate new data to the other data files, but this throws an error when we attempt to change the MAXSIZE settings:
ALTER failed for Database 'History'
MODIFY FILE failed. Specified size is less than or equal to current size.
The SQL Server is saying we can have a max size of 2TB, and anything over that is blocked. Since this is being blocked, the file continues to grow.
Is there any way to cap the growth of the 4.2TB file and not allow any more data to be written to it?
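A hedged sketch of one way to cap it: MAXSIZE cannot be set below the current file size (which is what raises the error above), but autogrowth can be turned off for that file, so once its existing free space is used new allocations go to the other files in the filegroup; the logical file name below is hypothetical:

ALTER DATABASE [History]
MODIFY FILE (NAME = N'History_Data1', FILEGROWTH = 0);   -- logical file name is hypothetical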
Is there a limitation on the size of a file stored in the database via FILESTREAM in SQL Server 2008, or does it accept all sizes?
If I want to commit the transactions after every thousand rows, how would I build that into the script?
Begin Transaction
Select a.AccountNo,
a.TransactionNo,
a.TransactionAmount,
a.TransactionDate
Into dbo.test1
From Trans_May_14Aug2002 a,Reds_JuL_Trans_08Jul2002 b
Where ltrim(rtrim(left(a.AccountNo,20)))=ltrim(rtrim(left(b.AccountNo,20)))
AND
ltrim(rtrim(left(a.TransactionNo,20)))=ltrim(rtrim(left(b.TransactionNo,20)))
AND
a.TransactionAmount=b.TransactionAmount
AND
a.TransactionDate =b.TransactionDate
AND
ltrim(rtrim(left(a.Product,20))) IN ('PR060','PR061','PR091',
'PR096','PR111','PR121',
'PR122')
AND ltrim(rtrim(left(a.Transactiontype,20))) IN
('TR001','TR003','TR011',
'TR013','TR027','TR028',
'TR042','TR043','TR044',
'TR045','TR998','TR999')
AND ltrim(rtrim(left(a.journaltype,20))) NOT IN
('JT000','JT720','JT721',
'JT722','JT723','JT725',
'JT726','JT729','JT730',
'JT737','JT738','JT739',
'JT740','JT743','JT746',
'JT751')
OR ltrim(rtrim(left(a.JournalType,20))) IS NULL
AND a.TransactionDate > '2002-04-30'AND b.transactionDate < '2002-07-01'
Commit
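A minimal batching sketch under a few assumptions: dbo.test1 already exists with the four selected columns, (AccountNo, TransactionNo) is enough to tell which rows have already been copied, SQL Server 2005 or later for TOP (n), and the long product/transaction-type/date filters from the query above are omitted here for brevity:

DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- Copy at most 1000 not-yet-copied rows, then commit, until nothing is left.
    INSERT INTO dbo.test1 (AccountNo, TransactionNo, TransactionAmount, TransactionDate)
    SELECT TOP (1000)
           a.AccountNo, a.TransactionNo, a.TransactionAmount, a.TransactionDate
    FROM Trans_May_14Aug2002 a
    JOIN Reds_JuL_Trans_08Jul2002 b
      ON  LTRIM(RTRIM(LEFT(a.AccountNo, 20)))     = LTRIM(RTRIM(LEFT(b.AccountNo, 20)))
      AND LTRIM(RTRIM(LEFT(a.TransactionNo, 20))) = LTRIM(RTRIM(LEFT(b.TransactionNo, 20)))
    WHERE NOT EXISTS (SELECT 1 FROM dbo.test1 t
                      WHERE t.AccountNo = a.AccountNo
                        AND t.TransactionNo = a.TransactionNo);

    SET @rows = @@ROWCOUNT;
    COMMIT TRANSACTION;
END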
I was wondering if it is possible to have a DB table with 1000 columns?
The other way is of course to break these columns into 1000 rows plus an ID which tells what exactly each row relates to.
I want to know the pros and cons of having 1000 columns/rows for one set of related data.
The reason to need 1000 columns in the first place is that there are about 1000 questions in a set whose answers need to be saved for one session (hence all should go together).
Can anybody shed some light on it? Has anybody tried something so crazy before?
Ok, some company has handed me an .xls file containing 1000+ users -- their emails (which are to be their user names) and their passwords. Both are in plain text format. I want to add these users to the ASPNET_DB, with the condition that the passwords and user IDs are encrypted, as they are in the table.
How should I do this?
Thanks very much.
I am developing a form for a mortgage company. There can be any number of borrowers on a given loan, and the business has asked that this form return only 2 borrowers at a time for a loan. For example, if there are 3 borrowers for a loan, they want the first copy of the form to print the first 2 borrowers and then another copy of the form to print the 3rd. No matter how many copies are printed, they want the borrower information to be labeled as 'Borrower1' xyz and 'Borrower2' xyz. Also, there will be a LOT more fields returned on the real form, so the sample information below is very simplified test data.
Sample Data:
CREATE TABLE #t (LoanID VARCHAR(5), BorrowerName VARCHAR(20), BorrowerOrder INT);
GO
INSERT INTO #t VALUES
('::E', 'John Smith', 0)
, ('::E', 'Jane Smith', 1)
, ('::E', 'Rob Jackson', 2)
, ('AF_CF', 'Sloan Burton', 1)
[code]...
I don't want that 2nd record to return. This result is what makes me think of gaps and islands, but I don't know if the 2nd record is really an island, since (1) it's not stored this way - it's returning this way because of the query - and (2) it's not sequential data. I tried restricting this by putting it into a CTE and then returning only the odd-numbered records, like I have below. This runs pretty quickly when dealing with one loan, but I am concerned that the CTE will be slow when we run batches of loans.
Attempt with CTE:
--With CTE
;WITH cte AS
(SELECT
Borrower1 = BorrowerName
, Borrower2 = LEAD(BorrowerName) OVER(ORDER BY BorrowerOrder)
, RowNumber = ROW_NUMBER() OVER(ORDER BY BorrowerOrder)
[code]...
Is there a better, cleaner way to do this? Or is the CTE the best way to go?
[URL]
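An alternative sketch (using the #t sample above) that avoids filtering LEAD results: number the borrowers per loan and pivot each pair of consecutive rows onto one output row, so a loan with three borrowers yields two rows and the odd borrower appears as Borrower1 with a NULL Borrower2:

WITH numbered AS (
    SELECT LoanID, BorrowerName,
           ROW_NUMBER() OVER (PARTITION BY LoanID ORDER BY BorrowerOrder) - 1 AS rn
    FROM #t
)
SELECT LoanID,
       MAX(CASE WHEN rn % 2 = 0 THEN BorrowerName END) AS Borrower1,
       MAX(CASE WHEN rn % 2 = 1 THEN BorrowerName END) AS Borrower2
FROM numbered
GROUP BY LoanID, rn / 2
ORDER BY LoanID, rn / 2;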
I had a problem before of not being able to find the rows with 0 values. I've now managed this, although it has brought up duplicate rows due to the discounts being different on the same Mfr_part_number. I tried using the MAX function on ISNULL(Exhibit_Discount.Discount, 0.00) AS Discount instead, but with no success. I think it may be something to do with PK keys not being used in the set-up of the database.
Use Sales_Builder
Go
SELECT DISTINCT
GBPriceList.[Mfr_Part_Num],
[Code]....
Problem 1: I have the following table which shows the location of a person at 1 hour intervals
Id  EntityID  EntityName  LocationID        Timex  delta
1   1         Mickey      Club house        0300   1
2   1         Mickey      Club house        0400   1
3   1         Mickey      Park              0500   2
4   1         Mickey      Minnies Boutique  0600   3
5   1         Mickey      Minnies Boutique  0700   3
6   1         Mickey      Club house        0800   4
7   1         Mickey      Club house        0900   4
8   1         Mickey      Park              1000   5
9   1         Mickey      Club house        1100   6
The delta increments by +1 every time the location changes.
I would like to return an aggregate grouped by delta as per example below.
EntityName  LocationID        StartTime  EndTime
Mickey      Club house        0300       0500
Mickey      Park              0500       0600
Mickey      Minnies Boutique  0600       0800
Mickey      Club house        0800       1000
Mickey      Park              1000       1100
Mickey      Club house        1100       1200
I am using the following query (which works fine):
select
min(timex) as start_date
,end_date
,entityid
,entityname
,locationid
[code]....
However, I would like to avoid using the delta (it takes effort to calculate and populate it); instead I am wondering if there is any way to calculate it as part of running the query.
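A minimal sketch that derives the islands without a stored delta column, assuming a hypothetical table dbo.EntityLocation(Id, EntityID, EntityName, LocationID, Timex): the difference of two ROW_NUMBERs is constant within each run of consecutive rows at the same location. The last column is the last observed Timex in the island; the sample output's EndTime (one interval later) can be taken as the next island's StartTime via LEAD, or as last observation plus one hour. Because everything is partitioned by EntityID, the same query also covers Problem 2 below without modification:

WITH grp AS (
    SELECT EntityID, EntityName, LocationID, Timex,
           ROW_NUMBER() OVER (PARTITION BY EntityID ORDER BY Timex)
         - ROW_NUMBER() OVER (PARTITION BY EntityID, LocationID ORDER BY Timex) AS island
    FROM dbo.EntityLocation
)
SELECT EntityID, EntityName, LocationID,
       MIN(Timex) AS StartTime,
       MAX(Timex) AS LastObserved
FROM grp
GROUP BY EntityID, EntityName, LocationID, island
ORDER BY EntityID, StartTime;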
Problem 2: I have the following table which shows the location of different people at 1 hour intervals
Id  EntityID  EntityName  LocationID  Timex  Delta
1   1         Mickey      Club house  0900   1
2   1         Mickey      Club house  1000   1
3   1         Mickey      Park        1100   2
4   2         Donald      Club house  0900   1
5   2         Donald      Park        1000   2
6   2         Donald      Park        1100   2
7   3         Goofy       Park        0900   1
8   3         Goofy       Club house  1000   2
9   3         Goofy       Park        1100   3
I would like to return an aggregate grouped by person and location.
For example
EntityID  EntityName  LocationID  StartTime  EndTime
1         Mickey      Club house  0900       1100
1         Mickey      Park        1100       1200
2         Donald      Club house  0900       1000
2         Donald      Park        1000       1200
3         Goofy       Park        0900       1000
3         Goofy       Club house  1000       1100
3         Goofy       Park        1100       1200
What modifications do I need to the above query (Problem 1)?
I have this table:
CREATE TABLE [dbo].[ACT_SECUNDARIA](
[CODACTIVIDADE] [int] IDENTITY(1,1) NOT NULL,
[CODCTB] [int] NOT NULL,
[CODCAE] [int] NULL,
[CODSECTOR] [int] NOT NULL,
[Code] ....
I want to delete every record that has more than one entry for the same (CODCTB, CODCAE) pair.
For example: if there are three records with the same codctb and codcae I want to delete two so that there can only be one.
How can I achieve this using t-sql?
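A minimal sketch using ROW_NUMBER to keep one row per (CODCTB, CODCAE) pair (arbitrarily the one with the lowest CODACTIVIDADE) and delete the rest; deleting through the CTE deletes from the underlying table:

WITH d AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY CODCTB, CODCAE
                              ORDER BY CODACTIVIDADE) AS rn
    FROM dbo.ACT_SECUNDARIA
)
DELETE FROM d
WHERE rn > 1;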
We have a requirement to aggregate the data and create LY (last year) and CY (current year) data across 5 metrics.
The volume of data will be 200-250 million rows.
Server configuration:
Memory: 32 GB
Processors: 16
I would like to understand the benchmark configuration needed for the server, and any hints on better aggregation methods for the LY/CY data.
[crossposted] Hi, I wonder if anyone might lend me a brain. I have a stock database to build that covers over 1000 products, which might be said to exist in around 50 product families. Obviously, just to be awkward, all the types of stock will have different attributes. So one product might be a tube with inside/outside diameter and length, and another a T-shaped cable joint. All I can come up with is a separate table for each stock type family, and storing the table name and product code in the main stock table, so:
Tables: ProdA, ProdB, ProdC, Stock
Stock attributes: ProdId, ProdTable, Amount, Date, etc.
ProdA attributes: ProdId, AttributeX, AttributeY, AttributeZ, etc.
Then use code to parse the table name and product ID to select the correct query to get the product details. BUT this seems awfully inelegant and potentially wrong, so I'm loath to continue down this route. Can anyone tell me the "right" way to do this? I feel sure it must be a classic DB design exercise, but unfortunately one they didn't teach us at university -- or maybe I was asleep... Thanks!
Is there a way to change the Open Table - Return Top... 1000 default to something like 10, so it returns only 10 rows by default? Any registry keys?