I have a big table (120 million records) and I want to take all of it and insert it into another table. Since this bulk insert operation can cause all kinds of performance problems, I would like to do the bulk insert in small chunks. The table does not have an identity column.
Can someone give me an example, with ROWCOUNT or with a loop, that runs an INSERT INTO ... SELECT statement and inserts, for example, 5000 rows each time?
Here's a little SP to break up those long-running, massively-locking, bring-app-to-a-halt queries. By default it does 500 rows at a time and allows for a maximum SQL query size of 4000 characters; it should be trivial to adjust those.
Cheers -b
CREATE PROCEDURE p_BatchExecute (@vcSQL varchar(4000)) AS
SET NOCOUNT ON
DECLARE @iRows int
SELECT @iRows = 1
SET ROWCOUNT 500          -- cap every statement at 500 rows
WHILE @iRows > 0
BEGIN
    PRINT 'Executing batch of 500...'
    EXEC (@vcSQL)
    SET @iRows = @@ROWCOUNT
END
GO
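For the 5,000-row INSERT ... SELECT version asked about above, a plain loop works too. This is a minimal sketch, assuming the source table has some unique key (the src_key column and both table names below are hypothetical) so each pass can skip the rows already copied; with no key at all you would need some other bookkeeping.

SET NOCOUNT ON
SET ROWCOUNT 5000            -- cap every statement below at 5,000 rows

DECLARE @rows int
SET @rows = 1

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (src_key, col1, col2)
    SELECT s.src_key, s.col1, s.col2
    FROM dbo.SourceTable s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.src_key = s.src_key)

    SET @rows = @@ROWCOUNT   -- reaches 0 once everything has been copied
END

SET ROWCOUNT 0               -- always reset, or later statements stay capped

The NOT EXISTS check gets slower as the target fills up, so an index on TargetTable(src_key) matters; copying in key order with a remembered high-water mark is another common variant.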
The part I'm having trouble with is the next part of my code, where I would search the rows in tbltimes and find the row WHERE employeename = Lblname.Text AND TimeIn IS NOT NULL AND TimeOut IS NULL, then insert Lbltime into the TimeOut column of that existing row.
Any ideas as to the easiest way to accomplish this would be a great help; I'm officially stumped atm.
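Since the row already exists, this is an UPDATE rather than an INSERT. A minimal sketch in T-SQL, using the table and column names from the post; @name and @time stand in for Lblname.Text and Lbltime passed as parameters from the application:

UPDATE tbltimes
SET    TimeOut = @time
WHERE  employeename = @name
  AND  TimeIn IS NOT NULL
  AND  TimeOut IS NULL

Run from ADO.NET this would be a parameterized SqlCommand; the WHERE clause mirrors the three conditions described above.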
I'm having a lot of trouble inserting a small time value into a table cell. I gave the column the data type 'DateTime'; I found I couldn't manually insert a time-only value such as '12:30 PM' into a column of type 'SmallDateTime' (something about a "SmallDateTime overflow error"). However, if I enter a similar time value into a table column with the data type 'DateTime', it will happily accept it and leave it as entered.
The real problem is when I try to send a time value to that column from my ASP.NET application, because it inserts the time value and today's date. So if I send:
12:30 PM
It will be stored as:
15/11/2003 12:30:00 PM
I only want to store the short time, not the date, and especially not the date the row was created on, because that's useless for what my application is trying to achieve and just creates problems down the track when selecting rows.
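SQL Server of that era has no time-only type, so one common workaround is to anchor every value to datetime's base date (1900-01-01) and format only the time back out. A minimal sketch with placeholder table/column names:

CREATE TABLE dbo.TimeLog (TimeIn datetime NULL)

-- Inserting a bare time string stores 1900-01-01 12:30:00 PM: the date part
-- becomes a fixed anchor instead of varying per row.
INSERT INTO dbo.TimeLog (TimeIn) VALUES ('12:30 PM')

-- If the application sends a full date + time, strip the date on the way in
-- by keeping only the time-of-day text (style 108 = hh:mm:ss):
INSERT INTO dbo.TimeLog (TimeIn) VALUES (CONVERT(varchar(8), GETDATE(), 108))

-- Read back as time-only text:
SELECT CONVERT(varchar(8), TimeIn, 108) AS TimeOnly FROM dbo.TimeLog

With a constant anchor date, rows compare and sort by time of day correctly, which avoids the selection problems mentioned above.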
I have a table that's 25,000,000 records, about 10 fields. I need to export this data to a flat file in chunks of no more than 500,000 records. I've tried the following algorithm, after adding a flag field called "exported" with default value 0.
do:
- mark a random 500,000 records, setting exported = -1
- export everything in that table where exported = -1
- set exported = 1 where exported = -1
loop
This was pretty slow, taking about 10 hours last night to run.
I find myself wanting a sort of split-dataset task in SSIS: being able to split a chunk of records out of a dataset and handle them. Anyone have ideas for me?
I need to export records to a flat file using a dataflow task, but want no more than 50,000 records in each file. What's the best way to automate this?
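One alternative to the flag-update loop described above is to number the rows once and derive a chunk number, so each export pass is a plain filtered SELECT with no updates at all. A minimal sketch, assuming SQL Server 2005+ (for ROW_NUMBER) and a unique key to order by; the table and column names are hypothetical:

-- one-time pass: assign every row to a 500,000-row chunk
SELECT id,
       (ROW_NUMBER() OVER (ORDER BY id) - 1) / 500000 AS chunk_no
INTO   dbo.ExportChunks
FROM   dbo.BigTable

-- each bcp / SSIS data flow pass then reads exactly one chunk
SELECT b.*
FROM   dbo.BigTable b
JOIN   dbo.ExportChunks c ON c.id = b.id
WHERE  c.chunk_no = 0      -- 1, 2, ... for the following files

In SSIS the chunk number could be a package variable feeding the source query, with a For Loop container stepping it from 0 up to the highest chunk.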
I have a table based around requisitions, and each requisition has a number of positions. That number can change over time through updates to pertinent rows rather than through transaction-like records that record an entire history, and I'm only able to get a monthly snapshot of the table. What I decided to do is still use one table for OLAP (fact_requisitions) but add a column called period_key that refers to the month the data comes from. So if I have two months of data then the table has each requisition twice, possibly with differing position counts, and new requisitions from the second month are only present once. Then I tried to filter the MDX query like so:
SELECT
  { ([Dim TimeRequestClosed].[Year - MonthNumber].[Year_Text].&[2008].&[1],
     [Dim Requisitions].[Period].[Period Key].&[200801]) } ON COLUMNS,
  NON EMPTY
  { ([Dim Location].[Region Name].MEMBERS,
     [Dim Location].[Period Key].&[200801]) } ON ROWS
FROM [Requisitions]
WHERE [Measures].[Request Closed Date Count]
This query doesn't work even though the data is there; it just returns nulls. Am I going about this all wrong? If not, what might I be doing wrong, and how would I get the query to return more than one period (e.g. tell Dim Requisitions to match up with Dim Location on the period key)?
From what I can see, the 'varbinary(max)' data type is not supported, and the 'image' data type is supposed to go away. Is there some other way to store large chunks (10MB to 100MB) of data into an SSEv DB?
If I have to use the 'image' data type to do this, does anyone have a code sample that would let me push an array() of numbers into an 'image' field, and unload an 'image' field back into an array()?
We have data that consists of an employee number, a start time and a finish time, similar to the example below
EMP     STARTTIME               ENDTIME
00001 10-Feb-2012 06:00:00 10-Feb-2012 10:00:00
00002 10-Feb-2012 07:15:00 10-Feb-2012 10:00:00
00003 10-Feb-2012 08:00:00 10-Feb-2012 10:00:00
I am trying to come up with a procedure in SQL that will give me each 15-minute block throughout the day and a count of how many employees are expected to be at work at the start of that 15-minute block. So, given the example above, I would like to return:
10-Feb-2012 00:00:00    0
10-Feb-2012 00:15:00    0
10-Feb-2012 06:00:00    1
10-Feb-2012 06:15:00    1
[code]....
I'm not too worried if the date part is not included in the result as this could be determined elsewhere, but how can I do this grouping/counting?
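A minimal sketch of the grouping/counting, assuming a table named Shifts(EMP, STARTTIME, ENDTIME) shaped like the sample above. A derived table generates the 96 quarter-hour starts for the day; an employee counts toward a block when the block's start falls inside [STARTTIME, ENDTIME):

DECLARE @day datetime
SET @day = '20120210'     -- 10-Feb-2012

SELECT   b.BlockStart,
         COUNT(s.EMP) AS EmployeesPresent
FROM (
        -- 96 fifteen-minute block starts (two 0-9 lists cross-joined give 0..99)
        SELECT DATEADD(minute, 15 * (u.n + 10 * t.n), @day) AS BlockStart
        FROM (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3
              UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6
              UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) u
        CROSS JOIN
             (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3
              UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6
              UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t
        WHERE u.n + 10 * t.n < 96
     ) b
LEFT JOIN Shifts s
       ON s.STARTTIME <= b.BlockStart
      AND s.ENDTIME   >  b.BlockStart
GROUP BY b.BlockStart
ORDER BY b.BlockStart

The ENDTIME > BlockStart comparison deliberately leaves the 10:00 block at 0 for the sample rows; change > to >= if someone finishing at 10:00 should still count in the 10:00 block.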
I'm using the code below to send files that are in a blob file in my database to the browser client. The code sends the file in chunks in order to increase performance. The file I'm using to test with is 7MB. It works great on Windows XP with any browser. It takes virtually the same amount of time compared to downloading the file directly from the webserver. However, Windows 2000 and Mac OS X both take about 4x the amount of time it takes to download the file on XP machines. Why the performance difference? Is there anything I can do to fix this? I tried downloading the file directly from the webserver instead of getting it out of the database and it takes the same amount of time on all 3 OS. I had the same problem on Windows XP when I wasn't sending the file in chunks, but after using the code below, it started working for XP only.
Dim bufferSize As Integer = 24000
Dim outbyte(bufferSize - 1) As Byte
Dim retval As Long
Dim startIndex As Long = 0

Dim sql As String = "SELECT ..."
Dim cmd As New SqlCommand(sql, conn)
conn.Open()
Dim dr As SqlDataReader = cmd.ExecuteReader(CommandBehavior.SequentialAccess)
If dr.Read() Then
    ' Reset the starting byte for a new BLOB.
    startIndex = 0

    ' Read bytes into outbyte() and retain the number of bytes returned.
    retval = dr.GetBytes(DocCol, startIndex, outbyte, 0, bufferSize)
    Current.Response.Clear()
    Current.Response.Buffer = True
    Current.Response.ContentType = "application/octet-stream"
    Current.Response.AddHeader("Content-Disposition", "attachment; filename=" & myfile & "." & myextension)

    Do While retval = bufferSize
        Current.Response.BinaryWrite(outbyte)
        Current.Response.Flush()

        ' Reposition the start index to the end of the last buffer and refill the buffer.
        startIndex += bufferSize
        retval = dr.GetBytes(DocCol, startIndex, outbyte, 0, bufferSize)
    Loop

    ' Write the remainder of the last chunk. Note: Dim remaining(retval) would
    ' allocate retval + 1 bytes and append a stray zero byte, so size it retval - 1.
    Dim remaining(CInt(retval) - 1) As Byte
    Array.Copy(outbyte, 0, remaining, 0, retval)
    Current.Response.BinaryWrite(remaining)
    Current.Response.Flush()
    Current.Response.Close()
End If
dr.Close()
conn.Close()
Using the SqlClient provider, I'm trying to write big data chunks (maybe 20 MB each) to SQL Server, to store them in BLOBs using blobColumn.Write(...), with a .NET 2.0 DbCommand object calling a stored procedure:
CREATE PROCEDURE [dbo].[putBlobByPK]
(
@id dKey
, @value VARBINARY(MAX)
, @offset bigint
, @length bigint
, @ModDttm dModDttm OUT
, @ModUser dModUser OUT
, @ModClient dModClient OUT
, @ModAppl dModAppl OUT
)
....
When doing this I can do it exactly three times, then the application hangs (forever).
Looking in the SQL Server log, I find the following error:
Error: 4014, Severity: 20, Status: 2.
A fatal error occurred while reading the input stream from the network. The session will be terminated.
I don't get this error on the client! OK, the session died.
What may be the problem?
I write big chunks like this to avoid many small writes, as the data shall be replicated later using peer-to-peer replication, and the more writes it takes to store the whole BLOB, the larger the transaction log of the subscriber database becomes.
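For reference, a minimal sketch of what the body of a procedure like putBlobByPK can do with those parameters, using the .WRITE clause that T-SQL provides for varbinary(max) columns; the table and column names here are placeholders, not from the post:

UPDATE dbo.BlobTable
SET    blobColumn.WRITE(@value, @offset, @length)   -- replaces @length bytes at @offset with @value
WHERE  id = @id

Each call is a separate logged update, which is exactly why fewer, bigger chunks keep the subscriber's transaction log smaller.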
I need to display only the rows that start with 'ACCT-AMOUNT'. The problem is that some records contain lower-case characters like 'acct amount', but I want to display only the upper-case rows starting with 'ACCT-AMOUNT'.
I have used the LIKE operator, but it shows all the rows, including the lower-case ones.
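On a case-insensitive database (the common default), LIKE ignores case, which is why the lower-case rows come back. Forcing a case-sensitive collation on the comparison filters them out; a minimal sketch with placeholder table/column names:

SELECT col
FROM   dbo.MyTable
WHERE  col LIKE 'ACCT-AMOUNT%' COLLATE Latin1_General_CS_AS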
Hi, probably there is a simple solution for this; hopefully someone can direct me in the right direction. I have a table with a person's firstname, lastname, birthdate and address. However, I want to select only one person per address, namely the eldest of all persons living at the same address. Can anyone provide me a solution? Thanks in advance.
Duncan
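A minimal sketch of one way to do it, assuming a table Persons(firstname, lastname, birthdate, address): join each address back to its earliest birthdate, keeping only the eldest person per address (two people sharing the minimum birthdate would both come back):

SELECT p.firstname, p.lastname, p.birthdate, p.address
FROM   Persons p
JOIN  (SELECT address, MIN(birthdate) AS eldest_birthdate
       FROM   Persons
       GROUP BY address) a
  ON   a.address          = p.address
 AND   a.eldest_birthdate = p.birthdate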
Hello. I have a case where Partners are some kind of super-users and are stored in a SQL Server database. Best is IMO to put both in the same table:

table Customers:
CustomerID [pr.key]
[blabla]
PartnerID

But of course I have to reference the PartnerID from another table, and I want SQL Server to maintain the integrity rules. I could split Customers and Partners into different tables, but that would not be wise, I think. Or I could just reference the CustomerID from the other table and -know- that we are talking about a partner, but in that case it is possible to reference a customer that is not a partner, and I want to avoid that. Any ideas?
Freek Versteijn
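One way to keep the single Customers table while letting other tables reference only partners is a one-column subtype table whose primary key is also a foreign key into Customers; a minimal sketch with illustrative names:

CREATE TABLE Customers (
    CustomerID int NOT NULL PRIMARY KEY
    -- ... other columns ...
)

CREATE TABLE Partners (
    CustomerID int NOT NULL PRIMARY KEY
        REFERENCES Customers (CustomerID)
)

-- Tables that must point at a partner reference Partners, not Customers,
-- so a plain customer can never be referenced by mistake:
CREATE TABLE Orders (
    OrderID   int NOT NULL PRIMARY KEY,
    PartnerID int NOT NULL REFERENCES Partners (CustomerID)
)

Rows exist in Partners only for customers who are partners, so the engine enforces the rule instead of the application having to -know- it.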
I have a report that groups by vendor and then by invoice within a vendor.
I want to create a report totals that contains
Number of vendors = 2
Number of invoices = 3
Total Inv Amounts = 600.00
Total Item Amounts = 475.00
How? Number of vendors is =Count(Distinct vendors)?? I can't do that for the number of invoices because different vendors may reuse the same invoice numbers. And how do I get the total invamt? Each occurrence of the invamt field is in a list2 as =First(invamt). I really need the sum of each First(invamt).
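If the report expressions won't cooperate, one option is to compute the totals in the dataset query itself. A minimal sketch, assuming a detail table with one row per invoice line item (the names are hypothetical): collapsing to one row per vendor/invoice first turns "sum of each First(invamt)" into a plain SUM:

SELECT COUNT(DISTINCT d.vendor) AS VendorCount,
       COUNT(*)                 AS InvoiceCount,    -- one row per vendor + invoice
       SUM(d.invamt)            AS TotalInvAmounts
FROM (
        SELECT vendor, invoice, MIN(invamt) AS invamt   -- invamt repeats on every item row
        FROM   dbo.InvoiceDetails
        GROUP BY vendor, invoice
     ) d

Counting rows of the derived table sidesteps the duplicate-invoice-numbers-across-vendors problem, since vendor is part of the grouping.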
I currently use JET 4.0 as my database in a small app I distribute. SQL Server Express is way way too big to distribute. I am looking to move to a better database.
Is there a version of SQL that gives simple database CRUD support, but is super small to distribute? I am also considering Firebird as it's a full database and only 3MB on the client as an embedded tool.
I have a table consisting of 3 columns, and I need to select multiple rows from it. Example: I need to select the rows with id = (1,5,9,7,11,15,20,23,42,65) in one SELECT statement. Can anybody answer me? And I will give him a( )
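A minimal sketch using an IN list, with placeholder table/column names:

SELECT *
FROM   dbo.MyTable
WHERE  id IN (1, 5, 7, 9, 11, 15, 20, 23, 42, 65)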
I'm just about to launch a forum, and right now it's built in classic ASP with an Access db. I originally used Access because most hosts charge extra for MS-SQL Server. Recently I switched to Jodohost, who offers Access, MS-SQL and MySQL at no additional cost. So now that I have the option, I would like to pick the best solution before I launch.
* My questions are...
- What is the best db solution to go with for a currently small forum?
- How problematic do you think a data migration would be in the future if I stayed with Access for now and upgraded to MS-SQL with a full forum?
- Is it just smarter to go with MS-SQL now, with an empty forum, regardless of any performance issues, because potential migration problems are a greater risk?
- At what point does the speed of MS-SQL at high volumes overcome the potential lag of accessing MS-SQL when it is hosted on a different machine from the one hosting the webpages?
* And please keep in mind...
- I have no db training. I can muddle through Access well enough, but administering MS-SQL I think might be another story
- This forum will start off very small, but could grow to be quite large
- I may not stay with Jodohost, and would therefore likely have to pay more for MS-SQL (which I would rather not do)
Hello, could someone pls help me with this table design.
I have a project table and a code table. The code table has things like priorities (High, Medium, Low).
Now, I want all projects to be able to use these 'global' codes as well as define their own. So, they could define their own priority code 'Critical', that only their project can see.
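A common design for this is a nullable project column on the code table, where NULL means the code is global; a minimal sketch with illustrative names:

CREATE TABLE Codes (
    CodeID    int NOT NULL PRIMARY KEY,
    CodeType  varchar(20) NOT NULL,                       -- e.g. 'Priority'
    CodeValue varchar(50) NOT NULL,                       -- e.g. 'High', 'Critical'
    ProjectID int NULL REFERENCES Projects (ProjectID)    -- NULL = visible to all projects
)

-- The codes a given project may see: its own plus the global ones
SELECT CodeValue
FROM   Codes
WHERE  CodeType = 'Priority'
  AND (ProjectID = @ProjectID OR ProjectID IS NULL)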
I have listed a view below and a portion of the result set that is returned when I run the code in Query Analyzer. This is part of a timesheet application that logs hours per SCHLSTUID per SECTIONID per week. This returns the SCHLSTUID (user's ID), SECTIONID, date that the week starts, and the first date that time was logged. The user could be in several SECTIONIDs for the same week. I need to modify this so that it returns the date that the first time was logged for any of the SECTIONIDs per week. I know that this is probably something simple that I'm overlooking but I just can't get it to work correctly.

Example:

SCHLSTUID  SECTIONID  ATTSTARTDT               FirstTimeEnteredOn
601868445  EN4AR001   2005-09-18 20:59:21.120  2005-09-19 20:59:21.120
601868445  MAA1R001   2005-09-18 20:59:21.120  2005-09-18 20:59:21.120
This would need to return 2005-09-18 20:59:21.120

601868445  EN4AR001   2005-10-02 20:59:37.427  2005-10-02 20:59:37.427
601868445  MAA1R001   2005-10-02 20:59:37.427  2005-10-02 20:59:37.427
This would need to return 2005-10-02 20:59:37.427

601868445  EN4AR001   2005-10-09 20:59:37.823  2005-10-09 20:59:37.823
601868445  MAA1R001   2005-10-09 20:59:37.823  2005-10-13 20:59:37.823
This would need to return 2005-10-09 20:59:37.823

CREATE VIEW dbo.vExportStartWeek
AS
SELECT TOP 100 PERCENT schlstuid, sectionid, ATTSTARTDT, MIN(TimesheetDate) AS FirstTimeEnteredOn
FROM (SELECT schlstuid, sectionid, ATTSTARTDT, ATTSTARTDT AS TimesheetDate, sunmns AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, ATTSTARTDT AS TimesheetDate, sunhrs AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 1, ATTSTARTDT) AS TimesheetDate, monmns AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 1, ATTSTARTDT) AS TimesheetDate, monhrs AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 2, ATTSTARTDT) AS TimesheetDate, tuemns AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 2, ATTSTARTDT) AS TimesheetDate, tuehrs AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 3, ATTSTARTDT) AS TimesheetDate, wedmns AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 3, ATTSTARTDT) AS TimesheetDate, wedhrs AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 4, ATTSTARTDT) AS TimesheetDate, Thrmns AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 4, ATTSTARTDT) AS TimesheetDate, Thrhrs AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 5, ATTSTARTDT) AS TimesheetDate, Frimns AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 5, ATTSTARTDT) AS TimesheetDate, Frihrs AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 6, ATTSTARTDT) AS TimesheetDate, Satmns AS TimeEntered FROM TimeSheetDailyAttendance
      UNION ALL
      SELECT schlstuid, sectionid, ATTSTARTDT, DATEADD(d, 6, ATTSTARTDT) AS TimesheetDate, Sathrs AS TimeEntered FROM TimeSheetDailyAttendance) TimesheetDates
WHERE (TimeEntered <> 0)
GROUP BY schlstuid, sectionid, ATTSTARTDT
ORDER BY schlstuid

This is a portion of what is returned:

SCHLSTUID  SECTIONID  ATTSTARTDT               FirstTimeEnteredOn
601868445  EN4AR001   2005-09-18 20:59:21.120  2005-09-19 20:59:21.120
601868445  MAA1R001   2005-09-18 20:59:21.120  2005-09-18 20:59:21.120
601868445  EN4AR001   2005-09-25 20:59:36.670  2005-09-25 20:59:36.670
601868445  EN4AR001   2005-10-02 20:59:37.427  2005-10-02 20:59:37.427
601868445  MAA1R001   2005-10-02 20:59:37.427  2005-10-02 20:59:37.427
601868445  EN4AR001   2005-10-09 20:59:37.823  2005-10-09 20:59:37.823
601868445  MAA1R001   2005-10-09 20:59:37.823  2005-10-13 20:59:37.823

Thank you for any help that you can give me.
Scott
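A minimal sketch of one way to get what's described: aggregate over the existing view and drop sectionid from the grouping, so the MIN runs across all of a user's sections for the week:

SELECT   schlstuid, ATTSTARTDT,
         MIN(FirstTimeEnteredOn) AS FirstTimeEnteredOn
FROM     dbo.vExportStartWeek
GROUP BY schlstuid, ATTSTARTDT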
When I insert an empty date from an ASP page, SQL 7.0 generates a default value of 1/1/1900. This is normal. However, I need to know how to turn that feature off so it does not generate the default value.
Currently, my front-end application uses ASP/VB scripts. Please help.
I tried assigning a null value to my variant, but SQL still generates that default date/time.
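The 1900 default usually comes from sending an empty string rather than a real NULL: an empty string converts to datetime's zero point, while NULL stays NULL. A minimal sketch (the Orders table is hypothetical):

SELECT CAST('' AS datetime)     -- returns 1900-01-01 00:00:00.000
SELECT CAST(NULL AS datetime)   -- returns NULL

INSERT INTO dbo.Orders (OrderDate) VALUES (NULL)   -- stores NULL, not 1/1/1900

From ASP/ADO, pass a true Null rather than an empty string or empty variant, and make sure the column is nullable.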
Hello, I'd like to confirm something regarding SQL Server account permissions. Is the NT domain admin also a member of the sysadmin role on the local server running SQL Server 7.0 on an NT 4.0 platform? Thank you
We have a large and active MSSQL 2000 database. Recently, after a rebuild of the server, we had a problem with the SQL service SQLSERVERAGENT. The service could not start, as the service account had lost local permission to the registry. During this time, all of the data being sent to the database from our application accumulated in the database .ldf file. By the time we were able to get the service restarted, our .ldf file was approx. 28 gigs. When the service restarted, the .ldf file shrank down to its regular size, about 40 megs, and the .trn tlog files grew to 28 gigs for that specific period (new file every hour).
The problem is, the database file (database.mdf) stayed about the same size as it was before the service was restarted. When the .ldf transferred to the .trn, none of the 28 gigs of data got stored in the database. What does this mean? Perhaps with the service stopped, the application using the db saw problems and did not commit the data, making it all useless? Or is it possible that the data in the .trn log just needs to be forced to commit to the .mdf?
Is there any way to verify the data in the 28 gig .trn file and figure out if we should get it stored to the database? If yes, how would we go about verifying it, and after that how would we force it to commit to the .mdf file? Am I on the right track here or is it not as I see it??
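One way to find out, as a sketch (the paths and logical file names below are placeholders): restore a copy of the database from a full backup taken before the incident, then roll the hourly .trn files forward; if the application committed its work, it will show up in the restored copy:

RESTORE HEADERONLY FROM DISK = 'D:\backups\database_xx.trn'   -- inspect a log backup

RESTORE DATABASE database_copy FROM DISK = 'D:\backups\database_full.bak'
    WITH NORECOVERY,
         MOVE 'database'     TO 'D:\data\database_copy.mdf',
         MOVE 'database_log' TO 'D:\data\database_copy.ldf'

RESTORE LOG database_copy FROM DISK = 'D:\backups\database_xx.trn'
    WITH RECOVERY   -- use NORECOVERY for all but the last log in the chain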
I have performed all the steps for the replication. There are a few problems I am encountering:
(i) how do I check whether the replication is working or not?
(ii) can replication on the subscribing server automatically create the table which the publishing server is publishing?
(iii) please tell me some other forums for SQL Server.
Waiting for reply,
ashish bhatnagar
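For (i), a crude but reliable smoke test, sketched with a placeholder table name: insert a marker row into a published table on the publisher, wait out the distribution latency, then look for it on the subscriber:

-- on the publisher
INSERT INTO dbo.PublishedTable (id, note) VALUES (999999, 'replication test')

-- on the subscriber, a little later
SELECT * FROM dbo.PublishedTable WHERE id = 999999   -- a row here means replication is flowing

Replication Monitor in Enterprise Manager shows the same thing without touching the data.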
Hello All, I have a database and I am trying to run a query against it. It returns the data fine; I just want it displayed differently. Here is the example. I am a novice at SQL, I dabble here and there, so any help would be great!
Base code:

Select LOA.Account0 as 'Account Name'
From local_accounts LOA
Where LOA.Account0 like '%Microsoft%'
Group by LOA.ACCOUNT0
Account Name
Win32_UserAccount.Domain='Microsoft',Name='Jason'
I would like the data to be displayed differently in my query. I would like just the domain user name to appear in the row.
Account Name
MicrosoftJason
Does anyone have example code for something like this?
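A minimal sketch of the string surgery, assuming every Account0 value has exactly the Win32_UserAccount.Domain='...',Name='...' shape shown above; CHARINDEX finds the quoted sections and SUBSTRING glues the two values together:

SELECT SUBSTRING(LOA.Account0,
                 CHARINDEX('Domain=''', LOA.Account0) + 8,
                 CHARINDEX(''',Name=''', LOA.Account0)
                   - (CHARINDEX('Domain=''', LOA.Account0) + 8))
     + SUBSTRING(LOA.Account0,
                 CHARINDEX('Name=''', LOA.Account0) + 6,
                 LEN(LOA.Account0)
                   - (CHARINDEX('Name=''', LOA.Account0) + 6))
       AS 'Account Name'
FROM   local_accounts LOA
WHERE  LOA.Account0 LIKE '%Microsoft%'

For the sample row this returns MicrosoftJason; any row that doesn't match the expected shape would need guarding with an extra LIKE filter.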