Hi, I have a problem importing data from SQL Server 2000 'text' columns to SQL Server 2005 nvarchar(max) columns. The transfer fails on every column that matches this pattern, with the error copied below.
Any help on this greatly appreciated...
ERROR : errorCode=-1071636471 description=An OLE DB error has occurred. Error code: 0x80004005.An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Unicode data is odd byte size for column 3. Should be even byte size.". helpFile=dtsmsg.rll helpContext=0 idofInterfaceWithError={8BDFE893-E9D8-4D23-9739-DA807BCDC2AC} (Microsoft.SqlServer.DtsTransferProvider)
Hi, I am getting an error with my code. It says 'value of type byte cannot be converted to 1 dimensional array of byte'. Do you know why, and how I can correct it? The following is my code; thanks for any help given.

Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
    Dim myConnection As New Data.SqlClient.SqlConnection("ConnectionString")
    myConnection.Open()
    Dim sql As String = "Select Image_Content from ImageGallery where Img_Id=@ImageId"
    Dim cmd As New Data.SqlClient.SqlCommand(sql, myConnection)
    cmd.Parameters.Add("@imgID", Data.SqlDbType.Int).Value = context.Request.QueryString("id")
    cmd.Prepare()
    Dim dr As Data.SqlClient.SqlDataReader = cmd.ExecuteReader()
    dr.Read()
    context.Response.ContentType = dr("imgType").ToString()
    context.Response.BinaryWrite(CByte(dr("imgData"))) ' ----- this is the line with the error
End Sub
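The error is raised because Response.BinaryWrite expects a one-dimensional Byte array, while CByte converts the value to a single Byte. A minimal sketch of the usual fix, assuming the column really holds the image bytes (note the posted query selects Image_Content but the reader is indexed with imgData, so the column name may need to match as well):

    ' Cast the column value to a byte array instead of converting it to a single byte
    context.Response.BinaryWrite(CType(dr("imgData"), Byte()))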
I have a table based around requisitions, and each requisition has a number of positions. That number can change over time through updates to pertinent rows rather than through transaction-like records that record an entire history, and I'm only able to get a monthly snapshot of the table. What I decided to do is still use one table for OLAP (fact_requisitions) but add a column called period_key that refers to the month the data comes from. So if I have two months of data then the table has each requisition twice, possibly with differing position counts, and new requisitions from the second month are only present once. Then I tried to filter the MDX query like so:
SELECT
  { ( [Dim TimeRequestClosed].[Year - MonthNumber].[Year_Text].&[2008].&[1],
      [Dim Requisitions].[Period].[Period Key].&[200801] ) } ON COLUMNS,
  NON EMPTY
  { ( [Dim Location].[Region Name].MEMBERS,
      [Dim Location].[Period Key].&[200801] ) } ON ROWS
FROM [Requisitions]
WHERE [Measures].[Request Closed Date Count]
This query doesn't work even though the data is there; it just returns nulls. Am I going about this all wrong? If not, what might I be doing wrong, and how would I get the query to return more than one period (e.g. tell Dim Requisitions to match up with Dim Location on the period key)?
From what I can see, the 'varbinary(max)' data type is not supported, and the 'image' data type is supposed to go away. Is there some other way to store large chunks (10MB to 100MB) of data into an SSEv DB?
If I have to use the 'image' data type to do this, does anyone have a code sample that would let me push an array() of numbers into an 'image' field, and unload an 'image' field back into an array()?
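A minimal sketch of one way to do it, assuming the SQL Server Compact (SSEv) managed provider in System.Data.SqlServerCe and a hypothetical table BlobStore (Id int, Payload image); the numbers are packed into a Byte() before the insert and unpacked again after the read:

Imports System.Data
Imports System.Data.SqlServerCe

Module ImageColumnDemo
    Sub Main()
        ' Pack an array of numbers into a byte array
        Dim values() As Double = {1.5, 2.5, 3.5}
        Dim bytes(values.Length * 8 - 1) As Byte
        Buffer.BlockCopy(values, 0, bytes, 0, bytes.Length)

        Using cn As New SqlCeConnection("Data Source=mydb.sdf")
            cn.Open()

            ' Push the byte array into the 'image' column
            Using cmd As New SqlCeCommand("INSERT INTO BlobStore (Id, Payload) VALUES (@id, @data)", cn)
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = 1
                cmd.Parameters.Add("@data", SqlDbType.Image).Value = bytes
                cmd.ExecuteNonQuery()
            End Using

            ' Unload the 'image' column back into an array
            Using cmd As New SqlCeCommand("SELECT Payload FROM BlobStore WHERE Id = @id", cn)
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = 1
                Dim stored() As Byte = CType(cmd.ExecuteScalar(), Byte())
                Dim unpacked(stored.Length \ 8 - 1) As Double
                Buffer.BlockCopy(stored, 0, unpacked, 0, stored.Length)
            End Using
        End Using
    End Sub
End Module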
Does anyone have a routine that takes a row of data from a database, duplicates or triplicates it, appends some information to it, and writes it out as 2/3 CSV rows?
Col 9 is char(2) and Col10 is char(34). It is Col10 that needs to be broken up into several columns, depending on the value in Col 9.
Col1 to Col3 is the key to the record.
So, say record 1 has Col 9 value 'AA'; then Col10 (34 bytes) is to be split into 10+10+10+4 (four columns). The value 'AA' can repeat for several records, and the value in Col 10 can change for the same value 'AA'.
Now say record 27 has Col 9 value 'BB'; then Col 10 is to be split as 5+25+4 (3 columns).
There are 15 such unique values of Col 9. I have the file layouts for Col 10 for each distinct value of Col 9. So, using the file layouts and Table A (which exists in my database), how do I proceed?
Do I need to make 15 tables (one for each of the 15 unique Col 9 values)? The structure Col1..Col2..Col3...Col9 (the key fields and Col 9) would be common to every table, and the file layouts would supply the additional columns specific to each of these tables.
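One way to avoid 15 physical tables is to split Col10 in a view or SELECT, driving the SUBSTRING offsets from the Col 9 value. A minimal sketch under the two layouts described above (the 'AA' 10+10+10+4 and 'BB' 5+25+4 splits; table and column names are the generic ones used in the post):

SELECT Col1, Col2, Col3, Col9,
       CASE Col9
            WHEN 'AA' THEN SUBSTRING(Col10, 1, 10)
            WHEN 'BB' THEN SUBSTRING(Col10, 1, 5)
       END AS Part1,
       CASE Col9
            WHEN 'AA' THEN SUBSTRING(Col10, 11, 10)
            WHEN 'BB' THEN SUBSTRING(Col10, 6, 25)
       END AS Part2,
       CASE Col9
            WHEN 'AA' THEN SUBSTRING(Col10, 21, 10)
            WHEN 'BB' THEN SUBSTRING(Col10, 31, 4)
       END AS Part3,
       CASE Col9
            WHEN 'AA' THEN SUBSTRING(Col10, 31, 4)   -- 'BB' has only 3 parts, so Part4 stays NULL
       END AS Part4
FROM TableA;
-- Extend each CASE with one WHEN per remaining Col 9 value, taken from its file layout.
-- If separate tables per Col 9 value really are required, an INSERT ... SELECT filtered on
-- Col9 = 'AA' (etc.) with the same SUBSTRING expressions populates each one.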
In my database there is a text field that is used to enter a street address. This address could be a few lines long, each line with a carriage return at the end. Is there a way to search for these carriage returns and break out what is in each line separately? Thanks. Mike
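A minimal sketch of pulling the first line apart from the rest, assuming the lines are separated by CR+LF and the address fits in varchar(8000) after a cast (the 'text' type itself can't be used directly with most string functions); the table Addresses and column StreetAddress are hypothetical names:

SELECT CASE WHEN CHARINDEX(CHAR(13) + CHAR(10), addr) > 0
            THEN LEFT(addr, CHARINDEX(CHAR(13) + CHAR(10), addr) - 1)
            ELSE addr END AS FirstLine,
       CASE WHEN CHARINDEX(CHAR(13) + CHAR(10), addr) > 0
            THEN SUBSTRING(addr, CHARINDEX(CHAR(13) + CHAR(10), addr) + 2, 8000)
            ELSE '' END AS Remainder
FROM (SELECT CAST(StreetAddress AS varchar(8000)) AS addr FROM Addresses) AS a;
-- Applying the same CHARINDEX/SUBSTRING step to Remainder (repeatedly, or in a splitter
-- function) breaks out the second, third, ... lines.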
I have a big table (120 million records) and I want to take all of it and insert it into another table. Since this bulk insert operation can cause all kinds of performance problems, I would like to do the insert in small chunks. The table does not have any identity column.
Can someone give me an example with ROWCOUNT, or with a loop, that runs an INSERT INTO ... SELECT statement each time and inserts, for example, 5000 rows per pass?
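A minimal sketch of the SET ROWCOUNT approach, assuming the source table has some unique key (the hypothetical KeyCol below) so that rows already copied can be excluded on the next pass; on SQL Server 2005 and later, TOP (5000) with an ORDER BY on the key is the preferred equivalent:

SET ROWCOUNT 5000;           -- limit each INSERT ... SELECT to 5000 rows

DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (KeyCol, Col1, Col2)
    SELECT s.KeyCol, s.Col1, s.Col2
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.KeyCol = s.KeyCol);

    SET @rows = @@ROWCOUNT;   -- 0 once every source row has been copied
END

SET ROWCOUNT 0;              -- back to unlimited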
I have a table that's 25,000,000 records... about 10 fields. I need to export this data to a flat file in no more than 500,000 record chunks. I've tried the following algorithm, adding a flag field called "exported" with default value 0.
do:
- mark a random 500,000 records, setting exported = -1
- export everything in that table where exported = -1
- set exported = 1 where exported = -1
loop
This was pretty slow, taking about 10 hours last night to run.
I find myself wanting a sort of split-dataset task in SSIS: being able to split a chunk of records out of a dataset and handle them separately. Anyone have ideas for me?
I need to export records to a flat file using a dataflow task, but want no more than 50,000 records in each file. What's the best way to automate this?
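For all three of these chunking questions, one alternative to random flag-and-update passes is to compute a batch number once with ROW_NUMBER and then export one batch at a time, each batch becoming one file or one pass of the loop. A minimal sketch, assuming SQL Server 2005 or later and a hypothetical table dbo.BigTable with key column Id (the 500,000 divisor is just the chunk size and can be 50,000 or anything else):

-- Assign every row to a 500,000-row batch in a single pass
SELECT Id,
       (ROW_NUMBER() OVER (ORDER BY Id) - 1) / 500000 AS BatchNo
INTO dbo.BigTableBatches
FROM dbo.BigTable;

-- Export (or otherwise process) one batch at a time, e.g. batch 0:
SELECT t.*
FROM dbo.BigTable AS t
JOIN dbo.BigTableBatches AS b ON b.Id = t.Id
WHERE b.BatchNo = 0;

-- In SSIS, the same query can be parameterized on BatchNo inside a For Loop container,
-- writing each batch to its own flat file.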
Here's a little SP to break up those long-running, massively-locking, bring-app-to-a-halt queries. By default it does 500 rows at a time and allows for a maximum SQL query size of 4000 characters; it should be trivial to adjust those.
Cheers -b
CREATE PROCEDURE p_BatchExecute (@vcSQL varchar(4000))
AS
SET NOCOUNT ON

DECLARE @iRows int
SELECT @iRows = 1

SET ROWCOUNT 500

WHILE @iRows > 0
BEGIN
    PRINT 'Executing batch of 500...'
    EXEC (@vcSQL)
    SET @iRows = @@ROWCOUNT
END
GO
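A usage sketch (the table and predicate are hypothetical); note that the loop only reaches zero if the statement stops matching the rows it has already processed, e.g. a DELETE, or an UPDATE whose WHERE clause excludes already-updated rows:

EXEC p_BatchExecute 'DELETE FROM dbo.AuditLog WHERE LoggedOn < ''20070101''';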
We have data that consists of an employee number, a start time and a finish time, similar to the example below
EMP    STARTTIME               ENDTIME
00001 10-Feb-2012 06:00:00 10-Feb-2012 10:00:00
00002 10-Feb-2012 07:15:00 10-Feb-2012 10:00:00
00003 10-Feb-2012 08:00:00 10-Feb-2012 10:00:00
I am trying to come up with a procedure in SQL that will give me each 15 minute block throughout the day and a count of how many employees are expected to be at work at the start of that 15 minute block. So, given the example above I would like to return
10-Feb-2012 00:00:00    0
10-Feb-2012 00:15:00    0
10-Feb-2012 06:00:00    1
10-Feb-2012 06:15:00    1
[code]....
I'm not too worried if the date part is not included in the result as this could be determined elsewhere, but how can I do this grouping/counting?
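A minimal sketch of one way to do the grouping and counting, assuming SQL Server 2005 or later and a hypothetical table dbo.Shifts with the EMP, STARTTIME and ENDTIME columns shown above; a recursive CTE generates the 96 quarter-hour block starts for the day and each block is counted against the shifts that span it:

DECLARE @day datetime;
SET @day = '20120210';

WITH Blocks AS
(
    SELECT @day AS BlockStart
    UNION ALL
    SELECT DATEADD(minute, 15, BlockStart)
    FROM Blocks
    WHERE BlockStart < DATEADD(minute, 15 * 95, @day)
)
SELECT b.BlockStart,
       COUNT(s.EMP) AS EmployeesAtWork
FROM Blocks AS b
LEFT JOIN dbo.Shifts AS s
       ON s.STARTTIME <= b.BlockStart
      AND s.ENDTIME   >  b.BlockStart
GROUP BY b.BlockStart
ORDER BY b.BlockStart
OPTION (MAXRECURSION 100);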
I'm using the code below to send files that are in a blob file in my database to the browser client. The code sends the file in chunks in order to increase performance. The file I'm using to test with is 7MB. It works great on Windows XP with any browser. It takes virtually the same amount of time compared to downloading the file directly from the webserver. However, Windows 2000 and Mac OS X both take about 4x the amount of time it takes to download the file on XP machines. Why the performance difference? Is there anything I can do to fix this? I tried downloading the file directly from the webserver instead of getting it out of the database and it takes the same amount of time on all 3 OS. I had the same problem on Windows XP when I wasn't sending the file in chunks, but after using the code below, it started working for XP only.
Dim bufferSize As Integer = 24000
Dim outbyte(bufferSize - 1) As Byte
Dim retval As Long
Dim startIndex As Long = 0

Dim sql As String = "SELECT ..."
Dim cmd As New SqlCommand(sql, conn)
conn.Open()
Dim dr As SqlDataReader = cmd.ExecuteReader(CommandBehavior.SequentialAccess)

If dr.Read() Then
    ' Reset the starting byte for a new BLOB.
    startIndex = 0

    ' Read bytes into outbyte() and retain the number of bytes returned.
    retval = dr.GetBytes(DocCol, startIndex, outbyte, 0, bufferSize)

    Current.Response.Clear()
    Current.Response.Buffer = True
    Current.Response.ContentType = "application/octet-stream"
    Current.Response.AddHeader("Content-Disposition", "attachment; filename=" & myfile & "." & myextension)

    Do While retval = bufferSize
        Current.Response.BinaryWrite(outbyte)
        Current.Response.Flush()

        ' Reposition the start index to the end of the last buffer and fill the buffer.
        startIndex += bufferSize
        retval = dr.GetBytes(DocCol, startIndex, outbyte, 0, bufferSize)
    Loop

    ' Write the remainder of the last chunk.
    Dim remaining(retval - 1) As Byte
    Array.Copy(outbyte, 0, remaining, 0, retval)
    Current.Response.BinaryWrite(remaining)
    Current.Response.Flush()
    Current.Response.Close()
End If

dr.Close()
conn.Close()
Using the SqlClient provider, I'm trying to write big data chunks of maybe 20 MB each to SQL Server, storing them in BLOBs using blobColumn.Write(...) via a .NET 2.0 DbCommand object that calls a stored procedure:
CREATE PROCEDURE [dbo].[putBlobByPK]
(
@id dKey
, @value VARBINARY(MAX)
, @offset bigint
, @length bigint
, @ModDttm dModDttm OUT
, @ModUser dModUser OUT
, @ModClient dModClient OUT
, @ModAppl dModAppl OUT
)
....
When doing this, I can do it exactly three times; then the application hangs (forever).
When looking in the SQL Server log, I find the following two errors:
Error: 4014, Severity: 20, Status: 2.
A fatal error occurred while reading the input stream from the network. The session will be terminated.
I don't get this error on the client! OK, the session died.
What may be the problem?
I write big chunks like this to avoid many small writes, because the data will be replicated later using peer-to-peer replication, and the more writes used to store the total BLOB, the bigger the transaction log of the subscriber database becomes.
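For reference, the usual T-SQL pattern for writing one chunk of a varbinary(max) column is an UPDATE with the .WRITE clause; a minimal sketch with hypothetical table and column names (not the poster's actual procedure body), which also makes it easy to send smaller chunks without multiplying the number of logical writes per row:

-- Writes @value into the BLOB starting at @offset; @length bytes at that position are replaced.
UPDATE dbo.BlobTable
SET    BlobData.WRITE(@value, @offset, @length)
WHERE  Id = @id;

-- Passing @offset = NULL appends @value at the end of the existing data, so a client loop
-- can send, say, 1 MB chunks instead of 20 MB in a single call.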
I've got a table that has over 500,000 rows in it. Now I need to convert the whole thing into Excel to import into another application, so I need to break the table into 10 different tables. How can I do that?
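A minimal sketch of one way to split the rows into 10 roughly equal groups, assuming SQL Server 2005 or later and a hypothetical source table dbo.BigExport with key column Id; SELECT ... INTO then materializes each group as its own table:

-- Tag every row with a group number from 1 to 10
SELECT *, NTILE(10) OVER (ORDER BY Id) AS GroupNo
INTO dbo.BigExportGrouped
FROM dbo.BigExport;

-- Create one table per group (repeat for 2..10, or wrap in a loop of dynamic SQL)
SELECT * INTO dbo.BigExport_Part1 FROM dbo.BigExportGrouped WHERE GroupNo = 1;
SELECT * INTO dbo.BigExport_Part2 FROM dbo.BigExportGrouped WHERE GroupNo = 2;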
I hope I can get this across clearly. I have a table that needs to be broken into 3 tables.

Col1 Col2 Col3 Col4 Col5 Col6 Col7

Col1 and Col2 need to go into LookupTable1, and Col3 and Col4 into LookupTable2. If Col5 is twice the width... haha, just kidding... so Col5 and Col6 go into LookupTable3. There is a 4th table made up of foreign keys, which are the PKs of LookupTable1, 2 and 3.

My question is: how do I get the data from the columns of each row, add it to its respective lookup table, and sequentially step through the table, repeating that until I've processed every row?

Thanks folks, T.B
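A set-based approach avoids stepping through the table row by row. A minimal sketch, assuming the lookup tables have identity primary keys and the source table is called dbo.SourceTable (all names hypothetical):

-- Populate the lookup tables with the distinct value pairs
INSERT INTO dbo.LookupTable1 (ColA, ColB) SELECT DISTINCT Col1, Col2 FROM dbo.SourceTable;
INSERT INTO dbo.LookupTable2 (ColA, ColB) SELECT DISTINCT Col3, Col4 FROM dbo.SourceTable;
INSERT INTO dbo.LookupTable3 (ColA, ColB) SELECT DISTINCT Col5, Col6 FROM dbo.SourceTable;

-- Build the 4th (link) table by joining each source row back to its lookup keys
INSERT INTO dbo.LinkTable (Lookup1Id, Lookup2Id, Lookup3Id, Col7)
SELECT l1.Id, l2.Id, l3.Id, s.Col7
FROM dbo.SourceTable AS s
JOIN dbo.LookupTable1 AS l1 ON l1.ColA = s.Col1 AND l1.ColB = s.Col2
JOIN dbo.LookupTable2 AS l2 ON l2.ColA = s.Col3 AND l2.ColB = s.Col4
JOIN dbo.LookupTable3 AS l3 ON l3.ColA = s.Col5 AND l3.ColB = s.Col6;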
Is it possible, by any kind of workaround, to break the 8KB limit on user-defined datatypes?
My datatype can contain an arbitrary number of double-precision points, meaning that in the best case I can only store 512 points (2 x 8 x 512). There are a few extra bytes used for something else, but this is roughly the maximum, which is far from what I need in many cases. I serialize the object myself to ensure that I only store what I really need.
I have been working on a pretty ugly stored procedure recently. While debugging, I added a CHAR(10) to the end of each line of the SQL query so I could copy it to Query Analyzer (QA) and debug the SQL syntax output from the stored procedure.
It had no effect on the stored procedure working, but when I copied the query to QA I got the error below, so I removed them all and added them back one line at a time to find the problem.
--Server: Msg 170, Level 15, State 1, Line 3 --Line 3: Incorrect syntax near ','.
Below are the 2 queries; the only difference is the Char(10) between Amt6 and Amt7!
I have a column that has text delimited by a percent sign that I wish to turn into rows.

Example: a column contains ROBERT%CAMARDA. I want to turn that into two rows, one row with ROBERT and another row with CAMARDA. I will have source rows that have zero, one, or many percent-sign delimiters, which will correspond to that many rows (one percent sign will create 2 rows, 2 percent signs will create 3 rows, and so forth).

Any thoughts?

TIA
Rob
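A minimal sketch of a splitter using a recursive CTE, assuming SQL Server 2005 or later and a hypothetical table dbo.Names with columns Id and FullText; each pass peels off the value before the next percent sign:

WITH Split AS
(
    SELECT Id,
           LEFT(FullText + '%', CHARINDEX('%', FullText + '%') - 1)            AS Part,
           SUBSTRING(FullText + '%', CHARINDEX('%', FullText + '%') + 1, 8000) AS Rest
    FROM dbo.Names
    UNION ALL
    SELECT Id,
           LEFT(Rest, CHARINDEX('%', Rest) - 1),
           SUBSTRING(Rest, CHARINDEX('%', Rest) + 1, 8000)
    FROM Split
    WHERE Rest <> ''
)
SELECT Id, Part
FROM Split
ORDER BY Id;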
Hi, I have a report that is frustrating me. I've built this report, and it is not yet used in production. What it does is page break in places that I don't understand.
The first place it breaks is after a line that had wrapped text to the line below, even though there was plenty of room to fit that line, and the entire rest of the group, on the page.
The second area I honestly have NO idea why it breaks.
Hi people, I have a little problem with this! There are some variables in C# code:

int personID = 10;
string personName = "Tom";
System.IO.BinaryReader reader = new System.IO.BinaryReader(FileUploadPersonPhoto.PostedFile.InputStream);
byte[] personPhoto = reader.ReadBytes(FileUploadPersonPhoto.PostedFile.ContentLength);

After that, there is a SQL query:

string query = "INSERT INTO PersonTable (PersonID, PersonName, PersonPhoto) VALUES ("
    + personID + ", '" + personName + "', " + personPhoto + ")";

In debug mode, the value of the query is "INSERT INTO PersonTable (PersonID, PersonName, PersonPhoto) VALUES (10, 'Tom', System.Byte[])" and it does not work! Is there any prefix or something else I should add to make it work? Thank you in advance!

P.S. I do not want to use SQL parameters at this point!
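Concatenating a byte[] into a string just calls its ToString(), which is why "System.Byte[]" ends up in the query text. Without parameters, the bytes would have to be written out as a binary literal in hexadecimal form; this is what the statement itself needs to look like (the hex value shown is only an illustrative placeholder, not real image data):

-- The PersonPhoto value must appear as a 0x... hexadecimal literal, with no quotes around it
INSERT INTO PersonTable (PersonID, PersonName, PersonPhoto)
VALUES (10, 'Tom', 0x89504E470D0A1A0A);
-- Building that literal means converting every byte of the array to two hex digits and
-- prefixing the result with 0x, which is exactly the work a parameter would do for you.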
I have a binary column with length 20 in a SQL Server table. I store C# byte[] values whose length varies from 0 to 19 bytes. So, for example, if I store 16 bytes and then try to retrieve the value back into a byte[] in C#, it returns the whole length of 20. In the debugger I can see my values in positions 0 to 15, which is what I want, but positions 16 to 19 are zero. How can I get back just the length I stored?
I use DataRow to get the whole row, and from the row object I extract the byte[] based on the column name.
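A fixed-length binary(20) column always pads the stored value out to 20 bytes, so the trailing zeros come from the server, not from the client read. A minimal sketch of the usual fix, assuming the column can be altered (names hypothetical): switch it to varbinary, which preserves the stored length.

-- varbinary(20) stores only the bytes supplied, so a 16-byte value comes back as 16 bytes
ALTER TABLE dbo.MyTable
ALTER COLUMN MyBinaryCol varbinary(20);
-- If the column must stay binary(20), the original length has to be kept separately
-- (e.g. in an extra tinyint column) and the array trimmed on the client after reading.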
I need some help working with CLR UDTs. I have created two UDTs, called Trajectory and Point. Each Trajectory consists of a list of Points. Each Point consists of three members: lon (type double), lat (type double), and a datetime.
I have written my own IBinarySerialize.Write method for the Trajectory type, which is the following:
Dim maxSize As Integer = 4000
Dim value As String = ""
Dim paddedvalue As String
Dim i As Integer
Dim pt As Point

For i = 0 To point_list.Count - 1
    pt = point_list.Item(i)
    If i = 0 Then
        value = value & pt.X & "|" & pt.Y & "|" & pt.D
    Else
        value = value & ">" & pt.X & "|" & pt.Y & "|" & pt.D
    End If
Next
I have a bigint column called "MillisecondsSince1970" that I need to convert to a date. SSIS errors out when I use DATEADD with the 8-byte int (if I use a 4-byte int it works, but the column values are bigger than 4 bytes will hold). The error is really lame:
[Derived Column [79]] Error: The "component "Derived Column" (79)" failed because error code 0xC0049067 occurred, and the error row disposition on "output column "Date" (100)" specifies failure on error. An error occurred on the specified object of the specified component.
Anyone have a way around it... a VB.NET equivalent of DATEADD or something else I can do?
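One workaround that avoids overflowing the 4-byte argument is to split the bigint into whole days and leftover milliseconds and apply them in two steps. A minimal sketch in T-SQL (the same split works in the SSIS Derived Column expression language; the sample value below is just an illustration of MillisecondsSince1970):

DECLARE @ms bigint;
SET @ms = 1325419200000;   -- example value: 2012-01-01 12:00:00 UTC

SELECT DATEADD(millisecond, CAST(@ms % 86400000 AS int),   -- leftover ms within the day (always < 2^31)
       DATEADD(day,         CAST(@ms / 86400000 AS int),   -- whole days since the epoch
               '19700101'));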
Can we regenerate a .rdl file based on the byte[] stream created by the Render method?
The user sends the report name with parameters to the application server, the application server sends the request to the Reporting Server, and it gets the report back (via the Render method). But how do we pass the returned byte[] stream to the user and show him or her the report?
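Render returns the rendered report output (PDF, Excel, HTML, and so on), not the report definition, so the .rdl cannot be reconstructed from it; GetReportDefinition is the web service call that returns the definition itself. For showing the rendered bytes to the user, a minimal sketch of streaming them back from an ASP.NET page, assuming the Render call also returned the MIME type and file extension (variable names here are illustrative):

' result is the Byte() returned by Render; mimeType and fileExtension are its out parameters
Response.Clear()
Response.ContentType = mimeType
Response.AddHeader("Content-Disposition", "inline; filename=report." & fileExtension)
Response.BinaryWrite(result)
Response.Flush()
Response.End()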
On a webpage, there are filters to choose from, like date, amount, and SSN (multiple filters can be chosen). I have a single query so far:

SqlCommand cmd = new SqlCommand("SELECT [column1], [column2], [column3], [column4], [column5] FROM [table] WHERE [column4] = 'condition4' AND [column5] = @total_bill AND [last_change] >= @txtStartDate AND [last_change] <= @txtEndDate", Conn);
cmd.Parameters.Add(new SqlParameter("@total_bill", total_bill1.Text));
cmd.Parameters.Add(new SqlParameter("@txtStartDate", txtStartDate.Text));
cmd.Parameters.Add(new SqlParameter("@txtEndDate", txtEndDate.Text));

I want to break the query up so that it executes based on different sets of conditions (filters). If I don't select the date filter, the above query will not execute properly. Please help.
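One way to keep a single statement while making each filter optional is to pass NULL for any filter the user did not pick and let the WHERE clause skip it. A minimal sketch of the query text, reusing the parameter names from the post (the IS NULL OR pattern is the standard "optional parameter" idiom):

SELECT [column1], [column2], [column3], [column4], [column5]
FROM   [table]
WHERE  [column4] = 'condition4'
  AND (@total_bill   IS NULL OR [column5]     =  @total_bill)
  AND (@txtStartDate IS NULL OR [last_change] >= @txtStartDate)
  AND (@txtEndDate   IS NULL OR [last_change] <= @txtEndDate);
-- On the client, add each SqlParameter with DBNull.Value whenever its filter is unchecked.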
I’m okay with simple queries, but as I’m no expert and have failed to find the right wording to describe this method (if it is at all possible to do), I have come to ask here.
What I would like to do is take a column from a query and then break down that column into separate results.
So the full query results: 36,18/09/2007 10:00:00,NULL,000102000304,NULL
The column I would like to break down is (Unique Reference Number): 000102000304
And I would like to break it down to get the last 2 parts (0003 and 04): 0001 | 02 | 0003 | 04
Is this possible to do? If so where should I be looking or what should I be looking at?
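Assuming the reference number always uses the fixed 4+2+4+2 layout shown above, SUBSTRING can pull the parts out directly; a minimal sketch with hypothetical table and column names:

SELECT SUBSTRING(RefNumber, 1, 4)  AS Part1,   -- 0001
       SUBSTRING(RefNumber, 5, 2)  AS Part2,   -- 02
       SUBSTRING(RefNumber, 7, 4)  AS Part3,   -- 0003
       SUBSTRING(RefNumber, 11, 2) AS Part4    -- 04
FROM   dbo.MyQueryResults;
-- RIGHT(RefNumber, 6) gives the last two parts together if that is all that is needed.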
I want to show a third-level overview of how many to-do items are in various statuses (pending, declined and completed) for the third-level individuals.

In this example there are 2 regional leaders (level 3), each with local leaders (level 2) and individual team members (level 1).

Status "1" is a new or pending to-do item. Status "2" is a declined item. Status "3" is a completed item.
Here's what I want the output to look like (I may need to add columns for the final report but it should look as follows):
LEVEL3 | Status 1 | Status 1 | Status 2 | Status 3 |
LEADER | TOTAL    | 2012     | TOTAL    | TOTAL    |
-------|----------|----------|----------|----------|
paul   | 3        | 1        | 1        | 0        |
roger  | 1        | 1        | 1        | 0        |
Here's some sample data similar to my live DB:
CREATE TABLE #items (id int, datestamp datetime, userid int, status int)
INSERT INTO #items (id,datestamp,userid,status) VALUES (1,'12/26/2011',1,1)
INSERT INTO #items (id,datestamp,userid,status) VALUES (2,'12/26/2011',2,1)
INSERT INTO #items (id,datestamp,userid,status) VALUES (3,'1/2/2012',4,1)
INSERT INTO #items (id,datestamp,userid,status) VALUES (4,'1/3/2012',1,2)
[Code] ....
I have a view called hierarchy that shows me the hierarchy for each level 1 user. Here's the source for the view:
SELECT a.id 'level1', b.id 'level2', c.id 'level3'
FROM #users a
INNER JOIN #users b ON a.reportsto = b.id
INNER JOIN #users c ON b.reportsto = c.id
I've run into table/view limits when using a straight query, and timeouts when trying to create a cursor that loops through all level-3 users and populates a temporary table with an INSERT and multiple UPDATE statements to get the individual counts for each column for each user.
(Just a note on the actual database in case it's relevant: It has about 1000 LEVEL1 users, 50 Level2, and 15 Level 3 users. It will generate about 1400 to-do items per week.)
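A single set-based pass with conditional aggregation usually avoids both the cursor and the view limits. A minimal sketch using the sample tables and the hierarchy view above; it assumes #users also carries a name column (implied by the leader names in the sample output), and the "2012" column is read as "status 1 items created in 2012", so adjust the date filter to whatever the report actually needs:

SELECT u.name AS Level3Leader,
       SUM(CASE WHEN i.status = 1 THEN 1 ELSE 0 END)                               AS Status1Total,
       SUM(CASE WHEN i.status = 1 AND YEAR(i.datestamp) = 2012 THEN 1 ELSE 0 END)  AS Status1_2012,
       SUM(CASE WHEN i.status = 2 THEN 1 ELSE 0 END)                               AS Status2Total,
       SUM(CASE WHEN i.status = 3 THEN 1 ELSE 0 END)                               AS Status3Total
FROM hierarchy AS h
JOIN #items    AS i ON i.userid = h.level1
JOIN #users    AS u ON u.id     = h.level3
GROUP BY u.name
ORDER BY u.name;
-- If #users has no name column, group and select on h.level3 instead of u.name.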
I need a query to expand the count from one month to the entire year of 2013. Below is my query, which gets the values for one month in 2013. How do I expand it so that it generates results for all of 2013?
Query:
SELECT RD.RPTDESC,
       Count(SR.RPTDESC) AS ReportCount,
       sum(SR.Hrs) as ProdHours,
       Sum(SR.Mins) as ProdMins,
       (sum(SR.Hrs)*60 + Sum(SR.Mins)) as TotalProdTime,
       (sum(SR.Hrs)*60 + Sum(SR.Mins)/Count(SR.RPTDESC)) as AverageProdTime,
       sum(SR.TriageHrs) as TriageHours,
       Sum(SR.TriageMins) as TriageMins,
       (sum(SR.TriageHrs)*60 + Sum(SR.TriageMins)) as TotalTriageTime,
       (sum(SR.TriageHrs)*60 + Sum(SR.TriageMins)/Count(SR.RPTDESC)) as AverageTriageTime
Also, when I run the query for individual months, there are certain months where no data is returned for certain report IDs; I would like to populate all zeroes for such a row (for example, report ID 306 in February). I also need to know how to get decimal values in the average fields.
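A minimal sketch of the usual approach, assuming the source tables join roughly as RD (report descriptions) to SR (report rows) on RPTDESC and SR has a date column (CompletedDate is a hypothetical name): group by month for the whole year, LEFT JOIN from the report list crossed with a month list so empty months still appear as zeroes, and use 60.0 (or a CAST to decimal) so the averages are not truncated by integer division. Note the average here divides the whole parenthesized total, which may differ from the original expression's precedence.

SELECT RD.RPTDESC,
       m.MonthNo,
       COUNT(SR.RPTDESC)                          AS ReportCount,
       ISNULL(SUM(SR.Hrs) * 60 + SUM(SR.Mins), 0) AS TotalProdTime,
       ISNULL((SUM(SR.Hrs) * 60.0 + SUM(SR.Mins)) / NULLIF(COUNT(SR.RPTDESC), 0), 0) AS AverageProdTime
FROM RD
CROSS JOIN (SELECT 1 AS MonthNo UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
            UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8
            UNION ALL SELECT 9 UNION ALL SELECT 10 UNION ALL SELECT 11 UNION ALL SELECT 12) AS m
LEFT JOIN SR ON SR.RPTDESC = RD.RPTDESC
            AND YEAR(SR.CompletedDate)  = 2013
            AND MONTH(SR.CompletedDate) = m.MonthNo
GROUP BY RD.RPTDESC, m.MonthNo
ORDER BY RD.RPTDESC, m.MonthNo;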