Using the new encryption included in SQL Server 2005, what is a good way to determine what length I should use for the column?
For example, I am encrypting a column whose max length is about 30 characters, but once encrypted, the value ranges from around 50 to no more than 68 characters.
So if I had a column with a max of 500 or so characters, how could I know what varbinary length I should set it to if I were to encrypt it, without actually finding the highest value I could possibly fit into the field?
Is it good practice to just make it a varbinary(max) field?
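One way to take the guesswork out of it (a minimal sketch, assuming a symmetric key, here called TestKey, has already been created and opened): encrypt a worst-case value of the column's maximum length and check how many bytes come back, since the ciphertext size depends on the algorithm and its block padding rather than growing linearly with the input.

-- TestKey is a hypothetical key name; swap in your own key/certificate setup.
DECLARE @plain nvarchar(500);
SET @plain = REPLICATE(N'X', 500);   -- worst-case input for a 500-character column

SELECT DATALENGTH(EncryptByKey(Key_GUID('TestKey'), @plain)) AS encrypted_bytes;

Adding a block or two of headroom to that number, or simply using varbinary(max) as you suggest, also protects you if the key or algorithm changes later.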
I have a package that I migrated from DTS. There are several "execute SQL" tasks that take an Input parameter of "myVariable", which is a string defined at the package level and has a default value assigned to it.
However, the package is throwing this error:
Error 1 Validation error. Err LogComletion : Failed to lock variable "myVariable" for read access with error 0xC001F032 "The variable name is ambiguous because multiple variables with this name exist in different namespaces. Specify namespace-qualified name to prevent ambiguity.".
How come I cannot use this variable when defined at the package level? If I move the variable to the task level, it works fine, but I want to re-use the variable in several places in the package.
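Not something I can verify against your package, but the error text itself hints at the fix: qualify the reference with its namespace so the two same-named variables (presumably one in the User namespace at package scope and another the migration created at a different scope) can be told apart. The qualified forms look like:

User::myVariable        (in the parameter mapping and variable drop-downs)
@[User::myVariable]     (inside SSIS expressions)

Deleting or renaming the duplicate variable at the other scope should also clear the ambiguity.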
I cannot find the right data type for the parameter mapping in the Execute SQL Task Editor to make this work.
1. Execute SQL Task 1 - select max(columnA) from tableA. ColumnA is varbinary(8); the result is stored in a variable whose data type is Object.
2. Execute SQL Task 2 - update tableB set columnB = ? What data type should I use to map the parameter? I tried different data types; none of them worked except GUID, but that returned the wrong result.
Do SSIS variables support the varbinary data type? I know there is a known issue with the bigint data type and a workaround for it. Is it the same situation with varbinary?
We can use 'sp_executesql' to execute any statement. I have searched around and it seems people mostly need dynamic SQL to work with a table/column that is unknown in advance. My idea is that the dynamic SQL feature is ideal for passing blocks of code around (like delegates). In particular, you may need to execute different procedures under some acquired locks: a procedure would acquire the locks, execute the code, and release the locks. The problem, however, is that I cannot find any specification for variable-length parameter lists; it does not seem feasible for stored procedures. Nevertheless, 'sp_executesql' itself does accept a variable number of parameters. How? Can we look at its definition?
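For what it's worth, a minimal sketch of what that variable-argument behaviour looks like from the caller's side: the second argument to sp_executesql is itself a string that declares the parameters, and the remaining arguments are matched against that declaration, so every call can have a different "shape". As far as I know there is no ordinary T-SQL definition to look at; it is resolved internally by the engine, and regular stored procedures have no equivalent way to declare a variable-length parameter list.

DECLARE @sql nvarchar(200);

-- Two integer parameters this time...
SET @sql = N'SELECT @a + @b AS total';
EXEC sp_executesql @sql, N'@a int, @b int', @a = 1, @b = 2;

-- ...and a single parameter of a different type the next time.
SET @sql = N'SELECT @name AS name';
EXEC sp_executesql @sql, N'@name sysname', @name = N'hello';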
I'm new to SQL Server. I installed a copy of 6.5 on my server and set it up today. I received a db from a colleague and have been unable to find out what the data types and lengths are. This may be very easy, but I need to know.
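If it's the column data types and lengths you're after, sp_help (or the system tables directly) should get you there. It has been around since well before 6.5, though I don't have a 6.5 instance handy to confirm the exact output, so treat this as a sketch; the table name authors is just a placeholder.

EXEC sp_help 'authors'      -- lists each column with its type and length

-- or straight from the system tables (old-style join for maximum compatibility):
SELECT c.name, t.name AS type_name, c.length
FROM syscolumns c, systypes t
WHERE c.id = OBJECT_ID('authors')
  AND c.usertype = t.usertype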
Hi, a query is exceeding the length of a varchar/nvarchar variable, because I am picking up data from each record in the table and feeding it into the query. Please suggest a way to handle this. Sample query:

SELECT P1.*, (P1.Q1 + P1.Q2 + P1.Q3 + P1.Q4) AS YearTotal
FROM (SELECT Year,
             SUM(CASE P.Quarter WHEN 1 THEN P.Amount ELSE 0 END) AS Q1,
             SUM(CASE P.Quarter WHEN 2 THEN P.Amount ELSE 0 END) AS Q2,
             SUM(CASE P.Quarter WHEN 3 THEN P.Amount ELSE 0 END) AS Q3,
             SUM(CASE P.Quarter WHEN 4 THEN P.Amount ELSE 0 END) AS Q4
      FROM Pivot1 AS P
      GROUP BY P.Year) AS P1
GO

Even the P.Quarter field name is being generated dynamically, and my query is exceeding the varchar and nvarchar limit. Thanks in advance.
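Two common ways around the 8000/4000-character cap on varchar/nvarchar variables when building dynamic SQL, sketched against the Pivot1 query above (the statements here are shortened stand-ins for the full generated text):

-- On SQL Server 2005 and later, build the statement in an nvarchar(max) variable.
DECLARE @sql nvarchar(max);
SET @sql = N'SELECT Year, SUM(CASE P.Quarter WHEN 1 THEN P.Amount ELSE 0 END) AS Q1 FROM Pivot1 AS P GROUP BY P.Year';
EXEC sp_executesql @sql;

-- On SQL Server 2000, EXEC() accepts a concatenation of several variables, so the
-- statement can be split into chunks of up to 8000 characters each.
DECLARE @s1 varchar(8000), @s2 varchar(8000);
SET @s1 = 'SELECT Year, SUM(CASE P.Quarter WHEN 1 THEN P.Amount ELSE 0 END) AS Q1 ';
SET @s2 = 'FROM Pivot1 AS P GROUP BY P.Year';
EXEC (@s1 + @s2);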
Morning all, can I have some help with this one please? I have to produce a fixed-length text file based on information from the DB:

Declare @EDIString varchar(MAX)
Declare @RecordType varchar(2)
Declare @RegistrationMark varchar(7)
Declare @Model_Chassis varchar(11)
Declare @LocationCode Varchar(4)
Declare @MovementDate varchar(8)
Declare @IMSAccountCode varchar(5)
Declare @MovementType varchar(8)
Declare @NotUsed1 Varchar(28)
Declare @NotUsed2 varchar(7)

Select @RecordType = RecordType,
       @RegistrationMark = RegistrationMark,
       @Model_Chassis = Model_And_Chassis,
       @LocationCode = LocationCode,
       @MovementDate = MovementDate,
       @IMSAccountCode = IMSAccountCode,
       @Movementtype = MovementTypeCode
from Fiat_OutBound

Once I have selected the information from the DB I need to ensure that each field is the correct length, so I want to pass each variable and its target length into a function that returns the value padded to that length. For example, if LocationCode = 'AB', it needs to be four characters long, so I want to pass it into a function and get back 'AB  '. As I need to do this for 70+ variables, is there an easy way to obtain the declared length of each variable?

Regards, Tom
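One way to do the padding part (a sketch; dbo.PadRight is a hypothetical helper name, and it assumes the file wants right-padding with spaces and truncation of anything longer than the field width). As far as I know T-SQL has no built-in way to read back a variable's declared length at runtime, so the width still has to be passed in:

CREATE FUNCTION dbo.PadRight (@value varchar(max), @width int)
RETURNS varchar(max)
AS
BEGIN
    -- pad with spaces on the right, then cut back to exactly @width characters
    RETURN LEFT(ISNULL(@value, '') + REPLICATE(' ', @width), @width);
END
GO

SELECT dbo.PadRight('AB', 4) + '|' AS padded;   -- returns 'AB  |'

@EDIString can then be built up as dbo.PadRight(@RecordType, 2) + dbo.PadRight(@RegistrationMark, 7) + dbo.PadRight(@Model_Chassis, 11) and so on for the remaining fields.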
I have a requirement to import a file of rows containing fixed length data. The problem is that each row can be one of 5 different formats (i.e. different columns) -- where the "type" of row is indicated by the first two characters of the row. Each row gets inserted into its own table.
Could I use a simple Conditional Split to route the rows? Or is the split only for routing similar rows? Anyway, problems are never this simple...
In addition, each "grouping" of rows is related. The "first" row is considered the "primary" row (and gets a row id via IDENTITY), whereas the remaining rows in the group are "secondary" rows and have foreign key references back to the primary row's id.
Given (using spaces to separate columns and CrLf to show "grouping"):
So, the first three lines are all related to a MSFT record which needs to be spread across multiple tables. The next three lines are all related to AAPL, and the next FOUR lines (yes, each record can have zero, one, or more secondary rows) are related to CSCO.
(If this is still not clear: all the "01" rows will be written to [Table1], with each row getting an IDENTITY value. All the "02" rows will be written to [Table2] with a FK pointing to the correct [Table1] row. All the "03" rows will be written to... and so on.)
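A Conditional Split should be able to do the routing, since the test is just the first two characters of each line. For comparison, here is the same routing idea sketched in plain T-SQL against a hypothetical single-column staging table (the sample values are made up); it is also a fallback if the parent/child keying turns out to be easier to handle after loading:

CREATE TABLE #RawLines (RawLine varchar(500));

INSERT INTO #RawLines VALUES ('01 MSFT primary row data');
INSERT INTO #RawLines VALUES ('02 MSFT secondary row data');
INSERT INTO #RawLines VALUES ('03 MSFT another secondary row');

-- One branch per record type; a Conditional Split would test the same first-two-character
-- expression, with one output per type.
SELECT RawLine
FROM #RawLines
WHERE LEFT(RawLine, 2) = '01';   -- '02', '03', ... for the other outputs/tables

DROP TABLE #RawLines;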
I am running SQLServer 2000 to parse and store records in the EDIX12 format. This consists of variable length delimited records which I am passing to the "transforms" tab to process with VBScript.
The problem is that though each segment has a defined number of fields, N, the standard states that if the final M fields are empty/blank they are not to be sent. Thus, a segment defined to have 20 fields may have 6 the first time I see it, 13 the next time, etc. To access the columns in VBScript I use DTSSource("Col001"). This works as long as the columns are there, but gives an error when they are not. Is there a parameter telling me how many columns are defined? Or is there something akin to IFEXISTS("Colxxx"), or exceptions?
How can I handle this situation? One suggestion has been to pass the entire segment to the Transforms section and break it up there.
Finally, what resources can you point me to for reference? I'd like to get good at using DTS since my client wants their project written for it.
I have a doubt and want to be sure my thinking is correct.
Let's consider two tables, one with fixed-length columns (char) and the other with variable-length columns (varchar).
The table with fixed-length columns will always allocate the same size within a page, whereas the table with variable-length columns will allocate only the actual length of the data within a page.
I think that updates on the table with fixed-length columns have a better chance of being in-place updates, at least from a data-length perspective, whereas updates on the table with variable-length columns will more often be split (non-in-place) updates from a data-length perspective.
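For anyone who wants to experiment with the comparison, a quick sketch of the two layouts in question (table and column names are just placeholders):

CREATE TABLE dbo.FixedLen    (Code char(500)    NOT NULL);   -- always occupies the declared width
CREATE TABLE dbo.VariableLen (Code varchar(500) NOT NULL);   -- actual data length plus a small per-column overhead

INSERT INTO dbo.FixedLen    VALUES ('abc');   -- stored padded out to 500 characters
INSERT INTO dbo.VariableLen VALUES ('abc');   -- stored as 3 characters

-- Growing the varchar value may no longer fit in its original slot on the page, which is
-- what makes a non-in-place update more likely there; the char column keeps exactly the
-- same footprint, so it is a better in-place candidate.
UPDATE dbo.VariableLen SET Code = REPLICATE('x', 500);
UPDATE dbo.FixedLen    SET Code = REPLICATE('x', 500);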
Is there any way, or a tool, to identify whether a procedure's parameter was declared with a length smaller than that of the table column it is compared against?
I have a table and a procedure:

CREATE TABLE TEST001 (KeyName Varchar(100))

CREATE PROCEDURE SpFindNames (@KeyName VARCHAR(40))
AS
BEGIN
    SELECT KeyName FROM TEST001 WHERE KeyName = @KeyName
END

Here the table column "KeyName", with a length of 100 characters, is compared with the SP parameter "@KeyName", which has a length of only 40 characters.
Is there any way to find all such usages across all the procedures in the database?
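I don't know of a tool that does exactly this, but a heuristic catalog query can get close when the parameters are named after the columns they are compared with; that assumption, and the fact that it cannot see how a parameter is really used inside the procedure body, are the big caveats. A sketch for SQL Server 2005 and later:

SELECT  pr.name       AS procedure_name,
        pa.name       AS parameter_name,
        pa.max_length AS parameter_length,   -- in bytes (so 2 x characters for nchar/nvarchar)
        o.name        AS table_name,
        c.name        AS column_name,
        c.max_length  AS column_length
FROM sys.procedures AS pr
JOIN sys.parameters AS pa ON pa.object_id = pr.object_id
JOIN sys.columns    AS c  ON c.name = SUBSTRING(pa.name, 2, 128)   -- strip the leading '@'
JOIN sys.objects    AS o  ON o.object_id = c.object_id AND o.type = 'U'
JOIN sys.types      AS t  ON t.user_type_id = pa.user_type_id
                         AND t.user_type_id = c.user_type_id
WHERE t.name IN ('char', 'varchar', 'nchar', 'nvarchar')
  AND c.max_length > pa.max_length          -- column is wider than the parameter
  AND pa.max_length <> -1;                  -- ignore (max) parameters

A more thorough approach would parse the procedure definitions in sys.sql_modules, which is considerably messier.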
For those of you who would like to reference my exact issue, I'm dealing with the RSExecution SSIS package at the "Update Parameters" data flow task, at the Script Component.
The script tries to split parameter data into name and value. Unfortunately, I have several reports that are passing parameters that are very large. One example has over 65,000 characters all in the normal "&paramname=value&parm2=value..." format.
The code in the script works fine until it gets to one of these very large parameter sets. I have figured out what is causing the issue. Here's some code:
Dim paramBlob As Byte()
paramBlob = Row.BlobColumn.GetBlobData(0, Row.BlobColumn.Length)
The second parameter of the .GetBlobData function takes an INTEGER as its count! Therefore, no matter what kind of datatype I pass to the string that the script will later split, it will be limited to 32767 characters.
THIS IS A PROBLEM!!!
Does anyone know a workaround for this issue? I need all of the parameter data to be reported, and I would hate to have to skip over rows like this. Also, if I'm missing something, please fill me in!
I am trying to narrow down this problem. Basically, I added 3 columns to my article table. It holds the article id, article text, author and so on. I tested my program before adding the additional field to the program. The program works fine and I can add an article, and edit the same article, even though it skips over the 3 new fields in the database; it just puts nulls into those columns.

So, now I have added one of the column names I added in the database to the code. I changed my business logic article.vb code and addarticle.aspx, as well as the New article area in the addarticle.aspx.vb page. The form now has an additional textbox field for ShortDesc, which is a short description of the article.

This is the problem now: commandParameters.Length is 9 and there are 10 parameter values. Right in the middle of the 10 values is the #4 value which I inserted into the code. It says Nothing when I hover my mouse over the code after my program throws the exception at line 17 below. Why is commandParameters.Length set to 9 instead of 10? Why isn't it reading the information for value 4 like all the other values, placing its value there, and calculating 10 instead of 9? Where are these set in the program? Sounds to me like they are hard-coded in someplace and I need to change them to match everything else.

1  ' This method assigns an array of values to an array of SqlParameters.
2  ' Parameters:
3  ' -commandParameters - array of SqlParameters to be assigned values
4  ' -array of objects holding the values to be assigned
5  Private Overloads Shared Sub AssignParameterValues(ByVal commandParameters() As SqlParameter, ByVal parameterValues() As Object)
6
7      Dim i As Integer
8      Dim j As Integer
9
10     If (commandParameters Is Nothing) AndAlso (parameterValues Is Nothing) Then
11         ' Do nothing if we get no data
12         Return
13     End If
14
15     ' We must have the same number of values as we have parameters to put them in
16     If commandParameters.Length <> parameterValues.Length Then
17         Throw New ArgumentException("Parameter count does not match Parameter Value count.")
18     End If
19
20     ' Value array
21     j = commandParameters.Length - 1
22     For i = 0 To j
23         ' If the current array value derives from IDbDataParameter, then assign its Value property
24         If TypeOf parameterValues(i) Is IDbDataParameter Then
25             Dim paramInstance As IDbDataParameter = CType(parameterValues(i), IDbDataParameter)
26             If (paramInstance.Value Is Nothing) Then
27                 commandParameters(i).Value = DBNull.Value
28             Else
29                 commandParameters(i).Value = paramInstance.Value
30             End If
31         ElseIf (parameterValues(i) Is Nothing) Then
32             commandParameters(i).Value = DBNull.Value
33         Else
34             commandParameters(i).Value = parameterValues(i)
35         End If
36     Next
37 End Sub ' AssignParameterValues
Dear experts, while I'm trying to insert data, I get an error like "Ambiguous column name..." that is related to the joins. The query is:

INSERT INTO MATABLE044
SELECT newid(), ledger.COLUMN002, wh.COLUMN001, Loc.COLUMN005, ledger.COLUMN008, ledger.COLUMN009,
       ledger.COLUMN010, ledger.COLUMN011, 0, 1-1-1900, ledger.COLUMN006, ledger.COLUMN007,
       ledger.COLUMN014, 'Y', ledger.column016, ledger.COLUMN020, ledger.COLUMN021, ledger.COLUMN022,
       ledger.COLUMN023, ledger.COLUMN024, ledger.COLUMN025, ledger.COLUMN026, ledger.COLUMN027,
       Loc.COLUMN005, ledger.COLUMN029, ledger.COLUMN030, ledger.COLUMN002EE,
       'NULL','NULL','NULL','NULL','NULL','NULL','NULL','NULL','NULL','NULL',
       'SNSP_BRANCH','SNSP_BRANCH','SNSP_USER', convert(varchar(19),getdate(),120), NULL,
       convert(varchar(19),getdate(),120), newid(), 'DBMS_REPUTIL.GLOBAL_NAME', 'en'
FROM MATABLE043 ledger
Inner JOIN MATABLE012 wh on ledger.column002 = wh.column002
Inner JOIN MATABLE024 Loc on wh.column001 = loc.column004
WHERE COLUMN002 = 'c5a0bcc4-743a-4d9a-80a5-7c2ebbf6deec'
  AND COLUMN006 = '3d42a36e-7df7-41cd-a6ee-189f4e3f2ad5'
  AND COLUMN009 NOT IN (SELECT COLUMN006 FROM MATABLE044)
What does this error mean?
thank you very much
Vinod Even you learn 1%, Learn it with 100% confidence.
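The error most likely means that one or more of the unqualified columns in the WHERE clause (COLUMN002, COLUMN006, COLUMN009) exists in more than one of the three joined tables, so SQL Server cannot tell which one is meant. A guess at the fix, prefixing each with whichever alias is actually intended (ledger is used here only as a placeholder):

WHERE ledger.COLUMN002 = 'c5a0bcc4-743a-4d9a-80a5-7c2ebbf6deec'
  AND ledger.COLUMN006 = '3d42a36e-7df7-41cd-a6ee-189f4e3f2ad5'
  AND ledger.COLUMN009 NOT IN (SELECT COLUMN006 FROM MATABLE044)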
t1.[Converted_Mortgage_Number], t1.[Converted_Mortgage_Advance_No], t1.[Original_Account_Number],
CASE WHEN t3.description IS NOT NULL THEN t3.description ELSE t1.CACS_Location_Code END AS CACS_Location_Code,
t1.[Original_Account_Number_1], t1.[Payment_Amount], t1.[Letter_ID], t1.[Batch_Activity_Date], t1.[Activity_Date], t1.[Activity_Time],
CASE WHEN t6.description IS NOT NULL THEN t1.[Collection_Activity] + ' - ' + t6.description ELSE t1.[Collection_Activity] END AS [Collection_Activity],
CASE WHEN t5.description IS NOT NULL THEN t1.[Party_Contact_Code] + ' - ' + t5.description ELSE t1.[Party_Contact_Code] END AS [Party_Contact_Code],
CASE WHEN t7.description IS NOT NULL THEN t1.[Place_Called] + ' - ' + t7.description ELSE t1.[Place_Called] END AS [Place_Called],
t1.[Promise_Amount_1], t1.[Promise_D_ate_1], t1.[Promise_Amount_2], t1.[Promise_D_ate_2],
CASE WHEN t8.description IS NOT NULL THEN t1.[Non_Pay_Excuse_Code] + ' - ' + t8.description ELSE t1.[Non_Pay_Excuse_Code] END AS [Non_Pay_Excuse_Code],
CASE WHEN t2.description IS NOT NULL THEN t1.CACS_State + ' - ' + t2.description ELSE t1.CACS_State END AS CACS_State,
t1.[User_ID/Transaction_Type], t1.[Arrears_Amount], t4.[Sequence_Number], t4.[History_Text],
ROW_NUMBER() OVER (ORDER BY [$criteria] $direction) AS RowNumber
FROM format_cacs_cch AS t1
LEFT OUTER JOIN format_cacs_history_1 AS t4 ON t1.Converted_Mortgage_Number = t4.Converted_Mortgage_Number AND t1.Activity_Date = t4.Activity_Date AND t1.Activity_Time = t4.Activity_Time
LEFT JOIN sd_cacs_states AS t2 ON t1.CACS_Location_Code = t2.location AND t1.CACS_State = t2.state
LEFT JOIN sd_cacs_Location_details AS t3 ON t1.CACS_Location_Code = t3.code
LEFT JOIN sd_party_contacted_code AS t5 ON t1.[Party_Contact_Code] = t5.code
LEFT JOIN sd_collection_activity_code AS t6 ON t1.[Collection_Activity] = t6.code
LEFT JOIN sd_place_called AS t7 ON t1.[Place_Called] = t7.code
LEFT JOIN sd_non_pay_excuse_code AS t8 ON t1.[Non_Pay_Excuse_Code] = t8.code
WHERE (t1.Converted_Mortgage_Number = '$mortgage' $additional_date_params)
I'm planning an upgrade to SQL Server 2012 and noticed the following.
We have a query that includes a duplicate column name in the SELECT statement. This query happens to work in a db we have that is set to SQL 2000 (80) compatibility level, but the same exact query fails when I set the db to SQL 2008 (100) compatibility level.
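I can't tell from the post where the duplicate column name appears, so this is only a guess at the pattern involved: a duplicate name inside a derived table (or view/CTE), which the outer query then has no way to disambiguate. Customers and Orders are made-up tables:

SELECT *
FROM (
        SELECT c.CustomerID, o.CustomerID          -- same output name twice
        FROM dbo.Customers AS c
        JOIN dbo.Orders    AS o ON o.CustomerID = c.CustomerID
     ) AS d;

-- Giving each output column a distinct alias is the usual fix either way:
--     SELECT c.CustomerID AS CustomerID, o.CustomerID AS OrderCustomerID, ...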
I'm brand new to this forum and I'm stuck. We've got a query which queries three tables: booking, catheter and patients. The following (dodgy) query has pulled off the data for years, without problems:
SELECT CARDIAC_CATHETER.CARDIAC_CATHETER_ID, CARDIAC_CATHETER.OPERATOR1, CARDIAC_CATHETER.TEST_DATE,
       BOOKING.STATUS, BOOKING.TEST_DATE AS BOOKING_TEST_DATE, BOOKING.PROCEDURE_DATE_TIME,
       CARDIAC_CATHETER.ITEM_NO, CARDIAC_CATHETER.BILLING_CATEGORY,
       PATIENTS.REFERRING_PHYSICIAN1, PATIENTS.HOSPITAL_UNIT_NUMBER
FROM (CARDIAC_CATHETER INNER JOIN PATIENTS ON CARDIAC_CATHETER.PATIENT_ID = PATIENTS.PATIENT_ID)
LEFT OUTER JOIN BOOKING ON PATIENTS.PATIENT_ID = BOOKING.PATIENT_ID
     AND 'CATHETER' = BOOKING.CLINIC
     AND BOOKING.STATUS <> 'CANCELLED'
     AND BOOKING.PROCEDURE_DATE_TIME >= (CARDIAC_CATHETER.TEST_DATE - 1)
     AND BOOKING.PROCEDURE_DATE_TIME < (CARDIAC_CATHETER.TEST_DATE + 1)
WHERE CARDIAC_CATHETER.TEST_DATE > '2005-04-01'
  AND CARDIAC_CATHETER.BILLING_CATEGORY LIKE '3%'
About 2,500 rows come back perfect, plus 12 rows which have duplicate catheter id values; there are six pairs of duplicated records. It's only the cardiac_catheter.catheter_id and cardiac_catheter.test_date which are duplicated - the first row shown below contains correct values, the second doesn't. The other values returned, on both rows, are correct. Here is some sample data:
I'm running into problems with ambiguous column names. TransID also exists in tblWork. I tried adding SHPD + WORK aliases, but it won't accept the SHPD on the UPDATE line. How do I code around this error? Thanks.

UPDATE tblShipmentDet SHPD
SET SHPD.Quantity = WORK.Quantity
FROM tblWork WORK
WHERE TransID = WORK.RecordID
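The form T-SQL expects for an aliased UPDATE with a join is to name the alias after UPDATE and introduce both tables in the FROM clause; qualifying TransID at the same time also clears the ambiguity. A sketch against the tables in the post, assuming TransID is the tblShipmentDet column that was meant:

UPDATE SHPD
SET    SHPD.Quantity = WORK.Quantity
FROM   tblShipmentDet AS SHPD
INNER JOIN tblWork    AS WORK ON SHPD.TransID = WORK.RecordID;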
I am getting an ambiguous column name error, but I am aliasing the column in question. The problem seems to be that I'm trying to use the same column in 2 inner joins:
select trackprice.id
from trackprice tp
inner join track tr on tp.trackid = tr.id
inner join trackstorage ts on ts.trackid = tr.id
inner join ... etc ....
.... gives me the error 'ambiguous column name trackid' - can this not be done with inner joins this way?
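It can be done with inner joins; the usual culprit is a reference that is not (or cannot be) resolved through an alias. Note that once trackprice has the alias tp, the select list has to use tp.id rather than trackprice.id, and any trackid in the parts hidden behind "etc" needs its alias too. A fully qualified sketch of the visible portion:

SELECT tp.id
FROM trackprice tp
INNER JOIN track        tr ON tp.trackid = tr.id
INNER JOIN trackstorage ts ON ts.trackid = tr.id;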
Can someone help with this syntax? I have a nonsensical example below, but it illustrates the problem if you copy/paste into QA.
**********************************
use pubs
go
update authors set address = 'some address'
from authors a
inner join authors a2 on a.zip = a2.zip
---------------------------------------------
Server: Msg 8154, Level 16, State 1, Line 2
The table 'authors' is ambiguous.
**********************************
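Updating through the alias resolves it: with authors appearing twice in the FROM clause, the engine cannot tell which occurrence the unaliased UPDATE target refers to, but naming the alias makes it unambiguous. The same nonsensical example, adjusted:

use pubs
go
update a
set address = 'some address'
from authors a
inner join authors a2 on a.zip = a2.zip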