I suppose this is a very basic question, but how can I easily update values in a record? I can easily READ the values with this code:
Set Rsx = Server.CreateObject("ADODB.RecordSet")
sSQL= "SELECT * from prodfeatures"
Rsx.Open sSQL, sDSN, adOpenStatic, adLockReadOnly, adCmdText
While Not Rsx.EOF
    i = i + 1
    mte(i) = Rsx("id")
    mfnum(i) = Rsx("featureother1")
    Rsx.MoveNext
Wend
...Now I try an UPDATE statement to write the values from the array into another field in the corresponding records. I can get it to work, but it is done very clumsily: I have to open a new recordset against the database for every single record:
For j = 1 To i
    Set Rsx = Server.CreateObject("ADODB.RecordSet")
    sSQL = "UPDATE prodfeatures SET featurenum=" & mfnum(j) & " WHERE id=" & mte(j)
    Rsx.Open sSQL, sDSN, adOpenStatic, adLockReadOnly, adCmdText
Next
...I suppose this is silly programming; it actually works, but it takes a very long time to execute...
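A set-based alternative may be worth trying before any loop: since featureother1 and featurenum live in the same prodfeatures table, the whole copy can be done in one statement (a minimal sketch, assuming featureother1 already holds values compatible with featurenum's datatype):

UPDATE prodfeatures
SET featurenum = featureother1
WHERE featureother1 IS NOT NULL   -- optional guard; drop it if NULLs should be copied too

If a per-row loop really is needed, executing each UPDATE through a single ADODB.Connection object's Execute method, instead of creating a new recordset every iteration, also avoids reopening the connection for every record.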
I have a script that I use after a fair amount of data massaging (not shown). I would like to be able to change:
1) denominator value (the value 8 in line 32 of my code) based on how many columns are selected by the where clause:
where left(CapNumber, charindex('_', CapNumber) - 1) = 1
CapNumber is a value like [1_1], [1_4], [1_6] ... [1_9]. CapNumber can be any value from [1_1] ... [14_10] depending upon the specialty value (example: Allergy), and the number after the equals sign in the where clause is a number from 1 to 14.
2) I'd like to dynamically determine the series depending upon which values correspond to the specialty, and run it once for each where clause: left(CapNumber, charindex('_', CapNumber) - 1) = n, where n is a number between 1 and 14.
3) finally I'd like to dynamically determine the columns in line 31 (4th line from the bottom)
If I do it by hand it's 23 * 14 separate runs to get separate results for each CapNumber series within specialty. The capNumber series is like [1_1], [1_2], [1_3],[1_4], [1_5], [1_6], [1_7], [1_8],[1_9] ... [8_4],[8_7] ... [14_1], [14_2],...[14_10] etc.
Again, the series are usually discontinuous and specific to each specialty.
Here's the portion of the script (it's at the end) that I'm talking about:
--change values in square brackets below for each specialty as needed and change the denom number in the very last query.
if object_id('tempdb..#tempAllergy') is not null drop table #tempAllergy
select * into #tempAllergy from dbo.#temp2 T
[Code] ....
If I were to do it manually I'd uncomment each series line in turn and comment the one I just ran.
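One hedged way to avoid the 23 * 14 manual runs is to drive the final query with dynamic SQL, looping over whatever series prefixes actually exist for the specialty and counting the denominator instead of hard-coding it. A minimal sketch (the inner SELECT is only a stand-in for the elided query; #tempAllergy and CapNumber are the only names taken from the post):

declare @n int, @denom int, @sql nvarchar(max);

declare series_cursor cursor local fast_forward for
    select distinct cast(left(CapNumber, charindex('_', CapNumber) - 1) as int)
    from #tempAllergy;                                   -- one row per series prefix present for this specialty

open series_cursor;
fetch next from series_cursor into @n;
while @@fetch_status = 0
begin
    -- denominator = how many CapNumber values belong to this series (replaces the hard-coded 8)
    select @denom = count(distinct CapNumber)
    from #tempAllergy
    where left(CapNumber, charindex('_', CapNumber) - 1) = cast(@n as varchar(2));

    -- build and run the real final query here, substituting @n and @denom
    set @sql = N'select CapNumber, count(*) * 1.0 / ' + cast(@denom as nvarchar(10))
             + N' as share from #tempAllergy'
             + N' where left(CapNumber, charindex(''_'', CapNumber) - 1) = '
             + quotename(cast(@n as nvarchar(2)), '''')
             + N' group by CapNumber';
    exec sp_executesql @sql;

    fetch next from series_cursor into @n;
end
close series_cursor;
deallocate series_cursor;

The column list in line 31 of the original script could be handled the same way, by building that part of the statement from a query against the data (or INFORMATION_SCHEMA.COLUMNS) before concatenating it into @sql.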
What is the right table setup when you have a table whose values will never change?
I'm building a small project for our sales team so they can track their leads & customers. I have a table called "Contacts", which contains the names of every prospect they've ever reached out to. In that table is a field called "ContactTypeId", which is a reference to a table called "ContactTypes".
My ContactTypes table looks like this:
ContactTypeId | Name           | Description
--------------|----------------|------------------------------------------------
1             | Lead           | A contact who has shown interest in our service
2             | Qualified Lead | A more deeply engaged, sales-ready contact
3             | Customer       | A contact paying for our services
So, I set it up this way because I figured it was cleaner to associate a ContactTypeID# to each Contact in the Contacts table, rather than actually spelling out "Lead", "Customer", etc for each record in the Contacts table.
But is this right? If these values in my ContactTypes table are NEVER going to change, should I still have this table at all or is it better to just insert "Lead", "Qualified Lead", or "Customer" into the Contacts table? Just looking to take the correct approach.
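For reference, a minimal sketch of the lookup-table setup described above (the exact column types and lengths are assumptions):

create table ContactTypes (
    ContactTypeId int primary key,
    Name varchar(50) not null,
    Description varchar(255) null
);

create table Contacts (
    ContactId int identity(1,1) primary key,
    ContactName varchar(100) not null,
    ContactTypeId int not null
        references ContactTypes (ContactTypeId)   -- only valid types can be stored on a contact
);

Even if the three values never change, the separate table plus foreign key gives consistent spelling, a single place to rename or describe a type, and storage of a small integer instead of repeated text; storing the literal "Lead"/"Customer" text in Contacts would need a CHECK constraint to get the same integrity guarantee.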
There are columns for which we do not enter any values. For example, in my table I did not insert any values into the Balance column. In SQL Server 2005 Management Studio, I chose to modify the table, changed the default from NULL to 0, and also unchecked the checkbox saying Allow Nulls.
I made the above changes and clicked Save. It gave an error saying:
'AccountsBalance' table - Unable to modify table. Cannot insert the value NULL into column 'Balance', table 'AccountBalanceSheet.dbo.Tmp_AccountsBalance'; column does not allow nulls. INSERT fails. The statement has been terminated.
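The existing NULLs have to be removed before the column can be made NOT NULL, which is why the designer's table rebuild fails; a hedged T-SQL alternative (assuming the table is dbo.AccountsBalance and that Balance is an int; substitute the real datatype):

update dbo.AccountsBalance
set Balance = 0
where Balance is null;                                   -- backfill rows that currently hold NULL

alter table dbo.AccountsBalance
    add constraint DF_AccountsBalance_Balance default (0) for Balance;   -- default for new rows

alter table dbo.AccountsBalance
    alter column Balance int not null;                   -- now safe: no NULLs remain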
Hi, all, does anyone know of a way to change default values inside of a table without affecting existing values?
I have a table with a default of 0 on the usermask column. Some rows already have non-zero values in them. I now need to change the default to 1 but still want the non-zero values to stay as they are. I also want the new default to take effect when new rows are entered. What's the best and quickest way to do it?
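A default only applies to new inserts, so swapping it never touches the values already stored; a minimal sketch, with dbo.MyTable standing in for the real table name and DF_MyTable_usermask for the (possibly system-generated) name of the existing default constraint:

-- to find the real constraint name:
-- select name from sys.default_constraints where parent_object_id = object_id('dbo.MyTable');

alter table dbo.MyTable drop constraint DF_MyTable_usermask;          -- remove the old default of 0

alter table dbo.MyTable
    add constraint DF_MyTable_usermask default (1) for usermask;      -- new rows now get 1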
I created a database named "Company-Data" that contains a table named "tblStock". This table contains several columns such as ID, Product, and Quantity. The datatype for the column "Quantity" is INT.
I have entered 100 records into the table, and now I want to change the datatype of the "Quantity" column from INT to MONEY.
How can we do this without losing the data that has already been entered into the column?
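Every INT value converts to MONEY without loss, so an in-place ALTER should keep all 100 existing rows intact; a minimal sketch:

alter table tblStock
    alter column Quantity money;   -- existing INT values are converted implicitly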
I have three columns in my table with the following datatypes:
Date - DateTime
Height - Decimal
Change_From_last_Height - Decimal
I am using "SQL Server 2005 Express Edition". I'm fairly new to SQL and would greatly appreciate any help or advice I can get.
The "date" column increases by an extra day in every new row and I then enter the new height of the plant. What I want to know is how I can get SQL server express to automatically enter the difference in height between the current row's height and that of the previous row.
Is it possible to automate the entry in the Change_From_Last_Height column in SQL?
Put another way, I know how to find the difference between two values in the same row but different columns, but how do I calculate the difference between values in adjacent Rows (ie. Rows next to each other)?
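SQL Server 2005 has no LAG function, but the previous row can be fetched per row with OUTER APPLY (or a correlated subquery); a minimal sketch, with PlantGrowth standing in for the real table name:

select
    g.[Date],
    g.Height,
    g.Height - prev.Height as Change_From_Last_Height   -- NULL for the very first row
from PlantGrowth g
outer apply (
    select top 1 p.Height
    from PlantGrowth p
    where p.[Date] < g.[Date]
    order by p.[Date] desc
) prev;

This computes the difference at query time; if the value really must be stored in the Change_From_Last_Height column, the same join can drive an UPDATE, or the column could be dropped in favour of a view that exposes the calculation.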
When I use a calculation in generating reports, I get a "NaN" value in the preview. I want to avoid this and display "0" if the result of the calculation is null or infinity.
I used the calculation below:
=CDbl(Fields!POST_TEST_COUNT.Value/Fields!ENROLL_Actual.Value), where the input for every field is taken as a percentage.
Please let me know how to avoid "NaN" and "Infinity" when displaying in reports.
Whenever I view the preview, NaN values are displayed while the remaining fields show appropriate query results, percentage values like 98.00%, but I have a problem with the NaN values: I need to change those NaN values to 0.
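NaN normally comes from 0/0 and Infinity from dividing a non-zero value by 0, so the divisor has to be guarded; and because IIF evaluates both of its branches, the divisor inside the division needs its own guard too. A hedged version of the expression above:

=IIF(Fields!ENROLL_Actual.Value = 0, 0, CDbl(Fields!POST_TEST_COUNT.Value) / IIF(Fields!ENROLL_Actual.Value = 0, 1, Fields!ENROLL_Actual.Value))

With this, rows where ENROLL_Actual is 0 simply show 0 instead of NaN or Infinity.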
I have a stored procedure with a SELECT statement that retrieves one row: SELECT name FROM tblNames WHERE nameID = '1'. I want all the NULL values in that row to be changed to some default values. How do I do this?
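ISNULL (or COALESCE) substitutes a default whenever a column is NULL; a minimal sketch against the query above, with 'Unknown' as an example default:

SELECT ISNULL(name, 'Unknown') AS name
FROM tblNames
WHERE nameID = '1'

The same pattern is repeated once per column that needs a default.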
We migrate data from a legacy system to a new system using SSIS. The primary key of the legacy system is a user-defined SQL Server type which holds alphanumeric values. The primary key of the new system is a bigint (sequential numbers).
When we migrate data, we generate a sequential number for each legacy key (the primary key of the legacy data) and insert the data into the new system tables. The newly generated sequential numbers and the legacy keys are persisted in an intermediate table for lookup operations on child tables.
We are facing a problem when we try to migrate tables which have self-referencing columns. For example, a table called Employee has a column ManagerKey which refers to the Key column of the Employee table itself. We are stuck on defining data flow tasks to replace the legacy ManagerKey column values with the new values (sequential values) generated during the migration process.
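One hedged way around the self-reference is to load Employee with the legacy manager value staged in a spare column, then fix it up afterwards with a set-based UPDATE against the intermediate mapping table (here called KeyMap, with LegacyKey/NewKey columns, purely as placeholder names):

update e
set e.ManagerKey = km.NewKey                       -- swap the legacy manager key for the new sequential key
from dbo.Employee e
join dbo.KeyMap km
    on km.LegacyKey = e.LegacyManagerKey           -- LegacyManagerKey = staged legacy value from the data flow
where e.LegacyManagerKey is not null;

An Execute SQL Task placed after the Employee data flow is enough for this; trying to resolve the self-reference inside the same data flow, against rows that have not been inserted yet, is usually what makes the package awkward.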
In an existing SSRS 2012 report, I have a requirement for a user to be able to select multiple schools and/or multiple grades. This is fine, except certain schools like elementary have grade levels of KG to 06, middle school has grades 06 to 09, and high school has grades 10 to 12. So, for example, if a user has initially selected grades 11 and 12 and then selects an elementary school that has grades KG through 06, the 'grade level' selection would need to change. In other words, I am thinking of initially having the available and default values for the parameter called 'Grade' set to KG through 12. However, if a school is selected that only has specific grade levels, like an elementary school, how can I override the original grade levels and only allow the user to select grades that the particular school contains?
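The usual pattern is to make Grade a cascading parameter: its available-values dataset is filtered by the School parameter, so the grade list shrinks to whatever the selected schools actually offer. A minimal sketch of such a dataset query, assuming a SchoolGrades table that maps each school to the grades it contains (both names are assumptions):

select distinct sg.Grade
from dbo.SchoolGrades sg
where sg.SchoolId in (@School)      -- @School is the multi-value school parameter
order by sg.Grade;

With School listed before Grade in the report's parameter order, changing the school selection refreshes the grade list automatically, and the same dataset can feed the parameter's default values so everything valid is selected by default.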
We ran into a weird/interesting issue; the details are below.
Version: Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64) Standard Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200):
We are using SQLCMD to run a DDL script on our product database, in the order below. The script has the following content:
step # 1 - database collation change (case-sensitive) statement as the very first statement of the script
step # 2 - actual DDL SQL statements
step # 3 - database collation change back to the original (case-insensitive)
When we execute all three steps in a single script using SQLCMD on our test_server#1, it is successful, but when the same is run on test_server#2 it fails. We ensured that there is no other user accessing the db and the settings on both servers are all default/basic. Separating the 3 steps into 3 different scripts works fine; the problem only occurs when we combine them into a single script and fire it using SQLCMD. If it were something related to session/transaction we should hit the same issue on test_server#1 as well, but that is not the case. test_server#1 and test_server#2 have exactly the same database/data, just two different physical machines and SQL Server instances.
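Without seeing the script it is hard to be definite, but collation changes and the DDL that depends on them generally need to sit in separate batches, so each batch is compiled under the collation in force at that point; a hedged sketch of the structure, using GO batch separators (database name and collations are placeholders):

ALTER DATABASE ProductDb COLLATE Latin1_General_CS_AS;   -- step 1: switch to case-sensitive
GO
-- step 2: actual DDL statements go here
GO
ALTER DATABASE ProductDb COLLATE Latin1_General_CI_AS;   -- step 3: back to case-insensitive
GO

If the combined script already has the GO separators, comparing the two servers' session defaults and the exact error text from test_server#2 would be the next thing to look at.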
How can I count the number of values that exist in a row, based on the values from an array of numbers? Basically, the array of numbers I want to look for is in row 1 of table [test 1], and I want to search for them and count the "out of" in table [test 2]. Excuse me for not using the easiest way to convey my question below. In short, I have 10 numbers and would like to find how many of those numbers exist in each row. Short example:
I am trying to think my way through a solution which I believe others have probably come across. I am trying to implement a matching routine wherein I need to match an address against a high value and a low value (or, for that matter, an input date vs. a start and end date) to return the desired row; i.e., if I were to use a straight VB program I would just use the following lookup:
" WHERE zip_code = @zip_code AND addr_prim_lo <= @street_number AND addr_prim_hi >= @street_number " & _
" AND addr_prim_oe = @addr_prim_oe AND street_pre = @street_pre AND street_name = @street_name " & _
" AND street_suff = @street_suff AND street_post = @street_post " & _
" AND (expiry_date = '' OR expiry_date = '00000000' OR expiry_date > @expiry_date)" & _
" GROUP BY fire_ID, police_ID, fire_opt_in_out, police_opt_in_out"
My question, then, is how would you perform this type of query using a Lookup / Merge Join or a script? I have not found a way to set the input columns for anything other than straight matches. I can set the straight matches without a problem, i.e. lookup zip code = input zip code, but can't think of the correct way to set range comparisons, i.e. lookup value 1 <= input value AND lookup value 2 >= input value.
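Lookup and Merge Join only support equality matches, so range conditions are usually pushed into a parameterized query instead; one hedged option is to set the Lookup to partial cache / no cache and override its statement on the Advanced tab with a parameterized query along these lines, mapping the ? placeholders to the input columns (the reference table name is a placeholder):

select fire_ID, police_ID, fire_opt_in_out, police_opt_in_out
from dbo.address_ranges
where zip_code = ?
  and addr_prim_lo <= ?           -- street number must fall inside the low/high range
  and addr_prim_hi >= ?
  and addr_prim_oe = ?
  and street_pre = ? and street_name = ? and street_suff = ? and street_post = ?

An OLE DB Command transform, or a script component issuing the same parameterized statement, are the other common routes; all of them trade the fully cached lookup for a per-row round trip to the database.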
I have a DTSX package which reads values from a fixed-length text file using a data reader and writes some of the column values from the file to an Oracle table. We have used this DTSX several times without incident but recently the process started inserting NULL values for some of the columns when there was a valid value in the source file. If we extract some of the rows from the source file into a smaller file (i.e 10 rows which incorrectly returned NULLs) and run them through the same package they write the correct values to the table, but running the complete file again results in the NULL values error. As well, if we rerun the same file multiple times the incidence of NULL values varies slightly and does not always seem to impact the same rows. I tried outputting data to a log file to see if I can determine what happens and no error messages are returned but it seems to be the case that the NULL values occur after pulling in the data via a Data Reader. Has anyone seen anything like this before or does anyone have a suggestion on how to try and get some additional debugging information around this error?
I am working with a data set containing several years' of monetary values. I have entries for past dates and the associated values, and I also have entries for future dates. I need to populate the values of the future date records with the values from the same date the previous year. Is there any way this can be done in Power Pivot?
I have to use the above comma-separated values in a SQL search query against a column whose datatype is integer. How would I do this search query with the IN operator of SQL Server? My query is:
declare @id varchar(50)
set @id = '3,4,6,7'
set @id = (select replace(@id, '''', ''))   -- in the select query below, id is of integer datatype
select * from ehsservice where id in (@id)
But this query throws following error message:
Conversion failed when converting the varchar value '3,4,6,7' to data type int.
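A single varchar variable is never expanded inside IN; the list has to be split into rows first. A hedged sketch that works on SQL Server 2005 and later using the XML trick (on 2016+ the built-in STRING_SPLIT function is simpler: where id in (select value from string_split(@id, ','))):

declare @id varchar(50)
set @id = '3,4,6,7'

select e.*
from ehsservice e
where e.id in (
    select x.v.value('.', 'int')                 -- each list item converted to int
    from (select cast('<i>' + replace(@id, ',', '</i><i>') + '</i>' as xml) as doc) d
    cross apply d.doc.nodes('/i') as x(v)
)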
I have my stored procedure set to Territory_code IN (@Territory)
, now how do I enter more than one value? When I select the multi-value check box, it gives me more spaces, but then it doesn't recognize the values when I put in more than one. Am I doing something wrong?
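When the dataset is a stored procedure, SSRS cannot expand a multi-value parameter into the procedure's parameter the way it does for a text query, so Territory_code IN (@Territory) only ever sees a single string. A hedged two-part fix: join the selections into one string on the report side, then split them inside the procedure (varchar(20) is just an assumed length for a territory code).

In the dataset's parameter mapping, pass:
=Join(Parameters!Territory.Value, ",")

Inside the stored procedure:
where Territory_code in (
    select x.v.value('.', 'varchar(20)')
    from (select cast('<t>' + replace(@Territory, ',', '</t><t>') + '</t>' as xml) as doc) d
    cross apply d.doc.nodes('/t') as x(v)
)

Note that IN (@Territory) does work without any of this when the dataset is a plain text query rather than a stored procedure, because SSRS expands the multi-value parameter into the query text in that case.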
I receive an input file with some 100 columns and some 20k+ rows, and I want to check whether each incoming input row already exists in the database, based on 2 key columns. If the row exists, then I need to check whether all of the column values (nearly 100 columns) in the input and the database are equal. If they are all equal the row is handled one way; if not, there is separate logic. How can I do that check for each row and for each column?
Basically the algorithm is like this: if the input file row does not exist in the database, treat it as a new row; if the input row does exist in the database, check whether all the columns are equal. If all the columns are equal, treat it as an existing row and do nothing; if some columns are not equal, treat the row separately.
I found something to achieve the above:
1. Take the input row and check whether it exists in the database.
2. If the row is not found in the database, treat it as a new row.
3. If the row is found in the database, then
a) take the source row and prepare a concatenated string of all the columns,
b) take the database row and prepare a concatenated string of all the columns,
c) compute the hash code of the two strings and compare the hash codes for equality.
The disadvantage of this is running a loop 2*m*n times where m is the number of rows and n is the number of columns. It should be done 2 times for input file row and database row.
Can anybody suggest a good method to do this?
What does the function "GetHashCode" on the InputBuffer in the method "Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)" do? Will it generate a hash code based on all the column values?
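As far as I know, GetHashCode on the buffer is the default .NET Object hash (based on the object reference, not on the column values), so it is not a reliable way to compare rows. A hedged, set-based alternative is to stage the 20k input rows into a table and let the database do both checks with HASHBYTES over the concatenated columns; a sketch with only three of the ~100 columns written out (Staging, Target, Key1, Col1 and so on are placeholder names, and CONCAT needs SQL Server 2012+; on older versions concatenate with + and ISNULL):

select
    s.Key1, s.Key2,
    case
        when t.Key1 is null then 'new'                              -- no match on the 2 key columns
        when hashbytes('SHA1', concat(s.Col1, '|', s.Col2, '|', s.Col3)) =
             hashbytes('SHA1', concat(t.Col1, '|', t.Col2, '|', t.Col3)) then 'unchanged'
        else 'changed'
    end as row_status
from dbo.Staging s
left join dbo.Target t
    on t.Key1 = s.Key1
   and t.Key2 = s.Key2;

One caveat: before SQL Server 2016, HASHBYTES input is limited to 8000 bytes, which can matter with ~100 wide columns; comparing the concatenated strings directly avoids that limit.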
I have a situation in SSRS where I need to get the common values between two columns whose values are stored comma-separated, as below. Example:
ColumnA : abc,cde,efg
ColumnB : cde,xyz,abc
The result should be
ColumnC : cde,abc
Similarly, Columns A and B will have any number of records. I need to write an expression or a custom Code function to get the required result in ColumnC. I am using SharePoint lists as the data source, so I cannot write a SQL query to achieve this requirement.
I am an SSRS user. We have a .NET UI from which we want to pass multi-select values, but these values are stored comma-separated in the database. How can I write a SQL query such that, when I select multiple values in the UI, the comma-separated values are taken care of?
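One hedged way to match the multi-select list against a column that itself holds comma-separated values is to compare with delimiters added on both sides, so that '3' cannot accidentally match '13'; a sketch assuming the selections arrive as a comma-joined string @Selected, the column is called ValueList, and the server is SQL Server 2016+ (older versions need a user-defined split function in place of STRING_SPLIT):

select t.*
from dbo.MyTable t
where exists (
    select 1
    from string_split(@Selected, ',') s
    where ',' + t.ValueList + ',' like '%,' + ltrim(rtrim(s.value)) + ',%'   -- delimiter-padded match
);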
I need to replace all the "User Friendly Names" with "System Names" in the calculations, i.e., I need "Sales Units" to be replaced with "cSalesUnits", "AUR" replaced with "cAUR", "Comp Sales Units" with "cCompSalesUnits", and "Comp AUR" with "cCompAUR". (It isn't always as easy as removing spaces and adding a 'c' to the beginning of the string...)
I have created a CTE of all the "Look-up" values, and have tried all kinds of joins, and other functions to achieve this, but so far nothing has quite worked.
How can I accomplish this?
Here is some SQL for set up. There are over 500 formulas that need updating with over 400 different "look up" possibilities, so hard coding something isn't really an option.
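Plain joins cannot apply 400 different replacements to one string, but an iterative REPLACE driven by the look-up table can; a hedged sketch, assuming a Formulas table with a FormulaText column and a NameMap table holding FriendlyName/SystemName pairs (the real table and column names from the set-up SQL would go here). Replacing the longest friendly names first keeps "Comp Sales Units" from being clobbered by the shorter "Sales Units":

declare @friendly nvarchar(200), @system nvarchar(200);

declare name_cursor cursor local fast_forward for
    select FriendlyName, SystemName
    from dbo.NameMap
    order by len(FriendlyName) desc;                 -- longest names first to avoid partial overlaps

open name_cursor;
fetch next from name_cursor into @friendly, @system;
while @@fetch_status = 0
begin
    update dbo.Formulas
    set FormulaText = replace(FormulaText, @friendly, @system)
    where FormulaText like '%' + @friendly + '%';

    fetch next from name_cursor into @friendly, @system;
end
close name_cursor;
deallocate name_cursor;

This is still a loop, but it runs once per look-up name (about 400 set-based UPDATEs) rather than once per formula, so it stays manageable even with 500+ formulas.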
I developed the following T-SQL query that runs successfully, but I was looking for a more efficient and concise way to do this. Is there a CTE that can replace all of these CASE statements? I've updated my query as below. Although this sample query works, it's not working for my real data; instead, I get an error. At the bottom is the erroring part of my real query. I copied all of the tables from the first query block below, but when I wrote the bottom query block it underlined the words "answer" and "question" in red and says "Invalid column name".
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#hard_values')) DROP TABLE #hard_values;
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#dummy_data')) DROP TABLE #dummy_data;
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#temp')
Hi, I have a problem in SSIS with passing variable values while executing a package. The details are below:
I opened Microsoft SQL Server Management Studio, made a connection to Integration Services, right-clicked the package and chose Run Package, then clicked Execute, and the package was executed successfully.
The problem is that if I try to use Set Values, the package throws the error: DTExec: Could not set ProcessMode value to M.
Basically I could not understand in which format I should pass the Variables. What I tried is listed below:
ProcessMode;M
Package.Variables[User::ProcessMode].Value;M
Package.Variables[ProcessMode].Value;M
But every time I got errors.
And then I tried from the command line:
DTEXEC /DTS "MSDBLoad_Order" /SERVER SERVERNAME /MAXCONCURRENT " -1 " /CHECKPOINTING OFF /REPORTING V /SET Package.Variables[ProcessMode].Value;M
The first time, the process ran successfully, and it changed ProcessMode to M as well. But after that it stopped changing the ProcessMode value to M.
Please help with this. I have tried a lot of examples from various sites as well, but could not get a proper solution.
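For what it's worth, the property path that usually works with /SET includes a leading backslash and the variable's namespace; a hedged example against the same package (everything except the /SET argument is unchanged from the command above):

DTEXEC /DTS "MSDBLoad_Order" /SERVER SERVERNAME /SET "\Package.Variables[User::ProcessMode].Value";M

The Set Values tab of the Execute Package dialog in Management Studio takes the same string, \Package.Variables[User::ProcessMode].Value, in the Property Path column with M in the Value column.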
I have SQL Server 2012 SSIS with an Excel source and an OLE DB destination. I have a problem importing the CustomerSales column. CustomerSales values look like 1000.00, 2000.10, 3000.30, NotAvailable, so I have decimal values and nvarchar values mixed in one Excel column; this is a requirement of the solution. However, SSIS reads only the numeric values correctly and the nvarchar values are set to NULL. Why?
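A likely cause is that the Excel driver samples the first rows of the column, guesses a single (numeric) type, and returns NULL for anything that does not fit. A commonly used, hedged workaround is to force the column to come through as text by adding IMEX=1 to the connection string's extended properties (the file path here is a placeholder):

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\Sales.xlsx;Extended Properties="Excel 12.0 XML;HDR=YES;IMEX=1"

With IMEX=1 the mixed CustomerSales column arrives as Unicode text; the numeric rows can then be converted back to decimal in the data flow (for example with a derived column), and "NotAvailable" handled explicitly.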