I have a Perl program that is looping through a hash of hashes. I need to update any existing records but also insert any new records into the table, using the data collected in the hash.
Life would be very simple if it were possible to use a WHERE clause in an INSERT statement, but that does not work.
Here is some example code from my program:
sub Test {
    foreach my $table (keys %$HoH) {
        foreach my $field (keys %{ $HoH->{$table} }) {
            if ($table eq "CPU") {
                my $CPUstatement = "INSERT INTO CPU (CPUNumber, Name, MaxClockSpeed, SystemNetName)
                    VALUES ('$field',
                            '$HoH->{CPU}{$field}{Name}',
                            '$HoH->{CPU}{$field}{MaxClockSpeed}',
                            '$HoH->{Host}{SystemNetName}')";
                print "$CPUstatement\n";
                if ($db->Sql($CPUstatement)) {
                    print "Error on SQL Statement\n";
                    Win32::ODBC::DumpError();
                }
                else {
                    print "successful\n";
                }
            }
        }
    }
}
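Since INSERT has no WHERE clause, the usual workaround on the SQL side is an update-then-insert pair: run the UPDATE first and INSERT only when no row matched. A hedged sketch of the T-SQL batch the Perl code could send instead of the plain INSERT (treating CPUNumber + SystemNetName as the key is an assumption; the $-placeholders are interpolated by Perl exactly as in the original statement):

    -- Try the update first; fall back to insert when the row doesn't exist yet.
    UPDATE CPU
    SET    Name = '$HoH->{CPU}{$field}{Name}',
           MaxClockSpeed = '$HoH->{CPU}{$field}{MaxClockSpeed}'
    WHERE  CPUNumber = '$field'
      AND  SystemNetName = '$HoH->{Host}{SystemNetName}';

    IF @@ROWCOUNT = 0
        INSERT INTO CPU (CPUNumber, Name, MaxClockSpeed, SystemNetName)
        VALUES ('$field',
                '$HoH->{CPU}{$field}{Name}',
                '$HoH->{CPU}{$field}{MaxClockSpeed}',
                '$HoH->{Host}{SystemNetName}');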
There is something very strange going on here. Tested with ADO 2.7 and MSDE/2000. At first, things look quite sensible. You have a simple SQL query, let's say

    select * from mytab where col1 = 1234

Now, let's write a simple VB program to do this query back to an MSDE/2000 database on our local machine. Effectively, we'll

    rs.open sSQL
    rs.close

and do that 1,000 times. We won't bother fetching the result set; it isn't important in this example.

No problem. On my machine this takes around 1.6 seconds, and modifying the code so that the column value in the where clause changes each time (i.e. col1 = nnnn) doesn't make a substantial difference to this time. Well, that all seems reasonable, so moving right along...

Now we do it with a stored procedure:

    create procedure proctest(@id int)
    as
    select * from mytab where col1 = @id

and we now find that executing

    proctest nnnn

1,000 times takes around 1.6 seconds whether or not the argument changes. So far so good. No obvious saving, but then we wouldn't expect any. The query is very simple, after all.

Well, get to the point! Now create a table-returning UDF:

    create function functest(@id int) returns table as
    return (select * from mytab where col1 = @id)

Try calling that 1,000 times as

    select * from functest(nnnn)

and we get around 5.5 seconds on my machine if the argument changes, otherwise 1.6 seconds if it remains the same for each call. Hmm, looks like the query plan is discarded if the argument changes. Well, that's fair enough I guess. UDFs might well be more expensive... gotta be careful about using them. It's odd that discarding the query plan seems to be SO expensive, but hey, waddya expect? (Perhaps the UDF is completely rebuilt, who knows.)

Last test, then. Create an SP that calls the UDF:

    create procedure proctest1(@id int)
    as
    select * from functest(@id)

OK, here's the $64,000 question. How long will this take if @id changes each time? The raw UDF took 5.5 seconds, remember, so this should be slightly slower. But... IT IS NOT. It takes 1.6 seconds whether or not @id changes. Somehow, the UDF becomes FOUR TIMES more efficient when wrapped in an SP.

My theory, which I stress is not entirely scientific, goes something like this: I deduce that SQL Server decides to reuse the query plan in this circumstance but does NOT when the UDF is called directly. This is counter-intuitive, but it may be because SQL Server's query parser is tuned for conventional SQL, i.e. it can say: well, I've got

    select * from mytab WHERE [something or other]

and now I've got

    select * from mytab WHERE [something else]

so I can probably re-use the query plan from last time. (I don't know if it is this clever, but it does seem to know when two textually-different queries have some degree of commonality.) Whereas with

    select * from UDF(arg1)

and

    select * from UDF(arg2)

it goes... hmm, maybe not... I'd better not risk it. But with

    sp_something arg1

and

    sp_something arg2

it goes... yup, I'll just go call it... and because the SP was already compiled, the internal call to the UDF already has a query plan.

Anyway, that's the theory. For more complex UDFs, by the way, the performance increase can be a lot more substantial. On a big complex UDF with a bunch of joins, I measured a tenfold increase in performance just by wrapping it in an SP, as above. Obviously, wrapping a UDF in an SP isn't generally a good thing; the idea of UDFs is to allow the column list and where clause to filter the rowset of the UDF, but if you are repeatedly calling the UDF with the same where clause and column list, this will make it a *lot* faster.
I am currently working with C and SQL Server 2012. My requirement is to bulk-fetch records and insert/update them in another table with some business logic applied. How do I do this?
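If the business logic can be expressed in T-SQL, one set-based option on SQL Server 2012 is MERGE, which folds the insert and update into a single statement. A hedged sketch (dbo.Source, dbo.Target, and the columns are placeholder names, and the inline expressions merely stand in for the business logic):

    -- Hypothetical tables dbo.Source and dbo.Target, keyed on ID.
    MERGE dbo.Target AS t
    USING (
        SELECT ID,
               UPPER(Name)  AS Name,    -- sample "business logic"
               Amount * 1.1 AS Amount
        FROM dbo.Source
    ) AS s
    ON t.ID = s.ID
    WHEN MATCHED THEN
        UPDATE SET t.Name = s.Name, t.Amount = s.Amount
    WHEN NOT MATCHED THEN
        INSERT (ID, Name, Amount) VALUES (s.ID, s.Name, s.Amount);

When the logic is too complex for SQL, the usual fallback is to fetch in batches keyed on the identity column and apply the logic client-side.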
I have 2 tables. First table: empID, PlanID, groupID. Second table: PlanID, groupID, EffectiveDate, TerminationDate, DeadlineDate. I need to show only the employees within a specific group who are not enrolled for the current month, until the deadline has passed.
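One reading of this is an anti-join: employees in the group with no plan row covering the current date, shown only while the deadline is still open. A heavily hedged sketch (the table names Enrollment and PlanPeriod, and the exact meaning of "enrolled for the current month", are assumptions):

    SELECT DISTINCT e.empID
    FROM Enrollment AS e            -- the first table
    WHERE e.groupID = @groupID
      AND NOT EXISTS (              -- no plan period covers today
          SELECT 1
          FROM PlanPeriod AS p      -- the second table
          WHERE p.PlanID  = e.PlanID
            AND p.groupID = e.groupID
            AND GETDATE() BETWEEN p.EffectiveDate AND p.TerminationDate
      )
      AND EXISTS (                  -- stop showing once the deadline has passed
          SELECT 1
          FROM PlanPeriod AS p
          WHERE p.groupID = e.groupID
            AND p.DeadlineDate >= GETDATE()
      );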
I have some table data and know how I want the results but I'm just having a bit of trouble in constructing the SQL logic to obtain the desired results. There's a site where visitors are able to select from a list of parts, and it will return a set of model/products that they can produce with the selected parts. Here's the data ...
tblModel                  tblPart
ModelId  ModelName        PartId  PartName
----------------------    ----------------------
1        Alpha            1       CHOO1 Stem
2        Bravo            2       BH034 Rod
3        Bravo Pro        3       HRE Seat
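The excerpt cuts off before the linking data, so assume a junction table tblModelPart(ModelId, PartId) plus a table variable holding the visitor's selection; both names are assumptions. "Which models can be built from the selected parts" is then classic relational division, sketched as:

    -- @SelectedParts holds the visitor's chosen parts (assumed shape).
    DECLARE @SelectedParts TABLE (PartId int PRIMARY KEY);

    SELECT mp.ModelId
    FROM tblModelPart AS mp
    LEFT JOIN @SelectedParts AS sp
           ON sp.PartId = mp.PartId
    GROUP BY mp.ModelId
    HAVING COUNT(*) = COUNT(sp.PartId);  -- every required part was selected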
I have two tables X, Y:

X
empno  Sal   Tax   Returns  name
1      4500  1050  750      robert
2      5750  1560  900      john
3      4000  900   600      keen
4      6100  1200  1000     stauton

Y
empno  Sal   Tax   Returns  name
1      4500  1000  000      robert
2      5750  1200  900      john
3      4000  900   600      keen
4      6100  1000  1000     stauton
If you look at the above tables, I have data mismatches between the X and Y tables for the same empno. I need to write a query which shows the empno and the names of the columns where the data mismatch has occurred. I came up with a query which I have to write for every individual column to get the mismatch; since there are 120 columns, that is a pretty hard task. I am looking for a logic where I can write one query which shows the mismatched data across columns.
Expected output (table Z):

col1  col2
1     tax
3     tax

Appreciate your help.
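One pattern that scales to many columns is to unpivot both tables into (empno, column name, value) rows and join on empno plus column name, keeping rows where the values differ. A sketch for the columns shown; extending to all 120 means listing them in the IN clauses (or generating that list from sys.columns). UNPIVOT needs a single common type, hence the casts, and it silently drops NULLs, so NULLable columns would need ISNULL around the casts:

    SELECT x.empno AS col1, x.col AS col2
    FROM (
        SELECT empno, col, val
        FROM (SELECT empno,
                     CAST(Sal AS varchar(30))       AS Sal,
                     CAST(Tax AS varchar(30))       AS Tax,
                     CAST([Returns] AS varchar(30)) AS [Returns],
                     CAST([name] AS varchar(30))    AS [name]
              FROM X) AS s
        UNPIVOT (val FOR col IN (Sal, Tax, [Returns], [name])) AS u
    ) AS x
    JOIN (
        SELECT empno, col, val
        FROM (SELECT empno,
                     CAST(Sal AS varchar(30))       AS Sal,
                     CAST(Tax AS varchar(30))       AS Tax,
                     CAST([Returns] AS varchar(30)) AS [Returns],
                     CAST([name] AS varchar(30))    AS [name]
              FROM Y) AS s
        UNPIVOT (val FOR col IN (Sal, Tax, [Returns], [name])) AS u
    ) AS y
      ON y.empno = x.empno AND y.col = x.col
    WHERE y.val <> x.val;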
Hi, sorry for a newbie question but I was wondering if the following is possible:
I have to select some information from a table, for which I have already created a query. This information then has to be inserted into a new table, but it needs another column (not the primary key) with a unique custom identifier for each record, in the format EX01, incremented by 1 for each record. I was wondering how it is possible to do this?
My approach was to create a view and then insert the values from the view into the new table, but I still have no idea how to do the unique identifier. Was the first part of my approach correct, or have I been wrong from the start?
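If the identifier only needs to be assigned once, at insert time, ROW_NUMBER() can generate it during the INSERT ... SELECT. A sketch (dbo.NewTable, dbo.MyView, SomeKey, and the column names are placeholders; the two-digit padding matches the EX01 example):

    INSERT INTO dbo.NewTable (CustomId, Col1, Col2)
    SELECT 'EX' + RIGHT('00' + CAST(ROW_NUMBER() OVER (ORDER BY SomeKey) AS varchar(10)), 2),
           Col1,
           Col2
    FROM dbo.MyView;

Two digits cap the series at EX99, so widen the padding if more rows are expected; and if later inserts must continue the sequence, seed the row number from the current MAX(CustomId) instead of starting at 1.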
I have a little system of 3 tables: job, employees and times. The times table has the fields times_id, employee_id and job_id. I'm trying to write a query that pulls the employees that don't have a certain job_id yet. I'm going to put this data in a table so the user knows they are available for that job. The code I have isn't working, and I'm not sure why:

    SELECT DISTINCT times.employee_id, employee.employee_name
    FROM employee INNER JOIN times ON employee.employee_id = times.employee_id
    WHERE (times.job_id <> @job_id)

Thanks in advance for any help. I'm sure I'm missing something silly, or maybe I need to have a stored procedure involved?... Thanks!
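One reading of why that query misbehaves: the <> filter only removes the rows for that one job, so an employee who also has rows for other jobs still comes back. What the description asks for is an anti-join; a sketch:

    -- Employees with no times row for the given job.
    SELECT e.employee_id, e.employee_name
    FROM employee AS e
    WHERE NOT EXISTS (
        SELECT 1
        FROM times AS t
        WHERE t.employee_id = e.employee_id
          AND t.job_id = @job_id
    );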
Hi All, I have a SQL Server database with a product, category and subcategory structure. Before I describe my problem, let me share what I have in the db. There are two scenarios: either a product is assigned directly to a category, or a product is placed in a subcategory that in turn belongs to a category. I use the following table structure for both scenarios:

Product >> cat_bridge >> category
Product >> subcat_bridge >> subcategory >> category

Here are the queries to get them:

1. If the product is assigned directly to a category:

    select product.pid, product.Prd_heading, product.[Description], product.Brand, product.img
    from product, cat_bridge, category
    where product.pid = cat_bridge.pid and cat_bridge.catid = category.catid;

2. If the product is assigned to a subcategory:

    select product.pid, product.Prd_heading, product.[Description], product.Brand, product.img
    from product, subcat_bridge, subcategory, category
    where product.pid = subcat_bridge.pid and subcat_bridge.subcatid = subcategory.subcatid
      and subcategory.catid = category.catid;

Now the problem is, I want to use a single query to download all the products to CSV format, so I need to combine both of the queries into a single one, probably with if/else logic, but I am not getting how to achieve it. Can anyone help me sort this out? Thanks in advance. Regards, Mykhan
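Since both branches produce the same column list, the standard way to combine them is UNION rather than if/else logic; a sketch built from the two queries above (UNION rather than UNION ALL, so a product reachable both ways is listed once):

    SELECT p.pid, p.Prd_heading, p.[Description], p.Brand, p.img
    FROM product p
    JOIN cat_bridge cb ON p.pid = cb.pid
    JOIN category  c  ON cb.catid = c.catid
    UNION
    SELECT p.pid, p.Prd_heading, p.[Description], p.Brand, p.img
    FROM product p
    JOIN subcat_bridge sb ON p.pid = sb.pid
    JOIN subcategory  sc ON sb.subcatid = sc.subcatid
    JOIN category     c  ON sc.catid = c.catid;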
I am dealing with two tables. I am trying to take one column from one table, match its records against the other table, and append that column's data.
I used an update query that looks like this:
UPDATE Acct_table SET Acct_table.Score = (SELECT Score_tbl.Score FROM Score_tbl WHERE Acct_table.Acctnb = Score_tbl.Acctnb)
This process has been running for over an hour and a half and is building a large log file. I am curious to know if there is a better command that I can use in order to join the tables and then just drop the column from one to the other. Both tables are indexed on Acctnb.
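A join-style UPDATE is usually the better tool here: it lets the optimizer use the Acctnb indexes on both sides and, unlike the subquery form above, it leaves accounts without a match alone instead of setting their Score to NULL. A sketch:

    UPDATE a
    SET    a.Score = s.Score
    FROM   Acct_table AS a
    INNER JOIN Score_tbl AS s
            ON s.Acctnb = a.Acctnb;

Batching the update (e.g. UPDATE TOP (n) ... in a loop over unprocessed rows) also keeps log file growth in check on large tables.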
I'd like to make a logic statement that takes as its arguments the result of a SQL select query. In more detail: I would like to create a local Bool variable that would be false if some value is NULL in the table (or select query).

Query example:

    select taskID from Users where Login=@username

Which classes/methods should I use to solve this problem? I use SqlDataSource to get access to the database, and I think I should use something like SqlDataSource.UpdateCommand and SqlDataSource.UpdateParameters, but I don't know how to build a logic statement from this. Thanks in advance.
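One way to keep this logic close to the query is to have SQL itself return a 0/1 flag that maps directly onto a Bool, e.g. a variation of the query from the question:

    -- Returns 1 when the user's taskID is non-NULL, else 0.
    SELECT CASE WHEN taskID IS NULL THEN 0 ELSE 1 END AS HasTask
    FROM Users
    WHERE Login = @username;

Executed through a SqlCommand with ExecuteScalar (rather than the SqlDataSource update members), the single returned value converts straightforwardly to a Boolean; note that the query returns no row at all when the login doesn't exist, which the caller has to treat as false.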
Hello, I have a requirement to select millions of rows from a table and do some parsing on each row. I have an identity column on each table. Here is the query logic I'm following. Logic A: using a WHILE loop, processing data row by row. Logic B: using a cursor, processing row by row.

Here is the performance of the above, run against 0.5 million rows of data.
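For reference, the WHILE-loop pattern keyed on an identity column usually looks something like this sketch (dbo.BigTable and id are placeholder names):

    DECLARE @id int = 0;

    WHILE 1 = 1
    BEGIN
        -- Walk the identity column in order, one row per iteration.
        SELECT @id = MIN(id) FROM dbo.BigTable WHERE id > @id;
        IF @id IS NULL BREAK;

        -- Per-row parsing / insert-update logic against the other table goes here.
    END

Both row-by-row variants tend to be dominated by a set-based rewrite whenever the parsing can be expressed in T-SQL, so that option is worth checking before tuning the loop.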
Like this I need to populate: I need to show all the group names, the systems belonging to each group, the devices belonging to each system, and the sensors belonging to each device.

So please give me a query for this complex operation. The criteria:
1. A group can have systems, a system can have devices, and a device can have sensors.
2. A group can have systems, and systems can have sensors (no device).
3. A group can have systems, and systems can have devices (no sensor).
4. A group can have only systems (no device, no sensor).
5. A group can have only sensors (no system, no device).

So please give me a query for this, not stored procedures. I need a query for this.
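Under one assumed schema (each child table carries its parent's key, and a sensor row carries exactly one of DeviceId/SystemId/GroupId, the others NULL), a chain of LEFT JOINs covers all five criteria. This is only a sketch, since the real tables and keys aren't shown:

    SELECT g.GroupName, s.SystemName, d.DeviceName, sn.SensorName
    FROM Groups AS g
    LEFT JOIN Systems AS s ON s.GroupId  = g.GroupId
    LEFT JOIN Devices AS d ON d.SystemId = s.SystemId
    LEFT JOIN Sensors AS sn
           ON sn.DeviceId = d.DeviceId                                          -- sensor on a device
           OR (sn.SystemId = s.SystemId AND sn.DeviceId IS NULL)                -- sensor directly on a system
           OR (sn.GroupId  = g.GroupId  AND sn.SystemId IS NULL
                                        AND sn.DeviceId IS NULL);               -- sensor directly on a group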
I have a SQL2K/VB.NET05-based website that uses a complex search query, whose results will contain additional logic to be evaluated. There are thousands of records and growing, so it is not feasible to code this within the program; it must be evaluated inline or after the query, and it is also not feasible to set up additional fields and tables to handle the logic.
For a very general example, in the .NET code the following variables are recognized:

sex = "M"
Paid = 4350.00
Outstanding = 28000.50
One of the query result fields will contain the additional logic to evaluate, and another will tell the type of expression:
EVAL  EXPR
----  ------------------------------------------------
BOOL  Sex='F' and Paid/Outstanding < 27.50
BOOL  Sex='M' and Paid/Outstanding < 38 or Sex='F'
INT   Paid*52.33
In other words, the thousands of records being returned have their own additional logic to evaluate. Is there a way this can be done by importing the variables into SQL Server and testing them during the query?
If not, is there a way that I can run the code in the middle of .NET? I know I could run scripted code in ASP, but ASP.NET is compiled, so I don't know if it can be done there...
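On the SQL side, the stored expressions can be evaluated with dynamic SQL, passing the .NET variables in as parameters. This runs per expression and concatenates stored text into SQL, so it is slow and only safe if the stored expressions are trusted; still, as a sketch:

    -- Hypothetical: evaluate one stored BOOL expression against supplied variable values.
    DECLARE @expr nvarchar(4000) = N'Sex=''F'' and Paid/Outstanding < 27.50';
    DECLARE @sql  nvarchar(4000) =
        N'SELECT CASE WHEN ' + @expr + N' THEN 1 ELSE 0 END AS Result
          FROM (SELECT @Sex AS Sex, @Paid AS Paid, @Outstanding AS Outstanding) AS v;';

    EXEC sp_executesql @sql,
         N'@Sex char(1), @Paid money, @Outstanding money',
         @Sex = 'M', @Paid = 4350.00, @Outstanding = 28000.50;

The INT-typed expressions would use the same pattern with a plain SELECT (expr) in place of the CASE.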
I have the query below with a date filter like "EST_PICK_DATE between '2015-02-01' and '2015-06-01'", where the logic is that EST_PICK_DATE should range from the first day of the month three months before the current month to the first day of the next month. For example, for the current month of May, EST_PICK_DATE should be between '2015-02-01' and '2015-06-01'. I need to write the query below dynamically: I have hardcoded the values, but they should be computed at run time. How do I achieve this?
I am using this query in an SSIS package, so should I do this at the SQL level, or should we implement this logic in the package? If so, how?
    INSERT INTO STG_Open_Orders (Div_Code, net_price, gross_price)
    SELECT ord.DIV_CODE AS Div_Code,
           ord_l.NET_PRICE AS net_price,
           ord_l.gross_price AS gross_price
    FROM ORD ord
    INNER JOIN ORD_L ord_l ON ord.ORD_ID = ord_l.ORD_ID
    WHERE ord_l.EST_PICK_DATE BETWEEN '2015-02-01' AND '2015-06-01'
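The two boundary dates can be computed instead of hardcoded, so the same statement works in any month. A sketch using SQL Server 2012 date functions (matching the SSIS target mentioned):

    -- First day of the month three months back, and first day of next month.
    DECLARE @start date = DATEADD(MONTH, -3, DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1));
    DECLARE @end   date = DATEADD(MONTH,  1, DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1));

    -- Run in May 2015 this yields @start = 2015-02-01 and @end = 2015-06-01.
    SELECT @start AS range_start, @end AS range_end;

The WHERE clause then becomes EST_PICK_DATE BETWEEN @start AND @end. On the SSIS side, one option is to compute the same two dates into package variables (via an expression or an Execute SQL task) and feed them to the source query as parameters.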
Hello, I need some major help: I need to make a database using SQL Server for a forum. I am using phpBB now, but I need that database. I was thinking about it; it doesn't need to be complicated or anything. I really have no idea where to start, so any help is appreciated. Thank you in advance.
I have a new business, and a part of that business includes receiving large amounts of data from time to time. I just found out yesterday that I'm going to be receiving about 1 TB of data from a new client! I'm not set up at all for this large a data set.
I want to use SQL Server as my database. Can I load SQL on a Desktop PC without having to buy a server? How?
I don't have a clue as to how I need to get set up for this data...hardware or software. Any advice you can give will be outstanding!!!!!
I have a site that was supposed to go live yesterday. I am using M$ SQL Express 2005 and the Express Manager. I set up everything using Windows authentication on my local computer. I backed up the database through the manager and simply did a restore to the live database server. I copied my aspx files and everything else. I changed my connection string to allow for SQL authentication (because I was having trouble with Windows authentication). For some reason, my SQL authenticated user can do whatever it wants within the SQL manager, but I am unable to log in to the site. I get no errors, just the usual failed login attempt text. Can someone please help? I don't know where to start on this one. Thanks, Joshua Foulk
I hope I haven't messed up! I was importing some data, and it started taking too long and seemed to have locked up, I did a cold boot and when I tried to open the db it would just load...
I have then detached it and tried to reattach the db, but it seems to just load forever.. I let it sit there for an hour and still nothing...
the DB has a 17gig LDF file and I can't attach without it...
For some reason, when I'm trying to restore a backup I'm encountering this problem. I've asked numerous people and been in and out of chatrooms all day and night; has anyone got any idea what to do?
Should I just pass an array of column names and use the SqlCommand's Parameters.AddWithValue method while looping through the array? Any comments are greatly welcomed.
I have created a database with three tables. The database has been up for a month now and contains about 20,000 records. In order to improve performance and resolve some issues, I am attempting to change some of the table definitions, e.g. allow NULLs in a few fields. When I use T-SQL or the GUI to make these changes I get the following error: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Source: .Net SQLClient Data Provider
I have SQL Server Express SP2 installed with .Net framework v3.0
Note: This issue appears when I attempt to delete a row from a table as well.
I have a report with a table and three columns. Whenever the data in one of the columns cannot fit on a single page it continues on the next page BUT, there is no header on the next page and the data in the other two columns repeats itself on the next page. (This behavior happens when I export to PDF)
Is this a known issue? I have tried every setting I could think of.
I have a horizontal bar chart with integer values, both positive and negative. Major gridlines are shown and I have not specified the interval or the label format. Sometimes the gridline labels (Y axis) show decimal values. Is there any way to format these? I have tried to format the labels as "#0" but then I see labels occurring twice. I have also played with the intervals, but then sometimes, depending on the values, the zero line is not shown.
Hi, we have a product that is developed in ASP and works with SQL Server 2000 or 2005. Since it's an ERP, we also use Reporting Services 2000 or 2005. Our application needs 3 registered DLLs that were, a long time ago, developed to support our entire application. Since we are using Windows Server 2003 x64 editions at our clients with SQL Server 2005 x64 edition, we managed to register the 32-bit DLLs on the 64-bit system. We installed them as a COM+ component, ran the command "cscript.exe adsutil.vbs set W3SVC/AppPools/Enable32BitAppOnWin64 true", and our application worked fine. This command caused IIS to use the .NET 2.0 32-bit version so that our DLLs could be correctly invoked. But now Reporting Services doesn't work, because it needs the 64-bit version of the .NET Framework. When I try to connect to localhost/reports, I get the error "%1 is not a valid Win32 application". Is there any workaround to this problem, so that I can deploy the application and the database on the same machine?
I'm trying to insert data into a locally stored database (SQL Server). The data I want inserted is presented in a Treeview control, and the data is fetched from a webservice. The data is returned in the form of a dataset. The treeview contains checkboxes allowing a user to select what to install in the locally stored database.
To sum up:
1. Get data from a webservice - not my problem
2. Present data in a Treeview control - not my problem
3. Allow the user to select which data to install - not my problem
4. Insert data that the user has selected into my db - MY PROBLEM!!!!
The Treeview is generated with DataRelations between Group and Rule.
My locally stored database is designed by a third party provider and therefore the database must not be altered. The table I want to store data in is called "Groups" and it looks like this:
GroupID uniqueidentifier - default (newid())
GroupName nvarchar(50)
ParentGroupID uniqueidentifier - if GroupType = 0 then ParentGroupID must have a value
GroupType tinyint - 0 = subgroup, 1 = "top" group
The third party also created a stored procedure called pr_AddGroup taking the following parameters:
@GroupName - can be both the RuleName and the GroupName
@GroupType - can be 0 for subgroup or 1 for "top" group
@ParentGroup - GUID
The problem with this stored procedure is that it does not have a return value, which is where my problem actually lies. If it returned @@IDENTITY I could use that as the parameter for @ParentGroup. Instead, I figure I must create two SqlCommands (one calling pr_AddGroup and another calling SELECT @@IDENTITY to get the newly created record).
My SQL commands look like this:
Dim cmd As SqlCommand
Dim Conn As SqlConnection = New SqlConnection
Conn.ConnectionString = "Data Source=myServer;Initial Catalog=myTable;Integrated Security=SSPI"
cmd = New SqlCommand
cmd.CommandType = CommandType.StoredProcedure
cmd.Connection = Conn
cmd.CommandText = "pr_AddGroup"

Dim cmd2 As SqlCommand
cmd2 = New SqlCommand
cmd2.CommandType = CommandType.Text
cmd2.CommandText = "SELECT @@IDENTITY as ID FROM Groups"
cmd2.Connection = Conn
Dim ParentGroupGUID As System.Guid
To get the data inserted into the Groups table I would do something like the following, but the code is very ugly (and it doesn't work either):
For Each Group In TreeView1.Nodes ' Loop through Groups
    If Group.Checked Then
        cmd.Parameters("@GroupName").Value = Group.Text.ToString
        cmd.Parameters("@GroupType").Value = 1 ' 1 = "top" group
        cmd.ExecuteNonQuery()
        ' ParentGroupGUID must be fetched here somehow - this is the missing piece

        For Each Rule In Group.Nodes ' Loop through Rules
            If Rule.Checked Then
                cmd.Parameters("@GroupName").Value = Rule.Text.ToString ' the rule's name, not the group's
                cmd.Parameters("@GroupType").Value = 0 ' 0 = subgroup
                cmd.Parameters("@ParentGroup").Value = ParentGroupGUID
                cmd.ExecuteNonQuery()
            End If
        Next
    End If
Next
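On the @@IDENTITY point: @@IDENTITY only reflects identity columns, and GroupID here is a uniqueidentifier defaulted to newid(), so SELECT @@IDENTITY will always return NULL. One workaround that leaves the third-party procedure untouched is to call it and read the new row back by its natural key in the same batch. A hedged T-SQL sketch (the parameter values are illustrative, and it assumes GroupName is unique per GroupType, which may not hold):

    DECLARE @NewGroupID uniqueidentifier;

    -- Insert via the third-party procedure, then look the GUID up by natural key.
    EXEC pr_AddGroup @GroupName = N'SomeGroup', @GroupType = 1, @ParentGroup = NULL;

    SELECT @NewGroupID = GroupID
    FROM   Groups
    WHERE  GroupName = N'SomeGroup' AND GroupType = 1;

    SELECT @NewGroupID AS ParentGroupGUID;  -- result read by the VB.NET caller

Sent as a single CommandType.Text batch, this replaces cmd2, and the returned value can be read with ExecuteScalar straight into ParentGroupGUID.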
I've spent the last 5 hours figuring out this problem, so ANY help is appreciated :-)
I had a problem today where I could not see column names, and all tables had a _1 after them, when viewing a SQL Server view in Enterprise Manager. E.g. the table Company, when added to the view, would be named Company_1, and the only column available was *(All Columns).

After looking through the newsgroups I saw several occurrences of this problem but no answers that gave a fix. After some investigation I found that it is caused when the database name in SQL Server has a . in it!

Test1 >> fine, the view designer works fine
Test1.6 >> problems as listed above

I don't see why Enterprise Manager allows database names with .'s if it is going to create such problems.
SQL Server 2005 is installed on a brand new 64-bit server (Windows 2003 x64 Std. Edition, 2.4 GHz AMD Opteron, 2 CPUs, 8.8 GB of RAM). There are barely a few hundred rows of data scattered among a few tables in one database.
SQL Server and SSIS performance grossly degrades overnight, and in the morning everything is slow, including clicking a toolbar selection. It takes 3 seconds to execute a simple select statement against an empty table.
It takes 15-20 seconds to execute an SSIS package that normally would take 2-3 seconds.
But once SQL Server is restarted, everything returns to normal and the performance is good all day and then the next day everything is slow again.
I feel like the SSIS encryption model has a serious flaw, especially when linked to SQL Agent jobs.
I have posted, and others have posted, messages about this. Something is plain wrong with SSIS encryption keys and password protection. Also, you do not have the choice not to protect the packages. In my case, protecting packages is completely useless.
I created config files for all my packages' connection passwords.
Now, per our IT policy, I had to change my password again, and of course all packages now return multiple errors when I open them.
Fortunately, the config files did their job and the packages are run anyway by SQL Agent. However, having to manually retype and resave all packages to get rid of the errors is just a plain hassle, not to speak of people not using the config files and the correct "Run As" SQL Agent account.
I stress the fact that in a real world production environment all packages are driven by SQL Agent jobs and MUST run automatically.
Here is the error I get after opening a package after changing my password:
Error 1 Error loading Constants05.dtsx: Failed to decrypt protected XML node "DTS:Password" with error 0x8009000B "Key not valid for use in specified state.". You may not be authorized to access this information. This error occurs when there is a cryptographic error. Verify that the correct key is available. c:\projects\ssis packages\ssis constants\Constants05.dtsx 1 1
So why isn't this key automatically adjusted after a Windows NT domain password change?
How can I refresh the key, so that I don't have to retype all the packages' connection passwords and rebuild and check in everything again?
I do not think the solution is "use an application account whose password never changes when you create your SSIS packages", but at this time it is the only solution I can think of.
How do you guys deal with this problem?
I still do not understand the SSIS security model; I feel it is disconnected from reality and impracticable in a production environment like mine.
I'm importing a CSV file delimited with semicolons. First I LTRIM the columns "in place" and the data imports fine, with all the numbers in the right columns of the target table. Then I add another Derived Column transformation to replace the decimal comma (,) with a dot (.) in order to convert the string/varchar value to numeric. But here I run into trouble. Running the task ends in success, but the result in the target table (same as above) is not a success. All the commas are now dots as expected, but what is worse is that SSIS has added values in cells where there should not be any. I get values in cells that should be empty!
In short: only LTRIM([Column1]) as the expression, with the Derived Column set to "Replace 'Column1'", works OK.
But adding the REPLACE expression, i.e. REPLACE(LTRIM([Column1]), ",", "."), breaks things.
I'm aware that I could do this with SQL but this is not the point...