I have two databases, DB1 and DB2. DB1 has a source table named 'Source'. I have created a login 'Test_user' in DB2 with public access. I have also created a view named 'Test_view' in DB2 which references data from DB1.dbo.Source.
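Assuming the goal is to let Test_user select from Test_view without handing out broad rights on DB1, a small sketch of the two usual options (object and user names follow the question; treat the chaining option with care, as it has security implications):

-- Option 1: give the login a mapped user and SELECT right in DB1 as well.
USE DB1;
CREATE USER Test_user FOR LOGIN Test_user;
GRANT SELECT ON dbo.Source TO Test_user;

-- Option 2: rely on cross-database ownership chaining (both databases must be
-- owned by the same principal and the view's ownership chain must be unbroken).
ALTER DATABASE DB1 SET DB_CHAINING ON;
ALTER DATABASE DB2 SET DB_CHAINING ON;

-- In either case the user still needs SELECT on the view itself:
USE DB2;
GRANT SELECT ON dbo.Test_view TO Test_user;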
I have a stored proc which evaluates a table of 'tasks' to return the task with the highest priority for a given user; at the same time, once that task has been found, I want to allocate that user to the task.
I have created a stored proc that works... however, I'm sure this requirement/pattern is common, and I would like some best practice for this pattern of update and select within a transaction.
Here is some sample data:
use tempdb;
go
if OBJECT_ID('Tasks', 'U') is not null drop table tasks;
go
create table tasks (
    TaskId int identity primary key,
[Code] ....
And here's what my current stored proc looks like:
if OBJECT_ID('pGetNextTask', 'P') is not null drop proc pGetNextTask;
go
create proc pGetNextTask (
    @UserID char(10),
    @TaskID int output
)
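For reference, the usual shape for this pattern is a single UPDATE with an OUTPUT clause plus UPDLOCK/READPAST hints, so finding and claiming the task happen atomically and concurrent callers skip rows that are already being claimed. A minimal sketch, assuming hypothetical Tasks columns Priority and AssignedTo (adjust to the real schema):

declare @UserID char(10) = 'user1';   -- example value
declare @TaskID int;
declare @picked table (TaskId int);

with next_task as (
    select top (1) *
    from Tasks with (updlock, readpast, rowlock)   -- skip rows other sessions have locked
    where AssignedTo is null
    order by Priority desc
)
update next_task
    set AssignedTo = @UserID
    output inserted.TaskId into @picked;           -- return the claimed row atomically

select @TaskID = TaskId from @picked;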
I have a table, Highlight, with these columns: Id, Name, Detail, StartDate, EndDate, Priority.
I want to write a query which returns one Highlight for the current date. Keep in mind that I have already set each Highlight's Priority to a value from 1 to 5, and I want high-priority rows to be selected more often than low-priority rows.
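A minimal sketch of weighted random selection, assuming StartDate/EndDate bound when a Highlight is current and that a larger Priority number means a higher priority (if 1 is the highest, use 1.0 / (6 - Priority) as the exponent instead):

-- Each row gets a random key raised to 1/weight; higher weights win more often.
select top (1) *
from Highlight
where getdate() between StartDate and EndDate
order by power(rand(checksum(newid())), 1.0 / Priority) desc;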
I have a package that loads staging tables from an Oracle source DB. In the data flow tab I have 30+ read table/write table task combinations. When I run the package, 3-4 of the read/write combos execute at a time. What I'm trying to control is the priority order of the combo execution. My goal is to minimize the total load time by having the larger table transfers run first and the smaller table transfers fill in until they are all complete. Currently, the largest table (16 million rows) transfers last (because it was the last combo that I created?).
I am unable to access the table even after granting the SELECT permission on it.
Query used by me (here Test is the schema, Card is the table, and the user is Satish), to grant SELECT on the table:

GRANT SELECT ON TEST.Card TO Satish;

Even after this it is not working, so I granted SELECT on the schema as well:

GRANT SELECT ON SCHEMA::TEST TO Satish;
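A few checks worth running, assuming the login maps to a database user named Satish (an explicit DENY, or a missing database user, would explain a GRANT that appears to have no effect):

-- 1. Confirm the login actually has a user in this database.
SELECT name, type_desc
FROM sys.database_principals
WHERE name = 'Satish';

-- 2. Look for an explicit DENY, which overrides any GRANT.
SELECT pr.name, pe.permission_name, pe.state_desc, OBJECT_NAME(pe.major_id) AS object_name
FROM sys.database_permissions pe
JOIN sys.database_principals pr ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = 'Satish';

-- 3. Test what the user can actually do.
EXECUTE AS USER = 'Satish';
SELECT * FROM TEST.Card;
REVERT;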
It runs overnight and pulls 10.5m rows. It inserts into a table from a select (so "insert...select", rather than "select into") across many tables, grouping on a max. It's complex. During the day it runs fine - maybe 25 minutes. At night it *sometimes* runs fine, but sometimes takes 4 hours. Checking this morning, there were 230 threads open for this one query. Checking sys.dm_os_tasks and sys.dm_os_waiting_tasks, there were no other wait types on that session_id. None at all. Checking Activity Monitor, most of the existing threads were suspended on the insert.
There are 24 cores, but NUMA'd. We have a server-level maxdop of 8. The maxdop option on the query is 12, just to speed up the select. Indexes and stats are refreshed daily. We have eight identical tempdb data files on a separate spindle. Checking those, they are filling up using round robin correctly.
Would the delay be due to SQL trying to combine so many 'select' threads into one 'insert' thread (as it can't insert in parallel; 2014 can, apparently - upgrades not available!)? Should I change the SP to run the select into a temp table (table variable?) with a maxdop of 12, then do the insert into the actual table using a maxdop of 1? Checking the execution plans, a table variable implies the subtree cost comes down from 2.5m to 357k. What's best - temp table or table variable?
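A minimal sketch of the two-step shape, using hypothetical names (dbo.BigSource and dbo.TargetTable stand in for the real objects); at this row count a temp table is generally the safer choice, since it has statistics and can be loaded in parallel, while a table variable has neither:

select col1, col2, max(col3) as col3
into #staging
from dbo.BigSource
group by col1, col2
option (maxdop 12);                          -- parallel select into the temp table

insert into dbo.TargetTable with (tablock)   -- TABLOCK can also enable minimal logging
select col1, col2, col3
from #staging
option (maxdop 1);                           -- serial final insert

drop table #staging;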
I have a table (ScriptTable) which holds GroupID nvarchar(10) and SQLStatement nvarchar(150).
Table fields and sample rows:

GroupID  SQLStatement
1234     Select CUSTNO, CUSTNAME, CUSTADDRESS from custtable where customerNo = 'AB123'
9876     Select CUSTNO, CUSTNAME, CUSTADDRESS from custtable where customerNo = 'XY*'
What I need is to take each select statement in turn and add the data into a temp table. I can use any method but it needs to be the most efficient. There can also be a varying number of select statements to run through each time my job is run.
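A minimal sketch using a cursor and INSERT ... EXEC, assuming every SQLStatement returns the same three columns (CUSTNO, CUSTNAME, CUSTADDRESS); the column sizes below are guesses:

create table #results (
    GroupID     nvarchar(10),
    CUSTNO      nvarchar(50),
    CUSTNAME    nvarchar(100),
    CUSTADDRESS nvarchar(200)
);

declare @GroupID nvarchar(10), @sql nvarchar(max);

declare stmt_cursor cursor local fast_forward for
    select GroupID, SQLStatement from ScriptTable;

open stmt_cursor;
fetch next from stmt_cursor into @GroupID, @sql;

while @@fetch_status = 0
begin
    insert into #results (CUSTNO, CUSTNAME, CUSTADDRESS)
    exec sp_executesql @sql;                                         -- run the stored statement

    update #results set GroupID = @GroupID where GroupID is null;   -- tag the new rows

    fetch next from stmt_cursor into @GroupID, @sql;
end

close stmt_cursor;
deallocate stmt_cursor;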
I want to return all rows in table giftregistryitems with an additional column that holds the sum of column `amount` in table giftregistrypurchases for the respective item in table giftregistryitems.
What I tried, but it returns NULL for purchasedamount:
SELECT (SELECT SUM(amount)
        from giftregistrypurchases gps
        where registryid = gi.registryid
          AND gp.itemid = gps.itemid) as purchasedamount,
       *
FROM giftregistryitems gi
LEFT JOIN giftregistrypurchases gp on gp.registryid = gi.id
WHERE gi.registryid = 2
How can I achieve what I need?
Here are my table definitions and data:
/****** Object:  Table [dbo].[giftregistryitems]    Script Date: 02-05-15 22:37:11 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[giftregistryitems](
    [id] [int] IDENTITY(1,1) NOT NULL,
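A sketch of one likely fix: correlate the subquery directly on the item row instead of on the outer-joined gp alias (which is NULL whenever an item has no purchases). This assumes giftregistrypurchases.itemid references giftregistryitems.id:

SELECT gi.*,
       (SELECT SUM(gps.amount)
        FROM giftregistrypurchases gps
        WHERE gps.registryid = gi.registryid
          AND gps.itemid = gi.id) AS purchasedamount
FROM giftregistryitems gi
WHERE gi.registryid = 2;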
How do I import a text file with a list of NI numbers into a new table with a column listing all the NI numbers? I think I should use the SELECT INTO clause, but I'm not sure how to do this.
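A minimal sketch using BULK INSERT rather than SELECT INTO (SELECT INTO copies from an existing table, while BULK INSERT reads a file); it assumes one NI number per line, and the path and column size are placeholders:

CREATE TABLE dbo.NINumbers (NINumber varchar(9));

BULK INSERT dbo.NINumbers
FROM 'C:\import\ni_numbers.txt'
WITH (ROWTERMINATOR = '\n');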
I have created a table (T1) from a select query result; that select query is parameterised. Now I need to refresh T1 with the select query's result each time it runs.
Below is my Query:
ALTER PROCEDURE [dbo].[RPT_Cost_copy]
SELECT MEII.*, SIMM.U_SP2DC, UPPER(SIMM.U_C3C2) AS GRP3, sb.cost,
       PREV.Z1, PREV.Z3, SB.Z2, SB.Z4, SIMM.U_C3DC1 AS FAM
INTO T1
FROM (SELECT a.meu, a.mep2, SUM(a.mest) as excst
      FROM mei as A
      WHERE a.myar=@yr and a.mprd=@mth
        AND LTRIM(A.MCU) <> ''
        AND LTRIM(A.MRP2) <> ''
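A small sketch of one way to make the refresh repeatable: drop (or truncate) T1 at the top of the procedure, before the existing SELECT ... INTO, so the build does not fail once T1 already exists. If T1's structure never changes, creating it once and using TRUNCATE TABLE plus INSERT ... SELECT is the alternative:

IF OBJECT_ID('dbo.T1', 'U') IS NOT NULL
    DROP TABLE dbo.T1;   -- rebuilt by the SELECT ... INTO that follows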
In a Library Management database we have these tables:

1) Document (DocNo, Doc_type, permalink, inDate)
2) Title (id, DocNo, Main_Title, Other_Title)
3) Author (id, Author_Name, Author_Family, Type -- like: main author, translator, ...)
4) Publisher (id, DocNo, Name, Publisedate, address)
5) Subject (id, DocNo, Subject)
6) Description (id, DocNo, ISBN, description) -- one document may have several ISBNs, etc.
In document table I have 500,000 records.
I want to search for a word in these tables. For example, I want to search for 'Computer'; this word may be in the subject, title, description, etc. How can I do this with the best performance?
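A sketch using full-text indexes, which generally outperform LIKE '%word%' scans at this size; it assumes full-text search is installed and that PK_Title, PK_Subject and PK_Description are the tables' unique key index names (placeholders):

CREATE FULLTEXT CATALOG LibraryCatalog;

CREATE FULLTEXT INDEX ON dbo.Title (Main_Title, Other_Title)
    KEY INDEX PK_Title ON LibraryCatalog;
CREATE FULLTEXT INDEX ON dbo.[Subject] ([Subject])
    KEY INDEX PK_Subject ON LibraryCatalog;
CREATE FULLTEXT INDEX ON dbo.[Description] ([description])
    KEY INDEX PK_Description ON LibraryCatalog;

-- Gather the matching documents from all three tables:
SELECT DocNo FROM dbo.Title         WHERE CONTAINS((Main_Title, Other_Title), 'Computer')
UNION
SELECT DocNo FROM dbo.[Subject]     WHERE CONTAINS([Subject], 'Computer')
UNION
SELECT DocNo FROM dbo.[Description] WHERE CONTAINS([description], 'Computer');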
I have written a stored procedure that manipulates date fields in order to produce certain reports. I would like to add a column to the dataset that is joined in from another table (the table name is Periods).
USE [International_Forecast_New]
GO
/****** Object:  StoredProcedure [dbo].[GetOpenResult]    Script Date: 01/07/2014 11:41:35 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
[Code] ....
What I need is to add the period, quarter and year to the dataset based on the "Store_Open" value.
For example, Period 2 looks like this:

Period  Quarter  Year  Period Start  Period End
2       1        2014  2014-01-27    2014-02-23
So if the store_open value is 02/05/2014, it would populate Period 2, Quarter 1, Year 2014.
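A minimal sketch of the join, assuming the Periods columns are [Period], [Quarter], [Year], PeriodStart and PeriodEnd, and using dbo.Forecast as a stand-in for the proc's existing result set:

SELECT f.*, p.[Period], p.[Quarter], p.[Year]
FROM dbo.Forecast AS f                 -- stand-in for the existing query/result
LEFT JOIN dbo.Periods AS p
       ON f.Store_Open >= p.PeriodStart
      AND f.Store_Open <= p.PeriodEnd;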
We have an equipment table which stores Equipment_ID, Code, Parent_Id, etc. For each Equipment_ID there is a Parent_Id. The PK is Equipment_ID. Now I want to select the Code for the Parent_Id, which also sits in the same table: all the Parent_Ids are also Equipment_IDs.
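A minimal sketch of the self-join, assuming the table is dbo.Equipment (child row e, parent row p):

SELECT e.Equipment_ID,
       e.Code,
       e.Parent_Id,
       p.Code AS Parent_Code
FROM dbo.Equipment AS e
LEFT JOIN dbo.Equipment AS p
       ON p.Equipment_ID = e.Parent_Id;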
What is the best way to transfer data from the staging table into the main table?

Example:
Staging table name: TableA_stage (# of rows: millions)
Main table name: TableA_main (# of rows: billions)

Note: the staging table may contain some of the same data as the main table.
Currently I am doing:
- Load data into the staging table (TableA_stage)
- Remove any duplicate rows from the staging table (TableA_stage)
- Disable all indexes on the main table (TableA_main)
- Insert into the main table (TableA_main) from the staging table (TableA_stage)
- Remove any duplicate rows from the main table using a CTE (TableA_main)
- Rebuild indexes on the main table (TableA_main)
The problem with the above method is that it takes a lot of time and the log file grows very big.
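One alternative shape, sketched below: filter out rows that already exist in the main table and insert in batches, so each transaction (and therefore the log) stays small and the post-insert dedup step is no longer needed. KeyCol is a placeholder for whatever column(s) identify a duplicate, and the tables are assumed to have identical column layouts:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TableA_main WITH (TABLOCK)      -- TABLOCK can allow minimal logging
    SELECT TOP (500000) s.*
    FROM dbo.TableA_stage AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.TableA_main AS m
                      WHERE m.KeyCol = s.KeyCol);   -- skip rows already in the main table

    SET @rows = @@ROWCOUNT;
END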
I have created a trigger that is set off every time a new item has been added to TableA. The trigger then inserts 4 rows into TableB, which contains two columns (item, task type).
Each row will have the same item, but with a different task type.
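A minimal sketch of such a trigger, assuming the new item's identifier is a column named item on TableA, that TableB's columns are item and task_type, and using four placeholder task-type values; the CROSS JOIN against the inserted pseudo-table keeps it set-based, so it still works when several items are added in one statement:

CREATE TRIGGER trg_TableA_Insert
ON dbo.TableA
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.TableB (item, task_type)
    SELECT i.item, t.task_type
    FROM inserted AS i
    CROSS JOIN (VALUES ('Type1'), ('Type2'), ('Type3'), ('Type4')) AS t(task_type);
END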
Where interid in ('comp1', 'comp2', 'comp4', 'comp5')
What would be the best way, using these scripts, to pull the data into my testDW and not have duplicate data issues?
I was thinking of using a staging DB on the GP cluster and then building an import data package to run nightly. The issue I had was: how do I avoid duplicate data?
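One sketch of the nightly load step, using MERGE so re-running the package never inserts a row twice; TargetTable, SourceTable, BusinessKey and SomeColumn are placeholders for the real objects and the column(s) that identify a row across loads:

MERGE dbo.TargetTable AS tgt
USING staging.SourceTable AS src
    ON tgt.BusinessKey = src.BusinessKey
WHEN MATCHED THEN
    UPDATE SET tgt.SomeColumn = src.SomeColumn      -- refresh rows already loaded
WHEN NOT MATCHED BY TARGET THEN
    INSERT (BusinessKey, SomeColumn)
    VALUES (src.BusinessKey, src.SomeColumn);       -- add only the new rows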
I have two tables with some identical fields, and I am trying to populate one of the tables with a row that has been selected from the other table. Is there some standard code that I can use to take the selected row and insert the data into the appropriate fields in the other table?
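The standard form is INSERT ... SELECT; a minimal sketch, where the table, column and @Id names are placeholders for the real ones:

INSERT INTO dbo.TargetTable (ColA, ColB, ColC)
SELECT ColA, ColB, ColC
FROM dbo.SourceTable
WHERE Id = @Id;   -- the selected row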
I have a table with dates and values and other columns. In a proc I need to get the result as a month plus its values, for all the months, whether or not data exists for the month.
A similar table would be:
create table testing (
    DepDate datetime,
    val int)

insert into testing values ('2014-01-10 00:00:00.000', 1)
insert into testing values ('2014-05-19 00:00:00.000', 10)
insert into testing values ('2014-08-15 00:00:00.000', 20)
insert into testing values ('2014-11-20 00:00:00.000', 30)
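A minimal sketch for one year, assuming a hypothetical @year parameter and that val should be summed per calendar month of DepDate; the recursive CTE just generates the 12 month numbers so missing months still appear with 0:

declare @year int = 2014;

;with Months as (
    select 1 as MonthNo
    union all
    select MonthNo + 1 from Months where MonthNo < 12
)
select m.MonthNo,
       datename(month, datefromparts(@year, m.MonthNo, 1)) as MonthName,
       isnull(sum(t.val), 0) as val
from Months m
left join testing t
       on month(t.DepDate) = m.MonthNo
      and year(t.DepDate) = @year
group by m.MonthNo
order by m.MonthNo;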
I have a function like the one below:

Public Sub Getdata2(ByVal query, ByVal db, ByVal name)
    Dim selectSQL As String
    Dim con As SqlConnection
    Dim cmd As SqlCommand
    Dim reader As SqlDataReader
    Try
        con = New SqlConnection("User ID=xxx;password=xxx;persist security info=True;Initial Catalog=database1;Data Source=xxx.xxx.xxx.xxx.xxx,xxxxx;")
        Dim username As String
        username = Request.QueryString("username")
        selectSQL = "SET DATEFORMAT DMY;" & _
            "Insert into table1(hse_num, st_name, proj_name, unit_num, postal, n_postal, flr_area, flr_sf, flr_rate, flr_ra_sf, land_area, land_sf, land_rate, lnd_ra_sf, prop_code, cont_date, title, sisv_ref, r_date, rec_num, source, username, DGP, Remarks, Sub_Code, caveat, consider, age) " & _
            "select hse_num, st_name, proj_name, unit_num, postal, n_postal, flr_area, flr_sf, flr_rate, flr_ra_sf, land_area, land_sf, land_rate, lnd_ra_sf, prop_code, cont_date, title, sisv_ref, r_date, rec_num, source, '" & username & "', DGP, Remarks, Sub_Code, caveat, consider, age " & _
            "from [yyy.yyy.yyy.yyy,yyyy].database2.dbo.table2 where " & query
        cmd = New SqlCommand(selectSQL, con)
        con.Open()
        reader = cmd.ExecuteReader()
        lbl.Text = selectSQL
    Catch ex As Exception
        lbl.Text = ex.Message
    Finally
        If (Not con Is Nothing) Then
            con.Close()
            con.Dispose()
        End If
    End Try
End Sub

How do I retrieve data from [yyy.yyy.yyy.yyy,yyyy].database2.dbo.table2, given that it is on a different server with a different UID and password?
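One sketch of handling the remote credentials on the SQL Server side instead of in the connection string: define a linked server with its own remote login, then the existing INSERT ... SELECT can reference it by name. REMOTE2, remote_user and remote_password are placeholders:

EXEC sp_addlinkedserver
     @server = N'REMOTE2',
     @srvproduct = N'',
     @provider = N'SQLNCLI',
     @datasrc = N'yyy.yyy.yyy.yyy,yyyy';

EXEC sp_addlinkedsrvlogin
     @rmtsrvname = N'REMOTE2',
     @useself = N'FALSE',
     @locallogin = NULL,
     @rmtuser = N'remote_user',
     @rmtpassword = N'remote_password';

-- The dynamic SQL in the function can then use:
--   from REMOTE2.database2.dbo.table2 where ...
-- in place of the four-part IP-based name.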