Is it possible to block updates from Excel but still allow selects? We're using SQL Server authentication for the time being and a number of users use Excel to query the database. One user has figured out that Excel can also send data to SQL Server. Is there a straightforward way to prevent this?
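A minimal sketch of one option, assuming SQL Server 2005 or later and that the Excel connections use a dedicated login/user (excel_reader below is hypothetical): keep SELECT rights in place and explicitly deny data modification, or grant nothing beyond db_datareader membership.

Code Snippet
DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO excel_reader;
-- or grant read-only access to every table via the fixed database role:
EXEC sp_addrolemember 'db_datareader', 'excel_reader';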
I have a cube with a partition configured for writeback.
Users in Excel need to see the line totals. Unfortunately, they sometimes have the bad idea of writing into the total cell rather than into the leaf cells.
Because there is MDX code behind the weight expression field, we get some weird values: one is negative and others are 10 times the initial value in the total. So it's very dangerous.
How can we block writing to these total cells?
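One possible approach, sketched here on the assumption that role-based cell security is acceptable and that the writes target a hypothetical [Product] hierarchy: restrict the role's Read/Write cell permission so that only leaf cells are writeable and writes against aggregated totals are rejected.

Code Snippet
-- MDX expression for the role's Read/Write cell permission (Cell Data tab in the role designer);
-- only leaf cells of the hypothetical hierarchy remain writeable.
IsLeaf([Product].[Product Hierarchy].CurrentMember)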
I have the following stored procedure to test scope of variables
alter proc updatePrereq @pcntr int, @pmax int as begin
[Code] ....
In the above script, @i is declared in the IF block only when the @pcntr value is 1. Assume the above stored procedure is called 5 times from this script:
declare @z int
set @z = 1
declare @max int
set @max = 5
while @z <= @max
begin
    exec dbo.updatePrereq @z, @max
    set @z = @z + 1
end
go
As I said earlier, the `@i` variable exists only when `@pcntr` is `1`. Therefore, when I call the stored procedure the second time and onwards, control never enters the IF block, so the @i variable shouldn't even exist. But the script prints the value of `@i` in each iteration. How is this possible? Shouldn't it throw an error saying the `@i` variable does not exist when `@pcntr` is greater than `1`?
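For reference, here is a hypothetical sketch of the kind of procedure described above (the real body was omitted). It also shows why the behaviour is expected: in T-SQL a variable is scoped to the whole batch or procedure, not to the IF block it is declared in, so @i exists (with value NULL) on every call.

Code Snippet
alter proc updatePrereq @pcntr int, @pmax int
as
begin
    if @pcntr = 1
    begin
        declare @i int      -- declared inside the IF, but scoped to the whole procedure
        set @i = @pcntr
    end
    print @i                -- prints a blank line (NULL) on calls where the IF branch is skipped
end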
In SSIS 2008R2, I have a dataflow with an xlsx source and the destination is a SQL Server 2008R2 table. The files are delivered from a location where staff members 'work with' the source files. The files are produced monthly.
The dataflow that contains the file breaks upon the attempt to process subsequent monthly xlsx files with a message similar to the following:
--************* [TNUQQ [16]] Warning: The external columns for component "TNUQQ" (16) are out of synchronization with the data source columns. The column "F12" needs to be added to the external columns. The external column "county_taxable_sale_amount" needs to be updated. The external column "city_taxable_sale_amount" needs to be updated. The external column "district_taxable_sale_amount" needs to be updated. The external column "QTY" (62) needs to be removed from the external columns. --*************
I've noticed that some columns in the file ship with no data. A column with no data can be typed as datetime one month, and then float another month. I've tried to load xlsx to raw to table, but that does not work around this issue.
I've tried to set 'ValidateExternalMetadata' to 'False' on the Excel source, but that does not work either. Aside from going back to the folks who ship the file to us, is there anything that can be done in SSIS to work around this issue and still wind up with valid data?
I need a script that inserts the data of an Excel sheet into a table. If something already exists it should leave it alone, unless it has been edited in the Excel sheet, and so on. This process has to go through a stored procedure... ...But how?
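A minimal sketch of one way to structure the procedure, assuming SQL Server 2008 or later and that the Excel data has already been loaded into a staging table (the table and column names below are hypothetical): MERGE the staging rows into the target, updating rows that were edited in the sheet and inserting rows that don't exist yet.

Code Snippet
CREATE PROCEDURE dbo.UpsertFromExcelStaging
AS
BEGIN
    SET NOCOUNT ON;

    MERGE dbo.TargetTable AS t
    USING dbo.ExcelStaging AS s
        ON t.ItemKey = s.ItemKey
    WHEN MATCHED AND t.ItemValue <> s.ItemValue THEN
        UPDATE SET t.ItemValue = s.ItemValue       -- row exists but was edited in the sheet
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ItemKey, ItemValue)
        VALUES (s.ItemKey, s.ItemValue);           -- new row from the sheet
END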
I have a project that consists of a SQL db with an Access front end as the user interface. Here is the structure of the table on which this question is based:
Code Block
create table #IncomeAndExpenseData (
    recordID nvarchar(5) NOT NULL,
    itemID int NOT NULL,
    itemvalue decimal(18, 2) NULL,
    monthitemvalue decimal(18, 2) NULL
)

The itemvalue field is where the user enters his/her numbers via Access. There is an IncomeAndExpenseCodes table as well, which holds item information, including the itemID and entry unit of measure. Some itemIDs have an entry unit of measure of $/mo, while others are entered in terms of $/yr, and others in %/yr.
For itemvalues of itemIDs with entry units of measure that are not $/mo, a stored procedure performs calculations that convert them into numbers with a unit of measure of $/mo and updates IncomeAndExpenseData, putting these numbers in the monthitemvalue field. This stored procedure is written to only calculate values for monthitemvalue fields which are null, in order to avoid recalculating every single row in the table.
If the user edits the itemvalue field there is a trigger on IncomeAndExpenseData which sets the monthitemvalue to null so the stored procedure recalculates the monthitemvalue for the changed rows. However, it appears this trigger is also setting monthitemvalue to null after the stored procedure updates the IncomeAndExpenseData table with the recalculated monthitemvalues, thus wiping out the answers.
How do I write a trigger that sets the monthitemvalue to null only when the user edits the itemvalue field, not when the stored procedure puts the recalculated monthitemvalue into the IncomeAndExpenseData table?
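A minimal sketch of one way to write it, assuming (recordID, itemID) identifies a row and that the stored procedure only ever updates monthitemvalue: have the trigger react only when itemvalue itself was changed, so the procedure's updates are ignored.

Code Snippet
CREATE TRIGGER trg_IncomeAndExpenseData_ItemValueChanged
ON IncomeAndExpenseData
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Fire only when the itemvalue column was part of the UPDATE statement...
    IF UPDATE(itemvalue)
    BEGIN
        -- ...and only for rows where its value actually changed.
        UPDATE d
        SET d.monthitemvalue = NULL
        FROM IncomeAndExpenseData d
        JOIN inserted i ON i.recordID = d.recordID AND i.itemID = d.itemID
        JOIN deleted  x ON x.recordID = i.recordID AND x.itemID = i.itemID
        WHERE i.itemvalue <> x.itemvalue
           OR (i.itemvalue IS NULL AND x.itemvalue IS NOT NULL)
           OR (i.itemvalue IS NOT NULL AND x.itemvalue IS NULL);
    END
END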
I'm self-taught on T-SQL so forgive me if this is a dumb question. If I have a stored procedure containing the following...
IF [condition] SELECT ... ELSE SELECT ...
Would it be more efficient/proper/etc. to break those two SELECTs out into their own SPs and execute them from the IF statement?
IF [condition] EXEC [SP] ELSE EXEC [SP]
I've got a number of pretty lengthy SPs that I think are pretty tight but if I can make them more efficient by breaking them down into smaller tasks, I'd rather do that.
Hi... I have data that I am getting through a DBF file, and I am dumping that data to SQL Server. Then, after scrubbing it, I take the data from SQL Server and put it into the production database. Right now my stored procedure handles a single plan only, but there may be two or more plans together in the same SQL Server database which I need to scrub, and then update if that particular plan already exists or insert if it doesn't...
This is my sproc:
ALTER PROCEDURE [dbo].[usp_Import_Plan]
    @ClientId int,
    @UserId int = NULL,
    @HistoryId int,
    @ShowStatus bit = 0 -- Indicates whether status messages should be returned during the import.
AS
SET NOCOUNT ON
DECLARE @Count int, @Sproc varchar(50), @Status varchar(200), @TotalCount int
SET @Sproc = OBJECT_NAME(@@ProcId)
SET @Status = 'Updating plan information in Plan table.'
UPDATE Statements..Plan
SET PlanName = PlanName1, Description = PlanName2
FROM Statements..Plan cp
JOIN ( SELECT DISTINCT PlanId, PlanName1, PlanName2 FROM Census ) c ON cp.CPlanId = c.PlanId
WHERE cp.ClientId = @ClientId
AND ( IsNull(cp.PlanName,'') <> IsNull(c.PlanName1,'') OR IsNull(cp.Description,'') <> IsNull(c.PlanName2,'') )

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Updated ' + Cast(@Count AS varchar(10)) + ' record(s) in ClientPlan.'
END
ELSE
BEGIN
    SET @Status = 'No records were updated in Plan.'
END

SET @Status = 'Adding plan information to Plan table.'
INSERT INTO Statements..Plan ( ClientId, ClientPlanId, UserId, PlanName, Description )
SELECT DISTINCT @ClientId, CPlanId, @UserId, PlanName1, PlanName2
FROM Census
WHERE PlanId NOT IN ( SELECT DISTINCT CPlanId FROM Statements..Plan WHERE ClientId = @ClientId AND ClientPlanId IS NOT NULL )

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Added ' + Cast(@Count AS varchar(10)) + ' record(s) to Plan.'
END
ELSE
BEGIN
    SET @Status = 'No information was added to Plan.'
END
SET NOCOUNT OFF
So how do I do multiple inserts and updates using this stored procedure?
What is your opinion on using sub-selects versus joining tables?
SELECT (SELECT COMPANY_NAME FROM COMPANIES WHERE COMPANY_ID = c.COMPANY_ID) AS 'Company Name'
FROM COMPANY_ORDERS c
vs
SELECT co.COMPANY_NAME AS 'Company Name' FROM COMPANY_ORDERS c, COMPANIES co WHERE c.COMPANY_ID = co.COMPANY_ID
I'm not having any problems, just curious what everyone is practicing and which approach is more resource-friendly for the system. I personally like using sub-selects (or subqueries) because that way I don't have to deal with complex joins and WHERE conditions, especially when you're just dealing with a relational table.
I'm having problems optimizing a SQL SELECT statement that uses a LIKE coupled with an OR clause. For simplicity's sake, I'll demonstrate this with a scaled-down example:
CompanyAddressAssoc is the many-to-many associative table for Company and Address. A search query is required that, given a search string (i.e. 'TEST'), returns all Company -> Address records where either the CompanyName or the AddressName starts with the parameter:
Select c.CompanyID, c.CompanyName, a.AddressName
FROM Company c
LEFT OUTER JOIN CompanyAddressAssoc caa ON caa.CompanyID = c.CompanyID
LEFT OUTER JOIN Address a ON a.AddressID = caa.AddressID
WHERE ((c.CompanyName LIKE 'TEST%') OR (a.AddressName LIKE 'TEST%'))
There are proper indexes on all tables. The execution plan creates a hash table on one LIKE query, then meshes in the other LIKE query. This takes a very long time to do, given a dataset of 500,000+ records in Company and Address.
Is there any way to optimize this query, or is it a problem with the base table implementation?
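One rewrite worth trying (a sketch against the same simplified schema, not a guaranteed fix): split the OR into two queries that are each sargable on a single LIKE and combine them with UNION, so each branch can seek its own index instead of forcing a hash across both tables.

Code Snippet
SELECT c.CompanyID, c.CompanyName, a.AddressName
FROM Company c
LEFT OUTER JOIN CompanyAddressAssoc caa ON caa.CompanyID = c.CompanyID
LEFT OUTER JOIN Address a ON a.AddressID = caa.AddressID
WHERE c.CompanyName LIKE 'TEST%'
UNION
SELECT c.CompanyID, c.CompanyName, a.AddressName
FROM Address a
JOIN CompanyAddressAssoc caa ON caa.AddressID = a.AddressID
JOIN Company c ON c.CompanyID = caa.CompanyID
WHERE a.AddressName LIKE 'TEST%'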
I remember when studying for exam 70-528 that I read a practice question which said it was better to return two SELECT statements rather than a join on two tables, as this would be more efficient. The question, from the MCPD Self-Paced Training Kit: Designing and Developing Web-Based Applications Using the Microsoft .NET Framework (Certification Series), was:

You are an ASP.NET application developer. You are participating in the design of a Web retail application. The application accepts orders from the clients and stores them in a SQL Server 2005 database. Each order contains header information (order date, client information, and shipping address) and details information, which includes information about ordered products. Each detail line contains information about one ordered product and its quantity and price. All of the preceding information will be stored in two database tables: one table called Orders for the order header and another table, called OrderDetails, for order details, in which each record in the Orders table could have one or more related records in the OrderDetails table. When the user selects the specific order ID, the application is supposed to select all the information about the order from the database, and you need to create a stored procedure that allows selection of information about all the fields for the stored order from the database, based on a provided order ID value. Which of the following stored procedures do you recommend creating in a database to achieve the best performance on the database server side and reduce network traffic between the application and database server?

The answer was:

CREATE PROCEDURE ORDERS_SELECT_ORDER @ORDER_ID INT
AS
SELECT * FROM Orders WHERE OrderID=@ORDER_ID
SELECT * FROM OrderDetails WHERE OrderID=@ORDER_ID
RETURN

As opposed to the traditional:

CREATE PROCEDURE SP_SELECT_ORDER @ORDER_ID INT
AS
SELECT * FROM Orders LEFT JOIN OrderDetails ON Orders.OrderID = OrderDetails.OrderID WHERE Orders.OrderID=@ORDER_ID

The explanation was: To achieve the best performance, avoid names of stored procedures that start with the sp_ prefix. If the name of the stored procedure starts with sp_, SQL Server always looks for the stored procedure in the master database first, because all the system stored procedures start with the same prefix and are stored in the master database. It requires extra effort from the server to locate the stored procedure. This occurs even if you qualify the stored procedure with the database name. To avoid this issue, use a custom naming convention rather than the sp_ prefix. In addition, using a join inside the stored procedure for the provided scenario will require an extra task from the server to join the data and produce a joined result. It will also lead to redundant data, since the header information will be duplicated with each detail of the order, and will increase traffic between the client and the server. Using only two separate SELECT SQL statements will allow you to avoid a join operation and reduce the size of the data returned to the client.

Has anyone got any views on this? I have never actually seen this demonstrated in a sample application. Is it a lot more efficient?
Thanks,
Scott
I'm about to build a batch of procedures that will be returning one row of data that is created by referencing the same table multiple times. The example below is very simplified but you'll see the differences.
Which is better:
Option A:
SELECT a.Field1, a.Field2, b.Field1, b.Field2 FROM MyTable a, MyTable b WHERE a.ID = 10 AND a.Type = 1 AND b.ID = 10 AND b.Type = 2
Or Option B:
SELECT a.Field1, a.Field2, b.Field1, b.Field2 FROM MyTable a JOIN MyTable b ON a.ID = b.ID WHERE a.ID = 10 AND a.Type = 1 AND b.Type = 2
I have a statement that checks fine in Query Analyzer but fails in my WebMatrix ASP.NET page. I wonder if it's being too picky?

Dim queryString As String = "SELECT distinct [CSULOG5].[status] , [CSULOG5].[lmca_nbr] FROM [CSULOG5]" works, but

Dim queryString As String = "SELECT distinct [CSULOG5].[status] + [CSULOG5].[lmca_nbr] FROM [CSULOG5]" does not.
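If the failure is a data type mismatch when concatenating the two columns (an assumption; the column types aren't shown here), one sketch that may get past it is casting both sides to character data before concatenating:

Code Snippet
Dim queryString As String = "SELECT DISTINCT CAST([CSULOG5].[status] AS varchar(50)) + CAST([CSULOG5].[lmca_nbr] AS varchar(50)) FROM [CSULOG5]"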
I am trying to write a query that selects all the mis_key values that are not in the subquery. The left join query works on its own, but when nested it comes out blank. Here is the query:
select *
from [TLC NEW Inv. Anal.].dbo.['Home Decor$']
WHERE NOT EXISTS (
    SELECT *
    FROM [TLC New].DBO.['Fabric or basket accessory$']
    LEFT JOIN [TLC NEW Inv. Anal.].DBO.['Home Decor$']
        ON [TLC New].DBO.['Fabric or basket accessory$'].[Item #] = [TLC NEW Inv. Anal.].DBO.['Home Decor$'].mis_key
)
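A sketch of a correlated form that may behave as intended (it assumes mis_key is meant to match [Item #]): without a correlation back to the outer row, the subquery returns rows regardless of which outer row is being tested, so NOT EXISTS filters everything out.

Code Snippet
SELECT *
FROM [TLC NEW Inv. Anal.].dbo.['Home Decor$'] hd
WHERE NOT EXISTS (
    SELECT 1
    FROM [TLC New].dbo.['Fabric or basket accessory$'] f
    WHERE f.[Item #] = hd.mis_key
)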
Hello, I would like to take the following SELECT statements and make one table out of them. However, I want each statement to be its own column. The UNION adds all of the statements into one column. Ultimately, I would like to place these into a stored procedure. Any help would be appreciated. Here are the statements:
select count(chalf_upd) as CHalfFile from elpc where elpc.chalf_upd is not null
select count (halfupd) as HalfFile from elpc where elpc.halfupd is not null
select count (cstaff_upd) as CStaffFile from elpc where elpc.cstaff_upd is not null
select count (staffupd) as StaffFile from elpc where elpc.staffupd is not null
select count (coffice_up) as COfficeFile from elpc where elpc.coffice_up is not null
select count (officeupd) as OfficeFile from elpc where elpc.officeupd is not null
select count (signupd) as SignFile from elpc where elpc.signupd is not null
select count (informupd) as ISignFile from elpc where elpc.informupd is not null
select count(staff_color_lastupd) as InStaffFile from staff_graphics_lastupdate where staff_graphics_lastupdate.staff_color_lastupd is not null
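A sketch of how these can collapse into a single row with one column per count (it relies on COUNT(column) ignoring NULLs, and uses a scalar subquery for the second table):

Code Snippet
SELECT
    COUNT(e.chalf_upd)  AS CHalfFile,
    COUNT(e.halfupd)    AS HalfFile,
    COUNT(e.cstaff_upd) AS CStaffFile,
    COUNT(e.staffupd)   AS StaffFile,
    COUNT(e.coffice_up) AS COfficeFile,
    COUNT(e.officeupd)  AS OfficeFile,
    COUNT(e.signupd)    AS SignFile,
    COUNT(e.informupd)  AS ISignFile,
    (SELECT COUNT(s.staff_color_lastupd) FROM staff_graphics_lastupdate s) AS InStaffFile
FROM elpc e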
I have a stored procedure that performs a simple SELECT. The SELECT has no locking hints or other hints, and the database is set up in a standard configuration.
The problem is that the SELECT runs for some time, and while it is running I can see (in the Profiler) that other SPs with simple SELECTs are held waiting until "my" SP has finished. The other SPs may be other instances of the same SP as the one I'm running. All SPs contain simple SELECTs and should only hold shared locks.
I have also checked if there are any locks holding the other SPs back - there aren't any.
So my question is: What resource can hold out other simple SELECTs in this situation? Where should I look to identify the resource?
Regards,
Bjørn
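If this is SQL Server 2005 or later, a quick sketch for seeing what the waiting sessions are blocked on (an assumption about the version; these DMVs do not exist in SQL Server 2000):

Code Snippet
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_resource, r.command
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0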
Ok, here's the deal. At the moment I have problems extracting some data, as the table structure refers to itself. To get the data I want, I have three SQL queries set up as views. The first one takes the raw data out of the table:
fsp_marken:
Code Snippet
SELECT TOP (100) PERCENT LFDNR, KRITERIUM, EINORDNUNG,
    CASE kriterium
        WHEN 1 THEN 'Marken'
        WHEN 2 THEN 'Markenhauptgruppe'
        WHEN 3 THEN 'Markenuntergruppe'
    END AS Gruppierung
FROM dbo.ARTEINORDNUNG
WHERE (KRITERIUM = 1) AND (LFDNR <> 0) OR (KRITERIUM = 2) AND (LFDNR <> 0) OR (KRITERIUM = 3) AND (LFDNR <> 0)
ORDER BY KRITERIUM
The second one takes the necessary information from the first statement fsp_marken and fills up additional info in this case the article number:
fsp_marken2:
Code Snippet
SELECT dbo.ARTIKEL.ARTNR1,
    CASE fsp_marken.kriterium WHEN 1 THEN fsp_marken.einordnung END AS Marken,
    CASE fsp_marken.kriterium WHEN 2 THEN fsp_marken.einordnung END AS Markenhauptgruppe,
    CASE fsp_marken.kriterium WHEN 3 THEN fsp_marken.einordnung END AS Markenuntergruppe
FROM dbo.ARTIKEL
INNER JOIN dbo.AEINORD ON dbo.ARTIKEL.LFDNR = dbo.AEINORD.ARTNR
INNER JOIN dbo.FSP_MARKEN ON dbo.AEINORD.KRITERIUM = dbo.FSP_MARKEN.KRITERIUM AND dbo.AEINORD.EINORDNUNG = dbo.FSP_MARKEN.LFDNR
GROUP BY dbo.ARTIKEL.ARTNR1, dbo.FSP_MARKEN.KRITERIUM, dbo.FSP_MARKEN.EINORDNUNG
The third one takes the results from fsp_marken2 and groups them by artnr, getting only one resultset per article number:
fsp_marken3:
Code Snippet
SELECT ARTNR1,
    MAX(ISNULL(Marken, CHAR(NULL))) AS Marken,
    MAX(ISNULL(Markenuntergruppe, CHAR(NULL))) AS Markenuntergruppe,
    MAX(ISNULL(Markenhauptgruppe, CHAR(NULL))) AS Markenhauptgruppe
FROM dbo.FSP_MARKEN2
GROUP BY ARTNR1
What I need is to do all of this in just one SQL statement, keyed on ARTIKEL.ARTNR1 as the primary column. I don't know if this can be accomplished...
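A sketch of a single-statement version, assuming the same tables and join columns the three views use: the conditional split is done inline with CASE and aggregated per ARTNR1.

Code Snippet
SELECT a.ARTNR1,
    MAX(CASE WHEN ae.KRITERIUM = 1 THEN ae.EINORDNUNG END) AS Marken,
    MAX(CASE WHEN ae.KRITERIUM = 2 THEN ae.EINORDNUNG END) AS Markenhauptgruppe,
    MAX(CASE WHEN ae.KRITERIUM = 3 THEN ae.EINORDNUNG END) AS Markenuntergruppe
FROM dbo.ARTIKEL a
INNER JOIN dbo.AEINORD ao ON a.LFDNR = ao.ARTNR
INNER JOIN dbo.ARTEINORDNUNG ae ON ae.KRITERIUM = ao.KRITERIUM AND ae.LFDNR = ao.EINORDNUNG
WHERE ae.KRITERIUM IN (1, 2, 3) AND ae.LFDNR <> 0
GROUP BY a.ARTNR1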
I need help figuring out the proper way to write a stored procedure to get my desired data source. I will receive a parameter via Request.QueryString (@ocnId) and use it to filter my result set from the ocnIdToRatePlanOptions table. Based on the filtered ocnId, I want it to select from the corresponding rate plan table too.

So, if a querystring of 3955 is passed and found in my ocnIdToRatePlanOptions table, I want it to create a select for RatePlan1. If a querystring of 1854 is passed and found in my ocnIdToRatePlanOptions table, I want it to create a select for RatePlan2. Is this possible?

ocnIdToRatePlanOptions Table
[otrpoRefId] [int] IDENTITY(1,1) NOT NULL,
[FKocnId] [nvarchar](4) NOT NULL,
[FKrpoRefId] [int] NOT NULL,
1, 3955, 1
2, 1854, 2

RatePlan1 Table
[rp1RefId] [int] IDENTITY(1,1) NOT NULL,
[FKocnId] [nvarchar](4) NOT NULL,
[fee] [decimal](18, 2) NOT NULL
1, 3955, 1.00
2, 2350, 2.00

RatePlan2 Table
[rp2RefId] [int] IDENTITY(1,1) NOT NULL,
[FKocnId] [nvarchar](4) NOT NULL,
[q_0_50] [numeric](18, 2) NOT NULL CONSTRAINT [DF_ratePlan2_q_0_50] DEFAULT ((225)),
[q_51_100] [numeric](18, 2) NOT NULL CONSTRAINT [DF_ratePlan2_q_51_100] DEFAULT ((325)),
[q_101_150] [numeric](18, 2) NOT NULL CONSTRAINT [DF_ratePlan2_q_101_150] DEFAULT ((345)),
[q_151_200] [numeric](18, 2) NOT NULL CONSTRAINT [DF_ratePlan2_q_151_200] DEFAULT ((400)),
[q_201_250] [numeric](18, 2) NOT NULL CONSTRAINT [DF_ratePlan2_q_201_250] DEFAULT ((450)),
[q_251_300] [numeric](18, 2) NOT NULL CONSTRAINT [DF_ratePlan2_q_251_300] DEFAULT ((500)),
[q_301_400] [numeric](18, 2) NOT NULL CONSTRAINT [DF_ratePlan2_q_300_400] DEFAULT ((650)),
[q_401_600] [numeric](18, 2) NOT NULL CONSTRAINT [DF_ratePlan2_q_401_600] DEFAULT ((950)),
[q_601] [numeric](18, 2) NOT NULL CONSTRAINT [DF_ratePlan2_q_601] DEFAULT ((1.50))
1, 1854, 225.00, 325.00, 345.00, 400.00, 450.00, 500.00, 650.00, 950.00, 1.50
2, 8140, 225.00, 325.00, 345.00, 400.00, 450.00, 500.00, 650.00, 950.00, 1.50
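A sketch of one way to branch (it assumes FKrpoRefId = 1 maps to RatePlan1 and FKrpoRefId = 2 maps to RatePlan2; the procedure name is made up here):

Code Snippet
CREATE PROCEDURE dbo.GetRatePlanForOcn
    @ocnId nvarchar(4)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @planOption int;

    -- Look up which rate plan this OCN is assigned to.
    SELECT @planOption = FKrpoRefId
    FROM dbo.ocnIdToRatePlanOptions
    WHERE FKocnId = @ocnId;

    IF @planOption = 1
        SELECT rp1RefId, FKocnId, fee
        FROM dbo.RatePlan1
        WHERE FKocnId = @ocnId;
    ELSE IF @planOption = 2
        SELECT rp2RefId, FKocnId, q_0_50, q_51_100, q_101_150, q_151_200,
               q_201_250, q_251_300, q_301_400, q_401_600, q_601
        FROM dbo.RatePlan2
        WHERE FKocnId = @ocnId;
END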
I'm having problems handling a very large number of user records - about 100,000 - 150,000 records. Instead of selecting all of them at a time, how do I, for example, select 1000 of them? (e.g. get no. 1 - no. 1000, then get no. 1001 - no. 2000)???
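A sketch of one paging approach (it assumes SQL Server 2005 or later and a hypothetical Users table keyed by UserID): number the rows once and pick a 1000-row window per request.

Code Snippet
;WITH numbered AS (
    SELECT u.*, ROW_NUMBER() OVER (ORDER BY u.UserID) AS rn
    FROM dbo.Users u
)
SELECT *
FROM numbered
WHERE rn BETWEEN 1001 AND 2000   -- the second block of 1000 rows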
Hello, I'm trying to figure out how to write this query... For example,
a student attended University of Toronto in 2001 and then attended Seneca College in 2006. I want to select the educational institute that the student attended last.
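A sketch of one way to express that, using a hypothetical Education table (StudentID, Institute, YearAttended): keep the row whose year is the latest for each student.

Code Snippet
SELECT e.StudentID, e.Institute, e.YearAttended
FROM Education e
WHERE e.YearAttended = (
    SELECT MAX(e2.YearAttended)
    FROM Education e2
    WHERE e2.StudentID = e.StudentID
)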
I investigated the performance differences between two similar queries (written against AdventureWorks):
select FirstName, LastName from Person.Contact where LEFT(LastName,2) = 'Mc'
-- cost .877619
-- after index: .0931743
select FirstName, LastName from Person.Contact where LastName Like 'Mc%'
-- cost .875662 - missing index - add a nonclustered index including first name
-- after index: .0033256

I was surprised to see that the Estimated Execution Plan shows a slightly higher cost for LEFT(LastName,2) than for LIKE. The plan for LIKE recommended creating a nonclustered index including FirstName. The LEFT plan did not recommend any new indexes, yet after I created it the estimated performance improved dramatically, though not nearly as dramatically as for the LIKE query.
One lesson to take from this is that you can't always rely on the Estimated Execution Plan to tell you where new indexes should be. How accurate are the costs in general? Other lessons?
I want to merge these queries into one query. When I use UNION ALL, the sth_tarih sort order is wrong.
SELECT TOP 5 sth_stok_kod, sth_evrakno_seri, sth_evrakno_sira, cha_kod, sth_RECno, sth_tarih
FROM STOK_HAREKETLERI AS SH
INNER JOIN CARI_HESAP_HAREKETLERI AS CHH ON SH.sth_evrakno_sira = CHH.cha_evrakno_sira
WHERE sth_stok_kod = (
    SELECT sth_stok_kod FROM STOK_HAREKETLERI
    WHERE sth_RECno = (SELECT MAX(sth_RECno) FROM STOK_HAREKETLERI)
)
AND sth_evraktip = 3
ORDER BY sth_stok_kod ASC, sth_tarih DESC
COUNTING NUMBER OF SELECTS MADE

table mytable {id, data, hits}

Users view data from the table:
SELECT data FROM mytable WHERE id=1 --for example
SELECT data FROM mytable WHERE id=20 --for example
....

How do I increment the hits column without replacing the above with the below?
UPDATE mytable SET hits=hits+1 WHERE id=1;
SELECT data FROM mytable WHERE id=1
UPDATE mytable SET hits=hits+1 WHERE id=20;
SELECT data FROM mytable WHERE id=20
....

I believe triggers can't be used, as they only fire on insert/update/delete events.
I'm using SQL Server 2000 (latest patches) with ASP.
Thanks,
Alex
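If changing the call site is acceptable (which may conflict with the requirement above), one common workaround is a small stored procedure that bumps the counter and returns the data in the same round trip; a sketch:

Code Snippet
CREATE PROCEDURE dbo.GetMyTableRow
    @id int
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE mytable SET hits = hits + 1 WHERE id = @id;
    SELECT data FROM mytable WHERE id = @id;
END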
I am trying to run 3 dynamic selects from a stored proc; really only the table name is dynamic. Anyway, I'm kind of lost on how I can accomplish this. This is what I have, but it only returns the first result (that being basic):

CREATE PROCEDURE email_complexity
    @TableName VarChar(100)
AS
Declare @SQL VarChar(1000)
Declare @SQL1 VarChar(1000)
Set nocount on

SELECT @SQL = 'SELECT Count(complexity) AS basic FROM '
SELECT @SQL = @SQL + @TableName
SELECT @SQL = @SQL + ' WHERE len(complexity) = 5'
Exec ( @SQL)

SELECT @SQL1 = 'SELECT Count(complexity) AS moderate FROM '
SELECT @SQL1 = @SQL1 + @TableName
SELECT @SQL1 = @SQL1 + ' WHERE len(complexity) = 8'
Exec ( @SQL1)

Return

Is there a better way of doing this??
TIA,
Dave
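An alternative sketch (same assumptions about the table shape, plus QUOTENAME on the table name to guard against injection): build one dynamic statement that returns the counts as a single row, so only one result set comes back.

Code Snippet
CREATE PROCEDURE dbo.email_complexity_combined
    @TableName sysname
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @SQL nvarchar(max);

    SET @SQL = N'SELECT
        SUM(CASE WHEN LEN(complexity) = 5 THEN 1 ELSE 0 END) AS basic,
        SUM(CASE WHEN LEN(complexity) = 8 THEN 1 ELSE 0 END) AS moderate
    FROM ' + QUOTENAME(@TableName) + N';';

    EXEC sys.sp_executesql @SQL;
END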
Hi, I'm using Access 2007 and I'm trying to join two selects and create two new columns [Complete and NotComplete], where 'x' denotes a hit was made. I will use this later for grouping. Here is my code so far. Thanks.
SELECT tblOutlookTask.TaskSubject, tblOutlookTask.PercentComplete, tblOutlookTask.ID FROM tblOutlookTask WHERE (((tblOutlookTask.PercentComplete)=100))
SELECT tblOutlookTask.TaskSubject, tblOutlookTask.PercentComplete, tblOutlookTask.ID FROM tblOutlookTask WHERE (((tblOutlookTask.PercentComplete)<>100))
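A sketch of a single query that may avoid joining the two selects at all (Access SQL, same tblOutlookTask assumed): derive both flags with IIf so each row gets an 'x' in the appropriate column.

Code Snippet
SELECT TaskSubject,
       PercentComplete,
       ID,
       IIf(PercentComplete = 100, 'x', '') AS Complete,
       IIf(PercentComplete <> 100, 'x', '') AS NotComplete
FROM tblOutlookTask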
I want to know how efficient cross-database selects such as the following are:
select column from data.dbo.table;
The reason I'm inquiring is that I want to design a database where I'm tightly coupling the tables in several databases that all reference each other in some way, shape, or form.
If it is not very efficient, or if you know of a way to make these cross-database selects more efficient, please offer a suggestion.