SQL Server 2014 :: Run Same Script For Multiple Clients
Jul 30, 2014
I have a script that needs to be run for 50 different @ClientID values. I don't want to run this script individually for each ClientID. Would SET @ClientID IN (111, 222, 333) work? I've been told that it wouldn't. A short version of the script is.....
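SET assigns only a single scalar value, so SET @ClientID IN (...) isn't valid T-SQL. A minimal sketch of one workaround, looping a placeholder body over a list of IDs (the @ClientIDs table and the PRINT stand in for the real script):

DECLARE @ClientIDs TABLE (ClientID INT PRIMARY KEY);
INSERT INTO @ClientIDs (ClientID) VALUES (111), (222), (333); -- list all 50 IDs here

DECLARE @ClientID INT;

DECLARE client_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT ClientID FROM @ClientIDs;
OPEN client_cursor;
FETCH NEXT FROM client_cursor INTO @ClientID;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- the original script body goes here, using @ClientID
    PRINT 'Processing client ' + CAST(@ClientID AS VARCHAR(10));
    FETCH NEXT FROM client_cursor INTO @ClientID;
END
CLOSE client_cursor;
DEALLOCATE client_cursor;

If the script is a single set-based query, rewriting its WHERE clause as WHERE ClientID IN (SELECT ClientID FROM @ClientIDs) avoids the loop entirely.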
Hello, I'm looking into offering a custom data-driven web app that I wrote for an organization that I'm a part of to other similar organizations. I would be hosting the data and web application code on my dedicated server. This application uses the membership API supplied in .NET 2.0 and also has my own custom data tables within it. My question is: what would be the best way to add clients to this? Should I simply create a new database for each new client, like so: ACME_Database, ABC_Database, AAA_Database, etc.? Or should I add some sort of client "tag" (tag meaning a column within each data table) to these databases and then update my SQL queries to process them accordingly? I imagine I could do both, but I guess I need some advice from people that already have experience with providing this kind of service. Thanks! Jason
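For the "tag" option, a minimal sketch of what the shared-schema approach might look like; dbo.Orders, ClientID, and @ClientID are hypothetical names:

CREATE TABLE dbo.Orders
(
    OrderID   INT IDENTITY(1,1) PRIMARY KEY,
    ClientID  INT NOT NULL,      -- the client "tag"; every query must filter on it
    OrderDate DATETIME NOT NULL,
    Amount    MONEY NOT NULL
);

-- every query is scoped to one client
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders
WHERE ClientID = @ClientID;

The per-database option trades this WHERE-clause discipline for easier per-client backup, restore, and isolation.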
I have a SQL Server 2005 Express database that was designed to be used by one client. What is the best way to change the design so it can contain multiple clients who can only see data entered by users of their own organization? Also, I'm using the ASP.NET membership database to handle login and profiles. Can this be used with my multi-client database?
Hi all, I've got a question concerning synchronization of multiple database clients. Consider a database accessed by two clients. Is it guaranteed that if client 1 successfully commits a transaction, client 2 _instantly_ sees the changes made by client 1? Thanks and regards, Stephan. P.S.: For MS Jet databases I know there _is_ a delay between writing data to the database and being able to read the updated value from another connection, so the question refers to "real" DBMSs (esp. SQL Server) only ;)
We have a situation where multiple random client connections to SQL Server get disconnected.
The workstations are not consistent, the time is not consistent, and the functions being run are not consistent. One thing that we can reproduce is that sometimes, but not always, if a workstation runs a function that calls a specific stored procedure (SQL Native Client, ADO, SQL Server 2005 SP2), during the course of the parameter validation (after the .execute) SQL sends an ACK, RST followed by two RST TCP packets back to the client and disconnects the connection.
At the same time, several other connections across multiple computers are also dropped by SQL. However, some computers may have multiple connections to SQL and some, but not all of these will be dropped. Other computers will not have their connections dropped at all.
No SQL server errors are logged. Trace flags 4029 and 3689 have been set.
We have the network packet traces and the SQL profiler output to show this.
This is not reproducible at any other site.
Any ideas on what this could be would be greatly appreciated. -- Dana Comolli MS ISV Partner
I've been running SQL Server 2005 for over three months, and have got the database up to speed. Using SSMSE for an interface.
I have been trying to get the intranet to link to the database (ASP), but after a little research I found we needed to change the authentication from 'Windows Authentication' to a mix of Windows auth and SQL login, needing to restart the server for the changes to take effect.
After the restart things started to go pear shaped.
Before the restart we were able to run queries from different clients at the same time (4 workstations). Now we are unable to access anything on the database if a query is running on a separate client, such as viewing table properties or running another query. As I've said, we were able to do this perfectly fine before the restart.
We also use other programs to generate records for the database. These programs now have difficulty connecting to the database. We've gone through the wizards and, as far as we are aware, things should be working. We have created extra usernames/logins for these programs so that no two computers/clients/programs use the same connection login. However, they are unable to connect, even though the wizard says there is no problem connecting to the server.
Is there any *restore to factory/default settings* button? We've even tried reverting to Windows Authentication to try to solve the problem, but it didn't work. We're pretty dependent on the database for day-to-day operations.
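Since the symptom is one client's running query locking everyone else out, it may be worth checking for blocking while it happens; a small diagnostic sketch using standard DMVs:

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;   -- any rows here mean sessions are being blocked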
Hi everyone, I have one database file which needs to be accessible to both a .NET website and a VB.NET application. The application collects data from the 'real world', processes it and adds the data to the database. The website reads data from the database and displays the data in a graphical format. Now for the problem: while the application is connected to the database (adding data), the website is denied access to the database. All connections to the DB are disposed when the application is not uploading data, but the website continues to be locked out (making it pretty useless). Closing the application makes the website work fine, but of course this means no 'real world' data can be updated. Can anyone suggest what is happening here, and more importantly how to fix it? A walkthrough must be available for this problem somewhere... I just can't find it :( Thanks all, *The Stressed One*
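One common cause of this pattern (an assumption about this setup, not a diagnosis) is both processes attaching the same .mdf file directly, which takes an exclusive lock. For comparison, two hypothetical connection strings:

-- exclusive: each process attaches the file itself, locking the other out
Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MyData.mdf;Integrated Security=True;User Instance=True

-- shared: attach the database to the server once, then both apps connect by name
Data Source=.\SQLEXPRESS;Initial Catalog=MyData;Integrated Security=True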
I have a system using MS SQL 2005 & .NET 2.0. My tables don't have a rowversion column, but I heard SQL 2005 manages a row version by itself; can I use this to do "ConflictDetection"? All I am trying to do is get an error when I try to update a row which has been modified by someone else after I read the row. Thanks.
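SQL Server only exposes a usable row version if the table has a rowversion (timestamp) column, so the tables would need altering. A sketch of optimistic conflict detection with hypothetical table/column names, kept 2005-compatible:

ALTER TABLE dbo.Customer ADD RowVer ROWVERSION;  -- maintained automatically on every insert/update

DECLARE @CustomerID INT, @NewName NVARCHAR(100);
DECLARE @Name NVARCHAR(100), @RowVer BINARY(8);
SET @CustomerID = 42;            -- hypothetical key value
SET @NewName = N'New name';

-- read the row and remember its version
SELECT @Name = Name, @RowVer = RowVer
FROM dbo.Customer
WHERE CustomerID = @CustomerID;

-- update only if nobody has changed the row since the read
UPDATE dbo.Customer
SET Name = @NewName
WHERE CustomerID = @CustomerID
  AND RowVer = @RowVer;

IF @@ROWCOUNT = 0
    RAISERROR('Row was modified by another user.', 16, 1);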
I think I will have a problem with IDs when I do merge replication. The problem is, I have several clients (local shops) who can make additions to a Customer table. Other tables will link to that table (things that a customer buys) with their foreign key, so the ID of this table is very important. Twice a day, that table with customers is merged from all clients to the central server. (The table with things that the customer buys does not need to be merged, as it only remains local.)
Possible problem: if we assign ranges of identifiers to each client (thus each local shop), which is the best option we have found so far, how can we then detect the doubles? (By doubles, we mean that when the same customer is added in two local shops for the first time, we will have the same customer twice in the central database.) (Afterwards, when we delete one customer row in the central database, we might have an ID conflict when we merge back to the clients, as the customer might have already bought some goods at the two different shops in the meantime.)
What’s the common practice to resolve this? (we need to do Merge Replication, because during the day, our local shops are disconnected from the central server)
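Identity ranges prevent key collisions across shops but not business duplicates, so one common practice is to detect doubles on natural attributes after each merge; a hedged sketch, assuming hypothetical name/phone columns on the Customer table:

-- candidate duplicates: the same natural key appearing under more than one ID
SELECT LastName, FirstName, Phone, COUNT(*) AS copies
FROM dbo.Customer
GROUP BY LastName, FirstName, Phone
HAVING COUNT(*) > 1;

Repointing the child rows (the purchases) of the losing CustomerID to the surviving one before deleting the duplicate also avoids the FK conflict described above.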
We have multiple databases (1 per client) and we want to use Reporting Services for ad-hoc reporting. Our authentication is done using the client's database, so security is custom.
What is the best way to set this up? Is it possible to somehow allow users/admins to use only their database and never see any others? I know we can install a separate instance of RS and do that but I am wondering if there is a better way to run multiple clients doing different reports using exclusively their databases without having to create a new instance.
We have a query which pulls revenue by country and client for the last 3 years. Right now we have each year being reported in separate columns, but we would like the revenues for each year for each client to appear on one row. Below is the current query we have set up.
SELECT p.country_code, p.local_client_code, wwc.local_client_name,
       CASE WHEN pr.fiscal_year = 2015
            THEN SUM(pr.local_consulting_fees * er.rate) + SUM(pr.local_product_fees * er.rate)
               + SUM(pr.local_admin_fees * er.rate) + SUM(pr.local_misc_fees * er.rate)
            ELSE 0
       END AS '2015 Revenue',
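If the goal is one row per client with one column per year, a common pattern is to put the CASE inside the SUM so the query no longer needs to group by fiscal_year; a sketch reusing the columns above (the original FROM/joins aren't shown, so they are left elided):

SELECT p.country_code,
       p.local_client_code,
       wwc.local_client_name,
       SUM(CASE WHEN pr.fiscal_year = 2015
                THEN (pr.local_consulting_fees + pr.local_product_fees
                    + pr.local_admin_fees + pr.local_misc_fees) * er.rate
                ELSE 0
           END) AS [2015 Revenue]
       -- repeat the SUM(CASE ...) expression once per additional year
FROM ...   -- same tables and joins as the original query
GROUP BY p.country_code, p.local_client_code, wwc.local_client_name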
We are consolidating some old SQL Server environments from 'OLD' to 'NEW', and one of our vendors is protesting about the collation we use on our 'NEW' SQL Server.
Our old server (SQL 2005) contains databases with collation SQL_Latin1_General_CP1_CI_AS
Our new server (2014) has the standard collation Latin1_General_CI_AS
Both collations have CI and AS
From experience I know different databases can reside next to each other on the same instance.
The only problem could be ('could be'!!) the use of TempDB with a high volume of transactions to be executed in TempDB while choosing Snapshot Isolation Level ....
The application the databases belong to is very static, hardly updated, and queried only several times per hour (so no TempDB issue, I guess).
So is there any real problem with different databases using different collations running on the same instance?
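As a quick sanity check, the server and per-database collations can be compared directly; cross-collation friction with tempdb usually only appears when comparing strings against temp tables without an explicit COLLATE clause. A small sketch:

-- server (and therefore tempdb) collation
SELECT SERVERPROPERTY('Collation') AS server_collation;

-- collation of every database on the instance
SELECT name, collation_name
FROM sys.databases
ORDER BY name;

-- where a user database and tempdb differ, comparisons can be forced, e.g.:
-- WHERE t.col = x.col COLLATE DATABASE_DEFAULT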
Attempting to build a report where you can place a specific code in the parameter field and it will return all row values based on that particular code. I have a similar report that works great, but there the specific code is in just 1 column; in the one I'm trying to create, that code can potentially be in up to 20 different spots. I have the report built, but the issue I'm facing is linking the parameter. Is there a way to link 1 parameter to multiple column options?
Here's an example:
Docflo   Distribution Group   Queue       Status   Pend1   Pend2   Pend3   Pend4   Pend5
ABC      ABC1                 Catch All   NEW      123     126     125     621     129
ABC      ABC1                 Various     PENDED   621     123     872     542     630
Right now, if I were to link the parameter to the Pend1 field, I would get every line I wanted that had Pend "123", but it would not include any of the lines where Pend "123" was in Pend2, Pend3, Pend4, and so on.
How would I link the parameter to more than 1 column so it would return all rows with a specific code no matter which Pend column it was in?
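One way that works in the dataset query is to compare the parameter against all of the Pend columns at once; a sketch with hypothetical table/column names matching the example above:

SELECT Docflo, DistributionGroup, Queue, Status,
       Pend1, Pend2, Pend3, Pend4, Pend5
FROM dbo.PendReport
WHERE @PendCode IN (Pend1, Pend2, Pend3, Pend4, Pend5);
-- equivalent to: Pend1 = @PendCode OR Pend2 = @PendCode OR ... OR Pend5 = @PendCode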
I am playing around in a test environment with SQL Server 2014. I have a question about the default location of the report server databases when you have multiple report server instances installed on one server.
I did a very simple install of SQL Server 2014 with the database and Reporting Services in Native Mode (install only) features selected. Accepting the default locations, I ended up with the following locations as you would expect:
Running the Reporting Services Configuration Manager, I created the Report Server database. After creating it, the related files are located in the SQL folder below, as I would expect.
Next I installed another instance of SQL Server 2014, which I called Test, like I did above. I now have the following folder structure for the Test instance, as I expect.
I can easily query multiple servers using the multi-server query function in Central Management Server and write some of the results to logging tables. I would like to be able to do this via a scheduled job. So far I am finding that even after setting up Master/Target Servers this may not work, and the only workaround is either SSIS, SQLCMD (by basically hard-coding the server name), or possibly PowerShell.
Has anyone been successful just using standard jobs and querying against multiple servers?
If I can't save the results to a 'central' database/table (I can do this when in SSMS) but can still query against multiple servers, I was thinking I could write the results to a CSV file that an SSIS job picks up.
I have attempted using SSIS to iterate through servers and have been plagued with intermittent connection issues when using a For...Loop container.
I have a query with a huge number of CASE statements. Basically I need to shorten this query by getting rid of these hundreds of CASE statements.
Because of the nature of the application I am not allowed to use a function, and I am just wondering if there is a possible way to rewrite this with COALESCE().
SELECT CASE WHEN A.[COL_1] LIKE '%cricket%' THEN 'ck' + ',' ELSE '' END
     + CASE WHEN A.[COL_1] LIKE '%soccer%' THEN 'sc' + ',' ELSE '' END
     + ....
     + CASE WHEN A.[RESIUTIL_DESC] LIKE '%base%ball' THEN 'BB' + ',' ELSE '' END
FROM TableName A
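COALESCE alone doesn't remove the per-keyword branching, but moving the keyword-to-code pairs into a mapping table does; a hedged sketch (the #Map table is hypothetical, and FOR XML PATH keeps it 2005+-compatible):

CREATE TABLE #Map (Pattern VARCHAR(50), Code VARCHAR(5));
INSERT INTO #Map VALUES ('%cricket%', 'ck'), ('%soccer%', 'sc'), ('%base%ball', 'BB');

SELECT A.COL_1,
       STUFF((SELECT ',' + M.Code
              FROM #Map AS M
              WHERE A.COL_1 LIKE M.Pattern
              FOR XML PATH('')), 1, 1, '') AS Codes
FROM TableName AS A;

If different patterns apply to different columns (COL_1 vs. RESIUTIL_DESC above), the map can carry the target column name as an extra column and the WHERE can branch on it.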
I am trying to find books which have the same title and publisher name as at least two other books, and I need to also show the book ref (ISBN number). I have the below script so far:
SELECT isbn, title, publishername FROM book WHERE title in (SELECT title FROM book GROUP BY title HAVING count(title)>2 or count(publishername)>2) order by title;
This is a snapshot of the output:
ISBN            Title                    Publishername
0-1311804-3-6   C                        Prentice Hall          *
0-0788132-1-2   C                        OSBORNE MCGRAW-HILL    *
0-0788153-8-X   C                        OSBORNE MCGRAW-HILL    *
0-9435183-3-4   C Database Development   MIS                    *
1-5582806-2-6   C Database Development   MIS
[Code] ....
What I should be seeing is only the ones I have put an * next to. What am I missing from the script?
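The HAVING clause above counts per title only, so the publisher never enters the grouping. A sketch of grouping on both columns instead, based on the stated requirement rather than the partial snapshot ("at least two other books" means a group of three or more):

SELECT b.isbn, b.title, b.publishername
FROM book AS b
JOIN (SELECT title, publishername
      FROM book
      GROUP BY title, publishername
      HAVING COUNT(*) >= 3) AS dup
  ON dup.title = b.title
 AND dup.publishername = b.publishername
ORDER BY b.title;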
I have a requirement wherein I have to concatenate fields based on the sequence given in another table, together with their respective lengths, e.g...
Input 1:
Table A: (below are the fields and their respective values; not all fields will have values)
-----------
KSCHL   - ZIC0 (KEY)
KOTABNR - 521 (KEY)
MATNR
KUNNR --> 1234567890
LIFNR
VKORG --> a234
PRCTR
KUNRE --> 4355325363
LIFRE --> 88390234
PRODH
Table B: It contains the same fields as in Table A and holds the sequence number in which the concatenation should happen. The length field (LEN) holds the corresponding field lengths (pipe delimited) that should be considered in the concatenation.
Note: If the field length given in Table B doesn't match the actual size of the field, then the field should be filled with 2 left spaces while concatenating. E.g., in the above example, say LIFNR value = 88390234 (len = 8); then after concatenation the value should be as below:
12345678904355325363a234 88390234
Note: The fields are not constant. I have around 40 fields like that, in which any combination of fields is possible, e.g...
I am not sure which field has the value 1, 2, etc., or how many fields form the combination. It can sometimes be 3 of the 40 fields, or it can be 10 of the 40. I have to get those values dynamically and concatenate.
I can have any number of fields for the concatenation; the above example is just for 4. It should be dynamic enough to handle any number of fields.
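A dynamic-SQL sketch of the idea, assuming Table B has first been unpivoted into one row per participating field (FieldName, Seq, Len); every name below is hypothetical, and the values are assumed to be character columns no longer than their declared lengths:

-- hypothetical metadata table: one row per participating field, in sequence
CREATE TABLE dbo.FieldSequence (FieldName SYSNAME, Seq INT, Len INT);
INSERT INTO dbo.FieldSequence
VALUES ('KUNNR', 1, 10), ('KUNRE', 2, 10), ('VKORG', 3, 4), ('LIFRE', 4, 10);

DECLARE @sql NVARCHAR(MAX);

-- build one RIGHT(REPLICATE(' ', Len) + COALESCE(field, ''), Len) term per field, in Seq order,
-- so short values are left-padded with spaces to the declared length
SELECT @sql = N'SELECT ' + STUFF((
        SELECT N' + RIGHT(REPLICATE('' '', ' + CAST(m.Len AS NVARCHAR(10))
             + N') + COALESCE(' + QUOTENAME(m.FieldName) + N', ''''), '
             + CAST(m.Len AS NVARCHAR(10)) + N')'
        FROM dbo.FieldSequence AS m
        ORDER BY m.Seq
        FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 3, N'')
    + N' AS ConcatValue FROM dbo.TableA;';

EXEC sys.sp_executesql @sql;

Because the field list is read from the metadata table at run time, any combination of 3, 10, or 40 fields produces the matching concatenation without changing the code.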
I'm looking at installing 2008R2 and 2014 side by side, then using Mirroring to provide HA for the 2008R2 instance and AoHA for the 2014 instance. I'd be using the same two physical servers for both the Mirroring pair and the AoHA pair.
I have got 4 MS Access database files, which have 3 tables each, meaning 12 tables in total, which get updated with new data every evening by an external application. That is, new data gets appended to all 12 tables.
I want to have the exact same 4 databases, with 3 tables each (12 tables in total), but WITHIN MS SQL SERVER, and then update all 12 of these tables every evening with the corresponding updates from the respective tables in the MS Access databases.
I do not want to manually update all 12 tables every evening into SQL Server. Hopefully there is an easier method to do this automatically.
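One low-maintenance option is a nightly SQL Server Agent job that pulls from the Access files via the ACE OLE DB provider; a hedged sketch for one of the 12 tables (paths, table and column names are hypothetical; the provider must be installed, and 'Ad Hoc Distributed Queries' must be enabled via sp_configure):

-- append new rows from the Access table into the matching SQL Server table
INSERT INTO dbo.Table1 (Col1, Col2, Col3)
SELECT src.Col1, src.Col2, src.Col3
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'C:\Data\Database1.mdb';'Admin';'',
                'SELECT Col1, Col2, Col3 FROM Table1') AS src
WHERE NOT EXISTS (SELECT 1 FROM dbo.Table1 AS t
                  WHERE t.Col1 = src.Col1);   -- assumes Col1 is a key

Repeating this for each of the 12 tables inside one scheduled Agent job automates the whole transfer; an SSIS package is the other common route.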
I want to do something with error checking in my company. For this we have a selection of different tables and the data needs to meet various validation rules else it is classed as an error.
To deal with this I'm currently thinking of this approach:
1. Create a view pulling all of the various data together from the multiple tables.
2. Create an empty 'errors' data table.
3. Create an Excel file with a button to call a Check for Errors script.
Then in the script:
1. Clear the 'errors' data table.
2. Call multiple scripts, each of which uses the new view, applies the checks for that specific error, and writes any erroring data into the 'errors' data table, along with a text string with the unique error code for filtering/sorting purposes (a sketch of one such script is shown below).
3. After calling all the scripts, the table can be refreshed in Excel, which when used with a pivot table can show the various errors and let us drill down into all the data so we can fix them.
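A sketch of what one of the per-error scripts might look like; the view, table, columns, and the rule itself are all hypothetical:

-- error 'E042': delivery date earlier than order date (hypothetical rule)
INSERT INTO dbo.Errors (ErrorCode, RecordID, ErrorText)
SELECT 'E042', v.RecordID, 'Delivery date precedes order date'
FROM dbo.vw_AllData AS v
WHERE v.DeliveryDate < v.OrderDate;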
Also, ideally I'd like some way to write comments in an Excel column for each entry and error code, and be able to write that back into a comments table.
ID - INT
Machine - TINYINT
StartTime - DATETIME
EndTime - DATETIME
What I am trying to do is figure out how much time is used for production per day. The problem is, there are production runs that run over midnight and possibly over multiple days without ending. For example, if I have the following data:
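A sketch of splitting each run at midnight with a recursive CTE so the per-day totals come out right; #Runs is a hypothetical stand-in for the real table, with the columns listed above:

WITH Slices AS
(
    -- first slice: from StartTime to the earlier of EndTime or the next midnight
    SELECT ID, Machine, StartTime AS SliceStart,
           CASE WHEN EndTime < DATEADD(DAY, 1, CAST(CAST(StartTime AS DATE) AS DATETIME))
                THEN EndTime
                ELSE DATEADD(DAY, 1, CAST(CAST(StartTime AS DATE) AS DATETIME)) END AS SliceEnd,
           EndTime
    FROM #Runs
    UNION ALL
    -- keep slicing day by day until the run's EndTime is reached
    SELECT ID, Machine, SliceEnd,
           CASE WHEN EndTime < DATEADD(DAY, 1, SliceEnd)
                THEN EndTime ELSE DATEADD(DAY, 1, SliceEnd) END,
           EndTime
    FROM Slices
    WHERE SliceEnd < EndTime
)
SELECT CAST(SliceStart AS DATE) AS ProductionDay,
       SUM(DATEDIFF(MINUTE, SliceStart, SliceEnd)) AS MinutesInProduction  -- add Machine to the GROUP BY for per-machine totals
FROM Slices
GROUP BY CAST(SliceStart AS DATE)
ORDER BY ProductionDay
OPTION (MAXRECURSION 0);  -- allow runs longer than the default 100 recursions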
Is it possible to assign multiple columns from a SQL query to one variable? In the below query I have different variables (email, fname, month_last_taken) being assigned from the same query; can I pass all the columns to one variable only and then extract each column out of that variable later? That way I just need to write the query once in the complete block.
DECLARE @email varchar(500)
      , @intFlag INT
      , @INTFLAGMAX int
      , @TABLE_NAME VARCHAR(100)
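A scalar variable can hold only one value per assignment, but a table variable can hold the whole result set once, and the pieces can be pulled out afterwards; a sketch using the variables above (dbo.SourceTable is a hypothetical source):

DECLARE @Results TABLE (email VARCHAR(500), fname VARCHAR(100), month_last_taken VARCHAR(20));

-- run the query once
INSERT INTO @Results (email, fname, month_last_taken)
SELECT email, fname, month_last_taken
FROM dbo.SourceTable;

-- extract individual columns later as needed
-- (with multiple rows this picks an arbitrary one; loop or join on a key instead)
SELECT @email = email FROM @Results;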
We have multiple databases on a single instance in an OLTP environment. I have my data files on a separate SAN LUN from my transaction log files (and a few NDFs split out onto additional LUNs). I was wondering if there is a performance benefit to putting each LDF file on its own LUN? Or at least my few busiest LDFs?
We are currently on 2012, but I'm having to put together specs for a 2014 installation and need to answer this question without having an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (why or why not?).
We have the below query which is pulling in sales and revenue information. Since the sale is recorded in just one month and the revenue is recorded each month, we need the results of this query to list the sales amount only once, while still listing all the other revenue amounts for each month. In this example, the sale is recorded in year 2014, month 10, but there are revenues in every month as well for the rest of 2014 and the start of 2015; we only want the sales amount to appear once in the result set.
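The original query isn't shown, so all names below are hypothetical, but one common pattern is to number each sale's monthly rows and emit the sales amount only on the first:

SELECT client_code,
       fiscal_year,
       fiscal_month,
       revenue_amount,
       CASE WHEN ROW_NUMBER() OVER (PARTITION BY client_code, sale_id
                                    ORDER BY fiscal_year, fiscal_month) = 1
            THEN sales_amount
            ELSE 0 END AS sales_amount_once   -- sale appears on the earliest month only
FROM dbo.SalesRevenue;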
I have a requirement to delete all the orphaned users in our databases. The issue I am having is that when a database principal owns a schema in the DB, the user cannot be dropped.
How do I transfer the schema to dbo when I am looping over multiple databases? This is what I have so far.
declare @is_read_only nvarchar (200)

select @is_read_only = is_read_only
from master.sys.databases
where name = 'test' /* This should be a parameter value */

IF @is_read_only = 0
BEGIN
    declare @SQL as varchar (200)
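Inside each database in the loop, the schemas owned by the orphaned user can be handed to dbo before the drop; a sketch that generates the statements ('OrphanUserName' is a hypothetical placeholder):

DECLARE @fix NVARCHAR(MAX);
SET @fix = N'';

-- one ALTER AUTHORIZATION per schema the user owns
SELECT @fix = @fix
    + N'ALTER AUTHORIZATION ON SCHEMA::' + QUOTENAME(s.name) + N' TO dbo;' + CHAR(13)
FROM sys.schemas AS s
WHERE s.principal_id = USER_ID('OrphanUserName');

EXEC sys.sp_executesql @fix;
DROP USER OrphanUserName;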
I have a table #Vert with a VALUE column. This data needs to be updated into the two channel columns in the #Hori table, based on the channel number in the #Vert table.
CREATE TABLE #Vert (FILTER VARCHAR(3), CHANNEL TINYINT, VALUE TINYINT)

INSERT #Vert VALUES ('ABC', 1, 22), ('ABC', 2, 32), ('BBC', 1, 12), ('BBC', 2, 23), ('CAB', 1, 33), ('CAB', 2, 44)
-- COMBINATION OF FILTER AND CHANNEL IS UNIQUE

CREATE TABLE #Hori (FILTER VARCHAR(3), CHANNEL1 TINYINT, CHANNEL2 TINYINT)

INSERT #Hori VALUES ('ABC', NULL, NULL), ('BBC', NULL, NULL), ('CAB', NULL, NULL)
-- FILTER IS UNIQUE IN #HORI TABLE
One way to achieve this is to write two update statements. After update, the output you see is my desired output
UPDATE H SET CHANNEL1 = VALUE FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER WHERE V.CHANNEL = 1 -- updates only CHANNEL1
UPDATE H SET CHANNEL2 = VALUE FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER WHERE V.CHANNEL = 2 -- updates only CHANNEL2
SELECT * FROM #Hori -- this is the desired output
My channel numbers in the #Vert table grow (1, 2, 3, 4, ...), and so do CHANNEL3, CHANNEL4, ... in the #Hori table, so I cannot keep writing more and more update statements. One other way is to pivot the #Vert table and do a single update into the #Hori table.
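A single pass over #Vert can feed every channel column at once, and the per-channel lines can be generated with dynamic SQL as the channel count grows; a sketch for the two-channel case:

UPDATE H
SET CHANNEL1 = P.C1,
    CHANNEL2 = P.C2
FROM #Hori AS H
JOIN (SELECT FILTER,
             MAX(CASE WHEN CHANNEL = 1 THEN VALUE END) AS C1,
             MAX(CASE WHEN CHANNEL = 2 THEN VALUE END) AS C2
      FROM #Vert
      GROUP BY FILTER) AS P
  ON P.FILTER = H.FILTER;

-- for N channels, build the MAX(CASE ...) lines and SET clauses into a string
-- from the DISTINCT CHANNEL values and run it with sp_executesql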
On the ECASE table there is a trigger that gets the max value of the case_id column in ECASE for the given project, increments it by one, and inserts the result into the ECASE table.
When we insert a new record into the ECASE table, this trigger fires and fills in the case_id column value.
When I run with multiple threads, the transaction is rolled back because of the trigger. The reason is that a lock is taken on the project table while getting the max value of the case_id column for that project.
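The actual trigger text isn't shown, so the names below are assumed, but one common pattern for a hand-rolled max+1 key is to serialize the readers explicitly so two threads cannot compute the same value:

DECLARE @project INT, @next_id INT;
SET @project = 1;  -- hypothetical project key

BEGIN TRANSACTION;

SELECT @next_id = COALESCE(MAX(e.case_id), 0) + 1
FROM dbo.ecase AS e WITH (UPDLOCK, HOLDLOCK)
WHERE e.project = @project;

-- UPDLOCK + HOLDLOCK keeps a second concurrent reader waiting until this
-- transaction commits, so two inserts cannot compute the same case_id

-- ... use @next_id for the new row here ...

COMMIT;

On SQL Server 2012+, a SEQUENCE per project (or an IDENTITY column where the numbering allows it) sidesteps the locking entirely.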
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional fill algorithm, where the amount of data written to a file is proportionate to the amount of free space in that file compared to the other files in the filegroup.
So if no filegroups are explicitly created and multiple secondary files are attached to the database, is data stored and written across the multiple files by the same algorithm, or in a different way?
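Every data file always belongs to some filegroup (secondary files land in PRIMARY unless another filegroup is specified), so proportional fill still applies. The per-file fill can be watched directly, e.g.:

SELECT name,
       physical_name,
       size / 128 AS size_mb,                              -- size is in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb    -- pages actually used
FROM sys.database_files
WHERE type_desc = 'ROWS';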