One of the new tests that we are running has to do with load testing an application over a constrained network pipe. I like this. One of my beefs has been with stored procedures that return bloated result sets, and this new set of tests potentially gives me more ammunition to use when I review stored procedures. One piece I would like to produce as a result is an output bandwidth standard for our database servers. I have a few biased ideas, but I would like to know if any of you have similar pre-existing standards along this line. Any help?
Does anyone know a way to compress data between two database connections over "low bandwidth" lines in order to speed up data transfer? The connections used are Oracle<->SQL2000 and SQL2000<->SQL2000.
I have a scenario where I need to find out the bandwidth between client and server. I want to find out how much data is being received from the server at a time. Any advice regarding this? Thanks, Ravi
I am building a SQL Server on a Windows 2000 Server box. It has a public IP and we have 4 T1 lines. After I install SQL 2000, it will run for about 30 minutes and then take up so much bandwidth on the network that my users are unable to access the internet. I have been troubleshooting this over the last couple of days. This morning I reinstalled SQL with just the default DBs, and after 45 minutes it brought the network down again. Any help on this would be greatly appreciated.
We have a customer, call them CustomerA. CustomerA has just employed a new Chief Information Officer.
The new CIO would like to make a few changes to the way we currently have things set up. Bear with me as I explain our environment.
We have the following servers:
Application Server [FSS (Dual DC Xeon 2.33GHz CPU, 4GB RAM, 3x72GB HDD RAID)]
Database Server [FSS (Dual DC Xeon 2.33GHz CPU, 4GB RAM, 3x72GB HDD RAID)]
Both of the above servers sit in a fully fault-tolerant environment with daily backups, power generators, etc. 1000Mb network; everything works 100%.
Now my question:
The new CIO would like to move the DB server out of this environment to a remote building downtown, and have the application server connect to this remote db server. Is this advisable? What are the required connection speeds, so that the end user does not have to wait for a response? What is the recommended response time IIS should give?
Please also note that we are in South Africa and broadband is VERY expensive / unreliable.
The CIO would like to install a 2Mbit radio link from this building to the ISP. I'm not sure of the cost, but I do know that this type of link is very unstable. Their failover will be a 512kb fixed line or Diginet line, costing about ZAR 8000.00 per month.
The application is extremely data- and graphics-intensive, as it contains property information. I think they average about 16,000 visitors daily, which is not a lot.
If anyone can give me any input, I would appreciate it.
Hi. I have two SQL2000 servers in different sites. Once a day, approximately 1M of data in the form of a large update needs to be transferred between the two. We have use of a 2M pipe between the servers, but there is no quality of service, and the other users on the pipe are traders, so there must be no interruption in the quality of their bandwidth at any time. Is there any way of throttling back the data transfer between the two servers to restrict its bandwidth use? Obviously we want to retain the maximum bandwidth on our local network. The pipe is administered by a separate company, so we do not have admin access to their gateways, routers, etc., so a solution which we can implement on our database servers would be the easiest. I am not sure if this is the right newsgroup for this, but any information would be great. Thanks, Mark
I am evaluating the possibility of replicating a database over a network to our HQ from the control site (one way). The original database is on SQL Server.
The database is likely to grow to many terabytes so we would be using transactional replication.
The table to be replicated receives about 500 records per second. The table will probably consist of a record key (8-byte int), site ID (4-byte int), reading (8-byte float), and timestamp (8-byte timestamp). All up, 28 bytes + whatever overhead exists.
MINOR DETAIL: The HQ's copy should preferably be no more than 1 hour behind the control site's. This would be a long-term setup that would last for many years. Our link is currently about 2 MB/s and is fairly reliable.
QUESTIONS: I'm guessing that (bytes/record) * (records/second) won't be the whole story. Does anyone have an estimate of the average data efficiency factor for transactional replication? How would I go about calculating how much bandwidth would be needed? Is there a formula hiding somewhere?
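As a rough starting point (only the raw payload term of the calculation, not a complete answer): 500 records/second x 28 bytes/record = 14,000 bytes/second, or roughly 0.11 Mbit/s, before any replication overhead. Transactional replication adds per-command and per-transaction metadata on top of that, so the on-the-wire figure is typically some multiple of the raw payload; the only reliable way I know of to pin that multiple down is to replicate a representative load to a test subscriber and measure the actual traffic against the 2 MB/s link.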
I am writing a client application that will be connected to a data source. However, I do not know what the data source will be (it could be MySQL, SQL Server Express, MS Access, ODBC, etc.). I would like to write my data access queries using a standard that will be accepted by various data sources. What is such a standard, and where can I read more about it?
I would like to create a new database and follow some standard. I am hoping that there is some ANSI or Microsoft documentation on a NAMING standard for creating objects in the database, e.g. table name "tblEmployees", column name "txtLastName". Is there any GOOD documentation on creating a database using a PROVEN and ACCEPTED standard?
Are there any SQL commenting standards? For example, some programming languages have commenting standards, like Javadoc in Java. Just wondering if there is any standard for T-SQL comments. Thanks.
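For reference, T-SQL itself only defines two comment forms (there is nothing Javadoc-like built in); anything richer is team convention. A sketch of one common header-comment convention, where the procedure name and field layout are hypothetical rather than any official standard:

-- Single-line comment
/* Block comment */

/**********************************************************
 Procedure:   dbo.usp_GetCustomerOrders   (hypothetical)
 Description: Returns open orders for a given customer.
 Parameters:  @CustomerID int
 History:     2010-01-15  ABC  Created
**********************************************************/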
I was wondering if there are generally accepted naming standards for SQL Server objects (tables, stored procedures, triggers, views, etc.) that might be available somewhere on the web. I was also wondering whether most DBAs prefix object names, like "sp_", or suffix them, like "Customer_T"? Any opinions?
Can anyone help me translate this statement from using the legacy outer joins to the SQL-92 standard?

Select CA.* From Customer C, Shipper S, Customer_Order CO, Cust_Address CA
Where CA.Customer_ID =* CO.Customer_ID
and CA.Addr_No =* isnull(S.Ship_To_Addr_No, CO.Ship_To_Addr_No)
and C.ID = CO.Customer_ID
and (S.Shipped_Date between '1/1/2003' and '12/31/2003')

Try as I may, I simply can't find a working left, right, or full outer join statement that would give me the same results as the above statement gives. I thought this was supposed to work, but I don't know why it doesn't. Anybody care to try, or perhaps tell me why the statement below doesn't work:

Select CA.* From Customer C,
((Customer_Order CO left outer join Cust_Address CA on CA.Customer_ID = CO.Customer_ID)
left outer join Shipper S on CA.Addr_No = isnull(S.Ship_To_Addr_No, CO.Ship_To_Addr_No))
Where C.ID = CO.Customer_ID
and (S.Shipped_Date between '1/1/2003' and '12/31/2003')

Thanks,
Tony
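One possible reason the second attempt returns different results: the Shipped_Date range sits in the WHERE clause, so it is applied after the outer join and discards the NULL-extended Shipper rows, effectively turning the outer join back into an inner join. Below is a sketch of a rewrite with that predicate moved into the ON clause, assuming the intent is to preserve Customer_Order rows (as in the second attempt); whether this reproduces the legacy statement's exact results still needs to be verified against the data, since the old =* semantics were ambiguous:

Select CA.*
From Customer C
inner join Customer_Order CO on C.ID = CO.Customer_ID
left outer join Cust_Address CA on CA.Customer_ID = CO.Customer_ID
left outer join Shipper S on CA.Addr_No = isnull(S.Ship_To_Addr_No, CO.Ship_To_Addr_No)
    and S.Shipped_Date between '1/1/2003' and '12/31/2003'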
I was told that XML names must not start with the letters "xml" (or XML, Xml, etc.), but I was able to store such data in SQL 2005. Any thoughts on this one?
Are there common naming standards for SQL tables and stored procedures? I'm creating a table for target audiences and was going to set it up like this:
This table is really straightforward, but let me know if you would change anything. I want to use all of the most common naming standards throughout my database.
Maybe I didn't search hard enough in BOL, but does Microsoft have a documented set of standards regarding custom component development for SSIS? Things like:
- extend this base class, implement this interface
I have an application that stores XML data in an unusual manner: basically a SQL key column and an XML string. The XML string is not really standard XML, but it is what it is, and I'm stuck with it. It is in this format:
<row key="Value.01" xml:space="preserve"><c1>FirstName</c1><c2>LastName</c2><c3>10 Street Address, City ST 012345-1234</c3><c4>5</c4><c5>50</c5><c6>500</c6></row>
I am able to pull values out via:

SELECT p.value('(./c1)[1]', 'VARCHAR(8000)') AS c1,
       p.value('(./c2)[1]', 'VARCHAR(8000)') AS c2
FROM dbo.UserXMLTable
CROSS APPLY XMLRECORD.nodes('/row') t(p)
WHERE p.value('(./c1)[1]', 'VARCHAR(8000)') LIKE 'First%'
However, I've been struggling to select rows with a LIKE clause directly against the column. Something like:
SELECT * FROM dbo.F_UserXMLTable where XMLRECORD.value('(./c1)[1]', 'VARCHAR(8000)') like 'First%'
I have tried a number of permutations of the XML syntax, but so far I have been stumped.
Please note that "<row key="Value.01" xml:space="preserve">" has a space in the name 'row key'.
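One thing worth checking in the query that fails: when value() is called directly on the XML column, the path is evaluated from the document node, so '(./c1)[1]' looks for a top-level c1 element and comes back NULL for every row. The working query avoids this because nodes('/row') has already moved the context inside the row element. A sketch of the same filter with the full path, using the table and column names from the post:

SELECT *
FROM dbo.F_UserXMLTable
WHERE XMLRECORD.value('(/row/c1)[1]', 'VARCHAR(8000)') LIKE 'First%'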
I'm having a difficult time setting up a development environment and a set of standards for SSIS package development.
First of all, you can't run the dataflow object "SQL Server Destination" in BIDS because BULKCOPY can only be run from the actual server. So how do you test/debug a package with this object in it?
Second of all, if you create an SSIS package on a developer computer in BIDS and then import it into the SSIS package store on your development SQL Server, you can't run the package from Management Studio on the developer PC. You get the error "DTS_E_PRODUCTLEVELTOLOW" when it tries to run any of the SSIS tasks. Do I have to have SSIS installed on the developer client machine? How do I do that without installing a full server instance on each client machine (not to mention the license issues)?
Lastly, what protection level would you suggest using for production? We are having issues with ODBC connection password decryption, and thus package steps failing, when using "EncryptSensitiveWithUserKey". What exactly does this protection level do? Our network is physically very locked down, so we aren't worried about SSIS package security too much; we are just looking for a way for packages to work reliably without having to set up complicated security scenarios.
Visual Studio provides IntelliSense and targeted standards compliance code checkers that are extremely useful when writing code. A good example is for web pages targeted to XHTML 1.0 transitional versus strict versus XHTML 1.1.
Is there anything comparable for SQL coding in any of the Microsoft products whether Visual Studio or SQL Server Management Studio or any other development environment?
I'm looking for IntelliSense that can be targeted to one of three alternative configurations: (1) ANSI SQL-92 only, or (2) ANSI SQL-99 only, or alternatively, (3) T-SQL with proprietary Microsoft features/functions (ie, not ANSI compliant in the sense that it is no longer portable per ANSI criteria - it will break when ported due to presence of Microsoft proprietary features/functions).
If standards targeted IntelliSense is not available in any of the Microsoft products for SQL development, is there any third-party product that provides this capability?
I was told that AES 128/SHA1 is supported for SQL Server Compact 3.5. The problem is that I couldn't find any product literature from Microsoft that specifies exactly that, and my client wants us to provide proof of it.
I hope to get an endorsement from the forum here, and it would be great if someone could point me to some Microsoft resources that specify the support clearly.
I am sure these questions have been asked before, but I was not able to find useful information.
1. I am looking for an SSIS standards document or source: a document developers can use when developing SSIS packages. It should cover how to name each container and task, how to organize things, and the basics of SSIS. Is there some kind of source where I can find this information? We are starting to migrate from Informatica to SSIS, but before we do that we would like to put standards in place so that all SSIS development is consistent.
2. An SSIS project documentation template that we can use to document each project. Is there anything out there that we can follow to document each of our projects?
Hello all, I am migrating data from one database to another. I am using a Multicast to separate (legal street, legal mail and legal city) and (mail_street, mail_state, mail_zip, mail_city). Later, after a UNION of the above, I am doing two lookups, as I had to get the contact ID and customer ID from two other tables. In the UNION I am matching (mail street to legal street) and so on.
I am getting double the data in the output: my input is 1,000,000 rows and I'm getting 2,000,000.
I am setting up P2P replication for a high-latency environment (although at 100Mbit per second, bandwidth is not a serious issue).
I have noticed that at high load, the bandwidth between the Distribution and Subscriber servers maxes out at 1Mbit per second (our network bandwidth is 100Mbit per second).
This causes transactions to 'back up' and when the load reduces they 'catch up' and synchronise.
Does anyone know where this limitation of 1Mbit/s is being enforced and what the way around it is?
I have seen the -SubscriptionStreams NN parameter, but this is not applicable in P2P replication.
Hi, good day. Can we output data from a SQL query into a file? For example, I have a SELECT SQL statement which captures many records, and I would like to output it into a tab-delimited text file format:
select hierarchy.hiername, devicefail.deviceid,
       sum(DATEDIFF(minute, started, ended)) as duration,
       100 - SUM(datediff(minute, started, ended)) / (672 * 60.0000) * 100 AS Uptime
from devicefail
LEFT JOIN device ON device.deviceid = devicefail.deviceid
LEFT JOIN hierarchy ON device.hierlevel = hierarchy.hierlevel
where devicefail.started >= '2013-02-01 00:00:00'
  and devicefail.ended <= '2013-02-28 23:59:59'
  and devicefail.componentid like 201 or devicefail.componentid like 0
group by devicefail.deviceid, hierarchy.hiername, devicefail.componentid
order by hiername
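One common way to get a tab-delimited file from a query like this is the bcp utility in queryout mode; with -c the output is character data and the field terminator is a tab by default. A minimal sketch, assuming Windows authentication (-T), a server named MYSERVER, and that the tables live in a database called YourDb (these names are placeholders, and the quoted query would be the full statement above on one line):

bcp "select deviceid, started, ended from YourDb.dbo.devicefail" queryout "C:\temp\uptime.txt" -c -S MYSERVER -T

The -t switch can override the field terminator if something other than a tab is needed.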
I run a SQL query to select a few columns of data using a SELECT statement. I want the output to be stored in a new table which can be defined in the SQL statement. How can I do it?
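If the goal is a table that is created by the statement itself, SELECT ... INTO is the usual T-SQL approach; the new table's columns and types are taken from the select list. A minimal sketch with made-up table and column names:

-- Creates dbo.NewTable from the query's column list and populates it in one step
SELECT col1, col2, col3
INTO dbo.NewTable
FROM dbo.SourceTable
WHERE col1 > 0;

If the target table already exists, INSERT INTO dbo.ExistingTable (col1, col2, col3) SELECT ... is used instead.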
I have this script which outputs a combination of financial data. I recently joined a table that includes a narrative column, and when I run the query it runs perfectly. However, when I copy and paste the output into Excel, not all the data shows up. By process of elimination, it is because Excel doesn't like the narrative column, and not all data is copied across, which is very annoying. If I remove the narrative column, all the data copies over correctly.