Does anyone know a way to compress data between two database connections over "low bandwidth" lines in order to speed up data transfer?
The connections used are Oracle <-> SQL 2000 and SQL 2000 <-> SQL 2000.
One of the new tests that we are running has to do with load testing an application over a constrained network pipe. I like this. One of my beefs has been with stored procedures that return bloated result sets, and this new set of tests potentially gives me some more ammunition to use when I review stored procedures. A piece that I would like to produce as a result of this is an output bandwidth standard for our database servers. I have a few biased ideas, but I would like to know if any of you have similar pre-existing standards along this line. Any help?
Can anyone tell me whether there is any data compression in SQL 6.5? I have concerns about network traffic and was wondering if data compression is a function of SQL 6.5, or if it has to be coded into the actual database.
I tried to disable data compression for granular backup purposes on SQL 2014 with the following query: EXEC [dbo].[prc_EnablePrefixCompression] @online = 0, @disable = 1
I received the following error message: Msg 2812, Level 16, State 62, Line 1... Could not find stored procedure 'dbo.prc_EnablePrefixCompression'.
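For what it's worth, Msg 2812 usually just means the procedure does not exist in the current database or schema context. A hedged first check, run in each candidate database, is to look the name up directly:

  SELECT name, SCHEMA_NAME(schema_id) AS schema_name
  FROM sys.procedures
  WHERE name = 'prc_EnablePrefixCompression';

If that returns no rows anywhere, the procedure was never deployed and would need to be created before the disable call can work.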
I found something pretty interesting. I checked the size of a database before implementing compression across all of its user tables, and I checked the size of the database again after compression was in place.
I did not find any difference. If I expand a table and check Properties -> Storage, I can see that PAGE compression is implemented across all the tables, but there is no reduction in the size of the database. It still remains the same.
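That is expected behaviour: PAGE compression frees pages inside the data file, but the file keeps its allocated size until it is explicitly shrunk. A minimal sketch, with dbo.MyTable and the logical file name as placeholders:

  EXEC sp_spaceused;  -- 'unallocated space' grows after compression even though total db size does not
  ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);  -- what the wizard runs per table
  -- DBCC SHRINKFILE (N'MyDatabase_Data', 10240);  -- only if the disk space is actually needed back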
Hi, I need to convert from MS SQL to Postgres, and I need to export all MS SQL table data to a CSV or TXT file (one file per table).
Presumably, all data for one row of a table must be on one line. Then, when you copy to another database, a new line of data means a new row in the table.
However, MS SQL exports a large varchar text field as multiple lines. The data itself spans many lines, so exporting it causes the data for one row to fall across many lines.
My question: how do I escape new lines? When MS SQL exports the data, I want to replace all new lines / carriage returns with \n or with a <br> tag (since the data will be for web use).
(Please note that I am not actually handling the MS SQL database myself, so any response would be greatly appreciated, as I will advise the person in charge of the MS SQL db accordingly.)
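One possible approach, sketched under assumptions: if the export is driven by a query, the replacement can be done in the SELECT itself. The table and column names (dbo.Articles, Body) are hypothetical stand-ins:

  SELECT Id,
         REPLACE(REPLACE([Body], CHAR(13), ''), CHAR(10), '<br>') AS BodyOneLine
  FROM dbo.Articles;  -- CHAR(13) = carriage return, CHAR(10) = line feed

The person in charge of the db can point whatever export tool they use (bcp, DTS, etc.) at a query like this instead of at the raw table.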
Hi, I have a scenario where I have a string column from the database with the value "FTW*Christopher,Lawson|FTW*Bradley,James". In my report, I need to split this column at each "|" symbol and place each substring one below the other within one row of the report.
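A hedged sketch of the split step, assuming SQL Server 2016 or later where STRING_SPLIT exists (older versions need a numbers-table or XML split instead):

  DECLARE @val varchar(200) = 'FTW*Christopher,Lawson|FTW*Bradley,James';
  SELECT value FROM STRING_SPLIT(@val, '|');  -- one row per substring, ready to stack in the report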
I have a scenario where I need to find out the bandwidth between client and server. I want to find out how much data is being received from the server at a time. Any advice regarding this? Thanks, Ravi
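One rough starting point, not an exact byte counter: sys.dm_exec_connections (SQL 2005+) exposes the negotiated packet size and per-connection packet counts, which gives an upper-bound estimate of the data moved:

  SELECT session_id,
         net_packet_size,
         num_reads,   -- packets received by the server
         num_writes,  -- packets sent to the client
         num_writes * net_packet_size AS approx_bytes_sent_upper_bound
  FROM sys.dm_exec_connections;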
I have a site 2509CRUZ2 and SB1931 that have multiple Contract IDs assigned to them in the table. I need to create a script to take these duplicates and place them in their own view, but not delete them from the table.
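A minimal sketch, assuming a hypothetical table dbo.Contracts(SiteID, ContractID): a view that surfaces only the sites carrying more than one Contract ID, leaving the base table untouched:

  CREATE VIEW dbo.vw_DuplicateContractSites
  AS
  SELECT SiteID, ContractID
  FROM dbo.Contracts
  WHERE SiteID IN (SELECT SiteID
                   FROM dbo.Contracts
                   GROUP BY SiteID
                   HAVING COUNT(ContractID) > 1);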
I'm trying to produce a chart that has both actual data values from a database and matching "line of best fit" plots on the one chart. I have 4 data series: 2 with the actual data values and 2 that represent the values for the "line of best fit". What I want to do is:
Plot the actual values just as data points (joining them with lines is meaningless), and plot the "line of best fit" values as a line.
When I edit the data series on the scatter chart, I can see the "Plot data as line" option but it is "greyed" out.
Have I missed something really simple here, or is this not possible? I'm using RS2005.
I am building a SQL Server on a Windows 2000 Server box. It has a public IP and we have 4 T1 lines. After I install SQL 2000, it will run for about 30 minutes and then take up so much bandwidth that my users are unable to access the internet. I have been troubleshooting this over the last couple of days. This morning I reinstalled SQL with just the default databases, and after 45 minutes it brought the network down again. Any help on this would be greatly appreciated.
We have a customer, call them CustomerA. CustomerA has just employed a new Chief Information Officer.
The new CIO would like to make a few changes to the way we have things set up right now. Bear with me as I explain our environment.
We have the following servers:
Application Server [FSS (Dual DC Xeon 2.33GHz CPU, 4GB RAM, 3x72GB HDD RAID)]
Database Server [FSS (Dual DC Xeon 2.33GHz CPU, 4GB RAM, 3x72GB HDD RAID)]
Both of the above servers sit in a fully fault-tolerant environment with daily backups, power generators, etc. 1000Mb network; everything works 100%.
Now my question:
The new CIO would like to move the DB server out of this environment to a remote building downtown and have the application server connect to this remote DB server. Is this advisable? What are the required connection speeds so that the end user does not have to wait for a response? What is the recommended response time IIS should give?
Please also note that we are in South Africa and broadband is VERY expensive / unreliable.
The CIO would like to install a 2Mbit radio link from this building to the ISP. I'm not sure of the cost, but I do know that this type of link is very unstable. The failover will be a 512kb fixed line or Diginet line, costing about ZAR 8000.00 per month.
The application is extremely data/graph intensive, as it contains property information. I think they average about 16000 visitors daily, which is not a lot.
If anyone can give me any input I would appreciate it.
Hi. I have two SQL 2000 servers in different sites. Once a day, approximately 1 MB of data in the form of a large update has to be transferred between the two. We have use of a 2 Mb pipe between the servers, but there is no quality of service; the other users on the pipe are traders, so there must be no interruption in the quality of their bandwidth at any time.
Is there any way of throttling back the data transfer between the two servers to restrict its bandwidth use? Obviously we want to retain the maximum bandwidth on our local network. The pipe is administered by a separate company, so we do not have admin access to their gateways, routers, etc., so a solution we can implement on our database servers would be the easiest.
I am not sure if this is the right newsgroup for this, but any information would be great. Thanks, Mark
I am evaluating the possibility of replicating a database over a network to our HQ from the control site (one way). The original database is on SQL Server.
The database is likely to grow to many terabytes so we would be using transactional replication.
The table to be replicated receives about 500 records per second. The table will probably consist of a record key (8-byte int), site ID (4-byte int), reading (8-byte float), and timestamp (8-byte timestamp). All up, 28 bytes plus whatever overhead exists.
MINOR DETAIL: The HQ's copy should preferably be no more than 1 hour behind the control site's. This would be a long-term setup that would last for many years. Our link is currently about 2 MB/s and is fairly reliable.
QUESTIONS: I'm guessing that (bytes/record) * (records/second) won't be the whole story. Does anyone have an estimate of the average data-efficiency factor for transactional replication? How would I go about calculating how much bandwidth would be needed? Is there a formula hiding somewhere?
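As a rough back-of-envelope, assuming the 28-byte row estimate and an overhead factor somewhere around 2-5x for transactional replication (log records, distribution agent commands, and TCP framing; the real factor depends heavily on the workload):

  28 bytes/row * 500 rows/s = 14,000 bytes/s ≈ 0.11 Mbit/s raw
  with a 5x overhead factor:               ≈ 0.56 Mbit/s sustained

Either way that sits comfortably inside a 2 MB/s link, with headroom left for catch-up bursts after an outage.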
We are using a table that may give 1 to an unknown number of data elements (i.e. years). How can we break this up to show only three years in each row? Since we don't know the number of years, we really won't know the number of rows needed. Years are stored in their own table, one per line.
car  make    year1  year2  year3
A    volare  1995   1996   1997
a    volare  1997   1998   1999
b    toyat   1965   1966   1968
We can pivot out the first X number of years, but we don't know how many lines there are, so we don't know how many rows we will be creating.
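A minimal sketch of one way to do it, assuming hypothetical tables dbo.Cars(CarID, Make) and dbo.CarYears(CarID, [Year]): number the years per car, bucket them into groups of three, and pivot each bucket into its own output row (SQL 2005+ for ROW_NUMBER):

  WITH Numbered AS (
      SELECT CarID,
             [Year],
             ROW_NUMBER() OVER (PARTITION BY CarID ORDER BY [Year]) - 1 AS rn
      FROM dbo.CarYears
  )
  SELECT c.CarID,
         c.Make,
         MAX(CASE WHEN n.rn % 3 = 0 THEN n.[Year] END) AS year1,
         MAX(CASE WHEN n.rn % 3 = 1 THEN n.[Year] END) AS year2,
         MAX(CASE WHEN n.rn % 3 = 2 THEN n.[Year] END) AS year3
  FROM Numbered AS n
  JOIN dbo.Cars AS c ON c.CarID = n.CarID
  GROUP BY c.CarID, c.Make, n.rn / 3
  ORDER BY c.CarID, n.rn / 3;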
I have been wanting to compress my database, but I am not really sure how this is done. I was looking in Enterprise Manager, and if you right-click on the db and go to All Tasks, there is an option to shrink the database. Is this the way you would compress your database, or are there other ways of doing this?
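For reference, the Enterprise Manager shrink task wraps DBCC SHRINKDATABASE, which reclaims unused space in the files rather than compressing the data itself ('MyDatabase' is a placeholder name):

  DBCC SHRINKDATABASE (N'MyDatabase', 10);  -- shrink, leaving 10% free space in the files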
My application sends and retrieves large amounts of data to and from the database over the internet. Is there any way or method by which I can compress the query before submitting it to the database server? I would really appreciate any advice and comments.
We have run some tests on our application. The average message is about 2.5 MB, and messages are sent once every 30 minutes. That is about 3.5 GB per month for one site, and we already have three sites that will be sending these messages. This will be a VERY high load on the WAN channel and will cost us a LOT of money.
Does SQL Server replication implement any kind of compression? It seems to me that this would be very helpful for congested WAN links and costly merge replication.
Hi, I was looking for column compression functionality in SQL Server Compact, and it seems that it doesn't exist (maybe I'm wrong?). I wonder whether the SQL Server team plans to implement column compression, and if so, when we can expect it? Thank you for your help!
I have a database which is 72GB and is backed up every night as part of the maintenance plan. I have plenty of storage space, and the server that runs the database is fairly powerful (quad-processor 3.2GHz, 64-bit, 48GB RAM) and is part of an active-passive cluster. The database backup is also copied to a SAN location.
My issue is with the size of the backup file. As part of the disaster recovery plan, I need to copy this database backup file across the network to a remote site, so that in the event of a disaster at the primary site, business can continue at the remote site after restoring the backup. However, my database backup file is so big that I cannot copy it across the network in time for the next morning. I have tried using WinRAR and have managed to achieve a file about 20% of its original size, but it takes 2 hours to produce this file.
Is there any recommended reading for this type of issue? Log shipping / mirroring has been investigated and will be part of the DR model, but the 'powers that be' insist on having a full copy performed to the remote site.
Any suggestions? Thanks in advance guys n gals :-)
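One hedged suggestion: if the server is on SQL Server 2008 or later (Enterprise edition, or Standard from 2008 R2), native backup compression would replace the separate WinRAR pass entirely; the database name and UNC path below are placeholders:

  BACKUP DATABASE [MyDB]
  TO DISK = N'\\remote-site\backups\MyDB.bak'
  WITH COMPRESSION, CHECKSUM, STATS = 10;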
I want to use backup compression on a few SQL servers (2008 R2 and 2012). I have never touched compression before and have always just gone with the default. I use Ola Hallengren's scripts to do the backups which, if nothing is specifically specified, will use the server default. Backups (FULL and LOG) have been happening successfully on these servers for years.
SQL won't care that the previous backup in the set was uncompressed, or that the hourly transaction log backups previously taken were uncompressed? Restore statements (T-SQL) will be identical? From everything I am reading it is simply a case of setting the configuration and acting like nothing changed, but I just wanted to be 100% certain.
One of the servers is a SharePoint backend. All of the backup files from all servers are picked up by the Commvault backup system.
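For what it's worth, a minimal sketch of the server-wide switch, which is the setting Ola Hallengren's jobs fall back to when no compression parameter is specified; since the scripts write one file per backup, older uncompressed files sitting next to newer compressed ones is not a problem:

  EXEC sp_configure 'backup compression default', 1;
  RECONFIGURE;
  -- RESTORE DATABASE / RESTORE LOG syntax is unchanged; SQL Server detects
  -- compression from the backup file itself.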
I have a report used for return address labels, conforming to Avery 5167. I have tried designing it both as a table and as data in rectangles. Since these are return labels, there is only one instance of data, replicated across all textboxes, so the columns are of consistent length.
The report has seven columns of precise measurement: the data-filled columns are set to 1.75in, 0.25in, and between the data columns are blank columns set to 0.3125in, 0.25in. I have also tried filling data into the blank columns and setting the font color to white. All report margins are set to 0in, and the table location is 0.04167in, 0.125in. All textboxes have the properties to grow/shrink to accommodate content turned off.
The biggest issue I am having is printing from the deployed report versus printing while designing. I have adjusted for the margin glitch in RS2005 and, prior to deployment, printed successfully to the precise measurements for the label: report margins, textbox height and width, and blank column spacing.
However, after deploying the report, the data spacing seems to be compressed when printed. I am not getting the same measure between data fields as I had when designing the report. The entire printout seems to be slightly compressed.
When printed from design, columns 1, 3, 5, and 7 start respectively at 0.3125in, 2.375in, 4.3756in, 6.4375in from the edge of the page.
When printed from the deployed report, columns 1, 3, 5, and 7 start respectively at 0.3125in, 2.3125in, 4.28125in, 6.28125in.
The compression seems to increase from left to right, in that by the time the report is printed via the deployed report the "tab" area differs by 0.15625in, which creates a problem when printing to a very precise template format. It is also a headache when one thinks the report is correct before deployment.
I have a server in a datacenter (SQL 2005 Enterprise) that collects large quantities of data from our visitors. I need to set up a secondary database in our office (a different geographic location) that will serve two purposes: 1) act as a backup of the database, and 2) allow us to perform complex queries on the data.
There is no updating of the data on the secondary server, so no changes need to go back to the primary server. A database in standby mode is fine, and users on the secondary server can be disconnected when it is being updated.
I have transaction log shipping working well in a staging environment (LAN). My first question: is there any reason why transaction log shipping would not work over a WAN with a VPN connection?
And my second question: can I compress the .trn files for transport over the WAN? If I manually compress the files with WinZip, they compress by 98%. That translates into a huge saving when I am leasing a line to transport these files.
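A rough sketch of one way to automate that, assuming xp_cmdshell is enabled and 7-Zip is installed at the path shown (both are assumptions; all paths and names are placeholders). A job step after the log backup compresses the .trn before the copy job moves it:

  EXEC master..xp_cmdshell
      '"C:\Program Files\7-Zip\7z.exe" a -tzip D:\LogShip\send\mydb_tlog.zip D:\LogShip\mydb_tlog.trn';

The catch is on the secondary: a matching decompress step has to run before the restore, so the built-in log shipping copy/restore jobs would need to be customized rather than used as-is.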