Hi, we need to use a free database for a project because of a tight budget. Is MSDE OK for handling a large volume of data and 70-80 users? My understanding is that MSDE is optimized for 5 concurrent users. Is MySQL better than MSDE? Thanks, Ben
I have a summary table with a 9-field composite primary key. Every 10 minutes, my system generates 2 files of 500,000 to 750,000 rows to be summarized into this table. I first bulk insert those into a temp table, then run an inner-join UPDATE query to do the updates, followed by a left-outer-join INSERT to do the inserts. As the day goes on and the summary table accumulates millions of rows, this process gets too slow. Any ideas about causes or solutions?
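In case it helps, the two statements are essentially this pattern (sketched here with a hypothetical two-column key instead of the real nine):

-- update rows already present in the summary from the staged batch
UPDATE s
SET    s.Total = s.Total + t.Total
FROM   SummaryTable AS s
INNER JOIN #Staging AS t
       ON t.Key1 = s.Key1 AND t.Key2 = s.Key2;

-- insert the staged rows that did not match anything
INSERT INTO SummaryTable (Key1, Key2, Total)
SELECT t.Key1, t.Key2, t.Total
FROM   #Staging AS t
LEFT OUTER JOIN SummaryTable AS s
       ON s.Key1 = t.Key1 AND s.Key2 = t.Key2
WHERE  s.Key1 IS NULL;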
I am facing problems with concurrent access in SQL Server 2000. The scenario is that the DB contains one huge denormalized table containing 40 million records.
The application frequently queries this table to populate other derived tables, and the SQL queries take a long time to return results.
So while one query is executing, other users' queries go into a wait state. Please suggest how I can improve this.
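I have seen the dirty-read hint suggested for SQL Server 2000 in this kind of situation, something like the sketch below (table name is made up), but I am not sure it is the right approach since it allows reading uncommitted data:

-- read without taking shared locks, so other queries are not blocked (dirty reads possible)
SELECT col1, col2
FROM   BigDenormalizedTable WITH (NOLOCK)
WHERE  some_filter = 'some value';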
Hello, I am a master's student and I am preparing a seminar about high-volume DB performance problems. For example: if I have a table with 1,000,000 records and its size is growing exponentially over time, what problems may I face with insertion, deletion, and search in such a table? And what are the problems in processing such a DB in general?
I have been asked to design a solution for a client of mine who basically requires the daily analysis and reconciliation of the differences between 2 extremely large text files.
The files are not in an identical format but are both in some form of delimited format (one is CSV, the other is a little more complex). For the sake of this question, let's assume that I can effectively import each file into an MS SQL table.
Each file will have in excess of 100,000 rows each day (new data for each day).
Whilst I know that MS SQL easily has the capacity to store the data, is there a recommended way to tackle the potential problems? (I imagine that performance is important... they will be running the report every day.)
Or is building the solution as simple as importing the data into 2 tables, and then querying the differences and outputting as a report using Crystal?
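What I have in mind is roughly a pair of outer-join queries like this sketch (table and column names are hypothetical):

-- rows present in file A but missing from file B
SELECT a.RecordKey, a.Amount
FROM   FileA AS a
LEFT JOIN FileB AS b ON b.RecordKey = a.RecordKey
WHERE  b.RecordKey IS NULL;

-- rows present in both files but with differing values
SELECT a.RecordKey, a.Amount AS AmountA, b.Amount AS AmountB
FROM   FileA AS a
INNER JOIN FileB AS b ON b.RecordKey = a.RecordKey
WHERE  a.Amount <> b.Amount;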
I am doing research about high-volume database treatment (perhaps a database of terabyte volume), so is there any optimization or specialization for queries dealing with such a database?
I am looking to improve the performance of my SQL Server databases.
I currently have a dual location system, the database server setup is basically a quad xeon with 4gb at my office and a double xeon with 4gb at a remote webhosting location. There are separate application/web/intranet servers at each site. The two databases servers are replicated with the local server publishing to the remote server.
The relational database holds circa 26 million records, growing by a volume of 10,000 per day, and there are approximately 50,000 queries performed per day.
My theory is that the replication of the two databases is causing a slowdown; despite fast network connections (averaging 200ms between servers) the replication seems to place a large load on the local server. Would it be sensible to replicate to a second local server and then replicate to the remote server, placing any burden on the second server?
I am planning to upgrade the local server to a high capacity 4+ cpu 64bit server, my problem is that although I have noticed a slow down in performance over time, I am unsure how to go about measuring and quantifying this in order to diagnose the bottlenecks and ensure that investing in a new server would be worthwhile. Where would one be best advised to start this project?
We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database that has intensive read and write transactions. I'm wondering what the best design is to optimize performance for both reads and writes, since we have historically had constant issues with the current environment when massive updates are happening. Reads should have higher priority than writes.
I have a question on training with a large volume of data. In this case, the training will take a long while to complete; is there anything we can do to improve that? I know we obviously can't split the training dataset into different smaller datasets. What can we do to improve this?
I hope my question is clear.
Thank you very much in advance for your advice and help, and I am looking forward to hearing from you shortly.
Hi, I have to transform about 60 million rows of data and it runs so slowly that it never finishes in my testing. Should I process it chunk by chunk? Or are there any other techniques I can use (I am using a data flow task)? Thanks for any advice.
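One idea I am considering, if the transformation can be expressed in T-SQL rather than in the data flow, is a batched loop along these lines (table names are made up), but I do not know if it is the right technique:

-- copy/transform rows in batches of 100,000 until nothing is left
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO TargetTable (Id, TransformedValue)
    SELECT TOP (100000) s.Id, UPPER(s.RawValue)
    FROM   SourceTable AS s
    WHERE  NOT EXISTS (SELECT 1 FROM TargetTable AS t WHERE t.Id = s.Id);

    SET @rows = @@ROWCOUNT;
END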
The server being used is an Intel Xeon E5310 Clovertown 1.6GHz 2 x 4MB L2 Cache Socket 771 80W Quad-Core 2U Passive Processor.
The problem is that this server slows down every time about 1000 users log into a forum which the server is running. I think that the server should be able to handle this many users with no problems, but I am not sure if that is the case. The problem is probably something to do with the SQL of the server, I am guessing. The server is not mine, but I want to help the owner of the server as well as the users who are trying to access this forum but can't because of this server issue. If I were able to get to the SQL, would I be able to fix this problem? I doubt you need this, but the server URL is www.smashboards.com
I am fairly new to servers and have never really set one up myself yet. Forgive me for my lack of knowledge about them.
Is there a way to configure mirroring to go from High Availability to High Protection without having to reconfigure Database Mirroring? Using the interface in Management Studio, I can change the configuration option to High Performance, but not to High Protection, despite High Availability and High Protection both being synchronous.
If not, what are the recommended steps to reconfigure the mirror once it has already been configured? Is it just like initially setting up the mirror, or are there any shortcuts I could take? If I stop the mirroring and remove the witness, will the High Protection option be available?
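From what I have read, the modes map to the SAFETY setting plus the presence of a witness (synchronous with no witness being High Protection), so I was expecting something along these lines to work (database name is hypothetical):

-- drop the witness; a synchronous session without a witness is High Protection
ALTER DATABASE MyMirroredDb SET WITNESS OFF;

-- keep the session synchronous; SAFETY OFF would be High Performance instead
ALTER DATABASE MyMirroredDb SET PARTNER SAFETY FULL;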
I realise this is a stupid question, but I cannot really find any confirmation of this in BOL.
If you are running High Safety with automatic failover, when failover occurs does this automatically change to High Performance mode? Since for failover to occur something has happened to the primary, it will be impossible to commit transactions on the new primary and mirror synchronously, since one of them is no longer available.
So am I correct in assuming that automatic failover also automatically changes the mode to High Performance for that session?
We have a 4-processor 350 MHz NT 4.0 SQL Server. Currently we have an application that is inserting rows one at a time; each row insert is a separate transaction. Currently we are averaging 2,500 rows a second, with each row 56 bytes wide. The data and the log are on one string of RAID disks. We plan to get another controller and RAID string to separate the data and the log onto separate controllers. The developer is modifying the application to insert the data in blocks. What is the impact on the transaction log? He seems to think that by inserting rows in page-sized blocks there would be less data going into the transaction log. Why would this be so? Does anyone have any information on practical limits for inserts and log truncation with similar machine configurations? He would like to try to get around 150,000 rows a second. Has anyone accomplished inserts at this rate? With what type of machine configuration?
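To illustrate what he means by inserting in blocks, the change is roughly from one transaction per row to something like this sketch (table and columns are made up):

-- commit a whole batch of rows in one transaction instead of one commit per row
BEGIN TRANSACTION;
    INSERT INTO Readings (DeviceId, ReadAt, Value) VALUES (1, GETDATE(), 10.5);
    INSERT INTO Readings (DeviceId, ReadAt, Value) VALUES (2, GETDATE(), 11.2);
    -- ... hundreds more rows ...
COMMIT TRANSACTION;

My understanding is that the row data itself is still logged either way; the saving would mainly be in the per-transaction overhead (commit records and a log flush for every single row).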
I have not been able to solve this problem for quite a while now.
I am using SQL Server 2005.
I have a table which contains these columns: start date, end date, and volume. If the month in the start date is the same as that of the end date, the volume stays as it is; if the months in the two dates are different, then I have to distribute the volume so that some part goes to the first month and the rest to the other month. I have to somehow calculate (or prorate) the volume according to the number of days in each month.
I have to perform a query on this table so that I can group the volumes for different months and different years.
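The kind of proration arithmetic I think I need looks roughly like this sketch (column and table names are made up, and it assumes the two dates never span more than two calendar months, as described above):

SELECT StartDate, EndDate, Volume,
       -- share of the volume belonging to the start date's month,
       -- weighted by the days from StartDate up to the 1st of the next month
       Volume * 1.0
         * DATEDIFF(day, StartDate, DATEADD(month, DATEDIFF(month, 0, StartDate) + 1, 0))
         / (DATEDIFF(day, StartDate, EndDate) + 1) AS VolumeInStartMonth
       -- the remainder (Volume - VolumeInStartMonth) belongs to the end date's month
FROM   VolumeTable
WHERE  MONTH(StartDate) <> MONTH(EndDate) OR YEAR(StartDate) <> YEAR(EndDate);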
We have an existing SSRS server, and have just created a new child domain. We'll be migrating users from the parent to the child, and want to give the users of that new domain access to SSRS. In the parent domain they are able to access it, but after migration, with the child domain account, they cannot.
I have added the group CHILD\Domain Users with a system user role on SSRS, and PARENT\Domain Users was already there.
Is there any additional step I should/could take to get this active?
I have had this issue just pop up. I have local users who can connect fine, but my users that require connection by VPN cannot connect. I get the "server not available or access denied" error. I did confirm that the VPN users are connected to the network correctly and can see that their shares and mappings are correct. Any ideas? Thanking you all in advance!
I just noticed that, although my server has 2 physical volumes, my log files and DB are on the same one. How do I separate them? It's SQL Server 2000 running on Windows 2000 Server. As a side note: why does the database's Properties display in EM allow definition of multiple log files? Thank you!
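I have seen detach/move/re-attach suggested for SQL Server 2000, something like the sketch below (database name and paths are made up), but I am not sure whether that is the recommended way:

-- detach the database (with no active connections)
EXEC sp_detach_db 'MyDatabase';

-- ...move MyDatabase_log.ldf to the other physical volume in Explorer...

-- re-attach, pointing at the new log location
EXEC sp_attach_db 'MyDatabase',
     'D:\Data\MyDatabase.mdf',
     'E:\Logs\MyDatabase_log.ldf';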
Hello! Does anybody know whether MSSQL 2000 and EMC MirrorView are _certified_ for joint work? (MirrorView is an FC-based remote mirroring solution.) I mean, is it supported from the MS point of view to put MSSQL data files on EMC MirrorView volumes? For example, Oracle Corp. has an "Oracle Compatible Remote Mirroring Technologies" certification. But what about MS?
We have an application that was built and tested using SQL Server Express. One of our clients is deploying it using SQL Server Standard and plans to put the data files and log files on separate disk volumes.
In allocating the available disks to the volumes, they are looking for a recommendation on how big the log file volume should be versus the data file volume. Over time there will be several years' worth of data in the data files. I assume the log files need to be at least big enough to log all the changes between backups. Are there any general rules of thumb? Or whitepapers that discuss the trade-offs?
I am trying to revert back to Windows 7 after upgrading to Windows 10, however it will not let me and the following message occurs: "Remove new accounts. Before you can go back to a previous version of Windows, you'll need to remove any user accounts you added after the most recent upgrade. The accounts need to be completely removed, including their profiles. You created one account (NT SERVICE\MSSQLSERVER). Go to Settings > Accounts > Other users to remove these accounts and then try again." However, I did not create any new users and there are no other users listed in the Accounts section.
I have a question on how to sum data by a certain date range. Here is the data I'm looking at. I have volume measured usually (but not always) every day. I want to sum the volume from the 2nd of the month to the first of the next month. I want to do this for every month. I have the columns of my data listed below. Can anyone help me with this? I've been trying to read up on it, but I'm not finding anything.
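Would the right idea be to shift each date back by one day, so that the 2nd-through-the-1st period collapses onto a single calendar month, and then group? Something like this sketch (table and column names are made up):

SELECT YEAR(DATEADD(day, -1, MeasureDate))  AS PeriodYear,
       MONTH(DATEADD(day, -1, MeasureDate)) AS PeriodMonth,
       SUM(Volume)                          AS TotalVolume
FROM   DailyVolumes
GROUP BY YEAR(DATEADD(day, -1, MeasureDate)),
         MONTH(DATEADD(day, -1, MeasureDate))
ORDER BY PeriodYear, PeriodMonth;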
My day started with loading a huge volume of data, and my data flow task failed to do so.
My data flow has a flat file connected to an OLE DB target. This is a one-to-one mapping. My source file contains 50 lakh (5 million) records and is 500 MB in size.
I'm processing the data with all the default buffer settings. I have 4 CPUs in my server.
The system process DTSDebug.exe is using more than 2 GB of memory. My average CPU usage is 70%, with one of the CPUs hitting 100% utilization.
I'm very new to SSIS, so please give me some information on how to set my buffers, and is there any PDF on performance tuning in SSIS?
Is there any bulk load transformation in SSIS for loading into DB2 UDB?
I am in the process of choosing between SQL Server Workgroup and Standard Edition. I see the differences in features on the comparison table, but do not see any references to differing capabilities in handling transactions.
Are there any differences between Workgroup and Standard in terms of transaction/data handling capabilities? i.e., does Standard have the superior capability of handling X times more TPMs than Workgroup?
If not, am I correct to assume that this is totally determined by hardware configuration (# of CPUs, processor speed, HD speed, RAM)?
If the data volume / transaction handling is solely determined by hardware configuration, and I know the # of transactions and amount of reads/writes per second, where would be a good reference to find out what kind of hardware configuration I need? (Ideally, once I know the hardware configuration, I guess I would be able to determine whether I need Workgroup or Standard.)
We are creating an enterprise application for fuel, and I am fighting with my DBA about the proper way to store volume and currency in the database. We have 2 main arguments. The first argument is whether we should store costs in the database in $ and convert in the presentation layer, or store the amount and currency in the database. We sell product from the US in dollars, but depending on the customer we may invoice in euros. Our second argument is the same, except with volume and UOM. We often purchase product by BBL but sell/transfer by gallon or ton.
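To make my side of the second option concrete, what I have in mind is a schema roughly like this hypothetical sketch, with any conversion to a base currency or UOM done in views or the presentation layer:

-- keep the transacted amount with its currency, and the quantity with its unit of measure
CREATE TABLE FuelTransaction (
    TransactionId  int IDENTITY(1,1) PRIMARY KEY,
    Amount         decimal(19,4) NOT NULL,   -- as invoiced
    CurrencyCode   char(3)       NOT NULL,   -- e.g. 'USD', 'EUR'
    Quantity       decimal(19,6) NOT NULL,   -- as measured
    UomCode        varchar(10)   NOT NULL    -- e.g. 'BBL', 'GAL', 'TON'
);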
Not really a question. Just looking for people with experience with SB in a highly transactional environment passing a lot of messages. What kind of challenges have you run into when processing the messages? I am currently writing an SB application for a large financial institution, and want to get some ideas of the challenges I might face when volume gets really high (a couple of million transactions per day). Thanks, Tim
Hi all, I've got two tables called "webusers" (id, name, fk_country) and "countries" (id, name). Meanwhile, I have a search page where I can fill in a form to search for users. In the dropdown to select the country I included an option called "all countries". Now the problem is: how can I make a stored procedure that applies a restriction on fk_country depending on the submitted fk_country parameter? It should be something like SELECT * FROM webusers (if @fk_country > 0, which is the value for "all countries") { WHERE fk_country = @fk_country }. Who has an idea how to solve this problem?
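The only way I can think of is folding the check on the parameter into a single WHERE clause, roughly like this sketch (treating 0 as the "all countries" value), if that is valid:

CREATE PROCEDURE dbo.SearchWebUsers
    @fk_country int = 0   -- 0 = "all countries"
AS
BEGIN
    SELECT w.id, w.name, c.name AS country_name
    FROM   webusers  AS w
    LEFT JOIN countries AS c ON c.id = w.fk_country
    -- when @fk_country is 0, every row passes; otherwise only the selected country
    WHERE  (@fk_country = 0 OR w.fk_country = @fk_country);
END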
4-5 years ago, I started my career as a translator, translating with the MetaTexis CAT (Computer Aided Translation) software.
It's amazing to see all the improvements that have been made until now, but recently I found some problems regarding databases:
I heard that Access databases are limited to a volume of 2 GB and that SQL 2005 databases are limited to 4 GB, but I think this information is wrong, or at least I was only able to import 10% of that amount.
In other words, 2 GB shouldn't correspond to a database with a volume of only 125,000 segments/sentences (for Access), nor 4 GB to a volume of 250,000 (for SQL 2005).
Concretely, my "mega.mxtm" database is "only" 359 MB and suddenly it refuses to import more sentences. Is that normal? (Microsoft SQL 2005)
Question: Is the new SQL 2008 also limited? Is there any way to "free" or increase the volume capacity?
Point 2: Since I updated SQL 2005 to 2008, I am not able to open the "old" "mega.mxtm" anymore... :(
Does anyone have an idea how to split one report (a report subscribed for automatic delivery) into many files based on the volume of the data retrieved (records 1-50 in the first file, 51-100 in the second file, etc.)?
Say, for example, I have an employee and a department table. The report is designed to provide a list of employees for a given department. If the department contains more than 50 employees, then the report is exported as an individual file for every 50 employees.
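The only approach I can think of is to compute a chunk number per employee with ROW_NUMBER() and drive a data-driven subscription (one delivery per distinct chunk) off it, roughly like this sketch with made-up table and column names:

SELECT e.EmployeeId,
       e.EmployeeName,
       d.DepartmentName,
       -- 1 for rows 1-50, 2 for rows 51-100, and so on, per department
       (ROW_NUMBER() OVER (PARTITION BY e.DepartmentId
                           ORDER BY e.EmployeeName) - 1) / 50 + 1 AS ChunkNo
FROM   Employee   AS e
JOIN   Department AS d ON d.DepartmentId = e.DepartmentId;

The report would then take ChunkNo as a parameter, but I am not sure this is how people normally handle it.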