Volume Distribution

Sep 11, 2006

Hi Guys

I have not been able to solve this problem for quite a while now.

I am using SQL Server 2005.


I have a table that contains these columns: start date, end date, and volume.
If the month of the start date is the same as that of the end date, the volume stays as it is. If the two dates fall in different months, I have to distribute the volume so that part of it goes to the first month and the rest to the other month; that is, I have to prorate the volume according to the number of days that fall in each month.


I then have to query this table so that I can group the volumes by month and year.

Hope I have made this quite clear.
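In case it helps, here is a minimal sketch of one way to do this on SQL Server 2005. The table and column names (dbo.VolumeTable, StartDate, EndDate, Volume) are assumptions, and it assumes a row spans at most two calendar months, as described:

WITH Split AS
(
    -- portion of the volume that belongs to the start date's month
    SELECT YEAR(StartDate) AS Yr, MONTH(StartDate) AS Mth,
           Volume * DATEDIFF(day, StartDate,
                        CASE WHEN DATEDIFF(month, StartDate, EndDate) = 0
                             THEN DATEADD(day, 1, EndDate)                             -- whole span, one month
                             ELSE DATEADD(month, DATEDIFF(month, 0, StartDate) + 1, 0) -- first of next month
                        END) * 1.0
                  / (DATEDIFF(day, StartDate, EndDate) + 1) AS Vol
    FROM dbo.VolumeTable
    UNION ALL
    -- the remainder goes to the end date's month (only rows crossing a month boundary)
    SELECT YEAR(EndDate), MONTH(EndDate),
           Volume * (DATEDIFF(day, DATEADD(month, DATEDIFF(month, 0, EndDate), 0), EndDate) + 1) * 1.0
                  / (DATEDIFF(day, StartDate, EndDate) + 1)
    FROM dbo.VolumeTable
    WHERE DATEDIFF(month, StartDate, EndDate) > 0
)
SELECT Yr, Mth, SUM(Vol) AS MonthVolume
FROM Split
GROUP BY Yr, Mth
ORDER BY Yr, Mth;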





Thanks

Mita




Distribution Clean Up: Distribution Job Not Removing Snapshots

Jul 6, 2007

On SQL Server 2005 SP2, with the Publisher and Distributor on the same instance, my old snapshots are not being cleaned up.



The following error is in the agent history:

Executed as user: Domain\MyUser. Could not remove directory '\\vmsql01\ReplData\unc\Publication_TRANSACTIONAL\20070702104416'. Check the security context of xp_cmdshell and close other processes that may be accessing the directory. [SQLSTATE 42000] (Error 20015). The step failed.



xp_cmdshell is enabled and I can run commands like:

exec master.dbo.xp_cmdshell 'md c:\TestFolder'

The permissions on the snapshot share and file system give Domain\MyUser full control.

I have logged into the machine as this user and can remove the snapshots manually, so it does not seem to be a permissions issue.
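One thing worth checking: when the caller is not sysadmin, xp_cmdshell runs under the proxy account rather than the caller's account, so the account deleting the folder may not be the one tested with. A quick way to see the actual security context:

-- Shows the Windows account xp_cmdshell actually executes under
EXEC master.dbo.xp_cmdshell 'whoami';

-- If the cleanup job runs under a non-sysadmin context, a proxy account must be set (SQL 2005):
-- EXEC sp_xp_cmdshell_proxy_account 'DOMAIN\SomeUser', 'password';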





On other machines I do not get any errors, but the snapshot folder is still not cleaned up.

Any suggestions as to what the problem could be?



Thanks,

Amy


Volume Inserts

Jan 24, 2000

We have a 4-processor 350 MHz NT 4.0 SQL Server. Currently we have an application
that is inserting rows one at a time, with each row insert a separate transaction.
Currently we are averaging 2,500 rows a second, each row 56 bytes wide.
The data and the log are on one string of RAID disks. We plan to get another controller
and RAID string to separate the data and the log onto separate controllers.
The developer is modifying the application to insert the data in blocks. What is the
impact on the transaction log? He seems to think that by inserting rows in blocks
there would be less data going into the transaction log. Why would this be so?
Does anyone have any information on practical limits for inserts and log truncation
with similar machine configurations? He would like to get to around 150,000 rows a second.
Has anyone accomplished inserts at this rate? With what type of machine configuration?
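For what it's worth, the saving usually comes from fewer commits (each commit forces a synchronous log flush) rather than from fewer logged bytes; the row data is logged either way. A sketch of the difference, using a hypothetical table dbo.Feed:

-- Today: one transaction, and one log flush, per row
INSERT INTO dbo.Feed (Col1, Col2) VALUES (@v1, @v2)

-- Batched: one commit, and one log flush, per N rows
BEGIN TRAN
    INSERT INTO dbo.Feed (Col1, Col2) VALUES (@v1, @v2)
    -- ... repeat for the rest of the batch ...
COMMIT TRAN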


High Volume I/O!!!

Feb 28, 2008

I have a summary table with a 9-column composite primary key. Every 10 minutes, my system generates 2 files of 500,000 to 750,000 rows to be summarized into this table. I first bulk insert them into a temp table, then run an inner-join UPDATE query to do the updates, followed by a left-outer-join INSERT to do the inserts. As the day goes on and millions of rows accumulate in the summary table, this process becomes too slow. Any ideas about causes/solutions?
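For reference, a sketch of the pattern described above with hypothetical names (#stage is the bulk-inserted temp table, K1..K9 stand for the 9 key columns); a clustered index on the temp table matching the key is usually the first thing to try as the target grows:

CREATE CLUSTERED INDEX IX_stage ON #stage (K1, K2 /* , ... K9 */);

UPDATE s
SET    s.Total = s.Total + t.Total
FROM   dbo.Summary s
JOIN   #stage t ON t.K1 = s.K1 AND t.K2 = s.K2 /* ... all 9 key columns */;

INSERT INTO dbo.Summary (K1, K2, /* ... */ Total)
SELECT t.K1, t.K2, /* ... */ t.Total
FROM   #stage t
LEFT JOIN dbo.Summary s ON t.K1 = s.K1 AND t.K2 = s.K2 /* ... */
WHERE  s.K1 IS NULL;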

RLiss


How Do I Move The Transaction Log To Another Volume?

Jul 17, 2007

I just noticed that although my server has 2 physical volumes, my log files and DB are on the same one. How do I move the log? It's SQL Server 2000 running on Windows 2000 Server. As a side note: why does the database's Properties display in EM allow definition of multiple log files? Thank you!
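On SQL 2000 the usual route is detach, move the file in Windows, reattach. A sketch, with example names and paths:

EXEC sp_detach_db 'MyDB';
-- copy the .ldf to the other volume, then:
EXEC sp_attach_db 'MyDB',
     'D:\Data\MyDB.mdf',
     'E:\Logs\MyDB_log.ldf';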


MSSQL 2000 On EMC MirrorView Volume

Jul 20, 2005

Hello! Does anybody know whether MSSQL 2000 and EMC MirrorView are _certified_ for joint work? (MirrorView is an FC-based remote mirroring solution.) I mean, is it supported from the MS point of view to put MSSQL data files on EMC MirrorView volumes? For example, Oracle Corp. has an "Oracle Compatible Remote Mirroring Technologies" certification. But what about MS?


Volume Size Recommendations

Apr 24, 2007

We have an application that was built and tested using SQL Server Express. One of our clients is deploying it using SQL Server Standard and plans to put the data files and log files on separate disk volumes.



In allocating the available disks to the volumes, they are looking for a recommendation on how big the log file volume should be versus the data file volume. Over time there will be several years' worth of data in the data files. I assume the log files need to be at least big enough to log all the changes between backups. Are there any general rules of thumb? Or whitepapers that discuss the trade-offs?
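There is no universal ratio; the log volume needs to hold the changes between log backups (plus operations like index rebuilds, which can log close to the size of the table). One empirical approach is to run a representative workload on a test system and watch actual log usage:

-- Percentage of each log file actually in use, per database
DBCC SQLPERF(LOGSPACE);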



Thanks in advance...




Installing SQLServer In A Raw Volume

Jan 29, 2007

Hi All
I would like to know whether SQL Server can be installed on a raw volume or not. Is there a best practice guide for this?
Regards,
Vijay


Summing Volume By Date Range

Jun 16, 2008

I have a question on how to sum data by a certain date range. Here is the data I'm looking at: I have volume measured usually (but not always) every day. I want to sum the volume from the 2nd of the month to the 1st of the next month, and I want to do this for every month. The columns of my data are listed below. Can anyone help me with this? I've been trying to read up on it, but I'm not finding anything.

Entity   Date Measured   Volume
1        4/01/2008       5
1        4/02/2008       4
1        4/03/2008       6
1        4/04/2008       5
1        4/08/2008       7
1        4/12/2008       8
1        4/13/2008       5
1        4/14/2008       7
1        4/25/2008       8
1        4/30/2008       9
1        5/01/2008       6
1        5/02/2008       8
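A sketch of one approach: shifting each date back one day makes the "2nd through the 1st of the next month" window line up with an ordinary calendar month, so a plain GROUP BY works. Table and column names (dbo.Measurements, DateMeasured) are assumptions:

SELECT Entity,
       YEAR(DATEADD(day, -1, DateMeasured))  AS Yr,
       MONTH(DATEADD(day, -1, DateMeasured)) AS Mth,
       SUM(Volume) AS TotalVolume
FROM   dbo.Measurements
GROUP BY Entity,
         YEAR(DATEADD(day, -1, DateMeasured)),
         MONTH(DATEADD(day, -1, DateMeasured))
ORDER BY Entity, Yr, Mth;

With the sample data above, 4/01 falls into the March window and 5/01 into the April window, which matches the 2nd-to-1st rule.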

Thanks in advance for any help!


Handling High Volume Data

Apr 18, 2007

Hi Guys,

I am facing problems with concurrent access in SQL Server 2000. The scenario is that the DB contains one huge denormalized table of 40 million records.

The application frequently queries this table to populate other derived tables, and the SQL queries take a long time to return results.

So while one query is executing, other users' queries go into a wait state. Please suggest how I can improve this.

Or do I need to upgrade to 2005?
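If the derived-table loads can tolerate reading uncommitted data, a NOLOCK hint on the big table stops readers and writers from blocking each other on SQL 2000 (SQL 2005 adds READ_COMMITTED_SNAPSHOT as a cleaner alternative). A sketch with hypothetical names:

-- Dirty reads are possible; often acceptable for derived/reporting tables
INSERT INTO dbo.DerivedTable (Col1, Col2, Amount)
SELECT Col1, Col2, SUM(Amount)
FROM   dbo.BigTable WITH (NOLOCK)
GROUP BY Col1, Col2;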

Regards,
Prashant


High Volume DB Performance Problems

Mar 19, 2008

Hello, I am a master's student and I am preparing a seminar about high-volume DB performance problems. For example: if I have a table with 1,000,000 records, and that size is growing exponentially over time, what problems may I face with insertion, deletion, and search in such a table? And what are the problems in processing such a DB in general?


Loading Huge Volume Data

May 31, 2007

Hi Good morning to all,



My day started with loading a huge volume of data, and my data flow task failed to do so.



My data flow has a flat file connected to an OLE DB target. This is a one-to-one mapping. My source file contains 50 lakh (5 million) records and is 500 MB in size.



I'm processing the data with all the default buffer settings. I have 4 CPUs in my server.



The system process DTSDebug.exe is using more than 2 GB of virtual memory. My average CPU usage is 70%, with one of the CPUs hitting 100% utilization.



I'm very new to SSIS, so please give me some info on how to set my buffers. Is there any PDF on performance tuning in SSIS?



Is there any bulk-load transformation in SSIS to load into DB2 UDB?



If so, how do I get it installed?





Thanks in advance,

Suresh N


Transaction/Data Volume Vs SQL Edition

Nov 15, 2007

I am in the process of choosing between SQL Server Workgroup and Standard Edition. I see the feature differences on the comparison table, but do not see any reference to differing capabilities in handling transactions.

Are there any differences between Workgroup and Standard in terms of transaction/data-handling capabilities? I.e., does Standard have the superior capability of handling X times more TPM than Workgroup?

If not, am I correct to assume that this is totally determined by the hardware configuration (# of CPUs, processor speed, HD speed, RAM)?

If the data volume / transaction handling is solely determined by the hardware configuration, and I know the number of transactions and the amount of reads/writes per second, where would be a good reference for what kind of hardware configuration I need? (Ideally, once I know the hardware configuration, I could then determine whether I need Workgroup or Standard.)

Thanks in advance,
benbry


Store Currency And Volume In Database

Aug 22, 2007

We are creating an enterprise application for fuel, and I am fighting with my DBA about the proper way to store volume and currency in the database. We have 2 main arguments. The first is whether we should store costs in the database in dollars and convert in the presentation layer, or store the amount and currency in the database. We sell product from the US in dollars, but depending on the customer we may invoice in euros. Our second argument is the same, except with volume and UOM. We often purchase product by the barrel (BBL) but sell/transfer by the gallon or ton.

Please tell us the best practice for our dilemma.
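For what it's worth, the common pattern is to store the transacted amount together with its currency and unit, plus the conversion rates captured at transaction time, and convert for reporting; converting before storage loses the original invoice values. A sketch (all names are made up):

CREATE TABLE dbo.TransactionLine (
    TransactionLineId int IDENTITY(1,1) PRIMARY KEY,
    Amount            decimal(19,4) NOT NULL,  -- in the invoiced currency
    CurrencyCode      char(3)       NOT NULL,  -- ISO 4217: 'USD', 'EUR', ...
    Quantity          decimal(18,6) NOT NULL,  -- in the transacted unit
    UomCode           varchar(10)   NOT NULL,  -- 'BBL', 'GAL', 'TON', ...
    FxRateToUsd       decimal(18,8) NULL,      -- FX rate captured at transaction time
    QtyRateToBbl      decimal(18,8) NULL       -- UOM conversion captured at transaction time
);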


Service Broker Volume Question

Apr 5, 2007

Not really a question, just looking for people with experience with SB in a highly transactional environment passing a lot of messages. What kind of challenges have you run into when processing the messages? I am currently writing an SB application for a large financial institution and want to get some idea of the challenges I might face when volume gets really high (a couple of million transactions per day).
Thanks,
Tim


MSDE For Large Volume And High No. Of Users?

Sep 21, 2005

Hi, we need to use a free database for a project because of a tight budget. Is MSDE OK for handling a large volume of data and 70-80 users? My understanding is that MSDE is optimized for 5 concurrent users. Is MySQL better than MSDE? Thanks, Ben


Solution Design For High Volume Of Data

Apr 18, 2006

Hi,

I have been asked to design a solution for a client of mine who basically requires the daily analysis and reconciliation of the differences between 2 extremely large text files.

The files are not in an identical format but are both in some form of delimited format (one is CSV, the other is a little more complex). For the sake of this question, let's assume that I can effectively import each file into an MS SQL table.

Each file will have in excess of 100,000 rows each day (new data for each day).

Whilst I know that MS SQL easily has the capacity to store the data, is there a recommended way to tackle the potential problems? (I imagine that performance is important... they will be running the report every day.)

Or is building the solution as simple as importing the data into 2 tables, then querying the differences and outputting them as a report using Crystal?
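Broadly, yes: import both files and query the differences. A sketch of the reconciliation query with a hypothetical key K and value V; an index on K in both tables is what keeps this fast at 100,000+ rows a day:

SELECT COALESCE(a.K, b.K) AS K,
       a.V AS FileA_Value,
       b.V AS FileB_Value
FROM   dbo.FileA a
FULL OUTER JOIN dbo.FileB b ON b.K = a.K
WHERE  a.K IS NULL      -- only in file B
   OR  b.K IS NULL      -- only in file A
   OR  a.V <> b.V;      -- in both, but different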

Any suggestions appreciated.

Thanks

Rael


SQL Server Function/SP To Know Volume's Total Size

May 6, 2008

Hi,

Is there any SQL Server function/stored procedure available to get total size and free space information for drives/volumes and mount points?

I don't want to use the CLR approach for this.

Can you please provide any pointers related to this?
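Without CLR the built-in options are limited: xp_fixeddrives returns free space only (not total size, and it skips mount points), and anything more needs xp_cmdshell. A sketch:

-- Free MB per fixed drive (undocumented but long-standing)
EXEC master.dbo.xp_fixeddrives;

-- Total size requires shelling out, e.g. (needs xp_cmdshell enabled):
EXEC master.dbo.xp_cmdshell 'fsutil volume diskfree C:';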

Thanks in advance,

-Kiran


High Volume Database Query Optimization

Dec 10, 2007

Hello everybody,

I am doing research about high-volume database treatment (maybe a database with a terabyte-scale volume). Is there any optimization or specialization for queries that deal with such a database?


Question On Large Volume Of Training Dataset

May 10, 2007

Hi, all experts here,

Thanks a lot for your kind attention.

I have a question on training large volumes of data. In this case, the training takes a long while to complete; is there anything we can do to improve that? I know we obviously can't split the training dataset into different smaller datasets. What can we do to improve this?

Hope my question is clear for your help.

Thank you very much in advance for your advice and help; I am looking forward to hearing from you shortly.

With best regards,

Yours sincerely,


SQL 2005 And SQL 2008 Database Volume Capacity

Dec 1, 2007

Hi Everybody:

4-5 years ago, I started my career as a translator by translating the MetaTexis CAT (Computer-Aided Translation) software.

It's amazing to see all the improvements that have been made since then, but recently I found some problems regarding databases:

I heard that ACCESS database volume is limited to 2 GB and that SQL 2005 database volume is limited to 4 GB, but I think this information is wrong, or at least I was only able to import 10% of that amount.

Put into words: a database of 125,000 segments/sentences should not come anywhere near 2 GB (for ACCESS), nor 250,000 segments near 4 GB (for SQL 2005).

Concretely, my "mega.mxtm" database is "only" 359 MB, and suddenly it refuses to import more sentences. Is that normal? (Microsoft SQL 2005)

Question: Is the new SQL 2008 also limited? Is there any way to free up or increase the volume capacity?
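Note that the 4 GB cap applies to the Express edition, not to SQL Server 2005 as a whole; SQL Server 2008 Express keeps the same 4 GB data-file limit (raised to 10 GB only in 2008 R2 Express). To see how big the database really is versus that cap:

USE MyTranslationDb;   -- placeholder name for the database behind mega.mxtm
EXEC sp_spaceused;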

Point 2: Since I updated SQL 2005 to 2008, I am no longer able to open the "old" "mega.mxtm"... :(

Regards!


De Sena Viegas


How To Split One Report Into Many Files Based On Volume

May 4, 2007

Does anyone have an idea of how to split one report (a report subscribed for automatic delivery) into many files based on the volume of data retrieved (records 1-50 in the first file, 51-100 in the second file, etc.)?

Say, for example, I have an employee table and a department table. The report is designed to provide a list of employees for a given department. If the department contains more than 50 employees, then the report is exported as an individual file for every 50 employees.

Can anyone suggest a way to do this?
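One approach is a data-driven subscription whose driving query assigns each row a file number, with the subscription rendering one file per number and the report filtered on it. A sketch of the bucketing query (SQL 2005; names are made up):

SELECT DepartmentId,
       EmployeeId,
       (ROW_NUMBER() OVER (PARTITION BY DepartmentId
                           ORDER BY EmployeeId) - 1) / 50 AS FileNo
FROM   dbo.Employee;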

Regards,
Krishna


Huge Volume Of Data Loading Issue

Aug 21, 2007

Hi all,

I've faced a problem with the below error when I load 1.5 million rows into an Oracle database.


The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers.

Please help. Thanks


Performance Tuning On High Volume Server

Jan 4, 2007

I am looking to improve the performance of my SQL Server databases.

I currently have a dual-location system. The database server setup is basically a quad-Xeon with 4 GB at my office and a dual-Xeon with 4 GB at a remote web hosting location. There are separate application/web/intranet servers at each site. The two database servers are replicated, with the local server publishing to the remote server.

The relational database holds circa 26 million records, growing by 10,000 per day; approximately 50,000 queries are performed per day.

My theory is that the replication of the two databases is causing a slowdown; despite fast network connections (averaging 200 ms between servers), the replication seems to place a large load on the local server. Would it be sensible to replicate to a second local server and have that replicate to the remote server, placing the burden on the second server?



I am planning to upgrade the local server to a high-capacity 4+ CPU 64-bit server. My problem is that although I have noticed a slowdown in performance over time, I am unsure how to go about measuring and quantifying this in order to diagnose the bottlenecks and ensure that investing in a new server would be worthwhile. Where would one be best advised to start this project?
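Before buying hardware, it is worth capturing where the time actually goes. On SQL 2000, the cheapest starting point is watching what sessions are waiting on (alongside PerfMon disk-latency and replication counters):

-- SQL 2000: sessions currently waiting, and on what
SELECT spid, status, blocked, waittime, lastwaittype, cpu, physical_io
FROM   master..sysprocesses
WHERE  waittime > 0;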


Is It Safe To Use Volume Mount Points With SQL 2005?

Jul 20, 2006

Hello,

To implement the new SQL 2005, I plan to make the environment easy to manage. The environment should be simple to document and be automated via scripts. Therefore I plan to use mount points as described below.

On a typical SQL server with multiple drives like C, D, E, F, G, and H, each drive will have various folders to hold SQL code, data files, transaction log files, tempdb files, snapshot files, and other types of files. This typical environment is not pretty and is hard to write scripts for.

So I plan to standardize on one directory structure via volume mount points. On all new SQL 2005 servers, we should see drive E as the one and only SQL Server drive. Other drives will be mounted under drive E as shown:

E:
  SQLSERVER  (local folder)  - SQL code for each DB instance
  SQLSHARED  (local folder)  - SQL shared tools for all DB instances
  SQLTLOG1   (drive H)       - DB transaction logs
  SQLSNAP1   (drive F)       - DB snapshot files
  SQLTEMPDB1 (drive H)       - tempdb main data file
  SQLWORK    (drive D)       - DBA work area
  SQLDATA1   (drive G)       - DB data files
  SQLDATA2   (future drive)  - if SQLDATA1 is too large for any direct-attached drive, or to get more I/O throughput

With this implementation, I can easily write scripts to manage the environment. Also, if any mounted volume runs out of space, we can swap the underlying drive without any change to the database configuration. We can also switch from direct-attached drives to a SAN in the future.
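To illustrate the scripting benefit: with the fixed layout above, file placement is identical on every server, e.g. (hypothetical database name):

CREATE DATABASE SomeDB
ON PRIMARY ( NAME = SomeDB_data, FILENAME = 'E:\SQLDATA1\SomeDB\SomeDB.mdf' )
LOG ON     ( NAME = SomeDB_log,  FILENAME = 'E:\SQLTLOG1\SomeDB\SomeDB_log.ldf' );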

Do you think mount points are safe to use with SQL 2005? I know they are supported.

Do you have a standard directory structure for your environment? How do you do it?

Thanks,

KTMD


Adding New Volume And Swapping Drive Letters....

Apr 11, 2007

Hello - I have a SQL 2000 server with a D: drive that contains all of my databases (system and user). I am running out of space on this volume and need to migrate its contents to a larger one. My initial plan was to introduce a new volume to the server (say, a K: drive), back up all databases (of course), and then stop all SQL services. Copy all data from D: to K:. Once the data is copied, swap the drive letters (D: to I:, and then K: to D:). Then restart the SQL services. SQL should not know any better, since everything was on the D: drive when it went down and everything is still on the D: drive when it came back up, correct?

The other option mentioned is to detach the databases, copy the data, and then reattach them in their new locations. I understand this method, but it seems more involved (and riskier) than just renaming the drives. Does anyone have an opinion on these two migration methods? Thanks for your help.

Chris


SQL Backup, Volume Shadow Copy && Trans. Logs

Mar 9, 2005

Hi,

Can someone please share their experience and recommendations on the following questions?

1. Do you still need to enable SQL backups if you are already using a 3rd-party software utility to back up the SQL data drives?

2. If you do not want to shut down SQL services for backup, can you use Volume Shadow Copy as a solution for open files?

3. What size would you recommend for the transaction log if my data DB size is 2 GB?

I am running Windows 2003 with SQL 2000 SP3. Obviously, I am not an expert in SQL, so I appreciate your help.

Thanks


SQL 2012 :: FCI Installation Volume Cluster Group Error

Dec 3, 2014

The issue occurs when building an FCI: after the step of specifying which drives the data, logs, and tempdb go on, the installer errors with this message:

Exit message: The volume that contains SQL Server data directory D:\SQL_Data\MSSQL11.MSSQLSERVER\MSSQL\DATA does not belong to the cluster group.

We are running Windows Server 2012 R2 and SQL Server 2012 Enterprise with Veritas Storage Foundation for Windows 6.1. I've confirmed that the disk group exists, is mounted in Windows, and is presented as available storage. Symantec has an article describing this same circumstance, but the Microsoft KB article they refer to only has hotfixes for Windows Server 2012, not R2.


SQL Server 2008 :: High Volume Reads And Writes?

Jul 6, 2015

We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database with intensive read and write transactions. I'm wondering what the best design is to optimize performance for both reads and writes, since historically we have had constant issues in the current environment when massive updates are happening. Reads shall have higher priority over writes.


Using CSV (Cluster Shared Volume): Does It Support The Filestream Feature?

Jul 22, 2015

I am looking for information on using CSV storage for a Windows 2012 R2 Hyper-V cluster configuration.

The cluster will be used by SQL Server 2014.

Mainly, does it support FILESTREAM? I have not found much documentation on using Cluster Shared Volumes with SQL Server 2014. There was a session at TechEd 2014 on the subject.

SQL Server 2014 is the first version to support CSV storage.


Slowly Changing Dimensions Errors (Volume Related?)

Jan 20, 2006

Hello,

We are having issues with SCDs that appear to be volume related. Small dimensions work fine. Pasted below are the last few lines of our package run without any contention (we get different errors at different times with contention, referring to buffers or virtual memory). The log seems to call out the SCD main task but just gives a number and not much of a description.

2) For the errors referring to a buffer (I can paste them as well, but did not want to put too much at once): we have BufferTempStoragePath pointing to a drive with lots of space. We have tinkered with DefaultBufferMaxRows and DefaultBufferSize, but those seem to just control how big the "batches" are (you see the row-count size change).

Are there any configuration-type things we could try to avoid these errors? Is there any more detail that can be gleaned from the messages below or the error number? This is kind of a show-stopper, as we are late in the cycle and need larger volumes (at this point only about 600,000 rows) to go through the SCD tasks. The performance seems good enough (about 20 minutes). In fact, we have tens of millions of rows working fine in packages not using SCDs.

Thanks. Geoff


Information: 0x4004300C at Data Flow Task, DTS.Pipeline: Execute phase is beginning.
Error: 0xC0047022 at Data Flow Task, DTS.Pipeline: The ProcessInput method on component "Slowly Changing Dimension" (45421) failed with error code 0x8007000E. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.
Error: 0xC0047021 at Data Flow Task, DTS.Pipeline: Thread "WorkThread0" has exited with error code 0x8007000E.
Error: 0xC0047039 at Data Flow Task, DTS.Pipeline: Thread "WorkThread1" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
Error: 0xC0047021 at Data Flow Task, DTS.Pipeline: Thread "WorkThread1" has exited with error code 0xC0047039.
Error: 0xC02020C4 at Data Flow Task, Control File Extended view as source [959]: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
Error: 0xC0047038 at Data Flow Task, DTS.Pipeline: The PrimeOutput method on component "Control File Extended view as source" (959) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Error: 0xC0047021 at Data Flow Task, DTS.Pipeline: Thread "SourceThread0" has exited with error code 0xC0047038.
Information: 0x40043008 at Data Flow Task, DTS.Pipeline: Post Execute phase is beginning.
Information: 0x40043009 at Data Flow Task, DTS.Pipeline: Cleanup phase is beginning.
Information: 0x4004300B at Data Flow Task, DTS.Pipeline: "component "Insert New Rows into Product Dim" (51111)" wrote 0 rows.
Task failed: Data Flow Task
Warning: 0x80019002 at ProductDimensionIncr: The Execution method succeeded, but the number of errors raised (7) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "ProductDimensionIncr.dtsx" finished: Failure.


SQL 05 Sets Registry Wrong? Volume Shadow Copy Service

Sep 12, 2005

hello,


Error Opening Datafile: Filename, Directory, Volume Syntax Incorrect

Jan 6, 2004

I receive this error message when I try to run my DTS package:

"Error opening datafile: The filename, directory name, or volume label syntax is incorrect."

I set the file name property of the Text File (Source) to a valid file, but when I run the package and it returns the above error message, it sets the file name property to <not displayable>.

Does anyone have any experience with such a problem? If so, please help! Thanks







