Read-only transactional replication to about 60 clients. 100 or so transactions a day (over an 8-hour window); each transaction (record) is probably about 8 KB. Reliable connection over 100base-T. Latency: 15 minutes would be great.
Is this doable? I couldn't find a timing/sizing model anywhere.
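(A rough back-of-envelope, assuming each of the ~100 daily transactions is ~8 KB and fans out to all 60 subscribers: 100 x 8 KB = ~800 KB/day of changes, or about 48 MB/day of total delivery traffic. Spread over 8 hours that averages under 2 KB/s, which is negligible on 100base-T, so a 15-minute latency target looks comfortable.)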
I have a few more clarifications regarding time series. Firstly, in my model the month-level product sales value is recorded against the 1st day of every month, so the key time column is of datetime datatype and contains a sequence of dates representing the 1st day of each month, e.g. 2006-01-01, 2006-02-01, etc., all in yyyy-mm-dd format. But when I make predictions for the next five months, though it makes monthly predictions, the day part of the predicted dates is random, whereas I expect it to be the 1st of every month. What is the reason for this, and how can I overcome it?
Secondly, the predicted sales values for some time periods are negative, though I do not have any negative values in the training data. What is the reason for this, and how can I rectify it?
Thirdly, in one of your earlier posts you said that the time series algorithm does not have any built-in time intelligence but uses the key time column as a time-sequence stamp. So if I have to make predictions for a particular time period where the time slice for each period is 25 days or 50 days, etc., then I understand that the input data used to train the model should follow the same time sequence. Or can I specify the span of the time period over which the prediction is to be made? Basically, how can I use the same time series model to make monthly, quarterly, yearly, or daily predictions, or predictions for a custom time period like the ones mentioned above?
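As far as I know, the time series algorithm has no calendar awareness: the key time column is treated as an evenly spaced sequence, and predicted time stamps are extrapolated from the spacing of the training data, so with month-start dates (intervals of 28 to 31 days) the predicted day part drifts off the 1st. A common workaround is to use an integer period index (e.g. 200601, 200602, ...) as the key and map it back to a calendar date when presenting results; the same trick covers custom 25- or 50-day slices, since one prediction step always equals one training interval. Negative forecasts arise because the autoregressive model carries no non-negativity constraint; people typically clamp them after the fact or train on log-transformed values. A minimal DMX sketch (model and column names are assumptions):

SELECT FLATTENED
    PredictTimeSeries([Sales Value], 5) AS Forecast
FROM [Monthly Sales TS]
-- Each of the 5 steps is one training interval (here, one month); the
-- returned $TIME stamps are extrapolated rather than calendar-aligned.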
I have built a time series model to forecast sales value.
I have data from Jan 2004 to Jan 2006, and the sales value is at day level in my database, but I am aggregating it to month level in the DSV of the mining model.
I am required to make only historical predictions using the above model, from Jan 2004 to Jan 2006, for every month.
I have set the Historical_Model_Count and Historical_Model_Gap parameter values to 24 and 10 respectively, and am trying to predict for the past few months (PredictTimeSeries(SalesValue, -1, 1)).
But it's throwing the following error:
Error(Data Mining): A time series prediction was requested with a start time further in the past than the internal models of the mining model, Sales Forecast, specified in the HISTORIC_MODEL_GAP and HISTORIC_MODEL_COUNT parameters can process
In fact, it throws the above error irrespective of what the Historical_Model_Count and Historical_Model_Gap parameter values are.
I am not able to figure out why this problem is happening.
What should the parameter values be for the above scenario?
It would also be helpful if I could get an explanation of how these two parameters affect historical predictions. I understand that they matter for historical predictions, but I don't know why or how.
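For what it's worth, here is how I understand the mechanics (hedged, based on the documented behavior of the Microsoft Time Series algorithm): with HISTORIC_MODEL_COUNT = c and HISTORIC_MODEL_GAP = g, the server trains c additional internal models, each truncated g time slices earlier than the previous one, and a historical prediction is answered by the internal model whose training data ends before the requested start. Requests further back than about c x g slices therefore fail with the error above. Note the spelling the error message itself uses: HISTORIC_MODEL_COUNT and HISTORIC_MODEL_GAP. A minimal DMX sketch (model and column names are assumptions):

SELECT FLATTENED
    PredictTimeSeries([Sales Value], -24, -1) AS HistoricalFit
FROM [Sales Forecast]
-- With c = 24 and g = 10, start times back to roughly -(24 * 10) = -240
-- time slices should be answerable.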
I have built a Sales Forecast model to predict the sales value. Along with making historic predictions for previous time periods I also want to retrieve the actual sales values for those periods.
How can I achieve this in a time series model?
I would also like to know how mining models store the data.
Do they store the data in the same table/view format as their respective data source views, or in the model content format?
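On the storage question, my understanding (hedged; the model name is an assumption) is that a mining model stores learned patterns as model content nodes, not a copy of its data source view tables; the training cases themselves are only readable back out of the model if drillthrough was enabled when it was processed:

-- Training cases (requires drillthrough on the model):
SELECT * FROM [Sales Forecast].CASES
-- Learned patterns, as content nodes rather than source rows:
SELECT * FROM [Sales Forecast].CONTENT

The first query is also one way to get at the actual sales values for the periods you are predicting historically; otherwise, join the predictions back to the relational source.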
I am new to SSAS and I want to try to build a "Sales" model. I will have some "Usage" data for certain time spans, but I am not quite sure how to tackle this. Is there a "how-to" for this somewhere?
Edit: There are several locations, and a forecast is needed for each location. And the icing on the cake would be if I could tell where my supplies must go first to achieve the best sales...
The potential client wants to use Oracle, but I would like to show them that SQL Server is the better tool for this ;)
I am trying to find a way to link an 'initial' Sale ID of a product to 'future' Sale IDs of products that trace back to the original Sale ID. For example, if I call the original sale 'Sale ID #123', how can I link future Sale IDs (the children) and all future sales to the original Sale ID #123? Can I use a surrogate key or a similar mechanism?
Parent: Sale ID #123; Children: Sale ID #456, Sale ID #789, Sale ID #.....
How can I link the original Sale ID (parent) to the Sale IDs (children) of future purchases, both those existing now and those made going forward?
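One hedged way to model this in T-SQL (table and column names are assumptions; the recursive query needs SQL Server 2005 or later): give every sale an optional ParentSaleID pointing at the sale that spawned it, with NULL marking an original sale, then walk the chain with a recursive CTE:

-- Self-referencing foreign key: each sale may point at its parent sale.
CREATE TABLE dbo.Sales (
    SaleID       INT IDENTITY(1,1) PRIMARY KEY,
    ParentSaleID INT NULL REFERENCES dbo.Sales (SaleID),  -- NULL = original sale
    SaleDate     DATETIME NOT NULL DEFAULT GETDATE()
);

-- All descendants of sale #123, however many generations deep:
WITH Chain AS (
    SELECT SaleID, ParentSaleID FROM dbo.Sales WHERE SaleID = 123
    UNION ALL
    SELECT s.SaleID, s.ParentSaleID
    FROM dbo.Sales AS s
    JOIN Chain AS c ON s.ParentSaleID = c.SaleID
)
SELECT SaleID FROM Chain;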
I want to calculate the sum of actual sales up until a date, and of forecast sales after that date. I am not sure what the best approach to this problem is, but I have tried my best with the following approach. Are there better ways to solve this (using DAX)?
I have created a parameter table that offers the last date of each month as possible choices to the user. I have tried to create a measure that sums actual sales up until this date.
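One hedged way to write it (the table and column names DateParam, Actuals, and Forecast are assumptions, and the VAR syntax needs a newer DAX engine; on older PowerPivot versions, inline the IF):

// Actuals up to the chosen month end, forecast after it.
Sales Combined :=
VAR Cutoff =
    IF ( HASONEVALUE ( DateParam[MonthEnd] ), VALUES ( DateParam[MonthEnd] ) )
RETURN
    CALCULATE ( SUM ( Actuals[Amount] ), Actuals[Date] <= Cutoff )
        + CALCULATE ( SUM ( Forecast[Amount] ), Forecast[Date] > Cutoff )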
Hi, assuming a data-driven web application is used by a huge number of customers, we can set up load-balancing infrastructure where multiple web servers process all requests. In this case the database might run on its own machine. But what can be done if one machine is no longer enough for the database? Is it possible to build some kind of database cluster? And if yes, how does the application work with such a cluster? Is there still ONE connection string, or is it necessary to query multiple SQL Servers?

Long story short: can a SQL Server database cluster be handled as ONE SQL Server instance from the outside, even if it is hosted on multiple machines? And if yes, what edition of SQL Server 2005 is required?

Regards, Marco Buerckel
I have a web application for which I need to come up with a scalability plan. The application is currently running on two computers: the front end runs Tomcat with JSP (Java Server Pages), and the back end is MS SQL 2000.
The database is running fine now, but the file size and CPU utilization are growing, and I need to figure out a way to distribute this database over several machines.
I looked at MSCS and setting up a cluster, but I think that solution will NOT work in an intensive enterprise environment. (Am I wrong?) For one thing, there is a hard limit of 16 nodes in a cluster. Also, it requires a shared RAID or Fibre Channel array, and these solutions are very expensive. Nor is it easy to add capacity if you reach the maximum of the RAID array: for example, if you have a Fibre Channel array with 8 terabytes of capacity and your DB uses up all 8 terabytes, you have to buy a whole new enclosure.
One of the major objectives I have is to minimize cost and support unrestricted database growth.
Here is what I would like to achieve. If any of you know of a solution, or of something that YOU have USED, that might give the functionality I require, I would greatly appreciate your feedback.
Wish List:
1) Run a single SQL instance on regular PCs, each hosting one or more tables (table-level partitioning) of a very large database.
2) Distributed JOINs on tables residing on different physical PCs.
3) Multiple machines hosting the same table (fault tolerance).
4) A single virtual IP for the database, as seen by the application front end.
-----------
I looked at C-JDBC, an open-source middleware that can do some of the things above. I liked it, but it doesn't seem to support distributed JOINs, which means our application (which requires quite a few table JOINs) won't be able to use table-level partitioning.
Can anyone recommend clustering middleware that can do this?
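For reference, SQL Server's own federated approach to items 1, 2, and 4 is the distributed partitioned view: member tables on each node plus a UNION ALL view over linked servers, so the application queries one logical table. A hedged sketch (server, database, and table names are assumptions):

-- On node A: one range of the data, declared by a CHECK constraint.
CREATE TABLE dbo.Orders_Range1 (
    OrderID INT PRIMARY KEY CHECK (OrderID BETWEEN 1 AND 4999999),
    Amount  MONEY NOT NULL
);

-- Defined on every node: a view unioning the member tables across
-- linked servers, presenting a single logical Orders table.
CREATE VIEW dbo.Orders AS
    SELECT OrderID, Amount FROM NodeA.Sales.dbo.Orders_Range1
    UNION ALL
    SELECT OrderID, Amount FROM NodeB.Sales.dbo.Orders_Range2;

Note this gives partitioning but not extra availability: losing one node breaks queries against the view, which matches the availability complaint raised elsewhere in this thread.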
I'm currently developing an ASP.NET site with SQL Server 2005 Standard, and I'd like to ask a question about the future of the database. It needs both continuity and performance. I'm thinking about replication or mirroring for continuity and table partitioning for performance. I admit I've never done any of those before; I'll learn about them, but they're not needed at this time. The question is: while designing the database now, do I have to do anything in consideration of the features I'm thinking of implementing later? For example, I'm using IDENTITY in my tables, but I've heard about the "identity crisis" when using replication with identity columns, so I'm thinking of using GUIDs; now, though, I fear the GUID column index itself will be the slowdown factor in the first place.
Any suggestions to consider? I'd appreciate any opinions.
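On the GUID worry, one hedged option on SQL Server 2005 is NEWSEQUENTIALID(): it generates ever-increasing GUIDs, avoiding the page splits and fragmentation that random NEWID() values cause on a clustered index, while keeping values unique across replicas. (The alternative is to stay with INT IDENTITY and manage replication identity ranges.) A minimal sketch (table name assumed):

CREATE TABLE dbo.Customers (
    CustomerID UNIQUEIDENTIFIER NOT NULL
        DEFAULT NEWSEQUENTIALID()   -- only usable as a column default
        PRIMARY KEY CLUSTERED,
    Name NVARCHAR(100) NOT NULL
);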
Hi, my application is about to scale up significantly, and it seems that SSB could be very useful in helping me scale it right; I especially like the multiple-readers feature.
So, here's the deal: my application is about to receive lots of records (peaks could get up to 300k records per hour). Each record must be processed separately, but they can be processed in parallel; there is no dependency between rows (gut feeling tells me not to overuse parallelism, but I'll do my tests). It takes about 10-15 ms to process a single row, and I don't mind sliding work to non-peak hours if needed. Each record consists of about 10 columns.
My questions:
1. Is SSB the right solution for a buffer queue? Is there any well-known use case similar to my problem?
2. What is the fastest and easiest way to serialize/deserialize each record?
I would prefer not to use CLR integration, due to performance concerns, and to stick with T-SQL for now. I don't necessarily prefer XML serialization; if binary serialization works faster, that's fine with me.
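On both questions, a hedged sketch of what I'd try (all object names are assumptions): one Service Broker queue as the buffer, records serialized to XML with FOR XML (no CLR needed), and several parallel readers each RECEIVEing from the same queue:

-- One-time setup: message type, contract, queue, service.
CREATE MESSAGE TYPE RecordMsg VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT RecordContract (RecordMsg SENT BY INITIATOR);
CREATE QUEUE RecordQueue;
CREATE SERVICE RecordService ON QUEUE RecordQueue (RecordContract);

-- Producer: serialize one incoming row to XML and enqueue it.
DECLARE @h UNIQUEIDENTIFIER, @body XML;
SET @body = (SELECT * FROM dbo.Incoming WHERE RowID = 1
             FOR XML RAW('rec'), TYPE);
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE RecordService TO SERVICE 'RecordService'
    ON CONTRACT RecordContract WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE RecordMsg (@body);

-- Consumer: several of these can drain the queue in parallel.
DECLARE @msg XML;
WAITFOR (RECEIVE TOP (1) @msg = CAST(message_body AS XML)
         FROM RecordQueue), TIMEOUT 5000;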
I have a pivot chart that displays 3 data series. Series 1 values range from 17 to 106, Series 2 values range from 1 to 18, and Series 3 values range from 0 to 1.

When I display all three series on the chart, the value axis labels run from 1 to 120 with major unit lines at intervals of 20. When I remove Series 1, the chart automatically adjusts itself and the value axis labels run from 1 to 20 with lines at intervals of 2, thus nicely accommodating the value ranges of Series 2 and Series 3.

All this is fine. But when I remove Series 2 as well, leaving only Series 3, the labels run from 1 to 1.2 with interval spacing of 0.2. Since these values are counts, it makes little sense to show decimal values.

Can anyone provide advice on how to keep these axis labels from showing decimal values if a user removes Series 1 and Series 2? Thanks. - Bob
We use timed subscriptions to do almost all of our reporting. Reports are delivered (primarily via e-mail and printer) once they are completed and users don't have to "watch the pot boil" so to speak.
Apparently SSRS has some load balancing capability whereby it lets only a limited number of threads/reports run concurrently. We often reach this max and lock ourselves up on some very long-running reports, causing other important reports to wait a long time.
We've added some operational reports (i.e., document prints) to the mix. These reports run off OLTP data. They are very fast and very high priority; waiting on them is not an option. Is there some way we can get SSRS to work on these operational reports in preference to other types of reports (e.g., "just for kicks" reports)? I think we'd almost like to add another SSRS server and dedicate it to the operational reports. Ideally the new SSRS server would use the same Report Server database but would only work on subscriptions for certain documents.
Has anybody else tried to solve this problem? This MS document doesn't really address subscriptions or load balancing by report: http://www.microsoft.com/technet/prodtechnol/sql/2005/pspsqlrs.mspx
We have a 3-month-old quad-processor/dual-core server running SQL Server 2005, and already it is getting close to hitting the CPU wall. An 8-way CPU box is prohibitively expensive and out of the question. I am looking desperately for a way to TRULY scale out SQL Server, in the same way that IIS can be scaled out via App Center.

The "in the box" solution for scaling out SQL Server 2005 is the DPV (distributed partitioned view). Unfortunately this solution makes the system less available rather than more (one server outage takes them all out for that table) and requires serious rearchitecting of the software. Contrast this to IIS and App Center, where each added server makes the system more available and requires no rearchitecting to work.

Before someone says "what you want can't be done in a database": Oracle has an application server middleware product that lets you do both of the above. Just plug in a new server with Oracle on it, and you've doubled your capacity. But SQL Server 2005 doesn't yet have a similar capability.

So I read with great interest the following article, which talks about why this is the case with SQL Server. There are two issues that make it very difficult to do: http://www.sql-server-performance.c...ability_availab...

You can create a crude pool using replication, but the performance times look horrendous. However, the article also talks about the latest developments in this field, specifically MIDDLEWARE that can create a scale-out solution that is more available and that requires simply adding new servers to scale up.

I found two companies which seem to offer this new capability: http://www.metaverse.cc/newsevents.asp?cid=17999 and http://www.pcticorp.com/product.aspx

Both companies appear to have patents or a patent pending on the process. I tried to contact Metaverse but got no reply, despite their recent press release. I just emailed PCTI Corp today to see if I could learn more about their product.

My question for this group is: does anyone have experience with either of the two products (or any others that provide this capability)?

Many thanks in advance for your help.

Ian Ippolito
http://www.rentacoder.com
I have a report which totals sales by customer. The table footer has a grand total of all customer sales. I would like to get a percent of each customer's sales against the total sales. How do I get the sum from the table footer to use in an individual customer row?
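If this is a Reporting Services table, one hedged approach: the Sum aggregate takes an optional scope argument, so a detail-row expression can reference the grand total directly ("table1" below is the assumed name of the table data region, and Sales the assumed field):

=Sum(Fields!Sales.Value) / Sum(Fields!Sales.Value, "table1")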
I need to develop a language-specific DWH, meaning that product descriptions are available from a SAP system in multiple languages. English is the most important language and is the standard, but there are also requirements from countries that want product descriptions in their own language.
Productnr  Productdesc  Language
1          product      EN
1          produkt      DE
One option is to give each language its own description column, but that is not very elegant. I was thinking of using bridge tables to model this, but then you always have to select a language in a filter (I think).
I'm thinking of a technical solution such that, when a user logs on, the language is determined and a view decides which language-specific product data to pick. But then I don't have the opportunity to interchange the different language-specific fields in a report (or, in my case, PowerPivot).
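A hedged T-SQL sketch of that idea (all names are assumptions, and fnUserLanguage() is a hypothetical helper that resolves the logged-on user's language): one row per product per language, with a view that falls back to English when no translation exists:

CREATE TABLE dbo.ProductDescription (
    ProductNr   INT           NOT NULL,
    LanguageCd  CHAR(2)       NOT NULL,
    ProductDesc NVARCHAR(200) NOT NULL,
    PRIMARY KEY (ProductNr, LanguageCd)
);

CREATE VIEW dbo.ProductLocalized AS
SELECT p.ProductNr,
       COALESCE(loc.ProductDesc, en.ProductDesc) AS ProductDesc
FROM dbo.Product AS p
LEFT JOIN dbo.ProductDescription AS loc
       ON loc.ProductNr = p.ProductNr
      AND loc.LanguageCd = dbo.fnUserLanguage()  -- hypothetical helper
LEFT JOIN dbo.ProductDescription AS en
       ON en.ProductNr = p.ProductNr
      AND en.LanguageCd = 'EN';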
We have a production server with a database on which a few DTS packages execute every night. Most of them run Bulk Insert stored procedures.
So we have to set the recovery model of the database to Simple for that period of time; otherwise it will blow up our logs.
Is there any way we can set up log shipping between our production and standby servers, but pause it for some time, set the recovery model of the primary DB to Simple, execute the DTS Bulk Insert jobs, bring it back to the Full recovery model, and finally resume log shipping?
Is it possible, and if yes, how can we achieve this?
If not, what could be another DR solution in this scenario?
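A hedged suggestion, sketched below: switching to Simple breaks the log-backup chain and forces log shipping to be re-initialized from a full backup, whereas the Bulk_Logged recovery model keeps the chain intact while still allowing bulk inserts to be minimally logged (database name and backup path are assumptions):

ALTER DATABASE ProdDB SET RECOVERY BULK_LOGGED;
-- ... run the nightly DTS Bulk Insert jobs here ...
ALTER DATABASE ProdDB SET RECOVERY FULL;
-- Back up the log right away so the bulk-logged interval is captured:
BACKUP LOG ProdDB TO DISK = 'E:\LogShip\ProdDB_after_bulk.trn';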
I'm trying to bring down a file from the AS/400 to update a SQL database table. Whether because of configuration limitations on the SQL Server or something else, I cannot reach the AS/400 from my SQL box, so I'm running the data down to a text file on a Citrix server via Client Access. I need to automate the DTS package run on the SQL box to pull the data from the Citrix server over to the SQL box. How do I set up the DTS package to run on a schedule?
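The usual route (hedged; server and package names are assumptions): save the DTS package on the SQL box, create a SQL Server Agent job whose step runs the package, and attach a nightly schedule to the job. A CmdExec job step would look like:

REM Run the server-stored package with a trusted connection.
dtsrun /S MYSQLBOX /N "PullFromCitrix" /E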
How can I import an XML file into SQL 2005 at the same time every night? I will need to create a new database first via the import; after that I will be appending to the database. Then I will need to export the data into a different XML file and upload it to my hosting system, which is Yahoo!; the original XML file is on the Yahoo! Store server.
I have SQL 2005 Standard installed on my personal workstation. When I iron the bugs out, I will be running this from our SQL 2005 Enterprise server.
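One hedged pattern for the import half on SQL 2005 (the file path, target table, and XML shape are all assumptions): load the whole file as a single XML value, shred it into rows with nodes()/value(), wrap that in a stored procedure, and schedule it as a nightly SQL Server Agent job. The export half can use FOR XML to produce the outbound file.

DECLARE @x XML;
-- Read the entire file into one XML value.
SELECT @x = CAST(BulkColumn AS XML)
FROM OPENROWSET(BULK 'C:\feeds\catalog.xml', SINGLE_BLOB) AS src;

-- Shred the XML into relational rows.
INSERT INTO dbo.Items (ItemID, ItemName)
SELECT n.value('@id',   'INT'),
       n.value('@name', 'NVARCHAR(100)')
FROM @x.nodes('/catalog/item') AS t(n);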
We have a VB application connecting to SQL Server 7. Recently the log files have shown an 'Automation error' in one of the routines. I've scanned through the code and passed in some test cases, and it looks OK.
Any idea what could have caused the 'Automation error'?
I am a newbie in SQL. As a system admin, I would like to automate the task of keeping SQL Server up to date with the latest service packs and patches. Is there any procedure or software with which I can automatically download the patches/SPs from the site and install them on the servers, or maybe schedule the task to run every month?
I have just taken over the DBA role for an application whose DR plans can at best be described as woeful! So basically I am starting from scratch and trying to automate the database re-creation as much as possible. I have managed to write some bespoke scripts for backup devices, user IDs, etc., but the main bulk of the work is in the re-creation of the 300 databases they have (boy, I wish I had been here when they did this database design!).
Anyway, on to my question. I know that I can go to each database in SQL Enterprise Manager (SQL 2000, by the way), right-click, and select Generate Scripts, and it will do it for me. But I was wondering (and hoping) whether there is an API I could use that would let me pass in a database name and have it generate the scripts automatically. Height of laziness, I know, but it would allow me to regenerate the scripts on a regular basis to keep the DR recovery scripts up to date.
Thank you in advance for your help, even though I am guessing the answer will be "Tough, you have to do it by hand!"
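For what it's worth, Enterprise Manager's Generate Scripts is built on SQL-DMO, which is scriptable from COM. A hedged VBScript sketch (server name and output folder are assumptions; error handling omitted; this scripts user tables only):

' Connect to the server with Windows authentication.
Set fso = CreateObject("Scripting.FileSystemObject")
Set srv = CreateObject("SQLDMO.SQLServer")
srv.LoginSecure = True
srv.Connect "MYSERVER"

' Write one script file per database, covering its user tables.
For Each db In srv.Databases
    Set f = fso.CreateTextFile("C:\DRScripts\" & db.Name & ".sql", True)
    For Each tbl In db.Tables
        If Not tbl.SystemObject Then f.Write tbl.Script()
    Next
    f.Close
Next
srv.DisConnect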
Hi, I tried to add column headers and delete all rows in an Excel workbook that is set up as a linked server to my SQL Server DB. I got an error message: "delete data through linked server is not supported by ISAM". I tried to use FSO to write to the Excel file, but it seems FSO supports writing only to text files. The last time I tried to create an Excel application object and tamper with its cells, I hung my Excel application and had to reinstall it. Any ideas? Thank you very much.
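A hedged sketch of a partial workaround: the Jet/Excel ISAM driver allows SELECT, INSERT, and UPDATE but not DELETE, so new rows can still be appended from T-SQL (file path, sheet, and column names are assumptions):

INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                       'Excel 8.0;Database=C:\data\book.xls',
                       'SELECT CustName, Amount FROM [Sheet1$]')
SELECT 'Contoso', 100;
-- Deleting rows has to happen outside the driver, e.g. by regenerating
-- the file or clearing the sheet from Excel itself.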
Last night was the first night we used SQL Server Agent to run our warehouse ETL packages. We use the built-in logging of SSIS as well as our own event-logging mechanism as it goes through all of the packages. Unfortunately, when we checked this morning, the Agent showed failure, but neither of the logs from the packages reported any errors. I checked the Windows event log and the SQL Server logs but found nothing that would appear to report an error.
Does anyone have any ideas about what I might be missing, or any recommendations on things to check? We have been running these packages manually for weeks with no problems. Any help would be greatly appreciated! Thank you in advance!
When I move (but do not migrate) packages to 2005, this still works. I don't have a problem with permissions or any of the other problems that I have seen posted in this forum. But if I migrate a package to SSIS, the package now shows up in sysdtspackages90 instead of sysdtspackages (assuming I remove the old version), and LoadFromSQLServer(...) can't find it anymore. So it seems like one of the following is true:
a) There is a different ProgID I should use in sp_OACreate besides DTS.Package. I tried SSIS.Package and DTS.Package90, which seemed like reasonable possibilities, but they don't work. Is there something that will work? Or is LoadFromSQLServer(...) not supported for SSIS packages?
b) OLE Automation doesn't work with SSIS packages. OLE Automation is enabled, and it works on non-migrated packages. Is it true that Microsoft decided this capability was not needed in SSIS?
I see numerous postings saying not to use the sp_OA routines. I understand there are other ways to execute an SSIS package from a stored procedure, and that there may be more secure ways. But I need to know whether it is an absolute requirement to use those other methods.
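For what it's worth, my understanding (hedged) is that the SSIS runtime is exposed through the .NET object model Microsoft.SqlServer.Dts.Runtime rather than through the DTS.Package COM ProgID, so sp_OACreate only ever reaches old-format packages in sysdtspackages. The usual T-SQL fallback is to shell out to dtexec (the package path and server name below are assumptions, and xp_cmdshell must be enabled):

-- Run a migrated package stored in msdb on the target server.
EXEC master.dbo.xp_cmdshell
    'dtexec /SQL "\MyFolder\MyPackage" /SERVER myserver';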
I have created an .rss script for report deployment which is working pretty well. But after I checked out all my reports on the report server, mapped that drive from another machine, and ran the .rss script from the command line, I got the following error:
Z:\Automation>rs -i PromotionAutomation.rss -s http://fsaleem/reportserver -v env=Dev -v DBServer=fsaleem
Unhandled Exception: System.TypeInitializationException: The type initializer for "Microsoft.ReportingServices.ScriptHost.ScriptHost" threw an exception. ---> System.Security.SecurityException: Request for the permission of type System.Security.Permissions.FileIOPermission, mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 failed.
   at System.Security.CodeAccessSecurityEngine.CheckHelper(PermissionSet grantedSet, PermissionSet deniedSet, CodeAccessPermission demand, PermissionToken permToken)
   at System.Security.CodeAccessSecurityEngine.Check(PermissionToken permToken, CodeAccessPermission demand, StackCrawlMark& stackMark, Int32 checkFrames, Int32 unrestrictedOverride)
   at System.Security.CodeAccessSecurityEngine.Check(CodeAccessPermission cap, StackCrawlMark& stackMark)
   at System.Security.CodeAccessPermission.Demand()
   at System.Reflection.Assembly.VerifyCodeBaseDiscovery(String codeBase)
   at System.Reflection.Assembly.GetName(Boolean copiedName)
   at System.Reflection.Assembly.GetName()
   at Microsoft.ReportingServices.ScriptHost.ScriptHost..cctor()
The state of the failed permission was:
<IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" PathDiscovery="Z:\Automation\RS.exe"/>
   --- End of inner exception stack trace ---
   at Microsoft.ReportingServices.ScriptHost.ScriptHost.Main(String[] args)
I have spent the last couple of weeks going through about 15-20 different examples/tutorials on running DTS packages from inside a C# Windows app (and even ASP.NET). I must have a mental block, because I can't seem to make it click in my head. I have hundreds of DTS packages saved as .dtsx files. I can double-click on them and run them perfectly. What I need is to be able to run them conditionally from inside a Windows app. Could anyone point me to a tutorial or example that could help me with this? I've even looked at some VB source to see if I could work it out, but I keep running up against a wall. I'd appreciate any help anyone can provide.
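One note: .dtsx files are SSIS packages rather than classic DTS, and the SSIS runtime has a managed API. A hedged C# sketch (the file path is an assumption; add a reference to Microsoft.SqlServer.ManagedDTS.dll):

using Microsoft.SqlServer.Dts.Runtime;

class Runner
{
    static void Main()
    {
        // Load the .dtsx from disk; the second argument is an optional
        // events listener.
        Application app = new Application();
        Package pkg = app.LoadPackage(@"C:\packages\MyPackage.dtsx", null);

        // Execute synchronously and report Success or Failure, so the
        // calling app can branch on the result.
        DTSExecResult result = pkg.Execute();
        System.Console.WriteLine(result);
    }
}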