Can I implement a slowly changing dimension type 2 in SSAS? I am looking at creating an SSAS cube that can pull data directly from an operational OLTP database. The source database does not maintain a history of changes for the dimensions, and I wanted to know whether SSAS will help me keep that history by defining certain dimensions as an SCD. If so, how do I define that rule? All the tutorials I have read only skim over the topic and don't describe the steps or ways to define it in SSAS. Any help would be appreciated.
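For reference, a minimal sketch of what a type 2 dimension table usually looks like when history is kept in the relational layer (table and column names here are made up purely for illustration; SSAS would then simply read from a table shaped like this):

CREATE TABLE dbo.DimCustomer (
    CustomerKey   int IDENTITY(1,1) PRIMARY KEY,  -- surrogate key, one row per version
    CustomerID    int NOT NULL,                   -- business key from the OLTP source
    CustomerName  nvarchar(100) NOT NULL,
    City          nvarchar(50) NULL,
    ValidFrom     datetime NOT NULL,
    ValidTo       datetime NULL,                  -- NULL = current version
    IsCurrent     bit NOT NULL DEFAULT 1
);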
I am trying to understand the way roles/users/permissions fit together in SQL 7.0. I created a user/login FSHUTE, then I created a role called Developers and granted it permissions to all the objects in the database. Then I made FSHUTE part of the Developers role. FSHUTE is also part of the DBO role. There were some problems accessing the objects. If a user belongs to multiple roles, are the permissions cumulative, or do they replace each other? Do I need to give the individual user access to every object? Since I was completely confused, I ran the sp_helprotect procedure and only the permissions associated with FSHUTE were displayed. HELP!!!
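For reference, a minimal sketch of the setup being described (the role and user names are from the post; dbo.SomeTable is a placeholder object, and this is only an illustration of the pattern, not a fix for the specific problem):

-- Create the role and add the user to it
EXEC sp_addrole 'Developers'
EXEC sp_addrolemember 'Developers', 'FSHUTE'

-- Grant permissions to the role rather than to individual users
GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.SomeTable TO Developers

-- Check reported permissions for both the user and the role
EXEC sp_helprotect NULL, 'FSHUTE'
EXEC sp_helprotect NULL, 'Developers'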
This is my first experience with a SAN, and we're about to migrate my SQL server to it. The SAN is a nice product made by HP, worth about $200k, so it's not a cobbled-together system... but I can't find any documentation to indicate whether I should have the system-level databases (master, msdb, and tempdb) running off local storage or the SAN.
The next step is a cluster, which brings up the same question: do I store the system-level databases on local storage or on the shared storage device?
I'm just beginning my R&D on SQL Server replication for a project and have some questions about which paths to pursue.
We have multiple locations that we essentially want all working off the same set of data, but the kicker is that many of the sites become disconnected for minutes or even days at a time due to their remote locations, so we cannot simply deploy a web application with one SQL backend, because remote sites would be unable to work during service disruptions.
Ideally, what I'd like is to have multiple instances of the SQL server/web application (one at each site; they all have their own internal networks), each of which replicates with the main site/all other sites whenever it can (i.e. whenever the internet is available). So if one site goes without internet for a few days, it still has up-to-date data from the last replication and can work off its own SQL server/web application, and any updates it has made, or any updates from other sites it has missed during the disconnected period, would be replicated when the opportunity arises (i.e. when web connectivity comes back).
Is this scenario possible? I am struggling to find the right strategy for implementing something like this. Any guidance would be greatly appreciated.
Not being an ace at T-SQL, or at working with SQL Server in general beyond the basics, I thought I'd ask for some input/ideas/feedback on an implementation question...
Here's the simple scenario: a new user is added to a Users table in a master database. Because of size limitations in SQL Server Express, every user has her own little database. This database gets created at the same time as the new user is inserted into the Users table. The name of this database is stored as a field in the Users table.
How would you go about implementing a scenario like this? Would you implement it at all or solve it in some other way?
I was hoping I could write a stored proc that takes the user data and the new database name as parameters, adds the user to the Users table, and then creates the new database... Is this feasible? (Some preliminary research suggests this might not be as simple as it seems...)
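For reference, a rough sketch of what such a procedure could look like (the Users table columns and procedure name are assumed for illustration; note that CREATE DATABASE cannot take a variable directly, so dynamic SQL is used, and it cannot run inside an explicit transaction, so the insert and the create are not atomic here):

CREATE PROCEDURE dbo.AddUserWithDatabase
    @Username     nvarchar(50),
    @DatabaseName sysname
AS
BEGIN
    SET NOCOUNT ON;

    -- Record the user and the name of the database that will hold her data
    INSERT INTO dbo.Users (Username, DatabaseName)
    VALUES (@Username, @DatabaseName);

    -- CREATE DATABASE does not accept a variable, so build the statement dynamically.
    -- QUOTENAME guards against odd characters in the database name.
    DECLARE @sql nvarchar(500);
    SET @sql = N'CREATE DATABASE ' + QUOTENAME(@DatabaseName);
    EXEC (@sql);
END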
I have implemented full-text search in my web application. I am using a SQL Server 2005 database and the CONTAINS keyword for full-text search. The syntax is given below:

CONTAINS ( { column | * } , '< contains_search_condition >' )
< contains_search_condition > ::=
    { < simple_term > | < prefix_term > | < generation_term > | < proximity_term > | < weighted_term > }
    | { ( < contains_search_condition > ) { AND | AND NOT | OR } < contains_search_condition > [ ...n ] }

My search gives correct results with AND NOT and OR, but it is not working when I use AND. Please give me a solution. It's very urgent for me.
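For comparison, a minimal sketch of how AND is normally written inside the search condition (the table and column names are made up; note that each term gets its own double quotes inside the single-quoted condition):

-- Rows whose Description contains both words
SELECT ProductID, Description
FROM dbo.Products
WHERE CONTAINS(Description, '"apple" AND "juice"');

-- For comparison, AND NOT and OR work the same way
SELECT ProductID, Description
FROM dbo.Products
WHERE CONTAINS(Description, '"apple" AND NOT "juice"');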
Hi there, I want to implement an XNOR condition in my SQL statement.

XNOR truth table:
x y = ?
0 0 = 1
0 1 = 0
1 0 = 0
1 1 = 1

Can anyone help me out with which operator I should use? BTW, what operator do we use in C# or VB.Net? Thank you for any tip. -Ahsan
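A minimal sketch of one way to express XNOR over two 0/1 integer values in T-SQL (x, y and dbo.SomeTable are placeholders taken from the truth table above; this is only an illustration, not the only way):

-- For 0/1 values, XNOR is simply "the two values are equal"
SELECT CASE WHEN x = y THEN 1 ELSE 0 END AS xnor
FROM dbo.SomeTable;

-- Bitwise alternative: invert XOR and keep the lowest bit
SELECT (~(x ^ y)) & 1 AS xnor
FROM dbo.SomeTable;

For two booleans in C#, the equivalent would be a == b (or !(a ^ b)); in VB.NET, Not (a Xor b).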
a) Thanks for all the contributions on this site; they have been very useful. I am trying to implement a security procedure, including auditing, on SQL Server 6.5 and 7, Sybase, and Oracle. Can anyone help?
b) Does anyone know of a site like this that covers Oracle issues (besides Oracle TechNet)?
I am currently looking to buy or build software that enables my firm to:
Run a set of scripts to create a base database: an empty database with a minimal amount of data (i.e. lookup data and seed data), so that when a new development project begins, all that is needed to build the database is to run these scripts.
The challenge we face is managing these scripts. It would be easier from a manual perspective to have only 5-6 scripts, so that we do not need to manually open and execute each script (each representing an object). The problem with this is that when you modify a stored procedure, you modify it in your version control platform (which is single-object based) and then you need to find and replace the object in your script.
What we are looking to do is buy or build an application to manage these files for us. I was wondering if anyone has purchased a solution to accommodate this?
We currently have ApexSQL Diff, which works well for comparing the contents of databases (structurally it does well; it seems to have a problem comparing data, but that is another discussion). What it does not do well is script out an entire database, and if it were to do this, it would put it all in one file.
I have Oracle 9i Personal Edition on my laptop, which I use to learn Oracle/SQL, including creating and querying tables. With this I have been able to go into jobs confident in being able to query enterprise implementations of Oracle in a commercial environment.
Now I would like to learn Microsoft SQL, and I was wondering whether Microsoft's SQL Server 2005 Express will be enough for me to learn MS SQL the way Oracle Personal Edition has been for Oracle.
I'm new to SQL procedures and I need your advice on the subject below. Thanks in advance.
I have a products table that has catid (string 50) and subcatid (string 50). I would like to move the strings to another table (categories) and keep only int values in the products table's catid and subcatid.
Therefore I created a categories table and added 3 fields:
catid int (primary key and autonumber)
catname string 50 (will be equal to the category name in the products table)
parentid int (0 if it is a parent category; otherwise, if it is a subcategory, equal to the parent's catid)
So instead of writing a VB program, I would like to ask your advice on better solutions.
Please help.
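A rough T-SQL sketch of one way to populate such a categories table directly from the existing products table, assuming the column names given above (catid/subcatid in products currently holding the category names as strings); this is only an illustration and would need testing against the real data:

-- Parent categories: one row per distinct catid string, parentid = 0
INSERT INTO dbo.categories (catname, parentid)
SELECT DISTINCT p.catid, 0
FROM dbo.products AS p;

-- Subcategories: one row per distinct (catid, subcatid) pair,
-- with parentid pointing at the parent row inserted above
INSERT INTO dbo.categories (catname, parentid)
SELECT DISTINCT p.subcatid, c.catid
FROM dbo.products AS p
JOIN dbo.categories AS c
  ON c.catname = p.catid AND c.parentid = 0;

-- The products table can then be given new int columns, populated by
-- joining back on the names, before dropping the old string columns.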
Also, I am using SQL Server 2005 Express and I wanted to practice (learn) SQL Server DTS. Is there any way I can download this package and work with it on my system?
I want to implement something like Oracle's VARRAY in SQL Server. How should I go about it? I've done an implementation in Oracle where I have a VARRAY variable that stores a number of INSERT statements, where each INSERT statement is a single array element. I then traverse the VARRAY and execute each INSERT statement inside a loop.
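There is no direct VARRAY equivalent in SQL Server, but a table variable can play a similar role: store one INSERT statement per row and execute them in a loop with dynamic SQL. A rough sketch (dbo.T1 and the statements are purely illustrative):

-- Table variable standing in for the VARRAY of INSERT statements
DECLARE @stmts TABLE (id int IDENTITY(1,1), stmt nvarchar(4000));

INSERT INTO @stmts (stmt) VALUES (N'INSERT INTO dbo.T1 (col1) VALUES (1)');
INSERT INTO @stmts (stmt) VALUES (N'INSERT INTO dbo.T1 (col1) VALUES (2)');

-- Traverse the "array" and execute each element
DECLARE @i int, @max int, @sql nvarchar(4000);
SET @i = 1;
SELECT @max = MAX(id) FROM @stmts;

WHILE @i <= @max
BEGIN
    SELECT @sql = stmt FROM @stmts WHERE id = @i;
    EXEC (@sql);
    SET @i = @i + 1;
END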
Hi all, sorry if it looks a little cluttered! I have these two tables:

CAMPAIGN (
  MEDIACODE varchar(10) Unchecked
  SPECIALOFFERCODE varchar(15) Unchecked
  LAYOUT varchar(10) Unchecked
  HEADERTEXT varchar(100) Checked
  SORTORDER varchar(10) Unchecked
  SORTORDERCOLUMN varchar(50) Unchecked
  WIDTH varchar(50) Checked
)

and

PROMORATEVIEW (
  MEDIACODE varchar(10) Checked
  SPECIALOFFERCODE varchar(15) Checked
  CAMPAIGNCODE varchar(15) Checked
  NUMBEROFISSUES smallint Checked
  RATE decimal(9, 0) Checked
  DESPATCHMETHODCODE varchar(50) Checked
)

CAMPAIGN has only one row:
(CE DG8398 GRID ASC NUMBEROFISSUES NULL)

PROMORATEVIEW has too many; here is a very small range of records:

MEDIACODE SPECIALOFFERCODE CAMPAIGNCODE NUMBEROFISSUES RATE DESPATCHMETHODCODE
...
CE CER1R02 CER1 12 429 W
CE CER1R03 CER1 24 829 W
CE CER1R03 CER1 12 429 W
CE DG8398 DG8398 12 411 F
CE DG8398 DG8398 12 405 1
CE DG8399 DG8399 12 399 W
CE DG8399 DG8399 12 735 1
CE DG8399 DG8399 12 756 A
CE DG8400 DG8400 12 756 A
CE DG8400 DG8400 12 396 W
...

Now the question: why do these two OUTER JOINs return the same number of rows in my SQL 2000 & 2005 Express?

1.
SELECT DISTINCT P.MEDIACODE, P.SPECIALOFFERCODE, CD.MEDIACODE
FROM CAMPAIGN AS CD RIGHT OUTER JOIN
PROMORATEVIEW AS P ON P.MEDIACODE = CD.MEDIACODE AND P.SPECIALOFFERCODE = CD.SPECIALOFFERCODE

2.
SELECT DISTINCT P.MEDIACODE, P.SPECIALOFFERCODE, CD.MEDIACODE
FROM CAMPAIGN AS CD RIGHT OUTER JOIN
PROMORATEVIEW AS P ON P.MEDIACODE = CD.MEDIACODE AND P.SPECIALOFFERCODE = CD.SPECIALOFFERCODE AND CD.SORTORDER IS NULL

If you are still with me, here is what I am trying to do: right outer join CAMPAIGN with PROMORATEVIEW, and if it's not overridden in CAMPAIGN (there is no record for MEDIACODE, SPECIALOFFERCODE in CAMPAIGN), I will use that view to override the default values. I suspect that the last condition has no influence on the right outer join. Another workaround for me now is to create a view that has these values and then filter it with VIEW.SORTORDER IS NULL; that does the job, but I am trying to get rid of this extra view.

Thanks for your time, avarair
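For context: with a RIGHT OUTER JOIN, every row of PROMORATEVIEW is preserved regardless of any extra conditions on CAMPAIGN in the ON clause; those conditions only decide whether the CAMPAIGN columns come back as values or as NULL, which is why both queries return the same row count. A sketch (illustrative only) of expressing "only the rows with no overriding CAMPAIGN record" by testing in the WHERE clause instead:

SELECT DISTINCT P.MEDIACODE, P.SPECIALOFFERCODE, CD.MEDIACODE
FROM CAMPAIGN AS CD
RIGHT OUTER JOIN PROMORATEVIEW AS P
  ON  P.MEDIACODE = CD.MEDIACODE
  AND P.SPECIALOFFERCODE = CD.SPECIALOFFERCODE
WHERE CD.MEDIACODE IS NULL   -- no matching row in CAMPAIGN, i.e. not overridden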
Hi, I would like to introduce a stop word list in my search engine tool (web page). Could anyone please provide me the steps required for implementing this? This is basically to filter the stop words used for searching in my search page. The technologies used are ASP.NET 2.0 with VB.NET, VS.NET 2005, and SQL Server 2000 as the backend. (Note: I already have the stop word list.) Thanks.
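Since the stop word list already exists, one simple option is to strip the stop words on the SQL side before building the search condition. A sketch, assuming a hypothetical StopWords table the list has been loaded into and a temp table of search terms (splitting the search phrase into terms can be done in VB.NET before the call, or with a small UDF in SQL Server 2000):

-- Hypothetical table holding the stop word list
CREATE TABLE dbo.StopWords (Word varchar(50) NOT NULL PRIMARY KEY);

-- The search terms, one row each
CREATE TABLE #SearchTerms (Term varchar(50) NOT NULL);

-- Remove the stop words; whatever remains is passed on to the full-text query
DELETE FROM #SearchTerms
WHERE Term IN (SELECT Word FROM dbo.StopWords);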
We would like to share the batch job functionality before going into the difficulties we are facing in SSIS package design and implementation.
BatchFile Process Steps:
1. A flat file comes from the upstream server (a DB2 mainframe server) to the Linux box, on which the jobs are scheduled using cron.
2. The flat file is validated to check that it has header and trailer information. The flat file name itself (e.g. APPLWIC00077.dat) carries its sequence number, i.e. 00077. This number is checked against the flat file header information, which also gives the sequence number. If both numbers match, the load is initiated. The same kind of check happens with the trailer. If anything goes wrong, mail is sent and the load does not happen.
3. The job processes the incoming flat file and loads it into a SQL Server staging table.
4. If loading is successful, some SQL Server procedures are called to validate the staging table data, apply filters, and then move the data from the staging table to the target table.
5. All processes/steps are tracked; if anything goes wrong, mail is sent to the concerned person.
Utilities Involved:
1. SQL Loader to load the flat file data into the SQL Server 2005 database.
2. SQLPLUS to execute the PL/SQL procedures.
3. SENDMAIL or MAILX to send mails to external recipients.
Additional Information:
The SQL Loader control file splits the flat file data by position in order to load it into the particular columns of the staging table. There are no column delimiters. Data is taken position-wise (e.g. column1 -> sequence number, positions 1-3 -> column2, positions 4-8 -> column3) and checked against a condition by which the particular staging table is selected.
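To make the position-wise split concrete, a rough T-SQL sketch of an equivalent cut-up (the table names, positions, and the record-type check are all made up for illustration): each line of the file is loaded into a single wide column of a raw staging table and then carved up with SUBSTRING, mirroring what the SQL Loader control file does.

-- Raw staging table: one row per line of the flat file
CREATE TABLE dbo.StagingRaw (RawLine varchar(500) NOT NULL);

-- Position-wise split into the real staging table
INSERT INTO dbo.StagingTable (SequenceNo, Column2, Column3)
SELECT
    SUBSTRING(RawLine, 1, 3),   -- positions 1-3:  sequence number
    SUBSTRING(RawLine, 4, 5),   -- positions 4-8:  second field
    SUBSTRING(RawLine, 9, 10)   -- positions 9-18: third field (illustrative)
FROM dbo.StagingRaw
WHERE SUBSTRING(RawLine, 1, 1) <> 'H'   -- e.g. skip header/trailer records,
  AND SUBSTRING(RawLine, 1, 1) <> 'T';  -- assuming such a record-type marker exists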
Difficulties we face in the migration with regard to SSIS package design and implementation:
1. We can receive the flat file using the FTP Task, but we don't know how to open and validate the flat file data as described in step 2.
2. We can load a sample flat file that is properly delimited, e.g. with {CR}{LF}, {CR}, {LF}, semicolon, comma, colon, tab, or vertical bar.
If the flat file is messed up (not cleanly delimited) and the data needs to be processed position-wise (the scenario mentioned in Additional Information), we don't know how to implement this in an SSIS package with the limited delimiter options given.
FYI: the flat file columns are not fixed-width or ragged type.
3. We need logging information such as how many rows were inserted or discarded, and similar detail. How can we log this information using SSIS? This will help us send a detailed mail to the application handler. It should be automated within SSIS.
4. How can we execute procedures using the available SSIS tasks (executing many procedures residing in SQL Server 2005 using a single .sql file as input to the SSIS package)?
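On point 4, one common approach (a sketch under assumed names) is to keep a single wrapper .sql file that calls the procedures in order, and point an Execute SQL Task at it: the task can take a File Connection as its SQL source instead of direct input. The procedure names below are placeholders.

-- Contents of a single wrapper file, e.g. RunValidation.sql (name is made up),
-- which an Execute SQL Task can run via a File Connection as its SQL source
EXEC dbo.usp_ValidateStaging;
EXEC dbo.usp_ApplyFilters;
EXEC dbo.usp_MoveStagingToTarget;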
Real World: Backup Strategy and implementation, how? A quote:
"Real World:
Whether you back up to tape or disk drive, you should use the tape rotation technique. Create multiple sets, and then write to these sets on a rotating basis. With a disk drive, for example, you could create these back files on different network drives and use them as follows:
//servername/data1drive/backups/AWorks_Set1.bak. Used in week 1, 3, 5 and so on for full and differential backups.
//servername/data2drive/backups/AWorks_Set2.bak. Used in week 2, 4, 6 and so on for full and differential backups.
//servername/data3drive/backups/AWorks_Set3.bak. Used in the first week of the month for full and differential backups.
//servername/data4drive/backups/AWorks_Set4.bak. Used in the first week of the quarter for full and differential backups.
Do not forget that each time you start a new rotation on a tape set, you should overwrite the existing media. For example, you would append all backups in week 1. Then, when starting the next rotation in week 3, you would overwrite the existing media for the first backup and then append the remaining backups for the week."
I understand these concepts, however in 'the real world' how do you go about implementing these jobs in SQL2K and how on earth do you schedule the tasks to overwrite, for example, week 1, when on week 3's rotation.
Could I have real world examples or scripts for the jobs that would carry out this task? It appears that whatever course you do, it does not fully cover the above, and I have only worked on my own and never with a DBA, so I have never seen this implemented in any environment.
I would like full details on this please, as I need to get my head around it.
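For reference, the overwrite/append behaviour the quote describes maps onto the INIT/NOINIT options of BACKUP DATABASE, so the scheduled jobs differ only in which option and which backup file they use. A rough sketch against a hypothetical AWorks database, with the quote's paths written as UNC paths (names and paths are illustrative):

-- First full backup of a rotation: overwrite (INIT) the existing backup set
BACKUP DATABASE AWorks
TO DISK = '\\servername\data1drive\backups\AWorks_Set1.bak'
WITH INIT, NAME = 'AWorks full - start of rotation';

-- Subsequent backups during the same rotation: append (NOINIT)
BACKUP DATABASE AWorks
TO DISK = '\\servername\data1drive\backups\AWorks_Set1.bak'
WITH DIFFERENTIAL, NOINIT, NAME = 'AWorks differential';

-- Week 2 would use AWorks_Set2.bak in the same way, week 3 comes back to
-- AWorks_Set1.bak with INIT again, and so on. In SQL 2000 these statements
-- can simply sit in separate scheduled SQL Server Agent jobs.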
Hi! I am evaluating an architecture for one of our projects... a SQL database containing the data (backend) and a second database containing the development code (frontend), linked to the backend with synonyms.
This makes it possible to upgrade the code without touching the data, or to change the backend / use a different set of data at will.
Everything was going fine; the behavior was expected to be EXACTLY the same with synonyms as with real tables. But I came across a problem. Let's say we have a synonym (frontend) dbo.TABLE1 that points to a table (backend) with an IDENTITY column.
I have an sp (frontend) with the following code: INSERT INTO dbo.TABLE1... SELECT @SCOPE_IDENTITY = SCOPE_IDENTITY()
Well in that case, @SCOPE_IDENTITY is NULL!
Has anyone ever faced this problem? Should I use another function to return the last ID inserted? Or is the backend/frontend architecture completely flawed? I also heard there's a way, by creating the tables and the code on different filegroups, to restore only the tables or the code...
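For reference, a sketch of the two pieces involved and of one workaround that avoids the identity functions altogether: the OUTPUT clause returns the generated value directly from the INSERT. The database name BackendDb, the column names Id and SomeColumn, and the inserted value are all made up for illustration.

-- Frontend: the synonym points at the real table in the backend database
CREATE SYNONYM dbo.TABLE1 FOR BackendDb.dbo.TABLE1;

-- Workaround in the frontend sp: capture the identity with OUTPUT
DECLARE @NewIds TABLE (Id int);

INSERT INTO dbo.TABLE1 (SomeColumn)
OUTPUT INSERTED.Id INTO @NewIds (Id)
VALUES ('some value');

SELECT Id FROM @NewIds;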
Hi, I am facing a problem with Reporting Services 2005 custom security (forms authentication). I have implemented custom security (using the sample code on the MSDN site) for Reporting Services 2005. When I try to view a report from my web application (within an iframe) it redirects me to the Logon.aspx page instead of showing the result. If both Reporting Services 2005 and the web application are on the same machine there are no issues; it works fine and the session is established. If the two (Reporting Services 2005 and the web application) are on different machines then the above-mentioned error occurs, i.e. it redirects to the Logon.aspx page. Please help.
I want to prepare for the TS: Microsoft SQL Server 2005 - Implementation and Maintenance exam. I want to know how I can tackle it in the proper way. Can somebody please give me tips & tricks or a study guide?
Hi list, I'm trying to set up an implementation of Project REAL. It works like this: create two system environment variables called REAL_Root_Dir and REAL_Configuration with the values given below. Click on Start -> Control Panel -> System, go to the Advanced panel, click the Environment Variables button, then New in the System variables box.
If the Project REAL files were installed at C:\Microsoft Project REAL, then the variable values will be:
The package OLE DB connections work like this:

First, read the environment variable to get the location of the config file. Next, read the config file to get the connection string for the configuration database:

<?xml version="1.0"?>
<DTSConfiguration>
  <Configuration ConfiguredType="Property" Path="Package.Connections[SQL - Configuration].Properties[ConnectionString]" ValueType="String">
    <ConfiguredValue>Data Source=(local);Initial Catalog=DataWarehouseABC;Provider=SQLNCLI.1;Integrated Security=SSPI;</ConfiguredValue>
  </Configuration>
</DTSConfiguration>

Next, read the config database to get the connection strings for the source and destination databases.
The destination database is called "DataWarehouseABC"; the source database is called "SnapshotABC".
The source database OLE DB connection works 100%; however, with the destination OLE DB connection we get the error below. PS: Both source and destination databases are on the same development machine; however, both databases were restored from .bak files taken from another production machine.
Error 1 Error loading LoadGroup_Daily.dtsx: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Login failed for user 'xxxxxx'.". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Cannot open database "DataWarehouseABC" requested by the login. The login failed.".
Any ideas on how one OLE DB connection in this package can run into this error?
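One possibility worth checking, since both databases were restored from another machine's .bak files, is an orphaned database user: the login exists on the new server but is no longer mapped to the user inside the restored database. A sketch of the usual check and fix ('SomeUser' is only a placeholder for the account named in the error message):

-- Run inside the restored DataWarehouseABC database: list orphaned users
EXEC sp_change_users_login 'Report';

-- Re-link an orphaned user to the server login of the same name
EXEC sp_change_users_login 'Auto_Fix', 'SomeUser';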
Hi, can I use a NAS (Network Attached Storage) server to store the database file (readable and writable) and have several other MS SQL database servers use the same database file on the NAS, to achieve the objective of high availability?
If so, how can I set it up in MS SQL Server, or does it require 3rd party software to set it up?
Hi, in SSAS, is it possible to do a lookup-table style mapping? For instance: if the purchase goes above 500 then the rating is 5; if above 400 then the rating is 4, and so on. The purchase values like 500 and 400 are in one dimension table, while the ratings are in another dimension table.
Is it possible to do a lookup between these dimension tables to provide the rating?
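On the relational side, this kind of banded lookup is usually expressed as a range join between the two tables, which can then be materialised into the dimension or fact before it reaches SSAS. A rough sketch with made-up table and column names (RatingDim is assumed to hold one row per rating with its lower and upper purchase bounds):

-- Hypothetical banding table: RatingDim(Rating int, MinPurchase money, MaxPurchase money)
SELECT p.PurchaseAmount,
       r.Rating
FROM dbo.PurchaseDim AS p
JOIN dbo.RatingDim   AS r
  ON p.PurchaseAmount >= r.MinPurchase
 AND p.PurchaseAmount <  r.MaxPurchase;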
One of our SSAS 2005 environments gives the following error in Management Studio if the hostname is specified. If the IP address is used SSMS is able to connect. Any help would be appreciated.
at Microsoft.AnalysisServices.AdomdClient.XmlaClient.Authenticate(ConnectionInfo connectionInfo, DateTime startTime)
at Microsoft.AnalysisServices.AdomdClient.XmlaClient.OpenTcpConnection(ConnectionInfo connectionInfo)
at Microsoft.AnalysisServices.AdomdClient.XmlaClient.Connect(ConnectionInfo connectionInfo, Boolean beginSession)
at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.XmlaClientProvider.Microsoft.AnalysisServices.AdomdClient.AdomdConnection.IXmlaClientProviderEx.ConnectXmla()
at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.ConnectToXMLA(Boolean createSession, Boolean isHTTP)
at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.Open()
at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.ObjectExplorer.ValidateConnection(UIConnectionInfo ci, IServerType server)
at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.ObjectExplorer.ConnectToServer(Object connectionInfo, IDbConnection liveConnection, Boolean validateConnection)
===================================
The target principal name is incorrect (Microsoft.AnalysisServices.AdomdClient)
------------------------------ Program Location:
at Microsoft.AnalysisServices.AdomdClient.NTAuthentication.GetOutgoingBlob(Byte[] incomingBlob, Boolean& handshakeComplete)
at Microsoft.AnalysisServices.AdomdClient.XmlaClient.Authenticate(ConnectionInfo connectionInfo, DateTime startTime)
Can someone please tell me the difference between these two pieces of software (SSIS and SSAS)? I am looking for a *concise* definition of both packages, what they do, what they don't do, and how they work together. We use SQL Server and are working to create a data warehouse from our production db.
We use SSAS for a few OLAP cubes, but I have read that it is recommended to first warehouse the data and then build cubes on top of the warehouse. I assume you use SSIS to build the warehouse?
Any links would be greatly appreciated. Thanks in advance for your helpful advice.
Last time I worked with SSAS I built a cube. Because I'm not very happy with the front end (Excel 2003 or Excel 2007), I thought I would build my own report with SSRS. Now there is something I don't understand: I built a hierarchy in SSAS that I want to use in SSRS. Is there a way to use it without any extra features? Do I have to use parameters or something else? In my opinion it doesn't make sense to build the hierarchy anew, because it already exists in SSAS.