I am new to this forum so I am not sure if this is posted in the appropriate group.
Okay - this may be a silly question - what is a crosslink table? Is this easily achieved in the MySQL Query Browser? Here are some more details on what I am trying to achieve. I have seen it done but am not sure where to start....
The products correspond as follows:
- To be eligible for 'product a' the buyer must meet two qualifications:
1. Specify the 'model' of their engine.
2. Specify the 'year' of their engine.
Is there a way to structure my tables so that if a person specifies 'a specific year' and 'specific model' the correct product A, B, or C will pull up for them?
Do I create separate tables named 'Engine Model' and 'Engine Year' -- and then link to the product table with foreign keys? If so, how do I specify more than one grouping? For example, product A is compatible with 2004-2007 engines and 10 different engine models.
I am new to MySQL -- but a quick study -- hopefully this is easy to implement.
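A "crosslink" table is usually just a junction table: one row per (product, model, year) combination the product fits, with foreign keys to the other tables. A minimal sketch, assuming made-up table and column names rather than anything from the actual schema:

CREATE TABLE product (
    product_id   INT PRIMARY KEY,
    product_name VARCHAR(50)
);

CREATE TABLE engine_model (
    model_id   INT PRIMARY KEY,
    model_name VARCHAR(50)
);

-- The crosslink/junction table: one row per compatible combination.
-- Product A covering 2004-2007 and 10 models is simply 40 rows here.
CREATE TABLE product_compatibility (
    product_id  INT NOT NULL,
    model_id    INT NOT NULL,
    engine_year SMALLINT NOT NULL,
    PRIMARY KEY (product_id, model_id, engine_year),
    FOREIGN KEY (product_id) REFERENCES product (product_id),
    FOREIGN KEY (model_id) REFERENCES engine_model (model_id)
);

-- Find the product(s) for a buyer's model and year:
SELECT p.product_name
FROM product p
JOIN product_compatibility pc ON pc.product_id = p.product_id
WHERE pc.model_id = 3
  AND pc.engine_year = 2005;

Since the year is just an integer here, a range like 2004-2007 can either be expanded into rows (as above) or stored as year_from/year_to columns and matched with BETWEEN.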
Hello, I am doing my final project at my university and I have chosen to build a beta of my university portal. I am facing a problem because I don't know how to link a student with his specific data when he logs in to the site; for example, I want to show his specific grades when he logs in, or anything else that is related to him. This is what I am using to show the grades, with the use of a data grid:

SELECT Student.StudentiId, Student.StudentName, Student.StudentSurname,
       Course.Course, Exams.Datewritten, Exam.Exam, Exams.Grade
FROM Student
INNER JOIN Exams ON Student.StudentiId = Exams.Studentid
INNER JOIN Course ON Exams.Courseid = Course.Courseid
INNER JOIN Exam ON Exams.Examid = Exam.Examid;

I am very confused since I don't know how to do this. Should I relate the asp.net membership database with mine? P.S. My vb.net skills are low. Here is my database schema.
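One common approach - a sketch, not a definitive answer - is to add a column to Student holding the ASP.NET membership user name and filter the grades query by the logged-in user. The UserName column and @UserName parameter below are assumptions; they are not in the posted schema:

SELECT Student.StudentiId, Student.StudentName, Student.StudentSurname,
       Course.Course, Exams.Datewritten, Exam.Exam, Exams.Grade
FROM Student
INNER JOIN Exams ON Student.StudentiId = Exams.Studentid
INNER JOIN Course ON Exams.Courseid = Course.Courseid
INNER JOIN Exam ON Exams.Examid = Exam.Examid
WHERE Student.UserName = @UserName; -- assumed column mapping the membership login to a student

The page would pass User.Identity.Name in as @UserName, so each student only ever sees the rows joined to his own StudentiId.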
- I've got 3 stored procedures which return data that shall be shown on a report
- I need one recordset as a datasource (or can I use more than one here?)
Problem:
The data was unrelated before but now needs to be on the same report; that's why, until now, I have had 3 different, pretty complex stored procedures, each returning a recordset.
I could of course copy and paste all 3 into 1 new stored proc, but when one changes I would have to change the newly created one too (which might get messy when doing a lot of maintenance and changes on the others).
Can I create a stored procedure that simply integrates those 3 into one recordset? Something like this (in pseudo-code):
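A sketch of that idea (procedure and column names are placeholders, and it assumes the three procedures return a compatible column layout):

CREATE PROCEDURE dbo.usp_CombinedReport
AS
BEGIN
    SET NOCOUNT ON;

    -- Shape must match what the three procedures return.
    CREATE TABLE #Combined (Col1 INT, Col2 NVARCHAR(100), Col3 MONEY);

    INSERT INTO #Combined EXEC dbo.usp_Report1;  -- placeholder proc names
    INSERT INTO #Combined EXEC dbo.usp_Report2;
    INSERT INTO #Combined EXEC dbo.usp_Report3;

    SELECT * FROM #Combined;  -- one recordset for the report
END

If the result sets differ, give #Combined the superset of columns plus a discriminator column, and have each INSERT list only the columns that procedure fills. The three originals stay untouched, so maintenance stays clean; the one caveat is that INSERT ... EXEC cannot be nested, so the wrapped procedures must not themselves use INSERT ... EXEC.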
I have a new SQL 2005 (SP2) Reporting Services server to which I've just upgraded and deployed some SSRS 2000 reports.
I have a subreport that contains a matrix with two groups. The report data seems to be inexplicably repeating the data for the first row in the group for all rows in the group. Example:
ID1 ID2 DisplayData
1 1 A
1 2 B
1 3 C
2 1 A
2 2 B
2 3 C
Parent group is on ID1, child group is on ID2, report would show:
1 1 A
2 A
3 A
2 1 A
2 A
3 A
Is this a matrix bug in 2005 SP2, or do I need to do something differently? I can no longer pull a comparison version from an SSRS 2000 server to verify, but I believe it was working as expected before...
I have a requirement to display the total of a Group after subtracting a specific value from the same Group.
Example: Say the below data is grouped on a particular column.
Group   Jan-15   Feb-15   Mar-15
A       10       20       30
B       5        10       25
C       1        2        3
D       5        10       15
Total   11       22       33
Formula is: Sum(A+C+D) - Sum(B)
What is the best way to group the above scenario at the SSRS level and display the result as shown above? I am able to display all the values except the last total row, where I am displaying the complete total, i.e. 21 42 73.
How do I dynamically subtract the values for row B, which is one of the group values?
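One hedged option is to do it in the dataset query with conditional aggregation rather than in the matrix itself (the table and column names here - SourceData, GroupName, MonthName, Value - are made up):

SELECT MonthName,
       SUM(CASE WHEN GroupName = 'B' THEN -Value ELSE Value END) AS AdjustedTotal
FROM SourceData
GROUP BY MonthName;

Counting B's values as negative makes the sum equal Sum(A+C+D) - Sum(B) in one pass. The same trick should also work at the SSRS level as a total-row expression, e.g. Sum(IIf(Fields!GroupName.Value = "B", -Fields!Value.Value, Fields!Value.Value)).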
We have a "main" SQL 2014 server who imports XML files using SSIS in a datacenter. In remote sites (which are warehouses), there is an instance of SQL 2014 Express. A merge replication is setup, as every operations done on each site must be "forwared" to the main database, as some XML files are generated as output for an ERP system.
Now, the merge replication replicate all the data to the server on each sites. But a specific site don't need the data of every other sites, only the data relevant to itself (which is the warehouse code). Is there a way to replicate only the data relevant to each individual sites to the subscribers? Or is there a better way than replication to accomplish this?
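Merge replication does support this directly via parameterized row filters, so each subscriber only receives its own warehouse's rows. A sketch (the publication, article, and column names are placeholders):

-- Each warehouse subscriber identifies itself (e.g. via its HostName
-- subscription property), and the filter keeps only its rows.
EXEC sp_addmergearticle
    @publication = N'WarehousePub',            -- placeholder
    @article = N'StockMovements',              -- placeholder
    @source_object = N'StockMovements',
    @subset_filterclause = N'WarehouseCode = HOST_NAME()';

Tables that don't carry the warehouse code themselves can follow the filtered parent through join filters (sp_addmergefilter).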
I am currently working on designing a database for a bank as a school project for my database class. We have to draw up an entity relationship diagram, Sql tables, database size estimate etc. I am currently working on the security portion of the project. I need to list the groups that have access to my application and use a grid format to show access to specific tables.
Role              Loans   Payments   Transactions   Accounts   Customer   Employee
Database Admin    SUID    SUID       SUID           SUID       SUID       SUID
Branch Manager    SUI     SUI        SUI            SUI        SUI        SUI
Internal Auditor  S       S          S              S          S          S
Loan Officer      SUID    SUI        SUI            S          S
Tellers           S       S          S              S          SU
Customers         U
I am very early on in developing a website to track issues with projects which is tied to a SQL database. I have my Projects Table, my Users Table, and am creating a third table to track issues. I'm wondering what is the best way to assign specific users to specific data/projects. The user should only be able to view & update the projects assigned to him. He should not be able to see other projects. What is the best way to assign projects/data to the users to make sure they are only viewing their data?
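The usual pattern is a junction table between users and projects, with every project query filtered through it. A sketch with assumed names (ProjectAssignments, UserId, ProjectId are mine, not from your schema):

CREATE TABLE ProjectAssignments (
    UserId    INT NOT NULL REFERENCES Users (UserId),
    ProjectId INT NOT NULL REFERENCES Projects (ProjectId),
    PRIMARY KEY (UserId, ProjectId)
);

-- Only the projects assigned to the logged-in user:
SELECT p.*
FROM Projects p
INNER JOIN ProjectAssignments pa ON pa.ProjectId = p.ProjectId
WHERE pa.UserId = @UserId;

If the site only ever reads projects through that join (or through a view/stored procedure that applies it), users cannot see projects that aren't theirs.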
"pRecordSet" is an ADO recordset. The database column "MyColumn" is of type "decimal(19,10)".
The most important question for me is whether the regional settings of the database server or the regional settings of the client PC are considered during the conversion from the string to the decimal value. For example, in standard French regional settings the "." would not be recognized as a decimal separator.
I am also wondering whether the language of the database instance in which this data is saved, or any other settings of that instance, are considered during this conversion.
So my general question is: does anybody know exactly what rules apply during the above-mentioned conversion?
I want to transfer a recordset (derived from an Oracle datasource) into a SQL 2000 Server table using VBScript in an ActiveX Script Task in a DTS package.
Currently I use the OPENROWSET (and OPENQUERY) method; however, the length of the query text seems to be limited to 8192 bytes. At this moment I have reached that limit, and I am looking for a solution.
Further info: the query returns large recordsets of 100,000s of records with 100s of columns. Because of its complex structure, the standard data transformation in SQL 2000 is not an option.
Using a substitute like:
varRecords = rst1.GetRows
For intI = 0 To UBound(varRecords, 2)
    rst2.AddNew
    For intJ = 0 To UBound(varRecords, 1)
        rst2(intJ) = varRecords(intJ, intI)
    Next
    rst2.Update  ' commit each row before the next AddNew
Next
I have two DBs with the same table names and fields but different data. My connection is right, as well as my query. How come my program can't access the data from one table while there's no problem with the other one? My recordset is empty even though the record I'm searching for exists. Its rs.RecordCount is -1. Is that database corrupted?
I hope someone can help me with this problem. I created a procedure that generates a dynamic SQL statement and executes the dynamic SQL at the end of the procedure. When I run it in Server Management Studio, everything is OK and it returns the records.
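For reference, a minimal sketch of the generate-and-execute pattern with sp_executesql (the table and parameter names are made up):

CREATE PROCEDURE dbo.usp_DynamicSearch
    @TableName sysname,
    @Id INT
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX);
    -- QUOTENAME guards the identifier; the value is passed as a real parameter.
    SET @sql = N'SELECT * FROM ' + QUOTENAME(@TableName) + N' WHERE Id = @Id;';
    EXEC sp_executesql @sql, N'@Id INT', @Id = @Id;
END

Executing through sp_executesql with typed parameters, instead of concatenating values into the string, also keeps the behavior identical whether the procedure is called from Management Studio or from application code.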
Is there a way to return just the raw data from a report? I do not mean XML rendering, which marries the dataset information with the layout information. I just want the raw data from the dataset. If the XML returned were just the result of the recordset from the de-parameterized query, that would be great!
I am looking into writing a custom rendering extension, but I am not finding much outside of the same MS example circulating. It seems to only deal with layout data and not the raw dataset data.
Is there a way to get to just the raw data and ignore layout? Is the dataset in the SSRS TempDB that you can query after a report has run? If anyone knows of any way to get to the raw data, that would be fantastic.
I have been using a recordset destination in a data flow where I need to perform some complex manipulation on a dataset, including combining some information from a web service and updating records that already exist, vs. inserting them.
I have a script task that modifies the dataset as needed, and then saves it back to the variable it came from.
However, when it comes time to write the data to the database, I couldn't find an appropriate tool - there's no "recordset source" object in the data flow task, and use of a "for each" loop with a sql call to a stored proc takes 20 minutes for a few thousand rows.
The best way I could find around this was as follows:
1. Call the .NET ".GetXML" method on the dataset and put the resulting XML data into a string variable.
2. Generate an XSD for that XML (it comes out like <NewDataSet><Table1>...).
3. Use an XML source in the data flow task.

This works, and the same data insert that took 20 minutes via the loop / stored procs now takes under 10 seconds.
It seems horribly inefficient to have to do this - there should be a way to just dump my dataset back into a table natively without all that extra stuff.
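If staying in T-SQL is acceptable, one hedged alternative to the XSD/XML-source detour is to hand the dataset's XML to a stored procedure and shred it with nodes() in a single set-based insert (the target table and element names below are assumptions based on the <NewDataSet><Table1>... shape mentioned above):

CREATE PROCEDURE dbo.usp_InsertFromDataSetXml
    @xml XML
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.TargetTable (Col1, Col2)   -- assumed target table
    SELECT t.c.value('(Col1)[1]', 'INT'),
           t.c.value('(Col2)[1]', 'NVARCHAR(100)')
    FROM @xml.nodes('/NewDataSet/Table1') AS t(c);
END

The script task would still call .GetXml, but the XSD generation and XML source drop out: the XML goes straight in as a single parameter on an Execute SQL task.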
I am having a problem writing a large amount of ntext data to a field within an ADO recordset. I am using the AppendChunk method but it does not seem to work. The SQL 7 field will hold the data; it's only about 60K.
Hi there, I need to develop a module and am wondering what would be the best implementation...
I get a list of files from a text file and store it in a Recordset Destination (object variable "CUST_INV_LIST").
I need to check that all the files in a directory are in the list. I can loop through the files in the directory using a ForEach container, but how do I check if it is in the CUST_INV_LIST recordset?
I thought about using another ForEach container to loop through the recordset, check if the physical file is equal to that row, if so set a flag, ... but it's neither elegant nor effective.
Another option would be to use a Script Task to search for the physical file name in the recordset. I tried with the Data.OleDb.OleDbDataAdapter, etc., but I get an error when I declare Data.DataTable. Anyway, that method of accessing a recordset is not recommended and seems complicated.
I have an update query in an OLE DB Destination (access mode: SQL Command) that updates a table with an INNER JOIN from another table in another database. I'm getting the error, "No disconnected recordset available for the specified SQL statement". Does this have to do with the SQL query trying to access the other database? How can I get around this error?
I have created an Access2003 project (existing data) that links to external data. First I connected to a SQL Server 2000 database. Success. Then I tried to set up a Transact SQL data connection to a legacy MDW-secured Access97 database. (A third-party VB6 application goes against it, and we don't have the source code, so we cannot upgrade it.)
The Transact SQL link tests OK but I cannot select any of the tables or queries from the list presented. However, with the same credentials, I can use these same objects in Excel 2003.
When setting up the link in Access2003, I specify the JET 4.0 OLE DB Provider, I enter the MDW file on the All tab and a username and a password on the Connection tab where I browse to the MDB file, and I specify Shared Deny None on the Advanced tab. When I test the connection, it tests OK ("Test connection succeeded"). Yet on the "Select the Database and Table/Cube which contains the data you want" dialog, "(Default)" appears in the grayed-out dropdown. Then, beneath that dropdown, there is a grid with Name and Description columns. The grid contains query names but is not enabled; the list of queries in it is grayed out. Neither of the scrollbars works.
BUT... if I use the SAME username and password in Excel 2003, and specify the same MDW, there is no problem working with these same database objects in the legacy Access97 database. What is different about the wizard in Excel that allows it to succeed, while the wizard in Access fails here? In Excel, the list of available providers says Microsoft Access Driver, not JET 4.0 OLE DB Provider.
When trying to link to an SQL table in Access 2003, the software appears to be malfunctioning.
The sequence of events is File - Get External Data - Link Tables - Files of Type: ODBC Databases().
The Problem: On two of my computers, the select data source window does not pop up, preventing me from linking to any ODBC data source.
Observations: This function has worked normally in the recent past and works on other computers running Access 2003. One difference between the working and non-working computers is Norton Antivirus 2006 (a recent upgrade).
Has anyone experienced anything like this? What's going on?
A Recordset obtained from a Command object executing a stored procedure doesn't show the first record when I associate it with a data report. (VB6 - SQL7) (ADO 2.1)
If I execute the stored procedure directly from Query Analyzer, I obtain the right resultset.
I'm running the following SQL query from LabVIEW, a graphical programming language, using its built-in capabilities for database connectivity:
DECLARE @currentID int
SET @currentID = (SELECT MIN(ExperimentID) FROM Jobs_t WHERE JobStatus = 'ToRun');
UPDATE [dbo].[Jobs_t]
SET [JobStatus] = 'Pending'
WHERE ExperimentID = @currentID;
SELECT @currentID AS result

<main.img>
This is the code analogous to main() in a C-like language. The first block, which has the "Connection Information" wire going into it, opens a .udl file and creates an ADO.NET _Connection reference, which is later used to invoke methods for the query.
<execute query.img>
This is the inside of the second block, the one with "EXE" and the pink wire going into it. The boxes with the gray border operate much like "switch" statements. The wire going into the "?" terminal on these boxes determines which case gets executed. The yellow boxes with white rectangles dropping down are invoke nodes and property nodes; they accept a reference to an object and allow you to invoke methods and read/write properties of that object. You can see the _Recordset object here as well.

<fetch recordset.img>
Here's the next block to be executed, the one whose icon reads "FETCH ALL". We see that the first thing to execute on the far left grabs some properties of the recordset, and returns them in a "struct" (the pink wire that goes into the box that reads "state"). This is where the code fails. The recordset opened in the previous VI (virtual instrument) has a status of "closed", and the purple variant (seen under "Read all the data available") comes back empty.
The rest of the code is fairly irrelevant, as it's just converting the received variant into usable data, and freeing the recordset reference opened previously. My question is, why would the status from the query of the recordset be "closed"? I realize that recordsets are "closed" when the query returns no rows, but executing that query in SSMS returns good data. Also, executing the LabVIEW code does the UPDATE in the query, so I know that's not broken either.
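One likely culprit (an educated guess, not a certainty): the UPDATE emits an "N rows affected" message, and ADO exposes each such count as an extra, closed recordset ahead of the SELECT's. SSMS hides this, but a wrapper that only looks at the first recordset sees "closed". Two hedged fixes: call NextRecordset until an open one appears, or suppress the counts in the batch itself:

-- Suppress row-count messages so the SELECT is the only result returned
SET NOCOUNT ON;
DECLARE @currentID int;
SET @currentID = (SELECT MIN(ExperimentID) FROM Jobs_t WHERE JobStatus = 'ToRun');
UPDATE [dbo].[Jobs_t]
SET [JobStatus] = 'Pending'
WHERE ExperimentID = @currentID;
SELECT @currentID AS result;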
I am looking for the easiest way of rebalancing data across multiple files.
Instead of creating a secondary filegroup and then dropping and recreating all indexes in the database which is going to take ages (we have a lot of tables and indexes), I am trying to just add more files to the primary file group and then rebalance data evenly between these.
I guessed that adding the new files to the primary filegroup and then rebuilding all indexes on a table should redistribute the table evenly over these multiple files. This is not the case though. It does rebalance data a bit, but I still end up with the majority on the first file that existed.
I have attached the script I am running, maybe it is something in the create database/file statements that is the issue.
Basically what I am seeing is to start off with the table is 160MB. I then add the file groups and rebuild all indexes on the table. The first file is then about 100MB and each of the three other files are about 20MB. I would expect them all to be the same size.
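For reference, a sketch of the add-files-then-rebuild mechanics (database, file, and table names are placeholders). The catch is SQL Server's proportional fill: new allocations are spread in proportion to the free space in each file, not evenly, so the files only converge in size if their free space is comparable. Pre-sizing the new files and leaving similar free space in the original file gets closer to an even spread:

ALTER DATABASE MyDW ADD FILE
    (NAME = N'MyDW_2', FILENAME = N'D:\Data\MyDW_2.ndf', SIZE = 200MB),
    (NAME = N'MyDW_3', FILENAME = N'D:\Data\MyDW_3.ndf', SIZE = 200MB),
    (NAME = N'MyDW_4', FILENAME = N'D:\Data\MyDW_4.ndf', SIZE = 200MB)
TO FILEGROUP [PRIMARY];

-- Rebuilding reallocates the index's pages across all files in the filegroup
ALTER INDEX ALL ON dbo.BigTable REBUILD;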
I have a two node SQL 2012 AlwaysOn HADR cluster (v11.0.3412) with 4 availability groups configured. The AG groups are set to synchronous mode and the secondary is not readable (we do not want the synchronous replica readable, so we do not risk any reads causing contention, and we maintain fast performance).
On the secondary we are getting a persistent failure with the Data Collector job called Collection_Set_3_Upload. The failure occurs within the second job step. That job step is executing the following command:
dcexec -u -s 3 -i "$(ESCAPE_DQUOTE(MACH))$(ESCAPE_DQUOTE(INST))"

The error message is as follows:

Log Job History (collection_set_3_upload)
Step ID: 2
Server: CLUSTERNODE2
Job Name: collection_set_3_upload
Step Name: collection_set_3_upload_upload
Duration: 00:00:07
[Code] ....
I know I can prevent this error message by enabling readable secondaries, but we do not want this.
I have tried stopping the data collection jobs and purging the cache directory but to no avail. It will succeed the first time then persistently fail again with the same message every time after that.
In addition, if I set the one failing AG group to readable secondary the job succeeds. So that means that 3/4 work fine, only this one is having an issue.
A test installation script I ran on my dev rig seems to have broken the OLE DB Provider for SQL Server. The uninstall must have deleted a DLL or unregistered a lib. When I try to connect to a SQL database from this machine, I now get the following error:
==
Microsoft Data Link Error
Test connection failed because of an error in initializing provider. Unspecified error.
==
It is not the connection string because this was working fine prior to me running the test installation script.
Can you give advice on how to repair the SQL Server data provider for WinXP SP2?
I have tried to re-install MDAC 2.8 with no success due to SP2 disallowing this. I uninstalled SP2 and re-installed MDAC 2.8, but the problem remains.
I'm all out of ideas, so any help would be appreciated.
This is a bit confusing but here goes: I need to access data in SAP via OLE DB. I can't go direct to the back end database (Oracle); we have to use RFC or BAPI calls to access the SAP data. That part works: we have a DLL that accesses the SAP data we need.
Please help: when I access my report and click on the subscription tab, at the top of the page I only see the "New Subscription" button. I also tried creating a data-driven subscription in SQL Manager, but it was disabled. Please help me correct this.
OK, so Facebook groups have 100,000s of members. Members can be part of an unlimited number of groups, and a group can have an unlimited number of members.
A comma-delimited string seems absurd. A many-to-many database relationship seems like it won't scale well to the 10s of thousands and 100s of thousands of members (especially if you have 1000-5000 groups). A table for each group would work, but that's a bit over the top in my opinion. An XML file doesn't seem to be any better than the above options.
I am no database guru, but I can't figure out a scalable method of doing this, be it with or without a database. I need something that can support 10 groups that have 20 members each OR 1000 groups with 100,000 members each.
Any help, suggestions, or kicks in the right direction would be most appreciated.
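For what it's worth, a single well-keyed junction table is exactly how this is normally done, and it scales much further than it might seem: 1000 groups x 100,000 members is 100 million narrow rows, and every lookup is an index seek over just one group's or one member's range. A sketch (table and column names are mine):

CREATE TABLE GroupMembers (
    GroupId  INT NOT NULL,
    MemberId INT NOT NULL,
    PRIMARY KEY (GroupId, MemberId)  -- clusters a group's members together
);

-- Covering index for the reverse direction: the groups of a member
CREATE INDEX IX_GroupMembers_Member ON GroupMembers (MemberId, GroupId);

-- All members of one group:
SELECT MemberId FROM GroupMembers WHERE GroupId = 42;

-- All groups one member belongs to:
SELECT GroupId FROM GroupMembers WHERE MemberId = 1001;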
I've been looking at moving one of our processes from Excel (+ VBA) into T-SQL to make life easier, but am stuck.
We have lots of tasks that are assigned to work groups, and we want to distribute the tasks evenly across the work groups. This is a simple task for NTILE. However, when these tasks are no longer required they are removed, which leaves the groups uneven. When new tasks are added we still want to try to keep these groups balanced.
Create table Jobs(jobid int identity(1,1), name varchar(100),Groupid int)
--Existing tasks
Insert into Jobs(name,Groupid) values ('Task1',1)
Insert into Jobs(name,Groupid) values ('Task2',1)
Insert into Jobs(name,Groupid) values ('Task3',1)
Insert into Jobs(name,Groupid) values ('Task4',1)
Insert into Jobs(name,Groupid) values ('Task5',2)
Insert into Jobs(name,Groupid) values ('Task6',2)
Insert into Jobs(name,Groupid) values ('Task6',2)
Insert into Jobs(name,Groupid) values ('Task7',3)
-- New tasks
Insert into Jobs(name) values ('TaskA')
Insert into Jobs(name) values ('TaskB')
Insert into Jobs(name) values ('TaskC')
Insert into Jobs(name) values ('TaskD')
Insert into Jobs(name) values ('TaskE')
Insert into Jobs(name) values ('TaskF')
This gives us 6 unassigned tasks and an uneven group assignment.
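One hedged way to top the groups up without reshuffling what's already assigned: rank a candidate slot for every (group, new task) pair by how full the group would be after taking it, then hand each new task to the next-emptiest slot. A sketch against the Jobs table above (it assumes every group already holds at least one task, since the group list is derived from existing rows):

;WITH Groups AS (
    SELECT Groupid, COUNT(*) AS Cnt
    FROM Jobs
    WHERE Groupid IS NOT NULL
    GROUP BY Groupid
),
NewTasks AS (
    SELECT jobid, ROW_NUMBER() OVER (ORDER BY jobid) AS rn
    FROM Jobs
    WHERE Groupid IS NULL
),
Slots AS (
    -- Taking a group's j-th new slot brings it to Cnt + j rows, so ordering
    -- slots by Cnt + j always serves the currently emptiest group first.
    SELECT g.Groupid,
           ROW_NUMBER() OVER (ORDER BY g.Cnt + n.rn, g.Groupid) AS rn
    FROM Groups g
    CROSS JOIN NewTasks n
)
UPDATE j
SET Groupid = s.Groupid
FROM Jobs j
INNER JOIN NewTasks n ON n.jobid = j.jobid
INNER JOIN Slots s ON s.rn = n.rn;

With the sample data above (group counts 4, 3 and 1), the six new tasks land 1/2/3 across groups 1/2/3, ending at 5, 5 and 4.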
- 500 GB DW
- 5 GB in smaller DBs
- 220 GB TempDB
- 350 GB in log files
My machine is a Fujitsu Primergy with 64 cores (with HT) and 192 GB RAM.
I have several IO locations:
- 540 GB in-server HDD 15k RAID10
- 1 TB HDD 15k RAID10 on SAN (separate controller)
- 2 TB HDD 15k RAID10 on SAN (same controller as below)
- 800 GB SSD RAID10 on SAN (same controller as above)
The data warehouse has 2 fact tables that are absolutely crucial and quite large.
Now I want to organize the DB into several filegroups and put them on different drives. The filegroups I'm thinking of:
- FILEGROUP1: for 1st crucial fact table
- FILEGROUP2: for 2nd crucial fact table
- FILEGROUP3: for tempDB
- FILEGROUP4: for dimensions data
- FILEGROUP5: for the rest of facts data
- FILEGROUP6: for dimensions indexes
- FILEGROUP7: for the rest of facts indexes
- FILEGROUP8: for 1 log file of one smaller DB (it's in full recovery and it's quite large)
- FILEGROUP9: for the rest of log files
- FILEGROUP10: others
How should I organize them across the available drives? I was thinking about something like:
I know that having multiple filegroups on the same drive is pointless regarding performance, but in the future I could actually add some more drives, so I want to separate them now.
Also - how many files per filegroup should I create? I'm considering 1 or 2, except for TempDB, where I am going for 4.
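For the mechanics, a sketch of carving out one of those filegroups (names, paths, and sizes are placeholders):

ALTER DATABASE MyDW ADD FILEGROUP FG_Fact1;

ALTER DATABASE MyDW ADD FILE
    (NAME = N'Fact1_a', FILENAME = N'E:\SAN_SSD\Fact1_a.ndf', SIZE = 50GB),
    (NAME = N'Fact1_b', FILENAME = N'E:\SAN_SSD\Fact1_b.ndf', SIZE = 50GB)
TO FILEGROUP FG_Fact1;

-- Rebuilding the clustered index onto the filegroup moves the whole table
CREATE CLUSTERED INDEX CIX_Fact1 ON dbo.Fact1 (DateKey)
WITH (DROP_EXISTING = ON)
ON FG_Fact1;

One caveat on the list above: tempdb is its own database (its files are moved with ALTER DATABASE tempdb MODIFY FILE, not via a user-DB filegroup), and log files never belong to filegroups - they are placed per database with ADD LOG FILE - so FILEGROUP3, 8 and 9 would really be separate file placements rather than filegroups.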