I have only been working with .NET for about a month now so my knowledge and understanding are pretty low in some areas, including this one.
What I have is a number of webpages that have several RadChart controls on them (8 to 12 charts per page). Each of these charts is bound to a separate SqlDataSource with a variation of the following query:
1 SELECT [QC Shot Logs].penetration, charges.penmra, charges.penmts, [QC Shot Logs].time
2 FROM [QC Shot Logs] INNER JOIN (select top 1 run.runid, run.linerpress#, run.partid
3 from run where run.linerpress# = '1' order by run.[date] desc) as drun ON [QC Shot Logs].RunID = dRun.RunID
4 INNER JOIN Charges ON dRun.PartID = Charges.PartID
5 where [qc shot logs].shottype < 3 order by [qc shot logs].time
Each chart represents the results of a quality control check on explosives that my company uses and puts the data in a visual format so that it is easy to see if the blasts passed or failed inspection. The different variations in the SQL code above are simply a change in line #3: where run.linerpress# = '1', where run.linerpress# = '2', and so on up to '12'.
Data is added and modified in the database quite a bit throughout the day at random times, so I need the charts to refresh their data pretty frequently (done with a RadTimer control). The problem this causes is that I have 12 RadCharts individually bound to 12 SqlDataSources, all of which get refreshed every 30 seconds or so, which takes quite a bit of time. The main quality control computer monitors four webpages at a time: one with 12 charts/sources, and three with 8 charts/sources each. Having 36 charts up at a time, with 36 hits on the database as well, causes a great deal of lag on the system. In addition, there are also two other pages (each with 12 charts/sources) that can be viewed by any number of people at any time, which (obviously) causes more lag on the system.
So, on to my actual questions:
1) Can I use a single SqlDataSource control to gather all of the data I need for the 8-12 RadCharts on a single page? I'm not too well versed in SQL either, so I don't know if I can modify the SQL so that it picks up all of the information I need, or if I can gather the data for the first chart, change a variable to pull the data for the second, change the variable again, etc. (A sketch of one combined query is at the end of this post.)
2) Is there a way to refresh the RadCharts so that the data they display is relatively real time without having to refresh the SQLDataSources as well?
And then, of course, any suggestions at all that you can give me that will help to improve overall performance, or anything of the like, are definitely appreciated.
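For question 1, here is a minimal sketch of one combined query that returns the data for every press at once (it assumes SQL Server 2005 or later for ROW_NUMBER; each chart would then filter on its LinerPress value, for example with a FilterExpression):

SELECT dRun.[linerpress#] AS LinerPress,
       qsl.penetration, c.penmra, c.penmts, qsl.[time]
FROM (
    SELECT r.runid, r.[linerpress#], r.partid,
           ROW_NUMBER() OVER (PARTITION BY r.[linerpress#]
                              ORDER BY r.[date] DESC) AS rn
    FROM run r
) dRun
INNER JOIN [QC Shot Logs] qsl ON qsl.RunID = dRun.runid
INNER JOIN Charges c ON c.PartID = dRun.partid
WHERE dRun.rn = 1
  AND qsl.shottype < 3
ORDER BY dRun.[linerpress#], qsl.[time];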
I was wondering: if you have multiple SqlDataSources on a single .aspx page, does the framework need to open and close a SEPARATE connection for each SqlDataSource? That would make it much less efficient than coding with the ADO.NET objects and just opening and closing the connection once, regardless of how many queries the page needs. Your comments would be appreciated.
Hi to all, I have a very important problem... please help me.
I have a server running Windows Server 2003 (3 GB RAM) with SQL Server 2000 on it. My problem is that sqlservr.exe eats 2 GB of RAM even when nothing is happening on the server. If I restart the SQL Server service everything is fine and it uses only about 30 MB of RAM, but after a few minutes, after I run a simple SELECT or a simple DELETE, simple things, it climbs back to 2 GB. This did not happen until two days ago. The problem is that when it reaches ~2 GB of RAM the machine runs very slowly and I receive the error "Timeout expired".
This problem also happens on my local machine, which runs Windows XP Pro (1 GB RAM) with SQL Server 2000.
I have a large set of data that I need to match against another large set of data. The reference table has 9.8 million rows and my input has 14.6 million rows. I started with a new project. I added my connection, then a task to clear the result table, then my data flow, then my OLE DB source, then my Fuzzy Lookup task, then my SQL Server Destination. I set the connection of my OLE DB source and set the query to pull the data. Then I set the connection of my Fuzzy Lookup task, set the reference table, told it to create a new index (the problem also occurs if I use a generated index), and then set up the matching criteria. Then I set the connection and destination for the SQL Server Destination.
After setting all this up, I hit Run. The thing ran great until ~ 800k rows and then it failed. I ran it several times and it always failed right around 800k with a message saying there was not enough space and then an error with buffers being passed to the Fuzzy Lookup component. I opened Task Manager and watched the resources as it ran and was amazed at what I saw. The Fuzzy Lookup component eats up every bit of Virtual Memory available and when it can't take any more, it errors out. I tried setting the Max Memory setting on the component and it seems to have no effect. I also played with the buffer settings on the data flow task to no avail. I even went as far as to put an identity on my input table and create a function that outputs selects that use a between on the identity to break the data into 600k chunks. I set up a ForEach component and DTS variables, but the Fuzzy Lookup component does not free the VM after the iteration of the ForEach component!
I ended up running each chunk of 600k one at a time. I have to automate this for the future, so I need a solution. Does anyone have an idea for me?
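For reference, a minimal sketch of the chunking idea described above, assuming the input table is dbo.InputData and its identity column is RowID (both hypothetical names); each pass of the loop reads one 600k-row slice:

DECLARE @chunkSize INT, @start INT, @maxId INT;
SET @chunkSize = 600000;
SET @start = 1;

SELECT @maxId = MAX(RowID) FROM dbo.InputData;

WHILE @start <= @maxId
BEGIN
    -- this SELECT would feed one iteration of the ForEach loop / one Fuzzy Lookup pass
    SELECT *
    FROM dbo.InputData
    WHERE RowID BETWEEN @start AND @start + @chunkSize - 1;

    SET @start = @start + @chunkSize;
END;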
We are just finishing our migration to SQL 2012. In our old environment, the instance which held our SharePoint databases also served other applications. We did not experience any performance related issues in the past due to this.
SharePoint basically requires MAXDOP to be 1, which is correct on the old server. Since this configuration may not be ideal for other applications that may be put within our environment, we are entertaining the idea of isolating SharePoint into its own instance, probably on the same box.
My manager wants me to come up with performance trace data to better prove that we need to go this route since we apparently have had issues in the past by blindly following Microsoft's best practices.
1. MAXDOP configuration - I understand this may be a two-pronged approach that would require looking at various execution plans and CPU-related counters in Perfmon. SharePoint likely requires a MAXDOP of 1 due to the nature of the application (lots of concurrent processes). What is the best way to show this need graphically? (A sample sp_configure check is sketched after this list.)
2. Memory configuration for multiple instances - Does the Total Server Memory counter reveal all the memory that a given SQL instance is utilizing? Should I use this counter to identify appropriate min/max memory configurations for multiple instances on a single cluster?
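For reference, a minimal sketch of checking and setting the two instance-level options discussed in items 1 and 2 (the values shown are examples, not recommendations):

-- MAXDOP at the instance level
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';      -- show the current value
EXEC sp_configure 'max degree of parallelism', 1;   -- e.g. the SharePoint instance
RECONFIGURE;

-- Cap memory per instance so two instances on one box do not fight for RAM
EXEC sp_configure 'max server memory (MB)', 8192;   -- example value only
RECONFIGURE;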
The problem with the Perfmon approach is that its scope is limited to just the server. Since our SharePoint environment is currently being shared with other applications, I understand that I may have to utilize DMV statistics to narrow down my analysis.
I am facing a problem in writing the stored procedure for multiple search criteria.
I am trying to write the query in the Procedure as follows
Select * from Car
where (Price = @Price1 or Price = @Price2 or Price = @Price3)
  and (Manufacture = @Manufacture1 or Manufacture = @Manufacture2 or Manufacture = @Manufacture3)
  and (Model = @Model1 or Model = @Model2 or Model = @Model3)
  and (City = @City1 or City = @City2 or City = @City3)
I am not sure of the query, but I am trying to get the list of cars filtered based on the user's input.
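A minimal sketch of the procedure, assuming unused criteria are passed as NULL so each filter only applies when at least one value was supplied (column names are taken from the query above; the parameter types are guesses):

CREATE PROCEDURE dbo.SearchCars
    @Price1 MONEY = NULL, @Price2 MONEY = NULL, @Price3 MONEY = NULL,
    @Manufacture1 VARCHAR(50) = NULL, @Manufacture2 VARCHAR(50) = NULL, @Manufacture3 VARCHAR(50) = NULL,
    @Model1 VARCHAR(50) = NULL, @Model2 VARCHAR(50) = NULL, @Model3 VARCHAR(50) = NULL,
    @City1 VARCHAR(50) = NULL, @City2 VARCHAR(50) = NULL, @City3 VARCHAR(50) = NULL
AS
BEGIN
    SELECT *
    FROM Car
    WHERE (Price IN (@Price1, @Price2, @Price3)
           OR (@Price1 IS NULL AND @Price2 IS NULL AND @Price3 IS NULL))
      AND (Manufacture IN (@Manufacture1, @Manufacture2, @Manufacture3)
           OR (@Manufacture1 IS NULL AND @Manufacture2 IS NULL AND @Manufacture3 IS NULL))
      AND (Model IN (@Model1, @Model2, @Model3)
           OR (@Model1 IS NULL AND @Model2 IS NULL AND @Model3 IS NULL))
      AND (City IN (@City1, @City2, @City3)
           OR (@City1 IS NULL AND @City2 IS NULL AND @City3 IS NULL));
END;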
I am trying to create a report using Reporting Services.
My problem right now is that, the way the table is constructed, I am trying to pull three separate values (one is the number of hours, one is the type of work, and the third is the grade) out of one column and place them in three separate columns in the report.
I can currently get one value, but I'm not sure how to get all of the information I need into a form I can use in my reports.
So far, with what I've been doing in SQL Reporting Services 2005, I love it and have made several reports, but this one has got me stumped.
Any help would be appreciated.
Thanks.
I might not have made my problem quite clear enough. My table has one column labeled value. The value in that table is linked through an ID field to another table where the IDs are broken down so that one ID = number of hours, one ID = grade, and one ID = type of work.
What I'm trying to do is use these IDs to separate the values related to them into three separate columns in a query, for use in Reporting Services to create the report.
As you can see, I'm attempting to change the name of the same column 3 times to reflect the correct information and then link them all to the person, where one person might have several entries in the other fields.
As you can see, I can change the names individually in queries and pull the information separately; it's when I roll them all together that I run into my problem.
Thanks for the suggestions that were made; I apologize for not making the problem clearer.
Here is a copy of what I'm attempting to accomplish. I didn't have it with me last night when posting.
--Pulls the Service Opportunity
SELECT cs.value AS "Service Opportunity"
FROM Cstudent cs
INNER JOIN cattribute ca ON ca.attributeid = cs.attributeid
WHERE ca.name = 'Service Opportunity'
--Pulls the Number of Hours
SELECT cs.value AS 'Number of Hours'
FROM Cstudent cs
INNER JOIN cattribute ca ON ca.attributeid =cs.attributeid
WHERE ca.name ='Num of Hours'
--Pulls the Person Grade Level
SELECT cs.value AS 'Grade'
FROM Cstudent cs
INNER JOIN cattribute ca ON ca.attributeid =cs.attributeid
WHERE ca.name ='Grade'
--Pulls the Person Number, First and Last Name and Grade Level
SELECT s.personnumber, s.lastname, s.firstname, cs.value as "Grade"
FROM student s
INNER JOIN cperson cs ON cs.personid = s.personid
INNER JOIN cattribute ca ON ca.attributeid = cs.attributeid
WHERE cs.value =(SELECT cs.value AS 'Grade'
WHERE ca.attributeid = cs.attributeid AND ca.name='Grade')
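A minimal sketch of pulling all three values into separate columns with one outer join per attribute. It assumes Cstudent carries a personid column that links it back to student; if the link goes through a different column, the join conditions below would need adjusting.

SELECT  s.personnumber,
        s.lastname,
        s.firstname,
        grade.value AS Grade,
        hrs.value   AS [Number of Hours],
        svc.value   AS [Service Opportunity]
FROM student s
LEFT JOIN (SELECT cs.personid, cs.value
           FROM Cstudent cs
           INNER JOIN cattribute ca ON ca.attributeid = cs.attributeid
           WHERE ca.name = 'Grade') grade ON grade.personid = s.personid
LEFT JOIN (SELECT cs.personid, cs.value
           FROM Cstudent cs
           INNER JOIN cattribute ca ON ca.attributeid = cs.attributeid
           WHERE ca.name = 'Num of Hours') hrs ON hrs.personid = s.personid
LEFT JOIN (SELECT cs.personid, cs.value
           FROM Cstudent cs
           INNER JOIN cattribute ca ON ca.attributeid = cs.attributeid
           WHERE ca.name = 'Service Opportunity') svc ON svc.personid = s.personid;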
I have a requirement wherein I have around 15 different flat files; the filenames are fixed, but the folder path can change (I think I should use a variable for the folder path). These 15 files' data should go to their respective tables in the database.
Do I need to create a separate data flow task for each file, or a separate package? In addition to that, for example: while importing product data into the product table, if a product ID already exists, we need to ignore it and load only the new records.
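For the "ignore existing product IDs" part, a minimal sketch assuming the flat file is first loaded into a staging table named ProductStaging (the table and column names are hypothetical):

INSERT INTO Product (ProductID, ProductName)
SELECT s.ProductID, s.ProductName
FROM ProductStaging s
WHERE NOT EXISTS (SELECT 1 FROM Product p WHERE p.ProductID = s.ProductID);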
I am in the process of creating a report, and in it I need ONLY the row groups (parent and child). I have a parent group field called "Dept", and its corresponding field is MacID. I cannot create a child group or column group (because that's not what I want). I am then inserting rows below MacID, and then I toggle the other rows to MacID and MacID to Dept.
I'm trying to create an email report which gives the results of multiple queries from multiple databases in a table format, but I'm trying to find out if there is a simple format I can use. Here is what I've done so far, but I'm having trouble getting it into HTML and also with the database column:
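In case it helps, a minimal sketch of one common pattern for this, assuming Database Mail is configured and the recipient and columns below are just placeholders: build the HTML table string with FOR XML PATH, then send it with msdb.dbo.sp_send_dbmail using @body_format = 'HTML'.

DECLARE @html NVARCHAR(MAX);

SELECT @html =
    N'<table border="1"><tr><th>Database</th><th>State</th></tr>'
  + CAST((SELECT td = d.name, '',
                 td = d.state_desc, ''
          FROM sys.databases AS d
          FOR XML PATH('tr'), TYPE) AS NVARCHAR(MAX))
  + N'</table>';

EXEC msdb.dbo.sp_send_dbmail
     @recipients  = 'dba@example.com',      -- placeholder address
     @subject     = 'Multi-database report',
     @body        = @html,
     @body_format = 'HTML';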
I'm trying to get some XML data into SQL Server, but I ran into a problem when inserting the data (multiple orders with multiple order details) using a single sproc. Is it possible, or do I have to do it some other way?
I simplified my example to this:

-----------------------------
--CREATE PROCEDURE sp_InsertOrders AS
DECLARE @docHandle INT, @xmlDoc VARCHAR(4000), @orderID INT

--DROP TABLE #Orders
CREATE TABLE #Orders
(
    OrderId      SMALLINT IDENTITY(1,1),
    FkCustomerID SMALLINT NOT NULL,
    OrderDate    DATETIME NOT NULL
)

--DROP TABLE #OrderDetails
CREATE TABLE #OrderDetails
(
    OrderDetailsId SMALLINT IDENTITY(1,1),
    FkOrderID      SMALLINT NOT NULL,
    ProductID      SMALLINT NOT NULL,
    UnitPrice      SMALLINT NOT NULL
)

INSERT INTO #Orders (FkCustomerID, OrderDate)
SELECT CustomerID, OrderDate
FROM OpenXML(@docHandle, 'Orders/Order', 3)
     WITH (CustomerID INTEGER, OrderDate DATETIME)

SET @OrderID = @@IDENTITY;

--INSERT INTO #OrderDetails (@OrderID, ProductID, UnitPrice)
SELECT @OrderID AS OrderID, ProductID, UnitPrice
FROM OpenXML(@docHandle, 'Orders/Order/OrderDetails', 3)
     WITH (ProductID INTEGER, UnitPrice INTEGER)
-----------------------------
All orders are inserted first, which makes the use of @@IDENTITY incorrect (it works fine if you insert a single order with multiple order details). Since it has been quite some time since I last worked with SQL, I am not sure if I am doing it the right way... Anybody out there who knows how to solve the problem?
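One possible way around this (a sketch only; it assumes each Order element in the XML carries a unique OrderNumber value, which the simplified example above does not show) is to keep that natural key next to the generated identity and join the details back through it, instead of relying on @@IDENTITY:

ALTER TABLE #Orders ADD OrderNumber INT NULL;  -- hypothetical natural key carried in the XML

INSERT INTO #Orders (FkCustomerID, OrderDate, OrderNumber)
SELECT CustomerID, OrderDate, OrderNumber
FROM OpenXML(@docHandle, 'Orders/Order', 3)
     WITH (CustomerID INTEGER, OrderDate DATETIME, OrderNumber INTEGER)

INSERT INTO #OrderDetails (FkOrderID, ProductID, UnitPrice)
SELECT o.OrderId, d.ProductID, d.UnitPrice
FROM OpenXML(@docHandle, 'Orders/Order/OrderDetails', 3)
     WITH (ProductID   INTEGER,
           UnitPrice   INTEGER,
           OrderNumber INTEGER '../OrderNumber') AS d
INNER JOIN #Orders o ON o.OrderNumber = d.OrderNumber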
I concatenate multiple rows from one table into multiple columns like this:
--Create Table
CREATE TABLE [Person].[Person_1](
    [BusinessEntityID] [int] NOT NULL,
    [PersonType] [nchar](2) NOT NULL,
    [FirstName] [varchar](100) NOT NULL,
    CONSTRAINT [PK_Person_BusinessEntityID_1] PRIMARY KEY CLUSTERED
[Code] ....
This works very well, but I want to concatenate more rows with different [PersonType] values into different columns, and I don't like the overhead of using the same table ([Person_1]) in every subquery. Is there a more elegant way to do this, without using a temp table or something else?
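A minimal sketch of one alternative (it assumes SQL Server 2017 or later for STRING_AGG, and the PersonType codes 'EM' and 'SC' are just example values): aggregate once per PersonType, then pivot the types into columns, so the base table is read only once.

WITH agg AS (
    SELECT PersonType,
           STRING_AGG(FirstName, ', ') AS Names
    FROM [Person].[Person_1]
    GROUP BY PersonType
)
SELECT [EM] AS Employees, [SC] AS StoreContacts
FROM agg
PIVOT (MAX(Names) FOR PersonType IN ([EM], [SC])) AS pv;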
I need to update multiple columns in a table with multiple conditions.
For example, this is my Query
update Table1
set weight = d.weight,
    stateweight = d.stateweight,
    overallweight = d.overallweight
from (select * from table2) d
where table1.state = d.state
  and table1.month = d.month
  and table1.year = d.year
If the row matches all three columns (state, month, year), it should update only the weight column; if it matches (state, year), it should update only the stateweight column; and if it matches (year) only, it should update only the overallweight column.
I can't write an update query for each condition separately because it's a huge select.
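A minimal sketch of folding the three rules into one UPDATE with CASE expressions (it assumes the year alone matches at most one Table2 row per Table1 row; if that isn't true, the join would need tightening):

UPDATE t
SET weight        = CASE WHEN t.state = d.state AND t.month = d.month AND t.year = d.year
                         THEN d.weight ELSE t.weight END,
    stateweight   = CASE WHEN t.state = d.state AND t.year = d.year
                         THEN d.stateweight ELSE t.stateweight END,
    overallweight = CASE WHEN t.year = d.year
                         THEN d.overallweight ELSE t.overallweight END
FROM Table1 AS t
INNER JOIN Table2 AS d ON t.year = d.year;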
I'm trying to create a database that takes specific information from a number of databases on different servers to make some reporting that we have much easier.
I'm pretty new to SQL so I'm not sure of the best way to proceed. I read an article that suggested I use the OPENROWSET command. The problem is, the version of SQL Server that came with one of the programmes we use is limited and will not allow you to turn on the "Ad Hoc Distributed Queries" option, so the SQL statement will not execute.
I'm confused as to why it won't let me connect through ODBC, as I've created a web page that selects data from this database with no problems!
Here is the SQL statement that I've written to make sure it is the correct one (on the msdn library page it said that this was the ODBC connection):
SELECT a.*
FROM OPENROWSET('MSDASQL',
                'DRIVER={SQL Server};SERVER=APPOLOACT7;UID=sa;PWD=***************',
                'SELECT * FROM MDCTestAndDev.dbo.TBL_CONTACT') AS a
I've also created the ODBC connection using the tool under Administrative Tools > Data Sources (ODBC).
Any help would be greatly appreciated (also, any ways of selecting from one database and inserting into another would be helpful).
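If the limited edition allows linked servers, a minimal sketch of the cross-server copy (the local target table dbo.TBL_CONTACT_COPY, and the assumption that its schema matches, are hypothetical):

-- One-time setup of a linked server pointing at the remote box
EXEC sp_addlinkedserver @server = N'APPOLOACT7', @srvproduct = N'SQL Server';

-- Copy the remote rows into a local table using the four-part name
INSERT INTO dbo.TBL_CONTACT_COPY
SELECT *
FROM APPOLOACT7.MDCTestAndDev.dbo.TBL_CONTACT;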
After hitting limitations in the SQL CLR world that bar us from invoking COM objects, we are forced to use Windows services to read the messages off the Service Broker queues. Unfortunately we lose the auto-activation feature of the queues, but we can still read messages and perform the SQL work under one transaction.
We are going to attempt to take N messages simultaneously from the queue, through N instances of a Windows service. If the messages sent to the queue are one message per conversation, will we be able to achieve having N readers take messages off simultaneously?
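For reference, a minimal sketch of the receive each service instance might issue (the queue name dbo.TargetQueue is hypothetical). RECEIVE locks at the conversation-group level, so with one message per conversation, separate readers can dequeue different messages at the same time.

DECLARE @handle UNIQUEIDENTIFIER, @msgType SYSNAME, @body VARBINARY(MAX);

BEGIN TRANSACTION;

-- Block for up to 5 seconds waiting for the next available message
WAITFOR (
    RECEIVE TOP (1)
        @handle  = conversation_handle,
        @msgType = message_type_name,
        @body    = message_body
    FROM dbo.TargetQueue
), TIMEOUT 5000;

-- ... perform the SQL work for the message here, then:

COMMIT TRANSACTION;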
Thank you very much,
Lubomir
P.S. If anyone has a better approach to obtaining the message in "out of SQL" code, or to invoking external code libraries (not assemblies stored in SQL Server), that would be extremely nice to hear. I have thought about invoking a web service through the CLR, but that is probably too much overhead - MSMQ seems much more appealing than a web service.
I have just taken over the job of sorting out a rather poorly designed database. It looks like it was 'upsized' from an Access database to SQL Server. The SQL Server is the 2000 version.
Now I am trying to generate a report of what the students in the database are owing by referencing the Receipt table and then all the available payment methods and allocations. I was wondering if there was any way to sort out data being displayed twice (let me demonstrate).
Note 1: All the tables are linked by a key of ReceiptNo. From what I can see there is a table for every payment type and allocation, but no link between the two other than the receipt number.
Using the query:

SELECT  T_Receipt.ReceiptNo,
        T_cheque.Amount AS Chq_Amount,
        T_credit.Amount AS Cre_Amount,
        StandingOrder.Amount AS Stn_Amount,
        T_BankTransfer.amount AS Bnk_Amount,
        T_cash.TotalAmount AS Cas_Amount,
        T_RentPayment.AmountPayed AS Ren_Paid,
        T_AdminPayment.AmountPaid AS Adm_Paid,
        T_InternetBilling.Total AS Int_Paid,
        T_Utilities.AmountPaid AS Util_Amount,
        T_InvoicePayment.amountPaid AS Inv_Paid,
        T_OtherPayments.paymentAmount AS Oth_Paid,
        T_parkingBill.paymentAmount AS Prk_Paid,
        T_TelephoneBills.TelephoneCredit AS Tel_Paid,
        T_DepositPayment.[Deposit payment] AS Dep_Amount,
        T_Receipt.cancelled AS Canceled,
        T_Receipt.RemittanceReceiptNo AS Rec_Ref,
        T_Receipt.Student
FROM T_Receipt
INNER JOIN T_DepositPayment ON T_Receipt.ReceiptNo = T_DepositPayment.receiptNo
LEFT OUTER JOIN T_RentPayment ON T_Receipt.ReceiptNo = T_RentPayment.RentPaymentNo
LEFT OUTER JOIN StandingOrder ON T_Receipt.ReceiptNo = StandingOrder.ReceiptNo
LEFT OUTER JOIN T_TelephoneBills ON T_Receipt.ReceiptNo = T_TelephoneBills.ReceiptNo
LEFT OUTER JOIN T_parkingBill ON T_Receipt.ReceiptNo = T_parkingBill.ReceiptNo
LEFT OUTER JOIN T_OtherPayments ON T_Receipt.ReceiptNo = T_OtherPayments.ReceiptNo
LEFT OUTER JOIN T_InvoicePayment ON T_Receipt.ReceiptNo = T_InvoicePayment.receiptNo
LEFT OUTER JOIN T_cash ON T_Receipt.ReceiptNo = T_cash.ReceiptNo
LEFT OUTER JOIN T_AdminPayment ON T_Receipt.ReceiptNo = T_AdminPayment.ReceiptNo
LEFT OUTER JOIN T_BankTransfer ON T_Receipt.ReceiptNo = T_BankTransfer.receiptNo
LEFT OUTER JOIN T_Utilities ON T_Receipt.ReceiptNo = T_Utilities.receiptNo
LEFT OUTER JOIN T_credit ON T_Receipt.ReceiptNo = T_credit.ReceiptNo
LEFT OUTER JOIN T_cheque ON T_Receipt.ReceiptNo = T_cheque.ReceiptNo
LEFT OUTER JOIN T_InternetBilling ON T_Receipt.ReceiptNo = T_InternetBilling.ReceiptNo
GROUP BY T_Receipt.Student, T_Receipt.ReceiptNo, T_cheque.Amount, T_credit.Amount,
         StandingOrder.Amount, T_BankTransfer.amount, T_cash.TotalAmount,
         T_AdminPayment.AmountPaid, T_InternetBilling.Total, T_Utilities.AmountPaid,
         T_InvoicePayment.amountPaid, T_OtherPayments.paymentAmount,
         T_parkingBill.paymentAmount, T_TelephoneBills.TelephoneCredit,
         T_Receipt.cancelled, T_Receipt.RemittanceReceiptNo,
         T_DepositPayment.[Deposit payment], T_RentPayment.AmountPayed, T_Receipt.Student
HAVING (T_Receipt.Student LIKE N'06%')
Which gives a result of:
RecNo | Cheque | Deposit
30429 | 250    | 250
30429 | 679.98 | 250
This is fine, but when I do analysis on this it appears as though the student has paid two deposit payments. I was wondering, without querying each table independently from an application, whether there is a criterion I can specify so that I only get one deposit result. So as to say: give me all the payments, but I only want one result from the other tables. I thought about DISTINCT, but that wouldn't work here.
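One pattern that would avoid the duplicated rows (shown here as a sketch for just two of the payment tables, using the names from the query above; the rest would follow the same pattern) is to aggregate each payment table to one row per ReceiptNo before joining:

SELECT  r.ReceiptNo,
        chq.Amount AS Chq_Amount,
        dep.DepositAmount AS Dep_Amount
FROM T_Receipt r
LEFT OUTER JOIN (SELECT ReceiptNo, SUM(Amount) AS Amount
                 FROM T_cheque
                 GROUP BY ReceiptNo) chq ON chq.ReceiptNo = r.ReceiptNo
LEFT OUTER JOIN (SELECT receiptNo, SUM([Deposit payment]) AS DepositAmount
                 FROM T_DepositPayment
                 GROUP BY receiptNo) dep ON dep.receiptNo = r.ReceiptNo
WHERE r.Student LIKE N'06%';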
I am rather new to reporting on SQL Server 2005 so please be patient with me.
I need to create a report that will present system information for a server. The issue I'm having is that the table I have to gather the information from seems to only allow me to pull data from one row at a time.
For example, each row contains a different system part (e.g. RAM), represented by an identifier (1), but I need to list each system part as a column in the report.
The table (System Info) looks like:-
ID | System Part
 1 | RAM
 2 | Disk Drive
10 | CPU
11 | CD ROM
So basically I need it to look like this.
Name | IP        | RAM   | Disk Drive
A    | 127.0.0.1 | 512MB | Floppy
So far my SQL code looks like this for one item:

SELECT [System Part] FROM [System Info] WHERE ID = 1
How would I go about displaying the other system parts as columns, with their info?
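A minimal sketch of turning those rows into columns with conditional aggregation; it assumes the real table also has a ServerName column and a Value column holding each part's detail (e.g. '512MB'), which may not match the actual schema:

SELECT ServerName,
       MAX(CASE WHEN ID = 1  THEN Value END) AS RAM,
       MAX(CASE WHEN ID = 2  THEN Value END) AS [Disk Drive],
       MAX(CASE WHEN ID = 10 THEN Value END) AS CPU,
       MAX(CASE WHEN ID = 11 THEN Value END) AS [CD ROM]
FROM [System Info]
GROUP BY ServerName;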
I have 7 source databases and one target database, all using the same structure. The structure is made of 10 tables, with foreign key constraints.
I need to merge the source databases into the target (which won't have any data before that process, but will already have the correct schema), and to keep the relationships between the records.
I know how to iterate over the source databases (with SMO foreach), but I'd like to know if someone can advise the best copy method for that context in SSIS ? (I don't want to keep the primary keys, but I need to keep the relationships...)
I have a couple of hundred flat files to import into database tables using SSIS.
The files can be divided into groups by the format they use. I understand that I could import each group of files that have a common format at the same time using a Foreach Loop Container.
However, the example for the Foreach Loop Container has multiple files all being imported into the same database table. In my case, each file needs to be imported into a different database table.
Is it possible to import each set of files with the same format into different tables in a simple loop? I can't see a way to make a Data Flow Destination item accept its table name dynamically, which seems to prevent me doing this.
I suppose I could make a different Data Flow Destination item for each file, in the Data Flow. Would that be a reasonable solution, or is there a simpler solution, or should I just resign myself to making a separate Data Flow for every single file?
I have created a single full-text index on col2 & col3. Suppose I want to search for col2='engine' and col3='toyota'; I write the query as:
SELECT TBL.col2, TBL.col3
FROM TBL
INNER JOIN CONTAINSTABLE(TBL, col2, 'engine') TBL1 ON TBL.col1 = TBL1.[key]
INNER JOIN CONTAINSTABLE(TBL, col3, 'toyota') TBL2 ON TBL.col1 = TBL2.[key]
Everything works well if the database is small, but now I have 20 million records in my database. Taking an example: there are 5 million records with col2='engine' and only 1 record with col3='toyota', and it takes a substantial time to find that 1 record.
I was thinking I could address this issue if I merged both columns into a single column, but I cannot figure out what format to save it in so that I can use a query to extract the correct information. For example, I was thinking of concatenating both fields like col4 = ABengineBA + ABBToyotaBBA, and in the search I use:

SELECT TBL.col4
FROM TBL
INNER JOIN CONTAINSTABLE(TBL, col4, ' "ABengineBA" AND "ABBToyotaBBA" ') TBL1
    ON TBL.col1 = TBL1.[key]

Result = 1 row
But it doesn't work in the following scenario: col4 = ABengineBA + ABBCorola ToyotaBBA
SELECT TBL.col4
FROM TBL
INNER JOIN CONTAINSTABLE(TBL, col4, ' "ABengineBA" AND "ABB*ToyotaBBA" ') TBL1
    ON TBL.col1 = TBL1.[key]

Result = 0 rows. Any idea how I can write the second query to get a result?
Hello, I am building a survey application. I have 8 questions: a textbox for the call reference, a dropdown menu to choose the support method, radio button lists for customer satisfaction questions 1-5, and a multiline textbox for other comments. I want to insert the textbox and dropdown menu values into a db table, then insert each question score into a score column, with each question having an ID. I envisage that to do this I will need an insert query for the textbox and dropdownlist, and then an insert for each question based on ID and score. Please help me! Thanks, Andrew
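A minimal sketch of the two inserts described (the table and column names are hypothetical, and the multi-row VALUES form needs SQL Server 2008 or later):

-- One row for the header fields (call reference, support method, comments)
INSERT INTO SurveyResponse (CallReference, SupportMethod, OtherComments)
VALUES (@CallReference, @SupportMethod, @OtherComments);

DECLARE @responseId INT = SCOPE_IDENTITY();

-- One row per question, keyed by question ID and score
INSERT INTO SurveyScore (ResponseID, QuestionID, Score)
VALUES (@responseId, 1, @Q1Score),
       (@responseId, 2, @Q2Score),
       (@responseId, 3, @Q3Score),
       (@responseId, 4, @Q4Score),
       (@responseId, 5, @Q5Score);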
Hello all, I'm trying to request a number of URLs (one for each user) from my database, then place each of these results into separate string variables. I believed that SqlDataReader could do this for me, but I am unsure of how to accomplish this, or if I am walking down the wrong road. The current code is below (the section in question is the location-reading block near the top); please ignore the fact that I'm using MySQL, as the commands work in the same way.

public partial class main : System.Web.UI.Page
{
    String UserName;
    String userId;
    String HiveConnectionString;
    String Current_Location;
    ArrayList Location;
    public String Location1;
    public String Location2;
    public String Location3;
    //Int32 x = 0;

    private void Page_Load(object sender, EventArgs e)
    {
        if (User.Identity.IsAuthenticated)
        {
            UserName = Membership.GetUser().ToString();
            userId = Membership.GetUser().ProviderUserKey.ToString();
            HiveConnectionString = "Database=hive;Data Source=localhost;User Id=hive_admin;Password=West7647";

            using (MySql.Data.MySqlClient.MySqlConnection conn = new MySql.Data.MySqlClient.MySqlConnection(HiveConnectionString))
            {
                // Map Updates
                MySql.Data.MySqlClient.MySqlCommand Locationcmd = new MySql.Data.MySqlClient.MySqlCommand(
                    "SELECT Location FROM tracker WHERE Location = IsOnline = '1'");
                Locationcmd.Parameters.Add("?PKID", MySql.Data.MySqlClient.MySqlDbType.VarChar, 255).Value = userId;
                Locationcmd.Connection = conn;
                conn.Open();
                MySql.Data.MySqlClient.MySqlDataReader LocationReader = Locationcmd.ExecuteReader();
                while (LocationReader.Read())
                {
                    Location1 = LocationReader.GetString(0);
                    //Location2 = LocationReader.GetString(1); // This does not work..
                }
                LocationReader.Close();
                conn.Close();

                // IP Display
                MySql.Data.MySqlClient.MySqlCommand Checkcmd = new MySql.Data.MySqlClient.MySqlCommand(
                    "SELECT UserName FROM tracker WHERE PKID = ?PKID");
                Checkcmd.Parameters.Add("?PKID", MySql.Data.MySqlClient.MySqlDbType.VarChar, 255).Value = userId;
                Checkcmd.Connection = conn;
                conn.Open();
                object UserExists = Checkcmd.ExecuteScalar();
                conn.Close();

                if (UserExists == null)
                {
                    MySql.Data.MySqlClient.MySqlCommand Insertcmd = new MySql.Data.MySqlClient.MySqlCommand(
                        "INSERT INTO tracker (PKID, UserName, IpAddress, IsOnline) VALUES (?PKID, ?Username, ?IpAddress, 1)");
                    Insertcmd.Parameters.Add("?IpAddress", MySql.Data.MySqlClient.MySqlDbType.VarChar, 15).Value = Request.UserHostAddress;
                    Insertcmd.Parameters.Add("?Username", MySql.Data.MySqlClient.MySqlDbType.VarChar, 255).Value = UserName;
                    Insertcmd.Parameters.Add("?PKID", MySql.Data.MySqlClient.MySqlDbType.VarChar, 255).Value = userId;
                    Insertcmd.Connection = conn;
                    conn.Open();
                    Insertcmd.ExecuteNonQuery();
                    conn.Close();
                }
                else
                {
                    MySql.Data.MySqlClient.MySqlCommand Updatecmd = new MySql.Data.MySqlClient.MySqlCommand(
                        "UPDATE tracker SET IpAddress = ?IpAddress, IsOnline = '1' WHERE UserName = ?Username AND PKID = ?PKID");
                    Updatecmd.Parameters.Add("?IpAddress", MySql.Data.MySqlClient.MySqlDbType.VarChar, 15).Value = Request.UserHostAddress;
                    Updatecmd.Parameters.Add("?Username", MySql.Data.MySqlClient.MySqlDbType.VarChar, 255).Value = UserName;
                    Updatecmd.Parameters.Add("?PKID", MySql.Data.MySqlClient.MySqlDbType.VarChar, 255).Value = userId;
                    Updatecmd.Connection = conn;
                    conn.Open();
                    Updatecmd.ExecuteNonQuery();
                    conn.Close();
                }
            }
        }
    }
}

Can anyone advise me on what I should be doing (even if it's just a "you should be using this command") if this is not correct? In fact, any pointers would be nice! Thanks everyone!
I want to search full-text indexes for multiple search terms in multiple columns. The difficulty is: I don't want only the records whose columns contain both search terms. I also want the records where one column contains one of the search terms and another column contains the other.
For example, I search for NETWORK and PERFORMANCE in two columns.

Jobdescr                    | Jobtext
Bad NETWORK PERFORMANCE     | Slow NETWORK browsing in Windows XP
Bad application PERFORMANCE | Because of slow NETWORK browsing, the application runs slow.
I only get the first record, because Jobdescr contains both search terms. I don't get the second record, because neither of its columns contains both search terms.
I managed to find a workaround:
SELECT T3.jobid, T3.jobdescr
FROM (SELECT jobid FROM dba.job
      WHERE CONTAINS(jobdescr, 'network*') OR CONTAINS(jobtext, 'network*')) T1
INNER JOIN (SELECT jobid FROM dba.job
            WHERE CONTAINS(jobdescr, 'performance*') OR CONTAINS(jobtext, 'performance*')) T2
    ON T2.Jobid = T1.Jobid
INNER JOIN (SELECT jobid, jobdescr FROM dba.job) T3
    ON T3.Jobid = T1.Jobid OR T3.Jobid = T2.JobId

It works, but I guess this will result in a heavy database load when the number of search terms and columns increases.
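For what it's worth, CONTAINS also accepts a column list, so a simpler form of the same search (a sketch against the same dba.job table) would be:

SELECT jobid, jobdescr
FROM dba.job
WHERE CONTAINS((jobdescr, jobtext), 'network*')
  AND CONTAINS((jobdescr, jobtext), 'performance*');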
I have a Parent table (Parentid, LastName, FirstName) and a Kids table (Parentid, KidName, Age, Grade, Gender, KidTypeID); each parent will have multiple kids. I need the result as below:
I am trying to restore multiple .bak backup SQL database files onto a new server. However, I have found that it will not allow me to restore multiple databases at once. Is there a way to do this so that I do not have to manually upload one at a time? I tried adding all the .bak files at once to the backup device window but it only did the first one listed. It would be so much easier to restore them all at once so that I do not have to continue this manual process. I am restoring them via device.
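One way around the one-at-a-time GUI restores is to script them, one RESTORE per .bak file (the database names and paths below are hypothetical; MOVE clauses would be needed if the data/log paths differ on the new server):

RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB.bak'
WITH RECOVERY;

RESTORE DATABASE HRDB
FROM DISK = N'D:\Backups\HRDB.bak'
WITH RECOVERY;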
I need to be able to bulk insert a bunch of tables from their corresponding flat file. I have created an XML file (see below) which has the file name/table name pair at each node. I then created a ForEachLoop task and used the Node enumeration type and the following OuterXpathString: ReferenceFiles/File. At this point I get lost. How do I pass the 2 inside node values (file name and table name) to variables which I can then use as expressions for the bulk insert task inside the Foreach?