CASE
WHEN CAST(wo.start_date AS TIME) BETWEEN '00:00:00' AND '00:59:59' THEN 0
WHEN CAST(wo.start_date AS TIME) BETWEEN '01:00:00' AND '01:59:59' THEN 1
WHEN CAST(wo.start_date AS TIME) BETWEEN '02:00:00' AND '02:59:59' THEN 2
WHEN CAST(wo.start_date AS TIME) BETWEEN '03:00:00' AND '03:59:59' THEN 3
WHEN CAST(wo.start_date AS TIME) BETWEEN '04:00:00' AND '04:59:59' THEN 4
... (the remaining WHEN branches continue through hour 23)
The purpose is to take a row and set it to the hour of the day that it occurred in. This works fine; however, I would like to force it to display every hour 0-23, regardless of whether it has a corresponding row.
So, if no row exists for 0, display 0 with null values for the rest of the columns.
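For what it's worth, DATEPART(HOUR, ...) collapses the whole CASE into one expression, and the 0-23 requirement can be met by left-joining the aggregated rows onto a generated list of hours. A minimal sketch, assuming a work-order table wo with a start_date column (the spt_values trick is just a convenient number source on SQL Server):

SELECT h.hr, t.WorkOrders
FROM (SELECT number AS hr
      FROM master..spt_values
      WHERE type = 'P' AND number BETWEEN 0 AND 23) h
LEFT JOIN (SELECT DATEPART(HOUR, wo.start_date) AS hr,
                  COUNT(*) AS WorkOrders
           FROM wo
           GROUP BY DATEPART(HOUR, wo.start_date)) t
       ON t.hr = h.hr
ORDER BY h.hr;

Hours with no rows come back with NULL in the aggregate columns, as requested.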
I have a table that is recording hits to a website. Every time someone views a page, the datetime of the hit is recorded in a field called hit_date_time. I would like to be able to come up with a query that will show how many hits occurred on a given day or given days, broken down by hour.
The resulting table for two days would look something like:

Time          Hits
1/1/01 12:00  1
1/1/01 1:00   23
1/1/01 2:00   54
1/2/01 1:00   15
1/2/01 2:00   14
I can't seem to figure out how to write the query so that I can take into consideration the date and hour of the event so that I can count it.
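A hedged sketch of one way to do this, assuming the hits live in a table called PageHits (the table name is a guess; only hit_date_time is given in the post): group by the date part and the hour part separately.

SELECT CONVERT(varchar(10), hit_date_time, 101) AS HitDate,
       DATEPART(HOUR, hit_date_time) AS HitHour,
       COUNT(*) AS Hits
FROM PageHits
WHERE hit_date_time >= '2001-01-01' AND hit_date_time < '2001-01-03'
GROUP BY CONVERT(varchar(10), hit_date_time, 101), DATEPART(HOUR, hit_date_time)
ORDER BY HitDate, HitHour;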
I have an Itanium 64-bit server to run SSIS packages on. I have one package with three parallel streams. When I run the package in 64-bit mode using dtexec, it runs through validation and exits with no reported errors; when I run it from a job, the job fails and says to see the job log, which has no errors.
When I run it in 32-bit mode using the GUI, it runs all the way through.
Does anyone know how to launch SSIS in 32-bit mode from a job on an Itanium?
This is a really widespread issue, discussed more than once on the SQL CE MSDN forums. Is there any way I can commit changes that happen during runtime (when I am developing the application), such as inserts, updates, and deletes, to the .sdf DB on the machine?
As our DB has no primary keys or indexes, I've taken a copy of all populated tables and tried to force primary keys within a new DB.
The problem is that all of the tables have multiple datasets within them, a dataset for each year. This causes ID numbers to not be unique, as they are replicated for every year they are active.
It's a school database, so a student who has been here for 3 years will have 3 instances of his ID number, one for each year's data set.
So how do I force primary keys if there is no unique identifier? I've been highlighting both the data set and ID columns and setting that combination as the primary key.
Essentially I need to analyse the relationships between the tables in a diagram and also run some speed tests to see how fast the DB works when it has indexes and primary keys.
The reason I'm writing is that I've done this on ten tables, and with another 160 to do I'm just checking that I'm doing the right thing.
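For the record, what's described above amounts to a composite primary key, which in T-SQL is just (table and column names hypothetical):

ALTER TABLE dbo.StudentRecord
ADD CONSTRAINT PK_StudentRecord PRIMARY KEY (DatasetYear, StudentID);

As long as the (year, ID) combination really is unique in every table, this is the standard way to handle it, so repeating the approach across the remaining 160 tables seems reasonable.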
In the following procedure I write the results to a temp table called #TEMP1. I now want to count the rows in #TEMP1; if the count is 0, I want to insert 'No Records Found' into #TEMP1.ERRORMSG, else return what is in the table.
Any idea how to do this?
ALTER PROC [dbo].[SPU_RPT_Savings_AnomalyDispatches]
    -- sample arguments from the original post: 40, '04/01/07|06/30/07'
    @PropertyID varchar(4000),
    @DropDown varchar(50)
AS
SELECT Client.CLIENT, Client.CLIENTID, ErrorEmailLog.ID, ErrorEmailLog.SITEID,
       ErrorEmailLog.PROPID, ErrorEmailLog.DISTINCTERRORS, ErrorEmailLog.ERRORMSG,
       ErrorEmailLog.ERRORDATETIME, ErrorEmailLog.EMAILRECIPIENTS,
       Property.PROPERTY, Property.STREET, Property.CITY, Property.STATE,
       Property.ZIP, Property.PHONE
INTO #TEMP1
FROM ErrorEmailLog
INNER JOIN Property ON ErrorEmailLog.PROPID = Property.PROPID
INNER JOIN Client ON Property.CLIENTID = Client.CLIENTID
WHERE (ErrorEmailLog.ERRORDATETIME BETWEEN SUBSTRING(CONVERT(VARCHAR(12), @DropDown), 0, 9)
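A minimal sketch of the count check, assuming #TEMP1's other columns are nullable (with SELECT INTO they inherit nullability from the source tables, so this may need adjusting):

IF NOT EXISTS (SELECT 1 FROM #TEMP1)
    INSERT INTO #TEMP1 (ERRORMSG) VALUES ('No Records Found');

SELECT * FROM #TEMP1;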
Hello all. Is there any way to force AutoNumber to generate a number before an entire record is created? Some of my forms will not work because they need a number already listed in the index (which uses AutoNumber) and cannot add to the table until it is created. I really need it to have a number ready and waiting upon the last record's completion.
I'm testing an application change that should handle a timeout on a stored procedure being called from the application. The thing is, the timeout that we experience in production, which led to this fix, is random. So is there some way for me to set up a test stored procedure, or some way to call the SP, so that I can test a timeout scenario? In case you were wondering, I'm using MFC and the CDatabase::ExecuteSQL method to call this SP. The app is running locally on the server, which has an instance of SQL Server 2005 Express on it; the server is running Windows 2003.
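One low-tech way to simulate this is a stored procedure that simply sleeps longer than the client-side query timeout. A sketch (the procedure name is made up):

CREATE PROCEDURE dbo.uspForceTimeout
AS
    -- deliberately exceed the client's query timeout
    WAITFOR DELAY '00:01:00';
    SELECT 1 AS Done;
GO

Pointing CDatabase::ExecuteSQL at this instead of the real SP should raise the same timeout error seen in production.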
I have a slight problem: a query that I have written produces data with 2 primary keys the same. However, DISTINCT won't work in this case, as the rows are still different...
Is there a way to force one column to always be unique?
Here's the query:
SELECT TOP 5
    ORDER_ITEM.ItemID AS 'Item ID',
    ITEM.ItemName AS 'Item Name',
    (SELECT SUM(OrdItem2.ItemQuantity)
     FROM ORDER_ITEM OrdItem2
     WHERE OrdItem2.ItemID = ORDER_ITEM.ItemID) AS Total_Purchased,
    SUM(ORDER_ITEM.ItemQuantity) AS 'Customer Purchased',
    CUSTOMER.customerForename AS 'Customer Forename',
    CUSTOMER.customerSurname AS 'Customer Surname'
FROM ITEM, ORDER_ITEM, ORDER_T, CUSTOMER
WHERE ITEM.ItemID = ORDER_ITEM.ItemID
  AND ORDER_ITEM.OrderID = ORDER_T.OrderID
  AND ORDER_T.CustomerID = CUSTOMER.CustomerID
GROUP BY ORDER_ITEM.ItemID, ITEM.ItemName, CUSTOMER.customerForename, CUSTOMER.customerSurname
ORDER BY Total_Purchased DESC
The query is supposed to select the top 5 products sold, as well as the customer who purchased the greatest amount of each item and the amount they purchased.
Currently, I will get 2 duplicate rows (except for the customer's name and the amount they purchased), like this:
ItemID 83630Mathew Smith 8 366Tony Wattage
Which is kinda annoying... is there any way I can prevent this?
And also, apart from the WHERE joins, is there a more efficient way of writing this?
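On the last point, the comma joins can be rewritten as explicit ANSI joins, which at least makes the join conditions harder to get wrong (the optimizer should treat the two forms the same). A sketch of just the FROM/WHERE portion:

FROM ITEM
INNER JOIN ORDER_ITEM ON ORDER_ITEM.ItemID = ITEM.ItemID
INNER JOIN ORDER_T ON ORDER_T.OrderID = ORDER_ITEM.OrderID
INNER JOIN CUSTOMER ON CUSTOMER.CustomerID = ORDER_T.CustomerID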
I am developing a simple DB-Library program in C calling SQL Server 2000 on Windows 2003 and NT 4. I have some T-SQL code that checks for the existence of a table, and I want to abort the program if the table doesn't exist. I issue a RAISERROR if the table doesn't exist and then call RETURN. I construct the string using sprintf and pass it to dbfcmd and dbsqlexec. Since the commands work, there is no error to halt the execution of the program. Is there an easy, clean way to force dbsqlexec to fail? Do I need a stored procedure to return an error code and then deal with that? Thanks for any advice, -Gary
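One option is to let the batch raise a high-severity error, since a RAISERROR with severity 16 should surface through DB-Library's error/message handlers, where the program can choose to abort. A sketch of the SQL 2000-era T-SQL (the table name is hypothetical):

IF NOT EXISTS (SELECT * FROM sysobjects WHERE name = 'MyTable' AND xtype = 'U')
BEGIN
    RAISERROR ('Table MyTable does not exist.', 16, 1)
    RETURN
END

Alternatively, a stored procedure that RETURNs a nonzero status code, checked with dbretstatus() after the results are processed, gives a similar effect without relying on the error handler.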
A stored procedure in the cache is automatically recompiled when a table it refers to changes structure. User-defined functions are not. Here's a simplified code sample:
set nocount on
go
create table tmpTest (a int, b int, c int)
insert into tmpTest (a, b, c) values (1, 2, 3)
insert into tmpTest (a, b, c) values (2, 3, 4)
go
if exists (select * from dbo.sysobjects
           where id = object_id(N'[dbo].[fTest]') and xtype in (N'FN', N'IF', N'TF'))
    drop function [dbo].[fTest]
GO
CREATE FUNCTION dbo.fTest (@a int)
RETURNS TABLE
AS
RETURN (SELECT * from tmpTest where a = @a)
GO
select * from fTest(1)
CREATE TABLE dbo.Tmp_tmpTest (
    a int NULL,
    b int NULL,
    d int NULL,
    c int NULL
) ON [PRIMARY]

IF EXISTS (SELECT * FROM dbo.tmpTest)
    EXEC('INSERT INTO dbo.Tmp_tmpTest (a, b, c)
          SELECT a, b, c FROM dbo.tmpTest TABLOCKX')

DROP TABLE dbo.tmpTest

EXECUTE sp_rename N'dbo.Tmp_tmpTest', N'tmpTest', 'OBJECT'
select * from fTest(1)
drop table tmpTest
Running it, the output is:
a           b           c
----------- ----------- -----------
1           2           3
Caution: Changing any part of an object name could break scripts and stored procedures.
The OBJECT was renamed to 'tmpTest'.

a           b           c
----------- ----------- -----------
1           2           NULL
(I know that "select *" is bad, but it's a lot of legacy code that I'm working with here, and that's how it's written.)
The function doesn't detect that the table has changed in structure, or even that there is no longer a dependency on tmpTest. (Appending a column rather than inserting has the same effect, in that only the first 3 columns are returned.)
DBCC FREEPROCCACHE has no effect, not that I really expected it to, but you never know...
Is there any way, other than dropping and recreating, to force a recompilation of a particular function in memory, or perhaps all functions?
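One thing that may be worth trying, on SQL Server 2005 and later: sp_refreshsqlmodule refreshes the metadata of a non-schema-bound module in place, without dropping it.

EXEC sys.sp_refreshsqlmodule N'dbo.fTest';

On SQL Server 2000 I'm not aware of an equivalent short of dropping and recreating the function.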
Due to a lack of planning during an Active Directory migration last year, I'm now stuck with an immutable service master key on one of my production servers. Since I'm posting here, I guess it's obvious that we have no backup from which to restore. The account that all of the SQL services used to run under no longer exists, so the WITH OLD_ACCOUNT workaround is not viable. And REGENERATE fails, as expected, with Msg 15329, Level 16, State 2, Line 2, "The current master key cannot be decrypted..."
After some research, including several of Laurentiu's blog entries, it seems that my only path at this point is to use the FORCE option with REGENERATE. (And then to immediately back up the service master key to several geographically disparate locations!)
Considering that:
We aren't actively using any of SQL Server's encryption capabilities; the closest we come is that one of our legacy applications calls the old PWDENCRYPT() function to hash passwords.
##MS_ServiceMasterKey## is the only record in master.sys.symmetric_keys, and every other database's sys.symmetric_keys view is empty.
What, if anything, am I likely to lose if I ALTER SERVICE MASTER KEY FORCE REGENERATE? My understanding is that since we don't have any database master keys and aren't using encryption, there's no real potential for corruption or loss. However, I want to be a little more confident about this before I give it a go.
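For reference, the command sequence would presumably be the following (the backup path and password are placeholders):

ALTER SERVICE MASTER KEY FORCE REGENERATE;

BACKUP SERVICE MASTER KEY
    TO FILE = 'D:\Backups\ServiceMasterKey.bak'   -- hypothetical path
    ENCRYPTION BY PASSWORD = '<strong password here>';

With no database master keys and no symmetric keys beyond ##MS_ServiceMasterKey##, the main things typically protected by the service master key are linked server passwords and credential secrets, so those would be what to re-check after a forced regenerate.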
I'm trying to do an interactive sort. One of the rows returned from my datasource, called 'Total', I wish to always display at the bottom. Is there a way I can do this?
I've tried the below on the column header, but the total ends up either at the bottom or the top. How can I check whether the ordering is ascending or descending? Then I could swap the 1 and the 2 around.
In a decision tree algorithm, is there a known way to force a branch at the top level? For example, I have 30 known decision patterns that are going to be completely different, and I don't want them to intermingle. I want to force a branch at the top node on one of the 30 patterns so I wouldn't have to create 30 mining models per client.
SELECT acct.USERNAME,
       SUM(trans.CHARGES) - SUM(trans.CREDITS) AS [Charges - Credits],
       MAX(trans.ENDPERIOD) AS [Billed Through],
       acct.FULLNAME, bill.COMPANY, bill.BILLTOCOMPANY, bill.firstname, bill.lastname,
       bill.STREET1, bill.STREET2, bill.CITY, bill.STATE, bill.ZIPCODE, bill.COUNTRY,
       acct.PHONE1, acct.PHONE2, bill.EMAIL, acct.BILLPERIOD, acct.PLAN
FROM TRANS trans, ACCTS acct, BILLING bill
WHERE trans.ACCTNUM = acct.ACCTNUM
  AND bill.ACCTNUM = acct.ACCTNUM
  AND bill.ACCTNUM = trans.ACCTNUM
  AND acct.CLOSED = 0
  AND acct.SUSPENDED = 0
GROUP BY acct.USERNAME, acct.FULLNAME, bill.COMPANY, bill.BILLTOCOMPANY,
         bill.firstname, bill.lastname, bill.STREET1, bill.STREET2, bill.CITY,
         bill.STATE, bill.ZIPCODE, bill.COUNTRY, acct.PHONE1, acct.PHONE2,
         bill.EMAIL, acct.BILLPERIOD, acct.PLAN
HAVING SUM(trans.CHARGES) - SUM(trans.CREDITS) > 0
ORDER BY [Billed Through] DESC
Incorrect syntax near the keyword 'PLAN'.
If I take acct.PLAN out of the SELECT and GROUP BY, it works fine.
I've googled a bit and found the 'EXPLAIN PLAN' command, so I assume the parser is treating PLAN as a keyword and screwing stuff up. I don't get why it'd take it for a keyword instead of a column. How does one select a keyword as a column name? Brackets and single quotes didn't do the trick.
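For what it's worth, PLAN is a reserved keyword in T-SQL, and square brackets are the usual escape; they need to be applied everywhere the column appears, including the GROUP BY. A sketch:

SELECT acct.USERNAME, ..., acct.BILLPERIOD, acct.[PLAN]
FROM ...
GROUP BY acct.USERNAME, ..., acct.BILLPERIOD, acct.[PLAN]

Double quotes also work when SET QUOTED_IDENTIFIER is ON; single quotes never will, since they denote string literals rather than identifiers.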
Hello all. Is it possible to force a delete of a table even when another program is using the DB and still has some view data on that specific table? I know that I can delete tables if another program just has a connection to the DB but is not using the specific table I'd like to delete. Can it be done also on a viewed table?
The principal, partner, and witness have two NICs each (NIC1 and NIC2). I want them to communicate over NIC2 for sending logs and establishing quorum. This will happen in their own private network (say 192.168.1.0/24). The NIC1 in each server will be available for client communication; the domain and clients are in another network (say 10.1.1.0/24).
I am using the same domain account as the SQL Server service account on all three servers.
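An untested sketch of how the mirroring endpoints could be pinned to the private network, assuming 192.168.1.10 is the NIC2 address on one of the servers (repeat with each server's own NIC2 address):

CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (192.168.1.10))
    FOR DATABASE_MIRRORING (ROLE = ALL);

The partner and witness would then also be addressed by their 192.168.1.x addresses, e.g. SET PARTNER = 'TCP://192.168.1.11:5022'.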
I have set the Interactive Height in my SQL report to 11 inches. I have also set a page break to occur after each table group in my report. When I run the report in SQL Reporting Services (or in Visual Studio 2005) the only page breaks that occur are the ones after each group. The Interactive Height setting is not causing page breaks. The first group in my report prints out as 4 pages but is showing up on the html screen as 1 long page. The first page break finally occurs at the end of the first group.
How can I get the Interactive Height to force page breaks within each of my groups? I am using SQL Reporting Services 2005.
I have a need to force a package to run using the 32-bit runtime from the SQL Server Agent. The machine is a x64 unit. I'm having to use an ODBC driver to extract data from our ERP package that will only run in 32 bit. Any help would be appreciated.
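One approach that should work, assuming a 32-bit SSIS runtime is present on the box: call the 32-bit DTExec explicitly from a CmdExec job step. The path below is the SQL Server 2005 default on x64; adjust the version folder and package path to suit:

"C:\Program Files (x86)\Microsoft SQL Server\90\DTS\Binn\DTExec.exe" /FILE "D:\Packages\MyPackage.dtsx"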
I have a statement which compares a smalldatetime column to a literal string, as follows:
sales_date ='21-9-2004 0:0:0.000'
When I run the statement in Query Analyzer, it bombs out with:

Server: Msg 296, Level 16, State 3, Line 1
The conversion of char data type to smalldatetime data type resulted in an out-of-range smalldatetime value.
If I alter the format of the date literal to '2004-09-21 00:00:00' the statement works.
Is there any way of forcing the statement to treat '21-9-2004 0:0:0.000' as '2004-09-21 00:00:00' without modifying the statement itself?
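One session-level option that avoids touching the statement: set the expected day/month/year order before the statement runs (the table name below is hypothetical).

SET DATEFORMAT dmy;
-- '21-9-2004 0:0:0.000' should now convert as 21 September 2004
SELECT * FROM sales WHERE sales_date = '21-9-2004 0:0:0.000';

SET DATEFORMAT only affects the current session, so it would need to be issued by whatever connection runs the statement.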
I have to output as one field the name, mailing address, and phone of each company that we do business with. The output must look as follows:

Company Name
Store number: #####
Street Address 1
Street Address 2
City, State Zip + 4
Phone: (###) ###-####
Each piece of data is a single field and will have to be concatenated into one. How do I force the line breaks where I need them?
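Assuming the concatenation happens in T-SQL, CHAR(13) + CHAR(10) embeds a CR/LF at each break (the table and column names below are made up):

SELECT CompanyName
     + CHAR(13) + CHAR(10) + 'Store number: ' + StoreNumber
     + CHAR(13) + CHAR(10) + Street1
     + CHAR(13) + CHAR(10) + Street2
     + CHAR(13) + CHAR(10) + City + ', ' + State + ' ' + Zip4
     + CHAR(13) + CHAR(10) + 'Phone: ' + Phone AS AddressBlock
FROM dbo.Company;

Note that the displaying control has to honor the line breaks; an SSRS textbox does, as long as it can grow.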
Also, each piece of data will be compared to a "source" for validity, and I will need to mark the individual data pieces red where "wrong". Like so:

VALID                   MY DATA
Company Name            Company Name
Store number: #####     Store number: #####
Street Address 1        Street Address 1
Street Address 2        Street Address 2
City, State Zip + 4     City, State Zip + 4
Phone: (###) ###-####   Phone: (###) ###-####

Any ideas on how to do this? I have used IIF for conditional formatting of field colors in the past, but my experience is that it changes the WHOLE display field, not just the piece of data the IIF is wrapped around.
We run 2014 Enterprise. Can parts of the phrase that goes into a text box be forced to go to designated lines in that box? For instance, if I have the phrase aaaaa bbbbb ccccc ddddd going into my text box, can I force a carriage return after each word (e.g. aaaaa) within the textbox?
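If the value can be prepared in the dataset query, one hedged option is to swap the separators for CR/LF in T-SQL:

SELECT REPLACE('aaaaa bbbbb ccccc ddddd', ' ', CHAR(13) + CHAR(10)) AS Phrase;

The textbox then renders each word on its own line, provided CanGrow is enabled.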
I'm starting out with Visual Studio 2008 and SQL Server CE and have come unstuck at the first hurdle.
I have performed the simple steps below:
1. I created a table with an identity field in it. The identity increment is a positive value (1). This works fine and I can populate data to the table directly.
2. I drag the table adapter from the dataset to a form to create a datagrid form.
When I run this form, the field for the identity column populates with negative values, and when I go to save the record the application errors.
Note: this only happens with SQL Server CE. I do exactly the same with a server-based DB and get positive increments via the form.
The simple-ugly solution is to just set a negative increment in the table to match what is going on via the form, but this strikes me as a cop-out. It is most likely that I am overlooking something obvious, and I would be very appreciative if someone could point me in the right direction.
I’ve been looking at a bug in an application stored proc. This merely inserts 7 million rows from one table into another (empty) table.
INSERT dbo.TradeDataPrimary (Id, SinaiTrade)
SELECT ts.Id, CAST(ts.XmlTrade AS varbinary(MAX))
FROM dbo.TradePrimary ts
Both tables have a clustered index with the same key order (on the first column), but the query optimizer insisted on placing a SORT step in the query plan. This uses a lot of tempdb, which I wanted to avoid.
I discovered that the bug was a data type mismatch. Fixing the bug and/or dropping the clustered index from the target table before the insert resulted in the plan I expected, that is, with no sort.
I was hoping it would run faster but, unfortunately, as can be seen from the new plan below, a serial plan is used. This results in the insert taking nearly three times as long as the original.
I’ve been looking at Paul White’s article: [URL] .... and Adam Machanic’s: [URL] ....
Both are excellent articles, but neither approach (trace flag 8649 or make_parallel()) gives me a parallel plan. Am I missing a trick here? How can I force parallelism? I know there are no features that require a serial zone in the plan; otherwise the plan with the sort would not be parallel.
This is SQL Server 2012 Enterprise 11.0.5522.0, 512 GB of RAM and 16 procs.
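One thing that may be relevant here, hedged since I can't see the plans: prior to SQL Server 2016, the insert operator of an INSERT ... SELECT always runs serially; trace flag 8649 or make_parallel() can at best parallelize the reading side of the plan. For reference, the per-query form of the (undocumented) flag is:

INSERT dbo.TradeDataPrimary (Id, SinaiTrade)
SELECT ts.Id, CAST(ts.XmlTrade AS varbinary(MAX))
FROM dbo.TradePrimary ts
OPTION (QUERYTRACEON 8649);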
In my fact table (month granularity), I've got a calculation defined:
Margin%=DIVIDE(SUM(Margin);SUM(SalesAmount))
How should I define a calculation based on "Margin%" so that if I browse the pivot table by months, it would always show the year-level result instead? My date dimension is also at month granularity, and it has a Year column in it.
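A hedged DAX sketch, assuming the date dimension table is called 'Date' with a Year column (and following the semicolon list separator used in the formula above):

Margin% Year :=
CALCULATE ( [Margin%]; ALL ( 'Date' ); VALUES ( 'Date'[Year] ) )

ALL('Date') removes the month filter and VALUES('Date'[Year]) restores the year, so browsing by month should show the year-level result.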
I'm trying to get a count of Employed and Available contractors per time period, and I have a table of Contracts... something like:
CREATE TABLE empContract (
    empContractID INT IDENTITY(10000,1) PRIMARY KEY,
    StartDate DATE NOT NULL,
    EndDate DATE,
    ContractorAssigned INT,
    FOREIGN KEY (ContractorAssigned) REFERENCES Contractor(ContractorID)
);
I don't think this is possible without the existence of some kind of Calendar table. Given the existence of a calendar table, the query seems really simple - just something like:
SELECT cal.CalendarDate, ec.empContractID
FROM Calendar cal
LEFT JOIN empContract ec
    ON cal.CalendarDate BETWEEN ec.StartDate AND ec.EndDate
The left join forces the existence of all dates in a range (@StartDate and @EndDate), so that when I try to create a graph with counts by day, I don't have any gaps in my time series.
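For completeness, the calendar can also be generated on the fly with a recursive CTE when a permanent table isn't available; a sketch with a hypothetical date range:

DECLARE @StartDate date = '20150101', @EndDate date = '20151231';

WITH Calendar AS (
    SELECT @StartDate AS CalendarDate
    UNION ALL
    SELECT DATEADD(DAY, 1, CalendarDate) FROM Calendar WHERE CalendarDate < @EndDate
)
SELECT cal.CalendarDate, COUNT(ec.empContractID) AS EmployedContractors
FROM Calendar cal
LEFT JOIN empContract ec
    ON cal.CalendarDate BETWEEN ec.StartDate AND ISNULL(ec.EndDate, @EndDate)
GROUP BY cal.CalendarDate
ORDER BY cal.CalendarDate
OPTION (MAXRECURSION 366);

A permanent calendar table is still the better long-term answer, but this removes the gaps in the series either way.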
This is more of a philosophical post, but feedbacks are welcome!
I am working at migrating SQL 2000 DTS packages that pull data from MAS90 via ODBC connections. At first I REALLY tried to learn SSIS, and I hated it, with all the new things one has to do to get a simple import to work. After a while, I began to appreciate some of the new design and the more tiered approach. Indeed, I tried to use SSIS to import the tables and even learned how to overcome the Unicode/non-Unicode conversion errors by using the Import Wizard to do the grunt work.
But today I came across a show stopper: my imports are failing because the source lied about its metadata type and I am getting a "Value too large for output column" error. I tried to recreate the task to no avail. I searched the web and there are very few posts regarding this, and unfortunately I don't have a way to tweak my ODBC connection properties for MAS90 to somehow "fool" SSIS. I finally gave up and migrated the DTS 2000 package instead.
I am not too happy about this solution because I know that, likely as not, Microsoft will discontinue support for such a legacy approach, and then it is more work down the road. I REALLY wanted to do it right, to rebuild it natively in SSIS, but why does SSIS have to make things so hard by enforcing the type checks so tightly? Is it so bad to allow users who know the data better to bypass the validations? We are not working in a perfect Comp Sci 101 world where everything is scrubbed clean; we work in a world of bad, old, malformed data.
If there is a way for me to overcome that "value too large" error, I am all ears. Thank you for reading.
1. I downloaded the ChartFX RS trial and created one chart, and I was able to deploy it to the Report Server, but the problem is the reports are not showing there. I checked the configuration of the .dll files in the help provided by ChartFX but couldn't get anything, so could you help me with this?
2. How do I give a tooltip for the X-axis or Y-axis values in the Report Server? I tried using the chart properties of the .rdl file but didn't understand them. Can you help me with this? And one more: I tried with Dundas too; if I give the tooltip as #valy, then it shows the same thing in the Report Server instead of the values of the Y-axis.
Hello people, I'm migrating an application from ASP to ASP.NET and from Access to MS SQL 2005. In the old DB, there's a table with a column to store the date and a column to store the time. MS SQL only works with date and time together, right? I created a query to get the time from the time column and update the date column, inserting the time. I did something like this:

UPDATE S_ACC_MONEYSET
SET Date_Oper = CONVERT(varchar, Date_Oper, 101) + ' ' + CONVERT(varchar, Time_Oper, 108)

Does anyone know a better way to do that? Any other comments? Thanks!
I have about 23 SQL servers running 6.5 SP3 or SP5a in a 24-hour environment. Most of the activity takes place between 6am and 11pm, with few transactions after 11pm. What is best to do with the main DB's transaction log: have the 'truncate log on checkpoint' option checked, or back up the transaction log a couple of times a week? The database is backed up every 6-8 hours.
Hi all. We have many messages in the SQL Server logs that say 'Log backed up: Database: COM, creation date(time): 2003/03/26(19:41:58), first'. Why does this message appear every hour?
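Almost certainly something, most likely a maintenance plan or a scheduled job, is taking hourly transaction log backups of the COM database. The backup history in msdb should confirm it:

SELECT database_name, type, backup_start_date
FROM msdb.dbo.backupset
WHERE database_name = 'COM' AND type = 'L'   -- 'L' = transaction log backup
ORDER BY backup_start_date DESC;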